Perceptual learning in the identification of lung cancer in chest radiographs.

Cogn Res Princ Implic

Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA.

Published: February 2020

Extensive research has shown that practice yields highly specific perceptual learning of simple visual properties such as orientation and contrast. Does the same kind of learning characterize more complex perceptual skills? Here we investigated perceptual learning of complex medical images. Novices underwent training over four sessions to discriminate which of two chest radiographs contained a tumor and to indicate the location of the tumor. In training, one group received six repetitions of 30 normal/abnormal images, and the other received three repetitions of 60 normal/abnormal images. Both groups were then tested on trained and novel images. To assess the nature of the perceptual learning, test items were presented in three formats: the full image, the cutout of the tumor, or the background only. Performance improved across training sessions, and, notably, the improvement transferred to the classification of novel images. Training with more repetitions on fewer images yielded transfer comparable to training with fewer repetitions on more images. Little transfer to novel images occurred when participants were tested with just the cutout of the cancer region or just the background, but a larger cutout that included the cancer region and some surrounding regions yielded good transfer. Perceptual learning thus contributes to the acquisition of expertise in cancer image perception.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6997313
DOI: http://dx.doi.org/10.1186/s41235-020-0208-x

Publication Analysis

Top Keywords

perceptual learning (20); novel images (12); chest radiographs (8); images (8); training sessions (8); repetitions normal/abnormal (8); normal/abnormal images (8); cancer region (8); perceptual (6); learning (5)

Similar Publications

The classical view is that perceptual attunement to the native language, which emerges by 6-10 months, developmentally precedes phonological feature abstraction abilities. That assumption is challenged by findings from adults who were adopted into a new language environment at 3-5 months, which imply that they had already formed phonological feature abstractions about their birth language prior to 6 months. As phonological feature abstraction had not been directly tested in infants, we examined 4-6-month-olds' amodal abstraction of the labial versus coronal place of articulation distinction between consonants.


Predicting image memorability from evoked feelings.

Behav Res Methods

January 2025

Department of Psychology, Columbia University, New York, NY, USA.

While viewing a visual stimulus, we often cannot tell whether it is inherently memorable or forgettable. However, the memorability of a stimulus can be quantified and partially predicted by a collection of conceptual and perceptual factors. Higher-level properties that represent the "meaningfulness" of a visual stimulus to viewers best predict whether it will be remembered or forgotten across a population.


Background: Alterations in sensory perception, a core phenotype of autism, are attributed to imbalanced integration of sensory information and prior knowledge during perceptual statistical (Bayesian) inference. This hypothesis has gained momentum in recent years, partly because it can be implemented both at the computational level, as in Bayesian perception, and at the level of canonical neural microcircuitry, as in predictive coding. However, empirical investigations have yielded conflicting results, and the evidence remains limited.


Listeners can use both lexical context (i.e., lexical knowledge activated by the word itself) and lexical predictions based on the content of a preceding sentence to adjust their phonetic categories to speaker idiosyncrasies.


The goal of the present investigation was to perform a registered replication of Jones and Macken's (1995b) study, which showed that segregating a sequence of sounds into distinct locations reduced its disruptive effect on serial recall, thereby postulating an intriguing connection between auditory stream segregation and the cognitive mechanisms underlying the irrelevant speech effect. Specifically, a sequence of changing utterances was found to be less disruptive in stereophonic presentation, which allowed each auditory object (letter) to be allocated to a unique location (right ear, left ear, center), than when the same sounds were played monophonically.

