Cognitive control and social perception both change during adolescence, but little is known about how these two processes interact. We aimed to characterize developmental changes in brain activity related to the influence of a social stimulus on cognitive control, and more specifically on inhibitory control. Children (age 8-11, n = 19), adolescents (age 12-17, n = 20), and adults (age 24-40, n = 19) performed an antisaccade task with either faces or cars as visual stimuli during functional magnetic resonance imaging.
Foveal vision loss has been shown to reduce the efficient guidance of visual search by contextual cueing from incidentally learned contexts. However, previous studies used artificial (T-among-L shape) search paradigms that prevent memorization of a target within a semantically meaningful scene. Here, we investigated contextual cueing in real-life scenes that allow explicit memory of target locations in semantically rich scenes.
Purpose: Search in repeatedly presented visual search displays can benefit from implicit learning of the display items' spatial configuration. This effect has been named contextual cueing. Previously, contextual cueing was found to be reduced in observers with foveal or peripheral vision loss.
Faces are an important source of social signals throughout the lifespan. In adults, they have prioritized access to the orienting system. Here we investigate when this effect emerges during development.
We tested whether high-level athletes or action video game players have superior context-learning skills. Incidental context learning was assessed in a spatial contextual cueing paradigm. We found comparable contextual cueing of visual search in repeated displays across high-level amateur handball players, dedicated action video game players, and normal controls.
Because of the close link between foveal vision and the spatial deployment of attention, typically only objects that have been foveated during scene exploration may form detailed and persistent memory representations. In a recent study of patients suffering from age-related macular degeneration, however, we found surprisingly accurate visual long-term memory for objects in scenes. The patients' exploration patterns suggested that they had learned to re-reference saccade targets to an extrafoveal retinal location.
Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration.
J Exp Psychol Learn Mem Cogn
September 2015
Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations.
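The contextual cueing effect described here is typically quantified as the mean search-time benefit for repeated over novel display configurations. A minimal sketch of that computation, using hypothetical reaction times (the numbers are illustrative and not taken from the study):

```python
def contextual_cueing_effect(rt_repeated, rt_new):
    """Mean search-time benefit (ms) for repeated over novel displays.

    Positive values indicate faster search in repeated contexts,
    i.e., a contextual cueing effect.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_new) - mean(rt_repeated)

# hypothetical per-block mean reaction times in milliseconds
effect = contextual_cueing_effect(rt_repeated=[850, 900, 870],
                                  rt_new=[950, 1000, 980])
```

In designs with simulated central vision loss, this difference can be computed separately for learning and test phases to separate the acquisition of contextual cues from their expression in search guidance.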
Objective: Visual search can be guided by past experience of regularities in our visual environment. This search guidance by contextual memory cues is impaired by foveal vision loss. Here we compared retinal and cortical visually evoked responses in their predictive value for contextual cueing impairment and visual acuity.
We investigated the neural basis of conjoint processing of color and spatial frequency with functional magnetic resonance imaging (fMRI). A multivariate classification algorithm was trained to differentiate between either isolated color or spatial frequency differences, or between conjoint differences in both feature dimensions. All displays were presented in a singleton search task, avoiding confounds between conjunctive feature processing and search difficulty that arose in previous studies contrasting single-feature and conjunction search tasks.
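The logic of multivariate classification on fMRI patterns can be sketched with a simplified nearest-centroid decoder under leave-one-out cross-validation. Everything below is an illustrative assumption — synthetic "voxel" patterns, arbitrary effect sizes, and a deliberately simple classifier rather than the study's actual analysis pipeline:

```python
import numpy as np

def decode_accuracy(patterns, labels):
    """Leave-one-out nearest-centroid decoding of condition labels
    from (here: simulated) voxel response patterns."""
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        # hold out trial i, compute class centroids from the rest
        train = np.delete(patterns, i, axis=0)
        train_y = np.delete(labels, i)
        centroids = {c: train[train_y == c].mean(axis=0)
                     for c in set(train_y.tolist())}
        # classify the held-out pattern by nearest centroid
        pred = min(centroids,
                   key=lambda c: np.linalg.norm(patterns[i] - centroids[c]))
        correct += (pred == labels[i])
    return correct / len(labels)

rng = np.random.default_rng(0)
# synthetic patterns: two conditions differ in mean response across 50 voxels
color = rng.normal(0.5, 1.0, size=(20, 50))
freq = rng.normal(-0.5, 1.0, size=(20, 50))
X = np.vstack([color, freq])
y = np.array(["color"] * 20 + ["freq"] * 20)
acc = decode_accuracy(X, y)
```

Above-chance cross-validated accuracy is the evidence that the patterns carry information about the contrasted conditions; in practice, linear classifiers such as SVMs are more common than a nearest-centroid rule.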
Current models of cognitive control assume gradual adjustment of processing selectivity to the strength of conflict evoked by distractor stimuli. Using a flanker task, we varied conflict strength by manipulating target and distractor onset. Replicating previous findings, flanker interference effects were larger on trials with advance presentation of the flankers than on trials with simultaneous presentation.
Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration of repeated search arrays.
Gaze-contingent displays provide a valuable method in vision research for controlling visual input and investigating its visual and cognitive processing. Although the body of research using gaze-contingent retinal stabilization techniques has grown considerably during the last decade, only a few studies have been concerned with the reliability of the specific real-time simulations applied. Using a Landolt ring discrimination task, we present a behavioral validation of gaze-contingent central scotoma simulation in healthy observers.
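A gaze-contingent central scotoma is simulated by masking a circular region of the display around the current gaze position on every screen refresh. A minimal offline sketch of the masking step (the radius, image size, and background value are arbitrary assumptions; a real setup would redraw the mask from live eye-tracker samples with minimal latency):

```python
import numpy as np

def apply_scotoma(image, gaze_xy, radius):
    """Return a copy of `image` with a circular region around the
    current gaze position replaced by a background value, simulating
    a central scotoma. In a gaze-contingent display this mask would
    be redrawn at every eye-tracker sample."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    inside = (xx - gx) ** 2 + (yy - gy) ** 2 <= radius ** 2
    out = image.copy()
    out[inside] = 0.0  # background (e.g., uniform gray) value
    return out

display = np.ones((200, 200))  # dummy full-field stimulus
occluded = apply_scotoma(display, gaze_xy=(100, 100), radius=30)
```

The behavioral validation described above effectively asks whether observers perform as if such a mask were a real central scotoma — for example, whether discrimination of a Landolt ring fails inside the masked region but survives in the periphery.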
The neural substrate of feature binding is an old, yet still not completely resolved, problem. While patient studies suggest that posterior parietal cortex is necessary for feature binding, imaging evidence has been inconclusive in the past. Those studies compared visual feature and conjunction search to investigate the neural substrate of feature conjunctions.
In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations leads to faster search times in repeated compared to new displays. This contextual cueing is closely linked to the visual exploration of the search arrays, as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma that forces participants to rely on extrafoveal vision.
When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources.