There is growing interest in the visual and attentional processes recruited when human observers view social scenes containing multiple people. Findings from visual search paradigms have helped shape this emerging literature. Previous research has established that, when hidden amongst pairs of individuals facing in the same direction (leftwards or rightwards), pairs of individuals arranged front-to-front are found faster than pairs of individuals arranged back-to-back. Here, we describe a second, closely related effect with important theoretical implications. When searching for a pair of individuals facing in the same direction (leftwards or rightwards), target dyads are found faster when hidden amongst distractor pairs arranged front-to-front than when hidden amongst distractor pairs arranged back-to-back. This distractor arrangement effect was also obtained with target and distractor pairs constructed from arrows and from common objects that cue visuospatial attention. These findings argue against the view that pairs of people arranged front-to-front capture exogenous attention via a domain-specific orienting mechanism. Rather, it appears that salient direction cues (e.g., gaze direction, body orientation, arrows) hamper systematic search and impede efficient interpretation when distractor pairs are arranged back-to-back.
Source (PMC): http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8346951
DOI: http://dx.doi.org/10.1016/j.cognition.2021.104737
Autism Res
December 2024
Department of Psychological Sciences, Birkbeck, University of London, London, UK.
Recent findings obtained with non-autistic participants indicate that pairs of facing individuals (face-to-face dyadic targets) are found faster than pairs of non-facing individuals (back-to-back dyadic targets) when hidden among distractor pairings (e.g., pairs of individuals arranged face-to-back) in visual search displays.
Cortex
December 2024
Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
Background: Language is multimodal and situated in rich visual contexts. Language is also incremental, unfolding moment-to-moment in real time, yet few studies have examined how spoken language interacts with gesture and visual context during multimodal language processing. Gesture is a rich communication cue that is integrally related to speech and often depicts concrete referents from the visual world.
J Exp Child Psychol
January 2025
Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK.
Atten Percept Psychophys
November 2024
Centre for Neuroscience, Indian Institute of Science, Bengaluru, 560012, India.
When a spiky object is occluded, we expect its spiky features to continue behind the occluder. Although many real-world objects contain complex features, it is unclear how more complex features are amodally completed and whether this process is automatic. To investigate this issue, we created pairs of displays with identical contour edges up to the point of occlusion, but with occluded portions exchanged.
Can J Exp Psychol
December 2024
Department of Global Studies, Business School, King Fahd University of Petroleum and Minerals.
Faces and body parts play a crucial role in human social communication, and numerous studies emphasize their significance as sociobiological stimuli in daily interactions. Two experiments were conducted to examine (a) whether faces or body parts are processed more quickly than other visual objects when they are task-relevant and serve as targets, and (b) the effects of presenting faces or body parts as distractors on task reaction times and error rates.