In social interactions, it is highly salient to us where other people are looking. The ability to recover this information is critical to typical social development, helping us to coordinate our attention and behavior with others and understand their intentions and mental states [1-3]. The depth and direction in which another individual is fixating are specified jointly by their head position, eye deviation, and binocular vergence [4, 5]. It is hitherto unknown, however, whether this dynamic visual information about others' focus of attention affects how we ourselves see the world. Here we show that the perceived depth and movement of physical objects in our environment are influenced by others' tracking behavior. This effect occurred even in the presence of conflicting size cues to object location and generalized to the context of apparent motion displays [6] and judgments about causal interactions between moving objects [7]. Perceived object trajectory was modulated primarily by the object-level motion of the tracking agent (e.g., the head), with less-pronounced effects of eye motion and low-level motion. Interestingly, comparable perceptual effects were induced by non-face objects that displayed similar tracking behavior, indicating a mechanism of distal coupling between the motion of the target and an appropriately moving inducer. These results demonstrate that social information can have a fundamental effect on our vision, such that the visual reality constructed in each brain is determined in part by what others see.
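As an aside on how binocular vergence constrains fixation depth, the relation can be sketched geometrically (an illustrative simplification, not taken from the paper, assuming symmetric fixation on the observer's midline and a known interocular distance $I$):

$$
d \;\approx\; \frac{I}{2\,\tan(\alpha/2)},
$$

where $\alpha$ is the vergence angle between the two lines of sight and $d$ is the distance to the fixated point. Under this simplification, larger vergence angles correspond to nearer fixation, which is why an onlooker could in principle recover the depth of another person's gaze target from their eye posture alone.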
DOI: http://dx.doi.org/10.1016/j.cub.2017.06.019
Plant Biol (Stuttg)
January 2025
Department of Behavioral Physiology and Sociobiology, University of Würzburg, Würzburg, Germany.
Nature offers a bewildering diversity of flower colours. Understanding the ecology and evolution of this fantastic floral diversity requires knowledge about the visual systems of flowers' natural observers, such as insect pollinators. The key question is how flower colour and pattern can be measured and represented to characterise the signals that are relevant to pollinators.
J Exp Psychol Gen
January 2025
Department of Experimental Psychology, Helmholtz Institute, Utrecht University.
Predicting the location of moving objects in noisy environments is essential to everyday behavior, such as when participating in traffic. Although many objects provide multisensory information, it remains unknown how humans use multisensory information to localize moving objects, and how this depends on expected sensory interference (e.g. …).
PLoS Comput Biol
January 2025
Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany.
The human visual system possesses a remarkable ability to detect and process faces across diverse contexts, including the phenomenon of face pareidolia: seeing faces in inanimate objects. Despite extensive research, it remains unclear why the visual system employs such broadly tuned face detection capabilities. We hypothesized that face pareidolia results from the visual system's optimization for recognizing both faces and objects.
Infant Behav Dev
January 2025
Universität zu Köln, Richard Strauss Straße 2, Cologne 50931, Germany.
The study examined the saccadic behavior of 4- to 10-month-old infants while they tracked the two-dimensional linear motion of a circle that occasionally bounced off a barrier formed by the screen edges. It was investigated whether infants could anticipate the angle of the circle's direction after the bounce and the circle's displacement from the location of the bounce. Seven bounce types were presented, which differed in the angle of incidence.
Sensors (Basel)
January 2025
The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.
Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.