Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to make an accurate decision. Only recently have studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from the individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings and/or two sounds presented to the right and left ear, based on contrast and loudness, respectively. We varied the evidence, i.e., the contrast of the gratings and the amplitude of the sounds, over time. Results showed a significant increase in accuracy on multisensory trials compared to unisensory trials, indicating that discrimination between two sources improves when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. The weighting of unisensory information during audiovisual decision-making changed dynamically over time: a first epoch was characterized by both visual and auditory weighting, during a second epoch vision dominated, and a third epoch finalized the weighting profile with auditory dominance. Our results suggest that, in our task, multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.
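A temporal weighting profile like the one described above is commonly estimated by regressing trial-by-trial choices onto the time-binned evidence (psychophysical reverse correlation). The sketch below is purely illustrative, not the authors' actual analysis: the simulated observer, the bin count, and all variable names are assumptions, and the "true" early-heavy weights are chosen only to mirror the early-evidence result reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: on each trial, the momentary evidence (e.g., the
# contrast difference between the two gratings) is drawn independently
# in T time bins.
n_trials, T = 5000, 6
evidence = rng.normal(0.0, 1.0, size=(n_trials, T))

# Hypothetical observer that weights early evidence more heavily,
# mirroring the early-weighting result described in the abstract.
true_w = np.linspace(1.5, 0.3, T)
p_right = 1.0 / (1.0 + np.exp(-(evidence @ true_w)))
choice = (rng.random(n_trials) < p_right).astype(float)  # 1 = "right"

# Recover the temporal weights with logistic regression, fit here by
# plain gradient descent so no external ML library is needed.
w = np.zeros(T)
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(evidence @ w)))
    grad = evidence.T @ (p - choice) / n_trials
    w -= lr * grad

# The fitted weights decrease across time bins: early evidence
# contributed most to the simulated decisions.
print(np.round(w, 2))
```

The same regression can be run separately on visual and auditory evidence streams to compare their weighting profiles epoch by epoch.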
DOI: http://dx.doi.org/10.1163/22134808-bja10088
J Exp Psychol Gen
January 2025
Department of Experimental Psychology, Helmholtz Institute, Utrecht University.
Predicting the location of moving objects in noisy environments is essential to everyday behavior, such as when participating in traffic. Although many objects provide multisensory information, it remains unknown how humans use multisensory information to localize moving objects, and how this depends on expected sensory interference (e.g.
Gamma oscillations are disrupted in various neurological disorders, including Alzheimer's disease (AD). In AD mouse models, non-invasive audiovisual stimulation (AuViS) at 40 Hz enhances gamma oscillations, clears amyloid-beta, and improves cognition. We investigated mechanisms of circuit remodeling underlying these restorative effects by leveraging the sensitivity of hippocampal neurogenesis to activity in middle-aged wild-type mice.
A stimulus with light is clearly visual; a stimulus with sound is clearly auditory. But what makes a stimulus "social", and how do judgments of socialness differ across people? Here, we characterize both group-level and individual thresholds for perceiving the presence and nature of a social interaction. We take advantage of the fact that humans are primed to see social interactions-e.
iScience
January 2025
Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland.
Recognizing conspecifics, animals of the same species, and keeping track of changes in the social environment are essential to all animals. While the molecules, circuits, and brain regions that control social behaviors across species are studied in depth, the neural mechanisms that enable the recognition of social cues remain largely obscure. Recent evidence suggests that social cues across sensory modalities converge in a thalamic area conserved across vertebrates.
Front Neurosci
January 2025
Department of Mathematics, University of Antwerp-Interuniversity Microelectronics Centre (imec), Antwerp, Belgium.
Introduction: The study of attention has been pivotal in advancing our comprehension of cognition. The goal of this study is to investigate which EEG data representations or features are most closely linked to attention, and to what extent they can handle cross-subject variability.
Methods: We explore the features obtained from the univariate time series from a single EEG channel, such as time domain features and recurrence plots, as well as representations obtained directly from the multivariate time series, such as global field power or functional brain networks.
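Of the multivariate representations mentioned above, global field power is the simplest: it summarizes a multichannel EEG recording as the standard deviation across channels at each time point. A minimal sketch, assuming a toy 32-channel segment (the array shape and names are illustrative, not this study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multichannel EEG segment: 32 channels x 500 samples.
eeg = rng.normal(size=(32, 500))

# Global field power: standard deviation across channels at each time
# point, collapsing the multivariate signal into a single trace.
gfp = eeg.std(axis=0)

print(gfp.shape)  # one value per time sample
```

Because it is reference-independent and single-trace, global field power is often used as a compact cross-subject feature before fitting classifiers.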