Cross-modal conflicts arise when information from different sensory modalities is incongruent. Most previous studies investigating audiovisual cross-modal conflicts have focused on visual targets with auditory distractors, and only a few have examined auditory targets with visual distractors. Moreover, no study has compared the impact of visual cross-modal conflict involving semantic versus nonsemantic competition, or its neural basis. This cross-sectional study aimed to characterize the impact of 2 types of visual cross-modal conflict, with semantic and nonsemantic distractors, on a working memory task and the associated brain activity. The participants were 33 healthy, right-handed, young male adults. The paced auditory serial addition test was performed under 3 conditions: a no-distractor condition and 2 visual distractor conditions (nonsemantic and semantic). Symbols and numbers were used as the nonsemantic and semantic distractors, respectively. Oxygenated hemoglobin (Oxy-Hb) concentrations in the frontoparietal regions, namely the bilateral ventrolateral prefrontal cortex (VLPFC), dorsolateral prefrontal cortex, and inferior parietal cortex (IPC), were measured during the task under each condition. Paced auditory serial addition test performance was significantly lower in both distractor conditions than in the no-distractor condition, with no significant difference between the 2 distractor conditions. For brain activity, a significantly increased Oxy-Hb concentration in the right VLPFC was observed only in the nonsemantic distractor condition (corrected P = .015; Cohen d = .46). Changes in Oxy-Hb in the bilateral IPC were positively correlated with changes in task performance in both visual cross-modal distractor conditions. Visual cross-modal conflict significantly impairs auditory working memory performance regardless of whether the distractors are semantic or nonsemantic. The right VLPFC may be a crucial region for inhibiting nonsemantic visual information in cross-modal conflict situations, and the bilateral IPC may be closely linked with the inhibition of visual cross-modal distractors, whether semantic or nonsemantic.
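The statistics reported above (a within-condition comparison of Oxy-Hb with a corrected P value and Cohen d, and correlations between Oxy-Hb changes and performance changes) can be illustrated with a minimal sketch. The variable names, the paired t-test, the dz-type effect size, and the Bonferroni-style correction below are assumptions for illustration only, not the authors' exact analysis pipeline.

```python
# Minimal sketch (assumed analysis, not the authors' exact pipeline):
# compare right-VLPFC Oxy-Hb between the nonsemantic-distractor and
# no-distractor conditions, and correlate IPC Oxy-Hb changes with
# changes in PASAT performance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 33  # number of participants

# Placeholder data standing in for measured values (hypothetical).
vlpfc_no_distractor = rng.normal(0.00, 0.05, n)  # Oxy-Hb change, no distractor
vlpfc_nonsemantic = rng.normal(0.02, 0.05, n)    # Oxy-Hb change, nonsemantic distractor
ipc_delta_oxyhb = rng.normal(0.00, 0.05, n)      # IPC Oxy-Hb change between conditions
pasat_delta = rng.normal(0.0, 3.0, n)            # change in PASAT score

# Paired comparison of Oxy-Hb between conditions.
diff = vlpfc_nonsemantic - vlpfc_no_distractor
res = stats.ttest_rel(vlpfc_nonsemantic, vlpfc_no_distractor)
cohens_dz = diff.mean() / diff.std(ddof=1)       # within-subject effect size
p_corrected = min(res.pvalue * 6, 1.0)           # e.g., Bonferroni over 6 tests (assumed)

# Correlation between Oxy-Hb change and performance change.
corr = stats.pearsonr(ipc_delta_oxyhb, pasat_delta)

print(f"paired t = {res.statistic:.2f}, corrected P = {p_corrected:.3f}, d_z = {cohens_dz:.2f}")
print(f"IPC vs PASAT change: r = {corr.statistic:.2f}, P = {corr.pvalue:.3f}")
```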
Full-text sources:
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10980433
- DOI: http://dx.doi.org/10.1097/MD.0000000000030330
Sensors (Basel), January 2025
The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.
Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.
Int J Mol Sci, January 2025
School of Mathematics and Computer Science, Gannan Normal University, Ganzhou 341000, China.
Owing to advances in big data technology, deep learning, and knowledge engineering, biological sequence visualization has been extensively explored. In the post-genome era, it enables the visual representation of both structured and unstructured biological sequence data. However, no universal visualization method for all types of sequences has been reported.
Biol Psychol, January 2025
Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA.
We examined differences in physiological responses to aversive and non-aversive naturalistic audiovisual stimuli and their auditory and visual components within the same experiment. We recorded five physiological measures that have been shown to be sensitive to affect: electrocardiogram, electromyography (EMG) for zygomaticus major and corrugator supercilii muscles, electrodermal activity (EDA), and skin temperature. Valence and arousal ratings confirmed that aversive stimuli were more negative in valence and higher in arousal than non-aversive stimuli.
Biol Psychol, January 2025
Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China.
Audiovisual associative memory and audiovisual integration involve common behavioral processing components and overlap significantly in their neural mechanisms. This suggests that training on audiovisual associative memory may have the potential to improve audiovisual integration. The current study tested this hypothesis using a 2 (group: audiovisual training group, unimodal control group) × 2 (time: pretest, posttest) design; a sketch of how the interaction in such a design can be tested follows below.
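As a minimal sketch of testing the group × time interaction in a 2 × 2 mixed design, the snippet below compares pretest-to-posttest gains between the two groups; the variable names, placeholder scores, and the gain-score t-test (which for a 2 × 2 design is equivalent to the interaction term of a mixed ANOVA) are illustrative assumptions, not the authors' reported analysis.

```python
# Illustrative sketch (assumed analysis): test the group x time interaction
# of a 2 (group) x 2 (time: pretest, posttest) mixed design by comparing
# pretest-to-posttest gains between the two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group = 20  # hypothetical sample size per group

# Placeholder integration scores standing in for measured data (hypothetical).
train_pre = rng.normal(0.50, 0.10, n_per_group)
train_post = rng.normal(0.58, 0.10, n_per_group)
ctrl_pre = rng.normal(0.50, 0.10, n_per_group)
ctrl_post = rng.normal(0.51, 0.10, n_per_group)

gain_train = train_post - train_pre  # change in the audiovisual training group
gain_ctrl = ctrl_post - ctrl_pre     # change in the unimodal control group

# Independent-samples t-test on the gains = group x time interaction test.
res = stats.ttest_ind(gain_train, gain_ctrl)
print(f"group x time interaction: t = {res.statistic:.2f}, P = {res.pvalue:.3f}")
```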
Physiol Behav, January 2025
Department of Anesthesiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan.
Cross-modal interactions between sensory modalities may be necessary for recognizing food being chewed within the oral cavity, which cannot be seen, so as to avoid damaging the tongue and/or oral mucosa. The present study used functional near-infrared spectroscopy to investigate whether food hardness and size influence activity in the posterior parietal cortex and visual cortex during chewing in healthy individuals. An increase in food hardness enhanced activity in both the posterior parietal cortex and the visual cortex, while an increase in food size enhanced activity in the same regions.