Merging information derived from different sensory channels allows the brain to amplify weak signals and reduce their ambiguity, thereby improving the ability to orient to, detect, and identify environmental events. Although multisensory interactions have mostly been ascribed to the activity of higher-order heteromodal areas, multisensory convergence may arise even in primary sensory-specific areas located very early along the cortical processing stream. In three experiments, we investigated early multisensory interactions in lower-level visual areas using a novel approach that couples behavioral stimulation with two noninvasive brain stimulation techniques, namely transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS). First, we showed that redundant multisensory stimuli can increase visual cortical excitability, as measured by phosphene induction with occipital TMS; this physiological enhancement is followed by a behavioral facilitation through the amplification of signal intensity in sensory-specific visual areas. The more sensory inputs are combined (i.e., trimodal vs. bimodal stimuli), the greater the benefit for phosphene perception. Second, neuroelectrical activity changes induced by tDCS over the temporal and parietal cortices, but not over the occipital cortex, can further boost this multisensory enhancement of visual cortical excitability, by increasing the auditory and tactile inputs that temporal and parietal regions, respectively, send to lower-level visual areas.
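As a purely illustrative aside (not the paper's analysis): the benefit of adding sensory channels is usually contrasted with the statistical facilitation expected from independent channels alone, the classic race-model baseline. The Python sketch below uses hypothetical detection rates to show how bimodal and trimodal combinations raise detection probability even without any neural interaction; enhancement beyond this baseline is what motivates looking for genuine integration in visual cortex itself.

```python
import numpy as np

# Illustrative sketch only: probability summation under a simple
# independent-channels (race) model. The paper's phosphene paradigm
# is not reproduced here; the unimodal rates below are hypothetical.

def combined_detection(p_channels):
    """Probability that at least one independent channel detects the event."""
    p = np.asarray(p_channels, dtype=float)
    return 1.0 - np.prod(1.0 - p)

p_visual, p_auditory, p_tactile = 0.40, 0.40, 0.40  # hypothetical unimodal rates

print(f"unimodal : {p_visual:.2f}")                                               # 0.40
print(f"bimodal  : {combined_detection([p_visual, p_auditory]):.2f}")             # 0.64
print(f"trimodal : {combined_detection([p_visual, p_auditory, p_tactile]):.2f}")  # 0.78
```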
DOI: http://dx.doi.org/10.1162/jocn_a_00347
Biomedicines
December 2024
Neuromodulation Center and Center for Clinical Research Learning, Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Boston, MA 02115, USA.
Background/objectives: Haptic technology has transformed interactions between humans and both tangible and virtual environments. Despite its widespread adoption across various industries, the potential therapeutic applications of this technology have yet to be fully explored.
Methods: A systematic review of randomized controlled trials (RCTs) and randomized crossover trials was conducted, utilizing databases such as PubMed, Embase, Cochrane Library, and Web of Science.
Asia Pac J Oncol Nurs
December 2024
College of Nursing, Michigan State University, East Lansing, MI, USA.
Brain Behav
January 2025
Department of General Practice, Yantaishan Hospital Affiliated to Binzhou Medical University, Yantai, China.
Introduction: Persistent postural-perceptual dizziness (PPPD) is the most common chronic functional dizziness encountered in clinical practice. Its main symptoms are unsteadiness, dizziness, or non-spinning vertigo, typically aggravated by upright posture, active or passive movement, and visual stimulation. The pathogenesis of PPPD remains incompletely understood, and it cannot be attributed to any specific anatomical defect within the vestibular system.
Front Hum Neurosci
December 2024
Faculty of Psychology, Universitas Gadjah Mada, Yogyakarta, Indonesia.
The COVID-19 pandemic has highlighted the prevalence of fatigue, reduced interpersonal interaction, and heightened stress in work environments. The intersection of neuroscience and architecture underscores how intricate spatial perceptions are shaped by multisensory stimuli, profoundly influencing workers' wellbeing. In this study, EEG and VR technologies were employed to gather data on perception and cognition.
Cogn Neurodyn
December 2024
School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China.
The integration and interaction of cross-modal senses in brain neural networks can facilitate high-level cognitive functionalities. In this work, we proposed a bioinspired multisensory integration neural network (MINN) that integrates visual and audio senses for recognizing multimodal information across different sensory modalities. This deep learning-based model incorporates a cascading framework of parallel convolutional neural networks (CNNs) for extracting intrinsic features from visual and audio inputs, and a recurrent neural network (RNN) for multimodal information integration and interaction.
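To make the described architecture concrete, below is a minimal sketch of such a parallel-CNN-plus-RNN design in PyTorch. All layer sizes, input shapes (RGB images and single-channel spectrograms), and the fusion scheme are assumptions for illustration, not the authors' published MINN configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch of a MINN-style model as described in the abstract:
# parallel CNNs extract features from visual and audio inputs, and an
# RNN integrates the two modalities. Shapes and sizes are assumptions.

class ModalityCNN(nn.Module):
    """Small CNN feature extractor for one sensory modality."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class MINNSketch(nn.Module):
    """Parallel CNN branches feeding a GRU that fuses the modalities."""
    def __init__(self, n_classes=10, feat_dim=128):
        super().__init__()
        self.visual = ModalityCNN(in_channels=3, feat_dim=feat_dim)  # RGB frames (assumed)
        self.audio = ModalityCNN(in_channels=1, feat_dim=feat_dim)   # spectrograms (assumed)
        self.fusion = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, image, spectrogram):
        # Stack the two modality embeddings as a length-2 "sequence" so the
        # recurrent layer can integrate them.
        seq = torch.stack([self.visual(image), self.audio(spectrogram)], dim=1)
        _, h = self.fusion(seq)
        return self.head(h.squeeze(0))

model = MINNSketch()
logits = model(torch.randn(2, 3, 32, 32), torch.randn(2, 1, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```

Feeding the two modality embeddings to the GRU as a two-step sequence is only one plausible reading of "integration and interaction"; per-timestep or attention-based fusion would be equally consistent with the abstract's description.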