In everyday life, we often must coordinate information across spatial locations and across different senses in order to act. It is well known, for example, that reactions are faster when an imperative stimulus and its required response are spatially congruent than when they are not, even if stimulus location is completely irrelevant to the task (the so-called Simon effect). However, because these effects have typically been investigated in single-modality scenarios, the consequences of spatial congruence when more than one sensory modality is at play are less well understood. Interestingly, at the behavioral level, the visual Simon effect vanishes in mixed (visual and tactile) modality scenarios, suggesting that irrelevant spatial information ceases to influence vision. To shed light on this surprising result, here we examine the expression of irrelevant spatial information in EEG markers typical of the visual Simon effect (P300, theta power modulation, LRP) in mixed-modality contexts. Our results show no evidence that visuospatial information affects performance at either the behavioral or the neurophysiological level. The absence of the neural markers of visual S-R conflict in the mixed-modality scenario implies that some aspects of spatial representations that are strongly expressed in single-modality scenarios may be bypassed.
DOI: http://dx.doi.org/10.1111/ejn.13882
IEEE Trans Comput Aided Des Integr Circuits Syst
November 2024
Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA.
J Imaging
October 2024
School of Information Technology, Sripatum University, Bangkok 10900, Thailand.
This study introduces a novel approach for the diagnosis of Cleft Lip and/or Palate (CL/P) by integrating Vision Transformers (ViTs) and Siamese Neural Networks. Our study is the first to employ this integration specifically for CL/P classification, leveraging the strengths of both models to handle complex, multimodal data and few-shot learning scenarios. Unlike previous studies that rely on single-modality data or traditional machine learning models, we uniquely fuse anatomical data from ultrasound images with functional data from speech spectrograms.
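The Siamese idea described above — two inputs passed through the same shared-weight encoder, with classification based on distance in the embedding space — can be illustrated with a minimal numpy sketch. The linear `embed` function below is a hypothetical stand-in for the ViT encoders in the study; the feature vectors and weights are synthetic, not derived from the ultrasound or spectrogram data.

```python
import numpy as np

def embed(x, W):
    # Stand-in for a ViT encoder: a single linear projection
    # followed by L2 normalization of the embedding.
    z = x @ W
    return z / np.linalg.norm(z)

def siamese_distance(a, b, W):
    # Siamese setup: both inputs pass through the SAME shared-weight
    # encoder; similarity is measured as distance between embeddings.
    return float(np.linalg.norm(embed(a, W) - embed(b, W)))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))            # shared encoder weights
anchor = rng.normal(size=8)            # hypothetical fused feature vector
same_case = anchor + 0.01 * rng.normal(size=8)   # near-duplicate of anchor
other_case = rng.normal(size=8)        # unrelated case

d_same = siamese_distance(anchor, same_case, W)
d_other = siamese_distance(anchor, other_case, W)
```

In a few-shot setting like the one the study targets, a query is assigned the label of the support example with the smallest embedding distance, so matched pairs (`d_same`) should lie closer than unmatched ones (`d_other`).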
Front Robot AI
September 2024
Department of Computer Science, University College London, London, United Kingdom.
Predicting the consequences of an agent's actions on its environment is a pivotal challenge in robot learning, and it plays a key role in developing higher cognitive skills for intelligent robots. While current methods have predominantly relied on vision and motion data to generate predicted videos, more comprehensive sensory perception is required for complex physical interactions such as contact-rich manipulation or highly dynamic tasks. In this work, we investigate the interdependence between vision and tactile sensation in dynamic robotic interaction.
Breast
December 2024
Breast Unit, Champalimaud Clinical Center, Champalimaud Foundation, Lisbon, Portugal.
Purpose: The recently released EANM/SNMMI guideline, endorsed by several important clinical and imaging societies in the field of breast cancer (BC) care (ACR, ESSO, ESTRO, EUSOBI/ESR, EUSOMA), emphasized the role of [18F]FDG PET/CT in the management of patients with no special type (NST) BC. This review identifies and summarizes similarities, discrepancies and novelties of the EANM/SNMMI guideline compared to the NCCN, ESMO and ABC recommendations.
Methods: The EANM/SNMMI guideline was based on a systematic literature search and the AGREE tool.
IEEE Trans Med Robot Bionics
August 2024
Department of Electrical and Computer Engineering, Western University, London, ON, Canada, and Canadian Surgical Technologies and Advanced Robotics (CSTAR), University Hospital, LHSC, London, ON, Canada.
Catheter-based cardiac ablation is a minimally invasive procedure for treating atrial fibrillation (AF). Electrophysiologists perform the procedure under image guidance during which the contact force between the heart tissue and the catheter tip determines the quality of lesions created. This paper describes a novel multi-modal contact force estimator based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).
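The recurrent half of the CNN–RNN estimator described above can be sketched as follows: per-frame features (here random vectors standing in for the outputs of a CNN applied to the imaging stream) are folded through a simple Elman-style recurrence, and the force estimate is read out from the final hidden state. All weights and feature values below are synthetic placeholders, not the study's trained model.

```python
import numpy as np

def rnn_force_estimate(frame_feats, Wx, Wh, w_out):
    # Elman-style recurrence over per-frame features:
    #   h_t = tanh(Wx @ x_t + Wh @ h_{t-1})
    # The scalar force estimate is a linear readout of the last hidden state.
    h = np.zeros(Wh.shape[0])
    for x in frame_feats:
        h = np.tanh(Wx @ x + Wh @ h)
    return float(w_out @ h)

rng = np.random.default_rng(1)
T, d_in, d_h = 10, 6, 4
frames = rng.normal(size=(T, d_in))       # stand-in for CNN features per frame
Wx = 0.3 * rng.normal(size=(d_h, d_in))   # input-to-hidden weights
Wh = 0.3 * rng.normal(size=(d_h, d_h))    # hidden-to-hidden weights
w_out = rng.normal(size=d_h)              # readout weights

force = rnn_force_estimate(frames, Wx, Wh, w_out)
```

The recurrence matters here because contact force depends on the temporal evolution of tissue deformation across frames, not on any single image; in practice the recurrent cell would be an LSTM or GRU trained end-to-end with the CNN.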