The breakdown of the Simon effect in cross-modal contexts: EEG evidence.

Eur J Neurosci

Center for Brain and Cognition, Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Edifici Mercè Rodoreda, carrer Ramon Trias Fargas 25-27, 08005, Barcelona, Spain.

Published: April 2018

In everyday life, we often must coordinate information across spatial locations and different senses for action. It is well known, for example, that reactions are faster when the location of an imperative stimulus is congruent with that of its required response than when it is not, even if stimulus location itself is completely irrelevant to the task (the so-called Simon effect). However, because these effects have frequently been investigated in single-modality scenarios, the consequences of spatial congruence when more than one sensory modality is at play are less well known. Interestingly, at a behavioral level, the visual Simon effect vanishes in mixed (visual and tactile) modality scenarios, suggesting that irrelevant spatial information ceases to exert an influence on vision. To shed light on this surprising result, here we examine the expression of irrelevant spatial information in EEG markers typical of the visual Simon effect (P300, theta power modulation, and the lateralized readiness potential, LRP) in mixed-modality contexts. Our results show no evidence that visual-spatial information affects performance at either the behavioral or the neurophysiological level. The absence of the neural markers of visual stimulus-response (S-R) conflict in the mixed-modality scenario implies that some aspects of spatial representations that are strongly expressed in single-modality scenarios might be bypassed.
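
For readers unfamiliar with how the effect is quantified, the Simon effect is simply the mean reaction-time cost of spatial incongruence. A minimal sketch follows; the `trials` records and their values are hypothetical illustrations, not data from this study:

```python
# Minimal sketch: quantifying a Simon effect as the mean reaction-time (RT)
# cost of spatial incongruence. All trial records below are invented for
# illustration; they are not data from the study.
from statistics import mean

trials = [
    # (stimulus side, response side, RT in ms)
    ("left",  "left",  412),   # congruent
    ("left",  "right", 455),   # incongruent
    ("right", "right", 405),   # congruent
    ("right", "left",  448),   # incongruent
]

congruent   = [rt for stim, resp, rt in trials if stim == resp]
incongruent = [rt for stim, resp, rt in trials if stim != resp]

# A positive difference means slower responses when the irrelevant
# stimulus location conflicts with the required response side.
simon_effect = mean(incongruent) - mean(congruent)
print(f"Simon effect: {simon_effect:.1f} ms")
```

A positive value reproduces the classic congruency cost; according to the abstract, this is the quantity that vanishes for visual stimuli in mixed visual-tactile blocks.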

DOI: http://dx.doi.org/10.1111/ejn.13882

Publication Analysis

Top Keywords

single-modality scenarios (8); visual simon (8); irrelevant spatial (8); spatial (5); breakdown simon (4); simon cross-modal (4); cross-modal contexts (4); contexts eeg (4); eeg evidence (4); evidence everyday (4)

Similar Publications

CHEF: A Framework for Deploying Heterogeneous Models on Clusters with Heterogeneous FPGAs.

IEEE Trans Comput Aided Des Integr Circuits Syst

November 2024

Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA.

Article Synopsis
  • DNNs are transitioning from simple models (single-modality, single-task) to more complex ones (multi-modality, multi-task), which require advanced hardware solutions to handle their varying layers and complex dependencies.
  • Heterogeneous systems are being developed, integrating different accelerators to reduce latency, with FPGAs being a key component due to their high density and configurability for machine-learning tasks.
  • The authors introduce CHEF, a framework that efficiently implements these complex models on heterogeneous FPGA clusters, featuring two main approaches (CHEF-A2F and CHEF-M2A) that significantly reduce latency and search times compared to previous methods; a toy sketch of the underlying mapping problem follows below.
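
The synopsis does not detail how CHEF-A2F and CHEF-M2A search the design space. As a rough intuition for the mapping problem they tackle, here is a toy greedy placement of model layers onto FPGAs; all device names, layer types, and latency numbers are invented for illustration, and this is not the paper's algorithm:

```python
# Toy sketch of the mapping problem CHEF addresses: assigning layers of
# heterogeneous models to heterogeneous FPGAs to reduce latency.
# The greedy heuristic and all cost numbers are invented for illustration;
# this is not the CHEF-A2F/CHEF-M2A search described in the paper.

# Estimated per-layer latency (ms) on each FPGA type (hypothetical).
latency = {
    ("conv", "fpga_hbm"): 1.2, ("conv", "fpga_lut"): 2.0,
    ("attn", "fpga_hbm"): 0.8, ("attn", "fpga_lut"): 3.1,
    ("mlp",  "fpga_hbm"): 1.0, ("mlp",  "fpga_lut"): 1.4,
}

fpgas = ["fpga_hbm", "fpga_lut"]
model = ["conv", "conv", "attn", "mlp"]

# Greedily place each layer on the FPGA whose accumulated load plus this
# layer's cost stays smallest (a crude load-balancing proxy).
load = {f: 0.0 for f in fpgas}
placement = []
for layer in model:
    best = min(fpgas, key=lambda f: load[f] + latency[(layer, f)])
    load[best] += latency[(layer, best)]
    placement.append((layer, best))

print(placement)
print(f"Makespan estimate: {max(load.values()):.1f} ms")
```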

This study introduces a novel approach for the diagnosis of Cleft Lip and/or Palate (CL/P) by integrating Vision Transformers (ViTs) and Siamese Neural Networks. Our study is the first to employ this integration specifically for CL/P classification, leveraging the strengths of both models to handle complex, multimodal data and few-shot learning scenarios. Unlike previous studies that rely on single-modality data or traditional machine learning models, we uniquely fuse anatomical data from ultrasound images with functional data from speech spectrograms.
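
As a loose illustration of the fusion idea only (not the authors' architecture, and omitting the Siamese few-shot component), the sketch below concatenates embeddings from two encoders, one per modality, before a classification head. The tiny CNN encoders stand in for the ViT components, and all layer sizes are assumptions:

```python
# Sketch of multimodal fusion: one encoder per modality (ultrasound image,
# speech spectrogram), embeddings concatenated for classification.
# Encoders and sizes are assumptions for brevity, not the paper's model.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, in_ch: int, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

class MultimodalClassifier(nn.Module):
    def __init__(self, dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.image_enc = TinyEncoder(in_ch=1, dim=dim)   # ultrasound image
        self.audio_enc = TinyEncoder(in_ch=1, dim=dim)   # speech spectrogram
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, img, spec):
        z = torch.cat([self.image_enc(img), self.audio_enc(spec)], dim=-1)
        return self.head(z)

model = MultimodalClassifier()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```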


Predicting the consequences of the agent's actions on its environment is a pivotal challenge in robotic learning, which plays a key role in developing higher cognitive skills for intelligent robots. While current methods have predominantly relied on vision and motion data to generate the predicted videos, more comprehensive sensory perception is required for complex physical interactions such as contact-rich manipulation or highly dynamic tasks. In this work, we investigate the interdependence between vision and tactile sensation in the scenario of dynamic robotic interaction.


Purpose: The recently released EANM/SNMMI guideline, endorsed by several important clinical and imaging societies in the field of breast cancer (BC) care (ACR, ESSO, ESTRO, EUSOBI/ESR, EUSOMA), emphasized the role of [18F]FDG PET/CT in the management of patients with no special type (NST) BC. This review identifies and summarizes similarities, discrepancies, and novelties of the EANM/SNMMI guideline compared to the NCCN, ESMO, and ABC recommendations.

Methods: The EANM/SNMMI guideline was based on a systematic literature search and the AGREE tool.


Machine-Learning-Based Multi-Modal Force Estimation for Steerable Ablation Catheters.

IEEE Trans Med Robot Bionics

August 2024

Department of Electrical and Computer Engineering, Western University, London, ON, Canada, and Canadian Surgical Technologies and Advanced Robotics (CSTAR), University Hospital, LHSC, London, ON, Canada.

Catheter-based cardiac ablation is a minimally invasive procedure for treating atrial fibrillation (AF). Electrophysiologists perform the procedure under image guidance, during which the contact force between the heart tissue and the catheter tip determines the quality of the lesions created. This paper describes a novel multi-modal contact force estimator based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).
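
A minimal sketch of the general CNN + RNN pattern for force estimation appears below; the single-channel image input, layer sizes, and scalar force output are assumptions for brevity, not the paper's actual multi-modal design:

```python
# Sketch of a CNN + RNN contact-force regressor in the spirit of the
# estimator described above: a small CNN embeds each image frame, an LSTM
# integrates the frame sequence, and a linear head predicts a force value.
# All dimensions and the single-camera input are assumptions.
import torch
import torch.nn as nn

class ForceEstimator(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar contact force

    def forward(self, frames):                 # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])           # force at the last time step

model = ForceEstimator()
force = model(torch.randn(2, 8, 1, 32, 32))
print(force.shape)  # torch.Size([2, 1])
```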

