Predicting the sensory consequences of one's own action: First evidence for multisensory facilitation.

Atten Percept Psychophys

Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Rudolf-Bultmann-Straße 8, 35039, Marburg, Germany.

Published: November 2016

AI Article Synopsis

  • The research explores how we predict the sensory outcomes of our actions, focusing on both single and multiple sensory inputs (unimodal and bimodal stimuli) during self-initiated actions.
  • Experiments revealed that participants were better at detecting delays when multiple senses were involved (bimodal) compared to just one (unimodal), especially when the action was self-generated.
  • The findings suggest that our brain uses a "forward model" to anticipate sensory feedback across different modalities, enhancing our ability to process multisensory experiences during active actions.

Article Abstract

Predicting the sensory consequences of our own actions contributes to efficient sensory processing and might help distinguish the consequences of self- versus externally generated actions. Previous research using unimodal stimuli has provided evidence for the existence of a forward model, which explains how such sensory predictions are generated and used to guide behavior. However, whether and how we predict multisensory action outcomes remains largely unknown. Here, we investigated this question in two behavioral experiments. In Experiment 1, we presented unimodal (visual or auditory) and bimodal (visual and auditory) sensory feedback with various delays after a self-initiated buttonpress. Participants had to report whether they detected a delay between their buttonpress and the stimulus in the predefined task modality. In Experiment 2, the sensory feedback and task were the same as in Experiment 1, but in half of the trials the action was externally generated. We observed enhanced delay detection for bimodal relative to unimodal trials, with better performance in general for actively generated actions. Furthermore, in the active condition, the bimodal advantage was largest when the stimulus in the task-irrelevant modality was not delayed (that is, when it was time-contiguous with the action), as compared to when both the task-relevant and task-irrelevant modalities were delayed. This specific enhancement for trials with a nondelayed task-irrelevant modality was absent in the passive condition. These results suggest that a forward model creates predictions for multiple modalities, and consequently contributes to multisensory interactions in the context of action.

Source
http://dx.doi.org/10.3758/s13414-016-1189-1

Publication Analysis

Top Keywords

predicting sensory (8); sensory consequences (8); externally generated (8); generated actions (8); forward model (8); visual auditory (8); sensory feedback (8); task-irrelevant modality (8); sensory (5); consequences one's (4)
