The representation of actions within the action-observation network is thought to rely on a distributed functional organization. Furthermore, recent findings indicate that the action-observation network encodes not merely the observed motor act, but rather a representation that is independent of any specific sensory modality or sensory experience. In the present study, we sought to determine to what extent this distributed and 'more abstract' representation of action is truly supramodal, i.e. shares a common code across sensory modalities. To this end, a pattern recognition approach was employed to analyze neural responses in sighted and congenitally blind subjects during visual and/or auditory presentation of hand-executed actions. Classifiers based on multivoxel pattern analysis (MVPA) discriminated action from non-action stimuli across sensory conditions (visual and auditory) and experimental groups (blind and sighted). Moreover, these classifiers labeled as 'action' the patterns of neural responses evoked during actual motor execution. Interestingly, discriminative information for the action/non-action classification was located in a bilateral, but left-lateralized, network that strongly overlaps with brain regions known to form the action-observation network and the human mirror system. The ability of an MVPA-based classifier to identify action features in both sighted and blind individuals, independently of the sensory modality conveying the stimuli, clearly supports the hypothesis of a supramodal, distributed functional representation of actions, mainly within the action-observation network.
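As an illustration of the cross-modal decoding logic described in the abstract, the minimal sketch below trains a linear classifier on response patterns from one sensory condition and tests it on the other; all arrays, labels, and dimensions are hypothetical placeholders, not the study's data or pipeline.

```python
# Minimal sketch of cross-modal MVPA decoding (hypothetical data;
# variable names and shapes are illustrative, not from the study).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500

# Hypothetical voxel patterns: rows = trials, columns = voxels.
X_visual = rng.standard_normal((n_trials, n_voxels))
X_auditory = rng.standard_normal((n_trials, n_voxels))
# 1 = action stimulus, 0 = non-action stimulus (random placeholders).
y_visual = rng.integers(0, 2, n_trials)
y_auditory = rng.integers(0, 2, n_trials)

# Train on the visual condition, test on the auditory condition:
# above-chance accuracy would indicate a modality-independent code.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_visual, y_visual)
print("cross-modal accuracy:", clf.score(X_auditory, y_auditory))
```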
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3589380 | PMC
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0058632 | PLOS
Neuropsychologia
January 2025
Neuroscience Area, SISSA, Trieste, Italy; Dipartimento di Medicina dei Sistemi, Università di Roma-Tor Vergata, Roma, Italy.
Although gesture observation tasks are believed to invariably activate the action-observation network (AON), we investigated whether engaging different cognitive mechanisms on identical stimuli, through different explicit instructions, modulates AON activation. Accordingly, 24 healthy right-handed individuals observed gestures and processed both the actor's moving hand (hand laterality judgment task, HT) and the meaning of the actor's gesture (meaning task, MT). The main brain-level result was that the HT (vs the MT) differentially activated the left and right precuneus, the left inferior parietal lobe, the left and right superior parietal lobes, the middle frontal gyri bilaterally, and the left precentral gyrus.
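For readers unfamiliar with this kind of task contrast, a minimal sketch of an HT > MT first-level contrast using nilearn is shown below; the image path, event timings, and repetition time are hypothetical placeholders, not the study's actual design or analysis.

```python
# Minimal sketch of a univariate task contrast (HT > MT) with nilearn;
# the BOLD file path and event table are hypothetical placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Toy event table: two trials per task, 4-s blocks (placeholder timings).
events = pd.DataFrame({
    "onset": [0, 10, 20, 30],
    "duration": [4, 4, 4, 4],
    "trial_type": ["HT", "MT", "HT", "MT"],
})

model = FirstLevelModel(t_r=2.0)  # placeholder repetition time
model = model.fit("sub-01_task-gestures_bold.nii.gz", events=events)
zmap = model.compute_contrast("HT - MT")  # HT > MT z-statistic map
```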
Neurobiol Dis
February 2025
Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, 40225 Düsseldorf, Germany.
Corticobasal syndrome (CBS) is characterized not only by parkinsonism but also by higher-order cortical dysfunctions, such as apraxia. However, the electrophysiological mechanisms underlying these symptoms remain poorly understood. To explore the pathophysiology of CBS, we recorded magnetoencephalographic (MEG) data from 17 CBS patients and 20 age-matched controls during an observe-to-imitate task.
Cereb Cortex
December 2024
Department of Psychology, Harvard University, 33 Kirkland Street, Cambridge, MA 02138, United States.
Understanding the goal of an observed action requires computing representations that are invariant to specific instantiations of the action. For example, we can accurately infer the goal of an action even when the agent's desired outcome is not achieved. Observing actions consistently recruits a set of frontoparietal and posterior temporal regions, often labeled the "action observation network."
Res Sq
December 2024
Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA.
We effortlessly extract behaviorally relevant information from dynamic visual input in order to understand the actions of others. In the current study, we developed and tested a number of models to better understand the neural representational geometries supporting action understanding. Using fMRI, we measured brain activity as participants viewed a diverse set of 90 different video clips depicting social and nonsocial actions in real-world contexts.
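As a rough illustration of how candidate models can be tested against neural representational geometries, the sketch below rank-correlates a hypothetical neural representational dissimilarity matrix (RDM) with a hypothetical model RDM; all data, metrics, and dimensions are placeholders, not the study's models.

```python
# Minimal sketch of representational-geometry (RSA-style) model testing;
# all data here are random placeholders, not the study's stimuli.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_videos, n_voxels, n_features = 90, 400, 20

# Hypothetical video-evoked voxel patterns and model feature vectors.
neural_patterns = rng.standard_normal((n_videos, n_voxels))
model_features = rng.standard_normal((n_videos, n_features))

# Condensed RDMs: pairwise distances between the 90 video conditions.
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="euclidean")

# Rank-correlate the two geometries; a higher rho means the model
# better captures the neural representational geometry.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}, p={p:.3g}")
```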