In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information carried in the auditory and visual modalities depends on the same or different brain networks remains largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips of an actor who either produced speech (S, acoustic) or gestures (G, visual) in more (+) or less (-) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G-), while during the acoustic control condition a foreign language was presented (S-). The conjunction of visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and including bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system takes too narrow a view. Our results instead indicate that these regions constitute a supramodal semantic processing network.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3511386
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0051207
