TMS reveals a two-stage priming circuit of gesture-speech integration.

Front Psychol

CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China.

Published: May 2023

Introduction: In natural communication, the multisensory information carried by gesture and speech is integrated to enable coherent comprehension. This cross-modal semantic integration is temporally misaligned, with the onset of gesture preceding the relevant speech segment, and it has therefore been proposed that gestures prime subsequent speech. However, questions remain about the roles the two information sources play in integration and about their time courses.

Methods: In two between-subjects experiments with healthy college students, we segmented the gesture-speech integration period into 40-ms time windows (TWs) based on two separate division criteria, while disrupting activity in the integration nodes, the left posterior middle temporal gyrus (pMTG) and the left inferior frontal gyrus (IFG), with double-pulse transcranial magnetic stimulation (TMS). In Experiment 1, we imposed a fixed time advance of gesture over speech and divided the TWs from the onset of speech. In Experiment 2, we differentiated the processing stages of gesture and speech, segmenting the TWs relative to the speech lexical identification point (IP) while aligning speech onset with the gesture semantic discrimination point (DP).

Results: TW-selective interruption of the pMTG and the IFG was found only in Experiment 2: the pMTG was involved in TW1 (-120 ~ -80 ms relative to the speech IP), TW2 (-80 ~ -40 ms), TW6 (80 ~ 120 ms), and TW7 (120 ~ 160 ms), and the IFG in TW3 (-40 ~ 0 ms) and TW6. No significant disruption of gesture-speech integration was found in Experiment 1.
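For readers reconstructing the window layout of Experiment 2, the seven 40-ms TWs tile the interval from -120 to 160 ms around the speech IP. A minimal Python sketch, illustrative only (the boundaries of TW4 and TW5, not reported in the abstract, are inferred from the numbering above), computes each window:

    # Illustrative sketch: reconstruct the 40-ms time windows (TWs) of
    # Experiment 2, defined relative to the speech lexical identification
    # point (IP). TW4 (0 ~ 40 ms) and TW5 (40 ~ 80 ms) are inferred from
    # the numbering in the Results, not stated in the abstract.
    TW_MS = 40          # window length in ms
    FIRST_START = -120  # TW1 begins 120 ms before the speech IP

    def tw_bounds(n: int) -> tuple[int, int]:
        """Return (start, end) in ms relative to the speech IP for TW n (1-7)."""
        start = FIRST_START + (n - 1) * TW_MS
        return start, start + TW_MS

    for n in range(1, 8):
        start, end = tw_bounds(n)
        print(f"TW{n}: {start} ~ {end} ms")
    # Output runs from "TW1: -120 ~ -80 ms" to "TW7: 120 ~ 160 ms"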

Discussion: We conclude that once the representation of a gesture has been established, gesture-speech integration unfolds in two stages: speech is first primed during a phonological processing stage, and gestures are then unified with speech to form a coherent meaning. Our findings provide new insight into the integration of speech and co-speech gesture by tracking the causal contributions of the two information sources.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10203497
DOI: http://dx.doi.org/10.3389/fpsyg.2023.1156087


Similar Publications

Article Synopsis
  • This study investigates how different human-drone interaction (HDI) methods affect the safety and balance of construction workers in virtual reality.
  • Using VR simulations, it compared gesture-based and speech-based communication with a control group, tracking participants' movements to assess balance.
  • Findings indicate that gesture interaction improved balance and reduced fall risk, while speech interaction increased cognitive load, suggesting a need for further research on balancing physical safety and mental strain in real-world environments.

Background: Pragmatic skills allow children to use language for social purposes, that is, to communicate and interact with people. Most children with neurodevelopmental disorders (NDDs) face pragmatic difficulties during development. Nevertheless, pragmatic skills are often only partially assessed because the existing instruments usually focus on specific aspects of pragmatics and are not always adapted to children with communication difficulties.


The visuo-sensorimotor substrate of co-speech gesture processing.

Neuropsychologia

November 2023

Research Centre for Mind, Brain, and Learning, National Chengchi University, Taipei, Taiwan; Department of Psychology, National Chengchi University, Taipei, Taiwan.

Co-speech gestures are integral to human communication and exhibit diverse forms, each serving a distinct communicative function. However, the existing literature has focused on individual gesture types, leaving a gap in our understanding of how these diverse forms are processed in comparison with one another. To address this, our study investigated the neural processing of two types of iconic gestures, those representing attributes and those representing event knowledge of entity concepts, alongside beat gestures enacting rhythmic manual movements without semantic information, and self-adaptors.



Speech and gesture are two integrated and temporally coordinated systems. Manual gestures can help second language (L2) speakers with vocabulary learning and word retrieval. However, whether the synchronisation of speech and gesture helps listeners compensate for difficulties in processing L2 aural information remains under-investigated.

