This study investigates the synchronization of manual gestures with prosody and information structure using natural Turkish speech data. Prosody has long been linked to gesture as a key driver of gesture-speech synchronization, and gesture has a hierarchical phrasal structure similar to prosody. At the lowest level, gesture has been shown to synchronize with prosody (e.g., apexes with pitch accents), but less is known about higher levels, and even less about timing relationships with information structure, although information structure is signaled by prosody and linked to gesture. The present study analyzed phrase synchronization in 3 hr of Turkish narrations annotated for gesture, prosody, and information structure (topics and foci). The analysis of 515 gesture phrases showed no one-to-one synchronization with intermediate phrases, but their onsets and offsets were synchronized. Moreover, information-structural units (topics and foci) were closely synchronized with gesture-phrase-medial stroke + post-hold combinations (i.e., apical areas). In addition, iconic and metaphoric gestures were more likely to be paired with foci, and deictics with topics. Overall, the results confirm the synchronization of gesture and prosody at the phrasal level and provide evidence that gesture is directly sensitive to information structure. These findings suggest that speech and gesture production are more closely connected than assumed in existing production models.
DOI: http://dx.doi.org/10.1177/00238309231185308
Front Robot AI
December 2024
Robotic Musicianship Lab, Center for Music Technology, Georgia Institute of Technology, Atlanta, GA, United States.
Musical performance relies on nonverbal cues for conveying information among musicians. Human musicians use bodily gestures to communicate their interpretation and intentions to their collaborators, from mood and expression to anticipatory cues regarding structure and tempo. Robotic musicians can use their physical bodies in a similar way when interacting with fellow musicians.
Sensors (Basel)
November 2024
School of Mechanical Engineering, Nantong University, Nantong 226019, China.
Gesture recognition techniques based on surface electromyography (sEMG) signals face instability problems caused by electrode displacement and the time-varying characteristics of the signals in cross-time applications. This study proposes an incremental learning framework based on densely connected convolutional networks (DenseNet) to capture non-synchronous data features and overcome catastrophic forgetting by constructing replay datasets that store data with different time spans and jointly participate in model training. The results show that, after multiple increments, the framework achieves an average recognition rate of 96.
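The replay strategy described in this abstract (storing data from past recording sessions and mixing it into each new round of training so earlier gesture patterns are not forgotten) can be sketched as follows. This is a minimal illustration of the replay idea only; the class name, buffer size, and random-sampling policy are illustrative assumptions, not the paper's actual DenseNet pipeline.

```python
import random


class ReplayIncrementalTrainer:
    """Minimal sketch of replay-based incremental learning: keep a
    stored subset of data from each finished session and train jointly
    on it alongside new-session data, mitigating catastrophic
    forgetting across time-varying sEMG recordings."""

    def __init__(self, replay_per_session=50, seed=0):
        self.replay = []  # stored (features, label) pairs from past sessions
        self.replay_per_session = replay_per_session
        self.rng = random.Random(seed)

    def training_set(self, new_session):
        # Joint training set: new-session data plus replayed old data.
        return list(new_session) + list(self.replay)

    def finish_session(self, session_data):
        # Retain a random subset of this session for future replay.
        k = min(self.replay_per_session, len(session_data))
        self.replay.extend(self.rng.sample(list(session_data), k))
```

A model trained on `training_set(...)` at each increment sees both new and replayed samples, which is the joint-participation scheme the abstract describes; the sampling policy here (uniform random per session) is one simple choice among several.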
J Colloid Interface Sci
February 2025
Key Laboratory of Bionic Engineering (Ministry of Education), College of Biological and Agricultural Engineering, Jilin University, Changchun, Jilin 130022, China. Electronic address:
Electronic skin (e-skin) inspired by the sensory function of the skin demonstrates broad application prospects in health, medicine, and human-machine interaction. Herein, we developed a self-powered all-fiber bio-inspired e-skin (AFBI E-skin) that integrates antifouling, antibacterial, biocompatible, and breathable functions. The AFBI E-skin was composed of three layers of electrospun nanofibrous films.
PLoS One
September 2024
CY Cergy Paris Université - ETIS UMR 8051, Cergy, Pontoise, France.
Conversations encompass continuous exchanges of verbal and nonverbal information. Previous research has demonstrated that gestures dynamically entrain each other and that speakers tend to align their vocal properties. While gesture and speech are known to synchronize at the intrapersonal level, few studies have investigated the multimodal dynamics of gesture/speech between individuals.
IEEE Trans Biomed Circuits Syst
September 2024
Ultrasound-based hand gesture recognition has gained significant attention in recent years. While static gesture recognition has been extensively explored, only a few works have tackled movement regression for real-time tracking, despite its importance for developing natural and smooth interaction strategies. In this paper, we demonstrate the regression of 3 hand-wrist degrees of freedom (DoFs) using a lightweight, A-mode-based, truly wearable ultrasound (US) armband featuring four transducers and WULPUS, an ultra-low-power acquisition device.