Asking children to gesture while being taught a concept facilitates their learning. Here, we investigated whether children benefitted equally from producing gestures that reflected speech (speech-gesture matches) versus gestures that complemented speech (speech-gesture mismatches), when learning the concept of palindromes. As in previous studies, we compared the utility of each gesture strategy to a speech alone strategy. Because our task was heavily based on language ability, we also considered children's phonological competency as a predictor of success at posttest. Across conditions, children who had low phonological competence were equally likely to perform well at posttest. However, gesture use was predictive of learning for children with high phonological competence: Those who produced either gesture strategy during training were more likely to learn than children who used a speech alone strategy. These results suggest that educators should be encouraged to use either speech-gesture match or mismatch strategies to aid learners, but that gesture may be especially beneficial to children who possess basic skills related to the new concept, in this case, phonological competency. Results also suggest that there are differences between the cognitive effects of naturally produced speech-gesture matches and mismatches, and those that are scripted and taught to children.
DOI: http://dx.doi.org/10.1037/a0039471
Front Artif Intell
December 2024
Computer Science Department, Brandeis University, Waltham, MA, United States.
Multimodal dialogue involving multiple participants presents complex computational challenges, primarily due to the rich interplay of diverse communicative modalities including speech, gesture, action, and gaze. These modalities interact in complex ways that traditional dialogue systems often struggle to accurately track and interpret. To address these challenges, we extend the textual enrichment strategy of Dense Paraphrasing (DP) by translating each nonverbal modality into linguistic expressions.
Humans rarely speak without producing co-speech gestures of the hands, head, and other parts of the body. Co-speech gestures are also highly restricted in how they are timed with speech, typically synchronizing with prosodically-prominent syllables. What functional principles underlie this relationship? Here, we examine how the production of co-speech manual gestures influences spatiotemporal patterns of the oral articulators during speech production.
Infancy
December 2024
Centre de Recherche en Psychologie et Neurosciences (CRPN), CNRS, Aix-Marseille Université, Marseille, France.
Speech and co-speech gestures always go hand in hand. Whether we find the precursors of these co-speech gestures in infants before they master their native language still remains an open question. Except for deictic gestures, there is little agreement on the existence of iconic, non-referential and conventional gestures before children start producing their first words.
Cortex
December 2024
Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
Background: Language is multimodal and situated in rich visual contexts. Language is also incremental, unfolding moment-to-moment in real time, yet few studies have examined how spoken language interacts with gesture and visual context during multimodal language processing. Gesture is a rich communication cue that is integrally related to speech and often depicts concrete referents from the visual world.
Autism
October 2024
Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
Our study explored how meaningful hand gestures, alongside spoken words, can help autistic individuals to understand speech, especially when the speech quality is poor, such as when there is a lot of noise around. Previous research has suggested that meaningful hand gestures might be processed differently in autistic individuals, and we therefore expected that these hand gestures might aid them less in understanding speech in adverse listening conditions than they do non-autistic people. To this end, we asked participants to watch and listen to videos of a woman uttering a Dutch action verb.