Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires extending the parallel architecture to include the visual signals that accompany speech. We present evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
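To make the distributional-semantics proposal concrete, here is a minimal sketch (not from the paper) of how a spoken word and a co-speech gesture could both be embedded as points in one shared vector space, where their meanings become directly comparable and composable. The toy vectors, the cosine measure, and composition by averaging are all illustrative assumptions, not the authors' model.

```python
import numpy as np

# Hypothetical toy embeddings: in a shared distributional-semantic space,
# a spoken word and a co-speech gesture are both points in the same
# high-dimensional space, so their meanings can be compared directly.
speech_vec = np.array([0.9, 0.1, 0.3])   # e.g., vector for the spoken word "throw"
gesture_vec = np.array([0.8, 0.2, 0.4])  # e.g., vector for a throwing gesture

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 = same meaning direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A combined multimodal meaning could be approximated by composing the two
# vectors, e.g., by averaging (one simple composition scheme among many).
multimodal_vec = (speech_vec + gesture_vec) / 2

print(f"speech-gesture similarity: {cosine_similarity(speech_vec, gesture_vec):.3f}")
print(f"composed representation:   {multimodal_vec}")
```

On this kind of account, the closer the gesture vector lies to the speech vector, the more redundant the two signals are; a gesture vector pointing in a different direction contributes complementary meaning to the composed representation.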


Source
http://dx.doi.org/10.1111/tops.12728

