Highlighting relevant information in a discourse context is a major aim of spoken language communication. Prosodic cues such as focal prominences are used to fulfill this aim through the pragmatic function of prosody. To determine whether listeners make on-line use of focal prominences to build coherent representations of the informational structure of utterances, we used the event-related brain potential (ERP) method. Short dialogues composed of a question and an answer were presented auditorily. The design of the experiment allowed us to examine precisely the time course of the processing of prosodic patterns on sentence-medial or sentence-final words in the answer. These patterns were either congruous or incongruous with regard to the pragmatic context introduced by the question. Furthermore, the ERP effects were compared for words with or without focal prominences. Results showed that pragmatically congruous and incongruous prosodic patterns elicited clear differences in the ERPs, which were largely modulated in latency and polarity by their position within the answer. By showing that prosodic patterns are processed on-line by listeners in order to understand the informational structure of the message, the present results demonstrate the psychobiological validity of the pragmatic concept of focus, expressed via prosodic cues. Moreover, the functional significance of the positive-going effects found sentence medially and the negative-going effects found sentence finally is discussed. Whereas the former may reflect the processing of surprising and task-relevant prosodic patterns, the latter may reflect the integration problems encountered in extracting the overall informational structure of the sentence.


Source
http://dx.doi.org/10.1162/0898929053747667

Publication Analysis

Top Keywords

prosodic patterns (16)
focal prominences (12)
informational structure (12)
prosodic cues (8)
congruous incongruous (8)
effects sentence (8)
prosodic (6)
patterns (5)
on-line processing (4)
processing "pop-out" (4)

Similar Publications

Affective voice signaling has significant biological and social relevance across various species, and different affective signaling types have emerged through the evolution of voice communication. These types range from basic affective voice bursts and nonverbal affective vocalizations up to affective intonations superimposed on speech utterances in humans in the form of paraverbal prosodic patterns. These different types of affective signaling should have evolved to be acoustically and perceptually distinctive, allowing accurate and nuanced affective communication.


Noncanonical sentence structures pose comprehension challenges because they impose increased cognitive demands. Prosody may partially alleviate this cognitive load. Evidence for this largely stems from behavioral studies, yet physiological measures may reveal additional insights into how cognition is deployed to parse sentences.


This study investigates the acquisition of sentence focus in Russian by adult English-Russian bilinguals, while paying special attention to the relative contribution of constituent order and prosodic expression. It aims to understand how these factors influence perceived word-level prominence and focus assignment during listening. We present results of two listening tasks designed to examine the influence of pitch cues and constituent order on perceived word prominence (Experiment 1) and focus assignment (Experiment 2) during the auditory comprehension of SV[O] and OV[S] sentences in Russian.


Rhythm perception in speech and non-speech acoustic stimuli has been shown to be affected by general acoustic biases as well as by phonological properties of the native language of the listener. The present paper extends the cross-linguistic approach in this field by testing the application of the iambic-trochaic law as an assumed general acoustic bias on rhythmic grouping of non-speech stimuli by speakers of three languages: Arabic, Hebrew and German. These languages were chosen due to relevant differences in their phonological properties on the lexical level alongside similarities on the phrasal level.


Humans rarely speak without producing co-speech gestures of the hands, head, and other parts of the body. Co-speech gestures are also highly restricted in how they are timed with speech, typically synchronizing with prosodically prominent syllables. What functional principles underlie this relationship? Here, we examine how the production of co-speech manual gestures influences spatiotemporal patterns of the oral articulators during speech production.

