The present experiment investigated the on-line processing of semantic and prosodic information. We recorded event-related brain potentials (ERPs) to semantically and/or prosodically congruous and incongruous sentences, presented aurally, in order to study the time course of semantic and prosodic processing and to determine whether these two processes are independent or interactive. The prosodic mismatch was produced by cross-splicing the beginning of statements with the end of questions, and vice versa. Subjects had to decide whether the sentences were semantically or prosodically congruous, under two different attention conditions. Results showed that a right centro-parietal negative component (N400) was associated with semantic mismatch, and a left temporo-parietal positive component (P800) was associated with prosodic mismatch. Thus, these two electrophysiological markers of semantic and prosodic processing differed in polarity, latency, and scalp distribution, which may indicate that the two processes stem from different underlying generators. However, the finding that the P800 elicited by prosodic mismatch was larger when the sentences were semantically incongruous than when they were congruous suggests that the two processes may be interactive.
DOI: http://dx.doi.org/10.1016/j.cogbrainres.2003.10.002
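As a rough illustration of the cross-splicing manipulation described in the abstract, the sketch below splices the beginning of a statement recording onto the end of a question recording. Everything here is an assumption for illustration, not taken from the study: the file names, the hand-picked splice point, the mono WAV format, the numpy/soundfile libraries, and the short crossfade added to avoid an audible click at the junction.

# Minimal sketch of cross-splicing two recordings (illustrative only;
# the study's actual splice points and audio processing are not specified here).
import numpy as np
import soundfile as sf

def cross_splice(statement_path, question_path, splice_s, fade_ms=10.0,
                 out_path="spliced.wav"):
    # Assumes mono recordings with a shared sample rate (hypothetical files).
    stmt, sr = sf.read(statement_path)
    ques, sr2 = sf.read(question_path)
    assert sr == sr2, "recordings must share a sample rate"
    n = int(splice_s * sr)             # splice point, in samples
    fade = int(fade_ms / 1000.0 * sr)  # crossfade length, in samples
    head, tail = stmt[:n].copy(), ques[n:]
    # Linear crossfade over the junction so the edit is not audible as a click.
    ramp = np.linspace(0.0, 1.0, fade)
    head[-fade:] = head[-fade:] * (1.0 - ramp) + ques[n - fade:n] * ramp
    sf.write(out_path, np.concatenate([head, tail]), sr)

# Hypothetical usage: splice the pair of recordings at 1.2 s.
cross_splice("statement.wav", "question.wav", splice_s=1.2)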
Cognition
December 2024
Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands; Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, 6525 EN Nijmegen, The Netherlands.
Face-to-face communication is not only about 'what' is said but also 'how' it is said, both in speech and bodily signals. Beat gestures are rhythmic hand movements that typically accompany prosodic prominence in conversation. Yet, it is still unclear how beat gestures influence language comprehension.
J Speech Lang Hear Res
December 2024
Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis.
Purpose: Prior research has extensively documented challenges in recognizing verbal and nonverbal emotion among older individuals compared with younger counterparts. However, the nature of these age-related changes remains unclear. The present study investigated how older and younger adults comprehend four basic emotions.
J Cogn Neurosci
October 2024
Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
Prosody underpins various linguistic domains ranging from semantics and syntax to discourse. For instance, prosodic information in the form of lexical stress modifies the meanings, and with them the syntactic contexts, of words, as in Turkish kaz-má "pickaxe" (noun) versus káz-ma "do not dig" (imperative). Likewise, prosody indicates the focused constituent of an utterance, such as the noun phrase filling the wh-spot in a dialogue like "What did you eat?" "I ate ___."
Dev Sci
January 2025
Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA.
When infants hear sentences containing unfamiliar words, are some language-world links (such as noun-object) more readily formed than others (such as verb-predicate)? We examined English-learning 14- to 15-month-olds' capacity for linking referents in scenes with bisyllabic nonce utterances. Each of the two syllables referred either to the object's identity or to the object's motion. Infants heard the syllables in either Verb-Subject (VS) or Subject-Verb (SV) order.
Ear Hear
October 2024
Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel.
Objectives: Cochlear implants (CIs) are remarkably effective but have limitations in transmitting the spectro-temporal fine structure of speech. This may impair the processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found differences in spoken-emotion processing between CI users with postlingual deafness (postlingual CI) and normal-hearing (NH) matched controls (age range, 19 to 65 years).