In two ERP experiments, we assessed the impact of discourse-level information on the processing of an unfolding spoken sentence. Subjects listened to sentences like "Jane told her brother that he was exceptionally quick/slow", designed such that the alternative critical words were equally acceptable within the local sentence context. In Experiment 1, these sentences were embedded in a discourse that rendered one of the critical words anomalous (e.g. because Jane's brother had in fact done something very quickly). Relative to the coherent alternative, these discourse-anomalous words elicited a standard N400 effect that started at 150-200 ms after acoustic word onset. Furthermore, when the same sentences were heard in isolation in Experiment 2, the N400 effect disappeared. The results demonstrate that our listeners related the unfolding spoken words to the wider discourse extremely rapidly, after having heard only the first two or three phonemes, and in many cases well before the end of the word. In addition, the identical nature of discourse- and sentence-dependent N400 effects suggests that, from the perspective of the word-elicited comprehension process indexed by the N400, the interpretive context delineated by a single unfolding sentence and a larger discourse is functionally identical.
DOI: http://dx.doi.org/10.1016/s0926-6410(03)00196-4
J Exp Psychol Gen, December 2024
Department of Psychology, Harvard University.
It is well established that people make predictions during language comprehension; the nature and specificity of these predictions, however, remain unclear. For example, do comprehenders routinely make predictions about which words (and phonological forms) might come next in a conversation, or do they simply make broad predictions about the gist of the unfolding context? Prior EEG studies using tightly controlled experimental designs have shown that form-based prediction can occur during comprehension, as N400s to unexpected words are reduced when they resemble the form of a predicted word (e.g. …).
Cortex, December 2024
Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
Background: Language is multimodal and situated in rich visual contexts. Language is also incremental, unfolding moment to moment in real time, yet few studies have examined how spoken language interacts with gesture and visual context during multimodal language processing. Gesture is a rich communication cue that is integrally related to speech and often depicts concrete referents from the visual world.
J Neurosci Methods, January 2025
Department of Psychology, University of Kassel, Germany.
An increase in pupil size is an important index of listening effort, for example, when listening to speech masked by noise. Specifically, the pupil dilates as the signal-to-noise ratio decreases. A growing body of work aims to assess listening effort under naturalistic conditions using continuous speech, such as spoken stories.
Brain Lang, October 2024
Department of Psychology and Human Development, Peabody College, Vanderbilt University, USA.
Language is processed incrementally, with addressees considering multiple candidate interpretations as speech unfolds, supporting the retention of these candidate interpretations in memory. For example, after interpreting the utterance "Click on the striped bag", listeners exhibit better memory for non-mentioned items in the context that were temporarily consistent with what was said (e.g. …).
Conscious Cogn, August 2024
Department of Psychology, University of Turku, Finland; Turku Brain and Mind Centre, University of Turku, Finland.
The level-of-processing (LoP) hypothesis postulates that the transition from unaware to aware visual stimuli is either graded or dichotomous depending on the depth of stimulus processing. Humans can be progressively aware of low-level features, such as colors or shapes, while high-level features, such as semantic category, enter consciousness in an all-or-none fashion. Unlike in vision, sounds always unfold in time, which might require mechanisms dissimilar from those of visual processing.
View Article and Find Full Text PDFEnter search terms and have AI summaries delivered each week - change queries or unsubscribe any time!