Publications by authors named "Ingo Hertrich"

Discourse understanding is hampered when missing or conflicting context information is given. In four experiments, we investigated what happens (a) when the definite determiner "the," which presupposes existence and uniqueness, does not find a unique referent in the context or (b) when the appropriate use of the indefinite determiner is violated by the presence of a unique referent (Experiment 1 and Experiment 2). To focus on the time-course of processing the uniqueness presupposition of the definite determiner, we embedded the determiner in different sentence structures and varied the context (Experiment 3 and Experiment 4).

This review article summarizes various functions of the dorsolateral prefrontal cortex (DLPFC) that are related to language processing. To this end, its connectivity with the left-dominant perisylvian language network was considered, as well as its interaction with other functional networks that, directly or indirectly, contribute to language processing. Language-related functions of the DLPFC comprise various aspects of pragmatic processing such as discourse management, integration of prosody, interpretation of nonliteral meanings, inference making, ambiguity resolution, and error repair.

Discourse structures enable us to generate expectations based upon linguistic material that has already been introduced. We investigated how the required cognitive operations such as reference processing, identification of critical items, and eventual handling of violations correlate with neuronal activity within the language network of the brain. To this end, we conducted a functional magnetic resonance imaging (fMRI) study in which we manipulated spoken discourse coherence by using presuppositions (PSPs) that either correspond or fail to correspond to items in preceding context sentences.

Cross-correlation of magnetoencephalography (MEG) with time courses derived from the speech signal has shown differences in phase-locking between blind subjects able to comprehend accelerated speech and sighted controls. The present training study helps to disentangle the effects of blindness and training. Both subject groups (baseline: n = 16 blind, 13 sighted; trained: 10 blind, 3 sighted) were able to enhance speech comprehension up to ca.

The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of pre-SMA was tested for accelerated speech comprehension by inducing a transient "virtual lesion" using continuous theta-burst stimulation (cTBS).

Background: Habituation, as a basic form of learning, is characterized by decreasing amplitudes of neuronal reaction following repeated stimuli. Recent studies indicate that habituation to pure tones of different frequencies occurs in fetuses and infants.

Aims: Neural processing of different syllables in fetuses and infants was investigated.

Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review paper is to summarize more recent work, suggesting that the SMA has various superordinate control functions during speech communication and language reception, which is particularly relevant in case of increased task demands. The SMA is subdivided into a posterior region (SMA proper) serving predominantly motor-related functions and an anterior part (pre-SMA) involved in higher-order cognitive control mechanisms.

Discourse structure enables us to generate expectations based upon linguistic material that has already been introduced. The present magnetoencephalography (MEG) study addresses auditory perception of test sentences in which discourse coherence was manipulated by using presuppositions (PSP) that either correspond or fail to correspond to items in preceding context sentences with respect to uniqueness and existence. Context violations yielded delayed auditory M50 and enhanced auditory M200 cross-correlation responses to syllable onsets within an analysis window of 1.

In many functional magnetic resonance imaging (fMRI) studies, blind humans were found to show cross-modal reorganization engaging the visual system in non-visual tasks. For example, blind people can manage to understand (synthetic) spoken language at very high speaking rates up to ca. 20 syllables/s (syl/s).

Late-blind humans can learn to understand speech at ultra-fast syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. Thus, the observed functional cross-modal recruitment of occipital cortex might facilitate ultra-fast speech processing in these individuals.

In less than three decades, the concept "cerebellar neurocognition" has evolved from a mere afterthought to an entirely new and multifaceted area of neuroscientific research. A close interplay between three main strands of contemporary neuroscience induced a substantial modification of the traditional view of the cerebellum as a mere coordinator of autonomic and somatic motor functions. Indeed, the wealth of current evidence derived from detailed neuroanatomical investigations, functional neuroimaging studies with healthy subjects and patients, and in-depth neuropsychological assessment of patients with cerebellar disorders shows that the cerebellum has a cardinal role to play in affective regulation, cognitive processing, and linguistic function.

Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables per second (syl/s), exceeding by far the maximum performance level of untrained listeners (ca. 8 syl/s). Previous findings indicate that the central-visual system contributes to the processing of accelerated speech in blind subjects.

In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution.

Background: Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables per second (syl/s), exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate.

Neurophonetics.

Wiley Interdiscip Rev Cogn Sci

March 2013

Neurophonetics aims at the elucidation of the brain mechanisms underlying speech communication in our species. Clinical observations in patients with speech impairments following cerebral disorders provided the initial vantage point of this research area and indicated distinct functional-neuroanatomic systems to support human speaking and listening. Subsequent approaches, considering speech production a motor skill, investigated vocal tract movements associated with spoken language by means of kinematic and electromyographic techniques, allowing, among other things, for the evaluation of computational models suggesting elementary phonological gestures or a mental syllabary as basic units of speech motor control.

Blind people can learn to understand speech at ultra-high syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. To further elucidate the neural mechanisms underlying this skill, magnetoencephalographic (MEG) measurements during listening to sentence utterances were cross-correlated with time courses derived from the speech signal (envelope, syllable onsets and pitch periodicity) to capture phase-locked MEG components (14 blind, 12 sighted subjects; speech rate=8 or 16 syllables/s, pre-defined source regions: auditory and visual cortex, inferior frontal gyrus).

Recent experiments showed that the perception of vowel length by German listeners exhibits the characteristics of categorical perception. The present study sought to find the neural activity reflecting categorical vowel length and the short-long boundary by examining the processing of non-contrastive durations and categorical length using MEG. Using disyllabic words with varying /a/-durations and temporally-matched nonspeech stimuli, we found that each syllable elicited an M50/M100-complex.

Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitive demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli.

Asperger syndrome (AS) includes impaired recognition of other people's mental states. Since language-based diagnostic procedures may be confounded by cognitive-linguistic compensation strategies, nonverbal test materials were created, including human affective and vegetative sounds. Depending on video context, each sound could be interpreted either as a direct expression of an agent's affective/vegetative state or as the result of intentional-executive mental operations.

During speech perception, acoustic correlates of syllable structure and pitch periodicity are directly reflected in electrophysiological brain activity. Magnetoencephalography (MEG) recordings were made while 10 participants listened to natural or formant-synthesized speech at moderately fast or ultrafast rate. Cross-correlation analysis was applied to show brain activity time-locked to the speech envelope, to an acoustic marker of syllable onsets, and to pitch periodicity.
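The cross-correlation approach described above can be illustrated with a minimal sketch. This is not the authors' actual analysis pipeline: the sampling rate, the 4 Hz "syllable-rate" envelope, the 50 ms response lag, and the noise level are all assumptions chosen for demonstration. A simulated sensor signal that is phase-locked to the speech envelope shows a cross-correlation peak at the response lag:

```python
import numpy as np

# Minimal sketch (hypothetical parameters, not the study's pipeline):
# cross-correlate a simulated MEG channel with a speech-envelope time
# course to recover a phase-locked response lag.
fs = 200                     # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)  # 5 s of data

# Hypothetical speech envelope: 4 Hz syllable-rate modulation
envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))

# Simulated sensor signal: envelope-locked response delayed by 50 ms,
# plus additive noise
lag_samples = int(0.05 * fs)  # 50 ms -> 10 samples
rng = np.random.default_rng(0)
meg = np.roll(envelope, lag_samples) + 0.3 * rng.standard_normal(t.size)

# Normalized cross-correlation over lags of 0..250 ms
env_z = (envelope - envelope.mean()) / envelope.std()
meg_z = (meg - meg.mean()) / meg.std()
n = t.size
max_lag = int(0.25 * fs)
xcorr = np.array([
    np.mean(env_z[:n - lag] * meg_z[lag:])  # corr at this lag
    for lag in range(max_lag + 1)
])
# The peak of xcorr falls near lag_samples, i.e. the 50 ms response delay.
```

In the actual MEG setting, the envelope would be extracted from the speech recording and the correlation computed per source region, but the peak-lag logic is the same.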

This study investigates the temporal resolution capacities of the central-auditory system in a subject (NP) suffering from repetition conduction aphasia. More specifically, the patient was asked to detect brief gaps between two stretches of broadband noise (gap detection task) and to evaluate the duration of two biphasic (WN-3) continuous noise elements, starting with white noise (WN) followed by 3 kHz bandpass-filtered noise (duration discrimination task). During the gap detection task, the two portions of each stimulus were either identical ("intra-channel condition") or differed ("inter-channel condition") in the spectral characteristics of the leading and trailing acoustic segments.
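The intra- versus inter-channel stimulus construction can be sketched as follows. This is an illustrative reconstruction, not the study's stimulus code: the burst durations, sampling rate, and the crude FFT-based 3 kHz bandpass are assumptions. Two noise bursts are separated by a silent gap; in the inter-channel condition the trailing burst is band-limited so that the leading and trailing segments differ spectrally:

```python
import numpy as np

# Hypothetical gap-detection stimulus sketch (parameters assumed).
fs = 16000  # samples per second
rng = np.random.default_rng(1)

def noise_burst(dur_s, bandpass_3k=False):
    """White-noise burst; optionally band-limited around 3 kHz."""
    x = rng.standard_normal(int(dur_s * fs))
    if bandpass_3k:
        # Crude brick-wall bandpass via FFT masking (illustrative only)
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        spec[(freqs < 2500) | (freqs > 3500)] = 0
        x = np.fft.irfft(spec, n=x.size)
    return x

def gap_stimulus(gap_ms, inter_channel=False):
    """Two 300 ms noise bursts separated by a silent gap.

    intra-channel: both bursts are white noise.
    inter-channel: trailing burst is 3 kHz bandpass-filtered.
    """
    gap = np.zeros(int(gap_ms / 1000 * fs))
    lead = noise_burst(0.3)
    trail = noise_burst(0.3, bandpass_3k=inter_channel)
    return np.concatenate([lead, gap, trail])

stim = gap_stimulus(gap_ms=10, inter_channel=True)
```

Varying `gap_ms` toward zero yields the detection threshold; inter-channel thresholds are typically higher because the gap cannot be resolved within a single frequency channel.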

Clinical data indicate that the brain network of speech motor control can be subdivided into at least three functional-neuroanatomical subsystems: (i) planning of movement sequences (premotor ventrolateral-frontal cortex and/or anterior insula), (ii) preparedness for/initiation of upcoming verbal utterances (supplementary motor area, SMA), and (iii) on-line innervation of vocal tract muscles, i.e., motor execution (corticobulbar system, basal ganglia, cerebellum).

During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream, prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K.

Blind individuals may learn to understand ultra-fast synthetic speech at a rate of up to about 25 syllables per second (syl/s), an accomplishment by far exceeding the maximum performance level of normal-sighted listeners (8-10 syl/s). The present study indicates that this exceptional skill engages distinct regions of the central-visual system. Hemodynamic brain activation during listening to moderately fast (8 syl/s) and ultra-fast speech (16 syl/s) was measured in a blind individual and six normal-sighted controls.

Using functional magnetic resonance imaging, the distribution of hemodynamic brain responses bound to the perceptual processing of interjections, that is 'exclamations inserted into an utterance without grammatical connection to it', was determined (vs. a silent baseline condition). These utterances convey information about a speaker's affective/emotional state by their 'tone' (emotional prosody) and/or their lexical content.
