Publications by authors named "Natalya Kaganovich"

In earlier work with adults, we showed that long-term phonemic representations are audiovisual, meaning that they contain information on typical mouth shape during articulation. Many aspects of audiovisual processing have a prolonged developmental course, often not reaching maturity until late adolescence. In this study, we examined the status of phonemic representations in two groups of children: 8- to 9-year-olds and 11- to 12-year-olds.

The presence of long-term auditory representations for phonemes has been well-established. However, since speech perception is typically audiovisual, we hypothesized that long-term phoneme representations may also contain information on speakers' mouth shape during articulation. We used an audiovisual oddball paradigm in which, on each trial, participants saw a face and heard one of two vowels.

We examined whether children with developmental language disorder (DLD) differed from their peers with typical development (TD) in the degree to which they encode information about a talker's mouth shape into long-term phonemic representations. Children watched a talker's face and listened to rare changes from [i] to [u] or the reverse. In the neutral condition, the talker's face had a closed mouth throughout.

The ability to use visual speech cues does not fully develop until late adolescence. The cognitive and neural processes underlying this slow maturation are not yet understood. We examined electrophysiological responses elicited by visually perceived articulations in younger (8-9 years) and older (11-12 years) children as well as in adults during an audiovisual word matching task and related them to the amount of benefit gained from seeing the talker's face during a speech-in-noise (SIN) perception task.

Purpose: Earlier, my colleagues and I showed that children with a history of specific language impairment (H-SLI) are significantly less able to detect audiovisual asynchrony than children with typical development (TD; Kaganovich & Schumaker, 2014). Here, I first replicate this finding in new groups of children with H-SLI and with TD and then examine the relationship among audiovisual function, attention skills, and language in a combined pool of children.

Method: The stimuli were a pure tone and an explosion-shaped figure.

Background: Visual speech cues influence different aspects of language acquisition. However, whether developmental language disorders may be associated with atypical processing of visual speech is unknown. In this study, we used behavioral and ERP measures to determine whether children with a history of specific language impairment (H-SLI) differ from their age-matched typically developing (TD) peers in the ability to match auditory words with corresponding silent visual articulations.

Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding. However, the degree of this improvement varies greatly among individuals. We examined the relationship between two distinct stages of visual articulatory processing and SIN accuracy by combining a cross-modal repetition priming task with ERP recordings.

Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown.

Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood.

Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults.

Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we examined perception of both audiovisually congruent and audiovisually incongruent speech in school-age children with a history of SLI (H-SLI), their typically developing (TD) peers, and adults.

Purpose: One possible source of tense and agreement limitations in children with specific language impairment (SLI) is a weakness in appreciating structural dependencies that occur in many sentences in the input. This possibility was tested in the present study.

Method: Children with a history of SLI (H-SLI; n = 12; M = 9;7 [years;months]) and typically developing same-age peers (TD; n = 12; M = 9;7) listened to and made grammaticality judgments about grammatical and ungrammatical sentences involving either a local agreement error (e.

Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information.

Method: Fifteen H-SLI children, 15 TD children, and 15 adults judged whether a flashed explosion-shaped figure and a 2-kHz pure tone occurred simultaneously. The stimuli were presented at 0-, 100-, 200-, 300-, 400-, and 500-ms temporal offsets.

Using electrophysiology, we examined two questions about musical training: whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in the timbre of the sounds. Sounds consisted of a male and a female voice saying the neutral vowel [a], and of a cello and a French horn playing an F3 note.

Non-linguistic auditory processing and working memory updating were examined with event-related potentials (ERPs) in 18 children who stutter (CWS) and 18 children who do not stutter (CWNS). Children heard frequent 1 kHz tones interspersed with rare 2 kHz tones. The two groups did not differ on any measure of the P1 and N1 components, strongly suggesting that early auditory processing of pure tones is unimpaired in CWS.

In English, voiced and voiceless syllable-initial stop consonants differ in both fundamental frequency at the onset of voicing (onset F0) and voice onset time (VOT). Although either correlate alone can cue the voicing contrast, listeners weight VOT more heavily when both are available. Such differential weighting may arise from differences in the perceptual distance between voicing categories along the VOT versus onset F0 dimensions, or it may arise from a bias to pay more attention to VOT than to onset F0.

This study combined behavioral and electrophysiological measurements to investigate interactions during speech perception between native phonemes and the talker's voice. In a Garner selective attention task, participants either classified each sound as one of two native vowels ([epsilon] and [ae]), ignoring the talker, or as one of two male talkers, ignoring the vowel. The dimension to be ignored was held constant in baseline tasks and changed randomly across trials in filtering tasks.
