Publications by authors named "Sheila E Blumstein"

From film and television to graphic storytelling, tonal music can accompany visual narratives in a variety of contexts. The apprehension of both musical and narrative sequences involves temporal categories in ordered patterning, which raises an interesting question: Do musical progressions and visual narratives rely on shared sequence processing mechanisms? If this is the case, then cues from music and sequential static images, when presented simultaneously, should interact during audiovisual online processing. We addressed this question by measuring reaction times to target picture panels appearing in visual narrative (comic strip) sequences, which were presented panel by panel and synchronized with musical chord progressions.

This study examines cross-modality effects of a semantically biased written sentence context on the perception of an acoustically ambiguous word target, identifying neural areas sensitive to interactions between sentential bias and phonetic ambiguity. Of interest is whether the locus and nature of these interactions resemble those previously demonstrated for auditory-only effects. fMRI results show significant interaction effects in the right mid-middle temporal gyrus (RmMTG) and bilateral anterior superior temporal gyri (aSTG), regions along the ventral language comprehension stream that map sound onto meaning.

Research has implicated the left inferior frontal gyrus (LIFG) in mapping acoustic-phonetic input to sound category representations, both in native speech perception and non-native phonetic category learning. At issue is whether this sensitivity reflects access to phonetic category information per se or to explicit category labels, the latter often being required by experimental procedures. The current study employed an incidental learning paradigm designed to increase sensitivity to a difficult non-native phonetic contrast without inducing explicit awareness of the categorical nature of the stimuli.

In spoken word recognition, subphonemic variation influences lexical activation, with sounds near a category boundary increasing phonetic competition as well as lexical competition. The current study investigated the interplay of these factors using a visual world task in which participants were instructed to look at a picture of an auditory target (e.g.

The role of semantic features, which are distinctive (e.g., a zebra's stripes) or shared (e.

Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items.

Prior research has shown that the perception of degraded speech is influenced by within-sentence meaning and recruits one or more components of a frontal-temporal-parietal network. The goal of the current study is to examine whether the overall conceptual meaning of a sentence, made up of one set of words, influences the perception of a second, acoustically degraded sentence made up of a different set of words. Using functional magnetic resonance imaging (fMRI), we presented an acoustically clear sentence followed by an acoustically degraded sentence and manipulated the semantic relationship between them: Related in meaning (but consisting of different content words), Unrelated in meaning, or Same.

Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all.

Phonemic paraphasias are a common presenting symptom in aphasia and are thought to reflect a deficit in which selection of an incorrect phonemic segment results in the clear-cut substitution of one phonemic segment for another. The current study re-examines the basis of these paraphasias. Seven left-hemisphere-damaged aphasics, with a range of left-hemisphere lesions and clinical diagnoses including Broca's, Conduction, and Wernicke's aphasia, were asked to produce the syllable-initial voiced and voiceless fricative consonants [z] and [s] in CV syllables followed by one of five vowels [i e a o u], in isolation and in a carrier phrase.

Although much evidence suggests that the identification of phonetically ambiguous target words can be biased by preceding sentential context, interactive and autonomous models of speech perception disagree as to the mechanism by which higher level information affects subjects' responses. Some have suggested that the time course of context effects is incompatible with interactive models (e.g.

Two experiments examined the influence of phonologically similar neighbors on the articulation of words' initial stop consonants in order to investigate the conditions under which this influence arises. In Experiment 1, participants produced words in isolation. Results showed that the voice-onset time (VOT) of a target's initial voiceless stop was predicted by its overall neighborhood density, but not by its having a voicing minimal pair.

Findings in the domain of spoken word recognition have indicated that lexical representations contain both abstract and episodic information. It has been proposed that processing time determines when each source of information is recruited, with increased processing time being required to access lower-frequency episodic instantiations. The time-course hypothesis of specificity effects has thus identified a strong role for retrieval mechanisms mediating the use of abstract versus episodic information.

Spoken word production research has shown that phonological information influences lexical selection. It remains unclear, however, whether this phonological information is specified for its phonological environment (e.g.

Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning.
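
The learning mechanism invoked here, adjustment driven by prediction-error signals, can be illustrated with a minimal delta-rule sketch. The snippet below is an illustrative assumption rather than anything from the article: the function name, learning rate, and VOT framing are all hypothetical, and it simply shows an internal estimate being nudged toward each new observation in proportion to the prediction error.

```python
# Minimal sketch (illustrative, not from the article) of a prediction-error
# learning rule: the current estimate moves toward each observation by a
# fraction of the error, as in a simple delta rule.

def update_estimate(estimate, observation, learning_rate=0.1):
    """Shift the estimate toward the observation in proportion to the
    prediction error (observation - estimate)."""
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

if __name__ == "__main__":
    # Hypothetical example: a listener's expected voice-onset time (ms) for a
    # /t/-like category gradually adapts to a talker who produces shorter VOTs.
    expected_vot = 70.0
    for observed_vot in [55.0, 52.0, 58.0, 50.0, 54.0]:
        expected_vot = update_estimate(expected_vot, observed_vot)
        print(f"updated expectation: {expected_vot:.1f} ms")
```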

Young word learners fail to discriminate phonetic contrasts in certain situations, an observation that has been used to support arguments that the nature of lexical representation and lexical processing changes over development. An alternative possibility, however, is that these failures arise naturally as a result of how word familiarity affects lexical processing. In the present work, we explored the effects of word familiarity on adults' use of phonetic detail.

The current study explored how factors of acoustic-phonetic and lexical competition affect access to the lexical-semantic network during spoken word recognition. An auditory semantic priming lexical decision task was presented to subjects while they were in the MR scanner. Prime-target pairs consisted of prime words with the initial voiceless stop consonants /p/, /t/, and /k/ followed by word and nonword targets.

Previous behavioral work has shown that the phonetic realization of words in spoken word production is influenced by sound shape properties of the lexicon. A recent fMRI study (Peramunage, Blumstein, Myers, Goldrick, & Baese-Berk, 2011) showed that this influence of lexical structure on phonetic implementation recruited a network of areas that included the supramarginal gyrus (SMG) extending into the posterior superior temporal gyrus (pSTG) and the inferior frontal gyrus (IFG). The current study examined whether lesions in these areas result in a concomitant functional deficit.

Listeners' perception of acoustically presented speech is constrained by many different sources of information that arise from other sensory modalities and from more abstract, higher-level language context. An open question is how perceptual processes are influenced by and interact with these other sources of information. In this study, we use fMRI to examine the effect of a prior sentence fragment's meaning on the categorization of two possible target words that differ in an acoustic-phonetic feature of the initial consonant, voice-onset time (VOT).

One of the oldest questions in cognitive science is whether cognitive operations are modular or distributed across domains. We propose that fMRI has made a unique contribution to this question by elucidating the nature of structure-function relations. We focus our discussion on language, which is the classic domain for arguments in favor of domain specificity and a fixed neural architecture.

The current study explored how listeners map variable acoustic input onto a common sound structure representation while retaining the phonetic detail needed to distinguish among talkers. An adaptation paradigm was used to examine areas that showed an equal neural response (equal release from adaptation) to phonetic change whether spoken by the same speaker or by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas that showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally.

We used eye-tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot-parrot) and cohort (e.

Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects are the result of continuous activation between lexical form and semantic processing or whether they arise from a more modular system in which the timing of accessing lexical form determines the timing of semantic activation. This study examined this issue using the visual world paradigm by investigating the time course of semantic priming as a function of the number of phonological competitors.

In the McGurk Effect, a visual stimulus can affect the perception of an auditory signal, suggesting integration of the auditory and visual streams. However, it is unclear when in speech processing this auditory-visual integration occurs. The present study used a semantic priming paradigm to investigate whether integration occurs before, during, or after access of the lexical-semantic network.

The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g.

Apraxic patients are known for deficits in producing and comprehending skilled movements. Two experiments tested their implicit and explicit knowledge about manipulable objects in order to examine whether such deficits are accompanied by impairment in the conceptual representation of manipulation features. An eye-tracking method was used to test implicit knowledge (Experiment 1): participants viewed a visual display on a computer screen and touched the corresponding object in response to an auditory input.
