Publications by authors named "Tyler Perrachione"

Listeners identify talkers less accurately in a foreign language than in their native language, but it remains unclear whether this language-familiarity effect arises because listeners (1) simply lack experience identifying foreign-language talkers or (2) gain access to additional talker-specific information during concurrent linguistic processing of talkers' speech. Here, we tested whether sustained practice identifying talkers of an unfamiliar, foreign language could lead to generalizable improvement in learning to identify new talkers speaking that language, even if listeners remained unable to understand the talkers' speech. English-speaking adults with no prior experience with Mandarin practiced learning to identify Mandarin-speaking talkers over four consecutive days and were tested on their ability to generalize their Mandarin talker-identification abilities to new Mandarin-speaking talkers on the fourth day.

Article Synopsis
  • Musical training does not seem to enhance the neural processing of sounds, contradicting earlier smaller studies that suggested otherwise.
  • A large-scale study with over 260 participants found no significant correlation between musical training and neural responses to speech sounds.
  • The research indicates a lack of evidence for neural plasticity in early auditory responses tied to musical training and exposure.

Despite a vast literature on how speech intelligibility is affected by hearing loss and advanced age, remarkably little is known about the perception of talker-related information in these populations. Here, we assessed the ability of listeners to detect whether a change in talker occurred while listening to and identifying sentence-length sequences of words. Participants were recruited in four groups that differed in their age (younger/older) and hearing status (normal/impaired).
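
To make the change-detection measure concrete, here is a minimal signal-detection sketch (in Python) that computes sensitivity (d') from hypothetical hit and false-alarm counts; it is a generic illustration with made-up numbers, not the analysis or data reported in the study.

    # Generic signal-detection sketch: sensitivity (d') for detecting a talker change.
    # The counts below are hypothetical, not data from the study.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))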


Important recent advances in the cognitive neuroscience of language have been made using functional localizers to demarcate language-selective regions in individual brains. Although single-subject localizers offer insights that are unavailable in classic group analyses, they require additional scan time that imposes costs on investigators and participants. In particular, the unique practical challenges of scanning children and other special populations have led to less adoption of localizers for neuroimaging research with these theoretically and clinically important groups.


Purpose: The practice of removing "following" responses from speech perturbation analyses is increasingly common, despite no clear evidence as to whether these responses represent a unique response type. This study aimed to determine whether the distribution of responses to auditory perturbation paradigms is bimodal, consisting of two distinct response types, or unimodal.

Method: This mega-analysis pooled data from 22 previous studies to examine the distribution and magnitude of responses to auditory perturbations across four tasks: adaptive pitch, adaptive formant, reflexive pitch, and reflexive formant.
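
As an illustration of how the unimodal-versus-bimodal question can be framed statistically (not the authors' actual analysis), the sketch below compares one- and two-component Gaussian mixture fits by BIC on simulated response magnitudes; a lower BIC for the two-component model would favor a bimodal account. Hartigan's dip test is another common choice for the same question.

    # Minimal sketch: are perturbation responses unimodal or bimodal?
    # Simulated response magnitudes, not the pooled data from the 22 studies.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Mix of "opposing" (negative) and "following" (positive) responses.
    responses = np.concatenate([rng.normal(-30, 10, 150), rng.normal(15, 8, 50)])
    X = responses.reshape(-1, 1)

    # Compare BIC for 1- vs. 2-component Gaussian mixtures.
    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in (1, 2)}
    print(bics)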


Cerebellar differences have long been documented in autism spectrum disorder (ASD), yet the extent to which such differences might impact language processing in ASD remains unknown. To investigate this, we recorded brain activity with magnetoencephalography (MEG) while ASD and age-matched typically developing (TD) children passively processed spoken meaningful English and meaningless Jabberwocky sentences. Using a novel source localization approach that allows higher resolution MEG source localization of cerebellar activity, we found that, unlike TD children, ASD children showed no difference between evoked responses to meaningful versus meaningless sentences in right cerebellar lobule VI.


The task of processing speech masked by concurrent speech/noise can pose a substantial challenge to listeners. However, performance on such tasks may not directly reflect the amount of listening effort they elicit. Changes in pupil size and neural oscillatory power in the alpha range (8-12 Hz) are prominent neurophysiological signals known to reflect listening effort; however, measurements obtained through these two approaches are rarely correlated, suggesting that they may respond differently depending on the specific cognitive demands (and, by extension, the specific type of effort) elicited by a given task.
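
As a rough illustration of how the two measures can be related (not the study's pipeline), the sketch below band-passes a simulated EEG channel to the alpha range, takes its amplitude envelope as a proxy for alpha power, and correlates it with a simulated pupil trace; the sampling rate and signal names are assumptions.

    # Illustrative sketch: correlate alpha-band EEG power with pupil size.
    # Simulated signals; not the study's data or analysis pipeline.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from scipy.stats import pearsonr

    fs = 250  # assumed common sampling rate (Hz) for both signals
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(1)
    eeg = rng.standard_normal(t.size)    # stand-in EEG channel
    pupil = rng.standard_normal(t.size)  # stand-in pupil-diameter trace

    # Band-pass 8-12 Hz and take the Hilbert envelope as alpha power.
    b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
    alpha_env = np.abs(hilbert(filtfilt(b, a, eeg)))

    r, p = pearsonr(alpha_env, pupil)
    print(f"r = {r:.3f}, p = {p:.3g}")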


Repeated exposure to a stimulus results in reduced neural response, or repetition suppression, in brain regions responsible for processing that stimulus. This rapid accommodation to repetition is thought to underlie learning, stimulus selectivity, and strengthening of perceptual expectations. Importantly, reduced sensitivity to repetition has been identified in several neurodevelopmental, learning, and psychiatric disorders, including autism spectrum disorder (ASD), a neurodevelopmental disorder characterized by challenges in social communication and repetitive behaviors and restricted interests.


Phonetic variability across talkers imposes additional processing costs during speech perception, evident in performance decrements when listening to speech from multiple talkers. However, within-talker phonetic variation is a less well-understood source of variability in speech, and it is unknown how processing costs from within-talker variation compare to those from between-talker variation. Here, listeners performed a speeded word identification task in which three dimensions of variability were factorially manipulated: between-talker variability (single vs multiple talkers), within-talker variability (single vs multiple acoustically distinct recordings per word), and word-choice variability (two- vs six-word choices).
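
To make the 2 x 2 x 2 design easier to picture, the sketch below summarizes simulated response times in each cell of such a factorial design with pandas; the column names and values are assumptions, not the study's data or analysis.

    # Illustrative summary of a 2 x 2 x 2 speeded word-identification design.
    # Simulated trial-level data; column names are assumptions.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    n = 800
    trials = pd.DataFrame({
        "talkers": rng.choice(["single", "multiple"], n),
        "recordings": rng.choice(["single", "multiple"], n),
        "word_choices": rng.choice(["two", "six"], n),
        "rt_ms": rng.normal(650, 80, n),
    })

    # Mean RT per cell; processing costs from talker or within-talker
    # variability would show up as slower means in the "multiple" conditions.
    print(trials.groupby(["talkers", "recordings", "word_choices"])["rt_ms"].mean().round(1))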


Individuals with autism spectrum disorder (ASD) commonly display speech processing abnormalities. Binding of acoustic features of speech distributed across different frequencies into coherent speech objects is fundamental in speech perception. Here, we tested the hypothesis that the cortical processing of bottom-up acoustic cues for speech binding may be anomalous in ASD.


Nonword repetition, a common clinical measure of phonological working memory, involves component processes of speech perception, working memory, and speech production. Autistic children often show behavioral challenges in nonword repetition, as do many individuals with communication disorders. It is unknown which subprocesses of phonological working memory are vulnerable in autistic individuals, and whether the same brain processes underlie the transdiagnostic difficulty with nonword repetition.


Childhood socioeconomic status (SES) strongly predicts disparities in reading development, yet it is unknown whether early environments also moderate the cognitive and neurobiological bases of reading disorders (RD) such as dyslexia, the most prevalent learning disability. SES-diverse 6-9-year-old children (n = 155, half with RD) completed behavioral and functional magnetic resonance imaging (fMRI) tasks engaging phonological and orthographic processing, which revealed corresponding double-dissociations in neurocognitive deficits. At the higher end of the SES spectrum, RD was most strongly explained by differences in phonological skill and corresponding activation in left inferior frontal and temporoparietal regions during phonological processing, widely considered the "core deficit" of RD.
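
The moderation question described here is commonly tested with an interaction term in a regression model; the sketch below is a generic statsmodels example on simulated data with assumed variable names, not the study's actual model or variables.

    # Generic moderation sketch: does SES moderate the link between
    # phonological skill and reading outcome? Simulated data only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 155
    ses = rng.normal(0, 1, n)
    phon = rng.normal(0, 1, n)
    reading = 0.5 * phon + 0.2 * ses + 0.3 * ses * phon + rng.normal(0, 1, n)
    df = pd.DataFrame({"ses": ses, "phon": phon, "reading": reading})

    # The phon:ses coefficient tests whether the effect of phonological
    # skill on reading differs across the SES spectrum (moderation).
    model = smf.ols("reading ~ phon * ses", data=df).fit()
    print(model.summary().tables[1])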


The neural representation of a repeated stimulus is the standard against which a deviant stimulus is measured in the brain, giving rise to the well-known mismatch response. It has been suggested that individuals with dyslexia have poor implicit memory for recently repeated stimuli, such as the train of standards in an oddball paradigm. Here, we examined how the neural representation of a standard emerges over repetitions, asking whether there is less sensitivity to repetition and/or less accrual of "standardness" over successive repetitions in dyslexia.
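
As a generic illustration of how the accrual of "standardness" might be quantified (not the study's analysis), the sketch below bins simulated single-trial response amplitudes by how many identical stimuli preceded each trial and reports the mean per bin; a flatter curve across repetitions would indicate weaker adaptation to repetition.

    # Illustrative sketch: response amplitude as a function of repetition count.
    # Simulated single-trial amplitudes; not the study's data.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(6)
    n_trials = 600
    n_repetitions = rng.integers(1, 9, n_trials)  # standards preceding each trial
    # Assume amplitude decays with repetition (adaptation) plus noise.
    amplitude = 10 * np.exp(-0.3 * n_repetitions) + rng.normal(0, 1, n_trials)

    trials = pd.DataFrame({"n_repetitions": n_repetitions, "amplitude": amplitude})
    print(trials.groupby("n_repetitions")["amplitude"].mean().round(2))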


In the real world, listeners seem to implicitly learn talkers' vocal identities during interactions that prioritize attending to the content of talkers' speech. In contrast, most laboratory experiments of talker identification employ training paradigms that require listeners to explicitly practice identifying voices. Here, we investigated whether listeners become familiar with talkers' vocal identities during initial exposures that do not involve explicit talker identification.


According to several influential theoretical frameworks, phonological deficits in dyslexia result from reduced sensitivity to acoustic cues that are essential for the development of robust phonemic representations. Some accounts suggest that these deficits arise from impairments in rapid auditory adaptation processes that are either speech-specific or domain-general. Here, we examined the specificity of auditory adaptation deficits in dyslexia using a nonlinguistic tone anchoring (adaptation) task and a linguistic task in children and adults with and without dyslexia.


A perceptual adaptation deficit often accompanies reading difficulty in dyslexia, manifesting in poor perceptual learning of consistent stimuli and reduced neurophysiological adaptation to stimulus repetition. However, it is not known how adaptation deficits relate to differences in feedforward or feedback processes in the brain. Here we used electroencephalography (EEG) to interrogate the feedforward and feedback contributions to neural adaptation as adults with and without dyslexia viewed pairs of faces and words in a paradigm that manipulated whether there was a high probability of stimulus repetition versus a high probability of stimulus change.


The mapping between speech acoustics and phonemic representations is highly variable across talkers, and listeners are slower to recognize words when listening to multiple talkers compared with a single talker. Listeners' speech processing efficiency in mixed-talker settings improves when given time to reorient their attention to each new talker. However, it remains unknown how much time is needed to fully reorient attention to a new talker in mixed-talker settings so that speech processing becomes as efficient as when listening to a single talker.


Speech is processed less efficiently from discontinuous, mixed talkers than one consistent talker, but little is known about the neural mechanisms for processing talker variability. Here, we measured psychophysiological responses to talker variability using electroencephalography (EEG) and pupillometry while listeners performed a delayed recall of digit span task. Listeners heard and recalled seven-digit sequences with both talker (single- vs.


Autism spectrum disorder (ASD) is associated with widespread receptive language impairments, yet the neural mechanisms underlying these deficits are poorly understood. Neuroimaging has shown that processing of socially-relevant sounds, including speech and non-speech, is atypical in ASD. However, it is unclear how the presence of lexical-semantic meaning affects speech processing in ASD.


This study aimed to investigate brain regions that show different activation patterns between semantically typical and atypical items in both healthy adults and persons with aphasia (PWA). Eighteen neurologically healthy adults and twenty-one PWA participated in an fMRI semantic feature verification task that included typical and atypical stimuli from five different semantic categories. A whole-brain searchlight multi-voxel pattern analysis (MVPA) was conducted to classify brain activation patterns between typical and atypical conditions in each participant group separately.
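
For orientation, a whole-brain searchlight MVPA slides a small sphere across the brain and cross-validates a classifier on the voxels inside each sphere. The sketch below runs nilearn's SearchLight on a tiny synthetic 4D dataset; the data, labels, and parameters are all assumptions, and this is not the study's code.

    # Illustrative searchlight MVPA sketch (typical vs. atypical conditions).
    # Synthetic data and arbitrary parameters; not the study's pipeline.
    import numpy as np
    import nibabel as nib
    from nilearn.decoding import SearchLight
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(4)
    n_trials = 80
    data = rng.standard_normal((10, 10, 10, n_trials))   # tiny synthetic "brain"
    labels = np.array(["typical", "atypical"] * (n_trials // 2))
    data[4:6, 4:6, 4:6, labels == "atypical"] += 0.5      # weak injected signal

    affine = np.eye(4)
    imgs = nib.Nifti1Image(data, affine)
    mask = nib.Nifti1Image(np.ones((10, 10, 10), dtype=np.int8), affine)

    searchlight = SearchLight(
        mask_img=mask,
        radius=3.0,                       # sphere radius in mm (1 mm voxels here)
        estimator=LinearSVC(),
        cv=StratifiedKFold(n_splits=5),
        n_jobs=1,
    )
    searchlight.fit(imgs, labels)
    # searchlight.scores_ holds per-voxel cross-validated classification accuracy.
    print(searchlight.scores_.max())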


Sensorimotor adaptation (enduring changes to motor commands due to sensory feedback) allows speakers to match their articulations to intended speech acoustics. How the brain integrates auditory feedback to modify speech motor commands and what limits the degree of these modifications remain unknown. Here, we investigated the role of speech motor cortex in modifying stored speech motor plans.


Phonetic variability across talkers imposes additional processing costs during speech perception, often measured by performance decrements between single- and mixed-talker conditions. However, models differ in their predictions about whether accommodating greater phonetic variability (i.e.


Purpose: Child language acquisition is marked by an optional infinitive period (ages 2-4 years) during which children use nonfinite (infinitival) verb forms and finite verb forms interchangeably in grammatical contexts that require finite forms. In English, children's errors include omissions of the past tense -ed and 3rd-person singular -s markers. This language acquisition period typically ends by the age of 4 years, but it persists in children with language impairments.


The human voice is a complex acoustic signal that conveys talker identity via individual differences in numerous features, including vocal source acoustics, vocal tract resonances, and dynamic articulations during speech. It remains poorly understood how differences in these features contribute to perceptual dissimilarity of voices and, moreover, whether linguistic differences between listeners and talkers interact during perceptual judgments of voices. Here, native English- and Mandarin-speaking listeners rated the perceptual dissimilarity of voices speaking English or Mandarin from either forward or time-reversed speech.
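
Dissimilarity ratings like these are often explored by embedding the talker-by-talker dissimilarity matrix into a low-dimensional perceptual space; the sketch below applies metric multidimensional scaling from scikit-learn to a random symmetric matrix standing in for listeners' ratings, purely as an illustration of the technique.

    # Illustrative sketch: embed a voice dissimilarity matrix with MDS.
    # Random symmetric "ratings" stand in for listeners' actual judgments.
    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(5)
    n_talkers = 12
    ratings = rng.uniform(1, 7, (n_talkers, n_talkers))  # e.g., a 1-7 rating scale
    dissim = (ratings + ratings.T) / 2                    # symmetrize
    np.fill_diagonal(dissim, 0)                           # a voice matches itself

    # dissimilarity="precomputed" tells MDS the input is already a distance matrix.
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)
    print(coords.shape)  # (n_talkers, 2): one 2-D point per voice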


Phonological working memory is the capacity to briefly maintain and recall representations of sounds important for speech and language and is believed to be critical for language and reading acquisition. Whether phonological working memory is supported by fronto-parietal brain regions associated with short-term memory storage or perisylvian brain structures implicated in speech perception and production is unclear, perhaps due to variability in stimuli, task demands, and individuals. We used fMRI to assess neurophysiological responses while individuals performed two tasks with closely matched stimuli but divergent task demands (nonword repetition and nonword discrimination) at two levels of phonological working memory load.
