Over the past few decades, research into the function of the cerebellum has expanded far beyond the motor domain. A growing number of studies are probing the role of specific cerebellar subregions, such as Crus I and Crus II, in higher-order cognitive functions including receptive language processing. In the current fMRI study, we show evidence for the cerebellum's sensitivity to variation in two well-studied psycholinguistic properties of words, lexical frequency and phonological neighborhood density, during passive, continuous listening to a podcast.
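Phonological neighborhood density is commonly operationalized as the number of words in a lexicon that differ from a target by a single phoneme (substitution, addition, or deletion). A minimal sketch of that count, using a toy lexicon with letters standing in for phonemes (not the study's actual materials):

```python
def is_neighbor(a, b):
    """True if sequences a and b differ by exactly one edit
    (substitution, addition, or deletion of one symbol)."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        # substitution: exactly one mismatched position
        return sum(x != y for x, y in zip(a, b)) == 1
    # addition/deletion: deleting one symbol from the longer
    # sequence must yield the shorter one
    short, long = (a, b) if len(a) < len(b) else (b, a)
    return any(long[:i] + long[i + 1:] == short for i in range(len(long)))

def neighborhood_density(target, lexicon):
    """Count the one-edit neighbors of target in the lexicon."""
    return sum(is_neighbor(target, w) for w in lexicon if w != target)

# toy lexicon: "cat" has neighbors bat, cap, cast, at
lexicon = ["cat", "bat", "cap", "cast", "at", "dog"]
print(neighborhood_density("cat", lexicon))  # → 4
```

Real psycholinguistic norms compute this over phoneme transcriptions of a large corpus rather than orthographic strings, but the edit-distance logic is the same.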
J Speech Lang Hear Res
February 2023
Purpose: This study investigates whether crosslinguistic effects on auditory word recognition are modulated by the quality of the auditory signal (clear vs. noisy).
Method: In an online experiment, a group of Spanish-English bilingual listeners performed an auditory lexical decision task in their second language, English. Words and pseudowords were either presented in the clear or embedded in white noise.
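Embedding stimuli in noise at a controlled level is typically done by scaling the noise to a target signal-to-noise ratio; the sketch below is a generic illustration, not the study's actual mixing procedure, and the tone stimulus is hypothetical:

```python
import numpy as np

def add_white_noise(signal, snr_db, rng=None):
    """Mix signal with white noise scaled so that
    10 * log10(signal_power / noise_power) equals snr_db."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    sig_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    # solve for the noise gain that yields the requested SNR
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

# usage: a 1 kHz tone sampled at 16 kHz, mixed at 0 dB SNR
t = np.arange(16000) / 16000
tone = np.sin(2 * np.pi * 1000 * t)
noisy = add_white_noise(tone, snr_db=0.0)
```

At 0 dB SNR the signal and noise carry equal power, a common anchor point in speech-in-noise experiments.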
J Speech Lang Hear Res
September 2021
Purpose: Morse code became widely used as a form of communication in telegraphy, radio and maritime communication, and military operations, and it remains popular with ham radio operators. Some skilled users of Morse code are able to comprehend a full sentence as they listen to it, while others must first transcribe the sentence into its written letter sequence. Morse thus provides an interesting opportunity to examine comprehension differences in the context of skilled acoustic perception.
Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context.
Speaking involves coordination of multiple neuromotor systems, including respiration, phonation, and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1) help orchestrate the neuromotor control needed for speaking, including cortical and subcortical regions.
This study examines cross-modality effects of a semantically biased written sentence context on the perception of an acoustically ambiguous word target, identifying neural areas sensitive to interactions between sentential bias and phonetic ambiguity. Of interest is whether the locus or nature of the interactions resembles those previously demonstrated for auditory-only effects. fMRI results show significant interaction effects in right mid-middle temporal gyrus (RmMTG) and bilateral anterior superior temporal gyri (aSTG), regions along the ventral language comprehension stream that map sound onto meaning.
Research has implicated the left inferior frontal gyrus (LIFG) in mapping acoustic-phonetic input to sound category representations, both in native speech perception and non-native phonetic category learning. At issue is whether this sensitivity reflects access to phonetic category information per se or to explicit category labels, the latter often being required by experimental procedures. The current study employed an incidental learning paradigm designed to increase sensitivity to a difficult non-native phonetic contrast without inducing explicit awareness of the categorical nature of the stimuli.
In spoken word recognition, subphonemic variation influences lexical activation, with sounds near a category boundary increasing phonetic competition as well as lexical competition. The current study investigated the interplay of these factors using a visual world task in which participants were instructed to look at a picture of an auditory target (e.g.
Prior research has shown that the perception of degraded speech is influenced by within-sentence meaning and recruits one or more components of a frontal-temporal-parietal network. The goal of the current study is to examine whether the overall conceptual meaning of a sentence, made up of one set of words, influences the perception of a second acoustically degraded sentence, made up of a different set of words. Using functional magnetic resonance imaging (fMRI), we presented an acoustically clear sentence followed by an acoustically degraded sentence and manipulated the semantic relationship between them: Related in meaning (but consisting of different content words), Unrelated in meaning, or Same.
J Exp Psychol Hum Percept Perform
July 2016
When listeners encounter speech under adverse listening conditions, adaptive adjustments in perception can improve comprehension over time. In some cases, these adaptive changes require the presence of external information that disambiguates the distorted speech signals, whereas in other cases mere exposure is sufficient. Both external (e.
Human speech perception rapidly adapts to maintain comprehension under adverse listening conditions. For example, with exposure listeners can adapt to heavily accented speech produced by a non-native speaker. Outside the domain of speech perception, adaptive changes in sensory and motor processing have been attributed to cerebellar functions.
Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning.
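Learning driven by prediction error can be illustrated with a minimal delta-rule (Rescorla-Wagner-style) update; this is a generic textbook sketch, not the specific model the article proposes:

```python
def delta_rule_update(weight, cue, outcome, learning_rate=0.1):
    """One delta-rule step: adjust the weight in proportion to the
    prediction error (observed outcome minus current prediction)."""
    prediction = weight * cue
    error = outcome - prediction
    return weight + learning_rate * error * cue

# repeated cue-outcome pairings drive the prediction error toward zero
w = 0.0
for _ in range(100):
    w = delta_rule_update(w, cue=1.0, outcome=1.0)
print(round(w, 3))  # → 1.0
```

The key property for adaptive perception is that updates are proportional to the mismatch between expectation and input, so learning slows as predictions improve.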
The current study explored how factors of acoustic-phonetic and lexical competition affect access to the lexical-semantic network during spoken word recognition. An auditory semantic priming lexical decision task was presented to subjects while in the MR scanner. Prime-target pairs consisted of prime words with the initial voiceless stop consonants /p/, /t/, and /k/ followed by word and nonword targets.
Listeners' perception of acoustically presented speech is constrained by many different sources of information that arise from other sensory modalities and from more abstract higher-level language context. An open question is how perceptual processes are influenced by and interact with these other sources of information. In this study, we use fMRI to examine the effect of a prior sentence fragment meaning on the categorization of two possible target words that differ in an acoustic-phonetic feature of the initial consonant, voice onset time (VOT).
Neuropsychological findings together with recent advances in neuroanatomical and neuroimaging techniques have spurred the investigation of cerebellar contributions to cognition. One cognitive process that has been the focus of much research is working memory, in particular its verbal component. Influenced by Baddeley's cognitive theory of working memory, cerebellar activation during verbal working memory tasks has been predominantly attributed to the cerebellum's involvement in an articulatory rehearsal network.