People regularly communicate in complex environments, requiring them to flexibly shift their attention across multiple sources of sensory information. Increasing recruitment of the executive functions that support successful speech comprehension in these multitasking settings is thought to contribute to the sense of effort that listeners often experience. One common research method employed to quantify listening effort is the dual-task paradigm in which individuals recognize speech and concurrently perform a secondary (often visual) task.
Brain differences linked to autism spectrum disorder (ASD) can manifest before observable symptoms. Studying these early neural precursors in larger and more diverse cohorts is crucial for advancing our understanding of developmental pathways and potentially facilitating earlier identification. EEG is an ideal tool for investigating early neural differences in ASD, given its scalability and high tolerability in infant populations.
The ability to leverage visual cues in speech perception - especially in noisy backgrounds - is well established from infancy to adulthood. Yet, the developmental trajectory of audiovisual benefits remains a topic of debate. The inconsistency in findings can be attributed to relatively small sample sizes or tasks that are not appropriate for the age groups tested.
As reading is inherently a multisensory, audiovisual (AV) process where visual symbols (i.e., letters) are connected to speech sounds, the question has been raised whether individuals with reading difficulties, like children with developmental dyslexia (DD), have broader impairments in multisensory processing.
Background: Functional near-infrared spectroscopy (fNIRS) is a viable non-invasive technique for functional neuroimaging in the cochlear implant (CI) population; however, the effects of acoustic stimulus features on the fNIRS signal have not been thoroughly examined. This study examined the effect of stimulus level on fNIRS responses in adults with normal hearing or bilateral CIs. We hypothesized that fNIRS responses would correlate with both stimulus level and subjective loudness ratings, but that the correlation would be weaker with CIs due to the compression of acoustic input to electric output.
Auditory processing differences, including hyper- or hyposensitivity to sound, aversions to sound, and difficulty listening under noisy, real-world conditions, are commonly reported in autistic individuals. However, the developmental course and functional impact of these auditory processing differences are unclear. In this study, we investigate the prevalence, developmental trajectory, and functional impact of auditory processing differences in autistic children throughout childhood using a longitudinal study design.
Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder that impairs the control of attention and behavioral inhibition in affected individuals. Recent genome-wide association findings have revealed an association between glutamate and GABA gene sets and ADHD symptoms. Consistently, people with ADHD show altered glutamate and GABA content in the brain circuitry that is important for attention control function.
J Speech Lang Hear Res
December 2021
Purpose: It is generally accepted that adults use visual cues to improve speech intelligibility in noisy environments, but findings regarding visual speech benefit in children are mixed. We explored factors that contribute to audiovisual (AV) gain in young children's speech understanding. We examined whether there is an AV benefit to speech-in-noise recognition in children in first grade and whether visual salience of phonemes influences their AV benefit.
Functional near-infrared spectroscopy (fNIRS) is an increasingly popular tool in auditory research, but the range of analysis procedures employed across studies may complicate the interpretation of data. We aim to assess the impact of different analysis procedures on the morphology, detection, and lateralization of auditory responses in fNIRS. Specifically, we determine whether averaging or generalized linear model (GLM)-based analysis generates different experimental conclusions when applied to a block-protocol design.
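To make the averaging-versus-GLM contrast concrete, here is a minimal simulation of a single fNIRS channel under a block protocol, analyzed both ways. Every parameter below (10 Hz sampling, 15-s blocks every 45 s, a single-gamma HRF, linear drift, noise level) is a hypothetical choice for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10.0                                  # sampling rate in Hz (assumed)
dur = 300.0                                # 5 minutes of data
t = np.arange(int(fs * dur)) / fs

# Hypothetical block protocol: 15-s stimulation blocks every 45 s
onsets = np.arange(15.0, dur - 30.0, 45.0)
boxcar = np.zeros(t.size)
for o in onsets:
    boxcar[(t >= o) & (t < o + 15.0)] = 1.0

# Simplified single-gamma hemodynamic response function (peaks near 5 s)
th = np.arange(0.0, 30.0, 1.0 / fs)
hrf = th**5 * np.exp(-th)
hrf /= hrf.sum()

regressor = np.convolve(boxcar, hrf)[: t.size]
true_amp = 2.0                             # simulated response amplitude (a.u.)
signal = true_amp * regressor + 0.5 * t / dur + rng.normal(0, 0.3, t.size)

# (1) Block averaging: epoch -5..+30 s around each onset, baseline-correct
epochs = []
for o in onsets:
    i = int(o * fs)
    ep = signal[i - int(5 * fs): i + int(30 * fs)]
    epochs.append(ep - ep[: int(5 * fs)].mean())
avg_response = np.mean(epochs, axis=0)

# (2) GLM: regress the signal on the convolved boxcar, an intercept,
# and a linear drift term
X = np.column_stack([regressor, np.ones(t.size), t])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)

print(f"GLM amplitude estimate: {beta[0]:.2f} (simulated value {true_amp})")
print(f"Peak of block-averaged response: {avg_response.max():.2f}")
```

With clean, well-spaced blocks the two procedures largely agree; the divergences at issue arise when drift, overlapping responses, or serial correlations violate the assumptions behind simple averaging.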
Background: Atypical behavioral responses to sensation are reported in a large proportion of children affected by prenatal alcohol exposure (PAE). Systematic examination of symptoms across the fetal alcohol spectrum in a large clinical sample is needed to inform diagnosis and intervention.
Aims: To describe the prevalence and patterns of atypical sensory processing symptoms in a clinical sample of children with PAE.
Purpose: Data from standardized caregiver questionnaires indicate that children with fetal alcohol spectrum disorders (FASDs) frequently exhibit atypical auditory behaviors, including reduced responsivity to spoken stimuli. Another body of evidence suggests that prenatal alcohol exposure may result in auditory dysfunction involving loss of audibility (i.e.
J Assoc Res Otolaryngol
April 2019
Active listening involves dynamically switching attention between competing talkers and is essential to following conversations in everyday environments. Previous investigations in human listeners have examined the neural mechanisms that support switching auditory attention within the acoustic featural cues of pitch and auditory space. Here, we explored the cortical circuitry underlying endogenous switching of auditory attention between pitch and spatial cues necessary to discern target from masker words.
Lang Cogn Neurosci
January 2019
This paper describes a technique to assess the correspondence between patterns of similarity in the brain's response to speech sounds and the patterns of similarity encoded in phonological feature systems, by quantifying the recoverability of phonological features from the neural data using supervised learning. The technique is applied to EEG recordings collected during passive listening to consonant-vowel syllables. Three published phonological feature systems are compared, and are shown to differ in their ability to recover certain speech sound contrasts from the neural data.
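The core idea, that a feature is "recoverable" if a supervised classifier can predict it from the neural response, can be sketched with simulated data. Everything below (channel count, the voicing labels, effect sizes, the nearest-centroid classifier) is hypothetical and stands in for the paper's actual EEG recordings and learning procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 8 consonant-vowel syllables, each labeled with a
# binary value of one phonological feature ("voicing"; labels illustrative)
voiced = np.array([0, 1, 0, 1, 0, 1, 0, 1])
n_ch, n_trials = 32, 40                    # channels, trials per syllable

# Simulate EEG patterns: a syllable-specific topography plus a small
# additive component shared by all voiced syllables
voicing_pattern = rng.normal(0, 1, n_ch)
X, y = [], []
for v in voiced:
    base = rng.normal(0, 1, n_ch)          # this syllable's topography
    for _ in range(n_trials):
        X.append(base + v * voicing_pattern + rng.normal(0, 1.0, n_ch))
        y.append(v)
X, y = np.array(X), np.array(y)

# Recoverability: train a nearest-centroid classifier on half the trials
# and measure accuracy on the held-out half
idx = rng.permutation(y.size)
train, test = idx[: y.size // 2], idx[y.size // 2:]
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
accuracy = (dists.argmin(axis=1) == y[test]).mean()
print(f"voicing decoding accuracy: {accuracy:.2f}")
```

Above-chance accuracy for a feature is evidence that the neural similarity structure encodes it; repeating this per feature and comparing across feature systems is the comparison the paper formalizes.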
Pupillometry has emerged as a useful tool for studying listening effort. Past work involving listeners with normal audiological thresholds has shown that switching attention between competing talker streams evokes pupil dilation indicative of listening effort [McCloy, Lau, Larson, Pratt, and Lee (). J.
Speech is an ecologically essential signal, whose processing crucially involves the subcortical nuclei of the auditory brainstem, but there are few experimental options for studying these early responses in human listeners under natural conditions. While encoding of continuous natural speech has been successfully probed in the cortex with neurophysiological tools such as electroencephalography (EEG) and magnetoencephalography, the rapidity of subcortical response components combined with unfavorable signal-to-noise ratios has prevented application of those methods to the brainstem. Instead, experiments have used thousands of repetitions of simple stimuli such as clicks, tone-bursts, or brief spoken syllables, with deviations from those paradigms leading to ambiguity in the neural origins of measured responses.
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex.
Successful speech communication often requires selective attention to a target stream amidst competing sounds, as well as the ability to switch attention among multiple interlocutors. However, auditory attention switching negatively affects both target detection accuracy and reaction time, suggesting that attention switches carry a cognitive cost. Pupillometry is one method of assessing mental effort or cognitive load.
Analysis of pupil dilation has been used as an index of attentional effort in the auditory domain. Previous work has modeled the pupillary response to attentional effort as a linear time-invariant system with a characteristic impulse response, and used deconvolution to estimate the attentional effort that gives rise to changes in pupil size. Here it is argued that one parameter of the impulse response (the latency of response maximum, t(max)) has been mis-estimated in the literature; a different estimate is presented, and it is shown how deconvolution with this value of t(max) yields more intuitively plausible and informative results.
Trends Neurosci
February 2016
Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli.
Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear: some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations.
Objective: Brain-computer interfaces (BCIs) represent a technology with the potential to rehabilitate a range of traumatic and degenerative nervous system conditions but require a time-consuming training process to calibrate. An area of BCI research known as transfer learning is aimed at accelerating training by recycling previously recorded training data across sessions or subjects. Training data, however, is typically transferred from one electrode configuration to another without taking individual head anatomy or electrode positioning into account, which may underutilize the recycled data.
In noisy settings, listening is aided by correlated dynamic visual cues gleaned from a talker's face, an improvement often attributed to visually reinforced linguistic information. In this study, we aimed to test the effect of audio-visual temporal coherence alone on selective listening, free of linguistic confounds. We presented listeners with competing auditory streams whose amplitude varied independently and a visual stimulus with varying radius, while manipulating the cross-modal temporal relationships.
The right inferior frontal cortex (rIFC) is specifically associated with attentional control via the inhibition of behaviorally irrelevant stimuli and motor responses. Similarly, recent evidence has shown that alpha (7-14 Hz) and beta (15-29 Hz) oscillations in primary sensory neocortical areas are enhanced in the representation of non-attended stimuli, leading to the hypothesis that allocation of these rhythms plays an active role in optimal inattention. Here, we tested the hypothesis that selective synchronization between rIFC and primary sensory neocortex occurs in these frequency bands during inattention.
Modern neuroimaging techniques enable non-invasive observation of ongoing neural processing, with magnetoencephalography (MEG) in particular providing direct measurement of neural activity with millisecond time resolution. However, accurately mapping measured MEG sensor readings onto the underlying source neural structures remains an active area of research. This so-called "inverse problem" is ill posed, and poses a challenge for source estimation that is often cited as a drawback limiting MEG data interpretation.
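A standard response to this ill-posedness is to impose a prior, for example the regularized minimum-norm estimate. The toy model below uses a random lead-field matrix and a single active source purely for illustration; real lead fields come from a subject's head model, and none of the numbers here come from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: 64 sensors, 500 candidate cortical sources
n_sensors, n_sources = 64, 500
G = rng.normal(0, 1, (n_sensors, n_sources))   # illustrative lead field

# Ground truth: one active source produces the sensor readings
x_true = np.zeros(n_sources)
x_true[123] = 1.0
y = G @ x_true + rng.normal(0, 0.05, n_sensors)

# Recovering 500 unknowns from 64 measurements is ill posed; the
# minimum-norm estimate picks the smallest-norm source pattern
# consistent with the data:
#   x_hat = G.T @ inv(G @ G.T + lam * I) @ y
lam = 1.0
x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

print("strongest estimated source index:", int(np.abs(x_hat).argmax()))
```

Even when the peak lands on the true source, the estimate spreads activity across many sources, which is one reason MEG inverse solutions must be interpreted with care.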