Publications by authors named "Jessica Monaghan"

Humans make use of small differences in the timing of sounds at the two ears, known as interaural time differences (ITDs), to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex.
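The ITD cue itself can be illustrated with a minimal cross-correlation estimator, a standard signal-processing sketch rather than the MEG/EEG analysis used in the study; the signals and sampling rate below are hypothetical:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (ITD) between two ear
    signals from the peak of their cross-correlation.
    Positive values mean the sound arrived at the left ear first."""
    corr = np.correlate(left, right, mode="full")
    # Lag axis for 'full' mode: from -(len(right)-1) to len(left)-1
    lags = np.arange(-len(right) + 1, len(left))
    best_lag = lags[np.argmax(corr)]
    return -best_lag / fs  # convert samples to seconds

# Hypothetical example: a noise burst reaching the right ear 0.5 ms late
fs = 48000
rng = np.random.default_rng(0)
sound = rng.standard_normal(1024)
delay = int(0.0005 * fs)  # 24 samples
left = np.concatenate([sound, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), sound])
print(estimate_itd(left, right, fs))  # ~0.0005 s
```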

Measurement of brain functional connectivity has become a dominant approach to explore the interaction dynamics between brain regions of subjects under examination. Conventional functional connectivity measures largely originate from deterministic models based on empirical analysis, usually demanding application-specific settings (e.g.

Analysis of neuroimaging data (e.g., Magnetic Resonance Imaging, structural and functional MRI) plays an important role in monitoring brain dynamics and probing brain structures.

EEG-based tinnitus classification is a valuable tool for tinnitus diagnosis, research, and treatment. Most current work is limited to a single dataset in which data patterns are similar. EEG signals, however, are highly non-stationary, resulting in poor model generalization to new users, sessions, or datasets.

With the development of digital technology, machine learning has paved the way for the next generation of tinnitus diagnoses. Although machine learning has been widely applied in EEG-based tinnitus analysis, most current models are dataset-specific. Each dataset may be limited to a specific range of symptoms, overall disease severity, and demographic attributes; further, dataset formats may differ, impacting model performance.

Sensory deprivation can lead to cross-modal cortical changes, whereby sensory brain regions deprived of input may be recruited to perform atypical functions. Enhanced cross-modal responses to visual stimuli observed in the auditory cortex of postlingually deaf cochlear implant (CI) users are hypothesized to reflect increased activation of cortical language regions, but it is unclear whether this cross-modal activity is "adaptive" or "mal-adaptive" for speech understanding. To determine whether increased activation of language regions is correlated with better speech understanding in CI users, we used functional near-infrared spectroscopy to measure hemodynamic responses and assessed task-related activation and functional connectivity of auditory and visual cortices to auditory and visual speech and non-speech stimuli in CI users (n = 14) and normal-hearing listeners (n = 17).

Millions of people around the world have difficulty hearing. Hearing aids and cochlear implants help people hear better, especially in quiet places. Unfortunately, these devices do not always help in noisy situations like busy classrooms or restaurants.

Modern neuroimaging techniques enable us to construct human brains as brain networks or connectomes. Capturing brain networks' structural information and hierarchical patterns is essential for understanding brain functions and disease states. Recently, the promising representation learning capability of graph neural networks (GNNs) has prompted the development of GNN-based methods for brain network analysis.

Cochlear implants (CIs) convey the amplitude envelope of speech by modulating high-rate pulse trains. However, not all of the envelope may be necessary to perceive amplitude modulations (AMs); the effective envelope depth may be limited by forward and backward masking from the envelope peaks. Three experiments used modulated pulse trains to measure which portions of the envelope can be effectively processed by CI users as a function of AM frequency.

Article Synopsis
  • People can listen better in noisy places, like a cocktail party, by focusing on important sounds instead of background noise.
  • The study looks at how different parts of our hearing system help us understand speech when it's hard to hear, like when a cochlear implant is used or when there's a lot of noise in the background.
  • There are two ways our brain helps us hear better: one way boosts important speech sounds directly in the ear, and another way uses higher brain areas to filter out the background noise.

Electroencephalogram (EEG)-based neurofeedback has been widely studied for tinnitus therapy in recent years. Most existing research relies on experts' cognitive prediction, and studies based on machine learning and deep learning are either data-hungry or not well generalizable to new subjects. In this paper, we propose a robust, data-efficient model for distinguishing tinnitus from the healthy state based on EEG-based tinnitus neurofeedback.

Brain signals refer to the biometric information collected from the human brain. Research on brain signals aims to discover the underlying neurological or physical status of individuals through signal decoding. Emerging deep learning techniques have significantly improved the study of brain signals in recent years.

Many individuals with seemingly normal hearing abilities struggle to understand speech in noisy backgrounds. To understand why this might be the case, we investigated the neural representation of speech in the auditory midbrain of gerbils with "hidden hearing loss" through noise exposure that increased hearing thresholds only temporarily. In noise-exposed animals, we observed significantly increased neural responses to speech stimuli, with a more pronounced increase at moderate than at high sound intensities.

Throughout the eighteenth century the issue of authenticity shaped portrayals of fashionable diseases. From the very beginning of the century, writers satirized the behavior of elite invalids who paraded their delicacy as a sign of their status. As disorders such as the spleen came to be regarded as "fashionable," the legitimacy of patients' claims to suffer from distinguished diseases was called further into question, with some observers questioning the validity of the disease categories themselves.

Objective: Processing delay is one of the important factors limiting the development of novel algorithms for hearing devices. In this study, both normal-hearing listeners and listeners with hearing loss were tested for their tolerance of processing delays of up to 50 ms using a real-time setup for own-voice and external-voice conditions, based on linear processing to avoid confounding effects of time-dependent gain.

Design: Participants rated their perceived subjective annoyance for each condition on a 7-point Likert scale.

Interaural time differences (ITDs) conveyed by the modulated envelopes of high-frequency sounds can serve as a cue for localizing a sound source. Klein-Hennig et al. (129: 3856, 2011) identified the envelope attack (the rate at which stimulus energy in the envelope increases) and the duration of the pause (the interval between successive envelope pulses) as important factors affecting sensitivity to envelope ITDs in human listeners.

Machine-learning based approaches to speech enhancement have recently shown great promise for improving speech intelligibility for hearing-impaired listeners. Here, the performance of three machine-learning algorithms and one classical algorithm, Wiener filtering, was compared. Two algorithms based on neural networks were examined, one using a previously reported feature set and one using a feature set derived from an auditory model.
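For context, the classical Wiener filter applies a per-frequency gain G = SNR / (1 + SNR), so bins dominated by speech pass nearly unchanged while noise-dominated bins are attenuated. The sketch below is a generic illustration of that gain rule, not the specific implementation compared in the study; the toy spectra are hypothetical:

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=1e-3):
    """Per-bin Wiener gain G = SNR / (1 + SNR), with the SNR estimated
    by power subtraction and clamped at zero; a small gain floor
    avoids muting bins entirely."""
    snr = np.maximum(noisy_power - noise_power, 0.0) / np.maximum(noise_power, 1e-12)
    return np.maximum(snr / (1.0 + snr), floor)

# Toy power spectra: bins 0 and 3 contain speech energy,
# bins 1 and 2 are noise-dominated
noisy = np.array([10.0, 1.2, 1.1, 5.0])
noise = np.array([1.0, 1.0, 1.0, 1.0])
g = wiener_gain(noisy, noise)  # high gain where SNR is high
```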

Speech understanding in noisy environments is still one of the major challenges for cochlear implant (CI) users in everyday life. We evaluated a speech enhancement algorithm based on neural networks (NNSE) for improving speech intelligibility in noise for CI users. The algorithm decomposes the noisy speech signal into time-frequency units, extracts a set of auditory-inspired features, and feeds them to the neural network to estimate which frequency channels contain more perceptually important information (higher signal-to-noise ratio, SNR).
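The final masking stage of such a pipeline can be sketched as below. This is a minimal binary-mask illustration under the assumption that a network has already produced per-unit SNR estimates; the arrays, threshold, and function name are all hypothetical, and real systems often use soft gains instead of a hard mask:

```python
import numpy as np

def apply_channel_mask(noisy_tf, snr_db, threshold_db=0.0):
    """Zero out time-frequency units whose estimated SNR falls below
    the threshold; units above it are passed through unchanged."""
    mask = (snr_db > threshold_db).astype(noisy_tf.dtype)
    return noisy_tf * mask

# Toy example: 2 frequency channels x 3 time frames; `snr_db` stands
# in for the neural network's per-unit SNR estimate
noisy_tf = np.array([[0.8, 0.9, 0.2],
                     [0.1, 0.7, 0.3]])
snr_db = np.array([[6.0, 3.0, -9.0],
                   [-12.0, 2.0, -3.0]])
cleaned = apply_channel_mask(noisy_tf, snr_db)
```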

The ability of normal-hearing (NH) listeners to exploit interaural time difference (ITD) cues conveyed in the modulated envelopes of high-frequency sounds is poor compared to ITD cues transmitted in the temporal fine structure at low frequencies. Sensitivity to envelope ITDs is further degraded when envelopes become less steep, when modulation depth is reduced, and when envelopes become less similar between the ears, common factors when listening in reverberant environments. The vulnerability of envelope ITDs is particularly problematic for cochlear implant (CI) users, as they rely on information conveyed by slowly varying amplitude envelopes.

Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and in the modulated envelopes of high-frequency sounds is considered comparable, particularly for envelopes shaped to transmit temporal information with fidelity similar to that normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor, to the point of discrimination thresholds being unattainable, compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing are carrier-frequency dependent.

At high frequencies, interaural time differences (ITDs) are conveyed by the sound envelope. Sensitivity to envelope ITDs depends crucially on the envelope shape. Reverberation degrades the envelope shape, reducing the modulation depth of the envelope and the slope of its flanks.
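The degradation described above can be demonstrated numerically: smearing an amplitude envelope with a decaying reverberant tail reduces its modulation depth. This is an illustrative toy model, assuming a simple exponential tail, not the stimuli used in the study:

```python
import numpy as np

def modulation_depth(env):
    """Modulation depth m = (max - min) / (max + min) of an envelope."""
    return (env.max() - env.min()) / (env.max() + env.min())

# A fully modulated 40-Hz sinusoidal envelope (depth = 1)
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
env = 0.5 * (1 + np.sin(2 * np.pi * 40 * t))

# Crude reverberation model: convolve the envelope with an
# exponentially decaying tail, normalized to preserve overall level
tail = np.exp(-np.arange(0, 0.05, 1 / fs) / 0.01)
reverb_env = np.convolve(env, tail / tail.sum())[:len(env)]

# Compare depths on the steady-state portion (skip the onset transient)
steady = slice(len(tail), None)
clean_depth = modulation_depth(env[steady])
reverb_depth = modulation_depth(reverb_env[steady])
```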

Adult mice are highly vocal animals, with both males and females vocalizing in same-sex and cross-sex social encounters. Mouse pups are also highly vocal, producing isolation vocalizations when they are cold or removed from the nest. This study examined patterns in the development of pup isolation vocalizations and compared these to adult vocalizations.

This paper investigates the theoretical basis for estimating vocal-tract length (VTL) from the formant frequencies of vowel sounds. A statistical inference model was developed to characterize the relationship between vowel type and VTL, on the one hand, and formant frequency and vocal cavity size, on the other. The model was applied to two well-known developmental studies of formant frequency.
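The textbook starting point for this relationship is the uniform quarter-wavelength resonator, where F_k = (2k - 1)c / (4L). The sketch below inverts that relation to get a VTL estimate per formant; it is the classic tube approximation, not the paper's statistical inference model, and the example formant values are hypothetical:

```python
# Quarter-wavelength resonances of a uniform tube closed at the
# glottis: F_k = (2k - 1) * c / (4 * L), so each measured formant
# yields an estimate L = (2k - 1) * c / (4 * F_k).
C = 35000.0  # approximate speed of sound in warm, moist air, cm/s

def vtl_from_formants(formants_hz):
    """Average the per-formant vocal-tract length estimates (in cm)."""
    estimates = [(2 * k - 1) * C / (4.0 * f)
                 for k, f in enumerate(formants_hz, start=1)]
    return sum(estimates) / len(estimates)

# Hypothetical neutral-vowel formants for an adult male talker
print(round(vtl_from_formants([500.0, 1500.0, 2500.0]), 1))  # 17.5 cm
```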
