Adults benefit more from visual speech in speech maskers than in noise maskers because visual speech helps perceptually isolate target talkers from competing talkers. To investigate whether children use visual speech to perceptually isolate target talkers, this study compared children's speech recognition thresholds in auditory and audiovisual conditions across two maskers: two-talker speech and noise. Children demonstrated similar audiovisual benefit in both maskers. Individual differences in speechreading accuracy predicted audiovisual benefit in each masker to a similar degree. Results suggest that although visual speech improves children's masked speech recognition thresholds, children may use visual speech in different ways than adults.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7731949 | PMC
http://dx.doi.org/10.1121/10.0001867 | DOI Listing
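The key quantities in the abstract above can be made concrete. Below is a minimal sketch, in Python with simulated data, of audiovisual benefit computed as the auditory-minus-audiovisual threshold difference and correlated with speechreading accuracy; all variable names and values are illustrative, not the study's data.

```python
# A minimal sketch of the comparison described above, on simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 30                                          # hypothetical child listeners
speechread = rng.uniform(0.0, 1.0, n)           # proportion of words speechread
srt_a = rng.normal(-5.0, 2.0, n)                # auditory SRT (dB SNR)
# Simulate audiovisual SRTs that improve with speechreading skill
srt_av = srt_a - 2.0 - 3.0 * speechread + rng.normal(0.0, 0.5, n)

av_benefit = srt_a - srt_av                     # positive = visual speech helps (dB)
r, p = pearsonr(speechread, av_benefit)         # skill vs. benefit association
print(f"mean benefit {av_benefit.mean():.1f} dB, r = {r:.2f}, p = {p:.3g}")
```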
Age-related hearing loss (ARHL) is considered one of the most common neurodegenerative disorders in the elderly; however, how it contributes to cognitive decline is poorly understood. Using resting-state functional magnetic resonance imaging data from 66 individuals with ARHL and 54 healthy controls, group spatial independent component analysis, sliding-window analyses, graph-theory methods, multilayer networks, and correlation analyses were applied to identify ARHL-induced disturbances in static and dynamic functional network connectivity (sFNC/dFNC), alterations in global network switching, and their links to cognitive performance. ARHL was associated with decreased sFNC/dFNC within the default mode network (DMN) and increased sFNC/dFNC between the DMN and the central executive, salience (SN), and visual networks.
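One core step of that pipeline, sliding-window dFNC, is straightforward to sketch. The snippet below computes windowed correlation matrices over independent-component time series; the window length, step size, and toy data are illustrative assumptions, not the study's parameters.

```python
# A minimal sketch of sliding-window dynamic functional network
# connectivity (dFNC) over ICA component time series.
import numpy as np

def sliding_window_fnc(ts, win_len=30, step=1):
    """ts: (n_timepoints, n_components) component time series.
    Returns (n_windows, n_components, n_components) Pearson
    correlation matrices, one per window."""
    n_t, _ = ts.shape
    mats = []
    for start in range(0, n_t - win_len + 1, step):
        window = ts[start:start + win_len]
        mats.append(np.corrcoef(window.T))   # pairwise correlations
    return np.stack(mats)

# Toy data: 200 timepoints, 10 independent components
rng = np.random.default_rng(0)
dfnc = sliding_window_fnc(rng.standard_normal((200, 10)))
print(dfnc.shape)  # (171, 10, 10)
```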
Hum Brain Mapp
January 2025
Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada.
Perception and production of music and speech rely on auditory-motor coupling, a mechanism that has been linked to temporally precise oscillatory coupling between auditory and motor regions of the human brain, particularly in the beta frequency band. Recently, brain imaging studies using magnetoencephalography (MEG) have also shown that accurate auditory temporal predictions specifically depend on phase coherence between auditory and motor cortical regions. However, it is not yet clear whether this tight oscillatory phase coupling is an intrinsic feature of the auditory-motor loop, or whether it is only elicited by task demands.
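Phase coherence of this kind is commonly quantified with the phase-locking value (PLV). Below is a minimal sketch, assuming band-pass filtering to the beta band and Hilbert-derived instantaneous phases; the band edges, filter order, and simulated signals are illustrative, not those of the MEG studies cited.

```python
# A minimal sketch of beta-band phase coherence between two signals
# via the phase-locking value (PLV): 0 = no coupling, 1 = perfect.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_plv(x, y, fs, band=(15.0, 30.0)):
    """Phase-locking value between signals x and y in the beta band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))   # instantaneous phase of x
    phy = np.angle(hilbert(filtfilt(b, a, y)))   # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Two noisy signals sharing a 20 Hz rhythm with a fixed phase offset
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
print(round(beta_plv(x, y, fs), 3))  # near 1 for the shared 20 Hz rhythm
```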
Eur Arch Otorhinolaryngol
January 2025
Librarian, Vidyavardhaka Law College, Mysore, Karnataka, 570 001, India.
Purpose: Research on vestibular function tests has advanced significantly over the past century. This study aims to evaluate research productivity, identify top contributors, and assess global collaboration to provide a comprehensive overview of trends and advancements in the field.
Method: A scientometric analysis was conducted using publications from the Scopus database, retrieved on January 5, 2024.
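Tallies of this kind are simple to reproduce from a Scopus CSV export. The sketch below counts publications per year and the most productive source titles; the file name is hypothetical, and the column headers ("Year", "Source title") follow Scopus's standard export format but are assumptions here.

```python
# A minimal sketch of basic scientometric tallies from a Scopus export.
import csv
from collections import Counter

years, sources = Counter(), Counter()
with open("scopus_vestibular.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        years[row["Year"]] += 1            # publications per year
        sources[row["Source title"]] += 1  # publications per journal

print(sorted(years.items()))       # publication trend over time
print(sources.most_common(10))     # most productive source titles
```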
J Voice
January 2025
School of Behavioral and Brain Sciences, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, University of Texas at Dallas, Richardson, TX; Department of Otolaryngology - Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, TX.
Introduction: Patients with primary muscle tension dysphonia (pMTD) commonly report symptoms of vocal effort, fatigue, discomfort, odynophonia, and aberrant vocal quality (e.g., vocal strain, hoarseness). However, the voice symptoms most salient to pMTD have not been identified. Furthermore, it is unclear how standard vocal fatigue and vocal tract discomfort indices that capture persistent symptoms, such as the Vocal Fatigue Index (VFI) and the Vocal Tract Discomfort Scale (VTDS), relate to acute symptoms experienced at the time of the voice evaluation.
eNeuro
January 2025
Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany.
A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments, specifically the types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including discrete sound onsets, the envelope, and the mel-spectrogram.
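Two of those acoustic representations are simple to compute from a recording. The sketch below extracts a broadband envelope (via the Hilbert transform) and a log mel-spectrogram; librosa is assumed as the audio front end, and the file name, sampling rate, and spectrogram parameters are illustrative, not those used in the study.

```python
# A minimal sketch of two acoustic representations named above:
# the broadband envelope and a log mel-spectrogram.
import numpy as np
import librosa
from scipy.signal import hilbert

y, sr = librosa.load("scene.wav", sr=16000)    # hypothetical recording

envelope = np.abs(hilbert(y))                  # broadband amplitude envelope
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64,
                                     n_fft=512, hop_length=160)
log_mel = librosa.power_to_db(mel)             # (64, n_frames), dB scale

print(envelope.shape, log_mel.shape)
```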