In face-to-face speech communication, the listener extracts and integrates information from the acoustic and optic speech signals. Integration occurs within the auditory modality (i.e., across the acoustic frequency spectrum) and across sensory modalities (i.e., across the acoustic and optic signals). The difficulties experienced by some hearing-impaired listeners in understanding speech could be attributed to losses in the extraction of speech information, in the integration of speech cues, or both. The present study evaluated the ability of normal-hearing and hearing-impaired listeners to integrate speech information within and across sensory modalities to determine the degree to which integration efficiency may be a factor in the performance of hearing-impaired listeners. Auditory-visual nonsense syllables consisting of eighteen medial consonants surrounded by the vowel [a] were processed into four nonoverlapping acoustic filter bands between 300 and 6000 Hz. A variety of one-, two-, three-, and four-filter-band combinations were presented for identification in auditory-only and auditory-visual conditions; a visual-only condition was also included. Integration efficiency was evaluated using a model of optimal integration. Results showed that normal-hearing and hearing-impaired listeners integrated information across the auditory and visual sensory modalities with a high degree of efficiency, independent of differences in auditory capabilities. However, across-frequency integration for auditory-only input was less efficient for hearing-impaired listeners. These individuals exhibited particular difficulty extracting information from the highest-frequency band (4762-6000 Hz) when speech information was presented concurrently in the next-lower band (1890-2381 Hz). Results suggest that integration of speech information within the auditory modality, but not across the auditory and visual modalities, affects speech understanding in hearing-impaired listeners.
DOI: http://dx.doi.org/10.1121/1.2405859
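To make the stimulus-processing step concrete, here is a minimal sketch of splitting a speech signal into four nonoverlapping band-pass channels with SciPy. Only the 1890-2381 Hz and 4762-6000 Hz band edges are stated in the abstract; the two lower bands, the filter order, and the sampling rate are assumptions chosen for illustration.

```python
# Minimal sketch: split a signal into four nonoverlapping band-pass
# channels, as in the study's stimulus processing. The two lower band
# edges below are hypothetical; only bands 3 and 4 appear in the abstract.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 22050  # assumed sampling rate (Hz); not stated in the abstract
BANDS = [(300, 563), (953, 1500), (1890, 2381), (4762, 6000)]  # (lo, hi) Hz

def band_split(signal, fs=FS, order=6):
    """Return one band-limited copy of `signal` per filter band."""
    channels = []
    for lo, hi in BANDS:
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        channels.append(sosfiltfilt(sos, signal))  # zero-phase filtering
    return channels

# Example: a two-band combination (bands 3 + 4), analogous to one of the
# multi-band conditions presented for identification.
x = np.random.randn(FS)  # stand-in for a 1-s vowel-consonant-vowel token
ch = band_split(x)
stimulus = ch[2] + ch[3]
```

In optimal-integration frameworks of this kind, efficiency is typically scored by comparing observed accuracy in a multi-band (or auditory-visual) condition against the accuracy an ideal combiner would predict from the corresponding single-channel scores; a ratio near 1 indicates near-optimal integration.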
Lang Speech
January 2025
Department of Educational Psychology, Leadership, & Counseling, Texas Tech University, USA.
Adapting one's speaking style is particularly crucial as children start interacting with diverse conversational partners in various communication contexts. The study investigated the capacity of preschool children aged 3-5 years (n = 28) to modify their speaking styles in response to background noise, referred to as noise-adapted speech, and when talking to an interlocutor who pretended to have hearing loss, referred to as clear speech. We examined how the two modified speaking styles differed across the age range.
Eur Arch Otorhinolaryngol
January 2025
Department of Otolaryngology, China-Japan Friendship Hospital, Beijing, China.
Objectives: This study examined the relationship between electrophysiological measures of the electrically evoked auditory brainstem response (EABR) and speech perception measured in quiet after cochlear implantation (CI), to assess the ability of the EABR to predict postoperative CI outcomes.
Methods: Thirty-four patients with congenital prelingual hearing loss, implanted with the same manufacturer's CI, were recruited. In each participant, the EABR was evoked at apical, middle, and basal electrode locations.
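As a rough illustration of the kind of analysis this abstract describes, the hypothetical sketch below correlates a per-electrode EABR measure with speech-perception scores. The variable names, the choice of wave eV latency as the measure, and the simulated data are all illustrative assumptions, not values or methods confirmed by the study.

```python
# Hypothetical sketch: relate an EABR measure recorded at apical, middle,
# and basal electrode sites to postoperative speech-perception scores.
# All data below are simulated for illustration only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_patients = 34  # matches the reported sample size

# Simulated eV latencies (ms) per electrode site, and speech scores (%).
latency = {
    "apical": rng.normal(4.0, 0.3, n_patients),
    "middle": rng.normal(4.2, 0.3, n_patients),
    "basal": rng.normal(4.4, 0.3, n_patients),
}
speech_score = rng.uniform(20, 95, n_patients)

for site, lat in latency.items():
    r, p = pearsonr(lat, speech_score)  # strength of the linear relationship
    print(f"{site:>6}: r = {r:+.2f}, p = {p:.3f}")
```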
Alzheimers Dement
December 2024
Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom.
Background: Patients with behavioural variant frontotemporal dementia (bvFTD) and right temporal variant frontotemporal dementia (rtvFTD) commonly exhibit abnormal hedonic and other behavioural responses to sounds; however, hearing dysfunction in these disorders is poorly characterised. Here we addressed this issue using the Queen Square Tests of Auditory Cognition (QSTAC), a neuropsychological battery for the systematic assessment of central auditory functions (including pitch pattern perception, environmental sound recognition, sound localisation and emotion processing) in cognitively impaired people.
Method: The QSTAC was administered to 12 patients with bvFTD, 7 patients with rtvFTD and 24 patients with comparator dementia syndromes (primary progressive aphasia and typical Alzheimer's disease) and 15 healthy age-matched individuals.
Alzheimers Dement
December 2024
Newcastle University, Newcastle upon Tyne, United Kingdom.
Background: Hearing loss is associated with cognitive and neuroimaging markers of Alzheimer's disease dementia, but it is unclear how specific hearing measures relate to these markers after accounting for a range of hearing abilities.
Method: 200 participants (155 cognitively normal, 25 with mild cognitive impairment and 20 with Alzheimer's disease dementia) underwent auditory testing (peripheral and central abilities), cognitive testing and MR scanning (structural and diffusion-weighted sequences) to evaluate the relationships between hearing, cognition and brain imaging measures.
Result: Central auditory measures such as speech-in-noise perception and auditory memory for longer durations were associated with cognitive impairment across the Alzheimer's disease continuum and specific auditory measures were independently associated with morphometric and diffusion-weighted brain measures.
Alzheimers Dement
December 2024
Centre for Brain Research, Indian Institute of Science, Bangalore, Karnataka, India.
Background: Auditory attention and memory are understudied aspects of cognition. Poor performance on cognitive tasks is often assumed to reflect peripheral hearing loss, but this is not always the case. Auditory processing deficits may affect auditory recall and attention tasks even when hearing and cognition are otherwise normal.