Introduction: Helicopter cockpits are very noisy, and this noise must be reduced for effective communication. The standard U.S. Army aviation helmet is equipped with a noise-canceling acoustic microphone, but some ambient noise is still transmitted. Throat microphones are not sensitive to airborne sound vibrations and therefore transmit less ambient noise. Throat microphones could potentially enhance speech communication in helicopters, but speech intelligibility with these devices must first be assessed. In the current study, the speech intelligibility of signals generated by an acoustic microphone, by a throat microphone, and by the combined output of the two microphones was assessed using the Modified Rhyme Test (MRT).
Methods: Stimulus words were recorded in a reverberant chamber with ambient broadband noise at levels of 90 and 106 dBA. Listeners completed the MRT task under the same conditions, thus simulating the typical acoustic environment of a rotary-wing aircraft.
Results: Results show that speech intelligibility is significantly worse for the throat microphone (average percent correct = 55.97) than for the acoustic microphone (average percent correct = 69.70), particularly for the higher noise level. In addition, no benefit is gained by simultaneously using both microphones. A follow-up experiment evaluated different consonants using the Diagnostic Rhyme Test and replicated the MRT results.
Discussion: The current results show that intelligibility with throat microphones is poorer than with boom-mounted acoustic microphones in both noisy and quiet environments. Therefore, throat microphones are not recommended for any situation in which fast and accurate speech communication is essential.
PLoS One, January 2025. Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado, United States of America.
Binaural speech intelligibility in rooms is a complex process that is affected by many factors including room acoustics, hearing loss, and hearing aid (HA) signal processing. Intelligibility is evaluated in this paper for a simulated room combined with a simulated hearing aid. The test conditions comprise three spatial configurations of the speech and noise sources, simulated anechoic and concert hall acoustics, three amounts of multitalker babble interference, the hearing status of the listeners, and three degrees of simulated HA processing provided to compensate for the noise and/or hearing loss.
J Speech Lang Hear Res, January 2025. Centre for Language Studies, Radboud University, Nijmegen, the Netherlands.
Purpose: In this review article, we present an extensive overview of recent developments in the area of dysarthric speech research. One of the key objectives of speech technology research is to improve the quality of life of its users, as evidenced by the focus of current research on creating inclusive conversational interfaces that cater to pathological speech, of which dysarthric speech is an important example. Applications of speech technology research for dysarthric speech demand a clear understanding of the acoustics of dysarthric speech as well as of speech technologies, including machine learning and deep neural networks for speech processing.
Trends Hear, January 2025. Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.
Int J Lang Commun Disord, January 2025. Division of Communication Sciences and Disorders, University of Cape Town, Rondebosch, South Africa.
Background: There is a global need for synthetic speech development in multiple languages and dialects, as many children who cannot communicate using their natural voice struggle to find synthetic voices on high-technology devices that match their age, social and linguistic background.
Aims: To document multiple stakeholders' perspectives surrounding the quality, acceptability and utility of newly created synthetic speech in three under-resourced South African languages, namely South African English, Afrikaans and isiXhosa.
Methods & Procedures: A mixed methods research design was selected.
Int J Audiol, January 2025. German Institute of Hearing Aids, Lübeck, Germany.
Objective: To describe application scenarios of a mobile device that provides a practical means for showcasing potential hearing aid benefits.
Design: A prototype hearing aid demonstrator based on circumaural headphones and a mobile signal processing platform was developed, providing core functions of a hearing aid, including several gain presets, in a hygienic, robust, and easy-to-use form factor. Speech intelligibility outcomes with the demonstrator and broadband level adaptations as potential fitting references were compared with outcomes obtained with the hearing-impaired participants' own hearing aids.