AI Article Synopsis

  • Researchers recorded sentences from a female talker with no face covering, with an N95 mask, and with an N95 mask plus a face shield, then measured how well cochlear implant users recognized each recording.
  • The combination of an N95 mask and a face shield significantly reduced speech recognition, highlighting the need for protective measures that do not compromise communication, especially in clinical settings.

Article Abstract

Objectives: The objectives were to characterize the effects of wearing face coverings on: 1) acoustic speech cues, and 2) speech recognition of patients with hearing loss who listen with a cochlear implant.

Methods: A prospective cohort study was performed in a tertiary referral center between July and September 2020. A female talker recorded sentences in three conditions: no face covering, N95 mask, and N95 mask plus a face shield. Spectral differences between the speech produced in each condition were analyzed. Speech recognition in each condition was assessed for twenty-three adult patients with at least 6 months of cochlear implant use.
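
The abstract does not specify the spectral-analysis pipeline. As a rough illustration only, one common approach is to compare long-term average spectra (LTAS) of the recordings made in each condition; the sketch below assumes mono WAV recordings with hypothetical file names and a shared sample rate, and is not the authors' actual analysis code.

import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def ltas_db(path, nperseg=2048):
    """Return (frequencies, long-term average spectrum in dB) for a WAV file."""
    fs, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:              # mix down to mono if needed
        x = x.mean(axis=1)
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    return f, 10 * np.log10(pxx + 1e-12)

# Hypothetical recordings of the same sentences in each condition
# (illustrative file names; assumes all files share one sample rate)
f, ltas_open = ltas_db("uncovered.wav")
_, ltas_mask = ltas_db("n95.wav")
_, ltas_shield = ltas_db("n95_plus_shield.wav")

# Attenuation relative to the uncovered condition, averaged above 2 kHz,
# the region where mask and shield effects on speech are typically largest
hf = f >= 2000
print("N95 high-frequency attenuation (dB):", np.mean(ltas_open[hf] - ltas_mask[hf]))
print("N95+shield high-frequency attenuation (dB):", np.mean(ltas_open[hf] - ltas_shield[hf]))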

Results: Spectral analysis demonstrated preferential attenuation of high-frequency speech information with the N95 mask plus face shield condition compared to the other conditions. Speech recognition did not differ significantly between the uncovered (median 90% [IQR 89%-94%]) and N95 mask conditions (91% [IQR 86%-94%]; P = .253); however, speech recognition was significantly worse in the N95 mask plus face shield condition (64% [IQR 48%-75%]) compared to the uncovered (P < .001) or N95 mask (P < .001) conditions.
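
The abstract does not name the statistical test; a paired nonparametric comparison such as the Wilcoxon signed-rank test would be consistent with the median/IQR reporting. The sketch below uses randomly generated placeholder scores purely for illustration; it is not the study's data or analysis code.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 23  # number of cochlear implant listeners, as in the study

# Hypothetical percent-correct scores per listener in each condition
uncovered = np.clip(rng.normal(90, 4, n), 0, 100)
n95 = np.clip(rng.normal(90, 5, n), 0, 100)
n95_shield = np.clip(rng.normal(62, 14, n), 0, 100)

# Paired signed-rank comparisons between conditions
for label, (a, b) in [("N95 vs uncovered", (n95, uncovered)),
                      ("N95+shield vs uncovered", (n95_shield, uncovered)),
                      ("N95+shield vs N95", (n95_shield, n95))]:
    stat, p = wilcoxon(a, b)
    print(f"{label}: W = {stat:.1f}, p = {p:.4f}")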

Conclusions: The type and combination of protective face coverings used have differential effects on attenuation of speech information, influencing speech recognition of patients with hearing loss. In the face of the COVID-19 pandemic, there is a need to protect patients and clinicians from spread of disease while maximizing patient speech recognition. The disruptive effect of wearing a face shield in conjunction with a mask may prompt clinicians to consider alternative eye protection, such as goggles, in appropriate clinical situations.

Level of Evidence: 3. Laryngoscope, 131:E2038-E2043, 2021.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8014501
DOI: http://dx.doi.org/10.1002/lary.29447

Publication Analysis

Top Keywords

speech recognition (20)
n95 mask (20)
mask face (12)
face shield (12)
face coverings (8)
speech (8)
cochlear implant (8)
shield condition (8)
face (6)
recognition (5)

Similar Publications

Objectives: This study examined the relationship between electrophysiological measures of the electrically evoked auditory brainstem response (EABR) and speech perception measured in quiet after cochlear implantation (CI), to determine the ability of the EABR to predict postoperative CI outcomes.

Methods: Thirty-four patients with congenital prelingual hearing loss, implanted with the same manufacturer's CI, were recruited. In each participant, the EABR was evoked at apical, middle, and basal electrode locations.

Background: Continuous speech analysis is considered an efficient and convenient approach for early detection of Alzheimer's Disease (AD). However, the traditional approach generally requires human transcribers to transcribe audio data accurately. This study applied automatic speech recognition (ASR) in conjunction with natural language processing (NLP) techniques to automatically extract linguistic features from Chinese speech data.
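
The abstract does not name the ASR engine or NLP toolkit used. As a rough illustration of this kind of pipeline, the sketch below assumes openai-whisper for transcription and jieba for Chinese word segmentation, and derives one simple lexical feature; it is not the authors' implementation, and the file name is hypothetical.

import whisper   # pip install openai-whisper
import jieba     # pip install jieba

def lexical_features(audio_path):
    """Transcribe Chinese speech and return a couple of simple linguistic features."""
    model = whisper.load_model("base")
    text = model.transcribe(audio_path, language="zh")["text"]
    tokens = jieba.lcut(text)          # word segmentation of the transcript
    return {
        "n_tokens": len(tokens),
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),  # lexical diversity
    }

# Hypothetical usage on one recording
print(lexical_features("participant_001.wav"))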

Background: There is growing evidence that discourse (i.e., connected speech) could serve as a cost-effective and ecologically valid means of identifying individuals with prodromal Alzheimer's disease.

Emerging Wearable Acoustic Sensing Technologies.

Adv Sci (Weinh)

January 2025

Key Laboratory of Optoelectronic Technology & Systems of Ministry of Education, International R&D Center of Micro-Nano Systems and New Materials Technology, Chongqing University, Chongqing, 400044, China.

Sound signals not only serve as the primary communication medium but also find application in fields such as medical diagnosis and fault detection. With public healthcare resources increasingly under pressure and disabled individuals facing daily challenges, solutions that enable low-cost private healthcare hold considerable promise. Acoustic methods have been widely studied because of their lower technical complexity compared to other medical solutions, as well as the high safety threshold of the human body to acoustic energy.

Stress classification with in-ear heartbeat sounds.

Comput Biol Med

December 2024

École de technologie supérieure, 1100 Notre-Dame St W, Montreal, H3C 1K3, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), 527 Rue Sherbrooke O #8, Montréal, QC H3A 1E3, Canada.

Background: Although stress plays a key role in tinnitus and decreased sound tolerance, conventional hearing devices used to manage these conditions are not currently capable of monitoring the wearer's stress level. The aim of this study was to assess the feasibility of stress monitoring with an in-ear device.

Method: In-ear heartbeat sounds and clinical-grade electrocardiography (ECG) signals were simultaneously recorded while 30 healthy young adults underwent a stress protocol.
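
The abstract does not detail how heartbeat sounds were converted into stress-related features. One plausible step, sketched below under the assumption that beat times have already been detected from the in-ear audio, is to compute standard heart-rate-variability measures; the feature set shown is illustrative and not necessarily the study's.

import numpy as np

def hrv_features(beat_times_s):
    """Compute simple heart-rate-variability features from beat times in seconds."""
    ibi = np.diff(beat_times_s) * 1000.0          # inter-beat intervals, ms
    return {
        "mean_hr_bpm": 60000.0 / ibi.mean(),
        "sdnn_ms": ibi.std(ddof=1),                       # overall variability
        "rmssd_ms": np.sqrt(np.mean(np.diff(ibi) ** 2)),  # beat-to-beat variability
    }

# Hypothetical beat times (seconds) for one short segment, ~75 bpm with jitter
beats = np.cumsum(np.full(30, 0.8)) + np.random.default_rng(1).normal(0, 0.02, 30)
print(hrv_features(np.sort(beats)))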
