With the COVID-19 pandemic, we have become used to wearing masks and have experienced how masks seem to impair emotion and speech recognition. While several studies have focused on facial emotion recognition by adding images of masks to photographs of emotional faces, we created a video database with actors actually wearing masks to test their effect under more ecological conditions. After validating the emotions displayed by the actors, we found that the surgical mask impaired the recognition of happiness and sadness but not of neutrality. Moreover, for happiness, this effect was specific to the mask rather than to covering the lower part of the face, possibly reflecting a cognitive bias associated with the surgical mask. We also created videos with speech and tested the effect of the mask on emotion and speech recognition in the auditory, visual, and audiovisual modalities. In the visual and audiovisual modalities, the mask impaired the recognition of happiness and sadness but improved that of neutrality. The mask impaired the recognition of bilabial syllables regardless of modality. In addition, it altered speech recognition only in the audiovisual modality and only for participants above 70 years old. Overall, COVID-19 masks mainly impair emotion recognition, except for older participants, for whom they also impact speech recognition, probably because they rely more on visual information to compensate for age-related hearing loss.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9540850
DOI: http://dx.doi.org/10.3389/fnins.2022.982899
Vestn Otorinolaringol
December 2024
St. Petersburg Research Institute of Ear, Throat, Nose and Speech, St. Petersburg, Russia.
Unlabelled: Central auditory disorders (CSD) are impairments in the processing of auditory stimuli, including speech, above the level of the cochlear nuclei of the brainstem; they are mainly manifested as difficulties in speech recognition, especially in noisy environments. Children with this pathology are more likely to have behavioral problems, impaired auditory, linguistic, and cognitive development, and, in particular, difficulties with learning at school.
Objective: To analyze the literature data on the epidemiology of central auditory disorders in school-age children.
Audiol Res
December 2024
Division of Audiology, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN 55902, USA.
Background/objectives: Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition under the best hearing aid conditions, in either quiet or noisy environments, with speech and noise presented through a single loudspeaker. This study examines the influence of deep neural network-based (DNN-based) noise reduction on cochlear implant evaluation.
Methods: Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratios (SNRs) through one loudspeaker.
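The 5 and 10 dB SNR conditions come down to a simple level calculation: the babble is scaled so that its RMS level sits 5 or 10 dB below that of the speech before the two signals are summed. The sketch below illustrates that arithmetic in Python; the function name, placeholder signals, and sampling rate are illustrative assumptions and not part of the study's actual AzBio test setup.

```python
# Hypothetical sketch: mixing speech with multi-talker babble at a fixed SNR.
# Not the authors' code or the clinical protocol, just the underlying arithmetic.
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `babble` so the speech-to-babble ratio equals `snr_db`, then return the mixture."""
    babble = babble[:len(speech)]                      # trim noise to speech length
    speech_rms = np.sqrt(np.mean(speech ** 2))
    babble_rms = np.sqrt(np.mean(babble ** 2))
    # SNR (dB) = 20 * log10(speech_rms / noise_rms)  ->  solve for the required noise level
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + babble * (target_noise_rms / babble_rms)

# Example: 1 s of placeholder signals at 16 kHz, mixed at the study's +5 and +10 dB SNRs
fs = 16000
speech = 0.1 * np.random.randn(fs)   # stand-in for a recorded sentence
babble = 0.1 * np.random.randn(fs)   # stand-in for multi-talker babble
mixtures = {snr: mix_at_snr(speech, babble, snr) for snr in (5, 10)}
```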
Audiol Res
December 2024
Audiology, Primary Care Department, AUSL of Modena, 41100 Modena, Italy.
Hearing loss is a highly prevalent condition in the world population that entails emotional, social, and economic costs. In recent years, it has been clearly established that the lack of physiological binaural hearing causes alterations in sound localization and reduced speech recognition in noise and reverberation. This study aims to explore the psycho-social profile of adult workers affected by single-sided deafness (SSD), without other major medical conditions or otological symptoms, in comparison with normal-hearing subjects.
Audiol Res
December 2024
Doctoral School, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania.
Background/objectives: Understanding speech in background noise is a challenging task for listeners with normal hearing and even more so for individuals with hearing impairments. The primary objective of this study was to develop Romanian speech material in noise to assess speech perception in diverse auditory populations, including individuals with normal hearing and those with various types of hearing loss. The goal was to create a versatile tool that can be used in different configurations and expanded for future studies examining auditory performance across various populations and rehabilitation methods.
Interspeech
September 2024
Pattern Recognition Lab, Friedrich-Alexander University, Erlangen, Germany.
Magnetic Resonance Imaging (MRI) allows the analysis of speech production by capturing high-resolution images of the dynamic processes in the vocal tract. In clinical applications, combining MRI with synchronized speech recordings leads to improved patient outcomes, especially if a phonologically based approach is used for assessment. However, when audio signals are unavailable and only MRI data can be used, sound recognition accuracy decreases.