Purpose: The primary purpose of this study was to explore the efficacy of using virtual reality (VR) technology in hearing research with children by comparing speech perception abilities in a typical laboratory environment and a simulated VR classroom environment.
Method: The study included a final sample of 48 participants (40 children and eight young adults). The design paired a speech perception task with a localization demand in auditory-only (AO) and auditory-visual (AV) conditions. Tasks were completed in simulated classroom acoustics both in a typical laboratory environment and in a virtual classroom environment accessed through an Oculus Rift head-mounted display.
Results: Speech perception scores were higher in AV conditions than in AO conditions across age groups. In addition, interaction effects of environment (laboratory vs. VR classroom) and visual accessibility (AV vs. AO) indicated that children's performance on the speech perception task in the VR classroom was more similar to their laboratory performance for AV tasks than for AO tasks: AO scores improved from the laboratory to the VR classroom environment, whereas AV scores showed little change.
Conclusion: These results suggest that VR head-mounted displays are a viable research tool for AV tasks with children, increasing the flexibility of audiovisual testing in a typical laboratory environment.
Full text: PMC http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7839020 | DOI http://dx.doi.org/10.1044/2020_AJA-19-00004
Sci Rep
December 2024
Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK.
Previous research suggests that emotional prosody perception is impaired in neurodegenerative diseases such as Alzheimer's disease (AD) and primary progressive aphasia (PPA). However, no previous research has investigated emotional prosody perception in these diseases under non-ideal listening conditions. We recruited 18 patients with AD and 31 with PPA (nine logopenic (lvPPA), 11 nonfluent/agrammatic (nfvPPA), and 11 semantic (svPPA)), together with 24 healthy age-matched individuals.
Psychoneuroendocrinology
December 2024
Department of Psychology, Trinity College, USA.
Background: Theories highlight the important role of chronic stress in remodeling HPA-axis responsivity under stress. The Perceived Stress Scale (PSS) is one of the most widely used measures of enduring stress perceptions, yet no previous studies have evaluated whether greater perceived stress on the PSS is associated with hypo- or hyperreactive cortisol responses to the Trier Social Stress Test (TSST).
Objective: To examine whether high perceived stress over the past month, as measured by the PSS, alters cortisol and subjective acute stress reactivity to the TSST in healthy young adults.
Sci Rep
December 2024
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on the bottom-up neural coding fidelity of sensory information and on top-down effortful listening. Here, we studied the interaction between temporal processing, measured using Envelope Following Responses (EFRs) to amplitude-modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.
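For context, the stimulus behind an EFR measurement is a tone whose amplitude envelope fluctuates at a fixed rate, and the evoked neural response "follows" that envelope. Below is a minimal sketch of generating a sinusoidally amplitude-modulated tone; the sampling rate, carrier frequency, modulation frequency, and depth are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

fs = 44100                       # sampling rate (Hz) -- assumed
dur = 1.0                        # stimulus duration (s) -- assumed
fc, fm, m = 1000.0, 40.0, 1.0    # carrier Hz, modulation Hz, depth -- assumed

t = np.arange(int(fs * dur)) / fs
envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)    # slow amplitude envelope
sam_tone = envelope * np.sin(2 * np.pi * fc * t)   # envelope applied to carrier
sam_tone /= np.max(np.abs(sam_tone))               # normalize to +/-1 for playback
```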
Audiol Res
December 2024
Division of Audiology, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN 55902, USA.
Background/objectives: Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition under the best hearing aid conditions, whether in quiet or in noise, with speech and noise presented through a single loudspeaker. This study examines the influence of deep neural network-based (DNN-based) noise reduction on cochlear implant evaluation.
Methods: Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratios (SNRs) through one loudspeaker.
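Presenting sentences at a fixed SNR amounts to scaling the babble so that its RMS level sits the target number of decibels below the speech, per SNR_dB = 20·log10(rms_speech / rms_noise). A minimal sketch of that mixing step, assuming equal-length mono NumPy arrays; the function and variable names are hypothetical, not from the study:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Return speech + noise with the noise scaled so the speech-to-noise
    RMS ratio equals snr_db. Assumes equal-length mono float arrays."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    # Gain that makes rms(speech) / rms(gain * noise) hit the target SNR.
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise

# Usage, mirroring the study's two noise conditions (signals are placeholders):
# mixed_5dB = mix_at_snr(sentence, babble, 5.0)
# mixed_10dB = mix_at_snr(sentence, babble, 10.0)
```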
Audiol Res
December 2024
Doctoral School, Grigore T. Popa University of Medicine and Pharmacy, 700115 Iasi, Romania.
Background/objectives: Understanding speech in background noise is a challenging task for listeners with normal hearing and even more so for individuals with hearing impairments. The primary objective of this study was to develop Romanian speech-in-noise material to assess speech perception in diverse auditory populations, including individuals with normal hearing and those with various types of hearing loss. The goal was to create a versatile tool that can be used in different configurations and expanded in future studies examining auditory performance across various populations and rehabilitation methods.