Purpose: This study evaluated how first and second vowel formant frequencies (F1, F2) differ between normal and loud speech across multiple speaking tasks, to assess claims that increased loudness leads to exaggerated vowel articulation.

Method: Eleven healthy German-speaking women produced normal and loud speech in three tasks that varied in degree of spontaneity: reading sentences that contained isolated /i: a: u:/, responding to questions that included target words with controlled consonantal contexts but varying vowel qualities, and a recipe-recall task. Loudness variation was elicited naturalistically by changing interlocutor distance. First and second formant frequencies and average sound pressure level were obtained from the stressed vowels in the target words, and vowel space area was calculated from /i: a: u:/.

Results: Comparisons across many vowels indicated that high, tense vowels showed limited formant variation as a function of loudness. Analysis of /i: a: u:/ across speech tasks revealed vowel space reduction in the recipe-recall task compared with the other two. Loudness-related changes in F1 were consistent in direction but variable in extent, with few significant results for high tense vowels. Results for F2 were quite varied and frequently not significant. Speakers differed in how loudness and task affected formant values. Finally, correlations between sound pressure level and F1 were generally positive but varied in magnitude across vowels, with the high tense vowels showing very flat slopes.

Discussion: These data indicate that naturalistically elicited loud speech in typical speakers does not always lead to changes in vowel formant frequencies and call into question the notion that increasing loudness is necessarily an automatic method of expanding the vowel space.

Supplemental Material: https://doi.org/10.23641/asha.8061740
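The vowel space area mentioned in the Method section is conventionally the area of the polygon whose vertices are the (F1, F2) coordinates of the corner vowels, here the triangle spanned by /i: a: u:/. A minimal sketch of that computation follows, using the shoelace formula; the formant values are illustrative placeholders, not data from the study.

```python
def vowel_space_area(points):
    """Area of the polygon whose vertices are (F1, F2) pairs, in Hz^2,
    computed with the shoelace formula. Three corner vowels give a triangle."""
    n = len(points)
    acc = 0.0
    for k in range(n):
        x1, y1 = points[k]
        x2, y2 = points[(k + 1) % n]  # wrap around to close the polygon
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

# Hypothetical mean formants (F1, F2) in Hz for one speaker (not study data).
corners = {"i:": (300, 2800), "a:": (850, 1450), "u:": (320, 750)}
area = vowel_space_area(list(corners.values()))
print(f"Vowel space area: {area:.0f} Hz^2")  # prints: Vowel space area: 550250 Hz^2
```

A larger area is read as a more expanded (less centralized) vowel space, which is why a reduced area in the recipe-recall task indicates vowel space reduction in that condition.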


Source: http://dx.doi.org/10.1044/2018_JSLHR-S-18-0043


Similar Publications

How Does Deep Neural Network-Based Noise Reduction in Hearing Aids Impact Cochlear Implant Candidacy?

Audiol Res

December 2024

Division of Audiology, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN 55902, USA.

Background/objectives: Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition under the best hearing aid conditions, either in quiet or noisy environments, with speech and noise presented through a single speaker. This study examines the influence of deep neural network-based (DNN-based) noise reduction on cochlear implant evaluation.

Methods: Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratios (SNRs) through one loudspeaker.


This literature review investigates the application of wide dynamic range compression (WDRC) to enhance hearing protection and communication among workers in a noisy environment. Given the prevalence of noise-induced hearing loss, there is a major need to provide workers, with or at risk of hearing loss, with a solution that not only protects their hearing but also facilitates effective communication. WDRC, which amplifies softer sounds while limiting louder sounds, appears a promising approach.


Background: Music-induced hearing loss (MIHL) is a critical public health issue. During music instruction, students and teachers are at risk of developing hearing loss due to exposure to loud and unsafe sound levels that can exceed 100 dBA. Preventing MIHL in music students should be a priority for all music educators.


Purpose: This study investigated the ecological validity of conventional voice assessments by comparing the self-perceived voice quality and acoustic characteristics of voice production during these assessments to those in a simulated environment with varying distracting conditions and noise levels.

Method: Forty-two university professors (26 women) participated in the study, where they were asked to produce loud connected speech by reading a 100-word text under four different conditions: a conventional assessment and three virtual classroom simulations created with 360° videos, each with different noise levels, played through a virtual reality headset and headphones. The first video depicted students paying attention in class (40 dB classroom noise); the second showed some students talking, generating moderate conversational noise (60 dB); and the third depicted students talking loudly and not paying attention (70 dB).


Background: During hearing aid (HA) fitting, individuals may experience better speech discrimination at normal speech levels and worse discrimination at loud speech levels than without an HA. Therefore, we investigated factors that worsen speech discrimination when the speech sound level increases.

Methods: Speech discrimination was measured in patients aged >20 years who had average hearing thresholds <90 dB on pure-tone audiometry.

