Estimation of sound pressure levels of voiced speech from skin vibration of the neck.

J Acoust Soc Am

National Center for Voice and Speech, the Denver Center for the Performing Arts, Denver, Colorado 80204, USA.

Published: March 2005

How accurately can sound pressure levels (SPLs) of speech be estimated from skin vibration of the neck? Measurements using a small accelerometer were carried out in 27 subjects (10 males and 17 females) who read the Rainbow and Marvin Williams passages in soft, comfortable, and loud voice, while skin acceleration levels (SALs) and SPLs were simultaneously registered and analyzed every 30 ms. The results indicate that the mean SPL of voiced speech can be estimated with an accuracy better than ±2.8 dB in 95% of the cases when the subjects are individually calibrated. This makes the accelerometer an interesting sensor for SPL measurement of speech when microphones are problematic to use (e.g., in noisy environments or in voice dosimetry). The estimates of equivalent SPL, defined as the logarithm of the averaged relative energy of voiced speech, were found to be up to 1.5 dB less accurate than those of the mean SPL. The estimation accuracy for instantaneous SPLs was worse than for the mean and equivalent SPLs (on average ±6 and ±5 dB for males and females, respectively).
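The abstract does not spell out the calibration procedure, but the idea can be sketched under one assumption: that per-subject calibration amounts to a linear dB-to-dB fit of SPL against SAL over the simultaneously measured 30 ms frames. The Python sketch below is illustrative only; the function names, the linear model, and the sample frame values are assumptions, not the paper's published method. It also shows how equivalent SPL (10·log10 of the mean relative energy) differs from an arithmetic mean of dB values.

    import numpy as np

    def fit_calibration(sal_db, spl_db):
        """Hypothetical per-subject calibration: least-squares fit of
        SPL = a * SAL + b (both in dB) over simultaneous 30 ms frames."""
        a, b = np.polyfit(sal_db, spl_db, deg=1)
        return a, b

    def estimate_spl(sal_db, a, b):
        """Predict frame-level SPL (dB) from SAL (dB) with the fitted line."""
        return a * np.asarray(sal_db) + b

    def equivalent_spl(spl_db):
        """Equivalent SPL: 10*log10 of the mean relative energy of voiced
        frames, i.e., an energy average rather than a mean of dB values."""
        return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(spl_db) / 10.0)))

    # Made-up calibration frames for one subject (illustrative values):
    sal = np.array([52.0, 58.0, 63.0, 70.0])  # skin acceleration levels, dB
    spl = np.array([60.0, 67.0, 73.0, 81.0])  # simultaneously measured SPLs, dB
    a, b = fit_calibration(sal, spl)
    predicted = estimate_spl([55.0, 65.0], a, b)
    print(f"mean SPL estimate: {predicted.mean():.1f} dB")
    print(f"equivalent SPL:    {equivalent_spl(predicted):.1f} dB")

Because the equivalent SPL is energy-averaged, loud frames dominate it, which is consistent with the abstract's observation that equivalent-SPL estimates can be somewhat less accurate than mean-SPL estimates.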


Source
DOI: http://dx.doi.org/10.1121/1.1850074

Publication Analysis

Top Keywords

voiced speech (12); sound pressure (8); pressure levels (8); skin vibration (8); speech estimated (8); males females (8); speech (5); estimation sound (4); levels voiced (4); speech skin (4)

Similar Publications

Background: Vocal fatigue involves self-perceived vocal symptoms and reduced physiological capacity. This study aimed to adapt and validate, for Italian speakers, the Vocal Fatigue Index (VFI), a tool originally designed to distinguish patients with vocal fatigue from vocally healthy individuals.

Method: A four-step translation and validation process was employed.


Purpose: Mental health screening is recommended by the US Preventive Services Task Force for all patients in areas where treatment options are available. Still, it is estimated that only 4% of primary care patients are screened for depression. The goal of this study was to evaluate the efficacy of machine learning technology (Kintsugi Voice, v1, Kintsugi Mindful Wellness, Inc) to detect and analyze voice biomarkers consistent with moderate to severe depression, potentially allowing for greater compliance with this critical primary care public health need.


Purpose: The Daily Phonotrauma Index (DPI) can quantify pathophysiological mechanisms associated with daily voice use in individuals with phonotraumatic vocal hyperfunction (PVH). Since DPI was developed based on weeklong ambulatory voice monitoring, this study investigated if DPI can achieve comparable performance using (a) short laboratory speech tasks and (b) fewer than 7 days of ambulatory data.

Method: An ambulatory voice monitoring system recorded the vocal function/behavior of 134 females with PVH and vocally healthy matched controls in two different conditions.


Intonation adaptation to multiple talkers.

J Exp Psychol Learn Mem Cogn

December 2024

University at Buffalo, The State University of New York, Department of Psychology.

Speech intonation conveys a wealth of linguistic and social information, such as the intention to ask a question versus make a statement. However, due to the considerable variability in our speaking voices, the mapping from meaning to intonation can be many-to-many and often ambiguous. Previous studies suggest that the comprehension system resolves this ambiguity, at least in part, by adapting to recent exposure.


Affective voice signaling has significant biological and social relevance across various species, and different types of affective signaling have emerged through the evolution of voice communication. These types range from basic affective voice bursts and nonverbal affective vocalizations up to affective intonations superimposed on speech utterances in humans, in the form of paraverbal prosodic patterns. These different types of affective signaling should have evolved to be acoustically and perceptually distinctive, allowing accurate and nuanced affective communication.

