Speech sound classification and detection of articulation disorders with support vector machines and wavelets.

Conf Proc IEEE Eng Med Biol Soc

Laboratory for Automation & Robotics, University of Patras, 26500 Patras, Greece.

Published: March 2008

This paper proposes a novel integrated methodology for extracting features from speech sounds and classifying them, with the aim of detecting a possible speech articulation disorder in a speaker. Articulation is, in effect, the specific and characteristic way in which an individual produces speech sounds. A methodology is presented that processes the speech signal, extracts features, and finally classifies the signal to detect articulation problems in a speaker. The use of support vector machines (SVMs) for the classification of speech sounds and the detection of articulation disorders is introduced. The proposed method is applied to a data set on which different feature sets and different SVM schemes are tested, leading to satisfactory performance.
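For readers who want a concrete picture of this kind of pipeline, the following is a minimal Python sketch of a wavelet-feature/SVM classifier, assuming PyWavelets and scikit-learn. The wavelet family (db4), decomposition level, subband log-energy features, and RBF kernel are illustrative assumptions, not the configuration reported in the paper.

```python
# Illustrative sketch of a wavelet-feature + SVM pipeline (not the authors' exact
# configuration): wavelet family, decomposition level, and feature set are assumptions.
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_energy_features(signal, wavelet="db4", level=5):
    """Decompose a 1-D speech frame and return the log-energy of each subband."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

def train_classifier(frames, labels):
    """frames: list of 1-D speech frames; labels: 0 = typical, 1 = disordered articulation."""
    X = np.vstack([wavelet_energy_features(f) for f in frames])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X, labels)
    return clf
```

In such a scheme, the same feature extraction is applied to new speech frames and the trained SVM predicts whether each frame is consistent with typical or disordered articulation.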


Source: http://dx.doi.org/10.1109/IEMBS.2006.259499

Publication Analysis

Top Keywords (frequency)

speech sounds (12); detection articulation (8); articulation disorders (8); support vector (8); vector machines (8); extract features (8); speech (6); articulation (5); speech sound (4); sound classification (4)

Similar Publications

Voice of a woman: influence of interaction partner characteristics on cycle dependent vocal changes in women.

Front Psychol

December 2024

Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany.

Introduction: Research has shown that women's vocal characteristics change during the menstrual cycle. Further, evidence suggests that individuals alter their voices depending on the context, such as when speaking to a highly attractive person, or a person with a different social status. The present study aimed at investigating the degree to which women's voices change depending on the vocal characteristics of the interaction partner, and how any such changes are modulated by the woman's current menstrual cycle phase.


Acoustic Exaggeration Enhances Speech Discrimination in Young Autistic Children.

Autism Res

December 2024

Psychiatry and Addictology Department, CIUSSS-NIM Research Center, University of Montreal, Montreal, Quebec, Canada.

Child-directed speech (CDS), which amplifies acoustic and social features of speech during interactions with young children, promotes typical phonetic and language development. In autism, both behavioral and brain data indicate reduced sensitivity to human speech, which predicts absent, decreased, or atypical benefits from exaggerated speech signals such as CDS. This study investigates the impact of exaggerated fundamental frequency (F0) and voice-onset time on the neural processing of speech sounds in 22 Chinese-speaking autistic children aged 2 to 7 years with a history of speech delays, compared with 25 typically developing (TD) peers.


Hearing loss is a highly prevalent condition in the world population that entails emotional, social, and economic costs. In recent years, it has become clearly established that the lack of physiological binaural hearing causes alterations in sound localization and reduced speech recognition in noise and reverberation. This study aims to explore the psycho-social profile of adult workers affected by single-sided deafness (SSD), without other major medical conditions or otological symptoms, by comparison with subjects with normal hearing.


This literature review investigates the application of wide dynamic range compression (WDRC) to enhance hearing protection and communication among workers in noisy environments. Given the prevalence of noise-induced hearing loss, there is a pressing need to provide workers who have, or are at risk of, hearing loss with a solution that not only protects their hearing but also facilitates effective communication. WDRC, which amplifies softer sounds while limiting louder ones, appears to be a promising approach.
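As a rough illustration of the compression principle only (not any particular hearing-protection device), the sketch below applies a simple single-band, frame-wise WDRC-style gain: frames below an assumed threshold receive a fixed makeup gain, while louder frames receive progressively less gain according to an assumed compression ratio. Real devices use multi-band, time-smoothed compressors.

```python
# Simplified static WDRC-style gain (illustrative only). Threshold, ratio, makeup gain,
# and frame length are assumed values, not parameters from any reviewed device.
import numpy as np

def wdrc_gain_db(level_db, threshold_db=-40.0, ratio=3.0, makeup_db=10.0):
    """Gain (dB) for a frame level (dBFS): flat below threshold, compressed above it."""
    over = max(level_db - threshold_db, 0.0)
    # Above threshold, output rises at only 1/ratio dB per input dB.
    return makeup_db - over * (1.0 - 1.0 / ratio)

def apply_wdrc(x, frame=256, **kwargs):
    """Apply frame-wise gain so soft frames are amplified more than loud ones."""
    y = np.asarray(x, dtype=float).copy()
    for start in range(0, len(y), frame):
        seg = y[start:start + frame]                      # view into y, modified in place
        level_db = 20.0 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12)
        seg *= 10.0 ** (wdrc_gain_db(level_db, **kwargs) / 20.0)
    return y
```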


Magnetic Resonance Imaging (MRI) makes it possible to analyze speech production by capturing high-resolution images of the dynamic processes in the vocal tract. In clinical applications, combining MRI with synchronized speech recordings leads to improved patient outcomes, especially if a phonology-based approach is used for assessment. However, when audio signals are unavailable and only MRI data are used, sound recognition accuracy decreases.

