Vocal Identity Recognition in Autism Spectrum Disorder.

PLoS One

NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, Japan; Department of Information Processing, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama, Kanagawa, Japan; CREST, JST, Atsugi, Kanagawa, Japan.

Published: February 2016


Article Abstract

Voices can convey information about a speaker. When forming an abstract representation of a speaker, it is important to extract relevant features from acoustic signals that are invariant to the modulation of those signals. This study investigated how individuals with autism spectrum disorder (ASD) recognize and memorize vocal identity. The ASD group and the control group performed similarly in a task that required choosing the name of a newly learned speaker based on his or her voice, and the ASD group outperformed the control group in a subsequent familiarity test that required discriminating previously trained voices from untrained voices. These findings suggest that individuals with ASD recognized and memorized voices as well as neurotypical individuals did, but that they categorized voices in a different way: individuals with ASD categorized voices quantitatively, based on the exact acoustic features, whereas neurotypical individuals categorized voices qualitatively, based on the acoustic patterns correlated with the speakers' physical and mental properties.
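The abstract's central idea, that recognizing a speaker requires acoustic features that stay stable while the signal itself varies, can be made concrete with a small sketch. The example below is not the study's procedure; it simply computes two commonly used features, a rough mean F0 (via autocorrelation) and the spectral centroid, from a synthetic test signal, to show the kind of feature vector such a representation might be built on. All function names and values are illustrative.

```python
# Illustrative sketch only (not the study's method): extract simple acoustic
# features that could serve as part of a speaker-identity representation.
import numpy as np

def mean_f0_autocorr(signal, sr, fmin=75.0, fmax=400.0):
    """Rough F0 estimate: peak of the autocorrelation within a plausible pitch range."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# Synthetic "voice": a 140 Hz fundamental plus a weaker second harmonic.
sr = 16_000
t = np.arange(0, 0.5, 1.0 / sr)
voice = np.sin(2 * np.pi * 140 * t) + 0.4 * np.sin(2 * np.pi * 280 * t)

features = np.array([mean_f0_autocorr(voice, sr), spectral_centroid(voice, sr)])
print(f"feature vector (F0 Hz, centroid Hz): {np.round(features, 1)}")
```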


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4466534 (PMC)
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0129451 (PLOS)

Publication Analysis

Top Keywords

categorized voices (12), vocal identity (8), autism spectrum (8), spectrum disorder (8), asd group (8), control group (8), individuals asd (8), neurotypical individuals (8), individuals categorized (8), voices (7)

Similar Publications

EXPRESS: Vocal and musical emotion perception, voice cue discrimination, and quality of life in cochlear implant users with and without acoustic hearing.

Q J Exp Psychol (Hove)

January 2025

Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.

This study aims to provide a comprehensive picture of auditory emotion perception in cochlear implant (CI) users by (1) investigating emotion categorization in both vocal (pseudo-speech) and musical domains, and (2) examining how individual differences in residual acoustic hearing, sensitivity to voice cues (voice pitch, vocal tract length), and quality of life (QoL) might be associated with vocal emotion perception and, going a step further, also with musical emotion perception. In 28 adult CI users, with or without self-reported acoustic hearing, we showed that sensitivity (d') scores for emotion categorization varied largely across participants, in line with previous research. However, within participants, the d' scores for vocal and musical emotion categorization were significantly correlated, indicating similar processing of auditory emotional cues across the pseudo-speech and music domains and robustness of the tests.
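As a rough illustration of the sensitivity measure mentioned above (not taken from the publication), the sketch below computes yes/no-style d' scores, Z(hit rate) minus Z(false-alarm rate), for hypothetical participants in both domains and then correlates them across participants, mirroring the within-participant comparison the abstract describes. All rates are made up for the example.

```python
# Minimal sketch: per-participant d' and the vocal-vs-musical correlation.
# All hit / false-alarm rates below are hypothetical.
import numpy as np
from scipy.stats import norm, pearsonr

def d_prime(hit_rate, fa_rate, correction=1e-3):
    """d' = Z(hit rate) - Z(false-alarm rate), with rates clipped away from 0 and 1."""
    h = np.clip(hit_rate, correction, 1 - correction)
    f = np.clip(fa_rate, correction, 1 - correction)
    return norm.ppf(h) - norm.ppf(f)

vocal_hits = np.array([0.80, 0.65, 0.90, 0.55])
vocal_fas  = np.array([0.20, 0.30, 0.10, 0.35])
music_hits = np.array([0.75, 0.60, 0.85, 0.50])
music_fas  = np.array([0.25, 0.35, 0.15, 0.40])

d_vocal = d_prime(vocal_hits, vocal_fas)
d_music = d_prime(music_hits, music_fas)

r, p = pearsonr(d_vocal, d_music)
print(f"vocal d':   {np.round(d_vocal, 2)}")
print(f"musical d': {np.round(d_music, 2)}")
print(f"within-participant correlation: r = {r:.2f}, p = {p:.3f}")
```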


Polariton lattices as binarized neuromorphic networks.

Light Sci Appl

January 2025

Spin-Optics laboratory, St. Petersburg State University, St. Petersburg, 198504, Russia.

We introduce a novel neuromorphic network architecture based on a lattice of exciton-polariton condensates, intricately interconnected and energized through nonresonant optical pumping. The network employs a binary framework, where each neuron, facilitated by the spatial coherence of pairwise coupled condensates, performs binary operations. This coherence, emerging from the ballistic propagation of polaritons, ensures efficient, network-wide communication.


Background: The two most commonly used methods to identify frailty are the frailty phenotype and the frailty index. However, both methods have limitations in clinical application. In addition, methods for measuring frailty have not yet been standardized.


Voice Quality as Digital Biomarker in Bipolar Disorder: A Systematic Review.

J Voice

January 2025

Department of Surgery, UMONS Research Institute for Health Sciences and Technology, University of Mons (UMons), Mons, Belgium; Division of Laryngology and Bronchoesophagology, Department of Otolaryngology Head Neck Surgery, EpiCURA Hospital, Baudour, Belgium; Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, School of Medicine, UFR Simone Veil, Université Versailles Saint-Quentin-en-Yvelines (Paris Saclay University), Paris, France; Department of Otolaryngology, Elsan Hospital, Paris, France. Electronic address:

Background: Voice analysis has emerged as a potential biomarker for mood state detection and monitoring in bipolar disorder (BD). This systematic review aimed to summarize the evidence for voice analysis applications in BD, examining (1) the predictive validity of voice quality outcomes for mood state detection, and (2) the correlation between voice parameters and clinical symptom scales.

Methods: A PubMed, Scopus, and Cochrane Library search was carried out by two investigators for publications investigating voice quality in BD, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.


Purpose: The Daily Phonotrauma Index (DPI) can quantify pathophysiological mechanisms associated with daily voice use in individuals with phonotraumatic vocal hyperfunction (PVH). Since the DPI was developed from weeklong ambulatory voice monitoring, this study investigated whether it can achieve comparable performance using (a) short laboratory speech tasks and (b) fewer than 7 days of ambulatory data.

Method: An ambulatory voice monitoring system recorded the vocal function/behavior of 134 females with PVH and vocally healthy matched controls in two different conditions.

