Background: Abnormalities in vocal expression during a depressed episode have frequently been reported in people with depression, but less is known about whether these abnormalities are confined to specific situations. In addition, previous studies did not control for the impact of irrelevant demographic variables on voice. This study therefore compares vocal differences between depressed and healthy people across a range of situations, treating irrelevant variables as covariates.
Methods: To examine whether vocal abnormalities in people with depression exist only in specific situations, this study compared the voices of healthy people and patients with unipolar depression across 12 situations (speech scenarios). Positive, negative and neutral vocal expressions of depressed and healthy people were compared in four tasks. Multivariate analysis of covariance (MANCOVA) was used to evaluate the main effect of group (depressed vs. healthy) on acoustic features. Acoustic features were assessed by both statistical significance and magnitude of effect size.
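With only two groups, the multivariate group main effect that MANCOVA tests reduces (in the absence of covariates) to a two-sample comparison of feature mean vectors, i.e., Hotelling's T². A minimal numpy sketch on synthetic data illustrates the idea; the feature means, group sizes, and the loudness/MFCC5/MFCC7 feature set used below are invented for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n1 = n2 = 40  # hypothetical group sizes
p = 3         # three features: loudness, MFCC5, MFCC7 (illustrative)
dep = rng.normal([60.0, -5.0, 2.0], 1.0, size=(n1, p))  # synthetic "depressed" group
hc = rng.normal([65.0, -3.0, 4.0], 1.0, size=(n2, p))   # synthetic "healthy" group

diff = dep.mean(axis=0) - hc.mean(axis=0)
# Pooled within-group covariance matrix
S = ((n1 - 1) * np.cov(dep, rowvar=False)
     + (n2 - 1) * np.cov(hc, rowvar=False)) / (n1 + n2 - 2)
# Hotelling's T-squared statistic for the two-group mean-vector difference
T2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
# Equivalent F statistic with (p, n1 + n2 - p - 1) degrees of freedom
F = T2 * (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p)
```

A full MANCOVA would additionally regress out the demographic covariates before testing the group effect; this sketch shows only the multivariate group comparison at its core.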
Results: MANCOVA showed significant differences between the two groups in all 12 speech scenarios. Although the significant acoustic features varied across scenarios, three features (loudness, MFCC5 and MFCC7) differed consistently, and with large effect sizes, between people with and without depression.
Conclusions: Vocal differences between depressed and healthy people exist across all 12 scenarios. Acoustic features including loudness, MFCC5 and MFCC7 have the potential to serve as indicators for identifying depression via voice analysis. These findings suggest that the voices of depressed people carry both situation-specific and cross-situational patterns of acoustic features.
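Per-feature effect magnitude of the kind the abstract describes is commonly summarized as Cohen's d (the abstract does not state which effect-size measure was used, so d is an illustrative choice; the data below are likewise invented):

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two independent samples."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1)
                         + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Synthetic loudness values for two groups (illustrative numbers only)
depressed = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
healthy = depressed + 2.0
d = cohens_d(depressed, healthy)  # |d| > 0.8 is conventionally "large"
```

Reporting both a p-value and an effect size, as the study does, guards against flagging features whose group difference is statistically significant but practically negligible.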
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6794822
DOI: http://dx.doi.org/10.1186/s12888-019-2300-7
J Exp Psychol Hum Percept Perform
January 2025
School of Psychology, University of Sussex.
Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal.
J Acoust Soc Am
January 2025
School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, People's Republic of China.
A complex-valued neural process method, combined with modal depth functions (MDFs) of the ocean waveguide, is proposed to reconstruct the acoustic field. Neural networks are used to describe complex Gaussian processes, modeling the distribution of the acoustic field at different depths. The network parameters are optimized through a meta-learning strategy, preventing overfitting under small sample conditions (sample size equals the number of array elements) and mitigating the slow reconstruction speed of Gaussian processes (GPs), while denoising and interpolating sparsely distributed acoustic field data, generating dense field data for virtual receiver arrays.
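The interpolation step described above can be illustrated with a toy real-valued sketch (the paper's method is complex-valued and neural-network-parameterized; here a plain RBF-kernel Gaussian-process posterior mean denoises sparse synthetic samples onto a dense grid, with all positions and values invented):

```python
import numpy as np

def rbf(x1, x2, ls=0.2):
    """Squared-exponential kernel between two 1-D coordinate vectors."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(1)
x_obs = np.linspace(0.0, 1.0, 8)                               # sparse "array element" positions
y_obs = np.sin(2 * np.pi * x_obs) + 0.05 * rng.normal(size=8)  # noisy field samples
x_new = np.linspace(0.0, 1.0, 50)                              # dense "virtual receiver" grid

K = rbf(x_obs, x_obs) + 0.05 ** 2 * np.eye(8)          # noise variance on the diagonal
y_new = rbf(x_new, x_obs) @ np.linalg.solve(K, y_obs)  # GP posterior mean
```

The exact-GP solve above costs O(n³) in the number of observations, which motivates the paper's neural parameterization and meta-learning strategy for faster reconstruction.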
BMC Neurosci
January 2025
National Brain Research Centre, Manesar, Gurugram, 122052, Haryana, India.
Delta-opioid receptors (δ-ORs) are known to be involved in associative learning and in modulating motivational states. We wanted to study whether they are also involved in naturally occurring reinforcement-learning behaviors such as vocal learning, using the zebra finch model system. Zebra finches learn to vocalize early in development, and song learning in males is affected by factors such as the social environment and internal reward, both of which are modulated by endogenous opioids.
Commun Biol
January 2025
Western Institute for Neuroscience, Western University, London, ON, Canada.
Our brain seamlessly integrates distinct sensory information to form a coherent percept. However, when real-world audiovisual events are perceived, the specific brain regions and timings for processing different levels of information remain less investigated. To address that, we curated naturalistic videos and recorded functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) data when participants viewed videos with accompanying sounds.
Soft Robot
January 2025
Department of Mechanical and Nuclear Engineering, Khalifa University, Abu Dhabi, UAE.
The inherent challenges of robotic underwater exploration, such as hydrodynamic effects, the complexity of dynamic coupling, and the necessity for sensitive interaction with marine life, call for the adoption of soft robotic approaches in marine exploration. To address this, we present a novel prototype, ZodiAq, a soft underwater drone inspired by prokaryotic bacterial flagella. ZodiAq's unique dodecahedral structure, equipped with 12 flagella-like arms, ensures design redundancy and compliance, ideal for navigating complex underwater terrains.