The human voice carries several acoustic features that influence how cooperative the speaker is perceived to be. It remains unknown, however, whether these acoustic features are associated with actual cooperative behaviour. Resolving this issue is crucial for disentangling whether trait inferences from voices are based on stereotypes or instead facilitate the detection of cooperative partners. The latter possibility is plausible given the pleiotropic effect of testosterone on both cooperative behaviour and acoustic features. In the present study, we quantified the cooperativeness of native French-speaking men in a one-shot public good game. We also measured mean fundamental frequency, pitch variation, roughness, and breathiness from spontaneous speech recordings of the same men, and collected saliva samples to measure their testosterone levels. Our results showed that men with lower-pitched voices and greater pitch variation were more cooperative. Testosterone, however, influenced neither cooperative behaviour nor acoustic features. Our findings provide the first evidence of acoustic correlates of cooperative behaviour. Considered alongside the literature on the detection of cooperativeness from faces, these results imply that assessment of cooperative behaviour would be improved by simultaneous consideration of visual and auditory cues.
DOI: http://dx.doi.org/10.1111/bjop.12437
PLoS One
January 2025
Department of Teacher Education, University of Jyväskylä, Jyväskylä, Finland.
The aim of the study was to determine whether certain meaningful moments in the learning process are noticeable through features of the voice, and how acoustic voice analyses can be utilized in learning research. The material consisted of recordings of nine university students as they completed tasks on direct electric circuits as part of their physics teacher-education course. The prosodic features investigated were fundamental frequency (F0), sound pressure level (SPL), acoustic voice quality measured by the long-term average spectrum (LTAS), and pausing.
Bioengineering (Basel)
January 2025
CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou 310030, China.
The skull, with its high optical scattering and acoustic attenuation, poses a great challenge for photoacoustic imaging in humans. To explore and improve photoacoustic generation and propagation, we conducted photoacoustic simulation and image reconstruction of a multi-layer brain model with an embedded blood vessel under different optical source types. Based on the optical simulation results for the different source types, we explored the characteristics of images reconstructed from acoustic simulations with and without the skull.
J Exp Psychol Hum Percept Perform
January 2025
School of Psychology, University of Sussex.
Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal.
J Acoust Soc Am
January 2025
School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, People's Republic of China.
A complex-valued neural process method, combined with modal depth functions (MDFs) of the ocean waveguide, is proposed to reconstruct the acoustic field. Neural networks are used to describe complex Gaussian processes that model the distribution of the acoustic field at different depths. The network parameters are optimized through a meta-learning strategy, which prevents overfitting under small-sample conditions (where the sample size equals the number of array elements) and mitigates the slow reconstruction speed of Gaussian processes (GPs). The method denoises and interpolates sparsely distributed acoustic field data, generating dense field data for virtual receiver arrays.
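The paper's complex-valued neural process is not reproduced here, but the underlying idea it builds on, conditioning a Gaussian process on sparse receiver measurements to produce dense field estimates for a virtual array, can be sketched with ordinary real-valued GP regression. The function names, kernel choice, and toy sine-shaped field below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_interpolate(x_obs, y_obs, x_query, noise=1e-4):
    """Posterior mean of a zero-mean GP conditioned on noisy observations."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf_kernel(x_query, x_obs)
    return K_s @ np.linalg.solve(K, y_obs)

# Sparse receivers sampling a smooth stand-in field
x_obs = np.linspace(0.0, 1.0, 8)                 # 8 physical array elements
y_obs = np.sin(2 * np.pi * x_obs)                # field values at the receivers
x_dense = np.linspace(0.0, 1.0, 200)             # virtual dense receiver array
y_dense = gp_interpolate(x_obs, y_obs, x_dense)  # interpolated field estimate
```

The small `noise` term plays the same denoising role as the observation-noise variance in a full GP: it regularizes the solve so the posterior mean smooths over measurement error rather than interpolating it exactly. Replacing this closed-form conditioning with a learned, complex-valued model is what allows the paper's method to scale and to avoid overfitting when the sample size equals the number of array elements.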
BMC Neurosci
January 2025
National Brain Research Centre, Manesar, Gurugram, 122052, Haryana, India.
Delta-opioid receptors (δ-ORs) are known to be involved in associative learning and in modulating motivational states. We wanted to study whether they are also involved in naturally occurring reinforcement-learning behaviours such as vocal learning, using the zebra finch model system. Zebra finches learn to vocalize early in development, and song learning in males is affected by factors such as the social environment and internal reward, both of which are modulated by endogenous opioids.