This article presents the development of the "Hoosier Vocal Emotions Corpus," a stimulus set of recorded pseudo-words based on the pronunciation rules of English. The corpus contains 73 controlled audio pseudo-words uttered by two actresses in five different emotions (i.e., happiness, sadness, fear, anger, and disgust) and in a neutral tone, yielding 1,763 audio files. In this article, we describe the corpus as well as a validation study of the pseudo-words. A total of 96 native English speakers completed a forced-choice emotion identification task. All emotions were recognized better than chance overall, with substantial variability among the different tokens. All of the recordings, including the ambiguous stimuli, are freely available, and the recognition rates and full confusion matrices for each stimulus are provided to assist researchers and clinicians in selecting stimuli. The corpus has unique characteristics that can be useful for experimental paradigms that require controlled stimuli (e.g., electroencephalographic or fMRI studies). Stimuli from this corpus could be used by researchers and clinicians to answer a variety of questions, including investigations of emotion processing in individuals with certain temperamental or behavioral characteristics associated with difficulties in emotion recognition (e.g., individuals with psychopathic traits); in bilingual individuals or nonnative English speakers; in patients with aphasia, schizophrenia, or other mental health disorders (e.g., depression); or for training automatic emotion recognition algorithms. The Hoosier Vocal Emotions Corpus is available at https://psycholinguistics.indiana.edu/hoosiervocalemotions.htm.
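Because per-stimulus recognition rates and confusion matrices are distributed with the corpus, a common first step is filtering tokens by how reliably listeners identified the intended emotion. The Python/pandas sketch below illustrates this, assuming the metadata has been exported to a CSV with hypothetical columns filename, speaker, emotion, and recognition_rate; the actual corpus download may organize these data differently.

```python
# Illustrative stimulus selection; the file name and column names are
# assumptions, not the corpus's documented format.
import pandas as pd

meta = pd.read_csv("hoosier_metadata.csv")  # hypothetical export of corpus metadata

# Well-recognized tokens, e.g., for EEG/fMRI designs needing unambiguous stimuli.
clear = meta[meta["recognition_rate"] >= 0.80]

# Deliberately ambiguous tokens, e.g., for studies of uncertain emotional cues.
ambiguous = meta[meta["recognition_rate"] < 0.40]

# Balance the design: the ten best-recognized tokens per emotion and speaker.
balanced = (
    clear.groupby(["emotion", "speaker"], group_keys=False)
         .apply(lambda g: g.nlargest(10, "recognition_rate"))
)
print(balanced[["filename", "emotion", "recognition_rate"]])
```

The 0.80 and 0.40 thresholds are placeholders; appropriate cutoffs depend on the design and on the chance level of the six-alternative task (about 17%).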
DOI: http://dx.doi.org/10.3758/s13428-019-01288-0
Front Psychol
December 2024
Department of Psychology, Shaoxing University, Shaoxing, Zhejiang, China.
Background: Previous studies have demonstrated an in-group advantage in emotion recognition, suggesting that individuals are more proficient at identifying emotions within their own culture than within other cultures. However, existing research has focused mainly on cross-cultural variation in vocal emotion recognition, with limited attention paid to intracultural differences. Furthermore, little research has examined adolescents' ability to recognize emotions conveyed by vocal cues in various cultural settings.
Med J Armed Forces India
December 2024
Associate Professor, Dayanand Sagar University, Bengaluru, India.
Background: A person's voice conveys vital information about their physical and emotional health, and altered voice quality is noticeable after sleep loss. The circadian rhythm controls the sleep cycle; when it is disrupted, the resulting fatigue is manifested in speech.
J Speech Lang Hear Res
December 2024
University of California, San Francisco.
Purpose: We investigate the extent to which automated audiovisual metrics extracted during an affect production task show statistically significant differences between a cohort of children diagnosed with autism spectrum disorder (ASD) and typically developing controls.
Method: Forty children with ASD and 21 neurotypical controls interacted with a multimodal conversational platform with a virtual agent, Tina, who guided them through tasks prompting facial and vocal communication of four emotions (happy, angry, sad, and afraid) under conditions of high and low verbal and social cognitive task demands.
Results: Individuals with ASD exhibited a greater standard deviation of the voice's fundamental frequency, with the minima and maxima of the pitch contour occurring at an earlier time point than in controls.
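For readers who want a concrete sense of these acoustic measures, the sketch below computes the standard deviation of the fundamental frequency and the times of the pitch contour's minimum and maximum for one utterance, using librosa's pYIN tracker. This is not the study's actual pipeline, and the pitch range bounds are illustrative assumptions.

```python
# Minimal sketch of the pitch metrics described above; the pYIN range bounds
# are illustrative, and this is not the study's feature pipeline.
import numpy as np
import librosa

def pitch_metrics(wav_path):
    """F0 standard deviation and times (s) of the pitch contour's extrema."""
    y, sr = librosa.load(wav_path, sr=None)  # keep the native sample rate
    f0, voiced, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),  # ~65 Hz, assumed lower bound
        fmax=librosa.note_to_hz("C6"),  # ~1047 Hz, assumed upper bound
        sr=sr,
    )
    times = librosa.times_like(f0, sr=sr)
    f0_v, t_v = f0[voiced], times[voiced]  # voiced frames only (unvoiced are NaN)
    return {
        "f0_sd_hz": float(np.std(f0_v)),
        "t_f0_min_s": float(t_v[np.argmin(f0_v)]),
        "t_f0_max_s": float(t_v[np.argmax(f0_v)]),
    }
```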
eNeuro
December 2024
Department of Cell Biology, Duke University Medical School, Durham, North Carolina, USA.
Epilepsy Aphasia Syndrome (EAS) is a spectrum of childhood disorders characterized by complex comorbidities, including epilepsy and the emergence of cognitive and language disorders. CNKSR2 is an X-linked gene in which mutations are linked to EAS. We previously demonstrated that Cnksr2 knockout (KO) mice model key EAS phenotypes analogous to those present in clinical patients with mutations in the gene.
Am J Primatol
January 2025
Unit of Ethology, Department of Biology, University of Pisa, Pisa, Italy.
Behavioral contagion is widespread in primates, with yawn contagion (YC) being a well-known example. Although YC is often associated with in-group dynamics and synchronization, its possible functions and evolutionary pathways remain subjects of active debate. Among nonhuman animals, geladas (Theropithecus gelada) are the only species known to occasionally emit a distinct vocalization while yawning.