This article presents the development of the "Hoosier Vocal Emotions Corpus," a stimulus set of recorded pseudo-words based on the pronunciation rules of English. The corpus contains 73 controlled audio pseudo-words uttered by two actresses in five different emotions (i.e., happiness, sadness, fear, anger, and disgust) and in a neutral tone, yielding 1,763 audio files. In this article, we describe the corpus as well as a validation study of the pseudo-words. A total of 96 native English speakers completed a forced choice emotion identification task. All emotions were recognized better than chance overall, with substantial variability among the different tokens. All of the recordings, including the ambiguous stimuli, are made freely available, and the recognition rates and the full confusion matrices for each stimulus are provided in order to assist researchers and clinicians in the selection of stimuli. The corpus has unique characteristics that can be useful for experimental paradigms that require controlled stimuli (e.g., electroencephalographic or fMRI studies). Stimuli from this corpus could be used by researchers and clinicians to answer a variety of questions, including investigations of emotion processing in individuals with certain temperamental or behavioral characteristics associated with difficulties in emotion recognition (e.g., individuals with psychopathic traits); in bilingual individuals or nonnative English speakers; in patients with aphasia, schizophrenia, or other mental health disorders (e.g., depression); or in training automatic emotion recognition algorithms. The Hoosier Vocal Emotions Corpus is available at https://psycholinguistics.indiana.edu/hoosiervocalemotions.htm.
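
The validation results above (per-stimulus recognition rates and full confusion matrices from a six-alternative forced-choice task) can be reproduced from trial-level responses. Below is a minimal sketch, assuming a hypothetical responses.csv with one row per trial; the corpus itself provides the recordings and published rates, not this script.

```python
# Minimal sketch: recognition rates and a confusion matrix from
# forced-choice responses. The file name and column names are
# hypothetical; adapt them to your own response log.
import pandas as pd

# One row per trial: the pseudo-word heard, the emotion the actress
# intended, and the emotion the listener chose.
df = pd.read_csv("responses.csv")  # columns: stimulus, intended, chosen

# Confusion matrix: intended emotion (rows) vs. chosen emotion (columns),
# normalized so each row sums to 1.
confusion = pd.crosstab(df["intended"], df["chosen"], normalize="index")
print(confusion.round(2))

# Per-stimulus recognition rate: proportion of trials on which the
# listener's choice matched the intended emotion.
df["correct"] = df["intended"] == df["chosen"]
rates = df.groupby("stimulus")["correct"].mean().sort_values()
print(rates.head(10))  # the most ambiguous tokens

# Chance for a six-alternative choice (five emotions plus neutral) is 1/6.
above_chance = rates[rates > 1 / 6]
```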

Source
http://dx.doi.org/10.3758/s13428-019-01288-0

Publication Analysis

Top Keywords

vocal emotions (12), hoosier vocal (8), emotions corpus (8), emotion processing (8), english speakers (8), researchers clinicians (8), stimuli corpus (8), emotion recognition (8), corpus (6), emotions (5)

Similar Publications

Cross-regional cultural recognition of adolescent voice emotion.

Front Psychol

December 2024

Department of Psychology, Shaoxing University, Shaoxing, Zhejiang, China.

Background: Previous studies have demonstrated an in-group advantage in emotion recognition: individuals identify emotions more accurately within their own culture than across cultures. However, existing research has focused mainly on cross-cultural variation in vocal emotion recognition, with limited attention to intracultural differences, and little work has examined adolescents' ability to recognize emotions conveyed by vocal cues across cultural settings.

Harmonic-to-noise ratio as speech biomarker for fatigue: K-nearest neighbour machine learning algorithm.

Med J Armed Forces India

December 2024

Associate Professor, Dayanand Sagar University, Bengaluru, India.

Background: The voice conveys vital information about a person's physical and emotional health, and altered voice quality is noticeable after sleep loss. The circadian rhythm controls the sleep cycle; when it is disrupted, the resulting fatigue is manifested in speech.
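
As a sketch of the kind of pipeline this abstract implies (extracting harmonic-to-noise ratio from speech and feeding it to a k-nearest-neighbour classifier), here is one possible implementation. The parselmouth (Praat) and scikit-learn libraries, the file names, and the labels are assumptions, not the paper's actual pipeline or data.

```python
# Hypothetical sketch: harmonic-to-noise ratio (HNR) as a fatigue
# biomarker, classified with k-nearest neighbours.
import numpy as np
import parselmouth
from sklearn.neighbors import KNeighborsClassifier

def mean_hnr(wav_path: str) -> float:
    """Mean HNR (dB) over the voiced frames of a recording."""
    sound = parselmouth.Sound(wav_path)
    harmonicity = sound.to_harmonicity()
    values = harmonicity.values[harmonicity.values != -200]  # -200 marks unvoiced frames
    return float(np.mean(values))

# Placeholder recordings and labels (0 = rested, 1 = fatigued);
# a real study would use many speakers and recordings.
wavs = ["s01_rested.wav", "s01_fatigued.wav", "s02_rested.wav", "s02_fatigued.wav"]
labels = [0, 1, 0, 1]

X = np.array([[mean_hnr(w)] for w in wavs])  # one HNR feature per recording
knn = KNeighborsClassifier(n_neighbors=1)    # tiny k for this toy example
knn.fit(X, labels)
print(knn.predict(X))
```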

Purpose: We investigate the extent to which automated audiovisual metrics extracted during an affect production task show statistically significant differences between a cohort of children diagnosed with autism spectrum disorder (ASD) and typically developing controls.

Method: Forty children with ASD and 21 neurotypical controls interacted with a multimodal conversational platform with a virtual agent, Tina, who guided them through tasks prompting facial and vocal communication of four emotions-happy, angry, sad, and afraid-under conditions of high and low verbal and social cognitive task demands.

Results: Compared with controls, individuals with ASD exhibited greater standard deviation of the voice's fundamental frequency, with the minima and maxima of the pitch contour occurring at earlier time points.
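
The pitch measures in this snippet (F0 standard deviation and the timing of the contour's minima and maxima) can be computed from a recording in a few lines. Below is a possible sketch using librosa's pyin tracker; the library, settings, and file name are assumptions, since the abstract does not name the study's extraction tools.

```python
# Sketch: F0 variability and timing of pitch-contour extrema.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav")  # placeholder file name
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
times = librosa.times_like(f0, sr=sr)

voiced = ~np.isnan(f0)                        # pyin marks unvoiced frames NaN
f0_sd = np.nanstd(f0)                         # variability of the F0 contour
t_min = times[voiced][np.argmin(f0[voiced])]  # when the pitch minimum occurs
t_max = times[voiced][np.argmax(f0[voiced])]  # when the pitch maximum occurs
print(f"F0 SD: {f0_sd:.1f} Hz, min at {t_min:.2f} s, max at {t_max:.2f} s")
```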

Epilepsy Aphasia Syndrome (EAS) is a spectrum of childhood disorders characterized by complex comorbidities, including epilepsy and the emergence of cognitive and language disorders. CNKSR2 is an X-linked gene in which mutations are linked to EAS. We previously demonstrated that Cnksr2 knockout (KO) mice model key phenotypes of EAS analogous to those present in patients with mutations in the gene.

Behavioral contagion is widespread in primates, with yawn contagion (YC) being a well-known example. Often associated with ingroup dynamics and synchronization, the possible functions and evolutionary pathways of YC remain subjects of active debate. Among nonhuman animals, geladas (Theropithecus gelada) are the only species known to occasionally emit a distinct vocalization while yawning.
