Publications by authors named "Stefan R Schweinberger"

Introduction: Research has shown that women's vocal characteristics change during the menstrual cycle. Further, evidence suggests that individuals alter their voices depending on the context, such as when speaking to a highly attractive person or to a person with a different social status. The present study aimed to investigate the degree to which women's voices change depending on the vocal characteristics of the interaction partner, and how any such changes are modulated by the woman's current menstrual cycle phase.

Humans are highly social, and typically exercise this ability without noticeable effort. Yet such social fluency poses challenges both for the human brain to compute and for scientists to study. Over the last few decades, neuroscientific research on human sociality has witnessed a shift in focus from single-brain analysis to complex dynamics occurring across several brains, raising questions about what these dynamics mean and how they relate to multifaceted behavioural models.

Article Synopsis
  • Humans excel at recognizing familiar faces, but past research using EEG/ERP methods hasn't fully captured the neural mechanisms involved in this ability.
  • The article highlights key aspects of familiar face recognition, including image-independence, varying levels of familiarity, automatic recognition, and how selective this process is.
  • A new theoretical framework is proposed, breaking down familiar face recognition into distinct phases and integrating current concepts, marking significant progress in understanding the brain's role in this skill.

Cracking the non-verbal "code" of human emotions has been a chief interest of generations of scientists. Yet, despite much effort, a dictionary that clearly maps non-verbal behaviours onto meaning remains elusive. We suggest this is due to an over-reliance on language-related concepts and an under-appreciation of the evolutionary context in which a given non-verbal behaviour emerged.

The brain calibrates itself based on its past stimulus diet, which makes frequently observed stimuli appear typical (as opposed to uncommon stimuli, which appear distinctive). Based on predictive processing theory, the brain should be more "prepared" for typical exemplars, because these contain information that has been encountered frequently, allowing it to represent items of that category economically. Thus, one can ask whether predictability and typicality of visual stimuli interact, or rather act in an additive manner.
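
For illustration, the additive-versus-interactive question can be stated concretely: in a 2x2 design crossing predictability and typicality, the effects are additive exactly when the predictability benefit is the same for typical and distinctive stimuli. The response times below are invented toy numbers, not data from the study:

```python
# Toy illustration of "additive vs. interactive" in a 2x2 design crossing
# predictability (pred/unpred) and typicality (typical/distinctive).
# All response times (ms) are hypothetical.
rt = {
    ("unpred", "distinctive"): 620.0,
    ("unpred", "typical"):     600.0,
    ("pred",   "distinctive"): 590.0,
    ("pred",   "typical"):     570.0,
}

benefit_typical = rt[("unpred", "typical")] - rt[("pred", "typical")]
benefit_distinct = rt[("unpred", "distinctive")] - rt[("pred", "distinctive")]
interaction = benefit_typical - benefit_distinct

print(benefit_typical, benefit_distinct, interaction)
# 30.0 30.0 0.0 -> equal benefits, i.e. a purely additive pattern here
```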

Neurofeedback training (NFT) is a promising adjuvant intervention method. The desynchronization of the mu rhythm (8-13 Hz) in the electroencephalogram (EEG) over centro-parietal areas is considered a valid indicator of mirror neuron system (MNS) activation, which has been associated with social skills. Still, the effect of neurofeedback training on the MNS remains to be thoroughly investigated.
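
The core quantity here, mu-band event-related desynchronization (ERD), is simple to state computationally. Below is a minimal sketch using NumPy and SciPy with synthetic data; the sampling rate, window lengths, and ERD formula variant are assumptions for illustration, not the study's actual pipeline:

```python
# Minimal sketch: quantifying mu-rhythm (8-13 Hz) desynchronization (ERD)
# from a single EEG trace. Sampling rate and data are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate in Hz

def mu_band_power(eeg, fs=FS, low=8.0, high=13.0):
    """Band-pass a 1-D EEG trace to the mu band and return its mean power."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    mu = filtfilt(b, a, eeg)
    return np.mean(mu ** 2)

def erd_percent(baseline, task, fs=FS):
    """ERD in percent: mu-power drop during the task relative to baseline.
    Positive values indicate desynchronization (reduced mu power)."""
    p_base = mu_band_power(baseline, fs)
    p_task = mu_band_power(task, fs)
    return 100.0 * (p_base - p_task) / p_base

# Toy usage with synthetic data standing in for a centro-parietal channel:
rng = np.random.default_rng(0)
baseline = rng.standard_normal(FS * 2)        # 2 s of rest
task = rng.standard_normal(FS * 2) * 0.8      # 2 s with reduced amplitude
print(f"mu ERD: {erd_percent(baseline, task):.1f}%")
```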

Empirical investigations into eyewitness identification accuracy typically necessitate the creation of novel stimulus materials, which can be a challenging and time-consuming task. To facilitate this process and promote further research in this domain, we introduce the new Jena Eyewitness Research Stimuli (JERS). They comprise six video sequences depicting a mock theft committed by two different perpetrators, available in both two-dimensional (2D) and 360° formats, combined with the corresponding lineup images presented in 2D or three-dimensional (3D) format.

Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level.

Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues such as fundamental frequency (F0) and timbre. Yet how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians in doing so, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure, or sadness either in all acoustic cues or selectively in F0 or timbre alone.

We describe JAVMEPS, an audiovisual (AV) database of emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistically induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords.
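
The caricature levels have a simple geometric reading: a voice's parameters are linearly extrapolated away from (140%) or attenuated towards (60%) a neutral reference. A minimal sketch, assuming a numeric feature representation for illustration (the actual stimuli were produced with dedicated voice-morphing software):

```python
# Illustrative sketch of emotion (anti-)caricaturing as linear extrapolation
# relative to a neutral reference. Representing a voice as numeric features
# (here a single hypothetical mean F0 value in Hz) is an assumption.
import numpy as np

def caricature(emotional, neutral, level):
    """level=1.0 reproduces the original, 1.4 a 140% caricature,
    0.6 a 60% anti-caricature."""
    emotional = np.asarray(emotional, dtype=float)
    neutral = np.asarray(neutral, dtype=float)
    return neutral + level * (emotional - neutral)

neutral_f0, happy_f0 = 120.0, 180.0  # hypothetical mean F0 values in Hz
for level in (0.6, 1.0, 1.4):
    print(level, caricature(happy_f0, neutral_f0, level))
# 0.6 -> 156.0 Hz (anti-caricature), 1.0 -> 180.0 Hz, 1.4 -> 204.0 Hz
```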

Guide dogs hold the potential to increase confidence and independence in visually impaired individuals. However, the success of the partnership between a guide dog and its handler depends on various factors, including the compatibility between the dog and the handler. Here, we conducted interviews with 21 guide dog owners to explore determinants of compatibility between the dog and the owner.

Valentine's influential norm-based multidimensional face-space model (nMDFS) predicts that the perceived distinctiveness of a face increases with its distance to the norm. Occipito-temporal event-related potentials (ERPs) have recently been shown to respond selectively to variations in distance-to-norm (P200) or familiarity (N250, late negativity), respectively (Wuttke & Schweinberger, 2019). Despite growing evidence for interindividual differences in face perception skills at the behavioral level, little research has focused on their electrophysiological correlates.
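
For readers unfamiliar with the nMDFS, its core prediction can be sketched computationally: faces are points in a multidimensional space, the norm is their central tendency, and distinctiveness is assumed to scale with Euclidean distance to the norm. Dimensions and values below are hypothetical:

```python
# Minimal sketch of Valentine's norm-based face space: each face is a point
# in a multidimensional space; the norm is the average of encountered faces;
# perceived distinctiveness is assumed to grow with distance to the norm.
import numpy as np

faces = np.array([
    [0.1, -0.2,  0.0],
    [0.2,  0.1, -0.1],
    [-0.1, 0.0,  0.2],
    [1.5,  1.2, -1.0],   # an unusual exemplar, far from the others
])
norm = faces.mean(axis=0)                      # central tendency = the norm
dist = np.linalg.norm(faces - norm, axis=1)    # distance-to-norm per face
print(dist)  # the last face has the largest distance -> most distinctive
```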

Recognizing people from their voices may be facilitated by a voice's distinctiveness, similar to what has been reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice learning. Based on evidence for audiovisual integration in the recognition of familiar people, we studied the behavioral and electrophysiological correlates of voice learning associated with distinctive or non-distinctive faces.

Research into voice perception benefits from manipulation software to gain experimental control over the acoustic expression of social signals such as vocal emotions. Today, parameter-specific voice morphing allows precise control of the emotional quality expressed by single vocal parameters, such as fundamental frequency (F0) and timbre. However, potential side effects, in particular reduced naturalness, could limit the ecological validity of speech stimuli.
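
Conceptually, parameter-specific morphing interpolates a single parameter between two utterances while the remaining parameters stay at a reference. A toy sketch for an F0 contour (real morphing software operates on full spectro-temporal representations; the values are hypothetical per-frame F0 estimates in Hz):

```python
# Conceptual sketch of parameter-specific morphing: linearly interpolate
# only the F0 contour between a neutral and an emotional utterance, while
# (in a real system) timbre and timing would stay at the reference.
import numpy as np

f0_neutral = np.array([110.0, 112.0, 115.0, 113.0])  # hypothetical contour
f0_happy   = np.array([150.0, 170.0, 165.0, 155.0])  # hypothetical contour

def morph_f0(src, tgt, weight):
    """weight=0 -> fully neutral contour, weight=1 -> fully emotional."""
    return (1.0 - weight) * src + weight * tgt

print(morph_f0(f0_neutral, f0_happy, 0.5))  # 50% morph: [130. 141. 140. 134.]
```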

Although different human races do not exist from the perspective of biology and genetics, ascribed 'race' influences psychological processing, such as memory and perception of faces. Research from this Special Issue, as well as a wealth of previous research, shows that other-'race' faces are more difficult to recognize than own-'race' faces, a phenomenon known as the other-'race' effect. Theories of expertise attribute the other-'race' effect to less efficient visual representations of other-'race' faces, which result from reduced visual expertise with other-'race' faces due to limited contact with individuals from other 'racial' groups.

Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), a focus that disregards the communicative importance of efficiently integrating audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances.

Vocal emotion recognition (VER) in natural speech, often referred to as speech emotion recognition (SER), remains challenging for both humans and computers. Applied fields, including clinical diagnosis and intervention, social interaction research, and Human-Computer Interaction (HCI), increasingly benefit from efficient VER algorithms. Several feature sets have been used with machine-learning (ML) algorithms for discrete emotion classification.
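
The ML setup named here follows a standard pattern: acoustic feature vectors per utterance feed a classifier over discrete emotion labels. A schematic scikit-learn sketch with synthetic placeholder features (no claim is made about the specific feature sets or classifier used in the article):

```python
# Schematic discrete-emotion classification pipeline: acoustic feature
# vectors (e.g., F0 statistics, energy, MFCCs) -> scaling -> SVM classifier.
# Features and labels below are random placeholders, so accuracy is near chance.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 24))   # 200 utterances x 24 acoustic features
y = rng.choice(["happy", "sad", "angry", "fearful"], size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```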

Most findings to date suggest preserved voice recognition in prosopagnosia (except in cases with bilateral lesions). Here we report a follow-up examination on M.T.

Two competing theories explain the other-'race' effect (ORE) either by greater perceptual expertise for same-'race' (SR) faces or by social categorization of other-'race' (OR) faces at the expense of individuation. To assess the contributions of expertise and categorization to the ORE, a promising yet overlooked approach is to compare activations for different other-'races'. We present a label-based systematic review of neuroimaging studies reporting increased activity in response to OR faces (African, Caucasian, or Asian) when compared with the participants' SR.

The use of digitally modified stimuli with enhanced diagnostic information to improve verbal communication in children with sensory or central impairments was pioneered by Tallal and colleagues in 1996, who targeted speech comprehension in language-learning impaired children. Today, researchers are aware that successful communication cannot be reduced to linguistic information; it depends strongly on the quality of communication, including non-verbal socio-emotional communication. In children with cochlear implants (CIs), quality of life (QoL) is affected, but this may relate to the ability to recognize emotions in a voice rather than to speech comprehension alone.

The ability to recognize someone's voice spans a broad spectrum, with phonagnosia at the low end and super-recognition at the high end. Yet there is no standardized test to measure an individual's ability to learn and recognize new voices from samples with speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 22-min test based on item response theory and applicable across languages.
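
In IRT-based tests like the JVLMT, the probability of a correct response is modeled as a function of the respondent's ability and item parameters. Below is a sketch of a two-parameter logistic (2PL) model; whether the JVLMT uses the 2PL specifically is an assumption here, and the parameter values are illustrative:

```python
# Sketch of a two-parameter logistic (2PL) IRT model: P(correct) depends on
# ability (theta), item discrimination (a), and item difficulty (b).
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

for theta in (-1.0, 0.0, 1.0):   # below-average to above-average ability
    print(theta, round(p_correct(theta, a=1.5, b=0.0), 2))
# -1.0 -> 0.18, 0.0 -> 0.5, 1.0 -> 0.82: higher ability, higher probability
# of recognizing a learned voice on an item of average difficulty.
```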

Our ability to infer a speaker's emotional state depends on the processing of acoustic parameters such as fundamental frequency (F0) and timbre. Yet, how these parameters are processed and integrated to inform emotion perception remains largely unknown. Here we pursued this issue using a novel parameter-specific voice morphing technique to create stimuli with emotion modulations in only F0 or only timbre.

Since COVID-19 became a pandemic, everyday life has seen dramatic changes affecting individuals, families, and children with and without autism. Among other things, these changes entail more time at home, digital forms of communication, school closures, and reduced support and intervention. Here, we assess the effects of the pandemic on quality of life for school-age autistic and neurotypical children and adolescents.

Objectives: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing.

Design: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level.
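
To make the design concrete: each step of a continuum assigns a morph weight to the target parameter, while non-target parameters are held at a noninformative 50% level. A schematic sketch (the weights would feed a hypothetical morphing routine; no real acoustics are involved):

```python
# Schematic of a parameter-specific morph continuum: the target parameter
# (here F0) sweeps from fearful (0.0) to angry (1.0) across 7 steps, while
# non-target parameters stay fixed at a noninformative 50% level.
import numpy as np

steps = np.linspace(0.0, 1.0, 7)
continuum = [{"f0": float(w), "timbre": 0.5, "time": 0.5} for w in steps]
print(continuum[0])   # {'f0': 0.0, 'timbre': 0.5, 'time': 0.5}
print(continuum[-1])  # {'f0': 1.0, 'timbre': 0.5, 'time': 0.5}
```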

Recent research suggests a disproportionate use of shape information by people with poor face recognition, although texture information appears to be more important for familiar face recognition. Here, we tested a training program with faces that were selectively caricatured in either shape or texture parameters. Forty-eight young adults with poor face recognition skills (1 SD below the mean in at least two of three face processing tests: CFMT, GFMT, BFFT) were pseudo-randomly assigned to one of two training groups or a control group (n = 16 each).
