Publications by authors named "Jean-Julien Aucouturier"

Social interaction research lacks an experimental paradigm that enables researchers to make causal inferences in free social interactions. For instance, the expressive signals that causally modulate the emergence of romantic attraction during interactions remain unknown. To disentangle causality in the wealth of covarying factors that govern social interactions, we developed an open-source video-conference platform enabling researchers to covertly manipulate the social signals produced by participants during interactions.

The interplay between the different components of emotional contagion (i.e. emotional state and facial motor resonance), during both implicit and explicit appraisal of emotion, remains controversial.

The human voice is a potent social signal and a distinctive marker of individual identity. As individuals go through puberty, their voices undergo acoustic changes, setting them apart from others. In this article, we propose that hormonal fluctuations in conjunction with morphological vocal tract changes during puberty establish a sensitive developmental phase that affects the monitoring of the adolescent voice and, specifically, self-other distinction.

After a right hemisphere stroke, more than half of patients are impaired in their capacity to produce or comprehend speech prosody. Yet, despite its social-cognitive consequences for patients, aprosodia following stroke has received scant attention. In this report, we introduce a novel, simple psychophysical procedure which, by combining systematic digital manipulations of speech stimuli with reverse-correlation analysis, allows estimating the internal sensory representations that subtend how individual patients perceive speech prosody, as well as the level of internal noise that governs behavioral variability in how patients apply these representations.
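The reverse-correlation logic described here can be illustrated with a minimal sketch (hypothetical file name, column names and trial structure; not the authors' actual analysis code): stimuli whose pitch has been randomly perturbed in successive time segments are sorted by the patient's responses, and the difference between chosen and rejected perturbation profiles approximates the internal representation, while response consistency across repeated trial pairs indexes internal noise.

```python
# Minimal sketch of the reverse-correlation idea, under assumed data:
# each row of the hypothetical file prosody_trials.csv is one stimulus whose
# pitch was randomly shifted in successive segments (columns seg1, seg2, ..., in cents),
# with 'chosen' = 1 if the patient picked that stimulus as matching the target prosody,
# and 'pair_id' marking double-pass repetitions of identical trial pairs (two rows each).
import pandas as pd

trials = pd.read_csv("prosody_trials.csv")
segs = [c for c in trials.columns if c.startswith("seg")]

# First-order kernel: mean pitch profile of chosen stimuli minus mean profile
# of rejected stimuli approximates the internal representation of the target prosody.
kernel = (trials.loc[trials.chosen == 1, segs].mean()
          - trials.loc[trials.chosen == 0, segs].mean())

# Internal noise is commonly indexed by response consistency across the two
# presentations of each repeated (double-pass) trial pair.
repeats = trials.dropna(subset=["pair_id"]).groupby("pair_id")["chosen"]
consistency = repeats.apply(lambda r: float(r.iloc[0] == r.iloc[1])).mean()

print("Estimated kernel (cents per segment):")
print(kernel)
print("Double-pass response consistency:", consistency)
```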

A wealth of behavioral evidence indicates that sounds with increasing intensity (i.e. sounds that appear to be looming towards the listener) are processed with increased attentional and physiological resources compared to receding sounds.

Recent deep-learning techniques have made it possible to manipulate facial expressions in digital photographs or videos; however, these techniques still lack fine, personalized ways to control their creation. Moreover, current technologies are highly dependent on large labeled databases, which limits the range and complexity of expressions that can be modeled. Thus, these technologies cannot deal with non-basic emotions.

Article Synopsis
  • People have a natural ability to recognize emotions and individuals better in their own culture, known as the other-race and language-familiarity effects.
  • Researchers used voice transformations to create identical acoustic stimuli in French and Japanese to eliminate cultural expression differences and conducted cross-cultural experiments.
  • The results showed that participants were more accurate in identifying emotional cues and pitch changes in their native language, indicating that difficulties stem more from unfamiliarity with the sounds of another language than from differences in its structure.
Emotional speech perception is a multisensory process. When speaking with an individual, we concurrently integrate the information from their voice and face to decode e.g.

Article Synopsis
  • The study looked at how the sound of a person's own name affects brain responses in patients who might not be fully aware of their surroundings.
  • Researchers examined data from 251 patients in French hospitals to see how different ways of saying names (like high or low pitch) influenced brain wave patterns called P300.
  • They found that names said in a higher pitch made the brain respond faster than names said in a lower pitch, which could help doctors understand how to better assess consciousness in patients.

Rapid technological advances in artificial intelligence are creating opportunities for real-time algorithmic modulations of a person's facial and vocal expressions, or 'deep-fakes'. These developments raise unprecedented societal and ethical questions which, despite much recent public awareness, are still poorly understood from the point of view of moral psychology. We report here on an experimental ethics study conducted on a sample of n = 303 participants (predominantly young, western and educated), who evaluated the acceptability of vignettes describing potential applications of expressive voice transformation technology.

Imitation is one of the core building blocks of human social cognition, supporting capacities as diverse as empathy, social learning, and knowledge acquisition. Newborns' ability to match others' motor acts, while quite limited initially, drastically improves during the first months of development. Of notable importance to human sociality is our tendency to rapidly mimic facial expressions of emotion.

Whether speech prosody truly and naturally reflects a speaker's subjective confidence, rather than other dimensions such as objective accuracy, is unclear. Here, using a new approach combining psychophysics with acoustic analysis and automatic classification of verbal reports, we tease apart the contributions of sensory evidence, accuracy, and subjective confidence to speech prosody. We find that subjective confidence and objective accuracy are distinctly reflected in the loudness, duration and intonation of verbal reports.
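As an illustration of this kind of analysis (a sketch only, with hypothetical file names and labels rather than the study's actual pipeline), the three cues mentioned above can be extracted from recorded verbal reports and related to confidence labels:

```python
# Sketch: extract loudness, duration and a crude intonation measure from
# recorded verbal reports, then relate them to hypothetical confidence labels.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def prosodic_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    loudness = float(np.mean(librosa.feature.rms(y=y)))         # mean RMS energy
    duration = len(y) / sr                                       # report duration in seconds
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    intonation = float(f0[-1] - f0[0]) if len(f0) > 1 else 0.0   # end-minus-start pitch change (Hz)
    return [loudness, duration, intonation]

# Hypothetical dataset: one recording per trial with a binary confidence label.
files = ["report_001.wav", "report_002.wav", "report_003.wav", "report_004.wav"]
labels = np.array([1, 0, 1, 0])   # 1 = high subjective confidence, 0 = low

X = np.array([prosodic_features(f) for f in files])
clf = LogisticRegression().fit(X, labels)
print("Weights for [loudness, duration, intonation]:", clf.coef_[0])
```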

The success of human cooperation crucially depends on mechanisms enabling individuals to detect unreliability in their conspecifics. Yet, how such epistemic vigilance is achieved from naturalistic sensory inputs remains unclear. Here we show that listeners' perceptions of the certainty and honesty of other speakers from their speech are based on a common prosodic signature.

Human interactions are often improvised rather than scripted, which suggests that efficient coordination can emerge even when collective plans are largely underspecified. One possibility is that such forms of coordination primarily rely on mutual influences between interactive partners, and on perception-action couplings such as entrainment or mimicry. Yet some forms of improvised joint actions appear difficult to explain solely by appealing to these emergent mechanisms.

Emotions are often accompanied by vocalizations whose acoustic features provide information about the physiological state of the speaker. Here, we ask if perceiving these affective signals in one's own voice has an impact on one's own emotional state, and if it is necessary to identify these signals as self-originated for the emotional effect to occur. Participants had to deliberate out loud about how they would feel in various familiar emotional scenarios, while we covertly manipulated their voices in order to make them sound happy or sad.

Article Synopsis
  • Neural oscillations synchronize with sound dynamics during auditory perception, yet their exact role in processing auditory info is still under investigation.
  • A study using EEG and behavioral measures found that complex musical structures enhance neural entrainment, particularly highlighting the impact of melodic spectral complexity on brain activity.
  • The research concludes that neural entrainment is affected by the characteristics of music, showing a specific connection to auditory processing distinct from emotional responses.

In social interactions, people have to pay attention to both the 'what' and the 'who'. In particular, expressive changes heard in speech signals have to be integrated with speaker identity, differentiating e.g.

Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not higher-level cognitive processing of e.g.

Smiles, produced by the bilateral contraction of the zygomatic major muscles, are one of the most powerful expressions of positive affect and affiliation and also one of the earliest to develop [1]. The perception-action loop responsible for the fast and spontaneous imitation of a smile is considered a core component of social cognition [2]. In humans, social interaction is overwhelmingly vocal, and the visual cues of a smiling face co-occur with audible articulatory changes on the speaking voice [3].
