Publications by authors named "Aucouturier J"

Social interaction research is lacking an experimental paradigm enabling researchers to make causal inferences in free social interactions. For instance, the expressive signals that causally modulate the emergence of romantic attraction during interactions remain unknown. To disentangle causality in the wealth of covarying factors that govern social interactions, we developed an open-source video-conference platform enabling researchers to covertly manipulate the social signals produced by participants during interactions.


The interplay between the different components of emotional contagion (i.e. emotional state and facial motor resonance), during both implicit and explicit appraisal of emotion, remains controversial.


The human voice is a potent social signal and a distinctive marker of individual identity. As individuals go through puberty, their voices undergo acoustic changes, setting them apart from others. In this article, we propose that hormonal fluctuations in conjunction with morphological vocal tract changes during puberty establish a sensitive developmental phase that affects the monitoring of the adolescent voice and, specifically, self-other distinction.


After a right hemisphere stroke, more than half of patients are impaired in their capacity to produce or comprehend speech prosody. Yet, despite its social-cognitive consequences for patients, aprosodia following stroke has received scant attention. In this report, we introduce a novel, simple psychophysical procedure which, by combining systematic digital manipulations of speech stimuli with reverse-correlation analysis, allows us to estimate the internal sensory representations that subtend how individual patients perceive speech prosody, and the level of internal noise that governs behavioral variability in how patients apply these representations.
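The reverse-correlation logic described above can be illustrated with a toy simulation. Everything here is hypothetical for illustration (the 6-segment pitch contour, the internal template, the noise levels are invented, not the study's actual stimuli or parameters): random perturbations are applied to a stimulus, a simulated observer makes forced choices, and the first-order kernel is recovered by averaging the chosen minus the rejected noise profiles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experiment: each trial presents two utterances whose pitch
# contours are perturbed by random Gaussian profiles over 6 time segments;
# a simulated observer picks the one closer to an internal template.
n_trials, n_segments = 2000, 6
template = np.array([0.0, 0.2, 0.5, 1.0, 0.3, -0.2])  # assumed internal representation
internal_noise_sd = 0.5  # trial-by-trial internal (decision) noise

noise_a = rng.normal(0, 1, (n_trials, n_segments))
noise_b = rng.normal(0, 1, (n_trials, n_segments))

# Decision variable: match to template, corrupted by internal noise.
score_a = noise_a @ template + rng.normal(0, internal_noise_sd, n_trials)
score_b = noise_b @ template + rng.normal(0, internal_noise_sd, n_trials)
chose_a = score_a > score_b

# First-order kernel: mean chosen profile minus mean rejected profile.
chosen = np.where(chose_a[:, None], noise_a, noise_b)
rejected = np.where(chose_a[:, None], noise_b, noise_a)
kernel = chosen.mean(axis=0) - rejected.mean(axis=0)

# With enough trials, the kernel is proportional to the template.
r = np.corrcoef(kernel, template)[0, 1]
print(kernel.shape, round(float(r), 2))
```

Estimating internal noise in practice typically requires an additional manipulation (e.g. a double-pass design, where identical trial pairs are repeated and response consistency is measured), which this sketch omits.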


A wealth of behavioral evidence indicates that sounds with increasing intensity (i.e. sounds that appear to loom towards the listener) are processed with greater attentional and physiological resources than receding sounds.


Recent deep-learning techniques have made it possible to manipulate facial expressions in digital photographs or videos; however, these techniques still lack fine-grained and personalized control over what they create. Moreover, current technologies are highly dependent on large labeled databases, which limits the range and complexity of expressions that can be modeled. Thus, these technologies cannot deal with non-basic emotions.

Article Synopsis
  • People have a natural ability to recognize emotions and individuals better in their own culture, known as the other-race and language-familiarity effect.
  • Researchers used voice transformations to create identical acoustic stimuli in French and Japanese to eliminate cultural expression differences and conducted cross-cultural experiments.
  • The results showed that participants were more accurate in identifying emotional cues and pitch changes in their native language, indicating that difficulties stem more from unfamiliarity with the sounds of another language than from differences in its structure.

Emotional speech perception is a multisensory process. When speaking with an individual, we concurrently integrate information from their voice and face to decode e.g.

Article Synopsis
  • The study looked at how the sound of a person's own name affects brain responses in patients who might not be fully aware of their surroundings.
  • Researchers examined data from 251 patients in French hospitals to see how different ways of saying names (like high or low pitch) influenced brain wave patterns called P300.
  • They found that names said in a higher pitch made the brain respond faster than names said in a lower pitch, which could help doctors understand how to better assess consciousness in patients.

Rapid technological advances in artificial intelligence are creating opportunities for real-time algorithmic modulations of a person's facial and vocal expressions, or 'deep-fakes'. These developments raise unprecedented societal and ethical questions which, despite much recent public awareness, are still poorly understood from the point of view of moral psychology. We report here on an experimental ethics study conducted on a sample of N = 303 participants (predominantly young, western and educated), who evaluated the acceptability of vignettes describing potential applications of expressive voice transformation technology.


A wealth of theoretical and empirical arguments have suggested that music triggers emotional responses by resembling the inflections of expressive vocalizations, but have done so using low-level acoustic parameters (pitch, loudness, speed) that, in fact, may not be processed by the listener in reference to the human voice. Here, we exploit recently available computational models that allow the simulation of three specifically vocal emotional behaviours: smiling, vocal tremor and vocal roughness. When applied to musical material, we find that these three acoustic manipulations trigger emotional perceptions that are remarkably similar to those observed for speech and scream sounds, and identical across musician and non-musician listeners.


Imitation is one of the core building blocks of human social cognition, supporting capacities as diverse as empathy, social learning, and knowledge acquisition. Newborns' ability to match others' motor acts, while quite limited initially, drastically improves during the first months of development. Of notable importance to human sociality is our tendency to rapidly mimic facial expressions of emotion.


Whether speech prosody truly and naturally reflects a speaker's subjective confidence, rather than other dimensions such as objective accuracy, is unclear. Here, using a new approach combining psychophysics with acoustic analysis and automatic classification of verbal reports, we tease apart the contributions of sensory evidence, accuracy, and subjective confidence to speech prosody. We find that subjective confidence and objective accuracy are distinctly reflected in the loudness, duration and intonation of verbal reports.


The success of human cooperation crucially depends on mechanisms enabling individuals to detect unreliability in their conspecifics. Yet, how such epistemic vigilance is achieved from naturalistic sensory inputs remains unclear. Here we show that listeners' perceptions of the certainty and honesty of other speakers from their speech are based on a common prosodic signature.


Human interactions are often improvised rather than scripted, which suggests that efficient coordination can emerge even when collective plans are largely underspecified. One possibility is that such forms of coordination primarily rely on mutual influences between interactive partners, and on perception-action couplings such as entrainment or mimicry. Yet some forms of improvised joint actions appear difficult to explain solely by appealing to these emergent mechanisms.


Emotions are often accompanied by vocalizations whose acoustic features provide information about the physiological state of the speaker. Here, we ask if perceiving these affective signals in one's own voice has an impact on one's own emotional state, and if it is necessary to identify these signals as self-originated for the emotional effect to occur. Participants had to deliberate out loud about how they would feel in various familiar emotional scenarios, while we covertly manipulated their voices in order to make them sound happy or sad.


In the sports domain, cannabis has been prohibited in competition by the World Anti-Doping Agency (WADA) across all sports since 2004. The few studies on physical exercise and cannabis have focused on the main compound, i.e.

Article Synopsis
  • Neural oscillations synchronize with sound dynamics during auditory perception, yet their exact role in processing auditory info is still under investigation.
  • A study using EEG and behavioral measures found that complex musical structures enhance neural entrainment, particularly highlighting the impact of melodic spectral complexity on brain activity.
  • The research concludes that neural entrainment is affected by the characteristics of music, showing a specific connection to auditory processing distinct from emotional responses.

Many animal vocalizations contain nonlinear acoustic phenomena as a consequence of physiological arousal. In humans, nonlinear features are processed early in the auditory system, and are used to efficiently detect alarm calls and other urgent signals. Yet, high-level emotional and semantic contextual factors likely guide the perception and evaluation of roughness features in vocal sounds.


Background: Insufficient levels of physical activity and increasing sedentary time among children and youth are being observed internationally. The purpose of this paper is to summarize findings from France's 2018 Report Card on physical activity for children and youth, and to make comparisons with its 2016 predecessor and with the Report Cards of other countries engaged in the Global Matrix 3.0.


Objective: Long before clinical complications of type 1 diabetes (T1D) develop, oxygen supply and use can be altered during activities of daily life. We examined all steps of the oxygen pathway, from the lungs to the mitochondria, in patients with uncomplicated T1D, using an integrative ex vivo (muscle biopsies) and in vivo (during exercise) approach.

Research Design And Methods: We compared 16 adults with T1D with 16 strictly matched healthy control subjects.


Nitrate (NO3-)-rich beetroot juice (BR) is recognized as an ergogenic supplement that improves exercise tolerance during submaximal to maximal intensity exercise in recreational and competitive athletes. A recent study investigated the effectiveness of BR on exercise performance during supramaximal intensity intermittent exercise (SIE) in Olympic-level track cyclists, but studies conducted in elite endurance athletes are scarce. The present study aimed to determine whether BR supplementation enhances tolerance to SIE in elite endurance athletes.
