Publications by authors named "Coriandre Vilain"

Cued Speech (CS) is a communication system that uses manual gestures to facilitate lipreading. In this study, we investigated how CS information interacts with natural speech using event-related potential (ERP) analyses in French-speaking, typically hearing (TH) adults who were either naïve to CS or experienced CS producers. The audiovisual (AV) presentation of lipreading information elicited an amplitude attenuation of the entire N1/P2 complex in both groups, accompanied by an N1 latency facilitation in the group of CS producers.
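
As a rough illustration of the kind of N1/P2 measurement such ERP analyses rely on, the sketch below extracts peak amplitudes and latencies from averaged EEG epochs with MNE-Python. The file name, event codes and analysis windows are placeholder assumptions for illustration, not details of the actual study.

```python
# Hypothetical N1/P2 peak extraction with MNE-Python; the file name, event codes
# and time windows are illustrative assumptions, not taken from the study.
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical recording
events = mne.find_events(raw)
event_id = {"audio_only": 1, "audiovisual": 2}  # assumed condition codes
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
                    baseline=(None, 0), preload=True)

for cond in event_id:
    evoked = epochs[cond].average().pick("eeg")
    # N1: most negative deflection ~70-150 ms; P2: most positive ~150-250 ms.
    _, n1_lat, n1_amp = evoked.get_peak(tmin=0.07, tmax=0.15, mode="neg",
                                        return_amplitude=True)
    _, p2_lat, p2_amp = evoked.get_peak(tmin=0.15, tmax=0.25, mode="pos",
                                        return_amplitude=True)
    print(f"{cond}: N1 {n1_amp:.2e} V at {n1_lat * 1000:.0f} ms, "
          f"P2 {p2_amp:.2e} V at {p2_lat * 1000:.0f} ms")
```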

A computational model of speech perception, COSMO (Laurent et al., 2017), predicts that speech sounds should evoke both auditory representations in temporal areas and motor representations mainly in inferior frontal areas. Importantly, the model also predicts that auditory representations should be narrower, i.

How do children learn to write letters? During writing acquisition, some letters may be more difficult to produce than others because certain movement sequences require more precise motor control (e.g., the rotation that produces curved lines, as in the letter O, or the pointing movement needed to trace the horizontal bar of a T).

Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate for analyzing such data, because durations are typically nonnegative and skew-distributed. We therefore recommend using a statistical methodology specific to duration data.
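
As a minimal sketch of one duration-appropriate approach (an assumption on my part, not necessarily the specific methodology recommended in the paper), repeated, right-skewed durations can be modelled on their natural nonnegative scale with a Gamma family and a log link, with within-subject correlation handled by generalized estimating equations:

```python
# Hypothetical sketch: repeated, nonnegative, right-skewed durations modelled
# with a Gamma GEE (log link) rather than a classical linear mixed model.
# The data are simulated; column names and effect sizes are arbitrary.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_rep = 20, 10
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_rep),
    "condition": np.tile([0, 1], n_subj * n_rep // 2),
})
# Strictly positive, skewed durations (seconds) with a condition effect.
df["duration"] = rng.gamma(shape=2.0, scale=0.5 + 0.3 * df["condition"])

# Gamma family with log link keeps the response on its natural scale;
# an exchangeable working correlation accounts for repeated measures per subject.
model = smf.gee("duration ~ condition", groups="subject", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log()),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```

A Gamma or log-normal mixed model would follow the same rationale: the skew is absorbed by the distributional family rather than by transforming the response.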

Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and on prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities.

Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. One unanswered issue arising from these studies is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that had previously been recorded either from the participant or from a speaker they had never met.

Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse-sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions.

In audiovisual (AV) speech perception, the auditory and visual streams are generally integrated into a single fused percept. One classic example is the McGurk effect, in which incongruent auditory and visual speech signals may lead to a fused percept that differs from both the visual and the auditory input. In a previous set of experiments, we showed that when a McGurk stimulus is preceded by an incongruent AV context (composed of incongruent auditory and visual speech materials), the amount of McGurk fusion is greatly reduced.

Studies of speech motor control suggest that articulatory and phonemic goals are defined in multidimensional motor, somatosensory, and auditory spaces. To test whether motor simulation might rely on sensory-motor coding common with that used for motor execution, we used a repetition suppression (RS) paradigm while measuring neural activity with sparse-sampling fMRI during repeated overt and covert orofacial and speech actions. RS refers to the phenomenon whereby repeated stimuli or motor acts lead to decreased activity in specific neural populations, and it is associated with enhanced adaptive learning related to the repeated stimulus attributes.
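
For a sense of how a repetition suppression effect might be quantified at the single-subject level, the sketch below contrasts "first" versus "repeated" presentations in a voxel-wise GLM with Nilearn. The image and events file names, TR and condition labels are hypothetical, not taken from the study.

```python
# Hypothetical first-level repetition-suppression contrast with Nilearn.
# Filenames, TR and condition labels ("first", "repeated") are illustrative only.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Events table with one row per trial: onset (s), duration (s), trial_type.
events = pd.read_csv("sub01_events.tsv", sep="\t")  # hypothetical events file
assert {"onset", "duration", "trial_type"} <= set(events.columns)

model = FirstLevelModel(t_r=2.5,            # assumed repetition time
                        hrf_model="glover",
                        smoothing_fwhm=5.0)
model = model.fit("sub01_bold.nii.gz", events=events)  # hypothetical 4D image

# Repetition suppression predicts weaker responses to repeated items, so the
# "first - repeated" contrast should be positive in adapting regions.
rs_map = model.compute_contrast("first - repeated", output_type="z_score")
rs_map.to_filename("sub01_rs_zmap.nii.gz")
```

The contrast logic stays the same under a sparse-sampling acquisition; only the scan timing information fed to the model changes.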

Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures either viewed or felt through manual tactile contact with the speaker's face. Given the temporal precedence of the haptic and visual signals over the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggests that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were compared here during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker.
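
As a concrete (and entirely simulated) illustration of the kind of group-level comparison such a design affords, the sketch below contrasts per-participant N1 amplitudes across the three conditions; the sample size and effect sizes are arbitrary assumptions, not results.

```python
# Hypothetical comparison of N1 amplitudes (one value per participant and
# modality) across auditory-only (A), audio-visual (AV) and audio-haptic (AH)
# speech perception. All numbers are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 16                                      # assumed number of participants
n1_a = rng.normal(-4.0, 1.0, n)             # N1 amplitude (microvolts), A condition
n1_av = n1_a + rng.normal(1.0, 0.8, n)      # attenuated (less negative) in AV
n1_ah = n1_a + rng.normal(0.7, 0.8, n)      # attenuated in AH

# Omnibus repeated-measures comparison (nonparametric Friedman test),
# followed by paired t-tests against the auditory-only baseline.
print(stats.friedmanchisquare(n1_a, n1_av, n1_ah))
print("A vs AV:", stats.ttest_rel(n1_a, n1_av))
print("A vs AH:", stats.ttest_rel(n1_a, n1_ah))
```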

Speech can be perceived not only by the ear and by the eye but also by the hand, with speech gestures felt from manual tactile contact with the speaker's face. In the present electro-encephalographic study, early cross-modal interactions were investigated by comparing auditory evoked potentials during auditory, audio-visual and audio-haptic speech perception in dyadic interactions between a listener and a speaker. In line with previous studies, early auditory evoked responses were attenuated and speeded up during audio-visual compared to auditory speech perception.

Article Synopsis
  • This study investigated the brain regions activated during independent supralaryngeal (lip, tongue, jaw) and laryngeal (vowel production) movements using fMRI.
  • The research found that many brain areas were commonly activated across tasks, including sensorimotor areas and the basal ganglia, while differences were mainly observed in auditory cortices and sensorimotor cortex during vowel vocalization.
  • Additionally, the findings revealed a specific organization of movements in the brain, showing how different orofacial actions are represented in a structured manner within the motor and sensory areas.

This paper presents biomechanical finite element models developed in the framework of computer-assisted maxillofacial surgery. After a brief overview of the continuous elastic modelling method, two models are introduced and their use in computer-assisted applications is discussed. The first model deals with orthognathic surgery and aims at predicting the facial consequences of maxillary and mandibular osteotomies.
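
As a toy illustration of the finite element principle underlying such continuous elastic models (a 1-D linear elastic bar rather than a 3-D soft-tissue mesh, so purely a didactic assumption), the sketch below assembles element stiffness matrices, applies a boundary condition and solves for nodal displacements:

```python
# Minimal 1-D linear elastic finite element sketch (didactic only): a bar of
# uniform stiffness, fixed at one end and pulled at the other. Real surgical
# models use 3-D meshes and soft-tissue constitutive laws; values are arbitrary.
import numpy as np

n_elem = 4                 # number of bar elements
n_nodes = n_elem + 1
E = 1.0e6                  # Young's modulus (Pa), arbitrary
A = 1.0e-4                 # cross-sectional area (m^2), arbitrary
L = 0.1                    # length of each element (m)

# Global stiffness matrix assembled from identical 2x2 element matrices.
k_e = (E * A / L) * np.array([[1.0, -1.0],
                              [-1.0, 1.0]])
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k_e

# Load vector: 10 N traction applied at the free end node.
f = np.zeros(n_nodes)
f[-1] = 10.0

# Dirichlet boundary condition: node 0 is fixed (u = 0); solve the reduced system.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
print("Nodal displacements (m):", u)
```

The same assemble-and-solve pattern, generalized to 3-D elements and soft-tissue material laws, underlies the kind of models described above.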
