Publications by authors named "Savariaux C"

A quick corrective response of the tongue has previously been observed experimentally during speech posture stabilization, in response to a sudden stretch perturbation of the tongue. Given its relatively short latency (< 150 ms), the response could be driven by somatosensory feedback alone. The current study assessed this hypothesis by examining whether the response is still induced in the absence of auditory feedback.


Objectives: The study aims to better understand the rhythmic abilities of people who stutter and to identify which processes are potentially impaired in this population: (1) beat perception and reproduction; (2) the execution of movements, in particular their initiation; (3) sensorimotor integration.

Material And Method: The finger-tapping behavior of 16 adults who stutter (PWS) was compared with that of 16 matched controls (PNS) in five rhythmic tasks of varying complexity: three synchronization tasks (a simple 1:1 isochronous pattern, a complex non-isochronous pattern, and a 4 taps:1 beat isochronous pattern), a reaction task to an aperiodic and unpredictable pattern, and a task in which an isochronous pattern had to be reproduced after passive listening.

Results: PWS were able to reproduce an isochronous pattern on their own, without external auditory stimuli, with accuracy similar to that of PNS but with increased variability.

Article Synopsis
  • Researchers examined the acoustic features and articulation involved in beatboxing, analyzing twelve drum sounds across five categories (kick, snare, hi-hat, rimshot, and cymbal).
  • They collected various types of data through advanced techniques like electroglottography and acoustic recording, achieving a high cluster purity of 94% in sound classification.
  • The study found significant differences in sound intensity between humming and non-humming techniques, along with the involvement of the tongue in sound production and the use of multiple airstream mechanisms.

Auditory speech perception enables listeners to access phonological categories from speech sounds. During speech production and speech motor learning, speakers experience matched auditory and somatosensory input. Accordingly, access to phonetic units might also be provided by somatosensory information.


Speech unfolds in time and, as a consequence, its perception requires temporal integration. Yet studies of audio-visual speech processing have often overlooked this temporal dimension. Here, we address the time course of audio-visual speech processing in a phoneme identification task using a gating paradigm.


Purpose: This study compares the precision of the electromagnetic articulographs used in speech research: Northern Digital Instruments' Wave and Carstens' AG200, AG500, and AG501 systems.

Method: The fluctuation of the distances between 3 pairs of sensors, attached to a manually rotated device that could position them inside the measurement volumes, was determined. For each device, 2 precision estimates based on the 95% quantile range of these distances (QR95) were defined: the local QR95, computed for bins around specific rotation angles, and the global QR95, computed for all angles pooled.
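A minimal sketch of how such estimates could be computed. This is an illustration, not the authors' code: the reading of "QR95" as the spread between the 2.5th and 97.5th percentiles, the 10° bin width, and the simulated data are all assumptions here.

```python
import random
from statistics import quantiles

def qr95(distances):
    """95% quantile range: spread between the 2.5th and 97.5th percentiles
    (one plausible reading of "QR95"; see the paper for the exact definition)."""
    q = quantiles(distances, n=40, method="inclusive")  # cut points at 2.5%, 5%, ..., 97.5%
    return q[-1] - q[0]

def global_and_local_qr95(distances, angles, bin_width=10.0):
    """Global QR95 over all angles pooled, plus a local QR95 per rotation-angle bin."""
    by_bin = {}
    for d, a in zip(distances, angles):
        by_bin.setdefault(bin_width * int(a // bin_width), []).append(d)
    local = {b: qr95(ds) for b, ds in sorted(by_bin.items()) if len(ds) >= 2}
    return qr95(distances), local

# Simulated inter-sensor distances (mm) around a nominal 20 mm, one per rotation angle
random.seed(0)
angles = [random.uniform(0.0, 360.0) for _ in range(5000)]
dists = [20.0 + random.uniform(-0.1, 0.1) for _ in range(5000)]
g, local = global_and_local_qr95(dists, angles)
print(f"global QR95 = {g:.3f} mm over {len(local)} angle bins")
```

A rigid pair of sensors has a fixed true distance, so any spread in the measured distances reflects measurement error; the local/global split separates angle-dependent error from overall error, as in the study's design.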


Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications.

Article Synopsis
  • We undergo perceptual narrowing for phoneme identification as we specialize in our native language, losing the ability to recognize sounds not present in it.
  • A study tested bilingual and monolingual adults on identifying a Bengali phoneme contrast that doesn't exist in their languages, using both audio-only and audiovisual methods.
  • Results showed that while both groups struggled in audio-only conditions, they improved in audiovisual settings; however, bilinguals were slower and less accurate, indicating different processing strategies compared to monolinguals.
Article Synopsis
  • The study highlights a lack of research on how covert (internal) and overt (observable) orofacial gestures interact, especially in the context of speech perception theories.
  • Results indicate that dynamic movements of the mandible and lips significantly disrupt an inner speech task, while static facial configurations or manual actions do not have the same effect.
  • The authors suggest that incorporating these orofacial movements into dual-task experiments could provide insights into the motor processes involved in speech perception.

An increasing number of neuroscience papers build on the assumption, published in this journal, that visual speech typically leads auditory speech by 150 ms. However, the estimate of audiovisual asynchrony in the reference paper is valid only in very specific cases: for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". When syllables are chained in sequences, as they typically are in most of a natural speech utterance, asynchrony should be defined differently.


This article focuses on methodological issues related to quantitative assessments of speech quality after glossectomy. Acoustic and articulatory data were collected for 8 consonants from two patients. The acoustic analysis is based on spectral moments and the Klatt VOT.
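Spectral moments treat the normalized power spectrum as a probability distribution over frequency and summarize it by its first four moments (center of gravity, spread, skewness, kurtosis). A minimal sketch of the computation, not the authors' analysis code; the toy spectrum values below are made up:

```python
import math

def spectral_moments(freqs_hz, power):
    """First four spectral moments, treating the normalized power spectrum
    as a probability distribution over frequency: center of gravity (Hz),
    standard deviation (Hz), skewness, and kurtosis."""
    total = sum(power)
    weights = [p / total for p in power]                           # normalize to sum 1
    cog = sum(f * w for f, w in zip(freqs_hz, weights))            # 1st moment
    var = sum((f - cog) ** 2 * w for f, w in zip(freqs_hz, weights))
    sd = math.sqrt(var)                                            # 2nd moment (spread)
    skew = sum((f - cog) ** 3 * w for f, w in zip(freqs_hz, weights)) / sd ** 3
    kurt = sum((f - cog) ** 4 * w for f, w in zip(freqs_hz, weights)) / sd ** 4
    return cog, sd, skew, kurt

# A symmetric toy spectrum centered on 4 kHz: COG at the peak, zero skewness
cog, sd, skew, kurt = spectral_moments(
    [3000, 3500, 4000, 4500, 5000], [1, 2, 3, 2, 1])
```

These moments are commonly used to characterize fricative and stop bursts, which is why they suit a consonant-production assessment after glossectomy.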


This paper presents a quantitative and comprehensive study of the lip movements of a given speaker in different speech/nonspeech contexts, with a particular focus on silences (i.e., when no sound is produced by the speaker).


The relations between production and perception in 4-year-old children were examined in a study of compensation strategies for a lip-tube perturbation. Acoustic and perceptual analyses of the rounded vowel [u] produced by twelve 4-year-old French speakers were conducted under two conditions: normal, and with a 15-mm-diameter tube inserted between the lips. Recordings of isolated vowels were made in the normal condition before any perturbation (N1); immediately upon insertion of the tube and for the next 19 trials in this perturbed condition, either with (P2) or without (P1) articulatory instructions; and in the normal condition after the perturbed trials (N2).


In the framework of experimental phonetics, our approach to the study of speech production is based on the measurement, analysis, and modeling of orofacial articulators such as the jaw, the face and lips, the tongue, and the velum. We therefore present in this article the experimental techniques that allow the shape and movement of the speech articulators to be characterized (static and dynamic MRI, X-ray computed tomography, electromagnetic articulography, video recording). We then describe the linear models of the various organs that can be built from speaker-specific articulatory data.

Article Synopsis
  • Lip reading helps people understand speech better by watching the speaker's lips, especially in noisy environments.
  • Recent experiments show that seeing lips can improve sensitivity to sound, making it easier to detect speech in noise.
  • The study found that visual information significantly enhances speech intelligibility compared to just audio, highlighting its importance in understanding communication.

Functional tests are needed to assess the quality of reconstructive surgery after treatment of intraoral cancers. Quality-of-life tests are subjective, and cinefluoroscopy is a demanding, non-comparative procedure. Here we develop a method to test patients' capacity to make maximal use of their articulatory space.


A perceptual analysis of the French vowel [u] produced by 10 speakers under normal and perturbed conditions (Savariaux et al., 1995) is presented. It aims to characterize, in the perceptual domain, the speaker's task for this vowel and then to understand the strategies the speakers developed to deal with the lip perturbation. Identification and rating tests showed that French [u] is fairly well described perceptually in the [F1, (F2-F0)] plane, and that the parameter ((F2-F0) + F1)/2 (all frequencies in bark) provides a good overall correlate of the "grave" feature classically used to describe the vowel [u] across languages.
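The correlate above is simply the average of (F2-F0) and F1 after converting each frequency to bark. As an illustration (not the study's code: the Traunmüller hertz-to-bark approximation and the example formant values are assumptions here):

```python
def hz_to_bark(f_hz):
    """Traunmüller's (1990) approximation of the bark scale (an assumption here;
    the study may have used a different hertz-to-bark conversion)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def grave_correlate(f0_hz, f1_hz, f2_hz):
    """((F2 - F0) + F1) / 2 with every frequency first converted to bark;
    lower values correspond to a lower-frequency, more "grave" vowel quality."""
    f0, f1, f2 = (hz_to_bark(f) for f in (f0_hz, f1_hz, f2_hz))
    return ((f2 - f0) + f1) / 2.0

# Illustrative values for a French [u]: F0 = 120 Hz, F1 = 300 Hz, F2 = 750 Hz
value = grave_correlate(120.0, 300.0, 750.0)
print(f"grave correlate = {value:.2f} bark")
```

Working in bark rather than hertz keeps the measure closer to auditory spacing, which fits the paper's goal of characterizing the speaker's task in the perceptual domain.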
