A quick corrective mechanism of the tongue has previously been observed experimentally during speech posture stabilization in response to a sudden tongue-stretch perturbation. Given its relatively short latency (< 150 ms), the response could be driven by somatosensory feedback alone. The current study tested this hypothesis by examining whether the response is also induced in the absence of auditory feedback.
Objectives: The study aims to better understand the rhythmic abilities of people who stutter and to identify which processes are potentially impaired in this population: (1) beat perception and reproduction; (2) the execution of movements, in particular their initiation; (3) sensorimotor integration.
Material and Method: Finger tapping behavior of 16 adults who stutter (PWS) was compared with that of 16 matched controls (PNS) in five rhythmic tasks of varying complexity: three synchronization tasks (a simple 1:1 isochronous pattern, a complex non-isochronous pattern, and a 4-tap:1-beat isochronous pattern), a reaction task to an aperiodic and unpredictable pattern, and a reproduction task of an isochronous pattern after passive listening.
Results: PWS were able to reproduce an isochronous pattern on their own, without external auditory stimuli, with accuracy similar to that of PNS but with increased variability.
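To make the accuracy/variability distinction concrete: in tapping studies, accuracy is often quantified as the deviation of the mean inter-tap interval from the target tempo, and variability as the dispersion of those intervals. A minimal sketch under that assumption (not necessarily the study's exact measures):

```python
import numpy as np

def tapping_accuracy_and_variability(tap_times, target_interval):
    """Accuracy and variability of self-paced isochronous tapping.

    tap_times       : 1-D array of tap onsets (seconds)
    target_interval : interval of the pattern to reproduce (seconds)

    Accuracy is the signed deviation of the mean inter-tap interval
    (ITI) from the target; variability is the coefficient of variation
    of the ITIs. Common definitions in the tapping literature, assumed
    here for illustration.
    """
    itis = np.diff(tap_times)
    accuracy_error = itis.mean() - target_interval
    variability_cv = itis.std(ddof=1) / itis.mean()
    return accuracy_error, variability_cv
```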
Proc Natl Acad Sci U S A, March 2020
Auditory speech perception enables listeners to access phonological categories from speech sounds. During speech production and speech motor learning, speakers experience matched auditory and somatosensory input. Accordingly, access to phonetic units might also be provided by somatosensory information.
Speech unfolds in time and, as a consequence, its perception requires temporal integration. Yet studies addressing audio-visual speech processing have often overlooked this temporal aspect. Here, we address the time course of audio-visual speech processing in a phoneme identification task using a gating paradigm.
Purpose: This study compares the precision of the electromagnetic articulographs used in speech research: Northern Digital Instruments' Wave and Carstens' AG200, AG500, and AG501 systems.
Method: The fluctuation of the distances between 3 pairs of sensors attached to a manually rotated device that positioned them inside the measurement volumes was determined. For each device, 2 precision estimates based on the 95% quantile range of these distances (QR95) were defined: a local QR95, computed for bins around specific rotation angles, and a global QR95, computed over all angles pooled.
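Both estimates reduce to a simple percentile computation. A minimal sketch, assuming QR95 means the spread between the 2.5th and 97.5th percentiles and using an illustrative 10-degree bin width (both assumptions, not details taken from the paper):

```python
import numpy as np

def qr95(values):
    """95% quantile range: spread between the 2.5th and 97.5th percentiles."""
    lo, hi = np.percentile(values, [2.5, 97.5])
    return hi - lo

def precision_estimates(distances, angles, bin_width_deg=10.0):
    """Global QR95 over all samples, plus local QR95 per rotation-angle bin.

    distances : 1-D array of inter-sensor distances (mm)
    angles    : 1-D array of device rotation angles (degrees), same length
    """
    global_qr = qr95(distances)
    bins = np.arange(angles.min(), angles.max() + bin_width_deg, bin_width_deg)
    idx = np.digitize(angles, bins)
    local_qr = {int(b): qr95(distances[idx == b])
                for b in np.unique(idx) if np.sum(idx == b) > 1}
    return global_qr, local_qr
```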
Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real time. To reach this goal, a prerequisite is to develop a speech synthesizer that produces intelligible speech in real time from a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real time for future BCI applications.
An increasing number of neuroscience papers capitalize on the assumption, published in this journal, that visual speech is typically 150 ms ahead of auditory speech. In fact, the estimate of audiovisual asynchrony in the reference paper is valid only in very specific cases: for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they typically are in most of a natural speech utterance, asynchrony should be defined in a different way.
This article focuses on methodological issues related to quantitative assessments of speech quality after glossectomy. Acoustic and articulatory data were collected for 8 consonants from two patients. The acoustic analysis is based on spectral moments and the Klatt VOT.
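For readers unfamiliar with spectral moments: the magnitude spectrum of a consonant frame is treated as a probability distribution over frequency, and its first four moments (centroid, spread, skewness, kurtosis) summarize the spectral shape. A minimal sketch; the Hann windowing and single-frame handling are illustrative assumptions, not the article's exact pipeline:

```python
import numpy as np

def spectral_moments(frame, sample_rate):
    """First four spectral moments of one windowed frame.

    Treats the magnitude spectrum as a probability distribution over
    frequency: returns centroid (Hz), spread (Hz), skewness, and
    excess kurtosis.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    p = spectrum / spectrum.sum()              # normalize to a distribution
    centroid = np.sum(freqs * p)               # 1st moment
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * p))      # 2nd moment
    skew = np.sum(((freqs - centroid) / spread) ** 3 * p)      # 3rd moment
    kurt = np.sum(((freqs - centroid) / spread) ** 4 * p) - 3  # 4th, excess
    return centroid, spread, skew, kurt
```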
This paper presents a quantitative and comprehensive study of the lip movements of a given speaker in different speech/nonspeech contexts, with a particular focus on silences (i.e., when no sound is produced by the speaker).
The relations between production and perception in 4-year-old children were examined in a study of compensation strategies for a lip-tube perturbation. Acoustic and perceptual analyses of the rounded vowel [u] produced by twelve 4-year-old French speakers were conducted under two conditions: normal and with a 15-mm-diameter tube inserted between the lips. Recordings of isolated vowels were made in the normal condition before any perturbation (N1); immediately upon insertion of the tube and for the next 19 trials in this perturbed condition, without (P1) or with articulatory instructions (P2); and in the normal condition after the perturbed trials (N2).
Rev Laryngol Otol Rhinol (Bord), May 2010
In the framework of experimental phonetics, our approach to the study of speech production is based on the measurement, analysis, and modeling of orofacial articulators such as the jaw, the face and lips, the tongue, and the velum. In this article, we first present experimental techniques that allow the shape and movement of the speech articulators to be characterized (static and dynamic MRI, computed tomography, electromagnetic articulography, video recording). We then describe the linear models of the various organs that can be built from speaker-specific articulatory data.
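To illustrate what such a linear model looks like: each organ's shape is approximated as a mean shape plus a weighted sum of a few basis deformations fitted to the speaker's data. A minimal sketch using plain PCA via SVD; the function names, the component count, and the use of unguided PCA are assumptions for illustration (models in this line of work are often built with guided decompositions):

```python
import numpy as np

def fit_linear_articulator_model(shapes, n_components=3):
    """Fit a linear model: shape ~ mean + weights @ basis.

    shapes : (n_observations, n_points) array of flattened articulator
             contours (e.g., tongue outlines from MRI), one row per frame.
    Returns the mean shape, an (n_components, n_points) basis, and the
    per-frame weights.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD gives principal directions ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    weights = centered @ basis.T
    return mean, basis, weights

def synthesize_shape(mean, basis, params):
    """Reconstruct an articulator contour from a few control parameters."""
    return mean + params @ basis
```

The per-frame weights then play the role of the model's control parameters: a handful of numbers per organ suffice to regenerate a full contour.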
Rev Stomatol Chir Maxillofac, April 2000
Functional tests are needed to assess the quality of reconstructive surgery after treatment of intraoral cancers. Quality-of-life questionnaires are subjective, and cinefluoroscopy is a demanding, non-comparative procedure. We develop here a method to test patients' capacity to make maximal use of their articulatory space.
A perceptual analysis of the French vowel [u] produced by 10 speakers under normal and perturbed conditions (Savariaux et al., 1995) is presented. It aims, first, at characterizing in the perceptual domain the speaker's task for this vowel and, second, at understanding the strategies the speakers developed to deal with the lip perturbation. Identification and rating tests showed that French [u] is perceptually fairly well described in the [F1, (F2-F0)] plane and that the parameter ((F2-F0) + F1)/2 (all frequencies in bark) provides a good overall correlate of the "grave" feature classically used to describe the vowel [u] across languages.
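For concreteness, the correlate is easy to compute from measured F0, F1, and F2. A minimal sketch; the Traunmüller (1990) Hz-to-bark conversion used below is one standard choice and an assumption here, since other bark formulas exist:

```python
def hz_to_bark(f_hz):
    """Hz-to-bark conversion (Traunmuller, 1990) -- an assumed choice;
    other formulas (e.g., Zwicker's) differ slightly."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def grave_correlate(f0_hz, f1_hz, f2_hz):
    """Overall 'grave' correlate ((F2 - F0) + F1) / 2, all terms in bark."""
    f0, f1, f2 = (hz_to_bark(f) for f in (f0_hz, f1_hz, f2_hz))
    return ((f2 - f0) + f1) / 2.0

# Example with illustrative values for a French [u] (not data from the paper)
print(grave_correlate(f0_hz=120.0, f1_hz=300.0, f2_hz=750.0))
```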