AI Article Synopsis

  • The study introduces a comprehensive system for analyzing the 3D structure of the vocal tract during speech, utilizing MRI technology to capture images while subjects pronounce sounds from the German language.
  • The image-processing pipeline automates registration and segmentation of vocal tract structures and builds a 3D model of the vocal tract, which supports synthesizing and evaluating phoneme sounds.
  • Results indicate that using 3D data improves the quality of synthesized vowel sounds compared to 2D data, showcasing the potential of fast MRI and detailed analysis in studying human speech production.

Article Abstract

We present a complete system for image-based 3D vocal tract analysis, ranging from MR image acquisition during phonation, semi-automatic image processing, and quantitative modeling including model-based speech synthesis, to quantitative model evaluation by comparison between recorded and synthesized phoneme sounds. For this purpose, six professionally trained speakers, aged 22-34 years, were examined using a standardized MRI protocol (1.5 T, T1w FLASH, ST 4 mm, 23 slices, acq. time 21 s). The volunteers performed a prolonged (≥ 21 s) emission of sounds of the German phonemic inventory. Simultaneous audio tape recording was made to verify correct utterance. Scans were acquired in each of the axial, coronal, and sagittal planes. Computer-aided quantitative 3D evaluation included (i) automated registration of the phoneme-specific data acquired in different slice orientations, (ii) semi-automated segmentation of oropharyngeal structures, (iii) computation of a curvilinear vocal tract midline in 3D by nonlinear PCA, and (iv) computation of cross-sectional areas of the vocal tract perpendicular to this midline. For the vowels /a/, /e/, /i/, /o/, /ø/, /u/, and /y/, the extracted area functions were used to synthesize phoneme sounds based on an articulatory-acoustic model. For quantitative analysis, recorded and synthesized phonemes were compared, with area functions extracted from 2D midsagittal slices used as a reference. All vowels could be identified correctly based on the synthesized phoneme sounds. The comparison between synthesized and recorded vowel phonemes revealed that the quality of phoneme sound synthesis was improved for the phonemes /a/, /o/, and /y/ if 3D instead of 2D data were used, as measured by the average relative frequency shift between recorded and synthesized vowel formants (p < 0.05, one-sided Wilcoxon rank sum test). In summary, the combination of fast MRI followed by subsequent 3D segmentation and analysis is a novel approach to examining human phonation in vivo. It reveals functional anatomical findings that may be essential for realistic modeling of the human vocal tract during speech production.
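The evaluation metric described in the abstract — the average relative frequency shift between recorded and synthesized vowel formants, with a one-sided Wilcoxon rank sum test comparing 2D- and 3D-based synthesis — can be sketched as follows. This is a hypothetical illustration: all formant values and per-phoneme shifts below are invented, and the helper function name is not from the paper.

```python
# Hypothetical sketch of the evaluation metric: average relative frequency
# shift between recorded and synthesized formants, plus a one-sided rank sum
# test on per-phoneme shifts. All numeric values are invented for illustration.
import numpy as np
from scipy.stats import mannwhitneyu


def relative_formant_shift(recorded, synthesized):
    """Mean relative frequency shift across formants (F1, F2, ...)."""
    rec = np.asarray(recorded, dtype=float)
    syn = np.asarray(synthesized, dtype=float)
    return float(np.mean(np.abs(syn - rec) / rec))


# Invented formant values (Hz) for one vowel, e.g. /a/
recorded = [730, 1090, 2440]   # F1-F3 measured from the audio recording
synth_2d = [810, 1180, 2300]   # synthesis from 2D midsagittal area function
synth_3d = [760, 1110, 2440]   # synthesis from full 3D area function

shift_2d = relative_formant_shift(recorded, synth_2d)
shift_3d = relative_formant_shift(recorded, synth_3d)

# Across vowels, one would collect per-phoneme shifts for both conditions
# and test whether the 3D shifts are systematically smaller (one-sided):
shifts_2d = [0.08, 0.11, 0.09, 0.12, 0.10, 0.09, 0.13]  # invented
shifts_3d = [0.04, 0.06, 0.05, 0.06, 0.05, 0.03, 0.06]  # invented
stat, p_one_sided = mannwhitneyu(shifts_3d, shifts_2d, alternative="less")
```

A significant p-value (< 0.05) under this setup would indicate, as in the study, that 3D area functions yield formants closer to the recorded ones than 2D midsagittal data.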


Source

http://dx.doi.org/10.1007/978-3-540-85990-1_37

Publication Analysis

Top Keywords

vocal tract (20), recorded synthesized (12), phoneme sounds (12), human vocal (8), tract analysis (8), complete system (8), quantitative modeling (8), speech synthesis (8), synthesized phoneme (8), area functions (8)
