Normal-hearing and hearing-impaired subjects with good lipreading skills lipread videotaped material under visual-only conditions. V1CV2 utterances were used in which V could be /i/, /ae/, or /u/ and C could be /p/, /t/, /k/, /ch/, /f/, /theta/, /s/, /sh/, or /w/. Coarticulatory effects were present in these stimuli. The influence of phonetic context on lipreading scores for each V and C was analyzed in an effort to explain some of the variability in the visual perception of phonemes suggested by the existing literature. Transmission of information for four phonetic features was also analyzed. Lipreading performance was nearly perfect for /p/, /f/, /w/, /theta/, and /u/. Lipreading performance on /t/, /k/, /ch/, /s/, /i/, and /ae/ depended on context. The features labial, rounded, and alveolar or palatal place of articulation were found to transmit more information to lipreaders than did the feature continuant. Variability in articulatory parameters resulting from coarticulatory effects appears to increase overall lipreading difficulty.
DOI: http://dx.doi.org/10.1044/jshr.2504.600