Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138-1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behavior and Development, 22, 237-247], and integrate seen and heard speech sounds [Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59, 347-357; Burnham, D., & Dodd, B. (2004). Auditory-visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45, 204-220]. However, the role of visual speech in language development remains unknown. Our aim was to determine whether seen articulations enhance phoneme discrimination, thereby playing a role in phonetic category learning. We exposed 6-month-old infants to speech sounds from a restricted range of a continuum between /ba/ and /da/, following a unimodal frequency distribution. Synchronously with these speech sounds, one group of infants (the two-category group) saw a visual articulation of a canonical /ba/ or /da/, with the two alternative visual articulations, /ba/ and /da/, being presented according to whether the auditory token was on the /ba/ or /da/ side of the midpoint of the continuum. Infants in a second (one-category) group were presented with the same unimodal distribution of speech sounds, but every token for any particular infant was always paired with the same syllable, either a visual /ba/ or a visual /da/. A stimulus-alternation preference procedure following the exposure revealed that infants in the former group, but not in the latter, discriminated the /ba/-/da/ contrast.
These results not only show that visual information about speech articulation enhances phoneme discrimination, but also that it may contribute to the learning of phoneme boundaries in infancy.
DOI: http://dx.doi.org/10.1016/j.cognition.2008.05.009
JMIR Form Res
January 2025
Faculty of Audiology and Speech Language Pathology, Sri Ramachandra Institute of Higher Education and Research, Chennai, India.
Background: The prevalence of hearing loss in infants in India varies between 4 and 5 per 1000. Objective measures, such as otoacoustic emissions and auditory brainstem response, have been used in high-income countries to establish early hearing screening and intervention programs. Nevertheless, the use of objective screening tests in low- and middle-income countries (LMICs) such as India is not feasible.
J Exp Psychol Learn Mem Cogn
December 2024
University at Buffalo, The State University of New York, Department of Psychology.
Speech intonation conveys a wealth of linguistic and social information, such as the intention to ask a question versus make a statement. However, due to the considerable variability in our speaking voices, the mapping from meaning to intonation can be many-to-many and often ambiguous. Previous studies suggest that the comprehension system resolves this ambiguity, at least in part, by adapting to recent exposure.
J Exp Psychol Learn Mem Cogn
December 2024
University of Massachusetts-Amherst, Department of Psychological and Brain Sciences.
Listeners can use both lexical context (i.e., lexical knowledge activated by the word itself) and lexical predictions based on the content of a preceding sentence to adjust their phonetic categories to speaker idiosyncrasies.
J Exp Psychol Learn Mem Cogn
December 2024
Technical University of Darmstadt, Institute of Psychology.
The goal of the present investigation was to perform a registered replication of Jones and Macken's (1995b) study, which showed that segregating a sequence of sounds to distinct locations reduced its disruptive effect on serial recall. This finding postulated an intriguing connection between auditory stream segregation and the cognitive mechanisms underlying the irrelevant speech effect. Specifically, a sequence of changing utterances was less disruptive under stereophonic presentation, which allowed each auditory object (letter) to be allocated to a unique location (right ear, left ear, center), than when the same sounds were played monophonically.
Psychophysiology
January 2025
Active Life Lab, South-Eastern Finland University of Applied Sciences, Mikkeli, Finland.
Stress and psychological disorders are substantial public health concerns, necessitating innovative therapeutic strategies. This study investigated the psychophysiological benefits of nature-based soundscapes, drawing on the biophilia hypothesis. Using a randomized, acute cross-over design, 53 healthy participants experienced either a nature-based or a reference soundscape for 10 min, with a 2-min washout period.