Language acquisition depends on the ability to detect and track the distributional properties of speech. Successful acquisition also necessitates detecting changes in those properties, which can occur when the learner encounters different speakers, topics, dialects, or languages. When encountering multiple speech streams with different underlying statistics but overlapping features, how do infants keep track of the properties of each speech stream separately? In four experiments, we tested whether 8-month-old monolingual infants (N = 144) can track the underlying statistics of two artificial speech streams that share a portion of their syllables. We first presented each stream individually. We then presented the two speech streams in sequence, without contextual cues signaling the different speech streams, and subsequently added pitch and accent cues to help learners track each stream separately. The results reveal that monolingual infants experience difficulty tracking the statistical regularities in two speech streams presented sequentially, even when provided with contextual cues intended to facilitate separation of the speech streams. We discuss the implications of our findings for understanding how infants learn and separate the input when confronted with multiple statistical structures.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7028448 | PMC |
| http://dx.doi.org/10.1111/desc.12896 | DOI Listing |
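The "underlying statistics" in artificial speech streams of this kind are commonly operationalized as syllable-to-syllable transitional probabilities. A minimal Python sketch, assuming that operationalization and using made-up syllables rather than the study's actual stimuli, shows how two streams with overlapping syllables can carry different statistics:

```python
from collections import Counter

def transitional_probabilities(stream):
    """Compute P(next_syllable | current_syllable) from a syllable sequence."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Two toy streams built from partially overlapping syllable inventories
# (hypothetical syllables, not the experiment's actual stimuli).
stream_a = ["bi", "da", "ku", "pa", "do", "ti"] * 30
stream_b = ["da", "bi", "ti", "go", "la", "bu"] * 30

tp_a = transitional_probabilities(stream_a)
tp_b = transitional_probabilities(stream_b)

# The same syllable pair can have very different probabilities in each
# stream, which is exactly what a learner must keep separate.
print(tp_a.get(("bi", "da"), 0.0))  # high within stream A
print(tp_b.get(("bi", "da"), 0.0))  # absent (0.0) in stream B
```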
J Exp Psychol Learn Mem Cogn
December 2024
Technical University of Darmstadt, Institute of Psychology.
The goal of the present investigation was to perform a registered replication of Jones and Macken's (1995b) study, which showed that segregating a sequence of sounds into distinct locations reduced its disruptive effect on serial recall, thereby suggesting an intriguing connection between auditory stream segregation and the cognitive mechanisms underlying the irrelevant speech effect. Specifically, a sequence of changing utterances (spoken letters) was less disruptive under stereophonic presentation, which allowed each auditory object to be assigned to a unique location (right ear, left ear, center), than when the same sounds were played monophonically.
Trends Hear
January 2025
Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a prerequisite for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.
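In VWP analyses, the standard dependent measure is the proportion of fixations to each icon over time. The sketch below illustrates that computation with hypothetical samples and icon labels ("target", "competitor", "distractor"); none of it is taken from this study:

```python
import numpy as np

# Hypothetical gaze samples: each entry is (time_ms, fixated_icon).
samples = [
    (20, "distractor"), (40, "distractor"), (60, "competitor"),
    (80, "competitor"), (100, "target"), (120, "target"),
    (140, "target"), (160, "target"),
]

def fixation_proportions(samples, icon, bin_ms=80):
    """Proportion of samples on `icon` within consecutive time bins."""
    times = np.array([t for t, _ in samples])
    hits = np.array([label == icon for _, label in samples])
    bins = (times - times.min()) // bin_ms
    return {int(b): float(hits[bins == b].mean()) for b in np.unique(bins)}

# A rising proportion of target fixations over time suggests the listener
# is resolving the competing speech streams toward the target word.
print(fixation_proportions(samples, "target"))
```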
Brain Sci
November 2024
School of Foreign Languages, Hunan University, Lushannan Road No. 2, Yuelu District, Changsha 410082, China.
Background/objectives: Normative perceptual segmentation facilitates event perception, comprehension, and memory. Given that native English listeners' normative segmentation of English speech streams is accompanied by a highly selective attention pattern at segmentation boundaries, it is worth testing whether Chinese learners of English show a different attention pattern at those boundaries, and thus whether they perform normative segmentation.
Methods: Thirty Chinese learners of English with relatively high language proficiency (CLH) and twenty-six with relatively low language proficiency (CLL) listened to a series of English audio sentences.
Netw Neurosci
December 2024
Department of Clinical Cognition Science, Clinic of Neurology at the RWTH Aachen University Faculty of Medicine, ZBMT, Aachen, Germany.
Networks in the parietal and premotor cortices support essential human abilities related to motor processing, including attention and tool use. Even though our knowledge of their topography has steadily increased, a detailed picture of hemisphere-specific integrating pathways is still lacking. Using multishell diffusion magnetic resonance imaging, probabilistic tractography, and graph-theoretical analysis, we investigated connectivity patterns between frontal premotor and posterior parietal brain areas in healthy individuals.
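The graph-theoretical step can be illustrated with a short sketch that treats a tractography-derived connectivity matrix as a weighted graph and computes standard node metrics. The region labels and streamline counts below are placeholders, not the study's data, and the example assumes the networkx package:

```python
import numpy as np
import networkx as nx

# Placeholder streamline-count matrix between four regions of interest
# (e.g., premotor and posterior parietal areas); symmetric, zero diagonal.
regions = ["PMd", "PMv", "SPL", "IPL"]
weights = np.array([
    [0, 120, 40, 10],
    [120, 0, 15, 60],
    [40, 15, 0, 90],
    [10, 60, 90, 0],
], dtype=float)

# Build a weighted undirected graph from the connectivity matrix.
G = nx.from_numpy_array(weights)
G = nx.relabel_nodes(G, dict(enumerate(regions)))

# Weighted degree: total connection strength of each region.
degree = dict(G.degree(weight="weight"))

# Betweenness treats edge weight as a distance, so convert connection
# strength into a length (stronger connection = shorter path) first.
lengths = {(u, v): 1.0 / d["weight"] for u, v, d in G.edges(data=True)}
nx.set_edge_attributes(G, lengths, "length")
betweenness = nx.betweenness_centrality(G, weight="length")

for r in regions:
    print(r, degree[r], round(betweenness[r], 3))
```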
Am J Otolaryngol
December 2024
Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin 300192, China; Institute of Otolaryngology of Tianjin, Tianjin, China; Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China; Key Clinical Discipline of Tianjin (Otolaryngology), Tianjin, China; Otolaryngology Clinical Quality Control Centre, Tianjin, China.
Purpose: To use deep learning technology to design and implement a model that can automatically classify laryngoscope images and assist doctors in diagnosing laryngeal diseases.
Materials And Methods: The experiment was based on 3057 images (normal, glottic cancer, granuloma, Reinke's edema, vocal cord cyst, leukoplakia, nodules, and polyps) from the Laryngoscope8 dataset. A classification model based on deep neural networks was developed and tested.
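The abstract does not name the architecture, so the sketch below only illustrates the general approach: fine-tuning an ImageNet-pretrained CNN for the eight diagnostic categories. It assumes PyTorch/torchvision and a hypothetical ImageFolder-style copy of Laryngoscope8:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 8  # normal, glottic cancer, granuloma, Reinke's edema,
                 # vocal cord cyst, leukoplakia, nodules, polyps

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: one subfolder per diagnostic class.
train_set = datasets.ImageFolder("laryngoscope8/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the classifier head of a pretrained ResNet with an 8-way output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass shown; repeat over epochs in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```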