Children begin to talk at about age one. The vocabulary they need to do so must be built on perceptual evidence and, indeed, infants begin to recognize spoken words long before they talk. Most of the utterances infants hear, however, are continuous, without pauses between words, so constructing a vocabulary requires them to decompose continuous speech in order to extract the individual words. Here, we present electrophysiological evidence that 10-month-old infants recognize two-syllable words they have previously heard only in isolation when these words are presented anew in continuous speech. Moreover, they only need roughly the first syllable of the word to begin doing this. Thus, prelinguistic infants command a highly efficient procedure for segmentation and recognition of spoken words in the absence of an existing vocabulary, allowing them to tackle effectively the problem of bootstrapping a lexicon out of the highly variable, continuous speech signals in their environment.
DOI: http://dx.doi.org/10.1016/j.cogbrainres.2004.12.009
Disabil Rehabil
January 2025
Discipline of Speech Pathology, School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, Australia.
Purpose: People with post-stroke aphasia experience relationship changes which can lead to an altered relational self. The aim of this research was to explore the experiences of a group of people with post-stroke aphasia regarding changes to the relational self.
Method: A constructivist grounded theory approach was used.
Cortex
December 2024
Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Japan.
The applause sign (AS) is a recognized phenomenon observed in progressive supranuclear palsy (PSP) and other neurological conditions, in which individuals produce more than three claps after being asked, following a demonstration, to clap only three times. In this study, we introduced a novel linguistic phenomenon, termed the oral applause sign (OAS), associated with the AS. The OAS is characterized by increased repetition counts of Japanese repetitive onomatopoeic words, such as uttering "pata-pata-pata" instead of the expected "pata-pata".
eNeuro
January 2025
Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments, specifically which types of information and which key features to extract, remain less explored. Our study aims to systematically investigate the relevance of three feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including discrete sound onsets, the envelope, and the mel-spectrogram.
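The two acoustic representations named in this abstract, the amplitude envelope and the mel-spectrogram, are standard audio features. The abstract does not specify how they were computed, so the following is only a minimal NumPy sketch of each; the window lengths, FFT size, hop, and number of mel bands are illustrative defaults, not the study's settings.

```python
import numpy as np

def envelope(x, sr, win_ms=20):
    """Amplitude envelope: rectify the signal, then smooth with a moving average."""
    win = int(sr * win_ms / 1000)
    kernel = np.ones(win) / win
    return np.convolve(np.abs(x), kernel, mode="same")

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(x, sr, n_fft=512, hop=256, n_mels=24):
    """Power spectrogram projected onto a triangular mel filterbank."""
    # Frame the signal and apply a Hann window to each frame.
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2

    # Build a triangular filterbank with centers equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        if c > lo:
            fb[m - 1, lo:c] = (np.arange(lo, c) - lo) / (c - lo)
        if hi > c:
            fb[m - 1, c:hi] = (hi - np.arange(c, hi)) / (hi - c)
    return power @ fb.T  # shape: (n_frames, n_mels)

# Toy input: one second of a 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
env = envelope(x, sr)
mel = mel_spectrogram(x, sr)
```

The envelope tracks slow amplitude fluctuations at the original sampling rate, while the mel-spectrogram summarizes spectral content frame by frame on a perceptually motivated frequency axis.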
Alzheimers Dement
December 2024
Miin Wu School of Computing, National Cheng Kung University, Tainan, Taiwan.
Background: Continuous speech analysis is considered an efficient and convenient approach for the early detection of Alzheimer's disease (AD). However, traditional approaches generally require human transcribers to transcribe audio data accurately. This study applied automatic speech recognition (ASR) in conjunction with natural language processing (NLP) techniques to automatically extract linguistic features from Chinese speech data.
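The abstract does not say which linguistic features were extracted, but features of this kind are typically simple transcript-level lexical statistics. A minimal sketch, assuming an ASR transcript and its recorded duration are already available; the function, the feature names, and the toy transcript below are illustrative, not taken from the study.

```python
def linguistic_features(transcript, duration_s):
    """Simple lexical features computed from an ASR transcript.

    transcript: whitespace-tokenizable text produced by an ASR system
    duration_s: duration of the underlying recording, in seconds
    """
    tokens = transcript.lower().split()
    n = len(tokens)
    return {
        "n_tokens": n,                                        # total word count
        "type_token_ratio": len(set(tokens)) / n if n else 0.0,  # lexical diversity
        "speech_rate_wps": n / duration_s,                    # words per second
        "mean_word_len": sum(map(len, tokens)) / n if n else 0.0,
    }

# Toy transcript with one repeated word ("is") over a 5-second recording.
feats = linguistic_features("the boy is is taking a cookie from the jar", 5.0)
```

In a full pipeline these values would be computed per recording and fed to a classifier; repetitions and slowed speech rate lower the type-token ratio and words-per-second features, which is the kind of signal such studies look for.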
Background: There is growing evidence that discourse (i.e., connected speech) could serve as a cost-effective and ecologically valid means of identifying individuals with prodromal Alzheimer's disease.