Children learn language by listening to speech from caregivers around them. However, the type and quantity of speech input that children are exposed to change throughout early childhood in ways that are poorly understood due to the small samples (few participants, limited hours of observation) typically available in developmental psychology. Here we used child-centered audio recorders to unobtrusively measure speech input in the home to 292 children (aged 2-7 years), acquiring English in the United States, over 555 distinct days (approximately 8600 total hours of observation, or 29.
Humans have an impressive ability to comprehend signal-degraded speech; however, the extent to which comprehension of degraded speech relies on human-specific features of speech perception vs. more general cognitive processes is unknown. Since dogs live alongside humans and regularly hear speech, they can be used as a model to differentiate between these possibilities.
Children who receive cochlear implants develop spoken language on a protracted timescale. The home environment facilitates speech-language development, yet it is relatively unknown how the environment differs between children with cochlear implants and typical hearing. We matched eighteen preschoolers with implants (31-65 months) to two groups of children with typical hearing: by chronological age and hearing age.
To learn language, children must map variable input to categories such as phones and words. How do children process variation and distinguish variable pronunciations ("shoup" for soup) from new words? The unique sensory experience of children with cochlear implants, who learn speech through their device's degraded signal, lends new insight into this question. In a mispronunciation sensitivity eyetracking task, children with implants (N = 33) and typical hearing (N = 24; 36-66 months; 36F, 19M; all non-Hispanic white) with larger vocabularies processed known words faster.
Because speaking rates are highly variable, listeners must use cues like phoneme or sentence duration to normalize speech across different contexts. Scaling speech perception in this way allows listeners to distinguish between temporal contrasts, like voiced and voiceless stops, even at different speech speeds. It has long been assumed that this speaking rate normalization can occur over small units such as phonemes.
Although there is ample evidence documenting the development of spoken word recognition from infancy to adolescence, it is still unclear how development of word-level processing interacts with higher-level sentence processing, such as the use of lexical-semantic cues, to facilitate word recognition. We investigated how the ability to use an informative verb facilitates recognition of an upcoming word.
This research examined whether the auditory short-term memory (STM) capacity for speech sounds differs from that for nonlinguistic sounds in 11-month-old infants. Infants were presented with streams composed of repeating sequences of either 2 or 4 syllables, akin to prior work by Ross-Sheehy and Newman (2015) using nonlinguistic musical instruments. These syllable sequences either stayed the same for every repetition (constant) or changed by one syllable each time it repeated (varying).
Int J Biling
October 2021
Aims and Objectives: The purpose of this study was to examine whether differences in language exposure (i.e., being raised in a bilingual versus a monolingual environment) influence young children's ability to comprehend words when speech is heard in the presence of background noise.
Studies have shown that both cotton-top tamarins and rats can discriminate between two languages based on rhythmic cues. This is similar to the capabilities of young infants, who also rely on rhythmic cues to differentiate between languages. However, the animals in these studies did not have long-term language exposure, so these studies did not specifically assess the role of language experience.
Cochlear-implant (CI) users have previously demonstrated perceptual restoration, or successful repair of noise-interrupted speech, using the interrupted sentences paradigm [Bhargava, Gaudrain, and Başkent (2014). "Top-down restoration of speech in cochlear-implant users," Hear. Res.
Concussions are common in flat-track roller derby, a unique and under-studied sport, but little research has assessed their incidence or what players can do to manage injury risk. The purpose of this study is to provide an epidemiological investigation of concussion incidence and experience in a large international sampling of roller derby players. Six hundred sixty-five roller derby players from 25 countries responded to a comprehensive online survey about injury and sport participation.
Cochlear-implant (CI) listeners experience signal degradation, which leads to poorer speech perception than normal-hearing (NH) listeners. In the present study, difficulty with word segmentation, the process of perceptually parsing the speech stream into separate words, is considered as a possible contributor to this decrease in performance. CI listeners were compared to a group of NH listeners (presented with unprocessed speech and eight-channel noise-vocoded speech) in their ability to segment phrases with word segmentation ambiguities.
Speech-language input from adult caregivers is a strong predictor of children's developmental outcomes. But the properties of this child-directed speech are not static over the first months or years of a child's life. This study assesses a large cohort of children and caregivers (N = 84) at 7, 10, 18, and 24 months to document (1) how a battery of phonetic, phonological, and lexical characteristics of child-directed speech changes in the first 2 years of life and (2) how input at these different stages predicts toddlers' phonological processing and vocabulary size at 2 years.
Previous work has found that preschoolers with greater phonological awareness and larger lexicons, who speak more throughout the day, exhibit less intra-syllabic coarticulation in controlled speech production tasks. These findings suggest that both linguistic experience and speech-motor control are important predictors of spoken phonetic development. Still, it remains unclear how preschoolers' everyday speech practice drives the development of coarticulation, because children who talk more are likely to have both increased fine motor control and increased auditory feedback experience.
Purpose Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal. Method Five- to 10-year-old children with CIs heard sentences with an informative verb or a neutral verb preceding a target word.
Cochlear-implant (CI) users experience less success in understanding speech in noisy, real-world listening environments than normal-hearing (NH) listeners. Perceptual restoration is one method NH listeners use to repair noise-interrupted speech. Previous work has reported that CI users can use perceptual restoration in certain cases but fail to do so under listening conditions in which NH listeners successfully restore.
Background: Adults and adolescents with autism spectrum disorders show greater difficulties comprehending speech in the presence of noise. Moreover, while neurotypical adults use visual cues on the mouth to help them understand speech in background noise, differences in attention to human faces in autism may affect use of these visual cues. No work has yet examined these skills in toddlers with ASD, despite the fact that they are frequently faced with noisy, multitalker environments.
Consonants and vowels play different roles in speech perception: listeners rely more heavily on consonant information than on vowel information when distinguishing between words. This reliance on consonants for word identification is known as the consonant bias (Nespor et al., Ling 2:203-230, 2003).
Q J Exp Psychol (Hove)
February 2021
Viewers' perception of actions is coloured by the context in which those actions are found. An action that seems uncomfortably sudden in one context might seem expeditious in another. In this study, we examined the influence of one type of context: the rate at which an action is being performed.
The ability to recognize speech that is degraded spectrally is a critical skill for successfully using a cochlear implant (CI). Previous research has shown that toddlers with normal hearing can successfully recognize noise-vocoded words as long as the signal contains at least eight spectral channels [Newman and Chatterjee. (2013).
Purpose Previous research shows that shared storybook reading interactions can function as effective speech and language interventions for young children, helping to improve a variety of skills, including word learning. This study sought to investigate the potential benefits of elaboration of new words during a single storybook reading with preschoolers. Method Thirty-three typically developing children ages 35-37 months listened to a storybook containing novel words that were either repeated with a definition, repeated with no additional information, or only said once.
Aims: Although infant-directed speech (IDS) is typically described as slower than adult-directed speech (ADS), potential impacts of slower speech on language development have not been examined. We explored whether IDS speech rates in 42 mother-infant dyads at four time periods predicted children's language outcomes at two years. Method: We correlated IDS speech rate with child language outcomes at two years, and contrasted outcomes in dyads displaying high/low rate profiles.
Purpose: Inform the production of a screening tool for language in children with concussion. The authors predicted that children with a recent concussion would perform the cognitive-linguistic tasks more poorly, but that some tasks might be more sensitive to concussion than others. Method: Twenty-two elementary school-aged children within 30 days of a concussion and age-matched peers with no history of concussion were assessed on a battery of novel language and cognitive-linguistic tasks.
Purpose: Several studies have explored relationships between children's early phonological development and later language performance. This literature has included a more recent focus on the potential for early phonological profiles to predict later language outcomes.
Methods: The present study longitudinally examined the nature of phonetic inventories and syllable structure patterns of 48 typically developing children at 7, 11, and 18 months, and related them to expressive language outcomes at 2 years of age.
Am J Speech Lang Pathol
November 2019
Purpose The purpose of this research is to determine whether individuals with a history of concussion retain enduring differences in narrative writing tasks, which necessitate rapid and complex integration of both cognitive and linguistic faculties. Method Participants aged 12-40 years, who did or did not have a remote history of concussion, were recruited to take an online survey that included writing both a familiar and a novel narrative. They were also asked to complete multiple tasks targeting word-level and domain-general cognitive skills, so that their performance could be interpreted across these dimensions.