Publications by authors named "Keith S Apfelbaum"

Open science practices, such as pre-registration and data sharing, increase transparency and may improve the replicability of developmental science. However, developmental science has lagged behind other fields in implementing open science practices. This lag may arise from unique challenges and considerations of longitudinal research.

Research on speech categorization and phoneme recognition has relied heavily on tasks in which participants listen to stimuli from a speech continuum and are asked to either classify each stimulus (identification) or discriminate between them (discrimination). Such tasks rest on assumptions about how perception maps onto discrete responses that have not been thoroughly investigated. Here, we identify critical challenges in the link between these tasks and theories of speech categorization.

Words are fundamental to language, linking sound, articulation, and spelling to meaning and syntax, and lexical deficits are core to communicative disorders. Work in language acquisition commonly focuses on how lexical knowledge-knowledge of words' sound patterns and meanings-is acquired. But lexical knowledge is insufficient to account for skilled language use.

Efficient word recognition depends on the ability to overcome competition from overlapping words. The nature of the overlap depends on the input modality: spoken words have temporal overlap from other words that share phonemes in the same positions, whereas written words have spatial overlap from other words with letters in the same places. It is unclear how these differences in input format affect the ability to recognise a word and the types of competitors that become active while doing so.

A common critique of the Visual World Paradigm (VWP) in psycholinguistic studies is that what is designed as a measure of language processes is meaningfully altered by the visual context of the task. This is crucial, particularly in studies of spoken word recognition, where the displayed images are usually seen as just a part of the measure and are not of fundamental interest. Many variants of the VWP allow participants to sample the visual scene before a trial begins.

In humans and other mammals, the stillness of sleep is punctuated by bursts of rapid eye movements (REMs) and myoclonic twitches of the limbs. Like the spontaneous activity that arises from the sensory periphery in other modalities (e.g.

Objectives: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors.

Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated.

Recent work has demonstrated that the addition of multiple talkers during habituation improves 14-month-olds' performance in the switch task (Rost & McMurray, 2009). While the authors suggest that this boost in performance is due to the increase in acoustic variability (Rost & McMurray, 2010), it is also possible that the presence of multiple talkers per se drives this performance. To determine whether acoustic variability in and of itself is beneficial in early word-learning tasks like the switch task, we tested 14-month-old infants in a version of the switch task using acoustically variable auditory stimuli produced by a single speaker.

Traditional studies of human categorization often treat the processes of encoding features and cues as peripheral to the question of how stimuli are categorized. However, in domains where the features and cues are less transparent, how information is encoded prior to categorization may constrain our understanding of the architecture of categorization. This is particularly true in speech perception, where acoustic cues to phonological categories are ambiguous and influenced by multiple factors.

The speech signal is notoriously variable, with the same phoneme realized differently depending on factors like talker and phonetic context. Variance in the speech signal has led to a proliferation of theories of how listeners recognize speech. A promising approach, supported by computational modeling studies, is contingent categorization, wherein incoming acoustic cues are computed relative to expectations.

Early reading abilities are widely considered to derive in part from statistical learning of regularities between letters and sounds. Although there is substantial laboratory evidence to support this, how it occurs in the classroom setting has not been extensively explored; there are few investigations of how statistics among letters and sounds influence how children actually learn to read, or of what principles of statistical learning may improve learning. We examined 2 conflicting principles that may apply to learning grapheme-phoneme correspondence (GPC) regularities for vowels: (a) variability in irrelevant units may help children derive invariant relationships and (b) similarity between words may force children to use a deeper analysis of lexical structure.

At 14 months, children appear to struggle to apply their fairly well-developed speech perception abilities to learning similar-sounding words (e.g., bih/dih; Stager & Werker, 1997).

Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects are the result of continuous activation between lexical form and semantic processing or whether they arise from a more modular system in which the timing of accessing lexical form determines the timing of semantic activation. This study examined this issue using the visual world paradigm by investigating the time course of semantic priming as a function of the number of phonological competitors.
