Cognitive, computational, and neurobiological approaches have made impressive advances in characterizing the operations that transform linguistic signals into meanings. But our understanding of how words and concepts are retained in the brain remains inadequate. How is the long-term storage of words, or in fact any representations, achieved? This puzzle requires new thinking to stimulate reinvestigation of the storage problem.
We comment on the technical interpretation of the study of Watson and caution against their conclusion that the behavioral evidence in their experiments points to nonhuman animals' ability to learn syntactic dependencies, because their results are also consistent with the learning of phonological dependencies in human languages.
We incorporate social reasoning about groups of informants into a model of word learning, and show that the model accounts for infant looking behavior in tasks of both word learning and recognition. Simulation 1 models an experiment where 16-month-old infants saw familiar objects labeled either correctly or incorrectly, by either adults or audio talkers. Simulation 2 reinterprets puzzling data from the Switch task, an audiovisual habituation procedure wherein infants are tested on familiarized associations between novel objects and labels.
Q J Exp Psychol (Hove)
February 2021
Viewers' perception of actions is coloured by the context in which those actions are found. An action that seems uncomfortably sudden in one context might seem expeditious in another. In this study, we examined the influence of one type of context: the rate at which an action is being performed.
Philos Trans R Soc Lond B Biol Sci
January 2020
We consider the Phonological Continuity Hypothesis (PCH) of Fitch (2018) in light of a broader range of formal systems. A consideration of the learning and generalization of simple patterns such as AAB from Marcus (2000, 145-147)
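The AAB identity pattern studied by Marcus can be stated very compactly. As an illustrative sketch (the syllables below are invented examples, not the original stimuli):

```python
def is_aab(syllables):
    """Return True if a three-syllable sequence follows the AAB identity
    pattern: the first two syllables are identical and the third differs."""
    if len(syllables) != 3:
        return False
    first, second, third = syllables
    return first == second and first != third

# The pattern generalizes to syllables never seen in training:
print(is_aab(["le", "le", "wi"]))  # AAB -> True
print(is_aab(["le", "wi", "wi"]))  # ABB -> False
print(is_aab(["le", "wi", "le"]))  # ABA -> False
```

The interesting question for learning models is not checking the pattern but inducing this identity relation from a handful of exemplars and extending it to novel syllables.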
Philos Trans R Soc Lond B Biol Sci
January 2020
The complex and melodic nature of many birds' songs has raised interest in potential parallels between avian vocal sequences and human speech. The similarities between birdsong and speech in production and learning are well established, but surprisingly little is known about how birds perceive song sequences. One popular laboratory songbird, the zebra finch, has recently attracted attention as an avian model for human speech, in part because the male learns to produce the individual elements in its song motif in a fixed sequence.
Philos Trans R Soc Lond B Biol Sci
January 2020
Language has been considered by many to be uniquely human. Numerous theories for how it evolved have been proposed but rarely tested. The articles in this theme issue consider the extent to which aspects of language, such as vocal learning, phonology, syntax, semantics, intentionality, cognition and neurobiological adaptations, are shared with other animals.
Atten Percept Psychophys
May 2019
Phonetic categories must be learned, but the processes that allow that learning to unfold are still under debate. The current study investigates constraints on the structure of categories that can be learned and whether these constraints are speech-specific. Category structure constraints are a key difference between theories of category learning, which can roughly be divided into instance-based learning (i.
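Instance-based (exemplar) accounts of the kind contrasted above hold that a category is represented by its stored instances, and a new stimulus is classified by its aggregate similarity to them. A minimal sketch in the spirit of such models, with invented one-dimensional cue values and an illustrative similarity scaling parameter (not the study's model or data):

```python
import math

def exemplar_choice(stimulus, exemplars_a, exemplars_b, c=2.0):
    """Classify a one-dimensional stimulus by summed similarity to stored
    exemplars of two categories; similarity decays exponentially with
    distance (scaling parameter c is illustrative)."""
    def summed_similarity(exemplars):
        return sum(math.exp(-c * abs(stimulus - e)) for e in exemplars)
    sim_a = summed_similarity(exemplars_a)
    sim_b = summed_similarity(exemplars_b)
    label = "A" if sim_a > sim_b else "B"
    return label, sim_a / (sim_a + sim_b)

# Toy continuum (e.g. a VOT-like cue): category A exemplars cluster low,
# category B exemplars cluster high.
label, p_a = exemplar_choice(0.3,
                             exemplars_a=[0.1, 0.2, 0.25],
                             exemplars_b=[0.7, 0.8, 0.9])
```

On this view, constraints on learnable category structure fall out of how similarity is computed over stored instances rather than from explicit category boundaries.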
J Acoust Soc Am
December 2017
Humans have an impressive, automatic capacity for identifying and organizing sounds in their environment. However, little is known about the timescales over which sound identification operates, or about the acoustic features that listeners use to identify auditory objects. To better understand the temporal and acoustic dynamics of sound category identification, two go/no-go perceptual gating studies were conducted.
Phonemes play a central role in traditional theories as units of speech perception and access codes to lexical representations. Phonemes have two essential properties: they are 'segment-sized' (the size of a consonant or vowel) and abstract (a single phoneme may have different acoustic realisations). Nevertheless, there is a long history of challenging the phoneme hypothesis, with some theorists arguing for differently sized phonological units (e.
Atten Percept Psychophys
April 2017
Listeners must adapt to differences in speech rate across talkers and situations. Speech rate adaptation effects are strong for adjacent syllables (i.e.
To attain native-like competence, second language (L2) learners must establish mappings between familiar speech sounds and new phoneme categories. For example, Spanish learners of English must learn that [d] and [ð], which are allophones of the same phoneme in Spanish, can distinguish meaning in English (i.
While previous research has established that language-specific knowledge influences early auditory processing, it remains controversial which aspects of speech sound representations determine early speech perception. Here, we propose that early processing primarily depends on information propagated top-down from abstractly represented speech sound categories. In particular, we assume that mid vowels (as in 'bet') exert weaker top-down effects than high vowels (as in 'bit') because their (default) tongue height is less specifically represented than that of either high or low vowels (as in 'bat').
Purpose: A growing literature suggests that context speech rate can strongly influence how words are parsed in speech. The purpose of this study was to investigate differences between younger and older adults in the use of context speech rate in word segmentation, given that older adults perceive timing information differently from younger ones.
Method: Younger (18-25 years) and older (55-65 years) adults performed a sentence transcription task for sentences that varied in speech rate context (i.
Previous research in speech perception has shown that category information affects the discrimination of consonants to a greater extent than vowels. However, there has been little electrophysiological work on the perception of fricative sounds, which are informative for this contrast as they share properties with both consonants and vowels. In the current study we address the relative contribution of phonological and acoustic information to the perception of sibilant fricatives using event-related fields (ERFs) and dipole modeling with magnetoencephalography (MEG).
Two experiments using the form-preparation paradigm were conducted to investigate the effect of orthographic form-cuing on the phonological preparation unit during spoken word production with native Mandarin speakers. In both experiments, participants were instructed to memorize nine prompt-response monosyllabic word pairs, after which an associative naming session was conducted in which the prompts were presented and participants were asked to say the corresponding response names as quickly and accurately as possible. In both experiments, response words in the homogeneous lists shared either the same onsets or the same rimes, whereas response names in the heterogeneous lists had no aspects of pronunciation in common.
J Speech Lang Hear Res
August 2014
Purpose: This study investigated how listeners' native language affects their weighting of acoustic cues (such as vowel quality, pitch, duration, and intensity) in the perception of contrastive word stress.
Method: Native speakers (N = 45) of typologically diverse languages (English, Russian, and Mandarin) performed a stress identification task on nonce disyllabic words with fully crossed combinations of each of the 4 cues in both syllables.
Results: The results revealed that although the vowel quality cue was the strongest cue for all groups of listeners, pitch was the second strongest cue for the English and the Mandarin listeners but was virtually disregarded by the Russian listeners.
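Cue weights of the sort reported above are commonly estimated by regressing listeners' binary identification responses on the cue values. A minimal, self-contained sketch using a tiny logistic regression in pure Python; the data are simulated and the cue labels are illustrative, not the study's materials:

```python
import math
import random

def fit_cue_weights(trials, lr=0.1, epochs=200):
    """Fit a tiny logistic regression by stochastic gradient ascent on the
    log-likelihood. Each trial is (cues, response): a list of cue values in
    [0, 1] and a 0/1 stress judgment. Returns one weight per cue; a larger
    weight means the simulated listener relies more on that cue."""
    n_cues = len(trials[0][0])
    weights = [0.0] * n_cues
    bias = 0.0
    for _ in range(epochs):
        for cues, response in trials:
            z = bias + sum(w * x for w, x in zip(weights, cues))
            p = 1.0 / (1.0 + math.exp(-z))
            err = response - p
            bias += lr * err
            weights = [w + lr * err * x for w, x in zip(weights, cues)]
    return weights

# Simulated toy data: responses driven strongly by cue 0 ("vowel quality"),
# weakly by cue 1 ("pitch"), and not at all by cue 2 ("duration").
random.seed(0)
trials = []
for _ in range(400):
    cues = [random.random() for _ in range(3)]
    p_true = 1.0 / (1.0 + math.exp(-(4.0 * cues[0] + 1.0 * cues[1] - 2.5)))
    trials.append((cues, 1 if random.random() < p_true else 0))

weights = fit_cue_weights(trials)  # weights[0] should come out largest
```

Comparing fitted weights across listener groups is one standard way to quantify cross-linguistic differences like the Russian listeners' disregard of pitch.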
Current models of speech perception tend to emphasize either fine-grained acoustic properties or coarse-grained abstract characteristics of speech sounds. We argue for a particular kind of 'sparse' vowel representation and provide new evidence that these representations account for the successful access of the corresponding categories. In an auditory semantic priming experiment, American English listeners made lexical decisions on targets (e.
There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing (NH) accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants (CIs) can do the same. In this study, listeners with NH and listeners with CIs were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels.
We investigated the electrophysiological response to matched two-formant vowels and two-note musical intervals, with the goal of examining whether music is processed differently from language in early cortical responses. Using magnetoencephalography (MEG), we compared the mismatch-response (MMN/MMF, an early, pre-attentive difference-detector occurring approximately 200 ms post-onset) to musical intervals and vowels composed of matched frequencies. Participants heard blocks of two stimuli in a passive oddball paradigm in one of three conditions: sine waves, piano tones and vowels.
Purpose: The contributions of voice onset time (VOT) and fundamental frequency (F0) were evaluated for the perception of voicing in syllable-initial stop consonants in words that were low-pass filtered and/or masked by speech-shaped noise. It was expected that listeners would rely less on VOT and more on F0 in these degraded conditions.
Method: Twenty young listeners with normal hearing identified modified natural speech tokens that varied by VOT and F0 in several conditions of low-pass filtering and masking noise.
An important distinction between phonology and syntax has been overlooked. All phonological patterns belong to the regular region of the Chomsky Hierarchy, but not all syntactic patterns do. We argue that the hypothesis that humans employ distinct learning mechanisms for phonology and syntax currently offers the best explanation for this difference.
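The contrast between a regular pattern and a supra-regular one can be made concrete. As an illustrative sketch (toy strings, not the paper's examples):

```python
import re

# A regular pattern, (ab)^n: recognizable by a finite-state machine, here
# expressed as a regular expression. On the hypothesis above, phonological
# patterns all fall within this regular region of the Chomsky Hierarchy.
def is_ab_star(s):
    return re.fullmatch(r"(ab)*", s) is not None

# A context-free (supra-regular) pattern, a^n b^n: recognizing it requires
# unbounded counting, which no finite-state machine can perform. Nested
# dependencies of this kind arise in syntax (e.g. center embedding) but,
# per the argument, not in phonology.
def is_anbn(s):
    half = len(s) // 2
    return len(s) == 2 * half and s == "a" * half + "b" * half

print(is_ab_star("ababab"))  # True: matched by a finite-state device
print(is_anbn("aaabbb"))     # True: requires counting to verify
print(is_anbn("ababab"))     # False
```

The point of the contrast is that a learner equipped only with finite-state machinery suffices for the first pattern but can never acquire the second, which is one way to motivate distinct learning mechanisms for the two domains.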
To acquire one's native phonological system, language-specific phonological categories and relationships must be extracted from the input. The acquisition of categories and the acquisition of relationships have each, in their own right, been the focus of intense research. It is remarkable, however, that these two lines of research have proceeded, for the most part, independently of one another.
Although some cochlear implant (CI) listeners can show good word recognition accuracy, it is not clear how they perceive and use the various acoustic cues that contribute to phonetic perceptions. In this study, the use of acoustic cues was assessed for normal-hearing (NH) listeners in optimal and spectrally degraded conditions, and also for CI listeners. Two experiments tested the tense/lax vowel contrast (varying in formant structure, vowel-inherent spectral change, and vowel duration) and the word-final fricative voicing contrast (varying in F1 transition, vowel duration, consonant duration, and consonant voicing).
Purpose: Speech perception can be described as the transformation of continuous acoustic information into discrete memory representations. Therefore, research on neural representations of speech sounds is particularly important for a better understanding of this transformation. Speech perception models make specific assumptions regarding the representation of mid vowels (e.