Publications by authors named "Michael D Tyler"

Article Synopsis
  • Perceptual narrowing is important for cognitive and category learning in infants, but its underlying neural mechanisms are not well understood.
  • A study using EEG investigated Australian infants' brain responses to English and Nuu-Chah-Nulth languages at two ages (5-6 months and 11-12 months) to explore changes in speech perception.
  • Results showed younger infants had immature neural responses to both language contrasts, while older infants recognized the native contrast more effectively, indicating that brain plasticity allows for adaptation in early speech perception despite some limitations.

Fundamental frequency (F0), perceived as pitch, is the first and arguably most salient auditory component humans are exposed to from the beginning of life. It carries multiple linguistic (e.g.

Auditory speech appears to be linked to visual articulatory gestures and orthography through different mechanisms. Yet, both types of visual information have a strong influence on speech processing. The present study directly compared their contributions to speech processing using a novel word learning paradigm.

Vowel contrasts tend to be perceived independently of pitch modulation, but it is not known whether pitch can be perceived independently of vowel quality. This issue was investigated in the context of a lexical tone language, Mandarin Chinese, using a printed word version of the visual world paradigm. Eye movements to four printed words were tracked while listeners heard target words that differed from competitors only in tone (test condition) or also in onset consonant and vowel (control condition).

To become language users, infants must embrace the integrality of speech perception and production. That they do so, and quite rapidly, is implied by the native-language attunement they achieve in each domain by 6-12 months. Yet research has most often addressed one or the other domain, rarely how they interrelate.

This study examined three ways that perception of non-native phones may be uncategorized relative to native (L1) categories: focalized (predominantly similar to a single L1 category), clustered (similar to two or more L1 categories), and dispersed (not similar to any L1 categories). In an online study, Egyptian Arabic speakers residing in Egypt categorized and rated all Australian English vowels. Evidence was found to support focalized, clustered, and dispersed uncategorized assimilations.

Research on language-specific tuning in speech perception has focused mainly on consonants, while that on non-native vowel perception has failed to address whether the same principles apply. Therefore, non-native vowel perception was investigated here in light of relevant theoretical models: the Perceptual Assimilation Model (PAM) and the Natural Referent Vowel (NRV) framework. American-English speakers completed discrimination and native language assimilation (categorization and goodness rating) tests on six non-native vowel contrasts.

Past research has shown that English learners begin segmenting words from speech by 7.5 months of age. However, more recent research has begun to show that, in some situations, infants may exhibit rudimentary segmentation capabilities at an earlier age.

Monolingual listeners are constrained by native language experience when categorizing and discriminating unfamiliar non-native contrasts. Are early bilinguals constrained in the same way by their two languages, or do they possess an advantage? Greek-English bilinguals in either Greek or English language mode were compared to monolinguals on categorization and discrimination of Ma'di stop-voicing distinctions that are non-native to both languages. As predicted, English monolinguals categorized Ma'di prevoiced plosive and implosive stops and the coronal voiceless stop as English voiced stops.

By 12 months, children grasp that a phonetic change to a word can change its identity (phonological distinctiveness). However, they must also grasp that some phonetic changes do not alter a word's identity (phonological constancy). To test the development of phonological constancy, sixteen 15-month-olds and sixteen 19-month-olds completed an eye-tracking task that tracked their gaze to named versus unnamed images for familiar words spoken in their native (Australian) and an unfamiliar non-native (Jamaican) regional accent of English.

How listeners categorize two phones predicts the success with which they will discriminate the given phonetic distinction. In the case of bilinguals, such perceptual patterns could reveal whether the listener's two phonological systems are integrated or separate. This is of particular interest when a given contrast is realized differently in each language, as is the case with Greek and English stop-voicing distinctions.

Speech production research has demonstrated that the first language (L1) often interferes with production in bilinguals' second language (L2), but it has been suggested that bilinguals who are L2-dominant are the most likely to suppress this L1-interference. While prolonged contextual changes in bilinguals' language use (e.g.

The way that bilinguals produce phones in each of their languages provides a window into the nature of the bilingual phonological space. For stop consonants, if early sequential bilinguals, whose languages differ in voice onset time (VOT) distinctions, produce native-like VOTs in each of their languages, it would imply that they have developed separate first and second language phones, that is, language-specific phonetic realisations for stop-voicing distinctions. Given the ambiguous phonological status of Greek voiced stops, which has been debated but not investigated experimentally, Greek-English bilinguals can offer a unique perspective on this issue.

Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants' ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language.
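
For illustration only (this is not code or a procedure from the article): the transitional-probability cue described above can be sketched as the conditional probability of one syllable following another, with dips in that probability treated as candidate word boundaries. The function names, the toy syllable stream, and the threshold below are assumptions made for the sketch.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability P(next | current) for each
    adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def likely_boundaries(syllables, threshold=0.75):
    """Posit a word boundary wherever the transitional probability dips
    below a purely illustrative threshold."""
    tps = transitional_probabilities(syllables)
    return [i + 1 for i, pair in enumerate(zip(syllables, syllables[1:]))
            if tps[pair] < threshold]

# Toy stream built from the "words" golabu, tupiro, bidaku in varied order:
# TPs are 1.0 within words and 0.5 across most word edges.
stream = "go la bu tu pi ro go la bu bi da ku tu pi ro bi da ku".split()
print(likely_boundaries(stream))  # [3, 6, 9, 15]; the edge at position 12 is
# missed because "ku" happens to precede only one syllable type in this sample
```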

Two artificial-language learning experiments directly compared English, French, and Dutch listeners' use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable "words." These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue.

Efficient word recognition depends on detecting critical phonetic differences among similar-sounding words, or sensitivity to phonological distinctiveness, an ability evident at 19 months of age but unreliable at 14 to 15 months of age. However, little is known about phonological constancy, the equally crucial ability to recognize a word's identity across natural phonetic variations, such as those in cross-dialect pronunciation differences. We show that 15- and 19-month-old children recognize familiar words spoken in their native dialect, but that only the older children recognize familiar words in a dissimilar nonnative dialect, providing evidence for emergence of phonological constancy by 19 months.

The aim of this paper is to provide further evidence that orthography plays a central role in phonemic awareness, by demonstrating an orthographic congruency effect in phoneme deletion. In four initial phoneme deletion experiments, adult participants produced the correct response more slowly with orthographically mismatched stimulus-response pairs (e.g.

Many researchers rely on analogue voice keys for psycholinguistic research. However, the triggering of traditional simple threshold voice keys (STVKs) is delayed after response onset, and the delay duration may vary depending on initial phoneme type. The delayed trigger voice key (DTVK), a stand-alone electronic device that incorporates an additional minimum signal duration parameter, is described and validated in two experiments.
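
As a rough sketch of the general idea (the function name, parameters, and trigger convention below are assumptions, not the device's actual specification): a simple threshold voice key fires at the first supra-threshold sample, whereas adding a minimum signal duration requires the signal to stay above threshold for a sustained run before a trigger is accepted, which helps reject brief noise bursts.

```python
def dtvk_onset(samples, threshold, min_duration_samples):
    """Return the index of the first sample that starts a run of at least
    `min_duration_samples` consecutive samples whose absolute amplitude
    exceeds `threshold`, or None if no such run occurs.

    A simple-threshold voice key would trigger at the first supra-threshold
    sample; requiring a minimum duration is meant to avoid spurious triggers.
    Reporting the start of the run is one plausible convention; the actual
    device may report timing differently."""
    run_start = None
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            if run_start is None:
                run_start = i
            if i - run_start + 1 >= min_duration_samples:
                return run_start
        else:
            run_start = None
    return None

# Illustration: a brief click should not trigger, sustained voicing should.
quiet, click, voicing = [0.01] * 50, [0.9] * 3, [0.8] * 40
signal = quiet + click + quiet + voicing
print(dtvk_onset(signal, threshold=0.5, min_duration_samples=10))  # 103
```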

Is it possible to learn the relation between 2 nonadjacent events? M. Peña, L. L.
