Publications by authors named "Ann Bradlow"

High-frequency speech information is susceptible to inaccurate perception even in mild to moderate hearing loss. Some hearing aids employ frequency-lowering methods such as nonlinear frequency compression (NFC) to give hearing-impaired individuals access to high-frequency speech information by shifting it into more audible lower-frequency regions. Because such techniques cause significant spectral distortion, tests such as the S-Sh Confusion Test help optimize NFC settings to provide high-frequency audibility with the least distortion.
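The frequency map behind NFC can be sketched roughly as follows: frequencies below a kneepoint pass through unchanged, while frequencies above it are compressed in the log-frequency (octave) domain. The kneepoint and compression-ratio values here are illustrative, not the settings of any particular hearing aid.

```python
import math

def nfc_map(f_in, kneepoint=2000.0, ratio=2.0):
    """Map an input frequency (Hz) to a lowered output frequency.

    Frequencies at or below the kneepoint are unchanged; frequencies
    above it are compressed in the log-frequency domain, so an octave
    above the kneepoint maps to 1/ratio of an octave above it.
    """
    if f_in <= kneepoint:
        return f_in
    octaves_above = math.log2(f_in / kneepoint)
    return kneepoint * 2.0 ** (octaves_above / ratio)
```

With these illustrative settings, 8000 Hz (two octaves above the 2000 Hz kneepoint) maps to 4000 Hz (one octave above it), which is why /s/ and /ʃ/ energy can end up crowded together — the distortion the S-Sh Confusion Test probes.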

Speech recognition by both humans and machines frequently fails in non-optimal yet common situations. For example, word recognition error rates for second-language (L2) speech can be high, especially under conditions involving background noise. At the same time, both human and machine speech recognition sometimes shows remarkable robustness against signal- and noise-related degradation.

Measuring how well human listeners recognize speech under varying environmental conditions (speech intelligibility) is a challenge for theoretical, technological, and clinical approaches to speech communication. The current gold standard, human transcription, is time- and resource-intensive. Recent advances in automatic speech recognition (ASR) systems raise the possibility of automating intelligibility measurement.
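Whether the transcript comes from a human or an ASR system, intelligibility scoring typically reduces to comparing it against a reference with word error rate (Levenshtein distance over words, normalized by reference length). A minimal, dependency-free sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, so intelligibility is often reported as accuracy clipped to the [0, 1] range.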

Recent work on perceptual learning for speech has suggested that while high-variability training typically results in generalization, low-variability exposure can sometimes be sufficient for cross-talker generalization. We tested predictions of a similarity-based account, according to which generalization depends on training-test talker similarity rather than on exposure to variability. We compared perceptual adaptation to second-language (L2) speech following single- or multiple-talker training with a round-robin design in which four L2 English talkers from four different first-language (L1) backgrounds served as both training and test talkers.

Inspired by information theoretic analyses of L1 speech and language, this study proposes that L1 and L2 speech exhibit distinct information encoding and transmission profiles in the temporal domain. Both the number and average duration of acoustic syllables (i.e.

Objectives: The role of subcortical synchrony in speech-in-noise (SIN) recognition and the frequency-following response (FFR) was examined in multiple listeners with auditory neuropathy. Although an absent FFR has been documented in one listener with idiopathic neuropathy who has severe difficulty recognizing SIN, several etiologies cause the neuropathy phenotype. Consequently, it is necessary to replicate absent FFRs and concomitant SIN difficulties in patients with multiple sources and clinical presentations of neuropathy to elucidate fully the importance of subcortical neural synchrony for the FFR and SIN recognition.

Recordings of Spanish and English sentences by switched-dominance bilingual (SDB) Spanish (i.e., L2-dominant Spanish-English bilinguals) and by L1-dominant Spanish and English controls were presented to L1-dominant Spanish and English listeners, respectively.

Objective: The goal of this study was to assess recognition of foreign-accented speech of varying intelligibility and linguistic complexity in older adults. It is important to understand the factors that influence the recognition of this commonly encountered type of speech, in a population that remains understudied in this regard.

Design: A repeated measures design was used.

Foreign-accented speech recognition is typically tested with linguistically simple materials, which offer a limited window into realistic speech processing. The present study examined the relationship between linguistic structure and talker intelligibility in several sentence-in-noise recognition experiments. Listeners transcribed simple/short and more complex/longer sentences embedded in noise.

Memory for speech benefits from linguistic structure. Recall is better for sentences than for random strings of words (the "sentence superiority effect"; SSE), and evidence suggests that ongoing speech may be organized advantageously as clauses in memory (recall by word position shows within-clause "U shape"). In this study, we examined the SSE and clause-based organization for closed-set speech materials with low semantic predictability and without typical prosody.

The current study investigated the phonetic adjustment mechanisms that underlie perceptual adaptation in first and second language (Dutch-English) listeners by exposing them to a novel English accent containing controlled deviations from the standard accent (e.g. /i/-to-/ɪ/ yielding /krɪm/ instead of /krim/ for 'cream').

While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Specifically, we examined the relationship between a lesser-studied indexical dimension relevant to bilinguals, namely which language is being spoken (in these experiments, either Mandarin Chinese or English), and three other dimensions: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3).

Second-language (L2) speech is consistently slower than first-language (L1) speech, and L1 speaking rate varies within and across talkers depending on many individual, situational, linguistic, and sociolinguistic factors. This study asked whether speaking rate is also determined by a language-independent, talker-specific trait such that, across a group of bilinguals, L1 speaking rate significantly predicts L2 speaking rate. Two measurements of speaking rate were automatically extracted from recordings of read and spontaneous speech by English monolinguals (n = 27) and bilinguals from ten L1 backgrounds (n = 86): speech rate (syllables/second) and articulation rate (syllables/second excluding silent pauses).
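The distinction between the two rate measures can be illustrated with a small sketch, assuming syllable counts and a speech/pause segmentation of the recording are already available (the study extracted these automatically; the input format here is hypothetical):

```python
def speech_and_articulation_rate(syllable_count, intervals):
    """Compute (speech_rate, articulation_rate) in syllables/second.

    intervals: list of (is_speech, duration_s) segments covering the
    recording. Speech rate divides syllables by total duration, pauses
    included; articulation rate excludes silent pauses.
    """
    total_duration = sum(duration for _, duration in intervals)
    phonation_time = sum(duration for is_speech, duration in intervals if is_speech)
    return syllable_count / total_duration, syllable_count / phonation_time
```

A talker who pauses often can thus have a low speech rate but a high articulation rate, which is why the two measures can dissociate across talkers.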

Adaptation to foreign-accented sentences can be guided by knowledge of the lexical content of those sentences, which, being an exact match for the target, provides feedback on all linguistic levels. We examined how closely this feedback needs to match the accented sentence by manipulating the degree of match on different linguistic dimensions, including the sub-lexical, lexical, and syntactic levels. Both matched and mismatched target-feedback sentence pairs generated greater transcription improvement than non-English speech feedback, indicating that listeners can draw upon sources of linguistic information beyond matching lexical items, such as sub- and supra-lexical information, during adaptation.

This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g.

Language acquisition typically involves periods when the learner speaks and listens to the new language, and others when the learner is exposed to the language without consciously speaking or listening to it. Adaptation to variants of a native language occurs under similar conditions. Here, speech learning by adults was assessed following a training regimen that mimicked this common situation of language immersion without continuous active language processing.

Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children.

Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions: children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning.

Speech processing can often take place in adverse listening conditions that involve the mixing of speech and background noise. In this study, we investigated processing dependencies between background noise and indexical speech features, using a speeded classification paradigm (Garner, 1974; Exp. 1), and whether background noise is encoded and represented in memory for spoken words in a continuous recognition memory paradigm (Exp.

Background: Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al. (2010a). The masking release appeared to increase as masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers influenced the significant differences observed.

This study examined the influence of background language variation on speech recognition. English listeners performed an English sentence recognition task either in "pure" background conditions, in which all trials had English or Dutch background babble, or in mixed background conditions, in which the background language varied across trials (i.e.

This study examined whether language specific properties may lead to cross-language differences in the degree of phonetic reduction. Rates of syllabic reduction (defined here as reduction in which the number of syllables pronounced is less than expected based on canonical form) in English and Mandarin were compared. The rate of syllabic reduction was higher in Mandarin than English.
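Under the definition above, the rate of syllabic reduction for a set of word tokens can be sketched as follows; the pair-of-counts input format is an assumption for illustration, not the study's actual annotation scheme:

```python
def syllabic_reduction_rate(tokens):
    """Proportion of word tokens showing syllabic reduction.

    tokens: list of (canonical_syllables, produced_syllables) pairs,
    one per word token. A token counts as syllabically reduced when
    fewer syllables were produced than the canonical form predicts.
    """
    reduced = sum(1 for canonical, produced in tokens if produced < canonical)
    return reduced / len(tokens)
```

For example, English "probably" produced as two syllables ("probly") would count as a reduced token against its three-syllable canonical form.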

Purpose: To investigate masking release for speech maskers for linguistically and phonetically close (English and Dutch) and distant (English and Mandarin) language pairs.

Method: Thirty-two monolingual speakers of English with normal audiometric thresholds participated in the study. Data are reported for an English sentence recognition task in English and for Dutch and Mandarin competing speech maskers (Experiment 1) and noise maskers (Experiment 2) that were matched either to the long-term average speech spectra or to the temporal modulations of the speech maskers from Experiment 1.
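Matching a noise masker to the long-term average speech spectrum (LTASS) of a speech masker can be approximated by averaging the speech signal's frame-wise magnitude spectrum and pairing that average with random phase. This is an illustrative sketch under simplified assumptions (rectangular non-overlapping frames, no calibration), not the procedure used in the study:

```python
import numpy as np

def ltass_matched_noise(speech, n_fft=1024, rng=None):
    """Synthesize noise whose long-term average spectrum matches `speech`.

    Averages the magnitude spectrum across non-overlapping frames of the
    speech signal, then combines it with uniformly random phase per frame
    and inverse-transforms, yielding a spectrally matched noise masker.
    """
    rng = np.random.default_rng() if rng is None else rng
    usable = len(speech) // n_fft * n_fft
    frames = np.asarray(speech[:usable]).reshape(-1, n_fft)
    avg_magnitude = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    noise_frames = []
    for _ in range(len(frames)):
        phase = rng.uniform(0.0, 2.0 * np.pi, size=avg_magnitude.shape)
        spectrum = avg_magnitude * np.exp(1j * phase)
        noise_frames.append(np.fft.irfft(spectrum, n=n_fft))
    return np.concatenate(noise_frames)
```

Such a masker preserves the spectral profile of the original speech while destroying its temporal modulations, which is what lets experiments separate spectral from modulation-based masking effects.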
