We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish-Spanish Sign Language) and unimodal bilinguals (Spanish-Basque).
Spoken words and signs both consist of structured sub-lexical units. While phonemes unfold in time in the case of the spoken signal, visual sub-lexical units such as location and handshape are produced simultaneously in signs. In the current study we investigate the role of sub-lexical units in lexical access in spoken Spanish and in Spanish Sign Language (LSE) in hearing early bimodal bilinguals and in hearing second language (L2) learners of LSE, both native speakers of Spanish, using the visual world paradigm.
J Deaf Stud Deaf Educ
October 2018
This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g.
Behav Brain Sci
January 2017
In our commentary, we raise concerns with the idea that location should be considered a gestural component of sign languages. We argue that psycholinguistic studies provide evidence for location as a "categorical" element of signs. More generally, we propose that the use of space in sign languages comes in many flavours and may be both categorical and imagistic.
Many bimodal bilinguals are immersed in a spoken language-dominant environment from an early age and, unlike unimodal bilinguals, do not necessarily divide their language use between languages. Nonetheless, early ASL-English bilinguals retrieved fewer words in a letter fluency task in their dominant language compared to monolingual English speakers with equal vocabulary level. This finding demonstrates that reduced vocabulary size and/or frequency of use cannot completely account for bilingual disadvantages in verbal fluency.
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks.
J Exp Psychol Learn Mem Cogn
November 2017
This study investigated whether language control during language production in bilinguals generalizes across modalities, and to what extent the language control system is shaped by competition for the same articulators. Using a cued language-switching paradigm, we investigated whether switch costs are observed when hearing signers switch between a spoken and a signed language. The results showed an asymmetrical switch cost for bimodal bilinguals on reaction time (RT) and accuracy, with larger costs for the (dominant) spoken language.
We used picture-word interference (PWI) to discover a) whether cross-language activation at the lexical level can yield phonological priming effects when languages do not share phonological representations, and b) whether semantic interference effects occur without articulatory competition. Bimodal bilinguals fluent in American Sign Language (ASL) and English named pictures in ASL while listening to distractor words that were 1) translation equivalents, 2) phonologically related to the target sign through translation, 3) semantically related, or 4) unrelated. Monolingual speakers named pictures in English.
Biling (Camb Engl)
March 2016
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages.
J Deaf Stud Deaf Educ
April 2016
Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented together simultaneously than when each is presented alone. More robust facilitation effects were observed for semantic decision than for lexical decision, suggesting that lexical integration of signs and words within a code-blend occurs primarily at the semantic level, rather than at the level of form. Early bilinguals exhibited greater facilitation effects than late bilinguals for English (the dominant language) in the semantic decision task, possibly because early bilinguals are better able to process early visual cues from ASL signs and use these to constrain English word recognition.
This study investigates the role of acoustic salience and hearing impairment in learning phonologically minimal pairs. Picture-matching and object-matching tasks were used to investigate the learning of consonant and vowel minimal pairs in five- to six-year-old deaf children with a cochlear implant (CI), and children of the same age with normal hearing (NH). In both tasks, the CI children showed clear difficulties with learning minimal pairs.
Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages.
J Deaf Stud Deaf Educ
January 2014
The effect of using signed communication on the spoken language development of deaf children with a cochlear implant (CI) is much debated. We report on two studies that investigated relationships between spoken word and sign processing in children with a CI who are exposed to signs in addition to spoken language. Study 1 assessed rapid word and sign learning in 13 children with a CI and found that performance in both language modalities correlated positively.
Purpose: This study examined the use of different acoustic cues in auditory perception of consonant and vowel contrasts by profoundly deaf children with a cochlear implant (CI) in comparison to age-matched children and young adults with normal hearing.
Method: A speech sound categorization task in an XAB format was administered to 15 children ages 5-6 with a CI (mean age at implant: 1;8 [years;months]), 20 normal-hearing age-matched children, and 21 normal-hearing adults. Four contrasts were examined: /ɑ/-/a/, /i/-/ɪ/, /bu/-/pu/, and /fu/-/su/.