The picture-word interference (PWI) paradigm and ERPs were used to investigate whether lexical selection in deaf and hearing ASL-English bilinguals occurs via lexical competition or whether the response exclusion hypothesis (REH) for PWI effects is supported. The REH predicts that semantic interference should not occur for bimodal bilinguals because sign and word responses do not compete within an output buffer. Bimodal bilinguals named pictures in ASL, each preceded by an English written word that was a translation equivalent, semantically related, or unrelated. In both the translation and semantically related conditions, bimodal bilinguals showed facilitation effects: reduced RTs and N400 amplitudes for related compared to unrelated prime conditions. We also observed an unexpected focal left anterior positivity that was stronger in the translation condition, which we speculate may be due to articulatory priming. Overall, the results support the REH and models of bilingual language production that assume lexical selection occurs without competition between languages.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8411899
DOI: http://dx.doi.org/10.1080/23273798.2020.1821905


Similar Publications

In perceptual studies, musicality and pitch aptitude have been implicated in tone learning, while vocabulary size has been implicated in distributional (segment) learning. Moreover, working memory plays a role in the overnight consolidation of explicit-declarative L2 learning. This study examines how these factors uniquely account for individual differences in the distributional learning and consolidation of an L2 tone contrast, where learners are tonal language speakers, and the training is implicit.


Bimodal aphasia and dysgraphia: Phonological output buffer aphasia and orthographic output buffer dysgraphia in spoken and sign language.

Cortex

November 2024

Language and Brain Lab, Sagol School of Neuroscience, and School of Education, Tel Aviv University, Tel Aviv, Israel.

We report a case of crossmodal bilingual aphasia-aphasia in two modalities, spoken and sign language-and dysgraphia in both writing and fingerspelling. The patient, Sunny, was a 42-year-old woman who had suffered a left temporo-parietal stroke; she was a speaker of Hebrew, Romanian, and English, and an adult learner and daily user of Israeli Sign Language (ISL). We assessed Sunny's spoken and sign languages using a comprehensive test battery of naming, reading, and repetition tasks, and also analysed her spontaneous speech and sign.


The macrostructure of narratives produced by children acquiring Finnish Sign Language.

J Deaf Stud Deaf Educ

November 2024

Department of Language and Communication Studies, University of Jyväskylä, Seminaarinkatu 15, PO Box 35, FI-40014, Jyväskylä, Finland.

This article investigates the narrative skills of children acquiring Finnish Sign Language (FinSL). Producing a narrative requires vocabulary, the ability to form sentences, and cognitive skills to construct actions in a logical order for the recipient to understand the story. Research has shown that narrative skills are an excellent way of observing a child's language skills, for they reflect both grammatical language skills and the ability to use the language in situationally appropriate ways.


Neural changes in sign language vocabulary learning: Tracking lexical integration with ERP measures.

Brain Lang

December 2024

Department of Cognition, Development and Educational Psychology, Institut de Neurociències, Universitat de Barcelona, Spain.

The present study aimed to investigate the neural changes related to the early stages of sign language vocabulary learning. Hearing non-signers were exposed to Catalan Sign Language (LSC) signs in three laboratory learning sessions over the course of a week. Participants completed two priming tasks designed to examine learning-related neural changes by means of N400 responses.


Distributional learning of bimodal and trimodal phoneme categories in monolingual and bilingual infants.

Infant Behav Dev

December 2024

Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, USA.

Distributional learning has been proposed as a mechanism by which infants learn the native phonemes of the language(s) to which they are exposed. When hearing two speech streams, bilingual infants may find other strategies more useful and rely on distributional learning less than monolingual infants do. A series of studies examined how bilingual language experience affects the application of distributional learning to novel phoneme distributions.

