Bimodal bilinguals, fluent in a signed and a spoken language, provide unique insight into the nature of syntactic integration and language control. We investigated whether bimodal bilinguals who are conversing with English monolinguals produce American Sign Language (ASL) grammatical facial expressions to accompany parallel syntactic structures in spoken English. In ASL, raised eyebrows mark conditionals, and furrowed eyebrows mark wh-questions; the grammatical brow movement is synchronized with the manual onset of the clause. Bimodal bilinguals produced more ASL-appropriate facial expressions than did nonsigners and synchronized their expressions with the onset of the corresponding English clauses. This result provides evidence for a dual-language architecture in which grammatical information can be integrated up to the level of phonological implementation. Overall, participants produced more raised brows than furrowed brows, perhaps because furrowed brows can also convey negative affect. Bimodal bilinguals suppressed, but did not completely inhibit, ASL facial grammar when it conflicted with conventional facial gestures. We conclude that morphosyntactic elements from two languages can be articulated simultaneously and that complete inhibition of the nonselected language is difficult.
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2632943 (PMC)
DOI: http://dx.doi.org/10.1111/j.1467-9280.2008.02119.x
J Acoust Soc Am, December 2024. Division of Humanities, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong.
In perceptual studies, musicality and pitch aptitude have been implicated in tone learning, while vocabulary size has been implicated in distributional (segment) learning. Moreover, working memory plays a role in the overnight consolidation of explicit-declarative L2 learning. This study examines how these factors uniquely account for individual differences in the distributional learning and consolidation of an L2 tone contrast when the learners are tonal-language speakers and the training is implicit.
Cortex, January 2025. Language and Brain Lab, Sagol School of Neuroscience, and School of Education, Tel Aviv University, Tel Aviv, Israel.
We report a case of crossmodal bilingual aphasia (aphasia in two modalities, spoken and signed) together with dysgraphia in both writing and fingerspelling. The patient, Sunny, was a 42-year-old woman who had sustained a left temporo-parietal stroke; she was a speaker of Hebrew, Romanian, and English, and an adult learner and daily user of Israeli Sign Language (ISL). We assessed Sunny's spoken and sign languages using a comprehensive test battery of naming, reading, and repetition tasks, and also analysed her spontaneous speech and sign.
J Deaf Stud Deaf Educ, November 2024. Department of Language and Communication Studies, University of Jyväskylä, Seminaarinkatu 15, PO Box 35, FI-40014, Jyväskylä, Finland.
This article investigates the narrative skills of children acquiring Finnish Sign Language (FinSL). Producing a narrative requires vocabulary, the ability to form sentences, and the cognitive skill of ordering actions logically so that the recipient can understand the story. Research has shown that narrative tasks are an excellent way of observing a child's language skills, for they reflect both grammatical competence and the ability to use the language in situationally appropriate ways.
Brain Lang, December 2024. Department of Cognition, Development and Educational Psychology, Institut de Neurociències, Universitat de Barcelona, Spain.
The present study aimed to investigate the neural changes related to the early stages of sign language vocabulary learning. Hearing non-signers were exposed to Catalan Sign Language (LSC) signs in three laboratory learning sessions over the course of a week. Participants completed two priming tasks designed to examine learning-related neural changes by means of N400 responses.
Infant Behav Dev, December 2024. Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, USA.
Distributional learning has been proposed as a mechanism by which infants learn the native phonemes of the language(s) to which they are exposed. When hearing two speech streams, bilingual infants may find other strategies more useful and rely on distributional learning less than monolingual infants do. A series of studies examined how bilingual language experience affects the application of distributional learning to novel phoneme distributions.