How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Download full-text PDF:
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4821822
- DOI: http://dx.doi.org/10.1017/S0140525X15001247
Cogn Process
January 2025
Institute of Cognitive Sciences and Technologies (ISTC-CNR), Via Nomentana 56, 00161, Rome, Italy.
Face masks can affect the processing of a narrative in sign language, influencing several metacognitive dimensions of understanding (i.e., perceived effort, confidence, and feeling of understanding).
J Speech Lang Hear Res
January 2025
Department of Communication Science and Disorders, University of Pittsburgh, PA.
Purpose: The present study assessed the test-retest reliability of the American Sign Language (ASL) version of the Computerized Revised Token Test (CRTT-ASL) and compared the differences and similarities between ASL and English reading by Deaf and hearing users of ASL.
Method: Creation of the CRTT-ASL involved filming, editing, and validating CRTT instructions, sentence commands, and scoring. Deaf proficient (DP), hearing nonproficient (HNP), and hearing proficient sign language users completed the CRTT-ASL and the English self-paced, word-by-word reading CRTT (CRTT-Reading-Word Fade [CRTT-R-wf]).
Arch Public Health
January 2025
School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, 226001, China.
Background: Chinese cancer survivors fare poorly in returning to work. Peer support, an external coping resource that helps cancer survivors return to work, brings together members of the lay community who share similar stressors or problems for mutual support. Because peer volunteers have not received systematic training, inappropriate language during the support process can cause secondary harm to both the peer volunteer and the cancer survivor.
JMIR Res Protoc
January 2025
Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.
Background: Individuals with hearing impairments may face barriers to health care assistance, which can significantly affect prognosis and the incidence of complications and iatrogenic events. The development of automatic communication systems to support interaction between this population and health care workers is therefore paramount.
Objective: This study aims to systematically review the evidence on communication systems using human-computer interaction techniques developed for deaf people who communicate through sign language that are already in use or proposed for use in health care contexts and have been tested with human users or videos of human users.
Mem Cognit
January 2025
Department of Linguistics, University of California San Diego, 9500 Gilman Drive, La Jolla, CA, 92093-0108, USA.
Research shows that insufficient language access in early childhood significantly affects language processing. While most of this work focuses on syntax, phonology also appears to be affected, though it is unclear exactly how. Here we investigated phonological production across ages of acquisition of American Sign Language (ASL).