Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters for many sign languages, especially in Saudi Arabia. This shortage results in a large proportion of the hearing-impaired population being deprived of services, especially in public places. This paper aims to address this gap in accessibility by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. We propose a hybrid model that captures the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model combines a Convolutional Neural Network (CNN) classifier, which extracts spatial features from sign language data, with a Long Short-Term Memory (LSTM) classifier, which models the temporal characteristics of sequential data (i.e., hand movements). To demonstrate the feasibility of the proposed model, we created an ArSL dataset of 20 words: 4000 images covering 10 static-gesture words and 500 videos covering 10 dynamic-gesture words. The proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. Thus, this paper represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired.
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11175347
DOI: http://dx.doi.org/10.3390/s24113683
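The abstract describes the general CNN-to-LSTM pipeline (per-frame spatial features fed to a recurrent classifier) but gives no architecture details. Below is a minimal NumPy sketch of that pipeline shape; every dimension, filter count, and weight is hypothetical (only the 20-class output matches the paper's vocabulary size), and weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_features(frame, kernels):
    """Spatial features: conv -> ReLU -> global average pool, one value per filter."""
    return np.array([np.maximum(conv2d_valid(frame, k), 0.0).mean() for k in kernels])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_hidden(xs, Wx, Wh, b, hidden):
    """Single-layer LSTM over a feature sequence; returns the final hidden state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = x @ Wx + h @ Wh + b                 # all four gates stacked: i, f, o, g
        i, f, o = sigmoid(z[:3 * hidden].reshape(3, hidden))
        g = np.tanh(z[3 * hidden:])
        c = f * c + i * g                        # cell-state update
        h = o * np.tanh(c)                       # hidden-state update
    return h

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Hypothetical dimensions: 16 frames of 32x32 grayscale per clip, 8 conv
# filters, 32 LSTM units, and 20 output classes (the paper's vocabulary size).
T, H, W, F, HID, C = 16, 32, 32, 8, 32, 20
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(F)]
Wx = rng.standard_normal((F, 4 * HID)) * 0.1
Wh = rng.standard_normal((HID, 4 * HID)) * 0.1
b = np.zeros(4 * HID)
Wout = rng.standard_normal((HID, C)) * 0.1

video = rng.standard_normal((T, H, W))           # stand-in for one dynamic-gesture clip
frame_feats = np.stack([cnn_features(f, kernels) for f in video])  # shape (T, F)
probs = softmax(lstm_last_hidden(frame_feats, Wx, Wh, b, HID) @ Wout)
print(probs.shape)  # (20,)
```

For static-gesture words (single images), only the `cnn_features` stage applies, followed directly by a classifier; the LSTM is needed only for the video (dynamic-gesture) inputs.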
J Vis, January 2025. Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, USA.
The visual environment of sign language users is markedly distinct in its spatiotemporal parameters compared to that of non-signers. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling.
Linguist Vanguard, December 2024. Laboratoire de Sciences Cognitives et Psycholinguistique (ENS, EHESS, CNRS), Ecole Normale Supérieure - PSL, 29 rue d'Ulm, 75005 Paris, France.
We investigate the degree to which mispronounced signs can be accommodated by signers of French Sign Language (LSF). Using an offline judgment task, we examine both the individual contributions of three parameters - handshape, movement, and location - to sign recognition, and the impact of the individual features that were manipulated to obtain the mispronounced signs. Results indicate that signers judge mispronounced handshapes to be less damaging for well-formedness than mispronounced locations or movements.
mLife, December 2024. State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic & Developmental Sciences, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China.
Curr Diab Rep, December 2024. College of Nursing, University of Utah, 10 South 2000 East, Salt Lake City, UT 84112, USA.
Purpose Of Review: Describe the connection between Deaf/hard of hearing (DHH) status and diabetes; explain the bidirectional relationship between blind/low vision (BLV) and diabetes; characterize the challenges DHH and BLV populations face when seeking healthcare for their diabetes management; highlight the inaccessibility of diabetes technology for these populations; and provide best practices for communicating with DHH and BLV people in the clinical setting.
ACS Appl Mater Interfaces, December 2024. National Engineering Lab of Special Display Technology, Special Display and Imaging Technology Innovation Center of Anhui Province, Academy of Optoelectronic Technology, Hefei University of Technology, Hefei 230009, China.
Flexible sensors mimic the sensing ability of human skin and offer unique flexibility and adaptability, allowing users to interact with intelligent systems in a more natural and intimate way. To overcome the low sensitivity and limited operating range of flexible strain sensors, this study presents an innovative preparation method for a conductive elastomeric sensor with a cracked thin film, combining polydimethylsiloxane (PDMS) with multiwalled carbon nanotubes (MCNTs). This design significantly increases both the sensitivity and the operating range of the sensor (strain range 0-50%); the maximum tensile sensitivity of this sensor reaches 4.
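The "tensile sensitivity" quoted for resistive strain sensors of this kind is conventionally the gauge factor, the relative resistance change divided by the applied strain. A small sketch of that definition, with entirely hypothetical resistance values (the snippet's own figure is truncated, so none of these numbers come from the paper):

```python
# Gauge factor (GF) of a resistive strain sensor: GF = (dR / R0) / strain.
# All resistance values below are hypothetical, for illustration only.

def gauge_factor(r0, r, strain):
    """Relative resistance change divided by the applied strain."""
    return ((r - r0) / r0) / strain

r0 = 100.0          # unstrained resistance in ohms (hypothetical)
r_strained = 300.0  # resistance at 50% strain (hypothetical)
gf = gauge_factor(r0, r_strained, 0.50)
print(gf)  # -> 4.0
```

A higher gauge factor over a wider strain range is exactly the trade-off the cracked-film design targets: crack opening under tension produces large resistance changes without destroying the conductive network.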