Chinese Sign Language (CSL) is one of the most widely used sign language systems in the world, so automatic recognition and generation of CSL is a key technology for enabling bidirectional communication between deaf and hearing people. Most previous studies have focused solely on sign language recognition (SLR), which addresses communication in only one direction; sign language generation (SLG) is needed to enable communication in the other direction (i.e., from hearing people to deaf people). To achieve a smoother exchange of ideas between these two groups, we propose a skeleton-based CSL recognition and generation framework built on a recurrent neural network (RNN) that supports bidirectional CSL communication. The approach can also be extended to other sequence-to-sequence information interactions. The core of the proposed framework is a two-level probabilistic generative model. Compared with previous techniques, this model offers a more flexible approximate posterior distribution and can produce skeletal sequences in varying styles that remain recognizable to humans. In addition, the proposed generation method compensates for the lack of training data. A series of bidirectional communication experiments was conducted on the large 500-class CSL dataset. The proposed algorithm achieved high recognition accuracy on both real and synthetic data, with a reduced runtime, and the generated data further improved the performance of the discriminator. These results suggest that the proposed bidirectional communication framework and generation algorithm offer an effective new approach to CSL recognition.
DOI: http://dx.doi.org/10.1016/j.neunet.2020.01.030
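To make the pipeline described in the abstract more concrete, the sketch below pairs a GRU-based skeleton recognizer with a two-level latent-variable generator in PyTorch. This is a minimal illustration under assumed settings, not the authors' implementation: the frame count, joint count, network sizes, and the specific hierarchical design (a class-level latent z1 conditioning a style latent z2) are hypothetical stand-ins for the paper's two-level probabilistic generative model.

```python
# Minimal, self-contained PyTorch sketch (NOT the paper's implementation).
# Assumptions: each sign is a sequence of 2D skeleton keypoints
# (T frames x J joints x 2 coords) and there are 500 sign classes.
import torch
import torch.nn as nn

T, J, C, NUM_CLASSES = 32, 18, 2, 500   # frames, joints, coords, classes (illustrative)
FEAT = J * C

class SkeletonRecognizer(nn.Module):
    """GRU encoder that classifies a skeletal sequence into a sign label."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(FEAT, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_CLASSES)

    def forward(self, x):                 # x: (B, T, FEAT)
        _, h = self.rnn(x)                # h: (1, B, hidden)
        return self.head(h[-1])           # logits: (B, NUM_CLASSES)

class TwoLevelGenerator(nn.Module):
    """Hierarchical latent-variable decoder: a class-level latent z1 sets the
    overall sign, a style latent z2 (conditioned on z1) adds per-sample
    variation, and a GRU decodes both into a skeletal sequence."""
    def __init__(self, z1_dim=64, z2_dim=32, hidden=256):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, z1_dim)
        self.z2_prior = nn.Linear(z1_dim, 2 * z2_dim)      # mean / log-var of p(z2 | z1)
        self.init_h = nn.Linear(z1_dim + z2_dim, hidden)
        self.rnn = nn.GRU(FEAT, hidden, batch_first=True)
        self.out = nn.Linear(hidden, FEAT)

    def forward(self, labels):                      # labels: (B,)
        z1 = self.label_emb(labels)                 # top-level latent tied to the sign class
        mu, logvar = self.z2_prior(z1).chunk(2, dim=-1)
        z2 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterized sample
        h = torch.tanh(self.init_h(torch.cat([z1, z2], dim=-1))).unsqueeze(0)
        frames, prev = [], torch.zeros(labels.size(0), 1, FEAT)
        for _ in range(T):                          # autoregressive skeleton decoding
            out, h = self.rnn(prev, h)
            prev = self.out(out)
            frames.append(prev)
        return torch.cat(frames, dim=1)             # (B, T, FEAT)

# Usage: sample synthetic skeleton sequences for a sign class and feed them
# to the recognizer, e.g. as augmentation alongside real data.
gen, rec = TwoLevelGenerator(), SkeletonRecognizer()
fake = gen(torch.tensor([3, 3]))                     # two samples of sign class 3
logits = rec(fake)
print(fake.shape, logits.shape)                      # torch.Size([2, 32, 36]) torch.Size([2, 500])
```

In the paper's setting, synthetic sequences produced in this spirit are used to augment the real training data, which is what the abstract refers to when it says the generated data improved the discriminator's performance.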
Elife
March 2025
Department of Neuroscience, Georgetown University Medical Center, Washington DC, United States.
Research on brain plasticity, particularly in the context of deafness, consistently emphasizes the reorganization of the auditory cortex. But to what extent do all individuals with deafness show the same level of reorganization? To address this question, we examined individual differences in functional connectivity (FC) from the deprived auditory cortex. Our findings demonstrate remarkable differentiation between individuals, arising from the absence of shared auditory experience, which results in heightened FC variability among deaf individuals compared with the more consistent FC in the hearing group.
J Deaf Stud Deaf Educ
March 2025
American Sign Language Department, Columbia College, Chicago, IL, United States.
PeerJ Comput Sci
February 2025
Institute of Mathematical Sciences, College of Arts and Sciences, University of the Philippines Los Baños, Los Baños, Laguna, Philippines.
The increasing number of deaf or hard-of-hearing individuals is a crucial problem, since communication among and within the deaf population proves to be a challenge. Despite the development of sign languages in various countries, there is still a lack of formally implemented programs supporting their needs, especially for Filipino Sign Language (FSL). Recently, studies on FSL recognition have explored deep networks.
BMC Microbiol
March 2025
Kumasi Centre for Collaborative Research in Tropical Medicine, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana.
Background: The isolation and culture of Mycobacterium ulcerans (Mu) as a primary diagnostic modality for Buruli ulcer (BU) disease are limited by their low sensitivity and the organism's slow-growing nature. M. ulcerans cultures can also be overgrown by other bacteria and fungi.
Annu Int Conf IEEE Eng Med Biol Soc
July 2024
An estimated 0.2% of the world population is living with severe deafblindness, with approximately 1.5 million Americans using tactile American Sign Language (t-ASL) as their primary form of communication.