The myoelectric prosthesis is a promising tool for restoring hand function in amputees, but the classification accuracy of surface electromyography (sEMG) is not yet high enough for real-time application. Researchers have proposed integrating sEMG signals with other signals that are not affected by amputation. The strong coordination between vision and hand manipulation motivates including visual information in prosthetic hand control. In this study, we identified a sweet period during the early reaching phase in which vision data yield higher accuracy in classifying grasp patterns. Moreover, the visual classification results from the sweet period could be naturally integrated with sEMG data collected during the grasp phase. After the integration, the accuracy of grasp classification increased from 85.5% (sEMG only) to 90.06% (integrated). Knowledge gained from this study encourages us to further explore methods for incorporating computer vision into myoelectric data to enhance the movement control of prosthetic hands.
Full text: PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9538562 | DOI: http://dx.doi.org/10.3389/frobt.2022.948238
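The integration described above amounts to combining two classifiers' outputs. A minimal late-fusion sketch is shown below; the grasp class names, probability values, and equal weighting are illustrative assumptions, not the paper's actual fusion method.

```python
# Late fusion of a vision-based and an sEMG-based grasp classifier.
# Class list, probabilities, and weights are hypothetical examples.

GRASPS = ["power", "pinch", "tripod", "lateral"]

def fuse(p_vision, p_semg, w_vision=0.5):
    """Weighted average of two class-probability vectors, renormalized."""
    fused = [w_vision * v + (1 - w_vision) * s
             for v, s in zip(p_vision, p_semg)]
    total = sum(fused)
    return [f / total for f in fused]

def classify(p_vision, p_semg):
    """Pick the grasp with the highest fused probability."""
    fused = fuse(p_vision, p_semg)
    return GRASPS[max(range(len(fused)), key=fused.__getitem__)]

# Vision from the reaching-phase "sweet period" favors "pinch";
# sEMG from the grasp phase is ambiguous between "power" and "pinch".
p_vision = [0.10, 0.70, 0.15, 0.05]
p_semg   = [0.35, 0.40, 0.15, 0.10]
print(classify(p_vision, p_semg))  # -> pinch
```

Fusing probabilities rather than hard labels lets the confident visual prediction from the sweet period resolve ambiguity in the sEMG channel.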
Humans excel at applying learned behavior to unlearned situations. A crucial component of this generalization behavior is our ability to compose/decompose a whole into reusable parts, an attribute known as compositionality. One of the fundamental questions in robotics concerns this characteristic: How can linguistic compositionality be developed concomitantly with sensorimotor skills through associative learning, particularly when individuals only learn partial linguistic compositions and their corresponding sensorimotor patterns? To address this question, we propose a brain-inspired neural network model that integrates vision, proprioception, and language into a framework of predictive coding and active inference on the basis of the free-energy principle.
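Predictive coding, the core mechanism named above, iteratively adjusts internal latent estimates to reduce the error between top-down predictions and observations. The toy single-variable update below illustrates only that principle; the proposed model integrates vision, proprioception, and language in a far richer architecture.

```python
# Toy predictive-coding update: minimize squared prediction error by
# gradient descent on the latent estimate mu. Purely illustrative.

def predictive_coding_step(mu, obs, weight, lr=0.1):
    """One step: prediction = weight * mu; mu moves to shrink the error."""
    prediction = weight * mu
    error = obs - prediction      # bottom-up prediction error
    mu += lr * weight * error     # gradient step on 0.5 * error**2
    return mu, error

mu, weight, obs = 0.0, 2.0, 1.0
for _ in range(50):
    mu, err = predictive_coding_step(mu, obs, weight)
print(round(mu, 3))  # converges toward obs / weight = 0.5
```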
Neuroinformatics
January 2025
Department of Information Technology, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Ramapuram, Chennai, 600089, India.
Brain tumours are among the most deadly and most notable types of cancer, affecting both children and adults. Major obstacles in brain tumour identification are late diagnosis and the high cost of tumour-detecting devices. Most existing approaches use machine-learning algorithms to address this problem, but they suffer from drawbacks such as low accuracy, high loss, and high computational cost.
Data Brief
February 2025
Department of Computer Science and Engineering, East West University, Aftabnagar, Dhaka, Bangladesh.
In the field of agriculture, particularly within the context of machine learning applications, quality datasets are essential for advancing research and development. To address the challenges of identifying different mango leaf types and recognizing the diverse and unique characteristics of mango varieties in Bangladesh, a comprehensive and publicly accessible dataset titled "BDMANGO" has been created. This dataset includes images essential for research, featuring six mango varieties: Amrapali, Banana, Chaunsa, Fazli, Haribhanga, and Himsagar, which were collected from different locations.
Heliyon
January 2025
Department of Optometry and Vision Science, School of Rehabilitation, Tehran University of Medical Science, Tehran, Iran.
Purpose: We aimed to build a machine learning-based model to predict radiation-induced optic neuropathy in patients treated with radiotherapy for head and neck cancers.
Materials And Methods: To measure radiation-induced optic neuropathy, visual evoked potential values were obtained in both the case and control groups and compared. Radiomics features were extracted from the segmented area, which included the right and left optic nerves and the chiasm.
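A radiomics pipeline of this kind reduces a segmented region to numeric features and feeds them to a classifier. The sketch below uses two first-order features (mean, standard deviation) and a nearest-centroid rule; the feature set, intensity values, and centroids are hypothetical stand-ins for the study's actual pipeline.

```python
# First-order radiomics-style features from a segmented ROI, classified
# by distance to per-class feature centroids. All numbers are toy values.
import math

def first_order_features(intensities):
    """Mean and standard deviation of ROI voxel intensities."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    return [mean, math.sqrt(var)]

def nearest_centroid(feat, centroids):
    """Label of the closest class centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda c: sum((f - v) ** 2
                                 for f, v in zip(feat, centroids[c])))

# Hypothetical centroids learned from case (neuropathy) and control cohorts.
centroids = {"neuropathy": [80.0, 25.0], "control": [50.0, 10.0]}
roi = [78, 85, 90, 70, 95, 60, 88, 82]
print(nearest_centroid(first_order_features(roi), centroids))
```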
Quant Imaging Med Surg
January 2025
School of Computing, Mathematics and Engineering, Charles Sturt University, Albury, Australia.
Background: The limitation in spatial resolution of bone scintigraphy, combined with the vast variations in size, location, and intensity of bone metastasis (BM) lesions, poses challenges for accurate diagnosis by human experts. Deep learning-based analysis has emerged as a preferred approach for automating the identification and delineation of BM lesions. This study aims to develop a deep learning-based approach to automatically segment bone scintigrams for improving diagnostic accuracy.
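Automated lesion segmentation like the approach described here is typically evaluated with an overlap metric. A minimal sketch of the Dice coefficient, the standard choice for comparing a predicted mask against the reference delineation, is below; the binary masks are toy examples, and the paper's own evaluation protocol is not specified in this excerpt.

```python
# Dice coefficient between two binary masks, flattened to 0/1 lists.
# Dice = 2 * |P intersect T| / (|P| + |T|).

def dice(pred, truth):
    """Overlap score in [0, 1]; 1 means identical masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2 * inter / size if size else 1.0

pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*3 / (4 + 4) = 0.75
```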