Pattern recognition can provide intuitive control of myoelectric prostheses. Currently, screen-guided training (SGT), in which individuals perform specific muscle contractions in sync with prompts displayed on a screen, is the common method of collecting the electromyography (EMG) data necessary to train a pattern recognition classifier. Prosthesis-guided training (PGT) is a new data collection method that requires no additional hardware and allows individuals to keep their focus on the prosthesis itself; the movement of the prosthesis provides the cues for when to perform the muscle contractions. This study compared the training data obtained from SGT and PGT and evaluated user performance after training pattern recognition classifiers with each method. Although the inclusion of transient EMG signal in PGT data decreased classifier accuracy, subjects completed a performance task faster than when using a classifier built from SGT data. This may indicate that PGT training data, which includes both steady-state and transient EMG signals, generates a classifier that more accurately reflects muscle activity during real-time use of a pattern recognition-controlled myoelectric prosthesis.
DOI: http://dx.doi.org/10.1109/EMBC.2012.6346318
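The pipeline the abstract describes can be sketched as a minimal toy: windowed EMG is reduced to a mean-absolute-value (MAV) feature and labeled by a nearest-centroid rule. The feature choice, the classifier, and the synthetic signals are illustrative stand-ins, not the study's actual method.

```python
import random

random.seed(0)  # deterministic toy data

def mav(window):
    """Mean absolute value: a standard time-domain EMG feature."""
    return sum(abs(x) for x in window) / len(window)

def extract_features(signal, win=50):
    """One MAV feature per non-overlapping window."""
    return [mav(signal[i:i + win]) for i in range(0, len(signal) - win + 1, win)]

def synth(amplitude, n=500):
    """Synthetic EMG stand-in: zero-mean noise scaled by contraction level."""
    return [random.gauss(0, amplitude) for _ in range(n)]

# "Training data": low-amplitude rest vs. high-amplitude grip.
train = {"rest": extract_features(synth(0.1)), "grip": extract_features(synth(1.0))}
centroids = {label: sum(f) / len(f) for label, f in train.items()}

def classify(window):
    """Nearest-centroid decision on the MAV feature."""
    f = mav(window)
    return min(centroids, key=lambda label: abs(f - centroids[label]))
```

In this toy, a classifier trained only on steady-state windows would mirror SGT; feeding it windows that span contraction onsets mirrors the transient content that PGT captures.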
Efficient visual word recognition presumably relies on orthographic prediction error (oPE) representations. On the basis of a transparent neurocognitive computational model rooted in the principles of the predictive coding framework, we postulated that readers optimize their percept by removing redundant visual signals, allowing them to focus on the informative aspects of the sensory input (i.e.
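A toy numerical sketch of the prediction-error idea: if the prediction is taken as the average visual signal over known words, subtracting it removes the redundant (shared) pixels and leaves only the informative residual. The flattened 1-D "images" and the mean-based prediction are simplifying assumptions for illustration, not the model's actual implementation.

```python
def orthographic_prediction_error(word_img, known_word_imgs):
    """Toy oPE: sensory input minus a knowledge-based prediction
    (here, the pixel-wise mean over known word images)."""
    n = len(known_word_imgs)
    prediction = [sum(img[i] for img in known_word_imgs) / n
                  for i in range(len(word_img))]
    return [inp - pred for inp, pred in zip(word_img, prediction)]

# Flattened 1-D "images": a pixel shared by all words is fully predicted
# and cancels; word-specific pixels survive in the residual.
known = [[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]]
ope = orthographic_prediction_error([1, 1, 0, 0], known)
```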
J Affect Disord
January 2025
Department of Psychiatry, University of Oxford, Warneford Ln, Headington, Oxford OX3 7JX, United Kingdom; Oxford Health NHS Foundation Trust, Warneford Ln, Headington, Oxford OX3 7JX, United Kingdom. Electronic address:
Background: The renin angiotensin system (RAS) is implicated in various cognitive processes relevant to anxiety. However, the role of the RAS in pattern separation, a hippocampal memory mechanism that enables discrete encoding of similar stimuli, is unclear. Given the proposed role of this mechanism in overgeneralization and the maintenance of anxiety, we explored the influence of the RAS on mnemonic discrimination i.
Comput Biol Med
January 2025
School of Computer Science, Chungbuk National University, Cheongju 28644, Republic of Korea. Electronic address:
The fusion index is a critical metric for quantitatively assessing the transformation of in vitro muscle cells into myotubes in the biological and medical fields. Traditional methods for calculating this index manually involve the labor-intensive counting of numerous muscle cell nuclei in images, which necessitates determining whether each nucleus is located inside or outside the myotubes, leading to significant inter-observer variation. To address these challenges, this study proposes a three-stage process that integrates the strengths of pattern recognition and deep-learning to automatically calculate the fusion index.
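The index arithmetic can be illustrated in isolation: given nuclei centroids and a myotube segmentation mask (which the study obtains automatically via its pattern-recognition and deep-learning stages), the fusion index is the fraction of nuclei falling inside myotubes. The mask and centroids below are toy data.

```python
def fusion_index(nuclei, myotube_mask):
    """Fraction of nuclei whose centroid lies inside a myotube.
    nuclei: list of (row, col) centroids; myotube_mask: 2-D boolean
    grid where True marks myotube pixels."""
    inside = sum(1 for r, c in nuclei if myotube_mask[r][c])
    return inside / len(nuclei)

# 4x4 toy mask: the right half of the image is myotube.
mask = [[False, False, True, True] for _ in range(4)]
index = fusion_index([(0, 0), (1, 3), (2, 2), (3, 1)], mask)  # 2 of 4 inside
```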
Sensors (Basel)
January 2025
Cognitive Systems Lab, University of Bremen, 28359 Bremen, Germany.
This paper presents an approach for event recognition in sequential images using human body part features and their surrounding context. Key body points were approximated to track and monitor their presence in complex scenarios. Various feature descriptors, including MSER (Maximally Stable Extremal Regions), SURF (Speeded-Up Robust Features), distance transform, and DOF (Degrees of Freedom), were applied to skeleton points, while BRIEF (Binary Robust Independent Elementary Features), HOG (Histogram of Oriented Gradients), FAST (Features from Accelerated Segment Test), and Optical Flow were used on silhouettes or full-body points to capture both geometric and motion-based features.
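One of the listed descriptors, HOG, can be sketched at its core: a histogram of unsigned gradient orientations, each vote weighted by gradient magnitude. This single-cell version without block normalization is a simplification for illustration, not the paper's implementation.

```python
import math

def grad_orientation_hist(img, bins=9):
    """HOG-style cell descriptor over a 2-D intensity grid: bin the
    unsigned gradient orientation (0-180 deg) at each interior pixel,
    weighting each vote by the gradient magnitude."""
    hist = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = img[r][c + 1] - img[r][c - 1]   # central difference, x
            gy = img[r + 1][c] - img[r - 1][c]   # central difference, y
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

# A vertical edge produces purely horizontal gradients,
# so all the energy lands in the first orientation bin.
edge = [[0, 0, 0, 10, 10, 10] for _ in range(5)]
```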
Sensors (Basel)
January 2025
Department of Artificial Intelligence, Chung-Ang University, Heukseok-dong, Dongjak-gu, Seoul 06974, Republic of Korea.
Sensor-based gesture recognition on mobile devices is critical to human-computer interaction, enabling intuitive user input for various applications. However, current approaches often rely on server-based retraining whenever new gestures are introduced, incurring substantial energy consumption and latency due to frequent data transmission. To address these limitations, we present the first on-device continual learning framework for gesture recognition.
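The on-device idea can be sketched with a nearest-class-mean classifier: a new gesture class is added, and existing ones refined, by a constant-memory running-mean update on the device itself, with no server retraining or data transmission. This is an illustrative stand-in, not the paper's actual framework.

```python
class IncrementalNCM:
    """Nearest-class-mean gesture classifier with on-device updates."""

    def __init__(self):
        self.means = {}   # label -> mean feature vector
        self.counts = {}  # label -> samples seen

    def update(self, label, x):
        """Incorporate one labeled sample; creates the class if unseen."""
        if label not in self.means:
            self.means[label], self.counts[label] = list(x), 1
        else:
            self.counts[label] += 1
            n = self.counts[label]
            self.means[label] = [m + (xi - m) / n
                                 for m, xi in zip(self.means[label], x)]

    def predict(self, x):
        """Label of the nearest class mean (squared Euclidean distance)."""
        return min(self.means,
                   key=lambda lab: sum((xi - mi) ** 2
                                       for xi, mi in zip(x, self.means[lab])))
```

Because each class keeps only a mean vector and a count, registering a new gesture costs one `update` call per sample rather than a round trip to a server.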