Vision-based human joint angle estimation is essential for remote and continuous health monitoring. Most vision-based angle estimation methods use the locations of human joints extracted with optical motion capture cameras, depth cameras, or human pose estimation models. This study proposes a reliable and straightforward deep learning approach for estimating knee and elbow flexion/extension angles from RGB video. Fifteen healthy participants performed four daily activities. Experiments were conducted with four different deep learning networks; each network took nine consecutive frames as input, and the output was the knee and elbow joint angles extracted from an optical motion capture system for each frame. The BiLSTM-based joint angle estimator achieved correlations of 0.955 for the knee and 0.917 for the elbow joint, regardless of camera view angle.
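As a rough illustration of the data pipeline the abstract describes, the sketch below shows how per-frame pose features could be grouped into nine-frame input windows, and how a flexion/extension angle can be computed geometrically from three keypoints (e.g., hip, knee, ankle). This is a minimal NumPy sketch with hypothetical function names; it does not reproduce the paper's BiLSTM network or its motion-capture ground truth.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b formed by segments b->a and b->c.

    For a knee joint, a/b/c could be hip/knee/ankle coordinates
    (hypothetical usage; the paper's exact angle definition may differ).
    """
    u, v = a - b, c - b
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

def make_windows(frames, win=9):
    """Slide a window of `win` consecutive frames over a (T, F) feature array.

    Returns an array of shape (T - win + 1, win, F), matching the
    nine-subsequent-frame input format described in the abstract.
    """
    T = frames.shape[0]
    return np.stack([frames[i:i + win] for i in range(T - win + 1)])

# Example: a straight leg segment gives a 90-degree angle at the middle point
hip, knee, ankle = np.array([0.0, 1.0]), np.array([0.0, 0.0]), np.array([1.0, 0.0])
angle = joint_angle(hip, knee, ankle)  # 90.0 degrees

# Example: 20 frames of 4 pose features -> 12 overlapping 9-frame windows
windows = make_windows(np.zeros((20, 4)), win=9)  # shape (12, 9, 4)
```

In the paper itself, each nine-frame window would be fed to a recurrent network (BiLSTM performed best), with mocap-derived angles as the regression target.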
DOI: http://dx.doi.org/10.1109/EMBC48229.2022.9871106