AI Article Synopsis

  • Vision-based human joint angle estimation is crucial for ongoing health monitoring and typically relies on joint locations from optical motion capture, depth cameras, or pose estimation models.
  • The study introduced a deep learning approach for estimating knee and elbow angles using only RGB video of participants performing daily activities, with nine frames of video as input.
  • The BiLSTM network achieved high accuracy in estimating the angles, with a correlation of 0.955 for the knee and 0.917 for the elbow, irrespective of camera view angle.
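The ground-truth flexion/extension angles in this study come from an optical motion capture system; such an angle is conventionally the angle at the middle joint of three markers (hip-knee-ankle for the knee, shoulder-elbow-wrist for the elbow). A minimal sketch of that computation, assuming 3D joint positions are available (the helper name and marker layout are illustrative, not from the paper):

```python
import numpy as np

def flexion_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c.

    For the knee, a/b/c would be hip/knee/ankle; for the elbow,
    shoulder/elbow/wrist. Hypothetical helper: the paper's ground
    truth is produced by an optical motion capture system.
    """
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(flexion_angle([0, 2, 0], [0, 1, 0], [0, 0, 0]))  # straight limb: 180.0
print(flexion_angle([0, 1, 0], [0, 0, 0], [1, 0, 0]))  # right-angle bend: 90.0
```

Under this convention a fully extended limb reads near 180 degrees and flexion reduces the angle.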

Article Abstract

Vision-based human joint angle estimation is essential for remote and continuous health monitoring. Most vision-based angle estimation methods use the locations of human joints extracted with optical motion capture cameras, depth cameras, or human pose estimation models. This study proposes a reliable and straightforward deep learning approach for estimating knee and elbow flexion/extension angles from RGB video. Fifteen healthy participants performed four daily activities. The experiments were conducted with four different deep learning networks; each network took nine subsequent frames as input, and the output was the knee and elbow joint angles, extracted from an optical motion capture system, for each frame. The BiLSTM-based joint angle estimator can estimate both joint angles with a correlation of 0.955 for the knee and 0.917 for the elbow, regardless of the camera view angle.
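The abstract describes a windowed setup: nine subsequent frames go in, and a knee/elbow angle pair comes out for each of those frames. The windowing step can be sketched as below, assuming per-frame features of some dimension D have already been extracted from the RGB frames (the shapes and function name are assumptions; the feature extractor and the BiLSTM itself are not shown):

```python
import numpy as np

WINDOW = 9  # nine subsequent frames per sample, as in the abstract

def make_windows(features, angles):
    """Slice per-frame sequences into 9-frame training samples.

    features: (T, D) per-frame features; angles: (T, 2) knee/elbow targets.
    Returns (N, 9, D) inputs and (N, 9, 2) per-frame targets, N = T - 8.
    A sketch of the windowing described in the abstract only.
    """
    T = len(features)
    # Integer-array indexing: row i selects frames i .. i+8
    idx = np.arange(WINDOW)[None, :] + np.arange(T - WINDOW + 1)[:, None]
    return features[idx], angles[idx]

# 100 frames with hypothetical 34-dim features and 2 target angles each
X, y = make_windows(np.zeros((100, 34)), np.zeros((100, 2)))
print(X.shape, y.shape)  # (92, 9, 34) (92, 9, 2)
```

A sequence model such as a BiLSTM can then map each (9, D) window to nine angle pairs, since it emits one output per time step in both directions.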

Source
http://dx.doi.org/10.1109/EMBC48229.2022.9871106

Publication Analysis

Top Keywords

angle estimation (16)
deep learning (12)
learning networks (12)
knee elbow (12)
joint angles (12)
elbow joint (8)
joint angle (8)
extracted optical (8)
optical motion (8)
joint (5)
