Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., intelligent high-level controllers), we designed an environment recognition system using computer vision and deep learning. Here we first reviewed the development of the "ExoNet" database, the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labelling architecture. We then trained and tested the EfficientNetB0 convolutional neural network, which was optimized for efficiency using neural architecture search, to forward-predict the walking environments. Our environment recognition system achieved ~73% image classification accuracy. These results provide the inaugural benchmark performance on the ExoNet database. Future research should evaluate and compare different convolutional neural networks to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
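To make the training setup concrete, the sketch below shows a minimal transfer-learning pipeline in tf.keras: an ImageNet-pretrained EfficientNetB0 backbone with a new softmax head for the walking-environment classes. The class count (assumed 12, matching ExoNet's hierarchical labels), the directory layout (exonet/train/<class>/), and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

import tensorflow as tf

NUM_CLASSES = 12        # assumption: number of flattened ExoNet environment classes
IMG_SIZE = (224, 224)   # EfficientNetB0's default input resolution
BATCH_SIZE = 32

# Load EfficientNetB0 pretrained on ImageNet, without its classification head.
# Keras EfficientNet models normalize internally, so raw [0, 255] images are fine.
base = tf.keras.applications.EfficientNetB0(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the backbone for the initial transfer-learning pass

# Attach a new classification head for the walking-environment classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumption: images are arranged one folder per class, e.g. exonet/train/<class>/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "exonet/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "exonet/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE)

model.fit(train_ds, validation_data=val_ds, epochs=10)

Freezing the backbone first and training only the head is a common choice when the target dataset differs from ImageNet; unfreezing the top backbone layers afterwards with a lower learning rate is a typical refinement.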

Source
http://dx.doi.org/10.1109/EMBC46164.2021.9630064

