Hand-crafted visual features designed to enhance perception with prosthetic vision devices can miss aspects of a scene that are important for a given task. Retinal implants are constrained to low-dimensional representations, even for elaborate tasks such as navigating complex environments. With Deep Reinforcement Learning (DRL), visual features can instead be learnt through task-based simulation, removing the ambiguity of having to infer which visual information is most crucial to a specific activity. Learning task-based features ensures that the conveyed visual information is salient to the tasks an implant recipient may be undertaking and eliminates potentially redundant features. In this paper, we focus specifically on basic orientation and mobility, and on methods for feature learning and visualisation in structured 3D environments. We propose a new model for learning visual features through task-based simulations and show that learnt features can be transferred directly to real RGB-D images. We demonstrate that this approach to feature learning scales in simulation, opening the possibility of simulating more complex tasks in the future.
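The sketch below is not the authors' implementation; it is a minimal illustration, under assumed layer sizes and an assumed DQN-style head, of the kind of pipeline the abstract describes: a convolutional encoder compresses an RGB-D observation into a low-dimensional feature vector, and a policy head maps those features to discrete navigation actions. In the paper such features would be trained end-to-end with a DRL objective inside a 3D simulator; here only the forward pass is shown.

```python
# Illustrative sketch only: architecture, dimensions, and action set are assumptions,
# not the model proposed in the paper.
import torch
import torch.nn as nn

N_ACTIONS = 4        # e.g. forward, turn-left, turn-right, stop (assumed)
FEATURE_DIM = 32     # low-dimensional feature bottleneck (assumed)

class TaskFeatureEncoder(nn.Module):
    """Compress a 4-channel RGB-D frame into a small task-salient feature vector."""
    def __init__(self, feature_dim: int = FEATURE_DIM):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size from a dummy 84x84 input
            n_flat = self.conv(torch.zeros(1, 4, 84, 84)).shape[1]
        self.fc = nn.Linear(n_flat, feature_dim)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(self.conv(rgbd)))  # bounded low-dimensional features

class NavigationPolicy(nn.Module):
    """Map low-dimensional features to action values (DQN-style head, assumed)."""
    def __init__(self, feature_dim: int = FEATURE_DIM, n_actions: int = N_ACTIONS):
        super().__init__()
        self.encoder = TaskFeatureEncoder(feature_dim)
        self.q_head = nn.Linear(feature_dim, n_actions)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        return self.q_head(self.encoder(rgbd))

if __name__ == "__main__":
    policy = NavigationPolicy()
    dummy_rgbd = torch.rand(1, 4, 84, 84)    # stand-in for a simulated RGB-D observation
    q_values = policy(dummy_rgbd)            # one value per navigation action
    features = policy.encoder(dummy_rgbd)    # the low-dimensional representation itself
    print(q_values.shape, features.shape)    # torch.Size([1, 4]) torch.Size([1, 32])
```

Because the encoder input is a generic RGB-D tensor, the same forward pass applies whether the frame comes from a simulator or a real depth camera, which is the transfer property the abstract highlights.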
DOI: http://dx.doi.org/10.1109/EMBC.2019.8856541