Eating and drinking are an essential part of everyday life, and yet many people in the world today rely on others to feed them. In this work, we present a prototype robot-assisted self-feeding system for individuals with movement disorders. The system is capable of perceiving, localizing, grasping, and delivering non-compliant food items to an individual. We trained an object recognition network to detect specific food items and compute a grasp pose for each detected item. Human input is obtained through an interface consisting of an eye-tracker and a display screen. The user selects options on the screen with eye and head movements and triggers responses with mouth movements. We performed a pilot study with four able-bodied participants and one participant with a spinal cord injury (SCI) to evaluate the performance of our prototype system. Participants selected food items with their eyes, and the robot then delivered them. We observed an average overall feeding success rate of 89.1% and an average overall task time of $31.4 \pm 2.4$ seconds per food item. The SCI participant gave scores of 90.0 and 8.3 on the System Usability Scale and NASA Task Load Index, respectively. We also conducted a custom post-study interview to gather participant feedback to drive future design decisions. The quantitative results and qualitative user feedback demonstrate the feasibility of robot-assisted self-feeding and justify continued research into mealtime-related assistive devices.
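To make the described pipeline concrete, the sketch below outlines one feeding cycle (perceive, select, grasp, deliver) in Python. It is an illustrative approximation only: the interfaces (camera, detector, gaze_ui, robot) and helpers such as compute_grasp_pose are hypothetical placeholders and are not the authors' implementation or API.

```python
# Minimal sketch of one feeding cycle. All names here are hypothetical
# placeholders, not the authors' actual system.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Detection:
    label: str
    bbox: Tuple[int, int, int, int]   # (x, y, w, h) in image coordinates


@dataclass
class FoodItem:
    label: str
    grasp_pose: Tuple[float, float, float]  # (x, y, z) in the robot base frame


def compute_grasp_pose(det: Detection, depth_image) -> Tuple[float, float, float]:
    """Toy grasp-pose heuristic: grasp above the bounding-box center.

    A real system would deproject the pixel through the camera intrinsics
    and choose an approach orientation; this stub only illustrates the step.
    """
    x, y, w, h = det.bbox
    cx, cy = x + w // 2, y + h // 2
    z = float(depth_image[cy][cx])
    return (float(cx), float(cy), z)


def feed_one_item(camera, detector, gaze_ui, robot) -> bool:
    """One cycle: detect food items, let the user pick one, deliver it."""
    image, depth = camera.capture()

    # 1. Perception: detect food items and compute a grasp pose for each.
    items = [FoodItem(d.label, compute_grasp_pose(d, depth))
             for d in detector.detect(image)]
    if not items:
        return False

    # 2. Selection: show options on screen; the user chooses with eye/head
    #    movements and confirms with a mouth movement.
    chosen: Optional[FoodItem] = gaze_ui.select(items)
    if chosen is None:
        return False

    # 3. Manipulation: grasp the chosen item and deliver it to the mouth pose.
    robot.grasp_at(chosen.grasp_pose)
    robot.move_to_delivery_pose()
    gaze_ui.wait_for_mouth_trigger()   # user signals readiness for the bite
    robot.release()
    return True
```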
DOI: http://dx.doi.org/10.1109/ICORR55369.2022.9896535