Objective: Hand function is central to interactions with our environment. Developing a comprehensive model of hand grasps in naturalistic environments is crucial across various disciplines, including robotics, ergonomics, and rehabilitation. Creating such a taxonomy poses challenges due to the significant variation in grasping strategies that individuals may employ. For instance, individuals with impaired hands, such as those with spinal cord injuries (SCI), may develop unique grasps not used by unimpaired individuals. These grasping techniques may differ from person to person, influenced by variable sensorimotor impairment, creating a need for personalized methods of analysis.
Method: This study aimed to automatically identify the dominant distinct hand grasps for each individual, without reliance on a priori taxonomies, by applying semantic clustering to egocentric video. Egocentric video recordings collected in the homes of 19 individuals with cervical SCI were used to cluster grasping actions with semantic significance. A deep learning model integrating posture and appearance data was employed to create a personalized hand taxonomy.
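As a rough illustration of the clustering stage, the sketch below groups per-grasp feature vectors that fuse posture and appearance embeddings without any predefined taxonomy. The feature dimensions, the choice of k-means, and the cluster count are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of per-participant grasp clustering, assuming
# precomputed features. Dimensions (42-D posture, 128-D appearance),
# k=5, and the use of k-means are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_grasps = 200                                 # detected grasp instances for one participant
posture = rng.normal(size=(n_grasps, 42))      # stand-in for hand-keypoint features
appearance = rng.normal(size=(n_grasps, 128))  # stand-in for CNN appearance embeddings

# Fuse the two modalities and normalize so neither dominates.
features = StandardScaler().fit_transform(np.hstack([posture, appearance]))

# Cluster without an a priori taxonomy; the largest clusters are
# read off as the participant's dominant distinct grasps.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
print(np.bincount(kmeans.labels_))             # grasp instances per discovered cluster
```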
Results: Quantitative analysis revealed a cluster purity of 67.6% ± 24.2% with 18.0% ± 21.8% redundancy. Qualitative assessment showed meaningful clusters in the video content.
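For reference, cluster purity is commonly computed as the fraction of samples that carry the majority ground-truth label of their assigned cluster; redundancy can then be read off as clusters whose majority label repeats one already claimed by another cluster. The helper below implements this standard purity definition (the paper's exact protocol may differ).

```python
import numpy as np

def cluster_purity(cluster_ids, true_labels):
    """Purity: fraction of samples matching the majority ground-truth
    grasp label of their cluster (common definition; the paper's exact
    metric may differ)."""
    cluster_ids = np.asarray(cluster_ids)
    true_labels = np.asarray(true_labels)
    total_majority = 0
    for c in np.unique(cluster_ids):
        members = true_labels[cluster_ids == c]
        _, counts = np.unique(members, return_counts=True)
        total_majority += counts.max()  # size of the majority label in this cluster
    return total_majority / len(true_labels)

# Toy example: 3 clusters over 10 grasp instances
clusters = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
labels = ["power", "power", "pinch", "pinch", "pinch", "pinch",
          "lateral", "power", "power", "power"]
print(cluster_purity(clusters, labels))  # (2 + 3 + 3) / 10 = 0.8
```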
Discussion: This methodology provides a flexible and effective strategy to analyze hand function in the wild, with applications in clinical assessment and in-depth characterization of human-environment interactions in a variety of contexts.
DOI: http://dx.doi.org/10.1109/JBHI.2024.3495699
IEEE J Biomed Health Inform, November 2024
Data Brief
December 2024
Faculty of Computing, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru, Malaysia.
The dataset presents raw data from egocentric (first-person view) and exocentric (third-person view) perspectives, comprising 47,166 frame images. Egocentric and exocentric frames were extracted from iPhone videos recorded simultaneously. The egocentric view captures close-up hand gestures and the attentiveness of the iPhone wearer, while the exocentric view captures the hand gestures of all participants from a top-down perspective.
Sensors (Basel)
October 2024
Department of Informatics, Indiana University Bloomington, Bloomington, IN 47408, USA.
Rock climbing has grown from a niche sport into a mainstream free-time activity and an Olympic sport. Moreover, climbing can be studied as an example of a high-stakes perception-action task. However, understanding what constitutes an expert climber is neither simple nor straightforward.
We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset.