Cervical spinal cord injury (cSCI) causes paralysis of the upper and lower limbs and trunk, significantly reducing the quality of life and community participation of affected individuals. Functional use of the upper limbs is the top recovery priority for people with cSCI, and wearable vision-based systems have recently been proposed to extract objective outcome measures that reflect hand function in a natural context. However, previous studies were conducted in controlled environments and may not be indicative of the actual hand use of people with cSCI living in the community. We therefore propose a deep learning algorithm for automatically detecting hand-object interactions in egocentric videos recorded by participants with cSCI during their daily activities at home. The proposed approach detects hand-object interactions with good accuracy (F1-score up to 0.82), demonstrating the feasibility of the system in uncontrolled situations (e.g., unscripted activities and variable illumination). This result paves the way for an automated tool for measuring hand function in people with cSCI living in the community.
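For readers unfamiliar with the reported metric: the F1-score is the harmonic mean of precision and recall over the binary frame-level decision (interaction vs. no interaction). A minimal sketch of the computation is below; the label and prediction arrays are illustrative only, not data from the study.

```python
# Illustrative F1-score computation for binary hand-object interaction
# detection (1 = interaction frame, 0 = no interaction). The arrays
# below are made-up examples, not data from the cited paper.

def f1_score(y_true, y_pred):
    """F1 = 2 * P * R / (P + R) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

labels      = [1, 1, 0, 0, 1, 0, 1, 1]
predictions = [1, 0, 0, 1, 1, 0, 1, 1]
print(round(f1_score(labels, predictions), 2))  # prints 0.8
```

Because F1 ignores true negatives, it is a common choice when the two classes (interaction vs. idle frames) are imbalanced, as is typical in day-long egocentric recordings.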
DOI: http://dx.doi.org/10.1109/EMBC44109.2020.9176274
Nat Commun
November 2024
National Key Laboratory of Advanced Micro and Nano Manufacture Technology, Shanghai Jiao Tong University, Shanghai, China.
Capturing forceful interaction with deformable objects during manipulation benefits applications such as virtual reality, telemedicine, and robotics. Replicating full hand-object states with complete geometry is challenging because object deformations are often occluded. Here, we report a visual-tactile recording and tracking system for manipulation, featuring a stretchable tactile glove with 1152 force-sensing channels and a visual-tactile joint learning framework that estimates dynamic hand-object states during manipulation.
Data Brief
December 2024
Faculty of Computing, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru, Malaysia.
The dataset presents raw data from egocentric (first-person view) and exocentric (third-person view) perspectives, comprising 47,166 frame images. The egocentric and exocentric frames were extracted simultaneously from the original iPhone videos. The egocentric view captures the details of nearby hand gestures and the attentiveness of the iPhone wearer, while the exocentric view captures the hand gestures of all participants from a top-down perspective.
Anat Sci Educ
January 2025
The Corps for Research of Instructional and Perceptual Technologies (CRIPT) Laboratory, Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada.
The Cognitive Theory of Multimedia Learning (CTML) suggests humans learn through visual and auditory sensory channels. Haptics represent a third channel within CTML and a missing component for experiential learning. The objective was to measure visual and haptic behaviors during spatial tasks.
J Neurophysiol
December 2024
Institute of Cognitive Neuroscience, University College London, London, United Kingdom.
When we run our hand across a surface, each finger typically repeats the sensory stimulation that the leading finger has already experienced. Because of this redundancy, the leading finger may attract more attention and contribute more strongly when tactile signals are integrated across fingers to form an overall percept. To test this hypothesis, we re-analyzed data collected in a previous study (Arslanova I, Takamuku S, Gomi H, Haggard P, 128: 418-433, 2022), where two probes were moved in different directions on two different fingerpads and participants reported the probes' average direction.
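Averaging motion directions, as the participants in the cited study were asked to do, is naturally modeled as a circular (vector) mean rather than an arithmetic mean of angles. The sketch below is an illustration of that idea, not the study's analysis code; the weighting parameter modeling a stronger contribution from the leading finger is an assumption for demonstration.

```python
import math

# Illustrative circular mean of two motion directions (in degrees),
# computed by summing unit vectors. The weight w on the first
# direction is a hypothetical stand-in for the leading finger's
# stronger contribution; it is not a value from the cited study.

def weighted_average_direction(theta1_deg, theta2_deg, w=0.5):
    """Weighted circular mean of two directions, result in [0, 360)."""
    x = w * math.cos(math.radians(theta1_deg)) + (1 - w) * math.cos(math.radians(theta2_deg))
    y = w * math.sin(math.radians(theta1_deg)) + (1 - w) * math.sin(math.radians(theta2_deg))
    return math.degrees(math.atan2(y, x)) % 360

# Equal weighting: directions 30 deg and 90 deg average to 60 deg.
print(round(weighted_average_direction(30, 90), 1))  # prints 60.0
# Overweighting the leading finger pulls the percept toward its direction.
print(weighted_average_direction(30, 90, w=0.7) < 60)  # prints True
```

Under this model, an attentional bias toward the leading finger would show up as the reported average being pulled toward that finger's stimulus direction, which is the kind of asymmetry the re-analysis looks for.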
IEEE Trans Vis Comput Graph
September 2024
Grasp generation holds significant importance in both robotics and AI-generated content. While pure network paradigms based on VAEs or GANs ensure diversity in outcomes, they often fall short of plausibility. Conversely, two-step paradigms that first predict contact and then optimize distance yield plausible results but are known to be time-consuming.