Robotic rehabilitative systems have been an active area of research for all movements, including Sit-to-Stand (STS). STS is an important movement for performing many activities of daily living, and rehabilitating it is one of the most challenging tasks for patients and physiotherapists alike. Existing rehabilitative systems constrain the patient to move with the system, making it difficult for the patient to eventually perform the movement independently, without resistance from the system. This paper proposes the design of an STS rehabilitation system that assists subjects only in the parts of the motion that they fail to perform independently. Assistance is provided in a two-phase process that lets subjects attempt different difficulty levels dynamically, without having to select a target difficulty level before the start of the therapy session. The subject also receives real-time feedback on the movement from a multi-sensory feedback system. After the movement, the system generates a score that allows both the subject and the physiotherapist to track long-term progress.
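The assist-only-when-failing idea described in the abstract can be sketched as a controller that stays transparent while the subject tracks the STS trajectory and adds support only when tracking error grows. This is a minimal illustrative sketch; the function name, deadband, and gain are our assumptions, not the authors' actual controller.

```python
def assist_torque(target_angle, measured_angle, deadband=0.05, gain=2.0):
    """Return assistive torque only when tracking error exceeds a deadband.

    target_angle / measured_angle: joint angle in radians along the
        reference STS trajectory.
    deadband: error tolerance within which no assistance is applied,
        so the subject moves resistance-free.
    gain: proportional assistance gain (N*m per radian of residual error).
    """
    error = target_angle - measured_angle
    if abs(error) <= deadband:
        return 0.0  # subject is on track: system stays transparent
    # assist only the residual error beyond the deadband
    residual = error - deadband if error > 0 else error + deadband
    return gain * residual

# Subject lags 0.15 rad behind the reference: assist with ~0.2 N*m.
print(assist_torque(0.60, 0.45))
# Subject within the deadband: zero assistance (resistance-free motion).
print(assist_torque(0.60, 0.58))
```

Keeping the torque at zero inside the deadband is what distinguishes this scheme from conventional systems that constrain the patient to the reference motion throughout.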


Source: http://dx.doi.org/10.1109/EMBC53108.2024.10782303


Similar Publications


During haptic rendering, a visual display and a haptic interface are commonly utilized together to elicit multi-sensory perception of a virtual object, through a combination and integration of force-related and movement-related cues. In this study, we explore visual-haptic cue integration during multi-modal haptic rendering under conflicting cues and propose a systematic means to determine the optimal visual scaling for haptic manipulation that maximizes the perceived realism of spring rendering for a given haptic interface. We show that the parameters affecting visual-haptic congruency can be effectively optimized through a qualitative feedback-based human-in-the-loop (HiL) optimization to ensure a consistently high rating of perceived realism.
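The human-in-the-loop (HiL) optimization described above can be sketched as a search over candidate visual scaling factors, scored by a subject's realism rating. Everything here is a stand-in assumption: the abstract does not specify the optimizer, and HiL studies often use Bayesian optimization rather than the simple grid search shown.

```python
def optimize_visual_scaling(rate, candidates):
    """Return the candidate visual scaling with the highest rated realism.

    rate: callable mapping a scaling factor to a realism rating; in a real
        HiL study this would query a human subject, not a model.
    candidates: iterable of scaling factors to evaluate.
    """
    return max(candidates, key=rate)

def simulated_rating(scaling, ideal=0.8):
    """Stand-in rating model: realism peaks when visual and haptic cues agree."""
    return -abs(scaling - ideal)

# Coarse grid of candidate scalings, 0.1 .. 1.5.
grid = [round(0.1 * k, 1) for k in range(1, 16)]
print(optimize_visual_scaling(simulated_rating, grid))  # -> 0.8
```

In an actual experiment each call to `rate` is a rendered trial followed by a qualitative rating, so sample-efficient optimizers are preferred over exhaustive grids.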


Musician's dystonia: a perspective on the strongest evidence towards new prevention and mitigation treatments.

Front Netw Physiol

January 2025

Laboratory of Electrophysiology for Translational neuroScience LET'S, Institute of Cognitive Sciences and Technologies ISTC, Consiglio Nazionale delle Ricerche CNR, Roma, Italy.

This perspective article addresses the critical and up-to-date problem of task-specific musician's dystonia (MD) from both theoretical and practical perspectives. Theoretically, MD is explored as a result of impaired sensorimotor interplay across different brain circuits, supported by the most frequently cited scientific evidence, each piece referenced dozens of times in Scopus. Practically, MD is a significant issue, as it occurs over 60 times more frequently in musicians than in other professions, underscoring the influence of individual training as well as environmental, social, and emotional factors.


In Virtual Reality (VR), a higher level of presence positively influences the experience and engagement of a user. Several parameters are responsible for generating different levels of presence in VR, including but not limited to graphical fidelity, multi-sensory stimuli, and embodiment. However, standard methods of measuring presence, including self-reported questionnaires, are biased.

Article Synopsis
  • Affective data forms the foundation for emotion recognition, which is studied through eliciting emotions using various sensory stimuli, including visual, auditory, and olfactory inputs.
  • An experimental dataset (OVPD-II) was created, showing that a combination of video and odor significantly enhanced emotional responses compared to video alone, leading to higher accuracy in emotion recognition tasks.
  • The use of a transformer model alongside a hybrid fusion approach improved emotion classification accuracy to 89.50% for the video-odor pattern and 88.47% for the video-only pattern.
