This paper addresses the question of how large the temporal delay between a visual and a haptic stimulus may be before the stimuli are no longer perceived as synchronous. Participants had to judge whether the moment at which a graphical object collided with a virtual wall occurred simultaneously with the moment at which a force was felt through a force feedback joystick. Participants either moved the joystick to drive the object (active touch) or held the joystick in a steady position while the object moved by itself (passive touch). Participants were found to be very sensitive to visual-haptic time delays. Sensitivity was higher for passive touch than for active touch. The minimum delay at which participants judged the stimuli as asynchronous was on average 45 ms. The delay at which the proportion of synchronous judgments reached a maximum was on average close to zero. The results indicate that the temporal accuracy of visual-haptic interfaces has to meet stringent requirements in order to optimize the overall realism that users experience. Actual or potential applications of this research include teleoperation, medical training, computer-aided design, and scientific visualization.
DOI: http://dx.doi.org/10.1518/hfes.46.1.118.30394
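The abstract reports a delay of roughly 45 ms at which stimuli were first judged asynchronous, with the proportion of "synchronous" judgments peaking near zero delay. As a rough illustration of how such summary values can be read off synchrony-judgment data, the sketch below fits a Gaussian-shaped psychometric curve to made-up responses; the data, the Gaussian form, and the 50%-of-peak criterion are all assumptions for illustration, not the paper's analysis.

```python
# Minimal sketch (not the authors' analysis): fit a Gaussian-shaped psychometric
# curve to hypothetical synchrony-judgment data, then read off the delay of peak
# "synchronous" responses and a rough width of the synchrony window.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: visual-haptic delays in ms (negative = haptic first)
# and the proportion of "synchronous" judgments at each delay.
delays = np.array([-150, -100, -50, -25, 0, 25, 50, 100, 150], dtype=float)
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.97, 0.93, 0.75, 0.30, 0.08])

def gaussian(d, amp, mu, sigma):
    """Proportion of 'synchronous' responses as a function of delay d (ms)."""
    return amp * np.exp(-0.5 * ((d - mu) / sigma) ** 2)

params, _ = curve_fit(gaussian, delays, p_sync, p0=[1.0, 0.0, 60.0])
amp, mu, sigma = params

# Delay at which "synchronous" judgments peak (point of subjective simultaneity).
print(f"peak of synchrony curve: {mu:.1f} ms")

# Half-width at which responses fall to 50% of the peak, one crude way to
# summarise the window of perceived synchrony.
half_width = sigma * np.sqrt(2.0 * np.log(2.0))
print(f"50%-of-peak half-width: {half_width:.1f} ms")
```

With these illustrative numbers the fitted peak lands near 0 ms, matching the qualitative pattern described in the abstract; the actual thresholds in the study were estimated from participants' judgments, not from a curve like this one.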
Behav Brain Res
March 2025
Faculty of Kinesiology & Physical Education, University of Toronto, Toronto, ON, Canada.
Compared to physical unmediated reality (UR), mixed reality technologies such as Virtual (VR) and Augmented (AR) Reality entail perturbations across multiple sensory modalities (visual, haptic, etc.) that could alter how actors move within the different environments. Because of this mediated nature, goal-directed movements in VR and AR may rely on planning and control processes that differ from those used in UR, resulting in less efficient motor control.
Clin Neurophysiol
November 2024
Department of Clinical Neuroscience, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK; Department of Paediatric Neurosciences, Evelina London Children's Hospital, London, UK.
Objective: Therapeutic interventions for children and young people with dystonia and dystonic/dyskinetic cerebral palsy are limited. EEG-based neurofeedback is emerging as a neurorehabilitation tool. This scoping review maps research investigating EEG-based sensorimotor neurofeedback in adults and children with neurological motor impairments, including augmentative strategies.
ISA Trans
July 2024
State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou, 310027, China; Key Laboratory of Intelligent Robot for Operation and Maintenance of Zhejiang Province, Hangzhou, 311121, China; Ocean College, Zhejiang University, Zhoushan, 316021, China.
Teleoperation under human guidance has become an effective solution for extending humans' reach into various environments. However, teleoperation systems still face the challenge of insufficient visual and haptic feedback from remote environments, which results in inadequate guidance for the operator. In this paper, a visual/haptic integrated perception and reconstruction system (VHI-PRS) is developed to provide the operator with 3D visual information and effective haptic guidance.
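The snippet above does not describe how VHI-PRS actually renders its haptic guidance. As a generic, hypothetical illustration of one common approach in teleoperation, the sketch below computes a clamped spring-damper "virtual fixture" force that nudges the operator's input toward a reference point; the function name, gains, and units are assumptions for illustration, not taken from the paper.

```python
# Generic virtual-fixture sketch (not the VHI-PRS implementation): a spring-damper
# force that pulls the operator's hand toward a reference point, clamped so the
# guidance stays gentle rather than overriding the operator.
import numpy as np

def guidance_force(hand_pos, hand_vel, target_pos,
                   stiffness=200.0, damping=5.0, max_force=8.0):
    """Return a clamped guidance force (N) toward target_pos.

    hand_pos, hand_vel, target_pos: 3-element arrays in metres / m/s.
    stiffness, damping, max_force: illustrative values, not from the paper.
    """
    force = (stiffness * (np.asarray(target_pos) - np.asarray(hand_pos))
             - damping * np.asarray(hand_vel))
    norm = np.linalg.norm(force)
    if norm > max_force:  # clamp the magnitude so guidance remains assistive
        force *= max_force / norm
    return force

# Example: hand slightly short of the reconstructed target and drifting away;
# the returned force points toward the target and opposes the drift.
print(guidance_force([0.00, 0.0, 0.0], [-0.05, 0.0, 0.0], [0.02, 0.0, 0.0]))
```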
Haptic interfaces and virtual reality (VR) technology have been increasingly introduced in rehabilitation, facilitating the provision of varied feedback and task conditions. However, the correspondence between these feedback/task conditions and movement strategy during reaching tasks remains an open question. To investigate movement strategy, we assessed velocity parameters and the peak latency of electromyography.
Sci Rep
March 2024
Biomedical Science and Biomedical Engineering, School of Biological Sciences, University of Reading, Whiteknights, Reading, RG6 6AY, UK.
Accomplishing motor function requires multimodal information, such as visual and haptic feedback, which induces a sense of ownership (SoO) over one's own body part. In this study, we developed a visual-haptic human-machine interface that combines three types of feedback (visual, haptic, and kinesthetic) in the context of passive hand-grasping motion, aiming to generate SoO over a virtual hand. We tested two conditions: in both, the three sets of feedback were synchronous; in the first condition they were in phase, and in the second they were in antiphase.