Low-cost virtual-reality (VR) head-mounted displays (HMDs) that integrate smartphones have brought immersive VR to the masses, but these systems are often limited by their poor interactivity. In this paper, we present GestOnHMD, a gesture-based interaction technique and gesture-classification pipeline that leverages the stereo microphones of a commodity smartphone to detect tapping and scratching gestures on the front, left, and right surfaces of a mobile VR headset. Taking Google Cardboard as our target headset, we first conducted a gesture-elicitation study that generated 150 user-defined gestures, 50 per surface. We then selected 15, 9, and 9 gestures for the front, left, and right surfaces, respectively, based on user preference and signal detectability. We constructed a dataset of acoustic signals from 18 users performing these on-surface gestures and trained a deep-learning pipeline for gesture detection and recognition. Lastly, using a real-time demonstration of GestOnHMD, we conducted a series of online participatory-design sessions to collect a set of user-defined gesture-referent mappings for applications that could potentially benefit from GestOnHMD.
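The abstract does not spell out the recognition pipeline, so the following is a rough, non-authoritative sketch of the general approach; the log-mel spectrogram features, the network shape, and the class count are all assumptions, not the paper's actual design.

```python
# Minimal sketch of an acoustic on-surface gesture classifier in the
# spirit of GestOnHMD. Feature choice (log-mel spectrograms) and the
# CNN architecture are assumptions; the paper's pipeline may differ.
import librosa
import numpy as np
import torch
import torch.nn as nn

def stereo_to_logmel(wav_path, sr=44100, n_mels=64):
    """Load a stereo clip and stack per-channel log-mel spectrograms."""
    y, sr = librosa.load(wav_path, sr=sr, mono=False)  # assumes stereo: (2, samples)
    mels = [librosa.power_to_db(
                librosa.feature.melspectrogram(y=ch, sr=sr, n_mels=n_mels))
            for ch in y]
    return torch.from_numpy(np.stack(mels)).float()  # (2, n_mels, frames)

class GestureCNN(nn.Module):
    """Small CNN over 2-channel (left/right microphone) spectrograms."""
    def __init__(self, n_classes=15):  # e.g., the 15 front-surface gestures
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 2, n_mels, frames)
        return self.classifier(self.features(x).flatten(1))
```

Stereo input is the key ingredient suggested by the abstract: the difference between the left and right microphone channels is one plausible cue for telling the left and right headset surfaces apart.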

Source
http://dx.doi.org/10.1109/TVCG.2021.3067689

Publication Analysis

Top Keywords

gesture-based interaction (8)
gestures front (8)
front left (8)
left surfaces (8)
gestonhmd (4)
gestonhmd enabling (4)
enabling gesture-based (4)
interaction low-cost (4)
low-cost head-mounted (4)
head-mounted display (4)

Similar Publications

Musical performance relies on nonverbal cues for conveying information among musicians. Human musicians use bodily gestures to communicate their interpretation and intentions to their collaborators, from mood and expression to anticipatory cues regarding structure and tempo. Robotic musicians can use their physical bodies in a similar way when interacting with fellow musicians.

Teleoperation system for multiple robots with intuitive hand recognition interface.

Sci Rep

December 2024

Department of Information Systems, Universidade do Estado de Santa Catarina (UDESC), São Bento do Sul, 89283-081, Brazil.

Robotic teleoperation is essential in hazardous environments where human safety is at risk. However, efficient and intuitive human-machine interaction for multi-robot systems remains challenging. This article demonstrates a robotic teleoperation system, named AutoNav, centered on autonomous navigation and gesture commands interpreted through computer vision.
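The snippet above does not describe AutoNav's implementation; purely to illustrate gesture commands interpreted through computer vision, here is a minimal sketch using MediaPipe Hands, in which the finger-count vocabulary, the command names, and the print stand-in for the robot interface are all hypothetical.

```python
# Illustrative sketch only: maps a simple finger count to teleoperation
# commands. AutoNav's actual gesture vocabulary and control stack are
# not described in the source abstract.
import cv2
import mediapipe as mp

TIPS, PIPS = (8, 12, 16, 20), (6, 10, 14, 18)  # index..pinky landmark ids
COMMANDS = {0: "stop", 2: "turn", 4: "forward"}  # hypothetical mapping

def extended_fingers(hand):
    """Count non-thumb fingers whose tip lies above its PIP joint."""
    lm = hand.landmark  # normalized coords; y grows downward in the image
    return sum(lm[t].y < lm[p].y for t, p in zip(TIPS, PIPS))

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            cmd = COMMANDS.get(extended_fingers(result.multi_hand_landmarks[0]))
            if cmd:
                print(f"send to robot: {cmd}")  # stand-in for the robot API
cap.release()
```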

Mastering Gesture-Based Screen Readers on Mobile Devices - Exploring Teaching and Practice Strategies.

Stud Health Technol Inform

November 2024

Department of Applied Research in Technology (DART), Norsk Regnesentral (NR), Norway.

Gesture-based screen readers like VoiceOver or TalkBack provide visually impaired users with a means to interact with digital content. However, there is a significant lack of both strategies and resources for teaching the use of these screen readers, and standardized teaching guidelines are notably absent. Furthermore, there is no free, universally designed, and accessible app for practicing gestures in mobile screen readers.

Development of Dual-Arm Human Companion Robots That Can Dance.

Sensors (Basel)

October 2024

Electrical Engineering Department, Pusan National University, Busan 43241, Republic of Korea.

As gestures play an important role in human communication, there have been a number of service robots equipped with a pair of human-like arms for gesture-based human-robot interactions. However, the arms of most human companion robots are limited to slow and simple gestures due to the low maximum velocity of the arm actuators. In this work, we present the JF-2 robot, a mobile home service robot equipped with a pair of torque-controlled anthropomorphic arms.

Touchless interfaces have gained considerable importance in the modern era, particularly due to their user-friendly and hygienic nature of interaction. This article presents the design of two touchless cursor control systems based on hand gestures and head movements, using the MediaPipe framework to extract key landmarks of the hand and face from a laptop camera feed. The index finger's landmark points are tracked and converted to corresponding screen coordinates for cursor control.
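As a minimal sketch of the hand-gesture variant described above: the use of pyautogui for moving the cursor is an assumption, and any smoothing or click handling from the article is omitted.

```python
# Sketch of index-fingertip cursor control; clicking, smoothing, and the
# head-movement variant from the article are left out.
import cv2
import mediapipe as mp
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()
INDEX_TIP = 8  # MediaPipe hand landmark id for the index fingertip

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)  # mirror so motion feels natural
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            tip = result.multi_hand_landmarks[0].landmark[INDEX_TIP]
            # Landmarks are normalized to [0, 1]; scale to screen pixels.
            pyautogui.moveTo(tip.x * SCREEN_W, tip.y * SCREEN_H)
cap.release()
```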
