Text entry with word-gesture keyboards (WGKs) is emerging as a popular method and a key interaction technique for Extended Reality (XR). However, the diversity of interaction modes, keyboard sizes, and visual feedback across these environments produces divergent word-gesture trajectory patterns, which complicates decoding trajectories into text. Template-matching decoding methods, such as SHARK2 [32], are commonly used in these WGK systems because they are easy to implement and configure.
IEEE Trans Vis Comput Graph
November 2024
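To make the template-matching idea concrete, here is a minimal sketch in the spirit of SHARK2, not the published algorithm itself: each candidate word is rendered as an ideal trajectory through its letters' key centers, both trajectories are resampled to a fixed number of points, and the word with the smallest summed point-to-point distance wins. The key layout, lexicon, and all function names below are illustrative assumptions.

```python
import math

# Toy key layout; a real decoder would use the full Qwerty key-center map.
KEY_CENTERS = {
    "c": (0.0, 0.0), "a": (1.0, 0.0), "t": (2.0, 0.0), "r": (1.0, 1.0),
}

def resample(points, n=16):
    """Resample a polyline to n evenly spaced points along its arc length."""
    if len(points) == 1:
        return points * n
    dists = [0.0]  # cumulative arc length at each vertex
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1.0
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = max(k for k in range(len(dists)) if dists[k] <= target)
        if j == len(points) - 1:
            out.append(points[-1])
        else:
            seg = dists[j + 1] - dists[j] or 1.0
            t = (target - dists[j]) / seg
            (x0, y0), (x1, y1) = points[j], points[j + 1]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def template(word):
    """Ideal trajectory for a word: the polyline through its key centers."""
    return resample([KEY_CENTERS[c] for c in word])

def decode(gesture, lexicon):
    """Return the lexicon word whose template lies closest to the gesture."""
    g = resample(gesture)
    def cost(word):
        return sum(math.hypot(gx - tx, gy - ty)
                   for (gx, gy), (tx, ty) in zip(g, template(word)))
    return min(lexicon, key=cost)
```

For example, a gesture sliding straight from the "c" key through "a" to "t" matches "cat" rather than "car", whose template bends upward at "r". Production decoders add normalization for scale and location and weight the channel scores, which this sketch omits.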
Text entry is a critical capability for any modern computing experience, and lightweight augmented reality (AR) glasses are no exception. Designed for all-day wearability, lightweight AR glasses cannot accommodate the multiple cameras needed for a wide field of view in hand tracking. This constraint underscores the need for an additional input device.
Alphanumeric and special characters are essential during text entry. Text entry in virtual reality (VR) is usually performed on a virtual Qwerty keyboard to minimize the need to learn new layouts. As such, entering capitals, symbols, and numbers in VR is often a direct migration from a physical or touchscreen Qwerty keyboard: mode-switching keys are used to switch between different types of characters and symbols.
IEEE Trans Vis Comput Graph
November 2023
We present a fast mid-air gesture keyboard for head-mounted optical see-through augmented reality (OST AR) that supports users in articulating word patterns by moving their physical index finger in relation to a virtual keyboard plane, without the need to indirectly control a visual 2D cursor on the keyboard plane. To realize this, we introduce a novel decoding method that directly translates users' three-dimensional fingertip gestural trajectories into their intended text. We evaluate the efficacy of the system in three studies that investigate various design aspects, such as immediate efficacy, accelerated learning, and whether it is possible to maintain performance without providing visual feedback.
IEEE Trans Vis Comput Graph
November 2022
In this paper, we examine the task of key gesture spotting: accurate and timely online recognition of hand gestures. We specifically seek to address two key challenges faced by developers when integrating key gesture spotting functionality into their applications: i) achieving high accuracy and zero or negative activation lag with single-time activation; and ii) avoiding the requirement for deep domain expertise in machine learning.
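The single-time activation requirement above can be illustrated with a small hedged sketch. The per-frame scoring function is a stand-in (a real system would use a trained recognizer); the point is the latch with hysteresis: once the score crosses the firing threshold the spotter activates exactly once, and it re-arms only after the score falls back below a lower release threshold. All class and parameter names are illustrative assumptions, not the paper's API.

```python
class GestureSpotter:
    """Latching spotter: one activation per gesture from a stream of scores."""

    def __init__(self, fire_at=0.8, release_at=0.4):
        self.fire_at = fire_at        # score needed to trigger an activation
        self.release_at = release_at  # score must drop below this to re-arm
        self.armed = True

    def step(self, score):
        """Feed one per-frame recognizer score; return True on activation."""
        if self.armed and score >= self.fire_at:
            self.armed = False  # latch: suppress repeats within one gesture
            return True
        if not self.armed and score <= self.release_at:
            self.armed = True   # gesture ended; ready for the next one
        return False
```

For a score stream like `[0.1, 0.5, 0.9, 0.95, 0.3, 0.9]`, the spotter fires on the first frame at 0.9, stays silent at 0.95 while latched, re-arms at 0.3, and fires again on the final 0.9. Achieving zero or negative activation lag additionally requires firing on early partial evidence, which this sketch does not model.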