Immersive environments offer new possibilities for exploring three-dimensional volumetric or abstract data. However, typical mid-air interaction offers little guidance to the user in interacting with the resulting visuals. Previous work has explored the use of haptic controls to give users tangible affordances for interacting with the data, but these controls have either been limited in their range and resolution, been spatially fixed, or required users to manually align them with the data space. We explore the use of a robot arm with hand tracking to align tangible controls under the user's fingers as they reach out to interact with data affordances. We begin with a study evaluating the effectiveness of a robot-extended slider control compared to a large fixed physical slider and a purely virtual mid-air slider. We find that the robot slider achieves accuracy comparable to the physical slider and is significantly more accurate than mid-air interaction. Further, the robot slider can be arbitrarily reoriented, opening up many new possibilities for tangible haptic interaction with immersive visualisations. We demonstrate these possibilities through three use cases: selection in a time-series chart, interactive slicing of CT scans, and exploration of a scatter plot depicting time-varying socio-economic data.
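The core alignment step, placing the tangible slider handle under the tracked fingertip, can be sketched as follows. This is a hypothetical illustration only, not the authors' implementation; the function name, the 0.1 m travel length, and the assumption that fingertip and slider poses are already expressed in a common robot base frame are all assumptions.

```python
import math

def slider_target_pose(fingertip, slider_origin, slider_axis):
    """Project the tracked fingertip onto the slider's travel axis and
    return (t, handle): the scalar handle position along the track and
    the world-space point the robot should move the handle to.
    All vectors are (x, y, z) tuples in the robot's base frame."""
    # Vector from the slider's zero position to the fingertip.
    rel = tuple(f - o for f, o in zip(fingertip, slider_origin))
    # Normalise the travel axis so the dot product is a metric distance.
    norm = math.sqrt(sum(a * a for a in slider_axis))
    axis = tuple(a / norm for a in slider_axis)
    # Scalar handle position along the track, clamped to an assumed 0.1 m travel.
    t = max(0.0, min(0.1, sum(r * a for r, a in zip(rel, axis))))
    # World-space point at which the robot should present the handle.
    handle = tuple(o + t * a for o, a in zip(slider_origin, axis))
    return t, handle
```

Because the slider's origin and axis are parameters, the same projection works when the control is reoriented, which is what enables the arbitrary-orientation use cases described above.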

Source
http://dx.doi.org/10.1109/TVCG.2022.3209433


Similar Publications

Selection is a fundamental interaction element in virtual reality (VR) and 3D user interfaces (UIs). Raycasting, one of the most common object selection techniques, is known to have difficulties in selecting small or distant objects. Meanwhile, recent advancements in computer vision technology have enabled seamless vision-based hand tracking in consumer VR headsets, enhancing accessibility to freehand mid-air interaction and highlighting the need for further research in this area.
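The difficulty raycasting has with small or distant objects follows from its geometry: a fixed angular error in the ray grows into a large positional error at distance. A minimal sketch of ray-based selection over spherical proxies makes this concrete (an illustrative example, not code from the publication; object representation and naming are assumptions):

```python
import math

def raycast_select(origin, direction, objects):
    """Return the index of the nearest object hit by the ray, or None.
    Each object is a (center, radius) pair; vectors are (x, y, z) tuples."""
    norm = math.sqrt(sum(d * d for d in direction))
    d = tuple(c / norm for c in direction)
    best, best_t = None, math.inf
    for i, (center, radius) in enumerate(objects):
        oc = tuple(c - o for c, o in zip(center, origin))
        # Distance along the ray to the closest approach to the sphere centre.
        t = sum(a * b for a, b in zip(oc, d))
        if t < 0:
            continue  # object lies behind the ray origin
        closest = tuple(o + t * a for o, a in zip(origin, d))
        # Squared perpendicular distance from the centre to the ray.
        dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
        if dist2 <= radius * radius and t < best_t:
            best, best_t = i, t
    return best
```

A small angular jitter in `direction` shifts `closest` by roughly the jitter times the distance `t`, which is why distant targets with small `radius` become hard to hit.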


Mastering the correct use of laboratory equipment is a fundamental skill for undergraduate science students involved in laboratory-based training. However, hands-on laboratory time is often limited, and remote students may struggle as their absence from the physical lab limits their skill development. An air-displacement micropipette was selected for our initial investigation, as accuracy and correct technique are essential in generating reliable assay data.


Real-Time Air-Writing Recognition for Arabic Letters Using Deep Learning. Sensors (Basel), September 2024. Department of Software Engineering, College of Computing, Umm Al-Qura University, Makkah 21955, Saudi Arabia.

Learning to write the Arabic alphabet is crucial for Arab children's cognitive development, enhancing their memory and retention skills. However, the lack of Arabic language educational applications may hamper the effectiveness of their learning experience. To bridge this gap, SamAbjd was developed, an interactive web application that leverages deep learning techniques, including air-writing recognition, to teach Arabic letters.


Text entry with word-gesture keyboards (WGK) is emerging as a popular method and becoming a key interaction for Extended Reality (XR). However, the diversity of interaction modes, keyboard sizes, and visual feedback in these environments introduces divergent word-gesture trajectory data patterns, thus leading to complexity in decoding trajectories into text. Template-matching decoding methods, such as SHARK2 [32], are commonly used for these WGK systems because they are easy to implement and configure.
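The appeal of template matching for word-gesture keyboards is its simplicity: each candidate word has an ideal trajectory over the keyboard layout, and the decoder returns the word whose template lies closest to the user's trace. A minimal nearest-template sketch in the spirit of SHARK2 (not its actual channel model; it assumes traces have already been resampled to the same number of points):

```python
import math

def trajectory_distance(a, b):
    """Mean Euclidean distance between two gesture trajectories that have
    already been resampled to the same number of 2D points."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def decode_gesture(trace, templates):
    """Return the word whose ideal keyboard trajectory is closest to the
    user's trace, by nearest-template matching."""
    return min(templates, key=lambda w: trajectory_distance(trace, templates[w]))
```

The divergent trajectory patterns mentioned above (different keyboard sizes, interaction modes, visual feedback) shift and warp the trace relative to the templates, which is exactly what degrades this kind of fixed-template distance.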


We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying intricate spatial structures, often at multiple spatial or semantic scales, across application domains that require diverse visual representations for effective visualization.

