Researchers have adopted remote methods, such as online surveys and video conferencing, to overcome challenges in conducting in-person usability testing, such as participation, user representation, and safety. However, remote user evaluation on hardware testbeds is limited, especially for blind participants, as such methods restrict access to observations of user interactions. We employ smart glasses in usability testing with blind people and share our lessons from a case study conducted in blind participants' homes (N = 12), where the experimenter can access participants' activities via dual video conferencing: a third-person view via a laptop camera and a first-person view via smart glasses worn by the participant. We show that smart glasses hold potential for observing participants' interactions with smartphone testbeds remotely; on average, 58.7% of the interactions were fully captured via the first-person view, compared to 3.7% via the third-person view. However, this gain is not uniform across participants, as it is susceptible to head movements that orient the ear toward a sound source, which highlights the need for a more inclusive camera form factor. We also share lessons learned on dealing with the lack of screen reader support in smart glasses, a rapidly draining battery, and Internet connectivity in remote studies with blind participants.
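A minimal sketch of how per-participant capture rates like the averages above might be aggregated from coded interaction logs. This is a hypothetical illustration, not the authors' analysis code; the data values and names are invented.

```python
# Hypothetical illustration: for each participant, count how many coded
# interactions were fully captured in each camera view, convert to a
# percentage, then average across participants. Values are invented.

from statistics import mean

# participant -> (fully captured via first-person view,
#                 fully captured via third-person view,
#                 total coded interactions)
coded_sessions = {
    "P01": (14, 1, 22),
    "P02": (9, 0, 18),
    "P03": (17, 2, 25),
}

def capture_rate(captured: int, total: int) -> float:
    """Percentage of interactions fully captured in a given view."""
    return 100.0 * captured / total

first_person = [capture_rate(fp, n) for fp, _, n in coded_sessions.values()]
third_person = [capture_rate(tp, n) for _, tp, n in coded_sessions.values()]

print(f"first-person mean capture rate: {mean(first_person):.1f}%")
print(f"third-person mean capture rate: {mean(third_person):.1f}%")
```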
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10631802
DOI: http://dx.doi.org/10.1145/3493612.3520448
Transl Vis Sci Technol
January 2025
Amsterdam UMC, Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands.
Purpose: This study assessed objective performance, usability, and acceptance of artificial intelligence (AI) by people with vision impairment. The goal was to provide evidence-based data to improve technology selection for people with vision loss (PVL) based on their degree of vision loss and individual needs.
Methods: In a cross-sectional, counterbalanced, crossover study involving 25 PVL, we compared performance with two smart glasses (OrCam and Envision Glasses) and two AI apps (Seeing AI and Google Lookout).
Sensors (Basel)
December 2024
The State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China.
In physical spaces, pointing interactions cannot rely on cursors, rays, or virtual hands for feedback as they do in virtual environments; users must rely solely on their perception and experience to acquire targets. Research on modeling target distribution for pointing interactions in physical space remains sparse: existing work typically divides the interaction area simplistically and lacks theoretical models.
Imaging Sci Dent
December 2024
OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Leuven, Belgium.
BMC Anesthesiol
November 2024
Department of Anesthesiology, West China Hospital of Sichuan University, Chengdu, 610041, Sichuan, P. R. China.
J Med Internet Res
November 2024
College of Nursing, University of Colorado, Aurora, CO, United States.
Background: Smart glasses have emerged as a promising solution for enhancing communication and care coordination among distributed medical teams. While prior research has explored the feasibility of using smart glasses to improve prehospital communication between emergency medical service (EMS) providers and remote physicians, a research gap remains in understanding the specific requirements and needs of EMS providers for smart glass implementation.
Objective: This study aims to iteratively design and evaluate a smart glass application tailored for prehospital communication by actively involving prospective users in the system design process.