We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 10 participant pairs to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations, used them for distant interaction, and that speech interaction contributed to the awareness of the partner's actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not distance themselves to use speech commands. From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems. All supplemental materials are available at https://osf.io/8gpv2.


Source: http://dx.doi.org/10.1109/TVCG.2024.3456335

Publication Analysis

Top Keywords

speech commands: 24
speech: 9
speech interaction: 8
exploratory study: 8
input modalities: 8
commands collaboration: 8
commands: 6
collaboration: 6
talk wall: 4
wall role: 4

Similar Publications

Purpose: The present study assessed the test-retest reliability of the American Sign Language (ASL) version of the Computerized Revised Token Test (CRTT-ASL) and compared the differences and similarities between ASL and English reading by Deaf and hearing users of ASL.

Method: Creation of the CRTT-ASL involved filming, editing, and validating CRTT instructions, sentence commands, and scoring. Deaf proficient (DP), hearing nonproficient (HNP), and hearing proficient sign language users completed the CRTT-ASL and the English self-paced, word-by-word reading CRTT (CRTT-Reading-Word Fade [CRTT-R-wf]).


Objectives: Occupational hearing loss is a significant problem worldwide, even though it can be mitigated by wearing hearing protection devices (HPDs). When surveyed, workers frequently report worsened work performance while wearing HPDs as one reason they choose not to wear them. However, few studies have supplemented these subjective reports with objective measures.


Polariton lattices as binarized neuromorphic networks.

Light Sci Appl

January 2025

Spin-Optics laboratory, St. Petersburg State University, St. Petersburg, 198504, Russia.

We introduce a novel neuromorphic network architecture based on a lattice of exciton-polariton condensates, intricately interconnected and energized through nonresonant optical pumping. The network employs a binary framework, where each neuron, facilitated by the spatial coherence of pairwise coupled condensates, performs binary operations. This coherence, emerging from the ballistic propagation of polaritons, ensures efficient, network-wide communication.


Background: Cardiac catheterization in children with heart disease is associated with an increased risk of arterial ischemic stroke. We created and evaluated the diagnostic performance of a bedside screening tool administered postprocedure to identify arterial ischemic stroke.

Methods: We developed a postprocedure stroke screen comprising history of stroke, responsiveness, command following, speech, facial and limb strength symmetry, new seizure, and caregiver concern.


Speech-mediated manipulation of da Vinci surgical system for continuous surgical flow.

Biomed Eng Lett

January 2025

Department of Biomedical Engineering, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea.

Unlabelled: With the advent of robot-assisted surgery, user-friendly technologies have been applied to the da Vinci surgical system (dVSS), and their efficacy has been validated in surgical fields worldwide. However, the traditional manipulation methods require further improvement, as they cannot control an endoscope and surgical instruments simultaneously. This study proposes a speech recognition control interface (SRCI) that replaces the traditional method by controlling the endoscope via speech commands while the surgeon manipulates the surgical instruments.

