Conditionally automated driving (CAD) systems are expected to improve traffic safety. Whenever the CAD system exceeds its operational limits, system designers need to ensure a safe and sufficiently timely transition from automated to manual mode. An existing visual Human-Machine Interface (HMI) was supplemented by different auditory outputs. The present work compares the effects of different auditory outputs, in the form of (1) a generic warning tone and (2) additional semantic speech output, on driver behavior during the announcement of an upcoming take-over request (TOR). We expected the information carried by the speech output to lead to faster reactions and better subjective evaluations by drivers compared to the generic auditory output. To test this assumption, N=17 drivers completed two simulator drives, once with a generic warning tone ('Generic') and once with additional speech output ('Speech+generic'), while working on a non-driving related task (NDRT; i.e., reading a magazine). Each drive incorporated one transition from automated to manual mode when yellow secondary lanes emerged. Different reaction time measures relevant to the take-over process were assessed. Furthermore, drivers evaluated the complete HMI regarding usefulness, ease of use, and perceived visual workload immediately after experiencing the take-over, and gave comparative ratings on usability and acceptance at the end of the experiment. Results revealed that reaction times reflecting information processing time (i.e., hands on the steering wheel, termination of the NDRT) were shorter for 'Speech+generic' than for 'Generic', whereas the reaction time reflecting allocation of attention (i.e., first glance ahead) did not show this difference. Subjective ratings favored the system with additional speech output.

Source: http://dx.doi.org/10.1016/j.aap.2017.09.019

Publication Analysis

Top Keywords: speech output (16); auditory outputs (12); transition automated (8); automated manual (8); manual mode (8); generic warning (8); warning tone (8); additional speech (8); reaction time (8); output (5)

Similar Publications

Objectives: This study examined the relationships between electrophysiological measures of the electrically evoked auditory brainstem response (EABR) and speech perception measured in quiet after cochlear implantation (CI), to determine the ability of the EABR to predict postoperative CI outcomes.

Methods: Thirty-four patients with congenital prelingual hearing loss, implanted with the same manufacturer's CI, were recruited. In each participant, the EABR was evoked at apical, middle, and basal electrode locations.

Restoring Speech Using Brain-Computer Interfaces.

Annu Rev Biomed Eng

January 2025

Department of Neurological Surgery, University of California, Davis, California, USA.

People who have lost the ability to speak due to neurological injuries would greatly benefit from assistive technology that provides a fast, intuitive, and naturalistic means of communication. This need can be met with brain-computer interfaces (BCIs): medical devices that bypass injured parts of the nervous system and directly transform neural activity into outputs such as text or sound. BCIs for restoring movement and typing have progressed rapidly in recent clinical trials; speech BCIs are the next frontier.

Cross-device and test-retest reliability of speech acoustic measurements derived from consumer-grade mobile recording devices.

Behav Res Methods

December 2024

Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China.

In recent years, there has been growing interest in remote speech assessment through automated speech acoustic analysis. While the reliability of widely used features has been validated in professional recording settings, it remains unclear how the heterogeneity of consumer-grade recording devices, commonly used in nonclinical settings, impacts the reliability of these measurements. To address this issue, we systematically investigated the cross-device and test-retest reliability of classical speech acoustic measurements in a sample of healthy Chinese adults using consumer-grade equipment across three popular speech tasks: sustained phonation (SP), diadochokinesis (DDK), and picture description (PicD).

Introduction: Numerous studies have explored the linguistic and executive processes underlying verbal fluency using association designs, which provide limited evidence. To assess the validity of our model, we aimed to refine the cognitive architecture of verbal fluency using an interference design.

Methods: A total of 487 healthy participants performed letter and semantic fluency tests under single- and dual-task conditions, in the latter case while concurrently performing a secondary task that interferes with speed, semantics, phonology, or flexibility.

Objective: To improve performance of medical entity normalization across many languages, especially when fewer language resources are available compared to English.

Materials And Methods: We propose xMEN, a modular system for cross-lingual (x) medical entity normalization (MEN), accommodating both low- and high-resource scenarios. To account for the scarcity of aliases for many target languages and terminologies, we leverage multilingual aliases via cross-lingual candidate generation.
