Onboard virtual assistants with the ability to converse with users are gaining favour as a means of supporting safe and effective human-machine interaction in automated vehicles (AVs). Previous studies have highlighted the need to communicate situation information to effectively support the transfer of control and responsibility of the driving task. This study explores 'interaction types' used for this complex human-machine transaction, by analysing how situation information is conveyed and reciprocated during a transfer of control scenario.
Analysis of thirty-one hours of video data documenting 36 experienced drivers highlighted the prevalence of face-touching, with 819 contacts identified (mean frequency: 26.4 face touches/hour (FT/h); mean duration: 3.9 seconds).
Objective: We controlled participants' glance behavior while using head-down displays (HDDs) and head-up displays (HUDs) to isolate driving behavioral changes due to use of different display types across different driving environments.
Background: Recently, HUD technology has been incorporated into vehicles, allowing drivers to, in theory, gather display information without moving their eyes away from the road. Previous studies comparing the impact of HUDs with traditional displays on human performance show differences in both drivers' visual attention and driving performance.
Objective: Vehicle automation shifts the driver's role from active operator to passive observer at the potential cost of degrading their alertness. This study investigated the role of an in-vehicle voice-based assistant (VA; conversing about traffic/road environment) to counter the disengaging and fatiguing effects of automation.
Method: Twenty-four participants undertook two drives, with and without the VA, in a partially automated vehicle.
Augmented reality (AR) offers new ways to visualize information on-the-go. As noted in related work, AR graphics presented via optical see-through AR displays are particularly prone to color blending, whereby intended graphic colors may be perceptually altered by real-world backgrounds, ultimately degrading usability. This work adds to this body of knowledge by presenting a methodology for assessing AR interface color robustness, as quantitatively measured via shifts in the CIE color space, and qualitatively assessed in terms of users' perceived color name.
Focussed ultrasound can be used to create the sensation of touch in mid-air. Combined with gestures, this can provide haptic feedback to guide users, thereby overcoming the lack of agency associated with pure gestural interfaces and reducing the need for vision; it is therefore particularly well-suited to the driving domain. In a counter-balanced 2 × 2 driving simulator study, a traditional in-vehicle touchscreen was compared with a virtual mid-air gestural interface, both with and without ultrasound haptics.
Touchscreen Human-Machine Interfaces (HMIs) are a well-established and popular choice to provide the primary control interface between driver and vehicle, yet inherently demand some visual attention. Employing a secondary device with the touchscreen may reduce the demand but there is some debate about which device is most suitable, with current manufacturers favouring different solutions and applying these internationally. We present an empirical driving simulator study, conducted in the UK and China, in which 48 participants undertook typical in-vehicle tasks utilising either a touchscreen, rotary-controller, steering-wheel-controls or touchpad.
Given the proliferation of 'intelligent' and 'socially-aware' digital assistants embedded in everyday mobile technology, and the undeniable logic that utilising voice-activated controls and interfaces in cars reduces the visual and manual distraction of interacting with in-vehicle devices, it appears inevitable that next-generation vehicles will incorporate digital assistants and utilise spoken language as a method of interaction. From a design perspective, defining the language and interaction style that a digital driving assistant should adopt is contingent on the role that it plays within the social fabric and context in which it is situated. We therefore conducted a qualitative, Wizard-of-Oz study to explore how drivers might interact linguistically with a natural language digital driving assistant.
Drivers' awareness of the rearward road scene is critical when contemplating or executing lane-change manoeuvres, such as overtaking. Preliminary investigations have speculated on the use of rear-facing cameras to relay images to displays mounted inside the car to create 'digital mirrors'. These may overcome many of the limitations associated with traditional 'wing' and rear-view mirrors, yet will inevitably affect drivers' normal visual scanning behaviour, and may force them to consider the rearward road scene from an unfamiliar perspective that is incongruent with their mental model of the outside world.
Introduction: Automobiles are suffused with computers and technology designed to support drivers at all levels of the driving hierarchy. Classic secondary devices, such as in-vehicle navigation systems (IVNS), present strategic and tactical information to drivers. In order to mitigate the potential distraction and workload when interacting with these devices while driving, IVNS often employ voices to deliver navigational instructions.