When manually steering a car, the driver's visual perception of the driving scene and his or her motor actions to control the vehicle are closely linked. Since motor behaviour is no longer required in an automated vehicle, the sampling of the visual scene changes: automated driving typically results in less gaze being directed towards the road centre and a broader exploration of the driving scene than manual driving. Exploiting this link in the reverse direction, this study estimated the state of automation (manual or automated) from gaze behaviour alone. To do so, models based on partial least squares regression were computed, considering gaze behaviour in multiple ways: static indicators (percentage of time spent gazing at 13 areas of interest), dynamic indicators (transition matrices between areas), or both together. Analysis of the predictive quality of the different models showed that the best result was obtained by combining static and dynamic indicators. However, gaze dynamics played the most important role in distinguishing between manual and automated driving. This study may be relevant to the issue of driver monitoring in autonomous vehicles.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8184294
DOI: http://dx.doi.org/10.16910/jemr.12.3.10
