Gaze Zone Classification for Driving Studies Using YOLOv8 Image Classification.

Sensors (Basel)

Department of Computer Science, Open University of the Netherlands, 6419 AT Heerlen, The Netherlands.

Published: November 2024

Gaze zone detection involves estimating where drivers look in terms of broad categories (e.g., left mirror, speedometer, rear mirror). Here we focus specifically on the automatic annotation of gaze zones in the context of road safety research, where the system can be tuned to specific drivers and driving conditions so that an easy-to-use yet accurate system is obtained. Using an existing dataset of eye-region crops (nine gaze zones) and two newly collected datasets (12 and 10 gaze zones), we show that image classification with YOLOv8, which has a simple command line interface, achieves near-perfect accuracy without any pre-processing of the images, as long as the model is trained on the driver and conditions for which annotation is required (such as whether the driver wears glasses or sunglasses). We also present two apps: one to collect the training images and one to train and apply the YOLOv8 models. Future research will need to explore how well the method extends to real driving conditions, which may be more variable and more difficult to annotate with ground-truth labels.
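For illustration, the following is a minimal sketch of what driver-specific gaze zone classification with a YOLOv8 classification model can look like using the Ultralytics Python API. It is not the authors' code: the dataset folder, class names, image files, and training settings are assumptions, and the paper itself works through the YOLOv8 command line interface and two companion apps.

```python
# Minimal sketch (not the authors' implementation): fine-tune a YOLOv8
# image-classification model on gaze zone crops for a single driver.
# Assumes a hypothetical dataset folder laid out in the standard
# classification format:
#   gaze_zones/train/<zone_name>/*.jpg
#   gaze_zones/val/<zone_name>/*.jpg
from ultralytics import YOLO

# Start from a pretrained classification checkpoint and fine-tune it on
# one driver's images (zone names such as "left_mirror" are examples).
model = YOLO("yolov8n-cls.pt")
model.train(data="gaze_zones", epochs=50, imgsz=224)

# Annotate a new frame: the top-1 class index and its confidence give
# the predicted gaze zone for that image.
results = model("frame_0001.jpg")
probs = results[0].probs
print(results[0].names[probs.top1], float(probs.top1conf))
```

The same training run can also be launched from the Ultralytics command line interface mentioned in the abstract (its `yolo classify train` entry point); training one model per driver and recording condition reflects the paper's finding that accuracy is near-perfect when the model is tuned to the specific annotation setting.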

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11598252
DOI: http://dx.doi.org/10.3390/s24227254
