Eye-tracking glasses in face-to-face interactions: Manual versus automated assessment of areas-of-interest.

Behav Res Methods

Amsterdam UMC, University of Amsterdam, Department of Medical Psychology, Amsterdam Public Health, Location AMC, Meibergdreef 9, 1100 DD, Amsterdam, The Netherlands.

Published: October 2021

The assessment of gaze behaviour is essential for understanding the psychology of communication. Mobile eye-tracking glasses are useful for measuring gaze behaviour during dynamic interactions. Eye-tracking data can be analysed using manually annotated areas-of-interest. Computer vision algorithms may alternatively be used to reduce not only the manual effort but also the subjectivity and complexity of these analyses. Using additional re-identification (Re-ID) algorithms, different participants in the interaction can be distinguished. The aim of this study was to compare the results of manual annotation of mobile eye-tracking data with the results of a computer vision algorithm. We selected the first minute of seven randomly selected eye-tracking videos of consultations between physicians and patients in a Dutch Internal Medicine out-patient clinic. Three human annotators and a computer vision algorithm annotated the mobile eye-tracking data, after which interrater reliability was assessed between the areas-of-interest produced by the annotators and by the computer vision algorithm. Additionally, we explored interrater reliability when using lengthy videos and different area-of-interest shapes. In total, we analysed more than 65 min of eye-tracking videos manually and with the algorithm. Overall, the absolute normalized difference between the manual and the algorithm annotations of face-gaze was less than 2%. Our results show high interrater agreement between human annotators and the algorithm, with Cohen's kappa ranging from 0.85 to 0.98. We conclude that computer vision algorithms produce results comparable to those of human annotators. Analyses by the algorithm are not subject to annotator fatigue or subjectivity and can therefore advance eye-tracking analyses.
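The two agreement measures reported above, Cohen's kappa between human and algorithmic annotations and the absolute normalized difference in face-gaze, can both be computed from frame-level area-of-interest labels. A minimal sketch in Python; the AOI names and label sequences here are hypothetical illustrations, not data from the study:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: product of each label's marginal proportions.
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical frame-by-frame AOI labels from a human annotator
# and the computer vision algorithm.
human = ["face", "face", "body", "face", "other", "face", "body", "face"]
algo  = ["face", "face", "body", "face", "face",  "face", "body", "face"]

kappa = cohens_kappa(human, algo)

# Absolute normalized difference in face-gaze: how much the two
# annotations disagree on the overall proportion of face frames.
face_diff = abs(human.count("face") - algo.count("face")) / len(human)
```

With these toy sequences, kappa works out to about 0.73 and the face-gaze difference to 12.5%; on the study's real data the corresponding figures were 0.85–0.98 and under 2%.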

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8516759
DOI: http://dx.doi.org/10.3758/s13428-021-01544-2

Publication Analysis

Top Keywords

computer vision (20)
mobile eye-tracking (12)
eye-tracking data (12)
vision algorithm (12)
human annotators (12)
eye-tracking (8)
eye-tracking glasses (8)
gaze behaviour (8)
vision algorithms (8)
eye-tracking videos (8)

Similar Publications

Low Complexity Regions (LCRs) are segments of proteins with a low diversity of amino acid composition. These regions play important roles in proteins. However, annotations describing these functions are dispersed across databases and scientific literature.

View Article and Find Full Text PDF

We used machine learning to investigate the residual visual field (VF) deficits and macular retinal ganglion cell (RGC) thickness loss patterns in recovered optic neuritis (ON). We applied archetypal analysis (AA) to 377 same-day pairings of 10-2 VF and optical coherence tomography (OCT) macula images from 93 ON eyes and 70 normal fellow eyes ≥ 90 days after acute ON. We correlated archetype (AT) weights (total weight = 100%) of VFs with total retinal thickness (TRT), inner retinal thickness (IRT), and macular ganglion cell-inner plexiform layer (GCIPL) thickness.

The proliferation of deepfake generation has become increasingly widespread. Current solutions for automatically detecting and classifying generated content require substantial computational resources, making them impractical for the average non-expert individual, particularly in edge computing applications. In this paper, we propose a series of techniques to accelerate the inference speed of deepfake detection on video data.

To observe the structural changes of the retina and choroid in patients with different degrees of myopia, we recruited 219 subjects with different degrees of myopia for best corrected visual acuity, computer refraction, intraocular pressure, axial length (AL), optical coherence tomography (OCT) imaging, and other examinations. Central macular retinal thickness (CRT), subfoveal choroidal thickness (SFCT), nasal retinal thickness (NRT), temporal retinal thickness (TRT), nasal choroidal thickness (NCT), and temporal choroidal thickness (TCT) were measured by OCT.

Crops3D: a diverse 3D crop dataset for realistic perception and segmentation toward agricultural applications.

Sci Data

December 2024

National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Huazhong Agricultural University, Wuhan, 430070, P. R. China.

Point cloud analysis is a crucial task in computer vision. Despite significant advances over the past decade, development in the agricultural domain has been hampered by a scarcity of datasets. To facilitate 3D point cloud research in the agricultural community, we introduce Crops3D, a diverse real-world dataset derived from authentic agricultural scenarios.
