Online experiments have been transforming the field of behavioral research, enabling researchers to increase sample sizes, access diverse populations, lower the costs of data collection, and promote reproducibility. The field of developmental psychology increasingly exploits such online testing approaches. Because infants cannot give explicit behavioral responses, one key outcome measure is their gaze behavior. In the absence of automated eye trackers in participants' homes, automatic gaze classification from webcam data would make it possible to avoid painstaking manual coding. However, the lack of a controlled experimental environment may introduce various noise factors that impede automatic face detection or gaze classification. We created an adult webcam dataset that systematically reproduced noise factors from infant webcam studies which might affect automated gaze coding accuracy. We varied participants' left-right offset, distance to the camera, facial rotation, and the direction of the lighting source. Running two state-of-the-art classification algorithms (iCatcher+ and OWLET) revealed that face detection performance was particularly affected by the lighting source, while gaze coding accuracy was consistently affected by the distance to the camera and the lighting source. Morphing participants' faces to make them unidentifiable did not generally affect the results, suggesting that facial anonymization could be used when making online video data publicly available for further study and transparency. Our findings will help improve study design for online experiments with infant and adult participants. Moreover, training algorithms on our dataset will allow researchers to improve their robustness and help developmental psychologists leverage online testing more efficiently.
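For readers planning a similar evaluation, the sketch below shows one way to tabulate frame-level agreement between human gaze coding and an automated classifier for each noise condition. It is a minimal sketch under stated assumptions: the file name and CSV columns (condition, frame, human_label, model_label) are hypothetical and are not the output format of iCatcher+ or OWLET.

```python
# Hedged sketch: summarizing per-frame gaze-coding accuracy by recording condition.
# The CSV layout (columns: condition, frame, human_label, model_label) is an
# assumption for illustration -- it is NOT the native output of iCatcher+ or OWLET.
import csv
from collections import defaultdict


def accuracy_by_condition(path: str) -> dict[str, float]:
    """Return percent agreement between human and model gaze labels per condition."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cond = row["condition"]  # e.g. "lighting=side", "distance=far"
            totals[cond] += 1
            hits[cond] += int(row["human_label"] == row["model_label"])
    return {cond: hits[cond] / totals[cond] for cond in totals}


if __name__ == "__main__":
    # Print one accuracy figure per noise condition, sorted for readability.
    for cond, acc in sorted(accuracy_by_condition("gaze_frames.csv").items()):
        print(f"{cond}: {acc:.1%}")
```

A per-condition breakdown like this makes it easy to see which manipulation (offset, distance, rotation, or lighting) degrades automated coding most, mirroring the comparisons described in the abstract.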
Full text: PMC (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362184) | DOI (http://dx.doi.org/10.3758/s13428-024-02424-1)
Related articles:
Atten Percept Psychophys, January 2025. Department of Psychology, Senshu University, Kawasaki, Japan.
Directional judgments of an arrow became slower when the direction and location were incongruent in a spatial Stroop task (i.e., a standard congruency effect).
PLoS One, December 2024. Department of Psychology, University of British Columbia, BC, Canada.
When people discuss something that they can both see, their attention becomes increasingly coupled. Previous studies have found that this coupling is temporally asymmetric (e.g. …).
Sci Rep, December 2024. Department of Clinical and Health Psychology, Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria.
J Neurophysiol, November 2024. York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada.
A fundamental question in neuroscience is how the brain integrates egocentric (body-centered) and allocentric (landmark-centered) visual cues, but for many years this question was ignored in sensorimotor studies. This changed in recent behavioral experiments, but the underlying physiology of egocentric/allocentric integration remained largely unstudied. The specific goal of this review is to explain how prefrontal neurons integrate eye-centered and landmark-centered visual codes for optimal gaze behavior.
J Neurosci, November 2024. Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada.
Eye movements in daily life occur in rapid succession and often without a predefined goal. Using a free-viewing task, we examined how fixation duration prior to a saccade correlates with visual saliency and neuronal activity in the superior colliculus (SC) at the saccade goal. Rhesus monkeys (three male) watched videos of natural, dynamic scenes while eye movements were tracked and, simultaneously, neurons were recorded in the superficial and intermediate layers of the superior colliculus (SCs and SCi, respectively), a midbrain structure closely associated with gaze, attention, and saliency coding.