Objectives: Clinicians performing a horizontal head impulse test (HIT) are looking for a corrective saccade. The detection of such saccades is a challenge. The aim of this study is to assess an expert's likelihood of detecting corrective saccades in subjects with vestibular hypofunction.
Design: In a prospective cohort observational study at a tertiary referral hospital, we assessed 365 horizontal HITs performed clinically by an expert neurootologist in a convenience sample of seven patients with a unilaterally or bilaterally deficient vestibulo-ocular reflex (VOR). All HITs were simultaneously recorded by video-oculography as the gold standard. We evaluated saccade latency and amplitude, head velocity, and VOR gain.
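As a rough illustration of how such parameters are typically derived from video-oculography traces, the sketch below computes peak head velocity, VOR gain, and the first post-impulse saccade's latency and amplitude from time-aligned head and eye velocity arrays. The sampling rate, velocity thresholds, and gain convention are illustrative assumptions, not the study's analysis pipeline.

```python
# Illustrative sketch (not the study's pipeline): deriving head impulse test
# parameters from time-aligned video-oculography velocity traces.
# Sampling rate, thresholds, and the gain convention are assumptions.
import numpy as np

FS = 250.0  # assumed sampling rate (Hz)

def hit_parameters(head_vel: np.ndarray, eye_vel: np.ndarray):
    """Return peak head velocity, VOR gain, and first-saccade latency/amplitude.

    head_vel, eye_vel: 1-D angular-velocity traces (deg/s), with the eye trace
    sign-inverted so a perfect VOR gives eye_vel approximately equal to head_vel.
    """
    onset = int(np.argmax(np.abs(head_vel) > 20.0))   # impulse onset (assumed 20 deg/s threshold)
    peak_idx = int(np.argmax(np.abs(head_vel)))
    peak_head_vel = float(np.abs(head_vel[peak_idx]))

    # End of the impulse: head velocity falls back below 5 deg/s after the peak.
    end = peak_idx + int(np.argmax(np.abs(head_vel[peak_idx:]) < 5.0))
    end = max(end, onset + 1)

    # Gain as the ratio of summed eye and head speeds over the impulse
    # (one common convention; others use area or instantaneous ratios).
    gain = float(np.sum(np.abs(eye_vel[onset:end])) / np.sum(np.abs(head_vel[onset:end])))

    # First corrective saccade: eye speed exceeding an assumed 60 deg/s
    # threshold after the impulse has ended.
    above = np.abs(eye_vel[end:]) > 60.0
    if not above.any():
        return peak_head_vel, gain, None, None        # no saccade found
    s_start = end + int(np.argmax(above))
    s_end = s_start + int(np.argmax(np.abs(eye_vel[s_start:]) < 60.0))
    latency_ms = (s_start - onset) / FS * 1000.0
    amplitude_deg = float(np.sum(np.abs(eye_vel[s_start:s_end])) / FS)  # integrate speed -> degrees
    return peak_head_vel, gain, latency_ms, amplitude_deg
```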
Results: Saccade amplitude was statistically the most significant parameter for saccade detection (p < 0.001). The probability of saccade detection was eight times higher for HITs toward the pathological side (p = 0.029). In addition, an increase in saccade amplitude resulted in an increased probability of detection (odds ratio [OR] 1.77 [1.31 to 2.40] per degree, p < 0.001). The sensitivity to detect a saccade with 1 degree amplitude was 92.9%, and the specificity was 79%. Saccade latency and VOR gain did not significantly influence the probability of the physician identifying a saccade (OR 1.02 [0.94 to 1.11] per 10-msec increase in latency and OR 0.84 [0.60 to 1.17] per 0.1 increase in VOR gain).
Conclusions: The saccade amplitude is the most important factor for accurate saccade detection in clinically performed head impulse tests. Contrary to current knowledge, saccade latency and VOR gain play a minor role in saccade detection.
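To make the reported effect size concrete, here is a minimal sketch of how an odds ratio of 1.77 per degree scales detection probability under a simple logistic-odds assumption; only the odds ratio comes from the results above, and the baseline probability in the example is a hypothetical placeholder.

```python
# Minimal sketch: how an odds ratio per degree of saccade amplitude scales the
# probability that the examiner detects the saccade, assuming a simple
# logistic-odds model. Only the odds ratio (1.77 per degree) comes from the
# reported results; the baseline probability is a hypothetical placeholder.

def scaled_probability(baseline_prob: float, odds_ratio: float, delta_deg: float) -> float:
    """Scale the odds of detection by odds_ratio ** delta_deg and convert back to a probability."""
    odds = baseline_prob / (1.0 - baseline_prob)
    new_odds = odds * odds_ratio ** delta_deg
    return new_odds / (1.0 + new_odds)

OR_PER_DEGREE = 1.77            # reported odds ratio per degree of saccade amplitude
HYPOTHETICAL_BASELINE = 0.30    # assumed detection probability at a reference amplitude

for extra_deg in (1, 2, 3, 4):
    p = scaled_probability(HYPOTHETICAL_BASELINE, OR_PER_DEGREE, extra_deg)
    print(f"+{extra_deg} deg amplitude -> detection probability ~ {p:.2f}")
```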
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7722467
DOI: http://dx.doi.org/10.1097/AUD.0000000000000894
Background: Early detection is crucial for alleviating Alzheimer's disease (AD) burden. At present, assessment for early detection of AD is time-consuming, costly, and often invasive. In recent years, various eye-tracking methodologies have emerged, and they show promising results in detecting persons at risk of developing AD dementia (ADD).
Front Psychol, December 2024. Department of Learning, Data-Analytics and Technology, Faculty of Behavioural, Management and Social Sciences, University of Twente, Enschede, Netherlands.
Learning experiences are intertwined with emotions, which in turn have a significant effect on learning outcomes. Therefore, digital learning environments can benefit from taking the emotional state of the learner into account. To do so, the first step is real-time emotion detection, which is made possible by sensors that continuously collect physiological and eye-tracking data.
Front Neurol, December 2024. Institut de Recherche Oto-Neurologique (IRON), Paris, France.
Introduction: While most head movements in daily life are active, most tools used to assess vestibular deficits rely on passive head movements. A single gain value is not sufficient to quantify gaze stabilization efficiency during active movements in vestibular deficit patients. Moreover, during active gaze shifts, anticipatory mechanisms come into play.
Objective: To explore and validate effective eye movement features related to motion sickness (MS) through closed-track experiments and to provide valuable insights for practical applications.
Background: With the development of autonomous vehicles (AVs), MS has attracted more and more attention. Eye movements have great potential to evaluate the severity of MS as an objective quantitative indicator of vestibular function.
Sci Rep, December 2024. School of Computing, University of Leeds, Leeds, UK.
Human visual attention allows prior knowledge or expectations to influence visual processing, allocating limited computational resources only to those parts of the image that are likely to be behaviourally important. Here, we present an image recognition system based on biological vision that guides attention to more informative locations within a larger parent image using a sequence of saccade-like motions. We demonstrate that, at the end of the saccade sequence, the system has improved classification ability compared with the convolutional neural network (CNN) that forms the feedforward part of the model.
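A generic sketch of the idea described above, saccade-like glimpses at high-saliency locations whose classifier outputs are aggregated, is shown below; the saliency measure and the `classify_patch` placeholder are illustrative assumptions, not the authors' model.

```python
# Generic sketch of saccade-like attention over a large image (not the
# authors' model): pick the next glimpse location from a simple saliency
# map, classify each glimpse, and average the class scores.
# `classify_patch` is a placeholder for any patch classifier (e.g. a CNN).
import numpy as np

def classify_patch(patch: np.ndarray, n_classes: int = 10) -> np.ndarray:
    """Placeholder classifier: returns per-class scores for one patch."""
    rng = np.random.default_rng(int(patch.sum()) % (2**32))
    return rng.random(n_classes)

def saliency(image: np.ndarray) -> np.ndarray:
    """Crude saliency: squared gradient magnitude (stand-in for learned attention)."""
    gy, gx = np.gradient(image.astype(float))
    return gx**2 + gy**2

def saccade_classify(image: np.ndarray, patch: int = 32, n_glimpses: int = 5) -> np.ndarray:
    """Visit n_glimpses high-saliency locations and average their class scores."""
    sal = saliency(image)
    scores = np.zeros(10)
    h, w = image.shape
    for _ in range(n_glimpses):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)          # next fixation = saliency peak
        y0 = int(np.clip(y - patch // 2, 0, h - patch))
        x0 = int(np.clip(x - patch // 2, 0, w - patch))
        glimpse = image[y0:y0 + patch, x0:x0 + patch]
        scores += classify_patch(glimpse)
        sal[y0:y0 + patch, x0:x0 + patch] = -np.inf                 # inhibition of return
    return scores / n_glimpses

# Example: classify a random 256x256 "parent image" with 5 saccade-like glimpses.
print(np.argmax(saccade_classify(np.random.rand(256, 256))))
```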