Our goal was to determine whether performance variability during predictive visual tracking can provide a screening measure for mild traumatic brain injury (mTBI). Seventeen subjects with chronic postconcussive syndrome and 9 healthy control subjects were included in this study. Eye movements were recorded with video-oculography as the subject visually tracked a target that moved through a circular trajectory. We compared the variability of gaze positional errors relative to the target with the microstructural integrity of white matter tracts as measured by the fractional anisotropy (FA) parameter of diffusion tensor imaging. Gaze error variability was significantly correlated with the mean FA values of the right anterior corona radiata (ACR) and the left superior cerebellar peduncle, tracts that support spatial processing and sustenance of attention, and the genu of the corpus callosum. Because the ACR and the genu are among the most frequently damaged white matter tracts in mTBI, the correlations imply that gaze error variability during visual tracking may provide a useful screening tool for mTBI. Gaze error variability was also significantly correlated with attention and working memory measures in neurocognitive testing; thus, measurement of visual tracking performance is promising as a fast and practical screening tool for mTBI.
DOI: http://dx.doi.org/10.1097/HTR.0b013e3181e67936
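The screening measure described above is the variability of gaze positional error relative to a target moving on a circular trajectory. As a minimal sketch of how such a metric could be computed from recorded samples (the target trajectory, noise level, and variable names here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: target moves along a circular trajectory (in degrees
# of visual angle); gaze is simulated as target position plus noise, as a
# stand-in for video-oculography recordings.
t = np.linspace(0, 2 * np.pi, 500)
target = np.column_stack([np.cos(t), np.sin(t)])
gaze = target + rng.normal(scale=0.3, size=target.shape)

# Positional error of gaze relative to the target at each sample, then
# its variability (standard deviation) -- the candidate screening metric.
error = np.linalg.norm(gaze - target, axis=1)
gaze_error_variability = error.std()
```

In practice the recorded gaze traces would replace the simulated ones; the variability statistic itself is a single scalar per subject, which is what makes the measure attractive as a fast screen.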
Int J Biol Macromol
January 2025
Key Laboratory of Bionic Engineering (Ministry of Education), College of Biological and Agricultural Engineering, Jilin University, Changchun 130022, China.
Inspired by the superhydrophobic characteristics of duck feathers, this research introduced an intelligent, responsive, and eco-friendly food packaging film constructed from carboxymethyl cellulose (CMC) and gelatin (Gel). The film successfully mimics the asymmetric, bionic, barbed structure of a duck feather, offering outstanding hydrophobicity and moisture vapor resistance. Moreover, the bionic film demonstrated good mechanical strength, film-forming ability, and biodegradability.
Front Robot AI
January 2025
Life- and Neurosciences, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.
Biological vision systems simultaneously learn to efficiently encode their visual inputs and to control the movements of their eyes based on the visual input they sample. This autonomous joint learning of visual representations and actions has previously been modeled in the Active Efficient Coding (AEC) framework and implemented using traditional frame-based cameras. However, modern event-based cameras are inspired by the retina and offer advantages in terms of acquisition rate, dynamic range, and power consumption.
Front Neurorobot
January 2025
School of Information and Communication Engineering, Hainan University, Haikou, China.
A reward-shaping deep deterministic policy gradient (RS-DDPG) and simultaneous localization and mapping (SLAM) path-tracking algorithm is proposed to address the low accuracy and poor robustness of target path tracking in robotic control during maneuvers. The RS-DDPG algorithm, based on deep reinforcement learning (DRL), designs a reward function that optimizes the parameters of DDPG to achieve the required tracking accuracy and stability. A visual SLAM algorithm based on semantic segmentation and geometric information is also proposed, addressing the poor robustness and susceptibility to interference from dynamic objects that affect visual-sensor-based SLAM in dynamic scenes.
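The abstract above does not give the reward function's exact form, so the following is only a hedged sketch of the reward-shaping idea it describes: combining a path-tracking accuracy term with a stability (action-smoothness) penalty. The weights and function names are illustrative assumptions.

```python
import numpy as np

def shaped_reward(tracking_error, action, prev_action,
                  w_err=1.0, w_smooth=0.1):
    """Illustrative shaped reward: penalize tracking error and abrupt
    control changes. Not the paper's actual formulation, which is not
    specified in the abstract."""
    accuracy = -w_err * float(np.linalg.norm(tracking_error))
    stability = -w_smooth * float(np.linalg.norm(action - prev_action))
    return accuracy + stability

# Zero tracking error and an unchanged action yield the maximum reward (0).
r_best = shaped_reward(np.zeros(2), np.ones(1), np.ones(1))
# Any error or control jerk makes the reward negative.
r_worse = shaped_reward(np.array([0.2, -0.1]), np.array([0.5]), np.array([0.3]))
```

A shaped reward of this kind would be fed to a standard DDPG actor-critic update; the shaping terms steer learning toward trajectories that are both accurate and smooth.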
J Neurosci
January 2025
The Department of Psychology and The Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem.
Predictive updating of an object's spatial coordinates from pre-saccade to post-saccade contributes to stable visual perception. Whether object features are predictively remapped remains contested. We set out to characterise the spatiotemporal dynamics of feature processing during stable fixation and active vision.
Neuroscience
January 2025
Departamento de Ciencias Médicas y de la Vida, Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán, Mexico; Laboratorio de Conducta Animal, Departamento de Psicología, Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán, Mexico.
Motor actions adapt dynamically to external changes through the brain's ability to predict sensory outcomes and adjust for discrepancies between anticipated and actual sensory inputs. In this study, we investigated how changes in target speed (v) and direction influenced visuomotor responses, focusing on gaze and manual joystick control during an interception task. Participants tracked a moving target with sinusoidal variations in v and directional changes, generating sensory prediction errors and requiring real-time adjustments.