Microsaccade selectivity as discriminative feature for object decoding.

iScience

School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 14399-57131, Iran.

Published: January 2025

Microsaccades, a form of fixational eye movement, help maintain visual stability during fixation. This study examines how microsaccade rates are modulated by stimulus category in monkeys and humans during a passive viewing task. Stimuli were grouped into four categories: human, animal, natural, and man-made. Distinct post-stimulus microsaccade patterns emerged across these categories, allowing the stimulus category to be decoded with accuracy and recall of up to 85%. Microsaccade rates were independent of pupil-size changes. Neural recordings showed that category classification in the inferior temporal (IT) cortex peaks earlier than the change in microsaccade rate, suggesting that feedback from IT influences eye movements after stimulus discrimination. These results inform neurobiological models, human-machine interfaces, and the design of experimental visual stimuli, and deepen understanding of the role of microsaccades in object decoding.
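The pipeline the abstract describes starts from detecting microsaccades in gaze traces and comparing post-stimulus rates across categories. As a minimal sketch only (the paper does not specify its detector here), the widely used Engbert-Kliegl velocity-threshold algorithm can be implemented as below; the sampling rate, threshold multiplier `lam`, and minimum-duration parameter are illustrative assumptions, not the authors' settings.

```python
import math

def velocities(x, y, fs):
    """5-point moving-difference velocity estimate (deg/s) from position traces."""
    vx, vy = [], []
    for i in range(2, len(x) - 2):
        vx.append(fs * (x[i + 1] + x[i + 2] - x[i - 1] - x[i - 2]) / 6.0)
        vy.append(fs * (y[i + 1] + y[i + 2] - y[i - 1] - y[i - 2]) / 6.0)
    return vx, vy

def robust_sd(v):
    """Median-based (outlier-resistant) estimate of the velocity standard deviation."""
    n = len(v)
    med = sorted(v)[n // 2]
    med_sq = sorted(u * u for u in v)[n // 2]
    return math.sqrt(max(med_sq - med * med, 1e-12))

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Return (onset, offset) sample indices of candidate microsaccades."""
    vx, vy = velocities(x, y, fs)
    tx, ty = lam * robust_sd(vx), lam * robust_sd(vy)
    # A sample counts as saccadic if its velocity lies outside the ellipse (tx, ty).
    above = [(ax / tx) ** 2 + (ay / ty) ** 2 > 1.0 for ax, ay in zip(vx, vy)]
    events, start = [], None
    for i, flag in enumerate(above + [False]):  # sentinel closes a trailing event
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start + 2, i + 2))  # +2 restores the edge trim
            start = None
    return events

def microsaccade_rate(events, n_samples, fs):
    """Mean microsaccade rate (events per second) over the whole trace."""
    return len(events) / (n_samples / fs)
```

Applied per trial in sliding windows relative to stimulus onset, the resulting rate time courses are the kind of feature a category classifier could be trained on.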

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11731985
DOI: http://dx.doi.org/10.1016/j.isci.2024.111584

Similar Publications

Retinotopic biases in contextual feedback signals to V1 for object and scene processing.

Curr Res Neurobiol

June 2025

Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, United Kingdom.

Identifying the objects embedded in natural scenes relies on recurrent processing between lower and higher visual areas. How is cortical feedback information related to objects and scenes organised in lower visual areas? The spatial organisation of cortical feedback converging in early visual cortex during object and scene processing could be retinotopically specific as it is coded in V1, or object centred as coded in higher areas, or both. Here, we characterise object and scene-related feedback information to V1.

Data-Efficient Bone Segmentation Using Feature Pyramid- Based SegFormer.

Sensors (Basel)

December 2024

Master's Program in Information and Computer Science, Doshisha University, Kyoto 610-0394, Japan.

The semantic segmentation of bone structures demands pixel-level classification accuracy to create reliable bone models for diagnosis. While Convolutional Neural Networks (CNNs) are commonly used for segmentation, they often struggle with complex shapes due to their focus on texture features and limited ability to incorporate positional information. As orthopedic surgery increasingly requires precise automatic diagnosis, we explored SegFormer, an enhanced Vision Transformer model that better handles spatial awareness in segmentation tasks.

Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation.

Sensors (Basel)

December 2024

Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan.

Precision depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving and human-computer interaction. Through recent advancements in deep learning technologies, monocular depth estimation, with its simplicity, has surpassed the traditional stereo camera systems, bringing new possibilities in 3D sensing. In this paper, by using a single camera, we propose an end-to-end supervised monocular depth estimation autoencoder, which contains an encoder with a structure with a mixed convolution neural network and vision transformers and an effective adaptive fusion decoder to obtain high-precision depth maps.

Introduction: Segmentation tasks in computer vision play a crucial role in various applications, ranging from object detection to medical imaging and cultural heritage preservation. Traditional approaches, including convolutional neural networks (CNNs) and standard transformer-based models, have achieved significant success; however, they often face challenges in capturing fine-grained details and maintaining efficiency across diverse datasets. These methods struggle with balancing precision and computational efficiency, especially when dealing with complex patterns and high-resolution images.
