Predictive updating of an object's spatial coordinates from pre-saccade to post-saccade contributes to stable visual perception. Whether object features are predictively remapped remains contested. We set out to characterise the spatiotemporal dynamics of feature processing during stable fixation and active vision. To do so, we applied multivariate decoding methods to electroencephalography (EEG) data collected while human participants (male and female) viewed brief visual stimuli. Stimuli appeared at different locations across the visual field at either high or low spatial frequency (SF). During fixation, classifiers were trained to decode SF presented at one parafoveal location and cross-tested on SF from either the same, adjacent, or more peripheral locations. When training and testing on the same location, SF was classified shortly after stimulus onset (∼79 ms). Decoding of SF at locations farther from the trained location emerged later (∼144-295 ms), with decoding latency modulated by eccentricity. This analysis provides a detailed time course for the spread of feature information across the visual field. Next, we investigated how active vision impacts the emergence of SF information. In the presence of a saccade, peripheral SF was decoded earlier at parafoveal locations, indicating predictive anticipation of SF due to the saccade. Crucially, however, this predictive effect was not limited to the specific remapped location. Rather, peripheral SF was correctly classified, on an accelerated time course, at all parafoveal positions. This indicates spatially coarse, predictive anticipation of stimulus features during active vision, likely enabling a smooth transition on saccade landing.

Maintaining a continuous representation of object features across saccades is vital for stable vision. To characterise the spatiotemporal dynamics of stimulus feature representation in the brain, we presented stimuli at high and low spatial frequencies at multiple locations across the visual field. Applying EEG decoding methods, we tracked the neural representation of spatial frequency during both stable fixation and active vision. Using this approach, we provide a detailed time course for the spread of feature information across the visual field during fixation. In addition, when a saccade is imminent, we show that peripheral spatial frequency is predictively represented in anticipation of the post-saccadic input.
DOI: http://dx.doi.org/10.1523/JNEUROSCI.1652-24.2024
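The cross-location decoding logic described in the abstract can be made concrete with a short sketch. The snippet below is a minimal illustration in Python with scikit-learn, not the authors' analysis pipeline; the data layout (a trials x channels x time points array), the per-trial label and location vectors, and the choice of an LDA classifier are assumptions made for the example.

```python
# Minimal sketch of cross-location spatial-frequency (SF) decoding from
# epoched EEG. Assumed inputs (hypothetical, not from the paper's code):
#   epochs    : ndarray (n_trials, n_channels, n_times)
#   sf_labels : ndarray (n_trials,), 0 = low SF, 1 = high SF
#   locations : ndarray (n_trials,), each trial's stimulus location
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_location_decoding(epochs, sf_labels, locations, train_loc, test_loc):
    """Train an SF classifier at train_loc and test it at test_loc, per time point."""
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    train_idx = np.where(locations == train_loc)[0]
    test_idx = np.where(locations == test_loc)[0]
    n_times = epochs.shape[2]
    scores = np.empty(n_times)
    for t in range(n_times):
        if train_loc == test_loc:
            # Same-location decoding: cross-validate to avoid testing on training trials.
            scores[t] = cross_val_score(clf, epochs[train_idx, :, t],
                                        sf_labels[train_idx], cv=5).mean()
        else:
            # Cross-location generalisation: fit at one location, score at another.
            clf.fit(epochs[train_idx, :, t], sf_labels[train_idx])
            scores[t] = clf.score(epochs[test_idx, :, t], sf_labels[test_idx])
    return scores  # accuracy per time point; chance level is 0.5
```

In a scheme like this, the decoding latencies reported above would correspond to the first post-stimulus time points at which these accuracies reliably exceed chance, assessed with an appropriate statistical test across participants.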
Front Robot AI
January 2025
Life- and Neurosciences, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.
Biological vision systems simultaneously learn to efficiently encode their visual inputs and to control the movements of their eyes based on the visual input they sample. This autonomous joint learning of visual representations and actions has previously been modeled in the Active Efficient Coding (AEC) framework and implemented using traditional frame-based cameras. However, modern event-based cameras are inspired by the retina and offer advantages in terms of acquisition rate, dynamic range, and power consumption.
Data Brief
February 2025
College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar.
This study presents a comprehensive ultrasound image dataset for Non-Alcoholic Fatty Liver Disease (NAFLD), addressing the critical need for standardized resources in AI-assisted diagnosis. The dataset comprises 10,352 high-resolution ultrasound images from 384 patients collected at King Saud University Medical City and National Guard Health Affairs in Saudi Arabia. Each image is meticulously annotated with NAFLD Activity Score (NAS), fibrosis staging, and steatosis grading based on corresponding liver biopsy results.
Cureus
December 2024
Department of Ophthalmology and Visual Science, Ophthalmology Clinic, Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kubang Kerian, MYS.
A juxtapapillary retinal capillary hemangioma (JRCH) is a rare vascular hamartoma located on the optic nerve head or adjacent region. While often associated with von Hippel-Lindau (VHL) disease, JRCHs can also occur as an isolated condition, presenting unique therapeutic challenges and risks of visual impairment. We report a case of a 50-year-old Malay gentleman with diabetes mellitus who presented with a non-progressive superior visual field defect in his left eye for three months.
Front Artif Intell
January 2025
Department of Physics and Astronomy, The University of Alabama, Tuscaloosa, AL, United States.
Recent work has established an alternative to traditional multi-layer perceptron neural networks in the form of Kolmogorov-Arnold Networks (KAN). The general KAN framework uses learnable activation functions on the edges of the computational graph followed by summation on nodes. The learnable edge activation functions in the original implementation are basis spline functions (B-Spline).
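To make this concrete, below is a minimal sketch of a KAN-style layer: each edge of the computational graph applies its own learnable univariate function, and each output node simply sums its incoming edges. The original implementation parameterises these edge functions with B-splines (plus a residual base activation); for brevity this sketch substitutes a fixed Gaussian radial basis, and the class name, shapes, and grid are illustrative assumptions, not the reference code.

```python
# Toy KAN-style layer (not the reference implementation).
# y_j = sum_i phi_ij(x_i), with each phi_ij a learnable univariate function.
# Here phi_ij is a linear combination of fixed Gaussian bumps on a grid,
# standing in for the B-spline basis used in the original KAN paper.
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):  # hypothetical class name
    def __init__(self, in_dim, out_dim, n_basis=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, n_basis))
        self.width = (x_max - x_min) / n_basis
        # One coefficient vector per edge: shape (out_dim, in_dim, n_basis).
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))

    def forward(self, x):  # x: (batch, in_dim)
        # Evaluate the basis at each input value: (batch, in_dim, n_basis).
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # phi_ij(x_i) = sum_b coef[j, i, b] * basis[:, i, b]; nodes sum over i.
        return torch.einsum("bif,oif->bo", basis, self.coef)

# Usage: stack layers to form a small KAN.
kan = nn.Sequential(ToyKANLayer(2, 5), ToyKANLayer(5, 1))
y = kan(torch.randn(16, 2))  # -> (16, 1)
```

Stacking such layers yields a network whose expressiveness lives in the learnable edge functions rather than in fixed node nonlinearities, which is the key contrast with a standard multi-layer perceptron.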
Cancer Imaging
January 2025
Melbourne Theranostic Innovation Centre, Level 8, 14-20 Blackwood St, North Melbourne, VIC, 3051, Australia.
True total-body and extended axial field-of-view (AFOV) PET/CT systems, with 1 m or more of body coverage, are now commercially available and dramatically increase system sensitivity over conventional AFOV PET/CT. The Siemens Biograph Vision Quadra (Quadra), with an AFOV of 106 cm, potentially allows significantly lower administered radiopharmaceutical activities as well as reduced scan times. The aim of this study was to optimise acquisition protocols for routine clinical imaging with FDG on the Quadra, prioritising reduced administered activity given physical infrastructure constraints in our facility.