Foveated rendering leverages real-time eye tracking to improve hardware efficiency and visual quality in virtual reality (VR). Eye tracking determines where the user is looking, allowing the system to render high-resolution graphics only in the foveal region (the small area of the retina where visual acuity is highest), while the periphery is rendered at lower resolution. However, modern deep learning-based gaze-tracking solutions often exhibit a long-tail distribution of tracking errors, which can degrade the user experience and reduce the benefits of foveated rendering by causing misalignment and decreased visual quality. This paper introduces FovealNet, an advanced AI-driven gaze-tracking framework designed to optimize system performance by strategically enhancing gaze-tracking accuracy. To further reduce the implementation cost of the gaze-tracking algorithm, FovealNet employs an event-based cropping method that eliminates over 64.8% of irrelevant pixels from the input image. Additionally, it incorporates a simple yet effective token-pruning strategy that dynamically removes tokens on the fly without compromising tracking accuracy. Finally, to support different runtime rendering configurations, we propose a system performance-aware multi-resolution training strategy that allows the gaze-tracking DNN to adapt and optimize overall system performance more effectively. Evaluation results demonstrate that FovealNet achieves at least a 1.42× speedup over previous methods and a 13% increase in perceptual quality of the foveated output. The code is available at https://github.com/wl3181/FovealNet.
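The token-pruning idea in the abstract can be illustrated with a short sketch. The example below is a hypothetical, minimal illustration rather than FovealNet's actual implementation: the function name prune_tokens, the L2-norm importance score, and the keep_ratio parameter are assumptions introduced here, showing only the general pattern of scoring patch tokens in a transformer-style gaze tracker and dropping the least informative ones on the fly.

```python
# Minimal sketch of on-the-fly token pruning (illustrative; not FovealNet's code).
import torch

def prune_tokens(tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the highest-scoring tokens; `tokens` has shape (batch, num_tokens, dim)."""
    # Score each token by its L2 norm as a simple importance proxy (an assumption;
    # a real tracker could use attention weights or a learned scorer instead).
    scores = tokens.norm(dim=-1)                         # (batch, num_tokens)
    k = max(1, int(tokens.shape[1] * keep_ratio))        # number of tokens to keep
    idx = scores.topk(k, dim=1).indices                  # (batch, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    return tokens.gather(1, idx)                         # (batch, k, dim)

# Example: prune half of 196 patch tokens before the remaining transformer blocks.
x = torch.randn(2, 196, 192)
print(prune_tokens(x, keep_ratio=0.5).shape)             # torch.Size([2, 98, 192])
```

Pruning before the later transformer blocks is what makes the saving worthwhile under this sketch's assumptions: the cost of the remaining attention and MLP layers scales with (and, for attention, faster than) the number of surviving tokens.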

Source: http://dx.doi.org/10.1109/TVCG.2025.3549577

Publication Analysis

Top Keywords

gaze tracking (20), foveated rendering (12), ai-driven gaze (8), tracking (8), virtual reality (8), visual quality (8), optimize system (8), system performance (8), tracking accuracy (8), gaze (5)

Similar Publications

Exploring the impact of myoelectric prosthesis controllers on visuomotor behavior.

J Neuroeng Rehabil

March 2025

Department of Biomedical Engineering, Faculty of Engineering, College of Natural and Applied Science, University of Alberta, Edmonton, AB, Canada.

Background: Prosthesis users often rely on vision to monitor the activity of their prosthesis, which can be cognitively demanding. This compensatory visual behaviour may be attributed to an absence of feedback from the prosthesis or to the unreliability of myoelectric control. Unreliability can arise from unpredictable control caused by variations in electromyography signals as the arm moves through different limb positions during functional use.

Prompt-based polyp segmentation during endoscopy.

Med Image Anal

February 2025

School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK.

Accurate judgment and identification of polyp size are crucial in endoscopic diagnosis. However, the indistinct boundaries of polyps lead to missegmentation and missed cancer diagnoses. In this paper, a prompt-based polyp segmentation method (PPSM) is proposed to assist in early-stage cancer diagnosis during endoscopy.

Battery-free head orientation measurement using passive RFID tags.

Wearable Technol

February 2025

Department of Human Centered Design, Cornell University, Ithaca, NY, USA.

Real-time measurement of head rotation, a primary human body movement, offers potential advantages in rehabilitating head or neck motor disorders, promoting seamless human-robot interaction, and tracking the lateral glance of children with autism spectrum disorder for effective intervention. However, existing options such as cameras capturing the entire face or skin-attached sensors have limitations concerning privacy, safety, and/or usability. This research introduces a novel method that employs a battery-free, RFID tag-based wearable sensor for monitoring head orientation as a substitute for existing options such as cameras.

Eye gaze tracking is increasingly popular due to improved technology and availability. In the domain of assistive device control, however, eye gaze tracking is often used in discrete ways (e.g.

Unlabelled: Face masks became part of everyday life during the SARS-CoV-2 pandemic. Previous studies showed that face recognition relies on holistic face processing and that the absence of facial features can lower recognition ability. This contrasts with experience during the pandemic, when people could correctly recognize faces even though masks covered part of the face.
