Prior foveated rendering methods often suffer from a limitation where the shading load escalates with increasing display resolution, leading to decreased efficiency, particularly at retinal-level resolutions. To tackle this challenge, we start from the perceptual characteristics of the human visual system (HVS) and present visual acuity-consistent foveated rendering (VaFR), which aims to achieve exceptional rendering performance at retinal-level resolutions. Specifically, we propose a novel log-polar mapping function derived from the human visual acuity model, which accommodates the natural bandwidth of the visual system. This mapping function and its associated shading rate guarantee a consistent amount of rendered information regardless of the display resolution of the VR HMD. Consequently, VaFR outperforms alternative methods, improving rendering speed while preserving perceptual visual quality, particularly at retinal resolutions. We validate our approach using both rasterization and ray-casting rendering pipelines, as well as different binocular rendering strategies for HMD devices. Across diverse test scenarios, our approach delivers better perceptual visual quality than prior foveated rendering methods while achieving an impressive speedup of 6.5×-9.29× for deferred rendering of 3D scenes and an even greater speedup of 10.4×-16.4× for ray casting at retinal resolution. Our approach also substantially improves the rendering performance of binocular 8K path tracing, achieving smooth frame rates.
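As a rough illustration of the idea, the sketch below shows the classic log-polar transform that such mapping-based foveated renderers build on: pixels are remapped around the gaze point so that a fixed-size reduced buffer samples the fovea densely and the periphery sparsely. The abstract does not give the exact acuity-derived kernel of VaFR, so this uses the standard logarithmic radial form; all names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def log_polar_forward(px, py, gaze, buf_size, r_max):
    """Map a display pixel (px, py) to coordinates in a reduced log-polar buffer.

    gaze:     (gx, gy) fixation point in display pixels
    buf_size: (W, H) of the reduced log-polar buffer
    r_max:    maximum eccentricity radius covered, in display pixels
    """
    dx, dy = px - gaze[0], py - gaze[1]
    r = np.hypot(dx, dy)                       # eccentricity in pixels
    theta = np.arctan2(dy, dx) % (2 * np.pi)   # angle around the gaze point
    W, H = buf_size
    u = np.log1p(r) / np.log1p(r_max) * W      # logarithmic radial axis
    v = theta / (2 * np.pi) * H                # linear angular axis
    return u, v
```

Because the radial axis is logarithmic, a fixed step in `u` covers few display pixels near the gaze point and many in the periphery, which is what keeps the shading load in the reduced buffer roughly independent of display resolution.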

Source
http://dx.doi.org/10.1109/TVCG.2025.3539851

Publication Analysis

Top Keywords

foveated rendering (16), rendering (11), visual acuity (8), retinal resolution (8), prior foveated (8), display resolution (8), retinal-level resolutions (8), human visual (8), visual system (8), rendering performance (8)

Similar Publications

The integration of gaze/eye tracking into virtual and augmented reality devices has unlocked new possibilities, offering a novel human-computer interaction (HCI) modality for on-device extended reality (XR). Emerging applications in XR, such as low-effort user authentication, mental health diagnosis, and foveated rendering, demand real-time eye tracking at high frequencies, a capability that current solutions struggle to deliver. To address this challenge, we present EX-Gaze, an event-based real-time eye tracking system designed for on-device extended reality.

Foveated rendering reduces computational load by distributing resources based on the human visual system. This enables the implementation of ray tracing in virtual reality applications, where a high frame rate is essential to achieve visual immersion. However, traditional foveation methods based solely on eccentricity cannot adequately account for the complex behavior of visual attention.
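The eccentricity-only scheme this excerpt refers to can be sketched as follows: perceptual acuity falls off with angular distance from the gaze point, so the renderer shades more sparsely at higher eccentricity. This is a minimal illustration assuming the common linear acuity falloff model (acuity proportional to e2 / (e2 + eccentricity), with e2 around 2.3 degrees); the function name and clamp values are hypothetical.

```python
def shading_period(ecc_deg, e2=2.3, min_period=1.0, max_period=8.0):
    """Spacing between shaded samples (in pixels) as a function of
    eccentricity in degrees, using a linear acuity falloff model."""
    acuity = e2 / (e2 + ecc_deg)     # 1.0 at the fovea, decaying outward
    period = min_period / acuity     # shade sparser where acuity is lower
    return min(max(period, min_period), max_period)
```

Such a rule allocates full-rate shading only near the fovea, but, as the excerpt notes, it ignores attention-driven effects that are not a function of eccentricity alone.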

Neural radiance fields (NeRF) have achieved a revolutionary breakthrough in the novel view synthesis task for complex 3D scenes. However, this new paradigm struggles to meet the requirements for real-time rendering and high perceptual quality in virtual reality. In this paper, we propose VPRF, a novel visual-perception-based radiance field representation method, which for the first time integrates the visual acuity and contrast sensitivity models of the human visual system (HVS) into the radiance field rendering framework.

We propose a new scene-aware foveated rendering method, which incorporates scene awareness and the characteristics of the human visual system into the mapping-based foveated rendering framework. First, we generate a conservative visual importance map that encodes the visual features of the scene, visual acuity, and gaze motion. Second, we construct a pixel size control map using a convolution kernel method.
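The two steps above can be sketched roughly as follows. This is a loose, hypothetical illustration: the excerpt does not specify the kernel or the mapping from importance to pixel size, so this sketch substitutes a max filter as a conservative stand-in for the paper's convolution kernel, and a linear importance-to-size rule; every name and parameter is an assumption.

```python
import numpy as np

def pixel_size_map(importance, kernel_size=5, max_pixel_size=4):
    """Derive a per-texel pixel-size control map from a visual importance
    map with values in [0, 1]. Conservative: each texel takes the maximum
    importance found in its kernel-sized neighborhood, so important regions
    are never shaded coarsely."""
    k, pad = kernel_size, kernel_size // 2
    padded = np.pad(importance, pad, mode="edge")
    H, W = importance.shape
    dilated = np.zeros_like(importance)
    for i in range(H):
        for j in range(W):
            dilated[i, j] = padded[i:i + k, j:j + k].max()
    # low importance -> coarse shading (large pixels); high importance -> size 1
    return np.rint(1 + (1 - dilated) * (max_pixel_size - 1)).astype(int)
```

A fully important region keeps pixel size 1 (full shading rate), while an unimportant region is shaded at the coarsest allowed size.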
