Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties. We show that together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
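As a minimal sketch of the reparameterization described above (not the authors' implementation), the snippet below conditions the directional input on the view direction reflected about the predicted surface normal, w_r = 2(w_o . n)n - w_o, instead of on the view direction itself, and shows one plausible form of a normal-vector penalty. The function names, the quadratic penalty, and the toy data are assumptions for illustration only.

```python
import numpy as np

def reflect(view_dirs, normals):
    """Reflect unit view directions (pointing from the surface toward the
    camera) about unit surface normals: w_r = 2 (w_o . n) n - w_o."""
    dots = np.sum(view_dirs * normals, axis=-1, keepdims=True)
    return 2.0 * dots * normals - view_dirs

def normal_orientation_penalty(normals, view_dirs, weights):
    """Hypothetical stand-in for a normal regularizer: penalize normals that
    face away from the camera, weighted by each sample's ray contribution."""
    dots = np.sum(normals * view_dirs, axis=-1)   # n . w_o per sample
    backfacing = np.maximum(0.0, -dots) ** 2      # only penalize n . w_o < 0
    return np.sum(weights * backfacing)

# Toy usage: a batch of samples along one ray.
rng = np.random.default_rng(0)
normals = rng.normal(size=(8, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
view_dirs = np.tile(np.array([0.0, 0.0, 1.0]), (8, 1))   # toward the camera
weights = np.full(8, 1.0 / 8)

refl_dirs = reflect(view_dirs, normals)   # fed to the directional MLP instead of view_dirs
loss_orient = normal_orientation_penalty(normals, view_dirs, weights)
print(refl_dirs.shape, float(loss_orient))
```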

Source
http://dx.doi.org/10.1109/TPAMI.2024.3360018

Publication Analysis

Top Keywords

view-dependent appearance (8)
neural radiance (8)
radiance fields (8)
outgoing radiance (8)
radiance (6)
ref-nerf structured (4)
view-dependent (4)
structured view-dependent (4)
appearance neural (4)
fields neural (4)

Similar Publications

Texture synthesis is a fundamental problem in computer graphics that would benefit various applications. Existing methods are effective in handling 2D image textures. In contrast, many real-world textures contain meso-structure in the 3D geometry space, such as grass, leaves, and fabrics, which cannot be effectively modeled using only 2D image textures.

Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints. However, NeRF's computational requirements are prohibitive for real-time applications: rendering views from a trained NeRF requires querying a multilayer perceptron (MLP) hundreds of times per ray. We present a method to train a NeRF, then precompute and store (i.e., "bake") it as a novel representation that enables real-time rendering.
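To make the quoted per-ray cost concrete, here is a toy volume-rendering sketch (a stand-in, not SNeRG or a real NeRF network): the MLP, faked here by toy_mlp, is queried once per sample along the ray, so a 256-sample ray already needs 256 evaluations, repeated for every pixel. The sampling bounds and sample count are illustrative assumptions.

```python
import numpy as np

def toy_mlp(points, dirs):
    """Placeholder for NeRF's MLP: returns (density, RGB) per sample.
    A real model would be a trained network; this only counts queries."""
    sigma = np.ones(points.shape[0])            # fake densities
    rgb = np.full((points.shape[0], 3), 0.5)    # fake colors
    return sigma, rgb

def render_ray(origin, direction, near=2.0, far=6.0, n_samples=256):
    """NeRF-style quadrature along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    dirs = np.tile(direction, (n_samples, 1))
    sigma, rgb = toy_mlp(points, dirs)          # one MLP query per sample
    delta = np.diff(t, append=far)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    color = np.sum((trans * alpha)[:, None] * rgb, axis=0)
    return color, n_samples

color, queries = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(color, "MLP queries for this ray:", queries)  # hundreds per ray, per pixel
```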

The recently proposed Neural Radiance Fields (NeRF) use a continuous function formulated as a multi-layer perceptron (MLP) to model the appearance and geometry of a 3D scene. This enables realistic synthesis of novel views, even for scenes with view-dependent appearance. Many follow-up works have since extended NeRFs in different ways.

Does automatic human face categorization depend on head orientation?

Cortex

August 2021

Psychological Sciences Research Institute & Institute of Neuroscience, University of Louvain, Belgium; Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France.

Whether human categorization of visual stimuli as faces is optimal for full-front views, best revealing diagnostic features but lacking depth cues, remains largely unknown. To address this question, we presented 16 human observers with unsegmented natural images of different living and non-living objects at a fast rate (f = 12 Hz), with natural face images appearing at f/9 = 1.33 Hz.
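For concreteness, the small sketch below builds such a frequency-tagged stimulus sequence: only the 12 Hz base rate and the every-9th-item face rate come from the snippet; the labels and duration are invented placeholders.

```python
# Fast periodic visual stimulation: base stimuli at 12 Hz,
# with a face as every 9th item, i.e. at 12 / 9 = 1.33 Hz.
base_rate_hz = 12.0
face_every_n = 9
duration_s = 3.0

n_items = int(duration_s * base_rate_hz)
sequence = [
    ("face" if (i + 1) % face_every_n == 0 else "object", i / base_rate_hz)
    for i in range(n_items)
]
print(f"faces appear at {base_rate_hz / face_every_n:.2f} Hz")
for label, onset_s in sequence[:12]:
    print(f"{onset_s:5.3f} s  {label}")
```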
