Arbitrary Optics for Gaussian Splatting Using Space Warping.

J Imaging

Department of Computer Science, Kiel University, 24118 Kiel, Germany.

Published: December 2024

Due to recent advances in 3D reconstruction from RGB images, it is now possible to create photorealistic representations of real-world scenes that require only minutes to reconstruct and can be rendered in real time. In particular, 3D Gaussian splatting shows promising results, outperforming preceding reconstruction methods while simultaneously reducing the overall computational requirements. The success of 3D Gaussian splatting relies mainly on the efficient use of a differentiable rasterizer to render the Gaussian scene representation. One major drawback of this method is its underlying pinhole camera model. In this paper, we propose an extension of the existing method that removes this constraint and enables scene reconstruction with arbitrary camera optics, such as highly distorting fisheye lenses. Our method achieves this by applying a differentiable warping function to the Gaussian scene representation. Additionally, we reduce overfitting in outdoor scenes by utilizing a learnable skybox, which reduces the presence of floating artifacts within the reconstructed scene. On synthetic and real-world image datasets, we show that our method can create accurate scene reconstructions from highly distorted images and render photorealistic images from such reconstructions.
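The core idea of rendering non-pinhole optics with a pinhole rasterizer can be illustrated with a warp that moves each Gaussian center so that its standard pinhole projection lands where a fisheye projection of the original point would. The sketch below is a minimal illustration assuming an equidistant fisheye model (radial image coordinate proportional to the angle θ from the optical axis); the function name and formulation are hypothetical and not the paper's actual implementation, which warps the full Gaussian representation differentiably.

```python
import numpy as np

def fisheye_warp(points):
    """Warp camera-space 3D points so that a unit-focal pinhole projection
    of the warped points equals an equidistant fisheye projection (r = theta)
    of the originals. Illustrative sketch only."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2)              # distance from the optical axis
    theta = np.arctan2(r, z)              # angle to the optical axis
    d = np.linalg.norm(points, axis=1)    # preserve distance to the camera
    # equidistant model: lateral coordinates scaled so x/z, y/z give r = theta
    scale = np.where(r > 1e-9, theta / np.maximum(r, 1e-9), 0.0)
    warped = np.stack([x * scale, y * scale, np.ones_like(z)], axis=1)
    # re-embed at unit depth, then rescale to the original distance
    warped *= (d / np.linalg.norm(warped, axis=1))[:, None]
    return warped
```

A point on the optical axis is left in place, while off-axis points are pulled toward the axis, reproducing the barrel distortion of a fisheye lens under ordinary pinhole rasterization.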


Source
DOI: http://dx.doi.org/10.3390/jimaging10120330
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11678575

Publication Analysis

Top Keywords (frequency)

gaussian splatting (12), gaussian scene (8), scene representation (8), gaussian (5), scene (5), arbitrary optics (4), optics gaussian (4), splatting space (4), space warping (4), warping advances (4)

Similar Publications


This paper explores the influence of various camera settings on the quality of 3D reconstructions, particularly in indoor crime scene investigations. Utilizing Neural Radiance Fields (NeRF) and Gaussian Splatting for 3D reconstruction, we analyzed the impact of ISO, shutter speed, and aperture settings on the quality of the resulting 3D reconstructions. By conducting controlled experiments in a meeting room setup, we identified optimal settings that minimize noise and artifacts while maximizing detail and brightness.


Recently, 3D Gaussian Splatting (3DGS) has attracted widespread attention due to its high-quality rendering and ultra-fast training and rendering speeds. However, owing to the unstructured and irregular nature of Gaussian point clouds, it is difficult to guarantee geometric reconstruction accuracy and multi-view consistency by relying on image reconstruction loss alone. Although many studies on 3DGS-based surface reconstruction have emerged recently, the quality of their meshes is generally unsatisfactory.


Speech-driven facial animation technology is generally categorized into two main types: 3D and 2D talking faces. Both have garnered considerable research attention in recent years. However, to our knowledge, research into 3D talking faces has not progressed as deeply as that into 2D talking faces, particularly in terms of lip sync and perceptual mouth movements.


Differentiable 3D-Gaussian splatting (GS) is emerging as a prominent technique in computer vision and graphics for reconstructing 3D scenes. GS represents a scene as a set of 3D Gaussians with varying opacities and employs a computationally efficient splatting operation along with analytical derivatives to compute the 3D Gaussian parameters given scene images captured from various viewpoints. Unfortunately, capturing surround view (360° viewpoint) images is impossible or impractical in many real-world imaging scenarios, including underwater imaging, rooms inside a building, and autonomous navigation.
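The splatting operation described above projects each 3D Gaussian's covariance onto the image plane using the Jacobian of the pinhole projection, the local affine approximation introduced by EWA splatting. The sketch below illustrates that single step for a camera-space Gaussian; the function name and interface are illustrative, not taken from any particular 3DGS codebase.

```python
import numpy as np

def project_covariance(mean_cam, cov_cam, fx, fy):
    """Project a camera-space 3D Gaussian covariance to a 2D image-plane
    covariance via the Jacobian of the pinhole projection (local affine
    approximation, as in EWA splatting). Illustrative sketch only."""
    x, y, z = mean_cam
    # Jacobian of (fx * x / z, fy * y / z) with respect to (x, y, z)
    J = np.array([
        [fx / z, 0.0,    -fx * x / z**2],
        [0.0,    fy / z, -fy * y / z**2],
    ])
    return J @ cov_cam @ J.T  # 2x2 image-plane covariance
```

Because every operation here is a smooth function of the Gaussian parameters, gradients of a rendering loss can flow back through the projection, which is what makes the splatting pipeline end-to-end differentiable.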

