Deep generative models have proven to be effective priors for solving a variety of image processing problems. However, learning realistic image priors with a large number of parameters requires large amounts of training data. It has recently been shown, with the so-called deep image prior (DIP), that a randomly initialized neural network can act as a good image prior without any learning. In this paper, we propose a deep generative model for light fields that is compact and requires no training data other than the light field itself. To demonstrate the potential of the proposed generative model, we develop a complete light field compression scheme with quantization-aware learning and entropy coding of the quantized weights. Experimental results show that the proposed method is highly competitive with state-of-the-art light field compression methods in terms of both PSNR and MS-SSIM.
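The core idea inherited from DIP can be illustrated with a minimal sketch: a small, randomly initialized network maps a fixed noise code to an output, and only its weights are optimized so the output matches the target signal itself, with no external training data. The network shape, sizes, learning rate, and function names below are illustrative assumptions, not the paper's architecture (which is a generative model tailored to light fields).

```python
import numpy as np

rng = np.random.default_rng(0)

def dip_fit(target, hidden=32, steps=500, lr=0.02):
    """Fit a tiny two-layer network to `target` by gradient descent.

    Hypothetical DIP-style sketch: the input code z is fixed random
    noise; only the weights W1, W2 are optimized against the target.
    """
    n = target.size
    z = rng.normal(size=n)                    # fixed random input code
    W1 = rng.normal(scale=0.1, size=(hidden, n))
    W2 = rng.normal(scale=0.1, size=(n, hidden))
    for _ in range(steps):
        h = np.tanh(W1 @ z)                   # hidden activations
        out = W2 @ h                          # reconstruction
        err = out - target                    # gradient of 0.5*||out - target||^2
        gW2 = np.outer(err, h)                # backprop through W2
        gh = W2.T @ err
        gW1 = np.outer(gh * (1.0 - h ** 2), z)  # backprop through tanh and W1
        W2 -= lr * gW2
        W1 -= lr * gW1
    return out, 0.5 * float(err @ err)

# Usage: "compress" a small random signal by fitting the network to it.
target = rng.normal(size=16)
out, loss = dip_fit(target)
```

In a compression setting such as the one described above, the learned weights would then be quantized and entropy coded; the reconstruction quality/rate trade-off comes from how few (and how coarsely quantized) weights suffice to reproduce the light field.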
DOI: http://dx.doi.org/10.1109/TIP.2022.3217374