On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks

Entropy (Basel)

Department of Computer Science, National University of Singapore, 15 Computing Dr., Singapore 117418, Singapore.

Published: November 2021

In recent years, neural network based image priors have been shown to be highly effective for linear inverse problems, often significantly outperforming conventional methods that are based on sparsity and related notions. While pre-trained generative models are perhaps the most common, it has additionally been shown that even untrained neural networks can serve as excellent priors in various imaging applications. In this paper, we seek to broaden the applicability and understanding of untrained neural network priors by investigating the interaction between architecture selection, measurement models (e.g., inpainting vs. denoising vs. compressive sensing), and signal types (e.g., smooth vs. erratic). We motivate the problem via statistical learning theory, and provide two practical algorithms for tuning architectural hyperparameters. Using experimental evaluations, we demonstrate that the optimal hyperparameters may vary significantly between tasks and can exhibit large performance gaps when tuned for the wrong task. In addition, we investigate which hyperparameters tend to be more important, and which are robust to deviations from the optimum.
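The untrained-prior recipe the abstract alludes to can be illustrated with a toy sketch (this is not the paper's method or its tuning algorithms): the network weights are the only unknowns, and they are fit by gradient descent so that the network's output is consistent with the linear measurements. Early stopping and the architecture itself (here, just the hidden width `k`) act as the regularizer. All names, sizes, and the two-layer dense architecture below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear inverse problem y = A x* (compressive sensing: m < n measurements).
n, m = 64, 32
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2.0 * np.pi * t)              # smooth ground-truth signal
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true

# Untrained two-layer network G_theta(z): theta = (W1, W2) are the only
# unknowns; z is a fixed random input. The hidden width k is one example of
# an architectural hyperparameter that would need to be selected per task.
k, d = 16, 8
z = rng.standard_normal(d)
W1 = 0.1 * rng.standard_normal((k, d))
W2 = 0.1 * rng.standard_normal((n, k))

def forward(W1, W2):
    h = np.maximum(W1 @ z, 0.0)  # ReLU hidden layer
    return W2 @ h, h             # network output x_hat and hidden activations

lr = 0.01
x0, _ = forward(W1, W2)
loss0 = 0.5 * np.sum((A @ x0 - y) ** 2)  # initial measurement misfit

for _ in range(5000):
    x_hat, h = forward(W1, W2)
    g_x = A.T @ (A @ x_hat - y)                  # grad of 0.5||A x - y||^2 wrt x
    g_h = W2.T @ g_x                             # backprop into hidden layer
    W2 -= lr * np.outer(g_x, h)                  # output-layer weight update
    W1 -= lr * np.outer(g_h * (W1 @ z > 0), z)   # ReLU-masked first-layer update

x_hat, _ = forward(W1, W2)
loss = 0.5 * np.sum((A @ x_hat - y) ** 2)  # final measurement misfit
```

Real untrained priors (e.g., convolutional decoders) rely on the architecture to bias the output toward natural-image structure; this dense toy only demonstrates the optimization loop, which is why tuning the architecture to the measurement model and signal type matters.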

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8623203
DOI: http://dx.doi.org/10.3390/e23111481
