In the context of variational regularization, it is well known that, under suitable differentiability assumptions, source conditions in the form of variational inequalities imply range conditions, while the converse implication holds only under an additional restriction on the operator. In this article, we prove the analogous result for polyconvex variational regularization. More precisely, we show that the variational inequality derived by the authors in 2017 implies that the derivative of the regularization functional must lie in the range of the dual-adjoint of the derivative of the operator. In addition, we show how to adapt the restriction on the operator in order to obtain the converse implication.
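
For orientation, the following sketch records the standard smooth/convex forms of these conditions (not the polyconvex variants treated in the article), for a forward operator F between Banach spaces with exact solution u† and a differentiable regularization functional R; D denotes the Bregman-type distance induced by R at u†:

```latex
% Range condition: the derivative of R at u^\dagger lies in the range
% of the dual-adjoint of the derivative of F:
\exists\, \omega \in Y^* : \quad R'(u^\dagger) = F'(u^\dagger)^* \omega .

% Source condition as a variational inequality:
% for some \beta \in [0,1), C \ge 0, and all admissible u,
\langle R'(u^\dagger),\, u^\dagger - u \rangle
  \le \beta\, D(u, u^\dagger) + C\, \lVert F(u) - F(u^\dagger) \rVert .
```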

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6136524
DOI: http://dx.doi.org/10.1080/01630563.2018.1467447

Publication Analysis

Top Keywords

variational regularization (8), converse implication (8), restriction operator (8), range condition (4), condition polyconvex (4), variational (4), polyconvex variational (4), regularization (4), regularization context (4), context variational (4)

Similar Publications

On the directional asymptotic approach in optimization theory.

Math Program

July 2024

Department of Mathematics and Computer Science, Philipps-Universität Marburg, 35032 Marburg, Germany.

As a starting point of our research, we show that, for a fixed order, each local minimizer of a rather general nonsmooth optimization problem in Euclidean spaces either is M-stationary in the classical sense (corresponding to stationarity of order 1), satisfies stationarity conditions in terms of a coderivative construction of the given order, or is asymptotically stationary with respect to a critical direction and the given order in a certain sense. By ruling out the latter case with a constraint qualification not stronger than directional metric subregularity, we end up with new necessary optimality conditions comprising a mixture of limiting variational tools of order 1 and of the given order. These abstract findings are carved out for the broad class of geometric constraints and visualized by examples from complementarity-constrained and nonlinear semidefinite optimization.
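
For context, classical M-stationarity (the order-1 case referred to above) for a problem of minimizing f over a closed set C can be stated with the limiting (Mordukhovich) subdifferential and the limiting normal cone; this is the textbook form, not the higher-order or directional conditions developed in the paper:

```latex
% M-stationarity of a point x* for  min f(x)  subject to  x in C,
% with \partial f the limiting subdifferential and N_C the limiting normal cone:
0 \in \partial f(x^*) + N_C(x^*).
```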

Identifying transitional states is crucial for understanding the protein conformational changes that underlie numerous biological processes. Markov state models (MSMs), built from molecular dynamics (MD) simulations, capture these dynamics through transitions among metastable conformational states and have been used successfully to study protein conformational changes. However, MSMs face challenges in identifying transition states, since they partition MD conformations into discrete metastable states (free energy minima) and therefore lack a description of the transition states located at the free energy barriers.
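
As a rough illustration of the MSM construction described above, the following minimal sketch estimates a row-stochastic transition matrix from a discretized trajectory; the toy state assignments, lag time, and function name are illustrative placeholders, not taken from the publication:

```python
import numpy as np

def estimate_transition_matrix(states, n_states, lag=1):
    """Count lagged transitions between discrete metastable states
    and row-normalize the counts into transition probabilities."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-lag], states[lag:]):
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Toy trajectory of MD frames assigned to three metastable states.
traj = np.array([0, 0, 1, 1, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1])
T = estimate_transition_matrix(traj, n_states=3, lag=1)
print(T)  # each visited state's row sums to 1
```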

The numerical solution of differential equations using machine learning-based approaches has gained significant popularity. Neural network-based discretization has emerged as a powerful tool for solving differential equations by parameterizing a class of trial functions with network weights. Various approaches along these lines, such as the deep Ritz method and physics-informed neural networks, have been developed for computing numerical solutions.
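
To make the two methods named above concrete, here is a standard-form sketch (not taken from this publication) of the losses they minimize for a Poisson problem −Δu = f on Ω with zero boundary values, where u_θ is the neural network trial function and λ weights a boundary penalty:

```latex
% Deep Ritz: minimize the variational (Dirichlet) energy of u_\theta
L_{\mathrm{Ritz}}(\theta) = \int_\Omega \Big( \tfrac{1}{2}\,\lvert \nabla u_\theta \rvert^2 - f\, u_\theta \Big)\, dx
  + \lambda \int_{\partial\Omega} u_\theta^2 \, ds

% Physics-informed neural network: minimize the squared strong-form residual
L_{\mathrm{PINN}}(\theta) = \int_\Omega \big( \Delta u_\theta + f \big)^2 \, dx
  + \lambda \int_{\partial\Omega} u_\theta^2 \, ds
```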

Data augmentation is a crucial regularization technique for deep neural networks, particularly in medical imaging tasks with limited data. Deep learning models are highly effective at linearizing features, which makes it possible to alter feature semantics by shifting latent-space representations, an approach known as semantic data augmentation (SDA). The SDA paradigm shifts features in a specified direction.
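
A minimal sketch of the feature-shifting idea behind SDA, assuming latent features are perturbed along directions drawn from their class-conditional covariance so that the shift stays semantically plausible; the function name, scaling factor, and toy data are illustrative, not from the publication:

```python
import numpy as np

def semantic_augment(features, labels, strength=0.5, rng=None):
    """Shift each latent feature along a random direction sampled from
    its class's feature covariance (a common form of semantic shifting)."""
    rng = np.random.default_rng() if rng is None else rng
    augmented = features.copy()
    for c in np.unique(labels):
        idx = labels == c
        cov = np.cov(features[idx].T) + 1e-6 * np.eye(features.shape[1])
        shifts = rng.multivariate_normal(
            np.zeros(features.shape[1]), strength * cov, size=int(idx.sum()))
        augmented[idx] += shifts
    return augmented

# Toy latent features: 8 samples, 4 dimensions, 2 classes.
feats = np.random.default_rng(0).normal(size=(8, 4))
labs = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(semantic_augment(feats, labs).shape)  # (8, 4)
```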

Dyadic Partition-Based Training Schemes for TV/TGV Denoising.

J Math Imaging Vis

October 2024

Department of Applied Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands.

Due to their ability to handle discontinuous images while having well-understood behavior, regularizations with total variation (TV) and total generalized variation (TGV) are among the best-known methods in image denoising. However, like other variational models that include a fidelity term, they depend crucially on the choice of their tuning parameters. A remedy is to choose these parameters automatically through multilevel approaches, for example by optimizing performance on noisy/clean image pairs.
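
As a crude stand-in for the training schemes studied in the paper, the following sketch selects a TV weight by brute-force search against a clean reference image; it relies on scikit-image's denoise_tv_chambolle and is not the dyadic partition-based method itself:

```python
import numpy as np
from skimage import data, util
from skimage.restoration import denoise_tv_chambolle

# Noisy/clean training pair (toy example built from a test image).
clean = util.img_as_float(data.camera())
noisy = clean + np.random.default_rng(0).normal(scale=0.1, size=clean.shape)

# Pick the TV weight that minimizes the MSE against the clean reference.
weights = np.linspace(0.02, 0.3, 15)
errors = [np.mean((denoise_tv_chambolle(noisy, weight=w) - clean) ** 2)
          for w in weights]
best = weights[int(np.argmin(errors))]
print(f"selected TV weight: {best:.3f}")
```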
