On Tractable Convex Relaxations of Standard Quadratic Optimization Problems under Sparsity Constraints.

J Optim Theory Appl

School of Mathematics, The University of Edinburgh, Peter Guthrie Tait Road, Edinburgh EH9 3FD, UK.

Published: January 2025

Standard quadratic optimization problems (StQPs) provide a versatile modelling tool in various applications. In this paper, we consider StQPs with a hard sparsity constraint, referred to as sparse StQPs. We focus on various tractable convex relaxations of sparse StQPs arising from a mixed-binary quadratic formulation, namely, the linear optimization relaxation given by the reformulation-linearization technique, the Shor relaxation, and the relaxation resulting from their combination. We establish several structural properties of these relaxations in relation to the corresponding relaxations of StQPs without any sparsity constraints, and pay particular attention to the rank-one feasible solutions retained by these relaxations. We then utilize these relations to establish several results about the quality of the lower bounds arising from different relaxations. We also present several conditions that ensure the exactness of each relaxation.
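For concreteness, the two problems discussed can be written in their standard form (a sketch using generic symbols Q, e, and ρ, which need not match the paper's exact notation):

```latex
\begin{align*}
\text{(StQP)} \qquad & \min_{x \in \mathbb{R}^n} \; x^\top Q x
  \quad \text{s.t.} \quad e^\top x = 1, \;\; x \ge 0, \\
\text{(sparse StQP)} \qquad & \min_{x \in \mathbb{R}^n} \; x^\top Q x
  \quad \text{s.t.} \quad e^\top x = 1, \;\; x \ge 0, \;\; \|x\|_0 \le \rho,
\end{align*}
```

where e is the all-ones vector and ‖x‖₀ counts the nonzero components of x, so the hard sparsity constraint permits at most ρ nonzero entries.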


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11757943 (PMC)
http://dx.doi.org/10.1007/s10957-024-02593-1 (DOI)

Publication Analysis

Top Keywords: tractable convex (8); convex relaxations (8); standard quadratic (8); quadratic optimization (8); optimization problems (8); sparsity constraints (8); sparse StQPs (8); relaxations (6); StQPs (5); relaxations standard (4)

Similar Publications


DC algorithm for estimation of sparse Gaussian graphical models.

PLoS One

December 2024

Institute of Systems and Information Engineering, University of Tsukuba, Tsukuba, Ibaraki, Japan.

Sparse estimation of a Gaussian graphical model (GGM) is an important technique for making the relationships between observed variables more interpretable. Various methods have been proposed for sparse GGM estimation, including the graphical lasso, which uses an ℓ1-norm regularization term, and other methods that use nonconvex regularization terms. Most of these methods approximate the ℓ0 (pseudo) norm by more tractable functions; however, to obtain more accurate solutions, it is preferable to use the ℓ0 norm directly to count the number of nonzero elements.
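The ℓ1 versus ℓ0 distinction drawn above can be sketched with the standard penalized log-likelihood formulations (generic symbols S and λ, not necessarily the paper's notation):

```latex
\begin{align*}
(\ell_1,\ \text{graphical lasso}) \qquad
  & \max_{\Theta \succ 0} \ \log\det\Theta - \operatorname{tr}(S\Theta)
    - \lambda \|\Theta\|_1, \\
(\ell_0) \qquad
  & \max_{\Theta \succ 0} \ \log\det\Theta - \operatorname{tr}(S\Theta)
    - \lambda \|\Theta\|_0,
\end{align*}
```

where S is the sample covariance matrix, Θ the precision matrix, and ‖Θ‖₀ the number of nonzero entries; zeros in Θ correspond to conditional independencies in the GGM.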


The topic of pricing non-convexities in power markets has been debated vividly in the literature and among practitioners for the past twenty years. The debate has focused on indivisibilities in short-term auctions, the computational tractability of some pricing proposals, and the economic analysis of their behavior. In this paper, we analyse a source of non-convexities that is not discussed as broadly: the indivisibilities in investment decisions.


Using Limited Trial Evidence to Credibly Choose Treatment Dosage when Efficacy and Adverse Effects Weakly Increase with Dose.

Epidemiology

January 2025

From the Department of Economics and Institute for Policy Research, Northwestern University, Evanston, IL.

It has become standard in medical treatment to base dosage on evidence in randomized trials. Yet it has been rare to study how outcomes vary with dosage. In trials to obtain drug approval, the norm has been to compare some dose of a new drug with an established therapy or placebo.


Background: The first step in computed tomography (CT) reconstruction is to estimate the attenuation pathlength. Usually, this is done with a logarithm transformation, which is the direct solution to the Beer-Lambert law. At low signals, however, the logarithm estimator is biased.
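The low-signal bias mentioned here follows from Jensen's inequality: −ln is convex, so E[−ln N] ≥ −ln E[N]. A minimal Monte Carlo sketch of the effect (the photon counts, Poisson noise model, and zero-count handling are assumptions for illustration, not the paper's actual estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
I0 = 10.0                  # assumed incident photon count (air scan)
true_pathlength = 1.0      # assumed true attenuation pathlength (unitless)
lam = I0 * np.exp(-true_pathlength)  # expected transmitted count (Beer-Lambert)

# Simulate low-signal transmission measurements with Poisson noise.
counts = rng.poisson(lam, size=200_000)
counts = counts[counts > 0]  # log is undefined at zero counts; drop them here

# Per-measurement logarithm estimator of the pathlength.
p_hat = -np.log(counts / I0)

# At this signal level the mean estimate overshoots the true value (~1.1 vs 1.0).
print(f"true = {true_pathlength:.3f}, mean estimate = {p_hat.mean():.3f}")
```

Raising the incident count I0 shrinks the bias, which is why it only matters in the low-signal regime the abstract describes.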

