In this paper, we suggest a new framework for analyzing primal subgradient methods for nonsmooth convex optimization problems. We show that the classical step-size rules, based on normalization of the subgradient or on knowledge of the optimal value of the objective function, need corrections when they are applied to optimization problems with constraints. Their proper modifications allow a significant acceleration of these schemes when the objective function has favorable properties (smoothness, strong convexity).
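For concreteness, the two classical step-size rules discussed here can be sketched in a few lines. This is a minimal unconstrained sketch of the standard method, not the corrected constrained variants proposed in the paper; the function names and the scaling constant R are illustrative:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, f_star=None, n_steps=1000, R=1.0):
    """Classical subgradient method for unconstrained min f(x).

    Step-size rules:
      - Polyak rule   h_k = (f(x_k) - f*) / ||g_k||^2    (if f* is known)
      - normalized    h_k = R / (sqrt(k + 1) * ||g_k||)  (otherwise)
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(n_steps):
        g = np.asarray(subgrad(x), dtype=float)
        gn = np.linalg.norm(g)
        if gn == 0.0:                 # zero subgradient: x is optimal
            break
        if f_star is not None:        # Polyak step size
            h = (f(x) - f_star) / gn ** 2
        else:                         # normalized step size
            h = R / (np.sqrt(k + 1) * gn)
        x = x - h * g
        fx = f(x)
        if fx < best_f:               # track the best point seen so far
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```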
We introduce a bi-level optimization (BiOPT) framework for minimizing the sum of two convex functions, where one of them is smooth enough. The BiOPT framework offers three levels of freedom: (i) choosing the order p of the proximal term; (ii) designing an inexact pth-order proximal-point method at the upper level; (iii) solving the auxiliary problem with a non-Euclidean method at the lower level. Here, we regularize the objective by a pth-order proximal term (for an arbitrary integer p) and then develop a generic inexact high-order proximal-point scheme and its acceleration using the standard estimating-sequence technique at the upper level.
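Schematically, the pth-order proximal-point step regularizes f by a power of the distance to the current iterate; the normalization H/(p+1) below is one common convention and may differ from the paper's exact constant:

```latex
x_{k+1} \approx \operatorname*{argmin}_{x}
  \Big\{ f(x) + \tfrac{H}{p+1}\, \|x - x_k\|^{p+1} \Big\},
  \qquad p \ge 1,\; H > 0 .
```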
In this paper, we propose the first second-order scheme based on arbitrary non-Euclidean norms, incorporated through Bregman distances. These are introduced directly in the Newton iterate, with a regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite convex optimization problem, we establish global rates of convergence both in terms of the functional residual and in terms of the norm of subgradients.
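A minimal sketch of the Euclidean special case (Bregman distance ½‖·‖², so the regularizer is a multiple of the identity); the scaling sqrt(L·‖g‖) for the regularization parameter is an assumption on the constant, and the names are illustrative:

```python
import numpy as np

def gradient_regularized_newton(grad, hess, x0, L=1.0, n_steps=50, tol=1e-10):
    """Newton's method with gradient regularization, Euclidean case.

    The Newton system is shifted by lam * I, where lam is proportional
    to sqrt(||grad(x)||); here lam = sqrt(L * ||g||) with a user-supplied
    curvature-type constant L.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn < tol:
            break
        lam = np.sqrt(L * gn)                # regularization ~ sqrt(||g||)
        A = hess(x) + lam * np.eye(x.size)   # shifted Newton system
        x = x - np.linalg.solve(A, g)
    return x
```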
In this paper, we present a new ellipsoid-type algorithm for solving nonsmooth problems with convex structure. Examples of such problems include nonsmooth convex minimization problems, convex-concave saddle-point problems, and variational inequalities with monotone operators. Our algorithm can be seen as a combination of the standard Subgradient and Ellipsoid methods.
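For reference, the standard central-cut Ellipsoid update that the proposed method combines with subgradient steps looks as follows (a textbook sketch, assuming dimension n ≥ 2; names are illustrative):

```python
import numpy as np

def ellipsoid_step(c, Q, g):
    """One central-cut update of the standard Ellipsoid method (n >= 2).

    E = {x : (x - c)^T Q^{-1} (x - c) <= 1}; g separates c from the
    solution set, which lies in {x : g^T (x - c) <= 0}.  Returns the
    minimum-volume ellipsoid containing that half of E.
    """
    n = c.size
    d = (Q @ g) / np.sqrt(g @ Q @ g)      # normalized cut direction
    c_new = c - d / (n + 1)
    Q_new = (n ** 2 / (n ** 2 - 1.0)) * (Q - (2.0 / (n + 1)) * np.outer(d, d))
    return c_new, Q_new
```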
In this paper, we study the local convergence of high-order Tensor Methods for solving convex optimization problems with composite objective. We justify local superlinear convergence under the assumption of uniform convexity of the smooth component, which is required to have a Lipschitz-continuous high-order derivative. Convergence both in function value and in the norm of the minimal subgradient is established.
J Optim Theory Appl, March 2021
In this paper, we study the iteration complexity of the cubic regularization of Newton's method for solving composite minimization problems with a uniformly convex objective. We introduce the notion of a second-order condition number of a certain degree and justify a linear rate of convergence in the nondegenerate case for the method with an adaptive estimate of the regularization parameter. The algorithm automatically achieves the best possible global complexity bound among the problem classes of uniformly convex objective functions with a Hölder-continuous Hessian of the smooth part of the objective.
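A simplified sketch for a smooth convex objective (the composite and uniformly convex structure of the paper is omitted); the doubling/halving rule for the regularization parameter is a standard adaptive strategy and may differ from the paper's exact rule:

```python
import numpy as np

def cubic_step(g, H, M):
    """Minimize g^T h + 0.5 h^T H h + (M / 6) ||h||^3 for convex H.

    The stationarity condition gives h(r) = -(H + (M r / 2) I)^{-1} g
    with r = ||h(r)||; the fixed point is located by bisection on r.
    """
    I = np.eye(g.size)
    hr = lambda r: -np.linalg.solve(H + 0.5 * M * r * I, g)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(hr(hi)) > hi:           # bracket the fixed point
        hi *= 2.0
    for _ in range(60):                          # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(hr(mid)) > mid else (lo, mid)
    return hr(hi)

def adaptive_cubic_newton(f, grad, hess, x0, M0=1.0, n_steps=50):
    """Cubic regularization of Newton's method with adaptive M."""
    x, M = np.asarray(x0, dtype=float), M0
    for _ in range(n_steps):
        g, H = grad(x), hess(x)
        h = cubic_step(g, H, M)
        # Increase M until the cubic model upper-bounds f at the trial point.
        while f(x + h) > (f(x) + g @ h + 0.5 * h @ (H @ h)
                          + (M / 6.0) * np.linalg.norm(h) ** 3):
            M *= 2.0
            h = cubic_step(g, H, M)
        x, M = x + h, max(M / 2.0, 1e-8)         # accept; relax M for next step
    return x
```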
J Optim Theory Appl, January 2021
We present a new theoretical analysis of the local superlinear convergence of classical quasi-Newton methods from the convex Broyden class. As a result, we obtain a significant improvement in the currently known estimates of the convergence rates for these methods. In particular, we show that the corresponding rate of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method depends only on the product of the dimensionality of the problem and the logarithm of its condition number.
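For reference, the BFGS update of the Hessian approximation, the member of the convex Broyden class to which the quoted rate applies (a textbook formula; names are illustrative):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of the Hessian approximation B.

    s = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k); requires
    the curvature condition s^T y > 0, which holds for strongly convex f.
    """
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```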
In this paper, we develop new tensor methods for unconstrained convex optimization, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial. We analyze the simplest scheme, based on minimization of a regularized local model of the objective function, and its accelerated version, obtained in the framework of estimating sequences. Their rates of convergence are compared with the worst-case lower complexity bounds for the corresponding problem classes.
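Schematically, the auxiliary problem at iteration k minimizes the pth-order Taylor polynomial of f augmented by a high-order regularizer; the factorial normalization of the last term below is one common convention and may not match the paper's exact constant:

```latex
\Omega_{x,M}(y) \;=\; \sum_{i=0}^{p} \frac{1}{i!}\, D^i f(x)[y-x]^{i}
  \;+\; \frac{M}{(p+1)!}\, \|y-x\|^{p+1},
  \qquad x_{k+1} \in \operatorname*{argmin}_{y}\, \Omega_{x_k,M}(y).
```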
In this paper, formulas are derived for the analytic center of the solution set of linear matrix inequalities (LMIs) defining passive transfer functions. The algebraic Riccati equations that are usually associated with such systems are related to boundary points of the convex set defined by the solution set of the LMI. It is shown that the analytic center is described by closely related matrix equations, and their properties are analyzed for continuous- and discrete-time systems.
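For orientation, the analytic center of the convex set cut out by an LMI with affine matrix-valued slack W(X) ⪰ 0 is the maximizer of the log-determinant barrier (W here is generic notation, not necessarily the paper's):

```latex
X_c \;=\; \operatorname*{argmax}_{X} \;\log\det W(X)
  \qquad \text{subject to} \qquad W(X) \succ 0 .
```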
In this paper, we study the derivatives of powers of the Euclidean norm. We prove their Hölder continuity and establish explicit expressions for the corresponding constants. We show that these constants are optimal for odd derivatives and at most two times suboptimal for the even ones.
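For concreteness, the first two derivatives of f(x) = ‖x‖^p, with ‖·‖ the Euclidean norm, are given by a standard computation (valid away from the origin):

```latex
f(x) = \|x\|^{p}: \qquad
\nabla f(x) = p\,\|x\|^{p-2}\, x, \qquad
\nabla^{2} f(x) = p\,\|x\|^{p-2} I + p\,(p-2)\,\|x\|^{p-4}\, x x^{\top}
\qquad (x \neq 0).
```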