A second-order accelerated neurodynamic approach for distributed convex optimization.

Neural Netw

Department of Applied Mathematics, University of Waterloo, Waterloo, N2L3G1, Canada.

Published: February 2022

Based on the theory of inertial systems, a second-order accelerated neurodynamic approach is designed to solve distributed convex optimization problems with inequality and set constraints. Most existing approaches to distributed convex optimization are first-order, and the convergence rate of their state solutions is usually hard to analyze. Owing to the control design for acceleration, second-order neurodynamic approaches can often achieve a faster convergence rate. Moreover, existing second-order approaches are mostly designed for unconstrained distributed convex optimization and are not suitable for constrained problems. It is shown that the state solution of the designed neurodynamic approach converges to the optimal solution of the considered distributed convex optimization problem, and that an error function characterizing the performance of the approach exhibits superquadratic convergence. Several numerical examples demonstrate the effectiveness of the presented second-order accelerated neurodynamic approach.
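The abstract does not give the paper's dynamics, but the core idea of an inertial (second-order) neurodynamic method can be illustrated on a simple unconstrained quadratic: the state obeys x'' + γx' + ∇f(x) = 0, so the damped "velocity" term accelerates convergence relative to plain gradient flow. The sketch below is a minimal hypothetical discretization (semi-implicit Euler) of such a system, not the constrained distributed scheme of the paper; the matrix A, vector b, damping γ, and step h are all illustrative choices.

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T A x - b^T x, a strongly convex quadratic
# (illustrative problem; not the constrained distributed setting of the paper).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)  # unique minimizer, for checking convergence

def grad(x):
    return A @ x - b

# Inertial second-order dynamics: x'' + gamma * x' + grad f(x) = 0,
# discretized with a semi-implicit Euler step (update v first, then x).
gamma, h = 2.0, 0.05
x = np.zeros(2)
v = np.zeros(2)
for _ in range(2000):
    v += h * (-gamma * v - grad(x))
    x += h * v

print(np.linalg.norm(x - x_star))  # distance to the minimizer after integration
```

The fixed point of the iteration is exactly (x, v) = (x*, 0), since v = 0 forces grad(x) = 0; the damping coefficient γ governs how quickly the oscillations of the inertial trajectory die out.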

DOI: http://dx.doi.org/10.1016/j.neunet.2021.11.013

Publication Analysis

Top Keywords: distributed convex (24), convex optimization (24), neurodynamic approach (20), second-order accelerated (12), accelerated neurodynamic (12), optimization problems (12), designed solve (8), convergence rate (8), state solution (8), designed neurodynamic (8)
