Accelerating gradient descent and Adam via fractional gradients.

Neural Networks

Division of Applied Mathematics, Brown University, Providence, RI 02912, USA; School of Engineering, Brown University, Providence, RI 02912, USA.

Published: April 2023

We propose a class of novel fractional-order optimization algorithms. We define a fractional-order gradient via Caputo fractional derivatives that generalizes the integer-order gradient. We refer to it as the Caputo fractional-based gradient and develop an efficient implementation to compute it. A general class of fractional-order optimization methods is then obtained by replacing integer-order gradients with Caputo fractional-based gradients. As concrete algorithms, we consider gradient descent (GD) and Adam, and extend them to the Caputo fractional GD (CfGD) and the Caputo fractional Adam (CfAdam). We demonstrate the superiority of CfGD and CfAdam on several large-scale optimization problems that arise in scientific machine learning applications, such as an ill-conditioned least-squares problem on real-world data and the training of neural networks with non-convex objective functions. Numerical examples show that both CfGD and CfAdam result in acceleration over GD and Adam, respectively. We also derive error bounds of CfGD for quadratic functions, which further indicate that CfGD could mitigate the dependence of the convergence rate on the condition number and result in significant acceleration over GD.
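For reference, the standard Caputo fractional derivative of order $\alpha \in (0,1)$ with lower terminal $c$ is

$${}^{C}D_{c}^{\alpha} f(x) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{c}^{x} (x - t)^{-\alpha}\, f'(t)\, dt,$$

and a Caputo fractional-based gradient applies such a derivative coordinate-wise in place of ordinary partial derivatives. The sketch below only illustrates the general recipe described in the abstract, replacing the integer-order gradient in a GD update with a fractional one via a naive per-coordinate quadrature; the function names (`caputo_fractional_grad`, `cfgd_step`), the choice of lower terminal, and the quadrature are illustrative assumptions, not the paper's efficient implementation.

```python
import numpy as np
from scipy.special import gamma

def caputo_fractional_grad(f, x, alpha=0.9, c_shift=1.0, n_quad=64, h=1e-5):
    """Per-coordinate Caputo fractional gradient of order alpha in (0, 1).

    Illustrative midpoint-rule quadrature of the standard Caputo integral with
    lower terminal c_i = x_i - c_shift (an assumption, not the paper's choice).
    """
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        c = x[i] - c_shift
        # Midpoint nodes on (c, x_i); the kernel (x_i - t)^(-alpha) stays integrable for alpha < 1.
        t = c + (np.arange(n_quad) + 0.5) * (x[i] - c) / n_quad
        w = (x[i] - c) / n_quad                      # equal midpoint weights
        kernel = (x[i] - t) ** (-alpha)
        # Central-difference approximation of the partial derivative along coordinate i at each node.
        df = np.empty(n_quad)
        for k, tk in enumerate(t):
            xp, xm = x.copy(), x.copy()
            xp[i], xm[i] = tk + h, tk - h
            df[k] = (f(xp) - f(xm)) / (2 * h)
        grad[i] = np.sum(kernel * df) * w / gamma(1 - alpha)
    return grad

def cfgd_step(f, x, lr=1e-3, alpha=0.9):
    """One gradient-descent step with the fractional gradient in place of the usual gradient."""
    return x - lr * caputo_fractional_grad(f, x, alpha=alpha)

if __name__ == "__main__":
    # Example: a poorly scaled quadratic, the kind of ill-conditioned problem the abstract mentions.
    A = np.diag([1.0, 100.0])
    f = lambda x: 0.5 * x @ A @ x
    x = np.array([1.0, 1.0])
    for _ in range(200):
        x = cfgd_step(f, x)
    print(x)
```

As $\alpha \to 1^{-}$ the Caputo derivative of a smooth function recovers the ordinary first derivative, so this fractional update reduces to standard GD in that limit; the same substitution applied to the first-moment estimate of Adam gives a CfAdam-style update.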

DOI: http://dx.doi.org/10.1016/j.neunet.2023.01.002
