Weight learning forms a basis of machine learning, and numerous algorithms have been developed to date. Most of them were either formulated in a stochastic framework or aimed at minimizing a loss or regret function, and asymptotic convergence of the learned weights, which is vital for accurate output prediction, has seldom been guaranteed for online applications. Since linear regression is the most fundamental model in machine learning, we focus on it in this paper. Targeting online applications, a deterministic analysis method is developed based on LaSalle's invariance principle. Convergence conditions are derived for both first-order and second-order learning algorithms, without resorting to any stochastic argument. Moreover, the deterministic approach makes it straightforward to analyze the influence of noise: adaptive hyperparameters are derived within this framework, and their tuning rules are disclosed for compensating measurement noise. Comparison with four of the most popular algorithms validates that the proposed approach has a higher learning capability and is quite promising for enhancing weight-learning performance.
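
The abstract does not spell out the learning algorithms themselves. As a rough illustration of the kind of online weight-learning updates it refers to, the Python sketch below implements a standard first-order gradient-descent (LMS-style) update and a second-order (recursive-least-squares-style) update for linear regression on noisy measurements. The function names, the learning rate eta, the forgetting factor lam, and the initialization of P are illustrative assumptions and are not taken from the paper; the paper's adaptive hyperparameters and noise-compensation rules are not reproduced here.

import numpy as np

def first_order_update(w, x, y, eta=0.01):
    # First-order (gradient-descent / LMS-style) online weight update.
    # Illustrative sketch: eta is an assumed fixed learning rate.
    e = y - w @ x           # prediction error on the current sample
    return w + eta * e * x  # step along the negative gradient of 0.5*e**2

def second_order_update(w, P, x, y, lam=1.0):
    # Second-order (recursive-least-squares-style) online update.
    # P approximates the inverse input-correlation matrix; lam is a
    # forgetting factor. Both are illustrative assumptions.
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = y - w @ x                    # a-priori prediction error
    w = w + k * e                    # weight correction
    P = (P - np.outer(k, Px)) / lam  # update inverse-correlation estimate
    return w, P

# Minimal usage example on synthetic noisy data
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
w1 = np.zeros(3)
w2 = np.zeros(3)
P = np.eye(3) * 100.0
for _ in range(2000):
    x = rng.standard_normal(3)
    y = w_true @ x + 0.1 * rng.standard_normal()  # noisy measurement
    w1 = first_order_update(w1, x, y)
    w2, P = second_order_update(w2, P, x, y)
print("first-order estimate :", np.round(w1, 3))
print("second-order estimate:", np.round(w2, 3))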

Source: http://dx.doi.org/10.1109/TPAMI.2024.3399312

