Gradient Learning (GL), which aims to estimate the gradient of a target function, has attracted much attention in variable selection problems due to its mild structural requirements and wide applicability. Despite rapid progress, most existing GL methods are based on the empirical risk minimization (ERM) principle, whose performance may degrade in complex data environments, e.g., under non-Gaussian noise. To alleviate this sensitivity, we propose a new GL model built on the tilted ERM criterion and establish its theoretical support from the function approximation viewpoint; in particular, an operator approximation technique plays a crucial role in our analysis. To solve the proposed learning objective, a gradient descent method is developed, and its convergence analysis is provided. Finally, simulated experimental results validate the effectiveness of our approach when the input variables are correlated.
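The abstract does not spell out the paper's exact objective, but the core idea can be sketched: replace the plain average of losses in a gradient-learning objective with the tilted risk (1/t)·log(mean(exp(t·loss))), then minimize by gradient descent. The snippet below is a minimal illustration under assumed choices (a Gaussian pair-weighting kernel, a constant-gradient model over pairwise first-order differences, and the kernel bandwidth `sigma`, tilt `t`, and step size `lr` as hypothetical parameters), not the authors' implementation.

```python
import numpy as np

def tilted_risk(losses, t):
    """Tilted ERM objective: (1/t) * log(mean(exp(t * losses))).
    t > 0 magnifies large losses; t < 0 suppresses them (outlier-robust)."""
    m = np.max(t * losses)  # log-sum-exp stabilization
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

def estimate_gradient(X, y, t=-1.0, sigma=0.5, lr=0.1, iters=2000):
    """Estimate a constant gradient vector g by minimizing the tilted risk
    of weighted pairwise residuals  y_j - y_i - g . (x_j - x_i),
    the first-order Taylor fit used in gradient learning."""
    n, d = X.shape
    diffs = X[:, None, :] - X[None, :, :]              # (n, n, d) pair differences
    w = np.exp(-np.sum(diffs**2, axis=2) / (2 * sigma**2))  # locality weights
    dy = y[:, None] - y[None, :]                       # (n, n) response differences
    g = np.zeros(d)
    for _ in range(iters):
        r = dy - diffs @ g                             # pairwise residuals
        loss = w * r**2
        # gradient of the tilted risk: softmax(t * loss)-weighted loss gradients
        p = np.exp(t * loss - np.max(t * loss))
        p /= p.sum()
        grad = -2 * np.sum((p * w * r)[..., None] * diffs, axis=(0, 1))
        g -= lr * grad
    return g
```

With t → 0 the tilted risk reduces to ordinary ERM, so the same routine recovers the classical GL estimate; a negative t downweights pairs with large residuals, which is the robustness mechanism the abstract targets for non-Gaussian noise.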

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9320015
DOI: http://dx.doi.org/10.3390/e24070956
