Generalized robust loss functions for machine learning.

Neural Networks

School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190, China; Research Center on Fictitious Economy and Data Science, Chinese Academy of Sciences, Beijing 100190, China; Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing 100190, China; MOE Social Science Laboratory of Digital Economic Forecasts and Policy Simulation at UCAS, Beijing 100190, China.

Published: March 2024

The loss function is a critical component of machine learning. Several robust loss functions have been proposed to mitigate the adverse effects of noise, but they still face many challenges. First, there is currently no unified framework for building robust loss functions in machine learning. Second, most of them focus only on noisy points and pay little attention to normal points. Third, the resulting performance gains are limited. To this end, we put forward a general framework of robust loss functions for machine learning (RML) with rigorous theoretical analyses, which can smoothly and adaptively flatten any unbounded loss function and applies to various machine learning problems. In RML, an unbounded loss function serves as the target to be flattened. A scale parameter is used to limit the maximum loss value that noise points can incur, while a shape parameter controls both the compactness and the growth rate of the flattened loss function. This framework is then employed to flatten the Hinge loss function and the Square loss function. On this basis, we build two robust kernel classifiers, FHSVM and FLSSVM, which can distinguish different types of data. The stochastic variance reduced gradient (SVRG) approach is used to optimize FHSVM and FLSSVM. Extensive experiments demonstrate their superiority, with both consistently occupying the top two positions among all evaluated methods, achieving an average accuracy of 81.07% (with an F-score of 73.25%) for FHSVM and 81.54% (with an F-score of 75.71%) for FLSSVM.

Source: http://dx.doi.org/10.1016/j.neunet.2023.12.013
