In safety-critical engineering applications, such as robust prediction under adversarial noise, it is necessary to quantify the uncertainty of neural network predictions. Interval neural networks (INNs) are effective models for uncertainty quantification: for each input, they produce an interval of predictions rather than a single value. This article formulates the training of an INN as a chance-constrained optimization problem whose optimal solution is an INN that yields the tightest prediction interval at a required confidence level. Because the chance-constrained problem is intractable, a sample-based continuous approximation is used to obtain approximate solutions. We prove uniform convergence of the approximation, showing that its solutions consistently recover the optimal INN of the original problem. Additionally, we investigate the reliability of the approximation under finite samples, deriving a bound on the probability of constraint violation. Through a numerical example and an application case study of anomaly detection in wind power data, we evaluate the proposed INN against existing approaches, including Bayesian neural networks, and show that it significantly improves the performance of INNs for regression and unsupervised anomaly detection.
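A minimal sketch of the formulation described above (the notation is assumed for illustration and is not taken from the paper): let $f^{L}_{\theta}$ and $f^{U}_{\theta}$ be the lower and upper outputs of an INN with parameters $\theta$, and let $\varepsilon$ be the permitted violation probability. A chance-constrained training problem of the kind the abstract describes could then be written as

$$
\begin{aligned}
\min_{\theta}\quad & \mathbb{E}_{x}\left[\, f^{U}_{\theta}(x) - f^{L}_{\theta}(x) \,\right] \\
\text{s.t.}\quad & \mathbb{P}\left(\, f^{L}_{\theta}(x) \le y \le f^{U}_{\theta}(x) \,\right) \ge 1 - \varepsilon ,
\end{aligned}
$$

i.e., minimize the expected interval width subject to the interval covering the target with probability at least $1 - \varepsilon$. The indicator inside the probability is discontinuous, which is what motivates a sample-based continuous approximation. The sketch below illustrates one such approximation on toy data, replacing the indicator with a product of sigmoids and penalizing empirical coverage that falls below $1 - \varepsilon$; the model, surrogate, and optimizer are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a sample-based continuous approximation to
# chance-constrained interval training; not the paper's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + heteroscedastic noise.
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x) + 0.1 * (1 + np.abs(x)) * rng.standard_normal(x.shape)

# A tiny interval "network": affine lower bound plus a nonnegative width.
# Real INNs share hidden layers; an affine model keeps the sketch short.
params = rng.standard_normal(4) * 0.1  # [a_l, b_l, a_w, b_w]

def bounds(p, x):
    lower = p[0] * x + p[1]
    width = np.logaddexp(0.0, p[2] * x + p[3])  # softplus keeps u >= l
    return lower, lower + width

def loss(p, x, y, eps=0.1, tau=20.0, lam=10.0):
    l, u = bounds(p, x)
    # Smooth surrogate for the indicator 1{l <= y <= u}: product of sigmoids.
    inside = 1.0 / (1.0 + np.exp(-tau * (y - l)))
    inside *= 1.0 / (1.0 + np.exp(-tau * (u - y)))
    coverage = inside.mean()   # sample-based estimate of P(y in [l, u])
    width = (u - l).mean()     # interval-tightness objective
    # Penalize coverage shortfall below the required level 1 - eps.
    return width + lam * max(0.0, (1 - eps) - coverage) ** 2

# Crude random-search optimizer, just to make the sketch executable.
best, best_loss = params, loss(params, x, y)
for _ in range(5000):
    cand = best + 0.05 * rng.standard_normal(4)
    cl = loss(cand, x, y)
    if cl < best_loss:
        best, best_loss = cand, cl

l, u = bounds(best, x)
print(f"empirical coverage: {((l <= y) & (y <= u)).mean():.3f}")
print(f"mean interval width: {(u - l).mean():.3f}")
```

On this toy problem the penalty should drive empirical coverage toward the 0.9 target while keeping intervals narrow; the paper's contribution is proving that such approximations converge uniformly to the optimum of the original chance-constrained problem and bounding the violation probability under finite samples.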
DOI: http://dx.doi.org/10.1109/TNNLS.2024.3409379