Information bottleneck theory of high-dimensional regression: relevancy, efficiency and optimality.

Adv Neural Inf Process Syst

Initiative for the Theoretical Sciences, The Graduate Center, CUNY.

Published: December 2022

Avoiding overfitting is a central challenge in machine learning, yet many large neural networks readily achieve zero training loss. This puzzling contradiction necessitates new approaches to the study of overfitting. Here we quantify overfitting via residual information, defined as the bits in fitted models that encode noise in training data. Information-efficient learning algorithms minimize residual information while maximizing the relevant bits, which are predictive of the unknown generative models. We solve this optimization to obtain the information content of optimal algorithms for a linear regression problem and compare it to that of randomized ridge regression. Our results demonstrate the fundamental trade-off between residual and relevant information and characterize the relative information efficiency of randomized regression with respect to optimal algorithms. Finally, using results from random matrix theory, we reveal the information complexity of learning a linear map in high dimensions and unveil information-theoretic analogs of double and multiple descent phenomena.
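To make the abstract's objects concrete: with training data D drawn from a generative model θ and fitted weights W, one natural reading (our gloss; the paper's own notation may differ) is that the relevant information is I(W; θ) while the residual information is I(W; D | θ), the bits the fit retains about the data beyond what the generative model explains, and information-efficient algorithms minimize the latter for a given level of the former. The sketch below is a hypothetical illustration, not code from the paper: it shows the classical double descent whose information-theoretic analog the paper studies, with the test error of weakly regularized ridge regression peaking near the interpolation threshold n = d and falling again as n/d grows.

# Hypothetical sketch, not code from the paper: classical double descent in
# weakly regularized ridge regression. Test error peaks near the interpolation
# threshold n = d and decreases again as n grows past d.
import numpy as np

rng = np.random.default_rng(0)
d = 100                                        # input dimension
sigma = 0.5                                    # label-noise standard deviation
lam = 1e-6                                     # near-zero ridge penalty (min-norm limit)
theta = rng.standard_normal(d) / np.sqrt(d)    # ground-truth linear map, norm ~ 1

for n in (25, 50, 75, 100, 125, 200, 400):     # training-set sizes
    X = rng.standard_normal((n, d))            # Gaussian design matrix
    y = X @ theta + sigma * rng.standard_normal(n)
    # ridge estimate: w = (X^T X + lam I)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    # expected error on a fresh Gaussian input: ||w - theta||^2 + sigma^2
    test_err = np.sum((w - theta) ** 2) + sigma**2
    print(f"n/d = {n / d:4.2f}   test error = {test_err:8.3f}")

Running this prints a test error that rises toward n/d = 1 and then decreases. The near-zero ridge penalty stands in for the minimum-norm interpolant; the "randomized" in the paper's randomized ridge regression signals a stochastic estimator (deterministic continuous estimators can carry unbounded mutual information about the data), which this deterministic sketch omits.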

Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10275337

Publication Analysis

Top Keywords (keyword: frequency)

optimal algorithms: 8
bottleneck theory: 4
theory high-dimensional: 4
regression: 4
high-dimensional regression: 4
regression relevancy: 4
relevancy efficiency: 4
efficiency optimality: 4
optimality avoiding: 4
avoiding overfitting: 4

