Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.

IEEE Trans Neural Netw

Neural Networks Research Centre, Helsinki University of Technology, FI-02015 HUT, Finland.

Published: July 2004

Bits-back coding, first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993, provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. It allows the cost function of the variational Bayesian method known as ensemble learning to be interpreted as a code length, in addition to the Bayesian view of the cost as the misfit of the posterior approximation and (through its negative) a lower bound on the log model evidence. Combining the two viewpoints yields useful insights into the learning process and into the roles of different parts of the model. In this paper, the problem of variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation offers new perspectives on many parts of the problem, such as model comparison and pruning, and helps explain many phenomena that occur during learning.
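
To make the two readings concrete, here is a sketch of the standard identity behind them, in notation assumed here rather than taken from the abstract: q(θ) is the approximate posterior, p(θ) the prior, p(X|θ) the likelihood, and H(q) the entropy of q:

\[
\mathcal{C}(q)
\;=\; \mathbb{E}_{q(\theta)}\!\left[\log \frac{q(\theta)}{p(X,\theta)}\right]
\;=\; D_{\mathrm{KL}}\!\big(q(\theta)\,\|\,p(\theta \mid X)\big) \;-\; \log p(X)
\;=\; \mathbb{E}_{q}\!\big[-\log p(\theta)\big] \;+\; \mathbb{E}_{q}\!\big[-\log p(X \mid \theta)\big] \;-\; H(q).
\]

The middle form is the Bayesian reading: the cost is the misfit (KL divergence) of the posterior approximation plus the negative log evidence, so \(-\mathcal{C}(q)\) lower-bounds \(\log p(X)\). The last form is the bits-back reading: a sender who draws θ from q(θ) using auxiliary random bits, encodes θ under the prior, encodes the data under the likelihood, and then lets the receiver recover the H(q) auxiliary bits pays exactly \(\mathcal{C}(q)\) nats on average.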

Source: http://dx.doi.org/10.1109/TNN.2004.828762

Publication Analysis

Top Keywords

bits-back coding: 12
bayesian learning: 12
learning bits-back: 8
variational bayesian: 8
learning: 7
bayesian: 5
variational learning: 4
coding information-theoretic: 4
information-theoretic view: 4
view bayesian: 4
