Models can be simple for different reasons: because they yield a simple and computationally efficient interpretation of a generic dataset (e.g., in terms of pairwise dependencies), as in statistical learning, or because they capture the laws of a specific phenomenon, as in physics, leading to non-trivial falsifiable predictions. In information theory, the simplicity of a model is quantified by its stochastic complexity, which measures the number of bits needed to encode its parameters. In order to understand what simple models look like, we study the stochastic complexity of spin models with interactions of arbitrary order. We show that bijections within the space of possible interactions preserve the stochastic complexity, which allows us to partition the space of all models into equivalence classes. We thus find that the simplicity of a model is determined not by the order of its interactions but by their mutual arrangement. Models in which statistical dependencies are localized on non-overlapping groups of few variables are simple, affording predictions on independencies that are easy to falsify. By contrast, fully connected pairwise models, which are often used in statistical learning, appear highly complex because of their extended set of interactions, and they are hard to falsify.
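To make "interactions of arbitrary order" concrete: in a spin model each possible interaction corresponds to a non-empty subset of the n spins, and the associated operator is the product of those spins, so there are 2^n - 1 candidate interactions in all. The sketch below (illustrative code, not the authors' implementation; all function names are our own) enumerates these operators and evaluates the energy of a configuration, contrasting a fully connected pairwise model with a model whose single third-order interaction is localized on one triplet of spins.

```python
from itertools import combinations, product

def operators(n):
    """Enumerate the 2**n - 1 possible interactions for n spins.

    Each interaction is a non-empty subset of spin indices; its operator
    on a configuration s in {-1, +1}**n is the product of the selected spins.
    """
    return [c for k in range(1, n + 1) for c in combinations(range(n), k)]

def energy(s, couplings):
    """Energy of configuration s under H(s) = sum_mu g_mu * prod_{i in mu} s_i,
    where couplings maps each interaction (a tuple of indices) to g_mu."""
    total = 0.0
    for mu, g in couplings.items():
        phi = 1
        for i in mu:
            phi *= s[i]
        total += g * phi
    return total

n = 3
ops = operators(n)
print(len(ops))  # 2**3 - 1 = 7 candidate interactions

# A fully connected pairwise model uses all order-2 subsets; a model with a
# single third-order coupling localizes the dependence on one triplet.
pairwise = {mu: 1.0 for mu in ops if len(mu) == 2}
triplet = {(0, 1, 2): 1.0}
for s in product((-1, 1), repeat=n):
    assert energy(s, triplet) == s[0] * s[1] * s[2]
```

The paper's point, in this notation, is that the stochastic complexity of such a model depends on how the subsets mu overlap with one another, not on their sizes.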


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512302
DOI: http://dx.doi.org/10.3390/e20100739

