Random network models, constrained to reproduce specific statistical features, are often used to represent and analyze network data and their mathematical descriptions. Chief among them, the configuration model constrains random networks by their degree distribution and is foundational to many areas of network science. However, configuration models and their variants are often selected based on intuition or mathematical and computational simplicity rather than on statistical evidence. To evaluate the quality of a network representation, we need to consider both the amount of information required to specify a random network model and the probability of recovering the original data when using the model as a generative process. To this end, we calculate the approximate size of network ensembles generated by the popular configuration model and its generalizations, including versions accounting for degree correlations and centrality layers. We then apply the minimum description length principle as a model selection criterion over the resulting nested family of configuration models. Using a dataset of over 100 networks from various domains, we find that the classic configuration model is generally preferred on networks with an average degree above 10, while a layered configuration model constrained by a centrality metric offers the most compact representation of the majority of sparse networks.
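The trade-off described here (information needed to state the model's constraints versus the probability of recovering the data from the resulting ensemble) can be illustrated with a rough two-part description-length calculation. The sketch below is not the paper's encoding: the ensemble size uses only the standard first-order stub-matching count for the multigraph configuration model, the degree-sequence cost is a deliberately naive log2(n)-bits-per-degree bound, and the function names (`log2_factorial`, `config_model_log2_size`, `description_length_bits`) are hypothetical.

```python
import math

def log2_factorial(n: int) -> float:
    """log2(n!) via the log-gamma function."""
    return math.lgamma(n + 1) / math.log(2)

def config_model_log2_size(degrees: list[int]) -> float:
    """First-order estimate of log2 of the ensemble size for the
    stub-matching (multigraph) configuration model with the given
    degree sequence:  |Omega| ~ (2m)! / (2^m * m! * prod_i k_i!)."""
    stubs = sum(degrees)
    if stubs % 2:
        raise ValueError("degree sequence must have an even sum")
    m = stubs // 2
    log2_omega = log2_factorial(stubs) - m - log2_factorial(m)
    log2_omega -= sum(log2_factorial(k) for k in degrees)
    return log2_omega

def description_length_bits(degrees: list[int]) -> float:
    """Illustrative two-part description length in bits: a naive cost
    for stating the constraints (each degree encoded with log2(n) bits,
    an assumption made here for simplicity) plus the log-size of the
    ensemble those constraints generate."""
    n = len(degrees)
    model_cost = n * math.log2(n)            # cost of the constraints
    data_cost = config_model_log2_size(degrees)  # cost of picking the graph
    return model_cost + data_cost

# Small illustrative degree sequence
print(round(description_length_bits([3, 3, 2, 2, 1, 1]), 1))
```

In this framing, a more constrained variant (for example, a layered or degree-correlated configuration model) shrinks the ensemble term but pays more bits to state its constraints; the minimum description length criterion simply selects whichever total is smaller.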
DOI: http://dx.doi.org/10.1103/PhysRevE.110.034305