A good fit of model predictions to empirical data is often used as an argument for model validity. However, if the model is flexible enough to fit a large proportion of potential empirical outcomes, finding a good fit becomes less meaningful. We propose a method for estimating the proportion of potential empirical outcomes that the model can fit: Model Flexibility Analysis (MFA). MFA aids model evaluation by providing a metric for gauging the persuasiveness of a given fit. We demonstrate that MFA can be more informative than merely discounting the fit by the number of free parameters in the model, and show how the number of free parameters does not necessarily correlate with the flexibility of the model. Additionally, we contrast MFA with other flexibility assessment techniques, including Parameter Space Partitioning, Model Mimicry, Minimum Description Length, and Prior Predictive Evaluation. Finally, we provide examples of how MFA can help to inform modeling results and discuss a variety of issues relating to the use of MFA in model validation.
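The abstract does not spell out the estimation procedure, but the core idea, estimating what fraction of the potential outcome space a model can reach, lends itself to a simple Monte Carlo sketch: sample parameter vectors, run the model, discretize the outcome space into a grid, and report the fraction of grid cells covered by at least one prediction. The sketch below is an illustration of that idea under these assumptions, not the authors' implementation; all names (estimate_flexibility, outcome_bins, the toy model) are hypothetical.

```python
import numpy as np

def estimate_flexibility(model, param_bounds, outcome_bins,
                         n_samples=100_000, rng=None):
    """Monte Carlo estimate of the fraction of a gridded outcome
    space that a model can reach (a sketch of the MFA idea).

    model        -- callable mapping a parameter vector to an outcome vector
    param_bounds -- list of (low, high) tuples, one per free parameter
    outcome_bins -- list of bin-edge arrays, one per outcome dimension
    """
    rng = rng or np.random.default_rng(0)
    lows = np.array([b[0] for b in param_bounds])
    highs = np.array([b[1] for b in param_bounds])

    covered = set()
    for _ in range(n_samples):
        theta = rng.uniform(lows, highs)       # random parameter vector
        outcome = np.atleast_1d(model(theta))  # model prediction
        # Identify the outcome-space cell this prediction falls into.
        cell = tuple(int(np.digitize(o, edges))
                     for o, edges in zip(outcome, outcome_bins))
        covered.add(cell)

    # np.digitize yields len(edges) + 1 possible indices per dimension.
    total_cells = int(np.prod([len(edges) + 1 for edges in outcome_bins]))
    return len(covered) / total_cells

# Hypothetical toy model: two parameters producing two response proportions.
# Since th[0] * th[1] <= th[0] on [0, 1]^2, it can only reach roughly half
# of the unit-square outcome space, so the estimate should be near 0.5.
toy = lambda th: np.array([th[0], th[0] * th[1]])
phi = estimate_flexibility(
    toy,
    param_bounds=[(0.0, 1.0), (0.0, 1.0)],
    outcome_bins=[np.linspace(0.0, 1.0, 21)] * 2,  # ~20 bins per dimension
)
print(f"Estimated flexibility: {phi:.2f}")
```

In this framing, a flexibility near 1 means a good fit is weak evidence (the model could have fit almost anything), while a value near 0 makes a good fit far more persuasive. Grid resolution and sample count trade off precision against cost and would need tuning for a real outcome space.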
DOI: http://dx.doi.org/10.1037/a0039657