We illustrate the use of statistical tools (asymptotic standard-error theory under appropriate statistical models, bootstrapping, and model comparison techniques), together with sensitivity analysis, that may be employed to determine the information content of data sets. We do this in the context of recent models [S. Prigent, A. Ballesta, F. Charles, N. Lenuzza, P. Gabriel, L.M. Tine, H. Rezaei, and M. Doumic, An efficient kinetic model for assemblies of amyloid fibrils and its application to polyglutamine aggregation, PLoS ONE 7 (2012), e43273. doi:10.1371/journal.pone.0043273.] for nucleated polymerization in proteins, a process whose underlying mechanisms remain poorly understood; thus, the methodology we develop here may be of great help to experimentalists. We conclude that the investigated data sets support, with reasonable levels of uncertainty, only the estimation of the parameters related to the early steps of the aggregation process.
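As a minimal sketch of the bootstrap approach to standard-error quantification mentioned above, the following illustrative example fits a simple saturating-growth curve and bootstraps its residuals to estimate parameter uncertainty. The model, data, and grid-search fitter here are hypothetical stand-ins, not the nucleated-polymerization model or estimation procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model only: a * (1 - exp(-k t)), NOT the Prigent et al. model.
def model(t, a, k):
    return a * (1.0 - np.exp(-k * t))

# Synthetic "data" with additive Gaussian noise.
t = np.linspace(0.0, 10.0, 40)
true_a, true_k = 2.0, 0.5
y = model(t, true_a, true_k) + rng.normal(0.0, 0.05, t.size)

def fit(t, y):
    # Crude grid search over k; for each k, the amplitude a has a
    # closed-form linear least-squares solution. A real analysis would
    # use a proper optimizer.
    best = None
    for k in np.linspace(0.05, 2.0, 200):
        basis = 1.0 - np.exp(-k * t)
        a = (basis @ y) / (basis @ basis)
        rss = np.sum((y - a * basis) ** 2)
        if best is None or rss < best[0]:
            best = (rss, a, k)
    return best[1], best[2]

a_hat, k_hat = fit(t, y)

# Residual bootstrap: resample residuals with replacement, refit,
# and take the spread of the refitted estimates as the standard error.
resid = y - model(t, a_hat, k_hat)
B = 200
boot = np.empty((B, 2))
for b in range(B):
    y_b = model(t, a_hat, k_hat) + rng.choice(resid, size=resid.size, replace=True)
    boot[b] = fit(t, y_b)

se_a, se_k = boot.std(axis=0, ddof=1)
print(f"a = {a_hat:.3f} +/- {se_a:.3f}, k = {k_hat:.3f} +/- {se_k:.3f}")
```

The bootstrap standard errors can then be compared against asymptotic-theory standard errors; large discrepancies, or very wide intervals for some parameters, signal that the data carry little information about those parameters, which is the kind of conclusion drawn in the abstract for the later aggregation steps.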
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4493483 | PMC
http://dx.doi.org/10.1080/17513758.2015.1050465 | DOI Listing