Understanding the uncertainty in parameter estimates or in derived secondary variables is important in all data analysis activities. In pharmacometrics, this is often assessed using the standard errors from the variance-covariance matrix of the estimates. Confidence intervals derived in this way are by definition symmetrical, which may lead to implausible outcomes, and they require translation to generate uncertainties in derived variables. An often-used alternative that circumvents these issues is numerical percentile estimation, for example via nonparametric bootstraps. Visual predictive checks (VPCs), a commonly used model diagnostic tool in pharmacometric analyses, also rely on the estimation of percentiles through numerical approaches. Given the cost of these methods in run times and processing times, it is important to weigh the number of bootstrap samples or simulated data sets against the gain in precision that a larger number provides. The objective of this tutorial is to provide a quantitative framework for assessing the precision of estimated percentile limits in bootstrap and VPC analyses, to facilitate an informed choice of confidence interval width, number of bootstrap samples/simulated data sets, and required level of precision.
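The trade-off described above can be made concrete with a small simulation. The sketch below (an illustration, not the tutorial's own framework) measures the spread of an estimated 2.5th percentile across repeated experiments, each using a given number of draws as a stand-in for bootstrap replicates or VPC-simulated data sets; the sampling distribution is assumed standard normal purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def percentile_spread(n_samples, prob=2.5, n_reps=500):
    """Empirical standard deviation of an estimated percentile
    across n_reps repeated experiments, each based on n_samples
    draws (a stand-in for bootstrap samples / simulated data sets)."""
    estimates = [
        np.percentile(rng.standard_normal(n_samples), prob)
        for _ in range(n_reps)
    ]
    return float(np.std(estimates))

# Precision improves roughly with the square root of the number of
# samples: quadrupling the samples only halves the spread.
for n in (200, 1000, 5000):
    print(n, round(percentile_spread(n), 4))
```

Because the spread shrinks only as roughly one over the square root of the number of samples, each further gain in precision costs disproportionately more run time, which is why an explicit precision target is worth setting in advance.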

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9197539
DOI: http://dx.doi.org/10.1002/psp4.12790
