We present a Bayesian framework for sequential monitoring that allows for the use of external data and can be applied across a wide range of clinical trial applications. The framework rests on the idea that, in many cases, the monitoring priors and stopping criteria can be semi-algorithmic byproducts of the trial hypotheses and relevant external data, simplifying the process of prior elicitation. Monitoring priors are defined using the family of generalized normal distributions, a flexible class that naturally allows one to construct a prior that is peaked or flat about the parameter values thought to be most likely.
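The flexibility the abstract attributes to the generalized normal family comes from its shape parameter β: β = 2 recovers the normal shape, small β gives a sharp peak at the mode, and large β gives a flat plateau. The following sketch illustrates this with illustrative parameter values, not the actual monitoring priors from the paper; the density formula matches `scipy.stats.gennorm`.

```python
# Sketch of how the generalized normal family interpolates between peaked
# and flat-topped shapes via its shape parameter beta (illustrative values,
# not the paper's actual monitoring priors).
import math

def gennorm_pdf(x, beta, mu=0.0, alpha=1.0):
    """Density of the generalized normal distribution with location mu,
    scale alpha, and shape beta (beta=2 recovers the normal shape)."""
    const = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return const * math.exp(-((abs(x - mu) / alpha) ** beta))

for beta in (0.8, 2.0, 8.0):
    # Flatness near the mode: the density at 0.9 relative to the mode
    # approaches 1 as beta grows, giving a plateau of "most likely" values,
    # while small beta concentrates mass sharply at the mode.
    flatness = gennorm_pdf(0.9, beta) / gennorm_pdf(0.0, beta)
    print(f"beta={beta}: density at 0.9 relative to mode = {flatness:.3f}")
```

This is why a single family suffices for constructing priors that are either peaked or flat about the hypothesized parameter values: one tunes β rather than switching distributional families.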
Objective: To assess whether initiation of insulin glargine (glargine), compared with initiation of NPH or insulin detemir (detemir), was associated with an increased risk of breast cancer in women with diabetes.
Research Design and Methods: This was a retrospective new-user cohort study of female Medicare beneficiaries aged ≥65 years initiating glargine (n = 203,159), detemir (n = 67,012), or NPH (n = 47,388) from September 2006 to September 2015, with follow-up through May 2017. Weighted Cox proportional hazards regression was used to estimate hazard ratios (HRs) and 95% CIs for the incidence of breast cancer according to ever use, cumulative duration of use, cumulative insulin dose, length of follow-up, and a combination of dose and length of follow-up.
In this paper, we develop the fixed-borrowing adaptive design, a Bayesian adaptive design that facilitates borrowing of subject-level control data from a historical trial while assuring a reasonable upper bound on the maximum type I error rate and a lower bound on the minimum power. First, one constructs an informative power prior from the historical data to be used for design and analysis of the new trial. At an interim analysis opportunity, one evaluates the degree of prior-data conflict.
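To make the power-prior mechanics concrete, here is a minimal sketch for a normal mean with known variance: the historical likelihood is raised to a discounting power a0 ∈ [0, 1], so a0 = 1 fully pools the historical controls and a0 = 0 ignores them. The numbers are hypothetical, and this sketch omits the design's interim adaptation of borrowing based on prior-data conflict.

```python
# Minimal power-prior sketch for a normal mean with known variance sigma^2,
# starting from a flat initial prior (hypothetical numbers; the paper's
# design additionally adjusts borrowing at an interim look).

def power_prior_posterior(ybar0, n0, ybar, n, sigma2, a0):
    """Posterior mean and variance for theta under a power prior that
    discounts the historical likelihood (n0 subjects, mean ybar0) by a0."""
    precision = (a0 * n0 + n) / sigma2                   # total information
    mean = (a0 * n0 * ybar0 + n * ybar) / (a0 * n0 + n)  # precision-weighted
    return mean, 1.0 / precision

# a0=1 pools historical and current controls; a0=0 uses current data only.
full_borrow, v_full = power_prior_posterior(0.0, 100, 0.5, 50, 1.0, a0=1.0)
no_borrow, v_none = power_prior_posterior(0.0, 100, 0.5, 50, 1.0, a0=0.0)
print(full_borrow, no_borrow)  # borrowing shrinks toward the historical mean
```

Borrowing also shrinks the posterior variance, which is the source of both the efficiency gain and the type I error inflation the design must bound.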
Evaluation of safety is a critical component of drug review at the US Food and Drug Administration (FDA). Statisticians are playing an increasingly visible role in quantitative safety evaluation and regulatory decision-making. This article reviews the history and the recent events relating to quantitative drug safety evaluation at the FDA.
Have you noticed, when you browse a book, journal, study report, or product label, how your eye is drawn to figures more than to words and tables? Statistical graphs are powerful ways to transparently and succinctly communicate the key points of medical research. Furthermore, the graphic design itself adds to the clarity of the messages in the data. The goal of this paper is to provide a mechanism for selecting the appropriate graph to thoughtfully construct quality deliverables using good graphic design principles.
J Acquir Immune Defic Syndr
December 2012
Background: Several studies have reported an association between abacavir (ABC) exposure and increased risk of myocardial infarction (MI) among HIV-infected individuals. Randomized controlled trials (RCTs) and a pooled analysis by GlaxoSmithKline, however, do not support this association. To better estimate the effect of ABC use on risk of MI, the US Food and Drug Administration (FDA) conducted a trial-level meta-analysis of RCTs in which ABC use was randomized as part of a combined antiretroviral regimen.
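A trial-level meta-analysis of this kind is typically carried out by inverse-variance (fixed-effect) pooling of per-trial effect estimates on the log odds ratio scale. The sketch below shows that standard machinery with made-up numbers, not the actual ABC/MI trial data from the FDA analysis.

```python
# Hedged sketch of inverse-variance (fixed-effect) pooling of trial-level
# log odds ratios. The per-trial estimates below are fabricated for
# illustration only, not the ABC/MI meta-analysis results.
import math

def pool_fixed_effect(log_ors, ses):
    """Pool per-trial log odds ratios with inverse-variance weights."""
    w = [1.0 / se ** 2 for se in ses]            # weight = 1 / variance
    pooled = sum(wi * est for wi, est in zip(w, log_ors)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))          # pooled standard error
    return pooled, se_pooled

log_ors = [0.20, -0.05, 0.10]                    # hypothetical per-trial log ORs
ses = [0.30, 0.40, 0.25]                         # hypothetical standard errors
est, se = pool_fixed_effect(log_ors, ses)
lo, hi = est - 1.96 * se, est + 1.96 * se
print(f"pooled OR = {math.exp(est):.2f}, 95% CI ({math.exp(lo):.2f}, {math.exp(hi):.2f})")
```

The pooled standard error is always smaller than any single trial's, which is precisely why pooling randomized trials can resolve an association that individual RCTs were underpowered to detect.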
Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods.
Clinicians need to evaluate the quality of individual clinical studies and synthesize the information from multiple clinical studies to gain insight into selecting appropriate therapies for patients. Understanding the key statistical principles that underlie a clinical trial, and how they may be implemented, can help clinicians properly interpret the efficacy and safety findings of clinical trials. Several factors should be considered when evaluating clinical studies reported in the literature, as important differences might exist among reported studies, thereby affecting the reliability of their findings.
Motivation: Genome-wide microarray data are often used in challenging classification problems involving clinically relevant subtypes of human disease. However, identification of a parsimonious, robust prediction model that performs consistently well on future independent data has not been successful, owing to biased model selection among the extremely large number of candidate models considered during classification model search and construction. Furthermore, common criteria of prediction model performance, such as classification error rates, do not provide a sensitive measure for evaluating such an astronomical number of competing models.
J Bioinform Comput Biol
January 2004
Microarrays can provide genome-wide expression patterns for various cancers, especially for tumor sub-types that may exhibit substantially different patient prognoses. Using such gene expression data, several approaches have been proposed to classify tumor sub-types accurately. These classification methods are not robust, however, and are often dependent on the particular training sample used for modelling, which raises concerns about using them to guide treatment for a future patient.
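One common baseline for this task is a nearest-centroid classifier: average each sub-type's training profiles into a centroid, then assign a new sample to the nearest centroid. The toy version below uses synthetic "expression" values, not the paper's data or its proposed method; it also makes the abstract's concern concrete, since the centroids (and hence the decision boundary) are entirely determined by the particular training sample drawn.

```python
# Toy nearest-centroid classifier on synthetic "expression profiles",
# in the spirit of centroid-based tumor sub-type classifiers
# (synthetic data; not the paper's data or proposed method).
import random
random.seed(0)

def centroid(samples):
    """Per-gene mean of a list of equal-length expression profiles."""
    n = len(samples)
    return [sum(s[g] for s in samples) / n for g in range(len(samples[0]))]

def predict(x, centroids):
    """Assign x to the class whose centroid is nearest (Euclidean)."""
    def dist2(c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Two sub-types with shifted mean expression across 5 genes; a different
# random training draw would yield different centroids and boundary.
train_a = [[random.gauss(0.0, 1.0) for _ in range(5)] for _ in range(20)]
train_b = [[random.gauss(1.5, 1.0) for _ in range(5)] for _ in range(20)]
cents = {"subtype_A": centroid(train_a), "subtype_B": centroid(train_b)}
print(predict([0.1] * 5, cents), predict([1.4] * 5, cents))
```

With well-separated sub-types the assignment is stable, but for samples near the boundary the predicted label can flip between training draws, which is the robustness problem the abstract highlights.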