Background: Whole-pelvis radiation therapy is common practice in the post-surgical treatment of cervical and endometrial cancer. Gastrointestinal mucositis is an adverse side effect of radiation therapy, and is a primary concern in patient management. We investigate whether proteomic information obtained from blood samples drawn from patients scheduled to receive radiation therapy for gynecological cancers could be used to predict which patients are most susceptible to radiation-induced gastrointestinal mucositis, in order to improve the individualization of radiation therapy.
Background: Immune function may influence the ability of older adults to maintain or improve muscle mass, strength, and function during aging. Thus, nutritional supplementation that supports the immune system could complement resistance exercise as an intervention for age-associated muscle loss. The current study will determine the relationship between immune function and exercise training outcomes for older adults who consume a nutritional supplement or placebo during resistance training and post-training follow-up.
The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., an LD50.
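To make the LDP idea concrete, the following is a minimal sketch, assuming a two-parameter logistic dose-response model and hypothetical mortality counts; it illustrates generic LD50 estimation, not the method of any particular paper listed here.

```python
# Hypothetical illustration: estimate the LD50 from grouped mortality data
# by fitting a two-parameter logistic dose-response model via maximum likelihood.
import numpy as np
from scipy.optimize import minimize

doses = np.array([6.0, 7.0, 8.0, 9.0, 10.0])   # Gy (hypothetical)
n = np.array([20, 20, 20, 20, 20])             # animals per dose group
deaths = np.array([2, 6, 11, 16, 19])          # deaths by the specified time

def neg_log_lik(params):
    a, b = params                              # logit(p) = a + b * dose
    p = 1.0 / (1.0 + np.exp(-(a + b * doses)))
    p = np.clip(p, 1e-12, 1 - 1e-12)           # guard against log(0)
    return -np.sum(deaths * np.log(p) + (n - deaths) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[-10.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
ld50 = -a_hat / b_hat                          # dose at which logit(p) = 0, i.e. p = 0.5
print(f"Estimated LD50: {ld50:.2f} Gy")
```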
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED.
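A minimal sketch of the MA idea, assuming two hypothetical candidate models (logit and probit links on log dose) combined with Akaike weights; the data are invented for illustration.

```python
# Sketch of model averaging (MA) for an effective dose: fit two candidate
# quantal dose-response models by maximum likelihood, weight them by Akaike
# weights, and average the ED estimates. Data and models are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n = np.full(5, 50)
cases = np.array([1, 3, 9, 22, 41])

def fit_model(cdf):
    """Fit p = cdf(a + b*log(dose)); return (params, AIC) with k = 2 parameters."""
    def nll(params):
        a, b = params
        p = np.clip(cdf(a + b * np.log(doses)), 1e-12, 1 - 1e-12)
        return -np.sum(cases * np.log(p) + (n - cases) * np.log(1 - p))
    res = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
    return res.x, 2 * res.fun + 2 * 2

links = {"logit":  (lambda x: 1 / (1 + np.exp(-x)), lambda p: np.log(p / (1 - p))),
         "probit": (norm.cdf, norm.ppf)}
risk = 0.01                                    # target risk in the 1% range
aics, eds = {}, {}
for name, (cdf, inv) in links.items():
    (a, b), aic = fit_model(cdf)
    aics[name] = aic
    eds[name] = float(np.exp((inv(risk) - a) / b))  # invert to ED01 on the dose scale

# Akaike weights: w_i proportional to exp(-0.5 * (AIC_i - AIC_min))
delta = np.array([aics[k] for k in links]) - min(aics.values())
w = np.exp(-0.5 * delta)
w /= w.sum()
ed_ma = float(sum(wi * eds[k] for wi, k in zip(w, links)))
print("weights:", dict(zip(links, np.round(w, 3))), " MA ED01:", round(ed_ma, 4))
```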
Objective: Although classification algorithms are promising tools to support clinical diagnosis and treatment of disease, the usual implicit assumption underlying these algorithms, that all patients are homogeneous with respect to characteristics of interest, is unsatisfactory. The objective here is to exploit the population heterogeneity reflected by characteristics that may not be apparent and thus not controlled, in order to differentiate levels of classification accuracy between subpopulations and further the goal of tailoring therapies on an individual basis.
Methods And Materials: A new subpopulation-based confidence approach is developed in the context of a selective voting algorithm defined by an ensemble of convex-hull classifiers.
This article proposes a method for multiclass classification problems using ensembles of multinomial logistic regression models. A multinomial logit model is used as a base classifier in ensembles from random partitions of predictors. The multinomial logit model can be applied to each mutually exclusive subset of the feature space without variable selection.
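A minimal sketch of this approach, assuming a random partition of the predictors into mutually exclusive subsets and simple majority voting; the dataset and partition size are illustrative, not taken from the article.

```python
# Illustrative ensemble of multinomial logit base classifiers built on a
# random partition of the predictors into mutually exclusive subsets.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
subsets = np.array_split(rng.permutation(X.shape[1]), 2)  # disjoint predictor subsets

members = []
for cols in subsets:
    clf = LogisticRegression(max_iter=1000)   # multinomial logit base classifier
    members.append((cols, clf.fit(X_tr[:, cols], y_tr)))

votes = np.column_stack([clf.predict(X_te[:, cols]) for cols, clf in members])
y_hat = np.array([np.bincount(row).argmax() for row in votes])  # majority vote
print("ensemble accuracy:", np.mean(y_hat == y_te))
```

Because each subset uses all of its predictors, no variable selection is needed within a base classifier, which is the point the abstract makes.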
Although benefit-risk analysis is a necessary component of the review of new drugs for potential regulatory approval in the presence of known adverse side effects, and of the review of already-approved drugs for possible withdrawal from the market when unanticipated adverse events are discovered, formal quantitative tools for benefit-risk analysis are few. This paper proposes a quantitative method that utilizes receiver operating characteristic (ROC) curves to find an optimal dose of a drug that maximizes the differential between the benefit of the intended effect and the risk of adverse side effects, where costs associated with lack of benefit and risk can be incorporated. The method can be applied separately to subpopulations of different sensitivities and to different adverse events to give a full picture of the trade-offs between the benefit afforded by the drug and the risk it incurs, potentially allowing the drug to be approved only selectively for specific subpopulations, or at different doses for different subpopulations.
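A minimal sketch of the dose-optimization idea, assuming hypothetical logistic curves for benefit and adverse-event probabilities and an invented cost ratio; the paper's ROC machinery is not reproduced here.

```python
# Hedged sketch: given fitted dose-response curves for the intended effect and
# for an adverse event, choose the dose maximizing the cost-weighted
# benefit-risk differential. All parameters below are hypothetical.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

doses = np.linspace(0, 20, 2001)
p_benefit = logistic(0.8 * (doses - 6.0))    # P(intended effect | dose)
p_adverse = logistic(0.5 * (doses - 14.0))   # P(adverse side effect | dose)
cost_ratio = 2.0                             # one adverse event "costs" two benefits

net = p_benefit - cost_ratio * p_adverse     # cost-weighted differential
d_opt = doses[np.argmax(net)]
print(f"optimal dose ~ {d_opt:.2f}, net benefit {net.max():.3f}")
```

Running the same calculation per subpopulation, with curves and cost ratios specific to each, gives the selective-approval picture the abstract describes.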
Background: To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error.
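An illustrative comparison on synthetic data, assuming a simple out-of-bag bootstrap estimator rather than the paper's BCV method, alongside 10-fold CV.

```python
# Illustrative comparison (not the paper's BCV algorithm): a simple bootstrap
# out-of-bag error estimate versus k-fold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000)

# k-fold CV error (sampling without replacement)
cv_err = 1.0 - cross_val_score(clf, X, y, cv=10).mean()

# Bootstrap: train on a resample (with replacement), test on out-of-bag samples.
rng = np.random.default_rng(1)
errs = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    clf.fit(X[idx], y[idx])
    errs.append(np.mean(clf.predict(X[oob]) != y[oob]))

print(f"10-fold CV error: {cv_err:.3f}, bootstrap OOB error: {np.mean(errs):.3f}")
```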
BMC Med Res Methodol
July 2012
Background: Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient's class.
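A small sketch of the two evaluation styles on synthetic data, using Harrell's concordance index from lifelines and a median survival-time threshold; the threshold choice and the handling of censoring here are simplifications for illustration.

```python
# Sketch of the two evaluation styles: concordance between predicted risk and
# survival time, versus binary classification at a survival-time threshold.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
risk_score = rng.normal(size=100)                    # model's predicted risk
surv_time = rng.exponential(np.exp(-risk_score))     # higher risk -> shorter survival
event = rng.random(100) < 0.8                        # ~20% censored

# Concordance index (larger predicted score should mean longer survival,
# so the risk score enters with a sign flip).
c = concordance_index(surv_time, -risk_score, event)

# Alternative: dichotomize at a threshold and score as a binary classifier
# (censoring is ignored here for simplicity; real analyses must handle it).
threshold = np.median(surv_time)
y_true = surv_time > threshold
y_pred = -risk_score > np.median(-risk_score)
print(f"c-index: {c:.3f}, binary accuracy: {np.mean(y_true == y_pred):.3f}")
```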
AMIA Jt Summits Transl Sci Proc
August 2012
Genes work in concert as a system as opposed to independent entities and mediate disease states. There has been considerable interest in understanding variations in molecular signatures between normal and disease states. However, a majority of techniques implicitly assume homogeneity between samples within a given group and use a fixed set of genes in discerning the groups.
Food-borne infection is caused by intake of foods or beverages contaminated with microbial pathogens. Dose-response modeling is used to estimate exposure levels of pathogens associated with specific risks of infection or illness. When a single dose-response model is used and confidence limits on infectious doses are calculated, only data uncertainty is captured.
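A minimal sketch, assuming the exponential and approximate beta-Poisson dose-response models commonly used in microbial risk assessment, fit to invented data and inverted at a 1% infection risk; as the abstract notes, a single-model analysis of this kind captures data uncertainty only, not model uncertainty.

```python
# Hypothetical sketch: fit two standard microbial dose-response models and
# invert each to the dose giving a 1% probability of infection.
import numpy as np
from scipy.optimize import minimize

doses = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # ingested organisms (hypothetical)
n = np.full(5, 30)                            # subjects per dose group
infected = np.array([1, 4, 11, 21, 28])

def nll(p_of_d):
    p = np.clip(p_of_d(doses), 1e-12, 1 - 1e-12)
    return -np.sum(infected * np.log(p) + (n - infected) * np.log(1 - p))

# Exponential model: P(d) = 1 - exp(-r d), with r > 0 via log-parameterization
res_exp = minimize(lambda t: nll(lambda d: 1 - np.exp(-np.exp(t[0]) * d)),
                   x0=[np.log(1e-4)], method="Nelder-Mead")
r = np.exp(res_exp.x[0])

# Approximate beta-Poisson: P(d) = 1 - (1 + d/beta)**(-alpha)
res_bp = minimize(lambda t: nll(lambda d: 1 - (1 + d / np.exp(t[1]))**(-np.exp(t[0]))),
                  x0=[np.log(0.3), np.log(1e3)], method="Nelder-Mead")
alpha, beta = np.exp(res_bp.x)

p_target = 0.01
d_exp = -np.log(1 - p_target) / r                        # invert exponential
d_bp = beta * ((1 - p_target)**(-1 / alpha) - 1)         # invert beta-Poisson
print(f"dose at 1% risk: exponential {d_exp:.1f}, beta-Poisson {d_bp:.1f}")
```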
We present an introduction to, and examples of, Cox proportional hazards regression in the context of animal lethality studies of potential radioprotective agents. This established method is seldom used to analyze survival data collected in such studies, but is appropriate in many instances. Presenting a hypothetical radiation study that examines the efficacy of a potential radioprotectant both in the absence and presence of a potential modifier, we detail how to implement and interpret results from a Cox proportional hazards regression analysis used to analyze the survival data, and we provide relevant SAS® code.
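A Python analogue of such an analysis using lifelines (the paper itself provides SAS® code); the data frame below is an invented toy version of a radioprotectant study with a treatment arm and a potential modifier.

```python
# Cox proportional hazards fit for a hypothetical radioprotectant study,
# using lifelines rather than the SAS code the paper supplies.
import pandas as pd
from lifelines import CoxPHFitter

# Toy data: survival time (days), death indicator, treatment arm, and a
# potential modifier; animals alive at day 30 are censored.
df = pd.DataFrame({
    "time":     [5, 8, 12, 30, 30, 6, 9, 15, 30, 30],
    "died":     [1, 1, 1, 0, 0, 1, 1, 1, 0, 0],
    "drug":     [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],   # 1 = radioprotectant
    "modifier": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="died")
cph.print_summary()   # hazard ratios for the drug and modifier effects
```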
Objective: Classification algorithms can be used to predict risks and responses of patients based on genomic and other high-dimensional data. While there is optimism for using these algorithms to improve the treatment of diseases, they have yet to demonstrate sufficient predictive ability for routine clinical practice. They generally classify all patients according to the same criteria, under an implicit assumption of population homogeneity.
Idiosyncratic liver toxicity that may lead to post-marketing removal of approved drugs can potentially be explained by the existence of hidden, sensitive subpopulations that are not large enough to affect premarketing toxicity assessments. We consider whether molecular biomarkers of risk and response can be developed to identify sensitive individuals, using classification methods that allow for population heterogeneity represented by characteristics that may not be readily apparent or controlled. If so, drugs that may be hepatotoxic to only a relatively small subpopulation would not be mislabeled as hepatotoxic to the general population and could be prescribed selectively to achieve a maximum health benefit.
In response to the ever-increasing threat of radiological and nuclear terrorism, active development of nontoxic new drugs and other countermeasures to protect against and/or mitigate adverse health effects of radiation is ongoing. Although the classical LD50 study used for many decades as a first step in preclinical toxicity testing of new drugs has been largely replaced by experiments that use fewer animals, the need to evaluate the radioprotective efficacy of new drugs necessitates the conduct of traditional LD50 comparative studies (FDA, 2002, Federal Register 67, 37988-37998). There is, however, no readily available method to determine the number of animals needed for establishing efficacy in these comparative potency studies.
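For orientation only, a standard simplified calculation, assuming the comparison is reduced to survival proportions at a single fixed dose; this is not the paper's method for comparative potency studies, and the assumed survival rates are hypothetical.

```python
# Simplified, conventional power calculation for animals per group needed to
# detect a difference in survival proportions at one fixed radiation dose.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control, p_drug = 0.50, 0.80   # assumed survival without/with the drug
es = proportion_effectsize(p_drug, p_control)
n = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.8,
                                 ratio=1.0, alternative="two-sided")
print(f"~{n:.0f} animals per group")
```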
Objective: A classification algorithm that utilizes two-dimensional convex hulls of training-set samples is presented.
Methods And Material: For each pair of predictor variables, separate convex hulls of positive and negative samples in the training set are formed, and these convex hulls are used to classify test points according to a nearest-neighbor criterion. An ensemble of these two-dimensional convex-hull classifiers is formed by trimming the m(m-1)/2 possible classifiers derived from the m predictors to a set of classifiers composed of only unique predictor variables.
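A simplified sketch of the idea, assuming scipy Delaunay triangulations for the point-in-hull test and omitting the trimming step described above; data are synthetic and the fallback rule is a plain nearest-neighbor comparison.

```python
# Simplified two-dimensional convex-hull classifier ensemble (illustrative,
# not the authors' implementation): hull membership decides the class when
# unambiguous; otherwise fall back to the nearest training point.
import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def in_hull(points, x):
    """True where x lies inside the convex hull of `points` (via Delaunay)."""
    return Delaunay(points).find_simplex(x) >= 0

def classify_pair(Xtr, ytr, Xte, i, j):
    P, N = Xtr[ytr == 1][:, [i, j]], Xtr[ytr == 0][:, [i, j]]
    T = Xte[:, [i, j]]
    inP, inN = in_hull(P, T), in_hull(N, T)
    # Nearest-neighbor fallback when a point is in both hulls or neither.
    dP = np.min(np.linalg.norm(T[:, None] - P[None], axis=2), axis=1)
    dN = np.min(np.linalg.norm(T[:, None] - N[None], axis=2), axis=1)
    nn = (dP < dN).astype(int)
    return np.where(inP & ~inN, 1, np.where(inN & ~inP, 0, nn))

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
votes = np.column_stack([classify_pair(Xtr, ytr, Xte, i, j)
                         for i, j in combinations(range(X.shape[1]), 2)])
y_hat = (votes.mean(axis=1) > 0.5).astype(int)   # majority vote over all pairs
print("ensemble accuracy:", np.mean(y_hat == yte))
```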
J Biopharm Stat
November 2008
A new statistical method for estimating the lag time between onset of and death from an occult tumor is proposed for data without cause-of-death information. In this method, the survival function for time to tumor onset, tumor-specific survival function, and competing risks survival function are estimated using the maximum likelihood estimates of the parameters. The proposed method utilizes the estimated survival functions and statistically imputed fatal tumors to estimate the lag time.
We apply robust classification algorithms to high-dimensional genomic data to find biomarkers, by analyzing variable importance, that enable a better diagnosis of disease, an earlier intervention, or a more effective assignment of therapies. The goal is to use variable importance ranking to isolate a set of important genes that can be used to classify life-threatening diseases with respect to prognosis or type, to maximize efficacy or minimize toxicity in personalized treatment of such diseases. A ranking method, along with several other methods for selecting a set of important genes to use as genomic biomarkers, is proposed, and the performance of the selection procedures in patient classification is evaluated by cross-validation.
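A generic sketch of importance-based gene selection, assuming random-forest variable importance as the ranking method (the paper evaluates several procedures) and synthetic stand-in data.

```python
# Illustrative variable-importance gene selection: rank features by random
# forest importance, keep the top k, and check classification accuracy by CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for high-dimensional genomic data
X, y = make_classification(n_samples=100, n_features=2000, n_informative=20,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:50]   # top 50 "genes"

acc = cross_val_score(RandomForestClassifier(random_state=0),
                      X[:, top], y, cv=5).mean()
print(f"CV accuracy using the 50 top-ranked features: {acc:.3f}")
```

Note that in a rigorous evaluation the ranking step must be nested inside the cross-validation loop; ranking on the full data first, as in this sketch, optimistically biases the CV estimate.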
Regul Toxicol Pharmacol
July 2008
Under the new U.S. Environmental Protection Agency (EPA) Cancer Risk Assessment Guidelines [U.
13C NMR data have been correlated to Toxic Equivalency Factors (TEFs) of the 29 PCDDs, PCDFs, or PCBs for which non-zero TEFs have been defined. Such correlations are called quantitative spectrometric data-activity relationship (QSDAR) models. An improved QSDAR model predicted TEFs of 0.
Objective: Personalized medicine is defined by the use of genomic signatures of patients in a target population for assignment of more effective therapies as well as better diagnosis and earlier interventions that might prevent or delay disease. The objective is to find a novel classification algorithm that can be used for prediction of response to therapy in order to help individualize clinical assignment of treatment.
Methods And Materials: Classification algorithms are required to be highly accurate for optimal treatment of each patient.
A general probabilistically based approach is proposed for both cancer and noncancer risk/safety assessments. The familiar framework of the original ADI/RfD formulation is used, substituting in the numerator a benchmark dose derived from a hierarchical pharmacokinetic/pharmacodynamic model, and in the denominator a unitary uncertainty factor derived from a hierarchical animal/average human/sensitive human model. The empirical probability distributions of the numerator and denominator can be combined to produce an empirical human-equivalent distribution for an animal-derived benchmark dose in external-exposure units.
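A minimal Monte Carlo sketch of combining the numerator and denominator distributions; the lognormal shapes and parameters below are invented for illustration and stand in for the hierarchical model outputs.

```python
# Monte Carlo sketch: divide an uncertain benchmark dose (numerator) by an
# uncertain unitary uncertainty factor (denominator) and summarize the
# resulting empirical allowable-exposure distribution. Values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
bmd = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)  # mg/kg-day, hypothetical
uf = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)   # unitary uncertainty factor

allowable = bmd / uf                                       # empirical exposure distribution
p05, p50 = np.percentile(allowable, [5, 50])
print(f"median allowable exposure: {p50:.3f}; 5th percentile: {p05:.3f} mg/kg-day")
```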
Personalized medicine is defined by the use of genomic signatures of patients to assign effective therapies. We present Classification by Ensembles from Random Partitions (CERP) for class prediction and apply CERP to genomic data on leukemia patients and to genomic data with several clinical variables on breast cancer patients. CERP performs consistently well compared to the other classification algorithms.
Risk assessment involves an analysis of the relationship between exposure and health-related outcomes to derive an allowable exposure level or to estimate a low-dose risk. Acceptable levels of human exposure for non-cancer effects generally are derived by dividing an experimental no-observed-adverse-effect level or a lower confidence limit on a benchmark dose by a product of several uncertainty factors. This paper presents a hierarchical modeling framework for a probabilistic approach to non-cancer risk assessment.
J Toxicol Environ Health A
August 2006
The percent active (A) and inactive (I) chemicals in a database can directly affect the sensitivity (% active chemicals predicted correctly) and specificity (% inactive chemicals predicted correctly) of structure-activity relationship (SAR) analyses. Subdividing the National Center for Toxicological Research (NCTR) liver cancer database (NCTRlcdb) into various A/I ratios, which varied from 0.2 to 5.