Publications by authors named "Paul N Zivich"

Iterated conditional expectation (ICE) g-computation is an estimation approach that addresses time-varying confounding in both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap.
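As a rough sketch of the ICE algorithm (the two-time-point structure, simulated data, and model forms below are invented for illustration, not taken from the article):

```python
import numpy as np

def expit(x):
    return 1 / (1 + np.exp(-x))

def fit_logit(X, y, iters=50):
    # Newton-Raphson logistic regression (X includes an intercept column)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = expit(X @ b)
        b += np.linalg.solve(X.T * (p * (1 - p)) @ X, X.T @ (y - p))
    return b

rng = np.random.default_rng(42)
n = 20000

# Hypothetical two-time-point data
L0 = rng.normal(size=n)                                      # baseline confounder
A0 = rng.binomial(1, expit(L0))                              # treatment, time 0
L1 = 0.5 * L0 - 0.4 * A0 + rng.normal(size=n)                # time-varying confounder
A1 = rng.binomial(1, expit(L1 + 0.5 * A0))                   # treatment, time 1
Y = rng.binomial(1, expit(0.3 * L1 - 0.6 * A0 - 0.6 * A1))   # binary outcome

ones = np.ones(n)

# Step 1: model E[Y | L0, A0, L1, A1]; predict with both treatments set to 1
X2 = np.column_stack([ones, L0, A0, L1, A1])
b2 = fit_logit(X2, Y)
q2 = expit(np.column_stack([ones, L0, ones, L1, ones]) @ b2)

# Step 2: regress the pseudo-outcome on history through time 0 (here OLS),
# predict with A0 set to 1, then average to get risk under "always treat"
X1 = np.column_stack([ones, L0, A0])
b1, *_ = np.linalg.lstsq(X1, q2, rcond=None)
risk_always_treat = (np.column_stack([ones, L0, ones]) @ b1).mean()
print(round(risk_always_treat, 3))
```

Note that no model for L1 itself is ever fit: the time-varying covariate enters only as a regressor, which is the feature the abstract highlights.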

Comparisons of treatments, interventions, or exposures are of central interest in epidemiology, but direct comparisons are not always possible due to practical or ethical reasons. Here, we detail a fusion approach to compare treatments across studies. The motivating example entails comparing the risk of the composite outcome of death, AIDS, or greater than a 50% CD4 cell count decline in people with HIV when assigned triple versus mono antiretroviral therapy, using data from the AIDS Clinical Trial Group (ACTG) 175 (mono versus dual therapy) and ACTG 320 (dual versus triple therapy).

Selection bias has long been central in methodological discussions across epidemiology and other fields. In epidemiology, the concept of selection bias has been continually evolving over time. In this issue of the Journal, Mathur and Shpitser (Am J Epidemiol.

Purpose: Generalized (g-) computation is a useful tool for causal inference in epidemiology. However, in settings where the outcome is a survival time subject to right censoring, the standard pooled logistic regression approach to g-computation requires arbitrary discretization of time, parametric modeling of the baseline hazard function, and expansion of one's dataset. We illustrate a semiparametric Breslow estimator for g-computation with time-fixed treatments and survival outcomes that is not subject to these limitations.
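A minimal sketch of the idea on invented single-confounder data, with a hand-rolled Cox partial likelihood and Breslow baseline cumulative hazard (this is not the article's implementation, just the general recipe):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Hypothetical data: confounder L, treatment A, exponential event times
L = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-L)))
rate = 0.1 * np.exp(-0.5 * A + 0.4 * L)      # true log hazard ratios: -0.5, 0.4
T = rng.exponential(1 / rate)
C = rng.exponential(15, size=n)              # random censoring
t = np.minimum(T, C)
d = (T <= C).astype(float)

# Sort by observed time so risk-set sums are suffix sums
order = np.argsort(t)
t, d = t[order], d[order]
X = np.column_stack([A, L])[order]

def rev_cumsum(a):
    return np.cumsum(a[::-1], axis=0)[::-1]

# Cox partial likelihood via Newton-Raphson (continuous times, so no ties)
b = np.zeros(2)
for _ in range(25):
    w = np.exp(X @ b)
    S0 = rev_cumsum(w)                                        # risk-set weight sums
    S1 = rev_cumsum(w[:, None] * X)
    S2 = rev_cumsum(w[:, None, None] * X[:, :, None] * X[:, None, :])
    xbar = S1 / S0[:, None]
    grad = ((X - xbar) * d[:, None]).sum(axis=0)
    V = S2 / S0[:, None, None] - xbar[:, :, None] * xbar[:, None, :]
    info = (V * d[:, None, None]).sum(axis=0)
    b += np.linalg.solve(info, grad)

# Breslow baseline cumulative hazard (step function over event times)
w = np.exp(X @ b)
H0 = np.cumsum(d / rev_cumsum(w))

# g-computation: marginal risk by time tau under "everyone treated",
# averaging model-based survival over the observed confounder distribution
tau = 5.0
H0_tau = H0[np.searchsorted(t, tau, side="right") - 1]
risk_treated = 1 - np.mean(np.exp(-H0_tau * np.exp(b[0] * 1 + b[1] * L)))
print(round(risk_treated, 3))
```

No discretization of time, no parametric baseline hazard, and no dataset expansion is needed, which is the contrast with pooled logistic regression drawn in the abstract.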

Multiple imputation (MI) is commonly implemented to mitigate potential selection bias due to missing data. The accompanying article by Nguyen and Stuart (Am J Epidemiol. 2024;193(10):1470-1476) examines the statistical consistency of several ways of integrating MI with propensity scores.

M-estimation is a statistical procedure that is particularly advantageous for some common epidemiological analyses, including approaches to estimate an adjusted marginal risk contrast (i.e., inverse probability weighting and g-computation) and data fusion.
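A toy illustration of the M-estimation recipe: stack estimating equations (here, for a mean and variance, chosen purely for the sketch), solve them numerically, and compute the empirical sandwich variance. In practice a package such as delicatessen automates these steps.

```python
import numpy as np

# Stacked estimating equations: psi1 = y - mu, psi2 = (y - mu)^2 - sigma2
def psi(theta, y):
    mu, s2 = theta
    return np.stack([y - mu, (y - mu) ** 2 - s2])

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.5, size=500)
n = y.size

def sum_psi(theta):
    return psi(theta, y).sum(axis=1)

def num_jac(theta, h=1e-6):
    # forward-difference Jacobian of the summed estimating equations
    f0 = sum_psi(theta)
    J = np.empty((2, 2))
    for k in range(2):
        e = np.zeros(2); e[k] = h
        J[:, k] = (sum_psi(theta + e) - f0) / h
    return J

# Root-find sum_i psi(theta; y_i) = 0 by Newton's method
theta = np.array([0.0, 1.0])
for _ in range(50):
    step = np.linalg.solve(num_jac(theta), -sum_psi(theta))
    theta = theta + step
    if np.max(np.abs(step)) < 1e-10:
        break

# Sandwich variance: V = B^{-1} M B^{-T} / n,
# with B the (negated) mean derivative and M the outer-product "meat"
ev = psi(theta, y)
B = -num_jac(theta) / n
M = (ev @ ev.T) / n
Binv = np.linalg.inv(B)
V = Binv @ M @ Binv.T / n
se_mu = np.sqrt(V[0, 0])
```

For the mean, the sandwich standard error reproduces the usual one; the payoff is that the same machinery covers stacked, multi-step estimators (weights, g-computation, fusion) where naive variances fail.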

Higher-order evidence is evidence about evidence. Epidemiologic examples of higher-order evidence include the settings where the study data constitute first-order evidence and estimates of misclassification comprise the second-order evidence (e.g.

While randomized controlled trials (RCTs) are critical for establishing the efficacy of new therapies, there are limitations regarding what comparisons can be made directly from trial data. RCTs are limited to a small number of comparator arms and often compare a new therapeutic to a standard of care that has already proven efficacious. It is sometimes of interest to estimate the efficacy of the new therapy relative to a treatment that was not evaluated in the same trial, such as a placebo or an alternative therapy that was evaluated in a different trial.
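One simple version of such an indirect comparison anchors on a comparator arm shared across two trials (all risks below are invented numbers, not from any trial mentioned here):

```python
# Hypothetical trial summaries:
# Trial 1 compares placebo vs standard; Trial 2 compares standard vs new.
risk_placebo_t1 = 0.30   # risk in placebo arm, trial 1
risk_standard_t1 = 0.20  # risk in standard-of-care arm, trial 1
risk_standard_t2 = 0.18  # risk in standard-of-care arm, trial 2
risk_new_t2 = 0.10       # risk in new-therapy arm, trial 2

# Anchored (bridged) risk difference for new vs placebo:
# difference of within-trial differences, passing through the shared arm
rd_new_vs_placebo = ((risk_new_t2 - risk_standard_t2)
                     + (risk_standard_t1 - risk_placebo_t1))
print(round(rd_new_vs_placebo, 3))  # → -0.18
```

This naive contrast is only valid when the two trial populations are comparable; the fusion estimators described in these articles are designed to formalize and, where possible, relax that requirement.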

Approaches to address measurement error frequently rely on validation data to estimate measurement error parameters (e.g., sensitivity and specificity).

Background: While noninferiority of tenofovir alafenamide and emtricitabine (TAF/FTC) as preexposure prophylaxis (PrEP) for the prevention of human immunodeficiency virus (HIV) has been shown, interest remains in its efficacy relative to placebo. We estimate the efficacy of TAF/FTC PrEP versus placebo for the prevention of HIV infection.

Methods: We used data from the DISCOVER and iPrEx trials to compare TAF/FTC to placebo.

Studies designed to estimate the effect of an action in a randomized or observational setting often do not represent a random sample of the desired target population. Instead, estimates from that study can be transported to the target population. However, transportability methods generally rely on a positivity assumption, such that all relevant covariate patterns in the target population are also observed in the study sample.
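A sketch of one common transport approach, inverse-odds-of-sampling weighting, on invented data (not necessarily the estimator used in the article):

```python
import numpy as np

def expit(x):
    return 1 / (1 + np.exp(-x))

def fit_logit(X, y, iters=50):
    # Newton-Raphson logistic regression
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = expit(X @ b)
        b += np.linalg.solve(X.T * (p * (1 - p)) @ X, X.T @ (y - p))
    return b

rng = np.random.default_rng(11)
n_study, n_target = 4000, 4000

# Hypothetical setting: covariate W differs between study sample and target;
# the outcome Y is observed only in the study (true target mean is 1.0 here)
W_study = rng.normal(0.0, 1.0, n_study)
W_target = rng.normal(1.0, 1.0, n_target)
Y_study = W_study + rng.normal(size=n_study)

# Model P(S=1 | W) in the stacked data, S=1 indicating the study sample
W = np.concatenate([W_study, W_target])
S = np.concatenate([np.ones(n_study), np.zeros(n_target)])
b = fit_logit(np.column_stack([np.ones(W.size), W]), S)

# Inverse-odds-of-sampling weights for study units: P(S=0|W) / P(S=1|W)
p = expit(np.column_stack([np.ones(n_study), W_study]) @ b)
iosw = (1 - p) / p

# Weighted study mean transports to the target population
mu_target = np.average(Y_study, weights=iosw)
```

The positivity requirement is visible here: target values of W far outside the study's support make P(S=1|W) tiny and the weights explode, which is exactly the limitation the abstract refers to.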

We describe an approach to sensitivity analysis introduced by Robins et al. (1999) for the setting where the outcome is missing for some observations. This flexible approach focuses on the relationship between the outcomes and missingness, where data can be missing completely at random, missing at random given observed data, or missing not at random. We provide examples from HIV research that include the sensitivity of the estimation of a mean and proportion under different missingness mechanisms.
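A minimal numeric version of this style of sensitivity analysis for a proportion, on invented data: treat the unidentified mean among the missing as a sensitivity parameter and sweep it over its range.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Hypothetical binary outcome with roughly 30% missingness
y_full = rng.binomial(1, 0.4, n)
observed = rng.binomial(1, 0.7, n).astype(bool)
y_obs = y_full[observed]

p_obs = y_obs.mean()            # proportion among the observed
share_obs = observed.mean()     # fraction of outcomes observed

# Sweep the assumed proportion among the missing from 0 to 1;
# p_missing = p_obs corresponds to the usual MCAR/MAR analysis here
for p_missing in (0.0, 0.25, 0.5, 0.75, 1.0):
    p_overall = p_obs * share_obs + p_missing * (1 - share_obs)
    print(round(p_missing, 2), round(p_overall, 3))

# The endpoints are the worst-case (Manski-style) bounds
p_lower = p_obs * share_obs
p_upper = p_obs * share_obs + (1 - share_obs)
```

The width of the bounds equals the missingness proportion, which makes the cost of missing data concrete; the article's approach indexes the same idea with an interpretable sensitivity parameter rather than the extremes.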

Many research questions in public health and medicine concern sustained interventions in populations defined by substantive priorities. Existing methods to answer such questions typically require a measured covariate set sufficient to control confounding, which can be questionable in observational studies. Difference-in-differences methods rely instead on the parallel trends assumption, allowing for some types of time-invariant unmeasured confounding.
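The basic difference-in-differences contrast can be sketched as follows (simulated two-group, two-period data with an invented true effect of 2.0):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2000

# Hypothetical panel: group G (1 = treated), period T (1 = post), outcome Y.
# The 1.0*G term is time-invariant unmeasured confounding (a fixed group gap),
# and the 0.5*T term is a time trend shared by both groups (parallel trends).
G = rng.binomial(1, 0.5, n)
T = rng.binomial(1, 0.5, n)
Y = 1.0 * G + 0.5 * T + 2.0 * G * T + rng.normal(size=n)

# Difference of within-group pre/post differences
did = ((Y[(G == 1) & (T == 1)].mean() - Y[(G == 1) & (T == 0)].mean())
       - (Y[(G == 0) & (T == 1)].mean() - Y[(G == 0) & (T == 0)].mean()))
print(round(did, 2))
```

Differencing over time removes the fixed group gap, and subtracting the control group's change removes the shared trend, leaving the effect without ever measuring the confounder.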

Background: When accounting for misclassification, investigators make assumptions about whether misclassification is "differential" or "nondifferential." Most guidance on differential misclassification considers settings where outcome misclassification varies across levels of exposure, or vice versa. Here, we examine when covariate-differential misclassification must be considered when estimating overall outcome prevalence.
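For the nondifferential case, the classic Rogan-Gladen correction recovers true prevalence from apparent prevalence given sensitivity and specificity (all numbers below are invented):

```python
# Rogan-Gladen correction for outcome misclassification when estimating
# overall prevalence, assuming a single (nondifferential) Se and Sp
def rogan_gladen(p_apparent, sensitivity, specificity):
    return (p_apparent + specificity - 1) / (sensitivity + specificity - 1)

# Hypothetical numbers: true prevalence 0.10 measured with Se=0.80, Sp=0.95
true_p, se, sp = 0.10, 0.80, 0.95
p_apparent = true_p * se + (1 - true_p) * (1 - sp)   # = 0.125
print(round(rogan_gladen(p_apparent, se, sp), 3))     # → 0.1
```

If Se or Sp instead varies across a covariate, a single overall correction like this can miss the mark, and stratum-specific corrections (standardized back to the population) are needed; when that matters is the question the abstract examines.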

Pooled testing has been successfully used to expand SARS-CoV-2 testing, especially in settings requiring high volumes of screening of lower-risk individuals, but efficiency of pooling declines as prevalence rises. We propose a differentiated pooling strategy that independently optimizes pool sizes for distinct groups with different probabilities of infection to further improve the efficiency of pooled testing. We compared the efficiency (results obtained per test kit used) of the differentiated strategy with a traditional pooling strategy in which all samples are processed using uniform pool sizes under a range of scenarios.
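The efficiency calculation behind Dorfman-style pooling, and per-group optimization of the pool size, can be sketched as follows (the prevalence values are invented; the article's optimization may differ in detail):

```python
def expected_tests_per_sample(pool_size, prevalence):
    # Dorfman pooling: one pool test per k samples, plus k individual
    # retests whenever the pool is positive
    if pool_size == 1:
        return 1.0
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

def best_pool_size(prevalence, max_size=32):
    # Optimal pool size minimizes expected tests per sample
    return min(range(1, max_size + 1),
               key=lambda k: expected_tests_per_sample(k, prevalence))

# Differentiated strategy: optimize the pool size separately per risk group
for p in (0.005, 0.02, 0.10):
    k = best_pool_size(p)
    print(p, k, round(expected_tests_per_sample(k, p), 3))
```

The optimal pool shrinks and the per-sample cost rises as prevalence increases, which is why a single uniform pool size is inefficient when groups have different infection probabilities.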

Missing data are pandemic and a central problem for epidemiology. Missing data reduce precision and can cause notable bias. There remain too few simple published examples detailing types of missing data and illustrating their possible impact on results.
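A small simulated example of how the missingness mechanism drives bias in a complete-case mean (all numbers invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100000
y = rng.normal(10.0, 2.0, n)   # hypothetical measurement, true mean 10

# MCAR: 30% missing, unrelated to anything
mcar_obs = rng.random(n) > 0.3

# MNAR: larger values are more likely to be missing, so missingness
# depends on the (unobserved) value itself
p_miss = 1 / (1 + np.exp(-(y - 10)))
mnar_obs = rng.random(n) > p_miss

mean_mcar = y[mcar_obs].mean()   # ~10: complete-case mean stays unbiased
mean_mnar = y[mnar_obs].mean()   # well below 10: complete-case mean is biased
print(round(mean_mcar, 2), round(mean_mnar, 2))
```

Under MCAR the only cost is precision; under MNAR the complete cases are systematically unrepresentative and the estimate shifts, which is the distinction simple worked examples make vivid.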
