Publications by authors named "Louis A Cox"

Exposure-response associations between fine particulate matter (PM2.5) and mortality have been extensively studied, but potential confounding by daily minimum and maximum temperatures in the weeks preceding death has not been carefully investigated. This paper seeks to close that gap by using lagged partial dependence plots (PDPs), sorted by importance, to quantify how mortality risk depends on lagged values of PM2.5.
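A minimal sketch of the lagged-PDP idea described above, using simulated daily data and scikit-learn; the variable names, lag window, importance measure (permutation importance), and data-generating process are illustrative assumptions rather than the paper's actual data or code.

```python
# Illustrative sketch only: simulated daily series, not the study's data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_days, max_lag = 1500, 14

# Simulated daily PM2.5, minimum/maximum temperature, and death counts.
pm25 = rng.gamma(shape=4.0, scale=3.0, size=n_days)
tmin = 10 + 8 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 2, n_days)
tmax = tmin + 10 + rng.normal(0, 2, n_days)
deaths = rng.poisson(30 + 0.4 * np.maximum(tmax - 30, 0) + 0.2 * np.maximum(5 - tmin, 0))
df = pd.DataFrame({"pm25": pm25, "tmin": tmin, "tmax": tmax, "deaths": deaths})

# Build lagged predictors (lags 0..max_lag) for exposure and temperatures.
lagged = {f"{col}_lag{k}": df[col].shift(k)
          for col in ["pm25", "tmin", "tmax"] for k in range(max_lag + 1)}
X = pd.DataFrame(lagged).dropna()
y = df["deaths"].loc[X.index]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Sort lagged predictors by (permutation) importance.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranked = sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])
print("Top lagged predictors:", ranked[:5])

# Partial dependence of predicted deaths on the top-ranked lagged variable:
# sweep that one column over a grid and average the model's predictions.
top_var = ranked[0][0]
grid = np.linspace(X[top_var].quantile(0.05), X[top_var].quantile(0.95), 20)
pdp = [model.predict(X.assign(**{top_var: v})).mean() for v in grid]
print(list(zip(np.round(grid, 1), np.round(pdp, 2))))
```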


Many recent articles in public health risk assessment have stated that causal conclusions drawn from observational data must rely on inherently untestable assumptions. They claim that such assumptions ultimately can only be evaluated by informed human judgments. We call this the judgment-based approach to causal interpretation of observational results.


Metabolic conversion of benzene (Bz) is thought to be required for the hematotoxic effects observed following Bz exposures. Most safe exposure limits set for Bz rely on epidemiological data on the hematotoxic effects of Bz for their dose-response assessments. These hematotoxic effects occurred among workers exposed to elevated Bz levels, so dose extrapolation is required to assess risks for populations exposed to concentrations that are orders of magnitude lower.

Article Synopsis
  • Using unverified models can produce misleading information about health risks and lead to poor policy choices.
  • A recent claim that gas stoves cause a substantial share of childhood asthma cases is not backed by solid evidence and rests on questionable assumptions.
  • Such models and claims should be verified and validated before they are used to inform rules or decisions affecting public health.

Introduction: Causal epidemiology for regulatory risk analysis seeks to evaluate how removing or reducing exposures would change disease occurrence rates. We define the interventional probability of causation (IPoC) as the change in probability of a disease (or other harm) occurring over a lifetime or other specified time interval that would be caused by a specified change in exposure, as predicted by a fully specified causal model. We define the closely related concept, CAS, as the predicted fraction of disease risk that would be removed or prevented by a specified reduction in exposure, holding other variables fixed.
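A minimal numerical sketch of these two definitions, assuming a hypothetical fully specified causal model in which lifetime disease probability is a known function of exposure; the risk function, exposure levels, and all numbers below are illustrative assumptions, not values from the paper.

```python
# Hypothetical fully specified causal model (all numbers are illustrative).
def p_disease(exposure: float) -> float:
    """Lifetime probability of disease as a function of exposure."""
    background = 0.010                    # assumed background lifetime risk
    return background + 0.004 * exposure  # assumed causal dose-response slope

old_exposure, new_exposure = 2.0, 0.5     # a specified change in exposure

# IPoC: change in the probability of disease occurring over the lifetime
# that would be caused by the specified change in exposure.
ipoc = p_disease(old_exposure) - p_disease(new_exposure)

# CAS: predicted fraction of current disease risk that the specified
# exposure reduction would remove, holding other variables fixed.
cas = ipoc / p_disease(old_exposure)

print(f"IPoC = {ipoc:.4f}")  # 0.0060
print(f"CAS  = {cas:.3f}")   # 0.333
```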


Drawing sound causal inferences from observational data is often challenging for both authors and reviewers. This paper discusses the design and application of an Artificial Intelligence Causal Research Assistant (AIA) that seeks to help authors improve causal inferences and conclusions drawn from epidemiological data in health risk assessments. The AIA-assisted review process provides structured reviews and recommendations for improving the causal reasoning, analyses and interpretations made in scientific papers based on epidemiological data.


We present a Socratic dialogue with ChatGPT, a large language model (LLM), on the causal interpretation of epidemiological associations between fine particulate matter (PM2.5) and human mortality risks. ChatGPT, reflecting probable patterns of human reasoning and argumentation in the sources on which it has been trained, initially holds that "It is well-established that exposure to ambient levels of PM2.5 …"


Several recent news stories have alarmed many politicians and members of the public by reporting that indoor air pollution from gas stoves causes about 13% of childhood asthma in the United States. Research on the reproducibility and trustworthiness of epidemiological risk assessments has identified a number of common questionable research practices (QRPs) that should be avoided to draw sound causal conclusions from epidemiological data. Examples of such QRPs include claiming causation without using study designs or data analyses that allow valid causal inferences; generalizing or transporting risk estimates based on data for specific populations, time periods, and locations to different ones without accounting for differences in the study and target populations; claiming causation without discussing or quantitatively correcting for confounding, external validity bias, or other biases; and not mentioning or resolving contradictory evidence.


Exposure-response curves are among the most widely used tools of quantitative health risk assessment. However, we propose that exactly what they mean is usually left ambiguous, making it impossible to answer such fundamental questions as whether and by how much reducing exposure by a stated amount would change average population risks and distributions of individual risks. Recent concepts and computational methods from causal artificial intelligence (CAI) and machine learning (ML) can be applied to clarify what an exposure-response curve means; what other variables are held fixed (and at what levels) in estimating it; and how much inter-individual variability there is around population average exposure-response curves.
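As a concrete (simulated) illustration of the inter-individual variability point, the sketch below contrasts a population-average exposure-response curve (a partial dependence curve) with individual conditional expectation (ICE) curves in which the other covariate is held fixed at an individual's own value; the data, the effect modifier, and the model are assumptions for illustration, not the paper's analysis.

```python
# Simulated contrast between a population-average exposure-response curve and
# individual (ICE) curves; all data and effects here are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
exposure = rng.uniform(0, 10, n)
age = rng.uniform(30, 80, n)             # covariate held fixed per individual
susceptible = (age > 60).astype(float)   # assumed susceptible subgroup

# Assumed truth: exposure raises risk only in the susceptible subgroup.
risk = 0.02 + 0.01 * susceptible * exposure + rng.normal(0, 0.01, n)

X = np.column_stack([exposure, age])
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, risk)
grid = np.linspace(0, 10, 11)

def ice_curve(age_value: float) -> np.ndarray:
    """Vary exposure while holding this individual's age fixed."""
    return model.predict(np.column_stack([grid, np.full_like(grid, age_value)]))

# Population-average curve: average predictions over everyone's covariates.
pdp = np.array([model.predict(np.column_stack([np.full(n, g), age])).mean() for g in grid])

print("Population-average curve:", np.round(pdp, 3))
print("ICE curve, age 40:       ", np.round(ice_curve(40.0), 3))  # roughly flat
print("ICE curve, age 70:       ", np.round(ice_curve(70.0), 3))  # rising with exposure
```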


In 2022, the US EPA published an important risk assessment concluding that "Compared to the current annual standard, meeting a revised annual standard with a lower level is estimated to reduce PM2.5-associated health risks in the 30 annually-controlled study areas by about 7-9% for a level of 11.0 µg/m³ … and 30-37% for a level of 8.0 µg/m³."


This paper summarizes recent insights into causal biological mechanisms underlying the carcinogenicity of asbestos. It addresses their implications for the shapes of exposure-response curves and considers recent epidemiologic trends in malignant mesotheliomas (MMs) and lung fiber burden studies. Since the commercial amphiboles crocidolite and amosite pose the highest risk of MMs and contain high levels of iron, endogenous and exogenous pathways of iron injury and repair are discussed.


How can and should epidemiologists and risk assessors assemble and present evidence for causation of mortality or morbidities by identified agents such as fine particulate matter or other air pollutants? As a motivating example, some scientists have recently warned that ammonia from meat production significantly increases human mortality rates in exposed populations by increasing the ambient concentration of fine particulate matter (PM2.5) in air. We reexamine the support for such conclusions, including quantitative calculations that attribute deaths to PM2.5.


Population attributable fraction (PAF), probability of causation, burden of disease, and related quantities derived from relative risk ratios are widely used in applied epidemiology and health risk analysis to quantify the extent to which reducing or eliminating exposures would reduce disease risks. This causal interpretation conflates association with causation. It has sometimes led to demonstrably mistaken predictions and ineffective risk management recommendations.
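For reference, the association-based calculations at issue are typically computed from a relative risk and an exposure prevalence alone; a short worked example with illustrative numbers is sketched below to show what the formulas do (and do not) use.

```python
# Standard association-based attribution arithmetic (illustrative numbers only).

p_exposed = 0.30      # assumed prevalence of exposure in the population
relative_risk = 1.5   # assumed relative risk from an observational study

# Levin's population attributable fraction: PAF = p(RR - 1) / (1 + p(RR - 1)).
paf = p_exposed * (relative_risk - 1) / (1 + p_exposed * (relative_risk - 1))
print(f"PAF = {paf:.3f}")  # about 0.130

# Probability of causation for an exposed case, as conventionally computed:
# PC = (RR - 1) / RR.
pc = (relative_risk - 1) / relative_risk
print(f"PC  = {pc:.3f}")   # about 0.333

# Both formulas use only the measured association (RR) and prevalence; nothing
# in them establishes how many cases removing the exposure would actually prevent.
```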


We argue that population attributable fractions, probabilities of causation, burdens of disease, and similar association-based measures often do not provide valid estimates or surrogates for the fraction or number of disease cases that would be prevented by eliminating or reducing an exposure because their calculations do not include crucial mechanistic information. We use a thought experiment with a cascade of dominos to illustrate the need for mechanistic information when answering questions about how changing exposures changes risk. We suggest that modern methods of causal artificial intelligence (CAI) can fill this gap: they can complement and extend traditional epidemiological attribution calculations to provide information useful for risk management decisions.
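A toy version of the domino thought experiment (the cascade is the authors'; the code and numbers are an illustrative sketch): every domino in the chain is necessary for the last one to fall, so necessity-based attribution assigns 100% of the outcome to each of them, yet removing any single domino is sufficient to prevent it; choosing which one to remove requires knowing the mechanism.

```python
# Toy domino cascade: the last domino falls only if every domino is present.
def last_domino_falls(present: list[bool]) -> bool:
    return all(present)

n_dominos = 5
full_chain = [True] * n_dominos
assert last_domino_falls(full_chain)

# Each domino is individually necessary, so a necessity-based attribution
# assigns 100% of the outcome to every one of them (shares summing to 500%),
# even though removing just one is enough to prevent the last domino falling.
for i in range(n_dominos):
    chain = full_chain.copy()
    chain[i] = False
    print(f"Remove domino {i}: last domino falls? {last_domino_falls(chain)}")

# Deciding which domino to remove (the risk-management question) depends on
# mechanistic information about the cascade, not on attribution shares alone.
```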


For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of actions on outcome probabilities, and acceptable risks and trade-offs: the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory.
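As an entirely hypothetical instance of these standard ingredients, the sketch below encodes a tiny causal model mapping actions to outcome probabilities, scores each action by expected utility, and reports the residual risk accepted by the recommended action; the actions, probabilities, and utilities are invented for illustration.

```python
# Tiny expected-utility calculation over a hypothetical causal model linking
# actions to outcome probabilities (all values are illustrative assumptions).
actions = {
    "do_nothing": {"good": 0.60, "bad": 0.40},   # action: {outcome: probability}
    "intervene":  {"good": 0.85, "bad": 0.15},
}
utilities = {"good": 100.0, "bad": -50.0}

def expected_utility(outcome_probs: dict[str, float]) -> float:
    return sum(p * utilities[o] for o, p in outcome_probs.items())

recommendation = max(actions, key=lambda a: expected_utility(actions[a]))
for action, probs in actions.items():
    print(f"{action}: expected utility = {expected_utility(probs):.1f}")
print("Recommended action:", recommendation)

# A trustworthy rationale also states the risk the recommendation accepts.
print("Residual P(bad outcome) under recommendation:", actions[recommendation]["bad"])
```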


Applying risk assessment and management tools to plutonium disposition is a long-standing challenge for the U.S. government.


Are dose-response relationships for benzene and health effects such as myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) supra-linear, with disproportionately high risks at low concentrations, e.g., below 1 ppm? To investigate this hypothesis, we apply recent mode of action (MoA) and mechanistic information and modern data science techniques to quantify air benzene-urinary metabolite relationships in a previously studied data set for factory workers in Tianjin, China.
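One simple version of the kind of data-science fit this describes, using simulated numbers rather than the Tianjin data: fit a power law, metabolite = a·C^b, to air benzene concentration versus urinary metabolite level and check whether the estimated exponent b is below 1 (a supra-linear shape, with disproportionately high metabolite production per ppm at low concentrations).

```python
# Simulated sketch (not the Tianjin data or the paper's model): estimate the
# exponent b in metabolite = a * C**b by ordinary least squares on log scales.
import numpy as np

rng = np.random.default_rng(2)
air_benzene = rng.lognormal(mean=0.0, sigma=1.0, size=300)   # ppm (simulated)
true_a, true_b = 50.0, 0.8                                   # assumed values
metabolite = true_a * air_benzene**true_b * rng.lognormal(0.0, 0.3, 300)

# log(y) = log(a) + b * log(C)
X = np.column_stack([np.ones_like(air_benzene), np.log(air_benzene)])
coef, *_ = np.linalg.lstsq(X, np.log(metabolite), rcond=None)
log_a_hat, b_hat = coef
print(f"Estimated exponent b = {b_hat:.2f}  (b < 1 suggests a supra-linear shape)")
```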


Causal inference regarding exposures to ambient fine particulate matter (PM) and mortality estimated from observational studies is limited by confounding, among other factors. In light of the variety of causal inference frameworks and methods developed over the past century specifically to quantify causal effects, three research teams were selected in 2016 to evaluate the causality of the PM-mortality association among Medicare beneficiaries, using their own selections of causal inference methods and study designs but the same data sources. With a particular focus on controlling for unmeasured confounding, two research teams adopted an instrumental variables approach under a quasi-experiment or natural experiment study design, whereas one team adopted a structural nested mean model under the traditional cohort study design.
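To make the instrumental-variables idea concrete, here is a bare-bones two-stage least squares (2SLS) estimate on simulated data; the instrument, the variable names, and the data-generating process (including the assumed true effect of 0.5) are illustrative assumptions, not the research teams' actual models or data.

```python
# Bare-bones 2SLS sketch on simulated data with an unmeasured confounder.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
confounder = rng.normal(size=n)           # unmeasured common cause
instrument = rng.normal(size=n)           # assumed valid instrument: affects exposure,
                                          # affects the outcome only through exposure
pm = 10 + 1.0 * instrument + 1.0 * confounder + rng.normal(size=n)
mortality = 50 + 0.5 * pm + 2.0 * confounder + rng.normal(size=n)  # assumed true effect = 0.5

def ols_fit(x, y):
    """Return (intercept, slope) from a simple least-squares regression."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive regression of mortality on exposure is biased by the confounder.
_, naive_slope = ols_fit(pm, mortality)
print(f"Naive OLS slope: {naive_slope:.2f}")   # noticeably above 0.5

# Stage 1: regress exposure on the instrument and keep the fitted values.
a1, b1 = ols_fit(instrument, pm)
pm_hat = a1 + b1 * instrument
# Stage 2: regress the outcome on the fitted exposure; with a valid instrument
# this recovers the assumed causal effect (about 0.5) despite the confounding.
_, iv_slope = ols_fit(pm_hat, mortality)
print(f"2SLS slope:      {iv_slope:.2f}")
```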


Do faster slaughter line speeds for young chickens increase the risk of Salmonella contamination? We analyze data collected in 2018-2019 from 97 slaughter establishments processing young chickens to examine the extent to which differences in slaughter line speeds across establishments operating under the same inspection system explain observed differences in their microbial quality, specifically frequencies of positive Salmonella samples. A variety of off-the-shelf statistical and machine learning techniques applied to the data to identify and visualize correlations and potential causal relationships among variables showed that the presence of Salmonella and other indicators of process control, such as noncompliance records for regulations associated with process control and food safety, are not significantly increased in establishments with higher line speeds.
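A stripped-down example of the kind of off-the-shelf analysis described, on simulated establishment-level data rather than the 2018-2019 dataset: regress the Salmonella-positive sample frequency on line speed and examine the slope and its significance; the speed range and the positivity rates are assumptions for illustration only.

```python
# Simulated establishment-level sketch (not the 2018-2019 data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_establishments = 97
line_speed = rng.uniform(70, 175, n_establishments)   # birds per minute (assumed range)

# Assumed data-generating process: positivity frequency unrelated to line speed.
positive_rate = np.clip(rng.normal(0.08, 0.03, n_establishments), 0.0, 1.0)

slope, intercept, r, p_value, stderr = stats.linregress(line_speed, positive_rate)
print(f"Slope = {slope:.5f} per bird/min, r = {r:.2f}, p = {p_value:.2f}")

# The true slope here is zero by construction; the study similarly reported no
# significant increase in Salmonella positivity at higher line speeds.
```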


Decision analysis and risk analysis have grown up around a set of organizing questions: what might go wrong, how likely is it to do so, how bad might the consequences be, what should be done to maximize expected utility and minimize expected loss or regret, and how large are the remaining risks? In probabilistic causal models capable of representing unpredictable and novel events, probabilities for what will happen, and even for what is possible, cannot necessarily be determined in advance. Standard decision and risk analysis questions become inherently unanswerable ("undecidable") for realistically complex causal systems with "open-world" uncertainties about what exists, what can happen, what other agents know, and how they will act. Recent artificial intelligence (AI) techniques enable agents …


In the first half of 2020, much excitement in news media and in some peer-reviewed scientific articles was generated by the discovery that fine particulate matter (PM2.5) concentrations and COVID-19 mortality rates are statistically significantly positively associated in some regression models. This article points out that they are non-significantly negatively associated in other regression models, once omitted confounders (such as latitude and longitude) are included.
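A small simulation of the omitted-confounder point (not the COVID-19 analyses themselves): when a variable such as latitude drives both the exposure and the outcome, leaving it out of the regression produces a positive exposure coefficient that shrinks toward zero, or reverses, once it is included; the data-generating process below is an illustrative assumption.

```python
# Simulated omitted-variable illustration (all relationships are assumed).
import numpy as np

rng = np.random.default_rng(5)
n = 3000
latitude = rng.uniform(25, 49, n)                             # assumed confounder
pm25 = 12 - 0.15 * latitude + rng.normal(0, 1, n)             # exposure depends on latitude
covid_mortality = 5 - 0.10 * latitude + rng.normal(0, 1, n)   # outcome depends on latitude,
                                                              # not on PM2.5, in this simulation

def ols_coefs(predictors, y):
    X = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_without = ols_coefs([pm25], covid_mortality)[1]
b_with = ols_coefs([pm25, latitude], covid_mortality)[1]
print(f"PM2.5 coefficient, latitude omitted:  {b_without:+.3f}")  # spuriously positive
print(f"PM2.5 coefficient, latitude included: {b_with:+.3f}")     # near zero
```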


We examine how Bayesian network (BN) learning and analysis methods can help to meet several methodological challenges that arise in interpreting significant regression coefficients in exposure-response regression modeling. As a motivating example, we consider the challenge of interpreting positive regression coefficients for blood lead level (BLL) as a predictor of mortality risk for nonsmoking men. We first note that practices such as dichotomizing or categorizing continuous confounders …
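A small simulation of the residual-confounding issue raised here, using made-up numbers rather than the paper's data: adjusting for a dichotomized version of a continuous confounder (age) leaves part of its effect behind, which can appear as a positive coefficient on the exposure (BLL) even when the exposure has no effect in the simulated model.

```python
# Simulated residual confounding from dichotomizing a continuous confounder.
import numpy as np

rng = np.random.default_rng(6)
n = 20000
age = rng.uniform(40, 85, n)                            # continuous confounder
bll = 1.0 + 0.03 * age + rng.normal(0, 0.5, n)          # blood lead level rises with age
mortality_risk = 0.001 * age + rng.normal(0, 0.02, n)   # risk depends on age only

def coef_on_bll(adjusters):
    """Coefficient on BLL from a linear regression with the given adjusters."""
    X = np.column_stack([np.ones(n), bll] + adjusters)
    return np.linalg.lstsq(X, mortality_risk, rcond=None)[0][1]

age_binary = (age >= 65).astype(float)   # dichotomized confounder (65+ vs. under 65)

print(f"BLL coefficient, unadjusted:             {coef_on_bll([]):+.5f}")
print(f"BLL coefficient, adjusted for age >= 65: {coef_on_bll([age_binary]):+.5f}")
print(f"BLL coefficient, adjusted for exact age: {coef_on_bll([age]):+.5f}")

# The dichotomized adjustment leaves a clearly positive BLL coefficient even
# though BLL has no causal effect on risk in this simulation; adjusting for the
# continuous confounder removes it.
```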
