36  Reporting Standards for Causal-Dynamical Claims

Status: Draft (v0.2)

36.1 Learning Objectives

After reading this chapter, you will be able to:

  • Use a comprehensive template for reporting causal-dynamical analyses
  • Ensure all necessary components (estimand, assumptions, identification, estimator, diagnostics, sensitivity, limitations) are included
  • Avoid common reporting pitfalls
  • Create “reviewer-proof” causal claims

36.2 Introduction

Causal claims require careful justification. This chapter provides a “reviewer-proof” template for reporting causal-dynamical analyses, ensuring all necessary components are included and clearly communicated.

36.3 The Reporting Template

36.3.1 1. Estimand

What do we want to estimate?

  • Clear definition: State the causal quantity of interest
  • Notation: Use standard notation (\(do(\cdot)\), counterfactuals)
  • Population: Define the target population
  • Time horizon: Specify the time period of interest

Example: “We estimate the average treatment effect (ATE) of early vaccination on 6-month recovery, defined as \(\mathbb{E}[Y \mid do(A=1)] - \mathbb{E}[Y \mid do(A=0)]\) in the target population.”
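To make the estimand concrete, here is a minimal hypothetical sketch (not part of the chapter's vaccination example): in a simulation where both potential outcomes are known, the ATE is simply the average of the unit-level contrasts. The effect size 0.3 and the normal outcome model are illustrative assumptions of this sketch.

```python
import numpy as np

# Hypothetical simulation: both potential outcomes are known for every unit.
# In real data only one of the two is observed; the estimand is still defined
# on the full (partly unobservable) pair.
rng = np.random.default_rng(0)
n = 100_000
y0 = rng.normal(0.0, 1.0, n)   # outcome under do(A=0)
y1 = y0 + 0.3                  # outcome under do(A=1); true effect 0.3 (assumed)
ate = (y1 - y0).mean()         # E[Y | do(A=1)] - E[Y | do(A=0)]
print(round(ate, 3))           # → 0.3
```

The point of the sketch is that the estimand is defined before any estimation: it is a property of the potential outcomes, not of a fitted model.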

36.3.2 2. Assumptions

What assumptions are we making?

  • Causal structure: State the causal graph or structural assumptions
  • Identifiability: What makes the estimand identifiable?
  • No unmeasured confounding: Explicitly state or test
  • Positivity: Treatment assignment probabilities
  • Model assumptions: Functional forms, distributions

Example: “We assume no unmeasured confounding conditional on observed confounders \(L_t\), represented by the causal graph \(L_t \rightarrow A_t \rightarrow Y_t\), \(L_t \rightarrow Y_t\).”
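The stated graph can also be written down explicitly when reporting. As an illustrative sketch (the dictionary encoding is an assumption of this example, not a standard), for this particular graph the parents of the treatment form a valid backdoor adjustment set:

```python
# Hypothetical sketch: encode the stated causal graph as a parent map
# and read off a backdoor adjustment set for the effect of A_t on Y_t.
parents = {
    "A_t": {"L_t"},         # L_t -> A_t
    "Y_t": {"A_t", "L_t"},  # A_t -> Y_t, L_t -> Y_t
    "L_t": set(),
}

# When all confounders are observed, adjusting for the parents of the
# treatment blocks every backdoor path from treatment to outcome.
adjustment_set = parents["A_t"]
print(adjustment_set)       # {'L_t'}
```

Reporting the graph in a machine-readable form like this makes the identification step auditable by reviewers.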

36.3.3 3. Identification

How do we express the estimand in terms of observables?

  • Identification result: Show how \(P(Y \mid do(\cdot))\) relates to the observational distribution \(P(Y, A, L)\)
  • Graphical criteria: Use backdoor/frontdoor/instrumental variable criteria
  • G-formula: Show the identifying formula

Example: “Under consistency, positivity, and no unmeasured confounding given \(L\), the ATE is identified as \(\mathbb{E}[Y \mid A=1, L] - \mathbb{E}[Y \mid A=0, L]\) averaged over the marginal distribution \(P(L)\).”
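The identifying formula can be computed directly by standardisation. Here is a minimal sketch under assumed simulated data (binary confounder, linear outcome model, true effect 0.3 — all assumptions of this example): estimate \(\mathbb{E}[Y \mid A=a, L=l]\) by group means, then average over \(P(L)\).

```python
import numpy as np

# Hypothetical sketch of the adjustment (g-) formula: estimate
# E[Y | A=a, L=l] by stratum means, then average over the marginal P(L).
rng = np.random.default_rng(1)
n = 200_000
L = rng.binomial(1, 0.5, n)                  # observed binary confounder
A = rng.binomial(1, 0.2 + 0.6 * L)           # treatment depends on L
Y = 0.3 * A + 0.5 * L + rng.normal(0, 1, n)  # true effect of A is 0.3 (assumed)

ate = 0.0
for l in (0, 1):
    p_l = (L == l).mean()                    # P(L = l)
    mu1 = Y[(A == 1) & (L == l)].mean()      # E[Y | A=1, L=l]
    mu0 = Y[(A == 0) & (L == l)].mean()      # E[Y | A=0, L=l]
    ate += p_l * (mu1 - mu0)                 # standardisation over P(L)
print(round(ate, 2))                         # close to the true 0.3
```

Note that the naive unadjusted contrast \(\mathbb{E}[Y \mid A=1] - \mathbb{E}[Y \mid A=0]\) would be biased here, since \(L\) raises both the treatment probability and the outcome.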

36.3.4 4. Estimator

How do we estimate the identified quantity?

  • Method: Regression, matching, IPTW, TMLE, etc.
  • Implementation: Software, packages, algorithms
  • Hyperparameters: Tuning parameters, model selection
  • Code: Provide reproducible code

Example: “We use targeted maximum likelihood estimation (TMLE) implemented in the tmle R package, with Super Learner for initial outcome and treatment models.”
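The chapter's example uses TMLE in R. As a simpler, hedged illustration of what a reported estimator might look like, here is an inverse-probability-of-treatment-weighting (IPTW) sketch in Python; the single binary confounder, the stratum-mean propensity model, and the simulated data are all assumptions of this sketch, not the chapter's setup:

```python
import numpy as np

# Hypothetical IPTW sketch (the chapter's example uses TMLE in R).
# With a single binary confounder, the propensity score P(A=1 | L) can be
# estimated by the treated fraction within each stratum of L.
rng = np.random.default_rng(2)
n = 200_000
L = rng.binomial(1, 0.5, n)
A = rng.binomial(1, 0.2 + 0.6 * L)
Y = 0.3 * A + 0.5 * L + rng.normal(0, 1, n)   # true ATE is 0.3 (assumed)

e = np.where(L == 1, A[L == 1].mean(), A[L == 0].mean())  # fitted propensity
w = A / e + (1 - A) / (1 - e)                 # inverse-probability weights

# Hajek (weight-normalised) IPTW estimate of the ATE.
treated = (w * Y)[A == 1].sum() / w[A == 1].sum()
control = (w * Y)[A == 0].sum() / w[A == 0].sum()
print(round(treated - control, 2))            # close to the true 0.3
```

Whatever the estimator, the reporting requirement is the same: name the method, the software, and the nuisance models, and make the code available.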

36.3.5 5. Diagnostics

How do we check the model and assumptions?

  • Model criticism: Calibration, posterior predictive checks (PPCs), residuals
  • Assumption checks: Test for unmeasured confounding, positivity violations
  • Sensitivity analysis: How robust are results?
  • Validation: Out-of-domain, cross-validation

Example: “Posterior predictive checks show good calibration (calibration slope = 0.98). Sensitivity analysis suggests results are robust to moderate unmeasured confounding (E-value = 2.3).”
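One diagnostic that is easy to report concretely is a positivity check on the estimated propensity scores. A minimal sketch, assuming stand-in scores and illustrative 0.01/0.99 cut-offs (neither is a standard from this chapter):

```python
import numpy as np

# Hypothetical positivity diagnostic: with estimated propensity scores in
# hand, report their range and flag near-violations. The 0.01/0.99 cut-offs
# are illustrative choices, not a fixed standard.
rng = np.random.default_rng(3)
e_hat = np.clip(rng.beta(2, 2, 10_000), 1e-6, 1 - 1e-6)  # stand-in scores

lo, hi = e_hat.min(), e_hat.max()
n_flagged = int(((e_hat < 0.01) | (e_hat > 0.99)).sum())
print(f"propensity range: [{lo:.3f}, {hi:.3f}], flagged: {n_flagged}")
```

Reporting the score range and the number of flagged units lets a reviewer judge whether weighting- or regression-based estimates rest on extrapolation.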

36.3.6 6. Sensitivity

How sensitive are results to assumptions?

  • Sensitivity parameters: Unmeasured confounding strength, model misspecification
  • Sensitivity analysis: Vary assumptions, report how results change
  • Robustness: Worst-case scenarios, bounds

Example: “Under the assumption that unmeasured confounders have odds ratio ≤ 2.0 with both treatment and outcome, the ATE remains positive (95% CI: [0.1, 0.5]).”
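The E-value reported in the diagnostics example has a closed form (VanderWeele and Ding): for an observed risk ratio \(\mathrm{RR} \ge 1\), \(E = \mathrm{RR} + \sqrt{\mathrm{RR}(\mathrm{RR}-1)}\). A minimal sketch; the input value 1.8 is illustrative:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio: the minimum strength of
    association an unmeasured confounder would need with both treatment
    and outcome to fully explain away the observed RR."""
    if rr < 1:
        rr = 1 / rr                 # use the inverse for protective effects
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(1.8), 3))       # → 3.0
```

An E-value of 3.0 means a confounder would need risk-ratio associations of at least 3.0 with both treatment and outcome to reduce the observed effect to the null, which is the kind of concrete sensitivity statement this section asks for.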

36.3.7 7. Limitations

What are the limitations?

  • Assumptions: Which assumptions may not hold?
  • Data: What data limitations affect results?
  • Generalisation: To what populations/domains do results apply?
  • Uncertainty: What uncertainties remain?

Example: “Results assume no unmeasured confounding, which may not hold if unobserved factors (e.g., genetic variants) affect both treatment and outcome. Generalisation to other populations requires transportability assumptions.”

36.4 Checklists

36.4.1 Transportability Checklist

  • Define the target population and state how it differs from the study population
  • State the transportability assumptions needed to carry the effect to the target domain
  • Where possible, validate transported estimates in the target domain

36.4.2 Robustness Checklist

  • Report sensitivity parameters (e.g., plausible strength of unmeasured confounding)
  • Vary key assumptions and report how estimates change
  • Report bounds or worst-case scenarios (e.g., an E-value) alongside point estimates

36.5 Common Reporting Pitfalls

36.5.1 Pitfall 1: Missing Assumptions

Problem: Not stating assumptions explicitly.

Solution: Use the template—always state assumptions.

36.5.2 Pitfall 2: Confusing Association and Causation

Problem: Reporting associations as causal effects.

Solution: Use \(do(\cdot)\) notation, state identification assumptions.

36.5.3 Pitfall 3: Ignoring Uncertainty

Problem: Not reporting uncertainty (confidence intervals, sensitivity).

Solution: Always report uncertainty and sensitivity analysis.

36.5.4 Pitfall 4: Overstating Generalisation

Problem: Claiming results generalise without justification.

Solution: State transportability assumptions, validate in target domain.

36.6 Key Takeaways

  1. Use the reporting template to ensure all necessary components are included
  2. Always state assumptions explicitly
  3. Use standard notation (\(do(\cdot)\), counterfactuals) for clarity
  4. Report uncertainty and sensitivity analysis
  5. State limitations and generalisation assumptions
  6. Use checklists to avoid common pitfalls

36.7 Further Reading

  • Hernán and Robins (2020): Causal Inference: What If — comprehensive guide to reporting causal analyses
  • VanderWeele (2015): Explanation in Causal Inference — reporting standards for causal mediation and interaction
  • Rothman et al. (2021): Modern Epidemiology (4th ed.) — reporting standards for epidemiological studies