17  Intervening in Deterministic Systems

Status: Draft

v0.4

17.1 Learning Objectives

After reading this chapter, you will be able to:

  • Understand how to intervene in deterministic dynamical systems (ODEs)
  • Modify ODEs through the \(do(\cdot)\) operator
  • Evaluate policies in deterministic systems
  • Apply optimal control in deterministic systems

17.2 Introduction

This chapter shows how to intervene in deterministic dynamical systems: modifying ODEs through the \(do(\cdot)\) operator and evaluating policies. This is part of Doing in the Dynamical world: how can we intervene in and modify dynamic processes?

17.3 Interventions in ODEs

17.3.1 The Intervention Operator

When we intervene \(do(A_t = a)\) in a deterministic system, we modify the structural equation for \(A_t\):

\[ A_t \coloneqq a \quad \text{(instead of } \pi(H_t, U^A_t) \text{)} \]

This structural change propagates through the ODE system.

17.3.2 How Interventions Modify System Dynamics

Steps:

  1. Set the intervention: \(do(A_t = a)\) for all \(t\) in the intervention period
  2. Modify the ODE: replace \(A_t\) with the constant \(a\)
  3. Solve the ODE: integrate the modified system forward
  4. Observe the effects: examine how the system evolves under the intervention

17.3.3 Example: Vaccination in SIR Model

Consider an SIR model with vaccination intervention:

\[ \begin{aligned} \frac{dS}{dt} &= -\beta S I - \nu \cdot do(V = 1) \\ \frac{dI}{dt} &= \beta S I - \gamma I \\ \frac{dR}{dt} &= \gamma I + \nu \cdot do(V = 1) \end{aligned} \]

Under intervention \(do(V = 1)\), vaccination rate \(\nu\) is applied, moving individuals directly from \(S\) to \(R\).

17.3.4 Implementation: Intervention in ODEs

We can implement interventions in ODEs using DifferentialEquations.jl. Here’s an example with the SIR model:

# Find project root and include ensure_packages.jl
project_root = let
    current = pwd()
    while !isfile(joinpath(current, "Project.toml")) && !isfile(joinpath(current, "_quarto.yml"))
        parent = dirname(current)
        parent == current && break
        current = parent
    end
    current
end
include(joinpath(project_root, "scripts", "ensure_packages.jl"))
@auto_using DifferentialEquations CairoMakie

# SIR model parameters
β = 0.3  # Transmission rate
γ = 0.1  # Recovery rate
ν = 0.05  # Vaccination rate (when intervention is active)

# Original SIR model (no intervention)
function sir_original!(du, u, p, t)
    # SIR epidemic model: Susceptible → Infected → Recovered
    S, I, R = u
    du[1] = -β * S * I  # dS/dt
    du[2] = β * S * I - γ * I  # dI/dt
    du[3] = γ * I  # dR/dt
end

# SIR model under intervention do(V = 1) (vaccination active)
function sir_intervened!(du, u, p, t)
    # SIR model under the vaccination intervention do(V = 1)
    S, I, R = u
    # Intervention: do(V = 1) means vaccination is active
    du[1] = -β * S * I - ν  # dS/dt: vaccination moves S → R
    du[2] = β * S * I - γ * I  # dI/dt: unchanged
    du[3] = γ * I + ν  # dR/dt: vaccination adds to recovered
end

# Initial conditions
u0 = [0.99, 0.01, 0.0]  # 99% susceptible, 1% infected, 0% recovered
tspan = (0.0, 100.0)

# Solve original system
prob_original = ODEProblem(sir_original!, u0, tspan)
sol_original = solve(prob_original, Tsit5())

# Solve intervened system
prob_intervened = ODEProblem(sir_intervened!, u0, tspan)
sol_intervened = solve(prob_intervened, Tsit5())

# Visualise comparison
let
    fig = Figure(size = (1000, 600))
    ax1 = Axis(fig[1, 1], title = "Susceptible (S)", xlabel = "Time", ylabel = "Proportion")
    ax2 = Axis(fig[1, 2], title = "Infected (I)", xlabel = "Time", ylabel = "Proportion")
    ax3 = Axis(fig[2, 1], title = "Recovered (R)", xlabel = "Time", ylabel = "Proportion")
    ax4 = Axis(fig[2, 2], title = "Peak Infection Comparison", xlabel = "Time", ylabel = "Infected")
    
    lines!(ax1, sol_original.t, [u[1] for u in sol_original.u], label = "Original", linewidth = 2)
    lines!(ax1, sol_intervened.t, [u[1] for u in sol_intervened.u], label = "Intervened (do(V=1))", linewidth = 2, linestyle = :dash)
    axislegend(ax1)
    
    lines!(ax2, sol_original.t, [u[2] for u in sol_original.u], label = "Original", linewidth = 2)
    lines!(ax2, sol_intervened.t, [u[2] for u in sol_intervened.u], label = "Intervened (do(V=1))", linewidth = 2, linestyle = :dash)
    axislegend(ax2)
    
    lines!(ax3, sol_original.t, [u[3] for u in sol_original.u], label = "Original", linewidth = 2)
    lines!(ax3, sol_intervened.t, [u[3] for u in sol_intervened.u], label = "Intervened (do(V=1))", linewidth = 2, linestyle = :dash)
    axislegend(ax3)
    
    lines!(ax4, sol_original.t, [u[2] for u in sol_original.u], label = "Original", linewidth = 2)
    lines!(ax4, sol_intervened.t, [u[2] for u in sol_intervened.u], label = "Intervened (do(V=1))", linewidth = 2, linestyle = :dash)
    axislegend(ax4)
    
    fig  # Only this gets displayed
end

# Compare peak infection
peak_original = maximum([u[2] for u in sol_original.u])
peak_intervened = maximum([u[2] for u in sol_intervened.u])
println("Peak infection rate:")
println("  Original: ", round(peak_original, digits=3))
println("  Intervened (do(V=1)): ", round(peak_intervened, digits=3))
println("  Reduction: ", round((1 - peak_intervened/peak_original) * 100, digits=1), "%")
Peak infection rate:
  Original: 0.302
  Intervened (do(V=1)): 0.033
  Reduction: 89.2%

17.4 Taxonomy of Continuous-Time Interventions

The interventions introduced above can be formalised into a taxonomy that distinguishes four fundamental types. Each type has distinct mathematical properties and appropriate domains of application.

17.4.1 Point Interventions

Point interventions are instantaneous interventions at a single time \(t^*\):

\[ X(t^*) = x^* \]

Equivalently, they can be represented as Dirac delta forcing in the dynamics:

\[ \dot{X} = f(X) + c \cdot \delta(t - t^*) \]

where \(\delta(t - t^*)\) is the Dirac delta distribution concentrated at \(t^*\). The constant \(c\) determines the magnitude of the instantaneous impulse.

Mathematical properties:

  • The intervention creates a discontinuity: the state jumps from its pre-intervention value to \(x^*\) (or receives an impulse)
  • The modified ODE is solved in two phases: integrate from initial conditions to \(t^*\), apply the intervention, then integrate forward from the new state
  • Point interventions apply when the intervention duration is negligible compared to the system’s characteristic timescales

Biological example: A bolus drug injection delivers a fixed dose instantaneously. The drug concentration jumps at the injection time, then decays according to the system’s natural dynamics (e.g., clearance).
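A point intervention can be implemented with a callback that fires at \(t^*\). Below is a minimal sketch using DifferentialEquations.jl's `PresetTimeCallback` on a one-compartment clearance model; the clearance rate `k`, bolus size `c`, and injection time are illustrative assumptions, not values from the chapter.

```julia
using DifferentialEquations

# One-compartment clearance model: dC/dt = -k*C (k is a hypothetical rate)
k = 0.5
clearance!(du, u, p, t) = (du[1] = -k * u[1])

# Point intervention: a bolus of size c delivered instantaneously at t*
t_star, c = 2.0, 1.0
bolus!(integrator) = (integrator.u[1] += c)
cb = PresetTimeCallback([t_star], bolus!)

# Two-phase solve handled automatically: integrate to t*, apply the jump,
# then continue with the natural decay dynamics
prob = ODEProblem(clearance!, [0.0], (0.0, 10.0))
sol = solve(prob, Tsit5(), callback = cb)
```

The callback realises the Dirac-delta forcing numerically: the solver stops exactly at \(t^*\), applies the discontinuous state update, and restarts integration from the post-intervention state.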

17.4.2 Sustained Interventions

Sustained interventions hold a variable at a constant value over an interval \([t_1, t_2]\):

\[ X(t) = x^* \quad \text{for } t \in [t_1, t_2] \]

This is equivalent to clamping: the variable is removed from the dynamics during the intervention period. Downstream variables that depend on \(X\) receive the fixed value \(x^*\) instead of the natural evolution of \(X\).

Mathematical formulation: During \([t_1, t_2]\), \(X\) is held at \(x^*\) and treated as exogenous (not governed by \(\dot{X} = f(X)\)), so the system reduces to fewer state variables during the intervention. At \(t_2\), integration resumes with \(X(t_2) = x^*\) as the initial condition for the natural dynamics.

Biological example: Continuous drug infusion maintains a constant drug concentration (or infusion rate) over time. The intervention sustains the treatment level rather than delivering it as a single pulse.
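Clamping over an interval can be sketched with a plain forward-Euler loop, without any solver library. All numbers here (growth rate `r`, clamp value `xstar`, window \([t_1, t_2]\)) are illustrative assumptions.

```julia
# Forward-Euler sketch of a sustained intervention: clamp X at x* over [t1, t2]
function simulate_clamped(x0, r, xstar, t1, t2; T = 10.0, dt = 0.01)
    x, t = x0, 0.0
    xs = Float64[]
    while t < T
        if t1 <= t <= t2
            x = xstar            # X is exogenous during the intervention window
        else
            x += dt * (r * x)    # natural dynamics: dX/dt = r*X
        end
        push!(xs, x)
        t += dt
    end
    xs
end

traj = simulate_clamped(1.0, 0.1, 2.0, 3.0, 6.0)
```

During \([t_1, t_2]\) the trajectory is flat at \(x^*\); after \(t_2\) the natural exponential growth resumes from \(x^*\) rather than from where the unclamped trajectory would have been.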

17.4.3 Stochastic Interventions

Stochastic interventions specify that the intervened variable is drawn from a distribution \(G\) rather than fixed at a value:

\[ do(X \sim G) \quad \text{instead of } do(X = x) \]

This generalises the standard \(do(X = x)\) to allow for uncertainty or heterogeneity in the intervention.

Connection to epidemiology: In shift interventions, the distribution of \(X\) is shifted (e.g., \(do(X \sim X + \delta)\)). Stochastic interventions formalise this by specifying a target distribution \(G\).

Mathematical formulation:

\[ P(Y \mid do(X \sim G)) = \int P(Y \mid do(X = x)) \, dG(x) \]

The interventional distribution under \(do(X \sim G)\) is the mixture of interventional distributions \(P(Y \mid do(X = x))\) weighted by \(G\).

Biological example: Heterogeneous treatment response—when the “same” treatment produces different effects across individuals, modelling \(do(X \sim G)\) captures the distribution of responses. For instance, a drug dose may vary around a target due to pharmacokinetic variability.
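The mixture formula above can be checked by Monte Carlo: draw doses from \(G\), evaluate the interventional outcome at each draw, and average. The model is an illustrative assumption, with a closed-form outcome \(y(x) = x \, e^{-kT}\) for an initial dose \(x\) decaying at rate \(k\); because the outcome is linear in the dose, the mixture mean equals the outcome at the mean dose.

```julia
using Random, Statistics

# Monte Carlo sketch of P(Y | do(X ~ G)) as a mixture over do(X = x)
k, T = 0.5, 2.0
outcome(x) = x * exp(-k * T)          # E[Y | do(X = x)] under the toy model

Random.seed!(1)
G = () -> 1.0 + 0.2 * randn()         # G: dose varies around a target of 1.0
draws = [outcome(G()) for _ in 1:100_000]

# For a linear outcome, E[Y | do(X ~ G)] = E_G[outcome(X)] = outcome(E_G[X])
mean(draws)                           # ≈ outcome(1.0)
```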

17.4.4 Dynamic Regimes

Dynamic regimes make the intervention depend on the current state (or history) of the system:

\[ do(X(t) = g(Y(t))) \]

where \(g\) is a function of the observed state \(Y(t)\) (or more generally, the history \(H_t\)). The intervention is not fixed in advance but adapts as the system evolves.

Feedback control as a dynamic regime: A controller that adjusts the input \(u(t)\) based on the current state \(x(t)\) implements a dynamic regime: \(u(t) = \pi(x(t))\).

Connection to optimal control theory: Optimal control seeks the policy \(\pi^*\) that maximises (or minimises) an objective. The control law \(\pi^*\) defines a dynamic regime.

Mathematical formulation:

\[ \dot{X} = f(X) + u(X, t) \]

where \(u\) is the control policy. The intervention \(u(X, t)\) depends on both state and time, making the system a closed-loop controlled system.

Biological example: Adaptive dosing protocols adjust drug dose based on measured biomarkers (e.g., therapeutic drug monitoring). The dose at time \(t\) depends on the patient’s current state rather than being fixed in advance.
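A dynamic regime \(u(t) = \pi(x(t))\) can be sketched with proportional feedback control in a forward-Euler loop; the gain `Kp`, growth rate `r`, and setpoint are illustrative assumptions.

```julia
# Forward-Euler sketch of a dynamic regime u(t) = π(x(t)): proportional
# control steering dX/dt = r*X + u toward a setpoint
function simulate_feedback(x0; r = 0.2, Kp = 1.0, setpoint = 0.0, T = 20.0, dt = 0.01)
    x = x0
    for _ in 1:Int(T / dt)
        u = -Kp * (x - setpoint)     # policy π: push x toward the setpoint
        x += dt * (r * x + u)        # closed-loop dynamics
    end
    x
end

simulate_feedback(1.0)   # decays toward the setpoint, since Kp > r
```

Unlike a point or sustained intervention, the input here is not fixed in advance: the closed-loop rate \(r - K_p\) is negative, so the state is driven to the setpoint even though the uncontrolled system would grow.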

17.4.5 Summary Table

| Type | Mathematical form | When to use | Biological example |
|------|-------------------|-------------|--------------------|
| Point | \(X(t^*) = x^*\) or \(\dot{X} = f(X) + c \cdot \delta(t - t^*)\) | Instantaneous, negligible duration | Bolus drug injection |
| Sustained | \(X(t) = x^*\) for \(t \in [t_1, t_2]\) | Constant intervention over an interval | Continuous drug infusion |
| Stochastic | \(do(X \sim G)\); \(P(Y \mid do(X \sim G)) = \int P(Y \mid do(X = x)) \, dG(x)\) | Heterogeneous or uncertain intervention | Heterogeneous treatment response |
| Dynamic regime | \(do(X(t) = g(Y(t)))\); \(\dot{X} = f(X) + u(X, t)\) | State-dependent, adaptive intervention | Adaptive dosing protocols |

17.5 Policy Evaluation in Deterministic Systems

17.5.1 Evaluating Treatment Policies

Question: What is the expected outcome under policy \(\pi\)?

\[ \mathbb{E}[Y^{do(\pi)}] = \mathbb{E}[Y \mid \text{policy } \pi] \]

Method: Simulate ODE under policy and compute expected outcomes.

17.5.2 Optimal Control

Question: What policy maximises expected outcome?

\[ \pi^* = \arg\max_\pi \mathbb{E}[Y_T^{do(\pi)}] \]

Methods: Dynamic programming, Pontryagin’s maximum principle.

17.5.3 Implementation: Policy Evaluation

We can evaluate policies by simulating the ODE under different treatment strategies:

# Find project root and include ensure_packages.jl
project_root = let
    current = pwd()
    while !isfile(joinpath(current, "Project.toml")) && !isfile(joinpath(current, "_quarto.yml"))
        parent = dirname(current)
        parent == current && break
        current = parent
    end
    current
end
include(joinpath(project_root, "scripts", "ensure_packages.jl"))
@auto_using DifferentialEquations CairoMakie

# Example: Treatment policy in a simple disease model
# State: X (disease severity), Action: A (treatment level 0 or 1)
# Policy π: Treat if severity > threshold

β = 0.2  # Disease progression rate
α = 0.3  # Treatment effectiveness

# Policy 1: No treatment (A = 0 always)
function disease_no_treatment!(du, u, p, t)
    X = u[1]
    du[1] = β * X  # Disease progresses
end

# Policy 2: Treat if X > 0.5 (threshold policy)
function disease_threshold!(du, u, p, t)
    X = u[1]
    A = X > 0.5 ? 1.0 : 0.0  # Policy: treat if severity > 0.5
    du[1] = β * X - α * A * X  # Treatment reduces severity
end

# Policy 3: Always treat (A = 1 always)
function disease_always_treat!(du, u, p, t)
    X = u[1]
    du[1] = β * X - α * X  # Always apply treatment
end

# Initial condition
u0 = [0.1]  # Initial severity = 0.1
tspan = (0.0, 20.0)

# Solve under each policy
prob1 = ODEProblem(disease_no_treatment!, u0, tspan)
sol1 = solve(prob1, Tsit5())

prob2 = ODEProblem(disease_threshold!, u0, tspan)
sol2 = solve(prob2, Tsit5())

prob3 = ODEProblem(disease_always_treat!, u0, tspan)
sol3 = solve(prob3, Tsit5())

# Visualise policy comparison
let
    fig = Figure(size = (800, 400))
    ax = Axis(fig[1, 1], title = "Policy Evaluation: Disease Severity Over Time", 
              xlabel = "Time", ylabel = "Severity")
    
    lines!(ax, sol1.t, [u[1] for u in sol1.u], label = "Policy 1: No treatment", linewidth = 2)
    lines!(ax, sol2.t, [u[1] for u in sol2.u], label = "Policy 2: Treat if X > 0.5", linewidth = 2, linestyle = :dash)
    lines!(ax, sol3.t, [u[1] for u in sol3.u], label = "Policy 3: Always treat", linewidth = 2, linestyle = :dot)
    axislegend(ax)
    
    fig  # Only this gets displayed
end
# Evaluate expected outcomes (final severity)
outcome1 = sol1.u[end][1]
outcome2 = sol2.u[end][1]
outcome3 = sol3.u[end][1]

println("Expected final severity under each policy:")
println("  Policy 1 (no treatment): ", round(outcome1, digits=3))
println("  Policy 2 (threshold): ", round(outcome2, digits=3))
println("  Policy 3 (always treat): ", round(outcome3, digits=3))
println("\nBest policy: ", outcome3 < outcome2 < outcome1 ? "Policy 3 (always treat)" : 
        outcome2 < outcome1 ? "Policy 2 (threshold)" : "Policy 1 (no treatment)")
Expected final severity under each policy:
  Policy 1 (no treatment): 5.46
  Policy 2 (threshold): 0.502
  Policy 3 (always treat): 0.014

Best policy: Policy 3 (always treat)

17.5.4 Implementation: Optimal Control

For optimal control problems, we seek the policy that maximises (or minimises) an objective function. This typically requires specialised optimisation methods:

# Find project root and include ensure_packages.jl
project_root = let
    current = pwd()
    while !isfile(joinpath(current, "Project.toml")) && !isfile(joinpath(current, "_quarto.yml"))
        parent = dirname(current)
        parent == current && break
        current = parent
    end
    current
end
include(joinpath(project_root, "scripts", "ensure_packages.jl"))

# Note: Full optimal control implementation requires:
# - Optimization.jl for optimisation
# - Dynamic programming or Pontryagin's maximum principle
# - Cost function definition

# Example framework (conceptual):
# Objective: Minimise ∫[X(t)² + c*A(t)²] dt
#   - X(t)²: Penalty for high disease severity
#   - c*A(t)²: Cost of treatment (c is cost parameter)
# Constraints: 0 ≤ A(t) ≤ 1 (treatment bounded)

println("Optimal control framework:")
println("  Objective: Minimise ∫[X(t)² + c*A(t)²] dt")
println("  Subject to: dX/dt = β*X - α*A*X")
println("  Constraints: 0 ≤ A(t) ≤ 1")
println("\nMethods:")
println("  - Dynamic programming (discrete time)")
println("  - Pontryagin's maximum principle (continuous time)")
println("  - Numerical optimisation (Optimization.jl)")
println("\nFor full implementation, see Optimization.jl documentation")
Optimal control framework:
  Objective: Minimise ∫[X(t)² + c*A(t)²] dt
  Subject to: dX/dt = β*X - α*A*X
  Constraints: 0 ≤ A(t) ≤ 1

Methods:
  - Dynamic programming (discrete time)
  - Pontryagin's maximum principle (continuous time)
  - Numerical optimisation (Optimization.jl)

For full implementation, see Optimization.jl documentation
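As a crude numerical complement to the framework printed above, the sketch below grid-searches over constant treatment levels \(A \in [0, 1]\) for the cost \(\int [X(t)^2 + c \, A^2] \, dt\) subject to \(dX/dt = \beta X - \alpha A X\). The dynamics parameters echo the chapter's disease example; the cost weight `c`, horizon, and grid spacing are illustrative assumptions, and a constant-\(A\) search is only a restricted stand-in for a full time-varying optimal control.

```julia
# Grid search over constant policies for min_A ∫[X(t)² + c*A²] dt,
# subject to dX/dt = β*X - α*A*X, 0 ≤ A ≤ 1 (forward-Euler integration)
β, α, c = 0.2, 0.3, 0.1

function cost(A; x0 = 0.1, T = 20.0, dt = 0.01)
    x, J = x0, 0.0
    for _ in 1:Int(T / dt)
        J += dt * (x^2 + c * A^2)        # accumulate running cost
        x += dt * (β * x - α * A * x)    # step the treated dynamics
    end
    J
end

grid = 0.0:0.05:1.0
A_star = argmin(A -> cost(A), grid)      # best constant treatment level
```

The interior optimum reflects the trade-off in the objective: too little treatment lets \(X^2\) grow, while full treatment pays an unnecessary \(c A^2\) penalty once severity is controlled.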

17.6 World Context

This chapter addresses Doing in the Dynamical world: how can we intervene in and modify deterministic dynamic processes? Interventions in ODEs modify system dynamics, allowing us to evaluate policies and find optimal control strategies.

17.7 Key Takeaways

  1. Interventions modify ODEs: \(do(\cdot)\) operator changes system dynamics
  2. Policy evaluation: Simulate ODE under policy to evaluate outcomes
  3. Optimal control: Find policies that maximise expected outcomes
  4. Structural changes propagate: Interventions affect system evolution

17.8 Further Reading