Structural Causal Models: Part 1: Intuition
1. Intuition
This part develops the intuition behind structural causal models, following the approved Chapter 22 table of contents. The treatment is causal, not merely predictive: the central objects are mechanisms, interventions, assumptions, and counterfactuals.
1.1 correlation vs causation
Correlation vs causation belongs to the canonical scope of Structural Causal Models. The central move in causal inference is to distinguish a statistical relation from a claim about what would happen under an intervention.
For this subsection, the working scope is structural assignments, causal graphs, d-separation, interventions, Markovian assumptions, and SCM links to robust ML. The mathematical objects are variables, mechanisms, graphs, interventions, and assumptions. A causal claim is incomplete until all five are visible.
The query notation introduced later in this subsection gives a compact handle on correlation vs causation, but it should not be read as purely algebraic. In causal inference, equations encode assumptions about mechanisms, missing variables, and which parts of the world remain stable under intervention.
| Causal object | Meaning | AI interpretation |
|---|---|---|
| Variable | Quantity in the causal system | Prompt feature, user action, treatment, tool call, exposure, label, reward |
| Mechanism | Assignment that generates a variable | Data pipeline, recommender policy, human behavior, model routing rule |
| Graph | Qualitative causal assumptions | What can affect what, and which paths may confound effects |
| Intervention | Replacement of a mechanism | A/B rollout, policy switch, prompt template change, retrieval update |
| Counterfactual | Unit-level alternate world | What this user or model trace would have done under another action |
Three examples of correlation vs causation:
- A recommender team wants the causal effect of ranking a document higher, not merely the correlation between rank and clicks.
- An LLM platform changes a safety policy and wants to estimate whether refusals changed because of the policy or because user prompts shifted.
- A fairness auditor asks whether a proxy feature transmits an impermissible causal path into a model decision.
Two non-examples expose the boundary:
- A high predictive coefficient is not a causal effect unless the graph and intervention assumptions justify it.
- A plausible narrative produced by a language model is not a counterfactual unless it is grounded in a causal model.
The proof habit for correlation vs causation is to name the graph operation. Conditioning restricts a distribution. Intervention replaces a mechanism. Counterfactual reasoning updates exogenous uncertainty from evidence, changes a mechanism, then predicts.
- observed association: P(Y | X = x)
- intervention question: P(Y | do(X = x))
- counterfactual question: P(Y_x | E = e)
- discovery question: which G could have generated P(V)?
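To make the gap between the first two queries concrete, here is a minimal synthetic sketch, with made-up mechanisms and coefficients, in which a hidden confounder drives the observed association away from the interventional quantity:

```python
# Minimal confounded SCM (illustrative numbers): a hidden U drives both
# X and Y, so conditioning on X and intervening on X give different answers.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Observational regime: every mechanism runs as specified.
u = rng.binomial(1, 0.5, n)                      # U := exogenous confounder
x = rng.binomial(1, np.where(u == 1, 0.9, 0.1))  # X := f_X(U, noise)
y = rng.binomial(1, 0.3 + 0.2 * x + 0.4 * u)     # Y := f_Y(X, U, noise)

p_cond = y[x == 1].mean()                        # P(Y=1 | X=1)

# Interventional regime: do(X=1) replaces the mechanism for X with the
# constant 1; the mechanisms for U and Y are left untouched.
x_do = np.ones(n, dtype=int)
y_do = rng.binomial(1, 0.3 + 0.2 * x_do + 0.4 * u)

p_do = y_do.mean()                               # P(Y=1 | do(X=1))

print(f"P(Y=1 | X=1)     = {p_cond:.3f}")        # about 0.86, inflated by U
print(f"P(Y=1 | do(X=1)) = {p_do:.3f}")          # about 0.70, the causal value
```

The two numbers differ because conditioning selects the subpopulation in which U tends to be 1, while the surgery leaves the distribution of U alone.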
In machine learning, correlation vs causation is valuable because models are often deployed under interventions: ranking changes, policy changes, safety filters, tool-use gates, data collection changes, and human feedback loops. Prediction alone does not tell us which change caused which downstream behavior.
Notebook implementation will use synthetic SCMs and small graphs. This keeps the examples executable while preserving the conceptual split between identification and estimation.
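In the same synthetic spirit, the counterfactual query runs the three steps just named: abduction, action, prediction. A one-assignment SCM with illustrative numbers is enough to show them:

```python
# Counterfactual via abduction-action-prediction (illustrative sketch).
# SCM:  X := U_X,   Y := 2*X + U_Y.   Evidence: we observed X=1, Y=3.
# Query: what would Y have been for this same unit under do(X=0)?

x_obs, y_obs = 1.0, 3.0

# 1. Abduction: recover the exogenous term consistent with the evidence.
u_y = y_obs - 2.0 * x_obs          # U_Y = 1.0 for this unit

# 2. Action: surgery replaces the mechanism for X with the constant 0.
x_cf = 0.0

# 3. Prediction: rerun the unchanged mechanism for Y with the stored noise.
y_cf = 2.0 * x_cf + u_y

print(f"observed Y = {y_obs}, counterfactual Y_(X=0) = {y_cf}")   # 1.0
```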
Checklist for using correlation vs causation responsibly:
- State the causal question before choosing a method.
- Draw or describe the assumed causal graph.
- Mark observed, latent, treatment, outcome, and adjustment variables.
- Separate intervention notation from conditioning notation.
- Decide whether the query is identifiable before estimating it.
- Report assumptions that cannot be tested from the observed data alone.
- Use ML as an estimation aid, not as a substitute for causal design.
This chapter follows the boundary set by Chapter 21. Statistical learning theory controls prediction error under distributional assumptions. Causal inference asks what happens when the distribution changes because something is done.
Modern AI systems make this distinction unavoidable. A foundation model can predict which action historically followed a context, but a decision system needs to know what would happen if it took a different action in that context.
Thus, correlation vs causation is not an abstract philosophical add-on. It is a production and research tool for deciding which model, prompt, policy, feature, or intervention actually changed an outcome.
A final diagnostic question is whether the claim would survive a policy change. If the answer depends only on a historical correlation, it belongs in predictive modeling. If the answer depends on what mechanism is replaced and which paths remain active, it belongs in causal inference.
| Diagnostic question | Causal discipline it tests |
|---|---|
| What is being changed? | Intervention target |
| Which mechanism is replaced? | SCM modularity |
| Which paths transmit the effect? | Graph semantics |
| Which variables are merely observed? | Conditioning versus intervention |
| Which quantities are unobserved? | Confounding and counterfactual uncertainty |
1.2 mechanisms as stable assignments
An SCM specifies each variable through a structural assignment X_i := f_i(PA_i, U_i): a mechanism f_i produces the value of X_i from its graph parents PA_i and an exogenous noise term U_i. The assignment symbol := matters. It is not an equation to be solved or rearranged; it records the direction in which the value is generated.

The word stable carries the causal content. Each mechanism is assumed to be autonomous: replacing one assignment, as an intervention does, leaves the other assignments unchanged. This modularity is what makes a local change to the system a well-defined object of study.

In AI terms, each mechanism is a concrete pipeline component: a logging policy, a ranking model, a user-behavior process, a labeling rule, a routing heuristic. The claim that such a component is a stable assignment is itself a causal assumption, and it should be recorded alongside the graph rather than left implicit.
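As a minimal sketch, assuming nothing beyond the definitions above, an SCM can be written as a dictionary of assignment functions evaluated in topological order. The model, names, and coefficients below are illustrative:

```python
# An SCM as a dict of stable assignments (illustrative model: U -> X -> Y,
# with U also a direct cause of Y). Insertion order is topological order.
import numpy as np

rng = np.random.default_rng(0)

scm = {
    "U": lambda v, n: rng.binomial(1, 0.5, n),
    "X": lambda v, n: rng.binomial(1, np.where(v["U"] == 1, 0.9, 0.1)),
    "Y": lambda v, n: rng.binomial(1, 0.3 + 0.2 * v["X"] + 0.4 * v["U"]),
}

def sample(model, n):
    """Run each assignment once, in order; earlier values feed later ones."""
    values = {}
    for name, mechanism in model.items():
        values[name] = mechanism(values, n)
    return values

data = sample(scm, 100_000)
print("P(X=1) =", round(data["X"].mean(), 3))
print("P(Y=1) =", round(data["Y"].mean(), 3))
```

Because each mechanism is an ordinary function, stability has a literal reading: an intervention may swap one entry of the dictionary, and every other entry is reused verbatim. Subsection 1.4 performs exactly that surgery.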
1.3 DAGs as causal assumptions
A causal DAG is a statement of assumptions, not a picture of correlations. An edge X → Y asserts that the mechanism for Y may read X; a missing edge is the stronger claim that it does not. Under the Markovian assumptions, acyclicity together with jointly independent exogenous noises, every variable is independent of its non-descendants given its parents, and the observational distribution factorizes as P(V) = Π_i P(X_i | PA_i).

d-separation converts these assumptions into checkable consequences: from paths in the graph alone, it reads off which conditional independencies every compatible distribution must satisfy. Chains and forks are blocked by conditioning on the middle variable; a collider is blocked by default and opened by conditioning on it or one of its descendants. The collider rule is the graphical reason why conditioning on a downstream quantity, such as a logged outcome or a selection filter, can manufacture association where none is caused.
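A d-separation check can be sketched in a few lines via the moralized ancestral graph criterion: X and Y are d-separated by Z exactly when they are disconnected after restricting the DAG to ancestors of X, Y, Z, marrying co-parents, and deleting Z. The helper below is a hedged illustration built on networkx primitives, not a library API:

```python
# d-separation via the moralized ancestral graph (minimal sketch).
from itertools import combinations
import networkx as nx

def d_separated(G, xs, ys, zs):
    nodes = set(xs) | set(ys) | set(zs)
    anc = set(nodes)
    for v in nodes:                       # restrict to ancestors of X, Y, Z
        anc |= nx.ancestors(G, v)
    H = G.subgraph(anc)
    M = H.to_undirected()
    for v in H.nodes:                     # moralize: marry each pair of parents
        for a, b in combinations(H.predecessors(v), 2):
            M.add_edge(a, b)
    M.remove_nodes_from(zs)               # conditioning deletes Z
    return not any(nx.has_path(M, x, y) for x in xs for y in ys)

# Confounding triangle: U -> X, U -> Y, X -> Y.
G = nx.DiGraph([("U", "X"), ("U", "Y"), ("X", "Y")])
print(d_separated(G, {"X"}, {"Y"}, set()))   # False: direct and back-door paths
print(d_separated(G, {"X"}, {"Y"}, {"U"}))   # False: the direct edge remains

# Pure collider: A -> C <- B.
K = nx.DiGraph([("A", "C"), ("B", "C")])
print(d_separated(K, {"A"}, {"B"}, set()))   # True: collider blocks by default
print(d_separated(K, {"A"}, {"B"}, {"C"}))   # False: conditioning opens it
```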
1.4 interventions as model surgery
The do-operator is defined by surgery on the model. To evaluate do(X = x), delete the assignment for X, substitute the constant x, erase every edge into X, and leave all other mechanisms untouched. P(Y | do(X = x)) is then simply the distribution of Y in the mutilated model. Because the surgery severs X from its own causes, back-door paths into X disappear, which is exactly why do(X = x) can differ from conditioning on X = x in the intact model.

The same picture covers soft interventions: the new assignment can be another mechanism rather than a constant, such as a new ranking policy or an updated prompt template. An A/B rollout, a policy switch, or a retrieval update is surgery on one assignment, and its effect reaches an outcome only along the directed paths that survive the cut.
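Surgery has a direct expression in the dict-of-assignments sketch from 1.2: an intervention is a function from models to models that swaps one entry. The sketch below reuses the same illustrative confounded model and exhibits the gap between conditioning and intervening:

```python
# Intervention as surgery: replace one assignment, keep the rest verbatim.
import numpy as np

rng = np.random.default_rng(0)

scm = {
    "U": lambda v, n: rng.binomial(1, 0.5, n),
    "X": lambda v, n: rng.binomial(1, np.where(v["U"] == 1, 0.9, 0.1)),
    "Y": lambda v, n: rng.binomial(1, 0.3 + 0.2 * v["X"] + 0.4 * v["U"]),
}

def sample(model, n):
    values = {}
    for name, mechanism in model.items():
        values[name] = mechanism(values, n)
    return values

def do(model, var, value):
    """Surgery: the mechanism for var becomes a constant; the rest is kept."""
    surgered = dict(model)
    surgered[var] = lambda v, n: np.full(n, value)
    return surgered

n = 200_000
obs = sample(scm, n)                 # intact model
intv = sample(do(scm, "X", 1), n)    # mutilated model after do(X=1)

print("P(Y=1 | X=1)     =", round(obs["Y"][obs["X"] == 1].mean(), 3))  # ~0.86
print("P(Y=1 | do(X=1)) =", round(intv["Y"].mean(), 3))                # ~0.70
```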
1.5 why SCMs matter for ML distribution shift
Distribution shift is an intervention by another name. When a deployment replaces a mechanism, the joint distribution changes, but modularity makes the change local: only the intervened factor and its downstream consequences move, while every other conditional P(X_i | PA_i) is preserved. SCMs therefore give ML a sharper vocabulary than "the test distribution differs": one can state which mechanism changed and which conditionals are guaranteed to survive it.

The practical payoff is robustness. A predictor built on the stable mechanism, the conditional of the label given its causes, keeps its behavior when other mechanisms shift. A predictor that exploits spurious or anti-causal associations inherits no such guarantee and can degrade silently when the mechanism that produced those associations is replaced, a common failure mode for models trained on logged, feedback-shaped data.
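The contrast can be sketched with two synthetic environments that share the label mechanism but differ in the mechanism of an anti-causal feature. The design and coefficients are illustrative, not a benchmark:

```python
# Causal vs spurious features under mechanism shift (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def environment(n, spurious_sign):
    """Stable: Y := 2*Xc + noise. Shifting: Xs := sign*Y + small noise."""
    xc = rng.normal(0.0, 1.0, n)                      # causal feature
    y = 2.0 * xc + rng.normal(0.0, 1.0, n)            # stable label mechanism
    xs = spurious_sign * y + rng.normal(0.0, 0.1, n)  # anti-causal feature
    return xc, xs, y

xc_tr, xs_tr, y_tr = environment(50_000, spurious_sign=+1.0)  # training env
xc_te, xs_te, y_te = environment(50_000, spurious_sign=-1.0)  # deployment env

def fit_slope(x, y):
    """One-dimensional least squares through the origin."""
    return (x @ y) / (x @ x)

def mse(w, x, y):
    return float(np.mean((w * x - y) ** 2))

w_causal = fit_slope(xc_tr, y_tr)   # learns the stable mechanism
w_spur = fit_slope(xs_tr, y_tr)     # learns the shifting association

print("causal feature,   test MSE:", round(mse(w_causal, xc_te, y_te), 2))  # ~1
print("spurious feature, test MSE:", round(mse(w_spur, xs_te, y_te), 2))    # ~20
```

On the training environment both predictors look accurate; under the shifted mechanism, only the predictor tied to the stable causal mechanism keeps its error.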