Counterfactuals: Part 6: AI Applications to References
6. AI Applications
This section develops the AI-applications part of counterfactuals specified by the approved Chapter 22 table of contents. The treatment is causal, not merely predictive: the central objects are mechanisms, interventions, assumptions, and counterfactuals.
6.1 Counterfactual Fairness
Counterfactual fairness belongs to the canonical scope of Counterfactuals. It asks whether a model's decision for a specific individual would have been the same had a protected attribute been different, holding fixed everything not causally downstream of that attribute. The central move in causal inference is to distinguish a statistical relation from a claim about what would happen under an intervention.
For this subsection, the working scope is potential outcomes, SCM counterfactuals, abduction-action-prediction, twin networks, treatment effects, recourse, and fairness. The mathematical objects are variables, mechanisms, graphs, interventions, and assumptions. A causal claim is incomplete until all five are visible.
The defining formula gives a compact handle on counterfactual fairness: a predictor Ŷ is counterfactually fair if P(Ŷ_{A←a} = y | X=x, A=a) = P(Ŷ_{A←a'} = y | X=x, A=a) for every observed context (x, a), every attainable value a', and every outcome y. It should not be read as a purely algebraic identity. In causal inference, equations encode assumptions about mechanisms, missing variables, and which parts of the world remain stable under intervention.
| Causal object | Meaning | AI interpretation |
|---|---|---|
| Variable | Quantity in the causal system | Prompt feature, user action, treatment, tool call, exposure, label, reward |
| Mechanism | Assignment that generates a variable | Data pipeline, recommender policy, human behavior, model routing rule |
| Graph | Qualitative causal assumptions | What can affect what, and which paths may confound effects |
| Intervention | Replacement of a mechanism | A/B rollout, policy switch, prompt template change, retrieval update |
| Counterfactual | Unit-level alternate world | What this user or model trace would have done under another action |
Three examples of counterfactual fairness questions:
- A lending team asks whether an applicant's score would have been the same had the applicant's protected attribute been counterfactually different, with the attribute's causal descendants updated accordingly.
- An LLM platform asks whether a moderation decision depends on a user's inferred demographics through any causal path other than the content of the post itself.
- A fairness auditor asks whether a proxy feature, such as zip code, transmits an impermissible causal path from a protected attribute into a model decision.
Two non-examples expose the boundary:
- A high predictive coefficient is not a causal effect unless the graph and intervention assumptions justify it.
- A plausible narrative produced by a language model is not a counterfactual unless it is grounded in a causal model.
The proof habit for counterfactual fairness is to name the graph operation. Conditioning restricts a distribution. Intervention replaces a mechanism. Counterfactual reasoning updates exogenous uncertainty from evidence, changes a mechanism, then predicts.
- observed association: P(Y | X=x)
- intervention question: P(Y | do(X=x))
- counterfactual question: P(Y_x | E=e)
- discovery question: which G could have generated P(V)?
In machine learning, counterfactual fairness is valuable because models are often deployed under interventions: ranking changes, policy changes, safety filters, tool-use gates, data collection changes, and human feedback loops. Prediction alone does not tell us which change caused which downstream behavior.
Notebook implementation will use synthetic SCMs and small graphs. This keeps the examples executable while preserving the conceptual split between identification and estimation.
Checklist for using counterfactual fairness responsibly:
- State the causal question before choosing a method.
- Draw or describe the assumed causal graph.
- Mark observed, latent, treatment, outcome, and adjustment variables.
- Separate intervention notation from conditioning notation.
- Decide whether the query is identifiable before estimating it.
- Report assumptions that cannot be tested from the observed data alone.
- Use ML as an estimation aid, not as a substitute for causal design.
This chapter follows the boundary set by Chapter 21. Statistical learning theory controls prediction error under distributional assumptions. Causal inference asks what happens when the distribution changes because something is done.
Modern AI systems make this distinction unavoidable. A foundation model can predict which action historically followed a context, but a decision system needs to know what would happen if it took a different action in that context.
Thus, counterfactual fairness is not an abstract philosophical add-on. It is a production and research tool for deciding which model, prompt, policy, feature, or intervention actually changed an outcome.
A final diagnostic question is whether the claim would survive a policy change. If the answer depends only on a historical correlation, it belongs in predictive modeling. If the answer depends on what mechanism is replaced and which paths remain active, it belongs in causal inference.
| Diagnostic question | Causal discipline it tests |
|---|---|
| What is being changed? | Intervention target |
| Which mechanism is replaced? | SCM modularity |
| Which paths transmit the effect? | Graph semantics |
| Which variables are merely observed? | Conditioning versus intervention |
| Which quantities are unobserved? | Confounding and counterfactual uncertainty |
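The abduction-action-prediction habit above can be sketched end to end. The toy SCM below, with an assumed mechanism X = 2*A + U_X and an assumed threshold decision rule, checks whether the decision for one individual survives a counterfactual flip of A; every coefficient and threshold is an illustrative assumption, not a recommended model.

```python
# Counterfactual fairness check on a toy SCM: A in {0,1} is the
# protected attribute, X = 2*A + U_X is a feature, and the classifier
# thresholds X. All numbers are assumptions made up for this sketch.

def abduct(a, x):
    """Abduction: recover the exogenous noise U_X from observed (A, X)."""
    return x - 2 * a

def predict_x(a, u_x):
    """Mechanism for X: X = 2*A + U_X."""
    return 2 * a + u_x

def decision(x):
    """A score-threshold classifier that reads X directly."""
    return 1 if x > 3.0 else 0

def counterfactually_fair(a, x):
    """Abduction-action-prediction: does the decision change if A is flipped?"""
    u_x = abduct(a, x)              # abduction from the evidence
    x_cf = predict_x(1 - a, u_x)    # action do(A = 1 - a), then prediction
    return decision(x) == decision(x_cf)

# An individual with A=1, X=4 is approved, but the counterfactual twin
# with A=0 has X=2, so the decision flips.
print(counterfactually_fair(1, 4.0))   # False: the decision depends on A via X
```

Note that the check is per individual: the same classifier can be counterfactually fair for individuals whose noise term U_X places them far from the threshold.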
6.2 Algorithmic Recourse

Algorithmic recourse asks what an individual who received an unfavorable automated decision can actually do to obtain a favorable one. The distinction from a generic counterfactual explanation is causal: recourse recommends actions, and an action on one feature propagates through the causal graph to that feature's descendants. Telling an applicant to increase savings is misleading if savings is itself an effect of income and the model reads both.

Formally, recourse seeks a minimal-cost intervention do(X_S = x_S') on an actionable subset of features such that the individual's counterfactual decision flips. This is the abduction-action-prediction loop again: recover the exogenous background u from the observed evidence, apply the intervention, propagate it through the mechanisms, and check the new prediction. A cost model over feasible actions turns the search into an optimization problem.

Three examples of algorithmic recourse:
- A loan applicant is told which feasible changes, such as paying down one account rather than "being two years younger," would flip a credit decision under the bank's assumed causal model.
- A content creator asks which modifiable properties of a video would move it above a recommendation threshold, separating actionable features from ones the creator cannot change.
- A platform audits whether its recourse recommendations remain valid after a model retrain, that is, whether the counterfactual guarantee survives a mechanism change.

Two non-examples expose the boundary:
- A nearest counterfactual example in feature space is not recourse if the suggested change is not an action the individual can take, or if taking it would also change causal descendants the feature-space search held fixed.
- A plausible action list produced by a language model is not recourse unless it is grounded in a causal model of the individual's features.

The proof habit for algorithmic recourse is to separate contrastive explanation (which features, if different, would flip the prediction) from recourse (which interventions the individual can perform, at what cost, with what downstream effects). The first is a property of the classifier alone; the second requires, in addition, an SCM over the individual's features.

The shared tables, query ladder, checklist, and diagnostic questions of Section 6.1 apply here unchanged; the recourse-specific discipline is modeling action feasibility and feature-level causal structure.
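A minimal sketch of causal recourse: the code below compares acting on a downstream feature with acting on its upstream cause in a two-feature SCM. The mechanism savings = 0.3*income + U_S, the decision rule, and all numbers are assumptions made up for illustration.

```python
# Causal recourse on a toy SCM: income is exogenous, savings is caused
# by income, and the bank approves when income + savings >= 100.
# Acting on the upstream cause propagates to its descendant, so a
# smaller change can suffice than a naive feature-space edit suggests.

def savings_mech(income, u_s):
    """Assumed mechanism: savings = 0.3 * income + U_S."""
    return 0.3 * income + u_s

def approve(income, savings):
    return income + savings >= 100

def min_delta(flips):
    """Smallest nonnegative delta, in 0.01 steps, for which flips(delta) holds."""
    cents = 0
    while not flips(cents / 100):
        cents += 1
    return cents / 100

def recourse_on_savings(income, savings):
    """Edit savings alone; income is not a descendant, so nothing else updates."""
    return min_delta(lambda d: approve(income, savings + d))

def recourse_on_income(income, savings):
    """Edit income and let savings update through its mechanism."""
    u_s = savings - 0.3 * income   # abduction: recover the noise term
    return min_delta(lambda d: approve(income + d,
                                       savings_mech(income + d, u_s)))

income, savings = 50.0, 20.0       # total 70: currently denied
print(recourse_on_savings(income, savings))  # 30.0: direct edit
print(recourse_on_income(income, savings))   # 23.08: smaller, effects propagate
```

The gap between the two deltas is exactly what a feature-space counterfactual search misses when it treats income and savings as independently editable.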
6.3 Offline Policy Evaluation

Offline policy evaluation (OPE) estimates how a new decision policy would have performed using only logged data collected under an old policy. This is a counterfactual question: the logs record the reward of the action the logging policy took, never the reward of the action the target policy would have taken. Averaging logged rewards evaluates the logging policy; treating that average as evidence about the new policy confuses conditioning with intervention.

Identification rests on two assumptions: the logging policy mu had positivity (every action the target policy pi can take was taken with nonzero probability in every context) and the logging propensities mu(a | x) are known or estimable. Under those assumptions, the inverse propensity scoring (IPS) estimator reweights logged rewards:

V_hat(pi) = (1/n) * sum_i [ pi(a_i | x_i) / mu(a_i | x_i) ] * r_i

Doubly robust estimators combine IPS with a learned reward model to reduce variance.

Three examples of offline policy evaluation:
- A recommender team estimates the click-through rate of a new ranking policy from logs of the old one before committing to an A/B test.
- An LLM platform evaluates a candidate routing policy, deciding which model serves which prompt, using propensities logged by the current router.
- A safety team estimates how a stricter refusal policy would have changed outcomes on historical traffic, flagging prompt regions with no logged support.

Two non-examples expose the boundary:
- Replaying logs through the new policy and scoring only the actions where the two policies happen to agree is biased unless the matches are reweighted by the logging propensities.
- A reward model's predicted value for actions the logging policy never took is extrapolation, not evidence, unless the model is validated where overlap exists.

The proof habit for OPE is to check positivity before trusting any estimator: wherever mu(a | x) = 0 for an action pi would take, the value of pi is simply not identified from the logs, and no estimator can repair that.

The shared tables, query ladder, checklist, and diagnostic questions of Section 6.1 apply here unchanged; the OPE-specific discipline is logging propensities at decision time and respecting overlap.
6.4 Personalized Treatment and Recommendation

Personalized treatment asks not whether a treatment works on average but for whom it works. The target quantity is the conditional average treatment effect, CATE(x) = E[Y(1) - Y(0) | X = x]: the expected difference between the two potential outcomes for units with covariates x. A recommendation or treatment policy then acts exactly on the units whose estimated CATE clears a cost threshold.

The central difficulty is that CATE is a difference of two quantities, only one of which is ever observed per unit. Identification requires the usual ingredients: no unmeasured confounding given X, positivity in every stratum where effects are estimated, and consistency. Estimation then proceeds through meta-learners (T-, S-, or X-learners) or targeted methods, with flexible ML models fitting the nuisance functions after identification is settled.

Three examples of personalized treatment and recommendation:
- A streaming service estimates which user segments actually change behavior when shown a nudge, rather than ranking segments by baseline engagement.
- A clinical decision-support tool targets a therapy to patients with positive estimated CATE and abstains where covariate overlap between arms is poor.
- An LLM product team rolls out a system-prompt variant only to users whose estimated uplift in task success is positive.

Two non-examples expose the boundary:
- A segment with high outcomes under treatment is not a segment with a large treatment effect; it may have high outcomes under either arm.
- Sorting users by a predictive model's score is not uplift modeling; the score conflates baseline outcome with responsiveness to the intervention.

The proof habit for personalized treatment is to keep the two potential-outcome surfaces separate: estimate E[Y | X, T=1] and E[Y | X, T=0], or their difference directly, and then check that the policy derived from the difference is supported by overlap in both arms.

The shared tables, query ladder, checklist, and diagnostic questions of Section 6.1 apply here unchanged; the personalization-specific discipline is heterogeneous effects and per-segment overlap.
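A T-learner reduces to per-arm means when the covariate is a single binary stratum, which makes the idea easy to show in a few lines. The data-generating numbers below, including a stratum whose baseline is high but whose treatment effect is negative, are assumptions invented for the sketch.

```python
# CATE estimation by a minimal T-learner: one outcome mean per
# treatment arm within each stratum of a binary covariate X.
# The treatment is randomized, so the difference of arm means within
# a stratum identifies CATE(x). All constants are made up.
import random

random.seed(1)
TAU = {0: 2.0, 1: -1.0}    # true heterogeneous effect per stratum
BASE = {0: 1.0, 1: 4.0}    # baseline outcome per stratum

rows = []
for _ in range(40_000):
    x = random.randint(0, 1)
    t = random.randint(0, 1)                       # randomized treatment
    y = BASE[x] + TAU[x] * t + random.gauss(0, 1)  # outcome with noise
    rows.append((x, t, y))

def arm_mean(x, t):
    """Mean outcome for units in stratum x assigned to arm t."""
    ys = [y for xi, ti, y in rows if xi == x and ti == t]
    return sum(ys) / len(ys)

cate = {x: arm_mean(x, 1) - arm_mean(x, 0) for x in (0, 1)}
print({x: round(v, 2) for x, v in cate.items()})   # close to {0: 2.0, 1: -1.0}
```

Notice that stratum 1 has the higher outcomes under treatment (baseline 4.0) yet a negative effect, exactly the non-example above: ranking by outcome level would target the wrong segment.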
6.5 Limits of LLM-Generated Counterfactual Explanations

Large language models readily produce fluent counterfactual statements: "had the applicant's income been higher, the loan would have been approved." The limit is that fluency is evidence of a plausible narrative, not of a valid counterfactual. A counterfactual claim is defined relative to an SCM through the abduction-action-prediction procedure; a model with access to no mechanism and no unit-level evidence cannot perform that computation, only imitate its surface form.

Four failure modes recur:
- Mechanism invention: the narrative cites a causal pathway that does not exist in the deployed system.
- Evidence leakage: the "counterfactual" quietly changes facts that abduction should hold fixed, so it describes a different individual rather than the same individual under a different action.
- Infeasible suggestions: the explanation repeats the recourse failures of Section 6.2 without the causal constraints that would prevent them.
- Unfounded identification: the explanation asserts an effect that is not identifiable from the data the system actually has.

Two non-examples expose the boundary:
- An LLM paraphrase of a model card is not a counterfactual explanation of a specific decision.
- Agreement between an LLM narrative and a stakeholder's intuition is not validation; both can share the same confounded prior.

The constructive use is hypothesis generation: an LLM can propose candidate graphs, candidate interventions, and candidate recourse actions, which are then checked against an explicit SCM, domain knowledge, and data. The proof habit is to require, for every LLM-generated counterfactual, the triple of assumed graph, intervention, and held-fixed evidence before treating it as an explanation rather than a story.

The shared tables, query ladder, checklist, and diagnostic questions of Section 6.1 apply here unchanged; the LLM-specific discipline is separating generated narrative from grounded computation.
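One way to operationalize the proof habit is to check each proposed counterfactual against an explicit SCM rather than accepting the narrative. The sketch below assumes a hypothetical proposal format (a variable, a new value, and a claimed decision) and a toy loan SCM; everything here, including the mechanism savings = 0.3*income + U_S, is an illustrative assumption.

```python
# Validating a (hypothetical) LLM-proposed counterfactual claim of the
# form "had VAR been NEW_VALUE, the decision would have been CLAIMED"
# by running abduction-action-prediction on an explicit toy SCM.

def decision(income, savings):
    """Assumed decision rule: approve when income + savings >= 100."""
    return "approve" if income + savings >= 100 else "deny"

def check_claim(observed, var, new_value, claimed):
    income, savings = observed["income"], observed["savings"]
    u_s = savings - 0.3 * income          # abduction: recover U_S
    if var == "income":                   # action + prediction:
        income = new_value
        savings = 0.3 * income + u_s      #   savings is a descendant, so it updates
    elif var == "savings":
        savings = new_value               #   income is not a descendant: unchanged
    else:
        return "variable not in the model"
    actual = decision(income, savings)
    return "valid" if actual == claimed else f"invalid: SCM says {actual}"

observed = {"income": 50.0, "savings": 20.0}   # denied: total 70 < 100
print(check_claim(observed, "income", 80.0, "approve"))   # valid
print(check_claim(observed, "savings", 40.0, "approve"))  # invalid: SCM says deny
```

The second claim is the kind of fluent, plausible statement an LLM might generate; the explicit model shows it fails because editing savings alone leaves the total at 90.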
7. Common Mistakes
| # | Mistake | Why It Is Wrong | Fix |
|---|---|---|---|
| 1 | Equating correlation with causation | Conditional association can arise from confounding, selection, or collider bias. | State the causal graph and the target intervention before interpreting associations. |
| 2 | Conditioning on colliders | A collider can open a spurious path when conditioned on. | Use d-separation and adjustment criteria, not variable-importance intuition alone. |
| 3 | Forgetting the estimand-estimator split | Identification is a symbolic question; estimation is a statistical question. | First derive the causal estimand, then choose an estimator and diagnostics. |
| 4 | Using do-calculus without assumptions | The rules operate on a causal graph whose assumptions are supplied by the analyst. | Make graph assumptions explicit and discuss unobserved variables. |
| 5 | Treating counterfactuals as factual labels | Only one potential outcome is observed for each unit. | Use consistency, exchangeability, and sensitivity analysis carefully. |
| 6 | Assuming discovery is assumption-free | Many graphs can imply the same observational distribution. | Report equivalence classes, required assumptions, and intervention needs. |
| 7 | Confusing prediction robustness with causal invariance | A predictive feature can be stable in one dataset and noncausal under intervention. | Use environment shifts and mechanism assumptions to justify causal claims. |
| 8 | Ignoring positivity or overlap | Causal effects cannot be estimated where treatment assignments have no support. | Inspect propensity or support before using adjustment formulas. |
| 9 | Letting ML hide causal design | Flexible nuisance models do not create identification. | Use ML after identification, with cross-fitting or regularization as estimation tools. |
| 10 | Overtrusting LLM causal explanations | Language models can narrate plausible mechanisms without evidence. | Use LLMs for hypothesis generation, then require graph, data, and domain checks. |
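Mistake 2 in the table can be demonstrated in a few lines: two independent causes become spuriously associated once their common effect is conditioned on. The variable names and the selection threshold below are illustrative assumptions.

```python
# Collider bias demo: x and y are independent, c = x + y is their
# common effect. Conditioning on c (e.g., analyzing only "hired"
# candidates) opens a spurious negative association between x and y.
import random

random.seed(2)
xs, ys, cs = [], [], []
for _ in range(20_000):
    x = random.gauss(0, 1)       # e.g., skill
    y = random.gauss(0, 1)       # e.g., luck, independent of skill
    c = x + y                    # collider: e.g., getting selected
    xs.append(x); ys.append(y); cs.append(c)

def corr(a, b):
    """Pearson correlation, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

# Unconditionally, the correlation is near zero.
print(round(corr(xs, ys), 2))

# Among units with c > 1, a clearly negative correlation appears.
sel = [(x, y) for x, y, c in zip(xs, ys, cs) if c > 1]
print(round(corr([x for x, _ in sel], [y for _, y in sel]), 2))
```

This is why "control for everything available" is dangerous advice: adjusting for a collider manufactures exactly the kind of association that adjustment is supposed to remove.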
8. Exercises
All ten exercises follow the same template; exercises 1-3 are rated (*), exercises 4-6 are rated (**), and exercises 7-10 are rated (***). For each, work through a causal-inference task for counterfactuals:
- (a) State the causal query using intervention or counterfactual notation.
- (b) Draw or describe the relevant graph and assumptions.
- (c) Decide whether the estimand is identifiable from the available data.
- (d) Give an estimator or diagnostic only after identification is clear.
- (e) Explain the AI or LLM system implication.
9. Why This Matters for AI
| Concept | AI Impact |
|---|---|
| SCM | Encodes which mechanisms should stay stable under policy or data changes |
| Do-operator | Separates observing a model behavior from changing an input, policy, or tool |
| Adjustment | Identifies which variables should be controlled for and which should not |
| Counterfactual | Supports recourse, fairness, and unit-level explanation |
| Causal discovery | Generates candidate mechanism graphs when domain knowledge is incomplete |
| Positivity | Prevents extrapolating treatment effects into unsupported regions |
| Hidden confounding | Warns when observational logs cannot support a causal claim |
| Estimand-estimator split | Keeps flexible ML estimators from hiding causal assumptions |
10. Conceptual Bridge
Counterfactuals follows statistical learning theory because learning theory explains how observed samples support future prediction claims. Causal inference asks a different question: what happens when an action changes the system that generated those samples?
The backward bridge is risk and uncertainty. Chapter 21 provides language for finite-sample generalization. Chapter 22 adds intervention semantics, graph assumptions, and counterfactual worlds. A causal claim is not just a better prediction; it is a claim about a modified data-generating mechanism.
The forward bridge is game theory. Once multiple agents adapt to interventions, the causal question becomes strategic: actions change incentives, incentives change behavior, and behavior changes the causal system. Chapter 23 will study that interaction explicitly.
+--------------------------------------------------------------+
| Chapter 21: prediction under finite samples |
| Chapter 22: intervention, counterfactuals, causal discovery |
| Chapter 23: strategic interaction and adversarial systems |
+--------------------------------------------------------------+
References
- Rubin. Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies. https://dash.harvard.edu/entities/publication/73120378-82c0-6bd4-e053-0100007fdf3b
- Holland. Statistics and Causal Inference. https://www.ets.org/research/policy_research_reports/publications/article/1986/ajqr.html
- Imbens and Rubin. Causal Inference for Statistics, Social, and Biomedical Sciences. https://www.cambridge.org/core/books/causal-inference-for-statistics-social-and-biomedical-sciences/71126BE90C58F1A431FE9B2DD07938AB
- Pearl. Causality: Models, Reasoning, and Inference. https://www.cambridge.org/core/books/causality/6836DD2F4FD4A767DE97BBECDD1655F5