Causal Discovery: Part 6: Evaluation and ML Applications to References

6. Evaluation and ML Applications

Evaluation and ML Applications develops the part of causal discovery specified by the approved Chapter 22 table of contents. The treatment is causal, not merely predictive: the central objects are mechanisms, interventions, assumptions, and counterfactuals.

6.1 Structural Hamming distance

Structural Hamming distance (SHD) is the standard metric for comparing an estimated causal graph with a ground-truth graph: it counts how many single-edge edits separate the two structures. It belongs to the canonical scope of causal discovery, where the central move is to distinguish a statistical relation from a claim about what would happen under an intervention.

For this subsection, the working scope is constraint-based, score-based, functional, invariant, and optimization-based causal graph discovery with clear assumptions and evaluation metrics. The mathematical objects are variables, mechanisms, graphs, interventions, and assumptions. A causal claim is incomplete until all five are visible.

\operatorname{SHD}(G,\widehat{G})=\#\{\text{edge additions, deletions, reversals needed to turn }\widehat{G}\text{ into }G\}.

The formula gives a compact handle on structural Hamming distance: the minimal number of single-edge edits that transform the estimated graph into the reference graph. It should not be read as a purely combinatorial score, though. In causal inference, evaluation metrics encode assumptions about mechanisms, missing variables, and which parts of the world remain stable under intervention.
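Structural Hamming distance can be computed in a few lines. This is a minimal NumPy sketch, assuming binary adjacency matrices and the convention that a reversed edge costs one edit (some papers count reversals as two):

```python
import numpy as np

def shd(A_true, A_est):
    """Structural Hamming distance between two binary DAG adjacency
    matrices, where A[i, j] = 1 means an edge i -> j.

    Additions and deletions each count 1; a reversed edge counts 1
    (one common convention; other papers count reversals as 2).
    """
    A_true = np.asarray(A_true, dtype=bool)
    A_est = np.asarray(A_est, dtype=bool)
    diff = A_true != A_est
    # A reversal (true i->j estimated as j->i) flips two entries of the
    # matrix; count each such pair once instead of twice.
    rev = A_true & ~A_est & A_est.T & ~A_true.T
    return int(diff.sum() - rev.sum())

# Tiny example: true chain X0 -> X1 -> X2; the estimate reverses
# X1 -> X2 and adds a spurious edge X0 -> X2.
A_true = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])
A_est  = np.array([[0, 1, 1],
                   [0, 0, 0],
                   [0, 1, 0]])
print(shd(A_true, A_est))  # one reversal + one addition -> 2
```

The symmetric-difference-plus-reversal-correction form makes the edit-count reading explicit: every disagreeing entry is an addition or deletion unless it is one half of a reversed pair.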

Causal object | Meaning | AI interpretation
Variable | Quantity in the causal system | Prompt feature, user action, treatment, tool call, exposure, label, reward
Mechanism | Assignment that generates a variable | Data pipeline, recommender policy, human behavior, model routing rule
Graph | Qualitative causal assumptions | What can affect what, and which paths may confound effects
Intervention | Replacement of a mechanism | A/B rollout, policy switch, prompt template change, retrieval update
Counterfactual | Unit-level alternate world | What this user or model trace would have done under another action

Three examples of the causal questions this evaluation machinery serves:

  1. A recommender team wants the causal effect of ranking a document higher, not merely the correlation between rank and clicks.
  2. An LLM platform changes a safety policy and wants to estimate whether refusals changed because of the policy or because user prompts shifted.
  3. A fairness auditor asks whether a proxy feature transmits an impermissible causal path into a model decision.

Two non-examples expose the boundary:

  1. A high predictive coefficient is not a causal effect unless the graph and intervention assumptions justify it.
  2. A plausible narrative produced by a language model is not a counterfactual unless it is grounded in a causal model.

The proof habit for structural Hamming distance is to name the graph operation. Conditioning restricts a distribution. Intervention replaces a mechanism. Counterfactual reasoning updates exogenous uncertainty from evidence, changes a mechanism, then predicts.

observed association:      P(Y | X=x)
intervention question:     P(Y | do(X=x))
counterfactual question:   P(Y_x | E=e)
discovery question:        which G could have generated P(V)?

In machine learning, structural Hamming distance is valuable because models are often deployed under interventions: ranking changes, policy changes, safety filters, tool-use gates, data collection changes, and human feedback loops. Prediction alone does not tell us which change caused which downstream behavior.

Notebook implementation will use synthetic SCMs and small graphs. This keeps the examples executable while preserving the conceptual split between identification and estimation.
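The split between conditioning and intervening can be made executable with a three-variable synthetic SCM. In this illustrative NumPy simulation (the coefficients are arbitrary choices), the observational slope of Y on X is biased by a confounder Z, while the slope under do(X) recovers the mechanism coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Confounded SCM: Z -> X, Z -> Y, X -> Y, with true effect of X on Y = 1.
Z = rng.normal(size=n)
X = Z + rng.normal(size=n)
Y = X + 2.0 * Z + rng.normal(size=n)

# Observational slope E[Y | X = x]: inflated by the backdoor path X <- Z -> Y.
obs_slope = np.cov(X, Y)[0, 1] / np.var(X)

# Intervention do(X = x): replace X's mechanism, keep every other one.
X_do = rng.normal(size=n)                  # exogenous assignment; Z no longer feeds X
Y_do = X_do + 2.0 * Z + rng.normal(size=n)
do_slope = np.cov(X_do, Y_do)[0, 1] / np.var(X_do)

print(round(obs_slope, 2), round(do_slope, 2))  # roughly 2.0 vs 1.0
```

Analytically, cov(X, Y) = Var(X) + 2 cov(X, Z) = 4 while Var(X) = 2, so the observational slope is 2; under do(X), X is independent of Z and the slope is the mechanism coefficient 1.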

Checklist for using structural Hamming distance responsibly:

  • State the causal question before choosing a method.
  • Draw or describe the assumed causal graph.
  • Mark observed, latent, treatment, outcome, and adjustment variables.
  • Separate intervention notation from conditioning notation.
  • Decide whether the query is identifiable before estimating it.
  • Report assumptions that cannot be tested from the observed data alone.
  • Use ML as an estimation aid, not as a substitute for causal design.

This chapter follows the boundary set by Chapter 21. Statistical learning theory controls prediction error under distributional assumptions. Causal inference asks what happens when the distribution changes because something is done.

Modern AI systems make this distinction unavoidable. A foundation model can predict which action historically followed a context, but a decision system needs to know what would happen if it took a different action in that context.

Thus, structural Hamming distance is not an abstract philosophical add-on. It is a production and research tool for deciding which model, prompt, policy, feature, or intervention actually changed an outcome.

A final diagnostic question is whether the claim would survive a policy change. If the answer depends only on a historical correlation, it belongs in predictive modeling. If the answer depends on what mechanism is replaced and which paths remain active, it belongs in causal inference.

Diagnostic question | Causal discipline it tests
What is being changed? | Intervention target
Which mechanism is replaced? | SCM modularity
Which paths transmit the effect? | Graph semantics
Which variables are merely observed? | Conditioning versus intervention
Which quantities are unobserved? | Confounding and counterfactual uncertainty

6.2 Structural intervention distance preview

Structural intervention distance (SID), previewed here, complements SHD. Rather than counting edge edits, SID counts the ordered pairs of variables for which the estimated graph would produce a wrong intervention distribution when its parent sets are used for adjustment. It is therefore the more directly causal of the two metrics.


\operatorname{SID}(G,\widehat{G})=\#\bigl\{(i,j):\ P(X_j\mid \operatorname{do}(X_i))\ \text{is miscomputed when adjusting for}\ \operatorname{pa}_{\widehat{G}}(i)\ \text{in}\ G\bigr\}.

The formula makes the contrast with SHD concrete. SID is asymmetric, and a single reversed edge, which costs only one point of SHD, can invalidate the intervention distributions of many downstream pairs at once.
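A single reversed edge is a small structural error with large interventional consequences. In this illustrative NumPy simulation, an estimated graph that differs from the truth by one reversed edge predicts a zero effect of do(X1) on X2 where the true effect is 2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True SCM: X1 -> X2 with mechanism coefficient 2.
X1 = rng.normal(size=n)
X2 = 2.0 * X1 + rng.normal(size=n)

# An estimate that reverses the edge (SHD = 1) claims X2 -> X1, so under
# the estimate, intervening on X1 cannot move X2: predicted effect 0.
pred_effect_under_estimate = 0.0

# True interventional effect, read off by simulating do(X1).
X1_do = rng.normal(size=n)
X2_do = 2.0 * X1_do + rng.normal(size=n)
true_effect = np.cov(X1_do, X2_do)[0, 1] / np.var(X1_do)

print(round(true_effect, 2), pred_effect_under_estimate)  # ~2.0 vs 0.0
```

An edit-count metric scores this mistake as one unit; an interventional metric scores it by the downstream predictions it corrupts, which is the reading the formula above formalizes.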


6.3 Synthetic benchmarks

Synthetic benchmarks are the workhorse of causal discovery evaluation. Because the data are generated from a known structural causal model, the ground-truth graph is available by construction, and metrics such as SHD and SID can be computed exactly rather than estimated.


X_j=f_j(\operatorname{pa}_j)+N_j,\qquad N_j \perp\!\!\!\perp \operatorname{pa}_j.

The formula is the generator behind most synthetic benchmarks: each variable is a function of its parents plus an independent additive noise term. The choices of graph density, mechanism class f_j, and noise distribution determine how hard the benchmark is, and those choices encode assumptions about which parts of the world remain stable under intervention.
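Additive-noise benchmark data can be generated in a few lines. This is a hedged sketch (the helper name sample_anm, the tanh mechanisms, and the noise scale are illustrative choices, not a standard API); acyclicity holds by construction because edges only run forward in a fixed variable ordering:

```python
import numpy as np

def sample_anm(d, n, edge_prob=0.4, rng=None):
    """Sample a random DAG and additive-noise data X_j = f_j(pa_j) + N_j.

    Edges run i -> j only for i < j (so the graph is a DAG by
    construction), mechanisms are tanh of a weighted parent sum, and
    noise is independent Gaussian. Returns (adjacency, data).
    """
    if rng is None:
        rng = np.random.default_rng()
    A = np.triu(rng.random((d, d)) < edge_prob, k=1).astype(int)
    W = A * rng.uniform(0.5, 2.0, size=(d, d))   # random edge weights on present edges
    X = np.zeros((n, d))
    for j in range(d):                            # fill variables in causal order
        parents = X @ W[:, j]                     # weighted parent sum (zero if no parents)
        X[:, j] = np.tanh(parents) + rng.normal(scale=0.5, size=n)
    return A, X

A, X = sample_anm(d=5, n=1000, rng=np.random.default_rng(0))
print(A.sum(), X.shape)  # number of ground-truth edges, (1000, 5)
```

Passing an explicit seeded generator makes the benchmark reproducible, which matters when reporting SHD or SID averages across random graphs.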


6.4 Causal feature selection

Causal feature selection asks which features stand in a causal relation to the target, rather than which features merely predict it. The canonical answer is the target's Markov blanket in the causal graph: its parents, its children, and the other parents of its children.


A_{ij}=1 \iff X_i \to X_j.

The adjacency matrix is the bookkeeping device for causal feature selection: once the graph is encoded this way, parents, children, and spouses of a target can be read off directly, and the selected feature set inherits whatever assumptions the graph encodes about mechanisms and missing variables.
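A small sketch, assuming the graph is known and encoded as a binary adjacency matrix (markov_blanket is an illustrative helper, not a library function). Under the Markov condition, the returned set renders the target independent of all remaining variables, which is what makes it the canonical causal feature set:

```python
import numpy as np

def markov_blanket(A, target):
    """Markov blanket of `target` in a DAG given as a binary adjacency
    matrix (A[i, j] = 1 means i -> j): parents, children, and spouses
    (other parents of the target's children)."""
    A = np.asarray(A, dtype=bool)
    parents = set(np.flatnonzero(A[:, target]))   # i with i -> target
    children = set(np.flatnonzero(A[target, :]))  # j with target -> j
    spouses = set()
    for c in children:
        spouses |= set(np.flatnonzero(A[:, c]))   # other parents of each child
    spouses -= {target}
    return sorted(parents | children | spouses)

# Example graph: 0 -> 2, 1 -> 2, 2 -> 3, 4 -> 3.
A = np.zeros((5, 5), dtype=int)
A[0, 2] = A[1, 2] = A[2, 3] = A[4, 3] = 1
print(markov_blanket(A, target=2))  # parents {0, 1}, child {3}, spouse {4}
```

Note that node 4 enters the blanket of node 2 without any edge between them: it is a co-parent of the shared child 3, which a purely predictive screen on pairwise association could miss.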


6.5 LLM-assisted causal hypothesis generation with human review

LLM-assisted causal hypothesis generation uses a language model to propose candidate variables, mechanisms, and edges, which human reviewers then accept, reject, or refine. The proposals are hypotheses, not evidence: the central move in causal inference, distinguishing a statistical relation from a claim about an intervention, still has to be made by the analyst.


h(W)=\operatorname{tr}\left(e^{W\odot W}\right)-d=0.

In this workflow the acyclicity function h serves as a mechanical sanity check on proposed graphs: h(W)=0 exactly when the weighted adjacency matrix W encodes a DAG, and h(W)>0 whenever the proposal contains a cycle. Passing the check does not validate the hypothesis; it only establishes that the proposal is a well-formed causal graph worth human review.
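The acyclicity score h(W) = tr(exp(W ∘ W)) − d can be evaluated directly with a matrix exponential. A short sketch using NumPy and SciPy (the helper name acyclicity is illustrative):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS-style acyclicity score h(W) = tr(exp(W * W)) - d, where
    * is the elementwise (Hadamard) product.

    h(W) = 0 exactly when W encodes a DAG; h(W) > 0 if any cycle exists,
    because powers of W * W then keep positive mass on the diagonal.
    """
    W = np.asarray(W, dtype=float)
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

# A DAG proposal passes; a back-edge that closes a cycle fails.
dag = np.array([[0.0, 1.0],
                [0.0, 0.0]])
cyc = np.array([[0.0, 1.0],
                [1.0, 0.0]])
print(round(acyclicity(dag), 6), acyclicity(cyc) > 0)
```

For the DAG, W ∘ W is nilpotent, so exp(W ∘ W) = I + (W ∘ W) and the trace is exactly d; the two-cycle contributes 2 cosh(1) − 2 ≈ 1.086 to the score.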


7. Common Mistakes

# | Mistake | Why It Is Wrong | Fix
1 | Equating correlation with causation | Conditional association can arise from confounding, selection, or collider bias. | State the causal graph and the target intervention before interpreting associations.
2 | Conditioning on colliders | A collider can open a spurious path when conditioned on. | Use d-separation and adjustment criteria, not variable-importance intuition alone.
3 | Forgetting the estimand-estimator split | Identification is a symbolic question; estimation is a statistical question. | First derive the causal estimand, then choose an estimator and diagnostics.
4 | Using do-calculus without assumptions | The rules operate on a causal graph whose assumptions are supplied by the analyst. | Make graph assumptions explicit and discuss unobserved variables.
5 | Treating counterfactuals as factual labels | Only one potential outcome is observed for each unit. | Use consistency, exchangeability, and sensitivity analysis carefully.
6 | Assuming discovery is assumption-free | Many graphs can imply the same observational distribution. | Report equivalence classes, required assumptions, and intervention needs.
7 | Confusing prediction robustness with causal invariance | A predictive feature can be stable in one dataset and noncausal under intervention. | Use environment shifts and mechanism assumptions to justify causal claims.
8 | Ignoring positivity or overlap | Causal effects cannot be estimated where treatment assignments have no support. | Inspect propensity or support before using adjustment formulas.
9 | Letting ML hide causal design | Flexible nuisance models do not create identification. | Use ML after identification, with cross-fitting or regularization as estimation tools.
10 | Overtrusting LLM causal explanations | Language models can narrate plausible mechanisms without evidence. | Use LLMs for hypothesis generation, then require graph, data, and domain checks.
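Mistake 2 (conditioning on colliders) can be reproduced in a few lines: two independent causes become correlated once their common effect is held approximately fixed. An illustrative NumPy simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# X and Y are independent causes of a collider C.
X = rng.normal(size=n)
Y = rng.normal(size=n)
C = X + Y + 0.1 * rng.normal(size=n)

marginal = np.corrcoef(X, Y)[0, 1]   # ~0: no causal link, no confounder

# "Controlling for" C by selecting a narrow slice of it opens the
# spurious path X -> C <- Y: within the slice, X and Y must offset.
mask = np.abs(C) < 0.1
conditional = np.corrcoef(X[mask], Y[mask])[0, 1]

print(round(marginal, 2), round(conditional, 2))  # ~0.0 vs strongly negative
```

This is the simulation counterpart of the d-separation rule: the path X -> C <- Y is blocked until C (or a descendant of C) is conditioned on, at which point it opens.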

8. Exercises

All ten exercises apply the same five-step template to a causal-discovery task:

    • (a) State the causal query using intervention or counterfactual notation.
    • (b) Draw or describe the relevant graph and assumptions.
    • (c) Decide whether the estimand is identifiable from the available data.
    • (d) Give an estimator or diagnostic only after identification is clear.
    • (e) Explain the AI or LLM system implication.

Exercises 1-3 are rated (*), exercises 4-6 are rated (**), and exercises 7-10 are rated (***).

9. Why This Matters for AI

Concept | AI Impact
SCM | Encodes which mechanisms should stay stable under policy or data changes
Do-operator | Separates observing a model behavior from changing an input, policy, or tool
Adjustment | Identifies which variables should be controlled for and which should not
Counterfactual | Supports recourse, fairness, and unit-level explanation
Causal discovery | Generates candidate mechanism graphs when domain knowledge is incomplete
Positivity | Prevents extrapolating treatment effects into unsupported regions
Hidden confounding | Warns when observational logs cannot support a causal claim
Estimand-estimator split | Keeps flexible ML estimators from hiding causal assumptions

10. Conceptual Bridge

Causal Discovery follows statistical learning theory because learning theory explains how observed samples support future prediction claims. Causal inference asks a different question: what happens when an action changes the system that generated those samples?

The backward bridge is risk and uncertainty. Chapter 21 provides language for finite-sample generalization. Chapter 22 adds intervention semantics, graph assumptions, and counterfactual worlds. A causal claim is not just a better prediction; it is a claim about a modified data-generating mechanism.

The forward bridge is game theory. Once multiple agents adapt to interventions, the causal question becomes strategic: actions change incentives, incentives change behavior, and behavior changes the causal system. Chapter 23 will study that interaction explicitly.

+--------------------------------------------------------------+
| Chapter 21: prediction under finite samples                  |
| Chapter 22: intervention, counterfactuals, causal discovery  |
| Chapter 23: strategic interaction and adversarial systems    |
+--------------------------------------------------------------+

