Online Experimentation and A/B Testing: Part 7: Trustworthiness Checks to References
7. Trustworthiness Checks
Trustworthiness checks are the part of online experimentation and A/B testing that turns the approved TOC into a concrete learning path. The subsections below keep the focus on Chapter 17's canonical job: measurement, reliability, uncertainty, and decision support for AI systems.
7.1 A/A tests
An A/A test is an experiment in which both arms receive the identical treatment. Because the true effect is zero by construction, any statistically significant difference signals a problem in the pipeline itself: broken randomization, biased logging, or a miscalibrated test. In this chapter, the object under study is not merely a dataset or a model, but the full online randomized experiment: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.

The basic mathematical pattern is an empirical estimator. For a model or system evaluated on items $i = 1, \dots, n$ with per-item scores $Y_i$, the local estimate is written

$$\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} Y_i,$$

and a two-arm comparison reports the difference of such means, $\hat{\tau} = \hat{\mu}_T - \hat{\mu}_C$. The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For A/A tests, those choices determine whether the reported number is evidence or decoration: in a healthy pipeline, $\hat{\tau}$ should be statistically indistinguishable from zero.

A useful invariant is that every evaluation claim should be reproducible as a tuple $(S, D, P, G, A)$, where $S$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in A/A tests |
| Scoring rule | Exact formula for the per-item score or contrast $Y_i(1)-Y_i(0)$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report A/A test results with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for A/A tests:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, A/A testing is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. A/A testing is one place where that habit becomes concrete.
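The A/A idea can be made concrete with a small simulation: repeatedly split identically treated scores into two arms and check that a standard z-test flags a "significant" difference at roughly the nominal rate. The helper below is a minimal sketch (`aa_test` is an illustrative name, not a library function); a healthy pipeline should produce a false-positive rate near the chosen alpha.

```python
import random
import statistics

def aa_test(scores, n_splits=1000, seed=0):
    """Split identical-treatment scores into two arms many times and
    count how often a two-sided z-test at alpha = 0.05 declares a
    'significant' difference. Should be close to 5% if the analysis
    is calibrated, since the true effect is zero by construction."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_splits):
        shuffled = scores[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        a, b = shuffled[:half], shuffled[half:2 * half]
        diff = statistics.mean(a) - statistics.mean(b)
        se = (statistics.variance(a) / len(a)
              + statistics.variance(b) / len(b)) ** 0.5
        if se > 0 and abs(diff / se) > 1.96:  # two-sided, alpha = 0.05
            false_positives += 1
    return false_positives / n_splits

rng = random.Random(42)
scores = [rng.gauss(0.7, 0.1) for _ in range(400)]
rate = aa_test(scores)  # should hover near the nominal 5% level
```

If the returned rate is far from 5%, the problem is in the analysis machinery, not the model: exactly the kind of failure an A/A test exists to catch.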
7.2 Sample-ratio mismatch
A sample-ratio mismatch (SRM) occurs when the observed split of randomization units between arms deviates from the split the assignment service was configured to produce. Because assignment is a coin flip with known probabilities, the observed counts can be tested directly, and a significant deviation almost always indicates a bug rather than chance: bot filtering applied asymmetrically, redirects that drop one arm's traffic, or logging loss correlated with treatment. An experiment with an unexplained SRM should not be interpreted at all, no matter how good its metrics look, because the comparison is no longer between exchangeable groups.

The standard check is a chi-square goodness-of-fit test on arm counts. For two arms with observed counts $n_T$ and $n_C$, total $N = n_T + n_C$, and configured proportions $p_T$ and $p_C$, the statistic is

$$\chi^2 = \frac{(n_T - N p_T)^2}{N p_T} + \frac{(n_C - N p_C)^2}{N p_C},$$

compared against a $\chi^2$ distribution with one degree of freedom. Because this check runs on every experiment, practitioners typically flag only at a strict threshold such as $p < 0.001$, so that a flag is a near-certain sign of a broken pipeline rather than a routine false alarm.

When an SRM fires, the productive move is diagnosis, not re-running: segment the mismatch by browser, platform, entry point, and time window to locate where units are being gained or lost, and check whether assignment logging and exposure logging disagree.

The reporting protocol, reproducibility tuple, examples, and implementation checklist of Section 7.1 apply unchanged here; the SRM check simply runs before any metric in that protocol is allowed to carry evidential weight.
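The SRM check can be sketched in a few lines. The snippet below implements the two-arm chi-square goodness-of-fit test with only the standard library, using the identity that the survival function of a chi-square with one degree of freedom is $\operatorname{erfc}(\sqrt{x/2})$; `srm_check` and its threshold are illustrative choices, not a standard API.

```python
import math

def chi2_sf_1df(x):
    """Survival function of chi-square with 1 degree of freedom:
    P(X > x) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2.0))

def srm_check(n_treat, n_control, p_treat=0.5, threshold=1e-3):
    """Chi-square goodness-of-fit test for sample-ratio mismatch.
    Returns (p_value, flagged). A flag means the observed split is
    implausible under the configured randomization ratio."""
    n = n_treat + n_control
    exp_t = n * p_treat
    exp_c = n * (1.0 - p_treat)
    stat = ((n_treat - exp_t) ** 2 / exp_t
            + (n_control - exp_c) ** 2 / exp_c)
    p = chi2_sf_1df(stat)
    return p, p < threshold

# A 50/50 experiment that logged 50,000 vs 50,800 units:
p, flagged = srm_check(50_000, 50_800)
# p is roughly 0.01 here: suspicious, but below the bar for a hard
# SRM flag at the strict 0.001 threshold.
```

The strict default threshold reflects the practice described above: with an SRM check on every experiment, a flag should mean "almost certainly broken," not "worth a second look."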
7.3 Novelty and carryover effects
Novelty and carryover effects both threaten the assumption that an experiment's early readings predict its steady state. A novelty (or primacy) effect is a response driven by the change itself: users engage more, or less, simply because something looks different, and the effect decays as they habituate. A carryover effect runs in the other direction: a unit's exposure in an earlier period or an earlier experiment changes its behavior now, which matters whenever users are re-randomized or a crossover design reuses the same population.

The practical check is to estimate the treatment effect as a function of exposure time rather than as a single pooled number. With daily arm means, the day-$t$ effect is

$$\hat{\tau}_t = \bar{Y}_{T,t} - \bar{Y}_{C,t}.$$

A series trending toward zero suggests novelty; a series trending away from zero suggests a learning effect; a flat series supports the pooled estimate. Segmenting by new versus returning users gives a second view: new users have no prior experience to be novel relative to, so a large gap between the two segments is diagnostic.

Mitigations include running long enough to reach steady state before the decision read, discarding a burn-in period, and inserting washout periods or fresh randomization units between consecutive experiments to limit carryover.

The reporting discipline of Section 7.1 applies unchanged: the day-by-day effect series, the segment definitions, and the burn-in rule are part of the auditable protocol and should be fixed before the data are seen.
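The day-by-day diagnostic above is easy to compute. The sketch below (function names are illustrative) derives the daily effect series and its least-squares slope over days; a slope pulling the effect toward zero is the novelty warning sign, while a near-zero slope supports pooling.

```python
import statistics

def daily_effects(treat_by_day, control_by_day):
    """Per-day treatment effect: difference of daily arm means."""
    return [statistics.mean(t) - statistics.mean(c)
            for t, c in zip(treat_by_day, control_by_day)]

def effect_slope(effects):
    """Least-squares slope of the effect series over days."""
    n = len(effects)
    mean_d = (n - 1) / 2
    mean_e = statistics.mean(effects)
    cov = sum((d - mean_d) * (e - mean_e)
              for d, e in enumerate(effects))
    var = sum((d - mean_d) ** 2 for d in range(n))
    return cov / var

# Synthetic example: a treatment effect that decays 0.06 per day.
treat = [[1.0 + e] * 10 for e in (0.30, 0.24, 0.18, 0.12, 0.06)]
control = [[1.0] * 10 for _ in range(5)]
effects = daily_effects(treat, control)
slope = effect_slope(effects)  # negative: effect decaying toward zero
```

In practice the slope estimate would be reported with its own standard error; the point here is only that the trend check is a one-line addition once effects are computed per day rather than pooled.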
7.4 Interference
Interference, a violation of the stable unit treatment value assumption (SUTVA), occurs when one unit's outcome depends on another unit's assignment. Classic sources are social networks (a treated user's content reaches control users), shared resources (a latency-heavy treatment degrades the service for both arms), and two-sided markets (treatment users consume inventory that control users also wanted). Under interference, the naive difference in means no longer estimates the effect of launching to everyone, and the bias can point in either direction.

The first defense is a randomization unit large enough to contain the spillover: randomize clusters such as social circles, geographic regions, or marketplaces instead of individual users, accepting fewer effective units and wider intervals in exchange for validity. Switchback designs, which alternate the entire system between arms over time slices, serve the same purpose when the interference flows through shared state such as a cache, a queue, or a pricing engine.

Detection is harder than design. Useful signals include comparing user-randomized and cluster-randomized estimates of the same change, and watching control-arm metrics for movement correlated with the treatment rollout, which should not happen under SUTVA.

As elsewhere in this chapter, the design is part of the auditable protocol: the randomization unit, the cluster definition, and the stated interference rationale belong in the experiment record alongside the metric definitions of Section 7.1.
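Cluster-level randomization is often implemented as deterministic hashing of the cluster ID, so that every unit in a cluster lands in the same arm without coordination. The sketch below (`assign_arm` is an illustrative helper, not a standard API) hashes an experiment-scoped ID into a bucket; passing a region or marketplace ID instead of a user ID moves randomization to the cluster level.

```python
import hashlib

def assign_arm(unit_id: str, experiment: str, p_treat: float = 0.5) -> str:
    """Deterministic hash-based assignment. Hashing is scoped to the
    experiment name so arms are independent across experiments.
    Passing a cluster ID (region, marketplace) as `unit_id` gives
    cluster-level randomization: every member shares one arm,
    containing spillovers inside the cluster."""
    digest = hashlib.sha256(f"{experiment}:{unit_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2 ** 64
    return "treatment" if bucket < p_treat else "control"

# Every user in the same region gets the same arm:
arm = assign_arm("region-eu-west", "ranking-v2")
```

Determinism matters for auditability as well as consistency: given the salt (here, the experiment name), anyone can recompute every assignment and verify the logged arms match.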
7.5 Logging and auditability
Logging and auditability determine whether an experiment's conclusion can be defended after the fact. Every other check in this part consumes logs: A/A validation, SRM, novelty analysis, and interference diagnostics are all statistics computed over assignment and outcome records, and if those records are dropped, duplicated, or silently mutated, the resulting numbers are fiction with error bars.

The minimum auditable record for an online experiment includes four pieces: the assignment log (unit ID, experiment ID, arm, timestamp, and the hash salt or seed used), the exposure log recording when a unit actually experienced its assigned variant, the outcome log with missing, invalid, timeout, and refusal outcomes kept explicit, and a configuration snapshot covering code version, prompt templates, grader version, decoding parameters, and traffic allocation. This is the reproducibility tuple $(S, D, P, G, A)$ of Section 7.1 made concrete as schema.

Two practices pay for themselves repeatedly. First, log assignments at decision time rather than exposure time, so an SRM can be localized to randomization versus exposure. Second, keep records append-only and versioned, so a re-analysis can state exactly which data it saw and a changed conclusion can be traced to changed inputs rather than changed code.

The implementation checklist of Section 7.1 applies unchanged here; logging is where its items stop being advice and become tables, fields, and retention policies.
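A typed, append-only record is the simplest way to make the schema above concrete. The sketch below uses a frozen dataclass serialized as one JSON line per event; the field names are illustrative, not a standard, and a real system would add the exposure and outcome records alongside it.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class AssignmentRecord:
    """One append-only assignment-log entry. Frozen, so a record
    cannot be mutated after creation; the configuration travels
    with every event instead of living in someone's memory."""
    unit_id: str
    experiment_id: str
    arm: str
    assigned_at: str      # ISO-8601 timestamp at decision time
    code_version: str     # e.g. a VCS commit hash
    config_hash: str      # hash of prompts, grader, decoding params

rec = AssignmentRecord(
    unit_id="user-123",
    experiment_id="ranking-v2",
    arm="treatment",
    assigned_at="2024-05-01T12:00:00Z",
    code_version="abc1234",
    config_hash="9f8e0c2a",
)
line = json.dumps(asdict(rec))  # one JSON line per event
```

Typed records over loose CSV columns, as the Section 7.1 checklist recommends: a missing field fails loudly at write time instead of silently at analysis time.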
8. Common Mistakes
| # | Mistake | Why It Is Wrong | Fix |
|---|---|---|---|
| 1 | Treating a point estimate as exact | Every finite evaluation has sampling error. | Report uncertainty with the point estimate. |
| 2 | Changing prompts between models | The protocol changed along with the treatment. | Lock prompt, decoding, and grader before comparison. |
| 3 | Ignoring invalid outputs | Missingness can be correlated with model quality. | Track invalid, timeout, refusal, and parse-failure rates. |
| 4 | Overfitting to a public leaderboard | Repeated testing leaks information from the benchmark. | Use private holdouts and regression suites. |
| 5 | Averaging incomparable metrics | Different scales do not share units. | Normalize by a stated decision rule or report separately. |
| 6 | Forgetting paired structure | Two models often answer the same items. | Use paired bootstrap or paired tests where possible. |
| 7 | Reporting only aggregate performance | Subgroup failures can hide inside averages. | Add slice and tail-risk views. |
| 8 | Trusting model judges blindly | LLM judges have position, verbosity, and self-preference biases. | Calibrate judges against human labels. |
| 9 | Peeking during online experiments | Optional stopping inflates false positives. | Use fixed horizons or sequential-valid methods. |
| 10 | Conflating evaluation with monitoring | Chapter 17 measures controlled evidence; production monitoring is ongoing operations. | Hand off drift dashboards to Chapter 19 concepts. |
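Mistake 9 (peeking) is easy to demonstrate numerically. The simulation below runs A/A experiments analyzed after every batch and counts an experiment as a "win" if any interim look crosses the nominal significance bar; the function name and parameters are illustrative.

```python
import random
import statistics

def peeking_false_positive_rate(n_experiments=300, n_batches=20,
                                batch=50, seed=0):
    """Simulate A/A experiments (true effect zero) analyzed after
    every batch. Declaring a 'win' if ANY interim look has |z| > 1.96
    inflates the false-positive rate far above the nominal 5%."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_experiments):
        a, b = [], []
        significant = False
        for _ in range(n_batches):
            a += [rng.gauss(0, 1) for _ in range(batch)]
            b += [rng.gauss(0, 1) for _ in range(batch)]
            diff = statistics.mean(a) - statistics.mean(b)
            se = (statistics.variance(a) / len(a)
                  + statistics.variance(b) / len(b)) ** 0.5
            if abs(diff / se) > 1.96:
                significant = True
                break
        wins += significant
    return wins / n_experiments

rate = peeking_false_positive_rate()
# rate is typically well above the nominal 0.05 with 20 looks.
```

The fix named in the table is exactly this: either commit to a fixed horizon and analyze once, or use methods whose error guarantees survive continuous monitoring.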
9. Exercises
- (*) Offline eval predicts, online tests measure. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (*) Randomization as causal design. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (*) Overall evaluation criterion. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (**) Guardrails and downside risk. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (**) Experiment culture for LLM systems. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (**) Treatment and control. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (***) Randomization unit. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (***) Overall evaluation criterion and guardrails. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (***) Average treatment effect. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (***) Power and Type I or Type II errors. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
10. Why This Matters for AI
| Concept | AI Impact |
|---|---|
| Protocol as measurement | Prevents hidden prompt or grader changes from masquerading as model progress |
| Uncertainty intervals | Keeps model rankings honest when differences are smaller than sampling noise |
| Slice metrics | Reveals failures on languages, domains, formats, or user groups hidden by averages |
| Calibration | Lets systems decide when to answer, abstain, ask for help, or escalate |
| Robustness | Tests whether behavior survives realistic perturbations and distribution shift |
| Ablations | Separates real improvements from accidental metric movement |
| Online tests | Measures causal user impact rather than offline proxy success |
| Audit trails | Turns evaluation from a screenshot into reproducible scientific evidence |
11. Conceptual Bridge
This section sits after the training-data pipeline because evaluation depends on clean holdouts, contamination audits, and well-documented data provenance. It does not repeat those pipeline mechanics; it consumes their outputs as the basis for credible measurement.
It also sits before alignment and production chapters. Alignment asks how to shape model behavior with supervised data, preferences, policies, and feedback. Production MLOps asks how deployed systems are observed and maintained over time. Online Experimentation and A/B Testing supplies the measurement discipline both chapters need.
The recurring mathematical pattern is empirical risk with uncertainty. Whether the object is a benchmark item, a calibrated probability, a shifted subgroup, an ablation comparison, or an online treatment effect, the learner should ask: what distribution generated this evidence, what estimator did we compute, and what decision is justified by the uncertainty?
- 16 Data Pipeline -> clean eval data, manifests, decontamination
- 17 Evaluation and Reliability -> benchmarks, calibration, robustness, ablations, online tests
- 18 Alignment and Safety -> SFT, preferences, policies, human feedback
- 19 Production ML and MLOps -> monitoring, serving, retraining, observability