Calibration and Uncertainty: Part 6: Uncertainty Sets and Abstention
6. Uncertainty Sets and Abstention
Uncertainty Sets and Abstention is the part of calibration and uncertainty that turns the chapter outline into a concrete learning path. The subsections below keep the focus on Chapter 17's canonical job: measurement, reliability, uncertainty, and decision support for AI systems.
6.1 Conformal prediction
Conformal prediction is part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full probabilistic forecast: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system evaluated on items $x_1, \dots, x_n$, the local estimate is written

$$\hat{L} = \frac{1}{n} \sum_{i=1}^{n} \ell(x_i),$$

where $\ell(x_i)$ is the per-item loss or score (for example $\ell_{\mathrm{NLL}}$).
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For conformal prediction, those choices determine whether the reported number is evidence or decoration.
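As one concrete instance of this estimator pattern, here is a minimal split-conformal sketch. The nonconformity score (one minus the model's probability for the true label), the miscoverage level alpha, and the synthetic calibration scores are illustrative assumptions, not a prescribed protocol:

```python
import numpy as np

def split_conformal_threshold(cal_scores: np.ndarray, alpha: float = 0.1) -> float:
    """Finite-sample-corrected (1 - alpha) quantile of calibration nonconformity scores."""
    n = len(cal_scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(cal_scores, level, method="higher"))

def prediction_set(class_probs: np.ndarray, qhat: float) -> list[int]:
    """Include every label whose nonconformity score (1 - p) is at or below the threshold."""
    return [k for k, p in enumerate(class_probs) if 1.0 - p <= qhat]

# Illustrative usage with synthetic calibration scores.
rng = np.random.default_rng(0)
cal_scores = rng.uniform(0.0, 1.0, size=500)   # e.g. 1 - p_model(true label) on held-out items
qhat = split_conformal_threshold(cal_scores, alpha=0.1)
test_probs = np.array([0.70, 0.20, 0.10])      # model class probabilities on one new item
print(f"qhat={qhat:.3f}", prediction_set(test_probs, qhat))
```

The finite-sample correction in the quantile level is what gives marginal coverage of at least $1 - \alpha$, provided the calibration and test items are exchangeable.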
A useful invariant is that every evaluation claim should be reproducible as a tuple $(S, D, P, G, A)$, where $S$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in conformal prediction |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report conformal prediction with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for conformal prediction:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
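A minimal sketch of steps three and four of this pattern, assuming illustrative per-item correctness scores and one slice label per item; the percentile bootstrap is one reasonable choice of uncertainty statement, not the only one:

```python
import numpy as np

def mean_with_bootstrap_ci(scores, n_boot=2000, seed=0, level=0.95):
    """Point estimate and a percentile-bootstrap interval, reported together."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    boots = np.array([rng.choice(scores, size=len(scores), replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [(1 - level) / 2, 1 - (1 - level) / 2])
    return scores.mean(), (float(lo), float(hi))

# Illustrative per-item correctness scores with one slice label per item.
scores = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1], dtype=float)
slices = np.array(["short", "long", "short", "short", "long",
                   "short", "long", "short", "long", "short"])
est, (lo, hi) = mean_with_bootstrap_ci(scores)
print(f"overall: {est:.2f} [{lo:.2f}, {hi:.2f}] n={len(scores)}")
for s in np.unique(slices):
    sub = scores[slices == s]
    print(f"slice={s}: n={len(sub)} mean={sub.mean():.2f}")
```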
For AI systems, conformal prediction is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Conformal prediction is one place where that habit becomes concrete.
6.2 Prediction sets and marginal coverage
Prediction sets and marginal coverage is part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full probabilistic forecast: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system evaluated on items $x_1, \dots, x_n$, the local estimate is written

$$\hat{L} = \frac{1}{n} \sum_{i=1}^{n} \ell(x_i),$$

where $\ell(x_i)$ is the per-item loss or score (for example $\ell_{\mathrm{NLL}}$).
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For prediction sets and marginal coverage, those choices determine whether the reported number is evidence or decoration.
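For prediction sets, the natural instance of this estimator is empirical marginal coverage: the fraction of items whose true label lands inside the set. A minimal sketch with illustrative sets and labels, which also reports average set size because coverage alone says nothing about how informative the sets are:

```python
import numpy as np

def marginal_coverage(pred_sets, true_labels):
    """Fraction of items whose true label falls inside the prediction set."""
    return float(np.mean([y in s for s, y in zip(pred_sets, true_labels)]))

def average_set_size(pred_sets):
    """Average number of labels per set; a complement to coverage, not a substitute."""
    return float(np.mean([len(s) for s in pred_sets]))

# Illustrative prediction sets over five items with three candidate labels.
pred_sets = [{0}, {0, 1}, {2}, {1, 2}, {0, 1, 2}]
true_labels = [0, 1, 2, 0, 1]
print(marginal_coverage(pred_sets, true_labels), average_set_size(pred_sets))
```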
A useful invariant is that every evaluation claim should be reproducible as a tuple $(S, D, P, G, A)$, where $S$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in prediction sets and marginal coverage |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report prediction sets and marginal coverage with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items (a minimal sketch follows this list).
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
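A minimal sketch of the paired comparison mentioned above, assuming both models were scored on the same items; the scores below are illustrative. Differencing per item cancels shared item difficulty, so the standard error is usually tighter than for two independent means:

```python
import numpy as np

def paired_mean_difference(scores_a, scores_b):
    """Mean per-item difference A - B and its standard error over the paired differences."""
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    se = diffs.std(ddof=1) / np.sqrt(len(diffs))
    return float(diffs.mean()), float(se)

# Both models scored on the same items, so shared item difficulty cancels in the difference.
model_a = [1, 1, 0, 1, 1, 0, 1, 1]
model_b = [1, 0, 0, 1, 1, 0, 0, 1]
diff, se = paired_mean_difference(model_a, model_b)
print(f"A - B = {diff:.3f} +/- {1.96 * se:.3f}")
```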
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for prediction sets and marginal coverage:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, prediction sets and marginal coverage is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Prediction sets and marginal coverage is one place where that habit becomes concrete.
6.3 Risk-coverage curves
Risk-coverage curves are part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full probabilistic forecast: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system evaluated on items $x_1, \dots, x_n$, the local estimate is written

$$\hat{L} = \frac{1}{n} \sum_{i=1}^{n} \ell(x_i),$$

where $\ell(x_i)$ is the per-item loss or score (for example $\ell_{\mathrm{NLL}}$).
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For risk-coverage curves, those choices determine whether the reported number is evidence or decoration.
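Here the estimator is applied repeatedly: for each coverage level, the selective risk is the average loss over the most confident fraction of items. A minimal sketch with illustrative confidences and 0/1 error indicators:

```python
import numpy as np

def risk_coverage_curve(confidences, losses):
    """Selective risk at each coverage level, answering items in decreasing confidence order."""
    confidences = np.asarray(confidences, dtype=float)
    losses = np.asarray(losses, dtype=float)
    order = np.argsort(-confidences)            # most confident items first
    n = len(losses)
    coverage = np.arange(1, n + 1) / n          # fraction of items answered so far
    risk = np.cumsum(losses[order]) / np.arange(1, n + 1)
    return coverage, risk

# Illustrative confidences and 0/1 error indicators.
conf = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40]
err = [0, 0, 0, 1, 0, 1]
for c, r in zip(*risk_coverage_curve(conf, err)):
    print(f"coverage={c:.2f} risk={r:.2f}")
```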
A useful invariant is that every evaluation claim should be reproducible as a tuple $(S, D, P, G, A)$, where $S$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in risk-coverage curves |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report risk-coverage curves with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for risk-coverage curves:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, risk-coverage curves are especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns (a minimal sketch follows this checklist).
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
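A minimal sketch of the typed result record mentioned in the checklist; the field names and values are illustrative, not a required schema:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class EvalRecord:
    """One scored item; the field names here are illustrative, not a required schema."""
    item_id: str
    system_config: str        # prompt template + decoding settings, hashed or spelled out
    raw_output: str
    score: Optional[float]    # None when the item is missing, invalid, or refused
    outcome: str              # "ok", "timeout", "refusal", or "invalid"
    seed: int

record = EvalRecord(item_id="q-0012", system_config="prompt-v3/temp-0.2",
                    raw_output="42", score=1.0, outcome="ok", seed=7)
print(asdict(record))
```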
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Risk-coverage curves are one place where that habit becomes concrete.
6.4 Reject option and triage
Reject option and triage is part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full probabilistic forecast: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system evaluated on items $x_1, \dots, x_n$, the local estimate is written

$$\hat{L} = \frac{1}{n} \sum_{i=1}^{n} \ell(x_i),$$

where $\ell(x_i)$ is the per-item loss or score (for example $\ell_{\mathrm{NLL}}$).
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For reject option and triage, those choices determine whether the reported number is evidence or decoration.
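For the reject option, the estimator splits into two numbers per threshold: coverage (the fraction of items answered) and the selective risk among answered items. A minimal triage sketch; the confidences, error indicators, and thresholds are illustrative:

```python
import numpy as np

def triage(confidences, losses, threshold):
    """Answer items at or above the confidence threshold; route the rest to review."""
    confidences = np.asarray(confidences, dtype=float)
    losses = np.asarray(losses, dtype=float)
    answered = confidences >= threshold
    coverage = float(answered.mean())
    # Selective risk is undefined when nothing is answered; report NaN rather than 0.
    selective_risk = float(losses[answered].mean()) if answered.any() else float("nan")
    return coverage, selective_risk

# Illustrative confidences and 0/1 error indicators.
conf = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40]
err = [0, 0, 0, 1, 0, 1]
for t in (0.5, 0.75, 0.9):
    cov, risk = triage(conf, err, t)
    print(f"threshold={t:.2f} coverage={cov:.2f} selective_risk={risk:.2f}")
```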
A useful invariant is that every evaluation claim should be reproducible as a tuple $(S, D, P, G, A)$, where $S$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in reject option and triage |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report reject option and triage with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for reject option and triage:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, reject option and triage is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Reject option and triage is one place where that habit becomes concrete.
6.5 Structured-output uncertainty previews
Structured-output uncertainty previews are part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full probabilistic forecast: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system evaluated on items $x_1, \dots, x_n$, the local estimate is written

$$\hat{L} = \frac{1}{n} \sum_{i=1}^{n} \ell(x_i),$$

where $\ell(x_i)$ is the per-item loss or score (for example $\ell_{\mathrm{NLL}}$).
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For structured-output uncertainty previews, those choices determine whether the reported number is evidence or decoration.
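One minimal sketch of a structured-output uncertainty preview, assuming the system already attaches a confidence to each field of an extracted record; the field names, confidences, and threshold are all hypothetical:

```python
def field_preview(field_confidences, threshold=0.8):
    """Split a structured output into fields shown directly and fields flagged for review."""
    show = {k: c for k, c in field_confidences.items() if c >= threshold}
    flag = {k: c for k, c in field_confidences.items() if c < threshold}
    return show, flag

# Hypothetical per-field confidences for one extracted record.
confidences = {"invoice_number": 0.97, "total_amount": 0.91,
               "due_date": 0.55, "vendor_name": 0.72}
show, flag = field_preview(confidences)
print("show:", show)
print("flag for review:", flag)
```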
A useful invariant is that every evaluation claim should be reproducible as a tuple $(S, D, P, G, A)$, where $S$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in structured-output uncertainty previews |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report structured-output uncertainty previews with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for structured-output uncertainty previews:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, structured-output uncertainty previews are especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
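One way to make the system-configuration point concrete is to fingerprint the full configuration and store it next to every metric. The configuration fields below are hypothetical; the point is that changing any of them changes the measured quantity:

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Stable short hash of the full system configuration, stored next to every metric."""
    blob = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]

# Hypothetical configuration; changing any field changes the measured quantity.
config = {
    "model": "base-model-v1",
    "prompt_template": "extract-v3",
    "temperature": 0.2,
    "retrieval": {"enabled": True, "top_k": 5},
    "safety_filter": "strict",
}
print(config_fingerprint(config))
```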
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Structured-output uncertainty previews are one place where that habit becomes concrete.