Calibration and Uncertainty, Part 7: LLM-Specific Uncertainty
7. LLM-Specific Uncertainty
LLM-Specific Uncertainty is the part of calibration and uncertainty that turns the chapter's table of contents into a concrete learning path. The subsections below keep the focus on Chapter 17's canonical job: measurement, reliability, uncertainty, and decision support for AI systems.
7.1 Token confidence
Token confidence is part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full forecasting pipeline: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system $f$ evaluated on items $(x_1, y_1), \dots, (x_n, y_n)$, the local estimate is written

$$\hat{R}_n = \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big),$$

where $\ell$ is the chosen loss or score, for example $\ell_{\mathrm{NLL}}$.
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For token confidence, those choices determine whether the reported number is evidence or decoration.
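To make the estimator concrete for token confidence, the sketch below computes the mean token-level NLL and a naive standard error from per-token log-probabilities. It is a minimal illustration, assuming the logprobs have already been extracted from whatever API or runtime produced them; the function name and return fields are illustrative, not a standard interface.

```python
import math

def token_confidence_summary(token_logprobs: list[float]) -> dict[str, float]:
    """Summarize per-token confidence for one generated output.

    `token_logprobs` holds log p(token | prefix) for each generated token.
    Illustrative sketch: names are not from any specific library.
    """
    n = len(token_logprobs)
    nll = [-lp for lp in token_logprobs]            # per-token negative log-likelihood
    mean_nll = sum(nll) / n                         # the empirical estimator above
    var = sum((x - mean_nll) ** 2 for x in nll) / (n - 1) if n > 1 else 0.0
    return {
        "mean_nll": mean_nll,
        # Naive SE: adjacent tokens are correlated, so this understates the
        # real uncertainty -- exactly the independence question raised above.
        "std_error": math.sqrt(var / n),
        "min_token_prob": math.exp(-max(nll)),      # least confident single token
        "perplexity": math.exp(mean_nll),
    }
```

The standard-error line makes the independence question visible in code: if tokens within one output are treated as independent items, the resulting interval will be too narrow.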
A useful invariant is that every evaluation claim should be reproducible as a tuple $(f, D, P, G, A)$, where $f$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in token confidence |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise comparison model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report token confidence with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for token confidence:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, token confidence is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns (see the sketch after this list).
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
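As referenced in the checklist, one way to keep result records typed is a small frozen dataclass. This is a sketch under the assumption that one record is written per scored item; the field names are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvalResult:
    """One scored evaluation item. Field names are illustrative."""
    item_id: str
    system_config: str       # model + prompt template + decoding, versioned together
    raw_output: str          # archived verbatim so graders can be replayed later
    status: str              # "ok" | "invalid" | "timeout" | "refusal", kept explicit
    score: float | None      # None when status != "ok", never a silent 0.0
    grader_version: str
    seed: int
```

Keeping `status` and `score` separate prevents missing or refused outputs from silently dragging the aggregate up or down.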
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Token confidence is one place where that habit becomes concrete.
7.2 Sequence confidence
Sequence confidence is part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full forecasting pipeline: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system $f$ evaluated on items $(x_1, y_1), \dots, (x_n, y_n)$, the local estimate is written

$$\hat{R}_n = \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big),$$

where $\ell$ is the chosen loss or score, for example $\ell_{\mathrm{NLL}}$.
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For sequence confidence, those choices determine whether the reported number is evidence or decoration.
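For sequence confidence specifically, the natural estimator sums the per-token log-probabilities. The sketch below shows the raw sequence probability next to a length-normalized variant; it is a minimal illustration with assumed inputs, shown because raw sequence probability shrinks with length regardless of output quality.

```python
import math

def sequence_confidence(token_logprobs: list[float]) -> dict[str, float]:
    """Sequence-level confidence from per-token log-probabilities.

    Illustrative sketch; assumes logprobs cover the generated tokens only.
    """
    total_logprob = sum(token_logprobs)             # log p(sequence | prompt)
    n = len(token_logprobs)
    return {
        "total_logprob": total_logprob,
        "sequence_prob": math.exp(total_logprob),   # underflows toward 0 for long outputs
        # Length normalization removes the bias that longer sequences always
        # look less probable, at the cost of changing the question being asked.
        "mean_logprob": total_logprob / n,
    }
```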
A useful invariant is that every evaluation claim should be reproducible as a tuple $(f, D, P, G, A)$, where $f$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in sequence confidence |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise comparison model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report sequence confidence with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for sequence confidence:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, sequence confidence is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Sequence confidence is one place where that habit becomes concrete.
7.3 Self-consistency and sample disagreement
Self-consistency and sample disagreement is part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full forecasting pipeline: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system $f$ evaluated on items $(x_1, y_1), \dots, (x_n, y_n)$, the local estimate is written

$$\hat{R}_n = \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big),$$

where $\ell$ is the chosen loss or score, for example $\ell_{\mathrm{NLL}}$.
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For self-consistency and sample disagreement, those choices determine whether the reported number is evidence or decoration.
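A concrete way to measure self-consistency is to sample k answers per item at nonzero temperature, normalize each to a final answer, and record how strongly the samples agree. The sketch below is a minimal majority-vote version; answer normalization is assumed to happen upstream, and the names are illustrative.

```python
from collections import Counter

def self_consistency(answers: list[str]) -> dict[str, object]:
    """Majority vote and disagreement over k sampled answers for one item.

    `answers` are already-normalized final answers (e.g., extracted numbers).
    Illustrative sketch.
    """
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    k = len(answers)
    return {
        "answer": top_answer,                    # majority-vote prediction
        "agreement": top_count / k,              # fraction of samples matching the mode
        "disagreement": 1.0 - top_count / k,     # a simple per-item uncertainty signal
        "n_distinct": len(counts),               # how many distinct answers appeared
    }
```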
A useful invariant is that every evaluation claim should be reproducible as a tuple $(f, D, P, G, A)$, where $f$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in self-consistency and sample disagreement |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise comparison model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report self-consistency and sample disagreement with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for self-consistency and sample disagreement:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, self-consistency and sample disagreement is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Self-consistency and sample disagreement is one place where that habit becomes concrete.
7.4 Semantic uncertainty
Semantic uncertainty is part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full forecasting pipeline: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system $f$ evaluated on items $(x_1, y_1), \dots, (x_n, y_n)$, the local estimate is written

$$\hat{R}_n = \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big),$$

where $\ell$ is the chosen loss or score, for example $\ell_{\mathrm{NLL}}$.
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For semantic uncertainty, those choices determine whether the reported number is evidence or decoration.
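Semantic uncertainty differs from raw sample disagreement in that paraphrases should not count as disagreement. One published approach, semantic entropy, clusters sampled answers by meaning and computes entropy over the clusters. The sketch below assumes the caller supplies an `equivalent` predicate; in the literature this is often bidirectional entailment with an NLI model, but here it is an explicit assumption, not an implementation.

```python
import math
from typing import Callable

def semantic_entropy(
    answers: list[str],
    equivalent: Callable[[str, str], bool],
) -> float:
    """Entropy over meaning clusters rather than surface strings.

    `equivalent` is a placeholder semantic-equivalence check supplied by
    the caller; this sketch makes no claim about how it is implemented.
    """
    clusters: list[list[str]] = []
    for a in answers:
        for cluster in clusters:
            if equivalent(a, cluster[0]):   # greedy: compare to cluster representative
                cluster.append(a)
                break
        else:
            clusters.append([a])            # no match: start a new meaning cluster
    total = len(answers)
    probs = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in probs)
```

Two paraphrases of the same claim land in one cluster and keep the entropy low; genuinely different claims raise it, which is exactly the behavior surface-level disagreement misses.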
A useful invariant is that every evaluation claim should be reproducible as a tuple $(f, D, P, G, A)$, where $f$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in semantic uncertainty |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise comparison model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report semantic uncertainty with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for semantic uncertainty:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, semantic uncertainty is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Semantic uncertainty is one place where that habit becomes concrete.
7.5 Knowing what it knows
Knowing what it knows is part of the canonical scope of calibration and uncertainty. In this chapter, the object under study is not merely a dataset or a model, but the full forecasting pipeline: the items, prompts, outputs, graders, uncertainty statements, and decision rules that turn model behavior into evidence.
The basic mathematical pattern is an empirical estimator. For a model or system $f$ evaluated on items $(x_1, y_1), \dots, (x_n, y_n)$, the local estimate is written

$$\hat{R}_n = \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big),$$

where $\ell$ is the chosen loss or score, for example $\ell_{\mathrm{NLL}}$.
The formula is intentionally simple. The difficulty lies in deciding what counts as an item, which loss or score is meaningful, whether the items are independent, and whether the estimate answers the real product or research question. For knowing what it knows, those choices determine whether the reported number is evidence or decoration.
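"Knowing what it knows" is ultimately a calibration question: does stated or derived confidence match observed correctness? The sketch below bins (confidence, correct) pairs and reports per-bin accuracy plus a simple expected calibration error; the bin count and field names are illustrative choices, not a fixed standard.

```python
def reliability_bins(
    confidences: list[float],
    correct: list[bool],
    n_bins: int = 10,
) -> tuple[list[dict], float]:
    """Bin stated confidence against observed correctness.

    Returns per-bin summaries and a simple expected calibration error (ECE).
    Illustrative sketch; assumes confidences lie in [0, 1].
    """
    bins = [{"conf_sum": 0.0, "hits": 0, "n": 0} for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        i = min(int(c * n_bins), n_bins - 1)   # confidence 1.0 falls in the top bin
        bins[i]["conf_sum"] += c
        bins[i]["hits"] += int(ok)
        bins[i]["n"] += 1
    total = len(confidences)
    ece, report = 0.0, []
    for b in bins:
        if b["n"] == 0:
            continue                            # empty bins contribute nothing
        avg_conf = b["conf_sum"] / b["n"]
        accuracy = b["hits"] / b["n"]
        ece += (b["n"] / total) * abs(avg_conf - accuracy)
        report.append({"avg_conf": avg_conf, "accuracy": accuracy, "n": b["n"]})
    return report, ece
```

A well-calibrated system shows per-bin accuracy close to per-bin average confidence; large gaps in either direction are evidence that the confidence signal cannot drive abstention decisions.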
A useful invariant is that every evaluation claim should be reproducible as a tuple $(f, D, P, G, A)$, where $f$ is the system, $D$ is the task sample, $P$ is the prompt or intervention policy, $G$ is the grader, and $A$ is the aggregation rule. If any part of this tuple is missing, the number cannot be audited.
| Component | What to record | Why it matters |
|---|---|---|
| Item definition | IDs, source, split, and allowed transformations | Prevents accidental drift in knowing what it knows |
| Scoring rule | Exact formula for $\ell_{\mathrm{NLL}}$ | Makes comparisons repeatable |
| Aggregation | Mean, weighted mean, worst group, or pairwise comparison model | Determines the scientific claim |
| Uncertainty | Standard error, interval, or posterior summary | Separates signal from sampling noise |
| Audit trail | Code version and random seeds | Makes failures debuggable |
Examples of correct use:
- Report knowing what it knows with item count, prompt protocol, grader version, and a confidence interval.
- Use paired comparisons when two models answer the same evaluation items.
- Inspect at least one meaningful slice before concluding that the aggregate result is reliable.
- Store raw outputs so future graders can be replayed without querying the model again.
- Document whether the metric is measuring capability, reliability, user value, or risk.
Non-examples:
- A leaderboard point estimate without sample size.
- A benchmark score produced with an undocumented prompt template.
- A model-graded result without judge identity, rubric, or agreement check.
- A robustness claim measured only on the easiest in-distribution examples.
- An online win declared before the randomization and logging checks pass.
Worked evaluation pattern for knowing what it knows:
- Define the evaluation population in words before writing code.
- Choose the smallest metric set that answers the decision question.
- Compute the point estimate and an uncertainty statement together.
- Run a slice or paired analysis to check whether the aggregate hides structure.
- Archive raw outputs, scores, and seeds before changing the prompt or grader.
For AI systems, knowing what it knows is especially delicate because the same model can be used with many prompts, decoding policies, tools, retrieval contexts, and safety filters. The measured quantity is therefore a property of the system configuration, not just the base weights.
| AI connection | Evaluation consequence |
|---|---|
| Prompting | Treat prompt templates as part of the protocol, not as invisible setup |
| Decoding | Temperature and sampling change both mean score and variance |
| Retrieval | Retrieved context creates an extra source of failure and leakage |
| Tool use | Tool errors need separate attribution from model reasoning errors |
| Safety layer | Guardrail behavior can improve risk metrics while changing capability metrics |
Implementation checklist:
- Use deterministic seeds for synthetic or sampled evaluation subsets.
- Print metric denominators, not only percentages.
- Keep missing, invalid, timeout, and refusal outcomes explicit.
- Prefer typed result records over loose CSV columns.
- Separate raw model outputs from normalized grader inputs.
- Track the smallest reproducible command that generated the result.
- Record whether the estimate is item-weighted, token-weighted, user-weighted, or domain-weighted.
- Write the decision rule before seeing the final score whenever the result will guide a release.
The mathematical habit to build is skepticism with structure. A score is not ignored because it is noisy; it is interpreted through the design that produced it. Knowing what it knows is one place where that habit becomes concrete.
8. Common Mistakes
| # | Mistake | Why It Is Wrong | Fix |
|---|---|---|---|
| 1 | Treating a point estimate as exact | Every finite evaluation has sampling error. | Report uncertainty with the point estimate. |
| 2 | Changing prompts between models | The protocol changed along with the treatment. | Lock prompt, decoding, and grader before comparison. |
| 3 | Ignoring invalid outputs | Missingness can be correlated with model quality. | Track invalid, timeout, refusal, and parse-failure rates. |
| 4 | Overfitting to a public leaderboard | Repeated testing leaks information from the benchmark. | Use private holdouts and regression suites. |
| 5 | Averaging incomparable metrics | Different scales do not share units. | Normalize by a stated decision rule or report separately. |
| 6 | Forgetting paired structure | Two models often answer the same items. | Use paired bootstrap or paired tests where possible (see the sketch after this table). |
| 7 | Reporting only aggregate performance | Subgroup failures can hide inside averages. | Add slice and tail-risk views. |
| 8 | Trusting model judges blindly | LLM judges have position, verbosity, and self-preference biases. | Calibrate judges against human labels. |
| 9 | Peeking during online experiments | Optional stopping inflates false positives. | Use fixed horizons or sequentially valid methods. |
| 10 | Conflating evaluation with monitoring | Chapter 17 measures controlled evidence; production monitoring is ongoing operations. | Hand off drift dashboards to Chapter 19 concepts. |
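For mistake 6, a paired bootstrap over per-item score differences is a minimal fix. The sketch below estimates how often system A's mean beats system B's under resampling; it assumes both score lists are aligned on the same items, which is what makes the comparison paired.

```python
import random

def paired_bootstrap_win_rate(
    scores_a: list[float],
    scores_b: list[float],
    n_resamples: int = 10_000,
    seed: int = 0,
) -> float:
    """Fraction of bootstrap resamples in which system A's mean beats system B's.

    Illustrative sketch; assumes scores are aligned item-by-item.
    """
    assert len(scores_a) == len(scores_b), "paired comparison needs aligned items"
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    rng = random.Random(seed)                 # deterministic seed, per the checklist
    n = len(diffs)
    wins = 0
    for _ in range(n_resamples):
        resample = [diffs[rng.randrange(n)] for _ in range(n)]
        if sum(resample) / n > 0:
            wins += 1
    return wins / n_resamples
```

Resampling the differences rather than the two score lists independently keeps the item-level pairing intact, which typically tightens the comparison considerably.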
9. Exercises
- (*) Confidence should match correctness. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (*) High accuracy can still be unsafe. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (*) Selective prediction and abstention. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (**) Epistemic and aleatoric uncertainty. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (**) Why LLM verbal confidence is unreliable. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (**) Calibration condition. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (***) Reliability function. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (***) Expected and maximum calibration error. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (***) Brier score and negative log-likelihood. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
- (***) Coverage, set size, and selective risk. (a) Define the relevant evaluation object. (b) Write the estimator in LaTeX notation. (c) Give one example where the estimator is reliable. (d) Give one example where the same number would be misleading. (e) Describe what the theory notebook should verify computationally.
10. Why This Matters for AI
| Concept | AI Impact |
|---|---|
| Protocol as measurement | Prevents hidden prompt or grader changes from masquerading as model progress |
| Uncertainty intervals | Keeps model rankings honest when differences are smaller than sampling noise |
| Slice metrics | Reveals failures on languages, domains, formats, or user groups hidden by averages |
| Calibration | Lets systems decide when to answer, abstain, ask for help, or escalate |
| Robustness | Tests whether behavior survives realistic perturbations and distribution shift |
| Ablations | Separates real improvements from accidental metric movement |
| Online tests | Measures causal user impact rather than offline proxy success |
| Audit trails | Turns evaluation from a screenshot into reproducible scientific evidence |
11. Conceptual Bridge
This section sits after the training-data pipeline because evaluation depends on clean holdouts, contamination audits, and well-documented data provenance. It does not repeat those pipeline mechanics; it consumes their outputs as the basis for credible measurement.
It also sits before alignment and production chapters. Alignment asks how to shape model behavior with supervised data, preferences, policies, and feedback. Production MLOps asks how deployed systems are observed and maintained over time. Calibration and Uncertainty supplies the measurement discipline both chapters need.
The recurring mathematical pattern is empirical risk with uncertainty. Whether the object is a benchmark item, a calibrated probability, a shifted subgroup, an ablation comparison, or an online treatment effect, the learner should ask: what distribution generated this evidence, what estimator did we compute, and what decision is justified by the uncertainty?
- Chapter 16, Data Pipeline -> clean eval data, manifests, decontamination
- Chapter 17, Evaluation and Reliability -> benchmarks, calibration, robustness, ablations, online tests
- Chapter 18, Alignment and Safety -> SFT, preferences, policies, human feedback
- Chapter 19, Production ML and MLOps -> monitoring, serving, retraining, observability