LLM Evaluation Observability and Guardrails: Part 2 - Formal Definitions

2. Formal Definitions

This section develops the formal definitions that the approved Chapter 19 table of contents assigns to LLM evaluation, observability, and guardrails. The treatment is production-focused: every idea is connected to a versioned artifact, a measurable signal, a release decision, or an incident response.

2.1 trace τ

The trace τ is part of the canonical scope of LLM evaluation, observability, and guardrails. In production ML, the useful question is not only whether the model can be trained, but whether the surrounding artifact, signal, or control can be named, versioned, measured, and recovered after a failure.

The working objects of this section are LLM traces, online evaluation, runtime guardrails, incident response, and the loops that feed production behavior back into evals and training data. The notation below treats production systems as mathematical objects because that is how incidents become diagnosable: a dataset, feature, run, trace, or endpoint that lacks a stable identifier cannot be compared across time.

\operatorname{cost}(\tau) = c_{\mathrm{tok}} n_{\mathrm{tok}} + c_{\mathrm{tool}} n_{\mathrm{tool}} + c_{\mathrm{review}} n_{\mathrm{review}}.

The formula is intentionally simple. It prices a trace τ as the sum of its token usage, tool calls, and human reviews, with unit costs c and observed counts n, so that cost becomes a measurable property of every request before anyone argues about dashboards or tools. Once the object is measurable, the system can decide whether to accept, warn, roll back, retrain, or escalate.
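A minimal sketch of this cost accounting in Python, assuming a trace is logged as a dict of counters; the unit prices and field names are illustrative, not a fixed schema:

```python
# Hypothetical unit prices; real values come from your provider's price sheet.
C_TOK, C_TOOL, C_REVIEW = 2e-6, 1e-3, 0.50

def trace_cost(trace: dict) -> float:
    """cost(tau) = c_tok*n_tok + c_tool*n_tool + c_review*n_review."""
    return (C_TOK * trace["n_tokens"]
            + C_TOOL * trace["n_tool_calls"]
            + C_REVIEW * trace["n_human_reviews"])

tau = {"n_tokens": 1850, "n_tool_calls": 3, "n_human_reviews": 0}
print(f"cost(tau) = ${trace_cost(tau):.4f}")  # cost(tau) = $0.0067
```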

Production object | Mathematical role | Operational consequence
Identifier | A stable key in a set or graph | Lets teams join logs, artifacts, and incidents
Version | A time-indexed element such as v_t | Makes old and new behavior comparable
Metric | A function m: \mathcal{X} \to \mathbb{R} | Turns behavior into a release or alert signal
Contract | A predicate C(\cdot) | Rejects invalid inputs before the model absorbs them
Owner | A decision variable outside the model | Prevents silent failure after detection

Examples of the trace τ in a real system:

  1. A production pipeline records the input version, transformation code hash, model version, and endpoint version before serving predictions.
  2. An LLM application logs prompt version, retrieval index version, tool span, latency, token count, and guardrail action for each trace.
  3. A release gate compares the candidate model against the current model on quality, safety, latency, and cost before promotion.

Non-examples that often look similar but fail the production contract:

  1. A manually named file like final_dataset.csv with no hash, schema, lineage, or owner.
  2. A metric screenshot pasted into chat without the run id, evaluation dataset, seed, or model artifact.
  3. A dashboard alert with no threshold rationale, no escalation rule, and no rollback candidate.

The AI connection is concrete. Modern ML and LLM systems are compound systems: data pipelines, feature stores, model registries, inference servers, retrievers, tools, evaluators, and safety layers. The trace τ is one place where the compound system either becomes observable or becomes technical debt.

Operational checklist for the trace τ:

  • State the artifact or signal being controlled.
  • Give it a stable id and version.
  • Define the metric or predicate that decides whether it is valid.
  • Log the dependency chain needed to reproduce it.
  • Attach an owner and a response action.
  • Test the check in continuous integration or release gating.

A useful mental model is to treat every production ML component as a function with preconditions and postconditions. If u is the upstream artifact and z is the downstream artifact, the production question is whether the relation u ↦ z can be replayed and audited.

z = T(u; c, e),

where T is the transformation, c is code or configuration, and e is the execution environment. The hidden technical debt appears when any of u, c, or e is missing from the record.
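One way to make u ↦ z replayable is to store content hashes of u, c, and e next to z. A minimal sketch, assuming artifacts can be serialized to bytes; the record layout and field names are illustrative:

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:12]

def run_transform(u: bytes, code_version: str, env: dict, T) -> dict:
    """Apply z = T(u; c, e) and keep the evidence needed to replay and audit it."""
    z = T(u)
    return {
        "z": z,
        "u_hash": fingerprint(u),          # upstream artifact u
        "c": code_version,                 # code/configuration c
        "e_hash": fingerprint(json.dumps(env, sort_keys=True).encode()),  # environment e
        "z_hash": fingerprint(z),          # downstream artifact z
    }

record = run_transform(b"raw rows", "transform.py@a1b2c3",
                       {"python": "3.11"}, T=lambda u: u.upper())
print(record["u_hash"], record["c"], record["z_hash"])
```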

In notebooks, this subsection will be represented with small synthetic arrays, graphs, traces, or counters rather than external services. The point is not to mimic a vendor tool. The point is to make the mathematics of the trace τ executable enough to test.

Boundary note: this chapter assumes the evaluation methods from Chapter 17, the safety policy ideas from Chapter 18, and the data documentation work from Chapter 16. Here we focus on the production machinery that makes those ideas run repeatedly.

Failure analysis for the trace τ should be written before the incident occurs. A good production note asks what can be stale, missing, corrupted, delayed, unaudited, or too expensive. Each answer should correspond to one observable signal and one response action.

Failure question | Production test | Response
Is the artifact stale? | Compare event time to freshness limit | Warn, block, or backfill
Is the artifact malformed? | Evaluate schema and semantic contract | Reject before serving or training
Is the artifact inconsistent? | Compare current statistic with reference statistic | Investigate drift or skew
Is the artifact unauditable? | Check for missing version, owner, or lineage edge | Stop promotion until metadata exists
Is the artifact too costly? | Track latency, tokens, storage, or compute | Route, cache, batch, or downscale

The production design pattern is therefore not just to calculate a value. It is to calculate a value, compare it with a declared rule, log the evidence, and make the next action unambiguous. That four-step pattern will reappear across all Chapter 19 notebooks.
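The pattern fits in a few lines. A sketch of the freshness check from the table above, with a placeholder threshold and action names; nothing here is a fixed interface:

```python
import time

FRESHNESS_LIMIT_S = 3600  # declared rule: artifact must be under an hour old

def freshness_check(event_time_s: float, now_s: float | None = None) -> dict:
    now_s = time.time() if now_s is None else now_s
    age_s = now_s - event_time_s                          # 1. calculate a value
    stale = age_s > FRESHNESS_LIMIT_S                     # 2. compare with the declared rule
    evidence = {"age_s": round(age_s, 1), "limit_s": FRESHNESS_LIMIT_S}  # 3. log the evidence
    action = "block_and_backfill" if stale else "accept"  # 4. unambiguous next action
    return {"evidence": evidence, "action": action}

print(freshness_check(event_time_s=time.time() - 7200))  # two hours old -> block_and_backfill
```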

2.2 span

A span is one step inside a trace τ: a model call, a retrieval, a tool invocation, or a guardrail check. The production questions from Section 2.1 apply at this finer grain: a span that lacks a stable identifier, a timestamp, and a link to its parent trace cannot be compared across time or joined to incidents.

\tau = (s_1, s_2, \ldots, s_k), \qquad s_i = (t_i, a_i, o_i, m_i).

The formula is intentionally simple. A trace τ is an ordered sequence of spans, and each span s_i records a timestamp t_i, an action a_i, an output o_i, and metadata m_i such as latency, token counts, and versions. Once a span is a measurable object, the system can decide whether to accept, warn, roll back, retrain, or escalate at the level of a single step rather than a whole request.
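A minimal sketch of span and trace objects with exactly the four fields above; the extra identifier fields are an assumption for joinability, not part of the formal definition:

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Span:
    t: float   # timestamp t_i
    a: str     # action a_i, e.g. "retrieve", "llm_call", "guardrail"
    o: str     # output o_i, or a pointer to the stored payload
    m: dict    # metadata m_i: latency, tokens, versions, guardrail action
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])

@dataclass
class Trace:
    trace_id: str
    spans: list[Span] = field(default_factory=list)

    def log(self, a: str, o: str, **m) -> None:
        self.spans.append(Span(t=time.time(), a=a, o=o, m=m))

tau = Trace(trace_id="req-0001")
tau.log("retrieve", "3 passages", index_version="idx-v7", latency_ms=42)
tau.log("llm_call", "draft answer", prompt_version="p-v12", n_tokens=812)
print([(s.a, s.m) for s in tau.spans])
```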

The production-object table, the examples and non-examples, the operational checklist, the u ↦ z audit question, and the failure-analysis table from Section 2.1 apply to spans unchanged. The span-specific discipline is to apply them per step: give every span a stable id and a link to its parent trace, and log latency, tokens, versions, and guardrail actions on the span where they occur, so that cost and failures can be attributed to a single step instead of being smeared across the trace.

2.3 prompt version

A prompt version is the prompt template treated as a first-class, versioned artifact, on the same footing as model weights, retrieval indexes, and feature pipelines. A prompt edit changes system behavior exactly the way a model change does, so it needs the same machinery: a stable identifier, a version history, and a release gate that compares behavior before and after the change.

\operatorname{regress}(r) = \mathbb{1}\left[M_{\mathrm{new}}(r) < M_{\mathrm{old}}(r) - \epsilon\right].

The formula is intentionally simple. For each evaluation case r, it flags a regression whenever the metric under the new prompt version falls more than a tolerance ε below the metric under the old version. Once regressions are countable objects, the release gate can decide whether to promote, warn, or roll back a prompt change instead of arguing about anecdotes.
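A sketch of the per-case indicator and a gate over it, assuming per-case scores have already been computed; ε and the regression budget are illustrative values:

```python
EPSILON = 0.02          # tolerated per-case metric drop
MAX_REGRESSIONS = 1     # declared budget for the release gate

def regress(m_new: float, m_old: float, eps: float = EPSILON) -> bool:
    """regress(r) = 1[M_new(r) < M_old(r) - eps] for one evaluation case r."""
    return m_new < m_old - eps

def gate(scores_new: dict, scores_old: dict) -> dict:
    regressions = [r for r in scores_old if regress(scores_new[r], scores_old[r])]
    action = "promote" if len(regressions) <= MAX_REGRESSIONS else "rollback"
    return {"regressions": regressions, "action": action}

old = {"case-1": 0.91, "case-2": 0.85, "case-3": 0.78}
new = {"case-1": 0.93, "case-2": 0.79, "case-3": 0.77}
print(gate(new, old))  # case-2 drops 0.06 > eps; case-3 drops 0.01, within tolerance
```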

The shared tables and checklist from Section 2.1 apply here as well. The prompt-specific items: log the prompt version on every trace, keep old versions retrievable so that M_old(r) can be recomputed, and stop promotion when the prompt artifact lacks a version, owner, or lineage edge.

2.4 evaluation case

An evaluation case is a pinned input, together with any reference output and scoring rule, on which system versions are scored repeatedly. It is the unit that makes online and offline evaluation comparable across time: if the case itself drifts, every metric computed on it becomes uninterpretable.

r = (x_r, y_r, m), \qquad M(r) = m\left(f(x_r), y_r\right).

The formula is intentionally simple, and it is the natural reading of Section 2.3: an evaluation case r pins an input x_r, a reference output y_r, and a scoring function m, and M(r) applies m to the output of the system under test f on x_r. M_new(r) and M_old(r) are comparable only because r holds all three components fixed. Once the case is a fixed, versioned object, the release gate can act on movements in M(r) alone.
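A sketch of an evaluation case as a frozen record plus its scoring function, using exact match as a stand-in metric; the field names are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class EvalCase:
    case_id: str
    x: str                    # pinned input x_r
    y: str                    # pinned reference output y_r
    dataset_version: str      # ties the case to a versioned evaluation set

def exact_match(pred: str, ref: str) -> float:
    return 1.0 if pred.strip().lower() == ref.strip().lower() else 0.0

def score(case: EvalCase, f: Callable[[str], str],
          m: Callable[[str, str], float] = exact_match) -> float:
    """M(r) = m(f(x_r), y_r): one number per (case, system version) pair."""
    return m(f(case.x), case.y)

r = EvalCase("case-1", x="Capital of France?", y="Paris", dataset_version="evals-v3")
print(score(r, f=lambda q: "Paris"))  # 1.0
```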

The shared tables and checklist from Section 2.1 apply to evaluation cases unchanged. The case-specific discipline is to version the evaluation dataset as a whole, give each case a stable id, and treat any edit to x_r, y_r, or m as a new dataset version rather than an in-place change.

2.5 guardrail action

A guardrail action is the runtime decision a safety or quality layer takes about a single input-output pair. It is the point where observability turns into control: detection without a named, logged action is just a dashboard.

g(x, y) \in \{\mathrm{allow}, \mathrm{block}, \mathrm{revise}, \mathrm{escalate}\}.

The formula is intentionally simple. A guardrail is a function g that maps the input x and the candidate output y to one of four named actions, which makes the guardrail itself versionable, testable, and loggable: every trace records which action fired and why. Once the action space is explicit, the system can decide whether to accept, warn, roll back, retrain, or escalate without a human improvising in the moment.
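A sketch of a guardrail as a pure function over (x, y), with keyword checks standing in for real PII and policy classifiers; the lists and threshold are placeholders:

```python
BLOCK_TERMS = {"ssn", "credit card"}      # stand-in for a PII detector
ESCALATE_TOPICS = {"legal advice"}        # stand-in for a policy classifier

def g(x: str, y: str) -> str:
    """g(x, y) -> one of {"allow", "block", "revise", "escalate"}."""
    if any(term in y.lower() for term in BLOCK_TERMS):
        return "block"        # never serve the output
    if any(topic in x.lower() for topic in ESCALATE_TOPICS):
        return "escalate"     # route to a human reviewer
    if len(y) > 2000:
        return "revise"       # ask the model to retry under a length constraint
    return "allow"

print(g("Summarize my contract.", "Here is a short summary."))   # allow
print(g("I need legal advice on this.", "You should sue."))      # escalate
```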

The shared tables and checklist from Section 2.1 close out this definition as well. The guardrail-specific discipline is to log the action g(x, y) on the span where it fired, to version the guardrail policy like any other artifact, and to attach an owner to the escalate path so that detection never ends in silence. The production design pattern is the same four steps throughout: calculate a value, compare it with a declared rule, log the evidence, and make the next action unambiguous. That pattern reappears across all Chapter 19 notebooks.
