Model Serving and Inference Optimization: Part 3: Serving Architectures
3. Serving Architectures
Serving Architectures develops the portion of model serving and inference optimization assigned to this part by the approved Chapter 19 table of contents. The treatment is production-focused: every idea is connected to a versioned artifact, a measurable signal, a release decision, or an incident response.
3.1 REST and gRPC endpoints
REST and gRPC endpoints are part of the canonical scope of Model Serving and Inference Optimization. In production ML, the useful question is not only whether the model can be trained, but whether the surrounding artifact, signal, or control can be named, versioned, measured, and recovered after a failure.
For this section, the working object is the REST or gRPC endpoint, situated in the chapter's broader scope of serving architectures, inference optimization, queueing, capacity planning, deployment safety, and LLM serving economics. The notation below treats production systems as mathematical objects because that is how incidents become diagnosable. A dataset, feature, run, trace, or endpoint that lacks a stable identifier cannot be compared across time.
The formula is intentionally simple: $\text{object} = (\mathrm{id},\ v_t,\ m,\ c,\ \mathrm{owner})$. It says that REST and gRPC endpoints should be reduced to a measurable object before anyone argues about dashboards or tools. Once the object is measurable, the system can decide whether to accept, warn, roll back, retrain, or escalate.
| Production object | Mathematical role | Operational consequence |
|---|---|---|
| Identifier | A stable key in a set or graph | Lets teams join logs, artifacts, and incidents |
| Version | A time-indexed element such as $v_t$ | Makes old and new behavior comparable |
| Metric | A function such as $m : X \to \mathbb{R}$ | Turns behavior into a release or alert signal |
| Contract | A predicate such as $c : X \to \{0,1\}$ | Rejects invalid inputs before the model absorbs them |
| Owner | A decision variable outside the model | Prevents silent failure after detection |
Examples of REST and gRPC endpoints in a real system:
- A production pipeline records the input version, transformation code hash, model version, and endpoint version before serving predictions.
- An LLM application logs prompt version, retrieval index version, tool span, latency, token count, and guardrail action for each trace.
- A release gate compares the candidate model against the current model on quality, safety, latency, and cost before promotion.
Non-examples that often look similar but fail the production contract:
- A manually named file like `final_dataset.csv` with no hash, schema, lineage, or owner.
- A metric screenshot pasted into chat without the run id, evaluation dataset, seed, or model artifact.
- A dashboard alert with no threshold rationale, no escalation rule, and no rollback candidate.
The AI connection is concrete. Modern ML and LLM systems are compound systems: data pipelines, feature stores, model registries, inference servers, retrievers, tools, evaluators, and safety layers. REST and gRPC endpoints are one place where the compound system either becomes observable or becomes technical debt.
Operational checklist for REST and gRPC endpoints, with a minimal sketch after the list:
- State the artifact or signal being controlled.
- Give it a stable id and version.
- Define the metric or predicate that decides whether it is valid.
- Log the dependency chain needed to reproduce it.
- Attach an owner and a response action.
- Test the check in continuous integration or release gating.
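To ground the checklist, here is a minimal sketch of a prediction endpoint contract in notebook form. The request shape, the required feature names, and the stand-in score are illustrative assumptions, not any particular serving framework's API; the point is that the predicate runs before the model does and that every response carries the ids needed to join logs later.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PredictionRequest:
    request_id: str      # stable id: lets teams join logs, artifacts, and incidents
    model_version: str   # time-indexed version: makes old and new behavior comparable
    features: dict       # payload that the contract must accept or reject

def contract(req: PredictionRequest) -> bool:
    """Predicate c(x): reject invalid inputs before the model absorbs them."""
    required = {"user_id", "amount"}
    return (
        bool(req.request_id)
        and bool(req.model_version)
        and required.issubset(req.features)
        and isinstance(req.features.get("amount"), (int, float))
    )

def handle(req: PredictionRequest) -> dict:
    if not contract(req):
        # Explicit rejection, attributable via request_id, instead of silent failure.
        return {"request_id": req.request_id, "status": "rejected"}
    score = 0.5  # stand-in for the real model call
    return {"request_id": req.request_id, "model_version": req.model_version,
            "status": "ok", "score": score}

# The same checks run in continuous integration as a release gate.
assert handle(PredictionRequest("r1", "m-2024-01", {"user_id": "u9", "amount": 10.0}))["status"] == "ok"
assert handle(PredictionRequest("r2", "m-2024-01", {"user_id": "u9"}))["status"] == "rejected"
```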
A useful mental model is to treat every production ML component as a function with preconditions and postconditions. If $a$ is the upstream artifact and $b$ is the downstream artifact, the production question is whether the relation

$$b = f(a;\ c,\ e)$$

can be replayed and audited, where $f$ is the transformation, $c$ is code or configuration, and $e$ is the execution environment. The hidden technical debt appears when any of $f$, $c$, or $e$ is missing from the record.
In notebooks, this subsection will be represented with small synthetic arrays, graphs, traces, or counters rather than external services. The point is not to mimic a vendor tool. The point is to make the mathematics of REST and gRPC endpoints executable enough to test.
Boundary note: this chapter assumes the evaluation methods from Chapter 17, the safety policy ideas from Chapter 18, and the data documentation work from Chapter 16. Here we focus on the production machinery that makes those ideas run repeatedly.
Failure analysis for REST and gRPC endpoints should be written before the incident occurs. A good production note asks what can be stale, missing, corrupted, delayed, unaudited, or too expensive. Each answer should correspond to one observable signal and one response action.
| Failure question | Production test | Response |
|---|---|---|
| Is the artifact stale? | Compare event time to freshness limit | Warn, block, or backfill |
| Is the artifact malformed? | Evaluate schema and semantic contract | Reject before serving or training |
| Is the artifact inconsistent? | Compare current statistic with reference statistic | Investigate drift or skew |
| Is the artifact unauditable? | Check for missing version, owner, or lineage edge | Stop promotion until metadata exists |
| Is the artifact too costly? | Track latency, tokens, storage, or compute | Route, cache, batch, or downscale |
The production design pattern is therefore not just to calculate a value. It is to calculate a value, compare it with a declared rule, log the evidence, and make the next action unambiguous. That four-step pattern will reappear across all Chapter 19 notebooks.
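The four-step pattern fits in a few lines. In this sketch the p95 statistic, the 200 ms limit, and the action labels are assumptions chosen for illustration; what matters is the order: calculate, compare, log, act.

```python
import json
import time

LATENCY_LIMIT_MS = 200.0  # declared rule, versioned alongside the endpoint config

def four_step_latency_check(latencies_ms: list) -> str:
    # Step 1: calculate a value.
    p95 = sorted(latencies_ms)[int(0.95 * (len(latencies_ms) - 1))]
    # Step 2: compare it with the declared rule.
    breached = p95 > LATENCY_LIMIT_MS
    # Step 3: log the evidence.
    print(json.dumps({"ts": time.time(), "p95_ms": p95,
                      "limit_ms": LATENCY_LIMIT_MS, "breached": breached}))
    # Step 4: make the next action unambiguous.
    return "rollback_candidate" if breached else "accept"

print(four_step_latency_check([120.0, 90.0, 310.0, 150.0, 80.0]))  # accept
```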
3.2 Batch serving
Batch serving is part of the canonical scope of Model Serving and Inference Optimization. In production ML, the useful question is not only whether the model can be trained, but whether the surrounding artifact, signal, or control can be named, versioned, measured, and recovered after a failure.
For this section, the working object is the batch serving job, situated in the chapter's broader scope of serving architectures, inference optimization, queueing, capacity planning, deployment safety, and LLM serving economics. The notation below treats production systems as mathematical objects because that is how incidents become diagnosable. A dataset, feature, run, trace, or endpoint that lacks a stable identifier cannot be compared across time.
The formula is intentionally simple: $\text{object} = (\mathrm{id},\ v_t,\ m,\ c,\ \mathrm{owner})$. It says that batch serving should be reduced to a measurable object before anyone argues about dashboards or tools. Once the object is measurable, the system can decide whether to accept, warn, roll back, retrain, or escalate.
| Production object | Mathematical role | Operational consequence |
|---|---|---|
| Identifier | A stable key in a set or graph | Lets teams join logs, artifacts, and incidents |
| Version | A time-indexed element such as $v_t$ | Makes old and new behavior comparable |
| Metric | A function such as $m : X \to \mathbb{R}$ | Turns behavior into a release or alert signal |
| Contract | A predicate such as $c : X \to \{0,1\}$ | Rejects invalid inputs before the model absorbs them |
| Owner | A decision variable outside the model | Prevents silent failure after detection |
Examples of batch serving in a real system:
- A production pipeline records the input version, transformation code hash, model version, and endpoint version before serving predictions.
- An LLM application logs prompt version, retrieval index version, tool span, latency, token count, and guardrail action for each trace.
- A release gate compares the candidate model against the current model on quality, safety, latency, and cost before promotion.
Non-examples that often look similar but fail the production contract:
- A manually named file like `final_dataset.csv` with no hash, schema, lineage, or owner.
- A metric screenshot pasted into chat without the run id, evaluation dataset, seed, or model artifact.
- A dashboard alert with no threshold rationale, no escalation rule, and no rollback candidate.
The AI connection is concrete. Modern ML and LLM systems are compound systems: data pipelines, feature stores, model registries, inference servers, retrievers, tools, evaluators, and safety layers. Batch serving is one place where the compound system either becomes observable or becomes technical debt.
Operational checklist for batch serving:
- State the artifact or signal being controlled.
- Give it a stable id and version.
- Define the metric or predicate that decides whether it is valid.
- Log the dependency chain needed to reproduce it.
- Attach an owner and a response action.
- Test the check in continuous integration or release gating.
A useful mental model is to treat every production ML component as a function with preconditions and postconditions. If $a$ is the upstream artifact and $b$ is the downstream artifact, the production question is whether the relation

$$b = f(a;\ c,\ e)$$

can be replayed and audited, where $f$ is the transformation, $c$ is code or configuration, and $e$ is the execution environment. The hidden technical debt appears when any of $f$, $c$, or $e$ is missing from the record.
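A minimal sketch of that relation for a batch scoring run follows. The version strings, the toy transformation, and the helper names are hypothetical; the substance is that the record of $f$, $c$, and $e$ is written before any prediction leaves the job.

```python
import hashlib
import json
import sys
from datetime import datetime, timezone

def code_hash(source: str) -> str:
    """Hash of the transformation code: the c term in b = f(a; c, e)."""
    return hashlib.sha256(source.encode()).hexdigest()[:12]

def run_batch_job(input_version: str, model_version: str, rows: list) -> dict:
    transform_src = "score = 0.1 * amount"  # stand-in for the real job code
    predictions = [{"id": r["id"], "score": 0.1 * r["amount"]} for r in rows]
    record = {
        "run_at": datetime.now(timezone.utc).isoformat(),
        "input_version": input_version,         # version of the artifact a
        "code_hash": code_hash(transform_src),  # the term c
        "model_version": model_version,
        "environment": f"python-{sys.version_info.major}.{sys.version_info.minor}",  # the term e
        "row_count": len(predictions),
    }
    print(json.dumps(record))  # lineage is logged before predictions are served
    return {"record": record, "predictions": predictions}

out = run_batch_job("features-v7", "model-v3", [{"id": 1, "amount": 20.0}])
assert out["record"]["input_version"] == "features-v7"
```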
In notebooks, this subsection will be represented with small synthetic arrays, graphs, traces, or counters rather than external services. The point is not to mimic a vendor tool. The point is to make the mathematics of batch serving executable enough to test.
Boundary note: this chapter assumes the evaluation methods from Chapter 17, the safety policy ideas from Chapter 18, and the data documentation work from Chapter 16. Here we focus on the production machinery that makes those ideas run repeatedly.
Failure analysis for batch serving should be written before the incident occurs. A good production note asks what can be stale, missing, corrupted, delayed, unaudited, or too expensive. Each answer should correspond to one observable signal and one response action.
| Failure question | Production test | Response |
|---|---|---|
| Is the artifact stale? | Compare event time to freshness limit | Warn, block, or backfill |
| Is the artifact malformed? | Evaluate schema and semantic contract | Reject before serving or training |
| Is the artifact inconsistent? | Compare current statistic with reference statistic | Investigate drift or skew |
| Is the artifact unauditable? | Check for missing version, owner, or lineage edge | Stop promotion until metadata exists |
| Is the artifact too costly? | Track latency, tokens, storage, or compute | Route, cache, batch, or downscale |
The production design pattern is therefore not just to calculate a value. It is to calculate a value, compare it with a declared rule, log the evidence, and make the next action unambiguous. That four-step pattern will reappear across all Chapter 19 notebooks.
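One row of the failure table, made executable for batch outputs: a staleness check with the warn, block, or backfill responses named above. The 24-hour limit and the doubling rule for escalation are assumptions for the sketch.

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_LIMIT = timedelta(hours=24)  # declared rule for this batch output

def freshness_action(produced_at: datetime, now: datetime) -> str:
    """Map the 'is the artifact stale?' test onto one unambiguous response."""
    age = now - produced_at
    if age <= FRESHNESS_LIMIT:
        return "accept"
    if age <= 2 * FRESHNESS_LIMIT:
        return "warn"                 # stale but usable: alert the owner
    return "block_and_backfill"       # too stale to serve: rerun the batch job

now = datetime.now(timezone.utc)
assert freshness_action(now - timedelta(hours=3), now) == "accept"
assert freshness_action(now - timedelta(hours=30), now) == "warn"
assert freshness_action(now - timedelta(hours=60), now) == "block_and_backfill"
```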
3.3 Streaming inference
Streaming inference is part of the canonical scope of Model Serving and Inference Optimization. In production ML, the useful question is not only whether the model can be trained, but whether the surrounding artifact, signal, or control can be named, versioned, measured, and recovered after a failure.
For this section, the working object is the streaming inference path, situated in the chapter's broader scope of serving architectures, inference optimization, queueing, capacity planning, deployment safety, and LLM serving economics. The notation below treats production systems as mathematical objects because that is how incidents become diagnosable. A dataset, feature, run, trace, or endpoint that lacks a stable identifier cannot be compared across time.
The formula is intentionally simple: $\text{object} = (\mathrm{id},\ v_t,\ m,\ c,\ \mathrm{owner})$. It says that streaming inference should be reduced to a measurable object before anyone argues about dashboards or tools. Once the object is measurable, the system can decide whether to accept, warn, roll back, retrain, or escalate.
| Production object | Mathematical role | Operational consequence |
|---|---|---|
| Identifier | A stable key in a set or graph | Lets teams join logs, artifacts, and incidents |
| Version | A time-indexed element such as $v_t$ | Makes old and new behavior comparable |
| Metric | A function such as $m : X \to \mathbb{R}$ | Turns behavior into a release or alert signal |
| Contract | A predicate such as $c : X \to \{0,1\}$ | Rejects invalid inputs before the model absorbs them |
| Owner | A decision variable outside the model | Prevents silent failure after detection |
Examples of streaming inference in a real system:
- A production pipeline records the input version, transformation code hash, model version, and endpoint version before serving predictions.
- An LLM application logs prompt version, retrieval index version, tool span, latency, token count, and guardrail action for each trace.
- A release gate compares the candidate model against the current model on quality, safety, latency, and cost before promotion.
Non-examples that often look similar but fail the production contract:
- A manually named file like `final_dataset.csv` with no hash, schema, lineage, or owner.
- A metric screenshot pasted into chat without the run id, evaluation dataset, seed, or model artifact.
- A dashboard alert with no threshold rationale, no escalation rule, and no rollback candidate.
The AI connection is concrete. Modern ML and LLM systems are compound systems: data pipelines, feature stores, model registries, inference servers, retrievers, tools, evaluators, and safety layers. Streaming inference is one place where the compound system either becomes observable or becomes technical debt.
Operational checklist for streaming inference, with a minimal sketch after the list:
- State the artifact or signal being controlled.
- Give it a stable id and version.
- Define the metric or predicate that decides whether it is valid.
- Log the dependency chain needed to reproduce it.
- Attach an owner and a response action.
- Test the check in continuous integration or release gating.
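The sketch below applies the checklist to a synthetic stream: one trace line per event, carrying the ids and measurements the checklist demands. The event schema and the `model-v3` version string are illustrative assumptions.

```python
import json
import time

def consume(events):
    """Process a stream of events, emitting one trace line per prediction."""
    for event in events:
        start = time.perf_counter()
        score = 0.1 * event["amount"]            # stand-in for the model call
        latency_ms = (time.perf_counter() - start) * 1000
        print(json.dumps({
            "event_id": event["event_id"],       # stable id for joining logs
            "model_version": "model-v3",         # version for comparability
            "event_time": event["event_time"],   # needed for freshness checks
            "latency_ms": round(latency_ms, 3),
            "score": score,
        }))

consume([
    {"event_id": "e1", "event_time": "2024-01-01T00:00:00Z", "amount": 12.0},
    {"event_id": "e2", "event_time": "2024-01-01T00:00:01Z", "amount": 7.5},
])
```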
A useful mental model is to treat every production ML component as a function with preconditions and postconditions. If $a$ is the upstream artifact and $b$ is the downstream artifact, the production question is whether the relation

$$b = f(a;\ c,\ e)$$

can be replayed and audited, where $f$ is the transformation, $c$ is code or configuration, and $e$ is the execution environment. The hidden technical debt appears when any of $f$, $c$, or $e$ is missing from the record.
In notebooks, this subsection will be represented with small synthetic arrays, graphs, traces, or counters rather than external services. The point is not to mimic a vendor tool. The point is to make the mathematics of streaming inference executable enough to test.
Boundary note: this chapter assumes the evaluation methods from Chapter 17, the safety policy ideas from Chapter 18, and the data documentation work from Chapter 16. Here we focus on the production machinery that makes those ideas run repeatedly.
Failure analysis for streaming inference should be written before the incident occurs. A good production note asks what can be stale, missing, corrupted, delayed, unaudited, or too expensive. Each answer should correspond to one observable signal and one response action.
| Failure question | Production test | Response |
|---|---|---|
| Is the artifact stale? | Compare event time to freshness limit | Warn, block, or backfill |
| Is the artifact malformed? | Evaluate schema and semantic contract | Reject before serving or training |
| Is the artifact inconsistent? | Compare current statistic with reference statistic | Investigate drift or skew |
| Is the artifact unauditable? | Check for missing version, owner, or lineage edge | Stop promotion until metadata exists |
| Is the artifact too costly? | Track latency, tokens, storage, or compute | Route, cache, batch, or downscale |
The production design pattern is therefore not just to calculate a value. It is to calculate a value, compare it with a declared rule, log the evidence, and make the next action unambiguous. That four-step pattern will reappear across all Chapter 19 notebooks.
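For a stream, the "inconsistent artifact" row of the failure table often takes the form of a rolling-window drift check against a reference statistic. The reference mean, the 30 percent tolerance, and the window size here are illustrative choices.

```python
from collections import deque

REFERENCE_MEAN = 10.0   # statistic recorded from the validation window at release
DRIFT_TOLERANCE = 0.3   # declared rule: flag a relative shift above 30 percent

window = deque(maxlen=1000)  # rolling window over the live stream

def observe(value: float) -> str:
    """Calculate, compare, and return one unambiguous next action."""
    window.append(value)
    current_mean = sum(window) / len(window)
    drift = abs(current_mean - REFERENCE_MEAN) / REFERENCE_MEAN
    return "investigate_drift" if drift > DRIFT_TOLERANCE else "accept"

for v in [9.8, 10.4, 10.1]:
    assert observe(v) == "accept"
for v in [25.0] * 10:
    status = observe(v)
print(status)  # investigate_drift once the window mean has shifted far enough
```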
3.4 Model routers
Model routers are part of the canonical scope of Model Serving and Inference Optimization. In production ML, the useful question is not only whether the model can be trained, but whether the surrounding artifact, signal, or control can be named, versioned, measured, and recovered after a failure.
For this section, the working object is the model router, situated in the chapter's broader scope of serving architectures, inference optimization, queueing, capacity planning, deployment safety, and LLM serving economics. The notation below treats production systems as mathematical objects because that is how incidents become diagnosable. A dataset, feature, run, trace, or endpoint that lacks a stable identifier cannot be compared across time.
The formula is intentionally simple: $\text{object} = (\mathrm{id},\ v_t,\ m,\ c,\ \mathrm{owner})$. It says that model routers should be reduced to a measurable object before anyone argues about dashboards or tools. Once the object is measurable, the system can decide whether to accept, warn, roll back, retrain, or escalate.
| Production object | Mathematical role | Operational consequence |
|---|---|---|
| Identifier | A stable key in a set or graph | Lets teams join logs, artifacts, and incidents |
| Version | A time-indexed element such as $v_t$ | Makes old and new behavior comparable |
| Metric | A function such as $m : X \to \mathbb{R}$ | Turns behavior into a release or alert signal |
| Contract | A predicate such as $c : X \to \{0,1\}$ | Rejects invalid inputs before the model absorbs them |
| Owner | A decision variable outside the model | Prevents silent failure after detection |
Examples of model routers in a real system:
- A production pipeline records the input version, transformation code hash, model version, and endpoint version before serving predictions.
- An LLM application logs prompt version, retrieval index version, tool span, latency, token count, and guardrail action for each trace.
- A release gate compares the candidate model against the current model on quality, safety, latency, and cost before promotion.
Non-examples that often look similar but fail the production contract:
- A manually named file like `final_dataset.csv` with no hash, schema, lineage, or owner.
- A metric screenshot pasted into chat without the run id, evaluation dataset, seed, or model artifact.
- A dashboard alert with no threshold rationale, no escalation rule, and no rollback candidate.
The AI connection is concrete. Modern ML and LLM systems are compound systems: data pipelines, feature stores, model registries, inference servers, retrievers, tools, evaluators, and safety layers. Model routers are one place where the compound system either becomes observable or becomes technical debt.
Operational checklist for model routers, with a minimal sketch after the list:
- State the artifact or signal being controlled.
- Give it a stable id and version.
- Define the metric or predicate that decides whether it is valid.
- Log the dependency chain needed to reproduce it.
- Attach an owner and a response action.
- Test the check in continuous integration or release gating.
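Here is the minimal sketch promised above: a router whose rule is declared in data and whose decision is logged per request. The route table, the 512-token threshold, and the model names are hypothetical.

```python
import json

ROUTES = {
    "small": {"model": "small-model-v2", "max_tokens": 512},
    "large": {"model": "large-model-v1", "max_tokens": 8192},
}

def route(request_id: str, prompt_tokens: int) -> str:
    # Declared rule: short prompts go to the cheap model, long ones escalate.
    choice = "small" if prompt_tokens <= ROUTES["small"]["max_tokens"] else "large"
    # Logging the decision lets traces be joined to incidents later.
    print(json.dumps({"request_id": request_id,
                      "prompt_tokens": prompt_tokens,
                      "route": ROUTES[choice]["model"]}))
    return ROUTES[choice]["model"]

assert route("r1", 200) == "small-model-v2"
assert route("r2", 4000) == "large-model-v1"
```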
A useful mental model is to treat every production ML component as a function with preconditions and postconditions. If $a$ is the upstream artifact and $b$ is the downstream artifact, the production question is whether the relation

$$b = f(a;\ c,\ e)$$

can be replayed and audited, where $f$ is the transformation, $c$ is code or configuration, and $e$ is the execution environment. The hidden technical debt appears when any of $f$, $c$, or $e$ is missing from the record.
In notebooks, this subsection will be represented with small synthetic arrays, graphs, traces, or counters rather than external services. The point is not to mimic a vendor tool. The point is to make the mathematics of model routers executable enough to test.
Boundary note: this chapter assumes the evaluation methods from Chapter 17, the safety policy ideas from Chapter 18, and the data documentation work from Chapter 16. Here we focus on the production machinery that makes those ideas run repeatedly.
Failure analysis for model routers should be written before the incident occurs. A good production note asks what can be stale, missing, corrupted, delayed, unaudited, or too expensive. Each answer should correspond to one observable signal and one response action.
| Failure question | Production test | Response |
|---|---|---|
| Is the artifact stale? | Compare event time to freshness limit | Warn, block, or backfill |
| Is the artifact malformed? | Evaluate schema and semantic contract | Reject before serving or training |
| Is the artifact inconsistent? | Compare current statistic with reference statistic | Investigate drift or skew |
| Is the artifact unauditable? | Check for missing version, owner, or lineage edge | Stop promotion until metadata exists |
| Is the artifact too costly? | Track latency, tokens, storage, or compute | Route, cache, batch, or downscale |
The production design pattern is therefore not just to calculate a value. It is to calculate a value, compare it with a declared rule, log the evidence, and make the next action unambiguous. That four-step pattern will reappear across all Chapter 19 notebooks.
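The cost row of the failure table is where routers earn their keep. This sketch assumes illustrative per-token prices and a daily budget; the thresholds and action names are placeholders for whatever the owning team declares.

```python
COST_PER_1K_TOKENS = {"small-model-v2": 0.0005, "large-model-v1": 0.01}  # assumed prices
DAILY_BUDGET_USD = 50.0  # declared rule

def cost_action(model: str, tokens_today: int) -> str:
    """Compare spend with the declared budget and pick one response."""
    spend = tokens_today / 1000 * COST_PER_1K_TOKENS[model]
    if spend <= 0.8 * DAILY_BUDGET_USD:
        return "accept"
    if spend <= DAILY_BUDGET_USD:
        return "route_to_cheaper_model"   # shift marginal traffic
    return "downscale_and_alert_owner"    # hard cap: page the owner

assert cost_action("large-model-v1", 1_000_000) == "accept"                     # $10 spend
assert cost_action("large-model-v1", 4_500_000) == "route_to_cheaper_model"     # $45 spend
assert cost_action("large-model-v1", 6_000_000) == "downscale_and_alert_owner"  # $60 spend
```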
3.5 Shadow and canary deployments
Shadow and canary deployments are part of the canonical scope of Model Serving and Inference Optimization. In production ML, the useful question is not only whether the model can be trained, but whether the surrounding artifact, signal, or control can be named, versioned, measured, and recovered after a failure.
For this section, the working object is the shadow or canary deployment, situated in the chapter's broader scope of serving architectures, inference optimization, queueing, capacity planning, deployment safety, and LLM serving economics. The notation below treats production systems as mathematical objects because that is how incidents become diagnosable. A dataset, feature, run, trace, or endpoint that lacks a stable identifier cannot be compared across time.
The formula is intentionally simple: $\text{object} = (\mathrm{id},\ v_t,\ m,\ c,\ \mathrm{owner})$. It says that shadow and canary deployments should be reduced to a measurable object before anyone argues about dashboards or tools. Once the object is measurable, the system can decide whether to accept, warn, roll back, retrain, or escalate.
| Production object | Mathematical role | Operational consequence |
|---|---|---|
| Identifier | A stable key in a set or graph | Lets teams join logs, artifacts, and incidents |
| Version | A time-indexed element such as $v_t$ | Makes old and new behavior comparable |
| Metric | A function such as $m : X \to \mathbb{R}$ | Turns behavior into a release or alert signal |
| Contract | A predicate such as $c : X \to \{0,1\}$ | Rejects invalid inputs before the model absorbs them |
| Owner | A decision variable outside the model | Prevents silent failure after detection |
Examples of shadow and canary deployments in a real system:
- A production pipeline records the input version, transformation code hash, model version, and endpoint version before serving predictions.
- An LLM application logs prompt version, retrieval index version, tool span, latency, token count, and guardrail action for each trace.
- A release gate compares the candidate model against the current model on quality, safety, latency, and cost before promotion.
Non-examples that often look similar but fail the production contract:
- A manually named file like `final_dataset.csv` with no hash, schema, lineage, or owner.
- A metric screenshot pasted into chat without the run id, evaluation dataset, seed, or model artifact.
- A dashboard alert with no threshold rationale, no escalation rule, and no rollback candidate.
The AI connection is concrete. Modern ML and LLM systems are compound systems: data pipelines, feature stores, model registries, inference servers, retrievers, tools, evaluators, and safety layers. Shadow and canary deployments are one place where the compound system either becomes observable or becomes technical debt.
Operational checklist for shadow and canary deployments, with a minimal sketch after the list:
- State the artifact or signal being controlled.
- Give it a stable id and version.
- Define the metric or predicate that decides whether it is valid.
- Log the dependency chain needed to reproduce it.
- Attach an owner and a response action.
- Test the check in continuous integration or release gating.
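The sketch after the checklist shows the shadow half of the pattern: the candidate model sees the same input as the current model, its output goes only to the log, and the served response never changes. The toy models stand in for real registered artifacts.

```python
import json

def current_model(x: float) -> float:
    return 0.10 * x   # the model users actually see

def candidate_model(x: float) -> float:
    return 0.12 * x   # the shadow: evaluated, never served

def serve_with_shadow(request_id: str, x: float) -> float:
    served = current_model(x)
    shadow = candidate_model(x)
    # Both outputs are logged so the candidate can be judged on real traffic.
    print(json.dumps({"request_id": request_id, "served": served,
                      "shadow": shadow, "delta": shadow - served}))
    return served  # the shadow result never leaves the log

assert serve_with_shadow("r1", 100.0) == 10.0
```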
A useful mental model is to treat every production ML component as a function with preconditions and postconditions. If $a$ is the upstream artifact and $b$ is the downstream artifact, the production question is whether the relation

$$b = f(a;\ c,\ e)$$

can be replayed and audited, where $f$ is the transformation, $c$ is code or configuration, and $e$ is the execution environment. The hidden technical debt appears when any of $f$, $c$, or $e$ is missing from the record.
In notebooks, this subsection will be represented with small synthetic arrays, graphs, traces, or counters rather than external services. The point is not to mimic a vendor tool. The point is to make the mathematics of shadow and canary deployments executable enough to test.
Boundary note: this chapter assumes the evaluation methods from Chapter 17, the safety policy ideas from Chapter 18, and the data documentation work from Chapter 16. Here we focus on the production machinery that makes those ideas run repeatedly.
Failure analysis for shadow and canary deployments should be written before the incident occurs. A good production note asks what can be stale, missing, corrupted, delayed, unaudited, or too expensive. Each answer should correspond to one observable signal and one response action.
| Failure question | Production test | Response |
|---|---|---|
| Is the artifact stale? | Compare event time to freshness limit | Warn, block, or backfill |
| Is the artifact malformed? | Evaluate schema and semantic contract | Reject before serving or training |
| Is the artifact inconsistent? | Compare current statistic with reference statistic | Investigate drift or skew |
| Is the artifact unauditable? | Check for missing version, owner, or lineage edge | Stop promotion until metadata exists |
| Is the artifact too costly? | Track latency, tokens, storage, or compute | Route, cache, batch, or downscale |
The production design pattern is therefore not just to calculate a value. It is to calculate a value, compare it with a declared rule, log the evidence, and make the next action unambiguous. That four-step pattern will reappear across all Chapter 19 notebooks.
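The canary half of the pattern reduces to a small decision function over the release-gate dimensions named earlier: quality, latency, and cost. The thresholds below are hypothetical; what the sketch insists on is that they are declared before the canary starts, not negotiated after.

```python
# Gate thresholds are declared before the canary receives traffic.
GATE = {"min_quality": 0.90, "max_p95_ms": 250.0, "max_cost_ratio": 1.10}

def canary_decision(current: dict, candidate: dict) -> str:
    """Promote only if the candidate clears the quality, latency, and cost gates."""
    if candidate["quality"] < GATE["min_quality"]:
        return "rollback: quality below gate"
    if candidate["p95_ms"] > GATE["max_p95_ms"]:
        return "rollback: latency above gate"
    if candidate["cost_per_req"] > GATE["max_cost_ratio"] * current["cost_per_req"]:
        return "rollback: cost regression"
    return "promote"

current = {"quality": 0.91, "p95_ms": 180.0, "cost_per_req": 0.002}
good = {"quality": 0.93, "p95_ms": 190.0, "cost_per_req": 0.0021}
slow = {"quality": 0.93, "p95_ms": 400.0, "cost_per_req": 0.0021}
assert canary_decision(current, good) == "promote"
assert canary_decision(current, slow) == "rollback: latency above gate"
```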