PAC Learning: 3. Realizable PAC Learning to 4. Agnostic PAC Learning

3. Realizable PAC Learning

Realizable PAC Learning develops the part of PAC learning specified by the approved Chapter 21 table of contents. The emphasis is statistical learning theory, not generic statistics, optimization recipes, or benchmark operations.

3.1 Consistency assumption

Consistency assumption is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$m \ge \frac{1}{\epsilon}\left(\log\lvert\mathcal{H}\rvert + \log\frac{1}{\delta}\right).$$

The formula should be read operationally. For consistency assumption, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
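
In the spirit of the companion notebook, here is a minimal sketch of the consistency assumption on synthetic data (assuming numpy; the threshold class, sample size, and target threshold are illustrative choices, not values fixed by the lesson):

import numpy as np

rng = np.random.default_rng(0)

# Finite class: h_t(x) = 1[x >= t] for thresholds t on a grid.
thresholds = np.linspace(0.0, 1.0, 101)
t_star = 0.37  # true labeling rule, so the sample is realizable

m = 50
X = rng.uniform(0.0, 1.0, size=m)   # sample S drawn i.i.d. from D
y = (X >= t_star).astype(int)       # labels produced by the target concept

# Empirical risk L_S(h) for every hypothesis in the finite class.
preds = (X[None, :] >= thresholds[:, None]).astype(int)
emp_risk = (preds != y[None, :]).mean(axis=1)

consistent = thresholds[emp_risk == 0.0]
print(f"{consistent.size} of {thresholds.size} hypotheses are consistent with S")

The point of the sketch: consistency is a property of the pair of class and sample, and with realizable labels at least one hypothesis always survives.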

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of consistency assumption:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for consistency assumption is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: consistency assumption will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using consistency assumption responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

3.2 Finite hypothesis class bound

Finite hypothesis class bound is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$P\left[L_{\mathcal{D}}(h_S)\le \epsilon\right] \ge 1-\delta.$$

The formula should be read operationally. For finite hypothesis class bound, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
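
A minimal numeric sketch of the finite-class bound (numpy only; the $\epsilon$, $\delta$, and class sizes below are illustrative):

import numpy as np

def realizable_sample_bound(class_size, eps, delta):
    # m >= (1/eps) * (ln|H| + ln(1/delta)) suffices in the realizable case
    return int(np.ceil((np.log(class_size) + np.log(1.0 / delta)) / eps))

for size in (100, 10_000, 1_000_000):
    m = realizable_sample_bound(size, eps=0.05, delta=0.01)
    print(f"|H| = {size:>9,}  ->  m >= {m}")

Note the logarithmic dependence on $\lvert\mathcal{H}\rvert$ and $1/\delta$, and the linear dependence on $1/\epsilon$.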

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of finite hypothesis class bound:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for finite hypothesis class bound is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: finite hypothesis class bound will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using finite hypothesis class bound responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

3.3 Union bound proof sketch

Union bound proof sketch is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$L_{\mathcal{D}}(h)=P_{(\mathbf{x},y)\sim\mathcal{D}}\left[h(\mathbf{x})\ne y\right].$$

The formula should be read operationally. For union bound proof sketch, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
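
A minimal Monte Carlo sketch of the union-bound step (numpy; this uses an idealized setup, stated in the comments, that is an assumption made purely for illustration):

import numpy as np

rng = np.random.default_rng(1)

# Idealized setup: each "bad" hypothesis has true risk exactly eps, and its
# errors on the m i.i.d. sample points are independent coin flips.
n_bad, eps, m, trials = 200, 0.1, 100, 5_000

survived = 0
for _ in range(trials):
    errors = rng.random((n_bad, m)) < eps   # errors[j, i]: h_j errs on x_i
    if (~errors).all(axis=1).any():         # some eps-bad h consistent with S
        survived += 1

print("Monte Carlo P[some eps-bad h survives]:", survived / trials)
print("union bound  n_bad * (1 - eps)^m      :", n_bad * (1 - eps) ** m)

The union bound charges each bad hypothesis its survival probability $(1-\epsilon)^m$ and simply adds; no independence between hypotheses is needed for the inequality itself.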

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of union bound proof sketch:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for union bound proof sketch is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: union bound proof sketch will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using union bound proof sketch responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

3.4 Sample complexity $m_{\mathcal{H}}(\epsilon,\delta)$

Sample complexity $m_{\mathcal{H}}(\epsilon,\delta)$ is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$L_S(h)=\frac{1}{m}\sum_{i=1}^{m}\mathbb{1}\left[h(\mathbf{x}^{(i)})\ne y^{(i)}\right].$$

The formula should be read operationally. For sample complexity $m_{\mathcal{H}}(\epsilon,\delta)$, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
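
A minimal sketch tabulating $m_{\mathcal{H}}(\epsilon,\delta)$ for a fixed class size (numpy; the class size of 1024 and the grids over $\epsilon$ and $\delta$ are illustrative):

import numpy as np

def m_H(eps, delta, class_size=1024):
    # realizable finite-class sample complexity, upper-bound form
    return int(np.ceil((np.log(class_size) + np.log(1.0 / delta)) / eps))

print("eps    delta   m_H(eps, delta)")
for eps in (0.10, 0.05, 0.01):
    for delta in (0.10, 0.01):
        print(f"{eps:<6} {delta:<7} {m_H(eps, delta)}")

Halving $\epsilon$ roughly doubles the requirement, while shrinking $\delta$ tenfold adds only an additive logarithmic term.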

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of sample complexity $m_{\mathcal{H}}(\epsilon,\delta)$:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for sample complexity $m_{\mathcal{H}}(\epsilon,\delta)$ is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: sample complexity $m_{\mathcal{H}}(\epsilon,\delta)$ will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using sample complexity $m_{\mathcal{H}}(\epsilon,\delta)$ responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

3.5 Consistent ERM

Consistent ERM is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$m \ge \frac{1}{\epsilon}\left(\log\lvert\mathcal{H}\rvert + \log\frac{1}{\delta}\right).$$

The formula should be read operationally. For consistent ERM, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
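
A minimal sketch of consistent ERM over a finite threshold class (numpy; the class, target, and sample sizes are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(2)
thresholds = np.linspace(0.0, 1.0, 201)   # finite class H
t_star = 0.5                              # realizable target

def erm(X, y):
    preds = (X[None, :] >= thresholds[:, None]).astype(int)
    emp = (preds != y[None, :]).mean(axis=1)
    return thresholds[emp.argmin()]       # consistent when the data are realizable

def true_risk(t, n=200_000):
    X = rng.uniform(0.0, 1.0, n)
    return float(((X >= t) != (X >= t_star)).mean())

for m in (10, 100, 1000):
    X = rng.uniform(0.0, 1.0, m)
    y = (X >= t_star).astype(int)
    h = erm(X, y)
    print(f"m = {m:>4}: ERM threshold {h:.3f}, estimated true risk {true_risk(h):.4f}")

The true risk of the returned hypothesis shrinks as $m$ grows, which is exactly what the realizable bound above predicts.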

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of consistent ERM:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for consistent ERM is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: consistent ERM will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using consistent ERM responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

4. Agnostic PAC Learning

Agnostic PAC Learning develops the part of PAC learning specified by the approved Chapter 21 table of contents. The emphasis is statistical learning theory, not generic statistics, optimization recipes, or benchmark operations.

4.1 Bayes error and approximation error

Bayes error and approximation error is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$P\left[L_{\mathcal{D}}(h_S)\le \epsilon\right] \ge 1-\delta.$$

The formula should be read operationally. For Bayes error and approximation error, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
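
A minimal sketch separating Bayes error from approximation error on a synthetic posterior (numpy; the posterior $\eta$ and the threshold class are illustrative assumptions chosen so the two quantities differ):

import numpy as np

# Synthetic distribution: x ~ Uniform[0,1], P(y = 1 | x) = eta(x).
# eta is deliberately non-monotone, so the Bayes rule is not a threshold rule.
xs = np.linspace(0.0, 1.0, 10_001)
eta = 0.5 + 0.4 * np.sin(2 * np.pi * xs)

bayes_error = np.minimum(eta, 1.0 - eta).mean()   # E[min(eta, 1 - eta)]

# Risk of each threshold rule h_t(x) = 1[x >= t]: it errs with probability
# eta(x) where it predicts 0 (x < t) and 1 - eta(x) where it predicts 1.
ts = xs[::100]
risks = np.array([np.where(xs < t, eta, 1.0 - eta).mean() for t in ts])

print(f"Bayes error        : {bayes_error:.3f}")
print(f"best in class      : {risks.min():.3f}")
print(f"approximation error: {risks.min() - bayes_error:.3f}")

No amount of data removes the approximation gap; only changing the class can.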

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of Bayes error and approximation error:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for Bayes error and approximation error is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: Bayes error and approximation error will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using Bayes error and approximation error responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

4.2 Excess risk

Excess risk is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$L_{\mathcal{D}}(h)=P_{(\mathbf{x},y)\sim\mathcal{D}}\left[h(\mathbf{x})\ne y\right].$$

The formula should be read operationally. For excess risk, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
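
In the spirit of the companion notebook, a minimal sketch that makes the excess-risk decomposition concrete (numpy; the posterior, the threshold class, and the sample size are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(3)
xs = np.linspace(0.0, 1.0, 10_001)
eta = 0.5 + 0.4 * np.sin(2 * np.pi * xs)       # synthetic posterior P(y=1 | x)
thresholds = np.linspace(0.0, 1.0, 101)        # finite class H

def true_risk(t):
    return np.where(xs < t, eta, 1.0 - eta).mean()

bayes = np.minimum(eta, 1.0 - eta).mean()
best = min(true_risk(t) for t in thresholds)   # best achievable within H

# Draw a noisy sample and run ERM over H.
m = 200
X = rng.uniform(0.0, 1.0, m)
p1 = 0.5 + 0.4 * np.sin(2 * np.pi * X)
y = (rng.random(m) < p1).astype(int)
preds = (X[None, :] >= thresholds[:, None]).astype(int)
h = thresholds[(preds != y[None, :]).mean(axis=1).argmin()]

print(f"approximation error: {best - bayes:.3f}")
print(f"estimation error   : {true_risk(h) - best:.3f}")
print(f"excess risk (sum)  : {true_risk(h) - bayes:.3f}")

Only the estimation term shrinks with more data; the approximation term is fixed by the choice of class.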

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of excess risk:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for excess risk is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: excess risk will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using excess risk responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

4.3 Agnostic ERM

Agnostic ERM is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$L_S(h)=\frac{1}{m}\sum_{i=1}^{m}\mathbb{1}\left[h(\mathbf{x}^{(i)})\ne y^{(i)}\right].$$

The formula should be read operationally. For agnostic ERM, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
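
A minimal sketch of agnostic ERM under label noise (numpy; the target threshold, noise rate, and sample sizes are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(4)
thresholds = np.linspace(0.0, 1.0, 101)
t_star, noise = 0.5, 0.15   # target threshold plus label noise: not realizable

def noisy_sample(m):
    X = rng.uniform(0.0, 1.0, m)
    y = (X >= t_star).astype(int)
    flip = rng.random(m) < noise
    return X, np.where(flip, 1 - y, y)

for m in (20, 200, 2000):
    X, y = noisy_sample(m)
    preds = (X[None, :] >= thresholds[:, None]).astype(int)
    emp = (preds != y[None, :]).mean(axis=1)
    j = emp.argmin()
    print(f"m = {m:>4}: ERM threshold {thresholds[j]:.2f}, empirical risk {emp[j]:.3f}")

Under noise the minimum empirical risk typically stays well above zero; agnostic ERM asks only for the best hypothesis in the class, not a perfect one.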

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of agnostic ERM:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for agnostic ERM is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: agnostic ERM will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using agnostic ERM responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

4.4 Finite-class agnostic bound

Finite-class agnostic bound is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$m \ge \frac{2}{\epsilon^2}\left(\log\lvert\mathcal{H}\rvert + \log\frac{2}{\delta}\right).$$

The formula should be read operationally. For finite-class agnostic bound, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
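
A minimal sketch of the uniform-convergence gap behind the finite-class agnostic bound (numpy; the class size and $\delta$ are illustrative assumptions):

import numpy as np

def agnostic_gap(m, class_size, delta):
    # Hoeffding + union bound: with probability >= 1 - delta,
    # |L_S(h) - L_D(h)| <= gap simultaneously for all h in H.
    return np.sqrt(np.log(2 * class_size / delta) / (2 * m))

for m in (100, 1_000, 10_000, 100_000):
    gap = agnostic_gap(m, class_size=10_000, delta=0.05)
    print(f"m = {m:>7,}: uniform gap <= {gap:.4f}")

Setting the gap to $\epsilon/2$ and solving for $m$ recovers the $2/\epsilon^2$ display above; the agnostic rate is $O(1/\sqrt{m})$ rather than the realizable $O(1/m)$.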

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of finite-class agnostic bound:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for finite-class agnostic bound is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: finite-class agnostic bound will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using finite-class agnostic bound responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.

4.5 Noisy labels

Noisy labels is part of the canonical scope of PAC Learning. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.

In this subsection the working scope is probably approximately correct guarantees, finite-class sample complexity, realizable and agnostic learning, and distribution-free learnability. We use a distribution $\mathcal{D}$, a sample $S$, a hypothesis class $\mathcal{H}$, and a loss-derived risk. The core question is whether the behavior on $S$ can control the behavior under $\mathcal{D}$.

$$P\left[L_{\mathcal{D}}(h_S)\le \epsilon\right] \ge 1-\delta.$$

The formula should be read operationally. For noisy labels, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
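
A minimal sketch of random classification noise (numpy; the flip rate and target threshold are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(5)
t_star, noise_rate = 0.5, 0.2   # target threshold and label-flip rate

def noisy_sample(m):
    X = rng.uniform(0.0, 1.0, m)
    y = (X >= t_star).astype(int)
    flip = rng.random(m) < noise_rate
    return X, np.where(flip, 1 - y, y)

# Under random classification noise even the target concept pays the noise
# floor: its risk against the observed labels concentrates around noise_rate.
for m in (100, 1_000, 10_000):
    X, y = noisy_sample(m)
    emp = float(((X >= t_star).astype(int) != y).mean())
    print(f"m = {m:>6}: empirical risk of the target itself = {emp:.3f}")

This is why the agnostic framing matters here: the right benchmark is the best achievable risk, which under this noise model is the noise rate, not zero.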

Theory object | Meaning | AI interpretation
$\mathcal{D}$ | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face
$S$ | Finite training or evaluation sample | The observed examples available to the learner or auditor
$\mathcal{H}$ | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors
$L_S(h)$ | Empirical risk | Error measured on the observed sample
$L_{\mathcal{D}}(h)$ | True risk | Error on the distribution that matters after deployment

Three examples of noisy labels:

  1. A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
  2. A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
  3. A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.

Two non-examples are just as important:

  1. A leaderboard rank without a distributional statement is not a learnability guarantee.
  2. A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.

The proof habit for noisy labels is to identify the random object first. Sometimes the randomness is the sample $S$. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.

A useful ASCII picture for this subsection is:

unknown distribution D
        | sample S
        v
 empirical learner h_S ----> empirical risk L_S(h_S)
        |
        v
 true deployment risk L_D(h_S)

The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.

Implementation note for the companion notebook: noisy labels will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.

The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.

Checklist for using noisy labels responsibly:

  • State the sample space and label space.
  • State the hypothesis or function class.
  • State the loss and risk definition.
  • State whether the setting is realizable or agnostic.
  • Track both accuracy tolerance and confidence.
  • Identify whether the bound is distribution-free or data-dependent.
  • Separate the theorem from the empirical measurement.

For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.

The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
