Quality Checks, Part 1: Intuition to Rule-Based Filters

LLM Training Data Pipeline / Quality Checks


1. Intuition

This section builds the conceptual and mathematical layer for quality checks. The local variables here should be read as pipeline objects: documents, records, tokens, filters, weights, shards, and manifests.

1.1 Quality as effective token multiplier D_{\mathrm{eff}} = qD

Quality acts as an effective token multiplier: a corpus of D raw tokens with average quality q \in [0,1] behaves roughly like D_{\mathrm{eff}} = qD clean tokens, so raising q buys training data without collecting anything new. We model the corpus as a finite collection \mathcal{D} = \{r_i\}_{i=1}^n with record-level metadata m_i and text or token content x_i. The practical question for any quality check is whether the transformation preserves the intended empirical distribution.

A useful local invariant is:

\text{valid}(r_i, \mathcal{S}) = 1 \quad \Longrightarrow \quad r_i \text{ can be consumed by the next pipeline stage.}

For quality score, the invariant should be explicit enough that a checker can fail fast. If the invariant is only written in a notebook comment or an engineer's memory, it will not protect a long-running data build.

Examples:

  • A small local experiment can store this object in memory; a frontier-scale run must store it as sharded, versioned, validated records.
  • The mathematical object is simple, but the operational contract must survive restarts, parallel workers, schema changes, and audits.
  • The notebook for this section uses synthetic data so the same ideas can be executed without external files.

Non-examples:

  • A path on disk without a manifest is not a reproducible dataset.
  • A metric dashboard without record-level lineage is not a provenance system.
  • A filter threshold without an audit sample is not evidence of quality.

Implementation consequence: every transformation should report both a count and a rate. If n_{\mathrm{in}} records enter the stage and n_{\mathrm{out}} records leave, the acceptance rate is

a = \frac{n_{\mathrm{out}}}{n_{\mathrm{in}}}.

A sudden change in a is a data-drift signal even when the code still runs. This is why pipeline math is inseparable from logging, manifests, and audit slices.

For LLM work, the token-weighted view is often more important than the document-weighted view. A filter that removes 5 percent of documents may remove 30 percent of tokens if it targets long documents. The corresponding token acceptance rate is

a_{\mathrm{tok}} = \frac{\sum_i f(r_i)\, T_i}{\sum_i T_i},

where T_i is the token count or a deterministic token-count estimate. The distinction matters for compute budgets, mixture proportions, and scaling-law interpretation.
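As a concrete illustration, the two rates can be computed side by side; the records and keep decisions below are synthetic, in the spirit of the section's synthetic-data notebook.

```python
# Document-level vs token-level acceptance rate for one filter stage.
# Synthetic records: (token_count, keep_decision) pairs; the filter here
# happens to target long documents, so the two rates diverge sharply.
records = [
    (50, True), (2000, False), (120, True),
    (3500, False), (80, True), (60, True),
]

n_in = len(records)
n_out = sum(1 for _, keep in records if keep)
a_doc = n_out / n_in                      # a = n_out / n_in

total_tokens = sum(t for t, _ in records)
kept_tokens = sum(t for t, keep in records if keep)
a_tok = kept_tokens / total_tokens        # a_tok = sum_i f(r_i) T_i / sum_i T_i

print(f"doc acceptance   a     = {a_doc:.3f}")
print(f"token acceptance a_tok = {a_tok:.3f}")
```

Here two thirds of the documents survive, but only about five percent of the tokens do, which is exactly the gap the token-weighted view exists to expose.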

1.2 Filtering as precision/recall tradeoff

A filter is a binary classifier against a latent label "worth training on," so the standard precision/recall vocabulary applies. Precision is the fraction of retained records that are genuinely good; recall is the fraction of genuinely good records that are retained:

\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}.

Tightening a threshold raises precision but lowers recall: more junk is excluded, but good documents are discarded with it. Loosening does the reverse. The acceptance rate a is observable on every run, but precision and recall can only be estimated from an audited sample drawn from both the kept and the dropped streams, which is why each threshold change should ship with a fresh audit slice.
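A minimal sketch of estimating precision and recall from an audited sample; the keep decisions and auditor labels below are invented for illustration.

```python
# Precision/recall of a filter, estimated from a small audited sample.
# Each pair is (filter_keep_decision, auditor_says_good) — synthetic labels.
audit = [
    (True, True), (True, True), (True, False),    # kept: 2 good, 1 bad
    (False, True), (False, False), (False, False) # dropped: 1 good, 2 bad
]

tp = sum(1 for keep, good in audit if keep and good)
fp = sum(1 for keep, good in audit if keep and not good)
fn = sum(1 for keep, good in audit if not keep and good)

precision = tp / (tp + fp)   # of what we kept, how much was good
recall = tp / (tp + fn)      # of what was good, how much we kept
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Note that the sample must include dropped records: recall is invisible if you only ever audit the survivors.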

1.3 Quality vs diversity

Aggressive quality filtering trades diversity for purity. A corpus with high average quality q but narrow domain coverage can underperform a slightly noisier mix that spans more topics, registers, and sources, because the model needs exposure to the full distribution it will be evaluated on. Diversity is therefore a first-class metric alongside q: track per-source and per-domain token proportions before and after each filter, and treat a large shift in the source mix as a regression even when the average quality score improves.
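One cheap diversity proxy is the Shannon entropy of the source mix; the source labels below are synthetic.

```python
import math
from collections import Counter

def source_entropy(sources):
    """Shannon entropy (bits) of the source/domain mix; higher = more diverse."""
    counts = Counter(sources)
    n = len(sources)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Synthetic source labels, one per document, before and after a filter
# that disproportionately kept web pages.
before = ["web"] * 50 + ["code"] * 25 + ["books"] * 25
after = ["web"] * 90 + ["code"] * 10

print(f"H(before) = {source_entropy(before):.3f} bits")
print(f"H(after)  = {source_entropy(after):.3f} bits")
```

A drop in mix entropy after a filtering stage is a cheap alarm that the filter is reshaping the distribution, not just removing noise.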

1.4 Safety vs capability

Safety filters (PII scrubbing, toxicity and harmful-content thresholds) protect downstream users but overlap with capability-bearing text. Medical, legal, security, and historical writing often triggers the same patterns as genuinely harmful content, so overly broad rules silently shrink exactly the domains where the model is expected to be competent. The operational posture is the same as for quality filters: report per-domain removal rates, keep an audit sample of what was removed, and tune safety thresholds against both the harm being excluded and the capability being given up.

1.5 Lessons from C4, Dolma, FineWeb, and DCLM

The public web-scale corpora each encode a lesson about filtering:

  • C4 showed that simple deterministic heuristics (keep lines ending in terminal punctuation, drop very short pages, apply a word blocklist, deduplicate repeated spans) already remove a large fraction of Common Crawl noise, and also that blunt blocklists can over-remove benign text.
  • Dolma emphasized transparency: releasing the filtering toolkit and per-rule decisions alongside the data makes the pipeline auditable and reproducible.
  • FineWeb validated each filtering choice by ablation, training small models on candidate variants, demonstrating that filter decisions should be measured rather than assumed.
  • DCLM made the pipeline itself the experimental variable and found that a learned quality classifier can outperform stacks of hand-written heuristics.

The common thread is that filters are hypotheses, and each one should be tied to an evaluation.

2. Formal Definitions

This section fixes the notation used throughout the quality-check layer. The local variables should be read as pipeline objects: documents, records, tokens, filters, weights, shards, and manifests.

2.1 Quality score q(x)

A quality score is a map q: \mathcal{X} \to [0,1] over record content x, produced by heuristics, a learned classifier, or a blend of both. Two properties matter more than the absolute values: the score must be deterministic, so the same record receives the same score on every rerun, and it must live on one comparable scale across the corpus, so a single threshold means a single thing. When q is calibrated, a natural record-level refinement of the effective-token view is

D_{\mathrm{eff}} = \sum_i q(x_i)\, T_i,

which is the quantity that should enter compute-budget planning rather than the raw token count.
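A toy composite quality score, assuming three invented heuristic signals with equal weights; this is a sketch of the shape of q(x), not a production recipe.

```python
def quality_score(text: str) -> float:
    """Toy composite q(x) in [0, 1], averaging three cheap heuristic signals.
    The signals and equal weights are illustrative inventions."""
    words = text.split()
    if not words:
        return 0.0
    # Signal 1: mean word length in a plausible range (penalizes gibberish
    # and minified text, where words are extreme in either direction).
    mean_len = sum(len(w) for w in words) / len(words)
    s1 = 1.0 if 3.0 <= mean_len <= 10.0 else 0.0
    # Signal 2: alphabetic character ratio, saturating at 0.7.
    alpha = sum(ch.isalpha() for ch in text) / max(len(text), 1)
    s2 = min(alpha / 0.7, 1.0)
    # Signal 3: ends with terminal punctuation (a C4-style heuristic).
    s3 = 1.0 if text.rstrip().endswith((".", "!", "?")) else 0.0
    return (s1 + s2 + s3) / 3.0

print(quality_score("A clean, ordinary sentence about training data."))
print(quality_score("xX@@##"))
```

Deterministic, pure functions like this are what make the "same record, same score on every rerun" contract checkable.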

2.2 Filter function f(x) \in \{0, 1\}

A filter is a deterministic binary decision over record content, most often a thresholded quality score:

f(x) = \mathbf{1}[\, q(x) \ge \tau \,].

Two contract points matter in a pipeline. First, f must be pure: no hidden state and no randomness, so that restarts and parallel workers reach identical decisions. Second, the threshold \tau is configuration, not code: it belongs in the versioned manifest, so that any output shard can be traced to the exact decision rule that produced it.
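A minimal sketch of a thresholded filter; the score function and threshold are illustrative.

```python
from typing import Callable

def make_filter(score_fn: Callable[[str], float], tau: float) -> Callable[[str], int]:
    """f(x) = 1[q(x) >= tau]: a deterministic, versionable keep/drop decision."""
    def f(x: str) -> int:
        return 1 if score_fn(x) >= tau else 0
    return f

# Toy score: fraction of alphabetic characters.
def q(x: str) -> float:
    return sum(ch.isalpha() for ch in x) / max(len(x), 1)

f = make_filter(q, tau=0.5)
print(f("mostly normal words here"))   # high alphabetic ratio -> keep
print(f("$$ 1234 %% ^^ &&"))           # no letters -> drop
```

Packaging tau as an argument rather than a hard-coded constant is what lets the threshold live in configuration and be recorded in the manifest.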

2.3 Acceptance rate

The acceptance rate a = n_{\mathrm{out}} / n_{\mathrm{in}} is the cheapest health signal a stage can emit, and its token-weighted counterpart a_{\mathrm{tok}} is the one that matters for budgets. Both should be logged per stage, per source, and per shard: a global rate can stay flat while one source collapses. Because each run yields a binomial sample, small day-to-day wobble is expected; what warrants an alarm is a shift larger than sampling noise would explain.
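One way to separate real drift from sampling noise is a two-proportion z statistic between two runs' acceptance counts; the counts below are invented.

```python
import math

def rate_drift_z(n_out_a, n_in_a, n_out_b, n_in_b):
    """Two-proportion z statistic between run A's and run B's acceptance
    rates; a large |z| flags drift even when the code 'still runs'."""
    p_a = n_out_a / n_in_a
    p_b = n_out_b / n_in_b
    p = (n_out_a + n_out_b) / (n_in_a + n_in_b)      # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_in_a + 1 / n_in_b))
    return (p_b - p_a) / se

# Yesterday 90% of 10k records passed; today only 84% of 10k did.
z = rate_drift_z(9_000, 10_000, 8_400, 10_000)
print(f"z = {z:.2f}")   # a common convention: |z| > 3 is a strong alarm
```

At these sample sizes a six-point drop is far outside sampling noise, so this stage deserves an investigation rather than a shrug.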

2.4 False positives and false negatives

For a filter, a false positive is a bad record that is kept; a false negative is a good record that is dropped. The two errors have asymmetric costs: false positives pollute the training mix but can often be caught by later stages, while false negatives are silently gone, and for rare but valuable content (low-resource languages, specialized technical text) recall losses are hard to detect from aggregate rates alone. Estimating either requires labeled audit samples drawn from both the kept and the dropped streams, not just from the survivors.

2.5 Filter cascade

Real pipelines chain filters, f = f_k \circ \cdots \circ f_1, with each stage seeing only the survivors of the previous one. The overall acceptance rate is the product of the per-stage rates measured on each stage's own input, so even individually mild filters compound: five stages at a = 0.9 each pass only about 59 percent of the input. Order matters for cost (run cheap filters first) and for attribution: per-stage counts must be logged, or a collapse in the final rate cannot be traced to the stage that caused it.
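A minimal cascade sketch with two toy filters, showing per-stage and overall acceptance rates on synthetic documents.

```python
# A cascade applies filters in sequence; the overall acceptance rate is the
# product of per-stage rates, each measured on that stage's surviving input.
docs = ["good text one.", "ok text two.", "a", "!!!! ????", "good text three."]

def long_enough(x):  # stage 1: at least two words
    return len(x.split()) >= 2

def has_letters(x):  # stage 2: contains at least one letter
    return any(ch.isalpha() for ch in x)

stages = [("length", long_enough), ("letters", has_letters)]

survivors = docs
overall = 1.0
for name, f in stages:
    n_in = len(survivors)
    survivors = [x for x in survivors if f(x)]
    a_stage = len(survivors) / n_in
    overall *= a_stage
    print(f"{name}: a = {a_stage:.2f}")

print(f"overall a = {overall:.2f} = {len(survivors)}/{len(docs)}")
```

The per-stage log lines are the attribution trail: if the overall rate collapses tomorrow, they say which stage did it.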

3. Rule-Based Filters

Rule-based filters are the first, cheapest layer of quality control: deterministic functions of the text that need no trained model. The local variables in this section should be read as pipeline objects: documents, records, tokens, filters, weights, shards, and manifests.

3.1 Length filters

Length filters bound word or character counts. Very short documents carry little trainable signal and are often navigation fragments; extremely long ones are frequently logs, dumps, or concatenation artifacts. Thresholds are corpus-dependent and should come from inspecting the length distribution per source rather than from a single global constant: a cutoff tuned for web pages will mangle a code or book source. Because length filters preferentially remove the tails, their token-level acceptance rate diverges sharply from their document-level rate, which is exactly the distinction drawn in Section 1.1.
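A minimal word-count filter sketch; the bounds are illustrative and should be tuned per source.

```python
def length_ok(text: str, min_words: int = 50, max_words: int = 100_000) -> bool:
    """Keep documents whose word count falls inside [min_words, max_words].
    The default bounds are illustrative, not recommendations."""
    n = len(text.split())
    return min_words <= n <= max_words

short = "too short to train on"        # 5 words: dropped
ok = " ".join(["word"] * 60)           # 60 words: kept
print(length_ok(short), length_ok(ok))
```

Passing the bounds as parameters keeps them in configuration, so each source can carry its own limits in the manifest.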

3.2 Language ID

Language identification routes each document to the correct language bucket or drops languages outside the target mix. Production pipelines use trained identifiers (fastText's language ID model is a common choice) and threshold on the classifier's confidence rather than taking the top guess blindly, since short, noisy, or code-mixed documents produce low-confidence predictions. LID errors are systematically worse on short texts and on closely related languages, so both the confidence threshold and a minimum-length precondition belong in the manifest.
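A toy stopword-overlap identifier, shown only to illustrate the shape of the decision (a language guess plus a confidence to threshold on); real pipelines use a trained model such as fastText's language identifier, and the stopword sets below are tiny illustrative samples.

```python
# Toy language ID via stopword overlap. Not production-quality: it exists
# to show the (guess, confidence) decision shape that a real LID model gives.
STOPWORDS = {
    "en": {"the", "and", "of", "to", "in", "is", "it", "that"},
    "fr": {"le", "la", "et", "de", "un", "une", "est", "que"},
    "de": {"der", "die", "und", "das", "ist", "ein", "nicht", "zu"},
}

def guess_language(text, min_conf=0.2):
    """Return (language, confidence), or (None, confidence) below threshold."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) / len(sw) for lang, sw in STOPWORDS.items()}
    lang = max(scores, key=scores.get)
    conf = scores[lang]
    return (lang, conf) if conf >= min_conf else (None, conf)

print(guess_language("the cat sat on the mat and it is happy"))
print(guess_language("xyzzy qwerty"))
```

The second call returns None: refusing to guess at low confidence is the behavior worth copying, whatever model supplies the scores.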

3.3 Repetition ratios

Repetition ratios catch a failure mode that length and language filters miss: text that is locally well-formed but globally degenerate, such as keyword stuffing, scraped menus, or template loops. The standard measurements are the fraction of duplicate n-grams (character- or word-level) and the fraction of repeated lines; natural prose keeps these low, while machine-generated spam pushes them toward one. Thresholds should be validated on an audit sample, because legitimate formats (tables, poetry, reference lists) are repetitive by design.
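The duplicate character n-gram fraction is one common repetition measure; a minimal sketch:

```python
def repetition_ratio(text: str, n: int = 3) -> float:
    """Fraction of character n-grams that are duplicates; values near 1.0
    indicate highly repetitive text (template loops, keyword stuffing)."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    if not grams:
        return 0.0
    return 1.0 - len(set(grams)) / len(grams)

print(f"{repetition_ratio('buy now buy now buy now buy now'):.2f}")
print(f"{repetition_ratio('a varied sentence with ordinary words'):.2f}")
```

The spam string scores far higher than the natural one; the threshold separating the two regimes is a per-source tuning decision.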

3.4 Character/script ratios

Character/script ratios is part of the canonical scope of quality checks. We model the relevant object as a finite collection D={ri}i=1n\mathcal{D} = \{r_i\}_{i=1}^n with record-level metadata mim_i and text or token content xix_i. The practical question is whether the transformation preserves the intended empirical distribution.

A useful local invariant is:

valid(ri,S)=1ri can be consumed by the next pipeline stage.\text{valid}(r_i, \mathcal{S}) = 1 \quad \Longrightarrow \quad r_i \text{ can be consumed by the next pipeline stage.}

For character/script ratios, the invariant should be explicit enough that a checker can fail fast. If the invariant is only written in a notebook comment or an engineer's memory, it will not protect a long-running data build.

Examples:

  • A small local experiment can store this object in memory; a frontier-scale run must store it as sharded, versioned, validated records.
  • The mathematical object is simple, but the operational contract must survive restarts, parallel workers, schema changes, and audits.
  • The notebook for this section uses synthetic data so the same ideas can be executed without external files.

Non-examples:

  • A path on disk without a manifest is not a reproducible dataset.
  • A metric dashboard without record-level lineage is not a provenance system.
  • A filter threshold without an audit sample is not evidence of quality.

Implementation consequence: every transformation should report both a count and a rate. If $n_{\mathrm{in}}$ records enter the stage and $n_{\mathrm{out}}$ records leave, the acceptance rate is

$$a = \frac{n_{\mathrm{out}}}{n_{\mathrm{in}}}.$$

A sudden change in $a$ is a data-drift signal even when the code still runs. This is why pipeline math is inseparable from logging, manifests, and audit slices.

For LLM work, the token-weighted view is often more important than the document-weighted view. A filter that removes 5 percent of documents may remove 30 percent of tokens if it targets long documents. The corresponding token acceptance rate is

$$a_{\mathrm{tok}} = \frac{\sum_i f(r_i)\,T_i}{\sum_i T_i},$$

where $T_i$ is the token count or a deterministic token-count estimate. The distinction matters for compute budgets, mixture proportions, and scaling-law interpretation.
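A character/script-ratio filter of the kind this section names can be sketched as a per-record predicate. The threshold and the alphabetic/digit/other split are illustrative assumptions, not canonical values from the lesson.

```python
# Sketch of a character/script-ratio filter. The 0.5 threshold and the
# coarse alpha/digit/other split are illustrative assumptions.

def script_ratios(text: str) -> dict:
    """Fraction of characters that are alphabetic, digits, or other."""
    if not text:
        return {"alpha": 0.0, "digit": 0.0, "other": 0.0}
    n = len(text)
    alpha = sum(c.isalpha() for c in text)
    digit = sum(c.isdigit() for c in text)
    return {
        "alpha": alpha / n,
        "digit": digit / n,
        "other": (n - alpha - digit) / n,
    }

def keep_by_script(text: str, min_alpha: float = 0.5) -> bool:
    """Accept records whose alphabetic-character ratio clears a threshold."""
    return script_ratios(text)["alpha"] >= min_alpha

keep_by_script("Plain English prose with normal words.")  # True
keep_by_script("#### 1234 !!!! @@@@ 5678 ####")           # False
```

Because the predicate is deterministic on $x_i$, the filter satisfies the fail-fast invariant above: a checker can recompute the ratio and verify the decision for any record in an audit slice.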

3.5 Boilerplate and markup filters

Boilerplate and markup filters are part of the canonical scope of quality checks. We model the relevant object as a finite collection $\mathcal{D} = \{r_i\}_{i=1}^n$ with record-level metadata $m_i$ and text or token content $x_i$. The practical question is whether the transformation preserves the intended empirical distribution.

A useful local invariant is:

$$\text{valid}(r_i, \mathcal{S}) = 1 \quad \Longrightarrow \quad r_i \text{ can be consumed by the next pipeline stage.}$$

For boilerplate and markup filters, the invariant should be explicit enough that a checker can fail fast. If the invariant is only written in a notebook comment or an engineer's memory, it will not protect a long-running data build.

Examples:

  • A small local experiment can store this object in memory; a frontier-scale run must store it as sharded, versioned, validated records.
  • The mathematical object is simple, but the operational contract must survive restarts, parallel workers, schema changes, and audits.
  • The notebook for this section uses synthetic data so the same ideas can be executed without external files.

Non-examples:

  • A path on disk without a manifest is not a reproducible dataset.
  • A metric dashboard without record-level lineage is not a provenance system.
  • A filter threshold without an audit sample is not evidence of quality.

Implementation consequence: every transformation should report both a count and a rate. If $n_{\mathrm{in}}$ records enter the stage and $n_{\mathrm{out}}$ records leave, the acceptance rate is

$$a = \frac{n_{\mathrm{out}}}{n_{\mathrm{in}}}.$$

A sudden change in $a$ is a data-drift signal even when the code still runs. This is why pipeline math is inseparable from logging, manifests, and audit slices.

For LLM work, the token-weighted view is often more important than the document-weighted view. A filter that removes 5 percent of documents may remove 30 percent of tokens if it targets long documents. The corresponding token acceptance rate is

$$a_{\mathrm{tok}} = \frac{\sum_i f(r_i)\,T_i}{\sum_i T_i},$$

where $T_i$ is the token count or a deterministic token-count estimate. The distinction matters for compute budgets, mixture proportions, and scaling-law interpretation.
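A boilerplate/markup filter of the kind this section names can likewise be written as a deterministic predicate. The tag regex, phrase list, and 10 percent threshold below are illustrative assumptions, not the lesson's canonical rules.

```python
# Sketch of a markup-density and boilerplate-phrase filter. The regex,
# phrase list, and max_tag_ratio threshold are illustrative assumptions.
import re

TAG_RE = re.compile(r"<[^>]+>")
BOILERPLATE = ("cookie policy", "all rights reserved", "click here")

def markup_ratio(text: str) -> float:
    """Fraction of characters consumed by HTML-like tags."""
    if not text:
        return 0.0
    tag_chars = sum(len(m) for m in TAG_RE.findall(text))
    return tag_chars / len(text)

def keep_by_markup(text: str, max_tag_ratio: float = 0.1) -> bool:
    """Reject records dominated by residual markup or known boilerplate."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BOILERPLATE):
        return False
    return markup_ratio(text) <= max_tag_ratio

keep_by_markup("A clean paragraph of body text.")  # True
keep_by_markup("<div><span>x</span></div>")        # False
```

As with the script-ratio filter, logging the per-stage acceptance rate $a$ (and its token-weighted counterpart $a_{\mathrm{tok}}$) makes threshold drift visible across pipeline runs.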
