Instruction Tuning and SFT: Part 5: Synthetic and Self-Generated Instructions to References

7. Synthetic and Self-Generated Instructions

This section on synthetic and self-generated instructions develops the part of instruction tuning and SFT that the approved TOC assigns to Chapter 18. The emphasis is alignment behavior, safety constraints, and feedback loops, not generic fine-tuning or production monitoring.

7.1 FLAN-style multitask instruction tuning

FLAN-style multitask instruction tuning belongs in the canonical scope of instruction tuning and SFT: many supervised NLP tasks are rewritten into natural-language instruction templates and the model is fine-tuned on the mixture so that it follows instructions it has not seen before. The object is the instruction-following policy, not merely a prompt trick or a moderation label. We study how data, losses, policies, review processes, and safety constraints shape a model's conditional distribution over responses.

A compact way to read this subsection is through the local symbol \pi_\theta(y \mid x). It marks the alignment object being transformed: an instruction policy, a preference pair, a violation classifier, a guardrail action, or a feedback event. The details differ, but the discipline is the same: state the object, state the loss or decision rule, then audit the behavioral side effects.

\mathcal{L}_{\mathrm{SFT}}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t \in R_i}\log \pi_\theta(y_{i,t} \mid x_i, y_{i,<t}).

For FLAN-style multitask instruction tuning, this formula should not be treated as a slogan. It defines which tokens, responses, comparisons, or decisions receive gradient or operational weight. A change in masking, sampling, rubric wording, or thresholding changes the effective objective even if the model architecture is unchanged.
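
To make the masking point concrete, here is a minimal PyTorch sketch of a response-only SFT loss. The tensor names and the response_mask convention are illustrative assumptions rather than a specific library's API; the mask plays the role of the index set R_i in the formula above.

```python
import torch
import torch.nn.functional as F

def sft_loss(logits, labels, response_mask):
    """Response-only SFT loss (a sketch).

    logits:        (batch, seq_len, vocab) model outputs
    labels:        (batch, seq_len) target token ids, already shifted
    response_mask: (batch, seq_len) 1.0 for assistant-response tokens
                   (the set R_i), 0.0 for prompt, system, and pad tokens
    """
    vocab = logits.size(-1)
    # Per-token negative log-likelihood, kept unreduced so the mask decides
    # which tokens receive gradient weight.
    nll = F.cross_entropy(
        logits.reshape(-1, vocab),
        labels.reshape(-1),
        reduction="none",
    ).reshape(labels.shape)
    masked = nll * response_mask
    # Average over response tokens only; changing the mask changes the
    # effective objective even though the architecture is untouched.
    return masked.sum() / response_mask.sum().clamp(min=1.0)
```

Flipping entries of response_mask silently moves tokens in or out of R_i, which is exactly the kind of effective-objective change described above.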

Alignment object | Mathematical question | Engineering question
Data | Which examples define the target behavior? | Who wrote, filtered, and approved them?
Objective | Which terms receive weight? | Are masks, margins, and thresholds logged?
Policy | Which actions are allowed or disallowed? | Can reviewers reproduce the decision?
Evaluation | Which metric detects regression? | Is the test private, stable, and sliced?
Feedback | Which new evidence changes training? | How does it enter the next dataset version?

Examples:

  • Treat FLAN-style multitask instruction tuning as part of the model contract and store the exact data version.
  • Record the prompt template, role format, policy version, and decoder settings.
  • Compare aligned and reference policies on both helpfulness and safety slices.
  • Use held-out examples that were not used to tune refusals or rewards.
  • Inspect failure cases before declaring the objective successful.

Non-examples:

  • Calling a model aligned because it sounds polite on a few prompts.
  • Training on refusals without measuring over-refusal on benign requests.
  • Using a reward model as ground truth without calibration or adversarial checks.
  • Shipping a guardrail threshold without measuring false positive and false negative rates.
  • Letting feedback logs change training without provenance or consent controls.

A useful implementation pattern is to separate policy, data, and measurement. The policy says what behavior is desired. The data supplies examples, comparisons, attacks, or feedback events. The measurement checks whether the updated system moved in the intended direction without unacceptable regressions.

policy text/rubric
      |
      v
training or guardrail data  ->  objective/threshold  ->  aligned system
      |                                                   |
      v                                                   v
audit metadata                                      held-out safety eval
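
One way to make the "audit metadata" box concrete is a small versioned record stored next to each dataset release. This is a minimal sketch; the field names are illustrative assumptions, not a standard schema.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AlignmentRunRecord:
    """Illustrative audit record; every field name here is an assumption."""
    dataset_version: str
    policy_version: str
    prompt_template: str
    loss_mask_rule: str           # e.g. "response-only"
    decoder_settings: dict
    eval_suite: str
    created_at: float = field(default_factory=time.time)

record = AlignmentRunRecord(
    dataset_version="sft-synth-v2",
    policy_version="safety-policy-7",
    prompt_template="chat-template-v1",
    loss_mask_rule="response-only",
    decoder_settings={"temperature": 0.7, "top_p": 0.9},
    eval_suite="helpfulness+safety-slices",
)
print(json.dumps(asdict(record), indent=2))
```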

Worked reasoning pattern for FLAN-style multitask instruction tuning:

  1. Name the target behavior in plain language.
  2. Write the mathematical variable that represents it.
  3. Specify which examples or comparisons estimate it.
  4. Choose the optimization loss or runtime decision rule.
  5. Define the regression metric that would prove the change became worse.

Three details are especially easy to miss in alignment work. First, the user intent distribution is not the same as the pretraining distribution. Second, safety labels are not ordinary class labels; they encode policy judgments that can change by context. Third, optimization pressure finds shortcuts, so every proxy must be monitored for Goodhart-style failures.

Failure pressure | Typical symptom | Mitigation
Proxy reward | High reward but worse human judgment | Holdout preferences and adversarial review
Refusal shortcut | Safe but unhelpful responses | Measure benign refusal rate separately
Template overfit | Good on training chat format only | Evaluate alternate templates and languages
Policy ambiguity | Inconsistent labels | Adjudication and rubric revision
Feedback drift | New labels change old policy silently | Version policy, rubric, and dataset together

AI connection: FLAN-style multitask instruction tuning is part of the post-training stack used by modern assistant systems. It links the base language model to human intent, safety policy, and deployment constraints without pretending that a single loss can capture all values. The goal is not perfect alignment by formula; it is a repeatable loop where evidence, objectives, and safeguards improve together.

7.2 Self-Instruct bootstrapping

Self-Instruct bootstrapping belongs in the canonical scope of instruction tuning and SFT: starting from a small seed set of human-written tasks, the model itself proposes new instructions and responses, which are filtered and added back into the fine-tuning pool. The object is the instruction-following policy, not merely a prompt trick or a moderation label. We study how data, losses, policies, review processes, and safety constraints shape a model's conditional distribution over responses.
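
A minimal sketch of one Self-Instruct-style generation round is shown below. The generate callable is an assumed wrapper around the current model, and the simple string-similarity check stands in for the ROUGE-based near-duplicate filter used in the original Self-Instruct recipe.

```python
import random
from difflib import SequenceMatcher

def too_similar(candidate, pool, threshold=0.7):
    """Reject candidates that nearly duplicate an existing instruction."""
    return any(
        SequenceMatcher(None, candidate.lower(), existing.lower()).ratio() > threshold
        for existing in pool
    )

def self_instruct_round(generate, seed_pool, num_candidates=100, demos_per_prompt=4):
    """One bootstrapping round: show the model a few existing instructions,
    ask it to propose a new one, then keep only novel, non-empty results.

    `generate(prompt) -> str` is an assumed wrapper around the current model.
    """
    pool = list(seed_pool)
    for _ in range(num_candidates):
        demos = random.sample(pool, k=min(demos_per_prompt, len(pool)))
        prompt = (
            "Write one new task instruction in the style of these examples:\n"
            + "\n".join(f"- {d}" for d in demos)
            + "\nNew instruction:"
        )
        candidate = generate(prompt).strip()
        if candidate and not too_similar(candidate, pool):
            pool.append(candidate)
    return pool
```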

A compact way to read this subsection is through the local symbol \pi_\theta(y \mid x). It marks the alignment object being transformed: an instruction policy, a preference pair, a violation classifier, a guardrail action, or a feedback event. The details differ, but the discipline is the same: state the object, state the loss or decision rule, then audit the behavioral side effects.

\mathcal{L}_{\mathrm{SFT}}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t \in R_i}\log \pi_\theta(y_{i,t} \mid x_i, y_{i,<t}).

For Self-Instruct bootstrapping, this formula should not be treated as a slogan. It defines which tokens, responses, comparisons, or decisions receive gradient or operational weight. A change in masking, sampling, rubric wording, or thresholding changes the effective objective even if the model architecture is unchanged.

Alignment object | Mathematical question | Engineering question
Data | Which examples define the target behavior? | Who wrote, filtered, and approved them?
Objective | Which terms receive weight? | Are masks, margins, and thresholds logged?
Policy | Which actions are allowed or disallowed? | Can reviewers reproduce the decision?
Evaluation | Which metric detects regression? | Is the test private, stable, and sliced?
Feedback | Which new evidence changes training? | How does it enter the next dataset version?

Examples:

  • Treat Self-Instruct bootstrapping as part of the model contract and store the exact data version.
  • Record the prompt template, role format, policy version, and decoder settings.
  • Compare aligned and reference policies on both helpfulness and safety slices.
  • Use held-out examples that were not used to tune refusals or rewards.
  • Inspect failure cases before declaring the objective successful.

Non-examples:

  • Calling a model aligned because it sounds polite on a few prompts.
  • Training on refusals without measuring over-refusal on benign requests.
  • Using a reward model as ground truth without calibration or adversarial checks.
  • Shipping a guardrail threshold without measuring false positive and false negative rates.
  • Letting feedback logs change training without provenance or consent controls.

A useful implementation pattern is to separate policy, data, and measurement. The policy says what behavior is desired. The data supplies examples, comparisons, attacks, or feedback events. The measurement checks whether the updated system moved in the intended direction without unacceptable regressions.

policy text/rubric
      |
      v
training or guardrail data  ->  objective/threshold  ->  aligned system
      |                                                   |
      v                                                   v
audit metadata                                      held-out safety eval

Worked reasoning pattern for Self-Instruct bootstrapping:

  1. Name the target behavior in plain language.
  2. Write the mathematical variable that represents it.
  3. Specify which examples or comparisons estimate it.
  4. Choose the optimization loss or runtime decision rule.
  5. Define the regression metric that would prove the change became worse.

Three details are especially easy to miss in alignment work. First, the user intent distribution is not the same as the pretraining distribution. Second, safety labels are not ordinary class labels; they encode policy judgments that can change by context. Third, optimization pressure finds shortcuts, so every proxy must be monitored for Goodhart-style failures.

Failure pressure | Typical symptom | Mitigation
Proxy reward | High reward but worse human judgment | Holdout preferences and adversarial review
Refusal shortcut | Safe but unhelpful responses | Measure benign refusal rate separately
Template overfit | Good on training chat format only | Evaluate alternate templates and languages
Policy ambiguity | Inconsistent labels | Adjudication and rubric revision
Feedback drift | New labels change old policy silently | Version policy, rubric, and dataset together

AI connection: Self-Instruct bootstrapping is part of the post-training stack used by modern assistant systems. It links the base language model to human intent, safety policy, and deployment constraints without pretending that a single loss can capture all values. The goal is not perfect alignment by formula; it is a repeatable loop where evidence, objectives, and safeguards improve together.

7.3 Data filtering

Data filtering belongs in the canonical scope of instruction tuning and SFT: synthetic pools are pruned of duplicated, malformed, low-quality, or policy-violating examples before they ever receive loss weight. The object is the instruction-following policy, not merely a prompt trick or a moderation label. We study how data, losses, policies, review processes, and safety constraints shape a model's conditional distribution over responses.
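
As a sketch of what such filters can look like in practice, the snippet below applies a few cheap checks to synthetic (instruction, response) pairs. The thresholds, blocklist phrases, and field names are illustrative assumptions.

```python
import hashlib

# Illustrative phrases that indicate template leakage or boilerplate refusals.
BLOCKLIST = {"as an ai language model", "i cannot fulfill"}

def keep_example(example, seen_hashes, min_tokens=3, max_tokens=2048):
    """Return True if a synthetic (instruction, response) pair survives a few
    cheap filters; the thresholds, phrases, and field names are illustrative."""
    instruction = example["instruction"].strip()
    response = example["response"].strip()
    # Length bounds on the response, counted in whitespace tokens.
    n_tokens = len(response.split())
    if not (min_tokens <= n_tokens <= max_tokens):
        return False
    # Drop responses that contain known boilerplate.
    lowered = response.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return False
    # Exact deduplication on the normalized instruction text.
    digest = hashlib.sha256(instruction.lower().encode()).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True

def filter_pool(pool):
    seen = set()
    return [ex for ex in pool if keep_example(ex, seen)]
```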

A compact way to read this subsection is through the local symbol \pi_\theta(y \mid x). It marks the alignment object being transformed: an instruction policy, a preference pair, a violation classifier, a guardrail action, or a feedback event. The details differ, but the discipline is the same: state the object, state the loss or decision rule, then audit the behavioral side effects.

\mathcal{L}_{\mathrm{SFT}}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t \in R_i}\log \pi_\theta(y_{i,t} \mid x_i, y_{i,<t}).

For data filtering, this formula should not be treated as a slogan. It defines which tokens, responses, comparisons, or decisions receive gradient or operational weight. A change in masking, sampling, rubric wording, or thresholding changes the effective objective even if the model architecture is unchanged.

Alignment object | Mathematical question | Engineering question
Data | Which examples define the target behavior? | Who wrote, filtered, and approved them?
Objective | Which terms receive weight? | Are masks, margins, and thresholds logged?
Policy | Which actions are allowed or disallowed? | Can reviewers reproduce the decision?
Evaluation | Which metric detects regression? | Is the test private, stable, and sliced?
Feedback | Which new evidence changes training? | How does it enter the next dataset version?

Examples:

  • Treat data filtering as part of the model contract and store the exact data version.
  • Record the prompt template, role format, policy version, and decoder settings.
  • Compare aligned and reference policies on both helpfulness and safety slices.
  • Use held-out examples that were not used to tune refusals or rewards.
  • Inspect failure cases before declaring the objective successful.

Non-examples:

  • Calling a model aligned because it sounds polite on a few prompts.
  • Training on refusals without measuring over-refusal on benign requests.
  • Using a reward model as ground truth without calibration or adversarial checks.
  • Shipping a guardrail threshold without measuring false positive and false negative rates.
  • Letting feedback logs change training without provenance or consent controls.

A useful implementation pattern is to separate policy, data, and measurement. The policy says what behavior is desired. The data supplies examples, comparisons, attacks, or feedback events. The measurement checks whether the updated system moved in the intended direction without unacceptable regressions.

policy text/rubric
      |
      v
training or guardrail data  ->  objective/threshold  ->  aligned system
      |                                                   |
      v                                                   v
audit metadata                                      held-out safety eval

Worked reasoning pattern for data filtering:

  1. Name the target behavior in plain language.
  2. Write the mathematical variable that represents it.
  3. Specify which examples or comparisons estimate it.
  4. Choose the optimization loss or runtime decision rule.
  5. Define the regression metric that would prove the change became worse.

Three details are especially easy to miss in alignment work. First, the user intent distribution is not the same as the pretraining distribution. Second, safety labels are not ordinary class labels; they encode policy judgments that can change by context. Third, optimization pressure finds shortcuts, so every proxy must be monitored for Goodhart-style failures.

Failure pressure | Typical symptom | Mitigation
Proxy reward | High reward but worse human judgment | Holdout preferences and adversarial review
Refusal shortcut | Safe but unhelpful responses | Measure benign refusal rate separately
Template overfit | Good on training chat format only | Evaluate alternate templates and languages
Policy ambiguity | Inconsistent labels | Adjudication and rubric revision
Feedback drift | New labels change old policy silently | Version policy, rubric, and dataset together

AI connection: Data filtering is part of the post-training stack used by modern assistant systems. It links the base language model to human intent, safety policy, and deployment constraints without pretending that a single loss can capture all values. The goal is not perfect alignment by formula; it is a repeatable loop where evidence, objectives, and safeguards improve together.

7.4 Quality scoring

Quality scoring belongs in the canonical scope of instruction tuning and SFT: each candidate example is assigned a score from a heuristic, a trained classifier, or an LLM judge, and only high-scoring examples are retained or upweighted. The object is the instruction-following policy, not merely a prompt trick or a moderation label. We study how data, losses, policies, review processes, and safety constraints shape a model's conditional distribution over responses.
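
A minimal selection sketch is shown below. Here score_fn is an assumed scorer (heuristic, classifier, or LLM judge), and the keep fraction is a tunable threshold that belongs in the audit metadata rather than in someone's notebook.

```python
def score_and_select(pool, score_fn, keep_fraction=0.5):
    """Attach a quality score to each example and keep the top fraction.

    `score_fn(example) -> float` is an assumed scorer: a heuristic, a trained
    classifier, or an LLM judge. The keep fraction is a threshold that should
    be logged with the dataset version so the selection stays auditable.
    """
    scored = sorted(((score_fn(ex), ex) for ex in pool),
                    key=lambda pair: pair[0], reverse=True)
    cutoff = max(1, int(len(scored) * keep_fraction)) if scored else 0
    kept = [ex for _, ex in scored[:cutoff]]
    report = {
        "total": len(scored),
        "kept": len(kept),
        "threshold_score": scored[cutoff - 1][0] if kept else None,
    }
    return kept, report
```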

A compact way to read this subsection is through the local symbol \pi_\theta(y \mid x). It marks the alignment object being transformed: an instruction policy, a preference pair, a violation classifier, a guardrail action, or a feedback event. The details differ, but the discipline is the same: state the object, state the loss or decision rule, then audit the behavioral side effects.

\mathcal{L}_{\mathrm{SFT}}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t \in R_i}\log \pi_\theta(y_{i,t} \mid x_i, y_{i,<t}).

For quality scoring, this formula should not be treated as a slogan. It defines which tokens, responses, comparisons, or decisions receive gradient or operational weight. A change in masking, sampling, rubric wording, or thresholding changes the effective objective even if the model architecture is unchanged.

Alignment object | Mathematical question | Engineering question
Data | Which examples define the target behavior? | Who wrote, filtered, and approved them?
Objective | Which terms receive weight? | Are masks, margins, and thresholds logged?
Policy | Which actions are allowed or disallowed? | Can reviewers reproduce the decision?
Evaluation | Which metric detects regression? | Is the test private, stable, and sliced?
Feedback | Which new evidence changes training? | How does it enter the next dataset version?

Examples:

  • Treat quality scoring as part of the model contract and store the exact data version.
  • Record the prompt template, role format, policy version, and decoder settings.
  • Compare aligned and reference policies on both helpfulness and safety slices.
  • Use held-out examples that were not used to tune refusals or rewards.
  • Inspect failure cases before declaring the objective successful.

Non-examples:

  • Calling a model aligned because it sounds polite on a few prompts.
  • Training on refusals without measuring over-refusal on benign requests.
  • Using a reward model as ground truth without calibration or adversarial checks.
  • Shipping a guardrail threshold without measuring false positive and false negative rates.
  • Letting feedback logs change training without provenance or consent controls.

A useful implementation pattern is to separate policy, data, and measurement. The policy says what behavior is desired. The data supplies examples, comparisons, attacks, or feedback events. The measurement checks whether the updated system moved in the intended direction without unacceptable regressions.

policy text/rubric
      |
      v
training or guardrail data  ->  objective/threshold  ->  aligned system
      |                                                   |
      v                                                   v
audit metadata                                      held-out safety eval

Worked reasoning pattern for quality scoring:

  1. Name the target behavior in plain language.
  2. Write the mathematical variable that represents it.
  3. Specify which examples or comparisons estimate it.
  4. Choose the optimization loss or runtime decision rule.
  5. Define the regression metric that would prove the change became worse.

Three details are especially easy to miss in alignment work. First, the user intent distribution is not the same as the pretraining distribution. Second, safety labels are not ordinary class labels; they encode policy judgments that can change by context. Third, optimization pressure finds shortcuts, so every proxy must be monitored for Goodhart-style failures.

Failure pressure | Typical symptom | Mitigation
Proxy reward | High reward but worse human judgment | Holdout preferences and adversarial review
Refusal shortcut | Safe but unhelpful responses | Measure benign refusal rate separately
Template overfit | Good on training chat format only | Evaluate alternate templates and languages
Policy ambiguity | Inconsistent labels | Adjudication and rubric revision
Feedback drift | New labels change old policy silently | Version policy, rubric, and dataset together

AI connection: Quality scoring is part of the post-training stack used by modern assistant systems. It links the base language model to human intent, safety policy, and deployment constraints without pretending that a single loss can capture all values. The goal is not perfect alignment by formula; it is a repeatable loop where evidence, objectives, and safeguards improve together.

7.5 Bootstrapping loops

Bootstrapping loops belong in the canonical scope of instruction tuning and SFT: generation, filtering, scoring, and fine-tuning are iterated so that each round's model produces the next round's training data. The object is the instruction-following policy, not merely a prompt trick or a moderation label. We study how data, losses, policies, review processes, and safety constraints shape a model's conditional distribution over responses.
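
The sketch below wires the previous pieces into one iterative loop with an evaluation gate. Every *_fn argument is an assumed project-specific callable, and the gate uses a held-out metric that plays no role in generation or filtering.

```python
def bootstrap(model, seed_pool, generate_fn, filter_fn, score_fn,
              finetune_fn, eval_fn, rounds=3):
    """Iterate generation, curation, and fine-tuning with an eval gate.

    Assumed callables:
      generate_fn(model, pool) -> list of candidate examples
      filter_fn(candidates)    -> surviving examples
      score_fn(examples)       -> curated examples
      finetune_fn(model, data) -> updated model
      eval_fn(model)           -> scalar score on a held-out slice that is
                                  never used for generation or filtering
    """
    pool = list(seed_pool)
    best_score = eval_fn(model)
    for round_idx in range(rounds):
        candidates = generate_fn(model, pool)
        curated = score_fn(filter_fn(candidates))
        new_model = finetune_fn(model, pool + curated)
        new_score = eval_fn(new_model)
        # Gate: reject the round if the held-out metric regressed.
        if new_score < best_score:
            print(f"round {round_idx}: eval {new_score:.3f} < {best_score:.3f}, "
                  "rejecting update")
            break
        model, pool, best_score = new_model, pool + curated, new_score
    return model, pool
```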

A compact way to read this subsection is through the local symbol \pi_\theta(y \mid x). It marks the alignment object being transformed: an instruction policy, a preference pair, a violation classifier, a guardrail action, or a feedback event. The details differ, but the discipline is the same: state the object, state the loss or decision rule, then audit the behavioral side effects.

\mathcal{L}_{\mathrm{SFT}}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t \in R_i}\log \pi_\theta(y_{i,t} \mid x_i, y_{i,<t}).

For bootstrapping loops, this formula should not be treated as a slogan. It defines which tokens, responses, comparisons, or decisions receive gradient or operational weight. A change in masking, sampling, rubric wording, or thresholding changes the effective objective even if the model architecture is unchanged.

Alignment object | Mathematical question | Engineering question
Data | Which examples define the target behavior? | Who wrote, filtered, and approved them?
Objective | Which terms receive weight? | Are masks, margins, and thresholds logged?
Policy | Which actions are allowed or disallowed? | Can reviewers reproduce the decision?
Evaluation | Which metric detects regression? | Is the test private, stable, and sliced?
Feedback | Which new evidence changes training? | How does it enter the next dataset version?

Examples:

  • Treat bootstrapping loops as part of the model contract and store the exact data version.
  • Record the prompt template, role format, policy version, and decoder settings.
  • Compare aligned and reference policies on both helpfulness and safety slices.
  • Use held-out examples that were not used to tune refusals or rewards.
  • Inspect failure cases before declaring the objective successful.

Non-examples:

  • Calling a model aligned because it sounds polite on a few prompts.
  • Training on refusals without measuring over-refusal on benign requests.
  • Using a reward model as ground truth without calibration or adversarial checks.
  • Shipping a guardrail threshold without measuring false positive and false negative rates.
  • Letting feedback logs change training without provenance or consent controls.

A useful implementation pattern is to separate policy, data, and measurement. The policy says what behavior is desired. The data supplies examples, comparisons, attacks, or feedback events. The measurement checks whether the updated system moved in the intended direction without unacceptable regressions.

policy text/rubric
      |
      v
training or guardrail data  ->  objective/threshold  ->  aligned system
      |                                                   |
      v                                                   v
audit metadata                                      held-out safety eval

Worked reasoning pattern for bootstrapping loops:

  1. Name the target behavior in plain language.
  2. Write the mathematical variable that represents it.
  3. Specify which examples or comparisons estimate it.
  4. Choose the optimization loss or runtime decision rule.
  5. Define the regression metric that would prove the change became worse.

Three details are especially easy to miss in alignment work. First, the user intent distribution is not the same as the pretraining distribution. Second, safety labels are not ordinary class labels; they encode policy judgments that can change by context. Third, optimization pressure finds shortcuts, so every proxy must be monitored for Goodhart-style failures.

Failure pressure | Typical symptom | Mitigation
Proxy reward | High reward but worse human judgment | Holdout preferences and adversarial review
Refusal shortcut | Safe but unhelpful responses | Measure benign refusal rate separately
Template overfit | Good on training chat format only | Evaluate alternate templates and languages
Policy ambiguity | Inconsistent labels | Adjudication and rubric revision
Feedback drift | New labels change old policy silently | Version policy, rubric, and dataset together

AI connection: Bootstrapping loops is part of the post-training stack used by modern assistant systems. It links the base language model to human intent, safety policy, and deployment constraints without pretending that a single loss can capture all values. The goal is not perfect alignment by formula; it is a repeatable loop where evidence, objectives, and safeguards improve together.

8. Common Mistakes

# | Mistake | Why It Is Wrong | Fix
1 | Treating SFT as full alignment | SFT imitates demonstrations but does not optimize preferences or robust safety. | Use preference optimization and safety evals after SFT.
2 | Masking prompt tokens incorrectly | The model is trained to copy user prompts instead of answering them. | Use response-only loss masks for chat SFT.
3 | Trusting reward scores as truth | Reward models are learned proxies with bias and calibration error. | Evaluate reward models on held-out preference and safety sets.
4 | Ignoring KL drift | A policy can become high reward but lose language quality or capability. | Track KL to the reference policy and capability regressions (see the sketch after this table).
5 | Optimizing only refusal rate | High refusal can hide low helpfulness and overblocking. | Measure safe compliance and benign refusal separately.
6 | Using public jailbreaks as the only red team | Static attacks overfit quickly. | Mix human, automated, private, and adaptive attacks.
7 | Changing policy text without versioning | Labels become incomparable across time. | Version policy, rubric, data, and model together.
8 | Skipping reviewer calibration | Human feedback becomes noisy and inconsistent. | Use gold tasks, overlap, adjudication, and disagreement analysis.
9 | Letting guardrails replace model training | Runtime filters cannot fix every model behavior. | Use layered defenses: data, training, policies, and gates.
10 | Confusing safety monitoring with production observability | Chapter 18 feedback loops are not full MLOps dashboards. | Hand production telemetry to Chapter 19 while preserving safety feedback evidence.
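
For mistake 4, a small monitoring sketch is shown below: an average per-token KL between the tuned policy and the reference model, reusing the tensor conventions from the SFT loss sketch in Section 7.1. It illustrates the tracked quantity under those assumptions, not a specific framework's API.

```python
import torch
import torch.nn.functional as F

def mean_token_kl(policy_logits, ref_logits, response_mask):
    """Average KL(policy || reference) over response tokens.

    Shapes follow the SFT loss sketch in Section 7.1: logits are
    (batch, seq_len, vocab) and response_mask is (batch, seq_len) with
    1.0 on assistant-response tokens. Rising values signal policy drift
    away from the reference model.
    """
    log_p = F.log_softmax(policy_logits, dim=-1)
    log_q = F.log_softmax(ref_logits, dim=-1)
    per_token_kl = (log_p.exp() * (log_p - log_q)).sum(dim=-1)
    per_token_kl = per_token_kl * response_mask
    return per_token_kl.sum() / response_mask.sum().clamp(min=1.0)
```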

9. Exercises

  1. (*) Why pretrained next-token models need instruction-following alignment. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  2. (*) Instruction following as behavior conditioning. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  3. (*) Demonstrations versus preferences. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  4. (**) Chat templates as part of the objective. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  5. (**) Historical arc from FLAN to InstructGPT. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  6. (***) Prompt x and response y. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  7. (***) Demonstration dataset \mathcal{D}_{\mathrm{SFT}}. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  8. (***) Policy \pi_\theta(y \mid x). Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  9. (***) Response-token mask. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

  10. (***) Instruction distribution and validation split. Define the alignment object, write the relevant loss or decision rule, give one safe example and one unsafe edge case, then explain which held-out metric would catch regression.

10. Why This Matters for AI

Concept | AI Impact
Instruction tuning | Converts raw next-token prediction into usable assistant behavior
Preference learning | Optimizes choices that are hard to express as reference answers
KL control | Limits destructive policy drift during reward optimization
Red teaming | Finds harmful behavior before deployment and creates regression cases
Guardrails | Adds runtime control when training alone is insufficient
Policy versioning | Keeps safety labels auditable across changing rules
Human feedback | Supplies sparse but high-value evidence about user intent and risk
Release gates | Connects alignment work to measurable safety and capability thresholds

11. Conceptual Bridge

Chapter 17 taught how to measure model behavior with benchmarks, uncertainty, robustness tests, ablations, and online experiments. Chapter 18 uses those measurements to change behavior through data, objectives, policies, guardrails, and human feedback.

Chapter 15 remains the home for general fine-tuning mechanics: parameter-efficient updates, memory cost, and broad training details. This chapter narrows the focus to post-training methods whose purpose is alignment with instructions, preferences, and safety policies.

Chapter 19 will pick up production lineage, monitoring, observability, drift, and serving systems. Chapter 18 stops at the safety feedback loop: how evidence becomes alignment data or runtime policy, not how every deployed metric is stored forever.

15 LLM training and fine-tuning math
        -> objectives and update mechanics
17 Evaluation and Reliability
        -> evidence about model behavior
18 Alignment and Safety
        -> SFT, preferences, red teams, policies, feedback
19 Production ML and MLOps
        -> deployment, observability, drift, retraining
