Reinforcement Learning, Part 1: Intuition and Motivation through Formal MDP Setup
1. Intuition and Motivation
Intuition and Motivation is part of the core mathematical path from Markov chains to modern AI agents. The emphasis is on the object definitions and update equations a learner must be able to inspect in code.
1.1 Sequential decision making
Purpose. Sequential decision making focuses on why actions change future data. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
RL is the mathematics of decisions whose consequences alter later observations. A training example is no longer fixed before learning; the policy helps create the future dataset.
Worked reading.
At time $t$, the agent sees state $s_t$, chooses action $a_t$, receives reward $r_{t+1}$, and reaches next state $s_{t+1}$. The learning signal is attached to a trajectory, not a single independent example.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- robot navigation.
- game play.
- dialogue policy tuning.
Non-examples:
- ordinary regression with fixed labels.
- a bandit problem with no state evolution.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
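As a minimal sketch of those printable objects (assuming a hypothetical three-state chain with two actions; the arrays below are illustrative and not the lesson's actual notebook environment), every piece of the MDP fits in a few NumPy arrays:

```python
import numpy as np

# Illustrative 3-state chain: states 0, 1, 2 (state 2 terminal); actions 0 = "left", 1 = "right".
# P[a, s, s'] is the probability of landing in s' after taking action a in state s.
P = np.zeros((2, 3, 3))
P[0] = [[1.0, 0.0, 0.0],
        [0.9, 0.1, 0.0],
        [0.0, 0.0, 1.0]]   # terminal state is absorbing
P[1] = [[0.1, 0.9, 0.0],
        [0.0, 0.1, 0.9],
        [0.0, 0.0, 1.0]]

# r[s, a]: expected immediate reward; only pushing right from state 1 pays off.
r = np.zeros((3, 2))
r[1, 1] = 1.0

# Every object can be printed and checked by hand before any learning code exists.
assert np.allclose(P.sum(axis=2), 1.0), "each transition row must be a probability distribution"
print("P under action 'right':\n", P[1])
print("r:\n", r)
```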
1.2 Rewards, returns, and delayed credit
Purpose. Rewards, returns, and delayed credit focuses on why scalar feedback is hard to assign. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The reward signal is local, but the objective is cumulative. The central difficulty is assigning a future return back to earlier actions.
Worked reading.
The discounted return is $G_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$, with discount factor $\gamma \in [0, 1)$.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- sparse game win/loss rewards.
- human preference scores for full responses.
- robot task completion bonuses.
Non-examples:
- per-token supervised labels.
- a deterministic lookup table with no delayed consequence.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
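A minimal sketch of the return computation, assuming nothing beyond a logged list of per-step rewards (the reward sequence below is an illustrative stand-in for one episode):

```python
# Discounted return G_t = r_{t+1} + gamma * G_{t+1}, computed by a backward pass.
def discounted_returns(rewards, gamma=0.99):
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Sparse, delayed reward: only the final step pays off, yet every earlier
# step receives credit through the discounted return.
episode_rewards = [0.0, 0.0, 0.0, 1.0]
print(discounted_returns(episode_rewards, gamma=0.9))
# -> [0.729, 0.81, 0.9, 1.0]
```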
1.3 Exploration versus exploitation
Purpose. Exploration versus exploitation focuses on why the learner must choose data. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The agent must balance actions that look good now with actions that reveal useful information. This makes data collection part of the optimization problem.
Worked reading.
An $\varepsilon$-greedy policy chooses a greedy action with probability $1 - \varepsilon$ and explores otherwise.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- optimistic initialization.
- Boltzmann exploration.
- UCB-style uncertainty bonuses.
Non-examples:
- evaluating one fixed logged policy only.
- training on a static supervised corpus.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
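A minimal sketch of $\varepsilon$-greedy action selection over a tabular Q row; the Q values and $\varepsilon$ below are illustrative assumptions:

```python
import numpy as np

def epsilon_greedy(Q_row, epsilon, rng):
    if rng.random() < epsilon:
        return int(rng.integers(len(Q_row)))   # explore: uniform random action
    return int(np.argmax(Q_row))               # exploit: current greedy action

rng = np.random.default_rng(0)
Q_row = np.array([0.1, 0.5, 0.2])
counts = np.zeros(3, dtype=int)
for _ in range(1000):
    counts[epsilon_greedy(Q_row, epsilon=0.1, rng=rng)] += 1
print(counts)  # roughly 93% on the greedy action 1, the rest spread uniformly
```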
1.4 RL versus supervised learning
Purpose. RL versus supervised learning focuses on why labels are not enough. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
Supervised learning assumes labeled targets. RL instead observes consequences from an interactive process and must handle shifting state-action distributions.
Worked reading.
A policy update changes $d^{\pi}$, the state-visitation distribution, so future data are policy-dependent.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- offline RL from logged data.
- online policy improvement.
- preference-tuned language models.
Non-examples:
- i.i.d. image classification.
- least-squares regression with fixed design.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
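A minimal sketch of the policy-dependent data distribution, assuming a hypothetical two-state MDP: two policies over the same dynamics visit the states with very different frequencies, which is exactly what a fixed-design supervised setup rules out.

```python
import numpy as np

# Illustrative 2-state dynamics: P[a, s, s'].
P = np.array([
    [[0.95, 0.05],    # action 0 ("stay") from state 0
     [0.50, 0.50]],   # action 0 from state 1
    [[0.30, 0.70],    # action 1 ("go") from state 0
     [0.10, 0.90]],   # action 1 from state 1
])

def visitation(pi, steps=500):
    """Average state distribution along the chain induced by policy pi[s, a]."""
    P_pi = np.einsum("sa,asn->sn", pi, P)   # P_pi[s, s'] = sum_a pi[s, a] * P[a, s, s']
    d = np.array([1.0, 0.0])                # start distribution: always state 0
    avg = np.zeros(2)
    for _ in range(steps):
        avg += d
        d = d @ P_pi
    return avg / steps

mostly_stay = np.array([[0.9, 0.1], [0.9, 0.1]])
mostly_go = np.array([[0.1, 0.9], [0.1, 0.9]])
print(visitation(mostly_stay))   # concentrates on state 0
print(visitation(mostly_go))     # concentrates on state 1
```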
1.5 Where RL appears in LLM systems
Purpose. Where RL appears in LLM systems focuses on why preference optimization uses this math. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
RL enters LLM systems through preference learning, reward modeling, KL-regularized policy updates, and evaluation policies that adapt from feedback.
Worked reading.
RLHF typically optimizes a learned reward model $r_{\phi}$ while penalizing divergence from a reference model with a KL term $\beta\,\mathrm{KL}\!\left(\pi_{\theta} \,\|\, \pi_{\mathrm{ref}}\right)$.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- PPO-style RLHF.
- DPO-style preference optimization.
- bandit feedback for ranking.
Non-examples:
- plain next-token pretraining.
- static instruction tuning without preference feedback.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
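A minimal sketch of the KL-penalized sequence-level reward; the function name, scores, and log-probabilities below are illustrative placeholders, not a specific RLHF library's API:

```python
def kl_penalized_reward(reward_model_score, logprobs_policy, logprobs_reference, beta=0.1):
    # Per-token estimate of KL(pi_theta || pi_ref) along the sampled response.
    kl_estimate = sum(lp - lr for lp, lr in zip(logprobs_policy, logprobs_reference))
    # The optimized signal: reward-model score minus the scaled divergence penalty.
    return reward_model_score - beta * kl_estimate, kl_estimate

# One hypothetical response: the reward model likes it, but the policy has
# drifted from the reference model on several tokens.
score, kl = kl_penalized_reward(
    reward_model_score=2.3,
    logprobs_policy=[-1.1, -0.4, -2.0, -0.7],
    logprobs_reference=[-1.5, -0.9, -2.0, -1.6],
)
print(round(score, 3), round(kl, 3))   # 2.12 1.8
```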
2. Formal MDP Setup
Formal MDP Setup is part of the core mathematical path from Markov chains to modern AI agents. The emphasis is on the object definitions and update equations a learner must be able to inspect in code.
2.1 States, actions, rewards, and transitions
Purpose. States, actions, rewards, and transitions focuses on the tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
An MDP is the tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$: a state space, an action space, a transition kernel, a reward function, and a discount factor. It is the formal object that makes sequential decision-making mathematically precise.
Worked reading.
The Markov property says the current state contains the predictive information needed for the next transition and reward.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- finite gridworld.
- inventory control.
- dialogue state tracking.
Non-examples:
- raw observations that omit hidden state.
- a static labeled dataset with no actions.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
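A minimal sketch of the tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$ held as explicit arrays with shape and stochasticity checks; the `TabularMDP` name and the example numbers are illustrative assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TabularMDP:
    """Illustrative container for the tuple (S, A, P, r, gamma) as explicit arrays."""
    P: np.ndarray   # shape (A, S, S): P[a, s, s'] = Pr(s' | s, a)
    r: np.ndarray   # shape (S, A): expected immediate reward
    gamma: float

    def validate(self):
        A, S, S2 = self.P.shape
        assert S == S2, "the kernel must map states to states"
        assert self.r.shape == (S, A), "rewards must be indexed by (state, action)"
        assert np.allclose(self.P.sum(axis=2), 1.0), "each P[a, s, :] must be a distribution"
        assert 0.0 <= self.gamma <= 1.0, "gamma must lie in [0, 1]"

mdp = TabularMDP(
    P=np.array([[[0.8, 0.2], [0.0, 1.0]],
                [[0.1, 0.9], [0.0, 1.0]]]),
    r=np.array([[0.0, 1.0], [0.0, 0.0]]),
    gamma=0.9,
)
mdp.validate()
print(mdp.P.shape, mdp.r.shape, mdp.gamma)   # (2, 2, 2) (2, 2) 0.9
```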
2.2 The Markov property
Purpose. The Markov property focuses on why the present state summarizes the useful past. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
An MDP is the formal object that makes sequential decision-making mathematically precise.
Worked reading.
The Markov property says the current state contains all the predictive information needed for the next transition and reward: $P(s_{t+1}, r_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \ldots, s_0) = P(s_{t+1}, r_{t+1} \mid s_t, a_t)$.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- finite gridworld.
- inventory control.
- dialogue state tracking.
Non-examples:
- raw observations that omit hidden state.
- a static labeled dataset with no actions.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
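A minimal sketch of what the Markov property buys, using a hypothetical position-with-hidden-velocity process: the raw position is not a Markov state, but the pair (previous position, current position) reveals the velocity and nearly determines the next move.

```python
import numpy as np

# Illustrative non-Markov observation stream: a position whose hidden velocity
# occasionally flips. Position alone does not predict the next step; stacking
# two consecutive observations restores (almost all of) the Markov property.
rng = np.random.default_rng(0)

def rollout(steps=20_000, flip_prob=0.05):
    pos, vel, history = 0, 1, []
    for _ in range(steps):
        history.append(pos)
        if rng.random() < flip_prob:
            vel = -vel
        pos += vel
    return history

obs = rollout()
moves = [obs[t + 1] - obs[t] for t in range(len(obs) - 1)]
up_after_up = [moves[t] == 1 for t in range(1, len(moves)) if moves[t - 1] == 1]
print("P(up | position only)      ~", round(np.mean([m == 1 for m in moves]), 3))  # about 0.5
print("P(up | last two positions) ~", round(np.mean(up_after_up), 3))              # about 0.95
```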
2.3 Episodic, continuing, finite-horizon, and infinite-horizon tasks
Purpose. Episodic, continuing, finite-horizon, and infinite-horizon tasks focus on how time changes the objective. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
Episodic tasks terminate, so the return is a finite sum over one episode; continuing tasks never terminate, so the return needs discounting (or an average-reward formulation) to remain finite.
Worked reading.
For an episode ending at time $T$, the return is $G_t = \sum_{k=0}^{T-t-1} \gamma^k r_{t+k+1}$; in the infinite-horizon case the sum has no final term, and $\gamma < 1$ with bounded rewards keeps it finite.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- small tabular MDPs.
- neural value functions.
- preference-optimized language policies.
Non-examples:
- static regression.
- uncontrolled simulation traces.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
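A minimal sketch of how the horizon shows up in code: terminal masking of the bootstrapped target, a finite episodic sum, and the $1/(1-\gamma)$ bound for a continuing task (all numbers below are illustrative):

```python
# One-step target with and without terminal masking.
def td_target(reward, next_value, done, gamma=0.99):
    # At a terminal transition the episode ends: no value is propagated past it.
    return reward + gamma * next_value * (1.0 - float(done))

print(td_target(reward=1.0, next_value=5.0, done=False))  # 1 + 0.99 * 5 = 5.95
print(td_target(reward=1.0, next_value=5.0, done=True))   # 1.0: bootstrap masked out

# Episodic (finite-horizon) return: a finite sum that always converges.
episodic = sum((0.99 ** k) * r for k, r in enumerate([0.0, 0.0, 1.0]))
# Continuing tasks have no terminal step; gamma < 1 keeps the infinite sum finite
# (a constant reward of 1 per step has return 1 / (1 - gamma)).
continuing_bound = 1.0 / (1.0 - 0.99)
print(round(episodic, 4), continuing_bound)   # 0.9801 100.0
```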
2.4 Transition kernels and reward functions
Purpose. Transition kernels and reward functions focuses on how stochastic environments are represented. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The transition kernel $P(s' \mid s, a)$ assigns a probability distribution over next states to every state-action pair; the reward function $r(s, a)$ (or $r(s, a, s')$) assigns the expected scalar reward for that step.
Worked reading.
In a tabular MDP, $P$ is a stack of row-stochastic $|\mathcal{S}| \times |\mathcal{S}|$ matrices, one per action, and $r$ is an $|\mathcal{S}| \times |\mathcal{A}|$ array; every Bellman expectation is a weighted sum over these entries.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- slip probabilities in a gridworld.
- stochastic demand in inventory control.
- a learned dynamics model in model-based RL.
Non-examples:
- a policy, which maps states to actions rather than to next states.
- a value function, which is derived from $P$ and $r$ rather than given by the environment.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
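A minimal sketch of using explicit $P$ and $r$ arrays both ways: drawing one sampled transition from the kernel and computing the exact one-step expected backup (the arrays and value estimates below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([              # P[a, s, s'] for 2 actions and 3 states
    [[0.7, 0.3, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],
    [[0.2, 0.8, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],
])
r = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # r[s, a]
gamma = 0.9
V = np.array([0.5, 1.2, 0.0])                         # some current value estimates

def sample_step(s, a):
    # Draw s' ~ P(. | s, a): one stochastic realization of the kernel.
    return int(rng.choice(len(V), p=P[a, s]))

def expected_backup(s, a):
    # Exact expectation over the kernel: r(s, a) + gamma * sum_s' P(s' | s, a) V(s').
    return r[s, a] + gamma * P[a, s] @ V

print("sampled next state:", sample_step(s=1, a=1))
print("expected backup   :", expected_backup(s=1, a=1))   # 1 + 0.9 * 0.12 = 1.108
```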
2.5 Partial observability and belief states
Purpose. Partial observability and belief states focuses on what breaks when observations are not states. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
In a partially observable problem, the agent receives an observation $o_t$ that need not determine the hidden state $s_t$, so the observation alone is not a Markov state; acting well may require conditioning on the history or on a belief state.
Worked reading.
The belief state $b_t(s) = P(s_t = s \mid o_{1:t}, a_{1:t-1})$ is a posterior over hidden states, updated by Bayes' rule after each action and observation; the belief, not the raw observation, recovers the Markov property.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| $\mathcal{S}$ | state space | task context, simulator state, dialogue state |
| $\mathcal{A}$ | action space | moves, controls, generated tokens, tool choices |
| $P(s' \mid s, a)$ | transition kernel | environment dynamics or next-context distribution |
| $r(s, a)$ | reward function | scalar training signal, preference score, task score |
| $\pi(a \mid s)$ | policy | behavior rule or neural action distribution |
| $V^{\pi}(s)$, $Q^{\pi}(s, a)$ | value functions | estimates of future performance |
Examples:
- a robot with noisy or occluded sensors.
- card games with hidden opponent hands.
- dialogue where the user's intent is latent.
Non-examples:
- a fully observed gridworld where the observation is the state.
- a static labeled dataset with no actions.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using a state value $V$, an action value $Q$, an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
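A minimal sketch of a discrete belief-state update for a hypothetical two-state problem with a single action: predict through the dynamics, reweight by the observation likelihood, and renormalize (all matrices below are illustrative placeholders):

```python
import numpy as np

P = np.array([[0.9, 0.1],      # P[s, s'] = Pr(s' | s) under the single action considered
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],      # O[s', o] = Pr(o | s'): observation model
              [0.3, 0.7]])

def belief_update(belief, observation):
    predicted = belief @ P                      # predict: push belief through dynamics
    weighted = predicted * O[:, observation]    # correct: weight by observation likelihood
    return weighted / weighted.sum()            # normalize back to a distribution

b = np.array([0.5, 0.5])                        # uniform prior over hidden states
for obs in [0, 0, 1]:                           # an illustrative observation sequence
    b = belief_update(b, obs)
    print(np.round(b, 3))
# The belief, not the raw observation, is the Markov state of the partially observed problem.
```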