Reinforcement Learning: Part 9: Policy Gradients to References
9. Policy Gradients
Policy Gradients is part of the core mathematical path from Markov chains to modern AI agents. The emphasis is on the object definitions and update equations a learner must be able to inspect in code.
9.1 Policy objective
Purpose. The policy objective focuses on maximizing the expected return J(θ) over trajectories sampled from the policy itself. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
Policy methods optimize the action distribution directly. This is essential when actions are continuous, structured, or generated token by token.
Worked reading.
The score-function identity converts a derivative of an expectation into an expectation of the score ∇θ log πθ(a|s) times a return-like signal.
| Object | Mathematical role | ML interpretation |
|---|---|---|
| S | state space | task context, simulator state, dialogue state |
| A | action space | moves, controls, generated tokens, tool choices |
| P(s' \| s, a) | transition kernel | environment dynamics or next-context distribution |
| r(s, a) | reward function | scalar training signal, preference score, task score |
| π(a \| s) | policy | behavior rule or neural action distribution |
| V^π, Q^π | value functions | estimates of future performance |
Examples:
- REINFORCE.
- actor-critic.
- PPO for RLHF.
Non-examples:
- choosing from a tiny action table.
- behavior cloning without reward feedback.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using V(s), Q(s, a), an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
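The SARSA versus Q-learning contrast can be made concrete in a few lines. A minimal sketch with made-up Q-values and step sizes (not from the lesson's notebook):

```python
# Contrast SARSA and Q-learning on one transition (s, a, r, s').
# All numbers are hypothetical, chosen for illustration.
alpha, gamma = 0.5, 0.9
Q = {(0, 'left'): 1.0, (0, 'right'): 2.0, (1, 'left'): 0.0, (1, 'right'): 4.0}

s, a, r, s_next = 0, 'left', 1.0, 1
a_next = 'left'  # action actually sampled by the behavior policy at s'

# SARSA: bootstrap from the action actually taken at s'.
sarsa_target = r + gamma * Q[(s_next, a_next)]            # 1.0 + 0.9*0.0 = 1.0
q_sarsa = Q[(s, a)] + alpha * (sarsa_target - Q[(s, a)])  # stays at 1.0

# Q-learning: bootstrap from the greedy action at s', regardless of behavior.
greedy = max(Q[(s_next, b)] for b in ('left', 'right'))   # 4.0
ql_target = r + gamma * greedy                            # 1.0 + 0.9*4.0 = 4.6
q_ql = Q[(s, a)] + alpha * (ql_target - Q[(s, a)])        # 1.0 + 0.5*3.6 = 2.8

print(q_sarsa, q_ql)
```

Same transition, same code shape, different bootstrap action, very different updated value: that is the on-policy versus off-policy distinction made visible.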
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
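As a concrete check on the objective and the score-function reading above, here is a minimal sketch comparing an exact policy gradient with its score-function Monte Carlo estimate on a one-step, two-action bandit (the sigmoid policy and reward values are illustrative assumptions, not part of the lesson's notebook):

```python
import math, random

# Score-function (REINFORCE) gradient estimate for a two-action sigmoid
# policy on a one-step bandit, compared with the exact gradient.
random.seed(0)
theta = 0.0                      # logit of action 1 relative to action 0
rewards = [1.0, 3.0]             # r(a) for a in {0, 1} (illustrative)

def p1(th):                      # probability of choosing action 1
    return 1.0 / (1.0 + math.exp(-th))

# Exact gradient of J(theta) = (1 - p1)*r0 + p1*r1 with respect to theta.
exact = p1(theta) * (1 - p1(theta)) * (rewards[1] - rewards[0])

# Monte Carlo estimate of E[ d/dtheta log pi(a) * r(a) ].
n, acc = 200_000, 0.0
for _ in range(n):
    a = 1 if random.random() < p1(theta) else 0
    score = a - p1(theta)        # d/dtheta log pi(a|theta) for a sigmoid policy
    acc += score * rewards[a]
estimate = acc / n

print(exact, estimate)           # the two should agree closely
```

Because the bandit has one step, the return is just the immediate reward; the same identity extends to trajectories by summing scores over time steps.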
9.2 Log-derivative trick
Purpose. The log-derivative trick focuses on turning derivatives of trajectory probabilities into score functions that can be estimated from samples.
Operational definition.
This concept is part of the bridge from sequential probability models to practical RL algorithms.
Worked reading.
The key habit is to name the state, action, reward, transition, policy, and value object before writing an update.
Examples:
- small tabular MDPs.
- neural value functions.
- preference-optimized language policies.
Non-examples:
- static regression.
- uncontrolled simulation traces.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
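The log-derivative identity can be checked numerically by comparing a finite-difference derivative of an expectation against the score-function form on a small softmax distribution (the outcome values f are arbitrary illustrative numbers):

```python
import math

# Check the identity d/dtheta E[f(x)] = E[f(x) * d/dtheta log p(x)]
# on a 3-outcome softmax distribution.
logits = [0.2, -0.1, 0.5]
f = [1.0, 4.0, -2.0]             # arbitrary outcome values

def probs(lg):
    z = [math.exp(v) for v in lg]
    s = sum(z)
    return [v / s for v in z]

def expect(lg):
    return sum(p * fi for p, fi in zip(probs(lg), f))

k, eps = 0, 1e-6
# Finite-difference derivative of the expectation w.r.t. logits[k].
bumped = list(logits); bumped[k] += eps
fd = (expect(bumped) - expect(logits)) / eps

# Score-function form: E[f(x) * (1[x=k] - p_k)], using the softmax score.
p = probs(logits)
sf = sum(p[i] * f[i] * ((1.0 if i == k else 0.0) - p[k]) for i in range(3))

print(fd, sf)                    # agree up to finite-difference error
```

The same identity, applied to a whole trajectory probability, is what lets RL estimate gradients without differentiating through the (unknown) environment dynamics.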
9.3 Policy gradient theorem
Purpose. The policy gradient theorem explains why the gradient of the objective can weight the score by Q^π(s, a) under the policy's own state-visitation distribution.
Operational definition.
Policy methods optimize the action distribution directly. This is essential when actions are continuous, structured, or generated token by token.
Worked reading.
The score-function identity converts a derivative of an expectation into an expectation of the score ∇θ log πθ(a|s) times a return-like signal.
Examples:
- REINFORCE.
- actor-critic.
- PPO for RLHF.
Non-examples:
- choosing from a tiny action table.
- behavior cloning without reward feedback.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
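On a tiny hand-built MDP the theorem can be verified directly: compute the gradient from visitation-weighted scores and Q-values, then compare against a finite difference of the objective. A sketch under hypothetical dynamics (from s0, action 1 ends with reward 1; action 0 moves to s1 with reward 0; from s1, action 1 ends with reward 2, action 0 with reward 0; one sigmoid logit per state):

```python
import math

# Policy-gradient theorem check on a tiny two-state episodic MDP.
sig = lambda t: 1.0 / (1.0 + math.exp(-t))

def J(th0, th1):
    v1 = sig(th1) * 2.0                       # value of s1
    return sig(th0) * 1.0 + (1 - sig(th0)) * v1

th0, th1 = 0.3, -0.2

# Theorem: grad_th0 J = d(s0) * sum_a pi(a|s0) * score(a) * Q(s0, a),
# with visitation d(s0) = 1, Q(s0, 1) = 1 and Q(s0, 0) = V(s1).
p0, p1 = sig(th0), sig(th1)
q_s0 = {1: 1.0, 0: p1 * 2.0}
grad = p0 * (1 - p0) * q_s0[1] + (1 - p0) * (-p0) * q_s0[0]

# Finite-difference derivative of the objective for comparison.
eps = 1e-6
fd = (J(th0 + eps, th1) - J(th0, th1)) / eps
print(grad, fd)
```

The two numbers agree, which is the whole content of the theorem at this scale: score-weighted Q-values under the policy's visitation reproduce the true gradient of the objective.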
9.4 Baselines and variance reduction
Purpose. Baselines and variance reduction focus on why subtracting a state-dependent baseline b(s) from the return leaves the gradient estimate unbiased while reducing its variance.
Operational definition.
This concept is part of the bridge from sequential probability models to practical RL algorithms.
Worked reading.
The key habit is to name the state, action, reward, transition, policy, and value object before writing an update.
Examples:
- small tabular MDPs.
- neural value functions.
- preference-optimized language policies.
Non-examples:
- static regression.
- uncontrolled simulation traces.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
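A small experiment makes both claims visible: the baseline leaves the mean of the gradient estimator unchanged but shrinks its variance. The two-action setup and the baseline value below are illustrative assumptions:

```python
import math, random

# Score-function estimator with and without a baseline on a two-action
# sigmoid policy. Rewards and baseline are illustrative.
random.seed(1)
theta, rewards, baseline = 0.0, [0.0, 2.0], 1.0   # baseline near mean reward
p1 = 1.0 / (1.0 + math.exp(-theta))

def samples(use_baseline, n=100_000):
    out = []
    for _ in range(n):
        a = 1 if random.random() < p1 else 0
        score = a - p1                     # d/dtheta log pi(a)
        g = rewards[a] - (baseline if use_baseline else 0.0)
        out.append(score * g)
    return out

def mean(xs): return sum(xs) / len(xs)
def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

raw, base = samples(False), samples(True)
print(mean(raw), mean(base))   # both estimate the same gradient (0.5)
print(var(raw), var(base))     # the baseline shrinks the variance
```

In this hand-picked case the baseline happens to be near-optimal and drives the per-sample variance essentially to zero; in practice a learned V(s) plays the role of the baseline and the reduction is partial.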
9.5 Entropy regularization
Purpose. Entropy regularization focuses on why stochastic policies are encouraged during optimization.
Operational definition.
This concept is part of the bridge from sequential probability models to practical RL algorithms.
Worked reading.
The key habit is to name the state, action, reward, transition, policy, and value object before writing an update.
Examples:
- small tabular MDPs.
- neural value functions.
- preference-optimized language policies.
Non-examples:
- static regression.
- uncontrolled simulation traces.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
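A sketch of the entropy-regularized objective on two actions shows the mechanism: the maximizer of p·r + τH(p) is the softmax of r/τ, so the optimal policy stays stochastic instead of collapsing to the greedy action. Rewards, temperature, and step size are illustrative assumptions:

```python
import math

# Gradient ascent on the logit of an entropy-regularized two-action
# objective, checked against the closed-form softmax(r/tau) solution.
r, tau = [1.0, 2.0], 0.5

theta = 0.0                       # logit of action 1
for _ in range(2000):
    p = 1.0 / (1.0 + math.exp(-theta))
    # d/dtheta [ (1-p)r0 + p r1 + tau*H(p) ], H = -p log p - (1-p) log(1-p)
    grad = p * (1 - p) * ((r[1] - r[0]) - tau * (math.log(p) - math.log(1 - p)))
    theta += 0.1 * grad

p = 1.0 / (1.0 + math.exp(-theta))
closed = math.exp(r[1] / tau) / (math.exp(r[0] / tau) + math.exp(r[1] / tau))
print(p, closed)                  # both near softmax(r/tau)
```

As τ → 0 the softmax sharpens toward the greedy action; a larger τ keeps more probability on the weaker action, which is exactly the exploration pressure the bonus is meant to provide.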
10. Actor-Critic PPO and RLHF
Actor-Critic PPO and RLHF is part of the core mathematical path from Markov chains to modern AI agents. The emphasis is on the object definitions and update equations a learner must be able to inspect in code.
10.1 Actor-critic decomposition
Purpose. The actor-critic decomposition focuses on learning the policy and the value function together.
Operational definition.
Policy methods optimize the action distribution directly. This is essential when actions are continuous, structured, or generated token by token.
Worked reading.
The score-function identity converts a derivative of an expectation into an expectation of the score ∇θ log πθ(a|s) times a return-like signal.
Examples:
- REINFORCE.
- actor-critic.
- PPO for RLHF.
Non-examples:
- choosing from a tiny action table.
- behavior cloning without reward feedback.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
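One actor-critic step on a single transition makes the decomposition concrete: the critic does a TD(0) update, and the actor moves its logit along score times TD error. All numbers below are illustrative:

```python
import math

# One actor-critic update on a transition (s, a, r, s').
gamma, alpha_v, alpha_pi = 0.9, 0.5, 0.1
V = {0: 0.2, 1: 1.0}             # critic's value table (illustrative)
theta = 0.0                      # actor logit for action 1 in state 0

s, a, r, s_next = 0, 1, 0.5, 1
p1 = 1.0 / (1.0 + math.exp(-theta))

td_error = r + gamma * V[s_next] - V[s]    # 0.5 + 0.9 - 0.2 = 1.2
V[s] += alpha_v * td_error                 # critic: TD(0) step
theta += alpha_pi * (a - p1) * td_error    # actor: score * TD-error step

print(V[0], theta)
```

The TD error plays both roles at once: it is the critic's regression residual and the actor's (low-variance) stand-in for the return-minus-baseline signal.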
10.2 Generalized advantage estimation
Purpose. Generalized advantage estimation focuses on the bias-variance tradeoff, controlled by λ, in advantage estimates.
Operational definition.
Value functions summarize future consequences. Advantage functions compare an action with the policy's average behavior at the same state.
Worked reading.
Occupancy measures explain why RL gradients weight states by how often the current policy visits them.
Examples:
- λ = 0: the advantage equals the one-step TD error.
- λ = 1: the advantage equals the Monte Carlo return minus V(s).
- intermediate λ: an exponentially weighted sum of TD errors.
Non-examples:
- instant reward only.
- a metric computed on states never reached by the policy.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
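The recursion is short enough to compute by hand on a three-step rollout. The rewards, value estimates, and terminal flag below are illustrative:

```python
# Generalized advantage estimation (GAE) from a short rollout.
gamma, lam = 0.99, 0.95
rewards = [1.0, 0.0, 2.0]
values  = [0.5, 0.8, 1.5]        # V(s_t) for t = 0, 1, 2 (illustrative)
dones   = [False, False, True]   # terminal flag masks the bootstrap

advantages = [0.0] * 3
gae = 0.0
for t in reversed(range(3)):
    v_next = 0.0 if dones[t] else values[t + 1]
    delta = rewards[t] + gamma * v_next - values[t]   # TD error at step t
    gae = delta + gamma * lam * (0.0 if dones[t] else gae)
    advantages[t] = gae

print(advantages)
```

Note how the `dones` mask zeroes both the bootstrap value and the recursion at the episode boundary; forgetting either is the terminal-state failure mode from the checklist.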
10.3 Trust regions and KL control
Purpose. Trust regions and KL control focus on limiting how far each update moves the policy.
Operational definition.
Control algorithms learn how to choose actions, not just how to evaluate a fixed policy.
Worked reading.
SARSA uses the action actually sampled by the behavior policy; Q-learning uses a greedy target and is therefore off-policy.
Examples:
- tabular gridworld control.
- DQN-style value learning.
- epsilon-greedy exploration.
Non-examples:
- estimating V^π for a fixed policy only.
- planning with a perfect model and no samples.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
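The quantity a trust region or KL penalty bounds is easy to compute for categorical action distributions. A minimal sketch with illustrative probabilities:

```python
import math

# KL divergence KL(pi_old || pi_new) between two categorical policies
# at one state; probabilities are illustrative.
pi_old = [0.5, 0.3, 0.2]
pi_new = [0.6, 0.3, 0.1]

kl = sum(p * math.log(p / q) for p, q in zip(pi_old, pi_new))
print(kl)

# A KL-penalized objective would subtract beta * kl from the surrogate;
# raising beta pulls pi_new back toward pi_old.
```

For policies over many states, implementations average this per-state KL under the sampled state distribution and monitor it during training.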
10.4 PPO clipped surrogate objective
Purpose. The PPO clipped surrogate objective focuses on a practical approximation to a trust region.
Operational definition.
Policy methods optimize the action distribution directly. This is essential when actions are continuous, structured, or generated token by token.
Worked reading.
The score-function identity converts a derivative of an expectation into an expectation of the score ∇θ log πθ(a|s) times a return-like signal.
Examples:
- REINFORCE.
- actor-critic.
- PPO for RLHF.
Non-examples:
- choosing from a tiny action table.
- behavior cloning without reward feedback.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
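The clipped surrogate for a single (state, action) sample is one line of arithmetic: L = min(ratio · A, clip(ratio, 1 − ε, 1 + ε) · A). The ratios and advantages below are illustrative:

```python
# PPO clipped surrogate for one sample; ratio = pi_new(a|s) / pi_old(a|s).
eps = 0.2

def ppo_term(ratio, adv):
    clipped = max(1 - eps, min(1 + eps, ratio))
    return min(ratio * adv, clipped * adv)

# Positive advantage: gains from pushing the ratio above 1+eps are clipped.
print(ppo_term(1.5, 2.0))   # 1.2 * 2.0 = 2.4, not 1.5 * 2.0 = 3.0
# Negative advantage: the min keeps the pessimistic (more negative) branch,
# so large ratio moves still get penalized.
print(ppo_term(0.5, -2.0))  # min(-1.0, 0.8 * -2.0) = -1.6
```

The `min` makes the objective a pessimistic lower bound: the update never claims more improvement than the clipped ratio allows, which is the practical stand-in for a trust-region constraint.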
10.5 Reward modeling and preference optimization
Purpose. Reward modeling and preference optimization focus on the RLHF bridge from human preference data to language-model policies.
Operational definition.
This concept is part of the bridge from sequential probability models to practical RL algorithms.
Worked reading.
The key habit is to name the state, action, reward, transition, policy, and value object before writing an update.
Examples:
- small tabular MDPs.
- neural value functions.
- preference-optimized language policies.
Non-examples:
- static regression.
- uncontrolled simulation traces.
The derivation habits and implementation checks from Section 9.1 apply here unchanged.
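A common choice for training the reward model on pairwise preferences is the Bradley-Terry loss: the probability that the chosen response beats the rejected one is sigmoid of the score difference, and the loss is the negative log-likelihood. The reward-model scores below are illustrative:

```python
import math

# Bradley-Terry preference loss for one (chosen, rejected) response pair.
r_chosen, r_rejected = 1.2, 0.3      # illustrative reward-model scores

p_chosen = 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))
loss = -math.log(p_chosen)
print(p_chosen, loss)                # higher score margin -> lower loss
```

Only the score difference matters, which is why reward-model outputs are defined up to an additive constant, and why downstream policy optimization needs KL control to stay inside the region where those scores are trustworthy.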
11. Common Mistakes
| # | Mistake | Why it is wrong | Fix |
|---|---|---|---|
| 1 | Confusing rewards with returns | A reward is local; a return is accumulated over time. | Always write the return G_t = Σ_k γ^k r_{t+k+1} before deriving an update target. |
| 2 | Ignoring the data distribution shift | Changing the policy changes which states are visited. | Name whether the data are on-policy or off-policy. |
| 3 | Treating Bellman equations as supervised labels | Bellman targets contain estimates and bootstrapping. | Track target networks, stop-gradient choices, or tabular guarantees. |
| 4 | Using Q-learning for every problem | Continuous action spaces and large state spaces often need different methods. | Choose value-based, policy-gradient, or actor-critic methods from the action and data structure. |
| 5 | Forgetting exploration | A greedy policy may never see better actions. | Use explicit exploration or uncertainty-aware data collection. |
| 6 | Trusting average reward without variance | RL estimates are noisy and seed-sensitive. | Report confidence intervals, seeds, and learning curves. |
| 7 | Mixing offline and online assumptions | Logged data may not cover actions needed by the learned policy. | Check coverage and use conservative offline RL when needed. |
| 8 | Over-optimizing a reward model | The policy may exploit reward model errors. | Use KL control, held-out preference evaluation, and adversarial tests. |
| 9 | Calling PPO a magic stabilizer | PPO still depends on advantage quality, clipping, normalization, and KL monitoring. | Audit ratios, advantages, entropy, KL, and value loss. |
| 10 | Forgetting terminal states | Bootstrapping through terminal states creates false future value. | Mask terminal transitions in TD targets. |
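Mistake 10 has a one-line fix in most deep-RL codebases: multiply the bootstrap term by a done mask. A minimal sketch with made-up rewards, critic values, and done flags:

```python
gamma = 0.99
# Illustrative batch of transitions; the numbers are invented.
rewards   = [1.0, 0.5, 2.0]
next_vals = [3.0, 4.0, 5.0]   # V(s') from some critic
dones     = [0, 0, 1]         # 1 marks a terminal transition

# TD targets: (1 - done) masks the bootstrap so no false future value
# is propagated beyond the end of the episode.
targets = [r + gamma * (1 - d) * v
           for r, v, d in zip(rewards, next_vals, dones)]
print(targets)   # the terminal target is just the reward, 2.0
```

Without the `(1 - d)` factor, the last target would be 2.0 + 0.99 * 5.0 = 6.95, crediting the agent with value from beyond the episode boundary.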
12. Exercises
- (*) Solve a Bellman policy-evaluation system for a three-state chain.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (*) Run three value-iteration backups and track the sup-norm change.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (*) Compute a Monte Carlo return and compare it with a TD target.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (**) Apply one SARSA update and one Q-learning update to the same transition.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (**) Compute an epsilon-greedy action distribution.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (**) Estimate a policy-gradient direction for a two-action softmax policy.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (**) Compute generalized advantages from rewards and value estimates.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (***) Evaluate the PPO clipped surrogate for positive and negative advantages.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (***) Fit a Bradley-Terry reward-model probability for preference data.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
- (***) Compute a DPO-style preference loss and explain the KL-control intuition.
  - (a) Name the random variables.
  - (b) Write the target equation.
  - (c) Compute the numeric result.
  - (d) Explain what the result means for an agent or LLM policy.
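As a starting point for the PPO clipped-surrogate exercise, here is a minimal sketch of the per-sample objective min(r A, clip(r, 1-eps, 1+eps) A); the ratios and advantages are made-up numbers:

```python
def ppo_clip(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped = max(1 - eps, min(ratio, 1 + eps))
    return min(ratio * advantage, clipped * advantage)

# Positive advantage: the gain is capped once the ratio exceeds 1 + eps.
print(ppo_clip(1.5, +2.0))   # 1.2 * 2.0 = 2.4, not 1.5 * 2.0 = 3.0
# Negative advantage: the penalty is NOT capped when the ratio grows...
print(ppo_clip(1.5, -2.0))   # min(-3.0, -2.4) = -3.0
# ...but improvement from shrinking the ratio below 1 - eps is capped.
print(ppo_clip(0.5, -2.0))   # min(-1.0, -1.6) = -1.6
```

Working through both signs by hand makes the asymmetry of the clip visible: it limits how much the objective can reward a large policy change, not how much it can punish one.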
13. Why This Matters for AI
| Concept | AI impact |
|---|---|
| MDPs | Provide the mathematical contract for agents, simulators, robotics tasks, games, and dialogue policies. |
| Bellman equations | Turn long-horizon objectives into one-step recursive learning targets. |
| TD learning | Explains bootstrapping, credit assignment, and value-learning targets used in deep RL. |
| Q-learning | Powers value-based control and explains why replay and target networks stabilize DQN-style systems. |
| Policy gradients | Give the gradient estimator behind REINFORCE, actor-critic, PPO, and many RLHF implementations. |
| Advantage estimation | Reduces variance and makes policy updates more sample-efficient. |
| KL regularization | Keeps a learned policy close to a reference model in RLHF and safe fine-tuning. |
| Reward modeling | Connects human preferences to scalar optimization, while exposing reward hacking risks. |
14. Conceptual Bridge
The backward bridge is probability and Markov chains. A Markov chain has transitions but no actions. An MDP adds actions, rewards, and optimization. Once actions enter the process, the learner must reason about both inference and control.
The forward bridge is alignment and interactive systems. RLHF, DPO, preference models, online experiments, and agentic tool-use loops all reuse RL ideas: reward signals, policies, KL constraints, distribution shift, and evaluation under feedback.
+------------------+      +-------------------------+      +---------------------+
|  Markov chains   | ---> | Markov decision process | ---> | policy optimization |
| transitions only |      |   actions and rewards   |      | RLHF, agents, games |
+------------------+      +-------------------------+      +---------------------+
The most important practical lesson is that RL is not just an optimizer. It is a complete data-generating loop. When a policy changes, the future dataset changes. That is why careful RL work always audits rewards, policies, value estimates, exploration, and evaluation together.
References
- Sutton and Barto. Reinforcement Learning: An Introduction, 2nd ed. https://incompleteideas.net/book/the-book.html
- OpenAI. Spinning Up: Key Concepts in RL. https://spinningup.openai.com/en/latest/spinningup/rl_intro.html
- OpenAI. Spinning Up: Policy Optimization. https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html
- Mnih et al. Human-level control through deep reinforcement learning. https://www.nature.com/articles/nature14236
- Schulman et al. Proximal Policy Optimization Algorithms. https://arxiv.org/abs/1707.06347
- Christiano et al. Deep reinforcement learning from human preferences. https://arxiv.org/abs/1706.03741
- Rafailov et al. Direct Preference Optimization. https://arxiv.org/abs/2305.18290