"Backpropagation is an algorithm for computing gradients efficiently in a computational graph. At its heart, it is nothing more than the chain rule of calculus applied repeatedly."
- Goodfellow, Bengio & Courville, Deep Learning (2016)
Overview
The chain rule is the single most important theorem in applied mathematics for machine learning. Every time a neural network is trained - whether a two-layer MLP or a 70-billion parameter language model - the update step depends on computing gradients of a scalar loss with respect to millions or billions of parameters. That computation is backpropagation, and backpropagation is nothing more than the multivariate chain rule applied systematically to a computational graph.
This section develops the chain rule from first principles and then derives backpropagation rigorously. We prove every formula from scratch: the general Jacobian chain rule, the VJP (vector-Jacobian product) form that makes backprop efficient, the recurrence relation for the error signal $\delta^{(l)}$, and the gradient formulae for every layer type that appears in modern transformers. We also analyse the pathologies - vanishing and exploding gradients - with mathematical precision, and derive the interventions that cure them: careful initialisation, residual connections, and normalisation layers.
The connection to automatic differentiation (05) is previewed but not developed here: this section establishes what is computed (the chain rule gradient) while 05 establishes how it is computed mechanically by an AD engine.
Prerequisites
- Partial derivatives, gradient, directional derivative - 01 Partial Derivatives and Gradients
- Jacobian matrix, Fréchet derivative, VJP/JVP - 02 Jacobians and Hessians
- Single-variable chain rule, Taylor series - 04 Calculus Fundamentals
- Matrix multiplication, transpose - 02 Linear Algebra Basics
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Chain rule verification, backprop from scratch, all layer gradients, vanishing/exploding gradient simulation, checkpointing |
| exercises.ipynb | 10 graded exercises, from chain-rule basics through the LoRA backward pass and BPTT |
Learning Objectives
After completing this section, you will be able to:
- State and prove the general chain rule via the Fréchet derivative
- Explain why the VJP form is the fundamental equation of backpropagation
- Define a computational graph, perform forward and backward passes, handle gradient accumulation at branches
- Derive the backprop recurrence from first principles
- Derive gradients for linear layers, activations, softmax+CE (fused), LayerNorm, and scaled dot-product attention
- Analyse vanishing/exploding gradients and derive Xavier/He initialisation from variance propagation
- Explain residual connections as gradient highways via the identity term in $\prod_k (I + J_{F^{(k)}})$
- Implement gradient checkpointing and explain the memory / extra compute tradeoff
- Differentiate through discrete operations using the straight-through estimator and REINFORCE
- Trace the full backward pass through a transformer layer (attention + MLP + LayerNorm + residuals)
Table of Contents
- 1. Intuition
- 2. The Multivariate Chain Rule - Full Theory
- 3. Computation Graphs
- 4. Backpropagation - Complete Derivation
- 5. Gradient Derivations for Key ML Operations
- 6. Vanishing and Exploding Gradients
- 7. Memory-Efficient Backpropagation
- 8. Advanced Chain Rule Topics
- 9. Transformer Backpropagation in Depth
- 10. Common Mistakes
- 11. Exercises
- 12. Why This Matters for AI (2026 Perspective)
- Conceptual Bridge
1. Intuition
1.1 From Single-Variable to Multivariate Chain Rule
The single-variable chain rule says: if $y = f(u)$ and $u = g(x)$, then

$$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx} = f'(g(x)) \, g'(x)$$

The intuition is that rates of change compose multiplicatively. If $f$ triples its input's effect and $g$ doubles its input's effect, then $f \circ g$ multiplies by six.

The multivariate generalisation replaces scalars with vectors and scalar derivatives with Jacobian matrices. If $g: \mathbb{R}^p \to \mathbb{R}^n$ and $f: \mathbb{R}^n \to \mathbb{R}^m$, then

$$J_{f \circ g}(x) = J_f(g(x)) \, J_g(x)$$
The product is now matrix multiplication. This is not a different rule - it is the same rule, stated in the correct language for vector-valued functions. The single-variable rule is the special case where Jacobians degenerate to scalars.
SCALAR CHAIN RULE vs JACOBIAN CHAIN RULE

Scalar:  x -> g -> u -> f -> y
         R         R         R
         dy/dx = (dy/du)(du/dx)        [scalar multiplication]

Vector:  x -> g -> u -> f -> y
         R^p       R^n       R^m
         J_{fog} = J_f * J_g           [matrix multiplication]
         (m x p) = (m x n) * (n x p)

The dimensions work out exactly like matrix multiplication.
The chain rule IS matrix multiplication for Jacobians.
What makes the multivariate version non-trivial is that $J_f$ must be evaluated at $u = g(x)$ - the output of the inner function - not at $x$ itself. This point-dependence is where the local linear approximation lives: the Jacobian $J_g(x)$ is the best linear approximation to $g$ at the specific point $x$, and $J_f(g(x))$ is the best linear approximation to $f$ at $g(x)$.
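To make the composition concrete, here is a small numeric check of $J_{f \circ g}(x) = J_f(g(x)) \, J_g(x)$ using centred finite differences; the particular $f$ and $g$ below are arbitrary illustrative choices, not functions from the text.

```python
import numpy as np

def g(x):  # illustrative inner function g: R^2 -> R^3
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def f(u):  # illustrative outer function f: R^3 -> R^2
    return np.array([u[0] + u[1], u[1] * u[2]])

def jacobian(func, x, h=1e-6):
    """Finite-difference Jacobian: one column per input dimension."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (func(x + e) - func(x - e)) / (2 * h)
    return J

x = np.array([0.7, -1.3])
J_direct = jacobian(lambda t: f(g(t)), x)      # Jacobian of the composition
J_chain = jacobian(f, g(x)) @ jacobian(g, x)   # J_f evaluated at g(x), times J_g
assert np.allclose(J_direct, J_chain, atol=1e-5)
```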
1.2 Backpropagation as Iterated Chain Rule
A deep neural network is a long composition of functions:

$$\hat{y} = f^{(L)}\big(f^{(L-1)}(\cdots f^{(1)}(x) \cdots)\big), \qquad \mathcal{L} = \ell(\hat{y})$$

where each $f^{(l)}$ is a layer (linear + activation), $\ell$ is the loss function, and $\mathcal{L}$ is a scalar. Computing $\nabla_{\theta^{(l)}} \mathcal{L}$ - the gradient of the loss with respect to layer $l$'s parameters - requires applying the chain rule through every layer from $L$ down to $l$.

The chain rule gives:

$$\nabla_{W^{(l)}} \mathcal{L} = \delta^{(l)} (a^{(l-1)})^\top$$

where $\delta^{(l)} = \partial \mathcal{L} / \partial z^{(l)}$ is the error signal at layer $l$, and it satisfies the backpropagation recurrence:

$$\delta^{(l)} = \left( (W^{(l+1)})^\top \delta^{(l+1)} \right) \odot \sigma'(z^{(l)})$$

This recurrence propagates the error signal backward from layer $L$ to layer 1 - hence "backpropagation." At each step, we multiply by the transposed Jacobian of the next layer. The entire algorithm is:
- Forward pass: compute and store $z^{(l)}, a^{(l)}$ for $l = 1, \dots, L$
- Initialise: $\delta^{(L)} = \nabla_{z^{(L)}} \mathcal{L}$
- Backward pass: compute $\delta^{(l)}$ for $l = L-1, \dots, 1$ using the recurrence
- Gradients: extract $\nabla_{W^{(l)}} \mathcal{L} = \delta^{(l)} (a^{(l-1)})^\top$ and $\nabla_{b^{(l)}} \mathcal{L} = \delta^{(l)}$

Backpropagation is not a fundamentally different concept from the chain rule. It is the chain rule, applied efficiently by sharing intermediate computations (the error signals $\delta^{(l)}$) across all parameters in a layer.
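The four-step algorithm fits in a few lines of NumPy. The sketch below is a minimal illustration under assumed choices (tanh activations, MSE loss, arbitrary toy layer widths), not a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 1]                 # assumed toy layer widths
Ws = [rng.normal(0, 1 / np.sqrt(m), (n, m)) for m, n in zip(sizes, sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]
x, y = rng.normal(size=4), np.array([0.5])

# 1. Forward pass: compute and store z^(l), a^(l)
a, zs, acts = x, [], [x]
for W, b in zip(Ws, bs):
    z = W @ a + b
    a = np.tanh(z)
    zs.append(z)
    acts.append(a)

# 2. Initialise delta^(L) for loss 1/2 ||a^(L) - y||^2 with tanh output
delta = (acts[-1] - y) * (1 - np.tanh(zs[-1]) ** 2)

# 3.-4. Backward pass: apply the recurrence, extracting gradients per layer
grads = []
for l in reversed(range(len(Ws))):
    grads.append((np.outer(delta, acts[l]), delta.copy()))  # (dL/dW, dL/db)
    if l > 0:
        delta = (Ws[l].T @ delta) * (1 - np.tanh(zs[l - 1]) ** 2)
grads.reverse()
```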
1.3 Historical Context
| Year | Contributor | Development |
|---|---|---|
| 1676 | Leibniz | Differential calculus; first statement of the single-variable chain rule |
| 1755 | Euler | Extended to multiple variables |
| 1960 | Kelley | Gradient computation for optimal control (independent discovery of backprop concept) |
| 1970 | Linnainmaa | First complete description of reverse-mode automatic differentiation for computing gradients |
| 1974 | Werbos | First application to neural networks in his PhD thesis |
| 1986 | Rumelhart, Hinton, Williams | Popularised backpropagation in "Learning representations by back-propagating errors" - the paper that launched the neural network revolution |
| 1989 | LeCun | Applied backprop to convolutional networks for handwritten digit recognition |
| 2012 | Krizhevsky, Sutskever, Hinton | AlexNet demonstrated GPU-accelerated backprop at scale - kicked off the deep learning era |
| 2015 | Google Brain, Facebook AI | TensorFlow (2015) and PyTorch (2016): automatic differentiation engines that compute backprop automatically |
| 2017 | Vaswani et al. | Transformer: backprop through multi-head attention; the architecture underlying GPT, BERT, LLaMA |
| 2021 | Hu et al. (LoRA) | Parameter-efficient fine-tuning by limiting gradient flow to low-rank subspaces |
| 2022 | Dao et al. (FlashAttention) | Recompute activations during backward to avoid materialising the attention matrix |
1.4 Why Backprop Defines Modern AI
Every large language model, image classifier, and diffusion model trained today relies on backpropagation for every gradient update. The scale is staggering: training GPT-4 reportedly required on the order of $10^{25}$ floating-point operations, the vast majority of which are forward and backward passes through the transformer network.

Backprop enables gradient-based learning at scale because its cost is proportional to the cost of the forward pass - typically $O(N)$ where $N$ is the number of parameters. Alternative approaches (finite differences, evolution strategies, zeroth-order methods) are orders of magnitude more expensive.
Three properties make backprop indispensable:

- Efficiency: One backward pass computes $\partial \mathcal{L} / \partial \theta_i$ for all parameters simultaneously. Finite differences would need $N + 1$ forward passes.
- Exactness: Unlike finite differences, backprop computes the exact gradient (up to floating-point precision), not an approximation.
- Composability: Any differentiable function composed of differentiable primitives has an automatically computable gradient. This is why PyTorch/JAX can differentiate arbitrary Python code that uses differentiable operations.
For AI in 2026: The gradient is the workhorse of every training algorithm: SGD, Adam, AdaGrad, Muon, SOAP - all are gradient-based. Fine-tuning (LoRA, QLoRA, DoRA), RLHF (PPO, DPO, GRPO), distillation, and continual learning all depend on backprop. Even methods that appear gradient-free (evolutionary strategies, black-box optimisation) are often used because they approximate the gradient in settings where backprop is unavailable (non-differentiable objectives, external APIs).
2. The Multivariate Chain Rule - Full Theory
2.1 The General Chain Rule - Proof
We prove the chain rule using the Fréchet derivative from 02. Recall:

Definition. $f: \mathbb{R}^n \to \mathbb{R}^m$ is Fréchet differentiable at $x$ if there exists a linear map $Df(x): \mathbb{R}^n \to \mathbb{R}^m$ such that

$$\lim_{h \to 0} \frac{\| f(x + h) - f(x) - Df(x)\,h \|}{\|h\|} = 0$$

The matrix of $Df(x)$ is the Jacobian $J_f(x)$.

Theorem (Chain Rule). Let $g: \mathbb{R}^p \to \mathbb{R}^n$ be Fréchet differentiable at $x$, and $f: \mathbb{R}^n \to \mathbb{R}^m$ be Fréchet differentiable at $u = g(x)$. Then $f \circ g$ is Fréchet differentiable at $x$ and

$$D(f \circ g)(x) = Df(g(x)) \circ Dg(x), \qquad J_{f \circ g}(x) = J_f(g(x)) \, J_g(x)$$
Proof. Let $A = Df(g(x))$ and $B = Dg(x)$. We need to show that $AB$ is the Fréchet derivative of $f \circ g$ at $x$. Write:

$$f(g(x + h)) - f(g(x)) - ABh$$

Let $k(h) = g(x + h) - g(x)$. Since $g$ is Fréchet differentiable:

$$k(h) = Bh + r_g(h), \qquad \|r_g(h)\| = o(\|h\|)$$

Now apply Fréchet differentiability of $f$ at $u = g(x)$:

$$f(u + k) - f(u) = Ak + r_f(k), \qquad \|r_f(k)\| = o(\|k\|)$$

Substituting $k = k(h)$:

$$f(g(x + h)) - f(g(x)) = A\big(Bh + r_g(h)\big) + r_f(k(h)) = ABh + A\,r_g(h) + r_f(k(h))$$

We show the remainder is $o(\|h\|)$:

- First term: $\|A\,r_g(h)\| \le \|A\| \, \|r_g(h)\| = o(\|h\|)$.
- Second term: Since $\|k(h)\| \le \|B\|\,\|h\| + o(\|h\|) = O(\|h\|)$, we have $\|r_f(k(h))\| = o(\|k(h)\|) = o(\|h\|)$.

Therefore $f \circ g$ is differentiable at $x$ with derivative $AB$. $\blacksquare$
When the chain rule fails. The chain rule requires both $g$ at $x$ and $f$ at $g(x)$ to be Fréchet differentiable. If either fails - for example at a ReLU kink where the input is exactly $0$ - the classical chain rule does not apply. In practice, these measure-zero sets are handled by choosing a subgradient (any element of the Clarke subdifferential), which is what deep learning frameworks do automatically.
2.2 Three Cases in Increasing Generality
Case 1: Scalar composition. $y = f(g(x))$ where $f, g: \mathbb{R} \to \mathbb{R}$. Jacobians are $1 \times 1$ matrices = scalars, so $\frac{dy}{dx} = f'(g(x)) \, g'(x)$.

Case 2: Scalar loss of a vector function. $\mathcal{L} = \ell(f(x))$, where $f: \mathbb{R}^n \to \mathbb{R}^m$ and $\ell: \mathbb{R}^m \to \mathbb{R}$. Jacobians: $J_f \in \mathbb{R}^{m \times n}$ and $J_\ell = (\nabla \ell)^\top \in \mathbb{R}^{1 \times m}$ (a row vector). So:

$$J_{\ell \circ f}(x) = \big(\nabla \ell(f(x))\big)^\top J_f(x)$$

Taking the transpose: $\nabla_x \mathcal{L} = J_f(x)^\top \, \nabla \ell(f(x))$ - the gradient of $\mathcal{L}$ with respect to $x$ is the transposed Jacobian of $f$ times the gradient of $\ell$. This is the VJP equation, the core of backprop.

Case 3: Vector composition. $h = f \circ g$ with $g: \mathbb{R}^p \to \mathbb{R}^n$ and $f: \mathbb{R}^n \to \mathbb{R}^m$. The most general case; Jacobians are full matrices and the chain rule is full matrix multiplication:

$$J_{f \circ g}(x) = J_f(g(x)) \, J_g(x)$$

The dimensions verify: $(m \times p) = (m \times n)(n \times p)$. The "inner dimension" $n$ (the dimension of the intermediate space $\mathbb{R}^n$) cancels in the product, exactly as in matrix multiplication.
2.3 The VJP Form - Foundation of Backprop
Definition (VJP). For $f: \mathbb{R}^n \to \mathbb{R}^m$ and a "cotangent" vector $v \in \mathbb{R}^m$, the vector-Jacobian product is:

$$\text{vjp}_f(x, v) = J_f(x)^\top v \in \mathbb{R}^n$$

Why this is the right primitive for backprop. For a scalar loss $\ell$ composed with $f$:

$$\nabla_x (\ell \circ f)(x) = J_f(x)^\top \, \nabla \ell(f(x)) = \text{vjp}_f\big(x, \nabla \ell(f(x))\big)$$

The gradient of the composed function with respect to the input is the VJP of the inner function, with the cotangent being the gradient of the outer function.

The backprop recursion is a chain of VJPs. For $\mathcal{L} = \ell\big(f^{(L)}(\cdots f^{(1)}(x) \cdots)\big)$:

$$\nabla_x \mathcal{L} = J_1^\top \Big( J_2^\top \big( \cdots (J_L^\top \, \nabla \ell) \cdots \big) \Big)$$

Starting from $\nabla \ell$ and applying VJPs from right to left computes all intermediate gradients.
Cost comparison. Computing $\nabla_x \mathcal{L}$ for $f: \mathbb{R}^n \to \mathbb{R}$:

- JVP (forward mode): requires $n$ passes (one per input dimension). Cost: $O(n \cdot T_f)$.
- VJP (reverse mode): requires 1 pass. Cost: $O(T_f)$.

For $n \sim 10^9$ parameters, reverse mode is roughly $10^9$ times cheaper. This asymmetry is why all gradient-based deep learning uses reverse mode (backprop).
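PyTorch exposes both primitives in torch.autograd.functional, which makes the asymmetry easy to see: one VJP call returns the whole gradient of a scalar function, while one JVP call returns only a single directional derivative. The function below is an arbitrary toy choice:

```python
import torch
from torch.autograd.functional import jvp, vjp

def f(x):
    return (x ** 2).sum()                     # f: R^n -> R, scalar output

x = torch.randn(1000)
_, grad = vjp(f, x, torch.tensor(1.0))        # one reverse pass: the full gradient
_, dderiv = jvp(f, x, torch.ones_like(x))     # one forward pass: one directional derivative
assert torch.allclose(grad, 2 * x)            # analytic gradient of sum(x^2)
assert torch.allclose(dderiv, (2 * x).sum())  # derivative along the all-ones direction
```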
2.4 Long Chains and Telescoping Products
For a depth-$L$ network $x^{(L)} = f^{(L)}\big(f^{(L-1)}(\cdots f^{(1)}(x^{(0)}))\big)$, the Jacobian of $x^{(L)}$ with respect to $x^{(0)}$ is:

$$J = J_L \, J_{L-1} \cdots J_1$$

This is a product of $L$ matrices. The spectral norm of the product satisfies:

$$\|J\|_2 \le \prod_{l=1}^{L} \|J_l\|_2$$

If each $\|J_l\|_2 \approx \rho$, then $\|J\|_2 \lesssim \rho^L$. For $\rho < 1$, the gradient vanishes exponentially; for $\rho > 1$, it explodes. This is the mathematical source of the vanishing/exploding gradient problem (6).

Efficient computation: reverse order. In the forward direction, we compute $x^{(1)}, x^{(2)}, \dots$ left to right. In the backward direction, we compute the error signals right to left, reusing stored activations. The key observation: at step $l$, we only need $\delta^{(l+1)}$ and the stored activation $a^{(l-1)}$ (or $z^{(l)}$) - we do not need to recompute anything from scratch.
2.5 Differentiating Through Discrete Operations
Some operations in neural networks are discontinuous or discrete: argmax (in beam search), rounding/quantisation (in QAT), sampling (in VAEs and RL). The chain rule does not directly apply.
Straight-Through Estimator (STE). For a quantisation function $q(x) = \text{round}(x)$ (round to nearest integer), the derivative is $q'(x) = 0$ almost everywhere, giving zero gradient. The STE replaces the "true" zero gradient with 1 during the backward pass:

$$\frac{\partial q}{\partial x}\bigg|_{\text{STE}} := 1$$

In code: y = round(x).detach() + x - x.detach() - the x - x.detach() term is zero in the forward pass (the values cancel) but contributes an identity gradient in the backward pass. STE is used in VQ-VAE, binary neural networks, and quantisation-aware training.
REINFORCE (score function estimator). For a stochastic node $z \sim p_\theta(z)$ and loss $L(z)$, the gradient of the expected loss with respect to $\theta$ is:

$$\nabla_\theta \, \mathbb{E}_{z \sim p_\theta}[L(z)] = \mathbb{E}_{z \sim p_\theta}\big[ L(z) \, \nabla_\theta \log p_\theta(z) \big]$$

This allows gradient estimation without differentiating through the sampling step. Used in RLHF (PPO, GRPO) and variational inference. High variance; mitigated by baselines.
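Both estimators are a few lines in PyTorch. The sketch below uses an algebraically equivalent form of the STE trick and a Gaussian node whose true gradient is known analytically ($\frac{d}{d\mu}\mathbb{E}[z^2] = 2\mu$ for $z \sim \mathcal{N}(\mu, 1)$), so the Monte-Carlo estimate can be sanity-checked; all values are illustrative:

```python
import torch

# Straight-through estimator: forward value round(x), backward gradient 1.
x = torch.tensor([0.3, 1.7], requires_grad=True)
y = x + (torch.round(x) - x).detach()     # equivalent to the form in the text
y.sum().backward()
assert torch.equal(x.grad, torch.ones_like(x))

# REINFORCE: estimate d/dmu E_{z~N(mu,1)}[z^2] = 2*mu via the score function.
mu = torch.tensor(1.5)
z = mu + torch.randn(100_000)             # samples from N(mu, 1)
score = z - mu                            # d/dmu log N(z; mu, 1)
grad_est = (z ** 2 * score).mean()        # unbiased but high-variance estimate
print(f"{grad_est:.3f} vs analytic {2 * mu:.3f}")
```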
3. Computation Graphs
3.1 Formal DAG Definition
A computation graph is a directed acyclic graph encoding how scalar or tensor quantities depend on one another.
Nodes partition into three types:
| Type | Symbol | Role |
|---|---|---|
| Input nodes | $v_1, \dots, v_k$ | Hold model inputs and parameters; no incoming edges |
| Intermediate nodes | $v_{k+1}, \dots, v_{N-1}$ | Hold computed activations; receive edges from their operands |
| Output node | $v_N$ | Holds the scalar loss $L$; required to be scalar for standard backprop |

Edges encode data dependency: $(u \to v) \in E$ iff $v = \phi_v(\dots, u, \dots)$ for some primitive $\phi_v$. Each edge carries an implicit local Jacobian $\partial v / \partial u$.
Primitive operations are the atomic building blocks with known local gradients:
PRIMITIVE OPERATIONS AND THEIR LOCAL GRADIENTS

Operation        Forward                    Local gradient (wrt input)
z = x + y        z = x + y                  dz/dx = 1,  dz/dy = 1
z = x * y        z = xy                     dz/dx = y,  dz/dy = x
z = exp(x)       z = e^x                    dz/dx = e^x
z = log(x)       z = ln x                   dz/dx = 1/x
z = relu(x)      z = max(0, x)              dz/dx = [x > 0]   (a.e.)
z = W x + b      z = Wx + b                 dz/dW = x (as outer product), dz/dx = W
z = softmax(x)   z_i = e^{x_i} / sum_j e^{x_j}   J = diag(p) - p p^T   (see 02)

Every deep learning framework maintains a lookup table of these
primitives together with their vjp implementations.
Topological ordering - a linear ordering of $V$ such that for every edge $(u, v)$, $u$ appears before $v$ in the ordering. A topological order exists iff $G$ is acyclic (Kahn's algorithm, 1962). Both the forward pass and the backward pass respect topological order (the latter in reverse).
For AI: Every modern deep learning framework (PyTorch, JAX, TensorFlow) represents a neural network as a computation graph. PyTorch builds the graph dynamically during the forward pass via the autograd tape; JAX traces the graph statically via XLA compilation.
3.2 Forward Pass - Value Propagation
The forward pass evaluates all node values in topological order, caching intermediates required by the backward pass.
Algorithm (Forward Pass):
Input:  graph G = (V, E), input values {x_1, ..., x_k}
Output: loss value v_N, cache of intermediates

For v in topological_order(G):
    if v is an input node:
        cache[v] = x_v                                  (given)
    else:
        cache[v] = phi_v(cache[u_1], ..., cache[u_p])
        where u_1, ..., u_p = parents(v)
return cache[v_N]
Memory cost of a naive forward pass: Caching all intermediates for backprop costs $O(N)$ memory, where $N$ is the number of nodes. For a transformer with $L$ layers and activations of size $B \times T \times d$ (batch, sequence, hidden), this is approximately $O(L \cdot B \cdot T \cdot d)$ values.
This is why gradient checkpointing (7) is essential for large models.
What gets cached? A memory-optimal forward pass only caches values that appear in at least one local gradient formula. For a linear layer $z = Wx + b$, the backward needs $x$ (to compute $\bar{W} = \bar{z}\,x^\top$) but not $b$ (already accumulated into the output).
3.3 Backward Pass - Gradient Accumulation
The backward pass evaluates adjoint values for every node, in reverse topological order.
Define the adjoint of node $v$ as:

$$\bar{v} := \frac{\partial L}{\partial v}$$

where we treat $v$ as a scalar intermediate (extending to tensors componentwise).

Initialisation: $\bar{v}_N = 1$ (the loss node).

Backward recurrence: For a node $u$ with children (successors) $v_1, \dots, v_k$ - nodes that depend on $u$:

$$\bar{u} = \sum_{i=1}^{k} \bar{v}_i \, \frac{\partial v_i}{\partial u}$$

This is exactly the chain rule applied in reverse.
Algorithm (Backward Pass):
Input:  graph G, cache from forward pass
Output: adjoint[v] = dL/dv for all v in V

adjoint[v_N] <- 1
For v in reverse_topological_order(G):
    For each parent u of v:
        adjoint[u] += adjoint[v] * (dv/du evaluated from cache)
                      # i.e. local_vjp(v, u, adjoint[v])
return {adjoint[u] : u is a parameter node}
The key observation: each edge $(u \to v)$ requires only:
- The cached forward value at $u$ (for evaluating the local gradient formula)
- The downstream adjoint $\bar{v}$ (for the VJP multiplication)
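The whole forward/backward machinery fits in a toy reverse-mode tape. The sketch below supports only add and mul (a deliberate simplification; real frameworks dispatch over a primitive table like the one in 3.1) and exploits the fact that creation order is already a topological order:

```python
class Var:
    """Toy autograd variable: records parents and local partials on a global tape."""
    tape = []

    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
        Var.tape.append(self)

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value, [(self, other.value), (other, self.value)])

def backward(loss):
    loss.grad = 1.0                            # seed the output adjoint
    for node in reversed(Var.tape):            # reverse topological order
        for parent, local in node.parents:
            parent.grad += node.grad * local   # += handles fan-out correctly

x, y = Var(2.0), Var(3.0)
L = x * y + x                                  # dL/dx = y + 1 = 4, dL/dy = x = 2
backward(L)
assert (x.grad, y.grad) == (4.0, 2.0)
```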
3.4 Gradient Accumulation at Branching Nodes
A fan-out node $u$ has multiple children $v_1, \dots, v_k$. The correct gradient is the sum of contributions:

$$\bar{u} = \sum_{i=1}^{k} \bar{v}_i \, \frac{\partial v_i}{\partial u}$$

Proof: By the total derivative, $\frac{\partial L}{\partial u} = \sum_{i=1}^{k} \frac{\partial L}{\partial v_i} \frac{\partial v_i}{\partial u}$, since $L$ depends on $u$ only through the $v_i$.
Example - residual connection:
RESIDUAL BRANCH: u feeds into both F(u) and the skip path

        u
       / \
      /   \
   F(u)    \    <- identity skip
      \    /
       \  /
   z = F(u) + u

Forward:  z = F(u) + u
Backward: u_bar = J_F(u)^T z_bar + z_bar * 1
                = J_F(u)^T z_bar + z_bar

The identity skip guarantees a gradient highway:
even if J_F(u) ~= 0 (saturated layer), z_bar flows back unchanged.
This is the deep reason residual networks (He et al., 2016) solved the vanishing gradient problem: the skip connection creates a constant identity term in the backward accumulation, guaranteeing that the upstream gradient reaches earlier layers undiminished.
3.5 Dynamic vs Static Graphs
Two design philosophies produce different tradeoffs:
DYNAMIC GRAPHS (PyTorch eager mode) STATIC GRAPHS (JAX jit / TF graph)
Graph built anew each forward pass Graph compiled once, reused
Natural Python control flow XLA/CUDA fusion, kernel merging
Easy debugging (print anywhere) Memory-optimal buffer allocation
Variable-length sequences trivial Can export/serve without Python
Graph construction overhead Tracing must handle all branches
Less compiler optimisation Python side-effects invisible
Examples: PyTorch, early Chainer Examples: JAX jit, TF2 tf.function,
ONNX Runtime, TensorRT
For transformers: Most production LLM training uses torch.compile (PyTorch 2.0+) which bridges the two: eager-mode graph construction with TorchDynamo tracing and inductor backend compilation, recovering ~30-50% throughput from kernel fusion.
4. Backpropagation
4.1 Network Notation
Consider a feedforward neural network with $L$ layers. Define:

| Symbol | Meaning |
|---|---|
| $x \in \mathbb{R}^{n_0}$ | Input vector, dimension $n_0$ |
| $W^{(l)} \in \mathbb{R}^{n_l \times n_{l-1}}$ | Weight matrix for layer $l$ |
| $b^{(l)} \in \mathbb{R}^{n_l}$ | Bias vector for layer $l$ |
| $z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}$ | Pre-activation (linear combination) |
| $a^{(l)} = \sigma(z^{(l)})$ | Post-activation (elementwise) |
| $\hat{y} = a^{(L)}$ | Network output |
| $\mathcal{L} = \ell(\hat{y}, y)$ | Scalar loss |

The forward pass computes $z^{(l)}$ and $a^{(l)}$ for $l = 1, \dots, L$.
4.2 Forward Equations
$$a^{(0)} = x, \qquad z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}, \qquad a^{(l)} = \sigma^{(l)}(z^{(l)}), \qquad \hat{y} = a^{(L)}$$

Cache for backward: $\{a^{(l)}\}_{l=0}^{L}$ and $\{z^{(l)}\}_{l=1}^{L}$.
4.3 Output Layer Gradient
For cross-entropy loss with softmax output, the output gradient has the celebrated clean form (derived in 5.3):

$$\delta^{(L)} = \frac{\partial \mathcal{L}}{\partial z^{(L)}} = p - y$$

where $p = \text{softmax}(z^{(L)})$ and $y$ is the one-hot label. This combines the softmax Jacobian with the cross-entropy gradient into a single elegant expression.

For MSE loss ($\mathcal{L} = \frac{1}{2}\|\hat{y} - y\|^2$) with linear output:

$$\delta^{(L)} = \hat{y} - y$$

(Same form, different derivation - a useful coincidence that makes implementation uniform.)
4.4 Backpropagation Recurrence - Proof
Define the error signal:

$$\delta^{(l)} := \frac{\partial \mathcal{L}}{\partial z^{(l)}} \in \mathbb{R}^{n_l}$$

Theorem (Backpropagation Recurrence):

$$\delta^{(l)} = \left( (W^{(l+1)})^\top \delta^{(l+1)} \right) \odot \sigma'(z^{(l)})$$

Proof: Apply the chain rule from $z^{(l)}$ to $\mathcal{L}$ via $a^{(l)}$ and $z^{(l+1)}$:

Step 1: $a^{(l)} = \sigma(z^{(l)})$ elementwise, so $\frac{\partial a^{(l)}_j}{\partial z^{(l)}_j} = \sigma'(z^{(l)}_j)$ and the Jacobian is $\text{diag}\big(\sigma'(z^{(l)})\big)$.

Step 2: $z^{(l+1)} = W^{(l+1)} a^{(l)} + b^{(l+1)}$, so $\frac{\partial z^{(l+1)}_i}{\partial a^{(l)}_j} = W^{(l+1)}_{ij}$.

In matrix form: $\frac{\partial \mathcal{L}}{\partial a^{(l)}} = (W^{(l+1)})^\top \delta^{(l+1)}$.

Step 3: Multiply by the diagonal activation Jacobian:

$$\delta^{(l)} = \text{diag}\big(\sigma'(z^{(l)})\big) \, (W^{(l+1)})^\top \delta^{(l+1)} = \left( (W^{(l+1)})^\top \delta^{(l+1)} \right) \odot \sigma'(z^{(l)})$$

The elementwise ($\odot$) product arises because $\sigma$ is applied elementwise - its Jacobian is diagonal. $\blacksquare$
4.5 Weight and Bias Gradients
Once error signals are computed, parameter gradients follow immediately:

$$\nabla_{W^{(l)}} \mathcal{L} = \delta^{(l)} (a^{(l-1)})^\top, \qquad \nabla_{b^{(l)}} \mathcal{L} = \delta^{(l)}$$

Derivation of the weight gradient: $z^{(l)}_i = \sum_j W^{(l)}_{ij} a^{(l-1)}_j + b^{(l)}_i$, so $\frac{\partial z^{(l)}_i}{\partial W^{(l)}_{ij}} = a^{(l-1)}_j$ and

$$\frac{\partial \mathcal{L}}{\partial W^{(l)}_{ij}} = \delta^{(l)}_i \, a^{(l-1)}_j$$

Collecting over all $(i, j)$: $\nabla_{W^{(l)}} \mathcal{L} = \delta^{(l)} (a^{(l-1)})^\top$.

This is an outer product - the gradient is rank-1 for a single sample. For a batch of samples it averages to higher rank.
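A quick numeric check of the outer-product formula on a single linear layer with MSE loss (shapes and values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
W, a_prev, y = rng.normal(size=(3, 2)), rng.normal(size=2), rng.normal(size=3)

def loss(W):
    return 0.5 * np.sum((W @ a_prev - y) ** 2)

delta = W @ a_prev - y                    # dL/dz for MSE with linear output
grad_analytic = np.outer(delta, a_prev)   # delta · a_prev^T

grad_numeric = np.zeros_like(W)
h = 1e-6
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += h
        Wm[i, j] -= h
        grad_numeric[i, j] = (loss(Wp) - loss(Wm)) / (2 * h)

assert np.allclose(grad_analytic, grad_numeric, atol=1e-6)
```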
4.6 Batched Backpropagation
With a mini-batch of $B$ samples, stack inputs column-wise into a matrix $A^{(0)} = X \in \mathbb{R}^{n_0 \times B}$.

The forward pass becomes:

$$Z^{(l)} = W^{(l)} A^{(l-1)} + b^{(l)} \mathbf{1}_B^\top, \qquad A^{(l)} = \sigma(Z^{(l)})$$

where $\mathbf{1}_B \in \mathbb{R}^B$ is the all-ones vector (broadcasting the bias).

The backward pass produces $\Delta^{(l)} \in \mathbb{R}^{n_l \times B}$ (error signals for all samples simultaneously).

Weight gradient for the batch:

$$\nabla_{W^{(l)}} \mathcal{L} = \frac{1}{B} \Delta^{(l)} (A^{(l-1)})^\top, \qquad \nabla_{b^{(l)}} \mathcal{L} = \frac{1}{B} \Delta^{(l)} \mathbf{1}_B$$
This is a single matrix multiplication, making batched backprop efficient on GPUs which excel at large GEMM (general matrix multiplication) operations.
5. Gradient Derivations for Standard Layers
5.1 Linear Layer
Forward: $y = Wx + b$, where $W \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$.

Upstream gradient: $\bar{y} = \partial L / \partial y \in \mathbb{R}^m$.

VJP (backward):

$$\bar{x} = W^\top \bar{y}, \qquad \bar{W} = \bar{y}\, x^\top, \qquad \bar{b} = \bar{y}$$

Derivation of $\bar{x}$: Each $y_i = \sum_j W_{ij} x_j + b_i$, so $\partial y_i / \partial x_j = W_{ij}$. By VJP: $\bar{x}_j = \sum_i \bar{y}_i W_{ij} = (W^\top \bar{y})_j$.

For AI: In a transformer with hidden dim $d$ and MLP expansion $4d$: the two linear layers in the FFN pass gradients back with $O(B T d^2)$ operations - the same cost as the forward GEMM. Gradient computation for $W$ (batched $\bar{W} = \bar{y} x^\top$) is also a GEMM.
5.2 Activation Functions
For elementwise $a = \sigma(z)$:

$$\bar{z} = \bar{a} \odot \sigma'(z)$$

Gradient formulas for common activations:

| Activation | $\sigma(x)$ | $\sigma'(x)$ | Notes |
|---|---|---|---|
| ReLU | $\max(0, x)$ | $[x > 0]$ | Sparse gradient; "dead neurons" if $z < 0$ always |
| Sigmoid | $1/(1+e^{-x})$ | $\sigma(x)(1-\sigma(x))$ | Saturates; max gradient 0.25 at $x = 0$ |
| Tanh | $\tanh(x)$ | $1 - \tanh^2(x)$ | Saturates; max gradient 1 at $x = 0$ |
| GELU | $x\,\Phi(x)$ | $\Phi(x) + x\,\phi(x)$ | $\Phi$ = Gaussian CDF; smooth at 0 |
| SiLU/Swish | $x\,\sigma(x)$ | $\sigma(x)\big(1 + x(1 - \sigma(x))\big)$ | Used in LLaMA, Mistral |
| Softplus | $\ln(1 + e^x)$ | $\sigma(x)$ | Smooth ReLU; gradient never zero |
GELU (Hendrycks & Gimpel, 2016) is the standard activation in GPT-2/3, BERT, and most modern LLMs. It gates the input by its own probability under a Gaussian, producing richer gradient structure than ReLU.
5.3 Fused Softmax + Cross-Entropy Gradient
Setup: Output logits $z \in \mathbb{R}^K$, softmax probabilities $p = \text{softmax}(z)$, true label $y$, loss $L = -\log p_y$.

Claim:

$$\frac{\partial L}{\partial z} = p - e_y$$

where $e_y$ is the $y$-th standard basis vector.

Proof: Write $L = -z_y + \log \sum_k e^{z_k}$. Then $\frac{\partial L}{\partial z_j} = -[j = y] + \frac{e^{z_j}}{\sum_k e^{z_k}} = p_j - [j = y]$. $\blacksquare$

This direct derivation bypasses the softmax Jacobian computation entirely, which is why modern frameworks implement cross-entropy as a fused operation. For numerical stability, the $\log \sum_k e^{z_k}$ term is computed with the log-sum-exp trick: $m + \log \sum_k e^{z_k - m}$ where $m = \max_k z_k$.
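A minimal stable implementation of the fused loss and its gradient, following the log-sum-exp trick above (NumPy; logits chosen deliberately large to show the naive form would overflow):

```python
import numpy as np

def softmax_ce(z, y):
    """Fused softmax + cross-entropy: returns (loss, dL/dz = p - e_y)."""
    m = z.max()                                 # log-sum-exp shift
    logsumexp = m + np.log(np.exp(z - m).sum())
    loss = logsumexp - z[y]                     # -log p_y
    grad = np.exp(z - logsumexp)                # softmax probabilities p
    grad[y] -= 1.0                              # p - e_y
    return loss, grad

z, y = np.array([1000.0, 1001.0, 999.0]), 1     # naive exp(z) would overflow here
loss, grad = softmax_ce(z, y)
```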
5.4 LayerNorm Gradient
Forward: LayerNorm normalises each token independently:

$$y = \gamma \odot \hat{x} + \beta, \qquad \hat{x} = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}}$$

where $\mu = \frac{1}{d} \sum_{i=1}^{d} x_i$, $\sigma^2 = \frac{1}{d} \sum_{i=1}^{d} (x_i - \mu)^2$.

Backward: Let $\bar{y}$ be the upstream gradient. The parameter gradients are $\bar{\gamma} = \bar{y} \odot \hat{x}$ and $\bar{\beta} = \bar{y}$ (summed over tokens in a batch).

For the input gradient, define $g = \bar{y} \odot \gamma$. The full gradient through the normalisation is:

$$\bar{x} = \frac{1}{\sqrt{\sigma^2 + \epsilon}} \left( g - \frac{1}{d} \sum_j g_j - \hat{x} \odot \frac{1}{d} \sum_j g_j \hat{x}_j \right)$$

This expression subtracts a mean term and a mean-of-Hadamard term, reflecting that LayerNorm's Jacobian projects out two degrees of freedom (02 exercises).
For AI: LayerNorm appears in every transformer layer (pre-norm placement in modern architectures like GPT-NeoX, LLaMA). The gradient through LayerNorm is never zero - it always passes signal, unlike BatchNorm which can become degenerate at small batch sizes.
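The formula can be checked against autograd. The sketch below takes $\gamma = 1$ (so $g = \bar{y}$) and float64 for a tight tolerance; shapes are an arbitrary choice:

```python
import torch

def layernorm_input_grad(x, y_bar, eps=1e-5):
    """Manual dL/dx through LayerNorm (no affine params, i.e. gamma = 1)."""
    mu = x.mean(-1, keepdim=True)
    var = x.var(-1, unbiased=False, keepdim=True)
    xhat = (x - mu) / torch.sqrt(var + eps)
    g = y_bar                                  # g = y_bar ⊙ gamma with gamma = 1
    return (g - g.mean(-1, keepdim=True)
              - xhat * (g * xhat).mean(-1, keepdim=True)) / torch.sqrt(var + eps)

x = torch.randn(3, 8, dtype=torch.float64, requires_grad=True)
y = torch.nn.functional.layer_norm(x, (8,))
y_bar = torch.randn_like(y)
(auto,) = torch.autograd.grad(y, x, grad_outputs=y_bar)
assert torch.allclose(auto, layernorm_input_grad(x.detach(), y_bar), atol=1e-10)
```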
5.5 Dot-Product Attention Gradient
Forward (simplified single-head):

$$S = \frac{QK^\top}{\sqrt{d_k}}, \qquad P = \text{softmax}_{\text{row}}(S), \qquad O = PV$$

Backward: Given upstream $\bar{O}$:

$$\bar{V} = P^\top \bar{O}, \qquad \bar{P} = \bar{O} V^\top, \qquad \bar{S}_{ij} = P_{ij} \left( \bar{P}_{ij} - \sum_k \bar{P}_{ik} P_{ik} \right)$$

Then $\bar{Q} = \bar{S} K / \sqrt{d_k}$, $\bar{K} = \bar{S}^\top Q / \sqrt{d_k}$, and similarly for the $W_Q$, $W_K$, $W_V$ projections.

Critical memory issue: Storing $P \in \mathbb{R}^{T \times T}$ for the backward costs $O(T^2)$ memory per head - this is what FlashAttention avoids by recomputing $P$ from $Q, K$ during the backward pass (see 7.3).
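The backward formulas can likewise be checked against autograd on a tiny example (float64, arbitrary small shapes):

```python
import torch

T, d = 5, 4
Q = torch.randn(T, d, dtype=torch.float64, requires_grad=True)
K = torch.randn(T, d, dtype=torch.float64, requires_grad=True)
V = torch.randn(T, d, dtype=torch.float64, requires_grad=True)

S = Q @ K.T / d ** 0.5
P = S.softmax(-1)
O = P @ V
O_bar = torch.randn_like(O)
O.backward(O_bar)                         # autograd reference gradients

with torch.no_grad():
    V_bar = P.T @ O_bar                   # dL/dV
    P_bar = O_bar @ V.T                   # dL/dP
    S_bar = P * (P_bar - (P_bar * P).sum(-1, keepdim=True))  # row-wise softmax VJP
    Q_bar = S_bar @ K / d ** 0.5          # dL/dQ
    K_bar = S_bar.T @ Q / d ** 0.5        # dL/dK

for auto, manual in [(Q.grad, Q_bar), (K.grad, K_bar), (V.grad, V_bar)]:
    assert torch.allclose(auto, manual)
```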
5.6 Embedding Layer Gradient
Forward: $e_i = E[t_i]$ for token ids $t_1, \dots, t_T$, where $E \in \mathbb{R}^{V \times d}$ is the embedding table.

Backward: Given upstream $\bar{e}_i$ for all tokens $i$:

$$\bar{E}[v] = \sum_{i : t_i = v} \bar{e}_i$$

This is a sparse gradient - only rows corresponding to tokens in the sequence receive nonzero updates. For vocabulary size $V = 128{,}256$ (LLaMA-3), the embedding matrix has $128{,}256$ rows, but only a tiny fraction of rows are updated per batch. Distributed training with embedding sharding exploits this sparsity.
6. Vanishing and Exploding Gradients
6.1 Magnitude Analysis - The Core Problem
Consider an $L$-layer network with no activation functions (to isolate the linear case). The gradient of the loss with respect to layer $l$'s parameters involves the product:

$$J = W^{(L)} W^{(L-1)} \cdots W^{(l+1)}$$

This is a product of $L - l$ matrices. By the submultiplicativity of the spectral norm:

$$\|J\|_2 \le \prod_{k=l+1}^{L} \|W^{(k)}\|_2$$

If $\|W^{(k)}\|_2 \approx \rho < 1$ for all layers: $\|J\|_2 \lesssim \rho^{L-l} \to 0$ exponentially.

If $\rho > 1$: $\|J\|_2$ can grow like $\rho^{L-l}$ - exponential explosion.
GRADIENT MAGNITUDE ACROSS LAYERS (schematic: gradient norm vs layer l, from L down to 0)

exploding (rho > 1): norm grows exponentially toward layer 1
ideal     (rho = 1): norm constant across layers
vanishing (rho < 1): norm decays exponentially toward layer 1

With activations, the product also includes sigma'(z) terms (< 1 for sigmoid),
compounding the vanishing problem.
This was identified by Hochreiter (1991) as the fundamental obstacle to training deep networks with gradient descent.
6.2 Activations and Saturation
For sigmoid $\sigma(x) = \frac{1}{1 + e^{-x}}$: $\sigma'(x) = \sigma(x)(1 - \sigma(x)) \le \frac{1}{4}$ for all $x$, with equality only at $x = 0$. In the tails ($|x| \gtrsim 4$), $\sigma'(x) < 0.02$.

For tanh: $\tanh'(x) = 1 - \tanh^2(x) \le 1$, saturating similarly.

In a network with $L$ sigmoid layers and all activations near saturation, the gradient at layer 1 is suppressed by approximately $(1/4)^L$ at best. For $L = 20$: $(1/4)^{20} \approx 10^{-12}$ - numerically zero.

ReLU resolves saturation: $\text{ReLU}'(x) = [x > 0]$, which is either 0 or 1. For active neurons, it passes gradients unchanged. However, "dying ReLU" (neurons with $z < 0$ for every input) creates a different problem - those neurons receive zero gradient and never recover.
GELU and SiLU (used in LLaMA) are smooth approximations that avoid hard zeros, maintaining nonzero gradients everywhere.
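The suppression is easy to see numerically: propagate an error signal backward through 20 layers with the recurrence from 4.4 and compare sigmoid against ReLU (random $1/\sqrt{n}$-scaled weights; all choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 20, 64
Ws = [rng.normal(0, 1 / np.sqrt(n), (n, n)) for _ in range(L)]

def layer1_delta_norm(act, act_deriv):
    a, zs = rng.normal(size=n), []
    for W in Ws:                                   # forward, caching z^(l)
        z = W @ a
        zs.append(z)
        a = act(z)
    delta = rng.normal(size=n)                     # stand-in for delta^(L)
    for W, z in zip(reversed(Ws), reversed(zs)):   # backprop recurrence
        delta = (W.T @ delta) * act_deriv(z)
    return np.linalg.norm(delta)

sig = lambda z: 1 / (1 + np.exp(-z))
print("sigmoid:", layer1_delta_norm(sig, lambda z: sig(z) * (1 - sig(z))))
print("relu:   ", layer1_delta_norm(lambda z: np.maximum(z, 0),
                                    lambda z: (z > 0).astype(float)))
```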
6.3 Xavier and He Initialisation
Goal: Choose initial weights so that gradient (and activation) variance is preserved across layers - avoiding exponential growth or decay from the start of training.
Xavier Initialisation (Glorot & Bengio, 2010) - for symmetric activations (tanh, linear):

Assumption: Weights i.i.d. with mean 0 and variance $\text{Var}(W)$, inputs with variance $\text{Var}(x)$.

Forward variance preservation requires: $n_{\text{in}} \, \text{Var}(W) = 1$.

Backward variance preservation requires: $n_{\text{out}} \, \text{Var}(W) = 1$.

Compromise:

$$\text{Var}(W) = \frac{2}{n_{\text{in}} + n_{\text{out}}}$$

He Initialisation (He et al., 2015) - for ReLU activations:

ReLU zeroes half the distribution, so the effective variance is halved: $\text{Var}(\text{ReLU}(z)) = \frac{1}{2} \text{Var}(z)$ for symmetric $z$. To compensate:

$$\text{Var}(W) = \frac{2}{n_{\text{in}}}$$

For AI: GPT-2 uses a scaled version: $\mathcal{N}(0, 0.02^2)$ weight initialisation with the residual projection layers further scaled by $1/\sqrt{N}$ where $N$ is the number of residual layers, to control the variance accumulation in the residual stream.
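The variance-preservation argument can be verified directly by pushing one input through a deep tanh stack under different weight scales (widths, depth, and the "wrong" scales are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
n, depth = 256, 50
x = rng.normal(size=n)

for name, std in [("too small (0.01)", 0.01),
                  ("Xavier sqrt(2/(n_in+n_out))", np.sqrt(2 / (n + n))),
                  ("too large (0.2)", 0.2)]:
    a = x.copy()
    for _ in range(depth):                  # 50 tanh layers, fresh random weights
        a = np.tanh(rng.normal(0, std, (n, n)) @ a)
    print(f"{name:30s} final activation std = {a.std():.4f}")
```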
6.4 Residual Connections as Gradient Highways
Theorem: In a residual network $x^{(l+1)} = x^{(l)} + F^{(l)}(x^{(l)})$, the gradient satisfies:

$$\frac{\partial x^{(L)}}{\partial x^{(l)}} = \prod_{k=l}^{L-1} \left( I + J_{F^{(k)}} \right)$$

Key insight: Expanding the product, we get:

$$\prod_{k} \left( I + J_{F^{(k)}} \right) = I + \sum_k J_{F^{(k)}} + (\text{higher-order terms})$$

The identity term $I$ guarantees that even if all $J_{F^{(k)}} \approx 0$ (at initialisation), the gradient receives the full upstream signal unchanged. This is the theoretical explanation for why ResNets (He et al., 2016) can be trained with hundreds of layers.
In modern transformers, the pre-norm architecture (LayerNorm before the sublayer, not after) further improves gradient flow by ensuring that the residual path carries a pure copy of the signal.
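The highway effect shows up immediately in gradient norms. The sketch compares the input gradient of a deep tanh stack with and without identity skips (depth and width are illustrative; default PyTorch initialisation assumed):

```python
import torch

torch.manual_seed(0)
d, depth = 64, 50
layers = [torch.nn.Linear(d, d) for _ in range(depth)]

def input_grad_norm(residual):
    x = torch.randn(d, requires_grad=True)
    h = x
    for layer in layers:
        f = torch.tanh(layer(h))
        h = h + f if residual else f       # identity skip vs plain stacking
    h.sum().backward()
    return x.grad.norm().item()

print("plain:   ", input_grad_norm(residual=False))
print("residual:", input_grad_norm(residual=True))
```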
6.5 Gradient Clipping
Gradient explosion is addressed pragmatically by global gradient norm clipping:

$$g \leftarrow g \cdot \min\left(1, \frac{c}{\|g\|_2}\right)$$

where $g$ is the concatenated parameter gradient vector and $c$ is the clip threshold.

Typical values: $c = 1.0$ for transformers (used in GPT-3, PaLM, LLaMA).
Why global (not per-layer)? Clipping each layer's gradient independently destroys the relative proportions of updates across layers, disrupting the Adam momentum states. Global clipping preserves direction, only reducing magnitude.
Relationship to RNNs: Gradient clipping was originally introduced for RNNs (Mikolov, 2012; Pascanu et al., 2013), where the vanishing/exploding problem is especially severe due to the long chain of time steps.
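In PyTorch, global-norm clipping is one call between backward() and the optimiser step; the returned value is the pre-clip norm, which is also the quantity typically logged (model and loss below are toy choices):

```python
import torch

model = torch.nn.Linear(10, 10)
loss = model(torch.randn(4, 10)).pow(2).sum()
loss.backward()
# One global norm over ALL parameter gradients, threshold c = 1.0
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
print(f"pre-clip global gradient norm: {total_norm:.3f}")
```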
6.6 Batch Normalisation and Layer Normalisation
BatchNorm (Ioffe & Szegedy, 2015) normalises each feature across the batch, stabilising the distribution of pre-activations. Its gradient has a complex form involving batch statistics, but crucially it prevents activations from saturating on average.
LayerNorm (Ba et al., 2016) normalises each sample across features - preferred in transformers because:
- Behaviour is independent of batch size (critical for small-batch inference)
- Gradient analysis shows it damps large pre-activation magnitudes
- Pre-norm placement ensures the residual stream grows in a controlled manner
Empirical gradient norm tracking is standard practice in LLM training: the gradient norm is logged at every step, and sudden spikes indicate loss spikes or numerical issues. The Chinchilla and GPT-4 training runs used gradient norm monitoring as a primary signal for training health.
7. Memory-Efficient Backpropagation
7.1 Memory Cost of Standard Backprop
Standard backpropagation caches all intermediate activations for use in the backward pass. For a transformer with $L$ layers, batch size $B$, sequence length $T$, hidden dimension $d$, and $H$ attention heads:

| Component cached (per layer) | Size | At FP16 |
|---|---|---|
| Attention QKV projections | $3 \cdot B \cdot T \cdot d$ | $6BTd$ bytes |
| Attention scores (pre-softmax) | $B \cdot H \cdot T^2$ | $2BHT^2$ bytes |
| MLP intermediate | $4 \cdot B \cdot T \cdot d$ | $8BTd$ bytes |
| LayerNorm stats | $2 \cdot B \cdot T$ | negligible |

For GPT-3 ($L = 96$, $d = 12288$, $H = 96$, $T = 2048$): the attention scores alone require about $2HT^2 \approx 0.8$ GB per layer per sequence, roughly 77 GB across all 96 layers even at batch size 1 - clearly infeasible without optimisation.
7.2 Gradient Checkpointing
Idea: Trade compute for memory. Instead of caching all activations, cache only a subset of "checkpoint" activations and recompute the rest during the backward pass.
Algorithm (checkpointing at every $k$-th layer):
GRADIENT CHECKPOINTING
Forward pass:
Compute all layers normally
Save activations only at layers 0, k, 2k, 3k, ...
Discard all other intermediate activations
Backward pass:
For each segment [lk, (l+1)k]:
Re-run the forward pass from checkpoint lk
Now have all intermediates for this segment
Compute gradients for layers lk+1 to (l+1)k-1
Discard intermediates (no longer needed)
Memory-compute tradeoff:
- Memory: $O(L/k)$ checkpointed activations plus $O(k)$ live intermediates during recomputation; optimal at $k = \sqrt{L}$, giving $O(\sqrt{L})$ instead of $O(L)$
- Compute: Each layer's forward pass is run twice (once in the original forward, once in recomputation) -> approximately 33% compute overhead (one extra forward on top of forward + backward)
For AI: torch.utils.checkpoint.checkpoint() implements this in PyTorch with a single function call. LLaMA, Mistral, and most OSS LLM trainers enable activation checkpointing by default for sequences longer than ~2048 tokens.
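A minimal usage sketch with checkpoint_sequential, which splits a sequential stack into segments and caches only the segment boundaries (stack contents and segment count are illustrative choices):

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

blocks = [torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.GELU())
          for _ in range(12)]
model = torch.nn.Sequential(*blocks)

x = torch.randn(8, 256, requires_grad=True)
# Only 4 boundary activations are stored; interiors are recomputed in backward.
out = checkpoint_sequential(model, 4, x, use_reentrant=False)
out.sum().backward()
```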
Selective recomputation: FlashAttention (see 7.3) takes a more targeted approach - instead of checkpointing by layer, it recomputes only the attention scores (the $\text{softmax}(QK^\top/\sqrt{d_k})$ term) during the backward pass, since those are the dominant memory consumer.
7.3 FlashAttention: Fused Backward Pass
The problem: Standard attention stores $P = \text{softmax}(QK^\top/\sqrt{d_k})$ for the backward pass. For $T = 128\text{K}$ (long-context models), $P$ has $T^2 \approx 1.7 \times 10^{10}$ entries per head - tens of GB per layer per batch element.
FlashAttention solution (Dao et al., 2022): Compute attention in tiles that fit in SRAM (GPU on-chip cache), using the online softmax algorithm (Milakov & Gimelshein, 2018) to avoid materialising the full matrix.
Backward pass in FlashAttention: The backward pass needs $P$ but doesn't store it. Instead:

- Store only the softmax normalisation statistics ($O(T)$ scalars: one max and one normaliser per row) - $O(T)$ memory
- During the backward pass, recompute $P$ tile by tile from $Q, K$ and the stored statistics
- Accumulate gradients tile by tile without ever forming the full $T \times T$ matrix

Complexity:

- Memory: $O(T)$ instead of $O(T^2)$
- FLOPs: a small constant factor over the forward FLOPs (the scores are recomputed once)
- Wall-clock speedup: 2-4x over standard PyTorch attention on A100
For AI: FlashAttention is the default attention implementation in modern LLM training (vLLM, HuggingFace Transformers, NanoGPT). FlashAttention-3 (2024) further optimises for H100 tensor core and async operations.
7.4 Mixed Precision Training
Observation: FP32 (32-bit float) is unnecessarily precise for gradients. FP16 (16-bit float) has higher memory bandwidth on modern GPUs, but overflow/underflow is common for small/large gradient values.
AMP (Automatic Mixed Precision) strategy:
| Component | Precision | Reason |
|---|---|---|
| Forward activations | FP16 | Fast compute, lower memory |
| Backward gradients | FP16 | Fast compute |
| Weight updates | FP32 | Avoid precision loss |
| Master weights | FP32 | Preserve small updates (below FP16 resolution) |
| Loss scaling | Dynamic | Prevent FP16 underflow for small gradients |
Loss scaling: Multiply the loss by a large scale factor $s$ (typically $2^{7}$ to $2^{24}$, adjusted dynamically) before backward, then divide gradients by $s$ before the weight update. This shifts gradient values into the representable FP16 range. The scale factor is increased or decreased based on whether overflow (inf/nan) occurred.
BF16 (Brain Float 16, used in TPUs and H100): same 16-bit width but with 8 exponent bits (same as FP32) and only 7 mantissa bits. Eliminates overflow issues while retaining dynamic range - now the preferred format for LLM training.
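The canonical FP16 AMP loop, with the scaler handling dynamic loss scaling (CUDA device assumed; with BF16 the scaler can typically be dropped because its exponent range matches FP32; model and data are toy choices):

```python
import torch

model = torch.nn.Linear(512, 512).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    x = torch.randn(32, 512, device="cuda")
    opt.zero_grad()
    with torch.cuda.amp.autocast(dtype=torch.float16):
        loss = model(x).pow(2).mean()      # forward in FP16
    scaler.scale(loss).backward()          # scaled backward avoids FP16 underflow
    scaler.unscale_(opt)                   # restore true gradient magnitudes
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    scaler.step(opt)                       # skipped automatically on inf/nan
    scaler.update()                        # grow/shrink the scale factor
```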
8. Advanced Differentiation Topics
8.1 Backpropagation Through Time (BPTT)
A recurrent neural network (RNN) with hidden state $h_t = \sigma(W_h h_{t-1} + W_x x_t)$ can be viewed as a feedforward network unrolled through time:

UNROLLED RNN - BPTT VIEW

x_1 -> [cell] -> h_1 -> [cell] -> h_2 -> [cell] -> h_3 -> ... -> h_T -> loss
          W              W              W            (shared weights W)

BPTT = backprop through the unrolled graph.
Gradient of loss w.r.t. W = sum of gradients from all time steps.
The gradient with respect to $h_t$ at time step $T$ involves the product:

$$\frac{\partial h_T}{\partial h_t} = \prod_{k=t+1}^{T} \frac{\partial h_k}{\partial h_{k-1}}$$

Each factor satisfies $\left\| \frac{\partial h_k}{\partial h_{k-1}} \right\| \le \|W_h\| \cdot \max_z |\sigma'(z)|$. When $\|W_h\| \max |\sigma'| < 1$, the product of $T - t$ such factors vanishes exponentially. This is the core failure mode of vanilla RNNs on long sequences (Hochreiter, 1991; Bengio et al., 1994).

Truncated BPTT: In practice, gradients are truncated to a window of $\tau$ steps to reduce memory and compute costs, at the cost of ignoring long-range dependencies beyond $\tau$ steps in the past.
LSTM/GRU solution: Long Short-Term Memory networks (Hochreiter & Schmidhuber, 1997) use gating mechanisms to maintain a cell state with additive updates - replacing multiplicative products of weight matrices with additive accumulation, similar to residual connections.
8.2 Implicit Differentiation Preview
For optimisation problems or fixed-point iterations, we sometimes need gradients of implicit functions.
Example: Consider $\theta^*(\phi) = \arg\min_\theta g(\theta, \phi)$, where the optimum satisfies the stationarity condition $\nabla_\theta g(\theta^*, \phi) = 0$.

By the implicit function theorem:

$$\frac{\partial \theta^*}{\partial \phi} = -\left( \nabla^2_{\theta\theta} g \right)^{-1} \nabla^2_{\theta\phi} g$$
This allows differentiating through optimisation steps without unrolling them - the basis of MAML (Model-Agnostic Meta-Learning, Finn et al., 2017) and DEQs (Deep Equilibrium Models, Bai et al., 2019).
Full treatment: Implicit differentiation and differentiable optimisation are covered in depth in 05/05-Automatic-Differentiation.
8.3 Straight-Through Estimator and REINFORCE
The discrete problem: When a node in the computation graph applies a discrete operation (argmax, sampling, rounding), the gradient is zero almost everywhere. The chain rule breaks - the graph is not differentiable at these nodes.
Straight-Through Estimator (STE) (Hinton, 2012; Bengio et al., 2013): forward $y = q(x)$ (discrete), backward $\partial y / \partial x := 1$ (identity).
Applications:
- Quantisation-aware training (QAT): Simulate INT8 forward, use STE backward. Used in GPTQ, AWQ, and quantised LLM training.
- VQ-VAE: Vector quantisation in the encoder uses STE so gradients flow from decoder back to encoder.
- Binary neural networks: Forward uses sign(x), backward uses STE with gradient identity.
REINFORCE (Williams, 1992): For stochastic nodes, use the log-derivative trick:

$$\nabla_\theta \, \mathbb{E}_{z \sim p_\theta}[L(z)] = \mathbb{E}_{z \sim p_\theta}\big[ L(z) \, \nabla_\theta \log p_\theta(z) \big]$$

This produces an unbiased gradient estimate but with high variance (addressed by baseline subtraction: replace $L(z)$ with $L(z) - b$). REINFORCE is the foundation of policy gradient methods in RL and is used in RLHF's PPO step.
8.4 Higher-Order Gradients
Second-order gradients arise in:
- Newton's method: requires Hessian (see 02-Jacobians-and-Hessians)
- Meta-learning (MAML): gradient of gradient w.r.t. outer parameters
- Gradient penalty in GAN training: $\lambda \big( \|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1 \big)^2$ (WGAN-GP)
In PyTorch: Higher-order gradients are computed by running autograd through itself:
import torch

# Second derivative of a loss w.r.t. the input (toy scalar function standing
# in for a model, so the snippet is self-contained and runnable)
x = torch.randn(4, requires_grad=True)
loss = (x ** 3).sum()
g = torch.autograd.grad(loss, x, create_graph=True)[0]   # first derivative: 3x^2
g2 = torch.autograd.grad(g.sum(), x)[0]                  # second derivative: 6x
create_graph=True tells autograd to build a graph for the gradient computation itself, enabling differentiation through it.
Hessian-vector products (HVPs): As shown in 02, the HVP can be computed in $O(T_f)$ time without forming $H$:

$$Hv = \nabla_\theta \left( (\nabla_\theta L)^\top v \right)$$

This is the primitive operation behind conjugate gradient and Lanczos methods for curvature estimation.
9. Transformer Backpropagation
9.1 Full Transformer Layer Gradient Flow
A pre-norm transformer layer processes the residual stream as:

$$x' = x + \text{Attn}(\text{LN}_1(x)), \qquad x'' = x' + \text{MLP}(\text{LN}_2(x'))$$

Backward through one transformer layer (given $\bar{x}''$):
GRADIENT FLOW - ONE TRANSFORMER LAYER

FORWARD                              BACKWARD
x                                    x''_bar flows in
|                                    |
x'  = x  + Attn(LN_1(x))             x'_bar = x''_bar + MLP_backward(x''_bar)
|                                    |
x'' = x' + MLP(LN_2(x'))             x_bar  = x'_bar  + Attn_backward(x'_bar)

The two residual additions split the gradient stream into parallel
paths - the identity skip carries the full upstream signal unchanged.
The critical observation: both residual additions in the transformer layer act as gradient splitters. The skip path carries a copy of $\bar{x}''$ directly back to $x$ without passing through the MLP Jacobian. This gives transformers well-behaved gradients even at 96 layers (GPT-3) or 64 layers (Grok-1).
9.2 LoRA Backward Pass
Low-Rank Adaptation (Hu et al., 2022) reparametrises a weight matrix:

$$W' = W_0 + \frac{\alpha}{r} BA, \qquad B \in \mathbb{R}^{d \times r}, \quad A \in \mathbb{R}^{r \times k}, \quad r \ll \min(d, k)$$

Forward: $y = W_0 x + \frac{\alpha}{r} B A x$.

Backward (given $\bar{y}$):

$$\bar{B} = \frac{\alpha}{r} \bar{y} (Ax)^\top, \qquad \bar{A} = \frac{\alpha}{r} (B^\top \bar{y}) x^\top, \qquad \bar{x} = W_0^\top \bar{y} + \frac{\alpha}{r} A^\top B^\top \bar{y}$$

Note: $W_0$ is frozen, so $\bar{W}_0$ is never computed or stored - no gradient memory is spent on $W_0$. The backward pass only updates $A$ and $B$.

Memory savings: For $W_0 \in \mathbb{R}^{4096 \times 4096}$ with $r = 16$: gradient storage reduces from about 16.8M to $2 \times 4096 \times 16 \approx 131$K parameters - a ~128x reduction in gradient memory for that layer.
DoRA (Liu et al., 2024) further decomposes LoRA into magnitude + direction components, improving fine-tuning quality while preserving the low-rank backward structure.
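A minimal LoRA linear layer showing the frozen-base backward structure; the names r and alpha follow the paper, while shapes and initialisation details are illustrative:

```python
import torch

class LoRALinear(torch.nn.Module):
    def __init__(self, d, k, r=16, alpha=16):
        super().__init__()
        self.W0 = torch.nn.Parameter(torch.randn(d, k) / k ** 0.5,
                                     requires_grad=False)   # frozen base weight
        self.A = torch.nn.Parameter(torch.randn(r, k) / k ** 0.5)
        self.B = torch.nn.Parameter(torch.zeros(d, r))      # zero init: starts at W0
        self.scale = alpha / r

    def forward(self, x):
        return x @ self.W0.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(4096, 4096, r=16)
layer(torch.randn(2, 4096)).sum().backward()
# Only the adapters receive gradients; the frozen base stores none.
assert layer.W0.grad is None
assert layer.A.grad is not None and layer.B.grad is not None
```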
9.3 Gradient Accumulation
Problem: Large effective batch sizes (millions of tokens per optimiser step at GPT scale) don't fit in GPU memory for a single forward-backward pass.
Solution - gradient accumulation:
For each micro-batch b = 1, ..., G:
loss_b = forward(micro_batch_b) / G # scaled loss
backward(loss_b) # accumulates gradients
# gradients are NOT zeroed between micro-batches
optimizer.step() # update once after G micro-batches
optimizer.zero_grad()
The division by $G$ ensures the accumulated gradient is mathematically identical to what a single pass with the full batch would produce.

For AI: GPT-3 used gradient accumulation to achieve an effective batch of 3.2M tokens with hardware that could only process a fraction of that per step.
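A runnable version of the accumulation loop, with the $1/G$ loss scaling in place (model, optimiser, and batch sizes are toy choices):

```python
import torch

model = torch.nn.Linear(64, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
G = 4
micro_batches = [torch.randn(8, 64) for _ in range(G)]

opt.zero_grad()
for xb in micro_batches:
    loss = model(xb).pow(2).mean() / G   # scale so the sum matches one full-batch pass
    loss.backward()                      # .grad buffers accumulate across micro-batches
opt.step()                               # single update after G micro-batches
opt.zero_grad()
```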
9.4 Distributed Gradient Synchronisation
In data parallelism, each GPU processes a different micro-batch but shares the same model weights. After the backward pass, gradients must be synchronised:
All-Reduce: Sum gradients across all GPUs and divide by the number of GPUs. Implemented via ring-all-reduce (NCCL), whose communication cost is bandwidth-optimal.
Gradient sharding (ZeRO): DeepSpeed's ZeRO (Zero Redundancy Optimizer) partitions gradient storage across GPUs:
- ZeRO Stage 1: Shard optimiser states -> up to ~4x memory reduction
- ZeRO Stage 2: Shard gradients additionally -> up to ~8x reduction
- ZeRO Stage 3: Shard parameters too -> reduction linear in GPU count

For LLaMA-3 70B training: ZeRO Stage 3 across 1024 H100 GPUs allows storing only ~1/1024 of the parameters per GPU - fitting the model in memory.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Applying scalar chain rule for vector functions | Scalar chain rule multiplies scalars; the multivariate chain rule composes Jacobians, and order matters: $J_f J_g$, not $J_g J_f$ | Write Jacobians explicitly and multiply left-to-right in the order of composition |
| 2 | Forgetting to sum gradients at fan-out (shared weight) nodes | Each use of a weight contributes a gradient; missing uses means undercounting | Accumulate gradients with += in the backward loop over all uses |
| 3 | Treating $\nabla_W L = \delta x^\top$ as shape-correct without checking | The outer product has shape matching $W$; transposing either vector gives the wrong shape | Always verify gradient shapes match parameter shapes before implementation |
| 4 | Using sigmoid/tanh in deep networks expecting no vanishing gradients | Their derivatives are bounded by $1/4$ / $1$ - products over many layers vanish exponentially | Use ReLU, GELU, or SiLU with proper initialisation; add residual connections |
| 5 | Initialising all weights to zero (or same value) | Symmetry breaking fails: every neuron in a layer computes the same gradient, so they all update identically and remain identical forever | Use Xavier or He initialisation with random values |
| 6 | Skipping the fused softmax + cross-entropy optimisation and computing them separately | Intermediate probabilities overflow/underflow for large logits | Always use the log-sum-exp trick or a library's CrossEntropyLoss (which applies it internally) |
| 7 | Confusing JVP and VJP - using JVP for all gradient computations | For a scalar loss, JVP needs $n$ passes (one per input dimension); VJP needs one pass per output dimension | Use VJP (backward) for scalar losses; reserve JVP for computing Jacobian columns or directional derivatives |
| 8 | Clipping per-layer gradients independently instead of global norm | Destroys the relative scale of gradients across layers; disrupts Adam's per-parameter adaptive scaling | Clip the global gradient norm: compute $\|g\|_2$ across all parameters, scale down if above threshold |
| 9 | Using STE incorrectly in quantisation-aware training - applying STE to continuous weights | STE should only be applied at the discrete rounding step, not to subsequent continuous operations | Apply STE only at the round() or sign() node; propagate real gradients elsewhere |
| 10 | Misunderstanding gradient accumulation - forgetting to scale the loss | Accumulating micro-batch gradients without dividing by $G$ produces an effective gradient $G$ times too large | Divide the loss by $G$ before backward, or divide the accumulated gradients by $G$ before the optimiser step |
| 11 | Not using create_graph=True when computing higher-order gradients in PyTorch | Without create_graph=True, the gradient computation is not tracked, so differentiating through it returns None or wrong values | Use create_graph=True in the first torch.autograd.grad() call when second derivatives are needed |
| 12 | Confusing BPTT truncation with sequence truncation | Truncated BPTT still runs the full forward sequence; it only truncates the backward window. Sequence truncation shortens both | These are different operations - read the framework docs to confirm which is applied |
11. Exercises
Exercise 1 - Scalar Chain Rule Verification
Let $y = f(g(x))$ for scalar functions of your choice, e.g. $f(u) = \sin u$ and $g(x) = x^2 + 1$.
(a) Compute $\frac{dy}{dx}$ using the chain rule analytically.
(b) Evaluate the derivative at two test points, e.g. $x = 0$ and $x = 1$.
(c) Verify numerically using centred finite differences.
(d) Compute $\frac{d}{dx} g(f(x))$ as well - explain why the order of composition matters.
Exercise 2 - Jacobian Composition
Let $g: \mathbb{R}^2 \to \mathbb{R}^2$ and $f: \mathbb{R}^2 \to \mathbb{R}^2$ be vector functions of your choice, e.g. $g(x) = (x_1 x_2, \ x_1 + x_2)$ and $f(u) = (\sin u_1, \ e^{u_2})$.
(a) Compute $J_g$ and $J_f$ analytically.
(b) Compute the Jacobian of $f \circ g$ using the chain rule $J_{f \circ g}(x) = J_f(g(x)) J_g(x)$.
(c) Verify using finite differences at a test point, e.g. $x = (1, 2)$.
(d) Compute $f(g(x))$ symbolically, differentiate it directly, and confirm the result equals part (b).
Exercise 3 - Backprop Through a 2-Layer Network
Two-layer network: $z^{(1)} = W^{(1)} x + b^{(1)}$, $a^{(1)} = \text{ReLU}(z^{(1)})$, $\hat{y} = W^{(2)} a^{(1)} + b^{(2)}$, $L = \frac{1}{2}\|\hat{y} - y\|^2$.
With small random $W^{(1)}$, $W^{(2)}$, and biases (dimensions of your choice, e.g. $W^{(1)} \in \mathbb{R}^{3 \times 2}$, $W^{(2)} \in \mathbb{R}^{1 \times 3}$):
(a) Implement forward pass. Compute for given values.
(b) Implement backward pass manually using the backpropagation recurrence.
(c) Verify your gradients using numpy finite differences.
(d) Implement gradient descent for 100 steps with a small learning rate (e.g. 0.05) and verify the loss decreases.
Exercise 4 - Vanishing Gradients Analysis
(a) Construct a 20-layer sigmoid network with all weight matrices set to a fixed scale (e.g. $0.5 I$). Compute the gradient at layer 1 symbolically and numerically.
(b) Repeat with ReLU activation. Compare gradient magnitudes at layers 1, 5, 10, 20.
(c) Apply Xavier initialisation to the sigmoid network and compare gradient flow.
(d) Add residual connections to the 20-layer sigmoid network. Quantify the improvement.
(e) Plot gradient norm vs. layer depth for all four cases.
Exercise 5 - Gradient Checkpointing
(a) Implement a 10-layer feedforward network with explicit intermediate caching. Measure peak memory usage.
(b) Implement the same network with gradient checkpointing at every 3rd layer. Measure memory.
(c) Verify that both implementations produce identical gradients.
(d) Measure the compute overhead of recomputation. How does it compare to the theoretical ~33%?
(e) Find the optimal checkpoint interval that minimises total memory x compute cost.
Exercise 6 - Attention Gradient
Single-head attention: $O = \text{softmax}(QK^\top/\sqrt{d_k})V$ with $Q, K, V \in \mathbb{R}^{T \times d_k}$ for small $T$ and $d_k$ (e.g. $T = 6$, $d_k = 4$).
(a) Implement forward pass.
(b) Implement backward pass computing $\bar{Q}, \bar{K}, \bar{V}$ given $\bar{O}$.
(c) Verify all three gradients using finite differences.
(d) For causal masking (set $S_{ij} = -\infty$ for $j > i$), show that the backward pass is unchanged except at masked positions.
Exercise 7 - LoRA Gradient Analysis
(a) Implement a linear layer with LoRA adaptation. Set $r \ll d$ (e.g. $r = 4$).
(b) Compute gradients and analytically and verify numerically.
(c) Confirm that $\bar{A}$ and $\bar{B}$ are computed but $\bar{W}_0$ is not (frozen).
(d) Compare the number of gradient parameters for full fine-tuning vs. LoRA.
(e) Implement LoRA training for 200 steps on a toy task and compare convergence with full fine-tuning.
Exercise 8 - REINFORCE and STE
(a) Implement a stochastic computational graph, e.g. $z \sim \mathcal{N}(\mu, 1)$ with loss $L(z) = z^2$.
(b) Compute the REINFORCE gradient analytically.
(c) Estimate the REINFORCE gradient with 10000 samples. Verify against the analytical value.
(d) Implement STE for the rounding operation: forward $y = \text{round}(x)$, backward $\partial y / \partial x := 1$. Compute the STE gradient and update $x$.
(e) Compare STE-based quantisation-aware training on a toy example: train for 50 steps and measure quantisation error vs. a post-training quantised model.
12. Why This Matters for AI (2026 Perspective)
| Concept | Concrete AI Impact |
|---|---|
| Multivariate chain rule | The mathematical foundation of every gradient-based learning algorithm - without it, backprop cannot be defined |
| VJP as backprop primitive | Modern autodiff systems (JAX, PyTorch) are built around VJP primitives; the single-pass $O(T_f)$ cost of reverse mode is what makes training billion-parameter models tractable |
| Computation graphs | torch.compile (PyTorch 2.0), XLA (JAX/TensorFlow), TensorRT all operate by analysing the computation graph to fuse kernels and optimise memory layout |
| Fused softmax + CE gradient | The clean $p - y$ gradient makes language model training numerically stable; FlashAttention's backward reuses the same softmax log-sum-exp statistics |
| Xavier/He initialisation | Ensures consistent gradient scale whether at depth 1 or depth 96 - a critical practical enabler for deep network training |
| Residual connections | The "gradient highway" identity term in ResNets/transformers is why 100-layer networks train at all; this was the key insight enabling GPT-3's 96 layers |
| Gradient checkpointing | Enables training LLMs with 128K context lengths; without it, the activation memory would be prohibitive |
| FlashAttention backward | IO-aware backward pass reduces memory from $O(T^2)$ to $O(T)$ while maintaining numerical equivalence; standard in all production LLM training as of 2024 |
| LoRA backward | Only the low-rank adapter parameters $A, B$ accumulate gradients; enables fine-tuning 70B models on a single H100 via the low-rank backward structure |
| STE / REINFORCE | STE enables quantisation-aware training (GPTQ, AWQ, QLoRA); REINFORCE enables RLHF's policy gradient step in PPO-based alignment training |
| BPTT | The failure of vanilla BPTT for long sequences motivated LSTMs, GRUs, and ultimately the attention mechanism which replaces recurrence with direct pairwise interactions |
| ZeRO gradient sharding | Partitions gradient storage across GPUs linearly in GPU count; enables training models that would require more memory per GPU without it |
| Mixed precision backward | BF16 backward passes achieve roughly double the effective memory bandwidth of FP32 on H100, with dynamic loss scaling preventing underflow; standard in all LLM training since GPT-3 |
| Higher-order gradients | Gradient penalties in GANs, MAML's meta-gradient, and Hessian-vector products for learning rate scheduling all require differentiating through the backward pass |
Conceptual Bridge
Where we came from: 01 (Partial Derivatives) gave us tools to differentiate multivariate functions component by component. 02 (Jacobians and Hessians) assembled those into matrix objects capturing full first- and second-order sensitivity. We now know what a derivative is for a function $f: \mathbb{R}^n \to \mathbb{R}^m$.
What this section added: The chain rule tells us how derivatives compose - allowing us to differentiate functions built from primitives. Backpropagation is the algorithmic instantiation of this composition for computation graphs, and the VJP (reverse mode) makes the cost of differentiating a scalar loss with respect to millions of parameters equal to the cost of a single forward pass. This is not an approximation - it is exact and provably optimal.
What this enables: Every gradient-based learning algorithm - SGD, Adam, RMSprop, LARS, Shampoo - requires only the gradient , which backprop provides. The advanced sections of this chapter (04 Optimisation, 05 Automatic Differentiation) build directly on the VJP abstraction established here.
Connection to transformer training: Modern LLM training is essentially an exercise in efficient backpropagation at scale. Every engineering decision - Flash Attention's tiled backward, ZeRO's gradient sharding, gradient checkpointing, LoRA's low-rank backward, mixed precision loss scaling - is a response to the memory and compute constraints of the backward pass. Understanding backpropagation is therefore prerequisite to understanding why LLM training systems are designed the way they are.
POSITION IN THE CURRICULUM
PREREQUISITES (must know):
01 Partial Derivatives - df/dx_i, gradient, directional derivative
02 Jacobians & Hessians - J_f, Fréchet derivative, VJP/JVP
THIS SECTION (03):
Chain Rule & Backpropagation
- Multivariate chain rule (J_{fog} = J_f * J_g)
- Computation graphs (DAG, topological order)
- Backprop recurrence (delta^(l) = (W^(l+1))^T delta^(l+1) (*) sigma'(z^(l)))
- Gradient derivations (linear, softmax+CE, LN, attention)
- Vanishing/exploding gradients + solutions
- Memory-efficient backprop (checkpointing, Flash Attention)
- Advanced: BPTT, STE, REINFORCE, higher-order gradients
WHAT THIS ENABLES:
04 Optimisation - gradient descent, Adam, second-order methods
05 Automatic Differentiation - AD systems, tape, jit compilation
07 Neural Networks - full training loop built on backprop
08 Transformer Architecture - FlashAttention, LoRA, gradient flow
CROSS-CHAPTER CONNECTIONS:
03-Advanced-LA/02-SVD - gradient low-rank structure
04-Calculus/02-Derivatives - scalar chain rule (special case)
06-Probability/03-MLE - loss functions that backprop optimises
For automatic differentiation systems that implement these ideas at scale, see 05 Automatic Differentiation.
For the optimisation algorithms that consume backprop's output, see 04 Multivariate Optimisation.
Appendix A: Worked Backpropagation Example
A.1 Complete Worked Example - 3-Layer Network
To make the backpropagation formulas concrete, we trace through a minimal example end-to-end.
Network architecture:
- Input: $x = (1, 2)^\top \in \mathbb{R}^2$
- Layer 1: $W^{(1)} = \begin{pmatrix} 0.5 & -0.5 \\ 0.5 & 0.5 \end{pmatrix}$, $b^{(1)} = (0, 0)^\top$, activation: ReLU
- Layer 2: $W^{(2)} = (1, \ 1)$, $b^{(2)} = 0$, activation: none (scalar output)
- Loss: $L = \frac{1}{2}(\hat{y} - y)^2$ (MSE with target $y = 1$)

(The numerical values are one convenient choice for illustration.)

Forward pass:

$$z^{(1)} = W^{(1)} x = (-0.5, \ 1.5)^\top, \qquad a^{(1)} = \text{ReLU}(z^{(1)}) = (0, \ 1.5)^\top$$

$$\hat{y} = W^{(2)} a^{(1)} + b^{(2)} = 1.5, \qquad L = \tfrac{1}{2}(1.5 - 1)^2 = 0.125$$

Backward pass:

Output layer gradient:

$$\delta^{(2)} = \hat{y} - y = 0.5$$

Layer 2 gradients (scalar output, linear):

$$\nabla_{W^{(2)}} L = \delta^{(2)} (a^{(1)})^\top = (0, \ 0.75), \qquad \nabla_{b^{(2)}} L = 0.5$$

Error signal propagated to layer 1:

$$\frac{\partial L}{\partial a^{(1)}} = (W^{(2)})^\top \delta^{(2)} = (0.5, \ 0.5)^\top$$

Through ReLU:

$$\delta^{(1)} = (0.5, \ 0.5)^\top \odot [z^{(1)} > 0] = (0, \ 0.5)^\top$$

Layer 1 weight gradients:

$$\nabla_{W^{(1)}} L = \delta^{(1)} x^\top = \begin{pmatrix} 0 & 0 \\ 0.5 & 1.0 \end{pmatrix}, \qquad \nabla_{b^{(1)}} L = (0, \ 0.5)^\top$$

Verification (finite difference for $W^{(1)}_{21}$): Perturb by $\varepsilon = 10^{-5}$: then $z^{(1)}_2$ becomes $1.5 + \varepsilon$, so $\hat{y} = 1.5 + \varepsilon$ and $L = \tfrac{1}{2}(0.5 + \varepsilon)^2$.

Numerically: $\frac{L(+\varepsilon) - L(-\varepsilon)}{2\varepsilon} = 0.5$, matching the analytic entry $\big(\nabla_{W^{(1)}} L\big)_{21} = 0.5$.
A.2 Computational Cost Comparison
Forward pass: $O\big(\sum_l n_l n_{l-1}\big)$ FLOPs - one GEMM per layer.

Backward pass: Also $O\big(\sum_l n_l n_{l-1}\big)$ - same asymptotic cost, with a constant factor of roughly 2 (two GEMMs per layer: one for $\bar{x}$, one for $\bar{W}$).

Memory: Cache all $z^{(l)}$ and $a^{(l)}$: $O\big(\sum_l n_l\big)$ scalars - linear in total neuron count.

The fundamental theorem of backpropagation: Computing $\nabla_\theta L$ for all parameters costs only a constant factor more than computing $L$ itself. This is the miracle that makes gradient-based learning tractable.

Formal statement: Let $T_f$ be the time to evaluate $L(\theta)$ in the forward pass. Then the time to compute all partial derivatives via backprop is at most $c \cdot T_f$, where $c$ is a small constant ($c \approx 2\text{-}3$ in practice).

This contrasts with finite differences: computing $\partial L / \partial \theta_i$ for each of $N$ parameters via finite differences costs $N + 1$ forward passes - for GPT-3 with $N = 175$ billion, this would be 175 billion forward passes, or approximately the heat death of the universe in compute time.
Appendix B: JVP vs VJP - Mode Selection and Complexity
B.1 Forward Mode vs Reverse Mode
Given , both modes compute the same gradient information but with different costs:
| Mode | Computes | Cost per pass | Total cost for full Jacobian |
|---|---|---|---|
| Forward (JVP) | $J_f \dot{x}$ - one column of $J_f$ | $O(T_f)$ | $n$ passes: $O(n \, T_f)$ |
| Reverse (VJP) | $\bar{y}^\top J_f$ - one row of $J_f$ | $O(T_f)$ | $m$ passes: $O(m \, T_f)$ |
COST MATRIX: WHICH MODE WINS?

Goal: compute dL/dtheta for f: R^n -> R^m with a scalar loss
n = |theta| = 175,000,000,000 (GPT-3 parameter count)
m = 1 (scalar loss)

Forward mode (JVP): n passes = 175B x Tf <- one pass per INPUT dimension
Reverse mode (VJP): m passes = 1 x Tf    <- one pass per OUTPUT dimension

Reverse mode (backprop) needs only one pass because m = 1:
the single row of J_f is exactly the gradient (row) vector.
Forward mode would need n = 175B passes to fill all columns.

RULE: Use reverse mode (backprop) when m << n
RULE: Use forward mode (JVP) when n << m

Most ML: n >> m = 1 -> backprop is optimal
When forward mode wins: Computing the sensitivity of all outputs to one input parameter - e.g., computing how the entire model output changes as a single hyperparameter varies. Also: Jacobian-vector products in conjugate gradient (no need for the full Jacobian).
Mixed strategies: For functions with $n \approx m$, the optimal choice is to split the Jacobian into row/column blocks and use each mode for the appropriate blocks - the basis of adjoint methods in numerical PDE solvers.
B.2 Tangent Mode for Hessian-Vector Products
As shown in 02, the Hessian-vector product $Hv = \nabla_\theta \big( (\nabla_\theta L)^\top v \big)$ can be computed by composing forward and reverse modes:

Algorithm (Pearlmutter's $R\{\cdot\}$ trick, 1994):
- Forward pass (JVP with direction $v$): compute $L$ and the directional derivative $(\nabla_\theta L)^\top v$ simultaneously
- Cost: same order as backprop ($O(T_f)$) - one pass suffices
Implementation in PyTorch:
g = torch.autograd.grad(loss, params, create_graph=True)
flat_g = torch.cat([gi.view(-1) for gi in g])
hvp = torch.autograd.grad(flat_g @ v, params)
Cost: 2 backprop passes, no matrix formed. This is the primitive for:
- Conjugate gradient for Newton steps (K-FAC-style)
- Lanczos iteration for the top eigenvalues of the Hessian
- Eigenvalue monitoring during training (Cohen et al., 2022 - edge of stability)
Appendix C: Automatic Differentiation Preview
C.1 The AD Abstraction
Automatic differentiation (AD) is a mechanical procedure for transforming any program that computes $f(x)$ into a program that also computes $\nabla f(x)$ (or JVPs/VJPs). This section previews the idea; the full treatment is in 05.

AD is neither symbolic differentiation (too slow, exponentially large expressions) nor numerical differentiation (finite differences - too imprecise, costs $O(n)$ evaluations). AD exploits the fact that every program is a composition of primitives, and the chain rule tells us exactly how to compose their derivatives.
Two flavours:
SYMBOLIC DIFF              NUMERICAL DIFF             AUTO DIFF
f(x) = x^2 + sin(x)        Compute f(x+h)             Track ops in
                           and f(x-h)                 computation tape
-> df/dx = 2x + cos(x)     -> (f(x+h)-f(x-h))/2h      -> Exact as FP allows

Exact, but expression      Approximate; costs         Exact; costs O(1)
size can explode           O(n) evaluations           forward-pass equivalents
C.2 The Tape (Wengert List)
The Wengert list (1964) records, during the forward pass, every primitive operation applied and its operands. The backward pass replays this tape in reverse, accumulating adjoints.
FORWARD TAPE EXAMPLE: f(x) = exp(x) * (x + 1)
Tape (built during forward):
v_1 = x (input)
v_2 = exp(v_1) (op: exp, operand: v_1)
v_3 = v_1 + 1.0 (op: add, operands: v_1, 1.0)
v_4 = v_2 x v_3 (op: mul, operands: v_2, v_3)
Backward (replay in reverse, adjoints written v_bar):
v4_bar = 1.0                                  (seed)
v2_bar += v4_bar x v_3 = 1.0 x (x+1)          (mul backward)
v3_bar += v4_bar x v_2 = 1.0 x exp(x)         (mul backward)
v1_bar += v3_bar x 1.0 = exp(x)               (add backward)
v1_bar += v2_bar x exp(v_1) = (x+1)exp(x)     (exp backward)
Total: v1_bar = exp(x) + (x+1)exp(x) = (x+2)exp(x)   (matches the product rule)
PyTorch's Tensor stores a grad_fn attribute at each node - this is the tape in disguise. Calling .backward() replays the tape in reverse.
For more: See 05 Automatic Differentiation for the complete treatment of forward/reverse mode AD, source transformation, operator overloading, and the design of JAX vs PyTorch autograd.
Appendix D: Numerical Gradient Verification
In practice, every backpropagation implementation should be verified against finite differences. This appendix presents the standard toolkit.
D.1 Centred Finite Differences
For a scalar loss $L(\theta)$ and parameter $\theta_i$:

$$\frac{\partial L}{\partial \theta_i} \approx \frac{L(\theta + h e_i) - L(\theta - h e_i)}{2h}$$

Error analysis: Centred differences have error $O(h^2)$ (vs $O(h)$ for forward differences). The optimal step size balances truncation error ($O(h^2)$) against floating-point cancellation error ($O(\epsilon_{\text{mach}}/h)$, where $\epsilon_{\text{mach}}$ is machine epsilon):

$$h^* \sim \epsilon_{\text{mach}}^{1/3}$$

Use $h \approx 10^{-5}$ for float64 and $h \approx 10^{-3}$ for float32.

Relative error check: Accept the gradient check if:

$$\frac{\|g_{\text{analytic}} - g_{\text{numeric}}\|}{\|g_{\text{analytic}}\| + \|g_{\text{numeric}}\| + \epsilon} < \text{tol} \quad (\text{e.g. } 10^{-7} \text{ in float64})$$
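A self-contained NumPy checker implementing exactly this recipe (names and tolerances are one reasonable choice):

```python
import numpy as np

def grad_check(f, grad_f, theta, h=1e-5, tol=1e-7):
    """Centred finite differences vs analytic gradient; returns (passed, rel_error)."""
    g_analytic = grad_f(theta)
    g_numeric = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        g_numeric[i] = (f(theta + e) - f(theta - e)) / (2 * h)
    rel = (np.linalg.norm(g_analytic - g_numeric)
           / (np.linalg.norm(g_analytic) + np.linalg.norm(g_numeric) + 1e-12))
    return rel < tol, rel

f = lambda t: np.sum(t ** 3)
grad_f = lambda t: 3 * t ** 2
print(grad_check(f, grad_f, np.random.randn(10)))
```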
D.2 When Gradient Checks Fail
Common failure modes:
| Symptom | Likely cause |
|---|---|
| Large relative error ($\gtrsim 10^{-2}$) throughout | $h$ too large (truncation) or float32 precision |
| Large relative error for specific parameters | Bug in backward for that parameter type |
| Near-zero relative error for all gradients | Loss is approximately linear in those parameters at the test point (check is uninformative) |
| Fails at kink (ReLU/max) | Gradient not defined at $z = 0$; test point near kink; choose test points away from kinks |
| Fails only for batch size 1 | BatchNorm statistics degenerate; use batch size $\ge 2$ for BN checks |
D.3 Gradient Check in PyTorch
from torch.autograd import gradcheck
def f(x):
return (x ** 2).sum()
x = torch.randn(5, requires_grad=True, dtype=torch.float64)
gradcheck(f, (x,), eps=1e-6, atol=1e-4, rtol=1e-4)
gradcheck automates centred finite differences for all inputs with requires_grad=True. Always use dtype=torch.float64 for gradient checking - float32 precision is insufficient for reliable checks.
Appendix E: Key Formulas Reference
E.1 Chain Rule Summary
| Setting | Formula |
|---|---|
| Scalar composition | $\frac{d}{dx} f(g(x)) = f'(g(x)) \, g'(x)$ |
| Vector composition | $J_{f \circ g}(x) = J_f(g(x)) \, J_g(x)$ |
| VJP (backprop step) | $\bar{x} = J_f(x)^\top \bar{y}$ |
| JVP (forward step) | $\dot{y} = J_f(x) \, \dot{x}$ |
E.2 Backpropagation Formulas
| Layer | Forward | Backward ($\bar{y}$ given) |
|---|---|---|
| Linear | $y = Wx + b$ | $\bar{x} = W^\top \bar{y}$, $\bar{W} = \bar{y}\,x^\top$, $\bar{b} = \bar{y}$ |
| Elementwise | $y = \sigma(x)$ | $\bar{x} = \bar{y} \odot \sigma'(x)$ |
| Softmax+CE | $L = -\log p_y$ | $\bar{z} = p - e_y$ |
| Residual | $y = x + F(x)$ | $\bar{x} = \bar{y} + J_F^\top \bar{y}$ |
| LayerNorm | $y = \gamma \odot \hat{x} + \beta$ | Complex (see 5.4); always passes signal |
E.3 Activation Derivatives
| Name | $\sigma(x)$ | $\sigma'(x)$ |
|---|---|---|
| ReLU | $\max(0, x)$ | $[x > 0]$ |
| Sigmoid | $1/(1+e^{-x})$ | $\sigma(x)(1 - \sigma(x))$ |
| Tanh | $\tanh x$ | $1 - \tanh^2 x$ |
| GELU | $x\,\Phi(x)$ | $\Phi(x) + x\,\phi(x)$ |
| SiLU | $x\,\sigma(x)$ | $\sigma(x)\big(1 + x(1 - \sigma(x))\big)$ |
| Softplus | $\ln(1 + e^x)$ | $\sigma(x)$ |
E.4 Initialisation Standards
| Method | Distribution | Variance | When |
|---|---|---|---|
| Xavier uniform | $U\big[-\sqrt{6/(n_{\text{in}}+n_{\text{out}})},\ \sqrt{6/(n_{\text{in}}+n_{\text{out}})}\big]$ | $2/(n_{\text{in}}+n_{\text{out}})$ | Sigmoid, tanh |
| Xavier normal | $\mathcal{N}\big(0,\ 2/(n_{\text{in}}+n_{\text{out}})\big)$ | $2/(n_{\text{in}}+n_{\text{out}})$ | Sigmoid, tanh |
| He uniform | $U\big[-\sqrt{6/n_{\text{in}}},\ \sqrt{6/n_{\text{in}}}\big]$ | $2/n_{\text{in}}$ | ReLU |
| He normal | $\mathcal{N}\big(0,\ 2/n_{\text{in}}\big)$ | $2/n_{\text{in}}$ | ReLU |
| GPT-2 residual | $\mathcal{N}(0,\ 0.02^2)$, residual projections scaled by $1/\sqrt{N}$ | - | Transformer residuals |
Appendix F: Deep Dive - Vanishing Gradients in Transformers
F.1 Why Transformers Don't Vanish
A naive reading of the vanishing gradient analysis (6.1) suggests that 96-layer transformers should suffer catastrophic vanishing. They don't. Here is why.
The residual stream analysis: In a pre-norm transformer, the residual stream after layer $l$ is:

$$x^{(l+1)} = x^{(l)} + F^{(l)}(x^{(l)})$$

where $F^{(l)}$ is the $l$-th sublayer (attention or MLP, wrapped in LayerNorm).

The gradient of the loss with respect to the input is:

$$\frac{\partial L}{\partial x^{(0)}} = \frac{\partial L}{\partial x^{(L)}} \prod_{l=0}^{L-1} \left( I + J_{F^{(l)}} \right)$$

At initialisation, the transformer weights are small, so $J_{F^{(l)}} \approx 0$ and the product $\approx I$. The gradient flows back unchanged through all layers. This is categorically different from a plain deep network, where the product of small Jacobians vanishes.
Gradient norm growth: As training progresses and the weights grow, $\partial F_\ell / \partial x^\ell$ becomes nontrivial. The gradient norm may grow with depth, but this is controlled by:
- LayerNorm dampening (see 6.6)
- GPT-2's $1/\sqrt{N}$ scaling of residual projections
- Gradient clipping (e.g. $\|g\|_2 \leq 1.0$)
The "edge of stability" phenomenon (Cohen et al., 2022): In practice, the maximum Hessian eigenvalue often approaches (twice the inverse learning rate) and oscillates there. This is a gradient flow regime where the training dynamics are neither fully stable nor unstable, and gradients are large enough to cause oscillation but not divergence.
F.2 Gradient Norm as Training Signal
Modern LLM training monitors gradient norm at every step. Typical patterns:
```
GRADIENT NORM ||∇_θ L||_2 DURING LLM TRAINING

 ||g||_2
    |            * spike (-> loss spike)
  1 +-----------* *----------------   clip threshold
    | ~~~~~~~~~~   ~~~~~~~~~~~~~~~~   normal training
    +------------------------------>  steps
```
Patterns:
- Steady $\|\nabla_\theta L\|_2 < 1$: healthy training, clipping inactive
- Sudden spike -> loss spike -> recovery: a numerical event (often a "bad" batch; LLM training sees roughly 1-3 such events per trillion tokens at scale)
- Slow upward drift: the learning rate may be too high
Loss spike mitigation: When the gradient norm exceeds the clip threshold, the entire gradient update is scaled down. If the spike is from a corrupted batch, this prevents permanent damage to the model weights.
Gradient accumulation and norm: When using $K$ accumulation steps, each micro-batch contributes $1/K$ of the gradient. The global norm is computed on the accumulated gradient (after summation, before the optimiser step) - not on individual micro-batches - as in the sketch below.
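A minimal PyTorch sketch of this ordering, with an illustrative toy model and random data; the point is that `clip_grad_norm_` runs once, on the summed gradient:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
K = 4  # accumulation steps (illustrative)

optimizer.zero_grad()
for _ in range(K):                                # K micro-batches
    x, y = torch.randn(16, 8), torch.randn(16, 1)
    loss = F.mse_loss(model(x), y) / K            # each contributes 1/K
    loss.backward()                               # gradients sum in .grad
# Clip the global norm of the *accumulated* gradient, then step once
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```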
F.3 Per-Layer Gradient Norm Analysis
For diagnostic purposes, logging the gradient norm per layer (see the sketch below) reveals:
- Embedding gradients: often the largest, due to sparse updates (5.6)
- Early layers: smallest (furthest from the loss); potential vanishing
- Late layers: largest; potential exploding
- LayerNorm parameters: very small; $\gamma$ and $\beta$ converge quickly
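A minimal sketch of such per-layer logging in PyTorch, with a toy model standing in for a real network:

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.GELU(), torch.nn.Linear(32, 1))
loss = model(torch.randn(16, 8)).pow(2).mean()
loss.backward()

# Per-parameter gradient norms, grouped by module name
for name, p in model.named_parameters():
    if p.grad is not None:
        print(f"{name:20s} ||g|| = {p.grad.norm().item():.3e}")
```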
This per-layer analysis guided the design of:
- LARS (You et al., 2017) and LAMB (You et al., 2019): layer-wise adaptive learning rates based on the weight-to-gradient-norm ratio
- Muon (2024): orthogonalises the momentum-smoothed gradient of each hidden weight matrix via Newton-Schulz iterations, while AdamW handles the embedding and output layers
Appendix G: Historical Development
G.1 Timeline of Backpropagation
The development of backpropagation spans three centuries and multiple independent discoveries:
| Year | Event | Significance |
|---|---|---|
| 1676 | Leibniz develops the differential calculus (single-variable chain rule) | Mathematical foundation |
| 1744 | Euler uses variational methods (antecedent of reverse mode) | First "adjoint" idea |
| 1847 | Cauchy introduces gradient descent | The algorithm backprop serves |
| 1960 | Kalman filter (reverse-mode for linear dynamical systems) | AD in engineering |
| 1964 | Wengert introduces forward accumulation (forward-mode AD) | First explicit AD |
| 1970 | Linnainmaa's thesis: general reverse-mode accumulation | Full theoretical framework |
| 1974 | Werbos PhD thesis: backprop for neural networks | Connection to ML |
| 1982 | Hopfield networks (energy-based models with gradient) | Alternative to backprop |
| 1986 | Rumelhart, Hinton & Williams - "Learning representations by back-propagating errors" | Popularised backprop for NNs |
| 1991 | Hochreiter: vanishing gradient problem analysed | Identified depth barrier |
| 1997 | LSTM: gating to address vanishing gradient in RNNs | First scalable deep sequence model |
| 2012 | AlexNet: backprop on GPU at scale | Practical deep learning |
| 2015 | ResNets: residual connections for gradient flow | Enabled 100+ layer networks |
| 2016 | PyTorch / TensorFlow 1.0: autodiff frameworks | Democratised backprop |
| 2017 | Transformers: attention replaces BPTT | Solved long-range vanishing |
| 2018 | JAX: functional autodiff, JIT compilation | Research-grade AD |
| 2022 | FlashAttention: IO-aware backward pass | Efficient attention backward |
| 2022 | PyTorch 2.0 torch.compile | Graph-based kernel fusion |
| 2023 | FlashAttention-2: improved GPU utilisation | Standard for production |
| 2024 | FlashAttention-3: H100-optimised with async | State-of-art attention backward |
G.2 The Independent Discoveries
Backpropagation was independently discovered at least four times before becoming widely known:
1. Linnainmaa (1970): In his master's thesis, Linnainmaa presented the general algorithm for computing exact partial derivatives of any function composed of elementary operations - precisely what we today call reverse-mode AD.
2. Werbos (1974): Applied the same idea to multi-layer neural networks in his PhD thesis, but the work was largely ignored for over a decade.
3. Parker (1985): Independently rediscovered backpropagation for neural networks.
4. Rumelhart, Hinton & Williams (1986): Published the algorithm in Nature and produced the critical experimental demonstrations that convinced the community it could work. Their paper is the one most often cited today.
This pattern of independent rediscovery is common in mathematics - the ideas are "in the air" once the prerequisites are established. The chain rule (1676) + computation graphs (1960s) + gradient descent (1847) = backpropagation (inevitable).
G.3 The Hardware-Algorithm Co-evolution
The practical impact of backpropagation depends critically on hardware:
- CPU era (1986-2011): Backprop is theoretically valid but computationally slow. Networks with more than 3-4 layers were impractical.
- GPU era (2012-present): NVIDIA's CUDA (2007) enables massively parallel GEMM operations. The bottleneck shifts from FLOPS to memory bandwidth.
- Tensor core era (2017-present): NVIDIA Volta/Ampere/Hopper GPUs have dedicated matrix multiply accelerators. FP16/BF16 tensor cores achieve 10x the throughput of FP32.
- Memory wall: As models scale, the backward pass's memory requirements dominate. FlashAttention, ZeRO, gradient checkpointing all address the memory wall.
The 2024 FLOP-to-bandwidth ratio of the H100 (roughly 1000 BF16 TFLOPS against roughly 3.35 TB/s of HBM bandwidth) means that memory access, not computation, is the primary bottleneck for backprop at scale. This fundamental constraint is why FlashAttention's IO-aware design is so impactful.
Appendix H: Connections to Optimisation and Learning Theory
H.1 What the Gradient Tells Us
The gradient computed by backpropagation is the direction of steepest ascent in parameter space (by the first-order Taylor expansion). Gradient descent moves in the opposite direction:

$$\theta \leftarrow \theta - \eta\,\nabla_\theta L(\theta)$$
What the gradient does NOT tell us:
- The curvature of the loss landscape (need Hessian for that)
- The optimal step size
- Whether we are near a local minimum, saddle point, or maximum
- Whether the gradient is statistically well-estimated (needs large enough batch)
What the gradient DOES tell us:
- The direction of maximal increase (used negated for descent)
- The sensitivity of the loss to each parameter
- Which parameters are "active" (nonzero gradient) vs. saturated (near-zero gradient)
H.2 Gradient Stochasticity
In practice, the true gradient over the full data distribution is approximated by the stochastic gradient over a mini-batch of size $B$:

$$\hat{g} = \frac{1}{B} \sum_{i=1}^{B} \nabla_\theta\, \ell(x_i;\, \theta)$$

This is an unbiased estimator: $\mathbb{E}[\hat{g}] = \nabla_\theta L$.

Variance: $\mathrm{Var}(\hat{g}) \propto 1/B$. Larger batches have lower gradient variance (a more accurate gradient estimate) but provide diminishing returns beyond the "critical batch size" (McCandlish et al., 2018).

For LLMs: The critical batch size at GPT-3 scale is on the order of a few million tokens (GPT-3 itself trained with a 3.2M-token batch). Training near the critical batch size achieves the best loss-per-FLOP tradeoff: larger batches waste compute, smaller batches waste gradient-estimation quality.
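The $1/B$ variance scaling is easy to see empirically. Below is a small numpy sketch on a toy quadratic loss $\ell_i = \tfrac{1}{2}(\theta - x_i)^2$, whose per-example gradient is $\theta - x_i$ (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0
data = rng.standard_normal(100_000) + theta        # x_i ~ N(theta, 1)
grad = lambda batch: (theta - batch).mean()        # mini-batch gradient

for B in (1, 16, 256):
    estimates = [grad(rng.choice(data, B)) for _ in range(2000)]
    # Variance of the estimator shrinks roughly like 1/B
    print(B, np.var(estimates))
```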
H.3 The Gradient as a Sufficient Statistic
For first-order optimisers (SGD, Adam, AdaGrad, RMSprop), the gradient is the only information extracted from the forward-backward pass. Second-order information (Hessian curvature) is either ignored or approximated.
Why not use the full Hessian? For $n$ parameters, the Hessian is an $n \times n$ matrix - $n^2$ entries. Storing it is impossible: even a modest $n = 10^6$ gives $10^{12}$ FP32 values, about 4 TB, and modern LLMs are thousands of times larger. Inverting it ($O(n^3)$) is more impossible still.
Practical second-order methods use approximations:
- Diagonal: AdaGrad/Adam maintain diagonal curvature estimates ($O(n)$ memory)
- Kronecker-factored: K-FAC (see 02) approximates each layer's curvature block as a Kronecker product $A \otimes G$ ($O(n_{\text{in}}^2 + n_{\text{out}}^2)$ per layer)
- Low-rank: PSGD and Shampoo maintain low-rank or block-diagonal approximations
- Newton-Schulz: Muon (2024) approximately orthogonalises the gradient matrix with a few Newton-Schulz iterations
H.4 Generalisation and the Implicit Gradient Bias
Gradient descent with a small learning rate and small, noisy mini-batches does not merely find any minimum - it has an implicit bias toward flat minima (wide regions of low loss) over sharp minima (narrow valleys).

Conjecture (Keskar et al., 2017): Flat minima generalise better because small perturbations to the parameters barely change the loss - the solution is robust to noise.

Mathematical foundation: The SGD noise effectively adds a regularisation term proportional to $\mathrm{tr}(H)$ - the trace of the Hessian - biasing optimisation toward flat (low-trace-Hessian) minima.
This connects gradient computation (the topic of this section) to generalisation theory (a major open question in deep learning theory) - a reminder that understanding backpropagation fully requires understanding not just the mechanics, but the geometry of the loss landscape it navigates.
Appendix I: Practical Implementation Guide
I.1 Implementing Backprop from Scratch
When building a neural network framework from scratch, implement these components in order:
1. Primitive registry:
```python
# Primitive registry: each op is stored with its forward and its VJP
primitives = {}

def register_primitive(name, forward_fn, backward_fn):
    """Register a primitive op with its VJP."""
    primitives[name] = (forward_fn, backward_fn)

# Example: multiplication primitive
def mul_forward(x, y):
    return x * y

def mul_backward(x, y, g_out):
    return g_out * y, g_out * x  # (g_x, g_y)

register_primitive('mul', mul_forward, mul_backward)
```
2. Value class with gradient tracking:
```python
class Value:
    """Scalar autograd node (micrograd-style)."""

    def __init__(self, data, parents=(), op=''):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # VJP closure, set by the op that made this node
        self._parents = parents
        self._op = op

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other), 'mul')
        def _backward():
            self.grad += other.data * out.grad   # VJP for self
            other.grad += self.data * out.grad   # VJP for other
        out._backward = _backward
        return out

    def backward(self):
        # Topological sort of the graph, then replay VJPs in reverse
        topo, visited = [], set()
        def build_topo(v):
            if v not in visited:
                visited.add(v)
                for p in v._parents:
                    build_topo(p)
                topo.append(v)
        build_topo(self)
        self.grad = 1.0  # seed: dL/dL = 1
        for v in reversed(topo):
            v._backward()
```
This is essentially the complete autograd engine from Karpathy's micrograd (2020) - approximately 100 lines implement a working backprop engine.
3. Building blocks: Extend Value with __add__, __pow__, exp, log, relu, softmax - each with its VJP closure, as in the sketch below.
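For instance, here is a sketch of two such extensions in the same style, assuming the `Value` class defined above:

```python
# Each new op attaches its VJP as a closure (assumes the Value class above)

def __add__(self, other):
    out = Value(self.data + other.data, (self, other), 'add')
    def _backward():
        self.grad += out.grad    # d(x+y)/dx = 1
        other.grad += out.grad   # d(x+y)/dy = 1
    out._backward = _backward
    return out

def relu(self):
    out = Value(max(0.0, self.data), (self,), 'relu')
    def _backward():
        self.grad += (out.data > 0) * out.grad  # ReLU gate: 1[z > 0]
    out._backward = _backward
    return out

Value.__add__ = __add__
Value.relu = relu
```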
I.2 Common Implementation Bugs
Bug 1: Overwriting instead of accumulating gradients
```python
# Wrong: erases previous contributions at fan-out nodes!
self.grad = other.data * out.grad

# Correct: accumulates (fan-out nodes receive a sum)
self.grad += other.data * out.grad
```
Bug 2: Forgetting to zero gradients between batches
```python
# Wrong: gradient accumulates across batches
loss = model(x)
loss.backward()
optimizer.step()

# Correct:
optimizer.zero_grad()  # <- must come before backward
loss = model(x)
loss.backward()
optimizer.step()
```
Bug 3: Not detaching from the graph for inference
```python
# Wrong: builds the autograd graph unnecessarily during inference
prediction = model(x)

# Correct: disable graph construction
with torch.no_grad():
    prediction = model(x)
```
Bug 4: Shape mismatch in weight gradient
```python
# Wrong: delta @ x only works for batch size 1 with these shapes
grad_W = delta @ x  # (n_out, 1) @ (1, n_in)

# Correct: outer product for a single sample
grad_W = np.outer(delta, x)  # (n_out, n_in)

# Correct: batched
grad_W = (1 / B) * Delta @ X.T  # (n_out, B) @ (B, n_in) = (n_out, n_in)
```
I.3 Testing Checklist
Before deploying any backprop implementation:

- Gradient check passes for all primitive operations (relative error $< 10^{-7}$ in float64)
- Loss decreases monotonically for a small enough learning rate (verify on a toy problem)
- Gradients are zero for frozen parameters
- Gradient accumulation at fan-out nodes verified (a shared weight receives the sum; see the test below)
- Shape of each gradient matches the shape of the corresponding parameter
- Memory usage does not grow across training steps (no retained graphs)
- Higher-order gradients work if needed (use create_graph=True in PyTorch)
- Mixed precision: FP16 forward, FP32 gradient accumulation, loss scaling in place
Appendix J: Connections to Information Theory and Statistics
J.1 Fisher Information and the Natural Gradient
The ordinary gradient measures the steepest direction in parameter space with respect to the Euclidean metric. But parameter space has a natural metric induced by the probability distribution - the Fisher information metric.
Fisher information matrix:

$$F(\theta) = \mathbb{E}_{x \sim p_\theta}\!\left[\nabla_\theta \log p_\theta(x)\; \nabla_\theta \log p_\theta(x)^\top\right]$$

Natural gradient (Amari, 1998):

$$\tilde{\nabla}_\theta L = F(\theta)^{-1}\, \nabla_\theta L$$

The natural gradient is the steepest direction in the distributional geometry of the model - invariant to reparametrisation. Computing it exactly requires inverting $F$, which costs $O(n^3)$.
K-FAC (see 02) approximates $F$ as a Kronecker product per layer, making the natural gradient step tractable. It remains the most principled second-order optimiser for neural networks.
For LLMs: The approximation used in practice is Adam's diagonal second-moment estimate (the second moment of the gradient as a proxy for the diagonal of the Fisher). This is crude but sufficient - Adam is, loosely, a diagonal natural gradient step.
J.2 Gradient as Score Function
For a probabilistic model $p_\theta(x)$, the gradient of the log-likelihood is the score function:

$$s(\theta) = \nabla_\theta \log p_\theta(x)$$

The score function is the quantity computed by backpropagation during maximum likelihood estimation. Properties:

- $\mathbb{E}_{x \sim p_\theta}[s(\theta)] = 0$ (the score has zero mean)
- $F(\theta) = \mathbb{E}_{x \sim p_\theta}\!\left[s(\theta)\,s(\theta)^\top\right]$ (Fisher information = variance of the score)
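Both properties are easy to verify numerically for a categorical model parametrised by logits, where the score is $s = e_x - p$. A Monte Carlo sketch (not library code):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([0.5, -1.0, 0.3])
p = np.exp(logits) / np.exp(logits).sum()

# Score of a categorical w.r.t. logits: s(x) = e_x - p
xs = rng.choice(3, size=200_000, p=p)
scores = np.eye(3)[xs] - p

print(scores.mean(axis=0))           # ~0: the score has zero mean
F_mc = scores.T @ scores / len(xs)   # Monte Carlo E[s s^T]
F_exact = np.diag(p) - np.outer(p, p)
print(np.abs(F_mc - F_exact).max())  # small: Fisher = variance of score
```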
For language models: The negative log-likelihood $-\log p_\theta(y \mid x)$ has logit gradient $p - e_y$ - the same formula from 5.3, now understood as the negative score.
J.3 KL Divergence and the Gradient of ELBO
In variational inference and RL (RLHF), we often need gradients of KL divergences. For discrete distributions:

$$D_{\mathrm{KL}}(p \,\|\, q) = \sum_x p(x) \log \frac{p(x)}{q(x)}$$

This is computed via backprop through the log-probabilities of the policy under KL regularisation - the precise form used in RLHF's PPO loss, which includes a KL penalty between the fine-tuned policy $\pi_\theta$ and the reference model $\pi_{\text{ref}}$.
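A minimal PyTorch sketch of backprop through such a KL penalty; the logit shapes and variable names are illustrative, with the reference distribution held fixed:

```python
import torch
import torch.nn.functional as F

policy_logits = torch.randn(1, 10, requires_grad=True)  # trainable policy
ref_logits = torch.randn(1, 10)                         # frozen reference

logp = F.log_softmax(policy_logits, dim=-1)
logq = F.log_softmax(ref_logits, dim=-1)
kl = (logp.exp() * (logp - logq)).sum()  # D_KL(pi_theta || pi_ref)

kl.backward()               # backprop through the log-probabilities
print(policy_logits.grad)   # gradient of the KL penalty w.r.t. the logits
```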
References
- Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986) - "Learning representations by back-propagating errors." Nature, 323, 533-536. The canonical backpropagation paper.
- Linnainmaa, S. (1970) - "The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors." Master's thesis, University of Helsinki. First general reverse-mode AD.
- Hochreiter, S. (1991) - "Untersuchungen zu dynamischen neuronalen Netzen." Diploma thesis, TU Munich. First analysis of vanishing gradients.
- Glorot, X. & Bengio, Y. (2010) - "Understanding the difficulty of training deep feedforward neural networks." AISTATS. Xavier initialisation.
- He, K. et al. (2015) - "Delving Deep into Rectifiers." ICCV. He initialisation for ReLU networks.
- He, K. et al. (2016) - "Deep Residual Learning for Image Recognition." CVPR. ResNets and gradient highways.
- Ba, J. et al. (2016) - "Layer Normalization." arXiv:1607.06450. LayerNorm for transformers.
- Vaswani, A. et al. (2017) - "Attention Is All You Need." NeurIPS. The transformer architecture and its attention backward pass.
- Amari, S. (1998) - "Natural Gradient Works Efficiently in Learning." Neural Computation. Natural gradient and Fisher information.
- Martens, J. & Grosse, R. (2015) - "Optimizing Neural Networks with Kronecker-factored Approximate Curvature." ICML. K-FAC.
- Hu, E. et al. (2022) - "LoRA: Low-Rank Adaptation of Large Language Models." ICLR. LoRA backward pass.
- Dao, T. et al. (2022) - "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness." NeurIPS. IO-aware backward for attention.
- Cohen, J. et al. (2021) - "Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability." ICLR. The edge-of-stability phenomenon.
- Dao, T. (2023) - "FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning." ICLR 2024. FlashAttention-2.
- Liu, S.-Y. et al. (2024) - "DoRA: Weight-Decomposed Low-Rank Adaptation." ICML. DoRA backward analysis.
Appendix K: Summary Tables
K.1 Backpropagation Algorithm Summary
```
COMPLETE BACKPROPAGATION ALGORITHM

INPUT: network weights theta, training pair (x, y)

PHASE 1 - FORWARD PASS
    a^0 = x
    for l = 1, 2, ..., L:
        z^l = W^l a^{l-1} + b^l           (cache z^l and a^{l-1})
        a^l = sigma^l(z^l)                (cache a^l)
    y_hat = a^L
    loss  = L(y_hat, y)

PHASE 2 - BACKWARD PASS
    delta^L = dL/dz^L                     (output-layer gradient, loss-specific)
    for l = L-1, L-2, ..., 1:
        delta^l = (W^{l+1})^T delta^{l+1} (*) sigma'^l(z^l)     ((*) = elementwise)

PHASE 3 - GRADIENT ASSEMBLY
    for l = 1, 2, ..., L:
        grad_{W^l} L = delta^l (a^{l-1})^T
        grad_{b^l} L = delta^l

PHASE 4 - PARAMETER UPDATE
    theta <- theta - eta * grad_theta L   (or Adam/RMSprop update)
```
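The four phases transcribe directly into numpy. Below is a sketch for a small sigmoid MLP with squared-error loss; the layer sizes, data, and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 2]
W = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes, sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]
sigma = lambda z: 1 / (1 + np.exp(-z))

x, y = rng.standard_normal(4), rng.standard_normal(2)

# Phase 1 - forward pass (cache z and a)
a, zs = [x], []
for Wl, bl in zip(W, b):
    zs.append(Wl @ a[-1] + bl)
    a.append(sigma(zs[-1]))

# Phase 2 - backward pass (loss = 0.5 * ||a^L - y||^2; sigma' = a(1-a))
delta = (a[-1] - y) * a[-1] * (1 - a[-1])
deltas = [delta]
for l in range(len(W) - 1, 0, -1):
    delta = (W[l].T @ delta) * a[l] * (1 - a[l])
    deltas.insert(0, delta)

# Phase 3 - gradient assembly: grad_W^l = delta^l (a^{l-1})^T
grads_W = [np.outer(d, al) for d, al in zip(deltas, a[:-1])]
grads_b = deltas

# Phase 4 - parameter update (plain gradient descent)
eta = 0.1
W = [Wl - eta * g for Wl, g in zip(W, grads_W)]
b = [bl - eta * g for bl, g in zip(b, grads_b)]
```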
K.2 Complexity Summary
| Operation | Time | Memory |
|---|---|---|
| Forward pass ($L$ layers, width $n$) | $O(Ln^2)$ | $O(Ln)$ cached activations |
| Backward pass | $O(Ln^2)$ | $O(Ln)$ error signals |
| Full Jacobian via finite differences | $O(\|\theta\|)$ function evaluations | $O(Ln)$ |
| Full Jacobian via backprop | $O(m)$ backward passes ($m$ outputs; $m = 1$ for a scalar loss) | $O(Ln)$ |
| Hessian-vector product | $\approx 2\times$ a gradient (double backward) | $O(Ln)$ |
| Gradient checkpointing | one extra forward pass ($\approx 33\%$ more compute) | $O(\sqrt{L}\,n)$ |
| FlashAttention forward | $O(T^2 d)$, tiled | $O(Td)$; no $T \times T$ matrix materialised |
| FlashAttention backward | $O(T^2 d)$, with recomputation | $O(Td)$ |
K.3 Gradient Flow Interventions
| Problem | Diagnosis | Intervention |
|---|---|---|
| Vanishing gradients | $\|\delta^l\| \to 0$ in early layers | ReLU/GELU, He init, residual connections |
| Exploding gradients | $\|\delta^l\| \to \infty$; gradient norm blows up | Gradient clipping, LR warmup |
| Dead neurons | $\sigma'(z) = 0$ for most units in a layer | Leaky ReLU, better init, BN |
| Slow convergence | $\|\nabla L\| \approx 0$ at a saddle | Momentum, Adam, noise injection |
| Oscillating loss | gradient norm spikes | Reduce LR, increase batch |
| NaN gradients | FP16 overflow/underflow, $\log 0$ | Loss scaling, check log/softmax |