Markov Chains: Part 10: Common Mistakes to Appendix Y: Connections to Physics
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Confusing stationarity with convergence | Stationarity ($\pi P = \pi$) is a property of a distribution; convergence ($\mu_n \to \pi$) is about the chain's trajectory. The chain may start at $\pi$ and be stationary without "converging" (it's already there). | Distinguish: stationarity is the destination; convergence is the journey. |
| 2 | Assuming irreducibility implies unique stationary distribution | An irreducible chain on an infinite state space may be null-recurrent (no normalisable stationary distribution), e.g., SRW on $\mathbb{Z}$. | Unique stationary distribution requires irreducibility + positive recurrence. |
| 3 | Forgetting aperiodicity for convergence | An irreducible positive-recurrent but periodic chain has a unique stationary distribution but does NOT converge - it oscillates. | Convergence to stationarity requires ergodicity = irreducible + positive recurrent + aperiodic. |
| 4 | Treating detailed balance as necessary for stationarity | Detailed balance ($\pi_i P_{ij} = \pi_j P_{ji}$) is sufficient but not necessary for $\pi P = \pi$. Many chains (e.g., cyclic chains) have stationary distributions without satisfying detailed balance. | Detailed balance $\Rightarrow$ stationarity, but stationarity $\not\Rightarrow$ detailed balance. |
| 5 | Confusing the transition matrix orientation | Some books write $P_{ij}$ as the probability from $j$ to $i$ (column stochastic); others write it as the probability from $i$ to $j$ (row stochastic). This flips $\pi P = \pi$ vs. $P\pi = \pi$. | Fix a convention: row stochastic means rows sum to 1 and $\pi P = \pi$ (left eigenvector). |
| 6 | Ignoring burn-in in MCMC | The first MCMC samples come from a distribution close to $\mu_0$ (the initial distribution), not $\pi$. Including them biases posterior estimates. | Always discard a burn-in period. Use $\hat{R}$ and ESS to assess convergence. |
| 7 | Confusing mixing time with burn-in | Mixing time $t_{\text{mix}}$ is a property of the chain (how long until the worst-case distribution is close to $\pi$). Burn-in is a practical heuristic. They are related but not equal. | Burn-in should be at least $t_{\text{mix}}$; more if starting far from $\pi$. |
| 8 | Assuming high acceptance rate = well-mixed MCMC | A chain that always accepts (tiny step size in MH) explores very slowly; acceptance rate ~1 with step size $\to 0$ gives high acceptance but poor mixing. | Target ~23% acceptance in high dimensions (optimal for Gaussian targets). |
| 9 | Using $P^n$ to compute the stationary distribution for large $n$ | Repeated matrix powering converges to the rank-1 matrix $\mathbf{1}\pi$, but floating-point errors accumulate. | Use power iteration on the distribution vector: $\mu_{n+1} = \mu_n P$, which is numerically stable. |
| 10 | Misidentifying the state space for LLM generation | A bigram LM has states = vocabulary (size $V$). A full autoregressive LM has states = entire contexts - exponential in context length. | The Markov chain for a transformer is on the context window, not the vocabulary alone. |
| 11 | Confusing transient and absorbing | A transient state will eventually be left for good; an absorbing state is one from which the chain never leaves. Every absorbing state is recurrent (trivially), not transient. | Transient: $f_i < 1$; absorbing: $P_{ii} = 1$. Absorbing states are recurrent. |
| 12 | Expecting MCMC samples to be independent | MCMC samples are correlated (autocorrelated) - consecutive samples are not independent. ESS $< T$ measures effective independence. | Report ESS, not the raw sample count. Use thinning to reduce autocorrelation if needed. |
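Two of these mistakes are easy to see numerically. The snippet below illustrates mistakes 3 and 9 with toy matrices chosen for this sketch (not from the lesson):

```python
import numpy as np

# Mistake 3: a periodic chain has a stationary distribution (1/2, 1/2),
# but mu_n never converges to it - it oscillates forever.
P_flip = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
mu = np.array([1.0, 0.0])
for n in range(4):
    print(n, mu)                            # (1,0), (0,1), (1,0), (0,1), ...
    mu = mu @ P_flip

# Mistake 9: iterate the distribution vector instead of powering the matrix.
rng = np.random.default_rng(0)
A = rng.random((50, 50))
P = A / A.sum(axis=1, keepdims=True)        # random row-stochastic matrix
mu = np.ones(50) / 50
for _ in range(500):
    mu = mu @ P                             # O(N^2) per step, numerically stable
print(np.max(np.abs(mu @ P - mu)))          # ~1e-16: mu is stationary
```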
11. Exercises
Exercise 1 * - Transition Matrix and Distribution Evolution
A weather model has two states: Sunny (1) and Rainy (2), with a given $2 \times 2$ transition matrix $P$.
(a) Starting from $\mu_0 = (1, 0)$ (certainly sunny), compute $\mu_1 = \mu_0 P$ and $\mu_2 = \mu_1 P$.
(b) Compute the 3-step transition matrix $P^3$. What is the probability of rain three days after a sunny day, $(P^3)_{12}$?
(c) Find the stationary distribution by solving $\pi P = \pi$ with $\pi_1 + \pi_2 = 1$.
(d) Verify that $\mu_n \to \pi$ as $n \to \infty$ by computing $\mu_n$ for increasing $n$.
Exercise 2 * - State Classification
Consider the Markov chain on states $\{1, 2, 3, 4\}$ with a given transition matrix $P$.
(a) Draw the transition diagram. Identify all communicating classes.
(b) Classify each state as recurrent or transient. Justify using the return probability criterion.
(c) For the recurrent states, determine their period. Are any states aperiodic?
(d) Starting from state 1, what is the probability of eventually reaching state 4?
Exercise 3 * - Computing Stationary Distributions
(a) Implement stationary_power_iteration(P, n_iter=1000) that starts from a uniform distribution and returns $\mu_0 P^{n_{\text{iter}}}$ after $n_{\text{iter}}$ iterations. Apply it to the chain with $P_{ij} = 1/N$ for all $i, j$, and to a random row-stochastic matrix.
(b) Verify that your result satisfies $\pi P = \pi$ to a small numerical tolerance.
(c) For a doubly-stochastic matrix (rows AND columns sum to 1), prove the stationary distribution is uniform without solving any equations. Verify numerically.
(d) For a birth-death chain with given birth and death probabilities on $\{0, 1, \ldots, N\}$ with reflecting barriers, compute $\pi$ analytically and verify against power iteration.
Exercise 4 ** - Detailed Balance and Birth-Death Chains
(a) Verify that the two-state chain from Exercise 1 satisfies detailed balance with its stationary distribution $\pi$.
(b) Implement a queueing birth-death chain with given arrival and service probabilities on the states $\{0, 1, \ldots, 20\}$ with a reflecting boundary at 20. Compute the stationary distribution and verify it against power iteration.
(c) For the cyclic chain on $N$ states that moves clockwise with probability $p \ne 1/2$ (and counterclockwise otherwise), find the stationary distribution. Does it satisfy detailed balance? Explain why not.
(d) Show that if $P$ is doubly stochastic, then detailed balance holds with the uniform distribution iff $P$ is also symmetric.
Exercise 5 ** - Mixing Time and Spectral Gap
(a) For the two-state chain in Exercise 1, compute the eigenvalues of $P$. What is the spectral gap $\gamma = 1 - |\lambda_2|$?
(b) Using the spectral gap bound, upper-bound the mixing time $t_{\text{mix}}(\varepsilon)$.
(c) Compute the TV distance $\lVert \mu_n - \pi \rVert_{TV}$ for increasing $n$ and plot the convergence. At what $n$ does it drop below your chosen tolerance $\varepsilon$?
(d) For the lazy version $P_{\text{lazy}} = \frac{1}{2}(P + I)$, redo parts (a)-(c). How does the spectral gap change? How does the mixing time change?
Exercise 6 ** - Metropolis-Hastings
(a) Implement Metropolis-Hastings with a Gaussian random walk proposal for a double-well target $\pi(x) \propto e^{-U(x)}$. Run for $T$ steps with proposal scale $\sigma$.
(b) Verify that your sampler satisfies detailed balance: for two states $x, x'$, check numerically that $\pi(x)\, T(x, x') = \pi(x')\, T(x', x)$, where $T$ is the MH transition kernel.
(c) Compute the acceptance rate and the effective sample size (ESS). How does the ESS change as $\sigma$ varies over a grid of values?
(d) Estimate $\mathbb{E}_\pi[f(X)]$ for a test function such as $f(x) = x^2$ from your samples. Compare to the true value (computed numerically).
Exercise 7 *** - PageRank
(a) Implement PageRank power iteration for a 5-node graph with a given adjacency matrix and damping factor $\alpha = 0.85$. Handle dangling nodes.
(b) Verify that your PageRank vector satisfies $\pi \hat{P} = \pi$ to a small numerical tolerance.
(c) Add a new node 6 that links to all existing nodes, with all existing nodes linking to node 6. How does the PageRank of node 1 change? Why?
(d) Implement personalised PageRank, where the teleportation vector $v$ is non-uniform. Compare to standard PageRank.
Exercise 8 *** - HMM Forward Algorithm and Viterbi
An HMM models DNA sequences with 2 hidden states (CpG island vs. non-CpG) and 4 observations (A, C, G, T).
(a) Given the transition and emission matrices (the HMM notebook provides the values), implement the forward algorithm and compute the likelihood $P(o_{1:T})$ of an observation sequence under equal initial probabilities.
(b) Implement the Viterbi algorithm to find the most likely hidden state sequence for the same observation sequence.
(c) Verify that the forward probabilities satisfy $\sum_i \alpha_t(i) = P(o_{1:t})$ at each $t$.
(d) Compare Viterbi decoding to the most-probable-state-per-position (posterior decoding using the backward algorithm). When do they differ?
12. Why This Matters for AI (2026 Perspective)
| Concept | AI/ML Application | Impact |
|---|---|---|
| Transition matrix / stationary dist. | PageRank web graph; LLM token distribution | Foundation for all sequential reasoning |
| Ergodic theorem | MCMC posterior estimation; time-averaging in RL | Justifies replacing expectation with time average |
| Perron-Frobenius | PageRank convergence proof; convergence of power iteration | Guarantees unique ranking vectors exist |
| Detailed balance / reversibility | Correctness of MH, Gibbs, SGLD | All Bayesian MCMC algorithms rely on this |
| Spectral gap / mixing time | MCMC convergence rate; warm-start certification | Explains why MCMC works (or fails) in practice |
| Metropolis-Hastings | Bayesian neural network posteriors; sampling EBMs | Foundation of probabilistic inference in ML |
| Gibbs sampling | LDA topic models; Boltzmann machine inference | Tractable posterior sampling via conditionals |
| HMC / NUTS | Stan, PyMC, posterior sampling in Bayesian DL | Gold standard for Bayesian inference on continuous parameter spaces |
| MDP / Bellman equation | RL algorithms (PPO, SAC, DQN, RLHF) | Every RL agent solves a Markov chain problem |
| HMM forward-backward | Speech recognition, gene prediction, legacy NLP | Efficient inference on latent sequence models |
| CTMC / generator matrix | LLM serving queues; continuous-time RL | Latency modelling and event-driven simulation |
| Diffusion as CTMC | DDPM/DDIM forward process; score-matching | Connects generative modelling to Markov chain theory |
| Power iteration | PageRank; dominant eigenvector computation | $O(\text{nnz})$-per-iteration algorithm that scales to trillion-node graphs |
| Lazy chains / warm starts | MCMC initialisation strategy | Reduces burn-in and improves sample efficiency |
13. Conceptual Bridge
Markov chains occupy the central position in the probability curriculum: they are the first class of stochastic processes with nontrivial structure - neither completely independent (iid) nor completely dependent. The Markov property gives just enough memory to model real sequential phenomena while remaining mathematically tractable.
Looking backward: Markov chains build on everything in Chapters 1-6. The transition matrix is a row-stochastic matrix - linear algebra from Chapters 2-3. The stationary distribution is an eigenvector problem solved by Perron-Frobenius. Computing stationary distributions uses the conditional probability machinery from Section03. The ergodic theorem is a strengthening of the law of large numbers (Section06). The Metropolis-Hastings acceptance ratio uses Bayes' theorem (Section03). Mixing times use concentration inequalities (Section05) through the spectral gap. The stochastic process framework of filtrations and stopping times (Section06) provides the rigorous foundation.
Looking forward: Markov chains are the gateway to Chapter 7 (Statistics), where MCMC enables Bayesian inference with intractable posteriors; Chapter 8 (Optimisation), where Markov chain mixing theory explains why SGD finds flat minima; and Chapter 9 (Information Theory), where the entropy of a stationary distribution connects to compression. Markov Decision Processes - the reinforcement learning setting - are a direct extension requiring optimisation over policies.
The deep connection: The central theorem of Markov chain theory - ergodicity implies $\mu P^n \to \pi$ - is the probabilistic analogue of the contraction mapping theorem from analysis: repeated application of the transition operator contracts all distributions toward the unique fixed point $\pi$. The spectral gap quantifies the contraction rate, just as the Lipschitz constant does in the deterministic setting. This structural parallel runs through SGD convergence theory, where the gradient operator acts as a stochastic contraction.
MARKOV CHAINS IN THE CURRICULUM
========================================================================
Section01 Random Variables -------------------------------------+
Section02 Distributions --------------------------------------+ |
Section03 Joint Distributions ---------------------------+ | |
Section04 Expectation & Moments ----------------------+ | | |
Section05 Concentration Inequalities -------------+ | | | |
Section06 Stochastic Processes ---------------+ | | | | |
v v v v v v
+---------------------------------+
| Section07 MARKOV CHAINS |
| |
| Markov property |
| Transition matrices |
| Stationary distributions |
| Mixing times, spectral gap |
| MCMC (MH, Gibbs, HMC) |
| PageRank, MDP, HMM |
+--------------+------------------+
|
+-----------------------------+--------------------+
v v v
Ch7: Statistics Ch8: Optimisation Ch9: Info Theory
(MCMC for Bayes, (SGD mixing theory, (Entropy of \\pi,
posterior inference) Langevin dynamics) KL from stationarity)
v v
RL/RLHF: MDP Diffusion models
(policy eval, (DDPM as CTMC,
value functions) reverse Markov)
========================================================================
Appendix A: Harmonic Functions and the Dirichlet Problem
A function $h : S \to \mathbb{R}$ is harmonic for a Markov chain at state $i$ if:
$$h(i) = \sum_j P_{ij}\, h(j)$$
That is, $h(i)$ equals the expected value of $h$ one step later. Harmonic functions are the "conserved quantities" of Markov chains - if $h(X_0)$ is the starting value, then $h(X_n)$ forms a martingale.
Maximum principle: For an irreducible chain on a finite state space, the only bounded harmonic functions are constants. This is the probabilistic analogue of the maximum principle for harmonic functions in PDE theory.
Dirichlet problem: On a chain with absorbing boundary $\partial$ and interior $I$, the function $h(i) = \mathbb{E}_i[g(X_\tau)]$ (the expected boundary value $g$ at the hitting time $\tau$ of $\partial$) is the unique solution to:
$$h(i) = \sum_j P_{ij}\, h(j) \quad (i \in I), \qquad h(i) = g(i) \quad (i \in \partial)$$
Application - gambler's ruin: Let $h(i) = \mathbb{P}_i(\text{hit } N \text{ before } 0)$. Then $h$ is harmonic on $\{1, \ldots, N-1\}$ with boundary conditions $h(0) = 0$, $h(N) = 1$. For the simple random walk: $h(i) = i/N$.
For AI: Value functions in reinforcement learning are harmonic up to rewards and discounting: $V(s) = \mathbb{E}[r + \gamma V(s')]$ is a Bellman equation - a discrete-time analogue of the Dirichlet problem with discount factor $\gamma$.
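The Dirichlet problem above is just a linear system, so gambler's ruin can be checked in a few lines. A minimal sketch ($N$ and the SRW transition probabilities are the standard textbook choices, not lesson-specific values):

```python
import numpy as np

# Gambler's ruin as a Dirichlet problem: solve h = Ph on the interior
# with h(0) = 0, h(N) = 1, then compare to the closed form h(i) = i/N.
N = 10
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0                   # absorbing boundary states
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5       # simple random walk in the interior

A = np.eye(N + 1) - P                     # interior rows: h(i) - sum_j P_ij h(j) = 0
A[0], A[N] = 0, 0                         # overwrite boundary rows with h(0)=0, h(N)=1
A[0, 0] = A[N, N] = 1.0
b = np.zeros(N + 1)
b[N] = 1.0
h = np.linalg.solve(A, b)
print(np.allclose(h, np.arange(N + 1) / N))   # True: h(i) = i/N
```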
Appendix B: Spectral Theory for Non-Reversible Chains
For non-reversible chains, the eigenvalues of $P$ can be complex (though all satisfy $|\lambda| \le 1$). The Jordan form replaces the diagonal eigendecomposition.
Pseudo-spectrum: For nearly-reversible chains, the pseudo-spectrum can be much larger than the spectrum, causing transient amplification before eventual convergence.
Non-reversible speedup: Some non-reversible chains mix faster than any reversible chain with the same stationary distribution. The "lifted" or "skewed" chains in the literature achieve mixing time $O(1/\Phi)$ vs. $\Theta(1/\Phi^2)$ for reversible chains, where $\Phi$ is the Cheeger constant.
Lifting: A common technique to construct non-reversible fast-mixing chains: add a "direction" variable to the state, creating a Markov chain on $S \times \{+1, -1\}$ that preferentially moves in one direction while still having $\pi$ as its marginal stationary distribution.
Appendix C: Advanced MCMC Methods
C.1 Stochastic Gradient Langevin Dynamics (SGLD)
For Bayesian inference on large datasets, standard MH requires evaluating the likelihood on the full dataset at every step - too expensive. SGLD combines SGD with Gaussian noise:
$$\theta_{t+1} = \theta_t + \frac{\epsilon_t}{2}\, \widehat{\nabla \log p}(\theta_t \mid \text{data}) + \eta_t, \qquad \eta_t \sim \mathcal{N}(0, \epsilon_t I)$$
where $\widehat{\nabla \log p}$ is the mini-batch stochastic gradient of the log-posterior.
For step sizes satisfying $\sum_t \epsilon_t = \infty$ and $\sum_t \epsilon_t^2 < \infty$ (Robbins-Monro conditions), the chain converges to samples from the posterior. At a small but fixed step size $\epsilon$, the chain samples from an approximate posterior within $O(\epsilon)$ of the true posterior.
For AI: SGLD underlies Bayesian deep learning at scale. It explains why SGD with learning rate decay has regularisation properties: the decaying noise variance means the chain converges to a tighter distribution around the posterior mode.
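A minimal SGLD sketch on a 1D Gaussian posterior with synthetic data; the dataset, prior, batch size, and step-size schedule below are illustrative assumptions, not values from the lesson:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=1000)          # x_i ~ N(theta_true = 2, 1)
N, batch = len(data), 50

def grad_log_post(theta, xb):
    # prior N(0, 10^2); likelihood N(theta, 1); minibatch-rescaled gradient
    return -theta / 100.0 + (N / len(xb)) * np.sum(xb - theta)

theta, samples = 0.0, []
for t in range(1, 5001):
    eps = 0.5 / (N * t ** 0.6)                  # decaying Robbins-Monro steps
    xb = rng.choice(data, size=batch)
    theta += 0.5 * eps * grad_log_post(theta, xb) + np.sqrt(eps) * rng.normal()
    samples.append(theta)
print(np.mean(samples[2500:]))                  # ~2.0, the posterior mean
```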
C.2 Parallel Tempering (Replica Exchange)
For multimodal targets, single chains can get stuck in one mode. Parallel tempering runs $K$ chains at inverse temperatures $\beta_1 > \beta_2 > \cdots > \beta_K$ (with $\beta_1 = 1$ being the target). Periodically, adjacent chains $i$ and $j$ swap states with MH acceptance probability:
$$a_{\text{swap}} = \min\left(1,\; \frac{\pi(x_j)^{\beta_i}\, \pi(x_i)^{\beta_j}}{\pi(x_i)^{\beta_i}\, \pi(x_j)^{\beta_j}}\right)$$
High-temperature chains mix fast (flat distribution) and pass samples to low-temperature chains through swaps. This enables exploration of multiple modes.
C.3 Slice Sampling
Slice sampling introduces a uniform auxiliary variable $u$, creating a joint distribution on $(x, u)$ that is uniform on the "slice" region $\{(x, u) : 0 < u < \pi(x)\}$. Marginalising out $u$ recovers $\pi(x)$.
The slice sampler alternates between updating $u \mid x$ (trivial: uniform on $(0, \pi(x))$) and updating $x \mid u$ (uniform on the slice level set $\{x : \pi(x) > u\}$). No tuning parameter; acceptance rate = 1; effective for univariate distributions.
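For a standard Gaussian target the slice $\{x : \pi(x) > u\}$ is an explicit interval, so both alternating updates can be written directly. A sketch under that assumption (general targets need Neal's stepping-out/shrinkage procedure, omitted here):

```python
import numpy as np

# Slice sampler for pi(x) ∝ exp(-x^2/2): the slice is |x| < sqrt(-2 ln u).
rng = np.random.default_rng(2)
x, samples = 0.0, []
for _ in range(20000):
    u = rng.uniform(0, np.exp(-x * x / 2))     # vertical step: u | x
    half_width = np.sqrt(-2 * np.log(u))       # slice = (-w, +w)
    x = rng.uniform(-half_width, half_width)   # horizontal step: x | u
    samples.append(x)
samples = np.array(samples)
print(samples.mean(), samples.var())           # ~0 and ~1, as for N(0, 1)
```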
Appendix D: Worked Examples
D.1 Google PageRank Computation (Mini Example)
Consider a 4-page web: page 1 links to 2,3; page 2 links to 4; page 3 links to 2; page 4 links to 1,3.
Transition matrix (with no damping for clarity):
$$P = \begin{pmatrix} 0 & 1/2 & 1/2 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 \end{pmatrix}$$
Stationary distribution (solve $\pi P = \pi$, $\sum_i \pi_i = 1$): $\pi = \left(\frac{2}{13}, \frac{4}{13}, \frac{3}{13}, \frac{4}{13}\right)$ - verified by direct computation.
With damping $\alpha = 0.85$: $\hat{P} = \alpha P + (1 - \alpha)\frac{J}{N}$, where $J$ is the all-ones matrix. Power iteration converges in ~30 iterations.
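The example is small enough to verify by power iteration; the script below reproduces both the undamped $\pi$ and the damped PageRank vector:

```python
import numpy as np

P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0]])
pi = np.ones(4) / 4
for _ in range(2000):
    pi = pi @ P
print(pi)                                  # [0.1538 0.3077 0.2308 0.3077]
print(np.array([2, 4, 3, 4]) / 13)         # matches (2, 4, 3, 4)/13

alpha = 0.85
P_hat = alpha * P + (1 - alpha) / 4        # damped matrix, J/N teleportation
pi = np.ones(4) / 4
for _ in range(30):
    pi = pi @ P_hat
print(pi, np.max(np.abs(pi @ P_hat - pi))) # near-converged after ~30 iterations
```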
D.2 MCMC on a Bimodal Distribution
Target: $\pi(x) = \frac{1}{2}\mathcal{N}(x; -a, 1) + \frac{1}{2}\mathcal{N}(x; +a, 1)$ (a mixture of two Gaussians at $\pm a$).
With Gaussian random walk MH at a small proposal scale $\sigma$: the chain mixes within one mode; inter-modal jumps are rare because the energy barrier between modes is several nats. Acceptance rate ~60%, but ESS/T ~0.02 (poor mixing).
With a larger $\sigma$, comparable to the mode separation: the chain jumps between modes; acceptance rate ~15%, ESS/T ~0.10 (better mixing).
This illustrates the exploration-exploitation tradeoff in MCMC proposal design.
D.3 HMM Example: Weather from Ice Core
Two hidden states: Ice Age (I), Warm Period (W). Observations: high (H) or low (L) dust in the ice core.
The transition matrix $T$ and emission matrix $E$ take the values given in the HMM notebook.
For an observation sequence of dust levels: the Viterbi algorithm gives the most likely hidden state sequence (high dust = ice age, low dust = warm period).
The forward algorithm gives the likelihood of the observation sequence.
Appendix E: Key Theorems and Proofs
E.1 Proof: Stationary Distribution is Unique for Finite Irreducible Chains
Theorem. A finite irreducible Markov chain has a unique stationary distribution $\pi$ with $\pi_i > 0$ for all $i$.
Proof. Consider the $(N-1)$-dimensional simplex $\Delta = \{\mu \in \mathbb{R}^N : \mu_i \ge 0, \sum_i \mu_i = 1\}$. The map $\mu \mapsto \mu P$ is a continuous map from $\Delta$ to $\Delta$. By Brouwer's fixed point theorem, it has at least one fixed point $\pi$.
For uniqueness: suppose $\pi$ and $\pi'$ are two stationary distributions. Since the chain is irreducible and finite, all states are positive recurrent, so $\pi_i = 1/m_i$, where $m_i$ is the mean return time to $i$. The mean return time is intrinsic to the chain (it equals $1/\pi_i$ and also $1/\pi'_i$ whenever $\pi$, $\pi'$ are stationary), so $\pi_i = \pi'_i$ for all $i$.
For $\pi_i > 0$: by irreducibility, from any state $j$ with $\pi_j > 0$ there exists a path of positive-probability transitions to $i$. Stationarity propagates positive mass along this path, giving $\pi_i > 0$.
E.2 Proof: Detailed Balance Implies Stationarity
Theorem. If $\pi_i P_{ij} = \pi_j P_{ji}$ for all $i, j$, then $\pi P = \pi$.
Proof. For each $j$:
$$(\pi P)_j = \sum_i \pi_i P_{ij} = \sum_i \pi_j P_{ji} = \pi_j \sum_i P_{ji} = \pi_j$$
where we used detailed balance in the second equality and the row-sum property of stochastic matrices in the fourth.
E.3 Proof: Convergence via Coupling (Sketch)
Theorem. Let $P$ be ergodic. For any initial distribution $\mu$ and any coupling $(X_n, Y_n)$ with $X_0 \sim \mu$, $Y_0 \sim \pi$, define the coalescence time $\tau = \min\{n : X_n = Y_n\}$. Then:
$$\lVert \mu P^n - \pi \rVert_{TV} \le \mathbb{P}(\tau > n)$$
Proof. For any event $A$:
$$\mathbb{P}(X_n \in A) - \mathbb{P}(Y_n \in A) = \mathbb{E}\left[\mathbf{1}\{X_n \in A\} - \mathbf{1}\{Y_n \in A\}\right]$$
On $\{\tau \le n\}$: $X_n = Y_n$ (the chains have coalesced), so the two indicators are equal and the difference vanishes. Therefore:
$$\left|\mathbb{P}(X_n \in A) - \mathbb{P}(Y_n \in A)\right| \le \mathbb{P}(\tau > n)$$
Taking the supremum over $A$ gives the result.
Appendix F: Python Recipes for Markov Chains
```python
import numpy as np
# -- 1. Power iteration for stationary distribution
def stationary(P, tol=1e-12, max_iter=10000):
pi = np.ones(P.shape[0]) / P.shape[0]
for _ in range(max_iter):
pi_new = pi @ P
if np.max(np.abs(pi_new - pi)) < tol:
return pi_new
pi = pi_new
return pi
# -- 2. Simulate a Markov chain
def simulate_chain(P, x0, n_steps):
N = P.shape[0]
x = x0
trajectory = [x]
for _ in range(n_steps):
x = np.random.choice(N, p=P[x])
trajectory.append(x)
return np.array(trajectory)
# -- 3. Metropolis-Hastings for 1D target
def metropolis_hastings(log_pi, n_steps, sigma=0.5, x0=0.0):
x = x0
samples = []
for _ in range(n_steps):
x_prime = x + sigma * np.random.randn()
log_a = log_pi(x_prime) - log_pi(x)
if np.log(np.random.rand()) < log_a:
x = x_prime
samples.append(x)
return np.array(samples)
# -- 4. PageRank
def pagerank(A, alpha=0.85, tol=1e-8):
N = A.shape[0]
out_degree = A.sum(axis=1, keepdims=True)
# Fix dangling nodes
dangling = (out_degree == 0).flatten()
out_degree[dangling] = 1
P = A / out_degree
P[dangling] = 1 / N
teleport = np.ones((N, N)) / N
P_hat = alpha * P + (1 - alpha) * teleport
pi = np.ones(N) / N
for _ in range(1000):
pi_new = pi @ P_hat
if np.max(np.abs(pi_new - pi)) < tol:
return pi_new
pi = pi_new
return pi
# -- 5. HMM forward algorithm
def hmm_forward(obs, T, E, pi0):
"""obs: list of observations, T: NxN transition, E: NxM emission, pi0: initial."""
N = T.shape[0]
alpha = np.zeros((len(obs), N))
alpha[0] = pi0 * E[:, obs[0]]
for t in range(1, len(obs)):
alpha[t] = (alpha[t-1] @ T) * E[:, obs[t]]
    return alpha, alpha[-1].sum()  # alpha matrix and total likelihood
```
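A short driver showing how the recipes fit together. The two-state matrix, the $\mathcal{N}(0,1)$ log-density, and the 4-page adjacency matrix are example inputs chosen here, not part of the recipes; the recipe functions above are assumed to be in scope:

```python
import numpy as np

if __name__ == "__main__":
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    print(stationary(P))                        # ~[0.667, 0.333]
    traj = simulate_chain(P, x0=0, n_steps=20000)
    print(np.bincount(traj) / len(traj))        # empirical freq ~ stationary
    s = metropolis_hastings(lambda x: -0.5 * x**2, n_steps=20000)
    print(s.mean(), s.var())                    # ~0 and ~1 for a N(0,1) target
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 0, 1],
                  [0, 1, 0, 0],
                  [1, 0, 1, 0]])
    print(pagerank(A))                          # damped ranking of the 4-page web
```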
Appendix G: Connections to Other Areas
Spectral graph theory: For a random walk on an undirected graph $G$, the transition matrix $P = D^{-1}A$ (where $D$ is the diagonal degree matrix and $A$ is the adjacency matrix) has eigenvalues in $[-1, 1]$. The spectral gap of the normalised Laplacian $L = I - P$ equals the spectral gap of the walk. Well-connected (expander) graphs have large spectral gaps and fast-mixing random walks.
Information theory: The relative entropy (KL divergence) between the chain's distribution and stationarity decreases monotonically: $D(\mu_{n+1} \| \pi) \le D(\mu_n \| \pi)$. This is the data processing inequality applied to the Markov transition. The rate of KL decrease is related to the spectral gap.
Ergodic theory: The Markov chain ergodic theorem is a special case of Birkhoff's ergodic theorem for abstract measure-preserving systems. The stationarity condition says the law of the stationary chain is an invariant measure for the shift map on path space.
Linear programming: Computing the stationary distribution - $\pi P = \pi$, $\sum_i \pi_i = 1$, $\pi_i \ge 0$ - is a linear feasibility problem. The LP relaxation of combinatorial problems can sometimes be solved by finding stationary distributions of associated Markov chains.
Quantum computing: Quantum walks are the quantum analogue of random walks on graphs, with amplitudes instead of probabilities. They can achieve quadratic speedups over classical random walks for certain search and sampling problems.
Appendix H: Self-Assessment Checklist
Before moving to Chapter 7 (Statistics), verify you can:
- Write the transition matrix for any small Markov chain from a verbal description
- Compute $\mu_n = \mu_0 P^n$ for small $n$ by hand or code
- Classify all states of a small chain as communicating/recurrent/transient/absorbing
- Solve $\pi P = \pi$ for a $2 \times 2$ or $3 \times 3$ stochastic matrix
- State Perron-Frobenius and explain why the stationary distribution is unique
- Verify detailed balance for a given pair $(P, \pi)$
- Compute the spectral gap from eigenvalues and give a mixing time bound
- Implement Metropolis-Hastings for a 1D target and compute acceptance rate
- Implement power iteration for PageRank with dangling node handling
- Implement the HMM forward algorithm and compute the likelihood of an observation sequence
- Explain why MCMC sample autocorrelation reduces the effective sample size
- State the ergodic theorem and explain why it justifies MCMC estimation
Appendix I: Detailed Proofs - Perron-Frobenius and Mixing
I.1 Perron-Frobenius Theorem (Complete Proof)
We prove the Perron-Frobenius theorem for primitive stochastic matrices.
Step 1: Eigenvalue 1 exists. For any stochastic matrix $P$, $P\mathbf{1} = \mathbf{1}$ (with $\mathbf{1}$ the column vector of ones), so 1 is an eigenvalue. ($\mathbf{1}^{\top}P = \mathbf{1}^{\top}$ holds iff $P$ is doubly stochastic.) More relevantly: by the Brouwer fixed-point theorem on the simplex $\Delta$, the map $\mu \mapsto \mu P$ has a fixed point - the stationary distribution. This fixed point is a left eigenvector with eigenvalue 1.
Step 2: $|\lambda| \le 1$ for all eigenvalues. Let $\lambda$ be an eigenvalue with (right) eigenvector $v$ ($Pv = \lambda v$). Pick $i = \arg\max_j |v_j|$. Then $|\lambda|\,|v_i| = \left|\sum_j P_{ij} v_j\right| \le \sum_j P_{ij} |v_j| \le |v_i|$. So $|\lambda| \le 1$.
Step 3: $|\lambda| = 1$ implies $\lambda = 1$ for primitive $P$. For a primitive matrix, $P^m > 0$ (all entries strictly positive) for some $m$. Suppose $|\lambda| = 1$, $\lambda \ne 1$, with $Pv = \lambda v$ and $v \ne 0$. The equality case in Step 2 requires $|v_j| = |v_i|$ for all $j$ and all rows of $P^m$ to "align" the phases of $v$. But for $P^m > 0$, all rows of $P^m$ are strictly positive, and the alignment condition forces all $v_j$ equal - meaning $v \propto \mathbf{1}$ and $\lambda = 1$. Contradiction.
Step 4: Convergence. Since 1 is the unique eigenvalue on the unit circle for primitive $P$, all other eigenvalues satisfy $|\lambda_k| \le 1 - \gamma$ for some $\gamma > 0$. The spectral decomposition gives $P^n = \mathbf{1}\pi + E_n$, where $E_n \to 0$ at rate $(1 - \gamma)^n$.
I.2 Coupling Proof of Convergence (Complete)
Theorem. Let $P$ be ergodic with stationary distribution $\pi$. For any initial state $x$:
$$\lVert \delta_x P^n - \pi \rVert_{TV} \le \mathbb{P}(\tau > n)$$
where $\tau$ is the coalescence time of the Markov coupling (two chains started at $x$ and at stationarity).
Proof. Let $(X_n, Y_n)$ be a coupling with $X_0 = x$ and $Y_0 \sim \pi$, where both chains use the same transition $P$ and coalesce when they meet. Since $Y_0 \sim \pi$ and $\pi$ is stationary, $Y_n \sim \pi$ for all $n$.
For any event $A$:
$$\mathbb{P}(X_n \in A) - \mathbb{P}(Y_n \in A) = \mathbb{E}\left[\mathbf{1}\{X_n \in A\} - \mathbf{1}\{Y_n \in A\}\right] \le \mathbb{P}(\tau > n)$$
On $\{\tau \le n\}$: $X_n = Y_n$ (they have coalesced), so their indicators for $A$ are equal.
Since this holds for any $A$ (and for the reversed difference by symmetry), taking the supremum gives the TV bound.
I.3 Spectral Gap Lower Bound via Conductance (Cheeger Inequality)
Definition. The conductance (Cheeger constant) of a chain is:
$$\Phi = \min_{S \subset \mathcal{S},\; 0 < \pi(S) \le 1/2} \frac{\sum_{i \in S,\, j \notin S} \pi_i P_{ij}}{\pi(S)}$$
Conductance measures the minimum probability flux crossing any "bottleneck" cut in the chain.
Cheeger Inequality: For a reversible chain:
$$\frac{\Phi^2}{2} \le \gamma \le 2\Phi$$
The lower bound is the harder direction and gives $\gamma \ge \Phi^2/2$. In words: a severe bottleneck (small $\Phi$) implies a small spectral gap (slow mixing). Conversely, if $\Phi$ is bounded below, the chain mixes in $O(\Phi^{-2})$ steps, up to logarithmic factors.
For AI: The Cheeger inequality provides practical, geometry-based bounds on MCMC convergence. For neural network posteriors, the conductance can be estimated using gradient information - a well-conditioned posterior (no sharp barriers) has high conductance.
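On toy chains the conductance can be brute-forced by enumerating cuts. The sketch below builds a symmetric path-like chain (so $\pi$ is uniform and the chain is reversible; the matrix is an illustrative choice) and checks the Cheeger sandwich numerically:

```python
import numpy as np
from itertools import combinations

def conductance(P, pi):
    """Brute-force Cheeger constant; exponential in N, toy sizes only."""
    N = len(pi)
    best = np.inf
    for size in range(1, N):
        for S in combinations(range(N), size):
            S = list(S)
            if pi[S].sum() > 0.5:
                continue                    # only cuts with pi(S) <= 1/2
            Sc = [i for i in range(N) if i not in S]
            flux = sum(pi[i] * P[i, j] for i in S for j in Sc)
            best = min(best, flux / pi[S].sum())
    return best

P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.9]])
pi = np.ones(4) / 4                         # symmetric P => uniform pi
phi = conductance(P, pi)
gap = 1 - np.sort(np.abs(np.linalg.eigvals(P)))[-2]
print(phi, gap, phi**2 / 2 <= gap <= 2 * phi)   # Cheeger sandwich: True
```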
Appendix J: Markov Chains and Entropy
J.1 Entropy Production
The relative entropy (KL divergence) from the chain's distribution to stationarity is:
$$D(\mu_n \,\|\, \pi) = \sum_i \mu_n(i) \log \frac{\mu_n(i)}{\pi_i}$$
Data processing inequality (for Markov chains):
$$D(\mu_{n+1} \,\|\, \pi) \le D(\mu_n \,\|\, \pi)$$
This follows from the data processing inequality: applying the stochastic map $P$ to both arguments cannot increase KL divergence. For a primitive chain, equality holds iff $\mu_n = \pi$.
For reversible chains: The relative entropy decays at rate governed by the log-Sobolev constant. Chains with larger log-Sobolev constants (tighter functional inequalities) decay faster.
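The monotone KL decay is easy to observe numerically. A sketch with a random 3-state chain (the seed and size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.random((3, 3))
P /= P.sum(axis=1, keepdims=True)          # random ergodic chain
pi = np.ones(3) / 3
for _ in range(2000):                      # power iteration for pi
    pi = pi @ P

mu = np.array([1.0 - 2e-9, 1e-9, 1e-9])    # near-point mass (avoids log 0)
kl = lambda p, q: float(np.sum(p * np.log(p / q)))
prev = np.inf
for n in range(12):
    d = kl(mu, pi)
    assert d <= prev + 1e-12               # monotone decrease, step by step
    print(n, d)
    prev, mu = d, mu @ P
```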
J.2 Entropy of the Stationary Distribution
The Shannon entropy of $\pi$ is $H(\pi) = -\sum_i \pi_i \log \pi_i$. For a uniform stationary distribution (doubly stochastic $P$), $H(\pi)$ is maximised. For PageRank, the entropy of $\pi$ measures how "concentrated" web traffic is - low entropy means a few pages dominate.
Maximum entropy interpretation: The stationary distribution of a reversible chain with detailed balance can be interpreted as the maximum entropy distribution subject to the constraint that the detailed balance equations hold. This connects Markov chains to the exponential family (Section02).
J.3 Kullback-Leibler and MCMC Acceptance
The Metropolis-Hastings acceptance probability has an information-theoretic interpretation:
$$a(x, x') = \min\left(1,\; \exp\left[\log \pi(x') - \log \pi(x) + \log q(x' \to x) - \log q(x \to x')\right]\right)$$
The exponent is a log-ratio of probability densities - the pointwise analogue of the KL divergence between the proposal and the target. When the proposal matches $\pi$ well, the exponent concentrates near 0 and most proposals are accepted.
Appendix K: Continuous-Time Chains and Generators
K.1 Semigroup Theory
The family of transition matrices $\{P(t)\}_{t \ge 0}$ forms a Markov semigroup: $P(0) = I$, $P(s + t) = P(s)P(t)$, and $P(t)$ is continuous in $t$. The generator $Q = \lim_{h \downarrow 0} \frac{P(h) - I}{h}$ characterises the infinitesimal behaviour.
Exponential formula: $P(t) = e^{tQ}$, where the matrix exponential can be computed via the eigendecomposition of $Q$. If $Q$ has eigenvalues $0 = \lambda_1 > \lambda_2 \ge \cdots \ge \lambda_N$ (all real and non-positive for reversible chains), then:
$$P(t) = \sum_k e^{\lambda_k t}\, v_k u_k^{\top}$$
with $u_k$, $v_k$ the left and right eigenvectors of $Q$.
Convergence to stationarity is at rate $e^{-\gamma t}$, where $\gamma = -\lambda_2$ is the spectral gap of $Q$.
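For a two-state generator the exponential formula can be evaluated directly via numpy's eigendecomposition; the jump rates below are illustrative:

```python
import numpy as np

a, b = 1.0, 2.0                            # illustrative jump rates
Q = np.array([[-a, a],
              [b, -b]])                    # generator: rows sum to 0
evals, V = np.linalg.eig(Q)                # eigenvalues: 0 and -(a + b)
Vinv = np.linalg.inv(V)

def P_t(t):
    # P(t) = V diag(e^{lambda t}) V^{-1} = exp(tQ)
    return (V * np.exp(evals * t)) @ Vinv

print(P_t(0.0))                            # identity matrix
print(P_t(10.0))                           # each row -> pi = (b, a)/(a+b) = (2/3, 1/3)
```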
K.2 Reversible CTMCs
A CTMC is reversible with respect to $\pi$ if its generator satisfies detailed balance for all pairs of states. The spectral theory for reversible CTMCs is identical to that for reversible DTMCs: real eigenvalues, orthonormal eigenfunctions, exponential convergence.
Detailed balance for CTMCs:
$$\pi_i Q_{ij} = \pi_j Q_{ji} \quad \text{for all } i \ne j$$
This is the continuous-time analogue of discrete detailed balance. The Metropolis algorithm in continuous time (as used for particle simulations) satisfies this by construction.
Appendix L: Theory Problems
Problem L.1. Prove that for a doubly stochastic matrix (rows and columns both sum to 1), the uniform distribution is stationary. Give an example showing the converse fails.
Problem L.2. Let $P$ be irreducible with stationary distribution $\pi$. Prove that $\pi_i > 0$ for all $i$ by constructing a path from any $j$ with $\pi_j > 0$ to $i$.
Problem L.3. (Coupling inequality) Let $P$ be ergodic. Construct a coupling of two copies of the chain, one started at $x$ and one at the stationary distribution $\pi$. Use the coupling inequality to prove $\lVert \delta_x P^n - \pi \rVert_{TV} \to 0$.
Problem L.4. For the symmetric random walk on $\{0, 1, \ldots, N\}$ with reflecting barriers, show the spectral gap is $\Theta(1/N^2)$ for large $N$. What is the mixing time?
Problem L.5. (Metropolis-Hastings correctness) Let $T$ be the MH transition kernel with proposal $q$ and target $\pi$. Show that $\pi(x)\, T(x, y) = \pi(y)\, T(y, x)$ (detailed balance), and conclude $\pi$ is stationary.
Problem L.6. For a birth-death chain on $\{0, 1, 2, \ldots\}$ with constant birth probability $p$ and death probability $q$, find the stationary distribution. For what values of $p/q$ does the chain have a proper (normalisable) stationary distribution on the infinite state space?
Problem L.7. The lazy walk on a graph sets $P_{ii} = 1/2$ (stay with prob 1/2) and $P_{ij} = 1/(2d_i)$ for neighbours $j$ (where $d_i$ is the degree of $i$). Show the lazy walk has all non-negative eigenvalues and therefore converges monotonically to stationarity in TV distance.
Problem L.8. (PageRank convergence rate) For the damped PageRank matrix with damping $\alpha$, show all eigenvalues except the dominant one have magnitude at most $\alpha$. Conclude that PageRank power iteration converges in $O\!\left(\log(1/\varepsilon) / \log(1/\alpha)\right)$ iterations.
Appendix M: Notation Summary
| Symbol | Meaning |
|---|---|
| $S$ | State space |
| $P_{ij}$ | Transition probability from state $i$ to state $j$ |
| $P^n_{ij}$ | $n$-step transition probability ($\mathbb{P}(X_n = j \mid X_0 = i)$) |
| $\mu_n$ | Distribution at time $n$; row vector |
| $\pi$ | Stationary distribution; row vector with $\pi P = \pi$ |
| $f_i$ | Return probability to state $i$: $\mathbb{P}_i(\tau_i < \infty)$ |
| $m_i$ | Mean return time to state $i$; $\pi_i = 1/m_i$ |
| $d(i)$ | Period of state $i$: $\gcd\{n \ge 1 : P^n_{ii} > 0\}$ |
| $\lVert \mu - \nu \rVert_{TV}$ | Total variation distance between $\mu$ and $\nu$ |
| $t_{\text{mix}}(\varepsilon)$ | Mixing time to $\varepsilon$-accuracy |
| $\gamma$ | Spectral gap: $1 - \lvert \lambda_2 \rvert$ |
| $\Phi$ | Cheeger constant (conductance) |
| $Q$ | Generator matrix of a CTMC; $Q_{ij} \ge 0$ for $i \ne j$, rows sum to 0 |
| $P(t)$ | CTMC transition matrix at time $t$ |
| $a(x, x')$ | MH acceptance probability |
| $\tau$ | Coupling time (coalescence time of two chains) |
| $\alpha_t(i)$ | HMM forward variable: $\mathbb{P}(o_{1:t}, X_t = i)$ |
| $\delta_t(i)$ | Viterbi variable: $\max_{x_{1:t-1}} \mathbb{P}(o_{1:t}, x_{1:t-1}, X_t = i)$ |
Appendix N: Advanced ML Applications
N.1 RLHF as a Constrained MDP
Reinforcement learning from human feedback (RLHF) trains a language model policy $\pi_\theta$ to maximise:
$$\mathbb{E}_{x \sim \pi_\theta}\left[r(x)\right] - \beta\, D_{KL}\!\left(\pi_\theta \,\|\, \pi_{\text{ref}}\right)$$
where $r$ is a learned reward model, $\pi_{\text{ref}}$ is the reference (SFT) policy, and $\beta$ controls the KL penalty.
This is a constrained Markov chain optimisation problem: the state space is all possible prefixes (contexts), the action space is the vocabulary, and the KL divergence penalty ensures the learned chain doesn't deviate too far from the reference chain in transition structure.
The optimal policy has an explicit closed form:
$$\pi^*(x) = \frac{1}{Z}\, \pi_{\text{ref}}(x)\, \exp\!\left(\frac{r(x)}{\beta}\right)$$
where $Z$ is a normalising constant. This is a Gibbs distribution with the reward as (negative) energy function - exactly the kind of target distribution MCMC is built for.
Implication: The RLHF optimal policy can be sampled by constructing a Markov chain with the above stationary distribution. Recent work uses Langevin-type MCMC to directly sample from $\pi^*$ without RL.
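The closed form is a one-liner on a toy vocabulary. The reference probabilities and rewards below are made up for this sketch; the point is how $\beta$ interpolates between $\pi_{\text{ref}}$ and the reward argmax:

```python
import numpy as np

pi_ref = np.array([0.4, 0.3, 0.15, 0.1, 0.05])   # toy reference policy
r = np.array([0.0, 1.0, 2.0, 0.5, -1.0])         # made-up rewards

def rlhf_policy(beta):
    w = pi_ref * np.exp(r / beta)                # pi* ∝ pi_ref exp(r/beta)
    return w / w.sum()                           # divide by Z

print(rlhf_policy(10.0))    # weak reward influence: close to pi_ref
print(rlhf_policy(0.1))     # strong reward influence: mass on argmax r
```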
N.2 Diffusion Sampling as Time-Reversed CTMC
The score-based generative modelling framework (Song et al., 2021) writes the forward noising process as a CTMC/SDE and generates by running the reversed process. The key identity is Anderson's reverse-time SDE:
$$dx = \left[f(x, t) - g(t)^2\, \nabla_x \log p_t(x)\right] dt + g(t)\, d\bar{W}_t$$
where $\bar{W}_t$ is a Brownian motion running backwards in time. The score function $\nabla_x \log p_t(x)$ determines the generator of the reversed Markov process. Learning this score via denoising score matching allows sampling by running the reverse process.
Discrete diffusion: For categorical distributions (text), the VP-SDE is replaced by a CTMC on the vocabulary with generator $Q_t$. MDLM and other discrete diffusion models use a forward chain with uniform-transition or absorbing-state (mask) corruption, then learn the reverse chain.
N.3 MCMC for LLM Sampling
Standard autoregressive LLM decoding (top-$k$, nucleus) can be viewed as Markov chain sampling from a specific transition matrix. Recent speculative decoding methods (Chen et al., 2023; Leviathan et al., 2023) use a smaller "draft" model as a proposal and the large model as the acceptance criterion - this is precisely independence Metropolis-Hastings:
- Proposal: the draft model's next-token distribution $q(x \mid \text{context})$
- Acceptance: accept a drafted token $x$ with probability $\min\left(1, \frac{p(x)}{q(x)}\right)$, where $p$ is the large model's distribution; on rejection, resample from the residual distribution $\propto \max(p - q, 0)$
- Guarantees: the accepted tokens have the same distribution as large-LM-only sampling
This is MCMC applied to language model decoding - Markov chain theory directly yields correctness of speculative decoding.
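The accept/resample step can be verified empirically on a 3-token toy vocabulary: the output distribution matches the large model exactly. Both distributions below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
p_draft = np.array([0.5, 0.3, 0.2])     # q: small draft model
p_large = np.array([0.2, 0.5, 0.3])     # p: large target model

def spec_step():
    x = rng.choice(3, p=p_draft)                       # propose from the draft
    if rng.random() < min(1.0, p_large[x] / p_draft[x]):
        return x                                       # accept the draft token
    resid = np.maximum(p_large - p_draft, 0.0)         # residual distribution
    return rng.choice(3, p=resid / resid.sum())        # resample on rejection

draws = [spec_step() for _ in range(100000)]
print(np.bincount(draws, minlength=3) / len(draws))    # ~ p_large exactly
```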
N.4 Graph Neural Networks and Random Walks
Graph Neural Networks (GNNs) can be understood through the lens of Markov chains. A single GNN layer performs:
$$H^{(\ell+1)} = \sigma\!\left(\hat{A}\, H^{(\ell)}\, W^{(\ell)}\right)$$
For the simple mean-aggregation GNN, $\hat{A} = D^{-1}A$: the aggregation step applies the random-walk transition matrix to the feature matrix. $L$ layers of GNN = applying the random walk $L$ times - the receptive field grows as the $L$-step neighbourhood.
Over-smoothing: After many layers, all node features converge toward the random walk's stationary average (uniform for regular graphs). This is over-smoothing - the GNN loses discriminative power because the Markov chain mixes. Mixing time bounds directly bound the number of useful GNN layers, as the demo below shows.
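The demo: graph and features are invented for the sketch; repeated mean aggregation (weights and nonlinearity omitted) drives all node features to a common value.

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # random-walk matrix D^{-1} A
H = rng.random((4, 2))                 # initial node features
for _ in range(50):
    H = P @ H                          # one linear "GNN layer" per step
print(H)                               # all rows nearly identical: over-smoothed
```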
N.5 Token Position Encoding as Markov Chain
The self-attention mechanism with rotary position encoding (RoPE) can be viewed as computing a position-dependent Markov chain over tokens. Each attention head defines a stochastic matrix over token positions (the attention weights), and information flows along this chain.
Attention as ergodic averaging: In multi-head attention, each head computes $\text{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$. The softmax weights form a row-stochastic matrix - a one-step Markov transition. The output is the expected value of the value vectors under this distribution. Stacking attention layers is like composing Markov kernels.
Information bottleneck: The rank of the attention matrix limits the "state space" of the Markov chain. Low-rank attention (as in linear attention / Performer) corresponds to a Markov chain with restricted state space - mixing may be faster but with less expressivity.
Appendix O: Review Problems
Review Problem 1. Let $P$ be a given stochastic matrix. (a) Is the chain irreducible? (b) Find the stationary distribution. (c) Starting from state 1, compute the distribution after 10 steps. (d) Compare to stationarity.
Review Problem 2. Consider a random walk on a cycle of length $N$ (states $0, 1, \ldots, N-1$; move clockwise or counterclockwise with equal probability $1/2$). (a) What is the stationary distribution? (b) What is the period? (c) Compute the mixing time using the spectral gap (eigenvalues: $\cos(2\pi k/N)$ for $k = 0, \ldots, N-1$). (d) How does the mixing time scale with $N$?
Review Problem 3. (MCMC) You want to sample from a Gamma distribution $\pi(x) \propto x^{a-1} e^{-x}$ on $x > 0$ using MH with a log-normal proposal centred at the current state. (a) Write down the acceptance ratio. (b) Is the proposal symmetric? (c) Will this chain have $\pi$ as its stationary distribution?
Review Problem 4. (PageRank) The web has 3 pages: 1 links to 2 and 3; 2 links to 1; 3 links to 1. With damping $\alpha$, write the PageRank transition matrix and find $\pi$ by solving a linear system. Which page has the highest PageRank?
Review Problem 5. (HMM) For an HMM with 2 hidden states and 2 observations, with given transition matrix $T$, emission matrix $E$, and initial distribution $\pi_0$: (a) Compute the likelihood of a short observation sequence using the forward algorithm. (b) Find the Viterbi path.
Appendix P: Common Distributions in MCMC Targets
When $\pi(x) \propto e^{-U(x)}$, the gradient $\nabla U$ drives Langevin dynamics. Key shapes:
Gaussian: $U(x) = \frac{1}{2}(x - \mu)^{\top}\Sigma^{-1}(x - \mu)$. Gradient: $\nabla U(x) = \Sigma^{-1}(x - \mu)$. Langevin mixes in roughly $O(\kappa)$ steps and HMC in roughly $O(\sqrt{\kappa})$ steps, where $\kappa$ is the condition number of $\Sigma$ (an advantage for ill-conditioned targets).
Logistic posterior: $U(\theta) = \sum_i \log\!\left(1 + e^{-y_i \theta^{\top} x_i}\right) + \frac{\lambda}{2}\lVert\theta\rVert^2$. Strongly log-concave (with the ridge term); SGLD mixes in polynomial time.
Mixture of Gaussians: $\pi(x) = \sum_k w_k\, \mathcal{N}(x; \mu_k, \Sigma_k)$. Multi-modal; standard MCMC can get stuck between modes. Parallel tempering or simulated annealing is needed for good mixing.
Heavy-tailed: $U(x) = \frac{\nu + 1}{2}\log\!\left(1 + \frac{x^2}{\nu}\right)$ (Student-t). Sub-quadratic (logarithmic) growth of $U$ means the tails are heavy. HMC can struggle; NUTS with dual averaging handles this better.
Boltzmann (energy-based models): $\pi(x) \propto e^{-E_\theta(x)}$ for a neural network energy $E_\theta$. Sampling requires running MCMC; contrastive divergence (CD-$k$) uses short MCMC chains of length $k$ as an approximation.
Appendix Q: Further Reading
- Levin, Peres, Wilmer - Markov Chains and Mixing Times (AMS, 2nd ed., 2017) - the definitive reference for mixing times and coupling; freely available online
- Norris, J.R. - Markov Chains (Cambridge, 1997) - rigorous treatment of discrete and continuous-time chains with clean proofs
- Brooks, Gelman, Jones, Meng - Handbook of Markov Chain Monte Carlo (CRC Press, 2011) - comprehensive MCMC reference; includes HMC, NUTS, diagnostics
- Betancourt, M. - "A Conceptual Introduction to Hamiltonian Monte Carlo" (arXiv, 2017) - outstanding intuitive treatment of HMC geometry
- Page et al. - "The PageRank Citation Ranking: Bringing Order to the Web" (Stanford Technical Report, 1999) - original PageRank paper
- Rabiner, L.R. - "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition" (Proc. IEEE, 1989) - classic HMM reference
- Song et al. - "Score-Based Generative Modeling through Stochastic Differential Equations" (ICLR 2021) - connects diffusion models to CTMC theory
- Welling & Teh - "Bayesian Learning via Stochastic Gradient Langevin Dynamics" (ICML 2011) - introduces SGLD for large-scale Bayesian inference
- Sutton & Barto - Reinforcement Learning: An Introduction (MIT Press, 2nd ed., 2018) - MDPs, Bellman equations, TD-learning; free online
Appendix R: Spectral Methods for Large-Scale Markov Chains
R.1 Lanczos Algorithm for Large Transition Matrices
For transition matrices arising in PageRank (billions of web pages), direct eigendecomposition is infeasible. Power iteration and the Lanczos algorithm enable computing the top eigenvectors using only matrix-vector products.
Power iteration for stationary distribution:
- Memory: $O(N)$ for the distribution vector
- Per-iteration cost: $O(\mathrm{nnz}(P))$ (the number of non-zeros in $P$) for sparse matrices
- Convergence in $O\!\left(\log(1/\varepsilon) / \log(1/|\lambda_2|)\right)$ iterations
For the damped web graph, $|\lambda_2| \le \alpha = 0.85$, so roughly 70 iterations give about $10^{-5}$ precision.
Randomised SVD: For a low-rank approximation of $P^n$ (needed for long-range transition probability estimation), randomised methods can approximate the top-$k$ singular vectors in time roughly linear in $\mathrm{nnz}(P)$ - much cheaper than a full SVD.
R.2 Multi-Scale Markov Chains
For chains with multiple timescales (fast-mixing within clusters, slow-mixing between clusters), aggregation-disaggregation methods provide efficient algorithms:
- Aggregation: Lump the states within each cluster into a single meta-state. Compute the inter-cluster transition matrix $\bar{P}$ and its stationary distribution $\bar{\pi}$.
- Disaggregation: Compute the within-cluster stationary distributions $\pi^{(c)}$ for each cluster $c$.
- Combine: $\pi_i \approx \bar{\pi}_c\, \pi^{(c)}_i$ for state $i$ in cluster $c$.
This decomposition is exact when the chain is "nearly lumpable" (within-cluster transitions are much faster than between-cluster). It's the foundation of multi-scale MCMC methods.
For AI: LLM attention patterns have multi-scale structure: attention heads attend to nearby tokens (fast mixing) and distant context (slow mixing). This multi-scale structure can be exploited for efficient long-context modelling.
R.3 Reversibilisation
Any Markov chain can be made reversible by considering the Doob $h$-transform or by multiplicative reversibilisation:
$$\tilde{P} = P P^*, \qquad P^*_{ij} = \frac{\pi_j P_{ji}}{\pi_i}$$
where $P^*$ is the time-reversed chain.
The resulting $\tilde{P}$ is reversible with the same stationary distribution $\pi$. Reversibilisation can speed up or slow down convergence depending on the chain - it destroys the directional bias that non-reversible chains exploit for faster mixing.
Appendix S: Continuous-State Markov Chains
S.1 Harris Chains
For Markov chains on continuous state spaces (like MCMC on $\mathbb{R}^d$), the discrete-state ergodic theory extends with modifications. A Markov chain with transition kernel $K(x, \cdot)$ is:
- $\varphi$-irreducible if there exists a $\sigma$-finite measure $\varphi$ such that for any Borel set $A$ with $\varphi(A) > 0$, we have $\mathbb{P}_x(\tau_A < \infty) > 0$ for all $x$.
- Harris recurrent if it is $\varphi$-irreducible and returns to any set $A$ with $\varphi(A) > 0$ with probability 1.
- Positive Harris recurrent if the chain is Harris recurrent and has a unique invariant probability measure .
A positive Harris recurrent, aperiodic chain is called a Harris ergodic chain, and the ergodic theorem holds: time averages converge to -expectations.
For MCMC: The Metropolis-Hastings chain on with Gaussian proposal and a proper target density is Harris ergodic under mild regularity conditions. This justifies MCMC estimators for continuous posteriors.
S.2 Geometric Ergodicity
A Markov chain is geometrically ergodic if there exist a function $M(x) < \infty$ and a constant $\rho < 1$ such that:
$$\lVert K^n(x, \cdot) - \pi \rVert_{TV} \le M(x)\, \rho^n$$
Geometric ergodicity implies a central limit theorem for MCMC estimators:
$$\sqrt{T}\left(\frac{1}{T}\sum_{t=1}^{T} f(X_t) - \mathbb{E}_\pi[f]\right) \xrightarrow{d} \mathcal{N}\!\left(0, \sigma_f^2\right)$$
where $\sigma_f^2 = \mathrm{Var}_\pi(f) + 2\sum_{k \ge 1} \mathrm{Cov}_\pi\!\big(f(X_0), f(X_k)\big)$ is the asymptotic variance. This is the Markov chain CLT - it justifies asymptotic confidence intervals for MCMC estimates.
For AI: Geometric ergodicity of the Langevin algorithm for log-concave targets gives polynomial bounds on the number of gradient evaluations needed for $\varepsilon$-accurate posterior estimates. This is the theoretical basis for Bayesian deep learning via SGLD.
Appendix T: Markov Chain Games and Nash Equilibria
T.1 Stochastic Games
A stochastic game (Shapley, 1953) is an MDP with multiple agents. Two-player zero-sum stochastic games have:
- State space $S$; action spaces $A_1, A_2$
- Transition kernel $P(s' \mid s, a_1, a_2)$
- Reward $r(s, a_1, a_2)$ (player 1 maximises, player 2 minimises)
At a Nash equilibrium, each player's policy is a best response to the other's. The value function satisfies a minimax Bellman equation:
$$V(s) = \max_{\mu \in \Delta(A_1)}\, \min_{\nu \in \Delta(A_2)}\, \mathbb{E}_{a_1 \sim \mu,\, a_2 \sim \nu}\left[r(s, a_1, a_2) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s, a_1, a_2)}\, V(s')\right]$$
For AI: Multi-agent RL (MARL) with competing agents (e.g., AlphaGo, Libratus for poker) uses stochastic game theory. RLHF with adversarial reward models is a stochastic game between the policy and the worst-case reward model.
T.2 Markov Perfect Equilibrium
In dynamic games, a Markov perfect equilibrium (MPE) is a Nash equilibrium where strategies depend only on the current state (Markov property on the strategy). MPE is the game-theoretic analogue of the optimal policy in MDPs.
Computing MPE requires solving a system of coupled Bellman equations - one per player. For two-player zero-sum games, this reduces to linear programming (minimax theorem). For general-sum games, this is PPAD-complete.
Appendix U: Temporal Difference Learning and Martingales
Connecting back to Section06 (Stochastic Processes), the TD(0) algorithm is:
$$V(s_t) \leftarrow V(s_t) + \alpha_t\, \delta_t, \qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$$
The update direction $\delta_t$ is the TD error. Under the true value function $V^\pi$, $\mathbb{E}[\delta_t \mid \mathcal{F}_t] = 0$ (a martingale difference). TD learning is a stochastic approximation algorithm for finding the fixed point of the Bellman operator $T^\pi$.
Convergence proof via martingale theory: Write $e_t = V_t - V^\pi$ (the error from the true value). Then:
$$e_{t+1} = e_t + \alpha_t\left[(T^\pi V_t - V_t) + M_{t+1}\right]$$
where $M_{t+1}$ is a martingale difference noise term. Under the Robbins-Monro conditions $\sum_t \alpha_t = \infty$, $\sum_t \alpha_t^2 < \infty$, the ODE method (Borkar, 2008) shows $V_t \to V^\pi$ a.s.
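A minimal TD(0) sketch on a 3-state Markov reward process (the chain, rewards, and step-size schedule are illustrative choices), compared against the exact Bellman solve:

```python
import numpy as np

rng = np.random.default_rng(6)
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])
r = np.array([1.0, 0.0, -1.0])         # reward collected in each state
gamma = 0.9
V_true = np.linalg.solve(np.eye(3) - gamma * P, r)   # Bellman fixed point

V, s = np.zeros(3), 0
for t in range(1, 200001):
    alpha = 10.0 / (100.0 + t)         # Robbins-Monro step sizes
    s_next = rng.choice(3, p=P[s])
    delta = r[s] + gamma * V[s_next] - V[s]   # TD error
    V[s] += alpha * delta
    s = s_next
print(V)            # close to V_true after many transitions
print(V_true)
```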
Appendix V: Practical MCMC Checklist for AI/ML
Before running MCMC:
- Identify the target: is $\pi$ proper? Is $\log \pi$ (up to an additive constant) evaluable efficiently?
- Choose algorithm: log-concave -> Langevin/HMC; multimodal -> parallel tempering; discrete -> Gibbs
- Set proposal scale: tune to ~23% acceptance (MH) or ~65% (HMC)
- Run multiple chains (e.g., 4) from diverse starting points
During MCMC:
- Monitor trace plots for signs of non-stationarity (trends, sticking)
- Track acceptance rate: should be in target range
- Compute $\hat{R}$ after every 1000 iterations; stop when $\hat{R} < 1.01$ (a minimal $\hat{R}$ sketch follows this checklist)
After MCMC:
- Discard burn-in (first 50% as rule of thumb)
- Compute ESS per parameter; target an ESS of at least a few hundred for reliable estimates
- Check posterior predictive checks: do simulated data match observed data?
- Report: posterior mean, posterior std, 95% credible interval, ESS, $\hat{R}$
Common pathologies:
- Divergent transitions (HMC): posterior geometry has sharp ridges; reparameterise
- Low ESS: chain mixes slowly; increase step size, use HMC, or apply preconditioning
- $\hat{R} > 1.01$: chains haven't agreed; run longer, use better initialisation
- Bimodal trace plots: chain stuck between modes; use parallel tempering
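A minimal split-$\hat{R}$ implementation in the spirit of Gelman et al. (simplified: scalar parameter, equal-length chains; production diagnostics also use rank-normalisation):

```python
import numpy as np

def r_hat(chains):
    """Split-R-hat for a (K, T) array of scalar draws (T even)."""
    K, T = chains.shape
    halves = chains.reshape(2 * K, T // 2)         # split each chain in half
    m = halves.mean(axis=1)
    W = halves.var(axis=1, ddof=1).mean()          # within-chain variance
    B = (T // 2) * m.var(ddof=1)                   # between-chain variance
    var_plus = (T // 2 - 1) / (T // 2) * W + B / (T // 2)
    return float(np.sqrt(var_plus / W))

rng = np.random.default_rng(7)
good = rng.normal(size=(4, 1000))        # four well-mixed chains
bad = good + np.arange(4)[:, None]       # chains stuck at different levels
print(r_hat(good))                       # ~1.00
print(r_hat(bad))                        # >> 1.01: not converged
```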
Appendix W: Worked Computation - Two-State Chain
Full Analysis
Let $P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}$ with $a, b \in (0, 1)$, starting from $\mu_0 = (1, 0)$.
Step 1: Eigenvalues. Characteristic polynomial: $(\lambda - 1)\big(\lambda - (1 - a - b)\big) = 0$. Eigenvalues: $\lambda_1 = 1$, $\lambda_2 = 1 - a - b$.
Step 2: Stationary distribution. From $\pi P = \pi$: $\pi_1 a = \pi_2 b$. With $\pi_1 + \pi_2 = 1$: $\pi = \left(\frac{b}{a+b}, \frac{a}{a+b}\right)$.
Step 3: $n$-step distribution. Using the spectral decomposition:
$$\mu_n = \pi + \lambda_2^n\, (\mu_0 - \pi)$$
Explicitly:
$$\mu_n(1) = \frac{b}{a+b} + (1 - a - b)^n\, \frac{a}{a+b}$$
So $\mu_n \to \pi$ geometrically at rate $|\lambda_2|^n$.
Step 4: TV distance. $\lVert \mu_n - \pi \rVert_{TV} = |\mu_n(1) - \pi_1| = \frac{a}{a+b}\, |1 - a - b|^n$.
For moderate $n$ this is already tiny - essentially converged. The spectral gap is $\gamma = 1 - |1 - a - b|$ (equal to $a + b$ when $a + b \le 1$), so mixing is fast unless $a + b$ is close to 0 or to 2.
Step 5: Verify detailed balance. $\pi_1 P_{12} = \frac{b}{a+b}\, a = \frac{a}{a+b}\, b = \pi_2 P_{21}$.
The chain is reversible with this stationary distribution.
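The closed forms can be sanity-checked numerically; $a = 0.2$, $b = 0.1$ below are example values chosen for this sketch, not the lesson's original numbers:

```python
import numpy as np

a, b = 0.2, 0.1                          # example values only
P = np.array([[1 - a, a],
              [b, 1 - b]])
pi = np.array([b, a]) / (a + b)
print(np.linalg.eigvals(P))              # {1, 1 - a - b} = {1, 0.7}
print(pi @ P - pi)                       # ~0: stationarity holds
mu = np.array([1.0, 0.0])
for n in range(1, 6):
    mu = mu @ P
    pred = b / (a + b) + (1 - a - b) ** n * a / (a + b)
    print(n, mu[0], pred)                # spectral formula matches exactly
print(pi[0] * P[0, 1], pi[1] * P[1, 0])  # detailed balance: both equal 1/15
```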
Appendix X: Advanced Topics
X.1 Geometric Random Walks for Volume Computation
The ball walk and the Gaussian walk are Markov chains used to estimate the volume of convex bodies. Starting from a point $x$ in a convex body $K$, the chain moves to a uniformly random point within a ball of radius $\delta$ centred at $x$ (rejecting moves that leave $K$). The stationary distribution is uniform on $K$.
Mixing time analysis: the ball walk mixes in a number of steps polynomial in the dimension $n$ (roughly quadratic in $n$ for a well-rounded body). This gives a polynomial-time randomised algorithm for volume computation (the Dyer-Frieze-Kannan theorem).
For AI: High-dimensional sampling problems (Bayesian inference over millions of parameters) require understanding how mixing time scales with dimension. Geometric random walks provide the fundamental tools.
X.2 Markov Chain Tree Theorem
For a strongly connected directed graph with transition matrix $P$, the stationary distribution can be computed via spanning trees:
$$\pi_i \propto \sum_{T \in \mathcal{T}_i}\; \prod_{(j, k) \in T} P_{jk}$$
where $\mathcal{T}_i$ is the set of spanning trees directed toward node $i$ (arborescences rooted at $i$). This is the Matrix-Tree theorem (Kirchhoff, 1847) extended to directed graphs.
Implication: The stationary distribution has a combinatorial formula in terms of the graph structure. This provides an alternative to eigendecomposition that can be faster for sparse graphs.
X.3 Lumping and Aggregation
A partition $\{C_1, \ldots, C_K\}$ of $S$ is lumpable with respect to $P$ if for any two states $i, i'$ in the same class, the aggregated transitions agree: $\sum_{j \in C_l} P_{ij} = \sum_{j \in C_l} P_{i'j}$ for all classes $C_l$. In this case, the quotient chain on the classes is a well-defined Markov chain.
Lumping enables dimensionality reduction: replace a large chain with a smaller aggregated chain. The stationary distribution of the lumped chain aggregates that of the original.
For AI: Grouping similar states (e.g., tokens with similar embeddings) can define a lumped chain that captures the high-level dynamics of the LLM's token distribution, enabling efficient analysis of LLM behaviour at scale.
Appendix Y: Connections to Physics
Y.1 Statistical Mechanics and Gibbs Measures
The Gibbs distribution (Boltzmann distribution) $\pi(x) \propto e^{-H(x)/T}$ is the equilibrium distribution of a physical system with Hamiltonian $H$ at temperature $T$. This is the canonical target distribution for MCMC in physics.
The Metropolis algorithm was originally designed to sample from Gibbs distributions of particle systems (Metropolis et al., 1953). MCMC methods in ML directly descended from statistical mechanics - including the energy-based models (EBMs) that predate modern deep learning.
Temperature annealing: Starting at a high $T$ (flat, easy-to-sample distribution) and gradually cooling to $T \to 0$ (greedy optimisation) is simulated annealing - a probabilistic optimisation algorithm. The connection between MCMC sampling at $T = 1$ and optimisation at $T \to 0$ is the bridge between Bayesian inference and MAP estimation.
Y.2 Detailed Balance and Thermodynamic Equilibrium
Detailed balance in physics is called microscopic reversibility or time-reversal symmetry. A physical system is at thermodynamic equilibrium iff its microscopic dynamics satisfy detailed balance. Systems out of equilibrium (driven by energy flows) violate detailed balance and maintain directed probability currents.
Non-equilibrium steady states: Some biological systems (motor proteins, gene regulatory networks) maintain non-reversible Markov chains in steady state - driven by ATP hydrolysis. These are "out-of-equilibrium" in the thermodynamic sense. Non-reversible MCMC is inspired by this physics.
Y.3 Renormalization Group and Multi-Scale Chains
The renormalization group (Wilson, 1971) is a technique in physics for coarse-graining: replacing fine-grained microscopic degrees of freedom with effective coarse-grained ones. This is mathematically analogous to the lumping/aggregation of Markov chains.
In ML, knowledge distillation (training a small model to mimic a large model) is a form of renormalization: the small model's transition matrix approximates the "effective" dynamics of the large model at a coarser scale.