Markov Chains
1. Intuition
1.1 What Is a Markov Chain?
A Markov chain is a sequence of random variables $X_0, X_1, X_2, \dots$ taking values in a state space $\mathcal{S}$, where the future is independent of the past given the present. Formally, for every $n \ge 0$ and every sequence of states $s_0, \dots, s_{n+1} \in \mathcal{S}$:

$$P(X_{n+1} = s_{n+1} \mid X_n = s_n, \dots, X_0 = s_0) = P(X_{n+1} = s_{n+1} \mid X_n = s_n)$$

This is the Markov property: the current state is a sufficient statistic for predicting $X_{n+1}$. No matter how long the history, knowing only the present gives as much information about the future as knowing the entire past trajectory.
Why memorylessness is powerful. At first glance, the Markov property looks like a restriction - real systems do have memory. But three things make it useful:
- Many systems genuinely satisfy it. A fair die, re-rolled at each step, is Markov (no memory). A chess position captures all relevant game state (Markov). An LLM's key-value cache provides sufficient statistics for the next token (Markov, given the KV cache).
- Higher-order Markov chains reduce to first-order. A chain where $X_{n+1}$ depends on $X_n$ and $X_{n-1}$ can be converted to a first-order chain on the augmented state $Y_n = (X_n, X_{n-1})$. So first-order is universal given a rich enough state space.
- The mathematics is tractable. Transition probabilities only involve pairs of states, so the full dynamics are captured by an $|\mathcal{S}| \times |\mathcal{S}|$ matrix rather than an exponentially large conditional probability table.
The mental picture: Imagine a particle hopping between states on a graph. At each step, it looks only at which node it currently occupies, then jumps to a neighbouring node with probability given by the edge weights. The history of how it got to the current node is irrelevant - only where it is now matters.
For AI: Every autoregressive language model generates tokens one at a time, each conditioned on all previous tokens. Given a context window of length $k$, the distribution over the next token is fully specified by the current state (the context). This is precisely the Markov property with state space $\mathcal{S}$ = all possible contexts of length $k$.
1.2 Everywhere in AI
Markov chains appear as the backbone of several major AI paradigms:
Language generation: GPT, LLaMA, and every autoregressive transformer generates text via a Markov chain. The state is the current context (the key-value cache), the transition is the softmax over the vocabulary, and the stationary distribution - if it exists - characterises what the model "talks about" at equilibrium.
PageRank: Google's original ranking algorithm computes the stationary distribution of a Markov chain on the web graph, where each page links to others. A page is important if the random surfer spends a lot of time there at stationarity. Power iteration on the transition matrix gives the PageRank vector.
Reinforcement learning: A Markov Decision Process (MDP) is a Markov chain with actions. The agent's policy induces a Markov chain over states, and the value function is the expected discounted return from each state - computable from the stationary properties of this chain. TD-learning, Q-learning, and policy gradient methods all exploit the Markov structure.
Bayesian inference via MCMC: When the posterior distribution is intractable, MCMC constructs a Markov chain whose stationary distribution is exactly the posterior $p(\theta \mid \mathcal{D})$. Running the chain for long enough generates samples that represent the posterior. Metropolis-Hastings and Gibbs sampling are the two canonical MCMC algorithms.
Speech and sequence modelling: Hidden Markov Models (HMMs) model sequences where the true state (e.g., phoneme) is latent. The forward-backward algorithm and Viterbi decoder exploit the Markov structure to perform efficient inference.
Diffusion models: The DDPM forward process is a Markov chain where each step adds Gaussian noise. The learned reverse process is a Markov chain running backwards in time. The SDE perspective (Section 06, Stochastic Processes) gives the continuous-time version.
1.3 Historical Timeline
| Year | Contributor | Result |
|---|---|---|
| 1906 | Andrei Markov | Defined Markov chains; proved the law of large numbers for dependent sequences |
| 1907 | Andrei Markov | First application: statistical analysis of letters in Pushkin's Eugene Onegin |
| 1907-1912 | Oskar Perron, Georg Frobenius | Perron-Frobenius theorem (irreducible non-negative matrices have a unique dominant eigenvector) |
| 1913 | Paul Ehrenfest | Urn model illustrating approach to equilibrium; connection to statistical mechanics |
| 1931 | Andrei Kolmogorov | Forward and backward equations for Markov processes; rigorous foundation |
| 1950 | Nicholas Metropolis et al. | Monte Carlo method for statistical mechanics |
| 1953 | Metropolis, Rosenbluth, Rosenbluth, Teller & Teller | Metropolis algorithm published; first MCMC |
| 1970 | W.K. Hastings | Generalized Metropolis to asymmetric proposals |
| 1984 | Geman & Geman | Gibbs sampling for image restoration |
| 1998 | Page, Brin, Motwani, Winograd | PageRank: web ranking via Markov chain stationary distribution |
| 2009 | Levin, Peres, Wilmer | Systematic mixing time theory; coupling and spectral gap bounds |
| 2020 | Ho, Jain & Abbeel | DDPM: diffusion as Markov chain with learned reverse |
| 2022 | Various | LLMs as Markov chains; stationary distribution theory for language models |
1.4 Taxonomy of Markov Chains
Markov chains are classified along two dimensions:
State space: Finite ($|\mathcal{S}| < \infty$), countably infinite (e.g. $\mathcal{S} = \mathbb{N}$ or $\mathbb{Z}$), or continuous (e.g. $\mathcal{S} = \mathbb{R}^d$). Most of this section treats finite or countably infinite state spaces; MCMC typically targets continuous spaces.
Time: Discrete ($n = 0, 1, 2, \dots$) or continuous ($t \in [0, \infty)$). Discrete-time chains are developed in Sections 2-6; continuous-time chains in Section 7.
| | Discrete Time | Continuous Time |
|---|---|---|
| Finite states | Finite DTMC - PageRank, HMM | CTMC - queueing theory |
| Countable states | Simple random walk, birth-death chain | Poisson process |
| Continuous states | MCMC on $\mathbb{R}^d$ | Diffusion SDE (Section 06) |
Time-homogeneous vs. inhomogeneous: If the transition probabilities do not depend on $n$ - $P(X_{n+1} = j \mid X_n = i) = P_{ij}$ for all $n$ - the chain is time-homogeneous. Almost all chains in this section are time-homogeneous.
2. Formal Definitions
2.1 The Markov Property
Recall: Conditional independence was defined in Section 03 (Joint Distributions). The Markov property is conditional independence of the future from the past, given the present.
Definition (Discrete-Time Markov Chain). A sequence of random variables $X_0, X_1, X_2, \dots$ on a measurable state space $(\mathcal{S}, \mathcal{F})$ is a (time-homogeneous) Markov chain if for all $n \ge 0$ and all measurable $A \in \mathcal{F}$:

$$P(X_{n+1} \in A \mid X_0, X_1, \dots, X_n) = K(X_n, A)$$

The function $K$ is called the transition kernel or transition probability. For finite $\mathcal{S}$, it is encoded by the transition matrix $P$ with $P_{ij} = K(i, \{j\}) = P(X_{n+1} = j \mid X_n = i)$.
Equivalent formulation via conditional independence: The Markov property is equivalent to:

$$(X_{n+1}, X_{n+2}, \dots) \perp (X_0, \dots, X_{n-1}) \mid X_n$$

This is the cleanest statement: given the present $X_n$, the future is independent of the entire past $(X_0, \dots, X_{n-1})$. More generally, the strong Markov property asserts this holds at stopping times: for any stopping time $\tau$, conditional on $X_\tau$, the post-$\tau$ chain is independent of the pre-$\tau$ history.
Initial distribution: The full law of the chain is specified by:
- The initial distribution $\mu_0$ with $\mu_0(i) = P(X_0 = i)$
- The transition matrix $P$ with $P_{ij} \ge 0$ and $\sum_j P_{ij} = 1$ for all $i$
Together, the pair $(\mu_0, P)$ determines all finite-dimensional distributions.
2.2 Transition Matrix
For a finite state space $\mathcal{S} = \{1, \dots, N\}$, the transition matrix $P$ is an $N \times N$ stochastic matrix: every row sums to 1, all entries are non-negative. The entry $P_{ij}$ is the probability of jumping from state $i$ to state $j$ in one step.
Distribution at step $n$: If $\mu_n$ is the row vector of probabilities at step $n$, then:

$$\mu_{n+1} = \mu_n P$$

The distribution evolves as a row vector multiplied by $P$ at each step. In component form: $\mu_{n+1}(j) = \sum_i \mu_n(i) P_{ij}$.
$n$-step transition probabilities: The probability of going from $i$ to $j$ in exactly $n$ steps is the $(i, j)$ entry of $P^n$:

$$P(X_{m+n} = j \mid X_m = i) = (P^n)_{ij}$$

Computing $P^n$ via matrix powers (eigendecomposition or repeated squaring) is the core computational task for Markov chains.
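A minimal numpy sketch (the 3-state matrix below is made up for illustration) of evolving a distribution and taking matrix powers:

```python
import numpy as np

# A small 3-state transition matrix (rows sum to 1); values are illustrative.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

mu0 = np.array([1.0, 0.0, 0.0])           # start deterministically in state 0

# Distribution after 5 steps: a row vector multiplied by P^5.
mu5 = mu0 @ np.linalg.matrix_power(P, 5)

# Chapman-Kolmogorov check (Section 2.3): P^(m+n) == P^m P^n.
assert np.allclose(np.linalg.matrix_power(P, 5),
                   np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3))
print(mu5)
```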
Non-example: A matrix with a row summing to more than 1, or with any negative entry, is NOT a valid transition matrix. A doubly-stochastic matrix (both rows and columns sum to 1) is a transition matrix, and its stationary distribution is uniform.
For AI: In a language model with vocabulary size $V$, the bigram transition matrix has size $V \times V$, where $P_{ij}$ = probability of token $j$ following token $i$. Real LLMs use much richer state spaces (the context window), but the Markov structure is the same.
2.3 Chapman-Kolmogorov Equations
Theorem (Chapman-Kolmogorov). For any $m, n \ge 0$ and states $i, j$:

$$(P^{m+n})_{ij} = \sum_{k \in \mathcal{S}} (P^m)_{ik} (P^n)_{kj}$$

In matrix form: $P^{m+n} = P^m P^n$. This is simply the rule for matrix multiplication - to go from $i$ to $j$ in $m + n$ steps, sum over all intermediate states $k$ at the midpoint.

Proof: By the law of total probability and the Markov property:

$$P(X_{m+n} = j \mid X_0 = i) = \sum_k P(X_{m+n} = j \mid X_m = k) \, P(X_m = k \mid X_0 = i) = \sum_k (P^m)_{ik} (P^n)_{kj}$$

Chapman-Kolmogorov is the reason matrix multiplication is the right operation for composing Markov chain transitions. It also says: to predict $m + n$ steps ahead, you can decompose the prediction at any intermediate time $m$.
2.4 Standard Examples
Example 1: Two-state chain (weather model)
State 1 = sunny, state 2 = rainy. Sunny stays sunny with probability $1 - p$, turns rainy with probability $p$. Rainy turns sunny with probability $q$:

$$P = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}$$

The stationary distribution is $\pi = \left( \frac{q}{p+q}, \frac{p}{p+q} \right)$.
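A quick numerical check of the closed form, with hypothetical values $p = 0.2$, $q = 0.5$:

```python
import numpy as np

p, q = 0.2, 0.5                            # hypothetical sunny->rainy, rainy->sunny probabilities
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Closed form: pi = (q, p) / (p + q).
pi_closed = np.array([q, p]) / (p + q)

# Numerical check: left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi_num = np.real(evecs[:, np.argmax(np.real(evals))])
pi_num /= pi_num.sum()                     # normalise to a probability vector

assert np.allclose(pi_closed, pi_num)
print(pi_closed)                           # [0.714..., 0.285...]
```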
Example 2: Simple random walk on $\{0, 1, \dots, N\}$ with reflecting barriers
At interior states the particle steps left or right with probability 1/2 each; it bounces off the walls at 0 and $N$. Stationary distribution: uniform.
Example 3: PageRank chain
$$P = \alpha \, \bar{A} + (1 - \alpha) \frac{1}{N} \mathbf{1}\mathbf{1}^\top$$

where $\bar{A}$ is the row-normalised web adjacency matrix and $\alpha$ is the damping factor (usually $\alpha = 0.85$). The teleportation term ensures the chain is irreducible and aperiodic. The stationary distribution is the PageRank vector.
Example 4: Gibbs chain on $\{-1, +1\}^2$
For the Ising model on two spins, $P$ is a $4 \times 4$ matrix constructed by randomly updating one spin at a time conditional on the other. This is the simplest MCMC example.
3. Classification of States
3.1 Communicating Classes and Irreducibility
Definition. State $j$ is accessible from state $i$ (written $i \to j$) if there exists $n \ge 0$ such that $(P^n)_{ij} > 0$. States $i$ and $j$ communicate (written $i \leftrightarrow j$) if $i \to j$ and $j \to i$.
Communication is an equivalence relation, and its equivalence classes are called communicating classes. A communicating class $C$ is closed if no state outside $C$ is accessible from any state inside: $i \in C$ and $i \to j$ imply $j \in C$.
Definition (Irreducible Chain). A Markov chain is irreducible if it has exactly one communicating class - every state communicates with every other. Equivalently, for every pair $(i, j)$, there exists $n$ with $(P^n)_{ij} > 0$.
Irreducibility is the "connectivity" condition for Markov chains. An irreducible chain cannot get trapped in a subset of states. Most of the theory (stationary distributions, convergence) requires irreducibility.
Decomposition theorem: Any finite Markov chain decomposes into transient states (a particle eventually leaves forever) and one or more closed communicating classes of recurrent states. Once the chain enters a closed class, it stays there forever.
For AI: A language model's transition matrix is nearly irreducible: any token sequence of sufficient length can be generated (in principle). Practical degeneracy (zero-probability transitions as the softmax temperature $\to 0$) can create absorbing states.
3.2 Recurrence and Transience
Definition. Define the first return time to state $i$: $T_i = \inf\{n \ge 1 : X_n = i\}$, with $T_i = \infty$ if the chain never returns. The return probability is $f_i = P(T_i < \infty \mid X_0 = i)$:
- State $i$ is recurrent if $f_i = 1$ (return is certain)
- State $i$ is transient if $f_i < 1$ (positive probability of never returning)
Theorem (Recurrence criterion). State $i$ is recurrent if and only if $\sum_{n=1}^{\infty} (P^n)_{ii} = \infty$. It is transient if and only if $\sum_{n=1}^{\infty} (P^n)_{ii} < \infty$.
Proof sketch: Let $N_i$ be the number of returns to $i$. Then $E[N_i] = \sum_{n=1}^{\infty} (P^n)_{ii}$. If $f_i = 1$, the chain returns infinitely often with probability 1, so the sum diverges. If $f_i < 1$, the number of returns is geometric with finite mean $f_i/(1 - f_i)$, so the sum converges.
Positive vs. null recurrence: Among recurrent states, define the mean return time $m_i = E[T_i \mid X_0 = i]$.
- Positive recurrent: $f_i = 1$ and $m_i < \infty$ (returns in finite average time)
- Null recurrent: $f_i = 1$ but $m_i = \infty$ (certain to return, but takes forever on average)
All recurrent states in a finite irreducible chain are positive recurrent (finite chains cannot be null-recurrent). The simple random walk on $\mathbb{Z}^d$ is null-recurrent in 1D and 2D, transient in 3D and above (Pólya's theorem).
For AI: In MCMC, we want the chain to be recurrent (it returns to every region of the posterior). A transient MCMC chain would drift away from the posterior, giving biased samples.
3.3 Periodicity
Definition. The period of state $i$ is:

$$d(i) = \gcd\{n \ge 1 : (P^n)_{ii} > 0\}$$

State $i$ is aperiodic if $d(i) = 1$; otherwise it is periodic with period $d(i)$.
Key facts:
- All states in the same communicating class have the same period
- A chain is called aperiodic if all states have period 1
- If $P_{ii} > 0$ for some state $i$, then $d(i) = 1$ (self-loops force aperiodicity)
- An irreducible aperiodic chain converges to its stationary distribution (Section 4.3)
Example: A two-state chain with $P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is periodic with period 2: the chain alternates 0, 1, 0, 1, ..., and $(P^n)_{00} > 0$ only for even $n$.
Periodicity breaks convergence: If a chain has period $d > 1$, then $P^n$ does not converge as $n \to \infty$ - it oscillates. Adding any self-loop probability makes the chain aperiodic.
For AI: Metropolis-Hastings with a symmetric proposal that includes the possibility of staying put is automatically aperiodic (self-loops happen when a proposal is rejected). This is why MCMC chains converge.
3.4 Absorbing States and Absorption Probabilities
Definition. State $a$ is absorbing if $P_{aa} = 1$ (the chain stays at $a$ forever once it arrives). A chain in which every state can reach at least one absorbing state is called absorbing.
Canonical form: For an absorbing chain with $r$ absorbing states and $t$ transient states, the transition matrix can be written in canonical form (transient states first):

$$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}$$

where $Q$ ($t \times t$) is the sub-matrix of transitions among transient states, $R$ ($t \times r$) is the matrix of transitions to absorbing states, and $I$ is the identity on absorbing states.
Fundamental matrix: $N = (I - Q)^{-1}$. The $(i, j)$ entry of $N$ is the expected number of times the chain visits transient state $j$ before absorption, starting from transient state $i$.
Absorption probabilities: $B = NR$ gives the probability of being absorbed into each absorbing state. The gambler's ruin formula (derived via the optional stopping theorem in Section 06) follows directly from this formula.
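A sketch of the fundamental-matrix computation for fair gambler's ruin on $\{0, \dots, 4\}$ (a standard instance, chosen here for illustration):

```python
import numpy as np

# Gambler's ruin on {0,...,4}: states 1-3 transient, 0 and 4 absorbing; fair coin.
# Canonical ordering: transient states (1, 2, 3) first, then absorbing (0, 4).
Q = np.array([[0.0, 0.5, 0.0],     # transitions among transient states 1, 2, 3
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],          # transient -> absorbing (0, 4)
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visit counts
B = N @ R                          # absorption probabilities

print(N.sum(axis=1))               # expected steps to absorption: [3, 4, 3]
print(B)                           # from state k: P(ruin at 0), P(reach 4) = 1 - k/4, k/4
```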
3.5 Ergodic Chains
Definition (Ergodic Chain). A Markov chain is ergodic if it is:
- Irreducible - every state communicates with every other
- Positive recurrent - mean return time is finite for all states
- Aperiodic - all states have period 1
Ergodic chains are the "nice" chains: they have a unique stationary distribution, and the chain converges to it from any starting state.
For finite chains: An irreducible chain on a finite state space is automatically positive recurrent. So for finite chains, ergodic = irreducible + aperiodic.
Ergodic theorem: For an ergodic chain, the time-average converges to the space-average: for any bounded $f$,

$$\frac{1}{N} \sum_{n=1}^{N} f(X_n) \xrightarrow{N \to \infty} \sum_{i \in \mathcal{S}} \pi(i) f(i) \quad \text{almost surely}$$

This is the Markov chain version of the law of large numbers. It is why MCMC works: time-averaging over the chain's trajectory approximates the posterior expectation.
4. Stationary Distributions
4.1 Definition and Existence
Definition (Stationary Distribution). A probability distribution $\pi$ on $\mathcal{S}$ is a stationary distribution (also called invariant distribution or steady-state distribution) for a Markov chain with transition matrix $P$ if:

$$\pi P = \pi, \qquad \text{i.e.} \quad \pi(j) = \sum_i \pi(i) P_{ij} \ \text{ for all } j$$

Equivalently, $\pi$ is a left eigenvector of $P$ with eigenvalue 1. If the chain starts in distribution $\pi$, it stays in distribution $\pi$ for all time: $\mu_n = \pi$ for all $n$.
Interpretation: $\pi(i)$ represents the long-run fraction of time the chain spends in state $i$. For PageRank, $\pi(i)$ is the importance score of page $i$. For MCMC, $\pi$ is the target posterior distribution.
Existence: Every finite irreducible Markov chain has a unique stationary distribution, with $\pi(i) > 0$ for all $i$. For infinite chains, existence requires positive recurrence.
Non-uniqueness when reducible: A chain with multiple closed communicating classes has infinitely many stationary distributions - any convex combination of the stationary distributions supported on the individual closed classes is stationary.
The mean return time formula: For a positive recurrent chain, $\pi(i) = 1/m_i$, where $m_i$ is the mean return time. States visited more often (smaller $m_i$) get higher stationary probability.
4.2 Perron-Frobenius Theorem
The Perron-Frobenius theorem is the fundamental result guaranteeing existence and uniqueness of the stationary distribution.
Theorem (Perron-Frobenius for Stochastic Matrices). Let $P$ be an irreducible stochastic matrix. Then:
- $\lambda = 1$ is an eigenvalue of $P$ (always true for stochastic matrices, since $P \mathbf{1} = \mathbf{1}$)
- $|\lambda| \le 1$ for all other eigenvalues
- There exists a unique probability vector $\pi > 0$ (componentwise) with $\pi P = \pi$
- $\pi$ is the unique (up to scaling) left eigenvector with eigenvalue 1
If additionally $P$ is primitive (irreducible and aperiodic), then $\lambda = 1$ is the unique eigenvalue on the unit circle, and $|\lambda_2| < 1$ strictly. This gap drives convergence.
Proof sketch (uniqueness): Suppose $\pi$ and $\pi'$ are both stationary. Then $\delta = \pi - \pi'$ satisfies $\delta P = \delta$ with $\sum_i \delta(i) = 0$. One checks that $|\delta|$ is also invariant, so if $\delta \not\equiv 0$, irreducibility forces $|\delta|(i) > 0$ for all $i$ and $\delta$ to have the same sign everywhere. But a nowhere-zero vector of constant sign cannot sum to 0 - contradiction.
For AI: PageRank power iteration converges because the damped PageRank matrix is primitive (the teleportation term makes it irreducible and the resulting self-loops make it aperiodic). The dominant eigenvalue is 1, and all other eigenvalues are at most $\alpha$ in magnitude, guaranteeing geometric convergence.
4.3 Convergence to Stationarity
Theorem (Convergence for Ergodic Chains). Let $P$ be irreducible and aperiodic with stationary distribution $\pi$. Then for any initial distribution $\mu_0$:

$$\|\mu_0 P^n - \pi\|_{TV} \to 0 \quad \text{as } n \to \infty$$

Moreover, for any starting state $i$: $(P^n)_{ij} \to \pi(j)$ as $n \to \infty$ for all $j$.
Rate of convergence: For a reversible chain with eigenvalues $1 = \lambda_1 > \lambda_2 \ge \dots \ge \lambda_N > -1$, the convergence rate is governed by $\lambda_\star = \max(|\lambda_2|, |\lambda_N|)$:

$$\|\mu_0 P^n - \pi\|_{TV} \le C \, \lambda_\star^n$$

The rate $\lambda_\star$ is called the second largest eigenvalue modulus (SLEM). The spectral gap is $\gamma = 1 - \lambda_\star$; a large gap means fast convergence.
Periodic chains: If $P$ is irreducible but periodic with period $d$, then $P^n$ does not converge - it cycles. However, $P^d$ (the $d$-step chain) is aperiodic and does converge. In practice, a small perturbation (adding laziness) kills periodicity.
The lazy chain: Replace $P$ with $P_{\text{lazy}} = \frac{1}{2}(I + P)$. This halves the spectral gap but makes the chain aperiodic with all eigenvalues non-negative, ensuring monotone convergence.
4.4 Null-Recurrent Chains
The simple random walk on $\mathbb{Z}$ returns to the origin with probability 1 (recurrent) but takes infinitely long on average (null-recurrent). Such chains have no stationary probability distribution - the "stationary measure" is not normalisable.
The 1D random walk has invariant measure $\mu(i) = 1$ for all $i \in \mathbb{Z}$, which gives infinite total mass $\sum_i \mu(i) = \infty$. The chain visits every state infinitely often, but spends a vanishing fraction of time at any fixed state.
Implications for AI: Infinite-state chains that model text generation (treating the entire prefix as state) may be null-recurrent or transient. In practice, language models cap context length, making the state space finite and all states positive recurrent.
4.5 Computing Stationary Distributions
Method 1: Solve the linear system

$$\pi P = \pi, \qquad \sum_i \pi(i) = 1$$

This is an $N$-dimensional system (the normalisation replaces one of the redundant equations in $\pi(P - I) = 0$). For small $N$, direct linear algebra is exact.
Method 2: Power iteration
Starting from any distribution $\mu_0$, iterating $\mu_{n+1} = \mu_n P$ converges to $\pi$ at rate $O(\lambda_\star^n)$. Power iteration is the PageRank algorithm.
Method 3: Eigendecomposition. Compute the left eigenvector of $P$ corresponding to eigenvalue 1. For symmetric or reversible chains, this is equivalent to finding the dominant eigenvector.
Method 4: MCMC (for continuous state spaces). For distributions on $\mathbb{R}^d$, construct a Markov chain with stationary distribution $\pi$ and run it - the empirical distribution of the chain approximates $\pi$ by the ergodic theorem.
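A sketch comparing Methods 1 and 2 on a small made-up matrix:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
n = P.shape[0]

# Method 1: solve pi (P - I) = 0 subject to sum(pi) = 1.
# The normalisation row replaces one redundant balance equation.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
pi_solve = np.linalg.solve(A, b)

# Method 2: power iteration (this is PageRank's method).
pi_pow = np.ones(n) / n
for _ in range(500):
    pi_pow = pi_pow @ P

assert np.allclose(pi_solve, pi_pow, atol=1e-8)
print(pi_solve)
```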
5. Detailed Balance and Reversibility
5.1 Detailed Balance Equations
Definition (Detailed Balance). A distribution $\pi$ satisfies the detailed balance equations with respect to transition matrix $P$ if:

$$\pi(i) P_{ij} = \pi(j) P_{ji} \quad \text{for all } i, j$$

Detailed balance says: the probability flux from $i$ to $j$ at stationarity equals the flux from $j$ to $i$. This is a pointwise balance of probability flows, stronger than the global balance equation $\pi P = \pi$.
Theorem. If $\pi$ satisfies detailed balance with $P$, then $\pi$ is a stationary distribution for $P$.
Proof:

$$\sum_i \pi(i) P_{ij} = \sum_i \pi(j) P_{ji} = \pi(j) \sum_i P_{ji} = \pi(j)$$

So $\pi P = \pi$. The converse is false - a stationary distribution need not satisfy detailed balance.
Significance: Detailed balance is easier to verify than stationarity (it's a pairwise condition rather than a global one), and it characterises reversible chains. All Metropolis-Hastings algorithms are constructed to satisfy detailed balance by design.
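A quick numerical verification of the pairwise condition, using the simple random walk on a triangle graph, whose stationary distribution is proportional to vertex degree (a standard fact; the graph here is chosen for illustration):

```python
import numpy as np

# Random walk on a small undirected graph: P[i, j] = A[i, j] / deg(i).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])               # triangle graph adjacency, illustrative
deg = A.sum(axis=1)
P = A / deg[:, None]
pi = deg / deg.sum()                    # stationary distribution ~ degree

# Detailed balance: pi_i P_ij == pi_j P_ji for every pair (i, j),
# i.e. the flux matrix is symmetric.
flux = pi[:, None] * P
assert np.allclose(flux, flux.T)
```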
5.2 Reversible Markov Chains
Definition (Reversible Chain). A Markov chain is reversible (or time-reversible) with respect to distribution if it satisfies detailed balance. Equivalently, the chain running forwards in time has the same law as the chain running backwards.
The reversed chain: Given a stationary Markov chain with transition matrix $P$ and stationary distribution $\pi$, define the time-reversed chain by $\hat{P}_{ij} = \pi(j) P_{ji} / \pi(i)$. The reversed chain is also a Markov chain, with the same stationary distribution. The original chain is reversible iff $\hat{P} = P$.
Symmetric Markov chains are reversible: If $P_{ij} = P_{ji}$ for all $i, j$ (symmetric transition matrix), then detailed balance holds with the uniform distribution $\pi(i) = 1/N$. The symmetric random walk on a graph is reversible.
Non-reversible example: The chain $1 \to 2 \to 3 \to 1$ (cyclic with probability 1) has uniform stationary distribution but is NOT reversible: it has a preferred direction of rotation.
For AI: Reversibility is a sufficient condition for correctness of MCMC algorithms. A Metropolis-Hastings chain with target $\pi$ satisfies detailed balance - it's reversible - and therefore has $\pi$ as stationary distribution. Non-reversible MCMC (e.g., lifting methods, HMC) can mix faster but is harder to analyse.
5.3 Birth-Death Chains
A birth-death chain is a Markov chain on $\{0, 1, 2, \dots\}$ where only nearest-neighbour transitions are allowed:

$$P_{i,i+1} = p_i, \qquad P_{i,i-1} = q_i, \qquad P_{i,i} = 1 - p_i - q_i$$

Birth-death chains are automatically reversible, and their stationary distribution can be computed explicitly. The detailed balance equations give:

$$\pi(i) \, p_i = \pi(i+1) \, q_{i+1} \quad \Longrightarrow \quad \pi(i) = \pi(0) \prod_{k=1}^{i} \frac{p_{k-1}}{q_k}$$

Setting $\sum_i \pi(i) = 1$ (normalisation) gives the stationary distribution.
Queue models: An M/M/1 queue with arrival rate $\lambda$ and service rate $\mu$ is a birth-death chain. Stationary distribution: $\pi(n) = (1 - \rho)\rho^n$, where $\rho = \lambda/\mu$. The chain is stable (a stationary distribution exists) iff $\rho < 1$.
For AI: The queue length at an LLM serving system (number of pending requests) is approximately a birth-death chain. The stationary distribution under load determines average latency: the mean queue length is $E[N] = \rho/(1-\rho)$, which blows up as $\rho \to 1$.
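A small sketch of the M/M/1 stationary distribution at 90% utilisation (the rates below are illustrative):

```python
import numpy as np

lam, mu = 0.9, 1.0                 # hypothetical arrival and service rates
rho = lam / mu                     # utilisation; stable iff rho < 1

n = np.arange(200)
pi = (1 - rho) * rho**n            # geometric stationary distribution

mean_queue = rho / (1 - rho)       # E[N] = rho/(1-rho); 9.0 at 90% load
assert np.isclose((n * pi).sum(), mean_queue, atol=1e-3)
print(mean_queue)
```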
5.4 Spectral Decomposition of Reversible Chains
For a reversible chain, the transition matrix $P$ has real eigenvalues $1 = \lambda_1 > \lambda_2 \ge \dots \ge \lambda_N \ge -1$ and an orthonormal basis of eigenfunctions (with respect to the $\pi$-weighted inner product $\langle f, g \rangle_\pi = \sum_i \pi(i) f(i) g(i)$).

Spectral decomposition:

$$(P^n)_{ij} = \pi(j) \sum_{k=1}^{N} \lambda_k^n \, f_k(i) \, f_k(j)$$

where $f_k$ are the eigenfunctions (normalised in $L^2(\pi)$). As $n \to \infty$, only the $k = 1$ term survives (since $|\lambda_k| < 1$ for $k \ge 2$), giving $(P^n)_{ij} \to \pi(j)$. The rate of convergence is $\lambda_\star^n$.
The spectral gap: $\gamma = 1 - \lambda_\star$ measures how fast the chain forgets its initial state. A chain with spectral gap $\gamma$ mixes in $O(1/\gamma)$ steps (up to logarithmic factors). Bounding the spectral gap from below is the main technical challenge in proving MCMC convergence rates.
Dirichlet form: For reversible chains, the spectral gap equals:

$$\gamma = \min_{f : \, \mathrm{Var}_\pi(f) \neq 0} \frac{\mathcal{E}(f, f)}{\mathrm{Var}_\pi(f)}, \qquad \mathcal{E}(f, f) = \frac{1}{2} \sum_{i,j} \pi(i) \, P_{ij} \, (f(i) - f(j))^2$$

where $\mathcal{E}$ is the Dirichlet form. This variational characterisation is the basis for Poincaré inequality methods.
6. Mixing Times and Convergence Rates
6.1 Total Variation Distance
Definition. The total variation (TV) distance between two probability distributions $\mu$ and $\nu$ on $\mathcal{S}$ is:

$$\|\mu - \nu\|_{TV} = \max_{A \subseteq \mathcal{S}} |\mu(A) - \nu(A)| = \frac{1}{2} \sum_{i \in \mathcal{S}} |\mu(i) - \nu(i)|$$

TV distance measures how distinguishable two distributions are: it equals the largest difference in probability that any single event can have under $\mu$ vs. $\nu$. The TV distance is 0 iff $\mu = \nu$, and 1 iff the distributions have disjoint support.
Coupling characterisation: $\|\mu - \nu\|_{TV} = \inf P(X \neq Y)$, where the infimum is over all couplings $(X, Y)$ with marginals $\mu$ and $\nu$. This is the coupling inequality: any coupling gives an upper bound on TV distance.
For AI: TV distance is used to measure how far an MCMC chain is from stationarity, and to certify when the chain has "mixed" sufficiently to produce approximately independent samples from the posterior.
6.2 Mixing Time
Definition. The mixing time of a Markov chain with stationary distribution $\pi$ is:

$$t_{\text{mix}}(\epsilon) = \min\left\{ n : \max_{i \in \mathcal{S}} \left\| (P^n)_{i \, \cdot} - \pi \right\|_{TV} \le \epsilon \right\}$$

The standard mixing time is $t_{\text{mix}} = t_{\text{mix}}(1/4)$ (the default $\epsilon = 1/4$ is conventional). Mixing time measures how long the chain must run before its distribution is $\epsilon$-close to stationarity from the worst-case starting state.
Why the worst case? In applications, we often don't control the starting state. The worst-case mixing time certifies that regardless of initialisation, the chain has converged after $t_{\text{mix}}(\epsilon)$ steps.
Submultiplicativity: $t_{\text{mix}}(\epsilon) \le \lceil \log_2(1/\epsilon) \rceil \, t_{\text{mix}}(1/4)$. So to achieve precision $\epsilon$, it suffices to run for $O(t_{\text{mix}} \log(1/\epsilon))$ steps.
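A brute-force sketch that finds $t_{\text{mix}}(1/4)$ for a small made-up chain by tracking the worst-case TV distance over starting states:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Stationary distribution via a long power iteration.
pi = np.ones(3) / 3
for _ in range(1000):
    pi = pi @ P

def tv(mu, nu):
    return 0.5 * np.abs(mu - nu).sum()

# Row i of P^n is the distribution after n steps from state i.
Pn = np.eye(3)
for n in range(1, 100):
    Pn = Pn @ P
    if max(tv(Pn[i], pi) for i in range(3)) <= 0.25:   # conventional eps = 1/4
        print("t_mix(1/4) =", n)
        break
```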
6.3 Spectral Gap
For a reversible ergodic chain, the spectral gap $\gamma = 1 - \lambda_\star$ provides a bound on mixing time:

Theorem (Spectral Gap Bound):

$$\left( \frac{1}{\gamma} - 1 \right) \log\left( \frac{1}{2\epsilon} \right) \;\le\; t_{\text{mix}}(\epsilon) \;\le\; \frac{1}{\gamma} \log\left( \frac{1}{\epsilon \, \pi_{\min}} \right)$$

where $\pi_{\min} = \min_i \pi(i)$.
Proof sketch: From the spectral decomposition, $\|(P^n)_{i \, \cdot} - \pi\|_{TV} \lesssim \lambda_\star^n / \sqrt{\pi_{\min}}$. Setting this $\le \epsilon$ and solving for $n$ gives the upper bound.
Lower bound: There is also a lower bound: $t_{\text{mix}}(\epsilon) \ge \left( \frac{1}{\gamma} - 1 \right) \log\left( \frac{1}{2\epsilon} \right)$. So the mixing time is $\Theta(1/\gamma)$, up to logarithmic factors, for reversible chains.
Computing the spectral gap: For small matrices, compute the eigenvalues of $P$ directly. For large chains, use the Cheeger inequality: $\frac{\Phi^2}{2} \le \gamma \le 2\Phi$, where $\Phi$ is the Cheeger constant (conductance of the chain).
6.4 Coupling Arguments
Definition (Coupling). A coupling of two Markov chains with the same transition matrix $P$ but different initial distributions is a joint process $(X_n, Y_n)$ such that each marginal is a valid Markov chain with transition matrix $P$, and once $X_n = Y_n$ the two chains remain together. The first time they meet is the coalescence or coupling time.
Coupling inequality:

$$\left\| (P^n)_{x \, \cdot} - (P^n)_{y \, \cdot} \right\|_{TV} \le P(\tau_{\text{couple}} > n)$$

where $\tau_{\text{couple}}$ is the coupling time.
Grand coupling: For a finite ergodic chain, one can construct a single coupling of chains starting from all possible pairs simultaneously, using the same transition rule for all pairs. The mixing time is then bounded via the worst-case coupling time: $t_{\text{mix}}(\epsilon) \le \min\{n : \max_{x,y} P(\tau_{x,y} > n) \le \epsilon\}$.
Example: Random walk on the hypercube. For a lazy random walk on $\{0,1\}^d$, couple two chains by updating the same coordinate with the same bit at each step. Once a coordinate is chosen, the two chains agree on that bit forever after. All bits agree after $O(d \log d)$ steps (coupon collector), giving $t_{\text{mix}} = O(d \log d)$.
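A simulation sketch of this coupling (the dimension $d = 20$ is chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20

# Lazy random walk on {0,1}^d: pick a coordinate, set it to a fair coin flip.
# Couple two chains by sharing the SAME coordinate and the SAME coin each step.
x = np.zeros(d, dtype=int)          # chain 1: start at all-zeros
y = np.ones(d, dtype=int)           # chain 2: start at all-ones

t = 0
while not np.array_equal(x, y):
    i = rng.integers(d)             # shared coordinate choice
    bit = rng.integers(2)           # shared coin flip
    x[i] = bit
    y[i] = bit                      # chains agree on coordinate i forever after
    t += 1

print("coalescence time:", t)       # ~ d log d, the coupon-collector time
```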
6.5 AI Relevance: Fast Mixing and MCMC Warm Starts
Fast mixing in practice: For Bayesian inference in LLM fine-tuning (e.g., RLHF reward models), the posterior over model weights is a distribution on $\mathbb{R}^d$ with $d$ in the billions. Direct MCMC is intractable; approximate methods (Langevin dynamics, HMC) exploit gradient information to achieve better mixing.
Spectral gap and dimensionality: For a Gaussian target $N(0, \Sigma)$, the Langevin SDE mixing time scales as $O(\kappa)$, where $\kappa = \lambda_{\max}(\Sigma)/\lambda_{\min}(\Sigma)$ is the condition number of $\Sigma$. Preconditioning (e.g., SGLD with adaptive learning rates, Adam-Langevin) improves the effective spectral gap.
Warm starts: If the initial distribution is already close to $\pi$ (e.g., from a previous MCMC run), the chain needs fewer steps to mix. In sequential Bayesian updating (fine-tuning an LLM on new data), the previous posterior serves as a warm start.
Temperature scheduling: Simulated annealing starts with a high-temperature chain (wide, fast-mixing distribution) and gradually cools toward the target. This exploits fast mixing at high temperature and accurate targeting at low temperature.
7. Continuous-Time Markov Chains
7.1 Generator Matrix
A continuous-time Markov chain (CTMC) on a finite state space satisfies the Markov property in continuous time:

$$P(X_{t+s} = j \mid X_s = i, \, \{X_u : u \le s\}) = P(X_{t+s} = j \mid X_s = i) = P_{ij}(t)$$

The chain holds in each state for an exponentially distributed sojourn time, then jumps according to a discrete-time chain. The rate of leaving state $i$ is $q_i = \sum_{j \neq i} q_{ij}$, where $q_{ij}$ are the jump rates from $i$ to $j$.
Definition (Generator Matrix / Q-Matrix). The generator (or Q-matrix) of a CTMC is the matrix $Q$ with:

$$Q_{ij} = q_{ij} \ (i \neq j), \qquad Q_{ii} = -q_i = -\sum_{j \neq i} q_{ij}$$

Key property: rows of $Q$ sum to zero ($\sum_j Q_{ij} = 0$). The diagonal entries are non-positive; off-diagonal entries are non-negative.
Transition matrix: The probability of being in state $j$ at time $t$ given state $i$ at time 0 is:

$$P_{ij}(t) = \left( e^{tQ} \right)_{ij}$$

This is the matrix exponential. Note $P(0) = I$ and $P(t) \to \mathbf{1}\pi$ as $t \to \infty$ (convergence to stationarity for ergodic chains).
Embedded chain: The CTMC can be decomposed into (1) an embedded discrete-time Markov chain governing the sequence of states visited, and (2) exponential holding times in each state. Jump probabilities: $\tilde{P}_{ij} = q_{ij}/q_i$ for $j \neq i$.
7.2 Kolmogorov Equations
Forward Equation (Fokker-Planck):

$$\frac{d}{dt} P(t) = P(t) \, Q$$

This equation describes how the distribution evolves forward in time: $\frac{d}{dt} \mu_t = \mu_t Q$, where $\mu_t = \mu_0 P(t)$.

Backward Equation:

$$\frac{d}{dt} P(t) = Q \, P(t)$$

The forward equation applies to the time-marginal distribution; the backward equation to the transition probabilities viewed as functions of the initial state.
Stationary distribution: $\pi$ is stationary if $\frac{d}{dt}\mu_t = 0$ at $\mu_t = \pi$, i.e.:

$$\pi Q = 0, \qquad \sum_i \pi(i) = 1$$

For an ergodic CTMC, the stationary distribution is unique.
7.3 Stationary Distribution of CTMCs
Detailed balance for CTMCs: A distribution $\pi$ satisfies detailed balance if $\pi(i) q_{ij} = \pi(j) q_{ji}$ for all $i \neq j$. Detailed balance implies stationarity ($\pi Q = 0$).
Computing the stationary distribution: Solve $\pi Q = 0$ with $\sum_i \pi(i) = 1$. This is equivalent to finding the left null space of $Q$ (the eigenvector of $Q^\top$ with eigenvalue 0).
Birth-death CTMC: With birth rate $\lambda_i$ from state $i$ and death rate $\mu_i$, detailed balance gives $\pi(i)\lambda_i = \pi(i+1)\mu_{i+1}$, yielding:

$$\pi(n) = \pi(0) \prod_{k=1}^{n} \frac{\lambda_{k-1}}{\mu_k}$$
7.4 Poisson Process as CTMC
Recall from Section 06 (Stochastic Processes): The Poisson process $N(t)$ counts arrivals up to time $t$, with independent Poisson increments. Inter-arrival times are i.i.d. $\mathrm{Exp}(\lambda)$.
The Poisson process is the simplest CTMC: states $\{0, 1, 2, \dots\}$, jump rates $q_{i,i+1} = \lambda$ (arrivals only), generator:

$$Q_{ii} = -\lambda, \qquad Q_{i,i+1} = \lambda, \qquad \text{all other entries } 0$$
The Poisson process is transient - it drifts to $+\infty$ with no stationary distribution. An M/M/1 queue (arrivals at rate $\lambda$, service at rate $\mu$) modifies the Poisson process by adding downward jumps (service completions), yielding a birth-death CTMC with geometric stationary distribution $\pi(n) = (1-\rho)\rho^n$, where $\rho = \lambda/\mu < 1$.
For AI: The Poisson process models token arrivals at an LLM serving endpoint. At high load ($\rho \to 1$), the queue length distribution becomes heavy-tailed, leading to high-percentile latency spikes. The M/M/1 formula quantifies this: 90% utilisation means a mean queue length of $0.9/0.1 = 9$ times the base level.
8. MCMC: Markov Chain Monte Carlo
8.1 The Sampling Problem
The challenge: In Bayesian inference, the posterior $p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta) \, p(\theta)}{p(\mathcal{D})}$ is analytically intractable for most models. Computing posterior expectations requires integrating over a high-dimensional distribution we cannot normalise.
MCMC solution: Construct a Markov chain $(\theta_n)$ on the parameter space such that:
- The chain is ergodic (irreducible, aperiodic, positive recurrent)
- The stationary distribution equals the target posterior $\pi(\theta) = p(\theta \mid \mathcal{D})$
By the ergodic theorem, time-averaging gives posterior expectations:

$$\frac{1}{N} \sum_{n=1}^{N} f(\theta_n) \xrightarrow{N \to \infty} E_\pi[f(\theta)]$$

The key insight: we only need to know $\pi$ up to a normalising constant. MCMC constructs the chain using only unnormalised evaluations $\tilde{\pi}(\theta) = p(\mathcal{D} \mid \theta) \, p(\theta)$.
8.2 Metropolis-Hastings
Algorithm (Metropolis-Hastings). Given current state $\theta$, target $\pi$, proposal $q(\theta' \mid \theta)$:
- Sample candidate $\theta' \sim q(\cdot \mid \theta)$
- Compute acceptance ratio:

$$\alpha(\theta, \theta') = \min\left( 1, \; \frac{\pi(\theta') \, q(\theta \mid \theta')}{\pi(\theta) \, q(\theta' \mid \theta)} \right)$$

- With probability $\alpha$: accept ($\theta_{n+1} = \theta'$); otherwise: $\theta_{n+1} = \theta$ (reject and stay)
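A minimal sketch before the correctness argument: random walk Metropolis on a made-up 1D bimodal target, using a symmetric Gaussian proposal so the Hastings correction cancels:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalised log-density: a 1D mixture of two Gaussians (illustrative)."""
    return np.logaddexp(-0.5 * (x - 2.0)**2, -0.5 * (x + 2.0)**2)

def metropolis_hastings(n_steps, sigma=1.0):
    x = 0.0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        x_prop = x + sigma * rng.normal()          # symmetric Gaussian proposal
        # Symmetric proposal => Metropolis ratio pi(x')/pi(x), in log space.
        if np.log(rng.uniform()) < log_target(x_prop) - log_target(x):
            x = x_prop                             # accept
        samples[t] = x                             # on reject, stay put
    return samples

samples = metropolis_hastings(50_000)
print(samples.mean(), samples.std())               # roughly 0 and sqrt(5) for this target
```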
Correctness: The Metropolis-Hastings chain satisfies detailed balance with respect to $\pi$:

$$\pi(x) \, P(x, y) = \pi(y) \, P(y, x) \quad \text{for all } x \neq y$$

Proof: Without loss of generality, assume $\pi(y) \, q(x \mid y) \le \pi(x) \, q(y \mid x)$. Then $\alpha(x, y) = \frac{\pi(y) \, q(x \mid y)}{\pi(x) \, q(y \mid x)}$ and $\alpha(y, x) = 1$, so both sides equal $\pi(y) \, q(x \mid y)$.
Special cases:
- Metropolis algorithm: A symmetric proposal ($q(\theta' \mid \theta) = q(\theta \mid \theta')$) simplifies the acceptance ratio to $\alpha = \min(1, \pi(\theta')/\pi(\theta))$
- Independence sampler: Proposal $q(\theta')$ independent of the current state; acceptance $\alpha = \min\left( 1, \frac{\pi(\theta') \, q(\theta)}{\pi(\theta) \, q(\theta')} \right)$
Proposal tuning: For a Gaussian random walk proposal $q(\theta' \mid \theta) = N(\theta, \sigma^2 I)$, the optimal $\sigma$ achieves acceptance rate ~23.4% in high dimensions (Roberts-Gelman-Gilks 1997). Too small $\sigma$: high acceptance but slow exploration; too large: low acceptance, inefficient.
For AI: MCMC is used for posterior inference in Bayesian neural networks, uncertainty quantification in LLM outputs, and sampling from energy-based models. In practice, Langevin dynamics (gradient-based MCMC) replaces vanilla MH for high-dimensional targets.
8.3 Gibbs Sampling
Algorithm (Gibbs Sampling). For target $\pi(\theta_1, \dots, \theta_d)$, cycle through coordinates:
- Sample $\theta_1 \sim \pi(\theta_1 \mid \theta_2, \dots, \theta_d)$
- Sample $\theta_2 \sim \pi(\theta_2 \mid \theta_1, \theta_3, \dots, \theta_d)$
- ...
- Sample $\theta_d \sim \pi(\theta_d \mid \theta_1, \dots, \theta_{d-1})$
Correctness: Gibbs sampling is a special case of Metropolis-Hastings where each coordinate update is a MH step with proposal equal to the full conditional. The acceptance ratio is always 1 - every proposal is accepted. This requires the ability to sample from the full conditionals $\pi(\theta_k \mid \theta_{-k})$.
Block Gibbs: Instead of updating one coordinate at a time, update a block of coordinates jointly. More efficient when coordinates within a block are highly correlated.
For AI: Gibbs sampling is used in latent Dirichlet allocation (LDA) for topic modelling - the full conditionals for topic assignments are available in closed form. In RL, it can be used to sample from the Q-function distribution for exploration.
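A sketch of Gibbs sampling for a bivariate Gaussian with correlation $\rho = 0.8$ (a standard demonstration case, where both full conditionals are known Gaussians):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                                  # correlation of the bivariate Gaussian target

# Target: (X1, X2) ~ N(0, [[1, rho], [rho, 1]]).
# Full conditionals: X1 | X2 = x2 ~ N(rho * x2, 1 - rho^2), and symmetrically.
n = 20_000
x1, x2 = 0.0, 0.0
out = np.empty((n, 2))
for t in range(n):
    x1 = rho * x2 + np.sqrt(1 - rho**2) * rng.normal()
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal()
    out[t] = (x1, x2)

print(np.corrcoef(out.T)[0, 1])            # ~0.8
```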
8.4 Convergence Diagnostics
Running an MCMC chain for a finite number of steps gives an approximate (not exact) sample from $\pi$. Practical convergence diagnostics:
Trace plots: Plot $\theta_n$ vs. $n$. A well-mixed chain looks like stationary noise. Trends, drifts, or stuck values indicate non-convergence.
$\hat{R}$ statistic (Gelman-Rubin): Run $m$ chains from diverse starting points. Compute $\hat{R}$, essentially the ratio of pooled-chain variance to within-chain variance. If $\hat{R} \approx 1$ (a common threshold is $\hat{R} < 1.01$), the chains have converged to the same distribution.
Effective sample size (ESS): Due to autocorrelation within the chain, $N$ MCMC samples carry less information than $N$ iid samples. $\mathrm{ESS} = \frac{N}{1 + 2\sum_{k=1}^{\infty} \rho_k}$, where $\rho_k$ is the lag-$k$ autocorrelation. $\mathrm{ESS} \ll N$ indicates poor mixing.
Burn-in: The first samples (before the chain has mixed) are discarded. Burn-in is commonly 10-50% of the total samples.
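A rough ESS estimator following the formula above (truncating at the first negative autocorrelation, one common heuristic among several):

```python
import numpy as np

def effective_sample_size(chain, max_lag=1000):
    """ESS = N / (1 + 2 * sum of positive lag-k autocorrelations)."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf_sum = 0.0
    for k in range(1, min(max_lag, n - 1)):
        rho_k = np.dot(x[:-k], x[k:]) / np.dot(x, x)
        if rho_k < 0.0:          # truncate at first negative autocorrelation
            break
        acf_sum += rho_k
    return n / (1.0 + 2.0 * acf_sum)

# e.g. applied to the `samples` array from the Metropolis-Hastings sketch above:
# print(effective_sample_size(samples))
```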
8.5 Hamiltonian Monte Carlo
Motivation: For distributions on $\mathbb{R}^d$, random walk Metropolis-Hastings needs step size $\sigma = O(d^{-1/2})$ to maintain acceptable acceptance rates, giving diffusive mixing: $O(d)$ steps to traverse the distribution. Hamiltonian Monte Carlo (HMC) exploits gradient information to take large, high-acceptance steps.
Algorithm: Augment $\theta$ with momentum $p \sim N(0, I)$. The Hamiltonian is $H(\theta, p) = -\log \pi(\theta) + \frac{1}{2}\|p\|^2$. Propose a new state by simulating Hamiltonian dynamics using the leapfrog integrator for $L$ steps with step size $\epsilon$, then accept/reject with MH ratio $\min(1, \exp(H(\theta, p) - H(\theta', p')))$.
Why it works: Hamiltonian dynamics preserves the joint distribution $\propto \exp(-H(\theta, p))$ on $(\theta, p)$. Marginalising out $p$ recovers $\pi(\theta)$. The leapfrog integrator introduces discretisation error, corrected by the MH accept/reject step.
Mixing time: HMC achieves roughly $O(d^{1/4})$ gradient evaluations per effectively independent sample (vs. $O(d)$ for random walk Metropolis), making it scalable to high dimensions. NUTS (No-U-Turn Sampler) automates the choice of $L$ and $\epsilon$.
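A compact HMC sketch with a hand-rolled leapfrog integrator; the step size and trajectory length below are illustrative, not tuned:

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(x, log_p, grad_log_p, eps=0.1, L=20):
    """One HMC transition targeting exp(log_p): leapfrog + MH correction."""
    p0 = rng.normal(size=x.shape)                   # resample momentum
    x_new, p = x.copy(), p0.copy()
    p += 0.5 * eps * grad_log_p(x_new)              # half step for momentum
    for _ in range(L - 1):
        x_new += eps * p                            # full step for position
        p += eps * grad_log_p(x_new)                # full step for momentum
    x_new += eps * p
    p += 0.5 * eps * grad_log_p(x_new)              # final half step
    # H = -log_p(x) + |p|^2 / 2; accept with prob min(1, exp(H_old - H_new)).
    h_old = -log_p(x) + 0.5 * p0 @ p0
    h_new = -log_p(x_new) + 0.5 * p @ p
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

# Standard Gaussian target as a smoke test.
log_p = lambda x: -0.5 * x @ x
grad_log_p = lambda x: -x
x = np.zeros(5)
samples = []
for _ in range(5000):
    x = hmc_step(x, log_p, grad_log_p)
    samples.append(x.copy())
print(np.std(samples, axis=0))                      # ~1 in each coordinate
```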
For AI: HMC is the backbone of probabilistic programming systems like Stan and PyMC. In Bayesian deep learning, SGLD (stochastic gradient Langevin dynamics) adds Gaussian noise to SGD steps, yielding approximate Bayesian inference at scale.
9. ML Deep Dive
9.1 Language Model Generation
Every autoregressive language model - GPT, LLaMA, PaLM, Gemini - generates text as a Markov chain. The state at step $n$ is the entire generated sequence $x_{1:n}$ (encoded in the KV cache), and the transition is:

$$P(x_{n+1} \mid x_{1:n}) = \mathrm{softmax}\big(\mathrm{logits}_\theta(x_{1:n})\big)$$
Temperature and the stationary distribution: At temperature $T$, transition probabilities are $p_i \propto \exp(\ell_i / T)$ for logits $\ell_i$:
- $T \to 0$: greedy decoding (deterministic); $T = 1$: standard sampling; $T \to \infty$: uniform
- The generated sequence is a Markov chain; its stationary distribution (if the chain were infinite) would represent the model's "preferred" discourse
Mixing time of language models: For a well-trained LLM, the Markov chain mixes quickly - the model "forgets" its starting state within a few hundred tokens for most topics. This reflects the inductive bias toward coherent text: the chain has a large spectral gap.
Top-$k$ and nucleus sampling: Top-$k$ sampling restricts transitions to the $k$ highest-probability tokens; nucleus (top-$p$) sampling restricts to the minimal set of tokens whose cumulative probability exceeds $p$. Both modify the transition matrix to improve mixing (reduce probability of stuck low-quality loops).
Repetition and periodicity: Without repetition penalties, language model chains can enter periodic loops (mode collapse in generation: "the the the ..."). Adding a repetition penalty introduces a state-dependent transition modification that destroys the periodic structure.
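A decoding sketch combining temperature, top-$k$, and nucleus filtering on a raw logit vector (the logits are made up; a real decoder would operate on model outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    """One Markov transition of LM decoding: logits -> next token id."""
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if top_k is not None:                       # keep only the k most likely tokens
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)
    if top_p is not None:                       # nucleus: smallest set with mass >= p
        order = np.argsort(probs)[::-1]
        cum = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cum, top_p) + 1]
        mask = np.zeros_like(probs); mask[keep] = 1.0
        probs *= mask
    probs /= probs.sum()                        # renormalise the modified transition row
    return rng.choice(len(probs), p=probs)

print(sample_next_token([2.0, 1.0, 0.1, -1.0], temperature=0.7, top_k=3))
```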
9.2 PageRank
Setup: Represent the web as a directed graph with $N$ pages. The random surfer model: a user follows a random outgoing link at each step with probability $\alpha$ (the damping factor), or teleports to a uniformly random page with probability $1 - \alpha$.
Transition matrix:

$$P_{ij} = \alpha \, \frac{A_{ij}}{d_i} + (1 - \alpha) \frac{1}{N}$$

where $A_{ij} = 1$ if page $i$ links to page $j$, and $d_i$ is the out-degree of page $i$.
Dangling nodes: Pages with no outbound links ($d_i = 0$) break row-stochasticity. Fix: replace the row for each dangling node with the uniform distribution $1/N$. The resulting matrix is stochastic.
PageRank computation: The PageRank vector $\pi$ satisfies $\pi = \pi P$. Computed by power iteration:

$$\pi^{(t+1)} = \pi^{(t)} P$$

Convergence is guaranteed by Perron-Frobenius (the damped matrix is primitive). Convergence rate: geometric, since the second eigenvalue is at most $\alpha$ in magnitude. With $\alpha = 0.85$, convergence to precision $10^{-8}$ takes on the order of $\log(10^{-8})/\log(0.85) \approx 113$ iterations.
Interpretation: $\pi(i)$ is the long-run fraction of time the random surfer spends on page $i$. Pages with high in-degree from other high-PageRank pages get high scores.
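A power-iteration sketch on a made-up 4-page graph, with the dangling-node fix from above:

```python
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10):
    """Power iteration on the damped random-surfer chain."""
    N = A.shape[0]
    deg = A.sum(axis=1)
    # Row-normalise; dangling nodes (no out-links) teleport uniformly.
    P = np.where(deg[:, None] > 0, A / np.maximum(deg, 1)[:, None], 1.0 / N)
    pi = np.ones(N) / N
    while True:
        pi_next = alpha * (pi @ P) + (1 - alpha) / N
        if np.abs(pi_next - pi).sum() < tol:
            return pi_next
        pi = pi_next

# Tiny 4-page web graph (made up): A[i, j] = 1 means page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A))
```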
For AI: Influence in a social network, citations in academic papers, and token importance in attention patterns can all be modelled as PageRank variants. The attention mechanism in transformers can be viewed as a differentiable, query-dependent version of PageRank.
9.3 Markov Decision Processes
A Markov Decision Process (MDP) extends Markov chains by adding actions:
- $\mathcal{S}$: state space; $\mathcal{A}$: action space
- $P(s' \mid s, a)$: transition probabilities given action $a$
- $R(s, a)$: reward function
- $\gamma \in [0, 1)$: discount factor
A policy $\pi(a \mid s)$ induces a Markov chain on $\mathcal{S}$ with transition $P^\pi(s' \mid s) = \sum_a \pi(a \mid s) \, P(s' \mid s, a)$.
Bellman equation: The value function $V^\pi(s)$ (expected discounted return from state $s$) satisfies:

$$V^\pi(s) = \sum_a \pi(a \mid s) \left[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V^\pi(s') \right]$$

In matrix form: $V^\pi = R^\pi + \gamma P^\pi V^\pi$, giving $V^\pi = (I - \gamma P^\pi)^{-1} R^\pi$.
Policy evaluation = linear system: Given a fixed policy $\pi$, computing $V^\pi$ requires inverting $I - \gamma P^\pi$. This is guaranteed invertible and well-conditioned since $\gamma < 1$ implies all eigenvalues of $\gamma P^\pi$ have magnitude at most $\gamma < 1$.
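A sketch of exact policy evaluation by linear solve (the induced chain and rewards are hypothetical):

```python
import numpy as np

# Policy evaluation as a linear solve: V = (I - gamma * P_pi)^(-1) R_pi.
# A made-up 3-state chain induced by some fixed policy.
P_pi = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.3, 0.3, 0.4]])
R_pi = np.array([1.0, 0.0, 5.0])       # expected one-step reward per state
gamma = 0.9

V = np.linalg.solve(np.eye(3) - gamma * P_pi, R_pi)
print(V)

# Sanity check: V satisfies the Bellman equation V = R + gamma * P V.
assert np.allclose(V, R_pi + gamma * P_pi @ V)
```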
RLHF as MDP: Reinforcement learning from human feedback trains a language model (policy $\pi_\theta$) to maximise a reward model - an MDP with the KV cache as state, the vocabulary as action space, and the human preference model as reward.
9.4 Hidden Markov Models
A Hidden Markov Model (HMM) is a Markov chain $(Z_n)$ of hidden states together with observations $X_n$ emitted from the current hidden state:

$$P(Z_{n+1} = j \mid Z_n = i) = A_{ij}, \qquad P(X_n = x \mid Z_n = i) = B_i(x)$$

where $A$ is the transition matrix and $B_i(\cdot)$ is the emission distribution of state $i$.
Forward algorithm: Compute $\alpha_n(i) = P(x_1, \dots, x_n, Z_n = i)$ recursively:

$$\alpha_{n+1}(j) = B_j(x_{n+1}) \sum_i \alpha_n(i) \, A_{ij}$$

Time complexity: $O(T K^2)$ for $K$ hidden states and $T$ observations. Used for likelihood computation and decoding.
Viterbi algorithm: Find the most likely hidden state sequence via dynamic programming:

$$\delta_{n+1}(j) = B_j(x_{n+1}) \, \max_i \delta_n(i) \, A_{ij}$$

Time complexity: same as the forward algorithm (the sum is replaced by a max).
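A sketch of both recursions on a toy 2-state HMM (all parameters illustrative):

```python
import numpy as np

# Toy 2-state HMM with 2 observation symbols (all numbers made up).
A = np.array([[0.9, 0.1],        # hidden-state transition matrix
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],        # B[i, x] = P(observe x | hidden state i)
              [0.3, 0.7]])
mu0 = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 1]

# Forward algorithm: alpha_n(i) = P(x_1..x_n, Z_n = i).
alpha = mu0 * B[:, obs[0]]
for x in obs[1:]:
    alpha = (alpha @ A) * B[:, x]
print("likelihood:", alpha.sum())

# Viterbi in log space: replace the sum with a max, keep backpointers.
delta = np.log(mu0 * B[:, obs[0]])
back = []
for x in obs[1:]:
    scores = delta[:, None] + np.log(A)          # scores[i, j]: best path i -> j
    back.append(scores.argmax(axis=0))
    delta = scores.max(axis=0) + np.log(B[:, x])
path = [int(delta.argmax())]
for bp in reversed(back):
    path.append(int(bp[path[-1]]))
print("MAP hidden path:", path[::-1])
```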
Baum-Welch (EM for HMMs): Learn $(A, B)$ from unlabelled observations. E-step: compute forward-backward probabilities. M-step: update parameters using expected sufficient statistics. Converges to a local maximum of the likelihood.
For AI: HMMs are classical sequence models (speech recognition, gene finding). Modern LLMs subsume HMMs: the hidden state (KV cache) is a rich continuous representation of the context, and emission (next-token distribution) is the softmax output.
9.5 Diffusion Models as CTMCs
The DDPM forward process can be viewed as a discrete-time Markov chain with Gaussian transitions:

$$q(x_t \mid x_{t-1}) = N\left( x_t; \; \sqrt{1 - \beta_t} \, x_{t-1}, \; \beta_t I \right)$$

The marginal is available in closed form: $q(x_t \mid x_0) = N\left( \sqrt{\bar{\alpha}_t} \, x_0, \; (1 - \bar{\alpha}_t) I \right)$ with $\bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s)$.
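A sketch of the closed-form marginal with the standard linear $\beta$ schedule (the data point is a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# DDPM forward process: x_t = sqrt(1 - beta_t) x_{t-1} + sqrt(beta_t) * noise.
T = 1000
betas = np.linspace(1e-4, 0.02, T)               # standard linear schedule
alpha_bar = np.cumprod(1.0 - betas)              # \bar{alpha}_t

x0 = rng.normal(size=64)                         # stand-in for a data point

# Closed-form marginal: q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I),
# so x_t can be sampled in one shot without iterating the chain.
t = 500
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * rng.normal(size=64)

print(alpha_bar[t])                              # how much signal survives at step t
```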
Recall from Section 06 (Stochastic Processes): In the SDE formulation (Song et al., 2021), the continuous-time limit of the forward chain is the SDE $dx = f(x, t) \, dt + g(t) \, dW_t$.
The reverse process: By time-reversal of Markov processes (Anderson, 1982), the reverse of the forward diffusion is also a Markov process running backwards in time, with:

$$dx = \left[ f(x, t) - g(t)^2 \, \nabla_x \log p_t(x) \right] dt + g(t) \, d\bar{W}_t$$

The neural network learns to parameterise the reverse transitions - equivalently, to approximate the score function $\nabla_x \log p_t(x)$.