"A sequence of experiments forms a simple Markov chain if the conditional distribution of each experiment, given all preceding ones, depends only on the immediately preceding experiment."
- Andrei Markov, 1906
Overview
A Markov chain is a stochastic process with a single, powerful property: the future depends on the past only through the present. This memorylessness - the Markov property - is both a mathematical simplification and a profound modelling insight. It says: knowing the current state is a complete sufficient statistic for predicting what comes next.
This simplicity conceals enormous richness. Markov chains model language (every autoregressive LLM generates tokens one at a time, conditioning on the current state of the context), web structure (PageRank computes the stationary distribution of a Markov chain on the web graph), Bayesian computation (MCMC algorithms construct Markov chains whose stationary distribution is the target posterior), reinforcement learning (MDPs are Markov chains with actions), and speech recognition (HMMs). The mathematics of Markov chains - transition matrices, stationary distributions, mixing times - is the engine beneath all of these.
This section builds the complete theory from first principles: formal definitions, state classification, existence and uniqueness of stationary distributions via Perron-Frobenius, convergence to stationarity and mixing times, reversibility and detailed balance, continuous-time chains, and the MCMC algorithms central to Bayesian ML.
Prerequisites
- Section06 Stochastic Processes - filtrations, adapted processes, the Markov property preview, random walks
- Section03 Joint Distributions - conditional distributions, conditional independence, Bayes' theorem
- Section04 Expectation and Moments - conditional expectation, tower property
- Section02 Common Distributions - Poisson distribution (for CTMC); Gaussian (for MCMC targets)
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive derivations: transition matrices, power iteration, classification, stationary distributions, mixing times, Metropolis-Hastings, Gibbs, PageRank, HMMs |
| exercises.ipynb | 10 graded exercises from state classification to PageRank and HMM decoding |
Learning Objectives
After completing this section, you will be able to:
- State the Markov property and express it as conditional independence
- Write the transition matrix of a Markov chain and compute $n$-step probabilities via matrix powers $P^n$
- Classify states as communicating, recurrent/transient, periodic/aperiodic, and absorbing
- State the Perron-Frobenius theorem and use it to establish existence and uniqueness of stationary distributions for ergodic chains
- Compute stationary distributions via linear systems and power iteration
- State the detailed balance equations and explain how they relate to reversibility
- Define total variation distance and mixing time; bound mixing time via the spectral gap
- Write the generator matrix of a CTMC and derive its stationary distribution
- Implement Metropolis-Hastings and verify it satisfies detailed balance for the target distribution
- Implement Gibbs sampling and explain why it is a special case of Metropolis-Hastings
- Compute PageRank via power iteration with teleportation
- Describe the HMM forward algorithm and Viterbi decoding
Table of Contents
- 1. Intuition
- 2. Formal Definitions
- 3. Classification of States
- 4. Stationary Distributions
- 5. Detailed Balance and Reversibility
- 6. Mixing Times and Convergence Rates
- 7. Continuous-Time Markov Chains
- 8. MCMC: Markov Chain Monte Carlo
- 9. ML Deep Dive
- 10. Common Mistakes
- 11. Exercises
- 12. Why This Matters for AI (2026 Perspective)
- 13. Conceptual Bridge
1. Intuition
1.1 What Is a Markov Chain?
A Markov chain is a sequence of random variables $X_0, X_1, X_2, \dots$ taking values in a state space $\mathcal{S}$, where the future is independent of the past given the present. Formally, for every $n \ge 0$ and all states $i_0, \dots, i_{n-1}, i, j \in \mathcal{S}$:

$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i)$$

This is the Markov property: the current state is a sufficient statistic for predicting $X_{n+1}$. No matter how long the history, knowing only the present gives as much information about the future as knowing the entire past trajectory.
Why memorylessness is powerful. At first glance, the Markov property looks like a restriction - real systems do have memory. But three things make it useful:
1. Many systems genuinely satisfy it. A fair die, re-rolled at each step, is Markov (no memory). A chess position captures all relevant game state (Markov). An LLM's key-value cache provides sufficient statistics for the next token (Markov, given the KV cache).
2. Higher-order Markov chains reduce to first-order. A chain where $X_{n+1}$ depends on $(X_n, X_{n-1})$ can be converted to a first-order chain on the augmented state $Y_n = (X_n, X_{n-1})$. So first-order is universal given a rich enough state space.
3. The mathematics is tractable. Transition probabilities only involve pairs of states, so the full dynamics are captured by an $N \times N$ matrix rather than an exponentially large conditional probability table.
The mental picture: Imagine a particle hopping between states on a graph. At each step, it looks only at which node it currently occupies, then jumps to a neighbouring node with probability given by the edge weights. The history of how it got to the current node is irrelevant - only where it is now matters.
For AI: Every autoregressive language model generates tokens one at a time, each conditioned on all previous tokens. Given a context window of length $k$, the distribution over the next token is fully specified by the current state (the context). This is precisely the Markov property with state space = all possible contexts of length $k$.
1.2 Everywhere in AI
Markov chains appear as the backbone of several major AI paradigms:
Language generation: GPT, LLaMA, and every autoregressive transformer generates text via a Markov chain. The state is the current context (the key-value cache), the transition is the softmax over the vocabulary, and the stationary distribution - if it exists - characterises what the model "talks about" at equilibrium.
PageRank: Google's original ranking algorithm computes the stationary distribution of a Markov chain on the web graph, where each page links to others. A page is important if the random surfer spends a lot of time there at stationarity. Power iteration on the transition matrix gives the PageRank vector.
Reinforcement learning: A Markov Decision Process (MDP) is a Markov chain with actions. The agent's policy induces a Markov chain over states, and the value function is the expected discounted return from each state - computable from the stationary properties of this chain. TD-learning, Q-learning, and policy gradient methods all exploit the Markov structure.
Bayesian inference via MCMC: When the posterior distribution is intractable, MCMC constructs a Markov chain whose stationary distribution is exactly . Running the chain for long enough generates samples that represent the posterior. Metropolis-Hastings and Gibbs sampling are the two canonical MCMC algorithms.
Speech and sequence modelling: Hidden Markov Models (HMMs) model sequences where the true state (e.g., phoneme) is latent. The forward-backward algorithm and Viterbi decoder exploit the Markov structure to perform efficient inference.
Diffusion models: The DDPM forward process is a Markov chain where each step adds Gaussian noise. The learned reverse process is a Markov chain running backwards in time. The SDE perspective (Section06 Stochastic Processes) gives the continuous-time version.
1.3 Historical Timeline
| Year | Contributor | Result |
|---|---|---|
| 1906 | Andrei Markov | Defined Markov chains; proved the law of large numbers for dependent sequences |
| 1907 | Paul Ehrenfest | Urn model illustrating approach to equilibrium; connection to statistical mechanics |
| 1907-1912 | Oskar Perron, Georg Frobenius | Perron-Frobenius theorem (positive matrices have a unique dominant eigenvector) |
| 1913 | Andrei Markov | First application: statistical analysis of letters in Pushkin's Eugene Onegin |
| 1931 | Andrei Kolmogorov | Forward and backward equations for Markov processes; rigorous foundation |
| 1949 | Nicholas Metropolis, Stanislaw Ulam | Monte Carlo method published |
| 1953 | Metropolis, Rosenbluth, Rosenbluth, Teller, Teller | Metropolis algorithm published; first MCMC |
| 1970 | W.K. Hastings | Generalized Metropolis to asymmetric proposals |
| 1984 | Geman & Geman | Gibbs sampling for image restoration |
| 1998 | Page, Brin, Motwani, Winograd | PageRank: web ranking via Markov chain stationary distribution |
| 2009 | Levin, Peres, Wilmer | Markov Chains and Mixing Times: systematic theory of coupling and spectral gap bounds |
| 2020 | Ho, Jain & Abbeel | DDPM: diffusion as Markov chain with learned reverse |
| 2022 | Various | LLMs as Markov chains; stationary distribution theory for language models |
1.4 Taxonomy of Markov Chains
Markov chains are classified along two dimensions:
State space: Finite ($|\mathcal{S}| = N < \infty$), countably infinite ($\mathcal{S} = \mathbb{Z}$ or $\mathbb{N}$), or continuous ($\mathcal{S} = \mathbb{R}^d$). Most of this section treats finite or countably infinite state spaces; MCMC typically targets continuous spaces.
Time: Discrete ($n = 0, 1, 2, \dots$) or continuous ($t \in [0, \infty)$). Discrete-time chains are developed in Sections 2-6; continuous-time chains in Section 7.
| Discrete Time | Continuous Time | |
|---|---|---|
| Finite states | Finite DTMC - PageRank, HMM | CTMC - queueing theory |
| Countable states | SRW, birth-death chain | Poisson process |
| Continuous states | MCMC on $\mathbb{R}^d$ | Diffusion SDE (Section06) |
Time-homogeneous vs. inhomogeneous: If transition probabilities do not depend on $n$ - $P(X_{n+1} = j \mid X_n = i) = P_{ij}$ for all $n$ - the chain is time-homogeneous. Almost all chains in this section are time-homogeneous.
2. Formal Definitions
2.1 The Markov Property
Recall: Conditional independence was defined in Section03 Joint Distributions. The Markov property is conditional independence of the future from the past, given the present.
Definition (Discrete-Time Markov Chain). A sequence of random variables $(X_n)_{n \ge 0}$ on a measurable state space $\mathcal{S}$ is a (time-homogeneous) Markov chain if for all $n \ge 0$ and all measurable $A \subseteq \mathcal{S}$:

$$P(X_{n+1} \in A \mid X_0, X_1, \dots, X_n) = P(X_{n+1} \in A \mid X_n) = K(X_n, A)$$

The function $K$ is called the transition kernel or transition probability. For finite $\mathcal{S}$, it is encoded by the transition matrix $P$ with $P_{ij} = P(X_{n+1} = j \mid X_n = i)$.
Equivalent formulation via conditional independence: The Markov property is equivalent to:

$$X_{n+1} \perp (X_0, \dots, X_{n-1}) \mid X_n$$

This is the cleanest statement: given the present $X_n$, the future is independent of the entire past $(X_0, \dots, X_{n-1})$. More generally, the strong Markov property asserts this holds at stopping times: the chain restarts from $X_\tau$ independently of the past, for any stopping time $\tau$.
Initial distribution: The full law of the chain is specified by:
- The initial distribution $\mu_0$ with $\mu_0(i) = P(X_0 = i)$
- The transition matrix $P$ with $P_{ij} \ge 0$ and $\sum_j P_{ij} = 1$ for all $i$
Together, $(\mu_0, P)$ determines all finite-dimensional distributions.
2.2 Transition Matrix
For a finite state space $\mathcal{S} = \{1, \dots, N\}$, the transition matrix $P$ is an $N \times N$ stochastic matrix: every row sums to 1, all entries are non-negative. The entry $P_{ij}$ is the probability of jumping from state $i$ to state $j$ in one step.
Distribution at step $n$: If $\mu_n$ is the row vector of probabilities at step $n$, then:

$$\mu_{n+1} = \mu_n P$$

The distribution evolves by left-multiplying by $P$ at each step. In component form: $\mu_{n+1}(j) = \sum_i \mu_n(i) P_{ij}$.
$n$-step transition probabilities: The probability of going from $i$ to $j$ in exactly $n$ steps is the $(i, j)$ entry of $P^n$:

$$P(X_{m+n} = j \mid X_m = i) = (P^n)_{ij}$$

Computing $P^n$ via matrix exponentiation (eigendecomposition or repeated squaring) is the core computational task for Markov chains.
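A quick numerical check of these formulas - a minimal sketch, where the two-state matrix is an assumed example rather than one from the text:

```python
import numpy as np

# Assumed two-state chain for illustration
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
mu0 = np.array([1.0, 0.0])            # start: certainly in state 1

P3 = np.linalg.matrix_power(P, 3)     # 3-step transition matrix P^3
print(P3)
print(mu0 @ P3)                       # distribution after 3 steps
print(mu0 @ P @ P @ P)                # same thing, one step at a time
```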
Non-example: A matrix with a row summing to more than 1, or with any negative entry, is NOT a valid transition matrix. A doubly-stochastic matrix (both rows and columns sum to 1) is a transition matrix, and its stationary distribution is uniform.
For AI: In a language model with vocabulary size $V$, the bigram transition matrix has size $V \times V$, where $P_{ij}$ = probability of token $j$ following token $i$ (bigram model). Real LLMs use much richer state spaces (the context window), but the Markov structure is the same.
2.3 Chapman-Kolmogorov Equations
Theorem (Chapman-Kolmogorov). For any $m, n \ge 0$ and states $i, j$:

$$(P^{m+n})_{ij} = \sum_{k} (P^m)_{ik} \, (P^n)_{kj}$$

In matrix form: $P^{m+n} = P^m P^n$. This is simply the rule for matrix multiplication - to go from $i$ to $j$ in $m + n$ steps, sum over all intermediate states $k$ at the midpoint.
Proof: By the law of total probability and the Markov property:

$$P(X_{m+n} = j \mid X_0 = i) = \sum_k P(X_{m+n} = j \mid X_m = k) \, P(X_m = k \mid X_0 = i) = \sum_k (P^m)_{ik} (P^n)_{kj}$$

Chapman-Kolmogorov is the reason matrix multiplication is the right operation for composing Markov chain transitions. It also says: to predict $m + n$ steps ahead, you can decompose the prediction at any intermediate time $m$.
2.4 Standard Examples
Example 1: Two-state chain (weather model)
State 1 = sunny, state 2 = rainy. Sunny stays sunny with probability $1 - a$, turns rainy with probability $a$. Rainy turns sunny with probability $b$:

$$P = \begin{pmatrix} 1 - a & a \\ b & 1 - b \end{pmatrix}$$

The stationary distribution is $\pi = \left( \frac{b}{a+b}, \frac{a}{a+b} \right)$.
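A two-line verification of the claimed stationary distribution (the values of $a$ and $b$ below are illustrative assumptions):

```python
import numpy as np

a, b = 0.2, 0.5                              # assumed transition probabilities
P = np.array([[1 - a, a], [b, 1 - b]])
pi = np.array([b, a]) / (a + b)              # claimed stationary distribution
print(np.allclose(pi @ P, pi))               # True: pi P = pi
```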
Example 2: Simple random walk on $\{0, 1, \dots, N\}$ with reflecting barriers
The particle steps left or right with probability 1/2 each and bounces off the walls at 0 and $N$ (staying put with probability 1/2 at the boundary, which makes the matrix doubly stochastic). Stationary distribution: uniform.
Example 3: PageRank chain

$$P = \alpha \, P_{\text{link}} + (1 - \alpha) \frac{1}{N} \mathbf{1}\mathbf{1}^\top$$

where $P_{\text{link}}$ is the link-following matrix built from the web adjacency matrix and $\alpha$ is the damping factor (usually $\alpha = 0.85$). The teleportation term ensures the chain is irreducible and aperiodic. The stationary distribution is the PageRank vector.
Example 4: Gibbs chain on $\{-1, +1\}^2$
For the Ising model on two spins, $P$ is a $4 \times 4$ matrix constructed by randomly updating one spin at a time conditional on the other. This is the simplest MCMC example.
3. Classification of States
3.1 Communicating Classes and Irreducibility
Definition. State $j$ is accessible from state $i$ (written $i \to j$) if there exists $n \ge 0$ such that $(P^n)_{ij} > 0$. States $i$ and $j$ communicate (written $i \leftrightarrow j$) if $i \to j$ and $j \to i$.
Communication is an equivalence relation, and its equivalence classes are called communicating classes. A communicating class $C$ is closed if no state outside $C$ is accessible from any state inside: $i \in C$ and $i \to j$ imply $j \in C$.
Definition (Irreducible Chain). A Markov chain is irreducible if it has exactly one communicating class - every state communicates with every other. Equivalently, for every pair $(i, j)$, there exists $n$ with $(P^n)_{ij} > 0$.
Irreducibility is the "connectivity" condition for Markov chains. An irreducible chain cannot get trapped in a subset of states. Most of the theory (stationary distributions, convergence) requires irreducibility.
Decomposition theorem: Any finite Markov chain decomposes into transient states (a particle eventually leaves forever) and one or more closed communicating classes of recurrent states. Once the chain enters a closed class, it stays there forever.
For AI: A language model's transition matrix is nearly irreducible: any token sequence of sufficient length can be generated (in principle). Practical degeneracy (zero-probability transitions as the softmax temperature approaches 0) creates absorbing states.
3.2 Recurrence and Transience
Definition. Define the first return time to state $i$: $T_i = \inf\{n \ge 1 : X_n = i\}$, with $T_i = \infty$ if the chain never returns. The return probability is:

$$f_i = P(T_i < \infty \mid X_0 = i)$$

- State $i$ is recurrent if $f_i = 1$ (return is certain)
- State $i$ is transient if $f_i < 1$ (positive probability of never returning)
Theorem (Recurrence criterion). State $i$ is recurrent if and only if $\sum_{n=1}^{\infty} (P^n)_{ii} = \infty$. It is transient if and only if $\sum_{n=1}^{\infty} (P^n)_{ii} < \infty$.
Proof sketch: Let $V_i$ be the number of returns to $i$. Then $E[V_i \mid X_0 = i] = \sum_n (P^n)_{ii}$. If $f_i = 1$, the chain returns infinitely often, so the sum diverges. If $f_i < 1$, the number of returns is geometric with finite mean $f_i / (1 - f_i)$, so the sum converges.
Positive vs. null recurrence: Among recurrent states, define the mean return time $m_i = E[T_i \mid X_0 = i]$.
- Positive recurrent: $f_i = 1$ and $m_i < \infty$ (returns in finite average time)
- Null recurrent: $f_i = 1$ but $m_i = \infty$ (certain to return, but takes forever on average)
All recurrent states in a finite irreducible chain are positive recurrent (finite chains cannot be null-recurrent). The simple random walk on $\mathbb{Z}^d$ is null-recurrent in 1D and 2D, transient in 3D and above (Polya's theorem).
For AI: In MCMC, we want the chain to be recurrent (it returns to every region of the posterior). A transient MCMC chain would drift away from the posterior, giving biased samples.
3.3 Periodicity
Definition. The period of state $i$ is:

$$d(i) = \gcd\{ n \ge 1 : (P^n)_{ii} > 0 \}$$

State $i$ is aperiodic if $d(i) = 1$; otherwise it is periodic with period $d(i)$.
Key facts:
- All states in the same communicating class have the same period
- A chain is called aperiodic if all states have period 1
- If $P_{ii} > 0$ for some $i$, then $d(i) = 1$ (self-loops force aperiodicity)
- An irreducible aperiodic chain converges to its stationary distribution (Section 4.3)
Example: A two-state chain with $P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is periodic with period 2: the chain alternates 0,1,0,1,... and $(P^n)_{ii} > 0$ only for even $n$.
Periodicity breaks convergence: If a chain has period $d \ge 2$, then $P^n$ does not converge as $n \to \infty$ - it oscillates. Adding any self-loop probability ($P_{ii} > 0$) makes the chain aperiodic.
For AI: Metropolis-Hastings with a symmetric proposal that includes the possibility of staying put is automatically aperiodic (self-loops happen when a proposal is rejected). This is why MCMC chains converge.
3.4 Absorbing States and Absorption Probabilities
Definition. State $i$ is absorbing if $P_{ii} = 1$ (the chain stays at $i$ forever once it arrives). A chain with at least one absorbing state is called absorbing.
Canonical form: For an absorbing chain with $t$ transient states and $r$ absorbing states, the transition matrix can be written in canonical form:

$$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}$$

where $Q$ is the $t \times t$ sub-matrix of transitions among transient states, $R$ is the $t \times r$ matrix of transitions to absorbing states, and $I$ is the identity on absorbing states.
Fundamental matrix: $N = (I - Q)^{-1}$. The $(i, j)$ entry of $N$ is the expected number of times the chain visits transient state $j$ before absorption, starting from transient state $i$.
Absorption probabilities: $B = NR$ gives the probability of being absorbed into each absorbing state. The gambler's ruin formula (derived via OST in Section06) follows directly from this formula.
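The canonical-form machinery in code - a minimal sketch on fair-coin gambler's ruin on $\{0, \dots, 4\}$ (the specific chain is chosen here for illustration):

```python
import numpy as np

# Transient states 1, 2, 3; absorbing states 0 and 4
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])     # transient -> transient
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])          # transient -> absorbing (0, then 4)

N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix: expected visit counts
B = N @ R                           # absorption probabilities
print(B)  # from state i: P(absorbed at 0) = 1 - i/4, P(absorbed at 4) = i/4
```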
3.5 Ergodic Chains
Definition (Ergodic Chain). A Markov chain is ergodic if it is:
- Irreducible - every state communicates with every other
- Positive recurrent - mean return time is finite for all states
- Aperiodic - all states have period 1
Ergodic chains are the "nice" chains: they have a unique stationary distribution, and the chain converges to it from any starting state.
For finite chains: An irreducible chain on a finite state space is automatically positive recurrent. So for finite chains, ergodic = irreducible + aperiodic.
Ergodic theorem: For an ergodic chain, the time-average converges to the space-average:

$$\frac{1}{T} \sum_{t=1}^{T} f(X_t) \xrightarrow{\text{a.s.}} \sum_i \pi_i \, f(i) = E_\pi[f]$$
This is the Markov chain version of the law of large numbers. It is why MCMC works: time-averaging over the chain's trajectory approximates the posterior expectation.
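The ergodic theorem in action - time-averaging one long trajectory recovers $\pi$ (the two-state chain below is an assumed example):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.3, 0.7]])   # assumed chain; pi = (0.75, 0.25)

x, counts, T = 0, np.zeros(2), 100_000
for _ in range(T):
    x = rng.choice(2, p=P[x])
    counts[x] += 1
print(counts / T)    # approx [0.75, 0.25]: time-average = space-average
```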
4. Stationary Distributions
4.1 Definition and Existence
Definition (Stationary Distribution). A probability distribution $\pi$ on $\mathcal{S}$ is a stationary distribution (also called invariant distribution or steady-state distribution) for a Markov chain with transition matrix $P$ if:

$$\pi P = \pi, \qquad \sum_i \pi_i = 1, \qquad \pi_i \ge 0$$

Equivalently, $\pi$ is a left eigenvector of $P$ with eigenvalue 1. If the chain starts in distribution $\pi$, it stays in distribution $\pi$ for all time: $\mu_n = \pi$ for all $n$.
Interpretation: $\pi_i$ represents the long-run fraction of time the chain spends in state $i$. For PageRank, $\pi_i$ is the importance score of page $i$. For MCMC, $\pi$ is the target posterior distribution.
Existence: Every finite irreducible Markov chain has a unique stationary distribution with $\pi_i > 0$ for all $i$. For infinite chains, existence requires positive recurrence.
Non-uniqueness when reducible: A chain with multiple closed communicating classes has infinitely many stationary distributions - any convex combination of the stationary distributions within each closed class is stationary.
The mean return time formula: For a positive recurrent chain, $\pi_i = 1/m_i$, where $m_i = E[T_i \mid X_0 = i]$ is the mean return time. States visited more often (smaller $m_i$) get higher stationary probability.
4.2 Perron-Frobenius Theorem
The Perron-Frobenius theorem is the fundamental result guaranteeing existence and uniqueness of the stationary distribution.
Theorem (Perron-Frobenius for Stochastic Matrices). Let $P$ be an irreducible stochastic matrix. Then:
- $\lambda_1 = 1$ is an eigenvalue of $P$ (always true for stochastic matrices, since $P \mathbf{1} = \mathbf{1}$)
- $|\lambda| \le 1$ for all other eigenvalues $\lambda$
- There exists a unique probability vector $\pi > 0$ (componentwise) with $\pi P = \pi$
- $\pi$ is the unique (up to scaling) left eigenvector with eigenvalue 1
If additionally $P$ is primitive (irreducible and aperiodic), then $\lambda_1 = 1$ is the unique eigenvalue on the unit circle, and $|\lambda_2| < 1$ strictly. This gap drives convergence.
Proof sketch (uniqueness): Suppose $\pi$ and $\pi'$ are both stationary. Then $v = \pi - \pi'$ satisfies $vP = v$ with $\sum_i v_i = 0$. Irreducibility forces any such fixed vector to have entries of a single sign (it must be a multiple of $\pi$), which is incompatible with summing to zero unless $v = 0$ - contradiction.
For AI: PageRank power iteration converges because the damped PageRank matrix is primitive (the teleportation term makes it both irreducible and aperiodic). The dominant eigenvalue is 1, and all other eigenvalues are at most $\alpha$ in magnitude, guaranteeing geometric convergence.
4.3 Convergence to Stationarity
Theorem (Convergence for Ergodic Chains). Let $P$ be irreducible and aperiodic with stationary distribution $\pi$. Then for any initial distribution $\mu_0$:

$$\|\mu_0 P^n - \pi\|_{TV} \to 0 \quad \text{as } n \to \infty$$

Moreover, for any starting state $i$: $(P^n)_{ij} \to \pi_j$ as $n \to \infty$ for all $j$.
Rate of convergence: For a reversible chain with eigenvalues $1 = \lambda_1 > \lambda_2 \ge \dots \ge \lambda_N \ge -1$, the convergence rate is governed by $\lambda_\star = \max(|\lambda_2|, |\lambda_N|)$:

$$\|\mu_0 P^n - \pi\|_{TV} = O(\lambda_\star^n)$$

The rate $\lambda_\star$ is called the second largest eigenvalue modulus (SLEM). The spectral gap is $\gamma = 1 - \lambda_\star$; a large gap means fast convergence.
Periodic chains: If $P$ is irreducible but periodic with period $d$, then $P^n$ does not converge - it cycles. However, $P^d$ (the $d$-step chain) is aperiodic and does converge. In practice, a small perturbation (adding laziness) kills periodicity.
The lazy chain: Replace $P$ with $P_{\text{lazy}} = \frac{1}{2}(I + P)$. This halves the spectral gap but makes the chain aperiodic with all eigenvalues non-negative, ensuring monotone convergence.
4.4 Null-Recurrent Chains
The simple random walk on $\mathbb{Z}$ returns to the origin with probability 1 (recurrent) but takes infinitely long on average (null-recurrent). Such chains have no stationary probability distribution - the "stationary measure" is not normalisable.
The 1D random walk has stationary measure $\mu_i = 1$ for all $i$, which gives infinite total mass $\sum_i \mu_i = \infty$. The chain visits every state infinitely often, but spends a vanishing fraction of time at any fixed state.
Implications for AI: Infinite-state chains that model text generation (treating the entire prefix as state) may be null-recurrent or transient. In practice, language models cap context length, making the state space finite and all states positive recurrent.
4.5 Computing Stationary Distributions
Method 1: Solve the linear system

$$\pi P = \pi, \qquad \sum_i \pi_i = 1$$

This is an $N$-dimensional system (the normalisation $\sum_i \pi_i = 1$ replaces one of the redundant equations in $\pi P = \pi$). For small $N$, direct linear algebra is exact.
Method 2: Power iteration
Starting from any distribution $\mu_0$, iterating $\mu_{n+1} = \mu_n P$ converges to $\pi$ at rate $O(\lambda_\star^n)$. Power iteration is the PageRank algorithm.
Method 3: Eigendecomposition Compute the left eigenvector of $P$ corresponding to eigenvalue 1. For symmetric or reversible chains, this is equivalent to finding the dominant eigenvector.
Method 4: MCMC (for continuous state spaces) For distributions on $\mathbb{R}^d$, construct a Markov chain with stationary distribution $\pi$ and run it - the empirical distribution of the chain approximates $\pi$ by the ergodic theorem.
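Method 1 as a short sketch - append the normalisation to the (redundant) eigenvector equations and solve by least squares; the example chain is an assumption:

```python
import numpy as np

def stationary_linear(P):
    """Solve pi P = pi together with sum(pi) = 1 as one linear system."""
    N = P.shape[0]
    A = np.vstack([P.T - np.eye(N), np.ones(N)])   # (P^T - I) pi^T = 0 rows
    b = np.append(np.zeros(N), 1.0)                # plus sum(pi) = 1
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.9, 0.1], [0.3, 0.7]])             # assumed example chain
print(stationary_linear(P))                        # [0.75, 0.25]
```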
5. Detailed Balance and Reversibility
5.1 Detailed Balance Equations
Definition (Detailed Balance). A distribution $\pi$ satisfies the detailed balance equations with respect to transition matrix $P$ if:

$$\pi_i P_{ij} = \pi_j P_{ji} \quad \text{for all } i, j$$

Detailed balance says: the probability flux from $i$ to $j$ at stationarity equals the flux from $j$ to $i$. This is a pointwise balance of probability flows, stronger than the global balance equations $\pi P = \pi$.
Theorem. If $\pi$ satisfies detailed balance with $P$, then $\pi$ is a stationary distribution for $P$.
Proof:

$$(\pi P)_j = \sum_i \pi_i P_{ij} = \sum_i \pi_j P_{ji} = \pi_j \sum_i P_{ji} = \pi_j$$

So $\pi P = \pi$. The converse is false - a stationary distribution need not satisfy detailed balance.
Significance: Detailed balance is easier to verify than stationarity (it's a pairwise condition rather than a global one), and it characterises reversible chains. All Metropolis-Hastings algorithms are constructed to satisfy detailed balance by design.
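Because detailed balance is a pairwise condition, it is easy to check numerically - a minimal sketch (both example chains are assumptions):

```python
import numpy as np

def satisfies_detailed_balance(P, pi, tol=1e-10):
    """Check pi_i P_ij == pi_j P_ji for all pairs (i, j)."""
    F = pi[:, None] * P              # flux matrix: F_ij = pi_i P_ij
    return np.allclose(F, F.T, atol=tol)

# Reversible: two-state chain with its stationary distribution
P = np.array([[0.9, 0.1], [0.3, 0.7]])
print(satisfies_detailed_balance(P, np.array([0.75, 0.25])))   # True

# Not reversible: deterministic 3-cycle, uniform stationary distribution
C = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
print(satisfies_detailed_balance(C, np.ones(3) / 3))           # False
```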
5.2 Reversible Markov Chains
Definition (Reversible Chain). A Markov chain is reversible (or time-reversible) with respect to distribution if it satisfies detailed balance. Equivalently, the chain running forwards in time has the same law as the chain running backwards.
The reversed chain: Given a stationary Markov chain with transition matrix $P$, define the time-reversed chain by $\hat{P}_{ij} = \pi_j P_{ji} / \pi_i$. The reversed chain is also a Markov chain, with the same stationary distribution. The original chain is reversible iff $\hat{P} = P$.
Symmetric Markov chains are reversible: If $P_{ij} = P_{ji}$ for all $i, j$ (symmetric transition matrix), then detailed balance holds with the uniform distribution $\pi_i = 1/N$. The symmetric random walk on a graph is reversible.
Non-reversible example: The cyclic chain on $\{0, 1, 2\}$ with $P_{01} = P_{12} = P_{20} = 1$ has uniform stationary distribution but is NOT reversible: it has a preferred direction of rotation.
For AI: Reversibility is a sufficient condition for correctness of MCMC algorithms. A Metropolis-Hastings chain with target $\pi$ satisfies detailed balance - it's reversible - and therefore has $\pi$ as stationary distribution. Non-reversible MCMC (e.g., lifting methods, HMC) can mix faster but is harder to analyse.
5.3 Birth-Death Chains
A birth-death chain is a Markov chain on $\{0, 1, 2, \dots\}$ where only nearest-neighbour transitions are allowed:

$$P_{i, i+1} = p_i, \qquad P_{i, i-1} = q_i, \qquad P_{i, i} = 1 - p_i - q_i$$

Birth-death chains are automatically reversible, and their stationary distribution can be computed explicitly. The detailed balance equations give:

$$\pi_i \, p_i = \pi_{i+1} \, q_{i+1} \quad \Rightarrow \quad \pi_i = \pi_0 \prod_{k=0}^{i-1} \frac{p_k}{q_{k+1}}$$

Setting $\sum_i \pi_i = 1$ (normalisation) gives the stationary distribution.
Queue models: An M/M/1 queue with arrival rate $\lambda$ and service rate $\mu$ is a birth-death chain. Stationary distribution: $\pi_n = (1 - \rho)\rho^n$ where $\rho = \lambda / \mu$. The chain is stable (stationary distribution exists) iff $\rho < 1$.
For AI: The queue length at an LLM serving system (number of pending requests) is approximately a birth-death chain. The stationary distribution under load determines average latency: the mean number in system is $\rho / (1 - \rho)$, which blows up as $\rho \to 1$.
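A numerical sketch of the geometric stationary distribution, truncating the infinite queue at an assumed cutoff:

```python
import numpy as np

lam, mu, K = 0.8, 1.0, 60          # assumed arrival/service rates, truncation
rho = lam / mu

# Detailed balance pi_{n+1} = pi_n * rho gives a geometric distribution
pi = rho ** np.arange(K + 1)
pi /= pi.sum()
print(pi[:4])                                        # approx (1 - rho) * rho**n
print("mean queue length:", np.arange(K + 1) @ pi)   # approx rho/(1-rho) = 4
```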
5.4 Spectral Decomposition of Reversible Chains
For a reversible chain, the transition matrix has real eigenvalues $1 = \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_N \ge -1$ and an orthonormal basis of eigenvectors (with respect to the $\pi$-weighted inner product).
Spectral decomposition:

$$(P^n)_{ij} = \pi_j + \pi_j \sum_{k=2}^{N} \lambda_k^n \, \psi_k(i) \, \psi_k(j)$$

where $\psi_k$ are the eigenfunctions (normalised in $\ell^2(\pi)$). As $n \to \infty$, only the leading term survives (since $|\lambda_k| < 1$ for $k \ge 2$), giving $(P^n)_{ij} \to \pi_j$. The rate of convergence is $O(\lambda_\star^n)$.
The spectral gap: $\gamma = 1 - \lambda_\star$ measures how fast the chain forgets its initial state. A chain with spectral gap $\gamma$ mixes in $O(1/\gamma)$ steps. Bounding the spectral gap from below is the main technical challenge in proving MCMC convergence rates.
Dirichlet form: For reversible chains, the spectral gap equals:

$$\gamma = \min_{f : \, \mathrm{Var}_\pi(f) > 0} \frac{\mathcal{E}(f, f)}{\mathrm{Var}_\pi(f)}$$

where $\mathcal{E}(f, f) = \frac{1}{2} \sum_{i, j} \pi_i P_{ij} \, (f(i) - f(j))^2$ is the Dirichlet form. This variational characterisation is the basis for Poincare inequality methods.
6. Mixing Times and Convergence Rates
6.1 Total Variation Distance
Definition. The total variation (TV) distance between two probability distributions $\mu$ and $\nu$ on $\mathcal{S}$ is:

$$\|\mu - \nu\|_{TV} = \max_{A \subseteq \mathcal{S}} |\mu(A) - \nu(A)| = \frac{1}{2} \sum_i |\mu_i - \nu_i|$$

TV distance measures how distinguishable two distributions are: it equals the maximum probability that any event has different probability under $\mu$ vs. $\nu$. The TV distance is 0 iff $\mu = \nu$, and 1 iff the distributions have disjoint support.
Coupling characterisation: $\|\mu - \nu\|_{TV} = \inf P(X \ne Y)$, where the infimum is over all couplings $(X, Y)$ with marginals $\mu$ and $\nu$. This is the coupling inequality: any coupling gives an upper bound on TV distance.
For AI: TV distance is used to measure how far an MCMC chain is from stationarity, and to certify when the chain has "mixed" sufficiently to produce approximately independent samples from the posterior.
6.2 Mixing Time
Definition. The mixing time of a Markov chain with stationary distribution $\pi$ is:

$$t_{\text{mix}}(\varepsilon) = \min\left\{ t \ge 0 : \max_x \|P^t(x, \cdot) - \pi\|_{TV} \le \varepsilon \right\}$$

The standard mixing time is $t_{\text{mix}} = t_{\text{mix}}(1/4)$ (the default $\varepsilon = 1/4$ is conventional). Mixing time measures how long the chain must run before its distribution is $\varepsilon$-close to stationarity from the worst-case starting state.
Why the worst case? In applications, we often don't control the starting state. The worst-case mixing time certifies that regardless of initialisation, the chain has converged after $t_{\text{mix}}(\varepsilon)$ steps.
Submultiplicativity: $t_{\text{mix}}(\varepsilon) \le \lceil \log_2(1/\varepsilon) \rceil \, t_{\text{mix}}(1/4)$. So to achieve precision $\varepsilon$, it suffices to run for $O(t_{\text{mix}} \log(1/\varepsilon))$ steps.
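Mixing time computed by brute force for small chains - a minimal sketch (the example chain is an assumption; its eigenvalues are 1 and 0.6, so $\gamma = 0.4$):

```python
import numpy as np

def tv_distance(mu, nu):
    return 0.5 * np.abs(mu - nu).sum()

def mixing_time(P, pi, eps=0.25, t_max=10_000):
    """Smallest t with max_x ||P^t(x, .) - pi||_TV <= eps."""
    N = P.shape[0]
    Pt = np.eye(N)
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        if max(tv_distance(Pt[x], pi) for x in range(N)) <= eps:
            return t
    return None

P = np.array([[0.9, 0.1], [0.3, 0.7]])
print(mixing_time(P, np.array([0.75, 0.25])))   # a handful of steps
```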
6.3 Spectral Gap
For a reversible ergodic chain, the spectral gap $\gamma = 1 - \lambda_\star$ provides a bound on mixing time:
Theorem (Spectral Gap Bound):

$$\left( \frac{1}{\gamma} - 1 \right) \log\left( \frac{1}{2\varepsilon} \right) \le t_{\text{mix}}(\varepsilon) \le \frac{1}{\gamma} \log\left( \frac{1}{\varepsilon \, \pi_{\min}} \right)$$

where $\pi_{\min} = \min_i \pi_i$.
Proof sketch: From the spectral decomposition, $\max_x \|P^t(x, \cdot) - \pi\|_{TV} \le \frac{\lambda_\star^t}{2 \sqrt{\pi_{\min}}}$. Setting this $\le \varepsilon$ and solving for $t$ gives the upper bound.
Lower bound: The matching lower bound (left side above) shows that for reversible chains the mixing time is $\Theta(1/\gamma)$ up to logarithmic factors.
Computing the spectral gap: For small matrices, compute eigenvalues of $P$ directly. For large chains, use the Cheeger inequality: $\frac{\Phi^2}{2} \le \gamma \le 2\Phi$, where $\Phi$ is the Cheeger constant (conductance of the chain).
6.4 Coupling Arguments
Definition (Coupling). A coupling of two Markov chains with the same transition matrix $P$ but different initial distributions is a process $(X_t, Y_t)$ such that each marginal is a valid Markov chain with transition $P$, and once $X_t = Y_t$ they remain together (coalescence at the meeting time).
Coupling inequality:

$$\|P^t(x, \cdot) - P^t(y, \cdot)\|_{TV} \le P(\tau_{\text{couple}} > t)$$

where $\tau_{\text{couple}} = \min\{t : X_t = Y_t\}$ is the coupling time.
Grand coupling: For a finite ergodic chain, one can construct a single coupling of chains starting from all possible pairs simultaneously. The mixing time is then controlled by the tail of the worst-case coupling time, e.g. $t_{\text{mix}} \le 4 \max_{x, y} E[\tau_{\text{couple}}]$.
Example: Random walk on hypercube. For a lazy random walk on $\{0, 1\}^d$, couple two chains by updating the same coordinate at each step. When a coordinate is chosen, the two chains agree on that bit after the update. All bits agree after $O(d \log d)$ steps (coupon collector), giving $t_{\text{mix}} = O(d \log d)$.
6.5 AI Relevance: Fast Mixing and MCMC Warm Starts
Fast mixing in practice: For Bayesian inference in LLM fine-tuning (e.g., RLHF reward models), the posterior over model weights is a distribution on $\mathbb{R}^d$ with $d$ in the millions to billions. Direct MCMC is intractable; approximate methods (Langevin dynamics, HMC) exploit gradient information to achieve better mixing.
Spectral gap and dimensionality: For a Gaussian target $N(0, \Sigma)$, the Langevin SDE mixing time scales with the condition number $\kappa$ of $\Sigma$. Preconditioning (e.g., SGLD with adaptive learning rates, Adam-Langevin) improves the effective spectral gap.
Warm starts: If the initial distribution is already close to $\pi$ (e.g., from a previous MCMC run), the chain needs fewer steps to mix. In sequential Bayesian updating (fine-tuning an LLM on new data), the previous posterior serves as a warm start.
Temperature scheduling: Simulated annealing starts with a high-temperature chain (wide, fast-mixing distribution) and gradually cools toward the target. This exploits fast mixing at high temperature and accurate targeting at low temperature.
7. Continuous-Time Markov Chains
7.1 Generator Matrix
A continuous-time Markov chain (CTMC) on a finite state space satisfies the Markov property for continuous time:

$$P(X_{t+s} = j \mid X_s = i, \{X_u : u < s\}) = P(X_{t+s} = j \mid X_s = i) = P_{ij}(t)$$

The chain holds in each state for an exponentially distributed sojourn time, then jumps according to a discrete-time chain. The rate of leaving state $i$ is $q_i = \sum_{j \ne i} q_{ij}$, where $q_{ij}$ are the jump rates from $i$ to $j$.
Definition (Generator Matrix / Q-Matrix). The generator (or Q-matrix) of a CTMC is the matrix with:

$$Q_{ij} = q_{ij} \ (i \ne j), \qquad Q_{ii} = -\sum_{j \ne i} q_{ij}$$

Key property: rows of $Q$ sum to zero ($Q \mathbf{1} = 0$). The diagonal entries are negative; off-diagonal entries are non-negative.
Transition matrix: The probability of being in state $j$ at time $t$ given state $i$ at time 0 is:

$$P_{ij}(t) = \left( e^{tQ} \right)_{ij}$$

This is the matrix exponential. Note $P(0) = I$ and $P(t) \to \mathbf{1}\pi$ as $t \to \infty$ (convergence to stationarity for ergodic chains).
Embedded chain: The CTMC can be decomposed into (1) an embedded discrete-time Markov chain governing the sequence of states visited, and (2) exponential holding times in each state. Jump probabilities: $\hat{P}_{ij} = q_{ij} / q_i$ for $j \ne i$.
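Computing $P(t) = e^{tQ}$ with SciPy's matrix exponential - a minimal sketch with assumed rates:

```python
import numpy as np
from scipy.linalg import expm

# Two-state CTMC: leave state 0 at rate 2, leave state 1 at rate 1
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])        # rows sum to zero

for t in [0.1, 1.0, 10.0]:
    Pt = expm(t * Q)                # transition matrix P(t) = e^{tQ}
    print(t, Pt[0])                 # row 0 converges to pi = (1/3, 2/3)
```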
7.2 Kolmogorov Equations
Forward Equation (Fokker-Planck):

$$\frac{d}{dt} P(t) = P(t) \, Q$$

This equation describes how the distribution evolves forward in time: $\frac{d\mu_t}{dt} = \mu_t Q$, where $\mu_t = \mu_0 P(t)$.
Backward Equation:

$$\frac{d}{dt} P(t) = Q \, P(t)$$

The forward equation applies to the time-marginal distribution; the backward equation to the transition probabilities viewed as functions of the initial state.
Stationary distribution: $\pi$ is stationary if $\frac{d\mu_t}{dt} = 0$ at $\mu_t = \pi$, i.e.:

$$\pi Q = 0, \qquad \sum_i \pi_i = 1$$
For an ergodic CTMC, the stationary distribution is unique.
7.3 Stationary Distribution of CTMCs
Detailed balance for CTMCs: A distribution $\pi$ satisfies detailed balance if $\pi_i q_{ij} = \pi_j q_{ji}$ for all $i \ne j$. Detailed balance implies stationarity ($\pi Q = 0$).
Computing the stationary distribution: Solve $\pi Q = 0$ with $\sum_i \pi_i = 1$. This is equivalent to finding the left null space of $Q$ (left eigenvector with eigenvalue 0).
Birth-death CTMC: With birth rate $\lambda_n$ from state $n$ and death rate $\mu_n$, detailed balance gives $\pi_n \lambda_n = \pi_{n+1} \mu_{n+1}$, yielding:

$$\pi_n = \pi_0 \prod_{k=0}^{n-1} \frac{\lambda_k}{\mu_{k+1}}$$
7.4 Poisson Process as CTMC
Recall from Section06 Stochastic Processes: The Poisson process $N(t)$ counts arrivals up to time $t$ with independent Poisson increments. Inter-arrival times are $\text{Exp}(\lambda)$.
The Poisson process is the simplest CTMC: states $\{0, 1, 2, \dots\}$, jump rates $q_{n, n+1} = \lambda$ (arrivals only), generator:

$$Q = \begin{pmatrix} -\lambda & \lambda & & \\ & -\lambda & \lambda & \\ & & \ddots & \ddots \end{pmatrix}$$

The Poisson process is transient - it drifts to $+\infty$ with no stationary distribution. An M/M/1 queue (arrivals at rate $\lambda$, service at rate $\mu$) modifies the Poisson process by adding downward jumps (service completions), yielding a birth-death CTMC with geometric stationary distribution $\pi_n = (1 - \rho)\rho^n$, where $\rho = \lambda / \mu < 1$.
For AI: The Poisson process models request arrivals at an LLM serving endpoint. At high load ($\rho \to 1$), the queue length distribution becomes heavy-tailed, leading to high-percentile latency spikes. The M/M/1 formula quantifies this: 90% utilisation gives a mean queue length of $\rho/(1-\rho) = 9$.
8. MCMC: Markov Chain Monte Carlo
8.1 The Sampling Problem
The challenge: In Bayesian inference, the posterior

$$p(\theta \mid x) = \frac{p(x \mid \theta) \, p(\theta)}{p(x)}$$

is analytically intractable for most models. Computing the evidence $p(x) = \int p(x \mid \theta) \, p(\theta) \, d\theta$ requires integrating over a high-dimensional distribution we cannot normalise.
MCMC solution: Construct a Markov chain on the parameter space such that:
- The chain is ergodic (irreducible, aperiodic, positive recurrent)
- The stationary distribution equals the target $\pi(\theta) = p(\theta \mid x)$
By the ergodic theorem, time-averaging gives posterior expectations:

$$\frac{1}{T} \sum_{t=1}^{T} f(\theta_t) \to E_{p(\theta \mid x)}[f(\theta)]$$

The key insight: we only need to know $\pi$ up to a normalising constant. MCMC constructs the chain using only unnormalised evaluations $\tilde{\pi}(\theta) = p(x \mid \theta) \, p(\theta)$.
8.2 Metropolis-Hastings
Algorithm (Metropolis-Hastings). Given current state $x_t$, target $\pi$, proposal $q(x' \mid x)$:
- Sample candidate $x' \sim q(\cdot \mid x_t)$
- Compute acceptance ratio:

$$\alpha(x_t, x') = \min\left( 1, \; \frac{\pi(x') \, q(x_t \mid x')}{\pi(x_t) \, q(x' \mid x_t)} \right)$$

- With probability $\alpha$: accept, $x_{t+1} = x'$; otherwise: $x_{t+1} = x_t$ (reject and stay)
Correctness: The Metropolis-Hastings chain satisfies detailed balance with respect to $\pi$:

$$\pi(x) \, P(x, x') = \pi(x') \, P(x', x)$$

Proof: Without loss of generality, assume $\pi(x') \, q(x \mid x') \le \pi(x) \, q(x' \mid x)$. Then $\alpha(x, x') = \frac{\pi(x') \, q(x \mid x')}{\pi(x) \, q(x' \mid x)}$ and $\alpha(x', x) = 1$, so both sides equal $\pi(x') \, q(x \mid x')$.
Special cases:
- Metropolis algorithm: Symmetric proposal ($q(x' \mid x) = q(x \mid x')$) simplifies acceptance to $\alpha = \min(1, \pi(x')/\pi(x))$
- Independence sampler: Proposal $q(x')$ independent of current state; acceptance $\alpha = \min\left(1, \frac{\pi(x') \, q(x_t)}{\pi(x_t) \, q(x')}\right)$
Proposal tuning: For a Gaussian random walk proposal $q(x' \mid x) = N(x, \sigma^2 I)$, the optimal $\sigma$ achieves acceptance rate ~23.4% in high dimensions (Roberts-Gelman-Gilks 1997). Too small $\sigma$: high acceptance but slow exploration; too large: low acceptance, inefficient.
For AI: MCMC is used for posterior inference in Bayesian neural networks, uncertainty quantification in LLM outputs, and sampling from energy-based models. In practice, Langevin dynamics (gradient-based MCMC) replaces vanilla MH for high-dimensional targets.
8.3 Gibbs Sampling
Algorithm (Gibbs Sampling). For target $\pi(x_1, \dots, x_d)$, cycle through coordinates:
- Sample $x_1^{(t+1)} \sim \pi(x_1 \mid x_2^{(t)}, \dots, x_d^{(t)})$
- Sample $x_2^{(t+1)} \sim \pi(x_2 \mid x_1^{(t+1)}, x_3^{(t)}, \dots, x_d^{(t)})$
- Continue through $x_d^{(t+1)} \sim \pi(x_d \mid x_1^{(t+1)}, \dots, x_{d-1}^{(t+1)})$
Correctness: Gibbs sampling is a special case of Metropolis-Hastings where each coordinate update is a MH step with proposal equal to the full conditional. The acceptance ratio is always 1 - every proposal is accepted. This requires the ability to sample from full conditionals $\pi(x_i \mid x_{-i})$.
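A standard illustration is Gibbs sampling for a bivariate Gaussian, where both full conditionals are Gaussian in closed form (the correlation value below is an assumption):

```python
import numpy as np

rho, T = 0.9, 50_000
rng = np.random.default_rng(1)
x = np.zeros(2)
samples = np.empty((T, 2))
for t in range(T):
    # Full conditionals: x1 | x2 ~ N(rho * x2, 1 - rho^2), and symmetrically
    x[0] = rho * x[1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    x[1] = rho * x[0] + np.sqrt(1 - rho**2) * rng.standard_normal()
    samples[t] = x
print(np.corrcoef(samples.T)[0, 1])   # approx rho = 0.9
```

The higher the correlation, the slower the coordinate-wise updates mix - exactly the situation block Gibbs (next paragraph) is designed for.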
Block Gibbs: Instead of updating one coordinate at a time, update a block of coordinates jointly. More efficient when coordinates within a block are highly correlated.
For AI: Gibbs sampling is used in latent Dirichlet allocation (LDA) for topic modelling - the full conditionals for topic assignments are available in closed form. In RL, it can be used to sample from the Q-function distribution for exploration.
8.4 Convergence Diagnostics
Running an MCMC chain for a finite number of steps gives an approximate (not exact) sample from $\pi$. Practical convergence diagnostics:
Trace plots: Plot $\theta_t$ vs. $t$. A well-mixed chain looks like stationary noise. Trends, drifts, or stuck values indicate non-convergence.
$\hat{R}$ statistic (Gelman-Rubin): Run $m$ chains from diverse starting points. Compute $\hat{R}$, which compares pooled between-plus-within-chain variance to within-chain variance. If $\hat{R} \approx 1$ (e.g., below 1.01), chains have converged to the same distribution.
Effective sample size (ESS): Due to autocorrelation within the chain, $T$ MCMC samples give fewer independent samples than $T$ iid samples. $\text{ESS} = \frac{T}{1 + 2 \sum_{k \ge 1} \rho_k}$, where $\rho_k$ is the lag-$k$ autocorrelation. $\text{ESS} \ll T$ indicates poor mixing.
Burn-in: The first $B$ samples (before the chain has mixed) are discarded. Typical burn-in is 10-50% of total samples.
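A self-contained ESS estimator, using the common truncation at the first negative autocorrelation (one heuristic choice among several):

```python
import numpy as np

def effective_sample_size(x, max_lag=1000):
    """ESS = T / (1 + 2 * sum of positive autocorrelations)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    acf = np.correlate(xc, xc, mode="full")[T - 1:] / (xc @ xc)
    s = 0.0
    for k in range(1, min(max_lag, T - 1)):
        if acf[k] < 0:               # truncate at first negative lag
            break
        s += acf[k]
    return T / (1.0 + 2.0 * s)

# AR(1) chain with autocorrelation 0.95: ESS is far below T
rng = np.random.default_rng(2)
x = np.zeros(10_000)
for t in range(1, len(x)):
    x[t] = 0.95 * x[t - 1] + rng.standard_normal()
print(effective_sample_size(x))      # roughly T * (1 - 0.95)/(1 + 0.95)
```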
8.5 Hamiltonian Monte Carlo
Motivation: For distributions on $\mathbb{R}^d$, random walk Metropolis-Hastings needs step size $\sigma = O(d^{-1/2})$ to maintain acceptable acceptance rates, giving diffusive mixing: $O(d)$ steps to traverse the distribution. Hamiltonian Monte Carlo (HMC) exploits gradient information to take large, high-acceptance steps.
Algorithm: Augment $\theta$ with momentum $p \sim N(0, M)$. The Hamiltonian is $H(\theta, p) = -\log \pi(\theta) + \frac{1}{2} p^\top M^{-1} p$. Propose a new state by simulating Hamiltonian dynamics using the leapfrog integrator for $L$ steps with step size $\epsilon$, then accept/reject with MH ratio $\min(1, e^{H(\theta, p) - H(\theta', p')})$.
Why it works: Hamiltonian dynamics preserves the joint distribution $\propto e^{-H(\theta, p)}$ on $(\theta, p)$. Marginalising out $p$ recovers $\pi(\theta)$. The leapfrog integrator introduces discretisation error, corrected by the MH accept/reject step.
Mixing time: HMC achieves roughly $O(d^{1/4})$ gradient evaluations per independent sample (vs. $O(d)$ for random walk), making it scalable to high dimensions. NUTS (No-U-Turn Sampler) automates the choice of $L$ and $\epsilon$.
For AI: HMC is the backbone of probabilistic programming systems like Stan and PyMC. In Bayesian deep learning, SGLD (stochastic gradient Langevin dynamics) adds Gaussian noise to SGD steps, yielding approximate Bayesian inference at scale.
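A compact HMC sketch with identity mass matrix (the step size, path length, and Gaussian test target are all assumptions that would need tuning in practice):

```python
import numpy as np

def hmc_step(x, log_pi, grad_log_pi, eps, L, rng):
    """One HMC transition: leapfrog proposal + Metropolis accept/reject."""
    p = rng.standard_normal(x.shape)             # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_pi(x_new)      # half step in momentum
    for _ in range(L - 1):
        x_new += eps * p_new                     # full position step
        p_new += eps * grad_log_pi(x_new)        # full momentum step
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_pi(x_new)      # final half step
    # H = -log pi(x) + |p|^2 / 2; accept with probability min(1, exp(-dH))
    log_acc = (log_pi(x_new) - 0.5 * p_new @ p_new) - (log_pi(x) - 0.5 * p @ p)
    return x_new if np.log(rng.random()) < log_acc else x

# Standard Gaussian target in 5 dimensions
log_pi = lambda x: -0.5 * x @ x
grad_log_pi = lambda x: -x
rng = np.random.default_rng(3)
x, draws = np.zeros(5), []
for _ in range(2000):
    x = hmc_step(x, log_pi, grad_log_pi, eps=0.2, L=10, rng=rng)
    draws.append(x.copy())
print(np.std(np.array(draws[500:]), axis=0))     # approx 1 per coordinate
```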
9. ML Deep Dive
9.1 Language Model Generation
Every autoregressive language model - GPT, LLaMA, PaLM, Gemini - generates text as a Markov chain. The state at step $t$ is the entire generated context $x_{1:t}$ (encoded in the KV cache), and the transition is:

$$P(x_{t+1} \mid x_{1:t}) = \mathrm{softmax}\!\left( f_\theta(x_{1:t}) / T \right)$$

Temperature and the stationary distribution: At temperature $T$:
- $T \to 0$: greedy decoding (deterministic); $T = 1$: standard sampling; $T \to \infty$: uniform
- The generated sequence is a Markov chain; its stationary distribution (if the chain were infinite) would represent the model's "preferred" discourse
Mixing time of language models: For a well-trained LLM, the Markov chain mixes quickly - the model "forgets" its starting state within a few hundred tokens for most topics. This reflects the inductive bias toward coherent text: the chain has a large spectral gap.
Top-$k$ and nucleus sampling: Top-$k$ sampling restricts transitions to the $k$ highest-probability tokens; nucleus (top-$p$) sampling restricts to the minimal set of tokens whose cumulative probability exceeds $p$. Both modify the transition matrix to improve mixing (reduce probability of stuck low-quality loops).
Repetition and periodicity: Without repetition penalties, language model chains can enter periodic loops (mode collapse in generation: "the the the ..."). Adding a repetition penalty introduces a state-dependent transition modification that destroys the periodic structure.
9.2 PageRank
Setup: Represent the web as a directed graph with $N$ pages. The random surfer model: a user follows a random link at each step with probability $\alpha$ (damping factor), or teleports to a uniformly random page with probability $1 - \alpha$.
Transition matrix:

$$P_{ij} = \alpha \frac{A_{ij}}{d_i} + (1 - \alpha) \frac{1}{N}$$

where $A_{ij} = 1$ if page $i$ links to page $j$, and $d_i$ is the out-degree of page $i$.
Dangling nodes: Pages with no outbound links ($d_i = 0$) break row-stochasticity. Fix: replace the row for a dangling node with the uniform distribution $1/N$. The resulting matrix is stochastic.
PageRank computation: The PageRank vector satisfies $\pi = \pi P$. Computed by power iteration:

$$\pi^{(t+1)} = \pi^{(t)} P$$

Convergence guaranteed by Perron-Frobenius (the damped matrix is primitive). Convergence rate: geometric, since the second eigenvalue is at most $\alpha$. With $\alpha = 0.85$, convergence to high precision takes on the order of 100 iterations.
Interpretation: $\pi_i$ is the long-run fraction of time the random surfer spends on page $i$. Pages with high in-degree from other high-PageRank pages get high scores.
For AI: Influence in a social network, citations in academic papers, and token importance in attention patterns can all be modelled as PageRank variants. The attention mechanism in transformers can be viewed as a differentiable, query-dependent version of PageRank.
9.3 Markov Decision Processes
A Markov Decision Process (MDP) extends Markov chains by adding actions:
- : state space; : action space
- : transition probabilities given action
- : reward function
- : discount factor
A policy induces a Markov chain on with transition .
Bellman equation: The value function satisfies:
In matrix form: , giving .
Policy evaluation = linear system: Given a fixed policy , computing requires inverting . This is guaranteed well-conditioned since implies all eigenvalues of have magnitude .
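Policy evaluation really is a single linear solve - a minimal sketch on a hypothetical 3-state chain induced by a fixed policy:

```python
import numpy as np

P_pi = np.array([[0.5, 0.5, 0.0],      # assumed policy-induced transitions
                 [0.1, 0.6, 0.3],
                 [0.0, 0.2, 0.8]])
R_pi = np.array([1.0, 0.0, -1.0])      # assumed expected per-state rewards
gamma = 0.9

V = np.linalg.solve(np.eye(3) - gamma * P_pi, R_pi)
print(V)   # value function; well-posed because gamma < 1
```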
RLHF as MDP: Reinforcement learning from human feedback trains a language model (the policy) to maximise a reward model - an MDP with the KV cache as state, vocabulary as actions, and the human preference model as reward.
9.4 Hidden Markov Models
A Hidden Markov Model (HMM) is a Markov chain $(Z_t)$ of hidden states with observations $X_t$ emitted from the current hidden state:

$$P(Z_t = z \mid Z_{t-1} = z') = A_{z', z}, \qquad X_t \sim B(\cdot \mid Z_t)$$

where $A$ is the transition matrix and $B$ is the emission distribution.
Forward algorithm: Compute $\alpha_t(z) = P(x_{1:t}, Z_t = z)$ recursively:

$$\alpha_t(z) = B(x_t \mid z) \sum_{z'} \alpha_{t-1}(z') \, A_{z', z}$$

Time complexity: $O(T K^2)$ for $K$ hidden states and $T$ observations. Used for likelihood computation and decoding.
Viterbi algorithm: Find the most likely hidden state sequence via dynamic programming:

$$\delta_t(z) = B(x_t \mid z) \max_{z'} \delta_{t-1}(z') \, A_{z', z}$$

Time complexity: same as forward algorithm.
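Both recursions in a few lines - a minimal sketch on a toy 2-state, 2-symbol HMM (all parameter values are assumptions):

```python
import numpy as np

def forward(A, B, mu0, obs):
    """alpha[t, z] = P(x_{1:t}, Z_t = z); returns alphas and the likelihood."""
    T, K = len(obs), A.shape[0]
    alpha = np.zeros((T, K))
    alpha[0] = mu0 * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
    return alpha, alpha[-1].sum()

def viterbi(A, B, mu0, obs):
    """Most likely hidden path via dynamic programming in the log domain."""
    T, K = len(obs), A.shape[0]
    logd = np.log(mu0) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)     # scores[z', z]
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

A = np.array([[0.9, 0.1], [0.2, 0.8]])         # assumed transitions
B = np.array([[0.8, 0.2], [0.3, 0.7]])         # assumed emissions (2 symbols)
mu0 = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 1]
print(forward(A, B, mu0, obs)[1])              # likelihood P(x_{1:T})
print(viterbi(A, B, mu0, obs))                 # most likely hidden path
```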
Baum-Welch (EM for HMMs): Learn $(A, B)$ from unlabelled observations. E-step: compute forward-backward probabilities. M-step: update parameters using expected sufficient statistics. Converges to a local maximum of the likelihood.
For AI: HMMs are classical sequence models (speech recognition, gene finding). Modern LLMs subsume HMMs: the hidden state (KV cache) is a rich continuous representation of the context, and emission (next-token distribution) is the softmax output.
9.5 Diffusion Models as CTMCs
The DDPM forward process can be viewed as a discrete-time Markov chain with Gaussian transitions:

$$q(x_t \mid x_{t-1}) = N\!\left( x_t; \, \sqrt{1 - \beta_t} \, x_{t-1}, \, \beta_t I \right)$$

The marginal is $q(x_t \mid x_0) = N\!\left( x_t; \, \sqrt{\bar{\alpha}_t} \, x_0, \, (1 - \bar{\alpha}_t) I \right)$ with $\bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s)$.
Recall from Section06 Stochastic Processes: In the SDE formulation (Song et al., 2021), the continuous-time version is the variance-preserving SDE $dx = -\frac{1}{2} \beta(t) \, x \, dt + \sqrt{\beta(t)} \, dW$.
The reverse process: By time-reversal of Markov processes (Anderson, 1982), the reverse of the forward diffusion is also a Markov process. The reverse diffusion has:

$$dx = \left[ -\tfrac{1}{2} \beta(t) \, x - \beta(t) \, \nabla_x \log p_t(x) \right] dt + \sqrt{\beta(t)} \, d\bar{W}$$

The neural network learns to parameterise the reverse transitions - equivalently, to approximate the score function $\nabla_x \log p_t(x)$.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Confusing stationarity with convergence | Stationarity ($\pi P = \pi$) is a property of a distribution; convergence ($\mu_n \to \pi$) is about the chain's trajectory. The chain may start at $\pi$ and be stationary without "converging" (it's already there). | Distinguish: stationarity is the destination; convergence is the journey. |
| 2 | Assuming irreducibility implies unique stationary distribution | An irreducible chain on an infinite state space may be null-recurrent (no normalisable stationary distribution), e.g., SRW on $\mathbb{Z}$. | Unique stationary distribution requires irreducibility + positive recurrence. |
| 3 | Forgetting aperiodicity for convergence | An irreducible positive-recurrent but periodic chain has a unique stationary distribution but does NOT converge - it oscillates. | Convergence to stationarity requires ergodicity = irreducible + positive recurrent + aperiodic. |
| 4 | Treating detailed balance as necessary for stationarity | Detailed balance ($\pi_i P_{ij} = \pi_j P_{ji}$) is sufficient but not necessary for $\pi P = \pi$. Many chains (e.g., cyclic chains) have stationary distributions without satisfying detailed balance. | Detailed balance implies stationarity, but stationarity does not imply detailed balance. |
| 5 | Confusing the transition matrix orientation | Some books write $P_{ij}$ as probability from $j$ to $i$ (column stochastic); others write it as probability from $i$ to $j$ (row stochastic). This flips $\pi P = \pi$ vs. $P \pi = \pi$. | Fix a convention: row stochastic means rows sum to 1 and $\pi P = \pi$ (left eigenvector). |
| 6 | Ignoring burn-in in MCMC | The first MCMC samples come from a distribution close to $\mu_0$ (initial distribution), not $\pi$. Including them biases posterior estimates. | Always discard a burn-in period. Use $\hat{R}$ and ESS to assess convergence. |
| 7 | Confusing mixing time with burn-in | Mixing time is a property of the chain (how long until the worst-case distribution is close to $\pi$). Burn-in is a practical heuristic. They are related but not equal. | Burn-in should be at least $t_{\text{mix}}$; more if starting far from $\pi$. |
| 8 | Assuming high acceptance rate = well-mixed MCMC | A chain that always accepts (tiny step size in MH) explores very slowly; step size near 0 gives acceptance near 1 but poor mixing. | Target ~23% acceptance in high dimensions (optimal for Gaussian targets). |
| 9 | Using $P^n$ to compute stationary distribution for large $n$ | Matrix exponentiation converges to the rank-1 matrix $\mathbf{1} \pi$, but floating-point errors accumulate. | Use power iteration on the distribution vector: $\mu_{n+1} = \mu_n P$, which is numerically stable. |
| 10 | Misidentifying the state space for LLM generation | A bigram LM has states = vocabulary (size $V$). A full autoregressive LM has states = entire context = exponential in context length. | The Markov chain for a transformer is on the context window, not the vocabulary alone. |
| 11 | Confusing transient and absorbing | A transient state will eventually be left for good; an absorbing state is one from which the chain never leaves. Every absorbing state is recurrent (trivially), not transient. | Transient: $f_i < 1$; absorbing: $P_{ii} = 1$. Absorbing states are recurrent. |
| 12 | Expecting MCMC samples to be independent | MCMC samples are autocorrelated - consecutive samples are not independent. ESS < T measures effective independence. | Report ESS, not raw sample count. Use thinning to reduce autocorrelation if needed. |
11. Exercises
Exercise 1 * - Transition Matrix and Distribution Evolution
A weather model has two states: Sunny (1) and Rainy (2), with transition matrix $P$ (values in exercises.ipynb).
(a) Starting from $\mu_0 = (1, 0)$ (certainly sunny), compute $\mu_1 = \mu_0 P$ and $\mu_2 = \mu_1 P$.
(b) Compute the 3-step transition matrix $P^3$. What is $P(X_3 = \text{Rainy} \mid X_0 = \text{Sunny})$?
(c) Find the stationary distribution by solving $\pi P = \pi$ with $\sum_i \pi_i = 1$.
(d) Verify that $\mu_n \to \pi$ as $n \to \infty$ by computing $\mu_n$ for increasing $n$.
Exercise 2 * - State Classification
Consider the Markov chain on states $\{1, \dots, 4\}$ with the transition matrix given in exercises.ipynb.
(a) Draw the transition diagram. Identify all communicating classes.
(b) Classify each state as recurrent or transient. Justify using the return probability criterion.
(c) For the recurrent states, determine their period. Are any states aperiodic?
(d) Starting from state 1, what is the probability of eventually reaching state 4?
Exercise 3 * - Computing Stationary Distributions
(a) Implement stationary_power_iteration(P, n_iter=1000) that starts from a uniform distribution and returns $\mu_0 P^n$ after the given number of iterations. Apply it to the weather chain of Exercise 1 and to a random row-stochastic matrix.
(b) Verify your result satisfies $\pi P = \pi$ to numerical tolerance.
(c) For a doubly-stochastic matrix (rows AND columns sum to 1), prove the stationary distribution is uniform without solving any equations. Verify numerically.
(d) For the birth-death chain with constant birth probability $p$ and death probability $q$ on $\{0, \dots, N\}$ with reflecting barriers, compute $\pi$ analytically and verify against power iteration.
Exercise 4 ** - Detailed Balance and Birth-Death Chains
(a) Verify that the two-state chain of Exercise 1 satisfies detailed balance with its stationary distribution $\pi$.
(b) Implement a queueing birth-death chain with constant birth and death probabilities on states $\{0, \dots, 20\}$ (truncated at 20). Compute the stationary distribution and verify detailed balance $\pi_i P_{ij} = \pi_j P_{ji}$.
(c) For the cyclic chain with $P_{01} = P_{12} = P_{20} = 1$, find the stationary distribution. Does it satisfy detailed balance? Explain why not.
(d) Show that if $P$ is doubly stochastic, then detailed balance holds with the uniform distribution iff $P$ is also symmetric.
Exercise 5 ** - Mixing Time and Spectral Gap
(a) For the two-state chain in Exercise 1, compute the eigenvalues of $P$. What is the spectral gap?
(b) Using the spectral gap bound, upper-bound the mixing time $t_{\text{mix}}(1/4)$.
(c) Compute the TV distance $\|\mu_0 P^t - \pi\|_{TV}$ for increasing $t$ and plot the convergence. At what $t$ does it drop below $1/4$?
(d) For the lazy version $P_{\text{lazy}} = \frac{1}{2}(I + P)$, redo parts (a)-(c). How does the spectral gap change? How does the mixing time change?
Exercise 6 ** - Metropolis-Hastings
(a) Implement Metropolis-Hastings with a Gaussian random walk proposal for a double-well target $\pi(x) \propto e^{-U(x)}$ (the potential $U$ is given in exercises.ipynb). Run the chain for a large number of steps at a moderate step size $\sigma$.
(b) Verify your sampler satisfies detailed balance: for two states $x, x'$, check numerically that $\pi(x) \, T(x, x') = \pi(x') \, T(x', x)$, where $T$ is the MH transition kernel.
(c) Compute the acceptance rate and effective sample size (ESS). How does ESS change as $\sigma$ varies over a grid of values?
(d) Estimate the target's mean and variance from your samples. Compare to the true values (computed numerically).
Exercise 7 *** - PageRank
(a) Implement PageRank power iteration for the 5-node graph whose adjacency matrix is given in exercises.ipynb, with damping factor $\alpha = 0.85$. Handle dangling nodes.
(b) Verify that your PageRank vector satisfies $\pi = \pi P$ to numerical tolerance.
(c) Add a new node 6 that links to all existing nodes, and all existing nodes link to node 6. How does the PageRank of node 1 change? Why?
(d) Implement personalised PageRank, where the teleportation vector is non-uniform (concentrated on a chosen seed node). Compare to standard PageRank.
Exercise 8 *** - HMM Forward Algorithm and Viterbi
An HMM models DNA sequences with 2 hidden states (CpG island vs. non-CpG) and 4 observations (A, C, G, T).
(a) Given the transition and emission matrices (the HMM notebook provides values), implement the forward algorithm and compute the likelihood of an observation sequence under equal initial probabilities.
(b) Implement the Viterbi algorithm to find the most likely hidden state sequence for the same observations.
(c) Verify the forward probabilities satisfy $\sum_z \alpha_t(z) = P(x_{1:t})$ at each $t$.
(d) Compare Viterbi decoding to the most-probable-state-per-position (posterior decoding using the backward algorithm). When do they differ?
12. Why This Matters for AI (2026 Perspective)
| Concept | AI/ML Application | Impact |
|---|---|---|
| Transition matrix / stationary dist. | PageRank web graph; LLM token distribution | Foundation for all sequential reasoning |
| Ergodic theorem | MCMC posterior estimation; time-averaging in RL | Justifies replacing expectation with time average |
| Perron-Frobenius | PageRank convergence proof; convergence of power iteration | Guarantees unique ranking vectors exist |
| Detailed balance / reversibility | Correctness of MH, Gibbs, SGLD | All Bayesian MCMC algorithms rely on this |
| Spectral gap / mixing time | MCMC convergence rate; warm-start certification | Explains why MCMC works (or fails) in practice |
| Metropolis-Hastings | Bayesian neural network posteriors; sampling EBMs | Foundation of probabilistic inference in ML |
| Gibbs sampling | LDA topic models; Boltzmann machine inference | Tractable posterior sampling via conditionals |
| HMC / NUTS | Stan, PyMC, posterior sampling in Bayesian DL | Gold standard for Bayesian inference on continuous parameter spaces |
| MDP / Bellman equation | RL algorithms (PPO, SAC, DQN, RLHF) | Every RL agent solves a Markov chain problem |
| HMM forward-backward | Speech recognition, gene prediction, legacy NLP | Efficient inference on latent sequence models |
| CTMC / generator matrix | LLM serving queues; continuous-time RL | Latency modelling and event-driven simulation |
| Diffusion as CTMC | DDPM/DDIM forward process; score-matching | Connects generative modelling to Markov chain theory |
| Power iteration | PageRank; dominant eigenvector computation | Simple iterative algorithm that scales to trillion-node graphs |
| Lazy chains / warm starts | MCMC initialisation strategy | Reduces burn-in and improves sample efficiency |
13. Conceptual Bridge
Markov chains occupy the central position in the probability curriculum: they are the first class of stochastic processes with nontrivial structure - neither completely independent (iid) nor completely dependent. The Markov property gives just enough memory to model real sequential phenomena while remaining mathematically tractable.
Looking backward: Markov chains build on everything in Chapters 1-6. The transition matrix is a row-stochastic matrix - linear algebra from Chapters 2-3. The stationary distribution is an eigenvector problem solved by Perron-Frobenius. Computing stationary distributions uses the conditional probability machinery from Section03. The ergodic theorem is a strengthening of the law of large numbers (Section06). The Metropolis-Hastings acceptance ratio uses Bayes' theorem (Section03). Mixing times use concentration inequalities (Section05) through the spectral gap. The stochastic process framework of filtrations and stopping times (Section06) provides the rigorous foundation.
Looking forward: Markov chains are the gateway to Chapter 7 (Statistics), where MCMC enables Bayesian inference with intractable posteriors; Chapter 8 (Optimisation), where Markov chain mixing theory explains why SGD finds flat minima; and Chapter 9 (Information Theory), where the entropy of a stationary distribution connects to compression. Markov Decision Processes - the reinforcement learning setting - are a direct extension requiring optimisation over policies.
The deep connection: The central theorem of Markov chain theory - ergodicity implies convergence to the unique stationary distribution - is the probabilistic analogue of the contraction mapping theorem from analysis: repeated application of the transition operator contracts all distributions toward the unique fixed point $\pi$. The spectral gap quantifies the contraction rate, just as the Lipschitz constant does in the deterministic setting. This structural parallel runs through SGD convergence theory, where the gradient operator acts as a stochastic contraction.
MARKOV CHAINS IN THE CURRICULUM
========================================================================
Section01 Random Variables -------------------------------------+
Section02 Distributions --------------------------------------+ |
Section03 Joint Distributions ---------------------------+ | |
Section04 Expectation & Moments ----------------------+ | | |
Section05 Concentration Inequalities -------------+ | | | |
Section06 Stochastic Processes ---------------+ | | | | |
v v v v v v
+---------------------------------+
| Section07 MARKOV CHAINS |
| |
| Markov property |
| Transition matrices |
| Stationary distributions |
| Mixing times, spectral gap |
| MCMC (MH, Gibbs, HMC) |
| PageRank, MDP, HMM |
+--------------+------------------+
|
+-----------------------------+--------------------+
v v v
Ch7: Statistics Ch8: Optimisation Ch9: Info Theory
(MCMC for Bayes,    (SGD mixing theory,   (Entropy of pi,
posterior inference) Langevin dynamics) KL from stationarity)
v v
RL/RLHF: MDP Diffusion models
(policy eval, (DDPM as CTMC,
value functions) reverse Markov)
========================================================================
Appendix A: Harmonic Functions and the Dirichlet Problem
A function $h : \mathcal{S} \to \mathbb{R}$ is harmonic for a Markov chain at state $i$ if:

$$h(i) = \sum_j P_{ij} \, h(j)$$

That is, $h(i)$ equals the expected value of $h$ one step later. Harmonic functions are the "conserved quantities" of Markov chains - if $h(X_0)$ is the starting value, then $h(X_n)$ forms a martingale.
Maximum principle: For an irreducible chain on a finite state space, the only bounded harmonic functions are constants. This is the probabilistic analogue of the maximum principle for harmonic functions in PDE theory.
Dirichlet problem: On a chain with absorbing boundary $\partial$ and interior $\mathcal{I}$, the function $h(i) = E_i[g(X_{\tau_\partial})]$ (expected boundary value $g$ at the hitting time $\tau_\partial$) is the unique solution to:

$$h(i) = \sum_j P_{ij} \, h(j) \ \text{ for } i \in \mathcal{I}, \qquad h(i) = g(i) \ \text{ for } i \in \partial$$

Application - gambler's ruin: Let $h(i) = P(\text{hit } N \text{ before } 0 \mid X_0 = i)$. Then $h$ is harmonic on $\{1, \dots, N-1\}$ with boundary conditions $h(0) = 0$, $h(N) = 1$. For the simple symmetric random walk: $h(i) = i/N$.
For AI: Value functions in reinforcement learning are harmonic up to rewards and discounting: $V^\pi = R^\pi + \gamma P^\pi V^\pi$ is a Bellman equation - a discrete-time analogue of the Dirichlet problem with discount factor $\gamma$.
Appendix B: Spectral Theory for Non-Reversible Chains
For non-reversible chains, eigenvalues of $P$ can be complex (though all satisfy $|\lambda| \le 1$). The Jordan form replaces the diagonal eigendecomposition.
Pseudo-spectrum: For nearly-reversible chains, the pseudo-spectrum can be much larger than the spectrum, causing transient amplification before eventual convergence.
Non-reversible speedup: Some non-reversible chains mix faster than any reversible chain with the same stationary distribution. The "lifted" or "skewed" chains in the literature achieve mixing time $O(1/\Phi)$ vs. $O(1/\Phi^2)$ for reversible chains, where $\Phi$ is the Cheeger constant.
Lifting: A common technique to construct non-reversible fast-mixing chains: add a "direction" variable to the state, creating a Markov chain on $\mathcal{S} \times \{+1, -1\}$ that preferentially moves in one direction while still having $\pi$ as marginal stationary distribution.
Appendix C: Advanced MCMC Methods
C.1 Stochastic Gradient Langevin Dynamics (SGLD)
For Bayesian inference on large datasets, standard MH requires evaluating the likelihood at the full dataset - too expensive. SGLD combines SGD with Gaussian noise:

$$\theta_{t+1} = \theta_t + \frac{\epsilon_t}{2} \, \widehat{\nabla \log p}(\theta_t) + \sqrt{\epsilon_t} \, \eta_t, \qquad \eta_t \sim N(0, I)$$

where $\widehat{\nabla \log p}(\theta_t)$ is the mini-batch stochastic gradient of the log-posterior.
For step sizes satisfying $\sum_t \epsilon_t = \infty$ and $\sum_t \epsilon_t^2 < \infty$ (Robbins-Monro conditions), the chain converges to samples from the posterior. At small but fixed step size $\epsilon$, the chain samples from an approximate posterior within $O(\epsilon)$ of the true posterior.
For AI: SGLD underlies Bayesian deep learning at scale. It explains why SGD with learning rate decay has regularisation properties: the decaying noise variance means the chain converges to a tighter distribution around the posterior mode.
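A toy SGLD run on a 1D standard Gaussian "posterior" with Robbins-Monro step sizes (everything here - target, schedule, constants - is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
theta, draws = 0.0, []
for t in range(1, 20_001):
    eps = 0.5 * t ** -0.55        # satisfies the Robbins-Monro conditions
    grad = -theta                 # grad log p(theta) for N(0, 1)
    theta += 0.5 * eps * grad + np.sqrt(eps) * rng.standard_normal()
    draws.append(theta)
print(np.mean(draws[5000:]), np.std(draws[5000:]))   # approx 0 and 1
```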
C.2 Parallel Tempering (Replica Exchange)
For multimodal targets, single chains can get stuck in one mode. Parallel tempering runs $K$ chains at temperatures $T_1 > T_2 > \dots > T_K = 1$ (with $T = 1$ being the target). Periodically, adjacent chains $i$ and $j$ swap states with MH acceptance probability:

$$\alpha_{\text{swap}} = \min\left( 1, \; \frac{\pi_i(x_j) \, \pi_j(x_i)}{\pi_i(x_i) \, \pi_j(x_j)} \right)$$

where $\pi_i(x) \propto \pi(x)^{1/T_i}$ is the tempered target.
High-temperature chains mix fast (flat distribution) and pass samples to low-temperature chains through swaps. This enables exploration of multiple modes.
C.3 Slice Sampling
Slice sampling introduces a uniform auxiliary variable $u$, creating a joint distribution on $(x, u)$ that is uniform on the region under the density, $\{(x, u) : 0 < u < \pi(x)\}$. Marginalising out $u$ recovers $\pi$.
The slice sampler alternates between updating $u \sim \text{Unif}(0, \pi(x))$ (trivial given $x$) and updating $x$ uniformly on the slice level set $\{x : \pi(x) > u\}$. No tuning parameter; acceptance rate = 1; effective for univariate distributions.
Appendix D: Worked Examples
D.1 Google PageRank Computation (Mini Example)
Consider a 4-page web: page 1 links to 2,3; page 2 links to 4; page 3 links to 2; page 4 links to 1,3.
Transition matrix (with no damping for clarity):
$$P = \begin{pmatrix} 0 & 1/2 & 1/2 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 \end{pmatrix}$$
Stationary distribution (solve $\pi P = \pi$, $\sum_i \pi_i = 1$): $\pi = \left(\tfrac{2}{13}, \tfrac{4}{13}, \tfrac{3}{13}, \tfrac{4}{13}\right)$ - verified by direct computation.
With damping $\alpha$: $\hat{P} = \alpha P + \frac{1 - \alpha}{N} J$ where $J$ is the all-ones matrix. Power iteration converges in ~30 iterations.
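A quick `numpy` check of this example:

```python
import numpy as np

# 4-page web: 1 -> {2,3}, 2 -> {4}, 3 -> {2}, 4 -> {1,3}
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
])
pi = np.ones(4) / 4
for _ in range(5000):                 # power iteration (the chain is aperiodic)
    pi = pi @ P
print(pi)                             # -> [2/13, 4/13, 3/13, 4/13]
print(np.array([2, 4, 3, 4]) / 13)
```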
D.2 MCMC on a Bimodal Distribution
Target: $\pi(x) = \tfrac{1}{2}\mathcal{N}(x; -\mu, 1) + \tfrac{1}{2}\mathcal{N}(x; +\mu, 1)$ (mixture of two well-separated Gaussians at $\pm\mu$).
With Gaussian random walk MH and a small step size $\sigma$: the chain mixes within one mode; inter-modal jumps are rare because the log-density barrier between the modes is many nats. Acceptance rate ~60%, but ESS/T ~0.02 (poor mixing).
With $\sigma$ on the order of the mode separation: the chain jumps between modes; acceptance rate ~15%, ESS/T ~0.10 (better mixing).
This illustrates the exploration-exploitation tradeoff in MCMC proposal design.
D.3 HMM Example: Weather from Ice Core
Two hidden states: Ice Age (I), Warm Period (W). Observations: high (H) or low (L) dust in the ice core.
Transition: each climate state is persistent (large $P(I \to I)$ and $P(W \to W)$). Emission: high dust is much more likely during an Ice Age.
For an observation sequence of dust readings, the Viterbi algorithm recovers the most likely hidden sequence (high dust = ice age, low dust = warm period).
The forward algorithm gives the likelihood of the observed dust sequence under the model.
Appendix E: Key Theorems and Proofs
E.1 Proof: Stationary Distribution is Unique for Finite Irreducible Chains
Theorem. A finite irreducible Markov chain has a unique stationary distribution $\pi$ with $\pi_i > 0$ for all $i$.
Proof. Consider the $(N-1)$-dimensional probability simplex $\Delta = \{\pi \in \mathbb{R}^N : \pi_i \ge 0, \sum_i \pi_i = 1\}$. The map $\pi \mapsto \pi P$ is a continuous map from $\Delta$ to $\Delta$. By Brouwer's fixed point theorem, it has at least one fixed point $\pi$.
For uniqueness: suppose $\pi$ and $\pi'$ are two stationary distributions. Since the chain is irreducible and finite, all states are positive recurrent, so $\pi_i = 1/m_i$ where $m_i$ is the mean return time to $i$. The mean return time is determined by $P$ alone, so any stationary distribution must equal $(1/m_i)_i$, giving $\pi_i = \pi'_i$ for all $i$.
For $\pi_i > 0$: by irreducibility, from any state $j$ with $\pi_j > 0$ there exists a path of positive-probability transitions to $i$. Stationarity propagates positive mass along this path ($\pi_k \ge \pi_j P^{(n)}_{jk} > 0$ for states $k$ on the path), giving $\pi_i > 0$.
E.2 Proof: Detailed Balance Implies Stationarity
Theorem. If $\pi_i P_{ij} = \pi_j P_{ji}$ for all $i, j$, then $\pi P = \pi$.
Proof. For each $j$:
$$(\pi P)_j = \sum_i \pi_i P_{ij} = \sum_i \pi_j P_{ji} = \pi_j \sum_i P_{ji} = \pi_j$$
where we used detailed balance in the second equality and the row-sum property of stochastic matrices ($\sum_i P_{ji} = 1$) in the last.
E.3 Proof: Convergence via Coupling (Sketch)
Theorem. Let $P$ be ergodic. For any $x, y$ and any coupling $(X_n, Y_n)$ with $X_0 = x$, $Y_0 = y$, define the coalescence time $\tau_c = \min\{n : X_n = Y_n\}$. Then:
$$\|P^n(x, \cdot) - P^n(y, \cdot)\|_{TV} \le P(\tau_c > n)$$
Proof. For any event $A$:
$$P(X_n \in A) - P(Y_n \in A) = P(X_n \in A,\ \tau_c > n) - P(Y_n \in A,\ \tau_c > n)$$
On $\{\tau_c \le n\}$: $X_n = Y_n$ (the chains coalesce and move together afterwards), so those contributions cancel. Therefore:
$$|P(X_n \in A) - P(Y_n \in A)| \le P(\tau_c > n)$$
Taking the supremum over $A$ gives the result.
Appendix F: Python Recipes for Markov Chains
```python
import numpy as np

# -- 1. Power iteration for the stationary distribution
def stationary(P, tol=1e-12, max_iter=10000):
    """Iterate pi <- pi @ P from the uniform distribution until convergence."""
    pi = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(max_iter):
        pi_new = pi @ P
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new
        pi = pi_new
    return pi

# -- 2. Simulate a Markov chain
def simulate_chain(P, x0, n_steps):
    """Sample a trajectory of length n_steps + 1 starting from state x0."""
    N = P.shape[0]
    x = x0
    trajectory = [x]
    for _ in range(n_steps):
        x = np.random.choice(N, p=P[x])
        trajectory.append(x)
    return np.array(trajectory)

# -- 3. Metropolis-Hastings for a 1D target (symmetric Gaussian proposal)
def metropolis_hastings(log_pi, n_steps, sigma=0.5, x0=0.0):
    """log_pi: unnormalised log density; the symmetric proposal cancels in the ratio."""
    x = x0
    samples = []
    for _ in range(n_steps):
        x_prime = x + sigma * np.random.randn()
        log_a = log_pi(x_prime) - log_pi(x)
        if np.log(np.random.rand()) < log_a:
            x = x_prime
        samples.append(x)
    return np.array(samples)

# -- 4. PageRank with damping and dangling-node handling
def pagerank(A, alpha=0.85, tol=1e-8):
    N = A.shape[0]
    out_degree = A.sum(axis=1, keepdims=True)
    dangling = (out_degree == 0).flatten()
    out_degree[dangling] = 1                  # avoid division by zero
    P = A / out_degree
    P[dangling] = 1 / N                       # dangling nodes teleport uniformly
    teleport = np.ones((N, N)) / N
    P_hat = alpha * P + (1 - alpha) * teleport
    pi = np.ones(N) / N
    for _ in range(1000):
        pi_new = pi @ P_hat
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new
        pi = pi_new
    return pi

# -- 5. HMM forward algorithm
def hmm_forward(obs, T, E, pi0):
    """obs: observation indices, T: NxN transition, E: NxM emission, pi0: initial."""
    N = T.shape[0]
    alpha = np.zeros((len(obs), N))
    alpha[0] = pi0 * E[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ T) * E[:, obs[t]]
    return alpha, alpha[-1].sum()             # forward matrix and likelihood
```
Appendix G: Connections to Other Areas
Spectral graph theory: For a random walk on an undirected graph $G$, the transition matrix $P = D^{-1}A$ (where $D$ is the diagonal degree matrix and $A$ is the adjacency matrix) has eigenvalues in $[-1, 1]$. The spectral gap of the (normalized) Laplacian equals the spectral gap of the walk. Well-connected (expander) graphs have large spectral gaps and fast-mixing random walks.
Information theory: The relative entropy (KL divergence) between the chain's distribution and stationarity decreases monotonically: $D_{\mathrm{KL}}(\mu_{n+1} \,\|\, \pi) \le D_{\mathrm{KL}}(\mu_n \,\|\, \pi)$. This is the data processing inequality applied to the Markov transition. The rate of KL decrease is related to the spectral gap.
Ergodic theory: The Markov chain ergodic theorem is a special case of Birkhoff's ergodic theorem in abstract measure-preserving systems. The stationarity condition says $\pi$ is an invariant measure for the shift map on the path space.
Linear programming: Computing the stationary distribution satisfying $\pi P = \pi$, $\sum_i \pi_i = 1$, $\pi_i \ge 0$ is a linear system. The LP relaxation of combinatorial problems can sometimes be solved by finding stationary distributions of associated Markov chains.
Quantum computing: Quantum walks are the quantum analogue of random walks on graphs, with amplitudes instead of probabilities. They can achieve quadratic speedups over classical random walks for certain search and sampling problems.
Appendix H: Self-Assessment Checklist
Before moving to Chapter 7 (Statistics), verify you can:
- Write the transition matrix for any small Markov chain from a verbal description
- Compute $P^n$ for small $n$ by hand or code
- Classify all states of a small chain as communicating/recurrent/transient/absorbing
- Solve $\pi P = \pi$ for a $2 \times 2$ or $3 \times 3$ stochastic matrix
- State Perron-Frobenius and explain why the stationary distribution is unique
- Verify detailed balance for a given $(\pi, P)$ pair
- Compute the spectral gap from eigenvalues and give a mixing time bound
- Implement Metropolis-Hastings for a 1D target and compute acceptance rate
- Implement power iteration for PageRank with dangling node handling
- Implement the HMM forward algorithm and compute the likelihood of an observation sequence
- Explain why MCMC sample autocorrelation reduces the effective sample size
- State the ergodic theorem and explain why it justifies MCMC estimation
Appendix I: Detailed Proofs - Perron-Frobenius and Mixing
I.1 Perron-Frobenius Theorem (Complete Proof)
We prove the Perron-Frobenius theorem for primitive stochastic matrices (those with $P^m > 0$ entrywise for some $m$).
Step 1: Eigenvalue 1 exists. For any stochastic matrix $P$, $P\mathbf{1} = \mathbf{1}$ (with $\mathbf{1}$ the column vector of ones), so 1 is an eigenvalue. (The uniform row vector is a left eigenvector iff $P$ is doubly stochastic.) More relevantly: by the Brouwer fixed-point theorem on the simplex $\Delta$, the map $\pi \mapsto \pi P$ has a fixed point - the stationary distribution. This fixed point is a left eigenvector with eigenvalue 1.
Step 2: $|\lambda| \le 1$ for all eigenvalues. Let $\lambda$ be an eigenvalue with (right) eigenvector $v$, i.e. $Pv = \lambda v$. Pick $i = \arg\max_j |v_j|$. Then $|\lambda|\,|v_i| = \left|\sum_j P_{ij} v_j\right| \le \sum_j P_{ij}\,|v_j| \le |v_i|$. So $|\lambda| \le 1$.
Step 3: $|\lambda| = 1$ implies $\lambda = 1$ for primitive $P$. For a primitive matrix, $P^m > 0$ (all entries strictly positive) for some $m$. Suppose $|\lambda| = 1$, $\lambda \ne 1$, with $Pv = \lambda v$ and $v \ne 0$. The equality case in Step 2, applied to $P^m$, requires $|v_j| = |v_i|$ for all $j$ and all rows of $P^m$ to "align" the phases of the $v_j$. But all rows of $P^m$ are strictly positive, and the alignment condition forces all $v_j$ to be equal - meaning $v \propto \mathbf{1}$ and $\lambda = 1$. Contradiction.
Step 4: Convergence. Since 1 is the unique eigenvalue on the unit circle for primitive $P$, all other eigenvalues satisfy $|\lambda_j| \le 1 - \gamma$ for some $\gamma > 0$. The spectral decomposition gives $P^n = \mathbf{1}\pi + R_n$, where $\|R_n\| \to 0$ at rate $(1 - \gamma)^n$.
I.2 Coupling Proof of Convergence (Complete)
Theorem. Let $P$ be ergodic with stationary distribution $\pi$. For any $x$:
$$\|P^n(x, \cdot) - \pi\|_{TV} \le P(\tau_c > n)$$
where $\tau_c$ is the coalescence time of the Markov coupling (two chains started at $x$ and at $\pi$).
Proof. Let $(X_n, Y_n)$ be a coupling with $X_0 = x$ and $Y_0 \sim \pi$, where both chains use the same transition $P$ and move together once they meet. Since $Y_0 \sim \pi$ and $\pi$ is stationary, $Y_n \sim \pi$ for all $n$.
For any event $A$:
$$P(X_n \in A) - \pi(A) = P(X_n \in A) - P(Y_n \in A) = P(X_n \in A,\ \tau_c > n) - P(Y_n \in A,\ \tau_c > n)$$
On $\{\tau_c \le n\}$: $X_n = Y_n$ (they have coalesced), so their indicators for $A$ are equal and those terms cancel.
The remaining difference is at most $P(\tau_c > n)$ in absolute value; taking the supremum over $A$ gives the TV bound.
I.3 Spectral Gap Lower Bound via Conductance (Cheeger Inequality)
Definition. The conductance (Cheeger constant) of a chain is:
$$\Phi = \min_{S:\ \pi(S) \le 1/2}\ \frac{\sum_{i \in S,\, j \notin S} \pi_i P_{ij}}{\pi(S)}$$
Conductance measures the minimum probability flux crossing any "bottleneck" cut in the chain.
Cheeger Inequality: For a reversible chain:
$$\frac{\Phi^2}{2} \le \gamma \le 2\Phi$$
The lower bound is the harder direction and gives $\gamma \ge \Phi^2/2$. In words: a severe bottleneck (small $\Phi$) implies a small spectral gap (slow mixing). Conversely, if $\Phi$ is bounded below, the chain mixes in $O(1/\Phi^2)$ steps up to logarithmic factors (non-reversible lifted chains can improve this to $O(1/\Phi)$; see Appendix B).
For AI: The Cheeger inequality provides some of the most practical bounds for MCMC convergence. For neural network posteriors, the conductance can be estimated using gradient information - a well-conditioned posterior (without sharp barriers between high-probability regions) has high conductance.
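For small chains, both sides of the Cheeger inequality can be checked by brute force (exponential in $N$, so demo-sized only). The 4-state example below - two 2-state "cliques" joined by weak links - is an illustrative bottleneck chain.

```python
import numpy as np
from itertools import combinations

def conductance(P, pi):
    """Brute-force Phi = min over cuts S (pi(S) <= 1/2) of flux(S, S^c)/pi(S)."""
    N = len(pi)
    best = np.inf
    for r in range(1, N):
        for S in combinations(range(N), r):
            S = list(S)
            if pi[S].sum() > 0.5:
                continue
            Sc = [j for j in range(N) if j not in S]
            flux = sum(pi[i] * P[i, j] for i in S for j in Sc)
            best = min(best, flux / pi[S].sum())
    return best

P = np.array([
    [0.45, 0.45, 0.10, 0.00],
    [0.45, 0.45, 0.00, 0.10],
    [0.10, 0.00, 0.45, 0.45],
    [0.00, 0.10, 0.45, 0.45],
])                                    # symmetric => pi uniform, chain reversible
pi = np.ones(4) / 4
gap = 1 - np.sort(np.abs(np.linalg.eigvals(P)))[-2]
phi = conductance(P, pi)
print(phi**2 / 2, gap, 2 * phi)       # Cheeger: phi^2/2 <= gamma <= 2*phi
```

Here $\Phi = 0.1$ and $\gamma = 0.2$, so the upper bound $\gamma \le 2\Phi$ is tight for this chain.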
Appendix J: Markov Chains and Entropy
J.1 Entropy Production
The relative entropy (KL divergence) from the chain's distribution to stationarity is:
$$D\!\left(\mu_n \,\|\, \pi\right) = \sum_i \mu_n(i)\, \log \frac{\mu_n(i)}{\pi(i)}$$
Data processing inequality (for Markov chains):
$$D\!\left(\mu_{n+1} \,\|\, \pi\right) \le D\!\left(\mu_n \,\|\, \pi\right)$$
This follows from the data processing inequality: applying the stochastic map $\mu \mapsto \mu P$ to both arguments (and using $\pi P = \pi$) cannot increase KL divergence. For a primitive chain, equality holds iff $\mu_n = \pi$.
For reversible chains: The relative entropy decays at rate governed by the log-Sobolev constant. Chains with larger log-Sobolev constants (tighter functional inequalities) decay faster.
J.2 Entropy of the Stationary Distribution
The Shannon entropy of $\pi$ is $H(\pi) = -\sum_i \pi_i \log \pi_i$. For a uniform stationary distribution (doubly stochastic $P$), $H(\pi) = \log N$ is maximised. For PageRank, the entropy of $\pi$ measures how "concentrated" web traffic is - low entropy means a few pages dominate.
Maximum entropy interpretation: The stationary distribution of a reversible chain with detailed balance can be interpreted as the maximum entropy distribution subject to the constraints encoded by the detailed balance equations. This connects Markov chains to the exponential family (Section02).
J.3 Kullback-Leibler and MCMC Acceptance
The Metropolis-Hastings acceptance probability has an information-theoretic reading:
$$\alpha(x, x') = \min\!\left(1, \exp\!\left[\log \pi(x') - \log \pi(x) + \log q(x \mid x') - \log q(x' \mid x)\right]\right)$$
The exponent is a log-ratio of probability densities - the same quantity whose expectation defines the KL divergence between proposal and target. When the proposal matches the target well, the exponent concentrates near zero and most proposals are accepted.
Appendix K: Continuous-Time Chains and Generators
K.1 Semigroup Theory
The family of transition matrices $\{P(t)\}_{t \ge 0}$ forms a Markov semigroup: $P(s + t) = P(s) P(t)$, $P(0) = I$, and $t \mapsto P(t)$ is continuous. The generator $Q = \lim_{t \downarrow 0} \frac{P(t) - I}{t}$ characterises the infinitesimal behaviour.
Exponential formula: $P(t) = e^{tQ}$, where the matrix exponential can be computed via eigendecomposition of $Q$. If $Q$ has eigenvalues $0 = \lambda_1 > \lambda_2 \ge \cdots \ge \lambda_N$ (all real and non-positive for reversible chains) with right/left eigenvectors $v_k, u_k$, then:
$$P(t) = \sum_{k=1}^{N} e^{\lambda_k t}\, v_k u_k^{\top}$$
Convergence to stationarity is at rate $e^{-\gamma t}$, where $\gamma = -\lambda_2$ is the spectral gap of $Q$.
K.2 Reversible CTMCs
A CTMC is reversible with respect to $\pi$ if $\pi_i Q_{ij} = \pi_j Q_{ji}$ for all $i \ne j$. The spectral theory for reversible CTMCs is identical to that for reversible DTMCs: real eigenvalues, orthogonal eigenfunctions, exponential convergence.
Detailed balance for CTMCs:
$$\pi_i Q_{ij} = \pi_j Q_{ji} \qquad \text{for all } i \ne j$$
This is the continuous-time analogue of discrete detailed balance. Metropolis-type dynamics in continuous time (as used in particle simulations) satisfy it by construction.
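A short check of the exponential formula using `scipy.linalg.expm` on an illustrative 3-state generator:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([
    [-1.0,  0.7,  0.3],
    [ 0.4, -0.9,  0.5],
    [ 0.2,  0.6, -0.8],
])                                    # rows sum to zero

for t in [0.1, 1.0, 10.0]:
    P_t = expm(t * Q)                 # P(t) = e^{tQ}
    print(t, P_t.sum(axis=1))         # rows sum to 1 for every t
print(expm(50 * Q))                   # all rows converge to the stationary distribution
```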
Appendix L: Theory Problems
Problem L.1. Prove that for a doubly stochastic matrix (rows and columns both sum to 1), the uniform distribution is stationary. Give an example showing the converse fails.
Problem L.2. Let $P$ be irreducible with stationary distribution $\pi$. Prove that $\pi_i > 0$ for all $i$ by constructing a path from any $j$ with $\pi_j > 0$ to $i$.
Problem L.3. (Coupling inequality) Let $P$ be ergodic. Construct a coupling of two copies of the chain, one started at $x$ and one at the stationary distribution $\pi$. Use the coupling inequality to prove $\|P^n(x, \cdot) - \pi\|_{TV} \le P(\tau_c > n)$.
Problem L.4. For the symmetric random walk on $\{0, 1, \dots, N\}$ with reflecting barriers, show the spectral gap is $\Theta(1/N^2)$ for large $N$. What is the mixing time?
Problem L.5. (Metropolis-Hastings correctness) Let $K$ be the MH transition kernel with proposal $q$ and target $\pi$. Show that $\pi(x)\,K(x, x') = \pi(x')\,K(x', x)$ (detailed balance), and conclude $\pi$ is stationary.
Problem L.6. For a birth-death chain on $\{0, 1, 2, \dots\}$ with constant birth rate $\lambda$ and death rate $\mu$, find the stationary distribution. For what values of $\lambda/\mu$ does the chain have a proper stationary distribution on the infinite state space?
Problem L.7. The lazy walk on a graph sets $P_{ii} = 1/2$ (stay with prob 1/2) and $P_{ij} = \frac{1}{2 d_i}$ for neighbours $j$ (where $d_i$ is the degree of $i$). Show the lazy walk has all non-negative eigenvalues and therefore converges monotonically to stationarity in TV distance.
Problem L.8. (PageRank convergence rate) For the damped PageRank matrix with damping $\alpha$, show all eigenvalues except the dominant one have magnitude $\le \alpha$. Conclude the PageRank power iteration converges in $O\!\left(\frac{\log(1/\epsilon)}{\log(1/\alpha)}\right)$ iterations.
Appendix M: Notation Summary
| Symbol | Meaning |
|---|---|
| $S$ | State space |
| $P_{ij}$ | Transition probability from state $i$ to state $j$ |
| $P^{(n)}_{ij}$ | $n$-step transition probability ($P^{(n)} = P^n$) |
| $\mu_n$ | Distribution at time $n$; row vector |
| $\pi$ | Stationary distribution; row vector with $\pi P = \pi$ |
| $f_i$ | Return probability to state $i$: $f_i = P(T_i < \infty \mid X_0 = i)$ |
| $m_i$ | Mean return time to state $i$; $m_i = \mathbb{E}[T_i \mid X_0 = i]$ |
| $d(i)$ | Period of state $i$: $\gcd\{n \ge 1 : P^{(n)}_{ii} > 0\}$ |
| $\lVert \mu - \nu \rVert_{TV}$ | Total variation distance between $\mu$ and $\nu$ |
| $t_{\mathrm{mix}}(\epsilon)$ | Mixing time to $\epsilon$-accuracy |
| $\gamma$ | Spectral gap: $1 - \lvert\lambda_2\rvert$ |
| $\Phi$ | Cheeger constant (conductance) |
| $Q$ | Generator matrix of a CTMC; $Q_{ij} \ge 0$ ($i \ne j$), $Q_{ii} = -\sum_{j \ne i} Q_{ij}$ |
| $P(t)$ | CTMC transition matrix at time $t$ |
| $\alpha(x, x')$ | MH acceptance probability |
| $\tau_c$ | Coupling time (coalescence time of two chains) |
| $\alpha_t(i)$ | HMM forward variable: $P(o_{1:t}, X_t = i)$ |
| $\delta_t(i)$ | Viterbi variable: $\max_{x_{1:t-1}} P(x_{1:t-1}, X_t = i, o_{1:t})$ |
Appendix N: Advanced ML Applications
N.1 RLHF as a Constrained MDP
Reinforcement learning from human feedback (RLHF) trains a language model policy $\pi_\theta$ to maximise:
$$\mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}[r(x, y)] - \beta\, D_{\mathrm{KL}}\!\left(\pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x)\right)$$
where $r$ is a learned reward model, $\pi_{\mathrm{ref}}$ is the reference (SFT) policy, and $\beta$ controls the KL penalty.
This is a constrained Markov chain optimisation problem: the state space is all possible prefixes (contexts), the action space is the vocabulary, and the KL divergence penalty ensures the learned chain doesn't deviate too far from the reference chain in transition structure.
The optimal policy has an explicit closed form:
$$\pi^*(y \mid x) = \frac{1}{Z(x)}\, \pi_{\mathrm{ref}}(y \mid x)\, \exp\!\left(\frac{r(x, y)}{\beta}\right)$$
where $Z(x)$ is a normalising constant. This is a Gibbs distribution with the (scaled) reward as negative energy - exactly the kind of target distribution MCMC is designed for.
Implication: The RLHF optimal policy can be sampled by constructing a Markov chain with the above stationary distribution. Recent work uses Langevin-type MCMC to directly sample from $\pi^*$ without RL.
N.2 Diffusion Sampling as Time-Reversed CTMC
The score-based generative modelling framework (Song et al., 2021) writes the forward noising process as a CTMC/SDE and generates by running the reversed process. The key identity is Anderson's reverse-time SDE:
$$dx = \left[f(x, t) - g(t)^2\, \nabla_x \log p_t(x)\right] dt + g(t)\, d\bar{W}_t$$
where $\bar{W}_t$ is a Brownian motion running backwards in time. The score function $\nabla_x \log p_t(x)$ supplies the drift correction that defines the reversed Markov process. Learning this score function via denoising score matching allows sampling by running the reverse process.
Discrete diffusion: For categorical distributions (text), the VP-SDE is replaced by a CTMC on the vocabulary with generator $Q_t$. MDLM and other discrete diffusion models use forward chains built from masked tokens, uniform transitions, or absorbing-state corruption, then learn the reverse chain.
N.3 MCMC for LLM Sampling
Standard autoregressive LLM decoding (top-$k$, nucleus) can be viewed as Markov chain sampling from a specific transition matrix. Recent speculative decoding methods (Chen et al., 2023; Leviathan et al., 2023) use a smaller "draft" model as a proposal and the large model as the acceptance criterion - closely analogous to independence Metropolis-Hastings:
- Proposal: draft token $x \sim q_{\text{draft}}(\cdot \mid \text{context})$
- Acceptance: accept $x$ with probability $\min\!\left(1, \frac{p_{\text{large}}(x)}{q_{\text{draft}}(x)}\right)$; on rejection, resample from the normalised residual $(p_{\text{large}} - q_{\text{draft}})_+$
- Guarantees: the accepted-or-resampled tokens have exactly the same distribution as large-model-only sampling
This is accept/reject machinery from MCMC applied to language model decoding - Markov chain theory directly yields the correctness of speculative decoding.
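A single-token sketch of the acceptance rule (the distributions `p` and `q` below are made-up stand-ins for the large and draft models): sample from the draft, accept with probability $\min(1, p(x)/q(x))$, and on rejection resample from the normalised residual $(p - q)_+$. The output distribution is exactly $p$.

```python
import numpy as np

def speculative_step(p, q, rng):
    x = rng.choice(len(q), p=q)                  # draft proposal
    if rng.random() < min(1.0, p[x] / q[x]):     # accept-reject against the target
        return x
    residual = np.maximum(p - q, 0)              # rejection: resample from (p - q)_+
    return rng.choice(len(p), p=residual / residual.sum())

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])                    # "large model" distribution (toy)
q = np.array([0.6, 0.2, 0.2])                    # "draft model" distribution (toy)
draws = [speculative_step(p, q, rng) for _ in range(100_000)]
print(np.bincount(draws) / len(draws))           # ~ [0.5, 0.3, 0.2] = p exactly
```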
N.4 Graph Neural Networks and Random Walks
Graph Neural Networks (GNNs) can be understood through the lens of Markov chains. A single GNN layer performs:
$$H^{(l+1)} = \sigma\!\left(\hat{A}\, H^{(l)}\, W^{(l)}\right)$$
for some normalised aggregation operator $\hat{A}$.
For the simple mean-aggregation GNN: $\hat{A} = D^{-1}A = P$, the random walk transition matrix. The aggregation step is applying $P$ to the feature matrix; $k$ layers of GNN = applying the random walk $k$ times - the receptive field grows as the $k$-step neighbourhood.
Over-smoothing: After many layers, all node features converge toward the same stationary average of the initial features (uniform weighting for regular graphs). This is over-smoothing - the GNN loses discriminative power because the Markov chain mixes. Mixing time bounds from Section6 directly bound the number of useful GNN layers.
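Over-smoothing is easy to see numerically: repeatedly applying $P = D^{-1}A$ to random node features collapses the per-feature spread across nodes. The small graph below is illustrative.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # small connected, non-bipartite graph
P = A / A.sum(axis=1, keepdims=True)        # random walk transition matrix
H = np.random.default_rng(0).normal(size=(4, 2))   # initial node features

for k in [1, 2, 8, 32]:
    Hk = np.linalg.matrix_power(P, k) @ H
    print(k, np.ptp(Hk, axis=0))            # spread across nodes shrinks as P^k mixes
```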
N.5 Token Position Encoding as Markov Chain
The self-attention mechanism with rotary position encoding (RoPE) can be viewed as computing a position-dependent Markov chain over tokens. Each attention head defines a stochastic matrix over token positions (the attention weights), and information flows along this chain.
Attention as ergodic averaging: In multi-head attention, each head computes $\mathrm{softmax}\!\left(QK^{\top}/\sqrt{d_k}\right)V$. The softmax weights form a row-stochastic matrix over token positions - a one-step Markov transition. The output is the expected value of $V$ under this distribution. Stacking attention layers is like composing Markov kernels.
Information bottleneck: The rank of the attention matrix limits the "state space" of the Markov chain. Low-rank attention (as in linear attention / Performer) corresponds to a Markov chain with restricted state space - mixing may be faster but with less expressivity.
Appendix O: Review Problems
Review Problem 1. Let $P = \begin{pmatrix} 1 - a & a \\ b & 1 - b \end{pmatrix}$ be a $2 \times 2$ stochastic matrix with $a, b \in (0, 1)$. (a) Is this chain irreducible? (b) Find the stationary distribution. (c) Starting from state 1, compute the distribution after 10 steps. (d) Compare to stationarity.
Review Problem 2. Consider a random walk on a cycle of length $N$ (states $0, \dots, N-1$; move clockwise or counterclockwise with probability $1/2$ each). (a) What is the stationary distribution? (b) What is the period? (c) Compute the mixing time using the spectral gap (eigenvalues: $\cos(2\pi k/N)$ for $k = 0, \dots, N-1$). (d) How does mixing time scale with $N$?
Review Problem 3. (MCMC) You want to sample from $\pi(x) \propto x^{k-1} e^{-x}$ on $x > 0$ (a Gamma distribution) using MH with log-normal proposal $x' \sim \mathrm{LogNormal}(\log x, \sigma^2)$. (a) Write down the acceptance ratio. (b) Is $q$ symmetric? (c) Will this chain have $\pi$ as stationary distribution?
Review Problem 4. (PageRank) The web has 3 pages: 1 links to 2 and 3; 2 links to 1; 3 links to 1. With damping $\alpha = 0.85$, write the PageRank transition matrix and find $\pi$ by solving a linear system. Which page has the highest PageRank?
Review Problem 5. (HMM) For an HMM with 2 hidden states, 2 observations, transition matrix $T$, emission matrix $E$, initial distribution $\pi_0$: (a) Compute the likelihood of a short observation sequence using the forward algorithm. (b) Find the Viterbi path.
Appendix P: Common Distributions in MCMC Targets
When $\pi(x) \propto e^{-U(x)}$, the gradient $\nabla U$ drives Langevin dynamics. Key shapes:
Gaussian: $U(x) = \frac{1}{2}(x - \mu)^{\top}\Sigma^{-1}(x - \mu)$. Gradient: $\nabla U(x) = \Sigma^{-1}(x - \mu)$. Langevin (MALA) mixes in $O(d^{1/3})$ steps; HMC in $O(d^{1/4})$ steps (advantage for ill-conditioned targets).
Logistic posterior: $U(\theta) = \sum_i \log\!\left(1 + e^{-y_i \theta^{\top} x_i}\right) + \frac{\lambda}{2}\lVert\theta\rVert^2$. Strongly log-concave (with the quadratic regulariser); SGLD mixes in polynomial time.
Mixture of Gaussians: $\pi(x) = \sum_k w_k\, \mathcal{N}(x; \mu_k, \Sigma_k)$. Multi-modal; standard MCMC can get stuck between modes. Parallel tempering or simulated annealing needed for good mixing.
Heavy-tailed: $U(x) = \frac{\nu + 1}{2}\log\!\left(1 + \frac{x^2}{\nu}\right)$ (Student-t). Sub-quadratic (logarithmic) growth of $U$ means the tails are heavy. HMC can struggle; NUTS with dual averaging handles this.
Boltzmann (energy-based models): $\pi(x) \propto e^{-E_\theta(x)}$ for a neural network energy $E_\theta$. Sampling requires running MCMC; contrastive divergence (CD-$k$) uses short MCMC chains as an approximation.
Appendix Q: Further Reading
- Levin, Peres, Wilmer - Markov Chains and Mixing Times (AMS, 2nd ed., 2017) - the definitive reference for mixing times and coupling; freely available online
- Norris, J.R. - Markov Chains (Cambridge, 1997) - rigorous treatment of discrete and continuous-time chains with clean proofs
- Brooks, Gelman, Jones, Meng - Handbook of Markov Chain Monte Carlo (CRC Press, 2011) - comprehensive MCMC reference; includes HMC, NUTS, diagnostics
- Betancourt, M. - "A Conceptual Introduction to Hamiltonian Monte Carlo" (arXiv, 2017) - outstanding intuitive treatment of HMC geometry
- Page et al. - "The PageRank Citation Ranking: Bringing Order to the Web" (Stanford Technical Report, 1999) - original PageRank paper
- Rabiner, L.R. - "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition" (Proc. IEEE, 1989) - classic HMM reference
- Song et al. - "Score-Based Generative Modeling through Stochastic Differential Equations" (ICLR 2021) - connects diffusion models to CTMC theory
- Welling & Teh - "Bayesian Learning via Stochastic Gradient Langevin Dynamics" (ICML 2011) - introduces SGLD for large-scale Bayesian inference
- Sutton & Barto - Reinforcement Learning: An Introduction (MIT Press, 2nd ed., 2018) - MDPs, Bellman equations, TD-learning; free online
Appendix R: Spectral Methods for Large-Scale Markov Chains
R.1 Lanczos Algorithm for Large Transition Matrices
For the transition matrices arising in PageRank (billions of web pages), direct eigendecomposition is infeasible. Power iteration and the Lanczos algorithm enable computing the top eigenvectors using only matrix-vector products.
Power iteration for the stationary distribution:
- Memory: $O(N)$ for the distribution vector
- Per-iteration cost: $O(\mathrm{nnz}(P))$ (number of non-zeros in $P$) for sparse matrices
- Convergence in $O\!\left(\frac{\log(1/\epsilon)}{\log(1/|\lambda_2|)}\right)$ iterations
For the web graph, $|\lambda_2| \le \alpha = 0.85$ (damping), so ~70 iterations give $\sim 10^{-5}$ precision.
Randomised SVD: For a low-rank approximation of $P$ (needed for long-range transition probability estimation), randomised methods can approximate the top-$k$ singular vectors in roughly $O(\mathrm{nnz}(P)\, k)$ time - much cheaper than full SVD.
R.2 Multi-Scale Markov Chains
For chains with multiple timescales (fast-mixing within clusters, slow-mixing between clusters), aggregation-disaggregation methods provide efficient algorithms:
- Aggregation: Lump the states within each cluster into a single meta-state. Compute the inter-cluster transition matrix $\bar{P}$.
- Disaggregation: Compute the within-cluster stationary distributions $\pi^{(c)}$ for each cluster $c$.
- Combine: $\pi_i \approx \bar{\pi}_c\, \pi^{(c)}_i$ for state $i$ in cluster $c$.
This decomposition is exact when the chain is "nearly lumpable" (within-cluster transitions are much faster than between-cluster). It's the foundation of multi-scale MCMC methods.
For AI: LLM attention patterns have multi-scale structure: attention heads attend to nearby tokens (fast mixing) and distant context (slow mixing). This multi-scale structure can be exploited for efficient long-context modelling.
R.3 Reversibilisation
Any Markov chain can be made reversible by considering the Doob $h$-transform or by multiplicative reversibilisation:
$$M = P P^{*}, \qquad (P^{*})_{ij} = \frac{\pi_j P_{ji}}{\pi_i}$$
where $P^{*}$ is the time-reversed kernel.
The resulting $M$ is reversible with the same stationary distribution $\pi$. Reversibilisation can speed up or slow down convergence depending on the chain - it destroys the directional bias that non-reversible chains exploit for faster mixing.
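A numerical sketch: build $P^*$ from $\pi$, form $M = P P^*$, and check detailed balance. The biased 3-cycle below is an illustrative non-reversible chain (doubly stochastic, so $\pi$ is uniform).

```python
import numpy as np

P = np.array([[0.0, 0.8, 0.2],
              [0.2, 0.0, 0.8],
              [0.8, 0.2, 0.0]])            # biased rotation: non-reversible

w, V = np.linalg.eig(P.T)                  # stationary distribution of P
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

P_star = (pi[None, :] * P.T) / pi[:, None]  # (P*)_ij = pi_j P_ji / pi_i
M = P @ P_star                              # multiplicative reversibilisation
flux = pi[:, None] * M
print(np.allclose(flux, flux.T))            # detailed balance holds for M
print(np.allclose(pi @ M, pi))              # same stationary distribution
```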
Appendix S: Continuous-State Markov Chains
S.1 Harris Chains
For Markov chains on continuous state spaces (like MCMC on $\mathbb{R}^d$), the discrete-state ergodic theory extends with modifications. A Markov chain with transition kernel $P(x, A)$ is:
- $\varphi$-irreducible if there exists a $\sigma$-finite measure $\varphi$ such that for any Borel set $A$ with $\varphi(A) > 0$, we have $\sum_n P^n(x, A) > 0$ for all $x$.
- Harris recurrent if it is $\varphi$-irreducible and returns to any set $A$ with $\varphi(A) > 0$ with probability 1.
- Positive Harris recurrent if the chain is Harris recurrent and has a unique invariant probability measure $\pi$.
A positive Harris recurrent, aperiodic chain is called a Harris ergodic chain, and the ergodic theorem holds: time averages converge to $\pi$-expectations.
For MCMC: The Metropolis-Hastings chain on $\mathbb{R}^d$ with a Gaussian proposal and a proper target density is Harris ergodic under mild regularity conditions. This justifies MCMC estimators for continuous posteriors.
S.2 Geometric Ergodicity
A Markov chain is geometrically ergodic if there exist a constant $\rho < 1$ and a function $M(x) < \infty$ such that:
$$\|P^n(x, \cdot) - \pi\|_{TV} \le M(x)\, \rho^n$$
Geometric ergodicity implies a central limit theorem for MCMC estimators:
$$\sqrt{T}\left(\frac{1}{T}\sum_{t=1}^{T} f(X_t) - \mathbb{E}_\pi[f]\right) \xrightarrow{d} \mathcal{N}\!\left(0, \sigma_f^2\right)$$
where $\sigma_f^2$ is the asymptotic variance (inflated by autocorrelation). This is the Markov chain CLT - it justifies asymptotic confidence intervals for MCMC estimates.
For AI: Geometric ergodicity of the Langevin algorithm for log-concave targets gives polynomial bounds on the number of gradient evaluations needed for $\epsilon$-accurate posterior estimates. This is the theoretical basis for Bayesian deep learning via SGLD.
Appendix T: Markov Chain Games and Nash Equilibria
T.1 Stochastic Games
A stochastic game (Shapley, 1953) is an MDP with multiple agents. Two-player zero-sum stochastic games have:
- State space $S$; action spaces $A_1, A_2$
- Transition $P(s' \mid s, a_1, a_2)$
- Reward $r(s, a_1, a_2)$ (player 1 maximises, player 2 minimises)
At a Nash equilibrium, each player's policy is a best response to the other's. The value function satisfies a minimax Bellman equation:
$$V(s) = \max_{\mu} \min_{\nu}\ \mathbb{E}_{a_1 \sim \mu,\, a_2 \sim \nu}\!\left[r(s, a_1, a_2) + \gamma \sum_{s'} P(s' \mid s, a_1, a_2)\, V(s')\right]$$
For AI: Multi-agent RL (MARL) with competing agents (e.g., AlphaGo, Libratus for poker) uses stochastic game theory. RLHF with adversarial reward models is a stochastic game between the policy and the worst-case reward model.
T.2 Markov Perfect Equilibrium
In dynamic games, a Markov perfect equilibrium (MPE) is a Nash equilibrium where strategies depend only on the current state (Markov property on the strategy). MPE is the game-theoretic analogue of the optimal policy in MDPs.
Computing MPE requires solving a system of coupled Bellman equations - one per player. For two-player zero-sum games, this reduces to linear programming (minimax theorem). For general-sum games, this is PPAD-complete.
Appendix U: Temporal Difference Learning and Martingales
Connecting back to Section06 Stochastic Processes Section3.7, the TD(0) algorithm is:
$$V(s_t) \leftarrow V(s_t) + \alpha_t \left[r_t + \gamma V(s_{t+1}) - V(s_t)\right]$$
The update direction $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$ is the TD error. Under the true value function $V^\pi$, $\mathbb{E}[\delta_t \mid \mathcal{F}_t] = 0$ (a martingale difference). TD learning is a stochastic approximation algorithm for finding the fixed point of the Bellman operator $T^\pi$.
Convergence proof via martingale theory: Write $e_t = V_t - V^\pi$ (error from the true value). The update then takes the stochastic-approximation form:
$$e_{t+1} = (I - \alpha_t A_t)\, e_t + \alpha_t M_t$$
where $M_t$ is a martingale difference noise term and $A_t$ encodes the expected contraction of the Bellman update. Under the Robbins-Monro conditions $\sum_t \alpha_t = \infty$, $\sum_t \alpha_t^2 < \infty$, the ODE method (Borkar, 2008) shows $V_t \to V^\pi$ a.s.
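A TD(0) sketch on a small Markov reward process, compared against the exact fixed point $V = (I - \gamma P)^{-1} r$; the chain, rewards, and step-size schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])
r = np.array([1.0, 0.0, 2.0])
gamma = 0.9
V_exact = np.linalg.solve(np.eye(3) - gamma * P, r)   # Bellman fixed point

V, s = np.zeros(3), 0
for t in range(1, 200_001):
    s_next = rng.choice(3, p=P[s])
    alpha = 10.0 / (100 + t)                          # Robbins-Monro step sizes
    td_error = r[s] + gamma * V[s_next] - V[s]        # delta_t
    V[s] += alpha * td_error
    s = s_next
print(V_exact)
print(V)                                              # close to V_exact
```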
Appendix V: Practical MCMC Checklist for AI/ML
Before running MCMC:
- Identify the target: is $\pi$ proper? Is $\log \pi$ (up to an additive constant) evaluable efficiently?
- Choose algorithm: log-concave -> Langevin/HMC; multimodal -> parallel tempering; discrete -> Gibbs
- Set proposal scale: tune to ~23% acceptance (MH) or ~65% (HMC)
- Run multiple chains (e.g., 4 or more) from diverse starting points
During MCMC:
- Monitor trace plots for signs of non-stationarity (trends, sticking)
- Track acceptance rate: should be in target range
- Compute $\hat{R}$ (Gelman-Rubin) after every 1000 iterations; stop when $\hat{R} < 1.01$
After MCMC:
- Discard burn-in (first 50% as rule of thumb)
- Compute ESS per parameter; target ESS in the hundreds for reliable estimates
- Check posterior predictive checks: do simulated data match observed data?
- Report: posterior mean, posterior std, 95% credible interval, ESS, $\hat{R}$
Common pathologies:
- Divergent transitions (HMC): posterior geometry has sharp ridges; reparameterise
- Low ESS: chain mixes slowly; retune the proposal scale, use HMC, or apply preconditioning
- $\hat{R} > 1.01$: chains haven't agreed; run longer, use better initialisation
- Bimodal trace plots: chain stuck between modes; use parallel tempering
Appendix W: Worked Computation - Two-State Chain
Full Analysis
Let $P = \begin{pmatrix} 1 - a & a \\ b & 1 - b \end{pmatrix}$ with $0 < a, b < 1$, starting from state 1, i.e. $\mu_0 = (1, 0)$.
Step 1: Eigenvalues. Characteristic polynomial: $(\lambda - 1)\left(\lambda - (1 - a - b)\right) = 0$. Eigenvalues: $\lambda_1 = 1$, $\lambda_2 = 1 - a - b$.
Step 2: Stationary distribution. From $\pi P = \pi$: $\pi_1 a = \pi_2 b$. With $\pi_1 + \pi_2 = 1$: $\pi = \left(\frac{b}{a+b}, \frac{a}{a+b}\right)$.
Step 3: $n$-step distribution. Using the spectral decomposition:
$$P^n = \Pi + (1 - a - b)^n\,(I - \Pi), \qquad \Pi = \mathbf{1}\pi = \begin{pmatrix} \frac{b}{a+b} & \frac{a}{a+b} \\ \frac{b}{a+b} & \frac{a}{a+b} \end{pmatrix}$$
Explicitly:
$$\mu_n = \mu_0 P^n = \pi + (1 - a - b)^n\,(\mu_0 - \pi)$$
So $\mu_n \to \pi$ geometrically at rate $|1 - a - b|^n$.
Step 4: TV distance. $\|\mu_n - \pi\|_{TV} = |1 - a - b|^n \cdot \frac{a}{a+b}$.
After a handful of steps the chain is essentially converged whenever $a + b$ is not tiny: the spectral gap is $\gamma = 1 - |1 - a - b|$, so mixing is fast.
Step 5: Verify detailed balance. $\pi_1 P_{12} = \frac{b}{a+b}\, a = \frac{ab}{a+b} = \frac{a}{a+b}\, b = \pi_2 P_{21}$.
The chain is reversible with this stationary distribution (as every two-state chain is).
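A quick numerical check of Steps 2-4, with illustrative values $a = 0.3$, $b = 0.1$:

```python
import numpy as np

a, b = 0.3, 0.1
P = np.array([[1 - a, a], [b, 1 - b]])
pi = np.array([b, a]) / (a + b)              # stationary distribution
mu = np.array([1.0, 0.0])                    # start in state 1

for n in range(1, 6):
    mu = mu @ P
    tv = 0.5 * np.abs(mu - pi).sum()
    print(n, tv, abs(1 - a - b) ** n * a / (a + b))  # the two columns agree
```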
Appendix X: Advanced Topics
X.1 Geometric Random Walks for Volume Computation
The ball walk and Gaussian walk are Markov chains used to estimate the volume of convex bodies. Starting from a point $x$ in a convex body $K$, the chain moves to a uniformly random point within a ball of radius $\delta$ centred at $x$ (rejecting moves that leave $K$). The stationary distribution is uniform on $K$.
Mixing time analysis: polynomial in the dimension $d$ for the ball walk (with suitable $\delta$ and preprocessing). This gives a polynomial-time randomised algorithm for volume computation (Dyer-Frieze-Kannan theorem).
For AI: High-dimensional sampling problems (Bayesian inference with millions of parameters) require understanding how mixing time scales with dimension. Geometric random walks provide the fundamental tools.
X.2 Markov Chain Tree Theorem
For a strongly connected directed graph with transition matrix $P$, the stationary distribution can be computed via spanning trees:
$$\pi_i \propto \sum_{T \in \mathcal{T}_i}\ \prod_{(j, k) \in T} P_{jk}$$
where $\mathcal{T}_i$ is the set of spanning trees directed toward node $i$ (arborescences rooted at $i$). This is the Matrix-Tree theorem (Kirchhoff 1847) extended to directed graphs.
Implication: The stationary distribution has a combinatorial formula in terms of the graph structure. This provides an alternative to eigendecomposition that can be faster for sparse graphs.
X.3 Lumping and Aggregation
A partition of $S$ is lumpable with respect to $P$ if for any two states $i, i'$ in the same class $C$, the aggregated transitions agree: $\sum_{j \in C'} P_{ij} = \sum_{j \in C'} P_{i'j}$ for all classes $C'$. In this case, the quotient chain on the classes is a well-defined Markov chain.
Lumping enables dimensionality reduction: replace a large chain with a smaller aggregated chain. The stationary distribution of the lumped chain aggregates that of the original.
For AI: Grouping similar states (e.g., tokens with similar embeddings) can define a lumped chain that captures the high-level dynamics of the LLM's token distribution, enabling efficient analysis of LLM behaviour at scale.
Appendix Y: Connections to Physics
Y.1 Statistical Mechanics and Gibbs Measures
The Gibbs distribution (Boltzmann distribution) $\pi(x) \propto e^{-H(x)/T}$ is the equilibrium distribution of a physical system with Hamiltonian $H$ at temperature $T$. This is the canonical target distribution for MCMC in physics.
The Metropolis algorithm was originally designed to sample from Gibbs distributions of particle systems (Metropolis et al., 1953). MCMC methods in ML are directly descended from statistical mechanics - including the energy-based models (EBMs) that predate modern deep learning.
Temperature annealing: Starting at high $T$ (flat, easy-to-sample distribution) and gradually cooling towards $T \to 0$ (greedy optimisation) is simulated annealing - a probabilistic optimisation algorithm. The connection between MCMC sampling and optimisation in the $T \to 0$ limit is the bridge between Bayesian inference and MAP estimation.
Y.2 Detailed Balance and Thermodynamic Equilibrium
Detailed balance in physics is called microscopic reversibility or time-reversal symmetry. A physical system is at thermodynamic equilibrium iff its microscopic dynamics satisfy detailed balance. Systems out of equilibrium (driven by energy flows) violate detailed balance and maintain directed probability currents.
Non-equilibrium steady states: Some biological systems (motor proteins, gene regulatory networks) maintain non-reversible Markov chains in steady state - driven by ATP hydrolysis. These are "out-of-equilibrium" in the thermodynamic sense. Non-reversible MCMC is inspired by this physics.
Y.3 Renormalization Group and Multi-Scale Chains
The renormalization group (Wilson, 1971) is a technique in physics for coarse-graining: replacing fine-grained microscopic degrees of freedom with effective coarse-grained ones. This is mathematically analogous to the lumping/aggregation of Markov chains.
In ML, knowledge distillation (training a small model to mimic a large model) is a form of renormalization: the small model's transition matrix approximates the "effective" dynamics of the large model at a coarser scale.
Appendix Z: Glossary
Absorbing state. A state with $P_{ii} = 1$; the chain stays forever once it arrives. Every absorbing state is recurrent.
Aperiodic. A state with period $d(i) = 1$. A chain is aperiodic if all states are aperiodic. Aperiodicity + irreducibility + positive recurrence = ergodicity.
Birth-death chain. A Markov chain on $\{0, 1, 2, \dots\}$ with only nearest-neighbour transitions ($P_{ij} = 0$ for $|i - j| > 1$). Always reversible; stationary distribution computed via detailed balance.
Chapman-Kolmogorov equations. $P^{(m+n)} = P^{(m)}P^{(n)}$: the $(m+n)$-step transition matrix factors as a product of $m$- and $n$-step matrices.
Communicating class. A maximal set of states with $i \leftrightarrow j$ for all pairs $i, j$ in the set. Partitions the state space into equivalence classes.
Conductance (Cheeger constant). $\Phi = \min_{S : \pi(S) \le 1/2} Q(S, S^c)/\pi(S)$, where $Q(S, S^c) = \sum_{i \in S, j \notin S} \pi_i P_{ij}$ is the probability flux from $S$ to $S^c$ at stationarity. Bounds: $\Phi^2/2 \le \gamma \le 2\Phi$.
Coupling. A joint process $(X_n, Y_n)$ where each marginal is a valid Markov chain with transition $P$. Used to bound TV distance via $\lVert P^n(x, \cdot) - \pi\rVert_{TV} \le P(\tau_c > n)$, where $\tau_c$ is the coalescence time.
CTMC. Continuous-time Markov chain. Holds in each state for an exponential time, then jumps. Specified by a generator matrix $Q$ with $P(t) = e^{tQ}$.
Detailed balance. $\pi_i P_{ij} = \pi_j P_{ji}$: probability flux is equal in both directions at stationarity. Sufficient (not necessary) for $\pi$ to be stationary.
Doubly stochastic matrix. A matrix where rows AND columns all sum to 1. Stationary distribution is uniform.
Embedded chain. The discrete-time chain of states visited by a CTMC (ignoring holding times). Jump probabilities $J_{ij} = Q_{ij}/q_i$ for $i \ne j$, where $q_i = -Q_{ii}$.
Ergodic chain. Irreducible + positive recurrent + aperiodic. Convergence to the unique stationary distribution is guaranteed.
Ergodic theorem. For an ergodic chain: $\frac{1}{n}\sum_{t=1}^{n} f(X_t) \to \mathbb{E}_\pi[f]$ a.s. Justifies MCMC estimation.
Fundamental matrix. $N = (I - Q)^{-1}$ for absorbing chains, where $Q$ is the sub-matrix of transitions between transient states. $N_{ij}$ = expected visits to state $j$ starting from $i$ before absorption.
Generator matrix (Q-matrix). $Q_{ij} \ge 0$ for $i \ne j$ (jump rates), $Q_{ii} = -\sum_{j \ne i} Q_{ij}$. Rows sum to zero. CTMC transition: $P(t) = e^{tQ}$.
Gibbs sampling. MCMC algorithm that updates one coordinate at a time from its full conditional distribution. Special case of MH with acceptance ratio 1.
Harmonic function. $h(i) = \sum_j P_{ij} h(j)$: $h$ equals its own expected value one step ahead. The process $h(X_n)$ is a martingale.
HMC (Hamiltonian Monte Carlo). MCMC using gradient information to make long-range moves. Augments the state with momentum; uses leapfrog integration to simulate Hamiltonian dynamics.
HMM (Hidden Markov Model). A Markov chain of hidden states with observations emitted from the current state. Algorithms: forward, backward, Viterbi, Baum-Welch.
Irreducible chain. Every state is accessible from every other: for all $i, j$ there is an $n$ with $P^{(n)}_{ij} > 0$. Ensures no trapping in subsets.
Lazy chain. $P_{\text{lazy}} = \frac{1}{2}(I + P)$: stays put with prob 1/2, otherwise transitions. Aperiodic by construction; eigenvalues non-negative.
Lumpable partition. A partition where all states in the same class have identical transition probabilities to every other class. Enables dimensionality reduction.
Markov chain. A sequence of random variables where the future depends on the past only through the present: $P(X_{n+1} = j \mid X_0, \dots, X_n) = P(X_{n+1} = j \mid X_n)$.
Markov property. $X_{n+1} \perp (X_0, \dots, X_{n-1}) \mid X_n$: conditional independence of future from past given present.
MDP (Markov Decision Process). Extension of Markov chain with actions; the policy induces a Markov chain; the Bellman equation characterises the value function.
Metropolis-Hastings. MCMC algorithm: propose $x' \sim q(\cdot \mid x)$, accept with probability $\min\!\left(1, \frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)}\right)$. Correct by detailed balance.
Mixing time. $t_{\mathrm{mix}}(\epsilon) = \min\{n : \max_x \lVert P^n(x, \cdot) - \pi\rVert_{TV} \le \epsilon\}$. Bounded by $\frac{1}{\gamma}\log\frac{1}{\epsilon\,\pi_{\min}}$.
Null recurrent. Recurrent state with infinite mean return time ($m_i = \infty$). Common in countable chains (1D SRW); no normalisable stationary distribution.
PageRank. Stationary distribution of the random surfer Markov chain on the web graph. Computed by power iteration.
Period. $d(i) = \gcd\{n \ge 1 : P^{(n)}_{ii} > 0\}$. Period 1 = aperiodic.
Perron-Frobenius theorem. For primitive stochastic $P$: eigenvalue 1 is unique and dominant; unique stationary distribution $\pi$; $P^n \to \mathbf{1}\pi$ geometrically.
Positive recurrent. Recurrent state with finite mean return time ($m_i < \infty$). In a finite irreducible chain, all states are positive recurrent.
Reversible chain. Satisfies detailed balance; the time-reversed chain equals the forward chain. Equivalent to: $P$ is self-adjoint in $L^2(\pi)$.
SGLD. Stochastic gradient Langevin dynamics: SGD + Gaussian noise. Approximates Bayesian inference at scale.
Spectral gap. $\gamma = 1 - |\lambda_2|$ for reversible chains. Determines the mixing rate: $t_{\mathrm{mix}} \asymp 1/\gamma$ up to logarithmic factors.
Stationary distribution. $\pi$ with $\pi P = \pi$, $\pi_i \ge 0$, $\sum_i \pi_i = 1$. The equilibrium distribution; unique for ergodic chains.
Stochastic matrix. $P_{ij} \ge 0$ and $\sum_j P_{ij} = 1$ for all $i$. The transition matrix of a DTMC.
Transient state. State with return probability $f_i < 1$. The chain leaves and never returns with positive probability; $\sum_n P^{(n)}_{ii} < \infty$.
Total variation (TV) distance. $\lVert\mu - \nu\rVert_{TV} = \frac{1}{2}\sum_i |\mu_i - \nu_i| = \max_A |\mu(A) - \nu(A)|$. The mixing distance for Markov chains.
Viterbi algorithm. Dynamic programming for the most likely hidden state sequence in an HMM. Complexity $O(TN^2)$.
Appendix AA: Practice Problems with Solutions
AA.1 Absorption Probability Calculation
Problem. A gambler starts with \$3 and bets \$1 per round, winning each bet with probability $p < 1/2$ (for illustration take $p = 1/3$) and losing otherwise. Play stops at \$0 (ruin) or \$6 (goal). Find (a) the probability of reaching \$6 before \$0, (b) the expected duration of the game.
Solution. Let $q = 1 - p$, $N = 6$, $i = 3$. Using the absorption formula for biased walks with absorbing boundaries at $0$ and $N$:
(a)
$$P_3(\text{hit } 6 \text{ before } 0) = \frac{1 - (q/p)^3}{1 - (q/p)^6} = \frac{1}{1 + (q/p)^3} = \frac{1}{1 + 2^3} = \frac{1}{9} \approx 11\%$$
Only about an 11% chance of reaching the goal from the midpoint - the bias significantly hurts the gambler.
(b) Expected duration. For biased walks:
$$\mathbb{E}_i[\tau] = \frac{i}{q - p} - \frac{N}{q - p}\cdot\frac{1 - (q/p)^i}{1 - (q/p)^N}$$
With $p = 1/3$: $\mathbb{E}_3[\tau] = 9 - 18 \cdot \tfrac{1}{9} = 7$ rounds.
AA.2 PageRank Mini-Example with Computation
Problem. Three web pages: 1 links to {2,3}, 2 links to {3}, 3 links to {1}. Find PageRank with damping $\alpha = 0.85$.
Adjacency matrix:
$$A = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$$
Out-degrees: $(2, 1, 1)$.
Transition matrix without damping:
$$P = \begin{pmatrix} 0 & 1/2 & 1/2 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$$
Damped matrix ($\alpha = 0.85$): $\hat{P} = 0.85\, P + \frac{0.15}{3}\, J$, with $J$ the all-ones matrix.
Solving $\pi \hat{P} = \pi$ with $\sum_i \pi_i = 1$: $\pi \approx (0.388,\ 0.215,\ 0.397)$.
Page 3 has the highest PageRank - it is the target of both the path $1 \to 2 \to 3$ and the direct link $1 \to 3$.
AA.3 HMM Forward Algorithm Computation
Problem. HMM with 2 states H, L and 2 observations A, B. Take, for concreteness:
- Transition: $P(H \to H) = P(L \to L) = 0.7$, $P(H \to L) = P(L \to H) = 0.3$; initial $\pi_0 = (0.5, 0.5)$
- Emission: $P(A \mid H) = 0.8$, $P(B \mid H) = 0.2$; $P(A \mid L) = 0.3$, $P(B \mid L) = 0.7$
- Observation sequence: A, B
Forward algorithm:
$t = 1$, obs = A: $\alpha_1(H) = 0.5 \times 0.8 = 0.4$; $\alpha_1(L) = 0.5 \times 0.3 = 0.15$
$t = 2$, obs = B: $\alpha_2(H) = (0.4 \cdot 0.7 + 0.15 \cdot 0.3) \times 0.2 = 0.065$; $\alpha_2(L) = (0.4 \cdot 0.3 + 0.15 \cdot 0.7) \times 0.7 = 0.1575$
Likelihood: $P(A, B) = \alpha_2(H) + \alpha_2(L) = 0.2225$.
Posterior: $P(X_2 = H \mid A, B) = 0.065/0.2225 \approx 0.29$; $P(X_2 = L \mid A, B) \approx 0.71$.
Interpretation: Observing (A,B) - first high-emission then low-emission - suggests state L at time 2.
Appendix BB: Index of Key Theorems
| Theorem / Result | Where Proved | Key Implication |
|---|---|---|
| Perron-Frobenius | Section4.2, App. I.1 | Unique stationary distribution for ergodic chains |
| Ergodic theorem | Section3.5, App. E | Time averages converge to $\mathbb{E}_\pi[f]$; justifies MCMC |
| Chapman-Kolmogorov | Section2.3 | $n$-step transitions: $P^{(m+n)} = P^{(m)} P^{(n)}$ |
| Convergence theorem | Section4.3 | $P^n \to \mathbf{1}\pi$ for ergodic chains |
| Detailed balance stationarity | Section5.1, App. E.2 | Sufficient condition for stationarity; easier to verify |
| Spectral gap bound | Section6.3 | $t_{\mathrm{mix}}(\epsilon) \le \frac{1}{\gamma}\log\frac{1}{\epsilon\,\pi_{\min}}$ |
| Cheeger inequality | Section6.3, App. I.3 | $\Phi^2/2 \le \gamma \le 2\Phi$ |
| Coupling inequality | Section6.1, App. I.2 | $\lVert P^n(x,\cdot) - \pi\rVert_{TV} \le P(\tau_c > n)$ |
| MH correctness | Section8.2 | Detailed balance implies $\pi$ is stationary |
| MH acceptance optimality | Section8.2 | ~23% acceptance optimal in high dimensions |
| Markov chain CLT | App. S.2 | $\sqrt{T}(\bar{f}_T - \mathbb{E}_\pi[f]) \Rightarrow \mathcal{N}(0, \sigma_f^2)$ |
| Dirichlet problem / OST | App. A | Value function = expected boundary value |
| Mean return time formula | Section4.1 | $\pi_i = 1/m_i$; occupancy = reciprocal return time |
| Absorption formula | Section3.4 | $B = (I - Q)^{-1}R$; absorption probabilities from the fundamental matrix |
| Viterbi optimality | Section9.4 | DP gives the exact most likely path in $O(TN^2)$ |
Appendix CC: Mixing Times for Specific Chains
A reference table of known mixing time results for important chains:
| Chain | State Space | Mixing Time | Technique |
|---|---|---|---|
| Two-state chain | $\{0, 1\}$ | $\Theta\!\left(\frac{1}{a + b}\right)$ | Eigenvalue |
| Random walk on path | $\{0, \dots, N\}$ | $\Theta(N^2)$ | Eigenvalues |
| Random walk on cycle | $\mathbb{Z}_N$ | $\Theta(N^2)$ | Fourier analysis |
| Random walk on hypercube | $\{0, 1\}^n$ | $\Theta(n \log n)$ | Coupling (coupon collector) |
| Random walk on complete graph | $K_N$ | $O(1)$ | Spectral gap |
| Random walk on expander | $N$ vertices | $O(\log N)$ | Spectral gap |
| Glauber dynamics (Ising, high $T$) | $\{-1, +1\}^n$ | $O(n \log n)$ | Coupling or spectral gap |
| Glauber dynamics (Ising, $T < T_c$) | $\{-1, +1\}^n$ | exponential in $n$ | Phase transition barrier |
| MH on $\mathbb{R}^d$ (Gaussian RW) | $\mathbb{R}^d$ | $O(d)$ | Optimal scaling |
| HMC on $\mathbb{R}^d$ | $\mathbb{R}^d$ | $O(d^{1/4})$ | Leapfrog integrator analysis |
| PageRank (damping $\alpha$) | Web pages | $O\!\left(\frac{\log(1/\epsilon)}{\log(1/\alpha)}\right)$ | Second eigenvalue $\le \alpha$ |
Key observations:
- Dimension scaling: Random walk MH scales as $O(d)$ (diffusive), HMC as $O(d^{1/4})$ (much better)
- Phase transitions: Near critical temperature, MCMC for Ising model has exponential mixing time - a computationally hard phase
- Expander graphs: an $\Omega(1)$ spectral gap means $O(\log N)$ mixing - the best possible for non-trivial graphs
Appendix DD: Software Ecosystem
The following Python libraries implement the algorithms in this section:
MCMC and Probabilistic Programming:
- `PyMC` - full-featured Bayesian modelling with NUTS/HMC sampler
- `Stan` (via `cmdstanpy`/`pystan`) - the gold standard for Bayesian modelling; generates C++ code for fast sampling
- `NumPyro` - JAX-based; supports GPU-accelerated HMC and NUTS
- `blackjax` - modular MCMC library (JAX); easy to extend with custom kernels
Markov Chain Simulation:
- `networkx` - graph operations; random walks on graphs
- `numpy` - transition matrix operations; power iteration; eigendecomposition
Hidden Markov Models:
- `hmmlearn` - sklearn-compatible HMM; Baum-Welch training, Viterbi decoding
- `pomegranate` - probabilistic models including HMMs with GPU support
Reinforcement Learning (MDP):
- `gymnasium` (formerly OpenAI Gym) - RL environments as MDPs
- `stable-baselines3` - RL algorithms (PPO, SAC, TD3) implementing policy optimisation
Diffusion Models:
- `diffusers` (HuggingFace) - implements DDPM, DDIM, and score-based models as CTMCs
Appendix EE: Common Markov Chain Constructions
Ehrenfest Model: $N$ balls in two urns; at each step a uniformly random ball moves to the other urn. State = number of balls in urn 1. Birth-death chain; stationary distribution: Binomial($N$, $1/2$). Models diffusion/thermodynamic equilibrium.
Polya Urn: Start with $r$ red and $b$ blue balls; at each step draw a ball and return it together with another ball of the drawn colour. Path-dependent; NOT Markov without extended state. But the fraction of red balls forms a martingale.
Bernoulli-Laplace Diffusion: $2N$ balls total, $N$ red and $N$ blue, split between two urns of size $N$; at each step one ball from each urn is swapped. State = number of red in urn 1. Symmetric birth-death chain; used to model genetics and combinatorics.
Random Transposition Walk: On $S_n$ (the permutation group of $n$ elements), pick two positions uniformly and swap. State space: $S_n$ (size $n!$). Mixing time: $\Theta(n \log n)$. Card shuffling is analysed with the same tools (cf. the Bayer-Diaconis analysis of riffle shuffles).
Metropolis on Graphs: For any graph and target $\pi$, the Metropolis chain with uniform proposal on neighbours satisfies detailed balance with respect to $\pi$. Useful for sampling graph colourings, independent sets, etc.
Skip-free chains: Transition matrices with $P_{ij} = 0$ for $j > i + 1$ (can only go up by 1, but down arbitrarily). Examples: GI/M/1 queues. Efficient algorithms via matrix-analytic methods.
Appendix FF: Spectral Graph Theory Connection
FF.1 Normalized Laplacian and Random Walks
For an undirected graph with degree matrix $D$ and adjacency matrix $A$, the random walk transition matrix is $P = D^{-1}A$. The normalized Laplacian is:
$$\mathcal{L} = I - D^{-1/2} A D^{-1/2}$$
The eigenvalues of $\mathcal{L}$ are $0 = \nu_1 \le \nu_2 \le \cdots \le \nu_N \le 2$. The spectral gap of the random walk equals $\nu_2$ (the two operators are similar up to $D^{\pm 1/2}$).
Cheeger inequality for graphs: $\frac{h^2}{2} \le \nu_2 \le 2h$, where $h$ is the edge expansion of the graph.
Expander graphs: A $d$-regular expander has $\nu_2 \ge c > 0$ (bounded below independently of $N$). Examples: Ramanujan graphs achieve the optimal second adjacency eigenvalue $|\lambda_2(A)| \le 2\sqrt{d - 1}$. Random regular graphs are expanders with high probability.
FF.2 Graph-Based ML and Markov Chains
Graph attention networks: GAT attention coefficients (learnable) create a data-dependent transition matrix over the graph. The spectral properties of this attention-weighted adjacency matrix determine the GNN's ability to propagate information.
Label propagation: Semi-supervised learning via label propagation: $Y^{(t+1)} = \alpha P Y^{(t)} + (1 - \alpha) Y^{(0)}$, where $P$ is the random walk matrix, $Y$ holds label scores, $Y^{(0)}$ encodes the known labels, and $\alpha$ is a damping parameter. This is exactly the PageRank recursion applied to labels - convergence follows from the same Perron-Frobenius analysis.
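A label-propagation sketch on a 5-node toy graph with one labelled seed per class; the damping and graph are illustrative. The iteration is a contraction (factor $\alpha$), so convergence follows exactly as for damped PageRank.

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)         # random walk matrix
Y0 = np.zeros((5, 2))
Y0[0, 0] = 1.0                               # node 0 labelled class 0
Y0[4, 1] = 1.0                               # node 4 labelled class 1

Y, alpha = Y0.copy(), 0.8
for _ in range(200):
    Y = alpha * P @ Y + (1 - alpha) * Y0     # PageRank-style damped propagation
print(Y.argmax(axis=1))                      # [0 0 0 1 1]: nodes join the nearer seed
```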
Spectral clustering: The eigenvectors of the $k$ smallest eigenvalues of $\mathcal{L}$ embed the graph into $\mathbb{R}^k$ such that well-connected clusters map to nearby points. This exploits the fact that slow-mixing chains (small spectral gap) correspond to loosely connected clusters.
Appendix GG: Long-Range Dependencies and Markov Approximations
GG.1 Finite-Order Markov Chains
A $k$th-order Markov chain satisfies:
$$P(X_{n+1} \mid X_1, \dots, X_n) = P(X_{n+1} \mid X_{n-k+1}, \dots, X_n)$$
This can always be reduced to a first-order chain on the augmented state $(X_{n-k+1}, \dots, X_n)$, with state space size $|S|^k$.
Trade-off: Higher order captures more dependencies but exponentially increases the state space. N-gram language models are $(n-1)$th-order Markov chains on the vocabulary - trigram models (order 2) are common in classical NLP.
For transformers: A transformer with context length $L$ generates tokens as an effective $L$th-order Markov chain. The KV cache stores the sufficient statistics of the last $L$ tokens - first-order Markov in the augmented state space.
GG.2 Variable-Length Markov Chains (VLMC)
VLMCs use a context tree to specify the order dynamically: some contexts require deep history, others only shallow. The transition probability is $P(X_{n+1} \mid \mathrm{context}(X_{1:n}))$, where the length of the context depends on the recent symbols.
VLMCs provide compact representations of processes with heterogeneous memory. Related to suffix trees and compressed suffix arrays - efficient data structures for sequential data.
GG.3 Hidden Non-Markovian Processes
Many real processes are non-Markovian but become Markovian on an augmented state space. Examples:
- AR($p$) process: $X_n = \sum_{i=1}^{p} \phi_i X_{n-i} + \epsilon_n$. First-order Markov on the lag vector $(X_{n-p+1}, \dots, X_n)$.
- ARMA($p, q$): the moving-average component creates non-Markovian behaviour; the Markovian state space must include latent noise terms.
- LLM with KV cache: First-order Markov on the KV cache state; non-Markovian on just the last token.
Appendix HH: Excursion Theory and Renewal Processes
HH.1 Renewal Theory
A renewal process is a sequence of times $T_n = \xi_1 + \cdots + \xi_n$, where the $\xi_k$ are iid positive random variables (inter-renewal times). The counting process $N(t)$ counts renewals up to time $t$.
Renewal theorem: $N(t)/t \to 1/\mu$ a.s., where $\mu = \mathbb{E}[\xi_1]$ is the mean inter-renewal time. This is the law of large numbers for renewal processes.
Connection to Markov chains: The times of visits to a fixed recurrent state $i$ form a renewal process with inter-renewal distribution = return-time distribution. The renewal theorem gives $\pi_i = 1/m_i$ (the mean return time formula).
HH.2 Excursions from a State
An excursion from state $i$ is the portion of the trajectory between two consecutive visits to $i$. Excursions are iid by the strong Markov property. The distribution of the excursion - its length, states visited, and path structure - encodes rich information about the chain.
For AI: In RLHF, the "excursions" of the LM's generation from topic to topic form a renewal-like structure. Understanding the statistics of these excursions (how long the LM stays on-topic, how it transitions between topics) is important for reward model design.
Appendix II: Summary Diagrams
MARKOV CHAIN THEORY SUMMARY
========================================================================
CHAIN PROPERTIES IMPLICATIONS
---------------- ------------
Irreducible ----------------------> Every state reachable from every other
+ Positive Recurrent --------------> Stationary distribution pi exists
+ Aperiodic -----------------------> pi unique; P^n -> 1*pi (convergence)
(= Ergodic)
Reversible -----------------------> Detailed balance pi_i P_ij = pi_j P_ji
Real eigenvalues; symmetric in L^2(pi)
MH, Gibbs, HMC all reversible
MIXING SPEED HIERARCHY
-----------------------
Spectral gap large ---------------> Fast mixing (t_mix = O(1/gap))
Cheeger constant large -----------> Large spectral gap (Cheeger inequality)
Non-reversible + directed ---------> Potentially faster than any reversible
HMC gradient steps ---------------> O(d^{1/4}) vs O(d) for random walk
COMPUTING pi
-----------
Small N: solve pi P = pi, sum_i pi_i = 1 (linear system)
Large N: power iteration pi^{k+1} = pi^k P (PageRank)
Continuous space: MCMC (Metropolis-Hastings, HMC)
Birth-death: explicit formula via detailed balance
ML APPLICATIONS
---------------
Language generation --------------> Markov chain on vocabulary (or context)
PageRank -------------------------> Stationary dist of web graph walk
MCMC -----------------------------> Bayesian posterior sampling
RL (MDP) -------------------------> Policy induces Markov chain; Bellman eqn
HMM ------------------------------> Latent Markov chain with observations
Diffusion models -----------------> Forward = CTMC; reverse = learned CTMC
GNNs -----------------------------> Message passing = transition matrix power
========================================================================