"A stochastic process is a mathematical model for a randomly evolving system. To understand modern AI is to understand how randomness unfolds over time."
- inspired by Doob, Stochastic Processes (1953)
Overview
A stochastic process is a collection of random variables indexed by time: $\{X_t\}_{t \in T}$. Where probability theory asks "what is the distribution of a single random outcome?", stochastic processes ask "how does a random system evolve?" This shift - from snapshots to trajectories - is essential for understanding modern AI.
The trajectory of SGD through a loss landscape is a stochastic process. The sequence of tokens generated by a language model is a stochastic process. The forward noising schedule of a diffusion model is a stochastic process. The value function estimates in reinforcement learning satisfy martingale properties. None of these can be understood through static probability alone.
This section builds the rigorous framework - filtrations, adapted processes, martingales, stopping times - and applies it to the canonical examples: random walks, Poisson processes, Brownian motion, and stationary Gaussian processes. Markov chains, which deserve their own dedicated treatment, are previewed here and developed fully in Section07 Markov Chains.
Prerequisites
- Section01 Introduction and Random Variables - probability spaces, random variables, measurability
- Section03 Joint Distributions - joint distributions, independence, conditional distributions
- Section04 Expectation and Moments - conditional expectation $\mathbb{E}[X \mid \mathcal{F}]$, MGF, tower property
- Section05 Concentration Inequalities - Azuma-Hoeffding, McDiarmid (Doob martingale used there)
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive derivations: random walks, martingale OST, Poisson process, Brownian motion, OU process, diffusion models, Gaussian processes, SGD as diffusion |
| exercises.ipynb | 10 graded exercises from martingale verification to RL TD-error martingale and GP posterior |
Learning Objectives
After completing this section, you will be able to:
- Define a stochastic process and classify it by time/state space type
- Define filtrations and adapted processes; interpret information accumulation formally
- Define martingales and verify the martingale property for standard examples
- State and apply Doob's Optional Stopping Theorem with correct conditions
- Construct the Doob martingale and connect it to McDiarmid's inequality
- Define the Poisson process via its three characterising properties and apply superposition/thinning
- Define Brownian motion via Wiener's axioms and compute quadratic variation
- Explain the Ornstein-Uhlenbeck process and its connection to regularisation
- Define strict and wide-sense stationarity, ergodicity, and the Wiener-Khinchin theorem
- Define Gaussian processes, specify the kernel as covariance function, and compute GP posteriors
- Explain how diffusion models implement a forward stochastic process and connect to Brownian motion
- Model SGD as a stochastic process and identify where martingale/near-martingale structure appears
Table of Contents
- 1. Intuition and Overview
- 2. Formal Framework
- 3. Martingales
- 4. Discrete-Time Random Walks
- 5. Poisson Processes
- 6. Brownian Motion
- 7. Stationary Processes and Ergodicity
- 8. Preview: Markov Chains
- 9. ML Deep Dive
- 10. Common Mistakes
- 11. Exercises
- 12. Why This Matters for AI (2026 Perspective)
- 13. Conceptual Bridge
1. Intuition and Overview
1.1 What Is a Stochastic Process?
A stochastic process is a family of random variables $\{X_t\}_{t \in T}$ defined on a common probability space $(\Omega, \mathcal{F}, \mathbb{P})$, indexed by a set $T$ called the index set or time set. For each fixed outcome $\omega \in \Omega$, the mapping $t \mapsto X_t(\omega)$ is called a sample path or realisation of the process.
The key conceptual shift from classical probability: rather than asking "what is the distribution of $X$?", we ask "how does the random system evolve over time?" This trajectory-level view captures phenomena that static distributions cannot: memory, feedback, path dependence, and accumulation of information.
Informal classification: The simplest stochastic process is a sequence of iid random variables - no memory, no dependence. More interesting processes have memory (the present depends on the past), structure (the conditional distributions have a specific form), or continuity (sample paths are almost surely continuous). The richest models - Brownian motion, diffusion processes - have all three.
For AI: Almost every quantity that evolves during training or inference is a stochastic process. The parameter vector $\theta_t$ after $t$ steps of SGD is a stochastic process. The hidden state $h_t$ of an RNN at position $t$ is a stochastic process. The sequence of tokens sampled from an LLM is a stochastic process. The noise level in a diffusion model forward pass is a stochastic process. Stochastic process theory gives us the mathematical language to reason about all of these rigorously.
1.2 The Central Examples
The five canonical stochastic processes appear throughout mathematics and AI:
- Simple Random Walk (SRW): $S_n = \sum_{i=1}^n X_i$ where $X_i \in \{-1, +1\}$ with probability $1/2$ each. Discrete time, discrete state space. Models fair gambling, particle diffusion in discrete space, and gradient noise accumulation.
- Poisson Process: $N_t$ = number of events in $[0, t]$, with $N_t - N_s \sim \text{Poisson}(\lambda(t - s))$ independent of the past. Continuous time, discrete (integer) state space. Models arrival streams, request queues, neuron spike trains.
- Brownian Motion (Wiener Process): $B_t$ with $B_0 = 0$, continuous paths, and increments $B_t - B_s \sim \mathcal{N}(0, t - s)$ independent of the past. Continuous time, continuous state space. The fundamental building block of continuous-time stochastic processes, and the forward process in diffusion models.
- Gaussian Process: A process where every finite collection $(X_{t_1}, \ldots, X_{t_k})$ is jointly Gaussian. Fully specified by its mean function $m(t)$ and covariance kernel $k(s, t)$. Foundation of Gaussian process regression (Bayesian nonparametrics) and closely related to neural network functions in the infinite-width limit.
- Markov Chain: $\{X_n\}$ where $\mathbb{P}(X_{n+1} \in A \mid X_n, \ldots, X_0) = \mathbb{P}(X_{n+1} \in A \mid X_n)$ - the past matters only through the present. Discrete time, arbitrary state space. The backbone of MCMC, reinforcement learning, and language model token generation. Full treatment: Section07 Markov Chains.
1.3 Why Processes Matter for AI
Stochastic process theory provides the theoretical foundation for several pillar areas of modern AI:
Diffusion models (DDPM, DDIM, Score SDE) define a forward process that progressively corrupts data with Gaussian noise - this is literally a discrete-time approximation to the Ornstein-Uhlenbeck SDE. The reverse process learned by the neural network is a time-reversed SDE, and training minimises a score-matching objective derived from Ito calculus.
SGD dynamics can be approximated by a continuous-time diffusion SDE: $d\theta_t = -\nabla L(\theta_t)\,dt + \sqrt{2T}\,dW_t$, where $T$ is the effective noise temperature, proportional to the learning rate divided by the batch size. This approximation (Mandt et al., 2017; Li et al., 2017) explains why SGD finds flat minima: the stationary distribution of the SDE concentrates in regions of low curvature.
Reinforcement learning uses the discounted return $G_t = \sum_{k \ge 0} \gamma^k R_{t+k+1}$, which is a function of the trajectory - a stochastic process. The temporal difference error $\delta_t = R_{t+1} + \gamma V(S_{t+1}) - V(S_t)$ forms a martingale difference sequence under the true value function, and TD-learning is an online martingale approximation algorithm.
Gaussian processes are used as priors over functions in Bayesian optimisation (tuning LLM hyperparameters), and neural tangent kernel theory shows that infinite-width neural networks converge to GPs. RoPE (Rotary Position Embedding) encodes relative position using a stationary kernel - exploiting the wide-sense stationarity machinery developed in Section 7 below.
1.4 Historical Timeline
| Year | Contributor | Result |
|---|---|---|
| 1900 | Louis Bachelier | Modelled stock prices as random walk (Brownian motion) in doctoral thesis |
| 1905 | Albert Einstein | Derived diffusion equation from random molecular motion |
| 1923 | Norbert Wiener | Constructed Brownian motion rigorously on path space |
| 1933 | Andrei Kolmogorov | Axiomatic probability; forward/backward equations for Markov processes |
| 1940 | Paul Levy | Characterised Levy processes; Levy-Khinchin representation |
| 1944 | Kiyosi Ito | Developed stochastic calculus (Ito integral, Ito's lemma) |
| 1953 | Joseph Doob | Martingale theory; optional stopping; convergence theorems |
| 1964 | Skorokhod | Embedding theorem; functional CLT (Donsker's theorem) |
| 1991 | Brockwell & Davis | Time series analysis unified with stochastic process theory |
| 2020 | Ho, Jain & Abbeel | Denoising diffusion probabilistic models (DDPM) |
| 2021 | Song et al. | Score-based generative models via SDEs |
| 2022 | Su et al. | RoPE: positional encoding as stationary kernel |
2. Formal Framework
2.1 Probability Spaces and Filtrations
The rigorous foundation of stochastic processes builds on probability spaces enhanced with a filtration - a formal model of accumulating information.
Definition (Filtered Probability Space). A filtered probability space is a quadruple $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \in T}, \mathbb{P})$ where:
- $(\Omega, \mathcal{F}, \mathbb{P})$ is a probability space
- $\{\mathcal{F}_t\}_{t \in T}$ is a filtration: an increasing family of sub-$\sigma$-algebras, $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ for all $s \le t$
Intuitively, $\mathcal{F}_t$ represents the information available at time $t$ - everything that has been observed up to and including time $t$. The filtration captures how information accumulates: knowing more over time means larger $\sigma$-algebras.
Definition (Adapted Process). A stochastic process $\{X_t\}$ is adapted to the filtration $\{\mathcal{F}_t\}$ if $X_t$ is $\mathcal{F}_t$-measurable for each $t$. Intuitively: the value of the process at time $t$ is determined by information available at time $t$ (not the future).
The natural filtration: Given a process $\{X_t\}$, the natural filtration is $\mathcal{F}_t^X = \sigma(X_s : s \le t)$ - the $\sigma$-algebra generated by the process up to time $t$. Every process is adapted to its natural filtration.
For AI: In online learning, $\mathcal{F}_t$ represents the information from the training examples seen up to step $t$. The model parameters $\theta_t$ are $\mathcal{F}_t$-measurable (they depend only on past data). A look-ahead bias - where $\theta_t$ depends on future data - violates adaptedness and invalidates PAC bounds.
In a causal language model, the token at position $t$ is sampled conditioned on all previous tokens - this is precisely the adaptedness condition. Non-causal (bidirectional) models like BERT use a different information structure, in which the conditioning $\sigma$-algebra includes information from both past and future.
Standard conditions (usual conditions): A filtration satisfies the usual conditions if:
- $\{\mathcal{F}_t\}$ is right-continuous: $\mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s$
- $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets (completeness)
These technical conditions ensure stopping times and martingales behave as expected. All filtrations in this section are assumed to satisfy the usual conditions.
2.2 Classification of Stochastic Processes
Processes are classified by the structure of their time set $T$ and state space $S$:
Taxonomy of stochastic processes:
| | Discrete state space $S$ | Continuous state space $S$ |
|---|---|---|
| Discrete time $T$ | Markov chains, HMMs, Galton-Watson trees | Random walks in $\mathbb{R}^d$, AR(p) models |
| Continuous time $T$ | Poisson processes, birth-death processes, counting processes | Brownian motion, OU process, diffusion SDEs |
Discrete time, discrete state: Markov chains, hidden Markov models (HMMs used in speech recognition), random walks on graphs. The transition dynamics are described by a stochastic matrix.
Discrete time, continuous state: Random walks in $\mathbb{R}^d$, AR/ARMA time series models, recurrent neural network hidden states. State space is $\mathbb{R}^d$ for some dimension $d$.
Continuous time, discrete state: Poisson processes, birth-death processes, queueing models. Transitions happen at random times; between transitions the state is constant.
Continuous time, continuous state: Brownian motion, Ornstein-Uhlenbeck process, general diffusion SDEs. These are the most mathematically sophisticated and most relevant to diffusion models and SGD dynamics.
2.3 Stopping Times
A stopping time (or optional time) formalises the idea of a "decision made using only available information."
Definition (Stopping Time). A random variable $\tau: \Omega \to T \cup \{\infty\}$ is a stopping time with respect to filtration $\{\mathcal{F}_t\}$ if $\{\tau \le t\} \in \mathcal{F}_t$ for all $t$. Equivalently, the decision "stop now" can be made using only information available at the current time.
Examples:
- $\tau = t_0$: constant time - trivially a stopping time
- $\tau = \inf\{t : X_t \ge a\}$ (first passage time): stopping time (the event $\{\tau \le t\}$ is determined by $\{X_s : s \le t\}$, hence $\mathcal{F}_t$-measurable)
- $\tau = \sup\{t \le T : X_t \ge a\}$ (last time above level $a$): not a stopping time (requires future knowledge)
The $\sigma$-algebra $\mathcal{F}_\tau$: For a stopping time $\tau$, the $\sigma$-algebra at $\tau$ is $\mathcal{F}_\tau = \{A \in \mathcal{F} : A \cap \{\tau \le t\} \in \mathcal{F}_t \text{ for all } t\}$. This represents "information available at the random time $\tau$."
For AI: Early stopping in neural network training is a stopping time: the rule "stop when validation loss hasn't improved for $k$ steps" depends only on past validation losses - it is $\mathcal{F}_t$-measurable. In contrast, "stop at the training step that achieves minimum test loss" is NOT a stopping time - it requires knowing future test losses.
In RL, $\tau$ = "the first time the agent reaches the goal state" is a stopping time. Doob's Optional Stopping Theorem (Section 3.3) characterises $\mathbb{E}[M_\tau]$ - the expected value of a martingale at this random time.
2.4 Sample Paths and Almost-Sure Properties
For each fixed , the sample path is an ordinary function of time. The regularity of sample paths (continuity, measurability, boundedness) is crucial for the applicability of theorems.
Path properties:
- Continuous paths: $t \mapsto X_t(\omega)$ is continuous for a.e. $\omega$. Example: Brownian motion.
- Cadlag paths (right-continuous with left limits): $t \mapsto X_t(\omega)$ is right-continuous with left limits for a.e. $\omega$. Example: Poisson process. Pronounced "cadlag" from French continu a droite, limite a gauche.
- Caglad paths: left-continuous with right limits. Useful for predictable integrands.
Why continuity matters: A process with continuous paths can be approximated by discrete-time processes as the mesh goes to zero - this is the content of Donsker's invariance principle (Section6.3) and justifies the SDE approximation of SGD. A process with only cadlag paths (like Poisson) has discontinuities that cannot be approximated this way.
Almost-sure properties: Properties that hold with probability one (a.s.) are the gold standard. For example, Brownian motion is a.s. nowhere differentiable - every sample path is continuous but has infinite variation on every interval. This non-differentiability is responsible for the $\sqrt{dt}$ scaling of noise in SDEs (Ito's formula) and explains why diffusion model noise scales as $\sqrt{t}$ rather than linearly in time.
3. Martingales
3.1 Definition and Intuition
The word martingale comes from a gambling strategy: double your bet after each loss. Mathematically, a martingale captures the notion of a "fair game" - the best prediction of the future value is the current value, given everything known so far.
Definition (Martingale). An adapted process $\{M_t\}$ with $\mathbb{E}|M_t| < \infty$ for all $t$ is a martingale with respect to filtration $\{\mathcal{F}_t\}$ if:
$$\mathbb{E}[M_t \mid \mathcal{F}_s] = M_s \quad \text{for all } s \le t.$$
The condition says: given all information up to time $s$, the best prediction of $M_t$ is $M_s$. The process neither drifts up nor down in conditional expectation.
Three types:
- Martingale: $\mathbb{E}[M_t \mid \mathcal{F}_s] = M_s$ - fair game
- Submartingale: $\mathbb{E}[M_t \mid \mathcal{F}_s] \ge M_s$ - tends upward on average (e.g., convex functions of martingales via Jensen)
- Supermartingale: $\mathbb{E}[M_t \mid \mathcal{F}_s] \le M_s$ - tends downward on average
Discrete-time version: $\{M_n\}$ is a martingale if $\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] = M_n$ for all $n$. This is equivalent to the martingale difference condition: $\mathbb{E}[M_{n+1} - M_n \mid \mathcal{F}_n] = 0$.
Immediate consequence (tower property): For any $t$, $\mathbb{E}[M_t] = \mathbb{E}\left[\mathbb{E}[M_t \mid \mathcal{F}_0]\right] = \mathbb{E}[M_0]$. So the mean of a martingale is constant: $\mathbb{E}[M_t] = \mathbb{E}[M_0]$ for all $t$.
3.2 Examples of Martingales
Example 1: Partial Sums of Zero-Mean iid. Let $X_1, X_2, \ldots$ be iid with $\mathbb{E}[X_i] = 0$. Then $S_n = \sum_{i=1}^n X_i$ is a martingale: $\mathbb{E}[S_{n+1} \mid \mathcal{F}_n] = S_n + \mathbb{E}[X_{n+1}] = S_n$.
Example 2: Products of Unit-Mean iid. Let $Y_1, Y_2, \ldots$ be iid with $Y_i > 0$ and $\mathbb{E}[Y_i] = 1$. Then $M_n = \prod_{i=1}^n Y_i$ is a martingale: $\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] = M_n\,\mathbb{E}[Y_{n+1}] = M_n$.
This is the likelihood ratio (Radon-Nikodym derivative) martingale - fundamental to sequential hypothesis testing and importance sampling.
Example 3: Conditional Expectations. For any integrable $Z$ and filtration $\{\mathcal{F}_n\}$, the process $M_n = \mathbb{E}[Z \mid \mathcal{F}_n]$ is a martingale (the Doob martingale). This follows directly from the tower property: $\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] = \mathbb{E}\left[\mathbb{E}[Z \mid \mathcal{F}_{n+1}] \mid \mathcal{F}_n\right] = \mathbb{E}[Z \mid \mathcal{F}_n] = M_n$.
Example 4: Polya Urn. An urn contains $r$ red and $b$ blue balls. At each step, draw a ball, record its colour, return it with one extra ball of the same colour. Let $M_n$ = fraction of red balls after step $n$. Then $\{M_n\}$ is a martingale. The limiting fraction satisfies $M_\infty \sim \text{Beta}(r, b)$.
Example 5: Squared SRW minus time. The simple random walk satisfies: $S_n^2 - n$ is a martingale (when $\mathbb{E}[X_i] = 0$, $\mathrm{Var}(X_i) = 1$): $\mathbb{E}[S_{n+1}^2 - (n+1) \mid \mathcal{F}_n] = S_n^2 + 1 - (n+1) = S_n^2 - n$.
For AI: The gradient of the expected loss minus the mini-batch gradient estimate is a martingale difference - the fundamental quantity that makes SGD unbiased. The accumulation of these martingale differences over training steps determines the SGD trajectory.
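A minimal numerical sketch of Examples 1 and 5 (not taken from the companion notebooks; sample sizes are illustrative): simulate many SRW paths and check that $\mathbb{E}[S_n]$ and $\mathbb{E}[S_n^2 - n]$ stay at zero, and that the one-step increment has conditional mean zero.

```python
# Sketch: empirical check of the martingale property for S_n and S_n^2 - n.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 100_000, 50
steps = rng.choice([-1, 1], size=(n_paths, n_steps))    # X_i = +/-1 w.p. 1/2
S = np.cumsum(steps, axis=1)                            # column j holds S_{j+1}

# Constant-mean consequences: E[S_n] = 0 and E[S_n^2 - n] = 0 for all n.
print("max |E[S_n]|       :", np.abs(S.mean(axis=0)).max())
print("max |E[S_n^2 - n]| :", np.abs((S**2 - np.arange(1, n_steps + 1)).mean(axis=0)).max())

# Conditional check: the next step has mean ~0 given the current value.
current = S[:, 19]                 # value after 20 steps (even parity)
next_step = S[:, 20] - S[:, 19]
for s in (-4, -2, 0, 2, 4):
    mask = current == s
    print(f"E[next step | current = {s:+d}] ~ {next_step[mask].mean():+.4f}")
```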
3.3 Doob's Optional Stopping Theorem
Doob's Optional Stopping Theorem (OST) is perhaps the most useful result in martingale theory. It answers: "what is the expected value of a martingale at a stopping time?"
Naively: Since $\mathbb{E}[M_n] = \mathbb{E}[M_0]$ for all $n$, one might expect $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$ for any stopping time $\tau$. This is false in general (e.g., the doubling strategy in gambling stops at the first win and locks in a sure profit from a fair game, because its stopping time and its bets are unbounded).
Theorem (Doob's Optional Stopping Theorem). Let $\{M_n\}$ be a martingale and $\tau$ a stopping time. Then $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$ provided at least one of the following holds:
- $\tau$ is bounded: $\tau \le c$ a.s. for some constant $c$
- $\mathbb{E}[\tau] < \infty$ and $|M_{n+1} - M_n| \le c$ a.s. for some constant $c$
- $\{M_{n \wedge \tau}\}_{n \ge 0}$ is uniformly integrable
Proof sketch (bounded case). Write $M_\tau = M_0 + \sum_{n=0}^{c-1} (M_{n+1} - M_n)\,\mathbf{1}\{\tau > n\}$. The indicator $\mathbf{1}\{\tau > n\}$ is $\mathcal{F}_n$-measurable (adaptedness of $\tau$), so by the martingale property each term has conditional expectation zero. Taking expectations, every term vanishes, leaving $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$.
Application: Gambler's Ruin. Symmetric random walk on $\{0, 1, \ldots, N\}$ with absorbing barriers. Start at $S_0 = k$. Let $\tau = \inf\{n : S_n \in \{0, N\}\}$.
Since $S_n$ is a martingale with bounded increments and $\mathbb{E}[\tau] < \infty$ (it can be shown $\mathbb{E}[\tau] = k(N - k)$), OST gives $\mathbb{E}[S_\tau] = k$, hence $N \cdot \mathbb{P}(S_\tau = N) = k$ and $\mathbb{P}(S_\tau = N) = k/N$.
Also: $S_n^2 - n$ is a martingale, so $\mathbb{E}[S_\tau^2] - \mathbb{E}[\tau] = k^2$, giving $\mathbb{E}[\tau] = N^2 \cdot \frac{k}{N} - k^2 = k(N - k)$.
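A minimal simulation sketch of the two OST predictions (values $k = 3$, $N = 10$ are illustrative, not prescribed by the text):

```python
# Sketch: verify P(hit N) = k/N and E[tau] = k(N-k) for the symmetric gambler's ruin.
import numpy as np

rng = np.random.default_rng(1)
k, N, n_games = 3, 10, 20_000
hits_N, durations = 0, []

for _ in range(n_games):
    s, t = k, 0
    while 0 < s < N:                      # play until absorbed at 0 or N
        s += rng.choice([-1, 1])
        t += 1
    hits_N += (s == N)
    durations.append(t)

print("P(hit N): simulated", hits_N / n_games, "| OST prediction", k / N)
print("E[tau]  : simulated", np.mean(durations), "| OST prediction", k * (N - k))
```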
3.4 Doob's Martingale Inequalities
Beyond Optional Stopping, Doob proved several fundamental inequalities that control the maximum of a martingale.
Theorem (Doob's Maximal Inequality). For a non-negative submartingale $\{M_n\}$ and $\lambda > 0$:
$$\mathbb{P}\left(\max_{k \le n} M_k \ge \lambda\right) \le \frac{\mathbb{E}[M_n]}{\lambda}.$$
Corollary (Doob's $L^2$ Inequality). For a martingale with $\mathbb{E}[M_n^2] < \infty$:
$$\mathbb{E}\left[\max_{k \le n} M_k^2\right] \le 4\,\mathbb{E}[M_n^2].$$
Theorem (Martingale Convergence Theorem, Doob 1953). Every $L^1$-bounded martingale (i.e., $\sup_n \mathbb{E}|M_n| < \infty$) converges almost surely to a finite limit $M_\infty$.
This convergence theorem is the rigorous foundation for why TD-learning and Q-learning converge under appropriate conditions - the value function updates form a bounded supermartingale under the optimal policy.
3.5 The Doob Martingale Construction
Definition. For a function $f(X_1, \ldots, X_n)$ of independent random variables, define:
$$M_k = \mathbb{E}\left[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_k\right], \quad k = 0, 1, \ldots, n.$$
Then $M_0 = \mathbb{E}[f]$, $M_n = f(X_1, \ldots, X_n)$, and $\{M_k\}$ is a martingale (by the tower property). This is the Doob martingale for $f$.
Connection to McDiarmid: The martingale differences are $D_k = M_k - M_{k-1}$. If $f$ has bounded differences $c_1, \ldots, c_n$, then $|D_k| \le c_k$ a.s. Applying Azuma's inequality to the martingale $\{M_k\}$:
$$\mathbb{P}\left(|f - \mathbb{E}[f]| \ge t\right) \le 2\exp\left(-\frac{2t^2}{\sum_{k=1}^{n} c_k^2}\right).$$
This is exactly McDiarmid's inequality - it is Azuma applied to the Doob martingale. The Doob construction thus unifies martingale theory with the concentration inequalities of Section05.
For AI: The empirical risk is a function of $n$ independent training examples, each contributing a bounded difference of order $1/n$ (for a bounded loss). The Doob martingale interpolates from $M_0 = \mathbb{E}[\hat{R}_n]$ to $M_n = \hat{R}_n$, and Azuma gives the McDiarmid generalisation bound $\mathbb{P}\left(|\hat{R}_n - \mathbb{E}[\hat{R}_n]| \ge t\right) \le 2e^{-2nt^2}$.
3.6 Supermartingales and Submartingales
Supermartingales ($\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] \le M_n$) model processes that decrease in conditional expectation - losing bets, damped oscillators, optimisation algorithms.
Key example: For a Lyapunov function $V$ in optimisation, if $\mathbb{E}[V(\theta_{t+1}) \mid \mathcal{F}_t] \le V(\theta_t)$, then $V(\theta_t)$ is a "noisy supermartingale." Under appropriate noise control, the Martingale Convergence Theorem guarantees $V(\theta_t)$ converges a.s., implying convergence to a critical point.
Submartingales ($\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] \ge M_n$) arise as convex functions of martingales (by Jensen's inequality: $\varphi$ convex and $M_n$ a martingale imply $\varphi(M_n)$ is a submartingale). Examples: $|M_n|$, $M_n^2$, $e^{\lambda M_n}$ (for $\lambda > 0$).
For AI: The training loss trajectory is typically a supermartingale in the early phases of training (decreasing in expectation per step). Plateau phases correspond to near-martingale behaviour. The Doob decomposition theorem guarantees any submartingale can be written as $X_n = M_n + A_n$ where $M_n$ is a martingale and $A_n$ is a predictable increasing process - this decomposition underlies variance reduction in stochastic optimisation.
3.7 ML: SGD as a Near-Martingale
Consider the SGD update: $\theta_{t+1} = \theta_t - \eta\,\nabla \hat{L}_{B_t}(\theta_t)$, where $\hat{L}_{B_t}$ is the mini-batch loss. Define $\xi_t = \nabla \hat{L}_{B_t}(\theta_t) - \nabla L(\theta_t)$ = gradient noise.
Since each mini-batch is sampled independently of the past, $\mathbb{E}[\xi_t \mid \mathcal{F}_t] = 0$ - the gradient noise is a martingale difference sequence. The noise accumulation $M_t = \sum_{s < t} \xi_s$ forms a martingale.
The diffusion approximation (continuous-time limit): As the step size $\eta \to 0$ with $t = \eta n$ and the ratio $\eta / B$ fixed, the SGD trajectory converges in distribution to the SDE:
$$d\theta_t = -\nabla L(\theta_t)\,dt + \sqrt{\frac{\eta}{B}}\,\Sigma(\theta_t)^{1/2}\,dW_t,$$
where $\Sigma(\theta)$ is the gradient covariance matrix and $W_t$ is a Brownian motion in parameter space. The noise temperature $\eta / B$ (learning rate / batch size) controls the diffusion coefficient.
Variance reduction methods: SVRG (Stochastic Variance Reduced Gradient) and SAG (Stochastic Average Gradient) reduce $\mathrm{Var}(\xi_t)$ to make SGD closer to true gradient descent. They shrink the martingale noise by periodically computing the full gradient and using it as a control variate, so the residual noise scales with the distance to the reference point rather than staying $O(1)$. The convergence improvement from sublinear $O(1/t)$ rates (SGD) to linear convergence (SVRG, on strongly convex problems) directly reflects this martingale variance reduction.
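A minimal sketch of the martingale-difference property of gradient noise on a toy least-squares problem (data and batch sizes are illustrative): the mini-batch gradient noise averages to roughly zero, and its magnitude shrinks as the batch size grows.

```python
# Sketch: gradient noise xi = g_batch - g_full has mean ~0 (martingale difference)
# and scale ~ 1/sqrt(B) on a synthetic least-squares problem.
import numpy as np

rng = np.random.default_rng(2)
n, d = 10_000, 5
A = rng.normal(size=(n, d))
y = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
theta = rng.normal(size=d)                        # an arbitrary current iterate

def full_grad(theta):
    return A.T @ (A @ theta - y) / n

def batch_grad(theta, B):
    idx = rng.integers(0, n, size=B)
    resid = A[idx] @ theta - y[idx]
    return A[idx].T @ resid / B

g = full_grad(theta)
for B in (8, 64, 512):
    noise = np.array([batch_grad(theta, B) - g for _ in range(5_000)])
    print(f"B={B:4d}  ||E[xi]|| = {np.linalg.norm(noise.mean(0)):.4f}   "
          f"mean ||xi|| = {np.linalg.norm(noise, axis=1).mean():.3f}")
```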
4. Discrete-Time Random Walks
4.1 Simple Random Walk
Definition. The simple random walk (SRW) on $\mathbb{Z}$ is $S_n = \sum_{i=1}^n X_i$, where the $X_i$ are iid with $\mathbb{P}(X_i = +1) = \mathbb{P}(X_i = -1) = 1/2$ (so $\mathbb{E}[X_i] = 0$, $\mathrm{Var}(X_i) = 1$).
Distribution: $\mathbb{P}(S_n = k) = \binom{n}{(n+k)/2}\,2^{-n}$ for $k$ with the same parity as $n$. By the CLT: $S_n / \sqrt{n} \xrightarrow{d} \mathcal{N}(0, 1)$.
Key identity (Reflection Principle): The number of paths from $(0, a)$ to $(n, b)$ with $a, b > 0$ that touch or cross the $x$-axis equals the number of all paths from $(0, -a)$ to $(n, b)$. This gives the law of the running maximum:
$$\mathbb{P}\left(\max_{k \le n} S_k \ge m\right) = 2\,\mathbb{P}(S_n > m) + \mathbb{P}(S_n = m)$$
for integers $m \ge 0$ - approximately $2\,\mathbb{P}(S_n \ge m)$ for large $n$.
The arc-sine law: The last time the SRW is at 0 before time $2n$ has a distribution that concentrates near 0 and $2n$, not at $n$ as intuition suggests. Formally: if $L_{2n} = \max\{k \le 2n : S_k = 0\}$, then $L_{2n}/(2n)$ converges in distribution to the arcsine law with density $\frac{1}{\pi\sqrt{x(1-x)}}$ on $(0, 1)$.
Biased random walk: With $\mathbb{P}(X_i = +1) = p$, $\mathbb{P}(X_i = -1) = 1 - p$, the walk has drift $\mathbb{E}[X_i] = 2p - 1$. It is a submartingale if $p > 1/2$, a supermartingale if $p < 1/2$, and a martingale if $p = 1/2$. The exponential tilting $M_n = \left(\frac{1-p}{p}\right)^{S_n}$ is a martingale for any $p \in (0, 1)$.
4.2 Recurrence and Transience
Definition. A random walk is recurrent if it returns to its starting point with probability 1; transient if it escapes to infinity with positive probability.
Polya's Theorem (1921). The SRW on $\mathbb{Z}^d$ is:
- Recurrent for $d = 1$ and $d = 2$ (probability 1 of returning to the origin)
- Transient for $d \ge 3$ (positive probability of never returning)
Proof idea for $d = 1$: The expected number of returns to 0 is $\sum_n \mathbb{P}(S_{2n} = 0) = \sum_n \binom{2n}{n} 4^{-n} \approx \sum_n \frac{1}{\sqrt{\pi n}} = \infty$ by Stirling. Since the expected number of returns equals $u/(1-u)$, where $u$ is the return probability, an infinite expectation forces $u = 1$: the walk returns infinitely often.
Proof idea for $d = 3$: $\mathbb{P}(S_{2n} = 0) = O(n^{-3/2})$ by Stirling, which is summable, so by Borel-Cantelli the walk returns to the origin only finitely many times.
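A minimal simulation sketch of Polya's theorem (horizon and path counts are illustrative): over a finite horizon the 1D walk almost always revisits the origin, while a large fraction of 3D walks never do.

```python
# Sketch: fraction of SRW paths that return to the origin within a finite horizon,
# for dimensions d = 1, 2, 3 (each step moves +/-1 along a random coordinate axis).
import numpy as np

rng = np.random.default_rng(3)

def fraction_returning(d, n_steps=2_000, n_paths=2_000):
    axes = rng.integers(0, d, size=(n_paths, n_steps))
    signs = rng.choice([-1, 1], size=(n_paths, n_steps))
    pos = np.zeros((n_paths, d), dtype=int)
    returned = np.zeros(n_paths, dtype=bool)
    for t in range(n_steps):
        pos[np.arange(n_paths), axes[:, t]] += signs[:, t]
        returned |= ~pos.any(axis=1)       # flag paths currently back at the origin
    return returned.mean()

for d in (1, 2, 3):
    print(f"d={d}: fraction returning to 0 within 2000 steps:", fraction_returning(d))
```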
For AI: In gradient descent on a loss landscape with one flat direction (rank deficiency), the gradient is zero in that direction - the component of the iterate in the flat direction performs a random walk driven purely by gradient noise. With no restoring force, training can drift arbitrarily far in the flat direction, explaining parameter growth and the need for weight decay (which adds a restoring force, converting the free random walk into a mean-reverting, OU-like process with a stationary distribution).
4.3 Gambler's Ruin
Setup: A gambler starts with $k$ dollars and plays a fair game (win or lose \$1 per round with probability $1/2$ each), stopping upon reaching $N$ dollars (success) or $0$ (ruin).
Result (via OST): Using the martingales $S_n$ and $S_n^2 - n$:
- $\mathbb{P}(\text{reach } N \text{ before } 0) = k/N$, $\mathbb{P}(\text{ruin}) = 1 - k/N$
- $\mathbb{E}[\tau] = k(N - k)$ (expected duration)
Biased gambler ($p \neq 1/2$): Use the exponential martingale $M_n = \left(\frac{1-p}{p}\right)^{S_n}$:
$$\mathbb{P}(\text{reach } N \text{ before } 0) = \frac{1 - \left(\frac{1-p}{p}\right)^{k}}{1 - \left(\frac{1-p}{p}\right)^{N}}.$$
For AI: Gambler's ruin models early stopping in training: the training loss random-walks around a local minimum, and we stop when it either reaches a "good" threshold (success) or explodes (ruin). The expected stopping time motivates why learning rate schedules that reduce the step size as training progresses can reduce expected convergence time.
4.4 ML: Token Sequences as Random Walks
A language model generates tokens sequentially. Conditional on the context, each token selection is a draw from a distribution $p_\theta(x_t \mid x_{<t})$ - making the sequence a stochastic process. The log-probability of a sequence:
$$\log p_\theta(x_{1:T}) = \sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})$$
is a sum of log-probabilities - a random walk in log-space. The perplexity is the geometric mean of the inverse token probabilities, and the per-token negative log-likelihood concentrates around $H$, where $H$ is the entropy rate of the language.
Surprise accumulation: The cumulative surprise, centred at each step by the model's own predicted conditional entropy, is a martingale for a well-calibrated model (under the true distribution). If the model is miscalibrated, this centred surprise process is a submartingale (model too confident, underestimating entropy) or a supermartingale (model too diffuse). This connects language model calibration to martingale theory.
5. Poisson Processes
5.1 Definition and Characterisation
The Poisson process is the canonical continuous-time counting process - it counts the number of events in an interval.
Definition (Poisson Process). A right-continuous counting process $\{N_t\}_{t \ge 0}$ with $N_0 = 0$ is a Poisson process with rate $\lambda > 0$ if:
- Independent increments: For any $0 \le t_0 < t_1 < \cdots < t_k$, the increments $N_{t_1} - N_{t_0}, \ldots, N_{t_k} - N_{t_{k-1}}$ are mutually independent
- Stationary increments: $N_{t+s} - N_s \overset{d}{=} N_t$ for all $s, t \ge 0$
- Poisson distribution of increments: $N_t - N_s \sim \text{Poisson}(\lambda(t - s))$ for $s < t$
Equivalent characterisation via inter-arrivals: If $T_1, T_2, \ldots$ are the inter-arrival times between successive events, then $\{N_t\}$ is a Poisson process iff $T_i \overset{\text{iid}}{\sim} \text{Exponential}(\lambda)$.
The Bernoulli limit: A Poisson process can be approximated by a Bernoulli process: divide $[0, t]$ into $n$ intervals of length $t/n$ and place an event in each interval independently with probability $\lambda t / n$. As $n \to \infty$ with $\lambda t$ fixed, this converges to a Poisson process.
Martingale structure: $N_t - \lambda t$ is a martingale (the compensated Poisson process). Its compensator is $\lambda t$, so $(N_t - \lambda t)^2 - \lambda t$ is also a martingale.
5.2 Properties: Superposition and Thinning
Superposition: If $N^{(1)}_t \sim \mathrm{PP}(\lambda_1)$ and $N^{(2)}_t \sim \mathrm{PP}(\lambda_2)$ are independent, then $N^{(1)}_t + N^{(2)}_t \sim \mathrm{PP}(\lambda_1 + \lambda_2)$. The superposition of independent Poisson processes is Poisson with the sum of the rates.
Thinning (Colouring): Start with a Poisson process of rate $\lambda$. Independently colour each arrival red with probability $p$ and blue with probability $1 - p$. Then the red and blue counting processes are independent Poisson processes with rates $p\lambda$ and $(1-p)\lambda$.
Conditioning (uniform order statistics): Given $N_t = n$, the arrival times are distributed as the order statistics of $n$ iid $\mathrm{Uniform}(0, t)$ random variables. This is the conditional uniformity property - conditioning on the count makes the arrival times maximally spread.
Non-homogeneous Poisson process: Generalise by allowing a time-varying rate $\lambda(t)$: $N_t - N_s \sim \text{Poisson}\left(\int_s^t \lambda(u)\,du\right)$ with independent increments. The compensator is $\Lambda(t) = \int_0^t \lambda(u)\,du$ (the integrated intensity).
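A minimal simulation sketch of superposition and thinning (rates and horizon are illustrative), using the exponential inter-arrival characterisation above:

```python
# Sketch: build Poisson processes from exponential gaps, then check superposition
# and thinning rates empirically.
import numpy as np

rng = np.random.default_rng(4)

def poisson_arrivals(lam, T):
    """Arrival times of a rate-lam Poisson process on [0, T]."""
    gaps = rng.exponential(1 / lam, size=int(3 * lam * T) + 50)
    times = np.cumsum(gaps)
    return times[times <= T]

T, lam1, lam2, p = 1_000.0, 2.0, 3.0, 0.3
a1, a2 = poisson_arrivals(lam1, T), poisson_arrivals(lam2, T)

# Superposition: merged stream should have rate lam1 + lam2.
print("superposed rate :", (len(a1) + len(a2)) / T, " expected", lam1 + lam2)

# Thinning: keep each arrival of stream 1 independently with probability p.
kept = a1[rng.random(len(a1)) < p]
print("thinned rate    :", len(kept) / T, " expected", p * lam1)
print("thinned gap mean:", np.diff(kept).mean(), " expected", 1 / (p * lam1))
```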
5.3 Compound Poisson Process
Definition. Let $N_t$ be a Poisson process with rate $\lambda$ and $Y_1, Y_2, \ldots$ iid random variables (independent of $N$). The compound Poisson process is:
$$S_t = \sum_{i=1}^{N_t} Y_i.$$
Key moments:
- $\mathbb{E}[S_t] = \lambda t\,\mathbb{E}[Y]$, $\mathrm{Var}(S_t) = \lambda t\,\mathbb{E}[Y^2]$
- MGF: $M_{S_t}(s) = \exp\left(\lambda t\,(M_Y(s) - 1)\right)$
For AI: The total compute consumed by serving an LLM API follows a compound Poisson model: requests arrive as a Poisson process, and each request requires a random amount of compute (the "jump size" $Y_i$). Capacity planning uses compound Poisson theory to compute the required buffer size to handle peak demand with high probability.
5.4 ML: Event Streams and Token Timing
LLM inference event streams: In deployed LLMs, user requests arrive approximately as a Poisson process with rate $\lambda$ (requests/second). Under heavy load, the inter-arrival distribution determines queueing behaviour. If serving a request takes $1/\mu$ seconds on average, the system is stable when $\lambda < \mu$ (arrival rate < service rate) - an M/M/1 queue analysis using Poisson process theory.
KV-cache access patterns: In multi-head attention with prefix caching, the pattern of cache hits/misses follows a complex point process. Poisson approximation applies when individual hit probabilities are small and requests are approximately independent.
Continuous batching: Modern LLM serving (vLLM, TensorRT-LLM) uses continuous batching: new requests join an existing batch as others complete. The batch size at time is a birth-death process - a continuous-time Markov chain with Poisson arrivals and Exponential service times.
6. Brownian Motion
6.1 Definition and Wiener's Axioms
Brownian motion (the Wiener process) is the fundamental continuous-time continuous-path stochastic process. It is simultaneously the limit of rescaled random walks (Donsker's theorem) and the building block for all diffusion processes in modern AI.
Definition (Standard Brownian Motion). A stochastic process $\{B_t\}_{t \ge 0}$ is a standard Brownian motion (or Wiener process) if:
- $B_0 = 0$ a.s.
- Independent increments: For $0 \le t_0 < t_1 < \cdots < t_k$, the increments $B_{t_1} - B_{t_0}, \ldots, B_{t_k} - B_{t_{k-1}}$ are mutually independent
- Gaussian increments: $B_t - B_s \sim \mathcal{N}(0, t - s)$ for all $s < t$
- Continuous paths: $t \mapsto B_t$ is continuous a.s.
Existence: Wiener (1923) proved that such a process exists by constructing it on the space of continuous functions with the Wiener measure. The construction uses Levy's Haar wavelet representation and is one of the foundational results of modern probability.
Finite-dimensional distributions: For $0 < t_1 < \cdots < t_k$, the joint density of $(B_{t_1}, \ldots, B_{t_k})$ is:
$$f(x_1, \ldots, x_k) = \prod_{i=1}^{k} \frac{1}{\sqrt{2\pi(t_i - t_{i-1})}} \exp\left(-\frac{(x_i - x_{i-1})^2}{2(t_i - t_{i-1})}\right)$$
with $t_0 = 0$ and $x_0 = 0$.
Martingale properties: The following are all martingales:
- $B_t$ (mean zero, no drift)
- $B_t^2 - t$ (variance process)
- $\exp\left(\lambda B_t - \tfrac{\lambda^2 t}{2}\right)$ for any $\lambda$ (exponential martingale / Doleans-Dade)
6.2 Key Properties and Quadratic Variation
Self-similarity: $(B_{ct})_{t \ge 0} \overset{d}{=} (\sqrt{c}\,B_t)_{t \ge 0}$ for any $c > 0$. The process "looks the same" at any scale - it is fractal.
Time inversion: $(t\,B_{1/t})_{t > 0} \overset{d}{=} (B_t)_{t > 0}$ (extended by the value $0$ at $t = 0$).
Non-differentiability: Brownian motion is a.s. nowhere differentiable. Formally: for almost every $\omega$, the function $t \mapsto B_t(\omega)$ is not differentiable at any point. This is why stochastic calculus requires the Ito integral - the standard Riemann-Stieltjes integral does not apply.
Quadratic variation: The key property distinguishing Brownian motion from ordinary functions is its quadratic variation:
$$[B]_t = \lim_{\text{mesh} \to 0} \sum_{i} \left(B_{t_{i+1}} - B_{t_i}\right)^2 = t,$$
where the limit is over partitions of $[0, t]$ as the mesh $\to 0$. Informally: $(dB_t)^2 = dt$. This is the foundation of Ito's lemma and the $\sqrt{dt}$ scaling of stochastic noise.
The Levy characterisation: A continuous martingale $M_t$ with $M_0 = 0$ and quadratic variation $[M]_t = t$ is a standard Brownian motion. This characterises BM among all continuous martingales.
For AI: The quadratic variation formula explains why diffusion model noise at timestep $t$ scales as $\sqrt{t}$, not $t$. In DDPM, the forward process adds noise scaled by $\sqrt{1 - \bar{\alpha}_t}$, where $\bar{\alpha}_t = \prod_{s \le t} \alpha_s$ - a discretisation of BM's variance accumulation.
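A minimal simulation sketch of quadratic variation (grid size illustrative): the sum of squared increments of a Brownian path on $[0, 1]$ approaches $1$ as the partition is refined, while the sum of absolute increments (first variation) blows up.

```python
# Sketch: realised quadratic variation of a simulated Brownian path on [0, 1].
import numpy as np

rng = np.random.default_rng(5)
N = 2**20                                       # finest grid
dB = rng.normal(0.0, np.sqrt(1.0 / N), size=N)  # increments ~ N(0, dt)
B = np.concatenate([[0.0], np.cumsum(dB)])

for step in (2**14, 2**10, 2**4, 1):            # coarser -> finer partitions
    incr = B[::step][1:] - B[::step][:-1]
    print(f"{len(incr):>8d} intervals: quad. variation = {np.sum(incr**2):.4f}, "
          f"first variation = {np.sum(np.abs(incr)):.1f}")
```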
6.3 Brownian Motion as a Limit
Theorem (Donsker's Invariance Principle, 1951). Let $X_1, X_2, \ldots$ be iid with $\mathbb{E}[X_i] = 0$ and $\mathrm{Var}(X_i) = 1$. Define the continuous interpolation:
$$W_n(t) = \frac{1}{\sqrt{n}}\left(S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor)\,X_{\lfloor nt \rfloor + 1}\right), \quad t \in [0, 1].$$
Then $W_n \Rightarrow B$ (convergence in distribution on $C[0, 1]$ with the uniform topology), where $B$ is standard Brownian motion.
This is the functional central limit theorem - it upgrades the CLT from convergence of individual random variables to convergence of entire paths. The SRW path, rescaled by $\sqrt{n}$ in space and $n$ in time, converges to Brownian motion.
Implications: Any result proved for Brownian motion has a discrete-time analogue for random walks. This transfers insights between the continuous-time theory (SDEs, Ito calculus) and the discrete-time practice (mini-batch SGD, RL).
6.4 Geometric Brownian Motion
Definition. A process $S_t$ satisfies geometric Brownian motion if:
$$dS_t = \mu S_t\,dt + \sigma S_t\,dB_t.$$
By Ito's lemma applied to $\log S_t$:
$$d\log S_t = \left(\mu - \tfrac{\sigma^2}{2}\right)dt + \sigma\,dB_t.$$
So $S_t = S_0 \exp\left(\left(\mu - \tfrac{\sigma^2}{2}\right)t + \sigma B_t\right)$.
For AI: GBM models multiplicative noise, which appears in layer-normalisation dynamics. When a transformer residual stream is updated as $h_{l+1} = h_l + f_l(h_l)$, with update magnitude roughly proportional to $\|h_l\|$, the norm $\|h_l\|$ approximately follows GBM, explaining the norm growth observed in deep transformers without proper initialisation.
6.5 Ornstein-Uhlenbeck Process
The Ornstein-Uhlenbeck (OU) process is the unique stationary Gaussian Markov process - and the theoretical foundation for diffusion model forward processes.
Definition (OU SDE). The OU process satisfies:
$$dX_t = -\theta\,(X_t - \mu)\,dt + \sigma\,dB_t, \qquad \theta > 0.$$
The drift $-\theta(X_t - \mu)$ is a mean-reverting force pulling the process toward $\mu$.
Solution (via Ito's formula):
$$X_t = \mu + (X_0 - \mu)\,e^{-\theta t} + \sigma \int_0^t e^{-\theta(t - s)}\,dB_s.$$
Distributions: $X_t \sim \mathcal{N}\!\left(\mu + (X_0 - \mu)e^{-\theta t},\ \tfrac{\sigma^2}{2\theta}\left(1 - e^{-2\theta t}\right)\right)$
- Stationary distribution: $X_\infty \sim \mathcal{N}\!\left(\mu,\ \tfrac{\sigma^2}{2\theta}\right)$
- Autocorrelation (in stationarity): $\mathrm{Cov}(X_s, X_{s+h}) = \tfrac{\sigma^2}{2\theta}\,e^{-\theta|h|}$ - exponential decay
For AI: The DDPM forward process is $x_t = \sqrt{1 - \beta_t}\,x_{t-1} + \sqrt{\beta_t}\,\epsilon_t$, a discretisation of mean-zero OU dynamics (after time reparametrisation). The variance-preserving SDE (VP-SDE) of Song et al. (2021) makes this correspondence exact:
$$dx = -\tfrac{1}{2}\beta(t)\,x\,dt + \sqrt{\beta(t)}\,dW_t,$$
with $\beta(t)$ the continuous-time noise schedule. The OU process provides the continuous-time foundation for all modern diffusion models.
L2 regularisation as OU: L2 regularisation in gradient descent adds a restoring force proportional to the negative of the current parameter to the update, giving a discrete-time OU process for the parameters. The stationary distribution concentrates around $0$ with variance proportional to (noise / regularisation strength), explaining why stronger L2 keeps parameters smaller.
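A minimal Euler-Maruyama sketch of the OU process (parameter values illustrative): simulate many paths and compare the empirical stationary moments with $\mathcal{N}(\mu, \sigma^2/2\theta)$.

```python
# Sketch: Euler-Maruyama for dX = -theta*(X - mu) dt + sigma dB, stationary check.
import numpy as np

rng = np.random.default_rng(6)
theta, mu, sigma = 2.0, 1.0, 0.5
dt, n_steps, n_paths = 1e-3, 20_000, 2_000

X = np.full(n_paths, 5.0)                       # start far from the mean
for _ in range(n_steps):
    X += -theta * (X - mu) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print("empirical mean:", X.mean(), " theory:", mu)
print("empirical var :", X.var(),  " theory:", sigma**2 / (2 * theta))
```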
6.6 ML: Diffusion Models
The forward process (data -> noise): Given clean data $x_0$, DDPM defines:
$$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1 - \beta_t}\,x_{t-1},\ \beta_t I\right).$$
The marginal is $q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1 - \bar{\alpha}_t) I\right)$, where $\bar{\alpha}_t = \prod_{s \le t}(1 - \beta_s)$. This forward process is a discretised OU process.
The reverse process (noise -> data): By Bayes' theorem and the Gaussian Markov property, $q(x_{t-1} \mid x_t, x_0)$ is Gaussian with mean and variance available in closed form. The network learns to predict the score $\nabla_{x_t} \log q(x_t)$ (or equivalently, the noise $\epsilon$), enabling the reverse process to denoise step by step.
Score matching: The training objective minimises:
$$\mathbb{E}_{t,\,x_0,\,\epsilon}\left[\left\|\epsilon - \epsilon_\theta\!\left(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1 - \bar{\alpha}_t}\,\epsilon,\ t\right)\right\|^2\right].$$
This is equivalent (up to weighting) to minimising the Fisher divergence between the model score and the true score, connecting to information-geometric stochastic process theory.
Continuous-time perspective (Score SDE, Song et al. 2021): The continuous limit is:
- Forward SDE: $dx = -\tfrac{1}{2}\beta(t)\,x\,dt + \sqrt{\beta(t)}\,dW_t$
- Reverse SDE: $dx = \left[-\tfrac{1}{2}\beta(t)\,x - \beta(t)\,\nabla_x \log p_t(x)\right]dt + \sqrt{\beta(t)}\,d\bar{W}_t$
The neural network approximates $\nabla_x \log p_t(x)$ - the score function - enabling the SDE to run backward from noise to data. This is the unified framework behind DDPM (discrete VP-SDE), DDIM (deterministic ODE), and Consistency Models.
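A minimal forward-process sketch (a generic DDPM-style iteration with an illustrative linear schedule, not any particular library's implementation): iterate the noising step and check that the marginal matches the closed form $\mathcal{N}\!\left(\sqrt{\bar{\alpha}_T}\,x_0,\ 1 - \bar{\alpha}_T\right)$.

```python
# Sketch: DDPM-style forward noising of a scalar data point, marginal check.
import numpy as np

rng = np.random.default_rng(7)
T = 200
betas = np.linspace(1e-4, 0.02, T)              # illustrative linear schedule
abar = np.cumprod(1.0 - betas)

x0, n_samples = 2.0, 100_000
x = np.full(n_samples, x0)
for t in range(T):
    x = np.sqrt(1.0 - betas[t]) * x + np.sqrt(betas[t]) * rng.normal(size=n_samples)

print("empirical mean:", x.mean(), " closed form:", np.sqrt(abar[-1]) * x0)
print("empirical var :", x.var(),  " closed form:", 1.0 - abar[-1])
```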
7. Stationary Processes and Ergodicity
7.1 Strict and Wide-Sense Stationarity
Definition (Strict Stationarity). A stochastic process $\{X_t\}$ is strictly stationary if for any $k$, any times $t_1, \ldots, t_k$, and any shift $h$:
$$(X_{t_1}, \ldots, X_{t_k}) \overset{d}{=} (X_{t_1 + h}, \ldots, X_{t_k + h}).$$
The finite-dimensional distributions are invariant under time shifts.
Definition (Wide-Sense Stationarity, WSS). A process is wide-sense stationary (or second-order stationary) if:
- $\mathbb{E}[X_t] = \mu$ (constant mean)
- $\mathrm{Cov}(X_t, X_{t+h})$ depends only on the lag $h$
Examples:
- Strictly stationary but not WSS: iid $t$-distributed noise with 1 degree of freedom (Cauchy; undefined variance)
- WSS but not strictly stationary: $X_t = \cos(t + U)$ with $U$ uniform on $\{0, \pi/2, \pi, 3\pi/2\}$ - WSS with mean 0 and $\mathrm{Cov}(X_t, X_{t+h}) = \tfrac{1}{2}\cos h$; not strictly stationary since the marginal distribution of $X_t$ changes with $t$ (each path is perfectly periodic with one of only four random phases)
- Both: Gaussian processes with stationary kernels; iid sequences
The autocovariance function $\gamma(h) = \mathrm{Cov}(X_t, X_{t+h})$ for a WSS process satisfies:
- Symmetry: $\gamma(h) = \gamma(-h)$ (real process)
- Positive semidefiniteness: $\sum_{i,j} a_i a_j\,\gamma(t_i - t_j) \ge 0$ for all finite sequences $(t_i)$ and coefficients $(a_i)$
For AI: A WSS process is characterised, up to second order, by $\mu$ and $\gamma(h)$ - enormously parsimonious compared to the general joint distribution. This is why WSS is the right model for relative position encodings in transformers.
7.2 Ergodicity and the Ergodic Theorem
Intuition: A process is ergodic if "time averages equal ensemble averages" - a single long trajectory captures the full statistical behaviour of the process.
Theorem (Birkhoff's Ergodic Theorem, 1931). For a stationary ergodic process $\{X_t\}$ with $\mathbb{E}|X_0| < \infty$:
$$\frac{1}{T}\int_0^T X_t\,dt \xrightarrow{\text{a.s.}} \mathbb{E}[X_0] \quad \text{as } T \to \infty.$$
For discrete time: $\frac{1}{n}\sum_{t=1}^{n} X_t \xrightarrow{\text{a.s.}} \mathbb{E}[X_1]$.
What "ergodic" means formally: A stationary process is ergodic if the only time-invariant events have probability 0 or 1 (the $\sigma$-algebra of invariant events is trivial). For stationary Gaussian processes this is equivalent to decay of correlations, $\gamma(h) \to 0$.
Non-ergodic example: $X_t = Z$ for all $t$, where $Z$ is drawn once and fixed. This is stationary (the distribution is constant in time) but not ergodic (the time average equals $Z$, not $\mathbb{E}[Z]$, for any single realisation).
For AI: SGD convergence analysis assumes that the gradient noise process is ergodic - the time average of gradient variance equals the ensemble average. If the data distribution shifts during training (non-stationarity), this fails. Continual learning methods address the resulting catastrophic forgetting, which is fundamentally a violation of the stationarity assumption.
7.3 Autocorrelation and Power Spectral Density
Definition. The autocovariance function (ACVF) is $\gamma(h) = \mathrm{Cov}(X_t, X_{t+h})$. The autocorrelation function (ACF) is $\rho(h) = \gamma(h)/\gamma(0)$, normalised so that $\rho(0) = 1$.
Examples of ACFs:
- iid white noise: $\rho(h) = \mathbf{1}\{h = 0\}$
- AR(1) process ($X_t = \phi X_{t-1} + \epsilon_t$, $|\phi| < 1$): $\rho(h) = \phi^{|h|}$ - geometric decay
- OU process: $\rho(h) = e^{-\theta|h|}$ - continuous-time exponential decay
- Seasonal process: periodic ACF
Theorem (Wiener-Khinchin, 1934). For a WSS process with absolutely summable autocovariance:
$$S(\omega) = \sum_{h=-\infty}^{\infty} \gamma(h)\,e^{-i\omega h}.$$
The power spectral density (PSD) is the Fourier transform of $\gamma$ and is non-negative: $S(\omega) \ge 0$ for all $\omega$.
Conversely, any non-negative integrable function is the PSD of some WSS process (Bochner's theorem). PSD decomposes the variance of a process by frequency: low-frequency components explain slow variation, high-frequency components explain rapid fluctuation.
For AI: The attention mechanism computes inner products $\langle q_i, k_j \rangle$. For a stationary input sequence, the second-order statistics of queries and keys are shift-invariant, so the expected attention score between positions $i$ and $j$ depends (via the input autocorrelation) only on $i - j$ - the relative position. This is the stationarity condition that relative position encodings (ALiBi, T5 bias, RoPE) exploit.
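A minimal sketch tying the AR(1) example to Wiener-Khinchin (parameters illustrative): compare the sample ACF with $\phi^{|h|}$ and a band-averaged periodogram with the theoretical PSD $S(\omega) = \sigma^2 / |1 - \phi e^{-i\omega}|^2$.

```python
# Sketch: ACF and PSD of a simulated AR(1) process vs. their theoretical forms.
import numpy as np

rng = np.random.default_rng(8)
phi, sigma, n = 0.8, 1.0, 200_000
eps = rng.normal(0, sigma, size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def sample_acf(x, h):
    xc = x - x.mean()
    return np.dot(xc[:-h], xc[h:]) / np.dot(xc, xc)

for h in (1, 2, 5, 10):
    print(f"ACF(h={h:2d}): sample {sample_acf(x, h):.3f}  theory {phi**h:.3f}")

freqs = np.fft.rfftfreq(n, d=1.0) * 2 * np.pi           # angular frequencies
periodogram = np.abs(np.fft.rfft(x - x.mean()))**2 / n   # noisy estimate of S(w)
S_theory = sigma**2 / np.abs(1 - phi * np.exp(-1j * freqs))**2
for idx in (100, 1_000, 10_000):
    band = slice(idx - 50, idx + 50)                     # average to reduce noise
    print(f"w={freqs[idx]:.3f}: periodogram {periodogram[band].mean():.2f}  "
          f"theory {S_theory[idx]:.2f}")
```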
7.4 Gaussian Processes
Definition. A stochastic process $\{f(x)\}_{x \in \mathcal{X}}$ is a Gaussian process (GP) if every finite collection $(f(x_1), \ldots, f(x_n))$ is jointly Gaussian. A GP is fully specified by:
- Mean function: $m(x) = \mathbb{E}[f(x)]$
- Covariance kernel: $k(x, x') = \mathrm{Cov}(f(x), f(x'))$
Notation: $f \sim \mathcal{GP}(m, k)$.
The kernel determines everything: Properties of the GP sample paths are determined by $k$:
- Smoothness: the Matern-$\nu$ kernel controls differentiability (paths are $k$ times differentiable when $\nu > k$)
- Length scale: the RBF/squared-exponential kernel $k(x, x') = \sigma^2 \exp\left(-\frac{\|x - x'\|^2}{2\ell^2}\right)$ has length scale $\ell$
- Periodicity: periodic kernels produce periodic processes
Common kernels:
| Kernel | Formula | Properties |
|---|---|---|
| RBF (squared-exponential) | $\sigma^2 \exp\left(-\frac{(x-x')^2}{2\ell^2}\right)$ | Infinitely differentiable |
| Matern-1/2 | $\sigma^2 \exp\left(-\frac{\lvert x-x'\rvert}{\ell}\right)$ | Continuous but not differentiable (= OU) |
| Matern-3/2 | $\sigma^2\left(1 + \frac{\sqrt{3}\lvert x-x'\rvert}{\ell}\right)\exp\left(-\frac{\sqrt{3}\lvert x-x'\rvert}{\ell}\right)$ | Once differentiable |
| Periodic | $\sigma^2 \exp\left(-\frac{2\sin^2(\pi\lvert x-x'\rvert/p)}{\ell^2}\right)$ | Exactly periodic |
| Linear | $\sigma^2\,x^\top x'$ | Bayesian linear regression |
GP Posterior (Bayesian inference): Given observations $y_i = f(x_i) + \epsilon_i$, $\epsilon_i \sim \mathcal{N}(0, \sigma_n^2)$, the posterior at a test point $x_*$ is Gaussian with
$$\mu_* = k_*^\top (K + \sigma_n^2 I)^{-1} y, \qquad \sigma_*^2 = k(x_*, x_*) - k_*^\top (K + \sigma_n^2 I)^{-1} k_*,$$
where $K_{ij} = k(x_i, x_j)$ and $(k_*)_i = k(x_i, x_*)$.
This is exact Bayesian inference in $O(n^3)$ time (dominated by the Cholesky decomposition of the kernel matrix).
For AI: Bayesian optimisation for hyperparameter search uses a GP prior over the loss surface. The acquisition function (UCB, EI) determines where to evaluate next, and the GP posterior updates after each observation. This is the principled method for tuning LLM training hyperparameters (learning rate, batch size, warmup schedule, etc.) when each evaluation is expensive.
Neural tangent kernel (NTK): An infinite-width neural network at initialisation is a GP with kernel determined by the architecture. During gradient descent with a small learning rate, the network stays approximately a GP (the lazy training regime). The NTK determines the learning dynamics via its Gram matrix evaluated on the training inputs.
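A minimal implementation sketch of the GP posterior formulas above (RBF kernel, Cholesky solve; data points and hyperparameters are illustrative):

```python
# Sketch: exact GP regression posterior with an RBF kernel.
import numpy as np

def rbf(a, b, length=0.3, var=1.0):
    d2 = (a[:, None] - b[None, :])**2
    return var * np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(9)
X_train = np.array([-1.5, -0.5, 0.0, 0.8, 1.7])
y_train = np.sin(2 * X_train) + 0.05 * rng.normal(size=X_train.size)
X_test = np.linspace(-2, 2, 7)
noise = 0.05**2

K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))   # (K + s^2 I)^{-1} y
K_s = rbf(X_train, X_test)

mu_post = K_s.T @ alpha
v = np.linalg.solve(L, K_s)
var_post = np.diag(rbf(X_test, X_test)) - np.sum(v**2, axis=0)

for x, m, s2 in zip(X_test, mu_post, var_post):
    print(f"x={x:+.2f}  posterior mean {m:+.3f}  std {np.sqrt(s2):.3f}")
```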
7.5 ML: RoPE and Stationarity in Attention
Wide-sense stationarity of attention: For a sequence of token embeddings , the attention matrix can be made to depend only on the relative position if we choose position encoding appropriately. This is equivalent to making the - inner product a stationary kernel.
RoPE (Rotary Position Embedding, Su et al. 2022): RoPE encodes position by rotating the query and key vectors:
$$q_m' = R_{\Theta, m}\,q_m, \qquad k_n' = R_{\Theta, n}\,k_n,$$
where $R_{\Theta, m}$ is a block-diagonal rotation by angles proportional to the position $m$.
The inner product satisfies:
$$\langle R_{\Theta, m}\,q,\ R_{\Theta, n}\,k \rangle = \langle R_{\Theta, m-n}\,q,\ k \rangle,$$
which depends only on $m - n$ - exactly the stationarity condition! RoPE makes the attention score a stationary kernel in position, enabling efficient relative position encoding.
ALiBi and linear bias: ALiBi (Press et al., 2022) adds $-m \cdot |i - j|$ to each attention logit, where $m$ is a per-head slope. The resulting attention scores are of the form $\langle q_i, k_j \rangle - m|i - j|$, where the bias term is stationary (translation-invariant). This enables extrapolation to longer sequences - a key advantage when the training distribution has stationarity structure.
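A minimal 2D sketch of the rotary stationarity property (a single rotation frequency, illustrative values; real RoPE applies many such rotations block-diagonally): the rotated inner product is unchanged when both positions are shifted by the same amount.

```python
# Sketch: rotated query/key inner product depends only on the offset m - n.
import numpy as np

def rot(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(10)
q, k = rng.normal(size=2), rng.normal(size=2)
theta = 0.05                                    # illustrative rotation frequency

def rope_score(m, n):
    return (rot(m * theta) @ q) @ (rot(n * theta) @ k)

# Same offset m - n = 7 at different absolute positions -> identical score.
print([round(rope_score(m, m - 7), 6) for m in (10, 50, 200, 1000)])
# Different offsets -> different scores.
print([round(rope_score(10, 10 - d), 6) for d in (1, 3, 7, 15)])
```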
8. Preview: Markov Chains
8.1 The Markov Property
A stochastic process $\{X_n\}$ satisfies the Markov property if:
$$\mathbb{P}(X_{n+1} \in A \mid X_n, X_{n-1}, \ldots, X_0) = \mathbb{P}(X_{n+1} \in A \mid X_n).$$
The past is irrelevant - the present state is a sufficient statistic for the future. Markov chains are the simplest non-trivial stochastic processes with memory.
Markov chains fit within the stochastic process framework developed here:
- They are adapted to their natural filtration
- The transition function $P(x, A) = \mathbb{P}(X_{n+1} \in A \mid X_n = x)$ defines a stochastic matrix in the finite-state case
- Many Markov chains carry martingale structure: $h(X_n)$ is a martingale if $h$ is harmonic for the transition operator ($Ph = h$)
8.2 Forward Reference
Full treatment: Section07 Markov Chains
That section covers: transition matrices, invariant/stationary distributions, detailed balance, recurrence/transience (Markov chain version), mixing times, spectral gap, Perron-Frobenius theorem, reversibility, MCMC (Metropolis-Hastings, Gibbs), and ML applications in language model generation, RL policy evaluation, and PageRank.
9. ML Deep Dive
9.1 SGD Trajectory as Stochastic Process
The parameter sequence produced by SGD is a discrete-time stochastic process with values in . Understanding its properties as a process - not just its endpoint - reveals why deep learning generalises.
The update rule as a random dynamical system:
$$\theta_{t+1} = \theta_t - \eta\,\nabla \hat{L}_{B_t}(\theta_t),$$
where $B_t$ is a random mini-batch drawn at step $t$.
Key stochastic process properties:
- $\theta_t$ is adapted to $\mathcal{F}_t = \sigma(B_0, \ldots, B_{t-1})$
- $\mathbb{E}\left[\nabla \hat{L}_{B_t}(\theta_t) \mid \mathcal{F}_t\right] = \nabla L(\theta_t)$ - unbiasedness means the noise is a martingale difference
- $\xi_t = \nabla \hat{L}_{B_t}(\theta_t) - \nabla L(\theta_t)$ is a martingale difference sequence
The diffusion approximation (Mandt et al., 2017): Under a small step size and a local quadratic approximation $L(\theta) \approx \tfrac12 (\theta - \theta^*)^\top H (\theta - \theta^*)$, the SGD dynamics converge to:
$$d\theta_t = -H(\theta_t - \theta^*)\,dt + \sqrt{\frac{\eta}{B}}\,\Sigma^{1/2}\,dW_t.$$
This is a multivariate OU process with mean-reversion matrix $H$ and noise scale $\frac{\eta}{B}\Sigma$. The stationary distribution is Gaussian, centred at $\theta^*$, with covariance $C$ solving $HC + CH = \frac{\eta}{B}\Sigma$.
A lower learning rate gives a smaller stationary variance, so SGD concentrates more tightly around $\theta^*$. The ratio $\eta / B$ determines the noise temperature.
Flat minima and generalisation: The stationary distribution of the SGD SDE concentrates in regions of small Hessian (flat minima). Fisher-Rao information connects to the curvature, and the entropy of the stationary distribution is large for flat minima. This provides a stochastic process explanation for why SGD with large learning rates prefers flat minima that generalise well.
9.2 RL Value Functions as Martingales
In reinforcement learning, the agent at time $t$ is in state $S_t$, takes action $A_t$, receives reward $R_{t+1}$, and transitions to $S_{t+1}$. The discounted return is:
$$G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}.$$
The Bellman equation: Under a fixed policy $\pi$, the value function satisfies:
$$V^\pi(s) = \mathbb{E}_\pi\left[R_{t+1} + \gamma V^\pi(S_{t+1}) \mid S_t = s\right].$$
Martingale characterisation: Define the TD error $\delta_t = R_{t+1} + \gamma V^\pi(S_{t+1}) - V^\pi(S_t)$ and $M_t = \sum_{k < t} \gamma^k \delta_k$. Under the true $V^\pi$:
$$\mathbb{E}[\delta_t \mid \mathcal{F}_t] = 0,$$
so $\{M_t\}$ is a martingale! The TD error is a martingale difference: TD-learning converges because it performs online martingale approximation - stochastic approximation theory guarantees convergence of martingale difference algorithms under Robbins-Monro conditions.
REINFORCE gradient: The policy gradient estimator $\hat{g} = \sum_t \nabla_\theta \log \pi_\theta(A_t \mid S_t)\,G_t$ is an unbiased estimator of $\nabla_\theta J(\theta)$. A baseline $b(S_t)$ reduces variance without introducing bias: $\mathbb{E}\left[\nabla_\theta \log \pi_\theta(A_t \mid S_t)\,b(S_t)\right] = 0$ since $\sum_a \nabla_\theta \pi_\theta(a \mid s) = 0$.
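A minimal sketch on an illustrative 3-state Markov reward process (transition matrix and rewards chosen here for the example): solve the Bellman equation exactly, run TD(0), and check that the TD error has conditional mean roughly zero when evaluated at the true value function.

```python
# Sketch: exact V, TD(0) estimate, and the martingale-difference check on delta.
import numpy as np

rng = np.random.default_rng(11)
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
r = np.array([1.0, 0.0, -1.0])     # expected reward on leaving each state
gamma = 0.9

V_true = np.linalg.solve(np.eye(3) - gamma * P, r)   # V = r + gamma P V

V, alpha = np.zeros(3), 0.05
td_err = {s: [] for s in range(3)}
s = 0
for _ in range(100_000):
    s_next = rng.choice(3, p=P[s])
    td_err[s].append(r[s] + gamma * V_true[s_next] - V_true[s])   # delta at V_true
    V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])             # TD(0) update
    s = s_next

print("V_true:", np.round(V_true, 3), " TD(0):", np.round(V, 3))
print("E[delta | s] at V_true:", [round(np.mean(td_err[s]), 4) for s in range(3)])
```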
9.3 Attention over Sequences
Attention can be viewed as a stochastic process operating over the token sequence:
Token sequence as discrete-time process: $x_1, x_2, \ldots, x_T$ with embeddings $e_t$. The attention mechanism at layer $\ell$ computes:
$$h_t^{(\ell)} = \sum_{s} \alpha_{ts}\,v_s, \qquad \alpha_{ts} = \mathrm{softmax}_s\!\left(\frac{\langle q_t, k_s \rangle}{\sqrt{d}}\right).$$
The hidden state is a weighted average over positions - a generalisation of the conditional expectation given the visible context.
Causal attention as filtration: In autoregressive generation, causal masking ensures $h_t$ depends only on $x_1, \ldots, x_t$ - maintaining the adaptedness condition with respect to the natural filtration of the sequence. Bidirectional attention (BERT) uses a "global" $\sigma$-algebra that is not filtered.
KV-cache as path memory: The key-value cache stores $(k_s, v_s)_{s \le t}$ - the full path of the process up to position $t$. This is the computational realisation of the filtration $\mathcal{F}_t$: at each new token, the model has access to the complete history.
9.4 Diffusion Model Schedules
Different noise schedules in diffusion models correspond to different continuous-time SDEs:
| Schedule | SDE | Marginal |
|---|---|---|
| Linear (DDPM) | $dx = -\tfrac12\beta(t)\,x\,dt + \sqrt{\beta(t)}\,dW$, $\beta(t)$ linear in $t$ | $x_t \sim \mathcal{N}\!\left(\sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)I\right)$ |
| Cosine | Same SDE, different $\beta(t)$ | Smoother transition |
| VE-SDE | $dx = \sqrt{\tfrac{d[\sigma^2(t)]}{dt}}\,dW$ | $x_t \sim \mathcal{N}\!\left(x_0,\ \sigma^2(t)I\right)$ |
| SubVP-SDE | $dx = -\tfrac12\beta(t)\,x\,dt + \sqrt{\beta(t)\left(1 - e^{-2\int_0^t \beta(s)\,ds}\right)}\,dW$ | Tighter likelihood bounds |
The reverse SDE (Anderson, 1982) is:
$$dx = \left[f(x, t) - g(t)^2\,\nabla_x \log p_t(x)\right]dt + g(t)\,d\bar{W}_t,$$
where $\bar{W}_t$ is a backward Brownian motion. The neural network is trained to estimate the score function, enabling the reverse process to convert noise into data.
Consistency Models (Song et al., 2023) learn to map any point on the reverse SDE trajectory directly to the final sample, bypassing the iterative nature of full diffusion. The key insight uses probability flow ODE theory - replacing the stochastic reverse SDE with a deterministic ODE that has the same marginals.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | "Any increasing family of sets is a filtration" | Filtration requires -algebras, not just sets. A family of sets with has no algebraic structure. | Verify each is a -algebra (closed under complements and countable unions) and for . |
| 2 | "If for all , then is a martingale" | Constant mean is necessary but not sufficient. Non-martingale with constant mean: with Rademacher steps (mean 0, but $\mathbb{E}[M_{n+1} | M_n] = -M_n \neq M_n$). |
| 3 | "OST applies to any stopping time" | Without boundedness or uniform integrability, . The doubling strategy ( = "first win") has and violates OST. | Verify one of the three OST conditions: bounded, + bounded differences, or UI. |
| 4 | "Brownian motion is differentiable since it's continuous" | Continuity does not imply differentiability. BM is a.s. nowhere differentiable (Wiener 1923). Its paths have infinite variation on any interval. | Use Ito calculus () for BM integrals, not classical Riemann-Stieltjes. |
| 5 | "The Poisson process has stationary paths" | Stationarity of increments \neq stationarity of the process itself. - the marginal distribution changes with ; is not stationary. | The compensated process is a martingale; stationarity applies to increments, not levels. |
| 6 | "Strict stationarity implies WSS" | Strict stationarity only implies WSS if the second moment exists. A process with at all times (iid) is strictly stationary but , so WSS is undefined. | Always verify second moment existence before claiming WSS from strict stationarity. |
| 7 | "WSS implies strict stationarity" | WSS is a second-moment condition. Higher-order moments may not be stationary. Example: an AR(1) with non-Gaussian innovations has the same ACVF regardless of the innovation distribution, but higher moments differ with time if innovations have time-varying skewness. | WSS -> strict stationarity ONLY for Gaussian processes (since Gaussians are determined by first two moments). |
| 8 | "The OU process converges to its mean" | The OU process converges to its stationary distribution , not to the constant . It fluctuates around with variance in steady state. | Distinguish convergence of the distribution (to stationary distribution) from convergence of the paths (which remain stochastic). |
| 9 | "Ergodicity means the process repeats" | Ergodicity means time averages equal ensemble averages. An ergodic process does not repeat its paths - it explores the full distribution over time. | Ergodic = time average converges to ensemble average. Periodicity (repeating paths) is a different and usually violated property. |
| 10 | "A GP with RBF kernel can model any function" | RBF GPs have infinitely smooth sample paths. Functions with discontinuities or kinks (like ReLU activations) cannot be well-approximated by RBF GPs. Matern-1/2 (OU) or Matern-3/2 kernels are better for rough functions. | Match kernel smoothness to the smoothness of the function being modelled. Use Matern family for non-smooth functions. |
| 11 | "DDPM forward process is Brownian motion" | DDPM uses a discretised, time-reversed, variance-preserving process. True BM has no mean reversion and linearly growing variance; DDPM's forward process is a discrete OU approximation with . | DDPM forward process \approx discrete VP-SDE (OU); the connection to BM is via Donsker's theorem in the limit , . |
11. Exercises
Exercise 1 * - Martingale Verification Let $X_i$ be iid with $\mathbb{E}[X_i] = 0$, $\mathrm{Var}(X_i) = 1$. Define $S_n = \sum_{i \le n} X_i$ and $M_n = S_n^2 - n$.
(a) Prove is a martingale with respect to its natural filtration. (b) Prove is a martingale. What does imply for a bounded stopping time ? (c) For , verify (a) and (b) numerically with and simulations. Check and . (d) Let for a fixed simulation. Verify and empirically.
Exercise 2 * - Doob's OST: Gambler's Ruin A fair gambler starts with $k$ dollars and plays until reaching $N$ dollars or going broke.
(a) Use OST applied to to derive . Compute numerically for , . (b) Use OST applied to to derive . Compute for the given values. (c) Simulate games and verify both and empirically. (d) For a biased game with , use the exponential martingale to derive . Verify numerically.
Exercise 3 * - Poisson Process Properties Let .
(a) Compute exactly and via simulation ( trials). (b) Show that is a martingale (verify and for ). (c) Thinning: Generate a Poisson process with rate . Thin independently with . Verify the thinned process is Poisson() by checking its inter-arrival distribution. (d) Superposition: Show that the superposition of and is empirically.
Exercise 4 ** - Doob Martingale and McDiarmid Let on with .
(a) Show has bounded differences: for each , so . (b) Construct the Doob martingale and compute and analytically. (c) Apply Azuma's inequality to bound and compare to McDiarmid's bound. (d) Verify both bounds numerically for , . Compute the empirical tail probability and compare to the bound.
Exercise 5 ** - Brownian Motion Properties Simulate on using steps.
(a) Verify the increment distribution: check that holds for several pairs using KS test. (b) Verify quadratic variation: compute for and show . (c) Verify non-differentiability heuristically: compute finite differences for and observe they grow as . (d) Simulate the OU process with Euler-Maruyama. Verify the stationary distribution is .
Exercise 6 ** - Gaussian Process Posterior Let with , .
(a) Draw 5 sample paths from the GP prior on using the covariance matrix. (b) Given observations at with , compute the GP posterior mean and variance. (c) Plot the posterior mean standard deviations and verify that all observation points are covered. (d) Compare the posterior using RBF kernel vs Matern-1/2 kernel. Which has smoother interpolation? Which has faster uncertainty growth away from observations?
Exercise 7 *** - SGD as Diffusion Process Minimise (quadratic, , ) using SGD with , .
(a) Show that the SGD iterates satisfy . Find the stationary distribution of . (b) Verify analytically that for small . (c) Simulate for , , steps. Plot the stationary distribution and compare to the analytical prediction. (d) Verify the diffusion approximation: the continuous-time SDE has stationary distribution . Compare to part (b).
Exercise 8 *** - RL TD-Error as Martingale Consider a Markov reward process: states , transition matrix , rewards , discount .
(a) Compute the true value function exactly. (b) Implement TD(0) to estimate : run episodes of length 50, update with . Compare to exact . (c) Verify the martingale property of the TD error: under the true , the TD error has . Verify numerically. (d) Implement variance-reduced version using baseline (mean reward). Show this reduces the variance of the gradient estimate without introducing bias.
12. Why This Matters for AI (2026 Perspective)
| Concept | AI/ML Impact |
|---|---|
| Filtrations and adaptedness | Causal vs. bidirectional attention; early stopping as stopping time; data leakage detection |
| Martingales | TD-learning convergence; REINFORCE gradient estimators; unbiased gradient noise in SGD |
| Optional Stopping Theorem | Early stopping criteria; gambler's ruin models of gradient explosion; RL episode termination |
| Doob Martingale | Unified proof of McDiarmid/Azuma; online learning regret bounds; generalisation theory |
| Poisson Process | LLM inference request queues; KV-cache hit modelling; continuous batching server design |
| Brownian Motion | Forward process in diffusion models; SGD noise continuous-time approximation; random feature maps |
| OU Process | DDPM/Score-SDE forward processes; L2 regularisation as mean reversion; Adam momentum as filtering |
| Geometric Brownian Motion | Residual stream norm growth; multiplicative noise in LoRA fine-tuning |
| Donsker's Theorem | Justifies SDE approximation of SGD; random walk -> BM limit for CLT on paths |
| Wide-Sense Stationarity | RoPE relative position encoding; ALiBi attention bias; translation-invariant kernels |
| Ergodic Theorem | Time-average = ensemble average in training; failure mode in continual learning |
| Gaussian Processes | Bayesian hyperparameter search; neural tangent kernel in infinite-width limit; GAN discriminator theory |
| GP Posterior | Uncertainty quantification in Bayesian deep learning; active learning for LLM fine-tuning data selection |
| Score SDE | Unified framework for DDPM, DDIM, consistency models, flow matching |
| Martingale CLT | Asymptotic normality of SGD parameter estimates; statistical inference on trained models |
13. Conceptual Bridge
Looking backward: This section builds directly on the static probability of Section01-Section05. Filtrations formalise conditional expectations (Section04) over time; the Doob martingale makes McDiarmid's inequality (Section05) a consequence of Azuma applied to a martingale; the Gaussian process is the stochastic-process version of the multivariate Gaussian (Section03). Every concept here has a static probability antecedent.
Looking forward: Stochastic processes enable the dynamic probability of Section07 and beyond. Markov chains (Section07) are stochastic processes satisfying the Markov property - the specialisation of the general framework to memoryless dynamics. Statistics (Section07-Statistics) uses time series models (AR, MA, ARIMA) which are stationary stochastic processes analysed via the autocorrelation and spectral density tools developed here. Optimisation (Section08) uses the SDE/martingale formalism to analyse SGD convergence and diffusion-based sampling.
The martingale as a unifying concept: The martingale is arguably the most powerful single concept in probability theory. It appears in: (a) concentration inequalities as the Doob/Azuma construction, (b) SGD as a near-martingale gradient process, (c) RL value functions as martingale transforms, (d) diffusion model reverse processes as time-reversed SDEs with martingale driving noise, (e) sequential hypothesis testing (SPRT) as a martingale stopping problem. Understanding martingales is understanding the probabilistic structure that makes machine learning algorithms work.
STOCHASTIC PROCESSES: POSITION IN CURRICULUM
========================================================================
STATIC PROBABILITY (Section01-Section05)
---------------------------------------------
Section01 Random Variables -> Section02 Distributions
Section03 Joint Dist. -> Section04 Expectation/MGF
Section05 Concentration -> McDiarmid = Doob + Azuma
---------------------------------------------
| filtrations, martingales, paths
v
+---------------------------------------------+
| Section06 STOCHASTIC PROCESSES (THIS SECTION) |
| ----------------------------------------- |
| Filtrations Martingales OST |
| Random Walks Poisson BM/OU |
| Stationarity GPs SGD/RL/Diff |
+---------------------------------------------+
|
+--------+--------+
v v
Section07 Markov Chains Section07-Stat/05 Time Series
(Markov property, (AR/MA/ARIMA, spectral
MCMC, PageRank, analysis, forecasting,
RL policy eval) seasonal decomposition)
|
v
Section08 Optimisation (SDE/martingale-based
convergence of SGD, Langevin dynamics,
diffusion-based sampling)
========================================================================
The AI synthesis: In 2026, the stochastic process framework permeates every major AI subfield. Diffusion models are explicit SDE constructions. RL convergence relies on martingale theory. LLM evaluation uses Hoeffding/CLT on sequences of i.i.d. samples (Poisson processes of test cases). Bayesian optimisation uses GP posteriors. Continual learning grapples with non-stationarity. The practitioner who understands stochastic processes sees the probabilistic skeleton underlying methods that otherwise appear unrelated.
This completes Section06 Stochastic Processes. The framework built here - filtrations, martingales, the canonical processes - is the common language of probability theory in motion.
<- Back to Chapter 6: Probability Theory | Next: Markov Chains ->
Appendix A: Stochastic Calculus Primer
A.1 The Ito Integral
Classical calculus integrates smooth functions. The Ito integral extends integration to Brownian motion, where the integrand is a stochastic process.
Why ordinary integrals fail: Define $\int_0^T B_t\,dB_t$ using Riemann sums. Two natural choices:
- Left-endpoint (Ito): $\sum_i B_{t_i}\left(B_{t_{i+1}} - B_{t_i}\right)$
- Midpoint (Stratonovich): $\sum_i B_{(t_i + t_{i+1})/2}\left(B_{t_{i+1}} - B_{t_i}\right)$
These give different limits as the mesh $\to 0$! This is unique to stochastic integration - in classical calculus all such Riemann sums converge to the same limit.
Computation:
$$\int_0^T B_t\,dB_t = \tfrac12 B_T^2 - \tfrac12 T \ \ \text{(Ito)}, \qquad \int_0^T B_t \circ dB_t = \tfrac12 B_T^2 \ \ \text{(Stratonovich)}.$$
The Ito integral gives an extra $-\tfrac12 T$ term from the quadratic variation $[B]_T = T$. This is the source of Ito's lemma.
The Ito integral is a martingale: For any adapted square-integrable integrand $H_t$, the process $M_t = \int_0^t H_s\,dB_s$ is a martingale,
with $\mathbb{E}\left[\left(\int_0^t H_s\,dB_s\right)^2\right] = \mathbb{E}\left[\int_0^t H_s^2\,ds\right]$ (Ito isometry).
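A minimal simulation sketch of the Ito/Stratonovich discrepancy (grid size illustrative): evaluate the two Riemann sums on a common Brownian path and compare with the closed forms above.

```python
# Sketch: left-endpoint vs. averaged-endpoint Riemann sums for int_0^T B dB.
import numpy as np

rng = np.random.default_rng(12)
T, N = 1.0, 2**18
dB = rng.normal(0.0, np.sqrt(T / N), size=N)
B = np.concatenate([[0.0], np.cumsum(dB)])       # B on the grid t_i = i*T/N

ito = np.sum(B[:-1] * dB)                          # left endpoint (Ito)
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)        # averaged endpoints (Stratonovich)

print("Ito sum         :", ito,   " closed form:", 0.5 * B[-1]**2 - 0.5 * T)
print("Stratonovich sum:", strat, " closed form:", 0.5 * B[-1]**2)
```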
A.2 Ito's Lemma
Theorem (Ito's Lemma). Let $X_t$ satisfy the SDE $dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dB_t$. For any sufficiently smooth function $f(x, t)$:
$$df = \left(\frac{\partial f}{\partial t} + \mu\,\frac{\partial f}{\partial x} + \frac{\sigma^2}{2}\,\frac{\partial^2 f}{\partial x^2}\right)dt + \sigma\,\frac{\partial f}{\partial x}\,dB_t.$$
The extra term $\frac{\sigma^2}{2}\frac{\partial^2 f}{\partial x^2}$ (the Ito correction) arises from the quadratic variation $(dB_t)^2 = dt$. It has no classical analogue.
Example: GBM. $dS_t = \mu S_t\,dt + \sigma S_t\,dB_t$. Apply Ito to $f(S) = \log S$:
$$d\log S_t = \left(\mu - \tfrac{\sigma^2}{2}\right)dt + \sigma\,dB_t.$$
The Ito correction converts the arithmetic drift $\mu$ to the geometric/log drift $\mu - \sigma^2/2$.
For AI: DDPM noise levels satisfy (linear SDE for ). The Ito correction explains why the SNR does not decrease linearly even for linear schedules - the correction term shifts the effective noise profile.
A.3 Stochastic Differential Equations
General Ito SDE: $dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dB_t$
- $\mu$: drift (deterministic tendency)
- $\sigma$: diffusion coefficient (noise strength)
Existence and uniqueness (Lipschitz conditions): If and are Lipschitz in uniformly in and have linear growth, the SDE has a unique strong solution (Picard-Lindelof for SDEs).
Important SDEs in ML:
| Process | SDE | Notes |
|---|---|---|
| BM | $dX_t = dB_t$ | $\mu = 0$, $\sigma = 1$ |
| OU | $dX_t = -\theta(X_t - \mu)\,dt + \sigma\,dB_t$ | Mean-reverting |
| GBM | $dX_t = \mu X_t\,dt + \sigma X_t\,dB_t$ | Multiplicative noise |
| Langevin | $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dB_t$ | Stationary $\propto e^{-U(x)}$ |
| SGD-SDE | $d\theta_t = -\nabla L(\theta_t)\,dt + \sqrt{\eta/B}\,\Sigma^{1/2}\,dW_t$ | SGD diffusion approx. |
| DDPM fwd | $dx = -\tfrac12\beta(t)\,x\,dt + \sqrt{\beta(t)}\,dW_t$ | VP-SDE for diffusion |
Langevin dynamics and MCMC: The Langevin SDE $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dB_t$ has stationary distribution $\pi(x) \propto e^{-U(x)}$. This is the continuous-time version of Langevin MCMC, used for sampling from complex posteriors (e.g., Bayesian neural networks, energy-based models).
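A minimal sketch of unadjusted Langevin dynamics for an illustrative double-well potential $U(x) = (x^2 - 1)^2$ (step size, burn-in, and thinning are illustrative; the discretisation carries an $O(dt)$ bias):

```python
# Sketch: unadjusted Langevin sampling from pi(x) proportional to exp(-U(x)).
import numpy as np

rng = np.random.default_rng(13)

def grad_U(x):
    return 4 * x * (x**2 - 1)           # U(x) = (x^2 - 1)^2

dt, n_steps, n_chains = 1e-3, 50_000, 200
X = rng.normal(size=n_chains)
samples = []
for t in range(n_steps):
    X = X - grad_U(X) * dt + np.sqrt(2 * dt) * rng.normal(size=n_chains)
    if t > 10_000 and t % 50 == 0:      # discard burn-in, thin
        samples.append(X.copy())
samples = np.concatenate(samples)

# Compare the empirical histogram with the numerically normalised target density.
grid = np.linspace(-2.5, 2.5, 11)
fine = np.linspace(-2.5, 2.5, 2001)
Z = np.trapz(np.exp(-(fine**2 - 1)**2), fine)
centres = 0.5 * (grid[1:] + grid[:-1])
hist, _ = np.histogram(samples, bins=grid, density=True)
print("bin centres      :", np.round(centres, 2))
print("empirical density:", np.round(hist, 3))
print("target density   :", np.round(np.exp(-(centres**2 - 1)**2) / Z, 3))
```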
Appendix B: Convergence Theorems for Martingales
B.1 Martingale Convergence
Theorem (Doob's Martingale Convergence, 1953). If $\{M_n\}$ is a martingale and $\sup_n \mathbb{E}|M_n| < \infty$ ($L^1$ bounded), then $M_\infty = \lim_n M_n$ exists a.s. and $\mathbb{E}|M_\infty| < \infty$.
This theorem is foundational for proving convergence of:
- Bayesian posterior updates (likelihood ratio martingales)
- Online learning algorithms (regret bounds via bounded martingales)
- Stochastic approximation algorithms (Robbins-Monro)
The $L^2$ version: If $\sup_n \mathbb{E}[M_n^2] < \infty$, then $M_n \to M_\infty$ both a.s. and in $L^2$.
B.2 Upcrossing Inequality
The proof of martingale convergence uses the upcrossing inequality: the expected number of times $U_n[a, b]$ a martingale upcrosses an interval $[a, b]$ (goes from below $a$ to above $b$) by time $n$ satisfies:
$$\mathbb{E}\left[U_n[a, b]\right] \le \frac{\mathbb{E}\left[(M_n - a)^-\right]}{b - a}.$$
If $\{M_n\}$ is $L^1$-bounded, then $\mathbb{E}[U_\infty[a, b]] < \infty$, so upcrossings are finite for every rational interval $[a, b]$. This means oscillation in the limit is impossible, forcing convergence.
B.3 Uniform Integrability and Convergence
Definition. A family $\{X_t\}$ is uniformly integrable (UI) if:
$\lim_{K \to \infty}\ \sup_t\ \mathbb{E}\big[|X_t|\,\mathbf{1}_{\{|X_t| > K\}}\big] = 0$
UI is the condition needed for $\mathbb{E}[\lim_t X_t] = \lim_t \mathbb{E}[X_t]$ (exchange of limit and expectation). Without UI, a.s. convergence does not imply $L^1$ convergence.
Sufficient condition for UI: $\sup_t \mathbb{E}[|X_t|^{1+\epsilon}] < \infty$ for some $\epsilon > 0$.
For AI: Uniform integrability is implicitly required for most SGD convergence proofs that need (expectation of limit = limit of expectations). When gradient clipping is used, it ensures the gradient process is UI, validating the convergence proofs.
Appendix C: Gaussian Process Details
C.1 Kernel Properties
A function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a valid kernel (positive semidefinite / Mercer kernel) iff for every $n$, every $x_1, \dots, x_n \in \mathcal{X}$, and every $c \in \mathbb{R}^n$: $\sum_{i,j} c_i c_j\, k(x_i, x_j) \ge 0$.
Operations preserving PSD kernels (see the numerical check after this subsection):
- Sum: $k_1 + k_2$ (sum of two valid kernels is valid)
- Product: $k_1 \cdot k_2$ (product of two valid kernels is valid)
- Scaling: $c\,k$ for $c \ge 0$
- Composition: $k(\phi(x), \phi(x'))$ for any feature map $\phi$
Bochner's theorem (characterisation of stationary kernels): A continuous function $k(\tau)$ is the covariance of a stationary process iff:
$k(\tau) = \int_{-\infty}^{\infty} e^{i\omega\tau}\, dS(\omega)$
for some non-decreasing bounded measure $S$ (the spectral measure). The PSD is its density $S'(\omega)$ when it exists.
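As a sanity check of the closure properties above (a sketch, not a proof), the Gram matrices of the sum, product, and positive scaling of two example kernels - RBF and linear, chosen here for illustration - stay positive semidefinite up to round-off:
import numpy as np

# Numerical sanity check: Gram matrices of combinations of valid kernels remain PSD.
np.random.seed(0)
X = np.random.randn(50, 3)

def rbf(X, ls=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

def linear(X):
    return X @ X.T

for name, K in [("sum", rbf(X) + linear(X)),
                ("product", rbf(X) * linear(X)),
                ("scaled", 3.0 * rbf(X))]:
    min_eig = np.linalg.eigvalsh(K).min()
    print(f"{name:8s}: min eigenvalue = {min_eig:.2e} (>= 0 up to round-off)")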
C.2 Sparse GP Approximations
Exact GP inference costs $O(n^3)$ time and $O(n^2)$ memory for $n$ training points. For large $n$ (e.g., $n \ge 10^4$), this is intractable. Sparse GP approximations use $m \ll n$ inducing points $Z = \{z_1, \dots, z_m\}$:
The most common is the SVGP (Sparse Variational GP), which optimises a variational lower bound (ELBO), reducing the cost to $O(nm^2)$.
For AI: Sparse GPs enable scalable Bayesian optimisation for LLM hyperparameter tuning with many evaluations. The inducing points act as a compressed representation of the observations, analogous to the key-value cache in attention.
C.3 Deep Kernel Learning
Neural-network kernels: Replace the input $x$ with a learned representation $\phi_\theta(x)$ and use $k_{\text{deep}}(x, x') = k_{\text{base}}\big(\phi_\theta(x), \phi_\theta(x')\big)$.
This is deep kernel learning (DKL): train $\theta$ by maximising the GP marginal likelihood. DKL combines the flexibility of deep representations with the uncertainty quantification of GPs, and is used in scalable Bayesian optimisation for AutoML.
Appendix D: Extended ML Applications
D.1 Diffusion Models: Mathematical Details
The probability flow ODE for the VP-SDE is:
$\dfrac{dx_t}{dt} = -\tfrac{1}{2}\beta(t)\big[x_t + \nabla_x \log p_t(x_t)\big]$
This ODE has the same marginal distributions as the reverse SDE but is deterministic. It enables:
- DDIM sampling: Solve the ODE with large step sizes (fewer function evaluations)
- Latent space interpolation: Deterministic ODE gives unique encoding of each data point
- Consistency models: Learn a direct mapping $x_t \mapsto x_0$ that is consistent along ODE trajectories
The score function $s_\theta(x_t, t) \approx \nabla_{x_t}\log p_t(x_t)$ is the neural network that drives both the SDE and ODE. At optimum:
$s_{\theta^*}(x_t, t) = \nabla_{x_t}\log p_t(x_t) = \mathbb{E}\big[\nabla_{x_t}\log p(x_t \mid x_0)\ \big|\ x_t\big]$
This is a conditional expectation - directly connecting score learning to the Doob martingale construction.
D.2 Online Learning and Martingale Regret Bounds
In online learning, at each round $t$ the learner predicts $\hat{y}_t$, suffers loss $\ell_t(\hat{y}_t)$, and observes the true outcome $y_t$. The regret against the best fixed action is:
$R_T = \sum_{t=1}^{T}\ell_t(\hat{y}_t) - \min_{y}\sum_{t=1}^{T}\ell_t(y)$
Martingale structure: In Follow-the-Regularised-Leader (FTRL) algorithms, the deviations of the observed losses from their conditional expectations form a martingale difference sequence. Applying Azuma's inequality with losses bounded by $c$:
$\mathbb{P}\big(R_T \ge \mathbb{E}[R_T] + \epsilon\big) \le \exp\!\big(-\tfrac{\epsilon^2}{2Tc^2}\big)$
This gives high-probability regret bounds - the stochastic process framework converts expected regret into individual-run guarantees.
Bandit algorithms (Thompson Sampling): In multi-armed bandits with $K$ arms, at each round the agent samples parameters $\tilde{\theta}_a$ from the posterior for each arm and plays the arm with the highest sampled expected reward. The number of plays of suboptimal arms is bounded using stopping time arguments on the likelihood ratio martingale.
D.3 RLHF and Reward Models as Stochastic Processes
RLHF (Reinforcement Learning from Human Feedback) trains a language model to maximise a learned reward model subject to a KL penalty:
$\max_\pi\ \mathbb{E}_{y \sim \pi(\cdot \mid x)}\big[r_\phi(x, y)\big] - \beta\,\mathrm{KL}\big(\pi(\cdot \mid x)\,\|\,\pi_{\text{ref}}(\cdot \mid x)\big)$
This is a stochastic control problem: the policy is a controller, the generation trajectory is the controlled stochastic process, and the reward is a terminal cost.
The RLHF policy update (PPO) as a martingale: The policy gradient estimator in PPO has the form:
$\hat{g} = \mathbb{E}_t\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\hat{A}_t\big]$
where $\hat{A}_t$ is the advantage estimate. Under the true advantage function, the per-step term has conditional mean zero given the history - it is a martingale difference. PPO clips the importance ratio $\pi_\theta / \pi_{\text{old}}$ to prevent large updates, maintaining stability of the martingale approximation.
D.4 Stochastic Processes in LLM Interpretability
Mechanistic interpretability (Elhage et al., Nanda et al.) studies circuits in transformer weights. The hidden state of a token flowing through the transformer's layers can be modelled as a stochastic process evolving through residual stream updates:
$h^{(\ell+1)} = h^{(\ell)} + \mathrm{Attn}^{(\ell)}\big(h^{(\ell)}\big) + \mathrm{MLP}^{(\ell)}\big(h^{(\ell)}\big)$
Each residual update is a perturbation to the current state - the trajectory through layers is a discrete-time process indexed by layer depth $\ell$, not token position $t$. Understanding this process (what information is stored/retrieved at each layer) is the central question of interpretability.
Induction heads are attention heads that implement the operation "if A appeared before B at some position, predict B will appear after the current A." This is a pattern-matching stopping time: the head activates when it detects that the current context matches a past pattern. Formalising this as a stopping time over the layer-depth stochastic process is an active research direction.
Appendix E: The Functional CLT (Donsker's Theorem)
E.1 Weak Convergence on Function Spaces
Donsker's theorem states convergence in distribution on the function space $C[0,1]$ - the space of continuous functions on $[0,1]$ equipped with the supremum norm $\|f\|_\infty = \sup_{t \in [0,1]}|f(t)|$.
Theorem (Donsker, 1951). Let $X_1, X_2, \dots$ be iid with $\mathbb{E}[X_i] = 0$, $\mathrm{Var}(X_i) = \sigma^2 < \infty$. Define:
$W_n(t) = \dfrac{1}{\sigma\sqrt{n}}\sum_{i=1}^{\lfloor nt\rfloor} X_i, \qquad t \in [0,1]$
Then $W_n \Rightarrow B$ weakly in $C[0,1]$, where $B$ is standard Brownian motion.
Continuous mapping theorem: If $W_n \Rightarrow B$ and $g : C[0,1] \to \mathbb{R}$ is continuous, then $g(W_n) \Rightarrow g(B)$. This gives:
- $\max_{k \le n} S_k / (\sigma\sqrt{n}) \Rightarrow \max_{t \le 1} B_t$ - reflection principle for random walks
- $\frac{1}{\sqrt{n}}\,\#\{k \le n : S_k = 0\} \Rightarrow L_1(0)$ - law of Brownian local time at 0
E.2 Applications to SGD Analysis
Donsker's theorem justifies the SDE approximation of SGD: as the step size $\eta \to 0$, the piecewise-constant interpolation of the SGD iterates converges (in distribution, as a path) to the SDE $d\theta_t = -\nabla L(\theta_t)\,dt + \sqrt{\eta}\,\Sigma(\theta_t)^{1/2}\,dB_t$. This means:
- Convergence results for the SDE (e.g., mixing time to its stationary distribution) translate to discrete-time SGD results
- Near a minimum, the stationary distribution of SGD approximates a Gaussian whose covariance scales with the noise temperature $\eta / B$ (step size over batch size)
- The escape rate from a local minimum with barrier height $\Delta L$ scales as $\exp(-c\,\Delta L \,/\, (\eta/B))$ - the Kramers escape rate formula
This is why practitioners observe that increasing the batch size $B$ by a factor $\kappa$ requires proportionally increasing the learning rate $\eta$ by $\kappa$ to maintain the same noise temperature and thus the same optimisation dynamics.
Appendix F: Notation Summary
| Symbol | Meaning |
|---|---|
| $(\Omega, \mathcal{F}, \mathbb{P})$ | Probability space |
| $\{\mathcal{F}_t\}_{t \ge 0}$ | Filtration (increasing family of $\sigma$-algebras) |
| $\{X_t\}_{t \in T}$ | Stochastic process indexed by $T$ |
| $(M_n)_{n \ge 0}$ | Martingale |
| $\tau$ | Stopping time |
| $\mathcal{F}_\tau$ | $\sigma$-algebra at stopping time $\tau$ |
| $[X]_t$ | Quadratic variation of process $X$ up to time $t$ |
| $B_t$ (or $W_t$) | Standard Brownian motion (Wiener process) |
| $N(t)$ | Poisson process with rate $\lambda$ |
| $k(x, x')$ | Covariance kernel (for GPs) |
| $\mathcal{GP}(m, k)$ | Gaussian process with mean $m$ and kernel $k$ |
| $dB_t$ | Wiener increment (stochastic differential) |
| $\int_0^t H_s\,dB_s$ | Ito integral |
| $[B]_t = t$ | Quadratic variation of BM |
| $\bar{\alpha}_t$ | DDPM cumulative noise schedule |
| $s_\theta(x, t)$ | Score function approximation |
| VP-SDE | Variance-Preserving SDE (DDPM continuum) |
| VE-SDE | Variance-Exploding SDE |
| OST | Optional Stopping Theorem (Doob) |
| UI | Uniformly Integrable |
| WSS | Wide-Sense Stationary |
| ACVF | Autocovariance function |
| PSD | Power Spectral Density |
| NTK | Neural Tangent Kernel |
| OU | Ornstein-Uhlenbeck process |
| GBM | Geometric Brownian Motion |
| SRW | Simple Random Walk |
Appendix G: Worked Examples
G.1 Symmetric Random Walk: Hitting Probabilities
Problem. A particle starts at position $3$ on the integer line $\{0, 1, \dots, 10\}$. At each step it moves $+1$ or $-1$ with equal probability. Find: (a) The probability it hits 0 before hitting 10 (b) The expected number of steps to hit $\{0, 10\}$ (c) The probability it ever returns to 3 starting from 3
Solution.
(a) By the gambler's ruin result, $\mathbb{P}(\text{hit } 0 \text{ before } 10 \mid X_0 = 3) = \frac{10 - 3}{10} = 0.7$.
(b) $\mathbb{E}[\tau] = 3 \times (10 - 3) = 21$ steps.
(c) For a symmetric walk on $\mathbb{Z}$ (no absorbing boundaries), the probability of ever returning to 3 equals 1 (by Polya's theorem, recurrence in $d = 1$). However, the expected return time is infinite (the walk is null-recurrent).
Numerical verification:
import numpy as np
np.random.seed(42)
N_trials = 100_000
# (a) Gambler's ruin simulation
hits_zero = 0
for _ in range(N_trials):
x = 3
while 0 < x < 10:
x += np.random.choice([-1, 1])
hits_zero += (x == 0)
print(f"P(hit 0) \\approx {hits_zero/N_trials:.4f}, theory: 0.7000") # -> 0.7000
# (b) Expected steps
total_steps = 0
for _ in range(N_trials):
x, steps = 3, 0
while 0 < x < 10:
x += np.random.choice([-1, 1])
steps += 1
total_steps += steps
print(f"E[\\tau] \\approx {total_steps/N_trials:.2f}, theory: 21.00") # -> 21.00
G.2 Poisson Process: Conditional Uniformity
Problem. Events arrive as a Poisson process with rate $\lambda$. Given that exactly 5 events occurred in $[0, T]$, what is the distribution of the first event time $T_1$?
Solution. By the conditional uniformity property, given $N(T) = 5$, the 5 event times are distributed as the order statistics of 5 iid $\mathrm{Uniform}(0, T)$ random variables. Therefore $T_1$ is the minimum of 5 uniforms on $[0, T]$:
More precisely: $\mathbb{P}(T_1 > s \mid N(T) = 5) = \big(1 - \tfrac{s}{T}\big)^5$ for $0 \le s \le T$.
Verification: Simulate 5 Uniform random variables, take the minimum. Compare to conditional Poisson process simulation.
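A sketch of this verification; the rate $\lambda = 2$ and window $T = 3$ below are illustrative choices, not values from the problem statement:
import numpy as np

# Verification sketch of conditional uniformity: min of 5 uniforms vs the first
# arrival of a Poisson process conditioned on exactly 5 events in [0, T].
np.random.seed(0)
lam, T, n_events = 2.0, 3.0, 5

# Method 1: minimum of 5 iid Uniform(0, T) draws
t1_uniform = np.random.uniform(0, T, (200_000, n_events)).min(axis=1)

# Method 2: simulate Poisson arrivals, keep runs with exactly 5 events in [0, T]
t1_poisson = []
while len(t1_poisson) < 20_000:
    arrivals = np.cumsum(np.random.exponential(1 / lam, 30))
    in_window = arrivals[arrivals <= T]
    if len(in_window) == n_events:
        t1_poisson.append(in_window[0])
t1_poisson = np.array(t1_poisson)

print(f"mean of T1 | N(T)=5: uniform-min {t1_uniform.mean():.3f}, "
      f"conditioned Poisson {t1_poisson.mean():.3f}, theory T/6 = {T/6:.3f}")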
G.3 Brownian Motion: The Arc-Sine Distribution
Problem. Let $L = \sup\{t \le 1 : B_t = 0\}$ be the last zero of Brownian motion before time 1. Find $\mathbb{P}(L \le x)$.
Solution. By Levy's arc-sine law:
$\mathbb{P}(L \le x) = \dfrac{2}{\pi}\arcsin\sqrt{x}, \qquad 0 \le x \le 1$
This is the CDF of the arc-sine distribution on $[0, 1]$, with density $\frac{1}{\pi\sqrt{x(1-x)}}$. The density concentrates near 0 and 1 - the last zero before time 1 is most likely to be near 0 or near 1, and least likely to be near $1/2$. This violates the naive intuition that "the last zero should be somewhere in the middle."
Implication for AI: The arc-sine law suggests that in a long random walk (or SRW approximation of SGD noise), the trajectory spends most of its time on one side of zero - the noise process can have long "runs" of the same sign. This persistent structure in gradient noise is why momentum-based optimisers (Adam, SGD with momentum) perform better than pure SGD for certain problem structures.
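A simulation sketch of the arc-sine law, using a fine random-walk approximation of Brownian motion (the discretisation parameters are illustrative):
import numpy as np

# Sketch: estimate the CDF of the last zero before time 1 on a discretised BM path
# and compare against the arc-sine law CDF 2/pi * arcsin(sqrt(x)).
np.random.seed(0)
n_steps, n_paths = 2_000, 5_000
dt = 1.0 / n_steps
B = np.cumsum(np.random.normal(0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
B = np.hstack([np.zeros((n_paths, 1)), B])

signs = np.sign(B)
crossings = signs[:, :-1] * signs[:, 1:] <= 0     # sign change => a zero nearby
last_zero = crossings[:, ::-1].argmax(axis=1)     # distance of last crossing from the end
L = 1.0 - last_zero * dt                          # approximate time of last zero

for x in [0.1, 0.25, 0.5, 0.75, 0.9]:
    empirical = (L <= x).mean()
    theory = 2 / np.pi * np.arcsin(np.sqrt(x))
    print(f"P(L <= {x:.2f}): empirical {empirical:.3f}, arc-sine {theory:.3f}")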
G.4 OU Process: Analytical vs Numerical
Problem. Solve the OU SDE $dX_t = \theta(\mu - X_t)\,dt + \sigma\,dB_t$ with initial condition $X_0 = x_0$.
Solution. This is the OU process with mean-reversion rate $\theta$, long-run mean $\mu$, and noise level $\sigma$.
- Solution: $X_t = \mu + (x_0 - \mu)e^{-\theta t} + \sigma\int_0^t e^{-\theta(t - s)}\,dB_s$
- Mean: $\mathbb{E}[X_t] = \mu + (x_0 - \mu)e^{-\theta t}$
- Variance: $\mathrm{Var}(X_t) = \dfrac{\sigma^2}{2\theta}\big(1 - e^{-2\theta t}\big)$
- Stationary distribution: $N\big(\mu,\ \tfrac{\sigma^2}{2\theta}\big)$
The process decays exponentially from $x_0$ toward the mean $\mu$ with decay rate $\theta$, while accumulating noise that saturates at variance $\tfrac{\sigma^2}{2\theta}$.
For diffusion models: This corresponds to DDPM with a time-discretised schedule $\beta_t$, and the signal-to-noise ratio $\bar{\alpha}_t / (1 - \bar{\alpha}_t)$ approaching 0 for large $t$ - which is the "pure noise" terminal condition when $\bar{\alpha}_T \approx 0$.
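A sketch comparing an Euler-Maruyama simulation against the analytical mean and variance; the parameter values ($\theta$, $\mu$, $\sigma$, $x_0$) are illustrative choices, not the problem's own:
import numpy as np

# Sketch: Euler-Maruyama simulation of the OU process vs the analytical formulas.
# Parameter values are illustrative.
np.random.seed(0)
theta, mu, sigma, x0 = 2.0, 0.0, 1.0, 3.0
T, n_steps, n_paths = 2.0, 2_000, 50_000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    X = X + theta * (mu - X) * dt + sigma * np.sqrt(dt) * np.random.randn(n_paths)

mean_theory = mu + (x0 - mu) * np.exp(-theta * T)
var_theory = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * T))
print(f"mean at T: simulated {X.mean():.4f}, theory {mean_theory:.4f}")
print(f"var  at T: simulated {X.var():.4f}, theory {var_theory:.4f}")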
G.5 Gaussian Process: Prior vs Posterior
Problem. Fit a GP to three observations $(x_1, y_1), (x_2, y_2), (x_3, y_3)$ with an RBF kernel and observation noise variance $\sigma_n^2$.
Solution. Build the $3 \times 3$ kernel matrix $K$ with entries $K_{ij} = k(x_i, x_j)$ and add the noise: $K_y = K + \sigma_n^2 I$.
Posterior at a test point $x_*$:
- Compute $k_* = [k(x_1, x_*), k(x_2, x_*), k(x_3, x_*)]^\top$, then $\mu_* = k_*^\top K_y^{-1} y$ and $\sigma_*^2 = k(x_*, x_*) - k_*^\top K_y^{-1} k_*$
The posterior mean interpolates between the observations, and the posterior variance is small near observations (where the kernel provides strong correlation) and large away from them.
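A compact numerical sketch of this computation. The three observations used below are illustrative - they match the usage example in Appendix K.2, not necessarily the problem's own values:
import numpy as np

# Sketch of the GP posterior at one test point, with illustrative data.
X_train = np.array([[0.0], [1.0], [2.0]])
y_train = np.array([1.0, -0.5, 0.8])
sigma_n2 = 1e-3                                     # observation noise variance

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

K = rbf(X_train, X_train) + sigma_n2 * np.eye(3)    # 3x3 kernel matrix K_y
x_star = np.array([[0.5]])                          # a single test point
k_star = rbf(X_train, x_star)                       # (3, 1)

alpha = np.linalg.solve(K, y_train)
mu_post = float(k_star.T @ alpha)
var_post = float(rbf(x_star, x_star) - k_star.T @ np.linalg.solve(K, k_star))
print(f"posterior at x*=0.5: mean {mu_post:.3f}, std {np.sqrt(max(var_post, 0)):.3f}")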
Appendix H: Self-Assessment Checklist
After completing this section, verify you can:
- State the definition of a stochastic process and give 3 examples of each combination of (discrete/continuous) x (time/state space)
- Define a filtration and explain what "adapted" means in ML context
- Verify the martingale property for a new process by computing $\mathbb{E}[X_{n+1} \mid \mathcal{F}_n]$
- State all three conditions of Doob's OST and identify which applies to a given problem
- Apply OST to compute expected stopping times and boundary hitting probabilities
- Construct the Doob martingale for a given function and derive McDiarmid from Azuma
- State the three defining properties of a Poisson process and derive the superposition/thinning results
- State Wiener's four axioms for Brownian motion and compute $\mathrm{Cov}(B_s, B_t) = \min(s, t)$
- Compute the quadratic variation $[B]_t = t$ and state Ito's lemma
- Solve the OU SDE and state its stationary distribution
- Connect the DDPM forward process to the VP-SDE and OU process
- Define WSS and ergodicity, and state the Wiener-Khinchin theorem
- Specify a GP by its mean function and kernel, and compute the posterior given observations
- Explain why RoPE makes attention a stationary kernel in position
- Model SGD as a near-martingale and state the diffusion SDE approximation
Appendix I: Connections to Other Areas
I.1 Information Theory (Section09)
- The log-likelihood ratio process $\Lambda_n = \sum_{i=1}^{n}\log\frac{q(X_i)}{p(X_i)}$ is a supermartingale under the null $P$ (data drawn from $p$) and a submartingale under the alternative $Q$
- The sequential probability ratio test (SPRT) is an optimal stopping time for hypothesis testing, derived using the likelihood ratio martingale
- Renyi entropy rates and the entropy rate of a stationary process are central concepts at the intersection of stochastic processes and information theory
I.2 Functional Analysis (Section12)
- The space $C[0,1]$ of continuous functions with the sup norm is a Banach space; weak convergence on this space is the setting for Donsker's theorem
- Gaussian processes are infinite-dimensional Gaussian distributions on function spaces (Hilbert spaces)
- Reproducing Kernel Hilbert Spaces (RKHS): For each valid kernel $k$, there is a unique RKHS $\mathcal{H}_k$ with the reproducing property $f(x) = \langle f, k(x, \cdot)\rangle_{\mathcal{H}_k}$. GP regression with kernel $k$ is equivalent to minimising a regularised empirical risk in the RKHS
I.3 Optimisation (Section08)
- Stochastic approximation (Robbins-Monro, 1951): The original framework for SGD convergence uses martingale theory. Under step sizes satisfying $\sum_t \eta_t = \infty$, $\sum_t \eta_t^2 < \infty$, and bounded noise variance, the iterates converge a.s.
- Lyapunov methods: Proving convergence of SGD amounts to constructing a Lyapunov function that is a (noisy) supermartingale, then applying martingale convergence
- Langevin MCMC: The Langevin SDE is used for posterior sampling in Bayesian DL, connecting optimisation and stochastic process theory
I.4 Statistics (Section07-Statistics)
- Time series analysis (ARIMA, state-space models) is the statistical estimation side of stochastic processes: given observed data , estimate the model parameters of an assumed process
- Spectral analysis uses the Wiener-Khinchin theorem to estimate the PSD from data via the periodogram
- Hypothesis testing for stationarity (Augmented Dickey-Fuller test, KPSS test) checks whether an observed time series is consistent with a stationary process
Appendix J: Advanced Topics
J.1 Levy Processes
A Levy process generalises both Brownian motion (continuous paths) and Poisson process (jumps) to arbitrary combinations of drift, diffusion, and jumps.
Levy-Khinchin representation: Every Levy process $X_t$ has characteristic function:
$\mathbb{E}\big[e^{i\theta X_t}\big] = \exp\!\Big(t\Big[i b\theta - \tfrac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}}\big(e^{i\theta x} - 1 - i\theta x\,\mathbf{1}_{\{|x| < 1\}}\big)\,\nu(dx)\Big]\Big)$
where $b$ is the drift, $\sigma^2$ the diffusion, and $\nu$ is the Levy measure (jump intensity).
For AI: Heavy-tailed gradient distributions in neural network training are better modelled by Levy processes (with large jumps) than by Brownian motion (Gaussian, no jumps). The Levy stable distribution family includes the Gaussian (index $\alpha = 2$) and Cauchy ($\alpha = 1$) as special cases. Empirical studies of gradient statistics in transformers show heavy tails (Student-$t$ with low degrees of freedom), suggesting Levy process models may be more appropriate than the Gaussian SDE approximation.
J.2 Malliavin Calculus
Malliavin calculus is a variational calculus on Wiener space - it provides derivatives of random variables with respect to the underlying Brownian motion.
The Malliavin derivative $D_t F$ measures the sensitivity of a random variable $F$ (a functional of the Brownian path) to the Brownian increment at time $t$. For example, it satisfies $D_t\big(\int_0^T h(s)\,dB_s\big) = h(t)$ for deterministic $h$.
Clark-Ocone formula: For Malliavin-differentiable $F$ with $\mathbb{E}[F^2] < \infty$:
$F = \mathbb{E}[F] + \int_0^T \mathbb{E}\big[D_t F \mid \mathcal{F}_t\big]\,dB_t$
For AI: Malliavin calculus provides pathwise gradients for diffusion models, enabling gradient computation through the stochastic sampling process. This is used in score-based generative model training with more stable gradient estimates than the standard score-matching loss.
J.3 Stochastic Control and HJB Equation
A stochastic control problem seeks a policy $u_t$ (adapted to $\mathcal{F}_t$) minimising:
$J(u) = \mathbb{E}\Big[\int_0^T c(X_t, u_t)\,dt + g(X_T)\Big]$
subject to $dX_t = \mu(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dB_t$.
The Hamilton-Jacobi-Bellman (HJB) equation gives the optimal value function $V(x, t)$:
$\partial_t V + \min_u\big\{c(x, u) + \mu(x, u)\,\partial_x V + \tfrac{1}{2}\sigma^2(x, u)\,\partial_x^2 V\big\} = 0, \qquad V(x, T) = g(x)$
For AI: RLHF can be formulated as a stochastic control problem where the LLM generates tokens sequentially (the controlled process), the reward is the RLHF reward model (the terminal cost), and the KL penalty is a regularisation term. The HJB equation characterises the optimal RLHF policy, and PPO approximately solves it using neural network approximations of the value function.
Appendix K: Python Recipes for Stochastic Processes
K.1 Simulating Stochastic Processes
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
# Simple Random Walk
def simulate_srw(n_steps, n_paths=5):
steps = np.random.choice([-1, 1], size=(n_paths, n_steps))
return np.hstack([np.zeros((n_paths,1)), steps.cumsum(axis=1)])
# Brownian Motion (via SRW limit)
def simulate_bm(T=1.0, n_steps=1000, n_paths=5):
dt = T / n_steps
increments = np.random.normal(0, np.sqrt(dt), (n_paths, n_steps))
return np.hstack([np.zeros((n_paths,1)), increments.cumsum(axis=1)])
# Ornstein-Uhlenbeck (Euler-Maruyama)
def simulate_ou(theta=1.0, mu=0.0, sigma=1.0, T=10.0, n_steps=1000, n_paths=5):
dt = T / n_steps
X = np.zeros((n_paths, n_steps+1))
X[:,0] = 2.0 # initial condition
for t in range(n_steps):
dB = np.random.normal(0, np.sqrt(dt), n_paths)
X[:,t+1] = X[:,t] + theta*(mu - X[:,t])*dt + sigma*dB
return X
# Poisson Process
def simulate_poisson(lam=1.0, T=10.0, n_paths=5):
paths = []
for _ in range(n_paths):
t, events = 0, [0]
while t < T:
t += np.random.exponential(1/lam)
events.append(min(t, T))
paths.append(events)
return paths
# Geometric Brownian Motion
def simulate_gbm(mu=0.1, sigma=0.3, S0=100.0, T=1.0, n_steps=252, n_paths=5):
dt = T / n_steps
log_S = np.zeros((n_paths, n_steps+1))
log_S[:,0] = np.log(S0)
for t in range(n_steps):
log_S[:,t+1] = log_S[:,t] + (mu - sigma**2/2)*dt + sigma*np.random.normal(0, np.sqrt(dt), n_paths)
return np.exp(log_S)
K.2 Gaussian Process Posterior
import numpy as np
from scipy.linalg import cho_factor, cho_solve
def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
"""Squared-exponential (RBF) kernel."""
dists = ((X1[:,None] - X2[None,:])**2).sum(-1)
return variance * np.exp(-dists / (2 * length_scale**2))
def gp_posterior(X_train, y_train, X_test, kernel_fn, noise_var=1e-3):
"""Compute GP posterior mean and variance."""
K_tt = kernel_fn(X_train, X_train) + noise_var * np.eye(len(X_train))
K_ts = kernel_fn(X_train, X_test)
K_ss = kernel_fn(X_test, X_test)
# Solve K_tt @ alpha = y_train using Cholesky
cho = cho_factor(K_tt)
alpha = cho_solve(cho, y_train)
v = cho_solve(cho, K_ts)
mu_post = K_ts.T @ alpha
var_post = np.diag(K_ss) - (K_ts * v).sum(0)
return mu_post, np.sqrt(np.maximum(var_post, 0))
# Usage:
# X_train = np.array([0., 1., 2.])
# y_train = np.array([1., -0.5, 0.8])
# X_test = np.linspace(-0.5, 2.5, 100)
# mu, std = gp_posterior(X_train, y_train, X_test, rbf_kernel)
K.3 Martingale Verification
import numpy as np
def verify_martingale(process_fn, n_steps=100, n_paths=10_000):
    """
    Sanity-check the martingale property via constant means:
    if (M_n) is a martingale, then E[M_n] = E[M_0] for all n.
    (Necessary but not sufficient - the full property conditions on F_n.)
    process_fn: callable that generates an (n_paths, n_steps+1) array.
    """
    paths = process_fn(n_steps, n_paths)
    means = paths.mean(axis=0)
    # Tolerance: 4 standard errors of the mean at the final (highest-variance) step
    tol = 4 * paths[:, -1].std() / np.sqrt(n_paths)
    assert np.all(np.abs(means - means[0]) < tol), f"Mean not constant: {means[:5]}"
    print(f"Mean at all steps: {means[0]:.4f} +/- {means.std():.4f} (should be ~0)")
    print(f"PASS: martingale mean constant within tolerance {tol:.4f}")
def simulate_srw_paths(n_steps, n_paths):
steps = np.random.choice([-1., 1.], size=(n_paths, n_steps))
return np.hstack([np.zeros((n_paths,1)), steps.cumsum(axis=1)])
np.random.seed(42)
verify_martingale(simulate_srw_paths)
K.4 Diffusion Model Forward Process
import numpy as np
def ddpm_forward_process(x0, T=1000, beta_start=1e-4, beta_end=0.02):
"""
DDPM forward process: q(x_t | x_0) = N(sqrt(alpha_bar_t)*x0, (1-alpha_bar_t)*I)
Returns: {t: (x_t, alpha_bar_t)} for t = 0, 1, ..., T
"""
betas = np.linspace(beta_start, beta_end, T)
alphas = 1 - betas
alpha_bars = np.cumprod(alphas)
trajectory = {0: (x0, 1.0)}
x0_arr = np.array(x0, dtype=float)
for t in range(1, T+1):
ab = alpha_bars[t-1]
eps = np.random.randn(*x0_arr.shape)
x_t = np.sqrt(ab) * x0_arr + np.sqrt(1 - ab) * eps
trajectory[t] = (x_t, ab)
return trajectory, alpha_bars
def snr(alpha_bar):
"""Signal-to-noise ratio: alpha_bar / (1 - alpha_bar)."""
return alpha_bar / (1 - alpha_bar + 1e-9)
# Example:
# x0 = np.array([1.0, 0.0, -1.0])
# traj, alpha_bars = ddpm_forward_process(x0, T=1000)
# With this linear schedule, SNR crosses 1 (alpha_bar ~ 0.5) around t ~ 260
# For t=1000: SNR ~ 0 (pure noise)
K.5 DDPM Schedule Comparison
import numpy as np
def linear_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
return np.linspace(beta_start, beta_end, T)
def cosine_schedule(T=1000, s=0.008):
t = np.arange(T+1) / T
f = np.cos((t + s)/(1+s) * np.pi/2)**2
alpha_bars = f / f[0]
betas = 1 - alpha_bars[1:] / alpha_bars[:-1]
return np.clip(betas, 0, 0.999)
T = 1000
betas_linear = linear_schedule(T)
betas_cosine = cosine_schedule(T)
ab_linear = np.cumprod(1 - betas_linear)
ab_cosine = np.cumprod(1 - betas_cosine)
print("Signal preserved at t=T/4, T/2, 3T/4, T:")
for name, ab in [("Linear", ab_linear), ("Cosine", ab_cosine)]:
vals = [ab[T//4-1], ab[T//2-1], ab[3*T//4-1], ab[-1]]
print(f" {name}: {[f'{v:.4f}' for v in vals]}")
# Cosine schedule destroys signal more uniformly across timesteps
Appendix L: Theory Problems (Advanced)
L.1 Optional Stopping and Wald's Identity
Problem. Let $X_1, X_2, \dots$ be iid with $\mathbb{E}[X_i] = \mu$ and $\mathrm{Var}(X_i) = \sigma^2 < \infty$, and let $S_n = \sum_{i=1}^{n} X_i$. Let $\tau$ be a stopping time with $\mathbb{E}[\tau] < \infty$. Prove Wald's Identity:
$\mathbb{E}[S_\tau] = \mu\,\mathbb{E}[\tau] \qquad \text{and} \qquad \mathbb{E}\big[(S_\tau - \mu\tau)^2\big] = \sigma^2\,\mathbb{E}[\tau]$
Hint: $M_n = S_n - n\mu$ is a martingale. Apply OST. For the second moment, use the fact that $M_n^2 - n\sigma^2$ is also a martingale.
L.2 The Optional Stopping Theorem: Necessary Conditions
Problem. Show that the condition $\tau < \infty$ a.s. alone does NOT suffice for OST. Construct a martingale $(M_n)$ and a stopping time $\tau$ with $\tau < \infty$ a.s. but $\mathbb{E}[M_\tau] \ne \mathbb{E}[M_0]$.
Hint: Let $(S_n)$ be the SRW and consider $\tau = \inf\{n : S_n = 1\}$; then $\tau < \infty$ a.s. by recurrence, yet $\mathbb{E}[S_\tau] = 1 \ne 0 = \mathbb{E}[S_0]$.
L.3 Levy Characterisation of BM
Problem. Prove that if $(M_t)$ is a continuous local martingale starting at 0 with quadratic variation $[M]_t = t$, then $M_t$ is a standard Brownian motion.
Hint: Use the exponential process $\exp\big(i\theta M_t + \tfrac{1}{2}\theta^2 t\big)$ and show it is a martingale. Then compute the characteristic function of $M_t - M_s$.
L.4 GP Posterior as Hilbert Space Projection
Problem. Show that the GP posterior mean is the orthogonal projection of the target function onto the span of $\{k(x_i, \cdot)\}_{i=1}^{n}$ in the RKHS $\mathcal{H}_k$.
Hint: Use the reproducing property $f(x_i) = \langle f, k(x_i, \cdot)\rangle_{\mathcal{H}_k}$ to express GP regression as a Hilbert space projection problem.
Appendix M: References and Further Reading
Textbooks
-
Karatzas & Shreve, Brownian Motion and Stochastic Calculus (1991) - The standard reference for rigorous continuous-time stochastic processes, Ito calculus, and SDEs. Graduate-level.
-
Durrett, Probability: Theory and Examples (5th ed., 2019) - Comprehensive coverage including martingales, stopping times, ergodic theory, Brownian motion. Excellent exercises.
-
Williams, Probability with Martingales (1991) - Elegant, concise introduction to martingale theory. The clearest proof of martingale convergence. Highly recommended.
-
Oksendal, Stochastic Differential Equations (7th ed., 2014) - Standard SDEs reference. Accessible treatment of Ito calculus and applications.
-
Rasmussen & Williams, Gaussian Processes for Machine Learning (2006) - The canonical GP reference. Available free online.
-
Doob, Stochastic Processes (1953) - The original monograph. Still valuable for historical context.
Papers for AI Connections
-
Song et al. (2021), Score-Based Generative Modeling through Stochastic Differential Equations - Score SDE framework unifying DDPM, SMLD.
-
Mandt, Hoffman & Blei (2017), Stochastic Gradient Descent as Approximate Bayesian Inference - SDE approximation of SGD.
-
Su et al. (2022), RoFormer: Enhanced Transformer with Rotary Position Embedding - RoPE as stationary kernel.
-
Ho, Jain & Abbeel (2020), Denoising Diffusion Probabilistic Models - DDPM.
-
Li et al. (2017), Stochastic Modified Equations and Adaptive Stochastic Gradient Algorithms - SDE theory for SGD.
-
Song et al. (2023), Consistency Models - Distillation of diffusion models via probability flow ODE.
-
Tropp (2015), An Introduction to Matrix Concentration Inequalities - Matrix Bernstein, extends scalar martingale results.
-
Sutton & Barto (2018), Reinforcement Learning: An Introduction (2nd ed.) - RL value functions as martingales; Chapter 6 on TD-learning.
Appendix N: Extended Proofs
N.1 Proof of Doob's Maximal Inequality
Theorem. For a non-negative submartingale $(M_n)$ and $\lambda > 0$:
$\mathbb{P}\big(\max_{k \le n} M_k \ge \lambda\big) \le \dfrac{\mathbb{E}[M_n]}{\lambda}$
Proof. Let $\tau = \inf\{k : M_k \ge \lambda\}$ (first time the process crosses $\lambda$), with $\tau = \infty$ if no crossing occurs. Then $\tau$ is a stopping time (the crossing event is $\mathcal{F}_k$-measurable).
Decompose the expectation: $\mathbb{E}[M_n] = \mathbb{E}[M_n\,\mathbf{1}_{\{\tau \le n\}}] + \mathbb{E}[M_n\,\mathbf{1}_{\{\tau > n\}}] \ge \mathbb{E}[M_n\,\mathbf{1}_{\{\tau \le n\}}]$.
For the first term: since $(M_n)$ is a submartingale, on $\{\tau \le n\}$ we have $\mathbb{E}[M_n\,\mathbf{1}_{\{\tau \le n\}}] \ge \mathbb{E}[M_\tau\,\mathbf{1}_{\{\tau \le n\}}] \ge \lambda\,\mathbb{P}(\tau \le n)$.
Since $\{\tau \le n\} = \{\max_{k \le n} M_k \ge \lambda\}$: $\lambda\,\mathbb{P}\big(\max_{k \le n} M_k \ge \lambda\big) \le \mathbb{E}[M_n]$, which is the claim.
Corollary. If $(M_n)$ is a martingale, applying the inequality to $M_n^2$ (a non-negative submartingale since $x \mapsto x^2$ is convex) gives $\mathbb{P}\big(\max_{k \le n}|M_k| \ge \lambda\big) \le \mathbb{E}[M_n^2]/\lambda^2$.
N.2 Proof of Azuma's Inequality
Theorem (Azuma-Hoeffding). Let $(M_n)$ be a martingale with $|M_k - M_{k-1}| \le c_k$ a.s. Then:
$\mathbb{P}\big(M_n - M_0 \ge t\big) \le \exp\!\Big(-\dfrac{t^2}{2\sum_{k=1}^{n} c_k^2}\Big)$
Proof. Let $D_k = M_k - M_{k-1}$ be the martingale differences. Since $\mathbb{E}[D_k \mid \mathcal{F}_{k-1}] = 0$ and $|D_k| \le c_k$, by Hoeffding's lemma:
$\mathbb{E}\big[e^{\lambda D_k} \mid \mathcal{F}_{k-1}\big] \le e^{\lambda^2 c_k^2 / 2}$
By the Markov inequality applied to $e^{\lambda(M_n - M_0)}$ (for any $\lambda > 0$):
$\mathbb{P}(M_n - M_0 \ge t) \le e^{-\lambda t}\,\mathbb{E}\big[e^{\lambda(M_n - M_0)}\big]$
Factor the MGF using the tower property, peeling off one difference at a time:
$\mathbb{E}\big[e^{\lambda(M_n - M_0)}\big] = \mathbb{E}\Big[e^{\lambda(M_{n-1} - M_0)}\,\mathbb{E}\big[e^{\lambda D_n} \mid \mathcal{F}_{n-1}\big]\Big] \le e^{\lambda^2 c_n^2/2}\,\mathbb{E}\big[e^{\lambda(M_{n-1} - M_0)}\big]$
Iterating: $\mathbb{E}\big[e^{\lambda(M_n - M_0)}\big] \le \exp\big(\tfrac{\lambda^2}{2}\sum_k c_k^2\big)$.
Combining: $\mathbb{P}(M_n - M_0 \ge t) \le \exp\big(-\lambda t + \tfrac{\lambda^2}{2}\sum_k c_k^2\big)$.
Optimise over $\lambda$: set $\lambda = t / \sum_k c_k^2$ to get the stated bound.
N.3 Proof of Polya's Theorem ($d = 1$: Recurrence)
Theorem. The SRW on $\mathbb{Z}$ is recurrent.
Proof. It suffices to show the expected number of returns to 0 is infinite: $\sum_{n \ge 1}\mathbb{P}(S_{2n} = 0) = \infty$ (a standard renewal argument shows that an infinite expected number of returns forces the return probability to equal 1).
We have $\mathbb{P}(S_{2n} = 0) = \binom{2n}{n}2^{-2n}$. By Stirling's approximation $n! \sim \sqrt{2\pi n}\,(n/e)^n$:
$\mathbb{P}(S_{2n} = 0) \sim \dfrac{1}{\sqrt{\pi n}}$
Therefore $\sum_n \mathbb{P}(S_{2n} = 0) = \infty$, and the walk returns to 0 with probability 1 - and hence infinitely often.
Note: In $d$ dimensions, $\mathbb{P}(S_{2n} = 0) \sim C_d\,n^{-d/2}$ by the local CLT. For $d = 2$ this is $\sim C/n$, still non-summable (recurrent); for $d \ge 3$ it is summable, so the expected number of returns is finite and the first Borel-Cantelli lemma gives that returns occur only finitely often - the walk is transient.
N.4 Proof that the OU Process Has the Correct Stationary Distribution
Theorem. The stationary distribution of $dX_t = \theta(\mu - X_t)\,dt + \sigma\,dB_t$ ($\theta > 0$) is $N\big(\mu, \tfrac{\sigma^2}{2\theta}\big)$.
Proof via Fokker-Planck equation. The probability density $p(x, t)$ satisfies:
$\partial_t p = -\partial_x\big[\theta(\mu - x)\,p\big] + \dfrac{\sigma^2}{2}\,\partial_x^2 p$
At stationarity $\partial_t p = 0$, so the probability flux vanishes for a normalisable density:
$\theta(\mu - x)\,p_\infty(x) - \dfrac{\sigma^2}{2}\,\partial_x p_\infty(x) = 0$
Try $p_\infty(x) = C\exp\!\big(-\tfrac{\theta(x - \mu)^2}{\sigma^2}\big)$: then $\partial_x p_\infty = -\tfrac{2\theta(x - \mu)}{\sigma^2}\,p_\infty$, so the flux is $\theta(\mu - x)p_\infty + \theta(x - \mu)p_\infty = 0$. [ok]
The normalisation gives $C = \sqrt{\tfrac{\theta}{\pi\sigma^2}}$, i.e. $p_\infty$ is the $N\big(\mu, \tfrac{\sigma^2}{2\theta}\big)$ density.
Appendix O: Spectral Theory of Stochastic Processes
O.1 Spectral Representation
Every WSS process with absolutely integrable ACVF admits the spectral representation:
$X_t = \int_{-\infty}^{\infty} e^{i\omega t}\,dZ(\omega)$
where $Z(\omega)$ is an orthogonal-increment process (the spectral process) satisfying $\mathbb{E}\big[|dZ(\omega)|^2\big] = dS(\omega)$, with $S$ the spectral measure.
This is the stochastic analogue of the Fourier transform: a stationary process decomposes into uncorrelated frequency components, each contributing variance $dS(\omega)$ in the frequency band $[\omega, \omega + d\omega]$.
For AI: The power spectral density of a token sequence determines how information is distributed across "frequencies" (temporal scales). A language model with strong long-range dependencies (like GPT-4 modelling coherent long documents) will have a PSD with high power at low frequencies. Attention head specialisation can be viewed as learning frequency-selective filters - heads that attend to nearby tokens (high frequency) vs. distant tokens (low frequency).
O.2 Spectral Gap and Mixing
For a stationary Markov chain with transition operator $P$, the spectral gap $\gamma = 1 - |\lambda_2|$ (where $\lambda_2$ is the second-largest eigenvalue of $P$ in modulus) determines the mixing time $t_{\text{mix}} \asymp 1/\gamma$.
- Large spectral gap -> fast mixing -> samples become independent quickly
- Small spectral gap -> slow mixing -> long-range correlations persist
For transformers: The attention pattern at each layer can be viewed as a row-stochastic matrix $A \in \mathbb{R}^{n \times n}$ (for sequence length $n$). The spectral gap of $A$ determines how quickly information "mixes" across the sequence. Heads with nearly uniform attention (large spectral gap) aggregate globally; heads with peaked attention (small spectral gap) attend locally. A small numerical illustration follows.
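A sketch of this comparison, treating two synthetic row-stochastic "attention" matrices (near-uniform vs. sharply local, both invented for illustration) as transition matrices and computing their spectral gaps:
import numpy as np

# Sketch: spectral gap of two row-stochastic "attention" matrices over a length-n
# sequence - a near-uniform pattern mixes fast, a sharply local one mixes slowly.
np.random.seed(0)
n = 64
idx = np.arange(n)

def softmax_rows(S):
    S = S - S.max(axis=1, keepdims=True)
    P = np.exp(S)
    return P / P.sum(axis=1, keepdims=True)

near_uniform = softmax_rows(0.1 * np.random.randn(n, n))            # near-uniform attention
local = softmax_rows(-2.0 * np.abs(idx[:, None] - idx[None, :]))    # attends to neighbours

for name, P in [("near-uniform", near_uniform), ("local", local)]:
    eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    gap = 1.0 - eigvals[1]                        # 1 - |second largest eigenvalue|
    print(f"{name:12s}: spectral gap {gap:.3f}, rough mixing time ~ {1/max(gap, 1e-12):.1f} steps")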
Appendix P: Decision Framework for Choosing Process Models
CHOOSE THE RIGHT STOCHASTIC PROCESS MODEL
========================================================================
Is the system evolving over time?
+-- No -> Use static probability (Section01-Section05)
+-- Yes ->
Does the state space have a natural structure?
+-- Countable (e.g., integer counts, categories)
| +-- Discrete time -> Markov chain (Section07)
| +-- Continuous time -> Poisson or birth-death process
+-- Continuous (e.g., parameter vector, embedding)
+-- Does it have independent increments?
| +-- Yes, with Gaussian increments -> Brownian motion
| +-- Yes, with jump structure -> Poisson / Levy
| +-- No, but stationary -> ARMA or OU process
+-- Does it decrease toward an attractor?
| +-- Yes -> OU process or SDE with drift
+-- Does it only depend on the current state?
+-- Yes -> Markov process (continuous time)
+-- No, depends on full history -> GP or ARIMA
========================================================================
AI Application -> Recommended Model
---------------------------------------------------------------------
SGD parameter trajectory -> OU SDE (diffusion approx)
Mini-batch gradient noise -> Martingale difference sequence
Diffusion model forward -> VP-SDE (discrete OU)
LLM token generation -> Markov chain (Section07)
Request arrivals -> Poisson process
Hyperparameter optimisation -> Gaussian process (Bayesian opt)
RL value function -> Martingale transform
Training loss trajectory -> Supermartingale
NTK / infinite-width NN -> Gaussian process
Residual stream updates -> Additive noise SDE
---------------------------------------------------------------------
Appendix Q: Connections to Measure Theory
Q.1 Stochastic Processes as Measurable Functions
A stochastic process $\{X_t\}_{t \in T}$ on $(\Omega, \mathcal{F}, \mathbb{P})$ can be viewed as a measurable function from $\Omega$ to the path space $\mathbb{R}^T$. The probabilistic properties of the process are encoded in the pushforward measure $\mathbb{P} \circ X^{-1}$ on the path space.
For Brownian motion: The path space is $C[0, \infty)$ (continuous functions). The Wiener measure $\mathbb{W}$ is the unique probability measure on $C[0, \infty)$ such that the coordinate process $\omega \mapsto \omega(t)$ satisfies the four BM axioms. Constructing $\mathbb{W}$ rigorously is Wiener's 1923 achievement.
Q.2 The Radon-Nikodym Theorem and Change of Measure
If $P$ and $Q$ are probability measures on $(\Omega, \mathcal{F})$ with $Q \ll P$ ($Q$ is absolutely continuous wrt $P$), then there exists the Radon-Nikodym derivative $\frac{dQ}{dP}$ such that $Q(A) = \int_A \frac{dQ}{dP}\,dP$ for all $A \in \mathcal{F}$.
Girsanov's theorem: Under mild conditions, if $B_t$ is a BM under $P$ and $\theta_t$ is adapted, then:
$\tilde{B}_t = B_t - \int_0^t \theta_s\,ds$
is a BM under $Q$, where $\frac{dQ}{dP} = \exp\big(\int_0^T \theta_s\,dB_s - \tfrac{1}{2}\int_0^T \theta_s^2\,ds\big)$ (the Girsanov kernel).
For diffusion models: Girsanov's theorem is the mathematical basis for the reverse diffusion process. The reverse SDE under the data distribution corresponds to a change of measure from the noise distribution (standard BM), with the Radon-Nikodym derivative involving the score function . This makes the score-matching objective the likelihood maximisation objective under the reverse measure.
Appendix R: Supplementary: Concentration and Martingales Revisited
R.1 The Azuma-McDiarmid Connection - Full Detail
In Section3.5 we introduced the Doob martingale construction as the proof of McDiarmid's inequality. Here we make the connection fully explicit.
Setup. Let $X_1, \dots, X_n$ be independent random variables and let $f$ satisfy the bounded differences condition: for each $i$,
$\sup_{x_1, \dots, x_n,\ x_i'}\big|f(x_1, \dots, x_i, \dots, x_n) - f(x_1, \dots, x_i', \dots, x_n)\big| \le c_i$
Step 1: Doob martingale. Define $M_k = \mathbb{E}\big[f(X_1, \dots, X_n) \mid X_1, \dots, X_k\big]$ for $k = 0, \dots, n$. Then $M_0 = \mathbb{E}[f]$, $M_n = f(X_1, \dots, X_n)$, and $(M_k)$ is a martingale.
Step 2: Bounded differences. The martingale difference $D_k = M_k - M_{k-1}$ lies a.s. in an interval of length $c_k$, because changing only the $k$-th coordinate changes $f$ by at most $c_k$ and the remaining coordinates are integrated out identically.
Step 3: Azuma. Apply the Azuma-Hoeffding argument (proved in SectionN.2, with Hoeffding's lemma applied to differences confined to intervals of length $c_k$) to the martingale $(M_k)$:
$\mathbb{P}\big(f - \mathbb{E}[f] \ge t\big) \le \exp\!\Big(-\dfrac{2t^2}{\sum_{i=1}^{n} c_i^2}\Big)$
This IS McDiarmid's inequality. The concentration inequality of Section05 is a consequence of martingale theory.
R.2 The Freedman Inequality
Azuma's inequality uses only the almost-sure bound $|D_k| \le c_k$. Freedman's inequality (1975) also uses conditional variances, giving a Bernstein-type improvement:
Theorem (Freedman, 1975). Let $(M_n)$ be a martingale with $|M_k - M_{k-1}| \le c$ a.s. Define $V_n = \sum_{k=1}^{n}\mathbb{E}\big[D_k^2 \mid \mathcal{F}_{k-1}\big]$ (total conditional variance). Then for $t, v > 0$:
$\mathbb{P}\big(M_n - M_0 \ge t,\ V_n \le v\big) \le \exp\!\Big(-\dfrac{t^2}{2(v + ct/3)}\Big)$
This is a martingale version of Bernstein's inequality. The event $\{V_n \le v\}$ restricts to "low variance" trajectories; a union bound over a grid of $v$ values gives an unconditional high-probability statement.
For AI: Freedman's inequality gives tighter bounds for SGD convergence when the gradient variance is small. When the realised conditional variance $V_n$ is much smaller than the worst-case bound $nc^2$, Freedman's bound is tighter than Azuma's $\exp(-t^2/(2nc^2))$.
Appendix S: Practice Problems (All Levels)
Level * (Computation)
S.1. Let $M_n = r^{S_n}$ where $S_n$ is the biased SRW with $\mathbb{P}(X_i = +1) = p$ and $\mathbb{P}(X_i = -1) = q = 1 - p$. For what value of $r$ is $M_n$ a martingale?
Answer: $r = q/p$. Set $\mathbb{E}[r^{X_i}] = pr + q/r$ equal to 1: $pr^2 - r + q = 0$, so $r = 1$ (trivial) or $r = q/p$ (the De Moivre martingale).
S.2. A Poisson process has rate $\lambda$ (events per hour). Find $\mathbb{P}(N(t) = k)$ and the expected time to the 4th event.
Answer: $\mathbb{P}(N(t) = k) = e^{-\lambda t}\frac{(\lambda t)^k}{k!}$. Time to 4th event: $T_4 \sim \mathrm{Gamma}(4, \lambda)$, so $\mathbb{E}[T_4] = 4/\lambda$ hours.
S.3. For the OU process with parameters $\theta$, $\mu$, $\sigma$ and initial value $x_0$: compute $\mathbb{E}[X_t]$ and $\mathrm{Var}(X_t)$.
Answer: $\mathbb{E}[X_t] = \mu + (x_0 - \mu)e^{-\theta t}$. $\mathrm{Var}(X_t) = \frac{\sigma^2}{2\theta}\big(1 - e^{-2\theta t}\big)$.
Level ** (Theory)
S.4. Show that for a square-integrable martingale $(M_n)$, the sequence $M_n^2 - \langle M\rangle_n$ is also a martingale, where $\langle M\rangle_n = \sum_{k \le n}\mathbb{E}[D_k^2 \mid \mathcal{F}_{k-1}]$ is the predictable quadratic variation.
S.5. Prove that if $\phi$ is convex and $(M_n)$ is a martingale with $\mathbb{E}[|\phi(M_n)|] < \infty$, then $\phi(M_n)$ is a submartingale (using Jensen's inequality for conditional expectation).
S.6. For the compound Poisson process $X_t = \sum_{i=1}^{N(t)} Y_i$ with $N(t) \sim \mathrm{Poisson}(\lambda t)$ and iid jumps $Y_i$: compute the MGF $\mathbb{E}[e^{sX_t}]$ and use it to find $\mathbb{E}[X_t]$ and $\mathrm{Var}(X_t)$.
Level *** (AI Applications)
S.7. (Diffusion model noise schedule design) You want a noise schedule $\{\beta_t\}$ for the DDPM forward process such that the SNR $\bar{\alpha}_t/(1 - \bar{\alpha}_t)$ decreases linearly from its initial to its final value over $T$ steps. Derive the required $\bar{\alpha}_t$ (and hence $\beta_t$) and compare to the linear $\beta$ schedule.
S.8. (SGD stationary distribution) In a 2D quadratic loss $L(\theta) = \tfrac{1}{2}\theta^\top H\theta$ with $H = \mathrm{diag}(\lambda_1, \lambda_2)$, SGD with step size $\eta$ and isotropic gradient noise has a Gaussian stationary distribution. Show that the ratio of the standard deviations in the two coordinate directions is $\sqrt{\lambda_2/\lambda_1}$, and interpret this for optimisation.
S.9. (Martingale in RLHF) In PPO with the KL-regularised reward $r_\phi(x, y) - \beta\,\mathrm{KL}\big(\pi_\theta \,\|\, \pi_{\text{ref}}\big)$, show that the KL penalty term creates a non-martingale perturbation to the policy gradient estimator. Under what conditions on $\beta$ does the perturbation become negligible?
Appendix T: Theorem Index
| Theorem | Statement | Section |
|---|---|---|
| Doob's OST | $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$ under bounded/UI conditions | Section3.3 |
| Doob's Maximal Inequality | $\mathbb{P}(\max_{k \le n} M_k \ge \lambda) \le \mathbb{E}[M_n]/\lambda$ | Section3.4, App. N |
| Martingale Convergence | $L^1$-bounded martingale converges a.s. | Section3.4 |
| Azuma's Inequality | Bounded differences -> $\mathbb{P}(M_n - M_0 \ge t) \le \exp(-t^2/2\sum c_k^2)$ | App. N |
| McDiarmid's Inequality | Bounded-differences function -> Azuma on the Doob martingale | Section3.5 |
| Freedman's Inequality | Martingale Bernstein with conditional variance | App. R |
| Polya's Theorem | SRW on $\mathbb{Z}^d$: recurrent iff $d \le 2$ | Section4.2, App. N |
| Donsker's Theorem | Rescaled SRW $\Rightarrow$ BM in $C[0,1]$ | Section6.3, App. E |
| Levy Characterisation | Continuous local martingale + QV $[M]_t = t$ $\Rightarrow$ BM | Section6.2 |
| Ito's Lemma | $df = (\partial_t f + \mu\,\partial_x f + \tfrac12\sigma^2\partial_x^2 f)\,dt + \sigma\,\partial_x f\,dB_t$ | App. A |
| Wiener-Khinchin | ACVF $\leftrightarrow$ PSD via Fourier transform | Section7.3 |
| Bochner's Theorem | Stationary kernel is valid iff its spectral measure is non-negative | Section7.3 |
| Birkhoff Ergodic | Time average = ensemble average a.s. | Section7.2 |
| Girsanov's Theorem | Change of drift = change of measure | App. Q |
| Fokker-Planck | PDE for the density of an SDE | App. N |
| Anderson's Reverse SDE | Reverse SDE = forward SDE + score | Section6.6 |
| Clark-Ocone Formula | Malliavin representation on Wiener space | App. J |
| Wald's Identity | $\mathbb{E}[S_\tau] = \mu\,\mathbb{E}[\tau]$ | App. L |
Appendix U: Continuous-Time Martingales
U.1 Local Martingales
In continuous time, the natural generalisation of a martingale is a local martingale: a process $(M_t)$ such that there exist stopping times $\tau_n \uparrow \infty$ and each stopped process $M_{t \wedge \tau_n}$ is a martingale.
Every continuous local martingale $(M_t)$ with $M_0 = 0$ can be written as a time-changed Brownian motion (by the Dambis-Dubins-Schwarz theorem):
$M_t = B_{[M]_t}$
where $B$ is a standard BM and $[M]_t$ is the quadratic variation of $M$.
The Ito integral $\int_0^t H_s\,dB_s$ is a local martingale (and a true martingale when $\mathbb{E}\big[\int_0^t H_s^2\,ds\big] < \infty$).
U.2 The Martingale Representation Theorem
Theorem (Martingale Representation). Every square-integrable martingale $(M_t)$ adapted to the Brownian filtration can be written as:
$M_t = M_0 + \int_0^t H_s\,dB_s$
for some adapted process $H$ with $\mathbb{E}\big[\int_0^t H_s^2\,ds\big] < \infty$.
For AI: In score-based diffusion model training, the training loss can be expressed as an Ito integral against the Wiener process of the forward noise. The neural network effectively learns the integrand that represents the reverse process. The martingale representation theorem guarantees that such a representation exists - it is the theoretical justification for why neural networks can learn to "undo" Brownian noise.
U.3 Feynman-Kac Formula
For the SDE $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dB_t$ and a terminal payoff function $g$, the Feynman-Kac formula relates the SDE to a PDE:
$u(x, t) = \mathbb{E}\big[g(X_T) \mid X_t = x\big]$ satisfies the PDE $\partial_t u + \mu(x)\,\partial_x u + \tfrac{1}{2}\sigma^2(x)\,\partial_x^2 u = 0$ with terminal condition $u(x, T) = g(x)$.
For AI: The score function in diffusion models satisfies a backward Kolmogorov equation (the PDE form of the reverse SDE). Feynman-Kac connects this PDE to the expectation form , which is directly implemented by the neural denoiser.
Appendix V: Numerical Methods for SDEs
V.1 Euler-Maruyama Method
The simplest numerical scheme for $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dB_t$:
$X_{n+1} = X_n + \mu(X_n)\,\Delta t + \sigma(X_n)\,\Delta B_n$
where $\Delta B_n \sim N(0, \Delta t)$.
Convergence: Euler-Maruyama has strong order 1/2 (pathwise error $O(\sqrt{\Delta t})$) and weak order 1 (distributional error $O(\Delta t)$). Compare to the Euler method for ODEs, which has pathwise order 1 - the lower order reflects the irregularity of BM.
V.2 Milstein Method
The Milstein method improves strong convergence to order 1 by adding an Ito correction:
$X_{n+1} = X_n + \mu(X_n)\,\Delta t + \sigma(X_n)\,\Delta B_n + \tfrac{1}{2}\sigma(X_n)\,\sigma'(X_n)\big((\Delta B_n)^2 - \Delta t\big)$
The extra term uses the Ito formula's second-order term. For scalar SDEs, Milstein achieves strong order 1, matching the order of the Euler method for ODEs.
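A sketch comparing the strong error of Euler-Maruyama and Milstein on GBM, which has an exact solution to compare against; parameters and step sizes are illustrative:
import numpy as np

# Sketch: strong error of Euler-Maruyama vs Milstein for GBM dS = mu*S dt + sig*S dB,
# exact solution S_T = S0 * exp((mu - sig^2/2) T + sig * B_T), same Brownian path reused.
np.random.seed(0)
mu, sig, S0, T = 0.05, 0.5, 1.0, 1.0
n_paths, n_fine = 2_000, 1024
dB_fine = np.random.normal(0, np.sqrt(T / n_fine), (n_paths, n_fine))
B_T = dB_fine.sum(axis=1)
S_exact = S0 * np.exp((mu - 0.5 * sig**2) * T + sig * B_T)

for n in [16, 64, 256]:
    dt = T / n
    dB = dB_fine.reshape(n_paths, n, n_fine // n).sum(axis=2)  # coarsen the same path
    S_em, S_mil = np.full(n_paths, S0), np.full(n_paths, S0)
    for k in range(n):
        S_em = S_em + mu * S_em * dt + sig * S_em * dB[:, k]
        S_mil = (S_mil + mu * S_mil * dt + sig * S_mil * dB[:, k]
                 + 0.5 * sig**2 * S_mil * (dB[:, k]**2 - dt))   # Ito correction term
    print(f"n={n:4d}: strong error EM {np.abs(S_em - S_exact).mean():.4f}, "
          f"Milstein {np.abs(S_mil - S_exact).mean():.4f}")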
V.3 DDPM as Euler-Maruyama
The DDPM update $x_t = \sqrt{1 - \beta_t}\,x_{t-1} + \sqrt{\beta_t}\,\epsilon_t$ is (to first order in $\beta_t$) the Euler-Maruyama discretisation of the VP-SDE with step size $\Delta t = 1/T$:
$x_t \approx x_{t-1} - \tfrac{1}{2}\beta_t\,x_{t-1} + \sqrt{\beta_t}\,\epsilon_t$
where the approximation $\sqrt{1 - \beta_t} \approx 1 - \tfrac{1}{2}\beta_t$ holds for small $\beta_t$. DDIM corresponds to a deterministic (non-Markovian "implicit") sampler that follows the probability flow ODE, which allows larger step sizes.
Appendix W: Bridge Processes and Conditional Diffusions
W.1 Brownian Bridge
A Brownian bridge from $a$ to $b$ on $[0, T]$ is a Brownian motion conditioned to be at $b$ at time $T$:
$X_t = a + B_t - \dfrac{t}{T}\big(B_T - (b - a)\big)$
This is a Gaussian process with mean $a + \frac{t}{T}(b - a)$ and covariance $\mathrm{Cov}(X_s, X_t) = \frac{s(T - t)}{T}$ for $s \le t$.
For AI: Stochastic interpolation (Albergo & Vanden-Eijnden, 2023) uses Brownian bridges to construct generative models that interpolate between data and noise along curved paths, rather than straight (linear interpolation) or OU (DDPM) paths. Flow matching (Lipman et al., 2022) uses deterministic bridges (straight-line paths) for efficient training.
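A simulation sketch of a Brownian bridge constructed by conditioning a BM path (endpoints and discretisation are illustrative), checking the pinned endpoints and the mid-point variance:
import numpy as np

# Sketch: Brownian bridge from a to b on [0, T] via X_t = a + B_t - (t/T)(B_T - (b - a)).
np.random.seed(0)
a, b, T, n_steps, n_paths = 0.0, 1.0, 1.0, 1_000, 10_000
dt = T / n_steps
t = np.linspace(0, T, n_steps + 1)
B = np.hstack([np.zeros((n_paths, 1)),
               np.cumsum(np.random.normal(0, np.sqrt(dt), (n_paths, n_steps)), axis=1)])
X = a + B - (t / T) * (B[:, -1:] - (b - a))

mid = n_steps // 2
print(f"X_0 = {X[:, 0].mean():.3f} (should be {a}), X_T = {X[:, -1].mean():.3f} (should be {b})")
print(f"Var(X at T/2) = {X[:, mid].var():.4f}, theory t(T-t)/T = {0.5 * 0.5 / T:.4f}")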
W.2 Conditional Score Function
Given a forward process $q(x_t \mid x_0) = N\big(\sqrt{\bar{\alpha}_t}\,x_0,\ (1 - \bar{\alpha}_t)I\big)$, the conditional score is:
$\nabla_{x_t}\log q(x_t \mid x_0) = -\dfrac{x_t - \sqrt{\bar{\alpha}_t}\,x_0}{1 - \bar{\alpha}_t} = -\dfrac{\epsilon}{\sqrt{1 - \bar{\alpha}_t}}$
where $\epsilon$ is the noise added in the forward process. The marginal score satisfies:
$\nabla_{x_t}\log p_t(x_t) = \mathbb{E}\big[\nabla_{x_t}\log q(x_t \mid x_0)\ \big|\ x_t\big]$
This is a conditional expectation - a Doob martingale evaluated at $x_t$! The diffusion model learns to estimate $\mathbb{E}[\epsilon \mid x_t]$, which by Tweedie's formula is equivalent to the MMSE denoiser $\mathbb{E}[x_0 \mid x_t]$. The entire diffusion model training objective is a stochastic process estimation problem formulated in martingale language.
Appendix X: Glossary
| Term | Definition |
|---|---|
| Adapted process | Process $X_t$ that is $\mathcal{F}_t$-measurable at each $t$ |
| Autocorrelation function (ACF) | $\rho(\tau) = \mathrm{Cov}(X_t, X_{t+\tau}) / \mathrm{Var}(X_t)$ |
| Azuma's inequality | Concentration bound for martingales with bounded differences |
| Brownian motion | Continuous Gaussian process with independent stationary increments |
| Cadlag | Right-continuous with left limits (French acronym) |
| Compensated Poisson | $N(t) - \lambda t$ - a martingale |
| Doob martingale | $M_k = \mathbb{E}[f(X_1, \dots, X_n) \mid X_1, \dots, X_k]$ |
| Ergodic theorem | Time average $\to$ ensemble average a.s. |
| Filtration | Increasing family of $\sigma$-algebras: $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \le t$ |
| Gaussian process (GP) | Process where every finite marginal is jointly Gaussian |
| Ito integral | Stochastic integral $\int_0^t H_s\,dB_s$ adapted to the BM filtration |
| Ito's lemma | Chain rule for BM: $df = (\partial_t f + \mu\,\partial_x f + \tfrac12\sigma^2\partial_x^2 f)\,dt + \sigma\,\partial_x f\,dB_t$ |
| Kernel | Positive semidefinite function $k(x, x')$ - covariance of a GP |
| Local martingale | Stopped process $M_{t \wedge \tau_n}$ is a martingale for each $n$ |
| Martingale | $\mathbb{E}[M_t \mid \mathcal{F}_s] = M_s$ for $s \le t$ |
| Martingale difference | $D_k = M_k - M_{k-1}$ with $\mathbb{E}[D_k \mid \mathcal{F}_{k-1}] = 0$ |
| OU process | $dX_t = \theta(\mu - X_t)\,dt + \sigma\,dB_t$ - mean-reverting Gaussian process |
| Optional Stopping Theorem | $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$ under appropriate conditions |
| Poisson process | Counting process with iid exponential inter-arrival times |
| Quadratic variation | $[B]_t = t$ for BM - central property of stochastic integration |
| Score function | $\nabla_x \log p_t(x)$ - gradient of the log-density |
| Stationary process | Distribution invariant under time shifts |
| Stopping time | $\tau$ with $\{\tau \le t\} \in \mathcal{F}_t$ - no look-ahead |
| Submartingale | $\mathbb{E}[M_t \mid \mathcal{F}_s] \ge M_s$ |
| Supermartingale | $\mathbb{E}[M_t \mid \mathcal{F}_s] \le M_s$ |
| Thinning | Poisson($\lambda$) process subsampled with prob $p$ is Poisson($p\lambda$) |
| Uniform integrability | $\sup_t \mathbb{E}\big[|X_t|\,\mathbf{1}_{\{|X_t| > K\}}\big] \to 0$ as $K \to \infty$ |
| VP-SDE | Variance-preserving SDE - DDPM forward process continuum |
| Wide-sense stationary | Constant mean + covariance depends only on lag |
| Wiener measure | Probability measure on $C[0, \infty)$ defining BM paths |