Stochastic Processes: Part 11: Exercises to Appendix U: Continuous-Time Martingales
11. Exercises
Exercise 1 * - Martingale Verification Let $X_1, X_2, \dots$ be iid with $\mathbb{E}[X_i] = 0$, $\operatorname{Var}(X_i) = \sigma^2 < \infty$. Define $S_n = \sum_{i=1}^n X_i$ and $M_n = S_n^2 - n\sigma^2$.
(a) Prove $S_n$ is a martingale with respect to its natural filtration. (b) Prove $M_n$ is a martingale. What does the OST imply for a bounded stopping time $\tau$? (c) Verify (a) and (b) numerically via Monte Carlo simulation (e.g., with symmetric $\pm 1$ steps). Check $\mathbb{E}[S_n] = 0$ and $\mathbb{E}[M_n] = 0$. (d) Let $\tau$ be a bounded stopping time (e.g., a capped first hitting time) for a fixed simulation. Verify $\mathbb{E}[S_\tau] = 0$ and $\mathbb{E}[M_\tau] = 0$ empirically.
Exercise 2 * - Doob's OST: Gambler's Ruin A fair gambler starts with $a$ dollars and plays unit-stake rounds until reaching $N$ dollars or going broke. Let $S_n$ be the fortune after $n$ rounds and $\tau$ the stopping time.
(a) Use the OST applied to $S_n$ to derive $\mathbb{P}(\text{reach } N \text{ before } 0) = a/N$. Compute it numerically for concrete values of $a$ and $N$. (b) Use the OST applied to $S_n^2 - n$ to derive $\mathbb{E}[\tau] = a(N - a)$. Compute it for the given values. (c) Simulate many games and verify both results empirically. (d) For a biased game with $\mathbb{P}(\text{win}) = p \neq 1/2$, use the exponential martingale $(q/p)^{S_n}$ (with $q = 1 - p$) to derive the ruin probability. Verify numerically.
Exercise 3 * - Poisson Process Properties Let $N(t)$ be a Poisson process with rate $\lambda$.
(a) Compute $\mathbb{P}(N(t) = k)$ exactly and via simulation over many trials. (b) Show that $M(t) = N(t) - \lambda t$ is a martingale (verify $\mathbb{E}[M(t)] = 0$ and $\mathbb{E}[M(t) \mid \mathcal{F}_s] = M(s)$ for $s < t$). (c) Thinning: Generate a Poisson process with rate $\lambda$ and keep each event independently with probability $p$. Verify the thinned process is Poisson($p\lambda$) by checking its inter-arrival distribution. (d) Superposition: Show empirically that the superposition of independent Poisson processes with rates $\lambda_1$ and $\lambda_2$ is Poisson($\lambda_1 + \lambda_2$).
Exercise 4 ** - Doob Martingale and McDiarmid Let $f(X_1, \dots, X_n)$ be a function of independent random variables $X_i$ taking values in $[0, 1]$.
(a) Show $f$ has bounded differences: changing one coordinate changes $f$ by at most $c_i$ for each $i$. (b) Construct the Doob martingale $Z_k = \mathbb{E}[f \mid X_1, \dots, X_k]$ and compute $Z_0$ and $Z_n$ analytically. (c) Apply Azuma's inequality to bound $\mathbb{P}(|f - \mathbb{E}[f]| \geq t)$ and compare to McDiarmid's bound. (d) Verify both bounds numerically: compute the empirical tail probability over many simulations and compare to the bound.
Exercise 5 ** - Brownian Motion Properties Simulate $W_t$ on $[0, 1]$ using $n$ discretisation steps.
(a) Verify the increment distribution: check that $W_t - W_s \sim \mathcal{N}(0, t - s)$ holds for several pairs $(s, t)$ using a KS test. (b) Verify quadratic variation: compute $\sum_i (W_{t_{i+1}} - W_{t_i})^2$ for successively finer partitions and show it converges to $t$. (c) Verify non-differentiability heuristically: compute finite differences $(W_{t+h} - W_t)/h$ for decreasing $h$ and observe they grow as $h^{-1/2}$. (d) Simulate the OU process $dX_t = \theta(\mu - X_t)\,dt + \sigma\,dW_t$ with Euler-Maruyama. Verify the stationary distribution is $\mathcal{N}(\mu, \sigma^2/(2\theta))$.
Exercise 6 ** - Gaussian Process Posterior Let $f \sim \mathcal{GP}(0, k)$ with RBF kernel $k(x, x') = \exp\!\big(-(x - x')^2/(2\ell^2)\big)$.
(a) Draw 5 sample paths from the GP prior on a grid using the covariance matrix. (b) Given noisy observations $y_i = f(x_i) + \varepsilon_i$ at a few inputs, with $\varepsilon_i \sim \mathcal{N}(0, \sigma_n^2)$, compute the GP posterior mean and variance. (c) Plot the posterior mean $\pm 2$ standard deviations and verify that all observation points are covered. (d) Compare the posterior using the RBF kernel vs the Matern-1/2 kernel. Which has smoother interpolation? Which has faster uncertainty growth away from observations?
Exercise 7 *** - SGD as Diffusion Process Minimise the quadratic $f(\theta) = \frac{\lambda}{2}\theta^2$ using SGD with learning rate $\eta$ and additive gradient noise $\xi_k$ of variance $\sigma^2$.
(a) Show that the SGD iterates satisfy $\theta_{k+1} = (1 - \eta\lambda)\theta_k + \eta\,\xi_k$. Find the stationary distribution of $\theta_k$. (b) Verify analytically that the stationary variance is approximately $\eta\sigma^2/(2\lambda)$ for small $\eta$. (c) Simulate the recursion for many steps. Plot the stationary distribution and compare to the analytical prediction. (d) Verify the diffusion approximation: the continuous-time SDE $d\theta_t = -\lambda\theta_t\,dt + \sqrt{\eta}\,\sigma\,dW_t$ has stationary distribution $\mathcal{N}(0, \eta\sigma^2/(2\lambda))$. Compare to part (b).
Exercise 8 *** - RL TD-Error as Martingale Consider a Markov reward process with a small state space, transition matrix $P$, reward vector $r$, and discount $\gamma$.
(a) Compute the true value function $V = (I - \gamma P)^{-1} r$ exactly. (b) Implement TD(0) to estimate $V$: run episodes of length 50, updating $V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t$ with TD error $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$. Compare to the exact $V$. (c) Verify the martingale property of the TD error: under the true $V$, the TD error satisfies $\mathbb{E}[\delta_t \mid s_t] = 0$. Verify numerically. (d) Implement a variance-reduced version using a baseline (mean reward). Show this reduces the variance of the gradient estimate without introducing bias.
12. Why This Matters for AI (2026 Perspective)
| Concept | AI/ML Impact |
|---|---|
| Filtrations and adaptedness | Causal vs. bidirectional attention; early stopping as stopping time; data leakage detection |
| Martingales | TD-learning convergence; REINFORCE gradient estimators; unbiased gradient noise in SGD |
| Optional Stopping Theorem | Early stopping criteria; gambler's ruin models of gradient explosion; RL episode termination |
| Doob Martingale | Unified proof of McDiarmid/Azuma; online learning regret bounds; generalisation theory |
| Poisson Process | LLM inference request queues; KV-cache hit modelling; continuous batching server design |
| Brownian Motion | Forward process in diffusion models; SGD noise continuous-time approximation; random feature maps |
| OU Process | DDPM/Score-SDE forward processes; L2 regularisation as mean reversion; Adam momentum as filtering |
| Geometric Brownian Motion | Residual stream norm growth; multiplicative noise in LoRA fine-tuning |
| Donsker's Theorem | Justifies SDE approximation of SGD; random walk -> BM limit for CLT on paths |
| Wide-Sense Stationarity | RoPE relative position encoding; ALiBi attention bias; translation-invariant kernels |
| Ergodic Theorem | Time-average = ensemble average in training; failure mode in continual learning |
| Gaussian Processes | Bayesian hyperparameter search; neural tangent kernel in infinite-width limit; GAN discriminator theory |
| GP Posterior | Uncertainty quantification in Bayesian deep learning; active learning for LLM fine-tuning data selection |
| Score SDE | Unified framework for DDPM, DDIM, consistency models, flow matching |
| Martingale CLT | Asymptotic normality of SGD parameter estimates; statistical inference on trained models |
13. Conceptual Bridge
Looking backward: This section builds directly on the static probability of Section01-Section05. Filtrations formalise conditional expectations (Section04) over time; the Doob martingale makes McDiarmid's inequality (Section05) a consequence of Azuma applied to a martingale; the Gaussian process is the stochastic-process version of the multivariate Gaussian (Section03). Every concept here has a static probability antecedent.
Looking forward: Stochastic processes enable the dynamic probability of Section07 and beyond. Markov chains (Section07) are stochastic processes satisfying the Markov property - the specialisation of the general framework to memoryless dynamics. Statistics (Section07-Statistics) uses time series models (AR, MA, ARIMA) which are stationary stochastic processes analysed via the autocorrelation and spectral density tools developed here. Optimisation (Section08) uses the SDE/martingale formalism to analyse SGD convergence and diffusion-based sampling.
The martingale as a unifying concept: The martingale is arguably the most powerful single concept in probability theory. It appears in: (a) concentration inequalities as the Doob/Azuma construction, (b) SGD as a near-martingale gradient process, (c) RL value functions as martingale transforms, (d) diffusion model reverse processes as time-reversed SDEs with martingale driving noise, (e) sequential hypothesis testing (SPRT) as a martingale stopping problem. Understanding martingales is understanding the probabilistic structure that makes machine learning algorithms work.
STOCHASTIC PROCESSES: POSITION IN CURRICULUM
========================================================================
STATIC PROBABILITY (Section01-Section05)
---------------------------------------------
Section01 Random Variables -> Section02 Distributions
Section03 Joint Dist. -> Section04 Expectation/MGF
Section05 Concentration -> McDiarmid = Doob + Azuma
---------------------------------------------
| filtrations, martingales, paths
v
+---------------------------------------------+
| Section06 STOCHASTIC PROCESSES (THIS SECTION) |
| ----------------------------------------- |
| Filtrations Martingales OST |
| Random Walks Poisson BM/OU |
| Stationarity GPs SGD/RL/Diff |
+---------------------------------------------+
|
+--------+--------+
v v
Section07 Markov Chains Section07-Stat/05 Time Series
(Markov property, (AR/MA/ARIMA, spectral
MCMC, PageRank, analysis, forecasting,
RL policy eval) seasonal decomposition)
|
v
Section08 Optimisation (SDE/martingale-based
convergence of SGD, Langevin dynamics,
diffusion-based sampling)
========================================================================
The AI synthesis: In 2026, the stochastic process framework permeates every major AI subfield. Diffusion models are explicit SDE constructions. RL convergence relies on martingale theory. LLM evaluation uses Hoeffding/CLT on sequences of i.i.d. samples (Poisson processes of test cases). Bayesian optimisation uses GP posteriors. Continual learning grapples with non-stationarity. The practitioner who understands stochastic processes sees the probabilistic skeleton underlying methods that otherwise appear unrelated.
This completes Section06 Stochastic Processes. The framework built here - filtrations, martingales, the canonical processes - is the common language of probability theory in motion.
<- Back to Chapter 6: Probability Theory | Next: Markov Chains ->
Appendix A: Stochastic Calculus Primer
A.1 The Ito Integral
Classical calculus integrates smooth functions. The Ito integral extends integration to Brownian motion, where the integrand is a stochastic process.
Why ordinary integrals fail: Define $\int_0^T W_t\,dW_t$ using Riemann sums. Two natural choices:
- Left-endpoint (Ito): $\sum_i W_{t_i}\,(W_{t_{i+1}} - W_{t_i})$
- Midpoint (Stratonovich): $\sum_i W_{(t_i + t_{i+1})/2}\,(W_{t_{i+1}} - W_{t_i})$
These give different limits as the mesh goes to 0! This is unique to stochastic integration - in classical calculus all Riemann sums give the same limit.
Computation:
$$\int_0^T W_t\,dW_t = \frac{1}{2}W_T^2 - \frac{T}{2} \ \text{(Ito)}, \qquad \frac{1}{2}W_T^2 \ \text{(Stratonovich)}.$$
The Ito integral gives an extra $-T/2$ term from the quadratic variation $[W]_T = T$. This is the source of Ito's lemma.
The Ito integral is a martingale: For any adapted square-integrable integrand $H_t$:
$$\mathbb{E}\left[\int_0^T H_t\,dW_t\right] = 0, \qquad \mathbb{E}\left[\left(\int_0^T H_t\,dW_t\right)^2\right] = \int_0^T \mathbb{E}[H_t^2]\,dt \ \text{(Ito isometry)}.$$
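Both facts can be checked numerically with left-endpoint Riemann sums - a minimal Monte Carlo sketch (the path and step counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 1000, 10000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)

# Left-endpoint (Ito) Riemann sum of  int_0^T W_t dW_t
ito = (W[:, :-1] * dW).sum(axis=1)

# The Ito computation predicts  int_0^T W dW = (W_T^2 - T)/2  path by path
pred = (W[:, -1]**2 - T) / 2
print("mean |sum - prediction|:", np.abs(ito - pred).mean())   # small discretisation error

# Martingale property and Ito isometry: E[I] = 0, Var(I) = int_0^T E[W_t^2] dt = T^2/2
print("E[I] ~", ito.mean(), "  Var(I) ~", ito.var(), "  (theory: 0 and 0.5)")
```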
A.2 Ito's Lemma
Theorem (Ito's Lemma). Let $X_t$ satisfy the SDE $dX_t = \mu_t\,dt + \sigma_t\,dW_t$. For any $C^{1,2}$ function $f(t, x)$:
$$df(t, X_t) = \left(\frac{\partial f}{\partial t} + \mu_t\,\frac{\partial f}{\partial x} + \frac{1}{2}\sigma_t^2\,\frac{\partial^2 f}{\partial x^2}\right)dt + \sigma_t\,\frac{\partial f}{\partial x}\,dW_t.$$
The extra term $\frac{1}{2}\sigma_t^2\,\partial_x^2 f$ (the Ito correction) arises from the quadratic variation $(dW_t)^2 = dt$. It has no classical analogue.
Example: GBM. $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$. Apply Ito to $f(S) = \log S$:
$$d\log S_t = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dW_t.$$
The Ito correction converts the arithmetic drift $\mu$ to the geometric/log drift $\mu - \sigma^2/2$.
For AI: The DDPM forward process is a linear SDE, $dx_t = -\frac{1}{2}\beta(t)\,x_t\,dt + \sqrt{\beta(t)}\,dW_t$. The Ito correction explains why the SNR does not decrease linearly even for linear schedules - the correction term shifts the effective noise profile.
A.3 Stochastic Differential Equations
General Ito SDE: $dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dW_t$
- $\mu(x, t)$: drift (deterministic tendency)
- $\sigma(x, t)$: diffusion coefficient (noise strength)
Existence and uniqueness (Lipschitz conditions): If $\mu$ and $\sigma$ are Lipschitz in $x$ uniformly in $t$ and have linear growth, the SDE has a unique strong solution (Picard-Lindelof for SDEs).
Important SDEs in ML:
| Process | SDE | Notes |
|---|---|---|
| BM | $dX_t = dW_t$, $X_0 = 0$ | |
| OU | $dX_t = \theta(\mu - X_t)\,dt + \sigma\,dW_t$ | Mean-reverting |
| GBM | $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$ | Multiplicative noise |
| Langevin | $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dW_t$ | Stationary $\pi \propto e^{-U}$ |
| SGD-SDE | $d\theta_t = -\nabla f(\theta_t)\,dt + \sqrt{\eta}\,\Sigma^{1/2}\,dW_t$ | SGD diffusion approx. |
| DDPM fwd | $dx_t = -\frac{1}{2}\beta(t)\,x_t\,dt + \sqrt{\beta(t)}\,dW_t$ | VP-SDE for diffusion |
Langevin dynamics and MCMC: The Langevin SDE $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dW_t$ has stationary distribution $\pi(x) \propto e^{-U(x)}$. This is the continuous-time version of Langevin MCMC, used for sampling from complex posteriors (e.g., Bayesian neural networks, energy-based models).
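A minimal unadjusted Langevin sampler (Euler discretisation, no Metropolis correction) targeting the standard normal via $U(x) = x^2/2$; the step size and chain count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_U(x):
    # U(x) = x^2 / 2, so pi(x) ∝ exp(-U(x)) is the standard normal
    return x

eps, n_steps = 0.01, 20000
x = np.zeros(500)                          # 500 parallel chains, all started at 0
for _ in range(n_steps):
    # Euler step of dX = -grad U dt + sqrt(2) dW
    x = x - eps * grad_U(x) + np.sqrt(2 * eps) * rng.normal(size=x.shape)

# After burn-in the chains should be approximately N(0, 1) samples
print(x.mean(), x.var())
```

Note the discretisation introduces an $O(\varepsilon)$ bias in the stationary law; Metropolis-adjusted Langevin (MALA) removes it.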
Appendix B: Convergence Theorems for Martingales
B.1 Martingale Convergence
Theorem (Doob's Martingale Convergence, 1953). If $(M_n)$ is a martingale with $\sup_n \mathbb{E}|M_n| < \infty$ ($L^1$-bounded), then $M_\infty = \lim_n M_n$ exists a.s. and $\mathbb{E}|M_\infty| < \infty$.
This theorem is foundational for proving convergence of:
- Bayesian posterior updates (likelihood ratio martingales)
- Online learning algorithms (regret bounds via bounded martingales)
- Stochastic approximation algorithms (Robbins-Monro)
The $L^2$ version: If $\sup_n \mathbb{E}[M_n^2] < \infty$, then $M_n \to M_\infty$ both a.s. and in $L^2$.
B.2 Upcrossing Inequality
The proof of martingale convergence uses the upcrossing inequality: the number $U_n[a, b]$ of times a martingale upcrosses an interval $[a, b]$ (goes from below $a$ to above $b$) by time $n$ satisfies:
$$\mathbb{E}\big[U_n[a, b]\big] \leq \frac{\mathbb{E}\big[(M_n - a)^-\big]}{b - a}.$$
If $(M_n)$ is $L^1$-bounded, then $\sup_n \mathbb{E}[U_n[a, b]] < \infty$, so upcrossings are finite for every rational interval $[a, b]$. This means oscillation in the limit is impossible, forcing convergence.
B.3 Uniform Integrability and Convergence
Definition. A family $(X_i)_{i \in I}$ is uniformly integrable (UI) if:
$$\lim_{K \to \infty}\ \sup_{i \in I}\ \mathbb{E}\big[|X_i|\,\mathbf{1}\{|X_i| > K\}\big] = 0.$$
UI is the condition needed to exchange limit and expectation: without UI, a.s. convergence $X_n \to X$ does not imply $\mathbb{E}[X_n] \to \mathbb{E}[X]$ ($L^1$ convergence).
Sufficient condition for UI: $\sup_i \mathbb{E}|X_i|^{1+\varepsilon} < \infty$ for any $\varepsilon > 0$.
For AI: Uniform integrability is implicitly required for most SGD convergence proofs that need $\mathbb{E}[\lim_n \theta_n] = \lim_n \mathbb{E}[\theta_n]$. When gradient clipping is used, it ensures the gradient process is UI, validating the convergence proofs.
Appendix C: Gaussian Process Details
C.1 Kernel Properties
A function $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a valid kernel (positive semidefinite / Mercer kernel) iff for all finite sets $\{x_1, \dots, x_n\}$ and all $c \in \mathbb{R}^n$: $\sum_{i,j} c_i c_j\, k(x_i, x_j) \geq 0$.
Operations preserving PSD kernels:
- Sum: $k_1 + k_2$ (sum of two valid kernels is valid)
- Product: $k_1 k_2$ (product of two valid kernels is valid)
- Scaling: $\alpha k$ for $\alpha \geq 0$
- Composition: $k(\phi(x), \phi(x'))$ for any feature map $\phi$
Bochner's theorem (characterisation of stationary kernels): A continuous function $C(h)$ is the covariance of a stationary process iff:
$$C(h) = \int e^{i\omega h}\,dF(\omega)$$
for some non-decreasing bounded measure $F$ (the spectral measure). The PSD is its density $S(\omega) = dF/d\omega$, when it exists.
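Bochner's theorem is also the basis of random Fourier features (Rahimi & Recht): sampling frequencies from the kernel's spectral measure gives a finite feature map whose inner product approximates the kernel. A minimal sketch for the 1-D RBF kernel, whose spectral measure is Gaussian (the feature count is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(x, y, ell=1.0):
    return np.exp(-(x - y)**2 / (2 * ell**2))

# Spectral measure of the RBF kernel with length-scale ell is N(0, 1/ell^2)
D, ell = 2000, 1.0
omega = rng.normal(0.0, 1.0 / ell, D)      # frequencies drawn from the spectral measure
b = rng.uniform(0, 2 * np.pi, D)           # random phases

def z(x):
    # Random Fourier feature map: k(x, y) ≈ z(x) @ z(y)
    return np.sqrt(2.0 / D) * np.cos(omega * x + b)

x, y = 0.3, 1.1
print(rbf(x, y), z(x) @ z(y))              # the two values should be close
```

The approximation error decays as $O(1/\sqrt{D})$, which is what makes kernel methods scalable to large datasets.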
C.2 Sparse GP Approximations
Exact GP inference costs $O(n^3)$ for $n$ training points. For large $n$ (e.g., $n > 10^4$), this is intractable. Sparse GP approximations use $m \ll n$ inducing points that summarise the training set.
The most common is the SVGP (Sparse Variational GP), which optimises a variational lower bound (ELBO), reducing the cost to $O(nm^2)$.
For AI: Sparse GPs enable scalable Bayesian optimisation for LLM hyperparameter tuning across many evaluations. The inducing points act as a compressed representation of the observations, analogous to the key-value cache in attention.
C.3 Deep Kernel Learning
Neural-network kernels: Replace the raw input $x$ with a learned neural network representation $g_\theta(x)$ and use:
$$k_{\text{deep}}(x, x') = k_{\text{base}}\big(g_\theta(x), g_\theta(x')\big).$$
This is deep kernel learning (DKL): train $\theta$ by maximising the GP marginal likelihood. DKL combines the flexibility of deep representations with the uncertainty quantification of GPs, and is used in scalable Bayesian optimisation for AutoML.
Appendix D: Extended ML Applications
D.1 Diffusion Models: Mathematical Details
The probability flow ODE for the VP-SDE is:
$$\frac{dx_t}{dt} = -\frac{1}{2}\beta(t)\left[x_t + \nabla_x \log p_t(x_t)\right].$$
This ODE has the same marginal distributions as the reverse SDE but is deterministic. It enables:
- DDIM sampling: Solve the ODE with large step sizes (fewer function evaluations)
- Latent space interpolation: Deterministic ODE gives unique encoding of each data point
- Consistency models: Learn direct mapping that is consistent along ODE trajectories
The score function $s_\theta(x, t) \approx \nabla_x \log p_t(x)$ is the neural network that drives both the SDE and the ODE. At the optimum of the denoising score-matching objective:
$$s_{\theta^*}(x_t, t) = \mathbb{E}\big[\nabla_{x_t} \log p(x_t \mid x_0)\ \big|\ x_t\big].$$
This is a conditional expectation - directly connecting score learning to the Doob martingale construction.
D.2 Online Learning and Martingale Regret Bounds
In online learning, at each round $t$ the learner predicts $\hat{y}_t$, suffers loss $\ell(\hat{y}_t, y_t)$, and observes the true outcome $y_t$. The regret against the best fixed action is:
$$R_T = \sum_{t=1}^T \ell(\hat{y}_t, y_t) - \min_a \sum_{t=1}^T \ell(a, y_t).$$
Martingale structure: In Follow-the-Regularised-Leader (FTRL) algorithms, the deviations of the losses from their conditional means form a martingale difference sequence. For losses bounded by $C$, Azuma's inequality gives:
$$\mathbb{P}\big(R_T \geq \mathbb{E}[R_T] + t\big) \leq \exp\!\left(-\frac{t^2}{2TC^2}\right).$$
This gives high-probability regret bounds - the stochastic process framework converts expected regret into individual-run guarantees.
Bandit algorithms (Thompson Sampling): In multi-armed bandits with $K$ arms, at each round the agent samples parameters from the posterior and plays the arm with the highest sampled expected reward. The number of plays of suboptimal arms is bounded using stopping time arguments on the likelihood ratio martingale.
D.3 RLHF and Reward Models as Stochastic Processes
RLHF (Reinforcement Learning from Human Feedback) trains a language model $\pi_\theta$ to maximise a learned reward model $r_\phi$ subject to a KL penalty:
$$\max_\theta\ \mathbb{E}_{x \sim \pi_\theta}\big[r_\phi(x)\big] - \beta\,\mathrm{KL}\big(\pi_\theta\,\|\,\pi_{\text{ref}}\big).$$
This is a stochastic control problem: the policy is a controller, the generation trajectory is the controlled stochastic process, and the reward is a terminal cost.
The RLHF policy update (PPO) as a martingale: The policy gradient estimator in PPO is:
$$\hat{g} = \mathbb{E}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\hat{A}_t\big],$$
where $\hat{A}_t$ is the advantage estimate. Under the true advantage function, $\mathbb{E}[\hat{A}_t \mid s_t] = 0$ - the summands form a martingale difference sequence. PPO clips the probability ratio to prevent large updates, maintaining stability of the martingale approximation.
D.4 Stochastic Processes in LLM Interpretability
Mechanistic interpretability (Elhage et al., Nanda et al.) studies circuits in transformer weights. The hidden state flowing through a transformer's layers can be modelled as a stochastic process evolving through residual stream updates: $h^{(\ell+1)} = h^{(\ell)} + F_\ell(h^{(\ell)})$.
Each residual update is a perturbation to the current state - the trajectory through layers is a discrete-time process indexed by layer depth $\ell$, not token position. Understanding this process (what information is stored/retrieved at each layer) is the central question of interpretability.
Induction heads are attention heads that implement the operation "if A appeared before B at some position, predict B will appear after the current A." This is a pattern-matching stopping time: the head activates when it detects that the current context matches a past pattern. Formalising this as a stopping time over the layer-depth stochastic process is an active research direction.
Appendix E: The Functional CLT (Donsker's Theorem)
E.1 Weak Convergence on Function Spaces
Donsker's theorem states convergence in distribution on the function space $C[0, 1]$ - the space of continuous functions on $[0, 1]$ equipped with the supremum norm $\|f\|_\infty = \sup_t |f(t)|$.
Theorem (Donsker, 1951). Let $X_1, X_2, \dots$ be iid with $\mathbb{E}[X_i] = 0$, $\operatorname{Var}(X_i) = \sigma^2 < \infty$. Define the rescaled partial-sum process (linearly interpolated between grid points):
$$W_n(t) = \frac{1}{\sigma\sqrt{n}}\,S_{\lfloor nt \rfloor}, \qquad t \in [0, 1].$$
Then $W_n \Rightarrow W$ weakly in $C[0, 1]$, where $W$ is standard Brownian motion.
Continuous mapping theorem: If $W_n \Rightarrow W$ and $g$ is continuous, then $g(W_n) \Rightarrow g(W)$. This gives:
- $\frac{1}{\sigma\sqrt{n}}\max_{k \leq n} S_k \Rightarrow \max_{t \leq 1} W_t$, whose law $\mathbb{P}(\max_{t \leq 1} W_t \geq a) = 2\,\mathbb{P}(W_1 \geq a)$ is the reflection principle for random walks
- the fraction of time the walk spends positive converges to the arc-sine law - a statement about Brownian occupation/local time
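The reflection-principle consequence can be checked directly by rescaling a simple random walk (path and step counts are illustrative; the discrete walk carries a small $O(1/\sqrt{n})$ bias):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, n_paths, a = 1000, 10000, 1.0
steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
S = steps.cumsum(axis=1) / np.sqrt(n)     # Donsker scaling (sigma = 1)

# P(max_{t<=1} W_t >= a) should be close to 2 P(W_1 >= a) (reflection principle)
emp = (S.max(axis=1) >= a).mean()
theory = 2 * norm.sf(a)
print(emp, theory)
```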
E.2 Applications to SGD Analysis
Donsker's theorem justifies the SDE approximation of SGD: as the step size $\eta \to 0$, the piecewise-constant interpolation of the SGD iterates converges (in distribution, as a path) to the SDE $d\theta_t = -\nabla f(\theta_t)\,dt + \sqrt{\eta}\,\Sigma^{1/2}(\theta_t)\,dW_t$. This means:
- Convergence results for the SDE (e.g., mixing time to the stationary distribution) translate to discrete-time SGD results
- Near a minimum, the stationary distribution of SGD is approximately Gaussian with covariance proportional to $\eta$
- The escape rate from a local minimum is exponentially small in the barrier height divided by the noise temperature set by $\eta$ - the Kramers escape rate formula
This is why practitioners observe that increasing the batch size by a factor $k$ requires proportionally increasing the learning rate by $k$ to maintain the same noise temperature and thus the same optimisation dynamics.
Appendix F: Notation Summary
| Symbol | Meaning |
|---|---|
| $(\Omega, \mathcal{F}, \mathbb{P})$ | Probability space |
| $\{\mathcal{F}_t\}$ | Filtration (increasing family of $\sigma$-algebras) |
| $\{X_t\}_{t \in T}$ | Stochastic process indexed by $T$ |
| $(M_n)$, $(M_t)$ | Martingale |
| $\tau$ | Stopping time |
| $\mathcal{F}_\tau$ | $\sigma$-algebra at stopping time $\tau$ |
| $[X]_t$ | Quadratic variation of process $X$ up to time $t$ |
| $W_t$ | Standard Brownian motion (Wiener process) |
| $N(t)$ | Poisson process with rate $\lambda$ |
| $k(x, x')$ | Covariance kernel (for GPs) |
| $\mathcal{GP}(m, k)$ | Gaussian process with mean $m$ and kernel $k$ |
| $dW_t$ | Wiener increment (stochastic differential) |
| $\int_0^t H_s\,dW_s$ | Ito integral |
| $[W]_t = t$ | Quadratic variation of BM |
| $\bar\alpha_t$ | DDPM cumulative noise schedule |
| $s_\theta(x, t)$ | Score function approximation |
| VP-SDE | Variance-Preserving SDE (DDPM continuum) |
| VE-SDE | Variance-Exploding SDE |
| OST | Optional Stopping Theorem (Doob) |
| UI | Uniformly Integrable |
| WSS | Wide-Sense Stationary |
| ACVF | Autocovariance function |
| PSD | Power Spectral Density |
| NTK | Neural Tangent Kernel |
| OU | Ornstein-Uhlenbeck process |
| GBM | Geometric Brownian Motion |
| SRW | Simple Random Walk |
Appendix G: Worked Examples
G.1 Symmetric Random Walk: Hitting Probabilities
Problem. A particle starts at position 3 on the integer line. At each step it moves $+1$ or $-1$ with equal probability. Find: (a) the probability it hits 0 before hitting 10; (b) the expected number of steps to hit $\{0, 10\}$; (c) the probability it ever returns to 3, starting from 3.
Solution.
(a) By the gambler's ruin result, $\mathbb{P}(\text{hit } 0 \text{ before } 10) = 1 - 3/10 = 0.7$.
(b) $\mathbb{E}[\tau] = 3 \times (10 - 3) = 21$ steps.
(c) For a symmetric walk on $\mathbb{Z}$, the probability of ever returning to the starting point is 1 (by Polya's theorem, the 1-D walk is recurrent). However, the expected return time is infinite (the walk is null-recurrent).
Numerical verification:
import numpy as np
np.random.seed(42)
N_trials = 100_000

# (a) Gambler's ruin simulation
hits_zero = 0
for _ in range(N_trials):
    x = 3
    while 0 < x < 10:
        x += np.random.choice([-1, 1])
    hits_zero += (x == 0)
print(f"P(hit 0) ~= {hits_zero/N_trials:.4f}, theory: 0.7000")

# (b) Expected steps
total_steps = 0
for _ in range(N_trials):
    x, steps = 3, 0
    while 0 < x < 10:
        x += np.random.choice([-1, 1])
        steps += 1
    total_steps += steps
print(f"E[tau] ~= {total_steps/N_trials:.2f}, theory: 21.00")
G.2 Poisson Process: Conditional Uniformity
Problem. Events arrive as a Poisson process with rate $\lambda$. Given that exactly 5 events occurred in $[0, 1]$, what is the distribution of the first event time $T_1$?
Solution. By the conditional uniformity property, given $N(1) = 5$, the 5 event times are distributed as the order statistics of 5 iid Uniform$(0, 1)$ random variables. Therefore $T_1$ is the minimum of 5 uniforms:
$$\mathbb{P}(T_1 > t \mid N(1) = 5) = (1 - t)^5, \qquad t \in [0, 1],$$
i.e., $T_1 \sim \mathrm{Beta}(1, 5)$ with $\mathbb{E}[T_1] = 1/6$. Note the answer does not depend on $\lambda$.
Verification: Simulate 5 Uniform random variables, take the minimum. Compare to conditional Poisson process simulation.
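This verification can be sketched as follows, assuming an illustrative rate $\lambda = 5$: simulate arrivals via exponential gaps, keep runs with $N(1) = 5$, and compare the first arrival to the minimum of 5 uniforms:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, n_trials = 5.0, 50000                # illustrative rate and trial count

cond_first = []
for _ in range(n_trials):
    # Simulate Poisson(lam) arrivals on [0, 1] via exponential inter-arrival gaps
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1 / lam)
        if t > 1.0:
            break
        arrivals.append(t)
    if len(arrivals) == 5:                # condition on N(1) = 5
        cond_first.append(arrivals[0])

cond_first = np.array(cond_first)
unif_min = rng.uniform(0, 1, (len(cond_first), 5)).min(axis=1)

# Both samples should follow P(T1 > t) = (1 - t)^5, so E[T1] = 1/6
print(cond_first.mean(), unif_min.mean())
```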
G.3 Brownian Motion: The Arc-Sine Distribution
Problem. Let $L = \sup\{t \leq 1 : W_t = 0\}$ be the last zero of Brownian motion before time 1. Find $\mathbb{P}(L \leq t)$.
Solution. By Levy's arc-sine law:
$$\mathbb{P}(L \leq t) = \frac{2}{\pi}\arcsin\sqrt{t}, \qquad t \in [0, 1].$$
This is the CDF of the arc-sine distribution on $[0, 1]$. The density concentrates near 0 and 1 - the last zero before time 1 is most likely to be near 0 or near 1, and least likely to be near $1/2$. This violates the naive intuition that "the last zero should be somewhere in the middle."
Implication for AI: The arc-sine law suggests that in a long random walk (or SRW approximation of SGD noise), the trajectory spends most of its time on one side of zero - the noise process can have long "runs" of the same sign. This persistent structure in gradient noise is why momentum-based optimisers (Adam, SGD with momentum) perform better than pure SGD for certain problem structures.
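The arc-sine law is easy to check by simulation: locate the last sign change of a discretised Brownian path (a proxy for the last zero) and compare the empirical CDF to $\frac{2}{\pi}\arcsin\sqrt{t}$. Grid sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n_steps, n_paths = 2000, 5000
dt = 1.0 / n_steps
incr = rng.normal(0, np.sqrt(dt), (n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), incr.cumsum(axis=1)], axis=1)

# Last grid interval on which the path changes sign: a proxy for the last zero
sign_change = W[:, :-1] * W[:, 1:] <= 0
last_idx = n_steps - 1 - np.argmax(sign_change[:, ::-1], axis=1)
L = last_idx * dt

for t in [0.1, 0.5, 0.9]:
    emp = (L <= t).mean()
    theory = 2 / np.pi * np.arcsin(np.sqrt(t))
    print(f"t={t}: empirical {emp:.3f}, arc-sine law {theory:.3f}")
```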
G.4 OU Process: Analytical vs Numerical
Problem. Solve $dX_t = -\theta(X_t - \mu)\,dt + \sigma\,dW_t$, $X_0 = x_0$.
Solution. This is OU with mean-reversion rate $\theta$, long-run mean $\mu$, and noise level $\sigma$.
- Solution: $X_t = \mu + (x_0 - \mu)e^{-\theta t} + \sigma\int_0^t e^{-\theta(t-s)}\,dW_s$
- Mean: $\mathbb{E}[X_t] = \mu + (x_0 - \mu)e^{-\theta t}$
- Variance: $\operatorname{Var}(X_t) = \frac{\sigma^2}{2\theta}\big(1 - e^{-2\theta t}\big)$
- Stationary distribution: $\mathcal{N}\big(\mu, \sigma^2/(2\theta)\big)$
The process decays exponentially from $x_0$ toward the mean with decay rate $\theta$, while accumulating noise whose variance saturates at $\sigma^2/(2\theta)$.
For diffusion models: This corresponds to the (time-discretised) DDPM forward process, an OU-type SDE with $\mu = 0$ and time-dependent rate $\beta(t)/2$; the signal-to-noise ratio approaches 0 for large $t$ - the "pure noise" terminal condition.
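An Euler-Maruyama check of the stationary distribution, with illustrative parameters $\theta = 2$, $\mu = 1$, $\sigma = 0.5$ (so the stationary law is $\mathcal{N}(1, 0.0625)$):

```python
import numpy as np

rng = np.random.default_rng(6)
theta, mu, sigma = 2.0, 1.0, 0.5           # illustrative parameters
dt, n_steps, n_paths = 0.002, 10000, 2000  # horizon T = 20 >> 1/theta

X = np.full(n_paths, 3.0)                  # start far from the mean
for _ in range(n_steps):
    # Euler-Maruyama step of dX = theta (mu - X) dt + sigma dW
    X = X + theta * (mu - X) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

# Stationary law: N(mu, sigma^2 / (2 theta)) = N(1, 0.0625)
print(X.mean(), X.var())
```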
G.5 Gaussian Process: Prior vs Posterior
Problem. Fit a GP to three noisy observations $(x_i, y_i)$, $i = 1, 2, 3$, with an RBF kernel and observation noise variance $\sigma_n^2$.
Solution. Build the $3 \times 3$ kernel matrix $K$ with entries $K_{ij} = k(x_i, x_j) + \sigma_n^2\,\delta_{ij}$.
Posterior at a test point $x_*$, writing $(k_*)_i = k(x_i, x_*)$:
- Mean $\mu_* = k_*^\top K^{-1} y$; variance $\sigma_*^2 = k(x_*, x_*) - k_*^\top K^{-1} k_*$
The posterior mean interpolates between the observations, and the posterior variance is small near observations (where the kernel provides strong correlation) and large away from them.
Appendix H: Self-Assessment Checklist
After completing this section, verify you can:
- State the definition of a stochastic process and give 3 examples of each combination of (discrete/continuous) x (time/state space)
- Define a filtration and explain what "adapted" means in ML context
- Verify the martingale property for a new process by computing $\mathbb{E}[M_{n+1} \mid \mathcal{F}_n]$ and checking it equals $M_n$
- State all three conditions of Doob's OST and identify which applies to a given problem
- Apply OST to compute expected stopping times and boundary hitting probabilities
- Construct the Doob martingale for a given function and derive McDiarmid from Azuma
- State the three defining properties of a Poisson process and derive the superposition/thinning results
- State Wiener's four axioms for Brownian motion and compute $\operatorname{Cov}(W_s, W_t) = \min(s, t)$
- Compute the quadratic variation and state Ito's lemma
- Solve the OU SDE and state its stationary distribution
- Connect the DDPM forward process to the VP-SDE and OU process
- Define WSS and ergodicity, and state the Wiener-Khinchin theorem
- Specify a GP by its mean function and kernel, and compute the posterior given observations
- Explain why RoPE makes attention a stationary kernel in position
- Model SGD as a near-martingale and state the diffusion SDE approximation
Appendix I: Connections to Other Areas
I.1 Information Theory (Section09)
- The log-likelihood ratio process $L_n = \sum_{i=1}^n \log\frac{p_1(X_i)}{p_0(X_i)}$ is a supermartingale under $H_0$ and a submartingale under $H_1$
- The sequential probability ratio test (SPRT) is an optimal stopping time for hypothesis testing, derived using the likelihood ratio martingale
- Renyi entropy rates and the entropy rate of a stationary process are central concepts at the intersection of stochastic processes and information theory
I.2 Functional Analysis (Section12)
- The space $C[0, 1]$ of continuous functions is a Banach space; weak convergence on this space is the setting of Donsker's theorem
- Gaussian processes are infinite-dimensional Gaussian distributions on function spaces (Hilbert spaces)
- Reproducing Kernel Hilbert Spaces (RKHS): For each valid kernel $k$, there is a unique RKHS $\mathcal{H}_k$ with the reproducing property $f(x) = \langle f, k(\cdot, x)\rangle_{\mathcal{H}_k}$. GP regression is equivalent to minimising a regularised empirical risk in the RKHS
I.3 Optimisation (Section08)
- Stochastic approximation (Robbins-Monro, 1951): The original framework for SGD convergence uses martingale theory. Under step sizes satisfying $\sum_n \alpha_n = \infty$, $\sum_n \alpha_n^2 < \infty$, and bounded noise variance, the iterates converge a.s.
- Lyapunov methods: Proving convergence of SGD amounts to constructing a Lyapunov function that is a (noisy) supermartingale, then applying martingale convergence
- Langevin MCMC: The Langevin SDE is used for posterior sampling in Bayesian DL, connecting optimisation and stochastic process theory
I.4 Statistics (Section07-Statistics)
- Time series analysis (ARIMA, state-space models) is the statistical estimation side of stochastic processes: given observed data $x_1, \dots, x_T$, estimate the parameters of an assumed model process
- Spectral analysis uses the Wiener-Khinchin theorem to estimate the PSD from data via the periodogram
- Hypothesis testing for stationarity (Augmented Dickey-Fuller test, KPSS test) checks whether an observed time series is consistent with a stationary process
Appendix J: Advanced Topics
J.1 Levy Processes
A Levy process generalises both Brownian motion (continuous paths) and Poisson process (jumps) to arbitrary combinations of drift, diffusion, and jumps.
Levy-Khinchin representation: Every Levy process $X_t$ has characteristic function:
$$\mathbb{E}\big[e^{iuX_t}\big] = \exp\!\left(t\left[ibu - \frac{1}{2}\sigma^2 u^2 + \int_{\mathbb{R}\setminus\{0\}}\big(e^{iux} - 1 - iux\,\mathbf{1}_{|x| < 1}\big)\,\nu(dx)\right]\right),$$
where $b$ is the drift, $\sigma^2$ the diffusion coefficient, and $\nu$ is the Levy measure (jump intensity).
For AI: Heavy-tailed gradient distributions in neural network training are better modelled by Levy processes (with large jumps) than by Brownian motion (Gaussian, no jumps). The Levy stable distribution family includes the Gaussian (index $\alpha = 2$) and Cauchy ($\alpha = 1$) as special cases. Empirical studies of gradient statistics in transformers show heavy tails (Student-$t$ with low degrees of freedom), suggesting Levy process models may be more appropriate than the Gaussian SDE approximation.
J.2 Malliavin Calculus
Malliavin calculus is a variational calculus on Wiener space - it provides derivatives of random variables with respect to the underlying Brownian motion.
The Malliavin derivative $D_t F$ measures the sensitivity of a random variable $F$ to the Brownian increment at time $t$. For cylindrical functionals it satisfies:
$$D_t\, f(W_{t_1}, \dots, W_{t_n}) = \sum_{i=1}^n \partial_i f(W_{t_1}, \dots, W_{t_n})\,\mathbf{1}\{t \leq t_i\}.$$
Clark-Ocone formula: For Malliavin-differentiable $F \in L^2(\mathcal{F}_T)$:
$$F = \mathbb{E}[F] + \int_0^T \mathbb{E}\big[D_t F \mid \mathcal{F}_t\big]\,dW_t.$$
This represents any square-integrable random variable as a stochastic integral - connecting Malliavin calculus to the martingale representation theorem.
For AI: Malliavin calculus provides pathwise gradients for diffusion models, enabling gradient computation through the stochastic sampling process. This is used in score-based generative model training with more stable gradient estimates than the standard score-matching loss.
J.3 Stochastic Control and HJB Equation
A stochastic control problem seeks a control process $u_t$ (adapted to $\mathcal{F}_t$) minimising:
$$J(u) = \mathbb{E}\left[\int_0^T c(X_t, u_t)\,dt + g(X_T)\right]$$
subject to $dX_t = \mu(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t$.
The Hamilton-Jacobi-Bellman (HJB) equation gives the optimal value function $V(x, t)$:
$$\partial_t V + \min_u\left[c(x, u) + \mu(x, u)\,\partial_x V + \tfrac{1}{2}\sigma^2(x, u)\,\partial_x^2 V\right] = 0, \qquad V(x, T) = g(x).$$
For AI: RLHF can be formulated as a stochastic control problem where the LLM generates tokens sequentially (the controlled process), the reward is the RLHF reward model (the terminal cost), and the KL penalty is a regularisation term. The HJB equation characterises the optimal RLHF policy, and PPO approximately solves it using neural network approximations of the value function.
Appendix K: Python Recipes for Stochastic Processes
K.1 Simulating Stochastic Processes
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
# Simple Random Walk
# Simple Random Walk
def simulate_srw(n_steps, n_paths=5):
    steps = np.random.choice([-1, 1], size=(n_paths, n_steps))
    return np.hstack([np.zeros((n_paths, 1)), steps.cumsum(axis=1)])

# Brownian Motion (independent Gaussian increments)
def simulate_bm(T=1.0, n_steps=1000, n_paths=5):
    dt = T / n_steps
    increments = np.random.normal(0, np.sqrt(dt), (n_paths, n_steps))
    return np.hstack([np.zeros((n_paths, 1)), increments.cumsum(axis=1)])

# Ornstein-Uhlenbeck (Euler-Maruyama)
def simulate_ou(theta=1.0, mu=0.0, sigma=1.0, T=10.0, n_steps=1000, n_paths=5):
    dt = T / n_steps
    X = np.zeros((n_paths, n_steps + 1))
    X[:, 0] = 2.0  # initial condition
    for t in range(n_steps):
        dB = np.random.normal(0, np.sqrt(dt), n_paths)
        X[:, t+1] = X[:, t] + theta * (mu - X[:, t]) * dt + sigma * dB
    return X

# Poisson Process: arrival times in [0, T] via exponential inter-arrival gaps
def simulate_poisson(lam=1.0, T=10.0, n_paths=5):
    paths = []
    for _ in range(n_paths):
        t, events = 0.0, []
        while True:
            t += np.random.exponential(1 / lam)
            if t > T:
                break          # discard the overshoot: it is not an arrival
            events.append(t)
        paths.append(events)
    return paths

# Geometric Brownian Motion (exact update in log-space)
def simulate_gbm(mu=0.1, sigma=0.3, S0=100.0, T=1.0, n_steps=252, n_paths=5):
    dt = T / n_steps
    log_S = np.zeros((n_paths, n_steps + 1))
    log_S[:, 0] = np.log(S0)
    for t in range(n_steps):
        log_S[:, t+1] = (log_S[:, t] + (mu - sigma**2 / 2) * dt
                         + sigma * np.random.normal(0, np.sqrt(dt), n_paths))
    return np.exp(log_S)
K.2 Gaussian Process Posterior
import numpy as np
from scipy.linalg import cho_factor, cho_solve
def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel; accepts (n,) or (n, d) inputs."""
    X1 = np.asarray(X1, dtype=float).reshape(len(X1), -1)
    X2 = np.asarray(X2, dtype=float).reshape(len(X2), -1)
    dists = ((X1[:, None, :] - X2[None, :, :])**2).sum(-1)
    return variance * np.exp(-dists / (2 * length_scale**2))

def gp_posterior(X_train, y_train, X_test, kernel_fn, noise_var=1e-3):
    """Compute GP posterior mean and pointwise standard deviation."""
    K_tt = kernel_fn(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_ts = kernel_fn(X_train, X_test)
    K_ss = kernel_fn(X_test, X_test)
    # Solve K_tt @ alpha = y_train using a Cholesky factorisation
    cho = cho_factor(K_tt)
    alpha = cho_solve(cho, y_train)
    v = cho_solve(cho, K_ts)
    mu_post = K_ts.T @ alpha
    var_post = np.diag(K_ss) - (K_ts * v).sum(0)
    return mu_post, np.sqrt(np.maximum(var_post, 0))
# Usage:
# X_train = np.array([0., 1., 2.])
# y_train = np.array([1., -0.5, 0.8])
# X_test = np.linspace(-0.5, 2.5, 100)
# mu, std = gp_posterior(X_train, y_train, X_test, rbf_kernel)
K.3 Martingale Verification
import numpy as np
def verify_martingale(process_fn, n_steps=100, n_paths=10000, tol=0.5):
    """
    Necessary-condition check for the martingale property: a martingale has
    constant mean, E[M_n] = E[M_0] for all n. (The full conditional property
    E[M_{n+1} | F_n] = M_n cannot be checked from marginal means alone.)
    process_fn: callable that generates an (n_paths, n_steps+1) array.
    """
    paths = process_fn(n_steps, n_paths)
    means = paths.mean(axis=0)
    # tol should be a few standard errors: sd of the mean at step n ~ sqrt(n/n_paths)
    assert np.allclose(means, means[0], atol=tol), f"Mean not constant: {means[:5]}"
    print(f"Mean at all steps: {means[0]:.4f} +/- {means.std():.4f} (should be ~0)")
    print(f"PASS: martingale mean constant within tolerance {tol}")

def simulate_srw_paths(n_steps, n_paths):
    steps = np.random.choice([-1., 1.], size=(n_paths, n_steps))
    return np.hstack([np.zeros((n_paths, 1)), steps.cumsum(axis=1)])

np.random.seed(42)
verify_martingale(simulate_srw_paths)
K.4 Diffusion Model Forward Process
import numpy as np
def ddpm_forward_process(x0, T=1000, beta_start=1e-4, beta_end=0.02):
    """
    DDPM forward process: q(x_t | x_0) = N(sqrt(alpha_bar_t)*x0, (1-alpha_bar_t)*I).
    Returns: ({t: (x_t, alpha_bar_t)} for t = 0, 1, ..., T, and the alpha_bar array).
    """
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1 - betas
    alpha_bars = np.cumprod(alphas)
    x0_arr = np.array(x0, dtype=float)
    trajectory = {0: (x0_arr, 1.0)}
    for t in range(1, T + 1):
        ab = alpha_bars[t-1]
        eps = np.random.randn(*x0_arr.shape)
        x_t = np.sqrt(ab) * x0_arr + np.sqrt(1 - ab) * eps
        trajectory[t] = (x_t, ab)
    return trajectory, alpha_bars

def snr(alpha_bar):
    """Signal-to-noise ratio: alpha_bar / (1 - alpha_bar)."""
    return alpha_bar / (1 - alpha_bar + 1e-9)
# Example:
# x0 = np.array([1.0, 0.0, -1.0])
# traj, alpha_bars = ddpm_forward_process(x0, T=1000)
# For t=500: SNR \\approx 1 (equal signal and noise)
# For t=1000: SNR \\approx 0 (pure noise)
K.5 DDPM Schedule Comparison
import numpy as np
def linear_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    return np.linspace(beta_start, beta_end, T)

def cosine_schedule(T=1000, s=0.008):
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2)**2
    alpha_bars = f / f[0]
    betas = 1 - alpha_bars[1:] / alpha_bars[:-1]
    return np.clip(betas, 0, 0.999)
T = 1000
betas_linear = linear_schedule(T)
betas_cosine = cosine_schedule(T)
ab_linear = np.cumprod(1 - betas_linear)
ab_cosine = np.cumprod(1 - betas_cosine)
print("Signal preserved at t=T/4, T/2, 3T/4, T:")
for name, ab in [("Linear", ab_linear), ("Cosine", ab_cosine)]:
vals = [ab[T//4-1], ab[T//2-1], ab[3*T//4-1], ab[-1]]
print(f" {name}: {[f'{v:.4f}' for v in vals]}")
# Cosine schedule destroys signal more uniformly across timesteps
Appendix L: Theory Problems (Advanced)
L.1 Optional Stopping and Wald's Identity
Problem. Let X_1, X_2, … be iid with E[X_i] = μ and Var(X_i) = σ² < ∞, and set S_n = X_1 + … + X_n. Let T be a stopping time with E[T] < ∞. Prove Wald's Identity: E[S_T] = μ E[T].
Hint: S_n − nμ is a martingale. Apply OST. For the second moment, use the martingale (S_n − nμ)² − nσ².
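A quick numerical check of Wald's identity. This is a sketch with illustrative assumptions: X_i uniform on {1, …, 6} (so μ = 3.5) and T the first time S_n ≥ 30; both choices are mine, not from the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 3.5
n_sims = 20000
ST, T = np.zeros(n_sims), np.zeros(n_sims)
for i in range(n_sims):
    s, n = 0, 0
    while s < 30:               # T = inf{n : S_n >= 30}, a stopping time with E[T] < inf
        s += rng.integers(1, 7)  # X ~ Uniform{1,...,6}
        n += 1
    ST[i], T[i] = s, n

# Wald: E[S_T] = mu * E[T]
print(f"E[S_T]    = {ST.mean():.3f}")
print(f"mu * E[T] = {mu * T.mean():.3f}")
```

The two printed values should agree up to Monte Carlo error.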
L.2 The Optional Stopping Theorem: Necessary Conditions
Problem. Show that T < ∞ a.s. alone does NOT suffice for OST. Construct a martingale M and a stopping time T with P(T < ∞) = 1 but E[M_T] ≠ E[M_0].
Hint: Let M be the SRW and T = inf{n : M_n = 1}. By recurrence T < ∞ a.s., yet E[M_T] = 1 ≠ 0 = E[M_0]; note that E[T] = ∞ here, so the usual sufficient condition fails.
L.3 Levy Characterisation of BM
Problem. Prove that if M is a continuous local martingale with M_0 = 0 and quadratic variation [M]_t = t, then M is a standard Brownian motion.
Hint: Use the exponential process exp(iθM_t + θ²t/2) and show it is a martingale. Then compute the characteristic function of the increments M_t − M_s.
L.4 GP Posterior as Hilbert Space Projection
Problem. Show that the (noiseless) GP posterior mean m(x) = k(x, X) K⁻¹ y is the orthogonal projection of the target function onto span{k(x_i, ·)} in the RKHS H_k.
Hint: Use the reproducing property ⟨f, k(x, ·)⟩ = f(x) to express GP regression as a Hilbert-space least-squares problem.
Appendix M: References and Further Reading
Textbooks
- Karatzas & Shreve, Brownian Motion and Stochastic Calculus (2nd ed., 1991) - The standard reference for rigorous continuous-time stochastic processes, Ito calculus, and SDEs. Graduate-level.
- Durrett, Probability: Theory and Examples (5th ed., 2019) - Comprehensive coverage including martingales, stopping times, ergodic theory, Brownian motion. Excellent exercises.
- Williams, Probability with Martingales (1991) - Elegant, concise introduction to martingale theory. The clearest proof of martingale convergence. Highly recommended.
- Oksendal, Stochastic Differential Equations (6th ed., 2003) - Standard SDEs reference. Accessible treatment of Ito calculus and applications.
- Rasmussen & Williams, Gaussian Processes for Machine Learning (2006) - The canonical GP reference. Available free online.
- Doob, Stochastic Processes (1953) - The original monograph. Still valuable for historical context.
Papers for AI Connections
- Song et al. (2021), Score-Based Generative Modeling through Stochastic Differential Equations - Score SDE framework unifying DDPM, SMLD.
- Mandt, Hoffman & Blei (2017), Stochastic Gradient Descent as Approximate Bayesian Inference - SDE approximation of SGD.
- Su et al. (2022), RoFormer: Enhanced Transformer with Rotary Position Embedding - RoPE as stationary kernel.
- Ho, Jain & Abbeel (2020), Denoising Diffusion Probabilistic Models - DDPM.
- Li et al. (2017), Stochastic Modified Equations and Adaptive Stochastic Gradient Algorithms - SDE theory for SGD.
- Song et al. (2023), Consistency Models - Distillation of diffusion models via probability flow ODE.
- Tropp (2015), An Introduction to Matrix Concentration Inequalities - Matrix Bernstein, extends scalar martingale results.
- Sutton & Barto, Reinforcement Learning: An Introduction (2nd ed., 2018) - RL value functions as martingales; Chapter 6 on TD learning.
Appendix N: Extended Proofs
N.1 Proof of Doob's Maximal Inequality
Theorem. For a non-negative submartingale (M_k) and λ > 0:
P(max_{k ≤ n} M_k ≥ λ) ≤ E[M_n] / λ.
Proof. Let T = inf{k : M_k ≥ λ} (first time the process crosses λ), with T = n if no crossing occurs by time n. Then T is a bounded stopping time (the crossing event {T ≤ k} is F_k-measurable).
Decompose the expectation over A = {max_{k ≤ n} M_k ≥ λ} and its complement:
E[M_T] = E[M_T 1_A] + E[M_T 1_{A^c}].
For the first term: since M is a submartingale and T ≤ n is bounded, optional sampling gives E[M_n] ≥ E[M_T]; and on A we have M_T ≥ λ, so E[M_T 1_A] ≥ λ P(A).
Since M ≥ 0, the second term is non-negative, hence:
E[M_n] ≥ E[M_T] ≥ λ P(A), which is the claim.
Corollary. Applying the inequality to M_k² (a non-negative submartingale when M is a martingale, since x ↦ x² is convex):
P(max_{k ≤ n} |M_k| ≥ λ) ≤ E[M_n²] / λ².
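The inequality is easy to check by simulation. A minimal sketch using |S_n| for the SRW, a non-negative submartingale (the values of n, λ, and the path count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_paths, lam = 200, 50000, 20.0

# |S_n| for the SRW is a non-negative submartingale.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
M = np.abs(steps.cumsum(axis=1))

lhs = (M.max(axis=1) >= lam).mean()   # P(max_k |S_k| >= lam)
rhs = M[:, -1].mean() / lam           # E[|S_n|] / lam
print(f"P(max >= {lam}) = {lhs:.4f} <= bound {rhs:.4f}")
```

The empirical crossing probability should sit below the Doob bound.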
N.2 Proof of Azuma's Inequality
Theorem (Azuma-Hoeffding). Let (M_k) be a martingale with |M_k − M_{k−1}| ≤ c_k a.s. Then:
P(|M_n − M_0| ≥ t) ≤ 2 exp(−t² / (2 Σ_{k=1}^n c_k²)).
Proof. Let D_k = M_k − M_{k−1} be the martingale differences. Since E[D_k | F_{k−1}] = 0 and |D_k| ≤ c_k, by Hoeffding's lemma:
E[e^{λD_k} | F_{k−1}] ≤ e^{λ²c_k²/2}.
By the Markov inequality applied to e^{λ(M_n − M_0)} (for any λ > 0):
P(M_n − M_0 ≥ t) ≤ e^{−λt} E[e^{λ(M_n − M_0)}].
Factor the MGF using the tower property:
E[e^{λ(M_n − M_0)}] = E[ e^{λ(M_{n−1} − M_0)} E[e^{λD_n} | F_{n−1}] ] ≤ e^{λ²c_n²/2} E[e^{λ(M_{n−1} − M_0)}].
Iterating: E[e^{λ(M_n − M_0)}] ≤ exp(λ² Σ_k c_k² / 2).
Combining:
P(M_n − M_0 ≥ t) ≤ exp(−λt + λ² Σ_k c_k² / 2).
Optimise over λ: set λ = t / Σ_k c_k² to get:
P(M_n − M_0 ≥ t) ≤ exp(−t² / (2 Σ_k c_k²)).
The two-sided bound follows by applying the same argument to −M.
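A numerical sanity check of the bound on the SRW, where every c_k = 1 (the values of t and n are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_paths, t = 100, 100000, 25.0

# Martingale: SRW with increments in {-1,+1}, so c_k = 1 for all k.
S = rng.choice([-1.0, 1.0], size=(n_paths, n)).cumsum(axis=1)

empirical = (np.abs(S[:, -1]) >= t).mean()
azuma = 2 * np.exp(-t**2 / (2 * n))
print(f"P(|M_n - M_0| >= {t}) = {empirical:.5f} <= Azuma bound {azuma:.5f}")
```

Azuma is a worst-case bound, so the empirical tail probability is typically well below it.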
N.3 Proof of Polya's Theorem (d = 1: Recurrence)
Theorem. The SRW on Z is recurrent.
Proof. It suffices to show the expected number of returns to 0 is infinite: Σ_{n ≥ 1} P(S_{2n} = 0) = ∞.
We have P(S_{2n} = 0) = C(2n, n) 2^{−2n}. By Stirling's approximation n! ~ √(2πn)(n/e)^n:
P(S_{2n} = 0) ~ 1/√(πn).
Therefore Σ_n P(S_{2n} = 0) = ∞.
If the return probability p = P(walk returns to 0) were less than 1, the total number of returns would be geometric with finite mean p/(1 − p), contradicting the infinite expected number of returns. Hence p = 1 and the walk is recurrent. (Note that the events {S_{2n} = 0} are not independent, so the second Borel-Cantelli lemma does not apply directly; the geometric-number-of-returns argument replaces it.)
Note: In d = 3, the local CLT gives P(S_{2n} = 0) ~ C n^{−3/2}, and Σ_n n^{−3/2} < ∞. By the first Borel-Cantelli lemma, returns occur only finitely often a.s., so the walk is transient.
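The divergence/convergence dichotomy is easy to see numerically. The sketch below uses the exact recursion P(S_{2n} = 0) = P(S_{2(n−1)} = 0) · (2n−1)/(2n) for d = 1 (which avoids overflowing C(2n, n)), and the n^{−3/2} asymptotic as a stand-in for the d = 3 return probabilities:

```python
# d = 1: P(S_{2n} = 0) = C(2n, n) 4^{-n}; the ratio of consecutive terms is (2n-1)/(2n).
N = 2000
p, sum_1d = 1.0, 0.0
for n in range(1, N + 1):
    p *= (2 * n - 1) / (2 * n)
    sum_1d += p

# d = 3: P(S_{2n} = 0) ~ C n^{-3/2} (local CLT); partial sums stay bounded.
sum_3d = sum(n ** -1.5 for n in range(1, N + 1))

print(f"d=1 partial sum up to N={N}: {sum_1d:.2f} (grows like 2*sqrt(N/pi) -> infinity)")
print(f"d=3 partial sum of n^-1.5  : {sum_3d:.2f} (bounded by zeta(3/2) ~ 2.612)")
```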
N.4 Proof that the OU Process Has the Correct Stationary Distribution
Theorem. The stationary distribution of dX_t = −θX_t dt + σ dW_t (θ > 0) is N(0, σ²/(2θ)).
Proof via Fokker-Planck equation. The probability density p(x, t) satisfies:
∂p/∂t = θ ∂/∂x (x p) + (σ²/2) ∂²p/∂x².
At stationarity ∂p/∂t = 0:
θ (x p)′ + (σ²/2) p″ = 0, i.e. θ x p + (σ²/2) p′ = 0.
Try p(x) = C exp(−x²/(2s²)), so that p′ = −(x/s²) p:
Substituting: θ x p − (σ²/(2s²)) x p = 0, which forces s² = σ²/(2θ). [ok]
The normalisation ∫ p(x) dx = 1 gives C = √(θ/(πσ²)).
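The result can be verified with an Euler-Maruyama simulation; θ, σ, and dt below are illustrative, and the small discretisation bias in the variance scales with dt:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, dt, n_steps, n_paths = 1.0, 0.8, 0.01, 2000, 20000

# Euler-Maruyama for dX = -theta X dt + sigma dW, started at 0; by t = 20
# the transient has decayed and X is (approximately) stationary.
X = np.zeros(n_paths)
for _ in range(n_steps):
    X += -theta * X * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print(f"empirical stationary variance: {X.var():.4f}")
print(f"sigma^2 / (2 theta)          : {sigma**2 / (2 * theta):.4f}")
```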
Appendix O: Spectral Theory of Stochastic Processes
O.1 Spectral Representation
Every WSS process with absolutely summable ACVF admits the spectral representation:
X_t = ∫_{−π}^{π} e^{iωt} dZ(ω),
where Z(ω) is an orthogonal increment process (the spectral process) satisfying E[|dZ(ω)|²] = f(ω) dω, with f the power spectral density.
This is the stochastic analogue of the Fourier transform: a stationary process decomposes into uncorrelated frequency components, each contributing variance f(ω) dω in the frequency band [ω, ω + dω].
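A numerical illustration of the decomposition: the averaged periodogram of an AR(1) process (φ = 0.7 is an illustrative choice) recovers its known spectral density f(ω) = 1/(2π |1 − φe^{−iω}|²) for unit innovation variance:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_reps, phi = 1024, 200, 0.7

# Average the periodogram |FFT|^2 / (2 pi n) over independent realisations.
psd = np.zeros(n // 2 + 1)
for _ in range(n_reps):
    eps = rng.standard_normal(n + 200)
    x = np.zeros(n + 200)
    for t in range(1, n + 200):
        x[t] = phi * x[t - 1] + eps[t]   # AR(1): X_t = phi X_{t-1} + eps_t
    x = x[200:]                           # discard burn-in
    psd += np.abs(np.fft.rfft(x))**2 / (2 * np.pi * n)
psd /= n_reps

omega = 2 * np.pi * np.arange(n // 2 + 1) / n
f_theory = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * omega))**2)
err = np.abs(psd[1:-1] - f_theory[1:-1]).mean()
print(f"mean abs error, averaged periodogram vs theory: {err:.4f}")
```

Averaging over realisations is essential: a single periodogram is an unbiased but inconsistent estimator of f(ω).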
For AI: The power spectral density of a token sequence determines how information is distributed across "frequencies" (temporal scales). A language model with strong long-range dependencies (like GPT-4 modelling coherent long documents) will have a PSD with high power at low frequencies. Attention head specialisation can be viewed as learning frequency-selective filters - heads that attend to nearby tokens (high frequency) vs. distant tokens (low frequency).
O.2 Spectral Gap and Mixing
For a stationary Markov chain with transition operator P, the spectral gap γ = 1 − |λ₂| (where λ₂ is the second-largest eigenvalue of P in modulus) determines the mixing time t_mix ≍ 1/γ.
- Large spectral gap -> fast mixing -> samples become independent quickly
- Small spectral gap -> slow mixing -> long-range correlations persist
For transformers: The attention pattern at each layer can be viewed as a row-stochastic matrix A ∈ R^{n×n} (for sequence length n). The spectral gap of A determines how quickly information "mixes" across the sequence. Heads with nearly uniform attention (large spectral gap) aggregate globally; heads with peaked attention (small spectral gap) attend locally.
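A toy illustration of the gap-mixing link with two 3-state chains (the matrices are illustrative, not actual attention patterns):

```python
import numpy as np

def tv_to_stationary(P, t, pi):
    """Worst-case total-variation distance to pi after t steps."""
    Pt = np.linalg.matrix_power(P, t)
    return 0.5 * np.abs(Pt - pi).sum(axis=1).max()

pi = np.full(3, 1 / 3)                                   # uniform stationary dist for both
P_fast = np.full((3, 3), 1 / 3)                          # |lambda_2| = 0: mixes in one step
P_slow = 0.9 * np.eye(3) + 0.1 * np.full((3, 3), 1 / 3)  # lazy chain: |lambda_2| = 0.9

for P, name in [(P_fast, "fast"), (P_slow, "slow")]:
    lam2 = sorted(np.abs(np.linalg.eigvals(P)))[-2]
    print(f"{name}: |lambda_2| = {lam2:.3f}, TV to pi after 10 steps = "
          f"{tv_to_stationary(P, 10, pi):.2e}")
```

The distance to stationarity decays like |λ₂|^t, so the lazy chain is still far from mixed after 10 steps while the uniform chain mixes immediately.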
Appendix P: Decision Framework for Choosing Process Models
CHOOSE THE RIGHT STOCHASTIC PROCESS MODEL
========================================================================
Is the system evolving over time?
+-- No -> Use static probability (Sections 1-5)
+-- Yes ->
Does the state space have a natural structure?
+-- Countable (e.g., integer counts, categories)
| +-- Discrete time -> Markov chain (Section 7)
| +-- Continuous time -> Poisson or birth-death process
+-- Continuous (e.g., parameter vector, embedding)
+-- Does it have independent increments?
| +-- Yes, with Gaussian increments -> Brownian motion
| +-- Yes, with jump structure -> Poisson / Levy
| +-- No, but stationary -> ARMA or OU process
+-- Does it revert toward an attractor (mean reversion)?
| +-- Yes -> OU process or SDE with drift
+-- Does it only depend on the current state?
+-- Yes -> Markov process (continuous time)
+-- No, depends on full history -> GP or ARIMA
========================================================================
AI Application -> Recommended Model
---------------------------------------------------------------------
SGD parameter trajectory -> OU SDE (diffusion approx)
Mini-batch gradient noise -> Martingale difference sequence
Diffusion model forward -> VP-SDE (discrete OU)
LLM token generation -> Markov chain (Section 7)
Request arrivals -> Poisson process
Hyperparameter optimisation -> Gaussian process (Bayesian opt)
RL value function -> Martingale transform
Training loss trajectory -> Supermartingale
NTK / infinite-width NN -> Gaussian process
Residual stream updates -> Additive noise SDE
---------------------------------------------------------------------
Appendix Q: Connections to Measure Theory
Q.1 Stochastic Processes as Measurable Functions
A stochastic process (X_t)_{t∈T} on (Ω, F, P) can be viewed as a measurable function from Ω to the path space R^T. The probabilistic properties of the process are encoded in the pushforward measure (the law of the process) on the path space.
For Brownian motion: The path space is C([0, ∞), R) (continuous functions). The Wiener measure μ_W is the unique probability measure on this space such that the coordinate process W_t(ω) = ω(t) satisfies the four BM axioms. Constructing μ_W rigorously was Wiener's 1923 achievement.
Q.2 The Radon-Nikodym Theorem and Change of Measure
If P and Q are probability measures on (Ω, F) with Q ≪ P (Q is absolutely continuous wrt P), then there exists the Radon-Nikodym derivative dQ/dP such that Q(A) = ∫_A (dQ/dP) dP for all A ∈ F.
Girsanov's theorem: Under mild conditions (e.g. Novikov's condition), if W is a BM under P and (θ_t) is adapted, then:
W̃_t = W_t − ∫₀ᵗ θ_s ds
is a BM under Q, where dQ/dP = exp(∫₀ᵀ θ_s dW_s − ½ ∫₀ᵀ θ_s² ds) (with θ the Girsanov kernel).
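A minimal sanity check of the change of measure with a constant kernel θ (the values are illustrative): weighting driftless paths by dQ/dP should make W_T look like a BM with drift θ under Q.

```python
import numpy as np

rng = np.random.default_rng(5)
theta, T, n_paths = 0.5, 1.0, 200000

# Constant Girsanov kernel theta: dQ/dP = exp(theta W_T - theta^2 T / 2).
W_T = np.sqrt(T) * rng.standard_normal(n_paths)   # W_T under P: N(0, T)
Z = np.exp(theta * W_T - 0.5 * theta**2 * T)      # Radon-Nikodym weights

print(f"E_P[Z]       = {Z.mean():.4f}  (theory: 1, since Z is a P-martingale)")
print(f"E_P[Z * W_T] = {(Z * W_T).mean():.4f}  (theory: E_Q[W_T] = theta*T = {theta * T})")
```

Reweighting by Z shifts the mean of W_T from 0 (under P) to θT (under Q), exactly the drift change Girsanov describes.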
For diffusion models: Girsanov's theorem is the mathematical basis for the reverse diffusion process. The reverse SDE under the data distribution corresponds to a change of measure from the noise distribution (standard BM), with the Radon-Nikodym derivative involving the score function ∇_x log p_t(x). This makes the score-matching objective equivalent to likelihood maximisation under the reverse measure.
Appendix R: Supplementary: Concentration and Martingales Revisited
R.1 The Azuma-McDiarmid Connection - Full Detail
In Section 3.5 we introduced the Doob martingale construction as the route to McDiarmid's inequality. Here we make the connection fully explicit.
Setup. Let X_1, …, X_n be independent random variables and let f satisfy the bounded differences condition: for each i,
|f(x_1, …, x_i, …, x_n) − f(x_1, …, x_i′, …, x_n)| ≤ c_i.
Step 1: Doob martingale. Define M_i = E[f(X_1, …, X_n) | X_1, …, X_i] for i = 0, …, n. Then M_0 = E[f], M_n = f(X_1, …, X_n), and (M_i) is a martingale (tower property).
Step 2: Bounded differences. The martingale difference satisfies |M_i − M_{i−1}| ≤ c_i a.s. because: M_i and M_{i−1} differ only in whether X_i is conditioned on or averaged out, and changing the i-th coordinate moves f by at most c_i.
Step 3: Azuma. Apply Azuma's inequality (proved in Section N.2) to the martingale (M_i):
P(|f − E[f]| ≥ t) ≤ 2 exp(−t² / (2 Σ_i c_i²)).
This IS McDiarmid's inequality (up to the constant in the exponent, which a refinement of Hoeffding's lemma using conditional ranges sharpens to 2t²/Σ c_i²). The concentration inequality of Section 5 is thus a consequence of martingale theory.
R.2 The Freedman Inequality
Azuma's inequality uses only the almost-sure bound |M_k − M_{k−1}| ≤ c_k. Freedman's inequality (1975) also uses conditional variances, giving a Bernstein-type improvement:
Theorem (Freedman, 1975). Let (M_n) be a martingale with |M_k − M_{k−1}| ≤ b a.s. Define V_n = Σ_{k≤n} E[(M_k − M_{k−1})² | F_{k−1}] (total conditional variance). Then for t, v > 0:
P(M_n − M_0 ≥ t, V_n ≤ v) ≤ exp(−t² / (2(v + bt/3))).
This is a martingale version of Bernstein's inequality. The event {V_n ≤ v} restricts to "low variance" trajectories. Unconditionally:
P(M_n − M_0 ≥ t) ≤ exp(−t² / (2(v + bt/3))) + P(V_n > v).
For AI: Freedman's inequality gives tighter bounds for SGD convergence when the conditional gradient variance is small. When gradients are large in norm but consistent in direction (so the conditional variance is small relative to the worst-case bound b), Freedman's bound beats Azuma's worst-case exp(−t²/(2nb²)).
Appendix S: Practice Problems (All Levels)
Level * (Computation)
S.1. Let M_n = a^{S_n} where S_n is the biased SRW with P(X_i = +1) = p and P(X_i = −1) = q = 1 − p. For what value of a is M_n a martingale?
Answer: a = q/p. Set E[a^{X_1}] = pa + q/a equal to 1: pa² − a + q = 0, so a = q/p (or the trivial a = 1).
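A simulation check of S.1, assuming the intended answer a = q/p (p = 0.7 is an illustrative choice; note that a^{S_n} is heavy-tailed, so the empirical mean is noisy):

```python
import numpy as np

rng = np.random.default_rng(6)
p = 0.7
q = 1 - p
a = q / p

# One-step martingale condition: E[a^{X_1}] = p*a + q/a must equal 1.
print(f"p*a + q/a = {p * a + q / a:.6f}")

# E[a^{S_n}] should stay at a^{S_0} = 1 for all n.
steps = rng.choice([1, -1], size=(100000, 10), p=[p, q])
M = a ** steps.cumsum(axis=1).astype(float)
print(f"E[M_10] = {M[:, -1].mean():.3f} (theory: 1)")
```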
S.2. A Poisson process has rate λ (events per hour). Find P(N(1) ≥ 3) and the expected time to the 4th event.
Answer: N(1) ~ Poisson(λ), so P(N(1) ≥ 3) = 1 − e^{−λ}(1 + λ + λ²/2). Time to 4th event: T_4 ~ Gamma(4, λ), so E[T_4] = 4/λ hours.
S.3. For the OU process dX_t = −θX_t dt + σ dW_t with given θ, σ, X_0, t: compute E[X_t] and Var(X_t).
Answer: E[X_t] = X_0 e^{−θt}. Var(X_t) = (σ²/(2θ))(1 − e^{−2θt}).
Level ** (Theory)
S.4. Show that for a square-integrable martingale (M_n), the sequence M_n² − ⟨M⟩_n is also a martingale, where ⟨M⟩_n = Σ_{k≤n} E[(M_k − M_{k−1})² | F_{k−1}] is the predictable quadratic variation.
S.5. Prove that if φ is convex with E|φ(M_n)| < ∞ and (M_n) is a martingale, then (φ(M_n)) is a submartingale (using Jensen's inequality).
S.6. For the compound Poisson process X_t = Σ_{i=1}^{N_t} J_i with N_t ~ Poisson(λt) and iid jumps J_i: compute the MGF E[e^{θX_t}] and use it to find E[X_t] and Var(X_t).
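S.6 can be checked by simulation. The standard answers are E[X_t] = λt E[J] and Var(X_t) = λt E[J²]; the jump law J ~ Exp(1) and the rate below are illustrative choices of mine (then E[J] = 1, E[J²] = 2):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, t, n_sims = 3.0, 2.0, 100000

# X_t = sum of N_t iid Exp(1) jumps, with N_t ~ Poisson(lam * t).
N = rng.poisson(lam * t, size=n_sims)
X = np.array([rng.exponential(1.0, size=k).sum() for k in N])

print(f"E[X_t]  : empirical {X.mean():.3f}, theory lam*t*E[J]   = {lam * t * 1.0:.1f}")
print(f"Var[X_t]: empirical {X.var():.3f}, theory lam*t*E[J^2] = {lam * t * 2.0:.1f}")
```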
Level *** (AI Applications)
S.7. (Diffusion model noise schedule design) You want a noise schedule β_t for the DDPM forward process such that the SNR ᾱ_t/(1 − ᾱ_t) decreases linearly from its initial to its final value over T steps. Derive the required ᾱ_t and compare to the linear β schedule.
S.8. (SGD stationary distribution) In a 2D quadratic loss with Hessian H = diag(h₁, h₂), SGD with step η and isotropic gradient noise has a Gaussian stationary distribution with covariance proportional to H⁻¹. Show that the ratio of the standard deviations in the two directions is √(h₂/h₁), and interpret this for optimisation.
S.9. (Martingale in RLHF) In PPO with KL-regularised reward r(x, y) − β KL(π‖π_ref), show that the KL penalty term creates a non-martingale perturbation to the policy gradient estimator. Under what conditions on β does the perturbation become negligible?
Appendix T: Theorem Index
| Theorem | Statement | Section |
|---|---|---|
| Doob's OST | E[M_T] = E[M_0] under UI or boundedness conditions | Section 3.3 |
| Doob's Maximal Inequality | P(max_{k≤n} M_k ≥ λ) ≤ E[M_n]/λ | Section 3.4, App. N |
| Martingale Convergence | L¹-bounded martingale converges a.s. | Section 3.4 |
| Azuma's Inequality | Bounded differences -> sub-Gaussian tail bound | App. N |
| McDiarmid's Inequality | Bounded-differences function -> Azuma on Doob martingale | Section 3.5 |
| Freedman's Inequality | Martingale Bernstein with conditional variance | App. R |
| Polya's Theorem | SRW on Z^d: recurrent iff d ≤ 2 | Section 4.2, App. N |
| Donsker's Theorem | Scaled SRW converges to BM in distribution | Section 6.3, App. E |
| Levy Characterisation | Continuous local martingale with QV t is a BM | Section 6.2 |
| Ito's Lemma | df(W_t) = f′(W_t) dW_t + ½ f″(W_t) dt | App. A |
| Wiener-Khinchin | ACVF and PSD form a Fourier pair | Section 7.3 |
| Bochner's Theorem | Stationary kernel is valid iff its spectral measure is non-negative | Section 7.3 |
| Birkhoff Ergodic | Time average = ensemble average a.s. | Section 7.2 |
| Girsanov's Theorem | Change of drift = change of measure | App. Q |
| Fokker-Planck | PDE for density of SDE | App. N |
| Anderson's Reverse SDE | Reverse SDE = forward SDE + score | Section 6.6 |
| Clark-Ocone Formula | Malliavin representation on Wiener space | App. J |
| Wald's Identity | E[S_T] = μ E[T] | App. L |
Appendix U: Continuous-Time Martingales
U.1 Local Martingales
In continuous time, the natural generalisation of a martingale is a local martingale: a process (M_t) such that there exist stopping times T_n ↑ ∞ and each stopped process (M_{t∧T_n}) is a martingale.
Every continuous local martingale M with M_0 = 0 can be written as a time-changed Brownian motion (by the Dambis-Dubins-Schwarz theorem):
M_t = B_{[M]_t},
where B is a standard BM and [M]_t is the quadratic variation of M.
The Ito integral ∫₀ᵗ H_s dW_s is a local martingale (and a true martingale when E[∫₀ᵗ H_s² ds] < ∞).
U.2 The Martingale Representation Theorem
Theorem (Martingale Representation). Every square-integrable martingale (M_t) adapted to the Brownian filtration can be written as:
M_t = M_0 + ∫₀ᵗ H_s dW_s
for some adapted process H with E[∫₀ᵗ H_s² ds] < ∞.
For AI: In score-based diffusion model training, the training loss can be expressed as an Ito integral against the Wiener process of the forward noise. The neural network effectively learns the integrand that represents the reverse process. The martingale representation theorem guarantees that such a representation exists - it is the theoretical justification for why neural networks can learn to "undo" Brownian noise.
U.3 Feynman-Kac Formula
For the SDE dX_t = μ(X_t) dt + σ(X_t) dW_t and a function f, the Feynman-Kac formula relates the SDE to a PDE:
u(x, t) = E[f(X_T) | X_t = x]
satisfies the PDE
∂u/∂t + μ(x) ∂u/∂x + ½ σ(x)² ∂²u/∂x² = 0
with terminal condition u(x, T) = f(x).
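A Monte Carlo sanity check of Feynman-Kac in the simplest case μ = 0, σ = 1, f(x) = x² (all illustrative choices): then u(x, t) = x² + (T − t), which indeed solves u_t + ½u_xx = 0 with u(x, T) = x².

```python
import numpy as np

rng = np.random.default_rng(8)

# X is a BM, so X_T | X_t = x is N(x, T - t); sample it directly.
x, t, T, n_paths = 1.5, 0.3, 1.0, 500000
X_T = x + np.sqrt(T - t) * rng.standard_normal(n_paths)

mc = (X_T**2).mean()          # Monte Carlo estimate of u(x, t) = E[f(X_T) | X_t = x]
exact = x**2 + (T - t)        # closed-form solution of the terminal-value PDE
print(f"Monte Carlo u(x,t) = {mc:.4f}, exact = {exact:.4f}")
```

The same expectation-of-terminal-payoff structure is what a neural denoiser approximates in the diffusion-model setting described above.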
For AI: The score function ∇_x log p_t(x) in diffusion models satisfies a backward Kolmogorov-type equation (the PDE form of the reverse SDE). Feynman-Kac connects this PDE to the expectation form u(x, t) = E[f(X_T) | X_t = x], which is directly implemented by the neural denoiser.