Stochastic Processes: Part 1

1. Intuition and Overview

1.1 What Is a Stochastic Process?

A stochastic process is a family $\{X_t\}_{t \in T}$ of random variables defined on a common probability space $(\Omega, \mathcal{F}, P)$, indexed by a set $T$ called the index set or time set. For each fixed outcome $\omega \in \Omega$, the mapping $t \mapsto X_t(\omega)$ is called a sample path or realisation of the process.

The key conceptual shift from classical probability: rather than asking "what is the distribution of $X$?", we ask "how does the random system $\{X_t\}$ evolve over time?" This trajectory-level view captures phenomena that static distributions cannot: memory, feedback, path dependence, and accumulation of information.

Informal classification: The simplest stochastic process is a sequence of iid random variables - no memory, no dependence. More interesting processes have memory (the present depends on the past), structure (the conditional distributions have a specific form), or continuity (sample paths are almost surely continuous). The richest models - Brownian motion, diffusion processes - have all three.

For AI: Almost every quantity that evolves during training or inference is a stochastic process. The parameter vector $\theta_t$ after $t$ steps of SGD is a stochastic process. The hidden state $h_t$ of an RNN at position $t$ is a stochastic process. The sequence of tokens $(w_1, w_2, \ldots)$ sampled from an LLM is a stochastic process. The noise level $\sigma_t$ in a diffusion model forward pass is a stochastic process. Stochastic process theory gives us the mathematical language to reason about all of these rigorously.

1.2 The Central Examples

The five canonical stochastic processes appear throughout mathematics and AI:

  1. Simple Random Walk (SRW): $S_n = \sum_{k=1}^n X_k$ where $X_k \stackrel{iid}{\sim} \text{Uniform}\{-1,+1\}$. Discrete time, discrete state space. Models fair gambling, particle diffusion in discrete space, and gradient noise accumulation.

  2. Poisson Process: $N(t)$ = number of events in $[0,t]$, with $N(t) - N(s) \sim \text{Poisson}(\lambda(t-s))$ independent of the past. Continuous time, discrete (integer) state space. Models arrival streams, request queues, neuron spike trains.

  3. Brownian Motion (Wiener Process): $B_t$ with $B_0 = 0$, continuous paths, $B_t - B_s \sim \mathcal{N}(0, t-s)$ independent of the past. Continuous time, continuous state space. The fundamental building block of continuous-time stochastic processes, and the forward process in diffusion models.

  4. Gaussian Process: A process where every finite collection $(X_{t_1}, \ldots, X_{t_n})$ is jointly Gaussian. Fully specified by its mean function $\mu(t)$ and covariance kernel $k(s,t)$. Foundation of Gaussian process regression (Bayesian nonparametrics) and closely related to neural network functions in the infinite-width limit.

  5. Markov Chain: $\{X_n\}$ where $P(X_{n+1} \mid X_0,\ldots,X_n) = P(X_{n+1} \mid X_n)$ - the past matters only through the present. Discrete time, arbitrary state space. The backbone of MCMC, reinforcement learning, and language model token generation. Full treatment: Section 7 (Markov Chains).
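These definitions are easy to sanity-check numerically. Below is a minimal sketch in plain Python (the function name `srw_pmf` is ours) that computes the exact distribution of the simple random walk at step $n$ and confirms $\mathbb{E}[S_n] = 0$ and $\mathrm{Var}(S_n) = n$:

```python
from math import comb

def srw_pmf(n):
    """Exact distribution of the simple random walk S_n (sum of n fair +/-1 steps).

    S_n = 2*H - n where H ~ Binomial(n, 1/2) counts the +1 steps."""
    return {2 * h - n: comb(n, h) / 2**n for h in range(n + 1)}

n = 20
pmf = srw_pmf(n)
mean = sum(k * p for k, p in pmf.items())
var = sum(k**2 * p for k, p in pmf.items()) - mean**2

print(round(mean, 9), round(var, 6))  # E[S_n] = 0 and Var(S_n) = n = 20
```

The variance identity $\mathrm{Var}(S_n) = n$ is what makes the $\sqrt{n}$ diffusive scaling appear throughout this section.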

1.3 Why Processes Matter for AI

Stochastic process theory provides the theoretical foundation for several pillar areas of modern AI:

Diffusion models (DDPM, DDIM, Score SDE) define a forward process that progressively corrupts data with Gaussian noise - a discrete-time approximation to an Ornstein-Uhlenbeck SDE. The reverse process learned by the neural network is a time-reversed SDE, and training minimises a score-matching objective derived via Itô calculus.

SGD dynamics can be approximated by a continuous-time diffusion SDE: $d\theta = -\nabla L(\theta)\,dt + \sqrt{2\beta^{-1}}\,dB_t$, where $\beta^{-1}$ is the effective noise temperature, proportional to the learning rate divided by the batch size. This approximation (Mandt et al., 2017; Li et al., 2017) explains why SGD finds flat minima: the stationary distribution of the SDE concentrates in regions of low curvature.

Reinforcement learning uses the discounted return $G_t = \sum_{k=0}^\infty \gamma^k r_{t+k}$, which is a function of the trajectory $(r_t, r_{t+1}, \ldots)$ - a stochastic process. The temporal difference error $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$ forms a martingale difference sequence under the optimal value function, and TD-learning is an online martingale approximation algorithm.

Gaussian processes are used as priors over functions in Bayesian optimisation (tuning LLM hyperparameters), and neural tangent kernel theory shows that infinite-width neural networks converge to GPs. RoPE (Rotary Position Embedding) encodes relative position using a stationary kernel - exploiting the wide-sense stationarity machinery developed in Section 7.

1.4 Historical Timeline

| Year | Contributor | Result |
| --- | --- | --- |
| 1900 | Louis Bachelier | Modelled stock prices as a random walk (Brownian motion) in his doctoral thesis |
| 1905 | Albert Einstein | Derived the diffusion equation from random molecular motion |
| 1923 | Norbert Wiener | Constructed Brownian motion rigorously on path space |
| 1933 | Andrei Kolmogorov | Axiomatic probability; forward/backward equations for Markov processes |
| 1940 | Paul Lévy | Characterised Lévy processes; Lévy-Khintchine representation |
| 1944 | Kiyosi Itô | Developed stochastic calculus (Itô integral, Itô's lemma) |
| 1951 | Monroe Donsker | Functional CLT (Donsker's invariance principle) |
| 1953 | Joseph Doob | Martingale theory; optional stopping; convergence theorems |
| 1961 | Anatoliy Skorokhod | Skorokhod embedding theorem |
| 1991 | Brockwell & Davis | Time series analysis unified with stochastic process theory |
| 2020 | Ho, Jain & Abbeel | Denoising diffusion probabilistic models (DDPM) |
| 2021 | Song et al. | Score-based generative models via SDEs |
| 2021 | Su et al. | RoPE: positional encoding as a stationary kernel |

2. Formal Framework

2.1 Probability Spaces and Filtrations

The rigorous foundation of stochastic processes builds on probability spaces enhanced with a filtration - a formal model of accumulating information.

Definition (Filtered Probability Space). A filtered probability space is a quadruple $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, P)$ where:

  • $(\Omega, \mathcal{F}, P)$ is a probability space
  • $(\mathcal{F}_t)_{t \geq 0}$ is a filtration: an increasing family of sub-$\sigma$-algebras, $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ for all $s \leq t$

Intuitively, $\mathcal{F}_t$ represents the information available at time $t$ - everything that has been observed up to and including time $t$. The filtration captures how information accumulates: knowing more over time means larger $\sigma$-algebras.

Definition (Adapted Process). A stochastic process $\{X_t\}$ is adapted to the filtration $(\mathcal{F}_t)$ if $X_t$ is $\mathcal{F}_t$-measurable for each $t$. Intuitively: the value of the process at time $t$ is determined by information available at time $t$ (not the future).

The natural filtration: Given a process $\{X_t\}$, the natural filtration is $\mathcal{F}_t^X = \sigma(X_s : s \leq t)$ - the $\sigma$-algebra generated by the process up to time $t$. Every process is adapted to its natural filtration.

For AI: In online learning, $\mathcal{F}_t$ represents the set of training examples seen up to step $t$. The model parameters $\theta_t$ are $\mathcal{F}_t$-measurable (they depend only on past data). A look-ahead bias - where $\theta_t$ depends on future data - violates adaptedness and invalidates PAC bounds.

In a causal language model, the token at position $t$ is sampled conditioned on all previous tokens - this is precisely the adaptedness condition. Non-causal (bidirectional) models like BERT use a different filtration, where $\mathcal{F}_t$ includes information from both past and future.

Standard conditions (usual conditions): A filtration satisfies the usual conditions if:

  1. $(\mathcal{F}_t)$ is right-continuous: $\mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s$
  2. $\mathcal{F}_0$ contains all $P$-null sets (completeness)

These technical conditions ensure stopping times and martingales behave as expected. All filtrations in this section are assumed to satisfy the usual conditions.

2.2 Classification of Stochastic Processes

Processes are classified by the structure of their time set $T$ and state space $S$:

TAXONOMY OF STOCHASTIC PROCESSES

| Time set $T$ \ State space $S$ | Discrete | Continuous |
| --- | --- | --- |
| Discrete | Markov chains; HMMs; Galton-Watson trees | Random walks in $\mathbb{R}^d$; AR(p) models |
| Continuous | Poisson process; birth-death processes; counting processes | Brownian motion; OU process; diffusion SDEs |

Discrete time, discrete state: Markov chains, hidden Markov models (HMMs used in speech recognition), random walks on graphs. The transition dynamics are described by a stochastic matrix.

Discrete time, continuous state: Random walks in $\mathbb{R}^d$, AR/ARMA time series models, recurrent neural network hidden states. The state space is $\mathbb{R}^d$ for some dimension $d$.

Continuous time, discrete state: Poisson processes, birth-death processes, queueing models. Transitions happen at random times; between transitions the state is constant.

Continuous time, continuous state: Brownian motion, Ornstein-Uhlenbeck process, general diffusion SDEs. These are the most mathematically sophisticated and most relevant to diffusion models and SGD dynamics.

2.3 Stopping Times

A stopping time (or optional time) formalises the idea of a "decision made using only available information."

Definition (Stopping Time). A random variable $\tau: \Omega \to [0,\infty]$ is a stopping time with respect to the filtration $(\mathcal{F}_t)$ if $\{\tau \leq t\} \in \mathcal{F}_t$ for all $t \geq 0$. Equivalently, the decision "stop now" can be made using only information available at the current time.

Examples:

  • $\tau = 5$: constant time - trivially a stopping time
  • $\tau = \inf\{t : X_t \geq a\}$ (first passage time): a stopping time (the event $\{\tau \leq t\} = \{\max_{s \leq t} X_s \geq a\}$ is $\mathcal{F}_t$-measurable)
  • $\tau = \sup\{t \leq T : X_t \geq a\}$ (last time above the level): not a stopping time (requires future knowledge)

The $\sigma$-algebra $\mathcal{F}_\tau$: For a stopping time $\tau$, the $\sigma$-algebra at $\tau$ is $\mathcal{F}_\tau = \{A \in \mathcal{F} : A \cap \{\tau \leq t\} \in \mathcal{F}_t \text{ for all } t\}$. This represents "information available at the random time $\tau$."

For AI: Early stopping in neural network training is a stopping time: the rule "stop when validation loss hasn't improved for $k$ steps" depends only on past validation losses - it is $\mathcal{F}_t$-measurable. In contrast, "stop at the training step that achieves minimum test loss" is NOT a stopping time - it requires knowing future test losses.

In RL, $\tau$ = "the first time the agent reaches the goal state" is a stopping time. Doob's Optional Stopping Theorem (Section 3.3) characterises the distribution of $M_\tau$ - the martingale value at this random time.
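The adaptedness distinction can be made concrete in code: a first-passage time is computable by scanning the path causally, while the last time above a level requires the whole path. A small illustration (the function names are ours):

```python
def first_passage(path, a):
    """tau = inf{t : X_t >= a}: decided using only X_0..X_t, so a stopping time."""
    for t, x in enumerate(path):
        if x >= a:
            return t          # decision made without looking ahead
    return None               # the level is never hit

def last_time_above(path, a):
    """sup{t : X_t >= a}: needs the entire path, hence NOT a stopping time."""
    hits = [t for t, x in enumerate(path) if x >= a]
    return hits[-1] if hits else None

path = [0, 1, 2, 1, 2, 3, 2, 1]
print(first_passage(path, 2), last_time_above(path, 2))  # 2 6
```

`first_passage` could run on a live data stream; `last_time_above` cannot, which is exactly why only the former defines a valid early-stopping rule.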

2.4 Sample Paths and Almost-Sure Properties

For each fixed $\omega \in \Omega$, the sample path $t \mapsto X_t(\omega)$ is an ordinary function of time. The regularity of sample paths (continuity, measurability, boundedness) is crucial for the applicability of theorems.

Path properties:

  • Continuous paths: $t \mapsto X_t(\omega)$ is continuous for a.e. $\omega$. Example: Brownian motion.
  • Càdlàg paths (right-continuous with left limits): $X_t(\omega)$ is right-continuous with left limits for a.e. $\omega$. Example: Poisson process. "Càdlàg" abbreviates the French continu à droite, limite à gauche.
  • Càglàd paths: left-continuous with right limits. Useful for predictable integrands.

Why continuity matters: A process with continuous paths can be approximated by discrete-time processes as the mesh goes to zero - this is the content of Donsker's invariance principle (Section 6.3) and justifies the SDE approximation of SGD. A process with only càdlàg paths (like the Poisson process) has discontinuities that cannot be approximated this way.

Almost-sure properties: Properties that hold with probability one (a.s.) are the gold standard. For example, Brownian motion is a.s. nowhere differentiable - every sample path is continuous but has infinite variation on every interval. This non-differentiability is responsible for the $\sqrt{dt}$ scaling of noise in SDEs (Itô's formula) and explains why diffusion-model noise schedules involve square roots (such as $\sqrt{1-\bar{\alpha}_t}$) rather than scaling linearly in time.
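The roughness of Brownian paths shows up numerically: refining the simulation mesh blows up the first (total) variation of the increments, while the quadratic variation stays near $T$. A seeded illustrative sketch, not a proof:

```python
import math, random

random.seed(0)
T = 1.0
results = []
for n in (1_000, 16_000):                      # two mesh sizes, dt = T/n
    dt = T / n
    # Brownian increments over each mesh interval: N(0, dt)
    incs = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    first_var = sum(abs(d) for d in incs)      # grows like sqrt(n): diverges as mesh -> 0
    quad_var = sum(d * d for d in incs)        # stays near T: the quadratic variation
    results.append((first_var, quad_var))

(v1, q1), (v2, q2) = results
print(v2 > 3 * v1)                             # first variation blows up under refinement
print(abs(q1 - T) < 0.2 and abs(q2 - T) < 0.1) # quadratic variation ~ T = 1
```

The stable quadratic variation is exactly the $[B]_t = t$ identity that powers Itô calculus.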


3. Martingales

3.1 Definition and Intuition

The word martingale comes from a gambling strategy: double your bet after each loss. Mathematically, a martingale captures the notion of a "fair game" - the best prediction of the future value is the current value, given everything known so far.

Definition (Martingale). An adapted process $\{M_t\}_{t \geq 0}$ with $\mathbb{E}[|M_t|] < \infty$ for all $t$ is a martingale with respect to the filtration $(\mathcal{F}_t)$ if:

$$\mathbb{E}[M_t \mid \mathcal{F}_s] = M_s \quad \text{for all } s \leq t$$

The condition says: given all information up to time $s$, the best prediction of $M_t$ is $M_s$. The process neither drifts up nor down in conditional expectation.

Three types:

  • Martingale: $\mathbb{E}[M_t \mid \mathcal{F}_s] = M_s$ - fair game
  • Submartingale: $\mathbb{E}[M_t \mid \mathcal{F}_s] \geq M_s$ - tends upward on average (e.g., convex functions of martingales, via Jensen)
  • Supermartingale: $\mathbb{E}[M_t \mid \mathcal{F}_s] \leq M_s$ - tends downward on average

Discrete-time version: $\{M_n\}_{n \geq 0}$ is a martingale if $\mathbb{E}[M_{n+1} \mid M_0,\ldots,M_n] = M_n$ for all $n$. This is equivalent to the martingale difference condition: $\mathbb{E}[M_{n+1} - M_n \mid \mathcal{F}_n] = 0$.

Immediate consequence (tower property): For any $s \leq t$, $\mathbb{E}[M_t] = \mathbb{E}[\mathbb{E}[M_t \mid \mathcal{F}_s]] = \mathbb{E}[M_s]$. So the mean of a martingale is constant: $\mathbb{E}[M_t] = \mathbb{E}[M_0]$ for all $t$.

3.2 Examples of Martingales

Example 1: Partial Sums of Zero-Mean iid. Let $X_1, X_2, \ldots$ be iid with $\mathbb{E}[X_i] = 0$. Then $S_n = \sum_{k=1}^n X_k$ is a martingale:

$$\mathbb{E}[S_{n+1} \mid \mathcal{F}_n] = \mathbb{E}[S_n + X_{n+1} \mid \mathcal{F}_n] = S_n + \mathbb{E}[X_{n+1}] = S_n$$

Example 2: Products of Unit-Mean iid. Let $X_1, X_2, \ldots$ be iid with $\mathbb{E}[X_i] = 1$, $X_i > 0$. Then $M_n = \prod_{k=1}^n X_k$ is a martingale:

$$\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] = M_n \cdot \mathbb{E}[X_{n+1}] = M_n$$

This is the likelihood ratio (Radon-Nikodym derivative) martingale - fundamental to sequential hypothesis testing and importance sampling.

Example 3: Conditional Expectations. For any integrable $Z$ and filtration $(\mathcal{F}_t)$, the process $M_t = \mathbb{E}[Z \mid \mathcal{F}_t]$ is a martingale (the Doob martingale). This follows directly from the tower property: $\mathbb{E}[M_t \mid \mathcal{F}_s] = \mathbb{E}[\mathbb{E}[Z \mid \mathcal{F}_t] \mid \mathcal{F}_s] = \mathbb{E}[Z \mid \mathcal{F}_s] = M_s$.

Example 4: Pólya Urn. An urn contains $r$ red and $b$ blue balls. At each step, draw a ball, record its colour, and return it with one extra ball of the same colour. Let $R_n$ = fraction of red balls after step $n$. Then $R_n$ is a martingale, and the limiting fraction $R_\infty \sim \text{Beta}(r, b)$.

Example 5: Squared SRW Minus Time. The simple random walk $S_n$ satisfies: $S_n^2 - n$ is a martingale (when $\text{Var}(X_k) = 1$):

$$\mathbb{E}[S_{n+1}^2 - (n+1) \mid \mathcal{F}_n] = \mathbb{E}[(S_n + X_{n+1})^2 \mid \mathcal{F}_n] - n - 1 = S_n^2 + 1 - n - 1 = S_n^2 - n$$

For AI: The difference between the mini-batch gradient estimate $\nabla \hat{L}(\theta)$ and the gradient of the expected loss $\nabla L(\theta)$ is a martingale difference - the fundamental quantity that makes SGD unbiased. The accumulation of these martingale differences over training steps determines the SGD trajectory.
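The Pólya urn martingale (Example 4) can be verified exactly: one draw leaves the expected red fraction unchanged. A minimal sketch using exact rational arithmetic (the helper name is ours):

```python
from fractions import Fraction

def expected_next_fraction(r, b):
    """E[R_{n+1} | urn has r red, b blue]: draw a ball, return it plus one
    extra of the same colour, and average the new red fraction over the draw."""
    total = r + b
    p_red = Fraction(r, total)
    frac_if_red = Fraction(r + 1, total + 1)   # red drawn: r+1 red out of total+1
    frac_if_blue = Fraction(r, total + 1)      # blue drawn: r red out of total+1
    return p_red * frac_if_red + (1 - p_red) * frac_if_blue

# Martingale property: the expected next fraction equals the current r/(r+b).
for r, b in [(1, 1), (3, 5), (7, 2)]:
    assert expected_next_fraction(r, b) == Fraction(r, r + b)
print("Polya urn martingale check passed")
```

Algebraically, $\frac{r}{r+b}\cdot\frac{r+1}{r+b+1} + \frac{b}{r+b}\cdot\frac{r}{r+b+1} = \frac{r}{r+b}$, which is exactly what the assertions confirm.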

3.3 Doob's Optional Stopping Theorem

Doob's Optional Stopping Theorem (OST) is perhaps the most useful result in martingale theory. It answers: "what is the expected value of a martingale at a stopping time?"

Naively: Since $\mathbb{E}[M_n] = \mathbb{E}[M_0]$ for all $n$, one might expect $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$ for any stopping time $\tau$. This is false in general: for the simple random walk with $\tau = \inf\{n : S_n = 1\}$ (the doubling-strategy gambler who quits when first ahead), $\tau < \infty$ a.s. yet $\mathbb{E}[S_\tau] = 1 \neq 0 = \mathbb{E}[S_0]$.

Theorem (Doob's Optional Stopping Theorem). Let $\{M_n\}$ be a martingale and $\tau$ a stopping time. Then $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$ provided at least one of the following holds:

  1. $\tau$ is bounded: $\tau \leq N$ a.s. for some constant $N$
  2. $\mathbb{E}[\tau] < \infty$ and $|M_{n+1} - M_n| \leq C$ a.s. for some constant $C$
  3. $\mathbb{E}[\sup_{n} |M_{n \wedge \tau}|] < \infty$ (uniform integrability)

Proof sketch (bounded case). $\mathbb{E}[M_\tau] = \sum_{k=0}^N \mathbb{E}[M_k \mathbf{1}_{\tau=k}]$. Write $M_k = M_N - \sum_{j=k}^{N-1}(M_{j+1}-M_j)$, use adaptedness of $\{\tau=k\}$, and the martingale property $\mathbb{E}[M_{j+1}-M_j \mid \mathcal{F}_j] = 0$. Each cross term vanishes, leaving $\mathbb{E}[M_\tau] = \mathbb{E}[M_N] = \mathbb{E}[M_0]$. $\square$

Application: Gambler's Ruin. Symmetric random walk $S_n$ on $\{0, 1, \ldots, N\}$ with absorbing barriers. Start at $S_0 = k$. Let $\tau = \inf\{n : S_n \in \{0, N\}\}$.

Since $S_n$ is a martingale with bounded increments, and it can be shown that $\mathbb{E}[\tau] < \infty$, condition 2 of the theorem applies:

$$\mathbb{E}[S_\tau] = \mathbb{E}[S_0] = k$$
$$\mathbb{E}[S_\tau] = N \cdot P(S_\tau = N) + 0 \cdot P(S_\tau = 0) = N \cdot P(\text{win})$$
$$\Rightarrow P(\text{win}) = k/N$$

Also: $S_n^2 - n$ is a martingale, so $\mathbb{E}[S_\tau^2 - \tau] = k^2 - 0$, giving $\mathbb{E}[\tau] = \mathbb{E}[S_\tau^2] - k^2 = (N \cdot (k/N) \cdot N + 0) - k^2 = k(N-k)$.
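The $P(\text{win}) = k/N$ answer can be recovered exactly by first-step analysis, the discrete counterpart of the OST argument. A sketch with exact rational arithmetic (`solve_fair_walk` is our name):

```python
from fractions import Fraction

def solve_fair_walk(N):
    """h(k) = P(hit N before 0 | start at k) for the fair walk on {0,...,N}.

    First-step analysis: h(k) = (h(k-1) + h(k+1)) / 2 with h(0) = 0, h(N) = 1,
    i.e. h(k+1) = 2 h(k) - h(k-1). Write h(k) = a_k * h(1), propagate the
    coefficients a_k exactly, then fix h(1) from the boundary h(N) = 1."""
    a = [Fraction(0), Fraction(1)]        # a_0 = 0, a_1 = 1
    for k in range(1, N):
        a.append(2 * a[k] - a[k - 1])     # a_{k+1} = 2 a_k - a_{k-1}
    h1 = Fraction(1) / a[N]               # h(N) = a_N * h(1) = 1
    return [ak * h1 for ak in a]

N = 10
h = solve_fair_walk(N)
assert h == [Fraction(k, N) for k in range(N + 1)]   # P(win | start k) = k/N
print("gambler's ruin: P(win | start k) = k/N verified for N =", N)
```

The same recursion with an extra constant term yields the expected duration $k(N-k)$.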

3.4 Doob's Martingale Inequalities

Beyond Optional Stopping, Doob proved several fundamental inequalities that control the maximum of a martingale.

Theorem (Doob's Maximal Inequality). For a non-negative submartingale $\{M_n\}$ and $\lambda > 0$:

$$\lambda \cdot P\!\left(\max_{0 \leq k \leq n} M_k \geq \lambda\right) \leq \mathbb{E}[M_n]$$

Corollary (Doob's $L^2$ Inequality). For a martingale $\{M_n\}$ with $\mathbb{E}[M_n^2] < \infty$:

$$\mathbb{E}\!\left[\max_{0 \leq k \leq n} M_k^2\right] \leq 4\,\mathbb{E}[M_n^2]$$

Theorem (Martingale Convergence Theorem, Doob 1953). Every $L^1$-bounded martingale (i.e., $\sup_n \mathbb{E}[|M_n|] < \infty$) converges almost surely to a finite limit $M_\infty$.

This convergence theorem is the rigorous foundation for why TD-learning and Q-learning converge under appropriate conditions - the value function updates form a bounded supermartingale under the optimal policy.

3.5 The Doob Martingale Construction

Definition. For a function $f(X_1,\ldots,X_n)$ of independent random variables, define:

$$M_k = \mathbb{E}[f(X_1,\ldots,X_n) \mid X_1,\ldots,X_k], \quad k = 0, 1, \ldots, n$$

Then $M_0 = \mathbb{E}[f]$, $M_n = f(X_1,\ldots,X_n)$, and $\{M_k\}$ is a martingale (by the tower property). This is the Doob martingale for $f$.

Connection to McDiarmid: The martingale differences are $D_k = M_k - M_{k-1}$. If $f$ has bounded differences $|f(\ldots,x_k,\ldots) - f(\ldots,x_k',\ldots)| \leq c_k$, then $|D_k| \leq c_k$ a.s. Applying Azuma's inequality to the martingale $(M_0, M_1, \ldots, M_n)$:

$$P(f - \mathbb{E}[f] \geq t) = P(M_n - M_0 \geq t) \leq \exp\!\left(-\frac{2t^2}{\sum_k c_k^2}\right)$$

This is exactly McDiarmid's inequality - it is Azuma applied to the Doob martingale. The Doob construction thus unifies martingale theory with the concentration inequalities of Section 5.

For AI: The empirical risk $\hat{R}(h) = \frac{1}{n}\sum_{i=1}^n \ell(h(x_i), y_i)$ is a function of $n$ independent training examples, each contributing a bounded difference $c_i = 1/n$ (for a loss bounded in $[0,1]$). The Doob martingale $M_k = \mathbb{E}[\hat{R}(h) \mid z_1,\ldots,z_k]$ interpolates from $R(h)$ to $\hat{R}(h)$, and Azuma gives the McDiarmid generalisation bound $P(|\hat{R}-R| \geq t) \leq 2e^{-2nt^2}$.
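A quick seeded sanity check of this bound: for iid losses in $[0,1]$ (we use Uniform$[0,1]$ as a stand-in), the empirical tail of the mean's deviation should sit below the McDiarmid bound $2e^{-2nt^2}$:

```python
import math, random

random.seed(1)
n, t, trials = 50, 0.15, 20_000
# Loss values in [0,1] with c_i = 1/n give the McDiarmid bound 2 exp(-2 n t^2).
bound = 2 * math.exp(-2 * n * t**2)

exceed = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]   # iid Uniform[0,1]; true mean R = 1/2
    if abs(sum(xs) / n - 0.5) >= t:            # deviation of the empirical mean
        exceed += 1

print(exceed / trials <= bound)                # empirical tail is below the bound
```

For these parameters the bound is about $0.21$ while the observed frequency is far smaller - concentration bounds are valid but often loose.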

3.6 Supermartingales and Submartingales

Supermartingales ($\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] \leq M_n$) model processes that decrease in conditional expectation - losing bets, damped oscillators, optimisation algorithms.

Key example: For a Lyapunov function $V(\theta)$ in optimisation, if $\mathbb{E}[V(\theta_{n+1}) \mid \theta_n] \leq V(\theta_n) - \alpha \|\nabla L(\theta_n)\|^2 + \text{noise}$, then $V(\theta_n)$ is a "noisy supermartingale." Under appropriate noise control, the Martingale Convergence Theorem guarantees $V(\theta_n) \to V^*$ a.s., implying convergence to a critical point.

Submartingales ($\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] \geq M_n$) arise as convex functions of martingales (by Jensen's inequality: $f$ convex and $M_n$ a martingale $\Rightarrow$ $f(M_n)$ is a submartingale). Examples: $|M_n|$, $M_n^2$, $e^{tM_n}$ (for $t > 0$).

For AI: The training loss trajectory is typically a supermartingale in the early phases of training (decreasing in expectation per step). Plateau phases correspond to near-martingale behaviour. The Doob decomposition theorem guarantees any submartingale $Y_n$ can be written as $Y_n = M_n + A_n$ where $M_n$ is a martingale and $A_n$ is a predictable increasing process - this decomposition underlies variance reduction in stochastic optimisation.

3.7 ML: SGD as a Near-Martingale

Consider the SGD update $\theta_{n+1} = \theta_n - \eta \nabla \hat{L}_n(\theta_n)$, where $\hat{L}_n$ is the mini-batch loss. Define the gradient noise $g_n = \nabla \hat{L}_n(\theta_n) - \nabla L(\theta_n)$.

Since each mini-batch is sampled iid, $\mathbb{E}[g_n \mid \theta_n] = 0$ - the gradient noise is a martingale difference sequence. The accumulated noise $S_N = \sum_{n=1}^N g_n$ forms a martingale.

The diffusion approximation (continuous-time limit): As the step size $\eta \to 0$ with $N \to \infty$ and $\eta N \to T$ fixed, the SGD trajectory converges in distribution to the SDE:

$$d\theta_t = -\nabla L(\theta_t)\,dt + \sqrt{\eta\,\Sigma(\theta_t)}\,dB_t$$

where $\Sigma(\theta) = \text{Cov}(\nabla \hat{L}(\theta))$ is the gradient covariance matrix and $B_t$ is a Brownian motion in parameter space. The noise temperature $\eta/B$ (learning rate / batch size) controls the diffusion coefficient.

Variance reduction methods: SVRG (Stochastic Variance Reduced Gradient) and SAG (Stochastic Average Gradient) reduce $\|g_n\|$ to make SGD closer to true gradient descent. They shrink the variance of the martingale noise $g_n$ from $O(1)$ to $O(1/n)$ by periodically computing the full gradient. The convergence improvement from $O(1/\sqrt{n})$ (SGD) to $O(\rho^n)$ (linear convergence of SVRG) directly reflects this martingale variance reduction.
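The martingale-difference property $\mathbb{E}[g_n \mid \theta_n] = 0$ is just the unbiasedness of the mini-batch gradient. A seeded sketch on a toy quadratic loss (all names ours):

```python
import random

random.seed(0)
# Toy loss L(w) = mean_i (w - x_i)^2 over a fixed dataset: the full gradient
# is 2*(w - mean(x)); a mini-batch gradient is a noisy but unbiased estimate.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
w = 0.7
full_grad = 2 * (w - sum(data) / len(data))

# Average many mini-batch gradients: the martingale-difference noise averages out.
est, trials, batch = 0.0, 5_000, 10
for _ in range(trials):
    mb = random.sample(data, batch)
    est += 2 * (w - sum(mb) / batch)
est /= trials

print(abs(est - full_grad) < 0.05)   # mini-batch gradient is unbiased
```

The individual mini-batch gradients scatter widely around `full_grad`; only their conditional mean is exact, which is all SGD's convergence theory needs.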


4. Discrete-Time Random Walks

4.1 Simple Random Walk

Definition. The simple random walk (SRW) on $\mathbb{Z}$ is $S_0 = 0$, $S_n = \sum_{k=1}^n X_k$ where $X_k \stackrel{iid}{\sim} \text{Uniform}\{-1, +1\}$ (so $P(X_k = +1) = P(X_k = -1) = 1/2$).

Distribution: $P(S_n = k) = \binom{n}{(n+k)/2}\,2^{-n}$ for $k \equiv n \pmod{2}$. By the CLT: $S_n / \sqrt{n} \xrightarrow{d} \mathcal{N}(0,1)$.

Key identity (Reflection Principle): For $a, b > 0$, the number of paths from $(0,a)$ to $(n,b)$ that touch or cross the $x$-axis equals the number of all paths from $(0,-a)$ to $(n,b)$. This gives:

$$P\!\left(\max_{0 \leq k \leq n} S_k \geq m\right) = 2P(S_n \geq m) - P(S_n = m)$$

for integers $m > 0$ - approximately $2P(S_n \geq m)$ for large $n$.

The arc-sine law: The last time the SRW is at 0 before time $n$ has a distribution that concentrates near 0 and $n$, not at $n/2$ as intuition suggests. Formally: if $L_n = \max\{k \leq n : S_k = 0\}$, then $L_n/n \xrightarrow{d} \text{Arcsine}(0,1)$.

Biased random walk: With $P(X_k = +1) = p$, $P(X_k = -1) = 1-p$, the walk has drift $\mu = 2p-1$. It is a submartingale if $p > 1/2$, a supermartingale if $p < 1/2$, and a martingale if $p = 1/2$. The exponential tilting $M_n = ((1-p)/p)^{S_n}$ is a martingale for any $p$.
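The reflection-principle identity is exact for the SRW and can be confirmed by brute force over all $2^n$ paths for small $n$ (working with path counts rather than probabilities to stay in integers):

```python
from itertools import product

def check_reflection(n, m):
    """Exact check of  #{max_{k<=n} S_k >= m}  ==  2 #{S_n >= m} - #{S_n = m}
    by enumerating all 2^n sample paths of the simple random walk."""
    hit = end_ge = end_eq = 0
    for steps in product((-1, 1), repeat=n):
        s, mx = 0, 0
        for x in steps:
            s += x
            mx = max(mx, s)
        hit += mx >= m
        end_ge += s >= m
        end_eq += s == m
    return hit, 2 * end_ge - end_eq

for n, m in [(8, 2), (10, 3), (12, 1)]:
    lhs, rhs = check_reflection(n, m)
    assert lhs == rhs
print("reflection principle verified by enumeration")
```

Dividing both sides by $2^n$ recovers the probability statement above.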

4.2 Recurrence and Transience

Definition. A random walk is recurrent if it returns to its starting point with probability 1; transient if it escapes to infinity with positive probability.

Pólya's Theorem (1921). The SRW on $\mathbb{Z}^d$ is:

  • Recurrent for $d = 1$ and $d = 2$ (probability 1 of returning to the origin)
  • Transient for $d \geq 3$ (positive probability of never returning)

Proof idea for $d=1$: The expected number of returns to 0 is $\sum_{n} P(S_{2n}=0) = \sum_n \binom{2n}{n}4^{-n} \sim \sum_n 1/\sqrt{\pi n} = \infty$ by Stirling. An infinite expected number of returns forces the return probability to equal 1: if the walk returned with probability $\rho < 1$, the number of returns would be geometric with finite mean $\rho/(1-\rho)$.

Proof idea for $d=3$: By the local CLT, $P(S_{2n}=0) \sim C n^{-3/2}$, so $\sum_n P(S_{2n}=0) < \infty$; by Borel-Cantelli the walk returns only finitely many times.

For AI: In gradient descent on a loss landscape with a flat direction (rank deficiency), the gradient is zero in that direction - the component of the iterate along the flat direction performs a random walk. In one or two flat directions this walk is recurrent but has unbounded excursions: training can drift arbitrarily far in the flat direction, explaining parameter growth and the need for weight decay (which adds a restoring force that keeps the iterates bounded).

4.3 Gambler's Ruin

Setup: A gambler starts with $k$ dollars, plays a fair game (winning or losing 1 dollar each round), and stops upon reaching $N$ dollars (success) or $0$ dollars (ruin).

Result (via OST): Using the martingales $S_n$ and $S_n^2 - n$:

  • $P(\text{ruin}) = 1 - k/N$, $P(\text{success}) = k/N$
  • $\mathbb{E}[\tau] = k(N-k)$ (expected duration)

Biased gambler ($p \neq 1/2$): Use the exponential martingale $M_n = ((1-p)/p)^{S_n}$:

$$P(\text{success}) = \frac{1 - (q/p)^k}{1 - (q/p)^N}, \quad q = 1-p$$

For AI: Gambler's ruin models early stopping in training: the training loss random-walks around a local minimum, and we stop when it either reaches a "good" threshold (success) or explodes (ruin). The expected stopping time $k(N-k)$ motivates why learning rate schedules that shrink the step size as training progresses can reduce the expected convergence time.

4.4 ML: Token Sequences as Random Walks

A language model generates tokens sequentially. Conditional on the context, each token is a draw from a distribution - making the sequence $w_1, w_2, \ldots$ a stochastic process. The log-probability of a sequence:

$$\log P(w_1, \ldots, w_T) = \sum_{t=1}^T \log P(w_t \mid w_1, \ldots, w_{t-1})$$

is a sum of per-token log-probabilities - a random walk in log-space. The perplexity $\text{PPL} = \exp\!\big(-\frac{1}{T}\sum_t \log P(w_t \mid w_{<t})\big)$ is the geometric mean of the inverse token probabilities, and concentrates around $e^{H}$ where $H$ is the true entropy rate of the language.

Surprise accumulation: For a well-calibrated model, the surprise $-\log P(w_t \mid w_{<t})$, centred by the model's predicted entropy, forms a martingale difference sequence under the true distribution, so the centred cumulative surprise is a martingale. If the model is miscalibrated, this process is a submartingale (model too confident -> underestimates entropy) or a supermartingale (model too diffuse). This connects language model calibration to martingale theory.
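The concentration $\text{PPL} \approx e^H$ is easy to see on a toy source. A seeded sketch using an iid "language" whose model matches the true distribution exactly (an assumption; real LMs are only approximately calibrated):

```python
import math, random

random.seed(0)
vocab = ["a", "b", "c"]
p = [0.5, 0.3, 0.2]
H = -sum(q * math.log(q) for q in p)          # entropy rate of the iid source

T = 50_000
tokens = random.choices(vocab, weights=p, k=T)
# Cumulative log-probability: a random walk in log-space with drift -H per step.
log_prob = sum(math.log(p[vocab.index(w)]) for w in tokens)
ppl = math.exp(-log_prob / T)

print(abs(ppl - math.exp(H)) < 0.05)          # perplexity concentrates near e^H
```

Here $e^H \approx 2.80$, comfortably between 1 (deterministic source) and the vocabulary size 3 (uniform source).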


5. Poisson Processes

5.1 Definition and Characterisation

The Poisson process is the canonical continuous-time counting process - it counts the number of events in an interval.

Definition (Poisson Process). A right-continuous process $\{N(t)\}_{t \geq 0}$ with $N(0) = 0$ is a Poisson process with rate $\lambda > 0$ if:

  1. Independent increments: For any $0 \leq t_0 < t_1 < \cdots < t_n$, the increments $N(t_1)-N(t_0), N(t_2)-N(t_1), \ldots$ are mutually independent
  2. Stationary increments: $N(t+s) - N(s) \stackrel{d}{=} N(t) - N(0)$ for all $t, s \geq 0$
  3. Poisson-distributed increments: $N(t) - N(s) \sim \text{Poisson}(\lambda(t-s))$ for $t > s$

Equivalent characterisation via inter-arrivals: If $T_1, T_2, \ldots$ are the inter-arrival times between successive events, then $\{N(t)\}$ is a Poisson process iff $T_1, T_2, \ldots \stackrel{iid}{\sim} \text{Exp}(\lambda)$.

The Bernoulli (small-interval) limit: A Poisson process is the limit of a Bernoulli process: divide $[0,T]$ into $n$ intervals of length $\delta = T/n$ and place an event in interval $k$ with probability $\lambda\delta$, independently. As $n \to \infty$ (so $\delta \to 0$ with $n\lambda\delta = \lambda T$ fixed), this converges to a Poisson process with rate $\lambda$.

Martingale structure: $N(t) - \lambda t$ is a martingale (the compensated Poisson process). Its predictable quadratic variation is $\lambda t$, so $(N(t)-\lambda t)^2 - \lambda t$ is also a martingale.
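The inter-arrival characterisation gives the standard way to simulate a Poisson process. A seeded sketch (`poisson_count` is our name) that builds $N(T)$ from iid $\text{Exp}(\lambda)$ gaps and checks the Poisson mean-equals-variance signature:

```python
import random

random.seed(0)

def poisson_count(lam, T):
    """Sample N(T) by accumulating iid Exp(lam) inter-arrival times."""
    t, count = random.expovariate(lam), 0
    while t <= T:
        count += 1
        t += random.expovariate(lam)
    return count

lam, T, trials = 3.0, 2.0, 20_000
counts = [poisson_count(lam, T) for _ in range(trials)]
mean = sum(counts) / trials
var = sum(c * c for c in counts) / trials - mean**2

# N(T) ~ Poisson(lam*T): mean and variance should both be close to lam*T = 6.
print(abs(mean - lam * T) < 0.1, abs(var - lam * T) < 0.3)
```

Mean $\approx$ variance is a quick diagnostic for Poisson-ness of real event streams (request logs, spike trains); over- or under-dispersion signals departure from the model.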

5.2 Properties: Superposition and Thinning

Superposition: If $N_1(t)$ and $N_2(t)$ are independent Poisson processes with rates $\lambda_1$ and $\lambda_2$, then $N(t) = N_1(t) + N_2(t)$ is a Poisson process with rate $\lambda_1 + \lambda_2$. The superposition of $k$ independent Poisson processes is Poisson with rate $\sum_i \lambda_i$.

Thinning (Colouring): Start with a Poisson process $N(t)$ with rate $\lambda$. Independently colour each arrival red with probability $p$ and blue with probability $1-p$. Then the red and blue counting processes are independent Poisson processes with rates $\lambda p$ and $\lambda(1-p)$.

Conditioning (uniform order statistics): Given $N(T) = n$, the arrival times $T_1 < T_2 < \cdots < T_n$ are distributed as the order statistics of $n$ iid Uniform$(0,T)$ random variables. This is the conditional uniformity property - conditioning on the count makes the arrival times maximally spread.

Non-homogeneous Poisson process: Generalise by allowing a time-varying rate $\lambda(t)$: $N(t) - N(s) \sim \text{Poisson}(\int_s^t \lambda(u)\,du)$ with independent increments. The compensator is $\Lambda(t) = \int_0^t \lambda(u)\,du$ (the integrated intensity).
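The superposition property reduces to a distributional identity at any fixed $t$: the convolution of two Poisson PMFs is again Poisson. A minimal numerical check:

```python
import math

def poisson_pmf(lam, k):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Superposition at a fixed time: Poisson(2) + Poisson(3) = Poisson(5).
lam1, lam2 = 2.0, 3.0
for k in range(15):
    conv = sum(poisson_pmf(lam1, j) * poisson_pmf(lam2, k - j) for j in range(k + 1))
    assert abs(conv - poisson_pmf(lam1 + lam2, k)) < 1e-12
print("superposition verified: Poisson(2) + Poisson(3) = Poisson(5)")
```

The full process-level statement additionally requires independent increments, which the convolution check alone does not test; it follows from the independence of the two streams.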

5.3 Compound Poisson Process

Definition. Let {N(t)}\{N(t)\} be a Poisson process with rate λ\lambda and {Yk}\{Y_k\} iid random variables (independent of NN). The compound Poisson process is:

X(t)=k=1N(t)YkX(t) = \sum_{k=1}^{N(t)} Y_k

Key moments:

  • E[X(t)]=λtE[Y1]\mathbb{E}[X(t)] = \lambda t \cdot \mathbb{E}[Y_1]
  • Var(X(t))=λtE[Y12]\text{Var}(X(t)) = \lambda t \cdot \mathbb{E}[Y_1^2]
  • MGF: MX(t)(s)=exp(λt(MY1(s)1))M_{X(t)}(s) = \exp(\lambda t (M_{Y_1}(s) - 1))

For AI: The total compute consumed by serving an LLM API follows a compound Poisson model: requests arrive as a Poisson process, and each request requires a random amount of compute (the "jump size" YkY_k). Capacity planning uses compound Poisson theory to compute the required buffer size to handle peak demand with high probability.
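A small Monte Carlo sketch of the moment formulas, with Exp(1) jump sizes as an arbitrary illustrative choice (so E[Y1]=1\mathbb{E}[Y_1] = 1 and E[Y12]=2\mathbb{E}[Y_1^2] = 2):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, n_paths = 2.0, 3.0, 50_000

# For each path, draw N(t) ~ Poisson(lam * t), then sum that many Exp(1) jumps.
N = rng.poisson(lam * t, size=n_paths)
X = np.array([rng.exponential(1.0, size=n).sum() for n in N])

# Theory: E[X(t)] = lam*t*E[Y1] = 6, Var(X(t)) = lam*t*E[Y1^2] = 12.
print(X.mean(), X.var())
```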

5.4 ML: Event Streams and Token Timing

LLM inference event streams: In deployed LLMs, user requests arrive as a Poisson process with rate λ\lambda (requests/second). Under heavy load, the inter-arrival distribution determines queueing behaviour. If token generation takes μ1\mu^{-1} seconds on average, the system is stable when λ<μ\lambda < \mu (arrival rate < service rate) - an M/M/1 queue analysis using Poisson process theory.

KV-cache access patterns: In multi-head attention with prefix caching, the pattern of cache hits/misses follows a complex point process. Poisson approximation applies when individual hit probabilities are small and requests are approximately independent.

Continuous batching: Modern LLM serving (vLLM, TensorRT-LLM) uses continuous batching: new requests join an existing batch as others complete. The batch size at time tt is a birth-death process - a continuous-time Markov chain with Poisson arrivals and Exponential service times.


6. Brownian Motion

6.1 Definition and Wiener's Axioms

Brownian motion (the Wiener process) is the fundamental continuous-time continuous-path stochastic process. It is simultaneously the limit of rescaled random walks (Donsker's theorem) and the building block for all diffusion processes in modern AI.

Definition (Standard Brownian Motion). A stochastic process {Bt}t0\{B_t\}_{t \geq 0} is a standard Brownian motion (or Wiener process) if:

  1. B0=0B_0 = 0 a.s.
  2. Independent increments: For 0t0<t1<<tn0 \leq t_0 < t_1 < \cdots < t_n, the increments Bt1Bt0,Bt2Bt1,B_{t_1}-B_{t_0}, B_{t_2}-B_{t_1}, \ldots are mutually independent
  3. Gaussian increments: BtBsN(0,ts)B_t - B_s \sim \mathcal{N}(0, t-s) for all 0s<t0 \leq s < t
  4. Continuous paths: tBt(ω)t \mapsto B_t(\omega) is continuous a.s.

Existence: Wiener (1923) proved that such a process exists by constructing the Wiener measure on the space C([0,))C([0,\infty)) of continuous functions. A later, simpler route is the Levy-Ciesielski expansion in the Haar/Schauder basis. Both are foundational results of modern probability.

Finite-dimensional distributions: For 0<t1<<tn0 < t_1 < \cdots < t_n, the joint density of (Bt1,,Btn)(B_{t_1}, \ldots, B_{t_n}) is:

p(x1,,xn)=k=1n12π(tktk1)exp ⁣((xkxk1)22(tktk1))p(x_1, \ldots, x_n) = \prod_{k=1}^n \frac{1}{\sqrt{2\pi(t_k - t_{k-1})}} \exp\!\left(-\frac{(x_k - x_{k-1})^2}{2(t_k - t_{k-1})}\right)

with t0=x0=0t_0 = x_0 = 0.

Martingale properties: The following are all martingales:

  • BtB_t (mean zero, no drift)
  • Bt2tB_t^2 - t (variance process)
  • exp(sBts2t/2)\exp(sB_t - s^2t/2) for any sRs \in \mathbb{R} (exponential martingale / Doleans-Dade)

6.2 Key Properties and Quadratic Variation

Self-similarity: {cBt/c2}t0=d{Bt}t0\{cB_{t/c^2}\}_{t \geq 0} \stackrel{d}{=} \{B_t\}_{t \geq 0} for any c>0c > 0. The process "looks the same" at any scale - it is fractal.

Time inversion: {tB1/t}t>0=d{Bt}t>0\{tB_{1/t}\}_{t > 0} \stackrel{d}{=} \{B_t\}_{t > 0} (with B0=0B_0 = 0).

Non-differentiability: Brownian motion is a.s. nowhere differentiable. Formally: for almost every ω\omega, the function tBt(ω)t \mapsto B_t(\omega) is not differentiable at any point. This is why stochastic calculus requires the Ito integral - the standard Riemann-Stieltjes integral does not apply.

Quadratic variation: The key property distinguishing Brownian motion from ordinary functions is its quadratic variation:

[B]_t \coloneqq \lim_{|\mathcal{P}| \to 0} \sum_{k} (B_{t_k} - B_{t_{k-1}})^2 = t \quad \text{in probability}

where the limit is over partitions P={0=t0<t1<<tn=t}\mathcal{P} = \{0 = t_0 < t_1 < \cdots < t_n = t\} as the mesh P0|\mathcal{P}| \to 0 (along a refining sequence of partitions the convergence is even almost sure). Informally: (dBt)2=dt(dB_t)^2 = dt. This is the foundation of Ito's lemma and the dt\sqrt{dt} scaling of stochastic noise.

The Levy characterisation: A continuous martingale {Mt}\{M_t\} with M0=0M_0 = 0 and quadratic variation [M]t=t[M]_t = t is a standard Brownian motion. This characterises BM among all continuous martingales.

For AI: The quadratic variation formula (dBt)2=dt(dB_t)^2 = dt explains why diffusion model noise at timestep tt scales as t\sqrt{t}, not tt. In DDPM, the forward process produces xt=αˉtx0+1αˉtϵx_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon with ϵN(0,I)\epsilon \sim \mathcal{N}(0, I), where αˉt=s=1t(1βs)\bar{\alpha}_t = \prod_{s=1}^t (1-\beta_s) - a discretisation of BM's variance accumulation.
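The quadratic-variation property is easy to see numerically: on a fine grid, the squared increments of a single Brownian path sum to tt, while the absolute increments diverge. A minimal sketch (grid size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
t, n = 1.0, 1_000_000

# Increments of one Brownian path on a grid of mesh t/n.
dB = rng.normal(0.0, np.sqrt(t / n), size=n)

quad_var = np.sum(dB ** 2)       # quadratic variation: concentrates at t = 1
first_var = np.sum(np.abs(dB))   # first variation: grows like sqrt(n), no finite limit
print(quad_var, first_var)
```

This is exactly why Riemann-Stieltjes integration against a Brownian path fails but the Ito integral works.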

6.3 Brownian Motion as a Limit

Theorem (Donsker's Invariance Principle, 1951). Let X1,X2,X_1, X_2, \ldots be iid with E[Xi]=0\mathbb{E}[X_i] = 0 and E[Xi2]=σ2<\mathbb{E}[X_i^2] = \sigma^2 < \infty. Define the continuous interpolation:

Bn(t)=1σn(Snt+(ntnt)Xnt+1),t[0,1]B_n(t) = \frac{1}{\sigma\sqrt{n}}\left(S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor) X_{\lfloor nt \rfloor + 1}\right), \quad t \in [0,1]

Then BnBB_n \Rightarrow B (convergence in distribution on C([0,1])C([0,1]) with the uniform topology), where BB is standard Brownian motion.

This is the functional central limit theorem - it upgrades the CLT from convergence of individual random variables to convergence of entire paths. The SRW path, rescaled by 1/n1/\sqrt{n} in space and 1/n1/n in time, converges to Brownian motion.

Implications: Any result proved for Brownian motion has a discrete-time analogue for random walks. This transfers insights between the continuous-time theory (SDEs, Ito calculus) and the discrete-time practice (mini-batch SGD, RL).
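Donsker's theorem can be eyeballed through its simplest consequence, the t=1t = 1 marginal: the rescaled endpoint Sn/(σn)S_n/(\sigma\sqrt{n}) of a ±1\pm 1 walk is approximately N(0,1)=B1\mathcal{N}(0,1) = B_1. The sample sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_paths = 2_500, 5_000

# Many independent +/-1 random walks; rescaled endpoint S_n / sqrt(n) ~ N(0, 1).
steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
endpoints = steps.sum(axis=1) / np.sqrt(n)

print(endpoints.mean(), endpoints.var())
```

The printed mean and variance land near 0 and 1; the full theorem says far more - the whole rescaled path converges as a random function.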

6.4 Geometric Brownian Motion

Definition. A process {St}\{S_t\} satisfies geometric Brownian motion if:

dSt=μStdt+σStdBt,S0=s0>0dS_t = \mu S_t \,dt + \sigma S_t \,dB_t, \quad S_0 = s_0 > 0

By Ito's lemma applied to f(S)=logSf(S) = \log S:

d\log S_t = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dB_t

S_t = s_0 \exp\!\left(\left(\mu - \frac{\sigma^2}{2}\right)t + \sigma B_t\right)

So StLogNormal ⁣(logs0+(μσ2/2)t,σ2t)S_t \sim \text{LogNormal}\!\left(\log s_0 + (\mu - \sigma^2/2)t,\, \sigma^2 t\right).

For AI: GBM models multiplicative noise, which appears in layer-normalisation dynamics. When a transformer residual stream is updated as ht+1=ht+Δth_{t+1} = h_t + \Delta_t where Δt/htN(0,σ2)\Delta_t / \|h_t\| \sim \mathcal{N}(0, \sigma^2), the norm ht\|h_t\| approximately follows GBM, explaining the norm growth observed in deep transformers without proper initialisation.
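The lognormal claim can be checked by sampling the exact solution directly (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, s0, t, n_paths = 0.1, 0.3, 1.0, 2.0, 200_000

# Sample S_t from the exact solution S_t = s0 exp((mu - sigma^2/2) t + sigma B_t).
B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
S_t = s0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * B_t)

# Lognormal checks: E[S_t] = s0 e^{mu t}; E[log S_t] = log s0 + (mu - sigma^2/2) t.
print(S_t.mean(), s0 * np.exp(mu * t))
print(np.log(S_t).mean(), np.log(s0) + (mu - 0.5 * sigma ** 2) * t)
```

Note the sign of the Ito correction: the mean of StS_t grows like eμte^{\mu t}, but the mean of logSt\log S_t grows at the slower rate μσ2/2\mu - \sigma^2/2.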

6.5 Ornstein-Uhlenbeck Process

The Ornstein-Uhlenbeck (OU) process is the unique stationary Gaussian Markov process - and the theoretical foundation for diffusion model forward processes.

Definition (OU SDE). The OU process satisfies:

dXt=θ(Xtμ)dt+σdBt,θ>0dX_t = -\theta(X_t - \mu)\,dt + \sigma\,dB_t, \quad \theta > 0

The drift θ(Xtμ)-\theta(X_t - \mu) is a mean-reverting force pulling the process toward μ\mu.

Solution (via Ito's formula):

Xt=μ+(X0μ)eθt+σ0teθ(ts)dBsX_t = \mu + (X_0 - \mu)e^{-\theta t} + \sigma\int_0^t e^{-\theta(t-s)}\,dB_s

Distributions:

  • XtX0N ⁣(μ+(X0μ)eθt, σ22θ(1e2θt))X_t | X_0 \sim \mathcal{N}\!\left(\mu + (X_0-\mu)e^{-\theta t},\ \frac{\sigma^2}{2\theta}(1 - e^{-2\theta t})\right)
  • Stationary distribution: XN(μ,σ2/(2θ))X_\infty \sim \mathcal{N}(\mu, \sigma^2/(2\theta))
  • Autocorrelation: Corr(Xs,Xt)=eθts\text{Corr}(X_s, X_t) = e^{-\theta|t-s|} - exponential decay

For AI: The DDPM forward process is q(xtx0)=N(αˉtx0,(1αˉt)I)q(x_t|x_0) = \mathcal{N}(\sqrt{\bar{\alpha}_t}x_0, (1-\bar{\alpha}_t)I), which matches the OU solution with θ=1/2\theta = 1/2, μ=0\mu = 0, σ2=1\sigma^2 = 1 (after time reparametrisation). The variance-preserving SDE (VP-SDE) of Song et al. (2021) makes this correspondence exact:

dx=12β(t)xdt+β(t)dBtdx = -\frac{1}{2}\beta(t)x\,dt + \sqrt{\beta(t)}\,dB_t

with αˉt=exp(0tβ(s)ds)\bar{\alpha}_t = \exp(-\int_0^t \beta(s)\,ds), i.e. αˉt=exp(120tβ(s)ds)\sqrt{\bar{\alpha}_t} = \exp(-\frac{1}{2}\int_0^t \beta(s)\,ds). The OU process provides the continuous-time foundation for all modern diffusion models.

L2 regularisation as OU: L2 regularisation in gradient descent adds a restoring force λθ-\lambda\theta to the update, giving a discrete-time OU process for the parameters. The stationary distribution concentrates around θ=0\theta=0 with variance η/λ\propto \eta/\lambda (noise / regularisation), explaining why stronger L2 keeps parameters smaller.
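A short Euler-Maruyama simulation confirms the stationary law N(μ,σ2/(2θ))\mathcal{N}(\mu, \sigma^2/(2\theta)); the parameter values, step size, and horizon are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
theta, mu, sigma = 2.0, 1.0, 0.5
dt, n_steps, n_paths = 0.01, 2_000, 20_000

# Euler-Maruyama for dX = -theta (X - mu) dt + sigma dB, started at X_0 = 0.
x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -theta * (x - mu) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

# After t = 20 >> 1/theta, x should follow N(mu, sigma^2/(2 theta)) = N(1, 0.0625).
print(x.mean(), x.var())
```

Notice the process does not collapse onto μ=1\mu = 1: it fluctuates around it with the predicted steady-state variance (common mistake #8 below).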

6.6 ML: Diffusion Models

The forward process (data -> noise): Given clean data x0pdatax_0 \sim p_{\text{data}}, DDPM defines:

q(xtxt1)=N(xt;1βtxt1,βtI),t=1,,Tq(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t}x_{t-1}, \beta_t I), \quad t = 1,\ldots,T

The marginal is q(xtx0)=N(αˉtx0,(1αˉt)I)q(x_t | x_0) = \mathcal{N}(\sqrt{\bar{\alpha}_t}x_0, (1-\bar{\alpha}_t)I) where αˉt=s=1t(1βs)\bar{\alpha}_t = \prod_{s=1}^t(1-\beta_s). This forward process is a discretised OU process.

The reverse process (noise -> data): By Bayes' theorem and the Gaussian Markov property:

pθ(xt1xt)=N(xt1;μθ(xt,t),Σθ(t))p_\theta(x_{t-1}|x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(t))

The network learns to predict the score xtlogq(xt)\nabla_{x_t}\log q(x_t) (or equivalently, the noise ϵ\epsilon), enabling the reverse process to denoise step by step.

Score matching: The training objective minimises:

L=Et,x0,ϵ ⁣[ϵϵθ(αˉtx0+1αˉtϵ,t)2]\mathcal{L} = \mathbb{E}_{t, x_0, \epsilon}\!\left[\|\epsilon - \epsilon_\theta(\sqrt{\bar{\alpha}_t}x_0 + \sqrt{1-\bar{\alpha}_t}\epsilon, t)\|^2\right]

This is equivalent to minimising Fisher divergence between pθp_\theta and qq, connecting to information-geometric stochastic process theory.
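The claim that the stepwise kernels q(xtxt1)q(x_t|x_{t-1}) compose into the closed-form marginal q(xtx0)q(x_t|x_0) can be checked directly; the linear β\beta schedule below is an illustrative toy, not the DDPM paper's values:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_paths = 100, 100_000
betas = np.linspace(1e-4, 0.2, T)
alpha_bar = np.cumprod(1.0 - betas)

# Iterate q(x_t | x_{t-1}) = N(sqrt(1-beta_t) x_{t-1}, beta_t) from x_0 = 1 ...
x = np.ones(n_paths)
for beta in betas:
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.normal(size=n_paths)

# ... and compare with the closed form q(x_T | x_0) = N(sqrt(abar_T) x_0, 1 - abar_T).
print(x.mean(), np.sqrt(alpha_bar[-1]))
print(x.var(), 1.0 - alpha_bar[-1])
```

The iterated and closed-form statistics agree, which is why training can sample xtx_t in one shot for a random tt instead of simulating the whole chain.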

Continuous-time perspective (Score SDE, Song et al. 2021): The continuous limit is:

  • Forward SDE: dx=f(x,t)dt+g(t)dBtdx = f(x,t)\,dt + g(t)\,dB_t
  • Reverse SDE: dx=[f(x,t)g(t)2xlogpt(x)]dt+g(t)dBˉtdx = [f(x,t) - g(t)^2\nabla_x\log p_t(x)]\,dt + g(t)\,d\bar{B}_t

The neural network approximates xlogpt(x)\nabla_x\log p_t(x) - the score function - enabling the SDE to run backward from noise to data. This is the unified framework behind DDPM (discrete VP-SDE), DDIM (deterministic ODE), and Consistency Models.


7. Stationary Processes and Ergodicity

7.1 Strict and Wide-Sense Stationarity

Definition (Strict Stationarity). A stochastic process {Xt}\{X_t\} is strictly stationary if for any nn, any times t1,,tnt_1, \ldots, t_n, and any shift hh:

(Xt1+h,,Xtn+h)=d(Xt1,,Xtn)(X_{t_1+h}, \ldots, X_{t_n+h}) \stackrel{d}{=} (X_{t_1}, \ldots, X_{t_n})

The finite-dimensional distributions are invariant under time shifts.

Definition (Wide-Sense Stationarity, WSS). A process {Xt}\{X_t\} is wide-sense stationary (or second-order stationary) if:

  1. E[Xt]=μ\mathbb{E}[X_t] = \mu (constant mean)
  2. Cov(Xt,Xs)=k(ts)\text{Cov}(X_t, X_s) = k(t-s) depends only on the lag τ=ts\tau = t-s

Examples:

  • Strictly stationary but not WSS: tt-distributed noise with 1 DOF (undefined variance)
  • WSS but not strictly stationary: Xt=Acos(ωt)+Bsin(ωt)X_t = A\cos(\omega t) + B\sin(\omega t) with A,BA, B iid Rademacher (values ±σ\pm\sigma with probability 1/21/2). Then E[Xt]=0\mathbb{E}[X_t] = 0 and k(τ)=σ2cos(ωτ)k(\tau) = \sigma^2\cos(\omega\tau), so the process is WSS; but the marginal law changes with tt (X0=AX_0 = A takes values ±σ\pm\sigma, while Xπ/(4ω)=(A+B)/2X_{\pi/(4\omega)} = (A+B)/\sqrt{2} takes values 0,±2σ0, \pm\sqrt{2}\sigma), so it is not strictly stationary. With Gaussian A,BA, B the same construction would be strictly stationary, since a Gaussian process is determined by its first two moments.
  • Both: Gaussian processes with stationary kernels; iid sequences

The autocovariance function k(τ)k(\tau) for a WSS process satisfies:

  • Symmetry: k(τ)=k(τ)k(\tau) = k(-\tau) (real process)
  • Positive semidefiniteness: i,jaiajk(titj)0\sum_{i,j} a_i a_j k(t_i - t_j) \geq 0 for all finite sequences

For AI: A WSS process is fully characterised by μ\mu and k(τ)k(\tau) - enormously parsimonious compared to the general joint distribution. This is why WSS is the right model for relative position encodings in transformers.

7.2 Ergodicity and the Ergodic Theorem

Intuition: A process is ergodic if "time averages equal ensemble averages" - a single long trajectory captures the full statistical behaviour of the process.

Theorem (Birkhoff's Ergodic Theorem, 1931). For a stationary ergodic process {Xt}\{X_t\} with E[X0]<\mathbb{E}[|X_0|] < \infty:

1T0TXtdta.s.E[X0]as T\frac{1}{T}\int_0^T X_t\,dt \xrightarrow{a.s.} \mathbb{E}[X_0] \quad \text{as } T \to \infty

For discrete time: 1Nn=0N1Xna.s.E[X0]\frac{1}{N}\sum_{n=0}^{N-1} X_n \xrightarrow{a.s.} \mathbb{E}[X_0].

What "ergodic" means formally: A stationary process is ergodic if the only time-invariant events have probability 0 or 1 (the σ\sigma-algebra of invariant events is trivial). For a WSS process, mean-square convergence of the time average to μ\mu is equivalent to decay of averaged correlations: 1Nn=0N1Cov(X0,Xn)0\frac{1}{N}\sum_{n=0}^{N-1}\text{Cov}(X_0, X_n) \to 0.

Non-ergodic example: Xt=ZX_t = Z for all tt, where ZN(0,1)Z \sim \mathcal{N}(0,1) is fixed once and for all. This is stationary (distribution constant) but not ergodic (time average =ZE[Z]=0= Z \neq \mathbb{E}[Z] = 0 for any realisation).
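A short comparison makes the distinction concrete: an ergodic AR(1) path time-averages to its ensemble mean, while the frozen process Xt=ZX_t = Z time-averages to ZZ itself. Parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
N, phi = 200_000, 0.9

# Ergodic example: AR(1), X_n = phi X_{n-1} + eps_n. One long path's
# time average converges to the ensemble mean E[X_n] = 0.
eps = rng.normal(size=N)
xs = np.empty(N)
x = 0.0
for n in range(N):
    x = phi * x + eps[n]
    xs[n] = x
time_avg = xs.mean()

# Non-ergodic example: X_n = Z frozen at time 0. The time average equals
# Z, not the ensemble mean E[Z] = 0.
z = rng.normal()
frozen_avg = np.full(N, z).mean()
print(time_avg, frozen_avg, z)
```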

For AI: SGD convergence analysis assumes that the gradient noise process is ergodic - the time average of gradient variance equals the ensemble average. If the data distribution shifts during training (non-stationarity), this fails. Continual learning methods address the resulting catastrophic forgetting, which is fundamentally a violation of the stationarity assumption.

7.3 Autocorrelation and Power Spectral Density

Definition. The autocovariance function (ACVF) is k(τ)=Cov(Xt,Xt+τ)k(\tau) = \text{Cov}(X_t, X_{t+\tau}). The autocorrelation function (ACF) is ρ(τ)=k(τ)/k(0)\rho(\tau) = k(\tau)/k(0), normalised to [1,1][-1,1].

Examples of ACFs:

  • iid white noise: k(τ)=σ21[τ=0]k(\tau) = \sigma^2 \mathbf{1}[\tau=0]
  • AR(1) process (Xt=ϕXt1+ϵtX_t = \phi X_{t-1} + \epsilon_t, ϕ<1|\phi|<1): k(τ)=σ2ϕτ/(1ϕ2)k(\tau) = \sigma^2\phi^{|\tau|}/(1-\phi^2) - geometric decay
  • OU process: k(τ)=(σ2/2θ)eθτk(\tau) = (\sigma^2/2\theta)e^{-\theta|\tau|} - continuous-time geometric decay
  • Seasonal process: periodic ACF

Theorem (Wiener-Khinchin, 1934). For a WSS process with absolutely summable autocovariance:

S(ω)=τ=k(τ)eiωτ=F[k](ω)S(\omega) = \sum_{\tau=-\infty}^\infty k(\tau)e^{-i\omega\tau} = \mathcal{F}[k](\omega)

The power spectral density (PSD) S(ω)S(\omega) is the Fourier transform of k(τ)k(\tau) and is non-negative: S(ω)0S(\omega) \geq 0 for all ω\omega.

Conversely, any non-negative integrable function S(ω)S(\omega) is the PSD of some WSS process (Bochner's theorem). PSD decomposes the variance of a process by frequency: low-frequency components explain slow variation, high-frequency components explain rapid fluctuation.
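The geometric ACVF decay of an AR(1) is easy to verify empirically (ϕ\phi, σ\sigma, and the path length are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
N, phi, sigma = 500_000, 0.8, 1.0

# Simulate an AR(1) path: X_n = phi X_{n-1} + eps_n.
eps = rng.normal(scale=sigma, size=N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = phi * x[n - 1] + eps[n]

def acvf(x, tau):
    """Empirical autocovariance at lag tau."""
    if tau == 0:
        return x.var()
    return np.mean((x[:-tau] - x.mean()) * (x[tau:] - x.mean()))

# Theory: k(tau) = sigma^2 phi^|tau| / (1 - phi^2) - geometric decay.
for lag in range(4):
    print(lag, acvf(x, lag), sigma ** 2 * phi ** lag / (1 - phi ** 2))
```

Feeding the estimated ACVF through an FFT would give an estimate of the PSD, per Wiener-Khinchin.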

For AI: The attention mechanism computes softmax(QK/dk)V\text{softmax}(QK^\top/\sqrt{d_k})V. For a stationary input sequence, Qi=WQxiQ_i = W_Q x_i and Kj=WKxjK_j = W_K x_j, so the attention score between positions ii and jj depends (via the input autocorrelation) only on ij|i-j| - the relative position. This is the stationarity condition that relative position encodings (ALiBi, T5 bias, RoPE) exploit.

7.4 Gaussian Processes

Definition. A stochastic process {f(t)}tT\{f(t)\}_{t \in T} is a Gaussian process (GP) if every finite collection (f(t1),,f(tn))(f(t_1), \ldots, f(t_n)) is jointly Gaussian. A GP is fully specified by:

  • Mean function: m(t)=E[f(t)]m(t) = \mathbb{E}[f(t)]
  • Covariance kernel: k(s,t)=Cov(f(s),f(t))k(s,t) = \text{Cov}(f(s), f(t))

Notation: fGP(m,k)f \sim \mathcal{GP}(m, k).

The kernel determines everything: Properties of the GP sample paths are determined by kk:

  • Smoothness: the Matern-ν\nu kernel controls differentiability (sample paths are kk times differentiable precisely when ν>k\nu > k)
  • Length scale: k(s,t)=est2/(22)k(s,t) = e^{-|s-t|^2/(2\ell^2)} (RBF/squared-exponential) has length scale \ell
  • Periodicity: k(s,t)=σ2cos(2πst/p)k(s,t) = \sigma^2\cos(2\pi|s-t|/p) for periodic processes

Common kernels:

| Kernel | Formula | Properties |
| --- | --- | --- |
| RBF (squared-exponential) | σ2exp(xx2/(22))\sigma^2\exp(-\|x-x'\|^2/(2\ell^2)) | Infinitely differentiable |
| Matern-1/2 | σ2exp(xx/)\sigma^2\exp(-\|x-x'\|/\ell) | Continuous but not diff. (= OU) |
| Matern-3/2 | σ2(1+3r/)exp(3r/)\sigma^2(1+\sqrt{3}r/\ell)\exp(-\sqrt{3}r/\ell) | Once differentiable |
| Periodic | σ2exp(2sin2(πxx/p)/2)\sigma^2\exp(-2\sin^2(\pi\|x-x'\|/p)/\ell^2) | Exactly periodic |
| Linear | σ2xx\sigma^2 x^\top x' | Bayesian linear regression |

GP Posterior (Bayesian inference): Given observations y=f(t)+ϵ\mathbf{y} = f(\mathbf{t}) + \epsilon, ϵN(0,σn2I)\epsilon \sim \mathcal{N}(0, \sigma_n^2 I):

f_* \mid \mathbf{t}_*, \mathbf{t}, \mathbf{y} \sim \mathcal{N}(\mu_*, \Sigma_*)

\mu_* = m(\mathbf{t}_*) + k(\mathbf{t}_*, \mathbf{t})[k(\mathbf{t},\mathbf{t}) + \sigma_n^2 I]^{-1}(\mathbf{y} - m(\mathbf{t}))

\Sigma_* = k(\mathbf{t}_*, \mathbf{t}_*) - k(\mathbf{t}_*, \mathbf{t})[k(\mathbf{t},\mathbf{t}) + \sigma_n^2 I]^{-1}k(\mathbf{t}, \mathbf{t}_*)

This is exact Bayesian inference in O(n3)O(n^3) time (from the Cholesky decomposition of the n×nn \times n kernel matrix).
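The posterior formulas fit in a few lines of numpy. The sketch below regresses noisy samples of sin(x)\sin(x) with an RBF kernel and a zero mean function; the length scale, noise level, and grid are illustrative choices:

```python
import numpy as np

def rbf(a, b, ell=0.5, var=1.0):
    """Squared-exponential kernel matrix k(a_i, b_j)."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * d ** 2 / ell ** 2)

rng = np.random.default_rng(9)
sn2 = 1e-2                              # observation noise variance
t = np.linspace(0.0, 2 * np.pi, 20)     # training inputs
y = np.sin(t) + np.sqrt(sn2) * rng.normal(size=t.size)

t_star = np.array([np.pi / 2])          # test input, where sin = 1
K = rbf(t, t) + sn2 * np.eye(t.size)
Ks = rbf(t_star, t)

# Exact Gaussian conditioning (the O(n^3) cost is in the linear solves).
mu_star = Ks @ np.linalg.solve(K, y)
var_star = rbf(t_star, t_star) - Ks @ np.linalg.solve(K, Ks.T)
print(mu_star[0], var_star[0, 0])
```

In production code the solves would go through a Cholesky factorisation of KK for stability and reuse.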

For AI: Bayesian optimisation for hyperparameter search uses a GP prior over the loss surface. The acquisition function (UCB, EI) determines where to evaluate next, and the GP posterior updates after each observation. This is the principled method for tuning LLM training hyperparameters (η\eta, β1\beta_1, β2\beta_2, etc.) when each evaluation is expensive.

Neural tangent kernel (NTK): An infinite-width neural network at initialisation is a GP with kernel determined by the architecture. During gradient descent with small learning rate, the network stays approximately GP (lazy training regime). The NTK Θ(x,x)=θf(x;θ)θf(x;θ)\Theta(x,x') = \nabla_\theta f(x;\theta)^\top \nabla_\theta f(x';\theta) determines the learning dynamics via the Gram matrix Θ(X,X)\Theta(X,X).

7.5 ML: RoPE and Stationarity in Attention

Wide-sense stationarity of attention: For a sequence of token embeddings {xt}\{x_t\}, the attention matrix Aij=softmax(qikjd)A_{ij} = \text{softmax}(\frac{q_i^\top k_j}{\sqrt{d}}) can be made to depend only on the relative position iji-j if we choose position encoding appropriately. This is equivalent to making the QQ-KK inner product a stationary kernel.

RoPE (Rotary Position Embedding, Su et al. 2022): RoPE encodes position tt by rotating the query and key vectors:

Rtq,Rtk,where Rt is a block-diagonal rotation matrixR_t\mathbf{q}, \quad R_t\mathbf{k}, \quad \text{where } R_t \text{ is a block-diagonal rotation matrix}

The inner product satisfies:

(R_m\mathbf{q})^\top(R_n\mathbf{k}) = \mathbf{q}^\top R_m^\top R_n \mathbf{k} = \mathbf{q}^\top R_{n-m}\mathbf{k}

which depends only on mnm-n - exactly the stationarity condition! RoPE makes attention score a stationary kernel in position, enabling efficient relative position encoding.
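The relative-position property can be verified numerically on a single 2-D rotation block (one RoPE frequency); the positions and frequency below are arbitrary:

```python
import numpy as np

def rot(m, freq=1.0):
    """2-D rotation by angle m * freq - a single RoPE frequency block."""
    a = m * freq
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

rng = np.random.default_rng(10)
q, k = rng.normal(size=2), rng.normal(size=2)

# Position pairs (3, 10) and (10, 17) share the same offset n - m = 7 ...
s1 = (rot(3) @ q) @ (rot(10) @ k)
s2 = (rot(10) @ q) @ (rot(17) @ k)
print(s1, s2)   # equal: the rotated inner product depends only on n - m
```

Shifting both positions by the same amount leaves the score unchanged, which is the translation invariance of a stationary kernel.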

ALiBi and linear bias: ALiBi (Press et al., 2022) adds mij-m|i-j| to each attention logit where mm is a per-head slope. The resulting attention scores are of the form f(qi,kj)mijf(q_i, k_j) - m|i-j|, where the mij-m|i-j| term is a stationary (translation-invariant) bias. This enables extrapolation to longer sequences - a key advantage when the training distribution has stationarity structure.


8. Preview: Markov Chains

8.1 The Markov Property

A stochastic process {Xn}\{X_n\} satisfies the Markov property if:

P(Xn+1=jX0,X1,,Xn)=P(Xn+1=jXn)n,jP(X_{n+1} = j \mid X_0, X_1, \ldots, X_n) = P(X_{n+1} = j \mid X_n) \quad \forall n, j

The past is irrelevant - the present state XnX_n is a sufficient statistic for the future. Markov chains are the simplest non-trivial stochastic processes with memory.

Markov chains fit within the stochastic process framework developed here:

  • They are adapted to their natural filtration Fn=σ(X0,,Xn)\mathcal{F}_n = \sigma(X_0,\ldots,X_n)
  • The transition function P(Xn+1=jXn=i)=PijP(X_{n+1}=j|X_n=i) = P_{ij} defines a stochastic matrix
  • They generate martingales: f(Xn)f(X_n) is a martingale whenever ff is harmonic for the transition operator (Pf=fPf = f)

8.2 Forward Reference

Full treatment: Section 07, Markov Chains

That section covers: transition matrices, invariant/stationary distributions, detailed balance, recurrence/transience (Markov chain version), mixing times, spectral gap, Perron-Frobenius theorem, reversibility, MCMC (Metropolis-Hastings, Gibbs), and ML applications in language model generation, RL policy evaluation, and PageRank.


9. ML Deep Dive

9.1 SGD Trajectory as Stochastic Process

The parameter sequence {θn}n0\{\theta_n\}_{n \geq 0} produced by SGD is a discrete-time stochastic process with values in Rp\mathbb{R}^p. Understanding its properties as a process - not just its endpoint - reveals why deep learning generalises.

The update rule as a random dynamical system:

θn+1=θnηng~n(θn),g~n(θ)=1BiBn(θ;zi)\theta_{n+1} = \theta_n - \eta_n \tilde{g}_n(\theta_n), \quad \tilde{g}_n(\theta) = \frac{1}{B}\sum_{i \in \mathcal{B}_n}\nabla\ell(\theta; z_i)

where Bn\mathcal{B}_n is a random mini-batch drawn at step nn.

Key stochastic process properties:

  • θn\theta_n is adapted to Fn=σ(B1,,Bn)\mathcal{F}_n = \sigma(\mathcal{B}_1,\ldots,\mathcal{B}_n)
  • E[g~n(θn)Fn1]=L(θn)\mathbb{E}[\tilde{g}_n(\theta_n) | \mathcal{F}_{n-1}] = \nabla L(\theta_n) - unbiasedness means the noise is a martingale difference
  • g~nL(θn)\tilde{g}_n - \nabla L(\theta_n) is a martingale difference sequence

The diffusion approximation (Mandt et al., 2017): Under small step size and local quadratic approximation L(θ)12(θθ)H(θθ)L(\theta) \approx \frac{1}{2}(\theta-\theta^*)^\top H (\theta-\theta^*), the SGD dynamics converge to:

d\theta = -H(\theta - \theta^*)\,dt + \sqrt{\eta \Sigma}\,dB_t

This is a multivariate OU process with mean-reversion matrix HH and noise ηΣ\sqrt{\eta\Sigma}. The stationary distribution is:

θN ⁣(θ, η2ΣH1)\theta_\infty \sim \mathcal{N}\!\left(\theta^*,\ \frac{\eta}{2}\Sigma H^{-1}\right)

Lower learning rate η\eta -> smaller stationary variance -> SGD concentrates more tightly around θ\theta^*. The ratio η/B\eta/B determines the noise temperature.

Flat minima and generalisation: The stationary distribution of the SGD SDE concentrates in regions of small Hessian HH (flat minima). Fisher-Rao information connects Σ\Sigma to the curvature, and the entropy of the stationary distribution logdet(H1)\propto \log\det(H^{-1}) is large for flat minima. This provides a stochastic process explanation for why SGD with large learning rates prefers flat minima that generalise well.
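A 1-D toy makes the OU picture of SGD tangible: run SGD on a quadratic loss with additive gradient noise and compare the empirical stationary variance with the ησg2/(2h)\eta\sigma_g^2/(2h) prediction. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
h, eta, sig_g = 1.0, 0.01, 1.0
n_steps, n_runs = 5_000, 5_000

# SGD on a 1-D quadratic (curvature h) with additive gradient noise of std sig_g.
theta = np.zeros(n_runs)
for _ in range(n_steps):
    theta -= eta * (h * theta + sig_g * rng.normal(size=n_runs))

# OU prediction for the stationary variance: eta * sig_g^2 / (2 h) = 0.005.
print(theta.var())
```

Halving η\eta halves the stationary variance, which is the 1-D version of "lower learning rate means tighter concentration around θ\theta^*".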

9.2 RL Value Functions as Martingales

In reinforcement learning, the agent at time tt is in state sts_t, takes action ata_t, receives reward rtr_t, and transitions to st+1s_{t+1}. The discounted return is:

Gt=k=0γkrt+kG_t = \sum_{k=0}^\infty \gamma^k r_{t+k}

The Bellman equation: Under a fixed policy π\pi, the value function Vπ(s)=E[Gtst=s]V^\pi(s) = \mathbb{E}[G_t | s_t = s] satisfies:

Vπ(s)=E[rt+γVπ(st+1)st=s]V^\pi(s) = \mathbb{E}[r_t + \gamma V^\pi(s_{t+1}) | s_t = s]

Martingale characterisation: Define Zt=k=0t1γkrk+γtVπ(st)Z_t = \sum_{k=0}^{t-1}\gamma^k r_k + \gamma^t V^\pi(s_t). Under the true VπV^\pi:

E[Zt+1Ft]=E ⁣[γtrt+γt+1Vπ(st+1)+k=0t1γkrkFt]=Zt\mathbb{E}[Z_{t+1} | \mathcal{F}_t] = \mathbb{E}\!\left[\gamma^t r_t + \gamma^{t+1} V^\pi(s_{t+1}) + \sum_{k=0}^{t-1}\gamma^k r_k \,\Big|\, \mathcal{F}_t\right] = Z_t

So {Zt}\{Z_t\} is a martingale! The TD error δt=rt+γVπ(st+1)Vπ(st)\delta_t = r_t + \gamma V^\pi(s_{t+1}) - V^\pi(s_t) is a martingale difference: E[δtFt]=0\mathbb{E}[\delta_t | \mathcal{F}_t] = 0. TD-learning converges because it performs online martingale approximation - stochastic approximation theory guarantees convergence of martingale difference algorithms under Robbins-Monro conditions.

REINFORCE gradient: The policy gradient estimator g^=Gtlogπθ(atst)\hat{g} = G_t \nabla\log\pi_\theta(a_t|s_t) is an unbiased estimator of J(θ)\nabla J(\theta) - it is a martingale difference. The baseline b(st)b(s_t) reduces variance without introducing bias: E[(Gtb(st))logπst]=Jb(st)0=J\mathbb{E}[(G_t - b(s_t))\nabla\log\pi | s_t] = \nabla J - b(s_t)\cdot 0 = \nabla J since E[logπst]=0\mathbb{E}[\nabla\log\pi|s_t] = 0.
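The identity E[logπst]=0\mathbb{E}[\nabla\log\pi|s_t] = 0 behind the baseline argument can be verified exactly for a toy softmax policy (the logits below are arbitrary):

```python
import numpy as np

logits = np.array([0.5, -1.0, 2.0])      # arbitrary softmax policy over 3 actions
pi = np.exp(logits) / np.exp(logits).sum()

# For a softmax policy, d(log pi(a)) / d(logit_j) = 1[a = j] - pi_j.
grads = np.eye(3) - pi[None, :]          # row a holds grad log pi(a)

# Score-function identity: E_{a ~ pi}[grad log pi(a)] = 0, so a baseline b(s)
# multiplied by this expectation contributes zero bias.
print(pi @ grads)                        # ≈ [0, 0, 0]
```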

9.3 Attention over Sequences

Attention can be viewed as a stochastic process operating over the token sequence:

Token sequence as discrete-time process: (w1,w2,,wT)(w_1, w_2, \ldots, w_T) with embedding xt=Embed(wt)x_t = \text{Embed}(w_t). The attention mechanism at layer \ell computes:

ht()=s=1Tαts()vs(),αts=softmax(qtksdk)h_t^{(\ell)} = \sum_{s=1}^T \alpha_{ts}^{(\ell)} v_s^{(\ell)}, \quad \alpha_{ts} = \text{softmax}\left(\frac{q_t^\top k_s}{\sqrt{d_k}}\right)

The hidden state ht()h_t^{(\ell)} is a weighted average of all positions - a generalisation of the conditional expectation E[Vquery=q]\mathbb{E}[V | \text{query} = q].

Causal attention as filtration: In autoregressive generation, causal masking ensures hth_t depends only on h1,,hth_1, \ldots, h_t - maintaining the adaptedness condition to the natural filtration of the sequence. Bidirectional attention (BERT) uses a "global" σ\sigma-algebra that is not filtered.

KV-cache as path memory: The key-value cache stores {ks,vs}st\{k_s, v_s\}_{s \leq t} - the full path of the process up to position tt. This is the computational realisation of the filtration Ft\mathcal{F}_t: at each new token, the model has access to the complete history.

9.4 Diffusion Model Schedules

Different noise schedules in diffusion models correspond to different continuous-time SDEs:

| Schedule | SDE | Marginal |
| --- | --- | --- |
| Linear (DDPM) | dx=β(t)2xdt+β(t)dBtdx = -\frac{\beta(t)}{2}x\,dt + \sqrt{\beta(t)}\,dB_t | N(αˉtx0,(1αˉt)I)\mathcal{N}(\sqrt{\bar\alpha_t}x_0, (1-\bar\alpha_t)I) |
| Cosine | Same SDE, different β(t)\beta(t) | Smoother transition |
| VE-SDE | dx=d[σ2(t)]dtdBtdx = \sqrt{\frac{d[\sigma^2(t)]}{dt}}\,dB_t | N(x0,σ2(t)I)\mathcal{N}(x_0, \sigma^2(t) I) |
| SubVP-SDE | dx=12β(t)xdt+β(t)(1e20tβ)dBtdx = -\frac{1}{2}\beta(t)x\,dt + \sqrt{\beta(t)(1-e^{-2\int_0^t\beta})}\,dB_t | Tighter bounds |

The reverse SDE (Anderson, 1982) is:

dx=[f(x,t)g(t)2xlogpt(x)]dt+g(t)dBˉtdx = \left[f(x,t) - g(t)^2\nabla_x\log p_t(x)\right]dt + g(t)\,d\bar{B}_t

where Bˉt\bar{B}_t is a backward Brownian motion. The neural network sθ(x,t)xlogpt(x)s_\theta(x,t) \approx \nabla_x\log p_t(x) is trained to estimate the score function, enabling the reverse process to convert noise into data.

Consistency Models (Song et al., 2023) learn to map any point on the reverse SDE trajectory directly to the final sample, bypassing the iterative nature of full diffusion. The key insight uses probability flow ODE theory - replacing the stochastic reverse SDE with a deterministic ODE that has the same marginals.


10. Common Mistakes

| # | Mistake | Why It's Wrong | Fix |
| --- | --- | --- | --- |
| 1 | "Any increasing family of sets is a filtration" | A filtration requires σ\sigma-algebras, not just sets: a family of sets AtA_t with AsAtA_s \subset A_t has no algebraic structure. | Verify each Ft\mathcal{F}_t is a σ\sigma-algebra (closed under complements and countable unions) and FsFt\mathcal{F}_s \subseteq \mathcal{F}_t for sts \leq t. |
| 2 | "If E[Mn]=c\mathbb{E}[M_n] = c for all nn, then {Mn}\{M_n\} is a martingale" | Constant mean is necessary but not sufficient. Counterexample: Mn=(1)nZM_n = (-1)^n Z with ZZ Rademacher (P(Z=±1)=1/2P(Z = \pm 1) = 1/2): E[Mn]=0\mathbb{E}[M_n] = 0 for all nn, yet E[Mn+1Fn]=MnMn\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] = -M_n \neq M_n. | Check the defining conditional-expectation property, not just the marginal means. |
| 3 | "OST applies to any stopping time" | Without boundedness or uniform integrability, E[Mτ]E[M0]\mathbb{E}[M_\tau] \neq \mathbb{E}[M_0]. The doubling strategy (τ\tau = "first win") has E[τ]=2<\mathbb{E}[\tau] = 2 < \infty, but its increments are unbounded, so OST fails. | Verify one of the OST conditions: τ\tau bounded; or E[τ]<\mathbb{E}[\tau]<\infty with bounded increments; or uniform integrability. |
| 4 | "Brownian motion is differentiable since it's continuous" | Continuity does not imply differentiability. BM is a.s. nowhere differentiable (Paley-Wiener-Zygmund, 1933), and its paths have infinite total variation on any interval. | Use Ito calculus ((dBt)2=dt(dB_t)^2 = dt) for BM integrals, not classical Riemann-Stieltjes. |
| 5 | "The Poisson process has stationary paths" | Stationarity of increments \neq stationarity of the process itself: N(t)Poisson(λt)N(t) \sim \text{Poisson}(\lambda t), so the marginal distribution changes with tt. | Stationarity applies to increments, not levels; the compensated process N(t)λtN(t) - \lambda t is a martingale. |
| 6 | "Strict stationarity implies WSS" | Only if second moments exist. An iid sequence with XtCauchy(0,1)X_t \sim \text{Cauchy}(0,1) is strictly stationary but E[Xt2]=\mathbb{E}[X_t^2] = \infty, so WSS is undefined. | Verify second-moment existence before deducing WSS from strict stationarity. |
| 7 | "WSS implies strict stationarity" | WSS constrains only the first two moments; higher-order moments may vary with time. E.g. an AR(1) with innovations of time-varying skewness can keep the same mean and ACVF while its third moments drift. | WSS implies strict stationarity ONLY for Gaussian processes (a Gaussian process is determined by its first two moments). |
| 8 | "The OU process converges to its mean" | It converges in distribution to N(μ,σ2/(2θ))\mathcal{N}(\mu, \sigma^2/(2\theta)), not to the constant μ\mu; in steady state it keeps fluctuating around μ\mu with variance σ2/(2θ)\sigma^2/(2\theta). | Distinguish convergence of the distribution (to the stationary law) from convergence of the paths (which remain stochastic). |
| 9 | "Ergodicity means the process repeats" | Ergodicity means time averages equal ensemble averages; an ergodic process explores its full distribution rather than repeating paths. | Ergodic = time average converges to ensemble average. Periodicity is a different property. |
| 10 | "A GP with RBF kernel can model any function" | RBF GPs have infinitely smooth sample paths; functions with discontinuities or kinks (like ReLU activations) are poorly approximated. | Match kernel smoothness to the function being modelled; use the Matern family for rough functions. |
| 11 | "DDPM forward process is Brownian motion" | True BM has no mean reversion and linearly growing variance; DDPM's forward process is a discretised variance-preserving (OU-type) process with αˉt0\bar\alpha_t \to 0. | DDPM forward process \approx discrete VP-SDE (OU); the connection to BM is via the driving noise and the Donsker limit. |
