Stochastic Processes: Appendix V: Numerical Methods for SDEs to Appendix X: Glossary
Appendix V: Numerical Methods for SDEs
V.1 Euler-Maruyama Method
The simplest numerical scheme for the SDE $dX_t = f(X_t)\,dt + g(X_t)\,dW_t$:

$$X_{n+1} = X_n + f(X_n)\,\Delta t + g(X_n)\,\Delta W_n$$

where $\Delta W_n \sim \mathcal{N}(0, \Delta t)$.
Convergence: Euler-Maruyama has strong order 1/2 (pathwise error $O(\Delta t^{1/2})$) and weak order 1 (distributional error $O(\Delta t)$). Compare to the Euler method for ODEs, which has order-1 pathwise error: the lower strong order reflects the irregularity of Brownian motion.
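A minimal sketch of the scheme above in NumPy. The drift/diffusion functions, the OU parameters, and the helper name `euler_maruyama` are illustrative choices, not from the text.

```python
import numpy as np

def euler_maruyama(f, g, x0, T, n_steps, rng):
    """Simulate dX_t = f(X_t) dt + g(X_t) dW_t with the Euler-Maruyama scheme."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Delta W_n ~ N(0, dt)
        x[n + 1] = x[n] + f(x[n]) * dt + g(x[n]) * dW
    return x

# Example: OU process dX_t = -theta X_t dt + sigma dW_t (illustrative parameters)
rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.5
path = euler_maruyama(lambda x: -theta * x, lambda x: sigma,
                      x0=1.0, T=1.0, n_steps=1000, rng=rng)
print(path.shape)  # (1001,)
```

Halving $\Delta t$ shrinks the pathwise error by roughly $\sqrt{2}$, consistent with strong order 1/2.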
V.2 Milstein Method
The Milstein method improves strong convergence to order 1 by adding an Ito correction:

$$X_{n+1} = X_n + f(X_n)\,\Delta t + g(X_n)\,\Delta W_n + \tfrac{1}{2}\,g(X_n)\,g'(X_n)\left(\Delta W_n^2 - \Delta t\right)$$
The extra term uses the Ito formula's second-order term. For scalar SDEs, Milstein achieves strong order 1, matching the order-1 pathwise convergence of the Euler method for ODEs.
V.3 DDPM as Euler-Maruyama
The DDPM update is exactly the Euler-Maruyama discretisation of the VP-SDE $dx = -\tfrac{1}{2}\beta(t)\,x\,dt + \sqrt{\beta(t)}\,dW_t$ with step size $\Delta t = 1$:

$$x_t = \sqrt{1-\beta_t}\,x_{t-1} + \sqrt{\beta_t}\,\varepsilon_t \approx \left(1 - \tfrac{1}{2}\beta_t\right)x_{t-1} + \sqrt{\beta_t}\,\varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, I)$$

where the approximation $\sqrt{1-\beta_t} \approx 1 - \tfrac{1}{2}\beta_t$ holds for small $\beta_t$. DDIM corresponds to using an implicit (backward Euler) discretisation of the probability flow ODE, which allows larger step sizes.
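A toy 1-D sanity check of this correspondence, on the sampling side: for small $\beta_t$ the DDPM ancestral update $(x_t + \beta_t s(x_t))/\sqrt{1-\beta_t} + \sqrt{\beta_t}\,z$ agrees with the Euler-Maruyama step of the reverse VP-SDE to first order in $\beta_t$. The score here is the exact score of $\mathcal{N}(0,1)$ (which the VP-SDE preserves), and all numbers are illustrative.

```python
import numpy as np

score = lambda x: -x  # exact score of N(0, 1): d/dx log p(x) = -x

beta = 1e-3  # small step; illustrative value
x = 1.5      # current sample
z = 0.7      # shared noise draw so both updates are directly comparable

# DDPM ancestral update (score form)
ddpm_step = (x + beta * score(x)) / np.sqrt(1.0 - beta) + np.sqrt(beta) * z
# Euler-Maruyama step of the reverse VP-SDE:
# x_{t-1} = x_t + beta * (x_t / 2 + score(x_t)) + sqrt(beta) * z
em_step = x + beta * (0.5 * x + score(x)) + np.sqrt(beta) * z

print(abs(ddpm_step - em_step))  # small: O(beta^2)
```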
Appendix W: Bridge Processes and Conditional Diffusions
W.1 Brownian Bridge
A Brownian bridge from $a$ to $b$ on $[0, T]$ is a Brownian motion conditioned to be at $b$ at time $T$:

$$X_t = a + (b-a)\frac{t}{T} + \left(W_t - \frac{t}{T}W_T\right)$$

This is a Gaussian process with $\mathbb{E}[X_t] = a + (b-a)\frac{t}{T}$ and $\mathrm{Cov}(X_s, X_t) = \frac{s(T-t)}{T}$ for $s \le t$.
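A short simulation sketch using the representation $X_t = a + (b-a)t/T + (W_t - \tfrac{t}{T}W_T)$: endpoints are pinned exactly, and the empirical variance at $t = T/2$ should be close to $T/4$. The helper name and parameters are illustrative.

```python
import numpy as np

def brownian_bridge(a, b, T, n_steps, n_paths, rng):
    """Sample Brownian bridge paths from a to b on [0, T] via
    X_t = a + (b - a) t/T + (W_t - (t/T) W_T)."""
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(T / n_steps), (n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    return t, a + (b - a) * t / T + W - (t / T) * W[:, -1:]

rng = np.random.default_rng(0)
t, X = brownian_bridge(a=0.0, b=0.0, T=1.0, n_steps=200, n_paths=5000, rng=rng)
# Endpoints pinned; interior variance matches t(T - t)/T = 0.25 at t = 0.5
print(X[:, 0].max(), X[:, -1].max(), X[:, 100].var())
```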
For AI: Stochastic interpolation (Albergo & Vanden-Eijnden, 2023) uses Brownian bridges to construct generative models that interpolate between data and noise along curved paths, rather than straight (linear interpolation) or OU (DDPM) paths. Flow matching (Lipman et al., 2022) uses deterministic bridges (straight-line paths) for efficient training.
W.2 Conditional Score Function
Given a forward process $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\varepsilon$, the conditional score is:

$$\nabla_{x_t}\log p(x_t \mid x_0) = -\frac{x_t - \sqrt{\bar\alpha_t}\,x_0}{1-\bar\alpha_t} = -\frac{\varepsilon}{\sqrt{1-\bar\alpha_t}}$$

where $\varepsilon$ is the noise added in the forward process. The marginal score satisfies:

$$\nabla_{x_t}\log p(x_t) = \mathbb{E}\!\left[\nabla_{x_t}\log p(x_t \mid x_0) \,\middle|\, x_t\right]$$
This is a conditional expectation - a Doob martingale evaluated at $x_t$! The diffusion model learns to estimate $\mathbb{E}[\varepsilon \mid x_t]$, which by Tweedie's formula determines the MMSE denoiser $\mathbb{E}[x_0 \mid x_t]$. The entire diffusion model training objective is a stochastic process estimation problem formulated in martingale language.
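A closed-form sanity check of the Tweedie connection in a toy 1-D Gaussian model: with $x_0 \sim \mathcal{N}(m, s^2)$ and the forward corruption above, the Tweedie estimate $(x_t + (1-\bar\alpha_t)\nabla\log p(x_t))/\sqrt{\bar\alpha_t}$ coincides exactly with the conditional mean $\mathbb{E}[x_0 \mid x_t]$ from the joint Gaussian. All parameter values are illustrative.

```python
import numpy as np

# Toy Gaussian model: x0 ~ N(m, s2), x_t = sqrt(abar) x0 + sqrt(1 - abar) eps
m, s2, abar = 0.3, 2.0, 0.6
var_t = abar * s2 + (1.0 - abar)  # marginal variance of x_t

def score(x):
    # Exact marginal score of N(sqrt(abar) m, var_t)
    return -(x - np.sqrt(abar) * m) / var_t

def mmse_denoiser(x):
    # E[x0 | x_t] computed directly from the joint Gaussian of (x0, x_t)
    return m + np.sqrt(abar) * s2 * (x - np.sqrt(abar) * m) / var_t

def tweedie(x):
    # Tweedie's formula: (x_t + (1 - abar) * score(x_t)) / sqrt(abar)
    return (x + (1.0 - abar) * score(x)) / np.sqrt(abar)

xs = np.linspace(-3.0, 3.0, 7)
print(np.max(np.abs(tweedie(xs) - mmse_denoiser(xs))))  # ~0: exact agreement
```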
End of Section 6: Stochastic Processes.
Appendix X: Glossary
| Term | Definition |
|---|---|
| Adapted process | Process that is $\mathcal{F}_t$-measurable at each $t$ |
| Autocorrelation function (ACF) | $R(\tau) = \mathbb{E}[X_t X_{t+\tau}]$ for a wide-sense stationary process |
| Azuma's inequality | Concentration bound for martingales with bounded differences |
| Brownian motion | Continuous Gaussian process with independent stationary increments |
| Cadlag | Right-continuous with left limits (French acronym) |
| Compensated Poisson | $N_t - \lambda t$ - a martingale |
| Doob martingale | $M_k = \mathbb{E}[f(X_1, \dots, X_n) \mid \mathcal{F}_k]$ |
| Ergodic theorem | Time average $\to$ ensemble average a.s. |
| Filtration | Increasing family of $\sigma$-algebras: $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \le t$ |
| Gaussian process (GP) | Process where every finite marginal is jointly Gaussian |
| Ito integral | Stochastic integral adapted to BM filtration |
| Ito's lemma | Chain rule for BM: $df(W_t) = f'(W_t)\,dW_t + \tfrac{1}{2}f''(W_t)\,dt$ |
| Kernel | Positive semidefinite function - covariance of GP |
| Local martingale | Stopped process $M_{t \wedge \tau_n}$ is a martingale for each $n$ |
| Martingale | $\mathbb{E}[M_t \mid \mathcal{F}_s] = M_s$ |
| Martingale difference | $D_k = M_k - M_{k-1}$ with $\mathbb{E}[D_k \mid \mathcal{F}_{k-1}] = 0$ |
| OU process | $dX_t = -\theta X_t\,dt + \sigma\,dW_t$ - mean-reverting Gaussian process |
| Optional Stopping Theorem | $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$ under appropriate conditions |
| Poisson process | Counting process with iid exponential inter-arrival times |
| Quadratic variation | $[W]_t = t$ for BM - central property of stochastic integration |
| Score function | $\nabla_x \log p(x)$ - gradient of log-density |
| Stationary process | Distribution invariant under time shifts |
| Stopping time | $\tau$ with $\{\tau \le t\} \in \mathcal{F}_t$ - no look-ahead |
| Submartingale | $\mathbb{E}[M_t \mid \mathcal{F}_s] \ge M_s$ |
| Supermartingale | $\mathbb{E}[M_t \mid \mathcal{F}_s] \le M_s$ |
| Thinning | Poisson process subsampled with prob. $p$ is Poisson($p\lambda$) |
| Uniform integrability | $\sup_t \mathbb{E}[\lvert M_t \rvert\,\mathbf{1}_{\lvert M_t \rvert > K}] \to 0$ as $K \to \infty$ |
| VP-SDE | Variance-preserving SDE - continuum limit of the DDPM forward process |
| Wiener measure | Probability measure on $C[0, \infty)$ defining BM paths |