"The expectation of the product of two independent random variables equals the product of their expectations - a theorem so simple it conceals its own depth."
- Mark Kac
Overview
Expectation is the single most important operation in probability theory. It compresses an entire distribution into a number - the weighted average of all possible values, where weights are probabilities. Every loss function in machine learning is an expectation. Every gradient update in stochastic training is an estimated expectation. Every generative model is ultimately defined by the distributions whose expectations it learns to match.
This section builds the theory of the expectation operator from first principles: the formal definition via the law of the unconscious statistician (LOTUS), linearity (which holds without independence), the tower property (iterated conditioning), moment generating functions (which encode the entire distribution in an analytic function), and Jensen's inequality (which underlies the derivation of the ELBO, the KL divergence bound, and every variational argument in modern deep learning).
The central thread connecting all these tools is moments - the sequence that describes the shape of a distribution. The first moment locates the center. The second moment (variance) measures spread. The third captures asymmetry. The fourth captures tail heaviness. In modern ML, the Adam optimizer literally tracks first and second moments of gradients at every parameter, every step. Understanding moments is understanding adaptive optimisation.
Prerequisites
- Section01 Introduction and Random Variables - random variables, CDF, PDF, PMF
- Section02 Common Distributions - Gaussian, Exponential, Binomial; their stated means and variances
- Section03 Joint Distributions - joint PDF, conditional distributions, marginalisation, covariance matrix
- Single-variable integration, convergence of improper integrals, power series
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive derivations: LOTUS, Jensen, MGFs, bias-variance, Adam moment tracking |
| exercises.ipynb | 10 graded exercises from LOTUS computation to policy gradient estimation |
Learning Objectives
After completing this section, you will be able to:
- Compute expectations for discrete, continuous, and mixed distributions using LOTUS
- Apply linearity of expectation without assuming independence
- Derive variance using the computational formula and prove properties (Var(aX+b), Var(X+Y))
- Interpret and compute skewness and kurtosis from their moment definitions
- State and apply the tower property (iterated expectation) and law of total variance
- Derive moment generating functions and extract moments via differentiation
- Apply Jensen's inequality to prove KL divergence non-negativity and derive the ELBO
- Prove and apply the Cauchy-Schwarz inequality to bound the correlation coefficient ($|\rho| \leq 1$)
- Decompose expected squared error into bias^2, variance, and irreducible noise
- Connect Adam optimizer to first and second moment estimation with bias correction
- Preview Markov's and Chebyshev's inequalities as moment-based tail bounds (-> Section05)
Table of Contents
- 1. Intuition and Historical Context
- 2. Formal Definition of Expectation
- 3. Variance, Standard Deviation, and Higher Moments
- 4. Covariance and Correlation
- 5. Conditional Expectation
- 6. Moment Generating Functions and Characteristic Functions
- 7. Jensen's Inequality and Moment Inequalities
- 8. Bias-Variance Decomposition
- 9. Expectation in ML: Core Applications
- 10. Common Mistakes
- 11. Exercises
- 12. Why This Matters for AI (2026 Perspective)
- 13. Conceptual Bridge
1. Intuition and Historical Context
1.1 The Moment Analogy
The word "moment" comes from classical mechanics. In physics, the first moment of a mass distribution is the center of mass; the second moment is the moment of inertia. Probability theory borrows this language directly: the $k$-th moment of a random variable $X$ is $E[X^k]$, the expected value of $X$ raised to the $k$-th power.
Imagine balancing a plank on which weights are placed at positions $x_1, x_2, \dots$ with weights $p_1, p_2, \dots$ (probabilities). The balance point is $E[X] = \sum_i x_i p_i$ - the center of mass. The spread around that center is captured by $\mathrm{Var}(X) = E[(X-\mu)^2]$, the second central moment. Higher moments capture the shape: whether the distribution leans left or right (skewness), whether it has heavy tails (kurtosis).
MOMENTS AS SHAPE DESCRIPTORS
========================================================================
First moment (mean \\mu): Location of center
Second central moment (\\sigma^2): Spread about center
Third central moment (\\gamma_1\\sigma^3): Asymmetry (skewness)
Fourth central moment (\\gamma_2\\sigma^4): Tail weight (kurtosis)
Standard normal N(0,1): \\mu=0, \\sigma^2=1, \\gamma_1=0, \\gamma_2=0 (baseline)
Right-skewed (\\gamma_1 > 0): Heavy right tail: income, LLM token probs
#
###
#####=====........
----+--------------------
Heavy-tailed (\\gamma_2 > 0): Leptokurtic: gradient norms, loss landscapes
#
#
###
#####.............
----+--------------------
========================================================================
Every major operation in statistics reduces to computing moments. The mean is the first moment. The variance is the second central moment. The sample statistics we compute to estimate distributions are moment estimates. The method of moments estimation technique literally sets sample moments equal to theoretical moments and solves for parameters.
For AI: Every loss function is an expectation over the data distribution:
$$\mathcal{L}(\theta) = E_{(x,y) \sim p_{\text{data}}}[\ell(f_\theta(x), y)]$$
We cannot compute this exactly (the true distribution is unknown), so we estimate it with the sample mean over a minibatch. The entire machinery of stochastic gradient descent rests on this approximation.
1.2 Why Moments Matter for AI
Moments appear in nearly every component of modern deep learning:
Loss functions. The mean squared error estimates $E[(Y - \hat{Y})^2]$, the expected squared loss. Cross-entropy loss estimates $E_{(x,y)}[-\log p_\theta(y \mid x)]$. Every loss is a first moment of some function of the predictions.
Optimisation. The Adam optimizer maintains running estimates of the first moment $m_t \approx E[g_t]$ and second moment $v_t \approx E[g_t^2]$ of the gradient $g_t$. The adaptive learning rate normalises by the square root of the second moment. Without understanding moments, Adam is a black box; with it, Adam is moment matching with bias correction.
Batch Normalisation. BatchNorm computes the sample mean and variance of activations across a batch, then normalises. This is moment normalisation: forcing the first moment to zero and the second to one, stabilising training by controlling the distribution of layer activations.
KL Divergence. The KL divergence $D_{\mathrm{KL}}(q \,\|\, p) = E_q[\log(q/p)]$ is an expectation under $q$. Its non-negativity ($D_{\mathrm{KL}} \geq 0$) follows directly from Jensen's inequality applied to the convex function $-\log$. The ELBO in variational autoencoders is derived by applying Jensen's inequality to decompose $\log p(x)$.
Reparameterisation. The reparameterisation trick in VAEs ($z = \mu_\phi + \sigma_\phi \odot \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$) enables computing gradients of expectations. The gradient $\nabla_\phi E_{q_\phi(z)}[f(z)]$ is intractable to compute directly, but after reparameterisation it becomes $E_\epsilon[\nabla_\phi f(\mu_\phi + \sigma_\phi \odot \epsilon)]$, estimable via sampling.
Score functions. The score function $\nabla_\theta \log p_\theta(x)$ satisfies $E_{x \sim p_\theta}[\nabla_\theta \log p_\theta(x)] = 0$ - its expectation is zero. This identity (proved by differentiating the normalisation condition $\int p_\theta(x)\,dx = 1$) underlies Fisher information, the REINFORCE policy gradient estimator, and Stein's identity used in score-based diffusion models.
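The moment-tracking view of Adam described above can be sketched in a few lines. This is a minimal illustration of the moment estimates and bias correction with the usual default hyperparameters, not a full optimizer:

```python
import numpy as np

def adam_moments(m, v, g, t, beta1=0.9, beta2=0.999, eps=1e-8, lr=1e-3):
    """One Adam moment update: m tracks E[g], v tracks E[g^2]."""
    m = beta1 * m + (1 - beta1) * g        # EMA estimate of the first moment
    v = beta2 * v + (1 - beta2) * g**2     # EMA estimate of the second moment
    m_hat = m / (1 - beta1**t)             # bias correction (EMAs start at 0)
    v_hat = v / (1 - beta2**t)
    step = lr * m_hat / (np.sqrt(v_hat) + eps)
    return m, v, step

m, v = 0.0, 0.0
for t in range(1, 101):
    m, v, step = adam_moments(m, v, g=1.0, t=t)
```

With a constant gradient, both bias-corrected moments equal 1 exactly, so the step size equals the learning rate - the second-moment normalisation makes the update scale-free.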
1.3 Historical Timeline
HISTORY OF MOMENTS AND EXPECTATION
========================================================================
1654 Pascal-Fermat correspondence: first rigorous treatment of
expected value ("valeur" of a game) in probability
1657 Huygens: De Ratiociniis in Ludo Aleae - published the first
probability textbook with formal expectation calculations
1713 Bernoulli (Jakob): Ars Conjectandi - law of large numbers
(published posthumously); shows sample mean -> E[X]
1730 De Moivre: normal approximation to binomial via moments
1814 Laplace: Theorie Analytique des Probabilites - systematic
use of characteristic functions (Fourier transforms of densities)
1853 Chebyshev: first inequality bounding tails via variance;
rigorous proof of LLN; laid groundwork for moment inequalities
1867 Chebyshev: full variance-based inequality (P(|X-\\mu| \\geq k\\sigma) \\leq 1/k^2)
1894 Pearson: coined "moment" in statistics; introduced skewness
and kurtosis; method of moments estimation
1922 Fisher: likelihood, sufficient statistics, and moment
generating functions as tools of parametric inference
1938 Cramer: large deviation theory; MGF-based tail bounds
(foundations of Chernoff's and Hoeffding's later work)
1992 Williams: REINFORCE algorithm (score function gradient
estimator = gradient of log-expectation)
2014 Kingma & Ba: Adam optimizer - formal moment-tracking
adaptive gradient method
2014 Kingma & Welling: VAE - reparameterisation trick;
expectations with differentiable sampling
2020 Song et al.: score-based diffusion models - reverse SDE
guidance via score function = \\nabla log p(x)
========================================================================
2. Formal Definition of Expectation
2.1 Expectation for Discrete, Continuous, and Mixed Distributions
The expected value (or expectation or mean) of a random variable is defined as the probability-weighted average of all possible values.
Discrete random variable. If $X$ takes values in a countable set $\{x_1, x_2, \dots\}$ with PMF $p_X$:
$$E[X] = \sum_i x_i\, p_X(x_i)$$
provided this sum converges absolutely: $\sum_i |x_i|\, p_X(x_i) < \infty$. If it does not converge absolutely, $E[X]$ does not exist.
Example. For a fair die with $P(X = k) = 1/6$ for $k = 1, \dots, 6$:
$$E[X] = \sum_{k=1}^{6} k \cdot \tfrac{1}{6} = \tfrac{21}{6} = 3.5$$
Continuous random variable. If $X$ has PDF $f_X$:
$$E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\, dx$$
provided $\int_{-\infty}^{\infty} |x|\, f_X(x)\, dx < \infty$.
Example. For $X \sim \mathrm{Uniform}(0,1)$ with PDF $f_X(x) = 1$ on $[0,1]$:
$$E[X] = \int_0^1 x\, dx = \tfrac{1}{2}$$
Mixed distribution. If $X$ has a mixed distribution (point mass at $a$ plus a continuous density $f$):
$$E[X] = p \cdot a + (1 - p) \int_{-\infty}^{\infty} x\, f(x)\, dx$$
where $p$ is the point mass probability.
Notation. We write $E[X]$, $\mathbb{E}[X]$, $\mu_X$, or $\langle X \rangle$ (physics notation) interchangeably. When the distribution is parameterised by $\theta$: $E_\theta[X]$ or $E_{X \sim p_\theta}[X]$.
For AI: The empirical mean $\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$ is the canonical estimator of $E[X]$. Every forward pass computes a function of the inputs; averaging that function over a batch estimates its expectation. The loss function IS an expectation - we just estimate it from data.
2.2 Existence and Integrability
The expectation $E[X]$ exists (is finite) when $E[|X|] < \infty$, i.e., when $X$ is integrable. This fails in important cases:
Cauchy distribution. $X \sim \mathrm{Cauchy}(0, 1)$ has PDF $f(x) = \frac{1}{\pi(1 + x^2)}$, for which $\int |x|\, f(x)\, dx = \infty$. The Cauchy distribution has no mean (or any finite moment of order $\geq 1$). A ratio of two independent standard normals follows a Cauchy distribution.
Heavy-tailed distributions. The Pareto distribution with density $f(x) = \alpha x^{-(\alpha + 1)}$ for $x \geq 1$ has:
- $E[X]$ exists iff $\alpha > 1$
- $\mathrm{Var}(X)$ exists iff $\alpha > 2$
- $k$-th moment exists iff $\alpha > k$
This matters in practice: internet traffic volumes, earthquake magnitudes, word frequencies in language (Zipf's law), and wealth distributions all exhibit Pareto-like tails. Models that assume finite variance (e.g., CLT-based confidence intervals) can fail badly when applied to such data.
Convergence condition. $E[X]$ is well-defined (possibly $+\infty$) for non-negative $X$ (since the integral may diverge to $+\infty$ but is unambiguous). For general $X$, write $X = X^+ - X^-$ where $X^+ = \max(X, 0)$ and $X^- = \max(-X, 0)$. Then $E[X] = E[X^+] - E[X^-]$, finite only when both are finite.
For AI: In practice, neural network outputs and loss values are bounded (through clipping, normalisation, or the bounded nature of activation functions). But gradient values can have very heavy tails, especially in early training. Gradient clipping is a practical acknowledgment that the gradient may effectively lack finite variance, in which case variance-based bounds and CLT approximations break down.
2.3 Linearity of Expectation
The most powerful property of expectation is its linearity, which holds without any independence assumption:
Theorem (Linearity of Expectation). For any random variables $X$ and $Y$ and constants $a, b$:
$$E[aX + bY] = aE[X] + bE[Y]$$
Proof (continuous case). Using the joint density $f_{X,Y}$:
$$E[aX + bY] = \iint (ax + by)\, f_{X,Y}(x, y)\, dx\, dy = a \iint x\, f_{X,Y}(x, y)\, dx\, dy + b \iint y\, f_{X,Y}(x, y)\, dx\, dy = aE[X] + bE[Y],$$
where we used $\int f_{X,Y}(x, y)\, dy = f_X(x)$ (marginalisation from Section03). The discrete case is analogous.
Extension. By induction: for any constants $a_1, \dots, a_n$ and random variables $X_1, \dots, X_n$ (not necessarily independent):
$$E\Big[\sum_{i=1}^n a_i X_i\Big] = \sum_{i=1}^n a_i E[X_i]$$
Critical point: This holds even when $X$ and $Y$ are dependent. It does NOT extend to products: $E[XY] = E[X]E[Y]$ only when $X$ and $Y$ are uncorrelated (e.g., independent).
Examples demonstrating power of linearity:
Sum of dice. $E[X_1 + \dots + X_n] = 3.5n$ where each $X_i$ is a fair die. Without worrying about dependence (there is none here, but the theorem doesn't require it): $E[\sum_i X_i] = \sum_i E[X_i] = 3.5n$.
Binomial mean. $X \sim \mathrm{Binomial}(n, p)$. Write $X = \sum_{i=1}^n I_i$ where $I_i \sim \mathrm{Bernoulli}(p)$ (the indicator that trial $i$ succeeds). Then: $E[X] = \sum_{i=1}^n E[I_i] = np$. No need to compute the full PMF.
Expected number of fixed points. A random permutation $\pi$ of $\{1, \dots, n\}$ has expected number of fixed points ($\pi(i) = i$) equal to 1, for any $n$. Proof: $E[\sum_{i=1}^n I_{\{\pi(i) = i\}}] = \sum_{i=1}^n P(\pi(i) = i) = n \cdot \tfrac{1}{n} = 1$.
For AI: Linearity of expectation is the reason we can decompose complex expected losses into sums of simpler expected losses. The expected cross-entropy over a vocabulary of size $V$ is $E[-\sum_{k=1}^V y_k \log \hat{p}_k] = -\sum_{k=1}^V E[y_k \log \hat{p}_k]$ - a sum of simpler expectations. The expected gradient of a sum is the sum of the expected gradients, justifying minibatch averaging as a gradient estimator.
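Linearity can be checked numerically even when the variables are strongly dependent. A quick sanity check (illustrative, with $Y = X^2$ so that $X$ and $Y$ are maximally dependent):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
y = x**2                             # Y is a deterministic function of X
lhs = np.mean(2*x + 3*y)             # E[2X + 3Y] estimated directly
rhs = 2*np.mean(x) + 3*np.mean(y)    # 2E[X] + 3E[Y]
# linearity holds exactly for sample means too, independence or not
```

Theory gives $E[2X + 3Y] = 2 \cdot 0 + 3 \cdot 1 = 3$ since $E[X^2] = 1$ for a standard normal.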
2.4 LOTUS - Law of the Unconscious Statistician
A common need: compute $E[g(X)]$ for some function $g$ when we know the distribution of $X$ but not necessarily of $Y = g(X)$ directly. LOTUS provides the answer.
Theorem (LOTUS - Law of the Unconscious Statistician). Let $X$ be a random variable and $g$ a measurable function.
Discrete: $E[g(X)] = \sum_x g(x)\, p_X(x)$
Continuous: $E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\, dx$
Key insight: We do NOT need the distribution of $Y = g(X)$. We use the distribution of $X$ directly. This is why it's called the "unconscious" statistician - you can compute $E[g(X)]$ without consciously finding the PDF of $g(X)$.
Proof sketch (discrete case). Let $Y = g(X)$. Grouping terms:
$$E[Y] = \sum_y y\, P(Y = y) = \sum_y y \sum_{x:\, g(x) = y} p_X(x) = \sum_x g(x)\, p_X(x)$$
The last step rearranges the double sum.
Examples:
Variance via LOTUS. $\mathrm{Var}(X) = E[(X - \mu)^2]$. Using LOTUS directly on $g(x) = (x - \mu)^2$:
$$\mathrm{Var}(X) = \int (x - \mu)^2 f_X(x)\, dx$$
Second moment of Exponential. For $X \sim \mathrm{Exp}(\lambda)$, using $g(x) = x^2$:
$$E[X^2] = \int_0^\infty x^2\, \lambda e^{-\lambda x}\, dx = \frac{2}{\lambda^2}$$
Jensen's inequality (preview). For convex $f$, LOTUS combined with the supporting hyperplane property gives $E[f(X)] \geq f(E[X])$. (Full treatment in Section7.)
LOTUS for functions of multiple variables. For $(X, Y)$ jointly distributed with joint density $f_{X,Y}$:
$$E[g(X, Y)] = \iint g(x, y)\, f_{X,Y}(x, y)\, dx\, dy$$
For AI: LOTUS is the theorem behind every expectation computation in variational inference. The ELBO contains $E_{q(z|x)}[\log p(x|z)]$ - an expectation of $\log p(x|z)$ under $q(z|x)$. LOTUS says: integrate $\log p(x|z)$ weighted by $q(z|x)$, no need to find the distribution of $\log p(x|z)$ directly.
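LOTUS in practice: sampling $X$ and averaging $g(X)$ estimates $E[g(X)]$ without ever deriving the density of $g(X)$. A small Monte Carlo check against the analytic value $E[X^2] = 2/\lambda^2$ for the Exponential example above (illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0
x = rng.exponential(scale=1.0/lam, size=1_000_000)
mc = np.mean(x**2)        # LOTUS: average g(x) = x^2 under the law of X
exact = 2.0 / lam**2      # analytic second moment of Exp(lambda)
```

No distribution for $X^2$ was ever computed - the samples of $X$ carry all the needed weighting.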
3. Variance, Standard Deviation, and Higher Moments
3.1 Variance
The variance of $X$ measures the expected squared deviation from the mean $\mu = E[X]$:
$$\mathrm{Var}(X) = E[(X - \mu)^2]$$
Computational formula. Expanding the square:
$$\mathrm{Var}(X) = E[X^2 - 2\mu X + \mu^2] = E[X^2] - 2\mu E[X] + \mu^2 = E[X^2] - (E[X])^2$$
This is the Konig-Huygens formula: $\mathrm{Var}(X) = E[X^2] - (E[X])^2$.
Properties of variance:
- $\mathrm{Var}(X) \geq 0$, with equality iff $X$ is almost surely constant.
- $\mathrm{Var}(aX + b) = a^2\, \mathrm{Var}(X)$ for any constants $a, b$.
- $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$ (see Section4.4).
- If $X \perp Y$: $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$.
Proof of property 2.
$$\mathrm{Var}(aX + b) = E[(aX + b - a\mu - b)^2] = E[a^2 (X - \mu)^2] = a^2\, \mathrm{Var}(X)$$
Note: the constant $b$ does not affect variance (shifts don't change spread).
Standard examples:
- $\mathrm{Bernoulli}(p)$: $E[X] = p$, $E[X^2] = p$, $\mathrm{Var}(X) = p(1 - p)$. Maximum at $p = 1/2$.
- $\mathrm{Binomial}(n, p)$: $E[X] = np$, $\mathrm{Var}(X) = np(1 - p)$.
- $\mathcal{N}(\mu, \sigma^2)$: $E[X] = \mu$, $\mathrm{Var}(X) = \sigma^2$ (by construction).
- $\mathrm{Exp}(\lambda)$: $E[X] = 1/\lambda$, $E[X^2] = 2/\lambda^2$, $\mathrm{Var}(X) = 1/\lambda^2$.
- $\mathrm{Poisson}(\lambda)$: $E[X] = \mathrm{Var}(X) = \lambda$ (special property).
For AI: Variance quantifies uncertainty. The variance of a model's predictions measures how much predictions vary with the training set - high variance = overfitting. The variance of gradient estimates in SGD determines how noisy the update steps are - high variance slows convergence. Reducing gradient variance is the goal of techniques like control variates, variance reduction in REINFORCE, and importance-weighted estimators.
3.2 Standard Deviation and Coefficient of Variation
The standard deviation $\sigma = \sqrt{\mathrm{Var}(X)}$ has the same units as $X$ (variance has squared units). It is the more interpretable spread measure.
The coefficient of variation (CV) normalises the standard deviation by the mean:
$$\mathrm{CV} = \frac{\sigma}{\mu}$$
CV is dimensionless and measures relative spread. A CV of 0.1 means the standard deviation is 10% of the mean. This is useful when comparing spread across distributions with different scales.
Standardisation. Subtracting the mean and dividing by the standard deviation gives the standardised variable:
$$Z = \frac{X - \mu}{\sigma}$$
which has $E[Z] = 0$ and $\mathrm{Var}(Z) = 1$.
For AI: Batch Normalisation (Ioffe & Szegedy, 2015) standardises layer activations during training: for each feature, subtract the batch mean and divide by the batch standard deviation. This forces $\hat{\mu} = 0$ and $\hat{\sigma}^2 = 1$, then rescales with learnable $\gamma, \beta$. Layer Norm and RMS Norm are variations that normalise over the feature dimension rather than the batch dimension - critical for transformers where batch sizes can be 1.
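Standardisation is one line of array code. A minimal BatchNorm-style sketch (without the learnable $\gamma, \beta$; the `eps` term is the usual numerical-stability guard):

```python
import numpy as np

def standardize(a, axis=0, eps=1e-5):
    """Subtract the mean and divide by the std along `axis`."""
    mu = a.mean(axis=axis, keepdims=True)
    var = a.var(axis=axis, keepdims=True)
    return (a - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
acts = rng.normal(loc=5.0, scale=3.0, size=(256, 8))  # a batch of activations
z = standardize(acts, axis=0)
# each feature column now has mean ~0 and std ~1
```

Normalising over `axis=0` is the batch dimension (BatchNorm-style); `axis=1` would give a LayerNorm-style per-example normalisation.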
3.3 Raw vs Central Moments
Raw moments (moments about zero): $\mu'_k = E[X^k]$
Central moments (moments about the mean $\mu$): $\mu_k = E[(X - \mu)^k]$
Relationship: The central moments can be expressed in terms of raw moments via the binomial theorem:
$$\mu_k = E[(X - \mu)^k] = \sum_{j=0}^{k} \binom{k}{j} (-\mu)^{k - j}\, \mu'_j$$
For the first four:
- $\mu_1 = 0$ (always - the mean is the center)
- $\mu_2 = \mu'_2 - \mu^2$ (the variance)
- $\mu_3 = \mu'_3 - 3\mu'_2 \mu + 2\mu^3$
- $\mu_4 = \mu'_4 - 4\mu'_3 \mu + 6\mu'_2 \mu^2 - 3\mu^4$
Standardised moments divide central moments by $\sigma^k$, making them dimensionless:
$$\tilde{\mu}_k = \frac{\mu_k}{\sigma^k}$$
$\tilde{\mu}_3 = \gamma_1$ = skewness, $\tilde{\mu}_4 - 3 = \gamma_2$ = excess kurtosis (kurtosis of normal = 3; excess kurtosis subtracts 3).
3.4 Skewness
Skewness measures the asymmetry of a distribution about its mean:
$$\gamma_1 = E\left[\left(\frac{X - \mu}{\sigma}\right)^3\right] = \frac{\mu_3}{\sigma^3}$$
- $\gamma_1 > 0$ (right-skewed / positive skew): Long tail on the right; mean > median > mode. Examples: income distributions, insurance claims, LLM generation probabilities.
- $\gamma_1 = 0$ (symmetric): Normal distribution, uniform, any distribution symmetric about its mean.
- $\gamma_1 < 0$ (left-skewed / negative skew): Long tail on the left; mean < median < mode. Examples: age at death in developed countries.
SKEWNESS INTUITION
========================================================================
Right-skewed (\\gamma_1 > 0): Left-skewed (\\gamma_1 < 0):
## ##
#### ####
######... ...##########
--+--+--+------ ------+--+--+--+--
Mode < Median < Mean Mean < Median < Mode
Tail pulls mean right Tail pulls mean left
========================================================================
For AI: Gradient distributions in deep networks are often right-skewed - most gradients are small, but occasional large gradients occur. This is why gradient clipping is standard practice: the long right tail causes instability if unconstrained. Token probability distributions from language models are also right-skewed: most tokens have very low probability, while a handful of likely tokens dominate.
Skewness of common distributions:
- $\mathcal{N}(\mu, \sigma^2)$: $\gamma_1 = 0$ (symmetric by construction)
- $\mathrm{Exp}(\lambda)$: $\gamma_1 = 2$ (always right-skewed)
- $\mathrm{Poisson}(\lambda)$: $\gamma_1 = 1/\sqrt{\lambda}$ (approaches 0 as $\lambda \to \infty$)
- $\mathrm{Binomial}(n, p)$: $\gamma_1 = \frac{1 - 2p}{\sqrt{np(1 - p)}}$
3.5 Kurtosis and Excess Kurtosis
Kurtosis measures the weight of the tails relative to a normal distribution:
$$\mathrm{Kurt}(X) = E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right] = \frac{\mu_4}{\sigma^4}$$
Excess kurtosis (the standard in statistics) subtracts the normal baseline of 3: $\gamma_2 = \mathrm{Kurt}(X) - 3$.
- $\gamma_2 > 0$ (leptokurtic): Heavier tails than Gaussian; more probability in the tails and peak. Examples: Student-$t$ distribution, financial returns, gradient norms.
- $\gamma_2 = 0$ (mesokurtic): Normal distribution (the baseline).
- $\gamma_2 < 0$ (platykurtic): Lighter tails; more uniform. Uniform distribution has $\gamma_2 = -6/5$.
KURTOSIS AND TAIL WEIGHT
========================================================================
Leptokurtic (\\gamma_2>0): Mesokurtic (\\gamma_2=0): Platykurtic (\\gamma_2<0):
# # ###
### ### #####
##### ##### #######
......... ....... .......
Heavy tails - outliers Normal tails Thin tails
common. Student-t, loss Gaussian Uniform
gradients, finance
========================================================================
For AI: Gradient norm distributions during training are leptokurtic - occasional large-gradient events are far more common than a Gaussian would predict. This motivates both gradient clipping (cap the maximum norm) and heavy-tailed noise models for understanding why SGD generalises. The Student-$t$ distribution ($\gamma_2 = \frac{6}{\nu - 4}$ for $\nu > 4$) is often used to model heavy-tailed data in robust statistics.
Kurtosis of common distributions (excess kurtosis $\gamma_2$):
- $\mathcal{N}(\mu, \sigma^2)$: $\gamma_2 = 0$ (by definition - the reference)
- $\mathrm{Uniform}(a, b)$: $\gamma_2 = -6/5$
- $\mathrm{Exp}(\lambda)$: $\gamma_2 = 6$
- $\mathrm{Laplace}(\mu, b)$: $\gamma_2 = 3$
- Student-$t_\nu$: $\gamma_2 = \frac{6}{\nu - 4}$ for $\nu > 4$; infinite for $2 < \nu \leq 4$
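Sample skewness and excess kurtosis follow directly from the standardised-moment definitions. A small check against the theoretical values for the normal ($\gamma_1 = 0, \gamma_2 = 0$) and exponential ($\gamma_1 = 2, \gamma_2 = 6$); these are the naive moment estimators, not bias-corrected versions:

```python
import numpy as np

def skewness(x):
    """gamma_1 = E[z^3] for the standardised sample z."""
    z = (x - x.mean()) / x.std()
    return np.mean(z**3)

def excess_kurtosis(x):
    """gamma_2 = E[z^4] - 3 for the standardised sample z."""
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0

rng = np.random.default_rng(0)
normal = rng.normal(size=1_000_000)      # theory: (0, 0)
expo = rng.exponential(size=1_000_000)   # theory: (2, 6)
```

Note that the fourth-moment estimator converges slowly for heavy-tailed data - large samples are needed for stable kurtosis estimates.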
4. Covariance and Correlation
4.1 Covariance as an Operator
The covariance of two random variables $X$ and $Y$ measures their linear co-movement:
$$\mathrm{Cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)]$$
Computational formula:
$$\mathrm{Cov}(X, Y) = E[XY] - E[X]E[Y]$$
Proof: $E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - \mu_X E[Y] - \mu_Y E[X] + \mu_X \mu_Y = E[XY] - E[X]E[Y]$.
Properties (bilinearity):
- $\mathrm{Cov}(X, Y) = \mathrm{Cov}(Y, X)$ (symmetric)
- $\mathrm{Cov}(aX + b, cY + d) = ac\,\mathrm{Cov}(X, Y)$ (bilinear, constants drop out)
- $\mathrm{Cov}(X + Z, Y) = \mathrm{Cov}(X, Y) + \mathrm{Cov}(Z, Y)$ (linear in first argument)
- $|\mathrm{Cov}(X, Y)| \leq \sigma_X \sigma_Y$ (Cauchy-Schwarz, proved in Section7.3)
Sign interpretation:
- $\mathrm{Cov}(X, Y) > 0$: $X$ and $Y$ tend to be above their means simultaneously.
- $\mathrm{Cov}(X, Y) < 0$: when $X$ is above its mean, $Y$ tends to be below its mean.
- $\mathrm{Cov}(X, Y) = 0$: no linear relationship (but $X$ and $Y$ may still be nonlinearly dependent).
Recall from Section03: The full covariance matrix of a random vector $\mathbf{X} = (X_1, \dots, X_n)$ collects all pairwise covariances: $\Sigma_{ij} = \mathrm{Cov}(X_i, X_j)$. The multivariate Gaussian is parameterised by $(\boldsymbol{\mu}, \Sigma)$. -> Full treatment: Joint Distributions Section03
4.2 Pearson Correlation Coefficient
The Pearson correlation normalises covariance to remove units:
$$\rho_{XY} = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}$$
Theorem: $-1 \leq \rho_{XY} \leq 1$.
Proof via Cauchy-Schwarz. By the Cauchy-Schwarz inequality (proved in Section7.3):
$$|\mathrm{Cov}(X, Y)| \leq \sigma_X \sigma_Y$$
Dividing both sides by $\sigma_X \sigma_Y$: $|\rho_{XY}| \leq 1$.
Boundary cases:
- $\rho = 1$: $Y = aX + b$ for some $a > 0$ (perfect positive linear relationship).
- $\rho = -1$: $Y = aX + b$ for some $a < 0$ (perfect negative linear relationship).
- $\rho = 0$: uncorrelated (no linear relationship - but not necessarily independent).
For AI: The correlation coefficient appears in weight matrix analysis. The effective rank of a weight matrix is related to the correlations between its columns. In attention, high correlation between key-query pairs produces high attention weights. In representational similarity analysis (RSA), neural network layers are compared by computing correlation matrices of their activations.
4.3 Independence and Zero Covariance
Theorem. If $X \perp Y$, then $\mathrm{Cov}(X, Y) = 0$.
Proof. If $X \perp Y$, then $E[XY] = E[X]E[Y]$ (a characterisation of independence is $E[g(X)h(Y)] = E[g(X)]\,E[h(Y)]$ for all bounded $g, h$). Therefore $\mathrm{Cov}(X, Y) = E[XY] - E[X]E[Y] = 0$.
The converse is FALSE. Zero covariance does not imply independence.
Classic counterexample. Let $X \sim \mathcal{N}(0, 1)$ and $Y = X^2$. Then:
- $E[X] = 0$ (symmetric distribution)
- $\mathrm{Cov}(X, Y) = E[XY] - E[X]E[Y] = E[X^3] = 0$ (odd function, symmetric distribution)
But $Y$ is completely determined by $X$ - knowing $X$ tells you $Y = X^2$ exactly. They are maximally dependent!
For jointly Gaussian variables only: Zero covariance DOES imply independence. This is a special property of the joint Gaussian distribution. The caveat matters: marginally Gaussian variables with zero covariance need not be jointly Gaussian, and then independence can fail.
For AI: Feature selection based on correlation can miss highly informative features that have nonlinear (but zero-covariance) relationships with the target. Mutual information is a better measure of dependence. In contrastive learning (e.g., SimCLR), redundancy reduction methods explicitly decorrelate representations - but decorrelation (zero covariance) and statistical independence are different objectives.
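The classic counterexample above is easy to verify by simulation (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
y = x**2                                       # deterministic function of x
cov = np.mean(x*y) - np.mean(x)*np.mean(y)     # sample Cov(X, X^2) ~ E[X^3] = 0
# cov is ~0, yet y is perfectly predictable from x
```

The sample covariance vanishes (up to Monte Carlo noise) even though the dependence between `x` and `y` is total - exactly the gap between decorrelation and independence discussed above.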
4.4 Variance of Sums
Theorem. For any random variables $X$ and $Y$:
$$\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$$
Proof:
$$\mathrm{Var}(X + Y) = E[(X + Y - \mu_X - \mu_Y)^2] = E[(X - \mu_X)^2] + E[(Y - \mu_Y)^2] + 2E[(X - \mu_X)(Y - \mu_Y)]$$
Corollary (independence): If $X \perp Y$: $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$.
General sum: For $X_1, \dots, X_n$:
$$\mathrm{Var}\Big(\sum_{i=1}^n X_i\Big) = \sum_{i=1}^n \mathrm{Var}(X_i) + 2 \sum_{i < j} \mathrm{Cov}(X_i, X_j)$$
If all $X_i$ are independent: $\mathrm{Var}(\sum_i X_i) = \sum_i \mathrm{Var}(X_i)$.
Scaling. For iid $X_1, \dots, X_n$ with variance $\sigma^2$:
$$\mathrm{Var}(\bar{X}_n) = \mathrm{Var}\Big(\frac{1}{n}\sum_{i=1}^n X_i\Big) = \frac{\sigma^2}{n}$$
The variance of the sample mean decreases as $1/n$ - the statistical basis for why larger datasets reduce estimation uncertainty.
For AI: Weight initialisation strategies (Xavier/Glorot, He) are derived from variance propagation. If $y = \sum_{i=1}^{n_{\text{in}}} w_i x_i$ with independent zero-mean weights and inputs, then the variance of each output neuron is $\mathrm{Var}(y) = n_{\text{in}}\, \mathrm{Var}(w)\, \mathrm{Var}(x)$. To keep variance constant across layers (preventing vanishing/exploding activations): $\mathrm{Var}(w) = 1/n_{\text{in}}$ (Xavier) or $\mathrm{Var}(w) = 2/n_{\text{in}}$ (He for ReLU). This is the variance-of-sums formula applied to layer activations.
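The variance-propagation argument behind Xavier initialisation can be checked directly: with $\mathrm{Var}(w) = 1/n_{\text{in}}$ and unit-variance inputs, pre-activations keep variance near 1 (a sketch under the stated independence assumptions; layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch = 512, 512, 10_000
x = rng.normal(size=(batch, n_in))                           # unit-variance inputs
W = rng.normal(scale=np.sqrt(1.0/n_in), size=(n_in, n_out))  # Xavier-style Var(w) = 1/n_in
y = x @ W
out_var = y.var()   # ~ n_in * Var(w) * Var(x) = 1
```

Replacing the scale with `np.sqrt(2.0/n_in)` (He) would double the output variance, which is what a ReLU halving the signal requires.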
5. Conditional Expectation
5.1 Definition and Basic Properties
The conditional expectation $E[Y \mid X = x]$ is the expected value of $Y$ given that $X$ takes the specific value $x$. It is a function of $x$, not a number.
Discrete case: $E[Y \mid X = x] = \sum_y y\, p_{Y|X}(y \mid x)$
Continuous case: $E[Y \mid X = x] = \int_{-\infty}^{\infty} y\, f_{Y|X}(y \mid x)\, dy$
As a random variable. When we write $E[Y \mid X]$ (without specifying the value of $X$), we mean the random variable obtained by composing the function $x \mapsto E[Y \mid X = x]$ with $X$. This is a function of $X$ and is therefore itself a random variable.
Basic properties:
- $E[aY + bZ \mid X] = aE[Y \mid X] + bE[Z \mid X]$ (linearity)
- $E[g(X)\,Y \mid X] = g(X)\,E[Y \mid X]$ (taking out known factors)
- If $X \perp Y$: $E[Y \mid X] = E[Y]$ (conditioning on an independent variable doesn't help)
- $E[E[Y \mid X]] = E[Y]$ (tower property - see Section5.2)
Example. Let $(X, Y)$ be jointly uniform on the triangle $0 \leq y \leq x \leq 1$. From Section03 (Appendix V.1), the conditional density is $f_{Y|X}(y \mid x) = 1/x$ on $[0, x]$. Therefore:
$$E[Y \mid X = x] = \int_0^x y \cdot \frac{1}{x}\, dy = \frac{x}{2}$$
The conditional mean of $Y$ given $X = x$ is $x/2$ - exactly half of $x$, which makes sense since $Y$ is uniform on $[0, x]$.
5.2 Tower Property (Iterated Expectation)
The tower property (also called the law of iterated expectations or Adam's law) is one of the most useful results in probability:
$$E[E[Y \mid X]] = E[Y]$$
More generally, for any function $g$:
$$E[E[g(X, Y) \mid X]] = E[g(X, Y)]$$
Proof (continuous case):
$$E[E[Y \mid X]] = \int \left( \int y\, f_{Y|X}(y \mid x)\, dy \right) f_X(x)\, dx = \iint y\, f_{X,Y}(x, y)\, dy\, dx = E[Y]$$
Intuition. If you first average within groups defined by $X$, then average those group means weighted by group size, you get the overall mean of $Y$.
Example: Expected test score. A class has 40% students from Group A (mean score 75) and 60% from Group B (mean score 85). Overall expected score: $0.4 \times 75 + 0.6 \times 85 = 81$. This is exactly the tower property: $E[S] = E[E[S \mid G]]$ where $G$ is group membership.
For AI: The tower property underlies the EM algorithm. The E-step computes $Q(\theta) = E_{Z \mid X, \theta^{(t)}}[\log p(X, Z \mid \theta)]$ - the expected complete-data log-likelihood, where the expectation is over the latent variable $Z$ given the observed $X$. The tower property guarantees that maximising $Q$ increases $\log p(X \mid \theta)$, the marginal likelihood. In policy gradient methods, $E[R] = E[E[R \mid S_0]]$ - decomposing trajectory returns by initial state.
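The test-score example can be simulated directly; the grouping probability and the within-group standard deviation of 5 are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
group_a = rng.random(n) < 0.4                 # P(Group A) = 0.4
score = np.where(group_a,
                 rng.normal(75.0, 5.0, n),    # Group A: mean 75
                 rng.normal(85.0, 5.0, n))    # Group B: mean 85
overall = score.mean()                        # tower: 0.4*75 + 0.6*85 = 81
```

Averaging within groups first and then across groups gives the same 81 - the simulation just confirms the iterated-expectation arithmetic.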
5.3 Conditional Variance and Law of Total Variance
The conditional variance of $Y$ given $X$ is:
$$\mathrm{Var}(Y \mid X) = E[(Y - E[Y \mid X])^2 \mid X]$$
Law of Total Variance (Eve's Law):
$$\mathrm{Var}(Y) = E[\mathrm{Var}(Y \mid X)] + \mathrm{Var}(E[Y \mid X])$$
This decomposes total variance into:
- Within-group variance: $E[\mathrm{Var}(Y \mid X)]$ - average variability within each value of $X$.
- Between-group variance: $\mathrm{Var}(E[Y \mid X])$ - variability of the group means.
Proof:
Using the tower property twice and writing $m(X) = E[Y \mid X]$:
$$\mathrm{Var}(Y) = E[Y^2] - (E[Y])^2 = E[E[Y^2 \mid X]] - (E[m(X)])^2 = E[\mathrm{Var}(Y \mid X) + m(X)^2] - (E[m(X)])^2$$
And $E[m(X)^2] - (E[m(X)])^2 = \mathrm{Var}(m(X)) = \mathrm{Var}(E[Y \mid X])$. Therefore:
$$\mathrm{Var}(Y) = E[\mathrm{Var}(Y \mid X)] + \mathrm{Var}(E[Y \mid X])$$
For AI: The law of total variance is the mathematical foundation of the bias-variance decomposition (Section8). For a model trained on dataset : total error = expected within-dataset error + between-dataset variance. It also explains why mixture models (GMMs) combine within-component variance and between-component variance.
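Eve's law can be verified on a two-group setup (group means 75 and 85, mixing weight 0.4, within-group standard deviation 5 - illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
a = rng.random(n) < 0.4                        # grouping variable G
y = np.where(a, rng.normal(75.0, 5.0, n),
                rng.normal(85.0, 5.0, n))
within = 0.4*25 + 0.6*25                       # E[Var(Y|G)] = 25
between = 0.4*(75-81)**2 + 0.6*(85-81)**2      # Var(E[Y|G]) = 24
total = y.var()                                # should match 25 + 24 = 49
```

The mixture's total variance (49) is strictly larger than either component's variance (25) - the spread of the group means contributes the rest, exactly as the decomposition predicts.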
5.4 AI Applications of Conditional Expectation
Optimal predictor. The function $f^*(x) = E[Y \mid X = x]$ minimises the expected squared error:
$$f^* = \arg\min_f E[(Y - f(X))^2]$$
Proof: For any $f$,
$$E[(Y - f(X))^2] = E[(Y - f^*(X) + f^*(X) - f(X))^2] = E[(Y - f^*(X))^2] + E[(f^*(X) - f(X))^2]$$
The cross term vanishes because $E[Y - f^*(X) \mid X] = 0$, so by the tower property $E[(Y - f^*(X))\,h(X)] = 0$ for any $h$. The last term is non-negative. Therefore $f^*$ is optimal.
This is profound: the best possible predictor (in MSE sense) is the conditional expectation. Neural networks are universal approximators trained to approximate $E[Y \mid X]$.
Rao-Blackwell theorem. In estimation theory: if $\hat{\theta}$ is any unbiased estimator of $\theta$ and $T$ is a sufficient statistic, then $\tilde{\theta} = E[\hat{\theta} \mid T]$ is also unbiased and has variance $\mathrm{Var}(\tilde{\theta}) \leq \mathrm{Var}(\hat{\theta})$. Conditioning on sufficient statistics never increases variance. This is the variance reduction analogue of why attention over relevant context helps.
Attention as conditional expectation. The output of self-attention at position $i$ is:
$$\mathrm{out}_i = \sum_j \alpha_{ij}\, v_j = E_{j \sim \alpha_i}[v_j]$$
This is the conditional expectation of the value vectors under the attention distribution over key positions. High-entropy attention = averaging many values; low-entropy (sharp) attention = conditioning on one specific value.
6. Moment Generating Functions and Characteristic Functions
6.1 MGF Definition and Basic Properties
The moment generating function (MGF) of a random variable $X$ is:
$$M_X(t) = E[e^{tX}]$$
defined for all $t$ in an open interval containing 0. If no such interval exists (the integral diverges for every $t \neq 0$), the MGF is said not to exist.
Moments from the MGF. Differentiating $k$ times and evaluating at $t = 0$ gives the $k$-th raw moment:
$$M_X^{(k)}(0) = E[X^k]$$
Proof: $M_X(t) = E[e^{tX}] = E\Big[\sum_{k=0}^\infty \frac{t^k X^k}{k!}\Big] = \sum_{k=0}^\infty \frac{t^k\, E[X^k]}{k!}$ (when the interchange of sum and expectation is valid).
Differentiating $k$ times and setting $t = 0$: $M_X^{(k)}(0) = E[X^k]$.
Uniqueness theorem. If $M_X(t) = M_Y(t)$ for $t$ in an open neighbourhood of 0, then $X$ and $Y$ have the same distribution. The MGF uniquely determines the distribution (when it exists).
Existence. The MGF exists when $E[e^{tX}] < \infty$ for $|t| < t_0$ for some $t_0 > 0$. Heavy-tailed distributions (Cauchy, Pareto with small $\alpha$, log-normal) do NOT have finite MGFs. The characteristic function (Section6.4) always exists and is the preferred tool for such distributions.
6.2 MGF of Common Distributions
Bernoulli($p$): $M(t) = 1 - p + p e^t$
Check: $M'(t) = p e^t$, $M'(0) = p = E[X]$. [ok]
Binomial($n, p$): $M(t) = (1 - p + p e^t)^n$ (sum of $n$ iid Bernoulli)
First moment: $M'(0) = n(1 - p + p e^0)^{n-1}\, p e^0 = np$. [ok]
Poisson($\lambda$): $M(t) = e^{\lambda(e^t - 1)}$
Derivation: $M(t) = \sum_{k=0}^\infty e^{tk} \frac{\lambda^k e^{-\lambda}}{k!} = e^{-\lambda} \sum_k \frac{(\lambda e^t)^k}{k!} = e^{\lambda(e^t - 1)}$. $M'(t) = \lambda e^t M(t)$, $M'(0) = \lambda$, so $E[X] = \lambda$. [ok]
Normal($\mu, \sigma^2$): $M(t) = e^{\mu t + \sigma^2 t^2 / 2}$
Derivation: Complete the square in the exponent of $\int e^{tx} \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x - \mu)^2 / (2\sigma^2)}\, dx$. $M'(0) = \mu$, $M''(0) = \mu^2 + \sigma^2$, so $\mathrm{Var}(X) = \sigma^2$. [ok]
Exponential($\lambda$): $M(t) = \frac{\lambda}{\lambda - t}$ for $t < \lambda$
Derivation: $M(t) = \int_0^\infty e^{tx}\, \lambda e^{-\lambda x}\, dx = \frac{\lambda}{\lambda - t}$ (converges iff $t < \lambda$). $M'(0) = 1/\lambda$, $M''(0) = 2/\lambda^2$, $\mathrm{Var}(X) = 1/\lambda^2$. [ok]
For AI: MGFs appear in the proof of the Chernoff bound (Section05), which gives exponentially sharp tail bounds. The exponential form is closely related to the log-partition function in the exponential family: $p_\theta(x) = h(x) \exp(\theta^\top T(x) - A(\theta))$. The moments of the sufficient statistic $T(x)$ can be computed by differentiating $A(\theta)$ - the MGF and the log-partition function are connected by a change of variables.
6.3 MGF of Sums and the Reproductive Property
Key theorem. If $X \perp Y$:
$$M_{X+Y}(t) = M_X(t)\, M_Y(t)$$
Proof: $M_{X+Y}(t) = E[e^{t(X+Y)}] = E[e^{tX} e^{tY}] = E[e^{tX}]\, E[e^{tY}] = M_X(t)\, M_Y(t)$, using independence for the third equality.
This is the MGF version of the convolution theorem. It provides elegant proofs of reproductive properties:
Gaussian reproductive property. If $X \sim \mathcal{N}(\mu_1, \sigma_1^2)$ and $Y \sim \mathcal{N}(\mu_2, \sigma_2^2)$ independently:
$$M_{X+Y}(t) = e^{\mu_1 t + \sigma_1^2 t^2 / 2}\, e^{\mu_2 t + \sigma_2^2 t^2 / 2} = e^{(\mu_1 + \mu_2) t + (\sigma_1^2 + \sigma_2^2) t^2 / 2}$$
This is the MGF of $\mathcal{N}(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$. So $X + Y \sim \mathcal{N}(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$.
Poisson reproductive property. If $X \sim \mathrm{Poisson}(\lambda_1)$, $Y \sim \mathrm{Poisson}(\lambda_2)$ independently:
$$M_{X+Y}(t) = e^{\lambda_1(e^t - 1)}\, e^{\lambda_2(e^t - 1)} = e^{(\lambda_1 + \lambda_2)(e^t - 1)}$$
So $X + Y \sim \mathrm{Poisson}(\lambda_1 + \lambda_2)$.
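The Poisson reproductive property is easy to confirm by simulation: the sum should again have mean = variance = $\lambda_1 + \lambda_2$ (an illustrative check with $\lambda_1 = 2, \lambda_2 = 3$):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.poisson(2.0, 1_000_000) + rng.poisson(3.0, 1_000_000)
# X + Y ~ Poisson(5): mean and variance should both be ~5
```

Mean = variance is the Poisson signature, so matching both to 5 is (informal) evidence the sum is again Poisson.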
6.4 Characteristic Function
The characteristic function of $X$ is:
$$\varphi_X(t) = E[e^{itX}]$$
where $i = \sqrt{-1}$.
Key advantage: $\varphi_X$ exists for ALL distributions (since $|e^{itX}| = 1$ always), unlike the MGF which may not exist for heavy-tailed distributions.
Connection to MGF: $\varphi_X(t) = M_X(it)$ when the MGF exists. But the characteristic function is defined even when the MGF is not.
Moments from $\varphi$. When moments exist:
$$E[X^k] = i^{-k}\, \varphi_X^{(k)}(0)$$
Inversion formula. The distribution is uniquely determined by its characteristic function:
$$f_X(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx}\, \varphi_X(t)\, dt$$
(when $X$ is continuous). This is the Fourier inversion formula - the characteristic function IS the Fourier transform of the PDF.
For AI: Characteristic functions appear in the analysis of convergence of distributions (Central Limit Theorem proofs use characteristic functions because they always exist). The score-based diffusion model framework uses the score function $\nabla_x \log p(x)$, which is related to the characteristic function via Fourier analysis. Random Fourier Features (Rahimi & Recht, 2007) approximate kernel functions via random sampling from the characteristic function of the kernel.
6.5 Cumulants and the Cumulant Generating Function
The cumulant generating function (CGF) is the log of the MGF:
$$K_X(t) = \log M_X(t)$$
Cumulants $\kappa_k$ are the coefficients in the Taylor expansion of $K_X$:
$$K_X(t) = \sum_{k=1}^\infty \kappa_k \frac{t^k}{k!}$$
so $\kappa_k = K_X^{(k)}(0)$.
First four cumulants:
- $\kappa_1 = \mu$ (mean)
- $\kappa_2 = \sigma^2$ (variance)
- $\kappa_3 = \mu_3$ (third central moment $= \gamma_1 \sigma^3$)
- $\kappa_4 = \mu_4 - 3\sigma^4$ (excess kurtosis times $\sigma^4$: $\kappa_4 = \gamma_2 \sigma^4$)
Key property. For independent $X \perp Y$:
$$K_{X+Y}(t) = K_X(t) + K_Y(t), \qquad \kappa_k(X + Y) = \kappa_k(X) + \kappa_k(Y)$$
Cumulants of a sum of independent variables add - this is the additivity property that makes cumulants more natural than moments for sums.
Normal distribution: $K(t) = \mu t + \sigma^2 t^2 / 2$. All cumulants $\kappa_k = 0$ for $k \geq 3$. This characterises the normal distribution - it is the unique distribution with all cumulants beyond the second equal to zero.
7. Jensen's Inequality and Moment Inequalities
7.1 Jensen's Inequality
Definition. A function $f$ is convex if for all $x, y$ and $\lambda \in [0, 1]$:
$$f(\lambda x + (1 - \lambda) y) \leq \lambda f(x) + (1 - \lambda) f(y)$$
Geometrically: the chord between any two points lies above the curve. Equivalently (for twice-differentiable $f$): $f''(x) \geq 0$ everywhere.
Theorem (Jensen's Inequality). If $f$ is convex and $E[|X|] < \infty$:
$$E[f(X)] \geq f(E[X])$$
Proof (via supporting hyperplane). Since $f$ is convex, for any point $x_0$ there exists a supporting hyperplane at $x_0$: a linear function such that $f(x) \geq f(x_0) + g(x_0)(x - x_0)$ for all $x$ (where $g(x_0)$ is the subgradient at $x_0$). Setting $x_0 = E[X] = \mu$:
$$f(X) \geq f(\mu) + g(\mu)(X - \mu)$$
Taking expectations of both sides:
$$E[f(X)] \geq f(\mu) + g(\mu)\, E[X - \mu] = f(\mu) = f(E[X])$$
Equality condition: $E[f(X)] = f(E[X])$ iff $f$ is linear on the support of $X$, or $X$ is almost surely constant.
For concave $f$: $E[f(X)] \leq f(E[X])$ (Jensen's inequality reverses).
Common applications:
| Function | Convex/Concave | Jensen gives |
|---|---|---|
| $f(x) = x^2$ | Convex | $E[X^2] \geq (E[X])^2$ (variance $\geq 0$) |
| $f(x) = e^x$ | Convex | $E[e^X] \geq e^{E[X]}$, i.e., $\log E[e^X] \geq E[X]$ |
| $f(x) = 1/x$ ($x > 0$) | Convex | $E[1/X] \geq 1/E[X]$ |
| $f(x) = \log x$ | Concave | $E[\log X] \leq \log E[X]$ |
| $f(x) = \sqrt{x}$ | Concave | $E[\sqrt{X}] \leq \sqrt{E[X]}$ |
7.2 Applications: KL Divergence and ELBO
KL divergence non-negativity. The Kullback-Leibler divergence is:
$$D_{\mathrm{KL}}(q \,\|\, p) = E_{x \sim q}\left[\log \frac{q(x)}{p(x)}\right]$$
Theorem (Gibbs' inequality). $D_{\mathrm{KL}}(q \,\|\, p) \geq 0$, with equality iff $q = p$ almost everywhere.
Proof via Jensen:
$$-D_{\mathrm{KL}}(q \,\|\, p) = E_q\left[\log \frac{p(x)}{q(x)}\right] \leq \log E_q\left[\frac{p(x)}{q(x)}\right] = \log \int p(x)\, dx = \log 1 = 0$$
where we applied Jensen's inequality with the concave function $\log$. Therefore $D_{\mathrm{KL}} \geq 0$.
ELBO derivation via Jensen. For a latent variable model $p(x) = \int p(x, z)\, dz$:
$$\log p(x) = \log E_{q(z)}\left[\frac{p(x, z)}{q(z)}\right] \geq E_{q(z)}\left[\log \frac{p(x, z)}{q(z)}\right]$$
where the inequality applies Jensen's to the concave $\log$. This lower bound is the ELBO:
$$\mathrm{ELBO}(q) = E_{q(z)}[\log p(x \mid z)] - D_{\mathrm{KL}}(q(z) \,\|\, p(z))$$
The gap between $\log p(x)$ and the ELBO is exactly $D_{\mathrm{KL}}(q(z) \,\|\, p(z \mid x))$. Maximising the ELBO tightens this bound.
Log-sum inequality. For non-negative $a_1, \dots, a_n$ and $b_1, \dots, b_n$:
$$\sum_i a_i \log \frac{a_i}{b_i} \geq \Big(\sum_i a_i\Big) \log \frac{\sum_i a_i}{\sum_i b_i}$$
This is the discrete version of KL non-negativity via Jensen.
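Gibbs' inequality can be spot-checked over random distribution pairs (a sketch; the 10-element support and 100 trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
kls = []
for _ in range(100):
    q = rng.random(10); q /= q.sum()        # random distribution q
    p = rng.random(10); p /= p.sum()        # random distribution p
    kls.append(np.sum(q * np.log(q / p)))   # D_KL(q || p) = E_q[log(q/p)]
# every KL value is non-negative, and strictly positive when q != p
```

No random pair produces a negative divergence - the Jensen gap only closes when the two distributions coincide.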
7.3 Cauchy-Schwarz Inequality
Theorem (Cauchy-Schwarz for expectations). For any random variables $X$ and $Y$:
$$(E[XY])^2 \leq E[X^2]\, E[Y^2]$$
with equality iff $Y = cX$ almost surely for some constant $c$.
Proof. For any $t \in \mathbb{R}$:
$$0 \leq E[(X + tY)^2] = E[X^2] + 2t\, E[XY] + t^2\, E[Y^2]$$
This is a quadratic in $t$ that is always non-negative. A quadratic $at^2 + bt + c \geq 0$ for all $t$ iff discriminant $b^2 - 4ac \leq 0$. Here $a = E[Y^2]$, $b = 2E[XY]$, $c = E[X^2]$:
$$4(E[XY])^2 - 4E[X^2]E[Y^2] \leq 0 \quad \Longrightarrow \quad (E[XY])^2 \leq E[X^2]E[Y^2]$$
Corollary ($|\rho| \leq 1$). Apply the Cauchy-Schwarz inequality to the centred variables $X - \mu_X$ and $Y - \mu_Y$:
$$|\mathrm{Cov}(X, Y)| \leq \sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)} = \sigma_X \sigma_Y$$
Dividing: $|\rho_{XY}| \leq 1$.
For AI: Cauchy-Schwarz appears in attention - the dot product $q^\top k$ is bounded by $\|q\|\,\|k\|$, motivating the $1/\sqrt{d_k}$ scaling in scaled dot-product attention ($\mathrm{softmax}(QK^\top/\sqrt{d_k})\,V$). Without this scaling, the dot products can become large in magnitude for high-dimensional keys/queries, causing the softmax to saturate and gradients to vanish.
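The effect of the $1/\sqrt{d_k}$ scaling is visible in a few lines: for unit-variance components, the raw logit standard deviation grows like $\sqrt{d}$, while the scaled logits stay at scale 1 (an illustrative sketch; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
raw_std, scaled_std = {}, {}
for d in (16, 256, 4096):
    q = rng.normal(size=(1_000, d))    # unit-variance query components
    k = rng.normal(size=(1_000, d))    # unit-variance key components
    dots = (q * k).sum(axis=1)         # pre-softmax logits q.k
    raw_std[d] = dots.std()                        # grows like sqrt(d)
    scaled_std[d] = (dots / np.sqrt(d)).std()      # stays ~1 after scaling
```

Without scaling, logits at $d = 4096$ are ~16x larger than at $d = 16$, pushing the softmax into its saturated regime.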
7.4 Lyapunov Inequality
Theorem (Lyapunov's inequality). For $0 < r < s$:
$$(E[|X|^r])^{1/r} \leq (E[|X|^s])^{1/s}$$
In words: the $L^p$ norm of $X$ (as a random variable) is non-decreasing in $p$.
Proof. Apply Jensen's inequality with the convex function $\varphi(u) = u^{s/r}$ (convex since $s/r > 1$) to the random variable $|X|^r$:
$$(E[|X|^r])^{s/r} = \varphi(E[|X|^r]) \leq E[\varphi(|X|^r)] = E[|X|^s]$$
Taking both sides to the power $1/s$: $(E[|X|^r])^{1/r} \leq (E[|X|^s])^{1/s}$.
Consequence. If the $s$-th moment is finite, all lower moments are finite. If $E[X^2] < \infty$, then $E[|X|] < \infty$ as well.
For AI: Lyapunov's inequality underlies the Lyapunov CLT (a variant of the central limit theorem for non-identically distributed variables). In the theory of stochastic gradient descent, the existence of higher moments of the gradient controls the tightness of convergence bounds.
7.5 Preview: Tail Bounds from Moments
The most direct application of moments to probability is bounding how far a random variable can stray from its mean. These are covered fully in Section05, but we preview the key results here.
Preview: Markov's Inequality
For any non-negative random variable $X$ and $a > 0$:
$$P(X \ge a) \le \frac{\mathbb{E}[X]}{a}.$$
Proof: $\mathbb{E}[X] \ge \mathbb{E}\left[X \,\mathbf{1}\{X \ge a\}\right] \ge a\, P(X \ge a)$. The mean bounds the probability of large values.
-> Full treatment and applications: Section05 Concentration Inequalities
Preview: Chebyshev's Inequality
For any random variable $X$ with mean $\mu$ and variance $\sigma^2 < \infty$, and any $k > 0$:
$$P(|X - \mu| \ge k\sigma) \le \frac{1}{k^2}.$$
Proof: Apply Markov's inequality to $(X - \mu)^2$: $P(|X - \mu| \ge k\sigma) = P\left((X - \mu)^2 \ge k^2\sigma^2\right) \le \dfrac{\mathbb{E}[(X - \mu)^2]}{k^2\sigma^2} = \dfrac{1}{k^2}$. The variance bounds how often $X$ deviates from its mean by $k$ standard deviations.
-> Full treatment and sharper bounds: Section05 Concentration Inequalities
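Both previewed bounds can be checked by simulation. A minimal sketch using Exp(1) samples (an illustrative choice: mean and standard deviation both equal 1) shows that the bounds hold but are quite loose:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)   # Exp(1): mu = sigma = 1

for a in [2.0, 4.0, 8.0]:                        # Markov: P(X >= a) <= E[X]/a
    emp_markov = (x >= a).mean()
    print(f"P(X >= {a}) = {emp_markov:.4f}  <=  {1/a:.4f} (Markov)")

for kk in [2.0, 3.0]:                            # Chebyshev: P(|X-mu| >= k*sigma) <= 1/k^2
    emp_cheb = (np.abs(x - 1.0) >= kk).mean()
    print(f"P(|X-1| >= {kk}) = {emp_cheb:.4f}  <=  {1/kk**2:.4f} (Chebyshev)")
```

The gap between the empirical probabilities and the bounds is exactly what the exponentially sharper inequalities of Section05 close.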
8. Bias-Variance Decomposition
8.1 Statistical Risk and Optimal Predictor
Consider predicting $Y$ from $X$ using a function $f(X)$. The statistical risk (expected squared loss) is:
$$R(f) = \mathbb{E}\left[(Y - f(X))^2\right].$$
Theorem. The minimiser of $R(f)$ over all measurable functions $f$ is the conditional expectation:
$$f^*(x) = \mathbb{E}[Y \mid X = x],$$
and the minimum risk equals the irreducible noise:
$$R(f^*) = \mathbb{E}\left[\mathrm{Var}(Y \mid X)\right].$$
Proof: From Section5.4, for any $f$:
$$\mathbb{E}\left[(Y - f(X))^2\right] = \mathbb{E}\left[(Y - \mathbb{E}[Y|X])^2\right] + \mathbb{E}\left[(\mathbb{E}[Y|X] - f(X))^2\right] \ge \mathbb{E}\left[(Y - \mathbb{E}[Y|X])^2\right],$$
with equality iff $f(X) = \mathbb{E}[Y|X]$ almost surely. And $\mathbb{E}\left[(Y - \mathbb{E}[Y|X])^2\right] = \mathbb{E}[\mathrm{Var}(Y|X)]$ by definition.
The Bayes risk (irreducible error) cannot be reduced by any model, no matter how complex. It is the noise inherent in the prediction problem.
For regression: With $Y = f_{\text{true}}(X) + \varepsilon$ where $\varepsilon$ is independent zero-mean noise with variance $\sigma^2$, the Bayes risk is $\sigma^2$.
For AI: The irreducible error is why perfect test accuracy is impossible on noisy datasets. In language modelling, even a perfect model (one that exactly learns the true distribution) has non-zero cross-entropy loss equal to the entropy $H(p)$ of the true distribution. GPT-4's perplexity on human text is bounded below by the entropy of human language.
8.2 Bias-Variance-Noise Decomposition
In practice, we learn a model $\hat{f}_D$ from a finite dataset $D$ of $n$ samples. The model is a random function - it depends on the random training data. We can decompose the expected prediction error at a new test point $x$:
$$\mathbb{E}_{D, \varepsilon}\left[(Y - \hat{f}_D(x))^2\right] = \underbrace{\left(f_{\text{true}}(x) - \bar{f}(x)\right)^2}_{\text{Bias}^2} + \underbrace{\mathbb{E}_D\left[(\hat{f}_D(x) - \bar{f}(x))^2\right]}_{\text{Variance}} + \underbrace{\sigma^2}_{\text{Noise}}.$$
Proof. Let $\bar{f}(x) = \mathbb{E}_D[\hat{f}_D(x)]$ (expected prediction) and $Y = f_{\text{true}}(x) + \varepsilon$ (noisy observation). Since $\varepsilon$ is independent of $D$ with mean zero:
$$\mathbb{E}\left[(Y - \hat{f}_D(x))^2\right] = \mathbb{E}_D\left[(f_{\text{true}}(x) - \hat{f}_D(x))^2\right] + \sigma^2.$$
Expanding the first term around $\bar{f}(x)$:
$$\mathbb{E}_D\left[(f_{\text{true}} - \bar{f} + \bar{f} - \hat{f}_D)^2\right] = (f_{\text{true}} - \bar{f})^2 + \mathbb{E}_D\left[(\hat{f}_D - \bar{f})^2\right],$$
where the cross term is $2(f_{\text{true}} - \bar{f})\,\mathbb{E}_D[\bar{f} - \hat{f}_D] = 0$ (since $\mathbb{E}_D[\hat{f}_D] = \bar{f}$). Combining: Bias^2 + Variance + Noise.
BIAS-VARIANCE TRADEOFF
========================================================================
Error Total Error
^ /
| Variance /
| _______________/
| /
| / Bias^2
| / _______________
|_____/__/________________
| Noise
+-------------------------------- Model Complexity ->
Classical view: sweet spot minimises total error
========================================================================
Interpretations:
- High bias: The model is too simple to capture $f_{\text{true}}$ (underfitting). Consistent but wrong. Example: a linear model for a quadratic $f_{\text{true}}$.
- High variance: The model is too complex - it fits training noise (overfitting). Right on average but highly variable across datasets. Example: a high-degree polynomial on limited data.
- Noise: Irreducible. The best we can do.
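The decomposition can be estimated by Monte Carlo: repeatedly draw training sets, refit the model, and look at the spread of its predictions at a fixed test point. The setup below (quadratic truth, noise level, polynomial degrees) is illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = lambda x: x ** 2                  # "true" regression function (assumed)
sigma, n, trials, x0 = 0.3, 20, 2000, 0.5  # noise sd, samples, repeats, test point

results = {}
for degree in [1, 2, 9]:
    preds = np.empty(trials)
    for t in range(trials):
        x = rng.uniform(0, 1, n)
        y = f_true(x) + sigma * rng.standard_normal(n)
        preds[t] = np.polyval(np.polyfit(x, y, degree), x0)  # refit, predict at x0
    results[degree] = ((preds.mean() - f_true(x0)) ** 2, preds.var())
    print(f"degree {degree}: bias^2 = {results[degree][0]:.5f}, "
          f"variance = {results[degree][1]:.5f}")
```

The degree-1 model shows nonzero bias squared (it cannot represent the quadratic), while the degree-9 model shows much larger variance (it chases the noise in each dataset) - the two failure modes listed above.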
8.3 Double Descent and Modern Deep Learning
The classical bias-variance picture predicts a unique optimal complexity. Modern deep learning violates this: neural networks trained to zero training loss (interpolating the training data, which classical theory predicts should overfit catastrophically) often generalise well. This is the double descent phenomenon.
DOUBLE DESCENT
========================================================================
Test Error
^
| Classical regime Modern regime
| | |
| +----+ | |
| ++ ++ | |
| ++ ++ | +----+
| ++ +--+-----------------+
| | Interpolation threshold
+---------------------------------------- Model Size ->
^ ^
Underfitting Overparameterised
regime regime (good!)
========================================================================
Why does this happen? In overparameterised models:
- Many solutions interpolate the training data.
- Gradient descent (especially with weight decay or implicit regularisation from SGD noise) selects the minimum-norm solution among all interpolating solutions.
- The minimum-norm interpolant has low variance among interpolants - it is the "flattest" fit.
The bias-variance decomposition still holds in double descent, but the variance first rises (at the interpolation threshold, the model just barely interpolates - highly sensitive to training data) and then falls again (overparameterised models have many interpolating solutions, most with low variance).
For AI (2026 context): Large language models like GPT-4, Claude, and Gemini are massively overparameterised. They interpolate (or nearly interpolate) their training data and yet generalise remarkably well - this is the double descent regime. Understanding why overparameterisation helps generalisation remains one of the central open questions in deep learning theory.
8.4 Regularisation as Bias Injection
L2 regularisation (Ridge regression). The regularised estimator minimises:
$$\hat{\beta}_\lambda = \arg\min_\beta \; \|y - X\beta\|^2 + \lambda \|\beta\|^2.$$
For linear regression with design matrix $X$ (overloading notation): $\hat{\beta}_\lambda = (X^\top X + \lambda I)^{-1} X^\top y$.
Bias-variance analysis of Ridge:
- Bias: $\hat{\beta}_\lambda$ is a biased estimator of $\beta$ for $\lambda > 0$. Specifically, $\mathbb{E}[\hat{\beta}_\lambda] = (X^\top X + \lambda I)^{-1} X^\top X \,\beta \ne \beta$. Ridge shrinks estimates toward 0, introducing bias.
- Variance: $\mathrm{Var}(\hat{\beta}_\lambda) \preceq \mathrm{Var}(\hat{\beta}_0)$ for any $\lambda > 0$. The regularised estimator has smaller variance - it is more stable with respect to perturbations in the training data.
- Trade-off: As $\lambda$ increases, bias increases and variance decreases. The optimal $\lambda$ minimises total risk.
For AI: Weight decay in neural network training (the most common form of regularisation) adds $\frac{\lambda}{2}\|\theta\|^2$ to the loss, equivalent to L2 regularisation for plain SGD. AdamW implements weight decay correctly by decoupling it from the adaptive learning rate. Understanding the bias-variance trade-off tells us why weight decay helps: it reduces model variance (overfitting) at the cost of some bias.
9. Expectation in ML: Core Applications
9.1 Cross-Entropy Loss as an Expectation
For a $K$-class classification problem, the cross-entropy loss for a single example with true label $y$ and model probabilities $\hat{p}_\theta(\cdot \mid x)$ is:
$$\ell(x, y; \theta) = -\log \hat{p}_\theta(y \mid x).$$
The expected cross-entropy over the data distribution $p(x, y)$:
$$\mathcal{L}(\theta) = \mathbb{E}_{(x, y) \sim p}\left[-\log \hat{p}_\theta(y \mid x)\right].$$
This equals the cross-entropy between the true label distribution $p(y|x)$ and the model's predicted distribution $\hat{p}_\theta(y|x)$, averaged over inputs $x$.
Connection to KL divergence. For each input $x$:
$$\mathbb{E}_{y \sim p(\cdot|x)}\left[-\log \hat{p}_\theta(y|x)\right] = H\left(p(\cdot|x)\right) + D_{\mathrm{KL}}\left(p(\cdot|x) \,\|\, \hat{p}_\theta(\cdot|x)\right),$$
where $H(p(\cdot|x))$ is the entropy of the true conditional. Since $H(p(\cdot|x))$ doesn't depend on $\theta$, minimising cross-entropy is equivalent to minimising KL divergence - i.e., making the model distribution as close as possible to the true distribution, in the KL sense.
For language models. The cross-entropy loss on a token sequence $x_1, \dots, x_T$ is:
$$\mathcal{L}(\theta) = -\frac{1}{T} \sum_{t=1}^{T} \log \hat{p}_\theta(x_t \mid x_{<t}).$$
This is the expected negative log-probability per token - an expectation under the empirical distribution of the training corpus. Minimising this drives the model distribution toward the empirical distribution of human text. The perplexity is $\exp(\mathcal{L})$: a model with perplexity 50 is "equally surprised" by text as a uniform distribution over 50 tokens.
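The per-token computation can be sketched in a few lines. The token probabilities below are made-up values standing in for a model's output on six observed tokens:

```python
import numpy as np

# model-assigned probability of each actually-observed token (made-up values)
p_observed = np.array([0.5, 0.1, 0.25, 0.4, 0.2, 0.05])

cross_entropy = -np.mean(np.log(p_observed))   # mean negative log-likelihood per token
perplexity = np.exp(cross_entropy)
print(f"cross-entropy = {cross_entropy:.4f} nats, perplexity = {perplexity:.2f}")

# sanity check: a uniform model over V tokens has perplexity exactly V
V = 50
assert np.isclose(np.exp(-np.log(1.0 / V)), V)
```

The final check confirms the "uniform over 50 tokens" interpretation: assigning every token probability $1/V$ gives cross-entropy $\log V$ and hence perplexity exactly $V$.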
9.2 ELBO as an Expected Value
The Variational Autoencoder (VAE) introduces an approximate posterior $q_\phi(z|x)$ (the encoder) to approximate the intractable true posterior $p_\theta(z|x)$. The Evidence Lower BOund (ELBO) is:
$$\mathcal{L}(\theta, \phi; x) = \underbrace{\mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right]}_{\text{reconstruction}} - \underbrace{D_{\mathrm{KL}}\left(q_\phi(z|x) \,\|\, p(z)\right)}_{\text{regularisation}}.$$
Reconstruction term: $\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]$ - the expected log-likelihood of the observed data given a latent code, averaged over the approximate posterior. This rewards the decoder for reconstructing $x$ well from samples of $q_\phi$.
KL regularisation term: $D_{\mathrm{KL}}(q_\phi(z|x) \,\|\, p(z))$ - penalises the approximate posterior for straying from the prior $p(z) = \mathcal{N}(0, I)$. For a Gaussian encoder $q_\phi(z|x) = \mathcal{N}\left(\mu_\phi(x), \mathrm{diag}(\sigma_\phi^2(x))\right)$:
$$D_{\mathrm{KL}} = \frac{1}{2} \sum_j \left(\mu_j^2 + \sigma_j^2 - \log \sigma_j^2 - 1\right).$$
Gradient computation. The reconstruction term cannot be differentiated through the sampling operation directly. The reparameterisation trick (Section03) writes $z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, enabling:
$$\nabla_\phi\, \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] = \mathbb{E}_{\epsilon}\left[\nabla_\phi \log p_\theta\!\left(x \mid \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon\right)\right].$$
This gradient is computable by backpropagation through the deterministic function $(\phi, \epsilon) \mapsto z$.
9.3 Policy Gradient (REINFORCE)
In reinforcement learning, an agent with policy $\pi_\theta(a|s)$ (parameterised by $\theta$) seeks to maximise the expected return:
$$J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[R(\tau)\right],$$
where $\tau = (s_0, a_0, s_1, a_1, \dots)$ is a trajectory. The REINFORCE gradient is:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[R(\tau) \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right].$$
Derivation via log-derivative trick (score function):
$$\nabla_\theta J = \nabla_\theta \int p_\theta(\tau) R(\tau)\, d\tau = \int p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau)\, R(\tau)\, d\tau = \mathbb{E}_{\tau}\left[R(\tau)\, \nabla_\theta \log p_\theta(\tau)\right].$$
Since $\log p_\theta(\tau) = \log p(s_0) + \sum_t \log \pi_\theta(a_t|s_t) + \sum_t \log p(s_{t+1}|s_t, a_t)$ and the dynamics terms don't depend on $\theta$, we get the REINFORCE formula.
The score function $\nabla_\theta \log \pi_\theta$ appears because we differentiated the log of the probability. The key identity underlying this:
$$\mathbb{E}_{x \sim p_\theta}\left[\nabla_\theta \log p_\theta(x)\right] = 0$$
(differentiate the normalisation condition $\int p_\theta(x)\,dx = 1$ to get $\int \nabla_\theta p_\theta(x)\,dx = 0$, then note $\nabla_\theta p_\theta = p_\theta \nabla_\theta \log p_\theta$).
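The log-derivative trick can be verified on a toy problem that is much simpler than the trajectory setting (the Gaussian family and quadratic "reward" below are illustrative assumptions): for $x \sim \mathcal{N}(\theta, 1)$ and $R(x) = x^2$, we have $\mathbb{E}[R] = \theta^2 + 1$, so the true gradient is $2\theta$, and the score is $x - \theta$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.5
x = rng.normal(theta, 1.0, size=1_000_000)   # x ~ N(theta, 1)

score = x - theta                  # d/dtheta log N(x | theta, 1)
grad_est = np.mean(x**2 * score)   # score-function estimate of d/dtheta E[x^2]
print(grad_est)                    # analytic answer: 2 * theta = 3.0

assert abs(score.mean()) < 0.01    # the key identity: E[score] = 0
```

Because the score has zero mean, any constant baseline can be subtracted from $R$ without biasing the estimate - the variance-reduction device used by practical policy-gradient methods.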
For RLHF. Reinforcement learning from human feedback uses policy gradients to fine-tune language models. The policy is the LLM: where is the context and is the generated token. The reward comes from a reward model trained on human preferences. REINFORCE and its variants (PPO, GRPO) compute gradients of with respect to the LLM parameters.
9.4 Adam Optimizer as Moment Tracking
The Adam optimizer (Kingma & Ba, 2014) maintains exponential moving averages of the first and second moments of the gradient:
$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2,$$
where $g_t$ is the gradient at step $t$ (and $g_t^2$ is elementwise).
Bias correction. Since $m_0 = v_0 = 0$, the early estimates are biased toward zero. The debiased estimates are:
$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}.$$
Why does this work? $\mathbb{E}[m_t] = (1 - \beta_1^t)\,\mathbb{E}[g]$ (when the gradient is stationary), so dividing by $1 - \beta_1^t$ recovers an unbiased estimate of $\mathbb{E}[g]$.
Parameter update:
$$\theta_t = \theta_{t-1} - \eta\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.$$
The update is the first moment (direction of average gradient) scaled by $1/\sqrt{\hat{v}_t}$ - normalising by the root mean square of the gradient. This is an adaptive learning rate: parameters with large gradient variance get smaller updates (conservative), parameters with small variance get larger updates (aggressive).
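The update rules above fit in a few lines of NumPy. This is a minimal sketch - the hyperparameters are the common defaults and the target function $f(\theta) = (\theta - 3)^2$ is an illustrative choice:

```python
import numpy as np

def adam_minimise(grad, theta, steps=500, lr=0.1,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    m = np.zeros_like(theta)                    # first-moment EMA
    v = np.zeros_like(theta)                    # second-moment EMA
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2      # elementwise square
        m_hat = m / (1 - beta1**t)              # bias-corrected first moment
        v_hat = v / (1 - beta2**t)              # bias-corrected second moment
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# minimise f(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3)
theta = adam_minimise(lambda th: 2 * (th - 3.0), np.array([0.0]))
print(theta)   # approaches the minimiser 3.0
```

Note that early steps have magnitude close to `lr` regardless of the gradient's scale, because $\hat{m}_t/\sqrt{\hat{v}_t} \approx \pm 1$ when the gradient keeps a constant sign - the "unitless" step property that makes Adam robust to gradient scaling.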
Connection to Fisher Information. When the gradients are scores of a probabilistic model (which have zero mean), the diagonal of the Fisher information matrix is $F_{ii} = \mathbb{E}[g_i^2]$. Adam's $\hat{v}_t$ approximates this diagonal Fisher, making Adam an approximate natural gradient method.
AdamW. Standard Adam conflates weight decay and L2 regularisation (they are equivalent for SGD but not for adaptive methods). AdamW decouples them:
$$\theta_t = \theta_{t-1} - \eta\left(\frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} + \lambda\, \theta_{t-1}\right).$$
The last term is direct weight decay - not affected by the adaptive scaling. This is the current default for training large language models.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | $\mathbb{E}[g(X)] = g(\mathbb{E}[X])$ (always) | Only true when $g$ is linear. For convex $g$: $\mathbb{E}[g(X)] \ge g(\mathbb{E}[X])$ (Jensen). For $g(x) = x^2$: $\mathbb{E}[X^2] \ge (\mathbb{E}[X])^2$. | Apply LOTUS directly to compute $\mathbb{E}[g(X)]$. |
| 2 | Assuming $\mathbb{E}[XY] = \mathbb{E}[X]\,\mathbb{E}[Y]$ without checking independence | True only for independent (or at least uncorrelated) $X, Y$. Dependent variables violate this. | Check $\mathrm{Cov}(X, Y) = 0$ (uncorrelated) before using this; for the full product rule $\mathbb{E}[g(X)h(Y)] = \mathbb{E}[g(X)]\,\mathbb{E}[h(Y)]$ you need independence. |
| 3 | Confusing variance and standard deviation units | Variance has squared units; std dev has the same units as $X$. Mixing them gives nonsense. | Always track units: $\mathrm{Var}(X)$ (squared), $\sigma_X = \sqrt{\mathrm{Var}(X)}$ (same as $X$). |
| 4 | $\mathrm{Var}(aX + b) = a\,\mathrm{Var}(X) + b$ (wrong) | The constant $b$ shifts location, not spread. Variance is invariant to translation. | $\mathrm{Var}(aX + b) = a^2\,\mathrm{Var}(X)$ - the $b$ vanishes. |
| 5 | $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$ without independence | This only holds when $\mathrm{Cov}(X, Y) = 0$. For positively correlated $X, Y$: variance is larger. | $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$. |
| 6 | Zero covariance implies independence | False in general. The unit disk example: $(X, Y)$ uniform on the disk has $\mathrm{Cov}(X, Y) = 0$, yet the range of $Y$ is determined by $X$. | Use mutual information to test full independence. Covariance only measures linear dependence. |
| 7 | Confusing $\mathbb{E}[Y \mid X = x]$ (a number) with $\mathbb{E}[Y \mid X]$ (a random variable) | $\mathbb{E}[Y \mid X = x]$ is fixed once $x$ is fixed. $\mathbb{E}[Y \mid X]$ is random because $X$ is random. | Be explicit: $g(x) = \mathbb{E}[Y \mid X = x]$; then $g(X)$ is the random variable. |
| 8 | Applying Jensen in the wrong direction for concave $g$ | Jensen: convex $g \Rightarrow \mathbb{E}[g(X)] \ge g(\mathbb{E}[X])$. For $g = \log$ (concave): $\mathbb{E}[\log X] \le \log \mathbb{E}[X]$. | Check the second derivative: $g'' \ge 0$ = convex (Jensen gives $\ge$); $g'' \le 0$ = concave (Jensen gives $\le$). |
| 9 | Assuming the MGF exists for all distributions | Cauchy, Pareto (small tail index), log-normal: MGFs are infinite. Using MGF-based results for these fails. | Use the characteristic function (always exists). Check integrability of $e^{tx}$ against the tails. |
| 10 | Confusing raw moments and central moments | $\mathbb{E}[X^k]$ and $\mathbb{E}[(X - \mu)^k]$ differ unless $\mu = 0$. Kurtosis is built from central moments, not raw moments. | Explicitly write which you mean. For Gaussian $\mathcal{N}(\mu, \sigma^2)$ with $\mu \ne 0$: $\mathbb{E}[X^2] = \mu^2 + \sigma^2$ but $\mathbb{E}[(X - \mu)^2] = \sigma^2$. |
11. Exercises
Exercise 1 * - LOTUS: Expected Value of a Transformation
Let with PDF for .
(a) Compute directly from the definition.
(b) Using LOTUS, compute .
(c) Compute .
(d) Let . Compute using LOTUS. (Hint: .) What is the domain of for which is finite?
Exercise 2 * - Linearity Without Independence
Let and .
(a) Compute , , and and verify linearity.
(b) Compute . What does this tell you about the linear relationship?
(c) Are and independent? Justify rigorously.
(d) Compute using the formula , and verify numerically.
Exercise 3 * - Moments of the Gaussian
For $Z \sim \mathcal{N}(0, 1)$ (standard normal):
(a) Show $\mathbb{E}[Z^{2k+1}] = 0$ for all $k \ge 0$ (odd moments vanish).
(b) Using the MGF $M_Z(t) = e^{t^2/2}$, find $\mathbb{E}[Z^2]$, $\mathbb{E}[Z^4]$, and $\mathbb{E}[Z^6]$ by differentiating.
(c) Verify the general formula $\mathbb{E}[Z^{2k}] = (2k - 1)!! = \dfrac{(2k)!}{2^k\, k!}$.
(d) For $X \sim \mathcal{N}(\mu, \sigma^2)$, compute the skewness $\gamma_1$ and excess kurtosis $\gamma_2$.
Exercise 4 ** - Tower Property in Action
A fair coin is flipped. If heads (prob 1/2), $X \sim \mathrm{Poisson}(\lambda_1)$. If tails (prob 1/2), $X \sim \mathrm{Poisson}(\lambda_2)$, with $\lambda_1 \ne \lambda_2$.
(a) Using the tower property, compute $\mathbb{E}[X]$ (let $C$ be the coin flip).
(b) Using the law of total variance, compute $\mathrm{Var}(X)$.
(c) Write down the marginal PMF of $X$ (it is a mixture of two Poissons). Verify $\mathbb{E}[X]$ directly.
(c) Write down the marginal PMF of (it is a mixture of two Poissons). Verify directly.
(d) AI connection: Mixture-of-experts (MoE) models route tokens to different experts. Each expert has its own output distribution. The total output distribution is a mixture; the tower property computes its moments.
Exercise 5 ** - MGF and Moment Extraction
Let $X \sim \mathrm{Exp}(\lambda)$ with PDF $f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$.
(a) Derive the MGF: show $M_X(t) = \dfrac{\lambda}{\lambda - t}$ for $t < \lambda$.
(b) Use $M_X$ to compute $\mathbb{E}[X]$ and $\mathrm{Var}(X)$ by differentiating.
(c) Show that the sum of $n$ independent exponential variables (same rate $\lambda$) follows $\mathrm{Gamma}(n, \lambda)$ using the MGF multiplicative property.
(d) The chi-squared distribution satisfies $\chi^2_k = \mathrm{Gamma}(k/2, 1/2)$. What is $\mathbb{E}[\chi^2_k]$ and $\mathrm{Var}(\chi^2_k)$?
Exercise 6 ** - Jensen and KL Divergence
(a) Prove that for any probability distribution $p = (p_1, \dots, p_n)$ over $n$ outcomes:
$$H(p) \le \log n,$$
with equality iff $p$ is uniform. (Hint: Apply Jensen to $\log(1/p_i)$ or use the KL divergence to the uniform distribution.)
(b) Show that $D_{\mathrm{KL}}(p \,\|\, q) \ge 0$ using Jensen's inequality applied to $\log$.
(c) Compute $D_{\mathrm{KL}}\left(\mathcal{N}(\mu_1, \sigma_1^2) \,\|\, \mathcal{N}(\mu_2, \sigma_2^2)\right)$ analytically. Verify it equals zero when $\mu_1 = \mu_2$, $\sigma_1 = \sigma_2$.
(d) For the ELBO: show that $\log p_\theta(x) = \mathrm{ELBO} + D_{\mathrm{KL}}\left(q_\phi(z|x) \,\|\, p_\theta(z|x)\right)$, so the gap is the KL divergence to the true posterior.
Exercise 7 *** - Bias-Variance Decomposition
Consider estimating $\mu = \mathbb{E}[X]$ from $n$ iid samples $X_1, \dots, X_n$ with variance $\sigma^2$.
(a) Show that the sample mean $\bar{X}_n$ is unbiased with variance $\sigma^2/n$.
(b) Consider a regularised (shrinkage) estimator $\hat{\mu}_\lambda = \dfrac{1}{1 + \lambda}\bar{X}_n$. Compute its bias and variance as functions of $\lambda$.
(c) The MSE of $\hat{\mu}_\lambda$ is Bias^2 + Variance. Find the $\lambda$ that minimises MSE. (Hint: it depends on $\mu$, $\sigma^2$, and $n$.)
(d) Simulate this: draw 10000 datasets of $n$ samples, compute $\hat{\mu}_\lambda$ for a grid of $\lambda$ values, and plot bias^2, variance, and MSE vs $\lambda$.
Exercise 8 *** - Adam as Moment Estimation
(a) Show that the Adam first-moment estimate satisfies $\mathbb{E}[m_t] = (1 - \beta_1^t)\,\mu_g$ when gradients are iid with mean $\mu_g$ (stationary gradient assumption). Therefore $\hat{m}_t = m_t/(1 - \beta_1^t)$ is unbiased.
(b) Implement a minimal Adam optimizer in NumPy (no PyTorch) and apply it to minimise a simple quadratic loss. Track the first and second moment estimates. Plot $m_t$, $v_t$, and the parameter over 200 steps.
(c) Show that for a quadratic loss with a deterministic (noise-free) gradient $g$, Adam's estimates converge to $\hat{m}_t \to g$ and $\hat{v}_t \to g^2$ (the squared gradient). In this case $\hat{m}_t/\sqrt{\hat{v}_t} \to \mathrm{sign}(g)$: Adam reduces to sign descent with step size $\eta$.
(d) Connect to Fisher information: the Fisher information for $\mathcal{N}(\theta, 1)$ equals 1 (the variance of the score). Natural gradient descent uses the update $\theta \leftarrow \theta - \eta F^{-1} \nabla_\theta \mathcal{L}$. How does Adam's $1/\sqrt{\hat{v}_t}$ scaling approximate this for a diagonal Fisher?
12. Why This Matters for AI (2026 Perspective)
| Concept | AI/LLM Application | 2026 Context |
|---|---|---|
| $\mathbb{E}[\ell]$ - expected loss | Every training objective; cross-entropy, MSE, RLHF reward | All SOTA LLMs (GPT-4, Claude, Gemini) optimise expected log-likelihood; RLHF maximises expected reward from human preference model |
| Linearity of expectation | Gradient of a sum = sum of gradients; minibatch averaging | Distributed training computes gradients on data shards and averages them - justified by linearity |
| Tower property | EM algorithm E-step; policy gradient decomposition | Mixture-of-experts routing in GPT-4, Mixtral uses tower property to compute expected output over router distribution |
| $\mathrm{Var}(X)$ - variance | Uncertainty quantification, BatchNorm/LayerNorm, gradient clipping | LayerNorm is standard in every transformer (BERT, GPT, LLaMA); normalises by mean and variance per token |
| Bias-variance decomposition | Model selection, regularisation, double descent | Understanding why GPT-4-scale models (massively overparameterised) still generalise - double descent regime |
| Jensen's inequality | KL divergence $\ge 0$; ELBO derivation; softmax bounds | VAE encoder (DALL-E 2, Stable Diffusion) optimises ELBO; Jensen is the mathematical foundation |
| MGF / Cumulants | Proof of CLT, Chernoff bounds, tail analysis | Hoeffding/Chernoff bounds used to prove PAC-learning generalisation bounds for language models |
| Adam / moment tracking | Default optimiser for all large-scale training | AdamW trains GPT-4, LLaMA, Claude, Gemini; Adam moment tracking is a direct application of $\mathbb{E}[g]$ and $\mathbb{E}[g^2]$ |
| Score function $\nabla_\theta \log p_\theta$ | REINFORCE, RLHF policy gradient, diffusion model score | Score-based diffusion (Stable Diffusion, DALL-E 3) is defined by the score $\nabla_x \log p_t(x)$; trained by denoising score matching |
| Cauchy-Schwarz | Attention scaling $1/\sqrt{d_k}$, Fisher information bounds | Scaled dot-product attention in every transformer uses $1/\sqrt{d_k}$ scaling derived from the variance of dot products |
| Conditional expectation | Attention output = conditional mean; optimal predictor = $\mathbb{E}[Y \mid X]$ | Transformers approximate $\mathbb{E}[Y \mid \text{context}]$ via attention; mechanistic interpretability studies what conditional distributions each head computes |
| Reparameterisation | VAE encoder, diffusion posterior sampling | Stable Diffusion's latent diffusion uses Gaussian reparameterisation for differentiable sampling through the encoder |
13. Conceptual Bridge
Where We Came From
This section builds directly on the foundations laid in Section01-Section03. The probability spaces and random variable formalism of Section01 provides the objects - PDFs, PMFs, CDFs - over which we compute expectations. The named distributions of Section02 supply the examples whose moments we derive in Section6.2: the mean and variance of the exponential, the mean and variance of the binomial, the mean and variance that define the Gaussian. The joint distribution machinery of Section03 provides the tools for conditional expectation (Section5) and the covariance matrix (Section4): the tower property is essentially iterated marginalisation, and the conditional variance formula is the law of total variance derived via joint distributions.
Where We Are Going
Section Section05 (Concentration Inequalities) takes the moment-bound preview from Section7.5 much further. Markov's inequality (from the mean) and Chebyshev's inequality (from the variance) are the first two results, but Section05 develops exponentially sharper bounds: Hoeffding's inequality (bounded variables), Chernoff bounds (via MGFs), and McDiarmid's inequality (functions of independent variables). These bounds are the mathematical machinery of PAC-learning generalisation theory - they quantify how many training examples are needed to guarantee that the empirical risk is close to the true risk.
Section Section06 (Stochastic Processes) extends expectation to sequences of random variables indexed by time. The Law of Large Numbers proves rigorously that $\bar{X}_n \to \mathbb{E}[X]$ - the sample mean converges to the expectation. The Central Limit Theorem proves that the standardised sample mean converges in distribution to $\mathcal{N}(0, 1)$ - the Gaussian emerges as the universal limit of sums, with proof via MGF or characteristic function techniques introduced here.
POSITION IN CHAPTER 6 CURRICULUM
========================================================================
Section01 Probability Spaces Section02 Distributions
+------------------+ +------------------+
| Kolmogorov axioms| | Named PDFs/PMFs |
| Events, \\sigma-algebra| | Parameters |
| CDF, PDF, PMF | | Relationships |
+--------+---------+ +--------+---------+
| |
+--------------+----------------+
v
Section03 Joint Distributions
+----------------------+
| Marginals |
| Conditionals f(y|x) |
| MVN, Bayes, Chain |
+----------+----------+
|
v
+==============================+
| Section04 Expectation & Moments | <- YOU ARE HERE
| E[X], Var, Cov, MGF |
| Jensen, C-S, LOTUS |
| Bias-Variance, Adam |
+==============+=============+
|
+----------+----------+
v v
Section05 Concentration Section06 Stochastic
Inequalities Processes
+-------------+ +-------------+
| Markov | | LLN: Xbar->E[X]|
| Chebyshev | | CLT: ->N(0,1)|
| Hoeffding | | Gaussian |
| PAC bounds | | processes |
+-------------+ +-------------+
| |
+----------+----------+
v
Section07 Markov Chains
+----------------------+
| Transition matrices |
| Steady state |
| MCMC (Bayes sampling)|
+----------------------+
========================================================================
The conceptual arc through Section04 is: we began with probability as a framework for describing uncertainty (Section01), learned the vocabulary of named distributions (Section02), understood how to reason about multiple random variables jointly (Section03), and now in Section04 we have the tools to summarise distributions with numbers - expectations, variances, covariances, moments - and to bound the gap between reality and our estimates. Every subsequent section will use the expectation operator as a core tool: Section05 to prove tail bounds, Section06 to formalise convergence, Section07 to analyse Markov chain stationary distributions via matrix expectations.
<- Back to Chapter 6: Probability Theory | Next: Concentration Inequalities ->
Appendix A - Worked Examples: Expectation and Moments
A.1 Expected Value of the Geometric Distribution
The geometric distribution counts the number of trials until the first success in a sequence of iid Bernoulli($p$) trials.
PMF: $P(X = k) = (1 - p)^{k-1} p$ for $k = 1, 2, 3, \dots$
Expected value via the definition:
$$\mathbb{E}[X] = \sum_{k=1}^{\infty} k\, (1 - p)^{k-1} p = p \sum_{k=1}^{\infty} k\, q^{k-1},$$
where $q = 1 - p$. Using the identity $\sum_{k=1}^{\infty} k\, q^{k-1} = \dfrac{1}{(1 - q)^2}$:
$$\mathbb{E}[X] = \frac{p}{(1 - q)^2} = \frac{p}{p^2} = \frac{1}{p}.$$
Variance via the second moment: $\mathrm{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2$. Using $\mathbb{E}[X^2] = \dfrac{2 - p}{p^2}$:
$$\mathrm{Var}(X) = \frac{2 - p}{p^2} - \frac{1}{p^2} = \frac{1 - p}{p^2}.$$
AI connection: The geometric distribution models the number of tokens until a specific token (e.g., end-of-sequence) appears, assuming iid generation. In practice tokens are not iid, but the geometric provides a baseline model. With a constant per-token stopping probability $p$, the expected sequence length is $1/p$.
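A quick simulation check of both formulas (the choice $p = 0.2$ is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.2
x = rng.geometric(p, size=1_000_000)  # trials until first success, k = 1, 2, ...

print(x.mean(), 1 / p)                # sample mean vs 1/p = 5.0
print(x.var(), (1 - p) / p**2)        # sample variance vs (1-p)/p^2 = 20.0
```

NumPy's `Generator.geometric` uses the same convention as the text: the count includes the successful trial, so the support starts at 1.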
A.2 Method of Moments Estimation
The method of moments estimates distribution parameters by setting sample moments equal to theoretical moments and solving.
Example: Estimating $(\alpha, \beta)$ for $X \sim \mathrm{Gamma}(\alpha, \beta)$ (shape $\alpha$, rate $\beta$).
Theoretical moments: $\mathbb{E}[X] = \alpha/\beta$ and $\mathrm{Var}(X) = \alpha/\beta^2$.
Given samples $x_1, \dots, x_n$, compute the sample mean $\bar{x}$ and sample variance $s^2$.
Set $\bar{x} = \alpha/\beta$ and $s^2 = \alpha/\beta^2$. Solving:
$$\hat{\alpha} = \frac{\bar{x}^2}{s^2}, \qquad \hat{\beta} = \frac{\bar{x}}{s^2}.$$
For AI: Many Bayesian models require estimating hyperparameters of prior distributions. Method of moments provides fast, closed-form initial estimates that can be refined by maximum likelihood or MCMC. It is also used in moment matching for knowledge distillation: a student model's distribution moments are matched to the teacher's.
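A concrete sketch of the recipe, using the Gamma($\alpha$, $\beta$) family as the worked example (the true parameter values below are made up for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true, beta_true = 3.0, 2.0                         # illustrative ground truth
x = rng.gamma(shape=alpha_true, scale=1 / beta_true,     # NumPy uses scale = 1/rate
              size=500_000)

xbar, s2 = x.mean(), x.var()
alpha_hat = xbar**2 / s2      # solve  xbar = alpha/beta,  s2 = alpha/beta^2
beta_hat = xbar / s2
print(alpha_hat, beta_hat)    # close to (3.0, 2.0)
```

Two sample moments, two equations, two unknowns - no iterative optimisation needed, which is exactly why method-of-moments estimates make good initialisers for maximum likelihood.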
A.3 St. Petersburg Paradox: When Expectation Misleads
The St. Petersburg game: flip a fair coin repeatedly until the first head. If the head appears on flip $k$, win $2^k$ dollars.
Expected winnings:
$$\mathbb{E}[W] = \sum_{k=1}^{\infty} 2^{-k} \cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty.$$
The expected value is infinite! Yet no rational person would pay more than a few dollars to play this game.
Resolution: Rational agents maximise expected utility, not expected monetary value. For a logarithmic utility function $u(w) = \log w$, the expected utility is finite:
$$\mathbb{E}[u(W)] = \sum_{k=1}^{\infty} 2^{-k} \log(2^k) = \log 2 \sum_{k=1}^{\infty} k\, 2^{-k} = 2 \log 2.$$
This is Bernoulli's resolution (1738): diminishing marginal utility. The paradox reveals that infinite expected value is not sufficient for rational choice - bounded utility (or at least finite expected utility) is needed.
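A simulation sketch makes the contrast vivid: the sample-mean payoff never settles (it is dominated by rare, enormous wins, reflecting the infinite expectation), while the mean log-utility converges to $2\log 2 \approx 1.386$.

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.geometric(0.5, size=1_000_000)  # flip index of the first head
winnings = 2.0**k                        # payoff 2^k dollars

print(winnings.mean())                   # unstable: grows with the sample size
print(np.log(winnings).mean())           # settles near 2*log(2) ~ 1.386
```

Rerunning with different seeds or sample sizes changes the first number wildly but barely moves the second - the practical signature of an infinite-mean, finite-log-mean distribution.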
For AI: Reinforcement learning reward design must account for this. An agent with unbounded reward function may take extremely risky actions that have infinite expected reward but almost surely fail. This motivates bounded reward functions and regularisation in RLHF: keeping reward signals within a bounded range prevents policy collapse toward infinite-expectation strategies.
Appendix B - Proofs of Key Identities
B.1 Cauchy-Schwarz via Inner Product
The expectation can be viewed as an inner product $\langle X, Y \rangle = \mathbb{E}[XY]$ in the space $L^2$ of square-integrable random variables. The Cauchy-Schwarz inequality for inner products states $|\langle X, Y \rangle|^2 \le \langle X, X \rangle\, \langle Y, Y \rangle$, which gives $(\mathbb{E}[XY])^2 \le \mathbb{E}[X^2]\,\mathbb{E}[Y^2]$ directly.
This inner product interpretation gives $L^2$ convergence its name: $X_n \to X$ in $L^2$ means $\mathbb{E}[(X_n - X)^2] \to 0$ - convergence in the inner-product (norm) sense.
B.2 Variance as Second Cumulant
The CGF of $X$ is $K_X(t) = \log M_X(t)$. Computing its second derivative:
$$K'(t) = \frac{M'(t)}{M(t)}, \qquad K''(t) = \frac{M''(t)\, M(t) - (M'(t))^2}{M(t)^2}.$$
At $t = 0$: $M(0) = 1$, $M'(0) = \mathbb{E}[X]$, $M''(0) = \mathbb{E}[X^2]$, so:
$$K''(0) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = \mathrm{Var}(X).$$
The variance is precisely the second cumulant $\kappa_2$.
B.3 Fisher Information and the Score
The Fisher information matrix is defined as:
$$F(\theta) = \mathbb{E}_{x \sim p_\theta}\left[\nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^\top\right] = \mathrm{Cov}\left(\nabla_\theta \log p_\theta(x)\right).$$
The second equality uses the fact that the score has zero mean (proved by differentiating $\int p_\theta(x)\,dx = 1$):
$$\mathbb{E}_{x \sim p_\theta}\left[\nabla_\theta \log p_\theta(x)\right] = \int p_\theta(x)\, \frac{\nabla_\theta p_\theta(x)}{p_\theta(x)}\, dx = \nabla_\theta \int p_\theta(x)\, dx = \nabla_\theta 1 = 0.$$
Therefore the Fisher information is the variance (covariance matrix) of the score function $\nabla_\theta \log p_\theta(x)$.
For AI: Fisher information appears in:
- Cramer-Rao bound: $\mathrm{Var}(\hat{\theta}) \ge F(\theta)^{-1}$ - the variance of any unbiased estimator is at least the inverse Fisher information.
- Natural gradient: The natural gradient descent update $\theta \leftarrow \theta - \eta\, F^{-1} \nabla_\theta \mathcal{L}$ moves in the direction of steepest descent in the space of distributions (KL-divergence geometry), rather than parameter space. Adam approximates this with a diagonal Fisher.
- Elastic Weight Consolidation (EWC): Used in continual learning to prevent catastrophic forgetting. The Fisher information diagonal identifies which parameters are important for previous tasks.
B.4 Stein's Lemma
Lemma (Stein, 1972). If $Z \sim \mathcal{N}(0, 1)$ and $g$ is differentiable with $\mathbb{E}|g'(Z)| < \infty$:
$$\mathbb{E}[Z\, g(Z)] = \mathbb{E}[g'(Z)].$$
Proof: Note that the standard normal PDF $\varphi(z)$ satisfies $\varphi'(z) = -z\,\varphi(z)$. Integrating by parts:
$$\mathbb{E}[Z\, g(Z)] = \int z\, g(z)\, \varphi(z)\, dz = -\int g(z)\, \varphi'(z)\, dz = \int g'(z)\, \varphi(z)\, dz = \mathbb{E}[g'(Z)].$$
Special cases:
- $g(z) = z$: $\mathbb{E}[Z^2] = \mathbb{E}[1] = 1$ [ok]
- $g(z) = z^3$: $\mathbb{E}[Z^4] = \mathbb{E}[3Z^2] = 3$ [ok]
For AI: Stein's lemma is the foundation of Stein's identity used in score matching and denoising diffusion models. The score function $s(x) = \nabla_x \log p(x)$ satisfies $\mathbb{E}_p\left[s(x)\, g(x) + \nabla_x g(x)\right] = 0$ (Stein's operator), enabling training by minimising a quadratic loss without computing the intractable normalisation constant of $p$.
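The lemma is easy to verify by Monte Carlo; taking $g(z) = z^3$ as in the second special case, both sides should be close to 3:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(2_000_000)

lhs = np.mean(z * z**3)     # E[Z g(Z)] with g(z) = z^3
rhs = np.mean(3 * z**2)     # E[g'(Z)]
print(lhs, rhs)             # both near E[Z^4] = 3
```

Note that the right-hand side has much lower sampling variance than the left - a variance-reduction effect that Stein-type identities are often exploited for in practice.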
Appendix C - Moment Computations for Common Distributions
This table gives raw moments, central moments, MGF, skewness, and excess kurtosis for the distributions used throughout the course.
| Distribution | $\mathbb{E}[X]$ | $\mathrm{Var}(X)$ | Skewness | Ex. Kurtosis | $M_X(t)$ (domain) |
|---|---|---|---|---|---|
| Bernoulli($p$) | $p$ | $p(1-p)$ | $\frac{1 - 2p}{\sqrt{p(1-p)}}$ | $\frac{1 - 6p(1-p)}{p(1-p)}$ | $1 - p + p e^t$ (all $t$) |
| Binomial($n, p$) | $np$ | $np(1-p)$ | $\frac{1 - 2p}{\sqrt{np(1-p)}}$ | $\frac{1 - 6p(1-p)}{np(1-p)}$ | $(1 - p + p e^t)^n$ (all $t$) |
| Poisson($\lambda$) | $\lambda$ | $\lambda$ | $\lambda^{-1/2}$ | $\lambda^{-1}$ | $e^{\lambda(e^t - 1)}$ (all $t$) |
| Geometric($p$) | $1/p$ | $(1-p)/p^2$ | $\frac{2 - p}{\sqrt{1 - p}}$ | $6 + \frac{p^2}{1 - p}$ | $\frac{p e^t}{1 - (1-p)e^t}$, $t < -\log(1-p)$ |
| Uniform($a, b$) | $(a + b)/2$ | $(b - a)^2/12$ | $0$ | $-6/5$ | $\frac{e^{tb} - e^{ta}}{t(b - a)}$ (all $t$) |
| Normal($\mu, \sigma^2$) | $\mu$ | $\sigma^2$ | $0$ | $0$ | $e^{\mu t + \sigma^2 t^2/2}$ (all $t$) |
| Exponential($\lambda$) | $1/\lambda$ | $1/\lambda^2$ | $2$ | $6$ | $\frac{\lambda}{\lambda - t}$, $t < \lambda$ |
| Gamma($\alpha, \beta$) | $\alpha/\beta$ | $\alpha/\beta^2$ | $2/\sqrt{\alpha}$ | $6/\alpha$ | $\left(\frac{\beta}{\beta - t}\right)^{\alpha}$, $t < \beta$ |
| Beta($\alpha, \beta$) | $\frac{\alpha}{\alpha + \beta}$ | $\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$ | $\frac{2(\beta - \alpha)\sqrt{\alpha+\beta+1}}{(\alpha+\beta+2)\sqrt{\alpha\beta}}$ | complex | No closed form |
| Student-$t$($\nu$) | $0$ ($\nu > 1$) | $\frac{\nu}{\nu - 2}$ ($\nu > 2$) | $0$ ($\nu > 3$) | $\frac{6}{\nu - 4}$ ($\nu > 4$) | Does not exist |
Notes:
- Student-$t$ has no MGF (heavy tails cause $\mathbb{E}[e^{tX}] = \infty$ for all $t \ne 0$).
- Beta distribution skewness is zero iff $\alpha = \beta$ (symmetric); negative when $\alpha > \beta$, positive when $\alpha < \beta$.
- Poisson has equal mean and variance - a property used to test whether count data follows Poisson (overdispersion, variance $>$ mean, means extra variability; e.g., the negative binomial is a better fit).
Appendix D - The Exponential Family and Moments
Many common distributions belong to the exponential family, which has an elegant connection between natural parameters and moments.
D.1 Exponential Family Form
A distribution belongs to the exponential family if its density can be written as:
$$p(x \mid \eta) = h(x)\, \exp\!\left(\eta^\top T(x) - A(\eta)\right),$$
where:
- $\eta$ = natural parameter vector
- $T(x)$ = sufficient statistic vector
- $A(\eta)$ = log-partition function (log-normaliser)
- $h(x)$ = base measure
D.2 Moments from the Log-Partition Function
Theorem. For an exponential family distribution:
$$\nabla_\eta A(\eta) = \mathbb{E}[T(X)], \qquad \nabla^2_\eta A(\eta) = \mathrm{Cov}(T(X)).$$
Proof sketch. Differentiating $A(\eta) = \log \int h(x)\, e^{\eta^\top T(x)}\, dx$ with respect to $\eta$:
$$\nabla_\eta A(\eta) = \frac{\int T(x)\, h(x)\, e^{\eta^\top T(x)}\, dx}{\int h(x)\, e^{\eta^\top T(x)}\, dx} = \mathbb{E}[T(X)].$$
Therefore the gradient of the log-partition function gives the mean of the sufficient statistic. Differentiating again gives the covariance formula.
Examples:
| Distribution | $\eta$ | $T(x)$ | $A(\eta)$ | $h(x)$ |
|---|---|---|---|---|
| Bernoulli($p$) | $\log\frac{p}{1 - p}$ | $x$ | $\log(1 + e^\eta)$ | $1$ |
| Gaussian($\mu, \sigma^2$) | $\left(\frac{\mu}{\sigma^2}, -\frac{1}{2\sigma^2}\right)$ | $(x, x^2)$ | $\frac{\mu^2}{2\sigma^2} + \log \sigma$ | $\frac{1}{\sqrt{2\pi}}$ |
| Poisson($\lambda$) | $\log \lambda$ | $x$ | $e^\eta$ | $\frac{1}{x!}$ |
For AI: The log-partition function $A(\eta)$ is the free energy of the exponential family. Its gradient gives the expected sufficient statistics (moments), its Hessian gives the Fisher information matrix. In variational inference, the ELBO is optimised over the natural parameters of the approximate posterior; the optimal $\eta$ satisfies $\mathbb{E}_q[T(X)] = \mathbb{E}_p[T(X)]$ (moment matching). This is the connection between maximum entropy, sufficient statistics, and the moments derived in Section6.
Appendix E - Convergence in Probability and L^2 Convergence
The sample mean estimates . In what sense does this estimate converge?
E.1 L^2 Convergence (Mean Square Convergence)
$\bar{X}_n$ converges to $\mu$ in mean square ($L^2$) sense:
$$\mathbb{E}\left[(\bar{X}_n - \mu)^2\right] = \frac{\sigma^2}{n} \to 0.$$
This is an immediate consequence of the variance formula for means (Section4.4). The sample mean's MSE decreases at rate $1/n$.
E.2 Weak Law of Large Numbers (Preview)
The Weak LLN (proved in Section06 using characteristic functions or Chebyshev's inequality) states that for iid $X_i$ with finite mean $\mu$:
$$\bar{X}_n \xrightarrow{P} \mu,$$
i.e., $P(|\bar{X}_n - \mu| > \varepsilon) \to 0$ for any $\varepsilon > 0$.
Proof via Chebyshev (when $\sigma^2 < \infty$): $P(|\bar{X}_n - \mu| > \varepsilon) \le \dfrac{\mathrm{Var}(\bar{X}_n)}{\varepsilon^2} = \dfrac{\sigma^2}{n\,\varepsilon^2} \to 0$.
This is a direct application of Chebyshev's inequality (from Section7.5 preview) to the sample mean. The LLN justifies: if we run a neural network many times on iid batches and average the loss estimates, the average converges to the true expected loss.
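The $\sigma^2/n$ rate is easy to see empirically; a sketch using Uniform(0, 1) draws ($\mu = 1/2$, $\sigma^2 = 1/12$ - an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 20_000
for n in [10, 100, 1000]:
    means = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)
    mse = np.mean((means - 0.5) ** 2)            # empirical E[(Xbar - mu)^2]
    print(f"n={n:5d}  MSE={mse:.6f}  sigma^2/n={(1/12)/n:.6f}")
```

Each tenfold increase in $n$ cuts the MSE by a factor of ten, matching the $L^2$ rate stated above.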
-> Full treatment of LLN and CLT: Section06 Stochastic Processes
Appendix F - Conditional Expectation as Projection
The conditional expectation can be understood geometrically as an orthogonal projection in the Hilbert space of square-integrable random variables.
Inner product: $\langle X, Y \rangle = \mathbb{E}[XY]$ on the space $L^2$ of square-integrable random variables.
Projection: $\mathbb{E}[Y \mid \mathcal{G}]$ (conditional expectation on a sub-$\sigma$-algebra $\mathcal{G}$) is the projection of $Y$ onto the closed subspace of $\mathcal{G}$-measurable random variables.
Geometric interpretation: Among all $\mathcal{G}$-measurable (i.e., functions of $X$) approximations to $Y$, the conditional expectation minimises the $L^2$ distance $\mathbb{E}[(Y - g(X))^2]$. This is the projection theorem: the best approximation is the projection, and the residual is orthogonal to every $\mathcal{G}$-measurable random variable:
$$\mathbb{E}\left[(Y - \mathbb{E}[Y \mid X])\, g(X)\right] = 0 \quad \text{for all measurable } g.$$
Consequence: The tower property $\mathbb{E}\left[\mathbb{E}[Y \mid X]\right] = \mathbb{E}[Y]$ is the projection version of the law of total expectation. In a Hilbert space, projecting $Y$ onto a subspace and then taking the "total length" (expectation) equals the total length of $Y$.
For AI: This projection view clarifies why neural networks trained with MSE loss approximate : the network is learning the projection of onto the subspace of functions representable by the architecture. Deeper networks can represent larger subspaces, hence better approximations to the conditional expectation.
Appendix G - Worked Problems: Moments and Inequalities
G.1 Computing the Moments of the Beta Distribution
The Beta($\alpha, \beta$) distribution has PDF $f(x) = \dfrac{x^{\alpha - 1}(1 - x)^{\beta - 1}}{B(\alpha, \beta)}$ on $[0, 1]$, where $B(\alpha, \beta) = \dfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)}$.
Raw moment: Using the Beta function definition:
$$\mathbb{E}[X^k] = \frac{B(\alpha + k, \beta)}{B(\alpha, \beta)} = \frac{\Gamma(\alpha + k)\,\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\alpha + \beta + k)}.$$
First moment: $\mathbb{E}[X] = \dfrac{\alpha}{\alpha + \beta}$.
Second moment: $\mathbb{E}[X^2] = \dfrac{\alpha(\alpha + 1)}{(\alpha + \beta)(\alpha + \beta + 1)}$.
Variance:
$$\mathrm{Var}(X) = \frac{\alpha(\alpha + 1)}{(\alpha + \beta)(\alpha + \beta + 1)} - \frac{\alpha^2}{(\alpha + \beta)^2} = \frac{\alpha\beta}{(\alpha + \beta)^2(\alpha + \beta + 1)}.$$
AI connection: The Beta distribution is the conjugate prior for the Bernoulli/Binomial likelihood. After observing $s$ successes and $f$ failures, the posterior is Beta($\alpha + s$, $\beta + f$). The posterior mean $\dfrac{\alpha + s}{\alpha + \beta + s + f}$ is a weighted average of the prior mean $\dfrac{\alpha}{\alpha + \beta}$ and the sample mean $\dfrac{s}{s + f}$. As $s + f \to \infty$, the posterior mean converges to the sample mean (data dominates prior).
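The mean and variance formulas can be spot-checked by Monte Carlo (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 5.0                                   # illustrative parameters
x = rng.beta(a, b, size=1_000_000)

mean_th = a / (a + b)                             # 2/7 ~ 0.2857
var_th = a * b / ((a + b) ** 2 * (a + b + 1))     # 10/392 ~ 0.0255
print(x.mean(), mean_th)
print(x.var(), var_th)
```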
G.2 Proving by Jensen
For a probability distribution $p = (p_1, \dots, p_n)$ with $p_i \ge 0$ and $\sum_i p_i = 1$, the Shannon entropy is $H(p) = -\sum_i p_i \log p_i$.
Proof that $H(p) \le \log n$:
$$H(p) = \sum_i p_i \log \frac{1}{p_i} = \mathbb{E}_{i \sim p}\left[\log \frac{1}{p_i}\right].$$
Apply Jensen's inequality with the concave function $\log$:
$$H(p) \le \log \mathbb{E}_{i \sim p}\left[\frac{1}{p_i}\right] = \log \sum_i p_i \cdot \frac{1}{p_i} = \log n.$$
Equality holds iff $1/p_i$ is constant for all $i$ (Jensen with equality iff the argument is constant), i.e., $p_i = 1/n$ (uniform).
Alternative proof via KL divergence:
$$D_{\mathrm{KL}}(p \,\|\, u) = \sum_i p_i \log \frac{p_i}{1/n} = \log n - H(p) \ge 0,$$
where $u$ is uniform. KL non-negativity gives $H(p) \le \log n$.
G.3 Adam Bias Correction: Full Derivation
At step $t$, the Adam first-moment accumulator is:
$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad m_0 = 0.$$
Unrolling from $m_0 = 0$:
$$m_t = (1 - \beta_1) \sum_{i=1}^{t} \beta_1^{t-i}\, g_i.$$
Taking expectations (assuming iid gradients with constant mean $\mu_g$):
$$\mathbb{E}[m_t] = (1 - \beta_1)\, \mu_g \sum_{i=1}^{t} \beta_1^{t-i} = (1 - \beta_1^t)\, \mu_g.$$
The bias is $\mathbb{E}[m_t] - \mu_g = -\beta_1^t\, \mu_g$, which decays to zero geometrically. The bias-corrected estimate $\hat{m}_t = m_t / (1 - \beta_1^t)$ satisfies $\mathbb{E}[\hat{m}_t] = \mu_g$ (unbiased).
Similarly for $v_t$ with $\beta_2$, and the debiased estimate $\hat{v}_t = v_t / (1 - \beta_2^t)$ estimates $\mathbb{E}[g^2]$ unbiasedly.
In early training ($t$ small), $1 - \beta_1^t$ is small, so the correction greatly amplifies $m_t$. Without bias correction, Adam would take tiny steps at the start because $m_1 = (1 - \beta_1)\, g_1$ after one step (only $0.1\, g_1$ with the default $\beta_1 = 0.9$). With bias correction: $\hat{m}_1 = m_1 / (1 - \beta_1) = g_1$ - the first step uses the actual gradient.
G.4 Conditional Expectation: Gaussian Case
Let $(X, Y)$ be bivariate normal with means $\mu_X, \mu_Y$, variances $\sigma_X^2, \sigma_Y^2$, and correlation $\rho$.
Conditional distribution (from Section03 Schur complement formula):
$$Y \mid X = x \;\sim\; \mathcal{N}\!\left(\mu_Y + \rho \frac{\sigma_Y}{\sigma_X}(x - \mu_X),\; (1 - \rho^2)\,\sigma_Y^2\right).$$
Therefore:
$$\mathbb{E}[Y \mid X] = \mu_Y + \rho \frac{\sigma_Y}{\sigma_X}(X - \mu_X), \qquad \mathrm{Var}(Y \mid X) = (1 - \rho^2)\,\sigma_Y^2.$$
Verification via law of total variance:
$$\mathrm{Var}(Y) = \mathbb{E}\left[\mathrm{Var}(Y|X)\right] + \mathrm{Var}\left(\mathbb{E}[Y|X]\right) = (1 - \rho^2)\,\sigma_Y^2 + \rho^2 \sigma_Y^2 = \sigma_Y^2. \;\checkmark$$
The variance of the conditional mean contributes $\rho^2 \sigma_Y^2$ (explained variance), and the within-group variance contributes $(1 - \rho^2)\,\sigma_Y^2$ (unexplained variance). The fraction $\rho^2$ of $\mathrm{Var}(Y)$ explained by $X$ is the coefficient of determination $R^2$ of linear regression.
Appendix H - Notation Summary
| Symbol | Meaning | First defined |
|---|---|---|
| $\mathbb{E}[X]$ | Expected value of $X$ | Section2.1 |
| $\mu$ or $\mu_X$ | Mean (expected value) | Section2.1 |
| $\mathrm{Var}(X)$ or $\sigma_X^2$ | Variance of $X$ | Section3.1 |
| $\sigma_X$ | Standard deviation of $X$ | Section3.2 |
| $\mu'_k = \mathbb{E}[X^k]$ | $k$-th raw moment | Section3.3 |
| $\mu_k = \mathbb{E}[(X - \mu)^k]$ | $k$-th central moment | Section3.3 |
| $\gamma_1$ | Skewness | Section3.4 |
| $\gamma_2$ | Excess kurtosis | Section3.5 |
| $\mathrm{Cov}(X, Y)$ | Covariance | Section4.1 |
| $\rho_{XY}$ | Pearson correlation | Section4.2 |
| $\mathbb{E}[Y \mid X = x]$ | Conditional expectation (function of $x$) | Section5.1 |
| $\mathbb{E}[Y \mid X]$ | Conditional expectation (random variable) | Section5.1 |
| $\mathrm{Var}(Y \mid X)$ | Conditional variance | Section5.3 |
| $M_X(t)$ | Moment generating function | Section6.1 |
| $\varphi_X(t)$ | Characteristic function | Section6.4 |
| $K_X(t)$ | Cumulant generating function | Section6.5 |
| $\kappa_k$ | $k$-th cumulant | Section6.5 |
| $\mathcal{L}(\theta, \phi)$ | ELBO (evidence lower bound) | Section9.2 |
| $D_{\mathrm{KL}}(p \Vert q)$ | KL divergence | Section7.2 |
| $H(p)$ | Shannon entropy | Section9.1 |
Appendix I - Quick Reference: Key Formulas
Expectation: $\mathbb{E}[X] = \sum_x x\, p(x)$ (discrete), $\mathbb{E}[X] = \int x\, f(x)\, dx$ (continuous); LOTUS: $\mathbb{E}[g(X)] = \int g(x)\, f(x)\, dx$.
Variance: $\mathrm{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2$; $\mathrm{Var}(aX + b) = a^2\, \mathrm{Var}(X)$.
Conditional expectation: tower property $\mathbb{E}\left[\mathbb{E}[Y|X]\right] = \mathbb{E}[Y]$; law of total variance $\mathrm{Var}(Y) = \mathbb{E}[\mathrm{Var}(Y|X)] + \mathrm{Var}(\mathbb{E}[Y|X])$.
MGF: $M_X(t) = \mathbb{E}[e^{tX}]$; $\mathbb{E}[X^k] = M_X^{(k)}(0)$; $M_{X+Y}(t) = M_X(t)\, M_Y(t)$ for independent $X, Y$.
Jensen's inequality: $g$ convex $\Rightarrow \mathbb{E}[g(X)] \ge g(\mathbb{E}[X])$; reversed for concave $g$.
Cauchy-Schwarz: $(\mathbb{E}[XY])^2 \le \mathbb{E}[X^2]\, \mathbb{E}[Y^2]$.
Bias-Variance: expected error $=$ Bias$^2$ $+$ Variance $+$ Noise ($\sigma^2$).
Adam: $m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$; $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$; $\hat{m}_t = m_t/(1 - \beta_1^t)$; $\hat{v}_t = v_t/(1 - \beta_2^t)$; $\theta_t = \theta_{t-1} - \eta\, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)$.
Appendix J - Common Mistakes: Extended Examples
J.1 Jensen's Direction: A Costly Error
One of the most frequent mistakes when applying Jensen's inequality is applying it in the wrong direction. The rule is simple: for convex $g$, $\mathbb{E}[g(X)] \ge g(\mathbb{E}[X])$; for concave $g$, the inequality reverses.
Common confusion: $\mathbb{E}[X^2]$ vs. $(\mathbb{E}[X])^2$, and $\mathbb{E}[\sqrt{X}]$ vs. $\sqrt{\mathbb{E}[X]}$.
- $x^2$ is convex ($g'' = 2 > 0$). Jensen: $\mathbb{E}[X^2] \ge (\mathbb{E}[X])^2$. Equivalently, $\mathrm{Var}(X) \ge 0$. [ok]
- $\sqrt{x}$ is concave ($g'' < 0$). Jensen: $\mathbb{E}[\sqrt{X}] \le \sqrt{\mathbb{E}[X]}$ for $X \ge 0$.
A model that minimises $\mathbb{E}[\sqrt{L}]$ is NOT the same as one minimising $\sqrt{\mathbb{E}[L]}$. The former cares about the average square-root loss, the latter about the square root of the average loss. In practice this distinction matters for robust loss functions.
Common confusion: $\mathbb{E}[\log X]$ vs. $\log \mathbb{E}[X]$.
- $\log$ is concave -> $\mathbb{E}[\log X] \le \log \mathbb{E}[X]$. This is the key inequality behind the ELBO derivation.
- $e^x$ is convex -> $\mathbb{E}[e^{tX}] \ge e^{t\,\mathbb{E}[X]}$. This is the key inequality for bounding the MGF.
The ELBO derivation applies Jensen with $\log$ (concave): $\log \mathbb{E}_q[w(z)] \ge \mathbb{E}_q[\log w(z)]$ where $w(z) = p_\theta(x, z)/q_\phi(z|x)$. Students often try to apply it in the wrong direction, getting an upper bound instead of a lower bound.
J.2 The Tower Property Subtlety: Nested Conditioning
The tower property states $\mathbb{E}\big[\mathbb{E}[Y \mid X]\big] = \mathbb{E}[Y]$. A more general version for nested conditioning: for $\sigma$-algebras $\mathcal{H} \subseteq \mathcal{G}$, $\mathbb{E}\big[\mathbb{E}[Y \mid \mathcal{G}] \,\big|\, \mathcal{H}\big] = \mathbb{E}[Y \mid \mathcal{H}]$.
Conditioning an already-conditioned quantity on less information keeps the coarser conditioning.
Example: Let $T$ be a sufficient statistic for $\theta$ given data $X$. Then: $\mathbb{E}\big[\mathbb{E}[\hat{\theta} \mid T]\big] = \mathbb{E}[\hat{\theta}]$,
where $\tilde{\theta} = \mathbb{E}[\hat{\theta} \mid T]$ is the Rao-Blackwellised estimator. The outer expectation equals the original (tower property), but the Rao-Blackwellised estimator has smaller variance. This is NOT circular - it is the statement that smoothing $\hat{\theta}$ by conditioning on $T$ doesn't change its mean.
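A simulated sketch of this effect, assuming a $\mathcal{N}(\theta, 1)$ sample where the crude unbiased estimator is $X_1$ and the sufficient statistic is the sample mean $\bar{X}$ (so $\mathbb{E}[X_1 \mid \bar{X}] = \bar{X}$ by symmetry):

```python
# Rao-Blackwellisation demo: same mean (tower property), smaller variance.
import numpy as np

rng = np.random.default_rng(2)
theta, n, trials = 3.0, 10, 200_000
samples = rng.normal(theta, 1.0, size=(trials, n))

crude = samples[:, 0]          # X_1: unbiased but ignores most of the data
rb = samples.mean(axis=1)      # E[X_1 | X-bar] = X-bar, the RB estimator

print(crude.mean(), rb.mean())  # both ≈ theta = 3.0
print(crude.var(), rb.var())    # ≈ 1.0 vs ≈ 0.1 (variance divided by n)
```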
J.3 Correlation Does Not Imply Causation - A Statistical View
Zero correlation means $\mathrm{Cov}(X, Y) = 0$ - no linear relationship. But:
- There may be nonlinear dependence (Section4.3 example: $Y = X^2$ with $X$ symmetric about $0$).
- Even positive correlation may arise from a common cause (confounding): if $Z \to X$ and $Z \to Y$ (fork structure from Section03), then $X$ and $Y$ are correlated even if neither causes the other.
In ML: the correlation between model predictions and labels on the test set measures linear predictive ability, not causal understanding. A model can achieve high correlation while exploiting spurious features (shortcuts). For example, language models correlate "hospital" with "disease" not through causal understanding but through co-occurrence patterns.
Conditional independence as the resolution: If $Z$ is the common cause (confounder), $X$ and $Y$ may be conditionally independent given $Z$: $X \perp\!\!\!\perp Y \mid Z$ even though $X \not\perp\!\!\!\perp Y$. Adjusting for confounders (by conditioning, or via instrumental variables) is the statistical approach to causal inference.
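The fork structure can be simulated directly. A sketch with an assumed linear confounder model ($X = Z + \text{noise}$, $Y = Z + \text{noise}$); "conditioning on $Z$" is implemented here by residualising the linear effect of $Z$:

```python
# Z -> X, Z -> Y: marginally correlated, conditionally independent given Z.
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
z = rng.normal(size=n)               # confounder
x = z + rng.normal(size=n)           # X = Z + noise
y = z + rng.normal(size=n)           # Y = Z + noise (no X -> Y arrow)

marginal_corr = np.corrcoef(x, y)[0, 1]         # ≈ 0.5 via the common cause
x_res, y_res = x - z, y - z                     # remove Z's linear effect
partial_corr = np.corrcoef(x_res, y_res)[0, 1]  # ≈ 0 after adjustment

print(marginal_corr, partial_corr)
```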
Appendix K - Information-Theoretic View of Moments
K.1 Entropy as Negative Expected Log-Probability
The Shannon entropy of a discrete distribution $p$ is:
$$H(p) = -\sum_x p(x) \log p(x) = -\mathbb{E}\big[\log p(X)\big].$$
This is simply minus the expected log-probability. For a continuous distribution with PDF $f$, the differential entropy is:
$$h(f) = -\int f(x) \log f(x)\, dx = -\mathbb{E}\big[\log f(X)\big].$$
The Gaussian maximises differential entropy. Among all distributions on $\mathbb{R}$ with fixed mean $\mu$ and variance $\sigma^2$, the Gaussian maximises differential entropy: $h(f) \le \tfrac{1}{2} \log(2\pi e \sigma^2)$.
Proof sketch (via KL divergence): For any distribution $f$ with mean $\mu$ and variance $\sigma^2$, let $g = \mathcal{N}(\mu, \sigma^2)$.
Since $D_{\mathrm{KL}}(f \,\Vert\, g) = -h(f) - \mathbb{E}_f[\log g(X)] \ge 0$, and $\mathbb{E}_f[\log g(X)] = -\tfrac{1}{2}\log(2\pi\sigma^2) - \tfrac{1}{2\sigma^2}\,\mathbb{E}_f[(X - \mu)^2] = -\tfrac{1}{2}\log(2\pi\sigma^2) - \tfrac{1}{2} = -h(g)$ (since $f$ has variance $\sigma^2$):
Therefore $h(g) - h(f) \ge 0$, i.e., $h(f) \le h(g) = \tfrac{1}{2}\log(2\pi e \sigma^2)$. Equality iff $f = g$ (since $D_{\mathrm{KL}} = 0$ iff the distributions are equal).
AI connection: This maximum entropy property explains why the Gaussian prior is so common in Bayesian machine learning: given a known mean and variance (from domain knowledge), the Gaussian is the least informative (maximum entropy) prior consistent with those constraints. It makes the fewest additional assumptions about the distribution.
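A quick closed-form comparison at fixed variance $\sigma^2 = 1$: the Gaussian's differential entropy $\tfrac{1}{2}\log(2\pi e) \approx 1.419$ nats beats that of a uniform distribution with the same variance ($\mathrm{Uniform}(-\sqrt{3}, \sqrt{3})$, entropy $\log(2\sqrt{3}) \approx 1.242$ nats). The uniform comparison is an illustrative choice, not from the text:

```python
# Differential entropies (in nats) of two unit-variance distributions.
import math

h_gauss = 0.5 * math.log(2 * math.pi * math.e)   # N(0, 1)
a = math.sqrt(3.0)                               # Uniform(-a, a) has var a^2/3 = 1
h_unif = math.log(2 * a)

print(h_gauss, h_unif)   # Gaussian wins, as max-entropy predicts
```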
K.2 Mutual Information as Expected KL Divergence
The mutual information between $X$ and $Y$ is:
$$I(X; Y) = D_{\mathrm{KL}}\big(p(x, y) \,\Vert\, p(x)\,p(y)\big) = \mathbb{E}_{p(x, y)}\!\left[\log \frac{p(x, y)}{p(x)\, p(y)}\right].$$
This is the KL divergence between the joint distribution and the product of marginals. Since $D_{\mathrm{KL}} \ge 0$: $I(X; Y) \ge 0$, with equality iff $X \perp\!\!\!\perp Y$.
Relationship to conditional entropy:
$$I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X),$$
where $H(X \mid Y) = \mathbb{E}_Y\big[H(X \mid Y = y)\big]$ is the conditional entropy (the tower property applied to entropy).
For AI: Mutual information is the gold standard for measuring statistical dependence. It captures all forms of dependence (linear and nonlinear), unlike correlation. Contrastive learning methods (SimCLR, CLIP) can be viewed as maximising a lower bound on the mutual information between different views/modalities - encouraging representations to capture shared information. Information bottleneck methods (used for analysing neural networks) study the trade-off between compressing the input $X$ into a representation $Z$ (low $I(X; Z)$) and preserving information about the target $Y$ (high $I(Z; Y)$).
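For a discrete joint, mutual information is a finite sum. A sketch with an illustrative 2x2 joint table (nats throughout), also checking that the product of marginals gives exactly zero:

```python
# I(X;Y) = sum p(x,y) * log( p(x,y) / (p(x)p(y)) ) for a small joint table.
import numpy as np

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])            # dependent binary pair
p_x = p_xy.sum(axis=1, keepdims=True)    # marginal of X
p_y = p_xy.sum(axis=0, keepdims=True)    # marginal of Y

mi = np.sum(p_xy * np.log(p_xy / (p_x * p_y)))
print(mi)        # > 0: X and Y are dependent

indep = p_x * p_y                        # a genuinely independent joint
mi_indep = np.sum(indep * np.log(indep / (p_x * p_y)))
print(mi_indep)  # exactly 0
```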
Appendix L - The Reparameterisation Trick: Full Mathematical Treatment
The reparameterisation trick is a technique for computing gradients of expectations when the distribution depends on the parameters we differentiate with respect to.
L.1 The Problem: Gradient Through Sampling
We want $\nabla_\theta\, \mathbb{E}_{z \sim p_\theta(z)}[f(z)]$, where $p_\theta$ is a distribution parameterised by $\theta$. Naively:
$$\nabla_\theta \int f(z)\, p_\theta(z)\, dz = \int f(z)\, \nabla_\theta p_\theta(z)\, dz.$$
The integrand now involves $\nabla_\theta p_\theta(z)$, which is not a probability density - the integration measure changes with $\theta$ - so the right-hand side is no longer an expectation that can be estimated directly by sampling from $p_\theta$.
L.2 REINFORCE (Score Function Estimator)
The score function trick uses $\nabla_\theta p_\theta(z) = p_\theta(z)\, \nabla_\theta \log p_\theta(z)$:
$$\nabla_\theta\, \mathbb{E}_{p_\theta}[f(z)] = \mathbb{E}_{p_\theta}\big[f(z)\, \nabla_\theta \log p_\theta(z)\big].$$
This is computable by Monte Carlo: sample $z_i \sim p_\theta$, compute $f(z_i)\, \nabla_\theta \log p_\theta(z_i)$, and average over samples. However, this estimator has high variance because $f(z)$ can be large in magnitude and $\nabla_\theta \log p_\theta(z)$ can vary greatly.
L.3 Reparameterisation
When $p_\theta$ is a location-scale family (or more generally, when there exists a deterministic transformation $z = g_\theta(\epsilon)$ with $\epsilon \sim p(\epsilon)$ free of $\theta$):
For the Gaussian: $z = \mu + \sigma \epsilon$, $\epsilon \sim \mathcal{N}(0, 1)$.
Then:
$$\mathbb{E}_{z \sim p_\theta}[f(z)] = \mathbb{E}_{\epsilon \sim p(\epsilon)}\big[f(g_\theta(\epsilon))\big]$$
and:
$$\nabla_\theta\, \mathbb{E}_{p_\theta}[f(z)] = \mathbb{E}_{\epsilon}\big[\nabla_\theta f(g_\theta(\epsilon))\big].$$
The gradient now flows through the deterministic function $g_\theta$, enabling automatic differentiation. The variance of this estimator is typically much lower than REINFORCE's because $\nabla_z f$ is often smoother than $f \cdot \nabla_\theta \log p_\theta$.
Jacobian of the transform: for $z = \mu + \sigma \epsilon$, $\partial z / \partial \mu = 1$ and $\partial z / \partial \sigma = \epsilon$.
So the gradient with respect to $\mu$ is $\mathbb{E}_\epsilon[f'(z)]$ and with respect to $\sigma$ is $\mathbb{E}_\epsilon[f'(z)\, \epsilon]$.
L.4 Why Lower Variance?
Consider $f(z) = \sin(z)$ with $z \sim \mathcal{N}(\mu, 1)$.
REINFORCE gradient: per-sample estimate $\sin(z)\,(z - \mu)$. The product can be large in magnitude because $(z - \mu)$ is unbounded.
Reparameterisation gradient: per-sample estimate $\nabla_\mu \sin(\mu + \epsilon) = \cos(\mu + \epsilon)$. The gradient is $\cos(z)$, bounded in $[-1, 1]$, much more stable.
In general, reparameterisation produces gradients of magnitude $O(|\nabla_z f|)$ when $f$ is smooth, while REINFORCE gradients scale with $|f(z)| \cdot \|\nabla_\theta \log p_\theta(z)\|$, which can be large.
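The variance gap is easy to exhibit by Monte Carlo. A sketch using the simpler test function $f(z) = z^2$ with $z \sim \mathcal{N}(\mu, 1)$ (an assumption chosen so the true gradient, $2\mu$, and both estimator variances are known in closed form):

```python
# Comparing the two unbiased estimators of d/dmu E[f(z)], z ~ N(mu, 1),
# for f(z) = z^2. True gradient = 2*mu = 2.
import numpy as np

rng = np.random.default_rng(4)
mu, n = 1.0, 1_000_000
eps = rng.normal(size=n)
z = mu + eps                        # reparameterised sample z = g_mu(eps)

reinforce = (z ** 2) * (z - mu)     # f(z) * d/dmu log N(z; mu, 1)
reparam = 2.0 * z                   # d/dmu f(mu + eps) = f'(z) * dz/dmu

print(reinforce.mean(), reparam.mean())  # both ≈ 2.0 (unbiased)
print(reinforce.var(), reparam.var())    # ≈ 30 vs ≈ 4: REINFORCE far noisier
```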
Appendix M - Moments in the Context of Score Matching and Diffusion
M.1 Denoising Score Matching
Diffusion models (DDPM, Score SDE) learn the score function $s_\theta(x, t) \approx \nabla_x \log p_t(x)$ at each noise level $t$. The training objective is:
$$\mathbb{E}_{t,\, x_0,\, x_t}\Big[\lambda(t)\, \big\| s_\theta(x_t, t) - \nabla_{x_t} \log p(x_t \mid x_0) \big\|^2\Big]$$
where $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon$ and $\epsilon \sim \mathcal{N}(0, I)$.
The conditional score is:
$$\nabla_{x_t} \log p(x_t \mid x_0) = -\frac{x_t - \sqrt{\bar{\alpha}_t}\, x_0}{1 - \bar{\alpha}_t} = -\frac{\epsilon}{\sqrt{1 - \bar{\alpha}_t}}.$$
This is an expectation over the noise schedule. The loss simplifies (up to the weighting $\lambda(t)$) to:
$$\mathbb{E}_{t,\, x_0,\, \epsilon}\big[\| \epsilon_\theta(x_t, t) - \epsilon \|^2\big]$$
which is an MSE loss - an empirical estimate of $\mathbb{E}[\| \epsilon_\theta - \epsilon \|^2]$, where $\epsilon$ is the added noise and $\epsilon_\theta$ is the network's prediction of it.
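The conditional-score identity can be checked numerically: since $x_t - \sqrt{\bar{\alpha}_t}\, x_0 = \sqrt{1 - \bar{\alpha}_t}\, \epsilon$ by construction, the two forms of the score agree exactly. A sketch with an illustrative $\bar{\alpha}_t$ value:

```python
# Check: -(x_t - sqrt(abar)*x0)/(1-abar) == -eps/sqrt(1-abar)
import numpy as np

rng = np.random.default_rng(5)
abar = 0.7                           # example cumulative alpha at some t
x0 = rng.normal(size=1000)           # "clean data"
eps = rng.normal(size=1000)          # injected noise
xt = np.sqrt(abar) * x0 + np.sqrt(1 - abar) * eps

score_a = -(xt - np.sqrt(abar) * x0) / (1 - abar)
score_b = -eps / np.sqrt(1 - abar)

print(np.max(np.abs(score_a - score_b)))  # ≈ 0 up to float rounding
```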
M.2 First Moment of the Reverse Process
The reverse diffusion process gives $p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\, \mu_\theta(x_t, t),\, \sigma_t^2 I\big)$ where:
$$\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(x_t, t)\right).$$
The mean of the reverse step is determined by the predicted noise. The conditional expectation $\mathbb{E}[x_0 \mid x_t]$ is the denoised estimate of $x_0$:
$$\hat{x}_0 = \frac{x_t - \sqrt{1 - \bar{\alpha}_t}\, \epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}.$$
This is LOTUS applied to the reparameterised relationship: the expected clean image given the noisy image is a simple linear function of the predicted noise, evaluated using the noisy image.
Appendix N - Worked Problems: Bias-Variance and Regularisation
N.1 Ridge Regression Bias-Variance Tradeoff
Setup. Data: $y = X\beta + \varepsilon$, where $X \in \mathbb{R}^{n \times p}$ is fixed and $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$.
Ridge estimator: $\hat{\beta}_\lambda = (X^\top X + \lambda I)^{-1} X^\top y$.
Let $A_\lambda = (X^\top X + \lambda I)^{-1} X^\top X$.
Bias: $\mathbb{E}[\hat{\beta}_\lambda] - \beta = (A_\lambda - I)\,\beta$.
Since $A_\lambda - I = -\lambda\, (X^\top X + \lambda I)^{-1}$:
$\mathrm{Bias}(\hat{\beta}_\lambda) = -\lambda\, (X^\top X + \lambda I)^{-1} \beta$. The bias grows in magnitude with $\lambda$.
Variance: $\mathrm{Var}(\hat{\beta}_\lambda) = \sigma^2\, (X^\top X + \lambda I)^{-1} X^\top X\, (X^\top X + \lambda I)^{-1}$.
As $\lambda \to \infty$: $(X^\top X + \lambda I)^{-1} \to 0$, so the variance $\to 0$. As $\lambda \to 0$: it recovers the OLS variance $\sigma^2 (X^\top X)^{-1}$.
Total MSE: $\mathrm{MSE}(\hat{\beta}_\lambda) = \big\|\mathrm{Bias}(\hat{\beta}_\lambda)\big\|^2 + \mathrm{tr}\,\mathrm{Var}(\hat{\beta}_\lambda)$.
The optimal $\lambda$ minimises this sum - bias grows, variance shrinks, and there is a sweet spot.
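The two closed-form expressions can be evaluated directly. A sketch with an illustrative random design and coefficient vector (all numbers are assumptions for the demo):

```python
# Closed-form ridge bias^2 and trace(variance) as functions of lambda:
# bias grows monotonically, variance shrinks monotonically.
import numpy as np

rng = np.random.default_rng(6)
n, p, sigma2 = 50, 3, 1.0
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
XtX = X.T @ X

def bias_sq(lam):
    b = -lam * np.linalg.solve(XtX + lam * np.eye(p), beta)
    return float(b @ b)

def var_trace(lam):
    A = np.linalg.inv(XtX + lam * np.eye(p))
    return float(sigma2 * np.trace(A @ XtX @ A))

lams = [0.0, 1.0, 10.0, 100.0]
biases = [bias_sq(l) for l in lams]
variances = [var_trace(l) for l in lams]
print(biases)     # increasing in lambda (exactly 0 at lambda = 0)
print(variances)  # decreasing in lambda (OLS variance at lambda = 0)
```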
N.2 Neural Network Initialisation via Variance Propagation
He initialisation (for ReLU networks) is derived from the bias-variance / variance propagation analysis.
For a layer $z = W x$, $y = \mathrm{ReLU}(z)$, with iid inputs $x_j$ (zero mean, variance $\mathrm{Var}(x)$) and iid zero-mean weights $w_{ij}$ independent of the inputs:
Variance propagation:
$$\mathbb{E}[y_i^2] = c\, n_{\text{in}}\, \mathrm{Var}(w)\, \mathbb{E}[x_j^2]$$
where $c$ depends on the activation. For ReLU: $c = \tfrac{1}{2}$ (since ReLU zeros half of a symmetric distribution).
Therefore $\mathbb{E}[y_i^2] = \tfrac{1}{2}\, n_{\text{in}}\, \mathrm{Var}(w)\, \mathbb{E}[x_j^2]$.
For the variance to stay constant across layers: $\tfrac{1}{2}\, n_{\text{in}}\, \mathrm{Var}(w) = 1$, which requires $\mathrm{Var}(w) = \dfrac{2}{n_{\text{in}}}$.
He initialisation: $w_{ij} \sim \mathcal{N}\big(0,\; 2 / n_{\text{in}}\big)$. This preserves variance through the network during the forward pass, preventing signals (and hence gradients) from vanishing or exploding in deep networks.
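A simulation sketch of this propagation, comparing He scaling $\mathrm{Var}(w) = 2/n$ against a naive $\mathrm{Var}(w) = 1/n$ baseline (the depth, width, and baseline choice are illustrative assumptions):

```python
# Forward-pass second-moment propagation through a deep ReLU stack:
# He scaling keeps E[x^2] roughly constant; 1/n scaling halves it per layer.
import numpy as np

rng = np.random.default_rng(7)
n, depth = 1000, 20
x_he = rng.normal(size=n)
x_naive = x_he.copy()

for _ in range(depth):
    W_he = rng.normal(0.0, np.sqrt(2.0 / n), size=(n, n))
    W_naive = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, n))
    x_he = np.maximum(W_he @ x_he, 0.0)          # ReLU
    x_naive = np.maximum(W_naive @ x_naive, 0.0)

print(np.mean(x_he ** 2))     # stays O(1)
print(np.mean(x_naive ** 2))  # ≈ (1/2)^20: the signal has vanished
```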
Appendix O - Self-Assessment Checklist
Use this checklist after studying the section to identify gaps before proceeding to Section05.
Core Mechanics (Should be fluent)
- I can compute $\mathbb{E}[X]$ from a PMF or PDF using the definition.
- I can apply LOTUS: $\mathbb{E}[g(X)] = \sum_x g(x)\,p(x)$ (or the integral analogue) without finding the distribution of $g(X)$.
- I know that linearity $\mathbb{E}[aX + bY] = a\,\mathbb{E}[X] + b\,\mathbb{E}[Y]$ holds without independence.
- I can compute $\mathrm{Var}(X)$ using $\mathbb{E}[X^2] - (\mathbb{E}[X])^2$.
- I know $\mathrm{Var}(X + c) = \mathrm{Var}(X)$ (a shift doesn't change variance).
- I can compute skewness $\mu_3 / \sigma^3$ and excess kurtosis $\mu_4 / \sigma^4 - 3$.
Intermediate Theory (Should understand proofs)
- I can state and prove the tower property $\mathbb{E}\big[\mathbb{E}[Y \mid X]\big] = \mathbb{E}[Y]$.
- I can state and prove the law of total variance $\mathrm{Var}(Y) = \mathbb{E}[\mathrm{Var}(Y \mid X)] + \mathrm{Var}(\mathbb{E}[Y \mid X])$.
- I can derive the MGF of the Gaussian, Exponential, and Poisson distributions.
- I can state Jensen's inequality and identify when it applies ($f$ convex vs. concave).
- I can prove $D_{\mathrm{KL}}(p \,\Vert\, q) \ge 0$ using Jensen.
- I can state and prove the Cauchy-Schwarz inequality for expectations.
- I know that $|\rho_{XY}| \le 1$ follows from Cauchy-Schwarz.
- I understand why zero covariance does NOT imply independence (with a counterexample).
Advanced Applications (Should be able to apply)
- I can derive the bias-variance decomposition from first principles.
- I can explain what double descent is and why overparameterised models can generalise.
- I can derive the ELBO using Jensen's inequality.
- I can explain the reparameterisation trick and why it reduces gradient variance.
- I understand Adam as tracking first and second moments with bias correction.
- I can explain why the score function has zero mean: $\mathbb{E}_{p_\theta}\big[\nabla_\theta \log p_\theta(X)\big] = 0$.
Appendix P - Further Reading and References
Textbooks
- Probability and Statistics for Engineering and the Sciences - Jay Devore (2015). Accessible introduction to moments, MGFs, and expectation with engineering examples.
- Probability Theory: The Logic of Science - E.T. Jaynes (2003). Philosophical and technical treatment; excellent on entropy and the maximum entropy principle.
- Pattern Recognition and Machine Learning - Christopher Bishop (2006), Ch. 1-2. The ML-focused treatment of expectations, KL divergence, ELBO, and variational inference.
- Deep Learning - Goodfellow, Bengio, Courville (2016), Ch. 3. Standard reference for probability in the ML context, including the bias-variance tradeoff.
- Probabilistic Machine Learning: An Introduction - Kevin Murphy (2022), Ch. 2-4. Modern treatment with extensive ML applications including Adam, VAEs, and diffusion models.
Papers
- Kingma & Welling (2014). Auto-Encoding Variational Bayes. arXiv:1312.6114. Original VAE paper; derives the ELBO via Jensen and introduces the reparameterisation trick.
- Kingma & Ba (2015). Adam: A Method for Stochastic Optimization. arXiv:1412.6980. Original Adam paper; the moment-tracking interpretation is explicit in the derivation.
- Williams (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning. The REINFORCE algorithm; score function gradient estimator.
- Ioffe & Szegedy (2015). Batch Normalization. arXiv:1502.03167. Moment normalisation of activations; analysis of how variance affects training dynamics.
- Belkin et al. (2019). Reconciling Modern Machine-Learning Practice and the Classical Bias-Variance Trade-Off. PNAS. The double descent phenomenon; formal analysis of why interpolating models can generalise.
- Song et al. (2020). Score-Based Generative Modeling through Stochastic Differential Equations. arXiv:2011.13456. Diffusion models as score matching; the score function is a moment of the distribution.
Summary of Key Results
This section established the expectation operator as the central tool in probability and machine learning. Starting from the LOTUS definition, we derived linearity (which holds without independence), the tower property (iterated expectation), and the law of total variance (which decomposes uncertainty into within-group and between-group components).
The moment hierarchy captures distribution shape: the first moment (mean) locates it; the second (variance) measures spread; the third (skewness) captures asymmetry; the fourth (kurtosis) captures tail weight. Moment generating functions encode all moments in a single analytic function and enable elegant proofs of the reproductive property (sum of independent Gaussians is Gaussian) and cumulant additivity.
Jensen's inequality - the most important inequality in this section - gives the KL divergence its non-negativity, the ELBO its existence as a lower bound, and the bias-variance decomposition its clean structure. Cauchy-Schwarz gives $|\rho_{XY}| \le 1$ and motivates the attention scaling factor $1/\sqrt{d_k}$.
Every major ML method reviewed in Section9 is an application of these results: cross-entropy is expected log-loss, the ELBO is Jensen applied to the marginal likelihood, policy gradient is the score function trick, and Adam is first-and-second-moment tracking with debiasing.
The concepts forward-referenced here - Markov's and Chebyshev's inequalities, the Law of Large Numbers, the Central Limit Theorem - will be fully developed in Section05 and Section06, completing the probabilistic toolkit required for modern ML.
The conceptual unification: expectation is a linear functional on the space of random variables. It maps random variables to real numbers while preserving linear structure. Every computation in this section - variance as $\mathbb{E}[X^2] - (\mathbb{E}[X])^2$, covariance as $\mathbb{E}[XY] - \mathbb{E}[X]\,\mathbb{E}[Y]$, the MGF as $\mathbb{E}[e^{tX}]$, the characteristic function as $\mathbb{E}[e^{itX}]$, the KL divergence as $\mathbb{E}_p[\log(p/q)]$, Adam's first moment as a running estimate of $\mathbb{E}[g_t]$ - is an application of this single linear functional applied to different functions of the random variable.
Understanding this unity transforms the apparent complexity of ML training into a coherent framework: we are always estimating expectations from samples, bounding how far sample estimates stray from true expectations, and choosing parameterisations that make those expectations tractable to compute and differentiate.
<- Back to Chapter 6: Probability Theory | Next: Concentration Inequalities ->
End of Section06/04 - Expectation and Moments
| File | Lines / Cells | Status |
|---|---|---|
| notes.md | 2000+ | [ok] Complete |
| theory.ipynb | 38 cells | [ok] Complete |
| exercises.ipynb | 27 cells | [ok] Complete |