<- Back to Chapter 7: Statistics | Next: Hypothesis Testing ->
"The problem of statistical estimation is one of the most fundamental in all of science: given observations drawn from some unknown process, what can we infer about that process?"
- Sir Ronald A. Fisher, Statistical Methods and Scientific Inference (1956)
Overview
Estimation theory answers the central question of applied statistics: given a finite, noisy sample from an unknown population, how do we recover the population's parameters? This inversion - from data to model - is not merely an academic exercise. It is the mathematical foundation of every machine learning training algorithm ever written. When a language model minimises cross-entropy loss on a text corpus, it is performing maximum likelihood estimation. When a researcher reports model accuracy with error bars, those bars are frequentist confidence intervals. When the natural gradient optimizer adjusts step sizes using curvature information, it is inverting the Fisher information matrix.
The discipline divides into three interconnected pillars. Point estimation asks: given a sample, what single value should we report for the unknown parameter? We want estimators that are unbiased (correct on average), consistent (converge to the truth as sample size grows), and efficient (achieve the minimum possible variance). The Cramer-Rao lower bound establishes a fundamental limit - no unbiased estimator can have variance smaller than the reciprocal of Fisher information, a quantity measuring how much data "inform" the parameter. Maximum likelihood estimation is the workhorse of both classical statistics and modern machine learning: find the parameter value that makes the observed data most probable. It is consistent, asymptotically efficient, and invariant under reparametrisation - properties that make it the default choice across science. Confidence intervals extend point estimates to ranges, quantifying the uncertainty inherent in estimation from finite samples.
This section builds the complete theory systematically, from formal definitions through asymptotic theory, culminating in concrete ML applications: MLE as cross-entropy, Fisher information in natural gradient descent, bootstrap confidence intervals for benchmark evaluation, and Fisher information matrices in elastic weight consolidation for catastrophic forgetting prevention.
Prerequisites
- Sample statistics (mean, variance, covariance matrix) - Section01 Descriptive Statistics
- Expectation, variance, MGF ($\mathbb{E}[X]$, $\operatorname{Var}(X)$, $M_X(t)$) - Ch6 Section04 Expectation and Moments
- Common distributions (Gaussian, Bernoulli, Poisson, Exponential, Beta) - Ch6 Section02 Common Distributions
- Bayes' theorem and conditional distributions - Ch6 Section03 Joint Distributions
- Law of large numbers and CLT - Ch6 Section06 Stochastic Processes
- Matrix inverse, SVD, positive definiteness - Ch3 Advanced Linear Algebra
- Partial derivatives and optimisation conditions - Ch4 Calculus Fundamentals
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive derivations: MLE for Gaussian/Bernoulli/Poisson, Fisher information visualisation, CRB verification, asymptotic normality simulation, bootstrap CI, natural gradient |
| exercises.ipynb | 10 graded exercises from Bernoulli MLE and bias proofs through Fisher information for neural network layers and Chinchilla scaling law estimation |
Learning Objectives
After completing this section, you will:
- Define an estimator as a statistic and explain why it is a random variable with a sampling distribution
- Decompose the MSE of any estimator into bias-squared plus variance, and construct examples demonstrating the trade-off
- State and prove the Cramer-Rao lower bound using the Cauchy-Schwarz inequality
- Compute the Fisher information for Bernoulli, Gaussian, Poisson, and Exponential models
- Derive the MLE for scalar and multivariate Gaussian parameters from first principles
- Prove that MLE of Gaussian variance is biased and state the corrected estimator
- Explain why minimising NLL/cross-entropy loss is equivalent to performing MLE
- Derive confidence intervals using both the pivoting method and the asymptotic normal approximation
- Construct bootstrap confidence intervals and explain when they outperform asymptotic CIs
- State the asymptotic normality theorem for MLE and explain its $1/\sqrt{n}$ convergence rate
- Apply the delta method to derive asymptotic distributions of transformed estimators
- Connect Fisher information to natural gradient descent, K-FAC, and elastic weight consolidation
Table of Contents
- 1. Intuition
- 2. The Formal Estimation Problem
- 3. Properties of Estimators
- 4. Fisher Information and the Cramer-Rao Bound
- 5. Maximum Likelihood Estimation
- 6. Method of Moments
- 7. Confidence Intervals
- 8. Asymptotic Theory
- 9. Applications in Machine Learning
- 10. Common Mistakes
- 11. Exercises
- 12. Why This Matters for AI (2026 Perspective)
- 13. Conceptual Bridge
1. Intuition
1.1 What Is Statistical Estimation?
Suppose you flip a coin 100 times and observe 63 heads. The coin's true bias $p$ - the probability of heads on any single flip - is unknown. What is your best guess for $p$? The obvious answer is $\hat{p} = 63/100 = 0.63$. But why is this the right answer? What makes it "best"? Could a different estimate be better in some sense? And how confident should you be - could the true $p$ be 0.5 (a fair coin) despite observing 63 heads?
These are precisely the questions estimation theory addresses. The statistical estimation problem has three ingredients:
- An unknown parameter $\theta$ governing some probability distribution $p(x; \theta)$
- Observed data $X_1, \dots, X_n$ drawn independently from $p(x; \theta)$
- An estimator $\hat{\theta} = T(X_1, \dots, X_n)$: a function of the data that produces a guess for $\theta$
The key insight, often overlooked by beginners, is that the estimator is a random variable. Before we collect data, $\hat{\theta}$ is uncertain - its value depends on which random sample we happen to observe. If we repeated the experiment with a fresh sample, we would get a different $\hat{\theta}$. The sampling distribution of $\hat{\theta}$ - the distribution of $\hat{\theta}$ across all possible samples - is the object of study. A good estimator has a sampling distribution tightly concentrated around the true $\theta$.
For AI: This perspective matters directly. When you train a neural network on a dataset and obtain weights $\hat{w}$, those weights are a realisation of an estimator - a function of the random training data. Training on a different shuffle or subsample gives a different $\hat{w}$. The variation you see across training runs, seeds, and dataset splits is the sampling distribution of the MLE manifesting in practice.
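To make the sampling distribution tangible, here is a minimal NumPy sketch (the coin-flip setup mirrors the example above; the seed and repetition count are arbitrary choices): repeating the 100-flip experiment many times yields a spread of MLEs whose standard deviation matches the theoretical standard error $\sqrt{p(1-p)/n}$.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true, n, n_repeats = 0.63, 100, 10_000

# Each row is one replication of the experiment: n coin flips.
flips = rng.random((n_repeats, n)) < p_true
p_hat = flips.mean(axis=1)          # one MLE per replication

print(f"mean of p_hat: {p_hat.mean():.4f}  (true p = {p_true})")
print(f"std of p_hat : {p_hat.std():.4f}  "
      f"(theory sqrt(p(1-p)/n) = {np.sqrt(p_true*(1-p_true)/n):.4f})")
```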
1.2 The Estimation Landscape
Estimation theory spans three major frameworks, each answering a different question:
Point estimation produces a single value for the unknown parameter. The challenge is specifying what "best" means. Three criteria dominate:
- Unbiasedness: $\mathbb{E}[\hat{\theta}] = \theta$ - on average, the estimator is correct
- Minimum variance: among unbiased estimators, prefer the one with smallest $\operatorname{Var}(\hat{\theta})$
- Consistency: $\hat{\theta}_n \xrightarrow{p} \theta$ as $n \to \infty$ - with enough data, the estimator converges to the truth
These criteria can conflict. The Cramer-Rao lower bound establishes a hard floor on variance for unbiased estimators, and the estimators that achieve this floor are called efficient. Maximum likelihood estimation (MLE) is the dominant method: it is consistent, asymptotically efficient, and invariant under smooth reparametrisations.
Interval estimation goes beyond a point to a range $[L(X), U(X)]$ with a coverage guarantee: in repeated experiments, a fraction $1-\alpha$ of such intervals will contain the true $\theta$. Frequentist confidence intervals capture estimation uncertainty without requiring a prior distribution on $\theta$.
Asymptotic theory studies estimator behaviour as $n \to \infty$. The flagship result is the asymptotic normality of MLE: under regularity conditions,
$$\sqrt{n}\,(\hat{\theta}_{\text{MLE}} - \theta) \xrightarrow{d} \mathcal{N}\big(0,\ I(\theta)^{-1}\big),$$
where $I(\theta)$ is the Fisher information. This single result simultaneously justifies MLE as an estimation method, provides the basis for asymptotic confidence intervals, and connects estimation theory to information geometry.
For AI: The three frameworks map directly to ML practice. Point estimation = model training (find $\hat{\theta}$). Interval estimation = evaluation with error bars (report accuracy with a 95% CI). Asymptotic theory = understanding why larger datasets produce more reliable models (the $O(1/\sqrt{n})$ convergence rate).
1.3 Historical Timeline
ESTIMATION THEORY - HISTORICAL TIMELINE
========================================================================
1795 Gauss (age 18) uses least squares to predict asteroid orbits
- the first systematic estimation method
1805 Legendre publishes least squares formally (Gauss priority dispute)
1809 Gauss derives least squares from the assumption of Gaussian errors
- first connection between MLE and squared-error minimisation
1894 Pearson introduces method of moments - systematic moment matching
1912 Fisher introduces "maximum likelihood" as a term; begins systematic
study of estimation properties
1922 Fisher: "On the Mathematical Foundations of Theoretical Statistics"
- defines sufficiency, efficiency, consistency; derives properties of MLE
1925 Fisher introduces Fisher information and the information inequality
(precursor to Cramer-Rao)
1945 Cramer proves the Cramer-Rao lower bound (independently of Rao)
1945 Rao independently proves the same bound + Rao-Blackwell theorem
1947 Lehmann & Scheffe: completeness and UMVUE theory
1950s Wald develops statistical decision theory - unified framework for
estimation as minimising expected loss
1979 Efron introduces the bootstrap - non-parametric confidence intervals
without distributional assumptions
1998 Amari: natural gradient via Fisher information geometry
- direct connection to modern ML optimisation
2015 Martens & Grosse: K-FAC - tractable approximation of FIM for DNNs
2017 Kirkpatrick et al.: Elastic Weight Consolidation (EWC) uses FIM
to prevent catastrophic forgetting in continual learning
2022 Hoffmann et al. (Chinchilla): scaling laws estimated via MLE on
(N, D, L) data - estimation theory at trillion-parameter scale
========================================================================
1.4 Why Estimation Theory Matters for AI
Every major component of a modern ML pipeline rests on estimation theory:
Training objectives. Minimising cross-entropy loss is equivalent to performing MLE. The connection is exact: $\arg\min_\theta \mathcal{L}_{\text{CE}}(\theta) = \arg\max_\theta \ell(\theta)$. Every training run of GPT-4, LLaMA-3, Gemini, or any neural network trained with NLL loss is performing MLE on the conditional distribution.
Evaluation uncertainty. When a benchmark reports "Model A achieves 73.2% accuracy", this is a point estimate. How uncertain is it? With $n$ test examples, the 95% CI is approximately $\hat{a} \pm 1.96\sqrt{\hat{a}(1-\hat{a})/n}$; for $n = 10{,}000$ this is roughly $\pm 0.9$ percentage points - meaning Model A and a Model B with 73.5% accuracy are statistically indistinguishable. Estimation theory makes this precise.
Second-order optimisation. Adam, K-FAC, and natural gradient all incorporate curvature information. The natural gradient multiplies the gradient by $F(\theta)^{-1}$, where $F(\theta)$ is the Fisher information matrix. This reparametrises the optimisation in the geometry of the statistical manifold, making gradient steps invariant to reparametrisation. K-FAC (Martens & Grosse, 2015) approximates $F^{-1}$ tractably for deep networks.
Continual learning. Elastic weight consolidation (Kirkpatrick et al., 2017) protects important weights during sequential task learning. "Importance" is measured by the diagonal of the FIM: parameters with high Fisher information are those to which the likelihood is most sensitive, and perturbing them most damages performance.
Calibration. A well-calibrated model's confidence scores match empirical frequencies: when it says "80% confident", it's right 80% of the time. Temperature scaling - dividing logits by a scalar $T$ - is an MLE problem: find the $T$ that maximises likelihood on a held-out calibration set.
2. The Formal Estimation Problem
2.1 Statistical Model and Parametric Family
Definition 2.1 (Parametric Statistical Model). A parametric statistical model is a collection of probability distributions indexed by a parameter:
$$\mathcal{P} = \{p(x; \theta) : \theta \in \Theta\},$$
where $\Theta \subseteq \mathbb{R}^k$ is the parameter space. The model is:
- Correctly specified if the true data-generating distribution $P^*$ satisfies $P^* = p(\cdot; \theta^*)$ for some $\theta^* \in \Theta$
- Misspecified if $P^* \notin \mathcal{P}$ - the model family does not contain the truth (most practical ML models are misspecified)
- Identifiable if different parameters give different distributions: $\theta_1 \neq \theta_2 \implies p(\cdot; \theta_1) \neq p(\cdot; \theta_2)$
Identifiability is necessary for consistent estimation - you cannot consistently estimate a parameter that leaves the data distribution unchanged.
Standard examples of parametric families:
| Model | Parameter(s) | Parameter space | Notes |
|---|---|---|---|
| $\text{Bernoulli}(p)$ | $p$ | $(0, 1)$ | Single binary outcome |
| $\mathcal{N}(\mu, \sigma^2)$ | $(\mu, \sigma^2)$ | $\mathbb{R} \times (0, \infty)$ | Location-scale family |
| $\mathcal{N}_d(\mu, \Sigma)$ | $(\mu, \Sigma)$ | $\mathbb{R}^d \times S_{++}^d$ | Multivariate Gaussian |
| $\text{Poisson}(\lambda)$ | $\lambda$ | $(0, \infty)$ | Count data |
| $\text{Exponential}(\lambda)$ | $\lambda$ | $(0, \infty)$ | Time-to-event |
| $\text{Categorical}(p_1, \dots, p_K)$ | $(p_1, \dots, p_K)$ | simplex $\Delta^{K-1}$ | Probabilities |
Non-examples (non-identifiable):
- $X \sim \mathcal{N}(\mu_1 + \mu_2, 1)$ with parameter $(\mu_1, \mu_2)$: we can only estimate the sum $\mu_1 + \mu_2$, not the individual components. The model is not identifiable.
- A neural network with a hidden layer of width 2 using $\tanh$: swapping the two hidden neurons (together with their weights) gives the same function, so the parameterisation is not identifiable (though the function class may be).
The iid assumption. Throughout this section, observations $X_1, \dots, X_n$ are assumed independent and identically distributed (iid) from $p(x; \theta)$. The iid assumption enables the joint log-likelihood to factorise as a sum:
$$\ell(\theta) = \log \prod_{i=1}^n p(x_i; \theta) = \sum_{i=1}^n \log p(x_i; \theta).$$
This factorisation is what makes MLE computationally and theoretically tractable. In practice, the assumption is violated whenever data points are correlated (time series, text sequences, spatially clustered data), and estimation theory has extensions to handle dependent data.
2.2 Point Estimators
Definition 2.2 (Estimator). An estimator $\hat{\theta}_n = T(X_1, \dots, X_n)$ of $\theta$ is any measurable function of the sample. The key points:
- $\hat{\theta}_n$ is a random variable - before observing data, it is uncertain
- A specific value $\hat{\theta}_n(x_1, \dots, x_n)$ after observing data is called an estimate (not estimator)
- The sampling distribution of $\hat{\theta}_n$ is the distribution of $\hat{\theta}_n$ across all possible samples of size $n$
This distinction - estimator (random variable) vs. estimate (specific realisation) - is pedantic but consequential. Confidence interval coverage is a property of the estimator (random), not the estimate (fixed). When we say "95% CI", we mean that the random interval contains $\theta$ with probability 95%, not that this specific computed interval has a 95% chance of containing $\theta$ (after observing data, $\theta$ is either in the interval or not - there is no remaining randomness).
Examples of estimators for the Gaussian mean $\mu$:
Let $X_1, \dots, X_n \sim \mathcal{N}(\mu, \sigma^2)$ iid.
- Sample mean: $\hat{\mu}_1 = \bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$ - unbiased, consistent, efficient
- Constant zero: $\hat{\mu}_2 = 0$ - biased unless $\mu = 0$, inconsistent
- First observation: $\hat{\mu}_3 = X_1$ - unbiased! but inconsistent (variance $\sigma^2$ regardless of $n$)
- Trimmed mean: $\hat{\mu}_4$ = mean of the middle 90% - slightly biased for the Gaussian, but robust to outliers
- Constant $c$: $\hat{\mu}_5 = c$ - biased by $c - \mu$, inconsistent; can have lower MSE than the sample mean when $|c - \mu|$ is small (illustrates the bias-variance trade-off)
The existence of unbiased but inconsistent estimators ($X_1$) and consistent but biased estimators (trimmed mean) shows these properties are logically independent.
2.3 The Loss Framework: MSE, Bias, and Variance
To compare estimators, we need a criterion. The most widely used is Mean Squared Error (MSE):
Definition 2.3 (MSE, Bias, Variance). For an estimator $\hat{\theta}$ of a scalar parameter $\theta$:
$$\text{MSE}(\hat{\theta}) = \mathbb{E}[(\hat{\theta} - \theta)^2], \qquad \text{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta, \qquad \operatorname{Var}(\hat{\theta}) = \mathbb{E}\big[(\hat{\theta} - \mathbb{E}[\hat{\theta}])^2\big].$$
The bias-variance decomposition is the central identity of estimation theory:
$$\text{MSE}(\hat{\theta}) = \text{Bias}(\hat{\theta})^2 + \operatorname{Var}(\hat{\theta}).$$
Proof: Add and subtract $\mathbb{E}[\hat{\theta}]$:
$$\mathbb{E}[(\hat{\theta} - \theta)^2] = \mathbb{E}\big[(\hat{\theta} - \mathbb{E}[\hat{\theta}] + \mathbb{E}[\hat{\theta}] - \theta)^2\big] = \operatorname{Var}(\hat{\theta}) + \text{Bias}(\hat{\theta})^2 + 2\,\text{Bias}(\hat{\theta})\,\mathbb{E}\big[\hat{\theta} - \mathbb{E}[\hat{\theta}]\big],$$
where the cross term vanishes because $\mathbb{E}[\hat{\theta} - \mathbb{E}[\hat{\theta}]] = 0$.
Geometric interpretation:
BIAS-VARIANCE DECOMPOSITION - GEOMETRIC VIEW
========================================================================
True parameter theta* = 0 (target)
Case 1: Low bias, low variance (good estimator)
    Estimates cluster tightly around theta*. MSE is small.
Case 2: Low bias, high variance (unbiased but noisy)
    Centred on theta* but spread out. MSE = Var is large.
Case 3: High bias, low variance (biased but stable)
    Clustered tightly but centred off target. MSE = Bias^2 + small Var.
Case 4: High bias, high variance (worst case)
    Both centred off target AND spread out.
Key trade-off: Reducing variance by shrinkage introduces bias.
Increasing bias via regularisation often reduces variance more than
it increases bias-squared, giving lower MSE overall.
========================================================================
The bias-variance trade-off in ML is the exact same phenomenon: regularisation (L2, dropout, early stopping) introduces bias (the model is pulled away from the MLE toward a constrained region) but reduces variance (the model is less sensitive to the particular training sample), often reducing test MSE overall.
Minimax estimators. MSE is not the only loss criterion. The minimax estimator minimises the worst-case risk: $\hat{\theta}_{\text{minimax}} = \arg\min_{\hat{\theta}} \sup_{\theta \in \Theta} \mathbb{E}_\theta[(\hat{\theta} - \theta)^2]$. The James-Stein estimator (1961) famously showed that in $\mathbb{R}^d$ for $d \geq 3$, the sample mean is inadmissible - there exists an estimator with strictly lower MSE at every $\theta$. This is the Stein paradox: shrinkage toward the origin (a biased estimator!) uniformly dominates the unbiased sample mean in 3+ dimensions.
2.4 Examples of Biased and Unbiased Estimators
Example 1: Sample mean is unbiased.
Let $X_1, \dots, X_n$ be iid with $\mathbb{E}[X_i] = \mu$. Then:
$$\mathbb{E}[\bar{X}_n] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}[X_i] = \mu.$$
The sample mean is an unbiased estimator of $\mu$ for any distribution with finite mean.
Example 2: MLE of Gaussian variance is biased.
Let $X_1, \dots, X_n \sim \mathcal{N}(\mu, \sigma^2)$. The MLE of $\sigma^2$ is:
$$\hat{\sigma}^2_{\text{MLE}} = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2.$$
Computing the bias: Using $\sum_i (X_i - \bar{X})^2 = \sum_i (X_i - \mu)^2 - n(\bar{X} - \mu)^2$ and taking expectations:
$$\mathbb{E}\Big[\sum_i (X_i - \bar{X})^2\Big] = n\sigma^2 - n \cdot \frac{\sigma^2}{n} = (n-1)\sigma^2.$$
Therefore:
$$\mathbb{E}[\hat{\sigma}^2_{\text{MLE}}] = \frac{n-1}{n}\sigma^2.$$
The bias is $-\sigma^2/n$ - the MLE underestimates the variance. The fix is Bessel's correction: $S^2 = \frac{1}{n-1}\sum_i (X_i - \bar{X})^2$ is unbiased.
Recall from Section01: We saw Bessel's correction introduced as the "standard" sample variance formula. Now we understand why: MLE gives the $1/n$ estimator, which is biased. The $n-1$ denominator corrects for the one degree of freedom lost in estimating $\mu$ by $\bar{X}$.
Example 3: MLE is biased but consistent.
The Gaussian variance MLE has bias $-\sigma^2/n \to 0$ as $n \to \infty$. It is asymptotically unbiased and consistent. This illustrates that bias can be acceptable if it vanishes with sample size.
Example 4: When bias reduces MSE.
Consider estimating $\mu$ with the shrinkage estimator $\hat{\mu}_c = c\bar{X}_n$ for some $c \in (0, 1)$. Then:
$$\text{MSE}(\hat{\mu}_c) = c^2\frac{\sigma^2}{n} + (1-c)^2\mu^2.$$
The optimal $c^* = \frac{\mu^2}{\mu^2 + \sigma^2/n} < 1$, which gives strictly lower MSE than $c = 1$ (the unbiased sample mean). The key insight: when $\mu^2$ is small relative to $\sigma^2/n$, shrinking toward zero introduces a small bias but a large variance reduction.
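A quick simulation confirms the trade-off. This is a sketch assuming NumPy; the values of $\mu$, $\sigma$, and $n$ are illustrative, and $c^*$ is computed here from the true $\mu$ (an oracle quantity a real estimator would not know):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.5, 2.0, 20, 100_000

x_bar = rng.normal(mu, sigma, (reps, n)).mean(axis=1)
c_star = mu**2 / (mu**2 + sigma**2 / n)   # optimal shrinkage factor (oracle)

mse_mean = np.mean((x_bar - mu) ** 2)             # unbiased estimator
mse_shrunk = np.mean((c_star * x_bar - mu) ** 2)  # biased, but lower MSE
print(f"MSE(sample mean)       = {mse_mean:.5f} (theory sigma^2/n = {sigma**2/n:.5f})")
print(f"MSE(shrunk, c*={c_star:.3f}) = {mse_shrunk:.5f} (strictly smaller)")
```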
3. Properties of Estimators
3.1 Bias and Unbiasedness
Definition 3.1 (Bias). The bias of an estimator $\hat{\theta}$ for parameter $\theta$ is:
$$\text{Bias}(\hat{\theta}) = \mathbb{E}_\theta[\hat{\theta}] - \theta.$$
The estimator is unbiased if $\text{Bias}(\hat{\theta}) = 0$ for all $\theta \in \Theta$, and asymptotically unbiased if $\text{Bias}(\hat{\theta}_n) \to 0$ as $n \to \infty$.
Why unbiasedness is desirable - but not sacred. An unbiased estimator is correct on average over infinite repetitions of the experiment. This is a natural property to want. However, unbiasedness alone is insufficient:
- The estimator $\hat{\mu} = X_1$ (use only the first observation) is unbiased but throws away $n - 1$ observations - clearly wasteful
- No unbiased estimator exists for some parameters: for example, there is no unbiased estimator of the odds $p/(1-p)$ from $n$ Bernoulli trials (the expectation of any estimator is a polynomial in $p$ of degree at most $n$, which the odds are not)
- The Stein phenomenon (Section 2.3) shows that in 3+ dimensions, unbiased estimators can be uniformly dominated by biased ones
Bias correction. When an estimator has a known bias $b$, we can correct it: $\tilde{\theta} = \hat{\theta} - b$. This is the idea behind Bessel's correction ($n-1$ vs $n$ in the variance formula) and jackknife bias reduction.
For AI: Batch normalisation computes $\hat{\mu}_B$ as the batch mean during training. This is an unbiased estimator of the feature mean. During inference, PyTorch uses the running mean accumulated over training, which is a biased estimator of the current data mean if the distribution has shifted - illustrating that unbiasedness depends on whether the estimation scenario matches the training scenario.
3.2 Consistency
Definition 3.2 (Consistency). An estimator is:
- Weakly consistent if $\hat{\theta}_n \xrightarrow{p} \theta$ for all $\theta \in \Theta$ (convergence in probability)
- Strongly consistent if $\hat{\theta}_n \xrightarrow{a.s.} \theta$ for all $\theta \in \Theta$ (almost-sure convergence)
Weak consistency spelled out: for every $\epsilon > 0$, $P(|\hat{\theta}_n - \theta| > \epsilon) \to 0$ as $n \to \infty$.
Sufficient conditions for consistency:
- If $\text{Bias}(\hat{\theta}_n) \to 0$ and $\operatorname{Var}(\hat{\theta}_n) \to 0$ as $n \to \infty$, then $\hat{\theta}_n$ is weakly consistent. (By Chebyshev + the bias decomposition: $P(|\hat{\theta}_n - \theta| > \epsilon) \leq \text{MSE}(\hat{\theta}_n)/\epsilon^2 = (\text{Bias}^2 + \operatorname{Var})/\epsilon^2 \to 0$.)
Example: Sample mean consistency.
$\bar{X}_n$ satisfies $\text{Bias} = 0$ and $\operatorname{Var}(\bar{X}_n) = \sigma^2/n \to 0$. Therefore $\bar{X}_n \xrightarrow{p} \mu$. This is exactly the Weak Law of Large Numbers - consistency of the sample mean is the LLN.
Example: MLE of Gaussian variance is consistent.
$\hat{\sigma}^2_{\text{MLE}}$ has bias $-\sigma^2/n \to 0$ and variance $O(1/n) \to 0$, so it is consistent even though it is biased for finite $n$.
Inconsistent estimator example: $\hat{\mu} = X_1$ (take only the first observation) has $\operatorname{Var}(\hat{\mu}) = \sigma^2$ regardless of $n$. It never concentrates - it is inconsistent.
Why consistency matters: Consistency is the minimal requirement for an estimator to be scientifically useful. A method that doesn't converge to the right answer with infinite data is fundamentally broken. Consistency does not guarantee that the estimator is good for small $n$ - convergence can be arbitrarily slow - but it at least ensures the method is correct in the large-data limit.
For AI: The "double descent" phenomenon in over-parameterised neural networks challenges classical bias-variance thinking. A massively over-parameterised model (far more parameters than training examples) can still be consistent for the Bayes-optimal predictor if trained with appropriate implicit regularisation (e.g., gradient descent from small initialisation). This is an active area connecting statistical estimation theory to modern deep learning theory.
3.3 Efficiency
Among all unbiased estimators of $\theta$, which has the smallest variance? This question is answered by the Cramer-Rao bound (Section 4), but we can define efficiency as a property here.
Definition 3.3 (Efficiency and Relative Efficiency).
- An unbiased estimator $\hat{\theta}$ is efficient if its variance equals the Cramer-Rao lower bound: $\operatorname{Var}(\hat{\theta}) = \frac{1}{nI(\theta)}$.
- The relative efficiency of two unbiased estimators $\hat{\theta}_1$ and $\hat{\theta}_2$ is $\text{eff}(\hat{\theta}_1, \hat{\theta}_2) = \operatorname{Var}(\hat{\theta}_2)/\operatorname{Var}(\hat{\theta}_1)$. Values greater than 1 mean $\hat{\theta}_1$ is more efficient.
Examples:
- Gaussian mean: The sample mean is efficient - it achieves the CRB $\sigma^2/n$. The asymptotic relative efficiency of the sample median vs. the mean is $2/\pi \approx 0.64$ for Gaussian data - the median throws away ~36% of the efficiency.
- Gaussian variance: The unbiased sample variance $S^2$ is not efficient (the MLE achieves the CRB but is biased; correcting for the bias loses the CRB).
Asymptotic efficiency. For large $n$, MLE achieves the CRB asymptotically - it is asymptotically efficient. This is one of its key advantages.
For AI: The sample mean is the most efficient estimator of the mean for Gaussian data. Batch gradient descent using all samples computes the exact gradient (equivalent to efficient estimation), while SGD uses a single sample or mini-batch (less efficient, higher variance, but much faster per update). The trade-off between statistical efficiency and computational efficiency is fundamental to modern ML training.
3.4 Sufficiency and the Rao-Blackwell Theorem
A sufficient statistic captures all the information in the data that is relevant to estimating $\theta$.
Definition 3.4 (Sufficient Statistic). A statistic $T(X_1, \dots, X_n)$ is sufficient for $\theta$ if the conditional distribution $P(X_1, \dots, X_n \mid T = t)$ does not depend on $\theta$ for any $t$.
Intuitively: once you know $T$, the raw data provide no additional information about $\theta$.
Fisher-Neyman Factorisation Theorem. $T$ is sufficient for $\theta$ if and only if the likelihood factors as:
$$p(x_1, \dots, x_n; \theta) = g\big(T(x_1, \dots, x_n), \theta\big)\, h(x_1, \dots, x_n),$$
where $g$ depends on the data only through $T$, and $h$ does not depend on $\theta$.
Examples of sufficient statistics:
| Model | Sufficient statistic |
|---|---|
| $\text{Bernoulli}(p)$, $n$ samples | $\sum_i X_i$ (total successes) |
| $\mathcal{N}(\mu, \sigma^2)$, $\sigma^2$ known | $\bar{X}$ (sample mean) |
| $\mathcal{N}(\mu, \sigma^2)$, both unknown | $\big(\bar{X}, \sum_i (X_i - \bar{X})^2\big)$ (mean and SS) |
| $\text{Poisson}(\lambda)$ | $\sum_i X_i$ (total count) |
| $\text{Exponential}(\lambda)$ | $\sum_i X_i$ (total time) |
Rao-Blackwell Theorem. If $\hat{\theta}$ is any unbiased estimator and $T$ is a sufficient statistic, then $\tilde{\theta} = \mathbb{E}[\hat{\theta} \mid T]$ satisfies:
$$\operatorname{Var}(\tilde{\theta}) \leq \operatorname{Var}(\hat{\theta}),$$
with equality iff $\hat{\theta}$ is already a function of $T$. The "Rao-Blackwellised" estimator is at least as good.
Proof sketch: By the law of total variance, $\operatorname{Var}(\hat{\theta}) = \mathbb{E}[\operatorname{Var}(\hat{\theta} \mid T)] + \operatorname{Var}(\mathbb{E}[\hat{\theta} \mid T]) \geq \operatorname{Var}(\tilde{\theta})$.
For AI: Sufficient statistics are the foundation of exponential families, which include Gaussian, Bernoulli, Poisson, Gamma, and most common distributions. The natural parameterisation of exponential families (used in generalised linear models and variational inference) exploits sufficient statistics to simplify computation. In the VAE (Kingma & Welling, 2014), the encoder outputs sufficient statistics of the Gaussian posterior approximation.
3.5 Completeness and the Lehmann-Scheffe Theorem
Definition 3.5 (Complete Statistic). A sufficient statistic $T$ is complete if for any measurable function $g$: $\mathbb{E}_\theta[g(T)] = 0$ for all $\theta$ implies $g(T) = 0$ almost surely for all $\theta$.
Completeness means the only function of $T$ that has zero expectation everywhere is the zero function - there are no "hidden" unbiasedness conditions.
Lehmann-Scheffe Theorem. If $T$ is a complete sufficient statistic and $\hat{\theta} = g(T)$ is unbiased for $\theta$, then $\hat{\theta}$ is the unique minimum variance unbiased estimator (UMVUE) of $\theta$.
The UMVUE is the "best possible" unbiased estimator - it has the lowest variance among all unbiased estimators, and it is unique.
Example: For $\mathcal{N}(\mu, \sigma^2)$ with $\sigma^2$ known, $\bar{X}$ is complete sufficient, and $\bar{X}$ itself is unbiased for $\mu$. By Lehmann-Scheffe, $\bar{X}$ is the UMVUE of $\mu$ - there is no unbiased estimator with smaller variance. This also confirms that $\bar{X}$ achieves the CRB $\sigma^2/n$.
4. Fisher Information and the Cramer-Rao Bound
4.1 The Score Function
The score function is the fundamental quantity linking estimation and information theory.
Definition 4.1 (Score Function). For a single observation $x$ from $p(x; \theta)$, the score is:
$$s(\theta; x) = \frac{\partial}{\partial \theta} \log p(x; \theta).$$
For a sample of $n$ iid observations, the total score is $s_n(\theta) = \sum_{i=1}^n \frac{\partial}{\partial \theta} \log p(x_i; \theta)$.
The MLE solves $s_n(\hat{\theta}) = 0$ (the score equation).
Zero-mean property of the score:
$$\mathbb{E}_\theta[s(\theta; X)] = \int \frac{\partial_\theta p(x; \theta)}{p(x; \theta)}\, p(x; \theta)\, dx = \frac{\partial}{\partial \theta} \int p(x; \theta)\, dx = \frac{\partial}{\partial \theta} 1 = 0,$$
under regularity conditions permitting interchange of differentiation and integration.
Interpretation: The score measures how sensitively the log-likelihood responds to perturbations of $\theta$. At the true parameter value, the score has zero mean - on average, the likelihood is at a stationary point. But it has positive variance, measuring how much information the data carry about $\theta$.
SCORE FUNCTION - GEOMETRIC INTUITION
========================================================================
Log-likelihood l(theta) = sum_i log p(x_i; theta) as a function of theta:
l(theta)
|
|          +-----+
|       +-+       +-+
|     +-+            +-+
|---+-+                 +-+----
|                               theta
            ^
        theta* (true parameter)
        slope = score = 0 at maximum
Score s(theta; data) = dl/dtheta is the slope of the log-likelihood.
High variance of the score at theta* means the data sharply identify
theta* (the likelihood is "peaky"). Low variance means the data
are relatively uninformative about theta.
========================================================================
4.2 Fisher Information
Definition 4.2 (Fisher Information). The Fisher information for parameter $\theta$ in model $p(x; \theta)$ is:
$$I(\theta) = \operatorname{Var}_\theta\big(s(\theta; X)\big) = \mathbb{E}_\theta\big[s(\theta; X)^2\big].$$
(The second equality uses $\mathbb{E}_\theta[s(\theta; X)] = 0$.)
Alternative form (under regularity conditions):
$$I(\theta) = -\mathbb{E}_\theta\left[\frac{\partial^2}{\partial \theta^2} \log p(X; \theta)\right].$$
Proof of equivalence: Differentiating $\int \big(\partial_\theta \log p\big)\, p\, dx = 0$ with respect to $\theta$:
$$0 = \int \big(\partial^2_\theta \log p\big)\, p\, dx + \int \big(\partial_\theta \log p\big)^2 p\, dx = \mathbb{E}[\partial^2_\theta \log p] + \mathbb{E}[s^2].$$
Therefore $I(\theta) = \mathbb{E}[s^2] = -\mathbb{E}[\partial^2_\theta \log p]$.
The second-derivative form has intuitive content: $I(\theta)$ is the expected (negative) curvature of the log-likelihood. High curvature = the likelihood has a sharp peak = the data strongly identify $\theta$. Low curvature = flat likelihood = data are relatively uninformative.
Fisher information for an iid sample of $n$ observations: $I_n(\theta) = n\,I(\theta)$.
Information accumulates linearly with sample size - doubling data doubles information, halving uncertainty (in variance terms: $\operatorname{Var}(\hat{\theta}) \geq \frac{1}{nI(\theta)}$).
Computed examples:
Bernoulli: $\log p(x; p) = x \log p + (1-x)\log(1-p)$. Score: $s = \frac{x}{p} - \frac{1-x}{1-p} = \frac{x - p}{p(1-p)}$. Information: $I(p) = \frac{\operatorname{Var}(X)}{p^2(1-p)^2} = \frac{1}{p(1-p)}$.
$\mathcal{N}(\mu, \sigma^2)$ with $\sigma^2$ known: Score: $s = \frac{x - \mu}{\sigma^2}$. Information: $I(\mu) = \frac{1}{\sigma^2}$.
Poisson: $\log p(x; \lambda) = x \log \lambda - \lambda - \log x!$. Score: $s = \frac{x}{\lambda} - 1$. Information: $I(\lambda) = \frac{1}{\lambda}$.
Pattern: when $\theta$ is the mean parameter of an exponential family, $I(\theta) = 1/\operatorname{Var}_\theta(X)$ in these scalar cases - information is the reciprocal of the observation variance.
Multivariate Fisher Information Matrix. For $\theta \in \mathbb{R}^k$:
$$[I(\theta)]_{jl} = \mathbb{E}_\theta\left[\frac{\partial \log p}{\partial \theta_j}\, \frac{\partial \log p}{\partial \theta_l}\right].$$
In matrix form: $I(\theta) = \mathbb{E}\big[\nabla_\theta \log p\, \nabla_\theta \log p^\top\big] = -\mathbb{E}[H(\theta)]$, where $H(\theta)$ is the Hessian of the log-likelihood.
The FIM is always positive semi-definite (it is an expectation of outer products), and positive definite under identifiability.
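The two characterisations - variance of the score and expected negative curvature - can be checked numerically. A minimal sketch for the Bernoulli model (assuming NumPy; the sample size is chosen only to make the Monte Carlo estimates tight):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.3
x = (rng.random(1_000_000) < p).astype(float)

score = x / p - (1 - x) / (1 - p)            # d/dp log p(x;p)
curv = -x / p**2 - (1 - x) / (1 - p)**2      # d^2/dp^2 log p(x;p)

print(f"Var(score)              = {score.var():.4f}")
print(f"-E[second derivative]   = {-curv.mean():.4f}")
print(f"analytic 1/(p(1-p))     = {1/(p*(1-p)):.4f}")
```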
4.3 The Cramer-Rao Lower Bound
The Cramer-Rao Lower Bound (CRB) is the fundamental limit on estimation accuracy.
Theorem 4.3 (Cramer-Rao Lower Bound). Let $\hat{\theta}$ be any unbiased estimator of $\theta$ based on $n$ iid observations. Under regularity conditions:
$$\operatorname{Var}_\theta(\hat{\theta}) \geq \frac{1}{n\,I(\theta)}.$$
For the multivariate case with unbiased estimator $\hat{\theta}$ of $\theta \in \mathbb{R}^k$:
$$\operatorname{Cov}_\theta(\hat{\theta}) \succeq \frac{1}{n} I(\theta)^{-1}$$
(meaning the difference is positive semidefinite).
Proof (scalar case, single observation):
We need to show $\operatorname{Var}(\hat{\theta}) \geq 1/I(\theta)$.
Since $\hat{\theta}$ is unbiased: $\int \hat{\theta}(x)\, p(x; \theta)\, dx = \theta$. Differentiating with respect to $\theta$:
$$\int \hat{\theta}(x)\, \partial_\theta p(x; \theta)\, dx = 1 \quad\Longrightarrow\quad \mathbb{E}\big[\hat{\theta}\, s(\theta; X)\big] = 1.$$
Since $\mathbb{E}[s] = 0$, this gives $\operatorname{Cov}(\hat{\theta}, s) = 1$. Now apply the Cauchy-Schwarz inequality to $\operatorname{Cov}(\hat{\theta}, s)$:
$$1 = \operatorname{Cov}(\hat{\theta}, s)^2 \leq \operatorname{Var}(\hat{\theta})\, \operatorname{Var}(s) = \operatorname{Var}(\hat{\theta})\, I(\theta).$$
Hence $\operatorname{Var}(\hat{\theta}) \geq 1/I(\theta)$; with $n$ iid observations, $I(\theta)$ is replaced by $nI(\theta)$.
When is the bound tight? The CRB is achieved (with equality in Cauchy-Schwarz) if and only if $s(\theta; x) = a(\theta)\,\big(\hat{\theta}(x) - \theta\big)$ for some function $a(\theta)$. This occurs precisely for exponential family distributions, and the efficient estimator is the MLE.
Biased estimator CRB. For biased estimators with bias $b(\theta)$:
$$\operatorname{Var}(\hat{\theta}) \geq \frac{\big(1 + b'(\theta)\big)^2}{n\,I(\theta)}.$$
This shows that introducing bias (via regularisation) can actually reduce the bound - a biased estimator faces a different, potentially less demanding limit.
Examples of efficient estimators:
| Model | Parameter | Efficient estimator | CRB | Achieved? |
|---|---|---|---|---|
| $\mathcal{N}(\mu, \sigma^2)$, $\sigma^2$ known | $\mu$ | $\bar{X}$ | $\sigma^2/n$ | [ok] |
| $\text{Bernoulli}(p)$ | $p$ | $\bar{X}$ | $p(1-p)/n$ | [ok] |
| $\text{Poisson}(\lambda)$ | $\lambda$ | $\bar{X}$ | $\lambda/n$ | [ok] |
| $\text{Exponential}(\lambda)$ | mean $1/\lambda$ | $\bar{X}$ | $1/(n\lambda^2)$ | [ok] |
The sample mean is efficient for all these single-parameter exponential families. This is not a coincidence - it follows from the structure of exponential families.
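A Monte Carlo sketch (assuming NumPy; the parameter values and sample sizes are illustrative) verifying that the sample mean attains the CRB for two of these families:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 50, 100_000

# Poisson(lambda): MLE = sample mean, CRB = lambda/n
lam = 4.0
lam_hat = rng.poisson(lam, (reps, n)).mean(axis=1)
print(f"Poisson   Var(MLE) = {lam_hat.var():.5f},  CRB = {lam/n:.5f}")

# Bernoulli(p): MLE = sample proportion, CRB = p(1-p)/n
p = 0.7
p_hat = (rng.random((reps, n)) < p).mean(axis=1)
print(f"Bernoulli Var(MLE) = {p_hat.var():.6f}, CRB = {p*(1-p)/n:.6f}")
```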
4.4 Fisher Information Matrix in Practice
For a neural network with parameters $\theta$ defining a conditional distribution $p_\theta(y \mid x)$, the FIM is:
$$F(\theta) = \mathbb{E}_{x \sim p_{\text{data}},\ y \sim p_\theta(\cdot \mid x)}\Big[\nabla_\theta \log p_\theta(y \mid x)\, \nabla_\theta \log p_\theta(y \mid x)^\top\Big].$$
This is an expectation of outer products of gradient vectors, making it a PSD matrix. For modern LLMs with on the order of $10^{11}$ parameters, this matrix is completely intractable to store (roughly $10^{22}$ entries).
Empirical FIM. In practice, approximate $F(\theta)$ by sampling:
$$\hat{F}(\theta) = \frac{1}{N}\sum_{i=1}^N \nabla_\theta \log p_\theta(y_i \mid x_i)\, \nabla_\theta \log p_\theta(y_i \mid x_i)^\top.$$
This is the average squared gradient - exactly what is accumulated in gradient variance estimates.
Natural gradient (Amari, 1998). The ordinary gradient $\nabla_\theta \ell$ is the steepest ascent direction in Euclidean parameter space. But parameters do not live in Euclidean space - they live in the statistical manifold of distributions, where the natural metric is the Fisher information. The natural gradient is:
$$\tilde{\nabla}_\theta \ell = F(\theta)^{-1}\, \nabla_\theta \ell.$$
This is the steepest ascent direction in the metric defined by the FIM. Natural gradient is invariant to reparametrisation: if we change from $\theta$ to $\phi = g(\theta)$, the natural gradient update induces the same change in the underlying distribution.
K-FAC approximation (Martens & Grosse, 2015). Full FIM inversion costs $O(p^3)$ for $p$ parameters. K-FAC approximates $F$ by assuming layer-wise independence and a Kronecker factorisation of each layer's FIM: $F_\ell \approx A_\ell \otimes G_\ell$, where $A_\ell = \mathbb{E}[a a^\top]$ (input activation covariance) and $G_\ell = \mathbb{E}[g g^\top]$ (backpropagated gradient covariance). Inverting the Kronecker product: $(A \otimes G)^{-1} = A^{-1} \otimes G^{-1}$, reducing the per-layer cost from cubic in the layer's parameter count to cubic in its input and output dimensions.
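The Kronecker inversion identity that makes K-FAC cheap is easy to verify numerically. A sketch with synthetic SPD factors standing in for the activation and gradient covariances (NumPy assumed; the dimensions and damping constant are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d_in, d_out = 20, 10

# Synthetic SPD Kronecker factors (stand-ins for A = E[aa^T], G = E[gg^T]).
A = np.cov(rng.normal(size=(d_in, 500))) + 1e-3 * np.eye(d_in)   # damped
G = np.cov(rng.normal(size=(d_out, 500))) + 1e-3 * np.eye(d_out)

F = np.kron(A, G)                               # 200 x 200 layer-FIM approximation
F_inv_direct = np.linalg.inv(F)                 # cubic in d_in * d_out
F_inv_kron = np.kron(np.linalg.inv(A), np.linalg.inv(G))  # two small inverses

print(np.allclose(F_inv_direct, F_inv_kron))    # True: (A (x) G)^-1 = A^-1 (x) G^-1
```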
Jeffreys prior. In Bayesian statistics (preview of Section04), the Jeffreys prior $\pi(\theta) \propto \sqrt{I(\theta)}$ is the "uninformative" prior that is invariant to reparametrisation. For $p$ in Bernoulli: $I(p) = \frac{1}{p(1-p)}$, so $\pi(p) \propto p^{-1/2}(1-p)^{-1/2}$, which is $\text{Beta}(1/2, 1/2)$.
Preview: MAP Estimation. Adding a prior $\pi(\theta)$ and finding $\arg\max_\theta\, [\ell(\theta) + \log \pi(\theta)]$ gives the Maximum A Posteriori (MAP) estimate - MLE regularised by the log-prior. MAP with a Gaussian prior gives L2-regularised MLE. The full Bayesian treatment of priors, posteriors, and conjugate families is in Section04 Bayesian Inference.
5. Maximum Likelihood Estimation
5.1 The Likelihood Principle
Definition 5.1 (Likelihood Function). Given observed data $x_1, \dots, x_n$ and a parametric model $p(x; \theta)$, the likelihood function is:
$$L(\theta) = \prod_{i=1}^n p(x_i; \theta).$$
The log-likelihood is $\ell(\theta) = \sum_{i=1}^n \log p(x_i; \theta)$.
The Maximum Likelihood Estimator (MLE) is:
$$\hat{\theta}_{\text{MLE}} = \arg\max_{\theta \in \Theta} \ell(\theta).$$
The likelihood is not a probability. $L(\theta)$ is a function of $\theta$ for fixed data $x_1, \dots, x_n$, not a probability distribution over $\theta$. It does not integrate to 1 over $\Theta$. The statement "$L(\theta_0) = 0.3$" means "the data have probability (density) 0.3 under parameter $\theta_0$", not "there is a 30% chance $\theta_0$ is the true parameter".
Why log? Three reasons:
- Computational: $\prod_i p(x_i; \theta)$ underflows to zero for large $n$ (e.g., $0.1^{1000} = 10^{-1000}$); sums of logs are numerically stable
- Mathematical: sums are easier to differentiate and optimise than products
- Statistical: log is a monotone transformation, so $\arg\max L(\theta) = \arg\max \ell(\theta)$
The likelihood principle states that all information about $\theta$ in the data is contained in the likelihood function. Two datasets with proportional likelihoods should lead to the same inferences about $\theta$. This is a philosophical principle - frequentists may not accept it fully (confidence intervals depend on the sampling procedure, not just the likelihood), but it underlies MLE and Bayesian inference alike.
5.2 Deriving MLEs: Scalar Parameters
The standard procedure for finding the MLE:
- Write the log-likelihood $\ell(\theta) = \sum_i \log p(x_i; \theta)$
- Differentiate and set $\ell'(\theta) = 0$ (the score equation)
- Verify it is a maximum (second derivative or global structure)
- Check boundary cases (the maximum might be on the boundary of $\Theta$)
Bernoulli MLE. Let $X_1, \dots, X_n \sim \text{Bernoulli}(p)$.
$$\ell(p) = S \log p + (n - S)\log(1 - p),$$
where $S = \sum_i X_i$ is the total number of successes. Setting $\ell'(p) = \frac{S}{p} - \frac{n - S}{1 - p} = 0$:
$$\hat{p}_{\text{MLE}} = \frac{S}{n}.$$
The MLE is the sample proportion - the intuitive answer.
Poisson MLE. Let $X_1, \dots, X_n \sim \text{Poisson}(\lambda)$.
$$\ell(\lambda) = -n\lambda + \Big(\sum_i x_i\Big)\log \lambda - \sum_i \log x_i!$$
Setting $\ell'(\lambda) = -n + \frac{\sum_i x_i}{\lambda} = 0$: $\hat{\lambda}_{\text{MLE}} = \bar{x}$.
The MLE of the Poisson rate is the sample mean - the estimator of the mean equals the MLE because $\mathbb{E}[X] = \lambda$.
Exponential MLE. Let $X_1, \dots, X_n \sim \text{Exponential}(\lambda)$, i.e., $p(x; \lambda) = \lambda e^{-\lambda x}$ for $x \geq 0$.
$$\ell(\lambda) = n \log \lambda - \lambda \sum_i x_i.$$
Setting $\ell'(\lambda) = \frac{n}{\lambda} - \sum_i x_i = 0$: $\hat{\lambda}_{\text{MLE}} = \frac{1}{\bar{x}}$.
The MLE of the rate is the reciprocal of the sample mean - since $\mathbb{E}[X] = 1/\lambda$, this is natural.
Uniform MLE. Let $X_1, \dots, X_n \sim \text{Uniform}(0, \theta)$, with density $1/\theta$ for $x \in [0, \theta]$.
The likelihood is zero for $\theta < \max_i x_i$ (some observation would be outside $[0, \theta]$) and equals $\theta^{-n}$, decreasing in $\theta$, for $\theta \geq \max_i x_i$. The MLE is at the boundary: $\hat{\theta}_{\text{MLE}} = \max_i X_i = X_{(n)}$.
This is an example where the score equation gives no solution (the log-likelihood has no interior critical point); the MLE is found by reasoning about the likelihood's shape. Also, the MLE here is biased: $\mathbb{E}[X_{(n)}] = \frac{n}{n+1}\theta < \theta$.
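When a closed form exists it is a useful sanity check for numerical optimisation. A sketch for the Exponential model (assuming NumPy/SciPy; the true rate and sample size are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
x = rng.exponential(scale=1 / 2.5, size=1000)   # Exponential with rate lambda = 2.5

# Closed form: lambda_hat = 1 / x_bar
lam_closed = 1 / x.mean()

# Numerical: minimise the NLL  -(n log(lambda) - lambda * sum(x))
nll = lambda lam: -(len(x) * np.log(lam) - lam * x.sum())
lam_numeric = minimize_scalar(nll, bounds=(1e-6, 100), method="bounded").x

print(f"closed form: {lam_closed:.4f}, numerical: {lam_numeric:.4f}")
```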
5.3 Deriving MLEs: Multivariate Gaussian
Let $X_1, \dots, X_n \sim \mathcal{N}_d(\mu, \Sigma)$ with $\mu \in \mathbb{R}^d$ and $\Sigma \in \mathbb{R}^{d \times d}$ positive definite.
The log-likelihood is:
$$\ell(\mu, \Sigma) = -\frac{n}{2}\log\det(2\pi\Sigma) - \frac{1}{2}\sum_{i=1}^n (x_i - \mu)^\top \Sigma^{-1}(x_i - \mu).$$
MLE of $\mu$: Taking the gradient with respect to $\mu$ and setting it to zero:
$$\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i.$$
MLE of $\Sigma$: Substituting $\hat{\mu}$ and differentiating with respect to $\Sigma$ (using the matrix calculus identities $\partial_\Sigma \log\det\Sigma = \Sigma^{-1}$ and $\partial_\Sigma \operatorname{tr}(\Sigma^{-1} S) = -\Sigma^{-1} S\, \Sigma^{-1}$):
$$\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^n (x_i - \hat{\mu})(x_i - \hat{\mu})^\top.$$
Bias of $\hat{\Sigma}$: By the same calculation as in the scalar case, $\mathbb{E}[\hat{\Sigma}] = \frac{n-1}{n}\Sigma$. The unbiased estimator uses the $\frac{1}{n-1}$ factor.
For AI: Fitting a Gaussian to activations or embeddings is done in numerous ML methods. The empirical covariance of a layer's activations, used in whitening transforms, weight initialisation (Xavier/He initialisation matches the second moment), and covariance-regularised fine-tuning, all use this MLE formula. The multivariate Gaussian MLE is also the building block for Gaussian mixture models (EM algorithm) and linear discriminant analysis.
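A minimal NumPy sketch of these formulas (the true parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
mu_true = np.array([1.0, -2.0])
Sigma_true = np.array([[2.0, 0.6], [0.6, 1.0]])
X = rng.multivariate_normal(mu_true, Sigma_true, size=5000)   # n x d

mu_hat = X.mean(axis=0)                    # MLE of the mean
centred = X - mu_hat
Sigma_hat = centred.T @ centred / len(X)   # MLE of the covariance (1/n, biased)

print("mu_hat    =", np.round(mu_hat, 3))
print("Sigma_hat =\n", np.round(Sigma_hat, 3))
```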
5.4 MLE as Cross-Entropy Minimisation
This connection is perhaps the most important in all of applied ML.
Theorem 5.4. For a classification model $p_\theta(y \mid x)$ and iid data $\{(x_i, y_i)\}_{i=1}^n$:
$$\arg\max_\theta \sum_{i=1}^n \log p_\theta(y_i \mid x_i) = \arg\min_\theta \left[-\frac{1}{n}\sum_{i=1}^n \log p_\theta(y_i \mid x_i)\right].$$
The right-hand side objective is the negative log-likelihood (NLL) or cross-entropy loss:
$$\mathcal{L}_{\text{CE}}(\theta) = -\frac{1}{n}\sum_{i=1}^n \log p_\theta(y_i \mid x_i).$$
Connection to KL divergence: Define the empirical distribution $\hat{p}_n$ of the data. Then:
$$\mathcal{L}_{\text{CE}}(\theta) = H(\hat{p}_n) + D_{\text{KL}}\big(\hat{p}_n \,\|\, p_\theta\big).$$
Since $H(\hat{p}_n)$ does not depend on $\theta$:
$$\arg\min_\theta \mathcal{L}_{\text{CE}}(\theta) = \arg\min_\theta D_{\text{KL}}\big(\hat{p}_n \,\|\, p_\theta\big).$$
MLE minimises the KL divergence from the data distribution to the model. This is the information-theoretic interpretation of MLE.
For language models: An LLM defines $p_\theta(x_t \mid x_{<t})$. Training by NLL is MLE:
$$\hat{\theta} = \arg\max_\theta \sum_t \log p_\theta(x_t \mid x_{<t}),$$
where the sum is over all tokens in the training corpus. This objective is used verbatim in every large language model - GPT-4 (OpenAI, 2023), LLaMA-3 (Meta, 2024), Gemini (Google, 2023), Claude (Anthropic, 2024). The perplexity $\exp(\text{per-token NLL})$ is the standard evaluation metric.
Label smoothing as regularised MLE. Standard MLE uses hard labels (one-hot). Label smoothing (Szegedy et al., 2016) uses soft targets with weight $1 - \epsilon$ on the true class, adding a small weight to incorrect classes. This is equivalent to regularising MLE by mixing the empirical distribution with a uniform prior - reducing overconfidence and improving calibration.
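The MLE/cross-entropy identity is one line of code once a numerically stable log-softmax is in place. A sketch (assuming NumPy; the logits and labels are synthetic):

```python
import numpy as np

def log_softmax(z):
    # Log-sum-exp trick: subtract the row max before exponentiating.
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

rng = np.random.default_rng(7)
logits = rng.normal(size=(4, 3))   # 4 examples, 3 classes
y = np.array([0, 2, 1, 2])         # true labels

# Cross-entropy loss = average negative log-likelihood of the labels.
nll = -log_softmax(logits)[np.arange(4), y].mean()
print(f"cross-entropy / NLL: {nll:.4f}")
```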
5.5 Properties of MLE
Under regularity conditions (the "Cramer-Rao conditions" on the model), MLE satisfies:
1. Consistency: $\hat{\theta}_{\text{MLE}} \xrightarrow{p} \theta^*$. The proof uses the fact that the expected log-likelihood $\theta \mapsto \mathbb{E}[\log p(X; \theta)]$ is uniquely maximised at the true parameter (by the non-negativity of KL divergence plus identifiability), and the LLN ensures $\frac{1}{n}\ell_n(\theta) \to \mathbb{E}[\log p(X; \theta)]$ uniformly.
2. Asymptotic normality:
$$\sqrt{n}\,(\hat{\theta}_{\text{MLE}} - \theta^*) \xrightarrow{d} \mathcal{N}\big(0,\ I(\theta^*)^{-1}\big).$$
For large $n$: $\hat{\theta}_{\text{MLE}} \approx \mathcal{N}\big(\theta^*, \frac{1}{nI(\theta^*)}\big)$.
3. Asymptotic efficiency: The asymptotic covariance achieves the CRB. No consistent estimator can have smaller asymptotic variance (at every $\theta$). MLE is asymptotically optimal.
4. Invariance under reparametrisation: If $\hat{\theta}$ is the MLE of $\theta$, then $g(\hat{\theta})$ is the MLE of $g(\theta)$ for any function $g$ (not required to be injective). This is the invariance property of MLE.
Examples of invariance:
- MLE of $\sigma$ is $\sqrt{\hat{\sigma}^2_{\text{MLE}}}$ (not the square root of the unbiased $S^2$)
- MLE of the mean $1/\lambda$ in $\text{Exponential}(\lambda)$ is $1/\hat{\lambda} = \bar{x}$ (consistent with $\mathbb{E}[X] = 1/\lambda$)
- MLE of the odds $p/(1-p)$ in Bernoulli is $\hat{p}/(1-\hat{p})$ (biased, but the MLE)
Regularity conditions (Cramer-Rao conditions). These are the sufficient conditions for the above properties:
- The parameter space $\Theta$ is an open subset of $\mathbb{R}^k$
- The model is identifiable
- The support of $p(x; \theta)$ does not depend on $\theta$ (excludes $\text{Uniform}(0, \theta)$)
- The log-likelihood is three times differentiable in $\theta$
- The Fisher information matrix is positive definite at $\theta^*$
When these fail (as in the Uniform example), MLE may still be the natural estimator but its properties differ.
5.6 Numerical MLE
For complex models, the score equation has no closed-form solution and must be solved numerically.
Gradient ascent on log-likelihood. The simplest approach:
$$\theta_{t+1} = \theta_t + \eta\, \nabla_\theta \ell(\theta_t).$$
For neural networks trained with mini-batches: stochastic gradient ascent on the mini-batch log-likelihood. This is exactly SGD on the NLL loss with a negated gradient.
Newton-Raphson / Fisher scoring. Use second-order information:
$$\theta_{t+1} = \theta_t - H(\theta_t)^{-1}\, \nabla_\theta \ell(\theta_t),$$
where $H$ is the Hessian of the log-likelihood. Replacing the Hessian with the expected negative Hessian (Fisher information) gives Fisher scoring:
$$\theta_{t+1} = \theta_t + I(\theta_t)^{-1}\, \nabla_\theta \ell(\theta_t).$$
This is exactly the natural gradient update. Newton-Raphson converges quadratically near the maximum - much faster than gradient ascent - but requires inverting a $k \times k$ matrix per step.
EM algorithm. For models with latent variables $Z$ (mixtures, HMMs, VAEs), the EM algorithm alternates between:
- E-step: compute $Q(\theta \mid \theta_t) = \mathbb{E}_{Z \sim p(z \mid x;\, \theta_t)}\big[\log p(x, Z; \theta)\big]$
- M-step: $\theta_{t+1} = \arg\max_\theta Q(\theta \mid \theta_t)$
EM guarantees that $\ell(\theta_{t+1}) \geq \ell(\theta_t)$ - the log-likelihood is non-decreasing. EM is a special case of variational inference (to be developed in Section04 Bayesian Inference).
Numerical pitfalls:
- Overflow/underflow: compute $\sum_i \log p(x_i; \theta)$, not $\log \prod_i p(x_i; \theta)$ (avoid multiplying probabilities)
- Log-sum-exp trick: for softmax-based likelihoods, $\log \sum_j e^{z_j} = m + \log \sum_j e^{z_j - m}$ with $m = \max_j z_j$
- Flat log-likelihoods: when the curvature is near zero at the solution, gradient methods converge extremely slowly; natural gradient methods help
- Local maxima: the log-likelihood may be multimodal for mixture models and neural networks; multiple restarts are necessary (a worked Newton-Raphson example follows below)
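As a concrete example of numerical MLE, the Gamma shape parameter has no closed-form MLE, but Newton-Raphson on the profiled score equation converges in a few iterations. A sketch assuming NumPy/SciPy (the starting value is a standard approximation; the true parameters are illustrative):

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(8)
x = rng.gamma(shape=3.0, scale=2.0, size=5000)

# Profiling out the scale, the shape alpha solves log(alpha) - digamma(alpha) = s.
s = np.log(x.mean()) - np.log(x).mean()
alpha = (3 - s + np.sqrt((s - 3) ** 2 + 24 * s)) / (12 * s)   # standard initial guess

for _ in range(10):  # Newton-Raphson on the profiled score equation
    f = np.log(alpha) - digamma(alpha) - s
    fprime = 1 / alpha - polygamma(1, alpha)
    alpha -= f / fprime

print(f"alpha_MLE = {alpha:.4f}, scale_MLE = {x.mean()/alpha:.4f}")  # near (3, 2)
```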
6. Method of Moments
6.1 The Method of Moments
The method of moments (MoM), introduced by Karl Pearson (1894), is the oldest systematic estimation procedure. The idea is simple: match sample moments to population moments and solve for .
Definition 6.1 (Method of Moments Estimator). For a $k$-parameter model, the MoM estimator $\hat{\theta}$ solves the system:
$$\mathbb{E}_{\hat{\theta}}[X^j] = \hat{m}_j, \qquad j = 1, \dots, k,$$
where $\hat{m}_j = \frac{1}{n}\sum_{i=1}^n X_i^j$ is the $j$-th sample moment.
Example: Normal distribution. For $\mathcal{N}(\mu, \sigma^2)$ with 2 parameters:
- 1st moment equation: $\hat{\mu} = \bar{x}$
- 2nd moment equation: $\hat{\mu}^2 + \hat{\sigma}^2 = \frac{1}{n}\sum_i x_i^2 \implies \hat{\sigma}^2 = \frac{1}{n}\sum_i (x_i - \bar{x})^2$
For the Gaussian, MoM and MLE coincide.
Example: Gamma distribution. $X \sim \text{Gamma}(\alpha, \beta)$ with mean $\alpha/\beta$ and variance $\alpha/\beta^2$.
Solving: $\hat{\alpha} = \bar{x}^2/\hat{s}^2$, $\hat{\beta} = \bar{x}/\hat{s}^2$, where $\hat{s}^2$ is the sample variance. Unlike the Gaussian case, the Gamma MLE requires numerical optimisation, making MoM attractive for quick estimation.
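A sketch comparing the two estimators on synthetic Gamma data (assuming NumPy/SciPy; the true parameters are illustrative - note that scipy parameterises the Gamma by shape and scale, so the rate is the reciprocal of the scale):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
x = rng.gamma(shape=2.0, scale=1.0 / 3.0, size=2000)   # alpha = 2, rate beta = 3

# Method of moments: alpha = xbar^2 / s^2, beta = xbar / s^2
xbar, s2 = x.mean(), x.var()
print(f"MoM : alpha = {xbar**2 / s2:.3f}, beta(rate) = {xbar / s2:.3f}")

# MLE via scipy (fit returns shape, loc, scale; rate = 1/scale)
a_mle, _, scale_mle = stats.gamma.fit(x, floc=0)
print(f"MLE : alpha = {a_mle:.3f}, beta(rate) = {1/scale_mle:.3f}")
```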
Example: Beta distribution. $X \sim \text{Beta}(\alpha, \beta)$ with mean $m = \frac{\alpha}{\alpha+\beta}$ and variance $v = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$.
Setting $m = \bar{x}$ and $v = \hat{s}^2$, solving the moment equations:
$$\hat{\alpha} = \bar{x}\left(\frac{\bar{x}(1 - \bar{x})}{\hat{s}^2} - 1\right), \qquad \hat{\beta} = (1 - \bar{x})\left(\frac{\bar{x}(1 - \bar{x})}{\hat{s}^2} - 1\right).$$
This requires $\hat{s}^2 < \bar{x}(1 - \bar{x})$ (the variance must be less than the theoretical maximum for the Beta).
Properties of MoM:
- Consistent: by the LLN, sample moments converge to population moments; moment equations are continuous, so MoM estimators are consistent
- Asymptotically normal: by the multivariate CLT and delta method
- Inefficient: generally does not achieve the CRB; MLE has smaller asymptotic variance
6.2 Generalised Method of Moments
The Generalised Method of Moments (GMM), introduced by Hansen (1982), allows for more moment conditions than parameters.
Suppose we have moment conditions $\mathbb{E}[g(X; \theta)] = 0$, where $g : \mathcal{X} \times \Theta \to \mathbb{R}^m$ and $m > k$ (over-identified system).
Sample moment conditions:
$$\hat{g}_n(\theta) = \frac{1}{n}\sum_{i=1}^n g(X_i; \theta).$$
When $m > k$, we cannot set all conditions to zero simultaneously. The GMM estimator minimises the weighted quadratic form:
$$\hat{\theta}_{\text{GMM}} = \arg\min_\theta\ \hat{g}_n(\theta)^\top W\, \hat{g}_n(\theta)$$
for some positive-definite weighting matrix $W$. The optimal weight matrix (minimising asymptotic variance) is $W^* = S^{-1}$ where $S = \mathbb{E}\big[g(X; \theta)\, g(X; \theta)^\top\big]$.
GMM is widely used in econometrics. In ML, it appears implicitly when models are estimated by matching distributional moments rather than maximising likelihood.
6.3 MoM vs MLE
| Property | Method of Moments | Maximum Likelihood |
|---|---|---|
| Computational | Often closed-form | May require numerical optimisation |
| Asymptotic efficiency | Generally inefficient | Efficient (achieves CRB) |
| Consistency | Yes | Yes (under regularity conditions) |
| Robustness | Depends only on moments | Can be sensitive to tail misspecification |
| Applicability | Works even when likelihood is intractable | Requires tractable likelihood |
| Parameter constraints | Can produce invalid estimates (e.g., negative $\hat{\alpha}$ for the Beta) | Respects constraints if optimised over $\Theta$ |
When to prefer MoM:
- The likelihood is intractable (latent variable models without EM)
- Speed matters more than efficiency
- Only the first few moments are of interest
- The distributional assumption may be wrong (moments are more robust)
When to prefer MLE:
- Full likelihood is tractable and the model is well-specified
- Maximum statistical efficiency is needed
- Regularity conditions are satisfied
- A standard reference distribution is being fitted
7. Confidence Intervals
7.1 The Frequentist Interpretation
Definition 7.1 (Confidence Interval). A random interval $[L(X), U(X)]$ is a $1 - \alpha$ confidence interval for $\theta$ if:
$$P_\theta\big(L(X) \leq \theta \leq U(X)\big) \geq 1 - \alpha \quad \text{for all } \theta \in \Theta.$$
The key is that $L$ and $U$ are random (functions of the data $X$), while $\theta$ is fixed (unknown but not random in the frequentist framework).
The correct interpretation: If we repeat the experiment many times and construct a 95% CI each time, 95% of those intervals will contain the true $\theta$. This specific computed interval either contains $\theta$ or it does not - there is no 95% probability attached to this particular interval after observing data.
Common misconceptions:
| Incorrect statement | Why it's wrong |
|---|---|
| "There is a 95% probability that ." | After computing the interval, is either in it or not - no probability remains |
| "95% of the data lies within the CI." | CIs are about the parameter, not the data distribution |
| "If I repeat the experiment, my estimate will be in this CI 95% of the time." | The CI is about the parameter being covered, not about future estimates |
| "A wider CI means I'm 95% more confident." | Width affects precision but all 95% CIs have the same coverage probability |
The Bayesian analogue - "there is a 95% probability that $\theta$ is in this interval" - is a credible interval and requires a prior distribution on $\theta$. The full treatment is in Section04 Bayesian Inference.
7.2 Exact Confidence Intervals via Pivoting
The pivotal method constructs CIs by finding a pivot: a function of data and parameter whose distribution is known and does not depend on $\theta$.
Gaussian mean, $\sigma$ known. Let $X_1, \dots, X_n \sim \mathcal{N}(\mu, \sigma^2)$.
The pivot is $Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim \mathcal{N}(0, 1)$.
Solving for $\mu$: the CI is $\bar{X} \pm z_{\alpha/2}\, \frac{\sigma}{\sqrt{n}}$.
Gaussian mean, $\sigma$ unknown. Replacing $\sigma$ with the sample standard deviation $S$ changes the distribution: $T = \frac{\bar{X} - \mu}{S/\sqrt{n}} \sim t_{n-1}$ (Student's $t$ with $n-1$ degrees of freedom).
CI: $\bar{X} \pm t_{n-1,\, \alpha/2}\, \frac{S}{\sqrt{n}}$
As $n \to \infty$, $t_{n-1} \to \mathcal{N}(0, 1)$ (Student's $t$ converges to standard normal).
STUDENT'S t DISTRIBUTION vs. NORMAL
========================================================================
p(t)
|
|   ---   Normal(0,1)
|   ====  t(df=5)
|   - - - t(df=1) [Cauchy]
|
|        +-----+      Heavier tails of the t distribution
|      ++       ++    reflect additional uncertainty
|    ++           ++  from estimating sigma.
| --+               +--
|--------------------------- t
   -3  -2  -1  0  1  2  3
For df >= 30, t ~ Normal - the practical "large sample" threshold.
========================================================================
Gaussian variance CI. The pivot is $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}$, giving the CI:
$$\left[\frac{(n-1)S^2}{\chi^2_{n-1,\, \alpha/2}},\ \frac{(n-1)S^2}{\chi^2_{n-1,\, 1-\alpha/2}}\right],$$
where $\chi^2_{n-1,\, q}$ denotes the upper-$q$ quantile. Note this CI is asymmetric because the $\chi^2$ distribution is skewed.
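Both exact pivots take two lines each with scipy.stats. A sketch (the simulated data are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.normal(5.0, 2.0, size=25)
n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)

# t-interval for the mean (sigma unknown)
t = stats.t.ppf(0.975, df=n - 1)
print(f"mean 95% CI: [{xbar - t*np.sqrt(s2/n):.3f}, {xbar + t*np.sqrt(s2/n):.3f}]")

# chi-square interval for the variance (asymmetric)
q_upper, q_lower = stats.chi2.ppf([0.975, 0.025], df=n - 1)
print(f"var  95% CI: [{(n-1)*s2/q_upper:.3f}, {(n-1)*s2/q_lower:.3f}]")
```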
7.3 Asymptotic Confidence Intervals
When exact pivots are unavailable, use the asymptotic normality of MLE: for large $n$, $\hat{\theta} \approx \mathcal{N}\big(\theta, \frac{1}{nI(\theta)}\big)$.
Wald interval. Plugging in the MLE for the unknown $\theta$ in the Fisher information:
$$\hat{\theta} \pm z_{\alpha/2}\, \frac{1}{\sqrt{n\, I(\hat{\theta})}}.$$
For Bernoulli with $n$ observations: $\hat{p} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$.
Limitations of the Wald interval near boundaries. For $\hat{p}$ near 0 or 1, the Wald interval can extend outside $[0, 1]$. The Wilson score interval is better-behaved:
$$\frac{\hat{p} + \frac{z^2}{2n} \pm z\sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z^2}{4n^2}}}{1 + z^2/n},$$
where $z = z_{\alpha/2}$. This is the standard CI for proportions in ML evaluation.
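A self-contained Wilson interval helper (a sketch assuming NumPy; the example calls are illustrative):

```python
import numpy as np

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

print(wilson_ci(0, 20))      # stays inside [0, 1], unlike the Wald interval
print(wilson_ci(732, 1000))  # approximately (0.704, 0.759)
```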
Likelihood ratio interval. Based on the likelihood ratio statistic $\Lambda(\theta) = 2\big[\ell(\hat{\theta}) - \ell(\theta)\big]$, which satisfies $\Lambda(\theta) \xrightarrow{d} \chi^2_1$ under the true $\theta$. The CI is the set of values that pass the test:
$$\Big\{\theta : 2\big[\ell(\hat{\theta}) - \ell(\theta)\big] \leq \chi^2_{1,\, \alpha}\Big\}.$$
This is often more accurate than the Wald interval for small $n$.
7.4 Bootstrap Confidence Intervals
The bootstrap (Efron, 1979) is a non-parametric resampling method for constructing CIs without distributional assumptions.
Non-parametric bootstrap algorithm:
- From the original sample $x_1, \dots, x_n$, draw $B$ bootstrap samples (each of size $n$, with replacement)
- Compute the statistic $\hat{\theta}^{*(b)}$ for each bootstrap sample, $b = 1, \dots, B$
- Estimate the sampling distribution of $\hat{\theta}$ by the empirical distribution of $\{\hat{\theta}^{*(1)}, \dots, \hat{\theta}^{*(B)}\}$
Percentile bootstrap CI: Use the $\alpha/2$ and $1 - \alpha/2$ quantiles of $\{\hat{\theta}^{*(b)}\}$ as CI endpoints.
BCa (bias-corrected and accelerated) bootstrap: Adjusts for bias and skewness ("acceleration") in the bootstrap distribution. Typically more accurate than the percentile method for small $n$.
When to use bootstrap:
- The sampling distribution of $\hat{\theta}$ is non-standard (no closed-form pivot)
- The distributional assumptions for parametric CIs are suspect
- The statistic is complex (e.g., a ratio of two MLEs, AUC, BLEU score)
- $n$ is moderate and asymptotic approximations are poor
Bootstrap for ML evaluation (example). To compute a 95% CI for test accuracy:
- Let the test set be $\{(x_i, y_i)\}_{i=1}^n$; accuracy $\hat{a} = \frac{1}{n}\sum_i \mathbb{1}[\hat{y}_i = y_i]$
- Bootstrap: resample the $n$ test examples with replacement $B$ times (e.g., $B = 10{,}000$), compute the accuracy on each bootstrap test set
- CI = [2.5th percentile, 97.5th percentile] of the bootstrap accuracies
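A sketch of this recipe (assuming NumPy; the per-example correctness vector is simulated here, but in practice it comes from a model's predictions on the real test set):

```python
import numpy as np

rng = np.random.default_rng(11)
correct = rng.random(1000) < 0.732   # per-example 0/1 correctness (simulated)

B = 10_000
idx = rng.integers(0, len(correct), size=(B, len(correct)))
boot_acc = correct[idx].mean(axis=1)   # accuracy on each bootstrap resample

lo, hi = np.percentile(boot_acc, [2.5, 97.5])
print(f"accuracy = {correct.mean():.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```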
7.5 Confidence Intervals in ML Evaluation
Understanding CI coverage for ML evaluation prevents erroneous conclusions.
Binary accuracy. Test accuracy $\hat{a} = k/n$ with $n$ examples and $k$ correct. Wilson score CI:
For $n = 100$, $\hat{a} = 0.70$: CI $\approx [0.604, 0.781]$ - width $\approx 0.18$. For $n = 10{,}000$: CI $\approx [0.691, 0.709]$ - width $\approx 0.018$.
This shows why benchmark results on a few hundred test examples are statistically unreliable.
Comparing two models. If Model A achieves accuracy $\hat{a}_A$ and Model B achieves $\hat{a}_B$ on the same $n$ test examples, is A significantly better? Treating the two accuracies as independent, the difference $\hat{a}_A - \hat{a}_B$ has standard error $\sqrt{\frac{\hat{a}_A(1-\hat{a}_A)}{n} + \frac{\hat{a}_B(1-\hat{a}_B)}{n}}$; when the resulting 95% CI for the difference includes zero, the models are not statistically distinguishable.
McNemar's test is more powerful than comparing independent CIs when models are evaluated on the same examples (the paired structure reduces variance). See Section03 Hypothesis Testing for the full treatment.
Multiple benchmark correction. When evaluating on $m$ benchmarks, the probability of at least one spuriously significant result rises. This is the multiple comparison problem - addressed fully in Section03. For reporting CIs, Bonferroni correction replaces $\alpha$ with $\alpha/m$, widening each CI to maintain family-wise coverage.
8. Asymptotic Theory
8.1 Convergence Modes
Before stating the asymptotic normality theorem, we review the convergence modes used.
Almost-sure (a.s.) convergence ($\xrightarrow{a.s.}$): $P\big(\lim_{n\to\infty} \hat{\theta}_n = \theta\big) = 1$. The sequence converges on every sample path except a set of probability zero. This is the strongest mode; the Strong LLN gives $\bar{X}_n \xrightarrow{a.s.} \mu$.
Convergence in probability ($\xrightarrow{p}$): For every $\epsilon > 0$, $P(|\hat{\theta}_n - \theta| > \epsilon) \to 0$. Weaker than a.s. convergence. The Weak LLN gives $\bar{X}_n \xrightarrow{p} \mu$. Sufficient for consistency.
Convergence in distribution ($\xrightarrow{d}$): $F_n(x) \to F(x)$ at every continuity point of $F$. The weakest mode; the CLT gives $\sqrt{n}\,(\bar{X}_n - \mu)/\sigma \xrightarrow{d} \mathcal{N}(0, 1)$. Used for asymptotic normality.
Key theorems for manipulating convergence:
Slutsky's Theorem. If $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{p} c$ (a constant), then:
- $X_n + Y_n \xrightarrow{d} X + c$
- $X_n Y_n \xrightarrow{d} cX$
- $X_n / Y_n \xrightarrow{d} X/c$ (if $c \neq 0$)
Continuous Mapping Theorem. If $X_n \xrightarrow{p} X$ and $g$ is continuous, then $g(X_n) \xrightarrow{p} g(X)$. The same holds for $\xrightarrow{d}$ and $\xrightarrow{a.s.}$.
Application: The MLE's consistency $\hat{\theta}_n \xrightarrow{p} \theta$ implies $I(\hat{\theta}_n) \xrightarrow{p} I(\theta)$ by the continuous mapping theorem (assuming $I$ is continuous). Then by Slutsky: $\sqrt{n\,I(\hat{\theta}_n)}\,(\hat{\theta}_n - \theta) \xrightarrow{d} \mathcal{N}(0, 1)$ - which justifies the plug-in Wald interval.
8.2 The Delta Method
The delta method extends the CLT to smooth functions of asymptotically normal estimators.
Theorem 8.2 (Delta Method - Scalar). Suppose $\sqrt{n}\,(\hat{\theta}_n - \theta) \xrightarrow{d} \mathcal{N}(0, \sigma^2)$ and $g$ is differentiable at $\theta$ with $g'(\theta) \neq 0$. Then:
$$\sqrt{n}\,\big(g(\hat{\theta}_n) - g(\theta)\big) \xrightarrow{d} \mathcal{N}\big(0,\ g'(\theta)^2 \sigma^2\big).$$
Proof sketch: By differentiability, $g(\hat{\theta}_n) \approx g(\theta) + g'(\theta)(\hat{\theta}_n - \theta)$. Multiply through by $\sqrt{n}$: $\sqrt{n}\,\big(g(\hat{\theta}_n) - g(\theta)\big) \approx g'(\theta)\, \sqrt{n}\,(\hat{\theta}_n - \theta) \xrightarrow{d} \mathcal{N}(0, g'(\theta)^2\sigma^2)$.
Examples:
- CI for log-odds. For Bernoulli, the log-odds are $\psi = \log\frac{p}{1-p}$, with MLE $\hat{\psi} = \log\frac{\hat{p}}{1-\hat{p}}$. With $g(p) = \log\frac{p}{1-p}$, $g'(p) = \frac{1}{p(1-p)}$, and $\sigma^2 = p(1-p)$:
$$\sqrt{n}\,(\hat{\psi} - \psi) \xrightarrow{d} \mathcal{N}\left(0,\ \frac{1}{p(1-p)}\right).$$
- CI for standard deviation. Given $\sqrt{n}\,(\hat{\sigma}^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4)$ (the Gaussian case), the delta method with $g(v) = \sqrt{v}$, $g'(v) = \frac{1}{2\sqrt{v}}$:
$$\sqrt{n}\,(\hat{\sigma} - \sigma) \xrightarrow{d} \mathcal{N}\left(0,\ \frac{\sigma^2}{2}\right).$$
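A simulation check of the log-odds result (assuming NumPy; $p$, $n$, and the repetition count are illustrative, with $n$ large enough that $\hat{p}$ never hits 0 or 1):

```python
import numpy as np

rng = np.random.default_rng(12)
p, n, reps = 0.3, 400, 20_000

p_hat = (rng.random((reps, n)) < p).mean(axis=1)
logit_hat = np.log(p_hat / (1 - p_hat))

theory_var = 1 / (n * p * (1 - p))   # delta method: g'(p)^2 * p(1-p) / n
print(f"empirical Var(logit) = {logit_hat.var():.5f}, delta-method = {theory_var:.5f}")
```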
Multivariate delta method. If $\sqrt{n}\,(\hat{\theta}_n - \theta) \xrightarrow{d} \mathcal{N}(0, \Sigma)$ and $g : \mathbb{R}^k \to \mathbb{R}^m$ is differentiable:
$$\sqrt{n}\,\big(g(\hat{\theta}_n) - g(\theta)\big) \xrightarrow{d} \mathcal{N}\big(0,\ J_g(\theta)\, \Sigma\, J_g(\theta)^\top\big),$$
where $J_g(\theta)$ is the Jacobian of $g$ at $\theta$.
8.3 Asymptotic Normality of MLE
Theorem 8.3 (Asymptotic Normality of MLE). Under the Cramer-Rao regularity conditions, if $\hat{\theta}_n$ is the MLE based on $n$ iid observations:
$$\sqrt{n}\,(\hat{\theta}_n - \theta^*) \xrightarrow{d} \mathcal{N}\big(0,\ I(\theta^*)^{-1}\big).$$
Proof sketch for the scalar case. Taylor-expand the score equation around $\theta^*$:
$$0 = s_n(\hat{\theta}_n) \approx s_n(\theta^*) + s_n'(\theta^*)\,(\hat{\theta}_n - \theta^*).$$
Rearranging:
$$\sqrt{n}\,(\hat{\theta}_n - \theta^*) \approx \frac{\frac{1}{\sqrt{n}}\, s_n(\theta^*)}{-\frac{1}{n}\, s_n'(\theta^*)}.$$
Numerator: by the CLT (the per-observation scores are iid with mean 0 and variance $I(\theta^*)$): $\frac{1}{\sqrt{n}}\, s_n(\theta^*) \xrightarrow{d} \mathcal{N}(0, I(\theta^*))$.
Denominator: by the LLN: $-\frac{1}{n}\, s_n'(\theta^*) \xrightarrow{p} I(\theta^*)$.
By Slutsky: $\sqrt{n}\,(\hat{\theta}_n - \theta^*) \xrightarrow{d} \mathcal{N}\big(0,\ I(\theta^*)^{-1}\big)$.
Implications for practice:
- Large sample CI: For $n \gtrsim 30$ (rule of thumb), the CI is $\hat{\theta} \pm z_{\alpha/2}\big/\sqrt{n\,I(\hat{\theta})}$
- Convergence rate: The MLE error is $O_p(1/\sqrt{n})$ - quadrupling the data halves the standard error
- Efficiency: The asymptotic covariance achieves the CRB $I(\theta^*)^{-1}$ - no consistent estimator is asymptotically better
- Standard errors in ML: The standard errors reported in logistic regression, GLMs, and survival models are all based on this theorem, using the observed or expected Fisher information
8.4 Misspecified Models
In practice, the model rarely contains the true data-generating distribution $P^*$. What happens when the model is misspecified?
Theorem 8.4 (MLE under Misspecification). Under regularity conditions, even when $P^* \notin \mathcal{P}$, the MLE converges to:
$$\theta^\dagger = \arg\min_{\theta \in \Theta} D_{\text{KL}}\big(P^* \,\|\, p(\cdot; \theta)\big).$$
The "pseudo-true parameter" $\theta^\dagger$ is the closest point in the model family to the truth in KL divergence. The asymptotic distribution under misspecification involves the sandwich estimator:
$$\sqrt{n}\,(\hat{\theta}_n - \theta^\dagger) \xrightarrow{d} \mathcal{N}\big(0,\ A^{-1} B A^{-1}\big), \qquad A = -\mathbb{E}\big[\nabla^2_\theta \log p\big], \quad B = \operatorname{Var}\big(\nabla_\theta \log p\big),$$
where $B$ is the actual variance of the score (not equal to $A$ when misspecified). Under correct specification, $A = B = I(\theta^*)$ and the sandwich collapses to $I(\theta^*)^{-1}$.
Implications for LLM training: Language models trained by NLL minimisation are nearly always misspecified (the model cannot perfectly represent natural language). The trained model converges to the KL-minimising distribution within the model class. This is why language models exhibit characteristic failure modes related to the model architecture's inductive biases - they converge to the nearest representable distribution, not the true distribution.
9. Applications in Machine Learning
9.1 MLE Is Cross-Entropy Training
We derived in Section5.4 that MLE = cross-entropy minimisation. Here we examine the consequences.
NLL loss for common model types:
| Task | Model | NLL loss | Standard name |
|---|---|---|---|
| Binary classification | $p_\theta(y{=}1 \mid x) = \sigma(f_\theta(x))$ | $-\sum_i \big[y_i \log \hat{p}_i + (1-y_i)\log(1-\hat{p}_i)\big]$ | Binary cross-entropy |
| Multi-class | $p_\theta(y \mid x) = \text{softmax}(f_\theta(x))_y$ | $-\sum_i \log \hat{p}_{i,\, y_i}$ | Categorical cross-entropy |
| Regression | $y \sim \mathcal{N}(f_\theta(x), \sigma^2)$ | $\frac{1}{2\sigma^2}\sum_i \big(y_i - f_\theta(x_i)\big)^2 + \text{const}$ | MSE (up to constant) |
| Language model | $p_\theta(x_t \mid x_{<t})$ | $-\sum_t \log p_\theta(x_t \mid x_{<t})$ | NLL / perplexity |
Temperature scaling for calibration. After training a classifier, its softmax probabilities are often overconfident (Guo et al., 2017). Temperature scaling finds $\hat{T} = \arg\max_T \sum_i \log\, \text{softmax}(z_i / T)_{y_i}$, where the logits $z_i$ are divided by a single scalar $T$. This is a 1-parameter MLE problem on a calibration set, provably maintaining accuracy while improving Expected Calibration Error (ECE).
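A sketch of temperature fitting as a 1-parameter MLE (assuming NumPy/SciPy; the logits and labels here are synthetic and unrelated, so the fit pushes $T$ high, toward uniform predictions - in practice one fits on a real held-out calibration set):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll_at_temperature(T, logits, y):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # log-sum-exp stabilisation
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

rng = np.random.default_rng(13)
logits = 3.0 * rng.normal(size=(500, 10))   # deliberately overconfident logits
y = rng.integers(0, 10, size=500)

res = minimize_scalar(nll_at_temperature, bounds=(0.05, 20.0), method="bounded",
                      args=(logits, y))
print(f"fitted temperature T = {res.x:.3f}")
```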
Label smoothing. Standard MLE on one-hot targets drives the predicted probability of the correct class toward 1 (i.e., the logit gap toward $\infty$), making the model overconfident. Label smoothing (Szegedy et al., 2016) uses target $1 - \epsilon$ on the true class for small $\epsilon$ (typically 0.1). This regularises MLE toward a mixture of the data distribution and uniform - reducing overconfidence without sacrificing accuracy.
9.2 Natural Gradient and Fisher Information
The problem with Euclidean gradient descent. The ordinary gradient $\nabla_\theta \ell$ is the direction of steepest ascent in Euclidean space. But this is parameterisation-dependent: rescaling parameters changes the gradient direction. Different parameterisations of the same model family give different gradient descent trajectories.
Information geometry. The space of probability distributions has a natural Riemannian metric - the Fisher information metric $F(\theta)$. In this curved space, the steepest ascent direction is $F(\theta)^{-1}\nabla_\theta \ell$.
Natural gradient descent (Amari, 1998):
$$\theta_{t+1} = \theta_t + \eta\, F(\theta_t)^{-1}\, \nabla_\theta \ell(\theta_t).$$
Properties:
- Parameterisation-invariant: changing parameterisation gives the same update in the function space
- Adaptive: steps are small in directions of high curvature (where the loss changes rapidly), large in flat directions
- Connection to Newton's method: natural gradient = Newton's method when the Hessian equals the FIM (true for GLMs at the maximum)
K-FAC (Martens & Grosse, 2015). Full FIM inversion is - impossible for neural networks. K-FAC approximates layer-wise using the Kronecker structure:
where and . This reduces the cost from to per layer, making natural gradient feasible for moderate-sized networks.
Shampoo (Gupta et al., 2018) is a related second-order optimiser that maintains full-matrix preconditioners per parameter group, used at Google-scale training.
9.3 Confidence Intervals for Model Evaluation
The evaluation uncertainty problem. A model is evaluated on a test set of $n$ examples. The reported accuracy $\hat{a}$ is an estimate of the true accuracy $a$. Without a CI, it is impossible to determine if two models are genuinely different or differ only by chance.
Standard practice for ML papers:
- Wilson score CI for binary metrics (accuracy, F1 per class)
- Bootstrap CI for non-standard metrics (BLEU, ROUGE, perplexity)
- Correct for multiple benchmarks using Bonferroni or BH correction
Confidence interval for AUC. The Area Under the ROC Curve is a function of ranks, not a simple proportion. Its asymptotic distribution follows from the Wilcoxon-Mann-Whitney statistic, giving a Wald-type CI $\widehat{\text{AUC}} \pm z_{\alpha/2}\, \widehat{\text{SE}}$, with the standard error estimated by, e.g., the Hanley-McNeil formula.
For complex metrics like this, bootstrap is typically preferred.
LLM benchmark confidence. Major benchmarks (MMLU, HumanEval, HellaSwag) range from a few hundred to ~14,000 questions. For MMLU ($n \approx 14{,}000$), the 95% CI for an accuracy of 70% is approximately $\pm 0.8$ percentage points. For HumanEval ($n = 164$), the CI is roughly $\pm 7$ percentage points - models within 7 percentage points cannot be reliably ranked.
9.4 Estimating Scaling Laws
Scaling laws (Kaplan et al., 2020; Hoffmann et al., 2022) describe how model performance depends on model size $N$ (parameters), training tokens $D$, and compute $C$. These are regression problems - a direct application of estimation theory.
Chinchilla scaling law (Hoffmann et al., 2022). The loss is modelled as:
$$L(N, D) = E + \frac{A}{N^\alpha} + \frac{B}{D^\beta},$$
where $(E, A, B, \alpha, \beta)$ are estimated by nonlinear least squares (MLE under Gaussian error) on $(N, D, L)$ observations from hundreds of training runs. The fitted parameters (the paper's Approach 3 reports $\alpha \approx 0.34$, $\beta \approx 0.28$) are estimates with associated uncertainty.
The optimal allocation. For a fixed compute budget $C \approx 6ND$ (FLOPs), the optimal $(N^*, D^*)$ is found by constrained optimisation of the fitted loss model. Chinchilla showed that GPT-3 and other earlier large models were undertrained relative to their model size - the optimal allocation uses smaller models trained on more tokens, a recipe later adopted by LLaMA-1 (Touvron et al., 2023).
For AI: The Chinchilla scaling law computation is MLE applied at the scale of frontier AI research. The "parameters" being estimated ($E, A, B, \alpha, \beta$) are 5 numbers that determine the optimal architecture and training strategy for models costing millions of dollars to train.
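A sketch of the fitting step on synthetic data (assuming NumPy/SciPy; the "true" constants below are the rounded Approach-3 values reported by Hoffmann et al. (2022), while the run distribution and noise level are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def chinchilla_loss(ND, E, A, B, alpha, beta):
    N, D = ND
    return E + A / N**alpha + B / D**beta

rng = np.random.default_rng(14)
N = 10 ** rng.uniform(7, 10, size=50)    # model sizes (synthetic runs)
D = 10 ** rng.uniform(9, 12, size=50)    # token counts
true = (1.69, 406.4, 410.7, 0.34, 0.28)  # Hoffmann et al. (2022), Approach 3
L = chinchilla_loss((N, D), *true) + rng.normal(0, 0.01, size=50)

popt, pcov = curve_fit(chinchilla_loss, (N, D), L, p0=(1.5, 300, 300, 0.3, 0.3))
se = np.sqrt(np.diag(pcov))
for name, est, s in zip(["E", "A", "B", "alpha", "beta"], popt, se):
    print(f"{name:5s} = {est:8.3f} +/- {1.96*s:.3f}")
```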
9.5 Fisher Information in Fine-Tuning
Elastic Weight Consolidation (EWC, Kirkpatrick et al., 2017). In continual learning, a model trained sequentially on tasks tends to forget earlier tasks - catastrophic forgetting. EWC prevents this by penalising, while training on task B, changes to weights that were important for task A:
$$\mathcal{L}(\theta) = \mathcal{L}_B(\theta) + \frac{\lambda}{2}\sum_j F_j\, \big(\theta_j - \theta^*_{A,j}\big)^2,$$
where $F_j$ is the $j$-th diagonal element of the Fisher information matrix computed on task A's data: $F_j = \mathbb{E}\big[\big(\partial_{\theta_j} \log p_\theta(y \mid x)\big)^2\big]$.
Parameters with high $F_j$ are those to which the task-A likelihood is most sensitive - changing them most damages task-A performance. EWC protects these parameters with strong quadratic penalties.
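A sketch of the diagonal Fisher computation and EWC penalty for a logistic-regression "task A" (assuming NumPy; this uses the empirical Fisher - mean squared per-example gradients at the task-A solution - and all data and constants are synthetic):

```python
import numpy as np

def diag_fisher(w, X, y):
    """Diagonal empirical Fisher: mean squared per-example NLL gradient."""
    p = 1 / (1 + np.exp(-X @ w))
    grads = (p - y)[:, None] * X          # per-example gradient of the NLL
    return (grads ** 2).mean(axis=0)

def ewc_penalty(w, w_star, F_diag, lam=100.0):
    return 0.5 * lam * np.sum(F_diag * (w - w_star) ** 2)

rng = np.random.default_rng(15)
X = rng.normal(size=(1000, 5))
w_star = np.array([2.0, -1.0, 0.5, 0.0, 0.0])   # synthetic "task A" solution
y = (rng.random(1000) < 1 / (1 + np.exp(-X @ w_star))).astype(float)

F = diag_fisher(w_star, X, y)
print("diagonal Fisher:", np.round(F, 3))   # larger entries = more protected
print("penalty for moving w[0] by 1:", ewc_penalty(w_star + np.eye(5)[0], w_star, F))
```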
LoRA as low-rank MLE (Hu et al., 2022). LoRA fine-tunes large models by constraining weight updates to be low-rank: $W = W_0 + BA$, where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and $r \ll \min(d, k)$. This is MLE with a rank constraint on the parameter update - the MLE over a restricted parameter manifold. The rank $r$ controls the trade-off between expressivity and the number of estimated parameters.
Optimal Brain Damage (LeCun et al., 1990). Neural network pruning by removing parameters that, according to a second-order Taylor expansion, increase the loss least. The per-parameter importance is approximately $\frac{1}{2} H_{jj}\, \theta_j^2$, where $H_{jj}$ is the diagonal Hessian element. Replacing $H_{jj}$ with $F_{jj}$ (the diagonal FIM) recovers FIM-based pruning - estimation theory meets efficient neural architecture.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | "The 95% CI means there is a 95% probability that ." | After observing data, is either in or not - no randomness remains. The probability refers to the procedure, not this interval. | Say "In repeated experiments, 95% of such intervals cover ." Use Bayesian credible intervals if a posterior probability statement is needed. |
| 2 | Confusing bias with MSE, treating "unbiased" as synonymous with "accurate." | Low bias says nothing about variance or overall error. An unbiased estimator with high variance can have larger MSE than a biased estimator with low variance. | Always decompose MSE = Bias^2 + Var and consider both terms. |
| 3 | Using as the unbiased variance estimator. | The estimator is the MLE and is biased; . | Use (Bessel's correction) for unbiased estimation. |
| 4 | Treating the MLE as automatically unbiased. | MLE is consistent and asymptotically efficient but not generally unbiased. The Gaussian variance MLE, Uniform maximum MLE, and many others are biased. | Compute explicitly or use the delta method for composite parameters. |
| 5 | Applying the CRB to biased estimators using the unbiased form . | The CRB for biased estimators is . Using the unbiased form gives an incorrect lower bound. | For biased estimators, always use the biased-estimator CRB, or switch to MSE rather than variance comparisons. |
| 6 | Using the Wald CI for proportions near 0 or 1 (e.g., , ). | The Wald interval can extend below 0 or above 1, and has poor coverage near the boundary. | Use the Wilson score interval or Clopper-Pearson exact interval for proportions near 0 or 1. |
| 7 | Treating the Fisher information (expected curvature) as identical to the Hessian of the log-likelihood (observed curvature). | equals the expected Hessian. For a specific sample, the observed Hessian differs. The observed Hessian is used in practice (faster) but they are equal only in expectation. | Use observed Hessian when computational speed is needed; expected FIM for theoretical comparisons. Both give consistent estimates of . |
| 8 | Applying the MLE invariance property in reverse: assuming is unbiased for because is unbiased for . | MLE invariance says , but says nothing about bias. If is nonlinear, Jensen's inequality shows in general. | Use the delta method to compute the asymptotic bias of and add a bias correction if needed. |
| 9 | Applying asymptotic () CI formulas for small (e.g., ). | Asymptotic normality of MLE typically requires - depending on the model. For small , the -distribution, exact intervals, or bootstrap should be used. | Use the -distribution CI for Gaussian mean with small ; bootstrap CI for non-standard statistics; exact binomial CI for small count data. |
| 10 | Using the sum of likelihoods instead of the product (or sum of log-likelihoods). | The likelihood function is the joint probability of the data - a product for independent observations, not a sum. Maximising the sum gives a different, generally inconsistent estimator. | Always write (log-likelihood = sum of log densities). |
11. Exercises
Exercise 1 [*] MLE for Bernoulli and Poisson.
(a) Given $n = 10$ coin flips with outcomes (6 heads, 4 tails), compute $\hat{p}_{\text{MLE}}$ and its standard error $\sqrt{\hat{p}(1-\hat{p})/n}$.
(b) For the Poisson model with $n$ iid observations summing to $\sum_i x_i$, derive the MLE $\hat{\lambda}$ by solving the score equation and verify it is a maximum (second derivative < 0).
(c) Using the MLE from (b) and the Fisher information $I(\lambda) = 1/\lambda$ per observation, verify the MLE achieves the CRB: $\text{Var}(\hat{\lambda}) = \frac{\lambda}{n} = \frac{1}{n\, I(\lambda)}$.
Exercise 2 [*] Bias of the MLE variance estimator.
Let $X_1,\dots,X_n \sim \mathcal{N}(\mu, \sigma^2)$ with both parameters unknown.
(a) Prove that $\hat{\sigma}^2_{\text{MLE}} = \frac{1}{n}\sum_i (X_i - \bar{X})^2$ has $E[\hat{\sigma}^2_{\text{MLE}}] = \frac{n-1}{n}\sigma^2$. (Hint: write $\sum_i (X_i - \bar{X})^2$ in terms of $\sum_i (X_i - \mu)^2$ and $n(\bar{X} - \mu)^2$.)
(b) Show that the corrected estimator $s^2 = \frac{n}{n-1}\hat{\sigma}^2_{\text{MLE}} = \frac{1}{n-1}\sum_i (X_i - \bar{X})^2$ is unbiased.
(c) By MLE invariance, $\hat{\sigma}_{\text{MLE}} = \sqrt{\hat{\sigma}^2_{\text{MLE}}}$. Is this an unbiased estimator of $\sigma$? If not, what is the sign of its bias? Use Jensen's inequality.
Exercise 3 [*] Cramer-Rao bound verification.
For $n$ iid observations from $\text{Exponential}(\lambda)$ with density $f(x;\lambda) = \lambda e^{-\lambda x}$:
(a) Compute the score $s(\lambda; x) = \frac{\partial}{\partial \lambda}\log f(x;\lambda) = \frac{1}{\lambda} - x$ and verify $E[s(\lambda; X)] = 0$.
(b) Compute the Fisher information $I(\lambda) = 1/\lambda^2$ using both the variance-of-score and negative-expected-Hessian formulas and verify they are equal.
(c) Show that $\hat{\lambda} = 1/\bar{X}$ is the MLE and compute its asymptotic variance using the delta method applied to $g(\mu) = 1/\mu$ with $\mu = E[X] = 1/\lambda$. Verify that $\text{Var}_{\text{asym}}(\hat{\lambda}) = \frac{\lambda^2}{n} = \frac{1}{n\, I(\lambda)}$ - the CRB is achieved asymptotically.
Exercise 4 [**] Method of Moments for the Beta distribution.
Let $X_1,\dots,X_n \sim \text{Beta}(\alpha, \beta)$ with moments $E[X] = \frac{\alpha}{\alpha+\beta}$ and $\text{Var}(X) = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$.
(a) Derive the MoM estimators $\hat{\alpha}$ and $\hat{\beta}$ in terms of $\bar{x}$ and $s^2$.
(b) Show that the MoM estimates are only valid when $s^2 < \bar{x}(1 - \bar{x})$. What is the statistical interpretation of this constraint?
(c) Generate samples from a Beta distribution with known parameters and compute both the MoM and the MLE (via numerical optimisation). Compare the estimates and their standard errors. Which is more efficient?
Exercise 5 [**] Bootstrap confidence interval.
Let $x_1,\dots,x_{10}$ be a sample of size 10.
(a) Compute the sample median $\hat{m}$ and the 95% asymptotic CI for the median using the fact that, for a distribution with density $f(m)$ at the true median $m$, $\sqrt{n}\,(\hat{m} - m) \xrightarrow{d} \mathcal{N}\!\left(0, \frac{1}{4 f(m)^2}\right)$.
(b) Implement the non-parametric bootstrap with $B$ resamples and compute the percentile bootstrap CI for the median. Compare the width to the asymptotic CI.
(c) Is the bootstrap CI or the asymptotic CI more reliable here, and why?
Exercise 6 [**] Asymptotic normality simulation.
Verify the asymptotic normality of the Bernoulli MLE empirically.
(a) For a true parameter $p$ and a range of sample sizes $n$, simulate many MLEs $\hat{p} = \bar{x}$ and plot their distributions.
(b) For each $n$, standardise: $z = \frac{\hat{p} - p}{\sqrt{p(1-p)/n}}$ and overlay the $\mathcal{N}(0, 1)$ PDF. At what $n$ does the approximation appear adequate?
(c) Compute the empirical coverage of the 95% Wald CI at each $n$. At what $n$ does coverage first reach 94%-96%? Does the CRB hold: is the empirical variance of $\hat{p}$ consistent with the bound $\frac{p(1-p)}{n}$?
Exercise 7 [***] Fisher information for a neural network layer.
Consider a one-layer softmax classifier $p(y = k \mid x; W) = \text{softmax}(Wx)_k$ for $y \in \{1,\dots,K\}$, where $W \in \mathbb{R}^{K \times d}$.
(a) Derive the score $\nabla_W \log p(y \mid x; W) = (e_y - p)\, x^\top$, where $p = \text{softmax}(Wx)$ and $e_y$ is the one-hot vector for $y$.
(b) Show that the Fisher information matrix with respect to $\mathrm{vec}(W)$ (a $Kd \times Kd$ matrix) is: $F = E_x\!\left[\left(\operatorname{diag}(p) - p p^\top\right) \otimes x x^\top\right]$ (the Kronecker ordering depends on the vectorisation convention).
(c) Identify the two factors in the K-FAC approximation: $A = E[x x^\top]$ (input covariance) and $S = E[\operatorname{diag}(p) - p p^\top]$ (output covariance), so that $F \approx S \otimes A$. Simulate this for small $d$ and $K$ and compare to the full FIM.
Exercise 8 [***] Chinchilla scaling law estimation.
The Chinchilla model (Hoffmann et al., 2022) estimates language model loss as $L(N, D) = E + \frac{A}{N^\alpha} + \frac{B}{D^\beta}$.
(a) Given synthetic training run data (simulate 50 runs with $(N, D)$ pairs and losses $L(N, D)$ with added Gaussian noise), fit the 5 parameters $(E, A, B, \alpha, \beta)$ by nonlinear least squares using scipy.optimize.curve_fit.
(b) Compute 95% CIs for each parameter using the covariance matrix returned by curve_fit (which estimates $\hat{\sigma}^2 (J^\top J)^{-1}$, where $J$ is the Jacobian at the solution).
(c) For a fixed compute budget $C \approx 6ND$ (FLOPs), find the optimal $(N^*, D^*)$ by minimising $L(N, D)$ over the constraint. How sensitive is this optimal allocation to uncertainty in $\alpha$ and $\beta$?
12. Why This Matters for AI (2026 Perspective)
| Concept | AI/LLM Impact |
|---|---|
| MLE = cross-entropy minimisation | Every language model (GPT-4, LLaMA-3, Gemini, Claude) is trained by NLL minimisation; MLE provides the theoretical foundation for why this converges to the right distribution |
| Fisher information matrix | K-FAC (Martens & Grosse, 2015) enables tractable second-order optimisation for deep networks; Shampoo (Google) uses full-matrix FIM approximations in large-scale training |
| Natural gradient | Invariant to reparametrisation; theoretically optimal steepest direction in distribution space; practical approximations (K-FAC, SOAP) achieve closer-to-optimal convergence rates |
| Asymptotic normality of MLE | Justifies confidence intervals for model parameters; provides the basis for Laplace approximation in Bayesian neural networks (Section04) |
| Cramer-Rao bound | Fundamental limit on estimation from data; explains why larger datasets always help (estimation variance is bounded below by $\frac{1}{n I(\theta)} \to 0$); motivates data-efficient fine-tuning methods |
| Confidence intervals | Benchmark evaluation without CIs is scientifically misleading; Wilson CIs for accuracy, bootstrap CIs for composite metrics; the HELM/OpenLLM leaderboards report point estimates without CIs - a known limitation |
| Bias-variance decomposition | Theoretical foundation for regularisation in ML: L2/L1 regularisation = adding bias to reduce variance; early stopping, dropout, and weight decay all operate on this trade-off |
| Sufficient statistics | Exponential family sufficient statistics underlie variational autoencoders (encoder outputs $\mu$, $\sigma^2$ of the Gaussian posterior) and normalising flows |
| Scaling law estimation | Chinchilla (Hoffmann et al., 2022) uses nonlinear MLE to fit $(N, D, L)$ data; the estimated scaling exponents $\hat{\alpha}$ and $\hat{\beta}$ determine optimal model sizes at every compute budget |
| Elastic weight consolidation | FIM-based penalty prevents catastrophic forgetting in continual learning; applied in fine-tuning pipelines to protect pre-trained capabilities while adapting to new tasks |
| Temperature calibration | MLE on a calibration set finds the scalar temperature $T$ that minimises NLL; well-calibrated models produce reliable uncertainty estimates for downstream decision-making |
| Bootstrap evaluation | Non-parametric CI construction for arbitrary metrics (BLEU, pass@k, ECE); essential for comparing models with complex evaluation protocols |
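To make the temperature-calibration row concrete, here is a minimal sketch of temperature scaling as a one-dimensional MLE. The logits and labels are synthetic stand-ins for a real calibration set, and the generating temperature of 3 is an arbitrary illustrative choice.

```python
# Temperature scaling as 1-D MLE: find the scalar T minimising the NLL of
# held-out (logit, label) pairs. Data here are synthetic, for illustration only.
import numpy as np
from scipy.optimize import minimize_scalar

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

rng = np.random.default_rng(7)
n, K = 2000, 10
logits = 3.0 * rng.normal(size=(n, K))     # deliberately overconfident logits
# Labels drawn from the tempered distribution, so T = 3 is recoverable
probs = np.exp(log_softmax(logits / 3.0))
labels = np.array([rng.choice(K, p=p) for p in probs])

def nll(T):
    return -log_softmax(logits / T)[np.arange(n), labels].mean()

T_hat = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x
print(T_hat)  # ~3: the MLE recovers the calibrating temperature
```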
13. Conceptual Bridge
Estimation theory is the bridge between probability and action. Probability theory (Chapter 6) defines random variables, distributions, and their properties - it answers "if we know the model, what data should we expect?" Estimation theory inverts this: "given data, what can we infer about the model?" This inversion from data to parameters is the core operation of every machine learning training algorithm.
The section you have just completed builds on Descriptive Statistics (Section01) in a crucial way: sample statistics like $\bar{x}$ and $s^2$ were introduced there as data summaries; here they are elevated to estimators - random variables with sampling distributions, bias, variance, and convergence properties. Bessel's correction (the $n-1$ denominator) was motivated in Section01 by practicality; here it is derived from the requirement of unbiasedness.
Looking forward to Hypothesis Testing (Section03): the confidence intervals of this section have a duality with hypothesis tests - rejecting $H_0: \theta = \theta_0$ at level $\alpha$ is equivalent to $\theta_0$ falling outside the $1-\alpha$ CI. The sampling distributions developed here (Gaussian, Student's $t$, chi-squared, $F$) are the exact distributions that hypothesis tests use.
The next major extension is Bayesian Inference (Section04), which reframes the estimation problem by treating $\theta$ as a random variable with a prior distribution $p(\theta)$. The posterior $p(\theta \mid \mathbf{x}) \propto p(\mathbf{x} \mid \theta)\, p(\theta)$ combines the likelihood (covered here) with the prior. MAP estimation - maximising the posterior - is MLE regularised by the log-prior. L2 regularisation is MAP with a Gaussian prior; L1 regularisation is MAP with a Laplace prior. The full treatment of conjugate priors, credible intervals, and MCMC is in Section04.
ESTIMATION THEORY IN THE CURRICULUM
========================================================================
Chapter 6 - Probability Theory
| Section01 Random Variables (distributions, CDF/PDF)
| Section04 Expectation and Moments (E[X], Var(X), CLT, LLN)
| Section03 Joint Distributions (Bayes' theorem)
+------------------------------------------+
| prerequisites
Chapter 7 - Statistics v
| Section01 Descriptive Statistics ----------------------------------
| sample mean, variance, data summaries -> estimators
| covariance, correlation
|
+--> Section02 ESTIMATION THEORY (this section)
| - Point estimators: bias, variance, MSE
| - Cramer-Rao bound, Fisher information
| - MLE: derivation, properties, examples
| - Asymptotic normality, delta method
| - Confidence intervals (frequentist)
| - MLE = cross-entropy training
|
+--> Section03 Hypothesis Testing (duality: CIs <-> tests)
      | p-values, power, t/z/chi^2 tests, A/B testing
|
+--> Section04 Bayesian Inference (extension: add prior to likelihood)
| posterior = likelihood x prior, MAP = regularised MLE
| conjugate priors, MCMC, variational inference
|
+--> Section05 Time Series (sequential estimation, Kalman filter)
|
+--> Section06 Regression Analysis (applied MLE: OLS, Ridge, Lasso)
Chapter 8 - Optimisation (natural gradient uses FIM)
Chapter 9 - Information Theory (FIM and Shannon information)
========================================================================
The Fisher information matrix is the central object connecting estimation theory outward: it bounds estimation variance (CRB), defines the geometry of natural gradient optimisation (Ch8), underpins the Laplace approximation in Bayesian inference (Section04), and is related to the Shannon capacity of statistical experiments (Ch9). Understanding the FIM deeply is understanding the mathematical infrastructure of modern ML training.
End of Section02 Estimation Theory
<- Back to Chapter 7: Statistics | Next: Hypothesis Testing ->
Appendix A: Exponential Families
Exponential families are the most important class of parametric models in statistics, unifying the Gaussian, Bernoulli, Poisson, Exponential, Gamma, Beta, and dozens of other distributions under one framework.
Definition A.1 (Exponential Family). A parametric family $\{p(x;\theta)\}$ is an exponential family if it can be written in the form:
$$p(x; \theta) = h(x)\, \exp\!\left(\eta(\theta)^\top T(x) - A(\eta)\right)$$
where:
- $\eta(\theta)$ is the natural (canonical) parameter
- $T(x)$ is the sufficient statistic (vector of natural statistics)
- $h(x)$ is the base measure
- $A(\eta)$ is the log-partition function (ensures normalisation)
The log-partition function encodes all moments: $\nabla_\eta A(\eta) = E[T(X)]$ and $\nabla^2_\eta A(\eta) = \operatorname{Cov}(T(X))$.
Verification examples:
Bernoulli. $p(x; p) = p^x (1-p)^{1-x}$. Write as $\exp\!\left(x \log\frac{p}{1-p} + \log(1-p)\right)$. Natural parameter $\eta = \log\frac{p}{1-p}$ (the log-odds), sufficient statistic $T(x) = x$, $A(\eta) = \log(1 + e^\eta)$.
Gaussian. $\mathcal{N}(\mu, \sigma^2)$: natural parameters $\eta = \left(\frac{\mu}{\sigma^2}, -\frac{1}{2\sigma^2}\right)$, sufficient statistics $T(x) = (x, x^2)$.
Key properties of exponential families for MLE:
- MLE score equation: $\hat{\eta}$ satisfies $\nabla_\eta A(\hat{\eta}) = \frac{1}{n}\sum_{i=1}^n T(x_i)$ - the MLE moment-matches the sufficient statistics.
- MLE = MoM: For exponential families, MLE and method of moments coincide (both set $E_{\hat{\eta}}[T(X)] = \frac{1}{n}\sum_i T(x_i)$).
- Efficient MLE: The MLE achieves the CRB for all exponential families - it is always efficient.
- Log-likelihood is concave: For minimal exponential families, $\ell(\eta)$ is strictly concave - the MLE is the unique global maximum, and Newton-Raphson converges from any starting point.
- Complete sufficient statistics: $T(\mathbf{x}) = \sum_i T(x_i)$ is a complete sufficient statistic, so the MLE is the UMVUE by Lehmann-Scheffe.
Importance for ML: The categorical distribution (softmax output of neural networks), Gaussian (regression, VAE latent space), Bernoulli (binary classification), and Poisson (count data, Poisson regression) are all exponential families. The VAE encoder outputs the parameters of the Gaussian posterior - equivalently, its sufficient statistics. Generalised linear models (GLMs, of which logistic regression is a special case) are defined precisely as: exponential family response + linear natural parameter $\eta = w^\top x$.
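A quick numerical check of the moment identities above for the Bernoulli family. The finite-difference derivatives are a sketch; an autodiff library would compute the same quantities exactly.

```python
# Check A'(eta) = E[T(X)] = p and A''(eta) = Var(X) = p(1-p) for the Bernoulli,
# using A(eta) = log(1 + e^eta) as derived above.
import numpy as np

p = 0.3
eta = np.log(p / (1 - p))            # natural parameter (log-odds)
A = lambda e: np.log1p(np.exp(e))    # log-partition function
eps = 1e-4

A1 = (A(eta + eps) - A(eta - eps)) / (2 * eps)             # ~ A'(eta)
A2 = (A(eta + eps) - 2 * A(eta) + A(eta - eps)) / eps**2   # ~ A''(eta)

print(A1, p)             # ~0.30 = E[X]
print(A2, p * (1 - p))   # ~0.21 = Var(X)
```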
Appendix B: Sufficient Statistics - Detailed Examples
B.1 The Exponential Distribution
For $X_1,\dots,X_n \sim \text{Exponential}(\lambda)$, the joint density is $f(\mathbf{x};\lambda) = \lambda^n e^{-\lambda \sum_i x_i}$.
By the factorisation theorem: $T(\mathbf{x}) = \sum_i x_i$ (total waiting time) is a sufficient statistic. The joint density factors as $g(T(\mathbf{x});\lambda)\, h(\mathbf{x})$ with $g(t;\lambda) = \lambda^n e^{-\lambda t}$ and $h(\mathbf{x}) = 1$.
MLE: $\hat{\lambda} = n / \sum_i x_i = 1/\bar{x}$, which depends on the data only through $T(\mathbf{x})$ - consistent with sufficiency.
B.2 Order Statistics and the Uniform Distribution
For $X_1,\dots,X_n \sim \text{Uniform}(0, \theta)$, the joint density is $f(\mathbf{x};\theta) = \theta^{-n}\,\mathbb{1}\{0 \le x_{(n)} \le \theta\}$.
The sufficient statistic is $T(\mathbf{x}) = X_{(n)} = \max_i X_i$ (the maximum order statistic). The MLE $\hat{\theta} = X_{(n)}$ is a function of $T$ as expected.
Bias correction. Since $E[X_{(n)}] = \frac{n}{n+1}\theta$, the unbiased estimator is $\frac{n+1}{n} X_{(n)}$. By Rao-Blackwell applied to $2X_1$ (an unbiased estimator using only the first observation): conditioning on $X_{(n)}$ gives $E[2X_1 \mid X_{(n)}] = \frac{n+1}{n} X_{(n)}$, so the Rao-Blackwell estimator is indeed $\frac{n+1}{n} X_{(n)}$.
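A simulation confirming both the bias of $X_{(n)}$ and the unbiasedness of the corrected estimator; the values of $\theta$ and $n$ below are arbitrary illustrative choices.

```python
# Uniform(0, theta) maximum: E[X_(n)] = n/(n+1) * theta (biased MLE),
# while (n+1)/n * X_(n) is unbiased.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 5.0, 10, 100_000
samples = rng.uniform(0, theta, size=(reps, n))
mle = samples.max(axis=1)

print("E[X_(n)]          ~", mle.mean(), "  theory:", n / (n + 1) * theta)  # ~4.545
print("E[(n+1)/n X_(n)]  ~", ((n + 1) / n * mle).mean(), "  target:", theta)
```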
Appendix C: The EM Algorithm - Connecting MLE to Latent Variable Models
Many models of interest (Gaussian mixture models, HMMs, topic models, VAEs) have latent variables $z$ in addition to observed data $x$. The complete-data log-likelihood $\log p(x, z; \theta)$ would be easy to maximise if $z$ were observed, but we only observe $x$.
The EM algorithm maximises the marginal log-likelihood $\ell(\theta) = \log p(x;\theta) = \log \sum_z p(x, z; \theta)$ by iterating two steps:
E-step: Compute the expected complete-data log-likelihood under the current posterior over $z$:
$$Q(\theta \mid \theta^{(t)}) = E_{z \sim p(z \mid x;\, \theta^{(t)})}\!\left[\log p(x, z; \theta)\right]$$
M-step: Maximise:
$$\theta^{(t+1)} = \arg\max_\theta\, Q(\theta \mid \theta^{(t)})$$
Why EM guarantees $\ell(\theta^{(t+1)}) \ge \ell(\theta^{(t)})$: The evidence lower bound (ELBO):
$$\ell(\theta) \ge Q(\theta \mid \theta^{(t)}) + H^{(t)}$$
where $H^{(t)}$ is the entropy term not involving $\theta$. The M-step maximises $Q$, and the E-step tightens the bound at the new $\theta$.
Gaussian Mixture Model (GMM). For a $K$-component GMM with means $\mu_k$, covariances $\Sigma_k$, and mixing weights $\pi_k$:
- E-step: Compute responsibilities $\gamma_{ik} = \frac{\pi_k\, \mathcal{N}(x_i;\, \mu_k, \Sigma_k)}{\sum_{j=1}^K \pi_j\, \mathcal{N}(x_i;\, \mu_j, \Sigma_j)}$
- M-step: Update parameters: $\pi_k = \frac{1}{n}\sum_i \gamma_{ik}$, $\quad \mu_k = \frac{\sum_i \gamma_{ik}\, x_i}{\sum_i \gamma_{ik}}$, $\quad \Sigma_k = \frac{\sum_i \gamma_{ik}\, (x_i - \mu_k)(x_i - \mu_k)^\top}{\sum_i \gamma_{ik}}$
Each M-step is a weighted MLE for a Gaussian - exactly the closed-form solutions from Section 5.3, but with fractional weights.
For AI: The EM algorithm is the algorithmic prototype for variational inference in VAEs (Section04). In the VAE, the E-step is replaced by the encoder (an amortised posterior approximation) and the M-step is replaced by joint gradient ascent on the ELBO with respect to both encoder parameters $\phi$ and decoder parameters $\theta$.
Appendix D: Cramer-Rao Bound - Multivariate Proof and Geometry
Multivariate CRB (full proof). Let $\hat{\boldsymbol{\theta}}(\mathbf{X})$ be an unbiased estimator of $\boldsymbol{\theta} \in \mathbb{R}^k$. Define the score vector $s(\boldsymbol{\theta}) = \nabla_{\boldsymbol{\theta}} \log p(\mathbf{X}; \boldsymbol{\theta})$.
Since $\hat{\boldsymbol{\theta}}$ is unbiased: $E[\hat{\boldsymbol{\theta}}] = \boldsymbol{\theta}$. Differentiating with respect to $\boldsymbol{\theta}$ (and using $E[s] = \mathbf{0}$):
In matrix form: $E\!\left[(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})\, s(\boldsymbol{\theta})^\top\right] = I_k$ (a $k \times k$ identity).
By the matrix Cauchy-Schwarz inequality (Loewner ordering), the joint covariance of $(\hat{\boldsymbol{\theta}}, s)$ is positive semi-definite:
$$\begin{pmatrix} \operatorname{Cov}(\hat{\boldsymbol{\theta}}) & I_k \\ I_k & F(\boldsymbol{\theta}) \end{pmatrix} \succeq 0$$
The Schur complement condition gives: $\operatorname{Cov}(\hat{\boldsymbol{\theta}}) - F(\boldsymbol{\theta})^{-1} \succeq 0$, i.e., $\operatorname{Cov}(\hat{\boldsymbol{\theta}}) \succeq F(\boldsymbol{\theta})^{-1}$.
Information-geometric interpretation. The statistical manifold $\{p_\theta\}$ is a Riemannian manifold with the Fisher-Rao metric $g_{ij}(\theta) = F_{ij}(\theta)$. The CRB states that the covariance of any unbiased estimator is at least as large as the inverse metric - the "volume" of estimation uncertainty is bounded below by the curvature of the statistical manifold.
This geometric view motivates the natural gradient: gradient descent in the Fisher-Rao metric on the statistical manifold corresponds to the natural gradient update $\theta \leftarrow \theta + \eta\, F(\theta)^{-1} \nabla_\theta \ell(\theta)$.
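A one-parameter illustration of this update: for the mean Bernoulli log-likelihood, $F(p) = \frac{1}{p(1-p)}$, so the natural gradient step collapses to $\eta(\bar{x} - p)$ and stays stable near the boundary, exactly where the vanilla gradient explodes. A minimal sketch (the sample mean and step size are illustrative choices):

```python
# Vanilla vs natural gradient ascent on a Bernoulli log-likelihood.
# Natural step = grad / F(p) = (x_bar - p); vanilla step blows up near p = 0 or 1.
import numpy as np

x_bar, lr = 0.9, 0.1                           # data mean; step size
grad = lambda p: (x_bar - p) / (p * (1 - p))   # score of the mean log-likelihood
fisher = lambda p: 1.0 / (p * (1 - p))

p_v = p_n = 0.02                               # start near the boundary
for _ in range(25):
    p_v = np.clip(p_v + lr * grad(p_v), 1e-6, 1 - 1e-6)              # oscillates
    p_n = np.clip(p_n + lr * grad(p_n) / fisher(p_n), 1e-6, 1 - 1e-6)  # smooth
print(p_v, p_n)  # natural gradient converges steadily toward x_bar = 0.9
```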
Appendix E: The Score Test, Wald Test, and Likelihood Ratio Test
These three test statistics form the classical trinity of asymptotic tests in statistics. While their full treatment belongs in Section03 Hypothesis Testing, their derivations from MLE theory belong here.
For testing $H_0: \theta = \theta_0$:
Score (Rao) test: $S = \frac{s(\theta_0)^2}{I_n(\theta_0)} \xrightarrow{d} \chi^2_1$ under $H_0$.
Uses only the restricted model - does not require fitting the unrestricted MLE.
Wald test: $W = (\hat{\theta} - \theta_0)^2\, I_n(\hat{\theta}) \xrightarrow{d} \chi^2_1$ under $H_0$.
Uses only the unrestricted MLE.
Likelihood ratio test: $\Lambda = 2\left[\ell(\hat{\theta}) - \ell(\theta_0)\right] \xrightarrow{d} \chi^2_1$ under $H_0$.
Requires fitting both restricted and unrestricted models. By Wilks' theorem (1938), this converges to the chi-squared distribution.
Asymptotic equivalence. Under $H_0$: $S - W \xrightarrow{p} 0$ and $W - \Lambda \xrightarrow{p} 0$ - all three are first-order equivalent. They differ in finite samples. The LRT is generally most accurate for small $n$.
These tests are developed fully in Section03 Hypothesis Testing, including the Neyman-Pearson lemma, p-values, and power calculations.
Appendix F: Sampling Distributions of Classical Statistics
Understanding the exact (finite-sample) distributions of key estimators is essential for constructing valid hypothesis tests and confidence intervals when $n$ is small.
F.1 Distribution of the Sample Mean
Let $X_1,\dots,X_n \sim \mathcal{N}(\mu, \sigma^2)$ iid. Then $\bar{X} \sim \mathcal{N}(\mu, \sigma^2/n)$.
This is an exact result (not asymptotic) - the sum of independent Gaussians is Gaussian. The standardised version $Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim \mathcal{N}(0, 1)$ gives exact z-tests and z-intervals.
F.2 Distribution of the Sample Variance: Chi-Squared
Theorem F.1. Let $X_1,\dots,X_n \sim \mathcal{N}(\mu, \sigma^2)$. Then:
$$\frac{(n-1)\, s^2}{\sigma^2} \sim \chi^2_{n-1}$$
and this is independent of $\bar{X}$.
Proof sketch. Write $X_i = \mu + \sigma Z_i$ where $Z_i \sim \mathcal{N}(0, 1)$ iid. The quadratic form $\frac{1}{\sigma^2}\sum_i (X_i - \bar{X})^2$ equals $\mathbf{Z}^\top P\, \mathbf{Z}$, where $P$ projects $\mathbf{Z}$ onto the $(n-1)$-dimensional subspace orthogonal to $\mathbf{1}$. By the spectral theorem for Gaussian quadratic forms, this has a $\chi^2_{n-1}$ distribution. Independence from $\bar{X}$ (a function of the component along $\mathbf{1}$) follows from the orthogonality.
The $n-1$ degrees of freedom arise because estimating $\mu$ by $\bar{X}$ removes one degree of freedom from the deviations $X_i - \bar{X}$ (they sum to zero, so only $n-1$ are free).
F.3 Student's $t$ Distribution
Definition. If $Z \sim \mathcal{N}(0, 1)$ and $V \sim \chi^2_\nu$ independently, then $T = \frac{Z}{\sqrt{V/\nu}} \sim t_\nu$.
Application to Gaussian mean. With $\bar{X} \sim \mathcal{N}(\mu, \sigma^2/n)$ and $\frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}$ (independent of $\bar{X}$): $T = \frac{\bar{X} - \mu}{s/\sqrt{n}} \sim t_{n-1}$.
This is exact for all $n \ge 2$ - no large-sample approximation needed. The $t$-distribution is symmetric, bell-shaped, heavier-tailed than the Gaussian, and converges to $\mathcal{N}(0, 1)$ as $\nu \to \infty$.
Quantiles comparison:
| $\alpha$ (two-sided) | $z_{1-\alpha/2}$ | $t_{1-\alpha/2,\,9}$ | $t_{1-\alpha/2,\,29}$ |
|---|---|---|---|
| 10% | 1.645 | 1.833 | 1.699 |
| 5% | 1.960 | 2.262 | 2.045 |
| 1% | 2.576 | 3.250 | 2.756 |
The $t$-distribution with small df requires wider intervals to achieve the same coverage - reflecting the additional uncertainty from estimating $\sigma$.
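These critical values can be reproduced directly with scipy.stats:

```python
# Two-sided critical values for the standard normal and for t with 9 and 29 df.
from scipy import stats

for alpha in (0.10, 0.05, 0.01):
    q = 1 - alpha / 2
    print(f"{alpha:>4.0%}  z={stats.norm.ppf(q):.3f}  "
          f"t9={stats.t.ppf(q, df=9):.3f}  t29={stats.t.ppf(q, df=29):.3f}")
# 10%  z=1.645  t9=1.833  t29=1.699
#  5%  z=1.960  t9=2.262  t29=2.045
#  1%  z=2.576  t9=3.250  t29=2.756
```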
F.4 The $F$ Distribution
Definition. If $U \sim \chi^2_{d_1}$ and $V \sim \chi^2_{d_2}$ independently, then $F = \frac{U/d_1}{V/d_2} \sim F_{d_1, d_2}$.
Application. Comparing variances: $\frac{s_1^2/\sigma_1^2}{s_2^2/\sigma_2^2} \sim F_{n_1-1,\, n_2-1}$. Testing equality of variances $H_0: \sigma_1^2 = \sigma_2^2$: the ratio $s_1^2/s_2^2 \sim F_{n_1-1,\, n_2-1}$ under $H_0$.
The $F$-distribution also underlies ANOVA (testing equality of multiple means), regression significance tests, and model comparison - fully developed in Section03 Hypothesis Testing and Section06 Regression Analysis.
Appendix G: Numerical Example - Fitting a Gaussian Mixture Model via EM
This worked example demonstrates MLE for a latent variable model.
Setup. Suppose we observe $n$ observations $x_1,\dots,x_n$ believed to come from a mixture of two Gaussians:
$$p(x; \theta) = \pi\, \mathcal{N}(x;\, \mu_1, \sigma_1^2) + (1 - \pi)\, \mathcal{N}(x;\, \mu_2, \sigma_2^2)$$
The parameters are $\theta = (\pi, \mu_1, \sigma_1^2, \mu_2, \sigma_2^2)$.
Direct MLE is hard. The log-likelihood is $\ell(\theta) = \sum_i \log\!\left[\pi\, \mathcal{N}(x_i;\, \mu_1, \sigma_1^2) + (1-\pi)\, \mathcal{N}(x_i;\, \mu_2, \sigma_2^2)\right]$, which contains a log of a sum - non-convex, no closed form.
EM solution. Introduce latent indicators $z_i \in \{1, 2\}$ indicating which component generated $x_i$.
E-step: Compute responsibilities
$$\gamma_i = P(z_i = 1 \mid x_i;\, \theta^{(t)}) = \frac{\pi\, \mathcal{N}(x_i;\, \mu_1, \sigma_1^2)}{\pi\, \mathcal{N}(x_i;\, \mu_1, \sigma_1^2) + (1-\pi)\, \mathcal{N}(x_i;\, \mu_2, \sigma_2^2)}$$
M-step: Update parameters (weighted MLEs for each component):
$$\pi = \frac{1}{n}\sum_i \gamma_i, \qquad \mu_1 = \frac{\sum_i \gamma_i x_i}{\sum_i \gamma_i}, \qquad \sigma_1^2 = \frac{\sum_i \gamma_i (x_i - \mu_1)^2}{\sum_i \gamma_i}$$
(and symmetrically for component 2 with weights $1 - \gamma_i$).
Convergence. EM guarantees $\ell(\theta^{(t+1)}) \ge \ell(\theta^{(t)})$ at every iteration, converging to a local maximum. Multiple random initialisations are needed to find the global maximum.
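A minimal runnable EM loop for this two-component model, following the formulas above. The data, initialisation, and iteration count are illustrative.

```python
# EM for a two-component 1-D Gaussian mixture on synthetic data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(5, 1.5, 100)])

pi, mu, sig = 0.5, np.array([-1.0, 6.0]), np.array([1.0, 1.0])  # init
for _ in range(50):
    # E-step: responsibilities gamma_i = P(z_i = 1 | x_i)
    w1 = pi * norm.pdf(x, mu[0], sig[0])
    w2 = (1 - pi) * norm.pdf(x, mu[1], sig[1])
    g = w1 / (w1 + w2)
    # M-step: weighted MLEs for each component
    pi = g.mean()
    mu = np.array([np.sum(g * x) / g.sum(),
                   np.sum((1 - g) * x) / (1 - g).sum()])
    sig = np.sqrt(np.array([np.sum(g * (x - mu[0])**2) / g.sum(),
                            np.sum((1 - g) * (x - mu[1])**2) / (1 - g).sum()]))
print(pi, mu, sig)  # approaches (0.6, [0, 5], [1, 1.5])
```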
For AI: The EM alternation - inferring hidden structure under the current parameters, then re-estimating parameters given that structure - is the prototype for alternating optimisation throughout machine learning, and EM itself is the standard fitting procedure for GMMs, HMMs, and topic models in NLP preprocessing pipelines.
Appendix H: Regularised MLE and the Bias-Variance Trade-Off in Practice
Ridge regression (L2-regularised linear regression) is the canonical example of regularised MLE.
Setup. For linear regression with $y = X\beta + \varepsilon$, $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, the MLE is OLS: $\hat{\beta}_{\text{OLS}} = (X^\top X)^{-1} X^\top y$.
Ridge regression: Add an L2 penalty:
$$\hat{\beta}_{\text{ridge}} = \arg\min_\beta\, \|y - X\beta\|^2 + \lambda \|\beta\|^2 = (X^\top X + \lambda I)^{-1} X^\top y$$
Bias: $E[\hat{\beta}_{\text{ridge}}] = (X^\top X + \lambda I)^{-1} X^\top X\, \beta \neq \beta$ for $\lambda > 0$ - ridge is biased.
Variance: $\operatorname{Cov}(\hat{\beta}_{\text{ridge}}) = \sigma^2\, (X^\top X + \lambda I)^{-1} X^\top X\, (X^\top X + \lambda I)^{-1}$.
Ridge has smaller covariance than OLS (in the Loewner order) - introducing bias reduces variance. The bias-variance trade-off: for well-chosen $\lambda$, MSE of ridge < MSE of OLS.
MAP interpretation. Ridge is MAP estimation with Gaussian prior $\beta \sim \mathcal{N}(0, \tau^2 I)$ and $\lambda = \sigma^2/\tau^2$. The prior pulls $\beta$ toward zero, introducing bias toward zero. The full development of MAP (adding a prior to MLE) is in Section04 Bayesian Inference.
Optimal $\lambda$ selection. Cross-validation estimates the prediction MSE for each candidate $\lambda$, selecting the one that best resolves the bias-variance trade-off from a prediction perspective. Analytical formulas for the MSE-minimising $\lambda$ exist, but they depend on the unknown $\beta$ and $\sigma^2$, so these must be estimated.
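A small simulation of this trade-off, comparing the parameter-estimation MSE of OLS ($\lambda = 0$) against ridge at a moderate penalty. The problem sizes and the choice $\lambda = 5$ are illustrative; in practice $\lambda$ would be chosen by cross-validation as described above.

```python
# Ridge vs OLS: Monte Carlo estimate of E||beta_hat - beta||^2.
import numpy as np

rng = np.random.default_rng(3)
n, d, sigma = 30, 10, 2.0
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)

def mse(lam, reps=2000):
    err = 0.0
    for _ in range(reps):
        y = X @ beta + sigma * rng.normal(size=n)
        b = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)  # ridge solution
        err += np.sum((b - beta) ** 2)
    return err / reps

print("OLS   MSE:", mse(0.0))
print("ridge MSE:", mse(5.0))   # typically smaller for a moderate lambda
```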
Connection to weight decay in LLMs. AdamW (Loshchilov & Hutter, 2019) adds weight decay to the Adam update. This is regularised MLE with a Gaussian prior on all parameters - the standard training procedure for all frontier language models. The weight decay coefficient (typically 0.1) controls the strength of the prior, determining how much parameters are regularised toward zero.
Appendix I: Non-Parametric Estimation
When the parametric model is uncertain or clearly wrong, non-parametric estimation makes minimal distributional assumptions.
Empirical Distribution Function (EDF). Given sample $x_1,\dots,x_n$:
$$\hat{F}_n(t) = \frac{1}{n}\sum_{i=1}^n \mathbb{1}\{x_i \le t\}$$
The EDF is the non-parametric MLE of the CDF $F$. By the Glivenko-Cantelli theorem: $\sup_t |\hat{F}_n(t) - F(t)| \xrightarrow{a.s.} 0$ - the EDF converges uniformly to the true CDF.
The Dvoretzky-Kiefer-Wolfowitz (DKW) inequality gives a finite-sample bound:
$$P\!\left(\sup_t |\hat{F}_n(t) - F(t)| > \varepsilon\right) \le 2 e^{-2 n \varepsilon^2}$$
This is the non-parametric Cramer-Rao analogue - it bounds estimation error for the CDF without any parametric assumption.
Kernel density estimation. Estimating the PDF non-parametrically:
$$\hat{f}_h(x) = \frac{1}{nh}\sum_{i=1}^n K\!\left(\frac{x - x_i}{h}\right)$$
where $K$ is a kernel (e.g., Gaussian) and $h$ is the bandwidth. The bias-variance trade-off applies: small $h$ -> low bias, high variance; large $h$ -> high bias, low variance. Optimal bandwidth: $h^* \propto n^{-1/5}$ (minimising the mean integrated squared error).
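A from-scratch Gaussian-kernel KDE illustrating the bandwidth trade-off. The $1.06\,\hat{\sigma}\, n^{-1/5}$ rule-of-thumb constant is Silverman's normal-reference rule, used here as a convenient default.

```python
# Gaussian-kernel KDE: f_hat(t) = (1/(n h)) sum_i K((t - x_i)/h).
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=200)                       # sample from N(0, 1)
grid = np.linspace(-4, 4, 401)

def kde(grid, x, h):
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

h_rot = 1.06 * x.std() * len(x) ** (-1 / 5)    # Silverman rule-of-thumb bandwidth
for h in (0.05, h_rot, 2.0):                   # undersmoothed / ~optimal / oversmoothed
    f = kde(grid, x, h)
    print(f"h={h:.3f}  f_hat(0)={f[len(grid)//2]:.3f}")  # true N(0,1) density at 0: 0.399
```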
Bootstrap as non-parametric MLE. The bootstrap resamples from the EDF - effectively replacing the unknown $F$ with its non-parametric MLE $\hat{F}_n$. Bootstrap CI validity follows from the closeness of $\hat{F}_n$ to $F$ (Glivenko-Cantelli) and the continuity of the statistic of interest.
Appendix J: Selected Proofs and Derivations
J.1 Asymptotic Variance of the Sample Median
For a distribution with PDF $f$ and CDF $F$, let $m$ denote the population median ($F(m) = 1/2$). The sample median $\hat{m}$ satisfies:
$$\sqrt{n}\,(\hat{m} - m) \xrightarrow{d} \mathcal{N}\!\left(0, \frac{1}{4 f(m)^2}\right)$$
Proof sketch. The sample median is the MLE for the double-exponential (Laplace) location model, and the asymptotic variance of the $q$-th quantile estimator is $\frac{q(1-q)}{n\, f(F^{-1}(q))^2}$. For $q = 1/2$: $\frac{1}{4 n f(m)^2}$.
Comparison to mean (for Gaussian data). For $\mathcal{N}(\mu, \sigma^2)$: $f(m) = \frac{1}{\sqrt{2\pi}\,\sigma}$, so the median's asymptotic variance is $\frac{\pi \sigma^2}{2n}$ vs. $\frac{\sigma^2}{n}$ for the mean. The relative efficiency is $2/\pi \approx 0.64$ - the mean extracts more information from Gaussian data than the median. However, for heavy-tailed distributions (e.g., Cauchy), the mean has infinite variance while the median remains consistent - robustness matters.
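A quick simulation of this efficiency result: the variance ratio $\text{Var}(\hat{m})/\text{Var}(\bar{x})$ should approach $\pi/2 \approx 1.571$ for Gaussian data.

```python
# Empirical check of the median's relative efficiency for Gaussian samples.
import numpy as np

rng = np.random.default_rng(5)
reps, n = 5000, 500
x = rng.normal(size=(reps, n))
var_mean = x.mean(axis=1).var()
var_med = np.median(x, axis=1).var()
print(var_med / var_mean, np.pi / 2)   # ~1.57
```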
J.2 The Neyman-Scott Paradox - Inconsistency of MLE
MLE is not always consistent. The Neyman-Scott example (1948) provides a famous cautionary case.
Setup. Observe pairs $(X_{i1}, X_{i2})$ for $i = 1,\dots,n$, where $X_{ij} \sim \mathcal{N}(\mu_i, \sigma^2)$ independently. The parameters are $(\mu_1,\dots,\mu_n, \sigma^2)$ - $n + 1$ parameters for $2n$ observations.
MLE. For each pair: $\hat{\mu}_i = \frac{X_{i1} + X_{i2}}{2}$. For $\sigma^2$: $\hat{\sigma}^2 = \frac{1}{2n}\sum_i \sum_j (X_{ij} - \hat{\mu}_i)^2 = \frac{1}{4n}\sum_i (X_{i1} - X_{i2})^2$.
Inconsistency. $\hat{\sigma}^2 \xrightarrow{p} \frac{\sigma^2}{2}$ - the MLE permanently underestimates $\sigma^2$ by a factor of 2, regardless of $n$. The bias does not vanish because the number of parameters grows with $n$.
Root cause. The regularity conditions for MLE consistency require the number of parameters to be fixed as $n \to \infty$. When nuisance parameters grow with $n$ (an "incidental parameters" problem), MLE can fail. The fix: use a conditional or marginal likelihood that eliminates the nuisance parameters.
For AI. This paradox is relevant to neural network generalisation. A model with $p \gg n$ parameters (the overparameterised regime) has more parameters than observations. Classical MLE theory breaks down - yet in practice, overparameterised networks often generalise well due to implicit regularisation (gradient descent's inductive bias). This remains an active research area.
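A simulation of the paradox, using the closed-form MLE $\hat{\sigma}^2 = \frac{1}{4n}\sum_i (X_{i1} - X_{i2})^2$ derived above. The choice $\sigma^2 = 4$ is arbitrary.

```python
# Neyman-Scott: the MLE of sigma^2 converges to sigma^2 / 2, however large n is.
import numpy as np

rng = np.random.default_rng(6)
sigma2 = 4.0
for n in (100, 10_000, 1_000_000):
    mu = rng.normal(size=n)                                  # nuisance means
    x = mu[:, None] + np.sqrt(sigma2) * rng.normal(size=(n, 2))
    sigma2_mle = np.mean((x[:, 0] - x[:, 1]) ** 2) / 4       # (1/(4n)) sum of squared diffs
    print(n, sigma2_mle, "-> limit:", sigma2 / 2)
```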
J.3 Locally Most Powerful Tests and the Neyman-Pearson Lemma
(Preview - full treatment in Section03 Hypothesis Testing)
The Neyman-Pearson lemma provides the most powerful test for simple hypotheses $H_0: \theta = \theta_0$ vs. $H_1: \theta = \theta_1$. The rejection region is:
$$\frac{L(\theta_1; \mathbf{x})}{L(\theta_0; \mathbf{x})} > k$$
where $k$ is chosen to control the type-I error at level $\alpha$. The likelihood ratio is a sufficient statistic for the test - no other test can be more powerful at the same level. The connection to MLE: the most powerful test uses the likelihood, and the MLE is the point that maximises the likelihood.
For composite alternatives $H_1: \theta \neq \theta_0$, the likelihood ratio test statistic $\Lambda = 2[\ell(\hat{\theta}) - \ell(\theta_0)]$ is the standard generalisation, and connects to the Wald and score tests via asymptotic equivalence.
Appendix K: Reference Tables
K.1 Common MLE Formulas
| Distribution | Parameters | Log-likelihood | MLE |
|---|---|---|---|
| $\mathcal{N}(\mu, \sigma^2)$ | $\mu$, $\sigma^2$ | $-\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_i (x_i - \mu)^2$ | $\hat{\mu} = \bar{x}$, $\hat{\sigma}^2 = \frac{1}{n}\sum_i (x_i - \bar{x})^2$ |
| $\text{Gamma}(\alpha, \beta)$ | $\alpha$, $\beta$ | $\sum_i \left[(\alpha - 1)\log x_i - \beta x_i\right] + n\alpha\log\beta - n\log\Gamma(\alpha)$ | No closed form for $\alpha$; requires Newton-Raphson |
| $\text{Beta}(\alpha, \beta)$ | $\alpha$, $\beta$ | $\sum_i \left[(\alpha - 1)\log x_i + (\beta - 1)\log(1 - x_i)\right] - n\log B(\alpha, \beta)$ | No closed form |
K.2 Fisher Information Summary
| Distribution | Parameter | $I(\theta)$ per observation | Efficient MLE | Notes |
|---|---|---|---|---|
| $\mathcal{N}(\mu, \sigma^2)$, $\sigma^2$ known | $\mu$ | $1/\sigma^2$ | $\hat{\mu} = \bar{x}$ | Achieves the CRB exactly |
| $\mathcal{N}(\mu, \sigma^2)$, $\mu$ known | $\sigma^2$ | $1/(2\sigma^4)$ | $\hat{\sigma}^2 = \frac{1}{n}\sum_i (x_i - \mu)^2$ | Achieves the CRB exactly |
| $\text{Categorical}(\pi_1,\dots,\pi_K)$ (one of $K$ free) | $\pi_k$ | $\frac{1}{\pi_k(1 - \pi_k)}$ (marginal) | $\hat{\pi}_k = n_k/n$ | Multinomial variance |
K.3 Confidence Interval Reference
| Parameter | Setting | Pivot | CI formula |
|---|---|---|---|
| Gaussian mean $\mu$ | $\sigma$ known | $Z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \sim \mathcal{N}(0, 1)$ | $\bar{x} \pm z_{\alpha/2}\,\sigma/\sqrt{n}$ |
| Gaussian mean $\mu$ | $\sigma$ unknown | $T = \frac{\bar{x} - \mu}{s/\sqrt{n}} \sim t_{n-1}$ | $\bar{x} \pm t_{n-1,\,\alpha/2}\, s/\sqrt{n}$ |
| Gaussian variance $\sigma^2$ | $\mu$ unknown | $\frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}$ | $\left[\frac{(n-1)s^2}{\chi^2_{n-1,\,\alpha/2}},\ \frac{(n-1)s^2}{\chi^2_{n-1,\,1-\alpha/2}}\right]$ |
| Bernoulli $p$ (large $n$) | - | Wald: $\frac{\hat{p} - p}{\sqrt{\hat{p}(1-\hat{p})/n}}$ | $\hat{p} \pm z_{\alpha/2}\sqrt{\hat{p}(1-\hat{p})/n}$; Wilson score preferred near 0/1 |
| General MLE $\theta$ | Large $n$ | $\frac{\hat{\theta} - \theta}{1/\sqrt{n\, I(\hat{\theta})}} \approx \mathcal{N}(0, 1)$ | $\hat{\theta} \pm z_{\alpha/2}\big/\sqrt{n\, I(\hat{\theta})}$ |
| Any statistic | Non-parametric | Bootstrap | Percentile/BCa bootstrap |
Appendix L: Worked Examples - End-to-End Estimation
L.1 Estimating the Parameters of a Sentiment Classifier
Problem. A BERT-based sentiment classifier is evaluated on a test set of $n = 500$ examples. It classifies 418 correctly ($\hat{p} = 0.836$). Report (a) the point estimate with standard error, (b) the 95% Wilson CI, (c) whether this differs significantly from a baseline accuracy of 80%.
Step 1: Point estimate and SE.
$\hat{p} = 418/500 = 0.836$. Standard error: $\text{SE} = \sqrt{\hat{p}(1-\hat{p})/n} = \sqrt{0.836 \times 0.164/500} \approx 0.017$.
Step 2: Wilson score CI.
With $n = 500$, $\hat{p} = 0.836$, $z = 1.96$:
$$\text{centre} = \frac{\hat{p} + z^2/(2n)}{1 + z^2/n}, \qquad \text{margin} = \frac{z\sqrt{\hat{p}(1-\hat{p})/n + z^2/(4n^2)}}{1 + z^2/n}$$
Numerically: centre $\approx 0.833$; margin $\approx 0.032$. CI $\approx [0.801, 0.866]$.
Step 3: Comparison to baseline 80%.
The null hypothesis $H_0: p = 0.80$ corresponds to checking whether 0.80 lies inside the CI. It does not - 0.80 falls just below the lower limit 0.801. Equivalently, a $z$-test: $z = \frac{0.836 - 0.80}{\sqrt{0.80 \times 0.20/500}} \approx 2.01 > 1.96$. The improvement is statistically significant at $\alpha = 0.05$.
Statistical interpretation. The classifier significantly outperforms the 80% baseline, but the CI spans roughly 80.1% to 86.6% - real-world performance could be anywhere in this range, highlighting the importance of CIs even for significant results.
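The arithmetic above, as a short script (only the Wilson formula itself is used; a library such as statsmodels would produce the same interval):

```python
# Wilson score interval and z-test for the worked example (418/500 correct).
import numpy as np
from scipy.stats import norm

k, n, alpha = 418, 500, 0.05
p_hat = k / n
z = norm.ppf(1 - alpha / 2)

centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
margin = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
print(f"Wilson 95% CI: [{centre - margin:.3f}, {centre + margin:.3f}]")  # ~[0.801, 0.866]

z_stat = (p_hat - 0.80) / np.sqrt(0.80 * 0.20 / n)   # test against the baseline
print(f"z = {z_stat:.2f}")  # ~2.01 > 1.96 -> significant at alpha = 0.05
```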
L.2 MLE for a Gaussian Mixture - Worked Iteration
Problem. Two well-separated clusters of one-dimensional points are observed - a handful of small values and a handful of large ones - but the component assignments are unknown. Fit a 2-component Gaussian mixture.
Initialisation. Choose $\pi^{(0)} = 0.5$, initial means on either side of the data (one below the small cluster, one above the large cluster), and equal initial variances.
E-step (iteration 1). For each $x_i$, compute the responsibility $\gamma_{i1} = \frac{\pi\, \mathcal{N}(x_i;\, \mu_1, \sigma_1^2)}{\pi\, \mathcal{N}(x_i;\, \mu_1, \sigma_1^2) + (1-\pi)\, \mathcal{N}(x_i;\, \mu_2, \sigma_2^2)}$.
For points in the small cluster $\gamma_{i1} \approx 1$; for points in the large cluster $\gamma_{i1} \approx 0$. The algorithm already strongly assigns each point to the correct cluster.
M-step (iteration 1). With $\gamma_{i1} \approx 1$ for the four small values: the weighted means land near the two cluster centres and the mixing weight approaches the cluster proportion.
After 2-3 iterations, the algorithm converges to the obvious clusters. The key insight: EM separates the cluster assignment (E-step soft assignments $\gamma_{ik}$) from parameter estimation (M-step weighted MLEs), alternating until convergence.
L.3 Bootstrap CI for Median Income - ML Context
Problem. A dataset of $n = 50$ yearly salaries (in thousands) for ML engineers is collected; let $\tilde{x}$ denote the sample median. Construct a 95% bootstrap CI for the median.
Why bootstrap? The Gaussian CI for the median requires knowing the PDF $f(m)$ at the median - unknown without assuming a distribution. The salary distribution is likely right-skewed (some outliers earn much more), making parametric CIs unreliable.
Bootstrap procedure (a minimal runnable version of the pseudocode, assuming x holds the 50 observed salaries):

```python
import numpy as np

# x: np.ndarray of the 50 observed salaries (in thousands)
rng = np.random.default_rng(0)
B = 2000
median_star = np.empty(B)
for b in range(B):
    x_star = rng.choice(x, size=len(x), replace=True)  # resample with replacement
    median_star[b] = np.median(x_star)
ci_95 = np.percentile(median_star, [2.5, 97.5])        # percentile bootstrap CI
```

Typical result: a 95% CI that is slightly asymmetric about the sample median - the right tail is longer due to income skewness - which the bootstrap correctly captures and a symmetric Gaussian CI would miss.
For AI. This exact procedure is used to report confidence intervals for ML engineer compensation surveys, which inform hiring benchmarks and compensation strategy at AI companies. Statistically sound salary benchmarks require bootstrap CIs because income distributions are strongly non-Gaussian.
Appendix M: Glossary of Key Terms
| Term | Formal definition | Intuition |
|---|---|---|
| Estimator | A measurable function $\hat{\theta} = T(X_1,\dots,X_n)$ of the data | A rule for computing a guess from observed data |
| Estimate | A specific value $\hat{\theta}(x_1,\dots,x_n)$ after observing data | The actual number produced by the estimator |
| Sampling distribution | The distribution of $\hat{\theta}$ across all possible samples of size $n$ | How much the estimate varies from experiment to experiment |
| Bias | $E[\hat{\theta}] - \theta$ | How far off the estimator is on average |
| Variance | $E\!\left[(\hat{\theta} - E[\hat{\theta}])^2\right]$ | How much the estimator fluctuates around its mean |
| MSE | $E\!\left[(\hat{\theta} - \theta)^2\right] = \text{Bias}^2 + \text{Var}$ | Total estimation error |
| Consistency | $\hat{\theta}_n \xrightarrow{p} \theta$ as $n \to \infty$ | Converges to the right answer with more data |
| Efficiency | $\text{Var}(\hat{\theta}) = \frac{1}{I_n(\theta)}$ (attains the CRB) | Achieves the minimum possible variance |
| Sufficient statistic | $T(\mathbf{x})$ such that $p(\mathbf{x} \mid T;\theta)$ does not depend on $\theta$ | Compresses the data without losing information about $\theta$ |
| Fisher information | $I(\theta) = E\!\left[\left(\partial_\theta \log f(X;\theta)\right)^2\right]$ | How much the data inform $\theta$; curvature of the log-likelihood |
| CRB | $\text{Var}(\hat{\theta}) \ge \frac{1}{I_n(\theta)}$ for unbiased $\hat{\theta}$ | Hard lower bound on variance; no unbiased estimator can beat it |
| MLE | $\hat{\theta} = \arg\max_\theta \ell(\theta; \mathbf{x})$ | Parameter that makes observed data most probable |
| Asymptotic normality | $\sqrt{n}\,(\hat{\theta} - \theta) \xrightarrow{d} \mathcal{N}(0, I(\theta)^{-1})$ | MLE is approximately normal for large $n$ |
| Confidence interval | Random interval $[L(\mathbf{X}), U(\mathbf{X})]$ covering $\theta$ with probability $1 - \alpha$ | Uncertainty quantification for frequentist estimation |
| Natural parameter | $\eta$ in $p(x) = h(x)\exp(\eta^\top T(x) - A(\eta))$ | Canonical parameterisation of exponential families |
| Natural gradient | $\tilde{\nabla}\ell = F^{-1}\nabla\ell$ | Steepest ascent in the Fisher-Rao metric on the statistical manifold |
| Misspecification | True distribution $p^*$ is not in the model family $\{p_\theta\}$ | The model cannot represent the truth exactly |
| Pseudo-true parameter | $\theta^* = \arg\min_\theta \text{KL}(p^* \,\Vert\, p_\theta)$ | Closest point in the model to the truth under KL divergence |
Appendix N: Further Reading and References
Primary References
- Casella, G. & Berger, R.L. (2002). Statistical Inference (2nd ed.). Duxbury. - The definitive graduate textbook on classical estimation theory; covers all topics in this section at full rigour.
- Lehmann, E.L. & Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. - Advanced treatment of UMVUE theory, Rao-Blackwell, and sufficiency.
- Fisher, R.A. (1922). "On the Mathematical Foundations of Theoretical Statistics." Philosophical Transactions of the Royal Society A, 222, 309-368. - The foundational paper defining sufficiency, efficiency, consistency, and MLE.
- Cramer, H. (1946). Mathematical Methods of Statistics. Princeton University Press. - Original proof of the CRB.
- Rao, C.R. (1945). "Information and the Accuracy Attainable in the Estimation of Statistical Parameters." Bulletin of the Calcutta Mathematical Society, 37, 81-91. - Independent CRB proof and the Rao-Blackwell theorem.
- Efron, B. (1979). "Bootstrap Methods: Another Look at the Jackknife." Annals of Statistics, 7(1), 1-26. - Original bootstrap paper.
ML-Specific References
- Amari, S. (1998). "Natural Gradient Works Efficiently in Learning." Neural Computation, 10(2), 251-276. - Foundation of natural gradient methods.
- Martens, J. & Grosse, R. (2015). "Optimizing Neural Networks with Kronecker-factored Approximate Curvature." ICML. - K-FAC for tractable FIM approximation in deep networks.
- Kirkpatrick, J. et al. (2017). "Overcoming Catastrophic Forgetting in Neural Networks." PNAS, 114(13), 3521-3526. - EWC using the FIM for continual learning.
- Hoffmann, J. et al. (2022). "Training Compute-Optimal Large Language Models." NeurIPS. - Chinchilla scaling laws via nonlinear MLE.
- Hu, E. et al. (2022). "LoRA: Low-Rank Adaptation of Large Language Models." ICLR. - Low-rank MLE for efficient fine-tuning.
- Guo, C. et al. (2017). "On Calibration of Modern Neural Networks." ICML. - Temperature scaling as MLE on a calibration set.
- Goodfellow, I., Bengio, Y. & Courville, A. (2016). Deep Learning. MIT Press. - Chapter 5 covers MLE in the context of deep learning at the appropriate depth for ML practitioners.
Advanced References
- van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. - Graduate-level asymptotic theory; proofs of asymptotic normality, the delta method, semiparametric theory.
- Wasserman, L. (2004). All of Statistics. Springer. - Excellent graduate-level survey connecting classical statistics to modern methods.
- Bishop, C.M. (2006). Pattern Recognition and Machine Learning. Springer. - Chapters 1-3 cover MLE, MAP, and Bayesian estimation with ML motivation.
This section is part of the Math for LLMs curriculum. For corrections or contributions, see CONTRIBUTING.md.
Appendix O: Connections to Information Theory
The Fisher information matrix has a deep connection to Shannon information theory, previewing material from Chapter 9 - Information Theory.
O.1 Fisher Information as Local Curvature of KL Divergence
The KL divergence between distributions at nearby parameter values satisfies:
$$\text{KL}\!\left(p_\theta \,\Vert\, p_{\theta + d\theta}\right) = \frac{1}{2}\, d\theta^\top F(\theta)\, d\theta + O(\|d\theta\|^3)$$
The Fisher information matrix is the Hessian of the KL divergence with respect to the second argument, evaluated at the point where both arguments agree. This identifies $F(\theta)$ as the local curvature of the divergence surface.
Consequence: Among all parameter steps $d\theta$ with a fixed KL "budget" $\frac{1}{2}\, d\theta^\top F(\theta)\, d\theta = \varepsilon$, the natural gradient direction $F(\theta)^{-1}\nabla_\theta \ell$ produces the maximal increase in the objective - this is why the natural gradient (a move in parameter space) corresponds to steepest ascent in distribution space.
O.2 Cramer-Rao and Channel Capacity
Shannon's channel capacity theorem and the Cramer-Rao bound are related through the van Trees inequality (a Bayesian version of the CRB for random $\theta$ with prior $\pi$):
$$E\!\left[(\hat{\theta}(\mathbf{X}) - \theta)^2\right] \ge \frac{1}{E_\pi\!\left[I_n(\theta)\right] + I(\pi)}$$
where $I(\pi) = \int \frac{\pi'(\theta)^2}{\pi(\theta)}\, d\theta$ is the Fisher information of the prior. As the prior becomes uninformative ($I(\pi) \to 0$), this recovers the standard CRB. The connection: information accumulates (Fisher information adds) across observations and from the prior, exactly as Shannon information adds in a noisy channel.
O.3 Sufficient Statistics and Data Compression
A sufficient statistic satisfies the data processing inequality: processing data through any statistic $T$ cannot increase Fisher information, $I_{T(X)}(\theta) \le I_X(\theta)$. For a sufficient statistic, $I_{T(X)}(\theta) = I_X(\theta)$ - no information is lost. This is the estimation-theory analogue of lossless compression: a sufficient statistic compresses the data without losing any information about $\theta$.
Minimal sufficient statistics provide the maximum compression while retaining all information - analogous to the minimum description length (MDL) principle in information theory.
Appendix P: Practice Problems
P.1 Identification Problems
For each of the following, state whether the model is identifiable. If not, identify the unidentifiable combination.
- with parameters .
- for , with . (Identifiable?)
- A two-layer ReLU network with parameters $(W_1, W_2)$. Is the function class identifiable?
P.2 Computational Problems
- MLE and invariance. For Poisson data with $n$ observations and sample mean $\bar{x}$: (a) find $\hat{\lambda}_{\text{MLE}}$; (b) find the MLE of $e^{-\lambda}$ (the probability of observing zero events); (c) find the MLE of $1/\lambda$ (the mean inter-arrival time).
- CRB for a transformed parameter. For a regular model with $n$ observations, derive the CRB for estimating a transformed parameter $g(\theta)$ using the biased-estimator form of the CRB. Show that the plug-in MLE $g(\hat{\theta})$ does not achieve this bound for finite $n$, but does so asymptotically.
- Bootstrap vs. asymptotic. For $n$ observations from $\mathcal{N}(\mu, \sigma^2)$ with $\sigma$ unknown: (a) compute the exact 95% $t$-CI; (b) simulate the percentile bootstrap CI with $B$ resamples; (c) compare coverage by repeating 1000 times and computing the empirical coverage of each CI type.
P.3 Conceptual Problems
- Stein paradox. For $X \sim \mathcal{N}_d(\theta, I_d)$ with $d \le 2$, the sample mean is the admissible MLE. For $d \ge 3$, the James-Stein estimator dominates it in MSE. Simulate this for a moderate dimension (say $d \ge 3$), a chosen true $\theta$, and verify $\text{MSE}(\hat{\theta}_{\text{JS}}) < \text{MSE}(\hat{\theta}_{\text{MLE}})$.
- Model misspecification. A logistic regression model is fitted to data generated from a probit model ($P(y = 1 \mid x) = \Phi(x^\top \beta)$). The MLE converges to what? Explain using the misspecified-MLE theorem and the KL divergence interpretation (the pseudo-true parameter minimises $\text{KL}(p^* \,\Vert\, p_\theta)$).
- FIM for temperature scaling. For a $K$-class classifier with logits $z$ and temperature $T$, the softmax is $p_k = \frac{e^{z_k/T}}{\sum_j e^{z_j/T}}$. Derive the Fisher information $I(T)$ for a single example and explain why Newton-Raphson temperature calibration converges in 1-2 steps.
Section02 Estimation Theory - Chapter 7 Statistics - Math for LLMs curriculum
Lines: ~2000 | Theory notebook: 50+ cells | Exercises: 8 graded problems