Estimation Theory, Part 1: Intuition through Maximum Likelihood Estimation
1. Intuition
1.1 What Is Statistical Estimation?
Suppose you flip a coin 100 times and observe 63 heads. The coin's true bias $p$ - the probability of heads on any single flip - is unknown. What is your best guess for $p$? The obvious answer is $\hat{p} = 63/100 = 0.63$. But why is this the right answer? What makes it "best"? Could a different estimate be better in some sense? And how confident should you be - could the true $p$ be 0.5 (a fair coin) despite observing 63 heads?
These are precisely the questions estimation theory addresses. The statistical estimation problem has three ingredients:
- An unknown parameter $\theta$ governing a probability distribution $p(x;\theta)$
- Observed data $X_1, \dots, X_n$ drawn independently from $p(x;\theta)$
- An estimator $\hat\theta = \hat\theta(X_1, \dots, X_n)$: a function of the data that produces a guess for $\theta$
The key insight, often overlooked by beginners, is that the estimator $\hat\theta$ is a random variable. Before we collect data, $\hat\theta$ is uncertain - its value depends on which random sample we happen to observe. If we repeated the experiment with a fresh sample, we would get a different $\hat\theta$. The sampling distribution of $\hat\theta$ - the distribution of $\hat\theta$ across all possible samples - is the object of study. A good estimator has a sampling distribution tightly concentrated around the true $\theta$.
For AI: This perspective matters directly. When you train a neural network on a dataset and obtain weights $\hat{w}$, those weights are a realisation of an estimator - a function of the random training data. Training on a different shuffle or subsample gives a different $\hat{w}$. The variation you see across training runs, seeds, and dataset splits is the sampling distribution of the MLE manifesting in practice.
1.2 The Estimation Landscape
Estimation theory spans three major frameworks, each answering a different question:
Point estimation produces a single value for the unknown parameter. The challenge is specifying what "best" means. Three criteria dominate:
- Unbiasedness: $\mathbb{E}[\hat\theta] = \theta$ - on average, the estimator is correct
- Minimum variance: among unbiased estimators, prefer the one with smallest $\mathrm{Var}(\hat\theta)$
- Consistency: $\hat\theta_n \xrightarrow{p} \theta$ as $n \to \infty$ - with enough data, the estimator converges to the truth
These criteria can conflict. The Cramer-Rao lower bound establishes a hard floor on variance for unbiased estimators, and the estimators that achieve this floor are called efficient. Maximum likelihood estimation (MLE) is the dominant method: it is consistent, asymptotically efficient, and invariant under smooth reparametrisations.
Interval estimation goes beyond a point to a range $[\hat\theta_L, \hat\theta_U]$ with a coverage guarantee: in repeated experiments, a fraction $1 - \alpha$ of such intervals will contain the true $\theta$. Frequentist confidence intervals capture estimation uncertainty without requiring a prior distribution on $\theta$.
Asymptotic theory studies estimator behaviour as $n \to \infty$. The flagship result is the asymptotic normality of MLE: under regularity conditions,
$$\sqrt{n}\,\big(\hat\theta_{\mathrm{MLE}} - \theta^*\big) \xrightarrow{d} \mathcal{N}\big(0,\; I(\theta^*)^{-1}\big)$$
where $I(\theta)$ is the Fisher information. This single result simultaneously justifies MLE as an estimation method, provides the basis for asymptotic confidence intervals, and connects estimation theory to information geometry.
For AI: The three frameworks map directly to ML practice. Point estimation = model training (find $\hat\theta$). Interval estimation = evaluation with error bars (report accuracy $\pm$ a 95% CI). Asymptotic theory = understanding why larger datasets produce more reliable models (the $O(1/\sqrt{n})$ convergence rate).
1.3 Historical Timeline
ESTIMATION THEORY - HISTORICAL TIMELINE
========================================================================
1795 Gauss (age 18) uses least squares to predict asteroid orbits
- the first systematic estimation method
1805 Legendre publishes least squares formally (Gauss priority dispute)
1809 Gauss derives least squares from the assumption of Gaussian errors
- first connection between MLE and squared-error minimisation
1894 Pearson introduces method of moments - systematic moment matching
1912 Fisher introduces "maximum likelihood" as a term; begins systematic
study of estimation properties
1922 Fisher: "On the Mathematical Foundations of Theoretical Statistics"
- defines sufficiency, efficiency, consistency; derives properties of MLE
1925 Fisher introduces Fisher information and the information inequality
(precursor to Cramer-Rao)
1945 Rao proves the information lower bound + Rao-Blackwell theorem
1946 Cramer independently publishes the same bound (Cramer-Rao bound)
1947 Lehmann & Scheffe: completeness and UMVUE theory
1950s Wald develops statistical decision theory - unified framework for
estimation as minimising expected loss
1979 Efron introduces the bootstrap - non-parametric confidence intervals
without distributional assumptions
1998 Amari: natural gradient via Fisher information geometry
- direct connection to modern ML optimisation
2015 Martens & Grosse: K-FAC - tractable approximation of FIM for DNNs
2017 Kirkpatrick et al.: Elastic Weight Consolidation (EWC) uses FIM
to prevent catastrophic forgetting in continual learning
2022 Hoffmann et al. (Chinchilla): scaling laws estimated via MLE on
(N, D, L) data - estimation theory at trillion-parameter scale
========================================================================
1.4 Why Estimation Theory Matters for AI
Every major component of a modern ML pipeline rests on estimation theory:
Training objectives. Minimising cross-entropy loss is equivalent to performing MLE. The connection is exact: $\arg\min_\theta \mathcal{L}_{\mathrm{CE}}(\theta) = \arg\max_\theta \ell(\theta)$. Every training run of GPT-4, LLaMA-3, Gemini, or any neural network trained with NLL loss is performing MLE on the conditional distribution $p(y \mid x; \theta)$.
Evaluation uncertainty. When a benchmark reports "Model A achieves 73.2% accuracy", this is a point estimate. How uncertain is it? With $n$ test examples, the 95% CI is approximately $\hat{p} \pm 1.96\sqrt{\hat{p}(1-\hat{p})/n}$ - with $n = 10{,}000$ this is roughly $\pm 0.9\%$, meaning Model A and a Model B with 73.5% accuracy are statistically indistinguishable. Estimation theory makes this precise.
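A minimal sketch of that arithmetic (the test-set size of 10,000 is hypothetical):

```python
import math

def wald_ci(acc: float, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% CI for a benchmark accuracy."""
    se = math.sqrt(acc * (1 - acc) / n)
    return acc - z * se, acc + z * se

# Model A: 73.2% accuracy on a hypothetical 10,000-example test set
lo, hi = wald_ci(0.732, 10_000)
print(f"95% CI: [{lo:.3f}, {hi:.3f}]")  # roughly [0.723, 0.741] - overlaps 73.5%
```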
Second-order optimisation. Adam, K-FAC, and natural gradient all incorporate curvature information. The natural gradient multiplies the gradient by $F^{-1}$, where $F$ is the Fisher information matrix. This reparametrises the optimisation in the geometry of the statistical manifold, making gradient steps invariant to reparametrisation. K-FAC (Martens & Grosse, 2015) approximates $F$ tractably for deep networks.
Continual learning. Elastic weight consolidation (Kirkpatrick et al., 2017) protects important weights during sequential task learning. "Importance" is measured by the diagonal of the FIM: parameters with high Fisher information are those to which the likelihood is most sensitive, and perturbing them most damages performance.
Calibration. A well-calibrated model's confidence scores match empirical frequencies: when it says "80% confident", it's right 80% of the time. Temperature scaling - dividing logits by a scalar $T$ - is an MLE problem: find the $T$ that maximises likelihood on a held-out calibration set.
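A minimal sketch of that fit on synthetic data (the calibration set, the 3 classes, and the inflation factor of 4 are all made up; `scipy.optimize.minimize_scalar` does the 1-D search):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits: np.ndarray, labels: np.ndarray) -> float:
    """Find T > 0 maximising the held-out likelihood of softmax(logits / T)."""
    def nll(log_T: float) -> float:
        z = logits / np.exp(log_T)                       # temperature-scaled logits
        z = z - z.max(axis=1, keepdims=True)             # stabilise the softmax
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()
    res = minimize_scalar(nll, bounds=(-4.0, 4.0), method="bounded")
    return float(np.exp(res.x))                          # optimise over log T so T > 0

# Toy calibration set: labels sampled from softmax(raw), logits inflated 4x
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 3))
z = raw - raw.max(axis=1, keepdims=True)
p = np.exp(z); p /= p.sum(axis=1, keepdims=True)
labels = np.array([rng.choice(3, p=pi) for pi in p])
print(f"fitted T = {fit_temperature(raw * 4.0, labels):.2f}")  # ~4: undoes inflation
```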
2. The Formal Estimation Problem
2.1 Statistical Model and Parametric Family
Definition 2.1 (Parametric Statistical Model). A parametric statistical model is a collection of probability distributions indexed by a parameter:
$$\mathcal{P} = \{\, p(x;\theta) : \theta \in \Theta \,\}$$
where $\Theta \subseteq \mathbb{R}^k$ is the parameter space. The model is:
- Correctly specified if the true data-generating distribution $p^*$ satisfies $p^* = p(\cdot\,;\theta^*)$ for some $\theta^* \in \Theta$
- Misspecified if $p^* \notin \mathcal{P}$ - the model family does not contain the truth (most practical ML models are misspecified)
- Identifiable if different parameters give different distributions: $\theta_1 \neq \theta_2 \implies p(\cdot\,;\theta_1) \neq p(\cdot\,;\theta_2)$
Identifiability is necessary for consistent estimation - you cannot consistently estimate a parameter that leaves the data distribution unchanged.
Standard examples of parametric families:
| Model | Parameter(s) | Parameter space | Notes |
|---|---|---|---|
| $\mathrm{Bernoulli}(p)$ | $p$ | $[0, 1]$ | Single binary outcome |
| $\mathcal{N}(\mu, \sigma^2)$ | $(\mu, \sigma^2)$ | $\mathbb{R} \times (0, \infty)$ | Location-scale family |
| $\mathcal{N}_d(\mu, \Sigma)$ | $(\mu, \Sigma)$ | $\mathbb{R}^d \times \{\text{PD matrices}\}$ | Multivariate Gaussian |
| $\mathrm{Poisson}(\lambda)$ | $\lambda$ | $(0, \infty)$ | Count data |
| $\mathrm{Exponential}(\lambda)$ | $\lambda$ | $(0, \infty)$ | Time-to-event |
| $\mathrm{Categorical}(p_1, \dots, p_K)$ | $(p_1, \dots, p_K)$ | simplex $\Delta^{K-1}$ | Probabilities |
Non-examples (non-identifiable):
- $X \sim \mathcal{N}(\mu_1 + \mu_2, 1)$ with parameter $(\mu_1, \mu_2)$: we can only estimate the sum $\mu_1 + \mu_2$, not the individual components. The model is not identifiable.
- A neural network with one hidden layer of width 2 using $\tanh$: swapping the two neurons gives the same function, so the parameterisation is not identifiable (though the function class may be).
The iid assumption. Throughout this section, observations $X_1, \dots, X_n$ are assumed independent and identically distributed (iid) from $p(x;\theta)$. The iid assumption enables the joint log-likelihood to factorise as a sum:
$$\ell(\theta) = \log \prod_{i=1}^n p(x_i;\theta) = \sum_{i=1}^n \log p(x_i;\theta)$$
This factorisation is what makes MLE tractable, both computationally and theoretically. In practice, the assumption is violated whenever data points are correlated (time series, text sequences, spatially clustered data), and estimation theory has extensions to handle dependent data.
2.2 Point Estimators
Definition 2.2 (Estimator). An estimator of $\theta$ is any measurable function $\hat\theta = T(X_1, \dots, X_n)$ of the sample. The key points:
- $\hat\theta$ is a random variable - before observing data, it is uncertain
- A specific value $\hat\theta(x_1, \dots, x_n)$ after observing data is called an estimate (not estimator)
- The sampling distribution of $\hat\theta$ is the distribution of $\hat\theta$ across all possible samples of size $n$
This distinction - estimator (random variable) vs. estimate (specific realisation) - is pedantic but consequential. Confidence interval coverage is a property of the estimator (random), not the estimate (fixed). When we say "95% CI", we mean that the random interval contains $\theta$ with probability 95%, not that this specific computed interval has a 95% chance of containing $\theta$ (after observing data, $\theta$ is either in the interval or not - there is no remaining randomness).
Examples of estimators for the Gaussian mean $\mu$:
Let $X_1, \dots, X_n \sim \mathcal{N}(\mu, \sigma^2)$ iid.
- Sample mean: $\hat\mu_1 = \bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ - unbiased, consistent, efficient
- Constant zero: $\hat\mu_2 = 0$ - biased unless $\mu = 0$, inconsistent
- First observation: $\hat\mu_3 = X_1$ - unbiased! but inconsistent (variance $\sigma^2$ regardless of $n$)
- Trimmed mean: $\hat\mu_4 =$ mean of the middle 90% of the sorted sample - unbiased for the symmetric Gaussian, biased under skew, but robust to outliers
- Constant $c$: $\hat\mu_5 = c$ - biased by $c - \mu$, inconsistent; can have lower MSE than the sample mean when $|\mu - c|$ is small relative to $\sigma/\sqrt{n}$ (illustrates the bias-variance trade-off)
The existence of unbiased but inconsistent estimators ($X_1$) and consistent but biased estimators (the trimmed mean under skewed data, or the variance MLE of Section 2.4) shows these properties are logically independent.
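The sampling-distribution view can be checked by simulation. A minimal sketch (the choices $\mu = 2$, $\sigma = 1$, $n = 50$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 2.0, 1.0, 50, 20_000
X = rng.normal(mu, sigma, size=(reps, n))      # 20k replicate samples of size n

estimators = {
    "sample mean": X.mean(axis=1),
    "first obs":   X[:, 0],                                      # unbiased, inconsistent
    "trimmed":     np.sort(X, axis=1)[:, n//20 : n - n//20].mean(axis=1),  # ~middle 90%
    "constant 0":  np.zeros(reps),                                # ignores the data
}
for name, est in estimators.items():
    bias, var = est.mean() - mu, est.var()
    print(f"{name:12s} bias={bias:+.3f} var={var:.4f} mse={bias**2 + var:.4f}")
```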
2.3 The Loss Framework: MSE, Bias, and Variance
To compare estimators, we need a criterion. The most widely used is Mean Squared Error (MSE):
Definition 2.3 (MSE, Bias, Variance). For an estimator $\hat\theta$ of a scalar parameter $\theta$:
$$\mathrm{Bias}(\hat\theta) = \mathbb{E}[\hat\theta] - \theta, \qquad \mathrm{Var}(\hat\theta) = \mathbb{E}\big[(\hat\theta - \mathbb{E}[\hat\theta])^2\big], \qquad \mathrm{MSE}(\hat\theta) = \mathbb{E}\big[(\hat\theta - \theta)^2\big]$$
The bias-variance decomposition is the central identity of estimation theory:
$$\mathrm{MSE}(\hat\theta) = \mathrm{Bias}(\hat\theta)^2 + \mathrm{Var}(\hat\theta)$$
Proof: Add and subtract $\mathbb{E}[\hat\theta]$:
$$\mathbb{E}\big[(\hat\theta - \theta)^2\big] = \mathbb{E}\big[(\hat\theta - \mathbb{E}[\hat\theta] + \mathbb{E}[\hat\theta] - \theta)^2\big] = \mathrm{Var}(\hat\theta) + 2b\,\mathbb{E}\big[\hat\theta - \mathbb{E}[\hat\theta]\big] + b^2 = \mathrm{Var}(\hat\theta) + b^2$$
where $b = \mathbb{E}[\hat\theta] - \theta$ is the bias; the cross term vanishes because $\mathbb{E}\big[\hat\theta - \mathbb{E}[\hat\theta]\big] = 0$. $\square$
Geometric interpretation:
BIAS-VARIANCE DECOMPOSITION - GEOMETRIC VIEW
========================================================================
True parameter \theta* = 0, marked x (target)

Case 1: Low bias, low variance (good estimator)
         . .
        . x .        Estimates cluster tightly around \theta*.
         . .         MSE is small.

Case 2: Low bias, high variance (unbiased but noisy)
      .        .
          x          Centred on \theta* but spread out.
      .        .     MSE = Var is large.

Case 3: High bias, low variance (biased but stable)
                 . .
        x       . . .   Centred off target.
                 . .    MSE = Bias^2 + small Var.

Case 4: High bias, high variance (worst case)
              .     .
        x         .     Both centred off target AND spread out.
              .     .

Key trade-off: Reducing variance by shrinkage introduces bias.
Increasing bias via regularisation often reduces variance more than
it increases bias-squared, giving lower MSE overall.
========================================================================
The bias-variance trade-off in ML is the exact same phenomenon: regularisation (L2, dropout, early stopping) introduces bias (the model is pulled away from the MLE toward a constrained region) but reduces variance (the model is less sensitive to the particular training sample), often reducing test MSE overall.
Minimax estimators. MSE is not the only loss criterion. The minimax estimator minimises the worst-case risk: $\hat\theta_{\mathrm{minimax}} = \arg\min_{\hat\theta} \sup_{\theta \in \Theta} \mathrm{MSE}_\theta(\hat\theta)$. The James-Stein estimator (1961) famously showed that for the mean of $\mathcal{N}(\mu, \sigma^2 I_d)$ in $\mathbb{R}^d$ with $d \geq 3$, the sample mean is inadmissible - there exists an estimator with strictly lower MSE at every $\mu$. This is the Stein paradox: shrinkage toward the origin (a biased estimator!) uniformly dominates the unbiased sample mean in 3+ dimensions.
2.4 Examples of Biased and Unbiased Estimators
Example 1: Sample mean is unbiased.
Let $X_1, \dots, X_n$ be iid with $\mathbb{E}[X_i] = \mu$. Then:
$$\mathbb{E}[\bar{X}] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}[X_i] = \mu$$
The sample mean is an unbiased estimator of $\mu$ for any distribution with finite mean.
Example 2: MLE of Gaussian variance is biased.
Let $X_1, \dots, X_n \sim \mathcal{N}(\mu, \sigma^2)$ iid. The MLE of $\sigma^2$ is:
$$\hat\sigma^2_{\mathrm{MLE}} = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2$$
Computing the bias: Using $\sum_i (X_i - \bar{X})^2 = \sum_i (X_i - \mu)^2 - n(\bar{X} - \mu)^2$:
Taking expectations:
$$\mathbb{E}\Big[\sum_i (X_i - \bar{X})^2\Big] = n\sigma^2 - n \cdot \frac{\sigma^2}{n} = (n-1)\,\sigma^2$$
Therefore:
$$\mathbb{E}\big[\hat\sigma^2_{\mathrm{MLE}}\big] = \frac{n-1}{n}\,\sigma^2$$
The bias is $-\sigma^2/n$ - the MLE underestimates variance. The fix is Bessel's correction: $s^2 = \frac{1}{n-1}\sum_i (X_i - \bar{X})^2$ is unbiased.
Recall from Section 01: We saw Bessel's correction introduced as the "standard" sample variance formula. Now we understand why: MLE gives the $1/n$ denominator, which is biased. The $n - 1$ denominator corrects for the one degree of freedom lost in estimating $\mu$ by $\bar{X}$.
Example 3: MLE is biased but consistent.
The Gaussian variance MLE has bias $-\sigma^2/n \to 0$ as $n \to \infty$. It is asymptotically unbiased and consistent. This illustrates that bias can be acceptable if it vanishes with sample size.
Example 4: When bias reduces MSE.
Consider estimating $\mu$ with the shrinkage estimator $\hat\mu_c = c\,\bar{X}$ for some $c \in [0, 1]$. Then:
$$\mathrm{MSE}(c\,\bar{X}) = c^2\,\frac{\sigma^2}{n} + (1 - c)^2 \mu^2$$
The optimal $c^* = \frac{\mu^2}{\mu^2 + \sigma^2/n} < 1$, which gives lower MSE than $c = 1$ (the unbiased sample mean) for every finite $\mu$. The key insight: when $\mu$ is small relative to $\sigma/\sqrt{n}$, shrinking toward zero introduces small bias but large variance reduction.
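A short simulation confirming the algebra - note the "oracle" factor $c^*$ uses the true $\mu$, which a real estimator would not know (all constants here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.5, 2.0, 10, 100_000    # mu small relative to sigma/sqrt(n)
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

c_star = mu**2 / (mu**2 + sigma**2 / n)        # oracle shrinkage factor
for c, label in [(1.0, "sample mean"), (c_star, "shrunk c*")]:
    mse = ((c * xbar - mu) ** 2).mean()
    print(f"{label:12s} c={c:.3f} empirical MSE={mse:.4f}")
# theory: MSE(c*xbar) = c^2 sigma^2/n + (1-c)^2 mu^2  ->  0.40 vs ~0.15 here
```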
3. Properties of Estimators
3.1 Bias and Unbiasedness
Definition 3.1 (Bias). The bias of an estimator $\hat\theta$ for parameter $\theta$ is:
$$\mathrm{Bias}_\theta(\hat\theta) = \mathbb{E}_\theta[\hat\theta] - \theta$$
The estimator is unbiased if $\mathrm{Bias}_\theta(\hat\theta) = 0$ for all $\theta \in \Theta$, and asymptotically unbiased if $\mathrm{Bias}_\theta(\hat\theta_n) \to 0$ as $n \to \infty$.
Why unbiasedness is desirable - but not sacred. An unbiased estimator is correct on average over infinite repetitions of the experiment. This is a natural property to want. However, unbiasedness alone is insufficient:
- The estimator $\hat\mu = X_1$ (use only the first observation) is unbiased but throws away $n - 1$ observations - clearly wasteful
- No unbiased estimator exists for some parameters: there is no unbiased estimator of $1/p$ based on $n$ Bernoulli$(p)$ trials (the expectation of any estimator is a polynomial in $p$, and $1/p$ is not a polynomial)
- The Stein phenomenon (Section 2.3) shows that in 3+ dimensions, unbiased estimators can be uniformly dominated by biased ones
Bias correction. When an estimator's bias is known, we can subtract it: if $\mathbb{E}[\hat\theta] = \theta + b$ with $b$ known, then $\tilde\theta = \hat\theta - b$ is unbiased. This is the idea behind Bessel's correction ($n-1$ vs $n$ in the variance formula) and jackknife bias reduction.
For AI: Batch normalisation computes the batch mean $\mu_B = \frac{1}{m}\sum_{i=1}^m x_i$ during training. This is an unbiased estimator of the feature mean. During inference, PyTorch uses the running mean accumulated over training, which is a biased estimator of the current data mean if the distribution has shifted - illustrating that unbiasedness depends on whether the estimation scenario matches the training scenario.
3.2 Consistency
Definition 3.2 (Consistency). An estimator $\hat\theta_n$ is:
- Weakly consistent if $\hat\theta_n \xrightarrow{p} \theta$ for all $\theta \in \Theta$ (convergence in probability)
- Strongly consistent if $\hat\theta_n \xrightarrow{a.s.} \theta$ for all $\theta \in \Theta$ (almost-sure convergence)
Spelled out, weak consistency says: for every $\epsilon > 0$, $P\big(|\hat\theta_n - \theta| > \epsilon\big) \to 0$ as $n \to \infty$.
Sufficient conditions for consistency:
- If $\mathrm{Bias}(\hat\theta_n) \to 0$ and $\mathrm{Var}(\hat\theta_n) \to 0$ as $n \to \infty$, then $\hat\theta_n$ is weakly consistent. (By Chebyshev + the bias decomposition: $P(|\hat\theta_n - \theta| > \epsilon) \leq \mathrm{MSE}(\hat\theta_n)/\epsilon^2 = (\mathrm{Bias}^2 + \mathrm{Var})/\epsilon^2 \to 0$.)
Example: Sample mean consistency.
$\bar{X}_n$ satisfies $\mathbb{E}[\bar{X}_n] = \mu$ and $\mathrm{Var}(\bar{X}_n) = \sigma^2/n \to 0$. Therefore $\bar{X}_n \xrightarrow{p} \mu$. This is exactly the Weak Law of Large Numbers - consistency of the sample mean is the LLN.
Example: MLE of Gaussian variance is consistent.
$\hat\sigma^2_{\mathrm{MLE}}$ has bias $-\sigma^2/n \to 0$ and variance $O(1/n) \to 0$, so it is consistent even though it is biased for finite $n$.
Inconsistent estimator example: $\hat\mu = X_1$ (take only the first observation) has $\mathrm{Var}(\hat\mu) = \sigma^2$ regardless of $n$. It never concentrates - it is inconsistent.
Why consistency matters: Consistency is the minimal requirement for an estimator to be scientifically useful. A method that doesn't converge to the right answer with infinite data is fundamentally broken. Consistency does not guarantee that the estimator is good for small $n$ - convergence can be arbitrarily slow - but it at least ensures the method is correct in the large-data limit.
For AI: The "double descent" phenomenon in over-parameterised neural networks challenges classical bias-variance thinking. A massively over-parameterised model () can still be consistent for the Bayes-optimal predictor if trained with appropriate implicit regularisation (e.g., gradient descent from zero initialisation). This is an active area connecting statistical estimation theory to modern deep learning theory.
3.3 Efficiency
Among all unbiased estimators of $\theta$, which has the smallest variance? This question is answered by the Cramer-Rao bound (Section 4), but we can define efficiency as a property here.
Definition 3.3 (Efficiency and Relative Efficiency).
- An unbiased estimator $\hat\theta$ is efficient if its variance equals the Cramer-Rao lower bound: $\mathrm{Var}(\hat\theta) = \frac{1}{n I(\theta)}$.
- The relative efficiency of two unbiased estimators $\hat\theta_1$ and $\hat\theta_2$ is $\mathrm{eff}(\hat\theta_1, \hat\theta_2) = \mathrm{Var}(\hat\theta_2) / \mathrm{Var}(\hat\theta_1)$. Values greater than 1 mean $\hat\theta_1$ is more efficient.
Examples:
- Gaussian mean: The sample mean is efficient - it achieves the CRB $\sigma^2/n$. The relative efficiency of the sample median vs. the mean is $2/\pi \approx 0.64$ for Gaussian data - the median throws away ~36% of the efficiency.
- Gaussian variance: The unbiased sample variance $s^2$ is not efficient - its variance $\frac{2\sigma^4}{n-1}$ sits strictly above the CRB $\frac{2\sigma^4}{n}$ (no unbiased estimator attains the bound here; the biased MLE trades bias for lower variance).
Asymptotic efficiency. For large $n$, MLE achieves the CRB asymptotically - it is asymptotically efficient. This is one of its key advantages.
For AI: The sample mean is the most efficient estimator of the mean for Gaussian data. Batch gradient descent using all samples computes the exact gradient (equivalent to efficient estimation), while SGD uses a single sample or mini-batch (less efficient, higher variance, but much faster per update). The trade-off between statistical efficiency and computational efficiency is fundamental to modern ML training.
3.4 Sufficiency and the Rao-Blackwell Theorem
A sufficient statistic captures all the information in the data that is relevant to estimating $\theta$.
Definition 3.4 (Sufficient Statistic). A statistic $T = T(X_1, \dots, X_n)$ is sufficient for $\theta$ if the conditional distribution $p(x_1, \dots, x_n \mid T = t)$ does not depend on $\theta$ for any $t$.
Intuitively: once you know $T$, the raw data provide no additional information about $\theta$.
Fisher-Neyman Factorisation Theorem. $T$ is sufficient for $\theta$ if and only if the likelihood factors as:
$$p(x_1, \dots, x_n;\theta) = g\big(T(x_1, \dots, x_n),\, \theta\big)\; h(x_1, \dots, x_n)$$
where $g$ depends on the data only through $T$, and $h$ does not depend on $\theta$.
Examples of sufficient statistics:
| Model | Sufficient statistic |
|---|---|
| $\mathrm{Bernoulli}(p)$, $n$ samples | $\sum_i X_i$ (total successes) |
| $\mathcal{N}(\mu, \sigma^2)$, $\sigma^2$ known | $\bar{X}$ (sample mean) |
| $\mathcal{N}(\mu, \sigma^2)$, both unknown | $\big(\bar{X},\, \sum_i (X_i - \bar{X})^2\big)$ (mean and SS) |
| $\mathrm{Poisson}(\lambda)$ | $\sum_i X_i$ (total count) |
| $\mathrm{Exponential}(\lambda)$ | $\sum_i X_i$ (total time) |
Rao-Blackwell Theorem. If $\hat\theta$ is any unbiased estimator and $T$ is a sufficient statistic, then $\tilde\theta = \mathbb{E}[\hat\theta \mid T]$ satisfies:
$$\mathbb{E}[\tilde\theta] = \theta \quad \text{and} \quad \mathrm{Var}(\tilde\theta) \leq \mathrm{Var}(\hat\theta)$$
with equality iff $\hat\theta$ is already a function of $T$. The "Rao-Blackwellised" estimator is at least as good.
Proof sketch: By the law of total variance, $\mathrm{Var}(\hat\theta) = \mathbb{E}\big[\mathrm{Var}(\hat\theta \mid T)\big] + \mathrm{Var}\big(\mathbb{E}[\hat\theta \mid T]\big) \geq \mathrm{Var}(\tilde\theta)$. $\square$
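A small illustration under a Poisson model: start from the unbiased-but-wasteful estimator $X_1$ and condition on the sufficient statistic $\sum_i X_i$. By symmetry $\mathbb{E}[X_1 \mid \sum_i X_i] = \bar{X}$, and the variance drops from $\lambda$ to $\lambda/n$ (the constants below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 4.0, 20, 50_000
X = rng.poisson(lam, size=(reps, n))

crude = X[:, 0].astype(float)   # unbiased but noisy: uses only the first observation
rb = X.mean(axis=1)             # Rao-Blackwellised: E[X1 | sum(X)] = sum(X)/n
print(f"crude: mean={crude.mean():.3f} var={crude.var():.3f}")  # var ~ lambda = 4
print(f"R-B:   mean={rb.mean():.3f} var={rb.var():.3f}")        # var ~ lambda/n = 0.2
```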
For AI: Sufficient statistics are the foundation of exponential families, which include Gaussian, Bernoulli, Poisson, Gamma, and most common distributions. The natural parameterisation of exponential families (used in generalised linear models and variational inference) exploits sufficient statistics to simplify computation. In the VAE (Kingma & Welling, 2014), the encoder outputs sufficient statistics of the Gaussian posterior approximation.
3.5 Completeness and the Lehmann-Scheffe Theorem
Definition 3.5 (Complete Statistic). A sufficient statistic $T$ is complete if for any measurable function $g$: $\mathbb{E}_\theta[g(T)] = 0$ for all $\theta \in \Theta$ implies $g(T) = 0$ a.s. for all $\theta$.
Completeness means the only function of $T$ that has zero expectation everywhere is the zero function - there are no "hidden" unbiasedness conditions.
Lehmann-Scheffe Theorem. If $T$ is a complete sufficient statistic and $\tilde\theta = h(T)$ is unbiased for $\theta$, then $\tilde\theta$ is the unique minimum variance unbiased estimator (UMVUE) of $\theta$.
The UMVUE is the "best possible" unbiased estimator - it has the lowest variance among all unbiased estimators, and it is unique.
Example: For $\mathcal{N}(\mu, \sigma^2)$ with $\sigma^2$ known, $\bar{X}$ is complete sufficient, and $\bar{X}$ itself is unbiased for $\mu$. By Lehmann-Scheffe, $\bar{X}$ is the UMVUE of $\mu$ - there is no unbiased estimator with smaller variance. This also confirms that $\bar{X}$ achieves the CRB $\sigma^2/n$.
4. Fisher Information and the Cramer-Rao Bound
4.1 The Score Function
The score function is the fundamental quantity linking estimation and information theory.
Definition 4.1 (Score Function). For a single observation $x$ from $p(x;\theta)$, the score is:
$$s(\theta; x) = \frac{\partial}{\partial\theta} \log p(x;\theta)$$
For a sample of $n$ iid observations, the total score is:
$$s_n(\theta) = \sum_{i=1}^n \frac{\partial}{\partial\theta} \log p(x_i;\theta)$$
The MLE solves $s_n(\hat\theta) = 0$ (the score equation).
Zero-mean property of the score:
$$\mathbb{E}_\theta\big[s(\theta; X)\big] = \int \frac{\partial_\theta\, p(x;\theta)}{p(x;\theta)}\; p(x;\theta)\, dx = \frac{\partial}{\partial\theta}\int p(x;\theta)\, dx = \frac{\partial}{\partial\theta} 1 = 0$$
under regularity conditions permitting interchange of differentiation and integration.
Interpretation: The score measures how sensitively the log-likelihood responds to perturbations of $\theta$. At the true parameter value, the score has zero mean - on average, the likelihood is at a stationary point. But it has positive variance, measuring how much information the data carry about $\theta$.
SCORE FUNCTION - GEOMETRIC INTUITION
========================================================================
Log-likelihood \ell(\theta) = \Sigma_i log p(x_i; \theta) as a function of \theta:

  \ell(\theta)
      |              +-----+
      |           +--+     +--+
      |        +--+           +--+
      |     +--+                 +--+
      +-----+----------^------------+---->  \theta
                       |
                  \theta* (true parameter)
                  slope = score = 0 at maximum

Score s(\theta; data) = d\ell/d\theta is the slope of the log-likelihood.
High variance of the score at \theta* means the data sharply identify
\theta* (the likelihood is "peaky"). Low variance means the data
are relatively uninformative about \theta.
========================================================================
4.2 Fisher Information
Definition 4.2 (Fisher Information). The Fisher information for parameter $\theta$ in model $p(x;\theta)$ is:
$$I(\theta) = \mathbb{E}_\theta\big[s(\theta; X)^2\big] = \mathrm{Var}_\theta\big(s(\theta; X)\big)$$
(The second equality uses $\mathbb{E}_\theta[s(\theta; X)] = 0$.)
Alternative form (under regularity conditions):
$$I(\theta) = -\mathbb{E}_\theta\left[\frac{\partial^2}{\partial\theta^2}\log p(X;\theta)\right]$$
Proof of equivalence: Differentiating the identity $\int \big(\partial_\theta \log p(x;\theta)\big)\, p(x;\theta)\, dx = 0$ with respect to $\theta$:
$$\int \big(\partial^2_\theta \log p\big)\, p\, dx + \int \big(\partial_\theta \log p\big)^2\, p\, dx = 0$$
Therefore $I(\theta) = \mathbb{E}\big[(\partial_\theta \log p)^2\big] = -\mathbb{E}\big[\partial^2_\theta \log p\big]$. $\square$
The second-derivative form has intuitive content: $I(\theta)$ is the expected (negative) curvature of the log-likelihood. High curvature = the likelihood has a sharp peak = the data strongly identify $\theta$. Low curvature = flat likelihood = data relatively uninformative.
Fisher information for an iid sample of $n$ observations:
$$I_n(\theta) = n\, I(\theta)$$
Information accumulates linearly with sample size - doubling the data doubles the information, halving uncertainty in variance terms ($\mathrm{Var} \gtrsim \frac{1}{n I(\theta)}$).
Computed examples:
Bernoulli: $\log p(x; p) = x\log p + (1-x)\log(1-p)$. Score: $\frac{x}{p} - \frac{1-x}{1-p} = \frac{x - p}{p(1-p)}$. Fisher information: $I(p) = \frac{\mathrm{Var}(X)}{[p(1-p)]^2} = \frac{1}{p(1-p)}$.
$\mathcal{N}(\mu, \sigma^2)$ with $\sigma^2$ known: Score: $\frac{x - \mu}{\sigma^2}$. Fisher information: $I(\mu) = \frac{1}{\sigma^2}$.
Poisson: $\log p(x;\lambda) = x\log\lambda - \lambda - \log x!$. Score: $\frac{x - \lambda}{\lambda}$. Fisher information: $I(\lambda) = \frac{1}{\lambda}$.
Pattern: in each of these mean parameterisations, $I(\theta) = 1/\mathrm{Var}_\theta(X)$ - the information is the reciprocal of the observation variance.
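These formulas are easy to sanity-check numerically. A minimal sketch for the Bernoulli case, verifying the zero-mean and variance properties of the score at the true $p$ (the value $p = 0.3$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
p, N = 0.3, 1_000_000
x = rng.binomial(1, p, size=N).astype(float)

score = (x - p) / (p * (1 - p))          # Bernoulli score evaluated at the true p
print(f"mean score: {score.mean():+.4f}  (theory: 0)")
print(f"var  score: {score.var():.4f}  (theory: 1/(p(1-p)) = {1/(p*(1-p)):.4f})")
```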
Multivariate Fisher Information Matrix. For $\theta \in \mathbb{R}^k$:
$$I(\theta)_{jk} = \mathbb{E}_\theta\left[\frac{\partial \log p(X;\theta)}{\partial\theta_j}\cdot\frac{\partial \log p(X;\theta)}{\partial\theta_k}\right]$$
In matrix form: $I(\theta) = \mathbb{E}\big[\nabla_\theta \log p\, (\nabla_\theta \log p)^\top\big] = -\mathbb{E}[H(\theta)]$, where $H(\theta)$ is the Hessian of the log-likelihood.
The FIM is always positive semi-definite (as an expectation of outer products), and positive definite under identifiability.
4.3 The Cramer-Rao Lower Bound
The Cramer-Rao Lower Bound (CRB) is the fundamental limit on estimation accuracy.
Theorem 4.3 (Cramer-Rao Lower Bound). Let $\hat\theta$ be any unbiased estimator of $\theta$ based on $n$ iid observations. Under regularity conditions:
$$\mathrm{Var}(\hat\theta) \geq \frac{1}{n\, I(\theta)}$$
For the multivariate case with unbiased estimator $\hat\theta$ of $\theta \in \mathbb{R}^k$:
$$\mathrm{Cov}(\hat\theta) \succeq \frac{1}{n}\, I(\theta)^{-1}$$
(meaning $\mathrm{Cov}(\hat\theta) - \frac{1}{n} I(\theta)^{-1}$ is positive semidefinite).
Proof (scalar case, single observation):
We need to show $\mathrm{Var}(\hat\theta) \geq 1/I(\theta)$.
Since $\hat\theta$ is unbiased: $\int \hat\theta(x)\, p(x;\theta)\, dx = \theta$. Differentiating with respect to $\theta$:
$$\int \hat\theta(x)\, \partial_\theta p(x;\theta)\, dx = 1 \quad\Longrightarrow\quad \mathbb{E}\big[\hat\theta\, s(\theta; X)\big] = 1$$
Since $\mathbb{E}[s] = 0$, this says $\mathrm{Cov}(\hat\theta, s) = 1$. Now apply the Cauchy-Schwarz inequality to $\mathrm{Cov}(\hat\theta, s)$:
$$1 = \mathrm{Cov}(\hat\theta, s)^2 \leq \mathrm{Var}(\hat\theta)\,\mathrm{Var}(s) = \mathrm{Var}(\hat\theta)\, I(\theta) \quad\Longrightarrow\quad \mathrm{Var}(\hat\theta) \geq \frac{1}{I(\theta)} \qquad \square$$
When is the bound tight? The CRB is achieved (with equality in Cauchy-Schwarz) if and only if $s(\theta; x) = a(\theta)\big(\hat\theta(x) - \theta\big)$ for some function $a(\theta)$. This occurs precisely for exponential family distributions, and the efficient estimator is the MLE.
Biased estimator CRB. For biased estimators with bias $b(\theta)$:
$$\mathrm{Var}(\hat\theta) \geq \frac{\big(1 + b'(\theta)\big)^2}{n\, I(\theta)}$$
This shows that introducing bias (via regularisation) can actually reduce the CRB - a biased estimator faces a different, potentially less demanding bound.
Examples of efficient estimators:
| Model | Parameter | Efficient estimator | CRB | Achieves CRB? |
|---|---|---|---|---|
| $\mathrm{Bernoulli}(p)$ | $p$ | $\bar{X}$ | $\frac{p(1-p)}{n}$ | [ok] |
| $\mathcal{N}(\mu, \sigma^2)$, $\sigma^2$ known | $\mu$ | $\bar{X}$ | $\frac{\sigma^2}{n}$ | [ok] |
| $\mathrm{Poisson}(\lambda)$ | $\lambda$ | $\bar{X}$ | $\frac{\lambda}{n}$ | [ok] |
| $\mathrm{Exponential}$, mean $\theta$ | $\theta$ | $\bar{X}$ | $\frac{\theta^2}{n}$ | [ok] |
The sample mean is efficient for all these single-parameter exponential families. This is not a coincidence - it follows from the structure of exponential families.
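A quick simulation showing the sample proportion sitting on the CRB for the Bernoulli row of the table (the constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
p, reps = 0.3, 200_000
for n in (10, 100, 1000):
    p_hat = rng.binomial(n, p, size=reps) / n       # sample proportions
    crb = p * (1 - p) / n
    print(f"n={n:5d}  Var(p_hat)={p_hat.var():.6f}  CRB={crb:.6f}")
```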
4.4 Fisher Information Matrix in Practice
For a neural network with parameters $\theta$ defining a conditional distribution $p(y \mid x;\theta)$, the FIM is:
$$F(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}},\; y \sim p(y \mid x;\theta)}\Big[\nabla_\theta \log p(y \mid x;\theta)\, \nabla_\theta \log p(y \mid x;\theta)^\top\Big]$$
This is an expectation of outer products of gradient vectors, making it a PSD matrix. For modern LLMs with $P$ in the hundreds of billions of parameters, this $P \times P$ matrix is completely intractable to store (it would require on the order of $P^2$ entries).
Empirical FIM. In practice, approximate by sampling:
$$\hat{F}(\theta) = \frac{1}{N}\sum_{i=1}^N \nabla_\theta \log p(y_i \mid x_i;\theta)\, \nabla_\theta \log p(y_i \mid x_i;\theta)^\top$$
Its diagonal is the average squared gradient - exactly what is accumulated in gradient variance estimates.
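A minimal sketch of the empirical FIM for a toy logistic-regression model, where the per-example score has the closed form $(y - \sigma(w^\top x))\,x$ (the weights and data below are made up):

```python
import numpy as np

def empirical_fim(grads: np.ndarray) -> np.ndarray:
    """Average outer product of per-example score vectors: (N, P) -> (P, P)."""
    return grads.T @ grads / len(grads)

rng = np.random.default_rng(6)
w = np.array([1.0, -2.0])
X = rng.normal(size=(5000, 2))
probs = 1 / (1 + np.exp(-X @ w))
y = rng.binomial(1, probs)                 # labels drawn from the model itself
grads = (y - probs)[:, None] * X           # (N, P) per-example score vectors
print(empirical_fim(grads))                # PSD 2x2 estimate of the FIM
```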
Natural gradient (Amari, 1998). The ordinary gradient $\nabla_\theta \ell$ is the steepest ascent direction in Euclidean parameter space. But parameters do not live in Euclidean space - they index points on a statistical manifold of distributions, where the natural metric is the Fisher information. The natural gradient is:
$$\tilde\nabla_\theta \ell = F(\theta)^{-1}\, \nabla_\theta \ell$$
This is the steepest ascent direction in the metric defined by the FIM. The natural gradient is invariant to reparametrisation: if we change coordinates from $\theta$ to $\phi(\theta)$, the natural gradient update induces the same change in the underlying distribution (to first order).
K-FAC approximation (Martens & Grosse, 2015). Full FIM inversion costs $O(P^3)$. K-FAC approximates $F$ by assuming layer-wise independence and a Kronecker factorisation of each layer's block: $F_\ell \approx A_\ell \otimes G_\ell$, where $A_\ell = \mathbb{E}[a a^\top]$ (input activation covariance) and $G_\ell = \mathbb{E}[g g^\top]$ (pre-activation gradient covariance). Inverting the Kronecker product factorises: $(A \otimes G)^{-1} = A^{-1} \otimes G^{-1}$, reducing the cost from cubic in the layer's parameter count to cubic in its input and output dimensions.
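The Kronecker identity that makes K-FAC cheap can be verified directly in a few lines (the random SPD factors below merely stand in for real activation and gradient covariances):

```python
import numpy as np

rng = np.random.default_rng(7)
d_in, d_out = 4, 3
# Random SPD matrices standing in for A (input cov) and G (gradient cov)
A = (lambda M: M @ M.T + np.eye(d_in))(rng.normal(size=(d_in, d_in)))
G = (lambda M: M @ M.T + np.eye(d_out))(rng.normal(size=(d_out, d_out)))

F = np.kron(A, G)                                    # layer FIM under K-FAC's assumption
direct = np.linalg.inv(F)                            # cubic in d_in * d_out
kfac = np.kron(np.linalg.inv(A), np.linalg.inv(G))   # cubic in d_in and d_out separately
print(np.allclose(direct, kfac))                     # True: (A (x) G)^-1 = A^-1 (x) G^-1
```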
Jeffreys prior. In Bayesian statistics (preview of Section 04), the Jeffreys prior $\pi(\theta) \propto \sqrt{I(\theta)}$ is the "uninformative" prior that is invariant to reparametrisation. For $p$ in Bernoulli: $I(p) = \frac{1}{p(1-p)}$, so $\pi(p) \propto p^{-1/2}(1-p)^{-1/2}$, which is $\mathrm{Beta}(1/2, 1/2)$.
Preview: MAP Estimation. Adding a prior $\pi(\theta)$ and finding $\hat\theta_{\mathrm{MAP}} = \arg\max_\theta \big[\ell(\theta) + \log\pi(\theta)\big]$ gives the Maximum A Posteriori (MAP) estimate - MLE regularised by the log-prior. MAP with a Gaussian prior gives L2-regularised MLE. The full Bayesian treatment of priors, posteriors, and conjugate families is in Section 04 Bayesian Inference.
5. Maximum Likelihood Estimation
5.1 The Likelihood Principle
Definition 5.1 (Likelihood Function). Given observed data $x_1, \dots, x_n$ and a parametric model $p(x;\theta)$, the likelihood function is:
$$L(\theta) = \prod_{i=1}^n p(x_i;\theta)$$
The log-likelihood is:
$$\ell(\theta) = \sum_{i=1}^n \log p(x_i;\theta)$$
The Maximum Likelihood Estimator (MLE) is:
$$\hat\theta_{\mathrm{MLE}} = \arg\max_{\theta \in \Theta} \ell(\theta)$$
The likelihood is not a probability. $L(\theta)$ is a function of $\theta$ for fixed data $x_1, \dots, x_n$, not a probability distribution over $\theta$. It does not integrate to 1 over $\theta$. The statement "$L(\theta_0) = 0.3$" means "the data have probability (density) 0.3 under parameter $\theta_0$", not "there is a 30% chance $\theta_0$ is the true parameter".
Why log? Three reasons:
- Computational: $\prod_i p(x_i;\theta)$ underflows to zero for large $n$ (e.g., $0.1^{400} = 10^{-400}$ is below float64's smallest positive number, roughly $10^{-308}$); sums of logs are numerically stable
- Mathematical: sums are easier to differentiate and optimise than products
- Statistical: log is a monotone transformation, so $\arg\max_\theta L(\theta) = \arg\max_\theta \ell(\theta)$
The likelihood principle states that all information about $\theta$ in the data is contained in the likelihood function. Two datasets with proportional likelihoods should lead to the same inferences about $\theta$. This is a philosophical principle - frequentists may not accept it fully (confidence intervals depend on the sampling procedure, not just the likelihood), but it underlies MLE and Bayesian inference alike.
5.2 Deriving MLEs: Scalar Parameters
The standard procedure for finding the MLE:
- Write the log-likelihood $\ell(\theta)$
- Differentiate and set $\ell'(\theta) = 0$ (the score equation)
- Verify it is a maximum (second derivative or global structure)
- Check boundary cases (the maximum might be at the boundary of $\Theta$)
Bernoulli MLE. Let $X_1, \dots, X_n \sim \mathrm{Bernoulli}(p)$.
$$\ell(p) = S\log p + (n - S)\log(1 - p)$$
where $S = \sum_i X_i$ is the total number of successes. Setting $\ell'(p) = \frac{S}{p} - \frac{n - S}{1 - p} = 0$:
$$\hat{p}_{\mathrm{MLE}} = \frac{S}{n}$$
The MLE is the sample proportion - the intuitive answer.
Poisson MLE. Let $X_1, \dots, X_n \sim \mathrm{Poisson}(\lambda)$.
$$\ell(\lambda) = -n\lambda + \Big(\sum_i x_i\Big)\log\lambda - \sum_i \log x_i!$$
Setting $\ell'(\lambda) = -n + \frac{\sum_i x_i}{\lambda} = 0$: $\hat\lambda_{\mathrm{MLE}} = \bar{x}$.
The MLE of the Poisson rate is the sample mean - the estimator of the mean equals the MLE because $\mathbb{E}[X] = \lambda$.
Exponential MLE. Let $X_1, \dots, X_n \sim \mathrm{Exponential}(\lambda)$, i.e., $p(x;\lambda) = \lambda e^{-\lambda x}$ for $x \geq 0$.
$$\ell(\lambda) = n\log\lambda - \lambda\sum_i x_i$$
Setting $\ell'(\lambda) = \frac{n}{\lambda} - \sum_i x_i = 0$: $\hat\lambda_{\mathrm{MLE}} = \frac{1}{\bar{x}}$.
The MLE of the rate is the reciprocal of the sample mean - since $\mathbb{E}[X] = 1/\lambda$, this is natural.
Uniform MLE. Let $X_1, \dots, X_n \sim \mathrm{Uniform}(0, \theta)$, so $p(x;\theta) = \frac{1}{\theta}$ for $0 \leq x \leq \theta$.
$$L(\theta) = \theta^{-n}\;\mathbb{1}\big[\theta \geq \max_i x_i\big]$$
The likelihood is zero for $\theta < \max_i x_i$ (some observation would be outside $[0, \theta]$) and decreasing in $\theta$ for $\theta \geq \max_i x_i$. The MLE is at the boundary: $\hat\theta_{\mathrm{MLE}} = \max_i X_i = X_{(n)}$.
This is an example where the score equation gives no solution (the log-likelihood has no interior critical point); the MLE is found by reasoning about the likelihood's shape. Also, the MLE here is biased: $\mathbb{E}[X_{(n)}] = \frac{n}{n+1}\theta < \theta$.
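A simulation of the boundary MLE and its bias (the values $\theta = 5$, $n = 20$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
theta, n, reps = 5.0, 20, 100_000
X = rng.uniform(0, theta, size=(reps, n))
mle = X.max(axis=1)                        # boundary MLE: the sample maximum
print(f"E[mle] ~ {mle.mean():.3f}  (theory: n/(n+1)*theta = {n/(n+1)*theta:.3f})")
```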
5.3 Deriving MLEs: Multivariate Gaussian
Let $X_1, \dots, X_n \sim \mathcal{N}(\mu, \Sigma)$ with $\mu \in \mathbb{R}^d$ and $\Sigma \in \mathbb{R}^{d \times d}$ positive definite.
The log-likelihood is:
$$\ell(\mu, \Sigma) = -\frac{n}{2}\log\det(2\pi\Sigma) - \frac{1}{2}\sum_{i=1}^n (x_i - \mu)^\top \Sigma^{-1} (x_i - \mu)$$
MLE of $\mu$: Taking the gradient with respect to $\mu$ and setting it to zero:
$$\hat\mu_{\mathrm{MLE}} = \frac{1}{n}\sum_{i=1}^n x_i = \bar{x}$$
MLE of $\Sigma$: Substituting $\hat\mu$ and differentiating with respect to $\Sigma$ (using the matrix calculus identities $\partial_\Sigma \log\det\Sigma = \Sigma^{-1}$ and $\partial_\Sigma\, \mathrm{tr}(\Sigma^{-1} A) = -\Sigma^{-1} A\, \Sigma^{-1}$):
$$\hat\Sigma_{\mathrm{MLE}} = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^\top$$
Bias of $\hat\Sigma_{\mathrm{MLE}}$: By the same calculation as in the scalar case, $\mathbb{E}[\hat\Sigma_{\mathrm{MLE}}] = \frac{n-1}{n}\Sigma$. The unbiased estimator is $\frac{1}{n-1}\sum_i (x_i - \bar{x})(x_i - \bar{x})^\top$.
For AI: Fitting a Gaussian to activations or embeddings is done in numerous ML methods. Whitening transforms, weight initialisation heuristics (Xavier/He initialisation matches second moments), and covariance-regularised fine-tuning all use this MLE formula for the empirical covariance of a layer's activations. The multivariate Gaussian MLE is also the building block for Gaussian mixture models (EM algorithm) and linear discriminant analysis.
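A minimal sketch of both formulas on synthetic data (the true $\mu$ and $\Sigma$ below are made up):

```python
import numpy as np

rng = np.random.default_rng(9)
mu_true = np.array([1.0, -1.0])
Sigma_true = np.array([[2.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal(mu_true, Sigma_true, size=10_000)

mu_hat = X.mean(axis=0)                               # MLE of the mean
centred = X - mu_hat
Sigma_mle = centred.T @ centred / len(X)              # divide by n (biased)
Sigma_unbiased = centred.T @ centred / (len(X) - 1)   # divide by n-1 (Bessel)
print(mu_hat, Sigma_mle, sep="\n")
```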
5.4 MLE as Cross-Entropy Minimisation
This connection is perhaps the most important in all of applied ML.
Theorem 5.4. For a classification model $p(y \mid x;\theta)$ and iid data $(x_1, y_1), \dots, (x_n, y_n)$:
$$\arg\max_\theta \sum_{i=1}^n \log p(y_i \mid x_i;\theta) = \arg\min_\theta \mathcal{L}_{\mathrm{CE}}(\theta)$$
The right-hand side involves the negative log-likelihood (NLL) or cross-entropy loss:
$$\mathcal{L}_{\mathrm{CE}}(\theta) = -\frac{1}{n}\sum_{i=1}^n \log p(y_i \mid x_i;\theta)$$
Connection to KL divergence: Define the empirical distribution $\hat{p}_n$, which places mass $\frac{1}{n}$ on each observed data point. Then:
$$D_{\mathrm{KL}}\big(\hat{p}_n \,\|\, p_\theta\big) = -H(\hat{p}_n) - \frac{1}{n}\,\ell(\theta)$$
Since $H(\hat{p}_n)$ does not depend on $\theta$:
$$\hat\theta_{\mathrm{MLE}} = \arg\max_\theta \ell(\theta) = \arg\min_\theta D_{\mathrm{KL}}\big(\hat{p}_n \,\|\, p_\theta\big)$$
MLE minimises the KL divergence from the data distribution to the model. This is the information-theoretic interpretation of MLE.
For language models: An LLM defines the next-token distribution $p(x_t \mid x_{<t};\theta)$. Training by NLL is MLE:
$$\mathcal{L}(\theta) = -\sum_t \log p(x_t \mid x_{<t};\theta)$$
where the sum is over all tokens in the training corpus. This objective is used verbatim in every large language model - GPT-4 (OpenAI, 2023), LLaMA-3 (Meta, 2024), Gemini (Google, 2023), Claude (Anthropic, 2024). The perplexity, $\exp(\mathcal{L}/T)$ for a corpus of $T$ tokens, is the standard evaluation metric.
Label smoothing as regularised MLE. Standard MLE uses hard labels (one-hot). Label smoothing (Szegedy et al., 2016) uses soft targets $\tilde{y}_k = (1 - \epsilon)\,\mathbb{1}[k = y] + \frac{\epsilon}{K}$, adding a small weight $\frac{\epsilon}{K}$ to incorrect classes. This is equivalent to regularising MLE by mixing the empirical label distribution with a uniform distribution - reducing overconfidence and improving calibration.
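A minimal sketch of the hard-label NLL versus the label-smoothed cross-entropy on toy logits (all numbers here are hypothetical; $\epsilon = 0.1$, $K = 4$):

```python
import numpy as np

def cross_entropy(log_probs: np.ndarray, targets: np.ndarray) -> float:
    """Cross-entropy with soft targets; equals the NLL when targets are one-hot."""
    return float(-(targets * log_probs).sum(axis=1).mean())

logits = np.array([[3.0, 0.5, -1.0, 0.0], [0.2, 2.5, 0.1, -0.5]])
z = logits - logits.max(axis=1, keepdims=True)               # stable log-softmax
log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))

onehot = np.eye(4)[[0, 1]]                                    # true classes 0 and 1
eps, K = 0.1, 4
smoothed = (1 - eps) * onehot + eps / K
print(f"hard-label NLL (MLE):         {cross_entropy(log_probs, onehot):.4f}")
print(f"label-smoothed cross-entropy: {cross_entropy(log_probs, smoothed):.4f}")
```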
5.5 Properties of MLE
Under regularity conditions (the "Cramer-Rao conditions" on the model), MLE satisfies:
1. Consistency: $\hat\theta_{\mathrm{MLE}} \xrightarrow{p} \theta^*$. The proof uses the fact that the expected log-likelihood $\theta \mapsto \mathbb{E}[\log p(X;\theta)]$ is uniquely maximised at the true parameter (by the non-negativity of KL divergence, strict under identifiability), and the LLN ensures $\frac{1}{n}\ell_n(\theta) \to \mathbb{E}[\log p(X;\theta)]$ uniformly.
2. Asymptotic normality:
$$\sqrt{n}\,\big(\hat\theta_{\mathrm{MLE}} - \theta^*\big) \xrightarrow{d} \mathcal{N}\big(0,\; I(\theta^*)^{-1}\big)$$
For large $n$: $\hat\theta_{\mathrm{MLE}} \approx \mathcal{N}\big(\theta^*,\; \frac{1}{n}I(\theta^*)^{-1}\big)$.
3. Asymptotic efficiency: The asymptotic covariance $\frac{1}{n}I(\theta^*)^{-1}$ achieves the CRB. No consistent estimator can have smaller asymptotic variance (at every $\theta$). MLE is asymptotically optimal.
4. Invariance under reparametrisation: If $\hat\theta$ is the MLE of $\theta$, then $g(\hat\theta)$ is the MLE of $g(\theta)$ for any function $g$ (not required to be injective). This is the invariance property of MLE.
Examples of invariance:
- MLE of $\sigma$ is $\sqrt{\hat\sigma^2_{\mathrm{MLE}}}$ (not $\sqrt{s^2}$, which comes from the unbiased variance estimator)
- MLE of the mean $1/\lambda$ in Exponential$(\lambda)$ is $\bar{X}$ (consistent with $\hat\lambda_{\mathrm{MLE}} = 1/\bar{X}$)
- MLE of the odds $\frac{p}{1-p}$ in Bernoulli is $\frac{\hat{p}}{1-\hat{p}}$ (biased, but the MLE)
Regularity conditions (Cramer-Rao conditions). These are the sufficient conditions for the above properties:
- The parameter space $\Theta$ is an open subset of $\mathbb{R}^k$
- The model is identifiable
- The support of $p(x;\theta)$ does not depend on $\theta$ (excludes Uniform$(0,\theta)$)
- The log-likelihood is three times differentiable in $\theta$
- The Fisher information matrix is positive definite at $\theta^*$
When these fail (as in the Uniform$(0,\theta)$ example), MLE may still be the natural estimator but its properties differ.
5.6 Numerical MLE
For complex models, the score equation has no closed-form solution and must be solved numerically.
Gradient ascent on log-likelihood. The simplest approach:
$$\theta_{t+1} = \theta_t + \eta\, \nabla_\theta \ell(\theta_t)$$
For neural networks trained with mini-batches: stochastic gradient ascent on the mini-batch log-likelihood. This is exactly SGD on the NLL loss with the sign flipped.
Newton-Raphson / Fisher scoring. Use second-order information:
$$\theta_{t+1} = \theta_t - H(\theta_t)^{-1}\, \nabla_\theta \ell(\theta_t)$$
where $H$ is the Hessian of the log-likelihood. Replacing the Hessian with the expected negative Hessian (Fisher information) gives Fisher scoring:
$$\theta_{t+1} = \theta_t + I(\theta_t)^{-1}\, \nabla_\theta \ell(\theta_t)$$
This is exactly the natural gradient update. Newton-Raphson converges quadratically near the maximum - much faster than gradient ascent - but requires inverting a $k \times k$ matrix per step.
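A minimal sketch of Fisher scoring for logistic regression, where the expected and observed Hessians coincide, so Fisher scoring equals Newton-Raphson (this equivalence is specific to canonical-link models like logistic regression; the data below are synthetic):

```python
import numpy as np

def fisher_scoring_logistic(X: np.ndarray, y: np.ndarray, iters: int = 10) -> np.ndarray:
    """MLE for logistic regression via Fisher scoring (a.k.a. IRLS)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (y - p)                  # score of the log-likelihood
        W = p * (1 - p)                       # per-example weights
        I = X.T @ (W[:, None] * X)            # Fisher information matrix
        w = w + np.linalg.solve(I, grad)      # natural-gradient / Newton step
    return w

rng = np.random.default_rng(10)
X = np.hstack([np.ones((2000, 1)), rng.normal(size=(2000, 2))])
w_true = np.array([0.5, 1.0, -2.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ w_true)))
print(fisher_scoring_logistic(X, y))          # close to w_true
```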
EM algorithm. For models with latent variables (mixtures, HMMs, VAEs), the EM algorithm alternates between:
- E-step: compute $Q(\theta \mid \theta_t) = \mathbb{E}_{Z \sim p(z \mid x;\,\theta_t)}\big[\log p(x, Z;\theta)\big]$
- M-step: $\theta_{t+1} = \arg\max_\theta Q(\theta \mid \theta_t)$
EM guarantees that $\ell(\theta_{t+1}) \geq \ell(\theta_t)$ - the log-likelihood is non-decreasing. EM is a special case of variational inference (to be developed in Section 04 Bayesian Inference).
Numerical pitfalls:
- Overflow/underflow: compute $\sum_i \log p_i$, not $\log \prod_i p_i$ (avoid multiplying probabilities)
- Log-sum-exp trick: for softmax-based likelihoods, $\log \sum_i e^{a_i} = m + \log \sum_i e^{a_i - m}$ with $m = \max_i a_i$
- Flat log-likelihoods: when the curvature is near zero close to the solution, gradient methods converge extremely slowly; natural gradient methods help
- Local maxima: the log-likelihood may be multimodal for mixture models and neural networks; multiple restarts are necessary
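A minimal, self-contained implementation of the log-sum-exp trick from the second bullet above:

```python
import numpy as np

def log_sum_exp(a: np.ndarray) -> float:
    """Numerically stable log(sum(exp(a))): subtract the max before exponentiating."""
    m = a.max()
    return float(m + np.log(np.exp(a - m).sum()))

a = np.array([1000.0, 1001.0, 999.0])
print(log_sum_exp(a))   # ~1001.41; the naive np.log(np.exp(a).sum()) overflows
```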