Introduction to Probability and Random Variables - Part 13: Why This Matters for AI (2026 Perspective) through Appendix Q: Historical Development of Probability Theory
13. Why This Matters for AI (2026 Perspective)
| Concept | Impact on AI/LLMs |
|---|---|
| Kolmogorov axioms | The foundation of every probabilistic model. MLE, MAP, VAEs, diffusion models, Bayesian NNs - all rest on these three axioms. Knowing the axioms lets you verify whether a proposed probability model is well-defined. |
| Conditional probability | The core operation in Bayesian inference and autoregressive language modelling. Every next-token prediction $P(x_t \mid x_{<t})$ in a language model is a conditional probability. Causal attention implements conditional independence structure. |
| Bayes' theorem | Underlies RLHF (updating reward model beliefs from human feedback), Bayesian hyperparameter search, and the theoretical analysis of in-context learning as implicit Bayesian inference. |
| Independence | The i.i.d. assumption justifies mini-batch gradient descent. Conditional independence defines the graphical structure of Bayesian networks and the Markov property of diffusion models. |
| PMF / Categorical distribution | Every token prediction in a language model is a Categorical PMF. Softmax produces a valid PMF over the vocabulary: non-negative entries that sum to 1. |
| Gaussian distribution | Used in weight initialisation (He, Glorot), VAE latent spaces, diffusion forward processes, Gaussian process priors, and noise injection for differential privacy. The CLT (Section06) explains its ubiquity. |
| Expected value | The training objective is an expected loss over the data distribution. Linearity of expectation justifies mini-batch gradient estimation. |
| Variance | Gradient variance determines training stability. BN and LN are variance-control mechanisms. Dropout variance analysis guides the choice of dropout rate. |
| Change of variables | Foundation of normalising flows (Glow, RealNVP, neural spline flows). The change-of-variables formula computes the density of a transformed distribution. |
| Negative log-likelihood | The universal training objective. Cross-entropy loss (classification), MSE (regression under Gaussian noise), ELBO (VAEs) all derive from minimising NLL. |
Conceptual Bridge
This section establishes the probability-theoretic language that all subsequent sections build upon. We began from the Kolmogorov axioms - three simple rules that define what probabilities must be - and derived the core computational rules (complement, union, conditioning, total probability, Bayes) that allow us to reason under uncertainty. The random variable formalism converts outcomes to numbers, enabling the full machinery of analysis.
Looking back: The section draws on set theory (union, intersection, complement - assumed), on calculus for the continuous case (integration for CDFs and PDFs, differentiation for the fundamental theorem), and on the concept of a function (random variables are functions). The proof techniques (direct calculation from axioms, algebraic manipulation) mirror those in linear algebra and calculus sections of this course.
Looking forward - within this chapter:
- Section02-Common-Distributions takes the distribution vocabulary started here (Bernoulli, Uniform, Gaussian) and completes it: Binomial, Poisson, Exponential, Beta, Dirichlet, Categorical, Student-t, plus the exponential family.
- Section03-Joint-Distributions extends the single-variable machinery to multiple random variables: joint PDFs, marginalisation, and Bayes' theorem for random variables rather than events.
- Section04-Expectation-and-Moments develops expectation much more deeply: LOTUS, covariance matrices, MGFs, characteristic functions, and the full moment theory.
- Section05-Concentration-Inequalities proves how expectations and variances control tail probabilities (Markov, Chebyshev, Hoeffding) - the mathematical tools for generalisation guarantees.
- Section06-Stochastic-Processes introduces the Central Limit Theorem (explaining Gaussian ubiquity) and Gaussian processes (infinite-dimensional generalisations).
- Section07-Markov-Chains uses conditional independence (Section4.2) and Bayes' theorem (Section3.6) to define memoryless sequential processes and MCMC sampling.
Looking forward - across chapters:
- Chapter 7 (Statistics) builds estimation theory (MLE, MAP, confidence intervals) directly on the probability foundation here.
- Chapter 8 (Optimisation) uses the Gaussian noise model (Section7.4) and NLL loss (Section10.1) to motivate SGD and Adam.
- Chapter 9 (Information Theory) defines entropy as $\mathbb{E}[-\log p(X)]$ and KL divergence in terms of two probability measures - direct extensions of Section8.1 and Section10.1.
POSITION IN CURRICULUM
========================================================================
Section04 Calculus Fundamentals
Section05 Multivariate Calculus
|
+-- Section06 Probability Theory
+-- 01-Introduction-and-Random-Variables <== YOU ARE HERE
| (probability spaces, RVs, CDF/PDF/PMF, E[X], Var[X])
+-- 02-Common-Distributions
| (Binomial, Poisson, Gaussian, Beta, Dirichlet, ...)
+-- 03-Joint-Distributions
| (joint PDF, marginalisation, Bayes for RVs)
+-- 04-Expectation-and-Moments
| (LOTUS, covariance, MGF, higher moments)
+-- 05-Concentration-Inequalities
| (Markov, Chebyshev, Hoeffding, PAC bounds)
+-- 06-Stochastic-Processes
| (LLN, CLT, Gaussian processes)
+-- 07-Markov-Chains
(MCMC, detailed balance, stationary distributions)
========================================================================
Appendix A: The Gaussian Normalisation Integral
The fact that $\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}$ is not obvious. Here is the classical proof using a two-dimensional trick.
Claim: $\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}$.
Proof (Gaussian integral via polar coordinates): Let $I = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx$. Then
$$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy$$
Convert to polar coordinates with $x = r\cos\theta$, $y = r\sin\theta$, $dx\,dy = r\,dr\,d\theta$:
$$I^2 = \int_0^{2\pi}\int_0^{\infty} e^{-r^2/2}\,r\,dr\,d\theta$$
Let $u = r^2/2$, $du = r\,dr$:
$$I^2 = \int_0^{2\pi}\left(\int_0^{\infty} e^{-u}\,du\right)d\theta = \int_0^{2\pi} 1\,d\theta = 2\pi$$
Therefore $I = \sqrt{2\pi}$ (since $I > 0$).
This means the normalisation constant $\frac{1}{\sqrt{2\pi\sigma^2}}$ in the Gaussian PDF is exactly what is needed to make the integral equal 1. The proof that the Gaussian with any $\mu$ and $\sigma^2$ integrates to 1 follows by the substitution $z = (x-\mu)/\sigma$.
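The claim can be checked numerically - a sketch using the trapezoid rule on a truncated domain (the tail mass beyond $|x| = 10$ is negligible):

```python
import numpy as np

# Trapezoid-rule approximation of the Gaussian integral on [-10, 10].
x = np.linspace(-10.0, 10.0, 200_001)
y = np.exp(-x**2 / 2)
integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

print(integral, np.sqrt(2 * np.pi))  # both approximately 2.5066283
assert abs(integral - np.sqrt(2 * np.pi)) < 1e-6
```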
Appendix B: Discrete Probability - Worked Examples in Full
B.1 The Birthday Problem
Question: In a room of $n$ people, what is the probability that at least two share a birthday (assuming 365 days, uniform distribution, no leap years)?
This is a classic application of the complement rule.
Let $A$ = "at least two people share a birthday" and $A^c$ = "all birthdays are distinct."
$$P(A^c) = \frac{365}{365}\cdot\frac{364}{365}\cdots\frac{365-n+1}{365} = \prod_{k=0}^{n-1}\frac{365-k}{365}$$
The probability of at least one shared birthday: $P(A) = 1 - P(A^c)$.
| $n$ | $P(A)$ |
|---|---|
| 10 | 11.7% |
| 23 | 50.7% - crossover point |
| 50 | 97.0% |
| 70 | 99.9% |
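The product formula is easy to evaluate directly; a minimal sketch reproducing the table:

```python
def p_shared_birthday(n: int) -> float:
    """P(at least two of n people share a birthday), 365 equally likely days."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365
    return 1.0 - p_distinct

for n in (10, 23, 50, 70):
    print(n, round(p_shared_birthday(n), 3))  # 0.117, 0.507, 0.97, 0.999
```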
For AI: The birthday paradox is closely related to hash collision probability and the birthday attack in cryptography. In machine learning, it appears in:
- Retrieval collision: The probability that two embeddings in a high-dimensional space are closer than a threshold depends on how many embeddings exist - birthday-paradox type reasoning
- Token collision in tokenisation: In BPE tokenisation, character pairs collide and merge; the frequency of collisions follows birthday-paradox dynamics
- Data deduplication: Estimating the probability of exact or near-duplicate examples in large training datasets
B.2 Geometric Series and PMF Normalisation
Many PMFs involve geometric series. Recall: $\sum_{k=0}^{\infty} r^k = \frac{1}{1-r}$ for $|r| < 1$.
Application to the Geometric PMF: $P(X = k) = (1-p)^{k-1}p$ for $k = 1, 2, 3, \dots$ The probabilities sum to 1: $\sum_{k=1}^{\infty}(1-p)^{k-1}p = \frac{p}{1-(1-p)} = 1$.
Computing the mean of the Geometric via differentiation of the generating function (using $\sum_{k=1}^{\infty} k r^{k-1} = \frac{1}{(1-r)^2}$):
$$\mathbb{E}[X] = \sum_{k=1}^{\infty} k(1-p)^{k-1}p = \frac{p}{(1-(1-p))^2} = \frac{1}{p}$$
The expected wait time is $1/p$ - intuitively, if each trial has probability $p$ of success, on average you need $1/p$ trials.
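A quick simulation of the waiting time, as a sanity check on $\mathbb{E}[X] = 1/p$ (the parameters are illustrative):

```python
import random

random.seed(0)
p = 0.25
trials = 200_000

def geometric_draw(p: float) -> int:
    """Number of Bernoulli(p) trials until (and including) the first success."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

mean = sum(geometric_draw(p) for _ in range(trials)) / trials
print(mean)  # close to 1/p = 4.0
```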
Appendix C: Continuous Distributions - Computing Moments Directly
C.1 Gaussian Mean and Variance from First Principles
Mean of $X \sim \mathcal{N}(\mu, \sigma^2)$: By symmetry of the Gaussian PDF about $\mu$, $\mathbb{E}[X] = \mu$. Explicitly:
$$\mathbb{E}[X] = \int_{-\infty}^{\infty} x\,\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(x-\mu)^2/(2\sigma^2)}\,dx$$
Substitute $z = (x-\mu)/\sigma$ so $x = \mu + \sigma z$ and $dx = \sigma\,dz$:
$$\mathbb{E}[X] = \int_{-\infty}^{\infty}(\mu + \sigma z)\,\phi(z)\,dz = \mu\cdot 1 + \sigma\cdot 0 = \mu$$
(the $z\,\phi(z)$ integrand is odd, so its integral vanishes).
Variance of $X$: With the same substitution, $\mathrm{Var}[X] = \sigma^2\int_{-\infty}^{\infty} z^2\phi(z)\,dz$ where $\phi(z) = \frac{1}{\sqrt{2\pi}}e^{-z^2/2}$.
Integrate by parts: $u = z$, $dv = z\,e^{-z^2/2}\,dz$, so $v = -e^{-z^2/2}$:
$$\int_{-\infty}^{\infty} z^2 e^{-z^2/2}\,dz = \left[-z\,e^{-z^2/2}\right]_{-\infty}^{\infty} + \int_{-\infty}^{\infty} e^{-z^2/2}\,dz = 0 + \sqrt{2\pi}$$
Therefore $\mathrm{Var}[X] = \sigma^2\cdot\frac{\sqrt{2\pi}}{\sqrt{2\pi}} = \sigma^2$.
C.2 Uniform Distribution Variance - Direct Computation
For $X \sim \mathrm{Uniform}(a, b)$:
$$\mathbb{E}[X] = \frac{a+b}{2}, \qquad \mathbb{E}[X^2] = \frac{1}{b-a}\int_a^b x^2\,dx = \frac{b^3-a^3}{3(b-a)} = \frac{a^2+ab+b^2}{3}$$
$$\mathrm{Var}[X] = \frac{a^2+ab+b^2}{3} - \left(\frac{a+b}{2}\right)^2 = \frac{(b-a)^2}{12}$$
using the factorisation $b^3 - a^3 = (b-a)(a^2 + ab + b^2)$.
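Both closed forms can be checked against large samples; a sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.5
a, b = -1.0, 3.0
n = 1_000_000

g = rng.normal(mu, sigma, n)
u = rng.uniform(a, b, n)

print(g.var(), sigma**2)           # sample variance near 2.25
print(u.var(), (b - a)**2 / 12)    # sample variance near 4/3
```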
Appendix D: Key Inequalities and Their Probability Theory Roots
Several important inequalities follow directly from the basic probability theory developed in this section:
D.1 Markov's Inequality (Preview)
For any non-negative random variable $X$ and $a > 0$:
$$P(X \ge a) \le \frac{\mathbb{E}[X]}{a}$$
Proof sketch: $\mathbb{E}[X] \ge \mathbb{E}[X\,\mathbf{1}\{X \ge a\}] \ge a\,P(X \ge a)$.
This connects the expected value (Section8.1) to tail probabilities. Full treatment with applications to PAC bounds: Section05-Concentration-Inequalities.
D.2 Jensen's Inequality
For a convex function $f$ and random variable $X$:
$$f(\mathbb{E}[X]) \le \mathbb{E}[f(X)]$$
Examples:
- $f(x) = x^2$ (convex): $(\mathbb{E}[X])^2 \le \mathbb{E}[X^2]$, equivalently $\mathrm{Var}[X] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 \ge 0$ - the computational variance formula
- $f(x) = e^x$ (convex): $e^{\mathbb{E}[X]} \le \mathbb{E}[e^X]$ - used in exponential tail bounds
- $f(x) = -\log x$ (convex for $x > 0$): $-\log\mathbb{E}[X] \le \mathbb{E}[-\log X]$ - used in the ELBO derivation for VAEs
D.3 Cauchy-Schwarz for Expectations
$$\left(\mathbb{E}[XY]\right)^2 \le \mathbb{E}[X^2]\,\mathbb{E}[Y^2]$$
This bounds the correlation between random variables and is used to prove variance-of-sum and covariance properties in Section04.
Appendix E: Probability in High Dimensions - A Preview
All of the distributions introduced in this section are one-dimensional (a single real-valued $X$). In machine learning, we almost always work with high-dimensional data - image pixels, word embeddings, weight tensors. High-dimensional probability has surprising and sometimes counterintuitive properties:
E.1 The Curse of Dimensionality for Probability
In $d$ dimensions, a unit hypercube has volume 1. But an inscribed hypersphere of radius $0.5$ has volume:
$$V_d = \frac{\pi^{d/2}}{\Gamma\!\left(\frac{d}{2}+1\right)}\,(0.5)^d$$
For $d = 2$: $V_2 \approx 0.785$. For $d = 10$: $V_{10} \approx 0.0025$. For $d = 100$: essentially 0.
Almost all the volume of a high-dimensional cube is in its corners, not near its centre. A uniform distribution over the hypercube concentrates most of its mass far from the centre - which means nearest-neighbour methods based on Euclidean distance break down.
E.2 Gaussian Concentration in High Dimensions
For $X \sim \mathcal{N}(0, I_d)$ (standard Gaussian in $d$ dimensions): $\mathbb{E}[\|X\|^2] = \sum_{i=1}^d \mathbb{E}[X_i^2] = d$.
By the law of large numbers: $\frac{\|X\|}{\sqrt{d}} \to 1$ as $d \to \infty$ (the mass concentrates on the sphere of radius $\sqrt{d}$). This means:
- High-dimensional Gaussian samples are approximately uniformly distributed on the sphere of radius $\sqrt{d}$
- The normalised dot product $\frac{X^\top Y}{\|X\|\,\|Y\|} \approx 0$ for two independent samples (random vectors are nearly orthogonal)
For AI: This explains why random embedding matrices in transformers initialised with i.i.d. Gaussian entries produce approximately orthogonal row vectors in high dimensions - a useful inductive bias for attention.
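Both concentration effects are visible in a few lines of NumPy (the dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 1000, 2000

# Norms of standard Gaussian vectors concentrate around sqrt(d).
X = rng.standard_normal((n, d))
norms = np.linalg.norm(X, axis=1) / np.sqrt(d)
print(norms.mean(), norms.std())  # mean near 1, small spread

# Cosine similarity of independent pairs is near zero: near-orthogonality.
Y = rng.standard_normal((n, d))
cos = np.sum(X * Y, axis=1) / (np.linalg.norm(X, axis=1) * np.linalg.norm(Y, axis=1))
print(np.abs(cos).mean())  # small, roughly sqrt(2/(pi*d))
```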
These high-dimensional considerations are fully developed in Section03-Joint-Distributions (multivariate Gaussian) and Section05-Concentration-Inequalities (concentration of measure).
Appendix F: Information Theory Preview - Entropy and KL Divergence
This appendix previews two information-theoretic quantities that appear constantly in ML loss functions. The full derivations using the expectation operator belong in Section04 (Expectation and Moments) and Chapter 9 (Information Theory).
F.1 Entropy: Measuring Uncertainty
The Shannon entropy of a discrete random variable $X$ with PMF $p(x)$ is:
$$H(X) = -\sum_x p(x)\log p(x) = \mathbb{E}[-\log p(X)]$$
Entropy measures the average "surprise" or uncertainty in $X$. Higher entropy = more spread-out distribution = more uncertainty.
Key values:
- Bernoulli($p$): $H = -p\log p - (1-p)\log(1-p)$. Maximum $\log 2 \approx 0.693$ nats at $p = \frac12$ (fair coin). Zero when $p = 0$ or $p = 1$ (certain outcome).
- Uniform on $n$ values: $H = \log n$ - this is the maximum-entropy distribution for a discrete variable with $n$ outcomes.
- Deterministic: $H = 0$ - no uncertainty.
For continuous random variables, the differential entropy is:
$$h(X) = -\int f(x)\log f(x)\,dx$$
The Gaussian has differential entropy $h = \frac12\log(2\pi e\sigma^2)$, and it is the maximum-entropy distribution for a continuous variable with fixed variance.
For AI: The cross-entropy loss for classification is $H(p, q) = -\sum_x p(x)\log q(x)$. When $p$ is a one-hot label, this equals $-\log q(y)$ - the negative log probability the model assigned to the correct class.
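The key values above in code - Bernoulli entropy and the one-hot cross-entropy (the three-class distribution is illustrative):

```python
import math

def bernoulli_entropy(p: float) -> float:
    """H(p) in nats; defined as 0 at p = 0 or p = 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

print(bernoulli_entropy(0.5))   # log 2, about 0.693 (maximum)
print(bernoulli_entropy(0.99))  # about 0.056 (nearly certain outcome)

# Cross-entropy with a one-hot label = -log(prob assigned to the true class).
q = [0.7, 0.2, 0.1]
label = 0
print(-math.log(q[label]))  # about 0.357
```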
F.2 KL Divergence: Measuring Distribution Distance
The Kullback-Leibler divergence from distribution $q$ to distribution $p$ is:
$$D_{\mathrm{KL}}(p \,\|\, q) = \sum_x p(x)\log\frac{p(x)}{q(x)} = \mathbb{E}_{x\sim p}\!\left[\log\frac{p(x)}{q(x)}\right]$$
Properties (proved using Jensen's inequality in Appendix D):
- $D_{\mathrm{KL}}(p\|q) \ge 0$ (non-negativity; follows from $-\log$ being convex)
- $D_{\mathrm{KL}}(p\|q) = 0$ iff $p = q$ (equals zero only when identical)
- $D_{\mathrm{KL}}(p\|q) \ne D_{\mathrm{KL}}(q\|p)$ in general (asymmetric - not a metric)
Decomposition: Cross-entropy $=$ entropy $+$ KL divergence:
$$H(p, q) = H(p) + D_{\mathrm{KL}}(p\,\|\,q)$$
Since $H(p)$ is fixed when $p$ is the true label distribution, minimising cross-entropy loss is equivalent to minimising $D_{\mathrm{KL}}(p\|q)$ - pushing the model's distribution $q$ toward the true distribution $p$.
For AI: VAE training maximises the ELBO, which includes a $D_{\mathrm{KL}}(q_\phi(z\mid x)\,\|\,p(z))$ regularisation term. RLHF preference modelling computes KL penalties between the fine-tuned and reference policy distributions.
-> Full treatment: Chapter 9 - Information Theory
F.3 Negative Log-Likelihood Connects Entropy and Probability
For a dataset $x_1, \dots, x_N$ drawn i.i.d. from $p$, the negative log-likelihood under model $q_\theta$ is:
$$\mathrm{NLL}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\log q_\theta(x_i)$$
By the law of large numbers (Section06 preview), as $N \to \infty$:
$$\mathrm{NLL}(\theta) \to \mathbb{E}_{x\sim p}[-\log q_\theta(x)] = H(p) + D_{\mathrm{KL}}(p\,\|\,q_\theta)$$
Minimising NLL is therefore equivalent to minimising cross-entropy, which is equivalent to minimising $D_{\mathrm{KL}}(p\,\|\,q_\theta)$. Maximum likelihood estimation is KL minimisation.
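The decomposition and the NLL limit can both be verified numerically; a sketch with two illustrative three-point distributions:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # "true" distribution
q = np.array([0.4, 0.4, 0.2])   # model distribution

H_p = -np.sum(p * np.log(p))
cross = -np.sum(p * np.log(q))
kl = np.sum(p * np.log(p / q))

print(cross, H_p + kl)  # H(p, q) = H(p) + KL(p || q)

# Empirical NLL of a large i.i.d. sample from p approaches H(p, q).
rng = np.random.default_rng(0)
xs = rng.choice(3, size=200_000, p=p)
nll = -np.mean(np.log(q[xs]))
print(nll, cross)  # close to each other
```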
Appendix G: The Exponential Family - A Preview
Almost every distribution in Section02 belongs to a unified family called the exponential family. Understanding this family's structure explains why Gaussian, Bernoulli, Poisson, and many other distributions share the same algorithmic properties in ML.
G.1 Exponential Family Form
A distribution belongs to the exponential family if its PDF/PMF can be written as:
$$p(x \mid \eta) = h(x)\exp\!\left(\eta^\top T(x) - A(\eta)\right)$$
where:
- $\eta$ - natural parameters (the parameterisation used in optimisation)
- $T(x)$ - sufficient statistics (all information about $\eta$ in the data is captured by $T(x)$)
- $A(\eta)$ - log-partition function (ensures normalisation)
- $h(x)$ - base measure (does not depend on $\eta$)
G.2 Standard Distributions as Exponential Family Members
| Distribution | Natural param $\eta$ | Sufficient stat $T(x)$ | Log-partition $A(\eta)$ |
|---|---|---|---|
| Bernoulli($p$) | $\log\frac{p}{1-p}$ | $x$ | $\log(1 + e^\eta)$ |
| Gaussian($\mu$, fixed $\sigma^2$) | $\mu/\sigma^2$ | $x$ | $\sigma^2\eta^2/2$ |
| Poisson($\lambda$) | $\log\lambda$ | $x$ | $e^\eta$ |
| Exponential($\lambda$) | $-\lambda$ | $x$ | $-\log(-\eta)$ |
The Bernoulli natural parameter $\eta = \log\frac{p}{1-p}$ is the log-odds or logit. The sigmoid recovers $p$ from $\eta$: $p = \frac{1}{1+e^{-\eta}}$; the softmax function is its multivariate generalisation. This is why logistic regression outputs are Bernoulli exponential-family probabilities.
G.3 Why the Log-Partition Function Matters
The log-partition function $A(\eta)$ is convex and its derivatives give the cumulants:
$$\nabla A(\eta) = \mathbb{E}[T(X)], \qquad \nabla^2 A(\eta) = \mathrm{Cov}[T(X)]$$
This means computing moments of exponential-family distributions reduces to differentiating $A$ - a powerful computational shortcut.
For AI: Softmax attention uses the log-partition function. The logsumexp operation $\log\sum_i e^{x_i}$ is the log-partition function of a categorical distribution over query-key matches. The numerical stability trick $\mathrm{logsumexp}(x) = \max_j x_j + \log\sum_i e^{x_i - \max_j x_j}$ exploits properties of $A$.
-> Full treatment: Section02-Common-Distributions
Appendix H: Extended Worked Examples
H.1 The Monty Hall Problem - Conditional Probability in Action
Setup: A car is behind one of three doors. You choose Door 1. The host (who knows where the car is) opens Door 3, revealing a goat. Should you switch to Door 2?
Solution using conditional probability:
Let $C_i$ = event car is behind Door $i$, $H_3$ = event host opens Door 3.
Prior: $P(C_1) = P(C_2) = P(C_3) = \frac13$.
Host strategy: if car is at Door 1, host opens Door 2 or 3 with equal probability $\frac12$; if car is at Door 2, host must open Door 3 (probability 1); if car is at Door 3, host must open Door 2 (probability 0 for Door 3). So $P(H_3\mid C_1) = \frac12$, $P(H_3\mid C_2) = 1$, $P(H_3\mid C_3) = 0$.
By Bayes' theorem:
$$P(C_2 \mid H_3) = \frac{P(H_3\mid C_2)\,P(C_2)}{\sum_i P(H_3\mid C_i)\,P(C_i)} = \frac{1\cdot\frac13}{\frac12\cdot\frac13 + 1\cdot\frac13 + 0\cdot\frac13} = \frac{2}{3}$$
Conclusion: Switching wins with probability $\frac23$. Staying wins with probability $\frac13$.
Intuition: The host's action is not random - they always reveal a goat. This action transfers probability mass from Door 3 to Door 2 (given your initial choice was Door 1). The host's knowledge changes the posterior.
For AI: This illustrates why naive probabilistic reasoning fails when the observation process is not independent of the hypothesis. In model evaluation, the selection of which test examples to report affects the posterior over model quality - a key issue in benchmark gaming.
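The Bayes calculation can be confirmed by simulating the game directly:

```python
import random

random.seed(0)

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = 0  # you always choose Door 1 (index 0)
    # Host opens a goat door that is neither your pick nor the car.
    host = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != host)
    return pick == car

n = 100_000
win_switch = sum(play(True) for _ in range(n)) / n
win_stay = sum(play(False) for _ in range(n)) / n
print(win_switch, win_stay)  # near 2/3 and 1/3
```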
H.2 The Prosecutor's Fallacy - Confusing $P(\text{evidence}\mid\text{innocent})$ with $P(\text{guilty}\mid\text{evidence})$
Suppose (illustrative numbers) a DNA test for a rare genetic marker has a false positive rate of $1\%$ and a false negative rate of $0$. The marker is present in $0.1\%$ of the population. A defendant tests positive. A prosecutor claims: "The probability of a false positive is only $1\%$, so there is a $99\%$ chance the defendant is guilty."
What's wrong: The prosecutor is claiming $P(\text{has marker}\mid\text{positive test})$, but they are quoting $1 - P(\text{positive test}\mid\text{no marker})$.
By Bayes' theorem, with $M$ = "has marker" and $+$ = "positive test":
- $P(M) = 0.001$ (base rate)
- $P(+\mid M) = 1$ (no false negatives)
- $P(+\mid M^c) = 0.01$ (false positive rate)
$$P(M \mid +) = \frac{1 \times 0.001}{1 \times 0.001 + 0.01 \times 0.999} \approx 0.091$$
The actual probability that the defendant carries the marker given the positive test is only about $9\%$, not $99\%$.
For AI: This fallacy appears in anomaly detection. A model with high accuracy on a balanced test set can have very low precision on a rare-class detection task when the base rate of the rare class is small.
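The Bayes computation, with the same illustrative numbers (base rate 0.1%, false positive rate 1%, no false negatives):

```python
# Bayes' rule for the prosecutor's fallacy.
base_rate = 0.001             # P(marker)
p_pos_given_marker = 1.0      # no false negatives
p_pos_given_no_marker = 0.01  # false positive rate

p_pos = p_pos_given_marker * base_rate + p_pos_given_no_marker * (1 - base_rate)
posterior = p_pos_given_marker * base_rate / p_pos
print(round(posterior, 3))  # about 0.091, not 0.99
```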
H.3 Coupon Collector Problem - Expected Waiting Times
There are $n$ distinct coupons. Each cereal box contains one coupon uniformly at random. How many boxes must you buy (in expectation) to collect all $n$ coupons?
Let $T_k$ = number of boxes needed to get the $k$-th new coupon (given you already have $k-1$ distinct coupons). At that point, the probability of drawing a new coupon is $\frac{n-k+1}{n}$, so $\mathbb{E}[T_k] = \frac{n}{n-k+1}$.
Total boxes: $T = T_1 + T_2 + \cdots + T_n$, so by linearity of expectation:
$$\mathbb{E}[T] = \sum_{k=1}^{n}\frac{n}{n-k+1} = n\sum_{j=1}^{n}\frac{1}{j} = n\,H_n \approx n(\ln n + \gamma)$$
where $H_n = \sum_{j=1}^{n}\frac1j$ (the $n$-th harmonic number).
For $n = 365$ (days of the year): $\mathbb{E}[T] = 365\,H_{365} \approx 2365$ attempts needed to see all birthdays.
For AI: The coupon collector framework models how many gradient steps are needed for a stochastic optimizer sampling uniformly with replacement to see every training example at least once. For $N$ training examples, about $N\ln N$ steps are needed in expectation - which is why practitioners set epoch counts based on logarithmic growth.
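The $n H_n$ formula agrees with simulation; a sketch for $n = 365$:

```python
import random

random.seed(0)

def boxes_to_collect(n: int) -> int:
    """Draw uniform coupons until all n have been seen; return the draw count."""
    seen, boxes = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        boxes += 1
    return boxes

n = 365
trials = 1000
avg = sum(boxes_to_collect(n) for _ in range(trials)) / trials
expected = n * sum(1 / k for k in range(1, n + 1))  # n * H_n, about 2365
print(avg, expected)
```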
Appendix I: Characteristic Functions and Moment Generating Functions - Preview
I.1 The Moment Generating Function (MGF)
The moment generating function of a random variable $X$ is:
$$M_X(t) = \mathbb{E}[e^{tX}]$$
Why "moment generating"? Differentiating and evaluating at $t = 0$:
$$M_X^{(k)}(0) = \mathbb{E}[X^k]$$
The $k$-th derivative of the MGF at zero is the $k$-th raw moment of $X$.
For Gaussian $X \sim \mathcal{N}(\mu, \sigma^2)$:
$$M_X(t) = \exp\!\left(\mu t + \frac{\sigma^2 t^2}{2}\right)$$
This elegant form is why the Gaussian is easy to work with analytically: all moments can be extracted by differentiating a simple exponential.
Key property (moment generating functions and sums): If $X \perp Y$:
$$M_{X+Y}(t) = M_X(t)\,M_Y(t)$$
This product rule for MGFs is the probabilistic analogue of the convolution theorem, and it is the key tool for proving that sums of independent Gaussians are Gaussian and that sums of independent Poissons are Poisson.
I.2 The Characteristic Function
The characteristic function of $X$ is $\varphi_X(t) = \mathbb{E}[e^{itX}]$ (the Fourier transform of the density). Unlike the MGF, the characteristic function always exists and uniquely determines the distribution. It is the tool used in the proof of the Central Limit Theorem.
-> Full treatment: Section04-Expectation-and-Moments
Appendix J: Simulation and Monte Carlo Methods - A Preview
A recurring theme throughout this chapter is that computing exact probabilities analytically is often hard or impossible. Monte Carlo simulation is the practical alternative: approximate by averaging over many samples.
J.1 The Monte Carlo Principle
If $X_1, \dots, X_N$ are i.i.d. samples from $p$, then by the Law of Large Numbers (Section06):
$$\frac{1}{N}\sum_{i=1}^{N} f(X_i) \to \mathbb{E}[f(X)]$$
The Monte Carlo estimator is unbiased ($\mathbb{E}\!\left[\frac1N\sum_i f(X_i)\right] = \mathbb{E}[f(X)]$) and has variance $\mathrm{Var}[f(X)]/N$, so the standard error is $O(1/\sqrt{N})$ regardless of dimension.
Why this matters: Unlike numerical integration (which requires exponentially many grid points in high dimensions), Monte Carlo has a convergence rate independent of dimension. This makes it the tool of choice for high-dimensional integrals - exactly the setting of modern ML.
J.2 Estimating Probabilities
$P(X \in A) = \mathbb{E}[\mathbf{1}\{X \in A\}]$, so we can estimate any probability as the fraction of samples landing in $A$:
$$\hat P(X \in A) = \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\{X_i \in A\}$$
Example: Estimate $P(X > 1.96)$ for $X \sim \mathcal{N}(0, 1)$ by sampling:
- Draw $X_1, \dots, X_N \sim \mathcal{N}(0, 1)$
- Count the fraction where $X_i > 1.96$
- The answer converges to $\approx 0.025$ (the true tail probability)
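The three steps above can be run directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_normal(n)

# Fraction of samples in the tail estimates P(X > 1.96).
estimate = float(np.mean(x > 1.96))
se = float(np.sqrt(estimate * (1 - estimate) / n))
print(estimate, "+/-", se)  # close to 0.025, standard error around 0.00016
```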
J.3 Sampling from Basic Distributions
Inverse CDF method (Universality of the Uniform): If $U \sim \mathrm{Uniform}(0, 1)$ and $F$ is a CDF, then $X = F^{-1}(U)$ has distribution $F$. This is because:
$$P(F^{-1}(U) \le x) = P(U \le F(x)) = F(x)$$
Consequences:
- Sample Exponential($\lambda$): $X = -\frac{1}{\lambda}\ln U$ where $U \sim \mathrm{Uniform}(0, 1)$
- Sample Geometric($p$): $X = \left\lceil\frac{\ln U}{\ln(1-p)}\right\rceil$
- Sample Bernoulli($p$): $X = \mathbf{1}\{U \le p\}$
The Box-Muller transform generates Gaussian samples from uniform:
$$Z_1 = \sqrt{-2\ln U_1}\,\cos(2\pi U_2), \qquad Z_2 = \sqrt{-2\ln U_1}\,\sin(2\pi U_2)$$
where $U_1, U_2 \sim \mathrm{Uniform}(0, 1)$ i.i.d., and $Z_1, Z_2 \sim \mathcal{N}(0, 1)$ i.i.d.
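A sketch of both samplers using only uniform draws (with `1 - random.random()` to keep the log argument in $(0, 1]$):

```python
import math
import random

random.seed(0)
N = 200_000

# Inverse-CDF sampling for Exponential(lam): X = -ln(U) / lam.
lam = 2.0
exp_samples = [-math.log(1.0 - random.random()) / lam for _ in range(N)]
print(sum(exp_samples) / N)  # near 1/lam = 0.5

# Box-Muller: two uniforms -> two independent standard Gaussians.
def box_muller() -> tuple[float, float]:
    u1 = 1.0 - random.random()
    u2 = random.random()
    r = math.sqrt(-2 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

z = [box_muller()[0] for _ in range(N)]
mean = sum(z) / N
var = sum(x * x for x in z) / N - mean**2
print(mean, var)  # near 0 and 1
```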
J.4 Rejection Sampling
When the inverse CDF has no closed form, use rejection sampling: to sample from $p$, find a proposal $q$ and constant $M$ such that $p(x) \le M\,q(x)$ everywhere.
Algorithm:
- Sample $X \sim q$
- Accept $X$ with probability $\frac{p(X)}{M\,q(X)}$; otherwise reject and repeat
The accepted samples have distribution $p$. The acceptance rate is $1/M$, so choose $M$ as small as possible.
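A minimal sketch with a hypothetical target $p(x) = \frac{3}{2}x^2$ on $[-1, 1]$, proposal $q = \mathrm{Uniform}(-1, 1)$ (density $\frac12$), and $M = 3$:

```python
import random

random.seed(0)

def sample_target() -> float:
    """Rejection sampling for p(x) = 1.5 * x^2 on [-1, 1]."""
    while True:
        x = random.uniform(-1, 1)
        # Accept prob = p(x) / (M q(x)) = 1.5 x^2 / (3 * 0.5) = x^2
        if random.random() < x * x:
            return x

xs = [sample_target() for _ in range(100_000)]
second_moment = sum(x * x for x in xs) / len(xs)
print(second_moment)  # E[X^2] = integral of 1.5 x^4 = 3/5 = 0.6
```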
For AI: Rejection sampling underlies early language model decoding strategies. Modern methods like nucleus sampling (top-$p$) and temperature sampling directly manipulate the token probability distribution - understanding these requires the CDF and quantile framework from Section7.
J.5 Monte Carlo Integration of Normalisation Constants
Many probabilistic models have densities of the form $p(x) = \frac{\tilde p(x)}{Z}$ where computing the normalisation constant $Z = \int \tilde p(x)\,dx$ is intractable. Examples:
- Bayesian posterior: $p(\theta\mid\mathcal{D}) = \frac{p(\mathcal{D}\mid\theta)\,p(\theta)}{p(\mathcal{D})}$, where the evidence $p(\mathcal{D})$ is an intractable integral
- Boltzmann distribution: $p(x) = \frac{e^{-E(x)/T}}{Z}$ over exponentially many states
- Normalising flow: $Z = 1$ by construction, but evaluating the density requires a change-of-variables Jacobian
Monte Carlo methods (particularly MCMC, developed in Section07) are the primary tools for inference in these models.
-> Full treatment: Section07-Markov-Chains (MCMC); Section06-Stochastic-Processes (LLN and CLT)
Appendix K: Probability Distributions in PyTorch and NumPy
Modern ML frameworks have built-in probability distribution objects. Understanding the mathematical definitions in this section makes framework APIs immediately transparent.
K.1 NumPy / SciPy Distributions
```python
import numpy as np
from scipy import stats

# Bernoulli(p=0.3)
rv = stats.bernoulli(p=0.3)
rv.pmf(1)         # P(X=1) = 0.3
rv.cdf(0)         # P(X<=0) = 0.7
rv.rvs(size=100)  # 100 random samples

# Gaussian N(mu=2, sigma=1.5)
rv = stats.norm(loc=2, scale=1.5)
rv.pdf(2.0)       # f(2.0) = 1/(1.5*sqrt(2*pi)), about 0.266
rv.cdf(2.0)       # Phi(0) = 0.5
rv.ppf(0.975)     # inverse CDF (quantile): z such that P(X<=z) = 0.975

# Uniform(a=0, b=1)
rv = stats.uniform(loc=0, scale=1)
```
The loc and scale parameters implement the location-scale transform: $X = \mathrm{loc} + \mathrm{scale}\cdot Z$, where $Z$ is the standard form of the distribution.
K.2 PyTorch torch.distributions
```python
import torch
from torch.distributions import Bernoulli, Normal, Uniform, Categorical

# Bernoulli - for binary classification output
dist = Bernoulli(probs=torch.tensor(0.3))
dist.log_prob(torch.tensor(1.0))  # log P(X=1) = log(0.3), about -1.204
dist.sample()                     # sample from {0, 1}
dist.entropy()                    # H(X), about 0.611 nats

# Normal - for regression, VAE latent, etc.
dist = Normal(loc=torch.tensor(0.0), scale=torch.tensor(1.0))
dist.log_prob(torch.tensor(1.96))  # log f(1.96), about -2.84
dist.cdf(torch.tensor(1.96))       # about 0.975

# Categorical - for language model outputs
logits = torch.tensor([2.0, 1.0, 0.5])  # unnormalised log probabilities
dist = Categorical(logits=logits)
dist.probs                        # softmax probabilities
dist.log_prob(torch.tensor(0))    # log P(X=0)
dist.entropy()                    # entropy of the distribution
```
K.3 Reparameterisation - The Cornerstone of VAE Training
The reparameterisation trick lets us differentiate through a sampling operation.
Instead of sampling $z \sim \mathcal{N}(\mu, \sigma^2)$ directly (which has no gradient w.r.t. $\mu$ and $\sigma$), write:
$$z = \mu + \sigma\varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, 1)$$
Now $\frac{\partial z}{\partial\mu} = 1$ and $\frac{\partial z}{\partial\sigma} = \varepsilon$ - gradients flow through.
```python
# Without reparameterisation (WRONG for training)
z = Normal(mu, sigma).sample()  # no gradient path to mu, sigma

# With reparameterisation (CORRECT)
eps = torch.randn_like(sigma)   # eps ~ N(0, 1)
z = mu + sigma * eps            # z ~ N(mu, sigma^2), gradients flow
```
This technique requires understanding that $\mu + \sigma\varepsilon$ and a draw from $\mathcal{N}(\mu, \sigma^2)$ have the same distribution - a direct consequence of the location-scale property developed in Section7.3.
For AI: VAE encoders output $\mu$ and $\sigma$ (typically as $\log\sigma^2$) and use this trick to sample the latent code while maintaining a differentiable path back to the encoder parameters. The same idea extends to normalising flows and diffusion model sampling.
Appendix L: Measure Theory - The Foundations Beneath the Formalism
This section develops probability from elementary axioms without requiring measure theory. But for completeness, this appendix sketches where the measure-theoretic foundations live and when you need them.
L.1 Why Elementary Probability Is Insufficient
The elementary theory (as developed in Section2) works when the sample space is discrete or when we can write explicit PDF formulas. It breaks for:
- Mixed distributions: a random variable that is sometimes discrete (e.g., exactly zero) and sometimes continuous (e.g., positive real) - arises in zero-inflated models and ReLU activations.
- Limits: taking limits of sequences of random variables requires knowing what "$X_n \to X$" means precisely - there are multiple notions of convergence ($L^p$, almost sure, in probability, in distribution) that are not cleanly expressible without measure theory.
- Conditional expectation on events of probability zero: $\mathbb{E}[Y \mid X = x]$ when $P(X = x) = 0$ (any continuous $X$) requires the Radon-Nikodym theorem.
- Change-of-measure / importance sampling: the density ratio $\frac{dP}{dQ}$ when $P$ and $Q$ are continuous requires the Radon-Nikodym derivative.
L.2 The Measure-Theoretic Setup
A probability space is a triple $(\Omega, \mathcal{F}, P)$ where:
- $\Omega$ - sample space (any set)
- $\mathcal{F}$ - sigma-algebra: a collection of subsets of $\Omega$ closed under complement and countable union (the measurable events)
- $P$ - probability measure satisfying Kolmogorov's axioms
A random variable is a measurable function $X: \Omega \to \mathbb{R}$, meaning $X^{-1}(B) \in \mathcal{F}$ for all Borel sets $B$ (so that $P(X \in B)$ is well-defined).
The distribution of $X$ is the induced measure $P_X(B) = P(X^{-1}(B))$ for Borel sets $B$.
The Lebesgue integral replaces the Riemann integral and is defined for any measurable function - this is what makes $\mathbb{E}[X]$ well-defined even for random variables with no PDF.
L.3 The Lebesgue-Stieltjes Integral Unifies PDF and PMF
For any random variable $X$ with CDF $F$, the expectation can be written as a Stieltjes integral:
$$\mathbb{E}[X] = \int_{-\infty}^{\infty} x\,dF(x)$$
This notation works for both discrete and continuous cases:
- Continuous: $dF(x) = f(x)\,dx$ (use the Riemann integral)
- Discrete: $\int x\,dF(x) = \sum_i x_i\,P(X = x_i)$ (use a sum)
- Mixed: combination of both
For AI: Importance sampling in policy gradient methods (PPO, TRPO) uses the Radon-Nikodym derivative implicitly when bounding the ratio $\frac{\pi_\theta(a\mid s)}{\pi_{\theta_{\text{old}}}(a\mid s)}$ - this is the change-of-measure formula at work.
The full measure-theoretic treatment is beyond the scope of this curriculum. The reader interested in rigorous foundations should consult Billingsley's Probability and Measure or Durrett's Probability: Theory and Examples.
Appendix M: Classic Probability Paradoxes and Their Resolutions
M.1 Bertrand's Paradox - Why "Uniform at Random" Needs Specification
Problem: A chord is drawn "at random" in a unit circle. What is the probability the chord is longer than the side of the inscribed equilateral triangle (length $\sqrt{3}$)?
The answer depends on how you sample "at random":
Method 1 - Random endpoints: Choose two points uniformly on the circle's circumference. Probability $= \frac13$.
Method 2 - Random midpoint: Choose a point uniformly inside the circle as the chord's midpoint. Probability $= \frac14$.
Method 3 - Random radius and distance: Choose a radius uniformly; choose the midpoint's distance from centre uniformly on $[0, 1]$. Probability $= \frac12$.
All three methods are geometrically natural, yet give different answers.
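The three sampling schemes can be simulated to confirm the three different answers:

```python
import math
import random

random.seed(0)
N = 200_000
side = math.sqrt(3)  # inscribed equilateral triangle side in the unit circle

def chord_len_endpoints() -> float:
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * abs(math.sin((a - b) / 2))

def chord_len_midpoint() -> float:
    # Uniform point in the disc taken as the chord's midpoint.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

def chord_len_radius() -> float:
    d = random.uniform(0, 1)  # midpoint distance from centre along a radius
    return 2 * math.sqrt(1 - d * d)

results = {}
for f, expected in [(chord_len_endpoints, 1 / 3),
                    (chord_len_midpoint, 1 / 4),
                    (chord_len_radius, 1 / 2)]:
    p = sum(f() > side for _ in range(N)) / N
    results[f.__name__] = p
    print(f.__name__, round(p, 3), "expected", round(expected, 3))
```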
Resolution: The phrase "uniform at random" is not well-defined without specifying the sample space and the probability measure on it. The three methods define different probability spaces - they are asking different questions.
Lesson for ML: When we say "sample a random minibatch" or "sample a random initialisation," we must specify the distribution explicitly. The implicit assumption (uniform distribution) can lead to different algorithmic properties depending on whether we sample with or without replacement, sample tokens or sequences, etc.
M.2 The Two-Envelope Paradox
Two envelopes each contain money; one has twice the amount of the other. You pick one, see amount $x$ inside, and are offered the chance to switch. You reason: the other envelope contains either $2x$ (probability $\frac12$) or $\frac{x}{2}$ (probability $\frac12$). Expected value of switching: $\frac12(2x) + \frac12\left(\frac{x}{2}\right) = \frac{5x}{4} > x$. So you should always switch - but the same reasoning applies after switching back!
Resolution: The calculation is flawed because the events "the other envelope has $2x$" and "the other envelope has $\frac{x}{2}$" cannot both be assigned probability $\frac12$ simultaneously for every fixed $x$. The reasoning treats $x$ as fixed and random simultaneously. A rigorous Bayesian analysis shows no advantage to switching when the prior over the smaller amount is proper (integrable).
Lesson for ML: The paradox arises from mixing conditional and unconditional reasoning. In ML, this appears in off-policy evaluation (evaluating one policy using data collected by another) - naively computing expected reward leads to analogous double-counting errors.
M.3 The St. Petersburg Paradox - When Expected Value Is Infinite
A game: flip a fair coin repeatedly. If the first head appears on flip $k$, you win $2^k$ dollars. Expected value:
$$\mathbb{E}[\text{winnings}] = \sum_{k=1}^{\infty}\frac{1}{2^k}\cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty$$
Yet no rational person would pay more than ~$25 to play.
Resolution: Human decision-making is based on utility, not raw monetary value. If utility is $u(x) = \log x$ (a common assumption in economics), the expected utility is finite. This gives expected utility theory - the rational framework for decisions under uncertainty.
For AI: Reinforcement learning explicitly maximises expected return (reward), which can lead to high-variance, unstable policies if reward distributions have heavy tails. Techniques like reward normalisation, reward clipping (PPO), and distributional RL (C51, QR-DQN) are direct engineering responses to the St. Petersburg problem.
Appendix N: Summary of Key Inequalities and Bounds
This appendix collects all the inequalities introduced in this section and previewed from later sections, as a reference card.
Probability Bounds
| Inequality | Statement | When to Use |
|---|---|---|
| Boole (union bound) | $P\!\left(\bigcup_i A_i\right) \le \sum_i P(A_i)$ | When events may overlap; gives upper bound |
| Inclusion-exclusion | $P(A\cup B) = P(A) + P(B) - P(A\cap B)$ | Exact for two events |
| Bonferroni | $P\!\left(\bigcap_i A_i\right) \ge 1 - \sum_i P(A_i^c)$ | Lower bound on "all good" probability |
| Frechet | $\max(0, P(A)+P(B)-1) \le P(A\cap B) \le \min(P(A), P(B))$ | Bounds on intersection without independence |
Moment-Based Bounds (from Section05)
| Inequality | Statement | Uses |
|---|---|---|
| Markov | $P(X \ge a) \le \mathbb{E}[X]/a$ for $X \ge 0$, $a > 0$ | Only needs $\mathbb{E}[X]$ |
| Chebyshev | $P(\lvert X - \mu\rvert \ge k\sigma) \le 1/k^2$ | Needs $\mathbb{E}[X]$ and $\mathrm{Var}[X]$ |
| Jensen | $f(\mathbb{E}[X]) \le \mathbb{E}[f(X)]$ for convex $f$ | Log-sum-exp, ELBO, entropy |
| Cauchy-Schwarz | $\lvert\mathbb{E}[XY]\rvert \le \sqrt{\mathbb{E}[X^2]\,\mathbb{E}[Y^2]}$ | Correlation and covariance bounds |
Concentration Bounds (from Section05)
| Inequality | Statement | When to Use |
|---|---|---|
| Hoeffding | $P\!\left(\lvert\bar X_n - \mathbb{E}[\bar X_n]\rvert \ge t\right) \le 2\exp\!\left(-\frac{2nt^2}{(b-a)^2}\right)$ for $X_i \in [a, b]$ | i.i.d. bounded variables |
| Chernoff | $P(X \ge a) \le \min_{t>0} e^{-ta}\,\mathbb{E}[e^{tX}]$ | Sums of independent Bernoullis (Poisson binomial) |
| McDiarmid | If $f$ changes by at most $c_i$ in coordinate $i$: $P(\lvert f - \mathbb{E}[f]\rvert \ge t) \le 2\exp\!\left(-\frac{2t^2}{\sum_i c_i^2}\right)$ | General functions of independent variables |
For AI: Hoeffding's inequality directly gives PAC learning generalisation bounds: with $n$ i.i.d. training examples and loss values in $[0, 1]$, the gap between empirical and expected loss satisfies $\lvert\hat L - L\rvert \le \sqrt{\frac{\log(2/\delta)}{2n}}$ with probability at least $1 - \delta$ for any fixed hypothesis.
Appendix O: Sigma-Algebras and the Filtration Framework
This appendix provides a slightly more formal treatment of sigma-algebras for readers who want mathematical precision, without requiring full measure theory.
O.1 Why Sigma-Algebras Appear
The Kolmogorov axioms in Section2 assume we know which subsets of $\Omega$ count as "events." For finite or countable $\Omega$, every subset is an event. For continuous $\Omega$, we cannot assign probabilities to all subsets (Vitali sets are non-measurable), so we restrict to a collection $\mathcal{F}$ closed under the operations we need.
O.2 The Borel Sigma-Algebra
The Borel sigma-algebra $\mathcal{B}(\mathbb{R})$ is the smallest sigma-algebra containing all open intervals $(a, b)$. It includes:
- All open and closed sets
- All countable unions and intersections of intervals
- All half-lines $(-\infty, x]$, so every CDF event $\{X \le x\}$ is measurable with respect to $\mathcal{B}(\mathbb{R})$
For practical purposes, every set you will encounter in ML is Borel-measurable.
O.3 Information and Filtrations
In sequential problems (reinforcement learning, time series, stochastic processes), we need to track how much information is available at each time step. A filtration is an increasing sequence of sigma-algebras:
$$\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots$$
where $\mathcal{F}_t$ represents "all information available by time $t$."
A random variable is $\mathcal{F}_t$-measurable if its value is determined by the information in $\mathcal{F}_t$ - that is, you can compute it from the history up to time $t$.
Example: In a sequential decision problem:
- $\mathcal{F}_0 = \{\emptyset, \Omega\}$ - no information (only trivial events)
- $\mathcal{F}_1 = \sigma(X_1)$ - sigma-algebra generated by the first observation
- $\mathcal{F}_t = \sigma(X_1, \dots, X_t)$ - sigma-algebra generated by observations up to time $t$
The Markov property (Section07) states: $X_{t+1}$ depends on $\mathcal{F}_t$ only through $X_t$ (not the full history).
For AI: The attention mechanism's causal mask in GPT-style models enforces a filtration: token $t$ can only attend to tokens at positions $\le t$. The mask ensures $\mathcal{F}_t$-measurability of each token prediction, making the autoregressive language model a valid probabilistic model of sequences.
O.4 The Generated Sigma-Algebra
For any random variable $X$, the generated sigma-algebra is:
$$\sigma(X) = \{X^{-1}(B) : B \in \mathcal{B}(\mathbb{R})\}$$
This is the smallest sigma-algebra that makes $X$ measurable - it contains exactly the information encoded in $X$.
For a discrete random variable: $\sigma(X)$ is generated by the partition of $\Omega$ induced by the values of $X$.
Key insight: Conditional expectation $\mathbb{E}[Y \mid X]$ is really $\mathbb{E}[Y \mid \sigma(X)]$ - the expectation of $Y$ given all the information encoded in $X$. This formulation (using sigma-algebras) is what makes conditional expectation rigorously defined even when $P(X = x) = 0$.
Appendix P: Probability Generating Functions
P.1 Definition
For a non-negative integer-valued random variable (taking values ), the probability generating function (PGF) is:
where .
The PGF converges for and stores all probabilities as coefficients of a power series.
P.2 Recovering Probabilities and Moments
The power-series coefficients and derivatives at the endpoints recover everything:
- Probabilities: $p_k = \dfrac{G_X^{(k)}(0)}{k!}$
- Mean: $\mathbb{E}[X] = G_X'(1)$
- Variance: $\mathrm{Var}(X) = G_X''(1) + G_X'(1) - \big(G_X'(1)\big)^2$
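A quick numerical check (a sketch using the Poisson($\lambda$) PGF $G(s) = e^{\lambda(s-1)}$, with an illustrative $\lambda = 2.5$) that finite differences at $s = 1$ recover the mean and variance:

```python
import math

lam = 2.5

def G(s):
    """Poisson(lam) PGF: E[s^X] = exp(lam * (s - 1))."""
    return math.exp(lam * (s - 1.0))

h = 1e-4
G1 = (G(1 + h) - G(1 - h)) / (2 * h)          # G'(1)  ~ E[X]
G2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h**2  # G''(1) ~ E[X(X-1)]

mean = G1
var = G2 + G1 - G1**2   # both equal lam for a Poisson

assert abs(mean - lam) < 1e-4
assert abs(var - lam) < 1e-3

# Coefficient recovery: p_0 = G(0) = e^{-lam}.
assert abs(G(0.0) - math.exp(-lam)) < 1e-12
```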
P.3 PGFs of Common Distributions
| Distribution | PGF |
|---|---|
| Bernoulli($p$) | $1 - p + ps$ |
| Binomial($n, p$) | $(1 - p + ps)^n$ |
| Poisson($\lambda$) | $e^{\lambda(s - 1)}$ |
| Geometric($p$), support $1, 2, \dots$ | $\dfrac{ps}{1 - (1 - p)s}$ |
The Binomial PGF is the $n$-fold product of Bernoulli PGFs - directly encoding that a Binomial is a sum of $n$ i.i.d. Bernoullis (by the product property for independent variables).
Product property: If $X$ and $Y$ are independent: $G_{X+Y}(s) = G_X(s)\,G_Y(s)$.
This is the PGF analogue of the convolution theorem, and it gives a clean proof that the sum of independent Poisson($\lambda$) and Poisson($\mu$) is Poisson($\lambda + \mu$):
$$G_{X+Y}(s) = e^{\lambda(s-1)}\,e^{\mu(s-1)} = e^{(\lambda+\mu)(s-1)}$$
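The same identity can be checked numerically by convolving the two PMFs directly (a sketch with illustrative values $\lambda = 1.5$, $\mu = 2.0$):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam, mu = 1.5, 2.0

# Convolution: P(X + Y = n) = sum_k P(X = k) P(Y = n - k)
for n in range(10):
    conv = sum(poisson_pmf(k, lam) * poisson_pmf(n - k, mu) for k in range(n + 1))
    assert abs(conv - poisson_pmf(n, lam + mu)) < 1e-12
```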
For AI: The token generation process in language models can be viewed as repeated sampling from a Categorical distribution, whose PGF encodes the full token PMF. The PGF of the output token count under various sampling strategies (greedy, top-$k$, nucleus) encodes the statistical properties of generated text length distributions.
Appendix Q: Historical Development of Probability Theory
Understanding how probability theory evolved helps locate the axioms, distributions, and theorems in their proper intellectual context.
Q.1 Pre-Axiomatic Period (1654-1900)
Pascal and Fermat (1654): The famous exchange of letters about the "Problem of Points" (how to divide stakes in an unfinished game) is often cited as the birth of probability theory. They developed the idea of expected value and laid groundwork for combinatorics.
Bernoulli (1713): Jakob Bernoulli's Ars Conjectandi stated the first version of the Law of Large Numbers: relative frequencies of outcomes converge to true probabilities. This transformed probability from gambling mathematics to a science of inference.
Bayes (1763): Thomas Bayes' posthumous Essay gave the first treatment of inverse probability - inferring parameters from observations. Richard Price edited and published it; Laplace independently developed the same ideas more completely.
Laplace (1812): Théorie analytique des probabilités synthesised the field, introducing characteristic functions, the Central Limit Theorem, the method of moments, and the formal definition of probability as a fraction of equally likely cases.
Gauss (1809, 1823): Derived the normal distribution as the error distribution minimising the sum of squared residuals. Proved that the sample mean is the best linear unbiased estimator - establishing connections between probability and statistics.
Chebyshev, Markov, Lyapunov (1867-1901): Proved increasingly general versions of the CLT. Markov introduced his eponymous chains; Lyapunov's proof of the CLT via characteristic functions is essentially the modern proof.
Q.2 Axiomatic Period (1900-1950)
Borel (1909): Proved the strong law of large numbers for the Bernoulli case, introducing Borel sets and measure-theoretic methods.
Wiener (1923): Constructed Brownian motion rigorously as a probability measure on a function space - the first infinite-dimensional probability space.
Kolmogorov (1933): Grundbegriffe der Wahrscheinlichkeitsrechnung placed probability theory on rigorous measure-theoretic foundations. The three axioms in Section2 are Kolmogorov's. This work resolved foundational debates and enabled the rigorous development of stochastic processes.
Doob (1940s): Developed martingale theory, providing rigorous tools for studying random processes with conditional structure.
Q.3 Modern Period (1950-present)
Shannon (1948): A Mathematical Theory of Communication introduced entropy, mutual information, and channel capacity - connecting probability to communication. Cross-entropy loss in modern ML is a direct descendant.
Robbins-Monro (1951): Stochastic approximation - the theoretical foundation for stochastic gradient descent. Proved that iterates driven by noisy gradient estimates converge to the solution under suitable step-size conditions.
Dempster-Laird-Rubin (1977): The EM algorithm - an expectation-maximisation procedure for MLE with latent variables. Built on conditional expectation and Jensen's inequality.
Pearl (1988): Bayesian networks and causal inference - graphical models encoding conditional independence structure. Direct descendant of conditional probability in Section4 and Section03 of this chapter.
Goodfellow et al. (2014): Generative Adversarial Networks - training a generator to match a target distribution by playing a minimax game. The divergence between generated and true distributions is what is being minimised (implicitly or explicitly).
Sohl-Dickstein et al. (2015), Ho et al. (2020): Diffusion models - add Gaussian noise progressively (forward process), then learn to denoise (reverse process). Grounded in stochastic processes and Gaussian distributions.
Q.4 The Connections to Modern LLMs
Every component of a modern language model has a probabilistic interpretation rooted in the theory developed in this section:
| LLM Component | Probability Foundation | Section |
|---|---|---|
| Token prediction | Categorical distribution over vocabulary | Section6 (discrete RV) |
| Temperature scaling | Scale parameter of Gumbel distribution | Section7 (continuous RV) |
| Dropout | Bernoulli mask on activations | Section6 (Bernoulli) |
| Weight initialisation | Normal distribution, variance scaling | Section7 (Gaussian) |
| KV cache attention | Conditional probability over past keys | Section4 (conditioning) |
| RLHF reward model | Bradley-Terry preference model | Section3 (conditional probability) |
| Beam search | Approximate argmax over joint sequence probability | Section6 (joint events) |
| Perplexity metric | Exponential of cross-entropy | Section8 (expectation) |
| Calibration | Agreement between predicted probs and frequencies | Section2 (frequentist) |
| Chain-of-thought | Latent variable model over reasoning traces | Section4 (conditional independence) |
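To make the first two table rows concrete, a toy sketch (illustrative logits, not any particular model) showing that softmax with temperature always yields a valid Categorical PMF:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Map raw logits to a Categorical PMF over the vocabulary."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()              # numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([2.0, 1.0, 0.2, -1.0])    # toy 4-token vocabulary

p_sharp = softmax(logits, temperature=0.5)  # low T: more peaked
p_flat = softmax(logits, temperature=2.0)   # high T: closer to uniform

# Kolmogorov's requirements for a PMF: non-negative, sums to 1.
assert np.all(p_sharp >= 0) and np.isclose(p_sharp.sum(), 1.0)
assert np.all(p_flat >= 0) and np.isclose(p_flat.sum(), 1.0)
# Lower temperature concentrates more mass on the argmax token.
assert p_sharp[0] > p_flat[0]
```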