Floating-Point Arithmetic
1. Intuition
1.1 Why Computers Cannot Represent Most Real Numbers
The real numbers form an uncountably infinite set. A computer, by contrast, stores information in a finite number of bits. With $n$ bits you can represent at most $2^n$ distinct values. No finite $n$ is large enough to cover even the rationals between 0 and 1 - there are infinitely many of them.
The resolution: floating-point numbers are a carefully chosen, finite subset of the rationals - dense near zero (where scientific quantities tend to cluster) and increasingly sparse at large magnitudes. The IEEE 754 standard specifies exactly which rationals are representable and exactly how arithmetic on them is performed. Every GPU, CPU, and TPU on the planet implements this standard.
The consequence for computation: Every real-number operation - addition, multiplication, transcendentals - must be rounded to the nearest representable floating-point number. This introduces a tiny error at each step. Over millions of operations (a single transformer forward pass involves billions), these errors can accumulate in structured, predictable ways. Understanding and controlling this accumulation is numerical analysis.
A concrete example: The number $0.1$ cannot be represented exactly in binary floating-point. In fp32 it is stored as approximately $0.100000001$ - an error of about $1.5 \times 10^{-9}$. This seems negligible, but when you naively sum 10 million such values in fp32, the rounding errors accumulate to roughly 15% of a sum that should be exactly $1{,}000{,}000$.
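A quick check of both effects, assuming NumPy is available (the explicit Python loop is slow, but it makes every rounding step visible):

```python
import numpy as np

# fp32 cannot store 0.1 exactly: inspect the value actually stored.
x = np.float32(0.1)
print(f"{x:.20f}")           # ~0.10000000149011611938
print(abs(float(x) - 0.1))   # representation error ~1.5e-9

# Naive fp32 accumulation of 10 million copies of 0.1: once the running sum
# is large, each added 0.1 is rounded to the local grid spacing, and the
# errors pile up.
s = np.float32(0.0)
for _ in range(10_000_000):
    s += np.float32(0.1)
print(s)                                                    # noticeably off from 1,000,000
print(np.sum(np.full(10_000_000, 0.1, dtype=np.float32)))   # pairwise sum: much closer
```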
1.2 Why This Matters for AI
Floating-point arithmetic is not an abstraction for AI practitioners - it is the daily substrate:
Training stability: fp16 training requires loss scaling to prevent gradient underflow. Without it, gradients smaller than the smallest positive fp16 value ($\approx 6 \times 10^{-8}$) are rounded to zero, silently stopping learning for those parameters.
Model quality: Quantizing from fp32 to int8 for inference can degrade accuracy by 1-5% on challenging tasks if done naively. Understanding the rounding error model tells you which operations are most sensitive and how to compensate.
Numerical bugs: NaN (Not a Number) propagation is one of the most common silent failures in deep learning. A single NaN anywhere in the computation graph - caused by $0/0$, $\infty - \infty$, or an overflow feeding such an operation - propagates through every subsequent operation, turning the entire gradient to NaN.
Speed and memory: The transition from fp32 to bf16 for training LLMs (GPT-4, Llama, Gemini all use bf16) halves memory and roughly doubles arithmetic throughput on modern hardware. Understanding why bf16 preserves training stability when fp16 does not requires knowing that bf16 preserves the exponent range of fp32 while sacrificing mantissa precision.
For AI: Every tensor in PyTorch, JAX, or TensorFlow is a floating-point array. torch.finfo(torch.float16).eps gives you $\varepsilon_{\text{mach}}$ for fp16. Knowing this number - $\approx 9.8 \times 10^{-4}$ - immediately tells you the precision limit of every fp16 computation.
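If PyTorch is installed, the precision and range limits quoted in this lesson can be read straight from torch.finfo:

```python
import torch

for dtype in (torch.float32, torch.bfloat16, torch.float16):
    info = torch.finfo(dtype)
    # eps = machine epsilon, tiny = smallest normal number, max = largest finite value
    print(f"{str(dtype):15s} eps={info.eps:.2e}  min normal={info.tiny:.2e}  max={info.max:.2e}")
```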
1.3 A Day in the Life of a Floating-Point Number
Follow a real number through a computation:
Birth (encoding): $0.3$ is mapped to the nearest fp32 representable value: $0.300000011920\ldots$ Error: $\approx 1.2 \times 10^{-8}$.
Arithmetic (rounding): Compute $0.3 + 0.6$. Each input is rounded; the addition result is rounded again. The result differs from the true $0.9$ by $\approx 3.6 \times 10^{-8}$ (less than one ULP, but not zero).
Comparison (danger): Testing x_fp32 + 0.6_fp32 == 0.9 returns False in Python because both sides round differently. This is the source of countless bugs in numerical code.
Accumulation (catastrophe): Subtract a nearly equal value from the result: the difference keeps only the noisy low-order bits, and its relative error can be tens of times larger than the original encoding error, because subtraction of nearly-equal numbers amplifies relative error. This is catastrophic cancellation.
Death (overflow/underflow): Multiply by $10^{39}$ in fp32: overflow to $+\infty$. Multiply by $10^{-39}$: underflow to $0$ (or a subnormal). Once a value becomes $\pm\infty$ or NaN, all subsequent operations inherit the corruption.
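The whole life cycle can be replayed in a few lines of NumPy (fp32 throughout; the exact printed digits may vary slightly by platform):

```python
import numpy as np

x = np.float32(0.3)                    # birth: 0.3 rounds to the nearest fp32 value
print(f"{x:.17f}")                     # 0.30000001192092896

y = x + np.float32(0.6)                # arithmetic: inputs and result each rounded
print(y == np.float32(0.9))            # comparison: False

with np.errstate(over='ignore'):
    print(y * np.float32(1e30) * np.float32(1e30))   # overflow: inf
print(np.float32(1e-30) * np.float32(1e-30))         # underflow: 0.0 (below subnormal range)
```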
1.4 Historical Timeline: 1985-2024
FLOATING-POINT ARITHMETIC: KEY MILESTONES
========================================================================
1985 IEEE 754-1985 published - first universal FP standard
(sign, exponent, mantissa; round-to-nearest-even default)
1987 Intel 80387 coprocessor: fully conformant IEEE 754 x87 hardware
(the earlier 8087 implemented a draft of the standard)
1987 IEEE 854: radix-independent companion standard (the 80-bit x87
extended-precision format was already part of 754-1985)
2008 IEEE 754-2008 - adds fp16 (half precision), decimal formats
2010 NVIDIA Fermi GPU: first IEEE-compliant fp64 on GPU
2017 Mixed-precision training (Micikevicius et al., NVIDIA) -
fp16 forward/backward, fp32 master weights
2018 bf16 (Brain Float 16) introduced by Google Brain for TPUs
2020 NVIDIA Ampere (A100): bf16 and TF32 tensor cores
2022 fp8 training (NVIDIA H100 Transformer Engine) - 8-bit FP
with dynamic scaling; used for LLM training
2023 Llama-2, GPT-4, Gemini: bf16 training standard
2024 fp4 experimental; int4/int8 GGUF quantization standard
for LLM inference (llama.cpp ecosystem)
========================================================================
2. Formal Definitions: IEEE 754
2.1 The IEEE 754 Standard
The IEEE 754 standard represents a floating-point number in sign-magnitude-exponent form:
$$x = (-1)^{s} \times 1.m_1 m_2 \ldots m_p \times 2^{E - \text{bias}}$$
where:
- $s \in \{0, 1\}$ is the sign bit
- $1.m_1 m_2 \ldots m_p$ is the mantissa (or significand) in binary, with an implicit leading 1
- $E$ is the stored exponent (biased)
- $\text{bias} = 2^{k-1} - 1$ for a $k$-bit exponent field
Format specifications:
| Format | Total bits | Sign | Exponent | Mantissa | Max value | Min normal | $\varepsilon_{\text{mach}}$ |
|---|---|---|---|---|---|---|---|
| fp64 (double) | 64 | 1 | 11 | 52 | $\approx 1.8 \times 10^{308}$ | $\approx 2.2 \times 10^{-308}$ | $\approx 2.2 \times 10^{-16}$ |
| fp32 (single) | 32 | 1 | 8 | 23 | $\approx 3.4 \times 10^{38}$ | $\approx 1.2 \times 10^{-38}$ | $\approx 1.2 \times 10^{-7}$ |
| bf16 | 16 | 1 | 8 | 7 | $\approx 3.4 \times 10^{38}$ | $\approx 1.2 \times 10^{-38}$ | $\approx 7.8 \times 10^{-3}$ |
| fp16 | 16 | 1 | 5 | 10 | 65504 | $\approx 6.1 \times 10^{-5}$ | $\approx 9.8 \times 10^{-4}$ |
| fp8 E4M3 | 8 | 1 | 4 | 3 | 448 | $\approx 1.6 \times 10^{-2}$ | $0.125$ |
| fp8 E5M2 | 8 | 1 | 5 | 2 | 57344 | $\approx 6.1 \times 10^{-5}$ | $0.25$ |
Key insight: bf16 has the same exponent range as fp32 (both use 8 exponent bits, bias 127) but only 7 mantissa bits vs 23. This means bf16 can represent the same scale of numbers as fp32, but with far less precision per value. This is why bf16 is training-stable (no overflow, no gradient underflow) while fp16 (5-bit exponent, max value 65504) overflows easily during training.
Bit-layout diagram:
IEEE 754 BIT LAYOUTS
========================================================================
fp32 (32 bits):
[S][EEEEEEEE][MMMMMMMMMMMMMMMMMMMMMMM]
1 8 23
fp16 (16 bits):
[S][EEEEE][MMMMMMMMMM]
1 5 10
bf16 (16 bits):
[S][EEEEEEEE][MMMMMMM]
1 8 7 <- Same exponent field as fp32!
fp8 E4M3 (8 bits):
[S][EEEE][MMM]
1 4 3
========================================================================
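A concrete look at what the 7-bit bf16 mantissa does to a value, assuming PyTorch is available:

```python
import torch

x32 = torch.tensor(1.0, dtype=torch.float32) / 3
print(x32.item())                      # ~0.3333333432674408 (fp32: ~7 significant digits)
print(x32.to(torch.bfloat16).item())   # ~0.333984375        (bf16: only 7 mantissa bits)
print(x32.to(torch.float16).item())    # ~0.333251953125     (fp16: 10 mantissa bits)
# bf16 shares fp32's 8-bit exponent field, so converting never changes a
# number's scale -- it only rounds the mantissa down to 7 bits.
```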
2.2 Machine Epsilon
Definition (Machine Epsilon): The machine epsilon $\varepsilon_{\text{mach}}$ is the smallest positive floating-point number $\varepsilon$ such that $\mathrm{fl}(1 + \varepsilon) > 1$; equivalently, it is the gap between 1 and the next larger representable number:
$$\varepsilon_{\text{mach}} = 2^{-p}$$
where $p$ is the number of mantissa bits (the "precision").
For fp32: $\varepsilon_{\text{mach}} = 2^{-23} \approx 1.19 \times 10^{-7}$.
Unit in the last place (ULP): The ULP of a floating-point number $x$ is the spacing between $x$ and the next representable number. For a number in the binade $[2^e, 2^{e+1})$:
$$\mathrm{ulp}(x) = 2^{e}\,\varepsilon_{\text{mach}} = 2^{e - p}$$
Fundamental property: For any real $x$ in the normal range, the nearest floating-point number $\mathrm{fl}(x)$ satisfies:
$$\mathrm{fl}(x) = x\,(1 + \delta), \qquad |\delta| \le \tfrac{1}{2}\,\varepsilon_{\text{mach}}$$
This is the relative rounding error bound (often quoted loosely as $|\delta| \le \varepsilon_{\text{mach}}$).
Non-example: Machine epsilon is NOT the smallest positive floating-point number. The smallest normal fp32 number is $\approx 1.18 \times 10^{-38}$. Machine epsilon is the smallest perturbation to 1 that is detectable - a very different concept.
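These definitions are easy to verify numerically with NumPy (np.spacing returns the ULP at a given value):

```python
import numpy as np

print(np.finfo(np.float32).eps)    # 2**-23 ~ 1.19e-07 (machine epsilon)
print(np.finfo(np.float32).tiny)   # ~1.18e-38 (smallest normal number -- a different thing)

one = np.float32(1.0)
print(one + np.float32(2**-23) > one)   # True: one ULP above 1 is detectable
print(one + np.float32(2**-25) > one)   # False: swallowed by rounding

for x in (1.0, 1000.0, 1e10):           # ULP (spacing) grows with magnitude
    print(x, np.spacing(np.float32(x)))
```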
2.3 The Floating-Point Number Line
The floating-point numbers are not uniformly spaced. They cluster densely near zero and become increasingly sparse at large magnitudes:
- In $[1, 2)$: spacing $= 2^{-23} \approx 1.2 \times 10^{-7}$ (fp32) - $2^{23} \approx 8.4$ million numbers
- In $[2, 4)$: spacing $= 2^{-22} \approx 2.4 \times 10^{-7}$ - another $2^{23}$ numbers
- In $[1024, 2048)$: spacing $= 2^{-13} \approx 1.2 \times 10^{-4}$ - same count, but roughly 1000x coarser
- Near the fp32 maximum $3.4 \times 10^{38}$: spacing $\approx 2 \times 10^{31}$ - individual values separated by more than $10^{31}$!
Consequence for AI: Neural network weights are typically initialized with magnitudes well below 1, where fp32 provides on the order of $10^9$ distinct values between 0 and 1. As training progresses and some weights grow large, precision degrades - but this rarely matters because the loss landscape is smooth there. The dangerous zone is when gradients become very small (approaching $10^{-38}$ in fp32), where underflow begins.
2.4 Special Values: Infinity, NaN, Subnormals
Positive/negative infinity $\pm\infty$: Stored with all-ones exponent, zero mantissa. Result of overflow (e.g., $10^{20} \times 10^{20}$ in fp32) or division by zero. All arithmetic on $\pm\infty$ is defined: $\infty + 1 = \infty$, $1/\infty = 0$, but $\infty - \infty = \text{NaN}$.
NaN (Not a Number): All-ones exponent, nonzero mantissa. Results from $0/0$, $\infty - \infty$, $\sqrt{-1}$. NaN is contagious: any arithmetic involving NaN produces NaN. A single NaN in a gradient will propagate backward through the entire computation graph, turning all parameter updates to NaN.
Subnormal numbers: When the stored exponent is all-zeros, the implicit leading 1 becomes a leading 0, allowing numbers smaller than the normal minimum. fp32 subnormals cover roughly $[1.4 \times 10^{-45},\ 1.2 \times 10^{-38})$ with reduced precision. Many GPUs flush subnormals to zero ("FTZ mode") for performance, which can cause unexpected underflow.
Practical rule: In deep learning code, if torch.isnan(loss) is True, trace backward: either the loss computation involves log(0), a division by a near-zero value, or upstream activations have overflowed.
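The special values and their propagation rules can be observed directly (NumPy warns on overflow and invalid operations, so the warnings are silenced here):

```python
import numpy as np

with np.errstate(over='ignore', invalid='ignore'):
    inf = np.float32(1e30) * np.float32(1e30)   # overflow -> inf
    print(inf, inf + 1, 1 / inf)                # inf inf 0.0
    nan = inf - inf                             # inf - inf -> nan
    print(nan, nan == nan)                      # nan False (NaN equals nothing, even itself)
print(np.float32(1e-40))                        # below the fp32 min normal: stored as a subnormal
```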
2.5 Rounding Modes
IEEE 754 defines five rounding modes. The default - used in virtually all ML training - is round-to-nearest-even (banker's rounding):
| Mode | Description | When used |
|---|---|---|
| Round-to-nearest-even | Round to nearest; on tie, round to even mantissa bit | Default (IEEE 754) |
| Round toward $+\infty$ | Always round up | Interval arithmetic upper bounds |
| Round toward $-\infty$ | Always round down | Interval arithmetic lower bounds |
| Round toward zero | Truncate | Hardware division |
| Stochastic rounding | Randomly round up/down, proportional to distance | Low-precision training (fp8) |
Round-to-nearest-even is the default because it is unbiased: on average, half of tie-breaks round up and half round down, preventing systematic accumulation of rounding errors in long sums. This is critical for gradient accumulation over millions of steps.
3. Floating-Point Arithmetic
3.1 The Fundamental Rounding Error Model
Theorem (Rounding Error Model): For any IEEE 754-compliant basic operation $\circ \in \{+, -, \times, \div\}$ on floating-point numbers $a, b$:
$$\mathrm{fl}(a \circ b) = (a \circ b)(1 + \delta), \qquad |\delta| \le \varepsilon_{\text{mach}}$$
The computed result equals the exact result times a factor $(1 + \delta)$ where $\delta$ is at most one machine epsilon in magnitude (for round-to-nearest the bound tightens to $\varepsilon_{\text{mach}}/2$). This is the single most important fact in numerical analysis.
Proof sketch: By definition of $\varepsilon_{\text{mach}}$, rounding any real $t$ in the normal range gives $\mathrm{fl}(t) = t(1 + \delta)$ with $|\delta| \le \varepsilon_{\text{mach}}$ (round-to-nearest). IEEE 754 mandates that each basic operation is computed as if exact and then rounded, so $\mathrm{fl}(a \circ b) = (a \circ b)(1 + \delta)$.
Chaining errors: For a sequence of $n$ operations, the cumulative relative error is bounded by approximately $n\,\varepsilon_{\text{mach}}$. For a dot product $s = \sum_{i=1}^n x_i y_i$:
$$|\mathrm{fl}(s) - s| \ \lesssim\ n\,\varepsilon_{\text{mach}} \sum_{i=1}^n |x_i|\,|y_i|$$
For AI: The attention softmax takes $q^\top k / \sqrt{d_k}$ as input - a dot product of length $d_k$ (typically 64-128). In fp16, $\varepsilon_{\text{mach}} \approx 10^{-3}$, and the dot product can accumulate relative error on the order of $d_k\,\varepsilon_{\text{mach}} \approx 0.1$ if done naively in fp16. This is why FlashAttention accumulates the softmax denominator in fp32 even when computing in fp16.
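A small experiment (NumPy, with manual accumulation so the working precision is explicit) showing why the accumulator format matters more than the input format:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128                                    # typical attention head dimension
q = rng.standard_normal(d)
k = rng.standard_normal(d)
exact = np.dot(q, k)                       # fp64 reference

q16, k16 = q.astype(np.float16), k.astype(np.float16)
acc16 = np.float16(0.0)
acc32 = np.float32(0.0)
for a, b in zip(q16, k16):
    prod = a * b                                   # fp16 product (rounded)
    acc16 = np.float16(acc16 + prod)               # every partial sum rounded to fp16
    acc32 = np.float32(acc32 + np.float32(prod))   # partial sums kept in fp32

print(abs(acc16 - exact) / abs(exact))     # fp16 accumulation: noticeably larger error
print(abs(acc32 - exact) / abs(exact))     # fp32 accumulation: limited by the fp16 products
```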
3.2 Catastrophic Cancellation
Catastrophic cancellation occurs when two nearly-equal floating-point numbers are subtracted, causing the leading significant digits to cancel and leaving only the noisy low-order bits.
Example: Compute $f(x) = \sqrt{x + 1} - \sqrt{x}$ for large $x$.
For $x = 10^8$ in fp32:
- $\sqrt{x + 1} = 10000.00005\ldots$ (fp32: rounds to $10000.0$; indeed $x + 1$ itself already rounds to $10^8$)
- $\sqrt{x} = 10000.0$ (fp32: $10000.0$)
- Computed difference: $0.0$ - complete loss of all digits!
True value: $\approx 5.0 \times 10^{-5}$.
Algebraic fix: Multiply and divide by the conjugate:
$$\sqrt{x + 1} - \sqrt{x} = \frac{1}{\sqrt{x + 1} + \sqrt{x}}$$
The denominator is computed by addition (not subtraction), avoiding cancellation entirely; for $x = 10^8$ it returns $\approx 5.0 \times 10^{-5}$ with essentially full relative accuracy.
General rule: When a subtraction $a - b$ results in a value much smaller than $|a|$ and $|b|$, suspect catastrophic cancellation. The relative error in the result can be as large as $\dfrac{|a| + |b|}{|a - b|}\,\varepsilon_{\text{mach}}$ - the ratio of operand magnitude to result magnitude times machine epsilon.
In AI: The log-sum-exp function suffers overflow and underflow when its inputs are large in magnitude. The numerically stable form subtracts the maximum first - an algebraic rearrangement that avoids both failure modes (see Section 6.2).
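The $\sqrt{x+1} - \sqrt{x}$ example above, reproduced in fp32 (assuming NumPy):

```python
import numpy as np

x = np.float32(1e8)
naive  = np.sqrt(x + np.float32(1.0)) - np.sqrt(x)                 # cancels to 0.0
stable = np.float32(1.0) / (np.sqrt(x + np.float32(1.0)) + np.sqrt(x))
print(naive, stable)    # 0.0 vs ~5.0e-05 (true value ~5.0e-05)
```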
3.3 Error Accumulation in Long Computations
Forward error analysis: Track how errors in inputs propagate to errors in outputs. For a function $f$ evaluated via a sequence of floating-point operations, the forward error bound expresses $|\hat{y} - f(x)|$ (where $\hat{y}$ is the computed result) in terms of $\varepsilon_{\text{mach}}$ and problem data.
Backward error analysis (Wilkinson): Ask a different question: for what perturbed input $x + \Delta x$ does the exact function value equal the computed output, $f(x + \Delta x) = \hat{y}$? The backward error is $\|\Delta x\|$ - how much was the input effectively perturbed?
Why backward analysis is preferred: If an algorithm's backward error is of size $O(\varepsilon_{\text{mach}})\,\|x\|$ (i.e., the computed answer is the exact answer for a slightly perturbed input), the algorithm is backward stable - as good as you can expect from finite-precision arithmetic.
Summation error: For the sum $s = \sum_{i=1}^n x_i$, naive sequential summation has forward error $O(n\,\varepsilon_{\text{mach}})\sum_i |x_i|$. Pairwise summation (binary tree) reduces this to $O(\varepsilon_{\text{mach}} \log n)\sum_i |x_i|$. Kahan summation reduces it to $O(\varepsilon_{\text{mach}})\sum_i |x_i|$, independent of $n$.
3.4 Kahan Compensated Summation
Kahan (1965) showed that a small modification to sequential summation reduces the accumulated error from $O(n\,\varepsilon_{\text{mach}})$ to $O(\varepsilon_{\text{mach}})$, regardless of $n$:
KAHAN SUMMATION ALGORITHM
========================================================================
Input: x_1, x_2, ..., x_n
Output: s ≈ sum(x_i)
s = 0.0 # running sum
c = 0.0 # compensation (tracks lost digits)
for each x_i:
y = x_i - c # compensated addend
t = s + y # sum so far (y is small, loses digits)
c = (t - s) - y # recover the lost digits
s = t # update sum
return s
========================================================================
Why it works: The compensation variable c tracks the rounding error at each step. It accumulates the "lost" low-order bits and feeds them back into the next iteration. The net effect is that the error behaves as $O(\varepsilon_{\text{mach}})$ regardless of $n$ - equivalent to computing in twice the precision.
In PyTorch: torch.sum() uses pairwise summation (tree reduction), which achieves $O(\varepsilon_{\text{mach}} \log n)$ error. For high-precision accumulation (e.g., loss over a large batch), consider accumulating in fp64 then converting back.
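A direct NumPy translation of the algorithm above, kept in fp32 so the compensation actually matters (a sketch for illustration, not a replacement for library reductions):

```python
import numpy as np

def kahan_sum(xs):
    """Compensated fp32 summation: error stays O(eps) instead of O(n*eps)."""
    s = np.float32(0.0)
    c = np.float32(0.0)              # compensation: the lost low-order bits
    for x in xs:
        y = np.float32(x) - c
        t = s + y                    # low-order bits of y may be lost here...
        c = (t - s) - y              # ...and are recovered here
        s = t
    return s

data = np.full(1_000_000, 0.1, dtype=np.float32)
naive = np.float32(0.0)
for v in data:
    naive += v
print(naive, kahan_sum(data), data.astype(np.float64).sum())   # naive drifts; Kahan ~ fp64 reference
```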
3.5 Interval Arithmetic and Verified Computation
Interval arithmetic replaces each number with an interval guaranteed to contain the true value. All operations are extended to operate on intervals, widening the bounds to account for rounding errors.
Example: adding the intervals $[a, b]$ and $[c, d]$ gives $[\mathrm{fl}_{\downarrow}(a + c),\ \mathrm{fl}_{\uparrow}(b + d)]$, where $\mathrm{fl}_{\downarrow}$ and $\mathrm{fl}_{\uparrow}$ denote rounding toward $-\infty$ and $+\infty$ respectively.
Practical use: Interval arithmetic gives verified bounds on computation results - you know with certainty that the true answer lies in the computed interval. Used in: formal verification of safety-critical software, reliable computing, and bound propagation in neural network verification (e.g., CROWN-style verifiers for adversarial robustness).
Limitation: Interval widths grow rapidly in long computations (dependency problem), making intervals pessimistic for iterative algorithms. More sophisticated tools (affine arithmetic, Taylor models) improve the dependency problem.
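A toy flavour of the idea, assuming NumPy: np.nextafter is used to widen each result outward by one fp32 ULP, a crude but conservative stand-in for true directed rounding:

```python
import numpy as np

def outward(lo, hi):
    """Widen [lo, hi] by one fp32 ULP on each side (conservative directed rounding)."""
    lo32, hi32 = np.float32(lo), np.float32(hi)
    return (np.nextafter(lo32, np.float32(-np.inf)),
            np.nextafter(hi32, np.float32(np.inf)))

def iadd(a, b):
    return outward(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    corners = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return outward(min(corners), max(corners))

x = outward(np.float32(0.1), np.float32(0.1))   # interval guaranteed to contain the real 0.1
print(iadd(x, x))                               # encloses 0.2
print(imul(x, x))                               # encloses 0.01
```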
4. Condition Numbers and Stability
4.1 Condition Number of a Scalar Problem
The condition number of a computational problem measures the sensitivity of the output to perturbations in the input - it quantifies how much the problem amplifies input errors, independent of any particular algorithm.
Definition (absolute condition number): For a differentiable function $f: \mathbb{R} \to \mathbb{R}$:
$$\hat{\kappa}(x) = \lim_{\delta \to 0}\ \sup_{|\Delta x| \le \delta} \frac{|f(x + \Delta x) - f(x)|}{|\Delta x|} = |f'(x)|$$
Definition (relative condition number):
$$\kappa(x) = \frac{|f'(x)|\,|x|}{|f(x)|}$$
This measures relative output change per unit relative input change - the more natural quantity.
Examples:
| Function | $\kappa(x)$ | Well-conditioned? |
|---|---|---|
| $\sqrt{x}$ | $1/2$ | Yes - always |
| $\ln x$ | $1/\lvert \ln x \rvert$ | Ill-conditioned near $x = 1$ |
| $e^x$ | $\lvert x \rvert$ | Ill-conditioned for large $\lvert x \rvert$ |
| $\sin x$ | $\lvert x \cos x / \sin x \rvert$ | Ill-conditioned near $x = k\pi$, $k \ne 0$ |
| $a - b$ (subtraction) | $\max(\lvert a \rvert, \lvert b \rvert)/\lvert a - b \rvert$ | Ill-conditioned when $a \approx b$ |
Key insight: Condition number $\kappa \gg 1$ means the problem is inherently ill-conditioned - even the best algorithm operating in exact arithmetic cannot give accurate results given inaccurate inputs. No numerical technique can fix a fundamentally ill-conditioned problem; it requires reformulation.
4.2 Forward vs Backward Error Analysis
Given a computed result $\hat{y}$ for a true answer $y = f(x)$:
Forward error: $|\hat{y} - y|$ (or relative: $|\hat{y} - y| / |y|$) - how far the computed answer is from the truth.
Backward error: The smallest $\|\Delta x\|$ such that $f(x + \Delta x) = \hat{y}$ - what perturbation to the input makes our computed answer exact. Symbolically: $\hat{y} = f(x + \Delta x)$ for some $\Delta x$; the backward error is $\min \|\Delta x\|$ (relative: $\min \|\Delta x\| / \|x\|$).
Connection: Forward error $\lesssim$ condition number $\times$ backward error:
$$\frac{|\hat{y} - y|}{|y|} \ \lesssim\ \kappa(x)\,\frac{\|\Delta x\|}{\|x\|}$$
If an algorithm has backward error $O(\varepsilon_{\text{mach}})$, then its forward error is at most $O(\kappa\,\varepsilon_{\text{mach}})$ - the best possible for that problem.
4.3 Forward and Backward Stability of Algorithms
Definition (backward stable): An algorithm $\hat{f}$ for computing $f(x)$ is backward stable if the computed result satisfies:
$$\hat{f}(x) = f(x + \Delta x) \quad \text{for some } \|\Delta x\| \le O(\varepsilon_{\text{mach}})\,\|x\|$$
A backward-stable algorithm computes the exact answer for a slightly perturbed problem. Combined with a well-conditioned problem (moderate $\kappa$), this guarantees the forward error is $O(\kappa\,\varepsilon_{\text{mach}})$.
Definition (forward stable): An algorithm is forward stable if the computed result satisfies:
$$\|\hat{f}(x) - f(x)\| \le O(\kappa\,\varepsilon_{\text{mach}})\,\|f(x)\|$$
i.e., its forward error is of the same order as that of a backward-stable algorithm.
Hierarchy:
backward stable -> forward stable (when $\kappa$ is moderate)
forward stable does NOT imply backward stable (in general)
Examples:
- Backward stable: Householder QR, Gaussian elimination with partial pivoting
- Forward stable but not backward stable: Gram-Schmidt orthogonalization (modified version)
- Unstable: Naive Gaussian elimination without pivoting (exponential error growth possible)
4.4 Condition Number of a Matrix
For a linear system $Ax = b$, the matrix condition number is:
$$\kappa(A) = \|A\|_2\,\|A^{-1}\|_2 = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}$$
where $\sigma_{\max}$ and $\sigma_{\min}$ are the largest and smallest singular values.
Geometric interpretation: $\kappa(A)$ measures how much $A$ can stretch unit vectors relative to how much it shrinks them. A unit sphere gets mapped to an ellipsoid with semi-axes $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$; the condition number is the ratio $\sigma_1 / \sigma_n$.
Error amplification: If $Ax = b$ and $A(x + \Delta x) = b + \Delta b$ (small right-hand side perturbation):
$$\frac{\|\Delta x\|}{\|x\|} \le \kappa(A)\,\frac{\|\Delta b\|}{\|b\|}$$
The condition number is the worst-case amplification factor from $b$-error to $x$-error.
Values to know:
- $\kappa(A) = 1$ - perfectly conditioned (identity, orthogonal matrices)
- $\kappa(A) = 10^k$ - you lose about $k$ decimal digits of accuracy
- $\kappa(A) \gtrsim 1/\varepsilon_{\text{mach}}$ - $A$ is numerically singular; solutions are meaningless
For AI: The Hessian matrix $H$ of a neural network loss has condition number $\kappa(H) = \lambda_{\max}/\lambda_{\min}$. When $\kappa(H)$ is large, gradient descent converges slowly and is sensitive to learning rate. Adam's parameter-wise scaling approximates diagonal preconditioning, reducing the effective condition number. Full-matrix preconditioning (e.g., the Shampoo optimizer) targets the Hessian eigenspectrum more directly.
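A numerical check of the error-amplification rule, assuming NumPy: build a matrix with a prescribed condition number, then solve in fp32 and compare to the known solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -8, n)) @ V.T     # kappa(A) ~ 1e8 by construction
print(np.linalg.cond(A))                         # ~1e8

x_true = np.ones(n)
b = A @ x_true
x_hat = np.linalg.solve(A.astype(np.float32), b.astype(np.float32))
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
# forward error bound: kappa * eps_fp32 ~ 1e8 * 1.2e-7 ~ 10 -- expect the
# fp32 solution to be badly wrong, while fp64 would lose "only" ~8 digits
```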
5. Numerical Formats for AI
5.1 fp32, fp16, bf16 Compared
FLOATING-POINT FORMAT COMPARISON
========================================================================
Property fp32 fp16 bf16
---------------------------------------------------------------------
Bits 32 16 16
Exponent bits 8 5 8
Mantissa bits 23 10 7
Max value 3.4e+38 6.5e+04 3.4e+38
Min normal 1.2e-38 6.1e-05 1.2e-38
Machine eps 1.2e-07 9.8e-04 7.8e-03
---------------------------------------------------------------------
Training use Reference Legacy (APEX) Standard (2023+)
Gradient risk None Underflow None
Memory (1B params) 4 GB 2 GB 2 GB
Throughput Baseline ~2x fp32 ~2x fp32
========================================================================
bf16 vs fp16: Both are 16-bit. bf16 keeps the 8-bit exponent field of fp32, giving the same dynamic range. fp16 uses only 5 exponent bits - max value 65504 - which causes overflow for activations in large models. bf16 carries 3 fewer mantissa bits than fp16 (roughly one less significant decimal digit), but training remains stable because gradients and activations don't overflow.
Why bf16 won for LLMs: GPT-2 trained in fp16 with careful loss scaling. GPT-3, Llama, Mistral, Gemini all train in bf16 - no loss scaling needed, simpler codebase, same memory.
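The range difference is easy to see by casting a couple of representative values, assuming PyTorch:

```python
import torch

x = torch.tensor(70000.0)
print(x.to(torch.float16))     # inf     -- exceeds the fp16 max of 65504
print(x.to(torch.bfloat16))    # ~7.0e4  -- bf16 shares fp32's exponent range

g = torch.tensor(1e-8)
print(g.to(torch.float16))     # 0.0     -- below the smallest fp16 subnormal (~6e-8)
print(g.to(torch.bfloat16))    # ~1.0e-8 -- still a normal bf16 number
```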
5.2 fp8 and Extreme Quantization
fp8 uses only 8 bits. NVIDIA's H100 supports two fp8 formats:
- E4M3 (4 exponent, 3 mantissa): higher precision, smaller range (max 448)
- E5M2 (5 exponent, 2 mantissa): lower precision, larger range (max 57344)
fp8 training workflow: Forward pass in fp8 E4M3; backward pass in fp8 E5M2; weight updates in fp32 or bf16. The Transformer Engine (NVIDIA) automates format selection and scaling.
Post-training quantization: Most LLM inference today uses int8 or int4 weight quantization (not fp8): weights stored as low-bit integers, dequantized to fp16/bf16 before matmul. Tools: GPTQ, AWQ, bitsandbytes. The quantization error per weight scales as $2^{-b}$ times the quantization range, where $b$ is the bit-width.
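A minimal sketch of symmetric absmax int8 weight quantization (illustrative only - real tools such as GPTQ and AWQ use calibration data and per-group scales):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric absmax quantization: scale so the largest |w| maps to 127."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32) * 0.02   # typical weight magnitude
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.abs(w - w_hat).max(), scale / 2)   # max round-trip error is about scale/2
```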
5.3 Mixed-Precision Training
Mixed-precision training (Micikevicius et al., 2018) is the standard approach for training large models:
MIXED-PRECISION TRAINING WORKFLOW
========================================================================
Master weights: fp32 ---------------------------------------------+
| cast down ^ update |
Forward pass: fp16/bf16 (activations, weights) |
Loss: fp32 accumulation |
Backward pass: fp16/bf16 gradients |
Gradient: fp16/bf16 -> fp32 (before weight update) ----------+
With loss scaling (fp16 only):
loss_scaled = loss x scale_factor (e.g., 2^15 = 32768)
Backward through loss_scaled
gradients_fp32 /= scale_factor before optimizer step
========================================================================
Key elements:
- fp32 master weights: Full-precision copy of weights for optimizer states (momentum, variance) and weight updates
- fp16/bf16 forward/backward: Halves memory for activations; 2x throughput on tensor cores
- Loss scaling (fp16 only): Multiplies loss by a large factor before backward pass to prevent gradient underflow; divides gradients before applying them
- fp32 accumulation: Matrix multiplications accumulate partial sums in fp32 on modern hardware, even when inputs are in fp16
5.4 Stochastic Rounding
Standard rounding (round-to-nearest-even) is deterministic: a given value always rounds the same way. When an update is smaller than half a ULP of the quantity it is added to, it is rounded away every single time, so over many operations the computed trajectory can drift systematically from the true one.
Stochastic rounding: Round $x$ up to the next representable value $\lceil x \rceil$ with probability $\dfrac{x - \lfloor x \rfloor}{\lceil x \rceil - \lfloor x \rfloor}$, otherwise round down to $\lfloor x \rfloor$. This introduces random noise but ensures $\mathbb{E}[\mathrm{round}(x)] = x$ - the rounding is unbiased.
Why this matters for low-precision training: In fp8/fp4 training, rounding errors are large enough to systematically bias weight updates. Stochastic rounding prevents this systematic bias, allowing effective training at very low precision. Used in Graphcore IPUs and experimental fp4 training.
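The effect is easiest to see on a toy grid (here a grid with fp16-like spacing near 1.0, rather than real fp16 bit manipulation):

```python
import numpy as np

def stochastic_round(x, delta, rng):
    """Round x to the grid {k * delta}, up with probability equal to the
    fractional distance from the lower grid point (toy illustration)."""
    lo = np.floor(x / delta) * delta
    p_up = (x - lo) / delta
    return np.where(rng.random(x.shape) < p_up, lo + delta, lo)

rng = np.random.default_rng(0)
delta = 2.0**-10                      # fp16-like spacing near 1.0
x = np.full(100_000, 1.0 + 1e-4)      # update smaller than half a grid step

rn = np.round(x / delta) * delta      # round-to-nearest: always lands back on 1.0
sr = stochastic_round(x, delta, rng)
print(rn.mean(), sr.mean(), x.mean())
# nearest-rounding loses the 1e-4 every single time; stochastic rounding
# preserves it in expectation
```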
6. Applications in ML
6.1 Gradient Vanishing/Exploding Through a Floating-Point Lens
Vanishing gradients in fp16: If a gradient satisfies $|g| < 2^{-14} \approx 6.1 \times 10^{-5}$ (the fp16 minimum normal), it is represented as 0 or as a subnormal with reduced precision; below $\approx 6 \times 10^{-8}$ it is exactly 0. The parameter receives no gradient - vanishing is literal in floating-point.
Loss scaling prevents this: multiplying the loss by a factor $s$ (e.g., $2^{15} = 32768$) before backprop multiplies all gradients by $s$. A gradient of $10^{-7}$ scaled by $2^{15}$ becomes $\approx 3 \times 10^{-3}$ - comfortably within fp16 range.
Gradient clipping addresses exploding gradients: if $\|g\| > \tau$, scale $g \leftarrow \tau\,g / \|g\|$. This prevents overflow and stabilizes training. Used by default in transformer training (a clip norm of 1.0 is common).
6.2 Numerically Stable Softmax and Log-Sum-Exp
Naive softmax:
$$\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}}$$
Problem: For large $x_i$ (e.g., $x_i = 100$), $e^{100} \approx 2.7 \times 10^{43}$ - overflow in fp32. For very negative $x_i$, $e^{x_i}$ underflows to $0$, losing the contribution entirely.
Stable softmax: Subtract the maximum before exponentiation:
$$\mathrm{softmax}(x)_i = \frac{e^{x_i - m}}{\sum_j e^{x_j - m}}, \qquad m = \max_j x_j$$
Because softmax is invariant to shifting all inputs by a constant, this gives identical output. Now the largest exponent is $e^0 = 1$ - no overflow. All other terms are in $(0, 1]$ - no underflow of the dominant contributions.
Log-sum-exp: $\mathrm{LSE}(x) = \log \sum_j e^{x_j}$. Stable form:
$$\mathrm{LSE}(x) = m + \log \sum_j e^{x_j - m}, \qquad m = \max_j x_j$$
Used in: numerically stable cross-entropy, log-probabilities, normalizing constants.
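All three stable forms, side by side in fp32 (a compact sketch assuming NumPy; frameworks provide fused, tested versions of these):

```python
import numpy as np

def softmax_naive(x):
    e = np.exp(x)
    return e / e.sum()

def softmax_stable(x):
    e = np.exp(x - x.max())       # shift-invariance: same output, bounded exponents
    return e / e.sum()

def logsumexp_stable(x):
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

x = np.array([100.0, 200.0, 300.0], dtype=np.float32)
with np.errstate(over='ignore', invalid='ignore'):
    print(softmax_naive(x))       # [nan nan nan] -- exp overflows to inf, inf/inf = nan
print(softmax_stable(x))          # [~0, ~0, ~1]
print(logsumexp_stable(x))        # ~300.0
```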
6.3 Stable Cross-Entropy Loss
Naive implementation:
# UNSTABLE: loss = -sum(y * log(softmax(logits)))
probs = exp(logits) / sum(exp(logits)) # overflow risk
loss = -sum(y * log(probs)) # log(0) risk
Stable implementation (PyTorch's F.cross_entropy):
# Combines log-softmax and NLL in one numerically stable pass:
# loss = -logits[true_class] + logsumexp(logits)
log_probs = logits - logsumexp(logits) # log-softmax, stable
loss = -log_probs[true_class] # NLL
This avoids both overflow in exp and log(0) in the entropy term.
6.4 Loss Scaling for fp16 Training
Dynamic loss scaling (PyTorch GradScaler):
- Start with a large scale, e.g., $s = 2^{16} = 65536$ (the GradScaler default)
- Compute scaled_loss = loss * s
- Call scaled_loss.backward() - gradients are scaled by $s$
- If any gradient is inf or nan: reduce $s$ (e.g., halve it); skip the optimizer step
- Otherwise: unscale gradients by $1/s$; take the optimizer step; after a run of overflow-free steps (2000 by default) try increasing $s$
This adaptive scheme maintains the gradient scale in fp16 range, maximizing utilization of fp16 precision while detecting and recovering from overflow.
7. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Testing equality: a + b == c for floats | Rounding makes exact equality almost never true | Use abs(a+b-c) < tol * max(abs(a+b), abs(c), 1) |
| 2 | Assuming 0.1 + 0.2 == 0.3 | Neither 0.1, 0.2, nor 0.3 is exactly representable | Always use relative tolerance for float comparisons |
| 3 | Using fp16 for loss accumulation | fp16 has only 10 bits of mantissa; running sum loses precision fast | Accumulate loss in fp32, even when training in fp16 |
| 4 | Ignoring NaN propagation | One NaN in a gradient turns ALL gradients to NaN | Add torch.isnan(loss).any() check before backward; trace to source |
| 5 | Conflating machine epsilon with smallest float | $\varepsilon_{\text{mach}} \approx 1.2 \times 10^{-7}$ for fp32; smallest normal is $\approx 1.2 \times 10^{-38}$; completely different | Know both values; use torch.finfo to look them up |
| 6 | Computing log(softmax(x)) in two steps | softmax(x) first rounds, then log can see zeros | Use log_softmax(x) which is a single numerically stable operation |
| 7 | Not checking condition number before solving | High $\kappa(A)$ means the solution is meaningless in floating point | Compute np.linalg.cond(A) first; if $\kappa$ approaches $1/\varepsilon_{\text{mach}} \approx 10^{16}$ in fp64, reformulate |
| 8 | Using fp16 without loss scaling | Gradients smaller than $\approx 6 \times 10^{-8}$ flush to zero | Use torch.amp.GradScaler() or switch to bf16 |
| 9 | Naive large-number subtraction | Subtracting nearly equal large values (e.g., $(10^8 + 1) - 10^8$) loses all digits in fp32 | Rearrange algebraically to avoid large-magnitude cancellation |
| 10 | Computing $e^{x_i}$ for large $x_i$ before dividing | Overflow before normalization | Use log-sum-exp trick; never compute raw $e^{x_i}$ without max-subtraction |
| 11 | Assuming GPU arithmetic is associative | $(a+b)+c \ne a+(b+c)$ in floating point; GPU thread ordering varies | Don't rely on associativity; use torch.use_deterministic_algorithms(True) for reproducibility |
| 12 | Ignoring subnormal flushing on GPU | CUDA default: FTZ (flush-to-zero) mode for subnormals | Be aware of subnormal behavior; check if FTZ affects your gradient magnitudes |
8. Exercises
Exercise 1 * - Machine Epsilon and Floating-Point Formats
(a) Without using np.finfo, write code to experimentally determine machine epsilon for fp32 by finding the smallest $\varepsilon$ such that np.float32(1.0) + np.float32($\varepsilon$) > np.float32(1.0) (start from 1.0 and halve repeatedly).
(b) Compare to np.finfo(np.float32).eps. Also compute machine epsilon for fp64 and fp16. What is the ratio between fp32 and fp64 epsilon?
(c) Compute the maximum representable value for fp32 without using np.finfo.max. Start from 1.0 and double until overflow.
(d) Explain: why does np.float16(65504) + np.float16(1) == np.float16(65504) return True?
Exercise 2 * - Catastrophic Cancellation
(a) Compute $f(x) = 1 - \cos x$ for $x = 10^{-4}$ in fp32 and fp64. Compare to the Taylor series approximation $x^2/2$.
(b) Compute $\sqrt{x + 1} - \sqrt{x}$ for $x = 10^8$ in fp32 and fp64. Derive an algebraically equivalent form without cancellation and verify numerically.
(c) For the quadratic formula with $a = 1$, $b = 10^8$, $c = 1$: compute both roots in fp32. One root is catastrophically inaccurate. Identify which and implement the numerically stable form using the Vieta relation $x_1 x_2 = c/a$.
Exercise 3 * - Kahan Compensated Summation
(a) Implement naive summation and Kahan summation in pure NumPy (no special functions).
(b) Sum 10 million copies of 0.1 using both methods in fp32. Compare to the exact answer 1,000,000. What is the absolute error for each method?
(c) Repeat for a shuffled sequence of length $10^6$ made of cancelling pairs $+x_i, -x_i$ (true sum = 0), so that intermediate sums become large. What do both methods give?
(d) Verify that Kahan summation gives error (nearly) independent of $n$, while naive summation gives error that grows with $n$.
Exercise 4 ** - Condition Numbers
(a) Compute the relative condition number of $f(x) = \ln x$ at $x = 0.5$, $x = 1.001$, and $x = 100$. At which value is $f$ most sensitive to input perturbation?
(b) Construct a $10 \times 10$ Hilbert matrix $H_{ij} = 1/(i + j - 1)$ (notoriously ill-conditioned). Compute $\kappa(H)$ using np.linalg.cond. Solve $Hx = b$ with $b = H\mathbf{1}$ (true solution $x = \mathbf{1}$). What is the relative error in the computed solution?
(c) For the Hilbert matrix, how many decimal digits of accuracy do you expect in the solution? Verify by computing $\log_{10}\kappa(H)$ and comparing to the actual error.
Exercise 5 ** - Numerically Stable Softmax and Log-Sum-Exp
(a) Implement naive softmax and stable softmax. Test both on $x = [100, 200, 300]$ in fp32. Which produces NaN? What does the stable version give?
(b) Implement the log-sum-exp function: naive (log(sum(exp(x)))) and stable. Test on the same inputs.
(c) Implement numerically stable cross-entropy loss $\ell(z, y) = -z_y + \mathrm{LSE}(z)$ where $y$ is the true class. Test on logits $z = [100, 0, -100]$ with true class $y = 0$. Compare to naively computing log(softmax(z)[y]).
(d) Profile the naive and stable implementations over many repeated calls. Is there a speed difference?
Exercise 6 ** - Mixed-Precision Comparison
(a) Compute the inner product $x^\top y$ of two random unit vectors of dimension $n = 10^4$ in fp16, bf16, fp32, and fp64. Use the fp64 result as ground truth. Report absolute and relative errors.
(b) Simulate gradient underflow: generate a random gradient vector in fp32 with entries drawn from $\mathcal{N}(0, \sigma^2)$, $\sigma = 10^{-7}$. Cast to fp16. What fraction of gradient entries are exactly zero (underflowed)? Repeat with a scale factor of $2^{15}$ applied before casting.
(c) Implement simple loss scaling: wrap a function f(x) that returns a simulated loss; scale the loss, compute a "gradient" via finite differences in fp16, unscale. Compare gradient quality to fp32 finite differences.
Exercise 7 *** - Floating-Point Conditioning of a Linear System
(a) Generate random matrices with condition numbers $\kappa \in \{10^2, 10^4, 10^6, 10^8\}$ by constructing $A = U \Sigma V^\top$ with random orthogonal $U, V$ and diagonal $\Sigma$ chosen so that $\sigma_{\max}/\sigma_{\min} = \kappa$.
(b) For each, solve $Ax = b$ with $b = A\mathbf{1}$ (true solution $x = \mathbf{1}$) in fp32 and fp64. Plot relative error vs $\kappa$ on a log-log scale. Verify the predicted slope of 1.
(c) For $\kappa = 10^6$ in fp32: add a tiny perturbation to $b$. How much does the solution change? Compare to the bound $\kappa(A)\,\|\Delta b\|/\|b\|$.
Exercise 8 *** - Stable Algorithm Design
(a) The sample variance formula $s^2 = \frac{1}{n-1}\sum_i (x_i - \bar{x})^2$ can be expanded as $s^2 = \frac{1}{n-1}\big(\sum_i x_i^2 - n\bar{x}^2\big)$ (one-pass formula). Show that the one-pass formula suffers catastrophic cancellation for data with a large mean and small variance (e.g., $x_i \sim \mathcal{N}(10^6, 1)$) in fp32. Verify using Welford's online algorithm as the stable alternative.
(b) Implement Welford's online algorithm for computing running mean and variance.
(c) Compare naive (two-pass), one-pass, and Welford for correctness on the data from (a) in fp32.
9. Why This Matters for AI (2026 Perspective)
| Concept | AI Impact |
|---|---|
| Machine epsilon / format choice | LLM training: bf16 is now the universal default (Llama-3, Gemma, Mistral) - understanding why requires knowing bf16's exponent range matches fp32 |
| Catastrophic cancellation | Naive attention score computation can cancel digits; FlashAttention's online softmax reordering avoids this |
| Loss scaling | Required for stable fp16 training; not needed with bf16; integral to torch.amp.GradScaler |
| Stable log-sum-exp | Every transformer's softmax in the forward pass uses this; also in all language modeling losses and perplexity calculations |
| Condition numbers | The Hessian condition number determines gradient descent convergence rate; Adam/Adafactor implicitly precondition to reduce the effective $\kappa$ |
| fp8 training | Supported by the NVIDIA H100 Transformer Engine for large-scale LLM training; requires per-tensor scaling factors |
| Gradient underflow | Without scaling, fp16 gradient magnitudes below $\approx 6 \times 10^{-8}$ flush to zero - silent learning failure |
| Stochastic rounding | Enables effective fp4/fp8 training by preventing systematic rounding bias; used in Graphcore IPU and emerging hardware |
| Subnormal flushing | CUDA FTZ mode flushes subnormals to zero for speed; can cause unexpected behavior in extreme low-precision regimes |
| Kahan summation | PyTorch's attention kernels accumulate in fp32 even in bf16 forward passes - the same error-control goal that motivates Kahan compensation |
10. Conceptual Bridge
Backward: What This Builds On
Floating-point arithmetic is the computational manifestation of the real number system. The real numbers ($\mathbb{R}$) are dense, complete, and uncountably infinite - studied in Mathematical Foundations Section01. Floating-point numbers are a finite approximation: the map $x \mapsto \mathrm{fl}(x)$ introduces the rounding errors analyzed here.
The condition number of a linear system - introduced as $\kappa(A) = \sigma_{\max}/\sigma_{\min}$ - is defined using matrix norms from Linear Algebra Basics Section06 and singular values from Advanced Linear Algebra Section02. The interpretation requires the full SVD theory developed there.
Forward: What This Enables
Numerical Linear Algebra (Section02, this chapter): Every linear system solver, least-squares method, and eigenvalue algorithm is analyzed through the lens of backward error and condition numbers introduced here. The pivoting strategies in Gaussian elimination and the choice between QR and normal equations for least squares are direct applications of the stability concepts from Section 4.
Numerical Optimization (Section03, this chapter): Gradient checking via finite differences relies on the finite-difference approximation error analysis - the step size $h$ must balance truncation error $O(h)$ against rounding/cancellation error $O(\varepsilon_{\text{mach}}/h)$, an optimization that requires understanding both. Mixed-precision training is the union of Section 5 (format choices) and optimization algorithms.
All of Applied Mathematics: Every numerical computation in this curriculum - ODE solvers, PDE methods, Fourier transforms - is ultimately floating-point arithmetic analyzed by the tools developed here.
POSITION IN CURRICULUM
========================================================================
Section01-Mathematical-Foundations (real numbers, binary representation)
|
v
Section02-Linear-Algebra-Basics (matrix norms, condition numbers prelim.)
Section03-Advanced-Linear-Algebra (SVD -> kappa(A) = sigma_max/sigma_min)
|
v
Section10-Numerical-Methods
+-- Section01-Floating-Point-Arithmetic === YOU ARE HERE
| | eps_mach, rounding error model, kappa(A), stability
| v
+-- Section02-Numerical-Linear-Algebra (stable solvers, iterative methods)
+-- Section03-Numerical-Optimization (AD, line search, finite diffs)
+-- Section04-Interpolation (polynomial, spline, RBF)
+-- Section05-Numerical-Integration (quadrature, Monte Carlo)
|
v
Section08-Optimization (gradient descent, Adam - uses mixed-precision)
Section13-ML-Specific-Math (numerical stability of transformers)
========================================================================
The foundations laid in this section - machine epsilon, rounding error models, condition numbers, backward stability - are the vocabulary of all numerical computation. Every section that follows assumes fluency with these concepts.
Appendix A: IEEE 754 Bit-Level Encoding
A.1 Decoding a 32-bit Float by Hand
Every fp32 number is stored as three fields:
Bit 31 | Bits 30-23 | Bits 22-0
Sign S | Exponent E | Mantissa M
The value (for normal numbers, $1 \le E \le 254$) is:
$$x = (-1)^{S} \times \left(1 + \frac{M}{2^{23}}\right) \times 2^{E - 127}$$
Example: Decode the fp32 hex 0x3FC00000:
- Binary: 0 01111111 10000000000000000000000
- $S = 0$ (positive)
- $E = 01111111_2 = 127$; exponent $= E - 127 = 0$
- Mantissa $= 1 + 2^{-1} = 1.5$
- Value: $(-1)^0 \times 1.5 \times 2^0 = 1.5$ [ok]
Encoding 1.0: $S = 0$; $E = 127$; $M = 0$; hex 0x3F800000.
Encoding 0.1: $0.1 = 1.10011001100\ldots_2 \times 2^{-4}$ (repeating binary). Stored exponent $E = 127 - 4 = 123$; mantissa rounded to 23 bits: 10011001100110011001101; the stored value is $\approx 0.100000001490116$.
A.2 Special Patterns
| Pattern | Meaning |
|---|---|
| $E = 0$, $M = 0$ | $\pm 0$ (zero) |
| $E = 0$, $M \ne 0$ | Subnormal: $(-1)^S \times (M/2^{23}) \times 2^{-126}$ |
| $E = 255$, $M = 0$ | $\pm\infty$ |
| $E = 255$, $M \ne 0$ | NaN (quiet or signaling) |
| $E = 255$, top mantissa bit $= 1$ | Quiet NaN (qNaN) - most common |
| $E = 255$, top mantissa bit $= 0$, $M \ne 0$ | Signaling NaN (sNaN) - triggers exception |
A.3 Python Bit Manipulation
import struct

def fp32_to_bits(x):
    """Reinterpret a Python float as the 32 bits of its fp32 encoding."""
    return struct.unpack('I', struct.pack('f', x))[0]

def bits_to_fp32(bits):
    """Reinterpret a 32-bit integer as an fp32 value."""
    return struct.unpack('f', struct.pack('I', bits))[0]

def decode_fp32(x):
    """Split an fp32 value into its sign, exponent, and mantissa fields."""
    bits = fp32_to_bits(x)
    S = (bits >> 31) & 1
    E = (bits >> 23) & 0xFF
    M = bits & 0x7FFFFF
    if E == 0:                                   # zero or subnormal
        val = (-1)**S * 2**(-126) * M / 2**23
    elif E == 255:                               # infinity or NaN
        val = float('nan') if M != 0 else (-1)**S * float('inf')
    else:                                        # normal number
        val = (-1)**S * 2**(E - 127) * (1 + M / 2**23)
    return {'S': S, 'E': E, 'M': M, 'value': val}
Appendix B: Error Analysis Worked Examples
B.1 Quadratic Formula Cancellation
The quadratic $x^2 + bx + c = 0$ with $b = 10^8$, $c = 1$ has roots:
$$x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4c}}{2}$$
In fp32, $\mathrm{fl}(b^2 - 4c) = b^2$ exactly (the $4c$ term is far below one ULP of $b^2$), so $\sqrt{b^2 - 4c}$ evaluates to $b$. So:
- $x_2 = (-b - b)/2 = -10^8$ - fine
- $x_1 = (-b + b)/2 = 0$ - catastrophically wrong (true: $\approx -10^{-8}$)
Stable form: Use Vieta's relation $x_1 x_2 = c$, so $x_1 = c / x_2 = -10^{-8}$.
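A short script reproducing the failure and the fix with the coefficients used above (assuming NumPy):

```python
import numpy as np

def roots_stable(b, c):
    """Roots of x^2 + b*x + c = 0 (b > 0): compute the large-magnitude root
    first (no cancellation), then recover the other via Vieta (x1 * x2 = c)."""
    disc = np.sqrt(b * b - 4.0 * c)
    x_big = (-b - disc) / 2.0         # both terms have the same sign -> no cancellation
    x_small = c / x_big               # Vieta's relation
    return x_big, x_small

b, c = np.float32(1e8), np.float32(1.0)
naive_small = (-b + np.sqrt(b * b - 4 * c)) / 2   # cancels to 0.0 in fp32
print(naive_small, roots_stable(b, c))            # 0.0 vs (-1e8, -1e-8)
```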
B.2 Dot Product Error Bound
For $x, y \in \mathbb{R}^n$, the computed dot product satisfies:
$$|\mathrm{fl}(x^\top y) - x^\top y| \le \gamma_n \sum_{i=1}^{n} |x_i|\,|y_i|$$
where $\gamma_n = \dfrac{n\,\varepsilon_{\text{mach}}}{1 - n\,\varepsilon_{\text{mach}}} \approx n\,\varepsilon_{\text{mach}}$.
For $n = 512$ in fp16: $\gamma_n \approx 512 \times 9.8 \times 10^{-4} \approx 0.5$ - potential 50% error! This is why transformer attention uses fp32 accumulation inside bf16/fp16 matmuls.
B.3 Gram-Schmidt vs Householder
Modified Gram-Schmidt (MGS) is forward stable but not backward stable as an orthogonalization: given a matrix $A$ with columns $a_1, \ldots, a_n$, the computed factor $\hat{Q}$ satisfies $\|\hat{Q}^\top \hat{Q} - I\| = O(\kappa(A)\,\varepsilon_{\text{mach}})$. For ill-conditioned $A$, the computed columns of $\hat{Q}$ are not orthogonal.
Householder QR is backward stable: $\hat{Q}\hat{R} = A + \Delta A$ with $\|\Delta A\| = O(\varepsilon_{\text{mach}})\,\|A\|$, and $\hat{Q}$ is orthogonal to machine precision. Always prefer Householder for numerical QR.
Appendix C: Floating-Point Arithmetic in PyTorch
C.1 Default Dtypes
import torch
torch.get_default_dtype() # float32 by default
torch.set_default_dtype(torch.float64) # change for scientific computing
# Check format properties:
torch.finfo(torch.float16) # eps, min, max, tiny
torch.finfo(torch.bfloat16)
torch.finfo(torch.float32)
torch.finfo(torch.float64)
C.2 Mixed-Precision with AMP
from torch.amp import autocast, GradScaler
scaler = GradScaler()
for batch in dataloader:
optimizer.zero_grad()
with autocast(device_type='cuda', dtype=torch.float16):
output = model(batch)
loss = criterion(output, target)
# Scales loss, calls backward, updates scaler
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
C.3 Numerically Stable Operations
# Stable log-softmax (use this, not log(softmax(x))):
torch.nn.functional.log_softmax(x, dim=-1)
# Stable cross-entropy (combines log-softmax + NLL):
torch.nn.functional.cross_entropy(logits, targets)
# Stable sigmoid cross-entropy:
torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)
# NEVER: binary_cross_entropy(sigmoid(logits), targets) <- unstable
# Numerically stable softplus:
torch.nn.functional.softplus(x) # log(1 + exp(x)), stable
C.4 Debugging NaN Issues
# Enable anomaly detection (slow, for debugging only):
torch.autograd.set_detect_anomaly(True)
# Check for NaN:
assert not torch.isnan(loss).any(), f"NaN loss at step {step}"
assert not torch.isinf(loss).any(), f"Inf loss at step {step}"
# Gradient NaN check:
for name, param in model.named_parameters():
if param.grad is not None:
if torch.isnan(param.grad).any():
print(f"NaN gradient in {name}")
Appendix D: Numerical Formats Reference Card
QUICK REFERENCE: FLOATING-POINT FORMATS FOR AI
========================================================================
Format Bits eps_mach Max Use case
----------------------------------------------------------------------
fp64 64 2.2e-16 1.8e+308 Scientific computing, reference
fp32 32 1.2e-07 3.4e+038 Training master weights, debug
bf16 16 7.8e-03 3.4e+038 LLM training (2023+ standard)
fp16 16 9.8e-04 6.5e+004 Older mixed-precision training
fp8 E4M3 8 1.2e-01 448 H100 forward pass
fp8 E5M2 8 2.5e-01 57344 H100 backward pass
int8 8 1/128 127 Inference quantization (weights)
int4 4 1/8 7 GGUF LLM inference
----------------------------------------------------------------------
Rule of thumb:
- Training large models: bf16 compute + fp32 master weights
- Inference large models: int8 or int4 weights, fp16 activations
- Scientific computing: fp64 throughout
- Edge inference: int4/int8 quantized (ONNX, TFLite, GGUF)
========================================================================
Appendix E: Proofs and Derivations
E.1 Proof: Kahan Summation Error Bound
Theorem: Kahan compensated summation computes $\hat{s} \approx \sum_{i=1}^n x_i$ with error:
$$|\hat{s} - s| \le \big(2\,\varepsilon_{\text{mach}} + O(n\,\varepsilon_{\text{mach}}^2)\big) \sum_{i=1}^n |x_i|$$
This is $O(\varepsilon_{\text{mach}})$ rather than $O(n\,\varepsilon_{\text{mach}})$ for naive summation.
Proof sketch: At each step, Kahan maintains a compensation $c$ that captures the rounding error from the previous step to full precision. Formally, after step $k$ the pair $(s_k, c_k)$ satisfies $s_k + c_k = \sum_{i \le k} x_i$ up to terms of size $O(\varepsilon_{\text{mach}}^2)\sum_{i \le k}|x_i|$.
The uncompensated error is $O(\varepsilon_{\text{mach}}^2)$ per step rather than $O(\varepsilon_{\text{mach}})$, so after $n$ steps the total is $2\,\varepsilon_{\text{mach}} + O(n\,\varepsilon_{\text{mach}}^2)$ (negligible as long as $n\,\varepsilon_{\text{mach}}$ is small).
E.2 Derivation: Stable Softmax
Claim: $\mathrm{softmax}(x - c\mathbf{1}) = \mathrm{softmax}(x)$ for any scalar $c$.
Proof:
$$\mathrm{softmax}(x - c\mathbf{1})_i = \frac{e^{x_i - c}}{\sum_j e^{x_j - c}} = \frac{e^{-c}\,e^{x_i}}{e^{-c}\sum_j e^{x_j}} = \frac{e^{x_i}}{\sum_j e^{x_j}} = \mathrm{softmax}(x)_i$$
Setting $c = \max_j x_j$ ensures the largest exponent is $0$, bounding all terms $e^{x_j - c}$ in $(0, 1]$ - within the range of any floating-point format.
E.3 Wilkinson's Backward Error for GE with Partial Pivoting
Gaussian elimination with partial pivoting (GEPP) on $Ax = b$ computes $\hat{x}$ satisfying:
$$(A + \Delta A)\hat{x} = b, \qquad \|\Delta A\|_\infty \le p(n)\,\rho_n\,\varepsilon_{\text{mach}}\,\|A\|_\infty$$
where $p(n)$ is a low-degree polynomial in $n$ and $\rho_n$ is the growth factor: $\rho_n = \max_{i,j,k} |a_{ij}^{(k)}| \,/\, \max_{i,j} |a_{ij}|$ (ratio of largest element appearing during elimination to largest initial element).
Key point: With partial pivoting, $\rho_n \le 2^{n-1}$ in the worst case (Wilkinson, 1961), but $\rho_n$ is almost always modest in practice. This is why GEPP is considered backward stable for practical purposes even though its worst-case bound is exponential.
E.4 Condition Number and Relative Error
Theorem: Let $A$ be invertible and $Ax = b$. If $(A + \Delta A)\hat{x} = b + \Delta b$ with $\|\Delta A\| \le \epsilon\,\|A\|$ and $\|\Delta b\| \le \epsilon\,\|b\|$, then:
$$\frac{\|\hat{x} - x\|}{\|x\|} \le \frac{2\,\epsilon\,\kappa(A)}{1 - \epsilon\,\kappa(A)}$$
When $\epsilon\,\kappa(A)$ is small (as for rounding errors, $\epsilon \approx \varepsilon_{\text{mach}}$), this simplifies to:
$$\frac{\|\hat{x} - x\|}{\|x\|} \lesssim 2\,\epsilon\,\kappa(A)$$
Proof: Subtract the two equations, use $x = A^{-1}b$, take norms, and apply the triangle inequality. The factor $1/(1 - \epsilon\,\kappa(A))$ appears from a Neumann-series bound on $(A + \Delta A)^{-1}$.