Floating-Point Arithmetic
"The computer is binary by nature; real numbers are not. Everything that follows is a consequence of this mismatch."
- David Goldberg
Overview
Every AI model trained on a GPU operates entirely in floating-point arithmetic - a finite, discrete approximation to the infinite real number line. When a neural network fails to converge, produces NaN losses, or gives subtly wrong predictions, the root cause is almost always numerical: a poorly conditioned operation, a rounding error that compounds across millions of iterations, or a format mismatch between fp16 and fp32 accumulators.
This section develops floating-point arithmetic from the ground up. We begin with the IEEE 754 standard - the universal specification for how computers represent and compute with non-integer numbers - then study rounding error models, catastrophic cancellation, condition numbers, and numerical stability. We close with the numerical formats used in modern AI: fp32, fp16, bf16, fp8, and their implications for training stability, inference speed, and quantization.
The central lesson is: floating-point errors are not random noise - they are deterministic, structured, and predictable. Once you understand the rounding error model $\mathrm{fl}(a \circ b) = (a \circ b)(1 + \delta)$, $|\delta| \le \varepsilon_{\text{mach}}$, you can analyze, bound, and eliminate the numerical errors that silently corrupt computations.
Prerequisites
- Real number system and binary representation - Mathematical Foundations Section01
- Basic linear algebra: vectors, matrix norms - Linear Algebra Basics
- No calculus or probability required
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive: IEEE 754 bit layouts, rounding error experiments, Kahan summation, mixed-precision comparisons, stable softmax |
| exercises.ipynb | 10 graded problems: machine epsilon, catastrophic cancellation, compensated summation, condition numbers, stable algorithms |
Learning Objectives
After completing this section, you will:
- Decode IEEE 754 binary representations for fp16, fp32, fp64, bf16, and fp8
- Define machine epsilon and explain its role in bounding rounding errors
- Apply the fundamental rounding error model
- Identify catastrophic cancellation and redesign algorithms to avoid it
- Define forward error, backward error, and backward stability; classify algorithms
- Compute the condition number of a scalar function and of a linear system
- Explain mixed-precision training (fp16 + fp32 accumulation) and loss scaling
- Implement numerically stable log-sum-exp and softmax
- Compare fp32, fp16, bf16, and fp8 in terms of range, precision, and AI suitability
- Apply Kahan compensated summation to reduce floating-point sum errors
- Explain gradient underflow in fp16 and how loss scaling addresses it
Table of Contents
- 1. Intuition
- 2. Formal Definitions: IEEE 754
- 3. Floating-Point Arithmetic
- 4. Condition Numbers and Stability
- 5. Numerical Formats for AI
- 6. Applications in ML
- 7. Common Mistakes
- 8. Exercises
- 9. Why This Matters for AI (2026 Perspective)
- 10. Conceptual Bridge
1. Intuition
1.1 Why Computers Cannot Represent Most Real Numbers
The real numbers form an uncountably infinite set. A computer, by contrast, stores information in a finite number of bits. With $n$ bits you can represent at most $2^n$ distinct values. No finite $n$ is large enough to cover even the rationals between 0 and 1 - there are infinitely many of them.
The resolution: floating-point numbers are a carefully chosen, finite subset of the rationals - dense near zero (where scientific quantities tend to cluster) and increasingly sparse at large magnitudes. The IEEE 754 standard specifies exactly which rationals are representable and exactly how arithmetic on them is performed. Every GPU, CPU, and TPU on the planet implements this standard.
The consequence for computation: Every real-number operation - addition, multiplication, transcendentals - must be rounded to the nearest representable floating-point number. This introduces a tiny error at each step. Over millions of operations (a single transformer forward pass involves billions), these errors can accumulate in structured, predictable ways. Understanding and controlling this accumulation is numerical analysis.
A concrete example: The number $0.1$ cannot be represented exactly in binary floating point. In fp32 it is stored as approximately $0.100000001$ - an encoding error of about $1.5\times10^{-9}$. This seems negligible, but when you naively sum 10 million such values in fp32, the rounding errors accumulate to tens of thousands - an error of several percent in a sum that should be exactly $10^6$.
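A quick NumPy check of both effects (the exact accumulated error depends on summation order and hardware, so treat the printed numbers as illustrative):

import numpy as np

x = np.float32(0.1)
print(f"fp32(0.1) = {x:.17f}")                       # ~0.10000000149011612
print(f"encoding error = {abs(float(x) - 0.1):.2e}")  # ~1.5e-09

# Naive sequential accumulation of 10 million copies in fp32 (cumsum is sequential)
n = 10_000_000
total = np.cumsum(np.full(n, 0.1, dtype=np.float32))[-1]
print(f"naive fp32 sum = {total:,.1f}   (exact: 1,000,000)")
print(f"relative error = {abs(float(total) - 1e6) / 1e6:.1%}")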
1.2 Why This Matters for AI
Floating-point arithmetic is not an abstraction for AI practitioners - it is the daily substrate:
Training stability: fp16 training requires loss scaling to prevent gradient underflow. Without it, gradients smaller than fp16's minimum normal value ($\approx 6\times10^{-5}$) lose precision or are rounded to zero, silently stopping learning for those parameters.
Model quality: Quantizing from fp32 to int8 for inference can degrade accuracy by 1-5% on challenging tasks if done naively. Understanding the rounding error model tells you which operations are most sensitive and how to compensate.
Numerical bugs: NaN (Not a Number) propagation is one of the most common silent failures in deep learning. A single NaN anywhere in the computation graph - caused by $0/0$, $\log(0)$, or overflow followed by $\infty - \infty$ - propagates through every subsequent operation, turning the entire gradient to NaN.
Speed and memory: The transition from fp32 to bf16 for training LLMs (GPT-4, Llama, Gemini all use bf16) halves memory and roughly doubles arithmetic throughput on modern hardware. Understanding why bf16 preserves training stability when fp16 does not requires knowing that bf16 preserves the exponent range of fp32 while sacrificing mantissa precision.
For AI: Every tensor in PyTorch, JAX, or TensorFlow is a floating-point array. torch.finfo(torch.float16).eps gives you $\varepsilon_{\text{mach}}$ for fp16. Knowing this number - $\approx 9.8\times10^{-4}$ - immediately tells you the precision limit of every fp16 computation.
1.3 A Day in the Life of a Floating-Point Number
Follow a real number through a computation:
Birth (encoding): $x = 0.3$ is mapped to the nearest fp32 representable value, $\approx 0.30000001$. Error: $\approx 1.2\times10^{-8}$.
Arithmetic (rounding): Compute $x + 0.6$. Each input is rounded; the addition result is rounded again. The result differs from the true $0.9$ by a few units in the last place.
Comparison (danger): Testing x_fp32 + 0.6_fp32 == 0.9 returns False in Python because both sides round differently. This is the source of countless bugs in numerical code.
Accumulation (catastrophe): Subtract $0.9$ from the result: the answer should be exactly $0$, but the computed value is a tiny nonzero residue made entirely of rounding noise. Its relative error is orders of magnitude larger than the original encoding error, because subtraction of nearly-equal numbers amplifies relative error. This is catastrophic cancellation.
Death (overflow/underflow): Keep multiplying by $10^{20}$: after two steps the value exceeds fp32's maximum ($\approx 3.4\times10^{38}$) and overflows to $+\infty$. Keep multiplying by $10^{-20}$ instead: the value falls below the smallest normal ($\approx 1.2\times10^{-38}$), passes through the subnormal range, and underflows to $0$. Once a value becomes $\pm\infty$ or NaN, all subsequent operations inherit the corruption.
1.4 Historical Timeline: 1985-2024
FLOATING-POINT ARITHMETIC: KEY MILESTONES
========================================================================
1980 Intel 8087 math coprocessor implements a draft of the emerging standard
1985 IEEE 754-1985 published - first universal FP standard
     (sign, exponent, mantissa; round-to-nearest-even default;
      80-bit double-extended precision included)
1987 IEEE 854 - radix-independent floating-point standard
2008 IEEE 754-2008 - adds fp16 (half precision), decimal formats
2010 NVIDIA Fermi GPU: first IEEE-compliant fp64 on GPU
2017 Mixed-precision training (Micikevicius et al., NVIDIA) -
fp16 forward/backward, fp32 master weights
2018 bf16 (Brain Float 16) introduced by Google Brain for TPUs
2020 NVIDIA Ampere (A100): bf16 and TF32 tensor cores
2022 fp8 training (NVIDIA H100 Transformer Engine) - 8-bit FP
with dynamic scaling; used for LLM training
2023 Llama-2, GPT-4, Gemini: bf16 training standard
2024 fp4 experimental; int4/int8 GGUF quantization standard
for LLM inference (llama.cpp ecosystem)
========================================================================
2. Formal Definitions: IEEE 754
2.1 The IEEE 754 Standard
The IEEE 754 standard represents a floating-point number in sign-magnitude-exponent form:
$$x = (-1)^S \times 1.M \times 2^{E - \text{bias}}$$
where:
- $S \in \{0, 1\}$ is the sign bit
- $M$ is the mantissa (or significand) in binary, with an implicit leading 1
- $E$ is the stored exponent (biased)
- $\text{bias} = 2^{k-1} - 1$ for a $k$-bit exponent field
Format specifications:
| Format | Total bits | Sign | Exponent | Mantissa | Max value | Min normal | Machine $\varepsilon$ |
|---|---|---|---|---|---|---|---|
| fp64 (double) | 64 | 1 | 11 | 52 | $1.8\times10^{308}$ | $2.2\times10^{-308}$ | $2.2\times10^{-16}$ |
| fp32 (single) | 32 | 1 | 8 | 23 | $3.4\times10^{38}$ | $1.2\times10^{-38}$ | $1.2\times10^{-7}$ |
| bf16 | 16 | 1 | 8 | 7 | $3.4\times10^{38}$ | $1.2\times10^{-38}$ | $7.8\times10^{-3}$ |
| fp16 | 16 | 1 | 5 | 10 | $6.5\times10^{4}$ | $6.1\times10^{-5}$ | $9.8\times10^{-4}$ |
| fp8 E4M3 | 8 | 1 | 4 | 3 | $448$ | $1.6\times10^{-2}$ | $1.25\times10^{-1}$ |
| fp8 E5M2 | 8 | 1 | 5 | 2 | $5.7\times10^{4}$ | $6.1\times10^{-5}$ | $2.5\times10^{-1}$ |
Key insight: bf16 has the same exponent range as fp32 (both use 8 exponent bits, bias 127) but only 7 mantissa bits vs 23. This means bf16 can represent the same scale of numbers as fp32, but with far less precision per value. This is why bf16 is training-stable (no overflow, no gradient underflow) while fp16 (5-bit exponent, max value 65504) overflows easily during training.
Bit-layout diagram:
IEEE 754 BIT LAYOUTS
========================================================================
fp32 (32 bits):
[S][EEEEEEEE][MMMMMMMMMMMMMMMMMMMMMMM]
1 8 23
fp16 (16 bits):
[S][EEEEE][MMMMMMMMMM]
1 5 10
bf16 (16 bits):
[S][EEEEEEEE][MMMMMMM]
1 8 7 <- Same exponent field as fp32!
fp8 E4M3 (8 bits):
[S][EEEE][MMM]
1 4 3
========================================================================
2.2 Machine Epsilon
Definition (Machine Epsilon): The machine epsilon $\varepsilon_{\text{mach}}$ is the spacing between $1$ and the next larger representable floating-point number - equivalently, the smallest $\varepsilon$ such that $\mathrm{fl}(1 + \varepsilon) > 1$. For a format with $p$ mantissa bits (the "precision"):
$$\varepsilon_{\text{mach}} = 2^{-p}$$
For fp32: $\varepsilon_{\text{mach}} = 2^{-23} \approx 1.19\times10^{-7}$.
Unit in the last place (ULP): The ULP of a floating-point number $x$ is the spacing between $x$ and the next representable number. For a number in the range $[2^{e}, 2^{e+1})$:
$$\mathrm{ulp}(x) = 2^{e}\,\varepsilon_{\text{mach}} = 2^{e - p}$$
Fundamental property: For any real $x$ in the normal range, the nearest floating-point number $\mathrm{fl}(x)$ satisfies:
$$\mathrm{fl}(x) = x(1 + \delta), \qquad |\delta| \le \varepsilon_{\text{mach}}$$
(round-to-nearest actually guarantees the tighter bound $|\delta| \le \varepsilon_{\text{mach}}/2$). This is the relative rounding error bound: $\left|\dfrac{\mathrm{fl}(x) - x}{x}\right| \le \varepsilon_{\text{mach}}$.
Non-example: Machine epsilon is NOT the smallest positive floating-point number. The smallest normal fp32 number is $\approx 1.2\times10^{-38}$. Machine epsilon ($\approx 1.2\times10^{-7}$) is the smallest relative perturbation of $1$ that is detectable - a very different concept.
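A quick np.finfo check makes the distinction concrete (eps is a relative spacing, tiny is an absolute magnitude):

import numpy as np

for dt in (np.float16, np.float32, np.float64):
    info = np.finfo(dt)
    # eps = spacing between 1.0 and the next float; tiny = smallest normal number
    print(f"{np.dtype(dt).name:8s} eps={info.eps:.3e}  smallest normal={info.tiny:.3e}")

print(np.float32(1) + np.finfo(np.float32).eps  > np.float32(1))   # True: detectable at 1.0
print(np.float32(1) + np.finfo(np.float32).tiny > np.float32(1))   # False: far below one ulp of 1.0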
2.3 The Floating-Point Number Line
The floating-point numbers are not uniformly spaced. They cluster densely near zero and become increasingly sparse at large magnitudes:
- In $[1, 2)$: spacing $= 2^{-23} \approx 1.2\times10^{-7}$ (fp32) - about $8.4$ million numbers
- In $[2, 4)$: spacing $= 2^{-22} \approx 2.4\times10^{-7}$ - the same $8.4$ million numbers
- In $[1024, 2048)$: spacing $= 2^{-13} \approx 1.2\times10^{-4}$ - same count, but 1000x coarser
- Near $10^{38}$: spacing $\approx 2\times10^{31}$ - individual values separated by astronomical gaps!
Consequence for AI: Neural network weights are typically initialized with magnitudes between roughly $10^{-2}$ and $1$, where fp32 spacing is at most $\approx 10^{-7}$ per value. As training progresses and some weights grow large, precision degrades - but this rarely matters because the loss landscape is smooth there. The dangerous zone is when gradients become very small (approaching $10^{-38}$ in fp32), where underflow begins.
2.4 Special Values: Infinity, NaN, Subnormals
Positive/negative infinity $\pm\infty$: Stored with all-ones exponent, zero mantissa. Result of overflow (e.g., $10^{30}\times10^{30}$ in fp32) or division by zero. All arithmetic on $\pm\infty$ is defined: $\infty + 1 = \infty$, $1/\infty = 0$, but $\infty - \infty = \text{NaN}$.
NaN (Not a Number): All-ones exponent, nonzero mantissa. Results from $0/0$, $\infty - \infty$, $\sqrt{-1}$. NaN is contagious: any arithmetic involving NaN produces NaN. A single NaN in a gradient will propagate backward through the entire computation graph, turning all parameter updates to NaN.
Subnormal numbers: When the stored exponent is all-zeros, the implicit leading 1 becomes a leading 0, allowing numbers smaller than the normal minimum. fp32 subnormals cover roughly $(1.4\times10^{-45},\ 1.2\times10^{-38})$ with reduced precision. Many GPUs flush subnormals to zero ("FTZ mode") for performance, which can cause unexpected underflow.
Practical rule: In deep learning code, if torch.isnan(loss) is True, trace backward: either the loss computation involves log(0), a division by a near-zero value, or upstream activations have overflowed.
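The special-value rules and the contagion are easy to reproduce in NumPy (warnings silenced so the special values themselves are visible):

import numpy as np

with np.errstate(over='ignore', invalid='ignore', divide='ignore'):
    print(np.float32(1e38) * np.float32(10))          # inf  (overflow)
    print(np.float32(1.0) / np.float32(np.inf))       # 0.0
    print(np.float32(np.inf) - np.float32(np.inf))    # nan  (undefined)
    x = np.array([1.0, np.nan, 3.0], dtype=np.float32)
    print(x.sum(), np.nansum(x))                       # nan 4.0 -- NaN is contagious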
2.5 Rounding Modes
IEEE 754 defines five rounding modes. The default - used in virtually all ML training - is round-to-nearest-even (banker's rounding):
| Mode | Description | When used |
|---|---|---|
| Round-to-nearest-even | Round to nearest; on tie, round to even mantissa bit | Default (IEEE 754) |
| Round toward | Always round up | Interval arithmetic upper bounds |
| Round toward | Always round down | Interval arithmetic lower bounds |
| Round toward zero | Truncate | Hardware division |
| Stochastic rounding | Randomly round up/down, proportional to distance | Low-precision training (fp8) |
Round-to-nearest-even is the default because it is unbiased: on average, half of tie-breaks round up and half round down, preventing systematic accumulation of rounding errors in long sums. This is critical for gradient accumulation over millions of steps.
3. Floating-Point Arithmetic
3.1 The Fundamental Rounding Error Model
Theorem (Rounding Error Model): For any IEEE 754-compliant basic operation $\circ \in \{+, -, \times, \div\}$ on floating-point numbers $x, y$:
$$\mathrm{fl}(x \circ y) = (x \circ y)(1 + \delta), \qquad |\delta| \le \varepsilon_{\text{mach}}$$
The computed result equals the exact result times a factor $(1 + \delta)$ where $\delta$ is at most one machine epsilon in magnitude. This is the single most important fact in numerical analysis.
Proof sketch: By definition of $\varepsilon_{\text{mach}}$, rounding any real $z$ in the normal range gives $\mathrm{fl}(z) = z(1 + \delta)$ with $|\delta| \le \varepsilon_{\text{mach}}$ (round-to-nearest). IEEE 754 mandates that each basic operation is computed as if exact and then rounded, so $\mathrm{fl}(x \circ y) = (x \circ y)(1 + \delta)$.
Chaining errors: For a sequence of $n$ operations, the cumulative relative error is bounded by approximately $n\,\varepsilon_{\text{mach}}$. For a dot product $x^\top y = \sum_{i=1}^{n} x_i y_i$:
$$\left|\mathrm{fl}(x^\top y) - x^\top y\right| \;\lesssim\; n\,\varepsilon_{\text{mach}} \sum_{i=1}^{n} |x_i y_i|$$
For AI: The attention softmax computes $q^\top k / \sqrt{d_k}$ - a dot product of length $d_k$ (typically 64-128). In fp16, $\varepsilon_{\text{mach}} \approx 10^{-3}$, and the dot product can accumulate relative error on the order of $d_k\,\varepsilon_{\text{mach}} \approx 10^{-1}$ if done naively in fp16. This is why FlashAttention accumulates the softmax denominator in fp32 even when computing in fp16.
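The effect is easy to measure. The sketch below uses uniform random vectors (so the true dot product is safely away from zero) and compares naive fp16 accumulation with fp32 accumulation of the same fp16 inputs; the exact errors depend on the random seed:

import numpy as np

rng = np.random.default_rng(0)
d = 128                                                    # typical per-head dimension
q = rng.random(d).astype(np.float16)
k = rng.random(d).astype(np.float16)
ref = np.dot(q.astype(np.float64), k.astype(np.float64))  # ground truth

acc16 = np.float16(0.0)
for a, b in zip(q, k):                                     # naive fp16 accumulation
    acc16 = np.float16(acc16 + a * b)
acc32 = np.dot(q.astype(np.float32), k.astype(np.float32))  # fp32 accumulation

print(f"fp16 accumulation: rel error = {abs(float(acc16) - ref) / ref:.2e}")
print(f"fp32 accumulation: rel error = {abs(float(acc32) - ref) / ref:.2e}")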
3.2 Catastrophic Cancellation
Catastrophic cancellation occurs when two nearly-equal floating-point numbers are subtracted, causing the leading significant digits to cancel and leaving only the noisy low-order bits.
Example: Compute $f(x) = \sqrt{x+1} - \sqrt{x}$ for large $x$.
For $x = 10^8$ in fp32:
- $\sqrt{x+1} \approx 10000.00005$ (fp32: $10000.0$)
- $\sqrt{x} = 10000$ (fp32: $10000.0$)
- $\sqrt{x+1} - \sqrt{x} = 0.0$ - complete loss of all digits!
True value: $\approx 5.0\times10^{-5}$.
Algebraic fix: Multiply and divide by the conjugate:
$$\sqrt{x+1} - \sqrt{x} = \frac{1}{\sqrt{x+1} + \sqrt{x}}$$
The denominator is computed by addition (not subtraction), avoiding cancellation entirely.
General rule: When a subtraction $a - b$ results in a value much smaller than $|a|$ and $|b|$, suspect catastrophic cancellation. The relative error in the result can be as large as $\dfrac{\max(|a|, |b|)}{|a - b|}\,\varepsilon_{\text{mach}}$ - the ratio of operand magnitude to result magnitude times machine epsilon.
In AI: The log-sum-exp function $\log\sum_i e^{x_i}$ suffers overflow and loss of significance when the $x_i$ are very different in magnitude. The numerically stable form subtracts the maximum first - an algebraic rearrangement that avoids both overflow and cancellation (see Section 6.2).
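The running example and its conjugate fix, in fp32 (printed values are approximate):

import numpy as np

x = np.float32(1e8)
naive  = np.sqrt(x + np.float32(1)) - np.sqrt(x)                    # catastrophic cancellation
stable = np.float32(1) / (np.sqrt(x + np.float32(1)) + np.sqrt(x))  # conjugate form
exact  = 1.0 / (np.sqrt(1e8 + 1.0) + np.sqrt(1e8))                  # fp64 reference

print(f"naive  fp32: {naive:.6e}")    # 0.0 -- every digit lost
print(f"stable fp32: {stable:.6e}")   # ~5.000000e-05
print(f"fp64 ref   : {exact:.6e}")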
3.3 Error Accumulation in Long Computations
Forward error analysis: Track how errors in inputs propagate to errors in outputs. For a function $f$ evaluated via a sequence of operations, the forward error bound expresses $|\hat{y} - f(x)|$ in terms of $\varepsilon_{\text{mach}}$ and problem data.
Backward error analysis (Wilkinson): Ask a different question: for what perturbed input $x + \Delta x$ does the exact function value equal the computed output, $f(x + \Delta x) = \hat{y}$? The backward error is $\|\Delta x\|$ - how much was the input effectively perturbed?
Why backward analysis is preferred: If an algorithm's backward error is of size $O(\varepsilon_{\text{mach}})\,\|x\|$ (i.e., the computed answer is the exact answer for a slightly perturbed input), the algorithm is backward stable - as good as you can expect from finite-precision arithmetic.
Summation error: For the sum $S = \sum_{i=1}^{n} x_i$, naive sequential summation has forward error $O(n\,\varepsilon_{\text{mach}})\sum_i |x_i|$. Pairwise summation (binary tree) reduces this to $O(\varepsilon_{\text{mach}}\log n)\sum_i |x_i|$. Kahan summation reduces it to $O(\varepsilon_{\text{mach}})\sum_i |x_i|$, independent of $n$.
3.4 Kahan Compensated Summation
Kahan (1965) showed that a small modification to sequential summation reduces the accumulated error from $O(n\,\varepsilon_{\text{mach}})$ to $O(\varepsilon_{\text{mach}})$, regardless of $n$:
KAHAN SUMMATION ALGORITHM
========================================================================
Input: x_1, x_2, ..., x_n
Output: S ~= sum(x_i)
s = 0.0 # running sum
c = 0.0 # compensation (tracks lost digits)
for each x_i:
y = x_i - c # compensated addend
t = s + y # sum so far (y is small, loses digits)
c = (t - s) - y # recover the lost digits
s = t # update sum
return s
========================================================================
Why it works: The compensation variable c tracks the rounding error at each step. It accumulates the "lost" low-order bits and feeds them back into the next iteration. The net effect is that the error behaves as $O(\varepsilon_{\text{mach}})$ regardless of $n$ - equivalent to computing in twice the precision.
In PyTorch: torch.sum() uses pairwise summation (tree reduction), which achieves $O(\varepsilon_{\text{mach}}\log n)$ error. For high-precision accumulation (e.g., loss over a large batch), consider accumulating in fp64 then converting back.
3.5 Interval Arithmetic and Verified Computation
Interval arithmetic replaces each number with an interval guaranteed to contain the true value. All operations are extended to operate on intervals, widening the bounds to account for rounding errors.
Example: the scalar $0.1$ is replaced by the interval $[\underline{0.1},\ \overline{0.1}]$, where $\overline{0.1}$ denotes $0.1$ rounded up to the next fp32 value and $\underline{0.1}$ rounded down; each subsequent operation rounds the lower endpoint down and the upper endpoint up.
Practical use: Interval arithmetic gives verified bounds on computation results - you know with certainty that the true answer lies in the computed interval. Used in: formal verification of safety-critical software, reliable computing, and bound propagation in neural network verification (e.g., $\alpha,\beta$-CROWN for adversarial robustness).
Limitation: Interval widths grow rapidly in long computations (dependency problem), making intervals pessimistic for iterative algorithms. More sophisticated tools (affine arithmetic, Taylor models) improve the dependency problem.
4. Condition Numbers and Stability
4.1 Condition Number of a Scalar Problem
The condition number of a computational problem measures the sensitivity of the output to perturbations in the input - it quantifies how much the problem amplifies input errors, independent of any particular algorithm.
Definition (absolute condition number): For a differentiable function $f: \mathbb{R} \to \mathbb{R}$:
$$\hat{\kappa}(x) = \lim_{\delta \to 0}\ \sup_{|\Delta x| \le \delta} \frac{|f(x + \Delta x) - f(x)|}{|\Delta x|} = |f'(x)|$$
Definition (relative condition number):
$$\kappa(x) = \left|\frac{x\, f'(x)}{f(x)}\right|$$
This measures relative output change per unit relative input change - the more natural quantity.
Examples:
| Function | $\kappa(x)$ | Well-conditioned? |
|---|---|---|
| $\sqrt{x}$ | $1/2$ | Yes - always |
| $\ln x$ | $1/\lvert\ln x\rvert$ | No near $x = 1$ |
| $e^{x}$ | $\lvert x\rvert$ | No for large $\lvert x\rvert$ |
| $\sin x$ | $\lvert x\cos x / \sin x\rvert$ | No near $x = k\pi$, $k \ne 0$ |
| $a - b$ (subtraction) | $\approx \max(\lvert a\rvert, \lvert b\rvert)/\lvert a - b\rvert$ | No when $a \approx b$ |
Key insight: A large condition number ($\kappa \gg 1$) means the problem is inherently ill-conditioned - even the best algorithm operating in exact arithmetic cannot give accurate results given inaccurate inputs. No numerical technique can fix a fundamentally ill-conditioned problem; it requires reformulation.
4.2 Forward vs Backward Error Analysis
Given a computed result $\hat{y}$ for a true answer $y = f(x)$:
Forward error: $|\hat{y} - y|$ (or relative: $|\hat{y} - y|/|y|$) - how far the computed answer is from the truth.
Backward error: The smallest $\|\Delta x\|$ such that $f(x + \Delta x) = \hat{y}$ - what perturbation to the input makes our computed answer exact. Symbolically: $\hat{y} = f(x + \Delta x)$ for some $\Delta x$; the backward error is $\min\|\Delta x\|$.
Connection: Forward error $\lesssim$ condition number $\times$ backward error:
$$\frac{|\hat{y} - y|}{|y|} \;\lesssim\; \kappa(x)\cdot\frac{\|\Delta x\|}{\|x\|}$$
If an algorithm has backward error $O(\varepsilon_{\text{mach}})$, then its forward error is at most $O(\kappa\,\varepsilon_{\text{mach}})$ - the best possible for that problem.
4.3 Forward and Backward Stability of Algorithms
Definition (backward stable): An algorithm $\hat{f}$ for computing $y = f(x)$ is backward stable if the computed result satisfies:
$$\hat{f}(x) = f(x + \Delta x) \quad \text{for some } \|\Delta x\| = O(\varepsilon_{\text{mach}})\,\|x\|$$
A backward-stable algorithm computes the exact answer for a slightly perturbed problem. Combined with a well-conditioned problem ($\kappa$ moderate), this guarantees the forward error is $O(\kappa\,\varepsilon_{\text{mach}})$.
Definition (forward stable): An algorithm is forward stable if the computed result satisfies:
$$\frac{\|\hat{f}(x) - f(x)\|}{\|f(x)\|} = O(\kappa(x)\,\varepsilon_{\text{mach}})$$
i.e., its forward error is comparable to that of a backward stable algorithm.
Hierarchy:
backward stable -> forward stable (when $\kappa$ is moderate)
forward stable -/-> backward stable (in general)
Examples:
- Backward stable: Householder QR, Gaussian elimination with partial pivoting
- Forward stable but not backward stable: Gram-Schmidt orthogonalization (modified version)
- Unstable: Naive Gaussian elimination without pivoting (exponential error growth possible)
4.4 Condition Number of a Matrix
For a linear system $Ax = b$, the matrix condition number is:
$$\kappa(A) = \|A\|\,\|A^{-1}\| = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)} \quad (\text{2-norm})$$
where $\sigma_{\max}$ and $\sigma_{\min}$ are the largest and smallest singular values.
Geometric interpretation: $\kappa(A)$ measures how much $A$ can stretch unit vectors relative to how much it shrinks them. A unit sphere gets mapped to an ellipsoid with semi-axes $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_n$; the condition number is the ratio $\sigma_{\max}/\sigma_{\min}$.
Error amplification: If $Ax = b$ and $A(x + \Delta x) = b + \Delta b$ (small right-hand side perturbation):
$$\frac{\|\Delta x\|}{\|x\|} \le \kappa(A)\,\frac{\|\Delta b\|}{\|b\|}$$
The condition number is the worst-case amplification factor from $b$-error to $x$-error.
Values to know:
- $\kappa(A) = 1$ - perfectly conditioned (identity, orthogonal matrices)
- $\kappa(A) = 10^{k}$ - you lose about $k$ decimal digits of accuracy
- $\kappa(A) \gtrsim 1/\varepsilon_{\text{mach}}$ - $A$ is numerically singular; solutions are meaningless
For AI: The Hessian matrix $H$ of a neural network loss has condition number $\kappa(H) = \lambda_{\max}/\lambda_{\min}$. When $\kappa$ is large ($10^{3}$-$10^{6}$ in practice), gradient descent converges slowly and is sensitive to learning rate. Adam's parameter-wise scaling approximates diagonal preconditioning, reducing the effective condition number. Full preconditioning (Shampoo optimizer) targets the exact Hessian eigenspectrum.
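A sketch that constructs a matrix with a prescribed condition number of $10^6$ and checks the rule of thumb; the measured fp32 error is typically within a modest factor of the $\kappa\,\varepsilon_{\text{mach}}$ estimate (exact numbers depend on the random seed):

import numpy as np

rng = np.random.default_rng(0)
n = 50
# Build A = U diag(s) V^T with sigma_max/sigma_min = 1e6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)
A = (U * s) @ V.T

x_true = np.ones(n)
b = A @ x_true
x32 = np.linalg.solve(A.astype(np.float32), b.astype(np.float32))

print(f"cond(A)        = {np.linalg.cond(A):.2e}")
print(f"fp32 rel error = {np.linalg.norm(x32 - x_true) / np.linalg.norm(x_true):.2e}")
print(f"kappa * eps    = {np.linalg.cond(A) * np.finfo(np.float32).eps:.2e}")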
5. Numerical Formats for AI
5.1 fp32, fp16, bf16 Compared
FLOATING-POINT FORMAT COMPARISON
========================================================================
Property fp32 fp16 bf16
---------------------------------------------------------------------
Bits 32 16 16
Exponent bits 8 5 8
Mantissa bits 23 10 7
Max value 3.4e+38 6.5e+04 3.4e+38
Min normal 1.2e-38 6.1e-05 1.2e-38
Machine eps 1.2e-07 9.8e-04 7.8e-03
---------------------------------------------------------------------
Training use Reference Legacy (APEX) Standard (2023+)
Gradient risk None Underflow None
Memory (1B par) 4 GB 2 GB 2 GB
Arithmetic thrpt Baseline ~2x fp32 ~2x fp32
========================================================================
bf16 vs fp16: Both are 16-bit. bf16 keeps the 8-bit exponent field of fp32, giving the same dynamic range. fp16 uses only 5 exponent bits - max value 65504 - which causes overflow for activations in large models. bf16 has 3 fewer mantissa bits than fp16 (roughly one less decimal digit of precision), but training remains stable because activations and gradients don't overflow.
Why bf16 won for LLMs: GPT-2 trained in fp16 with careful loss scaling. GPT-3, Llama, Mistral, Gemini all train in bf16 - no loss scaling needed, simpler codebase, same memory.
5.2 fp8 and Extreme Quantization
fp8 uses only 8 bits. NVIDIA's H100 supports two fp8 formats:
- E4M3 (4 exponent, 3 mantissa): higher precision, smaller range (max 448)
- E5M2 (5 exponent, 2 mantissa): lower precision, larger range (max 57344)
fp8 training workflow: Forward pass in fp8 E4M3; backward pass in fp8 E5M2; weight updates in fp32 or bf16. The Transformer Engine (NVIDIA) automates format selection and scaling.
Post-training quantization: Most LLM inference today uses int8 or int4 weight quantization (not fp8): weights stored as low-bit integers, dequantized to fp16/bf16 before matmul. Tools: GPTQ, AWQ, bitsandbytes. For a symmetric scheme with bit-width $b$, the quantization step is $\Delta = \max|w|/(2^{b-1} - 1)$ and the per-weight error is bounded by $\Delta/2$.
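A minimal sketch of symmetric per-tensor int8 weight quantization (real tools such as GPTQ and AWQ use per-group scales and calibration data, so this only illustrates the error scale; the 0.02 weight magnitude is an illustrative assumption):

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32) * 0.02

# Symmetric per-tensor int8 quantization: w ~= scale * q, q in [-127, 127]
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = q.astype(np.float32) * scale

err = np.abs(w_deq - w)
print(f"scale = {scale:.3e}")
print(f"max abs error  = {err.max():.3e}  (bound: scale/2 = {scale / 2:.3e})")
print(f"mean abs error = {err.mean():.3e}")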
5.3 Mixed-Precision Training
Mixed-precision training (Micikevicius et al., 2018) is the standard approach for training large models:
MIXED-PRECISION TRAINING WORKFLOW
========================================================================
Master weights: fp32 ---------------------------------------------+
| cast down ^ update |
Forward pass: fp16/bf16 (activations, weights) |
Loss: fp32 accumulation |
Backward pass: fp16/bf16 gradients |
Gradient: fp16/bf16 -> fp32 (before weight update) ----------+
With loss scaling (fp16 only):
loss_scaled = loss x scale_factor (e.g., 2^15 = 32768)
Backward through loss_scaled
gradients_fp32 /= scale_factor before optimizer step
========================================================================
Key elements:
- fp32 master weights: Full-precision copy of weights for optimizer states (momentum, variance) and weight updates
- fp16/bf16 forward/backward: Halves memory for activations; 2x throughput on tensor cores
- Loss scaling (fp16 only): Multiplies loss by a large factor before backward pass to prevent gradient underflow; divides gradients before applying them
- fp32 accumulation: Matrix multiplications accumulate partial sums in fp32 on modern hardware, even when inputs are in fp16
5.4 Stochastic Rounding
Standard rounding (round-to-nearest) is deterministic: the same operands always round the same way, with ties going to the even mantissa bit. In particular, an update smaller than half a ULP of the running value is always rounded away completely. Over many operations, this can systematically shift the computed trajectory.
Stochastic rounding: Round $x$ up to the next representable value $\lceil x \rceil$ with probability $\dfrac{x - \lfloor x \rfloor}{\lceil x \rfloor - \lfloor x \rfloor}$, otherwise round down to $\lfloor x \rfloor$. This introduces random noise but ensures $\mathbb{E}[\mathrm{round}(x)] = x$ - the rounding is unbiased.
Why this matters for low-precision training: In fp8/fp4 training, rounding errors are large enough to systematically bias weight updates. Stochastic rounding prevents this systematic bias, allowing effective training at very low precision. Used in Graphcore IPUs and experimental fp4 training.
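A toy sketch on an artificial coarse grid (step size 1.0, standing in for a low-precision format's spacing, which is an assumption for illustration) shows the unbiasedness: round-to-nearest maps every 0.3 to 0.0 and the signal is gone, while stochastic rounding preserves the mean:

import numpy as np

def stochastic_round(x, step, rng):
    """Round each value to a multiple of `step`, rounding up with probability
    proportional to the distance from the lower multiple (unbiased in expectation)."""
    lo = np.floor(x / step) * step
    p_up = (x - lo) / step
    return lo + step * (rng.random(np.shape(x)) < p_up)

rng = np.random.default_rng(0)
x = np.full(100_000, 0.3)
print(np.round(x / 1.0).mean() * 1.0)        # 0.0  -- round-to-nearest loses the signal
print(stochastic_round(x, 1.0, rng).mean())  # ~0.3 -- unbiased on average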
6. Applications in ML
6.1 Gradient Vanishing/Exploding Through a Floating-Point Lens
Vanishing gradients in fp16: If a gradient satisfies $|g| < 6.1\times10^{-5}$ (the fp16 minimum normal), it is represented as 0 (or as a subnormal with reduced precision). The parameter receives no usable gradient - vanishing is literal in floating point.
Loss scaling prevents this: multiplying the loss by $S$ before backprop multiplies all gradients by $S$. A gradient of $10^{-7}$ with scale $S = 2^{15} \approx 3.3\times10^{4}$ becomes $\approx 3\times10^{-3}$ - comfortably within fp16 range.
Gradient clipping addresses exploding gradients: if $\|g\| > c$, scale $g \leftarrow c\,g/\|g\|$. This prevents overflow and stabilizes training. Used by default in transformer training (clip norm $c = 1.0$ is standard). A simulation of the underflow effect is sketched below.
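The sketch assumes a log-uniform spread of gradient magnitudes (an illustrative assumption, not a claim about any particular model) and mimics loss scaling by scaling before the fp16 cast:

import numpy as np

rng = np.random.default_rng(0)
# Simulated fp32 gradient magnitudes spanning 1e-9 .. 1e-3 (log-uniform)
g32 = 10.0 ** rng.uniform(-9, -3, size=100_000)

g16 = g32.astype(np.float16)
print(f"underflowed to zero in fp16: {np.mean(g16 == 0):.1%}")

scale = 2.0 ** 15
g16_scaled = (g32 * scale).astype(np.float16)        # scale BEFORE casting (loss scaling)
g_recovered = g16_scaled.astype(np.float32) / scale  # unscale afterwards
print(f"underflowed with 2^15 scaling: {np.mean(g16_scaled == 0):.1%}")
print(f"max rel error after unscaling: {np.max(np.abs(g_recovered - g32) / g32):.2e}")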
6.2 Numerically Stable Softmax and Log-Sum-Exp
Naive softmax:
$$\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}}$$
Problem: For large $x_i$ (e.g., $x_i = 1000$), $e^{x_i}$ overflows in fp32 (anything above $x_i \approx 88$ does). For very negative $x_i$, $e^{x_i}$ underflows to 0, losing the contribution entirely.
Stable softmax: Subtract the maximum before exponentiation:
$$m = \max_j x_j, \qquad \mathrm{softmax}(x)_i = \frac{e^{x_i - m}}{\sum_j e^{x_j - m}}$$
Because softmax is invariant to shifting all inputs by a constant, this gives identical output. Now the largest exponent is $0$ (so the largest term is $e^{0} = 1$) - no overflow. All other terms are in $(0, 1]$ - no underflow for reasonable inputs.
Log-sum-exp: $\mathrm{LSE}(x) = \log\sum_i e^{x_i}$. Stable form:
$$\mathrm{LSE}(x) = m + \log\sum_i e^{x_i - m}, \qquad m = \max_i x_i$$
Used in: numerically stable cross-entropy, log-probabilities, normalizing constants.
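A side-by-side check (the overflow warning from the naive version is silenced so the NaNs are visible; printed values are approximate):

import numpy as np

def softmax_naive(x):
    e = np.exp(x)
    return e / e.sum()

def softmax_stable(x):
    e = np.exp(x - x.max())
    return e / e.sum()

x = np.array([1000.0, 1001.0, 1002.0], dtype=np.float32)
with np.errstate(over='ignore', invalid='ignore'):
    print(softmax_naive(x))    # [nan nan nan] -- exp(1000) overflows
print(softmax_stable(x))        # [0.090 0.245 0.665], finite and correct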
6.3 Stable Cross-Entropy Loss
Naive implementation:
# UNSTABLE: loss = -sum(y * log(softmax(logits)))
probs = exp(logits) / sum(exp(logits)) # overflow risk
loss = -sum(y * log(probs)) # log(0) risk
Stable implementation (PyTorch's F.cross_entropy):
# Combines log-softmax and NLL in one numerically stable pass:
# loss = -logits[true_class] + logsumexp(logits)
log_probs = logits - logsumexp(logits) # log-softmax, stable
loss = -log_probs[true_class] # NLL
This avoids both overflow in exp and log(0) in the entropy term.
6.4 Loss Scaling for fp16 Training
Dynamic loss scaling (PyTorch GradScaler):
- Start with a large scale $s$ (e.g., $2^{16}$)
- Compute scaled_loss = loss * s
- Call scaled_loss.backward() - gradients are scaled by $s$
- If any gradient is inf or nan: reduce $s$ (typically halve it); skip the optimizer step
- Otherwise: unscale gradients by $1/s$; take the optimizer step; every $N$ successful steps, try increasing $s$ (typically doubling)
This adaptive scheme maintains the gradient scale in fp16 range, maximizing utilization of fp16 precision while detecting and recovering from overflow.
7. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Testing equality: a + b == c for floats | Rounding makes exact equality almost never true | Use abs(a+b-c) < tol * max(abs(a+b), abs(c), 1) |
| 2 | Assuming 0.1 + 0.2 == 0.3 | Neither 0.1, 0.2, nor 0.3 is exactly representable | Always use relative tolerance for float comparisons |
| 3 | Using fp16 for loss accumulation | fp16 has only 10 bits of mantissa; running sum loses precision fast | Accumulate loss in fp32, even when training in fp16 |
| 4 | Ignoring NaN propagation | One NaN in a gradient turns ALL gradients to NaN | Add torch.isnan(loss).any() check before backward; trace to source |
| 5 | Conflating machine epsilon with smallest float | $\varepsilon_{\text{mach}} \approx 1.2\times10^{-7}$ for fp32; smallest normal is $\approx 1.2\times10^{-38}$; completely different | Know both values; use torch.finfo to look them up |
| 6 | Computing log(softmax(x)) in two steps | softmax(x) first rounds, then log can see zeros | Use log_softmax(x) which is a single numerically stable operation |
| 7 | Not checking condition number before solving | High $\kappa(A)$ means the solution is meaningless in floating point | Compute np.linalg.cond(A) first; if $\kappa$ approaches $1/\varepsilon_{\text{mach}}$, reformulate |
| 8 | Using fp16 without loss scaling | Gradient magnitudes below fp16's representable range flush to zero | Use torch.amp.GradScaler() or switch to bf16 |
| 9 | Naive large-number subtraction | $\sqrt{x+1} - \sqrt{x}$ for large $x$ loses all digits in fp32 | Rearrange algebraically to avoid large-magnitude cancellation |
| 10 | Computing $e^{x_i}$ for large $x_i$ before dividing | Overflow before normalization | Use log-sum-exp trick; never compute raw $e^{x_i}$ without max-subtraction |
| 11 | Assuming GPU arithmetic is associative | (a+b)+c != a+(b+c) in floating point; GPU thread ordering varies | Don't rely on associativity; use torch.use_deterministic_algorithms(True) for reproducibility |
| 12 | Ignoring subnormal flushing on GPU | CUDA default: FTZ (flush-to-zero) mode for subnormals | Be aware of subnormal behavior; check if FTZ affects your gradient magnitudes |
8. Exercises
Exercise 1 * - Machine Epsilon and Floating-Point Formats
(a) Without using np.finfo, write code to experimentally determine machine epsilon for fp32 by finding the smallest $\varepsilon$ such that $\mathrm{fl}(1 + \varepsilon) > 1$ (start from 1.0 and halve repeatedly).
(b) Compare to np.finfo(np.float32).eps. Also compute for fp64 and fp16. What is the ratio between fp32 and fp64 epsilon?
(c) Compute the maximum representable value for fp32 without using np.finfo.max. Start from 1.0 and double until overflow.
(d) Explain: why does np.float16(65504) + np.float16(1) == np.float16(65504) return True?
Exercise 2 * - Catastrophic Cancellation
(a) Compute $1 - \cos x$ for a small $x$ (e.g., $x = 10^{-4}$) in fp32 and fp64. Compare to the Taylor approximation $x^2/2$.
(b) Compute $\sqrt{x+1} - \sqrt{x}$ for a large $x$ (e.g., $x = 10^{8}$) in fp32 and fp64. Derive an algebraically equivalent form without cancellation and verify numerically.
(c) For the quadratic formula with $a = 1$, $b = -10^{4}$, $c = 1$: compute both roots in fp32. One root is catastrophically inaccurate. Identify which and implement the numerically stable form using the Vieta relation $x_1 x_2 = c/a$.
Exercise 3 * - Kahan Compensated Summation
(a) Implement naive summation and Kahan summation in pure NumPy (no special functions).
(b) Sum 10 million copies of $0.1$ using both methods in fp32. Compare to the exact answer $10^{6}$. What is the absolute error for each method?
(c) Repeat for a sequence with true sum zero that alternates huge and tiny entries (e.g., repeat the block $[10^{8}, 1, -10^{8}, -1]$) with total length $10^{6}$. What do both methods give?
(d) Verify that Kahan summation gives error $O(\varepsilon_{\text{mach}})$ independent of $n$, while naive summation gives error $O(n\,\varepsilon_{\text{mach}})$.
Exercise 4 ** - Condition Numbers
(a) Compute the relative condition number $\kappa(x) = |x f'(x)/f(x)|$ of a function such as $f(x) = \ln x$ at several points (e.g., $x = 1.01$, $x = 2$, $x = 100$). At which value is $f$ most sensitive to input perturbation?
(b) Construct a $10\times10$ Hilbert matrix $H_{ij} = 1/(i + j - 1)$ (notoriously ill-conditioned). Compute $\kappa(H)$ using np.linalg.cond. Solve $Hx = b$ with $b = H\mathbf{1}$ (true solution $x = \mathbf{1}$). What is the relative error in the computed solution?
(c) For the Hilbert matrix, how many decimal digits of accuracy do you expect in the solution? Verify by computing $\log_{10}\kappa(H)$ and comparing to the actual error.
Exercise 5 ** - Numerically Stable Softmax and Log-Sum-Exp
(a) Implement naive softmax and stable softmax. Test both on $x = [1000, 1001, 1002]$ in fp32. Which produces NaN? What does the stable version give?
(b) Implement the log-sum-exp function: naive (log(sum(exp(x)))) and stable. Test on inputs with very large and very negative entries (e.g., $x = [1000, 0, -1000]$).
(c) Implement numerically stable cross-entropy loss $\ell = -\log\mathrm{softmax}(z)_y$ where $y$ is the true class. Test on logits with large entries and compare to naively computing log(softmax(z)[y]).
(d) Profile the two implementations over many repeated calls. Is there a speed difference?
Exercise 6 ** - Mixed-Precision Comparison
(a) Compute the inner product $x^\top y$ for high-dimensional (e.g., $n = 10^{4}$) random unit vectors in fp16, bf16, fp32, and fp64 (use PyTorch dtypes for bf16, since NumPy has no native bf16). Use the fp64 result as ground truth. Report absolute and relative errors.
(b) Simulate gradient underflow: generate a random gradient vector in fp32 with entries spanning many orders of magnitude (e.g., log-uniform over $[10^{-9}, 10^{-3}]$). Cast to fp16. What fraction of gradient entries are exactly zero (underflowed)? Repeat with a scale factor of $2^{15}$ applied before casting.
(c) Implement simple loss scaling: wrap a function f(x) that returns a simulated loss; scale the loss, compute a "gradient" via finite differences in fp16, unscale. Compare gradient quality to fp32 finite differences.
Exercise 7 *** - Floating-Point Conditioning of a Linear System
(a) Generate random matrices with condition numbers $\kappa \in \{10^{2}, 10^{4}, 10^{6}, 10^{8}\}$ by constructing $A = U\Sigma V^\top$ with random orthogonal $U, V$ and $\Sigma$ containing logarithmically spaced singular values from $1$ down to $1/\kappa$.
(b) For each, solve $Ax = b$ with $b = A\mathbf{1}$ (true solution $x = \mathbf{1}$) in fp32 and fp64. Plot relative error vs $\kappa$ on a log-log scale. Verify the predicted slope of 1.
(c) For $\kappa = 10^{6}$ in fp32: add a tiny perturbation to $b$. How much does the solution change? Compare to the bound $\kappa(A)\,\|\Delta b\|/\|b\|$.
Exercise 8 *** - Stable Algorithm Design
(a) The sample variance formula $s^2 = \frac{1}{n-1}\sum_i (x_i - \bar{x})^2$ can be expanded as $s^2 = \frac{1}{n-1}\left(\sum_i x_i^2 - n\bar{x}^2\right)$ (one-pass formula). Show that the one-pass formula suffers catastrophic cancellation for data with a large mean and small variance (e.g., $x_i \sim \mathcal{N}(10^{6}, 1)$) in fp32. Verify using Welford's online algorithm as the stable alternative.
(b) Implement Welford's online algorithm for computing running mean and variance.
(c) Compare naive, one-pass, and Welford for correctness on data with a large mean and small variance in fp32.
9. Why This Matters for AI (2026 Perspective)
| Concept | AI Impact |
|---|---|
| Machine epsilon / format choice | LLM training: bf16 is now the universal default (Llama-3, Gemma, Mistral) - understanding why requires knowing bf16's exponent range matches fp32 |
| Catastrophic cancellation | Naive attention score computation can cancel digits; FlashAttention's online softmax reordering avoids this |
| Loss scaling | Required for stable fp16 training; not needed with bf16; integral to torch.amp.GradScaler |
| Stable log-sum-exp | Every transformer's softmax in the forward pass uses this; also in all language modeling losses and perplexity calculations |
| Condition numbers | The Hessian condition number determines gradient descent convergence rate; Adam/Adafactor implicitly precondition to reduce the effective $\kappa$ |
| fp8 training | NVIDIA Hopper (H100) Transformer Engine; adopted in some recent large-scale LLM training runs; requires per-tensor scaling factors |
| Gradient underflow | Without scaling, fp16 gradient magnitudes below $\approx 6\times10^{-8}$ (the smallest fp16 subnormal) flush to zero - silent learning failure |
| Stochastic rounding | Enables effective fp4/fp8 training by preventing systematic rounding bias; used in Graphcore IPU and emerging hardware |
| Subnormal flushing | CUDA FTZ mode flushes subnormals to zero for speed; can cause unexpected behavior in extreme low-precision regimes |
| Kahan summation | PyTorch attention kernels accumulate in fp32 even in bf16 forward passes - the same error-control idea as compensated summation |
10. Conceptual Bridge
Backward: What This Builds On
Floating-point arithmetic is the computational manifestation of the real number system. The real numbers ($\mathbb{R}$) are dense, complete, and infinite - studied in Mathematical Foundations Section01. Floating-point numbers are a finite approximation: the map $x \mapsto \mathrm{fl}(x)$ introduces the rounding errors analyzed here.
The condition number of a linear system - introduced as $\kappa(A) = \|A\|\,\|A^{-1}\| = \sigma_{\max}/\sigma_{\min}$ - is defined using matrix norms from Linear Algebra Basics Section06 and singular values from Advanced Linear Algebra Section02. The interpretation requires the full SVD theory developed there.
Forward: What This Enables
Numerical Linear Algebra (Section02, this chapter): Every linear system solver, least-squares method, and eigenvalue algorithm is analyzed through the lens of backward error and condition numbers introduced here. The pivoting strategies in Gaussian elimination and the choice between QR and normal equations for least squares are direct applications of the stability concepts from Section4.
Numerical Optimization (Section03, this chapter): Gradient checking via finite differences relies on the finite-difference approximation error analysis - the step size $h$ must balance truncation error $O(h)$ against cancellation error $O(\varepsilon_{\text{mach}}/h)$, an optimization that requires understanding both. Mixed-precision training is the union of Section5 (format choices) and optimization algorithms.
All of Applied Mathematics: Every numerical computation in this curriculum - ODE solvers, PDE methods, Fourier transforms - is ultimately floating-point arithmetic analyzed by the tools developed here.
POSITION IN CURRICULUM
========================================================================
Section01-Mathematical-Foundations (real numbers, binary representation)
|
v
Section02-Linear-Algebra-Basics (matrix norms, condition numbers prelim.)
Section03-Advanced-Linear-Algebra (SVD -> kappa(A) = sigma_max/sigma_min)
|
v
Section10-Numerical-Methods
+-- Section01-Floating-Point-Arithmetic === YOU ARE HERE
|   |   eps_mach, kappa(A), stability
| v
+-- Section02-Numerical-Linear-Algebra (stable solvers, iterative methods)
+-- Section03-Numerical-Optimization (AD, line search, finite diffs)
+-- Section04-Interpolation (polynomial, spline, RBF)
+-- Section05-Numerical-Integration (quadrature, Monte Carlo)
|
v
Section08-Optimization (gradient descent, Adam - uses mixed-precision)
Section13-ML-Specific-Math (numerical stability of transformers)
========================================================================
The foundations laid in this section - machine epsilon, rounding error models, condition numbers, backward stability - are the vocabulary of all numerical computation. Every section that follows assumes fluency with these concepts.
Appendix A: IEEE 754 Bit-Level Encoding
A.1 Decoding a 32-bit Float by Hand
Every fp32 number is stored as three fields:
Bit 31 | Bits 30-23 | Bits 22-0
Sign S | Exponent E | Mantissa M
The value (for normal numbers) is:
$$x = (-1)^S \times \left(1 + \frac{M}{2^{23}}\right) \times 2^{E - 127}$$
Example: Decode the fp32 hex 0x3FC00000:
- Binary: 0 01111111 10000000000000000000000
- $S = 0$ (positive)
- $E = 01111111_2 = 127$; exponent $= E - 127 = 0$
- Mantissa $= 0.1_2 = 0.5$, so $1.M = 1.5$
- Value: $(-1)^0 \times 1.5 \times 2^0 = 1.5$ [ok]
Encoding 1.0: $S = 0$; $E = 127$; $M = 0$; hex 0x3F800000.
Encoding 0.1: $0.1 = 1.1001100110011\ldots_2 \times 2^{-4}$ (repeating binary). Stored exponent $E = 123$; mantissa rounded to 23 bits: 10011001100110011001101; the stored value is $\approx 0.100000001490116$.
A.2 Special Patterns
| Pattern | Meaning |
|---|---|
| $E = 0$, $M = 0$ | $\pm 0$ (zero) |
| $E = 0$, $M \ne 0$ | Subnormal: $(-1)^S \times (M/2^{23}) \times 2^{-126}$ |
| $E = 255$, $M = 0$ | $\pm\infty$ |
| $E = 255$, $M \ne 0$ | NaN (quiet or signaling) |
| $E = 255$, leading mantissa bit $= 1$ | Quiet NaN (qNaN) - most common |
| $E = 255$, leading mantissa bit $= 0$, $M \ne 0$ | Signaling NaN (sNaN) - triggers exception |
A.3 Python Bit Manipulation
import struct, numpy as np
def fp32_to_bits(x):
return struct.unpack('I', struct.pack('f', x))[0]
def bits_to_fp32(bits):
return struct.unpack('f', struct.pack('I', bits))[0]
def decode_fp32(x):
bits = fp32_to_bits(x)
S = (bits >> 31) & 1
E = (bits >> 23) & 0xFF
M = bits & 0x7FFFFF
if E == 0:
val = (-1)**S * 2**(-126) * M / 2**23 # subnormal
    elif E == 255:
        # all-ones exponent: +/- infinity when mantissa is zero, NaN otherwise
        val = float('nan') if M != 0 else (-1) ** S * float('inf')
else:
val = (-1)**S * 2**(E - 127) * (1 + M / 2**23)
return {'S': S, 'E': E, 'M': M, 'value': val}
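A quick consistency check of these helpers against the worked examples in A.1:

print(decode_fp32(1.5))              # {'S': 0, 'E': 127, 'M': 4194304, 'value': 1.5}
print(hex(fp32_to_bits(1.5)))        # 0x3fc00000
print(hex(fp32_to_bits(1.0)))        # 0x3f800000
print(decode_fp32(0.1)['value'])     # 0.10000000149011612 -- the nearest fp32 to 0.1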
Appendix B: Error Analysis Worked Examples
B.1 Quadratic Formula Cancellation
The quadratic $x^2 - 10^4 x + 1 = 0$ has roots:
$$x_{1,2} = \frac{10^4 \pm \sqrt{10^8 - 4}}{2}$$
In fp32, $\sqrt{10^8 - 4}$ rounds to exactly $10^4$. So:
- $x_1 = (10^4 + 10^4)/2 = 10^4$ - fine
- $x_2 = (10^4 - 10^4)/2 = 0$ - catastrophically wrong (true value: $\approx 1.00000001\times10^{-4}$)
Stable form: Use $x_1 x_2 = c/a = 1$, so $x_2 = 1/x_1 \approx 10^{-4}$.
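A sketch of the stable formula built on the Vieta relation (the function name is mine; printed roots are approximate):

import numpy as np

def quadratic_roots_stable(a, b, c, dtype=np.float32):
    """Roots of ax^2 + bx + c = 0, avoiding cancellation in -b +/- sqrt(disc)."""
    a, b, c = dtype(a), dtype(b), dtype(c)
    disc = np.sqrt(b * b - dtype(4) * a * c)
    # Compute the root that does NOT cancel, then recover the other from x1*x2 = c/a
    q = -(b + np.copysign(disc, b)) / dtype(2)
    return q / a, c / q

x1, x2 = quadratic_roots_stable(1.0, -1e4, 1.0)
print(x1, x2)   # ~9999.9999 and ~1.0000000e-04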
B.2 Dot Product Error Bound
For $x, y \in \mathbb{R}^n$, the computed dot product satisfies:
$$\left|\mathrm{fl}(x^\top y) - x^\top y\right| \le \gamma_n \sum_{i=1}^{n} |x_i y_i|, \qquad \gamma_n = \frac{n\,\varepsilon_{\text{mach}}}{1 - n\,\varepsilon_{\text{mach}}}$$
where $\gamma_n \approx n\,\varepsilon_{\text{mach}}$ when $n\,\varepsilon_{\text{mach}} \ll 1$.
For $n = 512$ in fp16 ($\varepsilon_{\text{mach}} \approx 10^{-3}$): $\gamma_n \approx 0.5$ - potential 50% error! This is why transformer attention uses fp32 accumulation inside bf16/fp16 matmuls.
B.3 Gram-Schmidt vs Householder
Modified Gram-Schmidt (MGS) is forward stable but not backward stable in the orthogonality sense: given the columns of $A$, the computed factors satisfy $\|A - \hat{Q}\hat{R}\| = O(\varepsilon_{\text{mach}})\,\|A\|$, but $\|\hat{Q}^\top\hat{Q} - I\| = O(\kappa(A)\,\varepsilon_{\text{mach}})$. For ill-conditioned $A$, the computed columns of $\hat{Q}$ are not orthogonal.
Householder QR is backward stable: $\hat{Q}\hat{R} = A + \Delta A$ with $\|\Delta A\| = O(\varepsilon_{\text{mach}})\,\|A\|$, and $\hat{Q}$ is orthogonal to machine precision. Always prefer Householder for numerical QR.
Appendix C: Floating-Point Arithmetic in PyTorch
C.1 Default Dtypes
import torch
torch.get_default_dtype() # float32 by default
torch.set_default_dtype(torch.float64) # change for scientific computing
# Check format properties:
torch.finfo(torch.float16) # eps, min, max, tiny
torch.finfo(torch.bfloat16)
torch.finfo(torch.float32)
torch.finfo(torch.float64)
C.2 Mixed-Precision with AMP
from torch.amp import autocast, GradScaler
scaler = GradScaler()
for batch in dataloader:
optimizer.zero_grad()
with autocast(device_type='cuda', dtype=torch.float16):
output = model(batch)
loss = criterion(output, target)
# Scales loss, calls backward, updates scaler
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
C.3 Numerically Stable Operations
# Stable log-softmax (use this, not log(softmax(x))):
torch.nn.functional.log_softmax(x, dim=-1)
# Stable cross-entropy (combines log-softmax + NLL):
torch.nn.functional.cross_entropy(logits, targets)
# Stable sigmoid cross-entropy:
torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)
# NEVER: binary_cross_entropy(sigmoid(logits), targets) <- unstable
# Numerically stable softplus:
torch.nn.functional.softplus(x) # log(1 + exp(x)), stable
C.4 Debugging NaN Issues
# Enable anomaly detection (slow, for debugging only):
torch.autograd.set_detect_anomaly(True)
# Check for NaN:
assert not torch.isnan(loss).any(), f"NaN loss at step {step}"
assert not torch.isinf(loss).any(), f"Inf loss at step {step}"
# Gradient NaN check:
for name, param in model.named_parameters():
if param.grad is not None:
if torch.isnan(param.grad).any():
print(f"NaN gradient in {name}")
Appendix D: Numerical Formats Reference Card
QUICK REFERENCE: FLOATING-POINT FORMATS FOR AI
========================================================================
Format Bits eps_mach Max Use case
----------------------------------------------------------------------
fp64 64 2.2e-16 1.8e+308 Scientific computing, reference
fp32 32 1.2e-07 3.4e+038 Training master weights, debug
bf16 16 7.8e-03 3.4e+038 LLM training (2023+ standard)
fp16 16 9.8e-04 6.5e+004 Older mixed-precision training
fp8 E4M3 8 1.2e-01 448 H100 forward pass
fp8 E5M2 8 2.5e-01 57344 H100 backward pass
int8 8 1/128 127 Inference quantization (weights)
int4 4 1/8 7 GGUF LLM inference
----------------------------------------------------------------------
Rule of thumb:
- Training large models: bf16 compute + fp32 master weights
- Inference large models: int8 or int4 weights, fp16 activations
- Scientific computing: fp64 throughout
- Edge inference: int4/int8 quantized (ONNX, TFLite, GGUF)
========================================================================
Appendix E: Proofs and Derivations
E.1 Proof: Kahan Summation Error Bound
Theorem: Kahan compensated summation computes $\hat{S} \approx S = \sum_{i=1}^{n} x_i$ with error:
$$|\hat{S} - S| \le \left(2\,\varepsilon_{\text{mach}} + O(n\,\varepsilon_{\text{mach}}^2)\right)\sum_{i=1}^{n} |x_i|$$
This is $O(\varepsilon_{\text{mach}})$ rather than $O(n\,\varepsilon_{\text{mach}})$ for naive summation.
Proof sketch: At each step, Kahan maintains a compensation $c$ that captures the rounding error from the previous step to full precision. Formally, after step $k$ the pair $(s, c)$ represents the running sum to roughly twice the working precision.
The accumulated error is $O(\varepsilon_{\text{mach}}^2)$ per step rather than $O(\varepsilon_{\text{mach}})$, so after $n$ steps the total is $2\,\varepsilon_{\text{mach}} + O(n\,\varepsilon_{\text{mach}}^2)$ (treating the $n\,\varepsilon_{\text{mach}}^2$ term as small).
E.2 Derivation: Stable Softmax
Claim: $\mathrm{softmax}(x - c\,\mathbf{1}) = \mathrm{softmax}(x)$ for any scalar $c$.
Proof:
$$\frac{e^{x_i - c}}{\sum_j e^{x_j - c}} = \frac{e^{x_i}\,e^{-c}}{e^{-c}\sum_j e^{x_j}} = \frac{e^{x_i}}{\sum_j e^{x_j}}$$
Setting $c = \max_j x_j$ ensures the largest exponent is $0$, bounding all terms $e^{x_i - c}$ in $(0, 1]$ - within the range of any floating-point format.
E.3 Wilkinson's Backward Error for GE with Partial Pivoting
Gaussian elimination with partial pivoting (GEPP) on $Ax = b$ computes $\hat{x}$ satisfying:
$$(A + \Delta A)\hat{x} = b, \qquad \|\Delta A\|_\infty \le c\,n^2\,\rho_n\,\varepsilon_{\text{mach}}\,\|A\|_\infty$$
where $c$ is a small constant and $\rho_n$ is the growth factor: the ratio of the largest element appearing during elimination to the largest element of the initial matrix.
Key point: With partial pivoting, $\rho_n \le 2^{n-1}$ in the worst case (Wilkinson, 1961) but is almost always modest (order 1-10) in practice. This is why GEPP is considered backward stable for practical purposes even though its worst-case bound is exponential.
E.4 Condition Number and Relative Error
Theorem: Let $A$ be invertible and $Ax = b$. If $(A + \Delta A)\hat{x} = b + \Delta b$ with $\|\Delta A\| \le \epsilon\,\|A\|$ and $\|\Delta b\| \le \epsilon\,\|b\|$, then:
$$\frac{\|\hat{x} - x\|}{\|x\|} \le \frac{2\,\epsilon\,\kappa(A)}{1 - \epsilon\,\kappa(A)}$$
When $\epsilon\,\kappa(A)$ is small (as for rounding errors, $\epsilon \approx \varepsilon_{\text{mach}}$), this simplifies to:
$$\frac{\|\hat{x} - x\|}{\|x\|} \lesssim 2\,\kappa(A)\,\epsilon$$
Proof: Subtract the two equations, use $x = A^{-1}b$, take norms, and apply the triangle inequality. The factor $1/(1 - \epsilon\,\kappa(A))$ appears from a Neumann-series bound on $(A + \Delta A)^{-1}$.
Appendix F: Numerical Experiments Reference
F.1 Key Numbers to Remember
| Quantity | fp16 | bf16 | fp32 | fp64 |
|---|---|---|---|---|
| Max finite value | $6.5\times10^{4}$ | $3.4\times10^{38}$ | $3.4\times10^{38}$ | $1.8\times10^{308}$ |
| Min normal | $6.1\times10^{-5}$ | $1.2\times10^{-38}$ | $1.2\times10^{-38}$ | $2.2\times10^{-308}$ |
| Decimal digits | ~3 | ~2 | ~7 | ~16 |
| Bits | 16 | 16 | 32 | 64 |
F.2 Common Condition Numbers in ML
| Matrix type | Typical $\kappa$ | Implication |
|---|---|---|
| Random Gaussian $n\times n$ | modest, roughly $O(n)$ | Well-conditioned; easy to solve |
| Gram matrix for correlated features | $10^{4}$ - $10^{8}$ | May need regularization |
| Hessian of neural loss at sharp minimum | $10^{3}$ - $10^{6}$ | Slow gradient descent; use Adam |
| Hessian at flat region (saddle) | effectively $\infty$ (near-zero curvature on some axes) | Ill-conditioned; Newton fails |
| Hilbert matrix ($10\times10$) | $\sim 10^{13}$ | Numerically singular in fp32 |
| Discrete Laplacian PDE | $O(1/h^2)$ for mesh size $h$ | Requires preconditioning |
F.3 Loss Scaling Reference Values
| Scale | When to use | Notes |
|---|---|---|
| $2^{16} = 65536$ | Default initial scale for fp16 | PyTorch GradScaler default |
| $< 2^{16}$ (halved on each overflow) | After repeated overflow reductions | Still keeps most gradients in fp16 range |
| $1$ (no scaling) | bf16 training | Not needed; bf16 has same range as fp32 |
| Dynamic | Best practice | GradScaler adjusts automatically |
Appendix G: Glossary
| Term | Definition |
|---|---|
| ULP | Unit in the Last Place: spacing between consecutive floats near a given value |
| Machine epsilon | Smallest $\varepsilon$ with $\mathrm{fl}(1 + \varepsilon) > 1$; equals $2^{-p}$ for a $p$-bit mantissa |
| Catastrophic cancellation | Loss of significant digits when subtracting nearly-equal numbers |
| Backward error | Size of input perturbation that makes computed output exact |
| Forward error | Distance from computed output to true output |
| Backward stable | Algorithm whose backward error is $O(\varepsilon_{\text{mach}})$ |
| Condition number | Worst-case ratio of relative output change to relative input change |
| Overflow | Result exceeds maximum representable value; becomes $\pm\infty$ |
| Underflow | Result is too small to represent normally; becomes subnormal or 0 |
| NaN | Not a Number: result of undefined operations ($0/0$, $\infty - \infty$, $\sqrt{-1}$) |
| Subnormal | Floating-point number smaller than the minimum normal; loses precision |
| Rounding mode | Rule for mapping exact values to the nearest representable float |
| Stochastic rounding | Random rounding, unbiased in expectation; used in low-precision training |
| Loss scaling | Multiplying the loss by a large factor before backprop to prevent gradient underflow |
| Mixed precision | Using different formats for different parts of training (e.g., fp16 compute + fp32 weights) |
| bf16 | Brain Float 16: 8-bit exponent (same range as fp32) + 7-bit mantissa |
| fp8 | 8-bit floating point; two variants: E4M3 (more precision) and E5M2 (more range) |
| Kahan summation | Compensated summation algorithm achieving $O(\varepsilon_{\text{mach}})$ error independent of $n$ |
| Pairwise summation | Binary-tree summation achieving $O(\varepsilon_{\text{mach}}\log n)$ error |
| FTZ | Flush to Zero: hardware mode that maps subnormals to 0 for speed |
| GradScaler | PyTorch class implementing dynamic loss scaling for fp16 training |
Appendix H: Connections to Other Sections
H.1 Floating-Point and Optimization
The choice of floating-point format directly impacts optimizer behavior:
Adam in fp16: Adam maintains a first moment $m_t$ and a second moment $v_t$ (moving averages of gradients and squared gradients). These accumulators grow slowly over training. In fp16, small updates to them can underflow. This is why Adam's optimizer states are almost always kept in fp32, even when gradients are computed in fp16.
Gradient clipping and numerical overflow: When gradients overflow to $\pm\infty$ in fp16, the global norm also becomes infinite. Clipping then multiplies the gradients by $c/\infty = 0$, and $\infty \times 0 = \text{NaN}$. PyTorch's clip_grad_norm_ guards against this by checking for non-finite norms before clipping.
Numerical second-order methods: Computing the Hessian via finite differences requires a step size $h$ balancing truncation error $O(h)$ against cancellation error $O(\varepsilon_{\text{mach}}/h)$. For fp32 forward differences, the optimum is around $h \approx \sqrt{\varepsilon_{\text{mach}}} \approx 3\times10^{-4}$.
H.2 Floating-Point and Neural Network Architecture
Layer normalization: LayerNorm computes $\dfrac{h - \mu}{\sqrt{\sigma^2 + \epsilon}}$, where the small constant $\epsilon$ (often $10^{-5}$ or $10^{-6}$) prevents division by zero when $\sigma^2 = 0$. This is a numerical safeguard, not a mathematical constant.
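A minimal sketch (the eps value shown matches PyTorch's nn.LayerNorm default of 1e-5; the function is a simplified stand-in, not the library implementation):

import numpy as np

def layer_norm(h, eps=1e-5):
    """Minimal LayerNorm: eps keeps the denominator away from zero."""
    mu  = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    return (h - mu) / np.sqrt(var + eps)

h_const = np.full((1, 8), 3.14, dtype=np.float32)   # all-constant hidden state: variance is 0
print(layer_norm(h_const))                           # zeros, not NaN, thanks to eps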
Softmax temperature scaling: Attention logits are divided by $\sqrt{d_k}$. Without the divisor, the dot-product logits grow as $\sqrt{d_k}$ (their standard deviation for unit-variance inputs) and softmax becomes peaky (near one-hot), causing vanishing gradients. The $1/\sqrt{d_k}$ scaling keeps logits in a numerically well-conditioned range.
Residual connections: $y = x + F(x)$ mitigates the gradient vanishing problem: $\dfrac{\partial y}{\partial x} = I + \dfrac{\partial F}{\partial x}$. The identity term ensures at least one gradient path with multiplication by 1 - no floating-point underflow across layers.
H.3 Floating-Point and Transformer Efficiency
FlashAttention: The standard attention computation requires materializing the $n\times n$ attention matrix. FlashAttention (Dao et al., 2022) computes it in chunks, maintaining online softmax statistics in fp32 while storing intermediate results in fp16/bf16. This avoids both overflow and numerical drift across the $n$ tokens of the sequence.
KV cache quantization: In inference, key and value matrices are cached across decoding steps. Quantizing this cache from fp16 to int8 (or int4) reduces memory by 2-4x. The quantization error adds noise to the attention scores - the rounding error model tells you this noise is bounded by half a quantization step per element, tolerable for typical sequence lengths.
Appendix I: Further Reading
Foundational Papers
-
Goldberg, D. (1991). What every computer scientist should know about floating-point arithmetic. ACM Computing Surveys, 23(1), 5-48. - The definitive reference; free online.
-
Higham, N.J. (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. - Chapter 2-3: rounding errors; Chapter 9: Gaussian elimination; definitive modern reference.
-
Wilkinson, J.H. (1963). Rounding Errors in Algebraic Processes. Prentice-Hall. - Original backward error analysis.
-
Kahan, W. (1965). Pracniques: Further remarks on reducing truncation errors. Communications of the ACM, 8(1), 40.
ML-Specific Papers
-
Micikevicius, P. et al. (2018). Mixed Precision Training. ICLR 2018. - The paper that established fp16 mixed-precision training as standard.
-
Kalamkar, D. et al. (2019). A Study of BFLOAT16 for Deep Learning Training. arXiv:1905.12322. - Why bf16 works better than fp16 for training.
-
Noune, B. et al. (2022). 8-bit Numerical Formats for Deep Neural Networks. arXiv:2206.02915. - fp8 training foundations.
-
Dao, T. et al. (2022). FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. NeurIPS 2022. - Numerically stable attention at scale.
Textbooks
-
Trefethen, L.N. & Bau, D. (1997). Numerical Linear Algebra. SIAM. - Excellent treatment of backward error; Chapter 12-15 on floating-point and stability.
-
Press, W.H. et al. (2007). Numerical Recipes: The Art of Scientific Computing (3rd ed.). Cambridge. - Practical algorithms with implementation details.
Appendix J: Extended Examples and Case Studies
J.1 Case Study: NaN Debugging in a Transformer
A common production scenario: you launch training, the loss drops for 100 steps, then suddenly loss = nan. Here is a systematic debugging procedure grounded in floating-point theory.
Step 1: Identify when NaN first appears.
for step, batch in enumerate(dataloader):
with torch.autocast('cuda', torch.bfloat16):
logits = model(batch['input_ids'])
loss = criterion(logits, batch['labels'])
if torch.isnan(loss):
print(f"NaN loss at step {step}")
break
Step 2: Check for NaN in individual components. Common sources in order of frequency:
- log(0): the model outputs exactly 0 probability for the target class
- 0/0: LayerNorm with $\sigma^2 = 0$ (all-constant hidden states)
- Overflow then $\infty - \infty$: attention logits too large (missing $1/\sqrt{d_k}$ scaling)
- Exploding gradients: accumulated over many steps without clipping
Step 3: Numerical fixes.
- Replace log(softmax(x)[y]) with F.cross_entropy(x, y) (stable)
- Add eps=1e-5 to LayerNorm (the PyTorch default)
- Add gradient clipping: torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
- Switch from fp16 to bf16 (eliminates overflow entirely)
J.2 Case Study: Ill-Conditioned Normal Equations
Linear regression via normal equations: $\hat{\beta} = (X^\top X)^{-1} X^\top y$.
The condition number of $X^\top X$ is $\kappa(X)^2$. If $X$ has condition number $10^{4}$ (common for correlated features), then $\kappa(X^\top X) = 10^{8}$ - near the threshold of fp32 accuracy.
Safer alternative: Use the QR decomposition $X = QR$; then $\hat{\beta} = R^{-1}Q^\top y$. The effective condition number equals $\kappa(X)$, not $\kappa(X)^2$ - half the digits lost.
Safest: Use scipy.linalg.lstsq which internally uses an SVD-based pseudo-inverse with automatic rank determination; singular values below rcond * sigma_max are set to zero.
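A sketch comparing the two routes on nearly collinear features (np.linalg.lstsq plays the role of the SVD-based solver here; the column-correlation construction and seed are arbitrary choices for illustration, so the exact errors will vary):

import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
base = rng.standard_normal((n, 1))
X = base + 1e-4 * rng.standard_normal((n, p))     # highly correlated columns
beta_true = np.arange(1, p + 1, dtype=np.float64)
y = X @ beta_true

beta_ne = np.linalg.solve(X.T @ X, X.T @ y)        # normal equations: sees cond(X)^2
beta_svd, *_ = np.linalg.lstsq(X, y, rcond=None)   # SVD-based: sees cond(X)

print(f"cond(X)            = {np.linalg.cond(X):.2e}")
print(f"normal eq rel err  = {np.linalg.norm(beta_ne - beta_true) / np.linalg.norm(beta_true):.2e}")
print(f"lstsq (SVD) rel err= {np.linalg.norm(beta_svd - beta_true) / np.linalg.norm(beta_true):.2e}")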
J.3 Case Study: fp16 vs bf16 in Attention
In a transformer with $d_{\text{model}} = 4096$ and $d_k = 128$ (32 heads):
Attention logits (before softmax): $q^\top k / \sqrt{d_k}$. With roughly unit-variance activations, the raw dot product has standard deviation $\approx \sqrt{d_k} \approx 11$ - fine. With random initialization, raw dot products can reach the hundreds before training stabilizes.
fp16 concern: exponentiating unscaled logits of that size overflows ($e^{x}$ exceeds fp16's max of 65504 once $x \gtrsim 11$) - severe overflow. With the $1/\sqrt{d_k}$ factor the logits stay of order 1-10 - safely within range. Gradients of the attention weights pass through softmax's Jacobian, whose entries are bounded by 1 in magnitude - fine for fp16.
bf16: Identical analysis; bf16's wider exponent range means even moderate overflow during early training is handled gracefully, whereas fp16 needs careful initialization.
J.4 Numerical Stability of Common Activation Functions
| Activation | Naive formula | Stable implementation | Issue |
|---|---|---|---|
| Sigmoid | $\sigma(x) = 1/(1 + e^{-x})$ | $1/(1 + e^{-x})$ for $x \ge 0$; $e^{x}/(1 + e^{x})$ for $x < 0$ | Overflow of $e^{-x}$ for large negative $x$ |
| Softplus | $\log(1 + e^{x})$ | $\max(x, 0) + \log(1 + e^{-\lvert x\rvert})$; or return $x$ for large $x$ | Overflow of $e^{x}$ for large $x$ |
| GELU | $\tfrac{x}{2}\left(1 + \mathrm{erf}(x/\sqrt{2})\right)$ (error function) | Use erf directly or the tanh polynomial approximation | erf computation accuracy |
| Swish | $x\,\sigma(x)$ | Standard (with a stable sigmoid); no overflow for normal $x$ | None in typical range |
| log-softmax | $\log(\mathrm{softmax}(x))$ | $x - \mathrm{logsumexp}(x)$ | Overflow in naive softmax |
J.5 Gradient Checkpointing and Numerical Precision
Gradient checkpointing (Chen et al., 2016) saves memory by not storing all activations during the forward pass; it recomputes them during backward. Numerical concern: The recomputed activations may differ slightly from the original computation due to floating-point non-associativity (different thread ordering on GPU).
For deterministic behavior with gradient checkpointing, use:
torch.use_deterministic_algorithms(True)
torch.backends.cuda.matmul.allow_tf32 = False # Disable TF32 (approximate matmul)
TF32 (TensorFloat-32) is NVIDIA's format that keeps fp32's sign and 8-bit exponent but only 10 mantissa bits for the matmul inputs (accumulation stays in fp32) - faster but less precise. It was enabled by default for matmuls in earlier PyTorch releases (1.7-1.11); newer releases require opting in. For reproducible science, disable it; for production training, enabling it is usually fine.
Appendix K: Self-Assessment Questions
- What is the machine epsilon of fp32? Of bf16? Of fp16?
- Why is bf16 preferred over fp16 for training large language models?
- What is catastrophic cancellation? Give an example involving subtraction.
- State the fundamental rounding error model. What does represent?
- What is the backward error of an algorithm? Why is backward stability more useful than forward stability?
- If a matrix has condition number $\kappa(A) = 10^{6}$ and you solve $Ax = b$ in fp32 ($\varepsilon_{\text{mach}} \approx 10^{-7}$), what is the expected relative error in the solution?
- Describe the Kahan summation algorithm. What is its error guarantee?
- Why does log(softmax(x)) fail for large inputs? How is log_softmax implemented stably?
- What is loss scaling and when is it needed? When is it not needed?
- Define $\kappa(A)$ in terms of singular values. What does a large $\kappa(A)$ mean geometrically?
- What is the difference between overflow and underflow? Which causes NaN?
- Why does FlashAttention use fp32 for softmax accumulation even in bf16 mode?
Appendix L: Numerical Analysis in PyTorch - Implementation Patterns
L.1 Computing Machine Epsilon Empirically
The standard algorithm to find machine epsilon without library calls:
import numpy as np

def find_machine_epsilon(dtype):
    """Find machine epsilon for a given dtype by repeated halving."""
    eps = dtype(1.0)
    while dtype(1.0) + eps / dtype(2.0) > dtype(1.0):
        eps = eps / dtype(2.0)
    return eps
for dt in [np.float16, np.float32, np.float64]:
emp_eps = find_machine_epsilon(dt)
lib_eps = np.finfo(dt).eps
print(f"{dt.__name__:12s}: empirical={emp_eps:.3e}, library={lib_eps:.3e}")
This gives:
- float16: $\approx 9.77\times10^{-4}$
- float32: $\approx 1.19\times10^{-7}$
- float64: $\approx 2.22\times10^{-16}$
L.2 Visualizing the Floating-Point Number Line
import numpy as np
# All positive fp16 values in [1, 2): 2^10 mantissa patterns, spacing 2^-10
# fp16: bias=15, so exponent bits = 01111 = 15, mantissa = 10 bits
mantissas = np.arange(0, 2**10)
values = 1.0 + mantissas / 2**10          # fp16-representable values in [1, 2)
spacings = np.diff(values)
print(f"[1, 2): {len(values)} values, spacing = {spacings[0]:.6f}")

# Density of fp16 values in neighbouring power-of-two ranges
ranges = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]
for lo, hi in ranges:
    count = 2**10                          # same mantissa count per exponent
    spacing = (hi - lo) / count
    print(f"[{lo}, {hi}): {count} values, spacing = {spacing:.6f}")
Output shows: every power-of-2 interval contains exactly 1024 fp16 values (for normals), but the spacing doubles with each interval - geometric, not arithmetic, distribution.
L.3 Detecting and Fixing Common Numerical Issues
import torch

class NumericalGuard:
"""Context manager for detecting numerical issues during training."""
def __init__(self, model, check_inputs=True, check_outputs=True):
self.model = model
self.hooks = []
self.check_inputs = check_inputs
self.check_outputs = check_outputs
def __enter__(self):
def hook(module, input, output):
name = type(module).__name__
if self.check_inputs:
for i, x in enumerate(input):
if isinstance(x, torch.Tensor):
if torch.isnan(x).any():
raise ValueError(f"NaN in {name} input {i}")
if torch.isinf(x).any():
raise ValueError(f"Inf in {name} input {i}")
if self.check_outputs:
if isinstance(output, torch.Tensor):
if torch.isnan(output).any():
raise ValueError(f"NaN in {name} output")
for module in self.model.modules():
self.hooks.append(module.register_forward_hook(hook))
return self
def __exit__(self, *args):
for hook in self.hooks:
hook.remove()
L.4 Comparing Summation Methods
def naive_sum(arr):
"""Standard left-to-right accumulation."""
s = type(arr[0])(0)
for x in arr:
s += x
return s
def kahan_sum(arr):
"""Kahan compensated summation."""
s = type(arr[0])(0)
c = type(arr[0])(0)
for x in arr:
y = x - c
t = s + y
c = (t - s) - y
s = t
return s
def pairwise_sum(arr):
"""Recursive pairwise (binary tree) summation."""
n = len(arr)
if n == 1:
return arr[0]
mid = n // 2
return pairwise_sum(arr[:mid]) + pairwise_sum(arr[mid:])
# Test: sum 1 million copies of 1/3 in fp32
n = 1_000_000
x = np.full(n, 1/3, dtype=np.float32)
true_sum = float(np.float32(1/3)) * n  # exact (float64) value of summing the stored fp32 constant
for name, fn in [('Naive', naive_sum), ('Kahan', kahan_sum), ('Pairwise', pairwise_sum)]:
result = fn(list(x))
error = abs(float(result) - true_sum)
rel_error = error / abs(true_sum)
print(f"{name:8s}: result={result:.10f}, rel_error={rel_error:.2e}")
L.5 Stable Implementations of Common Functions
def log_sum_exp(x):
"""Numerically stable log(sum(exp(x))) for array x."""
m = np.max(x)
return m + np.log(np.sum(np.exp(x - m)))
def softmax(x):
"""Numerically stable softmax."""
x_shifted = x - np.max(x)
e = np.exp(x_shifted)
return e / np.sum(e)
def log_softmax(x):
"""Numerically stable log-softmax."""
return x - log_sum_exp(x)
def cross_entropy(logits, target):
"""Numerically stable cross-entropy: -log(softmax(logits)[target])."""
return -log_softmax(logits)[target]
def sigmoid(x):
    """Numerically stable sigmoid avoiding overflow for large |x|."""
    # np.where evaluates both branches, so exponentiate only -|x| (always <= 0);
    # each branch then uses the formula that is accurate for its sign of x.
    z = np.exp(-np.abs(x))
    return np.where(x >= 0,
                    1 / (1 + z),      # = 1 / (1 + exp(-x))     for x >= 0
                    z / (1 + z))      # = exp(x) / (1 + exp(x)) for x <  0

def softplus(x):
    """Numerically stable softplus: log(1 + exp(x))."""
    # max(x, 0) + log1p(exp(-|x|)) equals log(1 + exp(x)) but never
    # exponentiates a large positive argument; log1p is accurate near 0.
    return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))
L.6 Loss Scaling Implementation
class SimpleLossScaler:
"""Minimal loss scaler for fp16 training."""
def __init__(self, init_scale=2**15, scale_factor=2.0, scale_window=2000):
self.scale = init_scale
self.scale_factor = scale_factor
self.scale_window = scale_window
        self._successful_steps = 0
def scale_loss(self, loss):
return loss * self.scale
def step(self, optimizer, params):
"""Unscale gradients and step, or skip if overflow detected."""
# Check for overflow
overflow = any(
torch.isnan(p.grad).any() or torch.isinf(p.grad).any()
for p in params if p.grad is not None
)
if overflow:
# Reduce scale and skip step
self.scale /= self.scale_factor
print(f"Overflow detected! Scale reduced to {self.scale}")
else:
# Unscale gradients
for p in params:
if p.grad is not None:
p.grad /= self.scale
optimizer.step()
self._successful_steps += 1
# Increase scale every scale_window successful steps
if self._successful_steps % self.scale_window == 0:
self.scale *= self.scale_factor
print(f"Scale increased to {self.scale}")
Appendix M: Historical Perspective - The Road to IEEE 754
M.1 Before IEEE 754 (Pre-1985)
Before the IEEE 754 standard, every computer manufacturer used its own floating-point format. IBM mainframes used base-16 (hexadecimal) floating point. DEC VAX used its own 32-bit format. CDC 6600 used 60-bit words with 48-bit mantissa. Programs written for one machine could not be reliably ported to another - the same computation produced different results depending on the hardware.
The chaos had real consequences: scientific simulations gave different answers on different machines; numerical algorithms required hand-tuning for each platform; software bugs were indistinguishable from floating-point differences. The 1970s saw a growing recognition that a universal standard was needed.
M.2 The IEEE 754 Committee (1977-1985)
The IEEE 754 working group - whose draft was based largely on a proposal by William Kahan, Jerome Coonen, and Harold Stone - spent eight years developing the standard. Kahan - who would win the 1989 Turing Award partly for this work - insisted on several features that were controversial but proved crucial:
- Round-to-nearest-even as default: Reduces systematic bias in long computations
- Gradual underflow (subnormals): Prevents sudden loss of precision near zero
- Signed zeros: Enable correct branch-cut behavior in complex analysis (e.g. $1/(+0) = +\infty$ while $1/(-0) = -\infty$)
- NaN as a value (not an exception): Allows computation to continue past undefined operations
The Intel 8087 coprocessor (1980) was the first commercial implementation, years before the standard was formally published in 1985. Its success validated the design.
M.3 IEEE 754-2008 and Beyond
The 2008 revision added:
- fp16 (half precision): Initially for graphics/ML; the 5-bit exponent limits the range to $\pm 65504$
- Decimal floating-point: For financial computations where decimal fractions such as 0.1 must be represented exactly
- Fused multiply-add (FMA): $a \times b + c$ computed with only one rounding error, not two
FMA is particularly important for ML: matrix multiplication decomposes into FMA operations, and modern GPU tensor cores implement FMA in mixed precision (fp16/bf16 multiply + fp32 accumulate) for maximum throughput with minimum precision loss.
M.4 The ML Formats (2017-2024)
The ML revolution created demand for formats the IEEE committee never anticipated:
bf16 (2018): Google Brain developed Brain Float 16 for TPUs. The key insight: training stability requires dynamic range (exponent bits), not precision (mantissa bits). Keep all 8 exponent bits of fp32, sacrifice 16 mantissa bits. The result trains as stably as fp32 at half the cost.
tf32 (2020): NVIDIA's TensorFloat-32 uses fp32's 8-bit exponent and 10-bit mantissa (same as fp16) for matmul accumulation, giving 3x speedup with minimal accuracy loss. Enabled by default in PyTorch on Ampere and newer GPUs.
fp8 E4M3 / E5M2 (2022): For H100 training at extreme throughput. Requires per-tensor scaling factors and careful format selection per layer (E4M3 for weights/activations, E5M2 for gradients).
fp4 / int4 (experimental): Used for inference quantization (GGUF format, llama.cpp). Weights stored as 4-bit integers with a shared scale factor per block. 4-bit reduces model size by 8x vs fp32, enabling 70B-parameter models on a single consumer GPU.
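As a concrete illustration, here is a toy NumPy sketch of symmetric per-block 4-bit quantization. The block size of 32 and the fp16 scale dtype are illustrative choices, not the actual GGUF layout:

```python
import numpy as np

def quantize_int4_blockwise(w, block_size=32):
    """Toy symmetric 4-bit block quantization: each block shares one scale,
    values are rounded to integers in [-8, 7] (stored here in an int8 array)."""
    w = w.reshape(-1, block_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0 + 1e-12   # per-block scale
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize(q, scale):
    return (q.astype(np.float32) * scale.astype(np.float32)).ravel()

w = np.random.default_rng(0).normal(size=4096).astype(np.float32)
q, scale = quantize_int4_blockwise(w)
w_hat = dequantize(q, scale)
print(f"max abs reconstruction error: {np.abs(w - w_hat).max():.4f}")
print(f"storage: 4 bits/weight + one fp16 scale per {q.shape[1]} weights")
```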
Appendix N: Quick Reference and Formulas
N.1 Key Inequalities
- Rounding model: $\text{fl}(x \circ y) = (x \circ y)(1 + \delta)$ with $|\delta| \le \varepsilon_{\text{mach}}$, for $\circ \in \{+, -, \times, \div\}$
- Naive summation of $n$ terms: $|\hat{S} - S| \le (n - 1)\,\varepsilon_{\text{mach}} \sum_i |x_i| + O(\varepsilon_{\text{mach}}^2)$
- Kahan summation: $|\hat{S} - S| \le 2\,\varepsilon_{\text{mach}} \sum_i |x_i| + O(n\,\varepsilon_{\text{mach}}^2)$
- Solving $Ax = b$ with a backward-stable algorithm: $\dfrac{\|\hat{x} - x\|}{\|x\|} \lesssim \kappa(A)\,\varepsilon_{\text{mach}}$
N.2 Algorithms Reference
Stable softmax: $\text{softmax}(x)_i = \dfrac{e^{x_i - m}}{\sum_j e^{x_j - m}}$ with $m = \max_j x_j$
Stable log-sum-exp: $\text{LSE}(x) = m + \log \sum_j e^{x_j - m}$ with $m = \max_j x_j$
Kahan step: $y = x_i - c,\quad t = s + y,\quad c = (t - s) - y,\quad s = t$
Optimal finite-difference step: $h^* \approx \sqrt{\varepsilon_{\text{mach}}}\,\max(1, |x|)$ (forward differences)
Condition number bounds: $\kappa(A) = \|A\|\,\|A^{-1}\| = \sigma_{\max}/\sigma_{\min}$, and for $Ax = b$: $\dfrac{\|\hat{x} - x\|}{\|x\|} \lesssim \kappa(A)\,\varepsilon_{\text{mach}}$
N.3 Checklist: Numerically Stable Implementation
Before deploying any numerical computation, verify:
- All `log()` calls are protected: `log(x + eps)`, or `log_softmax()` instead of `log(softmax())`
- All divisions are protected: `x / (y + eps)` for potentially-zero denominators
- Softmax is computed with max-subtraction (`stable_softmax`), not naively
- Cross-entropy uses `F.cross_entropy(logits, labels)`, not `F.nll_loss(F.log_softmax(logits), labels)` (equivalent but one extra call)
- Float comparisons use `torch.isclose` with appropriate `atol` and `rtol`, not `==`
- Accumulation of small differences is done in higher precision (`float64` or Kahan)
- Gradient clipping is enabled: `clip_grad_norm_(params, max_norm=1.0)`
- NaN detection: `torch.isnan(loss)` checked at training start
- Matrix condition number checked before solving: `np.linalg.cond(A) < 1/eps`
- bf16 used instead of fp16 for new training runs
Appendix O: Worked Problems with Full Solutions
O.1 Problem: Compute $e^x - 1$ for Small $x$
Problem: Compute $e^x - 1$ accurately for small $x$ (say $x \approx 10^{-9}$) in fp32.
Naive approach: fp32 carries only about 7 decimal digits. For $x \approx 10^{-9}$, $e^x = 1.000000001\ldots$, but in fp32 $\text{fl}(e^x) = 1.0$ exactly, since $10^{-9}$ is below the fp32 spacing at magnitude 1 ($\approx 1.2 \times 10^{-7}$). The subtraction then returns $0$. Relative error: 100%.
Stable approach: Use the series $e^x - 1 = x + \frac{x^2}{2!} + \cdots$ for small $x$, implemented as numpy.expm1(x) (demonstrated in the snippet below).
The expm1 function avoids the catastrophic cancellation - it computes $e^x - 1$ directly without subtracting nearly equal numbers. Result: $\approx 10^{-9}$, correct to machine precision.
Rule: Use np.expm1(x) instead of np.exp(x) - 1 whenever $|x| \ll 1$. Similarly, use np.log1p(x) instead of np.log(1 + x) for small $x$.
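A minimal fp32 demonstration (the value $x = 10^{-9}$ is just an illustrative choice):

```python
import numpy as np

x = np.float32(1e-9)                    # illustrative small value, below fp32's ULP at 1.0
naive  = np.exp(x) - np.float32(1.0)    # exp(x) rounds to exactly 1.0 in fp32 -> result 0.0
stable = np.expm1(x)                    # computes e**x - 1 directly, no cancellation
print(f"naive : {naive!r}")             # 0.0      (100% relative error)
print(f"stable: {stable!r}")            # ~1e-09   (correct to fp32 precision)
```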
O.2 Problem: Running Mean and Variance
Problem: Compute the mean and variance of a stream of numbers without storing all values.
Naive: Store all values and compute $\bar{x} = \frac{1}{n}\sum_i x_i$, then $s^2 = \frac{1}{n-1}\sum_i (x_i - \bar{x})^2$. Requires $O(n)$ memory.
One-pass (unstable): $s^2 = \frac{1}{n-1}\left(\sum_i x_i^2 - n\bar{x}^2\right)$. Catastrophically unstable when the mean is large relative to the standard deviation, because the two sums nearly cancel.
Welford's online algorithm (stable):
Initialize: M_1 = x_1, S_1 = 0, n = 1
For k = 2, 3, ...:
n += 1
delta = x_k - M_{k-1}
M_k = M_{k-1} + delta / n
delta2 = x_k - M_k
S_k = S_{k-1} + delta * delta2
Variance = S_n / (n - 1) (sample variance)
This is used in: PyTorch BatchNorm (online mean/variance), Adam optimizer (running mean of gradients), gradient clipping (running norm estimation).
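A direct Python/NumPy transcription of the update above (a minimal sketch; the large mean in the test data is chosen to make the one-pass formula fail while Welford stays accurate):

```python
import numpy as np

def welford(stream):
    """One-pass, numerically stable running mean and sample variance."""
    n, mean, s = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n           # M_k = M_{k-1} + delta / n
        s += delta * (x - mean)     # S_k = S_{k-1} + delta * delta2
    return mean, s / (n - 1)

data = np.random.default_rng(0).normal(loc=1e6, scale=1.0, size=100_000)
mean, var = welford(data)
print(f"Welford: mean={mean:.3f}, var={var:.6f}")          # var should be close to 1
print(f"NumPy  : mean={data.mean():.3f}, var={data.var(ddof=1):.6f}")
```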
O.3 Problem: Matrix-Vector Product Error Bound
Problem: Bound the error in $y = Ax$ computed in fp32, where $A \in \mathbb{R}^{m \times n}$ and $x \in \mathbb{R}^n$.
Each entry of the product is a dot product: $y_i = \sum_{j=1}^{n} a_{ij} x_j$.
Each such dot product of $n$ terms has forward error at most $\gamma_n \sum_j |a_{ij}|\,|x_j|$, where $\gamma_n = \dfrac{n\,\varepsilon_{\text{mach}}}{1 - n\,\varepsilon_{\text{mach}}} \approx n\,\varepsilon_{\text{mach}}$.
In matrix form:
$|\hat{y} - y| \le \gamma_n\,|A|\,|x|$, where $|\cdot|$ denotes the entry-wise absolute value.
For a GPU matmul using FMA with fp32 accumulation: the rounding errors partially cancel, so the typical error grows like $\sqrt{n}\,\varepsilon_{\text{mach}}$ rather than $n\,\varepsilon_{\text{mach}}$, but the worst-case bound above still applies.
Implication for attention: The attention score $q^\top k$ is a dot product of length $d$. In fp16 ($\varepsilon_{\text{mach}} \approx 10^{-3}$) without fp32 accumulation, the relative error can reach $\approx d\,\varepsilon_{\text{mach}}$. For $d = 128$: $128 \times 10^{-3} = 12.8\%$ relative error - unacceptably large. With fp32 accumulation (as in FlashAttention): $\approx 128 \times 10^{-7} \approx 10^{-5}$ - perfectly fine.
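A small NumPy check of the accumulation effect (the head dimension $d = 128$, the uniform inputs, and the seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128
q = rng.uniform(0, 1, size=d).astype(np.float16)
k = rng.uniform(0, 1, size=d).astype(np.float16)

# Reference: same fp16 inputs, summed exactly in float64
exact = np.dot(q.astype(np.float64), k.astype(np.float64))

# fp16 accumulation: every partial sum is rounded back to fp16
acc16 = np.float16(0.0)
for a, b in zip(q, k):
    acc16 = np.float16(acc16 + a * b)

# fp32 accumulation of the same fp16 inputs
acc32 = np.dot(q.astype(np.float32), k.astype(np.float32))

print(f"fp16 accumulation: rel err = {abs(float(acc16) - exact) / exact:.2e}")
print(f"fp32 accumulation: rel err = {abs(float(acc32) - exact) / exact:.2e}")
```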
O.4 Problem: Condition Number of the Attention Matrix
Problem: What is the condition number of the softmax output ?
The Jacobian of softmax is $J = \operatorname{diag}(p) - p\,p^\top$, with entries $J_{ii} = p_i(1 - p_i)$ and $J_{ij} = -p_i p_j$ for $i \ne j$. This matrix is singular ($J\mathbf{1} = 0$, so its rank is at most $n - 1$), so in the strictest sense the softmax map has infinite condition number (it maps $\mathbb{R}^n$ onto the probability simplex, a lower-dimensional manifold).
Practical question: How sensitive are the softmax outputs to perturbations in the logits? The answer depends on the sharpness of the softmax. A "peaky" distribution (one $p_i \approx 1$) has Jacobian entries near 0 - small gradient flow, effectively zero sensitivity. A "flat" distribution (all $p_i \approx 1/n$) has maximum sensitivity: the Frobenius norm of $J$ is maximized.
For training: Sharp attention (peaky softmax) causes vanishing gradients through the attention weights. This is mitigated by: (1) temperature scaling of the logits (dividing by $\sqrt{d_k}$ or a temperature $\tau > 1$); (2) attention dropout (randomly zeroing some weights, softening the distribution); (3) label smoothing (softening the target distribution, preventing overconfident predictions).
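A quick numeric illustration of the sharpness effect (the distribution size $n = 8$ and the $10^{-6}$ peak value are arbitrary choices):

```python
import numpy as np

def softmax_jacobian(p):
    """Jacobian of softmax evaluated at output probabilities p: diag(p) - p p^T."""
    return np.diag(p) - np.outer(p, p)

n = 8
flat  = np.full(n, 1.0 / n)                                     # maximum-entropy distribution
peaky = np.array([1 - 1e-6] + [1e-6 / (n - 1)] * (n - 1))       # nearly one-hot

for name, p in [("flat", flat), ("peaky", peaky)]:
    J = softmax_jacobian(p)
    print(f"{name:5s}: ||J||_F = {np.linalg.norm(J):.6f}")
# The flat distribution has the larger Jacobian norm => more sensitivity / more gradient flow.
```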
Appendix P: Summary Statistics of Chapter
CHAPTER Section01 SUMMARY
========================================================================
Core concepts: 10 (IEEE 754, ε_mach, rounding, cancellation,
Kahan, condition number, stability, formats,
mixed precision, stable implementations)
Key formulas: 12 (rounding model, Kahan step, stable softmax,
logsumexp, κ(A), error bounds, ...)
Numerical formats: 8 (fp64, fp32, bf16, fp16, fp8 E4M3, fp8 E5M2,
int8, int4)
AI connections: 15 (loss scaling, NaN debugging, FlashAttention,
TF32, bf16 training, gradient clipping, ...)
Exercises: 8 (* to ***), covering all major topics
Appendices: 16 (A through P)
Notes length: ~1700 lines
Theory cells: 50+
Exercises: 10 graded problems (24 cells)
========================================================================
Appendix Q: Deep Dive - IEEE 754 Special Value Arithmetic
Understanding how special values propagate is essential for debugging AI training runs.
Q.1 NaN Propagation Rules
NaN (Not a Number) is sticky: any arithmetic operation with a NaN produces a NaN.
NaN + x = NaN for any x (including NaN)
NaN * 0 = NaN (not 0, despite "anything times zero is zero")
NaN < x = False for any x
NaN == NaN = False (IEEE mandates this - NaN is not equal to itself)
The self-inequality of NaN is the canonical way to detect it:
def is_nan(x):
return x != x # True only when x is NaN
In PyTorch: torch.isnan(x) or x != x.
NaN in gradient computation: If any parameter gradient is NaN, the entire parameter update step is corrupted. A single NaN appearing in one layer's output or gradient propagates backward through the chain rule and corrupts the gradients of all earlier layers.
Debugging strategy for NaN gradients:
- Register forward hooks to check activations for NaN after each layer
- Register backward hooks to check gradients for NaN
- Binary search over layers to find the first occurrence
- Check for division by zero in custom loss functions (log(0), 0/0)
- Check for overflow in exponentials (exp(large_number))
Q.2 Infinity Arithmetic
Infinity follows extended real arithmetic:
+∞ + (+∞) = +∞
+∞ + (-∞) = NaN   (indeterminate form)
+∞ * 0     = NaN   (indeterminate form)
+∞ * (+∞) = +∞
1 / 0      = +∞    (for positive numerator)
-1 / 0     = -∞
0 / 0      = NaN
x / +∞     = 0     (for finite x)
Loss spike analysis: When a loss value becomes inf, trace backward:
- `log(0)` - zero probability assigned to the correct class
- `exp(x)` for large `x` before normalization (use log-sum-exp)
- Division by a very small denominator (batch norm with near-zero variance)
- Accumulated gradients that overflow the fp16 range before gradient clipping
Q.3 Signed Zero
IEEE 754 has both +0 and -0. They compare equal (+0 == -0 is True) but differ in division:
1 / (+0) = +∞
1 / (-0) = -∞
Signed zero matters in:
- Complex number arithmetic: branch cuts depend on sign of zero imaginary part
- Sorting algorithms: `sort([+0, -0])` may or may not preserve order
- Gradient of ReLU at zero: the subgradient convention can produce `+0` or `-0`
Q.4 Subnormal Performance Impact
On most hardware, operations involving subnormal numbers (also called denormals) execute 10-100x slower than normal operations, because they require software emulation or special hardware paths.
In training:
- Very small weight values gradually entering subnormal range -> sudden throughput drop
- Gradient values approaching zero -> subnormal gradients -> training slows
- Fix: Set the flush-to-zero (FTZ) flag, which replaces subnormals with zero
- PyTorch: enabled by default in CUDA
- NumPy: `np.seterr(under='ignore')` plus a platform-level FTZ setting
- Downside: FTZ introduces larger underflow errors but avoids the performance cliff
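A small experiment to look for the effect (a sketch only - whether the slowdown appears depends on the CPU and on whether the SIMD path already runs with FTZ/DAZ enabled):

```python
import time
import numpy as np

def best_time(scale, n=5_000_000, reps=5):
    """Best-of-reps time for an elementwise multiply at a given magnitude."""
    x = np.full(n, scale, dtype=np.float32)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        _ = x * np.float32(0.5)              # result stays at the same magnitude
        best = min(best, time.perf_counter() - t0)
    return best

t_normal    = best_time(np.float32(1e-3))    # comfortably inside the normal range
t_subnormal = best_time(np.float32(1e-40))   # below the fp32 min normal (~1.2e-38)
print(f"normal:    {t_normal * 1e3:.2f} ms")
print(f"subnormal: {t_subnormal * 1e3:.2f} ms  (ratio {t_subnormal / t_normal:.1f}x)")
```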
Appendix R: Floating-Point Error Case Studies from AI Research
Case Study 1: The fp16 Loss Explosion Problem (2017-2018)
When researchers first attempted to train large language models in fp16, they observed frequent loss explosions. Investigation revealed:
- Gradient magnitudes of ~10^-4 to 10^-3 during stable training, with a long tail of much smaller values
- fp16 minimum positive normal: ~6.1 x 10^-5
- Many gradients underflowed to zero -> effectively frozen parameters
- Sudden instability from accumulated representation errors in weights
Solution (Micikevicius et al., 2017): Mixed-precision training with loss scaling:
- Maintain fp32 master copy of all weights
- Cast to fp16 for forward and backward pass (4x speedup)
- Scale the loss by a factor S (typically 2^8 to 2^15) before backward
- Check for overflow (any gradient is Inf or NaN)
- If no overflow: unscale, apply gradient clipping, update fp32 weights
- If overflow: skip the step and halve S
This framework is now standard in torch.cuda.amp.
Case Study 2: Attention Score Overflow in Early Transformers
In the original Transformer (Vaswani et al., 2017), attention scores are $\text{softmax}\!\left(\dfrac{QK^\top}{\sqrt{d_k}}\right)V$.
Without the $1/\sqrt{d_k}$ scaling, with a typical head dimension $d_k$ of 64 or more:
- $q^\top k$ has standard deviation $\approx \sqrt{d_k}$ (assuming unit-variance, uncorrelated features), so raw scores can be large
- After softmax, the extreme values dominate: one weight saturates near 1 and the rest collapse toward 0
- The gradient of softmax becomes nearly zero -> vanishing gradients
The $1/\sqrt{d_k}$ factor keeps the dot products at variance 1, keeping softmax in its informative regime.
fp16 version: with large $d_k$ and activations whose scale exceeds 1, unscaled scores can overflow 65504 (fp16 max) with non-negligible probability. FlashAttention computes attention in blocks, using an online softmax with numerical rescaling to avoid materializing the full attention matrix.
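The rescaling idea can be seen in a toy NumPy sketch (this shows only the streaming-softmax arithmetic, not FlashAttention's tiling or fp32-accumulation machinery; block count, sizes, and the logit scale are arbitrary):

```python
import numpy as np

def online_softmax_matvec(score_blocks, value_blocks):
    """Streaming softmax(scores) @ values over blocks, keeping a running max `m`,
    a running denominator, and a rescaled accumulator (the online-softmax trick)."""
    m, denom, acc = -np.inf, 0.0, 0.0
    for s, v in zip(score_blocks, value_blocks):
        new_m = max(m, float(s.max()))
        correction = np.exp(m - new_m)       # rescale everything accumulated so far
        w = np.exp(s - new_m)                # weights of the new block
        denom = denom * correction + w.sum()
        acc = acc * correction + w @ v
        m = new_m
    return acc / denom

rng = np.random.default_rng(0)
scores = rng.normal(scale=30.0, size=128)    # deliberately large logits
values = rng.normal(size=(128, 4))

reference = (np.exp(scores - scores.max()) / np.exp(scores - scores.max()).sum()) @ values
streamed = online_softmax_matvec(np.split(scores, 8), np.split(values, 8))
print(np.allclose(streamed, reference))      # True
```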
Case Study 3: Layer Normalization Stability
Layer normalization computes $\text{LN}(x) = \gamma \odot \dfrac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta$,
where $\epsilon$ (typically $10^{-5}$ to $10^{-6}$) prevents division by zero.
Numerical pitfall: If all activations in a layer are identical (common during initialization or after a bad update), $\sigma^2 = 0$ exactly in floating point. Without $\epsilon$, we'd compute $0 / 0 = \text{NaN}$.
Stability consideration: The choice of $\epsilon$ matters:
- Too small: may underflow or be flushed to zero in low-precision formats, offering no protection at all
- Too large: changes the effective normalization for small-variance activations
- The standard choice $\epsilon = 10^{-5}$ balances stability with fidelity
Catastrophic cancellation in variance: Computing $\sigma^2 = \mathbb{E}[x^2] - (\mathbb{E}[x])^2$ suffers catastrophic cancellation when $\mathbb{E}[x^2] \approx (\mathbb{E}[x])^2$ (small variance). Welford's online algorithm avoids this (see Appendix O).
Case Study 4: Gradient Checkpointing and Recomputation Errors
Gradient checkpointing saves memory by recomputing intermediate activations during the backward pass instead of storing them. This recomputation uses the same inputs but may use different precision if the precision mode changes between forward and backward.
Source of error: In mixed-precision training with autocast, the recomputed forward pass during backward may use slightly different floating-point operations than the original forward. The resulting gradient is mathematically correct (same bit pattern inputs) but the error can accumulate differently.
Industry practice: PyTorch's torch.utils.checkpoint.checkpoint handles this correctly by default. The key requirement is that the checkpointed function must be deterministic - stochastic operations (like Dropout) must use the same random seed in forward and recomputation.
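A minimal usage sketch (the Block module and its sizes are invented for illustration; `preserve_rng_state=True`, the default, makes Dropout replay the same mask during recomputation):

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.GELU(),
            torch.nn.Dropout(0.1), torch.nn.Linear(dim, dim),
        )
    def forward(self, x):
        return self.net(x)

block = Block()
x = torch.randn(4, 256, requires_grad=True)
# Activations inside `block` are recomputed during backward instead of stored
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
print(x.grad.shape)
```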
Appendix S: Floating-Point Benchmarks and Profiling
Measuring fp32 vs bf16 Throughput
import torch
import time
def benchmark_matmul(dtype, size=4096, n_warmup=5, n_trials=20):
"""Benchmark matrix multiplication throughput."""
device = 'cuda' if torch.cuda.is_available() else 'cpu'
A = torch.randn(size, size, dtype=dtype, device=device)
B = torch.randn(size, size, dtype=dtype, device=device)
# Warmup
for _ in range(n_warmup):
C = torch.mm(A, B)
if device == 'cuda':
torch.cuda.synchronize()
# Benchmark
start = time.perf_counter()
for _ in range(n_trials):
C = torch.mm(A, B)
if device == 'cuda':
torch.cuda.synchronize()
elapsed = time.perf_counter() - start
# FLOPS: 2 * N^3 for N x N matmul
flops = 2 * size**3 * n_trials
tflops = flops / elapsed / 1e12
return tflops
# Typical results on A100 GPU:
# fp32: ~19.5 TFLOPS
# bf16: ~77.0 TFLOPS (4x speedup with Tensor Cores)
# fp16: ~77.0 TFLOPS (similar to bf16)
Detecting Numerical Issues in Training
def register_nan_hooks(model):
"""Register hooks to detect NaN/Inf in forward and backward passes."""
hooks = []
def forward_hook(name):
def hook(module, input, output):
if isinstance(output, torch.Tensor):
if torch.isnan(output).any():
print(f"NaN detected in forward: {name}")
if torch.isinf(output).any():
print(f"Inf detected in forward: {name}")
return hook
def backward_hook(name):
def hook(module, grad_input, grad_output):
for i, g in enumerate(grad_input):
if g is not None and torch.isnan(g).any():
print(f"NaN in grad_input[{i}] at: {name}")
return hook
for name, module in model.named_modules():
hooks.append(module.register_forward_hook(forward_hook(name)))
hooks.append(module.register_full_backward_hook(backward_hook(name)))
return hooks # Call hook.remove() to deregister
# Usage:
# hooks = register_nan_hooks(model)
# train_step(model, batch)
# for h in hooks: h.remove()
Condition Number Monitoring
def monitor_weight_conditioning(model, threshold=1e4):
"""Monitor condition numbers of weight matrices."""
ill_conditioned = []
for name, param in model.named_parameters():
if param.dim() >= 2:
            # kappa = sigma_max / sigma_min from the full singular value spectrum
            try:
                sv = torch.linalg.svdvals(param.detach().reshape(param.shape[0], -1).float())
kappa = sv[0] / sv[-1]
if kappa > threshold:
ill_conditioned.append((name, kappa.item()))
except Exception:
pass
return ill_conditioned
Appendix T: Quick Derivations
T.1 Why Machine Epsilon Is $2^{1-p}$
A floating-point number with a $p$-bit significand represents values of the form $\pm 1.b_1 b_2 \cdots b_{p-1} \times 2^e$.
The spacing between consecutive representable numbers near $1$ is $2^{1-p}$ (only the last significand bit can change).
This is the unit in the last place (ULP) at $1$, which equals machine epsilon $\varepsilon_{\text{mach}} = 2^{1-p}$.
For fp32: $p = 24$ (1 implicit + 23 explicit mantissa bits), so $\varepsilon_{\text{mach}} = 2^{-23} \approx 1.19 \times 10^{-7}$.
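A one-line check with NumPy (`np.spacing` returns the gap to the next representable number, i.e. the ULP):

```python
import numpy as np

# The ULP at 1.0 equals machine epsilon = 2**(1 - p)
for dt, p in [(np.float16, 11), (np.float32, 24), (np.float64, 53)]:
    ulp_at_one = np.spacing(dt(1.0))     # gap between 1.0 and the next representable number
    print(f"{dt.__name__:8s}: ulp(1.0) = {float(ulp_at_one):.3e}, "
          f"matches finfo.eps: {ulp_at_one == np.finfo(dt).eps}")
```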
T.2 Why Kahan Summation Has $O(\varepsilon_{\text{mach}})$ Error
Naive summation of $n$ numbers has error bound $(n - 1)\,\varepsilon_{\text{mach}} \sum_i |x_i| + O(\varepsilon_{\text{mach}}^2)$ - it grows linearly with $n$.
Kahan summation maintains a compensation variable tracking the lost low-order bits:
c = 0
for each x_i:
y = x_i - c # Corrected input
t = sum + y # sum is large, y small
c = (t - sum) - y # Algebraically zero, but captures rounding error
sum = t
The compensation captures the bits lost in sum + y and feeds them into the next iteration. The net error is bounded by $2\,\varepsilon_{\text{mach}} \sum_i |x_i| + O(n\,\varepsilon_{\text{mach}}^2)$ - to first order, independent of $n$.
T.3 Why Softmax Is Numerically Stable With Shifting
For any constant $c$, softmax satisfies translation invariance: $\text{softmax}(x - c\mathbf{1}) = \text{softmax}(x)$.
Proof: $\dfrac{e^{x_i - c}}{\sum_j e^{x_j - c}} = \dfrac{e^{-c}\,e^{x_i}}{e^{-c}\sum_j e^{x_j}} = \dfrac{e^{x_i}}{\sum_j e^{x_j}}$.
Setting $c = \max_j x_j$ ensures all exponents $x_i - c \le 0$, preventing overflow while preserving the mathematical value.
T.4 Relative Backward Stability of Householder QR
The Householder QR algorithm produces computed factors $\hat{Q}, \hat{R}$ such that $\hat{Q}\hat{R} = A + \Delta A$ with $\|\Delta A\| \le c\,n\,\varepsilon_{\text{mach}}\,\|A\|$ for a modest constant $c$.
This is backward stable: the computed result is the exact answer for a slightly perturbed problem. The forward error in solving $Ax = b$ via QR is then $O(\kappa(A)\,\varepsilon_{\text{mach}})$, which is optimal - no algorithm can do better without additional information about the problem structure.
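A quick NumPy check of both error types (the matrix size and seed are arbitrary; np.linalg.qr uses Householder reflections via LAPACK):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
A = rng.normal(size=(n, n))
x_true = rng.normal(size=n)
b = A @ x_true

Q, R = np.linalg.qr(A)
x_hat = np.linalg.solve(R, Q.T @ b)          # solve the triangular system R x = Q^T b

eps = np.finfo(np.float64).eps
kappa = np.linalg.cond(A)
backward = np.linalg.norm(A @ x_hat - b) / (np.linalg.norm(A) * np.linalg.norm(x_hat))
forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

print(f"kappa(A)       ~ {kappa:.2e}")
print(f"backward error ~ {backward:.2e}  (compare eps = {eps:.1e})")
print(f"forward error  ~ {forward:.2e}  (compare kappa * eps = {kappa * eps:.1e})")
```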
Appendix U: Cross-Reference with Other Sections
This section connects to several other parts of the curriculum:
| Topic Introduced Here | Full Treatment |
|---|---|
| Condition number | Section02 Numerical Linear Algebra - condition number theory, backward stability proofs, perturbation analysis |
| Stable matrix algorithms (LU, QR, Cholesky) | Section03-Advanced-Linear-Algebra/08-Matrix-Decompositions - full decomposition theory |
| Floating-point in optimization (gradient descent) | Section03 Numerical Optimization - learning rate selection, gradient accumulation, precision effects on convergence |
| Interpolation with finite precision | Section04 Interpolation and Approximation - Runge's phenomenon, Chebyshev nodes, numerical stability of polynomial evaluation |
| Numerical quadrature errors | Section05 Numerical Integration - error analysis of quadrature rules in finite precision |
| Probabilistic error analysis | Section05-Probability-and-Statistics - probabilistic numerics, Gaussian process approximations |
Notation cross-references:
- $\varepsilon_{\text{mach}}$ defined here -> used in all Section10 sections
- $\kappa(A)$ introduced here -> defined formally in Section02
- $\text{fl}(\cdot)$ rounding operator defined here -> used throughout Section10
Appendix V: Extended Exercises with Hints
These supplementary problems extend the main exercises for students who want deeper practice.
V.1 Sterbenz's Lemma (**)
Statement: If $x$ and $y$ are floating-point numbers with $\frac{y}{2} \le x \le 2y$, then $\text{fl}(x - y) = x - y$ exactly (no rounding error).
Proof sketch: When $x \approx y$, the difference $x - y$ is small. Since both $x$ and $y$ are exactly representable and their exponents differ by at most 1, the subtraction can be performed within the significand without any rounding.
Application: In Kahan summation, the step c = (t - sum) - y uses Sterbenz's lemma to capture rounding errors exactly when the values have similar magnitudes.
Exercise: Verify Sterbenz's lemma numerically for fp32 arithmetic. Generate pairs $(a, b)$ with $\frac{b}{2} \le a \le 2b$ and confirm that the fp32 result a - b equals the exact difference computed via Fraction (exact rational arithmetic). Compare with pairs outside that range. A starting sketch follows below.
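A starting sketch for the exercise (the value ranges and trial count are arbitrary choices):

```python
import numpy as np
from fractions import Fraction

rng = np.random.default_rng(0)

def count_inexact(n_trials=10_000, inside=True):
    """Count fp32 subtractions whose result differs from the exact rational difference."""
    failures = 0
    for _ in range(n_trials):
        b = np.float32(rng.uniform(0.5, 100.0))
        if inside:
            a = np.float32(rng.uniform(float(b) / 2, 2 * float(b)))    # Sterbenz range
        else:
            a = np.float32(rng.uniform(2 * float(b), 100 * float(b)))  # outside the range
        computed = a - b                                               # fp32 subtraction
        exact = Fraction(float(a)) - Fraction(float(b))                # exact rational result
        failures += Fraction(float(computed)) != exact
    return failures

print("inexact results inside  [b/2, 2b]:", count_inexact(inside=True))   # expect 0
print("inexact results outside [b/2, 2b]:", count_inexact(inside=False))  # usually > 0
```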
V.2 The Table Maker's Dilemma (***)
When evaluating an elementary function such as $\exp$, $\log$, or $\sin$ to a correctly-rounded result, the function value may need to be known to much higher precision than the target format - potentially arbitrary precision. This is because the correctly-rounded result depends on which side of the midpoint between two representable numbers the true value falls on. For a $p$-bit significand, the hardest-to-round cases can require evaluating to roughly $2p$ bits of precision (the Table Maker's Dilemma; studied by Kahan and by Ziv in the 1990s).
Modern resolution: The CRlibm library (INRIA) and Intel SVML use multi-stage argument reduction + polynomial approximation with double-double arithmetic to guarantee correctly-rounded elementary functions.
For AI: This is why torch.sin() may give slightly different results than math.sin() for the same input - different levels of accuracy guarantees.
V.3 Floating-Point Reproducibility (**)
Training a deep neural network on the same hardware, same code, same random seed should produce the same result. But in practice, results often differ between runs. Sources of non-reproducibility:
- Non-associative reductions: GPU CUDA reduction operations may sum in different orders depending on thread scheduling
- Atomic operations: Multi-threaded gradient accumulation uses atomics, which arrive in non-deterministic order
- cuDNN algorithm selection: cuDNN may select different convolution algorithms with different rounding behaviors
PyTorch reproducibility settings:
import torch
torch.manual_seed(42)
torch.cuda.manual_seed_all(42)
torch.backends.cudnn.deterministic = True # Slower but deterministic
torch.backends.cudnn.benchmark = False # Disable algorithm search
Tradeoff: Deterministic mode can be 10-30% slower due to avoiding certain non-deterministic fast paths.
V.4 Error-Free Transformations (***)
An error-free transformation (EFT) computes not just but also the exact rounding error such that exactly.
TwoSum algorithm (Knuth, 1969):
function TwoSum(a, b):
s = fl(a + b)
v = fl(s - a)
e = fl((a - fl(s - v)) + fl(b - v))
return (s, e) # a + b = s + e exactly
This costs 6 floating-point operations to compute both the sum and the exact rounding error.
Application - Double-Double Arithmetic: By representing each number as a pair $(x_{\text{hi}}, x_{\text{lo}})$ whose exact sum $x_{\text{hi}} + x_{\text{lo}}$ is the intended value (with $|x_{\text{lo}}| \le \tfrac{1}{2}\,\text{ulp}(x_{\text{hi}})$), we effectively double the working precision from $p$ to roughly $2p$ bits using only hardware arithmetic. This is how long double is emulated on some platforms, and the same compensation idea underlies Kahan summation.
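A runnable transcription of the TwoSum pseudocode above (float64 chosen for the demo):

```python
import numpy as np

def two_sum(a, b):
    """Error-free transformation (Knuth): returns (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    a, b = np.float64(a), np.float64(b)
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

s, e = two_sum(1.0, 1e-20)
print(s, e)   # 1.0 1e-20 -> the rounding error of the addition is recovered exactly
```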
Appendix W: Format Selection Guide for AI Practitioners
Decision Tree for Format Selection
CHOOSING FLOATING-POINT FORMAT
========================================================================
Is this inference or training?
|
+-> INFERENCE
| Is accuracy critical?
| +-> YES: fp32 (or bf16 if model supports it)
| +-> NO: int8 quantization (post-training quant, GPTQ, AWQ)
| -> 4-8x memory reduction, ~1-2% accuracy loss
|
+-> TRAINING
Hardware with Tensor Cores? (A100/H100/RTX 30xx/40xx)
+-> YES: bf16 + AMP (torch.cuda.amp)
| - Weight master copy in fp32
| - Compute in bf16 (4x throughput)
| - Dynamic loss scaling
+-> NO (V100 or CPU):
fp32 safe baseline
fp16 + AMP if V100 (needs loss scaling, unstable for LLMs)
========================================================================
Format Comparison Summary
| Format | Bits | Dynamic Range | Precision | Best Use Case |
|---|---|---|---|---|
| fp64 | 64 | +/-10^308 | ~15 digits | Scientific computing, debugging |
| fp32 | 32 | +/-10^38 | ~7 digits | Training baseline, optimizer states |
| tf32 | 19* | +/-10^38 | ~3 digits | A100 matmul (automatic, transparent) |
| bf16 | 16 | +/-10^38 | ~2 digits | LLM training (same range as fp32) |
| fp16 | 16 | +/-65504 | ~3 digits | Vision models, inference |
| fp8 E4M3 | 8 | +/-448 | ~1 digit | Forward pass (H100+) |
| fp8 E5M2 | 8 | +/-57344 | <1 digit | Gradient accumulation (H100+) |
| int8 | 8 | +/-127 | Integer | Inference quantization |
*tf32 uses 10-bit mantissa for compute but stores as fp32.
Choosing Loss Scale Initial Value
# Typical AMP GradScaler configuration
scaler = torch.cuda.amp.GradScaler(
init_scale=2**16, # Start with 65536 - large enough for fp16
growth_factor=2.0, # Double scale every growth_interval steps
backoff_factor=0.5, # Halve scale on overflow
growth_interval=2000, # Steps between scale increases
enabled=True
)
Heuristic: Start with init_scale = 2^16 for most models. If you see frequent overflow warnings in the first 100 steps, reduce to 2^12. If no overflow for 10,000+ steps, the scaler will auto-increase.
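For context, the standard way this scaler is used inside a training step (a minimal sketch - `model`, `optimizer`, `loader`, and `loss_fn` are assumed to exist; with bf16, loss scaling is usually unnecessary and the scaler can be constructed with `enabled=False`):

```python
import torch

# `scaler` is the GradScaler configured above
for inputs, targets in loader:
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():              # run forward (and loss) in mixed precision
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()                # backward on the scaled loss
    scaler.unscale_(optimizer)                   # unscale so clipping sees true gradient norms
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)                       # skips the update if any grad is Inf/NaN
    scaler.update()                              # adjusts the scale for the next iteration
```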
End of Appendix W