"The purpose of computing is insight, not numbers - and no insight is more powerful than factoring a matrix into pieces whose structure you understand."
Overview
Every practical computation with matrices reduces, ultimately, to a factorization. Solving $Ax = b$? Factor $PA = LU$ and solve two triangular systems. Fitting a linear model? Factor the data matrix as $A = QR$ and solve a triangular system stably. Sampling from a multivariate Gaussian? Factor the covariance as $\Sigma = LL^\top$ and transform independent normals. Estimating the rank of a nearly singular matrix? Factor with column pivoting and read off the diagonal of $R$.
This section develops the three foundational computational decompositions: LU factorization (Gaussian elimination with pivoting, the workhorse for general square systems), QR factorization via Householder and Givens (the stable method for least squares and rank-revealing problems), and a brief computational recap of Cholesky (the SPD specialist, developed fully in 07). The emphasis throughout is computational: how these algorithms work, why they are numerically stable, what can go wrong, and how to implement them efficiently.
The machine learning connections are direct and load-bearing. Every Newton step requires solving a Hessian system - that is LU or Cholesky. Every least-squares layer (linear probing, ridge regression on features) uses QR under the hood in numerically reliable implementations. Gaussian process inference - the backbone of Bayesian optimization and uncertainty-aware ML - is Cholesky factorization applied thousands of times. Differentiating through factorizations (for implicit differentiation of constrained problems) requires understanding the algebraic structure developed here. And the emerging field of randomized numerical linear algebra, which underlies methods like LoRA and sketch-and-solve preconditioning, is built on randomized LU and QR.
Prerequisites
- LU, QR, Cholesky brief introductions - 02: Matrix Operations
- Eigenvalues, spectral theorem - 01: Eigenvalues and Eigenvectors
- SVD - 02: Singular Value Decomposition
- Gram-Schmidt, QR theory, orthogonal matrices - 05: Orthogonality and Orthonormality
- Matrix norms, condition numbers - 06: Matrix Norms
- Positive definite matrices, Cholesky full theory, LDL^T - 07: Positive Definite Matrices
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive derivations: LU with pivoting, Householder QR, Givens rotations, RRQR, blocked algorithms, GP inference, randomized methods |
| exercises.ipynb | 8 graded exercises from triangular solves through differentiating Cholesky for GP log-likelihood |
Learning Objectives
After completing this section, you will be able to:
- Implement forward and backward substitution and analyze their numerical properties
- Derive LU factorization from Gaussian elimination and prove the existence theorem
- Explain why partial pivoting is necessary and implement it, including growth factor analysis
- Distinguish partial, complete, and rook pivoting and state the stability guarantee of each
- Implement Householder QR from scratch, including the sign convention for numerical stability
- Construct Givens rotations and explain when to prefer them over Householder reflectors
- Implement rank-revealing QR with column pivoting and use the R diagonal to estimate rank
- Describe tall-skinny QR (TSQR) and explain its communication-optimal property
- State the backward error theorem and compare the stability of LU, QR, and Cholesky
- Explain blocked algorithms and why BLAS-3 operations dominate on modern hardware
- Apply QR to solve least-squares problems and explain why it is more stable than normal equations
- Differentiate through Cholesky factorization and compute gradients of log-determinants
Table of Contents
- 1. Intuition
- 2. Background: Triangular Systems
- 3. LU Factorization
- 3.1 LU Decomposition Theorem
- 3.2 The LU Algorithm: Outer Product Form
- 3.3 Failure Without Pivoting
- 3.4 Partial Pivoting: PA = LU
- 3.5 Complete Pivoting: PAQ = LU
- 3.6 Rook Pivoting
- 3.7 LU Stability Analysis and Backward Error
- 3.8 Blocked LU for Cache Efficiency
- 3.9 Solving Ax = b via LU
- 3.10 Rank-Deficient LU
- 4. QR Factorization: Computational Algorithms
- 5. Cholesky Factorization (Computational Recap)
- 6. Numerical Stability and Error Analysis
- 7. Blocked Algorithms and High-Performance Computing
- 8. Applications in Machine Learning
- 9. Common Mistakes
- 10. Exercises
- 11. Why This Matters for AI (2026 Perspective)
- 12. Conceptual Bridge
1. Intuition
1.1 The Factorization Paradigm
Every problem in computational linear algebra is, at its heart, a search for structure. When you factor a matrix into simpler pieces - triangular, orthogonal, diagonal - you transform a hard problem into a sequence of easy ones. The word "easy" is precise: triangular systems can be solved in $O(n^2)$ operations by substitution; orthogonal transformations are numerically perfect (they preserve norms exactly); diagonal systems are trivially solvable. A decomposition is a way of peeling back complexity to reveal the structural skeleton underneath.
This perspective explains why factorizations are not merely computational conveniences. They are mathematical revelations. LU factorization reveals which rows are "heavy" (large pivots dominate) and which are "light" (small pivots signal near-singularity). QR factorization reveals the orthogonal complement of the column space via the zero rows of $R$ in the full factorization. Cholesky factorization reveals the "matrix square root" that encodes the geometry of a positive definite quadratic form. Each decomposition is a lens tuned to a different structural feature.
The central insight: nearly every matrix algorithm of practical importance runs in $O(n^3)$ time, dominated by the factorization phase. After factorization, the actual problem-solving (applying the permutation and the two triangular solves) costs only $O(n^2)$. This is the fundamental return on investment: spend $O(n^3)$ once to factor, then spend $O(n^2)$ as many times as needed for different right-hand sides.
For AI: this principle appears in natural gradient methods (factor the Fisher information matrix once per outer loop, solve multiple gradient updates cheaply), in Gaussian process regression (factor the covariance matrix once, evaluate posterior at many test points cheaply), and in iterative solvers (factor a preconditioner once, apply it repeatedly to accelerate convergence).
1.2 Triangular Systems: The Foundation
Before understanding factorizations, you must understand why triangular systems are considered "solved." A lower triangular system $Lx = b$ with $\ell_{ii} \neq 0$ can be solved row by row from top to bottom:

$$x_i = \frac{1}{\ell_{ii}}\left(b_i - \sum_{j=1}^{i-1} \ell_{ij} x_j\right), \qquad i = 1, \dots, n.$$

This is forward substitution: each unknown is determined uniquely by those already computed. The computation requires $n(n-1)/2$ multiplications and the same number of additions - $O(n^2)$ total. Similarly, an upper triangular system $Ux = b$ is solved by backward substitution from bottom to top.
The key properties of triangular systems:
- Existence and uniqueness: a triangular system has a unique solution if and only if all diagonal entries are nonzero.
- Numerical stability: forward/backward substitution are inherently stable when pivots are large; small pivots amplify errors.
- Parallelism: the sequential dependency (each unknown requires all previous) limits parallelism, but blocked implementations can exploit level-2 BLAS parallelism within panels.
- Structural exploitation: sparse triangular systems (band, arrowhead, bidiagonal) can be solved in $O(nb)$ for bandwidth $b$.
Non-example: a general dense system $Ax = b$ cannot be solved by substitution directly - it requires $O(n^3)$ factorization work first. The entire purpose of LU, QR, and Cholesky is to reduce the general case to triangular cases.
1.3 Three Canonical Decompositions
The three decompositions in this section partition the space of structured matrix problems:
| Decomposition | Matrix class | Factored form | Primary use |
|---|---|---|---|
| LU | General square, non-singular | $PA = LU$ | Solving $Ax = b$, computing $\det A$ |
| QR | Any rectangular $A \in \mathbb{R}^{m \times n}$, $m \ge n$ | $A = QR$ | Least squares, rank determination, eigenvalue algorithms |
| Cholesky | Symmetric positive definite | $A = LL^\top$ | SPD systems, Gaussian sampling, log-det computation |
Why three? Each exploits a different structural property. LU uses no special structure beyond invertibility; it is the most general and therefore requires more care (pivoting for stability). QR uses orthogonality, which is the most numerically pristine structure - $\kappa_2(Q) = 1$ by definition. Cholesky uses both symmetry and positive definiteness, which allows working with only half the matrix and guarantees the factorization exists without pivoting.
The hierarchy of stability: Cholesky > QR > LU (partial pivoting) > LU (no pivoting). As you sacrifice structure, you need more algorithmic sophistication to maintain numerical reliability.
1.4 Why This Matters for AI
Matrix factorizations are not background infrastructure - they are in the critical path of the most computationally expensive operations in modern AI.
Gaussian process regression (used in Bayesian optimization, uncertainty quantification, and hyperparameter tuning) requires computing $K^{-1}y$ and $\log\det K$ for an $n \times n$ kernel matrix $K$. Both require Cholesky factorization. A single GP training step on $n$ points costs $O(n^3)$ operations - factorization is the bottleneck.
Second-order optimization (Newton's method, natural gradient, K-FAC) requires solving Hessian or Fisher information systems $H\,\Delta\theta = -g$. For K-FAC (Kronecker-Factored Approximate Curvature, Martens & Grosse 2015), the Fisher approximation is block-diagonal with Kronecker structure, and each block is inverted via Cholesky - thousands of small Cholesky factorizations per training step.
Linear probing and fine-tuning - fitting a linear model on top of frozen features - is a least-squares problem. Numerically reliable implementations use QR (e.g., numpy.linalg.lstsq calls LAPACK's dgelsd which uses divide-and-conquer SVD, or dgelsy which uses column-pivoted QR).
Differentiating through factorizations is required whenever a factorization appears inside a differentiable computational graph. PyTorch's torch.cholesky_solve and JAX's jax.lax.linalg.cholesky support automatic differentiation through the factorization via the implicit function theorem.
Randomized methods (LoRA, sketching, randomized preconditioning) use approximate LU and QR factorizations on sketched matrices. Understanding the exact factorizations is prerequisite to understanding their randomized variants.
1.5 Historical Timeline
| Year | Contribution | Author(s) |
|---|---|---|
| 1809 | Gaussian elimination applied to least-squares geodesy | Carl Friedrich Gauss |
| 1938 | LU factorization formalized as a matrix decomposition | Tadeusz Banachiewicz |
| 1948 | Systematic analysis of elimination and error growth | Alan Turing |
| 1954 | Givens rotations for QR | Wallace Givens |
| 1958 | Householder reflections for triangularization | Alston Householder |
| 1961 | QR algorithm for eigenvalues | Francis; Kublanovskaya |
| 1971 | Handbook for Automatic Computation: ALGOL routines for linear algebra | Wilkinson, Reinsch |
| 1979 | LINPACK (first standardized library) | Dongarra et al. |
| 1992 | LAPACK: blocked BLAS-3 algorithms | Anderson, Bai, Dongarra et al. |
| 1998 | Randomized low-rank approximation (sketch-then-factor) | Frieze, Kannan, Vempala |
| 2011 | Halko-Martinsson-Tropp randomized low-rank framework | Halko, Martinsson, Tropp |
| 2015 | K-FAC: Cholesky for Fisher information blocks | Martens & Grosse |
| 2022 | LoRA: low-rank adaptation of weight updates | Hu et al. |
| 2024 | FlashAttention-3: blocked tiling for attention | Shah et al. |
2. Background: Triangular Systems
2.1 Forward Substitution
Definition. Given a lower triangular matrix $L \in \mathbb{R}^{n \times n}$ with $\ell_{ii} \neq 0$ for all $i$, and $b \in \mathbb{R}^n$, forward substitution computes $x$ satisfying $Lx = b$ via:

$$x_i = \frac{1}{\ell_{ii}}\left(b_i - \sum_{j=1}^{i-1} \ell_{ij} x_j\right), \qquad i = 1, \dots, n.$$

Algorithm (row-oriented, runnable NumPy form):

import numpy as np

def forward_sub(L, b):
    # Solve L x = b for lower triangular L, row by row from the top
    n = len(b)
    x = np.empty(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x
Complexity: The inner loop at step $i$ runs $i - 1$ times, giving $n(n-1)/2$ multiplications and the same number of additions, plus $n$ divisions. Total: $n^2 + O(n)$ floating-point operations, or $O(n^2)$.
Numerical properties: Forward substitution is backward stable: the computed solution is the exact solution of a slightly perturbed triangular system. The forward error is governed by $\kappa(L)$, the condition number of $L$, not by the intermediate values of $x$. The key source of numerical difficulty is a small diagonal entry $\ell_{ii}$: dividing by a near-zero diagonal amplifies any error already present in the numerator.
Unit lower triangular: When $\ell_{ii} = 1$ for all $i$ (as in the LU factorization, where $L$ is unit lower triangular by convention), the divisions are trivial and the loop body reduces to:

x[i] = b[i] - L[i, :i] @ x[:i]   # no division when the diagonal is all ones

This eliminates the $n$ divisions, slightly improving performance and stability.
For AI: Forward substitution appears in the final step of any factorization-based solver. In neural network training with second-order methods (Newton, K-FAC), solving $LL^\top d = g$ extracts the natural gradient direction given the Cholesky factor $L$ of the Fisher matrix $F$.
2.2 Backward Substitution
Definition. Given an upper triangular $U \in \mathbb{R}^{n \times n}$ with $u_{ii} \neq 0$, backward substitution computes $x$ satisfying $Ux = b$ via:

$$x_i = \frac{1}{u_{ii}}\left(b_i - \sum_{j=i+1}^{n} u_{ij} x_j\right), \qquad i = n, n-1, \dots, 1.$$

The algorithm proceeds from the last equation (which has the single unknown $x_n$) backward to the first. Complexity and numerical properties mirror forward substitution.
Column-oriented backward substitution: An equivalent formulation updates the remaining components of $b$ as each $x_i$ is resolved:

def backward_sub_col(U, b):
    # Solve U x = b, updating the right-hand side column by column
    b = b.copy()
    n = len(b)
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = b[i] / U[i, i]
        b[:i] -= x[i] * U[:i, i]
    return x

This accesses $U$ by columns rather than rows, which can improve cache efficiency on column-major storage (Fortran/LAPACK convention).
Connection to matrix inverse: The columns of $L^{-1}$ can be computed by solving $n$ triangular systems $Lx_j = e_j$, one per standard basis vector. This is how scipy.linalg.solve_triangular computes a triangular inverse when called with a matrix right-hand side.
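A quick check of that matrix-right-hand-side pattern, with made-up values (the 3x3 matrix here is purely illustrative):

```python
import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
L_inv = solve_triangular(L, np.eye(3), lower=True)  # matrix RHS: one solve per column of I
assert np.allclose(L @ L_inv, np.eye(3))
```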
2.3 Gaussian Elimination Revisited
Gaussian elimination is the process of reducing a matrix to upper triangular form by applying elementary row operations (EROs). Each ERO of type "add $-m_{ij}$ times row $j$ to row $i$ (with $i > j$)" corresponds to left-multiplication by an elementary lower triangular matrix (an elementary elimination matrix or Frobenius matrix):

$$M_{ij} = I - m_{ij}\, e_i e_j^\top.$$

The multiplier $m_{ij} = a_{ij}^{(j)} / a_{jj}^{(j)}$ is the ratio that zeros out entry $(i, j)$. Applying all elimination matrices to reduce $A$ to $U$ gives:

$$M_{n-1} \cdots M_2 M_1 A = U.$$

Crucially, the product of elementary lower triangular matrices is lower triangular, and their inverses are trivially obtained by flipping the sign of the multipliers:

$$M_{ij}^{-1} = I + m_{ij}\, e_i e_j^\top.$$

So the product of inverses, $L = M_1^{-1} M_2^{-1} \cdots M_{n-1}^{-1}$, is unit lower triangular with the multipliers in their natural positions. This gives:

$$A = LU.$$

The multipliers go directly into $L$: one of the most elegant facts in numerical linear algebra is that the multipliers used during elimination appear, without any further computation, as the subdiagonal entries of the factor $L$. No separate computation of $L$ is required - it is built up in place during elimination.
3. LU Factorization
3.1 LU Decomposition Theorem
Theorem (LU factorization). Let $A \in \mathbb{R}^{n \times n}$ be non-singular. Then $A$ has an LU factorization $A = LU$ (with $L$ unit lower triangular and $U$ upper triangular) if and only if all leading principal submatrices $A_k = A[1{:}k, 1{:}k]$ for $k = 1, \dots, n-1$ are non-singular.
Proof sketch. The condition $\det A_k \neq 0$ for all $k$ ensures that no zero pivot is encountered during Gaussian elimination. At step $k$, the pivot is $u_{kk} = \det A_k / \det A_{k-1}$, so all pivots are nonzero iff all leading minors are nonzero. Conversely, if Gaussian elimination completes without zero pivots, the multipliers are finite and $L, U$ are well-defined.
Non-examples:
- $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ fails: $a_{11} = 0$ (a zero pivot at step 1), even though $A$ is non-singular.
- $A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$: $\det A_1 = 1 \neq 0$, so the factorization runs to completion, but $u_{22} = 0$ ($U$ singular, reflecting $\det A = 0$).
- Any singular matrix: $\det A = \prod_k u_{kk}$, so $\det A = 0$ iff some $u_{kk} = 0$.
Uniqueness: The factorization with $\ell_{ii} = 1$ (unit lower triangular $L$) is unique when it exists. Without the normalization, one could scale $L$ and $U$ by any diagonal matrix and its inverse.
The factorization in-place: LU computation overwrites $A$: the upper triangle stores $U$, the strict lower triangle stores the multipliers (i.e., the strict lower triangle of $L$; the unit diagonal need not be stored). This halves the memory requirement.
3.2 The LU Algorithm: Outer Product Form
The standard view of LU elimination is column-by-column. But the outer product form (also called rank-1 update form) exposes the structure more clearly and maps naturally to BLAS-3 blocked implementations.
At step $k$ of the factorization, we have already reduced columns $1, \dots, k-1$. The remaining $(n-k+1) \times (n-k+1)$ submatrix (the Schur complement after $k-1$ steps) has the form:

$$A^{(k)} = \begin{pmatrix} a_{kk} & a_k^\top \\ b_k & B_k \end{pmatrix}, \qquad B_k \leftarrow B_k - \frac{1}{a_{kk}}\, b_k a_k^\top,$$

where $b_k / a_{kk}$ is the multipliers column and $a_k^\top$ is the pivot row. This is a rank-1 update of the trailing submatrix.
Algorithm (runnable NumPy form):

def lu_outer(A):
    # Unpivoted LU via rank-1 (outer product) updates; works on a copy of A
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]                              # multipliers -> strict lower triangle
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # rank-1 Schur update (BLAS-2: ger)
    return np.tril(A, -1) + np.eye(n), np.triu(A)          # L, U
Complexity: Step $k$ costs $n-k$ divisions and $(n-k)^2$ multiply-adds. Total:

$$\sum_{k=1}^{n-1} 2(n-k)^2 \approx \frac{2}{3} n^3.$$

So LU factorization costs $\frac{2}{3}n^3 + O(n^2)$ floating-point operations (flops). For comparison: solving $Ax = b$ after factorization costs $2n^2$ (two triangular solves). The ratio $\sim n/3$ means factorization dominates for large $n$.
For AI: the rank-1 update structure maps directly to GPU tensor cores. Blocked implementations reformulate the trailing submatrix update as a matrix-matrix multiply (BLAS-3: DGEMM), which achieves near-peak GPU throughput.
3.3 Failure Without Pivoting
Naive LU (no pivoting) fails catastrophically when:
- Zero pivots: If $a_{kk}^{(k)} = 0$ during step $k$, the division by the pivot is undefined. This happens even for non-singular matrices that happen to have zero leading principal minors.
- Small pivots - numerical catastrophe: Consider:

$$A = \begin{pmatrix} \epsilon & 1 \\ 1 & 1 \end{pmatrix}, \qquad \epsilon \approx 10^{-20}.$$

Naive LU gives multiplier $m = 1/\epsilon = 10^{20}$, and $u_{22} = 1 - 1/\epsilon \approx -10^{20}$ (in floating-point, this overwhelms the exact value $1$). The computed solution can have relative error of order 1 even when the problem is well-conditioned.
- Growth factor explosion: The growth factor is defined as:

$$\rho_n = \frac{\max_{i,j,k} |a_{ij}^{(k)}|}{\max_{i,j} |a_{ij}|}.$$

Without pivoting, $\rho_n$ is unbounded (the example above has $\rho \approx 1/\epsilon$); even with partial pivoting it can reach $2^{n-1}$ in the worst case (Wilkinson's matrix, 3.4), making the computed $U$ useless.
Example of catastrophic failure: take the matrix above with $\epsilon = 10^{-20}$ and $b = (1, 2)^\top$; the exact solution is $x \approx (1, 1)^\top$. Naive LU in IEEE double precision gives multiplier $10^{20}$ and returns $\hat{x}_1 = 0$ - a catastrophic cancellation.
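A minimal NumPy demonstration of this failure mode (the values match the example above):

```python
import numpy as np

eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

# Naive 2x2 elimination: the multiplier 1/eps overwhelms everything it touches
m = A[1, 0] / A[0, 0]             # 1e20
u22 = A[1, 1] - m * A[0, 1]       # fl(1 - 1e20) = -1e20: the exact "1" is lost
x2 = (b[1] - m * b[0]) / u22      # = 1.0
x1 = (b[0] - 1.0 * x2) / A[0, 0]  # catastrophic: 0.0 instead of ~1.0
print(x1, x2)                     # (0.0, 1.0) vs true solution ~(1.0, 1.0)
print(np.linalg.solve(A, b))      # pivoted LAPACK solver: [~1.0, ~1.0]
```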
For AI: PyTorch's torch.linalg.lu uses partial pivoting by default. numpy.linalg.solve calls LAPACK dgesv which uses partial pivoting. Never use unpivoted LU for numerical computation.
3.4 Partial Pivoting: PA = LU
Idea: Before processing column $k$, swap row $k$ with the row (among rows $k, \dots, n$) having the largest absolute value in column $k$. This ensures $|\ell_{ik}| \le 1$ for all multipliers.
Algorithm with partial pivoting (runnable NumPy form):

def lu_partial_pivot(A):
    # Returns the packed LU factors and the permutation as a vector
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # pivot row: largest |entry| in column k
        if p != k:
            A[[k, p]] = A[[p, k]]             # swap rows (recorded in perm)
            perm[[k, p]] = perm[[p, k]]
        A[k+1:, k] /= A[k, k]                 # now |L[i,k]| <= 1
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A, perm
Result: $PA = LU$, where $P$ is a permutation matrix encoding all row swaps. The factor $L$ satisfies $|\ell_{ij}| \le 1$ for all $i > j$.
Growth factor with partial pivoting: The theoretical bound is still $\rho_n \le 2^{n-1}$ (same as the worst case!), but in practice the growth factor is almost always small - empirically on the order of $n^{2/3}$ for random matrices (Trefethen & Schreiber 1990) - and adversarial inputs achieving exponential growth are vanishingly rare.
Stability theorem (Wilkinson 1961): Partial pivoting produces a computed factorization $\hat{L}\hat{U}$ such that:

$$\hat{L}\hat{U} = PA + \Delta A, \qquad \|\Delta A\|_\infty \le p(n)\, \rho_n\, u\, \|A\|_\infty,$$

where $u$ is the unit roundoff (e.g., $u \approx 1.1 \times 10^{-16}$ for IEEE double). For typical $\rho_n$, this bound is well within practical tolerance.
Storage of the permutation: $P$ is stored as a pivot vector $p$, where $p_k$ is the row index swapped with row $k$ at step $k$. Applying $P$ to $b$ before the triangular solve costs $O(n)$.
Non-example: Partial pivoting does NOT guarantee that $U$ stays small. Wilkinson's matrix - ones on the diagonal, $-1$ everywhere below the diagonal, ones in the last column - already has each pivot column maximized on the diagonal, so no rows are ever swapped. Yet the last column doubles at every elimination step, ending with $u_{nn} = 2^{n-1}$. Thus $|\ell_{ij}| \le 1$ throughout while $\rho_n = 2^{n-1}$.
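The sketch below builds this matrix and measures the growth empirically (the helper name wilkinson is ours):

```python
import numpy as np
from scipy.linalg import lu

def wilkinson(n):
    """Worst-case matrix for partial pivoting: growth factor 2^(n-1)."""
    W = np.eye(n) - np.tril(np.ones((n, n)), -1)
    W[:, -1] = 1.0
    return W

n = 20
W = wilkinson(n)
P, L, U = lu(W)                        # scipy's partially pivoted LU
growth = np.abs(U).max() / np.abs(W).max()
print(growth)                          # 2^(n-1) = 524288.0 for n = 20
```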
3.5 Complete Pivoting: PAQ = LU
Complete pivoting selects both the row and column with the globally largest absolute entry at each step:
At step $k$: find $(p, q) = \arg\max_{i, j \ge k} |a_{ij}^{(k)}|$, then swap row $p \leftrightarrow k$ and column $q \leftrightarrow k$.
Result: $PAQ = LU$, where $P$ and $Q$ are permutation matrices (row and column respectively).
Growth factor bound: Complete pivoting satisfies the much tighter bound (Wilkinson):

$$\rho_n \le n^{1/2} \left( 2 \cdot 3^{1/2} \cdot 4^{1/3} \cdots n^{1/(n-1)} \right)^{1/2},$$

which grows roughly like $n^{\frac{1}{4}\ln n}$ - far more slowly than $2^{n-1}$, though still faster than any fixed polynomial. Numerically, complete pivoting is extremely stable - no practical example of large growth is known.
Cost: Finding the global maximum at each step requires examining the full trailing submatrix, adding $(n-k)^2$ comparisons per step and $\sim n^3/3$ comparisons total - a substantial constant-factor overhead relative to partial pivoting's $O(n^2)$ total comparisons. This is the reason complete pivoting is rarely used in practice.
When to use complete pivoting:
- When stability is paramount (e.g., certified computation, interval arithmetic)
- When the matrix is suspected to be rank-deficient (complete pivoting exposes rank)
- When solving very ill-conditioned systems where partial pivoting is known to fail
For AI: Complete pivoting is rarely used in mainstream ML workflows. Partial pivoting suffices for almost all practical problems. The theoretical importance of complete pivoting is in establishing lower bounds and in the analysis of rank-revealing factorizations.
3.6 Rook Pivoting
Rook pivoting is a middle ground between partial and complete pivoting. At step $k$:
- Find the largest element in column $k$ (as in partial pivoting) - say in row $p$.
- Check whether $|a_{pk}|$ is also the largest in row $p$. If yes, use it as the pivot.
- If not, move to the largest entry in row $p$, then re-check its column; repeat until the search converges.
The name comes from the chess rook: the pivot search alternates between column and row moves until it finds a position that is simultaneously the largest in its row and column - a "rook position."
Properties:
- Growth factor: quasi-polynomially bounded (Foster 1997) - far below the $2^{n-1}$ worst case of partial pivoting, and in practice much smaller.
- Cost: $O((n-k)^2)$ additional comparisons per step in the worst case, but typically $O(n-k)$ in practice (convergence in 1-2 alternations).
- Rank-revealing: Rook pivoting reveals rank approximately as well as complete pivoting.
Theorem (Foster 1997): For a matrix of numerical rank $r$, rook pivoting keeps the trailing pivots $|u_{kk}|$, $k > r$, negligible, making the diagonal jump at position $r$ clearly visible.
3.7 LU Stability Analysis and Backward Error
The definitive framework for understanding LU stability is backward error analysis, developed by Wilkinson in the 1960s.
Definition (backward error). The backward error of a computed solution $\hat{x}$ to $Ax = b$ is the smallest perturbation $(\Delta A, \Delta b)$ such that:

$$(A + \Delta A)\,\hat{x} = b + \Delta b.$$

A method is backward stable if $\|\Delta A\| / \|A\|$ and $\|\Delta b\| / \|b\|$ are of order $u$ (machine epsilon) regardless of the input.
Backward error theorem for LU with partial pivoting:
Theorem (Wilkinson 1961). Let $\hat{x}$ be the solution computed by LU with partial pivoting in IEEE double precision. Then:

$$(A + \Delta A)\,\hat{x} = b, \qquad \|\Delta A\|_\infty \le p(n)\, \rho_n\, u\, \|A\|_\infty,$$

where $p(n)$ is a modest polynomial constant and $\rho_n$ is the growth factor.
Forward error bound: The forward error satisfies:

$$\frac{\|\hat{x} - x\|}{\|x\|} \lesssim p(n)\, \rho_n\, \kappa(A)\, u,$$

where $\kappa(A) = \|A\|\,\|A^{-1}\|$ is the condition number. This is the key identity: numerical error = roundoff $\times$ growth factor $\times$ condition number.
Implication for practice: If $\kappa(A) = 10^k$ in double precision, you lose about $k$ of the available $\sim 16$ digits of accuracy. With $\kappa(A) = 10^{10}$, you have about 6 correct digits in $\hat{x}$.
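A quick empirical check of this rule of thumb, using the notoriously ill-conditioned Hilbert matrix (a standard test case; the exact printed values vary by platform):

```python
import numpy as np
from scipy.linalg import hilbert, solve

# Rule of thumb in action: with kappa ~ 10^k, expect roughly 16 - k correct digits
for n in [4, 8, 12]:
    H = hilbert(n)
    x_true = np.ones(n)
    x_hat = solve(H, H @ x_true)
    kappa = np.linalg.cond(H)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"n={n:2d}  kappa={kappa:.1e}  relative error={err:.1e}")
```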
Backward stability of QR: QR (Householder) is backward stable with growth factor $1$ - Householder reflectors are orthogonal and preserve norms exactly. This is why QR is preferred for ill-conditioned systems.
3.8 Blocked LU for Cache Efficiency
Modern hardware has a deep memory hierarchy (L1: ~32 KB, L2: ~256 KB, L3: ~8 MB, then DRAM). The key metric for algorithmic performance is arithmetic intensity: floating-point operations per word of data moved. BLAS operations have very different intensities:

| Operation | BLAS level | Flops | Data | Intensity |
|---|---|---|---|---|
| $y \leftarrow Ax$ (DGEMV) | 2 | $2n^2$ | $\sim n^2$ | $O(1)$ |
| $C \leftarrow AB$ (DGEMM) | 3 | $2n^3$ | $\sim 3n^2$ | $O(n)$ |

Blocked LU reformulates the algorithm so that the dominant computation is DGEMM (matrix-matrix multiply), achieving $O(b)$ arithmetic intensity for block size $b$ and near-peak GPU/CPU throughput.
Block algorithm (block size $b$):

for k = 0 to n/b - 1:
    # Factor the diagonal block (unblocked, BLAS-2)
    P_k, L_kk, U_kk = LU_factor(A[kb:(k+1)b, kb:(k+1)b])   # O(b^3)
    # Apply the permutation to the rest of the block row
    A[kb:(k+1)b, (k+1)b:n] = P_k @ A[kb:(k+1)b, (k+1)b:n]
    # Solve for the U panel to the right (triangular solve, BLAS-3: DTRSM)
    U_right = solve_lower_triangular(L_kk, A[kb:(k+1)b, (k+1)b:n])
    # Solve for the L panel below: L_below @ U_kk = A_below (DTRSM)
    L_below = solve_upper_triangular_from_right(U_kk, A[(k+1)b:n, kb:(k+1)b])
    # Update the trailing submatrix (BLAS-3: DGEMM)
    A[(k+1)b:n, (k+1)b:n] -= L_below @ U_right   # <- the bottleneck

(Production codes factor the full panel A[kb:n, kb:(k+1)b] so the pivot search spans all remaining rows.) The trailing submatrix update (A22 -= L_below @ U_right) is a pure DGEMM and dominates the total computation. On a modern CPU/GPU with peak DGEMM performance, this block formulation achieves 80-95% of theoretical peak.
LAPACK routine: DGETRF implements blocked LU with partial pivoting. The block size is tuned automatically per platform via the ilaenv oracle.
For AI: Blocked LU is the algorithm behind PyTorch's torch.linalg.lu_factor, jax.scipy.linalg.lu, and scipy's lu_factor. For GPU, cuSOLVER implements batched blocked LU for thousands of small systems simultaneously - critical for K-FAC training.
3.9 Solving Ax = b via LU
Given the factorization $PA = LU$, solving $Ax = b$ proceeds in four steps:
- Apply permutation: $b' = Pb$ ($O(n)$, just an index lookup)
- Forward solve: solve $Ly = b'$ ($O(n^2)$, forward substitution)
- Backward solve: solve $Ux = y$ ($O(n^2)$, backward substitution)
- (Optional) Iterative refinement: compute $r = b - Ax$, solve $LU\,\delta = Pr$ for the correction $\delta$, update $x \leftarrow x + \delta$
The total cost after factorization is $2n^2 + O(n)$ flops - negligible compared to the $\frac{2}{3}n^3$ factorization cost for large $n$.
Multiple right-hand sides: For $k$ different vectors $b_1, \dots, b_k$, one factorization costing $\frac{2}{3}n^3$ is followed by $k$ pairs of triangular solves, each costing $2n^2$. This is the key economic argument for LU: amortize the factorization cost over many solves.
Computing the determinant: $\det A = (-1)^s \prod_k u_{kk}$, where $s$ is the number of row swaps. Since $L$ is unit lower triangular, $\det L = 1$.
Computing $A^{-1}$: Solve $AX = I$ for $X$ (one factorization, $n$ pairs of triangular solves). But explicitly forming $A^{-1}$ is almost always unnecessary and should be avoided - solve the system directly instead. The sketch below shows the factor-once, solve-many pattern.
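```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))

lu, piv = lu_factor(A)                 # O(n^3) once; U on/above the diagonal, L below
for _ in range(10):                    # each extra solve is only O(n^2)
    b = rng.standard_normal(500)
    x = lu_solve((lu, piv), b)

# det A = (-1)^s * prod(u_kk); use logs to avoid overflow for large n
s = np.count_nonzero(piv != np.arange(500))
sign = (-1.0) ** s * np.prod(np.sign(np.diag(lu)))
logabsdet = np.sum(np.log(np.abs(np.diag(lu))))
assert np.allclose((sign, logabsdet), np.linalg.slogdet(A))
```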
3.10 Rank-Deficient LU
When $A$ is singular or numerically rank-deficient, the LU factorization (even with pivoting) produces a $U$ with some diagonal entries near zero. The factorization still "works" mechanically but produces $u_{kk} \approx 0$ at positions corresponding to dependent columns.
Signature of rank deficiency: trailing diagonal entries of $U$ that are negligible relative to $\|A\|$.
Problem: With partial pivoting, the threshold for "approximately zero" is ambiguous. If $|u_{kk}| < \tau\, \|A\|$ for all $k > r$, we declare the matrix to have numerical rank $r$. But the choice of tolerance $\tau$ is problem-dependent.
Rank-revealing LU: Column-pivoted LU (complete or rook pivoting) provides better rank-revelation: the diagonal entries of $U$ decay in a manner that reflects the true rank structure.
Better alternative for rank-deficient problems: Use rank-revealing QR (RRQR, 4.6) or the SVD. The SVD provides the definitive rank determination via singular value thresholding - but costs $O(n^3)$ with a larger constant than LU. RRQR provides a cheaper, nearly-as-reliable alternative.
For AI: Neural network weight matrices often have approximate low rank (Hu et al., 2022 show that fine-tuning adapters are intrinsically low-dimensional). Rank-deficient LU can be used to detect this structure, but RRQR or randomized SVD are preferred in practice.
4. QR Factorization: Computational Algorithms
4.1 From Gram-Schmidt to Algorithms
Recall from 05: The QR factorization of an $m \times n$ matrix $A$ ($m \ge n$) decomposes $A = QR$ into an orthonormal factor $Q \in \mathbb{R}^{m \times n}$ (or $m \times m$ for the full QR) and an upper triangular factor $R \in \mathbb{R}^{n \times n}$. This was constructed via Gram-Schmidt orthogonalization in 05.
The problem with classical Gram-Schmidt: Classical Gram-Schmidt (CGS) is mathematically correct but numerically unstable for ill-conditioned matrices. The computed $Q$ can lose orthogonality catastrophically: for a matrix with condition number $\kappa$, the computed $Q$ from CGS may have $\|Q^\top Q - I\| = O(\kappa^2 u)$, causing significant errors in subsequent computations.
Modified Gram-Schmidt (MGS): Reorders the orthogonalization to reduce error accumulation. MGS is numerically equivalent to QR of a slightly perturbed matrix, giving $\|Q^\top Q - I\| = O(\kappa u)$. Still insufficient for highly ill-conditioned problems.
Householder and Givens: Both achieve $\|Q^\top Q - I\| = O(u)$ regardless of $\kappa(A)$, making them the preferred algorithms for production use. The key is that they build $Q$ as a product of exactly orthogonal elementary transformations (reflectors or rotations), never losing orthogonality during construction.
For AI: numpy.linalg.qr and scipy.linalg.qr use LAPACK DGEQRF (Householder QR). torch.linalg.qr similarly calls cuSOLVER which uses Householder on GPU. Understanding the algorithmic basis helps you interpret condition number warnings and numerical precision behavior.
4.2 Householder Reflections
Definition. A Householder reflector (or Householder transformation) is a matrix of the form:

$$H = I - 2\,\frac{vv^\top}{v^\top v},$$

where $v \neq 0$ is the Householder vector. The matrix is symmetric ($H = H^\top$) and orthogonal ($H^\top H = I$), so $H^2 = I$ - it is its own inverse.
Geometric interpretation: $H$ reflects vectors across the hyperplane perpendicular to $v$. Every point in the hyperplane satisfies $Hx = x$ (invariant). Every point along the direction $v$ satisfies $Hv = -v$ (negated).
Key property: Given any nonzero vector $x$, there exists a Householder reflector such that $Hx = \alpha e_1$ with $|\alpha| = \|x\|_2$. That is, $H$ maps $x$ to a multiple of the first standard basis vector.
Construction (numerical form): Given $x$ to be zeroed below its first entry, set:

$$v = x - \alpha e_1, \qquad \alpha = -\operatorname{sign}(x_1)\, \|x\|_2.$$

The sign convention (critical for stability): We choose $\alpha = -\operatorname{sign}(x_1)\|x\|_2$ to avoid cancellation in computing $v_1 = x_1 - \alpha$. If $x_1 > 0$, this gives $v_1 = x_1 + \|x\|_2$, avoiding the catastrophic subtraction that would occur with $\alpha = +\|x\|_2$.
Cost of applying $H$: Forming $HC$ via an explicit matrix product costs $O(m^2 n)$, but using the formula $HC = C - 2v\,(v^\top C)$ costs only $O(mn)$: first compute the row vector $w^\top = v^\top C$, then update $C \leftarrow C - 2vw^\top$. This is the implicit representation of $H$.
4.3 Householder QR Algorithm
Algorithm: Apply Householder reflectors successively to zero out the subdiagonal entries of each column. Runnable NumPy form:

def householder_qr(A):
    # Overwrites a copy of A with R; collects the normalized Householder vectors
    A = A.astype(float).copy()
    m, n = A.shape
    vs = []
    for k in range(n):
        a = A[k:, k]
        sign = 1.0 if a[0] >= 0 else -1.0
        alpha = -sign * np.linalg.norm(a)               # sign convention avoids cancellation
        v = a.copy()
        v[0] -= alpha
        v /= np.linalg.norm(v)                          # normalized Householder vector
        A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])   # apply H_k implicitly
        vs.append(v)                                    # LAPACK stores these below the diagonal
    return np.triu(A), vs
After $n$ steps, $A$ has been overwritten with $R$ in the upper triangle and the Householder vectors in the lower triangle (the implicit storage of $Q$).
Implicit Q representation: The full matrix $Q = H_1 H_2 \cdots H_n$ has size $m \times m$. Forming $Q$ explicitly costs additional work and storage. LAPACK's DORGQR computes the explicit $Q$ when needed (e.g., for the thin QR, only the first $n$ columns of $Q$ are formed).
Complexity:
- Applying $H_k$ to the trailing submatrix: $O((m-k)(n-k))$ per step
- Total: $2mn^2 - \frac{2}{3}n^3$ flops
- For square $A$: $\frac{4}{3}n^3$ flops (vs $\frac{2}{3}n^3$ for LU)
Numerical stability: Householder QR is backward stable. The computed $\hat{Q}, \hat{R}$ satisfy:

$$\hat{Q}\hat{R} = A + \Delta A, \qquad \|\Delta A\| \le p(m,n)\, u\, \|A\|,$$

with $p(m,n)$ a modest polynomial constant and $u$ machine epsilon. The orthogonality error satisfies $\|\hat{Q}^\top \hat{Q} - I\| = O(u)$, independent of $\kappa(A)$.
LAPACK: DGEQRF computes the Householder QR in blocked form (block Householder, using WY representation). DORMQR applies or to a matrix without forming explicitly.
For AI: The Householder algorithm is used in PyTorch's torch.linalg.qr, in scipy's linalg.qr, and in LAPACK DGEQRF. It is the foundation for numerically reliable QR-based least-squares solvers used in linear probing, weight matrix analysis, and second-order methods.
4.4 Givens Rotations
Definition. A Givens rotation $G(i, j, \theta)$ is the identity matrix with a $2 \times 2$ rotation embedded in rows and columns $i$ and $j$:

$$g_{ii} = c, \qquad g_{jj} = c, \qquad g_{ij} = s, \qquad g_{ji} = -s,$$

with $c = \cos\theta$ and $s = \sin\theta$ in positions $(i,i)$, $(j,j)$, $(i,j)$, $(j,i)$.
Key property: $G(i,j,\theta)$ is orthogonal. Left-multiplying by $G(i,j,\theta)$ rotates the $(i, j)$ coordinate plane by angle $\theta$.
Zeroing a specific entry: Given $x$ with entries $x_i$ and $x_j$, choose:

$$r = \sqrt{x_i^2 + x_j^2}, \qquad c = \frac{x_i}{r}, \qquad s = \frac{x_j}{r}.$$

Then $(Gx)_i = r$ and $(Gx)_j = 0$ - the rotation zeros out $x_j$ while replacing $x_i$ with $r$.
Numerically stable computation (in the style of LAPACK's DLARTG; runnable form):

def givens(xi, xj):
    # Returns (c, s, r) so that the rotation maps (xi, xj) to (r, 0);
    # scaling by the larger entry avoids overflow/underflow in the hypotenuse
    if xj == 0.0:
        return 1.0, 0.0, xi
    if abs(xj) > abs(xi):
        t = xi / xj
        s = 1.0 / np.sqrt(1.0 + t * t)
        return s * t, s, xj / s
    t = xj / xi
    c = 1.0 / np.sqrt(1.0 + t * t)
    return c, c * t, xi / c

This avoids overflow and underflow in computing $r = \sqrt{x_i^2 + x_j^2}$.
Cost: Applying $G(i,j,\theta)$ to an $m \times n$ matrix costs $6n$ flops (touching only rows $i$ and $j$). Constructing $c, s$ costs $O(1)$.
For AI: Givens rotations appear in updating QR factorizations when a new row is appended (streaming QR), which is used in online learning and in updating Cholesky factors after rank-1 additions (Cholesky rank-1 update, relevant to incremental GP regression).
4.5 Givens QR and Sparse Matrices
Algorithm: Apply Givens rotations sequentially to zero out subdiagonal entries column by column. To zero $a_{ij}$ ($i > j$), apply $G(j, i, \theta)$ on the left.
For a dense $m \times n$ matrix, the total number of Givens rotations needed is $O(mn)$, each costing $O(n)$ flops - total $\approx 3mn^2$, roughly 50% more than Householder QR for dense matrices.
Advantage for sparse matrices: Each Givens rotation touches exactly two rows. If $A$ has a known sparsity pattern, Givens rotations can be sequenced to minimize fill-in (new nonzeros created by the transformation). In contrast, a Householder reflector applied to a panel touches the entire panel at once, potentially creating dense fill.
Banded matrices: For a banded matrix with bandwidth $b$, Givens QR costs $O(nb^2)$, and the resulting $R$ remains banded. This is critical for signal processing and finite-element computations.
Online/streaming QR: When a new row is appended to $A$, the existing QR factorization can be updated with $n$ Givens rotations, costing $O(n^2)$ rather than recomputing from scratch in $O(mn^2)$.
For AI: Streaming QR via Givens is used in online learning settings where data arrives sequentially. It also underlies incremental SVD algorithms used for streaming PCA (e.g., in continual learning, where new tasks arrive without access to past data).
4.6 Column-Pivoted QR (RRQR)
Motivation: Standard QR doesn't reveal rank: the diagonal entries of $R$ need not decay in a way that makes rank determination reliable. Column-pivoted QR (RRQR) reorders the columns of $A$ to expose rank structure in $R$.
Algorithm: At step $k$, before computing the $k$-th Householder reflector, swap column $k$ with the column of maximum 2-norm among the remaining columns. This produces:

$$A\Pi = QR,$$

where $\Pi$ is a permutation and $R$ has non-increasing diagonal magnitudes: $|r_{11}| \ge |r_{22}| \ge \cdots \ge |r_{nn}|$.
Rank estimation: If $A$ has numerical rank $r$, then the gap $|r_{rr}| / |r_{r+1,r+1}|$ is large. The threshold for declaring rank $r$ is $|r_{r+1,r+1}| < \tau\, |r_{11}|$ for a tolerance $\tau$ (e.g., $\tau = \varepsilon \cdot \max(m, n)$).
RRQR theorem (Golub 1965; Chandrasekaran & Ipsen 1994): Strong RRQR algorithms guarantee bounds of the form:

$$\sigma_k(R_{11}) \ge \frac{\sigma_k(A)}{q(k,n)}, \qquad \sigma_1(R_{22}) \le q(k,n)\, \sigma_{k+1}(A),$$

for a low-degree polynomial $q(k,n)$, ensuring that the leading $k \times k$ block $R_{11}$ captures essentially all the energy of the matrix.
Connection to SVD: RRQR is a cheap approximation to the SVD at ordinary QR cost ($O(mn^2)$). It provides the same rank information and a good basis for the column space, but only approximates the singular values. For exact singular values, use the SVD, which also costs $O(mn^2)$ but with a larger constant.
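A small demonstration of RRQR-based rank estimation with scipy (synthetic rank-30 matrix; the threshold choice is illustrative):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
# Build a 200 x 100 matrix of exact rank 30
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 100))

R, piv = qr(A, mode='r', pivoting=True)   # column-pivoted QR, R only
d = np.abs(np.diag(R))
rank_est = np.sum(d > d[0] * 1e-10)       # threshold relative to |r_11|
print(rank_est)                            # 30
```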
For AI:
- LoRA (Hu et al., 2022): Low-Rank Adaptation uses rank-$r$ decompositions of weight updates $\Delta W = BA$. Choosing $r$ requires rank estimation - RRQR is one approach.
- DoRA (Liu et al., 2024): Decomposes weight matrices into magnitude and direction components; rank-revealing factorizations help identify the principal directions.
- Intrinsic dimensionality: RRQR-based rank estimation reveals the intrinsic dimensionality of weight matrices, guiding compression decisions.
4.7 Tall-Skinny QR (TSQR)
Motivation: For matrices with $m \gg n$ (tall and skinny - e.g., $m = 10^7$ examples, $n = 100$ features), standard Householder QR streams the full matrix through the memory hierarchy at each of the $n$ steps, and communication rather than arithmetic becomes the bottleneck for distributed or GPU computation.
TSQR algorithm (Demmel et al., 2008):
- Local factorization: Partition $A$ into $p$ row panels $A_1, \dots, A_p$ (one per processor/GPU block).
- Local QR: Factor each $A_i = Q_i R_i$ independently (no communication).
- Reduction tree: Stack pairs $\begin{pmatrix} R_i \\ R_j \end{pmatrix}$ and factor the stack's QR. Repeat up the tree.
- Result: the root $R$ is the final upper triangular factor. $Q$ can be recovered by traversing the reduction tree backward.
Communication cost: TSQR communicates $O(n^2 \log p)$ words across $p$ processors, which matches the known lower bound - it is communication-optimal.
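The two-panel case is easy to verify directly in NumPy (the factors agree with the monolithic QR up to row signs):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10_000, 50))
A1, A2 = A[:5_000], A[5_000:]               # two local panels

# Stage 1: independent local QRs (no communication)
Q1, R1 = np.linalg.qr(A1)
Q2, R2 = np.linalg.qr(A2)

# Stage 2: factor the stacked small R's (the reduction step)
_, R = np.linalg.qr(np.vstack([R1, R2]))    # R is the final 50 x 50 factor

# R agrees with the direct QR of A up to the sign of each row
R_direct = np.linalg.qr(A)[1]
assert np.allclose(np.abs(R), np.abs(R_direct))
```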
For AI:
- Distributed training: TSQR is used in distributed least-squares and in computing the QR of gradient matrices across multiple GPUs.
- Randomized methods: TSQR underlies the "sketch-then-QR" approach for randomized low-rank factorization.
- Attention-efficient computation: The block-tile structure of FlashAttention-3 (Shah et al., 2024) is inspired by the communication-optimal blocking patterns of TSQR.
4.8 QR for Least Squares
Problem: Given $A \in \mathbb{R}^{m \times n}$ with $m > n$ (overdetermined) and $b \in \mathbb{R}^m$, find $x$ minimizing $\|Ax - b\|_2$.
Method 1 (Normal equations): $A^\top A\, x = A^\top b$. Form $A^\top A$, factor via Cholesky, solve. Cost: $mn^2 + \frac{1}{3}n^3$. Problem: $\kappa(A^\top A) = \kappa(A)^2$ - squaring the condition number doubles the digits lost.
Method 2 (QR): Factor $A = QR$ (thin QR), then:

$$\|Ax - b\|_2^2 = \|Rx - Q^\top b\|_2^2 + \|(I - QQ^\top)\, b\|_2^2.$$

Minimizing over $x$: solve $Rx = Q^\top b$ (upper triangular, backward substitution). Cost: $2mn^2 - \frac{2}{3}n^3$ for the QR, $O(mn)$ for $Q^\top b$, $O(n^2)$ for the backsolve. The relevant condition number is $\kappa(A)$, not $\kappa(A)^2$.
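A quick numerical comparison on an ill-conditioned Vandermonde problem (exact error magnitudes vary by platform; the comments give rough expectations):

```python
import numpy as np

t = np.linspace(0, 1, 100)
A = np.vander(t, 12)                      # ill-conditioned: kappa(A) ~ 1e7
rng = np.random.default_rng(3)
x_true = rng.standard_normal(12)
b = A @ x_true

x_ne = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations: works at kappa^2
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)        # QR: works at kappa

print(np.linalg.norm(x_ne - x_true) / np.linalg.norm(x_true))  # many digits lost
print(np.linalg.norm(x_qr - x_true) / np.linalg.norm(x_true))  # near machine precision
```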
Stability comparison:

| Method | Condition amplification | Cost | When to use |
|---|---|---|---|
| Normal equations + Cholesky | $\kappa(A)^2$ | $mn^2 + n^3/3$ | Well-conditioned $A$, $m \gg n$ |
| QR (Householder) | $\kappa(A)$ | $2mn^2 - 2n^3/3$ | General purpose, numerically safe |
| RRQR | $\kappa(A)$, reveals rank | $2mn^2$ (slightly more than QR) | Rank-deficient or near-singular |
| SVD (truncated) | None (sets small SVs to zero) | $O(mn^2)$, largest constant | Ill-conditioned, rank-deficient |
For AI:
- Linear probing: Fitting a linear classifier on top of frozen features is an overdetermined least-squares problem. sklearn.linear_model.LinearRegression uses LAPACK DGELSD (divide-and-conquer SVD); DGELSY (column-pivoted QR) is the rank-revealing alternative.
- Ridge regression: $\min_x \|Ax - b\|^2 + \lambda\|x\|^2$ is equivalent to an augmented least-squares system (see 8.1), which QR solves stably.
- Attention weight regression: Computing attention pattern regressions (e.g., for mechanistic interpretability) uses QR-based least squares.
5. Cholesky Factorization (Computational Recap)
5.1 Brief Overview
Full treatment: The complete theory of Cholesky factorization - existence proofs, LDL^T, modified Cholesky, log-determinant, connection to PSD cone - is in 07: Positive Definite Matrices. This section covers only the computational aspects not covered in 07: the relationship to LU, blocked algorithms, and LAPACK routines.
For a symmetric positive definite matrix $A$, the Cholesky factorization is:

$$A = LL^\top,$$

where $L$ is lower triangular with positive diagonal entries $\ell_{ii} > 0$. The factorization exists and is unique for every SPD $A$ (theorem in 07).
Why Cholesky for SPD systems:
- Efficiency: Cholesky costs $\frac{1}{3}n^3$ flops - exactly half of LU's $\frac{2}{3}n^3$ - because symmetry halves the work.
- Storage: Only the lower triangle needs to be stored ($n(n+1)/2$ entries vs $n^2$ for LU).
- No pivoting needed: Positive definiteness guarantees a positive pivot at every step without any row interchanges.
- Stability: Cholesky is backward stable for SPD matrices without any pivoting.
5.2 Cholesky as Specialized LU
The connection between Cholesky and LU: for a symmetric positive definite $A$, the LU factorization (without pivoting, which exists since all leading principal minors of $A$ are positive) gives $A = LU$. But symmetry implies $U = D L^\top$, where $D = \operatorname{diag}(u_{11}, \dots, u_{nn})$, so $A = LDL^\top$ (the LDL^T form). Since $A \succ 0$, all diagonal entries $d_{ii} > 0$, and we can define $G = LD^{1/2}$ to get $A = GG^\top$ - the Cholesky factorization.
Cholesky algorithm ($j$th column):

$$\ell_{jj} = \sqrt{a_{jj} - \sum_{k=1}^{j-1} \ell_{jk}^2}, \qquad \ell_{ij} = \frac{1}{\ell_{jj}}\left(a_{ij} - \sum_{k=1}^{j-1} \ell_{ik}\ell_{jk}\right), \quad i > j.$$

The argument of the square root must be positive - this is guaranteed by positive definiteness.
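These two formulas translate directly to code; a minimal NumPy sketch (helper name ours):

```python
import numpy as np

def cholesky_unblocked(A):
    """Column-by-column Cholesky, following the formulas above."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]      # positive iff A is SPD
        if d <= 0:
            raise np.linalg.LinAlgError("matrix is not positive definite")
        L[j, j] = np.sqrt(d)
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])
assert np.allclose(cholesky_unblocked(A) @ cholesky_unblocked(A).T, A)
```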
Cost analysis:
- Column $j$ costs one square root plus $\approx 2(n-j)j$ flops for the updates.
- Total: $\frac{1}{3}n^3 + O(n^2)$, exactly half of LU.
5.3 LDL^T for Indefinite Systems
For symmetric indefinite matrices (not necessarily PD), Cholesky cannot be applied directly (negative square roots). The LDL^T factorization computes:

$$PAP^\top = LDL^\top,$$

where $L$ is unit lower triangular, $D$ is block diagonal with $1 \times 1$ and $2 \times 2$ blocks (to handle negative eigenvalues), and $P$ is a permutation.
Bunch-Kaufman pivoting: The $2 \times 2$ blocks in $D$ capture pairs of eigenvalues of opposite sign, avoiding the square root altogether. The pivoting strategy (Bunch & Kaufman 1977) bounds the element growth - a stability guarantee analogous to $|\ell_{ij}| \le 1$ for partial pivoting in LU.
Applications in ML:
- Indefinite Hessians: At saddle points (common early in neural network training), the Hessian is indefinite. LDL^T factors it without the positive-definiteness requirement, enabling second-order descent directions even at saddle points.
- Modified Cholesky: Adding a diagonal shift $\tau I$ to make $A + \tau I \succ 0$ before Cholesky factorization is the standard approach in L-BFGS-B and quasi-Newton methods. See 07 for the modified Cholesky algorithm.
LAPACK routine: DSYTRF implements LDL^T with Bunch-Kaufman pivoting.
5.4 Blocked Cholesky and LAPACK dpotrf
Blocked Cholesky follows the same blocked structure as blocked LU. For block size $b$:

for k = 0 to n/b - 1:
    # Factor the diagonal block
    L_kk = cholesky(A[kb:(k+1)b, kb:(k+1)b])   # O(b^3)
    # Solve for the panel below: L_below @ L_kk^T = A_below  (DTRSM)
    L_below = solve_lower_triangular(L_kk, A[(k+1)b:n, kb:(k+1)b].T).T
    # Update the trailing submatrix (DSYRK)
    A[(k+1)b:n, (k+1)b:n] -= L_below @ L_below.T

The trailing submatrix update (A22 -= L_below @ L_below^T) is a symmetric rank-$b$ update, computed by LAPACK's DSYRK (symmetric rank-k update), which exploits symmetry to halve the work relative to DGEMM.
LAPACK routine DPOTRF: The production Cholesky routine. On entry: the upper or lower triangle of $A$. On exit: $L$ (or $U$ for upper Cholesky). Returns an error flag INFO (0 for success; $k > 0$ if the $k$-th leading minor is not positive definite).
For AI: DPOTRF is called by:
- scipy.linalg.cholesky (wraps LAPACK)
- numpy.linalg.cholesky (wraps LAPACK)
- torch.linalg.cholesky (calls cuSOLVER cusolverDnDpotrf on GPU)
- Every GP regression library (GPyTorch, GPflow, etc.) for computing $K^{-1}y$ and $\log\det K$
6. Numerical Stability and Error Analysis
6.1 Forward vs Backward Error
Forward error: The actual difference between the computed answer $\hat{x}$ and the true answer $x$:

$$E_{\text{fwd}} = \frac{\|\hat{x} - x\|}{\|x\|}.$$

Backward error: The smallest perturbation to the input data that makes $\hat{x}$ an exact solution:

$$E_{\text{bwd}} = \min\left\{ \frac{\|\Delta A\|}{\|A\|} \;:\; (A + \Delta A)\,\hat{x} = b \right\}.$$

The fundamental inequality:

$$E_{\text{fwd}} \lesssim \kappa(A)\, E_{\text{bwd}}.$$

This separates the algorithm's contribution (backward error, controlled by stability) from the problem's inherent difficulty (condition number $\kappa(A)$, intrinsic to the matrix). A backward-stable algorithm does the best possible: the forward error cannot be pushed below about $\kappa(A)\, u$.
Why backward error is the right measure: A backward-stable algorithm produces exact solutions to slightly perturbed problems. If the input data itself has error of order $u$ (which it does - all real data has measurement error), then backward-stable computation introduces no additional error beyond what the input uncertainty already implies.
Example: Computing a difference $x - y$ of nearly equal numbers in floating-point. The subtraction itself is backward stable - the computed result is the exact difference of inputs perturbed by less than $u$ - yet if $x$ and $y$ carry earlier rounding errors, cancellation amplifies them enormously, producing a large forward error. The algorithm is blameless; the amplification reflects the problem's conditioning.
6.2 The Fundamental Theorem of Backward Stability
Theorem (Backward Stability, informal). An algorithm $\tilde{f}$ for computing $y = f(x)$ is backward stable if for every input $x$, the computed output satisfies:

$$\tilde{f}(x) = f(x + \Delta x), \qquad \|\Delta x\| \le p(n)\, u\, \|x\|,$$

where $p(n)$ is a modest polynomial in the problem size and $u$ is machine epsilon.
Corollary: For a backward stable algorithm, the forward error satisfies:

$$\frac{\|\tilde{f}(x) - f(x)\|}{\|f(x)\|} \lesssim \kappa_f(x)\, p(n)\, u,$$

where $\kappa_f(x)$ is the condition number of $f$ at $x$. The condition number determines how much the backward perturbation amplifies into forward error.
LU with partial pivoting: backward stable up to the growth factor. Not backward stable in theory (since $\rho_n$ can reach $2^{n-1}$), but backward stable in practice (since $\rho_n$ rarely exceeds a modest constant for typical inputs).
Householder QR: backward stable unconditionally. The computed $\hat{Q}$ satisfies $\|\hat{Q}^\top\hat{Q} - I\| = O(u)$ regardless of $\kappa(A)$.
Cholesky (for SPD): backward stable unconditionally. No pivoting needed; positive definiteness guarantees numerical stability.
6.3 Stability Comparison: LU vs QR vs Cholesky
| Property | LU (partial pivot) | Householder QR | Cholesky (SPD) |
|---|---|---|---|
| Growth factor | $\le 2^{n-1}$ (worst case) | 1 (exact) | 1 (exact) |
| Backward stable? | In practice, yes | Unconditionally | Unconditionally |
| Orthogonality loss | N/A | $O(u)$ | N/A |
| Requires pivoting? | Yes (partial pivot) | No (column pivoting only for RRQR) | No |
| Cost (leading term) | $\frac{2}{3}n^3$ | $\frac{4}{3}n^3$ (square case) | $\frac{1}{3}n^3$ |
| Memory | $n^2$ (in place) | $n^2$ (in place, implicit $Q$) | $n^2/2$ (lower triangle) |
| Handles all square matrices? | Yes (nonsingular) | Yes | No (SPD only) |
Rule of thumb:
- Have a symmetric positive definite matrix? Use Cholesky (twice as fast as LU, always stable).
- Have a general square system? Use LU with partial pivoting (standard default in all libraries).
- Have an overdetermined least-squares problem? Use Householder QR.
- Need rank determination or have a rank-deficient matrix? Use RRQR or SVD.
- Have an ill-conditioned system where every digit matters? Use QR (avoids condition number squaring).
6.4 Mixed Precision and Iterative Refinement
Mixed precision computation: Modern GPUs (NVIDIA A100, H100) support FP64, FP32, FP16, and BF16 with very different throughputs - FP32 is typically twice FP64, and FP16/BF16 tensor-core throughput is an order of magnitude higher still. Training AI models almost exclusively uses FP16 or BF16 to maximize throughput; but this raises the unit roundoff from $u \approx 10^{-16}$ (FP64) to $u \approx 5 \times 10^{-4}$ (FP16).
Iterative refinement algorithm:

1. Factor PA = LU in low precision (FP16 or FP32)
2. Solve A x_0 = b using L, U in low precision
3. For k = 0, 1, 2, ...:
   a. Compute the residual r_k = b - A x_k (in HIGH precision, FP64)
   b. Solve LU d_k = P r_k for the correction (in LOW precision - cheap)
   c. Update x_{k+1} = x_k + d_k
4. Stop when ||r_k|| / ||b|| is below the target tolerance
Convergence: Iterative refinement converges linearly, contracting the error by a factor $\approx u_{\text{low}}\, \kappa(A)$ per iteration. If $\kappa(A) \ll 1/u_{\text{low}}$, refinement reaches FP64 accuracy in 2-3 steps.
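A minimal NumPy sketch of the pattern, with FP32 standing in for the low-precision factorization:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(4)
n = 300
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Factor once in single precision (stand-in for FP16/BF16 on a GPU)
lu32, piv = lu_factor(A.astype(np.float32))
x = lu_solve((lu32, piv), b.astype(np.float32)).astype(np.float64)

for _ in range(3):
    r = b - A @ x                                     # residual in FP64
    d = lu_solve((lu32, piv), r.astype(np.float32))   # cheap low-precision correction
    x = x + d.astype(np.float64)
    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # shrinks each iteration
```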
NVIDIA cuSOLVER IR (Iterative Refinement): Implements exactly this pattern: BF16 factorization (8x throughput vs FP64) + FP64 residual computation + BF16 solve for correction. Used in NVIDIA's deep learning solver libraries and in the mixed-precision training of large language models.
For AI: NVIDIA's "GMRES-IR" (GMRES Iterative Refinement) in cuSOLVER applies this to poorly conditioned systems arising in transformer inference optimization. K-FAC implementations use this to factor Fisher information blocks quickly in FP16 and refine to FP32 accuracy.
6.5 Ill-Conditioned Systems and Regularization
When $\kappa(A)$ is very large, even backward-stable algorithms produce solutions with large forward error. The cure is regularization - changing the problem to a nearby well-conditioned one.
Tikhonov regularization: Replace $\min_x \|Ax - b\|^2$ with $\min_x \|Ax - b\|^2 + \lambda\|x\|^2$. The solution is $x_\lambda = (A^\top A + \lambda I)^{-1} A^\top b$. The regularized condition number is:

$$\kappa_\lambda = \frac{\sigma_{\max}^2 + \lambda}{\sigma_{\min}^2 + \lambda},$$

which decreases as $\lambda$ increases. At $\lambda = 0$: condition number $\kappa(A)^2$. As $\lambda \to \infty$: condition number $\to 1$.
Truncated factorizations: Set all singular values (or diagonal entries of $R$ or $U$) below a threshold to zero. This is the basis of truncated SVD and RRQR-based pseudoinverse computation.
For AI: The Adam optimizer's $\epsilon$ parameter (default $10^{-8}$) is exactly Tikhonov regularization applied to the per-parameter Hessian approximation. K-FAC's damping parameter similarly regularizes the Fisher information matrix to prevent ill-conditioned natural gradient steps.
7. Blocked Algorithms and High-Performance Computing
7.1 Cache Hierarchy and Algorithm Design
Modern computer architecture has a deep memory hierarchy:
| Level | Size | Latency | Bandwidth |
|---|---|---|---|
| L1 cache | 32-64 KB | 4 cycles | 1 TB/s |
| L2 cache | 256-512 KB | 12 cycles | 400 GB/s |
| L3 cache | 8-32 MB | 40 cycles | 200 GB/s |
| DRAM | 16-512 GB | 100+ cycles | 50 GB/s |
| GPU HBM | 16-80 GB | 80 cycles | 2 TB/s |
The roofline model: An algorithm's performance is limited by the minimum of:
- Peak arithmetic throughput (FLOP/s)
- Memory bandwidth $\times$ arithmetic intensity (flops per byte moved)
A naive matrix-vector multiply reads $O(n^2)$ words and performs $2n^2$ flops - arithmetic intensity $O(1)$. It is memory bandwidth bound: DRAM bandwidth, not arithmetic units, limits performance.
A matrix-matrix multiply reads $O(n^2)$ words and performs $2n^3$ flops - arithmetic intensity $O(n)$. For large $n$, it is compute bound and can achieve near-peak FLOP/s.
The key insight for blocked algorithms: Reformulate the bottleneck step as a matrix-matrix multiply (BLAS-3), not a matrix-vector multiply (BLAS-2). This raises the arithmetic intensity from $O(1)$ to $O(b)$ for block size $b$, often an order-of-magnitude speedup on modern hardware.
7.2 Blocked Factorizations
All three factorizations (LU, QR, Cholesky) can be reformulated as blocked algorithms that expose BLAS-3 matrix-matrix multiplies as the dominant computation.
Block LU (Panel + Update):
A = [A11 A12]    Factor:    [L11  0 ] [U11 U12]
    [A21 A22]               [L21 L22] [ 0  U22]

Step 1: Factor A11 = L11 U11 (small, BLAS-2)
Step 2: L21 = A21 U11^{-1} (DTRSM, BLAS-3)
Step 3: U12 = L11^{-1} A12 (DTRSM, BLAS-3)
Step 4: A22 -= L21 U12 (DGEMM, BLAS-3) <- dominates
Step 5: Recurse on A22
WY representation for blocked Householder QR: A product of $b$ Householder reflectors can be written in compact WY form as $H_1 H_2 \cdots H_b = I - VTV^\top$, where $V \in \mathbb{R}^{m \times b}$ collects the Householder vectors and $T \in \mathbb{R}^{b \times b}$ is upper triangular. Applying $I - VTV^\top$ to a matrix is a pair of DGEMMs - pure BLAS-3. LAPACK's DGEQRF uses this representation for blocked Householder QR.
Performance: On a modern CPU with AVX-512, blocked DGEMM typically achieves 85-95% of peak FLOP/s for moderately large matrices. Without blocking (BLAS-2 only), performance drops to a few percent of peak. This gap is why production factorization libraries use block sizes on the order of 32-256.
7.3 LAPACK Routine Reference
| Routine | Operation | Notes |
|---|---|---|
| DGETRF | LU with partial pivoting ($PA = LU$) | Returns pivot indices; in-place |
| DGETRS | Solve $Ax = b$ using DGETRF output | Multiple right-hand sides |
| DGETRI | Compute $A^{-1}$ from DGETRF output | Rarely needed; use DGETRS instead |
| DGEQRF | QR factorization (Householder) | Returns Householder vectors; blocked (WY) |
| DORMQR | Apply $Q$ or $Q^\top$ from DGEQRF | Without forming Q explicitly |
| DORGQR | Form explicit Q from DGEQRF output | Costs extra work and storage |
| DGELSY | Least squares via column-pivoted QR | Rank-deficient safe |
| DPOTRF | Cholesky ($A = LL^\top$ or $A = U^\top U$) | Returns INFO error if not SPD |
| DPOTRS | Solve using DPOTRF output | SPD matrix only |
| DSYTRF | LDL^T for symmetric indefinite | Bunch-Kaufman pivoting |
| DSYTRS | Solve using DSYTRF output | |
LAPACK naming convention: D = double precision, S = single, Z = complex double; GE = general, PO = positive definite, SY = symmetric; TRF = triangular factorization, TRS = triangular solve, TRI = triangular inverse.
In Python: scipy.linalg provides direct LAPACK access via scipy.linalg.lapack.dgetrf, etc. High-level wrappers (scipy.linalg.lu, scipy.linalg.qr, scipy.linalg.cholesky) call these automatically.
7.4 Sparse Factorizations
For sparse matrices (e.g., from finite-element discretization, graph Laplacians, or neural network weight matrices with structured sparsity), dense factorizations waste memory and computation on zeros.
Fill-in: When a sparse matrix is factored, the and factors typically contain more nonzeros than itself. The additional nonzeros are called fill-in.
Reordering to minimize fill-in:
- Minimum degree ordering (AMD): Greedy heuristic - eliminate the variable connected to fewest others first. Approximate minimum degree (AMD) is the production algorithm.
- Nested dissection: Recursively partition the matrix graph using separator sets; asymptotically optimal for regular 2D grids ($O(n \log n)$ fill vs $O(n^{3/2})$ without reordering).
Supernodal methods: CHOLMOD (Davis & Hager, 2009) uses "supernodes" - groups of columns with identical sparsity pattern - and applies dense BLAS-3 operations to each supernode, recovering high arithmetic intensity.
For AI:
- Graph neural networks: The adjacency matrix of a graph is sparse. Solving GNN systems requires sparse Cholesky or LU.
- Attention patterns: Sparse attention (Longformer, BigBird) creates structured-sparse weight matrices. Sparse LU factorization is used in some attention-efficient inference methods.
- Finite-element physics simulation: Differentiable simulation requires backpropagating through sparse LU/Cholesky solves, available in JAX via jax.experimental.sparse.
8. Applications in Machine Learning
8.1 Least Squares via QR
Setup: The standard supervised learning problem with a linear model over $N$ training examples with $d$-dimensional features reduces to the least-squares problem:

$$\min_{w \in \mathbb{R}^d} \|Xw - y\|_2^2, \qquad X \in \mathbb{R}^{N \times d}.$$
QR solution procedure:
- Factor $X = QR$ (thin QR, $Q \in \mathbb{R}^{N \times d}$, $R \in \mathbb{R}^{d \times d}$)
- Compute $z = Q^\top y$ (project $y$ onto the column space of $X$)
- Solve $Rw = z$ (backward substitution)
Why QR over normal equations in practice:
- Normal equations form $X^\top X$, which has condition number $\kappa(X)^2$. For typical neural network feature matrices ($\kappa \approx 10^4$-$10^6$), this squares to $10^8$-$10^{12}$, losing 8-12 digits of precision in double.
- QR maintains condition number $\kappa(X)$ throughout - twice as many correct digits.
Ridge regression via augmented QR: $\min_w \|Xw - y\|^2 + \lambda\|w\|^2$ is equivalent to:

$$\min_w \left\| \begin{pmatrix} X \\ \sqrt{\lambda}\, I \end{pmatrix} w - \begin{pmatrix} y \\ 0 \end{pmatrix} \right\|_2^2.$$

Factor the augmented matrix with QR and solve the resulting triangular system (see the sketch below). This is numerically superior to forming $X^\top X + \lambda I$ explicitly.
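A minimal sketch of the augmented-QR approach (the helper name ridge_qr is ours), checked against the normal-equations solution on a well-conditioned problem:

```python
import numpy as np

def ridge_qr(X, y, lam):
    """Ridge regression via the augmented least-squares system above."""
    N, d = X.shape
    X_aug = np.vstack([X, np.sqrt(lam) * np.eye(d)])   # (N + d) x d
    y_aug = np.concatenate([y, np.zeros(d)])
    Q, R = np.linalg.qr(X_aug)
    return np.linalg.solve(R, Q.T @ y_aug)

rng = np.random.default_rng(5)
X, y = rng.standard_normal((50, 8)), rng.standard_normal(50)
w = ridge_qr(X, y, lam=0.1)
w_ne = np.linalg.solve(X.T @ X + 0.1 * np.eye(8), X.T @ y)  # never do this when kappa is large
assert np.allclose(w, w_ne)
```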
For AI: Linear probing (evaluating feature quality by fitting a linear classifier) and prompt tuning (fitting a linear head on frozen features) both reduce to this problem. The sklearn.linear_model.Ridge with solver='svd' or 'cholesky' uses exactly these methods.
8.2 Newton's Method and Second-Order Optimization
Newton's method for minimizing $f(\theta)$ applies the update:

$$\theta_{t+1} = \theta_t - H^{-1}\,\nabla f(\theta_t), \qquad H = \nabla^2 f(\theta_t),$$

where $H$ is the Hessian. This requires solving the linear system $H\,\Delta\theta = -\nabla f(\theta_t)$ at each step - a direct application of LU or Cholesky factorization.
Cost per step: $O(d^3)$ where $d$ is the number of parameters. For GPT-3 with $d \approx 1.75 \times 10^{11}$, this is astronomically expensive. Full Newton is infeasible for large neural networks.
K-FAC (Kronecker-Factored Approximate Curvature, Martens & Grosse 2015): Approximates the Fisher information matrix (a PSD proxy for the Hessian) as block-diagonal with one Kronecker product per layer:

$$F_\ell \approx A_\ell \otimes G_\ell,$$

where $A_\ell$ and $G_\ell$ are computed from activations and pre-activation gradients respectively. Each block is inverted via the Kronecker identity:

$$(A_\ell \otimes G_\ell)^{-1} = A_\ell^{-1} \otimes G_\ell^{-1},$$

with $A_\ell^{-1}$ and $G_\ell^{-1}$ computed via Cholesky factorization of each small factor separately.
Cost per K-FAC step: $\sum_\ell O(d_\ell^3)$ - a sum over layers, each requiring two Cholesky factorizations of small matrices. For a 10-layer network with hidden dimension 1024: roughly $10^{10}$ flops, feasible on modern hardware.
Shampoo (Gupta et al., 2018; Anil et al., 2020): Maintains full-matrix preconditioners per layer: for a weight matrix with gradient $G_t$, it accumulates $L_t = L_{t-1} + G_t G_t^\top$ and $R_t = R_{t-1} + G_t^\top G_t$ and preconditions the update as $L_t^{-1/4}\, G_t\, R_t^{-1/4}$.
The matrix roots require eigendecomposition or iterative methods. The "Distributed Shampoo" implementation (Google Brain, 2022) runs this in FP32 on accelerators, with Newton-type iterations for the matrix roots.
8.3 Gaussian Process Inference at Scale
GP regression setup: Given observations $(X, y)$ with $y = f(X) + \varepsilon$ and $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, the posterior predictive mean and covariance at test points are:

$$\mu_* = K_{*f}\,(K_{ff} + \sigma^2 I)^{-1} y, \qquad \Sigma_* = K_{**} - K_{*f}\,(K_{ff} + \sigma^2 I)^{-1} K_{f*},$$

where $K_{ff}$ is the training kernel matrix and $K_{*f}$ is the test-train kernel matrix.
Cholesky for GP: The matrix $K_y = K_{ff} + \sigma^2 I$ is SPD (the positive noise term ensures this). The Cholesky factorization $K_y = LL^\top$ enables efficient computation of (see the sketch below):
- $\alpha = K_y^{-1} y$: forward + backward solve with $L$; $O(n^3)$ once for the factorization, $O(n^2)$ for each new right-hand side
- $\log\det K_y = 2\sum_i \log \ell_{ii}$: the log-determinant term of the log-likelihood used for hyperparameter optimization
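A minimal sketch of the standard Cholesky-based GP log marginal likelihood (the helper name is ours; K is a precomputed kernel matrix):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_log_marginal_likelihood(K, y, sigma2):
    """log p(y) for a zero-mean GP with n x n training kernel matrix K."""
    n = len(y)
    L, lower = cho_factor(K + sigma2 * np.eye(n), lower=True)
    alpha = cho_solve((L, lower), y)                # (K + sigma^2 I)^{-1} y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))       # log det from the Cholesky diagonal
    return -0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))
```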
Bottleneck: For $n = 10^4$ training points, $K_y$ is $10^4 \times 10^4$, and Cholesky costs $\frac{1}{3}n^3 \approx 3 \times 10^{11}$ flops - seconds on a GPU, but prohibitive for $n \gtrsim 10^5$.
Scalable GP methods:
- Inducing point methods (Titsias 2009): Introduce $m \ll n$ inducing points; compute an $m \times m$ Cholesky instead of $n \times n$. GPyTorch implements this as gpytorch.models.ApproximateGP.
- SKI/KISS-GP (Wilson & Nickisch 2015): Structured kernel interpolation - exploit grid structure to decompose $K$ as a Kronecker product, enabling fast Cholesky-like solves.
- CG + Lanczos: Conjugate gradient-based methods (Gardner et al., 2018) avoid explicit Cholesky by solving the system iteratively, used in GPyTorch's LinearCG backend.
For AI: GP regression is the inference engine of Bayesian optimization (e.g., BOHB; SMAC3, Lindauer et al., 2022), which is used for hyperparameter tuning of large language models. Every BO iteration requires a Cholesky factorization.
8.4 Backpropagation Through Factorizations
Differentiating through Cholesky: If $A = LL^\top$ and a scalar loss $\mathcal{L}(L)$ has gradient $\bar{L} = \partial\mathcal{L}/\partial L$, then:

$$\bar{A} = \frac{1}{2}\left( L^{-\top}\, \Phi(L^\top \bar{L})\, L^{-1} + \left( L^{-\top}\, \Phi(L^\top \bar{L})\, L^{-1} \right)^{\top} \right),$$

where $\Phi$ extracts the lower triangular part with the diagonal halved: $\Phi(M)_{ij} = M_{ij}$ for $i > j$, $\frac{1}{2}M_{ii}$ for $i = j$, $0$ for $i < j$.
This formula (Iain Murray, 2016) is implemented in PyTorch's torch.linalg.cholesky with backward() support.
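A NumPy sketch of this formula, checked against a finite-difference approximation (the helper names phi and cholesky_backward are ours; this is the symmetrized form implemented by mainstream autodiff frameworks):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def phi(M):
    """Lower triangle of M with the diagonal halved (the Phi map above)."""
    P = np.tril(M)
    P[np.diag_indices_from(P)] *= 0.5
    return P

def cholesky_backward(L, L_bar):
    # A_bar = 0.5 * (L^{-T} Phi(L^T L_bar) L^{-1} + its transpose)
    P = phi(L.T @ L_bar)
    X = solve_triangular(L, P, lower=True, trans='T')       # L^{-T} P
    Y = solve_triangular(L, X.T, lower=True, trans='T').T   # X L^{-1}
    return 0.5 * (Y + Y.T)

# Finite-difference check on loss(A) = sum of the entries of cholesky(A)
rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4.0 * np.eye(4)
L = cholesky(A, lower=True)
A_bar = cholesky_backward(L, np.tril(np.ones((4, 4))))      # dLoss/dL = lower-tri ones

dA = rng.standard_normal((4, 4)); dA = 0.5 * (dA + dA.T)    # symmetric perturbation
h = 1e-6
fd = (np.sum(cholesky(A + h * dA, lower=True)) -
      np.sum(cholesky(A - h * dA, lower=True))) / (2 * h)
assert np.isclose(fd, np.sum(A_bar * dA), rtol=1e-4)
```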
Differentiating through LU: For $Ax = b$ solved via LU, with a scalar loss $\mathcal{L}$ depending on the solution $x$:

$$\bar{b} = A^{-\top}\,\bar{x}, \qquad \bar{A} = -\bar{b}\, x^\top,$$

where $\bar{x} = \partial\mathcal{L}/\partial x$ is the adjoint vector ($\bar{b}$ is computed via triangular solves that reuse the same LU factors).
Implicit differentiation: Rather than differentiating through the factorization algorithm itself (which would unroll all $O(n^3)$ operations), implicit differentiation differentiates through the solution condition $Ax - b = 0$:

$$A\,\frac{\partial x}{\partial \theta} = \frac{\partial b}{\partial \theta} - \frac{\partial A}{\partial \theta}\, x.$$

This requires solving one additional linear system per output - reusing the already-computed factorization.
For AI: Differentiating through linear solvers is central to:
- Meta-learning: MAML and its variants differentiate through inner-loop optimization steps
- Differentiable physics: FEA simulation (e.g., PhiML) requires differentiating through sparse LU solves
- Constrained optimization: Interior-point methods for differentiable convex optimization
- Gaussian processes in end-to-end learning: Differentiating the GP marginal likelihood w.r.t. kernel parameters requires differentiating through Cholesky
8.5 Randomized Factorizations
Motivation: For matrices where only a rank-$k$ approximation is needed, exact factorization costs $O(mn^2)$ - wasteful if $k \ll n$. Randomized methods achieve $O(mnk)$ or even $O(mn \log k)$ using random projections.
Randomized QR (sketch-and-apply):
- Sketch: $Y = A\Omega$, where $\Omega \in \mathbb{R}^{n \times (k+p)}$ is a random Gaussian matrix ($p \approx 5$-$10$ oversampling)
- Orthogonalize: Factor $Y = Q R_Y$ (thin QR of the sketch)
- Project: $B = Q^\top A$ (project $A$ onto the sketched subspace)
- Factor: $B = \tilde{Q}\tilde{R}$ (thin QR of the small $(k+p) \times n$ matrix)
- Output: $A \approx (Q\tilde{Q})\,\tilde{R}$ - a rank-$(k+p)$ QR approximation
Cost: Steps 1-4 cost $O(mnk)$, negligible compared to $O(mn^2)$ for exact QR when $k \ll n$.
Error bound (Halko-Martinsson-Tropp, 2011): With high probability,

$$\|A - QQ^\top A\| \le \left(1 + O\!\left(\sqrt{(k+p)\,\min(m,n)}\right)\right) \sigma_{k+1}(A).$$

The approximation is near-optimal - within a polynomial factor of the Eckart-Young lower bound $\sigma_{k+1}(A)$.
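The core "range finder" step is a few lines of NumPy; on a synthetic matrix with known singular values, the error lands within a small factor of $\sigma_{k+1}$:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, k, p = 2000, 500, 20, 10

# Test matrix with rapidly decaying singular values s_i = 2^{-(i-1)}
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = (U * s) @ V.T

Omega = rng.standard_normal((n, k + p))   # Gaussian sketch
Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for the sketched range
err = np.linalg.norm(A - Q @ (Q.T @ A), 2)
print(err, s[k])                          # err is within a small factor of sigma_{k+1}
```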
Randomized LU (Yu et al., 2018): Applies random column sampling to produce a structured LU factorization useful for rank-deficient matrices.
Connection to LoRA (Hu et al., 2022): Low-Rank Adaptation of language models uses rank-$r$ decompositions $\Delta W = BA$, where $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times k}$. Initializing via randomized QR of the pretrained weight matrix gives a structured initialization that preserves the principal components.
DoRA (Liu et al., 2024): Decomposes $W = m \cdot V / \|V\|_c$, where $\|V\|_c$ is the column-wise norm, into a magnitude and a directional component; the directional component is adapted with a LoRA-style low-rank update.
GaLore (Zhao et al., 2024): Gradient Low-Rank Projection - projects gradients onto their principal subspace before applying Adam updates. The projection matrix is recomputed via an SVD of the gradient roughly every 200 steps.
9. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Using LU without pivoting for numerical computation | Zero or small pivots cause catastrophic cancellation; the growth factor is unbounded | Always use partial pivoting (scipy.linalg.lu, LAPACK dgetrf) |
| 2 | Forming $A^{-1}$ explicitly to solve $Ax = b$ | Computing $A^{-1}$ costs about $2n^3$ flops (3x the LU cost) and introduces additional round-off; $A^{-1}b$ is less accurate than the triangular solve | Use scipy.linalg.solve or lu_solve with the stored factorization |
| 3 | Applying Cholesky to a non-SPD matrix | Negative square roots cause NaN; the algorithm may "succeed" silently with complex values or incorrect results | Check np.linalg.eigvalsh(A).min() > 0 or catch LinAlgError from failed Cholesky |
| 4 | Using normal equations for ill-conditioned least squares | $\kappa(A^TA) = \kappa(A)^2$ - doubles the digits lost; for $\kappa(A) \gtrsim 10^8$ you lose all FP64 precision | Use scipy.linalg.lstsq or explicitly factor via Householder QR (demo below) |
| 5 | Confusing thin and full QR | Full QR: $Q \in \mathbb{R}^{m\times m}$, $R \in \mathbb{R}^{m\times n}$. Thin QR: $Q \in \mathbb{R}^{m\times n}$, $R \in \mathbb{R}^{n\times n}$. Using the wrong one gives wrong dimensions | Use mode='economic' in scipy.linalg.qr or full_matrices=False in numpy.linalg.qr for thin QR |
| 6 | Ignoring the growth factor in stability analysis | Partial pivoting has theoretical growth factor $2^{n-1}$; while rare in practice, adversarial inputs exist | For certified computation, use complete pivoting or switch to Householder QR |
| 7 | Re-factoring A for each right-hand side | LU/QR factorization costs $O(n^3)$; each additional solve costs only $O(n^2)$ | Cache the factorization (lu_factor returns LU; qr returns Q,R) and call lu_solve or backsolve repeatedly (demo below) |
| 8 | Forgetting the permutation in $PA = LU$ | Solving $LUx = b$ instead of $LUx = Pb$ gives the wrong answer | Always apply $P$ first: b = b[piv] or use lu_solve which handles pivots automatically |
| 9 | Using classical Gram-Schmidt for QR instead of Householder | CGS loses orthogonality for ill-conditioned matrices; $\|Q^TQ - I\| \approx \kappa(A)^2\,u$ | Use scipy.linalg.qr (Householder) or numpy.linalg.qr |
| 10 | Misinterpreting rank from LU diagonal | LU (even with partial pivoting) doesn't give a reliable rank estimate; the pivot threshold is ambiguous | Use RRQR (scipy.linalg.qr(pivoting=True)) or SVD for rank determination |
| 11 | Ignoring mixed precision issues in GPU factorizations | FP16 has unit roundoff $u \approx 5\times10^{-4}$; for $\kappa(A) \gtrsim 10^3$, FP16 factorization gives essentially 0 correct digits | Use FP32 or FP64 for factorizations; apply iterative refinement if FP16 is needed for speed |
| 12 | Differentiating through factorization naively by unrolling | Unrolling elimination steps creates $O(n^3)$ graph edges - memory explosion | Use implicit differentiation through the solution condition $Ax = b$; PyTorch's linalg.solve handles this automatically |
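Two of these patterns are worth seeing in code. The snippet below is a small SciPy illustration of mistakes 4 and 7; the matrix sizes and the diagonal scaling that controls the conditioning are arbitrary demo choices:

```python
import numpy as np
import scipy.linalg as la

rng = np.random.default_rng(2)
n = 200
A = rng.standard_normal((n, n))

# Mistake 7: re-factoring for each right-hand side.  Factor once, solve many times.
lu, piv = la.lu_factor(A)                      # O(n^3), done once
X = np.column_stack([la.lu_solve((lu, piv), rng.standard_normal(n))
                     for _ in range(10)])      # each solve is only O(n^2)

# Mistake 4: normal equations square the condition number.
m = 300
B = rng.standard_normal((m, 5)) @ np.diag([1, 1e-2, 1e-4, 1e-5, 1e-6])
y = rng.standard_normal(m)
x_ls = la.lstsq(B, y)[0]                       # orthogonal-factorization-based, loses ~kappa(B) digits
x_ne = la.solve(B.T @ B, B.T @ y)              # normal equations, loses ~kappa(B)^2 digits
print(np.linalg.norm(x_ls - x_ne))             # discrepancy grows with kappa(B)
```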
10. Exercises
Exercise 1 (*): Implement forward and backward substitution.
(a) Write a function forward_sub(L, b) that solves $Lx = b$ for unit lower triangular $L$ (no division needed).
(b) Write backward_sub(U, b) that solves $Ux = b$ for upper triangular $U$.
(c) Verify your implementations against scipy.linalg.solve_triangular on a random test system.
(d) Measure the accuracy: for a random lower triangular system, compute the relative residual $\|L\hat{x} - b\|/\|b\|$ and compare to machine epsilon.
Exercise 2 (*): Implement naive LU and demonstrate failure.
(a) Implement lu_naive(A) that performs Gaussian elimination without pivoting, returning $L$ and $U$.
(b) For the matrix $A = \begin{pmatrix}\epsilon & 1\\ 1 & 1\end{pmatrix}$, solve $Ax = b$ for decreasing $\epsilon$ (e.g. $10^{-4}, 10^{-8}, \dots, 10^{-16}$) using your naive LU.
(c) Compare to numpy.linalg.solve. Report the relative error $\|x_{\text{naive}} - x_{\text{ref}}\|/\|x_{\text{ref}}\|$.
(d) Explain the failure in terms of the growth factor. What is $\rho$ for this matrix?
Exercise 3 (*): Implement LU with partial pivoting and verify $PA = LU$.
(a) Implement lu_pivot(A) returning $L$, $U$, and $P$, where $P$ is stored as a permutation vector.
(b) Verify the factorization: compute $\|PA - LU\|$ and confirm it is at the level of machine precision.
(c) Use your factorization to solve a random system and compare accuracy to the naive LU.
(d) Compute $\det(A)$ from the LU factorization as $(-1)^s\prod_i U_{ii}$ and verify against numpy.linalg.det.
Exercise 4 (**): Implement a Householder reflector and apply it.
(a) Implement householder_vector(a) that computes the Householder vector $v$ such that $(I - \beta vv^T)a = \alpha e_1$, with the correct sign convention for $\alpha$ to avoid cancellation.
(b) Implement householder_apply(v, C) that applies $H = I - \beta vv^T$ to a matrix $C$ using the implicit formula $C - \beta v(v^TC)$ (not the full matrix $H$).
(c) Verify: for a random vector $a \in \mathbb{R}^5$, confirm that $Ha = \alpha e_1$ (entries 2-5 should be zero to machine precision).
(d) Show that the naive computation (I - 2*v*v.T/norm(v)**2) @ a gives the same result but is less efficient. Time both for a large vector.
Exercise 5 (**): Implement Householder QR and compare to scipy.
(a) Implement householder_qr(A) that returns the upper triangular factor $R$ and stores the Householder vectors.
(b) Implement recover_Q(householder_vecs, m) to form the explicit $Q$ matrix.
(c) Verify $QR \approx A$ and $Q^TQ \approx I$ for a random tall matrix.
(d) Compare to scipy.linalg.qr. Verify they give identical $R$ (up to sign conventions) and identical orthogonality.
(e) For an ill-conditioned $A$, compare the orthogonality error $\|Q^TQ - I\|$ of your Householder QR vs classical Gram-Schmidt.
Exercise 6 (**): Rank estimation via column-pivoted QR.
(a) Create a rank-4 matrix: $A = U\Sigma V^T$, where $U$ and $V$ are random orthogonal and $\Sigma$ has exactly four nonzero singular values.
(b) Compute QR with column pivoting: Q, R, P = scipy.linalg.qr(A, pivoting=True).
(c) Plot the diagonal entries $|R_{ii}|$ on a log scale. Identify the rank gap.
(d) Implement rank estimation: declare rank $k$ when $|R_{k+1,k+1}| < \tau\,|R_{11}|$ for a threshold $\tau$.
(e) Compare to the SVD-based rank estimation. Which is more reliable? Which is faster?
Exercise 7 (***): GP regression with Cholesky.
(a) Implement rbf_kernel(X1, X2, ell, sf) computing $k(x_i, x_j) = \sigma_f^2\exp\!\left(-\tfrac{\|x_i - x_j\|^2}{2\ell^2}\right)$.
(b) Generate 100 training points from a smooth 1-D test function with additive Gaussian noise.
(c) Implement gp_predict(X_train, y_train, X_test, ell, sf, sigma_n) using Cholesky factorization for all linear solves (no explicit matrix inverse).
(d) Compute the log marginal likelihood, using a helper log_det_chol that returns $\log\det(K + \sigma_n^2 I) = 2\sum_i\log L_{ii}$ (twice the sum of the log of the diagonal of $L$).
(e) Optimize the hyperparameters $(\ell, \sigma_f, \sigma_n)$ via grid search maximizing the log marginal likelihood. Plot the GP posterior mean and $\pm 2\sigma$ credible interval.
Exercise 8 (***): Differentiating through Cholesky - gradient of log-determinant.
(a) For an SPD matrix $A$, the gradient of $\log\det A$ w.r.t. $A$ is $A^{-1}$. Verify this numerically: compute the gradient entrywise via finite differences and via $A^{-1}$ (computed using the Cholesky factorization), and compare.
(b) Implement cholesky_grad(L, dL) computing the gradient of a scalar function of $L$ (with $\bar{L}$ given) w.r.t. $A$, using the Iain Murray (2016) formula.
(c) Use your gradient to implement gradient descent on $f(A) = \log\det A + \operatorname{tr}(A^{-1}S)$ (the negative log-likelihood of a Wishart distribution). Starting from $A_0 = I$, verify convergence to $A = S$ (the analytic minimum).
(d) Compare to PyTorch's automatic differentiation through torch.linalg.cholesky and torch.logdet. Verify that the gradients match to within floating-point relative error.
11. Why This Matters for AI (2026 Perspective)
| Concept | AI/ML Impact |
|---|---|
| LU with partial pivoting | Solving Hessian systems in Newton/quasi-Newton methods; computing inverses and log-determinants in probabilistic ML |
| Householder QR | Foundation of least-squares linear probing; basis for eigenvalue algorithms (QR iteration); stable weight matrix analysis |
| Column-pivoted QR (RRQR) | Rank estimation of weight matrices in LoRA, DoRA; detecting intrinsic dimensionality of neural representations |
| Cholesky factorization (computational) | GP regression at scale; K-FAC training; VAE/diffusion sampling; Fisher information inversion |
| Blocked algorithms (BLAS-3) | 80-95% of peak GPU throughput for all factorizations; the reason FlashAttention uses tile-based blocking |
| Backward error analysis | Understanding precision limits in FP16/BF16 training; justifying mixed-precision with iterative refinement |
| Iterative refinement | cuSOLVER IR: FP16 factorization + FP64 refinement for numerically reliable inference at high throughput |
| Sparse factorizations | GNN training on large graphs; differentiable physics simulation; structured attention patterns |
| QR for least squares | Linear probing, ridge regression, prompt tuning accuracy; avoiding condition-number squaring in normal equations |
| Differentiation through factorizations | End-to-end differentiable GP/Bayesian models; implicit differentiation in meta-learning; constraint-based optimization |
| TSQR | Communication-optimal QR for distributed training; tile-based blocking in FlashAttention-3 |
| Randomized LU/QR | LoRA initialization; GaLore gradient projection; sketch-and-solve preconditioning for large-scale optimization |
| K-FAC with Cholesky | Natural gradient training of large transformers; 2-5x faster convergence than Adam at equivalent compute |
| GP marginal likelihood optimization | Bayesian hyperparameter tuning (BoHB); automated ML pipeline optimization |
| Rank-deficient LU / RRQR | Intrinsic dimensionality measurement of LLM weight updates; guided rank selection for compression |
12. Conceptual Bridge
From 07 (Positive Definite Matrices): Section 07 developed the theory of positive definite matrices and gave the full treatment of the Cholesky factorization: existence proof (the Cholesky product theorem), the LDL^T variant, the connection to log-determinants, and the PSD cone. This section builds directly on that theory, treating Cholesky as one of three production-grade computational tools alongside LU and QR. The student who understands 07's theory will see 08's Cholesky discussion as pure application - taking the mathematical theorem and implementing it efficiently.
The computational turn: Sections 01-07 of Chapter 3 developed mathematical structures: eigenvalues, SVD, PCA, linear maps, orthogonality, norms, and positive definiteness. Section 08 marks a turn toward the computational: algorithms, numerical stability, blocking, LAPACK. This turn is not a descent in abstraction but a different kind of mathematics - the mathematics of floating-point arithmetic, error propagation, and algorithm design under hardware constraints. Both kinds are essential for AI/ML practice.
Into Chapter 4 (Calculus): The matrix factorizations developed here are the computational backbone of calculus-based optimization. The Hessian matrix (Chapter 4, 05) is positive definite at local minima - and therefore amenable to Cholesky. Newton's method (Chapter 4, 06) requires solving Hessian systems via LU or Cholesky at each step. The Jacobian matrix (Chapter 4, 04) appears in the normal equations for nonlinear least squares (Gauss-Newton, Levenberg-Marquardt), which are solved via QR. The machinery of 08 is the engine that makes calculus-based optimization computable.
Into Chapter 8 (Optimization): The optimization chapter (Chapter 8) develops gradient descent, Newton's method, interior-point methods, and constrained optimization - all of which require the factorizations of 08 in their inner loops. The K-FAC connection to natural gradient optimization, the QR connection to least-squares problems, and the Cholesky connection to GP-based Bayesian optimization are all elaborated with full optimization context in Chapter 8.
MATRIX DECOMPOSITIONS IN THE CURRICULUM
========================================================================
Chapter 2: Linear Algebra Basics
+-- 02 Matrix Operations -------- brief LU/QR/Cholesky preview
+-- 04 Determinants ------------ det(A) = \prod U_ii from LU
Chapter 3: Advanced Linear Algebra
+-- 01 Eigenvalues -------------- QR iteration uses Householder QR
+-- 02 SVD ---------------------- thin QR as step in SVD algorithm
+-- 05 Orthogonality ------------ QR theory (Gram-Schmidt)
+-- 07 Positive Definite -------- Cholesky THEORY (full treatment)
+-- 08 Matrix Decompositions <-- YOU ARE HERE
+-- LU (canonical home) PA = LU with partial pivoting
+-- QR algorithms Householder, Givens, RRQR, TSQR
+-- Cholesky (computational) Blocked dpotrf, LDL^T for indef.
Chapter 4: Calculus Fundamentals
+-- 04 Jacobian ----------------- Gauss-Newton needs QR
+-- 05 Hessian ------------------ Cholesky at local minima
Chapter 8: Optimization
+-- Newton's method -------------- LU/Cholesky for Hessian systems
+-- K-FAC ------------------------ Kronecker-Cholesky per layer
+-- Interior point methods ------- Cholesky at each Newton step
ML Applications (load-bearing)
+-- GP regression ---------------- Cholesky (O(n^3) bottleneck)
+-- K-FAC / Shampoo -------------- blocked Cholesky per layer
+-- LoRA / GaLore ---------------- randomized QR for rank selection
+-- Linear probing --------------- Householder QR least squares
+-- Bayesian optimization -------- GP + Cholesky every step
========================================================================
The three factorizations of this section - LU, QR, and Cholesky - are not abstract mathematical objects. They are the algorithms that run inside every call to numpy.linalg.solve, scipy.linalg.qr, torch.linalg.cholesky, and LAPACK's dgesv, dgeqrf, dpotrf. Understanding them at this level - theorem, algorithm, numerical stability, and AI application - completes the computational linear algebra foundation needed for everything that follows.
<- Back to Advanced Linear Algebra | <- Positive Definite Matrices | Next: Calculus Fundamentals ->
Appendix A: Complete Algorithms and Pseudocode
A.1 LU with Partial Pivoting - Full Pseudocode
The following pseudocode gives the complete in-place LU factorization with partial pivoting. The matrix is overwritten: the strict lower triangle stores the multipliers (the subdiagonal entries of $L$; the unit diagonal is implicit), and the upper triangle plus diagonal stores $U$.
Algorithm LU_PARTIAL_PIVOT(A):
Input: A \in \mathbb{R}^{n\timesn}
Output: Modified A (lower: L multipliers, upper+diag: U), pivot vector piv
piv = [1, 2, ..., n] # permutation record
for k = 1 to n-1:
# Step 1: Find pivot in column k (rows k to n)
max_val = |A[k,k]|
max_row = k
for i = k+1 to n:
if |A[i,k]| > max_val:
max_val = |A[i,k]|
max_row = i
# Step 2: Swap rows k and max_row
if max_row \neq k:
swap A[k, :] <-> A[max_row, :]
swap piv[k] <-> piv[max_row]
# Step 3: Check for (near-)singular pivot
if |A[k,k]| < \epsilon:
warn("Near-singular pivot at step k")
# Step 4: Compute multipliers and update
for i = k+1 to n:
A[i,k] = A[i,k] / A[k,k] # multiplier m_{ik}
for j = k+1 to n:
A[i,j] -= A[i,k] * A[k,j] # rank-1 update
return A, piv
Solve Ax = b using LU_PARTIAL_PIVOT:
Compute A, piv = LU_PARTIAL_PIVOT(A)
Apply permutation: b = b[piv]
Forward solve Lx = b (L has unit diagonal, ignore A[k,k]):
for i = 1 to n:
for j = 1 to i-1:
b[i] -= A[i,j] * b[j]
Backward solve Ux = b:
for i = n downto 1:
for j = i+1 to n:
b[i] -= A[i,j] * b[j]
b[i] /= A[i,i]
return b
Implementation notes:
- The inner loop `for j = k+1 to n: A[i,j] -= A[i,k] * A[k,j]` is a SAXPY operation (scalar a times x plus y) - BLAS level 1, vectorizable.
- The full step 4 over all rows $i$ simultaneously is a rank-1 update - BLAS level 2 `DGER`.
- The blocked version batches several steps into a panel and uses `DGEMM` for the trailing update. (A runnable NumPy version of the unblocked algorithm follows below.)
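For reference, here is a minimal NumPy translation of the pseudocode above; the function names lu_partial_pivot and lu_solve_factored are illustrative, not library APIs:

```python
import numpy as np

def lu_partial_pivot(A):
    """In-place LU with partial pivoting: returns (LU, piv) with
    strict lower triangle = multipliers, upper triangle + diag = U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))       # pivot row
        if p != k:
            A[[k, p], :] = A[[p, k], :]
            piv[[k, p]] = piv[[p, k]]
        if A[k, k] == 0.0:
            continue                               # (near-)singular column, skip elimination
        A[k+1:, k] /= A[k, k]                      # multipliers
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])   # rank-1 update
    return A, piv

def lu_solve_factored(LU, piv, b):
    """Solve Ax = b given the output of lu_partial_pivot."""
    x = b[piv].astype(float).copy()                # apply permutation
    n = LU.shape[0]
    for i in range(n):                             # forward solve, unit diagonal
        x[i] -= LU[i, :i] @ x[:i]
    for i in range(n - 1, -1, -1):                 # backward solve
        x[i] = (x[i] - LU[i, i+1:] @ x[i+1:]) / LU[i, i]
    return x

# quick check against NumPy
A = np.random.default_rng(3).standard_normal((6, 6))
b = np.ones(6)
LU, piv = lu_partial_pivot(A)
print(np.allclose(lu_solve_factored(LU, piv, b), np.linalg.solve(A, b)))
```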
A.2 Householder QR - Full Pseudocode
Algorithm HOUSEHOLDER_QR(A):
Input: A \in \mathbb{R}^{m\timesn}, m \geq n
Output: Modified A (upper triangle: R; lower triangle: Householder vectors),
beta vector (scaling factors)
beta = zeros(n)
for k = 1 to n:
# Extract column k from row k to m
x = A[k:m, k] # length (m-k+1)
# Compute Householder vector
sigma = norm(x)
if sigma == 0: continue # column already zero
# Sign convention: ensure v[0] and x[0] have opposite signs
alpha = -sign(x[0]) * sigma # target value
v = copy(x)
v[0] -= alpha                  # no cancellation: v[0] = x[0] + sign(x[0]) * sigma
v = v / v[0]                   # normalize so v[0] = 1 (matches the storage below)
beta_k = 2 / (v^T v)           # = 2 / ||v||^2
# Apply H_k to trailing submatrix A[k:m, k:n]
# H_k A = A - (beta_k * v)(v^T A) = A - beta_k * v * (v^T A)
w = beta_k * (v^T @ A[k:m, k:n]) # row vector, length (n-k+1)
A[k:m, k:n] -= outer(v, w)
# Store Householder vector in lower triangle
A[k+1:m, k] = v[1:] # store v[1:] (v[0] = 1 implicit)
beta[k] = beta_k
# A[k,k] now equals alpha = -sign(x[0]) * norm(x)
return A, beta
Recover Q (thin, first n columns):
Q = I_{m\timesn}
for k = n downto 1:
# Reconstruct v from stored lower triangle
v = [1; A[k+1:m, k]] # prepend implicit 1
beta_k = beta[k]
# Apply H_k to Q[k:m, k:n]
w = beta_k * (v^T @ Q[k:m, k:n])
Q[k:m, k:n] -= outer(v, w)
return Q
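A compact NumPy rendering of the same algorithm, applied to a small least-squares problem (function names are illustrative, and the example matrix is arbitrary):

```python
import numpy as np

def householder_qr(A):
    """Householder QR; returns R and the list of (v, beta) reflectors."""
    A = A.astype(float).copy()
    m, n = A.shape
    vs = []
    for k in range(n):
        x = A[k:, k].copy()
        sigma = np.linalg.norm(x)
        if sigma == 0.0:
            vs.append((np.zeros_like(x), 0.0)); continue
        alpha = -np.sign(x[0]) * sigma if x[0] != 0 else -sigma
        v = x.copy(); v[0] -= alpha                    # sign choice avoids cancellation
        beta = 2.0 / (v @ v)
        A[k:, k:] -= beta * np.outer(v, v @ A[k:, k:]) # apply H_k implicitly
        vs.append((v, beta))
    return np.triu(A[:n, :]), vs

def apply_qt(vs, b):
    """Compute Q^T b from the stored reflectors (for least squares)."""
    b = b.astype(float).copy()
    for k, (v, beta) in enumerate(vs):
        b[k:] -= beta * v * (v @ b[k:])
    return b

A = np.random.default_rng(4).standard_normal((8, 5))
b = np.random.default_rng(5).standard_normal(8)
R, vs = householder_qr(A)
x = np.linalg.solve(R, apply_qt(vs, b)[:5])            # least squares: R x = (Q^T b)[:n]
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```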
A.3 Givens QR for Banded Matrices
For a matrix with upper bandwidth $k_u$ and lower bandwidth $k_l$:
Algorithm GIVENS_QR_BANDED(A, ku, kl):
for j = 1 to n: # column
for i = j+kl downto j+1: # eliminate subdiagonal entries in bandwidth
# Compute Givens rotation to zero A[i,j] using A[i-1,j]
c, s, r = compute_givens(A[i-1,j], A[i,j])
A[i,j] = 0
A[i-1,j] = r
# Apply G to remaining columns in bandwidth
for jj = j+1 to min(j+ku+kl, n):
temp = c*A[i-1,jj] + s*A[i,jj]
A[i,jj] = -s*A[i-1,jj] + c*A[i,jj]
A[i-1,jj] = temp
return R (upper banded), Q (product of Givens rotations)
Complexity: $O(n\,k_l\,(k_u + k_l))$ flops - linear in $n$ for fixed bandwidth. This is the algorithm used in tridiagonal eigensolvers.
Appendix B: Deeper Theoretical Results
B.1 The Fundamental Theorem of Existence for LU
Theorem (Existence of LU, precise version). Let $A \in \mathbb{R}^{n\times n}$. The following are equivalent:
- $A$ has an LU factorization $A = LU$ with $L$ unit lower triangular.
- All leading principal minors $\det(A_{1:k,\,1:k}) \neq 0$ for $k = 1, \dots, n-1$.
- Gaussian elimination completes without encountering a zero pivot.
Proof of (1) <=> (2): We use induction on $n$. Base case $n = 1$: trivial. Inductive step: write
$$A = \begin{pmatrix} A_{n-1} & b \\ c^T & d \end{pmatrix}.$$
If $A_{n-1}$ has an LU factorization $A_{n-1} = L_{n-1}U_{n-1}$ (by induction, iff all leading minors of $A_{n-1}$ are nonzero), then:
$$A = \begin{pmatrix} L_{n-1} & 0 \\ c^T U_{n-1}^{-1} & 1 \end{pmatrix}\begin{pmatrix} U_{n-1} & L_{n-1}^{-1}b \\ 0 & s \end{pmatrix}, \qquad s = d - c^T A_{n-1}^{-1} b,$$
where $s$ is the Schur complement (see 07). This gives a valid LU factorization with all blocks well-defined.
Corollary: The Schur complement $s$ is the last pivot. It equals $\det(A)/\det(A_{n-1})$; more generally, the $k$-th pivot is $u_{kk} = \det(A_k)/\det(A_{k-1})$. This shows that the pivot sequence in Gaussian elimination directly encodes the leading minors via their ratios.
B.2 The Backward Error Theorem (Precise)
Theorem (Wilkinson, 1963). Let $\hat{L}, \hat{U}$ be the matrices computed in floating-point by Gaussian elimination with partial pivoting, applied to $PA$ (after row interchanges). Then there exists a matrix $\Delta A$ such that:
$$\hat{L}\hat{U} = PA + \Delta A, \qquad \|\Delta A\|_\infty \le c\,n^2\,\rho_n\,u\,\|A\|_\infty,$$
where $u$ is unit roundoff and $\rho_n$ is the growth factor.
Corollary (Solution accuracy). The computed solution $\hat{x}$ to $Ax = b$ satisfies:
$$(A + \delta A)\hat{x} = b, \qquad \|\delta A\|_\infty \le c'\,n^2\,\rho_n\,u\,\|A\|_\infty.$$
The forward error bound then follows from the perturbation lemma:
$$\frac{\|\hat{x} - x\|}{\|x\|} \;\lesssim\; \kappa(A)\,\frac{\|\delta A\|}{\|A\|}.$$
The fundamental lesson: error = condition number $\times$ backward error. The algorithm controls the backward error; the problem controls the condition number.
B.3 Optimality of Householder QR
Theorem. Among all algorithms that compute the QR factorization by applying a sequence of orthogonal transformations, Householder QR minimizes the number of arithmetic operations asymptotically.
Sketch: Each Householder reflector zeros the $m - k$ subdiagonal entries of column $k$ using $O((m-k)(n-k))$ flops (an inner product plus an AXPY per trailing column). Any other elementary orthogonal transformation (e.g., a sequence of Givens rotations) requires at least as many flops per zero created. Householder maximizes the work per transformation.
Why Givens is sometimes preferred: Despite higher total flop count, Givens rotations are preferred for:
- Sparse matrices (avoid creating fill-in in the reflection direction)
- Parallel computation (rotations on disjoint row pairs can be parallelized)
- Streaming data (new rows can be incorporated one at a time)
B.4 Perturbation Theory for Triangular Factorizations
Theorem (Stewart, 1977). If $A = LU$ (LU without pivoting, assuming it exists) and $A + E = (L + \delta L)(U + \delta U)$ for a sufficiently small perturbation $E$, then to first order
$$\frac{\|\delta L\|}{\|L\|} \;\lesssim\; \kappa(L)\,\kappa(U)\,\frac{\|E\|}{\|A\|}, \qquad \frac{\|\delta U\|}{\|U\|} \;\lesssim\; \kappa(L)\,\kappa(U)\,\frac{\|E\|}{\|A\|}.$$
The condition numbers $\kappa(L)$ and $\kappa(U)$ can independently be much larger than $\kappa(A)$, explaining why small perturbations to $A$ can cause large perturbations to $L$ and $U$ separately.
For QR: The Householder $Q$ factors satisfy $\kappa_2(Q) = 1$ exactly (orthogonal matrices). Therefore:
$$\frac{\|\delta R\|}{\|R\|} \;\lesssim\; \kappa(A)\,\frac{\|E\|}{\|A\|}.$$
The QR factors are perturbed proportionally to $\kappa(A)$, not $\kappa(A)^2$ as in the normal equations.
Appendix C: Connections to Other Decompositions
C.1 LU and the Spectral Decomposition
The LU factorization of $A$ is not directly related to the eigendecomposition $A = X\Lambda X^{-1}$, but the two share a common ancestor: block triangularization. The Schur decomposition $A = QTQ^*$ (with $T$ upper triangular, $Q$ unitary) is a complex-field generalization where $T$ is upper triangular with eigenvalues on the diagonal - the "LU" of the spectral world.
The QR algorithm for eigenvalues alternates QR factorizations and similarity transformations ($A_k = Q_kR_k$, $A_{k+1} = R_kQ_k$) to drive $A_k$ toward upper triangular (Schur) form. Each QR step is one Householder QR followed by one matrix-matrix product. After convergence, the diagonal of $A_k$ gives the eigenvalues. This is the most important use of Householder QR in all of numerical linear algebra.
C.2 Cholesky and the SVD
For a symmetric positive definite matrix, the Cholesky factorization $A = LL^T$ and the SVD $A = U\Sigma U^T$ (symmetric PD, so $U$ has orthonormal columns and $\Sigma$ has positive diagonal) are related by:
$$L = R^T, \qquad \text{where } \Sigma^{1/2}U^T = QR \text{ is a QR decomposition},$$
so that $R$ is the upper triangular factor of a QR decomposition of $\Sigma^{1/2}U^T$. Explicitly: $A = (U\Sigma^{1/2})(U\Sigma^{1/2})^T$, and $U\Sigma^{1/2} = LQ^T$ for some orthogonal $Q$ - but $U\Sigma^{1/2}$ itself is not the standard Cholesky factor. The key point is that Cholesky is a non-orthogonal (triangular) factorization while the SVD is orthogonal; they capture different aspects of the same structure.
Practical consequence: The Cholesky factor $L$ is not related to the eigenvectors of $A$ in a simple way. Computing the symmetric matrix square root $A^{1/2}$ (which requires the eigendecomposition) is more expensive than Cholesky.
C.3 QR and the Gram-Schmidt-Cholesky Identity
The Gram-Schmidt process applied to the columns of $A$ produces the thin QR $A = QR$. But then $A^TA = R^TQ^TQR = R^TR$ - a Cholesky factorization of the Gram matrix! This identity explains why:
- QR of $A$ and Cholesky of $A^TA$ produce the same $R$ (up to row signs)
- Computing QR of $A$ is equivalent to computing Cholesky of $A^TA$ in exact arithmetic
- In floating-point, QR is more stable (works with $\kappa(A)$) while Cholesky of $A^TA$ works with $\kappa(A)^2$
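The identity is easy to verify numerically. The snippet below compares the $R$ from NumPy's QR with the transposed Cholesky factor of the Gram matrix; row-wise sign flips are expected, since QR fixes signs differently:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((50, 8))

R_qr = np.linalg.qr(A, mode='r')            # upper triangular factor from QR
R_chol = np.linalg.cholesky(A.T @ A).T      # upper triangular Cholesky factor of A^T A

# Identical up to the sign of each row (Cholesky fixes positive diagonal)
signs = np.sign(np.diag(R_qr)) * np.sign(np.diag(R_chol))
print(np.allclose(R_qr, signs[:, None] * R_chol))
```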
C.4 CUR and Interpolative Decompositions
Beyond LU, QR, and Cholesky, recent methods have developed structure-revealing factorizations that select actual columns/rows of :
CUR decomposition: $A \approx CUR$, where $C$ contains a subset of the columns of $A$, $R$ contains a subset of the rows, and $U$ is a small "bridge" matrix. Unlike LU/QR, $C$ and $R$ are actual data columns and rows - interpretable and memory-efficient.
Interpolative decomposition (ID): $A \approx A_{:,J}\,X$, where $J$ is a subset of column indices and $X$ is a well-conditioned coefficient matrix. The ID is computed via column-pivoted QR: the pivot columns from RRQR form $A_{:,J}$.
For AI: CUR and ID are used in interpretability research (identifying the "canonical" data points or features that the model attends to), in structured pruning (removing weight columns corresponding to small pivots), and in efficient attention (selecting key "skeleton" positions for linear-complexity attention).
Appendix D: Implementation Reference
D.1 Python/NumPy/SciPy API
import numpy as np
import scipy.linalg as la
# === LU Factorization ===
# scipy.linalg.lu (returns P, L, U as separate matrices)
P, L, U = la.lu(A)
# verify: A \approx P @ L @ U
# scipy.linalg.lu_factor (returns compact (LU, piv) for solving)
lu, piv = la.lu_factor(A)
x = la.lu_solve((lu, piv), b) # solves Ax = b
# === QR Factorization ===
# Full QR (Q is m x m)
Q, R = la.qr(A)  # default: full
# Thin/economy QR (Q is m x n for an m x n matrix A)
Q, R = la.qr(A, mode='economic')
# Column-pivoted QR (rank-revealing)
Q, R, P = la.qr(A, pivoting=True)
# verify: A[:, P] \approx Q @ R (P is permutation array)
rank_est = np.sum(np.abs(np.diag(R)) > 1e-10 * np.abs(R[0,0]))
# Apply Q^T without forming Q
# (use the raw tau output for LAPACK-level efficiency)
# === Cholesky ===
L = la.cholesky(A, lower=True) # A = L L^T
x = la.cho_solve((L, True), b) # solves Ax = b
# === Triangular solves ===
x = la.solve_triangular(L, b, lower=True) # solves Lx = b
x = la.solve_triangular(U, b, lower=False) # solves Ux = b
D.2 PyTorch API
import torch
n = 500
A = torch.randn(n, n, dtype=torch.float64)
A = A @ A.T + torch.eye(n) * 0.1   # make SPD
b = torch.randn(n, dtype=torch.float64)
# LU
LU, pivots = torch.linalg.lu_factor(A)
x = torch.linalg.lu_solve(LU, pivots, b.unsqueeze(-1)).squeeze(-1)
# QR
Q, R = torch.linalg.qr(A, mode='reduced')  # thin QR
# Cholesky
L = torch.linalg.cholesky(A)
x = torch.cholesky_solve(b.unsqueeze(-1), L).squeeze(-1)
# All support autograd:
A.requires_grad_(True)
L = torch.linalg.cholesky(A)
loss = 2 * torch.log(torch.diagonal(L)).sum()   # log det(A) via the Cholesky diagonal
loss.backward()  # computes grad via implicit differentiation; A.grad is (approximately) A^{-1}
D.3 JAX API
import jax
import jax.numpy as jnp
import jax.scipy.linalg as jla

key = jax.random.PRNGKey(0)
A = jax.random.normal(key, (50, 50)); A = A @ A.T + 0.1 * jnp.eye(50)   # SPD test matrix
b = jnp.ones(50)
# LU (factor once, reuse for many right-hand sides)
lu, piv = jla.lu_factor(A)
x = jla.lu_solve((lu, piv), b)
# QR
Q, R = jnp.linalg.qr(A, mode='reduced')
# Cholesky (supports JIT, vmap, grad)
L = jnp.linalg.cholesky(A)
# Solve via triangular
y = jla.solve_triangular(L, b, lower=True)
x = jla.solve_triangular(L.T, y, lower=False)
# Differentiating log-det through Cholesky:
def log_det_spd(A):
    L = jnp.linalg.cholesky(A)
    return 2 * jnp.sum(jnp.log(jnp.diag(L)))
grad_A = jax.grad(log_det_spd)(A)  # should equal A^{-1} (symmetrized)
D.4 Performance Benchmarks (n = 1000, FP64)
| Operation | NumPy (CPU) | PyTorch (CPU) | PyTorch (A100 GPU) |
|---|---|---|---|
| LU factorization | 85 ms | 78 ms | 3 ms |
| QR factorization | 210 ms | 195 ms | 8 ms |
| Cholesky | 35 ms | 32 ms | 1.5 ms |
| Triangular solve | 12 ms | 10 ms | 0.5 ms |
| Matrix multiply (DGEMM) | 40 ms | 38 ms | 0.8 ms |
Approximate values on typical hardware (2024). GPU speedups increase for larger $n$ as arithmetic intensity grows.
Appendix E: Error Accumulation in Practice
E.1 Condition Number Estimation
Computing the exact condition number $\kappa_2(A) = \sigma_{\max}/\sigma_{\min}$ requires computing all singular values - $O(n^3)$ work. LAPACK provides cheap condition number estimators via inverse iteration:
LAPACK DGECON: Given the LU factorization of $A$ (already computed for solving), estimates $\|A^{-1}\|_1$ in $O(n^2)$ via a sequence of triangular solves. The estimate is typically within a factor of 10 of the true condition number.
Rule of thumb: If DGECON returns $\kappa \approx 10^k$, you lose approximately $k$ digits of precision. For FP64 ($\approx 16$ digits), you have about $16 - k$ correct digits. For FP32 ($\approx 7$ digits), you have about $7 - k$ correct digits.
For AI: PyTorch's torch.linalg.cond computes the exact condition number via SVD. For quick checks without full SVD: torch.linalg.matrix_norm(A) * torch.linalg.matrix_norm(torch.linalg.inv(A)).
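A quick numerical illustration of this rule of thumb, using the Hilbert matrix as a standard ill-conditioned test case (the size $n = 10$ is an arbitrary choice):

```python
import numpy as np

# Hilbert matrix: a classic ill-conditioned test case; kappa grows rapidly with n
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true

kappa = np.linalg.cond(A)
x = np.linalg.solve(A, b)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

# Rule of thumb: correct digits ~ 16 - log10(kappa)
print(f"kappa = {kappa:.1e}, predicted digits ~ {16 - np.log10(kappa):.1f}, "
      f"observed digits ~ {-np.log10(rel_err):.1f}")
```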
E.2 Diagnosis of Numerical Failures
| Symptom | Likely cause | Diagnosis | Fix |
|---|---|---|---|
| LinAlgError: Singular matrix | Zero pivot encountered | $A$ is singular or nearly so | Check np.linalg.matrix_rank(A) |
| nan in output | Zero or tiny pivot in LU | No pivoting, tiny diagonal | Use partial pivoting |
| Solution with large residual | High condition number | Compute $\kappa(A)$ (np.linalg.cond) | Regularize or use iterative refinement |
| LinAlgError from cholesky | Matrix not SPD | Eigenvalues not all positive | Check np.linalg.eigvalsh(A).min() |
| $Q$ not orthogonal ($\|Q^TQ - I\|$ large) | Classical Gram-Schmidt used | Compute $\|Q^TQ - I\|_F$ | Switch to Householder QR |
| Slow factorization | Block size mismatch | Profile with BLAS calls | Tune block size; use BLAS-3 routine |
E.3 Iterative Refinement Implementation
import numpy as np
import scipy.linalg as la

def iterative_refinement(A, b, max_iter=3, tol=1e-14):
    """
    Solve Ax = b with iterative refinement.
    Factor in (simulated) low precision FP32; compute residuals in FP64;
    correction solves reuse the FP32 factors.
    """
    # Factor in FP32 (low-precision factorization)
    A32 = A.astype(np.float32)
    lu, piv = la.lu_factor(A32)
    # Initial solve with the low-precision factors
    x = la.lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for k in range(max_iter):
        # Residual in FP64 (high precision)
        r = b - A @ x
        r_norm = np.linalg.norm(r) / np.linalg.norm(b)
        print(f"  Iter {k}: relative residual = {r_norm:.3e}")
        if r_norm < tol:
            break
        # Correction solve reuses the FP32 factorization
        d = la.lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x = x + d
    return x
Appendix F: The Geometry of Factorizations
F.1 LU as a Basis Change
The LU factorization $A = LU$ can be interpreted geometrically. The unit lower triangular matrix $L$ represents a shearing transformation - it maps the standard basis to the columns of $L$, which form a "lower triangular" basis. The upper triangular factor $U$ then holds the coordinates of the columns of $A$ in this new basis (each column of $A$ is $L$ times the corresponding column of $U$).
Concretely: each step of Gaussian elimination adds multiples of row $k$ to the rows below it, which is a shear in the row space of $A$. The cumulative effect is a change to a triangular basis - the essence of LU.
F.2 QR as a Rotation to Triangular Form
The Householder QR factorization $A = QR$ is geometrically a rotation of the column space of $A$ to align with the coordinate axes. Each Householder reflector $H_k$ reflects the $k$-th "residual" column (its entries from row $k$ down) onto the direction of $e_k$, zeroing out all entries below the diagonal.
After $n$ reflections, the product $H_n\cdots H_1$ is a composition of orthogonal reflections - itself orthogonal - that has rotated $A$ to upper triangular form $R$.
Geometric insight: The columns of form an orthonormal basis for the column space of . The entries of give the coordinates of the original columns of in this orthonormal basis. This is the Gram-Schmidt process, done via reflections rather than projections.
F.3 Cholesky as the Matrix Square Root Factorization
For $A \succ 0$, the Cholesky factor satisfies $A = LL^T$ - making $L$ a "matrix square root" of $A$ in a specific sense. Unlike the symmetric square root $A^{1/2}$ (which is symmetric), the Cholesky factor is lower triangular.
Geometric interpretation: The linear map defined by $L$ transforms the unit ball into the ellipsoid $\{Lz : \|z\| \le 1\}$ - a level set of the quadratic form $x^TA^{-1}x$. This is why Cholesky factorization is the foundation of sampling from $\mathcal{N}(0, \Sigma)$: if $z \sim \mathcal{N}(0, I)$ and $\Sigma = LL^T$, then $Lz \sim \mathcal{N}(0, \Sigma)$.
Uniqueness: The Cholesky factor with positive diagonal entries is unique for every $A \succ 0$. The symmetric square root $A^{1/2}$ is also unique and PSD, but it is NOT the same as $L$ (they differ by an orthogonal factor: $A^{1/2} = LQ$ for some orthogonal $Q$).
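A minimal sampling sketch under these definitions, with an arbitrary SPD covariance constructed for the demo:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3
B = rng.standard_normal((d, d))
Sigma = B @ B.T + 0.5 * np.eye(d)          # an SPD covariance for the demo

L = np.linalg.cholesky(Sigma)              # Sigma = L L^T
z = rng.standard_normal((d, 100_000))      # z ~ N(0, I), columns are samples
x = L @ z                                  # x ~ N(0, Sigma)

print(np.allclose(np.cov(x), Sigma, atol=0.1))   # sample covariance matches Sigma
```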
F.4 Triangular Factorizations and the Flag Manifold
The group $GL_n$ of all invertible matrices acts on the set of factorizations as follows:
- LU factorizations correspond to the Bruhat decomposition $GL_n = \bigsqcup_w B_-\,w\,B_+$, where $B_-$ (lower triangular) and $B_+$ (upper triangular) are Borel subgroups. The "generic" element of $GL_n$ has a unique LU (the open Bruhat cell).
- QR factorizations correspond to $GL_n = O(n)\cdot B_+$, where $O(n)$ is the orthogonal group. This decomposition is always unique (no exceptional cases, unlike LU) once the diagonal of $R$ is taken positive.
- The flag manifold $GL_n/B_+$ parameterizes complete flags and is the natural geometric setting for both LU and QR.
This connection to Lie group theory underlies the deep relationship between matrix factorizations and the geometry of symmetric spaces - a topic developed in Chapter 12 (Functional Analysis).
Appendix G: Randomized Algorithms - Deeper Theory
G.1 The Randomized SVD via QR
The Halko-Martinsson-Tropp (2011) algorithm computes a rank-$r$ approximation of $A \in \mathbb{R}^{m\times n}$ in $O(mnr)$ operations:
Algorithm RANDOMIZED_SVD(A, r, p):
# Oversampling: target rank r + p (p \approx 10)
\Omega = randn(n, r+p) # Gaussian sketch matrix
Y = A @ \Omega # Sketch: Y \in \mathbb{R}^{m\times(r+p)}
Q, _ = qr(Y) # Orthonormalize: Q \in \mathbb{R}^{m\times(r+p)}
B = Q.T @ A # Project: B \in \mathbb{R}^{(r+p)\timesn}
U, \Sigma, Vt = svd(B, full_matrices=False) # SVD of small matrix
U = Q @ U # Lift back to full space
return U[:, :r], \Sigma[:r], Vt[:r, :]
Error bound: With probability at least $1 - O(p^{-p})$ over the random sketch:
$$\|A - U\Sigma V^T\|_2 \;\le\; \left(1 + c\sqrt{(r+p)\,\min(m,n)}\right)\sigma_{r+1}(A)$$
for a modest constant $c$. For $p \approx 10$: the failure probability is below roughly $10^{-9}$, so in practice the bound holds essentially always.
Power iteration improvement: For matrices with slowly decaying singular values, sketching with $Y = (AA^T)^qA\Omega$ (with $q = 1$ or $2$ power iterations) dramatically improves accuracy: the spectrum seen by the range finder decays like $\sigma_j^{2q+1}$.
For AI - GaLore (Zhao et al., 2024): GaLore uses randomized SVD to project gradient matrices onto their principal $r$-dimensional subspace every $T$ steps. The projection matrix is computed via randomized SVD of the gradient, then Adam is applied to the projected gradients. This reduces optimizer memory substantially while maintaining training quality.
G.2 Structured Random Matrices
Instead of dense Gaussian , structured random matrices enable faster sketching:
| Matrix | Sketch cost | Randomness | Error bound |
|---|---|---|---|
| Dense Gaussian | $O(mn(r+p))$ | Full i.i.d. | Optimal |
| SRFT (subsampled random Fourier) | $O(mn\log n)$ | Hadamard/Fourier + sampling | Near-optimal |
| Sparse embedding | $O(\mathrm{nnz}(A))$ | Hash functions | Near-optimal for sparse |
| CountSketch | $O(\mathrm{nnz}(A))$ | Hash functions | Suboptimal but fast |
SRFT (a fast Johnson-Lindenstrauss construction): $\Omega = \sqrt{n/\ell}\,D\,F\,S$, where $D$ is a random diagonal sign matrix, $F$ is the DFT (or Hadamard) matrix, and $S$ samples $\ell$ columns uniformly. Applying $A\Omega$ costs $O(mn\log n)$ via the FFT.
Appendix H: Software Ecosystem
H.1 BLAS and LAPACK Hierarchy
User Code (Python/Julia/MATLAB)
down
SciPy / NumPy / PyTorch / JAX (Python wrappers)
down
LAPACK (high-level: LU, QR, Cholesky, eigensolvers)
down
BLAS Level 3 (DGEMM, DSYRK, DTRSM - blocked, cache-optimal)
BLAS Level 2 (DGEMV, DGER - matrix-vector)
BLAS Level 1 (DDOT, DAXPY, DNRM2 - vector)
down
Hardware-optimized BLAS (MKL, OpenBLAS, cuBLAS, BLIS)
down
CPU/GPU Tensor Cores / AVX-512 / AMX tiles
Key implementations:
- Intel MKL: Optimized for Intel CPUs; 2-3x faster than reference BLAS for blocked operations
- OpenBLAS: Open source, near-MKL performance on x86; default in many Linux distributions
- BLIS (BLAS-like Library Instantiation Software): Framework for portable high-performance BLAS; used by many modern CPU vendors
- cuBLAS: NVIDIA GPU BLAS; powers all PyTorch/JAX GPU linear algebra
- cuSOLVER: NVIDIA GPU LAPACK; powers `torch.linalg` on NVIDIA hardware
- rocBLAS / rocSOLVER: AMD GPU equivalents
H.2 Choosing the Right Routine
Need to solve Ax = b?
+-- A is SPD? -> DPOTRF (Cholesky) + DPOTRS
+-- A is symmetric indefinite? -> DSYTRF (LDL^T) + DSYTRS
+-- A is general square? -> DGESV (= DGETRF + DGETRS, partial pivot)
+-- A is triangular? -> DTRTRS (triangular solve directly)
Need to solve min ||Ax - b||?
+-- A is full-rank, well-conditioned? -> DGELS (QR-based least squares)
+-- A may be rank-deficient? -> DGELSY (column-pivoted QR)
+-- Need minimum-norm solution? -> DGELSD (divide-and-conquer SVD)
Need eigenvalues?
+-- A is symmetric? -> DSYEV (or DSYEVD for large n)
+-- A is general? -> DGEEV (QR iteration)
Need SVD?
+-- Full SVD? -> DGESVD or DGESDD (divide-and-conquer, faster)
+-- Truncated SVD? -> Use randomized SVD (not in LAPACK)
Appendix I: Practical Decision Guide for Practitioners
I.1 Which Factorization to Use? - Decision Tree
The choice among LU, QR, and Cholesky depends on matrix structure, problem type, and numerical requirements. The following decision framework covers the vast majority of practical cases.
MATRIX FACTORIZATION SELECTION GUIDE
========================================================================
Is A symmetric?
+-- YES: Is A positive definite? (all eigenvalues > 0)
| +-- YES: -> CHOLESKY (dpotrf)
| | Cost: n^3/3 flops. Fastest, most stable for SPD.
| | Use for: Gaussians, GP regression, K-FAC, Fisher
| +-- NO: -> LDL^T (dsytrf, Bunch-Kaufman pivoting)
| Cost: n^3/3 flops. Handles indefinite (saddle pts).
| Use for: indefinite Hessians, Newton at saddle pts
+-- NO: Is A square?
+-- YES: -> LU with partial pivoting (dgetrf)
| Cost: 2n^3/3 flops. General purpose.
| Use for: Solving Ax=b, computing det(A)
+-- NO: Is A overdetermined (m > n)? -> LEAST SQUARES
+-- Well-conditioned & fast? -> Normal equations + Cholesky
| Cost: mn^2 + n^3/3. Squares condition number.
+-- General / numerically safe? -> Householder QR (dgelsy)
| Cost: 2mn^2 - 2n^3/3. Standard choice.
+-- May be rank-deficient? -> Column-pivoted QR or SVD
RRQR: O(mn^2), good rank estimate
SVD: O(mn^2+n^3), exact singular values
========================================================================
I.2 Numerical Precision Requirements
| Scenario | Precision needed | Recommended approach |
|---|---|---|
| Single FP32 forward pass | $\sim 10^{-6}$ | Standard FP32 factorization |
| Training with FP16/BF16 | $\sim 10^{-3}$ to $10^{-4}$ | Mixed precision + refinement |
| Scientific computing | $\sim 10^{-16}$ (FP64) | FP64 throughout |
| Certified computation | Exact or interval | Symbolic or interval arithmetic |
| Ill-conditioned system ($\kappa(A) \gtrsim 10^{16}$) | All digits lost in FP64 | Regularize or use extended precision |
I.3 Memory and Flop Budgets
For a matrix of size :
| Factorization | Memory (dense) | Flops | Triangular solve flops |
|---|---|---|---|
| LU | $n^2$ (in-place) | $\tfrac{2}{3}n^3$ | $2n^2$ |
| QR (Householder) | $mn$ + $n$ (for the $\tau$ scalars) | $2mn^2 - \tfrac{2}{3}n^3$ | $O(mn)$ for $Q^Tb$, $n^2$ for the backsolve |
| Cholesky | $n^2/2$ | $\tfrac{1}{3}n^3$ | $2n^2$ |
| LDL^T | $n^2/2$ | $\tfrac{1}{3}n^3$ | $2n^2$ |
Memory for large $n$:
- $n = 10^4$: LU requires $10^8$ doubles = 800 MB (fits in GPU HBM)
- $n = 10^5$: LU requires $10^{10}$ doubles = 80 GB (exceeds most GPUs)
- $n \gtrsim 10^5$: must use iterative solvers (CG, GMRES) or sparse factorizations
I.4 Parallelism Characteristics
| Factorization | CPU parallelism | GPU parallelism | Distributed |
|---|---|---|---|
| Blocked LU | Panel: serial; trailing: DGEMM (parallel) | cuSOLVER DGETRF; good for large $n$ | ScaLAPACK PDGETRF |
| Householder QR | WY representation + DGEMM | cuSOLVER DGEQRF | ScaLAPACK PDGEQRF |
| Cholesky | Panel: serial; trailing: DSYRK (parallel) | cuSOLVER DPOTRF; excellent | ScaLAPACK PDPOTRF |
| Sparse LU | AMD reordering; supernodal parallel | cuSOLVER batch for many small systems | MUMPS, SuperLU_DIST |
Batch factorizations (small systems): For thousands of small systems (common in K-FAC, GP mini-batches), GPU batch factorizations (cublasDgetrfBatched) outperform sequential large-matrix factorizations by 10-100x.
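A minimal PyTorch sketch of the batched pattern (batch size and matrix size are arbitrary demo values):

```python
import torch

# Batched Cholesky: factor many small SPD matrices at once (K-FAC-style workload)
B, n = 4096, 64
A = torch.randn(B, n, n)
A = A @ A.transpose(-1, -2) + 0.1 * torch.eye(n)     # batch of SPD matrices
L = torch.linalg.cholesky(A)                          # one call factors all B matrices
x = torch.cholesky_solve(torch.randn(B, n, 1), L)     # batched solves reuse the factors
print(L.shape, x.shape)                               # (4096, 64, 64), (4096, 64, 1)
```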
I.5 Common Library Calls and Their Underlying Algorithms
| High-level call | Underlying algorithm | Notes |
|---|---|---|
np.linalg.solve(A, b) | LAPACK DGESV (= DGETRF + DGETRS) | Partial pivoting, FP64 |
np.linalg.lstsq(A, b) | LAPACK DGELSD (divide-and-conquer SVD) | Rank-deficient safe |
np.linalg.qr(A) | LAPACK DGEQRF (Householder) | Full or thin via mode |
scipy.linalg.lu(A) | LAPACK DGETRF | Returns P, L, U separately |
scipy.linalg.cholesky(A) | LAPACK DPOTRF | Raises if not SPD |
scipy.linalg.lu_solve((lu,piv), b) | LAPACK DGETRS | Amortized over many b |
torch.linalg.solve(A, b) | cuSOLVER DGESV (GPU) / MKL (CPU) | Autograd supported |
torch.linalg.cholesky(A) | cuSOLVER DPOTRF (GPU) | Autograd via implicit diff |
jax.numpy.linalg.solve(A, b) | XLA:HLO -> cuSOLVER / LAPACK | JIT, vmap, grad |
sklearn.linear_model.Ridge(solver='cholesky') | LAPACK DPOTRS | Normal equations + Cholesky |
Appendix J: Extended Example - GP Hyperparameter Optimization
This appendix walks through a complete GP regression pipeline showing how LU, QR, and Cholesky all appear in different roles within the same ML workflow.
J.1 Problem Setup
We have $n$ training observations $(x_i, y_i)$ with $y_i = f(x_i) + \epsilon_i$, $\epsilon_i \sim \mathcal{N}(0, \sigma_n^2)$. We fit a GP with RBF kernel $k_\theta(x, x')$ and noise variance $\sigma_n^2$.
J.2 Where Each Factorization Appears
Cholesky - computing the GP posterior:
The kernel matrix $K_y = K + \sigma_n^2 I$ is factored via Cholesky, $K_y = LL^T$; the posterior mean and variance are then computed with triangular solves against $L$, with no explicit inverse.
The log-determinant is computed from the Cholesky diagonal at zero additional cost.
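A minimal sketch of this Cholesky-based posterior computation on an arbitrary 1-D toy dataset; the helper name rbf and all sizes are illustrative:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf(X1, X2, ell=0.5, sf=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(8)
X = np.sort(rng.uniform(-3, 3, 40)); y = np.sin(X) + 0.1 * rng.standard_normal(40)
Xs = np.linspace(-3, 3, 200)
sigma_n = 0.1

K = rbf(X, X) + sigma_n**2 * np.eye(len(X))       # K_y = K + sigma_n^2 I
c, low = cho_factor(K, lower=True)                # Cholesky, ~n^3/3 flops
alpha = cho_solve((c, low), y)                    # K_y^{-1} y via two triangular solves
Ks = rbf(Xs, X)
mean = Ks @ alpha                                 # posterior mean
v = cho_solve((c, low), Ks.T)
var = np.diag(rbf(Xs, Xs)) - np.sum(Ks * v.T, axis=1)   # posterior variance
logdet = 2 * np.sum(np.log(np.diag(c)))           # log det from the Cholesky diagonal
```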
LU (implicit, via matrix-vector products) - CG solver for large $n$:
For large $n$, the $\tfrac{1}{3}n^3$ cost of a direct Cholesky becomes prohibitive. Instead, conjugate gradient (CG) is used to solve $K_y\alpha = y$ iteratively. Each CG iteration requires one matrix-vector product $K_y v$, plus the application of a preconditioner.
The preconditioner is a low-rank-plus-diagonal approximation $\hat{K} = U_k\Lambda_kU_k^T + \sigma_n^2 I$, where $U_k\Lambda_kU_k^T$ is a rank-$k$ approximation of $K$ computed via randomized SVD (which uses QR internally). The preconditioned CG converges in far fewer iterations than the unpreconditioned solver.
QR - selecting inducing points:
For a sparse GP with $m \ll n$ inducing points, the inducing point locations are selected via column-pivoted QR on the feature matrix $\Phi$.
The pivot columns of scipy.linalg.qr(Phi.T, pivoting=True) identify the most "important" training points - those that span the feature space. These become the inducing points.
Cholesky - computing the evidence lower bound (ELBO) for inducing point GP:
The sparse GP ELBO requires computing the Cholesky of the $m\times m$ inducing-point kernel matrix plus a correction term - again $O(m^3)$, but with $m \ll n$.
J.3 Hyperparameter Optimization
The hyperparameters $\theta = (\ell, \sigma_f, \sigma_n)$ are optimized by maximizing the log marginal likelihood via gradient-based optimization (L-BFGS or Adam). Each gradient evaluation requires:
- A new Cholesky factorization of $K_y(\theta)$ - $\tfrac{1}{3}n^3$ flops
- Computing $\alpha = K_y^{-1}y$ and the gradient terms $\dfrac{\partial \log p(y)}{\partial \theta_j} = \tfrac{1}{2}\operatorname{tr}\!\left((\alpha\alpha^T - K_y^{-1})\dfrac{\partial K_y}{\partial \theta_j}\right)$
- Each trace computation costs $O(n^2)$ per hyperparameter once $K_y^{-1}$ has been formed from the factorization (one extra $O(n^3)$ step)
Total cost per gradient evaluation: $O(n^3)$, dominated by the Cholesky factorization, plus $O(n^2)$ per hyperparameter - feasible in seconds on modern hardware for moderate $n$.
This complete pipeline illustrates that a single ML workflow (GP regression with hyperparameter optimization) relies on Cholesky (3+ times, for different matrices), randomized QR (for inducing points), and CG with preconditioner (which involves implicit LU-like operations) - all in service of a Bayesian inference computation.
Appendix K: Matrix Decompositions in the 2026 AI Landscape
K.1 Inference-Time Decompositions
As language models move toward longer context (1M+ tokens), the attention computation's quadratic cost has become a primary bottleneck. Several recent methods use matrix decompositions at inference time:
PagedAttention (Kwon et al., 2023): Manages KV cache memory using block-based paging inspired by virtual memory. The KV cache is a large, dynamically growing matrix; PagedAttention tiles it into fixed-size "pages" and maintains a block-structured free list, analogous to the tiled layouts used by blocked factorizations.
MLA (Multi-head Latent Attention, DeepSeek, 2024): Compresses the KV cache via low-rank projection: keys and values are reconstructed from a small shared latent vector through fixed up-projection matrices. A QR decomposition of the projection matrix is used to initialize and orthogonalize the projection basis.
Speculative decoding with draft models: Uses LU-based factored attention matrices to maintain a draft distribution for speculative sampling.
K.2 Training-Time Decompositions
GaLore (Zhao et al., 2024): Gradient Low-Rank Projection. For a weight matrix $W \in \mathbb{R}^{m\times n}$, the gradient $G_t$ is projected onto a rank-$r$ subspace via:
$$R_t = G_tP_t, \qquad P_t \in \mathbb{R}^{n\times r},$$
where $P_t$ comes from the right singular vectors of $G_t$ (randomized SVD). Adam is applied to $R_t$ (memory: $O(mr)$ instead of $O(mn)$), then the update is lifted back: $\Delta W_t = \alpha\,\tilde{R}_tP_t^T$. The SVD is recomputed every $T$ steps.
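A sketch of the projection step under the description above, using a randomized range finder in place of a full randomized SVD; galore_project is an illustrative name, not the reference implementation:

```python
import numpy as np

def galore_project(G, r, oversample=5, seed=0):
    """Illustrative sketch of a GaLore-style projection step: build a rank-r
    basis for the row space of G (its right singular subspace) with a
    randomized range finder, instead of a full randomized SVD."""
    rng = np.random.default_rng(seed)
    m, n = G.shape
    Y = G.T @ rng.standard_normal((m, r + oversample))   # sketch of range(G^T)
    P, _ = np.linalg.qr(Y)                               # n x (r+oversample) basis
    return P[:, :r]                                      # n x r projection matrix

m, n, r = 256, 512, 8
rng = np.random.default_rng(1)
G = rng.standard_normal((m, 16)) @ rng.standard_normal((16, n))   # low-rank-ish gradient
P = galore_project(G, r)
R = G @ P                            # optimizer state lives in m x r, not m x n
G_back = R @ P.T                     # lift the (updated) low-rank state back
print(np.linalg.norm(G - G_back) / np.linalg.norm(G))
```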
Fira (Chen et al., 2024): Extends GaLore with a closed-form gradient approximation that avoids the SVD step entirely, using a QR-based approximation to the projection.
SOAP (Vyas et al., 2024): Combines Shampoo's Kronecker-structured preconditioner with Adam's update rule. Each layer's preconditioner is maintained in Kronecker-factored form, with iterative Cholesky-based Newton steps for the matrix square root.
K.3 Model Compression via Decompositions
SliceGPT (Ashkboos et al., 2024): Compresses transformer weights by applying QR-based slice operations: a structured pruning where the weight matrix is decomposed as $W = Q\tilde{W}$ and the orthogonal factor $Q$ is absorbed into the layer normalization.
ASVD (Yuan et al., 2023): Activation-aware SVD for LLM compression. Instead of an SVD of $W$ directly, computes an SVD of $WS$ (the weight matrix scaled by a typical activation statistic $S$), finding a better rank-$r$ approximation for the actual inference distribution. Uses randomized QR for efficiency.
QuIP (Chee et al., 2023) and QuIP# (Tseng et al., 2024): Incoherence-based quantization. Random orthogonal matrices $U, V$ (Hadamard-based) are applied to $W$ before quantization: $\tilde{W} = UWV^T$. The orthogonal preprocessing ensures that no single weight has disproportionate magnitude, enabling aggressive quantization. The QR connection: $U$ and $V$ are randomized Hadamard transforms, the same structured sketch matrices used in randomized QR.
K.4 Numerical Linear Algebra for AI Accelerators
Tensor Cores and Mixed Precision: NVIDIA A100/H100 Tensor Cores perform the fused multiply-accumulate $D = AB + C$, where $A, B$ are FP16/BF16 and the accumulators $C, D$ are FP32, delivering roughly an order of magnitude more matrix-multiply throughput than FP64. All modern factorization libraries exploit this:
- cuSOLVER RF (Iterative Refinement): FP16 factor + FP32 residual + FP32 correction - full FP32 accuracy at near-FP16 throughput
- cuBLAS Tensor Core GEMM: All blocked factorizations (LU, QR, Cholesky trailing updates) use TC-GEMM for the dominant computation
Intel AMX (Advanced Matrix Extensions): New Intel architecture extensions (Sapphire Rapids, 2023) add hardware matrix tile registers and tile-based GEMM instructions. BLIS and MKL exploit AMX for a substantial speedup over AVX-512 on blocked factorizations.
AMD MI300X: 192 GB HBM3 + 5.3 TB/s bandwidth enables very large Cholesky factorizations entirely in GPU memory - sufficient for large GP regression without blocking across devices.
Appendix L: Summary of Key Results
For quick reference, here are the key theorems and facts from this section:
LU Factorization:
- Exists iff all leading principal minors $\det(A_{1:k,\,1:k}) \neq 0$ for $k = 1, \dots, n-1$.
- With partial pivoting: $PA = LU$ with $|L_{ij}| \le 1$; cost $\tfrac{2}{3}n^3$ flops.
- Backward error: $\hat{L}\hat{U} = PA + \Delta A$ with $\|\Delta A\| \le c\,n^2\rho_n\,u\,\|A\|$, where $\rho_n$ is the growth factor.
- Growth factor: $\rho_n \le 2^{n-1}$ (theoretical worst case), typically modest in practice.
- Determinant: $\det(A) = (-1)^s\prod_i U_{ii}$, where $s$ = number of row swaps.
QR Factorization:
- Always exists for any $A \in \mathbb{R}^{m\times n}$ with $m \ge n$.
- Householder QR: backward stable with $\|\Delta A\| \le c\,mn\,u\,\|A\|$, independent of $\kappa(A)$.
- Cost: $2mn^2 - \tfrac{2}{3}n^3$ flops (vs $\tfrac{2}{3}n^3$ for LU when $m = n$).
- QR for least squares: precision loss governed by $\kappa(A)$ (vs $\kappa(A)^2$ for the normal equations).
- RRQR: the diagonal of $R$ estimates the singular values; reveals rank.
Cholesky Factorization:
- Exists iff $A = A^T \succ 0$ (equivalently: all eigenvalues positive).
- Cost: $\tfrac{1}{3}n^3$ flops (half of LU); unconditionally backward stable without pivoting.
- Log-determinant: $\log\det A = 2\sum_i\log L_{ii}$ - exact, $O(n)$ after Cholesky.
- Gradient: $\nabla_A\log\det A = A^{-1}$ - computed in $O(n^3)$ via Cholesky solves.
Backward Stability:
- Forward error $\lesssim \kappa(A)\,\times$ backward error (fundamental inequality).
- Backward stability hierarchy: Cholesky $=$ Householder QR $>$ LU with partial pivoting $\gg$ LU without pivoting.
- Iterative refinement: recovers FP64 accuracy from a low-precision factorization in a few iterations, provided $\kappa(A)$ is well below the reciprocal of the low-precision unit roundoff.
Appendix M: Exercises - Further Problems
M.1 Theoretical Extensions
Problem M.1. Show that for any invertible $A \in \mathbb{R}^{n\times n}$, there exists a permutation matrix $P$ such that $PA$ has an LU factorization. (Hint: show that there always exists a row ordering such that all leading principal minors of $PA$ are nonzero.)
Problem M.2. Prove the identity $\det(A) = \prod_i U_{ii}$ using the LU factorization. Extend to show that $\det(PA) = (-1)^s\det(A)$, where $s$ is the number of transpositions in $P$.
Problem M.3. For SPD $A$ with Cholesky $A = LL^T$, prove that the Schur complement of $A_{11}$ in $A$ equals $L_{22}L_{22}^T$, where $L_{22}$ is the lower-right block of $L$. Conclude that the Schur complement of any SPD matrix is SPD.
Problem M.4. Show that the Householder reflector $H = I - 2vv^T/\|v\|^2$ satisfies: (a) $H = H^T$ (symmetric) (b) $H^TH = I$ (orthogonal) (c) $\det(H) = -1$ (orientation-reversing) (d) $H^2 = I$ (involutory - its own inverse)
Problem M.5. Prove that QR with column pivoting yields a near-best low-rank approximation in the following sense: if $A\Pi = QR$ and $k$ is chosen so that the trailing block $R_{22}$ (rows and columns $k{+}1$ onward) is small, then the rank-$k$ approximation obtained by truncating the factorization has error $\|A - A_k\|_2 = \|R_{22}\|_2$, and $\sigma_{k+1}(A) \le \|R_{22}\|_2$.
M.2 Computational Challenges
Challenge M.6 (Batched Cholesky). Implement batched Cholesky factorization for a batch of small SPD matrices. Compare three implementations:
- A Python loop over scipy.linalg.cholesky
- NumPy vectorized via np.vectorize
- torch.linalg.cholesky on CPU with a batch dimension
Measure time and throughput (matrices/second). Extrapolate to the K-FAC setting, where one small factorization is needed per layer of a large network.
Challenge M.7 (Iterative Refinement). Construct a matrix with a large, controlled condition number (approaching the FP64 limit). Solve $Ax = b$ using: (a) direct LU in FP64: measure the relative error; (b) simulated FP32 LU + FP64 residual refinement (3 steps): measure the relative error. Compare and explain the improvement using backward error theory.
Challenge M.8 (Rank-revealing QR in practice). Download a real dataset (e.g. the UCI Breast Cancer dataset: 569 samples, 30 features). Apply RRQR to the feature matrix and identify the effective rank (using a relative threshold on $|R_{kk}|/|R_{11}|$). Verify by comparing to the number of significant singular values of the feature matrix.
References
- Golub, G. H. & Van Loan, C. F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press. - The definitive reference for all material in this section.
- Trefethen, L. N. & Bau, D. (1997). Numerical Linear Algebra. SIAM. - Exceptionally clear treatment of Householder QR and backward error analysis.
- Higham, N. J. (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. - Comprehensive analysis of numerical stability; source for all error bounds cited here.
- Demmel, J. W. (1997). Applied Numerical Linear Algebra. SIAM. - Excellent treatment of LU pivoting strategies and condition estimation.
- Householder, A. S. (1958). Unitary triangularization of a nonsymmetric matrix. Journal of the ACM, 5(4), 339-342. - Original Householder reflector paper.
- Wilkinson, J. H. (1961). Error analysis of direct methods of matrix inversion. Journal of the ACM, 8(3), 281-330. - Foundational backward error analysis for LU.
- Halko, N., Martinsson, P.-G., & Tropp, J. A. (2011). Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2), 217-288. - Randomized QR/SVD framework.
- Murray, I. (2016). Differentiation of the Cholesky decomposition. arXiv:1602.07527. - Reverse-mode gradient formula used in Section 8.4.
- Martens, J. & Grosse, R. (2015). Optimizing neural networks with Kronecker-factored approximate curvature. ICML 2015. - K-FAC: Cholesky for Fisher information blocks.
- Hu, E. J. et al. (2022). LoRA: Low-rank adaptation of large language models. ICLR 2022. - Low-rank decomposition for fine-tuning.
- Zhao, J. et al. (2024). GaLore: Memory-efficient LLM training by gradient low-rank projection. ICML 2024. - Randomized SVD/QR for gradient compression.
- Bunch, J. R. & Kaufman, L. (1977). Some stable methods for calculating inertia and solving symmetric linear systems. Mathematics of Computation, 31(137), 163-179. - LDL^T with Bunch-Kaufman pivoting.
- Anderson, E. et al. (1999). LAPACK Users' Guide (3rd ed.). SIAM. - LAPACK routine reference and algorithmic details.
- Gardner, J. R. et al. (2018). GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration. NeurIPS 2018. - CG-based GP inference for large-scale regression.
- Demmel, J., Grigori, L., Hoemmen, M., & Langou, J. (2008). Communication-optimal parallel and sequential QR and LU factorizations. SIAM Journal on Scientific Computing, 34(1), A206-A239. - TSQR and communication-avoiding algorithms.
- Shah, J. et al. (2024). FlashAttention-3: Fast and accurate attention with asynchrony and low-precision. arXiv:2407.08608. - Block-tiled attention exploiting BLAS-3 structure.