"Orthogonality is the single most important idea in applied mathematics. When things are perpendicular, they don't interfere with each other - and that's the key to making computation tractable."
Overview
Orthogonality is the mathematical formalization of "not interfering." When two vectors are orthogonal, they carry completely independent information. When a set of basis vectors is orthonormal, coordinates can be computed by simple dot products. When a matrix is orthogonal, it preserves lengths and angles - it cannot distort or amplify errors.
This section develops the full theory of orthogonality from first principles. We begin with the geometric intuition - right angles, projections, and the remarkable simplification that occurs when bases are orthonormal. We then build the formal machinery: Gram-Schmidt orthogonalization (classical and numerically stable modified form), Householder and Givens methods for QR decomposition, orthogonal projections and least squares, and the spectral theorem for symmetric matrices.
Throughout, the AI connections are direct and structural. Orthogonal weight initialization preserves gradient norms through linear networks. RoPE encodes position via rotation matrices - orthogonal transformations that preserve the inner products needed for attention. Spectral normalization constrains weight matrices to have unit spectral norm. The QR algorithm for eigenvalues underlies every numerical eigensolver. Understanding orthogonality at depth is understanding the numerical backbone of machine learning.
Prerequisites
- Inner products and dot products - Chapter 2 01: Vectors and Spaces
- Vector spaces, subspaces, dimension - Chapter 2 06: Vector Spaces and Subspaces
- Linear transformations, kernel, image - 04: Linear Transformations
- Eigenvalues and eigenvectors (used in 7) - 01: Eigenvalues and Eigenvectors
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive exploration: Gram-Schmidt visualization, projection geometry, QR factorization, spectral theorem, orthogonal init |
| exercises.ipynb | 8 graded exercises from projection computation through RoPE and orthogonal initialization |
Learning Objectives
After completing this section, you will be able to:
- State and prove the Cauchy-Schwarz inequality and derive the angle formula
- Prove that any orthogonal set of nonzero vectors is linearly independent
- Compute coordinates in an orthonormal basis using Fourier coefficients
- Construct orthogonal complements and verify the orthogonal decomposition theorem
- Characterize orthogonal matrices via $Q^\top Q = I$ and prove they are isometries
- Implement Gram-Schmidt orthogonalization (classical and modified forms)
- Explain why modified Gram-Schmidt is numerically superior to classical
- Construct QR decompositions via Gram-Schmidt, Householder reflectors, and Givens rotations
- Solve least squares problems via QR and explain why this is numerically superior to normal equations
- State and prove the spectral theorem for real symmetric matrices
- Apply orthogonal initialization, RoPE, and spectral normalization in AI contexts
Table of Contents
- 1. Intuition
- 2. Formal Definitions
- 3. Orthogonal Matrices
- 4. Orthogonal Projections
- 5. Gram-Schmidt Orthogonalization
- 6. Householder Reflectors and Givens Rotations
- 7. QR Decomposition: Theory and Applications
- 8. The Spectral Theorem for Symmetric Matrices
- 9. Orthogonality in Function Spaces
- 10. Applications in Machine Learning
- 11. Common Mistakes
- 12. Exercises
- 13. Why This Matters for AI (2026 Perspective)
- 14. Advanced Topics
- Conceptual Bridge
- Appendices A-D
1. Intuition
1.1 What Orthogonality "Buys" You
Orthogonality is the key that makes linear algebra computationally tractable. Without it, working in a basis requires solving systems of equations. With it, everything simplifies to dot products.
Five computational miracles of orthogonality:
Miracle 1: Coordinates become dot products. In a general basis $\{b_1, \dots, b_n\}$, finding the coordinates of a vector $v$ requires solving an $n \times n$ linear system. In an orthonormal basis $\{q_1, \dots, q_n\}$, the $i$-th coordinate is $c_i = \langle v, q_i \rangle$ - just a dot product. No system to solve.
Miracle 2: Length is preserved component-wise. In an ONB: $\|v\|^2 = \sum_i c_i^2$ (Parseval's identity). The squared norm is simply the sum of squared coordinates - no cross terms.
Miracle 3: Orthogonal matrices have condition number 1. The condition number of an orthogonal matrix $Q$ is $\kappa(Q) = \sigma_{\max}/\sigma_{\min} = 1$. This is the best possible - orthogonal systems are maximally well-conditioned. Solving $Qx = b$ is trivially $x = Q^\top b$.
Miracle 4: Projection is the best approximation. The orthogonal projection of $v$ onto a subspace $S$ gives the closest point in $S$ to $v$. This is the geometric foundation of least squares, PCA, and low-rank approximation.
Miracle 5: Composition of orthogonal maps is free. Two isometries composed give another isometry. Chaining rotations, reflections, and permutations never amplifies numerical error.
THE ORTHOGONALITY ADVANTAGE
========================================================================
General basis {b_1, ..., b_n} Orthonormal basis {q_1, ..., q_n}
----------------------------- ----------------------------------
Coordinates: solve Bc = v Coordinates: c^i = <v, q^i> (dot!)
Norm: v^T B^T B v Norm: \Sigma c^i^2 (Parseval)
Projection: A(A^T A)^-^1 A^T b Projection: QQ^T b (just Q^T then Q)
Solve Ax=b: Gaussian elim. Solve Qx=b: x = Q^T b (trivial)
Condition#: can be >> 1 Condition#: exactly 1
========================================================================
1.2 Geometric Picture
In Euclidean space, two vectors are orthogonal if they meet at a right angle. The dot product measures "how much they point in the same direction":
- $\langle u, v \rangle = \|u\|\,\|v\|$: $\theta = 0°$ (parallel, maximum alignment)
- $\langle u, v \rangle = 0$: $\theta = 90°$ (orthogonal, zero alignment)
- $\langle u, v \rangle = -\|u\|\,\|v\|$: $\theta = 180°$ (antiparallel, maximum anti-alignment)
The projection picture. The orthogonal projection of $v$ onto $u$ is the "shadow" of $v$ cast perpendicularly onto the line through $u$: $\mathrm{proj}_u(v) = \frac{\langle v, u \rangle}{\langle u, u \rangle}\, u$.
The error $v - \mathrm{proj}_u(v)$ is orthogonal to $u$. This Pythagorean decomposition is the geometric core of all least-squares theory.
Orthonormal bases as "rulers." A set of orthonormal vectors $\{q_1, \dots, q_n\}$ forms a perfect coordinate system: each $q_i$ is a unit "ruler" pointing in an independent direction. The coordinate of $v$ along $q_i$ is exactly $\langle v, q_i \rangle$ - how far $v$ extends in the $i$-th direction.
1.3 Why Orthogonality Matters for AI
PCA finds orthogonal directions of variance. The principal components are the eigenvectors of the sample covariance matrix. By the spectral theorem (Section 8), these eigenvectors are orthogonal. PCA is possible - and gives uncorrelated components - precisely because of orthogonality.
Attention subspaces. In multi-head attention, each head projects queries and keys into a $d_k$-dimensional subspace. The heads are designed to attend to different aspects of the input. The more orthogonal the subspaces, the more independent information each head extracts.
Orthogonal weight initialization (Saxe et al., 2013) initializes weight matrices as random orthogonal matrices (via QR decomposition of a random Gaussian matrix). This preserves the norm of activations and gradients through linear layers, enabling training of very deep networks without vanishing/exploding gradients.
RoPE (Su et al., 2021) encodes position by rotating the query/key vectors by an angle proportional to the token position in each 2D subspace. The key property: the dot product between a rotated query at position $m$ and a rotated key at position $n$ depends only on $m - n$ - position appears as a relative rotation. This works because rotations are orthogonal: $R_m^\top R_n = R_{n-m}$, so only the relative rotation survives in the inner product.
Numerical stability. QR decomposition is the backbone of numerical linear algebra. Householder QR is used to solve least-squares problems, compute eigenvalues (QR iteration), and factor matrices. Its superiority over LU (for rectangular systems) stems from orthogonality: the Householder reflectors are orthogonal transformations that never amplify errors.
1.4 Historical Timeline
| Year | Person | Contribution |
|---|---|---|
| 1801 | Gauss | Least squares by orthogonal projection (unpublished until 1809) |
| 1805 | Legendre | Published least squares - minimizing sum of squared residuals |
| 1883 | Jørgen Gram | Gram-Schmidt process published in Danish |
| 1907 | Erhard Schmidt | Gram-Schmidt republished and popularized |
| 1958 | Alston Householder | Householder reflections for stable QR factorization |
| 1961 | J.G.F. Francis | Practical QR algorithm for eigenvalue computation |
| 1961 | V. Kublanovskaya | Independent discovery of QR algorithm (USSR) |
| 1965 | James Wilkinson | Error analysis of Householder QR; numerical stability theory |
| 2013 | Saxe, McClelland, Ganguli | Orthogonal initialization for deep linear networks |
| 2021 | Su et al. | RoPE: rotary positional encoding via orthogonal rotation matrices |
| 2021 | Hua et al. | Transformer Quality with orthogonal attention mechanisms |
2. Formal Definitions
2.1 Inner Products and the Angle Formula
Definition (Inner Product). A real inner product on a vector space $V$ is a function $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ satisfying:
- Symmetry: $\langle u, v \rangle = \langle v, u \rangle$
- Linearity in first argument: $\langle \alpha u + \beta w, v \rangle = \alpha \langle u, v \rangle + \beta \langle w, v \rangle$
- Positive definiteness: $\langle v, v \rangle \ge 0$ with equality iff $v = 0$
The standard inner product (dot product) on $\mathbb{R}^n$ is $\langle u, v \rangle = u^\top v = \sum_{i=1}^n u_i v_i$.
The induced norm $\|v\| = \sqrt{\langle v, v \rangle}$ is the Euclidean length of $v$.
Theorem 2.1.1 (Cauchy-Schwarz Inequality). For any $u, v$ in an inner product space:
$$|\langle u, v \rangle| \le \|u\|\,\|v\|,$$
with equality iff $u$ and $v$ are linearly dependent (one is a scalar multiple of the other).
Proof. For any $t \in \mathbb{R}$, $0 \le \|u - tv\|^2$. Expanding:
$$0 \le \|u\|^2 - 2t\,\langle u, v \rangle + t^2 \|v\|^2.$$
This is a quadratic in $t$ that is always non-negative, so its discriminant must be $\le 0$:
$$4\langle u, v \rangle^2 - 4\|u\|^2\|v\|^2 \le 0,$$
which gives $|\langle u, v \rangle| \le \|u\|\,\|v\|$. Equality holds iff some $t$ gives $\|u - tv\| = 0$, i.e., $u = tv$.
The angle formula. By Cauchy-Schwarz, $\frac{\langle u, v \rangle}{\|u\|\,\|v\|} \in [-1, 1]$, so it is valid to define the angle $\theta$ between $u$ and $v$ by:
$$\cos\theta = \frac{\langle u, v \rangle}{\|u\|\,\|v\|}.$$
Corollary (Triangle Inequality). $\|u + v\| \le \|u\| + \|v\|$.
Proof: $\|u + v\|^2 = \|u\|^2 + 2\langle u, v \rangle + \|v\|^2 \le \|u\|^2 + 2\|u\|\,\|v\| + \|v\|^2 = (\|u\| + \|v\|)^2$.
2.2 Orthogonal and Orthonormal Vectors
Definition. Two vectors $u, v$ are orthogonal (written $u \perp v$) if $\langle u, v \rangle = 0$.
A vector $u$ is a unit vector if $\|u\| = 1$. The normalization of nonzero $v$ is $\hat{v} = v / \|v\|$.
Two vectors are orthonormal if they are both orthogonal AND both unit vectors: $\langle u, v \rangle = 0$ and $\|u\| = \|v\| = 1$.
Key consequences of orthogonality:
1. Zero is orthogonal to everything: $\langle 0, v \rangle = 0$ for all $v$ (since $\langle 0, v \rangle = \langle 0 \cdot v, v \rangle = 0 \cdot \langle v, v \rangle$).
2. Pythagorean theorem: If $u \perp v$, then $\|u + v\|^2 = \|u\|^2 + \|v\|^2$. Proof: $\|u + v\|^2 = \|u\|^2 + 2\langle u, v \rangle + \|v\|^2 = \|u\|^2 + \|v\|^2$.
3. Generalized Pythagorean theorem: For mutually orthogonal $v_1, \dots, v_k$: $\|v_1 + \cdots + v_k\|^2 = \|v_1\|^2 + \cdots + \|v_k\|^2$.
4. Linearity of orthogonality: If $u \perp v$ and $u \perp w$, then $u \perp (\alpha v + \beta w)$ for all scalars $\alpha, \beta$.
2.3 Orthogonal Sets and Linear Independence
Definition. A set $\{v_1, \dots, v_k\}$ is an orthogonal set if $\langle v_i, v_j \rangle = 0$ for all $i \ne j$. It is an orthonormal set if additionally each $\|v_i\| = 1$.
Theorem 2.3.1. Any orthogonal set of nonzero vectors is linearly independent.
Proof. Suppose $c_1 v_1 + \cdots + c_k v_k = 0$. Take the inner product of both sides with $v_j$:
$$0 = \left\langle \sum_i c_i v_i,\ v_j \right\rangle = \sum_i c_i \langle v_i, v_j \rangle = c_j \|v_j\|^2.$$
Since $v_j \ne 0$, we have $\|v_j\|^2 > 0$, so $c_j = 0$. This holds for all $j$.
Remark. This theorem gives an effortless proof of linear independence for orthogonal sets - no row reduction needed. The inner product "isolates" each coefficient.
Non-examples:
- $\{(1, 0), (1, 1)\}$ is not orthogonal: $\langle (1, 0), (1, 1) \rangle = 1 \ne 0$.
- $\{(1, 0, 0), (0, 1, 0), (1, 1, 1)\}$ - the last vector breaks orthogonality, since it is not perpendicular to the first two.
2.4 Orthonormal Bases
Definition. An orthonormal basis (ONB) for $\mathbb{R}^n$ is a set $\{q_1, \dots, q_n\}$ that is both orthonormal and a basis for $\mathbb{R}^n$.
Theorem 2.4.1 (Fourier Coefficients). If $\{q_1, \dots, q_n\}$ is an ONB for $\mathbb{R}^n$, then for any $v$:
$$v = \sum_{i=1}^n \langle v, q_i \rangle\, q_i.$$
The coefficients $c_i = \langle v, q_i \rangle$ are called Fourier coefficients (or coordinates of $v$ in the ONB).
Proof. Since $\{q_i\}$ is a basis, $v = \sum_i c_i q_i$ for some unique $c_i$. Taking the inner product with $q_j$: $\langle v, q_j \rangle = \sum_i c_i \langle q_i, q_j \rangle = c_j$.
Theorem 2.4.2 (Parseval's Identity). Under the same conditions:
$$\|v\|^2 = \sum_{i=1}^n \langle v, q_i \rangle^2.$$
Proof. $\|v\|^2 = \left\langle \sum_i c_i q_i,\ \sum_j c_j q_j \right\rangle = \sum_{i,j} c_i c_j \langle q_i, q_j \rangle = \sum_i c_i^2$.
Remark. Parseval's identity says: in an ONB, the squared norm equals the sum of squared coordinates. There are no cross terms - the orthonormality kills them all. This is the high-dimensional Pythagorean theorem.
Matrix form. Assembling the ONB into a matrix $Q = [q_1 | \cdots | q_n]$, the Fourier decomposition is simply $v = Q Q^\top v$: first compute coordinates $c = Q^\top v$, then reconstruct $v = Qc$.
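As a quick sanity check, here is a minimal NumPy sketch of the matrix form; the basis used is just the Q factor of a random Gaussian matrix, chosen purely for illustration:

```python
import numpy as np

# Build an orthonormal basis Q via QR of a random matrix (illustrative)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

v = rng.standard_normal(4)
c = Q.T @ v                                      # Fourier coefficients c_i = <v, q_i>
assert np.allclose(Q @ c, v)                     # reconstruction v = Q c
assert np.isclose(np.sum(c**2), np.dot(v, v))    # Parseval's identity
```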
2.5 Orthogonal Complements
Definition. Given a subspace $S \subseteq \mathbb{R}^n$, the orthogonal complement is:
$$S^\perp = \{ v \in \mathbb{R}^n : \langle v, s \rangle = 0 \text{ for all } s \in S \}.$$
Theorem 2.5.1. $S^\perp$ is a subspace of $\mathbb{R}^n$.
Proof: If $u, v \in S^\perp$ and $\alpha, \beta$ are scalars, then for any $s \in S$: $\langle \alpha u + \beta v, s \rangle = \alpha \langle u, s \rangle + \beta \langle v, s \rangle = 0$.
Theorem 2.5.2 (Orthogonal Decomposition). If $S$ is a subspace of the finite-dimensional space $\mathbb{R}^n$, then:
$$\mathbb{R}^n = S \oplus S^\perp.$$
Every $v$ decomposes uniquely as $v = v_S + v_\perp$ with $v_S \in S$ and $v_\perp \in S^\perp$.
Moreover: $\dim S + \dim S^\perp = n$.
Proof. Let $\{q_1, \dots, q_k\}$ be an ONB for $S$. Define $v_S = \sum_{i=1}^k \langle v, q_i \rangle q_i$ (the projection onto $S$) and $v_\perp = v - v_S$. Then $v = v_S + v_\perp$ with $v_S \in S$. For any $q_j$:
$$\langle v_\perp, q_j \rangle = \langle v, q_j \rangle - \sum_i \langle v, q_i \rangle \langle q_i, q_j \rangle = \langle v, q_j \rangle - \langle v, q_j \rangle = 0.$$
So $v_\perp \perp q_j$ for all $j$, hence $v_\perp \in S^\perp$. Uniqueness: if $v = v_S' + v_\perp'$ also, then $v_S - v_S' = v_\perp' - v_\perp \in S \cap S^\perp = \{0\}$.
Connection to the four fundamental subspaces. For $A \in \mathbb{R}^{m \times n}$:
- $\mathrm{row}(A) \perp \mathrm{null}(A)$ in $\mathbb{R}^n$, with $\mathrm{null}(A) = \mathrm{row}(A)^\perp$
- $\mathrm{col}(A) \perp \mathrm{null}(A^\top)$ in $\mathbb{R}^m$, with $\mathrm{null}(A^\top) = \mathrm{col}(A)^\perp$
These orthogonality relations are the deep structure behind the four fundamental subspaces (see 04: Linear Transformations 4.5).
2.6 Orthogonality in Complex Inner Product Spaces
The theory extends to complex vector spaces with a Hermitian (sesquilinear) inner product:
$$\langle u, v \rangle = \sum_{i=1}^n u_i \overline{v_i}.$$
The complex analog of orthogonal matrices is unitary matrices satisfying $U^* U = I$, where $U^* = \overline{U}^\top$ is the conjugate transpose.
Unitary group: $U(n)$ contains all $n \times n$ unitary matrices. It reduces to $O(n)$ when restricted to real matrices.
Special unitary group: $SU(n) = \{ U \in U(n) : \det U = 1 \}$.
Key properties of unitary matrices:
- $|\det U| = 1$ (a complex number of modulus 1, not just $\pm 1$)
- Eigenvalues lie on the unit circle: $|\lambda_i| = 1$
- Preserve complex inner products: $\langle Ux, Uy \rangle = \langle x, y \rangle$
- The DFT matrix is unitary (not orthogonal, since its entries are complex)
Hermitian matrices ($A^* = A$) are the complex analogs of real symmetric matrices. The spectral theorem holds: every Hermitian matrix has real eigenvalues and a unitary eigenbasis.
For AI, complex-valued neural networks and quantum computing naturally use unitary matrices. The DFT, crucial for signal processing and some positional encoding schemes, is a unitary transform in .
2.7 Abstract Inner Product Spaces: Axioms
We have been working with the Euclidean inner product $\langle u, v \rangle = u^\top v$. More generally, an inner product on a real vector space $V$ is a bilinear map $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ satisfying:
| Axiom | Expression | Meaning |
|---|---|---|
| Symmetry | $\langle u, v \rangle = \langle v, u \rangle$ | Order doesn't matter |
| Linearity | $\langle \alpha u + \beta w, v \rangle = \alpha\langle u, v \rangle + \beta\langle w, v \rangle$ | Linear in first argument |
| Positive definiteness | $\langle v, v \rangle \ge 0$, and $\langle v, v \rangle = 0 \iff v = 0$ | Self-inner product is positive |
Any positive definite symmetric matrix $M$ defines an inner product:
$$\langle u, v \rangle_M = u^\top M v.$$
Example (precision-weighted inner product): In Bayesian statistics, the Mahalanobis distance between $x$ and $\mu$ is:
$$d_\Sigma(x, \mu) = \sqrt{(x - \mu)^\top \Sigma^{-1} (x - \mu)}.$$
This is the distance induced by the inner product $\langle u, v \rangle_{\Sigma^{-1}} = u^\top \Sigma^{-1} v$. In this metric, vectors along high-variance directions of $\Sigma$ are "shorter" (the precision $\Sigma^{-1}$ down-weights those directions).
Orthogonality is metric-dependent. Vectors that are orthogonal in the Euclidean metric may not be orthogonal in the Mahalanobis metric, and vice versa. The choice of inner product determines what "perpendicular" means.
For ML: The natural gradient method (Amari 1998) uses the Fisher information matrix $F$ to define the inner product, giving parameter updates $\Delta\theta = -\eta\, F^{-1} \nabla_\theta \mathcal{L}$. This is gradient descent in the Riemannian metric defined by $F$.
3. Orthogonal Matrices
3.1 Definition and Characterizations
Definition. A square matrix $Q \in \mathbb{R}^{n \times n}$ is orthogonal if:
$$Q^\top Q = Q Q^\top = I.$$
Equivalently, $Q^{-1} = Q^\top$ - the transpose is the inverse.
Characterization via columns. $Q$ is orthogonal if and only if its columns $q_1, \dots, q_n$ form an orthonormal basis for $\mathbb{R}^n$:
$$\langle q_i, q_j \rangle = \delta_{ij}.$$
The condition $Q^\top Q = I$ encodes exactly these inner products.
Characterization via rows. By the symmetry $Q Q^\top = I$, the rows of $Q$ also form an orthonormal basis. This is a non-trivial statement: a square matrix whose columns are orthonormal must also have orthonormal rows (and vice versa).
Characterization via norms (isometry property). $Q$ is orthogonal if and only if it preserves the Euclidean norm:
$$\|Qx\| = \|x\| \quad \text{for all } x.$$
Proof: $\|Qx\|^2 = (Qx)^\top (Qx) = x^\top Q^\top Q x = x^\top x = \|x\|^2$.
Characterization via inner products. $Q$ preserves inner products:
$$\langle Qx, Qy \rangle = \langle x, y \rangle.$$
Proof: $\langle Qx, Qy \rangle = x^\top Q^\top Q y = x^\top y$.
This means orthogonal matrices preserve angles and distances - they are isometries of Euclidean space.
3.2 The Orthogonal Group and Special Orthogonal Group
The set of all $n \times n$ orthogonal matrices forms the orthogonal group:
$$O(n) = \{ Q \in \mathbb{R}^{n \times n} : Q^\top Q = I \}.$$
Group axioms:
- Closure: If $Q_1, Q_2 \in O(n)$, then $(Q_1 Q_2)^\top (Q_1 Q_2) = Q_2^\top Q_1^\top Q_1 Q_2 = I$, so $Q_1 Q_2 \in O(n)$.
- Identity: $I \in O(n)$.
- Inverses: $Q^{-1} = Q^\top \in O(n)$ (since $(Q^\top)^\top Q^\top = Q Q^\top = I$).
- Associativity: Inherited from matrix multiplication.
Determinant. Since $\det(Q^\top Q) = \det(Q)^2 = \det(I) = 1$, we have $\det Q = \pm 1$.
- $\det Q = +1$: rotations - orientation-preserving isometries
- $\det Q = -1$: improper rotations - reflections (or a rotation composed with a reflection)
The special orthogonal group $SO(n) = \{ Q \in O(n) : \det Q = +1 \}$ consists of pure rotations.
In 2D:
$$SO(2) = \left\{ R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} : \theta \in [0, 2\pi) \right\}.$$
This is the group of rotations of the plane.
In 3D: $SO(3)$ parameterizes 3D rotations. This group is the configuration space of a rigid body's orientation and appears throughout robotics, computer vision, and physics. Its double cover $SU(2)$ (the unit quaternions) appears in quaternion-based rotation representations used in game engines and IMUs.
3.3 Condition Number and Numerical Properties
The condition number of an invertible matrix $A$ measures how much it amplifies relative errors:
$$\kappa(A) = \|A\|\,\|A^{-1}\| = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}.$$
For orthogonal matrices $Q$: all singular values equal 1, so $\kappa(Q) = 1$. Orthogonal matrices have the best possible condition number.
This means:
- Multiplying by $Q$ never amplifies rounding errors
- Solving $Qx = b$ is trivially stable: $x = Q^\top b$
- QR factorization (which uses orthogonal matrices) is numerically stable
For AI: Deep learning systems that maintain orthogonal weight matrices throughout training benefit from gradient signals that are neither exploding nor vanishing. The isometry property means the network's "signal amplification" is exactly 1 at those layers.
3.4 Eigenvalues of Orthogonal Matrices
Theorem. The eigenvalues of a real orthogonal matrix $Q$ lie on the unit circle in $\mathbb{C}$: $|\lambda| = 1$.
Proof: If $Qv = \lambda v$ with $v \ne 0$:
$$\|v\|^2 = \|Qv\|^2 = \|\lambda v\|^2 = |\lambda|^2 \|v\|^2.$$
Dividing by $\|v\|^2 \ne 0$: $|\lambda| = 1$.
For real eigenvalues: $\lambda = 1$ (rotation-like) or $\lambda = -1$ (reflection-like). Complex eigenvalues come in conjugate pairs $e^{\pm i\theta}$ corresponding to 2D rotations.
3.5 The Cayley Transform
The Cayley transform establishes a correspondence between orthogonal matrices and skew-symmetric matrices (matrices satisfying $A^\top = -A$).
Definition. For a skew-symmetric matrix $A$ (such that $I + A$ is invertible):
$$Q = (I - A)(I + A)^{-1}.$$
Claim: $Q$ is orthogonal.
Proof: Using $(I - A)^\top = I + A$ and $(I + A)^\top = I - A$ (since $A^\top = -A$):
$$Q^\top Q = (I - A)^{-1}(I + A)(I - A)(I + A)^{-1}.$$
Since $(I + A)$ and $(I - A)$ commute (both are polynomials in $A$), we can swap the middle factors:
$$Q^\top Q = (I - A)^{-1}(I - A)(I + A)(I + A)^{-1} = I.$$
Hence $Q \in O(n)$.
Inverse Cayley transform. Given orthogonal $Q$ without eigenvalue $-1$:
$$A = (I - Q)(I + Q)^{-1}.$$
Why this matters:
- Skew-symmetric matrices form a vector space (easy to parameterize)
- The Cayley transform maps this space bijectively to "most" orthogonal matrices
- This gives a smooth parameterization of near the identity - useful for Riemannian optimization
Example in 2D: $A = \begin{pmatrix} 0 & t \\ -t & 0 \end{pmatrix}$ (skew-symmetric) gives:
$$Q = (I - A)(I + A)^{-1} = \frac{1}{1 + t^2}\begin{pmatrix} 1 - t^2 & -2t \\ 2t & 1 - t^2 \end{pmatrix}.$$
With $t = \tan(\theta/2)$ (the Weierstrass substitution), this is exactly the rotation matrix $R_\theta$. The Cayley transform thus rationalizes the trigonometric functions appearing in rotation matrices.
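A small numerical sketch of the Cayley transform, using a random skew-symmetric matrix purely for illustration:

```python
import numpy as np

def cayley(A):
    """Map a skew-symmetric matrix A to an orthogonal matrix via (I - A)(I + A)^{-1}."""
    I = np.eye(A.shape[0])
    return (I - A) @ np.linalg.inv(I + A)

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M - M.T                                   # skew-symmetric part: A^T = -A
Q = cayley(A)
print(np.allclose(Q.T @ Q, np.eye(5)))        # True: Q is orthogonal
print(np.isclose(np.linalg.det(Q), 1.0))      # True: the image lies in SO(n)
```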
Applications in neural networks: Neural networks that maintain orthogonal weight matrices throughout training can parameterize them via their skew-symmetric "log" and exponentiate via the matrix exponential or Cayley transform. This enables gradient descent on using Lie algebra tools.
4. Orthogonal Projections
4.1 The Projection Formula
Definition. The orthogonal projection of onto a subspace is the unique vector such that .
Formula (1D case). Projection onto the line spanned by unit vector :
The matrix is the rank-1 projection matrix.
Formula (general case). If for a matrix with linearly independent columns:
The matrix is the projection matrix onto .
When has orthonormal columns (i.e., with ), this simplifies dramatically:
This is why orthonormal bases make projections computationally trivial.
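A short NumPy sketch contrasting the two formulas; the subspace is spanned by two random vectors, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2))              # two independent columns spanning S
v = rng.standard_normal(6)

# General formula: P = A (A^T A)^{-1} A^T
P_general = A @ np.linalg.solve(A.T @ A, A.T)

# Orthonormal-basis formula: P = Q Q^T, with Q from the thin QR of A
Q, _ = np.linalg.qr(A)
P_ortho = Q @ Q.T

assert np.allclose(P_general @ v, P_ortho @ v)        # same projection
assert np.allclose(A.T @ (v - P_ortho @ v), 0.0)      # residual is orthogonal to S
```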
4.2 Projection Matrices: Properties
A matrix $P$ is a projection if and only if it is idempotent: $P^2 = P$.
A projection is an orthogonal projection if and only if it is also symmetric: $P^\top = P$.
Verification: For $P = Q Q^\top$ with $Q^\top Q = I$:
- Idempotent: $P^2 = Q Q^\top Q Q^\top = Q (Q^\top Q) Q^\top = Q Q^\top = P$. OK
- Symmetric: $P^\top = (Q Q^\top)^\top = Q Q^\top = P$. OK
Eigenvalues of projections. Every projection has eigenvalues 0 and 1 only (and is diagonalizable).
Proof: If $Pv = \lambda v$, then $P^2 v = \lambda^2 v$, so $\lambda^2 = \lambda$, giving $\lambda = 0$ or $\lambda = 1$.
Geometrically: eigenvectors with $\lambda = 1$ lie in the range of $P$ (they are unchanged), while eigenvectors with $\lambda = 0$ lie in the null space (they are annihilated).
4.3 Best Approximation Theorem
Theorem (Best Approximation). The projection $P_S v$ is the closest point in $S$ to $v$:
$$\|v - P_S v\| \le \|v - s\| \quad \text{for all } s \in S,$$
with equality only when $s = P_S v$.
Proof. For any $s \in S$:
$$\|v - s\|^2 = \|(v - P_S v) + (P_S v - s)\|^2 = \|v - P_S v\|^2 + 2\langle v - P_S v,\ P_S v - s \rangle + \|P_S v - s\|^2.$$
Since $v - P_S v \in S^\perp$ and $P_S v - s \in S$, the cross term vanishes:
$$\|v - s\|^2 = \|v - P_S v\|^2 + \|P_S v - s\|^2 \ge \|v - P_S v\|^2.$$
This theorem is everywhere in machine learning:
- Least squares: The normal equations $A^\top A \hat{x} = A^\top b$ give the projection of $b$ onto $\mathrm{col}(A)$
- PCA: Principal components are the projection onto the subspace of maximum variance (-> 03: PCA)
- Attention: Softmax attention can be viewed as computing a weighted projection of value vectors
- Linear regression: The fitted values are $\hat{y} = Hy$, where $H = X(X^\top X)^{-1}X^\top$ is the hat matrix
4.4 Projection in Coordinate Systems
Given an ONB $\{q_1, \dots, q_n\}$ for $\mathbb{R}^n$, the projection onto the subspace $S = \mathrm{span}(q_1, \dots, q_k)$ (for $k \le n$) is:
$$P_S = \sum_{i=1}^k q_i q_i^\top.$$
This is a sum of rank-1 projectors, one per basis direction. The decomposition:
$$I = \sum_{i=1}^n q_i q_i^\top = P_S + P_{S^\perp}$$
is the resolution of the identity - every vector is split into its $S$ and $S^\perp$ components.
5. Gram-Schmidt Orthogonalization
5.1 The Algorithm: Classical Gram-Schmidt
Problem. Given linearly independent vectors $a_1, \dots, a_k$, construct an orthonormal basis $q_1, \dots, q_k$ for $\mathrm{span}(a_1, \dots, a_k)$.
Classical Gram-Schmidt (CGS):
Initialization: $q_1 = a_1 / \|a_1\|$.
Iteration: For $j = 2, \dots, k$:
$$u_j = a_j - \sum_{i=1}^{j-1} \langle a_j, q_i \rangle\, q_i, \qquad q_j = u_j / \|u_j\|.$$
What it does geometrically: At step $j$, we subtract the projection of $a_j$ onto the span of the already-processed vectors, leaving the component orthogonal to everything before it. Normalizing gives the $j$-th orthonormal vector.
Inductive proof of correctness. After step $j$, $\{q_1, \dots, q_j\}$ is an ONB for $\mathrm{span}(a_1, \dots, a_j)$.
Base: $j = 1$: $q_1$ is a unit vector spanning the same subspace. OK
Step: Assume true for $j - 1$. Then $u_j = a_j - P_{j-1} a_j$, where $P_{j-1}$ projects onto $\mathrm{span}(q_1, \dots, q_{j-1})$.
For any $i < j$: $\langle u_j, q_i \rangle = \langle a_j, q_i \rangle - \langle a_j, q_i \rangle = 0$. OK
Since $a_1, \dots, a_j$ are independent, $u_j \ne 0$ (otherwise $a_j \in \mathrm{span}(a_1, \dots, a_{j-1})$). So $q_j = u_j / \|u_j\|$ is a valid unit vector.
5.2 Modified Gram-Schmidt
The numerical problem with CGS. In floating-point arithmetic, CGS can catastrophically lose orthogonality. This happens when $a_j$ is nearly in the span of $a_1, \dots, a_{j-1}$ - rounding errors in the projection subtraction can accumulate, producing a $q_j$ that isn't truly orthogonal to earlier vectors.
Modified Gram-Schmidt (MGS) reorders the computation to orthogonalize against already-computed sequentially rather than all at once:
for j = 1 to k:
v = a_j
for i = 1 to j-1:
v = v - <v, q_i> q_i # orthogonalize against q_i using CURRENT v
q_j = v / ||v||
Why MGS is more stable. In CGS, all projections use the original . In MGS, after each orthogonalization step, the updated is used - so numerical errors from the first projection are themselves projected away in subsequent steps.
Theorem (Björck 1967). The $Q$ computed by MGS satisfies $\|Q^\top Q - I\| = O(\varepsilon\, \kappa(A))$, where $\kappa(A)$ is the condition number of the input matrix and $\varepsilon$ is machine precision. CGS guarantees only $\|Q^\top Q - I\| = O(\varepsilon\, \kappa(A)^2)$.
Practical advice: Use MGS for explicit basis construction; use Householder for QR factorization (even more stable).
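The difference is easy to see numerically. Below is a minimal sketch of both variants, tested on the (ill-conditioned) Hilbert matrix; the implementations are illustrative, not production code:

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt: all projections use the ORIGINAL column."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def mgs(A):
    """Modified Gram-Schmidt: each projection uses the CURRENT residual v."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ v) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Ill-conditioned test matrix: the 10x10 Hilbert matrix (illustrative only)
n = 10
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)
for name, f in [("CGS", cgs), ("MGS", mgs)]:
    Q = f(H)
    err = np.max(np.abs(Q.T @ Q - np.eye(n)))
    print(f"{name}: max |Q^T Q - I| = {err:.2e}")   # CGS loses far more orthogonality
```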
5.3 Gram-Schmidt and QR Factorization
Gram-Schmidt applied to the columns of produces a QR factorization:
where $Q = [q_1 | \cdots | q_k]$ has orthonormal columns and $R$ is upper triangular with:
$$r_{ij} = \langle a_j, q_i \rangle \ (i \le j), \qquad r_{jj} = \|u_j\|.$$
The diagonal entries $r_{jj}$ are positive (since $u_j \ne 0$ for independent columns).
This gives a unique QR factorization of $A$ with positive diagonal entries of $R$.
For AI: Gram-Schmidt implicitly occurs in layer normalization variants, in computing orthogonal weight bases for LoRA, and in the derivation of stable training algorithms.
6. Householder Reflectors and Givens Rotations
6.1 Householder Reflectors
Motivation. While Gram-Schmidt produces QR by building column by column, Householder applies successive orthogonal transformations to reduce to upper triangular form directly - achieving greater numerical stability.
Definition. The Householder reflector determined by a unit vector $u$ (the normal to the mirror) is:
$$H = I - 2\, u u^\top.$$
Properties:
- Symmetric: $H^\top = H$. OK
- Orthogonal: $H^\top H = (I - 2uu^\top)^2 = I - 4uu^\top + 4u(u^\top u)u^\top = I$. OK
- Involution: $H^2 = I$, so $H^{-1} = H$. OK
- Determinant: $\det H = -1$ (it's a reflection, not a rotation)
Action: $H$ reflects any vector across the hyperplane with normal $u$:
- Vectors in the hyperplane are unchanged: $Hx = x$ if $x \perp u$
- The normal vector is negated: $Hu = -u$
Zeroing a column. To map a vector $x$ to a multiple of $e_1$ (zeroing all components below the first entry), choose:
$$v = x + \mathrm{sign}(x_1)\,\|x\|\, e_1, \qquad u = v / \|v\|, \qquad Hx = -\mathrm{sign}(x_1)\,\|x\|\, e_1.$$
(The sign choice avoids catastrophic cancellation when $x$ is nearly parallel to $e_1$.)
6.2 QR via Householder
To factor $A \in \mathbb{R}^{m \times n}$:
- Find $H_1$ to zero the first column below row 1: $H_1 A$ has $\pm\|a_1\|$ in position $(1,1)$ and zeros below
- Find $H_2$ (acting on the trailing $(m-1) \times (n-1)$ submatrix) to zero the second column below row 2
- Continue: $H_n \cdots H_2 H_1 A = R$ (upper triangular)
Then $Q = H_1 H_2 \cdots H_n$ (a product of reflectors, hence orthogonal), and $A = QR$.
Complexity: about $2mn^2 - \tfrac{2}{3}n^3$ flops - the same asymptotic complexity as Gram-Schmidt but with better numerical constants.
Stability: Householder QR is backward stable, meaning the computed factors satisfy $\hat{Q}\hat{R} = A + \Delta A$ where $\|\Delta A\| = O(\varepsilon \|A\|)$.
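A compact, unoptimized sketch of Householder QR under the conventions above (production code keeps the reflectors in packed form rather than accumulating $Q$ explicitly):

```python
import numpy as np

def householder_qr(A):
    """Householder QR: returns orthogonal Q (m x m) and upper-triangular R (m x n)."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])   # sign chosen to avoid cancellation
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue
        u = v / nv
        # Apply H = I - 2 u u^T to the trailing block of R and accumulate Q = H_1 H_2 ...
        R[k:, :] -= 2.0 * np.outer(u, u @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ u, u)
    return Q, R

A = np.random.default_rng(3).standard_normal((5, 3))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))   # True True
```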
6.3 Givens Rotations
Definition. The Givens rotation $G(i, j, \theta)$ equals the identity except in rows and columns $i$ and $j$, where it acts as a 2D rotation: $G_{ii} = G_{jj} = \cos\theta$, $G_{ij} = \sin\theta$, $G_{ji} = -\sin\theta$. It rotates in the $(i, j)$-plane and leaves all other coordinates fixed.
Purpose: Choose $\theta$ so that one chosen entry becomes zero after the rotation - zeroing a matrix one element at a time.
When to use: Givens rotations are preferred when is sparse (each rotation affects only two rows/columns), for parallel computation, and in eigenvalue algorithms like QR iteration.
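A minimal sketch of a single Givens rotation zeroing one entry (sign conventions for $s$ vary across references):

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

a, b = 3.0, 4.0
c, s = givens(a, b)
G = np.array([[c, s], [-s, c]])
print(G @ np.array([a, b]))               # [5., 0.]: the second entry is zeroed
print(np.allclose(G.T @ G, np.eye(2)))    # True: G is orthogonal
```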
7. QR Decomposition: Theory and Applications
7.1 Existence and Uniqueness
Theorem (QR Decomposition). Every matrix $A \in \mathbb{R}^{m \times n}$ with $m \ge n$ admits a factorization:
$$A = QR,$$
where $Q \in \mathbb{R}^{m \times n}$ has orthonormal columns ($Q^\top Q = I_n$) and $R \in \mathbb{R}^{n \times n}$ is upper triangular.
This is the thin (reduced) QR decomposition. There is also a full QR decomposition $A = \tilde{Q}\tilde{R}$, where $\tilde{Q} \in \mathbb{R}^{m \times m}$ is a full orthogonal matrix and $\tilde{R} \in \mathbb{R}^{m \times n}$ is upper trapezoidal.
Uniqueness (when $A$ has full column rank). If we require $r_{jj} > 0$ for all $j$, the thin QR decomposition is unique.
Proof sketch: Two decompositions $Q_1 R_1 = Q_2 R_2$ give $Q_2^\top Q_1 = R_2 R_1^{-1}$. The left side is orthogonal; the right side is upper triangular. An upper triangular orthogonal matrix must be diagonal with entries $\pm 1$. With the sign convention $r_{jj} > 0$, this diagonal must be $I$.
When $A$ has rank $r < n$: The decomposition still exists but $R$ has zero diagonal entries. The columns of $Q$ corresponding to zero $r_{jj}$ can be chosen as any completion to an orthonormal set.
7.2 QR for Least Squares
Problem. Given $A \in \mathbb{R}^{m \times n}$ ($m \ge n$, full column rank) and $b \in \mathbb{R}^m$, solve:
$$\min_x \|Ax - b\|^2.$$
Solution via QR. Let $A = QR$ (thin QR, $Q^\top Q = I$):
$$\|Ax - b\|^2 = \|QRx - b\|^2 = \|Rx - Q^\top b\|^2 + \|(I - QQ^\top)b\|^2.$$
The second term is independent of $x$, so the minimum occurs at:
$$Rx = Q^\top b.$$
This is an upper triangular system, solved by back substitution in $O(n^2)$ operations.
Comparison with normal equations. The normal equations $A^\top A x = A^\top b$ are equivalent but form $A^\top A$, which squares the condition number: $\kappa(A^\top A) = \kappa(A)^2$. QR avoids this squaring and is numerically preferred unless $A$ is very well-conditioned.
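A small experiment contrasting the two approaches on an ill-conditioned Vandermonde system (the polynomial degree and noise level below are arbitrary illustrations):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)
A = np.vander(t, 12, increasing=True)          # ill-conditioned design matrix
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-10 * rng.standard_normal(50)

# Normal equations: the condition number is squared
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR: solve R x = Q^T b by back substitution
Q, R = np.linalg.qr(A)
x_qr = solve_triangular(R, Q.T @ b)

print("kappa(A)     =", np.linalg.cond(A))
print("kappa(A^T A) =", np.linalg.cond(A.T @ A))
print("normal-eq error:", np.linalg.norm(x_ne - x_true))
print("QR error:       ", np.linalg.norm(x_qr - x_true))   # typically much smaller
```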
7.3 QR Algorithm for Eigenvalues
Preview. The QR algorithm - the standard method for computing eigenvalues of dense (in particular, symmetric) matrices - works by iterating:
$$A_k = Q_k R_k \ (\text{QR factorization}), \qquad A_{k+1} = R_k Q_k.$$
Each $A_{k+1} = Q_k^\top A_k Q_k$ is orthogonally similar to $A_k$, so eigenvalues are preserved. Under mild conditions, $A_k$ converges to upper triangular (Schur) form and the diagonal entries converge to the eigenvalues. This connects orthogonal matrices directly to spectral theory.
Forward Reference: The full QR algorithm with shifts is covered in 08: Matrix Decompositions. The eigenvalue theory it computes is developed in 01: Eigenvalues and Eigenvectors.
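A toy, unshifted version of the iteration (real solvers first reduce to Hessenberg or tridiagonal form and use shifts; the matrix below is a random symmetric example):

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((5, 5))
A = M + M.T                          # symmetric test matrix (illustrative)

Ak = A.copy()
for _ in range(200):                 # unshifted QR iteration: A_{k+1} = R_k Q_k
    Qk, Rk = np.linalg.qr(Ak)
    Ak = Rk @ Qk

print(np.sort(np.diag(Ak)))              # approximate eigenvalues from the diagonal
print(np.sort(np.linalg.eigvalsh(A)))    # reference eigenvalues
```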
8. The Spectral Theorem for Symmetric Matrices
8.1 Statement and Proof
Theorem (Spectral Theorem). Every real symmetric matrix $A = A^\top \in \mathbb{R}^{n \times n}$ satisfies:
- (1) All eigenvalues are real: $\lambda_i \in \mathbb{R}$
- (2) Eigenvectors for distinct eigenvalues are orthogonal
- (3) There exists an orthonormal basis of eigenvectors: $A = Q \Lambda Q^\top$, where $Q \in O(n)$ and $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$
Proof of (1): Suppose $Av = \lambda v$ with $v \in \mathbb{C}^n$, $v \ne 0$. Take the Hermitian inner product with $v$:
$$v^* A v = \lambda\, v^* v, \qquad v^* A v = (Av)^* v = \bar{\lambda}\, v^* v \quad (\text{using } A^* = A).$$
So $\lambda = \bar{\lambda}$ is real, hence $\lambda \in \mathbb{R}$.
Proof of (2): If $Av = \lambda v$ and $Aw = \mu w$ with $\lambda \ne \mu$:
$$\lambda \langle v, w \rangle = \langle Av, w \rangle = \langle v, Aw \rangle = \mu \langle v, w \rangle.$$
Since $\lambda \ne \mu$: $\langle v, w \rangle = 0$.
Proof of (3): By induction on $n$. The base case ($n = 1$) is trivial. For the inductive step: let $q_1$ be a unit eigenvector for eigenvalue $\lambda_1$. Let $W = q_1^\perp$ (the orthogonal complement). Since $A$ is symmetric, $W$ is $A$-invariant (if $w \perp q_1$, then $\langle Aw, q_1 \rangle = \langle w, Aq_1 \rangle = \lambda_1 \langle w, q_1 \rangle = 0$). Apply the inductive hypothesis to the symmetric restriction $A|_W$.
8.2 The Rayleigh Quotient
Definition. The Rayleigh quotient of a symmetric matrix $A$ at a nonzero vector $x$ is:
$$R_A(x) = \frac{x^\top A x}{x^\top x}.$$
Key properties:
- $R_A(v_i) = \lambda_i$ for eigenvectors $v_i$
- $\lambda_{\min} \le R_A(x) \le \lambda_{\max}$ for all $x \ne 0$
- $\max_x R_A(x) = \lambda_{\max}$, achieved at the top eigenvector
- $\min_x R_A(x) = \lambda_{\min}$, achieved at the bottom eigenvector
Proof of bounds: Expand $x = \sum_i c_i q_i$ in the eigenbasis:
$$R_A(x) = \frac{\sum_i \lambda_i c_i^2}{\sum_i c_i^2}.$$
This is a weighted average of eigenvalues, hence bounded by $\lambda_{\min}$ and $\lambda_{\max}$.
Courant-Fischer Min-Max Theorem. The $k$-th largest eigenvalue satisfies:
$$\lambda_k = \max_{\dim S = k}\ \min_{x \in S,\ x \ne 0} R_A(x).$$
This variational characterization is fundamental to perturbation theory and numerical linear algebra.
8.3 Quadratic Forms and Symmetric Matrices
A quadratic form is $q(x) = x^\top A x$ for symmetric $A$.
Positive definiteness via the spectral theorem: $A \succ 0$ (positive definite) $\iff$ all eigenvalues $\lambda_i > 0$.
Proof: In the eigenbasis, $x^\top A x = \sum_i \lambda_i (q_i^\top x)^2$. This is positive for all $x \ne 0$ iff all $\lambda_i > 0$.
For AI: The Hessian of a loss function is symmetric. Its eigenvalues tell us the curvature in each direction - large eigenvalues are "sharp" directions (fast loss change), small eigenvalues are "flat" directions. The Rayleigh quotient appears in sharpness-aware minimization (SAM), gradient clipping analysis, and understanding training dynamics.
9. Orthogonality in Function Spaces
9.1 The L^2 Inner Product
Orthogonality extends naturally from finite-dimensional vector spaces to infinite-dimensional function spaces. The space $L^2([a, b])$ of square-integrable functions carries the inner product:
$$\langle f, g \rangle = \int_a^b f(x)\, g(x)\, dx.$$
Orthogonality of functions: $f \perp g$ if $\int_a^b f(x)\, g(x)\, dx = 0$.
Example: On $[-\pi, \pi]$, the functions $1, \cos x, \sin x, \cos 2x, \sin 2x, \dots$ form a mutually orthogonal family:
$$\int_{-\pi}^{\pi} \cos(mx)\cos(nx)\, dx = 0 \ (m \ne n), \qquad \int_{-\pi}^{\pi} \cos(mx)\sin(nx)\, dx = 0.$$
This is the foundation of Fourier series - a basis expansion for functions.
9.2 Fourier Series as Orthonormal Expansion
Orthonormal basis for $L^2([-\pi, \pi])$:
$$\left\{ \frac{1}{\sqrt{2\pi}},\ \frac{\cos nx}{\sqrt{\pi}},\ \frac{\sin nx}{\sqrt{\pi}} : n = 1, 2, \dots \right\}.$$
The Fourier series of $f$ is its expansion in this ONB:
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos nx + b_n \sin nx \right),$$
where the Fourier coefficients are orthogonal projections:
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\, dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\, dx.$$
Parseval's theorem for Fourier series:
$$\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)^2\, dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} (a_n^2 + b_n^2).$$
This is the infinite-dimensional analog of $\|v\|^2 = \sum_i c_i^2$.
9.3 Discrete Fourier Transform as a Unitary Matrix
The Discrete Fourier Transform (DFT) maps $x \in \mathbb{C}^N$ to $\hat{x} \in \mathbb{C}^N$:
$$\hat{x}_k = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n / N}.$$
In matrix form: $\hat{x} = F x$, where $F_{kn} = \frac{1}{\sqrt{N}}\, e^{-2\pi i k n / N}$.
Key fact: $F$ is unitary (the complex analog of orthogonal):
$$F^* F = I.$$
This is because the DFT basis vectors are orthonormal in $\mathbb{C}^N$:
$$\frac{1}{N}\sum_{n=0}^{N-1} e^{2\pi i (k - l) n / N} = \delta_{kl}.$$
(The last equality is the geometric series identity for roots of unity.)
Parseval's theorem for DFT: $\|\hat{x}\|^2 = \|x\|^2$.
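A quick check of unitarity and Parseval for the DFT matrix; note that np.fft.fft uses the unnormalized convention unless norm="ortho" is passed:

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # unitary DFT matrix

print(np.allclose(F.conj().T @ F, np.eye(N)))                # True: F is unitary

x = np.random.default_rng(6).standard_normal(N)
x_hat = F @ x
print(np.isclose(np.linalg.norm(x_hat), np.linalg.norm(x)))  # Parseval: norms agree
print(np.allclose(np.fft.fft(x, norm="ortho"), x_hat))       # matches the FFT routine
```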
For AI: The DFT/FFT appears in:
- Signal processing: Feature extraction from audio (MFCCs, spectrograms)
- Efficient convolutions: - used in efficient CNN implementations
- Positional encodings: Sinusoidal positional embeddings in the original Transformer are Fourier features
- SSMs: State space models (Mamba, S4) use structured orthogonal/unitary matrices
10. Applications in Machine Learning
10.1 Orthogonal Weight Initialization
The problem with random initialization. If weight matrices have singular values spread over a wide range, gradients either explode or vanish during backpropagation through many layers.
Saxe et al. (2013) showed that initializing with orthogonal matrices preserves gradient norms through linear networks:
Theorem. For a deep linear network $f(x) = W_L \cdots W_2 W_1 x$ with orthogonal weight matrices (each $W_i^\top W_i = I$), the Jacobian of the full network is $J = W_L \cdots W_1$, which is also orthogonal (a product of orthogonal matrices). Hence $\|Jx\| = \|x\|$ - gradients neither explode nor vanish.
Practical implementation:
- Generate a random matrix with i.i.d. standard normal entries: $M \in \mathbb{R}^{n \times n}$, $M_{ij} \sim \mathcal{N}(0, 1)$
- Compute its QR decomposition: $M = QR$
- Use $Q$ (after fixing column signs with $\mathrm{sign}(\mathrm{diag}(R))$ for a uniform, i.e. Haar, distribution over $O(n)$) as the initial weight matrix
This is implemented as torch.nn.init.orthogonal_() in PyTorch.
Why it works beyond linear networks: Even in nonlinear networks, orthogonal initialization places the initial weights in a "good" part of weight space where gradients are well-conditioned at initialization. Empirically, networks with orthogonal initialization often converge faster on deep architectures.
10.2 Rotary Position Embeddings (RoPE)
Standard self-attention lacks inherent position information. The original Transformer adds sinusoidal absolute position embeddings. RoPE (Su et al. 2021, used in LLaMA, GPT-NeoX, Falcon) takes a different approach: it encodes position by rotating query and key vectors.
Construction (2D case). For position $m$, the rotation matrix is:
$$R_m = \begin{pmatrix} \cos m\theta & -\sin m\theta \\ \sin m\theta & \cos m\theta \end{pmatrix}.$$
For a $d$-dimensional embedding, pair up dimensions and apply $R_m$ to each pair using different frequencies $\theta_i = 10000^{-2i/d}$.
Key property. The inner product of rotated queries and keys depends only on the relative position $m - n$:
$$\langle R_m q,\ R_n k \rangle = q^\top R_m^\top R_n k = q^\top R_{n-m} k,$$
since $R_m^\top R_n = R_{n-m}$ (rotation matrices satisfy $R_m^\top = R_{-m}$, because the inverse of a rotation is the rotation in the opposite direction).
Orthogonality is central: The key equation uses the fact that $R_m$ is orthogonal ($R_m^\top R_m = I$), so $R_m^\top = R_m^{-1} = R_{-m}$.
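A two-dimensional sketch of the relative-position property; the frequency, vectors, and positions below are arbitrary illustrations:

```python
import numpy as np

def rot(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

theta = 0.1                        # one RoPE frequency (illustrative)
q = np.array([0.3, -1.2])          # a 2D query slice
k = np.array([0.7, 0.5])           # a 2D key slice

m, n = 17, 4                       # absolute positions
lhs = (rot(m * theta) @ q) @ (rot(n * theta) @ k)    # <R_m q, R_n k>
rhs = q @ (rot((n - m) * theta) @ k)                 # q^T R_{n-m} k
print(np.isclose(lhs, rhs))        # True: only the relative offset n - m matters
```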
10.3 Spectral Normalization
Spectral normalization (Miyato et al. 2018) stabilizes GAN training by constraining weight matrices to have spectral norm (= largest singular value) equal to 1.
Implementation: At each forward pass, divide the weight matrix by its (estimated) spectral norm: $W_{\mathrm{SN}} = W / \sigma_{\max}(W)$, with $\sigma_{\max}$ approximated by a step of power iteration.
Connection to orthogonality: A square matrix with all singular values equal to 1 is exactly an orthogonal matrix. Spectral normalization enforces only $\sigma_{\max}(W) = 1$ while letting the other singular values vary freely - it's a "one-sided" orthogonality constraint.
Why it helps GANs: The discriminator satisfies a Lipschitz condition when all weight layers are spectrally normalized. This relates to the Wasserstein GAN objective, which requires a 1-Lipschitz discriminator.
10.4 QR in Optimization: Orthogonal Gradients
Several modern optimization techniques use orthogonality:
Shampoo optimizer (Gupta et al. 2018): Maintains a Kronecker-factored preconditioner and periodically computes its orthogonal "square root" via eigendecomposition, orthogonalizing the gradient update directions.
Muon optimizer (2024): Orthogonalizes gradient matrices via Newton-Schulz iterations (a matrix-valued version of Newton's method for computing polar decomposition), then applies the orthogonalized gradient as an update. This ensures each weight matrix sees updates with balanced singular values, preventing the accumulation of rank deficiency.
Gradient orthogonality in continual learning: Methods like Gradient Episodic Memory (GEM) and Orthogonal Gradient Descent (OGD) project gradients for new tasks onto the orthogonal complement of the gradient space for previous tasks - preserving past performance while learning new tasks.
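Returning to the Muon-style orthogonalization step above: the plain Newton-Schulz iteration below converges to the orthogonal polar factor of a matrix. The scaling and coefficients are the textbook ones, not necessarily those used by any particular optimizer implementation:

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=25):
    """Approximate the orthogonal polar factor of G via the plain Newton-Schulz iteration
    X <- 1.5 X - 0.5 X X^T X, which converges to U V^T for G = U S V^T once the
    singular values are scaled into the region of convergence."""
    X = G / np.linalg.norm(G)          # scale so all singular values are <= 1
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

G = np.random.default_rng(7).standard_normal((8, 8))   # a stand-in "gradient" matrix
Q = newton_schulz_orthogonalize(G)
print(np.max(np.abs(Q.T @ Q - np.eye(8))))              # small: Q is (nearly) orthogonal
```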
10b. Bessel's Inequality and Completeness
10b.1 Bessel's Inequality
When we expand a vector $v$ in terms of an orthonormal set $\{q_1, \dots, q_k\}$ that is not necessarily a complete basis for the entire space, we get Bessel's inequality:
$$\sum_{i=1}^k \langle v, q_i \rangle^2 \le \|v\|^2.$$
Proof. Let $p = \sum_{i=1}^k \langle v, q_i \rangle\, q_i$ be the projection of $v$ onto $\mathrm{span}(q_1, \dots, q_k)$. Then:
$$0 \le \|v - p\|^2 = \|v\|^2 - \sum_{i=1}^k \langle v, q_i \rangle^2.$$
The inequality follows immediately.
Equality holds if and only if $v = p$ - i.e., when $v$ itself lies in the subspace spanned by the chosen orthonormal vectors.
Parseval's identity is the limit case: when $\{q_i\}$ is a complete ONB:
$$\sum_i \langle v, q_i \rangle^2 = \|v\|^2.$$
10b.2 Completeness of Orthonormal Sets
An orthonormal set in a Hilbert space is complete (or maximal) if it satisfies any of the following equivalent conditions:
- $\{q_i\}$ is an ONB: every $v$ can be written $v = \sum_i \langle v, q_i \rangle\, q_i$
- Parseval's identity holds: $\|v\|^2 = \sum_i \langle v, q_i \rangle^2$ for all $v$
- $\langle v, q_i \rangle = 0$ for all $i$ implies $v = 0$ (no non-zero vector is orthogonal to all $q_i$)
In finite dimensions, any ONB is automatically complete (by dimension counting). In infinite dimensions, completeness is a non-trivial requirement - this is why Hilbert space theory requires careful treatment.
For ML context: In kernel methods, the RKHS is an infinite-dimensional Hilbert space. A kernel function defines an inner product, and the question of whether the kernel features span the whole space (completeness/universality) determines the expressivity of the corresponding kernel machine.
10c. Detailed Worked Examples
10c.1 Projection in Least-Squares Regression
Setup. We observe data points and want to fit a line :
Build the design matrix:
Gram-Schmidt on the columns of :
Column 1:
Column 2: . Project out :
The projection (fitted values):
This is the projection of onto = the space of all affine functions of . The residual is orthogonal to every affine function.
Verification of the normal equations: The least-squares solution satisfies :
Solving: , so - a good linear fit.
10c.2 Full Worked Example: Gram-Schmidt in
Input vectors:
Step 1:
Step 2: Remove the component of along :
Step 3: Remove components of along and :
Verification: OK OK OK
QR factorization: The matrix is upper triangular with:
10c.3 Spectral Decomposition Example
Let (symmetric).
Eigenvalues: or .
Eigenvectors:
- :
- :
Verify orthogonality: OK
Spectral decomposition:
Rayleigh quotient at :
Indeed .
10c.4 Householder Reflector: Concrete Computation
Problem: Find a Householder reflector such that where .
Step 1: .
Step 2: Since , use sign to avoid cancellation:
Step 3: Normalize:
Step 4: Build the reflector:
Verify:
This gives (a valid Householder result; the sign depends on the convention). as expected. OK
10d. Orthogonality and the Geometry of Neural Networks
10d.1 The Geometry of Weight Matrices
A single linear layer $y = Wx$ transforms the input geometry. Understanding this transformation requires decomposing $W$ via its singular value decomposition (preview): $W = U \Sigma V^\top$.
- $V^\top$: rotate the input space (first orthogonal transformation)
- $\Sigma$: scale along the new axes by singular values $\sigma_1 \ge \sigma_2 \ge \cdots$
- $U$: rotate the output space (second orthogonal transformation)
For orthogonal $W$: All singular values are 1, so $\Sigma = I$. The layer is a pure rotation (or reflection) - it doesn't compress or expand the input distribution. Distances and angles are perfectly preserved.
Implication for gradient flow. The gradient of the loss with respect to the input is:
$$\frac{\partial \mathcal{L}}{\partial x} = W^\top \frac{\partial \mathcal{L}}{\partial y}.$$
For orthogonal $W$: $\left\| W^\top \frac{\partial \mathcal{L}}{\partial y} \right\| = \left\| \frac{\partial \mathcal{L}}{\partial y} \right\|$ - gradients are neither amplified nor attenuated. This is the formal reason orthogonal initialization prevents gradient vanishing/explosion in deep linear networks.
10d.2 Attention Heads as Orthogonal Projections
In a Transformer's multi-head attention with heads:
where are the query/key/value projection matrices.
Mechanistic interpretability perspective (Elhage et al. 2021): Each attention head computes a projection of the residual stream onto a low-dimensional subspace, adds information from that subspace back, and (in some sense) different heads should attend to different "directions" of information.
Mathematical idealization: If we require for (the value matrices project to orthogonal subspaces), then different heads genuinely operate on independent information. In practice, heads aren't exactly orthogonal, but studying the overlap quantifies how much heads interfere with each other.
QK circuit analysis. The effective attention pattern for head involves (or rather its action on the residual stream). This matrix's singular values determine how "sharp" vs "diffuse" the attention pattern is.
10d.3 Layer Normalization and Orthogonal Complements
Layer normalization (Ba et al. 2016) operates on a vector $x \in \mathbb{R}^d$:
$$\mathrm{LN}(x) = \gamma \odot \frac{x - \mu \mathbf{1}}{\sigma} + \beta,$$
where $\mu = \frac{1}{d}\sum_i x_i$ and $\sigma^2 = \frac{1}{d}\sum_i (x_i - \mu)^2$.
Orthogonal decomposition view. The centering step is an orthogonal projection onto the subspace $\mathbf{1}^\perp = \{x : \sum_i x_i = 0\}$:
$$x - \mu \mathbf{1} = \left( I - \frac{\mathbf{1}\mathbf{1}^\top}{d} \right) x.$$
This is a rank-$(d-1)$ orthogonal projector. Layer normalization first projects out the "mean direction" $\mathbf{1}$, then rescales to unit norm on the remaining $(d-1)$-dimensional subspace.
Consequence for representational geometry: After the centering and rescaling (before the affine $\gamma, \beta$ map), the model's internal representations always satisfy $\langle z, \mathbf{1} \rangle = 0$. The model can only represent information in the orthogonal complement of the constant vector - this is why all-zero inputs and constant inputs are indistinguishable after layer norm.
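A short check that centering is indeed a symmetric, idempotent, rank-$(d-1)$ projector (the dimension is illustrative):

```python
import numpy as np

d = 16
rng = np.random.default_rng(8)
x = rng.standard_normal(d)

ones = np.ones(d)
P = np.eye(d) - np.outer(ones, ones) / d        # centering as a projection matrix

centered = P @ x
print(np.allclose(centered, x - x.mean()))          # same as subtracting the mean
print(np.allclose(P @ P, P), np.allclose(P.T, P))   # idempotent and symmetric
print(np.isclose(centered @ ones, 0.0))             # result lies in the 1-perp subspace
print(np.linalg.matrix_rank(P))                     # d - 1
```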
10d.4 Orthogonal Regularization in Weight Matrices
Several training techniques maintain (near-)orthogonality in weight matrices throughout training:
Orthogonal regularization: Add a penalty $\lambda \|W^\top W - I\|_F^2$ to the loss. This penalizes deviation from orthogonality without enforcing it exactly.
Spectral regularization: Penalize $\sum_i (\sigma_i(W) - 1)^2$ - singular values deviating from 1. Stronger than spectral normalization, which only normalizes the largest singular value.
Riemannian optimization on the Stiefel manifold: The set of matrices with orthonormal columns, , is a smooth Riemannian manifold. Gradient descent on uses the Riemannian gradient (projection of Euclidean gradient onto the tangent space) and retracts to the manifold via QR decomposition or Cayley transform after each step.
Applications: Orthogonal RNNs, rotation-equivariant neural networks, invertible neural networks, and normalizing flows all benefit from exact or approximate orthogonality constraints.
11. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Confusing "orthogonal" and "orthonormal" | Orthogonal means $\langle u, v \rangle = 0$; orthonormal additionally requires unit length. An orthogonal matrix has orthonormal columns, not just orthogonal ones. | Check: orthogonal <-> zero inner products; orthonormal <-> zero inner products AND $\|v_i\| = 1$ |
| 2 | Assuming $Q^\top Q = I$ implies $QQ^\top = I$ without checking square | For non-square $Q \in \mathbb{R}^{m \times n}$ ($m > n$), $Q^\top Q = I_n$ but $QQ^\top \ne I_m$ (it's a rank-$n$ projection). Only square orthogonal matrices satisfy both. | Specify dimensions: "thin Q" ($m \times n$) vs "full Q" ($m \times m$) |
| 3 | Thinking Gram-Schmidt preserves span ordering | CGS does: $\mathrm{span}(q_1, \dots, q_j) = \mathrm{span}(a_1, \dots, a_j)$ for every $j$. But with pivoting or reordering, this property is lost. | Track column ordering explicitly when pivoted QR is used |
| 4 | Using CGS when numerical stability matters | Classical Gram-Schmidt squares the error: $O(\varepsilon\,\kappa^2)$ vs MGS's $O(\varepsilon\,\kappa)$. For ill-conditioned $A$, CGS can produce non-orthogonal "orthonormal" vectors. | Use Modified Gram-Schmidt (MGS) or Householder QR for production code |
| 5 | Projecting onto a non-orthogonal basis | The formula $P = QQ^\top$ assumes $Q^\top Q = I$. If the columns are merely linearly independent (not orthonormal), the correct formula is $P = A(A^\top A)^{-1}A^\top$, which involves the inverse. | Always check $Q^\top Q = I$ before using the simplified projection formula |
| 6 | Thinking $P^2 = P$ is sufficient for orthogonal projection | Idempotence only guarantees $P$ is some projection. Orthogonal projections additionally require $P^\top = P$ (symmetry). Oblique projections satisfy idempotence but not symmetry. | An orthogonal projection must be both idempotent ($P^2 = P$) and symmetric ($P^\top = P$) |
| 7 | Claiming eigenvectors are always orthogonal | Eigenvectors of different eigenvalues are orthogonal only for symmetric matrices. For general matrices, eigenvectors can be arbitrary non-orthogonal vectors (or even non-real). | The spectral theorem applies to symmetric/Hermitian matrices; general matrices need the Schur decomposition |
| 8 | Forgetting the sign convention in Householder | When computing $v = x \pm \|x\| e_1$, the sign must be chosen to avoid cancellation when $x \approx \|x\| e_1$. | Choose the sign to match $x_1$: $v = x + \mathrm{sign}(x_1)\,\|x\|\, e_1$ |
| 9 | Misinterpreting Parseval's identity as an approximation | $\|v\|^2 = \sum_i \langle v, q_i \rangle^2$ is an equality when $\{q_i\}$ is a complete orthonormal basis for the whole space. If it's only a basis for a subspace, you get Bessel's inequality: $\sum_i \langle v, q_i \rangle^2 \le \|v\|^2$. | Distinguish: complete ONB -> Parseval's equality; partial ONB -> Bessel's inequality |
| 10 | Assuming QR is always better than Cholesky for normal equations | QR avoids squaring the condition number but costs roughly twice the flops of forming $A^\top A$ plus Cholesky when $m \gg n$. For well-conditioned problems with large $m$, Cholesky on $A^\top A$ may be faster. | Choose based on conditioning: if $\kappa(A)$ is large, use QR; if $\kappa(A)$ is small, Cholesky is acceptable and faster |
| 11 | Using $\det Q = \pm 1$ as the definition of orthogonal | $\det Q = \pm 1$ is a consequence of orthogonality, not the definition. A matrix can have determinant 1 without being orthogonal (e.g., any upper triangular matrix with diagonal product 1). | The definition is $Q^\top Q = I$; the determinant condition follows from it. |
| 12 | Thinking orthogonal initialization prevents gradient problems throughout training | Orthogonal initialization helps at initialization. As training proceeds, weights deviate from orthogonality. Maintaining orthogonality throughout training requires explicit regularization (e.g., spectral normalization, orthogonal regularization loss). | Use orthogonal init as a starting point; for persistent orthogonality through training, use explicit constraints |
12. Exercises
Exercise 1 * - Projection and Decomposition
Given and the subspace :
(a) Apply Gram-Schmidt to the spanning vectors to obtain an ONB for .
(b) Compute the projection using the ONB.
(c) Verify: and .
(d) Compute and verify (Pythagorean theorem).
Exercise 2 * - Gram-Schmidt and QR
Let .
(a) Apply Gram-Schmidt to the columns of to produce and such that .
(b) Verify that and numerically.
(c) Solve the least-squares problem for by solving via back substitution.
(d) Verify the solution against np.linalg.lstsq.
Exercise 3 * - Orthogonal Matrices
(a) Verify that the rotation matrix is orthogonal for .
(b) Show that the composition of two rotations by matrix multiplication.
(c) Construct a Householder reflector such that for . Verify and .
(d) What is ? Explain geometrically.
Exercise 4 ** - Modified Gram-Schmidt
(a) Implement both Classical and Modified Gram-Schmidt.
(b) Test on the Hilbert matrix for . Compute the maximum off-diagonal entry of for each method.
(c) Plot orthogonality error () vs matrix size for both methods. What do you observe?
(d) Explain why MGS is more stable using the error analysis from 5.2.
Exercise 5 ** - Least Squares Stability
Given the Vandermonde system fitting a degree- polynomial through points:
(a) Build the design matrix with for , .
(b) Solve the least-squares problem via: (i) normal equations , (ii) QR factorization then .
(c) Compute the condition number and . What is their ratio?
(d) Perturb by small noise and compare the residuals from both methods. Which is more stable?
Exercise 6 ** - Spectral Theorem
Let .
(a) Find all eigenvalues and eigenvectors of analytically.
(b) Verify that eigenvectors are orthogonal. Normalize them to form an orthogonal matrix .
(c) Verify the spectral decomposition: where .
(d) Compute the Rayleigh quotient for , , . Verify each is in .
(e) Is positive definite? Verify using eigenvalues and by direct computation of .
Exercise 7 *** - Orthogonal Weight Initialization
(a) Generate 100 random matrices with i.i.d. Gaussian entries (standard init). Compute the spectral norm of their product for .
(b) Repeat with each initialized as a random orthogonal matrix (via QR of a Gaussian matrix). Plot vs for both initializations.
(c) Generate synthetic "gradient" vectors and propagate backward through the chains. Compare gradient norms: vs .
(d) Explain the result using the isometry property of orthogonal matrices.
Exercise 8 *** - Rayleigh Quotient Iteration
Rayleigh Quotient Iteration is a method for finding eigenvectors with cubic convergence:
(a) Implement Rayleigh Quotient Iteration for a symmetric matrix.
(b) Test on with random initial . Track over iterations.
(c) Plot convergence on a log scale and estimate the convergence rate (should be cubic: ).
(d) Compare convergence speed vs power iteration and inverse iteration with a fixed shift.
(e) Explain why Rayleigh quotient (not a fixed shift) is needed for cubic convergence.
13. Why This Matters for AI (2026 Perspective)
| Concept | AI Application | Impact |
|---|---|---|
| Orthogonal initialization | torch.nn.init.orthogonal_() | Prevents gradient explosion/vanishing at initialization; standard practice for deep linear networks (Saxe et al.) and RNNs |
| QR decomposition | Muon optimizer; orthogonal gradient updates | Orthogonalizing gradient matrices via QR/Newton-Schulz stabilizes LLM training; avoids rank collapse of weight matrices |
| RoPE embeddings | LLaMA, Mistral, GPT-NeoX, Falcon, Gemma | Rotary position encoding uses orthogonal rotation matrices to encode relative position in self-attention; rotation composition gives relative position property |
| Spectral normalization | GANs, discriminator regularization | Constrains $\sigma_{\max}(W) = 1$ for a 1-Lipschitz discriminator; stabilizes Wasserstein GAN training |
| Gram-Schmidt / QR | Numerically stable least squares | Used in regression, feature selection, basis pursuit; avoiding normal equations prevents condition number squaring |
| Orthogonal complement | Continual learning (OGD, GEM) | New task gradients are projected onto the orthogonal complement of old task gradient subspaces; prevents catastrophic forgetting |
| Spectral theorem | Hessian analysis, SAM optimizer | Eigendecomposition of the loss Hessian reveals sharp/flat directions; Sharpness-Aware Minimization (SAM) seeks minima with small \lambda_\max(\nabla^2\mathcal{L}) |
| Orthonormal bases | Attention head analysis | Queries, keys, values in each attention head span subspaces; orthogonality between heads corresponds to specialization; mechanistic interpretability studies this |
| DFT (unitary matrix) | Audio models, efficient convolutions | Wav2Vec, Whisper process spectrograms (DFT magnitude); convolutional layers in frequency domain use FFT = matrix multiply by unitary |
| Householder reflectors | QR in neural network training | Used in computing QR factorizations during optimizer steps (Shampoo); Householder product form compresses orthogonal matrices for efficient multiplication |
| Rayleigh quotient | Eigenvalue estimation in training | Stochastic Lanczos quadrature (Ghorbani et al. 2019) approximates the Hessian spectrum via Rayleigh quotients on random probes |
| Gram matrix | Gram-matrix style loss, NTK | The Neural Tangent Kernel $\Theta = J J^\top$ (where $J$ is the Jacobian of network outputs w.r.t. parameters) is a Gram matrix; its eigenstructure determines training dynamics |
14. Advanced Topics
14.1 Polar Decomposition
Every matrix $A \in \mathbb{R}^{m \times n}$ ($m \ge n$) has a polar decomposition:
$$A = QS,$$
where $Q \in \mathbb{R}^{m \times n}$ has orthonormal columns ($Q^\top Q = I$) and $S \in \mathbb{R}^{n \times n}$ is symmetric positive semidefinite.
Uniqueness: If $A$ has full column rank, $Q$ and $S$ are unique, with $S = (A^\top A)^{1/2}$ (the matrix square root) and $Q = A S^{-1}$.
Geometric meaning: $S$ captures the "stretching" and $Q$ captures the "rotation" - this is the matrix analog of writing a complex number as $z = e^{i\theta} r$.
Relation to SVD: If $A = U \Sigma V^\top$ (thin SVD), then:
$$Q = U V^\top, \qquad S = V \Sigma V^\top.$$
So $Q = UV^\top$ is the "nearest orthogonal matrix" to $A$ (in the Frobenius norm), and $S = V \Sigma V^\top$.
For AI: The Muon optimizer computes (approximately) $UV^\top$ from the SVD $G = U\Sigma V^\top$ of the gradient matrix, then applies $UV^\top$ as the weight update - this is exactly the orthogonal factor of the polar decomposition of $G$.
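A minimal sketch computing both polar factors from the thin SVD (illustrative; scipy.linalg.polar performs the same computation):

```python
import numpy as np

def polar_via_svd(A):
    """Polar decomposition A = Q S computed from the thin SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Q = U @ Vt                       # orthogonal factor = nearest orthogonal matrix
    S = Vt.T @ np.diag(s) @ Vt       # symmetric positive semidefinite factor
    return Q, S

A = np.random.default_rng(9).standard_normal((6, 4))
Q, S = polar_via_svd(A)
print(np.allclose(Q @ S, A))                     # True
print(np.allclose(Q.T @ Q, np.eye(4)))           # True: orthonormal columns
print(np.allclose(S, S.T), np.all(np.linalg.eigvalsh(S) >= -1e-12))   # SPSD
```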
14.2 Orthogonality in Hilbert Spaces
All finite-dimensional results extend to Hilbert spaces - complete inner product spaces, possibly infinite-dimensional.
Orthogonal Projection Theorem (Hilbert spaces). If is a Hilbert space and is a closed subspace, then every has a unique decomposition with and .
Completeness is essential: Without it, the projection may not exist. (The "closed" hypothesis ensures that sequences of projections converge.)
Examples of Hilbert spaces in ML:
- : the space of square-integrable functions (kernel methods, RKHS)
- : square-summable sequences (infinite-dimensional limits of neural networks)
- Reproducing Kernel Hilbert Spaces (RKHS): the function space for kernel machines, Gaussian processes
14.3 Gram Matrices and the Kernel Trick
Given vectors , the Gram matrix is:
In matrix form: where .
Properties: is symmetric positive semidefinite. Its rank equals .
Kernel trick: Replace with where is a positive definite kernel. The resulting kernel matrix is a Gram matrix in a (possibly infinite-dimensional) RKHS.
For attention: The attention matrix is a kernelized Gram matrix between queries and keys. Its spectral properties determine how information flows between tokens.
Conceptual Bridge
Looking Backward: What Made This Possible
The theory developed in this section builds directly on foundations from earlier chapters:
- Vectors and inner products (01: Vectors and Spaces): The inner product is the primitive notion from which orthogonality, angles, and projections all derive.
- Linear transformations (04: Linear Transformations): Orthogonal matrices are the isometric linear maps - those that preserve both angles and distances. The kernel and image of projection operators are orthogonal complements.
- Matrix rank (02-LA: Matrix Rank): The rank-nullity theorem and the four fundamental subspaces are unified by orthogonality: $\mathrm{null}(A) = \mathrm{row}(A)^\perp$ and $\mathrm{null}(A^\top) = \mathrm{col}(A)^\perp$.
- Eigenvalues (01: Eigenvalues): The spectral theorem tells us that symmetric matrices are diagonalizable with an orthonormal eigenbasis - the connection between orthogonality and spectral theory.
Looking Forward: What This Enables
Orthogonality and orthonormality are not endpoints but foundations:
- Singular Value Decomposition (02: SVD): The SVD is built from two orthogonal matrices. The right singular vectors form an ONB for the row space; the left singular vectors form an ONB for the column space. Without the theory developed here, SVD cannot be understood.
- Matrix Decompositions (08: Matrix Decompositions): LU, QR, Cholesky, Schur - QR is the central algorithm among these, and the QR algorithm for eigenvalues iterates orthogonal similarity transformations.
- Matrix Norms (06: Matrix Norms): The spectral norm is defined via orthogonal invariance. The condition number satisfies $\kappa(Q) = 1$ for orthogonal $Q$.
- Optimization (Chapter 08): Gradient methods, second-order methods, and their variants all navigate spaces equipped with inner products. The geometry of the loss landscape is described using the spectral theory developed here.
CURRICULUM POSITION: ORTHOGONALITY AND ORTHONORMALITY
===========================================================================
FOUNDATIONS THIS SECTION FORWARD
----------- ------------ -------
+-----------------+ +-------------------------+ +------------------+
| Inner Products |----------> | Orthogonality (05) |-->| SVD (02) |
| Vectors/Spaces | | +- Gram-Schmidt | | low-rank approx |
+-----------------+ | +- QR Decomposition | +------------------+
| +- Orthogonal Matrices |
+-----------------+ | +- Projections | +------------------+
| Eigenvalues |----------> | +- Spectral Theorem |-->| Matrix Norms |
| (01) | | +- ML Applications | | (06) |
+-----------------+ +-------------------------+ +------------------+
|
+-----------------+ | +------------------+
| Linear Transf. |---------->----------+ -->| Decompositions |
| (04) | | (08) QR alg. |
+-----------------+ +------------------+
AI CONNECTIONS:
+------------------------------------------------------------------+
| RoPE embeddings -> Gram-Schmidt -> Muon optimizer -> OGD/GEM |
| Orthogonal init -> Spectral norm -> Rayleigh quotient -> SAM |
| QR least squares -> DFT (unitary) -> Kernel Gram matrix -> NTK |
+------------------------------------------------------------------+
===========================================================================
<- Back to Advanced Linear Algebra | Next: Matrix Norms ->
Appendix A: Key Inequalities in Inner Product Spaces
A.1 Cauchy-Schwarz in Different Forms
The Cauchy-Schwarz inequality has many equivalent forms:
Discrete form:
Integral form:
Probability form: For random variables with :
The probability form directly implies: the correlation coefficient satisfies .
Second proof (via AM-GM). For any :
Minimizing over gives and:
Applying to gives the lower bound.
A.2 Triangle Inequality and Its Uses
The triangle inequality for norms follows from Cauchy-Schwarz:
Parallelogram law: For any inner product space:
Proof: Expand both norms and add. The cross terms cancel.
This law characterizes inner product spaces: a norm satisfying the parallelogram law comes from an inner product via the polarization identity:
Practical importance: The and norms do NOT satisfy the parallelogram law, confirming they do not arise from inner products. This is why optimization in (LASSO) has different geometry from (ridge regression) - there is no meaningful notion of "orthogonality" in .
A.3 Weyl's Inequality for Eigenvalue Perturbation
When a symmetric matrix $A$ is perturbed to $A + E$ (with $E$ symmetric), Weyl's inequality bounds the eigenvalue shifts: $|\lambda_k(A + E) - \lambda_k(A)| \le \|E\|_2$ for every $k$.
This means: the eigenvalues of a symmetric matrix are Lipschitz functions of the matrix entries, with Lipschitz constant 1 in the spectral norm.
Consequence for ML: If the Hessian changes slowly along a training trajectory (small ), its eigenvalues also change slowly. This justifies quasi-Newton methods that assume the curvature is approximately constant over several steps.
Appendix B: Numerical Orthogonalization Recipes
B.1 When to Use Each Method
| Method | Complexity | Stability | When to Use |
|---|---|---|---|
| Classical Gram-Schmidt (CGS) | $\approx 2mn^2$ | Poor: $O(\varepsilon\,\kappa^2)$ | Never in production; educational use only |
| Modified Gram-Schmidt (MGS) | $\approx 2mn^2$ | Good: $O(\varepsilon\,\kappa)$ | When an explicit $Q$ is needed and $\kappa$ is moderate |
| Householder QR | $\approx 2mn^2 - \tfrac{2}{3}n^3$ | Excellent: $O(\varepsilon)$ | General QR factorization; recommended default |
| Givens QR | Higher constant for dense matrices | Excellent | Sparse matrices; parallel processing |
| Randomized QR | Problem-dependent | Good | Large matrices; approximate low-rank QR |
B.2 Loss of Orthogonality: Quantifying the Damage
In IEEE double precision ($\varepsilon \approx 2.2 \times 10^{-16}$), the computed $\hat{Q}$ from CGS on an $m \times n$ matrix satisfies:
$$\|\hat{Q}^\top \hat{Q} - I\| = O(\varepsilon\, \kappa(A)^2).$$
For a well-conditioned matrix ($\kappa \approx 10^2$): error $\approx 10^{-12}$ - acceptable.
For an ill-conditioned matrix ($\kappa \approx 10^8$): error $\approx 1$ - completely non-orthogonal!
MGS improves this to $O(\varepsilon\, \kappa(A))$, and Householder to $O(\varepsilon)$ regardless of $\kappa$.
B.3 Reorthogonalization
When MGS produces inadequate orthogonality (common for nearly-dependent input vectors), apply reorthogonalization: after completing Gram-Schmidt once, apply it again to the already-computed . This reduces the error from to , which is typically negligible.
import numpy as np

def mgs_with_reorth(A, num_reorth=1):
    """Modified Gram-Schmidt with optional reorthogonalization."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = np.array(A[:, j], dtype=float)
        for reorth in range(1 + num_reorth):
            for i in range(j):
                r = Q[:, i] @ v          # coefficient against the CURRENT v
                R[i, j] += r             # accumulate across reorthogonalization passes
                v -= r * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
B.4 Block Gram-Schmidt for Parallel Computation
For very large matrices on parallel hardware, block Gram-Schmidt processes columns in blocks of size :
- Orthogonalize within each block using MGS or Householder
- Orthogonalize each new block against all previous blocks using a BLAS-3 matrix operation
The key advantage: step 2 uses matrix-matrix products ( flops, BLAS Level 3) rather than matrix-vector products ( flops, BLAS Level 1/2). On modern hardware with SIMD and caching, this gives 10-100\times speedup for large .
Appendix C: Orthogonality in Machine Learning Libraries
C.1 PyTorch Orthogonal Utilities
import torch
import torch.nn as nn
# Orthogonal initialization
linear = nn.Linear(64, 64)
nn.init.orthogonal_(linear.weight)  # Q from a QR decomposition of a random Gaussian matrix
# Spectral normalization
conv = nn.utils.spectral_norm(nn.Conv2d(64, 64, 3))
# Divides weight by its spectral norm at each forward pass
# Orthogonal regularization
def orthogonal_reg(model, lambda_=1e-4):
loss = 0
for name, param in model.named_parameters():
if 'weight' in name and param.dim() == 2:
W = param
loss += lambda_ * torch.norm(W.T @ W - torch.eye(W.shape[1]))
return loss
C.2 NumPy/SciPy QR and Orthogonal Matrix Tools
import numpy as np
from scipy.linalg import qr, qr_multiply, polar
# Full QR decomposition
Q, R = np.linalg.qr(A) # 'reduced' (default) or 'complete'
Q, R, P = qr(A, pivoting=True) # QR with column pivoting
# Random orthogonal matrix (Haar measure)
M = np.random.randn(n, n)
Q, R = np.linalg.qr(M)
Q *= np.sign(np.diag(R)) # Fix signs for uniform distribution
# Polar decomposition: A = Q @ S
Q, S = polar(A) # scipy.linalg.polar
# Check orthogonality
I_approx = Q.T @ Q
print(f"Max off-diagonal: {np.max(np.abs(I_approx - np.eye(n))):.2e}")
C.3 Efficient Householder Representation
For large $m$, storing the full $m \times m$ matrix $Q$ explicitly requires $O(m^2)$ space. The compact WY representation stores $Q = I - VTV^\top$, where $V \in \mathbb{R}^{m \times k}$ holds the Householder vectors and $T \in \mathbb{R}^{k \times k}$ is triangular, requiring only $O(mk)$ space and allowing matrix-vector products in $O(mk)$ time:
def apply_householder_sequence(v_list, tau_list, x):
"""Apply Q = H_1 H_2 ... H_k to x using the compact representation."""
for v, tau in zip(v_list, tau_list):
x = x - tau * v * (v @ x) # Apply H = I - tau * v * v^T
return x
This is how LAPACK and NumPy store $Q$ internally - only the Householder vectors (and their scalar factors $\tau$) are saved, not the explicit matrix.
Appendix D: The Geometry of Projection - Visual Summary
ORTHOGONAL DECOMPOSITION THEOREM
========================================================================
The Euclidean space \mathbb{R}^n splits into complementary orthogonal subspaces:
For any subspace S \subseteq \mathbb{R}^n:
\mathbb{R}^n = S \oplus S^\perp
+------------------------------------------+
| \mathbb{R}^n |
| +----------------+ |
| | S | |
| | v_S = Pv | |
| | - | |
| | / | |
| | / | |
| v- |/ <- v_S | |
| \ +----------------+ |
| \ | orthogonal |
| \ | complement |
| \ | S\perp |
| ------- + ------------------------- |
| v - v_S | (v - v_S \perp every s \in S) |
+------------------------------------------+
Properties of projection matrix P = QQ^T (Q has ONB for S):
Symmetric: P^T = P (orthogonal, not oblique)
Idempotent: P^2 = P (projecting twice = projecting once)
Range: col(P) = S (maps into S)
Null space: null(P) = S^\perp (annihilates complement)
========================================================================
GRAM-SCHMIDT PROCESS - STEP BY STEP
========================================================================
Input: linearly independent vectors a_1, a_2, a_3 (not orthogonal)
Output: orthonormal basis q_1, q_2, q_3 for same subspace
STEP 1: q_1 = a_1 / ||a_1||
------------------------------------
a_1 --------------------------> q_1 (unit vector)
STEP 2: u_2 = a_2 - <a_2,q_1>q_1 (subtract projection onto q_1)
q_2 = u_2 / ||u_2||
------------------------------------
a_2 ---> remove q_1 component ---> u_2 ---> q_2
a_2
/
/ up <a_2,q_1>q_1 (this part removed)
/
q_1 ------------ q_1 direction
u_2 (perpendicular remainder) --> q_2
STEP 3: u_3 = a_3 - <a_3,q_1>q_1 - <a_3,q_2>q_2
q_3 = u_3 / ||u_3||
------------------------------------
Remove projections onto BOTH q_1 and q_2
RESULT: {q_1, q_2, q_3} is an ONB with same span as {a_1, a_2, a_3}
QR FACTORIZATION: A = [a_1|a_2|a_3] = [q_1|q_2|q_3] * R
where R is upper triangular with positive diagonal.
========================================================================
SPECTRAL THEOREM: SYMMETRIC <-> ORTHOGONAL EIGENBASIS
========================================================================
A = A^T (symmetric)
<->
A has orthonormal eigenbasis {q_1, ..., q_n}
<->
A = Q \Lambda Q^T where Q = [q_1|...|q_n] \in O(n), \Lambda = diag(\lambda_1,...,\lambda_n)
<->
Quadratic form: x^T A x = \sum_i \lambda_i (x^T q_i)^2 = weighted sum of squared projections
INTERPRETATION:
+-----------------------------------------------------------------+
| Q^T: Express x in eigenbasis -> \Lambda: Scale each component -> |
| Q: Return to standard basis |
| |
| \lambda_i > 0: stretch in q_i direction (positive definite: all \lambda_i > 0) |
| \lambda_i = 0: collapse in q_i direction (singular matrix)                     |
| \lambda_i < 0: reflection in q_i direction (indefinite: mixed signs)           |
+-----------------------------------------------------------------+
AI CONNECTION: Hessian \nabla^2\mathcal{L} is symmetric. Its eigenvectors are the
"principal curvature directions" of the loss surface. Eigenvalues
tell us how curved the loss is in each direction.
========================================================================
ORTHOGONAL MATRICES: STRUCTURE AND CLASSIFICATION
========================================================================
Q^TQ = I (definition) => det(Q) = \pm1
det(Q) = +1: ROTATIONS (SO(n)) det(Q) = -1: REFLECTIONS
---------------------------- --------------------------
Preserve orientation Reverse orientation
Examples in 2D: R_\theta for all \theta Example: diag(1,...,1,-1)
Compose to give rotations Compose to give rotations
O(n) = SO(n) \cup {reflections} (two connected components)
EIGENVALUES: always on unit circle \in \mathbb{C}
+-----------------------------------------------------------------+
| Real eigenvalues: \pm1 only |
| Complex eigenvalues: e^{i\theta}, e^{-i\theta} pairs -> 2D rotations |
| |
| SO(2n) block structure: |
| +----------+-------+ |
| | R_{\theta_1} | | each block is a 2\times2 rotation |
| +----------+ R_{\theta_2}| with its own angle \theta_i |
| | +-------+ |
| | | ... | |
| +----------+-------+ |
+-----------------------------------------------------------------+
========================================================================
Appendix E: Summary of Key Formulas
Inner Products and Orthogonality
| Formula | Expression | Notes |
|---|---|---|
| Inner product | $\langle\mathbf{u},\mathbf{v}\rangle = \sum_{i=1}^n u_i v_i$ | Euclidean; generalizes to weighted inner products $\mathbf{u}^T M\mathbf{v}$ ($M$ SPD) |
| Norm | $\lVert\mathbf{v}\rVert = \sqrt{\langle\mathbf{v},\mathbf{v}\rangle}$ | Induced by inner product |
| Angle | $\cos\theta = \frac{\langle\mathbf{u},\mathbf{v}\rangle}{\lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert}$ | Requires both nonzero |
| Cauchy-Schwarz | $\lvert\langle\mathbf{u},\mathbf{v}\rangle\rvert \le \lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert$ | Equality iff $\mathbf{u},\mathbf{v}$ linearly dependent |
| Pythagorean theorem | $\mathbf{u}\perp\mathbf{v} \implies \lVert\mathbf{u}+\mathbf{v}\rVert^2 = \lVert\mathbf{u}\rVert^2 + \lVert\mathbf{v}\rVert^2$ | Converse also holds |
| Parseval's identity | $\lVert\mathbf{v}\rVert^2 = \sum_{i=1}^n \lvert\langle\mathbf{v},\mathbf{q}_i\rangle\rvert^2$ | $\{\mathbf{q}_i\}$ an ONB |
| Bessel's inequality | $\sum_{i=1}^k \lvert\langle\mathbf{v},\mathbf{q}_i\rangle\rvert^2 \le \lVert\mathbf{v}\rVert^2$ | $\{\mathbf{q}_i\}$ any orthonormal set |
Projections
| Formula | Expression | Notes |
|---|---|---|
| Proj onto unit vector | $\operatorname{proj}_{\mathbf{q}}(\mathbf{v}) = \langle\mathbf{v},\mathbf{q}\rangle\,\mathbf{q}$ | 1D case |
| Proj onto subspace (ONB $\mathbf{q}_1,\dots,\mathbf{q}_k$) | $P\mathbf{v} = \sum_{i=1}^k \langle\mathbf{v},\mathbf{q}_i\rangle\,\mathbf{q}_i = QQ^T\mathbf{v}$ | Requires orthonormal $\mathbf{q}_i$ |
| Proj onto $\operatorname{col}(A)$ (general) | $P = A(A^TA)^{-1}A^T$ | General basis; $A$ full column rank |
| Projection matrix properties | $P^2 = P$, $P^T = P$ | Idempotent + symmetric |
| Complementary projector | $I - P$ projects onto $S^\perp$ | Also a projector |
| Best approx property | $\lVert\mathbf{v} - P\mathbf{v}\rVert \le \lVert\mathbf{v} - \mathbf{s}\rVert$ for all $\mathbf{s} \in S$ | Fundamental theorem |
Gram-Schmidt and QR
| Formula | Expression | Notes |
|---|---|---|
| GS step | $\mathbf{u}_k = \mathbf{a}_k - \sum_{i<k}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\mathbf{q}_i$, $\ \mathbf{q}_k = \mathbf{u}_k/\lVert\mathbf{u}_k\rVert$ | Remove prior directions |
| QR factorization | $A = QR$ | $Q$ orthonormal cols, $R$ upper triangular |
| Least squares via QR | $R\hat{\mathbf{x}} = Q^T\mathbf{b}$ | Solve upper triangular system |
| Householder | $H = I - 2\mathbf{v}\mathbf{v}^T/\lVert\mathbf{v}\rVert^2$ | Reflection across the hyperplane $\mathbf{v}^\perp$ |
| Cayley transform | $Q = (I - S)(I + S)^{-1}$ | Orthogonal $Q$ from skew-symmetric $S$ |
Spectral Theorem
| Formula | Expression | Notes |
|---|---|---|
| Spectral decomposition | $A = Q\Lambda Q^T = \sum_i \lambda_i\mathbf{q}_i\mathbf{q}_i^T$ | Symmetric $A$ only |
| Rayleigh quotient | $R(\mathbf{x}) = \frac{\mathbf{x}^TA\mathbf{x}}{\mathbf{x}^T\mathbf{x}}$ | Bounded by $[\lambda_{\min}, \lambda_{\max}]$ |
| Positive definiteness | $A \succ 0 \iff \lambda_i > 0$ for all $i$ | Via spectral theorem |
| Polar decomposition | $A = QS$, $Q$ orthogonal, $S = (A^TA)^{1/2}$ PSD | Extends to non-square $A$ |
Appendix F: Connections to Other Chapters
F.1 From Eigenvalues (01) to Orthogonality
The spectral theorem shows that symmetric matrices are exactly the matrices that can be diagonalized by an orthogonal change of basis. Non-symmetric matrices may still be diagonalizable, but only via a non-orthogonal change of basis - which makes the computation numerically less stable.
The QR algorithm for computing eigenvalues (01) is built entirely on orthogonal matrix iterations:
```
A_0 = A
for k = 0, 1, 2, ...:
    A_k = Q_k R_k        (QR factorization of current iterate)
    A_{k+1} = R_k Q_k    (swap factors)
```
Each iterate is orthogonally similar to the previous: $A_{k+1} = R_kQ_k = Q_k^TA_kQ_k$ (since $R_k = Q_k^TA_k$ and $Q_k^TQ_k = I$). The iterates converge to the Schur form - upper triangular, with eigenvalues on the diagonal. The entire algorithm is numerically stable precisely because it uses only orthogonal transformations.
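As an illustration, here is a bare-bones, unshifted version of the iteration in NumPy. Production eigensolvers first reduce to Hessenberg form and add shifts, so treat this only as a sketch of the orthogonal-similarity idea; the function name is illustrative.

```python
import numpy as np

def qr_eigenvalues(A, n_iter=200):
    """Unshifted QR iteration: repeatedly factor and swap, staying orthogonally similar to A."""
    Ak = A.astype(float).copy()
    for _ in range(n_iter):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                      # R_k Q_k = Q_k^T A_k Q_k
    return np.sort(np.diag(Ak))

A = np.array([[3., 1., 0.], [1., 2., 1.], [0., 1., 1.]])
print(qr_eigenvalues(A))
print(np.sort(np.linalg.eigvalsh(A)))   # matches for this symmetric example
```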
F.2 From SVD (02) to Orthogonality
The SVD $A = U\Sigma V^T$ can be viewed as the "orthogonality structure" of $A$:
- $V$: ONB for the row space (input side), mapping inputs to "pure directions"
- $U$: ONB for the column space (output side), the natural output directions
- $\Sigma$: the non-orthogonal part (pure scaling by the singular values)
The polar decomposition $A = (UV^T)(V\Sigma V^T) = QS$ isolates the orthogonal factor $Q = UV^T$ from the symmetric positive semidefinite factor $S = V\Sigma V^T$. This factorization requires all the tools developed in this section.
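A quick numerical check of this identity, assuming SciPy is available (`scipy.linalg.polar`); the matrix here is arbitrary example data:

```python
import numpy as np
from scipy.linalg import polar

A = np.random.randn(5, 5)                      # nonsingular with probability 1
U, s, Vt = np.linalg.svd(A)
Q_svd, S_svd = U @ Vt, Vt.T @ np.diag(s) @ Vt  # orthogonal and symmetric PSD factors from the SVD
Q, S = polar(A)                                # scipy.linalg.polar: A = Q @ S
print(np.allclose(Q, Q_svd), np.allclose(S, S_svd))  # True True
```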
F.3 Orthogonality Enables Low-Rank Approximation (02, 03)
Eckart-Young-Mirsky theorem: The best rank-$k$ approximation to $A$ in both the spectral and Frobenius norms is the truncated SVD:

$$A_k = \sum_{i=1}^{k}\sigma_i\,\mathbf{u}_i\mathbf{v}_i^T$$
This result requires:
- The SVD (which produces the orthogonal factors $U$ and $V$)
- The best approximation theorem (projection gives the nearest element of a subspace)
- Parseval's identity (to express the error: $\|A - A_k\|_F^2 = \sum_{i>k}\sigma_i^2$)
All three are developed in this section. The Eckart-Young theorem is the mathematical foundation for PCA, LoRA, and any other low-rank approximation method.
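A small sketch verifying the Frobenius-norm case numerically - the truncated SVD's error equals the root-sum-of-squares of the discarded singular values (the matrix and rank here are arbitrary example values):

```python
import numpy as np

A = np.random.randn(30, 20)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # truncated SVD = best rank-k approximation
err = np.linalg.norm(A - A_k, 'fro')
print(np.isclose(err, np.sqrt(np.sum(s[k:]**2))))  # Frobenius error = sqrt(sum of discarded sigma_i^2)
```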
F.4 Orthogonality and Matrix Norms (06)
Unitarily invariant norms: A matrix norm is unitarily invariant if $\|UAV\| = \|A\|$ for all unitary (orthogonal, in the real case) $U$ and $V$. The three most important matrix norms are all unitarily invariant:
- Spectral norm: $\|A\|_2 = \sigma_1$ (largest singular value)
- Frobenius norm: $\|A\|_F = \sqrt{\sum_i \sigma_i^2}$ (square root of the sum of squared singular values)
- Nuclear norm: $\|A\|_* = \sum_i \sigma_i$ (sum of singular values)
These norms are "orthogonality-aware" - they don't change when the matrix is multiplied by orthogonal matrices. This property is what allows the Eckart-Young theorem to work: the error in low-rank approximation depends only on the discarded singular values, not on the choice of orthogonal bases.
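A quick check that multiplying by random orthogonal matrices leaves the singular values, and hence all three norms, unchanged (sizes are arbitrary example values):

```python
import numpy as np

A = np.random.randn(6, 4)
U, _ = np.linalg.qr(np.random.randn(6, 6))     # random orthogonal U
V, _ = np.linalg.qr(np.random.randn(4, 4))     # random orthogonal V
B = U @ A @ V.T
s_A = np.linalg.svd(A, compute_uv=False)
s_B = np.linalg.svd(B, compute_uv=False)
print(np.allclose(s_A, s_B))                                           # singular values unchanged
print(np.isclose(np.linalg.norm(A, 2), np.linalg.norm(B, 2)))          # spectral norm
print(np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(B, 'fro')))  # Frobenius norm
print(np.isclose(s_A.sum(), s_B.sum()))                                # nuclear norm
```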
F.5 Orthogonality in Optimization (Chapter 08)
Modern optimization methods for training large neural networks increasingly exploit orthogonal structure:
Sharpness-Aware Minimization (SAM): The sharpness of a minimum is governed by $\lambda_{\max}(\nabla^2\mathcal{L})$, the largest eigenvalue of the Hessian. SAM perturbs the parameters along the normalized gradient direction to approximate the worst-case nearby loss; the sharpness $\lambda_{\max}$ itself is typically estimated by power iteration using Hessian-vector products. Power iteration produces a sequence of normalized vectors converging to the top eigenvector; each step is one Hessian-vector product followed by re-normalization (orthogonalization against the zero-dimensional subspace = normalization), as sketched below.
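A minimal sketch of that power-iteration step, written against a hypothetical Hessian-vector-product callable `hvp` (in practice this would be Pearlmutter-style autodiff rather than an explicit matrix):

```python
import numpy as np

def top_hessian_eigenvalue(hvp, dim, n_iter=50):
    """Power iteration for the top Hessian eigenvalue using only Hessian-vector products."""
    v = np.random.randn(dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = hvp(v)                  # one Hessian-vector product
        v = w / np.linalg.norm(w)   # re-normalize (the "orthogonalization" step)
    return v @ hvp(v)               # Rayleigh quotient at the converged vector

# Toy check with an explicit symmetric "Hessian"
H = np.diag([5.0, 2.0, 1.0])
print(top_hessian_eigenvalue(lambda v: H @ v, dim=3))  # ~5.0
```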
SOAP optimizer (2024): Maintains a factored approximation of the Adam preconditioner in which the "rotation" and "scaling" factors are tracked separately. The rotation factor is updated via QR decomposition to maintain orthogonality, while the scaling factor tracks the second moments in the rotating basis.
K-FAC and its variants: Kronecker-Factored Approximate Curvature approximates each layer's block of the Fisher information matrix as a Kronecker product $F \approx A \otimes G$ of an input-activation covariance $A$ and an output-gradient covariance $G$. Computing the update requires inverting these factors, which can be done via their eigendecompositions - and since $A$ and $G$ are symmetric, their eigenvector matrices are orthogonal. A small sketch of this computation follows below.
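A small sketch under simplifying assumptions (one layer, explicit SPD factors `A` and `G` built from example data): the Kronecker-factored inverse applied to a gradient reduces to two small eigendecompositions and an element-wise division in the eigenbases.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A @ A.T + np.eye(4)   # input-side factor (SPD)
G = rng.standard_normal((3, 3)); G = G @ G.T + np.eye(3)   # output-side factor (SPD)
V = rng.standard_normal((3, 4))                            # layer gradient, reshaped as a matrix

dA, QA = np.linalg.eigh(A)   # orthogonal eigenvectors QA, eigenvalues dA
dG, QG = np.linalg.eigh(G)

# (A kron G)^{-1} vec(V) = vec(G^{-1} V A^{-1}); in the eigenbases this is an
# element-wise division by the outer product of eigenvalues:
update = QG @ ((QG.T @ V @ QA) / np.outer(dG, dA)) @ QA.T
print(np.allclose(update, np.linalg.inv(G) @ V @ np.linalg.inv(A)))  # True
```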
Appendix G: Historical Notes and References
Timeline: Development of Orthogonality Theory
| Year | Mathematician | Contribution |
|---|---|---|
| 1799 | Parseval | Parseval's theorem for Fourier series (published 1806) |
| 1801 | Gauss | Least squares method; orthogonal projection as normal equations |
| 1844 | Grassmann | Abstract vector spaces and subspaces; inner product generalization |
| 1850s | Cauchy, Schwarz | Cauchy-Schwarz inequality (Cauchy for sums 1821, Schwarz integral form 1888) |
| 1897 | Gram | Gram-Schmidt process (Gram's formulation for functions) |
| 1902 | Schmidt | Finite-dimensional Gram-Schmidt formulation |
| 1908 | Hilbert | Abstract Hilbert spaces; orthogonal projection theorem |
| 1915-1927 | Weyl, von Neumann | Spectral theorem in infinite dimensions |
| 1954 | Householder | Householder reflectors for numerically stable QR |
| 1965 | Francis | QR algorithm for eigenvalues (with implicit shifts) |
| 1967 | Bjorck | Numerical analysis of Gram-Schmidt stability |
| 2013 | Saxe, McClelland, Ganguli | Orthogonal initialization for deep linear networks |
| 2014 | Mishkin, Matas | LSUV initialization using SVD/QR |
| 2018 | Miyato et al. | Spectral normalization for GANs |
| 2021 | Su et al. | RoPE: Rotary Position Embedding using orthogonal rotations |
| 2024 | Jordan et al. | Muon optimizer: gradient orthogonalization via Newton-Schulz |
Key References
- Golub & Van Loan, Matrix Computations (4th ed., 2013) - Definitive reference on numerical linear algebra including Gram-Schmidt, Householder, and QR algorithms
- Axler, Linear Algebra Done Right (3rd ed., 2015) - Elegant proof-based treatment of inner product spaces and spectral theorem
- Trefethen & Bau, Numerical Linear Algebra (1997) - Excellent coverage of QR, projections, and stability from computational perspective
- Saxe et al. (2013), "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks" - Theoretical foundation for orthogonal initialization
- Miyato et al. (2018), "Spectral Normalization for GANs" - Seminal paper on spectral normalization in deep learning
- Su et al. (2021), "RoFormer: Enhanced Transformer with Rotary Position Embedding" - RoPE using orthogonal rotation matrices
- Jordan et al. (2024), "Muon: An optimizer for hidden layers in neural networks" - Newton-Schulz orthogonalization for gradients
Appendix H: Proofs of Selected Theorems
H.1 Complete Proof: Cauchy-Schwarz Inequality
Theorem. For any inner product space $V$ and vectors $\mathbf{u},\mathbf{v} \in V$:

$$\lvert\langle\mathbf{u},\mathbf{v}\rangle\rvert \le \|\mathbf{u}\|\,\|\mathbf{v}\|$$

Proof. If $\mathbf{v} = \mathbf{0}$, both sides are 0 and the inequality holds trivially. Assume $\mathbf{v} \neq \mathbf{0}$.

For any scalar $t$:

$$0 \le \|\mathbf{u} + t\mathbf{v}\|^2 = \langle\mathbf{u} + t\mathbf{v},\,\mathbf{u} + t\mathbf{v}\rangle = \|\mathbf{u}\|^2 + 2t\langle\mathbf{u},\mathbf{v}\rangle + t^2\|\mathbf{v}\|^2$$

This is a quadratic in $t$ that is always $\ge 0$. Its discriminant must therefore be $\le 0$:

$$4\langle\mathbf{u},\mathbf{v}\rangle^2 - 4\|\mathbf{u}\|^2\|\mathbf{v}\|^2 \le 0 \quad\Longrightarrow\quad \lvert\langle\mathbf{u},\mathbf{v}\rangle\rvert \le \|\mathbf{u}\|\,\|\mathbf{v}\|$$

Equality condition. Equality holds iff the discriminant is zero, iff the quadratic in $t$ has a (double) real root $t_0$, iff there exists $t_0$ with $\mathbf{u} + t_0\mathbf{v} = \mathbf{0}$, iff $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent (one is a scalar multiple of the other).

Remark. This proof works for any abstract inner product space, finite or infinite dimensional. The key property used is that the inner product is positive definite: $\|\mathbf{w}\|^2 \ge 0$ with equality only for $\mathbf{w} = \mathbf{0}$.
H.2 Complete Proof: Best Approximation Uniqueness
Theorem. Let $S$ be a closed subspace of a Hilbert space $H$. For any $\mathbf{v} \in H$, the projection $\mathbf{p} \in S$ satisfying $(\mathbf{v} - \mathbf{p}) \perp S$ is unique.

Proof of uniqueness. Suppose $\mathbf{p}_1, \mathbf{p}_2 \in S$ both satisfy $(\mathbf{v} - \mathbf{p}_i) \perp S$. Then $\mathbf{p}_1 - \mathbf{p}_2 \in S$ (since $S$ is a subspace). But also $\mathbf{p}_1 - \mathbf{p}_2 = (\mathbf{v} - \mathbf{p}_2) - (\mathbf{v} - \mathbf{p}_1) \perp S$ (as a difference of vectors each perpendicular to $S$). So $\langle\mathbf{p}_1 - \mathbf{p}_2,\,\mathbf{p}_1 - \mathbf{p}_2\rangle = 0$, giving $\mathbf{p}_1 = \mathbf{p}_2$. $\blacksquare$
H.3 Complete Proof: Gram-Schmidt Produces ONB
Theorem. If $\mathbf{a}_1, \dots, \mathbf{a}_n$ are linearly independent vectors in an inner product space $V$, the Gram-Schmidt process produces an orthonormal set $\{\mathbf{q}_1, \dots, \mathbf{q}_n\}$ with $\operatorname{span}\{\mathbf{q}_1, \dots, \mathbf{q}_k\} = \operatorname{span}\{\mathbf{a}_1, \dots, \mathbf{a}_k\}$ for each $k$.

Proof by induction on $k$.

Base case ($k = 1$): $\mathbf{q}_1 = \mathbf{a}_1/\|\mathbf{a}_1\|$ is a unit vector (since $\mathbf{a}_1 \neq \mathbf{0}$ by independence). Clearly $\operatorname{span}\{\mathbf{q}_1\} = \operatorname{span}\{\mathbf{a}_1\}$. OK

Inductive step: Assume the claim holds for $k - 1$. Define $\mathbf{u}_k = \mathbf{a}_k - \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\mathbf{q}_i$ and $\mathbf{q}_k = \mathbf{u}_k/\|\mathbf{u}_k\|$.

(i) $\mathbf{u}_k \neq \mathbf{0}$: Suppose for contradiction $\mathbf{u}_k = \mathbf{0}$. Then $\mathbf{a}_k = \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\mathbf{q}_i \in \operatorname{span}\{\mathbf{q}_1,\dots,\mathbf{q}_{k-1}\} = \operatorname{span}\{\mathbf{a}_1,\dots,\mathbf{a}_{k-1}\}$, contradicting linear independence.

(ii) $\mathbf{q}_k \perp \mathbf{q}_j$ for $j < k$: For any $j < k$:

$$\langle\mathbf{u}_k,\mathbf{q}_j\rangle = \langle\mathbf{a}_k,\mathbf{q}_j\rangle - \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\langle\mathbf{q}_i,\mathbf{q}_j\rangle = \langle\mathbf{a}_k,\mathbf{q}_j\rangle - \langle\mathbf{a}_k,\mathbf{q}_j\rangle = 0$$

(using the inductive hypothesis $\langle\mathbf{q}_i,\mathbf{q}_j\rangle = \delta_{ij}$). Since $\mathbf{q}_k$ is a positive multiple of $\mathbf{u}_k$, we also have $\langle\mathbf{q}_k,\mathbf{q}_j\rangle = 0$. OK

(iii) Same span: $\mathbf{q}_k \in \operatorname{span}\{\mathbf{a}_1,\dots,\mathbf{a}_k\}$ by construction. Conversely, $\mathbf{a}_k = \|\mathbf{u}_k\|\,\mathbf{q}_k + \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\mathbf{q}_i \in \operatorname{span}\{\mathbf{q}_1,\dots,\mathbf{q}_k\}$. OK

By induction, the claim holds for all $k \le n$. $\blacksquare$
Appendix I: The Stiefel Manifold and Optimization on Orthogonal Groups
I.1 The Stiefel Manifold
The set of $n \times k$ matrices with orthonormal columns is the Stiefel manifold:

$$V_k(\mathbb{R}^n) = \{\, Q \in \mathbb{R}^{n \times k} : Q^TQ = I_k \,\}$$

Special cases:
- $V_n(\mathbb{R}^n) = O(n)$ - the full orthogonal group
- $V_1(\mathbb{R}^n) = S^{n-1}$ - the unit sphere in $\mathbb{R}^n$
- $V_k(\mathbb{R}^n)$ for $1 < k < n$ - Stiefel manifolds proper

Dimension: $\dim V_k(\mathbb{R}^n) = nk - \tfrac{k(k+1)}{2}$

Intuition: We have $nk$ free parameters (entries of an $n \times k$ matrix), subject to $\tfrac{k(k+1)}{2}$ constraints ($k$ normalization constraints + $\tfrac{k(k-1)}{2}$ orthogonality constraints between distinct columns).

Tangent space at $Q$: The tangent space consists of matrices $\Delta \in \mathbb{R}^{n \times k}$ satisfying $Q^T\Delta + \Delta^TQ = 0$, i.e., $Q^T\Delta$ is skew-symmetric:

$$T_QV_k(\mathbb{R}^n) = \{\, \Delta \in \mathbb{R}^{n \times k} : Q^T\Delta + \Delta^TQ = 0 \,\}$$
I.2 Riemannian Gradient Descent on the Stiefel Manifold
To minimize $f(Q)$ subject to $Q^TQ = I$:

Step 1: Euclidean gradient. Compute $G = \nabla f(Q)$ in the ambient space $\mathbb{R}^{n \times k}$.

Step 2: Project to tangent space. The Riemannian gradient is the projection of $G$ onto $T_QV_k(\mathbb{R}^n)$:

$$\operatorname{grad} f(Q) = G - Q\,\operatorname{sym}(Q^TG),$$

where $\operatorname{sym}(M) = \tfrac{1}{2}(M + M^T)$ is the symmetric part.

Step 3: Update and retract. Move in the direction of the (negative) Riemannian gradient, then retract back to the manifold:

Retraction via QR: $Q_{\text{new}} = \operatorname{qf}\!\big(Q - \eta\,\operatorname{grad} f(Q)\big)$ (take the $Q$ factor from the QR decomposition)

Retraction via Cayley: $Q_{\text{new}} = \big(I - \tfrac{\eta}{2}W\big)^{-1}\big(I + \tfrac{\eta}{2}W\big)\,Q$ for a suitable skew-symmetric $W$ built from $Q$ and $G$

A minimal sketch with the QR retraction follows below.
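The sketch below puts steps 1-3 together with the QR retraction; `f_grad` is a hypothetical callable returning the Euclidean gradient, and the example objective $f(Q) = -\operatorname{tr}(Q^TAQ)$ drives $Q$ toward a top-$k$ invariant subspace of a symmetric test matrix:

```python
import numpy as np

def stiefel_gradient_descent(f_grad, Q0, lr=0.05, n_iter=500):
    """Riemannian gradient descent on the Stiefel manifold with a QR retraction."""
    Q = Q0.copy()
    for _ in range(n_iter):
        G = f_grad(Q)                                 # Step 1: Euclidean gradient
        rgrad = G - Q @ (0.5 * (Q.T @ G + G.T @ Q))   # Step 2: project onto the tangent space
        Y = Q - lr * rgrad                            # Step 3a: step in the ambient space
        Q, R = np.linalg.qr(Y)                        # Step 3b: retract via the QR factor
        Q = Q * np.sign(np.diag(R))                   # fix column signs for a continuous retraction
    return Q

n, k = 10, 3
B = np.random.randn(n, n); A = B + B.T                # symmetric test matrix
Q0, _ = np.linalg.qr(np.random.randn(n, k))
Q = stiefel_gradient_descent(lambda Q: -2 * A @ Q, Q0)  # gradient of -trace(Q^T A Q)
print(np.allclose(Q.T @ Q, np.eye(k), atol=1e-10))      # the iterate stays on the manifold
```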
I.3 Applications in Modern AI
Orthogonal RNNs (Henaff et al. 2016, Arjovsky et al. 2016). Vanilla RNNs suffer from vanishing/exploding gradients because powers of the recurrence matrix $W$ can have singular values far above or below 1. Unitary/orthogonal RNNs constrain $W$ to be unitary or orthogonal, so every singular value equals 1, ensuring stable gradient flow. Training optimizes over $O(n)$ (or $U(n)$) using Stiefel manifold optimization or a parameterization via the Cayley transform.

Rotation-Equivariant Networks. Networks for 3D point cloud processing (SE(3)-equivariant networks, e3nn) explicitly parameterize their weight tensors using representations of $SO(3)$. The convolution filters transform predictably under 3D rotations of the input, making the network geometry-aware.

LoRA with Orthogonal Factors. Standard LoRA approximates a weight update as $\Delta W = BA$ for low-rank matrices $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$. GLoRA, DoRA, and OLoRA variants constrain $B$ or $A$ to have orthonormal columns or rows (lying on a Stiefel manifold), improving gradient flow during fine-tuning and reducing the rank-collapse problem.
I.4 The Polar Decomposition as Nearest Orthogonal Matrix
Problem: Given $A \in \mathbb{R}^{n \times n}$ (not necessarily orthogonal), find the nearest orthogonal matrix:

$$Q^\star = \arg\min_{W \in O(n)} \|A - W\|_F$$

Solution: The polar decomposition gives $Q^\star = UV^T$, where $A = U\Sigma V^T$ (SVD).

Proof: For any $W \in O(n)$:

$$\|A - W\|_F^2 = \|U\Sigma V^T - W\|_F^2 = \|\Sigma - U^TWV\|_F^2$$

(using unitary invariance of the Frobenius norm). Let $Z = U^TWV \in O(n)$. Then:

$$\|\Sigma - Z\|_F^2 = \|\Sigma\|_F^2 - 2\operatorname{tr}(\Sigma Z) + \|Z\|_F^2 = \|\Sigma\|_F^2 - 2\operatorname{tr}(\Sigma Z) + n$$

This is minimized when $\operatorname{tr}(\Sigma Z)$ is maximized. Since $\Sigma$ is diagonal with nonnegative entries and $Z$ is orthogonal (so $\lvert Z_{ii}\rvert \le 1$ for every $i$), we get $\operatorname{tr}(\Sigma Z) = \sum_i \sigma_i Z_{ii} \le \sum_i \sigma_i$, with equality at $Z = I$ (i.e., $W = UV^T$). $\blacksquare$
Muon optimizer interpretation: The Muon optimizer applies the update $W \leftarrow W - \eta\,O$, where $O \approx UV^T$ is the nearest orthogonal matrix to the gradient (momentum) matrix $G$. This can be computed efficiently on GPU via Newton-Schulz iterations, e.g. the cubic iteration

$$X_0 = \frac{G}{\|G\|_F}, \qquad X_{k+1} = \tfrac{1}{2}X_k\big(3I - X_k^TX_k\big),$$

which converges quadratically to the orthogonal factor of the polar decomposition of $G$ (in practice Muon uses a tuned higher-degree polynomial variant of the same idea).
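A sketch of the cubic iteration above; this is illustrative only (the production Muon kernel uses a tuned quintic polynomial and runs in low precision), and the function name and step count are assumptions:

```python
import numpy as np

def newton_schulz_orthogonalize(G, n_steps=30):
    """Cubic Newton-Schulz iteration converging to the orthogonal polar factor of G."""
    X = G / np.linalg.norm(G)                     # scale so all singular values lie in (0, 1]
    for _ in range(n_steps):
        X = 0.5 * X @ (3.0 * np.eye(G.shape[1]) - X.T @ X)
    return X

G = np.random.randn(8, 8)
U, _, Vt = np.linalg.svd(G)
X = newton_schulz_orthogonalize(G)
print(np.max(np.abs(X - U @ Vt)))                 # small: X matches the polar factor U V^T
```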
Appendix J: Quick Reference - Orthogonality Vocabulary
| Term | Definition | Key Formula |
|---|---|---|
| Orthogonal vectors | Inner product is zero | $\langle\mathbf{u},\mathbf{v}\rangle = 0$ |
| Orthonormal vectors | Orthogonal + unit length | $\langle\mathbf{q}_i,\mathbf{q}_j\rangle = \delta_{ij}$ |
| Orthonormal basis (ONB) | ONS that spans the space | $\mathbf{v} = \sum_i \langle\mathbf{v},\mathbf{q}_i\rangle\mathbf{q}_i$ |
| Orthogonal matrix | Square, orthonormal columns | $Q^TQ = QQ^T = I$ |
| Orthogonal group $O(n)$ | All $n \times n$ orthogonal matrices | Closed under products, inverses, transposes |
| Special orthogonal group $SO(n)$ | Rotations (det = +1) | $SO(n) = \{Q \in O(n) : \det Q = +1\}$ |
| Orthogonal projection | Maps $\mathbf{v}$ to the closest point in a subspace $S$ | $P = QQ^T$, $P^2 = P$, $P^T = P$ |
| Orthogonal complement | Vectors orthogonal to all of $S$ | $S^\perp = \{\mathbf{v} : \langle\mathbf{v},\mathbf{s}\rangle = 0 \ \forall\,\mathbf{s}\in S\}$ |
| Gram-Schmidt | Orthogonalization algorithm | Build ONB from any basis |
| Householder reflector | Reflection across a hyperplane | $H = I - 2\mathbf{v}\mathbf{v}^T/\lVert\mathbf{v}\rVert^2$ |
| QR decomposition | $A = QR$ factorization | $Q$: orthonormal cols, $R$: upper triangular |
| Rayleigh quotient | Quotient for eigenvalue bounds | $R(\mathbf{x}) = \mathbf{x}^TA\mathbf{x}/\mathbf{x}^T\mathbf{x}$ |
| Spectral theorem | Symmetric $\Rightarrow$ ONB of eigenvectors | $A = Q\Lambda Q^T$ |
| Isometry | Norm-preserving linear map | $\lVert T\mathbf{x}\rVert = \lVert\mathbf{x}\rVert$ for all $\mathbf{x}$ |
| Stiefel manifold | Matrices with orthonormal columns | $V_k(\mathbb{R}^n) = \{Q \in \mathbb{R}^{n \times k} : Q^TQ = I_k\}$ |
| Polar decomposition | Orthogonal $\times$ symmetric PSD factorization | $A = QS$, $S = (A^TA)^{1/2}$ |
| Unitary matrix | Complex analog of orthogonal | $U^*U = I$, entries in $\mathbb{C}$ |
Appendix K: Common Numerical Experiments
K.1 Verifying Gram-Schmidt Stability
The following experiment quantifies the loss of orthogonality in CGS vs MGS:
```python
import numpy as np
def cgs(A):
m, n = A.shape
Q = np.zeros_like(A)
R = np.zeros((n, n))
for j in range(n):
v = A[:, j].copy()
R[:j, j] = Q[:, :j].T @ v
v -= Q[:, :j] @ R[:j, j]
R[j, j] = np.linalg.norm(v)
Q[:, j] = v / R[j, j]
return Q, R
def mgs(A):
m, n = A.shape
Q = A.astype(float).copy()
R = np.zeros((n, n))
for j in range(n):
R[j, j] = np.linalg.norm(Q[:, j])
Q[:, j] /= R[j, j]
R[j, j+1:] = Q[:, j] @ Q[:, j+1:]
Q[:, j+1:] -= np.outer(Q[:, j], R[j, j+1:])
return Q, R
# Test on Hilbert matrix (ill-conditioned)
n = 12
H = 1.0 / (np.arange(1, n+1)[:, None] + np.arange(1, n+1)[None, :] - 1)
Q_cgs, _ = cgs(H)
Q_mgs, _ = mgs(H)
err_cgs = np.max(np.abs(Q_cgs.T @ Q_cgs - np.eye(n)))
err_mgs = np.max(np.abs(Q_mgs.T @ Q_mgs - np.eye(n)))
print(f"CGS orthogonality error: {err_cgs:.2e}") # ~1e-4 (catastrophic)
print(f"MGS orthogonality error: {err_mgs:.2e}") # ~1e-8 (acceptable)
print(f"kappa(H): {np.linalg.cond(H):.2e}") # ~10^16 for n=12
K.2 Orthogonal Initialization Impact on Gradient Norms
```python
import numpy as np
def grad_norm_experiment(n=64, L=20, trials=100):
"""Compare gradient norms: Gaussian init vs Orthogonal init."""
results = {'gaussian': [], 'orthogonal': []}
for _ in range(trials):
# Gaussian initialization
Ws_gauss = [np.random.randn(n, n) / np.sqrt(n) for _ in range(L)]
prod_gauss = Ws_gauss[0]
for W in Ws_gauss[1:]:
prod_gauss = prod_gauss @ W
results['gaussian'].append(np.linalg.norm(prod_gauss, 2))
# Orthogonal initialization
Ws_orth = []
for _ in range(L):
M = np.random.randn(n, n)
Q, _ = np.linalg.qr(M)
Ws_orth.append(Q)
prod_orth = Ws_orth[0]
for W in Ws_orth[1:]:
prod_orth = prod_orth @ W
results['orthogonal'].append(np.linalg.norm(prod_orth, 2))
print(f"Gaussian: mean={np.mean(results['gaussian']):.3e}, "
f"std={np.std(results['gaussian']):.3e}")
print(f"Orthogonal: mean={np.mean(results['orthogonal']):.3e}, "
f"std={np.std(results['orthogonal']):.3e}")
    # Expected: Gaussian init shows huge variance across trials (the product's spectral
    # norm explodes or collapses); orthogonal init gives spectral norm exactly 1.0.

grad_norm_experiment()
```
K.3 Rayleigh Quotient Iteration Convergence
```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, n_iter=20):
    """Track convergence of Rayleigh Quotient Iteration for a symmetric matrix A."""
    x = x0 / np.linalg.norm(x0)
    lam_all = np.linalg.eigvalsh(A)       # RQI converges to the eigenvalue whose
    history = []                          # eigenvector is nearest the starting vector
    for _ in range(n_iter):
        rho = x @ A @ x                   # Rayleigh quotient
        history.append(np.min(np.abs(lam_all - rho)))   # distance to the nearest eigenvalue
        try:
            y = np.linalg.solve(A - rho * np.eye(len(A)), x)
        except np.linalg.LinAlgError:
            break                         # A - rho*I exactly singular: converged
        x = y / np.linalg.norm(y)
    return history

A = np.array([[3., 1., 0.], [1., 2., 1.], [0., 1., 1.]])
x0 = np.random.randn(3)
errors = rayleigh_quotient_iteration(A, x0)
print("Errors per iteration:")
for i, e in enumerate(errors[:10]):
    print(f"  iter {i}: {e:.2e}")
# Expected: cubic convergence - each error is roughly the cube of the previous one
```