Orthogonality and Orthonormality - Part 1: From Intuition (Section 1) to the Spectral Theorem for Symmetric Matrices (Section 8)
1. Intuition
1.1 What Orthogonality "Buys" You
Orthogonality is the key that makes linear algebra computationally tractable. Without it, working in a basis requires solving systems of equations. With it, everything simplifies to dot products.
Five computational miracles of orthogonality:
Miracle 1: Coordinates become dot products. In a general basis {b1,…,bn}, finding the coordinates of a vector v=∑cibi requires solving an n×n linear system. In an orthonormal basis {q1,…,qn}, the coordinate ci=⟨v,qi⟩ - just a dot product. No system to solve.
Miracle 2: Length is preserved component-wise. In an ONB: ∥v∥2=∑ici2 (Parseval's identity). The squared norm is simply the sum of squared coordinates - no cross terms.
Miracle 3: Orthogonal matrices have condition number 1. The condition number of Q is ∥Q∥⋅∥Q−1∥=1⋅1=1. This is the best possible - orthogonal systems are maximally well-conditioned. Solving Qx=b is trivially x=Q⊤b.
Miracle 4: Projection is the best approximation. The orthogonal projection of b onto a subspace S gives the closest point in S to b. This is the geometric foundation of least squares, PCA, and low-rank approximation.
Miracle 5: Composition of orthogonal maps is free. Two isometries composed give another isometry. Chaining rotations, reflections, and permutations never amplifies numerical error.
THE ORTHOGONALITY ADVANTAGE
========================================================================
              General basis {b_1, ..., b_n}   Orthonormal basis {q_1, ..., q_n}
              -----------------------------   ---------------------------------
Coordinates   solve Bc = v                    c_i = <v, q_i>  (a dot product)
Norm^2        c^T (B^T B) c                   sum_i c_i^2  (Parseval)
Projection    A(A^T A)^{-1} A^T b             Q Q^T b  (just Q^T then Q)
Solve Ax=b    Gaussian elimination            x = Q^T b  (trivial)
Condition #   can be >> 1                     exactly 1
========================================================================
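A minimal NumPy sketch (illustrative, not part of the original lesson) contrasting the two columns above: in a general basis, coordinates require solving a system; in an orthonormal basis they are dot products, and Parseval's identity holds.

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))                      # columns: a general (non-orthogonal) basis
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))     # columns: an orthonormal basis
v = rng.standard_normal(4)

c_general = np.linalg.solve(B, v)                    # general basis: must solve Bc = v
c_ortho = Q.T @ v                                    # ONB: coordinates are just dot products
print(np.allclose(B @ c_general, v))                 # True
print(np.allclose(Q @ c_ortho, v))                   # True
print(np.allclose(np.sum(c_ortho**2), v @ v))        # True: Parseval, ||v||^2 = sum of squared coords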
1.2 Geometric Picture
In Euclidean space, two vectors are orthogonal if they meet at a right angle. The dot product measures "how much they point in the same direction":
⟨u,v⟩=∥u∥∥v∥cosθ
θ = 0°: ⟨u,v⟩ = ∥u∥∥v∥ (parallel, maximum alignment)
θ = 90°: ⟨u,v⟩ = 0 (orthogonal, zero alignment)
θ = 180°: ⟨u,v⟩ = −∥u∥∥v∥ (antiparallel, maximum anti-alignment)
The projection picture. The orthogonal projection of v onto u is the "shadow" of v cast perpendicularly onto the line through u:
proju(v) = (⟨v,u⟩ / ∥u∥²) u
The error v−projuv is orthogonal to u. This Pythagorean decomposition is the geometric core of all least-squares theory.
Orthonormal bases as "rulers." A set of orthonormal vectors {q1,…,qn} forms a perfect coordinate system: each qi is a unit "ruler" pointing in an independent direction. The coordinate of v along qi is exactly ⟨v,qi⟩ - how far v extends in the i-th direction.
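A small NumPy sketch of the projection picture (values chosen arbitrarily, illustrative only): the projection is the shadow of v along u, and the leftover error is orthogonal to u.

import numpy as np

u = np.array([3.0, 4.0])
v = np.array([2.0, 1.0])

proj = (v @ u) / (u @ u) * u                        # proj_u(v) = (<v,u>/||u||^2) u
err = v - proj
print(np.isclose(err @ u, 0.0))                     # True: the error is orthogonal to u
print(np.isclose(v @ v, proj @ proj + err @ err))   # True: Pythagorean split of ||v||^2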
1.3 Why Orthogonality Matters for AI
PCA finds orthogonal directions of variance. The principal components are the eigenvectors of the sample covariance matrix C=X⊤X/(n−1), where X is the centered data matrix. By the spectral theorem (Section 8), these eigenvectors can be chosen orthonormal. PCA is possible - and gives uncorrelated components - precisely because of orthogonality.
Attention subspaces. In multi-head attention, each head projects queries and keys into a dk-dimensional subspace. The heads are designed to attend to different aspects of the input. The more orthogonal the subspaces, the more independent information each head extracts.
Orthogonal weight initialization (Saxe et al., 2013) initializes weight matrices as random orthogonal matrices (via QR decomposition of a random Gaussian matrix). This preserves the norm of activations and gradients through linear layers, enabling training of very deep networks without vanishing/exploding gradients.
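A toy sketch of this initialization scheme, assuming NumPy (real frameworks add a gain factor and handle arbitrary shapes, which this version ignores):

import numpy as np

def orthogonal_init(fan_out, fan_in, seed=0):
    # Sketch of orthogonal initialization: QR of a random Gaussian matrix (assumes fan_out >= fan_in)
    G = np.random.default_rng(seed).standard_normal((fan_out, fan_in))
    W, _ = np.linalg.qr(G)                 # W has orthonormal columns
    return W

W = orthogonal_init(256, 256)
x = np.random.default_rng(1).standard_normal(256)
print(np.allclose(np.linalg.norm(W @ x), np.linalg.norm(x)))   # True: the linear layer preserves norms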
RoPE (Su et al., 2021) encodes position m by rotating the query/key vectors by angle mθi in each 2D subspace. The key property: the dot product between a rotated query at position m and a rotated key at position n depends only on (m−n) - position appears as a relative rotation. This works because rotation is orthogonal: it preserves the magnitude of inner products.
Numerical stability. QR decomposition is the backbone of numerical linear algebra. Householder QR is used to solve least-squares problems, compute eigenvalues (QR iteration), and factor matrices. Its advantage over elimination-based approaches (such as forming the normal equations for rectangular systems) stems from orthogonality: the Householder reflectors are orthogonal transformations that never amplify errors.
1.4 Historical Timeline
Year   Person                      Contribution
1801   Gauss                       Least squares by orthogonal projection (unpublished until 1809)
1805   Legendre                    Published least squares - minimizing the sum of squared residuals
1883   Jørgen Gram                 Gram-Schmidt process published in Danish
1907   Erhard Schmidt              Gram-Schmidt republished and popularized
1958   Alston Householder          Householder reflections for stable QR factorization
1961   J.G.F. Francis              Practical QR algorithm for eigenvalue computation
1961   V. Kublanovskaya            Independent discovery of the QR algorithm (USSR)
1965   James Wilkinson             Error analysis of Householder QR; numerical stability theory
2013   Saxe, McClelland, Ganguli   Orthogonal initialization for deep linear networks
2021   Su et al.                   RoPE: rotary positional encoding via orthogonal rotation matrices
2021   Hua et al.                  Transformer Quality with orthogonal attention mechanisms
2. Formal Definitions
2.1 Inner Products and the Angle Formula
Definition (Inner Product). A real inner product on a vector space V is a function ⟨⋅,⋅⟩:V×V→R satisfying:
Symmetry:⟨u,v⟩=⟨v,u⟩
Linearity in first argument:⟨au+bv,w⟩=a⟨u,w⟩+b⟨v,w⟩
Positive definiteness:⟨v,v⟩≥0 with equality iff v=0
The standard inner product (dot product) on Rn is ⟨u,v⟩=u⊤v=∑i=1nuivi.
The induced norm ∥v∥ = √⟨v,v⟩ is the Euclidean length of v.
Theorem 2.1.1 (Cauchy-Schwarz Inequality). For any u,v in an inner product space:
∣⟨u,v⟩∣≤∥u∥⋅∥v∥
with equality iff u and v are linearly dependent (one is a scalar multiple of the other).
Proof. For any t∈R, ∥u+tv∥2≥0. Expanding:
∥u∥2+2t⟨u,v⟩+t2∥v∥2≥0
This is a quadratic in t that is always non-negative, so its discriminant must be ≤ 0: 4⟨u,v⟩² − 4∥u∥²∥v∥² ≤ 0, which rearranges to ∣⟨u,v⟩∣ ≤ ∥u∥⋅∥v∥. □
2.2 Orthogonal and Orthonormal Vectors
Definition. Two vectors u,v are orthogonal (written u⊥v) if ⟨u,v⟩=0.
A vector v is a unit vector if ∥v∥=1. The normalization of nonzero v is v^=v/∥v∥.
Two vectors are orthonormal if they are both orthogonal AND both unit vectors:
⟨qi,qj⟩ = δij = 1 if i = j, and 0 if i ≠ j
Key consequences of orthogonality:
1. The zero vector is orthogonal to everything: 0⊥v for all v (since ⟨0,v⟩=0).
2. Pythagorean theorem: If u⊥v, then ∥u+v∥2=∥u∥2+∥v∥2.
Proof: ∥u+v∥² = ⟨u+v,u+v⟩ = ∥u∥² + 2⟨u,v⟩ + ∥v∥² = ∥u∥² + ∥v∥², since ⟨u,v⟩ = 0. □
3. Generalized Pythagorean theorem: For mutually orthogonal v1,…,vk:
∥v1 + ⋯ + vk∥² = ∥v1∥² + ⋯ + ∥vk∥²
4. Linearity of orthogonality: If u⊥v and u⊥w, then u⊥(av+bw) for all scalars a,b.
2.3 Orthogonal Sets and Linear Independence
Definition. A set {v1,…,vk} is an orthogonal set if ⟨vi,vj⟩=0 for all i ≠ j. It is an orthonormal set if additionally each ∥vi∥=1.
Theorem 2.3.1. Any orthogonal set of nonzero vectors is linearly independent.
Proof. Suppose ∑i=1kcivi=0. Take the inner product of both sides with vj:
⟨∑i civi, vj⟩ = cj⟨vj,vj⟩ = cj∥vj∥² = 0
Since vj ≠ 0, we have ∥vj∥² > 0, so cj = 0. This holds for all j. □
Remark. This theorem gives an effortless proof of linear independence for orthogonal sets - no row reduction needed. The inner product "isolates" each coefficient.
Non-examples:
{(1,0),(1,1)} is not orthogonal: ⟨(1,0),(1,1)⟩ = 1 ≠ 0.
{(1,1,0)/√2, (0,0,1), (1,−1,0)/√2, (1,0,0)} - the last vector breaks orthogonality.
2.4 Orthonormal Bases
Definition. An orthonormal basis (ONB) for V is a set {q1,…,qn} that is both orthonormal and a basis for V.
Theorem 2.4.1 (Fourier Coefficients). If {q1,…,qn} is an ONB for V, then for any v∈V:
v = ∑i=1..n ⟨v,qi⟩ qi
The coefficients v^i=⟨v,qi⟩ are called Fourier coefficients (or coordinates of v in the ONB).
Proof. Since {qi} is a basis, v=∑jcjqj for some unique cj. Taking the inner product with qi:
⟨v,qi⟩ = ∑j cj⟨qj,qi⟩ = ∑j cjδji = ci. □
Theorem 2.4.2 (Parseval's Identity). Under the same conditions:
∥v∥² = ∑i=1..n ⟨v,qi⟩²
Remark. Parseval's identity says: in an ONB, the squared norm equals the sum of squared coordinates. There are no cross terms - the orthonormality kills them all. This is the high-dimensional Pythagorean theorem.
Matrix form. Assembling the ONB into a matrix Q=[q1∣⋯∣qn], the Fourier decomposition is simply v=Q(Q⊤v): first compute coordinates v^=Q⊤v, then reconstruct v=Qv^.
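A short NumPy illustration (not from the lesson) of the Fourier-coefficient decomposition and Parseval's identity in matrix form:

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # columns q_1, ..., q_5 form an ONB of R^5
v = rng.standard_normal(5)

coords = Q.T @ v                                   # Fourier coefficients <v, q_i>
print(np.allclose(Q @ coords, v))                  # True: v = sum_i <v, q_i> q_i
print(np.allclose(np.sum(coords**2), v @ v))       # True: Parseval's identity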
2.5 Orthogonal Complements
Definition. Given a subspace S⊆V, the orthogonal complement is:
S⊥={v∈V:⟨v,s⟩=0 for all s∈S}
Theorem 2.5.1.S⊥ is a subspace of V.
Proof: If u,v∈S⊥ and a,b∈R, then for any s∈S: ⟨au+bv,s⟩=a⟨u,s⟩+b⟨v,s⟩=0. □
Theorem 2.5.2 (Orthogonal Decomposition). If V is finite-dimensional and S is a subspace, then:
V=S⊕S⊥(direct sum)
Every v∈V decomposes uniquely as v=vS+vS⊥ with vS∈S and vS⊥∈S⊥.
Moreover: dim(S)+dim(S⊥)=dim(V).
Proof. Let {q1,…,qk} be an ONB for S. Define vS=∑i=1k⟨v,qi⟩qi (projection onto S) and vS⊥=v−vS. Then v=vS+vS⊥ with vS∈S. For any qj:
⟨vS⊥,qj⟩=⟨v,qj⟩−⟨vS,qj⟩=⟨v,qj⟩−⟨v,qj⟩=0
So vS⊥⊥qj for all j, hence vS⊥∈S⊥. Uniqueness: if v=wS+wS⊥ also, then vS−wS=wS⊥−vS⊥∈S∩S⊥={0}. □
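A NumPy sketch of this decomposition, using QR to build an ONB for a subspace S (illustrative, not part of the lesson):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
Q, _ = np.linalg.qr(A)                 # ONB for the subspace S = col(A)
v = rng.standard_normal(5)

v_S = Q @ (Q.T @ v)                    # component of v in S
v_perp = v - v_S                       # component of v in S-perp
print(np.allclose(v_S + v_perp, v))    # True: unique decomposition v = v_S + v_perp
print(np.allclose(Q.T @ v_perp, 0.0))  # True: v_perp is orthogonal to S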
Connection to the four fundamental subspaces. For A∈Rm×n:
null(A)=row(A)⊥ in Rn
null(A⊤)=col(A)⊥ in Rm
These orthogonality relations are the deep structure behind the four fundamental subspaces (see 04: Linear Transformations 4.5).
2.6 Orthogonality in Complex Inner Product Spaces
The theory extends to complex vector spaces with a Hermitian (sesquilinear) inner product:
⟨u,v⟩ = ∑k=1..n uk v̄k
where the bar denotes complex conjugation (the inner product is linear in the first argument and conjugate-linear in the second). The complex analog of orthogonal matrices is the unitary matrices U∈Cn×n satisfying U∗U=I, where U∗ is the conjugate transpose of U (transpose with entrywise complex conjugation).
Unitary group:U(n)={U∈Cn×n:U∗U=I} contains all n×n unitary matrices. It reduces to O(n) when restricted to real matrices.
Special unitary group:SU(n)={U∈U(n):det(U)=1}.
Key properties of unitary matrices:
∣det(U)∣=1 (complex number of modulus 1, not just ±1)
Eigenvalues lie on the unit circle: ∣λi∣=1
Preserve complex inner products: ⟨Uu,Uv⟩=⟨u,v⟩
The DFT matrix Fn/√n is unitary (not orthogonal, since its entries are complex)
Hermitian matrices (A=A∗) are the complex analogs of real symmetric matrices. The spectral theorem holds: every Hermitian matrix has real eigenvalues and a unitary eigenbasis.
For AI, complex-valued neural networks and quantum computing naturally use unitary matrices. The DFT, crucial for signal processing and some positional encoding schemes, is a unitary transform in Cn.
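A quick numerical check (assuming NumPy; illustrative only) that the normalized DFT matrix is unitary and has eigenvalues on the unit circle:

import numpy as np

n = 8
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)            # DFT matrix (complex entries)
U = F / np.sqrt(n)                                      # F_n / sqrt(n)
print(np.allclose(U.conj().T @ U, np.eye(n)))           # True: U*U = I, so U is unitary
print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))   # True: eigenvalues lie on the unit circle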
2.7 Abstract Inner Product Spaces: Axioms
We have been working with the Euclidean inner product ⟨u,v⟩=u⊤v. More generally, an inner product on a real vector space V is a bilinear map ⟨⋅,⋅⟩:V×V→R satisfying:
Axiom                   Expression                              Meaning
Symmetry                ⟨u,v⟩ = ⟨v,u⟩                           Order doesn't matter
Linearity               ⟨au+bv,w⟩ = a⟨u,w⟩ + b⟨v,w⟩             Linear in the first argument
Positive definiteness   ⟨v,v⟩ ≥ 0, and ⟨v,v⟩ = 0 ⇒ v = 0        Self-inner product is positive
Any positive definite symmetric matrix M≻0 defines an inner product:
⟨u,v⟩M=u⊤Mv
Example (precision-weighted inner product): In Bayesian statistics, the Mahalanobis distance between x and μ is:
dM(x,μ) = √( (x−μ)⊤Σ⁻¹(x−μ) )
This is the distance induced by the inner product ⟨u,v⟩Σ−1=u⊤Σ−1v. In this metric, vectors along high-variance directions of Σ are "shorter" (the precision Σ−1 down-weights those directions).
Orthogonality is metric-dependent. Vectors that are orthogonal in the Euclidean metric may not be orthogonal in the Mahalanobis metric, and vice versa. The choice of inner product determines what "perpendicular" means.
For ML: The natural gradient method (Amari 1998) uses the Fisher information matrix F to define the inner product, giving rise to a geometry where parameter updates are Δθ=F−1∇L. This is gradient descent in the Riemannian metric ⟨Δθ1,Δθ2⟩F=Δθ1⊤FΔθ2.
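A tiny NumPy example (with an arbitrarily chosen Σ) showing that orthogonality is metric-dependent:

import numpy as np

Sigma = np.array([[4.0, 1.0],
                  [1.0, 1.0]])            # an arbitrary positive definite covariance
P = np.linalg.inv(Sigma)                  # precision matrix: <u, v>_M = u^T P v

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
print(u @ v)          # 0.0  -> orthogonal under the Euclidean inner product
print(u @ P @ v)      # about -0.33 -> not orthogonal under the Mahalanobis inner product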
3. Orthogonal Matrices
3.1 Definition and Characterizations
Definition. A square matrix Q∈Rn×n is orthogonal if:
Q⊤Q=QQ⊤=In
Equivalently, Q−1=Q⊤ - the transpose is the inverse.
Characterization via columns.Q is orthogonal if and only if its columns q1,…,qn form an orthonormal basis for Rn:
qi⊤qj = δij = 1 if i = j, and 0 if i ≠ j
The condition Q⊤Q=I encodes exactly these n² inner products.
Characterization via rows. By the symmetry QQ⊤=I, the rows of Q also form an orthonormal basis. This is a non-trivial statement: a matrix whose columns are orthonormal must also have orthonormal rows (and vice versa).
Characterization via norms (isometry property).Q is orthogonal if and only if it preserves the Euclidean norm:
∥Qx∥2=∥x∥2for all x∈Rn
Proof:∥Qx∥2=(Qx)⊤(Qx)=x⊤Q⊤Qx=x⊤Ix=∥x∥2. □
Characterization via inner products.Q preserves inner products:
⟨Qx,Qy⟩=⟨x,y⟩for all x,y
Proof:(Qx)⊤(Qy)=x⊤Q⊤Qy=x⊤y. □
This means orthogonal matrices preserve angles and distances - they are isometries of Euclidean space.
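A quick NumPy check of these characterizations on a random orthogonal matrix (illustrative only):

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal matrix
x, y = rng.standard_normal(4), rng.standard_normal(4)

print(np.allclose(Q.T @ Q, np.eye(4)))                        # Q^T Q = I
print(np.allclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # norms are preserved
print(np.allclose((Q @ x) @ (Q @ y), x @ y))                  # inner products (and angles) are preserved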
3.2 The Orthogonal Group and Special Orthogonal Group
The set of all n×n orthogonal matrices forms the orthogonal group:
O(n)={Q∈Rn×n:Q⊤Q=I}
Group axioms:
Closure: If Q1,Q2∈O(n), then (Q1Q2)⊤(Q1Q2)=Q2⊤Q1⊤Q1Q2=I, so Q1Q2∈O(n).
Identity:I∈O(n).
Inverses:Q−1=Q⊤∈O(n) (since (Q⊤)⊤Q⊤=QQ⊤=I).
Associativity: Inherited from matrix multiplication.
Determinant. Since det(Q⊤Q)=det(Q)2=det(I)=1, we have det(Q)=±1.
det(Q)=+1: proper rotations (orientation-preserving)
det(Q)=−1: improper rotations - reflections (or a rotation composed with a reflection)
The special orthogonal groupSO(n)={Q∈O(n):det(Q)=+1} consists of pure rotations.
In 2D:
SO(2) = { [ cosθ  −sinθ ; sinθ  cosθ ] : θ ∈ [0, 2π) }
This is the group of rotations of the plane.
In 3D:SO(3) parameterizes 3D rotations. This group is the configuration space of a rigid body and appears throughout robotics, computer vision, and physics. Its double cover SU(2) appears in quaternion-based rotation representations used in game engines and IMUs.
3.3 Condition Number and Numerical Properties
The condition number of a matrix measures how much it amplifies errors:
κ(A) = ∥A∥⋅∥A⁻¹∥ = σmax/σmin
For orthogonal matrices Q: all singular values equal 1, so κ(Q)=1/1=1. Orthogonal matrices have the best possible condition number.
This means:
Multiplying by Q never amplifies rounding errors
Solving Qx=b is trivially stable: x=Q⊤b
QR factorization (which uses orthogonal matrices) is numerically stable
For AI: Deep learning systems that maintain orthogonal weight matrices throughout training benefit from gradient signals that are neither exploding nor vanishing. The isometry property means the network's "signal amplification" is exactly 1 at those layers.
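A short NumPy illustration of these numerical properties (illustrative, not part of the lesson):

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)

print(np.linalg.cond(Q))               # ~1.0: the best possible condition number
print(np.linalg.cond(A))               # typically much larger for a random matrix
print(np.allclose(Q @ (Q.T @ b), b))   # solving Qx = b is just x = Q^T b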
3.4 Eigenvalues of Orthogonal Matrices
Theorem. The eigenvalues of a real orthogonal matrix lie on the unit circle in C: ∣λ∣=1.
Proof: If Qv=λv with v=0:
∥v∥=∥Qv∥=∥λv∥=∣λ∣∥v∥
Dividing by ∥v∥>0: ∣λ∣=1. □
For real eigenvalues: λ=+1 (rotation-like) or λ=−1 (reflection-like).
Complex eigenvalues come in conjugate pairs eiθ,e−iθ corresponding to 2D rotations.
3.5 The Cayley Transform
The Cayley transform establishes a correspondence between orthogonal matrices and skew-symmetric matrices (matrices satisfying S⊤=−S).
Definition. For a skew-symmetric matrix S (I+S is automatically invertible, since the eigenvalues of a skew-symmetric matrix are purely imaginary), the Cayley transform is:
Q = (I−S)(I+S)⁻¹
Claim: Q is orthogonal. Since S⊤ = −S, we have Q⊤ = (I−S)⁻¹(I+S). Because I−S and I+S are both polynomials in S, they commute:
(I+S)(I−S) = I − S² = (I−S)(I+S)
Hence Q⊤Q = (I−S)⁻¹(I+S)(I−S)(I+S)⁻¹ = (I−S)⁻¹(I−S)(I+S)(I+S)⁻¹ = I. □
Inverse Cayley transform. Given orthogonal Q without eigenvalue −1:
S=(I−Q)(I+Q)−1
Why this matters:
Skew-symmetric matrices form a vector space (easy to parameterize)
The Cayley transform maps this space bijectively to "most" orthogonal matrices
This gives a smooth parameterization of O(n) near the identity - useful for Riemannian optimization
Example in 2D: S = [ 0  t ; −t  0 ] (skew-symmetric) gives:
Q = 1/(1+t²) · [ 1−t²  −2t ; 2t  1−t² ]
With t=tan(θ/2) (the Weierstrass substitution), this gives the rotation matrix Rθ. The Cayley transform thus rationalizes the trigonometric functions appearing in rotation matrices.
Applications in neural networks: Neural networks that maintain orthogonal weight matrices throughout training can parameterize them via their skew-symmetric "log" and exponentiate via the matrix exponential or Cayley transform. This enables gradient descent on O(n) using Lie algebra tools.
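A NumPy sketch of the Cayley correspondence (illustrative; the skew-symmetric matrix is randomly generated):

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
S = (M - M.T) / 2                              # random skew-symmetric matrix
I = np.eye(4)

Q = (I - S) @ np.linalg.inv(I + S)             # Cayley transform
print(np.allclose(Q.T @ Q, I))                 # True: Q is orthogonal
print(np.allclose((I - Q) @ np.linalg.inv(I + Q), S))   # True: inverse Cayley recovers S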
4. Orthogonal Projections
4.1 The Projection Formula
Definition. The orthogonal projection of v onto a subspace S is the unique vector vS∈S such that v−vS⊥S.
Formula (1D case). Projection onto the line spanned by unit vector u^:
proju^(v)=⟨v,u^⟩u^=u^u^⊤v
The matrix P=u^u^⊤ is the rank-1 projection matrix.
Formula (general case). If S=col(A) for a matrix A with linearly independent columns:
projS(v)=A(A⊤A)−1A⊤v
The matrix P=A(A⊤A)−1A⊤ is the projection matrix onto col(A).
When A has orthonormal columns (i.e., A=Q with Q⊤Q=I), this simplifies dramatically:
P=QQ⊤
This is why orthonormal bases make projections computationally trivial.
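A NumPy sketch comparing the general projection formula with the orthonormal-columns shortcut (illustrative only):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))                     # independent columns spanning S = col(A)
Q, _ = np.linalg.qr(A)                              # orthonormal basis for the same subspace

P_general = A @ np.linalg.inv(A.T @ A) @ A.T        # P = A (A^T A)^{-1} A^T
P_ortho = Q @ Q.T                                   # P = Q Q^T
print(np.allclose(P_general, P_ortho))              # True: same projector, cheaper formula
print(np.allclose(P_ortho @ P_ortho, P_ortho))      # idempotent
print(np.allclose(P_ortho.T, P_ortho))              # symmetric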
4.2 Projection Matrices: Properties
A matrix P is a projection if and only if it is idempotent: P2=P.
A projection is an orthogonal projection if and only if it is also symmetric: P⊤=P.
Verification: For P=QQ⊤ with Q⊤Q=I:
Idempotent: P2=QQ⊤QQ⊤=Q(Q⊤Q)Q⊤=QIQ⊤=QQ⊤=P OK
Symmetric: P⊤=(QQ⊤)⊤=QQ⊤=P OK
Eigenvalues of projections. If P is a projection, its eigenvalues are all 0 or 1.
Proof: If Pv=λv, then λv=Pv=P2v=λPv=λ2v, so λ2=λ, giving λ=0 or λ=1. □
Geometrically: eigenvectors with λ=1 lie in the range of P (they are unchanged), while eigenvectors with λ=0 lie in the null space (they are annihilated).
4.3 Best Approximation Theorem
Theorem (Best Approximation). The projection vS = projS(v) is the closest point in S to v: ∥v − vS∥ ≤ ∥v − s∥ for every s ∈ S.
Proof. For any s ∈ S, write v − s = (v − vS) + (vS − s). Since v−vS ⊥ S and vS−s ∈ S, the cross term vanishes:
∥v − s∥² = ∥v − vS∥² + ∥vS − s∥² ≥ ∥v − vS∥² □
This theorem is everywhere in machine learning:
Least squares: The normal equations give the projection of b onto col(A)
PCA: Principal components are the projection onto the subspace of maximum variance (-> 03: PCA)
Attention: Softmax attention can be viewed as computing a weighted projection of value vectors
Linear regression: The fitted values y^=Hy where H=X(X⊤X)−1X⊤ is the hat matrix
4.4 Projection in Coordinate Systems
Given an ONB {q1,…,qn} for Rn, the projection onto the subspace S=span(q1,…,qk) (for k≤n) is:
PS = ∑i=1..k qiqi⊤
This is a sum of rank-1 projectors, one per basis direction. The decomposition:
I = PS + PS⊥ = ∑i=1..k qiqi⊤ + ∑i=k+1..n qiqi⊤
is the resolution of the identity - every vector is split into its S and S⊥ components.
5. Gram-Schmidt Orthogonalization
5.1 The Algorithm: Classical Gram-Schmidt
Problem. Given linearly independent vectors a1,…,ak, construct an orthonormal basis for span(a1,…,ak).
Classical Gram-Schmidt (CGS):
Initialization:q1=a1/∥a1∥
Iteration: For j=2,…,k:
uj = aj − ∑i=1..j−1 ⟨aj,qi⟩ qi,    qj = uj / ∥uj∥
What it does geometrically: At step j, we subtract the projection of aj onto the span of the already-processed vectors, leaving the component orthogonal to everything before it. Normalizing gives the j-th orthonormal vector.
Inductive proof of correctness. After step j, {q1,…,qj} is an ONB for span(a1,…,aj).
Base:j=1: q1=a1/∥a1∥ is a unit vector spanning the same subspace. OK
Step: Assume true for j−1. Then uj=aj−Pj−1aj where Pj−1 projects onto span(q1,…,qj−1).
For any ℓ<j: ⟨uj,qℓ⟩=⟨aj,qℓ⟩−⟨aj,qℓ⟩=0 OK
Since a1,…,aj are independent, uj ≠ 0 (otherwise aj∈span(a1,…,aj−1)). So qj = uj/∥uj∥ is a valid unit vector. □
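A compact NumPy implementation of classical Gram-Schmidt, written as a sketch of the algorithm above (not a numerically hardened routine):

import numpy as np

def classical_gram_schmidt(A):
    # Classical Gram-Schmidt on the columns of A (assumes linearly independent columns)
    A = np.asarray(A, dtype=float)
    Q = np.zeros_like(A)
    for j in range(A.shape[1]):
        u = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])   # subtract projections onto q_1, ..., q_{j-1}
        Q[:, j] = u / np.linalg.norm(u)
    return Q

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q = classical_gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))     # True: the columns are orthonormal
print(np.allclose(Q @ (Q.T @ A), A))       # True: the span of the columns is preserved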
5.2 Modified Gram-Schmidt
The numerical problem with CGS. In floating-point arithmetic, CGS can catastrophically lose orthogonality. This happens when aj is nearly in the span of {a1,…,aj−1} - rounding errors in the projection subtraction can accumulate, producing a uj that isn't truly orthogonal to earlier vectors.
Modified Gram-Schmidt (MGS) reorders the computation to orthogonalize against already-computed qi sequentially rather than all at once:
import numpy as np

def mgs(A):
    Q = np.array(A, dtype=float)
    for j in range(Q.shape[1]):
        v = Q[:, j]                            # view: updates write back into column j
        for i in range(j):
            v -= (Q[:, i] @ v) * Q[:, i]       # orthogonalize against q_i using the CURRENT v
        Q[:, j] = v / np.linalg.norm(v)
    return Q
Why MGS is more stable. In CGS, all projections use the original aj. In MGS, after each orthogonalization step, the updated v is used - so numerical errors from the first projection are themselves projected away in subsequent steps.
Theorem (Björck 1967). MGS loses orthogonality gracefully: the computed q̂i satisfy q̂i⊤q̂j = δij + O(ϵmach κ(A)), where κ(A) is the condition number of the input matrix. CGS only guarantees O(ϵmach κ(A)²).
Practical advice: Use MGS for explicit basis construction; use Householder for QR factorization (even more stable).
5.3 Gram-Schmidt and QR Factorization
Gram-Schmidt applied to the columns of A=[a1∣⋯∣an] produces a QR factorization:
A=QR
where Q=[q1∣⋯∣qn] has orthonormal columns and R is upper triangular with:
Rij = ⟨aj,qi⟩ if i < j,   ∥uj∥ if i = j,   0 if i > j
The diagonal entries Rjj = ∥uj∥ are positive (since uj ≠ 0 for independent columns).
This gives a unique QR factorization of A with positive diagonal entries of R.
For AI: Gram-Schmidt implicitly occurs in layer normalization variants, in computing orthogonal weight bases for LoRA, and in the derivation of stable training algorithms.
6. Householder Reflectors and Givens Rotations
6.1 Householder Reflectors
Motivation. While Gram-Schmidt produces QR by building Q column by column, Householder applies successive orthogonal transformations to reduce A to upper triangular form R directly - achieving greater numerical stability.
Definition. The Householder reflector determined by unit vector n (the normal to the mirror) is:
H=I−2nn⊤
Properties:
Symmetric:H⊤=H OK
Orthogonal:H⊤H=H2=(I−2nn⊤)2=I−4nn⊤+4nn⊤nn⊤=I−4nn⊤+4nn⊤=I OK
Involution:H2=I, so H=H−1 OK
Determinant:det(H)=−1 (it's a reflection, not a rotation)
Action:H reflects any vector across the hyperplane with normal n:
Vectors in n⊥ are unchanged: Hv=v if ⟨v,n⟩=0
The vector n is negated: Hn=n−2n=−n
Zeroing a column. To zero out all components of a below the first entry: choose:
n = (a + sign(a1)∥a∥e1) / ∥a + sign(a1)∥a∥e1∥   ⇒   Ha = −sign(a1)∥a∥e1
(The sign is chosen to match a1, which avoids catastrophic cancellation when a is nearly parallel to e1.)
6.2 QR via Householder
To factor A∈Rm×n:
Find H1 to zero the first column below row 1: H1A has ∗ in position (1,1) and zeros below
Find H2 (acting on the trailing (m−1)×(n−1) submatrix) to zero the second column below row 2
Continue: Hn⋯H2H1A=R (upper triangular)
Then Q=H1H2⋯Hn (product of reflectors, hence orthogonal), and A=QR.
Complexity: approximately 2mn² − 2n³/3 flops - the same asymptotic cost as Gram-Schmidt but with better numerical constants.
Stability: Householder QR is backward stable, meaning the computed factors satisfy Q^R^=A+E where ∥E∥=O(ϵmach∥A∥).
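A textbook-style sketch of Householder QR in NumPy (illustrative; production code stores the reflectors implicitly rather than forming each H_k and Q explicitly):

import numpy as np

def householder_qr(A):
    # QR via Householder reflectors - a sketch of the algorithm above, not LAPACK
    R = np.array(A, dtype=float)
    m, n = R.shape
    Q = np.eye(m)
    for k in range(n):
        a = R[k:, k].copy()
        sign = 1.0 if a[0] >= 0 else -1.0
        a[0] += sign * np.linalg.norm(a)            # v = a + sign(a_1) ||a|| e_1
        norm_v = np.linalg.norm(a)
        if norm_v == 0.0:                           # column already zero below the diagonal
            continue
        v = a / norm_v
        Hk = np.eye(m)
        Hk[k:, k:] -= 2.0 * np.outer(v, v)          # reflector embedded in the trailing block
        R = Hk @ R
        Q = Q @ Hk                                  # Q = H_1 H_2 ... H_n (each H_k is symmetric)
    return Q, R

A = np.random.default_rng(0).standard_normal((5, 3))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))   # A = QR with orthogonal Q
print(np.allclose(R, np.triu(R)))                               # R is (numerically) upper triangular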
6.3 Givens Rotations
Definition. The Givens rotationG(i,j,θ)∈Rn×n rotates in the (i,j)-plane:
G(i,j,θ)kl = cosθ if k=l=i or k=l=j;   −sinθ if k=i, l=j;   sinθ if k=j, l=i;   δkl otherwise
Purpose: Choose θ to zero a specific element aji by rotating in the (i,j)-plane.
When to use: Givens rotations are preferred when A is sparse (each rotation affects only two rows/columns), for parallel computation, and in eigenvalue algorithms like QR iteration.
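A minimal NumPy example of a Givens rotation zeroing a single entry (using one common sign convention; illustrative only):

import numpy as np

def givens(a, b):
    # Return (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0]
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s],
              [-s, c]])                      # rotation in the (1,2)-plane
print(np.allclose(G.T @ G, np.eye(2)))       # True: G is orthogonal
print(np.round(G @ A, 10))                   # the (2,1) entry has been zeroed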
7. QR Decomposition: Theory and Applications
7.1 Existence and Uniqueness
Theorem (QR Decomposition). Every matrix A∈Rm×n with m≥n admits a factorization:
A=QR
where Q∈Rm×n has orthonormal columns (Q⊤Q=In) and R∈Rn×n is upper triangular.
This is the thin (reduced) QR decomposition. There is also a full QR decomposition where Q∈Rm×m is a full orthogonal matrix and R∈Rm×n is upper trapezoidal.
Uniqueness (when A has full column rank). If we require Rii>0 for all i, the thin QR decomposition is unique.
Proof sketch: Two decompositions Q1R1=Q2R2 give Q2⊤Q1=R2R1−1. The left side is orthogonal; the right side is upper triangular. An upper triangular orthogonal matrix must be diagonal with ±1 entries. With the sign convention Rii>0, this diagonal must be I. □
When A has rank r<n: The decomposition still exists but R has n−r zero diagonal entries. The columns of Q corresponding to zero Rii can be chosen as any completion to an orthonormal set.
7.2 QR for Least Squares
Problem. Given A∈Rm×n (m>n) and b∈Rm, solve:
min_{x ∈ Rn} ∥Ax − b∥₂²
Solution via QR. Let A=QR (thin QR, Q⊤Q=I):
∥Ax−b∥2=∥QRx−b∥2=∥Rx−Q⊤b∥2+∥(I−QQ⊤)b∥2
The second term is independent of x, so the minimum occurs at:
Rx=Q⊤b
This is an upper triangular system, solved by back substitution in O(n2) operations.
Comparison with normal equations. The normal equations (A⊤A)x=A⊤b are equivalent but form A⊤A, which squares the condition number: κ(A⊤A)=κ(A)2. QR avoids this squaring and is numerically preferred whenever κ(A) is moderate.
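A NumPy sketch of least squares via thin QR, checked against numpy.linalg.lstsq (illustrative only; in practice the triangular system is solved by back substitution rather than a general solve):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
b = rng.standard_normal(50)

Q, R = np.linalg.qr(A)                            # thin QR
x_qr = np.linalg.solve(R, Q.T @ b)                # Rx = Q^T b (back substitution in practice)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]      # reference least-squares solution
print(np.allclose(x_qr, x_ref))                   # True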
7.3 QR Algorithm for Eigenvalues
Preview. The QR algorithm - the standard method for computing eigenvalues of symmetric matrices - works by iterating:
A0 = A,    Ak+1 = RkQk    where Ak = QkRk
Under mild conditions, Ak→ upper triangular (Schur form) and the diagonal entries converge to eigenvalues. This connects orthogonal matrices directly to spectral theory.
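A bare-bones unshifted QR iteration in NumPy (illustrative only; practical implementations use shifts and first reduce the matrix to Hessenberg or tridiagonal form):

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                                 # symmetric test matrix

Ak = A.copy()
for _ in range(500):                              # unshifted QR iteration
    Qk, Rk = np.linalg.qr(Ak)
    Ak = Rk @ Qk
print(np.round(np.sort(np.diag(Ak)), 6))
print(np.round(np.sort(np.linalg.eigvalsh(A)), 6))   # should match the diagonal above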
8. The Spectral Theorem for Symmetric Matrices
8.1 Statement and Proof
Theorem (Spectral Theorem). Every real symmetric matrix A=A⊤∈Rn×n has the following properties:
All eigenvalues are real: λi∈R
Eigenvectors for distinct eigenvalues are orthogonal
There exists an orthonormal basis of eigenvectors: A=QΛQ⊤ where Q∈O(n) and Λ=diag(λ1,…,λn)
Proof of (1): Suppose Av=λv with v∈Cn, v=0. Take the Hermitian inner product:
vˉ⊤Av = λvˉ⊤v = λ∥v∥²
On the other hand, the scalar vˉ⊤Av equals its own complex conjugate: conj(vˉ⊤Av) = vˉ⊤A⊤v = vˉ⊤Av (since A is real and A=A⊤), so vˉ⊤Av is real.
So λ∥v∥2 is real, hence λ∈R. □
Proof of (2): If Au=λu and Av=μv with λ ≠ μ:
λ⟨u,v⟩=⟨Au,v⟩=⟨u,A⊤v⟩=⟨u,Av⟩=μ⟨u,v⟩
(λ−μ)⟨u,v⟩=0. Since λ ≠ μ: ⟨u,v⟩=0. □
Proof of (3): By induction. The base case (n=1) is trivial. For the inductive step: let q1 be a unit eigenvector for eigenvalue λ1. Let W=q1⊥ (the orthogonal complement). Since A is symmetric, W is A-invariant (if ⟨w,q1⟩=0, then ⟨Aw,q1⟩=⟨w,Aq1⟩=λ1⟨w,q1⟩=0). Apply the inductive hypothesis to the (n−1)×(n−1) symmetric restriction A∣W. □
8.2 The Rayleigh Quotient
Definition. The Rayleigh quotient of a symmetric matrix A at a vector x ≠ 0 is:
RA(x) = (x⊤Ax) / (x⊤x)
Key properties:
RA(qi)=λi for eigenvectors qi
λmin≤RA(x)≤λmax for all x
maxxRA(x)=λmax, achieved at the top eigenvector
minxRA(x)=λmin, achieved at the bottom eigenvector
Proof of bounds: Expand x=∑iciqi in the eigenbasis:
RA(x) = (∑i λi ci²) / (∑i ci²)
This is a weighted average of eigenvalues, hence bounded by [λmin,λmax]. □
Courant-Fischer Min-Max Theorem. The k-th largest eigenvalue satisfies:
λk = max over subspaces S with dim(S) = k of  min over x ∈ S, x ≠ 0 of  RA(x)
This variational characterization is fundamental to perturbation theory and numerical linear algebra.
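A quick NumPy check of the Rayleigh-quotient bounds on a random symmetric matrix (illustrative only):

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2

def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

evals, evecs = np.linalg.eigh(A)                            # ascending eigenvalues, orthonormal eigenvectors
print(np.isclose(rayleigh(A, evecs[:, -1]), evals[-1]))     # True: R_A at the top eigenvector equals lambda_max
samples = np.array([rayleigh(A, x) for x in rng.standard_normal((1000, 5))])
print(evals[0] <= samples.min(), samples.max() <= evals[-1])   # True True: quotients lie in [lambda_min, lambda_max]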
8.3 Quadratic Forms and Symmetric Matrices
A quadratic form is f(x)=x⊤Ax=∑i,jAijxixj for symmetric A.
Positive definiteness via the spectral theorem:A≻0 (positive definite) ⇔ all eigenvalues >0.
Proof: In the eigenbasis, x⊤Ax = ∑i λi ci². This is positive for all x ≠ 0 iff all λi > 0. □
For AI: The Hessian ∇2L of a loss function is symmetric. Its eigenvalues tell us the curvature in each direction - large eigenvalues are "sharp" directions (fast loss change), small eigenvalues are "flat" directions. The Rayleigh quotient appears in sharpness-aware minimization (SAM), gradient clipping analysis, and understanding training dynamics.