Orthogonality and Orthonormality: Appendices A (Key Inequalities in Inner Product Spaces) through K (Common Numerical Experiments)
Appendix A: Key Inequalities in Inner Product Spaces
A.1 Cauchy-Schwarz in Different Forms
The Cauchy-Schwarz inequality has many equivalent forms:
Discrete form: $\left(\sum_{i=1}^n u_i v_i\right)^2 \le \left(\sum_{i=1}^n u_i^2\right)\left(\sum_{i=1}^n v_i^2\right)$
Integral form: $\left(\int f(x)\,g(x)\,dx\right)^2 \le \left(\int f(x)^2\,dx\right)\left(\int g(x)^2\,dx\right)$
Probability form: For random variables $X, Y$ with finite second moments: $\left(\mathbb{E}[XY]\right)^2 \le \mathbb{E}[X^2]\,\mathbb{E}[Y^2]$
The probability form directly implies: the correlation coefficient satisfies $\lvert\rho(X,Y)\rvert \le 1$ (apply it to the centered variables $X - \mathbb{E}[X]$ and $Y - \mathbb{E}[Y]$).
Second proof (via AM-GM). For any $t > 0$, term-by-term AM-GM gives $u_i v_i \le \tfrac{t}{2}u_i^2 + \tfrac{1}{2t}v_i^2$, so summing over $i$:
$\langle\mathbf{u},\mathbf{v}\rangle \le \tfrac{t}{2}\lVert\mathbf{u}\rVert^2 + \tfrac{1}{2t}\lVert\mathbf{v}\rVert^2.$
Minimizing over $t$ gives $t = \lVert\mathbf{v}\rVert/\lVert\mathbf{u}\rVert$ (for $\mathbf{u}\neq\mathbf{0}$) and:
$\langle\mathbf{u},\mathbf{v}\rangle \le \lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert.$
Applying the same bound to $-\mathbf{u}$ gives the lower bound $\langle\mathbf{u},\mathbf{v}\rangle \ge -\lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert$, hence $\lvert\langle\mathbf{u},\mathbf{v}\rangle\rvert \le \lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert$.
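A quick numerical sanity check of the discrete and probability forms (sample sizes, seed, and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(100), rng.standard_normal(100)

# Discrete form: <u,v>^2 <= ||u||^2 ||v||^2
assert (u @ v) ** 2 <= (u @ u) * (v @ v)

# Probability form via sample moments: E[XY]^2 <= E[X^2] E[Y^2]
X, Y = rng.standard_normal(10_000), rng.standard_normal(10_000)
assert np.mean(X * Y) ** 2 <= np.mean(X**2) * np.mean(Y**2)

# Correlation coefficient |rho| <= 1 follows by centering X and Y
rho = np.corrcoef(X, Y)[0, 1]
assert abs(rho) <= 1.0
```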
A.2 Triangle Inequality and Its Uses
The triangle inequality $\lVert\mathbf{u}+\mathbf{v}\rVert \le \lVert\mathbf{u}\rVert + \lVert\mathbf{v}\rVert$ for norms induced by inner products follows from Cauchy-Schwarz:
$\lVert\mathbf{u}+\mathbf{v}\rVert^2 = \lVert\mathbf{u}\rVert^2 + 2\langle\mathbf{u},\mathbf{v}\rangle + \lVert\mathbf{v}\rVert^2 \le \lVert\mathbf{u}\rVert^2 + 2\lVert\mathbf{u}\rVert\lVert\mathbf{v}\rVert + \lVert\mathbf{v}\rVert^2 = \left(\lVert\mathbf{u}\rVert + \lVert\mathbf{v}\rVert\right)^2.$
Parallelogram law: For any inner product space:
$\lVert\mathbf{u}+\mathbf{v}\rVert^2 + \lVert\mathbf{u}-\mathbf{v}\rVert^2 = 2\lVert\mathbf{u}\rVert^2 + 2\lVert\mathbf{v}\rVert^2$
Proof: Expand both norms and add. The cross terms cancel.
This law characterizes inner product spaces: a norm satisfying the parallelogram law comes from an inner product via the polarization identity:
$\langle\mathbf{u},\mathbf{v}\rangle = \tfrac{1}{4}\left(\lVert\mathbf{u}+\mathbf{v}\rVert^2 - \lVert\mathbf{u}-\mathbf{v}\rVert^2\right)$
Practical importance: The $\ell_1$ and $\ell_\infty$ norms do NOT satisfy the parallelogram law, confirming they do not arise from inner products. This is why optimization with $\ell_1$ penalties (LASSO) has different geometry from $\ell_2$ penalties (ridge regression) - there is no meaningful notion of "orthogonality" in $\ell_1$.
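A small sketch that verifies the parallelogram law for the Euclidean norm and shows it failing for the $\ell_1$ norm (the example vectors are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, -1.0, 3.0])

def parallelogram_gap(u, v, norm):
    """||u+v||^2 + ||u-v||^2 - 2||u||^2 - 2||v||^2 (zero iff the law holds)."""
    return norm(u + v) ** 2 + norm(u - v) ** 2 - 2 * norm(u) ** 2 - 2 * norm(v) ** 2

print(parallelogram_gap(u, v, lambda x: np.linalg.norm(x, 2)))  # ~0: law holds for l2
print(parallelogram_gap(u, v, lambda x: np.linalg.norm(x, 1)))  # nonzero: fails for l1
```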
A.3 Weyl's Inequality for Eigenvalue Perturbation
When a symmetric matrix $A$ is perturbed to $A + E$ (with $E$ symmetric), Weyl's inequality bounds the eigenvalue shifts:
$\lvert\lambda_k(A+E) - \lambda_k(A)\rvert \le \lVert E\rVert_2 \quad \text{for every } k.$
This means: the eigenvalues of a symmetric matrix are Lipschitz functions of the matrix entries, with Lipschitz constant 1 in the spectral norm.
Consequence for ML: If the Hessian changes slowly along a training trajectory (small $\lVert E\rVert_2$ between successive steps), its eigenvalues also change slowly. This justifies quasi-Newton methods that assume the curvature is approximately constant over several steps.
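A quick numerical check of Weyl's bound on a random symmetric matrix with a small symmetric perturbation (sizes, seed, and scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)); A = (A + A.T) / 2          # symmetric matrix
E = 1e-3 * rng.standard_normal((n, n)); E = (E + E.T) / 2   # small symmetric perturbation

# eigvalsh returns sorted eigenvalues, so index k matches lambda_k on both sides
shift = np.abs(np.linalg.eigvalsh(A + E) - np.linalg.eigvalsh(A)).max()
bound = np.linalg.norm(E, 2)                                 # spectral norm of E
print(shift, bound, shift <= bound + 1e-12)
```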
Appendix B: Numerical Orthogonalization Recipes
B.1 When to Use Each Method
| Method | Complexity | Stability | When to Use |
|---|---|---|---|
| Classical Gram-Schmidt (CGS) | $\approx 2mn^2$ flops | Poor: orthogonality error $O(u\,\kappa^2(A))$ | Never in production; educational use only |
| Modified Gram-Schmidt (MGS) | $\approx 2mn^2$ flops | Good: $O(u\,\kappa(A))$ | When an explicit $Q$ is needed and $\kappa(A)$ is moderate |
| Householder QR | $\approx 2mn^2 - \tfrac{2}{3}n^3$ flops (factors only) | Excellent: $O(u)$ | General QR factorization; recommended default |
| Givens QR | Higher constant than Householder on dense matrices | Excellent | Sparse matrices; parallel processing |
| Randomized QR | Depends on sketch size / target rank | Good | Large matrices; approximate low-rank QR |
B.2 Loss of Orthogonality: Quantifying the Damage
In IEEE double precision ($u \approx 1.1\times 10^{-16}$), the computed $\hat{Q}$ from CGS on an $m\times n$ matrix $A$ satisfies:
$\lVert\hat{Q}^T\hat{Q} - I\rVert \approx O\!\left(u\,\kappa^2(A)\right)$
For a well-conditioned matrix ($\kappa(A)\approx 10$): error $\approx 10^{-14}$ - acceptable.
For an ill-conditioned matrix ($\kappa(A)\approx 10^8$): error $\approx 1$ - completely non-orthogonal!
MGS improves this to $O(u\,\kappa(A))$, and Householder achieves $O(u)$ regardless of $\kappa(A)$.
B.3 Reorthogonalization
When MGS produces inadequate orthogonality (common for nearly-dependent input vectors), apply reorthogonalization: after orthogonalizing each column once, orthogonalize it a second time against the already-computed $\mathbf{q}_1,\dots,\mathbf{q}_{j-1}$ before normalizing ("twice is enough"). This reduces the orthogonality error from $O(u\,\kappa(A))$ to $O(u)$, which is typically negligible.
import numpy as np

def mgs_with_reorth(A, num_reorth=1):
    """Modified Gram-Schmidt with optional reorthogonalization passes."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float).copy()
        for reorth in range(1 + num_reorth):
            for i in range(j):
                s = Q[:, i] @ v                               # component of the residual along q_i
                R[i, j] = s if reorth == 0 else R[i, j] + s   # accumulate the reorthogonalization correction
                v -= s * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
B.4 Block Gram-Schmidt for Parallel Computation
For very large matrices on parallel hardware, block Gram-Schmidt processes columns in blocks of size $b$. For each block:
- Orthogonalize the block against all previously processed blocks using a BLAS-3 matrix-matrix operation
- Orthonormalize within the block using MGS or Householder
The key advantage: the inter-block step uses matrix-matrix products (BLAS Level 3) rather than a long sequence of matrix-vector products (BLAS Level 1/2). On modern hardware with SIMD and caching, this gives a 10-100× speedup for large matrices.
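A minimal sketch of this scheme; `np.linalg.qr` handles the within-block step, and the block size and the single inter-block projection pass are illustrative simplifications (robust implementations reorthogonalize):

```python
import numpy as np

def block_gram_schmidt(A, b=64):
    """Block Gram-Schmidt: orthonormalize the columns of A, processing b columns at a time."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for start in range(0, n, b):
        stop = min(start + b, n)
        V = A[:, start:stop].copy()
        if start > 0:
            # Orthogonalize the new block against all previous blocks (matrix-matrix products, BLAS-3)
            V -= Q[:, :start] @ (Q[:, :start].T @ V)
        # Orthonormalize within the block
        Q[:, start:stop], _ = np.linalg.qr(V)
    return Q

A = np.random.randn(500, 200)
Q = block_gram_schmidt(A)
print(np.max(np.abs(Q.T @ Q - np.eye(200))))  # close to machine precision for well-conditioned A
```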
Appendix C: Orthogonality in Machine Learning Libraries
C.1 PyTorch Orthogonal Utilities
import torch
import torch.nn as nn
# Orthogonal initialization
linear = nn.Linear(64, 64)
nn.init.orthogonal_(linear.weight)  # Q factor from the QR of a random Gaussian matrix
# Spectral normalization
conv = nn.utils.spectral_norm(nn.Conv2d(64, 64, 3))
# Divides weight by its spectral norm at each forward pass
# Orthogonal (soft) regularization: penalize the deviation of W^T W from the identity
def orthogonal_reg(model, lambda_=1e-4):
    loss = 0.0
    for name, param in model.named_parameters():
        if 'weight' in name and param.dim() == 2:
            W = param
            I = torch.eye(W.shape[1], device=W.device, dtype=W.dtype)
            loss += lambda_ * torch.norm(W.T @ W - I)
    return loss
C.2 NumPy/SciPy QR and Orthogonal Matrix Tools
import numpy as np
from scipy.linalg import qr, qr_multiply, polar

n = 5
A = np.random.randn(8, n)  # example matrix with full column rank
# Full QR decomposition
Q, R = np.linalg.qr(A)  # mode='reduced' (default) or 'complete'
Q, R, P = qr(A, pivoting=True) # QR with column pivoting
# Random orthogonal matrix (Haar measure)
M = np.random.randn(n, n)
Q, R = np.linalg.qr(M)
Q *= np.sign(np.diag(R)) # Fix signs for uniform distribution
# Polar decomposition: A = Q @ S
Q, S = polar(A) # scipy.linalg.polar
# Check orthogonality
I_approx = Q.T @ Q
print(f"Max off-diagonal: {np.max(np.abs(I_approx - np.eye(n))):.2e}")
C.3 Efficient Householder Representation
For large $m$, storing the full $m\times m$ orthogonal factor $Q$ explicitly requires $O(m^2)$ space. The compact WY representation stores $Q = I - VTV^T$, where $V\in\mathbb{R}^{m\times k}$ holds the $k$ Householder vectors and $T\in\mathbb{R}^{k\times k}$ is upper triangular, requiring only $O(mk)$ space and allowing products $Q\mathbf{x}$ in $O(mk)$ time. The simpler factored form below keeps the pairs $(\mathbf{v}_i, \tau_i)$ and applies the reflectors one at a time:
def apply_householder_sequence(v_list, tau_list, x):
    """Apply Q = H_1 H_2 ... H_k to x using the factored representation (v_i, tau_i)."""
    # Q x = H_1 (H_2 (... (H_k x))), so apply H_k first and H_1 last
    for v, tau in reversed(list(zip(v_list, tau_list))):
        x = x - tau * v * (v @ x)  # Apply H_i = I - tau_i * v_i * v_i^T
    return x
This is how LAPACK and NumPy store $Q$ internally - only the Householder vectors (and the scalars $\tau_i$) are saved, not the explicit matrix.
Appendix D: The Geometry of Projection - Visual Summary
ORTHOGONAL DECOMPOSITION THEOREM
========================================================================
The Euclidean space \mathbb{R}^n splits into complementary orthogonal subspaces:
For any subspace S \subseteq \mathbb{R}^n:
\mathbb{R}^n = S \oplus S^\perp
+------------------------------------------+
| \mathbb{R}^n |
| +----------------+ |
| | S | |
| | v_S = Pv | |
| | - | |
| | / | |
| | / | |
| v- |/ <- v_S | |
| \ +----------------+ |
| \ | orthogonal |
| \ | complement |
| \ | S\perp |
| ------- + ------------------------- |
| v - v_S | (v - v_S \perp every s \in S) |
+------------------------------------------+
Properties of projection matrix P = QQ^T (Q has ONB for S):
Symmetric: P^T = P (orthogonal, not oblique)
Idempotent: P^2 = P (projecting twice = projecting once)
Range: col(P) = S (maps into S)
Null space: null(P) = S\perp (annihilates complement)
========================================================================
GRAM-SCHMIDT PROCESS - STEP BY STEP
========================================================================
Input: linearly independent vectors a_1, a_2, a_3 (not orthogonal)
Output: orthonormal basis q_1, q_2, q_3 for same subspace
STEP 1: q_1 = a_1 / ||a_1||
------------------------------------
a_1 --------------------------> q_1 (unit vector)
STEP 2: u_2 = a_2 - <a_2,q_1>q_1 (subtract projection onto q_1)
q_2 = u_2 / ||u_2||
------------------------------------
a_2 ---> remove q_1 component ---> u_2 ---> q_2
a_2
/
/ up <a_2,q_1>q_1 (this part removed)
/
q_1 ------------ q_1 direction
u_2 (perpendicular remainder) --> q_2
STEP 3: u_3 = a_3 - <a_3,q_1>q_1 - <a_3,q_2>q_2
q_3 = u_3 / ||u_3||
------------------------------------
Remove projections onto BOTH q_1 and q_2
RESULT: {q_1, q_2, q_3} is an ONB with same span as {a_1, a_2, a_3}
QR FACTORIZATION: A = [a_1|a_2|a_3] = [q_1|q_2|q_3] * R
where R is upper triangular with positive diagonal.
========================================================================
SPECTRAL THEOREM: SYMMETRIC <-> ORTHOGONAL EIGENBASIS
========================================================================
A = A^T (symmetric)
<->
A has orthonormal eigenbasis {q_1, ..., q_n}
<->
A = Q \Lambda Q^T where Q = [q_1|...|q_n] \in O(n), \Lambda = diag(\lambda_1,...,\lambda_n)
<->
Quadratic form: x^T A x = \Sigma_i \lambda_i (x^T q_i)^2 = weighted sum of squared projections
INTERPRETATION:
+-----------------------------------------------------------------+
| Q^T: Express x in eigenbasis -> \Lambda: Scale each component -> |
| Q: Return to standard basis |
| |
| \lambda_i > 0: stretch in q_i direction (positive definite: all \lambda_i > 0)|
| \lambda_i = 0: collapse in q_i direction (singular matrix) |
| \lambda_i < 0: reflection in q_i direction (indefinite: mixed signs) |
+-----------------------------------------------------------------+
AI CONNECTION: Hessian \nabla^2\mathcal{L} is symmetric. Its eigenvectors are the
"principal curvature directions" of the loss surface. Eigenvalues
tell us how curved the loss is in each direction.
========================================================================
ORTHOGONAL MATRICES: STRUCTURE AND CLASSIFICATION
========================================================================
Q^TQ = I (definition) => det(Q) = \pm1
det(Q) = +1: ROTATIONS (SO(n)) det(Q) = -1: REFLECTIONS
---------------------------- --------------------------
Preserve orientation Reverse orientation
Examples in 2D: R_\theta for all \theta Example: diag(1,...,1,-1)
Compose to give rotations Compose to give rotations
O(n) = SO(n) \cup {reflections} (two connected components)
EIGENVALUES: always on unit circle \in \mathbb{C}
+-----------------------------------------------------------------+
| Real eigenvalues: \pm1 only |
| Complex eigenvalues: e^{i\theta}, e^{-i\theta} pairs -> 2D rotations |
| |
| SO(2n) block structure: |
| +----------+-------+ |
| | R_{\theta_1} | | each block is a 2\times2 rotation |
| +----------+ R_{\theta_2}| with its own angle \theta_i |
| | +-------+ |
| | | ... | |
| +----------+-------+ |
+-----------------------------------------------------------------+
========================================================================
Appendix E: Summary of Key Formulas
Inner Products and Orthogonality
| Formula | Expression | Notes |
|---|---|---|
| Inner product | $\langle\mathbf{u},\mathbf{v}\rangle = \mathbf{u}^T\mathbf{v} = \sum_{i=1}^n u_i v_i$ | Euclidean; generalizes to abstract inner product spaces |
| Norm | $\lVert\mathbf{v}\rVert = \sqrt{\langle\mathbf{v},\mathbf{v}\rangle}$ | Induced by inner product |
| Angle | $\cos\theta = \dfrac{\langle\mathbf{u},\mathbf{v}\rangle}{\lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert}$ | Requires both nonzero |
| Cauchy-Schwarz | $\lvert\langle\mathbf{u},\mathbf{v}\rangle\rvert \le \lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert$ | Equality iff linearly dependent |
| Pythagorean theorem | $\mathbf{u}\perp\mathbf{v} \Rightarrow \lVert\mathbf{u}+\mathbf{v}\rVert^2 = \lVert\mathbf{u}\rVert^2 + \lVert\mathbf{v}\rVert^2$ | Converse also holds |
| Parseval's identity | $\lVert\mathbf{v}\rVert^2 = \sum_{i=1}^n \lvert\langle\mathbf{v},\mathbf{q}_i\rangle\rvert^2$ | For an ONB $\{\mathbf{q}_i\}$ |
| Bessel's inequality | $\sum_{i=1}^k \lvert\langle\mathbf{v},\mathbf{q}_i\rangle\rvert^2 \le \lVert\mathbf{v}\rVert^2$ | For any orthonormal set |
Projections
| Formula | Expression | Notes |
|---|---|---|
| Proj onto unit vector $\mathbf{q}$ | $P\mathbf{v} = \langle\mathbf{v},\mathbf{q}\rangle\mathbf{q}$ | 1D case |
| Proj onto subspace (ONB $\mathbf{q}_1,\dots,\mathbf{q}_k$) | $P\mathbf{v} = \sum_{i=1}^k\langle\mathbf{v},\mathbf{q}_i\rangle\mathbf{q}_i = QQ^T\mathbf{v}$ | Requires orthonormal $\mathbf{q}_i$ |
| Proj onto $\mathrm{col}(A)$ (general) | $P = A(A^TA)^{-1}A^T$ | General basis; $A$ full column rank |
| Projection matrix properties | $P^2 = P$, $P^T = P$ | Idempotent + symmetric |
| Complementary projector | $I - P$ projects onto $S^\perp$ | Also a projector |
| Best approx property | $\lVert\mathbf{v} - P\mathbf{v}\rVert \le \lVert\mathbf{v} - \mathbf{s}\rVert$ for all $\mathbf{s}\in S$ | Fundamental theorem |
Gram-Schmidt and QR
| Formula | Expression | Notes |
|---|---|---|
| GS step | $\mathbf{u}_k = \mathbf{a}_k - \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\mathbf{q}_i$, then $\mathbf{q}_k = \mathbf{u}_k/\lVert\mathbf{u}_k\rVert$ | Remove prior directions |
| QR factorization | $A = QR$ | $Q$ orthonormal cols, $R$ upper triangular |
| Least squares via QR | $R\hat{\mathbf{x}} = Q^T\mathbf{b}$ | Solve upper triangular system |
| Householder reflector | $H = I - 2\dfrac{\mathbf{v}\mathbf{v}^T}{\mathbf{v}^T\mathbf{v}}$ | Reflection across the hyperplane $\mathbf{v}^\perp$ |
| Cayley transform | $Q = (I - S)(I + S)^{-1}$ | From skew-symmetric $S$ ($S^T = -S$) |
Spectral Theorem
| Formula | Expression | Notes |
|---|---|---|
| Spectral decomposition | $A = Q\Lambda Q^T = \sum_{i=1}^n \lambda_i\mathbf{q}_i\mathbf{q}_i^T$ | Symmetric $A$ only |
| Rayleigh quotient | $R(\mathbf{x}) = \dfrac{\mathbf{x}^TA\mathbf{x}}{\mathbf{x}^T\mathbf{x}}$ | Bounded by $[\lambda_{\min},\lambda_{\max}]$ |
| Positive definiteness | $\mathbf{x}^TA\mathbf{x} > 0$ for all $\mathbf{x}\neq\mathbf{0}$ iff all $\lambda_i > 0$ | Via spectral theorem |
| Polar decomposition | $A = QS$ with $Q\in O(n)$, $S = S^T \succeq 0$ | Extends to non-square $A$ |
Appendix F: Connections to Other Chapters
F.1 From Eigenvalues (01) to Orthogonality
The spectral theorem shows that symmetric matrices are the matrices that are diagonalizable by orthogonal transformations. Non-symmetric matrices may still be diagonalizable, but with a non-orthogonal change of basis - making computations numerically less stable.
The QR algorithm for computing eigenvalues (01) is built entirely on orthogonal matrix iterations:
A_0 = A
for k = 0, 1, 2, ...:
A_k = Q_k R_k (QR factorization of current iterate)
A_{k+1} = R_k Q_k (swap factors)
Each iterate is orthogonally similar to the previous: $A_{k+1} = R_kQ_k = Q_k^TA_kQ_k$ (since $R_k = Q_k^TA_k$ and $Q_k^TQ_k = I$). The iterates converge to the Schur form - upper triangular (diagonal in the symmetric case), with eigenvalues on the diagonal. The entire algorithm is numerically stable precisely because it uses only orthogonal transformations.
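A runnable version of the iteration above on a small symmetric matrix (unshifted, so it is for illustration only; practical implementations use Hessenberg reduction and shifts):

```python
import numpy as np

def qr_algorithm(A, n_iter=200):
    """Unshifted QR iteration: each A_{k+1} = R_k Q_k is orthogonally similar to A_k."""
    Ak = A.copy()
    for _ in range(n_iter):
        Qk, Rk = np.linalg.qr(Ak)
        Ak = Rk @ Qk
    return Ak

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
Ak = qr_algorithm(A)
print(np.round(np.diag(Ak), 6))              # approximate eigenvalues on the diagonal
print(np.sort(np.linalg.eigvalsh(A))[::-1])  # reference eigenvalues for comparison
```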
F.2 From SVD (02) to Orthogonality
The SVD $A = U\Sigma V^T$ can be viewed as the "orthogonality structure" of $A$:
- $V$: ONB for the row space, mapping inputs to "pure directions"
- $U$: ONB for the column space, the natural output directions
- $\Sigma$: the non-orthogonal part (pure scaling)
The polar decomposition $A = QS$ isolates the orthogonal factor $Q = UV^T$ from the symmetric positive semidefinite factor $S = V\Sigma V^T$. This factorization requires all the tools developed in this section.
F.3 Orthogonality Enables Low-Rank Approximation (02, 03)
Eckart-Young-Mirsky theorem: The best rank-$k$ approximation to $A$ in both the spectral and Frobenius norms is the truncated SVD:
$A_k = \sum_{i=1}^k \sigma_i\,\mathbf{u}_i\mathbf{v}_i^T$
This result requires:
- The SVD (which produces the orthogonal factors $U$ and $V$)
- The best approximation theorem (projection gives the nearest element in a subspace)
- Parseval's identity (to express the error as $\lVert A - A_k\rVert_F^2 = \sum_{i>k}\sigma_i^2$)
All three are developed in this section. The Eckart-Young theorem is the mathematical foundation for PCA, LoRA, and any other low-rank approximation method.
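A short numerical check of this statement (dimensions and seed are illustrative): truncating the SVD at rank $k$ produces errors that match the discarded singular values exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 30))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # best rank-k approximation

print(np.linalg.norm(A - A_k, 'fro'), np.sqrt(np.sum(s[k:] ** 2)))  # Frobenius error = RSS of discarded sigmas
print(np.linalg.norm(A - A_k, 2), s[k])                             # spectral error = first discarded sigma
```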
F.4 Orthogonality and Matrix Norms (06)
Unitarily invariant norms: A matrix norm $\lVert\cdot\rVert$ is unitarily invariant if $\lVert UAV\rVert = \lVert A\rVert$ for all unitary (in the real case, orthogonal) $U$ and $V$. The three most important matrix norms are all unitarily invariant:
- Spectral norm: $\lVert A\rVert_2 = \sigma_1$ (largest singular value)
- Frobenius norm: $\lVert A\rVert_F = \left(\sum_i\sigma_i^2\right)^{1/2}$ (square root of the sum of squared singular values)
- Nuclear norm: $\lVert A\rVert_* = \sum_i\sigma_i$ (sum of singular values)
These norms are "orthogonality-aware" - they don't change when the matrix is multiplied by orthogonal matrices. This property is what allows the Eckart-Young theorem to work: the error in low-rank approximation depends only on the discarded singular values, not on the choice of orthogonal bases.
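A quick sketch verifying the invariance numerically with random orthogonal factors (sizes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 15))
Q1, _ = np.linalg.qr(rng.standard_normal((20, 20)))   # random orthogonal (left)
Q2, _ = np.linalg.qr(rng.standard_normal((15, 15)))   # random orthogonal (right)
B = Q1 @ A @ Q2

for name, norm in [("spectral", lambda M: np.linalg.norm(M, 2)),
                   ("Frobenius", lambda M: np.linalg.norm(M, 'fro')),
                   ("nuclear", lambda M: np.linalg.svd(M, compute_uv=False).sum())]:
    print(name, norm(A), norm(B))   # each pair agrees up to rounding error
```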
F.5 Orthogonality in Optimization (Chapter 08)
Modern optimization methods for training large neural networks increasingly exploit orthogonal structure:
Sharpness-Aware Minimization (SAM) and related sharpness measures: The sharpness of a minimum is governed by $\lambda_{\max}(\nabla^2\mathcal{L})$, the largest Hessian eigenvalue. Estimating it - and the associated eigenvector along which sharpness-aware methods probe the loss - is done approximately with power iteration: a sequence of normalized vectors converging to the top eigenvector, where each step is a Hessian-vector product followed by re-normalization (orthogonalization against a zero-dimensional subspace = normalization).
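A minimal power-iteration sketch; an explicit symmetric matrix stands in for the Hessian, whereas in practice the matvec would be a Hessian-vector product computed by automatic differentiation (names and sizes are illustrative):

```python
import numpy as np

def power_iteration(matvec, dim, n_iter=100, seed=0):
    """Estimate the top eigenvalue/eigenvector using only matrix-vector products."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = matvec(v)
        v = w / np.linalg.norm(w)     # re-normalize each step
    return v @ matvec(v), v           # Rayleigh quotient, eigenvector estimate

H = np.array([[3.0, 1.0], [1.0, 2.0]])        # stand-in for a Hessian
lam_max, v_max = power_iteration(lambda x: H @ x, dim=2)
print(lam_max, np.max(np.linalg.eigvalsh(H)))  # estimate vs exact top eigenvalue
```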
SOAP optimizer (2024): Maintains a factored approximation of the Adam preconditioner in which the "rotation" and "scaling" factors are tracked separately. The rotation factor is updated via QR decomposition to maintain orthogonality, while the scaling factor tracks the second moments in the rotating basis.
K-FAC and its variants: The Kronecker-Factored Approximate Curvature method approximates the Fisher information matrix of a layer as a Kronecker product $F \approx A \otimes B$ of two much smaller matrices. Computing the update requires inverting these factors, which can be done via their eigendecompositions - and the eigenvector matrices are orthogonal.
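A sketch of why the Kronecker structure makes the preconditioner cheap to apply: with eigendecompositions of the two small factors, applying $(A\otimes B)^{-1}$ to a gradient reshaped as a matrix reduces to a few small matrix products (dimensions and names are illustrative; real K-FAC also adds damping):

```python
import numpy as np

rng = np.random.default_rng(4)
dA, dB = 6, 4
A = rng.standard_normal((dA, dA)); A = A @ A.T + np.eye(dA)   # SPD Kronecker factor
B = rng.standard_normal((dB, dB)); B = B @ B.T + np.eye(dB)   # SPD Kronecker factor
G = rng.standard_normal((dB, dA))                             # gradient reshaped as a matrix

# Eigendecompositions of the factors (orthogonal eigenvector matrices)
lamA, QA = np.linalg.eigh(A)
lamB, QB = np.linalg.eigh(B)

# (A kron B)^{-1} vec(G) == vec( QB [ (QB^T G QA) / (lamB outer lamA) ] QA^T )
G_tilde = QB.T @ G @ QA
G_tilde /= np.outer(lamB, lamA)          # divide entrywise by eigenvalue products
update = QB @ G_tilde @ QA.T

# Check against the explicit Kronecker inverse (column-stacking vec convention)
ref = np.linalg.solve(np.kron(A, B), G.flatten(order='F')).reshape((dB, dA), order='F')
print(np.max(np.abs(update - ref)))      # ~1e-12
```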
Appendix G: Historical Notes and References
Timeline: Development of Orthogonality Theory
| Year | Mathematician | Contribution |
|---|---|---|
| 1801 | Gauss | Least squares method; orthogonal projection as normal equations |
| 1844 | Grassmann | Abstract vector spaces and subspaces; inner product generalization |
| 1850s | Cauchy, Schwarz | Cauchy-Schwarz inequality (Cauchy for sums 1821, Schwarz integral form 1888) |
| 1897 | Gram | Gram-Schmidt process (Gram's formulation for functions) |
| 1902 | Schmidt | Gram-Schmidt orthogonalization in its modern formulation |
| 1907 | Parseval | Parseval's theorem for Fourier series |
| 1908 | Hilbert | Abstract Hilbert spaces; orthogonal projection theorem |
| 1915-1927 | Weyl, von Neumann | Spectral theorem in infinite dimensions |
| 1954 | Householder | Householder reflectors for numerically stable QR |
| 1965 | Francis | QR algorithm for eigenvalues (with implicit shifts) |
| 1967 | Bjorck | Numerical analysis of Gram-Schmidt stability |
| 2013 | Saxe, McClelland, Ganguli | Orthogonal initialization for deep linear networks |
| 2015 | Mishkin, Matas | LSUV initialization using SVD/QR |
| 2018 | Miyato et al. | Spectral normalization for GANs |
| 2021 | Su et al. | RoPE: Rotary Position Embedding using orthogonal rotations |
| 2024 | Jordan et al. | Muon optimizer: gradient orthogonalization via Newton-Schulz |
Key References
- Golub & Van Loan, Matrix Computations (4th ed., 2013) - Definitive reference on numerical linear algebra including Gram-Schmidt, Householder, and QR algorithms
- Axler, Linear Algebra Done Right (3rd ed., 2015) - Elegant proof-based treatment of inner product spaces and spectral theorem
- Trefethen & Bau, Numerical Linear Algebra (1997) - Excellent coverage of QR, projections, and stability from computational perspective
- Saxe et al. (2013), "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks" - Theoretical foundation for orthogonal initialization
- Miyato et al. (2018), "Spectral Normalization for GANs" - Seminal paper on spectral normalization in deep learning
- Su et al. (2021), "RoFormer: Enhanced Transformer with Rotary Position Embedding" - RoPE using orthogonal rotation matrices
- Jordan et al. (2024), "Muon: An optimizer for hidden layers in neural networks" - Newton-Schulz orthogonalization for gradients
Appendix H: Proofs of Selected Theorems
H.1 Complete Proof: Cauchy-Schwarz Inequality
Theorem. For any inner product space $V$ and vectors $\mathbf{u},\mathbf{v}\in V$:
$\lvert\langle\mathbf{u},\mathbf{v}\rangle\rvert \le \lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert$
Proof. If $\mathbf{v} = \mathbf{0}$, both sides are 0 and the inequality holds trivially. Assume $\mathbf{v}\neq\mathbf{0}$.
For any scalar $t$:
$0 \le \lVert\mathbf{u} - t\mathbf{v}\rVert^2 = \lVert\mathbf{u}\rVert^2 - 2t\langle\mathbf{u},\mathbf{v}\rangle + t^2\lVert\mathbf{v}\rVert^2$
This is a quadratic in $t$ that is always $\ge 0$. Its discriminant must therefore be $\le 0$:
$4\langle\mathbf{u},\mathbf{v}\rangle^2 - 4\lVert\mathbf{u}\rVert^2\lVert\mathbf{v}\rVert^2 \le 0 \;\Longrightarrow\; \lvert\langle\mathbf{u},\mathbf{v}\rangle\rvert \le \lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert$
Equality condition. Equality holds iff the discriminant is zero, iff the quadratic in $t$ has a (double) root, iff there exists $t$ with $\mathbf{u} = t\mathbf{v}$, iff $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent (one is a scalar multiple of the other).
Remark. This proof works for any abstract (real) inner product space, finite or infinite dimensional. The key fact was that $\langle\cdot,\cdot\rangle$ is positive definite, so $\lVert\mathbf{u}-t\mathbf{v}\rVert^2 \ge 0$ for every $t$.
H.2 Complete Proof: Best Approximation Uniqueness
Theorem. Let $S$ be a closed subspace of a Hilbert space $H$. For any $\mathbf{v}\in H$, the projection $\mathbf{p}\in S$ satisfying $\mathbf{v}-\mathbf{p}\perp S$ is unique.
Proof of uniqueness. Suppose both $\mathbf{p}_1,\mathbf{p}_2\in S$ satisfy $\mathbf{v}-\mathbf{p}_1\perp S$ and $\mathbf{v}-\mathbf{p}_2\perp S$. Then $\mathbf{p}_1-\mathbf{p}_2\in S$ (since $S$ is a subspace). But also $\mathbf{p}_1-\mathbf{p}_2 = (\mathbf{v}-\mathbf{p}_2)-(\mathbf{v}-\mathbf{p}_1)\perp S$ (as a difference of vectors each perpendicular to $S$). So $\mathbf{p}_1-\mathbf{p}_2\perp\mathbf{p}_1-\mathbf{p}_2$, giving $\lVert\mathbf{p}_1-\mathbf{p}_2\rVert^2 = 0$, i.e. $\mathbf{p}_1 = \mathbf{p}_2$.
H.3 Complete Proof: Gram-Schmidt Produces ONB
Theorem. If $\mathbf{a}_1,\dots,\mathbf{a}_n$ are linearly independent vectors in an inner product space, the Gram-Schmidt process produces an orthonormal set $\{\mathbf{q}_1,\dots,\mathbf{q}_n\}$ with $\mathrm{span}\{\mathbf{q}_1,\dots,\mathbf{q}_k\} = \mathrm{span}\{\mathbf{a}_1,\dots,\mathbf{a}_k\}$ for each $k$.
Proof by induction on $k$.
Base case ($k=1$): $\mathbf{q}_1 = \mathbf{a}_1/\lVert\mathbf{a}_1\rVert$ is a unit vector (since $\mathbf{a}_1\neq\mathbf{0}$ by independence), and $\mathrm{span}\{\mathbf{q}_1\} = \mathrm{span}\{\mathbf{a}_1\}$. OK
Inductive step: Assume the claim holds for $k-1$. Define $\mathbf{u}_k = \mathbf{a}_k - \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\mathbf{q}_i$ and $\mathbf{q}_k = \mathbf{u}_k/\lVert\mathbf{u}_k\rVert$.
(i) $\mathbf{u}_k\neq\mathbf{0}$: Suppose for contradiction $\mathbf{u}_k=\mathbf{0}$. Then $\mathbf{a}_k = \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\mathbf{q}_i \in \mathrm{span}\{\mathbf{a}_1,\dots,\mathbf{a}_{k-1}\}$, contradicting linear independence.
(ii) $\mathbf{u}_k\perp\mathbf{q}_j$ for $j<k$: For any $j<k$:
$\langle\mathbf{u}_k,\mathbf{q}_j\rangle = \langle\mathbf{a}_k,\mathbf{q}_j\rangle - \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\langle\mathbf{q}_i,\mathbf{q}_j\rangle = \langle\mathbf{a}_k,\mathbf{q}_j\rangle - \langle\mathbf{a}_k,\mathbf{q}_j\rangle = 0$
(using the inductive hypothesis $\langle\mathbf{q}_i,\mathbf{q}_j\rangle = \delta_{ij}$). Since $\mathbf{q}_k$ is a positive multiple of $\mathbf{u}_k$, we also have $\mathbf{q}_k\perp\mathbf{q}_j$. OK
(iii) Same span: by construction and the inductive hypothesis, $\mathbf{q}_k \in \mathrm{span}\{\mathbf{a}_1,\dots,\mathbf{a}_k\}$. Conversely, $\mathbf{a}_k = \lVert\mathbf{u}_k\rVert\,\mathbf{q}_k + \sum_{i=1}^{k-1}\langle\mathbf{a}_k,\mathbf{q}_i\rangle\mathbf{q}_i \in \mathrm{span}\{\mathbf{q}_1,\dots,\mathbf{q}_k\}$, so the spans coincide. OK
By induction, the claim holds for all $k\le n$.
Appendix I: The Stiefel Manifold and Optimization on Orthogonal Groups
I.1 The Stiefel Manifold
The set of $n\times k$ matrices with orthonormal columns is the Stiefel manifold:
$V_k(\mathbb{R}^n) = \{X\in\mathbb{R}^{n\times k} : X^TX = I_k\}$
Special cases:
- $k = n$: $V_n(\mathbb{R}^n) = O(n)$ - the full orthogonal group
- $k = 1$: $V_1(\mathbb{R}^n) = S^{n-1}$ - the unit sphere in $\mathbb{R}^n$
- $1 < k < n$ - Stiefel manifolds proper
Dimension: $\dim V_k(\mathbb{R}^n) = nk - \tfrac{k(k+1)}{2}$
Intuition: We have $nk$ free parameters (entries of an $n\times k$ matrix), subject to $\tfrac{k(k+1)}{2}$ constraints ($k$ normalization constraints + $\binom{k}{2}$ orthogonality constraints between distinct columns).
Tangent space at $X$: The tangent space $T_XV_k$ consists of matrices $\Delta$ satisfying $X^T\Delta + \Delta^TX = 0$, i.e., $X^T\Delta$ is skew-symmetric:
$T_XV_k = \{\Delta\in\mathbb{R}^{n\times k} : X^T\Delta + \Delta^TX = 0\}$
I.2 Riemannian Gradient Descent on the Stiefel Manifold
To minimize $f(X)$ subject to $X^TX = I_k$:
Step 1: Euclidean gradient. Compute $G = \nabla f(X)$ in $\mathbb{R}^{n\times k}$.
Step 2: Project to tangent space. The Riemannian gradient is the projection of $G$ onto $T_XV_k$:
$\mathrm{grad}\,f(X) = G - X\,\mathrm{sym}(X^TG)$, where $\mathrm{sym}(M) = \tfrac{1}{2}(M + M^T)$ is the symmetric part.
Step 3: Update and retract. Move in the direction of the (negative) Riemannian gradient, then retract back to the manifold:
Retraction via QR: $X_{\text{new}} = \mathrm{qf}\!\left(X - \eta\,\mathrm{grad}\,f(X)\right)$ (take the $Q$ factor from the QR decomposition)
Retraction via Cayley: $X_{\text{new}} = \left(I - \tfrac{\eta}{2}W\right)^{-1}\left(I + \tfrac{\eta}{2}W\right)X$ for a suitable skew-symmetric $W$ built from $X$ and the gradient
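A minimal sketch of these three steps with the QR retraction, applied to the illustrative objective $f(X) = -\mathrm{tr}(X^TMX)$, whose minimizers span the top-$k$ eigenvector subspace of $M$ (the step size, iteration count, and names are assumptions):

```python
import numpy as np

def stiefel_gradient_descent(M, k, eta=0.1, n_iter=500, seed=0):
    """Minimize f(X) = -tr(X^T M X) over X^T X = I_k by Riemannian GD with QR retraction."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # random start on the manifold
    for _ in range(n_iter):
        G = -2 * M @ X                                  # Step 1: Euclidean gradient of f
        sym = 0.5 * (X.T @ G + G.T @ X)
        rgrad = G - X @ sym                             # Step 2: project onto the tangent space
        X, _ = np.linalg.qr(X - eta * rgrad)            # Step 3: update and retract via QR
    return X

M = np.diag([5.0, 4.0, 1.0, 0.5])
X = stiefel_gradient_descent(M, k=2)
print(np.round(X.T @ X, 6))        # identity: the iterate stays on the Stiefel manifold
print(-np.trace(X.T @ M @ X))      # approx -(5 + 4), the optimal objective value
```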
I.3 Applications in Modern AI
Orthogonal RNNs (Henaff et al. 2016, Arjovsky et al. 2016). Vanilla RNNs suffer from vanishing/exploding gradients because the recurrent weight matrix can have singular values far from 1. Unitary/orthogonal RNNs constrain the recurrence matrix to lie in $O(n)$ (or $U(n)$), ensuring stable gradient flow. Training optimizes over this constraint set using Stiefel manifold optimization or a parameterization via the Cayley transform.
Rotation-Equivariant Networks. Networks for 3D point cloud processing (SE(3)-equivariant networks, e3nn) explicitly parameterize their weight tensors using representations of $SO(3)$. The convolution filters transform predictably under 3D rotations of the input, making the network geometry-aware.
LoRA with Orthogonal Factors. Standard LoRA approximates a weight update as $\Delta W = BA$ for low-rank matrices $B$ and $A$. GLoRA, DoRA, and OLoRA variants constrain $A$ or $B$ to have orthonormal columns (lying on a Stiefel manifold), improving gradient flow during fine-tuning and reducing the rank collapse problem.
I.4 The Polar Decomposition as Nearest Orthogonal Matrix
Problem: Given $A\in\mathbb{R}^{n\times n}$ (not necessarily orthogonal), find the nearest orthogonal matrix:
$Q^\star = \arg\min_{Q\in O(n)} \lVert A - Q\rVert_F$
Solution: The polar decomposition gives $Q^\star = UV^T$, where $A = U\Sigma V^T$ (SVD).
Proof: For any $Q\in O(n)$:
$\lVert A - Q\rVert_F^2 = \lVert A\rVert_F^2 - 2\,\mathrm{tr}(Q^TA) + n$
(using unitary invariance of the Frobenius norm and $\lVert Q\rVert_F^2 = n$). Let $Z = V^TQ^TU$, which is orthogonal. Then:
$\mathrm{tr}(Q^TA) = \mathrm{tr}(Q^TU\Sigma V^T) = \mathrm{tr}(V^TQ^TU\Sigma) = \mathrm{tr}(Z\Sigma) = \sum_i Z_{ii}\sigma_i$
The error is minimized when $\mathrm{tr}(Q^TA)$ is maximized. Since $\Sigma$ is diagonal with nonnegative entries and $Z$ is orthogonal (so $\lvert Z_{ii}\rvert\le 1$), a Hadamard-type inequality gives $\mathrm{tr}(Z\Sigma)\le\sum_i\sigma_i$, with equality at $Z = I$ (i.e., $Q = UV^T$).
Muon optimizer interpretation: The Muon optimizer applies the update $W \leftarrow W - \eta\,\mathrm{Orth}(G)$, where $\mathrm{Orth}(G)$ is the nearest (semi-)orthogonal matrix to the gradient (momentum) matrix $G$. This can be computed efficiently via Newton-Schulz iterations such as
$X_{k+1} = \tfrac{1}{2}X_k\left(3I - X_k^TX_k\right), \qquad X_0 = G/\lVert G\rVert_2,$
which converges quadratically to the orthogonal factor of the polar decomposition of $G$.
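A minimal sketch of this cubic Newton-Schulz iteration for the orthogonal polar factor; the input is pre-scaled so its singular values lie in the convergence region, and the Muon implementation itself uses a tuned higher-order polynomial, so this is illustrative only:

```python
import numpy as np

def newton_schulz_orthogonalize(G, n_iter=30):
    """Approximate the orthogonal polar factor U V^T of G via X <- 0.5 X (3I - X^T X)."""
    X = G / np.linalg.norm(G, 2)          # scale so all singular values are <= 1 < sqrt(3)
    I = np.eye(G.shape[1])
    for _ in range(n_iter):
        X = 0.5 * X @ (3 * I - X.T @ X)
    return X

rng = np.random.default_rng(5)
G = rng.standard_normal((8, 8))
Q_ns = newton_schulz_orthogonalize(G)

U, _, Vt = np.linalg.svd(G)
print(np.max(np.abs(Q_ns - U @ Vt)))              # close to 0: matches the polar factor U V^T
print(np.max(np.abs(Q_ns.T @ Q_ns - np.eye(8))))  # close to 0: result is orthogonal
```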
Appendix J: Quick Reference - Orthogonality Vocabulary
| Term | Definition | Key Formula |
|---|---|---|
| Orthogonal vectors | Perpendicular: zero inner product | $\langle\mathbf{u},\mathbf{v}\rangle = 0$ |
| Orthonormal vectors | Orthogonal + unit length | $\langle\mathbf{q}_i,\mathbf{q}_j\rangle = \delta_{ij}$ |
| Orthonormal basis (ONB) | Orthonormal set that spans the space | $\mathbf{v} = \sum_i\langle\mathbf{v},\mathbf{q}_i\rangle\mathbf{q}_i$ |
| Orthogonal matrix | Square, orthonormal columns | $Q^TQ = QQ^T = I$ |
| Orthogonal group $O(n)$ | All $n\times n$ orthogonal matrices | Closed under products, inverses, transposes |
| Special orthogonal group $SO(n)$ | Rotations (det = +1) | $SO(n) = \{Q\in O(n) : \det Q = +1\}$ |
| Orthogonal projection | Closest point in a subspace $S$ | $P = QQ^T$, $P^2 = P$, $P^T = P$ |
| Orthogonal complement | Vectors orthogonal to all of $S$ | $S^\perp = \{\mathbf{v} : \langle\mathbf{v},\mathbf{s}\rangle = 0\ \text{for all}\ \mathbf{s}\in S\}$ |
| Gram-Schmidt | Orthogonalization algorithm | Build ONB from any basis |
| Householder reflector | Reflection across a hyperplane | $H = I - 2\mathbf{v}\mathbf{v}^T/\lVert\mathbf{v}\rVert^2$ |
| QR decomposition | $A = QR$ factorization | $Q$: orthonormal cols, $R$: upper triangular |
| Rayleigh quotient | Quotient for eigenvalue bounds | $R(\mathbf{x}) = \mathbf{x}^TA\mathbf{x}/\mathbf{x}^T\mathbf{x}$ |
| Spectral theorem | Symmetric matrix has an ONB of eigenvectors | $A = Q\Lambda Q^T$ |
| Isometry | Norm-preserving linear map | $\lVert Q\mathbf{x}\rVert = \lVert\mathbf{x}\rVert$ for all $\mathbf{x}$ |
| Stiefel manifold $V_k(\mathbb{R}^n)$ | Matrices with orthonormal columns | $\{X\in\mathbb{R}^{n\times k} : X^TX = I_k\}$ |
| Polar decomposition | Orthogonal $\times$ symmetric factorization | $A = QS$, $Q\in O(n)$, $S\succeq 0$ |
| Unitary matrix | Complex analog of orthogonal | $U^*U = I$; complex entries |
Appendix K: Common Numerical Experiments
K.1 Verifying Gram-Schmidt Stability
The following experiment quantifies the loss of orthogonality in CGS vs MGS:
import numpy as np
def cgs(A):
m, n = A.shape
Q = np.zeros_like(A)
R = np.zeros((n, n))
for j in range(n):
v = A[:, j].copy()
R[:j, j] = Q[:, :j].T @ v
v -= Q[:, :j] @ R[:j, j]
R[j, j] = np.linalg.norm(v)
Q[:, j] = v / R[j, j]
return Q, R
def mgs(A):
m, n = A.shape
Q = A.astype(float).copy()
R = np.zeros((n, n))
for j in range(n):
R[j, j] = np.linalg.norm(Q[:, j])
Q[:, j] /= R[j, j]
R[j, j+1:] = Q[:, j] @ Q[:, j+1:]
Q[:, j+1:] -= np.outer(Q[:, j], R[j, j+1:])
return Q, R
# Test on Hilbert matrix (ill-conditioned)
n = 12
H = 1.0 / (np.arange(1, n+1)[:, None] + np.arange(1, n+1)[None, :] - 1)
Q_cgs, _ = cgs(H)
Q_mgs, _ = mgs(H)
err_cgs = np.max(np.abs(Q_cgs.T @ Q_cgs - np.eye(n)))
err_mgs = np.max(np.abs(Q_mgs.T @ Q_mgs - np.eye(n)))
print(f"CGS orthogonality error: {err_cgs:.2e}")  # much larger: loss of orthogonality ~ u * kappa^2
print(f"MGS orthogonality error: {err_mgs:.2e}")  # smaller, but still degraded ~ u * kappa
print(f"kappa(H): {np.linalg.cond(H):.2e}")       # ~1e16 for n=12: severely ill-conditioned
K.2 Orthogonal Initialization Impact on Gradient Norms
import numpy as np
def grad_norm_experiment(n=64, L=20, trials=100):
"""Compare gradient norms: Gaussian init vs Orthogonal init."""
results = {'gaussian': [], 'orthogonal': []}
for _ in range(trials):
# Gaussian initialization
Ws_gauss = [np.random.randn(n, n) / np.sqrt(n) for _ in range(L)]
prod_gauss = Ws_gauss[0]
for W in Ws_gauss[1:]:
prod_gauss = prod_gauss @ W
results['gaussian'].append(np.linalg.norm(prod_gauss, 2))
# Orthogonal initialization
Ws_orth = []
for _ in range(L):
M = np.random.randn(n, n)
Q, _ = np.linalg.qr(M)
Ws_orth.append(Q)
prod_orth = Ws_orth[0]
for W in Ws_orth[1:]:
prod_orth = prod_orth @ W
results['orthogonal'].append(np.linalg.norm(prod_orth, 2))
print(f"Gaussian: mean={np.mean(results['gaussian']):.3e}, "
f"std={np.std(results['gaussian']):.3e}")
print(f"Orthogonal: mean={np.mean(results['orthogonal']):.3e}, "
f"std={np.std(results['orthogonal']):.3e}")
# Expected: Gaussian shows extreme variance (very large or very small)
# Orthogonal shows norm ~= 1.0 with tiny variance
grad_norm_experiment()
K.3 Rayleigh Quotient Iteration Convergence
import numpy as np
def rayleigh_quotient_iteration(A, x0, n_iter=20):
    """Track convergence of Rayleigh Quotient Iteration (RQI)."""
    x = x0 / np.linalg.norm(x0)
    eigvals = np.linalg.eigvalsh(A)  # all eigenvalues, used only to measure the error
    history = []
    for _ in range(n_iter):
        rho = x @ A @ x  # Rayleigh quotient
        # RQI converges to the eigenvalue nearest the current Rayleigh quotient,
        # not necessarily the largest, so measure the error to the nearest eigenvalue
        history.append(np.min(np.abs(eigvals - rho)))
        try:
            y = np.linalg.solve(A - rho * np.eye(len(A)), x)
            x = y / np.linalg.norm(y)
        except np.linalg.LinAlgError:
            break  # A - rho*I is (numerically) singular: rho is an eigenvalue
    return history
A = np.array([[3., 1., 0.], [1., 2., 1.], [0., 1., 1.]])
x0 = np.random.randn(3)
errors = rayleigh_quotient_iteration(A, x0)
print("Errors per iteration:")
for i, e in enumerate(errors[:10]):
print(f" iter {i}: {e:.2e}")
# Expected: errors should decrease cubically (each step ~= previous^3)