Vector Spaces and Subspaces
"A vector space is not a collection of arrows. It is a collection of anything that can be added together and scaled - and whose arithmetic obeys eight simple rules. The moment you verify those eight rules, every theorem ever proved about vectors becomes available for free."
Overview
A vector space is the central object of linear algebra. The abstract definition - a set with two operations satisfying eight axioms - is deceptively simple. Its power lies in universality: the same framework that governs arrows in the plane also governs polynomials, matrices, functions, probability distributions, and the weight spaces of neural networks. Every result proved once for abstract vector spaces applies simultaneously to all of these.
Subspaces are the geometric heart of the theory. A subspace is a flat, linear subset of a vector space that passes through the origin and is closed under the same operations. The column space of a weight matrix, the null space of a linear layer, the span of a set of gradient vectors, the low-rank subspace targeted by LoRA - all are subspaces. Understanding their structure is understanding how information flows, is stored, and is transformed in deep learning systems.
This chapter develops vector spaces and subspaces from intuition through rigorous definition to computational tools and AI applications. It is the conceptual foundation for everything that follows in linear algebra.
Prerequisites
- Basic vector arithmetic (addition, scalar multiplication)
- Matrix multiplication and linear systems (Sections 02-05)
- Familiarity with $\mathbb{R}^n$ as a coordinate space
- Exposure to the concept of rank (Section 05)
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive demonstrations: subspace visualisation, Gram-Schmidt, four fundamental subspaces, PCA |
| exercises.ipynb | Practice problems: axiom verification, subspace proofs, basis computation, Gram-Schmidt, projections |
Learning Objectives
After completing this section, you will:
- Verify that a given set with operations satisfies (or fails) the eight vector space axioms
- Identify and prove subspaces using the three-condition test
- Compute span, test linear independence, and extract bases via row reduction
- Identify and compute all four fundamental subspaces of a matrix
- Apply Gram-Schmidt orthogonalisation and construct projection matrices
- Understand direct sums, orthogonal complements, and quotient spaces
- Connect vector space structure to transformer architecture, LoRA, PCA, and mechanistic interpretability
- Recognise invariant subspaces and state the spectral theorem in subspace language
Table of Contents
- Vector Spaces and Subspaces
- Overview
- Prerequisites
- Companion Notebooks
- Learning Objectives
- Table of Contents
- 1. Intuition
- 2. Formal Definitions
- 3. Subspaces
- 4. Span and Linear Combinations
- 5. Linear Independence
- 6. Basis and Dimension
- 7. The Four Fundamental Subspaces
- 8. Subspace Operations
- 9. Inner Product Spaces and Orthogonality
- 10. Affine Subspaces and Quotient Spaces
- 11. Subspaces in Functional Analysis
- 12. Subspace Methods in Machine Learning
- 13. Invariant Subspaces and Spectral Theory
- 14. Common Mistakes
- 15. Exercises
- 16. Why This Matters for AI (2026 Perspective)
- Conceptual Bridge
1. Intuition
1.1 What Is a Vector Space?
A vector space is a collection of objects - called vectors - together with two operations (addition and scalar multiplication) that behave exactly the way arrows in the plane behave. You can add any two vectors and get another vector in the same collection. You can stretch any vector by any scalar and stay in the collection. And these two operations interact with each other in the exact way you would expect from ordinary arithmetic.
The power of the abstract definition is this: once you verify that a collection satisfies the vector space axioms, every theorem ever proved about vector spaces applies immediately - whether the "vectors" are geometric arrows, polynomials, matrices, functions, probability distributions, or sequences of real numbers. You do not reprove theorems for each new setting; you simply check the axioms once and inherit a century of results for free.
The word "space" is deliberate. A vector space has geometric structure. You can talk about directions, linear combinations, independence, and dimensionality - even when the "vectors" are abstract objects that look nothing like arrows. A polynomial lives in a vector space. So does the identity matrix. So does a constant function. In each case, the "geometry" is real and consequential, even if it is not immediately visible.
For AI, this abstraction is not philosophical - it is practical:
VECTOR SPACES THAT APPEAR IN EVERY NEURAL NETWORK

R^d - embedding space
  tokens, sentences, images, users are all vectors here
  arithmetic on vectors (king - man + woman ~= queen)
  only makes sense because R^d is a vector space

R^P - parameter space
  all weights of a neural network form a single vector in R^P
  gradient descent navigates this space
  LoRA restricts updates to a subspace of R^P

R^(m x n) - weight matrix space
  each weight matrix is a "vector" in a higher-dimensional space
  adding matrices (as in LoRA: W + DeltaW) is vector addition
  scaling a matrix by a learning rate is scalar multiplication

infinity-dim - function space
  the set of all functions a neural architecture can represent
  approximation theory asks which subspace of all functions
  a given architecture can reach
The abstract definition captures all of these simultaneously. That is the point.
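The closure under addition and scaling is easy to see concretely. Below is a minimal numpy sketch; the dimension choices (512, 256) are illustrative, not canonical:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Vectors" of three very different kinds.
embedding_a = rng.normal(size=512)          # a point in R^512 (embedding space)
embedding_b = rng.normal(size=512)
weight_w    = rng.normal(size=(256, 512))   # a "vector" in R^(256 x 512)
weight_dw   = rng.normal(size=(256, 512))

# The same two operations apply in every case.
combo_vec = 0.3 * embedding_a + 0.7 * embedding_b   # linear combination in R^512
combo_mat = weight_w + 0.01 * weight_dw             # LoRA-style update: addition + scaling

# Functions (sampled on a grid) behave the same way: pointwise + and scaling.
x = np.linspace(0, 1, 101)
f, g = np.sin(2 * np.pi * x), x**2
h = 2.0 * f - 3.0 * g                               # a vector in the (sampled) function space
```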
1.2 Why Vector Spaces Are the Language of Deep Learning
Every major architectural concept in modern deep learning can be phrased as a statement about vector spaces or their subspaces. This is not a coincidence - it reflects the fundamental linearity of most operations in a transformer.
Embedding spaces. Tokens, sentences, images, and users are all represented as vectors in $\mathbb{R}^d$. This only makes sense because $\mathbb{R}^d$ is a vector space: you can meaningfully add embedding vectors, scale them, and take linear combinations. The famous word embedding arithmetic $\text{king} - \text{man} + \text{woman} \approx \text{queen}$ is a statement about the linear structure of the embedding space. Cosine similarity - the standard measure of semantic relatedness - is an inner product operation. None of this would be coherent if the embedding space were not a vector space.
Parameter space. All parameters of a neural network - every weight matrix, every bias vector, every LayerNorm scale - can be concatenated into a single vector $\theta \in \mathbb{R}^P$. Gradient descent navigates this $P$-dimensional vector space. The loss landscape lives over this space. The geometry of the parameter space - its curvature, the directions of steepest descent, the flat directions - shapes every aspect of optimisation. Gradient clipping, momentum, and weight decay are all operations in the vector space $\mathbb{R}^P$.
Residual stream. In transformer architectures, the central computational object is the residual stream: a vector $x \in \mathbb{R}^{d_{\text{model}}}$ that is passed from layer to layer and updated by each component:

$$x_{\ell+1} = x_\ell + \text{Attn}(x_\ell) + \text{MLP}(x_\ell)$$

The update rule makes sense only because $\mathbb{R}^{d_{\text{model}}}$ is a vector space - you can add the attention output (a vector in $\mathbb{R}^{d_{\text{model}}}$) and the MLP output (also a vector in $\mathbb{R}^{d_{\text{model}}}$) to the residual stream. Each layer writes a new vector into the same shared space. All components communicate through this one $d_{\text{model}}$-dimensional vector space. The subspace structure of the residual stream - which directions are "owned" by which components - is what mechanistic interpretability studies.
Function spaces. Neural networks represent functions. The set of all functions representable by a given architecture with bounded parameters is a subset of an infinite-dimensional function space. Approximation theory studies which subspaces of that function space can be well-approximated by which architectures. The universal approximation theorem is a statement about density in a function space. Understanding which functions a model can and cannot represent requires understanding subspaces of infinite-dimensional vector spaces.
Gradient space. At each training step, the gradient $\nabla_\theta L$ is a vector in the same parameter space $\mathbb{R}^P$. Gradient accumulation (summing gradients across microbatches) is vector addition. Gradient clipping (bounding the norm) is a scaling operation. Momentum (maintaining a weighted sum of past gradients) is a linear combination. All of these rely on the vector space structure of $\mathbb{R}^P$. Empirically, the gradient vectors observed during training lie in a much lower-dimensional subspace of $\mathbb{R}^P$ - a fact that motivates LoRA and related methods.
Tangent spaces. The loss surface is a manifold over parameter space. At each point $\theta$, the tangent space - the set of directions one can move while remaining (to first order) on the manifold - is a vector space. Natural gradient methods use the geometry of these tangent spaces; the Fisher information matrix defines a metric on the parameter manifold that is fundamentally a statement about the inner product structure of the tangent space.
1.3 The Hierarchy of Structure
Vector spaces occupy a specific level in a hierarchy of mathematical structures. Each level adds additional structure on top of the previous, enabling more refined geometric notions:
HIERARCHY OF MATHEMATICAL STRUCTURE
Set (no structure)
+ addition and scalar multiplication obeying 8 axioms
Vector Space (linear structure: add, scale)
- can speak of linear combinations, span, independence, basis
- can speak of dimension
+ norm ||v|| (a function measuring "length")
Normed Space (metric structure: can measure distances)
- can speak of convergence, continuity, open/closed sets
- can speak of the "size" of a vector
+ inner product <u, v> (a function measuring "alignment")
Inner Product Space (angular structure: can measure angles)
- can speak of orthogonality, projections, cosine similarity
- can speak of the "angle" between two vectors
+ completeness (every Cauchy sequence converges in the space)
Hilbert Space (complete inner product space)
- can speak of infinite sums that converge
- can speak of Fourier decompositions
- the correct setting for quantum mechanics and function spaces
Key facts:
- R^n (Euclidean space) satisfies ALL levels simultaneously.
- Every finite-dimensional inner product space is a Hilbert space
(completeness is automatic in finite dimensions).
- Infinite-dimensional function spaces (L^2, Sobolev spaces)
require the Hilbert structure to behave well.
- AI embedding spaces are finite-dimensional: all levels apply.
- RKHS for kernel methods: infinite-dimensional Hilbert spaces.
The hierarchy matters because each additional level of structure enables new theorems. In a bare vector space, you can talk about linear combinations and dimension. Add a norm: you can measure convergence. Add an inner product: you can define orthogonality and projections. Add completeness: you can take limits of infinite sequences. For AI, finite-dimensional vector spaces with inner products (Hilbert spaces) are the default working environment. Function spaces require infinite-dimensional Hilbert structure.
1.4 The Intuition of a Subspace
A subspace is a vector space living inside a larger vector space - a flat, linear subset that passes through the origin and is closed under the same two operations.
"Flat" and "through the origin" are the two defining geometric properties:
- A line through the origin in $\mathbb{R}^n$ is a subspace. A line not through the origin is not.
- A plane through the origin is a subspace. A plane not through the origin is not.
- A sphere, a circle, a parabola, a curved surface - none are subspaces.
- The positive quadrant of $\mathbb{R}^2$ is not a subspace (not closed under scalar multiplication by $-1$).

Why "through the origin"? The zero vector must be in any subspace. This follows from closure under scalar multiplication: if $v$ is in the subspace, then $0 \cdot v = 0$ must also be in the subspace. A subset that doesn't contain $0$ cannot be a subspace.

Why "flat"? Take any two vectors $u, v$ in the subspace. Their sum $u + v$ must also be in the subspace. Multiply either by any scalar $\alpha$ - the result $\alpha u$ must still be in the subspace. This forces the subset to be "straight". There can be no curves, bends, or boundaries. If the subspace contains a vector $v$, it must contain all scalar multiples $\alpha v$ for every $\alpha \in \mathbb{R}$ - the entire line through the origin in the direction of $v$.
SUBSPACES AND NON-SUBSPACES IN R^2 AND R^3

Subspaces of R^2:            Subspaces of R^3:
  the whole plane R^2          all of R^3
  any line through origin      any plane through origin
  {0}                          any line through origin
                               {0}

NOT subspaces of R^2:        NOT subspaces of R^3:
  line not through origin      plane not through origin
  half-plane                   sphere / cylinder / cone
  unit circle                  positive octant
  filled disk                  affine subspace (offset from origin)
  upper half-plane

Test: does it contain 0? is it closed under + and scaling?
If any answer is NO -> not a subspace.
For AI, the subspace concept is concrete. The column space of a weight matrix is a subspace of the output space. The null space is a subspace of the input space. The set of all weight updates targeted by LoRA is a subspace of the weight matrix space. The set of all probability vectors (probabilities summing to 1) is not a subspace (it does not contain the zero vector and is not closed under vector addition). This distinction matters: arithmetic that is meaningful in a vector space may be meaningless in a non-subspace.
1.5 Subspaces That Appear Constantly in AI
The following subspaces recur throughout deep learning theory, practice, and interpretability. Each is worth understanding geometrically before encountering it formally.
Column space col(W). For a weight matrix $W \in \mathbb{R}^{m \times n}$, the column space is the set of all vectors $Wx$ as $x$ ranges over $\mathbb{R}^n$. It is the set of all reachable outputs of the linear layer. If $\operatorname{rank}(W) = r < m$, then only an $r$-dimensional subspace of $\mathbb{R}^m$ can be produced by this layer regardless of the input. The remaining $m - r$ dimensions of the output space are always zero from this layer's contribution.

Null space null(W). The null space is the set of all inputs that produce zero output: $\{x \in \mathbb{R}^n : Wx = 0\}$. It is the "invisible" subspace - directions in the input that this layer completely ignores. A vector in the null space carries no information through the layer. Its dimension is $n - r$. Mechanistically: if a component of the residual stream lies in the null space of all query and key projections of some head, that component is invisible to that head.

Row space row(W). The row space is the span of the rows of $W$, equivalently the column space of $W^\top$. It is the set of input directions that $W$ "listens to". The projection of an input onto the row space determines the output; the projection onto the null space is discarded.

Attention subspace. Each attention head in a transformer has query and key projection matrices $W_Q$ and $W_K$. The attention scores are computed in the $d_k$-dimensional subspace defined by these projections. The head is completely blind to directions in $\mathbb{R}^{d_{\text{model}}}$ orthogonal to the column spaces of $W_Q$ and $W_K$. The value and output projections similarly define the subspace that the head writes to. Attention is fundamentally a subspace operation: it reads from one subspace and writes to another.

Gradient subspace. The gradient $g_t$ at step $t$ is a vector in $\mathbb{R}^P$. Over the course of training, these gradient vectors empirically lie in a subspace of $\mathbb{R}^P$ whose dimension is much smaller than $P$. Gur-Ari et al. (2018) showed that the top eigenspace of the gradient covariance matrix - the "gradient subspace" - has effective dimension in the hundreds or thousands, not in the hundreds of millions. This is the empirical foundation for LoRA: if gradients live in a low-dimensional subspace, it may suffice to restrict weight updates to that subspace.

Residual stream subspace. Mechanistic interpretability reveals that different components of a transformer write to different subspaces of the $d_{\text{model}}$-dimensional residual stream. Each attention head "owns" a subspace (defined by its OV circuit) and communicates with other components through that subspace. The residual stream is a shared message-passing space, and the subspace structure determines which components can communicate with which.

LoRA update subspace. Low-Rank Adaptation restricts the weight update to $\Delta W = BA$ where $B \in \mathbb{R}^{m \times r}$ and $A \in \mathbb{R}^{r \times n}$. This constrains $\Delta W$ to a rank-$r$ set of matrices inside $\mathbb{R}^{m \times n}$: the subspace of matrices whose column space is contained in $\operatorname{col}(B)$. The rank $r$ is literally the dimension of the subspace being searched. Choosing $r$ is choosing the dimension of the subspace.
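A minimal numpy sketch of the LoRA constraint; the sizes `m, n, r` are made up for illustration, and `lstsq` is used only to verify that every column of the update lies in $\operatorname{col}(B)$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 128, 8            # illustrative sizes; r is the LoRA rank

W = rng.normal(size=(m, n))     # frozen pretrained weight
B = rng.normal(size=(m, r))     # trainable LoRA factors
A = rng.normal(size=(r, n))

delta_W = B @ A                 # the update is confined to rank <= r matrices
print(np.linalg.matrix_rank(delta_W))   # 8: col(delta_W) lies inside col(B)

# Every column of delta_W is a combination of the r columns of B:
X, *_ = np.linalg.lstsq(B, delta_W, rcond=None)
print(np.allclose(delta_W, B @ X))       # True: zero residual
```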
1.6 Historical Timeline
The concept of a vector space did not emerge from a single insight - it crystallised gradually over seven decades as mathematicians in different countries, working on different problems, all needed the same abstract structure:
| Year | Person / Event | Significance |
|---|---|---|
| 1827 | Möbius - barycentric coordinates | Linear combinations with fixed coefficient sum; early subspace thinking embedded in geometry |
| 1844 | Grassmann - Ausdehnungslehre (Theory of Extension) | Abstract $n$-dimensional linear spaces, vector algebra, exterior products; 50 years ahead of its time; largely ignored until the 1880s |
| 1858 | Cayley - matrix algebra | Matrices as the natural representation of linear maps; implicit use of vector space structure in $\mathbb{R}^n$ |
| 1888 | Peano - Calcolo Geometrico | First rigorous axiomatic definition of a vector space (Peano called them "linear systems"); the modern eight axioms are essentially unchanged from his formulation |
| 1900-10 | Hilbert - infinite-dimensional spaces | Spectral theory of operators; sequences as vectors; laid the foundation for quantum mechanics and functional analysis; introduced Hilbert spaces |
| 1920s | Banach - normed complete spaces | Generalised Hilbert spaces to settings without an inner product; functional analysis matures as a discipline |
| 1929 | von Neumann - Hilbert space formalism | Rigorous foundation for quantum mechanics as operators on Hilbert space; spectral theorem in full generality |
| 1930s-60s | Bourbaki - abstract algebra | Vector spaces as modules over fields; categorical perspective; the modern textbook treatment of linear algebra |
| 1935 | Whitney - tangent bundles | A vector space (tangent space) attached to every point of a smooth manifold; modern differential geometry |
| 1950s-90s | Krylov subspace methods | CG, GMRES, Lanczos: iterative solvers that expand a nested sequence of subspaces; subspace geometry as a computational algorithm |
| 1970s-90s | LAPACK / numerical linear algebra | Practical algorithms for computing with subspaces (QR, SVD, Gram-Schmidt); subspace methods become usable at scale |
| 1901 / 1991 | Pearson / Turk-Pentland - PCA | Pearson (1901) defines principal component analysis; Turk and Pentland (1991) apply PCA to face recognition ("eigenfaces"); subspace learning as a machine learning tool |
| 2013 | Mikolov et al. - word2vec | Words as vectors in $\mathbb{R}^d$; semantic arithmetic; gender direction as a 1D subspace; demonstrated that natural language semantics has linear subspace structure |
| 2018 | Gur-Ari et al. - gradient subspace | Gradient descent happens in a tiny subspace: the training gradient concentrates in a subspace of dimension far below the parameter count |
| 2020 | Aghajanyan et al. - intrinsic dimensionality | Fine-tuning updates for large pretrained language models lie in a subspace whose dimension is orders of magnitude smaller than the parameter count; formal evidence for low-dimensional fine-tuning |
| 2021 | Hu et al. - LoRA | Explicit restriction of weight updates to a rank-$r$ subspace; subspace constraint as an inductive bias; state-of-the-art parameter-efficient fine-tuning |
| 2022 | Elhage et al. - superposition hypothesis | Features in LLMs are represented as directions (1D subspaces) in the residual stream; features outnumber dimensions, forcing non-orthogonal superposition; subspace geometry is the language of mechanistic interpretability |
| 2024 | DeepSeek-V2/V3 - MLA | Multi-head Latent Attention: KV projections compressed to a low-rank latent subspace; the KV information is architecturally constrained to a low-dimensional subspace; 5.75x memory reduction |
The arc of this timeline is clear: what began as abstract geometry became, over 180 years, the core language of AI architecture and efficiency.
2. Formal Definitions
Recall: Vectors and Spaces §3 introduced the vector space axioms with concrete geometric intuition ($\mathbb{R}^n$, function spaces, polynomial spaces). This section re-derives them in full axiomatic rigour - the goal here is provability, not intuition. The two sections are complementary: 01 tells you what the axioms describe; this section tells you what follows from them.
2.1 The Axioms of a Vector Space
A vector space over a field $F$ (we use $F = \mathbb{R}$ throughout unless stated otherwise) is a set $V$ together with two operations:
- Addition: $+ : V \times V \to V$ - takes two vectors, produces a vector
- Scalar multiplication: $\cdot : \mathbb{R} \times V \to V$ - takes a scalar and a vector, produces a vector

satisfying the following eight axioms for all $u, v, w \in V$ and all $\alpha, \beta \in \mathbb{R}$:
Axioms for addition:
| # | Name | Statement |
|---|---|---|
| 1 | Commutativity | $u + v = v + u$ |
| 2 | Associativity | $(u + v) + w = u + (v + w)$ |
| 3 | Additive identity | There exists $0 \in V$ such that $v + 0 = v$ for all $v \in V$ |
| 4 | Additive inverse | For each $v \in V$, there exists $-v \in V$ such that $v + (-v) = 0$ |

Axioms for scalar multiplication:

| # | Name | Statement |
|---|---|---|
| 5 | Multiplicative identity | $1 \cdot v = v$ |
| 6 | Compatibility | $\alpha(\beta v) = (\alpha\beta)v$ |

Distributivity (linking the two operations):

| # | Name | Statement |
|---|---|---|
| 7 | Scalar over vector sum | $\alpha(u + v) = \alpha u + \alpha v$ |
| 8 | Vector over scalar sum | $(\alpha + \beta)v = \alpha v + \beta v$ |
These eight axioms are the complete definition. Anything satisfying all eight is a vector space; anything failing even one is not. The list is lean: weakening it produces strictly weaker structures. (Curiously, commutativity of addition is the one redundancy - it can be derived from the other seven axioms - but the eight-axiom presentation is standard.)
Notice what the axioms do and do not say:
- They say nothing about what the "vectors" look like. Arrows, polynomials, matrices, functions - all are equally valid.
- They say nothing about angles, lengths, or orthogonality. Those require additional structure (inner product).
- They say nothing about convergence. That requires a topology (norm or metric).
- They require closure under both operations: the result of adding two vectors must be back in $V$; scaling a vector must stay in $V$.
THE AXIOMS AS CLOSURE CONDITIONS
Axioms 1-4: addition is a well-behaved operation on V
V is an abelian group under +
Axioms 5-6: scalar multiplication interacts correctly with itself
Axioms 7-8: scalar multiplication distributes over both kinds of +
(over vector sums and over scalar sums)
Together: V behaves like "an R^n that we haven't yet coordinatised"
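Floating-point arithmetic only approximates the real field, so the axioms can be spot-checked numerically rather than proved; `np.allclose` absorbs the rounding error. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 4))   # three random vectors in R^4
a, b = 2.5, -1.25                    # scalars that are exact in binary floating point

checks = {
    "1 commutativity":  np.allclose(u + v, v + u),
    "2 associativity":  np.allclose((u + v) + w, u + (v + w)),
    "3 identity":       np.allclose(v + np.zeros(4), v),
    "4 inverse":        np.allclose(v + (-v), np.zeros(4)),
    "5 mult identity":  np.allclose(1.0 * v, v),
    "6 compatibility":  np.allclose(a * (b * v), (a * b) * v),
    "7 distributivity": np.allclose(a * (u + v), a * u + a * v),
    "8 distributivity": np.allclose((a + b) * v, a * v + b * v),
}
assert all(checks.values())
```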
2.2 Immediate Consequences of the Axioms
The eight axioms imply several useful facts that are not axioms themselves. These are theorems - they must be proved from the axioms - but they follow quickly:
Uniqueness of the zero vector. If $0$ and $0'$ both satisfy Axiom 3, then:

$$0 = 0 + 0' = 0'$$

The first equality uses $0'$ as the identity; the second uses $0$. So the zero vector is unique.

Uniqueness of additive inverses. If $v + w = 0$ and $v + w' = 0$, then:

$$w = w + 0 = w + (v + w') = (w + v) + w' = 0 + w' = w'$$

So the additive inverse is unique for each $v$.

$0 \cdot v = 0$ (the zero scalar times any vector = zero vector):

$$0v = (0 + 0)v = 0v + 0v$$

Subtract $0v$ from both sides (add the additive inverse of $0v$): $0 = 0v$.

$\alpha \cdot 0 = 0$ (any scalar times the zero vector = zero vector):

$$\alpha 0 = \alpha(0 + 0) = \alpha 0 + \alpha 0$$

Subtract $\alpha 0$: $\alpha 0 = 0$.

$(-1)v = -v$ (multiplying by $-1$ gives the additive inverse):

$$v + (-1)v = 1v + (-1)v = (1 + (-1))v = 0v = 0$$

So $(-1)v$ is the additive inverse of $v$, i.e., $(-1)v = -v$.

Cancellation law for scalars. If $\alpha v = 0$, then either $\alpha = 0$ or $v = 0$:

Proof: Suppose $\alpha \neq 0$. Then $\alpha$ has a multiplicative inverse in $\mathbb{R}$. Multiply both sides:

$$v = 1v = (\alpha^{-1}\alpha)v = \alpha^{-1}(\alpha v) = \alpha^{-1} 0 = 0$$
This last fact is particularly useful in linear independence arguments: if a scalar multiple of a vector is zero, either the scalar is zero or the vector is zero - no third option.
2.3 Standard Examples of Vector Spaces
Understanding which objects form vector spaces - and why - is as important as knowing the axioms. Here are the key examples, from most concrete to most abstract.
$\mathbb{R}^n$ (Euclidean space)
- Vectors: $n$-tuples of real numbers $(x_1, \dots, x_n)$
- Addition: $(x_1, \dots, x_n) + (y_1, \dots, y_n) = (x_1 + y_1, \dots, x_n + y_n)$ (componentwise)
- Scalar multiplication: $\alpha(x_1, \dots, x_n) = (\alpha x_1, \dots, \alpha x_n)$ (componentwise)
- Zero vector: $(0, 0, \dots, 0)$
- Additive inverse: $(-x_1, \dots, -x_n)$
- Dimension: $n$

This is the prototype. Every finite-dimensional real vector space is isomorphic to $\mathbb{R}^n$ for some $n$. All eight axioms follow directly from the arithmetic of real numbers applied componentwise. The AI embedding space is exactly this.
$\mathbb{R}^{m \times n}$ (space of real $m \times n$ matrices)
- Vectors: $m \times n$ matrices with real entries
- Addition: entrywise addition
- Scalar multiplication: $(\alpha A)_{ij} = \alpha A_{ij}$
- Zero: zero matrix
- Dimension: $mn$ (each of the $mn$ entries is an independent coordinate)

This is the vector space of all weight matrices of a fixed shape. Gradient descent in a single layer's weight space is gradient descent in $\mathbb{R}^{m \times n}$. When LoRA writes $W + \Delta W$, it is performing vector addition in $\mathbb{R}^{m \times n}$. The learning rate times the gradient, $\eta \nabla_W L$, is a scalar multiplication in $\mathbb{R}^{m \times n}$.
$P_n$ (polynomials of degree $\le n$)
- Vectors: polynomials $p(x) = a_0 + a_1 x + \dots + a_n x^n$
- Addition: standard polynomial addition: add coefficients of matching powers
- Scalar multiplication: $(\alpha p)(x) = \alpha a_0 + \alpha a_1 x + \dots + \alpha a_n x^n$
- Zero: the zero polynomial (all coefficients zero)
- Dimension: $n + 1$ (the coefficients $a_0, \dots, a_n$ are the coordinates)

The vectors here "look like" arrows in $\mathbb{R}^{n+1}$ (just replace each polynomial with its coefficient vector), but the polynomial structure is richer - you can evaluate at any point, differentiate, integrate, and compose. The vector space structure is the foundation; the polynomial structure is additional.
$P$ (all polynomials, any degree)
- Vectors: polynomials of any (finite) degree
- Operations: same as $P_n$
- Zero: zero polynomial
- Dimension: $\infty$ (countably infinite; basis $\{1, x, x^2, \dots\}$)

Note that the sum of two polynomials of degree $\le n$ is again a polynomial of degree $\le n$, so $P_n$ is a subspace of $P$. This is our first example of a subspace inside a larger space.
$C([a, b])$ (continuous functions on $[a, b]$)
- Vectors: continuous functions $f : [a, b] \to \mathbb{R}$
- Addition: $(f + g)(x) = f(x) + g(x)$ (pointwise)
- Scalar multiplication: $(\alpha f)(x) = \alpha f(x)$ (pointwise)
- Zero: constant zero function
- Dimension: $\infty$ (uncountably infinite dimensional)

The sum of two continuous functions is continuous; a scalar multiple of a continuous function is continuous. All eight axioms follow from corresponding facts about real numbers applied pointwise. The space $C([a, b])$ is the natural setting for physics-informed neural networks (PINNs), neural ODEs, and any application where the model represents a continuous function.
$L^2([a, b])$ (square-integrable functions)
- Vectors: (equivalence classes of) functions with $\int_a^b f(x)^2 \, dx < \infty$
- Operations: pointwise, same as $C([a, b])$
- Inner product: $\langle f, g \rangle = \int_a^b f(x) g(x) \, dx$; induced norm $\|f\| = \sqrt{\langle f, f \rangle}$
- Dimension: $\infty$; a separable Hilbert space

$L^2$ is the correct setting for Fourier analysis (the Fourier basis is an orthonormal basis for $L^2$), for quantum mechanics (wavefunctions are unit vectors in $L^2$), and for kernel methods (the RKHS of a translation-invariant kernel is a subspace of $L^2$). Every finite-dimensional vector space with an inner product is a Hilbert space; $L^2$ is the prototypical infinite-dimensional Hilbert space.
$\ell^2$ (square-summable sequences)
- Vectors: infinite sequences $(x_1, x_2, x_3, \dots)$ with $\sum_{i=1}^{\infty} x_i^2 < \infty$
- Operations: componentwise
- Inner product: $\langle x, y \rangle = \sum_{i=1}^{\infty} x_i y_i$
- Dimension: $\infty$; a separable Hilbert space; isomorphic to $L^2$

The infinite-dimensional analogue of $\mathbb{R}^n$. Relevant as the theoretical limit of embedding spaces as vocabulary size $\to \infty$.
$\{0\}$ (trivial vector space)
- The set containing only the zero vector
- Addition: $0 + 0 = 0$; scalar multiplication: $\alpha \cdot 0 = 0$
- The smallest vector space; dimension 0
- It is a subspace of every vector space
2.4 Non-Examples of Vector Spaces
Understanding which sets fail to be vector spaces - and precisely which axiom they violate - sharpens intuition and prevents common errors in AI applications.
$\mathbb{R}_{>0}$ (positive reals)
Fails closure under scalar multiplication and additive inverse: $(-1) \cdot 2 = -2 \notin \mathbb{R}_{>0}$. Also fails to contain the zero vector. This is why you cannot do linear algebra directly in probability space (where all entries must be positive).
$S^{n-1} = \{x \in \mathbb{R}^n : \|x\| = 1\}$ (unit sphere)
Fails closure under addition ($\|u + v\| \neq 1$ in general) and scalar multiplication ($\|2v\| = 2 \neq 1$). Fails to contain the zero vector. Normalised embedding vectors live on the unit sphere - a non-subspace. This is why arithmetic on normalised vectors (like averaging two unit vectors) is not straightforward; the result must be renormalised.
$\Delta^{n-1} = \{p \in \mathbb{R}^n : p_i \ge 0, \ \sum_i p_i = 1\}$ (probability simplex)
Fails closure under addition: $p + q$ has $\sum_i (p_i + q_i) = 2$. Fails to contain the zero vector. Fails scalar multiplication: $2p$ has $\sum_i 2p_i = 2$. The simplex sits inside an affine subspace (the hyperplane $\sum_i p_i = 1$, a translate of a linear subspace), but it is not a vector subspace.

This is a crucial AI pitfall: softmax outputs live in the interior of $\Delta^{n-1}$. A convex average of two softmax outputs is still a valid distribution, but general linear combinations - scaling, subtraction, extrapolation - leave the simplex, and averaging in probability space does not agree with averaging in logit space. Interpolation, averaging, and arithmetic are therefore best done in logit space (pre-softmax), which is a genuine vector space.
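A small numpy sketch of the pitfall (the three-class logits are arbitrary): a convex average stays on the simplex, but scaling or extrapolating leaves it, and the two kinds of averaging disagree.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(np.array([2.0, 0.0, -1.0]))
q = softmax(np.array([-1.0, 3.0, 0.5]))

print(0.5 * p + 0.5 * q)                  # convex combination: still a distribution
print(2.0 * p - 1.0 * q)                  # extrapolation: negative entries -> leaves the simplex
print((2.0 * p).sum())                    # 2.0 -> scaling alone breaks sum-to-1
print(softmax(0.5 * (np.array([2.0, 0.0, -1.0]) + np.array([-1.0, 3.0, 0.5]))))
# logit-space average: a valid distribution, but different from 0.5*p + 0.5*q
```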
$\mathbb{Z}^n$ (integer vectors)
Closed under addition (integer + integer = integer) but fails closure under scalar multiplication over $\mathbb{R}$: $\tfrac{1}{2}(1, 0, \dots, 0) \notin \mathbb{Z}^n$. The integers form a module over $\mathbb{Z}$ (a more general structure), but not a vector space over $\mathbb{R}$.
$\{x : Ax = b\}$ with $b \neq 0$ (inhomogeneous linear system)
Does not contain the zero vector ($A0 = 0 \neq b$). Not closed under addition: $A(x + y) = Ax + Ay = 2b \neq b$. This is an affine subspace (a coset of the null space of $A$), not a vector subspace. The general solution to $Ax = b$ is $x = x_p + x_h$ with $x_h \in \operatorname{null}(A)$, where $x_p$ is any particular solution - a coset structure, not a subspace structure.
SUMMARY: WHY COMMON AI SETS FAIL TO BE VECTOR SPACES
Softmax outputs (interior of Delta):
  no zero vector; not closed under +; not closed under scaling
  -> Do arithmetic in logit space, not probability space
Normalised embeddings (unit sphere):
  no zero vector; not closed under +; not closed under scaling
  -> Spherical interpolation (slerp) for moving on the sphere
Integer weight tensors (quantised models):
  closed under +; not closed under R-scaling
  -> Module over Z; must dequantise before doing real linear algebra
Solution set Ax = b, b != 0:
  no zero vector; not closed under +
  -> Affine subspace = coset of null(A); add particular solution last
2.5 Vector Spaces over Other Fields
While we work over $\mathbb{R}$ throughout, the definition of a vector space makes sense over any field $F$. The most relevant alternatives for AI:
$\mathbb{C}^n$ (complex vector space). Each scalar is complex; each component is complex. Used in quantum computing, signal processing, and complex-valued Fourier analysis. Dimension $n$ over $\mathbb{C}$, but $2n$ over $\mathbb{R}$ (since each complex number requires 2 real numbers). The inner product becomes sesquilinear: $\langle u, v \rangle = \sum_i \overline{u_i} v_i$ (conjugate of first argument). Complex exponentials are the natural eigenfunctions of differentiation; Fourier analysis over $\mathbb{C}$ is cleaner than over $\mathbb{R}$.
$\mathbb{F}_2^n$ (binary vectors). Field $\mathbb{F}_2 = \{0, 1\}$ with arithmetic modulo 2. Addition = XOR; multiplication = AND. Used in error-correcting codes (Hamming codes, LDPC codes, turbo codes) and cryptography. All of linear algebra carries over: basis, span, null space, rank - all computable over $\mathbb{F}_2$ by the same row reduction algorithm, with arithmetic mod 2.
$\mathbb{F}_p^n$ (vectors over finite fields). For prime $p$, with arithmetic mod $p$. Used in coding theory, cryptography (elliptic curve cryptography over finite fields), and randomised algorithms. Reed-Solomon codes are linear codes over $\mathbb{F}_q$ for large prime powers $q$; the generator matrix spans the code as a subspace of $\mathbb{F}_q^n$.
Practical AI note. Transformers use real vector spaces (or approximately real, since floating-point arithmetic is finite-precision). Quantum transformers (theoretical) would use $\mathbb{C}^n$. Reliable communication protocols for distributed training use $\mathbb{F}_2$ codes for error correction. GPU hardware operates over an approximation of $\mathbb{R}$ with format-specific precision (BF16, FP8, INT4).
3. Subspaces
3.1 Definition and the Three Conditions
A subset $W \subseteq V$ is a subspace of $V$ if $W$ is itself a vector space under the same operations inherited from $V$.

Checking all eight axioms would be redundant. The operations themselves are inherited from $V$: the addition of two elements of $W$ uses the same addition formula as in $V$, and similarly for scalar multiplication. Because these operations are already defined and well-behaved in $V$, the axioms involving equalities (commutativity, associativity, distributivity) are automatically satisfied for elements of $W$ - they hold for all elements of $V$, so in particular they hold for elements of $W$.

What can fail is closure: the result of an operation on elements of $W$ might fall outside $W$. And the zero vector might not be in $W$. So only three conditions must be verified:

Subspace Test. A subset $W \subseteq V$ is a subspace of $V$ if and only if:
- Non-empty / contains zero: $0 \in W$
- Closed under addition: for all $u, v \in W$: $u + v \in W$
- Closed under scalar multiplication: for all $v \in W$ and $\alpha \in \mathbb{R}$: $\alpha v \in W$

Equivalently, conditions 2 and 3 can be combined into a single condition:

Combined condition: For all $u, v \in W$ and all $\alpha, \beta \in \mathbb{R}$: $\alpha u + \beta v \in W$

This single condition says that $W$ is closed under all linear combinations of its elements. It is often the fastest way to verify that something is a subspace: show that every linear combination of elements of $W$ remains in $W$.
Strategy for proving subspace:
- Show $W$ is non-empty (usually by showing $0 \in W$ directly)
- Take arbitrary $u, v \in W$ and arbitrary $\alpha, \beta \in \mathbb{R}$
- Compute $\alpha u + \beta v$ and verify it satisfies the defining property of $W$

Strategy for disproving subspace:
- Find a specific counterexample: either $0 \notin W$, or two elements of $W$ whose sum is not in $W$, or an element of $W$ whose scalar multiple is not in $W$. Both strategies are mimicked numerically in the sketch below.
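A numpy sketch, assuming a plane through the origin and its shifted copy in $\mathbb{R}^3$ (the normal vector is arbitrary): the first passes all three checks, the second fails the zero and closure checks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate 1: plane through the origin {x in R^3 : x . normal = 0}
normal = np.array([1.0, 2.0, -1.0])
def sample_plane():
    v = rng.normal(size=3)
    return v - (v @ normal) / (normal @ normal) * normal   # project onto the plane

# Candidate 2: shifted plane {x : x . normal = 5} (not through the origin)
def sample_shifted():
    return sample_plane() + 5.0 * normal / (normal @ normal)

def in_plane(x):   return np.isclose(x @ normal, 0.0)
def in_shifted(x): return np.isclose(x @ normal, 5.0)

u, v = sample_plane(), sample_plane()
print(in_plane(np.zeros(3)), in_plane(u + v), in_plane(-2.0 * u))   # True True True

s, t = sample_shifted(), sample_shifted()
print(in_shifted(np.zeros(3)), in_shifted(s + t))                   # False False
```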
3.2 Why These Three Conditions Suffice
Each condition handles a different potential failure:
Condition 1 (contains zero): Without $0 \in W$, $W$ has no additive identity, so Axiom 3 fails. More fundamentally: if $0 \notin W$, then $W$ cannot be a vector space under the same operations as $V$, because the additive identity is uniquely determined by the operations (we proved this in 2.2).

Note that showing $0 \in W$ also shows $W$ is non-empty, which is required for $W$ to be a set with meaningful structure. In practice, checking $0 \in W$ first is the quickest way to rule out non-subspaces: if the zero vector fails to satisfy the defining property of $W$, we are done.

Condition 2 (closed under addition): Without this, addition is not even a well-defined operation on $W$ - adding two "vectors" in $W$ might produce something outside $W$. If the operation leaves the set, the set cannot be a vector space under that operation.

Condition 3 (closed under scalar multiplication): Without this, scalar multiplication is not well-defined on $W$ - scaling a vector in $W$ might produce something outside $W$. This failure also touches Axiom 4 (additive inverse): since $-v = (-1)v$ must be in $W$ for each $v \in W$, closure under scalar multiplication is required for inverses to exist.

The remaining axioms are inherited automatically. For example, commutativity: for any $u, v \in W$, we have $u + v = v + u$ because this equality holds in $V$ for all vectors, and $u, v$ are in particular vectors in $V$. The same argument applies to associativity and both distributivity laws. These properties are free.
3.3 Standard Subspace Examples

Lines through the origin in $\mathbb{R}^2$ and $\mathbb{R}^3$

$W = \{t v : t \in \mathbb{R}\}$ for a fixed non-zero direction $v$ - the line through the origin along $v$.
- Contains $0$: set $t = 0$ -> $0 \in W$
- Closed under $+$: $t_1 v + t_2 v = (t_1 + t_2) v \in W$
- Closed under scaling: $\alpha(t v) = (\alpha t) v \in W$

Dimension: 1. Basis: $\{v\}$. This is a 1-dimensional subspace - a line through the origin.
Planes through the origin in $\mathbb{R}^3$

$W = \{(x, y, z) : ax + by + cz = 0\}$ - a homogeneous linear equation.
- Contains $0$: $a \cdot 0 + b \cdot 0 + c \cdot 0 = 0$
- Closed under $+$: if $u$ and $w$ both satisfy the equation, so does $u + w$ (by linearity of the equation)
- Closed under scaling: if $u$ satisfies the equation, so does $\alpha u$

Dimension: 2 (one homogeneous linear constraint on 3 variables leaves 2 free variables).
Null space of a matrix (solution set of a homogeneous system)

$\operatorname{null}(A) = \{x \in \mathbb{R}^n : Ax = 0\}$ for $A \in \mathbb{R}^{m \times n}$.
- Contains $0$: $A0 = 0$
- Closed under $+$: $A(x + y) = Ax + Ay = 0 + 0 = 0$
- Closed under scaling: $A(\alpha x) = \alpha Ax = 0$

The null space is always a subspace of $\mathbb{R}^n$, regardless of $A$. By the Rank-Nullity Theorem (6.7): $\dim \operatorname{null}(A) = n - \operatorname{rank}(A)$.
Column space of a matrix

$\operatorname{col}(A) = \{Ax : x \in \mathbb{R}^n\}$ for $A \in \mathbb{R}^{m \times n}$.
- Contains $0$: $A0 = 0$
- Closed under $+$: $Ax + Ay = A(x + y)$
- Closed under scaling: $\alpha(Ax) = A(\alpha x)$

The column space is always a subspace of $\mathbb{R}^m$. Its dimension equals $\operatorname{rank}(A)$.
Symmetric matrices

$\operatorname{Sym}_n = \{A \in \mathbb{R}^{n \times n} : A^\top = A\}$.
- Contains $0$: $0^\top = 0$
- Closed under $+$: $(A + B)^\top = A^\top + B^\top = A + B$
- Closed under scaling: $(\alpha A)^\top = \alpha A^\top = \alpha A$

Dimension: $n(n+1)/2$. A basis consists of all matrices $E_{ii}$ (diagonal) and $E_{ij} + E_{ji}$ (off-diagonal symmetric pairs) for $i < j$. Symmetric matrices appear constantly in AI: covariance matrices, Gram matrices, Hessians, and attention score matrices (in some formulations) are all symmetric.
Traceless matrices

$\{A \in \mathbb{R}^{n \times n} : \operatorname{tr}(A) = 0\}$.
- Contains $0$: $\operatorname{tr}(0) = 0$
- Closed under $+$: $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B) = 0$
- Closed under scaling: $\operatorname{tr}(\alpha A) = \alpha \operatorname{tr}(A) = 0$

Dimension: $n^2 - 1$ (one linear constraint on $n^2$ free entries). Relevant to normalisation in transformers: the centering step of LayerNorm projects activations onto the traceless (mean-zero) subspace.
Upper triangular matrices

$\{A \in \mathbb{R}^{n \times n} : A_{ij} = 0 \text{ for } i > j\}$.

The defining condition $A_{ij} = 0$ for $i > j$ is a collection of linear constraints; closure under $+$ and scaling follows because the zero entries stay zero under linear combinations. Dimension: $n(n+1)/2$. The space of upper triangular matrices is a subspace of $\mathbb{R}^{n \times n}$.
Polynomials vanishing at a point

$W = \{p \in P_n : p(a) = 0\}$ - polynomials of degree $\le n$ with a root at a fixed point $a$.
- Contains $0$: the zero polynomial satisfies $p(a) = 0$
- Closed: if $p(a) = 0$ and $q(a) = 0$, then $(\alpha p + \beta q)(a) = \alpha p(a) + \beta q(a) = 0$

Dimension: $n$ (one linear constraint on $n + 1$ coefficients). A basis: $\{(x - a), (x - a)^2, \dots, (x - a)^n\}$.
3.4 Non-Subspace Examples
For each of the following, we identify the specific condition that fails:
Affine solution set $\{x : Ax = b\}$ with $b \neq 0$.
Condition 1 fails: $A0 = 0 \neq b$, so $0 \notin W$. Additionally, if $x, y$ are both solutions, $A(x + y) = 2b \neq b$, so condition 2 also fails. This set is a coset of $\operatorname{null}(A)$: the general solution is $x_p + \operatorname{null}(A)$ for any particular solution $x_p$.

Unit ball $\{x \in \mathbb{R}^n : \|x\| \le 1\}$.
Condition 1 holds: $0$ is in the ball. Condition 3 fails: take $x$ with $\|x\| = 1$ and $\alpha = 2$: then $\|\alpha x\| = 2 > 1$, so $\alpha x \notin W$. The unit ball is bounded; a non-trivial subspace is never bounded (it contains all scalar multiples of its vectors, so it extends to infinity in every direction it contains).

First quadrant $\{(x, y) : x \ge 0, y \ge 0\}$.
Conditions 1 and 2 hold: $0 \in W$; the sum of non-negative vectors is non-negative. Condition 3 fails: $(-1)(1, 1) = (-1, -1) \notin W$. A subspace must be closed under all scalars, including negative ones - which forces it to be symmetric about the origin.

Nonzero vectors $\mathbb{R}^n \setminus \{0\}$.
Condition 1 fails immediately: $0$ is excluded by definition. Since the zero vector is always required, this cannot be a subspace.

The probability simplex $\Delta^{n-1}$ (softmax outputs).
All three conditions fail: no zero vector; not closed under $+$; not closed under scaling. The AI implication: you cannot take general linear combinations of probability distributions - scale, subtract, extrapolate - and expect to get another valid probability distribution (only convex combinations stay in the simplex). Arithmetic must happen in logit space (pre-softmax), where the vector space structure is intact.
QUICK NON-SUBSPACE CHECK
Given a candidate set W, the fastest refutation is usually:
1. Check 0 in W - if the zero vector fails the defining property,
stop immediately. W is not a subspace.
2. Check negative scalar - find v in W and test -v in W.
This catches all sets that require positivity or a fixed norm.
3. Check sum of two "extreme" elements - if W has a boundary
condition, vectors near the boundary often sum outside W.
3.5 The Two Trivial Subspaces
Every vector space $V$ has exactly two subspaces that are always present, regardless of what $V$ is:
- $\{0\}$ - the trivial subspace, containing only the zero vector. Dimension 0.
- $V$ itself - the whole space. Dimension $\dim V$.

Any subspace $W$ satisfies $\{0\} \subseteq W \subseteq V$.

A proper subspace is any subspace $W \neq V$ (not the whole space). A non-trivial subspace is any subspace $W \neq \{0\}$ (not just the zero vector). The interesting subspaces are the non-trivial proper ones - the lines, planes, hyperplanes, and higher-dimensional flats that live inside $V$ without being all of $V$.

For AI, interpreting these extremes:
- A layer that writes only the zero vector to the residual stream corresponds to the trivial subspace - it contributes nothing.
- A layer with full rank that can write anywhere in the residual stream corresponds to the full space - it has maximum expressive power.
- Most layers operate in a proper non-trivial subspace: they can produce outputs in a $k$-dimensional subspace of the residual stream with $0 < k < d_{\text{model}}$.
4. Span and Linear Combinations
4.1 Linear Combinations
A linear combination of vectors $v_1, v_2, \dots, v_k$ is any vector of the form:

$$\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k$$

where $\alpha_1, \dots, \alpha_k$ are arbitrary scalars called the coefficients of the linear combination.

A linear combination uses only the two vector space operations: scalar multiplication (to produce each $\alpha_i v_i$) and addition (to sum them). By the closure axioms, any linear combination of vectors in $V$ produces another vector in $V$. By the closure conditions for subspaces, any linear combination of vectors in a subspace $W$ produces another vector in $W$.
Special cases of linear combinations:
| Coefficients | Name | Meaning |
|---|---|---|
| Any $\alpha_i \in \mathbb{R}$ | Linear combination | General; unrestricted |
| $\sum_i \alpha_i = 1$ | Affine combination | Preserves "position" (stays in affine hull) |
| $\sum_i \alpha_i = 1$ and $\alpha_i \ge 0$ | Convex combination | Stays in convex hull (interpolation) |
| $\alpha_i \in \{0, 1\}$ | Subset sum | Used in combinatorics over $\mathbb{F}_2$ |
For AI, the most important special case is the unconstrained linear combination - it defines the span of a set of vectors, which in turn defines the reachable outputs of a linear layer.
4.2 The Span
The span of a set $S = \{v_1, \dots, v_k\} \subseteq V$ is the set of all linear combinations of vectors in $S$:

$$\operatorname{span}(S) = \{\alpha_1 v_1 + \dots + \alpha_k v_k : \alpha_1, \dots, \alpha_k \in \mathbb{R}\}$$

Theorem: $\operatorname{span}(S)$ is always a subspace of $V$.

Proof:
- Contains $0$: set all $\alpha_i = 0$ -> $0 \in \operatorname{span}(S)$
- Closed under addition: if $u = \sum_i \alpha_i v_i$ and $w = \sum_i \beta_i v_i$, then $u + w = \sum_i (\alpha_i + \beta_i) v_i \in \operatorname{span}(S)$
- Closed under scalar multiplication: if $u = \sum_i \alpha_i v_i$ then $\gamma u = \sum_i (\gamma \alpha_i) v_i \in \operatorname{span}(S)$

Minimality: $\operatorname{span}(S)$ is the smallest subspace containing all vectors in $S$. Any subspace that contains all the vectors $v_1, \dots, v_k$ must also contain all their linear combinations (by closure), and hence must contain $\operatorname{span}(S)$.

Conventions:
- $\operatorname{span}(\emptyset) = \{0\}$ - the span of the empty set is the trivial subspace. (An empty sum equals zero.)
- The span grows as $S$ grows: if $S \subseteq T$ then $\operatorname{span}(S) \subseteq \operatorname{span}(T)$.
- Adding a vector already in $\operatorname{span}(S)$ does not increase the span.
4.3 Geometric Pictures of Span
Geometric intuition for span is crucial and carries directly into AI:
GEOMETRIC MEANING OF SPAN

span{v} (one non-zero vector in R^n):
  A line through the origin in the direction of v.
  All scalar multiples of v: {alpha v : alpha in R}.
  Always a 1-dimensional subspace.

span{u, v} (two non-parallel vectors in R^n):
  A plane through the origin containing both directions.
  All combinations alpha u + beta v: a 2-dimensional subspace.
  (If u and v ARE parallel: same as span{u} - 1-dimensional.)

span{u, v, w} (in R^3):
  If u, v, w are coplanar -> a plane (2D subspace).
  If not coplanar -> all of R^3 (3D subspace = whole space).

span{v_1, ..., v_k} (in R^n):
  Dimension = rank of the matrix [v_1 | v_2 | ... | v_k].
  The "flat hull" of the vectors, passing through the origin.
  It is the column space of the matrix with columns v_1, ..., v_k.
The key dimension count. Given $k$ vectors in $\mathbb{R}^n$:
- If $k < n$: span has dimension at most $k$ (a proper subspace)
- If $k = n$: span has dimension $n$ iff the vectors are independent (iff the matrix is invertible)
- If $k > n$: span has dimension at most $n$; the vectors must be linearly dependent
4.4 Spanning Sets
A set $S$ spans $V$ (or is a spanning set for $V$) if $\operatorname{span}(S) = V$ - equivalently, if every vector in $V$ can be expressed as a linear combination of vectors in $S$.

Standard examples:
- $\{e_1, \dots, e_n\}$ spans $\mathbb{R}^n$ (standard basis, by definition of coordinates).
- $\{e_1, e_2, e_1 + e_2\}$ spans $\mathbb{R}^2$ - but is redundant; $\{e_1, e_2\}$ spans already.
- $\{1, x, \dots, x^n\}$ spans $P_n$: every $p(x) = a_0 + a_1 x + \dots + a_n x^n$ can be written as a combination of these monomials.
- $\{1, x, x^2, \dots\}$ spans $P$ (every polynomial is a linear combination of monomials).
- The columns of a matrix $A \in \mathbb{R}^{m \times n}$ span $\operatorname{col}(A)$; they span all of $\mathbb{R}^m$ iff $A$ has full row rank ($\operatorname{rank}(A) = m$).

AI application. The columns of a weight matrix $W \in \mathbb{R}^{m \times n}$ span $\operatorname{col}(W)$. If $\operatorname{rank}(W) = r < m$, then only an $r$-dimensional subspace of $\mathbb{R}^m$ is reachable: regardless of the input, the layer's output can never leave the $r$-dimensional column space. Outputs in the orthogonal complement $\operatorname{col}(W)^\perp$ are permanently inaccessible from this layer. They must come from other components (residual connections, other layers).

A spanning set need not be a basis - it may contain redundant vectors. The minimal spanning sets are the bases (Section 6).
4.5 Span in Terms of Matrices
Given vectors $v_1, \dots, v_k \in \mathbb{R}^n$, form the matrix:

$$A = [\, v_1 \mid v_2 \mid \cdots \mid v_k \,] \in \mathbb{R}^{n \times k}$$

Then:

$$\operatorname{span}\{v_1, \dots, v_k\} = \operatorname{col}(A) = \{Ac : c \in \mathbb{R}^k\}$$

This connects span to:
- Column space: the span of the vectors = the column space of the matrix they form
- Rank: $\dim \operatorname{span}\{v_1, \dots, v_k\} = \operatorname{rank}(A)$
- Linear system solvability: $b \in \operatorname{span}\{v_1, \dots, v_k\}$ if and only if the system $Ac = b$ is consistent, which by the Rouché-Capelli theorem holds if and only if $\operatorname{rank}(A) = \operatorname{rank}([A \mid b])$.

Algorithm for membership testing. To check whether $b \in \operatorname{span}\{v_1, \dots, v_k\}$:
- Form the augmented matrix $[A \mid b]$
- Row reduce to RREF
- If no row of the form $[\,0 \ \cdots \ 0 \mid c\,]$ with $c \neq 0$ appears: $b \in \operatorname{span}$
- If such a row appears: $b \notin \operatorname{span}$

If $b \in \operatorname{span}$, the RREF also yields the coordinates expressing $b$ as a linear combination.
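A numpy sketch of the membership test using the rank criterion rather than explicit RREF (numerically more robust; the vectors and tolerance are illustrative):

```python
import numpy as np

def in_span(vectors, b, tol=1e-10):
    """Test b in span(vectors) by comparing rank(A) with rank([A | b])."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A, tol=tol) == np.linalg.matrix_rank(
        np.column_stack([A, b]), tol=tol)

v1, v2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
print(in_span([v1, v2], np.array([2.0, 3.0, 5.0])))   # True:  2*v1 + 3*v2
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))   # False: leaves the plane

# If b is in the span, least squares recovers the coordinates:
coeffs, *_ = np.linalg.lstsq(np.column_stack([v1, v2]),
                             np.array([2.0, 3.0, 5.0]), rcond=None)
print(coeffs)                                          # [2. 3.]
```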
5. Linear Independence
5.1 Definition of Linear Independence
Vectors $v_1, v_2, \dots, v_k$ are linearly independent if the only linear combination that equals the zero vector is the trivial one - all coefficients zero:

$$\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k = 0 \implies \alpha_1 = \alpha_2 = \dots = \alpha_k = 0$$

Equivalently: no vector in the set is a linear combination of the others; each vector contributes a genuinely new direction that cannot be reached from the others.

The vectors are linearly dependent if the negation holds: there exist coefficients $\alpha_1, \dots, \alpha_k$, not all zero, such that $\alpha_1 v_1 + \dots + \alpha_k v_k = 0$. In this case, at least one vector is redundant - it can be expressed as a linear combination of the others.
The contrast is worth making explicit:
INDEPENDENT vs DEPENDENT
Independent: each vector adds a new direction.
The set is "lean" - nothing is redundant.
Remove any one vector: the span strictly shrinks.
dim(span) = k (maximum possible given k vectors)
Dependent: at least one vector is a combination of the others.
The set contains redundancy.
Remove the dependent vector: span unchanged.
dim(span) < k (less than the number of vectors)
Why it matters. Linear independence is what makes a basis a basis. Without it, some vectors in a spanning set are superfluous - they do not add to the span. A basis is the most efficient spanning set: independent and spanning.
5.2 The Dependence Relation
If $v_1, \dots, v_k$ are linearly dependent, there exist coefficients $\alpha_1, \dots, \alpha_k$ (not all zero) with $\sum_i \alpha_i v_i = 0$. Let $j$ be any index with $\alpha_j \neq 0$. Then:

$$v_j = -\frac{1}{\alpha_j} \sum_{i \neq j} \alpha_i v_i$$

So $v_j$ is a linear combination of the other vectors. Consequently, removing $v_j$ from the set does not reduce the span:

$$\operatorname{span}\{v_1, \dots, v_k\} = \operatorname{span}\{v_1, \dots, \hat{v}_j, \dots, v_k\}$$

where $\hat{v}_j$ means $v_j$ is omitted. This is the operational meaning of dependence: you can identify and remove dependent vectors without losing span. Row reduction does exactly this.
5.3 Checking Linear Independence
Matrix method (universal).
Form the matrix $A = [\, v_1 \mid \cdots \mid v_k \,]$ and row reduce to RREF.
- Linearly independent iff every column of the RREF has a pivot iff $\operatorname{rank}(A) = k$ iff $\operatorname{null}(A) = \{0\}$
- Linearly dependent iff some column has no pivot iff $\operatorname{rank}(A) < k$ iff $\operatorname{null}(A) \neq \{0\}$

When dependent, the RREF reveals which vectors are dependent and expresses them in terms of the pivot columns.

Determinant test (square matrices only, $k = n$).
$v_1, \dots, v_n \in \mathbb{R}^n$ are linearly independent iff $\det[\, v_1 \mid \cdots \mid v_n \,] \neq 0$.

SVD / singular value test (numerical).
$\sigma_{\min}(A) > \varepsilon$ for some tolerance $\varepsilon$ iff the columns are linearly independent to numerical precision $\varepsilon$. The smallest singular value measures "how far from dependent" the columns are. When $\sigma_{\min} \approx 0$, the columns are nearly dependent (nearly lying in a lower-dimensional subspace).

Gram matrix test.
The Gram matrix is $G = A^\top A \in \mathbb{R}^{k \times k}$ with $G_{ij} = \langle v_i, v_j \rangle$. The columns are linearly independent iff $G$ is positive definite (all eigenvalues $> 0$) iff $\det(G) \neq 0$.
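The three numerical tests side by side, in a short numpy sketch (the 5x3 example is arbitrary; the third column of `B` is deliberately forced to be the sum of the first two):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))                                   # generic columns: independent
B = np.column_stack([A[:, 0], A[:, 1], A[:, 0] + A[:, 1]])    # forced dependence

for name, M in [("A", A), ("B", B)]:
    k = M.shape[1]
    rank_test  = np.linalg.matrix_rank(M) == k                # rank test
    sigma_min  = np.linalg.svd(M, compute_uv=False)[-1]       # SVD test
    gram_det   = np.linalg.det(M.T @ M)                       # Gram matrix test
    print(name, rank_test, sigma_min, gram_det)
# A: True,  sigma_min well above 0, det(Gram) > 0
# B: False, sigma_min ~ 0,          det(Gram) ~ 0
```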
5.4 Key Properties
Any set containing $0$ is linearly dependent.
$1 \cdot 0 + 0 \cdot v_2 + \dots + 0 \cdot v_k = 0$ - a non-trivial combination. So $0$ is always dependent on any other set of vectors. This is consistent with the subspace requirement: $0$ carries no directional information.

A single non-zero vector is always linearly independent.
$\alpha v = 0$ with $v \neq 0$ forces $\alpha = 0$ (by the cancellation law proved in 2.2). So $\{v\}$ is independent whenever $v \neq 0$.

Two vectors are linearly dependent iff one is a scalar multiple of the other.
Suppose $\alpha u + \beta v = 0$ with $(\alpha, \beta) \neq (0, 0)$. If $\alpha \neq 0$: $u = -(\beta/\alpha) v$. If $\beta \neq 0$: $v = -(\alpha/\beta) u$. So dependence of two vectors = one is a scalar multiple of the other = they point in the same (or opposite) direction.

More vectors than dimensions forces dependence.
If $k > n$ for vectors in $\mathbb{R}^n$, they are always linearly dependent. The matrix $A = [\, v_1 \mid \cdots \mid v_k \,] \in \mathbb{R}^{n \times k}$ has rank at most $n < k$; by rank-nullity, $\operatorname{null}(A)$ is non-trivial, giving a non-trivial dependence relation.

This has a profound implication: in $\mathbb{R}^n$, you cannot have more than $n$ linearly independent vectors. In a $d$-dimensional embedding space, there are at most $d$ independent directions. Any collection of more than $d$ feature vectors must be linearly dependent.

Subsets of independent sets are independent.
If $v_1, \dots, v_k$ are linearly independent, so is every subset. (Removing vectors from an independent set cannot introduce a new dependence - the remaining vectors were already independent when part of the larger set.)

Supersets of dependent sets are dependent.
If $v_1, \dots, v_k$ are linearly dependent, so is every superset. (The same non-trivial dependence relation, with zero coefficients for the added vectors, still witnesses dependence.)
5.5 Linear Independence in AI Contexts
Attention heads. If the attention heads write to linearly independent subspaces of the residual stream (their OV circuits have mutually orthogonal column spaces), then each head provides genuinely independent information. Dependent heads write to overlapping subspaces; they interfere and are candidates for pruning. Head redundancy is exactly the linear dependence of the subspaces they write to.

The superposition hypothesis. Elhage et al. (2022) observe that large language models appear to represent more features than the embedding dimension $d$ allows. Since more than $d$ linearly independent directions cannot exist in $\mathbb{R}^d$, the features must be stored as linearly dependent (non-orthogonal) directions. This is the superposition hypothesis: features are packed into the embedding space at densities exceeding the dimension, using non-orthogonal directions. Each feature "leaks" into the directions of nearby features - a phenomenon called polysemanticity. Linear independence is the threshold: below $d$ features, they can be orthogonal and non-interfering; beyond $d$, interference is inevitable.

LoRA rank selection. In LoRA, the update is $\Delta W = BA$ with $B \in \mathbb{R}^{m \times r}$, $A \in \mathbb{R}^{r \times n}$. The rank of $\Delta W$ equals the number of linearly independent columns in $B$ that survive multiplication by $A$ (at most $r$). If the columns of $B$ are linearly dependent, the effective rank is less than $r$; some of the rank budget is wasted. Choosing $r$ large helps only if the corresponding columns of $B$ actually end up spanning an $r$-dimensional space after training.

Gradient directions across batches. If gradients from different batches are linearly independent, each batch provides new information about the loss landscape. If gradients are nearly parallel (nearly dependent), the batches are redundant - they are teaching the model the same thing. Gradient diversity, measured by the rank or effective dimension of the gradient matrix $[\, g_1 \mid g_2 \mid \cdots \,]$, is a measure of how much information each batch contributes.

Feature directions in representation learning. Each probing direction in an embedding space is a vector in $\mathbb{R}^d$. For concepts to be probed without interference, their associated directions should be linearly independent (ideally orthogonal). The maximum number of perfectly non-interfering linear features is $d$. Beyond that, interference is structural - a consequence of linear independence breaking down.
6. Basis and Dimension
6.1 Definition of a Basis
A basis for a vector space $V$ is a set $B = \{b_1, \dots, b_n\} \subseteq V$ satisfying two conditions simultaneously:
- Linearly independent: the vectors $b_1, \dots, b_n$ are linearly independent
- Spans $V$: every vector in $V$ is a linear combination of the $b_i$

A basis is the most efficient possible spanning set: it spans with no redundancy. Equivalently:
- A basis is a minimal spanning set: remove any single vector and the set no longer spans
- A basis is a maximal independent set: add any vector from $V$ and the set becomes linearly dependent
These three characterisations - (independent + spanning), (minimal spanning), (maximal independent) - are all equivalent. Each one defines the same objects.
THE ROLE OF A BASIS
A basis gives V a coordinate system.
Once you fix a basis {b_1, ..., b_n} for V:
- Every vector v in V corresponds to a unique n-tuple (alpha_1, ..., alpha_n)
- The n-tuple is the vector's coordinates in this basis
- V becomes "identified" with R^n via this correspondence
Different bases = different coordinate systems for the same space.
The space itself is unchanged; only the coordinatisation differs.
In AI: the "basis" of the residual stream is not canonical.
Attention heads and MLP layers operate in their own "preferred"
bases, related to the standard basis by learned rotations.
6.2 The Unique Representation Theorem
Theorem. If $B = \{b_1, \dots, b_n\}$ is a basis for $V$, then every vector $v \in V$ has a unique representation:

$$v = \alpha_1 b_1 + \alpha_2 b_2 + \dots + \alpha_n b_n$$

The scalars $\alpha_1, \dots, \alpha_n$ are called the coordinates (or components) of $v$ with respect to basis $B$, written $[v]_B = (\alpha_1, \dots, \alpha_n)$.

Proof of uniqueness. Suppose $v = \sum_i \alpha_i b_i$ and $v = \sum_i \beta_i b_i$. Subtract:

$$0 = \sum_i (\alpha_i - \beta_i) b_i$$

By linear independence of the basis vectors, all coefficients are zero: $\alpha_i - \beta_i = 0$, so $\alpha_i = \beta_i$ for all $i$.

Why uniqueness matters. Without uniqueness, coordinates would be ambiguous - the same vector could have multiple different "addresses" in the same coordinate system. The combination of independence (ensures uniqueness) and spanning (ensures existence) is precisely what makes a basis the right tool for establishing a coordinate system.

The isomorphism. The unique representation theorem says that every $n$-dimensional vector space $V$ is isomorphic to $\mathbb{R}^n$: the map $v \mapsto [v]_B$ is a bijection that preserves both operations (addition and scalar multiplication). This is why working in coordinates is always valid: the coordinate map translates the abstract vector space structure faithfully into $\mathbb{R}^n$.
6.3 Dimension
Theorem. All bases for a given vector space $V$ have the same number of vectors. This number is the dimension of $V$, written $\dim V$.

Proof sketch. Suppose $B = \{b_1, \dots, b_m\}$ and $C = \{c_1, \dots, c_n\}$ are both bases for $V$. Since $B$ spans $V$ and $C$ is independent, by the Steinitz Exchange Lemma (or the Replacement Theorem), we must have $n \le m$. By symmetry, $m \le n$. Hence $m = n$.
Dimension is the fundamental measure of the "size" of a vector space - the number of independent directions it contains. It is an intrinsic property of the space, independent of any particular basis.
Dimensions of standard spaces:
| Space | Dimension | Basis |
|---|---|---|
| $\mathbb{R}^n$ | $n$ | $\{e_1, \dots, e_n\}$ (standard) |
| $\operatorname{Sym}_n$ (symmetric $n \times n$) | $n(n+1)/2$ | $E_{ii}$; $E_{ij} + E_{ji}$ for $i < j$ |
| $\operatorname{Skew}_n$ (skew-symmetric) | $n(n-1)/2$ | $E_{ij} - E_{ji}$ for $i < j$ |
| $\operatorname{Diag}_n$ (diagonal) | $n$ | $E_{11}, \dots, E_{nn}$ |
| $P$, $C([a,b])$, $L^2$ | $\infty$ | Countably infinite bases exist |
Dimension inequalities. For a subspace $W \subseteq V$:
- $\dim W \le \dim V$
- $W \subseteq V$ and $\dim W = \dim V$ together imply $W = V$ (in finite dimensions)

This last point is crucial: if you find a subspace with the same dimension as the whole space, it must equal the whole space. In particular, a subspace of $\mathbb{R}^n$ with dimension $n$ is all of $\mathbb{R}^n$.
6.4 Standard Bases
Standard basis for $\mathbb{R}^n$:

$$e_1 = (1, 0, \dots, 0), \quad e_2 = (0, 1, \dots, 0), \quad \dots, \quad e_n = (0, 0, \dots, 1)$$

The coordinates of any vector in the standard basis are simply its components: $[v]_E = v$. This is why the standard basis is the "natural" choice for $\mathbb{R}^n$ - coordinates and components coincide.

Standard basis for $\mathbb{R}^{m \times n}$:

$E_{ij}$ is the matrix with a 1 in position $(i, j)$ and 0 elsewhere. There are $mn$ such matrices, and they form a basis for $\mathbb{R}^{m \times n}$. The coordinate of a matrix $A$ in this basis is simply $A_{ij}$.

Standard basis for $P_n$:

$\{1, x, x^2, \dots, x^n\}$ - the monomials. The coordinates of $p(x) = a_0 + a_1 x + \dots + a_n x^n$ are its coefficients $(a_0, a_1, \dots, a_n)$.

Fourier basis for $L^2([0, 2\pi])$:

$$\left\{ \frac{1}{\sqrt{2\pi}}, \ \frac{\cos(nx)}{\sqrt{\pi}}, \ \frac{\sin(nx)}{\sqrt{\pi}} : n = 1, 2, 3, \dots \right\}$$

An orthonormal basis. The coordinates of $f$ in this basis are the Fourier coefficients. This is the infinite-dimensional analogue of an orthonormal basis in $\mathbb{R}^n$.

Eigenbasis. For a linear map $T$ with $n$ linearly independent eigenvectors $v_1, \dots, v_n$, this set forms a basis called the eigenbasis of $T$. In the eigenbasis, $T$ acts diagonally: $T v_i = \lambda_i v_i$. This diagonalises $T$, making its action transparent. PCA uses the eigenbasis of the covariance matrix; the action of the covariance matrix in its eigenbasis is pure scaling by eigenvalues (variances).
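A short numpy sketch of the covariance eigenbasis (the anisotropic scales 3.0, 1.0, 0.2 are arbitrary): in the eigenbasis returned by `eigh`, the covariance matrix acts by pure scaling of coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3)) @ np.diag([3.0, 1.0, 0.2])   # anisotropic data
C = np.cov(X, rowvar=False)                                  # 3x3 covariance matrix

eigvals, V = np.linalg.eigh(C)          # columns of V: orthonormal eigenbasis
y = rng.normal(size=3)

# In the eigenbasis, C acts by scaling each coordinate by its eigenvalue:
y_coords = V.T @ y                      # coordinates of y in the eigenbasis
print(np.allclose(C @ y, V @ (eigvals * y_coords)))   # True
```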
6.5 Change of Basis
The same vector can be described by different coordinate tuples in different bases. Understanding how coordinates transform under a change of basis is essential for working in multiple representations simultaneously - which is exactly what happens in a transformer, where the residual stream has a "standard" basis but each component may operate in a different preferred basis.
Setup. Let $B = \{b_1, \dots, b_n\}$ and $C = \{c_1, \dots, c_n\}$ be two bases for $V$. The change-of-basis matrix $P_{C \leftarrow B}$ is the matrix whose $j$-th column is the coordinate representation of $b_j$ in basis $C$:

$$P_{C \leftarrow B} = [\, [b_1]_C \mid [b_2]_C \mid \cdots \mid [b_n]_C \,]$$

Coordinate transformation. For any $v \in V$:

$$[v]_C = P_{C \leftarrow B} \, [v]_B$$

Properties:
- $P_{C \leftarrow B}$ is always invertible (since both $B$ and $C$ are bases)
- $P_{B \leftarrow C} = P_{C \leftarrow B}^{-1}$ - the inverse change-of-basis goes the other way
- Composition: $P_{D \leftarrow B} = P_{D \leftarrow C} \, P_{C \leftarrow B}$

Linear maps under change of basis. If a linear map has matrix $A$ in basis $B$, its matrix in basis $C$ is:

$$A' = P_{C \leftarrow B} \, A \, P_{C \leftarrow B}^{-1}$$

This is a similarity transformation: $A$ and $A'$ represent the same linear map in different bases. They have the same eigenvalues, rank, trace, and determinant - these are basis-independent properties of the map.

AI application: the residual stream basis. In a transformer, the residual stream has dimension $d_{\text{model}}$ and technically lives in $\mathbb{R}^{d_{\text{model}}}$ with the standard basis. But each attention head's OV circuit is parameterised by $W_V$ and $W_O$: the head reads in a low-dimensional subspace and writes in another. The "natural basis" for a head is different from the standard basis of $\mathbb{R}^{d_{\text{model}}}$. Change-of-basis matrices connect these representations. When we say a head "rotates" the residual stream, we mean it applies a change-of-basis transformation to the component it reads.
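A numpy sketch of the coordinate bookkeeping (the basis `C` is a random invertible matrix, assumed for illustration): it checks $[v]_C = P_{C \leftarrow B}[v]_B$ and that similar matrices share eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Two bases for R^3, stored as columns.
B = np.eye(n)                                   # standard basis
C = rng.normal(size=(n, n))                     # a generic (hence invertible) basis

# Coordinates of v in a basis M: solve M @ coords = v.
v = np.array([1.0, 2.0, 3.0])
v_B = np.linalg.solve(B, v)                     # = v itself
v_C = np.linalg.solve(C, v)

# Change-of-basis matrix from B to C: columns are [b_j]_C.
P = np.linalg.solve(C, B)
print(np.allclose(P @ v_B, v_C))                # True: [v]_C = P [v]_B

# A linear map under change of basis: similar matrices share eigenvalues.
A_B = rng.normal(size=(n, n))                   # the map's matrix in basis B
A_C = P @ A_B @ np.linalg.inv(P)
print(np.allclose(np.sort(np.linalg.eigvals(A_B)),
                  np.sort(np.linalg.eigvals(A_C))))   # True
```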
6.6 Constructing a Basis
From a spanning set (row reduction).
Given a spanning set $\{v_1, \dots, v_k\}$ for some subspace $W$:
- Form the matrix $A = [\, v_1 \mid \cdots \mid v_k \,]$
- Row reduce to RREF
- The pivot columns of $A$ (not the RREF, but the original $A$) form a basis for $W$

The non-pivot columns correspond to dependent vectors that can be removed without reducing the span.

From a null space (free variables).
To find a basis for $\operatorname{null}(A)$:
- Row reduce $A$ to RREF; identify free variables
- For each free variable $x_j$, set $x_j = 1$ and all other free variables to $0$; solve for pivot variables
- The resulting vector is one basis vector for $\operatorname{null}(A)$
- Repeat for each free variable; the $n - r$ vectors form a basis for $\operatorname{null}(A)$

Gram-Schmidt (from independent vectors to orthonormal basis).
Given linearly independent vectors $v_1, \dots, v_k$, produce an orthonormal basis $q_1, \dots, q_k$ for $\operatorname{span}\{v_1, \dots, v_k\}$. See Section 9.3 for the full procedure.

Extending a basis.
Given a basis $\{w_1, \dots, w_k\}$ for a subspace $W \subseteq V$, extend to a basis for all of $V$:
- Start with $\{w_1, \dots, w_k\}$
- Find any $v \notin \operatorname{span}\{w_1, \dots, w_k\}$ and add it: the enlarged set is still independent
- Repeat until the set spans $V$

This process terminates in at most $\dim V - k$ steps. A sympy sketch of the first two constructions follows.
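A sympy sketch of basis extraction from a spanning set and of the null space basis (the three vectors are chosen with a deliberate redundancy $v_3 = v_1 + v_2$):

```python
import sympy as sp

# Spanning set with a redundancy: v3 = v1 + v2.
v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 2]
A = sp.Matrix.hstack(sp.Matrix(v1), sp.Matrix(v2), sp.Matrix(v3))

_, pivots = A.rref()
print(pivots)                       # (0, 1): columns 0 and 1 of A form a basis
basis = [A[:, j] for j in pivots]   # pivot columns of the ORIGINAL A

# Null space basis via the free-variable recipe (sympy does the RREF bookkeeping):
print(A.nullspace())                # [Matrix([-1, -1, 1])]: witnesses v1 + v2 - v3 = 0
```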
6.7 Dimension Counting Theorems
These theorems relate the dimensions of related subspaces. They are among the most useful tools in linear algebra.
Rank-Nullity Theorem. For any matrix $A \in \mathbb{R}^{m \times n}$:

$$\operatorname{rank}(A) + \dim \operatorname{null}(A) = n$$

In words: the number of "invisible" input directions plus the number of "active" input directions equals the total number of input dimensions.

Proof idea. Row reduce to RREF. Let $r = \operatorname{rank}(A)$. There are $r$ pivot columns (corresponding to independent output directions -> $\dim \operatorname{col}(A) = r$) and $n - r$ free columns (corresponding to null space basis vectors -> $\dim \operatorname{null}(A) = n - r$). Total: $r + (n - r) = n$.

AI application. For a weight matrix $W \in \mathbb{R}^{m \times n}$ with rank $r$ (verified numerically in the sketch after this list):
- $r$ input dimensions "pass through" to the output (row space directions)
- $n - r$ input dimensions are silenced (null space directions)
- The layer uses $r$ out of $n$ available input dimensions
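A quick numerical check of rank-nullity, assuming an illustrative rank-4 weight matrix built as a product of thin factors:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m, n, r = 6, 10, 4
W = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # a rank-4 matrix in R^(6x10)

rank = np.linalg.matrix_rank(W)
nullity = null_space(W).shape[1]        # columns = orthonormal basis of null(W)
print(rank, nullity, rank + nullity)    # 4 6 10  -> rank + nullity = n
```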
Dimension of a Sum. For subspaces $U, W \subseteq V$:

$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$$

This is the Grassmann formula or inclusion-exclusion for dimensions. It is the direct analogue of $|A \cup B| = |A| + |B| - |A \cap B|$ for finite sets, but for dimensions of subspaces.

Key consequence. If $U \cap W = \{0\}$ (intersection is trivial):

$$\dim(U + W) = \dim U + \dim W$$

and the sum is a direct sum: $U \oplus W$. Dimensions add when the subspaces are independent (share only the origin).

Dimension bounds for intersection. Given $\dim U = p$, $\dim W = q$, $U, W \subseteq \mathbb{R}^n$:

$$\max(0,\; p + q - n) \;\le\; \dim(U \cap W) \;\le\; \min(p, q)$$

The upper bound: the intersection is contained in both $U$ and $W$, so its dimension cannot exceed either. The lower bound: from the Grassmann formula and $\dim(U + W) \le n$.

AI application. Two attention heads with $d_{\text{head}}$-dimensional subspaces in $\mathbb{R}^{d_{\text{model}}}$: their subspace intersection has dimension at least $2 d_{\text{head}} - d_{\text{model}}$. If $2 d_{\text{head}} > d_{\text{model}}$, the heads must share at least $2 d_{\text{head}} - d_{\text{model}}$ dimensions; their communication channels overlap inevitably.
Complement dimension. For a subspace $W \subseteq \mathbb{R}^n$ with $\dim W = k$:

$$\dim W^{\perp} = n - k$$

So $\dim W + \dim W^{\perp} = n$. The orthogonal complement "fills in" the remaining dimensions. The fundamental subspaces obey:

$$\dim \operatorname{row}(A) + \dim \operatorname{null}(A) = n$$

which is just the Rank-Nullity Theorem re-expressed using the fact that $\operatorname{row}(A)^{\perp} = \operatorname{null}(A)$.
7. The Four Fundamental Subspaces
Recall: Earlier sections used these subspaces informally:

- Systems of Equations 4.3 - used $\operatorname{col}(A)$ and $\operatorname{null}(A)$ to characterise when $Ax = b$ is solvable
- Matrix Rank 4 - used rank and null space dimension to state the rank-nullity theorem

This section is the canonical home: rigorous definitions, dimensional identities, bases computed from RREF, the orthogonality relationships, and their geometric interpretation as the complete decomposition of $\mathbb{R}^n$ and $\mathbb{R}^m$.

For any matrix $A \in \mathbb{R}^{m \times n}$ with $\operatorname{rank}(A) = r$, Gilbert Strang identified four fundamental subspaces that together give a complete picture of how $A$ acts as a linear map $\mathbb{R}^n \to \mathbb{R}^m$. Understanding all four - their definitions, dimensions, bases, and mutual relationships - is the complete theory of linear systems.
7.1 Definition of All Four Subspaces
1. Column space $\operatorname{col}(A) = \{Ax : x \in \mathbb{R}^n\} \subseteq \mathbb{R}^m$ (also called the image or range):

- The set of all possible outputs of $A$; the "reachable" subspace in $\mathbb{R}^m$
- $\dim \operatorname{col}(A) = r$
- Basis: pivot columns of $A$ (the original $A$, not its RREF)
- The system $Ax = b$ is consistent if and only if $b \in \operatorname{col}(A)$

2. Null space $\operatorname{null}(A) = \{x \in \mathbb{R}^n : Ax = 0\} \subseteq \mathbb{R}^n$ (also called the kernel):

- The set of all inputs mapped to zero; the "invisible" directions in $\mathbb{R}^n$
- $\dim \operatorname{null}(A) = n - r$ (by Rank-Nullity)
- Basis: free variable solution vectors from the RREF of $A$
- $A$ cannot distinguish $x$ from $x + z$ for any $z \in \operatorname{null}(A)$: both produce the same output

3. Row space $\operatorname{row}(A) = \operatorname{col}(A^T) \subseteq \mathbb{R}^n$:

- The set of all linear combinations of the rows of $A$; the directions in $\mathbb{R}^n$ that $A$ "listens to"
- $\dim \operatorname{row}(A) = r$ (same as column space - both equal the rank)
- Basis: non-zero rows of the RREF of $A$
- The projection of any input onto $\operatorname{row}(A)$ determines the output; the null space component of the input is discarded

4. Left null space $\operatorname{null}(A^T) = \{y \in \mathbb{R}^m : A^T y = 0\} \subseteq \mathbb{R}^m$:

- The set of all output-space vectors orthogonal to the column space of $A$
- $\dim \operatorname{null}(A^T) = m - r$ (by Rank-Nullity applied to $A^T$)
- Basis: free variable solution vectors from the RREF of $A^T$
- If $b$ has a non-zero component in $\operatorname{null}(A^T)$ then $b \notin \operatorname{col}(A)$, which means $Ax = b$ has no solution
Dimension summary:

| Subspace | Ambient space | Dimension | Complement |
|---|---|---|---|
| $\operatorname{col}(A)$ | $\mathbb{R}^m$ | $r$ | $\operatorname{null}(A^T)$ |
| $\operatorname{null}(A)$ | $\mathbb{R}^n$ | $n - r$ | $\operatorname{row}(A)$ |
| $\operatorname{row}(A)$ | $\mathbb{R}^n$ | $r$ | $\operatorname{null}(A)$ |
| $\operatorname{null}(A^T)$ | $\mathbb{R}^m$ | $m - r$ | $\operatorname{col}(A)$ |
7.2 The Orthogonality Relations
The four subspaces pair into two orthogonal decompositions:

$$\mathbb{R}^n = \operatorname{row}(A) \oplus \operatorname{null}(A), \qquad \mathbb{R}^m = \operatorname{col}(A) \oplus \operatorname{null}(A^T)$$

Proof that $\operatorname{row}(A) \perp \operatorname{null}(A)$.

Take any $x \in \operatorname{null}(A)$ and any $v \in \operatorname{row}(A)$. Then $v = A^T y$ for some $y$. Compute:

$$\langle v, x \rangle = (A^T y)^T x = y^T A x = y^T 0 = 0$$

So every null space vector is orthogonal to every row space vector.

Since $\dim \operatorname{row}(A) + \dim \operatorname{null}(A) = n$ and the two subspaces are orthogonal (hence their intersection is $\{0\}$), they form an orthogonal direct sum decomposition of $\mathbb{R}^n$. Every vector decomposes uniquely as:

$$x = x_{\text{row}} + x_{\text{null}}, \qquad x_{\text{row}} \in \operatorname{row}(A),\; x_{\text{null}} \in \operatorname{null}(A)$$

And $Ax = A x_{\text{row}}$. The null space component is silenced; only the row space component matters.
7.3 Strang's Big Picture
The complete action of $A$ as a linear map $\mathbb{R}^n \to \mathbb{R}^m$ is captured in the following diagram:

    STRANG'S FOUR FUNDAMENTAL SUBSPACES

    R^n (input space)                R^m (output space)

    row(A)                           col(A)
    dim = r          --A-->          dim = r
    "the listening space"            "the reachable space"
    A is sensitive here              A can write here

    null(A)                          null(A^T)
    dim = n - r      --A--> {0}      dim = m - r
    "the silent space"               "the unreachable space"
    A maps this to 0

Key facts:

- $A$ is a bijection from $\operatorname{row}(A)$ to $\operatorname{col}(A)$ - restricted to $\operatorname{row}(A)$, $A$ is invertible.
- $A$ maps all of $\operatorname{null}(A)$ to the single point $\{0\}$ in $\mathbb{R}^m$.
- No input - however large - can produce an output in $\operatorname{null}(A^T)$.
- The orthogonal complements pair up: $\operatorname{row}(A)^{\perp} = \operatorname{null}(A)$ and $\operatorname{col}(A)^{\perp} = \operatorname{null}(A^T)$.
The action of $A$, completely described. For any input $x = x_{\text{row}} + x_{\text{null}}$ (row space + null space decomposition):

$$Ax = A x_{\text{row}} + A x_{\text{null}} = A x_{\text{row}} + 0 = A x_{\text{row}}$$

The null space component vanishes; the row space component is mapped bijectively into the column space. The left null space is entirely inaccessible from the input side.
7.4 Computing the Four Subspaces
The systematic procedure to find bases for all four fundamental subspaces from a matrix $A \in \mathbb{R}^{m \times n}$:

Step 1: Row reduce $A$ to RREF $R$.

Identify the pivot positions (row $i$, column $j$ pairs where $R_{ij}$ is the leading 1 of row $i$). Let $r$ = number of pivots $= \operatorname{rank}(A)$.

Step 2: null(A).

- Free variables: columns of $R$ without a pivot (say columns $f_1, \dots, f_{n-r}$)
- For each free variable $x_{f_i}$: set $x_{f_i} = 1$, all other free variables $= 0$, solve for the pivot variables from $Rx = 0$
- The resulting vector is the $i$-th null space basis vector
- Repeat for $i = 1, \dots, n - r$; these vectors form a basis for $\operatorname{null}(A)$
Step 3: col(A).

- The pivot columns of the original matrix $A$ (not $R$) form a basis for $\operatorname{col}(A)$
- Specifically: if pivots appear in columns $j_1 < \cdots < j_r$, then $\{a_{j_1}, \dots, a_{j_r}\}$ (columns of $A$) is a basis

Why original $A$, not $R$? Row operations change the column space (they change the columns themselves, although they preserve the linear dependencies between columns). The pivot positions identify which columns are independent, but the actual column vectors must be taken from the original $A$ to get a basis for the original column space.

Step 4: row(A).

- The non-zero rows of $R$ (the RREF of $A$) form a basis for $\operatorname{row}(A)$
- Unlike for the column space, row operations preserve the row space; the non-zero rows of $R$ are particularly convenient linear combinations of the rows of $A$ that happen to be in reduced form

Step 5: null(A^T).

- Row reduce $A^T$ to its RREF, then apply the null space procedure (Step 2) to $A^T$
- Alternatively: if you have already computed the four dimensions (and bases for three of the four subspaces), use the complementary dimension argument
- Basis vectors of $\operatorname{null}(A^T)$ can also be read off from certain row operations applied to $A$
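The whole procedure is a few lines in sympy. A minimal sketch (with an arbitrary example matrix) that computes bases for all four subspaces and verifies the orthogonality relations of Section 7.2:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [2, 4, 0],
               [3, 6, 1]])

R, pivots = A.rref()
r = len(pivots)

col_A   = [A[:, j] for j in pivots]        # Step 3: pivot columns of original A
row_A   = [R[i, :] for i in range(r)]      # Step 4: non-zero rows of the RREF
null_A  = A.nullspace()                    # Step 2: free-variable solutions
null_AT = A.T.nullspace()                  # Step 5: same procedure applied to A^T

print(len(col_A), len(null_A), len(row_A), len(null_AT))  # r, n-r, r, m-r

# Orthogonality relations: row(A) is orthogonal to null(A),
# and col(A) is orthogonal to null(A^T).
print(all((v * x)[0] == 0 for v in row_A for x in null_A))
print(all((c.T * y)[0] == 0 for c in col_A for y in null_AT))
```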
7.5 AI Interpretation of the Four Subspaces
For a weight matrix $W \in \mathbb{R}^{m \times n}$ in a neural network layer $y = Wx + b$, the four subspaces tell the complete story of what this layer can and cannot do:

Column space col(W) - the "reachable" subspace.

Only outputs in $\operatorname{col}(W)$ can be produced by this layer. If $\operatorname{rank}(W) = r < m$, then $m - r$ dimensions of the output space receive exactly zero contribution from this layer, regardless of input. In a transformer residual stream: each attention head and MLP block writes to a specific subspace of $\mathbb{R}^{d_{\text{model}}}$; the column space of the output projection $W_O$ is the subspace the head writes to. Dimensions orthogonal to $\operatorname{col}(W_O)$ are untouched by this head.

Null space null(W) - the "invisible" subspace.

Input directions in $\operatorname{null}(W)$ are completely ignored by the layer. If an input vector has all its "mass" in the null space, the output is zero - the layer cannot see it. This is both a limitation and a design feature: in mechanistic interpretability, the null space of a query projection $W_Q$ is the set of residual stream directions that this attention head does not use to form its queries. These directions are invisible to the head's query computation.

Row space row(W) - the "listening" subspace.

The row space is the orthogonal complement of the null space: input directions in $\operatorname{row}(W)$ are the ones the layer actually "listens to". The projection of the input onto $\operatorname{row}(W)$ determines the output; the null space projection vanishes. In a transformer, the row space of $W_K$ determines which directions of the residual stream the key computation is sensitive to. Keys "live in" the row space of $W_K$.

Left null space null(W^T) - the "unreachable" output subspace.

The left null space is orthogonal to $\operatorname{col}(W)$. No input, however crafted, can produce an output with a component in this subspace from this layer. In a transformer with residual connections, the left null space of one layer's output projection is the subspace that must be written by other layers. This is why residual connections are essential: they allow information to flow "around" layers whose column spaces don't reach certain directions.
    AI INTERPRETATION: WEIGHT MATRIX SUBSPACES

    W in R^{m x n}, rank r

    R^n (input):                     R^m (output):

    row(W)                           col(W)
    dim r            --W-->          dim r
    "W listens here"                 "W writes here"

    null(W)                          null(W^T)
    dim n-r          --W--> {0}      dim m-r
    "W ignores this"                 "W never writes here"

    Transformer layer (y = Wx + b, residual):
    - Attention head h reads from row(W_Q^h), row(W_K^h), row(W_V^h)
    - Head h writes to col(W_O^h)
    - Information in null(W_Q^h) is invisible to head h's queries
    - Information in null(W_V^h) is not read by head h's values
    - The residual stream preserves ALL d_model dimensions; individual
      layers each modify only their respective column space subspaces
8. Subspace Operations
8.1 Sum of Subspaces
For subspaces $U, W \subseteq V$, their sum is:

$$U + W = \{ u + w : u \in U,\; w \in W \}$$

Theorem. $U + W$ is a subspace of $V$.

Proof.

- Contains $0$: $0 = 0 + 0 \in U + W$
- Closed under $+$: $(u_1 + w_1) + (u_2 + w_2) = (u_1 + u_2) + (w_1 + w_2) \in U + W$ (since $U$, $W$ are closed)
- Closed under scaling: $c(u + w) = cu + cw \in U + W$

$U + W$ is the smallest subspace containing both $U$ and $W$. Any subspace that contains both $U$ and $W$ must contain all sums $u + w$ (by closure), so it contains $U + W$.

Grassmann dimension formula:

$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$$

Important distinction: $U \cup W$ (set union) is not a subspace in general. Take $U$ (x-axis) and $W$ (y-axis) in $\mathbb{R}^2$. Then $(1, 0) \in U$ and $(0, 1) \in W$, but $(1, 0) + (0, 1) = (1, 1) \notin U \cup W$. The union is not closed under addition. The sum $U + W = \mathbb{R}^2$ (the whole plane) is a subspace; the union (just the two axes) is not.
8.2 Intersection of Subspaces
For subspaces $U, W \subseteq V$, their intersection is:

$$U \cap W = \{ v \in V : v \in U \text{ and } v \in W \}$$

Theorem. $U \cap W$ is a subspace of $V$.

Proof.

- Contains $0$: $0 \in U$ and $0 \in W$, so $0 \in U \cap W$
- Closed under $+$: if $u, v \in U \cap W$, then $u + v \in U$ (since $U$ is closed) and $u + v \in W$ (since $W$ is closed), so $u + v \in U \cap W$
- Closed under scaling: $cv \in U$ and $cv \in W$, so $cv \in U \cap W$

$U \cap W$ is the largest subspace contained in both $U$ and $W$.

Computing $U \cap W$: If $U = \operatorname{null}(A)$ and $W = \operatorname{null}(B)$, then:

$$U \cap W = \operatorname{null}\!\begin{pmatrix} A \\ B \end{pmatrix}$$

For general subspaces given by spanning sets: find vectors in $U$ that also lie in $W$ by setting up a linear system and solving.
8.3 Direct Sum
Subspaces $U$ and $W$ are complementary in $V$ (equivalently, $V$ is their direct sum) if:

- $U + W = V$ - they together span $V$
- $U \cap W = \{0\}$ - they share only the origin

We write $V = U \oplus W$.

Theorem. $V = U \oplus W$ if and only if every $v \in V$ has a unique decomposition $v = u + w$ with $u \in U$ and $w \in W$.

Proof of uniqueness. If $u_1 + w_1 = u_2 + w_2$, then $u_1 - u_2 = w_2 - w_1$. The left side is in $U$; the right side is in $W$. So both sides lie in $U \cap W = \{0\}$. Hence $u_1 = u_2$ and $w_1 = w_2$.

Dimension: $\dim V = \dim U + \dim W$. This follows from the Grassmann formula with $\dim(U \cap W) = 0$.

Complements are not unique. For a given subspace $W$, there are many subspaces $U$ such that $V = U \oplus W$. The orthogonal complement is the canonical, unique choice:

Theorem (for finite-dimensional inner product spaces). $V = W \oplus W^{\perp}$, and $(W^{\perp})^{\perp} = W$.

The fundamental subspace decompositions from Section 7 are orthogonal direct sums:

$$\mathbb{R}^n = \operatorname{row}(A) \oplus \operatorname{null}(A), \qquad \mathbb{R}^m = \operatorname{col}(A) \oplus \operatorname{null}(A^T)$$

Multiple direct sums. The direct sum generalises: $V = W_1 \oplus \cdots \oplus W_k$ if the $W_i$ are pairwise "independent" (each $W_i \cap \sum_{j \neq i} W_j = \{0\}$) and together span $V$. Then every $v$ has a unique decomposition $v = w_1 + \cdots + w_k$ and $\dim V = \sum_i \dim W_i$.

In the spectral theorem (Section 13), the orthogonal decomposition into eigenspaces is a multiple orthogonal direct sum: $\mathbb{R}^n = E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_k}$.
8.4 Projection onto Subspaces
Given a subspace $W \subseteq V$, the orthogonal projection $P_W$ maps each vector to the closest point in $W$:

$$P_W(v) = \arg\min_{w \in W} \|v - w\|$$

The unique minimiser exists because $W$ is a closed convex set (any subspace is convex, and in finite dimensions, closed).

Characterisation. $P_W(v)$ is the unique vector in $W$ such that $v - P_W(v) \perp W$, i.e., $\langle v - P_W(v), w \rangle = 0$ for all $w \in W$.

Formula 1: orthonormal basis.

If $W$ has orthonormal basis $q_1, \dots, q_k$, then:

$$P_W(v) = \sum_{i=1}^{k} \langle v, q_i \rangle \, q_i$$

In matrix form, with $Q = [q_1 \;\cdots\; q_k]$ (orthonormal columns):

$$P_W(v) = Q Q^T v$$

The projection matrix is $P = Q Q^T$.

Formula 2: general (non-orthonormal) basis.

If $W = \operatorname{col}(A)$ for $A$ with linearly independent columns (full column rank):

$$P_W(v) = A (A^T A)^{-1} A^T v$$

The projection matrix is $P = A (A^T A)^{-1} A^T$. The matrix $(A^T A)^{-1} A^T$ is the Moore-Penrose pseudoinverse $A^{+}$ when $A$ has full column rank.
Properties of an orthogonal projection matrix $P$:

| Property | Expression | Meaning |
|---|---|---|
| Idempotent | $P^2 = P$ | Projecting twice = projecting once |
| Symmetric | $P^T = P$ | Projection is self-adjoint |
| Eigenvalues | $\lambda \in \{0, 1\}$ | Vectors in $W$ map to themselves; vectors in $W^{\perp}$ map to $0$ |
| Rank | $\operatorname{rank}(P) = \dim W$ | Rank = dimension of the subspace projected onto |
| Trace | $\operatorname{tr}(P) = \dim W$ | Since eigenvalues are 0s and 1s; trace = sum of eigenvalues |
| Complement | $I - P$ | Projection onto $W^{\perp}$ |
Proof that $P^2 = P$: $P^2 = Q Q^T Q Q^T = Q (Q^T Q) Q^T = Q I Q^T = Q Q^T = P$ (using $Q^T Q = I$ for orthonormal columns).

Decomposition. Every $v$ decomposes as:

$$v = P v + (I - P) v$$

The residual $(I - P)v$ is the component of $v$ orthogonal to $W$, and $I - P$ is itself an orthogonal projection matrix (onto $W^{\perp}$).
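A short numpy sketch verifying these properties for a projection built from Formula 2 (the random columns are an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))          # independent columns spanning W

P = A @ np.linalg.inv(A.T @ A) @ A.T     # P = A (A^T A)^{-1} A^T

print(np.allclose(P @ P, P))             # idempotent
print(np.allclose(P.T, P))               # symmetric
print(np.isclose(np.trace(P), 2.0))      # trace = dim W = 2
print(np.linalg.matrix_rank(P) == 2)     # rank = dim W

v = rng.standard_normal(5)
residual = v - P @ v                     # (I - P) v
print(np.allclose(A.T @ residual, 0))    # residual is orthogonal to col(A)
```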
AI applications of projection:

- Least squares: the best-fit solution projects $b$ onto $\operatorname{col}(A)$; least squares is orthogonal projection
- PCA: projecting data onto the top-$k$ principal component subspace is a rank-$k$ projection $P = U_k U_k^T$ where $U_k$ contains the top $k$ left singular vectors
- Attention: (soft) attention can be viewed as a weighted projection of value vectors onto query-determined subspaces
- Concept erasure: removing a concept from an embedding by projecting onto the orthogonal complement of the concept subspace: $x' = (I - P_{\text{concept}}) x$; used in LEACE and related methods
8.5 Subspace Angles and Principal Angles
The angle between two vectors is well-defined: $\cos\theta = \frac{\langle u, v \rangle}{\|u\| \, \|v\|}$. The "angle" between two subspaces is more subtle - it requires a collection of angles called principal angles.

Definition. For subspaces $U, W \subseteq \mathbb{R}^n$ with $p = \dim U$, $q = \dim W$, $p \le q$, the principal angles $0 \le \theta_1 \le \cdots \le \theta_p \le \pi/2$ are defined recursively:

$$\cos\theta_i = \max \big\{ \langle u, w \rangle : u \in U,\, w \in W,\, \|u\| = \|w\| = 1,\; u \perp u_1, \dots, u_{i-1},\; w \perp w_1, \dots, w_{i-1} \big\}$$

with $u_i, w_i$ the maximising pair at step $i$.

Computation via SVD. If $Q_U \in \mathbb{R}^{n \times p}$ and $Q_W \in \mathbb{R}^{n \times q}$ are orthonormal bases for $U$ and $W$:

$$M = Q_U^T Q_W \in \mathbb{R}^{p \times q}$$

Then $\cos\theta_i = \sigma_i(M)$ (the $i$-th singular value of $M$). The principal angles are the arc-cosines of the singular values.
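A sketch of this computation in numpy; scipy's `subspace_angles` performs the same SVD-based calculation internally, so the two routes can be checked against each other:

```python
import numpy as np
from scipy.linalg import orth, subspace_angles

rng = np.random.default_rng(2)
Q_U = orth(rng.standard_normal((10, 3)))   # orthonormal basis, 3-dim subspace
Q_W = orth(rng.standard_normal((10, 4)))   # orthonormal basis, 4-dim subspace

# Principal angles = arccos of the singular values of Q_U^T Q_W.
sigma = np.linalg.svd(Q_U.T @ Q_W, compute_uv=False)
theta_manual = np.arccos(np.clip(sigma, -1.0, 1.0))

theta_scipy = subspace_angles(Q_U, Q_W)    # same values, descending order
print(np.allclose(np.sort(theta_manual), np.sort(theta_scipy)))
```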
Interpretation:

- $\theta_1 = 0$: the subspaces share a common direction (their intersection is non-trivial)
- $\theta_i = \pi/2$: the subspaces have a pair of orthogonal directions (they are "partially orthogonal")
- All $\theta_i = \pi/2$: the subspaces are orthogonal ($U \perp W$, i.e., $W \subseteq U^{\perp}$)
- All $\theta_i = 0$: $U \subseteq W$ or $W \subseteq U$ (one contains the other)
AI applications:
- Head overlap: principal angles between two attention heads' OV subspaces measure how much they overlap. If $\theta_1 \approx 0$, the heads share a direction and are partially redundant. If all $\theta_i \approx \pi/2$, the heads write to orthogonal subspaces - they are fully independent.
- Gradient similarity across layers: principal angles between gradient subspaces at different layers measure gradient diversity during training.
- Representation similarity: the CKA (Centered Kernel Alignment) measure of representation similarity between two networks is related to principal angles between their representation subspaces.
- LoRA subspace alignment: after fine-tuning, the principal angles between the LoRA update subspace and the top-$k$ gradient subspace reveal how well LoRA captured the gradient's preferred directions.
9. Inner Product Spaces and Orthogonality
Recall: Vectors and Spaces 5-6 developed norms and inner products concretely in $\mathbb{R}^n$ - dot product, angle, Cauchy-Schwarz, and orthogonal projection. This section extends those ideas to abstract inner product spaces: general Hilbert spaces, orthonormal bases, Gram-Schmidt orthogonalisation, and orthogonal complements. The abstract setting is what makes these concepts applicable to function spaces (Fourier analysis, kernels) and not just $\mathbb{R}^n$.
9.1 Inner Products
An inner product on a vector space $V$ is a function $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ satisfying:

- Symmetry: $\langle u, v \rangle = \langle v, u \rangle$
- Linearity in the first argument: $\langle a u + b w, v \rangle = a \langle u, v \rangle + b \langle w, v \rangle$
- Positive definiteness: $\langle v, v \rangle \ge 0$, with equality if and only if $v = 0$

Together, properties 1 and 2 imply linearity in the second argument as well (by symmetry), making the inner product bilinear: linear in each argument separately.

The induced norm is $\|v\| = \sqrt{\langle v, v \rangle}$, and the induced metric (distance) is $d(u, v) = \|u - v\|$.
Standard inner products:

| Space | Inner product | Induced norm |
|---|---|---|
| $\mathbb{R}^n$ (Euclidean) | $\langle x, y \rangle = x^T y = \sum_i x_i y_i$ | $\|x\|_2 = \sqrt{\sum_i x_i^2}$ |
| $\mathbb{R}^{m \times n}$ (Frobenius) | $\langle A, B \rangle = \operatorname{tr}(A^T B) = \sum_{i,j} A_{ij} B_{ij}$ | $\|A\|_F$ |
| $L^2([a, b])$ ($L^2$ norm) | $\langle f, g \rangle = \int_a^b f(x)\, g(x) \, dx$ | $\|f\|_{L^2}$ |
Weighted inner product. For a symmetric positive definite matrix $M$:

$$\langle x, y \rangle_M = x^T M y$$

This defines a different geometry on $\mathbb{R}^n$: the unit ball is an ellipsoid rather than a sphere. The natural gradient in optimisation uses the Fisher information matrix $F$ as the weight matrix: $\|\nabla\|_F^2 = \nabla^T F \nabla$ measures gradient magnitude in the geometry of the statistical model, not the flat geometry of parameter space.
Cauchy-Schwarz inequality. For any inner product:

$$|\langle u, v \rangle| \le \|u\| \, \|v\|$$

with equality if and only if $u$ and $v$ are linearly dependent ($u = c v$ for some scalar $c$).

This is one of the most important inequalities in mathematics. It implies:

- Cosine similarity is always in $[-1, 1]$ - well-defined as a cosine
- Triangle inequality $\|u + v\| \le \|u\| + \|v\|$ (from Cauchy-Schwarz applied to $\|u + v\|^2$)
- The law of cosines: $\|u - v\|^2 = \|u\|^2 + \|v\|^2 - 2 \langle u, v \rangle$
9.2 Orthogonality
Vectors $u$ and $v$ are orthogonal, written $u \perp v$, if $\langle u, v \rangle = 0$.

Orthogonality generalises perpendicularity from Euclidean geometry to any inner product space. In the Fourier inner product on $L^2([0, 2\pi])$, $\sin(mx)$ and $\cos(nx)$ are orthogonal for all integers $m, n$. In the Frobenius inner product on matrices, symmetric and skew-symmetric matrices are orthogonal (since $\langle S, K \rangle = 0$ when $S^T = S$ and $K^T = -K$).

Pythagorean theorem. If $u \perp v$, then:

$$\|u + v\|^2 = \|u\|^2 + \|v\|^2$$

Proof:

$$\|u + v\|^2 = \langle u + v, u + v \rangle = \|u\|^2 + 2\langle u, v \rangle + \|v\|^2 = \|u\|^2 + \|v\|^2$$

More generally, if $v_1, \dots, v_k$ are pairwise orthogonal:

$$\Big\| \sum_{i=1}^{k} v_i \Big\|^2 = \sum_{i=1}^{k} \|v_i\|^2$$

Orthogonal sets and independence. Any set of non-zero pairwise orthogonal vectors is linearly independent.

Proof. Suppose $c_1 v_1 + \cdots + c_k v_k = 0$ with $v_1, \dots, v_k$ pairwise orthogonal. Take the inner product of both sides with $v_j$:

$$0 = \Big\langle \sum_i c_i v_i,\, v_j \Big\rangle = c_j \|v_j\|^2$$

Since $v_j \neq 0$, we have $\|v_j\|^2 > 0$, so $c_j = 0$. This holds for all $j$.
This is a powerful observation: orthogonality implies independence. An orthogonal set is automatically independent, so an orthogonal spanning set is automatically a basis.
9.3 Gram-Schmidt Orthogonalisation
Given linearly independent vectors $v_1, \dots, v_k$ in an inner product space, the Gram-Schmidt process produces an orthonormal set $q_1, \dots, q_k$ such that:

$$\operatorname{span}(q_1, \dots, q_j) = \operatorname{span}(v_1, \dots, v_j) \quad \text{for each } j = 1, \dots, k$$

The key invariant is that at each step, the orthonormal basis spans the same subspace as the original vectors up to that index.
Algorithm:
Step 1: u_1 = v_1
q_1 = u_1 / u_1
Step 2: u_2 = v_2 - v_2, q_1 q_1
q_2 = u_2 / u_2
Step 3: u_3 = v_3 - v_3, q_1 q_1 - v_3, q_2 q_2
q_3 = u_3 / u_3
Step j: u = v - sum_1^{j-1} v, q q
q = u / u
What each step does. At step $j$, we subtract from $v_j$ all of its projections onto the previously constructed orthonormal vectors $q_1, \dots, q_{j-1}$. The result $u_j$ is the component of $v_j$ not explained by the previous vectors - the "new" direction that $v_j$ adds. Normalising gives $q_j$.

Why $u_j \neq 0$: Since $v_1, \dots, v_j$ are linearly independent, $v_j \notin \operatorname{span}(v_1, \dots, v_{j-1}) = \operatorname{span}(q_1, \dots, q_{j-1})$. Therefore $u_j \neq 0$ (if it were zero, $v_j$ would be in the span of the previous $q$'s, contradicting independence).

Verification of orthonormality. For $i < j$:

$$\langle u_j, q_i \rangle = \langle v_j, q_i \rangle - \sum_{l=1}^{j-1} \langle v_j, q_l \rangle \langle q_l, q_i \rangle = \langle v_j, q_i \rangle - \langle v_j, q_i \rangle = 0$$
Connection to QR decomposition. The Gram-Schmidt process is the algorithm underlying QR decomposition. For a matrix $A \in \mathbb{R}^{m \times n}$ with independent columns:

$$A = QR$$

where $Q \in \mathbb{R}^{m \times n}$ has orthonormal columns and $R \in \mathbb{R}^{n \times n}$ is upper triangular with positive diagonal:

$$R_{ij} = \langle v_j, q_i \rangle \quad (i \le j)$$

The diagonal entries are $R_{jj} = \|u_j\|$.
Numerical issues. The classical Gram-Schmidt algorithm as stated is numerically unstable for nearly dependent vectors: rounding errors accumulate and the resulting vectors may not be accurately orthogonal. The Modified Gram-Schmidt algorithm reorders the operations to reduce error propagation. For production use, Householder QR (using Householder reflections rather than projections) is numerically stable and preferred.
Worked example. Let $v_1 = (1, 1, 0)$, $v_2 = (1, 0, 1)$ in $\mathbb{R}^3$.

Step 1: $u_1 = v_1 = (1, 1, 0)$, $\|u_1\| = \sqrt{2}$, $q_1 = \frac{1}{\sqrt{2}}(1, 1, 0)$

Step 2: $\langle v_2, q_1 \rangle = \frac{1}{\sqrt{2}}$, so $u_2 = v_2 - \frac{1}{\sqrt{2}} q_1 = (1, 0, 1) - \left(\tfrac{1}{2}, \tfrac{1}{2}, 0\right) = \left(\tfrac{1}{2}, -\tfrac{1}{2}, 1\right)$, $\|u_2\| = \sqrt{3/2}$, $q_2 = \frac{1}{\sqrt{6}}(1, -1, 2)$

Verify: $\langle q_1, q_2 \rangle = \frac{1}{\sqrt{12}}(1 - 1 + 0) = 0$
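The following sketch implements the Modified Gram-Schmidt variant mentioned above and runs it on the worked example's vectors; the function name and the tolerance are illustrative choices:

```python
import numpy as np

def modified_gram_schmidt(V):
    """Orthonormalise the columns of V (assumed linearly independent).
    Unlike classical Gram-Schmidt, each remaining vector is updated
    immediately after each q_i is formed, limiting error accumulation."""
    V = V.astype(float).copy()
    n, k = V.shape
    Q = np.zeros((n, k))
    R = np.zeros((k, k))
    for i in range(k):
        R[i, i] = np.linalg.norm(V[:, i])
        if R[i, i] < 1e-12:
            raise ValueError("columns are (numerically) dependent")
        Q[:, i] = V[:, i] / R[i, i]
        for j in range(i + 1, k):           # subtract the q_i component from
            R[i, j] = Q[:, i] @ V[:, j]     # all remaining vectors right away
            V[:, j] -= R[i, j] * Q[:, i]
    return Q, R

V = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])  # v_1, v_2 as columns
Q, R = modified_gram_schmidt(V)
print(np.allclose(Q.T @ Q, np.eye(2)))      # orthonormal columns
print(np.allclose(Q @ R, V))                # A = QR reconstruction
```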
9.4 The Orthogonal Complement
For a subspace $W \subseteq V$, the orthogonal complement is:

$$W^{\perp} = \{ v \in V : \langle v, w \rangle = 0 \text{ for all } w \in W \}$$

$W^{\perp}$ is a subspace (easy check):

- $\langle 0, w \rangle = 0$ for all $w$, so $0 \in W^{\perp}$
- If $\langle u, w \rangle = 0$ and $\langle v, w \rangle = 0$ for all $w \in W$, then $\langle a u + b v, w \rangle = a\langle u, w \rangle + b\langle v, w \rangle = 0$
Properties (for finite-dimensional inner product spaces):

| Property | Statement |
|---|---|
| Double complement | $(W^{\perp})^{\perp} = W$ |
| Dimension | $\dim W + \dim W^{\perp} = \dim V$ |
| Trivial intersection | $W \cap W^{\perp} = \{0\}$ |
| Direct sum decomposition | $V = W \oplus W^{\perp}$ |
| Fundamental subspaces | $\operatorname{row}(A)^{\perp} = \operatorname{null}(A)$; $\operatorname{col}(A)^{\perp} = \operatorname{null}(A^T)$ |
Why $(W^{\perp})^{\perp} = W$: Clearly $W \subseteq (W^{\perp})^{\perp}$ (every vector in $W$ is orthogonal to everything in $W^{\perp}$). By the dimension formula: $\dim (W^{\perp})^{\perp} = n - \dim W^{\perp} = n - (n - \dim W) = \dim W$. Same dimension and one contains the other -> they are equal.

Computing $W^{\perp}$. If $W = \operatorname{span}(w_1, \dots, w_k)$ and we form $B \in \mathbb{R}^{k \times n}$ (rows are the $w_i^T$), then:

$$W^{\perp} = \operatorname{null}(B)$$

because $\langle v, w_i \rangle = 0$ for all $i$ is equivalent to $Bv = 0$.
9.5 Orthogonal Bases for AI
The choice of basis - orthogonal vs arbitrary - has concrete consequences in AI applications.
Interpretability and orthogonal bases. When the basis vectors of an embedding space are orthogonal (as in the standard basis of $\mathbb{R}^d$), each dimension corresponds to an independent direction. Inner products between embeddings measure similarity in a well-defined way. Projections onto individual basis directions give clean, independent "components". In practice, the features of a learned representation are rarely aligned with the standard basis - but if they were, each feature could be read off by looking at one coordinate.
The superposition hypothesis and basis independence. Elhage et al. (2022) argue that transformers represent features as directions in $\mathbb{R}^d$, not as individual coordinates. The model does not commit to any particular orthonormal basis; instead, features are placed as arbitrary unit vectors in $\mathbb{R}^d$. When there are fewer features than dimensions, they can be nearly orthogonal (low interference). When features exceed $d$, they must be non-orthogonal (superposed). The interference between features is measured by the inner products between their representing directions - exactly the off-diagonal entries of the Gram matrix of feature vectors.
Cosine similarity. The dominant similarity measure in embedding spaces:

$$\operatorname{cos\_sim}(u, v) = \frac{\langle u, v \rangle}{\|u\| \, \|v\|}$$

This is the cosine of the angle between $u$ and $v$, ranging from $-1$ (anti-parallel) to $+1$ (parallel). Orthogonal vectors have cosine similarity 0 - they are "unrelated" by this measure. The cosine similarity is basis-dependent: it changes if we apply a non-orthogonal change of basis to the space.
Orthogonal vs orthonormal bases. Orthogonal bases (pairwise orthogonal, but not necessarily unit length) are often more natural than orthonormal ones for intermediate computations. An orthonormal basis (each vector unit length AND pairwise orthogonal) is the "gold standard" for numerical stability and interpretability. The Gram-Schmidt process converts any independent set into an orthonormal one.
PCA as an orthonormal basis for data. Principal Component Analysis finds the orthonormal basis of $\mathbb{R}^d$ that best aligns with the directions of maximal variance in a dataset. In the PCA basis, the covariance matrix is diagonal (its off-diagonal elements are zero), meaning the principal components are statistically uncorrelated. PCA is precisely the process of finding the orthonormal eigenbasis of the covariance matrix and using it as the new coordinate system.
Layer normalisation and orthogonality. Layer normalisation in transformers includes a learnable scale $\gamma$ and shift $\beta$:

$$\operatorname{LN}(x) = \gamma \odot \frac{x - \mu(x)\,\mathbf{1}}{\sigma(x)} + \beta$$

The mean-subtraction step projects $x$ onto the orthogonal complement of the all-ones vector $\mathbf{1} = (1, \dots, 1)$. This is an explicit orthogonal projection: $x - \mu(x)\mathbf{1} = \left(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\right) x$, where $\tfrac{1}{d}\mathbf{1}\mathbf{1}^T$ is the projection onto the 1-dimensional "mean direction" and $I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T$ is the projection onto its orthogonal complement.
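A numpy sketch confirming that mean-subtraction coincides with the projection $I - \frac{1}{d}\mathbf{1}\mathbf{1}^T$ (dimension and data are arbitrary):

```python
import numpy as np

d = 8
x = np.random.default_rng(3).standard_normal(d)

ones = np.ones(d)
P_mean = np.outer(ones, ones) / d      # rank-1 projection onto span(1)
P_perp = np.eye(d) - P_mean            # projection onto span(1)^perp

centred = x - x.mean()                 # LayerNorm's mean-subtraction step
print(np.allclose(centred, P_perp @ x))   # identical: it IS a projection
print(np.isclose(centred @ ones, 0.0))    # result is orthogonal to 1
```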
10. Affine Subspaces and Quotient Spaces
10.1 Affine Subspaces
An affine subspace (also called an affine flat or coset) is a translate of a linear subspace:

$$S = v_0 + W = \{ v_0 + w : w \in W \}$$

for a fixed offset vector $v_0 \in V$ and a linear subspace $W \subseteq V$.

Geometric picture:

- If $W$ is a line through the origin, $v_0 + W$ is a parallel line through $v_0$
- If $W$ is a plane through the origin, $v_0 + W$ is a parallel plane through $v_0$
- The affine subspace is a "shifted" copy of the linear subspace; it has the same "shape" and dimension as $W$, but it generally does not pass through the origin (unless $v_0 \in W$, in which case $v_0 + W = W$)

Affine subspaces are NOT vector subspaces. Unless $v_0 \in W$, the affine subspace does not contain $0$ and is not closed under addition (the sum of two elements is offset by $2 v_0$, not $v_0$).

Key example: solution sets of linear systems. The solution set of $Ax = b$ with $b \neq 0$ is an affine subspace:

$$\{ x : Ax = b \} = x_p + \operatorname{null}(A)$$

where $x_p$ is any particular solution and $\operatorname{null}(A)$ is the null space (a linear subspace). The solution set is a translate of the null space. All solutions lie in the same affine subspace parallel to $\operatorname{null}(A)$, offset by $x_p$.
Other examples:

- A line not through the origin in $\mathbb{R}^2$: $v_0 + \operatorname{span}(d)$ - a translate of the span of a direction vector $d$
- The probability simplex $\Delta^{n-1} = \{ p \in \mathbb{R}^n : p_i \ge 0,\; \sum_i p_i = 1 \}$: a subset of the $(n-1)$-dimensional affine subspace of $\mathbb{R}^n$ defined by $\sum_i p_i = 1$ - a translate of the hyperplane $\sum_i x_i = 0$
- Batch normalisation output space: after fixing the batch statistics, the normalised output lives in an affine subspace determined by the running mean and variance

Dimension of an affine subspace. The affine subspace $v_0 + W$ has the same dimension as the underlying linear subspace $W$. A $k$-dimensional affine subspace in $\mathbb{R}^n$ is sometimes called a $k$-flat.
    AFFINE VS LINEAR SUBSPACES

    Linear subspace W:            Affine subspace W + v_0:
    passes through origin         passes through v_0 (not 0)
    closed under + and scaling    closed under affine combinations
    IS a vector space             is NOT a vector space
    examples: lines/planes/       examples: lines/planes/
      hyperplanes through 0         hyperplanes NOT through 0

    In AI:
    Linear: null(W), col(W), LoRA update subspace
    Affine: solution set of Ax=b, probability simplex,
            offset embeddings before centering
10.2 Affine Combinations
An affine combination of vectors $v_1, \dots, v_k$ is a linear combination where the coefficients sum to one:

$$\sum_{i=1}^{k} \alpha_i v_i \quad \text{with} \quad \sum_{i=1}^{k} \alpha_i = 1$$

Affine combinations "stay within" affine subspaces: if $v_1, \dots, v_k \in S$, then any affine combination of them also lies in $S$.

Convex combinations are affine combinations with the additional constraint $\alpha_i \ge 0$:

$$\sum_{i=1}^{k} \alpha_i v_i \quad \text{with} \quad \sum_i \alpha_i = 1, \; \alpha_i \ge 0$$

Convex combinations stay within the convex hull of $\{v_1, \dots, v_k\}$.

Why the constraint matters. Without it, scaling the vectors by a common factor would scale the combination - the result would depend on "how large" the vectors are. With $\sum_i \alpha_i = 1$, the combination is "location-aware" rather than "direction-aware" - it picks a point in the affine hull regardless of scale.
AI applications of affine and convex combinations:

- Embedding interpolation. Given two embedding vectors $e_1$ and $e_2$, the linear interpolation $e(t) = (1 - t)\, e_1 + t\, e_2$ for $t \in [0, 1]$ is a convex combination. This is the operation underlying latent space arithmetic: "find the midpoint of 'cat' and 'dog' in embedding space". Note: this is NOT generally a valid probability-weighted average of the corresponding tokens' distributions - that arithmetic must happen in logit space.

- Spherical linear interpolation (slerp). For unit vectors (on the unit sphere), linear interpolation does not stay on the sphere. Slerp uses trigonometric weights to interpolate along the great circle: $\operatorname{slerp}(u, v; t) = \frac{\sin((1-t)\theta)}{\sin\theta}\, u + \frac{\sin(t\theta)}{\sin\theta}\, v$, where $\cos\theta = \langle u, v \rangle$. The weights $\frac{\sin((1-t)\theta)}{\sin\theta}$ and $\frac{\sin(t\theta)}{\sin\theta}$ do not in general sum to exactly 1 - slerp is the sphere-geometry analogue of an affine combination rather than an affine combination itself.

- Model averaging and interpolation. Averaging two neural networks $\theta_1$ and $\theta_2$ by $\theta(t) = (1 - t)\,\theta_1 + t\,\theta_2$ is a convex combination in parameter space. "Model soup" (Wortsman et al. 2022) averages fine-tuned models; WiSE-FT interpolates between pretrained and fine-tuned weights. These are affine combinations in $\mathbb{R}^{P}$.

- Probability distributions. A mixture distribution $p_{\text{mix}} = \alpha p_1 + (1 - \alpha) p_2$ is a convex combination of two distributions. This is valid because: $\alpha p_1(x) + (1 - \alpha) p_2(x) \ge 0$ (convex combination of non-negatives) and $\int \big(\alpha p_1 + (1 - \alpha) p_2\big) = \alpha + (1 - \alpha) = 1$ (the constraint ensures the mixture is still a probability distribution). This is why mixture models work: affine combinations with unit-sum weights preserve the probability simplex.
10.3 Quotient Spaces
The quotient space $V / W$ (read "V mod W") captures the idea of "ignoring the $W$ directions" - it identifies all vectors that differ by an element of $W$.

Definition. For a subspace $W \subseteq V$, define the equivalence relation: $u \sim v$ iff $u - v \in W$. The coset of $v$ is:

$$v + W = \{ v + w : w \in W \}$$

This is an affine subspace - a translate of $W$ passing through $v$. Note that $u + W = v + W$ iff $u - v \in W$ (the cosets are identical iff the vectors are equivalent).

The quotient space $V / W$ is the collection of all cosets of $W$ in $V$.

Operations on $V / W$:

- Addition: $(u + W) + (v + W) = (u + v) + W$ (well-defined: the result does not depend on which representatives we choose, as long as the cosets are the same)
- Scalar multiplication: $c\,(v + W) = cv + W$ (well-defined by the same argument)

These operations make $V / W$ a vector space (the quotient space). The zero vector of $V / W$ is $0 + W = W$ (the coset containing $0$, which is $W$ itself).

Dimension: $\dim(V / W) = \dim V - \dim W$.

Intuition. $V / W$ "collapses" the $W$ directions to a point (the zero element $W$) and retains only the directions perpendicular to $W$. The quotient space has dimension $\dim V - \dim W$ because it "forgets" $\dim W$ directions.
First Isomorphism Theorem. For a linear map $T : V \to U$:

$$V / \operatorname{null}(T) \;\cong\; \operatorname{im}(T)$$

The quotient space is isomorphic to the image of $T$. Intuitively: the null space is exactly the "ambiguity" in $T$ - different inputs mapping to the same output; the quotient space removes this ambiguity, leaving a bijection between cosets and outputs.
AI applications:

- Layer normalisation. The mean-subtraction in LayerNorm - subtracting the mean of all coordinates from each coordinate - is equivalent to projecting onto the orthogonal complement of the all-ones direction. The "normalised" space can be viewed as the quotient $\mathbb{R}^d / \operatorname{span}(\mathbf{1})$, where the $\mathbf{1}$ direction is "divided out". Two activations that differ only by a constant shift (equal means) are identified in this quotient space.

- Residual connections. The residual connection $x \mapsto x + f(x)$ adds a vector to the current residual stream. From the quotient space perspective: the "content" of the residual stream modulo the current layer's contribution is what gets passed to the next layer. Each layer writes to its column-space subspace; the rest of $\mathbb{R}^{d_{\text{model}}}$ is preserved as-is.

- Equivalence classes in training. If $X w_1 = X w_2$ for a data matrix $X$, then all weight vectors in the same coset $w + \operatorname{null}(X)$ produce the same predictions on the training data. The space of distinct prediction functions is the quotient $\mathbb{R}^d / \operatorname{null}(X)$. Gradient descent with squared loss finds the minimum-norm representative of the coset - the vector in the coset closest to the origin.
10.4 Cosets and Their Structure
The cosets of $W$ partition $V$ into disjoint, equally-shaped affine subspaces:

- Partition: every $v \in V$ belongs to exactly one coset; cosets are either identical or disjoint
- Uniform shape: every coset is a translate of $W$, so all cosets have the same dimension ($\dim W$) and the same "shape"
- No overlap: two cosets $u + W$ and $v + W$ are either equal (when $u - v \in W$) or disjoint

Solution structure for linear systems. This is the most immediate application. For $Ax = b$:

- The solution set (if non-empty) is the coset $x_p + \operatorname{null}(A)$ for any particular solution $x_p$
- All solutions in this coset produce the same output: $Ax = b$ for any $x$ in the coset
- The minimum-norm solution (the "pseudoinverse solution") is $x^{+} = A^{+} b$, which lies in $\operatorname{row}(A)$ (the orthogonal complement of $\operatorname{null}(A)$) - the unique point of the coset closest to the origin

Implicit bias as coset selection. In overparameterised models (more parameters than constraints), gradient descent with zero initialisation and MSE loss converges to the minimum-norm solution - the unique solution in $\operatorname{row}(X)$. This "implicit bias" towards minimum-norm solutions is a coset-selection phenomenon: gradient descent selects the minimum-$\ell_2$-norm representative from the coset of all equally good solutions.
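A minimal numpy illustration of coset selection, assuming an underdetermined random system: the pseudoinverse picks the minimum-norm representative, and adding any null space vector stays in the solution coset:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 6))           # underdetermined: a coset of solutions
b = rng.standard_normal(3)

x_min = np.linalg.pinv(A) @ b             # minimum-norm solution A^+ b

_, _, Vt = np.linalg.svd(A)
z = Vt[-1]                                # a null space direction of A
x_other = x_min + 2.5 * z                 # another point of the same coset

print(np.allclose(A @ x_other, b))        # True: identical predictions
print(np.linalg.norm(x_min) < np.linalg.norm(x_other))  # min-norm representative
print(np.isclose(x_min @ z, 0.0))         # x_min lies in row(A)
```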
11. Subspaces in Functional Analysis
11.1 Infinite-Dimensional Spaces and Closed Subspaces
In finite dimensions, every subspace is automatically closed (in the topological sense). This ceases to be true in infinite dimensions, where one must distinguish between algebraic subspaces (satisfying the three conditions of Section 3) and closed subspaces (additionally closed under limits).
Closed subspace. A subspace $W$ of a Hilbert space $H$ is closed if every Cauchy sequence in $W$ converges to a limit that remains in $W$. Equivalently, $W$ is closed if it equals its topological closure $\overline{W}$.

Why closedness matters. In infinite dimensions:

- The projection theorem ($H = W \oplus W^{\perp}$) holds only for closed subspaces
- The best approximation $\min_{w \in W} \|v - w\|$ exists only if $W$ is closed
- The spectral theorem and other key results require closed invariant subspaces
Examples of closed subspaces:

- $P_n \subset L^2([0, 1])$: polynomials of degree $\le n$; finite-dimensional, hence closed
- $\ker T$ for a bounded linear operator $T$: always closed (because $T$ is continuous and $\{0\}$ is closed)
- The even functions in $L^2([-1, 1])$: closed under limits
- The span of any finite set of vectors in a Hilbert space: always closed (finite-dimensional subspace)

Examples of non-closed subspaces in infinite dimensions:

- All polynomials (of any degree) in $L^2([a, b])$: algebraically a subspace, but not closed - the partial sums of the Taylor series of $e^x$ are polynomials that converge in $L^2$ to $e^x$, which is not a polynomial. The closure is all of $L^2([a, b])$ (by the Weierstrass approximation theorem)
- Finitely supported sequences in $\ell^2$: sequences with only finitely many non-zero entries; algebraically a subspace, but the limit of $(1, \tfrac{1}{2}, \dots, \tfrac{1}{n}, 0, 0, \dots)$ as $n \to \infty$ is $(1, \tfrac{1}{2}, \tfrac{1}{3}, \dots) \in \ell^2$, which is not finitely supported
Projection in infinite dimensions. The projection theorem extends to Hilbert spaces: if $W$ is a closed subspace of a Hilbert space $H$, then $H = W \oplus W^{\perp}$ and every $v \in H$ has a unique best approximation $P_W(v) \in W$. This is the foundation of least squares in infinite dimensions and of the representer theorem in kernel methods.
11.2 Function Spaces Relevant to AI
$L^2(\Omega)$ - square-integrable functions on $\Omega$.

The space of functions $f$ with $\int_\Omega |f|^2 < \infty$, equipped with inner product $\langle f, g \rangle = \int_\Omega f g$. This is a separable Hilbert space. It is the natural setting for:

- Kernel methods: the RKHS of a translation-invariant kernel is a subspace of $L^2$
- Probability: densities of probability distributions with finite second moment live here
- Signal processing: finite-energy signals are in $L^2$
- Neural networks: functions representable by a network can be analysed as elements of $L^2$
Sobolev spaces $H^k$.

Functions with weak derivatives up to order $k$ all lying in $L^2$, equipped with norms that account for function values and derivative values. They specify "smoothness":

- $H^0 = L^2$ (no smoothness condition beyond square integrability)
- $H^1$ (square-integrable function with square-integrable first derivative): relevant for PDEs and PINNs
- $H^2$ (second derivatives in $L^2$): relevant for spline regression, Gaussian processes with Matérn kernels

Sobolev spaces are used in physics-informed neural networks (PINNs): the solution to a PDE is sought in a Sobolev space, and the network is trained to minimise a loss that penalises PDE residuals in the $L^2$ norm. The constraint "solution in $H^k$" is a subspace constraint on the function space.
Reproducing Kernel Hilbert Spaces (RKHS).
An RKHS is a Hilbert space $H$ of functions such that point evaluation is a bounded linear functional: for each $x$, the map $f \mapsto f(x)$ is continuous in the $H$-norm.

By the Riesz representation theorem, there exists a unique function $k_x \in H$ such that:

$$f(x) = \langle f, k_x \rangle \quad \text{for all } f \in H$$

The function defined by $k(x, y) = k_x(y) = \langle k_x, k_y \rangle$ is the reproducing kernel. It is symmetric and positive semi-definite.

Representer theorem. For any regularised learning problem of the form:

$$\min_{f \in H} \; \sum_{i=1}^{n} L\big(y_i, f(x_i)\big) + \lambda \|f\|_H^2$$

the optimal $f^{*}$ lies in the finite-dimensional subspace $\operatorname{span}\{k_{x_1}, \dots, k_{x_n}\}$. This is a subspace result: despite the infinite-dimensional function space, the optimal solution lies in an $n$-dimensional subspace spanned by the kernel functions at the training points. SVMs, Gaussian process regression, and kernel ridge regression all instantiate this theorem.
Common kernels and their RKHS subspaces:

- Linear kernel $k(x, y) = x^T y$: RKHS = linear functions on $\mathbb{R}^d$; finite-dimensional subspace
- RBF / Gaussian kernel $k(x, y) = \exp\!\big(-\|x - y\|^2 / 2\sigma^2\big)$: RKHS is a dense subspace of $L^2$; infinite-dimensional
- Matérn kernel: RKHS is a Sobolev space; the smoothness parameter controls which Sobolev subspace
- Polynomial kernel $k(x, y) = (x^T y + c)^p$: RKHS = polynomials of degree $\le p$; $\binom{d + p}{p}$-dimensional
11.3 Neural Networks as Subspaces of Function Spaces
A neural network architecture with parameter space $\mathbb{R}^{P}$ defines a parametric family of functions:

$$\mathcal{F} = \{ f_\theta : \theta \in \mathbb{R}^{P} \}$$

This family is not a subspace of $L^2$. In general, $f_{\theta_1} + f_{\theta_2}$ is not equal to $f_{\theta_3}$ for any $\theta_3$ (the sum of two networks is not itself a network with the same architecture). Similarly for scalar multiples. The non-linearity of the architecture means the function family is a non-linear manifold embedded in $L^2$, not a subspace.

However, the tangent space at any parameter IS a subspace. The tangent space to the manifold at the point $f_{\theta_0}$ is:

$$T_{\theta_0} \mathcal{F} = \operatorname{span}\left\{ \frac{\partial f_\theta}{\partial \theta_i} \Big|_{\theta_0} : i = 1, \dots, P \right\}$$

This is the span of $P$ functions - a (at most) $P$-dimensional linear subspace of $L^2$. When you take a gradient step $\theta_0 \to \theta_0 + \delta$, the first-order change in the function, $f_{\theta_0 + \delta} \approx f_{\theta_0} + \sum_i \delta_i \frac{\partial f_\theta}{\partial \theta_i}\big|_{\theta_0}$, lies in this tangent space. First-order optimisation operates in the tangent subspace.
Neural Tangent Kernel (NTK). In the infinite-width limit (network width $\to \infty$ with appropriate scaling), the evolution of the network function during gradient flow is governed by a linear equation in function space:

$$\frac{d}{dt} f_t(x) = -\eta \sum_{x'} K(x, x') \, \frac{\partial L}{\partial f_t(x')}$$

where $K(x, x') = \langle \nabla_\theta f(x), \nabla_\theta f(x') \rangle$ is the NTK. In the infinite-width limit, $K$ becomes constant during training. The trained network converges to kernel regression with the NTK as kernel - meaning the trained function lies in the RKHS defined by the NTK, which is a subspace of $L^2$. The NTK result says: in the lazy training regime, the neural network effectively performs a linear projection onto a specific infinite-dimensional subspace of function space.

Universal approximation and subspace density. The universal approximation theorem says that the set of functions representable by a neural network with bounded width and arbitrary depth (or arbitrary width and one hidden layer) is dense in $C([0,1]^d)$ (continuous functions on the unit hypercube) under the uniform norm. "Dense" means: for any continuous function $g$ and any $\epsilon > 0$, there is a network function $f_\theta$ with $\|g - f_\theta\|_\infty < \epsilon$. This is a subspace density statement: the parametric family is dense in the function space $C([0,1]^d)$. Depth (or width) determines which subspace is reachable; nonlinearity is what allows density.
11.4 Krylov Subspaces
Krylov subspaces are the foundation of the most practical iterative linear algebra algorithms. They connect the abstract geometry of subspaces to the computational efficiency of matrix-vector products.
Definition. For a matrix $A \in \mathbb{R}^{n \times n}$ and a vector $b \in \mathbb{R}^n$, the Krylov subspace of order $k$ is:

$$\mathcal{K}_k(A, b) = \operatorname{span}\{ b,\; Ab,\; A^2 b,\; \dots,\; A^{k-1} b \}$$

This is the span of the first $k$ vectors in the sequence $b, Ab, A^2 b, \dots$ - the orbit of $b$ under repeated application of $A$.

Nested structure. The Krylov subspaces form a nested sequence:

$$\mathcal{K}_1 \subseteq \mathcal{K}_2 \subseteq \mathcal{K}_3 \subseteq \cdots$$

The sequence stabilises at some $d$: $\mathcal{K}_d = \mathcal{K}_{d+1} = \cdots$. At that point, $A \mathcal{K}_d \subseteq \mathcal{K}_d$ (the subspace is invariant under $A$), and $d$ equals the degree of the minimal polynomial of $A$ with respect to $b$.
Krylov methods. Iterative solvers based on Krylov subspaces find the best approximate solution within $\mathcal{K}_k$ at step $k$, then expand the subspace:

| Method | Problem | Optimisation in $\mathcal{K}_k$ |
|---|---|---|
| Conjugate Gradients (CG) | $Ax = b$, $A$ SPD | Minimises the error $\|x - x^{*}\|_A$ over $\mathcal{K}_k$ |
| GMRES | $Ax = b$, general $A$ | Minimises the residual $\|Ax - b\|_2$ over $\mathcal{K}_k$ |
| Lanczos | Eigenvalues of symmetric $A$ | Finds best rank-$k$ approximation to spectrum |
| Arnoldi | Eigenvalues of general $A$ | Orthogonalises $\mathcal{K}_k$ via Gram-Schmidt |

Each step costs one matrix-vector product with $A$ plus a modest amount of work for orthogonalisation. For a sparse $A$ with $\operatorname{nnz}(A)$ non-zeros, each matrix-vector product costs $O(\operatorname{nnz}(A))$. Contrast with direct methods (Gaussian elimination): $O(n^3)$ cost regardless of sparsity.
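A sketch of a Krylov solve with scipy's conjugate gradient, wrapping the matrix in a `LinearOperator` to make explicit that the solver only ever touches $A$ through matrix-vector products (the counter is an illustrative addition):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(5)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)               # symmetric positive definite
b = rng.standard_normal(n)

matvecs = 0
def mv(x):
    global matvecs                        # the solver only ever calls this
    matvecs += 1
    return A @ x

x, info = cg(LinearOperator((n, n), matvec=mv, dtype=np.float64), b)
print(info == 0)                          # 0 means converged
print(np.linalg.norm(A @ x - b) < 1e-3)   # small residual
print(matvecs)                            # far fewer than n for this well-conditioned A
```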
AI applications of Krylov methods:

- Second-order optimisation. Methods like K-FAC (Kronecker-Factored Approximate Curvature) need to solve linear systems involving the Fisher information matrix to compute the natural gradient. Krylov methods can solve $F x = g$ (where $g$ is the gradient) without explicitly forming $F$ - only matrix-vector products are needed, which can be computed efficiently.
- Eigenvalue computation for interpretability. Computing the top eigenvalues of the Hessian or the covariance matrix of gradients is done via Lanczos, which builds a Krylov subspace using Hessian-vector products. These products are available cheaply via the "Pearlmutter trick" (forward-over-backward AD). The Krylov subspace approach gives the dominant eigenspace in $O(k)$ Hessian-vector products.
- Linear attention and state space models. SSMs (S4, Mamba) compute recurrences of the form $h_{t+1} = A h_t + B x_t$ whose outputs lie in Krylov-like subspaces. The efficiency of convolutional SSM computation is related to the structure of the Krylov subspace generated by the state matrix $A$ and input matrix $B$.
12. Subspace Methods in Machine Learning
12.1 PCA as Subspace Finding
Principal Component Analysis is fundamentally a subspace problem: find the rank-$k$ subspace of $\mathbb{R}^d$ that best explains the variance in a dataset.

Setup. Given data $X \in \mathbb{R}^{n \times d}$ with centred rows ($\sum_i x_i = 0$), the sample covariance matrix is:

$$C = \frac{1}{n} X^T X \in \mathbb{R}^{d \times d}$$

$C$ is symmetric positive semidefinite. Its eigendecomposition $C = V \Lambda V^T$ (with $V$ orthogonal, $\Lambda$ diagonal with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d \ge 0$) defines the principal components.

PCA as orthogonal projection. The optimal rank-$k$ subspace minimises the mean squared reconstruction error:

$$W^{*} = \arg\min_{\dim W = k} \; \frac{1}{n} \sum_{i=1}^{n} \| x_i - P_W x_i \|^2$$

Theorem (Eckart-Young for PCA). The solution is $W^{*} = \operatorname{span}(v_1, \dots, v_k)$, the span of the top-$k$ eigenvectors of $C$ (equivalently, the top-$k$ left singular vectors of $X^T$).

The minimum reconstruction error equals the sum of the discarded eigenvalues: $\sum_{j > k} \lambda_j$.
Equivalences. PCA can be described four equivalent ways, all pointing to the same subspace:

- Maximum variance: find the $k$-dimensional subspace on which projected data has maximum variance
- Minimum reconstruction error: find the $k$-dimensional subspace minimising projection error (the formulation above)
- SVD: compute $X = U \Sigma V^T$; the top-$k$ right singular vectors $v_1, \dots, v_k$ span $W^{*}$
- Eigendecomposition: compute eigenvectors of $C = \frac{1}{n} X^T X$; the top-$k$ eigenvectors span $W^{*}$
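The SVD and eigendecomposition routes can be checked against each other numerically. A small sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 10))
X = X - X.mean(axis=0)                    # centre the rows

# Route 1: eigendecomposition of the covariance matrix (eigh = ascending).
C = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(C)
V_eig = eigvecs[:, ::-1][:, :3]           # top-3 eigenvectors

# Route 2: SVD of the data matrix.
_, S, Vt = np.linalg.svd(X, full_matrices=False)
V_svd = Vt[:3].T                          # top-3 right singular vectors

# Same subspace (vectors may differ by sign) and same spectrum.
print(np.allclose(np.abs(V_eig.T @ V_svd), np.eye(3)))
print(np.allclose(S**2 / len(X), eigvals[::-1]))
```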
AI applications of PCA as subspace finding:
- Embedding analysis. PCA of a word embedding matrix reveals the dominant semantic axes. The top principal component often corresponds to a frequency direction; later components correspond to semantic distinctions. PCA on embeddings is a way of finding the most informative subspace of the $d$-dimensional embedding space.
- Activation subspaces. PCA on the activations of a layer across many inputs reveals the effective dimensionality of the layer's representation. If 95% of variance is explained by the top-$k$ principal components, the layer effectively operates in a $k$-dimensional subspace of $\mathbb{R}^d$, even though nominally $d$-dimensional.
- Weight matrix analysis. PCA on the rows or columns of a weight matrix reveals which directions the matrix emphasises. The top singular vectors of $W$ are the principal directions - the row space direction that maps to the largest column space direction.
- Collapse detection. In self-supervised learning (SimCLR, BYOL, VICReg), representation collapse means all embeddings converge to a low-dimensional subspace (or a single point). Tracking the effective rank (number of significant singular values) of the representation matrix over training detects and diagnoses collapse.
12.2 Subspace Tracking During Training
The gradient subspace evolves over the course of training, and its structure is a key diagnostic for understanding optimisation.
The gradient subspace. At training step $t$, the gradient $g_t = \nabla_\theta L(\theta_t)$ is a single vector in $\mathbb{R}^{P}$. Over many steps, the gradients lie in some subspace of $\mathbb{R}^{P}$. The gradient subspace at step $t$ is approximately:

$$\mathcal{G}_t = \operatorname{span}\{ g_1, g_2, \dots, g_t \}$$

or more precisely, the leading eigenspace of the gradient covariance matrix $\frac{1}{t} \sum_{s \le t} g_s g_s^T$.

Empirical finding: the gradient subspace is small. Gur-Ari, Bar-On, and Shashua (2018) showed that for large neural networks, the gradient vectors observed during training effectively lie in a subspace of dimension much smaller than $P$. This means:

- Gradient descent traverses a low-dimensional "groove" in the high-dimensional parameter space
- The directions orthogonal to the gradient subspace are never updated
- A $k$-dimensional update rule (where $k \ll P$) can achieve nearly the same performance

Intrinsic dimensionality of fine-tuning (Aghajanyan, Zettlemoyer, Gupta, 2020). They parameterise the fine-tuning update as $\theta = \theta_0 + P d$, where $P$ is a fixed random projection matrix and the low-dimensional vector $d$ is optimised. The smallest $\dim(d)$ for which fine-tuning achieves 90% of full-fine-tuning performance is the intrinsic dimension. For GPT-2 fine-tuned on MRPC (a sentence similarity task), the intrinsic dimension is orders of magnitude smaller than the full parameter count. Fine-tuning happens in a subspace of tiny effective dimension.
LoRA as gradient subspace approximation. LoRA restricts $\Delta W$ to a rank-$r$ subspace of $\mathbb{R}^{m \times n}$. The justification is that the gradient during fine-tuning lives in a low-rank subspace. By restricting updates to rank $r$, LoRA approximates this gradient subspace with $r(m + n)$ parameters instead of $mn$. The choice of $r$ balances: larger $r$ = larger subspace = more expressive but more parameters; smaller $r$ = tighter constraint but fewer parameters. The "right" $r$ is approximately the intrinsic dimension of the task.

GaLore (Zhao et al. 2024). Rather than permanently fixing the subspace (as LoRA does), GaLore periodically updates the projection subspace by computing the top-$k$ singular vectors of the gradient matrix. The gradient is projected onto the current subspace before the optimiser step; optimiser state is maintained in the subspace. Memory savings come from keeping optimiser state in $\mathbb{R}^{k \times n}$ rather than $\mathbb{R}^{m \times n}$.
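A heavily simplified sketch of the projection step, assuming a synthetic gradient matrix; the real GaLore additionally refreshes the subspace on a schedule and keeps optimiser state in the projected coordinates:

```python
import numpy as np

def project_gradient(G, k):
    """Project the gradient matrix G onto its own top-k left singular
    subspace (a hypothetical, single-step simplification of GaLore)."""
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    P = U[:, :k]                          # top-k left singular vectors
    return P, P @ (P.T @ G)               # subspace basis, projected gradient

rng = np.random.default_rng(7)
G = rng.standard_normal((64, 16)) @ rng.standard_normal((16, 128))  # near low-rank

P, G_proj = project_gradient(G, k=8)
print((P.T @ G).shape)                    # (8, 128): optimiser state could live
                                          # here instead of the full (64, 128)
print(np.linalg.norm(G - G_proj) / np.linalg.norm(G))  # small if G is near low rank
```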
12.3 Representation Subspaces
The linear representation hypothesis proposes that concepts are encoded as linear subspaces of the residual stream in large language models. This is a strong structural claim with substantial empirical support.
Concept directions. A concept direction for a binary concept (e.g., "positive sentiment", "female gender", "medical domain") is a unit vector $v_c \in \mathbb{R}^d$ such that a large projection $\langle x, v_c \rangle$ indicates the concept is present and a small or negative projection indicates it is absent. The concept is represented as a 1-dimensional subspace (a line through the origin) in $\mathbb{R}^d$.

Evidence: probing classifiers (linear models predicting a concept from an embedding) work well for many concepts, suggesting the concept is linearly decodable. The existence of a linear probe is evidence that the concept has a direction in the embedding space - a 1D subspace.

Multi-dimensional concept subspaces. Some concepts are not well-represented by a single direction. "Colour" might involve multiple hues; "grammatical role" might involve multiple positions. In these cases, the concept corresponds to a subspace of dimension $k > 1$. The concept subspace contains all directions relevant to the concept.
LEACE (Least-squares Concept Erasure) (Belrose et al. 2023). Given two sets of representations - one with concept present, one absent - LEACE finds the minimal subspace $C$ such that projecting onto $C^{\perp}$ makes the concept undetectable by any linear probe. This is explicitly a subspace computation:

- Compute the within-class covariance and between-class difference
- Find the projection direction that maximally separates classes (Fisher discriminant direction)
- Project onto the orthogonal complement: $x' = (I - P_C)\, x$

Concept erasure = projection onto the orthogonal complement of the concept subspace.
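For the simplest case - a single known concept direction - erasure is a rank-1 complement projection. A numpy sketch of that idea (this illustrates the projection underlying LEACE, not the full method, which also involves a whitening step):

```python
import numpy as np

rng = np.random.default_rng(8)
d = 32
v = rng.standard_normal(d)
v /= np.linalg.norm(v)                    # unit concept direction

P_erase = np.eye(d) - np.outer(v, v)      # projection onto span(v)^perp

X = rng.standard_normal((100, d))         # a batch of embeddings
X_erased = X @ P_erase                    # P_erase is symmetric

print(np.allclose(X_erased @ v, 0.0))     # no linear probe along v can fire
print(np.allclose(X_erased, X - np.outer(X @ v, v)))   # equivalent form
```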
Distributed representations. A single concept may activate many neurons, and a single neuron may respond to many concepts. This is distributed representation, and it is the normal state in large networks:

- Distributed over neurons: each neuron contributes a little to many concepts; the concept is "smeared" across many dimensions
- Superposition: many concepts share the same dimensions (polysemanticity); the concept subspaces are non-orthogonal

The superposition hypothesis (Elhage et al. 2022) says that in the regime where the number of features exceeds the dimension $d$ (more features than dimensions), the optimal strategy is to store features as nearly-orthogonal (but not exactly orthogonal) unit vectors. Interference is minimised by choosing feature directions to be as orthogonal as possible, but it cannot be entirely eliminated when there are more features than dimensions. This is the geometry of packing more vectors into a space than the dimension allows.
12.4 Mechanistic Interpretability Through Subspaces
Mechanistic interpretability analyses the internal computations of neural networks at the level of circuits - sequences of operations that implement specific algorithmic behaviours. Subspace geometry is the natural language for this analysis.
Residual stream decomposition. At layer $\ell$, the residual stream is updated by:

$$x_{\ell + 1} = x_\ell + \sum_{h} \operatorname{head}_h(x_\ell) + \operatorname{MLP}_\ell(x_\ell)$$

Each component (attention head $h$, MLP layer $\ell$) writes to a specific subspace of $\mathbb{R}^{d_{\text{model}}}$ (its column space) and reads from another (its row space). The residual stream is a shared highway; every component adds its contribution to the same $d_{\text{model}}$-dimensional space. The subspace written to is $\operatorname{col}(W_O^h)$ for attention heads or determined by the MLP weight matrices for MLP blocks.

OV and QK circuits. For attention head $h$ with projections $W_Q^h, W_K^h, W_V^h$ and $W_O^h$:

- QK circuit: $W_{QK}^h = (W_Q^h)^T W_K^h$ is $d_{\text{model}} \times d_{\text{model}}$ but has rank $\le d_{\text{head}}$; it maps the residual stream to a $d_{\text{head}}$-dimensional subspace for query-key comparison; the head computes attention scores using only $d_{\text{head}}$ dimensions of the $d_{\text{model}}$-dimensional residual stream
- OV circuit: $W_{OV}^h = W_O^h W_V^h$ is $d_{\text{model}} \times d_{\text{model}}$ but has rank $\le d_{\text{head}}$; the head reads from a $d_{\text{head}}$-dimensional subspace (via $W_V^h$) and writes to a $d_{\text{head}}$-dimensional subspace (via $W_O^h$); the composition is a rank-$d_{\text{head}}$ map from $\mathbb{R}^{d_{\text{model}}}$ to $\mathbb{R}^{d_{\text{model}}}$

Both the QK and OV circuits are low-rank (rank $\le d_{\text{head}}$) operations in $\mathbb{R}^{d_{\text{model}}}$, even though the residual stream is $d_{\text{model}}$-dimensional. Each head has access to $d_{\text{model}}$-dimensional representations but can only "look at" or "write to" $d_{\text{head}}$-dimensional subspaces. When $d_{\text{head}} \ll d_{\text{model}}$ (typical: $d_{\text{head}} = d_{\text{model}} / n_{\text{heads}}$), each head is a dimensionality-reducing probe and writer.
Induction heads. An induction head (Olsson et al. 2022) is an attention head that implements in-context learning by attending to previous occurrences of the current token. It works via two components: a "previous token head" that shifts information one position back (writing to a specific subspace), and an "induction head" that looks for matches in that subspace. The circuit is: head A writes "what came before position $i$" to a subspace $S$; head B reads from $S$ and attends accordingly. The communication between heads A and B happens through a shared subspace of the residual stream.
Superposition and polysemanticity. Toy models of superposition (Elhage et al. 2022) show that:

- With at most $d$ features: each feature can be stored in its own orthogonal direction; no interference; polysemanticity unnecessary
- With more than $d$ features (superposition regime): features are stored as nearly-orthogonal (but not exactly orthogonal) directions; each direction contains a weighted sum of multiple features; individual neurons are polysemantic (they respond to multiple distinct features)
- The "geometry" of superposition: features are arranged as vertices of polytopes inscribed in the unit sphere; the maximum number of unit vectors in $\mathbb{R}^d$ whose pairwise inner products are at most $\epsilon$ in magnitude grows like $e^{c \epsilon^2 d}$ for some constant $c$ - exponential in the dimension
12.5 Subspace Fine-Tuning Methods
The empirical low-dimensionality of fine-tuning gradients motivates a family of methods that explicitly restrict weight updates to specific subspaces. Here is a comparative overview:
LoRA (Hu et al. 2021).

Parameterise the weight update as $\Delta W = B A$ with $B \in \mathbb{R}^{m \times r}$, $A \in \mathbb{R}^{r \times n}$, rank $r \ll \min(m, n)$. The update lives in the set of matrices of rank at most $r$ in $\mathbb{R}^{m \times n}$. Only $r(m + n)$ parameters are needed instead of $mn$. The pre-trained weights $W$ are frozen; only $A$ and $B$ are trained. At inference, the update is merged: $W' = W + BA$, adding no latency.
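A minimal numpy sketch of this parameterisation (the class name and the $\alpha / r$ scaling convention follow common practice but are illustrative choices, not the reference implementation):

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: frozen weight W plus a trainable rank-r
    update B @ A, scaled by alpha / r."""
    def __init__(self, W, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        m, n = W.shape
        self.W = W                                   # frozen pre-trained weight
        self.A = 0.01 * rng.standard_normal((r, n))  # trainable, small init
        self.B = np.zeros((m, r))                    # zero init => delta W = 0
        self.scale = alpha / r

    def forward(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def merged_weight(self):
        """Fold the update into W for zero-latency inference."""
        return self.W + self.scale * self.B @ self.A

W = np.random.default_rng(1).standard_normal((512, 512))
layer = LoRALinear(W, r=8)                   # trains 8*(512+512) = 8192 params
x = np.random.default_rng(2).standard_normal(512)
print(np.allclose(layer.forward(x), W @ x))  # True at init: B = 0
```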
AdaLoRA (Zhang et al. 2023).
Parameterise $\Delta W = P \Lambda Q$, where $P$ and $Q$ are updated by gradient descent and $\Lambda$ is a diagonal matrix of learned singular values. Singular values that shrink toward zero can be pruned during training, giving adaptive rank allocation: some layers get high rank, others get low rank, depending on the task's needs. The total parameter count is bounded, but rank is allocated where the gradient signal is strongest.
IA^3 (Liu et al. 2022).
Rather than adding a low-rank matrix, IA^3 multiplies activations by learned scaling vectors: $h' = \ell \odot h$ for a learned vector $\ell$. Equivalently, this rescales the columns of weight matrices: $W' = W \operatorname{diag}(\ell)$. Each learned vector is a 1D subspace modulation - a rank-1 multiplicative update to a direction in activation space. Extremely parameter-efficient ($O(d)$ parameters per layer).
Prefix Tuning (Li and Liang 2021).
Prepend $k$ trainable "prefix" token embeddings to the input sequence. At each attention layer, the prefix tokens contribute additional keys and values that the original tokens can attend to. The prefix embeddings inject task-specific information into the attention computation. The "prefix subspace" is the span of the prefix key/value vectors - a subspace of dimension at most $k$ injected at each layer.
DoRA (Liu et al. 2024).
Decompose the weight matrix as $W = m \cdot \frac{V}{\|V\|_c}$ (magnitude times direction, computed column-wise). Fine-tune magnitude and direction separately: magnitude is a scalar per column (1D), direction uses LoRA (rank-$r$ subspace). The decomposition reflects the observation that weight magnitude and weight direction play different roles in learning: magnitude controls the "strength" of a feature; direction controls "which feature". DoRA consistently outperforms LoRA at the same rank by freeing the direction update from the magnitude constraint.
DARE (Yu et al. 2024).
After standard fine-tuning, randomly prune the fine-tuning update with probability $p$ (set entries to zero) and rescale the remainder by $\frac{1}{1 - p}$ to maintain expectation. The effect is to project the fine-tuning update onto a sparse "subspace" (sparse in the coordinate basis). Pruned updates can be summed across multiple fine-tuned models (model merging) with reduced interference, since the non-zero coordinates are less likely to overlap.
    SUBSPACE FINE-TUNING: COMPARISON

    Method    Subspace type        Parameters              Memory saving
    Full FT   all of R^{m x n}     mn                      1x (baseline)
    LoRA      rank-r subspace      r(m+n)                  mn / r(m+n)
    AdaLoRA   adaptive rank-r      r(m+n) + r              similar to LoRA
    IA^3      column scaling       m or n                  mn/d ~= n
    Prefix    k-dim prefix         k*d per layer           varies
    DoRA      magnitude + LoRA     r(m+n) + n              similar to LoRA
    DARE      sparse random        mn (train), 0 (prune)   0 (post-hoc)

    All methods exploit: the fine-tuning gradient lives in a low-dim subspace
13. Invariant Subspaces and Spectral Theory
13.1 Invariant Subspaces
A subspace $W \subseteq V$ is invariant under a linear map $T : V \to V$ if $T$ maps $W$ back into $W$:

$$T(W) \subseteq W, \quad \text{i.e., } w \in W \implies Tw \in W$$

Invariant subspaces are the subspaces that $T$ "respects" - it does not mix $W$ with its complement. Restricted to $W$, the map $T|_W$ is a linear map from $W$ to itself.

Trivial invariant subspaces. Every linear map has two trivial invariant subspaces: $\{0\}$ and $V$ itself. Any map maps $\{0\}$ to $\{0\}$ and maps $V$ into $V$.

Eigenspaces are 1-dimensional invariant subspaces. If $Tv = \lambda v$ (eigenvector equation), then $W = \operatorname{span}(v)$ is invariant: for any $cv \in W$:

$$T(cv) = c\,T(v) = c \lambda v \in W$$

Conversely, any 1-dimensional invariant subspace $\operatorname{span}(v)$ must have $Tv = \lambda v$ for some scalar $\lambda$ (since $Tv$ must lie in $\operatorname{span}(v)$). So eigenvectors and 1-dimensional invariant subspaces are the same thing.
Generalised eigenspaces. For eigenvalue $\lambda$, the generalised eigenspace is:

$$G_\lambda = \{ v : (T - \lambda I)^k v = 0 \text{ for some } k \ge 1 \}$$

(Specifically, $k = \dim V$ always works, but often a smaller $k$ suffices - the smallest $k$ that annihilates all of $G_\lambda$ is the index of $\lambda$.) The generalised eigenspace is invariant under $T$: $T(G_\lambda) \subseteq G_\lambda$.

Invariant subspace decomposition. The ideal scenario is when $V$ decomposes as a direct sum of invariant subspaces:

$$V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$$

In this case, $T$ acts independently on each $W_i$: the restriction $T|_{W_i}$ is a linear map from $W_i$ to $W_i$, and the overall action of $T$ is the "direct sum" of these restricted maps. In a basis that respects this decomposition, the matrix of $T$ is block diagonal:

$$\begin{pmatrix} A_1 & & \\ & \ddots & \\ & & A_k \end{pmatrix}$$

Block diagonalisation is the goal of spectral theory: decompose $V$ into invariant subspaces so that $T$ is maximally "decoupled".
13.2 The Spectral Theorem via Invariant Subspaces
The real spectral theorem is the most important result about invariant subspaces. It applies to symmetric matrices (or more generally, self-adjoint operators on inner product spaces).
Theorem (Real Spectral Theorem). For a symmetric matrix $A \in \mathbb{R}^{n \times n}$:

- All eigenvalues of $A$ are real
- Eigenvectors corresponding to distinct eigenvalues are orthogonal
- $\mathbb{R}^n$ decomposes as an orthogonal direct sum of eigenspaces: $\mathbb{R}^n = E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_k}$
- $A = Q \Lambda Q^T$ where $Q$ is orthogonal (eigenvectors as columns) and $\Lambda$ is diagonal (eigenvalues)

Why eigenvalues are real. For symmetric $A$: $\langle Ax, y \rangle = \langle x, Ay \rangle$. Such an operator is called self-adjoint. If $Av = \lambda v$ (over $\mathbb{C}$, with $v \neq 0$):

$$\lambda \|v\|^2 = \bar{v}^T A v = \overline{\bar{v}^T A v} = \bar{\lambda} \|v\|^2$$

Since $\|v\|^2 > 0$: $\lambda = \bar{\lambda}$, so $\lambda$ is real.

Why eigenvectors for distinct eigenvalues are orthogonal. If $A v_1 = \lambda_1 v_1$ and $A v_2 = \lambda_2 v_2$ with $\lambda_1 \neq \lambda_2$:

$$\lambda_1 \langle v_1, v_2 \rangle = \langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle$$

So $(\lambda_1 - \lambda_2) \langle v_1, v_2 \rangle = 0$, and since $\lambda_1 \neq \lambda_2$: $\langle v_1, v_2 \rangle = 0$.
Spectral decomposition. Let $P_i$ be the orthogonal projection onto the eigenspace $E_{\lambda_i}$. Then:

$$A = \sum_{i=1}^{k} \lambda_i P_i$$

The action of $A$ decomposes into independent scalings of orthogonal subspaces: $A$ scales $E_{\lambda_i}$ by $\lambda_i$ and acts on each eigenspace independently.
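A numpy verification of the spectral decomposition on a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(9)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                         # symmetric

eigvals, Q = np.linalg.eigh(A)            # real eigenvalues, orthonormal Q
print(np.allclose(Q @ Q.T, np.eye(5)))    # orthonormal eigenbasis

# A = sum_i lambda_i P_i with rank-1 projections P_i = q_i q_i^T.
A_rebuilt = sum(lam * np.outer(Q[:, i], Q[:, i])
                for i, lam in enumerate(eigvals))
print(np.allclose(A_rebuilt, A))

# Each eigenspace is invariant: A scales span(q_i) by lambda_i.
print(np.allclose(A @ Q[:, 0], eigvals[0] * Q[:, 0]))
```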
AI relevance. Symmetric matrices appear constantly in deep learning:

- Covariance matrices $C = \frac{1}{n} X^T X$: PCA = spectral decomposition of $C$; eigenspaces = principal component subspaces; eigenvalues = variances in each direction
- Gram matrices $X X^T$ (dual of covariance): non-zero eigenvalues of $X X^T$ = non-zero eigenvalues of $X^T X$; eigenvectors related through the SVD of $X$
- Hessian $H = \nabla^2 L$: curvature of loss landscape; eigenspaces = directions of different curvature; positive eigenvalues = directions curving up (need small learning rate); near-zero eigenvalues = flat directions (optimisation easy in these directions)
- Laplacian of a graph $L = D - A$: symmetric positive semidefinite; eigenvectors = graph Fourier basis; used in spectral clustering and graph neural networks
13.3 Singular Value Decomposition as Subspace Decomposition
The SVD extends the spectral theorem to non-square (and non-symmetric) matrices. It is the complete subspace decomposition of any linear map.
Theorem (SVD). For any matrix $A \in \mathbb{R}^{m \times n}$ with $\operatorname{rank}(A) = r$, there exist orthogonal matrices $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ and a matrix $\Sigma \in \mathbb{R}^{m \times n}$ with non-negative entries only on the main diagonal such that:

$$A = U \Sigma V^T$$

The $\sigma_1 \ge \cdots \ge \sigma_r > 0$ are the singular values, $u_1, \dots, u_m$ (columns of $U$) are the left singular vectors, and $v_1, \dots, v_n$ (columns of $V$) are the right singular vectors.
SVD as complete subspace decomposition:

- Domain $\mathbb{R}^n$: decomposes as $\mathbb{R}^n = \operatorname{row}(A) \oplus \operatorname{null}(A)$
  - $\operatorname{row}(A) = \operatorname{span}(v_1, \dots, v_r)$ (right singular vectors for non-zero singular values)
  - $\operatorname{null}(A) = \operatorname{span}(v_{r+1}, \dots, v_n)$ (right singular vectors for zero singular values)
- Codomain $\mathbb{R}^m$: decomposes as $\mathbb{R}^m = \operatorname{col}(A) \oplus \operatorname{null}(A^T)$
  - $\operatorname{col}(A) = \operatorname{span}(u_1, \dots, u_r)$ (left singular vectors for non-zero singular values)
  - $\operatorname{null}(A^T) = \operatorname{span}(u_{r+1}, \dots, u_m)$ (left singular vectors for zero singular values)
- The action of $A$: maps the $i$-th right singular direction to the $i$-th left singular direction, with scaling $\sigma_i$:

$$A v_i = \sigma_i u_i \quad (i \le r), \qquad A v_i = 0 \quad (i > r)$$
Rank-$k$ approximation. The best rank-$k$ approximation to $A$ (in Frobenius or operator norm) is:

$$A_k = U_k \Sigma_k V_k^T = \sum_{i=1}^{k} \sigma_i u_i v_i^T$$

where $U_k, V_k$ contain the first $k$ left/right singular vectors and $\Sigma_k$ the top-$k$ singular values. This is the Eckart-Young theorem: among all rank-$k$ matrices, $A_k$ is closest to $A$.

The approximation error: $\|A - A_k\|_F^2 = \sum_{i > k} \sigma_i^2$ (discarded singular values).
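An Eckart-Young check in numpy - build $A_k$ from the truncated SVD and compare the error against the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((40, 30))

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 5
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]  # best rank-k approximation

err = np.linalg.norm(A - A_k, "fro")
print(np.isclose(err**2, np.sum(S[k:]**2)))   # Eckart-Young error identity
print(np.linalg.matrix_rank(A_k) == k)
```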
AI applications of SVD as subspace decomposition:

- Weight matrix compression. A weight matrix $W$ can be approximated as $W \approx U_k \Sigma_k V_k^T$ using the top-$k$ singular components. This compresses $mn$ parameters to $k(m + n + 1)$. The column space of the compressed weight uses only the top-$k$ left singular directions; the null space grows accordingly.

- LoRA in SVD language. LoRA's $\Delta W = BA$ is a rank-$\le r$ matrix, hence its SVD has at most $r$ non-zero singular values. The columns of $B$ span the column space of $\Delta W$; the rows of $A$ span the row space. Initialising $B = 0$ (as in standard LoRA) means $\Delta W = 0$ at the start, which is correct for starting from the pre-trained weights.

- Activation space analysis. The SVD of the activation matrix $H \in \mathbb{R}^{n \times d}$ (rows = activations for $n$ inputs) reveals the effective dimensionality of the representation. If many singular values are near zero, the activations lie in a low-dimensional subspace of $\mathbb{R}^d$. This is a direct measure of representation collapse or dimensionality.

- Principal angle computation. As noted in 8.5, the principal angles between two subspaces spanned by $Q_U$ and $Q_W$ are given by the singular values of $Q_U^T Q_W$. SVD gives the geometry of subspace-to-subspace relationships.
13.4 Schur Decomposition
Theorem (Schur). Every square matrix $A$ (working over $\mathbb{C}$, or over $\mathbb{R}$ for real Schur form) can be written as:

$$A = Q T Q^{*}$$

where $Q$ is unitary ($Q^{*} Q = I$) and $T$ is upper triangular. The diagonal entries of $T$ are the eigenvalues of $A$ (possibly complex, in some order).

Real Schur form. Over $\mathbb{R}$, $A = Q T Q^T$ where $T$ is quasi-upper triangular: block upper triangular with $1 \times 1$ blocks (real eigenvalues) and $2 \times 2$ blocks (complex conjugate eigenvalue pairs). The columns of $Q$ form an orthonormal basis.

Invariant subspace connection. The first $k$ columns of $Q$ span a $k$-dimensional invariant subspace of $A$:

$$A \cdot \operatorname{span}(q_1, \dots, q_k) \subseteq \operatorname{span}(q_1, \dots, q_k)$$

This follows because $AQ = QT$ and $T$ is upper triangular: $A q_k = \sum_{i \le k} T_{ik}\, q_i$, so $A q_k$ is a linear combination of $q_1, \dots, q_k$. The Schur decomposition gives a nested sequence of invariant subspaces:

$$\operatorname{span}(q_1) \subset \operatorname{span}(q_1, q_2) \subset \cdots \subset \operatorname{span}(q_1, \dots, q_n) = V$$

Each subspace in the chain is invariant under $A$. The Schur form is the "staircase" of all invariant subspaces simultaneously.

Relationship to spectral theorem. For symmetric $A$: the Schur form has $T = \Lambda$ (diagonal), since $T = Q^T A Q$ is itself symmetric, and a symmetric upper triangular matrix must be diagonal. The Schur decomposition of a symmetric matrix is exactly the spectral decomposition.

Practical relevance. The QR algorithm (the standard algorithm for computing eigenvalues) works by iteratively applying QR decompositions to drive $A$ toward its Schur form. Each QR step refines the invariant subspace structure. The Schur form is numerically stable (unitary transformations do not amplify errors) and can always be computed, unlike the eigendecomposition (which may not exist for non-diagonalisable matrices).
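A scipy sketch: compute the real Schur form and verify that the leading columns of $Q$ span an invariant subspace (taking care not to split a $2 \times 2$ block of the quasi-triangular $T$):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(11)
A = rng.standard_normal((6, 6))           # general, non-symmetric

T, Q = schur(A)                           # real Schur form: A = Q T Q^T
print(np.allclose(Q @ T @ Q.T, A))

# Choose k at a block boundary: T[k, k-1] != 0 would split a 2x2 block.
k = 3
if abs(T[k, k - 1]) > 1e-12:
    k += 1

Qk = Q[:, :k]
image = A @ Qk                            # images of the first k columns
coords = Qk.T @ image                     # coordinates in the q_1..q_k basis
print(np.allclose(Qk @ coords, image))    # A*span(q_1..q_k) stays inside the span
```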
14. Common Mistakes
| # | Mistake | Why It's Wrong | Correct Understanding |
|---|---|---|---|
| 1 | "The union of two subspaces is a subspace" | $U \cup W$ fails closure under addition: take $u \in U \setminus W$ and $w \in W \setminus U$; their sum need not lie in $U$ or $W$. Concrete example: $U = \operatorname{span}(e_1)$ (x-axis) and $W = \operatorname{span}(e_2)$ (y-axis) in $\mathbb{R}^2$; $e_1 + e_2 \notin U \cup W$. | The sum $U + W$ is a subspace; the union is generally not. Use $U + W$ when you need the subspace generated by both. |
| 2 | "An affine subspace is a subspace" | The solution set $\{x : Ax = b\}$ with $b \neq 0$ does not contain the zero vector ($A0 = 0 \neq b$) and is not closed under addition ($A(x_1 + x_2) = 2b \neq b$). The probability simplex lies in an affine hyperplane ($\sum_i x_i = 1$) - not in a vector subspace. | An affine subspace = coset of a linear subspace = linear subspace shifted off the origin. It shares the same dimension and "shape" but is NOT a vector space. Subspaces must contain $0$. |
| 3 | "Closure under addition is sufficient for a subspace" | Closure under addition alone (without scalar multiplication) is insufficient. The first quadrant $\{(x, y) : x \ge 0, y \ge 0\}$ is closed under addition but $(-1)(1, 1) = (-1, -1)$ leaves the set, so it fails closure under scalar multiplication. | All three conditions are required: (1) contains $0$, (2) closed under addition, (3) closed under scalar multiplication. No single condition can be dropped. |
| 4 | "$\operatorname{span}(v_1, \dots, v_n) = \operatorname{span}(v_1, \dots, v_{n-1})$ implies $v_n = 0$" | The spans being equal means $v_n$ is linearly dependent on $v_1, \dots, v_{n-1}$ - it can be expressed as a linear combination of the earlier vectors. But it need not be zero. For example: $\operatorname{span}((1,0), (2,0)) = \operatorname{span}((1,0))$; $(2, 0)$ is not zero. | Equal spans means $v_n \in \operatorname{span}(v_1, \dots, v_{n-1})$ - redundant, not zero. |
| 5 | "$\dim(U \cap W) = \dim(U) + \dim(W) - \dim(V)$" | This formula is wrong. The correct formula involves the sum $U + W$: $\dim(U \cap W) = \dim(U) + \dim(W) - \dim(U + W)$. The dimension of $U \cap W$ ranges between $\max(0, \dim U + \dim W - \dim V)$ and $\min(\dim U, \dim W)$. | Correct formula (Grassmann): $\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W)$. You cannot determine $\dim(U \cap W)$ from dimensions alone without knowing $\dim(U + W)$. |
| 6 | "The complement of a subspace is a subspace" | The set complement $V \setminus U$ is never a subspace: it does not contain $0$ (since $0 \in U$) and is not closed under addition or scalar multiplication. | The orthogonal complement $U^\perp$ IS a subspace. Always say "orthogonal complement $U^\perp$", never just "complement". The orthogonal complement is the canonical linear-algebraic complement of $U$. |
| 7 | "All bases for $V$ have the same vectors" | Bases are not unique. Any set of $n$ linearly independent spanning vectors is a basis for an $n$-dimensional space. For $\mathbb{R}^2$: $\{(1,0), (0,1)\}$, $\{(1,1), (1,-1)\}$, and $\{(2,0), (1,1)\}$ are three distinct bases. | All bases for $V$ have the same number of vectors ($= \dim V$). The actual vectors can be anything linearly independent and spanning. The choice of basis is a choice of coordinate system. |
| 8 | "$\operatorname{null}(AB) = \operatorname{null}(B)$" | $\operatorname{null}(B) \subseteq \operatorname{null}(AB)$ always (if $Bx = 0$ then $ABx = 0$). But $\operatorname{null}(AB)$ can be strictly larger: if $A$ maps some $Bx \neq 0$ to zero ($A(Bx) = 0$), then $x \in \operatorname{null}(AB)$ but $x \notin \operatorname{null}(B)$. | $\operatorname{null}(B) \subseteq \operatorname{null}(AB)$. Equality holds iff $\operatorname{null}(A) \cap \operatorname{col}(B) = \{0\}$, i.e., $A$ is injective on the column space of $B$. |
| 9 | "Two subspaces with the same dimension are equal" | Dimension only describes the "size" of a subspace, not its orientation. The x-axis and y-axis in $\mathbb{R}^2$ both have dimension 1 but are completely different subspaces (they share only the origin). | Equal dimension means isomorphic as abstract vector spaces. Actual equality as subsets requires showing every vector in one is in the other - or, since the dimensions are equal, a single inclusion $U \subseteq W$ suffices. |
| 10 | "Gram-Schmidt always produces a basis from a spanning set" | Gram-Schmidt fails (division by zero) when it encounters a vector already in the span of the previously constructed orthonormal vectors. The intermediate vector becomes $0$, which cannot be normalised. | Apply Gram-Schmidt only to linearly independent vectors. For a spanning set, first extract an independent subset via row reduction, then apply Gram-Schmidt. A zero intermediate vector signals linear dependence - discard that vector and continue. |
| 11 | "The pivot columns of the RREF give a basis for the column space" | Row operations change the column space. The RREF of $A$ has different columns than $A$ itself. The pivot positions (column indices) are correct, but the actual basis vectors must be taken from the original matrix $A$, not from the RREF. | Identify pivot positions from the RREF, but extract the corresponding columns from the original $A$. The pivot columns of $A$ (not its RREF) form a basis for $\operatorname{col}(A)$. (See the sketch below this table.) |
| 12 | "$Ax = 0$ has only the trivial solution implies $A$ is invertible" | For $m \neq n$: if $A$ is not square, it cannot be invertible regardless. The trivial null space means $A$ is injective ($\operatorname{null}(A) = \{0\}$), but invertibility requires a square shape and $\operatorname{rank}(A) = n$. | For square $A$: $\operatorname{null}(A) = \{0\}$ iff $A$ is invertible. For rectangular $A$ with $m > n$: $\operatorname{null}(A) = \{0\}$ means $A$ is injective (left-invertible) but not invertible. |
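Mistake 11 is worth checking by hand once. A short SymPy sketch (the matrix is an arbitrary illustrative example):

```python
import sympy as sp

# Pivot POSITIONS come from the RREF, but basis VECTORS for col(A)
# must be taken from the original matrix A.
A = sp.Matrix([[1, 2, 0, 1],
               [2, 4, 1, 4],
               [3, 6, 1, 5]])

R, pivots = A.rref()                     # RREF and pivot column indices
print(pivots)                            # (0, 2)
col_basis = [A[:, j] for j in pivots]    # columns of A, not of R
print(col_basis)                         # (1,2,3) and (0,1,1)
# The corresponding RREF columns are unit vectors like (1,0,0),
# which do not even lie in col(A) here.
```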
15. Exercises
Exercise 1: Verifying Vector Space Axioms
For each of the following, determine whether it is a vector space with the given operations. If not, identify which axiom(s) fail. For each valid vector space, identify the zero vector and describe the additive inverses.
(a) $V = \mathbb{R}^2$, addition: $(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2)$, scalar mult: $c(x, y) = (cx, y)$ - scalar multiplication only affects the first coordinate.
(b) $V = \mathbb{R}_{>0}$ (the positive reals), addition defined as $x \oplus y = xy$ (multiplication of positive reals), scalar multiplication $c \odot x = x^c$.
(c) $V = \mathbb{Z}^2$ (integer-coordinate points) with the standard addition and scalar multiplication inherited from $\mathbb{R}^2$.
(d) $V = \{f : \mathbb{R} \to \mathbb{R} \mid f(0) = 0\}$ with standard pointwise addition and scalar multiplication: $(f + g)(x) = f(x) + g(x)$, $(cf)(x) = c\, f(x)$.
(e) $V = \{f : \mathbb{R} \to \mathbb{R} \mid f(0) = 1\}$ with the same standard operations.
Hints for (b): Verify all eight axioms carefully with the non-standard operations. The zero element of the vector space (if it exists) is the identity for $\oplus$ (namely $1$), not the number 0. For (d) and (e), think about which evaluation constraint is compatible with the zero function.
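A numerical spot-check for (b) - it gives part of the answer away, so attempt the proof first (a sketch; the sampled values are arbitrary):

```python
import numpy as np

# "Addition" is x*y, "scalar multiplication" is x**c, on positive reals.
rng = np.random.default_rng(3)
x, y = rng.uniform(0.1, 10, size=2)
c, d = rng.uniform(-3, 3, size=2)

add = lambda a, b: a * b          # vector addition
smul = lambda s, a: a ** s        # scalar multiplication

assert np.isclose(smul(c, add(x, y)), add(smul(c, x), smul(c, y)))  # c(x+y) = cx + cy
assert np.isclose(smul(c + d, x), add(smul(c, x), smul(d, x)))      # (c+d)x = cx + dx
assert np.isclose(add(x, 1.0), x)                                   # 1 acts as the zero vector
assert np.isclose(add(x, smul(-1.0, x)), 1.0)                       # additive inverse: x^(-1)
```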
Exercise 2: Subspace Verification
For each subset, determine whether it is a subspace of the given vector space. Prove it is a subspace (using the three-condition test) or find an explicit counterexample showing it is not.
(a) $U = \{(x, y, z) : x + y + z = 0\}$ inside $\mathbb{R}^3$
(b) $U = \{(x, y) : xy = 0\}$ (the two coordinate axes together) inside $\mathbb{R}^2$
(c) $U = \{A \in \mathbb{R}^{n \times n} : \operatorname{tr}(A) = 0\}$ (traceless matrices) inside $\mathbb{R}^{n \times n}$
(d) $U = \{p \in P_n : p(1) = 0\}$ (polynomials of degree $\le n$ that vanish at $x = 1$) inside $P_n$
(e) $U = \{(x, y, z) : x^2 + y^2 = z^2\}$ (double cone) inside $\mathbb{R}^3$
For each valid subspace: find a basis and state the dimension.
Exercise 3: The Four Fundamental Subspaces
Let $A = \begin{pmatrix} 1 & 2 & 0 & 1 \\ 2 & 4 & 1 & 4 \\ 3 & 6 & 1 & 5 \end{pmatrix}$.
(a) Row-reduce $A$ to RREF. Identify the pivot columns and free columns.
(b) Find a basis for $\operatorname{null}(A)$. Verify the dimension via the Rank-Nullity Theorem.
(c) Find a basis for $\operatorname{col}(A)$ using the pivot columns of the original $A$ (not the RREF). State the dimension.
(d) Find a basis for $\operatorname{row}(A)$ using the non-zero rows of the RREF. State the dimension.
(e) Find a basis for $\operatorname{null}(A^\top)$ by row-reducing $A^\top$ and finding its null space. Alternatively, argue from the dimension formula $\dim \operatorname{null}(A^\top) = m - r$.
(f) Verify the dimension counts: $\dim \operatorname{col}(A) + \dim \operatorname{null}(A) = n$ and $\dim \operatorname{row}(A) + \dim \operatorname{null}(A^\top) = m$.
(g) Verify the orthogonality $\operatorname{row}(A) \perp \operatorname{null}(A)$: compute the dot product of every basis vector of $\operatorname{row}(A)$ with every basis vector of $\operatorname{null}(A)$ and confirm it equals zero.
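A SymPy self-check harness for this exercise (it uses the example matrix above; compare its output against your hand computation rather than starting from it):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [2, 4, 1, 4],
               [3, 6, 1, 5]])

null_basis  = A.nullspace()       # basis for null(A)
col_basis   = A.columnspace()     # pivot columns of A
row_basis   = A.rowspace()        # non-zero rows of the RREF
lnull_basis = A.T.nullspace()     # left null space = null(A^T)

m, n = A.shape
r = A.rank()
assert len(col_basis) == r and len(null_basis) == n - r
assert len(row_basis) == r and len(lnull_basis) == m - r

# Orthogonality row(A) _|_ null(A): every pairwise dot product is zero.
assert all((rv * nv)[0] == 0 for rv in row_basis for nv in null_basis)
```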
Exercise 4: Span, Independence, and Basis in $\mathbb{R}^3$
Let $v_1 = (1, 0, 1)$, $v_2 = (2, 1, 0)$, $v_3 = (3, 1, 1)$, $v_4 = (0, 1, -2)$.
(a) Form the matrix $A = [v_1 \; v_2 \; v_3 \; v_4]$ (vectors as columns) and row-reduce. Are $v_1, \dots, v_4$ linearly independent?
(b) What is $\dim \operatorname{span}(v_1, v_2, v_3, v_4)$?
(c) Identify a subset of the given vectors that forms a basis for $\operatorname{span}(v_1, v_2, v_3, v_4)$.
(d) Express any dependent vectors as explicit linear combinations of your basis vectors.
(e) Is $w = (1, 1, -1)$ in $\operatorname{span}(v_1, v_2, v_3, v_4)$? If yes, find coefficients $c_i$ such that $w = \sum_i c_i v_i$. If no, explain why.
Exercise 5: Gram-Schmidt and Orthogonal Projection
Work in $\mathbb{R}^3$ with the standard inner product.
(a) Starting from $a_1 = (1, 1, 0)$, $a_2 = (1, 0, 1)$, $a_3 = (0, 1, 1)$, verify that these three vectors are linearly independent (compute the determinant of the matrix they form).
(b) Apply Gram-Schmidt to produce an orthonormal basis $\{q_1, q_2, q_3\}$ for $\mathbb{R}^3$.
(c) Verify your result: check that $\|q_i\| = 1$ for each $i$ and $q_i \cdot q_j = 0$ for $i \neq j$.
(d) Express $x = (1, 2, 3)$ in your orthonormal basis: find $c_i = q_i \cdot x$ for each $i$. Verify that $x = c_1 q_1 + c_2 q_2 + c_3 q_3$.
(e) Construct the orthogonal projection matrix onto $U = \operatorname{span}(a_1, a_2)$ using $P = QQ^\top$ where $Q = [q_1 \; q_2]$. Verify: (i) $P^2 = P$, (ii) $P^\top = P$, (iii) $\operatorname{rank}(P) = 2$. Compute $Px$ and the residual $x - Px$. Verify that $Px \in U$ and $(x - Px) \perp U$.
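A NumPy self-check for this exercise (classical Gram-Schmidt on the example vectors above; use it to verify your hand computation):

```python
import numpy as np

a = np.array([[1., 1., 0.], [1., 0., 1.], [0., 1., 1.]])   # a_1, a_2, a_3 as rows

# Classical Gram-Schmidt.
Q = []
for v in a:
    w = v - sum((q @ v) * q for q in Q)    # subtract projections onto earlier q's
    Q.append(w / np.linalg.norm(w))
Q = np.array(Q).T                          # columns q_1, q_2, q_3

assert np.allclose(Q.T @ Q, np.eye(3))     # orthonormality

# Projection onto U = span(a_1, a_2) via P = Q_2 Q_2^T.
Q2 = Q[:, :2]
P = Q2 @ Q2.T
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
x = np.array([1., 2., 3.])
assert np.allclose(Q2.T @ (x - P @ x), 0)  # residual is orthogonal to U
```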
Exercise 6: Subspace Operations and Dimension Formula
(a) In $\mathbb{R}^3$, let $U = \{(x, y, 0) : x, y \in \mathbb{R}\}$ (the $xy$-plane) and $W = \operatorname{span}((1, 1, 0), (0, 0, 1))$. Find a basis for $U + W$. Is $U + W = \mathbb{R}^3$? Find $U \cap W$ and verify the Grassmann formula: $\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$.
(b) Is $\mathbb{R}^3 = U \oplus W$ (direct sum)? If not, find a subspace $W'$ such that $\mathbb{R}^3 = U \oplus W'$.
(c) In $\mathbb{R}^4$, let $U = \operatorname{span}(e_1, e_2)$ and $W = \operatorname{span}(e_2, e_3)$. Find $\dim U$, $\dim W$, $\dim(U + W)$, and $\dim(U \cap W)$. Verify the Grassmann formula.
(d) For $U \cap W$ from part (c): find an explicit basis.
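For (c)-(d), one systematic way to compute an intersection basis is to solve $B_U x = B_W y$, i.e. find the null space of the block matrix $[B_U \mid -B_W]$. A SymPy sketch with the subspaces from (c):

```python
import sympy as sp

# A vector in U ∩ W satisfies B_U x = B_W y for some coefficient vectors x, y,
# so stack [B_U | -B_W] and read x off its null space.
BU = sp.Matrix([[1, 0], [0, 1], [0, 0], [0, 0]])   # columns span U = span(e1, e2)
BW = sp.Matrix([[0, 0], [1, 0], [0, 1], [0, 0]])   # columns span W = span(e2, e3)

M = BU.row_join(-BW)                 # 4x4 block matrix [B_U | -B_W]
for z in M.nullspace():
    x = z[:2, 0]                     # the x-part of the stacked (x, y)
    print((BU * x).T)                # a basis vector of U ∩ W: (0, 1, 0, 0)
```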
Exercise 7: Change of Basis and Coordinates
In $\mathbb{R}^2$, let $\mathcal{B} = \{b_1, b_2\}$ with $b_1 = (1, 0)$ and $b_2 = (1, 1)$, and let $\mathcal{C} = \{c_1, c_2\}$ with $c_1 = (2, 1)$ and $c_2 = (1, 1)$.
(a) Verify that both $\mathcal{B}$ and $\mathcal{C}$ are bases for $\mathbb{R}^2$ (compute the determinants of the corresponding matrices).
(b) Find the change-of-basis matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$: express each $b_j$ as a linear combination of $c_1, c_2$, and use these coefficient vectors as the columns of $P_{\mathcal{C} \leftarrow \mathcal{B}}$.
(c) For the vector $v$ with $[v]_{\mathcal{B}} = (2, 3)$ (coordinates in basis $\mathcal{B}$), compute $[v]_{\mathcal{C}} = P_{\mathcal{C} \leftarrow \mathcal{B}} [v]_{\mathcal{B}}$.
(d) Compute $v$ explicitly in the standard basis. Then verify your answer to (c) by directly expressing $v$ as a linear combination of $c_1$ and $c_2$.
(e) If a linear map $T$ has matrix $A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}$ in the standard basis, find its matrix representation $[T]_{\mathcal{B}} = B^{-1} A B$ in basis $\mathcal{B}$, where $B = [b_1 \; b_2]$.
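A NumPy self-check for this exercise (using the example bases, coordinates, and map above):

```python
import numpy as np

B = np.array([[1., 1.], [0., 1.]])   # columns b_1, b_2
C = np.array([[2., 1.], [1., 1.]])   # columns c_1, c_2

P = np.linalg.solve(C, B)            # change of basis: [v]_C = P [v]_B
vB = np.array([2., 3.])
vC = P @ vB

v = B @ vB                           # v in the standard basis
assert np.allclose(C @ vC, v)        # expressing v in C recovers the same vector

# (e) Matrix of T in basis B: [T]_B = B^{-1} A B; similar matrices share eigenvalues.
A = np.array([[1., 2.], [0., 3.]])
TB = np.linalg.solve(B, A @ B)
assert np.allclose(np.sort(np.linalg.eigvals(TB)), np.sort(np.linalg.eigvals(A)))
```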
Exercise 8: AI Application - Subspace Analysis of a Weight Matrix
Let $W = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{pmatrix}$.
(a) Compute $\operatorname{rank}(W)$ and $\dim \operatorname{null}(W)$. Is $W$ full rank?
(b) Find a basis for $\operatorname{col}(W)$ and $\operatorname{null}(W)$. Verify $\operatorname{rank}(W) + \dim \operatorname{null}(W) = 3$.
(c) A LoRA adapter has rank $r = 1$ with update $\Delta W = BA$ where $B = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$ and $A = \begin{pmatrix} 0 & 0 & 1 \end{pmatrix}$. Explicitly compute $\Delta W$. What is $\operatorname{col}(\Delta W)$? What is $\operatorname{row}(\Delta W)$?
(d) Consider the updated weight $W' = W + \Delta W$. Without computing $W'$ explicitly, state the upper and lower bounds on $\operatorname{rank}(W')$. Under what condition on the relationship between $\operatorname{col}(B)$ and $\operatorname{null}(W^\top)$ (i.e., the left null space of $W$) would $\operatorname{rank}(W') = 3$? Under what condition would $\operatorname{rank}(W') = 2$?
(e) Now compute $W'$ explicitly. Verify your predictions from (d) by computing $\operatorname{rank}(W')$.
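A NumPy self-check for this exercise (the example matrices above):

```python
import numpy as np

W = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 1., 1.]])
B = np.array([[1.], [0.], [1.]])      # 3x1
A = np.array([[0., 0., 1.]])          # 1x3
dW = B @ A                            # rank-1 LoRA update

print(np.linalg.matrix_rank(W))       # 2
print(np.linalg.matrix_rank(dW))      # 1
print(np.linalg.matrix_rank(W + dW))  # 3: col(B) adds a direction outside col(W)
```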
Exercise 9 (Challenge): The Superposition Geometry
This exercise develops the geometry of the superposition hypothesis.
(a) In $\mathbb{R}^2$ (a 2-dimensional embedding space), suppose we want to store $3$ features as unit vectors $f_1, f_2, f_3$ with maximum pairwise orthogonality. Show that it is impossible for all three feature vectors to be pairwise orthogonal. (Hint: three pairwise orthogonal unit vectors in $\mathbb{R}^2$ cannot exist - they would be linearly independent.)
(b) Instead, place three unit vectors at angles $0°$, $120°$, $240°$ from the positive $x$-axis: $f_1 = (1, 0)$, $f_2 = (-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2})$, $f_3 = (-\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2})$. Compute all pairwise inner products $f_i \cdot f_j$ for $i \neq j$. What is the interference level?
(c) The "reconstruction loss" for feature when all features have activation is: the residual after reading off the -th feature and subtracting the contribution of the feature direction. Specifically, if the residual stream stores , and we estimate , show that . The error is the interference from other features.
(d) Compute the expected interference $\mathbb{E}\big[\big|\sum_{j \neq i} a_j (f_i \cdot f_j)\big|\big]$ for the configuration in (b), assuming each $a_j \in \{0, 1\}$ independently with probability $p$ of being active. What does the interference approach as $p \to 0$?
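A Monte-Carlo check for (d) (a sketch; the sample count and activation probabilities are arbitrary choices):

```python
import numpy as np

# Three 120-degree features in R^2, sparse 0/1 activations,
# interference on the read-off of feature 1.
angles = np.array([0., 2 * np.pi / 3, 4 * np.pi / 3])
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # rows f_1, f_2, f_3

rng = np.random.default_rng(4)
for p in [0.5, 0.1, 0.01]:
    a = (rng.random((100_000, 3)) < p).astype(float)     # sparse activations
    h = a @ F                                            # residual stream h = sum_j a_j f_j
    a1_hat = h @ F[0]                                    # read-off of feature 1
    interference = a1_hat - a[:, 0]
    print(p, np.mean(np.abs(interference)))              # shrinks as p -> 0
```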
Exercise 10 (Challenge): Krylov Subspaces
Let $A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}$ and $b = (1, 0, 0)^\top$.
(a) Compute $Ab$ and $A^2 b$.
(b) Does $\mathcal{K}_3(A, b) = \operatorname{span}(b, Ab, A^2 b) = \mathbb{R}^3$? (Check whether $b$, $Ab$, and $A^2 b$ are linearly independent.)
(c) Apply the Lanczos algorithm to produce an orthonormal basis $\{q_1, q_2, q_3\}$ for $\mathcal{K}_3(A, b)$ (Gram-Schmidt applied to $b, Ab, A^2 b$).
(d) In this orthonormal basis $Q = [q_1 \; q_2 \; q_3]$, compute the tridiagonal matrix $T = Q^\top A Q$ where $T_{ij} = q_i^\top A q_j$. Verify that $T$ is symmetric and (nearly) tridiagonal. The eigenvalues of $T$ approximate the eigenvalues of $A$ - verify this by comparing with the actual eigenvalues of $A$.
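A compact NumPy sketch of the whole exercise (QR on the Krylov matrix plays the role of Gram-Schmidt, so the signs of the $q_i$ may differ from a hand computation):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])
b = np.array([1., 0., 0.])

K = np.column_stack([b, A @ b, A @ A @ b])   # Krylov matrix [b, Ab, A^2 b]
Q, _ = np.linalg.qr(K)                       # orthonormal Krylov basis

T = Q.T @ A @ Q
print(np.round(T, 10))                       # symmetric tridiagonal
print(np.linalg.eigvalsh(T))                 # matches eigvalsh(A): 2 - sqrt(2), 2, 2 + sqrt(2)
print(np.linalg.eigvalsh(A))
```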
16. Why This Matters for AI (2026 Perspective)
| Aspect | Impact |
|---|---|
| Residual stream as shared vector space | The residual stream in a transformer is the central shared vector space $\mathbb{R}^{d_{\text{model}}}$. Every attention head, MLP layer, and positional encoding adds vectors to this space via the residual connection. All components communicate through this one $d_{\text{model}}$-dimensional space - nothing else is shared. Understanding the subspace decomposition of the residual stream (which subspaces are written by which components, which are read by which) is literally the same as understanding how information flows through a transformer. There is no higher-level description. |
| LoRA rank = subspace dimension | LoRA's entire design is a subspace constraint. The update $\Delta W = BA$ is a rank-$\le r$ matrix, living in a tiny slice of the full parameter space ($r(m + n)$ parameters instead of $mn$). Choosing $r$ is choosing the dimension of the subspace to search in. Too small: the subspace doesn't contain the optimal update direction, and performance suffers. Too large: you are searching a subspace larger than necessary, wasting parameters and potentially overfitting. The right $r$ is the intrinsic dimension of the fine-tuning task - a subspace dimension. |
| Superposition and polysemanticity | The superposition hypothesis says LLMs represent more features than their embedding dimension $d$ allows. Since more than $d$ linearly independent directions cannot exist in $\mathbb{R}^d$, features beyond $d$ must share dimensions through non-orthogonal superposition. Each neuron becomes polysemantic - it responds to multiple features. This limits interpretability (you cannot read off features from individual dimensions) and causes interference between features. Solving superposition is one of the central unsolved problems in mechanistic interpretability, and the solution must be phrased in the language of subspace geometry. |
| MLA and KV subspace compression | DeepSeek-V3's Multi-head Latent Attention compresses KV projections to a rank-$d_c$ latent subspace with $d_c \ll d_{\text{model}}$. The KV cache stores only the $d_c$-dimensional compressed representation; at inference, it is decompressed back to the full dimension. The rank $d_c$ is the architectural bottleneck that enables a large reduction in KV cache memory. The subspace dimension is the design variable that trades memory against expressiveness. This is subspace thinking at the architecture level: the architecture is designed around a low-dimensional subspace constraint. |
| Mechanistic interpretability circuits | Every circuit found in mechanistic interpretability is a composition of subspace operations. A "previous token head" reads from a specific subspace of the residual stream (via the row spaces of its $W_Q$, $W_K$, $W_V$ projections) and writes to a specific subspace (via the column space of its output projection). An "induction head" communicates with the previous token head through a shared subspace. Superposition of features happens in specific subspaces. The "language" of mechanistic interpretability is entirely the language of subspaces: read, write, rotate, project - all in $\mathbb{R}^{d_{\text{model}}}$. |
| Representation collapse prevention | In self-supervised learning, representation collapse means all embeddings converge to a low-dimensional (or 0-dimensional) subspace - the network learns to output the same or similar vectors for all inputs. VICReg, Barlow Twins, and BYOL all prevent collapse by adding losses or architectural asymmetries that penalise low-dimensional representations. VICReg's variance term keeps the batch standard deviation of each embedding dimension above a threshold, which prevents the embeddings from collapsing to a subspace. Collapse = subspace dimension $\ll d$; the loss explicitly maintains subspace dimension. |
| Implicit bias and minimum-norm solutions | Gradient descent on overparameterised linear models (and, empirically, on large neural networks) converges to minimum-norm solutions. The minimum-norm solution lies in the row space of the data matrix $X$ - it is the unique solution in the subspace $\operatorname{row}(X) = \operatorname{null}(X)^\perp$, the orthogonal complement of the null space. The implicit bias of gradient descent is a subspace selection: it selects the solution in $\operatorname{row}(X)$ rather than any other coset representative. This subspace selection is what enables generalisation in overparameterised models. |
| Orthogonality and head diversity | If attention heads write to mutually orthogonal subspaces of the residual stream, they do not interfere with each other - each head has an independent information channel. Head redundancy is equivalent to subspace overlap: if head $i$'s column space is a subspace of head $j$'s column space, head $i$ is redundant. Pruning heads that write to subspaces already covered by remaining heads preserves expressiveness while reducing computation. Orthogonality between head subspaces is the precise mathematical criterion for head independence. |
| Function space universality | The universal approximation theorem says that neural networks with nonlinear activations can approximate any continuous function to arbitrary accuracy. In subspace language: the set of functions representable by a sufficiently large network is dense in $C(K)$ (continuous functions on a compact domain) - it is not a subspace (the network family is non-linear), but its closure is the entire function space. The depth and nonlinearity of the architecture determine which function-space "directions" are accessible. Width and depth determine the "dimensionality" of the accessible function "subspace". |
| Second-order optimisation | The Hessian of the training loss is an $n \times n$ matrix (for $n$ parameters) with most of its spectral mass concentrated in a low-dimensional subspace (the "bulk" subspace where curvature is large). The remaining directions have near-zero curvature (flat directions). K-FAC, Shampoo, and SOAP all identify and exploit this curvature subspace by approximating the inverse Hessian in the curved subspace and using the identity in the flat directions. Natural gradient preconditions updates by the inverse Fisher (a positive semidefinite matrix): it aligns updates with the geometry of the curved subspace and suppresses movement in the flat directions. |
Conceptual Bridge
Vector spaces and subspaces are the geometry underlying all of linear algebra. Every concept - linear independence, span, basis, dimension, rank, orthogonality, projections, eigenvalues - is ultimately a statement about the structure of subspaces. The eight axioms that define a vector space are simple; the richness emerges from the subspace hierarchy they generate.
The abstract framework is what makes the theory universal. The same theorems that govern arrows in a plane govern polynomials, matrices, functions, and probability distributions. Verifying the eight axioms once grants access to a century of results - for free, without reproof. This universality is not just mathematical elegance; it is practical power. When you recognise that the gradient vectors during training form a set of vectors in $\mathbb{R}^n$ and that $\mathbb{R}^n$ is a vector space, you immediately know that all of linear algebra applies: span, independence, subspaces, projections, and dimension all become tools for understanding training dynamics.
For AI in 2026, subspaces are not abstract. They are the architectural primitives of transformers (the residual stream, the attention subspace, the MLP subspace), the design variables of efficient fine-tuning (LoRA rank = subspace dimension), the diagnostic language of interpretability (which subspace does this head write to?), and the theoretical foundation of generalisation (gradient updates live in a low-dimensional subspace; the implicit bias selects the minimum-norm solution in the row space).
WHERE THIS MODULE SITS IN THE CURRICULUM
Sets and Logic -> Functions -> Summation
Proof Techniques
Vectors and Spaces (geometry: what vectors are)
Matrix Operations (computation: how to multiply matrices)
Systems of Equations (solving: Gaussian elimination)
Determinants (volume: the signed volume of a parallelepiped)
Matrix Rank (dimensionality: the rank-nullity connection)
Vector Spaces and Subspaces (structure) <- THIS MODULE
The payoff: everything from earlier modules
is now understood geometrically as a statement
about subspaces. Rank = dimension of col space.
Null space = kernel. Solutions of Ax=b live in
a coset of null(A). Determinant = 0 iff the
column space is a proper subspace of R^n.
Eigenvalues and Decompositions (spectrum of a linear map)
- invariant subspaces of T: revealed through its eigenvalues
- eigenvector: 1D invariant subspace
- spectral theorem: orthogonal direct sum of eigenspaces
- SVD: complete subspace decomposition of any matrix
Probability and Information Theory
- probability distributions as vectors in function space
- KL divergence as a geometry on the simplex
Calculus and Optimisation
- gradient lives in R^n, a vector space
- Hessian's eigenspaces = directions of curvature
- natural gradient uses curved subspace geometry
LLM Mathematics
What comes next: Eigenvalues, Eigenvectors, and Matrix Decompositions.
The next module exposes the internal subspace structure of a linear map - the invariant subspaces that a matrix preserves, revealed through its eigenvalues and eigenvectors. An eigenvector spans a 1-dimensional invariant subspace; the spectral theorem decomposes any symmetric matrix into orthogonal 1-dimensional invariant subspaces; the SVD extends this to the complete subspace structure of any rectangular matrix. The four fundamental subspaces are refined even further - into singular subspaces aligned with singular values. All of this is the continuation of the subspace story begun in this module.
<- Back to Linear Algebra Basics | Next: Eigenvalues and Decompositions ->