Vector Spaces and Subspaces - Part 1: Sections 1 (Intuition) to 6 (Basis and Dimension)
1. Intuition
1.1 What Is a Vector Space?
A vector space is a collection of objects - called vectors - together with two operations (addition and scalar multiplication) that behave exactly the way arrows in the plane behave. You can add any two vectors and get another vector in the same collection. You can stretch any vector by any scalar and stay in the collection. And these two operations interact with each other in the exact way you would expect from ordinary arithmetic.
The power of the abstract definition is this: once you verify that a collection satisfies the vector space axioms, every theorem ever proved about vector spaces applies immediately - whether the "vectors" are geometric arrows, polynomials, matrices, functions, probability distributions, or sequences of real numbers. You do not reprove theorems for each new setting; you simply check the axioms once and inherit a century of results for free.
The word "space" is deliberate. A vector space has geometric structure. You can talk about directions, linear combinations, independence, and dimensionality - even when the "vectors" are abstract objects that look nothing like arrows. The polynomial lives in a vector space. So does the identity matrix. So does the constant function . In each case, the "geometry" is real and consequential, even if it is not immediately visible.
For AI, this abstraction is not philosophical - it is practical:
VECTOR SPACES THAT APPEAR IN EVERY NEURAL NETWORK
R^d          - embedding space
               tokens, sentences, images, users are all vectors here
               arithmetic on vectors (king - man + woman ~= queen)
               only makes sense because R^d is a vector space

R^P          - parameter space
               all weights of a neural network form a single vector in R^P
               gradient descent navigates this space
               LoRA restricts updates to a subspace of R^P

R^(m x n)    - weight matrix space
               each weight matrix is a "vector" in an mn-dimensional space
               adding matrices (as in LoRA: W + DeltaW) is vector addition
               scaling a matrix by a learning rate is scalar multiplication

infinity-dim - function space
               the set of all functions a neural architecture can represent
               approximation theory asks which subspace of all functions
               a given architecture can reach
The abstract definition captures all of these simultaneously. That is the point.
1.2 Why Vector Spaces Are the Language of Deep Learning
Every major architectural concept in modern deep learning can be phrased as a statement about vector spaces or their subspaces. This is not a coincidence - it reflects the fundamental linearity of most operations in a transformer.
Embedding spaces. Tokens, sentences, images, and users are all represented as vectors in $\mathbb{R}^d$. This only makes sense because $\mathbb{R}^d$ is a vector space: you can meaningfully add embedding vectors, scale them, and take linear combinations. The famous word embedding arithmetic (king - man + woman $\approx$ queen) is a statement about the linear structure of the embedding space. Cosine similarity - the standard measure of semantic relatedness - is an inner product operation. None of this would be coherent if the embedding space were not a vector space.
Parameter space. All parameters of a neural network - every weight matrix, every bias vector, every LayerNorm scale - can be concatenated into a single vector $\theta \in \mathbb{R}^P$. Gradient descent navigates this $P$-dimensional vector space. The loss landscape lives over this space. The geometry of the parameter space - its curvature, the directions of steepest descent, the flat directions - shapes every aspect of optimisation. Gradient clipping, momentum, and weight decay are all operations in the vector space $\mathbb{R}^P$.
Residual stream. In transformer architectures, the central computational object is the residual stream: a vector $x \in \mathbb{R}^{d_{\text{model}}}$ that is passed from layer to layer and updated by each component:

$$x \leftarrow x + \text{Attention}(x), \qquad x \leftarrow x + \text{MLP}(x)$$

The update rule makes sense only because $\mathbb{R}^{d_{\text{model}}}$ is a vector space - you can add the attention output (a vector in $\mathbb{R}^{d_{\text{model}}}$) and the MLP output (also a vector in $\mathbb{R}^{d_{\text{model}}}$) to the residual stream. Each layer writes a new vector into the same shared space. All components communicate through this one $d_{\text{model}}$-dimensional vector space. The subspace structure of $\mathbb{R}^{d_{\text{model}}}$ - which directions are "owned" by which components - is what mechanistic interpretability studies.
Function spaces. Neural networks represent functions. The set of all functions representable by a given architecture with bounded parameters is a subset of an infinite-dimensional function space. Approximation theory studies which subspaces of that function space can be well-approximated by which architectures. The universal approximation theorem is a statement about density in a function space. Understanding which functions a model can and cannot represent requires understanding subspaces of infinite-dimensional vector spaces.
Gradient space. At each training step, the gradient $\nabla_\theta L$ is a vector in the same parameter space $\mathbb{R}^P$. Gradient accumulation (summing gradients across microbatches) is vector addition. Gradient clipping (bounding the norm) is a scaling operation. Momentum (maintaining a weighted sum of past gradients) is a linear combination. All of these rely on the vector space structure of $\mathbb{R}^P$. Empirically, the gradient vectors observed during training lie in a much lower-dimensional subspace of $\mathbb{R}^P$ - a fact that motivates LoRA and related methods.
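To make this concrete, here is a minimal NumPy sketch (the parameter count and data are made up for illustration) showing that accumulation, clipping, momentum, and weight decay are nothing but vector addition, scaling, and linear combinations in $\mathbb{R}^P$:

```python
import numpy as np

# Illustrative sketch: the gradient operations above are plain vector-space
# operations in R^P. Dimensions and data are invented for the example.
P = 10_000                                  # pretend parameter count
rng = np.random.default_rng(0)

micro_grads = [rng.standard_normal(P) for _ in range(4)]   # 4 microbatches

# Gradient accumulation = vector addition (then averaging, a scalar multiplication)
g = sum(micro_grads) / len(micro_grads)

# Gradient clipping = scalar multiplication (rescale if the norm is too large)
max_norm = 1.0
norm = np.linalg.norm(g)
if norm > max_norm:
    g = g * (max_norm / norm)

# Momentum = linear combination of the running velocity and the new gradient
beta = 0.9
velocity = np.zeros(P)
velocity = beta * velocity + (1 - beta) * g

# Weight decay folded into the update = another linear combination
theta = rng.standard_normal(P)
lr, wd = 1e-3, 1e-2
theta = theta - lr * (velocity + wd * theta)
```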
Tangent spaces. The loss surface over the parameter space is a manifold. At each point $\theta$, the tangent space - the set of directions one can move while remaining (to first order) on the manifold - is a vector space. Natural gradient methods use the geometry of these tangent spaces; the Fisher information matrix defines a metric on the parameter manifold that is fundamentally a statement about the inner product structure of the tangent space.
1.3 The Hierarchy of Structure
Vector spaces occupy a specific level in a hierarchy of mathematical structures. Each level adds additional structure on top of the previous, enabling more refined geometric notions:
HIERARCHY OF MATHEMATICAL STRUCTURE
Set (no structure)
+ addition and scalar multiplication obeying 8 axioms
Vector Space (linear structure: add, scale)
- can speak of linear combinations, span, independence, basis
- can speak of dimension
+ norm ||v|| (a function measuring "length")
Normed Space (metric structure: can measure distances)
- can speak of convergence, continuity, open/closed sets
- can speak of the "size" of a vector
+ inner product <u, v> (a function measuring "alignment")
Inner Product Space (angular structure: can measure angles)
- can speak of orthogonality, projections, cosine similarity
- can speak of the "angle" between two vectors
+ completeness (every Cauchy sequence converges in the space)
Hilbert Space (complete inner product space)
- can speak of infinite sums that converge
- can speak of Fourier decompositions
- the correct setting for quantum mechanics and function spaces
Key facts:
- R^n (Euclidean space) satisfies ALL levels simultaneously.
- Every finite-dimensional inner product space is a Hilbert space
(completeness is automatic in finite dimensions).
- Infinite-dimensional function spaces (L^2, Sobolev spaces)
require the Hilbert structure to behave well.
- AI embedding spaces are finite-dimensional: all levels apply.
- RKHS for kernel methods: infinite-dimensional Hilbert spaces.
The hierarchy matters because each additional level of structure enables new theorems. In a bare vector space, you can talk about linear combinations and dimension. Add a norm: you can measure convergence. Add an inner product: you can define orthogonality and projections. Add completeness: you can take limits of infinite sequences. For AI, finite-dimensional vector spaces with inner products (Hilbert spaces) are the default working environment. Function spaces require infinite-dimensional Hilbert structure.
1.4 The Intuition of a Subspace
A subspace is a vector space living inside a larger vector space - a flat, linear subset that passes through the origin and is closed under the same two operations.
"Flat" and "through the origin" are the two defining geometric properties:
- A line through the origin in $\mathbb{R}^n$ is a subspace. A line not through the origin is not.
- A plane through the origin is a subspace. A plane not through the origin is not.
- A sphere, a circle, a parabola, a curved surface - none are subspaces.
- The positive quadrant is not a subspace (not closed under scalar multiplication by negative scalars).
Why "through the origin"? The zero vector must be in any subspace. This follows from closure under scalar multiplication: if $v$ is in the subspace, then $0 \cdot v = \mathbf{0}$ must also be in the subspace. A subset that doesn't contain $\mathbf{0}$ cannot be a subspace.
Why "flat"? Take any two vectors $u, v$ in the subspace. Their sum $u + v$ must also be in the subspace. Multiply either by any scalar - the result must still be in the subspace. This forces the subset to be "straight". There can be no curves, bends, or boundaries. If the subspace contains a vector $v$, it must contain all scalar multiples $\alpha v$ for every $\alpha \in \mathbb{R}$ - the entire line through the origin in the direction of $v$.
SUBSPACES AND NON-SUBSPACES IN R^2 AND R^3

R^2 subspaces:                R^3 subspaces:
  The whole plane R^2           All of R^3
  Any line through origin       Any plane through origin
  {0}                           Any line through origin
                                {0}

R^2 non-subspaces:            R^3 non-subspaces:
  Line not through origin       Plane not through origin
  Half-plane                    Sphere / cylinder / cone
  Unit circle                   Positive octant
  Filled disk                   Affine subspace (offset from origin)
  Upper half-plane

Test: does it contain 0? is it closed under + and scaling?
If any answer is NO -> not a subspace.
For AI, the subspace concept is concrete. The column space of a weight matrix is a subspace of the output space. The null space is a subspace of the input space. The set of all weight updates targeted by LoRA is a subspace of the weight matrix space. The set of all probability vectors (probabilities summing to 1) is not a subspace (it does not contain the zero vector and is not closed under vector addition). This distinction matters: arithmetic that is meaningful in a vector space may be meaningless in a non-subspace.
1.5 Subspaces That Appear Constantly in AI
The following subspaces recur throughout deep learning theory, practice, and interpretability. Each is worth understanding geometrically before encountering it formally.
Column space col(W). For a weight matrix $W \in \mathbb{R}^{m \times n}$, the column space is the set of all vectors $Wx$ as $x$ ranges over $\mathbb{R}^n$. It is the set of all reachable outputs of the linear layer. If $\operatorname{rank}(W) = r < m$, then only an $r$-dimensional subspace of $\mathbb{R}^m$ can be produced by this layer regardless of the input. The remaining $m - r$ dimensions of the output space are always zero from this layer's contribution.
Null space null(W). The null space is the set of all inputs that produce zero output: $\operatorname{null}(W) = \{x \in \mathbb{R}^n : Wx = \mathbf{0}\}$. It is the "invisible" subspace - directions in the input that this layer completely ignores. A vector in the null space carries no information through the layer. Its dimension is $n - \operatorname{rank}(W)$. Mechanistically: if a component of the residual stream lies in the null space of all query and key projections of some head, that component is invisible to that head.
Row space row(W). The row space is the span of the rows of $W$, equivalently the column space of $W^T$. It is the set of input directions that $W$ "listens to". The projection of an input onto the row space determines the output; the projection onto the null space is discarded.
Attention subspace. Each attention head in a transformer has query and key projection matrices $W_Q, W_K \in \mathbb{R}^{d_{\text{model}} \times d_k}$. The attention scores are computed in the $d_k$-dimensional subspace defined by these projections. The head is completely blind to directions in $\mathbb{R}^{d_{\text{model}}}$ orthogonal to the column spaces of $W_Q$ and $W_K$. The value and output projections similarly define the subspace that the head writes to. Attention is fundamentally a subspace operation: it reads from one subspace and writes to another.
Gradient subspace. The gradient at step $t$ is a vector in $\mathbb{R}^P$. Over the course of training, these gradient vectors empirically lie in a subspace of $\mathbb{R}^P$ whose dimension is much smaller than $P$. Gur-Ari et al. (2018) showed that the top eigenspace of the gradient covariance matrix - the "gradient subspace" - has effective dimension in the hundreds or thousands, not in the hundreds of millions. This is the empirical foundation for LoRA: if gradients live in a low-dimensional subspace, it may suffice to restrict weight updates to that subspace.
Residual stream subspace. Mechanistic interpretability reveals that different components of a transformer write to different subspaces of the $d_{\text{model}}$-dimensional residual stream. Each attention head "owns" a subspace (defined by its OV circuit) and communicates with other components through that subspace. The residual stream is a shared message-passing space, and the subspace structure determines which components can communicate with which.
LoRA update subspace. Low-Rank Adaptation restricts the weight update to $\Delta W = BA$, where $B \in \mathbb{R}^{m \times r}$ and $A \in \mathbb{R}^{r \times n}$. This constrains $\Delta W$ to a low-rank subspace of $\mathbb{R}^{m \times n}$: for fixed $B$, the set of matrices whose column space is contained in $\operatorname{col}(B)$. The rank $r$ is literally the dimension of the column-space subspace being searched. Choosing $r$ is choosing the dimension of the subspace.
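A minimal NumPy sketch of this constraint (shapes and data are illustrative, not taken from any real model): the update $BA$ has rank at most $r$, every column of it lies in $\operatorname{col}(B)$, and applying it is ordinary vector addition in $\mathbb{R}^{m \times n}$.

```python
import numpy as np

# Illustrative sketch of the LoRA subspace constraint (shapes are made up).
m, n, r = 64, 32, 4
rng = np.random.default_rng(0)

B = rng.standard_normal((m, r))
A = rng.standard_normal((r, n))
delta_W = B @ A                               # the constrained update

# rank(delta_W) <= r: the update lives in a low-dimensional subspace
print(np.linalg.matrix_rank(delta_W))         # 4 (almost surely, for random B, A)

# every column of delta_W lies in col(B): a least-squares fit reproduces it exactly
coeffs, res, *_ = np.linalg.lstsq(B, delta_W, rcond=None)
print(np.allclose(B @ coeffs, delta_W))       # True

# the update is applied by ordinary vector addition in R^(m x n)
W = rng.standard_normal((m, n))
W_adapted = W + delta_W
```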
1.6 Historical Timeline
The concept of a vector space did not emerge from a single insight - it crystallised gradually over seven decades as mathematicians in different countries, working on different problems, all needed the same abstract structure:
| Year | Person / Event | Significance |
|---|---|---|
| 1827 | Möbius - barycentric coordinates | Linear combinations with fixed coefficient sum; early subspace thinking embedded in geometry |
| 1844 | Grassmann - Ausdehnungslehre (Theory of Extension) | Abstract $n$-dimensional linear spaces, vector algebra, exterior products; 50 years ahead of its time; largely ignored until the 1880s |
| 1858 | Cayley - matrix algebra | Matrices as the natural representation of linear maps; implicit use of vector space structure in $\mathbb{R}^n$ |
| 1888 | Peano - Calcolo Geometrico | First rigorous axiomatic definition of a vector space (Peano called them "linear systems"); the modern eight axioms are essentially unchanged from his formulation |
| 1900-10 | Hilbert - infinite-dimensional spaces | Spectral theory of operators; sequences as vectors; laid the foundation for quantum mechanics and functional analysis; introduced Hilbert spaces |
| 1920s | Banach - normed complete spaces | Generalised Hilbert spaces to settings without an inner product; functional analysis matures as a discipline |
| 1929 | von Neumann - Hilbert space formalism | Rigorous foundation for quantum mechanics as operators on Hilbert space; spectral theorem in full generality |
| 1930s-60s | Bourbaki - abstract algebra | Vector spaces as modules over fields; categorical perspective; the modern textbook treatment of linear algebra |
| 1935 | Whitney - tangent bundles | A vector space (tangent space) attached to every point of a smooth manifold; modern differential geometry |
| 1950s-90s | Krylov subspace methods | CG, GMRES, Lanczos: iterative solvers that expand a nested sequence of subspaces; subspace geometry as a computational algorithm |
| 1970s-90s | LAPACK / numerical linear algebra | Practical algorithms for computing with subspaces (QR, SVD, Gram-Schmidt); subspace methods become usable at scale |
| 1901 / 1991 | Pearson / Turk-Pentland - PCA | Pearson (1901) defines principal component analysis; Turk and Pentland (1991) apply PCA to face recognition ("eigenfaces"); subspace learning as a machine learning tool |
| 2013 | Mikolov et al. - word2vec | Words as vectors in a real embedding space; semantic arithmetic; gender direction as a 1D subspace; demonstrated that natural language semantics has linear subspace structure |
| 2018 | Gur-Ari et al. - gradient subspace | The "gradient subspace" of neural network training has dimension far smaller than the parameter count; gradient updates live in a low-dimensional subspace of parameter space |
| 2020 | Aghajanyan et al. - intrinsic dimensionality | Fine-tuning updates for large pretrained language models lie in a subspace whose dimension is orders of magnitude smaller than the parameter count; formal evidence for low-dimensional fine-tuning |
| 2021 | Hu et al. - LoRA | Explicit restriction of weight updates to a rank-$r$ subspace; subspace constraint as an inductive bias; state-of-the-art parameter-efficient fine-tuning |
| 2022 | Elhage et al. - superposition hypothesis | Features in LLMs are represented as directions (1D subspaces) in the residual stream; features outnumber dimensions, forcing non-orthogonal superposition; subspace geometry is the language of mechanistic interpretability |
| 2024 | DeepSeek-V2/V3 - MLA | Multi-head Latent Attention: KV projections compressed to a low-rank latent subspace; the KV information is architecturally constrained to a low-dimensional subspace; 5.75x memory reduction |
The arc of this timeline is clear: what began as abstract geometry became, over 180 years, the core language of AI architecture and efficiency.
2. Formal Definitions
Recall: the earlier Vectors and Spaces lesson introduced the vector space axioms with concrete geometric intuition ($\mathbb{R}^n$, function spaces, polynomial spaces). This section re-derives them in full axiomatic rigour - the goal here is provability, not intuition. The two sections are complementary: that lesson tells you what the axioms describe; this section tells you what follows from them.
2.1 The Axioms of a Vector Space
A vector space $V$ over a field $F$ (we use $F = \mathbb{R}$ throughout unless stated otherwise) is a set $V$ together with two operations:
- Addition: $+ : V \times V \to V$ - takes two vectors, produces a vector
- Scalar multiplication: $\cdot : F \times V \to V$ - takes a scalar and a vector, produces a vector
satisfying the following eight axioms for all $u, v, w \in V$ and all $\alpha, \beta \in F$:
Axioms for addition:
| # | Name | Statement |
|---|---|---|
| 1 | Commutativity | $u + v = v + u$ |
| 2 | Associativity | $(u + v) + w = u + (v + w)$ |
| 3 | Additive identity | There exists $\mathbf{0} \in V$ such that $v + \mathbf{0} = v$ for all $v \in V$ |
| 4 | Additive inverse | For each $v \in V$, there exists $-v \in V$ such that $v + (-v) = \mathbf{0}$ |
Axioms for scalar multiplication:
| # | Name | Statement |
|---|---|---|
| 5 | Multiplicative identity | $1 \cdot v = v$ |
| 6 | Compatibility | $\alpha(\beta v) = (\alpha\beta) v$ |
Distributivity (linking the two operations):
| # | Name | Statement |
|---|---|---|
| 7 | Scalar over vector sum | $\alpha(u + v) = \alpha u + \alpha v$ |
| 8 | Vector over scalar sum | $(\alpha + \beta) v = \alpha v + \beta v$ |
These eight axioms are the complete definition. Anything satisfying all eight is a vector space; anything failing even one is not. The axioms are minimal: each is independent of the others; removing any one produces a strictly weaker structure.
Notice what the axioms do and do not say:
- They say nothing about what the "vectors" look like. Arrows, polynomials, matrices, functions - all are equally valid.
- They say nothing about angles, lengths, or orthogonality. Those require additional structure (inner product).
- They say nothing about convergence. That requires a topology (norm or metric).
- They require closure under both operations: the result of adding two vectors must be back in $V$; scaling a vector must stay in $V$.
THE AXIOMS AS CLOSURE CONDITIONS
Axioms 1-4: addition is a well-behaved operation on V
V is an abelian group under +
Axioms 5-6: scalar multiplication interacts correctly with itself
Axioms 7-8: scalar multiplication distributes over both kinds of +
(over vector sums and over scalar sums)
Together: V behaves like "an R^n that we haven't yet coordinatised"
2.2 Immediate Consequences of the Axioms
The eight axioms imply several useful facts that are not axioms themselves. These are theorems - they must be proved from the axioms - but they follow quickly:
Uniqueness of the zero vector. If $\mathbf{0}$ and $\mathbf{0}'$ both satisfy Axiom 3, then:

$$\mathbf{0} = \mathbf{0} + \mathbf{0}' = \mathbf{0}'$$

The first equality uses $\mathbf{0}'$ as the identity; the second uses $\mathbf{0}$. So the zero vector is unique.

Uniqueness of additive inverses. If $v + w = \mathbf{0}$ and $v + w' = \mathbf{0}$, then:

$$w = w + \mathbf{0} = w + (v + w') = (w + v) + w' = \mathbf{0} + w' = w'$$

So the additive inverse is unique for each $v$.

$0 \cdot v = \mathbf{0}$ (the zero scalar times any vector = zero vector):

$$0v = (0 + 0)v = 0v + 0v$$

Subtract $0v$ from both sides (add the additive inverse of $0v$): $\mathbf{0} = 0v$.

$\alpha \cdot \mathbf{0} = \mathbf{0}$ (any scalar times the zero vector = zero vector):

$$\alpha\mathbf{0} = \alpha(\mathbf{0} + \mathbf{0}) = \alpha\mathbf{0} + \alpha\mathbf{0}$$

Subtract $\alpha\mathbf{0}$: $\mathbf{0} = \alpha\mathbf{0}$.

$(-1)v = -v$ (multiplying by $-1$ gives the additive inverse):

$$v + (-1)v = 1v + (-1)v = (1 + (-1))v = 0v = \mathbf{0}$$

So $(-1)v$ is the additive inverse of $v$, i.e., $(-1)v = -v$.

Cancellation law for scalars. If $\alpha v = \mathbf{0}$, then either $\alpha = 0$ or $v = \mathbf{0}$:

Proof: Suppose $\alpha \neq 0$. Then $\alpha$ has a multiplicative inverse in $\mathbb{R}$. Multiply both sides:

$$v = 1v = (\alpha^{-1}\alpha)v = \alpha^{-1}(\alpha v) = \alpha^{-1}\mathbf{0} = \mathbf{0}$$
This last fact is particularly useful in linear independence arguments: if a scalar multiple of a vector is zero, either the scalar is zero or the vector is zero - no third option.
2.3 Standard Examples of Vector Spaces
Understanding which objects form vector spaces - and why - is as important as knowing the axioms. Here are the key examples, from most concrete to most abstract.
$\mathbb{R}^n$ (Euclidean space)
- Vectors: $n$-tuples of real numbers $x = (x_1, \dots, x_n)$
- Addition: $(x_1, \dots, x_n) + (y_1, \dots, y_n) = (x_1 + y_1, \dots, x_n + y_n)$ (componentwise)
- Scalar multiplication: $\alpha(x_1, \dots, x_n) = (\alpha x_1, \dots, \alpha x_n)$ (componentwise)
- Zero vector: $(0, 0, \dots, 0)$
- Additive inverse: $-x = (-x_1, \dots, -x_n)$
- Dimension: $n$
This is the prototype. Every finite-dimensional real vector space is isomorphic to $\mathbb{R}^n$ for some $n$. All eight axioms follow directly from the arithmetic of real numbers applied componentwise. The AI embedding space $\mathbb{R}^d$ is exactly this.
$\mathbb{R}^{m \times n}$ (space of real $m \times n$ matrices)
- Vectors: $m \times n$ matrices with real entries
- Addition: entrywise addition $(A + B)_{ij} = A_{ij} + B_{ij}$
- Scalar multiplication: $(\alpha A)_{ij} = \alpha A_{ij}$
- Zero: the $m \times n$ zero matrix
- Dimension: $mn$ (each of the $mn$ entries is an independent coordinate)
This is the vector space of all weight matrices of a fixed shape. Gradient descent in a single layer's weight space is gradient descent in $\mathbb{R}^{m \times n}$. When LoRA writes $W + \Delta W$, it is performing vector addition in $\mathbb{R}^{m \times n}$. The learning rate times the gradient, $\eta \nabla_W L$, is a scalar multiplication in $\mathbb{R}^{m \times n}$.
$P_n$ (polynomials of degree $\le n$)
- Vectors: polynomials $p(x) = a_0 + a_1 x + \dots + a_n x^n$
- Addition: standard polynomial addition: add corresponding coefficients
- Scalar multiplication: $(\alpha p)(x) = \alpha a_0 + \alpha a_1 x + \dots + \alpha a_n x^n$
- Zero: the zero polynomial (all coefficients zero)
- Dimension: $n + 1$ (the coefficients $a_0, \dots, a_n$ are the coordinates)
The vectors here "look like" arrows in $\mathbb{R}^{n+1}$ (just replace each polynomial with its coefficient vector), but the polynomial structure is richer - you can evaluate at any point, differentiate, integrate, and compose. The vector space structure is the foundation; the polynomial structure is additional.
$P$ (all polynomials, any degree)
- Vectors: polynomials of any (finite) degree
- Operations: same as $P_n$
- Zero: zero polynomial
- Dimension: $\infty$ (countably infinite; basis $\{1, x, x^2, x^3, \dots\}$)
Note that the sum of two polynomials of degree $\le n$ is again a polynomial of degree $\le n$, so $P_n$ is a subspace of $P$. This is our first example of a subspace inside a larger space.
$C[a, b]$ (continuous functions on $[a, b]$)
- Vectors: continuous functions $f : [a, b] \to \mathbb{R}$
- Addition: $(f + g)(x) = f(x) + g(x)$ (pointwise)
- Scalar multiplication: $(\alpha f)(x) = \alpha f(x)$ (pointwise)
- Zero: the constant zero function
- Dimension: $\infty$ (uncountably infinite dimensional)
The sum of two continuous functions is continuous; a scalar multiple of a continuous function is continuous. All eight axioms follow from corresponding facts about real numbers applied pointwise. The space $C[a, b]$ is the natural setting for physics-informed neural networks (PINNs), neural ODEs, and any application where the model represents a continuous function.
$L^2[a, b]$ (square-integrable functions)
- Vectors: (equivalence classes of) functions $f$ with $\int_a^b f(x)^2 \, dx < \infty$
- Operations: pointwise, same as $C[a, b]$
- Inner product: $\langle f, g \rangle = \int_a^b f(x) g(x) \, dx$; induced norm $\|f\| = \sqrt{\langle f, f \rangle}$
- Dimension: $\infty$; a separable Hilbert space
$L^2$ is the correct setting for Fourier analysis (the Fourier basis is an orthonormal basis for $L^2$), for quantum mechanics (wavefunctions are unit vectors in $L^2$), and for kernel methods (the RKHS of a translation-invariant kernel is a subspace of $L^2$). Every finite-dimensional vector space with an inner product is a Hilbert space; $L^2$ is the prototypical infinite-dimensional Hilbert space.
$\ell^2$ (square-summable sequences)
- Vectors: infinite sequences $(x_1, x_2, x_3, \dots)$ with $\sum_{i=1}^{\infty} x_i^2 < \infty$
- Operations: componentwise
- Inner product: $\langle x, y \rangle = \sum_{i=1}^{\infty} x_i y_i$
- Dimension: $\infty$; a separable Hilbert space; isomorphic to $L^2$
The infinite-dimensional analogue of $\mathbb{R}^n$. Relevant as the theoretical limit of embedding spaces as vocabulary size $\to \infty$.
$\{\mathbf{0}\}$ (trivial vector space)
- The set containing only the zero vector
- Addition: $\mathbf{0} + \mathbf{0} = \mathbf{0}$; scalar multiplication: $\alpha \mathbf{0} = \mathbf{0}$
- The smallest vector space; dimension 0
- It is a subspace of every vector space
2.4 Non-Examples of Vector Spaces
Understanding which sets fail to be vector spaces - and precisely which axiom they violate - sharpens intuition and prevents common errors in AI applications.
$\mathbb{R}_{>0}$ (positive reals)
Fails closure under scalar multiplication and additive inverse: $(-1) \cdot 2 = -2 \notin \mathbb{R}_{>0}$. Also fails to contain the zero vector. This is why you cannot do linear algebra directly in probability space (where all entries must be positive).
$S^{n-1} = \{x \in \mathbb{R}^n : \|x\| = 1\}$ (unit sphere)
Fails closure under addition ($\|u + v\| \neq 1$ in general) and scalar multiplication ($\|2u\| = 2 \neq 1$). Fails to contain the zero vector. Normalised embedding vectors live on the unit sphere - a non-subspace. This is why arithmetic on normalised vectors (like averaging two unit vectors) is not straightforward; the result must be renormalised.
$\Delta^{n-1} = \{p \in \mathbb{R}^n : p_i \ge 0, \ \sum_i p_i = 1\}$ (probability simplex)
Fails closure under addition: $p + q$ has $\sum_i (p_i + q_i) = 2 \neq 1$. Fails to contain the zero vector. Fails scalar multiplication: $2p$ has $\sum_i 2p_i = 2 \neq 1$. The simplex is an affine subspace (it is a translate of a linear subspace), not a vector subspace.
This is a crucial AI pitfall: softmax outputs live in the interior of $\Delta^{n-1}$. Averaging two softmax outputs happens to stay on the simplex (it is a convex combination), but general vector arithmetic - sums, differences, extrapolations, scalings - does not, and averaging probabilities is not the same as averaging the underlying logits. Interpolation, averaging, and arithmetic should be done in logit space (pre-softmax), which is a vector space.
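A short NumPy sketch of the pitfall (toy logits, chosen purely for illustration): naive arithmetic on softmax outputs leaves the simplex, while the same arithmetic on logits is always safe.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Illustrative: averaging / combining in probability space vs. logit space.
z1 = np.array([2.0, 0.5, -1.0])
z2 = np.array([-1.0, 3.0, 0.0])
p1, p2 = softmax(z1), softmax(z2)

# Naive vector addition of two softmax outputs leaves the simplex:
print((p1 + p2).sum())                   # 2.0 -> not a probability vector

# Convex combinations stay on the simplex, but extrapolation does not:
print((2 * p1 - p2).min())               # negative entry -> not a probability vector

# Arithmetic is safe in logit space, which is a genuine vector space:
p_interp = softmax(0.5 * z1 + 0.5 * z2)
print(p_interp.sum(), (p_interp >= 0).all())    # 1.0 True
```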
$\mathbb{Z}^n$ (integer vectors)
Closed under addition (integer + integer = integer) but fails closure under scalar multiplication over $\mathbb{R}$: $\tfrac{1}{2}(1, 0, \dots, 0) \notin \mathbb{Z}^n$. The integers form a module over $\mathbb{Z}$ (a more general structure), but not a vector space over $\mathbb{R}$.
$\{x : Ax = b\}$ with $b \neq \mathbf{0}$ (inhomogeneous linear system)
Does not contain the zero vector ($A\mathbf{0} = \mathbf{0} \neq b$). Not closed under addition: if $Ax_1 = b$ and $Ax_2 = b$, then $A(x_1 + x_2) = 2b \neq b$. This is an affine subspace (a coset of the null space of $A$), not a vector subspace. The general solution to $Ax = b$ is $x = x_p + n$, where $x_p$ is any particular solution and $n \in \operatorname{null}(A)$ - a coset structure, not a subspace structure.
SUMMARY: WHY COMMON AI SETS FAIL TO BE VECTOR SPACES
Softmax outputs (interior of Delta):
    no zero vector | not closed under + | not closed under scaling
    -> Do arithmetic in logit space, not probability space

Normalised embeddings (unit sphere):
    no zero vector | not closed under + | not closed under scaling
    -> Spherical interpolation (slerp) for moving on the sphere

Integer weight tensors (quantised models):
    closed under + | not closed under R-scaling
    -> Module over Z; must dequantise before doing real linear algebra

Solution set Ax = b, b != 0:
    no zero vector | not closed under +
    -> Affine subspace = coset of null(A); add particular solution last
2.5 Vector Spaces over Other Fields
While we work over $\mathbb{R}$ throughout, the definition of a vector space makes sense over any field $F$. The most relevant alternatives for AI:
$\mathbb{C}^n$ (complex vector space). Each scalar is complex; each component is complex. Used in quantum computing, signal processing, and complex-valued Fourier analysis. Dimension $n$ over $\mathbb{C}$, but $2n$ over $\mathbb{R}$ (since each complex number requires 2 real numbers). The inner product becomes sesquilinear: $\langle u, v \rangle = \sum_i \overline{u_i} v_i$ (conjugate of first argument). Complex exponentials are the natural eigenfunctions of differentiation; Fourier analysis over $\mathbb{C}$ is cleaner than over $\mathbb{R}$.
$\mathbb{F}_2^n$ (binary vectors). Field $\mathbb{F}_2 = \{0, 1\}$ with arithmetic modulo 2: $1 + 1 = 0$. Addition = XOR; multiplication = AND. Used in error-correcting codes (Hamming codes, LDPC codes, turbo codes) and cryptography. All of linear algebra carries over: basis, span, null space, rank - all computable over $\mathbb{F}_2$ by the same row reduction algorithm, with arithmetic mod 2.
$\mathbb{F}_p^n$ (vectors over finite fields). For prime $p$, with arithmetic mod $p$. Used in coding theory, cryptography (elliptic curve cryptography over finite fields), and randomised algorithms. Reed-Solomon codes are linear codes over $\mathbb{F}_q$ for large $q$; the generator matrix spans the code as a subspace of $\mathbb{F}_q^n$.
Practical AI note. Transformers use real vector spaces (or approximately real, since floating-point arithmetic is finite-precision). Quantum transformers (theoretical) would use $\mathbb{C}^n$. Reliable communication protocols for distributed training use $\mathbb{F}_2$ codes for error correction. GPU hardware operates over an approximation of $\mathbb{R}$ with format-specific precision (BF16, FP8, INT4).
3. Subspaces
3.1 Definition and the Three Conditions
A subset $W \subseteq V$ is a subspace of $V$ if $W$ is itself a vector space under the same operations inherited from $V$.
Checking all eight axioms would be redundant. The operations themselves are inherited from $V$: the addition of two elements of $W$ uses the same addition formula as in $V$, and similarly for scalar multiplication. Because these operations are already defined and well-behaved in $V$, the axioms involving equalities (commutativity, associativity, distributivity) are automatically satisfied for elements of $W$ - they hold for all elements of $V$, so in particular they hold for elements of $W$.
What can fail is closure: the result of an operation on elements of $W$ might fall outside $W$. And the zero vector might not be in $W$. So only three conditions must be verified:
Subspace Test. A subset $W \subseteq V$ is a subspace of $V$ if and only if:
- Non-empty / contains zero: $\mathbf{0} \in W$
- Closed under addition: for all $u, v \in W$: $u + v \in W$
- Closed under scalar multiplication: for all $v \in W$ and $\alpha \in \mathbb{R}$: $\alpha v \in W$
Equivalently, conditions 2 and 3 can be combined into a single condition:
Combined condition: For all $u, v \in W$ and all $\alpha, \beta \in \mathbb{R}$: $\alpha u + \beta v \in W$.
This single condition says that $W$ is closed under all linear combinations of its elements. It is often the fastest way to verify that something is a subspace: show that every linear combination of elements of $W$ remains in $W$.
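A small numerical illustration of the combined condition (the candidate sets and sample counts are arbitrary choices): random linear combinations stay inside a hyperplane through the origin, but immediately leave the unit sphere.

```python
import numpy as np

# Illustrative numerical subspace test: closure under random linear combinations.
# W1 = plane {x in R^3 : x1 + x2 + x3 = 0} (a subspace); W2 = unit sphere (not one).
rng = np.random.default_rng(0)

def in_plane(x, tol=1e-10):
    return abs(x.sum()) < tol

def on_sphere(x, tol=1e-10):
    return abs(np.linalg.norm(x) - 1.0) < tol

def sample_plane():
    x = rng.standard_normal(3)
    return x - x.mean()                  # subtracting the mean puts x on the plane

def sample_sphere():
    x = rng.standard_normal(3)
    return x / np.linalg.norm(x)

def closed_under_combinations(sample, member, trials=1000):
    for _ in range(trials):
        u, v = sample(), sample()
        a, b = rng.standard_normal(2)
        if not member(a * u + b * v):
            return False
    return True

print(closed_under_combinations(sample_plane, in_plane))    # True
print(closed_under_combinations(sample_sphere, on_sphere))  # False
```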
Strategy for proving subspace:
- Show $W$ is non-empty (usually by showing $\mathbf{0} \in W$ directly)
- Take arbitrary $u, v \in W$ and arbitrary $\alpha, \beta \in \mathbb{R}$
- Compute $\alpha u + \beta v$ and verify it satisfies the defining property of $W$
Strategy for disproving subspace:
- Find a specific counterexample: either $\mathbf{0} \notin W$, or two elements whose sum is not in $W$, or an element whose scalar multiple is not in $W$.
3.2 Why These Three Conditions Suffice
Each condition handles a different potential failure:
Condition 1 (contains zero): Without $\mathbf{0} \in W$, $W$ has no additive identity, so Axiom 3 fails. More fundamentally: if $\mathbf{0} \notin W$, then $W$ cannot be a vector space under the same operations as $V$, because the additive identity is uniquely determined by the operations (we proved this in 2.2).
Note that showing $\mathbf{0} \in W$ also shows $W$ is non-empty, which is required for $W$ to be a set with meaningful structure. In practice, checking $\mathbf{0} \in W$ first is the quickest way to rule out non-subspaces: if the zero vector fails to satisfy the defining property of $W$, we are done.
Condition 2 (closed under addition): Without this, addition is not even a well-defined operation on $W$ - adding two "vectors" in $W$ might produce something outside $W$. If the operation leaves the set, the set cannot be a vector space under that operation.
Condition 3 (closed under scalar multiplication): Without this, scalar multiplication is not well-defined on $W$ - scaling a vector in $W$ might produce something outside $W$. This failure also implies Axiom 4 (additive inverse) fails: since $-v = (-1)v$ must be in $W$ for each $v \in W$, closure under scalar multiplication is required for inverses to exist.
The remaining axioms are inherited automatically. For example, commutativity: for any $u, v \in W$, we have $u + v = v + u$ because this equality holds in $V$ for all vectors, and $u, v$ are in particular vectors in $V$. The same argument applies to associativity and both distributivity laws. These properties are free.
3.3 Standard Subspace Examples
Lines through the origin in $\mathbb{R}^2$ and $\mathbb{R}^3$
$W = \{t v : t \in \mathbb{R}\}$ for a fixed non-zero vector $v$ - the line through the origin in the direction of $v$.
- Contains $\mathbf{0}$: set $t = 0$ -> $\mathbf{0} \in W$
- Closed under $+$: $t_1 v + t_2 v = (t_1 + t_2) v \in W$
- Closed under scaling: $\alpha(t v) = (\alpha t) v \in W$
Dimension: 1. Basis: $\{v\}$. This is a 1-dimensional subspace - a line through the origin.
Planes through the origin in $\mathbb{R}^3$
$W = \{(x, y, z) : ax + by + cz = 0\}$ for fixed $(a, b, c) \neq \mathbf{0}$ - a homogeneous linear equation.
- Contains $\mathbf{0}$: $a \cdot 0 + b \cdot 0 + c \cdot 0 = 0$
- Closed under $+$: if $u$ and $w$ both satisfy the equation, then so does $u + w$ (by linearity of the equation)
- Closed under scaling: if $u$ satisfies the equation, then so does $\alpha u$
Dimension: 2 (one homogeneous linear constraint on 3 variables leaves 2 free variables).
Null space of a matrix (solution set of a homogeneous system)
$\operatorname{null}(A) = \{x \in \mathbb{R}^n : Ax = \mathbf{0}\}$ for $A \in \mathbb{R}^{m \times n}$.
- Contains $\mathbf{0}$: $A\mathbf{0} = \mathbf{0}$
- Closed under $+$: $A(x + y) = Ax + Ay = \mathbf{0} + \mathbf{0} = \mathbf{0}$
- Closed under scaling: $A(\alpha x) = \alpha Ax = \alpha\mathbf{0} = \mathbf{0}$
The null space is always a subspace of $\mathbb{R}^n$, regardless of $A$. By the Rank-Nullity Theorem (6.7): $\dim \operatorname{null}(A) = n - \operatorname{rank}(A)$.
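A minimal NumPy sketch (random rank-deficient matrix, illustrative sizes): an orthonormal basis of $\operatorname{null}(A)$ read off from the SVD, together with numerical checks of the subspace properties.

```python
import numpy as np

# Illustrative: compute a basis of null(A) from the SVD and verify it numerically.
rng = np.random.default_rng(0)
m, n, r = 4, 6, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank 3 by construction

U, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
N = Vt[rank:].T                # columns span null(A); shape (n, n - rank)

print(rank, N.shape[1])                          # 3 3  -> rank + nullity = n = 6
print(np.allclose(A @ N, 0))                     # True: every basis vector is killed by A

# closure: any linear combination of null-space vectors is still in null(A)
c = rng.standard_normal(N.shape[1])
print(np.allclose(A @ (N @ c), 0))               # True
```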
Column space of a matrix
$\operatorname{col}(A) = \{Ax : x \in \mathbb{R}^n\}$ for $A \in \mathbb{R}^{m \times n}$.
- Contains $\mathbf{0}$: $A\mathbf{0} = \mathbf{0}$
- Closed under $+$: $Ax + Ay = A(x + y)$
- Closed under scaling: $\alpha(Ax) = A(\alpha x)$
The column space is always a subspace of $\mathbb{R}^m$. Its dimension equals $\operatorname{rank}(A)$.
Symmetric matrices
$\operatorname{Sym}_n = \{A \in \mathbb{R}^{n \times n} : A^T = A\}$.
- Contains $\mathbf{0}$: the zero matrix is symmetric
- Closed under $+$: $(A + B)^T = A^T + B^T = A + B$
- Closed under scaling: $(\alpha A)^T = \alpha A^T = \alpha A$
Dimension: $n(n+1)/2$. A basis consists of all matrices $E_{ii}$ (diagonal) and $E_{ij} + E_{ji}$ for $i < j$ (off-diagonal symmetric pairs), where $E_{ij}$ has a 1 in position $(i, j)$ and 0 elsewhere. Symmetric matrices appear constantly in AI: covariance matrices, Gram matrices, Hessians, and attention score matrices (in some formulations) are all symmetric.
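A short NumPy sketch (with $n = 4$, chosen arbitrarily) that builds exactly this basis, confirms the count $n(n+1)/2$, and checks closure under linear combinations:

```python
import numpy as np
from itertools import combinations

# Illustrative: the basis of Sym_n described above, for n = 4.
n = 4

def E(i, j):
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

basis = [E(i, i) for i in range(n)]                          # diagonal part
basis += [E(i, j) + E(j, i) for i, j in combinations(range(n), 2)]

print(len(basis), n * (n + 1) // 2)                          # 10 10

# independence: flatten each matrix into a vector and check the rank
M = np.stack([B.ravel() for B in basis])                     # shape (10, 16)
print(np.linalg.matrix_rank(M) == len(basis))                # True

# closure: a random linear combination of symmetric matrices is symmetric
coeffs = np.random.default_rng(0).standard_normal(len(basis))
S = sum(c * B for c, B in zip(coeffs, basis))
print(np.allclose(S, S.T))                                   # True
```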
Traceless matrices
$\{A \in \mathbb{R}^{n \times n} : \operatorname{tr}(A) = 0\}$.
- Contains $\mathbf{0}$: $\operatorname{tr}(\mathbf{0}) = 0$
- Closed under $+$: $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B) = 0$
- Closed under scaling: $\operatorname{tr}(\alpha A) = \alpha \operatorname{tr}(A) = 0$
Dimension: $n^2 - 1$ (one linear constraint on $n^2$ free entries). Relevant to normalisation in transformers: the centering step of LayerNorm projects activations onto the mean-zero subspace, the vector analogue of tracelessness (coordinates summing to zero).
Upper triangular matrices
$\{A \in \mathbb{R}^{n \times n} : A_{ij} = 0 \text{ for } i > j\}$.
The defining condition $A_{ij} = 0$ for $i > j$ is a collection of linear constraints; closure under $+$ and scaling follows because the zero entries stay zero under linear combinations. Dimension: $n(n+1)/2$. The space of upper triangular matrices is a subspace of $\mathbb{R}^{n \times n}$.
Polynomials vanishing at a point
$W = \{p \in P_n : p(c) = 0\}$ - polynomials of degree $\le n$ that vanish at a fixed point $c$.
- Contains $\mathbf{0}$: the zero polynomial satisfies $0(c) = 0$
- Closed: if $p(c) = 0$ and $q(c) = 0$, then $(\alpha p + \beta q)(c) = \alpha p(c) + \beta q(c) = 0$
Dimension: $n$ (one linear constraint on $n + 1$ coefficients). A basis: $\{(x - c), (x - c)^2, \dots, (x - c)^n\}$.
3.4 Non-Subspace Examples
For each of the following, we identify the specific condition that fails:
Affine solution set $\{x : Ax = b\}$ with $b \neq \mathbf{0}$.
Condition 1 fails: $A\mathbf{0} = \mathbf{0} \neq b$, so $\mathbf{0} \notin W$. Additionally, if $x_1, x_2$ are both solutions, $A(x_1 + x_2) = 2b \neq b$, so condition 2 also fails. This set is a coset of $\operatorname{null}(A)$: the general solution is $x_p + \operatorname{null}(A)$ for any particular solution $x_p$.
Unit ball $\{x \in \mathbb{R}^n : \|x\| \le 1\}$.
Condition 1 holds: $\|\mathbf{0}\| = 0 \le 1$. Condition 3 fails: take $v$ with $\|v\| = 1$ and $\alpha = 2$: then $\|2v\| = 2 > 1$, so $2v \notin W$. The unit ball is bounded; a subspace other than $\{\mathbf{0}\}$ is never bounded (it contains all scalar multiples of its vectors, so it extends to infinity in every direction it contains).
First quadrant $\{(x, y) \in \mathbb{R}^2 : x \ge 0, \ y \ge 0\}$.
Conditions 1 and 2 hold: $\mathbf{0}$ is in the set; the sum of non-negative vectors is non-negative. Condition 3 fails: $(-1)(1, 1) = (-1, -1)$ is not in the set. A subspace must be closed under all scalars, including negative ones - which forces it to be symmetric about the origin.
Nonzero vectors $\mathbb{R}^n \setminus \{\mathbf{0}\}$.
Condition 1 fails immediately: $\mathbf{0}$ is excluded by definition. Since the zero vector is always required, this cannot be a subspace.
The probability simplex $\Delta^{n-1}$ (softmax outputs).
All three conditions fail: no zero vector; not closed under $+$; not closed under scaling. The AI implication: you cannot take arbitrary linear combinations (sums, differences, extrapolations) of probability distributions and expect to get another valid probability distribution; only convex combinations stay on the simplex. Arithmetic must happen in logit space (pre-softmax), where the vector space structure is intact.
QUICK NON-SUBSPACE CHECK
Given a candidate set W, the fastest refutation is usually:
1. Check 0 in W - if the zero vector fails the defining property,
stop immediately. W is not a subspace.
2. Check negative scalar - find v in W and test -v in W.
This catches all sets that require positivity or a fixed norm.
3. Check sum of two "extreme" elements - if W has a boundary
condition, vectors near the boundary often sum outside W.
3.5 The Two Trivial Subspaces
Every vector space $V$ has exactly two subspaces that are always present, regardless of what $V$ is:
- $\{\mathbf{0}\}$ - the trivial subspace, containing only the zero vector. Dimension 0.
- $V$ itself - the whole space. Dimension $\dim V$.
Any subspace $W$ satisfies $\{\mathbf{0}\} \subseteq W \subseteq V$.
A proper subspace is any subspace $W \neq V$ (not the whole space). A non-trivial subspace is any subspace $W \neq \{\mathbf{0}\}$ (not just the zero vector). The interesting subspaces are the non-trivial proper ones - the lines, planes, hyperplanes, and higher-dimensional flats that live inside $V$ without being all of $V$.
For AI, interpreting these extremes:
- A layer that writes only the zero vector to the residual stream corresponds to the trivial subspace - it contributes nothing.
- A layer with full rank that can write anything to $\mathbb{R}^{d_{\text{model}}}$ corresponds to the full space - it has maximum expressive power.
- Most layers operate in a proper non-trivial subspace: they can produce outputs in an $r$-dimensional subspace of $\mathbb{R}^{d_{\text{model}}}$ where $0 < r < d_{\text{model}}$.
4. Span and Linear Combinations
4.1 Linear Combinations
A linear combination of vectors $v_1, v_2, \dots, v_k \in V$ is any vector of the form:

$$\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k$$

where $\alpha_1, \dots, \alpha_k \in \mathbb{R}$ are arbitrary scalars called the coefficients of the linear combination.
A linear combination uses only the two vector space operations: scalar multiplication (to produce each $\alpha_i v_i$) and addition (to sum them). By the closure axioms, any linear combination of vectors in $V$ produces another vector in $V$. By the closure conditions for subspaces, any linear combination of vectors in a subspace $W$ produces another vector in $W$.
Special cases of linear combinations:
| Coefficients | Name | Meaning |
|---|---|---|
| Any $\alpha_i \in \mathbb{R}$ | Linear combination | General; unrestricted |
| $\sum_i \alpha_i = 1$ | Affine combination | Preserves "position" (stays in affine hull) |
| $\sum_i \alpha_i = 1$ and $\alpha_i \ge 0$ | Convex combination | Stays in convex hull (interpolation) |
| $\alpha_i \in \{0, 1\}$ | Subset sum | Used in combinatorics |
For AI, the most important special case is the unconstrained linear combination - it defines the span of a set of vectors, which in turn defines the reachable outputs of a linear layer.
4.2 The Span
The span of a set $S = \{v_1, \dots, v_k\} \subseteq V$ is the set of all linear combinations of vectors in $S$:

$$\operatorname{span}(S) = \{\alpha_1 v_1 + \dots + \alpha_k v_k : \alpha_1, \dots, \alpha_k \in \mathbb{R}\}$$

Theorem: $\operatorname{span}(S)$ is always a subspace of $V$.
Proof:
- Contains $\mathbf{0}$: set all $\alpha_i = 0$ -> $\mathbf{0} \in \operatorname{span}(S)$
- Closed under addition: if $u = \sum_i \alpha_i v_i$ and $w = \sum_i \beta_i v_i$, then $u + w = \sum_i (\alpha_i + \beta_i) v_i \in \operatorname{span}(S)$
- Closed under scalar multiplication: if $u = \sum_i \alpha_i v_i$ then $\gamma u = \sum_i (\gamma \alpha_i) v_i \in \operatorname{span}(S)$
Minimality: $\operatorname{span}(S)$ is the smallest subspace containing all vectors in $S$. Any subspace that contains all the $v_i$ must also contain all their linear combinations (by closure), and hence must contain $\operatorname{span}(S)$.
Conventions:
- $\operatorname{span}(\emptyset) = \{\mathbf{0}\}$ - the span of the empty set is the trivial subspace. (An empty sum equals zero.)
- The span grows as $S$ grows: if $S \subseteq T$ then $\operatorname{span}(S) \subseteq \operatorname{span}(T)$.
- Adding a vector already in $\operatorname{span}(S)$ does not increase the span.
4.3 Geometric Pictures of Span
Geometric intuition for span is crucial and carries directly into AI:
GEOMETRIC MEANING OF SPAN
span{v} (one non-zero vector in R^n):
A line through the origin in the direction of v.
All scalar multiples of v: {alpha*v : alpha in R}.
Always a 1-dimensional subspace.
span{u, v} (two non-parallel vectors in R^n):
A plane through the origin containing both directions.
All combinations alpha*u + beta*v: a 2-dimensional subspace.
(If u and v ARE parallel: same as span{u} - 1-dimensional.)
span{u, v, w} (in R^3):
If u, v, w are coplanar -> a plane (2D subspace).
If not coplanar -> all of R^3 (3D subspace = whole space).
span{v_1, ..., v_k} (in R^n):
Dimension = rank of the matrix [v_1 | v_2 | ... | v_k].
The "flat hull" of the vectors, passing through the origin.
It is the column space of the matrix with columns v_1, ..., v_k.
The key dimension count. Given $k$ vectors in $\mathbb{R}^n$:
- If $k < n$: the span has dimension at most $k$ (a proper subspace of $\mathbb{R}^n$)
- If $k = n$: the span has dimension $n$ iff the vectors are linearly independent (iff the matrix with these columns is invertible)
- If $k > n$: the span has dimension at most $n$; the vectors must be linearly dependent
4.4 Spanning Sets
A set $S$ spans $V$ (or is a spanning set for $V$) if $\operatorname{span}(S) = V$ - equivalently, if every vector in $V$ can be expressed as a linear combination of vectors in $S$.
Standard examples:
- The standard basis $\{e_1, \dots, e_n\}$ spans $\mathbb{R}^n$ (by definition of coordinates).
- Adding any extra vector to a spanning set still spans - but the set is now redundant; the added vector was in the span already.
- Any two non-parallel vectors span $\mathbb{R}^2$: every $(x, y)$ can be written as a combination of them.
- The monomials $\{1, x, x^2, \dots\}$ span $P$ (every polynomial is a linear combination of monomials).
- The columns of a matrix $A \in \mathbb{R}^{m \times n}$ span $\operatorname{col}(A)$; they span all of $\mathbb{R}^m$ iff $A$ has full row rank ($\operatorname{rank}(A) = m$).
AI application. The columns of a weight matrix $W \in \mathbb{R}^{m \times n}$ span $\operatorname{col}(W)$. If $\operatorname{rank}(W) = r < m$, then only an $r$-dimensional subspace of $\mathbb{R}^m$ is reachable: regardless of the input, the layer's output can never leave the $r$-dimensional column space. Outputs in the orthogonal complement $\operatorname{col}(W)^{\perp}$ are permanently inaccessible from this layer. They must come from other components (residual connections, other layers).
A spanning set need not be a basis - it may contain redundant vectors. The minimal spanning sets are the bases (Section 6).
4.5 Span in Terms of Matrices
Given vectors $v_1, \dots, v_k \in \mathbb{R}^n$, form the matrix:

$$A = [\, v_1 \mid v_2 \mid \dots \mid v_k \,] \in \mathbb{R}^{n \times k}$$

Then:

$$\operatorname{span}\{v_1, \dots, v_k\} = \operatorname{col}(A) = \{Ac : c \in \mathbb{R}^k\}$$

This connects span to:
- Column space: the span of the vectors = the column space of the matrix they form
- Rank: $\dim \operatorname{span}\{v_1, \dots, v_k\} = \operatorname{rank}(A)$
- Linear system solvability: $b \in \operatorname{span}\{v_1, \dots, v_k\}$ if and only if the system $Ac = b$ is consistent, which by the Rouché-Capelli theorem holds if and only if $\operatorname{rank}(A) = \operatorname{rank}([A \mid b])$.
Algorithm for membership testing. To check whether $b \in \operatorname{span}\{v_1, \dots, v_k\}$:
- Form the augmented matrix $[A \mid b]$
- Row reduce to RREF
- If no row of the form $[\,0 \ \cdots \ 0 \mid c\,]$ with $c \neq 0$ appears: $b \in \operatorname{span}$
- If such a row appears: $b \notin \operatorname{span}$
If $b \in \operatorname{span}$, the RREF also yields the coordinates expressing $b$ as a linear combination (see the sketch below).
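A minimal NumPy version of the membership test (the vectors are random, purely for illustration), using the rank comparison rather than an explicit RREF:

```python
import numpy as np

# Illustrative membership test: is b in span{v1, v2}?  Compare rank(A) with
# rank([A | b]) - the Rouché-Capelli criterion.
rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(4), rng.standard_normal(4)
A = np.column_stack([v1, v2])

b_in  = 2.0 * v1 - 3.0 * v2          # in the span by construction
b_out = rng.standard_normal(4)       # almost surely not in a 2D subspace of R^4

def in_span(A, b, tol=1e-10):
    return np.linalg.matrix_rank(np.column_stack([A, b]), tol) == np.linalg.matrix_rank(A, tol)

print(in_span(A, b_in), in_span(A, b_out))       # True False

# if b is in the span, least squares recovers the coefficients exactly
coeffs, *_ = np.linalg.lstsq(A, b_in, rcond=None)
print(np.round(coeffs, 6))                       # [ 2. -3.]
```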
5. Linear Independence
5.1 Definition of Linear Independence
Vectors $v_1, \dots, v_k$ are linearly independent if the only linear combination that equals the zero vector is the trivial one - all coefficients zero:

$$\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k = \mathbf{0} \quad \Longrightarrow \quad \alpha_1 = \alpha_2 = \dots = \alpha_k = 0$$

Equivalently: no vector in the set is a linear combination of the others; each vector contributes a genuinely new direction that cannot be reached from the others.
The vectors are linearly dependent if the negation holds: there exist coefficients $\alpha_1, \dots, \alpha_k$, not all zero, such that $\alpha_1 v_1 + \dots + \alpha_k v_k = \mathbf{0}$. In this case, at least one vector is redundant - it can be expressed as a linear combination of the others.
The contrast is worth making explicit:
INDEPENDENT vs DEPENDENT
Independent: each vector adds a new direction.
The set is "lean" - nothing is redundant.
Remove any one vector: the span strictly shrinks.
dim(span) = k (maximum possible given k vectors)
Dependent: at least one vector is a combination of the others.
The set contains redundancy.
Remove the dependent vector: span unchanged.
dim(span) < k (less than the number of vectors)
Why it matters. Linear independence is what makes a basis a basis. Without it, some vectors in a spanning set are superfluous - they do not add to the span. A basis is the most efficient spanning set: independent and spanning.
5.2 The Dependence Relation
If $v_1, \dots, v_k$ are linearly dependent, there exist coefficients $\alpha_1, \dots, \alpha_k$ (not all zero) with $\sum_i \alpha_i v_i = \mathbf{0}$. Let $j$ be any index with $\alpha_j \neq 0$. Then:

$$v_j = -\frac{1}{\alpha_j} \sum_{i \neq j} \alpha_i v_i$$

So $v_j$ is a linear combination of the other vectors. Consequently, removing $v_j$ from the set does not reduce the span:

$$\operatorname{span}\{v_1, \dots, v_k\} = \operatorname{span}\{v_1, \dots, \widehat{v_j}, \dots, v_k\}$$

where $\widehat{v_j}$ means $v_j$ is omitted. This is the operational meaning of dependence: you can identify and remove dependent vectors without losing span. Row reduction does exactly this.
5.3 Checking Linear Independence
Matrix method (universal).
Form the matrix $A = [\, v_1 \mid v_2 \mid \dots \mid v_k \,]$ and row reduce to RREF.
- Linearly independent iff every column of the RREF has a pivot iff $\operatorname{rank}(A) = k$ iff $\operatorname{null}(A) = \{\mathbf{0}\}$
- Linearly dependent iff some column has no pivot iff $\operatorname{rank}(A) < k$ iff $\operatorname{null}(A) \neq \{\mathbf{0}\}$
When dependent, the RREF reveals which vectors are dependent and expresses them in terms of the pivot columns.
Determinant test (square matrices only, $k = n$).
$v_1, \dots, v_n$ are linearly independent iff $\det [\, v_1 \mid \dots \mid v_n \,] \neq 0$.
SVD / singular value test (numerical).
$\sigma_{\min}(A) > \epsilon$ for some tolerance $\epsilon$ iff the columns are linearly independent to numerical precision $\epsilon$. The smallest singular value $\sigma_{\min}$ measures "how far from dependent" the columns are. When $\sigma_{\min} \approx 0$, the columns are nearly dependent (nearly lying in a lower-dimensional subspace).
Gram matrix test.
The Gram matrix is $G = A^T A$ with $G_{ij} = \langle v_i, v_j \rangle$. The columns are linearly independent iff $G$ is positive definite (all eigenvalues $> 0$) iff $\det(G) \neq 0$.
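The following NumPy sketch (random vectors, illustrative sizes) runs the rank, SVD, and Gram-matrix tests side by side on an independent set and on a deliberately dependent one, with the determinant test for the square case:

```python
import numpy as np

# Illustrative: the independence tests above applied to the same columns.
rng = np.random.default_rng(0)
A_indep = rng.standard_normal((5, 3))                 # random columns: independent a.s.
A_dep = A_indep.copy()
A_dep[:, 2] = 2 * A_dep[:, 0] - A_dep[:, 1]           # force a dependence

for name, A in [("independent", A_indep), ("dependent", A_dep)]:
    rank_ok = np.linalg.matrix_rank(A) == A.shape[1]          # matrix / rank test
    sigma_min = np.linalg.svd(A, compute_uv=False).min()      # SVD test
    G = A.T @ A                                               # Gram matrix test
    gram_ok = np.linalg.eigvalsh(G).min() > 1e-10
    print(name, rank_ok, round(float(sigma_min), 6), gram_ok)

# Determinant test (square case only): three vectors stacked as columns of a 3x3 matrix
B = rng.standard_normal((3, 3))
print(abs(np.linalg.det(B)) > 1e-12)                          # True: independent
```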
5.4 Key Properties
Any set containing $\mathbf{0}$ is linearly dependent.
$1 \cdot \mathbf{0} + 0 \cdot v_2 + \dots + 0 \cdot v_k = \mathbf{0}$ - a non-trivial combination. So $\mathbf{0}$ is always dependent on any other set of vectors. This is consistent with the subspace requirement: $\mathbf{0}$ carries no directional information.
A single non-zero vector is always linearly independent.
$\alpha v = \mathbf{0}$ with $v \neq \mathbf{0}$ forces $\alpha = 0$ (by the cancellation law proved in 2.2). So $\{v\}$ is independent whenever $v \neq \mathbf{0}$.
Two vectors are linearly dependent iff one is a scalar multiple of the other.
Suppose $\alpha u + \beta v = \mathbf{0}$ with $(\alpha, \beta) \neq (0, 0)$. If $\alpha \neq 0$: $u = -(\beta/\alpha) v$. If $\beta \neq 0$: $v = -(\alpha/\beta) u$. So dependence of two vectors = one is a scalar multiple of the other = they point in the same (or opposite) direction.
More vectors than dimensions forces dependence.
If $k > n$ for $k$ vectors in $\mathbb{R}^n$, they are always linearly dependent. The $n \times k$ matrix $A = [\, v_1 \mid \dots \mid v_k \,]$ has rank at most $n < k$; by rank-nullity, $\operatorname{null}(A)$ is non-trivial, giving a non-trivial dependence relation.
This has a profound implication: in $\mathbb{R}^n$, you cannot have more than $n$ linearly independent vectors. In a $d$-dimensional embedding space, there are at most $d$ independent directions. Any collection of more than $d$ feature vectors must be linearly dependent.
Subsets of independent sets are independent.
If $v_1, \dots, v_k$ are linearly independent, so is every subset. (Removing vectors from an independent set cannot introduce a new dependence - the remaining vectors were already independent when part of the larger set.)
Supersets of dependent sets are dependent.
If $v_1, \dots, v_k$ are linearly dependent, so is every superset. (The same non-trivial dependence relation, with zero coefficients for the added vectors, still witnesses dependence.)
5.5 Linear Independence in AI Contexts
Attention heads. If the attention heads write to linearly independent subspaces of $\mathbb{R}^{d_{\text{model}}}$ (their OV circuits have mutually orthogonal column spaces), then each head provides genuinely independent information. Dependent heads write to overlapping subspaces; they interfere and are candidates for pruning. Head redundancy is exactly the linear dependence of the subspaces they write to.
The superposition hypothesis. Elhage et al. (2022) observe that large language models appear to represent more features than the embedding dimension allows. Since more than $d$ linearly independent directions cannot exist in $\mathbb{R}^d$, the features must be stored as linearly dependent (non-orthogonal) directions. This is the superposition hypothesis: features are packed into the embedding space at densities exceeding the dimension, using non-orthogonal directions. Each feature "leaks" into the directions of nearby features - a phenomenon called polysemanticity. Linear independence is the threshold: up to $d$ features can be orthogonal and non-interfering; beyond $d$, interference is inevitable.
LoRA rank selection. In LoRA, the update is $\Delta W = BA$ with $B \in \mathbb{R}^{m \times r}$, $A \in \mathbb{R}^{r \times n}$. The rank of $\Delta W$ is at most the number of linearly independent columns of $B$ (equivalently, linearly independent rows of $A$). If the columns of $B$ are linearly dependent, the effective rank is less than $r$; some of the rank budget is wasted. Choosing $r$ large helps only if the corresponding columns of $B$ actually end up spanning an $r$-dimensional space after training.
Gradient directions across batches. If gradients from different batches are linearly independent, each batch provides new information about the loss landscape. If gradients are nearly parallel (nearly dependent), the batches are redundant - they are teaching the model the same thing. Gradient diversity, measured by the rank or effective dimension of the gradient matrix $[\, g_1 \mid g_2 \mid \dots \mid g_T \,]$, is a measure of how much information each batch contributes.
Feature directions in representation learning. Each probing direction in an embedding space is a vector in $\mathbb{R}^d$. For concepts to be probed without interference, their associated directions should be linearly independent (ideally orthogonal). The maximum number of perfectly non-interfering linear features is $d$. Beyond that, interference is structural - a consequence of linear independence breaking down.
6. Basis and Dimension
6.1 Definition of a Basis
A basis for a vector space $V$ is a set $B = \{b_1, \dots, b_n\} \subseteq V$ satisfying two conditions simultaneously:
- Linearly independent: the vectors $b_1, \dots, b_n$ are linearly independent
- Spans $V$: every vector in $V$ is a linear combination of the $b_i$
A basis is the most efficient possible spanning set: it spans with no redundancy. Equivalently:
- A basis is a minimal spanning set: remove any single vector and the set no longer spans
- A basis is a maximal independent set: add any vector from $V$ and the set becomes linearly dependent
These three characterisations - (independent + spanning), (minimal spanning), (maximal independent) - are all equivalent. Each one defines the same objects.
THE ROLE OF A BASIS
A basis gives V a coordinate system.
Once you fix a basis {b_1, ..., b_n} for V:
- Every vector v in V corresponds to a unique n-tuple (alpha_1, ..., alpha_n)
- The n-tuple is the vector's coordinates in this basis
- V becomes "identified" with R^n via this correspondence
Different bases = different coordinate systems for the same space.
The space itself is unchanged; only the coordinatisation differs.
In AI: the "basis" of the residual stream is not canonical.
Attention heads and MLP layers operate in their own "preferred"
bases, related to the standard basis by learned rotations.
6.2 The Unique Representation Theorem
Theorem. If $B = \{b_1, \dots, b_n\}$ is a basis for $V$, then every vector $v \in V$ has a unique representation:

$$v = \alpha_1 b_1 + \alpha_2 b_2 + \dots + \alpha_n b_n$$

The scalars $\alpha_1, \dots, \alpha_n$ are called the coordinates (or components) of $v$ with respect to basis $B$, written $[v]_B = (\alpha_1, \dots, \alpha_n)$.
Proof of uniqueness. Suppose $v = \sum_i \alpha_i b_i$ and $v = \sum_i \beta_i b_i$. Subtract:

$$\mathbf{0} = \sum_i (\alpha_i - \beta_i) b_i$$

By linear independence of the basis vectors, all coefficients are zero: $\alpha_i - \beta_i = 0$, so $\alpha_i = \beta_i$ for all $i$.
Why uniqueness matters. Without uniqueness, coordinates would be ambiguous - the same vector could have multiple different "addresses" in the same coordinate system. The combination of independence (ensures uniqueness) and spanning (ensures existence) is precisely what makes a basis the right tool for establishing a coordinate system.
The isomorphism. The unique representation theorem says that every $n$-dimensional vector space $V$ is isomorphic to $\mathbb{R}^n$: the map $v \mapsto [v]_B$ is a bijection that preserves both operations (addition and scalar multiplication). This is why working in coordinates is always valid: the coordinate map translates the abstract vector space structure faithfully into $\mathbb{R}^n$.
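A tiny NumPy sketch of the coordinate map for a hand-picked (illustrative) non-standard basis of $\mathbb{R}^3$: solving $Bc = v$ produces the unique coordinates $[v]_B$.

```python
import numpy as np

# Illustrative: the coordinate map v -> [v]_B for a non-standard basis of R^3.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([0.0, 1.0, 1.0])
B = np.column_stack([b1, b2, b3])       # invertible, so {b1, b2, b3} is a basis

v = np.array([2.0, 3.0, 5.0])
coords = np.linalg.solve(B, v)          # [v]_B : existence and uniqueness in one call
print(coords)                           # [ 4. -2.  5.]
print(np.allclose(coords[0] * b1 + coords[1] * b2 + coords[2] * b3, v))   # True
```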
6.3 Dimension
Theorem. All bases for a given vector space $V$ have the same number of vectors. This number is the dimension of $V$:

$$\dim V = \text{the number of vectors in any basis for } V$$

Proof sketch. Suppose $B = \{b_1, \dots, b_n\}$ and $C = \{c_1, \dots, c_m\}$ are both bases for $V$. Since $B$ spans $V$ and $C$ is independent, by the Steinitz Exchange Lemma (or the Replacement Theorem), we must have $m \le n$. By symmetry, $n \le m$. Hence $m = n$.
Dimension is the fundamental measure of the "size" of a vector space - the number of independent directions it contains. It is an intrinsic property of the space, independent of any particular basis.
Dimensions of standard spaces:
| Space | Dimension | Basis |
|---|---|---|
| $\mathbb{R}^n$ (standard) | $n$ | $\{e_1, \dots, e_n\}$ |
| Symmetric $n \times n$ matrices | $n(n+1)/2$ | $\{E_{ii}\} \cup \{E_{ij} + E_{ji} : i < j\}$ |
| Skew-symmetric $n \times n$ matrices | $n(n-1)/2$ | $\{E_{ij} - E_{ji} : i < j\}$ |
| Diagonal $n \times n$ matrices | $n$ | $\{E_{11}, \dots, E_{nn}\}$ |
| $P$, $\ell^2$, $L^2$ | $\infty$ | Countably infinite bases exist |
Dimension inequalities. For a subspace $W \subseteq V$:
- $\dim W \le \dim V$
- $W \subseteq V$ and $\dim W = \dim V$ together imply $W = V$ (in finite dimensions)
This last point is crucial: if you find a subspace with the same dimension as the whole space, it must equal the whole space. In particular, a subspace of $\mathbb{R}^n$ with dimension $n$ is all of $\mathbb{R}^n$.
6.4 Standard Bases
Standard basis for $\mathbb{R}^n$:

$$e_1 = (1, 0, \dots, 0), \quad e_2 = (0, 1, 0, \dots, 0), \quad \dots, \quad e_n = (0, \dots, 0, 1)$$

The coordinates of any vector in the standard basis are simply its components: $[v]_{\text{std}} = (v_1, \dots, v_n)$. This is why the standard basis is the "natural" choice for $\mathbb{R}^n$ - coordinates and components coincide.
Standard basis for $\mathbb{R}^{m \times n}$:
$\{E_{ij} : 1 \le i \le m, \ 1 \le j \le n\}$, where $E_{ij}$ is the matrix with a 1 in position $(i, j)$ and 0 elsewhere. There are $mn$ such matrices, and they form a basis for $\mathbb{R}^{m \times n}$. The coordinate of a matrix $A$ in this basis is simply $A_{ij}$.
Standard basis for $P_n$:
$\{1, x, x^2, \dots, x^n\}$ - the monomials. The coordinates of $p(x) = a_0 + a_1 x + \dots + a_n x^n$ are its coefficients $(a_0, a_1, \dots, a_n)$.
Fourier basis for $L^2[0, 2\pi]$:
$\left\{\tfrac{1}{\sqrt{2\pi}}\right\} \cup \left\{\tfrac{1}{\sqrt{\pi}}\cos(nx), \ \tfrac{1}{\sqrt{\pi}}\sin(nx) : n \ge 1\right\}$ - an orthonormal basis. The coordinates of $f$ in this basis are the Fourier coefficients. This is the infinite-dimensional analogue of an orthonormal basis in $\mathbb{R}^n$.
Eigenbasis. For a linear map $T : \mathbb{R}^n \to \mathbb{R}^n$ with $n$ linearly independent eigenvectors $v_1, \dots, v_n$, this set forms a basis called the eigenbasis of $T$. In the eigenbasis, $T$ acts diagonally: $T v_i = \lambda_i v_i$. This diagonalises $T$, making its action transparent. PCA uses the eigenbasis of the covariance matrix; the action of the covariance matrix in its eigenbasis is pure scaling by eigenvalues (variances).
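A short NumPy sketch (synthetic data with made-up variances): the eigenbasis of a covariance matrix diagonalises it, and the matrix acts on each eigenvector by pure scaling.

```python
import numpy as np

# Illustrative: the eigenbasis of a covariance matrix (the basis used by PCA).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.2])   # synthetic data
X = X - X.mean(axis=0)
C = X.T @ X / (len(X) - 1)                                      # covariance matrix

eigvals, V = np.linalg.eigh(C)            # columns of V = orthonormal eigenbasis
print(np.allclose(C, V @ np.diag(eigvals) @ V.T))               # True: diagonalised

# action of C on an eigenvector is pure scaling by its eigenvalue (a variance)
i = 2
print(np.allclose(C @ V[:, i], eigvals[i] * V[:, i]))           # True
```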
6.5 Change of Basis
The same vector can be described by different coordinate tuples in different bases. Understanding how coordinates transform under a change of basis is essential for working in multiple representations simultaneously - which is exactly what happens in a transformer, where the residual stream has a "standard" basis but each component may operate in a different preferred basis.
Setup. Let $B = \{b_1, \dots, b_n\}$ and $C = \{c_1, \dots, c_n\}$ be two bases for $V$. The change-of-basis matrix $P_{C \leftarrow B}$ is the $n \times n$ matrix whose $j$-th column is the coordinate representation of $b_j$ in basis $C$:

$$P_{C \leftarrow B} = [\, [b_1]_C \mid [b_2]_C \mid \dots \mid [b_n]_C \,]$$

Coordinate transformation. For any $v \in V$:

$$[v]_C = P_{C \leftarrow B} \, [v]_B$$

Properties:
- $P_{C \leftarrow B}$ is always invertible (since both $B$ and $C$ are bases)
- $P_{B \leftarrow C} = (P_{C \leftarrow B})^{-1}$ - the inverse change-of-basis goes the other way
- Composition: $P_{D \leftarrow B} = P_{D \leftarrow C} \, P_{C \leftarrow B}$
Linear maps under change of basis. If a linear map $T$ has matrix $A_B$ in basis $B$, its matrix in basis $C$ is:

$$A_C = P_{C \leftarrow B} \, A_B \, P_{C \leftarrow B}^{-1}$$

This is a similarity transformation: $A_B$ and $A_C$ represent the same linear map in different bases. They have the same eigenvalues, rank, trace, and determinant - these are basis-independent properties of the map.
AI application: the residual stream basis. In a transformer, the residual stream has dimension $d_{\text{model}}$ and technically lives in $\mathbb{R}^{d_{\text{model}}}$ with the standard basis. But each attention head's OV circuit is parameterised by $W_V$ and $W_O$: the head reads in a low-dimensional subspace and writes in another. The "natural basis" for each head is different from the standard basis of $\mathbb{R}^{d_{\text{model}}}$. Change-of-basis matrices connect these representations. When we say a head "rotates" the residual stream, we mean it applies a change-of-basis transformation to the component it reads.
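A minimal NumPy check (random matrices, purely illustrative) that a similarity transformation preserves the basis-independent properties of the map:

```python
import numpy as np

# Illustrative check: A_C = P A_B P^{-1} changes the coordinates but not the map.
rng = np.random.default_rng(0)
n = 4
A_B = rng.standard_normal((n, n))        # matrix of the map in basis B
P = rng.standard_normal((n, n))          # change-of-basis matrix (invertible almost surely)

A_C = P @ A_B @ np.linalg.inv(P)         # matrix of the same map in basis C

# basis-independent quantities agree
print(np.allclose(np.poly(A_B), np.poly(A_C)))               # same characteristic polynomial -> same eigenvalues
print(np.isclose(np.trace(A_B), np.trace(A_C)))              # same trace
print(np.isclose(np.linalg.det(A_B), np.linalg.det(A_C)))    # same determinant
print(np.linalg.matrix_rank(A_B) == np.linalg.matrix_rank(A_C))  # same rank
```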
6.6 Constructing a Basis
From a spanning set (row reduction).
Given a spanning set $\{v_1, \dots, v_k\}$ for some subspace $W$:
- Form the matrix $A = [\, v_1 \mid v_2 \mid \dots \mid v_k \,]$
- Row reduce to RREF
- The pivot columns of $A$ (not the RREF, but the original $A$) form a basis for $W$
The non-pivot columns correspond to dependent vectors that can be removed without reducing the span.
From a null space (free variables).
To find a basis for $\operatorname{null}(A)$:
- Row reduce $A$ to RREF; identify free variables
- For each free variable, set it to 1 and all other free variables to 0; solve for the pivot variables
- The resulting vector is one basis vector for $\operatorname{null}(A)$
- Repeat for each free variable; the resulting $n - \operatorname{rank}(A)$ vectors form a basis for $\operatorname{null}(A)$ (see the sketch below)
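The same procedure carried out exactly with SymPy on a small illustrative matrix (the entries are made up); `nullspace()` performs precisely the free-variable construction described above:

```python
import sympy as sp

# Illustrative: RREF + free-variable construction of a null space basis.
A = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 8],
               [0, 0, 1, 2]])

R, pivots = A.rref()
print(pivots)            # (0, 2)  -> x1, x3 are pivot variables; x2, x4 are free

# one basis vector per free variable: set it to 1, the other free variables to 0,
# and solve for the pivot variables
basis = A.nullspace()
for v in basis:
    print(v.T, (A * v).T)     # each basis vector, and A*v = the zero vector
```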
Gram-Schmidt (from independent vectors to orthonormal basis).
Given linearly independent vectors $v_1, \dots, v_k$, produce an orthonormal basis $q_1, \dots, q_k$ for $\operatorname{span}\{v_1, \dots, v_k\}$. See Section 9.3 for the full procedure.
Extending a basis.
Given a basis $\{b_1, \dots, b_k\}$ for a subspace $W \subseteq V$, extend it to a basis for all of $V$:
- Start with $\{b_1, \dots, b_k\}$
- Find any $v \notin \operatorname{span}\{b_1, \dots, b_k\}$ and add it: the enlarged set is still linearly independent
- Repeat until the set spans $V$
This process terminates in at most $\dim V - k$ steps.
6.7 Dimension Counting Theorems
These theorems relate the dimensions of related subspaces. They are among the most useful tools in linear algebra.
Rank-Nullity Theorem. For any matrix $A \in \mathbb{R}^{m \times n}$:

$$\operatorname{rank}(A) + \dim \operatorname{null}(A) = n$$
In words: the number of "invisible" input directions plus the number of "active" input directions equals the total number of input dimensions.
Proof idea. Row reduce $A$ to RREF. Let $r = \operatorname{rank}(A)$. There are $r$ pivot columns (corresponding to independent output directions -> $\operatorname{rank}(A) = r$) and $n - r$ free columns (corresponding to null space basis vectors -> $\dim \operatorname{null}(A) = n - r$). Total: $r + (n - r) = n$.
AI application. For a weight matrix $W \in \mathbb{R}^{m \times n}$ with rank $r$ (checked numerically in the sketch below):
- $r$ input dimensions "pass through" to the output (row space directions)
- $n - r$ input dimensions are silenced (null space directions)
- The layer uses $r$ out of $n$ available input dimensions
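A quick NumPy verification on a deliberately low-rank matrix (shapes and rank chosen for illustration):

```python
import numpy as np

# Illustrative check of rank-nullity for a low-rank "weight matrix".
rng = np.random.default_rng(0)
m, n, r = 8, 12, 3
W = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))    # rank 3 by construction

rank = np.linalg.matrix_rank(W)
nullity = n - rank                                   # dimension of null(W)
print(rank, nullity, rank + nullity == n)            # 3 9 True

# the null-space directions really are silenced by the layer
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[rank:].T                             # shape (n, nullity)
x_invisible = null_basis @ rng.standard_normal(nullity)
print(np.allclose(W @ x_invisible, 0))               # True
```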
Dimension of a Sum. For subspaces $U, W \subseteq V$:

$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$$

This is the Grassmann formula or inclusion-exclusion for dimensions. It is the direct analogue of $|A \cup B| = |A| + |B| - |A \cap B|$ for finite sets, but for dimensions of subspaces.
Key consequence. If $U \cap W = \{\mathbf{0}\}$ (intersection is trivial):

$$\dim(U + W) = \dim U + \dim W$$

and the sum is a direct sum: $U \oplus W$. Dimensions add when the subspaces are independent (share only the origin).
Dimension bounds for intersection. Given $U, W \subseteq \mathbb{R}^n$ with $\dim U = p$ and $\dim W = q$:

$$\max(0, \ p + q - n) \ \le \ \dim(U \cap W) \ \le \ \min(p, q)$$

The upper bound: the intersection is contained in both $U$ and $W$, so its dimension cannot exceed either. The lower bound: from the Grassmann formula and $\dim(U + W) \le n$.
AI application. Two attention heads with $d_{\text{head}}$-dimensional subspaces in $\mathbb{R}^{d_{\text{model}}}$: their subspace intersection has dimension at least $2 d_{\text{head}} - d_{\text{model}}$. If $2 d_{\text{head}} > d_{\text{model}}$, the heads must share at least $2 d_{\text{head}} - d_{\text{model}}$ dimensions; their communication channels overlap inevitably.
Complement dimension. For a subspace $W \subseteq \mathbb{R}^n$ with $\dim W = k$:

$$\dim W^{\perp} = n - k$$

So $\dim W + \dim W^{\perp} = n$. The orthogonal complement "fills in" the remaining dimensions. The fundamental subspaces obey:

$$\dim \operatorname{row}(A) + \dim \operatorname{null}(A) = n$$

which is just the Rank-Nullity Theorem re-expressed using the fact that $\dim \operatorname{row}(A) = \operatorname{rank}(A)$ (and $\operatorname{null}(A) = \operatorname{row}(A)^{\perp}$).