Vectors and Spaces, Part 1: Intuition (Section 1) to Affine Subspaces and Convexity (Section 8)
1. Intuition
1.1 What Are Vectors and Spaces?
A vector is first encountered as an arrow with magnitude and direction, but that picture is only the beginning. In modern linear algebra, a vector is any element of a set that supports two operations:
- addition of vectors
- multiplication of a vector by a scalar
If those operations satisfy the right axioms, the set is called a vector space. The power of the abstraction is that the "things" inside the space do not need to be arrows in physical space. They may be coordinate tuples, polynomials, matrices, functions, sequences, or even gradient fields.
Concrete examples help fix the idea:
- a coordinate tuple $(v_1, \ldots, v_n) \in \mathbb{R}^n$ is a vector
- a polynomial such as $a_0 + a_1 x + \cdots + a_n x^n$ is a vector in a polynomial space
- a matrix in $\mathbb{R}^{m \times n}$ is a vector in a matrix space
- a continuous function $f \in C[a, b]$ is a vector in a function space
The unifying idea is linear structure. If you can sensibly add two objects of the same kind and scale the result by numbers, then linear algebra is likely available.
1.2 Why Vectors and Spaces Are Central to AI
Nearly every important object in deep learning is a vector or lives in a vector space:
- token embeddings are vectors in $\mathbb{R}^d$
- hidden states are vectors in a residual stream
- query, key, and value representations are vectors in learned subspaces
- parameter sets are points in a very large Euclidean space
- gradients are vectors in the same parameter space
- logits are vectors in vocabulary space
- probability outputs live in the probability simplex, which sits inside a vector space but is not itself one
The Transformer made this especially explicit. Attention compares a query vector $q$ with key vectors $k_j$ using dot products, then produces an output as a weighted sum of value vectors $v_j$:

$$\text{output} = \sum_j \alpha_j v_j, \qquad \alpha_j = \frac{\exp\!\big(q \cdot k_j / \sqrt{d_k}\big)}{\sum_{j'} \exp\!\big(q \cdot k_{j'} / \sqrt{d_k}\big)}$$
That is pure vector-space computation: inner products, scaling, exponentiation of scores, and linear combination. The semantic geometry of embeddings became operational in NLP through word2vec (Mikolov et al., 2013), and the geometry of projected subspaces became central to model architecture through self-attention (Vaswani et al., 2017). In large open models such as Llama 3, parameter space itself has hundreds of billions of coordinates at the frontier scale (Dubey et al., 2024), so geometric reasoning is not optional; it is the only way to think clearly about what the model can represent.
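As a concrete illustration, here is a minimal NumPy sketch of that computation for a single query. The dimensions and random vectors are placeholders, and the full multi-head machinery from the paper is omitted.

```python
import numpy as np

# A minimal sketch of scaled dot-product attention for one query position.
rng = np.random.default_rng(0)
d_k = 4
q = rng.normal(size=d_k)          # one query vector
K = rng.normal(size=(5, d_k))     # five key vectors (rows)
V = rng.normal(size=(5, d_k))     # five value vectors (rows)

scores = K @ q / np.sqrt(d_k)                      # inner products, scaled
weights = np.exp(scores) / np.exp(scores).sum()    # softmax over positions
output = weights @ V                               # weighted sum of value vectors

print(weights.sum())   # 1.0: the output is a convex combination of the rows of V
```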
1.3 Geometric, Algebraic, and Abstract Views
There are three complementary ways to think about vectors.
| View | Core idea | Strength | Typical AI use |
|---|---|---|---|
| Geometric | A vector is an arrow with length and direction | Builds intuition for angle, distance, projection | Similarity, attention, clustering |
| Algebraic | A vector is an ordered tuple of numbers | Easy to compute with componentwise rules | Arrays, tensors, embeddings, gradients |
| Abstract | A vector is any element of a vector space | Generalizes linear algebra beyond coordinates | Function spaces, kernels, NTK, RKHS |
The geometric view tells you why cosine similarity matters. The algebraic view tells you how to implement it. The abstract view tells you why the same mathematics reappears for functions, distributions, and operators.
One idea, three views
  Geometric:                  Algebraic:                  Abstract:

  y                           [v1, v2, ..., vn]           v in V
  ^        / v                                            where V has
  |      /                                                vector addition
  |    /                                                  and scalar multiplication
  |  /
  ----+--------------> x
1.4 Where Vectors and Spaces Appear in AI
The flow through a language model can be read as a sequence of moves between vector spaces:
Token IDs
-> embedding vectors in R^d
-> sequence matrix X in R^(n x d)
-> projected query/key/value spaces
-> attention outputs as weighted sums
-> feed-forward maps R^d -> R^d_ff -> R^d
-> logits in R^|V|
-> probabilities in the simplex Delta^(|V|-1)
Even when the last object is not a vector space, it usually sits inside one. The simplex of probability vectors is an affine slice of $\mathbb{R}^{|V|}$ with nonnegativity constraints. That pattern is common in AI: model objects often live in structured subsets of ambient vector spaces.
1.5 The Hierarchy of Structure
A useful way to organize the subject is by how much extra structure is available.
set
 -> abelian group (addition)
    -> vector space (addition + scalar multiplication)
       -> normed space (size)
          -> Banach space (complete normed space)
          -> inner product space (angles and lengths)
             -> Hilbert space (complete inner product space)
This branching picture is more accurate than a single chain. A normed space need not come from an inner product. An inner product space automatically induces a norm, and a Hilbert space is therefore also a Banach space, but not every Banach space is Hilbert. Most day-to-day ML uses finite-dimensional Euclidean spaces, where these distinctions collapse because all norms are equivalent and completeness is automatic. In theoretical ML and functional analysis, they matter a great deal.
1.6 Historical Timeline
The language of vectors and spaces was assembled gradually.
| Period | Figure or development | Importance |
|---|---|---|
| Ancient mechanics | Directed quantities in geometry and physics | Proto-vector intuition |
| 17th century | Galileo and Newton | Composition of velocity and force |
| 1843-1844 | Hamilton and Grassmann | Algebraic extension beyond ordinary 3D geometry |
| Late 19th century | Peano and axiomatic linear algebra | Abstract vector space formulation |
| Early 20th century | Hilbert | Infinite-dimensional inner product spaces |
| 1920s-1930s | Banach and functional analysis | Normed complete spaces and operator theory |
| 20th century computing | Numerical linear algebra | Matrix computation becomes practical at scale |
| 2013 | word2vec | Semantic geometry becomes an engineering object |
| 2017 | Transformer | Dot-product geometry becomes core architecture |
| 2020s | Foundation models | Representation geometry becomes a first-class research topic |
The modern lesson is simple: vectors started as geometry, became algebra, and now serve as the organizing language of computation.
2. Vectors in R^n
2.1 Definition and Notation
A vector in $\mathbb{R}^n$ is an ordered $n$-tuple of real numbers:

$$v = (v_1, v_2, \ldots, v_n), \qquad v_i \in \mathbb{R}$$

In linear algebra, the default convention is usually the column vector:

$$v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$

Key notation:
- $v_i$ is the $i$-th component or coordinate
- $0 = (0, 0, \ldots, 0)$ is the zero vector
- $e_i$ is the $i$-th standard basis vector, with a 1 in position $i$ and 0 elsewhere
- $v^\top$ denotes the transpose, turning a column into a row
Every vector in $\mathbb{R}^n$ can be written as

$$v = v_1 e_1 + v_2 e_2 + \cdots + v_n e_n$$

That formula is already telling you something profound: coordinates are not the vector itself; they are the coefficients of the vector relative to a chosen basis.
2.2 Vector Addition
Vectors in $\mathbb{R}^n$ add componentwise:

$$u + v = (u_1 + v_1, \, u_2 + v_2, \, \ldots, \, u_n + v_n)$$

So if, for example,

$$u = (1, 2), \qquad v = (3, -1),$$

then

$$u + v = (4, 1)$$
Vector addition satisfies the familiar algebraic laws:
- commutativity: $u + v = v + u$
- associativity: $(u + v) + w = u + (v + w)$
- identity: $v + 0 = v$
- inverse: $v + (-v) = 0$
Geometrically, addition follows the parallelogram law: place the two vectors tail to tail, and the diagonal of the parallelogram they span is the sum. Equivalently, place the tail of one vector at the head of the other; the arrow from the first tail to the final head is the sum. That picture becomes important later when we interpret residual connections as additive updates to a representation vector.
Head-to-tail view of vector addition
O ----u----> A ----v----> B
O ----------------------> B
u + v
2.3 Scalar Multiplication
If $\alpha \in \mathbb{R}$ and $v \in \mathbb{R}^n$, then scalar multiplication is also componentwise:

$$\alpha v = (\alpha v_1, \, \alpha v_2, \, \ldots, \, \alpha v_n)$$

For example, $2 \cdot (3, -1) = (6, -2)$ and $-1 \cdot (3, -1) = (-3, 1)$.
The main properties are:
- $\alpha(\beta v) = (\alpha \beta) v$
- $(\alpha + \beta) v = \alpha v + \beta v$
- $\alpha(u + v) = \alpha u + \alpha v$
- $1 \cdot v = v$
Geometrically, positive scaling stretches or shrinks the vector, while negative scaling also reverses direction.
Scalar multiplication changes length and possibly direction
alpha > 1 0 < alpha < 1 alpha < 0
O-----> O--> O<-----
stretch shrink flip + scale
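Both operations in a minimal NumPy sketch (the specific vectors are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([3.0, -1.0, 0.5])

# Componentwise addition and scalar multiplication
print(u + v)       # [ 4.    1.   -0.5 ]
print(2.5 * v)     # [ 7.5  -2.5   1.25]
print(-1.0 * v)    # direction reversed: [-3.   1.  -0.5]
```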
2.4 Linear Combinations
A linear combination of vectors $v_1, \ldots, v_k$ is any expression of the form

$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k$$

with scalars $c_1, \ldots, c_k \in \mathbb{R}$.
This idea is central because most linear algebra questions reduce to asking which linear combinations are possible. If you know the allowable combinations, you know what a system can generate.
In AI, linear combinations are everywhere:
- a dense layer computes weighted sums of input coordinates before adding bias
- attention outputs are weighted sums of value vectors
- residual streams combine contributions from multiple components additively
- concept arithmetic in embedding space uses vector addition and subtraction as semantic approximations
The famous analogy

$$\text{king} - \text{man} + \text{woman} \approx \text{queen}$$

is a statement that some semantic relations behave approximately linearly in embedding space (Mikolov et al., 2013). It is not an exact algebraic law, but it is a powerful geometric clue.
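A toy sketch of that arithmetic. The 3-dimensional "embeddings" below are made up purely to illustrate the nearest-neighbor lookup; real word2vec vectors are learned and much higher-dimensional.

```python
import numpy as np

# Hypothetical toy embeddings, invented only to illustrate the arithmetic.
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "man":   np.array([0.7, 0.1, 0.1]),
    "woman": np.array([0.7, 0.1, 0.9]),
    "queen": np.array([0.8, 0.9, 0.9]),
}

target = emb["king"] - emb["man"] + emb["woman"]

# Nearest neighbor in the toy vocabulary by Euclidean distance
nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - target))
print(nearest)   # "queen" for these toy vectors
```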
2.5 Linear Independence
Vectors $v_1, \ldots, v_k$ are linearly independent if the only way to make the zero vector from them is the trivial way:

$$c_1 v_1 + \cdots + c_k v_k = 0 \quad \Longrightarrow \quad c_1 = c_2 = \cdots = c_k = 0$$
If a nontrivial combination gives zero, the vectors are linearly dependent. Dependence means redundancy: at least one vector can be expressed in terms of the others.
Examples:
- $(1, 0)$ and $(0, 1)$ are independent in $\mathbb{R}^2$
- $(1, 2)$ and $(2, 4)$ are dependent because $(2, 4) = 2 \cdot (1, 2)$
- any set containing the zero vector is automatically dependent
Testing independence computationally is a rank question. Put the vectors as the $k$ columns of a matrix $A$. Then:
- if $\operatorname{rank}(A) = k$, the columns are independent
- if $\operatorname{rank}(A) < k$, they are dependent
In model analysis, approximate dependence matters as much as exact dependence. If different heads or features occupy nearly the same direction, they carry overlapping information and may be prunable.
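A minimal numerical version of that rank test (NumPy's `matrix_rank` applies a tolerance, which is what makes approximate dependence detectable in practice):

```python
import numpy as np

# Columns are the vectors being tested.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # second column = 2 * first column
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])

print(np.linalg.matrix_rank(A))   # 1 -> columns are dependent
print(np.linalg.matrix_rank(B))   # 2 -> columns are independent
```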
Independent vs dependent in R^2
Independent Dependent
^ y ^ y
| w | 2v
| / | /
| / |/
------+------> x -------+------> x
/ /
v v
two distinct directions same line, one is redundant
2.6 Span
The span of a set of vectors is the set of all linear combinations of those vectors:

$$\operatorname{span}\{v_1, \ldots, v_k\} = \{c_1 v_1 + \cdots + c_k v_k : c_1, \ldots, c_k \in \mathbb{R}\}$$
The span tells you every direction reachable from the given generating set.
Typical cases:
- $\operatorname{span}\{v\}$, for nonzero $v$, is a line through the origin
- $\operatorname{span}\{v, w\}$ is a plane through the origin if $v$ and $w$ are independent
Two facts are worth memorizing:
- span is always a subspace
- adding a vector outside the current span increases the dimension by one
This is why span is the right language for representational capacity. A low-rank matrix can only output vectors inside a low-dimensional span. A collection of value vectors can only produce attention outputs inside their span.
What span looks like geometrically
   span{v}                 span{v, w}                 span{e1, ..., en}
    line                    plane                      whole ambient space

       /                    __________
      /                    /         /|
   --+--->                /_________/ |
                          |         | |
                          |         | /
                          |_________|/
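One simple way to test span membership numerically, sketched below: a vector b lies in the span of the columns of A exactly when appending b as an extra column does not increase the rank.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # two columns spanning a plane in R^3
b = np.array([2.0, 3.0, 5.0])     # equals 2*col1 + 3*col2, so it is in the span
c = np.array([0.0, 0.0, 1.0])     # not in the span

def in_span(A, v):
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

print(in_span(A, b))   # True
print(in_span(A, c))   # False
```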
3. Abstract Vector Spaces
Reading note: This section introduces the vector space axioms and their most common concrete instances (, function spaces, polynomial spaces). The focus here is on geometric intuition and computational examples. For the full axiomatic treatment - subspace criteria, span, linear independence, quotient spaces, and the abstract theory of bases - see the dedicated section.
-> Full axiomatic treatment: Vector Spaces and Subspaces 2-6
3.1 The Vector Space Axioms
Let $F$ be a field, usually $\mathbb{R}$ or $\mathbb{C}$. A vector space over $F$ is a set $V$ equipped with:
- vector addition: $+ : V \times V \to V$
- scalar multiplication: $\cdot : F \times V \to V$
such that, for all $u, v, w \in V$ and $\alpha, \beta \in F$, the following laws hold.
| Axiom | Statement |
|---|---|
| Additive closure | $u + v \in V$ |
| Commutativity | $u + v = v + u$ |
| Associativity | $(u + v) + w = u + (v + w)$ |
| Zero vector | There exists $0 \in V$ with $v + 0 = v$ |
| Additive inverse | For each $v$ there exists $-v$ with $v + (-v) = 0$ |
| Scalar closure | $\alpha v \in V$ |
| Distributivity over vector addition | $\alpha(u + v) = \alpha u + \alpha v$ |
| Distributivity over scalar addition | $(\alpha + \beta)v = \alpha v + \beta v$ |
| Scalar associativity | $\alpha(\beta v) = (\alpha\beta)v$ |
| Unit scalar | $1 \cdot v = v$ |
Some texts count eight axioms by folding closure into the operation definitions. Listing ten is often pedagogically clearer.
3.2 Consequences of the Axioms
The axioms are minimal, but they already imply a lot.
The zero vector is unique.
If $0$ and $0'$ both behave like additive identities, then $0 = 0 + 0' = 0'$.
Additive inverses are unique.
If $w$ and $w'$ both satisfy $v + w = 0$ and $v + w' = 0$, then $w = w + 0 = w + (v + w') = (w + v) + w' = 0 + w' = w'$.
Zero scalar kills every vector.
Since $0 \cdot v = (0 + 0)v = 0 \cdot v + 0 \cdot v$, add the additive inverse of $0 \cdot v$ to both sides to get $0 \cdot v = 0$.
Every scalar kills the zero vector.
$\alpha \cdot 0 = \alpha(0 + 0) = \alpha \cdot 0 + \alpha \cdot 0$, so again $\alpha \cdot 0 = 0$.
Multiplying by $-1$ gives the additive inverse.
$v + (-1)v = (1 + (-1))v = 0 \cdot v = 0$, so $(-1)v = -v$.
These are simple but useful. Many later proofs quietly depend on them.
3.3 Examples of Vector Spaces
The same axioms govern many different mathematical objects.
1. Euclidean space $\mathbb{R}^n$
This is the standard example. Addition and scalar multiplication are componentwise.
2. Polynomial space $P_n$
The set of polynomials of degree at most $n$:

$$P_n = \{a_0 + a_1 x + \cdots + a_n x^n : a_0, \ldots, a_n \in \mathbb{R}\}$$

Addition is polynomial addition, and scalar multiplication rescales coefficients.
3. Matrix space $\mathbb{R}^{m \times n}$
All real $m \times n$ matrices form a vector space with componentwise addition and scaling.
4. Continuous function space $C[a, b]$
All continuous real-valued functions on $[a, b]$ form a vector space under pointwise operations:

$$(f + g)(x) = f(x) + g(x), \qquad (\alpha f)(x) = \alpha f(x)$$

5. Sequence spaces
Infinite sequences can also form vector spaces. Important examples include $\ell^2$, the square-summable sequences, and $\ell^1$, the absolutely summable sequences.
6. $L^2[a, b]$
Square-integrable functions form an infinite-dimensional vector space that becomes a Hilbert space once we equip it with the usual inner product.
This last example matters in machine learning theory because kernels, Fourier methods, and approximation theory are often most naturally phrased in function spaces rather than coordinate spaces.
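A small sketch of the "polynomials are vectors" point: identifying elements of a polynomial space with their coefficient tuples makes addition and scaling literally componentwise.

```python
import numpy as np

# p(x) = 1 + 2x - x^2 and q(x) = 3 - x, stored as coefficient vectors (a0, a1, a2)
p = np.array([1.0, 2.0, -1.0])
q = np.array([3.0, -1.0, 0.0])

r = 2.0 * p + q            # coefficients of the polynomial 2p(x) + q(x)
print(r)                   # [5. 3. -2.]  ->  5 + 3x - 2x^2

# Evaluating both sides confirms the identification: (2p + q)(x) == 2 p(x) + q(x)
evaluate = lambda coeffs, x: sum(c * x**k for k, c in enumerate(coeffs))
x = 1.7
print(np.isclose(evaluate(r, x), 2 * evaluate(p, x) + evaluate(q, x)))   # True
```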
3.4 Non-Examples
Students usually learn vector spaces faster by seeing what fails.
| Set | Why it is not a vector space |
|---|---|
| The nonnegative orthant $\{x \in \mathbb{R}^n : x_i \ge 0\}$ | Not closed under additive inverse |
| The unit sphere $S^{n-1}$ | Not closed under addition or scaling |
| Probability simplex | Not closed under arbitrary addition or scaling |
| $\mathbb{Q}^n$ over $\mathbb{R}$ | Not closed under real scalar multiplication |
| A line not through the origin | Does not contain the zero vector |
The probability simplex is especially important in AI. It lives inside a vector space, but it is not itself a vector space because probabilities must remain nonnegative and sum to one.
3.5 Subspaces
A subset $W \subseteq V$ is a subspace if it is itself a vector space under the same operations. The practical test is short:
- if $u, v \in W$, then $u + v \in W$
- if $v \in W$ and $\alpha \in F$, then $\alpha v \in W$
Equivalently, a nonempty set is a subspace if it is closed under linear combinations.
Examples inside $\mathbb{R}^3$:
- $\{0\}$ is the trivial subspace
- any line through the origin is a one-dimensional subspace
- any plane through the origin is a two-dimensional subspace
- $\mathbb{R}^3$ itself is a subspace
Non-examples:
- a translated line or plane not passing through the origin
- the unit sphere
- the set of vectors with first coordinate equal to 1
Subspaces are how linear algebra talks about structure. The column space of a matrix, the null space of a map, the subspace spanned by a collection of attention heads, and the tangent space of a model family all follow the same logic: identify the directions that are allowed, closed, and linearly stable.
Subspace vs non-subspace in R^2
Subspace (through origin) Not a subspace (shifted)
/ /
/ /
----+----> x -----/----> x
/ /
/
contains 0 misses 0
4. Basis and Dimension
4.1 Basis Definition
A basis for a vector space $V$ is a set of vectors

$$\mathcal{B} = \{b_1, b_2, \ldots, b_n\}$$

that satisfies two conditions:
- the vectors are linearly independent
- the vectors span the whole space
These two requirements pull in opposite directions and must hold simultaneously. Independence prevents redundancy; spanning prevents missing directions.
If $\mathcal{B}$ is a basis, then every vector $v \in V$ has a unique expansion

$$v = c_1 b_1 + c_2 b_2 + \cdots + c_n b_n$$

The coefficients $c_1, \ldots, c_n$ are the coordinates of $v$ relative to $\mathcal{B}$.
Uniqueness matters. If the same vector had two different coordinate descriptions in the same basis, then subtracting those descriptions would produce a nontrivial linear combination of basis vectors equal to zero, contradicting independence.
In R^2:
   one vector                 two non-collinear vectors      three vectors
   not enough                 just enough = basis            spanning but redundant

         /                      ^ y                            ^ y
        /                       |    /                         |    / |
       /                        |  /                           |  /   |
   ---------------> x           +--------> x                   +--------> x
                               /                               /
4.2 Standard Basis of R^n
The standard basis of $\mathbb{R}^n$ is

$$e_1 = (1, 0, \ldots, 0), \quad e_2 = (0, 1, \ldots, 0), \quad \ldots, \quad e_n = (0, 0, \ldots, 1)$$

In the standard basis, the coordinate vector is just the familiar list of components:

$$[v]_{\text{std}} = (v_1, v_2, \ldots, v_n)$$
But the standard basis is not sacred. Many problems become simpler after a change of basis:
- PCA chooses a basis aligned with data variance
- Fourier analysis chooses sinusoidal basis functions
- diagonalization chooses an eigenbasis when available
- orthonormal bases simplify coordinates, projections, and energy calculations
The vector does not change when the basis changes. Only its coordinate description changes.
4.3 Dimension
For finite-dimensional vector spaces, every basis has the same number of elements. That common number is the dimension of the space:

$$\dim V = \text{the number of vectors in any basis of } V$$

Examples:
- $\dim \mathbb{R}^n = n$ because $\{e_1, \ldots, e_n\}$ is a basis
Dimension is a count of independent directions, not a count of how many vectors happen to be mentioned in a description. You can describe a plane in using ten spanning vectors if you want; the dimension is still 2.
In AI practice, dimension is often both a modeling choice and a computational budget. Embedding dimension, head dimension, hidden width, and bottleneck rank are all dimension decisions.
4.4 Dimension and Subspaces
If $W$ is a subspace of a finite-dimensional vector space $V$, then

$$\dim W \le \dim V$$

Moreover:
- if $\dim W = \dim V$, then $W = V$
- if $W \neq V$, then $\dim W < \dim V$
- in $\mathbb{R}^2$, the only subspaces are $\{0\}$, lines through the origin, and $\mathbb{R}^2$
- in $\mathbb{R}^3$, the only subspaces are $\{0\}$, lines through the origin, planes through the origin, and $\mathbb{R}^3$
The codimension of $W$ in $V$ is

$$\operatorname{codim}(W) = \dim V - \dim W$$
Codimension tells you how many independent directions are missing.
This is the right language for low-rank modeling. If a matrix maps a 4096-dimensional residual stream into a 64-dimensional attention head space, then its image has dimension at most 64, so most of the ambient directions are not expressible in that head.
4.5 Coordinates and Change of Basis
Let $\mathcal{B} = \{b_1, \ldots, b_n\}$ be a basis for $V$. The coordinate vector of $v$ relative to $\mathcal{B}$ is

$$[v]_{\mathcal{B}} = (c_1, \ldots, c_n), \qquad \text{where } v = c_1 b_1 + \cdots + c_n b_n$$

If $\mathcal{B}'$ is another basis, then coordinates transform by an invertible matrix $P$:

$$[v]_{\mathcal{B}'} = P \, [v]_{\mathcal{B}}$$

The change-of-basis matrix $P$ is invertible because coordinates in each basis are unique. If it were not invertible, some nonzero coordinate vector would collapse to zero, contradicting uniqueness.
Suppose a linear map is represented by matrix $A$ in one basis and by matrix $A'$ in the other. Then the relationship is

$$A' = P A P^{-1}$$

This is a similarity transformation. It does not change the underlying map; it changes the coordinates in which the map is described.
That idea appears all over ML:
- whitening and PCA rotate data into more convenient coordinates
- orthogonal parameterizations change basis while preserving norms
- query, key, and value projections choose learned coordinate systems inside the embedding dimension
Same vector, different coordinates
standard basis: rotated basis:
y b2
^ /
| v /
| / / v
| / /
------+------> x -----/----------> b1
The geometric arrow is the same.
Only the coordinate description changes.
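A minimal NumPy sketch of the same point: the underlying arrow and map are fixed, only the coordinate descriptions change, and the map's matrix changes by a similarity transform. The rotated basis here is arbitrary.

```python
import numpy as np

theta = np.pi / 6
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # columns of B form a rotated basis

v = np.array([2.0, 1.0])             # coordinates in the standard basis
v_new = np.linalg.solve(B, v)        # coordinates of the same vector in the new basis
print(np.allclose(B @ v_new, v))     # True: same arrow, different coordinates

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])                        # a linear map in standard coordinates
A_new = np.linalg.solve(B, A @ B)                 # similarity transform: B^{-1} A B
print(np.allclose(B @ (A_new @ v_new), A @ v))    # True: same map, new coordinates
```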
4.6 Rank-Nullity Theorem
Let $T : V \to W$ be a linear map with finite-dimensional domain. Then

$$\dim(\ker T) + \dim(\operatorname{im} T) = \dim V$$

The two terms are named:
- nullity: $\dim(\ker T)$
- rank: $\dim(\operatorname{im} T)$
For a matrix $A \in \mathbb{R}^{m \times n}$, this becomes

$$\operatorname{rank}(A) + \dim(\operatorname{null}(A)) = n$$
Interpretation:
- rank counts how many independent directions survive the map
- nullity counts how many independent directions are lost completely
If $A \in \mathbb{R}^{m \times n}$ has rank $r$, then every output lies in an $r$-dimensional subspace of $\mathbb{R}^m$, and an $(n - r)$-dimensional family of input directions gets sent to zero.
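A quick numerical check on a deliberately rank-deficient matrix (SciPy's `null_space` is assumed available; the same numbers can be read off an SVD):

```python
import numpy as np
from scipy.linalg import null_space

# 3x4 matrix whose third row is the sum of the first two, so the rank is 2.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]
print(rank, nullity, rank + nullity == A.shape[1])   # 2 2 True
```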
This theorem explains a great deal of ML engineering:
- low-rank adaptation works because useful updates often live in small image spaces (Hu et al., 2022)
- bottleneck layers deliberately compress information into lower-dimensional subspaces
- overparameterized models have large null spaces in parameter space, which helps explain why many parameter settings realize similar functions
5. Norms and Metric Spaces
5.1 Norms on Vector Spaces
A norm on a vector space $V$ is a function

$$\|\cdot\| : V \to \mathbb{R}$$

satisfying, for all $u, v \in V$ and all scalars $\alpha$:
- positive definiteness: $\|v\| \ge 0$, with equality iff $v = 0$
- homogeneity: $\|\alpha v\| = |\alpha| \, \|v\|$
- triangle inequality: $\|u + v\| \le \|u\| + \|v\|$
A norm measures size. Once a norm is available, we can talk about boundedness, convergence, approximation error, and regularization.
5.2 The p-Norms
For $1 \le p < \infty$ and $x \in \mathbb{R}^n$, the $p$-norm is

$$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$$

The limiting case is

$$\|x\|_\infty = \lim_{p \to \infty} \|x\|_p = \max_{1 \le i \le n} |x_i|$$
The most important examples are:
L1 norm

$$\|x\|_1 = \sum_{i=1}^{n} |x_i|$$

This encourages sparsity in optimization because its unit ball has corners aligned with coordinate axes.
L2 norm

$$\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$$

This is the Euclidean length and the most common norm in deep learning.
L-infinity norm

$$\|x\|_\infty = \max_{1 \le i \le n} |x_i|$$

This measures worst-case component size and is common in adversarial robustness.
The geometry of the unit ball depends on $p$:
- $p = 1$: diamond or cross-polytope
- $p = 2$: sphere
- $p = \infty$: cube
Those shapes are not cosmetic. They explain different optimization behavior.
Unit balls in 2D
L1 ball L2 ball L_inf ball
/\ ____ +------+
/ \ / \ | |
\ / \______/ | |
\/ +------+
corners smooth flat faces
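A minimal check of the three norms, together with the standard inequality chain relating them (the constants reappear in the next subsection):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
n = len(x)

l1   = np.linalg.norm(x, 1)        # 8.0
l2   = np.linalg.norm(x, 2)        # sqrt(26) ~ 5.10
linf = np.linalg.norm(x, np.inf)   # 4.0

# ||x||_inf <= ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2 <= n ||x||_inf
print(linf <= l2 <= l1 <= np.sqrt(n) * l2 <= n * linf)   # True
```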
5.3 Norm Equivalence
In finite-dimensional spaces, all norms induce the same notion of convergence. More precisely, if $\|\cdot\|_a$ and $\|\cdot\|_b$ are norms on $\mathbb{R}^n$, then there exist constants $0 < c_1 \le c_2$ such that

$$c_1 \|x\|_a \le \|x\|_b \le c_2 \|x\|_a \quad \text{for all } x$$

Useful special inequalities:

$$\|x\|_\infty \le \|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2 \le n\,\|x\|_\infty$$
This means that in finite dimensions, saying "a sequence converges" does not depend on whether you measure error with L1, L2, or L-infinity. In infinite dimensions, this is false. Different norms can induce genuinely different topologies.
5.4 Matrix Norms
Matrices also admit norms. Important ones include:
Frobenius norm

$$\|A\|_F = \sqrt{\sum_{i,j} a_{ij}^2}$$

This is the Euclidean norm of the matrix viewed as one long vector.
Spectral norm

$$\|A\|_2 = \sigma_{\max}(A)$$

This is the maximum stretch factor of the linear map:

$$\|A\|_2 = \max_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2}$$

Nuclear norm

$$\|A\|_* = \sum_i \sigma_i(A)$$

This is the sum of singular values. It is a convex surrogate for rank and appears in matrix completion and low-rank learning (Candes and Recht, 2009).
Induced 1-norm and infinity-norm

$$\|A\|_1 = \max_j \sum_i |a_{ij}| \;\;\text{(max column sum)}, \qquad \|A\|_\infty = \max_i \sum_j |a_{ij}| \;\;\text{(max row sum)}$$
In ML:
- Frobenius norm corresponds to standard weight decay on matrix entries
- spectral norm controls the Lipschitz constant of a linear layer
- nuclear norm promotes low-rank structure
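NumPy exposes all of these directly; a short sketch on an arbitrary matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

fro  = np.linalg.norm(A, 'fro')      # Frobenius norm
spec = np.linalg.norm(A, 2)          # spectral norm = largest singular value
nuc  = np.linalg.norm(A, 'nuc')      # nuclear norm = sum of singular values
one  = np.linalg.norm(A, 1)          # induced 1-norm: max absolute column sum
inf  = np.linalg.norm(A, np.inf)     # induced infinity-norm: max absolute row sum

print(np.isclose(spec, np.linalg.svd(A, compute_uv=False)[0]))   # True
print(fro, spec, nuc, one, inf)
```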
5.5 Metric Spaces
A metric space is a set $X$ equipped with a distance function

$$d : X \times X \to \mathbb{R}$$

such that for all $x, y, z \in X$:
- nonnegativity: $d(x, y) \ge 0$, with equality iff $x = y$
- symmetry: $d(x, y) = d(y, x)$
- triangle inequality: $d(x, z) \le d(x, y) + d(y, z)$
Every norm induces a metric by

$$d(x, y) = \|x - y\|$$
Not every metric comes from a norm. Edit distance on strings is a metric, but there is no vector subtraction of strings that generates it as a norm.
AI uses many distances:
- Euclidean distance for retrieval and clustering
- cosine distance for directional similarity
- Hamming distance for binary codes
- edit distance for token sequences
- Wasserstein distance for distributions
- KL divergence as a useful non-metric divergence
The important point is conceptual: vector-space geometry gives one family of distances, but machine learning uses broader metric ideas too.
5.6 Convergence in Normed Spaces
A sequence $(x_k)$ converges to $x$ in norm if

$$\|x_k - x\| \to 0 \quad \text{as } k \to \infty$$

A sequence is Cauchy if its terms eventually become arbitrarily close to each other: for every $\varepsilon > 0$ there is an $N$ such that

$$\|x_m - x_k\| < \varepsilon \quad \text{for all } m, k \ge N$$

A normed space is complete if every Cauchy sequence converges to a limit that still belongs to the space. A complete normed space is called a Banach space.
Important examples:
- $\mathbb{R}^n$ with any norm is complete
- matrix spaces with standard norms are complete
- $C[a, b]$ with the sup norm is complete
- $L^p$ spaces are complete for $1 \le p \le \infty$
Completeness matters because many learning algorithms are iterative. Gradient methods, fixed-point solvers, and optimization routines generate sequences. Completeness is the condition that prevents the limit from "falling out of the space."
6. Inner Product Spaces
Reading note: This section develops inner products concretely in and establishes the geometric tools used throughout this chapter (dot product, angle, Cauchy-Schwarz, orthogonality). The abstract inner product space theory - Gram-Schmidt, orthonormal bases, Hilbert spaces, and orthogonal complements - is developed in full later.
-> Full treatment: Vector Spaces and Subspaces 9
6.1 Inner Products
An inner product on a real vector space $V$ is a function

$$\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$$

satisfying, for all $u, v, w \in V$ and scalars $\alpha, \beta$:
- symmetry: $\langle u, v \rangle = \langle v, u \rangle$
- linearity: $\langle \alpha u + \beta w, v \rangle = \alpha \langle u, v \rangle + \beta \langle w, v \rangle$
- positive definiteness: $\langle v, v \rangle \ge 0$, with equality iff $v = 0$
An inner product upgrades a vector space with geometry. Once it is available, we can measure lengths, define angles, talk about orthogonality, and project onto subspaces.
For complex vector spaces, the definition is slightly modified: symmetry becomes conjugate symmetry, $\langle u, v \rangle = \overline{\langle v, u \rangle}$, and one argument is conjugate-linear. The core geometric story remains the same.
6.2 Standard Inner Product on R^n
The standard inner product on $\mathbb{R}^n$ is the dot product:

$$\langle x, y \rangle = x^{\top} y = \sum_{i=1}^{n} x_i y_i$$

This inner product induces the Euclidean norm:

$$\|x\|_2 = \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^{n} x_i^2}$$
So in Euclidean space, angle and length are not independent notions. They are both derived from the same primitive object.
6.3 Geometric Interpretation
The angle $\theta$ between nonzero vectors $u$ and $v$ is defined by

$$\cos\theta = \frac{\langle u, v \rangle}{\|u\| \, \|v\|}$$

Interpretation:
- $\cos\theta = 1$: same direction, maximal alignment
- $\cos\theta = 0$: orthogonal, zero alignment
- $\cos\theta = -1$: opposite directions
Cosine similarity is exactly this normalized inner product. It is widely used for embeddings because it measures direction rather than raw magnitude.
For attention, the score $\langle q, k \rangle / \sqrt{d_k}$ combines both angular alignment and magnitude. If vectors are normalized, the score is proportional to cosine similarity. If they are not, norm effects matter too.
Angle and alignment
same direction orthogonal opposite direction
-----> -----> -----> <----- ----->
|
|
v
cos(theta) = 1 cos(theta) = 0 cos(theta) = -1
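A minimal cosine-similarity helper; the vectors below are arbitrary illustrations of the three regimes:

```python
import numpy as np

def cosine_similarity(u, v):
    # Normalized inner product; assumes u and v are nonzero.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

u = np.array([1.0, 2.0, 0.0])
print(cosine_similarity(u, 3.0 * u))                       #  1.0  same direction, scale ignored
print(cosine_similarity(u, np.array([-2.0, 1.0, 5.0])))    #  0.0  orthogonal
print(cosine_similarity(u, -u))                            # -1.0  opposite direction
```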
6.4 Cauchy-Schwarz Inequality
The fundamental inequality of inner-product geometry is

$$|\langle u, v \rangle| \le \|u\| \, \|v\|$$

Equality holds exactly when $u$ and $v$ are linearly dependent.
A classic proof uses positivity. For any real $t$,

$$0 \le \|u + t v\|^2 = \|u\|^2 + 2t\langle u, v \rangle + t^2 \|v\|^2$$

This quadratic in $t$ must have nonpositive discriminant, so

$$4\langle u, v \rangle^2 - 4\|u\|^2 \|v\|^2 \le 0,$$

which implies the result.
Cauchy-Schwarz guarantees that cosine similarity always lies in $[-1, 1]$. It also gives quick upper bounds on correlations, dot products, and approximation error.
6.5 Orthogonality
Vectors $u$ and $v$ are orthogonal if their inner product is zero:

$$\langle u, v \rangle = 0$$

An orthogonal set is a collection of pairwise orthogonal nonzero vectors. Such a set is automatically linearly independent.
Proof sketch: if

$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = 0,$$

take the inner product with $v_i$. All cross-terms vanish, leaving

$$c_i \|v_i\|^2 = 0,$$

so $c_i = 0$ for every $i$.
An orthonormal set is orthogonal and normalized:

$$\langle u_i, u_j \rangle = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$

In an orthonormal basis, coordinates are exceptionally simple:

$$v = \sum_{i} \langle v, u_i \rangle \, u_i$$

You get coefficients by taking inner products. No matrix inversion is needed.
Orthogonality is central in ML because it reduces interference:
- orthogonal initializations help preserve signal norms through depth (Saxe et al., 2014)
- orthogonal features are easier to disentangle
- orthogonal subspaces make decomposition interpretable
6.6 Gram-Schmidt Orthogonalization
Given linearly independent vectors $v_1, \ldots, v_k$, the Gram-Schmidt procedure constructs an orthonormal basis $u_1, \ldots, u_k$ spanning the same subspace.
First set

$$u_1 = \frac{v_1}{\|v_1\|}$$

Then recursively subtract the components already accounted for:

$$w_k = v_k - \sum_{j=1}^{k-1} \langle v_k, u_j \rangle \, u_j$$

and normalize:

$$u_k = \frac{w_k}{\|w_k\|}$$
Each step removes the part of already explained by the previous orthonormal vectors. What remains is orthogonal to the earlier directions.
Conceptually, Gram-Schmidt says:
- start with a spanning description
- strip away redundancy direction by direction
- normalize what survives
Algorithmically, Gram-Schmidt underlies QR decomposition. Numerically, modified Gram-Schmidt and Householder methods are often preferred for stability.
Gram-Schmidt idea for v2
u1 -------------------->
v2 ------------------------>
split v2 into:
proj_u1(v2) -------------->
remainder ^
|
|
Keep the remainder, then normalize it.
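A compact classical Gram-Schmidt sketch (fine for illustration; as noted above, modified Gram-Schmidt or Householder-based QR is preferred numerically):

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V, assumed linearly independent."""
    U = []
    for v in V.T:
        w = v - sum((v @ u) * u for u in U)    # strip components along earlier directions
        U.append(w / np.linalg.norm(w))        # normalize what remains
    return np.column_stack(U)

V = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(V)
print(np.allclose(Q.T @ Q, np.eye(2)))                    # True: columns are orthonormal
print(np.linalg.matrix_rank(np.hstack([V, Q])) == 2)      # True: same span as the original columns
```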
6.7 Orthogonal Complement
If $W$ is a subspace of an inner-product space $V$, its orthogonal complement is

$$W^{\perp} = \{v \in V : \langle v, w \rangle = 0 \text{ for all } w \in W\}$$

Important facts:
- $W^{\perp}$ is always a subspace
- in finite dimensions, $\dim W + \dim W^{\perp} = \dim V$
- every vector decomposes uniquely as $v = w + w^{\perp}$ with $w \in W$ and $w^{\perp} \in W^{\perp}$
This decomposition is one of the most useful ideas in linear algebra. It is how least squares, residuals, and projection error are understood.
For matrices, the row space is orthogonal to the null space, and the column space is orthogonal to the left null space. Those are special cases of the same principle.
Orthogonal decomposition
v
/|
/ |
/ | v_perp
-----------*---+-----------------> W
proj_W(v)
v = proj_W(v) + v_perp
6.8 Hilbert Spaces
A Hilbert space is a complete inner-product space. It has both geometry and completeness.
Finite-dimensional inner-product spaces are automatically Hilbert spaces. The interesting cases are infinite-dimensional:
- $\ell^2$: square-summable sequences
- $L^2[a, b]$: square-integrable functions
- reproducing kernel Hilbert spaces (RKHS), central to kernel methods
In a Hilbert space, orthonormal bases and projection theory still work, though bases may now be infinite. Parseval-type identities and Fourier expansions become available:

$$f = \sum_{i} \langle f, e_i \rangle \, e_i, \qquad \|f\|^2 = \sum_{i} |\langle f, e_i \rangle|^2$$

for suitable orthonormal bases $\{e_i\}$.
Hilbert spaces matter for AI because many learning-theoretic objects are not finite vectors at all; they are functions. Kernel methods, Gaussian processes, and parts of neural network theory live more naturally in Hilbert spaces than in .
7. Orthogonal Projections
7.1 Projection onto a Subspace
Given a subspace $W$ of an inner-product space $V$ and a vector $v \in V$, the orthogonal projection of $v$ onto $W$ is the vector in $W$ closest to $v$:

$$\operatorname{proj}_W(v) = \arg\min_{w \in W} \|v - w\|$$

The key characterization is:

$$v - \operatorname{proj}_W(v) \perp W$$

For a one-dimensional subspace spanned by a nonzero vector $u$,

$$\operatorname{proj}_u(v) = \frac{\langle v, u \rangle}{\langle u, u \rangle} \, u$$

If $u$ is already unit length, this simplifies to

$$\operatorname{proj}_u(v) = \langle v, u \rangle \, u$$
Projection is the formal version of "keep the part of the vector that points in the subspace, discard the perpendicular remainder."
Projection onto a line or subspace
v
/|
/ |
/ | residual = v - Proj_W(v)
------*---+---------------------- W
Proj_W(v)
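A small sketch of projection onto a line, checking that the residual is orthogonal to the direction vector:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])     # direction spanning the line
v = np.array([3.0, 1.0, 0.0])

proj = (v @ u) / (u @ u) * u      # proj_u(v) = (<v,u> / <u,u>) u
residual = v - proj

print(proj)
print(np.isclose(residual @ u, 0.0))   # True: residual is orthogonal to the line
# The projection is the closest point on the line, e.g. closer than 0.7*u:
print(np.linalg.norm(residual) <= np.linalg.norm(v - 0.7 * u))   # True
```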
7.2 Projection Matrix Properties
If $P$ is the matrix of an orthogonal projection, then:
- $P^2 = P$ (idempotence)
- $P^{\top} = P$ (symmetry)
The first identity says projecting twice is the same as projecting once. The second says the projection is orthogonal rather than oblique.
For projection onto the line spanned by a nonzero vector $u$,

$$P = \frac{u u^{\top}}{u^{\top} u}$$
- eigenvalue 1 corresponds to directions already in the target subspace
- eigenvalue 0 corresponds to directions annihilated by projection
The complementary projector is $I - P$, which projects onto the orthogonal complement.
7.3 Projection onto Column Space
Let $A \in \mathbb{R}^{m \times n}$ have full column rank. The orthogonal projector onto the column space of $A$ is

$$P = A (A^{\top} A)^{-1} A^{\top}$$

Why this formula works:
- every projected vector has the form $Ax$, so it lies in $\operatorname{col}(A)$
- the residual $b - Pb$ must be orthogonal to every column of $A$
- orthogonality of the residual gives the normal equations $A^{\top} A \hat{x} = A^{\top} b$
If the columns of $A$ are orthonormal, then $A^{\top} A = I$ and the formula simplifies to

$$P = A A^{\top}$$
This expression appears in least squares, regression, PCA, and low-dimensional approximation.
In Transformer language, learned projections $W_Q$, $W_K$, and $W_V$ are not orthogonal projectors in the strict algebraic sense, but they do map activations into lower-dimensional subspaces where later computations occur. Orthogonal projection is therefore the clean mathematical ideal behind many approximate representation moves.
Projecting b onto col(A)
b
/|
/ |
/ | residual
/ |
-----*----+----------> col(A)
A x_hat
Best-fit output lies inside col(A).
The leftover error is orthogonal to col(A).
7.4 Gram Matrix
Given vectors $v_1, \ldots, v_k$ in $\mathbb{R}^n$, the Gram matrix is

$$G_{ij} = \langle v_i, v_j \rangle$$

If $X \in \mathbb{R}^{k \times n}$ is the matrix whose rows are the vectors, then

$$G = X X^{\top}$$

Gram matrices have two key properties:
- they are symmetric
- they are positive semidefinite
Indeed, for any coefficient vector $c \in \mathbb{R}^k$,

$$c^{\top} G c = c^{\top} X X^{\top} c = \|X^{\top} c\|_2^2 \ge 0$$
The Gram matrix is positive definite exactly when the vectors are linearly independent.
In AI:
- attention score matrices are built from dot products, so they are Gram-like objects before softmax scaling and masking
- kernel matrices are Gram matrices in feature space
- covariance and similarity matrices are Gram constructions with centering or normalization added
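A quick numerical check of both properties: the eigenvalues of a Gram matrix are nonnegative, and a zero eigenvalue shows up exactly when the underlying vectors are dependent. The random vectors are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 5))              # three 5-dimensional vectors as rows
G = X @ X.T                              # Gram matrix of the rows

print(np.allclose(G, G.T))                     # True: symmetric
print(np.all(np.linalg.eigvalsh(G) > 0))       # True: positive definite (rows independent)

X_dep = np.vstack([X, X[0] + X[1]])            # append a linearly dependent row
G_dep = X_dep @ X_dep.T
print(np.min(np.linalg.eigvalsh(G_dep)))       # ~0: now only positive semidefinite
```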
7.5 Best Approximation and Least Squares
Orthogonal projection solves the best approximation problem:

$$\operatorname{proj}_W(v) = \arg\min_{w \in W} \|v - w\|$$

If $\hat{w} = \operatorname{proj}_W(v)$, then the residual $v - \hat{w}$ is orthogonal to $W$, and Pythagoras gives

$$\|v\|^2 = \|\hat{w}\|^2 + \|v - \hat{w}\|^2$$

Least squares is exactly this geometry in matrix form. Given an overdetermined system $Ax \approx b$, the least-squares solution minimizes

$$\|Ax - b\|_2^2$$

The residual must be orthogonal to the column space of $A$:

$$A^{\top}(b - A\hat{x}) = 0,$$

so

$$A^{\top} A \hat{x} = A^{\top} b$$

When $A$ has full column rank,

$$\hat{x} = (A^{\top} A)^{-1} A^{\top} b$$

Geometrically, $A\hat{x}$ is the projection of $b$ onto the column space of $A$.
This is why projection is not just a geometric curiosity. It is the linear algebra under classical regression, representation compression, denoising, and many approximation arguments used throughout machine learning.
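A short least-squares sketch verifying the two key facts above: the residual is orthogonal to col(A), and the fitted vector equals the projection of b onto col(A). The data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(10, 3))      # overdetermined: 10 equations, 3 unknowns
b = rng.normal(size=10)

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = b - A @ x_hat

print(np.allclose(A.T @ residual, 0.0))            # normal equations: residual orthogonal to col(A)

P = A @ np.linalg.inv(A.T @ A) @ A.T               # projector onto col(A), full column rank assumed
print(np.allclose(P @ b, A @ x_hat))               # A x_hat is the projection of b onto col(A)
```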
8. Affine Subspaces and Convexity
8.1 Affine Subspaces
A linear subspace must pass through the origin. Many important geometric sets do not. An affine subspace is a translated subspace:

$$A = v_0 + W = \{v_0 + w : w \in W\}$$

Here $v_0$ is a base point and $W$ is a linear subspace called the direction space.
Examples:
- a line not through the origin in $\mathbb{R}^n$
- a plane in $\mathbb{R}^3$ defined by $a^{\top} x = b$ with $b \neq 0$
- the probability simplex before nonnegativity is imposed, because $\sum_i p_i = 1$ is an affine constraint
Affine spaces are not vector spaces unless they happen to pass through the origin. You can subtract two points in an affine space to get a direction vector, but you cannot usually add two points and stay in the set.
Linear subspace vs affine subspace
subspace W affine set v0 + W
/ /
/ /
----+----> x ----/----> x
/ /
/
through 0 shifted away from 0
8.2 Convex Sets
A set $C$ is convex if, whenever $x, y \in C$ and $\lambda \in [0, 1]$,

$$\lambda x + (1 - \lambda) y \in C$$
Important convex sets:
- every subspace
- every affine subspace
- every norm ball for a true norm
- every half-space
- the probability simplex
The convex hull of a set $S$, written $\operatorname{conv}(S)$, is the set of all convex combinations of points in $S$. It is the smallest convex set containing $S$.
This matters directly for attention. The output of a single attention head at one position is

$$\sum_j \alpha_j v_j, \qquad \alpha_j \ge 0, \quad \sum_j \alpha_j = 1,$$

so it lies in the convex hull of the value vectors at that position.
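A tiny numerical illustration that softmax weights form a convex combination, so the head output cannot escape the hull of its value vectors (checked here via coordinate-wise bounds, which convex-hull membership implies):

```python
import numpy as np

scores = np.array([2.0, -1.0, 0.5])
weights = np.exp(scores) / np.exp(scores).sum()    # softmax: nonnegative, sums to 1

V = np.array([[0.0, 1.0],
              [4.0, -2.0],
              [1.0, 3.0]])                         # three value vectors in R^2 (rows)
output = weights @ V

print(np.isclose(weights.sum(), 1.0), np.all(weights >= 0))                    # True True
print(np.all(V.min(axis=0) <= output) and np.all(output <= V.max(axis=0)))     # True
```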
Convex vs non-convex
Convex: segment stays inside
*-----------*
Non-convex: segment leaves the set
*---- gap ----*
8.3 Hyperplanes and Half-Spaces
A hyperplane in $\mathbb{R}^n$ is a set of the form

$$H = \{x \in \mathbb{R}^n : w^{\top} x = b\}$$

for some nonzero normal vector $w$.
Geometrically, a hyperplane is an $(n-1)$-dimensional affine subspace. It cuts space into two half-spaces:

$$\{x : w^{\top} x \ge b\} \quad \text{and} \quad \{x : w^{\top} x \le b\}$$

Hyperplanes are the basic decision surfaces of linear models. A neuron computes

$$w^{\top} x + b$$
Convex analysis adds a deeper theorem: disjoint convex sets can often be separated by hyperplanes. This is the geometric basis for max-margin classification, linear probes, and many duality arguments.
Hyperplane and half-spaces
half-space hyperplane half-space
xxxxx | .....
xxxxx | w^T x = b .....
xxxxx | .....
^
normal direction w
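A minimal sketch of a hyperplane as a decision surface; the weight vector and offset are arbitrary.

```python
import numpy as np

w = np.array([1.0, -2.0])    # normal vector of the hyperplane w^T x = b
b = 0.5

def side(x):
    """+1 on one half-space, -1 on the other, 0 on the hyperplane itself."""
    return int(np.sign(w @ x - b))

print(side(np.array([2.0, 0.0])))    # +1  (w^T x = 2.0  > b)
print(side(np.array([0.0, 1.0])))    # -1  (w^T x = -2.0 < b)
print(side(np.array([0.5, 0.0])))    #  0  (w^T x = 0.5  = b)
```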
8.4 Convex Functions and Sublevel Sets
A function $f : \mathbb{R}^n \to \mathbb{R}$ is convex if for all $x, y$ and $\lambda \in [0, 1]$,

$$f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y)$$

Convex functions are important because their sublevel sets

$$\{x : f(x) \le c\}$$

are convex. This makes optimization far more tractable.
Examples:
- every norm $x \mapsto \|x\|$ is convex
- the squared Euclidean norm $x \mapsto \|x\|_2^2$ is convex
- logistic loss and cross-entropy are convex in suitable linear-model settings
- deep network training objectives are generally not globally convex
The contrast matters. Classical optimization often uses convex structure globally. Deep learning rarely gets global convexity, but convex sets and convex penalties still appear everywhere locally and architecturally.
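A small numerical check of the defining inequality for the squared Euclidean norm at randomly sampled points (a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: float(np.sum(x ** 2))    # f(x) = ||x||_2^2, a convex function

ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lam = rng.uniform()
    ok &= f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-12
print(ok)   # True: the convexity inequality held at every sampled triple
```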