Lesson overview | Previous part | Next part
Linear Transformations: Part 7: Affine Transformations to Appendix B: Abstract Linear Algebra Perspective
7. Affine Transformations
7.1 Beyond Linearity: Affine Maps
A linear map fixes the origin: $T(0) = 0$. But many practical transformations in geometry and ML need to shift the origin - they include a translation component.
Definition. A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is an affine transformation if it has the form:
$$f(x) = Ax + b$$
where $A \in \mathbb{R}^{m \times n}$ is a matrix (the linear part) and $b \in \mathbb{R}^m$ is a vector (the translation).
Affine maps are linear maps composed with a translation. They are NOT linear (unless $b = 0$) - they fail the zero test: $f(0) = b \neq 0$ when $b \neq 0$.
Affine subspaces. The image of a linear subspace under an affine map is an affine subspace - a translate of a linear subspace. Solutions to $Ax = b$ (when they exist) form an affine subspace: $x_0 + \ker(A)$, where $x_0$ is any particular solution.
Composition of affine maps. If $f(x) = Ax + a$ and $g(y) = By + b$, then:
$$(g \circ f)(x) = B(Ax + a) + b = (BA)x + (Ba + b)$$
The composition is affine: linear part $BA$, translation $Ba + b$.
7.2 Homogeneous Coordinates
The elegant trick to make affine maps linear is to lift to one higher dimension.
Definition. The homogeneous coordinates of $x \in \mathbb{R}^n$ are $\tilde{x} = \begin{pmatrix} x \\ 1 \end{pmatrix} \in \mathbb{R}^{n+1}$.
The augmented matrix. The affine map $f(x) = Ax + b$ becomes:
$$\tilde{x} \mapsto \begin{pmatrix} A & b \\ 0^T & 1 \end{pmatrix} \begin{pmatrix} x \\ 1 \end{pmatrix} = \begin{pmatrix} Ax + b \\ 1 \end{pmatrix}$$
Now $f$ is a linear map in $\mathbb{R}^{n+1}$. The last coordinate is always 1 for "proper" points.
Key benefit: composition becomes matrix multiplication. Two affine maps compose as:
$$\begin{pmatrix} B & b_2 \\ 0^T & 1 \end{pmatrix} \begin{pmatrix} A & b_1 \\ 0^T & 1 \end{pmatrix} = \begin{pmatrix} BA & Bb_1 + b_2 \\ 0^T & 1 \end{pmatrix}$$
No special cases needed - composition is just matrix multiplication.
Inverse of an affine map:
$$\begin{pmatrix} A & b \\ 0^T & 1 \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} & -A^{-1}b \\ 0^T & 1 \end{pmatrix}$$
(Valid when $A$ is invertible.)
In computer graphics: Every rigid body transformation (rotation + translation) is an affine map, and sequences of transformations are composed by matrix multiplication in homogeneous coordinates. This is the foundation of 3D graphics pipelines (OpenGL, Vulkan).
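To make this concrete, here is a minimal NumPy sketch of composition and inversion via augmented matrices; the `augmented` helper and the specific maps are illustrative choices of ours, not a fixed API:

```python
import numpy as np

def augmented(A, b):
    """Build the (m+1)x(n+1) homogeneous-coordinate matrix for x -> Ax + b."""
    m, n = A.shape
    M = np.zeros((m + 1, n + 1))
    M[:-1, :-1] = A
    M[:-1, -1] = b
    M[-1, -1] = 1.0
    return M

# f: rotate by 90 degrees then translate; g: scale by 2 then translate.
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # linear part of f
t = np.array([1.0, 2.0])                  # translation of f
S = 2.0 * np.eye(2)                       # linear part of g
s = np.array([-1.0, 0.0])                 # translation of g

F, G = augmented(R, t), augmented(S, s)
x = np.array([3.0, 4.0])
x_h = np.append(x, 1.0)                   # homogeneous coordinates (x, 1)

# Composition g ∘ f is a single matrix product in homogeneous coordinates.
composed = G @ F
direct = S @ (R @ x + t) + s
assert np.allclose((composed @ x_h)[:-1], direct)

# Inverse of f: linear part R^{-1}, translation -R^{-1} t.
F_inv = np.linalg.inv(F)
assert np.allclose((F_inv @ (F @ x_h))[:-1], x)
print("composed linear part:\n", composed[:-1, :-1])
print("composed translation:", composed[:-1, -1])
```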
7.3 Neural Network Layers as Affine Maps
A single fully-connected layer computes:
$$z = Wx + b, \qquad a = \sigma(z)$$
The pre-activation $z = Wx + b$ is an affine map from $\mathbb{R}^{n_{\text{in}}}$ to $\mathbb{R}^{n_{\text{out}}}$. In homogeneous coordinates:
$$\begin{pmatrix} z \\ 1 \end{pmatrix} = \begin{pmatrix} W & b \\ 0^T & 1 \end{pmatrix} \begin{pmatrix} x \\ 1 \end{pmatrix}$$
Why bias matters. Without bias ($b = 0$), each layer is linear and the hyperplanes separating classes must pass through the origin. With bias, the decision boundary can be placed anywhere. This is the geometric reason bias terms dramatically increase expressive power.
Multi-layer without activations. A network without nonlinearities is still affine:
$$W_2(W_1 x + b_1) + b_2 = (W_2 W_1)x + (W_2 b_1 + b_2)$$
Multiple affine layers compose to a single affine layer. Depth without nonlinearity gives no expressive benefit.
BatchNorm as affine rescaling. After normalizing to zero mean and unit variance, BatchNorm applies a learned affine transformation $y = \gamma \hat{x} + \beta$ (elementwise). This is an affine map on the normalized activations, restoring the capacity to represent any desired scale and shift.
Embedding layers. An embedding layer maps token indices to vectors. Selecting the $i$-th embedding is equivalent to multiplying by the one-hot vector $e_i$: $E^T e_i$ (the $i$-th row of $E$). This is linear in the one-hot representation. The unembedding maps representations to logits: $\ell = W_U h$, a pure linear map.
8. The Jacobian as Linear Approximation
8.1 Linearization of Nonlinear Maps
Every differentiable function is "locally linear" - at any point, it looks like a linear map to first order. This principle underlies calculus, optimization, and all of numerical analysis.
Definition (Total Derivative). A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is differentiable at $x_0$ if there exists a linear map $Df(x_0): \mathbb{R}^n \to \mathbb{R}^m$ such that:
$$\lim_{h \to 0} \frac{\|f(x_0 + h) - f(x_0) - Df(x_0)h\|}{\|h\|} = 0$$
The linear map $Df(x_0)$ is the total derivative or Fréchet derivative of $f$ at $x_0$. The first-order approximation is:
$$f(x_0 + h) \approx f(x_0) + Df(x_0)h$$
For small $h$, the function looks like an affine map: constant $f(x_0)$ plus linear correction $Df(x_0)h$.
Geometric meaning. At each point $x_0$, the total derivative is the best linear approximation to $f$ near $x_0$. It maps directions (tangent vectors at $x_0$) to directions (tangent vectors at $f(x_0)$). This is the pushforward of vectors.
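A small numerical sketch of this principle: estimate the Jacobian of an arbitrary nonlinear map (the function here is our own illustrative choice) by central finite differences, and watch the first-order approximation error shrink as $\|h\| \to 0$:

```python
import numpy as np

def f(x):
    """An illustrative nonlinear map R^2 -> R^2."""
    return np.array([np.sin(x[0]) * x[1], x[0] ** 2 + np.exp(x[1])])

def jacobian_fd(f, x, eps=1e-6):
    """Central finite-difference Jacobian: column j approximates df/dx_j."""
    n = x.size
    cols = []
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        cols.append((f(x + e) - f(x - e)) / (2 * eps))
    return np.stack(cols, axis=1)

x0 = np.array([0.5, -1.0])
J = jacobian_fd(f, x0)

# First-order approximation: f(x0 + h) ≈ f(x0) + J h; the error is o(||h||).
for scale in [1e-1, 1e-2, 1e-3]:
    h = scale * np.array([1.0, 1.0]) / np.sqrt(2)
    err = np.linalg.norm(f(x0 + h) - (f(x0) + J @ h))
    print(f"||h|| = {scale:.0e}, approximation error = {err:.2e}")
# The error shrinks roughly quadratically in ||h||, confirming local linearity.
```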
8.2 The Jacobian Matrix
Definition. The matrix representation of $Df(x_0)$ in standard coordinates is the Jacobian matrix:
$$J_f(x_0) = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix}$$
Entry $(i, j)$: $\partial f_i / \partial x_j$ - how the $i$-th output changes with the $j$-th input.
Shape mnemonic: output dimension $\times$ input dimension: $m \times n$ for $f: \mathbb{R}^n \to \mathbb{R}^m$.
Special cases:
- $f: \mathbb{R}^n \to \mathbb{R}$ (scalar function): Jacobian is the row gradient $(\nabla f)^T$, a $1 \times n$ matrix.
- $f: \mathbb{R} \to \mathbb{R}^m$ (curve): Jacobian is the column tangent vector $f'(t)$, an $m \times 1$ matrix.
- $f: \mathbb{R}^n \to \mathbb{R}^n$ (same dim): $J_f$ is square; $|\det J_f|$ is the local volume scaling.
The Jacobian of softmax. For $s = \mathrm{softmax}(z)$ where $s_i = e^{z_i} / \sum_k e^{z_k}$:
$$\frac{\partial s_i}{\partial z_j} = s_i(\delta_{ij} - s_j), \qquad J = \mathrm{diag}(s) - ss^T$$
This is a rank-deficient matrix (rank at most $n - 1$): softmax is invariant to adding a constant to its input, so the Jacobian has the constant vector $\mathbf{1}$ in its null space ($J\mathbf{1} = s - s = 0$).
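A quick NumPy check of both claims (the closed form and the null space), for an arbitrary input of our choosing:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

z = np.array([1.0, -0.5, 2.0, 0.3])
s = softmax(z)

# Closed-form Jacobian: J = diag(s) - s s^T.
J = np.diag(s) - np.outer(s, s)

# The constant vector is in the null space: softmax is invariant
# to adding a constant to its input, so J @ 1 = 0.
ones = np.ones_like(z)
print("J @ 1 =", J @ ones)                     # ~ zero vector
print("rank(J) =", np.linalg.matrix_rank(J))   # n - 1 = 3
```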
The Jacobian of ReLU. For $f(x) = \mathrm{ReLU}(x) = \max(x, 0)$ (elementwise):
$$J = \mathrm{diag}\big(\mathbb{1}[x_i > 0]\big)$$
A diagonal matrix with 1s where $x_i > 0$ and 0s elsewhere. ReLU's Jacobian is a projection - it zeroes out the gradient for dead neurons.
The Jacobian of layer normalization. Layer norm is more complex: it applies mean subtraction, variance normalization, and an affine transformation. Its Jacobian is a projection onto a specific hyperplane (orthogonal to the constant vector), scaled and shifted.
8.3 Chain Rule = Composition of Jacobians
The chain rule for vector-valued functions states: if $f: \mathbb{R}^n \to \mathbb{R}^m$ and $g: \mathbb{R}^m \to \mathbb{R}^p$, then:
$$J_{g \circ f}(x) = J_g(f(x)) \, J_f(x)$$
This is matrix multiplication of Jacobians. The composition of two differentiable maps has a Jacobian equal to the matrix product of their individual Jacobians (evaluated at the appropriate points).
Backpropagation is reverse-mode Jacobian accumulation. For a network $x \mapsto z_1 \mapsto \cdots \mapsto z_L = y$ with loss $\ell(y)$:
$$\nabla_x \ell = J_1^T J_2^T \cdots J_L^T \, \nabla_y \ell$$
Reading right to left: start with $\nabla_y \ell$, multiply by $J_L^T$, then $J_{L-1}^T$, etc. At each step, we multiply by the transpose Jacobian of a layer - which is the dual map of that layer's linear approximation.
Computational graph view. Each node in the computation graph stores its local Jacobian. Forward pass evaluates the functions; backward pass multiplies the transpose Jacobians (right to left). This is the mathematical content of autograd.
BACKPROPAGATION AS JACOBIAN CHAIN
========================================================================
Forward:  x --J_1--> z_1 --J_2--> z_2 -- ... --J_L--> y ---> ell
Backward: d(ell)/dx <--J_1^T-- d(ell)/dz_1 <--J_2^T-- d(ell)/dz_2 <-- ... <--J_L^T-- grad_y ell
Each backward step: (grad at input) = J^T x (grad at output)
                                    = (dual map) applied to incoming gradient
========================================================================
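A minimal sketch of this accumulation for a two-layer ReLU network with a quadratic loss (the architecture and shapes are our own toy choices), checked against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def forward(x):
    z1 = W1 @ x + b1
    a1 = np.maximum(z1, 0.0)                     # ReLU
    z2 = W2 @ a1 + b2
    return z1, a1, z2, 0.5 * np.sum(z2 ** 2)     # loss = ||z2||^2 / 2

x = rng.normal(size=3)
z1, a1, z2, loss = forward(x)

# Backward pass: multiply transpose Jacobians right-to-left.
g = z2                               # d(loss)/dz2 for the quadratic loss
g = W2.T @ g                         # through layer 2: J = W2, apply W2^T
g = (z1 > 0).astype(float) * g       # through ReLU: J is a diagonal 0/1 mask
grad_x = W1.T @ g                    # through layer 1: J = W1

# Check against central finite differences.
eps = 1e-6
fd = np.array([(forward(x + eps * np.eye(3)[i])[3]
                - forward(x - eps * np.eye(3)[i])[3]) / (2 * eps)
               for i in range(3)])
print("max |analytic - finite diff| =", np.abs(grad_x - fd).max())
```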
Why Jacobians matter for training dynamics:
- Large Jacobian singular values -> exploding gradients
- Small Jacobian singular values -> vanishing gradients
- The spectral norm of $J_f$ measures how much the linear approximation of $f$ can amplify inputs
For AI: Gradient clipping, careful weight initialization (Xavier, He), and residual connections all address the Jacobian conditioning problem. Residual connections add an identity Jacobian contribution: $J = I + J_f$, which prevents the singular values from collapsing to zero (vanishing gradients).
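A sketch of this effect, assuming small random per-layer Jacobians as a stand-in for trained weights: the plain product of 20 layer Jacobians collapses toward zero, while the residual product of $(I + J_f)$ factors stays well-conditioned:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 32, 20

J_plain = np.eye(n)
J_resid = np.eye(n)
for _ in range(L):
    Jf = 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)  # small layer Jacobian
    J_plain = Jf @ J_plain                            # layer: x -> f(x)
    J_resid = (np.eye(n) + Jf) @ J_resid              # layer: x -> x + f(x)

print("plain smallest singular value:",
      np.linalg.svd(J_plain, compute_uv=False).min())
print("resid smallest singular value:",
      np.linalg.svd(J_resid, compute_uv=False).min())
# The plain product collapses toward zero (vanishing gradients);
# the residual product stays close to the identity's singular values.
```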
9. Applications in Machine Learning
9.1 Attention as Linear Projections
The scaled dot-product attention mechanism (Vaswani et al., 2017) is built entirely from linear transformations applied to a sequence of token representations.
Setup. Given an input sequence $X \in \mathbb{R}^{L \times d}$ ($L$ tokens, $d$-dimensional residual stream), three learned projection matrices $W_Q, W_K, W_V \in \mathbb{R}^{d \times d_k}$ define:
$$Q = XW_Q, \qquad K = XW_K, \qquad V = XW_V$$
Each is a linear transformation: $W_Q$ projects each token into "query space", $W_K$ into "key space", $W_V$ into "value space".
Attention scores. The attention pattern $A = \mathrm{softmax}(QK^T / \sqrt{d_k})$ is a soft selection matrix. For a fixed query vector $q_i$, the attention scores $q_i \cdot k_j$ are dot products in key space - a linear functional applied to each key.
Output. $\mathrm{Attn}(Q, K, V) = AV$. The output is a weighted linear combination of value vectors - a linear operation parameterized by the attention weights $A$. The output projection $W_O$ maps back to the residual stream: another linear map.
Multi-head attention. Each head applies its own projection matrices $W_Q^{(i)}, W_K^{(i)}, W_V^{(i)}$, computes attention independently, and the results are concatenated and projected:
$$\mathrm{MultiHead}(X) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W_O$$
This is a composition and concatenation of linear maps. The expressivity comes from the multiplicative interaction $QK^T$ (which is bilinear, not linear) - but conditioned on a fixed attention pattern $A$, the rest is linear.
For AI: The "OV circuit" and "QK circuit" decomposition (Elhage et al., 2021) analyzes transformer attention by studying the linear maps $W_V W_O$ (value writing) and $W_Q W_K^T$ (key-query matching) separately. This is possible precisely because attention is compositionally linear.
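A minimal single-head attention forward pass in NumPy, with dimensions ($L$, $d$, $d_k$) chosen purely for illustration, making the linear/nonlinear split explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
L, d, d_k = 5, 16, 8                      # sequence length, model dim, head dim

X = rng.normal(size=(L, d))               # token representations (rows)
W_Q = rng.normal(size=(d, d_k)) / np.sqrt(d)
W_K = rng.normal(size=(d, d_k)) / np.sqrt(d)
W_V = rng.normal(size=(d, d_k)) / np.sqrt(d)
W_O = rng.normal(size=(d_k, d)) / np.sqrt(d_k)

Q, K, V = X @ W_Q, X @ W_K, X @ W_V       # three linear maps of the same input

scores = Q @ K.T / np.sqrt(d_k)           # bilinear in (Q, K)
A = np.exp(scores - scores.max(axis=-1, keepdims=True))
A /= A.sum(axis=-1, keepdims=True)        # row-wise softmax: the only nonlinearity

out = (A @ V) @ W_O                       # weighted values, then output projection

# Conditioned on a fixed attention pattern A, the map X -> (A @ X @ W_V) @ W_O
# is linear in X: the OV circuit is the single matrix W_V @ W_O.
print("attention rows sum to 1:", np.allclose(A.sum(axis=-1), 1.0))
print("output shape:", out.shape)
```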
9.2 LoRA: Fine-Tuning via Low-Rank Composition
Low-Rank Adaptation (Hu et al., 2021) is one of the most important parameter-efficient fine-tuning methods, and its design is a direct application of linear map theory.
Setup. Freeze the pretrained weight $W_0 \in \mathbb{R}^{d \times k}$. Add a trainable low-rank update:
$$W = W_0 + \Delta W = W_0 + BA$$
where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and $r \ll \min(d, k)$.
Rank-nullity interpretation. The map $\Delta W = BA$ has rank at most $r$. By rank-nullity:
$$\dim \ker(\Delta W) = k - \mathrm{rank}(\Delta W) \ge k - r$$
The update only changes the network's behavior along at most $r$ directions in the $k$-dimensional input space. When $k = 4096$ and $r = 8$, only 8 of 4096 input directions are affected - the update is extremely sparse in "direction space."
The two-layer interpretation. The update $\Delta W = BA$ is a composition of two maps:
- $A: \mathbb{R}^k \to \mathbb{R}^r$ - compression to rank-$r$ space
- $B: \mathbb{R}^r \to \mathbb{R}^d$ - expansion back to full dimension
Initializing $A$ randomly and $B = 0$ ensures $\Delta W = 0$ at the start of fine-tuning, so the model starts from the pretrained weights.
Parameter count. A full update $\Delta W \in \mathbb{R}^{d \times k}$ would have $dk$ parameters. LoRA uses $r(d + k)$ parameters. Savings ratio: $dk / \big(r(d + k)\big)$. For $d = k = 4096$, $r = 8$: savings factor of 256.
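A numerical sketch of these claims; the dimensions ($d = 512$, $r = 8$) are illustrative, and the "trained" $B$ is just a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 512, 8

W0 = rng.normal(size=(d, d))              # frozen pretrained weight
B = np.zeros((d, r))                      # B initialized to zero
A = rng.normal(size=(r, d))               # A initialized randomly

# At initialization BA = 0, so the adapted model equals the pretrained one.
assert np.allclose(W0 + B @ A, W0)

B = rng.normal(size=(d, r))               # stand-in for B after training
dW = B @ A
print("rank(BA) =", np.linalg.matrix_rank(dW))          # at most r = 8

# Only r singular values are (numerically) nonzero.
sv = np.linalg.svd(dW, compute_uv=False)
print("nonzero singular values:", np.sum(sv > 1e-8 * sv[0]))

# Parameter counts: full update vs. LoRA.
print("full:", d * d, " LoRA:", 2 * d * r, " savings:", d * d / (2 * d * r))
```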
Extensions: DoRA (Weight-Decomposed Low-Rank Adaptation, 2024) decomposes $W$ into magnitude and direction, applying LoRA only to the direction component. GaLore (2024) applies low-rank projections to gradients rather than weights.
9.3 Linear Probes and the Linear Representation Hypothesis
Linear probing tests whether a feature is linearly decodable from a model's representations. Given representations $h_i \in \mathbb{R}^d$ and labels $y_i$, train a linear classifier:
$$\hat{y} = \sigma(w^T h + b)$$
If the probe achieves high accuracy, the feature is linearly represented - it corresponds to a direction $w$ in representation space.
The Linear Representation Hypothesis (Mikolov et al., 2013; Elhage et al., 2022; Park et al., 2023) states that high-level features (sentiment, syntax, factual attributes, world models) are encoded as directions in the residual stream - i.e., as linear features.
Evidence:
- Word2vec arithmetic: $\mathrm{king} - \mathrm{man} + \mathrm{woman} \approx \mathrm{queen}$. Semantic relationships are linear offsets.
- Steering vectors: adding a fixed direction $v$ to all residual stream activations controls model behavior (e.g., "banana" direction, sentiment directions).
- Probing studies: most tested syntactic and semantic features are linearly decodable.
Superposition. When there are more features than dimensions, the model stores features in near-orthogonal directions that partially overlap (superposition). This is still linear representation - just with interference.
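A synthetic sketch of a linear probe, along the lines of Exercise 8 below; it assumes scikit-learn is available, and all data is generated, not taken from a real model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, d = 2000, 64

# One "sentiment" direction v; class means sit at +v and -v, plus noise.
v = rng.normal(size=d)
v /= np.linalg.norm(v)
y = rng.integers(0, 2, size=n)
X = np.where(y[:, None] == 1, 1.0, -1.0) * v + 0.5 * rng.normal(size=(n, d))

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy:", probe.score(X, y))

# The learned weight vector recovers the feature direction.
w = probe.coef_.ravel()
print("cosine similarity with true direction:",
      abs(w @ v) / np.linalg.norm(w))
```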
For AI: If the linear representation hypothesis holds broadly, then:
- Linear algebra provides the right toolkit for model interpretability.
- Interventions on model behavior reduce to vector addition in representation space.
- Feature extraction is a linear map - PCA/SVD on activations finds meaningful directions.
9.4 Embedding and Unembedding
Token embeddings. The embedding matrix $E \in \mathbb{R}^{|V| \times d}$ maps vocabulary indices to $d$-dimensional vectors. Indexing row $i$ of $E$ is equivalent to the linear map $e_i \mapsto E^T e_i$ (one-hot selection). The embedding layer is linear in the one-hot representation.
Unembedding. The unembedding matrix $W_U \in \mathbb{R}^{|V| \times d}$ maps residual stream vectors to logits over the vocabulary:
$$\mathrm{logits} = W_U h$$
This is a pure linear map. The logit for token $t$ is $(W_U h)_t = w_t \cdot h$ - a dot product (linear functional) between the unembedding direction $w_t$ and the residual stream.
Logit lens. Applying $W_U$ to intermediate residual stream states (before the final layer) gives "early predictions" - showing what the model is computing at each layer. This technique (nostalgebraist, 2020) is possible because unembedding is linear.
Tied embeddings. Many models (GPT-2, LLaMA variants) use $W_U = E$ - the same matrix for both embedding and unembedding. This enforces consistency: the most likely next token after seeing context is the one whose embedding has the highest dot product with the final residual stream $h$ - i.e., $\arg\max_t \, e_t \cdot h$.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Assuming $T(0) \neq 0$ is possible for a linear map | Homogeneity immediately gives $T(0) = T(0 \cdot v) = 0 \cdot T(v) = 0$. Any map with $T(0) \neq 0$ is affine or nonlinear. | The zero-test is the fastest way to rule out linearity. |
| 2 | Treating translation as linear | $f(x) = x + c$ (with $c \neq 0$) fails additivity: $f(x + y) = x + y + c \neq f(x) + f(y) = x + y + 2c$ | Neural network layers are affine ($Wx + b$), not linear. This matters for composing and inverting. |
| 3 | Confusing rank of a map with rank of a matrix | Rank is basis-independent (it's the dimension of the image) but the matrix depends on the basis. | $\mathrm{rank}([T]_{\mathcal{B}}) = \mathrm{rank}(T)$ for any bases $\mathcal{B}$ - rank is a map invariant, not a matrix property. |
| 4 | Misapplying rank-nullity | Forgetting that rank-nullity applies to the domain dimension, not the codomain. For $T: \mathbb{R}^5 \to \mathbb{R}^3$, rank + nullity = 5, not 3. | Identify $\dim(\mathrm{domain})$ explicitly before applying the theorem. |
| 5 | Assuming $AB = BA$ | Matrix multiplication (= composition of linear maps) is generally not commutative. Even for square matrices, $AB \neq BA$ in general. | Non-commutativity is the default. Commutativity (e.g., diagonal matrices, polynomial functions of a matrix) is the special case. |
| 6 | Confusing similar matrices with equal matrices | $B = P^{-1}AP$ means they represent the same map in different bases - same eigenvalues, trace, determinant. But $A \neq B$ in general. | Similar matrices are equal only when the change-of-basis matrix commutes with them (e.g., $P = I$). |
| 7 | Thinking the kernel is trivial when $A$ is tall | A tall matrix ($m > n$) can still have a non-trivial null space if its columns are linearly dependent. | Compute $\mathrm{rank}(A)$. Nullity $= n - \mathrm{rank}(A)$, regardless of whether $m > n$. |
| 8 | Applying the inverse when only a one-sided inverse exists | A non-square matrix ($m \neq n$) cannot have a two-sided inverse. Writing $A^{-1}$ for such matrices is undefined. | Use the pseudo-inverse $A^+$ for the least-squares solution. |
| 9 | Forgetting that the Jacobian is a linear map, not just partial derivatives | The Jacobian matrix is the coordinate representation of the total derivative $Df(x_0)$. Partial derivatives individually give the columns but don't by themselves prove differentiability. | Differentiability requires the linear approximation error to go to zero - existence plus continuity of the partials is a sufficient condition. |
| 10 | Treating the gradient as a vector in the same space | Strictly, the differential $df_{x_0}$ lives in the dual space $(\mathbb{R}^n)^*$. In Euclidean space with the standard basis, the identification $(\mathbb{R}^n)^* \cong \mathbb{R}^n$ makes this invisible. But for optimization on manifolds or with non-Euclidean metrics, treating gradients as primal vectors gives wrong answers. | Use the gradient as a covector when working with Fisher information, natural gradient, or Riemannian optimization. |
| 11 | Assuming all bijective functions are linear isomorphisms | A function can be bijective but not linear. E.g., $f: \mathbb{R} \to \mathbb{R}$, $f(x) = x^3$ is bijective but not linear ($f(2x) = 8x^3 \neq 2f(x)$). | Isomorphisms must be both bijective AND linear. Check both conditions. |
| 12 | Forgetting that $\mathrm{im}(T)$ is in $W$, not in $V$ | The image is a subspace of the codomain $W$, not the domain $V$. The null space is in $V$. | Draw the diagram: $T: V \to W$. $\ker(T) \subseteq V$. $\mathrm{im}(T) \subseteq W$. |
11. Exercises
Exercise 1 * - Kernel and Image Computation
Goal: Given explicit matrices, compute kernel and image and verify rank-nullity.
(a) For a singular $3 \times 3$ matrix $A$ of your choice (columns linearly dependent): find a basis for $\ker(A)$, a basis for $\mathrm{im}(A)$, and verify rank + nullity = 3.
(b) For $P: \mathbb{R}^3 \to \mathbb{R}^2$, $P(x, y, z) = (x, y)$ (projection to the first two coordinates): identify kernel and image without computation.
(c) For the differentiation map $D$ with matrix as in 3.1: find kernel and image dimensions.
Exercise 2 * - Matrix of a Linear Map in Non-Standard Basis
Goal: Construct the matrix of a given transformation in a specified basis.
Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ be reflection across the line $y = x$ (i.e., $T(x, y) = (y, x)$). Basis $\mathcal{B} = \{v_1, v_2\}$ with $v_1 = (1, 1)$, $v_2 = (1, -1)$.
(a) Write the standard matrix of in the standard basis.
(b) Compute the change-of-basis matrix $P$ from $\mathcal{B}$ to the standard basis.
(c) Compute $[T]_{\mathcal{B}} = P^{-1}[T]P$. What is notable about this result?
(d) Explain geometrically why $T$ is diagonal in the basis $\mathcal{B}$.
Exercise 3 * - Rank-Nullity and Linear Systems
Goal: Use rank-nullity to understand solution sets of linear systems.
(a) $A \in \mathbb{R}^{4 \times 5}$ has rank 3. What is the nullity? How many free variables are there in $Ax = b$?
(b) $A \in \mathbb{R}^{4 \times 6}$ has rank 4. Is $Ax = b$ always solvable? What is the nullity of $A$?
(c) Prove: if $T: V \to V$ is a linear map on a finite-dimensional space, then $T$ is injective if and only if $T$ is surjective.
Exercise 4 ** - Projection Operator Construction
Goal: Build an orthogonal projection onto a given subspace and verify idempotency.
Let $U = \mathrm{span}\{(1, 1, 0), (0, 1, 1)\} \subseteq \mathbb{R}^3$.
(a) Orthonormalize the spanning set using Gram-Schmidt.
(b) Construct the projection matrix $P = QQ^T$ where $Q$ has the orthonormal basis as columns.
(c) Verify: $P^2 = P$, $P^T = P$, and $Pu = u$ for all $u \in U$.
(d) Compute $I - P$ and verify it is also a projection. What does it project onto?
Exercise 5 ** - Jacobian of Softmax
Goal: Derive and verify the Jacobian of the softmax function.
(a) Derive $\partial s_i / \partial z_j$ for $s = \mathrm{softmax}(z)$, handling the cases $i = j$ and $i \neq j$ separately.
(b) Show that the Jacobian is $J = \mathrm{diag}(s) - ss^T$.
(c) Verify that $J\mathbf{1} = 0$ (the Jacobian kills constant vectors). Why does this make sense?
(d) Show that the rank of $J$ is at most $n - 1$.
Exercise 6 ** - Affine Map Composition via Homogeneous Coordinates
Goal: Compose affine transformations using augmented matrices.
In $\mathbb{R}^2$, let:
- $T_1$: rotation by $90°$ then translate by $(1, 0)$
- $T_2$: scale by $2$ in both dimensions then translate by $(0, -1)$
(a) Write augmented matrices $M_1$ and $M_2$ for $T_1$ and $T_2$.
(b) Compute $M_2 M_1$ (apply $T_1$ then $T_2$). What are the effective rotation, scale, and translation?
(c) Apply the composition to the point $(1, 1)$. Verify by applying $T_1$ then $T_2$ directly.
Exercise 7 *** - LoRA Rank Analysis
Goal: Analyze the geometry of low-rank weight updates.
Let $W_0 \in \mathbb{R}^{d \times d}$ be a pretrained weight. A LoRA update with rank $r$ is $\Delta W = BA$ where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times d}$.
(a) What is $\mathrm{rank}(\Delta W)$ at most? What is the nullity of $\Delta W$?
(b) Generate random $B$ and $A$ and numerically verify: $\mathrm{rank}(BA) = r$.
(c) For a fixed input $x$, compute $\Delta W x = B(Ax)$. Show this is in $\mathrm{im}(B)$.
(d) Compute the singular values of $\Delta W$. How many are nonzero? What does this tell you about the effective dimension of the update?
(e) Compare the number of trainable parameters: $d^2$ vs $2dr$ for $d = 4096$, $r = 8$.
Exercise 8 *** - Linear Probing and the Linear Representation Hypothesis
Goal: Empirically test whether a concept is linearly represented in embedding space.
(a) Generate synthetic embeddings for binary sentiment: positive embeddings clustered near $+v$, negative near $-v$ for a fixed direction $v$, plus noise.
(b) Train a linear probe (logistic regression on top of embeddings) and measure accuracy.
(c) Apply PCA to the embeddings. Show that the first principal component aligns with $v$.
(d) Add a "superposition" condition: embed two independent binary features along non-orthogonal directions. Show that both features can be linearly decoded but with interference.
(e) Compute the mutual coherence of the feature directions. How does it relate to probe accuracy?
12. Why This Matters for AI (2026 Perspective)
| Concept | AI Application | Why It Matters |
|---|---|---|
| Linear map axioms | Neural layer computation | Every forward pass is a composition of linear maps + activations; understanding linearity separates what the layer does from what nonlinearity adds |
| Kernel and image | Information compression | The null space of a weight matrix is "dead information" - inputs in $\ker(W)$ produce no signal. Attention heads have low-rank structure that defines their effective null space |
| Rank-nullity theorem | LoRA, model compression | LoRA exploits rank-nullity: a rank-$r$ update has a null space of dimension at least $n - r$, leaving most of the input space unaffected - this is why it works with few parameters |
| Change of basis | Diagonalization, eigenfeatures | The eigenvalue decomposition of weight matrices (studied in mechanistic interpretability) is a change to the "natural basis" in which the layer acts by simple scaling |
| Composition = multiplication | Deep network analysis | The effective weight of a $k$-layer linear network is the single matrix $W_k \cdots W_2 W_1$ - depth without nonlinearity has zero benefit. All depth benefits require nonlinearity |
| Projection operators | Attention heads, layer norm | Attention heads project onto query/key/value subspaces; layer norm projects onto the hyperplane of mean-zero vectors; understanding projections clarifies what information is preserved |
| Affine maps + bias | Universal approximation | Bias terms are essential for shifting decision boundaries. Without bias, the model cannot represent affine functions - only linear ones. Universal approximation requires affine layers |
| Jacobian and chain rule | Backpropagation | Every .backward() call is Jacobian-matrix multiplication via the chain rule. Gradient explosion/vanishing is about Jacobian singular value growth/decay through layers |
| Dual maps and transposes | Gradient computation | The backward pass uses transpose weight matrices - these are the dual maps. Natural gradient and Fisher information matrix methods exploit the geometry of the dual space |
| Linear representation hypothesis | Mechanistic interpretability | If features are linear, activation patching, steering vectors, and linear probing all work. This is why "linear algebra for interpretability" (e.g., representation engineering, logit lens) is a coherent research program |
13. Conceptual Bridge
Looking Back
This section builds on two foundational pillars from earlier in the curriculum.
From Chapter 2, 06: Vector Spaces and Subspaces: We studied the axioms of abstract vector spaces (closure, associativity, identity, inverse, distributivity) and their subspaces (spans, null spaces, column spaces). Linear transformations are the maps between these abstract structures - the morphisms of the category of vector spaces. The four fundamental subspaces (column space, null space, row space, left null space) that were defined for matrices are now understood as $\mathrm{im}(T)$, $\ker(T)$, $\mathrm{im}(T^T)$, and $\ker(T^T)$ - intrinsic properties of the map, not the matrix.
From 01: Eigenvalues and Eigenvectors and 02: SVD: Both eigendecomposition and SVD are studied as decompositions of linear maps into simple pieces. Eigendecomposition finds a basis in which $T$ acts by scaling. SVD finds two orthonormal bases (for domain and codomain) in which $T$ acts by scaling. These are special cases of the general change-of-basis machinery developed here in 3.
From 03: PCA: PCA uses the linear structure of the data covariance matrix - the covariance is built from linear maps - to find the principal directions via SVD. The whitening transform and PCA projection are linear maps, and their geometry (dimension reduction, preserved variance) follows directly from the rank-nullity theorem.
Looking Forward
The abstract machinery of linear transformations will appear throughout the rest of the curriculum in concrete technical forms.
05: Orthogonality and Orthonormality: Gram-Schmidt is an algorithm that constructs a specific linear map (change of basis to an orthonormal basis). QR decomposition factors a linear map as $A = QR$: a triangular map $R$ followed by an orthogonal map $Q$. Orthogonal projections (5.1 here) are studied in depth there.
06: Matrix Norms: The spectral norm $\|A\|_2$ measures how much a linear map can amplify vectors. The nuclear norm $\|A\|_*$ measures the "effective rank." These norms quantify properties of the linear map that are not apparent from individual matrix entries.
Chapter 4: Calculus Fundamentals: The Jacobian (8 here) is the bridge between linear algebra and calculus. Multivariable calculus is essentially the study of how linear maps (Jacobians) approximate smooth nonlinear maps. The Hessian is the second-order analogue - a bilinear map that measures curvature. The implicit function theorem, inverse function theorem, and change-of-variables formula all rely on the Jacobian being an invertible linear map at the point of interest.
Curriculum Position
POSITION IN THE MATH FOR LLMS CURRICULUM
========================================================================
Chapter 1: Mathematical Foundations
Chapter 2: Linear Algebra Basics
+-- Vectors, Matrix Operations, Systems of Equations
+-- Determinants, Matrix Rank
+-- Vector Spaces ----------------------------+
| prerequisite
Chapter 3: Advanced Linear Algebra |
+-- 01-Eigenvalues --------------------------+
+-- 02-SVD ---------------------------------+
+-- 03-PCA ---------------------------------+
| |
+-- 04-Linear Transformations <--------------+
| (YOU ARE HERE)
| Kernel, Image, Rank-Nullity
| Matrix Representation, Change of Basis
| Composition, Isomorphisms, Jacobian
| Affine Maps, Dual Spaces, AI Applications
| |
+-- 05-Orthogonality <----------------------+
+-- 06-Matrix Norms <-----------------------+
+-- 07-Positive Definite Matrices |
+-- 08-Matrix Decompositions |
        |
        v
Chapter 4: Calculus Fundamentals
(Jacobians and chain rule developed further)
========================================================================
The unifying theme: Every major algorithm in deep learning is a composition of linear maps and nonlinearities. Understanding this section means understanding the language in which every neural network, every attention mechanism, every gradient computation, and every interpretability method is written. The matrix is not the territory - but knowing how to move between coordinate representations (change of basis), how to measure what a map collapses (kernel and rank), how to compose maps (matrix multiplication), and how to invert them (isomorphisms and pseudo-inverses) gives you the full algebraic toolkit to reason about any linear system you encounter in machine learning.
<- PCA | Back to Advanced Linear Algebra | Orthogonality ->
Appendix A: Extended Examples and Computations
A.1 Computing the Matrix of Differentiation
We work out the differentiation operator in full detail to build intuition for linear maps on function spaces.
Setting. Let $V = P_3$ (polynomials of degree $\le 3$) with basis $\{1, x, x^2, x^3\}$ and $W = P_2$ with basis $\{1, x, x^2\}$.
Define $D: V \to W$ by $D(p) = p'$ (differentiation).
Step 1: Apply $D$ to each basis vector of $V$.
- $D(1) = 0$ -> coordinate vector $(0, 0, 0)$
- $D(x) = 1$ -> coordinate vector $(1, 0, 0)$
- $D(x^2) = 2x$ -> coordinate vector $(0, 2, 0)$
- $D(x^3) = 3x^2$ -> coordinate vector $(0, 0, 3)$
Step 2: Assemble into matrix.
$$[D] = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}$$
Step 3: Use the matrix. Differentiate $p(x) = 2 + 3x - x^2 + 4x^3$. In $V$-coordinates: $p = (2, 3, -1, 4)$.
$$[D]\,p = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 2 \\ 3 \\ -1 \\ 4 \end{pmatrix} = \begin{pmatrix} 3 \\ -2 \\ 12 \end{pmatrix}$$
So $p'(x) = 3 - 2x + 12x^2$. Check: differentiating directly gives $p'(x) = 3 - 2x + 12x^2$. OK
Kernel of $D$: The null space is the constant polynomials, $\mathrm{span}\{1\}$. Dimension 1.
Image of $D$: All polynomials of degree $\le 2$ (since every $q \in P_2$ equals $D(p)$ for some $p \in P_3$). Dimension 3.
Rank-nullity check: $\mathrm{rank} + \mathrm{nullity} = 3 + 1 = 4 = \dim P_3$. OK
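The same computation in NumPy (the polynomial matches the worked example above):

```python
import numpy as np

# Matrix of d/dx : P_3 -> P_2 in monomial bases {1, x, x^2, x^3} and {1, x, x^2}.
D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])

# Differentiate p(x) = 2 + 3x - x^2 + 4x^3, i.e. coordinates (2, 3, -1, 4).
p = np.array([2.0, 3.0, -1.0, 4.0])
print("p'(x) coordinates:", D @ p)        # (3, -2, 12) -> 3 - 2x + 12x^2

rank = np.linalg.matrix_rank(D)
print("rank =", rank, " nullity =", D.shape[1] - rank)   # 3 and 1: rank-nullity
```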
A.2 Change of Basis: Full Worked Example
Problem. Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ rotate vectors by an angle $\theta$ counterclockwise. Express $T$ in a rotated basis $\mathcal{B} = \{R_\varphi e_1, R_\varphi e_2\}$ (the standard basis rotated by $\varphi$).
Step 1: Standard matrix of $T$.
$$[T] = R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
Step 2: Change-of-basis matrix. The new basis vectors, in standard coordinates, are the columns of
$$P = R_\varphi = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}$$
Step 3: New matrix.
$$[T]_{\mathcal{B}} = P^{-1}[T]P = R_{-\varphi} R_\theta R_\varphi$$
Since rotation matrices commute (they all rotate around the same axis in 2D): $[T]_{\mathcal{B}} = R_\theta R_{-\varphi} R_\varphi = R_\theta = [T]$.
Key insight: A rotation in 2D has the same matrix in every orthonormal basis with the same orientation (since all such change-of-basis matrices are rotations $R_\varphi$). This is because $R_\theta$ commutes with all rotations: $R_\theta R_\varphi = R_\varphi R_\theta$.
A.3 The Null Space as a Subspace: Visualization
For $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix}$ (rows are multiples), let's find $\ker(A)$.
Row reduce: $\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{pmatrix}$
Free variables: $x_2$, $x_3$ (free). Back-substitute: $x_1 = -2x_2 - 3x_3$, so $\ker(A) = \mathrm{span}\{(-2, 1, 0), (-3, 0, 1)\}$.
This is a plane through the origin in $\mathbb{R}^3$. The map collapses this entire plane to zero. By rank-nullity: rank $1$, nullity $2$, and $1 + 2 = 3$. OK
The image of $A$ is $\mathrm{span}\{(1, 2)\}$ - a line in $\mathbb{R}^2$ - since all columns of $A$ are multiples of $(1, 2)$, so rank = 1.
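A NumPy check using the SVD, where the rows of $V^T$ beyond the rank give an orthonormal basis of the null space:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])           # second row = 2 x first row

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                     # rows of V^T beyond the rank span ker(A)

print("rank =", rank, " nullity =", A.shape[1] - rank)   # 1 and 2
print("null space basis (rows):\n", null_basis)
for v in null_basis:
    assert np.allclose(A @ v, 0.0)         # each basis vector is killed by A

# The image is the span of (1, 2): every column is a multiple of it.
print("columns of A:\n", A.T)
```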
Appendix B: Abstract Linear Algebra Perspective
B.1 The Category of Vector Spaces
In the language of category theory (a unifying framework for much of mathematics):
- Objects: Vector spaces over a field $F$
- Morphisms: Linear maps between them
- Composition: Function composition (= matrix multiplication)
- Identity morphism: The identity map $\mathrm{id}_V$ (= identity matrix $I$)
This forms a category denoted $\mathbf{Vect}_F$.
Isomorphisms in the category are exactly the invertible linear maps - this matches our definition of isomorphism in 4.3.
Functors are maps between categories that preserve the categorical structure. The "matrix representation" is a functor from $\mathbf{Vect}_F$ (with chosen bases) to the category of matrices. Changing bases corresponds to a natural transformation.
This perspective matters for ML because: neural network architectures are themselves categorical structures (composition of morphisms), and categorical thinking helps reason about when two architectures are "equivalent" (related by isomorphisms).
B.2 Infinite-Dimensional Extensions
The theory extends to infinite-dimensional spaces, where the familiar finite-dimensional results require modification.
Bounded linear operators. On Hilbert spaces (infinite-dimensional inner product spaces), the right notion of "linear transformation" is a bounded linear operator: $T$ satisfying $\|Tx\| \le C\|x\|$ for some constant $C$. Unbounded operators (like differentiation on $L^2$) require careful domain specification.
The spectral theorem for compact operators. A compact self-adjoint operator on a Hilbert space has a countable set of eigenvalues and an orthonormal basis of eigenvectors. This is the infinite-dimensional analogue of diagonalization.
Functional analysis. The study of linear maps on infinite-dimensional spaces is called functional analysis. Key results (Hahn-Banach, open mapping theorem, closed graph theorem) parallel finite-dimensional results but require additional technical hypotheses.
For AI: Neural network function classes are subsets of infinite-dimensional function spaces ($L^2$ or Sobolev spaces). Understanding the "size" and "complexity" of these classes uses infinite-dimensional linear algebra - e.g., kernel methods and Gaussian processes operate in reproducing kernel Hilbert spaces (RKHS), and neural tangent kernel theory analyzes infinitely wide networks using the spectral theory of linear operators.
B.3 The Tensor Product and Multilinear Maps
Bilinear maps. A map $B: V \times W \to \mathbb{R}$ is bilinear if it is linear in each argument separately. The attention score $q^T k$ is bilinear (linear in $q$, linear in $k$, but not linear jointly: $B(q_1 + q_2, k_1 + k_2) \neq B(q_1, k_1) + B(q_2, k_2)$ in general).
The tensor product $V \otimes W$ is the universal space for bilinear maps: every bilinear map $B: V \times W \to \mathbb{R}$ factors through a unique linear map $\tilde{B}: V \otimes W \to \mathbb{R}$. This is why tensors (in the ML sense: multi-dimensional arrays) are called tensors - they represent multilinear maps.
For AI: The key operation in self-attention, $q_i^T k_j = (W_Q^T x_i)^T (W_K^T x_j)$, is a bilinear form in the token representations. The matrix $W_Q W_K^T$ in the "weight matrix" formulation of attention ($x_i^T W_Q W_K^T x_j$) makes this explicit: $W_Q W_K^T$ is the matrix of the bilinear form. This is the reason attention is more expressive than standard linear transformations - it computes a bilinear (quadratic) function of the input.