"To understand a graph, listen to its spectrum. The eigenvalues of the Laplacian are the resonant frequencies of the graph - they reveal clusters, bottlenecks, expansion, and the rate at which information diffuses across every edge."
Overview
Spectral graph theory is the study of graphs through the eigenvalues and eigenvectors of matrices naturally associated with them - principally the adjacency matrix $A$, the degree matrix $D$, and the graph Laplacian $L = D - A$. The central insight is that algebraic properties of these matrices correspond precisely to combinatorial and geometric properties of the graph: the number of connected components equals the multiplicity of eigenvalue zero; the second-smallest eigenvalue $\lambda_2$ quantifies how "hard" the graph is to disconnect; the eigenvectors of $L$ form a natural Fourier basis for signals defined on the graph.
This connection between spectral algebra and graph topology has made spectral graph theory one of the most productive areas of modern discrete mathematics - and, increasingly, one of the most important mathematical foundations for machine learning. Spectral clustering (Shi & Malik, 2000; Ng, Jordan & Weiss, 2002) remains a gold-standard unsupervised learning method for non-convex clusters. Graph Convolutional Networks (Kipf & Welling, 2017) are derived from first principles as first-order Chebyshev approximations to spectral filters. Laplacian positional encodings power modern graph Transformers (Dwivedi et al., 2022; GPS, 2022). Even language model attention matrices can be analyzed as weighted graphs whose spectral properties reveal information flow.
This section develops the full theory from scratch. We begin with the three fundamental graph matrices and their spectral properties, build up to the Cheeger inequality and expander graphs, construct the graph Fourier transform, derive spectral clustering rigorously, and connect everything to modern AI applications. Students who complete this section will have the mathematical fluency to read GNN papers, design graph-based ML systems, and understand why spectral methods work when they work - and why they fail when they do.
Prerequisites
- Graph definitions: $G = (V, E)$, adjacency, degree, paths, connectivity, bipartiteness - 11-01 Graph Basics
- Adjacency matrix, degree matrix, Laplacian as data structures - 11-02 Graph Representations
- Eigenvalues, eigenvectors, spectral theorem for symmetric matrices - 03-01 Eigenvalues and Eigenvectors
- Positive semidefinite matrices and quadratic forms - 03-Advanced-Linear-Algebra
- Graph algorithms (BFS, max-flow - for Cheeger intuition) - 11-03 Graph Algorithms
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive derivations: Laplacian spectra, Fiedler vector bisection, Cheeger inequality, Graph Fourier Transform, spectral clustering, Laplacian eigenmaps, PageRank |
| exercises.ipynb | 8 graded exercises from Laplacian PSD proofs through spectral clustering and Laplacian positional encodings |
Learning Objectives
After completing this section, you will:
- Construct the adjacency matrix $A$, degree matrix $D$, and graph Laplacian $L = D - A$ for any graph, and derive the normalized variants $L_{\text{sym}}$ and $L_{\text{rw}}$
- Prove that the graph Laplacian is positive semidefinite using the quadratic form $x^\top L x = \sum_{(i,j) \in E} w_{ij}(x_i - x_j)^2$
- State and prove that the multiplicity of eigenvalue $0$ of $L$ equals the number of connected components
- Define algebraic connectivity $\lambda_2$ and interpret the Fiedler vector $v_2$ as a graph bisection tool
- State Cheeger's inequality and explain its implications for expander graphs and random walk mixing
- Define the Graph Fourier Transform and interpret graph signals in the spectral domain
- Derive spectral clustering (RatioCut and NCut) from graph partitioning relaxations
- Implement the Laplacian eigenmaps algorithm and connect it to spectral positional encodings in graph Transformers
- Derive the GCN layer as a first-order Chebyshev approximation to a spectral filter
- Analyze PageRank as a spectral problem on directed graphs
Table of Contents
- 1. Intuition
- 2. Graph Matrices and Their Spectra
- 3. The Fundamental Quadratic Form and PSD Structure
- 4. Algebraic Connectivity and the Fiedler Vector
- 5. Cheeger's Inequality and Graph Expansion
- 6. Graph Fourier Transform and Signal Processing
- 7. Spectral Filtering
- 8. Spectral Clustering
- 9. Laplacian Eigenmaps and Graph Embeddings
- 10. Directed Graph Spectra
- 11. Advanced Topics
- 12. Applications in Machine Learning
- 13. Common Mistakes
- 14. Exercises
- 15. Why This Matters for AI (2026 Perspective)
- 16. Conceptual Bridge
1. Intuition
1.1 Hearing the Shape of a Graph
In 1966, mathematician Mark Kac posed the question: "Can you hear the shape of a drum?" - meaning, can you reconstruct the geometry of a vibrating membrane from the frequencies it produces? The question turned out to have a negative answer in general, but it crystallized one of the deepest ideas in mathematics: the spectrum of a differential operator encodes geometric information.
Spectral graph theory asks the same question for discrete structures. A graph has an associated matrix - the Laplacian - whose eigenvalues form the graph spectrum. These numbers are not arbitrary: they encode whether the graph is connected, how tightly its communities are glued together, how quickly a random walk mixes across its edges, how hard it is to cut the graph in two.
Think of a social network. Each person is a node; each friendship is an edge. The graph has "natural frequencies": a society with two isolated groups (a disconnected graph) has a different spectrum from one that is fully interconnected. The small eigenvalues of $L$ correspond to smooth, slowly-varying signals - the overall community membership function. The large eigenvalues correspond to rapidly-oscillating signals - the microscopic variation from person to person. This is the graph analogue of low and high frequencies in audio.
Three statements, each surprising when first encountered, that spectral graph theory makes precise:
- The number of connected components of $G$ equals the number of times $0$ appears as an eigenvalue of $L$.
- The second-smallest eigenvalue $\lambda_2$ - the "Fiedler value" - tells you how hard it is to disconnect the graph. A graph is harder to cut when $\lambda_2$ is larger.
- The eigenvector corresponding to $\lambda_2$ - the "Fiedler vector" - assigns a real number to each vertex, and the sign of this number tells you which side of the best bisection each vertex belongs to.
These are not vague analogies. They are theorems with proofs, and they form the backbone of a theory that has become indispensable in machine learning.
1.2 The Three Graph Matrices
For a graph $G = (V, E)$ with $n$ vertices and $m$ edges, three matrices appear constantly:
Adjacency matrix $A \in \mathbb{R}^{n \times n}$: $A_{ij} = 1$ if $(i,j) \in E$ and $0$ otherwise.
For undirected graphs, $A$ is symmetric. For weighted graphs, $A_{ij} = w_{ij}$, the edge weight. The adjacency matrix encodes the direct connections in the graph.
Degree matrix $D$: a diagonal matrix with $D_{ii} = d_i = \sum_j A_{ij}$, the degree of vertex $i$. For weighted graphs, $d_i$ is the weighted degree (also called strength).
Graph Laplacian $L = D - A$: the central object of spectral graph theory. Explicitly: $L_{ii} = d_i$, $L_{ij} = -w_{ij}$ if $(i,j) \in E$, and $L_{ij} = 0$ otherwise.
The Laplacian is named after Pierre-Simon Laplace because it is the discrete analogue of the continuous Laplace operator $\Delta$. For a function $f$ defined on the vertices: $(Lf)(i) = \sum_{j : (i,j) \in E} w_{ij}\,\big(f(i) - f(j)\big)$.
This is the "discrete second derivative" - it measures how much the value at vertex $i$ differs from the average value among its neighbors.
For AI: In a Graph Neural Network, the operation $\hat{A}X$ (multiplying node features by the normalized adjacency matrix with self-loops) is equivalent to computing $(I - \hat{L})X$, where $\hat{L}$ is the normalized Laplacian of the self-loop-augmented graph - a Laplacian smoothing step. The Laplacian is implicitly present in every GNN layer.
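These matrices are small enough to verify by hand. The following is a minimal NumPy sketch (not taken from the companion notebooks): it builds $A$, $D$, and $L$ for a 4-cycle and checks the "discrete second derivative" formula entry by entry; the graph and signal are arbitrary illustrations.

```python
import numpy as np

# 4-cycle: 0-1-2-3-0 (an illustrative toy graph, not from the text)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))          # degree matrix
L = D - A                           # unnormalized graph Laplacian

f = np.array([3.0, 1.0, 4.0, 1.5])  # a signal on the vertices
Lf = L @ f

# (L f)(i) = sum over neighbors j of (f(i) - f(j))
for i in range(len(f)):
    neighbors = np.nonzero(A[i])[0]
    assert np.isclose(Lf[i], sum(f[i] - f[j] for j in neighbors))
print(Lf)
```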
1.3 Why Eigenvalues Reveal Structure
The Laplacian is a real symmetric positive semidefinite matrix. By the spectral theorem (03-Advanced-Linear-Algebra), it has a complete orthonormal basis of eigenvectors $u_1, \dots, u_n$ with real non-negative eigenvalues: $0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$.
Why is $\lambda_1 = 0$ always? Because $L\mathbf{1} = 0$ - the all-ones vector is always in the null space of $L$ (every row of $L$ sums to zero). The constant function "assign the same value to every vertex" has zero variation across every edge, so it has zero energy.
The deeper result: $\lambda_1 = \cdots = \lambda_k = 0 < \lambda_{k+1}$ if and only if the graph has exactly $k$ connected components. On a disconnected graph with $k$ components, the eigenvectors for eigenvalue $0$ are spanned by the indicator vectors of the components.
The smallest nonzero eigenvalue - $\lambda_2$ when the graph is connected - is called the algebraic connectivity or Fiedler value (after Miroslav Fiedler, who proved its key properties in 1973). A larger $\lambda_2$ means the graph is harder to disconnect; a $\lambda_2$ close to zero means there is almost a disconnection - a bottleneck.
The largest eigenvalue $\lambda_n$ gives the spectral radius of the Laplacian and satisfies $\lambda_n \le 2 d_{\max}$.
1.4 Historical Timeline
SPECTRAL GRAPH THEORY - HISTORICAL TIMELINE
========================================================================
1847 Kirchhoff - Matrix-Tree theorem; Laplacian for electrical circuits
1931 Whitney - Graph isomorphism; chromatic polynomials
1957 Collatz & - Systematic study of graph spectra begins
Sinogowitz
1970 Cheeger - Cheeger inequality (originally for manifolds)
1973 Fiedler - Algebraic connectivity; Fiedler vector; graph bisection
1985 Alon & Milman - Discrete Cheeger inequality for graphs
2000 Shi & Malik - Normalized Cuts and image segmentation
2001 Belkin & - Laplacian eigenmaps for manifold learning (journal 2003)
     Niyogi
2002 Ng, Jordan, - Spectral clustering algorithm (the standard version)
     Weiss
2004 Spielman & - Spectral sparsification; fast Laplacian solvers
Teng
2011 Hammond et al - Wavelets on graphs
2014 Bruna et al - Spectral graph CNNs (first spectral GNN)
2016 Defferrard - ChebNet: Chebyshev polynomial filters on graphs
et al
2017 Kipf & - GCN: first-order Chebyshev -> simple spatial rule
Welling
2022 Dwivedi et al - Laplacian positional encodings for graph Transformers
2022 Rampasek et - GPS: General, Powerful, Scalable graph Transformer
al with spectral PE
========================================================================
1.5 Roadmap of the Section
This section follows a deliberate progression from foundational algebra to modern AI applications:
SECTION ROADMAP
========================================================================
2 Graph Matrices Build the algebraic objects
down
3 Quadratic Form / PSD Prove fundamental spectral properties
down
4 Fiedler Vector Connect \lambda_2 to graph connectivity
down
5 Cheeger Inequality Connect \lambda_2 to cut structure and mixing
down
6 Graph Fourier Transform Signal processing on graphs
down
7 Spectral Filtering From Fourier to polynomial approximations
down
8 Spectral Clustering Partition graphs via eigenvectors
down
9 Laplacian Eigenmaps Embed graphs; PE for transformers
down
10 Directed Graphs PageRank; complex eigenvalues
down
11 Advanced Topics Sparsification; wavelets; random matrices
down
12 ML Applications KGs, molecules, LLM attention analysis
========================================================================
2. Graph Matrices and Their Spectra
2.1 Adjacency Matrix: Spectral View
Definition. For $G = (V, E)$ with $n$ vertices, the adjacency matrix $A \in \mathbb{R}^{n \times n}$ is defined by $A_{ij} = w_{ij}$ if $(i,j) \in E$ and $A_{ij} = 0$ otherwise (with $w_{ij} = 1$ for unweighted graphs).
Key spectral property: Walk counting. The entry $(A^k)_{ij}$ counts the number of walks of length exactly $k$ from vertex $i$ to vertex $j$. This follows by induction: $(A^k)_{ij} = \sum_m (A^{k-1})_{im} A_{mj}$ sums over all ways to reach $j$ in $k$ steps by first taking $k-1$ steps to reach $m$, then one step to $j$.
For AI: This walk-counting property is the spectral justification for why a $K$-layer GNN can "see" information from the $K$-hop neighborhood. The matrix power $A^K$ is what a linear GNN with $K$ layers computes.
Eigenvalues of $A$. For an undirected graph, $A$ is symmetric, so all eigenvalues are real. Let $\mu_1 \ge \mu_2 \ge \cdots \ge \mu_n$ denote the eigenvalues of $A$ in decreasing order. Key bounds:
- For any graph: $|\mu_i| \le d_{\max}$ (the maximum degree), since the spectral radius is bounded by the maximum row sum.
- For a $d$-regular graph: $\mu_1 = d$ with eigenvector $\mathbf{1}$.
- Bipartite graphs have symmetric spectra: $\mu$ is an eigenvalue iff $-\mu$ is.
- The number of distinct eigenvalues is at least $\operatorname{diam}(G) + 1$ (where $\operatorname{diam}(G)$ is the graph diameter).
Non-examples of symmetry: For a directed graph, $A$ is not symmetric and eigenvalues may be complex. This is why directed spectral theory (10) requires separate treatment.
2.2 Degree Matrix and Volume
The degree matrix $D$ is diagonal with $D_{ii} = d_i = \sum_j A_{ij}$.
Volume. For a subset $S \subseteq V$, the volume is $\operatorname{vol}(S) = \sum_{i \in S} d_i$. For the full graph, $\operatorname{vol}(V) = 2|E|$ (each edge contributes 2 to the total degree sum). Volume plays the role of "mass" in the normalized Laplacian theory.
For a $d$-regular graph, $D = dI$ and $\operatorname{vol}(V) = dn$, making the theory particularly clean. Most derivations proceed with general $D$ but reduce to simpler formulas in the regular case.
Random walk transition matrix. The matrix $P = D^{-1}A$ is row-stochastic: $\sum_j P_{ij} = 1$ for all $i$. It defines a random walk on the graph: from vertex $i$, move to neighbor $j$ with probability $P_{ij} = w_{ij}/d_i$. The stationary distribution of this walk is $\pi$ with $\pi_i = d_i/\operatorname{vol}(V)$ - proportional to degree. This connection between $A$, $D$, and random walks is central to the normalized Laplacian theory.
2.3 Unnormalized Laplacian L = D - A
Definition. $L = D - A$. Entry-by-entry: $L_{ii} = d_i$, $L_{ij} = -w_{ij}$ if $(i,j) \in E$, and $L_{ij} = 0$ otherwise.
Every row (and column) sums to zero: $\sum_j L_{ij} = d_i - \sum_j A_{ij} = 0$. Equivalently, $L\mathbf{1} = 0$.
The fundamental quadratic form: $x^\top L x = \sum_{(i,j) \in E} w_{ij}(x_i - x_j)^2$.
Proof: $x^\top L x = x^\top D x - x^\top A x = \sum_i d_i x_i^2 - \sum_{i,j} w_{ij} x_i x_j = \tfrac{1}{2}\sum_{i,j} w_{ij}(x_i^2 - 2x_i x_j + x_j^2) = \sum_{(i,j) \in E} w_{ij}(x_i - x_j)^2$.
Since $w_{ij} \ge 0$, this is always non-negative: $x^\top L x \ge 0$.
Geometric meaning: $x^\top L x$ measures the total variation of the signal $x$ across all edges. It is zero if and only if $x_i = x_j$ for all edges $(i,j)$ - i.e., $x$ is constant on each connected component.
For AI: Graph regularization in semi-supervised learning minimizes $f^\top L f$ subject to labeling constraints. This penalizes label functions that change rapidly across edges - a smoothness prior that says "connected nodes likely have the same label."
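A quick numerical check of the quadratic-form identity, as a hedged NumPy sketch (the random weighted graph is an illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n))
W = np.triu(W, 1)            # keep the upper triangle, zero diagonal
W[W < 0.5] = 0.0             # sparsify
W = W + W.T                  # symmetric non-negative weights
L = np.diag(W.sum(axis=1)) - W

x = rng.standard_normal(n)
quad_form = x @ L @ x
edge_sum = sum(W[i, j] * (x[i] - x[j]) ** 2
               for i in range(n) for j in range(i + 1, n))
assert np.isclose(quad_form, edge_sum)   # x^T L x = sum_{(i,j) in E} w_ij (x_i - x_j)^2
print(quad_form)
```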
2.4 Normalized Laplacians
Two normalized variants of the Laplacian are used in practice:
Symmetric normalized Laplacian: $L_{\text{sym}} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}$,
with entries: $(L_{\text{sym}})_{ii} = 1$ (for $d_i > 0$) and $(L_{\text{sym}})_{ij} = -w_{ij}/\sqrt{d_i d_j}$ for $(i,j) \in E$.
Properties: Symmetric; eigenvalues in $[0, 2]$; the eigenvalue $2$ is attained iff the graph has a bipartite component (bipartite spectra are symmetric around $1$); its eigenvectors are $D^{1/2} v$, where $v$ are eigenvectors of $L_{\text{rw}}$.
Random-walk normalized Laplacian: $L_{\text{rw}} = D^{-1} L = I - D^{-1} A = I - P$,
with $P = D^{-1} A$ the random walk transition matrix. Properties: Not symmetric, but has the same eigenvalues as $L_{\text{sym}}$ (they are similar matrices). Eigenvalues in $[0, 2]$. The eigenvectors of $L_{\text{rw}}$ for eigenvalue $0$ are the constant vectors on each component.
When to use which:
| Laplacian | Use case | Why |
|---|---|---|
| $L = D - A$ | Graphs with uniform degree; theoretical proofs | Simplest form |
| $L_{\text{sym}} = I - D^{-1/2} A D^{-1/2}$ | Spectral clustering (Ng et al.); GCN normalization | Symmetric -> orthogonal eigenvectors |
| $L_{\text{rw}} = I - D^{-1} A$ | Random walk analysis; Shi-Malik NCut | Direct connection to $P = D^{-1}A$ |
For AI (GCN connection): The GCN propagation rule uses the symmetric normalized adjacency $\hat{A} = \tilde{D}^{-1/2}(A + I)\tilde{D}^{-1/2}$ of the graph with self-loops - equivalently, $I - \tilde{L}_{\text{sym}}$ of the augmented graph.
2.5 Spectra of Special Graphs
Closed-form eigenvalues for key graph families provide calibration and test cases:
Complete graph $K_n$: every pair of vertices adjacent. Eigenvalues of $A$: $n-1$ (once) and $-1$ ($n-1$ times). Eigenvalues of $L$: $0$ (once) and $n$ ($n-1$ times). The graph is maximally connected: $\lambda_2 = n$.
Path graph $P_n$: Vertices $1, \dots, n$, edges $\{i, i+1\}$. Eigenvalues of $L$: $\lambda_k = 2 - 2\cos\!\big(\tfrac{(k-1)\pi}{n}\big) = 4\sin^2\!\big(\tfrac{(k-1)\pi}{2n}\big)$, $k = 1, \dots, n$.
So $\lambda_2 = 4\sin^2(\pi/2n) \approx \pi^2/n^2$ for large $n$ - very small. This reflects the intuition that a long path is easy to cut (just remove the middle edge).
Cycle graph $C_n$: Eigenvalues of $L$: $\lambda_k = 2 - 2\cos(2\pi k/n)$, $k = 0, \dots, n-1$.
For even $n$ (bipartite cycles), the spectrum is symmetric around $2$. $\lambda_2 = 2 - 2\cos(2\pi/n) \approx 4\pi^2/n^2$ for large $n$.
Star graph $K_{1,n-1}$: One hub connected to $n-1$ leaves. Eigenvalues of $L$: $0$ (once), $1$ ($n-2$ times), $n$ (once). $\lambda_2 = 1$ regardless of how many leaves there are - the star is easy to disconnect (remove the hub).
$d$-regular bipartite graph: Eigenvalues of $A$ come in $\pm$ pairs, with the pattern dictated by the bipartite structure; the eigenvalue $-d$ (paired with $d$) indicates bipartiteness.
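These closed forms are easy to confirm numerically. A small sketch (assuming networkx is available; any Laplacian constructor would do):

```python
import numpy as np
import networkx as nx

n = 8
cases = [
    (nx.path_graph(n),  [4 * np.sin(np.pi * k / (2 * n)) ** 2 for k in range(n)]),
    (nx.cycle_graph(n), [2 - 2 * np.cos(2 * np.pi * k / n) for k in range(n)]),
]
for G, closed_form in cases:
    L = nx.laplacian_matrix(G).toarray().astype(float)
    numeric = np.sort(np.linalg.eigvalsh(L))
    assert np.allclose(numeric, np.sort(closed_form), atol=1e-10)
    print(numeric[:3])   # lambda_1 = 0; lambda_2 is tiny for the path, larger for the cycle
```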
2.6 Characteristic Polynomial and Cospectral Graphs
The characteristic polynomial of a graph is $p_A(\lambda) = \det(\lambda I - A)$. The roots are the eigenvalues of $A$. The coefficients of $p_A$ are spectral invariants: the sum of eigenvalues equals $0$ (no self-loops), and the sum of squares of eigenvalues equals $2|E|$.
Cospectral (isospectral) graphs are non-isomorphic graphs with identical characteristic polynomials. The smallest pair is the star $K_{1,4}$ and the disjoint union $C_4 \cup K_1$, both with adjacency spectrum $\{2, 0, 0, 0, -2\}$; Schwenk (1973) showed that almost all trees have a cospectral mate. Cospectrality shows that the spectrum does not uniquely determine a graph - a fundamental limitation of spectral methods. For graph isomorphism testing, the Weisfeiler-Lehman test (05) captures structure that the spectrum misses.
For AI: The WL-expressiveness hierarchy of GNNs (Xu et al., 2019) parallels this cospectrality result. GNNs based on spectral convolution can distinguish everything the Laplacian spectrum distinguishes - but no more. This is one motivation for higher-order GNNs and attention-based methods.
3. The Fundamental Quadratic Form and PSD Structure
3.1 Dirichlet Energy
The quadratic form $E(x) = x^\top L x = \sum_{(i,j) \in E} w_{ij}(x_i - x_j)^2$ is called the Dirichlet energy (or graph Dirichlet form) of the signal $x$.
This name comes from the continuous analogue: for a function $f$ on a domain $\Omega$, the Dirichlet energy is $\int_\Omega \|\nabla f\|^2 \, dx$, which measures the total variation (smoothness) of $f$. The graph Laplacian is the discrete analogue of $-\Delta$ (the negative Laplacian), and $x^\top L x$ is the discrete Dirichlet energy.
Interpretations by context:
| Context | What $x^\top L x$ measures |
|---|---|
| Social network | Total disagreement when $x$ labels communities |
| Signal on graph | Total variation (roughness) of the signal across edges |
| Temperature field | Total heat flux across edges at steady state |
| Node embeddings | "Embedding strain" - how much nearby nodes differ |
| Semi-supervised labels | Penalty for assigning different labels to connected nodes |
Critical point of Dirichlet energy. The Rayleigh quotient $R(x) = \frac{x^\top L x}{x^\top x}$ is minimized by the eigenvector with smallest eigenvalue. Constrained to $x \perp \mathbf{1}$ (orthogonal to the trivial null vector), the minimum is $\lambda_2$, achieved by $v_2$, the Fiedler vector.
3.2 Proof That L \succeq 0
Theorem. For any undirected weighted graph with non-negative edge weights, $L \succeq 0$.
Proof. For any $x \in \mathbb{R}^n$:
$x^\top L x = \sum_{(i,j) \in E} w_{ij}(x_i - x_j)^2 \ge 0$,
since $w_{ij} \ge 0$ and $(x_i - x_j)^2 \ge 0$ for all real numbers.
Corollary. All eigenvalues of $L$ are non-negative: $0 \le \lambda_1 \le \cdots \le \lambda_n$.
Corollary. $\mathbf{1}$ is always an eigenvector with eigenvalue $0$, since $L\mathbf{1} = D\mathbf{1} - A\mathbf{1} = d - d = 0$ (where $d$ is the degree vector, equal to $A\mathbf{1}$).
Strengthened result for normalized Laplacians. For $L_{\text{sym}} = D^{-1/2} L D^{-1/2}$: since $y^\top L_{\text{sym}} y = x^\top L x \ge 0$ with $x = D^{-1/2} y$, we have $L_{\text{sym}} \succeq 0$. Moreover, its eigenvalues never exceed $2$, so the spectrum lies in $[0, 2]$.
3.3 Connected Components via the Null Space
Theorem (Fiedler, 1973). The multiplicity of eigenvalue $0$ of the graph Laplacian equals the number of connected components of $G$.
Proof.
Suppose $G$ has $k$ connected components $C_1, \dots, C_k$. For each component $C_m$, define $\mathbf{1}_{C_m}$ as the indicator vector of $C_m$: $(\mathbf{1}_{C_m})_i = 1$ if $i \in C_m$, else $0$. Then $L\mathbf{1}_{C_m} = 0$, because for any vertex $i \in C_m$: $(L\mathbf{1}_{C_m})_i = d_i - \sum_{j \sim i} (\mathbf{1}_{C_m})_j = d_i - d_i = 0$
(all neighbors of $i$ are also in $C_m$ since components are isolated). The vectors $\mathbf{1}_{C_1}, \dots, \mathbf{1}_{C_k}$ are linearly independent, so $\dim \ker(L) \ge k$.
Suppose $Lx = 0$. Then $x^\top L x = \sum_{(i,j) \in E} w_{ij}(x_i - x_j)^2 = 0$, which forces $x_i = x_j$ for every edge $(i,j)$. Thus $x$ is constant on each connected component. The dimension of the space of such functions equals the number of components. So $\dim \ker(L) \le k$.
Combining both directions, $\dim \ker(L) = k$.
Examples:
- Fully connected graph ($K_n$): $\lambda_1 = 0$ is simple; $\lambda_2 = n > 0$.
- Graph with 2 isolated components: $\lambda_1 = \lambda_2 = 0$; $\lambda_3 > 0$.
- Path $P_n$: always connected; $\lambda_2 > 0$ (though tiny for large $n$).
Non-example: For a disconnected graph with components of different sizes, the eigenvectors for eigenvalue $0$ are NOT all-ones vectors but rather indicator vectors of the components (and their linear combinations).
3.4 Eigenvalue Bounds and Interlacing
Upper bound. For any connected graph:
$\lambda_n \le 2 d_{\max}$,
where $d_{\max}$ is the maximum degree. For $d$-regular graphs: $\lambda_n = 2d$ iff the graph is bipartite.
Lower bound on $\lambda_2$. From the Cheeger inequality (full treatment in 5):
$\lambda_2 \ge \frac{h(G)^2}{2\, d_{\max}}$,
where $h(G)$ is the Cheeger constant.
Interlacing theorem. Let be an induced subgraph of on vertices, with Laplacian eigenvalues . Then:
Interlacing means that removing vertices from a graph cannot increase by more than the increase in . This is used in structural arguments about graph connectivity after vertex removal.
3.5 Courant-Fischer Minimax Theorem
The Courant-Fischer theorem provides a variational characterization of every eigenvalue of a symmetric matrix. For the graph Laplacian with eigenvalues $\lambda_1 \le \cdots \le \lambda_n$:
$\lambda_k = \min_{\dim(S) = k}\ \max_{x \in S,\, x \ne 0}\ \frac{x^\top L x}{x^\top x}.$
In particular, the Fiedler value has the characterization:
$\lambda_2 = \min_{x \perp \mathbf{1},\, x \ne 0}\ \frac{x^\top L x}{x^\top x}.$
Proof sketch. Write $x = \sum_k c_k u_k$ in the eigenbasis. Then $x^\top L x = \sum_k \lambda_k c_k^2$ and $x^\top x = \sum_k c_k^2$. The Rayleigh quotient is a convex combination of eigenvalues. Minimizing over $x \perp u_1$ forces $c_1 = 0$, making the minimum $\lambda_2$ (achieved when $c_2 \ne 0$, all others $0$).
Practical use. Courant-Fischer justifies using $v_2$ as the optimal graph bisection vector: it solves the continuous relaxation of the minimum bisection problem, as we prove in 4 and 8.
4. Algebraic Connectivity and the Fiedler Vector
4.1 Algebraic Connectivity \lambda_2
Definition. The algebraic connectivity of a graph $G$ is $\lambda_2(L)$, the second-smallest eigenvalue of the graph Laplacian. It is also called the Fiedler value.
Theorem (Fiedler, 1973). $\lambda_2 > 0$ if and only if $G$ is connected.
This follows directly from 3.3: $\lambda_2 = 0$ iff there are at least 2 connected components.
Why "algebraic" connectivity? The classical vertex connectivity $\kappa(G)$ (the minimum number of vertices whose removal disconnects $G$) is a purely combinatorial quantity whose evaluation requires many max-flow computations. The algebraic connectivity provides an easily computable lower bound:
$\lambda_2 \le \kappa(G) \le \delta(G)$ (for non-complete graphs),
where $\delta(G)$ is the minimum degree. This inequality chain says: algebraic connectivity $\le$ vertex connectivity $\le$ minimum degree.
Sensitivity. When a single edge is added to a graph, $\lambda_2$ can increase by at most $2$. When an edge is removed, $\lambda_2$ can decrease by at most $2$. This quantifies how much the connectivity changes with each graph edit - useful in robust network design.
Regular graphs. For a $d$-regular graph on $n$ vertices:
$\lambda_2(L) = d - \mu_2(A)$,
where $\mu_2(A)$ is the largest eigenvalue of $A$ not equal to $d$. The spectral gap of the adjacency matrix and the algebraic connectivity are directly related for regular graphs.
4.2 The Fiedler Vector
Definition. The Fiedler vector $v_2$ is the eigenvector of $L$ corresponding to $\lambda_2$.
The Fiedler vector assigns a real number $(v_2)_i$ to each vertex $i$. Vertices with positive values are assigned to one "side" of the graph; vertices with negative values to the other. This is the basis of spectral bisection.
Spectral bisection algorithm:
- Compute the Fiedler vector $v_2$.
- Partition by the sign of $v_2$: let $S = \{i : (v_2)_i \ge 0\}$ and $\bar{S} = \{i : (v_2)_i < 0\}$.
- The edges between $S$ and $\bar{S}$ form the "spectral cut."
Why does this work? The Courant-Fischer theorem says $v_2$ minimizes the Dirichlet energy subject to $x \perp \mathbf{1}$ and $\|x\| = 1$. If we further constrain $x_i \in \{+1, -1\}$ (a discrete two-way partition), we get the NP-hard graph bisection problem. The Fiedler vector is the continuous relaxation of this discrete problem - the best we can do efficiently.
The ordering property. Sorting vertices by their Fiedler vector value reveals the community structure of the graph. Vertices in the same community tend to have similar values; the transition from negative to positive marks the community boundary.
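A minimal sketch of spectral bisection on a planted two-community graph. The generator planted_partition_graph and the assumption that its first 20 nodes form block 0 are properties of the networkx helper used here, not something stated in the text:

```python
import numpy as np
import networkx as nx

# two blocks of 20 nodes; dense inside blocks, sparse between them
G = nx.planted_partition_graph(l=2, k=20, p_in=0.5, p_out=0.05, seed=1)
L = nx.laplacian_matrix(G).toarray().astype(float)

eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]              # eigenvector for lambda_2
side = fiedler > 0                   # sign-based partition

true_block = np.array([0] * 20 + [1] * 20)   # generator labels nodes block-by-block
agreement = max(np.mean(side == (true_block == 0)),
                np.mean(side == (true_block == 1)))
print(f"fraction of vertices on the correct side: {agreement:.2f}")
```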
For AI: Spectral bisection is used in:
- Circuit partitioning (VLSI design): split a circuit graph across two chips to minimize inter-chip connections
- Domain decomposition (PDE solvers): partition a mesh graph for parallel computation
- Community detection in knowledge graphs: find the two most separated communities in a KG
4.3 Bounding Graph Properties via \lambda_2
Diameter bound (Mohar, 1991):
A simpler bound: (rough but useful).
Vertex connectivity. For any connected, non-complete graph:
$\lambda_2 \le \kappa(G)$,
where $\kappa(G)$ is the vertex connectivity (minimum number of vertices to remove to disconnect). A large $\lambda_2$ means the graph is robustly connected.
Conductance. The conductance $\phi(G)$ measures the minimum normalized cut. The Cheeger inequality (5) gives (for the normalized Laplacian):
$\frac{\lambda_2}{2} \le \phi(G) \le \sqrt{2\lambda_2}.$
Isoperimetric number. The Cheeger constant defined with $|S|$ instead of $\operatorname{vol}(S)$ satisfies the same type of inequality with the unnormalized $L$.
4.4 Computing the Fiedler Vector in Practice
For small graphs ($n$ up to a few thousand), compute all eigenvalues of $L$ directly via a dense symmetric eigensolver (scipy.linalg.eigh). The second column of the eigenvector matrix is $v_2$.
For large sparse graphs, use iterative methods:
Lanczos algorithm: Builds a tridiagonal matrix from the Krylov vectors $x, Lx, L^2x, \dots$. Converges to extreme eigenvalues fastest. For the Fiedler vector, we need the smallest nonzero eigenvalue, which requires the shift-invert trick: compute the largest eigenvalue of $(L - \sigma I)^{-1}$ for a small shift $\sigma$.
Inverse power iteration with deflation: Since $v_1 \propto \mathbf{1}$ is known, we can deflate it out. Initialize with a random $x \perp \mathbf{1}$, repeatedly apply a sparse solve with $L$, normalize, and re-orthogonalize against $\mathbf{1}$. Convergence is geometric, with rate roughly $\lambda_2/\lambda_3$ per iteration.
Randomized Nystrom approximation: For graphs with millions of vertices, approximate the low-rank spectral structure using randomized sampling of the Laplacian (cf. Spielman & Srivastava, 2011).
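A sketch of the large-sparse case with SciPy's Lanczos solver; passing a small negative sigma is one way to run the shift-invert trick without factorizing the exactly singular $L$ (the graph and parameters are illustrative assumptions):

```python
import numpy as np
import networkx as nx
from scipy.sparse.linalg import eigsh

G = nx.random_geometric_graph(2000, radius=0.05, seed=0)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()  # keep the giant component
L = nx.laplacian_matrix(G).astype(float)

# shift-invert Lanczos: eigenvalues of (L - sigma I)^{-1} with largest magnitude
# correspond to eigenvalues of L closest to sigma (here, the smallest ones)
vals, vecs = eigsh(L, k=2, sigma=-1e-3, which="LM")
order = np.argsort(vals)
lambda2, fiedler = vals[order[1]], vecs[:, order[1]]
print("algebraic connectivity:", lambda2, " Fiedler vector length:", fiedler.shape[0])
```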
4.5 AI Application: Community Detection
Community detection - finding groups of densely interconnected nodes - is one of the most practically important graph problems. Spectral methods are the gold standard for quality guarantees.
Planted partition model. Generate a graph with $2$ communities of $n/2$ vertices each, intra-community edge probability $p$, inter-community probability $q < p$. With $p = a \log n / n$ and $q = b \log n / n$, spectral methods recover the communities exactly (with high probability) when $(\sqrt{a} - \sqrt{b})^2 > 2$.
(This is the information-theoretic threshold for exact recovery in the Stochastic Block Model.)
Knowledge graph clustering. In a knowledge graph (KG) like Freebase or Wikidata, entities form communities by topic (sports, science, politics). The Fiedler vector of the KG adjacency graph separates these clusters. The resulting community structure can be used to create topic-specific sub-KGs for retrieval-augmented generation.
5. Cheeger's Inequality and Graph Expansion
5.1 The Cheeger Constant h(G)
Definition. For a graph $G$ and a subset $S \subseteq V$, the edge boundary is the set of edges between $S$ and its complement $\bar{S}$:
$\partial S = \{(i,j) \in E : i \in S,\ j \in \bar{S}\}.$
The conductance (or isoperimetric ratio) of the cut is:
$\phi(S) = \frac{|\partial S|}{\min\big(\operatorname{vol}(S),\ \operatorname{vol}(\bar{S})\big)}.$
The Cheeger constant (or isoperimetric number) of $G$ is:
$h(G) = \min_{\emptyset \ne S \subsetneq V} \phi(S).$
This is the minimum conductance cut: the partition that minimizes the fraction of edges leaving the smaller side relative to its volume. A small $h(G)$ means the graph has a bottleneck - a small number of edges separating a large fraction of the volume.
Computing $h(G)$ is NP-hard. This is a major motivation for the Cheeger inequality, which gives a polynomial-time algorithm (via $v_2$) to find a cut whose conductance is within a quadratic factor of optimal.
Examples:
- Path $P_n$: Remove the middle edge; $h(P_n) \approx \frac{1}{n}$. Very small: the path is a severe bottleneck.
- Complete graph $K_n$: A balanced cut has $|\partial S| \approx n^2/4$ and volume $\approx n^2/2$; $h(K_n) \approx \frac{1}{2}$.
- Expander graph (5.3): $h(G) \ge c > 0$ - bounded below by a constant, independent of $n$.
5.2 Cheeger's Inequality
Theorem (Alon & Milman, 1985; Dodziuk, 1984). For any undirected graph $G$, with $\lambda_2$ the second-smallest eigenvalue of the normalized Laplacian:
$\frac{\lambda_2}{2} \le h(G) \le \sqrt{2\lambda_2}.$
Proof of the left inequality (easy direction). We show $\lambda_2 \le 2h(G)$ by exhibiting a test vector with small Rayleigh quotient.
Let $(S, \bar{S})$ be the optimal Cheeger cut with $\phi(S) = h(G)$. Define:
$x_i = \frac{1}{\operatorname{vol}(S)}$ for $i \in S$, and $x_i = -\frac{1}{\operatorname{vol}(\bar{S})}$ for $i \in \bar{S}$.
This is orthogonal to the stationary distribution of the random walk (which plays the role of $\mathbf{1}$ for the normalized Laplacian). Then:
$\frac{\sum_{(i,j) \in E} w_{ij}(x_i - x_j)^2}{\sum_i d_i x_i^2} = |\partial S|\left(\frac{1}{\operatorname{vol}(S)} + \frac{1}{\operatorname{vol}(\bar{S})}\right) \le \frac{2|\partial S|}{\min(\operatorname{vol}(S), \operatorname{vol}(\bar{S}))} = 2\phi(S).$
Since $\lambda_2$ is the minimum of this Rayleigh quotient and $\phi(S) = h(G)$, we get $\lambda_2 \le 2h(G)$.
Proof of the right inequality (hard direction). We show $h(G) \le \sqrt{2\lambda_2}$.
Given the Fiedler vector $v_2$, sort vertices so $v_2(1) \le v_2(2) \le \cdots \le v_2(n)$. For each threshold $t$, let $S_t = \{i : v_2(i) \le t\}$. Consider the sweep over all possible thresholds. By the co-area formula for graphs (a discrete version of the co-area formula in differential geometry), the best of these sweep cuts satisfies:
$\min_t \phi(S_t) \le \sqrt{2\lambda_2}.$
The last step uses the Cauchy-Schwarz inequality together with the fact that $\lambda_2$ is the Rayleigh quotient of the Fiedler vector.
Tightness. The left bound is tight for expanders (5.3). The right bound is tight (up to constants) for paths and other bottleneck graphs, where $h(G) \asymp \sqrt{\lambda_2}$.
Practical implication. Given $\lambda_2$, we know $\frac{\lambda_2}{2} \le h(G) \le \sqrt{2\lambda_2}$. More importantly, the proof of the right inequality is constructive: the sweep over Fiedler vector thresholds finds a cut with conductance at most $\sqrt{2\lambda_2}$.
5.3 Expander Graphs
Definition. A family of graphs $\{G_n\}$ is a $d$-regular expander family if:
- Each $G_n$ has $n$ vertices and is $d$-regular
- The second adjacency eigenvalue satisfies $\mu_2(G_n) \le (1 - \epsilon)d$ for a fixed $\epsilon > 0$
- Equivalently, the spectral gap $d - \mu_2$ is bounded below by a positive constant, uniformly in $n$
Equivalently (by Cheeger): $h(G_n) \ge c > 0$, i.e., the Cheeger constant is bounded below uniformly in $n$.
Why expanders matter:
- Communication networks: In a $d$-regular expander on $n$ nodes, any message can be routed between any two nodes in $O(\log n)$ hops, using only $d$ connections per node. This is optimal for constant-degree networks.
- Error-correcting codes: Expander codes (Sipser & Spielman, 1996) achieve linear-time encoding/decoding of codes close to the Shannon capacity.
- Derandomization: Expanders provide pseudorandom number generators - random walks on expanders mix in $O(\log n)$ steps, so short random walks serve as good randomness sources.
- GNN depth: A GNN on an expander graph propagates information across the entire graph in $O(\log n)$ layers. This is why expanders are ideal benchmarks for deep GNNs.
Ramanujan graphs. The spectral gap of a $d$-regular graph cannot be arbitrarily large: by the Alon-Boppana theorem, $\mu_2 \ge 2\sqrt{d-1} - o(1)$. Graphs whose non-trivial eigenvalues are all bounded by $2\sqrt{d-1}$ in absolute value are called Ramanujan graphs - they are the optimal expanders. Explicit Ramanujan graph constructions (Lubotzky, Phillips, Sarnak, 1988; Margulis, 1988) use deep number theory.
5.4 Random Walk Mixing Time
The random walk on $G$ defined by transition matrix $P = D^{-1}A$ has stationary distribution $\pi$ with $\pi_i = d_i/\operatorname{vol}(V)$. The mixing time is the number of steps needed for the walk to get close to the stationary distribution:
$t_{\text{mix}}(\epsilon) = \min\{t : \max_i \|P^t(i, \cdot) - \pi\|_{TV} \le \epsilon\}.$
Spectral mixing bound. Let $\beta = \max(|\beta_2|, |\beta_n|)$ be the second-largest absolute eigenvalue of $P$. Then, roughly:
$t_{\text{mix}}(\epsilon) = O\!\left(\frac{\log(n/\epsilon)}{1 - \beta}\right).$
Interpretation: The spectral gap $1 - \beta$ governs the mixing time. Large spectral gap -> fast mixing. For expanders with constant spectral gap: $t_{\text{mix}} = O(\log n)$. For paths: $1 - \beta = \Theta(1/n^2)$, so $t_{\text{mix}} = O(n^2 \log n)$.
Proof sketch. Write the initial distribution as $p_0 = \pi + \sum_{k \ge 2} c_k \psi_k$, where $\psi_k$ are eigenvectors of $P$ (eigenvalues $\beta_k$). After $t$ steps: $p_t = \pi + \sum_{k \ge 2} c_k \beta_k^t \psi_k$. The deviation decays as $\beta^t$, giving the bound above.
Lazy walk. To avoid oscillation when $\beta_n \approx -1$ (bipartite-like graphs), use the lazy random walk $P_{\text{lazy}} = \frac{1}{2}(I + P)$. Eigenvalues of $P_{\text{lazy}}$ are $\frac{1 + \beta_k}{2} \in [0, 1]$, avoiding negative eigenvalues.
5.5 AI Connection: Over-Smoothing as Diffusion
Over-smoothing is the well-documented phenomenon in deep GNNs where node representations become indistinguishable as the number of layers increases (Li et al., 2018; Oono & Suzuki, 2020). Spectral theory provides the exact mechanism:
A $K$-layer GCN computes (roughly) $\hat{A}^K X$, where $\hat{A}$ is the normalized adjacency with self-loops. The eigenvalues of $\hat{A}$ are $1 - \tilde{\lambda}_i$, where $\tilde{\lambda}_i$ are the eigenvalues of the normalized Laplacian of the augmented graph; all lie in $(-1, 1]$, with exactly one eigenvalue equal to $1$ per component. After $K$ iterations, $\hat{A}^K$ converges to the projection onto the dominant eigenvector.
All node features converge to values proportional to $\sqrt{d_i + 1}$, determined only by degree - all structural information is lost.
Rate of collapse. The convergence rate is governed by the spectral gap: the deviation from the limit decays roughly like $(1 - \tilde{\lambda}_2)^K$. Faster collapse on expanders (large $\tilde{\lambda}_2$), slower on bottleneck graphs. This is counterintuitive: the "most connected" graphs (expanders) over-smooth fastest.
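The collapse is easy to see numerically. A hedged sketch (the Karate Club graph and random features are illustrative): repeatedly applying the renormalized adjacency drives the row-normalized representations of all nodes toward one common direction.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
n = G.number_of_nodes()
A_tilde = nx.to_numpy_array(G) + np.eye(n)          # adjacency with self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))           # D^-1/2 (A + I) D^-1/2

X = np.random.default_rng(0).standard_normal((n, 8))
for k in [1, 2, 4, 8, 16, 32, 64]:
    Xk = np.linalg.matrix_power(A_hat, k) @ X
    rows = Xk / np.linalg.norm(Xk, axis=1, keepdims=True)
    spread = rows.std(axis=0).mean()                # -> 0 as all node features align
    print(f"K = {k:3d}   spread of normalized node features = {spread:.4f}")
```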
Mitigation strategies:
- Residual connections (GCNII, Chen et al., 2020): - preserve initial features
- DropEdge (Rong et al., 2020): randomly remove edges during training, reducing effective
- PairNorm (Zhao & Akoglu, 2020): explicitly normalize pairwise distances to prevent collapse
- Jumping knowledge (Xu et al., 2018): aggregate representations from all layers
Forward reference: The full architecture-level treatment of over-smoothing, including the WL expressiveness hierarchy and architectural mitigations, is in 11-05 Graph Neural Networks.
6. Graph Fourier Transform and Signal Processing
6.1 Classical Fourier Analogy
The classical Fourier transform on $\mathbb{R}^n$ decomposes a function into a linear combination of eigenfunctions of the Laplace operator $-\Delta$:
$f(x) = \int \hat{f}(\omega)\, e^{i\omega \cdot x}\, d\omega.$
The functions $e^{i\omega \cdot x}$ are eigenfunctions of the continuous Laplacian: $-\Delta\, e^{i\omega \cdot x} = \|\omega\|^2 e^{i\omega \cdot x}$.
On a graph, the Laplacian $L$ plays the role of $-\Delta$, and its eigenvectors $u_1, \dots, u_n$ (with eigenvalues $\lambda_1 \le \cdots \le \lambda_n$) play the role of the complex exponentials $e^{i\omega \cdot x}$.
The analogy:
FOURIER TRANSFORM ANALOGY
========================================================================
Classical Fourier Graph Fourier
--------------------------------- ---------------------------------
Domain \mathbb{R}^n Vertex set V (finite)
Operator -\Delta (Laplacian) L = D - A (graph Laplacian)
Eigenfunctions exp(i\omega*x) Eigenvectors u_1, u_2, ..., u_n
Frequencies ||\omega||^2 \in [0, \infty) Eigenvalues \lambda_1 \leq \lambda_2 \leq ... \leq \lambda_n
Low freq. ||\omega|| small -> smooth \lambda_k small -> smooth on graph
High freq. ||\omega|| large -> rapid \lambda_k large -> rapid variation
Transform Continuous integral Finite matrix multiply (U^Tx)
========================================================================
This analogy is the conceptual foundation for defining convolution, filtering, and signal processing on irregular graph domains.
6.2 Graph Fourier Transform
Definition. Let $L = U \Lambda U^\top$ be the eigendecomposition of the graph Laplacian, with $U = [u_1, \dots, u_n]$ the matrix of eigenvectors (columns). For a signal $x \in \mathbb{R}^n$ (assigning a value $x_i$ to each vertex $i$), the Graph Fourier Transform (GFT) is:
$\hat{x} = U^\top x.$
The inverse GFT is:
$x = U\hat{x}.$
Properties:
- Parseval's theorem: $\|x\|_2 = \|\hat{x}\|_2$ (since $U$ is orthogonal).
- Linearity: $\widehat{\alpha x + \beta y} = \alpha \hat{x} + \beta \hat{y}$.
- Energy decomposition: $\|x\|^2 = \sum_k \hat{x}_k^2$ (energy in each frequency component).
- Shift property: There is no clean "shift theorem" for graphs as there is for the DFT, because graphs lack translation symmetry. This is a fundamental difference.
Graph convolution. The convolution of two signals $x$ and $y$ on a graph is defined spectrally:
$x * y = U\big((U^\top x) \odot (U^\top y)\big),$
where $\odot$ is element-wise multiplication. This is the analogue of the convolution theorem: convolution in the vertex domain equals pointwise multiplication in the spectral domain.
Limitation of full GFT: Computing $U$ requires $O(n^3)$ time (eigendecomposition). For large graphs (beyond tens of thousands of nodes), this is infeasible, motivating polynomial approximations (7).
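A small sketch of the GFT on a two-community graph (again using a networkx planted-partition generator, with block membership inferred from node order, as assumed helpers): the community indicator concentrates its energy in the lowest frequencies, white noise does not.

```python
import numpy as np
import networkx as nx

G = nx.planted_partition_graph(l=2, k=15, p_in=0.6, p_out=0.05, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)
lam, U = np.linalg.eigh(L)                  # columns of U are the Fourier basis u_1..u_n

community = np.array([1.0] * 15 + [-1.0] * 15)          # nodes come block-by-block
noise = np.random.default_rng(1).standard_normal(30)

for name, x in [("community indicator", community), ("white noise", noise)]:
    x_hat = U.T @ x                                      # forward GFT
    frac_low = np.sum(x_hat[:3] ** 2) / np.sum(x_hat ** 2)
    assert np.allclose(U @ x_hat, x)                     # inverse GFT recovers x
    print(f"{name}: energy fraction in the 3 lowest frequencies = {frac_low:.2f}")
```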
6.3 Frequency Interpretation
The $k$-th frequency component $\hat{x}_k = u_k^\top x$ measures how much of the signal "oscillates at frequency $\lambda_k$."
Low-frequency signals correspond to small $\lambda_k$: the eigenvectors for small eigenvalues vary smoothly across edges (since $u_k^\top L u_k = \lambda_k$ is small). A signal concentrated in low frequencies is smooth: nearby vertices have similar values.
High-frequency signals correspond to large $\lambda_k$: the eigenvectors for large eigenvalues oscillate rapidly, with $u_k(i)$ and $u_k(j)$ having opposite signs for many edges $(i,j)$. A pure high-frequency signal looks like a checkerboard on the graph.
Example on a path graph. For $P_n$, the eigenvectors are the discrete cosine transform (DCT) basis vectors $u_k(i) \propto \cos\!\big(\tfrac{\pi k (i - 1/2)}{n}\big)$, with eigenvalues $\lambda_k = 4\sin^2(\pi k/2n)$ - the squared DCT frequencies. The GFT on a path is (a variant of) the DCT.
Example on a community graph. A graph with two tightly connected communities has:
- $u_1$: constant (DC component)
- $u_2$: Fiedler vector, positive on community 1, negative on community 2 - the community membership function is a low-frequency signal
- $u_n$: highest frequency, alternates sign on bipartite-like structure
6.4 Dirichlet Energy Revisited
The Dirichlet energy decomposes cleanly in the spectral domain:
$x^\top L x = \sum_k \lambda_k\, \hat{x}_k^2.$
This is the "power spectrum" interpretation: the Dirichlet energy is the weighted sum of spectral components, weighted by frequency. A signal is smooth (low Dirichlet energy) iff its energy is concentrated in low-frequency components ($\lambda_k$ small).
Spectral analysis of node features. Given a node feature matrix $X \in \mathbb{R}^{n \times d}$, we can compute the spectral content of each feature dimension: $E_j = X_{:,j}^\top L\, X_{:,j}$.
Feature dimensions with low Dirichlet energy are "community-consistent" features (e.g., political affiliation in a social network). Feature dimensions with high Dirichlet energy are "noisy" local features.
For AI: Graph regularization in semi-supervised learning minimizes:
$\sum_{i \in \text{labeled}} \ell(f_i, y_i) + \gamma\, f^\top L f.$
This penalizes high-frequency components in the predicted label function $f$, implementing a "smoothness prior": connected nodes likely have the same label.
6.5 Uncertainty Principle on Graphs
In classical signal processing, the Heisenberg uncertainty principle states that a signal cannot be simultaneously concentrated in both time and frequency: the product of time spread and frequency spread is bounded below by a universal constant.
On graphs, an analogous uncertainty principle holds (Agaskar & Lu, 2013):
$\Delta_g^2(x)\, \Delta_s^2(x) \ge C(G),$
where $\Delta_g$ measures how localized $x$ is in the vertex domain (concentrated on a small set of vertices), $\Delta_s$ measures how localized $x$ is in the spectral domain (concentrated on a small band of frequencies), and $C(G)$ is a constant depending on the graph structure.
Implications for graph signal processing:
- A signal perfectly localized on a single vertex ($\Delta_g = 0$) is spread across all frequencies ($\Delta_s$ maximal)
- Smooth signals (concentrated on low frequencies, small $\Delta_s$) are necessarily spread across many vertices ($\Delta_g$ large)
- This tradeoff motivates graph wavelets (11.3): basis functions that are approximately localized in both vertex and spectral domains
6.6 AI Application: Node Feature Smoothing
Label propagation (Zhou et al., 2004) is a classic semi-supervised learning algorithm that directly implements low-pass graph filtering. Starting from a partially labeled graph, labels propagate according to:
$F^{(t+1)} = \alpha\, S\, F^{(t)} + (1 - \alpha)\, Y,$
where $Y$ is the initial label matrix (zeros for unlabeled nodes), $S = D^{-1/2} A D^{-1/2}$ is the normalized adjacency, and $\alpha \in (0, 1)$ controls the smoothing strength. In the spectral domain, this converges to:
$F^* = (1 - \alpha)\,(I - \alpha S)^{-1} Y.$
The filter $g(\lambda) = \frac{1 - \alpha}{1 - \alpha(1 - \lambda)}$ (with $\lambda$ the eigenvalues of $L_{\text{sym}}$) is a low-pass filter: it attenuates high-frequency components ($\lambda$ large) more than low-frequency ones.
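A minimal sketch of this iteration on the Karate Club graph, seeding one node per faction; the node attribute "club" used to check accuracy is part of the networkx dataset, and alpha = 0.9 is an illustrative choice:

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
n = G.number_of_nodes()
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))               # D^-1/2 A D^-1/2

Y = np.zeros((n, 2))                          # seed labels, one per faction
Y[0, 0] = 1.0                                 # node 0: "Mr. Hi"
Y[33, 1] = 1.0                                # node 33: "Officer"
alpha = 0.9

F = Y.copy()
for _ in range(200):
    F = alpha * S @ F + (1 - alpha) * Y       # propagate / smooth
pred = F.argmax(axis=1)
truth = np.array([0 if G.nodes[v]["club"] == "Mr. Hi" else 1 for v in G.nodes()])
print("label propagation accuracy:", (pred == truth).mean())
```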
For modern LLMs: When an LLM reasons over a knowledge graph, smooth graph signals correspond to consistent facts (nearby entities agree), while high-frequency signals correspond to noise or inconsistencies. Spectral filtering provides a principled way to denoise knowledge graphs before retrieval.
7. Spectral Filtering
7.1 Filtering in the Spectral Domain
A spectral filter on a graph is an operation that modifies the frequency content of a graph signal:
$y = U\, g(\Lambda)\, U^\top x,$
where $g$ is a scalar function applied pointwise to the eigenvalues: $g(\Lambda) = \operatorname{diag}(g(\lambda_1), \dots, g(\lambda_n))$.
Common filters:
| Filter | $g(\lambda)$ (schematic) | Effect | AI use case |
|---|---|---|---|
| Low-pass | $1$ for $\lambda < \tau$, else $0$ | Keep low frequencies | Smooth node features |
| High-pass | $1$ for $\lambda > \tau$, else $0$ | Keep high frequencies | Edge detection on graphs |
| Band-pass | $1$ for $\tau_1 < \lambda < \tau_2$, else $0$ | Keep a frequency band | Community detection at scale |
| Heat kernel | $e^{-t\lambda}$ | Exponential damping | Graph diffusion, PPMI |
| Identity | $1$ | No change | Trivial |
| GCN | $\approx 1 - \lambda$ | Linear attenuation | First-order spectral convolution |
Implementation cost: Directly computing $U g(\Lambda) U^\top x$ requires the full eigendecomposition - $O(n^3)$ preprocessing and $O(n^2)$ per signal. This is intractable for large graphs. Polynomial approximation (7.2) reduces the cost to $O(K|E|)$ per signal.
7.2 Polynomial Filters and Localization
A $K$-th order polynomial filter has the form:
$g_\theta(L) = \sum_{k=0}^{K} \theta_k L^k, \qquad y = g_\theta(L)\, x.$
Key property: K-localization. The filter is exactly $K$-localized: $y_i$ depends only on the values of $x$ at vertices within graph distance $K$ from $i$.
Proof. $(L^k)_{ij} = 0$ whenever $\operatorname{dist}(i,j) > k$ (by the walk-counting property of graph matrix powers). Therefore $(g_\theta(L))_{ij} = 0$ whenever $\operatorname{dist}(i,j) > K$.
Complexity. Computing $y$ using the recurrence $x^{(k)} = L\, x^{(k-1)}$ requires $K$ sparse matrix-vector multiplications, each $O(|E|)$. Total: $O(K|E|)$.
Spatial interpretation. A polynomial filter of degree $K$ is exactly equivalent to a $K$-hop neighborhood aggregation, connecting spectral and spatial GNN views. This is the theoretical justification for why GNNs with $K$ layers aggregate information from $K$-hop neighborhoods.
Approximation theorem. By the Stone-Weierstrass theorem, any continuous function $g$ on $[0, \lambda_{\max}]$ can be uniformly approximated by polynomials. So polynomial filters are universal approximators for spectral filters on any graph.
7.3 Chebyshev Polynomial Approximation
Why Chebyshev? Among polynomials of degree $K$ with fixed leading coefficient, the scaled Chebyshev polynomial $T_K$ has the smallest maximum deviation from zero on $[-1, 1]$ - Chebyshev polynomials are the optimal polynomial approximation basis on that interval.
Definition. The Chebyshev polynomials satisfy:
$T_0(x) = 1, \quad T_1(x) = x, \quad T_k(x) = 2x\,T_{k-1}(x) - T_{k-2}(x).$
They have the closed form $T_k(\cos\theta) = \cos(k\theta)$.
Chebyshev graph filter (ChebNet, Defferrard et al., 2016). Scale the Laplacian to $\tilde{L} = \frac{2}{\lambda_{\max}} L - I$ (shifting eigenvalues from $[0, \lambda_{\max}]$ to $[-1, 1]$). Define:
$g_\theta(\tilde{L})\, x = \sum_{k=0}^{K} \theta_k\, T_k(\tilde{L})\, x.$
Computation via the recurrence:
$\bar{x}^{(0)} = x, \quad \bar{x}^{(1)} = \tilde{L} x, \quad \bar{x}^{(k)} = 2\tilde{L}\,\bar{x}^{(k-1)} - \bar{x}^{(k-2)}.$
Each step requires one sparse matrix-vector multiply $O(|E|)$; total cost $O(K|E|)$.
Advantages over truncated Taylor series:
- The Chebyshev approximation error decays exponentially in $K$ (geometric convergence for smooth $g$)
- No numerical instability from large powers of $L$
- The learned parameters $\theta_k$ have a clear frequency interpretation
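A sketch of a degree-K Chebyshev filter applied with sparse matrix-vector products only; the graph, the Lanczos estimate of lambda_max, and the coefficients theta are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
import networkx as nx
from scipy.sparse.linalg import eigsh

G = nx.erdos_renyi_graph(500, 0.02, seed=0)
L = sp.csr_matrix(nx.laplacian_matrix(G), dtype=float)
lam_max = eigsh(L, k=1, which="LA", return_eigenvectors=False)[0]
L_tilde = (2.0 / lam_max) * L - sp.identity(L.shape[0])   # eigenvalues mapped into [-1, 1]

def chebyshev_filter(x, theta):
    """Apply sum_k theta[k] * T_k(L_tilde) x via the Chebyshev recurrence."""
    t_prev, t_curr = x, L_tilde @ x                        # T_0 x and T_1 x
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2 * (L_tilde @ t_curr) - t_prev           # T_k = 2 L~ T_{k-1} - T_{k-2}
        out += theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return out

x = np.random.default_rng(0).standard_normal(L.shape[0])
y = chebyshev_filter(x, theta=np.array([0.5, -0.3, 0.1, 0.05]))   # K = 3, arbitrary thetas
print(y[:5])
```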
7.4 Heat Kernel and Diffusion Filters
The graph heat equation generalizes diffusion to graphs:
$\frac{dx(t)}{dt} = -L\, x(t).$
Solution: $x(t) = e^{-tL} x(0)$. In the spectral domain: $\hat{x}_k(t) = e^{-t\lambda_k}\,\hat{x}_k(0)$ - each frequency decays at rate $\lambda_k$.
The heat kernel $H_t = e^{-tL} = U e^{-t\Lambda} U^\top$ is a positive semidefinite matrix representing the diffusion of heat on the graph over time $t$. Entries $(H_t)_{ij}$ give the heat at vertex $j$ after time $t$ when a unit heat source is placed at vertex $i$.
Properties:
- For $t = 0$: $H_0 = I$ (no diffusion)
- For $t \to \infty$: $H_t$ converges to the projection onto constants on each component (heat equalizes, constant temperature on each component)
- Relates to the random walk: $e^{-tL_{\text{rw}}} = e^{-t}\sum_{k \ge 0}\frac{t^k}{k!}P^k$, a Poisson-weighted average of random walk steps
Diffusion distance. The distance between vertices $i$ and $j$ at time scale $t$ is (in one common unnormalized form):
$d_t(i,j) = \big\|(H_t)_{i,:} - (H_t)_{j,:}\big\|_2.$
This diffusion distance is more robust than shortest-path distance: it accounts for all paths between $i$ and $j$, not just the shortest one.
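A sketch of the heat kernel and the resulting distance on a barbell graph (two cliques joined by a short path); the time scale t = 1 and the unnormalized distance are illustrative choices:

```python
import numpy as np
import networkx as nx

G = nx.barbell_graph(10, 4)                     # two 10-cliques joined by a 4-node path
L = nx.laplacian_matrix(G).toarray().astype(float)
lam, U = np.linalg.eigh(L)

def heat_kernel(t):
    return U @ np.diag(np.exp(-t * lam)) @ U.T  # H_t = U exp(-t Lambda) U^T

H = heat_kernel(t=1.0)
d_same = np.linalg.norm(H[0] - H[1])                          # same clique
d_far = np.linalg.norm(H[0] - H[G.number_of_nodes() - 1])     # opposite clique
print(f"diffusion distance, same clique: {d_same:.4f}   far clique: {d_far:.4f}")
```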
For AI: The PPMI (Positive Pointwise Mutual Information) matrix used in graph-based word embeddings is approximately a diffusion kernel. The node2vec random walk (Grover & Leskovec, 2016) approximates diffusion distance.
7.5 From Chebyshev to GCN
The GCN layer (Kipf & Welling, 2017) is derived from ChebNet by:
Step 1: Set $K = 1$ (first-order Chebyshev approximation): $g_\theta(\tilde{L})x \approx \theta_0 x + \theta_1 \tilde{L} x$.
Step 2: Approximate $\lambda_{\max} \approx 2$ (holding for regular and near-regular graphs), so $\tilde{L} \approx L_{\text{sym}} - I = -D^{-1/2} A D^{-1/2}$.
Step 3: Constrain $\theta = \theta_0 = -\theta_1$ (reduce parameters to prevent overfitting):
$g_\theta(\tilde{L})\, x \approx \theta\,\big(I + D^{-1/2} A D^{-1/2}\big)\, x.$
Step 4: Add self-loops $\tilde{A} = A + I$ and renormalize with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$ to prevent numerical issues (the "renormalization trick"):
$I + D^{-1/2} A D^{-1/2} \;\to\; \hat{A} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}.$
Full GCN layer:
$H^{(l+1)} = \sigma\big(\hat{A}\, H^{(l)}\, W^{(l)}\big).$
Spectral interpretation. The GCN filter $g(\lambda) \approx 1 - \lambda$ is a low-pass filter: it passes low frequencies ($\lambda$ near $0$, smooth signals) and attenuates high frequencies ($\lambda$ large, rapidly varying signals). GCN is fundamentally a graph smoother.
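A sketch of the renormalization trick and a single untrained GCN layer in NumPy (features and weights are random placeholders, not a trained model):

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
n = G.number_of_nodes()
A_tilde = nx.to_numpy_array(G) + np.eye(n)             # A + I (self-loops)
d_tilde = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d_tilde, d_tilde))  # D~^-1/2 (A + I) D~^-1/2

vals = np.linalg.eigvalsh(A_hat)
print("spectral range of A_hat:", vals.min(), "to", vals.max())   # inside (-1, 1]

rng = np.random.default_rng(0)
H = rng.standard_normal((n, 16))                       # input node features (random here)
W = 0.1 * rng.standard_normal((16, 8))                 # untrained layer weights
H_next = np.maximum(A_hat @ H @ W, 0.0)                # one GCN layer with ReLU
print("output feature shape:", H_next.shape)
```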
Full GNN treatment: For GraphSAGE, GAT, MPNN framework, over-smoothing fixes, and expressiveness theory, see 11-05 Graph Neural Networks.
8. Spectral Clustering
8.1 Graph Partitioning Objectives
Minimum cut. Given a graph $G$ and integer $k$, partition $V = A_1 \cup \cdots \cup A_k$ (disjoint, non-empty) to minimize:
$\operatorname{cut}(A_1, \dots, A_k) = \frac{1}{2}\sum_{m=1}^{k} |\partial A_m|,$
where $\partial A_m$ is the set of edges between $A_m$ and its complement.
Problem with minimum cut. Minimum cut tends to cut off isolated vertices or very small sets - the trivial solution for a low-degree vertex has very few edges to cut. We need objectives that balance cluster sizes.
RatioCut (Hagen & Kahng, 1992):
$\operatorname{RatioCut}(A_1, \dots, A_k) = \sum_{m=1}^{k} \frac{\operatorname{cut}(A_m, \bar{A}_m)}{|A_m|}.$
Normalizes by the number of vertices in each partition - prevents very small cuts.
Normalized Cut (NCut) (Shi & Malik, 2000):
$\operatorname{NCut}(A_1, \dots, A_k) = \sum_{m=1}^{k} \frac{\operatorname{cut}(A_m, \bar{A}_m)}{\operatorname{vol}(A_m)}.$
Normalizes by the volume (total degree) - the degree-weighted version of RatioCut.
Both problems are NP-hard in general. Spectral clustering relaxes them to tractable eigenvalue problems.
8.2 RatioCut and Unnormalized Spectral Clustering
Two-cluster RatioCut. For $k = 2$ with partition $(A, \bar{A})$:
Define the indicator vector $f \in \mathbb{R}^n$:
$f_i = \sqrt{|\bar{A}|/|A|}$ if $i \in A$, and $f_i = -\sqrt{|A|/|\bar{A}|}$ if $i \in \bar{A}$.
Claim. $f^\top L f = n \cdot \operatorname{RatioCut}(A, \bar{A})$. Also: $f \perp \mathbf{1}$ and $\|f\|^2 = n$.
Proof: $f^\top L f = \sum_{(i,j) \in E} (f_i - f_j)^2$. The only nonzero terms come from edges crossing the cut: for $i \in A$, $j \in \bar{A}$:
$(f_i - f_j)^2 = \left(\sqrt{|\bar{A}|/|A|} + \sqrt{|A|/|\bar{A}|}\right)^2 = \frac{n}{|A|} + \frac{n}{|\bar{A}|}.$
Summing over all cut edges and using $\tfrac{1}{|A|} + \tfrac{1}{|\bar{A}|} = \tfrac{n}{|A||\bar{A}|}$:
$f^\top L f = \operatorname{cut}(A, \bar{A})\left(\frac{n}{|A|} + \frac{n}{|\bar{A}|}\right) = n \cdot \operatorname{RatioCut}(A, \bar{A}).$
Relaxation. The discrete optimization over indicator vectors of this form, subject to $f \perp \mathbf{1}$ and $\|f\| = \sqrt{n}$, is NP-hard. Relax the integrality constraint: allow any $f \in \mathbb{R}^n$. By Courant-Fischer, the solution is the Fiedler vector $v_2$.
Recovery. Given $v_2$, assign vertex $i$ to $A$ if $(v_2)_i \ge 0$, to $\bar{A}$ otherwise. In practice, use k-means with $k = 2$ on the entries of $v_2$ for robustness.
8.3 Normalized Cut (Shi & Malik 2000)
NCut relaxation. Define the indicator analogously to RatioCut but with volume weights: for partition $(A, \bar{A})$:
$f_i = \sqrt{\operatorname{vol}(\bar{A})/\operatorname{vol}(A)}$ if $i \in A$, and $f_i = -\sqrt{\operatorname{vol}(A)/\operatorname{vol}(\bar{A})}$ if $i \in \bar{A}$.
Then $f^\top L f = \operatorname{vol}(V) \cdot \operatorname{NCut}(A, \bar{A})$, subject to $Df \perp \mathbf{1}$ and $f^\top D f = \operatorname{vol}(V)$.
Generalized eigenvalue problem. The continuous relaxation is:
$\min_f\ \frac{f^\top L f}{f^\top D f} \quad \text{s.t. } Df \perp \mathbf{1}, \qquad \text{solved by } L f = \lambda D f,$
via the substitution $g = D^{1/2} f$. This is the standard Rayleigh quotient for $L_{\text{sym}}$, minimized by the Fiedler vector of $L_{\text{sym}}$. Thus:
$f = D^{-1/2} v_2,$
where $v_2$ is the Fiedler vector of $L_{\text{sym}}$.
Shi-Malik algorithm (2-cluster):
- Build $L$ and $D$ (or $L_{\text{rw}} = I - D^{-1}A$)
- Compute the Fiedler vector $f$ of the generalized problem $Lf = \lambda Df$
- Assign vertex $i$ to $A$ if $f_i \ge \tau$
- Choose the threshold $\tau$: empirically (try all thresholds and keep the best NCut) or at $0$
Multi-class NCut. For $k$ clusters, use the $k$ smallest eigenvectors of $L_{\text{sym}}$, form the matrix $U_k \in \mathbb{R}^{n \times k}$, normalize each row to unit norm, then apply k-means to the rows.
8.4 Multi-Way Spectral Clustering
The Ng-Jordan-Weiss (NJW) algorithm (2002) is the standard multi-class spectral clustering:
- Build the normalized Laplacian $L_{\text{sym}} = I - D^{-1/2} A D^{-1/2}$
- Compute the $k$ smallest eigenvectors (smallest eigenvalues of $L_{\text{sym}}$)
- Form $U_k \in \mathbb{R}^{n \times k}$ with these eigenvectors as columns
- Normalize rows: let $Y_{i,:} = U_k[i,:] / \|U_k[i,:]\|_2$ (row normalization)
- Apply k-means to the rows of $Y$
Why row normalization? The perturbation theory of 8.5 shows that in a perfect $k$-cluster graph, the rows of $U_k$ lie exactly on $k$ orthogonal directions. Row normalization maps these to the same point on the unit sphere regardless of degree, making k-means converge cleanly.
Perturbation theory justification. Consider a "block graph" consisting of $k$ disconnected cliques. The $k$ smallest eigenvalues of $L_{\text{sym}}$ are all $0$, with eigenvectors being the (degree-scaled) indicators of each clique. Any real graph with $k$ communities can be seen as a perturbed block graph; if the perturbation (inter-community edges) is small, the eigenvectors are close to the block indicators. Weyl's perturbation theorem quantifies how much the eigenvalues change (and the Davis-Kahan theorem, the eigenvectors).
8.5 Complete Algorithm and Implementation
SPECTRAL CLUSTERING ALGORITHM
========================================================================
Input: Adjacency matrix A in R^(n x n), number of clusters k
Output: Cluster assignments c \in {1,...,k}^n
1. Compute degree matrix D = diag(A*1)
2. Compute normalized Laplacian L_sym = D^(-1/2) (D - A) D^(-1/2)
(or use L_rw = I - D^(-1) A, but use L_sym for symmetric version)
3. Compute k smallest eigenvalues and eigenvectors of L_sym
   -> eigenvectors form columns of U_k in R^(n x k)
4. Normalize rows: Y_i = U_k[i,:] / ||U_k[i,:]||_2
(skip for RatioCut; required for NCut)
5. Apply k-means clustering to rows of Y
-> cluster centers \mu_1,...,\mu_k; assignments c[i] \in {1,...,k}
6. Return c
Complexity: O(n^3) for full eigendecomposition;
O(k*n*|E|) with Lanczos + k-means (large graphs)
========================================================================
Practical notes:
- Use the Lanczos algorithm or LOBPCG for computing the $k$ smallest eigenvectors of $L_{\text{sym}}$ on large sparse graphs (avoid full eigendecomposition)
- The choice of $k$ can be guided by the eigengap heuristic: choose the $k$ where the gap $\lambda_{k+1} - \lambda_k$ is largest
- K-means is run multiple times with random restarts; take the best result (lowest inertia)
- For disconnected graphs, the zero eigenvalues directly give the cluster indicators
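A compact sketch of the full pipeline on two concentric rings, where k-means on raw coordinates fails but NJW spectral clustering succeeds; the scikit-learn helpers and the k-NN graph parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_circles
from sklearn.neighbors import kneighbors_graph

X, y = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)
A = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=False)
A = (0.5 * (A + A.T)).toarray()                        # symmetrized k-NN affinity graph

d = A.sum(axis=1)
L_sym = np.eye(len(d)) - A / np.sqrt(np.outer(d, d))   # I - D^-1/2 A D^-1/2

lam, U = np.linalg.eigh(L_sym)
U_k = U[:, :2]                                         # 2 smallest eigenvectors
Y = U_k / np.linalg.norm(U_k, axis=1, keepdims=True)   # NJW row normalization
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Y)

agreement = max((labels == y).mean(), (labels != y).mean())
print("agreement with the true rings:", agreement)
```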
8.6 When Spectral Clustering Beats k-Means
K-means minimizes within-cluster variance assuming convex, isotropic, similarly-sized clusters. It fails on non-convex cluster shapes. Spectral clustering has no shape assumption - it works on any cluster structure that is well-separated in the graph.
When spectral clustering excels:
- Concentric rings, spirals, moons - any shape detectable by graph connectivity
- Clusters at multiple scales (nested communities)
- Data with non-Euclidean structure (molecules, social networks)
When k-means excels:
- Truly Gaussian clusters in $\mathbb{R}^d$
- Very large $n$ where eigenvector computation is too slow
- Cluster structure is well-captured by Euclidean distance
A critical nuance: Spectral clustering requires building the adjacency/affinity graph first. The choice of $k$-NN graph or $\epsilon$-neighborhood graph matters enormously for quality. A common failure mode: if the graph is built with too small a $k$ or $\epsilon$, even a true cluster may break into disconnected pieces. If too large, community structure is washed out.
9. Laplacian Eigenmaps and Graph Embeddings
9.1 The Embedding Problem
Given a graph $G$, we want a mapping $\Phi : V \to \mathbb{R}^d$ (with $d \ll n$) that preserves the graph structure: vertices that are nearby in the graph should be nearby in the embedding. Formally, we want:
$\min_{\Phi}\ \sum_{(i,j) \in E} w_{ij}\, \|\Phi(i) - \Phi(j)\|^2,$
subject to constraints that prevent the trivial solution $\Phi(i) = 0$ for all $i$.
Decomposing dimension by dimension, this is $d$ separate problems, each of the form:
$\min_{y \perp \mathbf{1},\ \|y\| = 1}\ \sum_{(i,j) \in E} w_{ij}(y_i - y_j)^2 = y^\top L y.$
This is exactly minimizing the Dirichlet energy, solved by the eigenvectors of $L$.
9.2 Laplacian Eigenmaps Algorithm
Belkin & Niyogi (2001/2003). Given data points $x_1, \dots, x_n \in \mathbb{R}^D$:
- Build the adjacency graph: Connect points $x_i$ and $x_j$ if they are among each other's $k$ nearest neighbors (or within distance $\epsilon$).
- Set edge weights: Use the heat kernel $w_{ij} = \exp\!\big(-\|x_i - x_j\|^2 / (2\sigma^2)\big)$ for connected pairs (with $\sigma$ a bandwidth parameter).
- Compute degree and Laplacian: $D_{ii} = \sum_j w_{ij}$, $L = D - W$.
- Solve the generalized eigenvalue problem:
  $L v = \lambda D v.$
  Equivalently: find eigenvectors of $L_{\text{rw}} = D^{-1} L$ (or of $L_{\text{sym}}$).
- Embed: Take the eigenvectors $v_2, \dots, v_{d+1}$ (skip the constant $v_1$) and set $\Phi(i) = \big(v_2(i), \dots, v_{d+1}(i)\big)$.
Optimality theorem. The Laplacian eigenmap embedding is the solution to the optimization problem:
$\min_{Y \in \mathbb{R}^{n \times d},\ Y^\top D Y = I}\ \operatorname{tr}(Y^\top L Y)$ (excluding the trivial constant direction).
The solution is $Y = [v_2, \dots, v_{d+1}]$, the generalized eigenvectors of $(L, D)$. This is optimal in the sense that no other $d$-dimensional embedding under the same normalization has smaller total Dirichlet energy.
Manifold learning interpretation. If the data points lie on a $d$-dimensional manifold embedded in $\mathbb{R}^D$, the Laplacian eigenmap recovers the intrinsic coordinates of the manifold. As $n \to \infty$ and the bandwidth $\sigma \to 0$ at an appropriate rate, the graph Laplacian converges to the Laplace-Beltrami operator on the manifold (Belkin & Niyogi, 2008).
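A sketch of the algorithm on a noisy helix in 3-D, solving the generalized problem L v = lambda D v with SciPy; the epsilon-neighborhood radius and bandwidth sigma are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 300))
X = np.c_[np.cos(t), np.sin(t), t] + 0.05 * rng.standard_normal((300, 3))   # noisy helix

dists = squareform(pdist(X))
sigma = 0.5
W = np.exp(-dists ** 2 / (2 * sigma ** 2)) * (dists < 1.0)   # heat-kernel weights, eps-graph
np.fill_diagonal(W, 0.0)

D = np.diag(W.sum(axis=1))
L = D - W
lam, V = eigh(L, D)                 # generalized eigenproblem L v = lambda D v
embedding = V[:, 1:3]               # skip the constant eigenvector

# the first nontrivial coordinate should track the curve parameter t (up to sign)
print("correlation with t:", np.corrcoef(embedding[:, 0], t)[0, 1])
```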
9.3 Diffusion Maps
Coifman & Lafon (2006) introduced diffusion maps as a multiscale version of Laplacian eigenmaps.
Define the diffusion operator $P = D^{-1} W$ (the random walk matrix) and its $t$-step version $P^t$. The diffusion distance at scale $t$ (up to a degree weighting):
$D_t(i,j)^2 = \sum_{k \ge 2} \beta_k^{2t}\,\big(\psi_k(i) - \psi_k(j)\big)^2,$
where $\beta_k, \psi_k$ are eigenvalues/eigenvectors of $P$. The diffusion map embedding:
$\Psi_t(i) = \big(\beta_2^t \psi_2(i),\ \beta_3^t \psi_3(i),\ \dots\big).$
The Euclidean distance in the diffusion map equals the diffusion distance: $\|\Psi_t(i) - \Psi_t(j)\| = D_t(i,j)$.
Multi-scale property. By varying $t$, diffusion maps reveal structure at different scales:
- Small $t$: local neighborhood structure
- Large $t$: global cluster structure (only the dominant eigenvectors with $\beta_k \approx 1$ remain)
9.4 Relationship to PCA and Kernel PCA
Kernel PCA (Scholkopf et al., 1998) computes the principal components of data in a feature space defined by a kernel $\kappa$. For a kernel matrix $K$ with $K_{ij} = \kappa(x_i, x_j)$, kernel PCA computes the eigenvectors of the centered kernel matrix.
Commute-time embedding. The commute time $C(i,j)$ between vertices $i$ and $j$ is the expected number of steps for a random walk starting at $i$ to reach $j$ and return. It equals:
$C(i,j) = \operatorname{vol}(V)\,(e_i - e_j)^\top L^{+} (e_i - e_j),$
where $L^{+}$ is the Laplacian pseudoinverse. This is kernel PCA with the commute-time kernel $L^{+}$. So Laplacian eigenmaps is (essentially) a special case of kernel PCA.
9.5 Spectral Positional Encodings for Transformers
Standard Transformers process tokens with positional encodings to handle sequence order. Graph Transformers need analogous positional encodings for graph nodes - but graphs have no canonical ordering.
Laplacian Positional Encoding (LapPE). Use the smallest non-trivial eigenvectors of the graph Laplacian as node positional encodings:
$\text{PE}(i) = \big(u_2(i), u_3(i), \dots, u_{k+1}(i)\big),$
where $u_j(i)$ is the $i$-th entry of the $j$-th Laplacian eigenvector.
Challenge: Sign ambiguity. Each eigenvector is defined only up to sign: if $u$ is an eigenvector, so is $-u$. This creates non-uniqueness in the PE.
Solutions:
- Random sign flips during training (Dwivedi et al., 2022): randomly flip the sign of each eigenvector during training; the Transformer learns sign-invariant functions
- SignNet (Lim et al., 2022): use a Deep Sets architecture that is invariant to sign flips: $f(u) = \rho\big(\phi(u) + \phi(-u)\big)$
- BasisNet: extend to the case of repeated eigenvalues (multiplicity $> 1$), which introduce rotational ambiguity within eigenspaces
RWPE (Random Walk Positional Encoding). Instead of Laplacian eigenvectors, use the return probabilities of $k$ steps of a random walk:
$\text{PE}(i) = \big(P_{ii}, (P^2)_{ii}, \dots, (P^k)_{ii}\big),$
where $P = D^{-1} A$. This avoids the sign ambiguity issue and is invariant to graph automorphisms. Used in GPS (Rampasek et al., 2022) - one of the best-performing graph Transformers.
Why LapPE/RWPE matter. Without positional encodings, graph Transformers cannot distinguish graph structure - all nodes with the same degree distribution look identical. Spectral PE gives each node a unique "spectral fingerprint" derived from its position in the graph's Fourier basis.
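A sketch computing both encodings for a small graph; the Watts-Strogatz generator, k = 8 dimensions, and the single random sign flip are illustrative assumptions:

```python
import numpy as np
import networkx as nx

G = nx.connected_watts_strogatz_graph(50, 6, 0.3, seed=0)
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
L_sym = np.eye(len(d)) - A / np.sqrt(np.outer(d, d))

k = 8
lam, U = np.linalg.eigh(L_sym)
lap_pe = U[:, 1:k + 1]                                   # k smallest non-trivial eigenvectors
signs = np.random.default_rng(0).choice([-1.0, 1.0], size=k)
lap_pe_flip = lap_pe * signs                             # one random sign-flip augmentation

P = A / d[:, None]                                       # random walk matrix D^-1 A
rw_pe = np.stack([np.diag(np.linalg.matrix_power(P, s)) for s in range(1, k + 1)], axis=1)

print("LapPE shape:", lap_pe_flip.shape, "  RWPE shape:", rw_pe.shape)   # both (n, k)
```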
10. Directed Graph Spectra
10.1 Directed Laplacians
For a directed graph (digraph) with edge set $E \subseteq V \times V$ (ordered pairs), the adjacency matrix is not symmetric: $A_{ij} = 1$ if $(i,j) \in E$, but $A_{ji}$ may be $0$.
In-degree and out-degree: For each vertex $i$, $d^{\text{in}}_i = \sum_j A_{ji}$ (number of incoming edges) and $d^{\text{out}}_i = \sum_j A_{ij}$ (outgoing edges).
Out-degree Laplacian: $L^{\text{out}} = D^{\text{out}} - A$, where $D^{\text{out}} = \operatorname{diag}(d^{\text{out}})$.
In-degree Laplacian: $L^{\text{in}} = D^{\text{in}} - A^\top$ (or equivalently, the out-degree Laplacian of the reversed graph).
Key difference from undirected case:
- $L^{\text{out}}$ is NOT symmetric in general
- Eigenvalues may be complex
- The row-sum-zero property holds: $L^{\text{out}}\mathbf{1} = 0$ (since each row of $A$ sums to $d^{\text{out}}_i$)
- But the column sums are $d^{\text{out}}_j - d^{\text{in}}_j$, not necessarily zero
Stationary distribution. The directed random walk $P = (D^{\text{out}})^{-1} A$ is row-stochastic. For a strongly connected digraph, the unique stationary distribution $\pi$ satisfies $\pi^\top P = \pi^\top$. The stationary distribution is NOT necessarily proportional to degree or uniform (unlike $d$-regular undirected graphs).
10.2 Kirchhoff's Matrix-Tree Theorem
Theorem (Kirchhoff, 1847). For a connected undirected graph $G$, the number of spanning trees $\tau(G)$ equals any cofactor of $L$:
$\tau(G) = \frac{1}{n}\,\lambda_2 \lambda_3 \cdots \lambda_n,$
where $\lambda_2, \dots, \lambda_n$ are the non-zero eigenvalues of $L$.
Proof sketch. By the Matrix-Tree theorem, $\tau(G)$ equals any principal $(n-1) \times (n-1)$ minor of $L$. By the Cauchy-Binet formula, this minor equals $\frac{1}{n}\prod_{k \ge 2} \lambda_k$, which follows from the spectral decomposition and the fact that $\lambda_1 = 0$ with eigenvector $\mathbf{1}/\sqrt{n}$.
Examples:
- $K_n$: $\lambda_k = n$ for $k \ge 2$ (all equal), so $\tau(K_n) = \frac{1}{n} n^{n-1} = n^{n-2}$ (Cayley's formula).
- $P_n$ (path): $\tau = 1$ (only one spanning tree - the path itself).
- $C_n$ (cycle): $\tau = n$.
For AI: The number of spanning trees measures "graph robustness." Networks with many spanning trees (like expanders) remain connected even after many edge failures. This metric appears in network reliability analysis for distributed training clusters.
10.3 PageRank as a Spectral Problem
PageRank (Page, Brin, Motwani, Winograd, 1998) - the algorithm behind Google Search - is fundamentally a spectral computation on a directed graph.
Setup. Model the Web as a directed graph: pages are vertices, hyperlinks are directed edges. Define the Google matrix:
$M = \alpha\, S + (1 - \alpha)\,\frac{1}{n}\mathbf{1}\mathbf{1}^\top,$
where $S$ is the column-stochastic random-walk matrix (column $j$ spreads page $j$'s rank equally along its out-links), $\alpha$ is the damping factor (typically $0.85$), and the rank-one term represents teleportation (random jumps to any page).
PageRank vector. The PageRank of each page is the stationary distribution $r$ of the Markov chain defined by $M$:
$M r = r, \qquad r \ge 0, \qquad \textstyle\sum_i r_i = 1.$
Equivalently, $r$ is the dominant eigenvector of $M$ (eigenvalue $1$).
Spectral computation. By the Perron-Frobenius theorem, $M$ is a positive stochastic matrix (all entries $> 0$ due to the teleportation term), so it has a unique dominant eigenvalue $1$ with a unique positive eigenvector $r$.
Power iteration. PageRank is computed by:
$r^{(t+1)} = M r^{(t)} = \alpha\, S\, r^{(t)} + \frac{1 - \alpha}{n}\mathbf{1}.$
Convergence rate: geometric with ratio $\alpha$ - the second eigenvalue of $M$ is at most $\alpha$. Each iteration is a sparse matrix-vector multiply $O(|E|)$.
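A sketch of PageRank by power iteration on a random digraph, handling dangling nodes by spreading their rank uniformly, checked against networkx's built-in pagerank (the graph and damping factor are illustrative):

```python
import numpy as np
import networkx as nx

G = nx.gnp_random_graph(200, 0.03, directed=True, seed=0)
n = G.number_of_nodes()
A = nx.to_numpy_array(G)              # A[i, j] = 1 for edge i -> j
out_deg = A.sum(axis=1)

alpha = 0.85
r = np.full(n, 1.0 / n)
for _ in range(100):
    share = np.where(out_deg > 0, r / np.maximum(out_deg, 1.0), 0.0)
    spread = share @ A                                   # rank flowing along out-links
    dangling = r[out_deg == 0].sum() / n                 # dangling mass, spread uniformly
    r = alpha * (spread + dangling) + (1 - alpha) / n    # teleportation term

r_nx = np.array([nx.pagerank(G, alpha=alpha)[v] for v in G.nodes()])
print("max abs difference vs networkx.pagerank:", np.abs(r - r_nx).max())
```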
For AI (RLHF and LLM preference graphs): In reinforcement learning from human feedback (RLHF), preference data can be modeled as a directed graph over responses, with edge meaning "response is preferred over ." PageRank on this preference graph gives a global ranking consistent with pairwise preferences. This is closely related to Bradley-Terry models used in reward model training (Ouyang et al., 2022).
10.4 Directed Graph Spectra in AI
Attention as a directed graph. In a Transformer, the attention weights define a directed weighted graph over tokens. The spectral properties of this attention graph have interpretability implications:
- The dominant eigenvector of the attention matrix identifies "hub" tokens - tokens that receive most attention
- Spectral analysis of attention graphs has been used in mechanistic interpretability to identify "induction heads" and "name mover heads" (Olsson et al., 2022)
- The spectral gap of the attention graph determines how quickly information mixes across token positions
Causal DAGs. In causal inference (Chapter 22), structural causal models are represented as DAGs. The spectral properties of the DAG adjacency matrix are related to the "depth" of causal chains: a large spectral radius means long-range causal effects.
11. Advanced Topics
11.1 Spectral Sparsification
Problem. For a dense graph with $n$ vertices and $m$ edges, many spectral algorithms are too slow. Can we find a sparse graph $H$ with far fewer edges that preserves the spectrum of $G$?
Definition. $H$ is an $\epsilon$-spectral sparsifier of $G$ if for all $x \in \mathbb{R}^n$:
$(1 - \epsilon)\, x^\top L_G\, x \;\le\; x^\top L_H\, x \;\le\; (1 + \epsilon)\, x^\top L_G\, x.$
Equivalently, $(1 - \epsilon) L_G \preceq L_H \preceq (1 + \epsilon) L_G$ in the PSD order.
Theorem (Spielman & Srivastava, 2011). Every graph has an $\epsilon$-spectral sparsifier with $O(n \log n / \epsilon^2)$ edges, computable in near-linear time using random sampling weighted by effective resistances.
Effective resistance. The effective resistance $R_{\text{eff}}(i,j)$ between vertices $i$ and $j$ is the electrical resistance between them when unit resistors are placed on each edge. It equals:
$R_{\text{eff}}(i,j) = (e_i - e_j)^\top L^{+} (e_i - e_j),$
where $L^{+}$ is the pseudoinverse of $L$.
Algorithm: Sample each edge $(i,j)$ with probability proportional to $w_{ij}\, R_{\text{eff}}(i,j)$, and rescale the weight of sampled edges. The resulting sparse graph preserves all spectral properties up to $1 \pm \epsilon$.
For AI: Spectral sparsification can reduce the computational cost of graph-based ML. A 10-million-edge social graph can be sparsified to $O(n \log n / \epsilon^2)$ edges while preserving spectral clustering quality.
11.2 Random Matrix Theory and Graph Spectra
The spectrum of a random graph has universal limiting behavior described by random matrix theory.
Erdos-Renyi model $G(n, p)$. For a random graph where each edge appears independently with probability $p$, the empirical spectral distribution of the rescaled adjacency matrix $A/\sqrt{np(1-p)}$ converges to the semicircle law (Wigner, 1955), with density $\frac{1}{2\pi}\sqrt{4 - x^2}$ on $[-2, 2]$.
The leading eigenvalue separates from the bulk at when , corresponding to the emergence of a giant connected component.
Implications for GNNs:
- Random weight matrices in GNNs have spectra approximated by the semicircle law (for large enough hidden dimensions)
- The alignment between the spectra of data graph and random weight matrices affects gradient flow in training
- Spectral norm regularization of GNN weights controls training stability by constraining the spectral radius
11.3 Graph Wavelets
Motivation. Laplacian eigenvectors are global: is supported on all vertices. For signals with local structure (e.g., a signal that varies in one part of the graph but is constant elsewhere), global eigenvectors are inefficient. We need a local, multiscale basis - a graph wavelet transform.
Hammond, Vandergheynst & Gribonval (2011). For a vertex and scale , define the graph wavelet centered at at scale :
where is the indicator vector of vertex and is a scaled spectral filter (bandpass at frequency ). In the spectral domain:
Properties of graph wavelets:
- Spatially localized: if has compact spectral support , then is -hop localized where depends on the bandwidth and
- Frequency selective: wavelets at scale are sensitive to frequency
- Frame bounds: for appropriate , forms a frame (redundant but stable basis)
Scattering transform on graphs (Gama et al., 2019): Compose multiple wavelet transforms with pointwise nonlinearities to build invariant/equivariant features. Provides theoretical guarantees for GNN expressiveness.
11.4 Infinite Graphs and Spectral Measures
For infinite graphs (e.g., the integer lattice , infinite trees), the Laplacian is an unbounded operator on and the spectrum is no longer a finite set but a spectral measure .
Example: Integer lattice . The Laplacian on has a continuous spectrum (the -dimensional discrete Laplacian spectrum). This connects to the theory of periodic operators in solid-state physics (Bloch's theorem).
Spectral measure. For a vertex , the spectral measure is defined by:
The spectral measure encodes everything about the local geometry of the graph as seen from .
Convergence of finite graphs. If a sequence of finite graphs converges in the Benjamini-Schramm sense to an infinite graph , the empirical spectral distributions of converge weakly to the spectral measure of .
Preview: The spectral theory of infinite-dimensional operators is the subject of Chapter 12: Functional Analysis, where Hilbert spaces, unbounded operators, and spectral measures are developed fully.
12. Applications in Machine Learning
12.1 Semi-Supervised Learning on Graphs
The problem. Given a graph $G = (V, E)$ with $n$ nodes, a small set of labeled nodes $\mathcal{L} \subset V$ with labels $y_i$, and many unlabeled nodes $\mathcal{U} = V \setminus \mathcal{L}$, assign labels to all unlabeled nodes.
Graph-based regularization (Zhou et al., 2004; Zhu et al., 2003). Find a label function $f : V \to \mathbb{R}$ that:
- Agrees with the given labels on $\mathcal{L}$
- Is smooth on the graph (nearby nodes have similar labels)
The objective:
$$\min_f \; \sum_{i \in \mathcal{L}} (f_i - y_i)^2 + \gamma\, f^\top L f.$$
The closed-form solution involves $(I + \gamma L)^{-1}$ - a smoothing operator. In the spectral domain:
$$\hat{f}_k = \frac{\hat{y}_k}{1 + \gamma \lambda_k}.$$
High-frequency components ($\lambda_k$ large) are strongly regularized toward zero; low-frequency components are preserved.
Connection to label propagation. The Gaussian Fields and Harmonic Functions algorithm (Zhu et al., 2003) clamps labeled node values to the true labels and propagates via:
$$f_{\mathcal{U}} = -L_{\mathcal{U}\mathcal{U}}^{-1}\, L_{\mathcal{U}\mathcal{L}}\, f_{\mathcal{L}}$$
(where $L_{\mathcal{U}\mathcal{U}}$ is the Laplacian restricted to unlabeled nodes). This harmonic interpolation assigns each unlabeled node the weighted average of its neighbors' labels, with weights determined by graph structure.
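A minimal NumPy sketch of this harmonic solve; the function name `harmonic_labels` and the toy path-graph example are ours:

```python
import numpy as np

def harmonic_labels(L, labeled_idx, y_labeled):
    """Zhu et al. (2003): clamp labeled nodes to their labels and solve
    L_UU f_U = -L_UL y_L for the unlabeled nodes (harmonic interpolation)."""
    n = L.shape[0]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    L_UU = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_UL = L[np.ix_(unlabeled_idx, labeled_idx)]
    f = np.zeros(n)
    f[labeled_idx] = y_labeled
    f[unlabeled_idx] = np.linalg.solve(L_UU, -L_UL @ y_labeled)
    return f

# A 6-node path with the two endpoints labeled +1 / -1:
# the interior nodes interpolate linearly between them.
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
print(harmonic_labels(L, np.array([0, n - 1]), np.array([1.0, -1.0])))
# -> [ 1.   0.6  0.2 -0.2 -0.6 -1. ]
```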
Modern variant: GCN for semi-supervised learning. The two-layer GCN of Kipf & Welling (2017) was originally proposed for exactly this task: semi-supervised node classification. The Laplacian smoothing is built into the propagation rule, making it a parameterized (learnable) version of label propagation.
12.2 Knowledge Graph Analysis
A knowledge graph (KG) represents world knowledge as a graph: entities (nodes) connected by typed relations (edges). Examples: Freebase, Wikidata, ConceptNet, UMLS (medical).
Spectral properties of KGs:
- KGs are heterogeneous (multiple edge types) and sparse ($m \ll n^2$)
- The adjacency spectrum often follows a power law: many small eigenvalues, a few large ones
- The spectral gap $\lambda_2$ measures how well-integrated the KG is: a small gap indicates a KG with nearly isolated sub-graphs (different domains not well connected)
Spectral regularization. KG embedding models (TransE, RotatE, ComplEx) learn entity and relation embeddings. Adding a spectral regularization term:
$$\Omega(E) = \sum_{r} \operatorname{tr}\!\left(E^\top L_r E\right) = \sum_{r} \sum_{(h,t) \in E_r} \lVert E_h - E_t \rVert^2$$
encourages the entity embeddings $E$ to be smooth with respect to each relation type $r$'s subgraph - entities connected by relation $r$ should have similar embeddings. This improves link prediction accuracy, especially for rare relations.
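As a sketch of what such a penalty could look like in code (the function name, the dense-matrix construction, and the `relation_edge_lists` layout are illustrative assumptions, not the API of any KG embedding library):

```python
import numpy as np

def spectral_kg_penalty(E, relation_edge_lists):
    """Smoothness penalty sum_r tr(E^T L_r E) = sum_r sum_{(h,t) in r} ||E_h - E_t||^2.
    E: (n_entities, dim) embedding matrix.
    relation_edge_lists: dict mapping each relation name to a list of (head, tail) pairs."""
    n = E.shape[0]
    penalty = 0.0
    for edges in relation_edge_lists.values():
        A_r = np.zeros((n, n))
        for h, t in edges:
            A_r[h, t] = A_r[t, h] = 1.0          # symmetrized relation subgraph
        L_r = np.diag(A_r.sum(axis=1)) - A_r     # dense for clarity; use sparse in practice
        penalty += np.trace(E.T @ L_r @ E)
    return penalty

# Toy usage: 4 entities, 8-dim embeddings, two relations.
rng = np.random.default_rng(0)
E = rng.normal(size=(4, 8))
print(spectral_kg_penalty(E, {"works_at": [(0, 1)], "located_in": [(1, 2), (2, 3)]}))
```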
12.3 Molecular Property Prediction
Molecules are naturally represented as graphs: atoms are nodes, chemical bonds are edges. Predicting molecular properties (solubility, toxicity, drug-likeness) from molecular graphs is a key application of GNNs.
Spectral molecular fingerprints. The eigenvalue spectrum of the molecular graph Laplacian provides rotation- and permutation-invariant descriptors. The "spectral profile" uniquely characterizes many molecular structures.
Graph edit distance and spectral distance. Two molecules tend to have similar properties if they have similar spectral profiles. The distance:
$$d_{\text{spec}}(G_1, G_2) = \left\lVert \lambda(G_1) - \lambda(G_2) \right\rVert_2$$
(where the eigenvalue vectors are sorted and zero-padded to the same length) approximates graph edit distance and correlates with molecular similarity.
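A small NumPy sketch of this spectral distance, assuming dense adjacency matrices; the zero-padding convention (leading zeros on the shorter spectrum) is one reasonable reading of the recipe above:

```python
import numpy as np

def laplacian_spectrum(A):
    return np.sort(np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A))

def spectral_distance(A1, A2):
    """||lambda(G1) - lambda(G2)||_2 with spectra sorted ascending and the shorter one
    padded with leading zeros (so both start from the trivial eigenvalue 0)."""
    s1, s2 = laplacian_spectrum(A1), laplacian_spectrum(A2)
    m = max(len(s1), len(s2))
    s1 = np.pad(s1, (m - len(s1), 0))
    s2 = np.pad(s2, (m - len(s2), 0))
    return np.linalg.norm(s1 - s2)

# Toy example: a 6-atom ring (cyclohexane-like skeleton) vs. a 6-atom chain.
ring = np.roll(np.eye(6), 1, axis=1)
ring = ring + ring.T
chain = np.diag(np.ones(5), 1)
chain = chain + chain.T
print(round(spectral_distance(ring, chain), 3))
```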
Equivariance and invariance. Spectral fingerprints are invariant to atom permutation (graph isomorphism), which is the correct invariance for molecular property prediction. However, they are blind to chirality (mirror image molecules) - a known limitation requiring higher-order structural features.
12.4 Attention Pattern Analysis in LLMs
An $H$-head attention layer in a Transformer computes attention weight matrices $A^{(1)}, \dots, A^{(H)}$, each of size $T \times T$ for sequence length $T$ (and row-stochastic, since each row is a softmax). These define weighted directed graphs over the token positions.
Spectral analysis of attention. The eigenvalues of $A^{(h)}$ reveal the attention pattern structure:
- If $A^{(h)} = \frac{1}{T}\mathbf{1}\mathbf{1}^\top$ (uniform attention): $\lambda_1 = 1$ and all other eigenvalues are $0$
- If $A^{(h)} = I$ (attend only to self): every eigenvalue equals $1$
- Induction heads (Olsson et al., 2022) have a large spectral gap between $|\lambda_1|$ and $|\lambda_2|$ - they attend sharply to a few positions
Attention graph Laplacian. Define the symmetrized attention Laplacian $L^{(h)} = D^{(h)} - \tfrac{1}{2}\big(A^{(h)} + A^{(h)\top}\big)$, where $D^{(h)}$ is the degree matrix of the symmetrized weights. The Fiedler vector of $L^{(h)}$ identifies the two groups of tokens most separated by head $h$'s attention.
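A minimal sketch of this analysis for a single head, assuming a row-stochastic attention matrix as a dense NumPy array; the block-structured toy head is our own illustration:

```python
import numpy as np
from scipy.linalg import eigh

def attention_fiedler(A_head):
    """Symmetrize a row-stochastic attention matrix, build its Laplacian,
    and return (lambda_2, Fiedler vector) - the token split this head most separates."""
    W = 0.5 * (A_head + A_head.T)
    L = np.diag(W.sum(axis=1)) - W
    lam, U = eigh(L)
    return lam[1], U[:, 1]

# Toy head over 8 tokens: strong attention within {0..3} and {4..7}, weak across.
T = 8
A = np.full((T, T), 0.01)
A[:4, :4] = 0.25
A[4:, 4:] = 0.25
A = A / A.sum(axis=1, keepdims=True)       # make rows sum to 1, like a softmax
lam2, fiedler = attention_fiedler(A)
print(round(lam2, 3), np.sign(fiedler))    # small lambda_2; signs split {0..3} vs {4..7}
```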
Applications:
- Attention head pruning: Heads whose attention is close to uniform (attention matrix $\approx \frac{1}{T}\mathbf{1}\mathbf{1}^\top$, hence an almost rank-one spectrum) contribute little and can often be pruned (Michel et al., 2019; Voita et al., 2019)
- Mechanistic interpretability: Spectral analysis of multi-head attention composition identifies information routing circuits (Elhage et al., 2021)
- Context window analysis: The Laplacian spectrum of the attention graph changes as more tokens are added; sudden changes in $\lambda_2$ indicate "phase transitions" in how the model processes context
13. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Confusing $L = D - A$ with $A - D$ | Sign convention: $L = D - A$ gives $L \succeq 0$. With $A - D$, all eigenvalues are $\le 0$. | Always check that the diagonal of $L$ equals the (positive) degree sequence to verify the sign convention |
| 2 | Using the unnormalized $L$ for spectral clustering on graphs with varying degrees | RatioCut (unnormalized) penalizes unequally-sized partitions; for real data, NCut (normalized) gives much better clusters | Use $L_{\text{sym}}$ or $L_{\text{rw}}$ for spectral clustering; unnormalized $L$ only for (near-)regular graphs |
| 3 | Taking $u_1$ (the eigenvector of $\lambda_1 = 0$) instead of the Fiedler vector $u_2$ | $u_1$ is the constant vector (trivial null vector); it has no discriminative information | In NumPy/SciPy, eigenvectors are sorted by ascending eigenvalue; take column index 1 (0-indexed), not index 0 |
| 4 | Ignoring sign ambiguity of eigenvectors | For each eigenvector $u_k$, both $u_k$ and $-u_k$ are valid; different runs give different signs | Use sign-invariant quantities (e.g., absolute values) for visualizations; or use RWPE instead of LapPE to avoid sign issues |
| 5 | Confusing the Cheeger inequality direction: $\lambda_2 / 2 \le h_G$ vs $h_G \le \sqrt{2\lambda_2}$ | They are the two sides of one two-sided inequality; the confusing part is that large $\lambda_2$ implies large $h_G$ (good expander), not small $h_G$ | Remember: small $\lambda_2$ <-> bottleneck <-> small $h_G$ <-> easy to cut. Large $\lambda_2$ <-> expander <-> large $h_G$ <-> hard to cut |
| 6 | Computing the graph Laplacian for a disconnected graph and expecting $\lambda_2 > 0$ | For a disconnected graph, $\lambda_2 = 0$ always. The null space has dimension equal to the number of components. | Check connectivity before spectral clustering. If the graph is disconnected, handle each component separately or add a small connectivity term |
| 7 | Treating spectral clustering as scale-free (the same regardless of $k$) | The cluster structure at scale $k$ uses eigenvectors $u_1, \dots, u_k$; the $k$-th eigenvector captures increasingly fine-grained structure | Choose $k$ using the eigengap heuristic: $k^* = \arg\max_k (\lambda_{k+1} - \lambda_k)$ |
| 8 | Applying GCN (a low-pass filter) to heterophilic graphs | GCN smooths features toward neighborhood averages. For heterophilic graphs (connected nodes have different labels), this destroys discriminative information | Use high-pass or band-pass graph filters (e.g., GPRGNN, FAGCN, BernNet) for heterophilic settings |
| 9 | Confusing $L_{\text{sym}}$ and $L_{\text{rw}}$: using $L_{\text{rw}}$ for Ng-Jordan-Weiss | NJW requires $L_{\text{sym}}$ (symmetric, orthogonal eigenvectors) for the row normalization to work. $L_{\text{rw}}$ is not symmetric, so its eigenvectors are not orthogonal | Use scipy.linalg.eigh(L_sym) for a symmetric eigendecomposition; eigenvectors form orthonormal columns |
| 10 | Over-interpreting spectral methods on cospectral graphs | Two different graphs can have identical Laplacian spectra. Spectral features cannot distinguish them | Augment spectral features with structural features (degree, triangle count, etc.) or use WL-based methods |
| 11 | Forgetting the self-loop renormalization in the GCN derivation | Without self-loops, the first-order filter $I + D^{-1/2} A D^{-1/2}$ has eigenvalues in $[0, 2]$, which is unstable under repeated application; renormalizing with $\tilde{A} = A + I$ shifts them into $(-1, 1]$ | Always add self-loops and renormalize with $\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ in the GCN propagation rule |
| 12 | Using the full GFT ($O(n^3)$ eigendecomposition) on graphs with millions of nodes | Full eigendecomposition is $O(n^3)$; for $n = 10^6$, that is on the order of $10^{18}$ operations - completely intractable | Use polynomial filters (Chebyshev, sparse matrix-vector products), Lanczos for the top-$k$ eigenvectors, or RWPE |
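Two of the most common pitfalls above (#3 and #7) are avoided by a few lines of SciPy; the toy two-triangle graph below is our own illustration:

```python
import numpy as np
from scipy.linalg import eigh

# Two triangles joined by one bridge edge: an obvious 2-cluster graph.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

lam, U = eigh(L)                      # eigenvalues ascending, orthonormal eigenvector columns
fiedler = U[:, 1]                     # mistake 3: column 0 is the constant vector - skip it
k = int(np.argmax(np.diff(lam))) + 1  # mistake 7: eigengap heuristic for the number of clusters
print(np.sign(fiedler), k)            # signs split {0,1,2} from {3,4,5}; k == 2
```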
14. Exercises
Exercise 1 * - Laplacian Construction and Properties
For the following graph $G$: vertex set $V$, edge set $E$ (an unweighted, undirected graph):
(a) Write out $A$, $D$, and $L = D - A$. (b) Verify the quadratic-form identity $x^\top L x = \sum_{(i,j) \in E} (x_i - x_j)^2$ for a vector $x$ of your choice. (c) Compute all eigenvalues of $L$. How many connected components does $G$ have? (d) Compute $L_{\text{sym}} = D^{-1/2} L D^{-1/2}$ and verify its eigenvalues lie in $[0, 2]$. (e) For AI: Which eigenvector of $L$ would be used for spectral bisection? What partition does it suggest?
Exercise 2 * - Spectrum of Special Graphs
(a) Derive the eigenvalues of $L$ for the cycle graph $C_5$ (5 vertices in a cycle). Show all work. (b) Compute $\lambda_2(C_5)$. Is the cycle more or less connected (in the algebraic sense) than the path $P_5$? Use the formula $\lambda_2(P_n) = 2\left(1 - \cos(\pi/n)\right)$. (c) For the complete graph $K_n$: prove that $\lambda_2 = \lambda_3 = \dots = \lambda_n = n$, i.e., all nonzero eigenvalues are equal. (d) For a star graph $S_n$ (one hub, $n - 1$ leaves): find all eigenvalues and explain geometrically why $\lambda_2 = 1$ regardless of $n$.
Exercise 3 * - Fiedler Vector and Graph Bisection
Given a barbell graph: two cliques connected by a single bridge edge:
(a) Describe qualitatively what the Fiedler vector looks like without computing it. Which vertices get positive values? Negative?
(b) Implement the graph in NumPy, compute $L$, and find $\lambda_2$ and the Fiedler vector $u_2$ using scipy.linalg.eigh. Plot the Fiedler vector values at each vertex.
(c) Use the Fiedler vector to perform spectral bisection. What is the conductance of the resulting cut?
(d) What is $\lambda_2$ for this graph? Is it close to 0? What does this say about the graph's connectivity?
Exercise 4 ** - Cheeger Inequality Verification
For the path graph $P_{10}$ (10 vertices in a line):
(a) Compute $\lambda_2$ analytically using the known formula $\lambda_2(P_n) = 2(1 - \cos(\pi/n))$. (b) Find the Cheeger constant $h_G$ by enumerating the optimal cut. (Hint: by symmetry, the optimal cut is in the middle.) (c) Verify that Cheeger's inequality $\lambda_2/2 \le h_G \le \sqrt{2\lambda_2}$ holds. How tight are the bounds? (d) Implement the Fiedler vector sweep algorithm to find a cut with conductance at most $\sqrt{2\lambda_2}$. (e) For AI: If $P_{10}$ were the attention graph of a 10-token sequence, what does the Cheeger constant tell you about information flow?
Exercise 5 ** - Graph Fourier Transform
Define a "community signal" on the karate club graph (Zachary 1977, available in NetworkX) where if node is in community 1 and otherwise.
(a) Compute the GFT of the community signal. (b) Plot vs. . Is the community signal concentrated in low or high frequencies? (c) Define a "noisy" signal where is Gaussian noise. Apply a low-pass filter to (keep only the first 5 frequency components). (d) Compare the filtered signal to the true community signal. What fraction of nodes are correctly assigned? (e) How does this connect to label propagation in semi-supervised learning?
Exercise 6 ** - Spectral Clustering
Generate a synthetic graph with 3 communities using the Stochastic Block Model:
- 3 blocks of 50 nodes each
- Intra-block edge probability $p_{\text{in}}$, inter-block probability $p_{\text{out}}$, with $p_{\text{in}} \gg p_{\text{out}}$
(a) Compute the unnormalized Laplacian $L$. Plot the first 6 eigenvalues. Where is the largest eigengap? (b) Implement the NJW spectral clustering algorithm (Ng-Jordan-Weiss) for $k = 3$. (c) Compute the accuracy of the spectral clustering (comparing to the known ground-truth communities, accounting for label permutations). (d) Repeat with $p_{\text{out}}$ raised toward $p_{\text{in}}$ (near the phase transition). How does the clustering accuracy degrade? (e) For AI: How does the eigengap heuristic perform? Plot accuracy vs. $k$ to verify the correct number of clusters is detected.
Exercise 7 *** - Laplacian Positional Encodings
Implement Laplacian Positional Encodings (LapPE) and test them on a simple graph classification task:
(a) For each graph in a small graph dataset (or a synthetic set with 3 classes: path, cycle, star variants), compute the top-$k$ Laplacian eigenvectors as node features. (b) Handle sign ambiguity by randomly flipping the sign of each eigenvector at each forward pass (as in Dwivedi et al., 2022). Show that a model trained this way is sign-invariant. (c) Implement RWPE as an alternative: $\text{RWPE}_i = \big[(D^{-1}A)_{ii},\, ((D^{-1}A)^2)_{ii},\, \dots,\, ((D^{-1}A)^K)_{ii}\big]$ for a chosen walk length $K$. Compare LapPE and RWPE on the classification task. (d) Explain theoretically why RWPE avoids the sign ambiguity problem while LapPE does not. (e) For AI: In GPT-style attention, can you use RWPE to give the model a "graph-aware" positional encoding? What would this enable for graph reasoning tasks?
Exercise 8 *** - PageRank and Spectral Analysis
Construct a small directed graph representing a citation network (10 papers, edges from citing paper to cited paper):
(a) Implement power iteration to compute the PageRank vector $\pi$ with damping factor $\alpha$ (the standard choice is $\alpha = 0.85$). Verify convergence.
(b) Compute the dominant eigenvalue and eigenvector of the Google matrix directly via scipy.linalg.eig. Compare to the power iteration result.
(c) Add a "dangling node" (a paper with no outgoing citations). How does this affect the Google matrix? How is it handled in practice?
(d) Compute the mixing time: how many power-iteration steps are needed before $\lVert \pi^{(t)} - \pi^{(t-1)} \rVert_1$ drops below a fixed tolerance (e.g., $10^{-6}$)? How does this relate to the second-largest eigenvalue modulus $|\lambda_2|$ of the Google matrix?
(e) For AI: In RLHF, model responses can be ranked using a directed preference graph. Implement PageRank-based ranking on a set of 5 responses with pairwise preference comparisons. How does it compare to simple win-count ranking?
15. Why This Matters for AI (2026 Perspective)
| Concept | AI/ML Impact |
|---|---|
| Graph Laplacian spectrum | Foundation of Graph Convolutional Networks (Kipf & Welling, 2017); GCN layer = first-order Chebyshev filter; used in every graph-based ML system |
| Fiedler vector / spectral bisection | Graph partitioning for distributed training (model parallel + pipeline parallel); partition the computation graph of a large model across devices |
| Cheeger inequality | Quantifies over-smoothing rate in deep GNNs; expanders over-smooth fastest; guides choice of GNN depth and skip-connection design |
| Spectral clustering | Gold-standard for community detection in social networks, knowledge graphs, citation networks; used in data curation for LLM pretraining |
| Graph Fourier Transform | Spectral convolution -> polynomial approximation -> spatial GNN: the entire GNN derivation hierarchy is a spectral story; ChebNet (Defferrard et al., 2016) |
| Laplacian Positional Encodings | LapPE in GPS (Rampasek et al., 2022); RWPE in graph Transformers; enables graph Transformers to be position-aware without hardcoded sequence order |
| Random walk mixing | RWPE computation; node2vec walks; GraphSAGE neighborhood sampling; mixing time determines required walk length for meaningful embeddings |
| Heat kernel / diffusion | Graph diffusion networks (Klicpera et al., 2019); APPNP; diffusion-based denoising on knowledge graphs; personalized PageRank for neighborhood aggregation |
| Spectral sparsification | Fast GNN training on large graphs: sparsify the graph while preserving spectral properties; used in GraphSAINT, ClusterGCN |
| PageRank | RLHF preference aggregation; importance weighting in retrieval-augmented generation; entity importance in knowledge graphs |
| Matrix-Tree theorem | Spanning tree sampling for graph augmentation in self-supervised GNN training; tree-structured attention in structured state space models |
| Directed graph spectra | Attention head analysis in mechanistic interpretability (Olsson et al., 2022); causal graph structure in causal LLMs; knowledge graph relation asymmetry |
16. Conceptual Bridge
Where we came from. This section builds directly on three pillars:
- 11-01 Graph Basics provided the combinatorial vocabulary: vertices, edges, paths, connectivity, bipartiteness. These definitions form the domain on which spectral graph theory operates.
- 11-02 Graph Representations introduced the adjacency matrix and Laplacian as data structures. We now treat them as linear operators with rich algebraic structure.
- 03 Advanced Linear Algebra (eigenvalues, spectral theorem, PSD matrices) gave us the mathematical machinery. Spectral graph theory is linear algebra applied to graphs.
What this section proved. Starting from the simple definition $L = D - A$, we established:
- $L \succeq 0$ (proved via the quadratic form $x^\top L x = \sum_{(i,j) \in E} w_{ij}\,(x_i - x_j)^2$)
- The null space of $L$ encodes the connected components
- $\lambda_2$ quantifies connectivity (Fiedler, 1973)
- $\lambda_2$ is tightly related to the minimum normalized cut (Cheeger, 1970; Alon & Milman, 1985)
- Eigenvectors of $L$ form a natural Fourier basis for graph signals
- Spectral clustering is the continuous relaxation of NP-hard graph partitioning
- The GCN layer is a first-order spectral filter (Kipf & Welling, 2017)
Where we are going. Two sections lie ahead:
11-05 Graph Neural Networks (immediate next): The spectral foundation developed here - GCN derivation, over-smoothing as diffusion, spectral filters - motivates the full GNN architecture zoo. The MPNN framework, GAT attention, GraphSAGE induction, and over-smoothing mitigations are all seen more clearly through the spectral lens.
12 Functional Analysis (next chapter): The spectral theory of the discrete graph Laplacian is a special case of the spectral theory of self-adjoint operators on Hilbert spaces. The Laplace-Beltrami operator on Riemannian manifolds is the continuous limit of the graph Laplacian. Kernel methods, Mercer's theorem, and Reproducing Kernel Hilbert Spaces (RKHS) generalize what we built here to infinite-dimensional settings.
SPECTRAL GRAPH THEORY IN THE CURRICULUM
============================================================================
Chapter 02-03 Chapter 11 Chapter 12
Linear Algebra Graph Theory Functional Analysis
------------- ------------ --------------------
Eigenvalues ------> 04 Spectral ------> Laplace-Beltrami
PSD matrices Graph Theory operator
Spectral | Spectral measure
theorem | RKHS
|
+----------+----------+
v v
11-05 GNNs 22 Causal
GCN, GAT Inference
GraphSAGE Causal DAGs
MPNN d-separation
KEY RESULTS IN 04:
---------------------------------------------------------------------
L = D - A \succeq 0                                  (proved via the quadratic form)
ker(L) = span of component indicators                (connected components theorem)
Cheeger: \lambda_2/2 \leq h \leq \sqrt{2\lambda_2}   (connectivity <-> eigenvalue)
GFT: \hat{x} = U^T x                                 (graph Fourier transform)
GCN = 1st-order Chebyshev filter                     (ChebNet with K=1, \lambda_max \approx 2)
============================================================================
The unifying theme. Spectral graph theory teaches a single lesson: linear algebraic structure encodes combinatorial structure. The eigenvalues of a matrix you can compute in polynomial time ($O(n^3)$, or far less with sparse iterative methods) reveal properties of the graph that are NP-hard to compute directly. This is the power of the spectral approach, and it is why spectral methods remain foundational even as spatial GNNs dominate in practice - the theory explains why the practice works.
<- Back to Graph Theory | Previous: Graph Algorithms <- | Next: Graph Neural Networks ->