Spectral Graph Theory - Part 1: 1. Intuition to 6. Graph Fourier Transform and Signal Processing
1. Intuition
1.1 Hearing the Shape of a Graph
In 1966, mathematician Mark Kac posed the question: "Can you hear the shape of a drum?" - meaning, can you reconstruct the geometry of a vibrating membrane from the frequencies it produces? The question turned out to have a negative answer in general, but it crystallized one of the deepest ideas in mathematics: the spectrum of a differential operator encodes geometric information.
Spectral graph theory asks the same question for discrete structures. A graph has an associated matrix - the Laplacian - whose eigenvalues form the graph spectrum. These numbers are not arbitrary: they encode whether the graph is connected, how tightly its communities are glued together, how quickly a random walk mixes across its edges, how hard it is to cut the graph in two.
Think of a social network. Each person is a node; each friendship is an edge. The graph has "natural frequencies": a society with two isolated groups (a disconnected graph) has a different spectrum from one that is fully interconnected. The small eigenvalues of the Laplacian correspond to smooth, slowly-varying signals - the overall community membership function. The large eigenvalues correspond to rapidly-oscillating signals - the microscopic variation from person to person. This is the graph analogue of low and high frequencies in audio.
Three statements, each surprising when first encountered, that spectral graph theory makes precise:
- The number of connected components of G equals the number of times 0 appears as an eigenvalue of L.
- The second-smallest eigenvalue \lambda_2 - the "Fiedler value" - tells you how hard it is to disconnect the graph. A graph is harder to cut when \lambda_2 is larger.
- The eigenvector corresponding to \lambda_2 - the "Fiedler vector" - assigns a real number to each vertex, and the sign of this number tells you which side of the best bisection each vertex belongs to.
These are not vague analogies. They are theorems with proofs, and they form the backbone of a theory that has become indispensable in machine learning.
1.2 The Three Graph Matrices
For a graph G = (V, E) with n vertices and m edges, three matrices appear constantly:
Adjacency matrix A: A_{ij} = 1 if (i, j) \in E, and A_{ij} = 0 otherwise.
For undirected graphs, A is symmetric. For weighted graphs, A_{ij} = w_{ij}, the edge weight. The adjacency matrix encodes the direct connections in the graph.
Degree matrix D: a diagonal matrix with D_{ii} = \deg(i) = \sum_j A_{ij}, the degree of vertex i. For weighted graphs, D_{ii} is the weighted degree (also called strength).
Graph Laplacian L = D - A: the central object of spectral graph theory. Explicitly:
L_{ij} = \deg(i) if i = j;  L_{ij} = -w_{ij} if (i, j) \in E;  L_{ij} = 0 otherwise.
The Laplacian is named after Pierre-Simon Laplace because it is the discrete analogue of the continuous Laplace operator -\Delta. For a function f defined on the vertices:
(Lf)(i) = \sum_{j \sim i} w_{ij} (f(i) - f(j))
This is the "discrete second derivative" - it measures how much the value at vertex i differs from the average value among its neighbors.
For AI: In a Graph Neural Network, the aggregation step (multiplying node features X by the adjacency matrix) is tied to the Laplacian through A = D - L: the product AX equals DX - LX, degree-scaled features minus the Laplacian applied to the features. The Laplacian is implicitly present in every GNN layer.
1.3 Why Eigenvalues Reveal Structure
The Laplacian is a real symmetric positive semidefinite matrix. By the spectral theorem (03-Advanced-Linear-Algebra), it has a complete orthonormal basis of eigenvectors u_1, ..., u_n with real non-negative eigenvalues:
0 = \lambda_1 \leq \lambda_2 \leq ... \leq \lambda_n
Why is \lambda_1 = 0 always? Because L\mathbf{1} = 0 - the all-ones vector \mathbf{1} is always in the null space of L (every row of L sums to zero). The constant function "assign the same value to every vertex" has zero variation across every edge, so it has zero energy.
The deeper result: \lambda_1 = ... = \lambda_k = 0 < \lambda_{k+1} if and only if the graph has exactly k connected components. On a disconnected graph with k components, the eigenvectors for eigenvalue 0 are spanned by the indicator vectors of the components.
The first nonzero eigenvalue \lambda_2 is called the algebraic connectivity or Fiedler value (after Miroslav Fiedler, who proved its key properties in 1973). A larger \lambda_2 means the graph is harder to disconnect; a \lambda_2 close to zero means there is almost a disconnection - a bottleneck.
The largest eigenvalue \lambda_n gives the spectral radius of the Laplacian and satisfies \lambda_n \leq 2 d_{max}, where d_{max} is the maximum degree.
1.4 Historical Timeline
SPECTRAL GRAPH THEORY - HISTORICAL TIMELINE
========================================================================
1847  Kirchhoff            - Matrix-Tree theorem; Laplacian for electrical circuits
1931  Whitney              - Graph isomorphism; chromatic polynomials
1957  Collatz & Sinogowitz - Systematic study of graph spectra begins
1970  Cheeger              - Cheeger inequality (originally for manifolds)
1973  Fiedler              - Algebraic connectivity; Fiedler vector; graph bisection
1985  Alon & Milman        - Discrete Cheeger inequality for graphs
2000  Shi & Malik          - Normalized Cuts and image segmentation
2001  Belkin & Niyogi      - Laplacian eigenmaps for manifold learning
2002  Ng, Jordan, Weiss    - Spectral clustering algorithm (the standard version)
2004  Spielman & Teng      - Spectral sparsification; fast Laplacian solvers
2011  Hammond et al.       - Wavelets on graphs
2014  Bruna et al.         - Spectral graph CNNs (first spectral GNN)
2016  Defferrard et al.    - ChebNet: Chebyshev polynomial filters on graphs
2017  Kipf & Welling       - GCN: first-order Chebyshev -> simple spatial rule
2022  Dwivedi et al.       - Laplacian positional encodings for graph Transformers
2022  Rampasek et al.      - GPS: General, Powerful, Scalable graph Transformer with spectral PE
========================================================================
1.5 Roadmap of the Section
This section follows a deliberate progression from foundational algebra to modern AI applications:
SECTION ROADMAP
========================================================================
 2  Graph Matrices           Build the algebraic objects
        ↓
 3  Quadratic Form / PSD     Prove fundamental spectral properties
        ↓
 4  Fiedler Vector           Connect \lambda_2 to graph connectivity
        ↓
 5  Cheeger Inequality       Connect \lambda_2 to cut structure and mixing
        ↓
 6  Graph Fourier Transform  Signal processing on graphs
        ↓
 7  Spectral Filtering       From Fourier to polynomial approximations
        ↓
 8  Spectral Clustering      Partition graphs via eigenvectors
        ↓
 9  Laplacian Eigenmaps      Embed graphs; PE for transformers
        ↓
10  Directed Graphs          PageRank; complex eigenvalues
        ↓
11  Advanced Topics          Sparsification; wavelets; random matrices
        ↓
12  ML Applications          KGs, molecules, LLM attention analysis
========================================================================
2. Graph Matrices and Their Spectra
2.1 Adjacency Matrix: Spectral View
Definition. For G = (V, E) with n vertices, the adjacency matrix A \in \mathbb{R}^{n \times n} is defined by A_{ij} = w_{ij} if (i, j) \in E and A_{ij} = 0 otherwise (with w_{ij} = 1 for unweighted graphs).
Key spectral property: Walk counting. The (i, j) entry of A^k counts the number of walks of length exactly k from vertex i to vertex j. This follows by induction: (A^k)_{ij} = \sum_l (A^{k-1})_{il} A_{lj} sums over all ways to reach j in k steps by first taking k-1 steps to some intermediate vertex l, then one step from l to j.
For AI: This walk-counting property is the spectral justification for why a k-layer GNN can "see" information from the k-hop neighborhood. The matrix A^k X is what a linear GNN with k layers computes.
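A quick numpy check of the walk-counting property; the 4-node adjacency matrix below is an arbitrary toy example, and `count_walks` is a brute-force helper written for this sketch:

```python
import numpy as np

# Adjacency matrix of an undirected toy graph: cycle 0-1-2-3-0 plus chord 0-2
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
])

def count_walks(A, i, j, k):
    """Brute-force count of length-k walks from i to j (illustrative helper)."""
    if k == 0:
        return int(i == j)
    # A walk of length k from i first steps to a neighbor l, then walks k-1 more
    return sum(count_walks(A, l, j, k - 1) for l in range(len(A)) if A[i, l])

k = 3
Ak = np.linalg.matrix_power(A, k)
for i in range(4):
    for j in range(4):
        assert Ak[i, j] == count_walks(A, i, j, k)
print(f"(A^{k})[0,2] = {Ak[0, 2]} walks of length {k} from vertex 0 to 2")
```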
Eigenvalues of A. For an undirected graph, A is symmetric, so all eigenvalues are real. Let \mu_1 \geq \mu_2 \geq ... \geq \mu_n denote the eigenvalues of A in decreasing order. Key bounds:
- For any graph: |\mu_i| \leq d_{max} (the maximum degree), since \|Ax\|_\infty \leq d_{max} \|x\|_\infty.
- For a d-regular graph: \mu_1 = d with eigenvector \mathbf{1}.
- Bipartite graphs have symmetric spectra: \mu_{n+1-i} = -\mu_i.
- The number of distinct eigenvalues is at least diam(G) + 1 (where diam(G) is the graph diameter).
Non-examples of symmetry: For a directed graph, A is not symmetric and eigenvalues may be complex. This is why directed spectral theory (10) requires separate treatment.
2.2 Degree Matrix and Volume
The degree matrix D is diagonal with D_{ii} = \deg(i) = \sum_j A_{ij}.
Volume. For a subset S \subseteq V, the volume is vol(S) = \sum_{i \in S} \deg(i). For the full graph, vol(V) = 2m (each edge contributes 2 to the total degree sum). Volume plays the role of "mass" in the normalized Laplacian theory.
For a d-regular graph, D = dI and vol(V) = dn, making the theory particularly clean. Most derivations proceed with general D but reduce to simpler formulas in the regular case.
Random walk transition matrix. The matrix P = D^{-1} A is a row-stochastic matrix: \sum_j P_{ij} = 1 for all i. It defines a random walk on the graph: from vertex i, move to neighbor j with probability P_{ij} = A_{ij} / \deg(i). The stationary distribution of this walk is \pi with \pi_i = \deg(i) / 2m - proportional to degree. This connection between A, D, and random walks is central to the normalized Laplacian theory.
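A minimal sketch, on a made-up 4-node graph, building P = D^{-1} A and verifying that \pi_i = \deg(i)/2m is indeed stationary:

```python
import numpy as np

# Toy undirected graph (illustrative example)
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]                     # P[i, j] = A[i, j] / deg(i)
assert np.allclose(P.sum(axis=1), 1.0)   # row-stochastic

pi = deg / deg.sum()                     # deg.sum() = 2m
assert np.allclose(pi @ P, pi)           # stationarity: pi P = pi
print("stationary distribution:", pi)
```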
2.3 Unnormalized Laplacian L = D - A
Definition. L = D - A. Entry-by-entry:
L_{ij} = \deg(i) if i = j;  L_{ij} = -w_{ij} if (i, j) \in E;  L_{ij} = 0 otherwise.
Every row (and column) sums to zero: L\mathbf{1} = 0. Equivalently, \mathbf{1} \in \ker(L).
The fundamental quadratic form:
x^T L x = \sum_{(i,j) \in E} w_{ij} (x_i - x_j)^2
Proof:
x^T L x = x^T D x - x^T A x = \sum_i \deg(i) x_i^2 - 2 \sum_{(i,j) \in E} w_{ij} x_i x_j = \sum_{(i,j) \in E} w_{ij} (x_i^2 + x_j^2 - 2 x_i x_j) = \sum_{(i,j) \in E} w_{ij} (x_i - x_j)^2
Since w_{ij} \geq 0, this is always non-negative: x^T L x \geq 0.
Geometric meaning: x^T L x measures the total variation of the signal x across all edges. It is zero if and only if x_i = x_j for all edges (i, j) - i.e., x is constant on each connected component.
For AI: Graph regularization in semi-supervised learning minimizes f^T L f subject to labeling constraints. This penalizes label functions that change rapidly across edges - a smoothness prior that says "connected nodes likely have the same label."
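A short numerical check of the quadratic-form identity on a random weighted graph (the random generator and sizes here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n))
W = np.triu(W, 1)                 # random weights on the upper triangle
W = W + W.T                       # symmetric weighted adjacency
D = np.diag(W.sum(axis=1))
L = D - W

x = rng.standard_normal(n)
quad = x @ L @ x
edge_sum = sum(W[i, j] * (x[i] - x[j]) ** 2
               for i in range(n) for j in range(i + 1, n))
assert np.isclose(quad, edge_sum)
print(f"x^T L x = {quad:.6f} = sum over edges = {edge_sum:.6f}")
```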
2.4 Normalized Laplacians
Two normalized variants of the Laplacian are used in practice:
Symmetric normalized Laplacian:
L_{sym} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}
with entries:
(L_{sym})_{ij} = 1 if i = j (and \deg(i) > 0);  (L_{sym})_{ij} = -w_{ij} / \sqrt{\deg(i)\deg(j)} if (i, j) \in E;  0 otherwise.
Properties: Symmetric; eigenvalues in [0, 2]; the largest eigenvalue equals 2 iff the graph has a bipartite component (for bipartite graphs the spectrum is symmetric around 1); its eigenvectors are D^{1/2} v, where v are eigenvectors of L_{rw}.
Random-walk normalized Laplacian:
L_{rw} = D^{-1} L = I - P
with P = D^{-1} A the random walk transition matrix. Properties: Not symmetric, but has the same eigenvalues as L_{sym} (they are similar matrices). Eigenvalues in [0, 2]. The eigenvectors of L_{rw} for eigenvalue 0 are the constant vectors on each component.
When to use which:
| Laplacian | Use case | Why |
|---|---|---|
| L = D - A | Graphs with uniform degree; theoretical proofs | Simplest form |
| L_{sym} | Spectral clustering (Ng et al.); GCN normalization | Symmetric -> orthogonal eigenvectors |
| L_{rw} | Random walk analysis; Shi-Malik NCut | Direct connection to P |
For AI (GCN connection): The GCN propagation rule H' = \sigma(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H W) uses the symmetric normalized adjacency of the graph with self-loops (\tilde{A} = A + I, \tilde{D} = D + I) - equivalently, I - L_{sym} of the augmented graph.
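A sketch building all three Laplacians for a small toy graph and confirming that L_{sym} and L_{rw} share a spectrum (they are similar matrices):

```python
import numpy as np

# Arbitrary 5-node example graph
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A

D_inv_sqrt = np.diag(deg ** -0.5)
L_sym = D_inv_sqrt @ L @ D_inv_sqrt   # I - D^{-1/2} A D^{-1/2}
L_rw = np.diag(1 / deg) @ L           # I - D^{-1} A

ev_sym = np.sort(np.linalg.eigvalsh(L_sym))
ev_rw = np.sort(np.linalg.eigvals(L_rw).real)   # real despite asymmetry
assert np.allclose(ev_sym, ev_rw)               # identical spectra
print("shared eigenvalues:", np.round(ev_sym, 4))
```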
2.5 Spectra of Special Graphs
Closed-form eigenvalues for key graph families provide calibration and test cases:
Complete graph K_n: A = J - I (with J the all-ones matrix). Eigenvalues of A: n-1 (once) and -1 (n-1 times). Eigenvalues of L: 0 (once) and n (n-1 times). The graph is maximally connected: \lambda_2 = n.
Path graph P_n: Vertices 1, ..., n, edges (i, i+1). Eigenvalues of L:
\lambda_k = 2 - 2\cos(\pi (k-1)/n) = 4 \sin^2(\pi (k-1)/(2n)),  k = 1, ..., n
So \lambda_2 = 4 \sin^2(\pi/(2n)) \approx \pi^2 / n^2 for large n - very small. This reflects the intuition that a long path is easy to cut (just remove the middle edge).
Cycle graph C_n: Eigenvalues of L:
\lambda_k = 2 - 2\cos(2\pi (k-1)/n),  k = 1, ..., n
The spectrum is symmetric around 2. \lambda_2 = 2 - 2\cos(2\pi/n) \approx 4\pi^2 / n^2 for large n.
Star graph S_n: One hub connected to n-1 leaves. Eigenvalues of L: 0 (once), 1 (n-2 times), n (once). \lambda_2 = 1 regardless of how many leaves there are - the star is easy to disconnect (remove the hub).
d-regular bipartite graph: Eigenvalues of A lie in [-d, d] and come in \pm pairs, with the pattern dictated by the bipartite structure; the eigenvalue -d indicates bipartiteness.
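A calibration sketch checking the closed-form path and cycle spectra against numpy's eigensolver (n = 8 is an arbitrary test size):

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

n = 8
# Path P_n
A_path = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
lam_path = np.sort(np.linalg.eigvalsh(laplacian(A_path)))
formula_path = np.sort(2 - 2 * np.cos(np.pi * np.arange(n) / n))
assert np.allclose(lam_path, formula_path)

# Cycle C_n: close the path with one extra edge
A_cycle = A_path.copy()
A_cycle[0, -1] = A_cycle[-1, 0] = 1
lam_cycle = np.sort(np.linalg.eigvalsh(laplacian(A_cycle)))
formula_cycle = np.sort(2 - 2 * np.cos(2 * np.pi * np.arange(n) / n))
assert np.allclose(lam_cycle, formula_cycle)

print("lambda_2(P_n) =", round(lam_path[1], 6),
      "~ pi^2/n^2 =", round(np.pi**2 / n**2, 6))
```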
2.6 Characteristic Polynomial and Cospectral Graphs
The characteristic polynomial of a graph is p_G(\lambda) = \det(\lambda I - A). The roots are the eigenvalues of A. The coefficients of p_G are spectral invariants: the sum of eigenvalues equals tr(A) = 0 (no self-loops); the sum of squares of eigenvalues equals tr(A^2) = 2m.
Cospectral (isospectral) graphs are graphs with identical characteristic polynomials but non-isomorphic structures. The smallest pair (for the adjacency spectrum) lives on 5 vertices: the star K_{1,4} and the disjoint union C_4 \cup K_1; Schwenk (1973) showed further that almost all trees have a cospectral mate. Cospectrality shows that the spectrum does not uniquely determine a graph - a fundamental limitation of spectral methods. For graph isomorphism testing, the Weisfeiler-Lehman test (05) captures structure that the spectrum misses.
For AI: The WL-expressiveness hierarchy of GNNs (Xu et al., 2019) parallels this cospectrality result. GNNs based on spectral convolution can distinguish everything the Laplacian spectrum distinguishes - but no more. This is one motivation for higher-order GNNs and attention-based methods.
3. The Fundamental Quadratic Form and PSD Structure
3.1 Dirichlet Energy
The quadratic form E(x) = x^T L x is called the Dirichlet energy (or graph Dirichlet form) of the signal x.
This name comes from the continuous analogue: for a function f on a domain \Omega, the Dirichlet energy is \int_\Omega \|\nabla f\|^2 \, dx, which measures the total variation (smoothness) of f. The graph Laplacian L is the discrete analogue of -\Delta (the negative Laplacian), and x^T L x is the discrete Dirichlet energy.
Interpretations by context:
| Context | What x^T L x measures |
|---|---|
| Social network | Total disagreement when x labels communities |
| Signal on graph | Total variation (roughness) of the signal across edges |
| Temperature field | Total heat flux across edges at steady state |
| Node embeddings | "Embedding strain" - how much nearby nodes differ |
| Semi-supervised labels | Penalty for assigning different labels to connected nodes |
Critical point of Dirichlet energy. The Rayleigh quotient R(x) = x^T L x / x^T x is minimized by the eigenvector with smallest eigenvalue. Constrained to x \perp \mathbf{1} (orthogonal to the trivial null vector), the minimum is \lambda_2, achieved by u_2, the Fiedler vector.
3.2 Proof That L \succeq 0
Theorem. For any undirected weighted graph with non-negative edge weights, L \succeq 0.
Proof. For any x \in \mathbb{R}^n:
x^T L x = \sum_{(i,j) \in E} w_{ij} (x_i - x_j)^2 \geq 0,
since w_{ij} \geq 0 and t^2 \geq 0 for all real numbers t.
Corollary. All eigenvalues of L are non-negative: 0 \leq \lambda_1 \leq ... \leq \lambda_n.
Corollary. \mathbf{1} is always an eigenvector with eigenvalue 0, since L\mathbf{1} = D\mathbf{1} - A\mathbf{1} = d - d = 0 (where d is the degree vector, equal to A\mathbf{1}).
Strengthened result for normalized Laplacians. For L_{sym}: since L_{sym} = D^{-1/2} L D^{-1/2} and L \succeq 0, we have L_{sym} \succeq 0. Moreover, x^T (2D - L) x = \sum_{(i,j) \in E} w_{ij} (x_i + x_j)^2 \geq 0 for all x, so the eigenvalues of L_{sym} lie in [0, 2].
3.3 Connected Components via the Null Space
Theorem (Fiedler, 1973). The multiplicity of eigenvalue 0 of the graph Laplacian L equals the number of connected components of G.
Proof.
Suppose G has k connected components C_1, ..., C_k. For each component C_r, define \mathbf{1}_{C_r} as the indicator vector of C_r: (\mathbf{1}_{C_r})_i = 1 if i \in C_r, else 0. Then L \mathbf{1}_{C_r} = 0 because for any vertex i:
(L \mathbf{1}_{C_r})_i = \deg(i)\,(\mathbf{1}_{C_r})_i - \sum_{j \sim i} (\mathbf{1}_{C_r})_j = 0
(all neighbors of i lie in the same component as i, since components are isolated). The vectors \mathbf{1}_{C_1}, ..., \mathbf{1}_{C_k} are linearly independent, so \dim \ker(L) \geq k.
Suppose Lx = 0. Then x^T L x = \sum_{(i,j) \in E} w_{ij} (x_i - x_j)^2 = 0, which forces x_i = x_j for every edge (i, j). Thus x is constant on each connected component. The dimension of the space of such functions equals the number of components. So \dim \ker(L) \leq k.
Combining both directions, \dim \ker(L) = k.
Examples:
- Connected graph (k = 1): eigenvalue 0 is simple; \lambda_2 > 0.
- Graph with 2 isolated components: \lambda_1 = \lambda_2 = 0; \lambda_3 > 0.
- Path P_n: always connected; \lambda_2 > 0.
Non-example: For a disconnected graph with components of different sizes, the eigenvectors for eigenvalue 0 are NOT all-ones vectors but rather (spans of) the indicator vectors of the components.
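A sketch confirming the theorem on a graph assembled from two disjoint triangles (a deliberately disconnected toy example):

```python
import numpy as np

tri = np.ones((3, 3)) - np.eye(3)          # triangle K_3
A = np.block([[tri, np.zeros((3, 3))],
              [np.zeros((3, 3)), tri]])    # two isolated components
L = np.diag(A.sum(axis=1)) - A

eigvals = np.linalg.eigvalsh(L)
num_zero = int(np.sum(np.abs(eigvals) < 1e-10))
print("eigenvalues:", np.round(eigvals, 4))
print("multiplicity of 0:", num_zero)      # -> 2, the number of components
```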
3.4 Eigenvalue Bounds and Interlacing
Upper bound. For any connected graph:
\lambda_n \leq 2 d_{max},
where d_{max} is the maximum degree. For d-regular graphs: \lambda_n = 2d iff the graph is bipartite.
Lower bound on \lambda_2. From the Cheeger inequality (full treatment in 5):
\lambda_2 \geq \frac{h(G)^2}{2 d_{max}},
where h(G) is the Cheeger constant.
Interlacing theorem (Cauchy). Let L' be the principal submatrix of L obtained by deleting the row and column of one vertex, with eigenvalues \theta_1 \leq ... \leq \theta_{n-1}. Then:
\lambda_k \leq \theta_k \leq \lambda_{k+1},  k = 1, ..., n-1.
Interlacing controls how much the spectrum can shift when a vertex is removed. This is used in structural arguments about graph connectivity after vertex removal.
3.5 Courant-Fischer Minimax Theorem
The Courant-Fischer theorem provides a variational characterization of every eigenvalue of a symmetric matrix. For the graph Laplacian with eigenvalues \lambda_1 \leq ... \leq \lambda_n:
\lambda_k = \min_{\dim(S) = k} \max_{x \in S, x \neq 0} \frac{x^T L x}{x^T x}
In particular, the Fiedler value has the characterization:
\lambda_2 = \min_{x \perp \mathbf{1}, x \neq 0} \frac{x^T L x}{x^T x}
Proof sketch. Write x = \sum_i c_i u_i in the eigenbasis. Then x^T L x = \sum_i \lambda_i c_i^2 and x^T x = \sum_i c_i^2. The Rayleigh quotient is \sum_i \lambda_i c_i^2 / \sum_i c_i^2, a convex combination of eigenvalues. Minimizing over x \perp u_1 forces c_1 = 0, making the minimum \lambda_2 (achieved when c_2 = 1, all others 0).
Practical use. Courant-Fischer justifies using u_2 as the optimal graph bisection vector: it solves the continuous relaxation of the minimum bisection problem, as we prove in 4 and 8.
4. Algebraic Connectivity and the Fiedler Vector
4.1 Algebraic Connectivity \lambda_2
Definition. The algebraic connectivity of a graph G is \lambda_2(L), the second-smallest eigenvalue of the graph Laplacian. It is also called the Fiedler value.
Theorem (Fiedler, 1973). \lambda_2 > 0 if and only if G is connected.
This follows directly from 3.3: \lambda_2 = 0 iff there are at least 2 connected components.
Why "algebraic" connectivity? The classical combinatorial connectivity (minimum number of vertices whose removal disconnects ) is NP-hard to compute in general. The algebraic connectivity provides a computable lower bound:
where is the minimum degree. This inequality chain says: algebraic connectivity vertex connectivity minimum degree.
Sensitivity. When a single edge is added to a graph, \lambda_2 can increase by at most 2. When an edge is removed, \lambda_2 can decrease by at most 2. This quantifies how much the connectivity changes with each graph edit - useful in robust network design.
Regular graphs. For a d-regular graph on n vertices:
\lambda_2(L) = d - \mu_2(A),
where \mu_2(A) is the largest eigenvalue of A not equal to d (for a connected graph). The spectral gap of the adjacency matrix and the algebraic connectivity are directly related for regular graphs.
4.2 The Fiedler Vector
Definition. The Fiedler vector is the eigenvector u_2 of L corresponding to \lambda_2.
The Fiedler vector assigns a real number (u_2)_i to each vertex i. Vertices with positive values are assigned to one "side" of the graph; vertices with negative values to the other. This is the basis of spectral bisection.
Spectral bisection algorithm:
- Compute the Fiedler vector u_2.
- Partition by the sign of u_2: let S = { i : (u_2)_i \geq 0 } and \bar{S} = { i : (u_2)_i < 0 }.
- The edges between S and \bar{S} form the "spectral cut" (see the sketch below).
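A minimal dense-matrix sketch of spectral bisection; the "dumbbell" graph (two 4-cliques joined by one bridge edge) is a made-up test case, and a sparse eigensolver would replace `eigh` at scale:

```python
import numpy as np

def spectral_bisection(A):
    """Partition vertices by the sign of the Fiedler vector."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                # eigenvector for lambda_2
    S = np.where(fiedler >= 0)[0]
    S_bar = np.where(fiedler < 0)[0]
    cut_weight = sum(A[i, j] for i in S for j in S_bar)
    return S, S_bar, cut_weight

# Two 4-cliques joined by a single bridge edge
K4 = np.ones((4, 4)) - np.eye(4)
A = np.block([[K4, np.zeros((4, 4))], [np.zeros((4, 4)), K4]])
A[3, 4] = A[4, 3] = 1
S, S_bar, cut = spectral_bisection(A)
print("partition:", S, "|", S_bar, " cut edges:", cut)  # recovers the bridge
```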
Why does this work? The Courant-Fischer theorem says u_2 minimizes the Dirichlet energy x^T L x subject to \|x\| = 1 and x \perp \mathbf{1}. If we further constrain x_i \in {+1, -1} (a discrete two-way partition), we get the NP-hard graph bisection problem. The Fiedler vector is the continuous relaxation of this discrete problem - the best we can do efficiently.
The ordering property. Sorting vertices by their Fiedler vector value reveals the community structure of the graph. Vertices in the same community tend to have similar values; the transition from negative to positive marks the community boundary.
For AI: Spectral bisection is used in:
- Circuit partitioning (VLSI design): split a circuit graph across two chips to minimize inter-chip connections
- Domain decomposition (PDE solvers): partition a mesh graph for parallel computation
- Community detection in knowledge graphs: find the two most separated communities in a KG
4.3 Bounding Graph Properties via \lambda_2
Diameter bound (Mohar, 1991):
diam(G) \geq \frac{4}{n \lambda_2}
A simpler rearrangement: \lambda_2 \geq 4 / (n \cdot diam(G)) (rough but useful).
Vertex connectivity. For any connected graph:
\lambda_2 \leq \kappa(G),
where \kappa(G) is the vertex connectivity (minimum number of vertices to remove to disconnect). A large \lambda_2 means the graph is robustly connected.
Conductance. The conductance h(G) measures the minimum normalized cut. The Cheeger inequality (5) gives:
\frac{\lambda_2}{2} \leq h(G) \leq \sqrt{2 \lambda_2}  (for the normalized Laplacian).
Isoperimetric number. The edge-isoperimetric variant of the Cheeger constant (using |S| instead of vol(S)) satisfies the same type of inequality with the unnormalized \lambda_2.
4.4 Computing the Fiedler Vector in Practice
For small graphs (n up to a few thousand), compute all eigenvalues of L directly via a dense symmetric eigensolver (scipy.linalg.eigh). The second column of the eigenvector matrix is u_2.
For large sparse graphs, use iterative methods:
Lanczos algorithm: Builds a tridiagonal matrix from the Krylov vectors x, Lx, L^2 x, .... Converges to extreme eigenvalues fastest. For the Fiedler vector, we need the smallest nonzero eigenvalue, which requires the shift-invert trick: compute the largest eigenvalue of (L + \epsilon I)^{-1} for small \epsilon > 0.
Inverse power iteration with deflation: Since u_1 = \mathbf{1}/\sqrt{n} is known, we can deflate it out. Initialize with a random x_0, repeatedly solve (L + \epsilon I) y = x (each solve can use only sparse matrix-vector products inside an iterative solver), normalize, and orthogonalize against \mathbf{1}. Convergence rate is roughly (\lambda_2 + \epsilon)/(\lambda_3 + \epsilon) per iteration.
Randomized Nystrom approximation: For graphs with millions of vertices, approximate the low-rank spectral structure using randomized sampling of the Laplacian (Spielman & Srivastava, 2011).
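A sparse-solver sketch using scipy's shift-invert Lanczos; the length-1000 cycle is an arbitrary bottleneck test graph, and the small negative shift avoids factorizing the singular L exactly at 0:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 1000
# Cycle graph C_n as a sparse matrix (a simple bottleneck example)
rows = np.arange(n)
cols = (rows + 1) % n
A = sp.coo_matrix((np.ones(n), (rows, cols)), shape=(n, n))
A = (A + A.T).tocsc()

deg = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(deg) - A

# Shift-invert targets eigenvalues nearest sigma; sigma just below 0
# keeps L - sigma*I nonsingular while still finding {0, lambda_2}
vals, vecs = eigsh(L, k=2, sigma=-1e-6, which='LM')
order = np.argsort(vals)
lam2, fiedler = vals[order[1]], vecs[:, order[1]]
print(f"lambda_2 = {lam2:.6e}  (closed form for C_n: "
      f"{2 - 2*np.cos(2*np.pi/n):.6e})")
```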
4.5 AI Application: Community Detection
Community detection - finding groups of densely interconnected nodes - is one of the most practically important graph problems. Spectral methods are the gold standard for quality guarantees.
Planted partition model. Generate a graph with 2 communities of n/2 vertices each, intra-community edge probability p = a \log n / n, inter-community probability q = b \log n / n. Spectral bisection recovers the communities exactly (with high probability) when:
\sqrt{a} - \sqrt{b} > \sqrt{2}
(This is the information-theoretic threshold for exact recovery in the two-community Stochastic Block Model.)
Knowledge graph clustering. In a knowledge graph (KG) like Freebase or Wikidata, entities form communities by topic (sports, science, politics). The Fiedler vector of the KG adjacency graph separates these clusters. The resulting community structure can be used to create topic-specific sub-KGs for retrieval-augmented generation.
5. Cheeger's Inequality and Graph Expansion
5.1 The Cheeger Constant h(G)
Definition. For a graph G and a subset S \subseteq V, the edge boundary is the set of edges between S and its complement \bar{S}:
\partial S = { (i, j) \in E : i \in S, j \in \bar{S} }
The conductance (or isoperimetric ratio) of the cut is:
\phi(S) = \frac{|\partial S|}{\min(vol(S), vol(\bar{S}))}
The Cheeger constant (or isoperimetric number) of G is:
h(G) = \min_{\emptyset \neq S \subset V} \phi(S)
This is the minimum conductance cut: the partition that minimizes the fraction of edges leaving the smaller side relative to its volume. A small h(G) means the graph has a bottleneck - a small number of edges separating a large fraction of the volume.
Computing h(G) is NP-hard. This is a major motivation for the Cheeger inequality, which gives a polynomial-time algorithm (via \lambda_2) to find a cut within a quadratic factor of optimal.
Examples:
- Path P_n: Remove the middle edge; h(P_n) \approx 1/n. Very small: the path is a severe bottleneck.
- Complete graph K_n: Every balanced cut has |\partial S| \approx n^2/4 against vol(S) \approx n^2/2; h(K_n) \approx 1/2.
- Expander graph (5.3): h(G) \geq c > 0 - bounded below by a constant, independent of n.
5.2 Cheeger's Inequality
Theorem (Alon & Milman, 1985; Dodziuk, 1984). For any undirected graph G, with \lambda_2 the second-smallest eigenvalue of the normalized Laplacian:
\frac{\lambda_2}{2} \leq h(G) \leq \sqrt{2 \lambda_2}
Proof of the left inequality (easy direction). We show \lambda_2 \leq 2 h(G) by exhibiting a test vector x with Rayleigh quotient at most 2 h(G).
Let S be the optimal Cheeger cut with \phi(S) = h(G). Define:
x_i = 1/vol(S) if i \in S,  x_i = -1/vol(\bar{S}) if i \in \bar{S}.
This x is orthogonal to the stationary distribution of the random walk (which plays the role of \mathbf{1} for the normalized Laplacian). Then:
R(x) = \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_i \deg(i) x_i^2}
After algebraic simplification using the definition of \phi(S): R(x) \leq 2 \phi(S). Since \lambda_2 \leq R(x), we get \lambda_2 \leq 2 h(G).
Proof of the right inequality (hard direction). We show h(G) \leq \sqrt{2 \lambda_2}.
Given the Fiedler vector u_2, sort vertices so (u_2)_1 \leq (u_2)_2 \leq ... \leq (u_2)_n. For each threshold t, let S_t = { i : (u_2)_i \leq t }. Consider the sweep over all possible thresholds. By the co-area formula for graphs (a discrete version of the co-area formula in differential geometry), the best of these cuts satisfies:
\min_t \phi(S_t) \leq \sqrt{2 \lambda_2}
The last step uses the Cauchy-Schwarz inequality together with the fact that \lambda_2 is the Rayleigh quotient of the Fiedler vector.
Tightness. The left bound is tight for expanders (5.3). The right bound is tight (up to constants) for paths and other bottleneck graphs, where h(G) \asymp \sqrt{\lambda_2}.
Practical implication. Given \lambda_2, we know h(G) \geq \lambda_2 / 2. More importantly, the proof of the right inequality is constructive: the sweep over Fiedler vector thresholds finds a cut with conductance at most \sqrt{2 \lambda_2}.
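A sketch of the sweep cut, reusing the dumbbell toy graph from 4.2; the Fiedler vector of L_{rw} is obtained through the D^{-1/2} transform of the L_{sym} eigenvector:

```python
import numpy as np

def sweep_cut(A):
    """Scan all threshold cuts of the Fiedler ordering; return the best one."""
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)
    L_sym = D_inv_sqrt @ (np.diag(deg) - A) @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)
    fiedler = D_inv_sqrt @ vecs[:, 1]      # eigenvector of L_rw
    order = np.argsort(fiedler)

    vol_total, best_phi, best_S = deg.sum(), np.inf, None
    for t in range(1, len(order)):
        mask = np.zeros(len(A), dtype=bool)
        mask[order[:t]] = True
        boundary = A[mask][:, ~mask].sum()           # edge weight leaving S
        vol_S = deg[mask].sum()
        phi = boundary / min(vol_S, vol_total - vol_S)
        if phi < best_phi:
            best_phi, best_S = phi, order[:t]
    return best_phi, best_S

K4 = np.ones((4, 4)) - np.eye(4)
A = np.block([[K4, np.zeros((4, 4))], [np.zeros((4, 4)), K4]])
A[3, 4] = A[4, 3] = 1                      # dumbbell: bridge is the bottleneck
phi, S = sweep_cut(A)
print(f"best sweep conductance: {phi:.4f}, side: {sorted(S)}")
```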
5.3 Expander Graphs
Definition. A family of graphs {G_n} is a (d, \epsilon)-expander family if:
- Each G_n has n vertices and is d-regular
- The second eigenvalue of A satisfies \mu_2(G_n) \leq (1 - \epsilon) d
- The spectral gap d - \mu_2 is bounded below by the positive constant \epsilon d
Equivalently (by Cheeger): h(G_n) \geq c > 0, i.e., the Cheeger constant is bounded below uniformly in n.
Why expanders matter:
- Communication networks: In a d-regular expander on n nodes, any message can be routed between any two nodes in O(\log n) hops, using only d connections per node. This is optimal for constant-degree networks.
- Error-correcting codes: Expander codes (Sipser & Spielman, 1996) achieve linear-time encoding/decoding of codes close to the Shannon capacity.
- Derandomization: Expanders provide pseudorandom number generators - random walks on expanders mix in O(\log n) steps, so short random walks serve as good randomness sources.
- GNN depth: A GNN on an expander graph propagates information across the entire graph in O(\log n) layers. This is why expanders are ideal benchmarks for deep GNNs.
Ramanujan graphs. The optimal spectral gap for a d-regular graph is bounded: \mu_2 \geq 2\sqrt{d-1} - o(1) (Alon-Boppana theorem). Graphs achieving \max(|\mu_2|, |\mu_n|) \leq 2\sqrt{d-1} are called Ramanujan graphs - they are the optimal expanders. Explicit Ramanujan graph constructions (Lubotzky, Phillips, Sarnak, 1988; Margulis, 1988) use deep number theory.
5.4 Random Walk Mixing Time
The random walk on G defined by transition matrix P = D^{-1} A has stationary distribution \pi with \pi_i = \deg(i)/2m. The mixing time is the number of steps needed for the walk to get close to the stationary distribution:
t_{mix}(\epsilon) = \min { t : \max_{x_0} \| P^t(x_0, \cdot) - \pi \|_{TV} \leq \epsilon }
Spectral mixing bound. Let \beta = \max(|\beta_2|, |\beta_n|) be the second-largest absolute eigenvalue of P. Then:
t_{mix} = O\left( \frac{\log(n/\epsilon)}{1 - \beta} \right)
Interpretation: The spectral gap 1 - \beta governs the mixing time. Large spectral gap -> fast mixing. For expanders with constant spectral gap: t_{mix} = O(\log n). For paths: 1 - \beta \asymp 1/n^2, so t_{mix} = O(n^2 \log n).
Proof sketch. Write the initial distribution as p_0 = \pi + \sum_{k \geq 2} c_k v_k, where v_k are eigenvectors of P (eigenvalues \beta_k). After t steps: p_t = \pi + \sum_{k \geq 2} c_k \beta_k^t v_k. The deviation decays as \beta^t, giving t_{mix} proportional to 1/(1 - \beta) up to the log factor.
Lazy walk. To avoid oscillation when \beta_n = -1 (bipartite-like graphs), use the lazy random walk P_{lazy} = (I + P)/2. Eigenvalues of P_{lazy} are (1 + \beta_k)/2 \in [0, 1], avoiding negative eigenvalues.
5.5 AI Connection: Over-Smoothing as Diffusion
Over-smoothing is the well-documented phenomenon in deep GNNs where node representations become indistinguishable as the number of layers increases (Li et al., 2018; Oono & Suzuki, 2020). Spectral theory provides the exact mechanism:
A K-layer GCN computes (roughly) \hat{A}^K X, where \hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} is the normalized adjacency with self-loops. The eigenvalues of \hat{A} satisfy \mu_i = 1 - \tilde{\lambda}_i, where \tilde{\lambda}_i are the eigenvalues of L_{sym} of the augmented graph. After K iterations:
\hat{A}^K X -> projection of X onto the dominant eigenvector of \hat{A}
All node features converge to a value proportional to \sqrt{\deg(i) + 1}, determined only by degree - all structural information is lost.
Rate of collapse. The convergence rate is governed by the spectral gap: the deviation from the limit decays like (1 - \tilde{\lambda}_2)^K. Faster collapse on expanders (large \tilde{\lambda}_2), slower on bottleneck graphs. This is counterintuitive: the "most connected" graphs (expanders) over-smooth fastest.
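A short demonstration of the collapse, again on the made-up dumbbell graph: repeated multiplication by \hat{A} drives the degree-normalized feature rows together, so their spread goes to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
K4 = np.ones((4, 4)) - np.eye(4)
A = np.block([[K4, np.zeros((4, 4))], [np.zeros((4, 4)), K4]])
A[3, 4] = A[4, 3] = 1

A_tilde = A + np.eye(len(A))               # add self-loops
d_tilde = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d_tilde, d_tilde))

X = rng.standard_normal((len(A), 3))       # random node features
for K in [1, 2, 4, 8, 16, 32]:
    H = np.linalg.matrix_power(A_hat, K) @ X
    # Rows converge to multiples of sqrt(d_tilde); normalize and measure spread
    spread = np.std(H / np.sqrt(d_tilde)[:, None], axis=0).mean()
    print(f"K={K:2d}  feature spread = {spread:.2e}")
```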
Mitigation strategies:
- Residual connections (GCNII, Chen et al., 2020): mix the initial features H^{(0)} into every layer - preserve initial features
- DropEdge (Rong et al., 2020): randomly remove edges during training, reducing the effective spectral gap
- PairNorm (Zhao & Akoglu, 2020): explicitly normalize pairwise distances to prevent collapse
- Jumping knowledge (Xu et al., 2018): aggregate representations from all layers
Forward reference: The full architecture-level treatment of over-smoothing, including the WL expressiveness hierarchy and architectural mitigations, is in 11-05 Graph Neural Networks.
6. Graph Fourier Transform and Signal Processing
6.1 Classical Fourier Analogy
The classical Fourier transform on \mathbb{R}^n decomposes a function into a linear combination of eigenfunctions of the Laplace operator -\Delta:
f(x) = \int \hat{f}(\omega) e^{i \omega \cdot x} \, d\omega
The functions e^{i \omega \cdot x} are eigenfunctions of the continuous Laplacian: -\Delta e^{i \omega \cdot x} = \|\omega\|^2 e^{i \omega \cdot x}.
On a graph, the Laplacian L plays the role of -\Delta, and its eigenvectors u_1, ..., u_n (with eigenvalues \lambda_1, ..., \lambda_n) play the role of the complex exponentials e^{i \omega \cdot x}.
The analogy:
FOURIER TRANSFORM ANALOGY
========================================================================
Classical Fourier Graph Fourier
--------------------------------- ---------------------------------
Domain \mathbb{R}^n Vertex set V (finite)
Operator -\Delta (Laplacian) L = D - A (graph Laplacian)
Eigenfunctions exp(i\omega*x) Eigenvectors u_1, u_2, ..., u_n
Frequencies ||\omega||^2 \in [0, \infty) Eigenvalues \lambda_1 \leq \lambda_2 \leq ... \leq \lambda_n
Low freq. ||\omega|| small -> smooth \lambda_k small -> smooth on graph
High freq. ||\omega|| large -> rapid \lambda_k large -> rapid variation
Transform Continuous integral Finite matrix multiply (U^Tx)
========================================================================
This analogy is the conceptual foundation for defining convolution, filtering, and signal processing on irregular graph domains.
6.2 Graph Fourier Transform
Definition. Let L = U \Lambda U^T be the eigendecomposition of the graph Laplacian, with U = [u_1, ..., u_n] the matrix of eigenvectors (columns). For a signal x \in \mathbb{R}^n (assigning a value x_i to each vertex i), the Graph Fourier Transform (GFT) is:
\hat{x} = U^T x
The inverse GFT is:
x = U \hat{x}
Properties:
- Parseval's theorem: \|x\|_2 = \|\hat{x}\|_2 (since U is orthogonal).
- Linearity: the GFT of \alpha x + \beta y is \alpha \hat{x} + \beta \hat{y}.
- Energy decomposition: \|x\|^2 = \sum_k \hat{x}_k^2 (energy in each frequency component).
- Shift property: There is no clean "shift theorem" for graphs as there is for the DFT, because graphs lack translation symmetry. This is a fundamental difference.
Graph convolution. The convolution of two signals x and y on a graph is defined spectrally:
x *_G y = U ( (U^T x) \odot (U^T y) ),
where \odot is element-wise multiplication. This is the analogue of the convolution theorem: convolution in the vertex domain equals pointwise multiplication in the spectral domain.
Limitation of full GFT: Computing U requires O(n^3) time (eigendecomposition). For large graphs, this is infeasible, motivating polynomial approximations (7).
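A sketch of the GFT as a single orthogonal matrix multiply, using a path graph (whose GFT basis is cosine-like) and two hand-picked test signals; the "spectral centroid" diagnostic is an illustrative choice, not standard terminology from this lesson:

```python
import numpy as np

n = 20
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path P_n
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)                 # columns of U = Fourier modes

def gft(x):
    return U.T @ x

def igft(x_hat):
    return U @ x_hat

smooth = np.linspace(0, 1, n)              # slowly varying signal
rough = (-1.0) ** np.arange(n)             # alternating "checkerboard" signal

for name, x in [("smooth", smooth), ("rough", rough)]:
    x_hat = gft(x)
    assert np.isclose(np.linalg.norm(x), np.linalg.norm(x_hat))  # Parseval
    assert np.allclose(igft(x_hat), x)                           # invertible
    centroid = (lam * x_hat**2).sum() / (x_hat**2).sum()
    print(f"{name:6s} signal: energy centered at frequency {centroid:.3f}")
```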
6.3 Frequency Interpretation
The k-th frequency component \hat{x}_k measures how much of the signal "oscillates at frequency \lambda_k."
Low-frequency signals correspond to small \lambda_k: the eigenvectors for small eigenvalues vary smoothly across edges (since u_k^T L u_k = \lambda_k is small). A signal concentrated in low frequencies is smooth: nearby vertices have similar values.
High-frequency signals correspond to large \lambda_k: the eigenvectors for large eigenvalues oscillate rapidly, with (u_k)_i and (u_k)_j having opposite signs for many edges (i, j). A pure high-frequency signal looks like a checkerboard on the graph.
Example on a path graph. For P_n, the eigenvectors are cosine modes - the discrete cosine transform (DCT). The eigenvalues \lambda_k = 2 - 2\cos(\pi (k-1)/n) are, for small k, approximately the squared DCT frequencies. The GFT on a path is the DCT.
Example on a community graph. A graph with two tightly connected communities has:
- u_1: constant (DC component)
- u_2: Fiedler vector, positive on community 1, negative on community 2 - the community membership function is a low-frequency signal
- u_n: highest frequency, alternates sign on bipartite-like structure
6.4 Dirichlet Energy Revisited
The Dirichlet energy decomposes cleanly in the spectral domain:
x^T L x = \sum_k \lambda_k \hat{x}_k^2
This is the "power spectrum" interpretation: the Dirichlet energy is the weighted sum of spectral components, weighted by frequency. A signal is smooth (low Dirichlet energy) iff its energy is concentrated in low-frequency components (\lambda_k small).
Spectral analysis of node features. Given a node feature matrix X \in \mathbb{R}^{n \times d}, we can compute the spectral content of each feature dimension c:
E(X_{:,c}) = X_{:,c}^T L X_{:,c}
Feature dimensions with low Dirichlet energy are "community-consistent" features (e.g., political affiliation in a social network). Feature dimensions with high Dirichlet energy are "noisy" local features.
For AI: Graph regularization in semi-supervised learning minimizes:
\mathcal{L}(f) = \sum_{i \text{ labeled}} (f_i - y_i)^2 + \gamma \, f^T L f
This penalizes high-frequency components in the predicted label function f, implementing a "smoothness prior": connected nodes likely have the same label.
6.5 Uncertainty Principle on Graphs
In classical signal processing, the Heisenberg uncertainty principle states that a signal cannot be simultaneously concentrated in both time and frequency: the product of time spread and frequency spread is at least 1/2.
On graphs, an analogous uncertainty principle holds (Agaskar & Lu, 2013):
\Delta_v(x) \cdot \Delta_s(x) \geq c(G),
where \Delta_v(x) measures how localized x is in the vertex domain (concentrated on a small set of vertices), \Delta_s(x) measures how localized x is in the spectral domain (concentrated on a small band of frequencies), and c(G) is a constant depending on the graph structure.
Implications for graph signal processing:
- A signal perfectly localized on a single vertex (\Delta_v minimal) is spread across all frequencies (\Delta_s maximal)
- Smooth signals (concentrated on low frequencies, \Delta_s small) are necessarily spread across many vertices (\Delta_v large)
- This tradeoff motivates graph wavelets (11.3): basis functions that are approximately localized in both vertex and spectral domains
6.6 AI Application: Node Feature Smoothing
Label propagation (Zhou et al., 2004) is a classic semi-supervised learning algorithm that directly implements low-pass graph filtering. Starting from a partially labeled graph, labels propagate according to:
F^{(t+1)} = \alpha S F^{(t)} + (1 - \alpha) Y,
where Y is the initial label matrix (zeros for unlabeled nodes), S = D^{-1/2} A D^{-1/2} is the normalized adjacency, and \alpha \in (0, 1) controls the smoothing strength. In the spectral domain, this converges to:
F^* = (1 - \alpha)(I - \alpha S)^{-1} Y
The filter h(\lambda) = (1 - \alpha) / (1 - \alpha(1 - \lambda)) is a low-pass filter: it attenuates high-frequency components (\lambda large) more than low-frequency ones.
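A minimal sketch of this iteration on the dumbbell toy graph, with one seed label per community; the iterate is checked against the closed-form fixed point:

```python
import numpy as np

K4 = np.ones((4, 4)) - np.eye(4)
A = np.block([[K4, np.zeros((4, 4))], [np.zeros((4, 4)), K4]])
A[3, 4] = A[4, 3] = 1                      # two communities, one bridge

deg = A.sum(axis=1)
S = A / np.sqrt(np.outer(deg, deg))        # symmetric normalized adjacency

Y = np.zeros((8, 2))
Y[0, 0] = 1.0                              # seed label in community 1
Y[7, 1] = 1.0                              # seed label in community 2

alpha, F = 0.9, np.zeros_like(Y)
for _ in range(200):                       # F <- alpha*S*F + (1-alpha)*Y
    F = alpha * S @ F + (1 - alpha) * Y

# Closed form for comparison: F* = (1 - alpha)(I - alpha S)^{-1} Y
F_star = (1 - alpha) * np.linalg.solve(np.eye(8) - alpha * S, Y)
assert np.allclose(F, F_star, atol=1e-6)
print("predicted community:", F.argmax(axis=1))  # -> [0 0 0 0 1 1 1 1]
```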
For modern LLMs: When an LLM reasons over a knowledge graph, smooth graph signals correspond to consistent facts (nearby entities agree), while high-frequency signals correspond to noise or inconsistencies. Spectral filtering provides a principled way to denoise knowledge graphs before retrieval.