Spectral Graph Theory, Part 7: Spectral Filtering through 13. Common Mistakes
7. Spectral Filtering
7.1 Filtering in the Spectral Domain
A spectral filter on a graph is an operation that modifies the frequency content of a graph signal:

$$x_{\mathrm{filtered}} = g(L)\, x = U\, g(\Lambda)\, U^\top x,$$

where $g : [0, \lambda_{\max}] \to \mathbb{R}$ is a scalar function applied pointwise to the eigenvalues: $g(\Lambda) = \mathrm{diag}\big(g(\lambda_1), \dots, g(\lambda_n)\big)$.
Common filters:
| Filter | $g(\lambda)$ | Effect | AI use case |
|---|---|---|---|
| Low-pass | e.g. $\frac{1}{1+\tau\lambda}$ | Keep low frequencies | Smooth node features |
| High-pass | e.g. $\frac{\tau\lambda}{1+\tau\lambda}$ | Keep high frequencies | Edge detection on graphs |
| Band-pass | $\mathbf{1}[\lambda_a \le \lambda \le \lambda_b]$ | Keep a frequency band | Community detection at scale |
| Heat kernel | $e^{-t\lambda}$ | Exponential damping | Graph diffusion, PPMI |
| Identity | $1$ | No change | Trivial |
| GCN | $\approx 1 - \lambda$ | Linear attenuation | First-order spectral convolution |
Implementation cost: Directly computing $g(L)x = U g(\Lambda) U^\top x$ requires the full eigendecomposition: $O(n^3)$ preprocessing and $O(n^2)$ per signal. This is intractable for large graphs. Polynomial approximation (7.2) reduces the cost to $O(K|E|)$ per signal.
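To make the cost concrete, here is a minimal sketch of the direct approach (assuming NumPy; the 5-cycle graph and the heat-kernel filter are illustrative choices, not from the text above):

```python
import numpy as np

# Toy graph: a 5-cycle. L = D - A.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Full eigendecomposition: the O(n^3) step that dominates preprocessing.
lam, U = np.linalg.eigh(L)

def spectral_filter(x, g):
    """x_out = U g(Lambda) U^T x for a scalar function g of the eigenvalues."""
    return U @ (g(lam) * (U.T @ x))

x = np.random.default_rng(0).normal(size=n)
x_smooth = spectral_filter(x, lambda lams: np.exp(-2.0 * lams))  # low-pass (heat) filter
```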
7.2 Polynomial Filters and Localization
A $K$-th order polynomial filter has the form:

$$g(\lambda) = \sum_{k=0}^{K} \theta_k \lambda^k, \qquad g(L) = \sum_{k=0}^{K} \theta_k L^k.$$
Key property: $K$-localization. The filter is exactly $K$-localized: $(g(L)x)_i$ depends only on the values of $x$ at vertices within graph distance $K$ from $i$.
Proof. $(L^k)_{ij} = 0$ whenever $d(i, j) > k$ (by the walk-counting property of graph matrix powers). Therefore $\big(g(L)\big)_{ij} = 0$ whenever $d(i, j) > K$.
Complexity. Computing $g(L)x$ using the recurrence $x^{(k)} = L\, x^{(k-1)}$ requires $K$ sparse matrix-vector multiplications, each $O(|E|)$. Total: $O(K|E|)$.
Spatial interpretation. A polynomial filter of order $K$ is exactly equivalent to a $K$-hop neighborhood aggregation, connecting the spectral and spatial GNN views. This is the theoretical justification for why GNNs with $K$ layers aggregate information from $K$-hop neighborhoods.
Approximation theorem. By the Stone-Weierstrass theorem, any continuous function $g : [0, \lambda_{\max}] \to \mathbb{R}$ can be uniformly approximated by polynomials. So polynomial filters are universal approximators for spectral filters on any graph.
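A minimal sketch of a polynomial filter applied with $K$ sparse matrix-vector products (SciPy; the path graph and the coefficients are illustrative):

```python
import numpy as np
import scipy.sparse as sp

def poly_filter(L, x, theta):
    """Apply g(L) x = sum_k theta[k] L^k x using K sparse matvecs: O(K|E|)."""
    out = theta[0] * x
    v = x
    for t in theta[1:]:
        v = L @ v            # next power of L applied to x
        out = out + t * v
    return out

# Toy usage: path graph on 100 vertices, K = 2 filter.
n = 100
off = np.ones(n - 1)
A = sp.diags([off, off], offsets=[-1, 1], format="csr")
L = (sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsr()
y = poly_filter(L, np.ones(n), theta=[0.5, -0.3, 0.1])
```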
7.3 Chebyshev Polynomial Approximation
Why Chebyshev? Among all monic polynomials of degree $K$, the scaled Chebyshev polynomial $2^{1-K} T_K$ has the smallest maximum deviation from zero on $[-1, 1]$ - Chebyshev polynomials are the optimal (minimax) basis for polynomial approximation.
Definition. The Chebyshev polynomials satisfy:

$$T_0(x) = 1, \qquad T_1(x) = x, \qquad T_k(x) = 2x\, T_{k-1}(x) - T_{k-2}(x).$$

They have the closed form $T_k(x) = \cos(k \arccos x)$ for $x \in [-1, 1]$.
Chebyshev graph filter (ChebNet, Defferrard et al., 2016). Scale the Laplacian to $\tilde{L} = \frac{2}{\lambda_{\max}} L - I$ (shifting the eigenvalues from $[0, \lambda_{\max}]$ to $[-1, 1]$). Define:

$$g_\theta(L)\, x = \sum_{k=0}^{K} \theta_k\, T_k(\tilde{L})\, x.$$

Computation via the recurrence:

$$\bar{x}^{(0)} = x, \qquad \bar{x}^{(1)} = \tilde{L} x, \qquad \bar{x}^{(k)} = 2 \tilde{L}\, \bar{x}^{(k-1)} - \bar{x}^{(k-2)}.$$

Each step requires one sparse matrix-vector multiply, $O(|E|)$; the total cost is $O(K|E|)$.
Advantages over truncated Taylor series:
- The Chebyshev approximation error decays exponentially in $K$ (geometric convergence for smooth $g$)
- No numerical instability from large powers of $L$
- The learned parameters $\theta_k$ have a clear frequency interpretation
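A sketch of the Chebyshev recurrence as a reusable function (SciPy; when `lam_max` is not supplied it is estimated with one Lanczos call, and at least two coefficients are assumed):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cheb_filter(L, x, theta, lam_max=None):
    """ChebNet-style filter sum_k theta[k] T_k(L_tilde) x, where
    L_tilde = (2 / lam_max) L - I has spectrum in [-1, 1].
    Assumes len(theta) >= 2."""
    if lam_max is None:
        # Largest-magnitude eigenvalue of the PSD Laplacian = lambda_max.
        lam_max = spla.eigsh(L, k=1, return_eigenvectors=False)[0]
    L_t = (2.0 / lam_max) * L - sp.identity(L.shape[0], format="csr")
    x_prev, x_cur = x, L_t @ x                  # T_0(L~) x and T_1(L~) x
    out = theta[0] * x_prev + theta[1] * x_cur
    for t in theta[2:]:                         # T_k = 2 L~ T_{k-1} - T_{k-2}
        x_prev, x_cur = x_cur, 2.0 * (L_t @ x_cur) - x_prev
        out = out + t * x_cur
    return out
```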
7.4 Heat Kernel and Diffusion Filters
The graph heat equation generalizes diffusion to graphs:

$$\frac{dx(t)}{dt} = -L\, x(t), \qquad x(0) = x_0.$$

Solution: $x(t) = e^{-tL} x_0$. In the spectral domain: $\hat{x}_k(t) = e^{-\lambda_k t}\, \hat{x}_k(0)$ - each frequency decays at rate $\lambda_k$.
The heat kernel $H_t = e^{-tL}$ is a positive semidefinite matrix representing the diffusion of heat on the graph over time $t$. Entry $(H_t)_{ij}$ gives the heat at vertex $i$ after time $t$ when a unit heat source is placed at vertex $j$.
Properties:
- For $t = 0$: $H_0 = I$ (no diffusion)
- For $t \to \infty$: $H_t \to \frac{1}{|C|}\mathbf{1}\mathbf{1}^\top$ on each connected component $C$ (heat equalizes, constant temperature on each component)
- Relates to the random walk: $e^{-t L_{rw}} = e^{-t} \sum_{k \ge 0} \frac{t^k}{k!} P^k$ with $P = D^{-1} A$ (a continuous-time walk with Poisson-distributed jump counts)
Diffusion distance. The distance between vertices $i$ and $j$ at time scale $t$ is:

$$d_t(i, j) = \big\| H_t (e_i - e_j) \big\|_2 = \Big( \sum_{k} e^{-2\lambda_k t}\, \big(u_k(i) - u_k(j)\big)^2 \Big)^{1/2}.$$
This diffusion distance is more robust than shortest-path distance: it accounts for all paths between $i$ and $j$, not just the shortest one.
For AI: The PPMI (Positive Pointwise Mutual Information) matrix used in graph-based word embeddings is approximately a diffusion kernel. The node2vec random walk (Grover & Leskovec, 2016) approximates diffusion distance.
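A small sketch of heat diffusion computed from the eigendecomposition (NumPy; the two-triangles-plus-bridge toy graph is illustrative):

```python
import numpy as np

# Two triangles joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)

def heat_kernel(t):
    """H_t = U exp(-t Lambda) U^T."""
    return (U * np.exp(-t * lam)) @ U.T

# Unit heat at vertex 0: at t = 0 nothing has moved; as t grows the
# heat equalizes toward 1/n on the (connected) graph.
x0 = np.eye(6)[0]
for t in [0.0, 0.5, 5.0, 50.0]:
    print(t, np.round(heat_kernel(t) @ x0, 3))
```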
7.5 From Chebyshev to GCN
The GCN layer (Kipf & Welling, 2017) is derived from ChebNet by:
Step 1: Set $K = 1$ (first-order Chebyshev approximation): $g_\theta(L)\, x \approx \theta_0 x + \theta_1 \tilde{L} x$.

Step 2: Approximate $\lambda_{\max} \approx 2$ (holding for regular and near-regular graphs), so $\tilde{L} = L_{sym} - I = -D^{-1/2} A D^{-1/2}$ and $g_\theta(L)\, x \approx \theta_0 x - \theta_1 D^{-1/2} A D^{-1/2} x$.

Step 3: Constrain $\theta := \theta_0 = -\theta_1$ (reduce parameters to prevent overfitting):

$$g_\theta(L)\, x \approx \theta \big(I + D^{-1/2} A D^{-1/2}\big) x.$$

Step 4: Add self-loops ($\tilde{A} = A + I$, $\tilde{D} = D + I$) and renormalize to prevent numerical issues (the "renormalization trick"):

$$I + D^{-1/2} A D^{-1/2} \;\longrightarrow\; \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}.$$

Full GCN layer:

$$H^{(l+1)} = \sigma\big(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}\, H^{(l)}\, W^{(l)}\big).$$
Spectral interpretation. The GCN filter $g(\lambda) \approx 1 - \lambda$ is a low-pass filter: it passes low frequencies ($\lambda \approx 0$, smooth signals) and attenuates high frequencies ($\lambda \approx 2$, rapidly varying signals). GCN is fundamentally a graph smoother.
Full GNN treatment: For GraphSAGE, GAT, MPNN framework, over-smoothing fixes, and expressiveness theory, see 11-05 Graph Neural Networks.
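A minimal dense sketch of one GCN propagation step (NumPy; the toy graph, feature, and weight shapes are illustrative, and ReLU stands in for $\sigma$):

```python
import numpy as np

def gcn_layer(A, H, W):
    """sigma(D~^{-1/2} A~ D~^{-1/2} H W) with sigma = ReLU."""
    A_tilde = A + np.eye(A.shape[0])                # Step 4: add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = (d_inv_sqrt[:, None] * A_tilde) * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))   # 4 nodes, 3 input features
W = rng.normal(size=(3, 2))   # learnable 3 -> 2 feature map
print(gcn_layer(A, H, W))
```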
8. Spectral Clustering
8.1 Graph Partitioning Objectives
Minimum cut. Given a graph $G = (V, E)$ and an integer $k$, partition $V = A_1 \cup \dots \cup A_k$ (disjoint, non-empty) to minimize:

$$\mathrm{cut}(A_1, \dots, A_k) = \frac{1}{2} \sum_{i=1}^{k} |\partial A_i|,$$

where $\partial A_i$ is the set of edges between $A_i$ and its complement.
Problem with minimum cut. Minimum cut tends to cut off isolated vertices or very small sets - the trivial solution $A = \{v\}$ for a low-degree vertex $v$ has very few edges to cut. We need objectives that balance cluster sizes.
RatioCut (Hagen & Kahng, 1992):

$$\mathrm{RatioCut}(A_1, \dots, A_k) = \sum_{i=1}^{k} \frac{\mathrm{cut}(A_i, \bar{A}_i)}{|A_i|}.$$

Normalizing by the number of vertices in each partition prevents very small cuts.
Normalized Cut (NCut) (Shi & Malik, 2000):

$$\mathrm{NCut}(A_1, \dots, A_k) = \sum_{i=1}^{k} \frac{\mathrm{cut}(A_i, \bar{A}_i)}{\mathrm{vol}(A_i)}, \qquad \mathrm{vol}(A) = \sum_{v \in A} d_v.$$

Normalizing by the volume (total degree) gives a degree-weighted version of RatioCut.
Both problems are NP-hard in general. Spectral clustering relaxes them to tractable eigenvalue problems.
8.2 RatioCut and Unnormalized Spectral Clustering
Two-cluster RatioCut. For $k = 2$ with partition $(A, \bar{A})$:

Define the indicator vector $f \in \mathbb{R}^n$:

$$f_i = \begin{cases} \sqrt{|\bar{A}| / |A|} & i \in A \\ -\sqrt{|A| / |\bar{A}|} & i \in \bar{A} \end{cases}$$

Claim. $f^\top L f = n \cdot \mathrm{RatioCut}(A, \bar{A})$. Also: $f \perp \mathbf{1}$ and $\|f\|^2 = n$.

Proof: $f^\top L f = \sum_{(i,j) \in E} (f_i - f_j)^2$. The only nonzero terms come from edges crossing the cut: for $i \in A$, $j \in \bar{A}$:

$$(f_i - f_j)^2 = \left(\sqrt{\tfrac{|\bar{A}|}{|A|}} + \sqrt{\tfrac{|A|}{|\bar{A}|}}\right)^2 = n \left(\frac{1}{|A|} + \frac{1}{|\bar{A}|}\right).$$

Summing over all cut edges and using $|A| + |\bar{A}| = n$:

$$f^\top L f = \mathrm{cut}(A, \bar{A}) \cdot n \left(\frac{1}{|A|} + \frac{1}{|\bar{A}|}\right) = n \cdot \mathrm{RatioCut}(A, \bar{A}).$$
Relaxation. The discrete optimization $\min_f f^\top L f$ subject to $f \perp \mathbf{1}$, $\|f\| = \sqrt{n}$, and $f_i$ taking only the two discrete values above is NP-hard. Relax the integrality constraint: allow $f \in \mathbb{R}^n$. By Courant-Fischer, the solution is the Fiedler vector $u_2$.
Recovery. Given $u_2$, assign vertex $i$ to $A$ if $u_2(i) \ge 0$, to $\bar{A}$ otherwise. In practice, use k-means with $k = 2$ on the entries of $u_2$ for robustness.
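A compact sketch of the relaxed two-cluster procedure (NumPy; note that the boolean labels may come out flipped across runs because of eigenvector sign ambiguity):

```python
import numpy as np

def spectral_bisection(A):
    """RatioCut relaxation: split vertices by the sign of the Fiedler vector."""
    L = np.diag(A.sum(axis=1)) - A
    _, U = np.linalg.eigh(L)      # eigenvalues ascending
    return U[:, 1] >= 0           # Fiedler vector = second column

# Two triangles joined by a bridge: the split cuts the bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(spectral_bisection(A))      # e.g. [ True  True  True False False False]
```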
8.3 Normalized Cut (Shi & Malik 2000)
NCut relaxation. Define the indicator analogously to RatioCut but with volume weights: for partition $(A, \bar{A})$:

$$f_i = \begin{cases} \sqrt{\mathrm{vol}(\bar{A}) / \mathrm{vol}(A)} & i \in A \\ -\sqrt{\mathrm{vol}(A) / \mathrm{vol}(\bar{A})} & i \in \bar{A} \end{cases}$$

Then $f^\top L f = \mathrm{vol}(V) \cdot \mathrm{NCut}(A, \bar{A})$, subject to $D f \perp \mathbf{1}$ and $f^\top D f = \mathrm{vol}(V)$.

Generalized eigenvalue problem. The continuous relaxation is:

$$\min_{f \in \mathbb{R}^n} f^\top L f \quad \text{s.t.} \quad f^\top D f = \mathrm{vol}(V), \; D f \perp \mathbf{1} \quad \Longleftrightarrow \quad L f = \mu D f,$$

via the substitution $g = D^{1/2} f$: this becomes the standard Rayleigh quotient for $L_{sym} = D^{-1/2} L D^{-1/2}$, minimized by its second eigenvector $u_2$. Thus:

$$f = D^{-1/2} u_2,$$

where $u_2$ is the Fiedler vector of $L_{sym}$.
Shi-Malik algorithm (2-cluster):
- Build the affinity matrix $W$ and the degree matrix $D$
- Compute the Fiedler vector $f$ of the generalized problem $L f = \mu D f$ (equivalently, $f = D^{-1/2} u_2$ with $u_2$ from $L_{sym}$)
- Assign vertex $i$ to $A$ if $f_i > \tau$, to $\bar{A}$ otherwise
- Choose the threshold $\tau$ empirically (try all thresholds and keep the best NCut) or fix it at $0$
Multi-class NCut. For $k$ clusters, use the $k$ smallest generalized eigenvectors of $L f = \mu D f$, form the matrix $U = [u_1, \dots, u_k] \in \mathbb{R}^{n \times k}$, normalize each row to unit norm, then apply k-means to the rows.
8.4 Multi-Way Spectral Clustering
The Ng-Jordan-Weiss (NJW) algorithm (2002) is the standard multi-class spectral clustering:
- Build the normalized Laplacian $L_{sym} = I - D^{-1/2} A D^{-1/2}$
- Compute the $k$ eigenvectors for the $k$ smallest eigenvalues of $L_{sym}$
- Form $U \in \mathbb{R}^{n \times k}$ with these eigenvectors as columns
- Normalize rows: let $Y_{ij} = U_{ij} / \big(\sum_{j'} U_{ij'}^2\big)^{1/2}$ (row normalization)
- Apply k-means to the rows of $Y$
Why row normalization? The perturbation argument below shows that in a perfect $k$-cluster graph, the rows of $U$ lie exactly on $k$ mutually orthogonal vectors. Row normalization maps them to points on the unit sphere regardless of degree, making k-means converge cleanly.
Perturbation theory justification. Consider a "block graph" consisting of $k$ disconnected cliques. The $k$ smallest eigenvalues of $L_{sym}$ are all $0$, with eigenvectors the normalized indicators of the cliques. Any real graph with $k$ communities can be seen as a perturbed block graph; if the perturbation (the inter-community edges) is small, the eigenvectors stay close to the block indicators. Weyl's perturbation theorem quantifies how much the eigenvalues change.
8.5 Complete Algorithm and Implementation
```
SPECTRAL CLUSTERING ALGORITHM
========================================================================
Input:  Adjacency matrix A in R^(n x n), number of clusters k
Output: Cluster assignments c in {1,...,k}^n

1. Compute degree matrix D = diag(A 1)
2. Compute normalized Laplacian L_sym = D^(-1/2) (D - A) D^(-1/2)
   (or L_rw = I - D^(-1) A; use L_sym for the symmetric version)
3. Compute the k smallest eigenvalues and eigenvectors of L_sym
   -> eigenvectors form the columns of U_k in R^(n x k)
4. Normalize rows: Y[i,:] = U_k[i,:] / ||U_k[i,:]||_2
   (skip for RatioCut; required for NCut)
5. Apply k-means clustering to the rows of Y
   -> cluster centers mu_1,...,mu_k; assignments c[i] in {1,...,k}
6. Return c

Complexity: O(n^3) for a full eigendecomposition;
            O(k n |E|) with Lanczos + k-means (large graphs)
========================================================================
```
Practical notes:
- Use the Lanczos algorithm or LOBPCG to compute the smallest eigenvectors of $L_{sym}$ on large sparse graphs (avoid the full eigendecomposition)
- The choice of $k$ can be guided by the eigengap heuristic: choose the $k$ where the gap $\lambda_{k+1} - \lambda_k$ is largest
- K-means is run multiple times with random restarts; take the best result (lowest inertia)
- For disconnected graphs, the eigenvectors for the zero eigenvalues directly give the cluster indicators
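A minimal implementation of the algorithm above, following these practical notes (assuming SciPy for the sparse eigensolver and scikit-learn for k-means):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from sklearn.cluster import KMeans

def spectral_clustering(A, k, seed=0):
    """NJW spectral clustering on a (sparse) adjacency matrix A."""
    A = sp.csr_matrix(A)
    d = np.asarray(A.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = sp.identity(A.shape[0], format="csr") - D_inv_sqrt @ A @ D_inv_sqrt
    # k smallest eigenpairs via Lanczos; shift-invert is faster on big graphs.
    _, U = spla.eigsh(L_sym, k=k, which="SM")
    Y = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Y)
```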
8.6 When Spectral Clustering Beats k-Means
K-means minimizes within-cluster variance assuming convex, isotropic, similarly-sized clusters. It fails on non-convex cluster shapes. Spectral clustering has no shape assumption - it works on any cluster structure that is well-separated in the graph.
When spectral clustering excels:
- Concentric rings, spirals, moons - any shape detectable by graph connectivity
- Clusters at multiple scales (nested communities)
- Data with non-Euclidean structure (molecules, social networks)
When k-means excels:
- Truly Gaussian clusters in $\mathbb{R}^d$
- Very large $n$, where eigenvector computation is too slow
- Cluster structure is well-captured by Euclidean distance
A critical nuance: Spectral clustering requires building the adjacency/affinity graph first. The choice between a $k$-NN graph and an $\varepsilon$-neighborhood graph, and the values of $k$ or $\varepsilon$, matter enormously for quality. A common failure mode: if the graph is built with $k$ or $\varepsilon$ too small, the graph may become disconnected even within a true cluster. If too large, community structure is washed out.
9. Laplacian Eigenmaps and Graph Embeddings
9.1 The Embedding Problem
Given a graph $G = (V, E)$, we want a mapping $\Phi : V \to \mathbb{R}^d$ (with $d \ll n$) that preserves the graph structure: vertices that are nearby in the graph should be nearby in the embedding. Formally, we want:

$$\min_{\Phi} \sum_{(i,j) \in E} w_{ij}\, \|\Phi(i) - \Phi(j)\|^2,$$

subject to constraints that prevent the trivial solution $\Phi(i) = 0$ for all $i$.

Decomposing dimension by dimension, this is $d$ separate problems, each of the form:

$$\min_{f \in \mathbb{R}^n} \sum_{(i,j) \in E} w_{ij}\, (f_i - f_j)^2 = \min_{f} f^\top L f.$$

This is exactly minimizing the Dirichlet energy, solved by the low-frequency eigenvectors of $L$.
9.2 Laplacian Eigenmaps Algorithm
Belkin & Niyogi (2001/2003). Given data points $x_1, \dots, x_n \in \mathbb{R}^D$:
- Build the adjacency graph: connect points $x_i$ and $x_j$ if they are among each other's $k$ nearest neighbors (or if $\|x_i - x_j\| < \varepsilon$).
- Set edge weights: use the heat kernel $w_{ij} = \exp\big(-\|x_i - x_j\|^2 / t\big)$ for connected pairs (with a bandwidth parameter $t$).
- Compute degree and Laplacian: $D = \mathrm{diag}(W \mathbf{1})$, $L = D - W$.
- Solve the generalized eigenvalue problem $L f = \lambda D f$. Equivalently: find the eigenvectors of $L_{rw} = D^{-1} L$ (or of $L_{sym}$).
- Embed: take the eigenvectors $f_2, \dots, f_{d+1}$ (skip the constant $f_1$) and set $\Phi(i) = \big(f_2(i), \dots, f_{d+1}(i)\big)$.
Optimality theorem. The Laplacian eigenmap embedding is the solution to the optimization problem:

$$\min_{Y \in \mathbb{R}^{n \times d},\; Y^\top D Y = I} \mathrm{tr}\big(Y^\top L Y\big).$$

The solution is $Y = [f_2, \dots, f_{d+1}]$ (the generalized eigenvectors of $L f = \lambda D f$ for the $d$ smallest non-trivial eigenvalues). This is optimal in the sense that no other $d$-dimensional embedding has smaller total Dirichlet energy under the same normalization.
Manifold learning interpretation. If the data points lie on a $d$-dimensional manifold $\mathcal{M}$ embedded in $\mathbb{R}^D$, the Laplacian eigenmap recovers the intrinsic coordinates of the manifold. As $n \to \infty$ and the bandwidth $t \to 0$ at an appropriate rate, the graph Laplacian converges to the Laplace-Beltrami operator $\Delta_{\mathcal{M}}$ on the manifold (Belkin & Niyogi, 2008).
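A sketch of the full pipeline (assuming scikit-learn for the kNN graph; the noisy-circle dataset, `n_neighbors`, and bandwidth `t` are illustrative):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, d=2, n_neighbors=10, t=1.0):
    """Belkin-Niyogi: kNN graph + heat-kernel weights + L f = lambda D f."""
    W = kneighbors_graph(X, n_neighbors, mode="distance").toarray()
    W = np.maximum(W, W.T)                      # symmetrize the kNN graph
    mask = W > 0
    W[mask] = np.exp(-W[mask] ** 2 / t)         # heat-kernel edge weights
    D = np.diag(W.sum(axis=1))
    _, F = eigh(D - W, D)                       # generalized problem, ascending
    return F[:, 1:d + 1]                        # skip the constant eigenvector

# Usage: noisy circle in R^2; the embedding coordinates recover the angle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))
Y = laplacian_eigenmaps(X, d=2)
```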
9.3 Diffusion Maps
Coifman & Lafon (2006) introduced diffusion maps as a multiscale version of Laplacian eigenmaps.
Define the diffusion operator $P = D^{-1} W$ (the random walk matrix) and its $t$-step version $P^t$. The diffusion distance at scale $t$:

$$D_t(i, j)^2 = \sum_{k \ge 2} \lambda_k^{2t}\, \big(\psi_k(i) - \psi_k(j)\big)^2,$$

where $\lambda_k, \psi_k$ are the eigenvalues and right eigenvectors of $P$. The diffusion map embedding:

$$\Psi_t(i) = \big(\lambda_2^t \psi_2(i), \dots, \lambda_{d+1}^t \psi_{d+1}(i)\big).$$

The Euclidean distance in the diffusion map equals the diffusion distance: $\|\Psi_t(i) - \Psi_t(j)\|_2 = D_t(i, j)$.
Multi-scale property. By varying $t$, diffusion maps reveal structure at different scales:
- Small $t$: local neighborhood structure
- Large $t$: global cluster structure (only the dominant eigenvectors with $|\lambda_k|$ close to $1$ remain)
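A dense sketch of the diffusion map via the symmetric conjugate of $P$ (NumPy; assumes a precomputed affinity matrix `W` and an integer scale `t`):

```python
import numpy as np

def diffusion_map(W, d=2, t=1):
    """Embed node i as (lambda_k^t psi_k(i)) for k = 2..d+1, where lambda_k,
    psi_k are eigenpairs of P = D^{-1} W. Computed via the symmetric matrix
    S = D^{-1/2} W D^{-1/2}, which has the same eigenvalues as P."""
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    S = (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    lam, V = np.linalg.eigh(S)
    idx = np.argsort(-lam)                     # sort eigenvalues descending
    lam, V = lam[idx], V[:, idx]
    psi = d_inv_sqrt[:, None] * V              # right eigenvectors of P
    return (lam[1:d + 1] ** t) * psi[:, 1:d + 1]   # skip trivial psi_1
```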
9.4 Relationship to PCA and Kernel PCA
Kernel PCA (Schölkopf et al., 1998) computes the principal components of data in a feature space defined by a kernel $k(x_i, x_j)$. For a kernel matrix $K$ with $K_{ij} = k(x_i, x_j)$, kernel PCA computes the eigenvectors of the centered kernel matrix.
Commute-time embedding. The commute time $C_{ij}$ between vertices $i$ and $j$ is the expected number of steps for a random walk starting at $i$ to reach $j$ and return. It equals:

$$C_{ij} = \mathrm{vol}(G)\, \big(L^+_{ii} + L^+_{jj} - 2 L^+_{ij}\big),$$

where $L^+$ is the pseudoinverse of $L$. Embedding so that Euclidean distances match commute times is kernel PCA with the commute-time kernel $K = L^+$. So Laplacian eigenmaps is essentially a special case of kernel PCA.
9.5 Spectral Positional Encodings for Transformers
Standard Transformers process tokens with positional encodings to handle sequence order. Graph Transformers need analogous positional encodings for graph nodes - but graphs have no canonical ordering.
Laplacian Positional Encoding (LapPE). Use the first $d$ non-trivial eigenvectors of the graph Laplacian as node positional encodings:

$$\mathrm{PE}(i) = \big(u_2(i), u_3(i), \dots, u_{d+1}(i)\big) \in \mathbb{R}^d,$$

where $u_k(i)$ is the $i$-th entry of the $k$-th Laplacian eigenvector.
Challenge: Sign ambiguity. Each eigenvector is defined only up to sign: if $u_k$ is an eigenvector, so is $-u_k$. This creates non-uniqueness in the PE.
Solutions:
- Random sign flips during training (Dwivedi et al., 2022): randomly flip signs in training; the Transformer learns sign-invariant functions
- SignNet (Lim et al., 2022): use a Deep Sets architecture that is invariant to sign flips: $f(u_k) = \rho\big(\phi(u_k) + \phi(-u_k)\big)$
- BasisNet: extend to the case of repeated eigenvalues (multiplicity $> 1$), which introduce rotational ambiguity
RWPE (Random Walk Positional Encoding). Instead of Laplacian eigenvectors, use the return probabilities of the first $d$ steps of a random walk:

$$\mathrm{RWPE}(i) = \big(P_{ii},\, (P^2)_{ii},\, \dots,\, (P^d)_{ii}\big),$$

where $P = D^{-1} A$ is the random walk transition matrix. This avoids the sign ambiguity issue and is invariant to graph automorphisms. Used in GPS (Rampasek et al., 2022) - one of the best-performing graph Transformers.
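A short sketch of the RWPE computation (NumPy; dense for clarity, with `d` the encoding dimension):

```python
import numpy as np

def rwpe(A, d=8):
    """RWPE(i) = (P_ii, (P^2)_ii, ..., (P^d)_ii) with P = D^{-1} A:
    the probability of returning to node i after 1..d random-walk steps."""
    P = A / A.sum(axis=1, keepdims=True)
    pe = np.empty((A.shape[0], d))
    Pk = np.eye(A.shape[0])
    for k in range(d):
        Pk = Pk @ P                  # P^(k+1)
        pe[:, k] = np.diag(Pk)       # self-return probabilities
    return pe
```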
Why LapPE/RWPE matter. Without positional encodings, graph Transformers cannot distinguish graph structure - all nodes with the same degree distribution look identical. Spectral PE gives each node a unique "spectral fingerprint" derived from its position in the graph's Fourier basis.
10. Directed Graph Spectra
10.1 Directed Laplacians
For a directed graph (digraph) $G = (V, E)$ with $E \subseteq V \times V$ (ordered pairs), the adjacency matrix is not symmetric: $A_{ij} = 1$ if $(i, j) \in E$, but $A_{ji}$ may be $0$.

In-degree and out-degree: For each vertex $i$, $d_i^{\mathrm{in}} = \sum_j A_{ji}$ (number of incoming edges) and $d_i^{\mathrm{out}} = \sum_j A_{ij}$ (number of outgoing edges).

Out-degree Laplacian: $L^{\mathrm{out}} = D^{\mathrm{out}} - A$, where $D^{\mathrm{out}} = \mathrm{diag}(d_1^{\mathrm{out}}, \dots, d_n^{\mathrm{out}})$.

In-degree Laplacian: $L^{\mathrm{in}} = D^{\mathrm{in}} - A^\top$ (equivalently, the out-degree Laplacian of the reversed graph).
Key difference from undirected case:
- $L^{\mathrm{out}}$ is NOT symmetric in general
- Eigenvalues may be complex
- The row-sum-zero property still holds: $L^{\mathrm{out}} \mathbf{1} = 0$ (since each row sums to $d_i^{\mathrm{out}} - d_i^{\mathrm{out}} = 0$)
- But the column sums, $d_j^{\mathrm{out}} - d_j^{\mathrm{in}}$, are not necessarily zero

Stationary distribution. The directed random walk matrix $P = (D^{\mathrm{out}})^{-1} A$ is row-stochastic. For a strongly connected digraph, the unique stationary distribution $\pi$ satisfies $\pi^\top P = \pi^\top$. The stationary distribution is NOT necessarily uniform (unlike for $d$-regular undirected graphs).
10.2 Kirchhoff's Matrix-Tree Theorem
Theorem (Kirchhoff, 1847). For a connected undirected graph $G$, the number of spanning trees $\tau(G)$ equals any cofactor of $L$:

$$\tau(G) = \frac{1}{n}\, \lambda_2 \lambda_3 \cdots \lambda_n,$$

where $\lambda_2, \dots, \lambda_n$ are the non-zero eigenvalues of $L$.

Proof sketch. Any principal $(n-1) \times (n-1)$ minor of $L$ counts spanning trees, by the Cauchy-Binet formula applied to the incidence-matrix factorization of $L$. The eigenvalue form follows from the spectral decomposition together with the fact that $\lambda_1 = 0$ with eigenvector $\mathbf{1}/\sqrt{n}$.
Examples:
- $K_n$: $\lambda_2 = \dots = \lambda_n = n$ (all equal), so $\tau(K_n) = \frac{1}{n} n^{n-1} = n^{n-2}$ (Cayley's formula).
- $P_n$ (path): $\tau(P_n) = 1$ (only one spanning tree - the path itself).
- $C_n$ (cycle): $\tau(C_n) = n$.
For AI: The number of spanning trees measures "graph robustness." Networks with many spanning trees (like expanders) remain connected even after many edge failures. This metric appears in network reliability analysis for distributed training clusters.
10.3 PageRank as a Spectral Problem
PageRank (Page, Brin, Motwani, Winograd, 1998) - the algorithm behind Google Search - is fundamentally a spectral computation on a directed graph.
Setup. Model the Web as a directed graph: pages are vertices, hyperlinks are directed edges. Define the Google matrix:

$$G = \alpha P + (1 - \alpha)\, \frac{1}{n} \mathbf{1}\mathbf{1}^\top,$$

where $P$ is the column-stochastic random-walk matrix ($P_{ij} = 1 / d_j^{\mathrm{out}}$ if page $j$ links to page $i$), $\alpha$ is the damping factor (typically $\alpha = 0.85$), and $\frac{1}{n}\mathbf{1}\mathbf{1}^\top$ represents teleportation (random jumps to any page).
PageRank vector. The PageRank of each page is the stationary distribution of the Markov chain defined by $G$:

$$\pi = G \pi, \qquad \sum_i \pi_i = 1, \quad \pi_i \ge 0.$$

Equivalently, $\pi$ is the dominant eigenvector of the column-stochastic matrix $G$ (eigenvalue $1$).

Spectral computation. By the Perron-Frobenius theorem, $G$ is a positive stochastic matrix (all entries at least $(1 - \alpha)/n > 0$ due to the teleportation term), so it has a unique dominant eigenvalue $1$ with a unique positive eigenvector $\pi$.
Power iteration. PageRank is computed by:

$$\pi^{(t+1)} = G\, \pi^{(t)} = \alpha P \pi^{(t)} + \frac{1 - \alpha}{n} \mathbf{1}.$$

Convergence rate: geometric with ratio $\alpha$ - the second eigenvalue of $G$ is at most $\alpha$. Each iteration is a sparse matrix-vector multiply, $O(|E|)$.
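A sparse power-iteration sketch (SciPy; the dangling-node handling here is the simplest possible choice and is an assumption, not part of the formulation above):

```python
import numpy as np
import scipy.sparse as sp

def pagerank(A, alpha=0.85, tol=1e-10, max_iter=200):
    """pi <- alpha P pi + (1 - alpha)/n, with column-stochastic P built
    from the link matrix A (A[i, j] = 1 if page i links to page j)."""
    n = A.shape[0]
    out_deg = np.asarray(A.sum(axis=1)).ravel()
    out_deg[out_deg == 0] = 1.0                       # crude dangling-node fix
    P = sp.csr_matrix(A).T.multiply(1.0 / out_deg).tocsr()
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        pi_new = alpha * (P @ pi) + (1.0 - alpha) / n
        if np.abs(pi_new - pi).sum() < tol:           # L1 convergence check
            return pi_new
        pi = pi_new
    return pi
```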
For AI (RLHF and LLM preference graphs): In reinforcement learning from human feedback (RLHF), preference data can be modeled as a directed graph over responses, with an edge $i \to j$ meaning "response $j$ is preferred over response $i$." PageRank on this preference graph gives a global ranking consistent with the pairwise preferences. This is closely related to the Bradley-Terry models used in reward model training (Ouyang et al., 2022).
10.4 Directed Graph Spectra in AI
Attention as a directed graph. In a Transformer, the attention weights define a directed weighted graph over tokens. The spectral properties of this attention graph have interpretability implications:
- The dominant eigenvector of the attention matrix identifies "hub" tokens - tokens that receive most attention
- Spectral analysis of attention graphs has been used in mechanistic interpretability to identify "induction heads" and "name mover heads" (Olsson et al., 2022)
- The spectral gap of the attention graph determines how quickly information mixes across token positions
Causal DAGs. In causal inference (Chapter 22), structural causal models are represented as DAGs. The adjacency matrix of a DAG is nilpotent (all its eigenvalues are zero), and its nilpotency index - the smallest $k$ with $A^k = 0$ - equals one plus the length of the longest directed path, measuring the "depth" of causal chains and the reach of long-range causal effects.
11. Advanced Topics
11.1 Spectral Sparsification
Problem. For a dense graph $G$ with $n$ vertices and $m$ edges, many spectral algorithms are too slow. Can we find a sparse graph $H$ with far fewer edges that preserves the spectrum of $L_G$?
Definition. $H$ is an $\varepsilon$-spectral sparsifier of $G$ if for all $x \in \mathbb{R}^n$:

$$(1 - \varepsilon)\, x^\top L_G x \;\le\; x^\top L_H x \;\le\; (1 + \varepsilon)\, x^\top L_G x.$$

Equivalently, $(1 - \varepsilon) L_G \preceq L_H \preceq (1 + \varepsilon) L_G$ in the PSD order.

Theorem (Spielman & Srivastava, 2011). Every graph has an $\varepsilon$-spectral sparsifier with $O(n \log n / \varepsilon^2)$ edges, computable in near-linear time using random sampling weighted by effective resistances.
Effective resistance. The effective resistance $R_{\mathrm{eff}}(u, v)$ between vertices $u$ and $v$ is the electrical resistance between them when a unit resistor is placed on each edge. It equals:

$$R_{\mathrm{eff}}(u, v) = (e_u - e_v)^\top L^+ (e_u - e_v),$$

where $L^+$ is the pseudoinverse of $L$.

Algorithm: Sample each edge $e$ independently with probability proportional to $w_e\, R_{\mathrm{eff}}(e)$, and rescale the weights of the sampled edges. The resulting sparse graph preserves all spectral properties up to $1 \pm \varepsilon$.
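A small dense sketch of the key quantity (NumPy pseudoinverse; fine for small graphs, whereas the actual Spielman-Srivastava algorithm approximates resistances in near-linear time):

```python
import numpy as np

def effective_resistances(A):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v) for every edge of A."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)          # O(n^3): small graphs only
    n = A.shape[0]
    return {(u, v): Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
            for u in range(n) for v in range(u + 1, n) if A[u, v] > 0}

# Sampling probabilities proportional to w_e * R_eff(e): bridges (resistance
# close to 1) are almost surely kept; edges inside dense clusters are dropped.
```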
For AI: Spectral sparsification can reduce the computational cost of graph-based ML. A 10-million-edge social graph can be sparsified to $O(n \log n / \varepsilon^2)$ edges while preserving spectral clustering quality.
11.2 Random Matrix Theory and Graph Spectra
The spectrum of a random graph has universal limiting behavior described by random matrix theory.
Erdős-Rényi model $G(n, p)$. For a random graph where each edge appears independently with probability $p$, the empirical spectral distribution of the rescaled adjacency matrix $A / \sqrt{np(1-p)}$ converges to the semicircle law (Wigner, 1955):

$$\rho(x) = \frac{1}{2\pi} \sqrt{4 - x^2}, \qquad x \in [-2, 2].$$

The leading eigenvalue $\lambda_1 \approx np$ separates from the bulk when $np > 1$, corresponding to the emergence of a giant connected component.
Implications for GNNs:
- Random weight matrices in GNNs have spectra approximated by the semicircle law (for large enough hidden dimensions)
- The alignment between the spectra of data graph and random weight matrices affects gradient flow in training
- Spectral norm regularization of GNN weights controls training stability by constraining the spectral radius
11.3 Graph Wavelets
Motivation. Laplacian eigenvectors are global: each $u_k$ is generically supported on all vertices. For signals with local structure (e.g., a signal that varies in one part of the graph but is constant elsewhere), global eigenvectors are inefficient. We need a local, multiscale basis - a graph wavelet transform.
Hammond, Vandergheynst & Gribonval (2011). For a vertex $a$ and scale $s > 0$, define the graph wavelet centered at $a$ at scale $s$:

$$\psi_{s,a} = U\, g(s\Lambda)\, U^\top \delta_a,$$

where $\delta_a$ is the indicator vector of vertex $a$ and $g(s\lambda)$ is a scaled spectral filter (band-pass around frequency $\sim 1/s$). In the spectral domain:

$$\hat{\psi}_{s,a}(k) = g(s \lambda_k)\, u_k(a).$$
Properties of graph wavelets:
- Spatially localized: if $g$ is well approximated by a polynomial of degree $K$, then $\psi_{s,a}$ is approximately $K$-hop localized, where $K$ depends on the bandwidth and the scale $s$
- Frequency selective: wavelets at scale $s$ are sensitive to frequencies near $\lambda \approx 1/s$
- Frame bounds: for an appropriate $g$ and set of scales $\{s_j\}$, the collection $\{\psi_{s_j, a}\}$ forms a frame (a redundant but stable basis)
Scattering transform on graphs (Gama et al., 2019): Compose multiple wavelet transforms with pointwise nonlinearities to build invariant/equivariant features. Provides theoretical guarantees for GNN expressiveness.
11.4 Infinite Graphs and Spectral Measures
For infinite graphs (e.g., the integer lattice $\mathbb{Z}^d$, infinite trees), the Laplacian is an operator on $\ell^2(V)$ (bounded when the degrees are bounded), and the spectrum is no longer a finite set of eigenvalues but a spectral measure $\mu$.
Example: Integer lattice $\mathbb{Z}^d$. The Laplacian on $\mathbb{Z}^d$ has purely continuous spectrum $[0, 4d]$ (the $d$-dimensional discrete Laplacian spectrum). This connects to the theory of periodic operators in solid-state physics (Bloch's theorem).
Spectral measure. For a vertex $v$, the spectral measure $\mu_v$ is defined by its moments:

$$\int \lambda^k \, d\mu_v(\lambda) = (L^k)_{vv} \qquad \text{for all } k \ge 0.$$

The spectral measure encodes everything about the local geometry of the graph as seen from $v$.
Convergence of finite graphs. If a sequence of finite graphs $G_n$ converges in the Benjamini-Schramm sense to an infinite graph $G$, the empirical spectral distributions of the $G_n$ converge weakly to the expected spectral measure of $G$.
Preview: The spectral theory of infinite-dimensional operators is the subject of Chapter 12: Functional Analysis, where Hilbert spaces, unbounded operators, and spectral measures are developed fully.
12. Applications in Machine Learning
12.1 Semi-Supervised Learning on Graphs
The problem. Given a graph $G$ with $n$ nodes, a small set of labeled nodes $V_L$ with labels $y_i$, and many unlabeled nodes $V_U$, assign labels to all unlabeled nodes.
Graph-based regularization (Zhou et al., 2004; Zhu et al., 2003). Find a label function that:
- Agrees with the given labels on $V_L$
- Is smooth on the graph (nearby nodes have similar labels)
The objective:

$$\min_{f} \; \sum_{i \in V_L} (f_i - y_i)^2 + \mu\, f^\top L f.$$

The closed-form solution involves $(I + \mu L)^{-1}$ - a smoothing operator. In the spectral domain:

$$\hat{f}_k = \frac{\hat{y}_k}{1 + \mu \lambda_k}.$$

High-frequency components ($\lambda_k$ large) are strongly regularized toward zero; low-frequency components are preserved.
Connection to label propagation. The Gaussian Fields and Harmonic Functions algorithm (Zhu et al., 2003) fixes the labeled node values to the true labels and propagates via:

$$f_U = -L_{UU}^{-1} L_{UL}\, f_L$$

(where $L_{UU}$ is the Laplacian restricted to the unlabeled nodes). This harmonic interpolation assigns each unlabeled node the weighted average of its neighbors' labels, with weights determined by the graph structure.
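A direct sketch of the harmonic solution (NumPy; the path-graph example shows the interpolation behavior):

```python
import numpy as np

def harmonic_labels(A, labeled_idx, y_labeled):
    """Zhu et al. (2003): fix f on labeled nodes, solve f_U = -L_UU^{-1} L_UL f_L."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    U_idx = np.setdiff1d(np.arange(n), labeled_idx)
    f = np.zeros(n)
    f[labeled_idx] = y_labeled
    f[U_idx] = np.linalg.solve(L[np.ix_(U_idx, U_idx)],
                               -L[np.ix_(U_idx, labeled_idx)] @ y_labeled)
    return f

# Path 0-1-2-3-4 with endpoints labeled 0 and 1 -> linear interpolation.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(harmonic_labels(A, np.array([0, 4]), np.array([0.0, 1.0])))
# [0.   0.25 0.5  0.75 1.  ]
```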
Modern variant: GCN for semi-supervised learning. The two-layer GCN of Kipf & Welling (2017) was originally proposed for exactly this task: semi-supervised node classification. The Laplacian smoothing is built into the propagation rule, making it a parameterized (learnable) version of label propagation.
12.2 Knowledge Graph Analysis
A knowledge graph (KG) represents world knowledge as a graph: entities (nodes) connected by typed relations (edges). Examples: Freebase, Wikidata, ConceptNet, UMLS (medical).
Spectral properties of KGs:
- KGs are heterogeneous (multiple edge types) and sparse ($|E| \ll n^2$)
- The adjacency spectrum often follows a power law: many small eigenvalues, a few large ones
- The spectral gap $\lambda_2$ measures how well-integrated the KG is: a small gap indicates a KG with near-isolated sub-graphs (different domains not well-connected)
Spectral regularization. KG embedding models (TransE, RotatE, ComplEx) learn entity and relation embeddings. Adding a spectral (Dirichlet-smoothness) regularization term of the form

$$\mathcal{L}_{\mathrm{spec}} = \sum_{r} \mathrm{tr}\big(E^\top L_r E\big)$$

(where $E$ is the entity embedding matrix and $L_r$ the Laplacian of relation $r$'s subgraph) encourages entity embeddings to be smooth with respect to each relation type $r$'s graph - entities connected by relation $r$ should have similar embeddings. This improves link prediction accuracy, especially for rare relations.
12.3 Molecular Property Prediction
Molecules are naturally represented as graphs: atoms are nodes, chemical bonds are edges. Predicting molecular properties (solubility, toxicity, drug-likeness) from molecular graphs is a key application of GNNs.
Spectral molecular fingerprints. The eigenvalue spectrum of the molecular graph Laplacian provides rotation- and permutation-invariant descriptors. The "spectral profile" uniquely characterizes many molecular structures.
Graph edit distance and spectral distance. Two molecules have similar properties if they have similar spectral profiles. The distance:

$$d_{\mathrm{spec}}(G_1, G_2) = \big\| \lambda(G_1) - \lambda(G_2) \big\|_2$$

(where the eigenvalue vectors are sorted and zero-padded to the same length) approximates graph edit distance and correlates with molecular similarity.
Equivariance and invariance. Spectral fingerprints are invariant to atom permutation (graph isomorphism), which is the correct invariance for molecular property prediction. However, they are blind to chirality (mirror image molecules) - a known limitation requiring higher-order structural features.
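A direct sketch of the spectral distance (NumPy; the Laplacian spectra are sorted ascending by `eigvalsh` and zero-padded to equal length):

```python
import numpy as np

def spectral_distance(A1, A2):
    """||lambda(G1) - lambda(G2)||_2 over sorted, zero-padded Laplacian spectra."""
    s1 = np.linalg.eigvalsh(np.diag(A1.sum(axis=1)) - A1)
    s2 = np.linalg.eigvalsh(np.diag(A2.sum(axis=1)) - A2)
    m = max(len(s1), len(s2))
    s1 = np.pad(s1, (0, m - len(s1)))
    s2 = np.pad(s2, (0, m - len(s2)))
    return float(np.linalg.norm(s1 - s2))
```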
12.4 Attention Pattern Analysis in LLMs
An $h$-head attention layer in a Transformer computes $h$ attention weight matrices $A^{(1)}, \dots, A^{(h)}$, each of size $T \times T$ for sequence length $T$. These define weighted directed graphs over the token positions.
Spectral analysis of attention. The eigenvalues of an attention matrix $A$ reveal the attention pattern structure:

- If $A = \frac{1}{T}\mathbf{1}\mathbf{1}^\top$ (uniform attention): $\lambda_1 = 1$, all others $0$
- If $A = I$ (attend only to self): all eigenvalues equal $1$
- Induction heads (Olsson et al., 2022) have approximately low-rank $A$ with a large spectral gap between $\lambda_1$ and $\lambda_2$ - they attend sharply to a few positions
Attention graph Laplacian. Define the symmetrized attention Laplacian $L_{\mathrm{att}} = D - \tfrac{1}{2}(A + A^\top)$, where $D = \mathrm{diag}\big(\tfrac{1}{2}(A + A^\top)\mathbf{1}\big)$. The Fiedler vector of $L_{\mathrm{att}}$ identifies the two groups of tokens most separated by the head's attention.
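A sketch of the symmetrize-then-Fiedler computation for one head (NumPy; `A_att` would be a softmax attention matrix of shape T x T):

```python
import numpy as np

def attention_fiedler(A_att):
    """Spectral gap and Fiedler vector of L = D - (A + A^T)/2 for one head."""
    S = 0.5 * (A_att + A_att.T)          # symmetrize the attention graph
    L = np.diag(S.sum(axis=1)) - S
    lam, U = np.linalg.eigh(L)
    return lam[1], U[:, 1]               # gap lambda_2 and its eigenvector

# The sign pattern of the Fiedler vector splits the tokens into the two
# groups this head's attention most separates.
```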
Applications:
- Attention head pruning: Heads whose attention graphs have very small spectral gap (uniform attention) contribute little and can be pruned (Michel et al., 2019; Voita et al., 2019)
- Mechanistic interpretability: Spectral analysis of multi-head attention composition identifies information routing circuits (Elhage et al., 2021)
- Context window analysis: The Laplacian spectrum of the attention graph evolves as more tokens are added; sudden changes in the spectral gap indicate "phase transitions" in how the model processes context
13. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Confusing $L = D - A$ with $A - D$ | Sign convention: positive semidefiniteness requires $L = D - A$. With $A - D$, all eigenvalues are $\le 0$. | Always check that the diagonal is positive ($L_{ii} = d_i$) to verify the sign convention |
| 2 | Using unnormalized $L$ for spectral clustering on graphs with varying degrees | RatioCut (unnormalized) penalizes unequally-sized partitions; for real data, NCut (normalized) gives much better clusters | Use $L_{sym}$ or $L_{rw}$ for spectral clustering; unnormalized $L$ only for (near-)regular graphs |
| 3 | Taking $u_1$ instead of the Fiedler vector $u_2$ | $u_1$ is the constant vector (trivial null vector); it has no discriminative information | In NumPy: eigenvectors are sorted ascending; take column index 1 (0-indexed), not index 0 |
| 4 | Ignoring the sign ambiguity of eigenvectors | For each eigenvector $u_k$, both $u_k$ and $-u_k$ are valid; different runs give different signs | Use absolute values for visualizations; or use RWPE instead of LapPE to avoid sign issues |
| 5 | Confusing the Cheeger inequality direction: $\lambda_2 / 2 \le h(G)$ vs $h(G) \le \sqrt{2 \lambda_2}$ | These are two sides of the same inequality chain; the useful reading is that large $\lambda_2$ implies large $h(G)$ (good expander), not small | Remember: small $\lambda_2$ <-> bottleneck <-> small $h(G)$ <-> easy to cut. Large $\lambda_2$ <-> expander <-> large $h(G)$ <-> hard to cut |
| 6 | Computing the graph Laplacian for a disconnected graph and expecting $\lambda_2 > 0$ | For a disconnected graph, $\lambda_2 = 0$ always. The null space has dimension equal to the number of components. | Check connectivity before spectral clustering. If the graph is disconnected, handle each component separately or add a small connectivity term |
| 7 | Treating spectral clustering as scale-free (the same regardless of $k$) | The cluster structure at scale $k$ uses eigenvectors $u_1, \dots, u_k$; the $k$-th eigenvector captures increasingly fine-grained structure | Choose $k$ with the eigengap heuristic: $k = \arg\max_k (\lambda_{k+1} - \lambda_k)$ |
| 8 | Applying GCN (a low-pass filter) to heterophilic graphs | GCN smooths features toward neighborhood averages. For heterophilic graphs (connected nodes have different labels), this destroys discriminative information | Use high-pass or band-pass graph filters (e.g., GPRGNN, FAGCN, BernNet) for heterophilic settings |
| 9 | Confusing $L_{sym}$ and $L_{rw}$: using $L_{rw}$ for Ng-Jordan-Weiss | NJW requires $L_{sym}$ (symmetric, with orthogonal eigenvectors) for row normalization to work. $L_{rw}$ is not symmetric, so its eigenvectors are not orthogonal | Use scipy.linalg.eigh(L_sym) for the symmetric eigendecomposition; eigenvectors form orthonormal columns |
| 10 | Over-interpreting spectral methods on cospectral graphs | Two different graphs can have identical Laplacian spectra. Spectral features cannot distinguish them | Augment spectral features with structural features (degree, triangle count, etc.) or use WL-based methods |
| 11 | Forgetting the self-loop renormalization in the GCN derivation | Without it, the first-order filter $I + D^{-1/2} A D^{-1/2}$ has eigenvalues in $[0, 2]$, so stacking layers can cause exploding/vanishing activations | Always include self-loops and renormalize with $\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ in the GCN propagation rule |
| 12 | Using the full GFT ($\hat{x} = U^\top x$) on large graphs | The full eigendecomposition is $O(n^3)$; for $n = 10^6$ this is on the order of $10^{18}$ operations - completely intractable | Use polynomial filters (Chebyshev, sparse matrix-vector products), Lanczos for the top-$k$ eigenvectors, or RWPE |