Random Graphs - Part 9 (Applications in Machine Learning) through Appendix K (Notation Reference)
9. Applications in Machine Learning
9.1 GraphWorld: Benchmark Generation from SBM
Problem: GNN papers often report results on 3-4 standard benchmarks (Cora, Citeseer, OGBN-Arxiv). These benchmarks may not represent the diversity of graph structures encountered in practice.
GraphWorld (Palowitch et al., 2022): A benchmark generation framework that:
- Parameterizes the SBM space (community sizes, within/between-community edge densities, degree distribution, feature signal-to-noise)
- Samples thousands of graph instances across this parameter space
- Evaluates GNN architectures across the full parameter landscape
Key findings:
- No single GNN architecture dominates across all SBM parameters
- GCN is best on homophilic dense graphs; GIN is best near the detection threshold
- The Kesten-Stigum threshold accurately predicts when ALL GNNs fail
- Node feature quality (signal-to-noise ratio in features) often matters more than graph structure
For practitioners: When evaluating a GNN, generate benchmarks from an SBM sweep to characterize the algorithm's operating regime, rather than relying on a few fixed benchmarks.
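The sweep idea can be sketched in a few lines. This is a minimal illustration using a plain 2-block SBM and an edge-homophily statistic - not the actual GraphWorld API, which sweeps a much richer degree-corrected SBM space:

```python
import numpy as np

def sample_sbm(n, p_in, p_out, rng):
    """Sample a 2-block SBM adjacency matrix with equal communities."""
    labels = np.arange(n) % 2
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p_in, p_out)
    upper = rng.random((n, n)) < probs
    A = np.triu(upper, 1)
    A = A | A.T                      # symmetrize; zero diagonal
    return A.astype(int), labels

def homophily(A, labels):
    """Fraction of edges joining same-community endpoints."""
    i, j = np.nonzero(np.triu(A, 1))
    return float(np.mean(labels[i] == labels[j]))

rng = np.random.default_rng(0)
# Sweep a small grid of SBM parameters, as GraphWorld sweeps a larger one.
for p_in, p_out in [(0.10, 0.01), (0.05, 0.05)]:
    A, y = sample_sbm(400, p_in, p_out, rng)
    print(p_in, p_out, round(homophily(A, y), 2))
```

Each grid point would then be fed to every candidate GNN, mapping out the regime where each architecture wins.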
9.2 Graph Generation: GRAN, GDSS, DiGress
Graph generation models learn to sample new graphs from a distribution. Framed probabilistically: learn a model distribution p_model(G) that matches a target distribution p_data(G).
GRAN (Graph Recurrent Attention Networks, 2019): Generates graphs node-by-node, at each step attending to previously generated nodes. The attention mechanism implicitly models preferential attachment - recently added high-degree nodes attract more attention, reproducing scale-free structure.
GDSS (Jo et al., 2022): Score-based diffusion model for graphs. Joint diffusion over node features and edge features. Samples new graphs by reversing a diffusion process that gradually adds noise. The score function implicitly learns the SBM-like block structure of training graphs.
DiGress (Vignac et al., 2022): Discrete diffusion - adds/removes edges following a Markov process. Denoising model is a graph transformer that learns to predict the original edge from the noised version. DiGress can generate molecular graphs and large social networks by learning implicit graphon structure.
Random graph theory connection: Graph generation is essentially graphon estimation. Given samples from p_data (real graphs), estimate the underlying graphon W such that sampling from W approximates p_data. The approximation quality is measured by the cut metric δ□(W_est, W).
9.3 LLM Attention as a Random Graph Process
Attention graph: In a transformer with L layers and H heads, the attention pattern at layer l, head h defines a complete weighted directed graph on the n tokens, where the weight of edge (i, j) is the attention score A^(l,h)_ij.
Sparse attention = random graph: Sparse attention mechanisms (Longformer, BigBird, Reformer) explicitly sparsify the attention graph, keeping only O(n) edges rather than O(n²). The sparsification pattern is often random or pseudo-random.
Random graph analysis of attention:
- Connectivity: Is the sparse attention graph connected? If not, information cannot flow between disconnected components. ER theory predicts connectivity (w.h.p.) iff the expected degree exceeds ln n.
- Small-world: BigBird combines local window attention (ring lattice) with random global tokens (rewiring) and special tokens (hubs). This exactly matches the Watts-Strogatz construction!
- Expander properties: Expander graphs (large spectral gap) are the best sparse graphs for information flow. Ramanujan graphs achieve the optimal spectral gap for d-regular graphs (second eigenvalue at most 2√(d−1)) - this motivates expander-based sparse attention.
Result: The random graph structure of sparse attention patterns determines the theoretical expressiveness of the transformer. An attention graph that is disconnected or has large diameter cannot capture long-range dependencies regardless of the weight values.
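A minimal sketch of the connectivity check, using a hypothetical BigBird-style mask (local window + random edges + global tokens); the parameter names here are illustrative, not from any library:

```python
import numpy as np
from collections import deque

def bigbird_mask(n, window=2, n_random=2, n_global=1, seed=0):
    """Sparse attention mask: local window + random edges + global hub tokens.
    Structurally a Watts-Strogatz-style small-world graph with hubs."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for d in range(1, window + 1):          # local ring window
            A[i, (i + d) % n] = A[(i + d) % n, i] = True
        for j in rng.integers(0, n, n_random):  # random long-range edges
            if j != i:
                A[i, j] = A[j, i] = True
    A[:n_global, :] = True                      # global tokens attend everywhere
    A[:, :n_global] = True
    np.fill_diagonal(A, False)
    return A

def is_connected(A):
    """BFS connectivity check on a boolean adjacency matrix."""
    n = len(A)
    seen, q = {0}, deque([0])
    while q:
        v = q.popleft()
        for u in np.nonzero(A[v])[0]:
            if u not in seen:
                seen.add(int(u))
                q.append(int(u))
    return len(seen) == n

A = bigbird_mask(256)
print(is_connected(A), A.sum() / 256**2)  # connected, at a small fraction of n^2 edges
```

If the mask is disconnected, no stack of attention layers can move information between the components - weights are irrelevant.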
9.4 Lottery Ticket Hypothesis and Sparse Subgraphs
Lottery Ticket Hypothesis (Frankle & Carbin, 2019): A randomly initialized dense network contains sparse subnetworks ("winning tickets") that, when trained in isolation from the same initialization, achieve comparable accuracy to the dense network.
Random graph framing: Think of the neural network as a random graph where:
- Nodes = neurons
- Edges = nonzero weights (each edge carries its weight value)
- Sparsification = edge removal
The winning ticket is a sparse subgraph that retains the connectivity and flow properties of the dense graph. Random graph percolation theory describes when sparse subgraphs retain giant component connectivity.
Percolation interpretation: If we retain each edge independently with probability p (a rough approximation of magnitude-based pruning), the network retains its giant component iff p > p_c, where p_c is the percolation threshold. For ER-like neural network graphs, p_c ≈ 1/⟨k⟩, where ⟨k⟩ is the mean degree of the original dense graph.
Practical implication: Networks can be pruned to 90%+ sparsity (i.e., p ≤ 0.1) while retaining performance, consistent with the existence of a giant percolating subgraph at such densities for typical neural network width/depth ratios.
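The percolation picture can be simulated directly: build an ER-like "weight graph", retain each edge with probability p, and watch the giant component vanish below p_c ≈ 1/⟨k⟩. A small sketch with illustrative parameters:

```python
import numpy as np
from collections import deque

def er_edges(n, k_mean, rng):
    """Edge list of G(n, k_mean/n)."""
    p = k_mean / n
    iu = np.triu_indices(n, 1)
    keep = rng.random(len(iu[0])) < p
    return list(zip(iu[0][keep], iu[1][keep]))

def giant_fraction(n, edges):
    """Largest connected component size / n, via BFS over an adjacency list."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = [False] * n, 0
    for s in range(n):
        if seen[s]:
            continue
        q, size = deque([s]), 0
        seen[s] = True
        while q:
            v = q.popleft()
            size += 1
            for u in adj[v]:
                if not seen[u]:
                    seen[u] = True
                    q.append(u)
        best = max(best, size)
    return best / n

rng = np.random.default_rng(0)
n, k = 2000, 8                   # mean degree 8 -> percolation threshold ~ 1/8
edges = er_edges(n, k, rng)
for p in (0.05, 0.125, 0.5):     # below, at, above p_c = 1/k
    kept = [e for e in edges if rng.random() < p]
    print(p, round(giant_fraction(n, kept), 2))
```

Below p_c the retained subgraph shatters into fragments; above it, a giant "signal-carrying" component persists.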
9.5 Social Network Analysis at Scale
Community detection at scale: For billion-node graphs (Facebook, Twitter), exact SBM community detection is computationally infeasible. In practice:
- Louvain algorithm: greedy modularity maximization, roughly O(n log n) in practice
- Label propagation: message-passing on the graph, typically converges in O(log n) steps
- GraphSAGE + semi-supervised: Use a few labeled nodes (community labels) to train GNN
Random graph models as null models: When analyzing a real social network, we ask: "Is the observed community structure more than what we'd expect by chance?" We compare to a null model with the same degree sequence (the configuration model) and test whether the observed modularity exceeds the null expectation.
Link prediction: Predicting missing edges in a social graph using random graph models. Under ER: P(edge ij) = p (the same for all pairs). Under SBM: P(edge ij) = B_{σ(i)σ(j)} - within-community edges are more likely. GNNs for link prediction learn to approximate this block-structured probability.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Confusing G(n, p) and G(n, m) | G(n, p) has a random edge count; G(n, m) has exactly m edges - they agree asymptotically but differ in finite samples | Use G(n, p) when you want independence; G(n, m) when you want exact count control |
| 2 | Assuming ER is a good null model for all real graphs | ER lacks clustering, hubs, and communities - major features of real networks | Use configuration model (fixed degree sequence) or SBM as null model |
| 3 | Saying Barabasi-Albert generates power laws with any exponent | BA always gives γ = 3; only extensions (fitness, rewiring) change γ | Use generalized preferential attachment for γ ≠ 3 |
| 4 | Confusing clustering coefficient with transitivity | Local CC averages over vertices; global transitivity = 3 × triangles / paths of length 2. They differ when the degree distribution is heterogeneous | Use transitivity for the global property; local CC for the per-node property |
| 5 | Conflating the Kesten-Stigum threshold with the information-theoretic limit | For the 2-block SBM the two coincide, but for many blocks (k ≥ 5) the information-theoretic threshold lies strictly below KS; KS marks where efficient algorithms (BP, spectral) succeed | Distinguish between information-theoretic and computational thresholds |
| 6 | Applying graphon theory to sparse graphs | Graphons describe the limit of DENSE sequences (edge density bounded away from 0). Sparse graphs (p → 0) have trivial graphon limits (the zero function) | Use local weak limits (Benjamini-Schramm) or graphex theory for sparse sequences |
| 7 | Equating "scale-free" with "power law" | Scale-free means the DEGREE distribution is a power law. Other heavy-tailed distributions (log-normal, stretched exponential) can look similar on a log-log plot. Broido & Clauset (2019) show many claimed scale-free networks don't pass rigorous power-law tests | Use maximum likelihood to fit and compare competing distributions (power law, log-normal, exponential) |
| 8 | Ignoring the giant component when computing path lengths | Average path length on a disconnected graph is undefined (infinite) or meaningless without restricting to the giant component | Always compute path lengths within the largest connected component |
| 9 | Assuming WS small-world is scale-free | WS generates Poisson-like degree distributions (each node rewires independently). It has small-world property but NOT scale-free | Combine WS with preferential attachment for small-world + scale-free |
| 10 | Using the adjacency matrix spectrum directly for SBM clustering when the graph is sparse | For sparse SBM, the adjacency eigenvalues don't separate cleanly (semicircle radius comparable to signal). Use a regularized Laplacian or the Bethe-Hessian instead | Replace A with D_τ^{-1/2} A D_τ^{-1/2} (D_τ = D + τI) or the Bethe-Hessian |
| 11 | Treating the WS model as a generative process for new nodes | WS is defined on a fixed set of nodes with rewiring - it does not define how to add new nodes. It's not a growing network model | Use BA for growing network models; use WS for fixed-size networks |
| 12 | Forgetting that graphon equivalence classes ignore node labels | Two graphons W and W' are equivalent if one is a measure-preserving relabeling of the other. Most graph statistics depend only on the equivalence class | Work with homomorphism densities (relabeling-invariant) when comparing graphons |
11. Exercises
Exercise 1 * - Phase Transition Simulation
Simulate the Erdos-Renyi phase transition computationally. For n nodes (e.g., n = 10^4) and G(n, c/n) with c spanning the critical value c = 1:
(a) Generate G(n, c/n) for 20 values of c linearly spaced in (e.g.) [0.5, 2].
(b) For each graph, compute the size of the largest connected component C_1 and the second largest C_2.
(c) Plot C_1/n and C_2/n as functions of c. Identify the phase transition point visually.
(d) Overlay the theoretical prediction S(c) satisfying S = 1 − e^{−cS}. Compute S(c) numerically (Newton's method or bisection) for each c.
(e) Compute and plot C_2(c). What happens to the second-largest component at criticality? Explain from branching process theory.
Exercise 2 * - Degree Distribution Analysis
(a) Generate G(n, p) with (e.g.) n = 10^4 and np = 10. Compute the empirical degree distribution and overlay the theoretical Poisson(np) PMF.
(b) Generate a BA graph with the same n and m = 5 edges per new node. Compute the empirical degree distribution on a log-log scale and fit a power law using linear regression on the tail (k ≥ k_min for a suitable cutoff k_min). Report the estimated exponent γ̂.
(c) Compare the two distributions using a Q-Q plot. What are the key structural differences?
(d) Compute the maximum degree in each model. Derive theoretically why k_max = Θ(ln n / ln ln n) for ER and k_max = Θ(√n) for BA.
Exercise 3 * - Small-World Analysis
Implement the Watts-Strogatz model from scratch.
(a) Build the ring lattice: n nodes (e.g., n = 1000), each connected to its k nearest neighbors (e.g., k = 10).
(b) For β log-spaced in [10^{-4}, 1], rewire each edge independently with probability β. Compute the clustering coefficient C(β) and the average path length L(β) for each value.
(c) Plot the normalized clustering coefficient C(β)/C(0) and normalized average path length L(β)/L(0) on the same plot (log scale for β). Identify the small-world regime.
(d) Verify that C(β) ≈ C(0)(1 − β)^3 for small β. What does this formula say about how clustering degrades with rewiring?
Exercise 4 ** - Stochastic Block Model and Community Detection
(a) Sample an SBM with n = 2000, two equal communities, p_in = a/n, p_out = b/n. Use (a, b) = (10, 2), which is above the Kesten-Stigum threshold: (a − b)² = 64 > 2(a + b) = 24.
(b) Apply spectral clustering: compute the second eigenvector of the adjacency matrix, threshold at 0 (positive = community 1, negative = community 2), and compute accuracy (fraction of correctly classified nodes, up to community permutation).
(c) Repeat with (a, b) = (8, 3) (near threshold) and (a, b) = (6, 4) (below threshold). How does accuracy vary?
(d) The Kesten-Stigum threshold for the 2-block SBM is (a − b)² = 2(a + b). Verify your experimental results are consistent with this threshold.
(e) *** Implement the belief propagation algorithm (AMP / approximate message passing) for the 2-block SBM and compare accuracy to spectral clustering near the threshold.
Exercise 5 ** - Wigner's Semicircle Law
(a) Generate an n × n Wigner matrix W/√n, where W is symmetric with i.i.d. N(0, 1) entries above the diagonal, for n ∈ {100, 500, 2000}.
(b) Plot the empirical spectral distribution (histogram of eigenvalues) for each n. Overlay the theoretical semicircle density ρ(x) = (1/2π)√(4 − x²) on [−2, 2].
(c) Now take A, the adjacency matrix of G(n, p) with (e.g.) n = 2000, p = 0.01, and plot the spectrum of the centered, rescaled matrix (A − E[A])/√(np(1 − p)). Then plot the spectrum of A itself and identify the outlier eigenvalue ≈ np corresponding to the average degree (centering removes it).
(d) For the SBM with 2 communities (e.g., n = 2000, p_in = 0.05, p_out = 0.01): plot the eigenvalue spectrum of A. Identify which eigenvalues encode community structure and which are bulk noise.
Exercise 6 ** - Graphon Estimation
(a) Generate a sequence of SBM graphs with n ∈ {100, 500, 2000}, k = 2 communities, and block matrix B = [[0.6, 0.1], [0.1, 0.6]] (for example).
(b) For each graph, sort vertices by community label (oracle information) and display the sorted adjacency matrix as a heatmap. Does it converge to the block graphon W as n grows?
(c) Without oracle labels: apply spectral clustering to estimate community labels, then display the sorted (estimated) adjacency matrix. Measure the cut distance between the estimated and true graphon.
(d) Implement the "histogram graphon estimator": divide the sorted vertices into b equal bins and estimate W on each pair of bins by averaging the adjacency entries within it. Compute the L² error ||Ŵ − W||₂.
Exercise 7 *** - Giant Component Critical Window
This exercise studies the fine-grained behavior near the critical point p = 1/n.
(a) For G(n, p) with p = (1 + ε n^{-1/3})/n, n ∈ {10^3, 10^4, 10^5}, and ε ∈ [−3, 3], compute |C_1| over 50 trials each. Plot the distribution of |C_1|/n^{2/3} vs ε.
(b) Observe that the distribution has a universal shape (independent of n for large n). This is the Brownian excursion limit - in the critical window, component sizes scale as n^{2/3}, and the window has width Θ(n^{-4/3}) in p (equivalently Θ(n^{-1/3}) in c = np).
(c) Compute the mean and standard deviation of |C_1|/n^{2/3} as functions of ε. Show that the transition through ε = 0 is smooth (not a jump) at finite n.
(d) Compare to the predicted infinite-n limit: for fixed ε > 0 (i.e., p = (1 + ε)/n), |C_1| ~ S(1 + ε) n, with S(1 + ε) ≈ 2ε for small ε. Verify this scaling.
Exercise 8 *** - Preferential Attachment Dynamics
(a) Implement the Barabasi-Albert preferential attachment model for n = 10^5 nodes with m = 3 edges per new node (for example). Use the efficient alias method or a linear scan for sampling.
(b) After generation, fit the degree distribution tail to a power law using maximum likelihood estimation on k ≥ k_min for a suitable cutoff k_min.
(c) Track the degree of each node over time as the network grows. Plot k_i(t) for nodes added at times t_i ∈ {10, 10², 10³, 10⁴}. Verify the mean-field prediction k_i(t) = m (t/t_i)^{1/2}.
(d) Implement "fitness-based" preferential attachment: Π(i) ∝ η_i k_i, where η_i is a fixed fitness drawn once per node. Compare the resulting degree distribution to standard BA. Does the power-law exponent change?
(e) Simulate a targeted attack: iteratively remove the highest-degree node. Plot the size of the giant component as a function of fraction of nodes removed. Compare to random removal. What fraction of nodes must be targeted to destroy the giant component?
12. Why This Matters for AI (2026 Perspective)
| Concept | Impact on AI/ML |
|---|---|
| ER Phase Transition | Connectivity threshold determines information flow in GNNs on sparse graphs; GCN cannot aggregate across disconnected components, giving a hard limit on performance |
| Poisson Degree Distribution | Most GNN benchmark graphs (Cora, Citeseer) have near-Poisson degrees; GCN's symmetric normalization is optimal for Poisson-degree graphs but suboptimal for scale-free |
| Kesten-Stigum Threshold | Hard information-theoretic limit on community detection; no GNN can exceed this on SBM data regardless of architecture, depth, or width |
| Scale-Free Networks | Real knowledge graphs (Wikidata, Freebase) are scale-free; hub nodes with degree ~ n^{1/2} dominate GCN aggregation - require degree-aware normalization or degree-bucketing |
| Small-World Structure | Transformer attention graphs have small-world properties; BigBird/Longformer mimic WS construction (local window + random long-range); designing optimal sparse attention = finding optimal WS parameters |
| Graphons | Theoretical foundation for GNN universality; GNNs that are continuous in cut metric topology can generalize across graph sizes; graphon theory predicts which graph properties a GNN can and cannot learn |
| Graphon Neural Networks | First provably universal GNN framework; used to prove that message-passing GNNs cannot distinguish non-isomorphic graphs with identical WL certificates |
| Spectral Gap | Controls the convergence rate of GNN over-smoothing (Dirichlet energy decays geometrically at a rate set by the spectral gap); higher Fiedler value -> faster smoothing -> shallower optimal depth |
| Davis-Kahan | Explains why spectral GNNs (GCN) succeed above community detection threshold and fail below; same theorem underlies learning theory for graph classification |
| Wigner Semicircle | Bulk eigenvalue distribution of noise in graph learning; signal from community structure must exceed semicircle radius to be learnable - same condition as Kesten-Stigum |
| Configuration Model | Null model for testing GNN hypotheses: does the GNN learn graph structure, or just degree sequence? Test by evaluating on configuration model graphs with same degree sequence |
| Random Graph Generation | DiGress, GDSS, GRAN all learn distributions over graphs - effectively learning graphons. Quality measured by cut distance between learned and true graphon |
13. Conceptual Bridge
Backward: What This Builds On
Random graph theory synthesizes several branches of mathematics developed in earlier sections:
From Spectral Graph Theory (04): The Laplacian eigenvalues studied there take on probabilistic meaning here - for random graphs, they become random variables following the Wigner semicircle law. The spectral gap that controls mixing time in deterministic graphs now becomes a random quantity whose distribution determines community detectability.
From Probability Theory (07-Probability-Statistics): All threshold results (giant component, connectivity, community detection) use first and second moment methods, Chernoff bounds, and the Poisson limit theorem. Branching process theory is the probabilistic tool that gives the exact threshold equation S = 1 − e^{−cS}.
From Graph Neural Networks (05): The SBM community detection problem IS the node classification problem that GNNs solve in practice. The Kesten-Stigum threshold gives the hard limit on what any GNN can achieve, while Davis-Kahan shows why spectral GNNs succeed above this threshold.
Forward: What This Enables
Functional Analysis (12): The graphon operator T_W is a Hilbert-Schmidt operator on L²[0, 1]. Its spectral theory - the Hilbert-Schmidt theorem giving a countable orthonormal eigenfunction expansion - is the infinite-dimensional generalization of the adjacency matrix eigendecomposition. This is the full treatment of graphon operators.
Graph Algorithms (07): Random graph models motivate efficient algorithms: spectral clustering (from SBM theory), Louvain community detection (from modularity theory), and link prediction (from random graph null models). The average-case complexity of graph problems is analyzed using random graph models.
Information Theory (09): The Kesten-Stigum threshold is an information-theoretic limit - it follows from a channel capacity argument. The mutual information between the community labels and the observed graph is zero below the threshold, making recovery impossible regardless of computation. This connects random graphs to channel coding theory.
POSITION IN CURRICULUM
========================================================================
04 Spectral Graph Theory
| Laplacians, eigenvalues, Cheeger inequality
|
+--> 05 Graph Neural Networks
| GCN, GAT, GIN, MPNN, expressiveness
|
+--> 06 Random Graphs <=== YOU ARE HERE
|
+- Erdos-Renyi: phase transitions, thresholds
+- Watts-Strogatz: small-world, navigation
+- Barabasi-Albert: scale-free, preferential attachment
+- SBM: communities, Kesten-Stigum threshold
+- Spectral theory: semicircle law, Davis-Kahan
+- Graphons: infinite limits, universality
|
v
07 Graph Algorithms
|
v
12 Functional Analysis <-- graphon operators T_W
Hilbert-Schmidt theory, L^2 spectral decomposition
========================================================================
The central insight of this section - that random graph models are not merely toy examples but rigorous frameworks for understanding real network behavior - carries forward into every domain where graphs appear. In machine learning, understanding when community structure is detectable, when information can flow across a sparse graph, and when a graph distribution can be learned from samples are fundamental questions with precise mathematical answers, and those answers come from random graph theory.
Appendix A: Branching Process Theory
A.1 Galton-Watson Branching Processes
A Galton-Watson process is a model of population growth where each individual independently produces offspring according to a fixed offspring distribution (p_k)_{k≥0}.
Formal definition: Let Z_0 = 1 (a single ancestor). At generation t, if Z_t > 0, each of the Z_t individuals produces offspring independently according to (p_k):
Z_{t+1} = X_1 + X_2 + ... + X_{Z_t}, where the X_i are i.i.d. with distribution (p_k).
Mean offspring: μ = Σ_k k p_k.
Probability generating function (PGF): φ(s) = E[s^X] = Σ_k p_k s^k.
Extinction probability: The probability q of ultimate extinction is the smallest root of φ(q) = q in [0, 1].
Theorem (Extinction):
- If μ ≤ 1: q = 1 (certain extinction, excluding the degenerate case p_1 = 1)
- If μ > 1: q < 1 (positive survival probability 1 − q)
Connection to ER: For G(n, c/n), exploring the component of a vertex proceeds like a branching process where each node has Poisson(c) offspring (unexplored neighbors). The giant component fraction S = 1 − q satisfies:
S = 1 − e^{−cS},
exactly the fixed-point equation q = φ(q) for the Poisson PGF φ(s) = e^{c(s−1)}.
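The equation S = 1 − e^{−cS} has no closed form, but fixed-point iteration started from S = 1 converges to the survival probability (a small sketch):

```python
import math

def survival_fraction(c, tol=1e-12, max_iter=10_000):
    """Solve S = 1 - exp(-c S) by fixed-point iteration.
    Starting from S = 1 converges to the nonzero root when c > 1,
    and to (near) 0 when c <= 1."""
    S = 1.0
    for _ in range(max_iter):
        S_new = 1.0 - math.exp(-c * S)
        if abs(S_new - S) < tol:
            break
        S = S_new
    return S

for c in (0.5, 1.0, 2.0, 4.0):
    print(c, round(survival_fraction(c), 4))
```

Note that exactly at c = 1 the iteration converges only polynomially slowly (S_t ~ 2/t), mirroring the critical slowdown of the branching process itself.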
A.2 Multi-Type Branching Processes
For the SBM with k communities, the local neighborhood exploration is a multi-type branching process where the type of an individual is its community label.
Offspring matrix: M_{ij} = expected number of type-j offspring from a type-i parent.
For the symmetric 2-block SBM with p_in = a/n, p_out = b/n and equal community sizes n/2:
M = [[a/2, b/2], [b/2, a/2]]
Survival theorem: The multi-type branching process survives iff ρ(M) > 1, where ρ denotes the spectral radius.
Eigenvalues of M: λ₁ = (a + b)/2, λ₂ = (a − b)/2.
- Giant component exists iff λ₁ > 1, i.e., (a + b)/2 > 1 (average degree condition).
- Community detection possible iff the "community eigenvalue" λ₂ exceeds 1? ... but this is not quite right. The actual condition involves the non-backtracking matrix.
Non-backtracking operator: The correct spectral condition for SBM community detection uses the non-backtracking (Hashimoto) operator B on directed edges. The eigenvalue of B corresponding to community structure is λ₂ = (a − b)/2, and the bulk spectral radius is √λ₁ = √((a + b)/2). Community detection is possible iff:
λ₂ > √λ₁,
i.e., (a − b)² > 2(a + b) - precisely the Kesten-Stigum threshold.
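The threshold condition is easy to check numerically; the sketch below verifies that (a − b)² > 2(a + b) is the same condition as λ₂² > λ₁ for the offspring matrix:

```python
def ks_detectable(a, b):
    """Kesten-Stigum condition for the symmetric 2-block SBM
    with p_in = a/n, p_out = b/n: (a - b)^2 > 2 (a + b)."""
    return (a - b) ** 2 > 2 * (a + b)

def branching_eigenvalues(a, b):
    """Eigenvalues of the offspring matrix M = [[a/2, b/2], [b/2, a/2]]."""
    return (a + b) / 2, (a - b) / 2

lam1, lam2 = branching_eigenvalues(10, 2)
print(ks_detectable(10, 2), lam2**2 > lam1)  # prints: True True (equivalent forms)
```

The equivalence follows by squaring λ₂ > √λ₁ and multiplying through by 4.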
A.3 Critical Branching Processes
At criticality (Poisson offspring with μ = 1), the branching process behaves differently:
Yaglom's theorem: Conditioned on survival to generation t, the rescaled population Z_t/t converges in distribution to an exponential random variable with mean σ²/2 (σ² is the offspring variance; for Poisson(1) offspring, the mean is 1/2):
P(Z_t/t > x | Z_t > 0) → e^{−2x/σ²}.
For the ER critical window: At p = 1/n, the largest component has size Θ(n^{2/3}), and there are Θ(1) components of this order. The component size distribution follows Yaglom's theorem for the critical Poisson branching process, with sizes living on the n^{2/3} scale.
Appendix B: Configuration Model
B.1 Definition
The configuration model generates a random graph with a prescribed degree sequence d_1, ..., d_n.
Construction:
- Give vertex i exactly d_i "half-edges" (stubs)
- Pair up all half-edges uniformly at random
- Each pairing creates an edge
Result: A random multigraph (may have self-loops and multi-edges) with the given degree sequence.
Properties:
- For degree sequences with bounded maximum degree: the expected number of self-loops and multi-edges is O(1), and the graph is simple with probability bounded away from zero
- Conditioned on simplicity: uniform over all simple graphs with the given degree sequence
Why use it? The configuration model is the correct null model for testing graph hypotheses. Instead of comparing to ER (wrong degree distribution), compare to configuration model (same degree distribution, no other structure). If a property (e.g., high clustering) exceeds what configuration model predicts, it's genuinely structural, not just a degree artifact.
B.2 Giant Component in Configuration Model
For the configuration model with degree distribution :
Molloy-Reed criterion: A giant component exists iff:
Σ_k k(k − 2) p_k > 0, equivalently ⟨k²⟩ > 2⟨k⟩.
Interpretation: The excess degree distribution q_k = (k + 1) p_{k+1} / ⟨k⟩ governs the branching factor of the exploration process. The process is supercritical (giant component exists) iff the mean of q, which is ⟨k²⟩/⟨k⟩ − 1, exceeds 1 - i.e., ⟨k²⟩ > 2⟨k⟩.
For scale-free networks: With p_k ~ k^{−γ}, ⟨k²⟩ diverges when γ ≤ 3. Hence the Molloy-Reed criterion is always satisfied for BA-style scale-free networks - a giant component exists for arbitrarily sparse scale-free graphs.
For ER: p_k = Poisson(c), so ⟨k⟩ = c and ⟨k²⟩ = c² + c. Molloy-Reed: c² + c > 2c, i.e., c > 1 - exactly the ER threshold.
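The criterion is a one-liner given a degree sequence; this sketch checks it on sampled Poisson degree sequences, recovering the ER threshold c = 1:

```python
import numpy as np

def molloy_reed(degrees):
    """Giant component exists in the configuration model iff <k^2> > 2<k>."""
    k = np.asarray(degrees, dtype=float)
    return (k**2).mean() > 2 * k.mean()

rng = np.random.default_rng(0)
poisson_low  = rng.poisson(0.5, 100_000)   # subcritical ER-like: c < 1
poisson_high = rng.poisson(3.0, 100_000)   # supercritical: c > 1
print(molloy_reed(poisson_low), molloy_reed(poisson_high))  # prints: False True
```

A perfect matching (all degrees 1) fails the criterion, while any 3-regular sequence passes - matching the intuition that degree-1 stubs cannot propagate an exploration.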
B.3 Clustering in Configuration Model
For the configuration model:
C ≈ (⟨k²⟩ − ⟨k⟩)² / (n ⟨k⟩³) → 0
as n → ∞ (for a fixed degree distribution with finite second moment). The configuration model has vanishing clustering coefficient - it's locally tree-like.
Real networks vs. configuration model: Comparing observed clustering C_obs to the configuration-model value C_config tests for genuine clustering beyond degree effects. Small-world networks have C_obs ≫ C_config - clustering not explained by the degree distribution alone.
Appendix C: Percolation Theory
C.1 Bond Percolation on Graphs
Bond percolation: Given a graph G and probability p, independently retain each edge with probability p. Let G_p denote the resulting random subgraph.
Site percolation: Independently retain each vertex (with its incident edges) with probability p.
Critical probability: The percolation threshold p_c is the infimum of p for which G_p has an infinite component (on infinite graphs) or a giant component of size Θ(n) (on finite graphs).
On the integer lattice Z^d: Exact thresholds:
- d = 1: p_c = 1 (must keep all edges)
- d = 2 (square lattice): p_c = 1/2 exactly (by self-duality)
- d = 3: p_c ≈ 0.2488; no exact closed form known
On ER graphs: Bond percolation on G(n, c/n) with retention probability p gives G(n, pc/n). The percolation threshold is p_c = 1/c, so the giant component survives iff pc > 1, i.e., p > 1/c.
C.2 Connection to Neural Network Pruning
Neural network weight pruning is isomorphic to bond percolation:
- Dense network graph (neurons = nodes, weights = edges)
- Pruning mask retains each weight independently with probability p (retention probability)
- Pruned network must retain computational connectivity
For the network to maintain its computational capacity, it must retain a giant component. The percolation threshold p_c gives the minimum retention rate.
Structured pruning: Head pruning in transformers (removing entire attention heads) is site percolation on the attention-head graph. Magnitude pruning selects edges by weight magnitude - approximately bond percolation with a fraction p of weights retained.
Lottery ticket connection: A winning lottery ticket is precisely a giant percolating subgraph that retains the "signal paths" of the original network. The existence of such subgraphs at high sparsity (p ≤ 0.1) is guaranteed by percolation theory for sufficiently wide networks.
C.3 Expanders and Optimal Percolation
Expander graph: A d-regular graph on n nodes whose spectral gap d − λ₂ is bounded below by a constant. A large spectral gap means fast mixing and high robustness.
Percolation on expanders: For a d-regular expander, the percolation threshold is p_c ≈ 1/(d − 1). After percolation at p = (1 + ε)/(d − 1), the giant component has size Θ(εn) for small ε > 0.
Implication for sparse attention: Expander-based sparse attention (using Ramanujan graphs, with λ₂ ≤ 2√(d − 1)) achieves:
- O(n) edges (efficient)
- O(log n) diameter (short paths)
- Maximum spectral gap (optimal information flow)
- Robustness to random edge removal (good percolation threshold)
This is why expanders are theoretically optimal sparse attention patterns, even if not used in practice due to implementation complexity.
Appendix D: Threshold Functions - Complete Table
| Property | Threshold p*(n) | Window width | Notes |
|---|---|---|---|
| Contains an edge | n^{-2} | Θ(p*) (coarse) | First property to appear |
| Contains a triangle | n^{-1} | Θ(p*) (coarse) | K_3 threshold |
| Contains K_4 | n^{-2/3} | Θ(p*) (coarse) | |
| Contains a fixed subgraph H | n^{-1/m(H)}, m(H) = max subgraph density | Θ(p*) (coarse) | |
| Giant component | 1/n | Θ(n^{-4/3}) | Phase transition |
| Connectivity | ln n / n | Θ(1/n) (sharp!) | Very sharp threshold |
| Diameter ≤ 2 | √(2 ln n / n) | sharp | |
| Contains Hamiltonian cycle | (ln n + ln ln n)/n | sharp | Essentially the same as connectivity! |
| Planarity loss | 1/n | Θ(n^{-4/3}) | Coincides with the giant component threshold |
| Chromatic number ≥ k | Problem-dependent | Varies | Exact constants open |
Sharp vs. coarse thresholds:
A threshold is sharp if the transition from probability 0 to probability 1 occurs in a window of width o(p*(n)). Connectivity and Hamiltonian cycles have sharp thresholds (window width o(ln n / n)).
A threshold is coarse if the window is Θ(p*(n)). Most subgraph appearance thresholds are coarse - the probability transitions from near 0 to near 1 over a constant multiplicative change in p.
Friedgut's theorem (1999): Every monotone graph property either has a sharp threshold or is well-approximated by a "local" property (one determined by the presence of small subgraphs). Most natural global properties (connectivity, k-colorability) have sharp thresholds. This is the technical underpinning of the Bollobas-Thomason heuristic.
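The sharpness of the connectivity threshold shows up clearly in simulation: the probability of connectivity jumps from near 0 to near 1 as p crosses ln n / n. An illustrative sketch:

```python
import numpy as np
from collections import deque

def is_connected_er(n, p, rng):
    """Sample G(n, p) and test connectivity by BFS over an adjacency list."""
    iu, ju = np.triu_indices(n, 1)
    keep = rng.random(len(iu)) < p
    adj = [[] for _ in range(n)]
    for u, v in zip(iu[keep], ju[keep]):
        adj[u].append(v)
        adj[v].append(u)
    seen, q = {0}, deque([0])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                q.append(u)
    return len(seen) == n

rng = np.random.default_rng(0)
n = 500
pc = np.log(n) / n                       # connectivity threshold
for scale in (0.6, 1.0, 1.6):            # below, at, above the threshold
    hits = sum(is_connected_er(n, scale * pc, rng) for _ in range(30))
    print(scale, hits / 30)
```

Below the threshold, isolated vertices almost surely exist (they are the last obstruction to connectivity); above it they vanish and the graph connects.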
Appendix E: Random Geometric Graphs
E.1 Definition
A random geometric graph RGG(n, r) is constructed by:
- Placing n nodes uniformly at random in [0, 1]^d (or another metric space)
- Connecting two nodes iff their Euclidean distance is at most r
Properties:
- Connectivity threshold: r_c ~ (ln n / n)^{1/d}, i.e., expected degree ~ ln n - the same scaling as ER
- High clustering: Nodes close to each other share many common neighbors (geometric constraint) - C stays bounded away from 0 as n → ∞ with fixed expected degree
- Light-tailed degree distribution: degrees concentrate around their mean (Poisson-like; no hubs)
- No long-range edges: Maximum edge length r, giving a large diameter Θ(1/r) for small r
For AI: Random geometric graphs model sensor networks, robotic swarms, and spatial point processes. They also model attention in vision transformers where tokens correspond to image patches at geometric positions - nearby patches should have high attention, distant patches low.
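A minimal sketch: sample an RGG on the unit square and compute its global transitivity, which stays bounded away from zero (unlike ER, where it vanishes as n grows):

```python
import numpy as np

def rgg_adjacency(n, r, rng):
    """Random geometric graph on [0,1]^2: connect points within distance r."""
    pts = rng.random((n, 2))
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    A = d2 <= r * r
    np.fill_diagonal(A, False)
    return A

def transitivity(A):
    """Global clustering: 3 * triangles / paths of length 2."""
    Af = A.astype(float)
    tri = np.trace(Af @ Af @ Af) / 6.0       # closed length-3 walks / 6
    deg = Af.sum(1)
    wedges = (deg * (deg - 1)).sum() / 2.0   # unordered paths of length 2
    return 3.0 * tri / wedges if wedges else 0.0

rng = np.random.default_rng(0)
A = rgg_adjacency(500, 0.08, rng)
print(round(transitivity(A), 2))   # geometric constraint -> high clustering
```

The dense pairwise-distance matrix is fine for a sketch; real spatial graph libraries use k-d trees to avoid the O(n²) distance computation.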
E.2 Comparison to Other Models
| Property | ER | WS | BA | Geometric |
|---|---|---|---|---|
| Degree dist. | Poisson | Poisson-like | Power law | Bounded |
| Clustering | Low | High | Low | Very high |
| Path length | O(log n) | O(log n) | O(log n / log log n) | Θ(n^{1/d}) (large) |
| Spatial | No | No | No | Yes |
| Community struct. | None | None | None | Implicit (spatial) |
E.3 Connection to Kernel Methods
The random geometric graph adjacency matrix is a kernel matrix:
A_ij = K(x_i, x_j), with kernel K(x, y) = 1{||x − y|| ≤ r} (indicator kernel).
More generally, kernel random graphs use P(edge ij) = K(x_i, x_j) for a kernel function K. This is a graphon with W(u, v) = K(x_u, x_v) for latent node positions x_u, x_v.
For attention: The softmax attention matrix is a random kernel matrix where the "positions" are query/key vectors. Random graph theory for kernel random graphs directly applies to study information flow in attention layers.
Appendix F: Network Motifs and Subgraph Statistics
F.1 Network Motifs
Definition: A network motif is a subgraph pattern that occurs significantly more often in a real network than in random graphs with the same degree sequence (configuration model null).
Common motifs:
- Feedforward loop (3-node DAG): overrepresented in gene regulatory networks
- Bifan (two source nodes each driving the same two targets): common in neural circuits
- Cliques (K_4 and larger): overrepresented in social networks
Detection: For each candidate subgraph g, compare its frequency in the real graph to its expected frequency under the null model. Z-score well above 0 (conventionally > 2): motif. Z-score well below 0 (< −2): anti-motif.
For GNNs: Motif counting is a proxy for what GNNs learn. Standard MPNNs (GCN, GraphSAGE) are bounded by 1-WL expressiveness and cannot even count triangles, let alone 4-cycles. Higher-order GNNs (subgraph GNNs, OSAN) can count richer motif sets. The motif profile of a graph determines which GNN architecture is most suitable.
F.2 Triangle Counting at Scale
Exact triangle count: For dense graphs, T = tr(A³)/6 via matrix multiplication in O(n^ω) time (ω ≈ 2.37, the matrix multiplication exponent). For sparse graphs with m edges, combinatorial algorithms run in O(m^{3/2}).
Approximate counting: For massive graphs (billions of edges), use streaming algorithms or random sampling:
- Wedge sampling: Count triangles by sampling paths of length 2 and checking closure
- DOULION: Sample edges independently with probability p; count triangles in the subgraph; scale the count by 1/p³
Expected triangle count in G(n, p): E[T] = C(n, 3) p³ ≈ n³p³/6.
Concentration: For np → ∞, the triangle count T concentrates around its mean: by Azuma-Hoeffding applied to the edge-exposure martingale (using a Lipschitz condition), the fluctuations are of smaller order than E[T].
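DOULION is simple enough to sketch in full. Here triangle counts use the tr(A³)/6 identity with dense matrices for clarity - real implementations stream over edges instead:

```python
import numpy as np

def exact_triangles(A):
    """Triangle count of a boolean adjacency matrix via trace(A^3)/6."""
    Af = A.astype(float)
    return round(np.trace(Af @ Af @ Af) / 6.0)

def doulion(A, p, rng):
    """DOULION: keep each edge with prob p, count triangles, rescale by 1/p^3."""
    mask = np.triu(rng.random(A.shape) < p, 1)
    mask = mask | mask.T
    return exact_triangles(A & mask) / p**3

rng = np.random.default_rng(0)
n = 300
A = np.triu(rng.random((n, n)) < 0.1, 1)
A = A | A.T                                   # G(300, 0.1)
t_exact = exact_triangles(A)
t_est = np.mean([doulion(A, 0.5, rng) for _ in range(20)])
print(t_exact, round(t_est))                  # estimate ~ exact in expectation
```

Each surviving triangle requires all three edges to be kept (probability p³), which is exactly what the 1/p³ rescaling inverts.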
Appendix G: Information-Theoretic Limits
G.1 Mutual Information and Detection
The detection problem: Given G ~ SBM(n, a/n, b/n) with hidden labels σ ∈ {±1}^n, recover σ (up to global flip).
Impossible regime: Below the Kesten-Stigum threshold, the mutual information per node I(σ; G)/n → 0 in the limit n → ∞. Asymptotically, the graph carries vanishing information per node about the community labels, so no algorithm can produce an estimate correlated with σ.
Formal statement: For p_in = a/n, p_out = b/n, and (a − b)² < 2(a + b):
I(σ; G)/n → 0.
The normalization by n is needed because both H(σ) and I(σ; G) grow with n; it is the mutual information per node that goes to 0.
Above threshold: For (a − b)² > 2(a + b), lim I(σ; G)/n > 0, and the limit can be expressed via the Bayes-optimal overlap q* and the binary entropy function h(·).
G.2 Exact Recovery Threshold
For the exact recovery problem (recover exactly, not just correlate with it), the threshold is higher:
Theorem (Abbe-Sandon, 2015): For the 2-block SBM with p_in = (α ln n)/n, p_out = (β ln n)/n, exact recovery is possible iff:
(√α − √β)² > 2.
This is a Chernoff-Hellinger divergence condition between Poisson distributions with means α ln n and β ln n.
The three thresholds:
- Impossible: (a − b)² < 2(a + b) (constant-degree regime) - no algorithm can detect communities
- Weak recovery: (a − b)² > 2(a + b) - algorithms exist to partially recover σ
- Exact recovery: (√α − √β)² > 2 (logarithmic-degree regime) - algorithms exist for exact recovery
These thresholds become relevant for GNN practitioners when designing tasks: is the community detection problem in your benchmark achievable at all? Which threshold regime does it fall in?
Appendix H: Practical Algorithms
H.1 Spectral Clustering Pipeline for SBM
INPUT: Adjacency matrix A of SBM graph with k communities
OUTPUT: Community assignments \sigma
1. REGULARIZE: Compute A_reg = A + \tau/n * 11^T (small ridge)
(Prevents leading eigenvector being dominated by high-degree nodes)
2. NORMALIZE: Compute L_sym = D^{-1/2} A_reg D^{-1/2}
3. EIGENVECTORS: Compute top k eigenvectors U \in R^{n \times k} of L_sym
4. NORMALIZE ROWS: Let (U_row)_i = u_i / ||u_i||_2 for each row u_i (spherical projection)
5. k-MEANS: Run k-means on rows of U_row, get cluster labels \sigma
6. RETURN: \sigma (up to global permutation)
COMPLEXITY: O(m * k) per power iteration for sparse graphs with m edges
ACCURACY: Achieves the optimal misclassification rate above the Kesten-Stigum threshold
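The six steps above can be sketched in plain NumPy. The function name and the deterministic farthest-point k-means initialization are illustrative choices for this sketch, not part of any standard pipeline:

```python
import numpy as np

def spectral_cluster(A, k, tau=1.0, n_iter=50):
    """Regularized spectral clustering (steps 1-6 of the pipeline)."""
    n = A.shape[0]
    A_reg = A + tau / n                        # step 1: adds (tau/n) 11^T
    d = A_reg.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L_sym = d_inv_sqrt[:, None] * A_reg * d_inv_sqrt[None, :]   # step 2
    vals, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, np.argsort(vals)[-k:]]         # step 3: top-k eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True)               # step 4
    # step 5: Lloyd's k-means with deterministic farthest-point init
    idx = [0]
    for _ in range(1, k):
        d2 = np.min(((U[:, None, :] - U[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d2)))
    centers = U[idx].copy()
    for _ in range(n_iter):
        labels = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = U[labels == j].mean(axis=0)
    return labels                              # step 6: up to permutation
```

On an easy instance (two weakly connected blocks), the recovered labels are constant within each block.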
H.2 Louvain Community Detection
The Louvain algorithm maximizes modularity
$$Q = \frac{1}{2m} \sum_{i,j} \left(A_{ij} - \frac{k_i k_j}{2m}\right) \delta(\sigma_i, \sigma_j),$$
a measure of community quality: the fraction of edges inside communities minus the fraction expected under a degree-preserving null model.
Phase 1 (local optimization): For each node $i$, move $i$ to the neighboring community that maximizes the modularity gain $\Delta Q$. Repeat until no move improves $Q$.
Phase 2 (aggregation): Collapse each community into a supernode. Edge weights between supernodes = total edges between original communities. Apply Phase 1 to the collapsed graph.
Repeat until no further improvement.
Complexity: $O(n \log n)$ empirically - among the fastest practical community detection algorithms.
Limitation: Modularity has a resolution limit: communities with fewer than on the order of $\sqrt{m}$ internal edges (where $m$ is the total edge count) may not be detected. For fine-grained community structure, use alternatives (Infomap, hierarchical spectral methods).
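The modularity objective above is easy to compute directly from its definition; a minimal NumPy sketch (the function name is ad hoc):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) [same community]."""
    labels = np.asarray(labels)
    k = A.sum(axis=1)                       # degrees
    two_m = k.sum()                         # 2m = sum of degrees
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)
```

Two disconnected triangles, labeled as two communities, give the textbook value $Q = 1/2$; putting everything in one community gives $Q = 0$. For an actual Louvain run, `networkx.algorithms.community.louvain_communities` is one available implementation.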
H.3 Fast Giant Component Detection
from collections import deque

def giant_component_fraction(A, n):
    """Fraction of vertices in the largest connected component, via BFS."""
    visited = [False] * n
    max_component = 0
    for start in range(n):
        if visited[start]:
            continue
        # BFS over the component containing `start`
        queue = deque([start])
        visited[start] = True
        component_size = 0
        while queue:
            v = queue.popleft()          # O(1); list.pop(0) would be O(n)
            component_size += 1
            for u in range(n):           # dense adjacency-matrix row scan
                if A[v][u] and not visited[u]:
                    visited[u] = True
                    queue.append(u)
        max_component = max(max_component, component_size)
    return max_component / n
For sparse adjacency lists (CSR format), BFS runs in $O(n + m)$ - linear in the graph size. The dense-matrix version above scans a full row per dequeued vertex and therefore costs $O(n^2)$.
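In practice, SciPy already ships a linear-time connected-components routine on CSR matrices; a thin wrapper (the function name is ours) that computes the same fraction:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def giant_fraction_csr(A_dense):
    """Giant-component fraction via scipy's O(n + m) connected components."""
    n_comp, labels = connected_components(csr_matrix(A_dense), directed=False)
    return np.bincount(labels).max() / len(labels)
```

For a 4-vertex graph with the path 0-1-2 and an isolated vertex 3, the giant component covers 3/4 of the vertices.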
End of 06 Random Graphs notes.
Appendix I: Advanced Topics in Random Graphs
I.1 Local Weak Convergence (Benjamini-Schramm)
For sparse graph sequences, where the edge density tends to 0 (so the cut-metric limit would be the zero graphon), graphon theory breaks down. The correct limit theory uses local weak convergence.
Definition (Benjamini-Schramm limit): A sequence of graphs $(G_n)$ converges in the local weak sense to a random rooted graph $(G, o)$ if for every finite rooted graph $(H, o')$ and every radius $r$:
$$\frac{1}{n} \#\{v \in V(G_n) : B_r(G_n, v) \cong (H, o')\} \to \mathbb{P}\left[B_r(G, o) \cong (H, o')\right],$$
where $B_r(G, v)$ is the ball of radius $r$ around $v$ in $G$.
For $G(n, c/n)$: The local weak limit is the Galton-Watson tree with Poisson($c$) offspring distribution - an infinite random tree when it survives. This confirms the "locally tree-like" structure of sparse ER graphs.
For BA networks: The local weak limit involves correlated degree distributions due to preferential attachment - the limiting tree is not a Galton-Watson tree but a Polya urn tree.
For SBM: The local weak limit is a multi-type Galton-Watson tree where the types are community labels. This is the probabilistic foundation of the belief propagation algorithm for community detection.
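The Galton-Watson picture from I.1 is easy to probe by simulation. This sketch (ad hoc names, Poisson sampling via Knuth's product-of-uniforms method) samples the total progeny of a Poisson($c$) branching process; in the subcritical case $c < 1$ the expected total progeny is $1/(1-c)$:

```python
import math
import random

def gw_total_progeny(c, cap=10_000, rng=None):
    """Total size of a Galton-Watson tree with Poisson(c) offspring,
    truncated at `cap` (a hit of the cap is treated as 'survival')."""
    rng = rng or random.Random()
    alive, total = 1, 1
    while alive and total < cap:
        offspring = 0
        for _ in range(alive):
            # Knuth's method: multiply uniforms until the product drops below e^{-c}
            L, k, p = math.exp(-c), 0, 1.0
            while True:
                p *= rng.random()
                if p <= L:
                    break
                k += 1
            offspring += k
        alive = offspring
        total += offspring
    return total
```

Averaging many runs at $c = 0.5$ should land near $1/(1 - 0.5) = 2$, mirroring the small local neighborhoods of subcritical $G(n, c/n)$.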
I.2 Random Regular Graphs
Definition: A $d$-regular random graph on $n$ vertices is a graph chosen uniformly at random among all $d$-regular simple graphs on $n$ vertices (with $dn$ even).
Construction (configuration model): Give each vertex $d$ stubs, pair the stubs uniformly at random, and condition on the result being simple (no loops or multi-edges).
Spectral properties: For $d$-regular random graphs, the nontrivial adjacency eigenvalues lie in $[-2\sqrt{d-1} - o(1),\, 2\sqrt{d-1} + o(1)]$ w.h.p. (the trivial eigenvalue is $\lambda_1 = d$). The Alon-Boppana bound says $\lambda_2 \geq 2\sqrt{d-1} - o(1)$ for any $d$-regular graph. A Ramanujan graph achieves equality: $\max(|\lambda_2|, |\lambda_n|) \leq 2\sqrt{d-1}$.
Alon conjecture (proved by Friedman, 2008): Almost all $d$-regular random graphs are nearly Ramanujan:
$$\lambda_2 \leq 2\sqrt{d-1} + \epsilon \quad \text{w.h.p.}$$
for any fixed $\epsilon > 0$.
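Friedman's bound is easy to observe numerically. A small sketch (ad hoc helper, pure NumPy) builds a random 4-regular graph via the configuration model with rejection, then checks that $\lambda_1 = d$ and $\lambda_2$ sits near $2\sqrt{d-1} = 2\sqrt{3} \approx 3.46$:

```python
import numpy as np

def random_regular_adjacency(n, d, rng):
    """d-regular graph via the configuration model: pair stubs uniformly,
    reject until the pairing is simple (no loops, no multi-edges)."""
    while True:
        stubs = np.repeat(np.arange(n), d)
        rng.shuffle(stubs)
        u, v = stubs[0::2], stubs[1::2]
        if np.any(u == v):
            continue                      # self-loop: reject the pairing
        A = np.zeros((n, n), dtype=int)
        np.add.at(A, (u, v), 1)
        np.add.at(A, (v, u), 1)
        if A.max() > 1:
            continue                      # multi-edge: reject the pairing
        return A

rng = np.random.default_rng(0)
A = random_regular_adjacency(200, 4, rng)
eigs = np.sort(np.linalg.eigvalsh(A.astype(float)))
# eigs[-1] is exactly d = 4; eigs[-2] should hover near 2*sqrt(3) ≈ 3.46
```

Rejection is affordable here because the probability that a random pairing is simple is a constant (roughly $e^{-(d^2-1)/4}$) independent of $n$.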
For AI: Expanders (near-Ramanujan regular graphs) provide the optimal sparse attention pattern:
- $d$ edges per node ($O(n)$ total edges)
- Diameter $O(\log n)$ (short paths)
- Spectral gap $d - \lambda_2 \geq d - 2\sqrt{d-1} - \epsilon > 0$ (fast mixing)
I.3 Random Hypergraphs
Real-world networks often have higher-order interactions - a chemistry reaction involves multiple molecules simultaneously. Hypergraphs capture this.
Definition: A hypergraph $H = (V, E)$ where edges $e \in E$ can contain any number of vertices ($E \subseteq 2^V$).
Random $k$-uniform hypergraph $H_k(n, p)$: Each $k$-subset of $V$ is an edge independently with probability $p$.
Phase transition: The giant component threshold for $H_k(n, p)$ occurs when the expected number of new vertices reached from a given vertex, $(k-1)\binom{n-1}{k-1} p$, equals 1 (average branching factor 1). The analysis uses a multi-type branching process.
For AI: Simplicial complex neural networks (SCNNs) and topological data analysis (TDA) use higher-order graph structure. Random simplicial complexes (random clique complexes of ER graphs) model the topological structure of neural network activation spaces.
I.4 Dynamic Random Graphs
Real networks evolve over time. Several models capture network dynamics:
Forest Fire model (Leskovec et al., 2007): A new node contacts a random "ambassador" and copies some of its links. Exhibits densification (average degree increases over time) and shrinking diameter - both observed in real growing networks.
Copying model: Each new node copies existing edges from a random source node. This generates power-law degree distributions similar to BA but with different exponent depending on copy probability.
Edge dynamics: Instead of just adding edges, allow edge rewiring, deletion, and creation. Models temporal networks (who talked to whom at time $t$). The temporal analogs of the clustering coefficient and path length require time-respecting paths.
For AI: Training data graphs (social networks, citation graphs) evolve over time. Distribution shift in graph ML often comes from temporal evolution of the underlying random graph model. A GNN trained on a 2019 citation graph may fail on a 2024 citation graph if the graph generative process has changed.
Appendix J: Mathematical Proofs
J.1 Proof: Giant Component Threshold (Upper Bound)
We prove: for $p = c/n$ with $c < 1$, all components have size $O(\log n)$ w.h.p.
Proof: Let $X_k$ = number of connected components of size exactly $k$ in $G(n, p)$ with $p = c/n$.
The probability that a specific set $S$ of $k$ vertices forms a component involves:
- The internal graph on $S$ being connected: probability $\leq k^{k-2} p^{k-1}$ (union bound over spanning trees)
- No edges from $S$ to $V \setminus S$: probability $(1-p)^{k(n-k)}$
By union bound:
$$\mathbb{E}[X_k] \leq \binom{n}{k}\, k^{k-2}\, p^{k-1}\, (1-p)^{k(n-k)},$$
using Cayley's formula ($k^{k-2}$ labeled trees on $k$ vertices; each connected graph contains a spanning tree of its component).
For $p = c/n$ and $k = O(\log n)$ (so $k^2/n = o(1)$):
$$\mathbb{E}[X_k] \leq \frac{n^k}{k!}\, k^{k-2} \left(\frac{c}{n}\right)^{k-1} e^{-ck(n-k)/n} \leq (1 + o(1))\, \frac{n}{c\sqrt{2\pi}} \cdot \frac{\left(c\, e^{1-c}\right)^k}{k^{5/2}}$$
(using Stirling: $k! \geq (k/e)^k \sqrt{2\pi k}$, together with $\binom{n}{k} \leq n^k/k!$ and $(1-p)^{k(n-k)} \leq e^{-ck} e^{ck^2/n}$).
For $c < 1$: $c\, e^{1-c} < 1$ (since $f(c) = c\, e^{1-c}$ has $f(1) = 1$, $f'(c) = (1-c)e^{1-c} > 0$ for $c < 1$ - it attains its maximum at $c = 1$). Let $\alpha = c\, e^{1-c} < 1$.
Summing over $k \geq K$:
$$\sum_{k \geq K} \mathbb{E}[X_k] \leq O(n) \sum_{k \geq K} \alpha^k = O(n\, \alpha^K),$$
which is $o(1)$ for $K = A \log n$ with $A$ large enough.
By Markov's inequality, w.h.p. there is no component of size $\geq A \log n$, so the maximum component size is $O(\log n)$. $\square$
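The bound can be sanity-checked by simulation. This sketch (ad hoc names) samples $G(n, p)$ directly and measures the largest component on either side of the threshold; at $c = 0.5$ the maximum is logarithmic in $n$, while at $c = 2$ a giant component of roughly $0.8n$ vertices appears:

```python
import random
from collections import deque

def largest_component(n, p, rng):
    """Largest-component size of a sampled G(n, p), via adjacency-list BFS."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen = [False] * n
    best = 0
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        q, size = deque([s]), 0
        while q:
            v = q.popleft()
            size += 1
            for u in adj[v]:
                if not seen[u]:
                    seen[u] = True
                    q.append(u)
        best = max(best, size)
    return best

rng = random.Random(0)
sub = largest_component(2000, 0.5 / 2000, rng)   # c = 0.5 < 1: O(log n)
sup = largest_component(2000, 2.0 / 2000, rng)   # c = 2 > 1: giant component
```

For $c = 2$ the giant fraction solves $S = 1 - e^{-2S}$, giving $S \approx 0.80$, i.e. about 1600 of the 2000 vertices.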
J.2 Proof: Poisson Degree Distribution
Claim: For $G(n, p)$ with $p = \lambda/n$, $\deg(v) \xrightarrow{d} \mathrm{Poisson}(\lambda)$.
Proof via PGF: The degree is $D_n = \sum_{j \neq v} X_j$, where $X_j \sim \mathrm{Bernoulli}(\lambda/n)$ i.i.d.
PGF of $D_n$:
$$G_{D_n}(s) = \mathbb{E}[s^{D_n}] = \left(1 - \frac{\lambda}{n}(1 - s)\right)^{n-1}.$$
Taking $n \to \infty$:
$$G_{D_n}(s) \to e^{-\lambda(1-s)},$$
which is the PGF of $\mathrm{Poisson}(\lambda)$.
Since PGF convergence (for $s \in [0, 1]$) implies distributional convergence (by the continuity theorem for generating functions), $D_n \xrightarrow{d} \mathrm{Poisson}(\lambda)$.
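The convergence step is easy to see numerically; a quick check (ad hoc function names) that the binomial PGF approaches the Poisson PGF as $n$ grows, at a fixed evaluation point:

```python
import math

def binomial_pgf(s, lam, n):
    """PGF of the degree D_n ~ Binomial(n-1, lam/n): (1 - (lam/n)(1-s))^(n-1)."""
    return (1 - lam / n * (1 - s)) ** (n - 1)

def poisson_pgf(s, lam):
    """PGF of Poisson(lam): exp(-lam (1 - s))."""
    return math.exp(-lam * (1 - s))

# convergence check at s = 0.5, lam = 3
vals = [binomial_pgf(0.5, 3, n) for n in (10, 100, 10_000)]
limit = poisson_pgf(0.5, 3)
```

The gap to the limit shrinks roughly like $1/n$, consistent with the pointwise convergence used in the proof.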
Multivariate extension: For $k$ distinct vertices $v_1, \ldots, v_k$, the degrees are jointly Poisson in the limit, with joint PGF
$$\mathbb{E}\left[\prod_{i=1}^{k} s_i^{D_{v_i}}\right] \to \prod_{i=1}^{k} e^{-\lambda(1 - s_i)}$$
for all $s_i \in [0, 1]$. But at finite $n$ the joint distribution is NOT independent Poisson (an edge between $v_i$ and $v_j$ contributes to both $D_{v_i}$ and $D_{v_j}$). For fixed $k$, the correlation between the degrees of $v_i$ and $v_j$ is $O(1/n)$, so they are asymptotically independent.
J.3 Proof sketch: Davis-Kahan Theorem
Simplified version: Let $M' = M + E$, where the symmetric matrix $M$ has eigenvalue $\lambda$ with unit eigenvector $v$. Let $v'$ be the unit eigenvector of $M'$ corresponding to the eigenvalue $\lambda'$ nearest to $\lambda$. Let $\delta = \min_{\lambda_i \neq \lambda} |\lambda_i - \lambda|$ (gap to the other eigenvalues of $M$).
Claim: $\sin \theta(v, v') \leq \frac{2\|E\|}{\delta}$ (assuming $\|E\| \leq \delta/2$).
Proof: Decompose $v' = \alpha v + \beta w$, where $w \perp v$, $\|w\| = 1$, and $\alpha^2 + \beta^2 = 1$; note $|\beta| = \sin \theta(v, v')$.
From $(M + E)v' = \lambda' v'$: $(M - \lambda' I)v' = -E v'$.
Project onto the orthogonal complement of $v$ (which $M$ preserves, since $v$ is an eigenvector): $\beta (M - \lambda' I) w = -P_{v^\perp} E v'$, so $|\beta|\, \|(M - \lambda' I) w\| \leq \|E\|$.
All eigenvalues of $M$ restricted to $v^\perp$ are at distance $\geq \delta$ from $\lambda$, hence at distance $\geq \delta - |\lambda' - \lambda| \geq \delta/2$ from $\lambda'$ (which is within $\|E\| \leq \delta/2$ of $\lambda$ by Weyl's inequality):
$$\|(M - \lambda' I) w\| \geq \delta/2.$$
Therefore:
$$|\beta| \leq \frac{\|E\|}{\delta/2} = \frac{2\|E\|}{\delta}.$$
This gives the Davis-Kahan bound.
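The bound is easy to verify numerically. A small sketch: a diagonal $M$ with top eigenvalue 5 and spectral gap $\delta = 4$, perturbed by a small random symmetric $E$ (the setup and scale 0.05 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.diag([5.0, 1.0, 0.5, 0.0])     # top eigenvalue 5, gap delta = 4
R = rng.standard_normal((4, 4))
E = 0.05 * (R + R.T) / 2              # small symmetric perturbation

vals, vecs = np.linalg.eigh(M + E)
v_top = vecs[:, -1]                   # perturbed top eigenvector
v = np.array([1.0, 0.0, 0.0, 0.0])    # unperturbed top eigenvector (e_1)

# clamp to guard against tiny negative values from floating-point round-off
sin_theta = np.sqrt(max(0.0, 1.0 - float(np.dot(v, v_top)) ** 2))
bound = 2 * np.linalg.norm(E, 2) / 4.0     # 2 ||E|| / delta
```

With $\|E\| \approx 0.1$, the observed $\sin \theta$ sits comfortably below the Davis-Kahan bound of roughly $2\|E\|/4$.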
Appendix K: Notation Reference
| Symbol | Meaning | First appears |
|---|---|---|
| $G(n, p)$ | Erdős–Rényi random graph, $n$ vertices, edge probability $p$ | 3.1 |
| $G(n, m)$ | ER graph with exactly $m$ edges | 3.1 |
| $\lvert C_1 \rvert$ | Size of largest connected component | 3.3 |
| $S$ | Giant component fraction, $S = \lvert C_1 \rvert / n$ | 3.3 |
| $C(v)$ | Local clustering coefficient of vertex $v$ | 4.2 |
| $L$ | Average path length | 4.3 |
| $\Pi(v)$ | Preferential attachment probability of vertex $v$ | 5.1 |
| $\mathrm{SBM}(n, k, B)$ | Stochastic Block Model | 6.1 |
| $B$ | SBM block probability matrix | 6.1 |
| $\sigma$ | Community assignment | 6.1 |
| $\rho(M)$ | Spectral radius of matrix $M$ | App. A.2 |
| $\mu_{\mathrm{sc}}$ | Wigner semicircle measure | 7.1 |
| $\delta_\square(G, H)$ | Cut metric between graphs $G$ and $H$ | 8.1 |
| $W$ | Graphon | 8.2 |
| $t(F, G)$ | Homomorphism density of $F$ in $G$ | 8.3 |
| $T_W$ | Graphon integral operator, $(T_W f)(x) = \int_0^1 W(x, y) f(y)\, dy$ | 8.4 |
| w.h.p. | With high probability (probability $\to 1$ as $n \to \infty$) | 2.2 |
| $O, o, \Theta$ | Asymptotic notation | 2.2 |
| $I(\cdot)$ | Large deviation rate function for ER | 3.3 |
| $G_X(s)$ | Probability generating function of $X$ | App. A.1 |
| $q_k$ | Excess degree distribution | App. B.2 |
<- Back to Graph Theory | Next: Graph Algorithms ->