Random Graphs, Part 1: From Intuition to Graphons (The Infinite-Size Limit)
1. Intuition
1.1 What Is a Random Graph?
A random graph is not a single graph but a probability distribution over graphs. When we write $G(n,p)$, we mean: take $n$ labeled vertices and independently include each of the $\binom{n}{2}$ possible edges with probability $p$. Any specific graph is a sample from this distribution.
This is a powerful abstraction. Real-world networks - social graphs, citation networks, protein-protein interaction maps, the web - cannot be written down in closed form. They arise from complex processes involving millions of actors. Random graph models capture the statistical regularity of these networks without requiring knowledge of every individual edge.
Key insight: Many properties of real-world networks can be characterized by just a few parameters (average degree, clustering coefficient, community structure), and random graph models are the mathematical objects that interpolate between these summary statistics and the full combinatorial structure of the graph.
For AI and ML, random graphs matter in at least three ways:
- Benchmark generation: We generate synthetic graphs from random models to test GNN algorithms across controlled conditions (the GraphWorld framework uses SBM).
- Graph generation models: GRAN, GDSS, and DiGress learn to sample from distributions over graphs - essentially learning a graphon.
- Network analysis: Training data graphs (social networks, citation networks, knowledge graphs) have structure that random graph models help characterize and exploit.
1.2 Why Random Graphs Matter for AI
The connection between random graphs and modern AI is deeper than benchmark generation:
Transformer attention is a random graph. In a language model with $n$ tokens and sparse attention, the attention pattern defines a random-looking bipartite graph. The connectivity of this graph determines information flow - whether distant tokens can influence each other and how quickly information propagates. Results from random graph theory (connectivity thresholds, small-world phenomena) directly predict when transformers can or cannot capture long-range dependencies.
GNN expressiveness depends on graph structure. The WL hierarchy and GIN's expressiveness guarantees depend on what graphs the model will see. If training graphs are sampled from an SBM, the GNN faces a specific community detection problem that has known information-theoretic limits - limits that bound what no GNN can achieve regardless of architecture.
Overparameterized networks are sparse graphs. The lottery ticket hypothesis says that dense networks contain sparse subnetworks (tickets) that train just as well. These sparse subnetworks have random-graph-like structure: they appear near the percolation threshold of the dense network.
Graphons = graph neural network limits. The mathematical theory of graphons (limit objects of dense graph sequences) is the same theory that gives GNNs their universality results. A GNN that is continuous in the cut metric topology of graphons is provably expressive on all graphs in the same graphon equivalence class.
1.3 The Four Canonical Models
THE FOUR CANONICAL RANDOM GRAPH MODELS
========================================================================
Model Parameters Key property
---------------------------------------------------------------------
Erdos-Renyi n, p (or n, m) Phase transition at p = 1/n
G(n,p) Independent edges Poisson degree distribution
Thresholds for all monotone props
Watts-Strogatz n, k, \beta High clustering + short paths
Small-World Ring lattice + Models social/neural networks
random rewiring Navigability (Kleinberg)
Barabasi-Albert n, m_0, m Power-law degree distribution
Scale-Free Preferential Hubs, robustness to random failure
attachment Most real-world networks
Stochastic k, n_1,...,n_k, B Community structure
Block Model Block membership Kesten-Stigum threshold
(SBM) + probability matrix Benchmark for GNN node classif.
========================================================================
Each model captures one dominant feature of real-world networks:
- ER captures mathematical tractability and threshold phenomena
- WS captures the small-world property (short paths, high clustering)
- BA captures heterogeneity (a few highly-connected hubs, many low-degree nodes)
- SBM captures community structure (dense intra-community, sparse inter-community)
Real networks typically exhibit several of these properties simultaneously. Hybrid models (e.g., SBM with degree correction, or preferential attachment with community structure) better match empirical data.
1.4 Historical Timeline: 1959-2026
RANDOM GRAPH THEORY: 1959-2026
========================================================================
1959 Erdos & Renyi introduce G(n,m); first random graph paper
1960 Erdos & Renyi prove giant component phase transition
1961 Erdos & Renyi prove connectivity threshold p = log(n)/n
1984 Bollobas writes first comprehensive textbook on random graphs
1995 Chung & Lu: random graphs with given expected degrees
1998 Watts & Strogatz: small-world networks (Nature, ~36,000 cites)
1999 Barabasi & Albert: scale-free networks (Science, ~50,000 cites)
2001 Lovasz & Szegedy begin graphon theory (published 2006)
2002 Girvan & Newman: modularity and community detection
2006 Lovasz & Szegedy: limits of dense graph sequences (JCTB)
2011 Mossel, Neeman, Sly: Kesten-Stigum threshold (stochastic proof)
2014 Abbe & Sandon: exact recovery threshold for SBM
2018 You, Ying, Leskovec: GraphRNN graph generation
2018 Keriven & Peyre: graphon neural networks
2022 GraphWorld: benchmark generation via SBM
2022 Rusch, Bronstein, Mishra: gradient over-squashing theory
2023 DiGress: discrete diffusion for graph generation
2024 Graphon attention transformers (sparse attention limits)
2026 Graphon-based GNN universality: completeness results
========================================================================
1.5 Phase Transitions: The Central Metaphor
The most striking feature of random graphs is the phase transition: a sudden, dramatic change in global structure as a parameter crosses a critical threshold. This is not merely a quantitative change but a qualitative one - the graph transitions from one phase (many small components) to another (one giant component plus small components).
Physical analogy: In ferromagnetism, a material transitions from disordered (many small magnetic domains) to ordered (one aligned domain) as temperature drops below the Curie point. The mathematics is identical: both are percolation phase transitions.
The critical exponent phenomenon: Near the critical point $p = 1/n$, the largest component has size $\Theta(n^{2/3})$ - polynomial in $n$, intermediate between the $O(\log n)$ components of the subcritical regime and the $\Theta(n)$ giant component of the supercritical regime. The exponent $2/3$ is a universal critical exponent that appears in many other combinatorial and physical phase transitions.
For AI: Phase transitions in random graphs correspond to phase transitions in learning. A GNN trained to detect community structure in SBMs undergoes a computational phase transition at the Kesten-Stigum threshold - above it, polynomial-time algorithms succeed; below it, no efficient algorithm can (conditional on computational hardness conjectures). This is the rigorous mathematical statement of why some graph learning problems are fundamentally hard.
2. Probability on Graphs: Formal Setup
2.1 Graph Probability Spaces
Let $\mathcal{G}_n$ denote the set of all labeled graphs on vertex set $[n] = \{1, \dots, n\}$. A random graph model is a probability measure on $\mathcal{G}_n$.
Definition (Random Graph): A random graph on $n$ vertices is a random variable $G$ taking values in $\mathcal{G}_n$, defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$.
The key examples:
$G(n,p)$ (Erdos-Renyi): Each edge is included independently with probability $p$. The probability of a specific graph $H$ with $m$ edges is:
$$\mathbb{P}(G = H) = p^m (1 - p)^{\binom{n}{2} - m}$$
$G(n,m)$ (fixed-edge): Uniform over all graphs with exactly $m$ edges:
$$\mathbb{P}(G = H) = \binom{\binom{n}{2}}{m}^{-1} \quad \text{for every } H \text{ with } m \text{ edges}$$
Equivalence: $G(n,m)$ and $G(n,p)$ with $m = \binom{n}{2} p$ have the same asymptotic behavior for most properties. More precisely, $G(n,p)$ conditioned on having exactly $m$ edges is distributed as $G(n,m)$.
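A minimal sketch of sampling from $G(n,p)$ and checking the expected edge count (assuming Python with networkx available; the values of $n$ and $p$ are illustrative):

```python
# Minimal sketch: sampling G(n, p) and comparing against the expected edge count.
import networkx as nx

n, p = 1000, 0.01
G = nx.gnp_random_graph(n, p, seed=0)

expected_edges = p * n * (n - 1) / 2          # E[|E|] = C(n, 2) * p
print(f"sampled edges:  {G.number_of_edges()}")
print(f"expected edges: {expected_edges:.0f}")
```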
Graph properties as events: A graph property $\mathcal{P} \subseteq \mathcal{G}_n$ is a set of graphs closed under isomorphism (it depends only on structure, not labeling). We study the event $\{G_n \in \mathcal{P}\}$ and its probability $\mathbb{P}(G_n \in \mathcal{P})$ as $n \to \infty$.
2.2 With High Probability (w.h.p.)
Definition (w.h.p.): We say $G_n$ satisfies property $\mathcal{P}$ with high probability (w.h.p.) if:
$$\lim_{n \to \infty} \mathbb{P}(G_n \in \mathcal{P}) = 1$$
This is distinct from "always" - there will always be rare samples that violate the property. But in the limit, the measure of the failure set goes to zero.
Notation conventions:
- $f(n) = o(g(n))$: $f(n)/g(n) \to 0$
- $f(n) = O(g(n))$: $f(n) \le C\, g(n)$ for some constant $C$
- $f(n) = \Theta(g(n))$: $c\, g(n) \le f(n) \le C\, g(n)$ for constants $0 < c \le C$
- $f(n) = \omega(g(n))$: $f(n)/g(n) \to \infty$
Example (Isolated vertices): In $G(n,p)$ with $p = c \ln n / n$, the probability that a fixed vertex $v$ is isolated is $(1-p)^{n-1} \approx e^{-pn} = n^{-c}$. The expected number of isolated vertices is $n \cdot n^{-c} = n^{1-c}$. For $c > 1$, this expected number is $o(1)$, and the actual number of isolated vertices is 0 w.h.p.
Markov's inequality (First Moment Method): If $X \ge 0$ is integer-valued and $\mathbb{E}[X] \to 0$, then $X = 0$ w.h.p. (since $\mathbb{P}(X \ge 1) \le \mathbb{E}[X]$). This proves that structures are absent (upper bounds on the probability of containing them).
Second moment method: If $\mathrm{Var}(X) = o(\mathbb{E}[X]^2)$, then $X > 0$ w.h.p. (Paley-Zygmund / Chebyshev: $\mathbb{P}(X = 0) \le \mathrm{Var}(X)/\mathbb{E}[X]^2$). This proves lower bounds.
2.3 Monotone Properties and Threshold Functions
Definition (Monotone property): A graph property $\mathcal{P}$ is monotone increasing if whenever $G \in \mathcal{P}$ and $G \subseteq G'$ (additional edges), then $G' \in \mathcal{P}$. Examples: connectivity, containing a triangle, having a giant component.
Theorem (Bollobas-Thomason, 1987): Every non-trivial monotone graph property $\mathcal{P}$ has a threshold function $p^*(n)$ such that:
$$\mathbb{P}(G(n,p) \in \mathcal{P}) \to \begin{cases} 0 & \text{if } p \ll p^*(n) \\ 1 & \text{if } p \gg p^*(n) \end{cases}$$
Sharp thresholds come from noise-sensitivity / influence theory: when the influences of individual edges are small and approximately equal, the window in which $\mathbb{P}(G(n,p) \in \mathcal{P})$ rises from near 0 to near 1 is narrow relative to $p^*$.
For AI: Sharp thresholds are the theoretical explanation for why GNN performance can change dramatically as graph density or community strength crosses a critical value. A model that performs at 90% accuracy might drop to random guessing when a graph parameter decreases by a factor of 2.
2.4 First and Second Moment Methods
These are the two workhorses for proving w.h.p. statements.
First Moment (Upper Bound):
To show $\mathcal{P}$ holds w.h.p., find a count $X$ of "bad" structures such that failure of $\mathcal{P}$ forces $X \ge 1$, and show $\mathbb{E}[X] \to 0$:
$$\mathbb{P}(\neg \mathcal{P}) \le \mathbb{P}(X \ge 1) \le \mathbb{E}[X] \to 0$$
Second Moment (Lower Bound):
To show $X > 0$ w.h.p. (where $X$ counts copies of some structure), use the
Paley-Zygmund inequality:
$$\mathbb{P}(X > 0) \ge \frac{\mathbb{E}[X]^2}{\mathbb{E}[X^2]}$$
Expanding $\mathbb{E}[X^2] = \sum_{i,j} \mathbb{P}(A_i \cap A_j)$ over pairs of copies, we need to bound the correlations between pairs of copies (only overlapping pairs contribute beyond $\mathbb{E}[X]^2$).
Example (Triangles): Let $X$ = number of triangles in $G(n,p)$.
- For $p = c/n$: $\mathbb{E}[X] = \binom{n}{3} p^3 \to \frac{c^3}{6}$
The triangle threshold is $p^* = 1/n$: for $p \ll 1/n$, there are no triangles w.h.p. (first moment); for $p \gg 1/n$, triangles exist w.h.p. (second moment).
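To see the threshold numerically, here is a small hedged simulation that counts triangles at $p$ below, at, and above $1/n$ (assuming networkx; the sizes and seed are arbitrary choices):

```python
# Sketch: triangle counts around the threshold p* = 1/n.
import networkx as nx

n = 2000
for scale in [0.2, 1.0, 5.0]:                 # p = scale / n: below, at, above threshold
    p = scale / n
    G = nx.fast_gnp_random_graph(n, p, seed=1)
    n_triangles = sum(nx.triangles(G).values()) // 3   # each triangle is counted at 3 vertices
    expected = (n * p) ** 3 / 6                         # E[X] ~ (np)^3 / 6
    print(f"p = {scale}/n: triangles = {n_triangles}, E[X] ~ {expected:.2f}")
```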
3. Erdos-Renyi Model
3.1 Definitions: G(n,p) and G(n,m)
The Erdos-Renyi model is the simplest and most mathematically tractable random graph model. Its beauty lies in the complete independence of edges, which makes exact calculations feasible.
Definition ($G(n,p)$): The random graph on vertex set $[n]$ in which each of the $\binom{n}{2}$ possible edges is independently present with probability $p$.
Statistics:
- $\mathbb{E}[\deg(v)] = (n-1)p$ for each vertex $v$; the expected number of edges is $\binom{n}{2} p$
Definition ($G(n,m)$): The random graph on $[n]$ drawn uniformly from all graphs with exactly $m$ edges.
Regime classification (by average degree $c = np$):
| Regime | $p$ | $c = np$ | Largest component |
|---|---|---|---|
| Subcritical | $p = c/n$, $c < 1$ | $c < 1$ | $O(\log n)$ |
| Critical | $p = 1/n$ | $c = 1$ | $\Theta(n^{2/3})$ |
| Supercritical | $p = c/n$, $c > 1$ | $c > 1$ | $\Theta(n)$ (giant component) |
| Connected | $p > (1+\epsilon)\frac{\ln n}{n}$ | $c > (1+\epsilon)\ln n$ | $n$ (whole graph) |
For AI: When analyzing message passing in sparse GNNs on ER graphs, the regime determines whether information can flow globally. Below criticality ($c < 1$), most nodes are in small isolated components - the GNN can only see local structure. Above criticality, there is a giant component enabling global information flow.
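A quick way to see the regimes is to sample $G(n, c/n)$ for several values of $c$ and measure the largest component; a minimal sketch (networkx; $n$ and the seed are illustrative):

```python
# Sketch: largest-component size in the subcritical / critical / supercritical regimes.
import networkx as nx

n = 10_000
for c in [0.5, 1.0, 1.5, 3.0]:
    G = nx.fast_gnp_random_graph(n, c / n, seed=0)
    largest = max(len(cc) for cc in nx.connected_components(G))
    print(f"c = {c}: |C_1| = {largest}  ({largest / n:.3f} of all nodes)")
```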
3.2 Degree Distribution: Poisson Limit
Theorem (Poisson Degree Distribution): In $G(n,p)$ with $p = c/n$ ($c$ constant), the degree of a fixed vertex $v$ satisfies:
$$\deg(v) \xrightarrow{d} \mathrm{Poisson}(c), \qquad \mathbb{P}(\deg(v) = k) \to \frac{e^{-c} c^k}{k!}$$
Proof sketch: $\deg(v) = \sum_{u \ne v} X_{uv}$, where the $X_{uv} \sim \mathrm{Bernoulli}(p)$ are i.i.d. The sum of $n - 1$ independent Bernoullis with success probability $c/n$ converges to $\mathrm{Poisson}(c)$ by the law of small numbers (Poisson limit theorem).
Generating function approach: The probability generating function of $\deg(v)$ is:
$$\mathbb{E}\!\left[z^{\deg(v)}\right] = (1 - p + pz)^{n-1} \to e^{c(z-1)},$$
the PGF of $\mathrm{Poisson}(c)$.
Consequences of the Poisson degree distribution:
- Exponential tail: $\mathbb{P}(\deg(v) = k) \approx e^{-c} c^k / k!$ decays super-exponentially in $k$ - degrees are concentrated near $c$
- Max degree: $\Theta\!\left(\frac{\ln n}{\ln \ln n}\right)$ w.h.p. (sublinear in $n$)
- No hubs: unlike scale-free networks, ER graphs have no nodes with degree polynomial in $n$
Contrast with scale-free: Power-law degree distributions $P(k) \propto k^{-\gamma}$ have polynomial tails and allow hubs with degree of order $n^{1/(\gamma - 1)}$ (about $\sqrt{n}$ for the BA model). This is why real networks (preferential attachment) look so different from ER.
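A short check of the Poisson limit, comparing the empirical degree histogram of $G(n, c/n)$ against the $\mathrm{Poisson}(c)$ pmf (a sketch assuming networkx; $n$ and $c$ are illustrative):

```python
# Sketch: empirical degree distribution of G(n, c/n) vs. the Poisson(c) pmf.
import math
from collections import Counter
import networkx as nx

n, c = 20_000, 5.0
G = nx.fast_gnp_random_graph(n, c / n, seed=0)
counts = Counter(d for _, d in G.degree())

for k in range(11):
    empirical = counts.get(k, 0) / n
    poisson = math.exp(-c) * c**k / math.factorial(k)
    print(f"k={k:2d}  empirical={empirical:.4f}  Poisson(c)={poisson:.4f}")
```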
3.3 Giant Component Phase Transition
This is the central theorem of random graph theory - one of the most beautiful results in all of combinatorics.
Theorem (Erdos-Renyi, 1960): Let $c > 0$ be a constant and $p = c/n$. Let $|C_1|$ denote the size of the largest connected component of $G(n,p)$.
- Subcritical ($c < 1$): $|C_1| = O(\log n)$ w.h.p.; more precisely, $|C_1| = (1 + o(1))\frac{\log n}{I_c}$, where $I_c = c - 1 - \ln c > 0$. In particular, $|C_1|/n \to 0$.
- Critical ($c = 1$): $|C_1| = \Theta(n^{2/3})$, and $n^{-2/3}|C_1|$ converges in distribution (the scaling limit is described in terms of Brownian-motion excursions).
- Supercritical ($c > 1$): $|C_1| = (\rho(c) + o(1))\, n$ w.h.p., where $\rho(c)$ is the unique solution in $(0,1)$ to:
$$1 - \rho = e^{-c\rho}$$
The second-largest component has size $O(\log n)$.
The survival probability $\rho$: The fixed-point equation has a probabilistic interpretation. Think of each vertex independently joining the giant component with probability $\rho$. A vertex joins if it has at least one neighbor that also joins. If neighbors join with probability $\rho$, and the number of neighbors is $\mathrm{Poisson}(c)$, the probability of having no joining neighbor is $e^{-c\rho}$ - hence the fixed-point equation $\rho = 1 - e^{-c\rho}$.
Derivation via branching processes: A key technique is to compare component exploration with a branching process. Starting from a vertex $v$, reveal its neighbors one by one. Each revealed vertex independently has approximately $\mathrm{Poisson}(c)$ additional neighbors (in the limit). This is a Galton-Watson branching process with offspring distribution $\mathrm{Poisson}(c)$.
A Galton-Watson process with mean offspring $c$ survives (generates an infinite tree) with probability $\rho$ satisfying $\rho = 1 - e^{-c\rho}$. Below $c = 1$ (mean offspring $< 1$), the process dies out almost surely - hence no giant component in the subcritical regime. Above $c = 1$, there is a positive probability of survival - hence the giant component.
Size of the giant component:
| $c$ | $\rho(c)$ (fraction in giant) |
|---|---|
| 0.5 | 0 (subcritical) |
| 1.0 | 0 (critical, but $\Theta(n^{2/3})$ scaling) |
| 1.5 | 0.583 |
| 2.0 | 0.797 |
| 3.0 | 0.940 |
| 5.0 | 0.993 |
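The table can be reproduced by solving the fixed-point equation $\rho = 1 - e^{-c\rho}$ numerically; a minimal sketch in plain Python:

```python
# Sketch: solving rho = 1 - exp(-c * rho) by fixed-point iteration.
import math

def giant_fraction(c, iters=200):
    rho = 1.0 if c > 1 else 0.0          # start above the nonzero root when it exists
    for _ in range(iters):
        rho = 1.0 - math.exp(-c * rho)
    return rho

for c in [0.5, 1.0, 1.5, 2.0, 3.0, 5.0]:
    print(f"c = {c}: rho = {giant_fraction(c):.3f}")
```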
For AI: GNN expressiveness on random graphs undergoes a similar phase transition. In the subcritical regime, the WL algorithm (and GINs) see disconnected local neighborhoods - they cannot distinguish non-isomorphic components. In the supercritical regime, the global component provides rich structural information.
3.4 Connectivity Threshold
Theorem (Erdos-Renyi, 1961): Let $p = \frac{\ln n + c_n}{n}$. Then:
$$\mathbb{P}\big(G(n,p) \text{ is connected}\big) \to \begin{cases} 0 & \text{if } c_n \to -\infty \\ e^{-e^{-c}} & \text{if } c_n \to c \in \mathbb{R} \\ 1 & \text{if } c_n \to +\infty \end{cases}$$
In particular:
- If $c_n \to -\infty$: $G(n,p)$ is disconnected w.h.p.
- If $c_n \to c$ constant: $G(n,p)$ is connected with probability $\to e^{-e^{-c}}$
- If $c_n \to +\infty$: $G(n,p)$ is connected w.h.p.
Proof sketch (upper bound via first moment):
Let $X_k$ = number of components of size $k$. The bottleneck is $X_1$ (isolated vertices). A vertex is isolated iff none of its $n - 1$ potential edges are present:
$$\mathbb{P}(v \text{ isolated}) = (1 - p)^{n-1} \approx e^{-pn} = \frac{e^{-c}}{n}$$
Expected isolated vertices: $\mathbb{E}[X_1] = n(1-p)^{n-1} \to e^{-c}$.
By a Poisson approximation argument, $X_1 \to \mathrm{Poisson}(e^{-c})$ in distribution, so:
$$\mathbb{P}(X_1 = 0) \to e^{-e^{-c}}$$
Connectivity fails iff $X_1 \ge 1$ or there is a larger component disconnected from the rest. One shows that components of size $2 \le k \le n/2$ disappear before isolated vertices do, so the connectivity threshold coincides with the isolated-vertex disappearance threshold.
Sharp threshold: The transition window has width $\Theta(1/n)$ around $p = \ln n / n$ - any $c_n$ diverging to $\pm\infty$ suffices. This is an unusually sharp threshold (coarser windows are more common for other properties).
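A hedged simulation of the connectivity window, comparing the empirical connectivity probability at $p = (\ln n + c)/n$ with the limit $e^{-e^{-c}}$ (networkx; $n$ and the number of trials are modest, so expect finite-size deviations):

```python
# Sketch: connectivity probability in G(n, p) with p = (ln n + c)/n vs. exp(-exp(-c)).
import math
import networkx as nx

n, trials = 2000, 30
for c in [-2.0, 0.0, 2.0]:
    p = (math.log(n) + c) / n
    hits = sum(
        nx.is_connected(nx.fast_gnp_random_graph(n, p, seed=t)) for t in range(trials)
    )
    print(f"c = {c:+.0f}: empirical P(connected) = {hits / trials:.2f}, "
          f"limit = {math.exp(-math.exp(-c)):.2f}")
```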
3.5 Subgraph Counts and Triangle Thresholds
Definition: A subgraph count $X_H$ for a fixed graph $H$ is the number of copies of $H$ in $G(n,p)$.
Expectation:
$$\mathbb{E}[X_H] = \binom{n}{v_H}\frac{v_H!}{|\mathrm{Aut}(H)|}\, p^{e_H} = \Theta\!\left(n^{v_H} p^{e_H}\right),$$
where $v_H = |V(H)|$ and $e_H = |E(H)|$. Up to constants, this is $n^{v_H} p^{e_H}$.
Threshold for $H$: By the first and second moment methods, $X_H > 0$ w.h.p. iff $p \gg n^{-1/m(H)}$, where:
$$m(H) = \max_{H' \subseteq H,\; v_{H'} \ge 1} \frac{e_{H'}}{v_{H'}}$$
is the maximum subgraph density of $H$ (the density of its densest subgraph).
Triangle threshold: For $H = K_3$ (triangle), $m(K_3) = 3/3 = 1$, so the threshold is $p^* = n^{-1}$.
| Subgraph | $v_H$ | $e_H$ | $m(H)$ | Threshold $p^*$ |
|---|---|---|---|---|
| Edge | 2 | 1 | 1/2 | $n^{-2}$ |
| Path $P_3$ | 3 | 2 | 2/3 | $n^{-3/2}$ |
| Triangle $K_3$ | 3 | 3 | 1 | $n^{-1}$ |
| 4-clique $K_4$ | 4 | 6 | 3/2 | $n^{-2/3}$ |
| 5-clique $K_5$ | 5 | 10 | 2 | $n^{-1/2}$ |
Janson's inequality: Provides exponential bounds on the probability that no copy appears when $\mathbb{E}[X_H]$ is large:
$$\mathbb{P}(X_H = 0) \le \exp\!\left(-\mathbb{E}[X_H] + \frac{\Delta}{2}\right),$$
where $\Delta = \sum_{i \sim j} \mathbb{P}(A_i \cap A_j)$ sums over pairs of copies sharing at least one edge.
3.6 Diameter and Distances
Theorem: In $G(n,p)$ with $p = c/n$, $c > 1$ (supercritical), the diameter of the giant component is:
$$\mathrm{diam}(C_1) = \Theta(\log n)$$
w.h.p. More precisely, typical distances between vertices of the giant component are $(1 + o(1))\frac{\ln n}{\ln c}$.
Why? The giant component locally behaves like a random tree with branching factor $c$. Starting from any node, the neighborhood of radius $k$ has size $\approx c^k$. It reaches $\Theta(n)$ when $c^k \approx n$, i.e., $k \approx \frac{\ln n}{\ln c}$.
Characteristic path length: This logarithmic scaling is the mathematical explanation for the six degrees of separation phenomenon. With $n \approx 10^9$ (Facebook) and $c \approx 100$ (100 friends), $\frac{\ln n}{\ln c} \approx \frac{9}{2} = 4.5$ - indeed about 4-6 hops.
Contrast with small-world: In a ring lattice with degree $k$ (before rewiring), the diameter is $\Theta(n/k)$ - linear in $n$. Watts-Strogatz introduces just a small fraction $\beta$ of random rewirings to drop this to $O(\log n)$ while maintaining high clustering.
3.7 Spectral Properties of G(n,p)
Expected adjacency matrix: $\mathbb{E}[A] = p(J - I)$, where $J$ is the all-ones matrix - up to the diagonal, the rank-1 matrix $pJ$.
Spectral decomposition: Write $A = \mathbb{E}[A] + E$ with $E = A - \mathbb{E}[A]$. The fluctuation matrix $E$ is a Wigner-type matrix (symmetric, zero diagonal, i.i.d. centered entries of variance $p(1-p)$ above the diagonal).
Theorem (Furedi-Komlos, 1981): For $G(n,p)$ with $p$ constant, the eigenvalues of $A$ satisfy:
- $\lambda_1 = (1 + o(1))\, np$ (outlier, corresponding to the nearly constant eigenvector $\approx \mathbf{1}/\sqrt{n}$, the average-degree direction)
- $\lambda_2, \dots, \lambda_n \in \big[-(2 + o(1))\sqrt{np(1-p)},\ (2 + o(1))\sqrt{np(1-p)}\big]$ w.h.p.
The bulk eigenvalues follow Wigner's semicircle law with radius $2\sqrt{np(1-p)}$.
For spectral clustering: The spectral gap $\lambda_1 - \lambda_2$ determines how easily we can detect the leading eigenvector (and, in the SBM, how easily the signal eigenvectors that encode community structure separate from the bulk).
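A short numerical illustration of the outlier-plus-bulk picture (numpy/networkx; $p$ is chosen dense so the asymptotics are visible at moderate $n$):

```python
# Sketch: eigenvalues of a dense G(n, p) adjacency matrix -- one outlier near np,
# the rest inside the semicircle bulk of radius 2 * sqrt(n p (1 - p)).
import numpy as np
import networkx as nx

n, p = 1000, 0.1
G = nx.gnp_random_graph(n, p, seed=0)
A = nx.to_numpy_array(G)
eigs = np.sort(np.linalg.eigvalsh(A))

print(f"lambda_1    = {eigs[-1]:.1f}   (np = {n * p:.1f})")
print(f"lambda_2    = {eigs[-2]:.1f}")
print(f"bulk radius = {2 * np.sqrt(n * p * (1 - p)):.1f}")
```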
4. Watts-Strogatz Small-World Model
4.1 Construction and Parameters
The Watts-Strogatz (WS) model was introduced in 1998 to explain a paradox: real-world networks (social, neural, power grid) have both high clustering AND short path lengths. Regular lattices have high clustering but long paths; ER random graphs have short paths but low clustering. WS interpolates between them.
Construction (Watts-Strogatz, 1998):
- Start with a ring lattice: $n$ nodes arranged in a circle, each connected to its $k$ nearest neighbors ($k/2$ on each side). Assume $n \gg k \gg \ln n \gg 1$.
- Rewire: For each edge $(i, j)$ with $i < j$ in the ring, with probability $\beta$ replace $j$ with a uniformly random node (avoiding self-loops and duplicate edges).
Parameters:
- $n$: number of nodes
- $k$: initial degree (even integer, typically $k \ll n$)
- $\beta \in [0, 1]$: rewiring probability
- $\beta = 0$: regular ring lattice (high clustering, long paths)
- $\beta = 1$: approximately ER random graph (low clustering, short paths)
- $\beta \approx 0.001$-$0.1$: small-world regime (high clustering, short paths!)
For AI: Neural network connection graphs and attention patterns often exhibit small-world structure. The feedforward layers of transformers have short path lengths (like random graphs) while local attention heads maintain clustered connections (like lattices). Understanding this structure informs efficient attention design.
4.2 Clustering Coefficient
Definition: The local clustering coefficient of vertex $i$ with degree $d_i$ is:
$$C_i = \frac{\#\{\text{edges among the neighbors of } i\}}{\binom{d_i}{2}},$$
the fraction of $i$'s possible neighbor-pairs that are also connected.
Global clustering coefficient: $C = \frac{1}{n}\sum_i C_i$ (average over vertices).
Ring lattice ($\beta = 0$): Each vertex has $k$ neighbors (the $k/2$ on each side of the ring). Neighbors of $i$ include all nodes within ring distance $k/2$, and a pair of $i$'s neighbors are connected iff they are within distance $k/2$ of each other.
For large $k$, the fraction of connected neighbor pairs is approximately:
$$C(0) = \frac{3(k - 2)}{4(k - 1)} \approx \frac{3}{4}$$
WS model: For small $\beta$, the clustering coefficient is approximately:
$$C(\beta) \approx C(0)\,(1 - \beta)^3$$
This is because a triangle survives only if all three of its edges survive rewiring, and each edge survives with probability $1 - \beta$.
Random graph ($\beta = 1$): For ER with the same average degree $k$:
$$C_{\mathrm{ER}} \approx \frac{k}{n} \ll 1$$
Key observation: For small $\beta$ (say $\beta = 0.01$), $C(\beta) \approx 0.97\, C(0)$ - still very high, close to the lattice value.
Ring lattice ($\beta = 0$): The shortest path between two nodes separated by $d$ positions on the ring is about $\frac{d}{k/2}$. The average path length is approximately $L(0) \approx \frac{n}{2k}$, which grows linearly with $n$.
WS model ($\beta > 0$): The average path length drops dramatically with rewiring. Even a tiny $\beta$ introduces long-range shortcuts that drastically reduce path lengths.
Heuristic analysis (Newman-Watts): The typical inter-node distance after rewiring scales as:
$$L(n, \beta) \approx \frac{n}{k}\, f(\beta k n),$$
where $f(x)$ is roughly constant for $x \ll 1$ and $f(x) \approx \frac{\ln x}{x}$ for large $x$. For $\beta k n \gg 1$ (more than a handful of shortcuts in the whole graph), the path length transitions from $\Theta(n)$ to $\Theta(\log n)$.
Numerical example ($n = 1000$, $k = 10$):
| $\beta$ | $C(\beta)$ | $L(\beta)$ |
|---|---|---|
| 0 | 0.667 | 50 |
| 0.001 | 0.660 | 20 |
| 0.01 | 0.640 | 8 |
| 0.1 | 0.528 | 5 |
| 0.5 | 0.195 | 4 |
| 1.0 | 0.010 | 3 |
The small-world regime ($\beta \approx 0.001$-$0.1$) achieves high $C$ and low $L$ simultaneously.
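The table can be approximately reproduced with networkx's Watts-Strogatz generator; a sketch (exact values depend on the seed; the connected variant of the generator is used so that path lengths are always defined):

```python
# Sketch: clustering and average path length of Watts-Strogatz graphs as beta varies.
import networkx as nx

n, k = 1000, 10
for beta in [0.0, 0.01, 0.1, 1.0]:
    G = nx.connected_watts_strogatz_graph(n, k, beta, tries=100, seed=0)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    print(f"beta = {beta:<5} C = {C:.3f}  L = {L:.2f}")
```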
4.4 The Small-World Regime
Watts-Strogatz property: A graph is said to have the small-world property if:
- $C \gg C_{\mathrm{ER}} \approx k/n$ (much more clustered than ER)
- $L \approx L_{\mathrm{ER}} \approx \frac{\ln n}{\ln k}$ (path length comparable to ER)
These two conditions are simultaneously satisfied for WS with $0 < \beta \ll 1$ (roughly $\beta \in [0.001, 0.1]$).
Empirical validation:
Watts and Strogatz validated the model against three real networks:
| Network | $n$ | $\langle k\rangle$ | $C$ | $C_{\mathrm{random}}$ | $L$ | $L_{\mathrm{random}}$ |
|---|---|---|---|---|---|---|
| Film actors | 225,226 | 61 | 0.79 | 0.00027 | 3.65 | 2.99 |
| Power grid (W. US) | 4,941 | 2.67 | 0.080 | 0.005 | 18.7 | 12.4 |
| C. elegans neural | 282 | 14 | 0.28 | 0.05 | 2.65 | 2.25 |
All three networks have high clustering (much above ER random graph level) but short average path lengths (comparable to ER). This is the hallmark of small-world structure.
4.5 Navigability and Kleinberg's Grid
Watts and Strogatz showed that small-world graphs have short paths, but Kleinberg (2000) asked: can nodes find these short paths using only local information?
Kleinberg's model: Start with a 2D grid of $n$ nodes. Each node $u$ has edges to its nearby grid neighbors (local structure) plus one long-range link to a node $v$ chosen with probability proportional to $d(u, v)^{-r}$, where $d$ is the lattice (Manhattan) distance.
Theorem (Kleinberg, 2000):
- If $r = 2$ (the exponent matches the grid dimension): greedy routing finds paths of expected length $O(\log^2 n)$.
- If $r \ne 2$: any decentralized algorithm requires $\Omega(n^{\delta})$ steps for some $\delta = \delta(r) > 0$.
Interpretation: The power-law exponent $r = d$ (where $d$ is the grid dimension) uniquely enables efficient decentralized navigation. Real social networks approximately satisfy this condition, explaining how people can navigate social networks in a few steps even with only local knowledge.
For AI: This connects to hierarchical and sparse attention in transformers. Local or sliding-window attention combined with a few long-range links is efficient because most useful attention mass concentrates on nearby tokens, with occasional distant connections - a continuous analog of Kleinberg's inverse power-law long-range links.
4.6 Social Network Evidence
Small-world structure has been empirically validated across many domains:
- Social networks: Milgram's 1967 experiment found ~6 hops between strangers in the US; Facebook (2016) found average path length 3.57 for 1.6 billion users.
- Neural connectomes: C. elegans (302 neurons), mouse visual cortex, human brain fMRI networks all show high clustering and short paths.
- Internet topology: ASN-level and router-level graphs exhibit small-world properties.
- Citation networks: Academic citation graphs have clustering coefficients far above the ER baseline for the same size and density.
Limitations: WS does not generate power-law degree distributions. Real networks often exhibit BOTH small-world AND scale-free properties - requiring hybrid models.
5. Barabasi-Albert Scale-Free Model
5.1 Preferential Attachment
The Barabasi-Albert (BA) model was introduced in 1999 to explain a universal observation: degree distributions in real-world networks follow a power law $P(k) \propto k^{-\gamma}$ with $\gamma \approx 2$-$3$. This is incompatible with the exponential tails of ER's Poisson distribution.
The explanation: networks grow over time, and new nodes prefer to attach to high-degree nodes. This "rich get richer" mechanism generates hubs.
Construction (Barabasi-Albert):
- Start with $m_0$ nodes with arbitrary connections.
- At each time step $t$:
  - Add one new node $v_t$
  - Connect $v_t$ to $m \le m_0$ existing nodes, chosen (without repetition) with probability proportional to their current degree:
$$\mathbb{P}(v_t \to u) = \frac{k_u}{\sum_w k_w}$$
The denominator is $\sum_w k_w \approx 2mt$ (handshaking lemma: the degree sum is twice the number of edges, and about $mt$ edges exist at time $t$).
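A minimal sketch of the resulting degree distribution using networkx's BA generator (parameter values illustrative); the tail decays roughly like $k^{-3}$, in contrast to the Poisson tail of an ER graph with the same mean degree:

```python
# Sketch: degree statistics of a Barabasi-Albert graph and the approximate k^-3 tail.
import numpy as np
import networkx as nx

n, m = 50_000, 3
G = nx.barabasi_albert_graph(n, m, seed=0)
degrees = np.array([d for _, d in G.degree()])

print(f"mean degree = {degrees.mean():.1f}  (≈ 2m = {2 * m})")
print(f"max  degree = {degrees.max()}  (order m*sqrt(n) ≈ {int(m * np.sqrt(n))})")
for k in [5, 10, 20, 40, 80]:
    tail = (degrees >= k).mean()
    approx = min(1.0, (m / k) ** 2)        # a pure k^-3 law gives P(deg >= k) ≈ (m/k)^2
    print(f"P(deg >= {k:3d}) = {tail:.4f}   (k^-3 tail ≈ {approx:.4f})")
```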
Why "preferential attachment"? This mimics real-world network growth:
- Web pages link to popular pages (already have many in-links)
- Scientists cite highly-cited papers
- Airports add routes to hubs
For AI: GNN training graphs from large knowledge bases (Wikidata, Freebase) exhibit scale-free degree distributions. GNN aggregation functions must handle this heterogeneity - a degree-10 node and a degree-1000 node require different treatment.
5.2 Power-Law Degree Distribution
Theorem (Barabasi-Albert, 1999; rigorous: Bollobas et al., 2001): For fixed $m$, as $n \to \infty$, the degree distribution satisfies:
$$P(k) = \frac{2m(m+1)}{k(k+1)(k+2)} \sim \frac{2m^2}{k^3} \quad (k \to \infty)$$
This is a power law with exponent $\gamma = 3$: $P(k) \propto k^{-3}$.
Scale-free: A distribution is called scale-free if $P(k) \propto k^{-\gamma}$ for some exponent $\gamma$. Scale-free means the distribution looks the same at all scales (self-similar under rescaling $k \mapsto \lambda k$).
Moments: For $P(k) \propto k^{-\gamma}$:
- The mean $\langle k \rangle$ is finite iff $\gamma > 2$
- The second moment $\langle k^2 \rangle$ (and hence the variance) is finite iff $\gamma > 3$
For BA with $\gamma = 3$: the mean degree is finite, but the variance diverges. This has profound implications:
- No effective epidemic threshold: diseases spread to the whole network for any transmission rate
- Robustness to random failure: removing random nodes rarely disconnects the graph (most nodes have low degree)
- Vulnerability to targeted attack: removing the few hubs disconnects the graph immediately
5.3 Mean-Field Analysis
Mean-field derivation: Let $k_i(t)$ be the degree of node $i$ at time $t$. Treating $k_i$ as a continuous variable and taking expectations:
$$\frac{\partial k_i}{\partial t} = m \cdot \frac{k_i}{\sum_j k_j}$$
Since $\sum_j k_j \approx 2mt$ (each new node adds $m$ edges, contributing $2m$ to the total degree sum):
$$\frac{\partial k_i}{\partial t} = \frac{k_i}{2t}$$
Solution: With initial condition $k_i(t_i) = m$ (node $i$ added at time $t_i$ with $m$ initial edges):
$$k_i(t) = m\left(\frac{t}{t_i}\right)^{1/2}$$
Degree distribution from mean-field: Node $i$ has degree greater than $k$ at time $t$ iff $m(t/t_i)^{1/2} > k$, i.e., $t_i < m^2 t / k^2$. Since nodes are added uniformly at rate 1 per step, $t_i$ is roughly uniform on $[0, t]$:
$$\mathbb{P}(k_i(t) > k) = \frac{m^2}{k^2}$$
Therefore:
$$P(k) = -\frac{\partial}{\partial k}\,\mathbb{P}(k_i(t) > k) = \frac{2m^2}{k^3}$$
This reproduces the power law $P(k) \propto k^{-3}$, consistent with the rigorous result.
Hub formation: The oldest nodes grow largest, since $k_i(t) = m\sqrt{t/t_i}$ decreases with $t_i$. The node added at time $t_i = O(1)$ has degree of order $m\sqrt{t}$ at time $t$ - a hub with degree $\Theta(\sqrt{n})$.
5.4 Robustness and Fragility
Random failure: If each node is independently removed with probability $q$, what is the critical $q_c$ above which the giant component disappears?
For a network with degree distribution $P(k)$, the critical fraction for percolation is determined by the Molloy-Reed criterion. A giant component survives random removal of a fraction $q$ of the nodes iff $q < q_c$, where:
$$1 - q_c = \frac{1}{\kappa - 1}, \qquad \kappa = \frac{\langle k^2\rangle}{\langle k\rangle}$$
For scale-free networks with $\gamma \le 3$ (including BA with $\gamma = 3$): $\langle k^2 \rangle$ diverges, so $\kappa \to \infty$ and $q_c \to 1$. No matter how many nodes are randomly removed, a giant component persists (in infinite networks). In finite BA networks, the giant component persists until $q$ is close to 1.
Targeted attack: If the highest-degree nodes are removed first, the network is highly vulnerable. Removing even 5% of the highest-degree nodes can destroy the giant component of a BA network.
Implication for AI: Neural network pruning that removes random weights (random failure) is much safer than pruning by magnitude (targeted attack) - magnitude-based pruning could inadvertently target the "hubs" of the implicit neural network graph.
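A hedged simulation of the robustness/fragility asymmetry on a BA graph, comparing random node removal with removal of the highest-degree nodes (networkx; the 5% removal fraction is illustrative):

```python
# Sketch: random node removal vs. targeted (highest-degree-first) removal on a BA graph.
import random
import networkx as nx

def giant_fraction(G):
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

n, m, frac = 10_000, 3, 0.05                 # remove 5% of nodes
G = nx.barabasi_albert_graph(n, m, seed=0)

random.seed(0)
random_removed = random.sample(list(G.nodes()), int(frac * n))
hubs_removed = sorted(G.nodes(), key=lambda v: G.degree(v), reverse=True)[: int(frac * n)]

G_rand = G.copy()
G_rand.remove_nodes_from(random_removed)
G_targ = G.copy()
G_targ.remove_nodes_from(hubs_removed)

print(f"giant fraction, original:           {giant_fraction(G):.3f}")
print(f"giant fraction, 5% random removal:  {giant_fraction(G_rand):.3f}")
print(f"giant fraction, 5% hub removal:     {giant_fraction(G_targ):.3f}")
```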
5.5 Scale-Free Networks in the Wild
Evidence for scale-free: Many networks show approximate power-law degree distributions:
- World Wide Web: $\gamma_{\text{in}} \approx 2.1$, $\gamma_{\text{out}} \approx 2.4$
- Internet (AS-level): $\gamma \approx 2.1$-$2.4$
- Protein-protein interaction: $\gamma \approx 2.4$
- Actor collaboration: $\gamma \approx 2.3$
Controversy (2019): Broido & Clauset (2019) argued that true power laws are rare - most claimed scale-free networks fit log-normal or other heavy-tailed distributions better. The debate continues, but the practical point stands: real networks have much heavier degree tails than ER Poisson.
GNN implications: Degree-heterogeneous graphs require careful normalization in GCN-style aggregation. GraphSAGE's neighborhood sampling provides approximate uniformity; PNA (Principal Neighbourhood Aggregation) uses multiple aggregators (mean, max, std) to handle degree heterogeneity robustly.
6. Stochastic Block Model
6.1 Definition and Parameters
The Stochastic Block Model (SBM) is the canonical random graph model for community structure. It is the workhorse of GNN benchmark generation and the model for which information-theoretic limits of community detection are fully understood.
Definition (SBM): Given parameters:
- $n$: number of nodes
- $k$: number of communities/blocks
- $z \in \{1, \dots, k\}^n$: community assignment vector
- $B \in [0,1]^{k \times k}$: symmetric block probability matrix ($B_{rs}$ = probability of an edge between a node in community $r$ and a node in community $s$)
The SBM generates a random graph by independently including each edge $\{i, j\}$ with probability $B_{z_i z_j}$.
Symmetric SBM (planted partition): All communities have equal size $n/k$; $B_{rr} = p$ (within-community) and $B_{rs} = q$ for $r \ne s$ (between-community), with $p > q$.
Sparse SBM: $p = a/n$, $q = b/n$ with $a, b$ constants. This is the regime studied in information-theoretic limits.
Why SBM matters for AI: Node classification on real graphs (citation networks, social networks) essentially asks: given an observed graph with node features, recover the community labels $z$. SBM provides the ground truth for studying when this is possible and what algorithms can achieve it.
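A minimal sketch of sampling a 2-block planted-partition SBM with networkx and checking the within/between edge counts (parameter values illustrative):

```python
# Sketch: sampling a 2-block SBM and checking within- vs. between-community edges.
import networkx as nx

n, p_in, p_out = 1000, 0.05, 0.005
sizes = [n // 2, n // 2]
probs = [[p_in, p_out], [p_out, p_in]]
G = nx.stochastic_block_model(sizes, probs, seed=0)

# nodes 0..n/2-1 are block 0, the rest block 1
within = sum(1 for u, v in G.edges() if (u < n // 2) == (v < n // 2))
between = G.number_of_edges() - within
print(f"within edges:  {within}  (expected ≈ {p_in * (n // 2) * (n // 2 - 1):.0f})")
print(f"between edges: {between}  (expected ≈ {p_out * (n // 2) ** 2:.0f})")
```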
6.2 Community Detection: Information-Theoretic Limits
Three regimes for the symmetric 2-block SBM ($k = 2$, equal communities):
With $p = a/n$, $q = b/n$ in the sparse regime (and $p = a'\frac{\ln n}{n}$, $q = b'\frac{\ln n}{n}$ in the logarithmic-degree regime):
- Exact recovery (logarithmic average degree): recover $z$ exactly (up to a global permutation of labels) with high probability. Threshold: $\sqrt{a'} - \sqrt{b'} > \sqrt{2}$.
- Weak recovery / detection: identify communities better than chance (output correlated with the ground truth). Threshold (Kesten-Stigum): $(a - b)^2 > 2(a + b)$.
- Impossible: below the Kesten-Stigum threshold, no algorithm can do better than random guessing (for $k = 2$ this is an information-theoretic limit).
Kesten-Stigum threshold: The condition can be rewritten as a signal-to-noise ratio:
$$\mathrm{SNR} = \frac{(a - b)^2}{2(a + b)} > 1$$
The numerator $(a - b)^2$ measures the community signal; the denominator $2(a + b)$ measures the noise (it is proportional to the average degree $d = \frac{a + b}{2}$). When $\mathrm{SNR} < 1$, the noise overwhelms the signal.
Spectral interpretation: The second eigenvalue of $\mathbb{E}[A]$ is $\frac{a - b}{2}$. The spectral radius of the noise (Wigner semicircle) is about $2\sqrt{d} = \sqrt{2(a + b)}$. The signal eigenvalue emerges from the noise bulk iff $\frac{a - b}{2} \gtrsim \sqrt{2(a + b)}$, i.e., $(a - b)^2 \gtrsim 8(a + b)$ - more demanding than KS by a constant factor because of the adjacency normalization in the sparse regime, but the same qualitative conclusion.
For GNNs: GNNs trained on SBM graphs face exactly this detection problem. Below the Kesten-Stigum threshold, no GNN - regardless of depth, width, or architecture - can reliably classify nodes into communities. This is the hard limit on what graph learning can achieve.
$k$-block SBM: For $k$ communities, the threshold generalizes. The second eigenvalue of the expected adjacency matrix (equivalently, of the block matrix $B$ scaled by the community sizes) determines the computational (Kesten-Stigum) threshold.
6.3 Degree-Corrected SBM
Real-world networks have heterogeneous degree distributions within communities. The DC-SBM adds degree parameters:
Definition (DC-SBM): Node $i$ has a degree parameter $\theta_i > 0$. The probability of edge $\{i, j\}$ is:
$$\mathbb{P}(A_{ij} = 1) = \min\!\left(\theta_i \theta_j B_{z_i z_j},\ 1\right)$$
Under DC-SBM, the expected degree of node $i$ is proportional to $\theta_i$. Choosing the $\theta_i$ from a power law generates a scale-free SBM combining community structure with hub-and-spoke topology.
Why DC-SBM matters: Citation networks and social networks have communities AND degree heterogeneity. Standard spectral clustering fails on DC-SBM because high-degree nodes dominate the leading eigenvectors. Regularized spectral clustering or normalized Laplacian-based methods are needed.
6.4 SBM as GNN Benchmark Generator
GraphWorld (2022): A Google framework that generates GNN benchmarks by sampling SBM parameters from a distribution over:
- Number of communities
- Community sizes (balanced or Zipf-distributed)
- Within-block density
- Between-block density
By sampling $(p, q)$ pairs across and below the Kesten-Stigum threshold, GraphWorld generates benchmarks that are systematically hard or easy for GNNs.
Result: Different GNN architectures excel in different SBM regimes:
- GCN: best for homophilic SBM (dense intra-community)
- GraphSAGE: robust across densities
- GIN: best near the Kesten-Stigum threshold (maximally expressive)
- MixHop: best for heterophilic SBM (dense inter-community)
This validates the theoretical prediction that no single GNN architecture dominates all graph types.
7. Spectral Properties of Random Graphs
7.1 Wigner's Semicircle Law
Wigner's semicircle law is a fundamental result in random matrix theory that describes the bulk eigenvalue distribution of random symmetric matrices.
Setup: Let $W_n = \frac{1}{\sqrt{n}} X_n$, where $X_n$ is a real symmetric random matrix with:
- Diagonal entries $X_{ii}$ i.i.d. with mean 0 and bounded variance
- Off-diagonal entries $X_{ij}$ ($i < j$) i.i.d. with mean 0 and variance $\sigma^2$
Theorem (Wigner, 1955): The empirical spectral distribution of $W_n$:
$$\mu_n = \frac{1}{n}\sum_{i=1}^n \delta_{\lambda_i(W_n)}$$
converges weakly in probability to the semicircle law:
$$\rho_{\mathrm{sc}}(x) = \frac{1}{2\pi\sigma^2}\sqrt{4\sigma^2 - x^2} \quad \text{for } |x| \le 2\sigma$$
Support: $[-2\sigma, 2\sigma]$ - asymptotically, essentially all eigenvalues of $W_n$ lie in this interval.
For $G(n,p)$: The centered and scaled adjacency matrix $\frac{A - \mathbb{E}[A]}{\sqrt{np(1-p)}}$ has bulk eigenvalues following the semicircle law on $[-2, 2]$ (equivalently, the bulk of $A$ lies in $[-2\sqrt{np(1-p)}, 2\sqrt{np(1-p)}]$).
The outlier eigenvalue $\lambda_1 \approx np$ comes from the rank-1 mean matrix $\mathbb{E}[A] \approx pJ$ - it pokes out of the bulk.
For SBM: The adjacency matrix of an SBM decomposes as $A = \mathbb{E}[A] + E$, where $\mathbb{E}[A]$ is the block-structured mean matrix (rank at most $k$) and $E$ is a Wigner-like noise matrix. The outlier eigenvalues of $A$ emerge from the semicircle bulk and encode the community structure, provided the signal-to-noise ratio exceeds the Kesten-Stigum threshold.
7.2 Spectral Gap and Community Detection
Spectral algorithm for SBM:
- Compute the $k$ leading eigenvectors $v_1, \dots, v_k$ of $A$ (or of the normalized Laplacian $\mathcal{L}$)
- Form the embedding matrix $V = [v_1, \dots, v_k] \in \mathbb{R}^{n \times k}$
- Run $k$-means on the rows of $V$ to recover community assignments (a minimal sketch for $k = 2$ follows below)
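A minimal sketch of the spectral algorithm for the 2-block case, where the $k$-means step reduces to taking the sign of the second eigenvector (numpy/networkx; $a$ and $b$ are chosen comfortably above the threshold so that plain adjacency spectral clustering works):

```python
# Sketch: recovering two planted communities from the sign of the second eigenvector.
import numpy as np
import networkx as nx

n, a, b = 2000, 18.0, 2.0                   # KS SNR = (a-b)^2 / (2(a+b)) = 6.4 > 1
sizes = [n // 2, n // 2]
G = nx.stochastic_block_model(sizes, [[a / n, b / n], [b / n, a / n]], seed=0)
A = nx.to_numpy_array(G)

eigvals, eigvecs = np.linalg.eigh(A)
v2 = eigvecs[:, -2]                          # eigenvector of the second-largest eigenvalue
labels = (v2 > 0).astype(int)

truth = np.array([0] * (n // 2) + [1] * (n // 2))
overlap = max((labels == truth).mean(), (1 - labels == truth).mean())
print(f"overlap with planted partition: {overlap:.3f}")
```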
Why does this work? For the 2-block planted partition SBM with $p = a/n$ and $q = b/n$:
- Leading eigenvalue: $\lambda_1(\mathbb{E}[A]) = \frac{(p + q)n}{2} = \frac{a + b}{2}$ (average-degree direction)
- Second eigenvalue: $\lambda_2(\mathbb{E}[A]) = \frac{(p - q)n}{2} = \frac{a - b}{2}$ (signal eigenvalue)
- Bulk edge: $\approx 2\sqrt{d} = \sqrt{2(a + b)}$
The signal eigenvalue separates from the bulk iff $\frac{a - b}{2} \gtrsim \sqrt{2(a + b)}$, which matches the Kesten-Stigum condition up to a constant factor.
Davis-Kahan bound (next subsection): If signal and noise eigenvalues are separated, the eigenvectors of $A$ are close to those of $\mathbb{E}[A]$, enabling accurate community recovery.
7.3 Davis-Kahan Theorem and Perturbation Bounds
Theorem (Davis-Kahan, 1970): Let $\hat{M} = M + E$, where $M$ is symmetric with eigenvalue $\lambda$ and unit eigenvector $v$, and $E$ is a symmetric perturbation. Let $\delta$ be the gap between $\lambda$ and all other eigenvalues of $M$. Then the corresponding eigenvector $\hat{v}$ of $\hat{M}$ satisfies (in one standard form):
$$\sin\angle(v, \hat{v}) \le \frac{2\,\|E\|_{\mathrm{op}}}{\delta}$$
Application to SBM: For the 2-block SBM:
- Signal gap: $\delta \approx \frac{a - b}{2}$
- Noise: $\|E\|_{\mathrm{op}} \approx \sqrt{2(a + b)}$ (Wigner semicircle radius)
- Angle error: $\sin\theta \lesssim \frac{\sqrt{2(a + b)}}{(a - b)/2}$
This error is small iff $(a - b)^2 \gg a + b$, which is essentially the Kesten-Stigum condition up to constants. When it holds, the estimated eigenvector is close to the ground-truth community indicator vector, and $k$-means succeeds.
For GNNs: Davis-Kahan explains why GNNs with spectral-style message passing (GCN) can do community detection above the threshold but fail below it. It also shows that adding node features (which contribute additional signal to ) can push GNNs above the threshold even when the graph structure alone is insufficient.
7.4 Laplacian Spectrum of G(n,p)
Normalized Laplacian: $\mathcal{L} = I - D^{-1/2} A D^{-1/2}$.
For $G(n,p)$ with $p$ constant:
- Eigenvalues of $\mathcal{L}$ lie in $[0, 2]$ (always)
- $\lambda_1(\mathcal{L}) = 0$ always (corresponding to the eigenvector $D^{1/2}\mathbf{1}$)
- For $p$ fixed: $\lambda_2(\mathcal{L}) = 1 - O(1/\sqrt{np})$ and $\lambda_n(\mathcal{L}) = 1 + O(1/\sqrt{np})$ - the bulk concentrates near 1
Algebraic connectivity (Fiedler value): $\lambda_2$ of the unnormalized Laplacian $L = D - A$ is the Fiedler value, which controls:
- Convergence rate of random walks (mixing time scales like the inverse spectral gap)
- Robustness (for non-complete graphs, $\lambda_2 \le$ vertex connectivity $\le$ edge connectivity)
- Cheeger constant approximation: $\frac{\lambda_2}{2} \le h(G) \le \sqrt{2\lambda_2}$ (stated for the normalized Laplacian eigenvalue)
For $G(n,p)$ with $p = c/n$ and $c > 1$: the Fiedler value of the giant component is only polylogarithmically small in $n$, consistent with its $\Theta(\log n)$ diameter and polylogarithmic mixing time.
8. Graphons: The Infinite-Size Limit
8.1 Dense Graph Sequences and the Cut Metric
As graph size , what is the "limit" of a sequence of dense graphs? This question leads to graphon theory - the measure-theoretic framework that unifies all dense random graph models.
Dense graph sequence: A sequence of graphs $(G_n)$ with $|V(G_n)| \to \infty$ and $|E(G_n)| = \Theta(|V(G_n)|^2)$ (constant edge density).
Cut distance: The cut norm between two symmetric functions $U, W: [0,1]^2 \to [-1, 1]$ is:
$$\|U - W\|_\square = \sup_{S, T \subseteq [0,1]} \left| \int_{S \times T} \big(U(x, y) - W(x, y)\big)\, dx\, dy \right|$$
The cut metric between graphs $G$ and $H$ on the same vertex set is $d_\square(G, H) = \|W_G - W_H\|_\square$, where $W_G$ is the step function encoding the adjacency matrix of $G$.
After quotienting by relabelings (measure-preserving maps, i.e. graph isomorphisms), we get the cut distance $\delta_\square$.
8.2 Graphon Definition and Sampling
Definition (Graphon): A graphon is a symmetric measurable function $W: [0,1]^2 \to [0,1]$.
Intuition: Think of $W(x, y)$ as the "connection probability" between two abstract node types $x$ and $y$, where node types are drawn uniformly from $[0, 1]$.
Graphon sampling: To sample an $n$-node graph $G(n, W)$ from graphon $W$ (sketched in code below):
- Draw $x_1, \dots, x_n \sim \mathrm{Uniform}[0,1]$ i.i.d. (latent node types)
- Include edge $\{i, j\}$ with probability $W(x_i, x_j)$, independently
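A minimal sketch of graphon sampling for an illustrative graphon $W(x, y) = xy$ (numpy; the choice of $W$ and the sizes are arbitrary):

```python
# Sketch: sampling G(n, W) for a simple graphon W(x, y) = x * y.
import numpy as np

def sample_from_graphon(W, n, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=n)                       # latent node types
    probs = W(x[:, None], x[None, :])             # W(x_i, x_j) for all pairs
    coins = rng.uniform(size=(n, n)) < probs      # Bernoulli edge indicators
    A = np.triu(coins, k=1)                       # keep each pair once, no self-loops
    return (A + A.T).astype(int), x

n = 500
A, x = sample_from_graphon(lambda u, v: u * v, n)
print(f"edge density: {A.sum() / (n * (n - 1)):.3f}  (E[W] = 1/4 for W = xy)")
```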
Examples of graphons and their corresponding random graph models:
| Graphon $W(x, y)$ | Corresponding model |
|---|---|
| $W(x, y) = p$ (constant) | Erdos-Renyi $G(n, p)$ |
| $W(x, y) = \mathbf{1}\{x + y \ge 1\}$ | Half-graph |
| $W(x, y) = B_{\lceil kx\rceil, \lceil ky\rceil}$ (step function) | $k$-block SBM |
| $W(x, y) = f(x)\, f(y)$ | Product graphon (Chung-Lu style) |
| $W(x, y) = \mathbf{1}\{x + y \ge t\}$ | Threshold model |
Lovasz-Szegedy theorem (2006): Every convergent sequence of dense graphs (in the cut metric) converges to a graphon. Conversely, every graphon arises as the limit of a graph sequence. The space of graphons with the cut metric is compact.
For AI: Graphons are the natural limiting objects for graph neural networks. A GNN $f$ maps graphs to outputs. If $f$ is continuous in the cut metric, then $f(G_n) \to f(W)$ whenever $G_n \to W$ in cut distance. This is precisely the condition for a GNN to be "robust" to graph size changes.
8.3 Homomorphism Densities and Graph Parameters
Homomorphism density: For graphs $H$ and $G$, the homomorphism density is:
$$t(H, G) = \frac{\hom(H, G)}{|V(G)|^{|V(H)|}},$$
where $\hom(H, G)$ counts graph homomorphisms $H \to G$ (edge-preserving maps $V(H) \to V(G)$).
Examples:
- $t(K_2, G)$ = edge density
- $t(K_3, G)$ = triangle density
- $t(C_4, G)$ = 4-cycle density
Graphon version: For a graphon $W$ and a graph $H$ with $k$ vertices:
$$t(H, W) = \int_{[0,1]^k} \prod_{\{i, j\} \in E(H)} W(x_i, x_j)\; dx_1 \cdots dx_k$$
Theorem (Lovasz-Szegedy): Two graphons $U$ and $W$ are equivalent (isomorphic in the graphon sense, $\delta_\square(U, W) = 0$) iff $t(H, U) = t(H, W)$ for all finite graphs $H$.
Graph parameters from homomorphism densities:
- Triangle density: $t(K_3, G)$
- Clustering coefficient: related to the ratio $t(K_3, G) / t(P_3, G)$ (triangles relative to paths of length 2)
For ML: Substructure counting (homomorphism density estimation) is a key primitive for graph feature extraction. Ring GNNs and structural GNNs compute approximate homomorphism densities; this is one way to understand what graph structure GNNs learn.
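A small sketch of computing homomorphism densities from powers of the adjacency matrix, using $\hom(K_2, G) = \sum_{ij} A_{ij}$ and $\hom(K_3, G) = \mathrm{tr}(A^3)$ (numpy/networkx; the $G(n,p)$ parameters are illustrative):

```python
# Sketch: edge and triangle homomorphism densities of a sampled G(n, p).
import numpy as np
import networkx as nx

n, p = 500, 0.1
G = nx.gnp_random_graph(n, p, seed=0)
A = nx.to_numpy_array(G)

t_edge = A.sum() / n**2                   # t(K2, G)
t_triangle = np.trace(A @ A @ A) / n**3   # t(K3, G)
print(f"t(K2, G) = {t_edge:.4f}    (≈ p = {p})")
print(f"t(K3, G) = {t_triangle:.6f}  (≈ p^3 = {p**3:.6f})")
```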
8.4 Graphon Neural Networks
Definition (Graphon Neural Network, Keriven & Peyre, 2019): A graphon neural network is a sequence of functions $f_n$ (on $n$-node graphs) that converges to a limiting function $f$ on graphons. Formally, $(f_n)$ is a GNN architecture such that if $G_n \to W$ in the cut metric, then $f_n(G_n) \to f(W)$.
Key result: GCN-style message passing with the propagation operator $(T_W \phi)(x) = \int_0^1 W(x, y)\, \phi(y)\, dy$ defines a graphon neural network. The discrete GCN (aggregation with the normalized adjacency matrix) is a finite approximation of this operator.
Universality of graphon NNs: Maron et al. (2019) show that equivariant graph networks are universal approximators on graphons - any graphon property that is measurable can be approximated to arbitrary accuracy.
Limitation: Universality on graphons is density-dependent. For SPARSE graph sequences (edge density $o(1)$), graphons become trivial (the zero graphon), and a different limit theory (graphexes, local weak limits) is needed.
Forward reference: Graphon theory connects directly to functional analysis - the operator $T_W$ is an integral operator on $L^2[0, 1]$. The spectral theory of compact operators (Hilbert-Schmidt theorem) governs its eigenvalue decomposition. -> Full treatment: Functional Analysis