Graph Representations - Part 7 (Incidence Matrix) to Appendix B (Complexity Reference)
7. Incidence Matrix
7.1 Definition and Properties
Definition (Incidence Matrix - Undirected). For a graph $G = (V, E)$ with $n$ vertices and $m$ edges, the incidence matrix $B \in \{0,1\}^{n \times m}$ has entries $B_{ve} = 1$ if vertex $v$ is an endpoint of edge $e$, and $B_{ve} = 0$ otherwise.
Rows are indexed by vertices, columns by edges. Each column has exactly two 1-entries (one per endpoint). Each row has as many 1-entries as the vertex's degree.
Worked example - path $P_4$: vertices $\{0, 1, 2, 3\}$, edges $e_1 = \{0,1\}$, $e_2 = \{1,2\}$, $e_3 = \{2,3\}$:

$$B = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$
Basic properties:
- Each column sums to 2: $\sum_v B_{ve} = 2$ for every edge $e$ (checked in the snippet below)
- Row sum of vertex $v$ = its degree: $\sum_e B_{ve} = \deg(v)$
- Space: $\Theta(nm)$ dense - often worse than the adjacency matrix for dense graphs, but informative for sparse hypergraphs
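A quick NumPy check of these two properties on the $P_4$ example above:

```python
import numpy as np

# Unsigned incidence matrix of the path P4: vertices 0-3, edges {0,1},{1,2},{2,3}
B = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])

print(B.sum(axis=0))   # column sums: [2 2 2]  (two endpoints per edge)
print(B.sum(axis=1))   # row sums:    [1 2 2 1] (vertex degrees)
```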
7.2 Directed Incidence Matrix
For a directed graph (digraph) with each edge $e = (u, v)$ oriented from tail $u$ to head $v$:

$$B_{we} = \begin{cases} +1 & w = v \ \text{(head)} \\ -1 & w = u \ \text{(tail)} \\ 0 & \text{otherwise} \end{cases}$$

The signed incidence matrix encodes edge direction. Each column still has exactly two non-zero entries: $+1$ at the head, $-1$ at the tail.
Curl and divergence on graphs. The directed incidence matrix plays the role of the discrete gradient operator:
- $(B^\top x)_e = x_{\text{head}(e)} - x_{\text{tail}(e)}$ gives the "difference" across each directed edge - a discrete gradient
- $(B f)_v$ gives the net flow divergence at each vertex - the flow in minus the flow out
This structure underlies discrete Hodge theory and graph signal processing.
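A minimal NumPy sketch of both operators; the directed triangle and its orientation below are chosen purely for illustration:

```python
import numpy as np

# Signed incidence matrix of the directed triangle 0->1, 1->2, 2->0
# (orientation chosen for illustration): +1 at the head, -1 at the tail.
B = np.array([
    [-1,  0,  1],   # vertex 0: tail of e1, head of e3
    [ 1, -1,  0],   # vertex 1: head of e1, tail of e2
    [ 0,  1, -1],   # vertex 2: head of e2, tail of e3
])

x = np.array([3.0, 5.0, 4.0])   # a signal on vertices
grad = B.T @ x                  # per-edge differences x_head - x_tail
print(grad)                     # [ 2. -1. -1.]

f = np.array([1.0, 1.0, 1.0])   # a unit circulation around the cycle
print(B @ f)                    # [0. 0. 0.] - divergence-free flow
```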
7.3 The Identity $L = B B^\top$
Theorem. For an undirected graph with (unsigned) incidence matrix $B$: $B B^\top = D + A$ (the signless Laplacian).
Proof. The $(u, v)$ entry of $B B^\top$ is $\sum_e B_{ue} B_{ve}$.
- If $u = v$: $\sum_e B_{ue}^2 = \deg(u)$ (since $B_{ue} \in \{0, 1\}$, this counts edges incident to $u$).
- If $u \neq v$ and $\{u, v\} \in E$: exactly one edge $e$ has $B_{ue} = B_{ve} = 1$, so the entry is $1$.
- If $u \neq v$ and $\{u, v\} \notin E$: no edge contributes, so the entry is $0$.
Thus $B B^\top = D + A$.
Corollary: $D + A$ is positive semidefinite (since $B B^\top \succeq 0$).
For the directed (signed) incidence matrix $B$: the same computation gives $B B^\top = D - A = L$, so the Laplacian is positive semidefinite.
For AI: The identity $L = B B^\top$ gives a factored representation of the Laplacian that is useful in:
- Graph signal processing: High-frequency signals have large $x^\top L x = \|B^\top x\|^2$
- Optimisation: minimising $x^\top L x$ subject to constraints is the basis of Laplacian smoothing in GNNs
- Network flow: $B f = 0$ is the flow conservation constraint (Kirchhoff's current law)
Both identities from the theorem are verified numerically below.
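A minimal NumPy check on the triangle $K_3$; the edge ordering and orientations are chosen for illustration:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])          # triangle K3
D = np.diag(A.sum(axis=1))

# Unsigned incidence: edges {0,1}, {1,2}, {0,2}
B_u = np.array([[1, 0, 1],
                [1, 1, 0],
                [0, 1, 1]])
assert np.array_equal(B_u @ B_u.T, D + A)   # signless Laplacian

# Signed incidence: orientations 0->1, 1->2, 0->2
B_s = np.array([[-1,  0, -1],
                [ 1, -1,  0],
                [ 0,  1,  1]])
assert np.array_equal(B_s @ B_s.T, D - A)   # Laplacian L
```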
7.4 Hypergraph Incidence Matrix
The incidence matrix generalises naturally to hypergraphs - graphs where edges (hyperedges) can connect more than two vertices.
Definition (Hypergraph Incidence Matrix). For a hypergraph $H = (V, \mathcal{E})$ where each hyperedge $e \in \mathcal{E}$ is a subset of vertices: $B_{ve} = 1$ if $v \in e$, and $0$ otherwise.
Each column $e$ has $|e|$ ones (the size of the hyperedge). The ordinary graph incidence matrix is the special case $|e| = 2$ for all edges.
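A toy example (the hyperedges below are invented for illustration):

```python
import numpy as np

# Hypergraph on 4 vertices with hyperedges e1={0,1,2}, e2={2,3}, e3={0,3}
# (a toy co-authorship example: papers as hyperedges, authors as vertices)
B = np.array([
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
])

print(B.sum(axis=0))   # hyperedge sizes |e|: [3 2 2]
print(B.sum(axis=1))   # vertex degrees (papers per author): [2 1 2 2]
```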
For AI: Hypergraph representations arise in:
- Group interactions: Social events involving many participants simultaneously
- Multi-head attention: Each attention head defines a weighted hyperedge over all query-key pairs
- Co-authorship networks: Each paper is a hyperedge connecting all its authors
- Higher-order GNNs: $k$-WL tests and $k$-dimensional GNNs operate on $k$-tuples of vertices, naturally represented as hyperedges
7.5 When to Use the Incidence Matrix
Use the incidence matrix when:
- The primary operations are edge-based (flow, cut, cycle) rather than vertex-based
- Working with hypergraphs (the incidence matrix is the natural structure)
- Implementing Hodge decomposition or discrete exterior calculus
- Teaching or deriving the Laplacian algebraically ($L = B B^\top$ for the signed incidence matrix)
Avoid when:
- $n \cdot m$ is large (dense space is $\Theta(nm)$, often worse than alternatives)
- The primary need is neighbour enumeration or edge queries
8. Representation Conversions
8.1 Conversion Complexity Table
| From -> To | Time | Space | Notes |
|---|---|---|---|
| Edge list -> Adjacency matrix | $O(n^2 + m)$ | $O(n^2)$ | Allocate matrix, fill entries |
| Edge list -> Adjacency list | $O(n + m)$ | $O(n + m)$ | Hash each edge to its endpoints |
| Edge list -> COO | $O(m)$ | $O(m)$ | Repack into parallel arrays |
| Edge list -> CSR | $O(m \log m)$ | $O(n + m)$ | Sort by source, prefix-sum |
| Adjacency matrix -> Edge list | $O(n^2)$ | $O(m)$ | Scan for non-zeros |
| Adjacency matrix -> CSR | $O(n^2)$ | $O(n + m)$ | Scan row by row |
| COO -> CSR | $O(m \log m)$ | $O(n + m)$ | Sort rows, prefix-sum |
| CSR -> COO | $O(m)$ | $O(m)$ | Expand row_ptr |
| CSR -> CSC | $O(n + m)$ | $O(n + m)$ | Transpose |
| CSR -> Adjacency list | $O(n + m)$ | $O(n + m)$ | One list per row |
8.2 Edge List <-> Adjacency Matrix
```python
import numpy as np

def edge_list_to_matrix(edges, n, directed=False, weighted=False):
    """Convert edge list to numpy adjacency matrix. O(n^2 + m)."""
    A = np.zeros((n, n), dtype=float if weighted else int)
    for edge in edges:
        u, v = edge[0], edge[1]
        w = edge[2] if weighted and len(edge) > 2 else 1
        A[u, v] = w
        if not directed:
            A[v, u] = w
    return A

def matrix_to_edge_list(A, directed=False, weighted=False):
    """Convert adjacency matrix to edge list. O(n^2)."""
    n = len(A)
    edges = []
    for i in range(n):
        j_range = range(n) if directed else range(i + 1, n)
        for j in j_range:
            if A[i, j] != 0:
                edges.append((i, j, A[i, j]) if weighted else (i, j))
    return edges
```
8.3 Edge List <-> Adjacency List
```python
def edge_list_to_adj_list(edges, n, directed=False):
    """Convert edge list to adjacency list. O(n + m)."""
    adj = {i: [] for i in range(n)}
    for edge in edges:
        u, v = edge[0], edge[1]
        adj[u].append(v)
        if not directed:
            adj[v].append(u)
    return adj

def adj_list_to_edge_list(adj, directed=False):
    """Convert adjacency list to edge list. O(n + m)."""
    edges = set()
    for u, nbrs in adj.items():
        for v in nbrs:
            if directed or (v, u) not in edges:
                edges.add((u, v))
    return list(edges)
```
8.4 Adjacency Matrix <-> CSR
```python
def matrix_to_csr(A):
    """Dense matrix to CSR. O(n^2) scan."""
    n = len(A)
    data, col_idx, row_ptr = [], [], [0]
    for i in range(n):
        for j in range(n):
            if A[i, j] != 0:
                data.append(A[i, j])
                col_idx.append(j)
        row_ptr.append(len(data))   # one entry per row, after scanning it
    return np.array(data), np.array(col_idx), np.array(row_ptr)

# In practice: use scipy.sparse
import scipy.sparse as sp

A_csr = sp.csr_matrix(A_dense)   # A_dense: any dense numpy array
A_dense_back = A_csr.toarray()
```
8.5 COO <-> CSR <-> CSC
```python
import numpy as np
import scipy.sparse as sp

def coo_to_csr(row, col, data, n):
    """COO to CSR via sorting. O(m log m)."""
    # Sort by (row, col): lexsort uses its last key as the primary key
    order = np.lexsort((col, row))
    row_sorted, col_sorted = row[order], col[order]
    data_sorted = data[order]
    # Build row_ptr via counting
    row_counts = np.bincount(row_sorted, minlength=n)
    row_ptr = np.zeros(n + 1, dtype=int)
    np.cumsum(row_counts, out=row_ptr[1:])
    return data_sorted, col_sorted, row_ptr

# Using scipy (recommended in practice):
A_csr = sp.csr_matrix((data, (row, col)), shape=(n, n))
A_csc = A_csr.tocsc()   # O(m) transpose
A_coo = A_csr.tocoo()   # O(m) expansion
```
8.6 PyG <-> NetworkX <-> SciPy
```python
import networkx as nx
import numpy as np
import scipy.sparse as sp
import torch

# NetworkX -> PyG edge_index
def nx_to_edge_index(G):
    edges = list(G.edges())
    if not G.is_directed():
        # Add reverse edges for undirected graphs
        edges += [(v, u) for u, v in edges]
    return torch.tensor(edges, dtype=torch.long).t().contiguous()

# PyG edge_index -> NetworkX
def edge_index_to_nx(edge_index, directed=True):
    G = nx.DiGraph() if directed else nx.Graph()
    G.add_edges_from(edge_index.t().tolist())
    return G

# SciPy CSR -> PyG edge_index
def csr_to_edge_index(A_csr):
    coo = A_csr.tocoo()
    return torch.tensor(np.array([coo.row, coo.col]), dtype=torch.long)

# PyG edge_index -> SciPy CSR
def edge_index_to_csr(edge_index, n):
    row, col = edge_index[0].numpy(), edge_index[1].numpy()
    data = np.ones(len(row))
    return sp.csr_matrix((data, (row, col)), shape=(n, n))
```
9. Heterogeneous and Dynamic Graphs
9.1 Heterogeneous Graphs
A heterogeneous graph (also called a hetero-graph or knowledge graph) has multiple types of nodes and/or edges: $G = (V, E, \tau, \phi)$,
where $\tau: V \to \mathcal{T}_V$ assigns each vertex a node type and $\phi: E \to \mathcal{T}_E$ assigns each edge a relation type.
Example - academic knowledge graph:
- Node types: Author, Paper, Venue
- Edge types: writes (Author->Paper), published_in (Paper->Venue), cites (Paper->Paper), co-author (Author<->Author)
For each relation type (source type, relation, target type), we maintain a separate adjacency matrix or edge_index tensor.
With $|\mathcal{T}_V|$ node types and $|\mathcal{T}_E|$ relation types, there are at most $|\mathcal{T}_V|^2 \cdot |\mathcal{T}_E|$ such matrices - each typically much smaller than the full graph.
Knowledge graph triple format. Knowledge graphs (Freebase, Wikidata, OGB-MAG) store relations as triples $(h, r, t)$ - head entity, relation, tail entity. This is an edge list over a heterogeneous graph:
```python
triples = [
    ("Albert_Einstein", "nationality", "Germany"),
    ("Theory_of_Relativity", "author", "Albert_Einstein"),
    ...
]
```
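A minimal sketch of turning such triples into per-relation edge_index tensors; the function name and the first-appearance ID assignment are illustrative choices, not part of any library API:

```python
import torch
from collections import defaultdict

def triples_to_edge_indices(triples):
    """Group (head, relation, tail) triples into one edge_index per relation.
    Entity IDs are assigned in order of first appearance (illustrative)."""
    ent_id, rel_edges = {}, defaultdict(list)
    for h, r, t in triples:
        hi = ent_id.setdefault(h, len(ent_id))
        ti = ent_id.setdefault(t, len(ent_id))
        rel_edges[r].append((hi, ti))
    edge_indices = {r: torch.tensor(e, dtype=torch.long).t().contiguous()
                    for r, e in rel_edges.items()}
    return edge_indices, ent_id
```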
9.2 PyG HeteroData
PyG's HeteroData object generalises Data to heterogeneous graphs:
```python
import torch
from torch_geometric.data import HeteroData

data = HeteroData()
# Node feature matrices per type
data['author'].x = torch.randn(n_authors, 64)
data['paper'].x = torch.randn(n_papers, 128)
# Edge indices and features per relation type
data['author', 'writes', 'paper'].edge_index = ...   # (2, m_writes)
data['paper', 'cites', 'paper'].edge_index = ...     # (2, m_cites)
data['paper', 'cites', 'paper'].edge_attr = ...      # (m_cites, d_e)
print(data.node_types)   # ['author', 'paper']
print(data.edge_types)   # [('author','writes','paper'), ('paper','cites','paper')]
```
Heterogeneous message passing (RGCN, HAN, HGT) iterates over relation types and aggregates messages per relation, e.g. the RGCN update

$$h_v^{(l+1)} = \sigma\Big( W_0 h_v^{(l)} + \sum_{r \in \mathcal{R}} \sum_{u \in N_r(v)} \tfrac{1}{|N_r(v)|} W_r h_u^{(l)} \Big)$$
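A minimal sketch of that per-relation loop (illustrative only - this is not PyG's RGCNConv; the function name, dict layout, and mean normalisation are assumptions):

```python
import torch

def hetero_message_pass(x_dict, edge_index_dict, weight_dict, self_weight):
    """One heterogeneous message-passing step (sketch, mean aggregation).
    x_dict: node features per type; edge_index_dict: edge_index per
    (src_type, relation, dst_type); weight_dict: one matrix per relation."""
    out = {t: x @ self_weight[t] for t, x in x_dict.items()}   # self-loop term
    for (src_t, rel, dst_t), ei in edge_index_dict.items():
        src, dst = ei[0], ei[1]
        msg = x_dict[src_t][src] @ weight_dict[rel]            # transform per edge
        agg = torch.zeros_like(out[dst_t])
        agg.index_add_(0, dst, msg)                            # sum messages per target
        deg = torch.zeros(out[dst_t].size(0)).index_add_(
            0, dst, torch.ones(dst.size(0))).clamp(min=1)
        out[dst_t] += agg / deg.unsqueeze(1)                   # mean per relation
    return {t: torch.relu(h) for t, h in out.items()}
```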
9.3 Dynamic Graphs
A dynamic graph changes over time: edges and vertices are inserted or deleted. The primary representation challenge is supporting updates without full reconstruction.
The adjacency list handles edge insertion in $O(1)$ and deletion in $O(\deg(u))$. It is the standard choice for dynamic graph algorithms (see the sketch after the list below).
CSR is static: every insertion or deletion requires rebuilding the three arrays - $O(n + m)$ total. For high-frequency updates, CSR is inappropriate.
Dynamic CSR alternatives:
- Chunked CSR: Divide the adjacency list into fixed-size chunks; insertions overflow into a "delta" structure
- VCSR (Vertex-Centric CSR): Each row is a separate resizeable array; updates are local
- Sorted adjacency list: Maintain sorted neighbours per vertex for edge lookup
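A minimal dynamic adjacency-list sketch showing the update costs stated above (class name illustrative; production systems add tombstones or chunked storage):

```python
class DynamicGraph:
    """Adjacency-list dynamic graph: O(1) amortised insert, O(deg) delete."""
    def __init__(self, n):
        self.adj = [[] for _ in range(n)]

    def add_edge(self, u, v):
        self.adj[u].append(v)   # O(1) amortised append
        self.adj[v].append(u)

    def remove_edge(self, u, v):
        self.adj[u].remove(v)   # O(deg(u)) scan
        self.adj[v].remove(u)   # O(deg(v)) scan

g = DynamicGraph(4)
g.add_edge(0, 1); g.add_edge(1, 2)
g.remove_edge(0, 1)
print(g.adj)   # [[], [2], [1], []]
```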
9.4 Temporal Graphs and Snapshots
A temporal graph has timestamped edges $E \subseteq V \times V \times \mathbb{R}_{\ge 0}$: each edge is a triple $(u, v, t)$.
The most common representation is a snapshot list $\{G_1, \ldots, G_T\}$, where $G_t$ is the graph at time step $t$. Each snapshot can be stored as a separate Data object:
```python
snapshots = [
    Data(x=x_t, edge_index=edge_index_t)
    for t, (x_t, edge_index_t) in enumerate(time_series)
]
```
For AI - temporal GNNs:
- TGAT (Temporal Graph Attention, 2020): Stores interactions as chronologically sorted edge lists with timestamp features; uses time-encoding functions as positional features
- DyRep (2019): Maintains a dynamic adjacency list updated via temporal point processes
- TGN (Temporal Graph Networks, 2020): Uses memory modules per node to summarise temporal history; edge list sorted by timestamp is the core data structure
The temporal edge list sorted by $t$ is the standard input format for temporal GNN benchmarks (JODIE, REDDIT datasets in OGB-Temporal). A sketch of building snapshots from such a list follows.
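A minimal sketch, assuming src, dst, t are 1-D tensors sorted by time; the fixed-window bucketing and function name are illustrative choices:

```python
import torch
from torch_geometric.data import Data

def edges_to_snapshots(src, dst, t, x, window):
    """Bucket a chronologically sorted temporal edge list (src, dst, t)
    into per-window Data snapshots. Sketch: fixed windows, shared features."""
    snapshots = []
    lo, t_end = t.min().item(), t.max().item()
    while lo <= t_end:
        mask = (t >= lo) & (t < lo + window)           # edges in this window
        ei = torch.stack([src[mask], dst[mask]])       # (2, m_window)
        snapshots.append(Data(x=x, edge_index=ei))
        lo += window
    return snapshots
```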
9.5 Bipartite Graph Representations
Many real-world graphs are bipartite: the vertices split into two disjoint sets $U$ and $V$, with edges only between $U$ and $V$ - no edges within $U$ or within $V$.
Examples: User-item interactions (recommendation systems), author-paper graphs, word-document matrices, student-course enrollment.
Bipartite graphs require rectangular adjacency matrices rather than square matrices. This changes the representation:
| Representation | Bipartite form |
|---|---|
| Adjacency matrix | $A \in \{0,1\}^{\lvert U\rvert \times \lvert V\rvert}$ (rectangular) |
| Adjacency list | adj_u[u] lists vertices in $V$; adj_v[v] lists vertices in $U$ |
| Edge list | pairs $(u, v)$ with $u \in U$, $v \in V$ |
| COO | edge_index with source indices in $[0, \lvert U\rvert)$ and target indices in $[0, \lvert V\rvert)$ |
In PyG, bipartite graphs use a 2-tuple node feature convention:
```python
data = Data(
    x=(x_u, x_v),            # node features for U and V separately
    edge_index=edge_index,   # source in [0,|U|), target in [0,|V|)
)
```
For AI - collaborative filtering. The matrix factorization model for recommendation can be viewed as operating on a bipartite user-item graph with adjacency $R \in \mathbb{R}^{\lvert U\rvert \times \lvert I\rvert}$:

$$\hat{R} = P Q^\top,$$

where $P \in \mathbb{R}^{\lvert U\rvert \times d}$ and $Q \in \mathbb{R}^{\lvert I\rvert \times d}$ are learned embeddings. LightGCN (He et al., 2020) applies GCN layers directly to this bipartite graph, propagating user embeddings through item nodes and vice versa. The core operation is two rectangular SpMM passes: $R Q$ (aggregate items -> users) and $R^\top P$ (aggregate users -> items), sketched below.
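A sketch of those two passes with scipy; the function name is illustrative, and LightGCN's full model additionally averages embeddings over layers:

```python
import numpy as np
import scipy.sparse as sp

def bipartite_propagate(R, P, Q):
    """One LightGCN-style propagation step on a user-item bipartite graph.
    R: (|U| x |I|) sparse interaction matrix; P, Q: user/item embeddings."""
    du = np.asarray(R.sum(axis=1)).ravel().clip(min=1).astype(float)  # user degrees
    di = np.asarray(R.sum(axis=0)).ravel().clip(min=1).astype(float)  # item degrees
    R_norm = sp.diags(du ** -0.5) @ R @ sp.diags(di ** -0.5)  # symmetric normalisation
    P_new = R_norm @ Q        # aggregate items -> users (rectangular SpMM)
    Q_new = R_norm.T @ P      # aggregate users -> items
    return P_new, Q_new
```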
9.6 Multiplex and Signed Graphs
Multiplex graphs have multiple layers of edges between the same vertex set - different edge types exist in parallel: $G = (V, E_1, \ldots, E_K)$.
Each layer $k$ is a separate adjacency matrix $A^{(k)}$. The representation is a tensor $\mathcal{A} \in \{0,1\}^{K \times n \times n}$ (a stack of adjacency matrices).
Applications: Social networks with multiple relation types (follows, likes, messages), temporal layers (contacts at different time windows), transportation networks (road, rail, air in the same city).
Signed graphs have edges with positive or negative weights - representing trust/distrust, attraction/repulsion, or cooperation/competition. The adjacency matrix has entries in $\{-1, 0, +1\}$ (or $\mathbb{R}$ for weighted signed graphs). Signed graph Laplacians have non-standard spectral properties; they are studied in signed spectral graph theory (one standard construction is sketched below).
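A minimal sketch of the signed Laplacian with absolute-value degrees, a standard construction in signed spectral graph theory (the example matrix is invented for illustration):

```python
import numpy as np

def signed_laplacian(A):
    """Signed Laplacian L = D_bar - A with D_bar_ii = sum_j |A_ij|.
    Positive semidefinite for any signed adjacency."""
    D_bar = np.diag(np.abs(A).sum(axis=1))
    return D_bar - A

A = np.array([[ 0,  1, -1],
              [ 1,  0,  1],
              [-1,  1,  0]])      # toy trust/distrust triangle
L = signed_laplacian(A)
print(np.linalg.eigvalsh(L) >= -1e-9)   # all True: PSD
```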
For AI: Signed graphs arise in:
- Sentiment analysis: Stance graphs where edges are agree/disagree
- Financial networks: Positive = correlated assets, negative = anti-correlated
- Recommendation: Explicit dislikes as negative edges in collaborative filtering
- RL reward shaping: Signed reward graphs encode preference relations
10. Choosing a Representation
10.1 Decision Framework
```
REPRESENTATION SELECTION FLOWCHART
========================================================================
START
  |
Is n <= 10,000 AND dense?
  +---------+----------+
 YES                  NO
  |                    |
Adjacency Matrix   Is graph dynamic (frequent updates)?
                     +---------+----------+
                    YES                  NO
                     |                    |
              Adjacency List       Target GPU / PyG?
                                     +---------+----------+
                                    YES                  NO
                                     |                    |
                                edge_index       Need SpMV / eigenvalues?
                                (COO/PyG)          +---------+----------+
                                                  YES                  NO
                                                   |                    |
                                                  CSR            Adj. List or
                                            (scipy.sparse)        Edge List
========================================================================
```
10.2 By Graph Size and Density
| Graph Size | Density | Recommended | Rationale |
|---|---|---|---|
| $n \le 10^3$ | Any | Adjacency matrix | Simplest; $O(n^2)$ is fine |
| $n \le 10^4$ | Dense | Adjacency matrix | Dense matmul is GPU-efficient |
| $n > 10^4$ | Sparse | CSR or adj. list | $O(n + m)$ memory |
| $m > 10^9$ | Any | CSR + edge list | Billion-edge graphs need streaming |
| Batch of small graphs | Any | Dense padded tensors or PyG batch | GPU efficiency via padding |
10.3 By Algorithm Type
| Algorithm | Best Representation | Reason |
|---|---|---|
| BFS / DFS | Adjacency list | Sequential neighbour access |
| Dijkstra / Bellman-Ford | Adjacency list (sorted) | Priority queue needs fast neighbours |
| PageRank / power iteration | CSR | Repeated SpMV |
| Spectral clustering | CSR -> eigsh | Laplacian eigenvectors via ARPACK |
| GCN / MPNN | COO / edge_index | GPU scatter operations |
| Link prediction | Adjacency set | Fast edge existence queries |
| Subgraph sampling | CSR | Row-slice for $k$-hop neighbourhood |
| Shortest path (all pairs) | Adjacency matrix | Floyd-Warshall uses matrix |
10.4 By Hardware Target
CPU (single node):
- SciPy CSR for numerical work
- NetworkX adjacency dict for prototyping
- Numba-JIT over CSR arrays for performance
GPU (single card):
- PyG `edge_index` + cuSPARSE SpMM
- For very large graphs: PyG with `NeighborSampler` (CSR-based sampling)
- DGL for production GNN training (CSR internally, CUDA-optimised)
Multi-GPU / distributed:
- Partitioned edge lists (METIS graph partitioning)
- DistDGL, GraphLearn, or AliGraph for trillion-edge graphs
- Edge list files partitioned on HDFS / S3
TPU:
- Padded dense adjacency matrices (fixed shape required for XLA compilation)
- Jraph (JAX graph library) uses padded dense representation
10.5 By Framework
| Framework | Primary format | Notes |
|---|---|---|
| NetworkX | dict of dict (adj. dict) | Pure Python; slow but feature-rich |
| PyTorch Geometric | edge_index (COO) | Standard for GNN research |
| DGL | CSR DGLGraph | Optimised for large heterogeneous graphs |
| SciPy / NumPy | CSR csr_matrix | Numerical computing; integrates with ARPACK |
| TensorFlow GNN | GraphTensor (COO-based) | TF-native graph representation |
| Jraph (JAX) | Padded dense GraphsTuple | TPU-optimised |
| SNAP (Stanford) | Binary edge lists | Ultra-large graphs (billions of edges) |
10.6 Case Study: Representing the OGB-Arxiv Graph
The OGB-Arxiv dataset (Open Graph Benchmark) is a directed citation network: 169,343 nodes (papers) and 1,166,243 edges (citations). Let us trace the representation choices made by three frameworks:
NetworkX (baseline, not recommended for training):
```python
G = nx.DiGraph()
G.add_edges_from(edge_list)
# Memory: ~800 MB (Python dict overhead)
# Time to iterate all edges: ~12 s
```
PyTorch Geometric (research standard):
```python
from ogb.nodeproppred import PygNodePropPredDataset

dataset = PygNodePropPredDataset('ogbn-arxiv')
data = dataset[0]
# data.edge_index: shape (2, 2_332_486) = bidirectional edges
# data.x: shape (169_343, 128) = feature matrix
# Memory: ~37 MB for edge_index (int64)
# Training time (3-layer GCN, epoch): ~0.4 s on A100
```
DGL (production):
```python
g = dgl.graph((src, dst), num_nodes=n)
# Internal: CSR for forward pass, CSC for backward pass
# Memory: ~20 MB (int32 indices)
# Training time (3-layer GCN, epoch): ~0.3 s on A100 (faster due to CSR)
```
The choice between PyG COO and DGL CSR is a 25% throughput difference at this scale - a significant factor in research iteration speed.
11. Preview: Algorithms and Spectral Methods
11.1 How 03 Graph Algorithms Uses These Representations
Graph algorithms depend critically on the choice of representation. The asymptotic complexities in 03 are derived assuming specific representations:
| Algorithm | Representation | Complexity |
|---|---|---|
| BFS | Adjacency list | $O(n + m)$ |
| Dijkstra (binary heap) | Adjacency list | $O((n + m) \log n)$ |
| Bellman-Ford | Edge list | $O(nm)$ |
| Kruskal MST | Edge list (sorted) | $O(m \log m)$ |
| Prim MST | Adjacency list + heap | $O(m \log n)$ |
| Floyd-Warshall | Adjacency matrix | $O(n^3)$ |
| DFS-based SCC | Adjacency list | $O(n + m)$ |
Using the wrong representation can change asymptotic complexity: Dijkstra with an adjacency matrix runs in $O(n^2)$ instead of $O((n + m) \log n)$ - a factor of roughly $n / \log n$ slower for sparse graphs. Full algorithmic treatment: 03 Graph Algorithms.
11.2 How 04 Spectral Theory Uses the Laplacian
The Laplacian $L = D - A$ defined in 3.4 is the central object of spectral graph theory. Its eigendecomposition reveals:
- $\lambda_1 = 0$ always (every graph has at least one component)
- Number of zero eigenvalues = number of connected components
- $\lambda_2$ (Fiedler value): measures connectivity strength
- Eigenvector $v_2$ (Fiedler vector): partitions the graph for spectral clustering
Computing the Fiedler vector requires eigsh(L, k=2, which='SM') - a sparse eigensolver that takes CSR format as input. The representation choice (CSR) directly enables the spectral analysis, as in the sketch below.
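A minimal sketch on a small invented graph (two triangles joined by a bridge), showing the CSR-to-eigsh pipeline:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Two triangles {0,1,2} and {3,4,5} joined by a single bridge edge (2,3)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n = 6
row, col = zip(*(edges + [(v, u) for u, v in edges]))   # both directions
A = sp.csr_matrix((np.ones(len(row)), (row, col)), shape=(n, n))
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A     # L = D - A, CSR

vals, vecs = eigsh(L, k=2, which='SM')   # two smallest eigenvalues
fiedler = vecs[:, 1]                     # eigenvector for lambda_2
print(np.sign(fiedler))                  # opposite signs split the two triangles
```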
Full spectral treatment: 04 Spectral Graph Theory.
11.3 How 05 GNNs Use edge_index and Sparse Adjacency
The three core GNN architectures use the representations from this section differently:
| Architecture | Representation Used | Core Operation |
|---|---|---|
| GCN (Kipf & Welling, 2017) | Sparse $\hat{A}$ (CSR or COO) | $\hat{A} X W$ - SpMM |
| GraphSAGE (Hamilton et al., 2017) | Adjacency list (neighbour sampling) | Sample a fixed number of neighbours per node |
| GAT (Velickovic et al., 2018) | edge_index (COO) | Attention per edge: $\alpha_{uv}$ computed and aggregated |
Full GNN architectures, expressiveness theory, and training details: 05 Graph Neural Networks.
12. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Using a dense adjacency matrix for a million-node graph | $O(n^2)$ space $\approx 4 \times 10^{12}$ bytes (4 TB) for $n = 10^6$ - physically impossible | Use CSR or edge_index; dense is only viable for $n \lesssim 10^4$ |
| 2 | Forgetting that edge_index is directed by default in PyG | Undirected edges require both $(u, v)$ and $(v, u)$ to appear; missing the reverse direction causes asymmetric message passing | Use torch_geometric.utils.to_undirected(edge_index) or add reverse edges explicitly |
| 3 | Using $A^T$ (transpose notation) instead of $A^\top$ (\top) for the transpose | Cosmetic but matters: in NOTATION_GUIDE, transpose is ^\top, not ^T - "$A^T$" looks like a product with a variable named $T$ | Write $A^\top$ consistently |
| 4 | Assuming the Laplacian is $A - D$ instead of $D - A$ | The sign matters: $D - A$ is positive semidefinite; $A - D$ is negative semidefinite. GCN derivations assume $L = D - A$ | Always write $L = D - A$ and verify $x^\top L x \ge 0$ |
| 5 | Confusing CSR row_ptr length with $n$ (instead of $n + 1$) | row_ptr has $n + 1$ entries: row_ptr[0] = 0, row_ptr[n] = m. Indexing row_ptr[n] out-of-bounds crashes code | Always allocate row_ptr = np.zeros(n+1); the extra entry is essential |
| 6 | Adding self-loops before normalising in GCN | GCN normalisation is $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ where $\tilde{A} = A + I$. If you add self-loops to $A$ first and then normalise with the original $D$, degrees are wrong | Apply self-loops and recompute the degree $\tilde{D}$ before normalising |
| 7 | Using adjacency matrix eigendecomposition when the graph is sparse and large | np.linalg.eig(A) is $O(n^3)$ - infeasible for $n \gtrsim 10^4$ | Use scipy.sparse.linalg.eigsh with ARPACK for sparse matrices; it computes only the $k$ requested eigenvalues without densifying |
| 8 | Treating the incidence matrix as $m \times n$ instead of $n \times m$ | The incidence matrix $B$ has $n$ rows (vertices) and $m$ columns (edges). Confusing rows and columns leads to wrong shapes in $B B^\top$ | Verify: B.shape == (n, m); B @ B.T gives $D + A$ (unsigned) or $L$ (signed) |
| 9 | Storing undirected graphs as directed without both directions | If adj[u] contains v but adj[v] doesn't contain u, BFS/DFS will give wrong results and message passing will be asymmetric | Always add both adj[u].append(v) and adj[v].append(u) for undirected graphs |
| 10 | Using COO format for large repeated SpMV | COO lacks row-pointer structure, so each SpMV must sort entries - $O(m \log m)$ per call. For PageRank or GNN training (many iterations), this is catastrophic | Convert COO to CSR once before the iteration loop; repeated SpMV on CSR is $O(n + m)$ per call |
13. Exercises
Exercise 1 * - Manual CSR Construction
Given the path graph $P_4$ with vertices $\{0, 1, 2, 3\}$ and edges $\{0,1\}, \{1,2\}, \{2,3\}$:
(a) Write out the adjacency matrix $A$.
(b) Manually construct the CSR arrays data, col_idx, row_ptr.
(c) Verify: data[row_ptr[2]:row_ptr[3]] gives the non-zeros in row 2.
(d) Compute the degree of each vertex from row_ptr.
Exercise 2 * - Space Complexity Comparison
For a graph with $n$ vertices and $m$ edges:
(a) Compute the exact number of bytes required for a dense float32 adjacency matrix.
(b) Compute the bytes for CSR representation (int32 indices, float32 data).
(c) Compute the bytes for edge_index (int64 COO).
(d) What fill ratio would make the dense matrix and CSR consume equal memory?
Exercise 3 * - Laplacian Derivation
For the star graph $K_{1,4}$ (centre vertex 0, leaves 1, 2, 3, 4):
(a) Write the adjacency matrix $A$ and degree matrix $D$. (b) Compute $L = D - A$. (c) Verify $L \mathbf{1} = \mathbf{0}$. (d) Compute the quadratic form $x^\top L x$ for an indicator vector of your choice (e.g. $x = e_0$). What does this value measure?
Exercise 4 ** - Conversion Pipeline
Implement the full conversion pipeline:
(a) Start with the edge list [(0,1),(0,2),(1,3),(2,3),(3,4)], $n = 5$.
(b) Convert to adjacency list (Python dict).
(c) Convert adjacency list to COO arrays.
(d) Convert COO to CSR manually (without scipy).
(e) Implement SpMV using your CSR arrays; verify against A @ x for $x = \mathbf{1}$ (should return the degree sequence).
Exercise 5 ** - Normalised Adjacency
For the cycle graph $C_4$:
(a) Construct $A$ (adjacency matrix) and $D$ (degree matrix). (b) Compute the GCN normalised adjacency $A_{\mathrm{sym}} = D^{-1/2} A D^{-1/2}$. (c) Compute the augmented version: $\tilde{A} = A + I$, $\tilde{D} = D + I$, $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$. (d) Verify that the eigenvalues of $A_{\mathrm{sym}}$ lie in $[-1, 1]$ and the eigenvalues of $\hat{A}$ lie in $(-1, 1]$. (e) Explain: why does the normalised Laplacian's eigenvalue bound matter for GNN stability?
Exercise 6 ** - Incidence Matrix and Laplacian
For the triangle graph $K_3$:
(a) Construct the incidence matrix $B \in \{0,1\}^{3 \times 3}$. (b) Compute $B B^\top$ and verify it equals $D + A$. (c) Construct the directed incidence matrix (orient the edges $0 \to 1$, $1 \to 2$, $0 \to 2$). (d) Verify $B B^\top = D - A = L$ holds for this signed $B$. (e) Interpret $(B^\top x)_e$ for edge $e = (0, 1)$: what is it measuring geometrically on the graph?
Exercise 7 *** - PyG Data Object from Scratch
Without using PyG's built-in loaders, construct a torch_geometric.data.Data object from scratch for the karate club graph:
(a) Load the karate club edge list from networkx.karate_club_graph().
(b) Build edge_index (COO, bidirectional, shape $(2, 156)$) without using from_networkx.
(c) Build x as a one-hot encoding of each vertex's community (use G.nodes[v]['club']).
(d) Compute edge_attr as the normalised weight for each edge (karate club edges have weights from G[u][v]['weight']).
(e) Verify: data.num_nodes == 34, data.num_edges == 2*78 == 156, data.is_undirected() == True.
Exercise 8 *** - Heterogeneous Graph Representation
Consider an academic graph with:
- 100 authors, 200 papers, 20 venues
- Relations: (author, writes, paper) with 400 edges; (paper, published_in, venue) with 200 edges; (paper, cites, paper) with 800 edges
(a) Design the heterogeneous adjacency representation: how many separate edge_index tensors? What are their shapes?
(b) Compute the total memory for all edge_index tensors (int64, COO bidirectional where appropriate).
(c) Implement a simple heterogeneous degree function: for each author, count the number of papers they wrote.
(d) Write the RGCN-style aggregation for author nodes in pseudocode:

$$h_a' = \sigma\Big( W_0 h_a + \frac{1}{|N_{\text{writes}}(a)|} \sum_{p \in N_{\text{writes}}(a)} W_{\text{writes}} h_p \Big)$$

where $N_{\text{writes}}(a)$ is the set of papers written by author $a$.
(e) Implement this aggregation using torch_scatter.scatter_mean on the (author, writes, paper) edge_index.
14. Why This Matters for AI (2026 Perspective)
| Concept | AI Impact |
|---|---|
| Adjacency matrix | Defines the GCN propagation rule $\hat{A} X W$; walk powers $A^k$ count paths used in subgraph GNNs; $A$'s eigenspectrum is the basis of spectral GNNs |
| Normalised adjacency | Core of GCN (Kipf & Welling, 2017): $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ controls feature smoothing across neighbourhoods; symmetric normalisation prevents gradient explosion in deep GNNs |
| Laplacian | Defines graph signal "frequency"; low-frequency (smooth) signals are aligned with small eigenvalues; over-smoothing in deep GNNs corresponds to projecting onto the null space of $L$ |
| CSR / CSC formats | Enable SpMM in all production GNN frameworks; DGL's CSR-based DGLGraph is the internal representation behind thousands of GNN papers |
| PyG edge_index | The de-facto standard for GNN research (2017-present); shapes all of PyG's 80+ model implementations; enables batching, sampling, and differentiable graph operations |
| Heterogeneous adjacency | RGCN, HAN, HGT, and knowledge graph embedding models (TransE, RotatE) all rely on per-relation adjacency; the 2024 OGB knowledge graph leaderboard is dominated by heterogeneous GNNs |
| Temporal edge lists | TGN, CAWN, and GraphMixer use chronologically sorted edge lists; the OGB-Temporal benchmark measures dynamic link prediction on graphs with timed interactions |
| Hypergraph incidence | HOGNNs (Higher-Order GNNs) operate on $k$-tuples stored as hyperedge incidence matrices; all $k$-WL expressiveness results are stated in terms of hypergraph structure |
| Padded dense adjacency | JAX/Jraph for TPU requires fixed shapes; padded dense enables XLA compilation; models like EigenGNN and SAN use full self-attention (implicit dense $A$) for small molecular graphs |
| Sparse attention patterns | FlashAttention uses a block-sparse adjacency structure for efficient long-sequence attention; the "local window" and "strided" patterns are special cases of sparse edge_index |
| Bipartite adjacency $A \in \mathbb{R}^{\lvert U\rvert \times \lvert V\rvert}$ | Collaborative filtering (matrix factorisation, LightGCN) operates on the rectangular user-item adjacency; the core operation is rectangular SpMM |
| Signed adjacency | Sentiment analysis (agree/disagree), financial correlation, and social balance theory all require signed graph representations; signed Laplacians have distinct spectral properties |
| Dynamic edge lists | Streaming graph frameworks (Flink-based TGN, Kafka-based dynamic GNNs) ingest temporal edge lists in real time; the edge list with timestamps is the universal streaming format |
14.1 Representation as a Research Lever
The history of GNN research shows that representation choices have directly enabled new research directions:
- 2017: `edge_index` (COO) in PyG enabled arbitrary graph structures - not just grid or sequence graphs - to be processed in a single framework. This unlocked the explosion of GNN variants.
- 2019: DGL's CSR-based `DGLGraph` with heterogeneous support enabled the RGCN, HAN, and HGT architectures for knowledge graphs.
- 2020: The OGB benchmark's standardised edge list format enabled reproducible comparison across GNN architectures for the first time.
- 2021: Padded dense adjacency in Jraph/JAX enabled TPU-based GNN training, 10-40× faster than GPU for certain molecular property prediction tasks.
- 2022: Block-sparse attention patterns (BigBird, Longformer) showed that limiting the attention graph to a sparse `edge_index` (local window + global tokens) reduces transformer complexity from $O(n^2)$ to $O(n)$.
- 2023-2024: Graph transformers (GPS, NAGphormer, NodeFormer) use hybrid representations: sparse `edge_index` for local message passing + full dense adjacency for global attention, combining the strengths of both.
The lesson: choosing, designing, or combining representations is itself a research contribution - not just an implementation detail.
15. Conceptual Bridge
Looking Back
This section translates the abstract graph theory of 01 into concrete data structures. Every object defined in 01 - adjacency, degree, Laplacian - now has a precise computational form:
- Adjacency (01, Definition 2.1) -> Adjacency matrix (3.1): the mathematical definition becomes a 2D array
- Degree (01, 3.1) -> Row sums of $A$, or `row_ptr[i+1] - row_ptr[i]` in CSR
- Handshaking lemma (01, 3.2): $\sum_v \deg(v) = 2m$ -> `data.sum() == 2 * num_edges` for CSR
- Graph Laplacian (01, preview) -> Full definition (3.4) with quadratic form, PSD proof, and connectivity interpretation
The connection to linear algebra (Ch. 02-03) is now explicit: every graph operation is a matrix operation, and the choice of sparse format determines whether that operation is feasible.
The section also establishes two "bridge" matrices that will recur throughout the rest of the chapter:
- The Laplacian $L = D - A$ - defined here, eigendecomposed in 04, used in the GCN derivation in 05
- The normalised adjacency $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ - defined here, proved to be a spectral filter in 04, implemented as the GCN propagation rule in 05
Mastering these two matrices - their construction, their properties, and their computational representations - is the central payoff of this section.
Looking Forward
The representations defined here are the input to every algorithm in the rest of the chapter:
- 03 Graph Algorithms implements BFS, DFS, Dijkstra, Kruskal, and topological sort on these representations. The choice of adjacency list vs. matrix vs. CSR directly determines whether algorithms run in $O(n + m)$ or $O(n^2)$.
- 04 Spectral Graph Theory eigendecomposes the Laplacian defined in 3.4. The `scipy.sparse.linalg.eigsh` interface takes CSR format as input, connecting the sparse representation to spectral analysis.
- 05 Graph Neural Networks uses `edge_index` (COO) for PyG-based GNN implementations. Every GCN, GAT, and GraphSAGE layer operates on the normalised adjacency (3.5) or directly on `edge_index` via scatter operations.
- 06 Random Graphs generates graphs via edge lists; analysing their degree distributions and connectivity requires converting to adjacency lists or CSR for efficient computation.
The Big Picture
```
GRAPH REPRESENTATIONS IN THE CURRICULUM
========================================================================
01 Graph Basics            02 Graph Representations
-----------------          --------------------------
Abstract G=(V,E)     --->  Concrete data structures
Adjacency concept    --->  Adjacency matrix A, CSR, COO
Degree definition    --->  Row sums, row_ptr differences
Laplacian preview    --->  L = D - A (full definition)
                           Normalised A = D^{-1/2}AD^{-1/2}
          +--------------------------------------+
          |            02: THE BRIDGE            |
          |     Abstract  --->  Computational    |
          |     Theory    --->  Implementation   |
          |     Proofs    --->  Algorithms       |
          +------+----------------+--------------+
                 |                |
          03 Algorithms      04 Spectral Theory
          (use adj. list,    (use L, eigenvectors,
          edge list, CSR)    CSR + ARPACK)
                 |                |
                 +------+---------+
                        v
          05 Graph Neural Networks
          (edge_index, A, message passing)
                        v
          06 Random Graphs
          (generate edge lists, analyse
          degree distributions)
========================================================================
```
Appendix A: Notation Summary
A.1 Graph and Matrix Symbols
The following notation is used consistently throughout this section, following the conventions of docs/NOTATION_GUIDE.md.
| Symbol | Meaning | Section |
|---|---|---|
| $G = (V, E)$ | Graph with vertex set $V$, edge set $E$ | 2.1 |
| $n = \lvert V\rvert$ | Number of vertices | 2.2 |
| $m = \lvert E\rvert$ | Number of edges | 2.2 |
| $A$ | Adjacency matrix | 3.1 |
| $D$ | Degree matrix | 3.4 |
| $L = D - A$ | Graph Laplacian | 3.4 |
| $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ | Symmetrically normalised adjacency | 3.5 |
| $\tilde{A} = A + I$ | Adjacency with self-loops | 3.3 |
| $P = D^{-1} A$ | Random walk transition matrix | 3.5 |
| $B$ | Incidence matrix | 7.1 |
| $\rho$ | Fill ratio (sparsity measure) | 2.3 |
| edge_index | PyG COO edge tensor | 5.3 |
Appendix B: Complexity Reference
B.1 Per-Operation Complexity
The following table consolidates the per-operation complexity for all six core representations. Assume $n$ vertices, $m$ edges, maximum degree $\Delta$.
Notation: "avg" = average case for hash-based structures; "amort" = amortised over a sequence of operations.
| Representation | Space | Edge? | Neighbours | SpMV | Build |
|---|---|---|---|---|---|
| Adjacency matrix | $O(n^2)$ | $O(1)$ | $O(n)$ | $O(n^2)$ | $O(n^2)$ |
| Adjacency list | $O(n + m)$ | $O(\deg(u))$ | $O(\deg(u))$ | $O(n + m)$ | $O(n + m)$ |
| Adjacency set | $O(n + m)$ | $O(1)$ avg | $O(\deg(u))$ | $O(n + m)$ | $O(n + m)$ |
| Edge list | $O(m)$ | $O(m)$ | $O(m)$ | $O(m)$ | $O(1)$ amort per edge |
| COO | $O(m)$ | $O(m)$ | $O(m)$ | $O(m)$ | $O(m)$ |
| CSR | $O(n + m)$ | $O(\deg(u))$ | $O(\deg(u))$ | $O(n + m)$ | $O(m \log m)$ |
| CSC | $O(n + m)$ | $O(\deg(v))$ | $O(\deg(v))$ (in-neighbours) | $O(n + m)$ | $O(m \log m)$ |
| Incidence | $O(nm)$ | $O(m)$ | $O(m)$ | - | $O(nm)$ |
B.2 Key Break-Even Points
| Comparison | Break-even condition | Prefer dense when | Prefer sparse when |
|---|---|---|---|
| Dense vs CSR space | $m \approx n^2 / 2$ (int32 CSR) | $m$ near $n^2$ | $m \ll n^2$ |
| Dense vs CSR SpMV | $O(n^2)$ vs $O(n + m)$ | Dense has better constants | Sparse wins for $m \ll n^2$ |
| Adj. list vs CSR | Never (CSR always faster) | Prototyping | Production code |
| COO vs CSR for SpMV | Single call | COO is simpler to build | CSR is faster for repeated calls |
| Adj. list vs Adj. set | Edge query frequency | Mostly traversal | Frequent edge queries |
B.3 Conversion Cost Summary
All conversions assume the source representation is already built. The cost of COO -> CSR comes from the sort step (can be reduced to $O(n + m)$ with counting sort when vertex IDs are bounded integers).
| Pipeline | Total cost | Bottleneck |
|---|---|---|
| File -> edge list -> COO | $O(m)$ | File I/O |
| File -> edge list -> CSR | $O(m \log m)$ | Sort by row |
| CSR -> PyG edge_index | $O(m)$ | Expand row_ptr |
| NetworkX -> PyG | $O(n + m)$ | Python dict iteration |
| PyG -> NetworkX | $O(m)$ | Edge list construction |
| Dense -> CSR | $O(n^2)$ | Dense scan |
| CSR -> Dense | $O(n^2)$ | Matrix fill |
For large graphs, the COO -> CSR sort is the dominant cost. Use radix sort (available in CUDA via thrust::sort_by_key) for GPU-accelerated sorting in expected $O(m)$ time.
B.4 Memory Footprint Formulas
For a graph with $n$ vertices and $m$ edges using 32-bit integers and 32-bit floats:
| Format | Memory (bytes) | Example: $n = 10^5$, $m = 5 \times 10^5$ |
|---|---|---|
| Dense float32 | $4n^2$ | 40 GB |
| Dense bool (bit-packed) | $n^2 / 8$ | 1.25 GB |
| Adjacency list (Python) | tens of bytes per directed edge (object overhead) | 40 MB |
| CSR (int32, unweighted) | $4(n + 1) + 4m$ | 2.0 MB |
| CSR (int32 + float32 weights) | $4(n + 1) + 8m$ | 6.0 MB |
| COO (int64 edge_index) | $16m$ | 8.0 MB |
| LIL (Python, sorted) | like the adjacency list, plus per-row index lists | 50 MB |
For reference: 1 GB = $10^9$ bytes. A graph with $n = 10^6$ and $m = 5 \times 10^6$ (typical social network) requires:
- Dense: 4 TB (impossible on a single machine)
- CSR int32: ~24 MB (fits in the L3 cache of a modern server)
This ~170,000× difference explains why every large-scale graph ML system uses sparse representations exclusively.
End of 02 Graph Representations. Proceed to 03 Graph Algorithms to see these representations in action.