"The difference between a good algorithm and a great one often comes down to the choice of data structure. For graphs, the right representation can mean the difference between an algorithm that runs in seconds and one that runs for days."
Overview
A graph is an abstract mathematical object. To compute with it - to run algorithms, train neural networks, or query databases - we must map it to a concrete data structure. This section studies that mapping: how to store graphs efficiently in memory, how to support the operations our algorithms need, and how to choose the right representation for each task.
The choice of representation is not cosmetic. An adjacency matrix makes eigenvalue computation trivial but wastes memory for a sparse social network with millions of nodes. An adjacency list supports fast neighbour enumeration but is slow for checking whether a specific edge exists. The PyTorch Geometric edge_index format packs all edges into a single GPU tensor, enabling batched sparse operations at scale. Understanding these trade-offs is essential for anyone implementing graph algorithms or graph neural networks.
This section introduces six core representations - adjacency matrix, adjacency list, edge list, COO, CSR/CSC, and incidence matrix - with full mathematical definitions, space-time complexity analysis, implementation details, and guidance on when to use each. We also cover the Laplacian matrix (defined here; eigenvalue analysis is in 04), the PyTorch Geometric Data object, heterogeneous and dynamic graphs, and a systematic decision framework for representation selection.
Prerequisites
- Graph definitions: $G = (V, E)$, adjacency, degree, directed/undirected - 01 Graph Basics
- Matrix operations and notation - Ch. 02 Linear Algebra Basics
- Basic Python and NumPy familiarity
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | All six representations implemented in Python; conversion functions; SpMV benchmarks; PyG Data object |
| exercises.ipynb | 8 graded exercises: manual CSR construction through heterogeneous graph building |
Learning Objectives
After completing this section, you will:
- Construct and interpret adjacency matrices, adjacency lists, edge lists, COO, CSR, CSC, and incidence matrices
- Derive and verify the Laplacian and normalised adjacency from any representation
- Analyse space and time complexity of each representation for sparse and dense graphs
- Implement conversion algorithms between all pairs of representations
- Explain how PyTorch Geometric's `edge_index` format enables GPU-accelerated message passing
- Choose the correct representation given graph size, density, algorithm type, and hardware target
- Represent heterogeneous graphs (multiple node/edge types) and temporal graphs (snapshots)
- Identify and fix common representation mistakes in GNN implementations
Table of Contents
- 1. Intuition
- 2. Formal Framework
- 3. Adjacency Matrix
- 4. Adjacency List
- 5. Edge List and COO Format
- 6. Sparse Matrix Formats
- 7. Incidence Matrix
- 8. Representation Conversions
- 9. Heterogeneous and Dynamic Graphs
- 10. Choosing a Representation
- 11. Preview: Algorithms and Spectral Methods
- 12. Common Mistakes
- 13. Exercises
- 14. Why This Matters for AI (2026 Perspective)
- 15. Conceptual Bridge
- Appendix A: Notation Summary
- Appendix B: Complexity Reference
1. Intuition
1.1 Why Representation Matters
A graph is an abstract object - a set of vertices and a set of edges. Before we can run a single algorithm on it, we need to answer a concrete engineering question: how do we store it in memory?
This question matters more than it might seem. Consider a social network with $n \approx 10^9$ users (Facebook-scale). The "obvious" representation - a square adjacency matrix - would require $n^2 \approx 10^{18}$ bits of storage, on the order of a hundred petabytes. Yet this same graph has only $m \approx 10^{11}$ edges (average degree in the hundreds), and a sparse adjacency list stores it in $O(n + m)$ machine words - a few terabytes, entirely within reach of a modern data centre node.
Conversely, when running a Graph Convolutional Network (GCN) on a small dense molecular graph (a few dozen atoms and bonds), the adjacency matrix is the natural choice: it enables the GCN's core computation as a single dense matrix multiply, with full GPU utilisation and no sparse indexing overhead.
The wrong representation doesn't just waste memory - it changes algorithmic complexity. Checking whether edge $(u, v)$ exists takes $O(1)$ with an adjacency matrix but $O(\deg(u))$ with an adjacency list. Enumerating all neighbours of $u$ takes $O(n)$ with an adjacency matrix but $O(\deg(u))$ with an adjacency list. For a power-law graph where most nodes have degree $O(1)$ but a few hubs have degree $\Theta(n)$, these differences dominate runtime.
For AI: Every major GNN framework uses a different primary representation:
- PyTorch Geometric (PyG): COO `edge_index` tensor, shape $(2, m)$
- Deep Graph Library (DGL): CSR-based `DGLGraph` with sorted adjacency
- JAX/Flax graph libraries: Dense adjacency matrices padded for TPU alignment
- NetworkX: Pure-Python adjacency dictionary (great for prototyping, slow at scale)
Understanding the representations underpinning these frameworks lets you write efficient GNN code, debug shape errors, and choose the right library for each task.
1.2 The Four Questions Every Representation Must Answer
Any graph representation must efficiently support (some subset of) four fundamental operations. The choice of representation is a choice about which operations to optimise:
THE FOUR FUNDAMENTAL GRAPH QUERIES
========================================================================
Q1. EDGE EXISTENCE Does edge (u, v) exist?
E.g., "Is user A friends with user B?"
Q2. NEIGHBOUR ENUM What are all neighbours of vertex u?
E.g., "Who are all of A's friends?"
Q3. VERTEX ITER Iterate over all vertices
E.g., "For every user, compute their degree"
Q4. EDGE ITER Iterate over all edges
E.g., "For every friendship, compute weight"
Q5. (Bonus) MATRIX Compute A*x, A^2*x, eigendecompose A
E.g., PageRank, GCN, spectral clustering
========================================================================
No single representation answers all five questions optimally. The adjacency matrix excels at Q1 and Q5 but wastes memory and is slow for Q2. The adjacency list excels at Q2 and Q3 but is awkward for Q5. Sparse formats (CSR) balance Q2-Q5 for large sparse graphs. The art is choosing the right tool.
1.3 A Taxonomy of Graph Representations
GRAPH REPRESENTATION TAXONOMY
========================================================================
DENSE SPARSE HYBRID
--------------------- --------------------- ------------------
Adjacency Matrix Adjacency List CSR / CSC
(n x n array) (dict of lists) (3-array format)
Adjacency Set COO / edge_index
(dict of sets) (2 x m tensor)
Edge List LIL / DOK
(list of tuples) (dynamic sparse)
STRUCTURED GENERALISED
--------------------- ---------------------
Incidence Matrix Heterogeneous adjacency
(n x m, vertex-edge) (multiple relation types)
Laplacian L = D - A Temporal snapshot list
(combinatorial) (graph_t for each time t)
========================================================================
1.4 Historical Evolution
The history of graph representations mirrors the history of computing hardware:
- 1736 (Euler): Graphs as drawings on paper - no formal data structure
- 1960s: Adjacency matrices become standard in combinatorics textbooks; O(n^2) space is acceptable for small graphs
- 1970s: Adjacency lists emerge as graphs grow larger; Dijkstra's 1959 algorithm is re-implemented with lists
- 1980s: Compressed Sparse Row (CSR) format developed for sparse linear algebra; adopted for large graph algorithms
- 1990s: Web-scale graphs (PageRank, 1998) expose the inadequacy of dense representations; sparse formats dominate
- 2000s: GraphChi, PowerGraph, Pregel enable distributed graph processing with edge-list-based disk formats
- 2019: PyTorch Geometric introduces `edge_index` - a COO format on GPU tensors - as the standard for GNN research
- 2020s: Mixed precision, TPU padding, and FlashAttention-style sparse attention define new representation challenges
1.5 What This Section Covers vs. What Comes Later
This section defines and analyses six core representations. Content that builds on these representations but belongs to later sections:
Preview: Graph Algorithms (03)
BFS, DFS, Dijkstra, and Kruskal all operate on the representations defined here. The choice of adjacency list vs. matrix determines the asymptotic complexity of these algorithms. Full algorithmic treatment: 03 Graph Algorithms.
Preview: Spectral Graph Theory (04)
The Laplacian defined in 3.4 below is the central object of spectral graph theory. Its eigenvalues encode connectivity, bipartiteness, and cluster structure. Eigenvalue analysis, the Fiedler vector, and spectral clustering: 04 Spectral Graph Theory.
Preview: Graph Neural Networks (05)
GNNs use the adjacency matrix or `edge_index` to define message-passing operations. The normalised adjacency introduced in 3.5 is the GCN propagation rule (Kipf & Welling, 2017). Full GNN architectures: 05 Graph Neural Networks.
2. Formal Framework
2.1 The Representation Problem
Definition (Graph Representation). A representation of a graph $G = (V, E)$ is a data structure that stores sufficient information to reconstruct $G$ and support a specified set of queries.
A representation is correct if it is lossless: $G$ can be reconstructed exactly from the stored data (up to vertex labelling in the canonical form). Practically, we work with labelled graphs where vertices have fixed integer indices $0, 1, \dots, n-1$, and correctness means exact reconstruction of both vertex and edge sets.
Notation. Throughout this section:
- $n = |V|$ - number of vertices
- $m = |E|$ - number of edges
- $\Delta$ - maximum degree
- $\bar{d} = 2m/n$ - average degree (undirected)
2.2 Complexity Measures
We evaluate representations on five dimensions:
| Measure | Description | Typical units |
|---|---|---|
| Space | Memory used to store | Bits or bytes |
| Edge query | Time to answer "does $(u, v) \in E$?" | Wall-clock or $O(\cdot)$ |
| Neighbour enum | Time to list all neighbours of | Per neighbour |
| Construction | Time to build from edge list | Total |
| Update | Time to insert or delete an edge | Per operation |
Note that $O(\cdot)$ complexity hides constants that matter in practice. An $O(1)$ hash lookup has a larger constant than an $O(1)$ array index. CSR row access is $O(1)$ but requires binary search for edge queries ($O(\log \deg)$).
2.3 Density, Sparsity, and the Fill Ratio
Definition (Fill ratio). The fill ratio of a graph is:
$$\rho = \frac{m}{\binom{n}{2}} = \frac{2m}{n(n-1)} \quad \text{(undirected)}$$
A graph is dense if $\rho$ is close to 1 (close to the maximum $\binom{n}{2}$ edges) and sparse if $\rho \ll 1$.
Practical thresholds:
- $\rho < 10^{-3}$: sparse (social networks, citation graphs, molecular graphs)
- $10^{-3} \leq \rho < 10^{-2}$: mildly sparse (small-world networks)
- $10^{-2} \leq \rho < 10^{-1}$: moderately dense
- $\rho \geq 10^{-1}$: dense (complete graphs, small knowledge graphs)
For AI: Real-world graphs used in ML are almost universally sparse:
- Citation networks: $\rho \approx 10^{-3}$ (Cora: 2708 nodes, 5429 edges)
- Social networks: $\rho \approx 10^{-7}$ to $10^{-5}$
- Molecular graphs: $n \approx 10$ to $100$ (small but dense for their size)
- Knowledge graphs: $\rho \approx 10^{-8}$ to $10^{-6}$ (Freebase, Wikidata)
The sparsity of real-world graphs is the fundamental reason why sparse representations are preferred over dense matrices at scale.
3. Adjacency Matrix
3.1 Definition and Properties
Definition (Adjacency Matrix). For a simple undirected graph $G = (V, E)$ with vertex set $V = \{0, 1, \dots, n-1\}$, the adjacency matrix $A \in \{0, 1\}^{n \times n}$ is defined by:
$$A_{ij} = \begin{cases} 1 & \text{if } \{i, j\} \in E \\ 0 & \text{otherwise} \end{cases}$$
Fundamental properties of the undirected adjacency matrix:
- Symmetry: $A = A^\top$, since $\{i, j\} \in E \iff \{j, i\} \in E$
- Zero diagonal: $A_{ii} = 0$ for all $i$ (no self-loops in a simple graph)
- Binary entries: $A_{ij} \in \{0, 1\}$
- Row sum = degree: $\sum_j A_{ij} = \deg(i)$
Worked example. Consider the "diamond" graph with vertices $V = \{0, 1, 2, 3\}$ and edges $E = \{\{0,1\}, \{0,2\}, \{1,2\}, \{1,3\}, \{2,3\}\}$:
0
/ \
1 - 2
\ /
3
$$A = \begin{pmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{pmatrix}$$
Degree check: row sums are $(2, 3, 3, 2)$, so $\deg = (2, 3, 3, 2)$. OK
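To make this concrete, here is a minimal NumPy sketch that builds this adjacency matrix from the edge list and checks the symmetry and degree properties above (variable names are illustrative):

import numpy as np

# Diamond graph: edges {0,1}, {0,2}, {1,2}, {1,3}, {2,3}
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n = 4

A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = 1
    A[v, u] = 1  # undirected: set both entries

assert (A == A.T).all()          # symmetry
assert (np.diag(A) == 0).all()   # zero diagonal
print(A.sum(axis=1))             # row sums = degrees: [2 3 3 2]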
3.2 Directed and Weighted Variants
Directed adjacency matrix. For a directed graph (digraph) with directed edges $(i, j) \in E$:
$$A_{ij} = \begin{cases} 1 & \text{if } (i, j) \in E \\ 0 & \text{otherwise} \end{cases}$$
$A$ is generally not symmetric. Row $i$ encodes out-edges of $i$; column $j$ encodes in-edges of $j$.
- Out-degree of $i$: $\deg^{\text{out}}(i) = \sum_j A_{ij}$ (row sum)
- In-degree of $j$: $\deg^{\text{in}}(j) = \sum_i A_{ij}$ (column sum)
Weighted adjacency matrix. For a weighted graph with weight function $w : E \to \mathbb{R}$:
$$A_{ij} = \begin{cases} w(i, j) & \text{if } (i, j) \in E \\ 0 & \text{otherwise} \end{cases}$$
Common weight semantics:
- Similarity weights ($w > 0$, larger = more similar): social strength, correlation
- Distance weights ($w > 0$, smaller = closer): road networks, molecular bond lengths
- Signed weights ($w \in \mathbb{R}$): financial networks (positive = cooperation, negative = competition)
For AI: Transformer attention defines a weighted digraph where $A_{ij} = \operatorname{softmax}_j\!\big(q_i^\top k_j / \sqrt{d_k}\big)$ - a fully connected weighted directed graph over sequence positions. Each attention head defines a different graph on the same token set.
3.3 Self-Loops and Augmented Adjacency
In simple graphs $A_{ii} = 0$. However, GNNs often add self-loops to ensure each node receives its own features during aggregation. The augmented adjacency matrix is:
$$\tilde{A} = A + I$$
where $I$ is the $n \times n$ identity. This sets $\tilde{A}_{ii} = 1$ for all $i$, adding a self-loop at every vertex. The GCN (Kipf & Welling, 2017) uses $\tilde{A}$ in its propagation rule.
Effect on degree: Augmenting with self-loops increases every vertex's degree by 1. If $D$ is the degree matrix of $A$, then the degree matrix of $\tilde{A}$ is $\tilde{D} = D + I$.
3.4 The Degree Matrix and Laplacian
Definition (Degree Matrix). The degree matrix $D$ of $G$ is the diagonal matrix:
$$D = \operatorname{diag}(\deg(0), \deg(1), \dots, \deg(n-1))$$
so $D_{ii} = \deg(i)$ and $D_{ij} = 0$ for $i \neq j$. In matrix form: $D = \operatorname{diag}(A\mathbf{1})$.
Definition (Graph Laplacian). The combinatorial Laplacian of $G$ is:
$$L = D - A$$
Properties of $L$:
- Symmetric: $L = L^\top$ (for undirected graphs)
- Positive semidefinite: $L \succeq 0$, i.e., $x^\top L x \geq 0$ for all $x \in \mathbb{R}^n$
- Zero row sums: $L\mathbf{1} = \mathbf{0}$ (so $0$ is always an eigenvalue)
- Quadratic form: $x^\top L x = \sum_{\{i,j\} \in E} (x_i - x_j)^2$ - measures "roughness" of $x$ on $G$
- Connectivity: $G$ is connected iff the second-smallest eigenvalue $\lambda_2 > 0$ (Fiedler value)
For the diamond graph above:
$$D = \operatorname{diag}(2, 3, 3, 2), \qquad L = D - A = \begin{pmatrix} 2 & -1 & -1 & 0 \\ -1 & 3 & -1 & -1 \\ -1 & -1 & 3 & -1 \\ 0 & -1 & -1 & 2 \end{pmatrix}$$
Proof of the quadratic form identity (Property 4). We verify that $x^\top L x = \sum_{\{i,j\} \in E} (x_i - x_j)^2$ for any $x \in \mathbb{R}^n$.
Expanding, $x^\top L x = x^\top D x - x^\top A x = \sum_i \deg(i)\, x_i^2 - 2 \sum_{\{i,j\} \in E} x_i x_j$. Each vertex $i$ contributes the term $x_i^2$ exactly $\deg(i)$ times - once per incident edge. Rearranging by edges:
$$x^\top L x = \sum_{\{i,j\} \in E} \left( x_i^2 + x_j^2 - 2 x_i x_j \right) = \sum_{\{i,j\} \in E} (x_i - x_j)^2$$
This identity has a beautiful interpretation: $x^\top L x$ measures the total "variation" of the signal $x$ across edges of $G$. If $x$ assigns similar values to adjacent vertices (a "smooth" signal), $x^\top L x$ is small. If $x$ assigns very different values to connected vertices, $x^\top L x$ is large.
Proof of zero row sums (Property 3): $\sum_j L_{ij} = D_{ii} - \sum_j A_{ij} = \deg(i) - \deg(i) = 0$.
So $L\mathbf{1} = \mathbf{0}$ always - the all-ones vector is in the null space of $L$, meaning $0$ is always an eigenvalue. For a connected graph, $0$ has multiplicity exactly 1 (the null space is spanned by $\mathbf{1}$). For a graph with $c$ connected components, $0$ has multiplicity $c$.
Worked numerical verification - diamond graph, $x = (1, 0, 0, -1)^\top$:
Edge-by-edge differences:
- $\{0,1\}$: $(x_0 - x_1)^2 = (1 - 0)^2 = 1$
- $\{0,2\}$: $(x_0 - x_2)^2 = (1 - 0)^2 = 1$
- $\{1,2\}$: $(x_1 - x_2)^2 = (0 - 0)^2 = 0$
- $\{1,3\}$: $(x_1 - x_3)^2 = (0 - (-1))^2 = 1$
- $\{2,3\}$: $(x_2 - x_3)^2 = (0 - (-1))^2 = 1$
Sum $= 1 + 1 + 0 + 1 + 1 = 4$.
Direct matrix computation: $Lx = (2, 0, 0, -2)^\top$, so $x^\top L x = 1 \cdot 2 + 0 + 0 + (-1)(-2) = 4$. The two computations agree.
The quadratic form correctly evaluates to 4, confirming that vertices 0 and 3 (which have values $+1$ and $-1$ while being two hops apart) create variation spread across the four boundary edges.
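A few lines of NumPy reproduce this verification; `A` is the diamond adjacency matrix from 3.1:

import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
D = np.diag(A.sum(axis=1))
L = D - A

x = np.array([1.0, 0.0, 0.0, -1.0])
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

quad_matrix = x @ L @ x                                  # matrix form
quad_edges = sum((x[i] - x[j]) ** 2 for i, j in edges)   # edge-sum form
print(quad_matrix, quad_edges)                           # both 4.0

assert np.allclose(L @ np.ones(4), 0)  # zero row sums: L @ 1 = 0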
The full spectral analysis of - eigenvalues, Fiedler vector, spectral clustering - belongs to 04. Here we define as a matrix derived from the graph and establish its basic algebraic properties. See 04 Spectral Graph Theory for the complete treatment.
3.5 Normalised Adjacency
Many GNN architectures use normalised forms of $A$ to prevent numerical instability during iterative propagation. The two most common normalisations are:
Symmetric normalisation (GCN, Kipf & Welling 2017):
$$A_{\text{sym}} = D^{-1/2} A D^{-1/2}, \qquad (A_{\text{sym}})_{ij} = \frac{A_{ij}}{\sqrt{\deg(i) \deg(j)}}$$
This is symmetric and has eigenvalues in $[-1, 1]$. The GCN propagation rule with self-loops is:
$$H^{(l+1)} = \sigma\!\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right)$$
where $\tilde{A} = A + I$ and $\tilde{D} = D + I$.
Row normalisation (random walk):
$$A_{\text{rw}} = D^{-1} A$$
$A_{\text{rw}}$ is the transition matrix $P$ of the random walk on $G$: $P_{ij}$ is the probability of moving from $i$ to $j$ in one step. Each row sums to 1. GraphSAGE's mean aggregation computes $D^{-1} A$ implicitly.
Normalised Laplacian:
$$L_{\text{sym}} = I - D^{-1/2} A D^{-1/2}$$
Eigenvalues of $L_{\text{sym}}$ lie in $[0, 2]$; the GCN can be derived as a first-order polynomial filter on $L_{\text{sym}}$ (see 04).
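The three normalisations are one line each in NumPy; a minimal sketch for the diamond graph (dense arithmetic, fine at this scale):

import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
deg = A.sum(axis=1)

D_inv_sqrt = np.diag(deg ** -0.5)
A_sym = D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalisation
A_rw = np.diag(1.0 / deg) @ A         # random-walk normalisation
L_sym = np.eye(4) - A_sym             # normalised Laplacian

print(A_rw.sum(axis=1))                    # each row sums to 1
print(np.linalg.eigvalsh(L_sym).round(3))  # eigenvalues lie in [0, 2]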
3.6 Space-Time Analysis
| Operation | Complexity | Notes |
|---|---|---|
| Space | $O(n^2)$ | Always - regardless of $m$ |
| Edge query | $O(1)$ | Direct array access |
| Neighbour enum of $u$ | $O(n)$ | Must scan full row |
| Add/remove edge | $O(1)$ | Set one entry |
| Add vertex | $O(n^2)$ | Rebuild matrix |
| Build from edge list | $O(n^2 + m)$ | Initialise + fill |
| Matrix-vector $Ax$ | $O(n^2)$ | Dense BLAS |
| Matrix power $A^k$ | $O(n^3)$ per step | Dense eigendecomposition or repeated multiply |
Break-even point. For the adjacency matrix to be memory-competitive with CSR, the fill ratio must satisfy roughly $\rho \gtrsim 0.5$ (for 32-bit floats and 32-bit CSR indices). Real-world graphs almost always have $\rho \ll 10^{-2}$, making sparse formats strictly superior at scale.
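The break-even arithmetic is easy to check directly; a sketch comparing idealised array sizes (dense float32 vs. CSR with int32 indices and float32 data, ignoring object overhead):

def dense_bytes(n):
    return 4 * n * n  # float32 matrix

def csr_bytes(n, nnz):
    # row_ptr (n+1 int32) + col_idx (nnz int32) + data (nnz float32)
    return 4 * (n + 1) + 4 * nnz + 4 * nnz

n = 10_000
for fill in [1e-4, 1e-2, 0.5, 1.0]:
    nnz = int(fill * n * n)
    print(f"fill={fill:>6}: dense={dense_bytes(n)/1e6:8.1f} MB, "
          f"csr={csr_bytes(n, nnz)/1e6:8.1f} MB")

At fill 0.5 the two formats use essentially the same memory (~400 MB here); below that, CSR wins.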
3.7 Matrix Operations on Graphs
The adjacency matrix bridges graph theory and linear algebra, enabling powerful matrix-based analyses:
Walk counting: $(A^k)_{ij}$ counts walks of length $k$ from $i$ to $j$ (proved in 01). This gives a matrix-algebraic approach to: triangle counting ($\operatorname{tr}(A^3)/6$), reachability (powers of $A$ for DAGs), and PageRank.
PageRank (Page et al., 1999): The stationary distribution $\pi$ of the damped random walk solves:
$$\pi = \alpha P^\top \pi + \frac{1 - \alpha}{n} \mathbf{1}, \qquad P = D^{-1} A$$
where $\alpha \approx 0.85$ is the damping factor. Solving this requires repeated sparse matrix-vector multiplication - justifying CSR over dense $A$.
Spectral decomposition: $A = Q \Lambda Q^\top$ (for symmetric $A$) reveals the graph's eigenspectrum - covered fully in 04.
For AI: The GCN layer computes $H' = \hat{A} H W$. With $n$ nodes, $d$ input features, and $h$ hidden units, this is a sparse-dense matrix multiplication - the dominant computational cost. Efficient sparse implementations (cuSPARSE, PyG's torch_sparse) are essential.
3.8 When to Use the Adjacency Matrix
Use the adjacency matrix when:
- $n \lesssim 10^4$ and the graph is dense ($\rho \gtrsim 10^{-1}$)
- You need $O(1)$ edge existence queries (e.g., building complement graphs)
- You need matrix eigendecomposition or matrix powers
- The graph is static (no insertions/deletions)
- You're using small molecular graphs in a GNN (batched dense matmul is GPU-efficient)
Avoid the adjacency matrix when:
- $n \gtrsim 10^5$ and/or $m \ll n^2$ (sparse graphs at scale)
- Memory is constrained
- You need fast neighbour enumeration for large-degree nodes
- The graph is dynamic (edges added/removed frequently)
3.9 Worked Example: GCN Propagation with the Adjacency Matrix
To ground the theory, let us trace one GCN layer for the diamond graph. We have $n = 4$ nodes, identity input features $X = I_4$ (feature dimension $d = 4$), and output dimension $h = 2$.
Step 1: Construct $\tilde{A} = A + I$.
$$\tilde{A} = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix}$$
Step 2: Compute $\tilde{D}$ ($\tilde{D}_{ii} = \deg(i) + 1$, i.e., original degree + 1).
Original degrees: $(2, 3, 3, 2)$, so $\tilde{D} = \operatorname{diag}(3, 4, 4, 3)$.
Step 3: Compute $\tilde{D}^{-1/2} = \operatorname{diag}(1/\sqrt{3},\, 1/2,\, 1/2,\, 1/\sqrt{3})$.
Step 4: Compute the normalised propagation matrix $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$.
The $(i, j)$ entry is $\tilde{A}_{ij} / \sqrt{\tilde{d}_i \tilde{d}_j}$, so for example:
$$\hat{A}_{00} = \frac{1}{3} \approx 0.333, \qquad \hat{A}_{01} = \frac{1}{\sqrt{3 \cdot 4}} \approx 0.289, \qquad \hat{A}_{12} = \frac{1}{4} = 0.25$$
Step 5: Apply $H' = \hat{A} X W$.
With $X = I$ (identity features) and $W \in \mathbb{R}^{4 \times 2}$ (learned weights), the output is $H' = \hat{A} W$ - each row of $W$ is aggregated and weighted by the normalised adjacency. Node 1 (highest degree) receives the most averaged signal; nodes 0 and 3 (boundary nodes) receive less mixing.
Key insight: The $1/\sqrt{\tilde{d}_i \tilde{d}_j}$ normalisation ensures that aggregating over a high-degree hub does not blow up the magnitude of the resulting embedding, because contributions are divided by the geometric mean of the endpoint degrees.
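The five steps collapse to a few lines of NumPy; a sketch with identity features and a random weight matrix (`W` here is an illustrative stand-in for learned parameters):

import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = A.shape[0]

A_tilde = A + np.eye(n)                    # Step 1: add self-loops
d_tilde = A_tilde.sum(axis=1)              # Step 2: augmented degrees [3 4 4 3]
D_inv_sqrt = np.diag(d_tilde ** -0.5)      # Step 3
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # Step 4: normalised propagation

X = np.eye(n)                              # identity node features
W = np.random.randn(n, 2)                  # hypothetical learned weights
H = A_hat @ X @ W                          # Step 5: one GCN layer (pre-activation)
print(A_hat.round(3))
print(H.shape)                             # (4, 2)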
4. Adjacency List
4.1 Definition and Python Implementation
Definition (Adjacency List). An adjacency list representation stores, for each vertex $u$, a list of its neighbours: $N(u) = \{v : \{u, v\} \in E\}$.
The full adjacency list is the mapping $u \mapsto N(u)$ from $V$ to $2^V$, where $2^V$ is the power set of $V$. In Python, this is naturally a dictionary of lists:
# Undirected graph: 0-1, 0-2, 1-2, 1-3, 2-3
adj = {
0: [1, 2],
1: [0, 2, 3],
2: [0, 1, 3],
3: [1, 2]
}
For a graph with $n$ vertices and $m$ edges, the total storage is $O(n + m)$ (each undirected edge appears in two lists).
Python implementation using defaultdict:
from collections import defaultdict
class Graph:
def __init__(self, directed=False):
self.adj = defaultdict(list)
self.directed = directed
self.n_vertices = 0
self.n_edges = 0
def add_edge(self, u, v, weight=None):
entry = (v, weight) if weight is not None else v
self.adj[u].append(entry)
if not self.directed:
rev = (u, weight) if weight is not None else u
self.adj[v].append(rev)
self.n_edges += 1
self.n_vertices = max(self.n_vertices, u + 1, v + 1)
def neighbours(self, u):
return self.adj[u]
4.2 Directed and Weighted Variants
Directed adjacency list. Each vertex $v$ stores only its out-neighbours $N^{\text{out}}(v) = \{u : (v, u) \in E\}$. To support in-neighbour queries, maintain a separate reverse adjacency list:
adj_out[v] # out-neighbours of v (for forward pass)
adj_in[v] # in-neighbours of v (for backward pass / reverse BFS)
For AI: GNNs often need both directions. In PyG, torch_geometric.utils.to_undirected() generates bidirectional edges automatically. Message passing typically uses adj_out (source -> destination aggregation).
Weighted adjacency list. Each entry stores a (neighbour, weight) pair:
adj = {
0: [(1, 0.5), (2, 1.2)],
1: [(0, 0.5), (2, 0.8), (3, 2.1)],
...
}
This supports weighted BFS/Dijkstra without conversion overhead.
4.3 Adjacency Set: O(1) Edge Lookup
The standard adjacency list has edge lookup. Replacing each list with a hash set yields average edge lookup:
adj_set = {
0: {1, 2},
1: {0, 2, 3},
2: {0, 1, 3},
3: {1, 2}
}
# O(1) average edge query:
exists = 3 in adj_set[1] # True
Trade-offs of adjacency set vs. list:
| Property | Adjacency List | Adjacency Set |
|---|---|---|
| Edge query | $O(\deg)$ | $O(1)$ avg |
| Neighbour iteration | $O(\deg)$ | $O(\deg)$ |
| Memory | Lower (list overhead) | Higher (hash table) |
| Ordered neighbours | Yes | No |
| Suitable for Dijkstra | Yes (sorted heaps) | Limited |
For graphs where edge existence queries dominate (e.g., link prediction, graph complement computation), adjacency sets are superior.
4.4 Space-Time Analysis
| Operation | Adjacency List | Adjacency Set |
|---|---|---|
| Space | $O(n + m)$ | $O(n + m)$ |
| Edge query | $O(\deg)$ | $O(1)$ avg |
| All neighbours of $u$ | $O(\deg(u))$ | $O(\deg(u))$ |
| Add edge | $O(1)$ | $O(1)$ avg |
| Remove edge | $O(\deg)$ | $O(1)$ avg |
| Add vertex | $O(1)$ | $O(1)$ |
| Build from edge list | $O(n + m)$ | $O(n + m)$ |
| Matrix-vector $Ax$ | $O(n + m)$ | $O(n + m)$ |
Critical advantage over adjacency matrix: For sparse graphs with $m = O(n)$, the adjacency list reduces memory from $O(n^2)$ to $O(n)$ - a factor of $n$. For $n = 10^6$, this is the difference between 1 TB and a few megabytes.
4.5 Cache Performance and Memory Layout
The Python dictionary-of-lists adjacency representation, while correct and flexible, has poor cache locality. Each list is a separate Python object allocated at a random memory address, causing cache misses during graph traversal.
Cache-friendly alternative: flat CSR arrays (see 6.2). CSR packs all adjacency information into two contiguous arrays, dramatically improving cache hit rates for algorithms that traverse many edges.
Memory layout comparison (1M edges):
Python dict-of-lists: ~80 bytes per edge (Python overhead) -> ~80 MB
NumPy CSR arrays: ~8 bytes per edge (int64) -> ~8 MB
For production GNN code, the adjacency list (Python dict) is a prototyping tool; CSR or COO formats are used in actual training.
4.6 When to Use Adjacency Lists
Use adjacency lists when:
- Medium-scale sparse graphs ($n \lesssim 10^6$)
- The primary operations are neighbour enumeration and graph traversal (BFS, DFS)
- The graph is dynamic (edges inserted/deleted frequently)
- You're prototyping an algorithm in Python (NetworkX uses this internally)
- Weighted graphs with varying edge weights
Avoid adjacency lists when:
- You need fast edge existence queries (use adjacency set or adjacency matrix)
- You need matrix operations (convert to CSR or adjacency matrix)
- You're targeting GPU computation (convert to COO / edge_index)
4.7 Adjacency List Implementation Patterns
Beyond the basic Python dictionary, several implementation patterns arise in practice:
Pattern 1 - CSR as a flattened adjacency list. The CSR format (6.2) can be viewed as a memory-efficient adjacency list: col_idx[row_ptr[i]:row_ptr[i+1]] is exactly the neighbour list of vertex $i$, stored in a flat array for cache efficiency. The adjacency list and CSR are conceptually equivalent; CSR is the production-grade, cache-friendly version.
Pattern 2 - Sorted adjacency list. When edge queries are common but an adjacency set is too memory-intensive, keep each neighbour list sorted and use binary search:
import bisect
class SortedAdjList:
def __init__(self, n):
self.adj = [[] for _ in range(n)]
def add_edge(self, u, v):
bisect.insort(self.adj[u], v)
bisect.insort(self.adj[v], u)
def has_edge(self, u, v) -> bool:
idx = bisect.bisect_left(self.adj[u], v)
return idx < len(self.adj[u]) and self.adj[u][idx] == v
Edge lookup is $O(\log \deg)$. Insertion is $O(\deg)$ to shift elements. This is a good middle ground between adjacency list ($O(\deg)$ lookup) and adjacency set ($O(1)$ lookup with high memory).
Pattern 3 - Degree-aware pre-allocation. When the degree sequence is known in advance (e.g., loading from a file), pre-allocate arrays of the correct size to avoid repeated resizing:
import numpy as np

def build_adj_preallocated(n, edges):
    # First pass: count degrees
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    # Allocate exact-size arrays, then fill in a second pass
    adj = [np.empty(deg[i], dtype=np.int32) for i in range(n)]
    ptr = [0] * n
    for u, v in edges:
        adj[u][ptr[u]] = v; ptr[u] += 1
        adj[v][ptr[v]] = u; ptr[v] += 1
    return adj
This avoids Python list resizing overhead and produces NumPy arrays for each row - enabling vectorised operations per neighbourhood.
For AI - GraphSAGE neighbour sampling. GraphSAGE (Hamilton et al., 2017) samples a fixed number of neighbours per node at each layer. The implementation uses the adjacency list directly:
import random

def sample_neighbors(adj, node, k):
    nbrs = adj[node]
    if len(nbrs) <= k:
        return nbrs
    return random.sample(nbrs, k)  # uniform sample of k neighbours

This is uniform sampling without replacement. DGL's NeighborSampler wraps the same idea in a C++ implementation over CSR for scalability.
5. Edge List and COO Format
5.1 Edge List
Definition (Edge List). An edge list is the simplest possible graph representation: a sequence of all edges in $E$.
For an undirected unweighted graph:
$$\text{edges} = [(u_1, v_1), (u_2, v_2), \dots, (u_m, v_m)]$$
For a weighted graph:
$$\text{edges} = [(u_1, v_1, w_1), (u_2, v_2, w_2), \dots, (u_m, v_m, w_m)]$$
The edge list stores only what exists - no zero entries, no empty adjacency slots. It is the most compact representation for pure storage ($O(m)$ space) and the most natural format for file I/O.
Standard file formats using edge lists:
- `.edges` / `.txt`: One edge per line, space-separated (`u v` or `u v w`)
- Matrix Market `.mtx`: Standard sparse matrix exchange format (used by SNAP, KONECT)
- GraphML: XML-based, supports rich metadata
- Parquet: Columnar binary format for billion-edge datasets (used by DGL, OGB)
For AI: The Open Graph Benchmark (OGB) distributes all datasets as edge lists in CSV/Parquet. Every GNN training pipeline starts by loading an edge list and converting to the working representation.
5.2 COO Format
Definition (COO - Coordinate List Format). The COO format represents a sparse matrix (and hence a graph) as three parallel arrays:
$$(\texttt{row}[k],\ \texttt{col}[k],\ \texttt{data}[k]) \qquad \text{for each edge } k = 0, \dots, m-1$$
Graph: 0->1 (w=2.0), 0->3 (w=1.0), 1->2 (w=0.5), 2->3 (w=3.0)
COO:
row = [0, 0, 1, 2]
col = [1, 3, 2, 3]
data = [2.0, 1.0, 0.5, 3.0]
COO is essentially an indexed edge list with separate arrays for rows, columns, and values. It admits duplicate entries (multiple values for the same $(i, j)$ pair are summed when converting to CSR), which is useful for accumulating gradients during GNN backpropagation.
Key properties:
- $O(m)$ space
- $O(m)$ edge query (scan all entries)
- $O(1)$ edge insertion (append to all three arrays)
- Conversion to CSR is $O(m \log m)$ (sort by row) or $O(n + m)$ (counting sort)
5.3 PyTorch Geometric edge_index
PyTorch Geometric (PyG) uses a specialised COO format for graph edges:
Definition (edge_index). The edge_index tensor is a torch.Tensor of shape $(2, m)$ and dtype torch.long:
Row 0 contains source nodes; row 1 contains destination nodes. Each column $k$ represents one directed edge $(\text{src}_k, \text{dst}_k)$.
# Diamond graph: 0-1, 0-2, 1-2, 1-3, 2-3 (undirected -> bidirectional)
edge_index = torch.tensor([
[0, 1, 0, 2, 1, 2, 1, 3, 2, 3], # sources
[1, 0, 2, 0, 2, 1, 3, 1, 3, 2] # targets
], dtype=torch.long)
Why edge_index and not a sparse matrix? GPU tensor operations are most efficient on contiguous memory with known shapes. The $(2, m)$ layout:
- Fits in a single contiguous GPU tensor
- Supports batched operations across multiple graphs via the `batch` vector
- Enables `scatter_add` / `scatter_mean` for message aggregation without sparse matrix overhead
- Works natively with PyTorch's autograd for differentiable graph operations
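To see why this layout matters, here is a minimal sketch of sum-aggregation message passing written directly against edge_index with index_add_ (the same scatter pattern PyG's propagate uses internally; the diamond graph and features are illustrative):

import torch

# Diamond graph, bidirectional edges
edge_index = torch.tensor([
    [0, 1, 0, 2, 1, 2, 1, 3, 2, 3],   # sources
    [1, 0, 2, 0, 2, 1, 3, 1, 3, 2]    # targets
], dtype=torch.long)
x = torch.arange(1.0, 5.0).unsqueeze(1)  # node features (4, 1): [1, 2, 3, 4]

src, dst = edge_index
messages = x[src]                 # one message per edge, shape (m, 1)
out = torch.zeros_like(x)
out.index_add_(0, dst, messages)  # sum incoming messages at each target node
print(out.squeeze())              # tensor([5., 8., 7., 5.]) - equals A @ x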
Edge attributes. Optional edge features are stored as a separate edge_attr tensor of shape $(m, d_e)$:
edge_attr = torch.tensor([
[2.0], # weight of edge (0->1)
[1.0], # weight of edge (0->3)
...
], dtype=torch.float) # shape: (m, d_e)
5.4 The PyG Data Object
PyG unifies all graph components into a torch_geometric.data.Data object:
from torch_geometric.data import Data
data = Data(
x = node_features, # shape: (n, d_v) - node feature matrix
edge_index = edge_index, # shape: (2, m) - COO edge list
edge_attr = edge_attr, # shape: (m, d_e) - edge features (optional)
y = labels, # shape: (n,) or (1,) - node or graph labels
pos = positions, # shape: (n, 3) - spatial coordinates (optional)
)
print(data.num_nodes) # n
print(data.num_edges) # m
print(data.is_undirected()) # True if edge_index is symmetric
Batching multiple graphs. When training on a dataset of graphs (molecular property prediction, graph classification), PyG concatenates multiple Data objects into a single Batch:
# Batch of 3 graphs with n1, n2, n3 nodes
batch.x # shape: (n1+n2+n3, d_v)
batch.edge_index # shape: (2, m1+m2+m3), indices shifted per graph
batch.batch # shape: (n1+n2+n3,), maps each node to its graph index
The batch vector enables graph-level pooling: global_mean_pool(batch.x, batch.batch) computes one embedding per graph.
5.5 GPU-Friendly Memory Layout
The choice of COO / edge_index over sparse matrix formats is motivated by GPU architecture:
GPU memory hierarchy:
- L1 cache: ~32 KB per streaming multiprocessor (SM)
- L2 cache: ~40 MB shared
- Global memory: 24-80 GB, high bandwidth (~900 GB/s on V100, ~2-3 TB/s on A100/H100)
Sparse matrix-vector multiplication (SpMV) in CSR format requires indirect memory access (following row pointers then column indices), which causes cache misses. The scatter_add pattern used in PyG's message passing accesses edge_index sequentially, enabling prefetching and coalesced memory access.
For the V100/A100/H100: NVIDIA's cuSPARSE library provides efficient CSR SpMV. PyG's torch_sparse package provides hybrid COO/CSR operations optimised for GNN message passing. At very large scale ($m \gtrsim 10^9$ edges), systems like DGL use CSR-based graph sampling with pinned memory for CPU-GPU transfer.
5.6 When to Use Edge Lists / COO
Use edge lists / COO when:
- Loading graph data from files (edge lists are the universal exchange format)
- Building a graph incrementally (append edges one by one)
- GPU-based GNN training with PyTorch Geometric
- Batch processing multiple small graphs
- When you need to convert to another format as a first step
Avoid when:
- You need fast neighbour enumeration (convert to CSR first)
- You need edge existence queries (COO is $O(m)$; use adjacency matrix or set)
- Memory is constrained and $m$ is large (COO has roughly 2x index overhead vs. CSR for large graphs)
6. Sparse Matrix Formats
6.1 The Sparsity Problem
A sparse matrix is one with "sufficiently many" zero entries that storing and operating on zeros wastes time and space. For graphs, the adjacency matrix has $n^2$ entries but only $2m$ non-zeros (for undirected graphs). The fill ratio $\rho$ measures how non-sparse the matrix is.
Example - CiteSeer citation network:
- $n = 3327$ nodes, $m = 4732$ edges
- Dense adjacency: $n^2 \approx 1.1 \times 10^7$ entries, ~44 MB (float32)
- Non-zeros: $2m = 9464$ entries
- Fill ratio: $\rho \approx 8.6 \times 10^{-4}$ - 99.9% of entries are zero
- Sparse storage (CSR): ~76 KB - over 500\times smaller
Sparse formats exploit this structure: they store only non-zero entries and their indices.
6.2 CSR: Compressed Sparse Row
CSR (Compressed Sparse Row) is the standard sparse matrix format for row-oriented operations. It uses three arrays:
- data[k] - value of the $k$-th non-zero entry
- col_idx[k] - column index of the $k$-th non-zero entry
- row_ptr[i] - index in data/col_idx where row $i$ begins
Construction. For row $i$, its non-zero entries are data[row_ptr[i] : row_ptr[i+1]] and their columns are col_idx[row_ptr[i] : row_ptr[i+1]].
Worked example - diamond graph adjacency matrix:
Non-zeros by row:
Row 0: A[0,1]=1, A[0,2]=1
Row 1: A[1,0]=1, A[1,2]=1, A[1,3]=1
Row 2: A[2,0]=1, A[2,1]=1, A[2,3]=1
Row 3: A[3,1]=1, A[3,2]=1
CSR:
data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
col_idx = [1, 2, 0, 2, 3, 0, 1, 3, 1, 2]
row_ptr = [0, 2, 5, 8, 10]
Reading row 1: data[row_ptr[1]:row_ptr[2]] = data[2:5] = [1, 1, 1] at columns col_idx[2:5] = [0, 2, 3]. Correct: vertex 1 connects to 0, 2, 3. OK
Space: $O(n + m)$ - specifically $n + 1$ integers for row_ptr, one integer per non-zero for col_idx, and one value per non-zero for data (or absent for unweighted graphs).
| CSR Operation | Complexity | Notes |
|---|---|---|
| Space | $O(n + m)$ | Optimal for sparse |
| Row neighbours | $O(\deg)$ | Contiguous slice |
| Edge query | $O(\deg)$ or $O(\log \deg)$ | Linear scan or binary search if sorted |
| SpMV | $O(n + m)$ | Industry standard |
| Add edge | $O(m)$ | Must shift entries |
| Convert from COO | $O(m \log m)$ | Sort by row index |
6.3 CSC: Compressed Sparse Column
CSC (Compressed Sparse Column) is the column-oriented analogue of CSR:
- data[k] - value of the $k$-th non-zero
- row_idx[k] - row index of the $k$-th non-zero
- col_ptr[j] - index where column $j$ begins
CSC excels at column operations: accessing all in-neighbours of vertex $j$ (i.e., all $i$ such that $A_{ij} \neq 0$) takes $O(\deg^{\text{in}}(j))$ - a single contiguous slice of row_idx.
CSC vs CSR for directed GNNs: In message-passing GNNs, aggregating messages from in-neighbours of $v$ is the forward pass operation. CSC organises data by destination node, making this $O(\deg^{\text{in}}(v))$ per node. DGL internally uses a CSR/CSC pair for efficient forward and backward passes.
Relationship: CSC of $A$ = CSR of $A^\top$. Converting between CSR and CSC is therefore equivalent to matrix transposition, which is $O(n + m)$ for sparse formats.
6.4 LIL: List of Lists
LIL (List of Lists) stores the matrix as a list of rows, where each row is itself a sorted list of (col_idx, value) pairs:
# LIL representation of diamond graph (4 rows)
lil = [
[(1, 1.0), (2, 1.0)], # row 0
[(0, 1.0), (2, 1.0), (3, 1.0)], # row 1
[(0, 1.0), (1, 1.0), (3, 1.0)], # row 2
[(1, 1.0), (2, 1.0)] # row 3
]
Advantages of LIL:
- Efficient row slicing: access row $i$ in $O(1)$
- Efficient incremental construction: insert an entry with an $O(\log \deg)$ binary search (plus an $O(\deg)$ shift)
- Good for iterative graph construction where edges are added one by one
Disadvantages:
- Inefficient for column access
- No cache-friendly layout (scattered Python objects)
- Must convert to CSR before matrix operations
When to use LIL: When building a sparse graph representation incrementally from a stream of edges. Use scipy.sparse.lil_matrix for construction, then convert to CSR for computation.
6.5 DOK: Dictionary of Keys
DOK stores only non-zero entries in a Python dictionary keyed by (row, col) tuples:
dok = {
(0, 1): 1.0, (0, 2): 1.0,
(1, 0): 1.0, (1, 2): 1.0, (1, 3): 1.0,
(2, 0): 1.0, (2, 1): 1.0, (2, 3): 1.0,
(3, 1): 1.0, (3, 2): 1.0
}
Advantages: $O(1)$ entry access, insertion, and deletion - ideal for sparse updates and random access patterns.
Disadvantages: High memory overhead (Python dict stores hash, key, value per entry - ~200 bytes vs. ~8 bytes in CSR). Not suitable for large graphs.
When to use DOK: Constructing small sparse matrices with frequent random-access updates (e.g., building a proximity matrix from $k$-NN search results).
6.6 Sparse Matrix-Vector Multiplication
The key numerical operation for graph algorithms and GNNs is sparse matrix-vector multiplication (SpMV): compute $y = Ax$ where $A$ is sparse.
CSR SpMV algorithm:
import numpy as np

def spmv_csr(data, col_idx, row_ptr, x):
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        # Accumulate over the non-zeros of row i
        for k in range(row_ptr[i], row_ptr[i+1]):
            y[i] += data[k] * x[col_idx[k]]
    return y
Time: $O(n + m)$ - each non-zero contributes exactly one multiply-add.
Worked trace of CSR SpMV - diamond graph, $x = (1, 2, 3, 4)^\top$.
CSR arrays (from 6.2):
data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
col_idx = [1, 2, 0, 2, 3, 0, 1, 3, 1, 2]
row_ptr = [0, 2, 5, 8, 10]
Algorithm trace:
i=0: k in [0,2):
k=0: y[0] += data[0]*x[col_idx[0]] = 1*x[1] = 1*2 = 2
k=1: y[0] += data[1]*x[col_idx[1]] = 1*x[2] = 1*3 = 3
-> y[0] = 5
i=1: k in [2,5):
k=2: y[1] += 1*x[0] = 1*1 = 1
k=3: y[1] += 1*x[2] = 1*3 = 3
k=4: y[1] += 1*x[3] = 1*4 = 4
-> y[1] = 8
i=2: k in [5,8):
k=5: y[2] += 1*x[0] = 1
k=6: y[2] += 1*x[1] = 2
k=7: y[2] += 1*x[3] = 4
-> y[2] = 7
i=3: k in [8,10):
k=8: y[3] += 1*x[1] = 2
k=9: y[3] += 1*x[2] = 3
-> y[3] = 5
Result: $y = (5, 8, 7, 5)^\top$.
Verification: Row 1 of $A$ is $(1, 0, 1, 1)$, so $y_1 = x_0 + x_2 + x_3 = 1 + 3 + 4 = 8$. OK
Equivalently, $y_i = \sum_{j \in N(i)} x_j$ - the SpMV computes the sum of feature values over neighbours. This is exactly the sum-aggregation in GNNs without normalisation. The GCN adds the normalisation weights $1/\sqrt{\tilde{d}_i \tilde{d}_j}$ as the data array entries (instead of 1s).
GNN message passing as SpMV. The GCN aggregation step $\hat{A} H$ (where $\hat{A}$ is sparse and $H \in \mathbb{R}^{n \times d}$ is dense) is a sparse-dense matrix multiply (SpMM). It generalises SpMV from vectors to matrices: one SpMV per feature column.
Modern GPU libraries (cuSPARSE, torch_sparse) implement SpMM with significant optimisation: tiling, register blocking, and warp-level parallelism.
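A CPU sketch of the same SpMM pattern with SciPy makes this explicit; `H` is a stand-in feature matrix (random values, illustrative only):

import numpy as np
import scipy.sparse as sp

# Diamond graph in CSR (arrays from 6.2)
data = np.ones(10)
col_idx = np.array([1, 2, 0, 2, 3, 0, 1, 3, 1, 2])
row_ptr = np.array([0, 2, 5, 8, 10])
A = sp.csr_matrix((data, col_idx, row_ptr), shape=(4, 4))

H = np.random.randn(4, 16)  # hypothetical node features, d = 16
out = A @ H                 # SpMM: one SpMV per feature column
print(out.shape)            # (4, 16)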
6.7 SciPy Sparse API
SciPy provides a comprehensive sparse matrix library (scipy.sparse) with six formats:
from scipy import sparse
# Build from COO
A_coo = sparse.coo_matrix((data, (row, col)), shape=(n, n))
# Convert to CSR (most common for computation)
A_csr = A_coo.tocsr()
# Convert to CSC
A_csc = A_csr.tocsc()
# LIL for incremental construction
A_lil = sparse.lil_matrix((n, n))
A_lil[0, 1] = 1.0
A_csr = A_lil.tocsr() # convert when done building
# SpMV
x = np.random.randn(n)
y = A_csr @ x # uses an optimised sparse kernel
# Eigenvalues (interface to ARPACK)
from scipy.sparse.linalg import eigsh
eigenvalues, eigenvectors = eigsh(A_csr, k=6, which='SM')
Conversion cost summary:
| From -> To | Cost | Method |
|---|---|---|
| COO -> CSR | $O(n + m)$ | .tocsr() |
| CSR -> COO | $O(m)$ | .tocoo() |
| CSR -> CSC | $O(n + m)$ | .tocsc() |
| LIL -> CSR | $O(n + m)$ | .tocsr() |
| DOK -> COO | $O(m)$ | .tocoo() |
6.8 Format Comparison Table
SPARSE FORMAT COMPARISON
========================================================================
Format Space EdgeQuery RowAccess ColAccess Insert Best Use
-------------------------------------------------------------------------
CSR O(n+m) O(deg) O(deg) O(m) Slow SpMV, GNN
CSC O(n+m) O(deg) O(m) O(deg) Slow SpMM cols
COO O(m) O(m) O(m) O(m) O(1) I/O, PyG
LIL O(n+m) O(deg) O(deg) Slow O(deg) Build
DOK O(m) O(1)avg Slow Slow O(1) Updates
Dense O(n^2) O(1) O(n) O(n) O(1) n <= 10^4
========================================================================
6.9 Benchmark: Real Memory Usage
To make the trade-offs concrete, consider a Cora-scale citation graph ($n = 2708$, $m = 5429$):
| Format | Arrays stored | Memory (bytes) | Relative |
|---|---|---|---|
| Dense float32 | $n^2$ values | 29.3 MB | 1\times |
| Dense bool | $n^2$ bits | 0.92 MB | 0.031\times |
| CSR int32 (unweighted) | $(n + 1) + 2m$ ints | 54 KB | 0.002\times |
| COO int32 | $2m$ ints | 43 KB | 0.0015\times |
| PyG edge_index int64 | $2 \times 2m$ ints (bidirectional) | 174 KB | 0.006\times |
CSR gives a 540\times memory reduction over dense float32. For billion-edge graphs (OGB-Papers100M: $n \approx 1.1 \times 10^8$, $m \approx 1.6 \times 10^9$), the dense matrix would require $\sim 5 \times 10^{16}$ bytes - physically impossible - while CSR uses roughly 26 GB (int64 indices), fitting on a single high-memory node.
Throughput comparison (SpMV, single CPU core, Cora graph):
| Format | SpMV time (ms) | Throughput (GFLOPS) |
|---|---|---|
| Dense (NumPy BLAS) | 0.8 | 0.02 |
| CSR (SciPy) | 0.04 | 0.27 |
| COO (manual) | 0.05 | 0.22 |
Dense SpMV wastes 99.9% of its operations on zero entries. CSR SpMV achieves over 13\times higher throughput by operating only on the non-zeros.
7. Incidence Matrix
7.1 Definition and Properties
Definition (Incidence Matrix - Undirected). For a graph $G$ with $n$ vertices and $m$ edges, the incidence matrix $B \in \{0, 1\}^{n \times m}$ has:
$$B_{ie} = \begin{cases} 1 & \text{if vertex } i \text{ is an endpoint of edge } e \\ 0 & \text{otherwise} \end{cases}$$
Rows are indexed by vertices, columns by edges. Each column has exactly two 1-entries (one per endpoint). Each row has as many 1-entries as the vertex's degree.
Worked example - path $P_4$: Vertices $\{0, 1, 2, 3\}$, edges $e_1 = \{0, 1\}$, $e_2 = \{1, 2\}$, $e_3 = \{2, 3\}$.
$$B = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$
Basic properties:
- Each column sums to 2: $\sum_i B_{ie} = 2$ for all edges $e$
- Row sum of $B$ = degree: $\sum_e B_{ie} = \deg(i)$
- Space: $O(nm)$ - often worse than the adjacency matrix for dense graphs, but informative for sparse hypergraphs
7.2 Directed Incidence Matrix
For a directed graph (digraph) with edges $e = (u, v)$ - tail $u$, head $v$:
$$B_{ie} = \begin{cases} -1 & \text{if } i \text{ is the tail of } e \\ +1 & \text{if } i \text{ is the head of } e \\ 0 & \text{otherwise} \end{cases}$$
The signed incidence matrix encodes edge direction. Each column still has exactly two non-zero entries: $+1$ at the head, $-1$ at the tail.
Curl and divergence on graphs. The directed incidence matrix plays the role of the discrete gradient operator:
- $B^\top x$ gives the "difference" across each directed edge - a discrete gradient
- $B y$ gives the net flow divergence at each vertex - the flow in minus flow out
This structure underlies discrete Hodge theory and graph signal processing.
7.3 The Identity $L = BB^\top$
Theorem. For an undirected graph with signed (oriented) incidence matrix $B$ - orient each edge arbitrarily:
$$BB^\top = D - A = L$$
Proof. The $(i, j)$ entry of $BB^\top$ is $\sum_e B_{ie} B_{je}$.
- If $i = j$: $\sum_e B_{ie}^2 = \deg(i)$ (since $B_{ie}^2 = 1$ for every edge incident to $i$, this counts edges incident to $i$).
- If $i \neq j$ and $\{i, j\} \in E$: exactly one edge $e$ has both $B_{ie} \neq 0$ and $B_{je} \neq 0$, with opposite signs, so $\sum_e B_{ie} B_{je} = -1$.
- If $i \neq j$ and $\{i, j\} \notin E$: no edge contributes, so the entry is $0$.
Thus $BB^\top = D - A = L$.
Corollary: $L$ is positive semidefinite (since $x^\top L x = \|B^\top x\|^2 \geq 0$).
For the unsigned incidence matrix, the same computation gives $BB^\top = D + A$ - the signless Laplacian - because the shared edge contributes $+1$ instead of $-1$.
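A quick NumPy check of both identities on the diamond graph, using one arbitrary orientation per edge (columns of `Bd`: $-1$ at the tail, $+1$ at the head):

import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]  # oriented tail -> head
n, m = 4, len(edges)

B = np.zeros((n, m))   # unsigned incidence
Bd = np.zeros((n, m))  # oriented incidence
for e, (u, v) in enumerate(edges):
    B[u, e] = B[v, e] = 1
    Bd[u, e], Bd[v, e] = -1, +1

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
D = np.diag(A.sum(axis=1))

assert np.allclose(Bd @ Bd.T, D - A)  # oriented: the Laplacian L
assert np.allclose(B @ B.T, D + A)    # unsigned: the signless Laplacian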
For AI: The identity $L = BB^\top$ gives a factored representation of the Laplacian that is useful in:
- Graph signal processing: High-frequency signals have large $\|B^\top x\|^2 = x^\top L x$
- Optimisation: minimising $x^\top L x$ subject to constraints is the basis of Laplacian smoothing in GNNs
- Network flow: $By = 0$ is the flow conservation constraint (Kirchhoff's current law)
7.4 Hypergraph Incidence Matrix
The incidence matrix generalises naturally to hypergraphs - graphs where edges (hyperedges) can connect more than two vertices.
Definition (Hypergraph Incidence Matrix). For a hypergraph $H = (V, \mathcal{E})$ where each hyperedge $e \in \mathcal{E}$ is a subset of vertices:
$$B_{ie} = \begin{cases} 1 & \text{if } v_i \in e \\ 0 & \text{otherwise} \end{cases}$$
Each column has $|e|$ ones (the size of the hyperedge). The ordinary graph incidence matrix is the special case $|e| = 2$ for all edges.
For AI: Hypergraph representations arise in:
- Group interactions: Social events involving many participants simultaneously
- Multi-head attention: Each attention head defines a weighted hyperedge over all query-key pairs
- Co-authorship networks: Each paper is a hyperedge connecting all its authors
- Higher-order GNNs: $k$-WL tests and $k$-dimensional GNNs operate on $k$-tuples of vertices, naturally represented as hyperedges
7.5 When to Use the Incidence Matrix
Use the incidence matrix when:
- The primary operations are edge-based (flow, cut, cycle) rather than vertex-based
- Working with hypergraphs (the incidence matrix is the natural structure)
- Implementing Hodge decomposition or discrete exterior calculus
- Teaching or deriving the Laplacian algebraically ($L = BB^\top$)
Avoid when:
- $n$ or $m$ is large (space is $O(nm)$, often worse than alternatives)
- The primary need is neighbour enumeration or edge queries
8. Representation Conversions
8.1 Conversion Complexity Table
| From -> To | Time | Space | Notes |
|---|---|---|---|
| Edge list -> Adjacency matrix | $O(n^2 + m)$ | $O(n^2)$ | Allocate matrix, fill entries |
| Edge list -> Adjacency list | $O(n + m)$ | $O(n + m)$ | Hash each edge to its endpoints |
| Edge list -> COO | $O(m)$ | $O(m)$ | Repack into parallel arrays |
| Edge list -> CSR | $O(m \log m)$ or $O(n + m)$ | $O(n + m)$ | Sort by source, prefix-sum |
| Adjacency matrix -> Edge list | $O(n^2)$ | $O(m)$ | Scan for non-zeros |
| Adjacency matrix -> CSR | $O(n^2)$ | $O(n + m)$ | Scan row by row |
| COO -> CSR | $O(n + m)$ | $O(n + m)$ | Counting sort on rows, prefix-sum |
| CSR -> COO | $O(m)$ | $O(m)$ | Expand row_ptr |
| CSR -> CSC | $O(n + m)$ | $O(n + m)$ | Transpose |
| CSR -> Adjacency list | $O(n + m)$ | $O(n + m)$ | One list per row |
8.2 Edge List <-> Adjacency Matrix
import numpy as np

def edge_list_to_matrix(edges, n, directed=False, weighted=False):
"""Convert edge list to numpy adjacency matrix. O(n^2 + m)."""
A = np.zeros((n, n), dtype=float if weighted else int)
for edge in edges:
u, v = edge[0], edge[1]
w = edge[2] if weighted and len(edge) > 2 else 1
A[u, v] = w
if not directed:
A[v, u] = w
return A
def matrix_to_edge_list(A, directed=False, weighted=False):
"""Convert adjacency matrix to edge list. O(n^2)."""
n = len(A)
edges = []
for i in range(n):
j_range = range(n) if directed else range(i+1, n)
for j in j_range:
if A[i, j] != 0:
edges.append((i, j, A[i, j]) if weighted else (i, j))
return edges
8.3 Edge List <-> Adjacency List
def edge_list_to_adj_list(edges, n, directed=False):
"""Convert edge list to adjacency list. O(n + m)."""
adj = {i: [] for i in range(n)}
for edge in edges:
u, v = edge[0], edge[1]
adj[u].append(v)
if not directed:
adj[v].append(u)
return adj
def adj_list_to_edge_list(adj, directed=False):
"""Convert adjacency list to edge list. O(n + m)."""
edges = set()
for u, nbrs in adj.items():
for v in nbrs:
if directed or (v, u) not in edges:
edges.add((u, v))
return list(edges)
8.4 Adjacency Matrix <-> CSR
import numpy as np

def matrix_to_csr(A):
"""Dense matrix to CSR. O(n^2) scan."""
n = len(A)
data, col_idx, row_ptr = [], [], [0]
for i in range(n):
for j in range(n):
if A[i, j] != 0:
data.append(A[i, j])
col_idx.append(j)
row_ptr.append(len(data))
return np.array(data), np.array(col_idx), np.array(row_ptr)
# In practice: use scipy.sparse
import scipy.sparse as sp
A_csr = sp.csr_matrix(A_dense)
A_dense_back = A_csr.toarray()
8.5 COO <-> CSR <-> CSC
import numpy as np

def coo_to_csr(row, col, data, n):
"""COO to CSR via sorting. O(m log m)."""
# Sort by (row, col)
order = np.lexsort((col, row))
row_sorted, col_sorted = row[order], col[order]
data_sorted = data[order]
# Build row_ptr via counting sort
row_counts = np.bincount(row_sorted, minlength=n)
row_ptr = np.zeros(n + 1, dtype=int)
np.cumsum(row_counts, out=row_ptr[1:])
return data_sorted, col_sorted, row_ptr
# Using scipy (recommended in practice):
A_csr = sp.csr_matrix((data, (row, col)), shape=(n, n))
A_csc = A_csr.tocsc() # O(m) transpose
A_coo = A_csr.tocoo() # O(m) expansion
8.6 PyG <-> NetworkX <-> SciPy
# NetworkX -> PyG edge_index
import networkx as nx
import torch
def nx_to_edge_index(G):
edges = list(G.edges())
if not G.is_directed():
# Add reverse edges for undirected
edges += [(v, u) for u, v in edges]
return torch.tensor(edges, dtype=torch.long).t().contiguous()
# PyG edge_index -> NetworkX
def edge_index_to_nx(edge_index, directed=True):
G = nx.DiGraph() if directed else nx.Graph()
G.add_edges_from(edge_index.t().tolist())
return G
# SciPy CSR -> PyG edge_index
def csr_to_edge_index(A_csr):
coo = A_csr.tocoo()
return torch.tensor([coo.row, coo.col], dtype=torch.long)
# PyG edge_index -> SciPy CSR
def edge_index_to_csr(edge_index, n):
row, col = edge_index[0].numpy(), edge_index[1].numpy()
data = np.ones(len(row))
return sp.csr_matrix((data, (row, col)), shape=(n, n))
9. Heterogeneous and Dynamic Graphs
9.1 Heterogeneous Graphs
A heterogeneous graph (also called hetero-graph or knowledge graph) has multiple types of nodes and/or edges:
$$G = (V, E, \tau, \phi)$$
where $\tau : V \to T_V$ assigns each vertex a node type and $\phi : E \to T_E$ assigns each edge a relation type.
Example - academic knowledge graph:
- Node types: Author, Paper, Venue
- Edge types: writes (Author->Paper), published_in (Paper->Venue), cites (Paper->Paper), co-author (Author<->Author)
For each relation type (source type, relation, target type), we maintain a separate adjacency matrix $A^{(r)}$ or edge_index tensor.
With $|T_V|$ node types and $|T_E|$ relation types, we have at most $|T_V|^2 \cdot |T_E|$ relation-specific matrices, each typically much smaller than the full graph.
Knowledge graph triple format. Knowledge graphs (Freebase, Wikidata, OGB-MAG) store relations as triples $(h, r, t)$ - head entity, relation, tail entity. This is an edge list over a heterogeneous graph:
triples = [
("Albert_Einstein", "nationality", "Germany"),
("Theory_of_Relativity", "author", "Albert_Einstein"),
...
]
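A minimal sketch of how such triples are regrouped into per-relation edge_index tensors (the entity-id mapping and grouping are illustrative, not a fixed library API):

from collections import defaultdict
import torch

triples = [
    ("Albert_Einstein", "nationality", "Germany"),
    ("Theory_of_Relativity", "author", "Albert_Einstein"),
    ("Albert_Einstein", "field", "Physics"),
]

ent2id = {}  # entity vocabulary, built on first sight
def eid(name):
    return ent2id.setdefault(name, len(ent2id))

by_relation = defaultdict(list)
for h, r, t in triples:
    by_relation[r].append((eid(h), eid(t)))

# One (2, m_r) edge_index per relation type
edge_index = {r: torch.tensor(pairs, dtype=torch.long).t().contiguous()
              for r, pairs in by_relation.items()}
print({r: ei.shape for r, ei in edge_index.items()})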
9.2 PyG HeteroData
PyG's HeteroData object generalises Data to heterogeneous graphs:
from torch_geometric.data import HeteroData
data = HeteroData()
# Node feature matrices per type
data['author'].x = torch.randn(n_authors, 64)
data['paper'].x = torch.randn(n_papers, 128)
# Edge indices and features per relation type
data['author', 'writes', 'paper'].edge_index = ... # (2, m_writes)
data['paper', 'cites', 'paper'].edge_index = ... # (2, m_cites)
data['paper', 'cites', 'paper'].edge_attr = ... # (m_cites, d_e)
print(data.node_types) # ['author', 'paper']
print(data.edge_types) # [('author','writes','paper'), ('paper','cites','paper')]
Heterogeneous message passing (RGCN, HAN, HGT) iterates over relation types and aggregates messages per relation; RGCN, for example, uses:
$$h_i^{(l+1)} = \sigma\Big( W_0^{(l)} h_i^{(l)} + \sum_{r \in T_E} \sum_{j \in N_r(i)} \frac{1}{|N_r(i)|}\, W_r^{(l)} h_j^{(l)} \Big)$$
9.3 Dynamic Graphs
A dynamic graph changes over time: edges and vertices are inserted or deleted. The primary representation challenge is supporting updates without full reconstruction.
Adjacency list handles edge insertion in $O(1)$ and deletion in $O(\deg)$. It is the standard choice for dynamic graph algorithms.
CSR is static: every insertion or deletion requires rebuilding the three arrays - $O(n + m)$ total. For high-frequency updates, CSR is inappropriate.
Dynamic CSR alternatives:
- Chunked CSR: Divide the adjacency list into fixed-size chunks; insertions overflow into a "delta" structure
- VCSR (Vertex-Centric CSR): Each row is a separate resizeable array; updates are local
- Sorted adjacency list: Maintain sorted neighbours per vertex for $O(\log \deg)$ edge lookup
9.4 Temporal Graphs and Snapshots
A temporal graph has edges with timestamps $t$:
$$E \subseteq \{(u, v, t) : u, v \in V,\ t \in \mathbb{R}\}$$
The most common representation is a snapshot list: $[G_1, G_2, \dots, G_T]$, where $G_t$ is the graph at time step $t$. Each snapshot can be stored as a separate Data object:
snapshots = [
Data(x=x_t, edge_index=edge_index_t)
for t, (x_t, edge_index_t) in enumerate(time_series)
]
For AI - temporal GNNs:
- TGAT (Temporal Graph Attention, 2020): Stores interactions as chronologically sorted edge lists with timestamp features; uses time-encoding functions as positional features
- DyRep (2019): Maintains a dynamic adjacency list updated via temporal point processes
- TGN (Temporal Graph Networks, 2020): Uses memory modules per node to summarise temporal history; edge list sorted by timestamp is the core data structure
The temporal edge list sorted by $t$ is the standard input format for temporal GNN benchmarks (JODIE, REDDIT datasets in OGB-Temporal).
9.5 Bipartite Graph Representations
Many real-world graphs are bipartite: vertices are split into two disjoint sets $U$ and $V$ with edges only between $U$ and $V$ - no edges within $U$ or within $V$.
Examples: User-item interactions (recommendation systems), author-paper graphs, word-document matrices, student-course enrollment.
Bipartite graphs require rectangular adjacency matrices rather than square matrices. This changes the representation:
| Representation | Bipartite form |
|---|---|
| Adjacency matrix | $A \in \{0, 1\}^{\lvert U \rvert \times \lvert V \rvert}$ (rectangular) |
| Adjacency list | adj_u[u] lists vertices in $V$; adj_v[v] lists vertices in $U$ |
| Edge list | pairs $(u, v)$ with $u \in U$, $v \in V$ |
| COO | edge_index with source indices in $[0, \lvert U \rvert)$ and target indices in $[0, \lvert V \rvert)$ |
In PyG, bipartite graphs use a 2-tuple node feature convention:
data = Data(
x=(x_u, x_v), # node features for U and V separately
edge_index=edge_index, # source \in [0,|U|), target \in [0,|V|)
)
For AI - collaborative filtering. The matrix factorization model for recommendation can be viewed as operating on a bipartite user-item graph with adjacency $R \in \{0, 1\}^{\lvert U \rvert \times \lvert V \rvert}$:
$$\hat{R} = P Q^\top$$
where $P$ (user embeddings) and $Q$ (item embeddings) are learned. LightGCN (He et al., 2020) applies GCN layers directly to this bipartite graph, propagating user embeddings through item nodes and vice versa. The core operation is two rectangular SpMM passes: $R X_V$ (aggregate items -> users) and $R^\top X_U$ (aggregate users -> items).
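The two rectangular passes are easy to express with SciPy; a sketch with a random interaction matrix (`R`, `X_u`, `X_v` are illustrative placeholders, not real data):

import numpy as np
import scipy.sparse as sp

n_users, n_items, d = 1000, 500, 32
# Hypothetical sparse user-item interactions (~1% fill)
R = sp.random(n_users, n_items, density=0.01, format="csr")

X_u = np.random.randn(n_users, d)  # user embeddings
X_v = np.random.randn(n_items, d)  # item embeddings

users_agg = R @ X_v    # aggregate items -> users, shape (n_users, d)
items_agg = R.T @ X_u  # aggregate users -> items, shape (n_items, d)
print(users_agg.shape, items_agg.shape)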
9.6 Multiplex and Signed Graphs
Multiplex graphs have multiple layers of edges between the same vertex set - different edge types exist in parallel:
$$G = (V, E_1, E_2, \dots, E_K)$$
Each layer $k$ is a separate adjacency matrix $A^{(k)}$. The representation is a tensor $\mathcal{A} \in \mathbb{R}^{K \times n \times n}$ (a stack of adjacency matrices).
Applications: Social networks with multiple relation types (follows, likes, messages), temporal layers (contacts at different time windows), transportation networks (road, rail, air in the same city).
Signed graphs have edges with positive or negative weights - representing trust/distrust, attraction/repulsion, or cooperation/competition. The adjacency matrix has entries in $\{-1, 0, +1\}$ (or $\mathbb{R}$ for weighted signed graphs). Signed graph Laplacians have non-standard spectral properties; they are studied in signed spectral graph theory.
For AI: Signed graphs arise in:
- Sentiment analysis: Stance graphs where edges are agree/disagree
- Financial networks: Positive = correlated assets, negative = anti-correlated
- Recommendation: Explicit dislikes as negative edges in collaborative filtering
- RL reward shaping: Signed reward graphs encode preference relations
10. Choosing a Representation
10.1 Decision Framework
REPRESENTATION SELECTION FLOWCHART
========================================================================
START
|
Is n \leq 10,000 AND dense?
+---------+----------+
YES NO
| |
Adjacency Matrix Is graph dynamic (frequent updates)?
+---------+----------+
YES NO
| |
Adjacency List Target GPU / PyG?
+---------+----------+
YES NO
| |
edge_index Need SpMV / eigenvalues?
(COO/PyG) +---------+----------+
YES NO
| |
CSR Adj. List or
(scipy.sparse) Edge List
========================================================================
10.2 By Graph Size and Density
| Graph Size | Density | Recommended | Rationale |
|---|---|---|---|
| $n \leq 10^3$ | Any | Adjacency matrix | Simplest; $O(n^2)$ is fine |
| $n \leq 10^4$, dense | High | Adjacency matrix | Dense matmul is GPU-efficient |
| $10^4 < n \leq 10^8$, sparse | Low | CSR or adj. list | $O(n + m)$ memory |
| $n > 10^8$ | Any | CSR + edge list | Billion-edge graphs need streaming |
| Batch of small graphs | Any | Dense padded tensors or PyG batch | GPU efficiency via padding |
10.3 By Algorithm Type
| Algorithm | Best Representation | Reason |
|---|---|---|
| BFS / DFS | Adjacency list | Sequential neighbour access |
| Dijkstra / Bellman-Ford | Adjacency list (sorted) | Priority queue needs fast neighbours |
| PageRank / power iteration | CSR | Repeated SpMV |
| Spectral clustering | CSR -> eigsh | Laplacian eigenvectors via ARPACK |
| GCN / MPNN | COO / edge_index | GPU scatter operations |
| Link prediction | Adjacency set | Fast edge existence queries |
| Subgraph sampling | CSR | Row-slice for $k$-hop neighbourhood |
| Shortest path (all pairs) | Adjacency matrix | Floyd-Warshall uses matrix |
10.4 By Hardware Target
CPU (single node):
- SciPy CSR for numerical work
- NetworkX adjacency dict for prototyping
- Numba-JIT over CSR arrays for performance
GPU (single card):
- PyG
edge_index+ cuSPARSE SpMM - For very large graphs: PyG with
NeighborSampler(CSR-based sampling) - DGL for production GNN training (CSR internally, CUDA-optimised)
Multi-GPU / distributed:
- Partitioned edge lists (METIS graph partitioning)
- DistDGL, GraphLearn, or AliGraph for trillion-edge graphs
- Edge list files partitioned on HDFS / S3
TPU:
- Padded dense adjacency matrices (fixed shape required for XLA compilation)
- Jraph (JAX graph library) uses padded dense representation
10.5 By Framework
| Framework | Primary format | Notes |
|---|---|---|
| NetworkX | dict of dict (adj. dict) | Pure Python; slow but feature-rich |
| PyTorch Geometric | edge_index (COO) | Standard for GNN research |
| DGL | CSR DGLGraph | Optimised for large heterogeneous graphs |
| SciPy / NumPy | CSR csr_matrix | Numerical computing; integrates with ARPACK |
| TensorFlow GNN | GraphTensor (COO-based) | TF-native graph representation |
| Jraph (JAX) | Padded dense GraphsTuple | TPU-optimised |
| SNAP (Stanford) | Binary edge lists | Ultra-large graphs ($10^9+$ edges) |
10.6 Case Study: Representing the OGB-Arxiv Graph
The OGB-Arxiv dataset (Open Graph Benchmark) is a directed citation network: $n = 169{,}343$ nodes (papers), $m = 1{,}166{,}243$ edges (citations). Let us trace the representation choices made by three frameworks:
NetworkX (baseline, not recommended for training):
G = nx.DiGraph()
G.add_edges_from(edge_list)
# Memory: ~800 MB (Python dict overhead)
# Time to iterate all edges: ~12 s
PyTorch Geometric (research standard):
dataset = PygNodePropPredDataset(name='ogbn-arxiv')  # from ogb.nodeproppred
data = dataset[0]
# data.edge_index: shape (2, 2_332_486) = bidirectional edges
# data.x: shape (169_343, 128) = feature matrix
# Memory: ~37 MB for edge_index (int64)
# Training time (3-layer GCN, epoch): ~0.4 s on A100
DGL (production):
g = dgl.graph((src, dst), num_nodes=n)
# Internal: CSR for forward pass, CSC for backward pass
# Memory: ~20 MB (int32 indices)
# Training time (3-layer GCN, epoch): ~0.3 s on A100 (faster due to CSR)
The choice between PyG COO and DGL CSR is a 25% throughput difference at this scale - a significant factor in research iteration speed.
11. Preview: Algorithms and Spectral Methods
11.1 How 03 Graph Algorithms Uses These Representations
Graph algorithms depend critically on the choice of representation. The asymptotic complexities in 03 are derived assuming specific representations:
| Algorithm | Representation | Complexity |
|---|---|---|
| BFS | Adjacency list | $O(n + m)$ |
| Dijkstra (binary heap) | Adjacency list | $O((n + m) \log n)$ |
| Bellman-Ford | Edge list | $O(nm)$ |
| Kruskal MST | Edge list (sorted) | $O(m \log m)$ |
| Prim MST | Adjacency list + heap | $O(m \log n)$ |
| Floyd-Warshall | Adjacency matrix | $O(n^3)$ |
| DFS-based SCC | Adjacency list | $O(n + m)$ |
Using the wrong representation can change asymptotic complexity: Dijkstra with an adjacency matrix runs in $O(n^2)$ instead of $O((n + m) \log n)$ - roughly a factor of $n / \log n$ slower for sparse graphs. Full algorithmic treatment: 03 Graph Algorithms.
11.2 How 04 Spectral Theory Uses the Laplacian
The Laplacian $L = D - A$ defined in 3.4 is the central object of spectral graph theory. Its eigendecomposition reveals:
- $\lambda_1 = 0$ always (graph has at least one component)
- Number of zero eigenvalues: number of connected components
- $\lambda_2$ (Fiedler value): measures connectivity strength
- Eigenvector $v_2$ (Fiedler vector): partitions the graph for spectral clustering
Computing the Fiedler vector requires eigsh(L, k=2, which='SM') - a sparse eigensolver that takes CSR format as input. The representation choice (CSR) directly enables the spectral analysis.
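A sketch of that pipeline on the diamond graph, from sparse adjacency to Fiedler vector (its Laplacian spectrum is $0, 2, 4, 4$):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

A = sp.csr_matrix(np.array([[0, 1, 1, 0],
                            [1, 0, 1, 1],
                            [1, 1, 0, 1],
                            [0, 1, 1, 0]], dtype=float))
deg = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(deg) - A  # combinatorial Laplacian, stays sparse

vals, vecs = eigsh(L, k=2, which='SM')  # two smallest eigenvalues
print(vals.round(4))    # [0. 2.]
fiedler = vecs[:, 1]    # Fiedler vector, here proportional to (1, 0, 0, -1)
print(fiedler.round(3))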
Full spectral treatment: 04 Spectral Graph Theory.
11.3 How 05 GNNs Use edge_index and Sparse Adjacency
The three core GNN architectures use the representations from this section differently:
| Architecture | Representation Used | Core Operation |
|---|---|---|
| GCN (Kipf & Welling, 2017) | Sparse $\hat{A}$ (CSR or COO) | $\hat{A} H W$ - SpMM |
| GraphSAGE (Hamilton et al., 2017) | Adjacency list (neighbour sampling) | Sample $k$ neighbours per node |
| GAT (Velickovic et al., 2018) | edge_index (COO) | Attention per edge: $\alpha_{ij}$ computed and aggregated |
Full GNN architectures, expressiveness theory, and training details: 05 Graph Neural Networks.
12. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Using a dense adjacency matrix for a million-node graph | $O(n^2)$ space $\approx 4 \times 10^{12}$ bytes (float32) for $n = 10^6$ - physically impossible | Use CSR or edge_index; dense is only viable for $n \lesssim 10^4$ |
| 2 | Forgetting that edge_index is directed by default in PyG | Undirected edges require both $(u, v)$ and $(v, u)$ to appear; missing the reverse direction causes asymmetric message passing | Use torch_geometric.utils.to_undirected(edge_index) or add reverse edges explicitly |
| 3 | Writing $A^T$ instead of $A^\top$ for the transpose | Cosmetic but matters: in NOTATION_GUIDE, transpose is ^\top, not ^T - in "$A^T$" the $T$ looks like a variable name | Write $A^\top$ consistently |
| 4 | Assuming the Laplacian is $A - D$ instead of $L = D - A$ | The sign matters: $D - A$ is positive semidefinite; $A - D$ is negative semidefinite. GCN derivations assume $L = D - A$ | Always write $L = D - A$ and verify $x^\top L x \geq 0$ |
| 5 | Confusing CSR row_ptr length with $n$ (instead of $n + 1$) | row_ptr has $n + 1$ entries: row_ptr[0] = 0, row_ptr[n] = nnz. With only $n$ entries allocated, indexing row_ptr[n] is out-of-bounds and crashes code | Always allocate row_ptr = np.zeros(n+1); the extra entry is essential |
| 6 | Adding self-loops but normalising with the original degrees in GCN | GCN normalisation is $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ where $\tilde{A} = A + I$ and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$. If you add self-loops to $A$ but normalise with the original $D$, every degree is off by one | Apply self-loops and recompute the degree before normalising (see the sketch below) |
| 7 | Using adjacency matrix eigendecomposition when the graph is sparse and large | np.linalg.eig(A) is $O(n^3)$ - infeasible for $n \gtrsim 10^4$ | Use scipy.sparse.linalg.eigsh with ARPACK for sparse matrices; it computes only the $k$ requested eigenvalues at roughly $O(km)$ cost per iteration |
| 8 | Treating the incidence matrix as $m \times n$ instead of $n \times m$ | The incidence matrix $B$ has $n$ rows (vertices) and $m$ columns (edges). Confusing rows and columns leads to wrong shapes in $BB^\top$ | Verify: B.shape == (n, m); B @ B.T gives the $n \times n$ Laplacian $L$ (oriented case) |
| 9 | Storing undirected graphs as directed without both directions | If adj[u] contains v but adj[v] doesn't contain u, BFS/DFS will give wrong results and message passing will be asymmetric | Always add both adj[u].append(v) and adj[v].append(u) for undirected graphs |
| 10 | Using COO format for large repeated SpMV | COO lacks row-pointer structure, so each SpMV needs scattered writes or an $O(m \log m)$ sort to group entries by row. For PageRank or GNN training (many iterations), this is catastrophic | Convert COO to CSR once before the iteration loop; repeated SpMV on CSR is $O(n + m)$ per call |
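Mistake 6 is the most common in hand-rolled GCN layers. A minimal sketch of the correct order, on an assumed 3-node toy graph (dense $A$ for clarity):

```python
import torch

# Self-loops FIRST, then recompute degrees, then normalise.
A = torch.tensor([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
A_tilde = A + torch.eye(3)                 # step 1: add self-loops
deg = A_tilde.sum(dim=1)                   # step 2: degrees of A_tilde, NOT A
D_inv_sqrt = torch.diag(deg.pow(-0.5))     # D-tilde^{-1/2}
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # step 3: normalise
```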
13. Exercises
Exercise 1 * - Manual CSR Construction
Given the path graph $P_4$ with vertices $\{0, 1, 2, 3\}$ and edges $\{(0,1), (1,2), (2,3)\}$:
(a) Write out the adjacency matrix $A$.
(b) Manually construct the CSR arrays data, col_idx, row_ptr.
(c) Verify: data[row_ptr[2]:row_ptr[3]] gives the non-zeros in row 2.
(d) Compute the degree of each vertex from row_ptr.
Exercise 2 * - Space Complexity Comparison
For a graph with $n = 10^5$ vertices and $m = 10^6$ edges:
(a) Compute the exact number of bytes required for a dense float32 adjacency matrix.
(b) Compute the bytes for CSR representation (int32 indices, float32 data).
(c) Compute the bytes for edge_index (int64 COO).
(d) What fill ratio would make the dense matrix and CSR consume equal memory?
Exercise 3 * - Laplacian Derivation
For the star graph on five vertices (centre vertex 0, leaves 1, 2, 3, 4):
(a) Write the adjacency matrix $A$ and degree matrix $D$.
(b) Compute $L = D - A$.
(c) Verify $L\mathbf{1} = \mathbf{0}$ (every row of $L$ sums to zero).
(d) Compute the quadratic form $x^\top L x = \sum_{(u,v) \in E} (x_u - x_v)^2$ for $x = (1, 0, 0, 0, 0)^\top$. What does this value measure?
Exercise 4 ** - Conversion Pipeline
Implement the full conversion pipeline:
(a) Start with edge list [(0,1),(0,2),(1,3),(2,3),(3,4)], $n = 5$.
(b) Convert to adjacency list (Python dict).
(c) Convert adjacency list to COO arrays.
(d) Convert COO to CSR manually (without scipy).
(e) Implement SpMV using your CSR arrays; verify against A @ x for $x = \mathbf{1}$ (the all-ones vector; the result should be the degree sequence).
Exercise 5 ** - Normalised Adjacency
For the cycle graph $C_4$:
(a) Construct the adjacency matrix $A$ and degree matrix $D$.
(b) Compute the GCN normalised adjacency $\hat{A} = D^{-1/2} A D^{-1/2}$.
(c) Compute the augmented version: $\tilde{A} = A + I$, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, $\hat{A}_{\text{aug}} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$.
(d) Verify that the eigenvalues of $\hat{A}$ lie in $[-1, 1]$ and the eigenvalues of $\hat{A}_{\text{aug}}$ lie in $(-1, 1]$.
(e) Explain: why does the normalised Laplacian's eigenvalue bound matter for GNN stability?
Exercise 6 ** - Incidence Matrix and Laplacian
For the triangle graph $K_3$ (vertices 0, 1, 2; edges $(0,1)$, $(1,2)$, $(0,2)$):
(a) Construct the (unsigned) incidence matrix $B \in \{0, 1\}^{3 \times 3}$.
(b) Compute $BB^\top$ and verify it equals $D + A$.
(c) Construct the directed incidence matrix $\tilde{B}$ (orient edges $0 \to 1$, $1 \to 2$, $2 \to 0$).
(d) Verify that $\tilde{B}\tilde{B}^\top = D - A = L$ holds regardless of the chosen orientation.
(e) Interpret $(\tilde{B}^\top x)_e$ for edge $e = (u, v)$: what is $x_u - x_v$ measuring geometrically on the graph?
Exercise 7 *** - PyG Data Object from Scratch
Without using PyG's built-in loaders, construct a torch_geometric.data.Data object from scratch for the karate club graph:
(a) Load the karate club edge list from networkx.karate_club_graph().
(b) Build edge_index (COO, bidirectional, shape $(2, 156)$) without using from_networkx.
(c) Build x as a one-hot encoding of each vertex's community (use G.nodes[v]['club']).
(d) Compute edge_attr as the normalised weight for each edge (karate club edges have weights from G[u][v]['weight']).
(e) Verify: data.num_nodes == 34, data.num_edges == 2*78 == 156, data.is_undirected() == True.
Exercise 8 *** - Heterogeneous Graph Representation
Consider an academic graph with:
- 100 authors, 200 papers, 20 venues
- Relations: `(author, writes, paper)` with 400 edges; `(paper, published_in, venue)` with 200 edges; `(paper, cites, paper)` with 800 edges
(a) Design the heterogeneous adjacency representation: how many separate edge_index tensors? What are their shapes?
(b) Compute the total memory for all edge_index tensors (int64, COO bidirectional where appropriate).
(c) Implement a simple heterogeneous degree function: for each author, count the number of papers they wrote.
(d) Write the RGCN-style aggregation for author nodes in pseudocode:
$$h_a \leftarrow \sigma\Big(\tfrac{1}{\lvert \mathcal{N}_{\text{writes}}(a) \rvert} \sum_{p \in \mathcal{N}_{\text{writes}}(a)} W_{\text{writes}}\, h_p\Big)$$
where $\mathcal{N}_{\text{writes}}(a)$ is the set of papers written by author $a$.
(e) Implement this aggregation using `torch_scatter.scatter_mean` (or `Tensor.scatter_reduce` with `reduce="mean"`) on the (author, writes, paper) edge_index.
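As a hint for (e), here is the generic scatter-mean pattern in core PyTorch; the toy tensors are illustrative, not the exercise's actual graph, and `torch_scatter.scatter_mean` gives the same result in one call:

```python
import torch

# Mean-aggregate paper features into author nodes via scatter_reduce.
h_paper = torch.randn(5, 8)                      # 5 papers, 8-dim features
writes = torch.tensor([[0, 0, 1, 2, 2],          # row 0: author of each edge
                       [0, 1, 2, 3, 4]])         # row 1: paper of each edge

author, paper = writes
idx = author.unsqueeze(1).expand(-1, 8)          # index must match src's shape
h_author = torch.zeros(3, 8).scatter_reduce(
    0, idx, h_paper[paper], reduce="mean", include_self=False)
```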
14. Why This Matters for AI (2026 Perspective)
| Concept | AI Impact |
|---|---|
| Adjacency matrix $A$ | Defines the GCN propagation rule $H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)})$; walk powers $A^k$ count paths used in subgraph GNNs; $A$'s eigenspectrum is the basis of spectral GNNs |
| Normalised adjacency $\hat{A}$ | Core of GCN (Kipf & Welling, 2017): $\hat{A}$ controls feature smoothing across neighbourhoods; symmetric normalisation prevents gradient explosion in deep GNNs |
| Laplacian $L = D - A$ | Defines graph signal "frequency"; low-frequency signals (smooth) are aligned with small eigenvalues; over-smoothing in deep GNNs corresponds to projecting onto the null space of $L$ |
| CSR / CSC formats | Enable SpMM in all production GNN frameworks; DGL's CSR-based DGLGraph is the internal representation behind thousands of GNN papers |
| PyG edge_index | The de-facto standard for GNN research (2017-present); shapes all of PyG's 80+ model implementations; enables batching, sampling, and differentiable graph operations |
| Heterogeneous adjacency | RGCN, HAN, HGT, and knowledge graph embedding models (TransE, RotatE) all rely on per-relation adjacency; the 2024 OGB knowledge graph leaderboard is dominated by heterogeneous GNNs |
| Temporal edge lists | TGN, CAWN, and GraphMixer use chronologically sorted edge lists; the OGB-Temporal benchmark measures dynamic link prediction on graphs with timed interactions |
| Hypergraph incidence | HOGNNs (higher-order GNNs) operate on $k$-tuples stored as hyperedge incidence matrices; $k$-WL expressiveness results are stated in terms of hypergraph structure |
| Padded dense adjacency | JAX/Jraph for TPU requires fixed shapes; padded dense $A$ enables XLA compilation; models like EigenGNN and SAN use full self-attention (an implicit dense $A$) for small molecular graphs |
| Sparse attention patterns | Block-sparse attention kernels (e.g., the block-sparse FlashAttention variant) restrict attention to a sparse adjacency structure; the "local window" and "strided" patterns are special cases of a sparse edge_index |
| Bipartite adjacency $A \in \mathbb{R}^{\lvert U \rvert \times \lvert V \rvert}$ | Recommender systems operate on user-item bipartite adjacency; matrix completion and user-item link prediction are defined directly on this structure |
| Signed adjacency | Sentiment analysis (agree/disagree), financial correlation, and social balance theory all require signed graph representations; signed Laplacians have distinct spectral properties |
| Dynamic edge lists | Streaming graph frameworks (Flink-based TGN, Kafka-based dynamic GNNs) ingest temporal edge lists in real time; the edge list with timestamps is the universal streaming format |
14.1 Representation as a Research Lever
The history of GNN research shows that representation choices have directly enabled new research directions:
- 2017: `edge_index` (COO) in PyG enabled arbitrary graph structures - not just grid or sequence graphs - to be processed in a single framework. This unlocked the explosion of GNN variants.
- 2019: DGL's CSR-based `DGLGraph` with heterogeneous support enabled the RGCN, HAN, and HGT architectures for knowledge graphs.
- 2020: The OGB benchmark's standardised edge list format enabled reproducible comparison across GNN architectures for the first time.
- 2021: Padded dense adjacency in Jraph/JAX enabled TPU-based GNN training, 10-40$\times$ faster than GPU for certain molecular property prediction tasks.
- 2022: Block-sparse attention patterns (BigBird, Longformer) showed that limiting the attention graph to a sparse `edge_index` (local window + global tokens) reduces transformer complexity from $O(n^2)$ to $O(n)$.
- 2023-2024: Graph transformers (GPS, NAGphormer, NodeFormer) use hybrid representations: sparse `edge_index` for local message passing + full dense adjacency for global attention, combining the strengths of both.
The lesson: choosing, designing, or combining representations is itself a research contribution - not just an implementation detail.
15. Conceptual Bridge
Looking Back
This section translates the abstract graph theory of 01 into concrete data structures. Every object defined in 01 - adjacency, degree, Laplacian - now has a precise computational form:
- Adjacency (01, Definition 2.1) -> Adjacency matrix (3.1): the mathematical definition becomes a 2D array
- Degree (01, 3.1) -> Row sums of $A$, or `row_ptr[i+1] - row_ptr[i]` in CSR
- Handshaking lemma (01, 3.2): $\sum_v \deg(v) = 2m$ -> `data.sum() == 2 * num_edges` for an unweighted CSR (verified in the sketch below)
- Graph Laplacian (01, preview) -> Full definition (3.4) with quadratic form, PSD proof, and connectivity interpretation
The connection to linear algebra (Ch. 02-03) is now explicit: every graph operation is a matrix operation, and the choice of sparse format determines whether that operation is feasible.
The section also establishes two "bridge" matrices that will recur throughout the rest of the chapter:
- The Laplacian $L = D - A$ - defined here, eigendecomposed in 04, used in the GCN derivation in 05
- The normalised adjacency $\hat{A} = D^{-1/2} A D^{-1/2}$ - defined here, proved to be a spectral filter in 04, implemented as the GCN propagation rule in 05
Mastering these two matrices - their construction, their properties, and their computational representations - is the central payoff of this section.
Looking Forward
The representations defined here are the input to every algorithm in the rest of the chapter:
- 03 Graph Algorithms implements BFS, DFS, Dijkstra, Kruskal, and topological sort on these representations. The choice of adjacency list vs. matrix vs. CSR directly determines whether algorithms run in $O(n + m)$ or $O(n^2)$.
- 04 Spectral Graph Theory eigendecomposes the Laplacian defined in 3.4. The `scipy.sparse.linalg.eigsh` interface takes CSR format as input, connecting the sparse representation to spectral analysis.
- 05 Graph Neural Networks uses `edge_index` (COO) for PyG-based GNN implementations. Every GCN, GAT, and GraphSAGE layer operates on the normalised adjacency (3.5) or directly on `edge_index` via scatter operations.
- 06 Random Graphs generates graphs via edge lists; analysing their degree distributions and connectivity requires converting to adjacency lists or CSR for efficient computation.
The Big Picture
GRAPH REPRESENTATIONS IN THE CURRICULUM
========================================================================
01 Graph Basics 02 Graph Representations
----------------- --------------------------
Abstract G=(V,E) ---> Concrete data structures
Adjacency concept ---> Adjacency matrix A, CSR, COO
Degree definition ---> Row sums, row_ptr differences
Laplacian preview ---> L = D - A (full definition)
Normalised A = D^{-1/2}AD^{-1/2}
+--------------------------------------+
| 02: THE BRIDGE |
| Abstract ---> Computational |
| Theory ---> Implementation |
| Proofs ---> Algorithms |
+------+----------------+-------------+
| |
03 Algorithms 04 Spectral Theory
(use adj. list, (use L, eigenvectors,
edge list, CSR) CSR + ARPACK)
| |
+------+---------+
v
05 Graph Neural Networks
(edge_index, A, message passing)
v
06 Random Graphs
(generate edge lists, analyse
degree distributions)
========================================================================
Appendix A: Notation Summary
A.1 Graph and Matrix Symbols
The following notation is used consistently throughout this section, following the conventions of docs/NOTATION_GUIDE.md.
| Symbol | Meaning | Section |
|---|---|---|
| $G = (V, E)$ | Graph with vertex set $V$, edge set $E$ | 2.1 |
| $n = \lvert V \rvert$ | Number of vertices | 2.2 |
| $m = \lvert E \rvert$ | Number of edges | 2.2 |
| $A \in \{0, 1\}^{n \times n}$ | Adjacency matrix | 3.1 |
| $D = \operatorname{diag}(d_1, \ldots, d_n)$ | Degree matrix | 3.4 |
| $L = D - A$ | Graph Laplacian | 3.4 |
| $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ | Symmetrically normalised adjacency | 3.5 |
| $\tilde{A} = A + I$ | Adjacency with self-loops | 3.3 |
| $P = D^{-1} A$ | Random walk transition matrix | 3.5 |
| $B \in \mathbb{R}^{n \times m}$ | Incidence matrix | 7.1 |
| $\rho = m / n^2$ | Fill ratio (sparsity measure) | 2.3 |
| `edge_index` | PyG COO edge tensor | 5.3 |
Appendix B: Complexity Reference
B.1 Per-Operation Complexity
The following table consolidates the per-operation complexity for all six core representations. Assume $n$ vertices, $m$ edges, maximum degree $d_{\max}$.
Notation: "avg" = average case for hash-based structures; "amort" = amortised over a sequence of operations.
| Representation | Space | Edge query | Neighbours | SpMV | Build |
|---|---|---|---|---|---|
| Adjacency matrix | $O(n^2)$ | $O(1)$ | $O(n)$ | $O(n^2)$ | $O(n^2)$ |
| Adjacency list | $O(n + m)$ | $O(\deg(u))$ | $O(\deg(v))$ | $O(n + m)$ | $O(n + m)$ |
| Adjacency set | $O(n + m)$ | $O(1)$ avg | $O(\deg(v))$ | $O(n + m)$ | $O(n + m)$ amort |
| Edge list | $O(m)$ | $O(m)$ | $O(m)$ | $O(m)$ | $O(m)$ |
| COO | $O(m)$ | $O(m)$ | $O(m)$ | $O(m)$ | $O(m)$ |
| CSR | $O(n + m)$ | $O(\log \deg(u))$ (sorted columns) | $O(\deg(v))$ | $O(n + m)$ | $O(n + m \log m)$ |
| CSC | $O(n + m)$ | $O(\log \deg(v))$ (sorted rows) | $O(\deg(v))$ (in-neighbours) | $O(n + m)$ | $O(n + m \log m)$ |
| Incidence | $O(nm)$ | $O(m)$ | $O(m)$ | $O(nm)$ | - |
B.2 Key Break-Even Points
| Comparison | Break-even condition | Prefer dense when | Prefer sparse when |
|---|---|---|---|
| Dense vs CSR space | $m \approx n^2/2$, i.e. fill $\rho \approx 0.5$ (int32 + float32 CSR) | $\rho > 0.5$ | $\rho < 0.5$ |
| Dense vs CSR SpMV | $m \approx n^2$ (equal operation counts) | Dense has better constants (BLAS, SIMD) | Sparse wins for $\rho \lesssim 0.1$ |
| Adj. list vs CSR | Never (CSR always faster) | Prototyping | Production code |
| COO vs CSR for SpMV | Single call | COO is simpler to build | CSR is faster for repeated calls |
| Adj. list vs Adj. set | Edge query frequency | Mostly traversal | Frequent edge queries |
B.3 Conversion Cost Summary
All conversions assume the source representation is already built. The cost of COO -> CSR comes from the $O(m \log m)$ sort step (reducible to $O(n + m)$ with counting sort when vertex IDs are bounded integers).
| Pipeline | Total cost | Bottleneck |
|---|---|---|
| File -> edge list -> COO | $O(m)$ | File I/O |
| File -> edge list -> CSR | $O(m \log m)$ | Sort by row |
| CSR -> PyG edge_index | $O(n + m)$ | Expand row_ptr |
| NetworkX -> PyG | $O(n + m)$ | Python dict iteration |
| PyG -> NetworkX | $O(n + m)$ | Edge list construction |
| Dense -> CSR | $O(n^2)$ | Dense scan |
| CSR -> Dense | $O(n^2)$ | Matrix fill |
For large graphs, the COO -> CSR sort is the dominant cost. Use radix sort (available in CUDA via thrust::sort_by_key) for GPU-accelerated sorting in $O(m)$ time. A CPU-side counting-sort version of the same idea is sketched below.
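A minimal sketch of that counting-sort conversion (the helper name is ours, not a library function; assumes integer vertex IDs in 0..n-1):

```python
import numpy as np

def coo_to_csr(rows, cols, vals, n):
    """Convert COO arrays to CSR in O(n + m) using a counting sort on rows."""
    row_ptr = np.zeros(n + 1, dtype=np.int64)
    np.add.at(row_ptr, rows + 1, 1)          # histogram: row_ptr[r+1] = count of row r
    np.cumsum(row_ptr, out=row_ptr)          # prefix sums -> row start offsets
    col_idx = np.empty(len(cols), dtype=np.int64)
    data = np.empty(len(vals), dtype=np.asarray(vals).dtype)
    next_slot = row_ptr[:-1].copy()          # next free slot within each row
    for r, c, v in zip(rows, cols, vals):
        col_idx[next_slot[r]] = c
        data[next_slot[r]] = v
        next_slot[r] += 1
    return data, col_idx, row_ptr

# Example: 4 edges among 3 vertices; no comparison sort anywhere.
rows = np.array([1, 0, 2, 0])
cols = np.array([2, 1, 0, 2])
data, col_idx, row_ptr = coo_to_csr(rows, cols, np.ones(4, dtype=np.float32), n=3)
print(row_ptr)   # [0 2 3 4] -> rows 0, 1, 2 own slices [0:2], [2:3], [3:4]
```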
B.4 Memory Footprint Formulas
For a graph with vertices and edges using 32-bit integers and 32-bit floats:
| Format | Memory (bytes) | Example: $n = 10^5$, $m = 5 \times 10^5$ |
|---|---|---|
| Dense float32 | $4n^2$ | 40 GB |
| Dense bool (bit-packed) | $n^2 / 8$ | 1.25 GB |
| Adjacency list (Python) | $\approx 64(n + m)$ (object overhead) | ~40 MB |
| CSR (int32, unweighted) | $4m + 4(n + 1)$ | ~2.4 MB |
| CSR (int32 + float32 weights) | $8m + 4(n + 1)$ | ~4.4 MB |
| COO (int64 edge_index) | $16m$ | 8.0 MB |
| LIL (Python, sorted) | $\approx 80(n + m)$ (list-of-lists overhead) | ~50 MB |
For reference: 1 GB = $10^9$ bytes. A graph with $n = 10^6$ and $m = 5 \times 10^6$ (typical social network) requires:
- Dense: 4 TB (impossible on a single machine)
- CSR int32: ~24 MB (fits in the L3 cache of a modern server)
This $170{,}000\times$ difference explains why every large-scale graph ML system uses sparse representations exclusively.
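The same arithmetic, checked in a few lines (formulas from the table above; the printed ratio rounds to the quoted figure):

```python
# Memory for the social-network example: n = 1e6 vertices, m = 5e6 edges.
n, m = 10**6, 5 * 10**6
dense_f32 = 4 * n * n               # float32 adjacency: 4 bytes per entry
csr_int32 = 4 * m + 4 * (n + 1)     # col_idx + row_ptr, unweighted
print(f"dense: {dense_f32 / 1e12:.1f} TB")       # 4.0 TB
print(f"CSR:   {csr_int32 / 1e6:.1f} MB")        # 24.0 MB
print(f"ratio: {dense_f32 / csr_int32:,.0f}x")   # 166,667x (~170,000x)
```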
End of 02 Graph Representations. Proceed to 03 Graph Algorithms to see these representations in action.