Graph Basics: Part 1: Intuition to 4. Paths, Walks, and Cycles
1. Intuition
1.1 What Is a Graph?
At its most elemental, a graph is a collection of things and connections between things. The "things" are called vertices (or nodes), and the connections are called edges (or links). This definition is deliberately vague about what the things are - and that is exactly the point. The same mathematical framework that describes friendships between people also describes bonds between atoms, links between web pages, and data flow between operations in a neural network.
Consider a simple social network:
SOCIAL NETWORK AS A GRAPH
========================================================================
Alice ------- Bob          Vertices: {Alice, Bob, Carol, Dave}
  |         /  |           Edges:    {Alice-Bob, Alice-Carol,
  |        /   |                      Bob-Carol, Bob-Dave}
  |       /    |
Carol ---/    Dave         4 vertices, 4 edges
========================================================================
The graph abstracts away everything except the relational structure: who knows whom. It doesn't matter whether Alice is 25 or 65, whether she lives in Tokyo or Toronto. The graph captures only the connectivity pattern.
For AI: This abstraction is precisely why graphs are so powerful in machine learning. A graph neural network processes the structure of relationships, not the raw pixel data or text. When DeepMind's AlphaFold predicts protein structure, it represents amino acid residues as vertices and spatial proximity as edges - the graph structure encodes the physics that determines how the protein folds.
1.2 Why Graphs Matter for AI
Graphs appear in AI in three fundamentally different roles:
1. Graphs as input data. Many real-world datasets are naturally graph-structured: social networks, citation networks, molecular structures, protein interaction networks, scene graphs in computer vision, abstract syntax trees in code analysis. Graph Neural Networks (GNNs) process these directly.
2. Graphs as computational structure. Every neural network defines a directed acyclic graph (DAG) - the computation graph. Vertices are operations (matmul, add, activation), edges carry tensors. Backpropagation is reverse topological traversal of this DAG. PyTorch's autograd and JAX's tracing both build and traverse computation graphs.
3. Graphs as knowledge representation. Knowledge graphs (Wikidata: 100B+ triples, Google Knowledge Graph) represent entities as vertices and relations as labeled edges. Retrieval-augmented generation (RAG) systems traverse these graphs to ground LLM responses in factual knowledge.
THREE ROLES OF GRAPHS IN AI
========================================================================
+------------------+  +------------------+  +------------------+
|  GRAPHS AS DATA  |  |  GRAPHS AS COMP. |  |    GRAPHS AS     |
|                  |  |    STRUCTURE     |  |    KNOWLEDGE     |
+------------------+  +------------------+  +------------------+
| Molecules        |  | Autograd DAGs    |  | Wikidata         |
| Social networks  |  | Model arch.      |  | Google KG        |
| Citation graphs  |  | Dataflow graphs  |  | ConceptNet       |
| Scene graphs     |  | Compiler IR      |  | RAG retrieval    |
+------------------+  +------------------+  +------------------+
  GNNs process          Backprop              LLMs query
  these directly        traverses these       these for grounding
========================================================================
1.3 Graphs Are Everywhere
To build intuition for the diversity of graphs, consider these concrete examples:
Chemistry: A molecule is a graph where atoms are vertices and chemical bonds are edges. Benzene ($\mathrm{C_6H_6}$) is a 6-cycle with alternating single and double bonds. Drug discovery uses GNNs to predict molecular properties from graph structure (Gilmer et al., 2017).
Natural Language Processing: A dependency parse tree is a directed graph where words are vertices and grammatical relationships are edges. The sentence "The cat sat on the mat" produces a tree rooted at "sat" with edges to "cat" (subject) and "on" (preposition). Abstract Meaning Representations (AMRs) are DAGs used in semantic parsing.
Social Networks: Facebook's social graph has ~3 billion vertices (users) and hundreds of billions of edges (friendships). The degree distribution follows a power law - most users have few friends, a few "hub" users have thousands.
The Internet: The World Wide Web is a directed graph where pages are vertices and hyperlinks are edges. Google's PageRank algorithm (Page et al., 1999) - the foundation of web search - computes the dominant eigenvector of a modified adjacency matrix.
Biology: Protein-protein interaction networks, gene regulatory networks, neural connectomes (the wiring diagram of a brain) - all are graphs. The human connectome has ~86 billion vertices (neurons) and ~100 trillion edges (synapses).
1.4 The Language of Connections
Graph theory provides a precise vocabulary for describing structural properties that arise across all these domains:
- How many connections does each object have? -> Degree
- Can we reach any object from any other? -> Connectivity
- What is the shortest route between two objects? -> Distance, diameter
- Are there natural groupings? -> Components, communities
- Can we split objects into two non-interacting groups? -> Bipartiteness
- Is there redundancy in connections? -> Cycles
- Are two networks "the same" structurally? -> Isomorphism
Each of these questions has a rigorous mathematical formalization that we develop in this section. And each has direct consequences for AI: the degree distribution determines how information spreads through a GNN, connectivity determines whether a computation graph can propagate gradients, and the Weisfeiler-Leman isomorphism test is provably equivalent to the expressive power of message-passing GNNs (Xu et al., 2019).
1.5 Historical Timeline
| Year | Milestone | Significance |
|---|---|---|
| 1736 | Euler solves Konigsberg bridge problem | Birth of graph theory - first proof that no Eulerian circuit exists |
| 1852 | Guthrie poses the four color conjecture | Sparked 124 years of research; proved computationally in 1976 |
| 1878 | Cayley enumerates trees | First systematic study of tree structures |
| 1936 | Konig publishes first graph theory textbook | Formalised bipartite matching, Konig's theorem |
| 1959 | Erdos and Renyi introduce random graphs | Foundation of probabilistic graph theory |
| 1962 | Dijkstra publishes shortest-path algorithm | Still used in network routing today |
| 1970s | NP-completeness results (graph coloring, Hamiltonian cycle) | Showed fundamental limits of graph computation |
| 1976 | Appel and Haken prove the four color theorem | First major theorem proved by computer |
| 1998 | Watts and Strogatz model small-world networks | Explained "six degrees of separation" |
| 1999 | Page et al. publish PageRank | Graph eigenvector as the basis of web search |
| 2005 | Semantic Web and knowledge graphs emerge | Graphs as structured knowledge for AI |
| 2017 | Kipf and Welling publish GCN | Spectral graph convolutions for node classification |
| 2018 | Velickovic et al. publish GAT | Attention mechanisms on graphs |
| 2019 | Xu et al. prove GNN-WL equivalence | 1-WL test bounds message-passing GNN expressiveness |
| 2024 | Graph Transformers (GPS, Graphormer) | Combining graph structure with transformer attention |
| 2026 | Heterogeneous graph models at scale | Multi-relational graphs for knowledge-augmented LLMs |
1.6 What This Section Covers vs. What Comes Later
This section (01) is the vocabulary and core theory of graph theory. It defines what graphs are, introduces the fundamental properties (degree, paths, connectivity, special families, isomorphism, coloring), and provides the mathematical language used throughout the rest of the chapter.
What comes next builds on this vocabulary:
- 02 Graph Representations - How to store graphs in memory (adjacency matrix, adjacency list, CSR format)
- 03 Graph Algorithms - Algorithms that compute the properties defined here (BFS, DFS, shortest paths, MST)
- 04 Spectral Graph Theory - The eigenvalues of graph matrices reveal structural properties
- 05 Graph Neural Networks - Neural architectures that learn from graph structure
- 06 Random Graphs - Probabilistic models of graph generation
2. Formal Definitions
2.1 The Graph
Definition (Undirected Graph). An undirected graph is an ordered pair $G = (V, E)$ where:
- $V$ is a finite, non-empty set called the vertex set (or node set)
- $E \subseteq \binom{V}{2}$ is a set of unordered pairs of distinct vertices, called the edge set
Here $\binom{V}{2}$ denotes the set of all 2-element subsets of $V$.
We write $n = |V|$ for the order of $G$ (number of vertices) and $m = |E|$ for the size of $G$ (number of edges).
Notation. An edge between vertices $u$ and $v$ is written as $\{u, v\}$, $uv$, or $e = uv$ depending on context. In this section we use $\{u, v\}$ for undirected edges and $(u, v)$ for directed edges.
Example 1: The triangle graph $K_3$.
THE TRIANGLE GRAPH K_3
========================================================================
   1           n = 3 (order)
  / \          m = 3 (size)
 /   \         Every pair connected -> "complete graph"
2 --- 3
========================================================================
Example 2: The path graph $P_4$.
1 -- 2 -- 3 -- 4        n = 4, m = 3
Example 3: The cycle graph $C_4$: 1 -- 2 -- 3 -- 4 -- 1, with $n = 4$, $m = 4$. In general the cycle graph $C_n$ has equally many vertices and edges.
For AI: When a GNN processes a molecular graph, each atom is a vertex in $V$ and each chemical bond is an edge in $E$. The formal definition ensures that the graph is well-defined regardless of how atoms are labeled - only the connection pattern matters.
Key terminology summary:
| Term | Symbol | Meaning |
|---|---|---|
| Vertex (node) | $v \in V$ | An object in the graph |
| Edge (link) | $e \in E$ | A connection between two vertices |
| Order | $n = \lvert V \rvert$ | Number of vertices |
| Size | $m = \lvert E \rvert$ | Number of edges |
| Adjacent | $u \sim v$ | Vertices connected by an edge |
| Incident | $v \in e$ | Vertex is an endpoint of edge |
| Neighbour | $u \in N(v)$ | $u$ is adjacent to $v$ |
| Neighbourhood | $N(v)$ | Set of all neighbours of $v$ |
| Endpoint | $u, v$ of $e = uv$ | The two vertices of an edge |
Maximum number of edges. A simple undirected graph on $n$ vertices has at most $\binom{n}{2} = \frac{n(n-1)}{2}$ edges (achieved by $K_n$). A simple directed graph has at most $n(n-1)$ arcs. A graph with $m$ close to $n^2$ is called dense; a graph with $m = O(n)$ is called sparse. Most real-world graphs are sparse.
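To make the definition concrete, here is a minimal sketch that stores $V$ and $E$ as sets and computes order, size, and density. The `Graph` class and its method names are illustrative, not from any particular library:

```python
class Graph:
    """A simple undirected graph G = (V, E), per the formal definition."""

    def __init__(self, vertices, edges):
        self.V = set(vertices)
        # Each edge must be a 2-element subset of V; reject anything else.
        self.E = set()
        for u, v in edges:
            if u == v or u not in self.V or v not in self.V:
                raise ValueError(f"invalid edge ({u}, {v})")
            self.E.add(frozenset((u, v)))

    def order(self):   # n = |V|
        return len(self.V)

    def size(self):    # m = |E|
        return len(self.E)

    def density(self):  # m / C(n, 2): fraction of possible edges present
        n = self.order()
        return self.size() / (n * (n - 1) / 2)


# The social-network example from 1.1: 4 vertices, 4 edges.
g = Graph(["Alice", "Bob", "Carol", "Dave"],
          [("Alice", "Bob"), ("Alice", "Carol"),
           ("Bob", "Carol"), ("Bob", "Dave")])
print(g.order(), g.size())    # 4 4
print(round(g.density(), 2))  # 0.67  (4 of the 6 possible edges)
```

Because edges are stored as `frozenset`s, `("Alice", "Bob")` and `("Bob", "Alice")` denote the same edge, matching $\{u, v\} = \{v, u\}$ for undirected graphs.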
2.2 Directed vs. Undirected Graphs
Definition (Directed Graph / Digraph). A directed graph (or digraph) is an ordered pair $G = (V, E)$ where $E \subseteq V \times V$ is a set of ordered pairs of vertices. An edge $(u, v) \in E$ is called an arc directed from $u$ (the tail) to $v$ (the head).
The key distinction: in an undirected graph, $\{u, v\} = \{v, u\}$ - the edge has no direction. In a digraph, $(u, v) \neq (v, u)$ - the arc goes from $u$ to $v$, not the reverse.
UNDIRECTED vs. DIRECTED
========================================================================
Undirected:              Directed:
A --- B                  A ----> B
|     |                  ^       |
|     |                  |       v
D --- C                  D <---- C
{A,B} = {B,A}            (A,B) != (B,A)
"A knows B"              "A follows B"
Symmetric relation       Asymmetric relation
========================================================================
Example (Undirected): A social network where friendship is mutual - if Alice is friends with Bob, then Bob is friends with Alice. The edge set is symmetric.
Example (Directed): Twitter's follow graph - Alice can follow Bob without Bob following Alice. Citation networks - paper A cites paper B, but B does not cite A.
Example (Directed, AI-critical): A computation graph in PyTorch. Each operation node receives input tensors along incoming arcs and produces output tensors along outgoing arcs. The direction encodes dataflow: gradients propagate in the reverse direction during backpropagation.
Non-example: A "graph" where an edge connects a vertex to itself (a self-loop) - this violates the requirement that edges be 2-element subsets of distinct vertices, so it is not a simple graph. We address self-loops in 2.4.
2.3 Weighted Graphs
Definition (Weighted Graph). A weighted graph is a triple $G = (V, E, w)$ where $(V, E)$ is a graph and $w : E \to \mathbb{R}$ is a weight function assigning a real number to each edge.
Common weight interpretations:
- Distance/cost: In a road network, $w(u, v)$ is the distance between cities $u$ and $v$
- Similarity/affinity: In a $k$-NN graph, $w(u, v) = \exp(-\|x_u - x_v\|^2 / 2\sigma^2)$ (Gaussian kernel)
- Capacity: In a flow network, $w(u, v)$ is the maximum flow along the arc
- Probability: In a Markov chain, $w(u, v) = P(X_{t+1} = v \mid X_t = u)$
For AI: Attention weights in a transformer define a weighted directed graph over token positions. The attention matrix $A \in \mathbb{R}^{n \times n}$ is precisely the weight matrix of a weighted digraph where $A_{ij}$ is the attention that position $i$ pays to position $j$. Graph Attention Networks (GAT, Velickovic et al., 2018) learn these weights as a function of node features.
Non-example (not a weighted graph): A matrix with negative entries - this can be interpreted as a signed weighted graph, but the standard weighted graph definition already allows $w : E \to \mathbb{R}$, including negative weights. The distinction matters: negative weights break shortest-path algorithms like Dijkstra but are handled by Bellman-Ford.
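The Gaussian-kernel similarity weights above can be computed directly with NumPy. This is a sketch under the assumption of a fully connected similarity graph (no $k$-NN truncation), with `sigma` an illustrative bandwidth:

```python
import numpy as np

# Build a Gaussian-kernel weighted graph over 5 random feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 points, 3 features each
sigma = 1.0                   # illustrative bandwidth parameter

# Pairwise squared distances, then w(u, v) = exp(-||x_u - x_v||^2 / (2 sigma^2)).
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq / (2 * sigma ** 2))
np.fill_diagonal(W, 0.0)      # no self-loops in a simple graph

print(W.shape)                # (5, 5)
print(np.allclose(W, W.T))    # True: similarity weights are symmetric
```

All weights lie in $(0, 1]$ before the diagonal is zeroed, so this particular construction can never produce the negative weights discussed in the non-example.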
2.4 Simple Graphs, Multigraphs, and Pseudographs
The formal definition in 2.1 defines a simple graph: no self-loops and no parallel edges. Real-world graphs often violate these constraints:
Definition (Self-loop). A self-loop is an edge from a vertex to itself: $\{v, v\} = \{v\}$ in the undirected case, or $(v, v)$ in the directed case.
Definition (Multigraph). A multigraph allows multiple edges between the same pair of vertices. Formally, is a multiset rather than a set.
Definition (Pseudograph). A pseudograph allows both self-loops and multiple edges.
| Graph Type | Self-loops | Parallel Edges | Example |
|---|---|---|---|
| Simple graph | No | No | Social network (at most one friendship) |
| Multigraph | No | Yes | Transportation network (multiple bus routes) |
| Pseudograph | Yes | Yes | State machine (self-transitions) |
For AI: Most GNN frameworks assume simple graphs. Self-loops are added explicitly in GCN: $\tilde{A} = A + I$ adds a self-loop to every vertex, ensuring that each node's own features are included in the aggregation step. This is a deliberate design choice, not a property of the input graph.
Non-example: A relation "is the same as" on a set - every element is related to itself, producing self-loops on every vertex. This is a pseudograph, not a simple graph.
2.5 Standard Examples and Non-Examples
Example 1: The Petersen graph. A famous graph with 10 vertices and 15 edges, 3-regular (every vertex has degree 3). It is a counterexample to many graph theory conjectures - it is not Hamiltonian, not edge-4-colorable, and not planar.
Example 2: The Karate Club graph. Zachary's karate club (1977) - 34 members, 78 edges representing social interactions. The graph split into two communities when the club divided, making it a standard benchmark for community detection. This is the "MNIST of graph learning."
Example 3: A molecule. Water ($\mathrm{H_2O}$) has 3 vertices (O, H, H) and 2 edges (O-H bonds). Caffeine ($\mathrm{C_8H_{10}N_4O_2}$) has 24 vertices and 25 edges.
Non-example 1: A set without edges. The pair $(\{1, 2, 3\}, \emptyset)$ is technically a valid graph (the "empty graph" on 3 vertices), but it has no edges - no relationships are captured.
Non-example 2: A "graph" with edges between non-existent vertices. If $V = \{1, 2\}$ and $E = \{\{1, 3\}\}$, this is not a valid graph because vertex 3 is not in $V$. Edges must connect vertices in the vertex set.
Non-example 3: An ordered list. The sequence $(1, 2, 3, 4)$ is not a graph - it has no explicit edge set. However, we can construct a path graph from it: $V = \{1, 2, 3, 4\}$, $E = \{\{1, 2\}, \{2, 3\}, \{3, 4\}\}$.
2.6 The Null Graph, Empty Graph, and Trivial Graph
These edge cases clarify the boundaries of the definition:
Definition (Trivial Graph). The graph with exactly one vertex and no edges: $(\{v\}, \emptyset)$. This is the simplest possible graph.
Definition (Empty Graph). A graph with $n$ vertices and no edges. Also called an edgeless graph or independent set. Denoted $\overline{K_n}$ (the complement of the complete graph).
Definition (Null Graph). The graph with no vertices and no edges: $(\emptyset, \emptyset)$. Some authors exclude this, requiring $V \neq \emptyset$. In this curriculum, we require $V \neq \emptyset$.
For AI: The empty graph appears when constructing GNN inputs: before adding edges (via $k$-NN or radius graphs), the initial graph has nodes with features but no connections. The GCN self-loop trick on an empty graph gives $\tilde{A} = A + I = I$ - each node only aggregates its own features, equivalent to a standard MLP.
3. Degree and Degree Sequences
3.1 Vertex Degree
Definition (Degree). For an undirected graph $G = (V, E)$, the degree of a vertex $v$ is the number of edges incident to $v$:
$$\deg(v) = |\{e \in E : v \in e\}|$$
Equivalently, $\deg(v) = |N(v)|$ is the number of neighbours of $v$: the vertices adjacent to $v$.
Definition (In-degree and Out-degree). For a directed graph $G = (V, E)$:
$$\deg^+(v) = |\{u : (v, u) \in E\}| \ \text{(out-degree)}, \qquad \deg^-(v) = |\{u : (u, v) \in E\}| \ \text{(in-degree)}$$
The total degree of a vertex in a digraph is $\deg(v) = \deg^+(v) + \deg^-(v)$.
DEGREE EXAMPLES
========================================================================
Undirected:                 Directed:
A --- B --- C               A ---> B ---> C
|           |               ^
|           |               |
D --------- E               D <--------- E

deg(A) = 2 (B, D)           deg^+(A) = 1, deg^-(A) = 1
deg(B) = 2 (A, C)           deg^+(B) = 1, deg^-(B) = 1
deg(C) = 2 (B, E)           deg^+(C) = 0, deg^-(C) = 1 (sink)
deg(D) = 2 (A, E)           deg^+(D) = 1, deg^-(D) = 1
deg(E) = 2 (C, D)           deg^+(E) = 1, deg^-(E) = 0 (source)
========================================================================
For AI: In a GNN, a vertex's degree determines how many messages it receives during message passing. Vertices with very high degree (hubs) aggregate information from many neighbours, while vertices with degree 1 (leaves) receive information from a single source. This is why GCN normalises by $1/\sqrt{\deg(u)\deg(v)}$ - to prevent high-degree nodes from dominating the aggregation.
Notation (Adjacency matrix connection). If $A$ is the adjacency matrix of an undirected graph, then $\deg(v_i) = \sum_j A_{ij}$ - the row sum. The degree matrix $D = \operatorname{diag}(\deg(v_1), \ldots, \deg(v_n))$ is the diagonal matrix of degrees. Both $A$ and $D$ are central to spectral graph theory (04).
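The row-sum identity is easy to check numerically. The sketch below builds the adjacency matrix of the 5-vertex undirected example above (edges A-B, B-C, A-D, C-E, D-E, all degrees 2):

```python
import numpy as np

labels = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("B", "C"), ("A", "D"), ("C", "E"), ("D", "E")]
idx = {v: i for i, v in enumerate(labels)}

A = np.zeros((5, 5), dtype=int)
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1   # symmetric: undirected

deg = A.sum(axis=1)   # deg(v_i) = sum_j A_ij, the row sums
D = np.diag(deg)      # the degree matrix
print(deg)            # [2 2 2 2 2] -- this example graph is 2-regular
```

Note that `A.sum()` equals $2m = 10$, a first glimpse of the handshaking lemma proved next.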
3.2 The Handshaking Lemma
Theorem (Handshaking Lemma). For any undirected graph $G = (V, E)$:
$$\sum_{v \in V} \deg(v) = 2|E| = 2m$$
Proof. Each edge $\{u, v\}$ contributes exactly 1 to $\deg(u)$ and exactly 1 to $\deg(v)$. Thus each edge contributes exactly 2 to the total degree sum. Summing over all edges gives $2m$.
Corollary. The number of vertices with odd degree is always even. (If there were an odd number of odd-degree vertices, the total degree sum would be odd, contradicting $2m$ being even.)
Directed version. For a digraph: $\sum_{v} \deg^+(v) = \sum_{v} \deg^-(v) = |E|$. Every arc has exactly one tail and one head.
Example. The Petersen graph: 10 vertices, each of degree 3. Total degree sum $= 10 \times 3 = 30$, so $m = 15$ edges.
Example. The star graph $S_4$ (one center connected to 4 leaves): $\deg(\text{center}) = 4$ and each leaf has degree 1. Sum $= 4 + 4 \times 1 = 8$, so $m = 4$ edges.
Application: Average degree. The average degree of a graph is $\bar{d} = \frac{1}{n} \sum_{v \in V} \deg(v) = \frac{2m}{n}$. For sparse graphs ($m = O(n)$), the average degree is $O(1)$. For dense graphs ($m = \Theta(n^2)$), the average degree is $\Theta(n)$.
| Graph | $n$ | $m$ | Average degree |
|---|---|---|---|
| Tree on $n$ vertices | $n$ | $n - 1$ | $2(n-1)/n \approx 2$ |
| Cycle $C_n$ | $n$ | $n$ | $2$ |
| Complete graph $K_n$ | $n$ | $n(n-1)/2$ | $n - 1$ |
For AI: The handshaking lemma provides a quick sanity check when constructing graphs for GNN input. If you compute degrees from an edge list and the sum is odd, your graph construction has a bug. In large-scale graph datasets (OGB, ogbg-molhiv), this is a standard validation step. The average degree also determines the computational cost of one GNN layer: each message-passing step processes all $m$ edges.
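A sketch of such a validation step (the helper name `check_handshake` is hypothetical, not from any dataset library):

```python
from collections import Counter

def check_handshake(edges):
    """Validate an edge list against the handshaking lemma.

    Returns the total degree sum, which must equal 2m; also checks
    that odd-degree vertices come in an even count.
    """
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total = sum(deg.values())
    assert total == 2 * len(edges), "degree sum must equal 2m"
    odd = [v for v, d in deg.items() if d % 2 == 1]
    assert len(odd) % 2 == 0, "odd-degree vertices must come in pairs"
    return total

# Star graph S_4: centre 0 joined to leaves 1..4. Sum = 8 = 2 * 4 edges.
print(check_handshake([(0, 1), (0, 2), (0, 3), (0, 4)]))  # 8
```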
3.3 Degree Sequences
Definition (Degree Sequence). The degree sequence of a graph is the list of vertex degrees arranged in non-increasing order: $d_1 \geq d_2 \geq \cdots \geq d_n$.
Example. The complete bipartite graph $K_{2,3}$: the 2 vertices on one side have degree 3, the 3 vertices on the other have degree 2. Degree sequence: $(3, 3, 2, 2, 2)$.
Definition (Graphic Sequence). A non-increasing sequence $(d_1, \ldots, d_n)$ of non-negative integers is graphic if there exists a simple graph with that degree sequence.
Theorem (Erdos-Gallai, 1960). A non-increasing sequence of non-negative integers with even sum is graphic if and only if for each $k \in \{1, \ldots, n\}$:
$$\sum_{i=1}^{k} d_i \leq k(k-1) + \sum_{i=k+1}^{n} \min(d_i, k)$$
Example. Is $(3, 3, 2, 2, 2)$ graphic? Sum $= 12$ (even). Check $k = 1$: $3 \leq 0 + 1 + 1 + 1 + 1 = 4$. Yes. (The full check over all $k$ confirms graphicality - it is the degree sequence of $K_{2,3}$.)
Non-example. Is $(3, 3, 3, 1)$ graphic? Sum $= 10$ (even), $n = 4$. Check $k = 1$: $3 \leq 0 + 1 + 1 + 1 = 3$. Equality holds. Check $k = 2$: $3 + 3 = 6 \leq 2 + \min(3, 2) + \min(1, 2) = 5$. Fails! Not graphic - you cannot build a simple graph on 4 vertices where three vertices have degree 3 and one has degree 1.
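The Erdős-Gallai condition translates directly into code; `is_graphic` below is an illustrative implementation of the test, checked against the two sequences above:

```python
def is_graphic(seq):
    """Erdos-Gallai test: is this degree sequence realisable
    by a simple graph?"""
    d = sorted(seq, reverse=True)       # non-increasing order
    if sum(d) % 2 != 0:                 # handshaking: sum must be even
        return False
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphic([3, 3, 2, 2, 2]))  # True  (degree sequence of K_{2,3})
print(is_graphic([3, 3, 3, 1]))     # False (fails the k = 2 check)
```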
3.4 Regular Graphs
Definition ($k$-Regular Graph). A graph is $k$-regular if every vertex has degree $k$: $\deg(v) = k$ for all $v \in V$.
Examples:
- 0-regular: The empty graph (no edges)
- 1-regular: A perfect matching (disjoint edges covering all vertices; requires even $n$)
- 2-regular: A disjoint union of cycles
- 3-regular (cubic): The Petersen graph, the complete bipartite graph $K_{3,3}$
- $(n-1)$-regular: The complete graph $K_n$ (every vertex connected to every other)
Proposition. A $k$-regular graph on $n$ vertices has exactly $nk/2$ edges (by the handshaking lemma). This requires $nk$ to be even.
For AI: Regular graphs are the "uniform" case in GNN analysis. On a $k$-regular graph, the GCN normalisation factor $1/\sqrt{\deg(u)\deg(v)} = 1/k$ is constant, so GCN reduces to simple averaging of neighbour features. This makes regular graphs the easiest case for theoretical GNN analysis (e.g., proving the WL-equivalence result of Xu et al., 2019).
3.5 Degree Distributions in Real-World Graphs
Real-world graphs rarely have uniform degree distributions. Two patterns dominate:
Poisson degree distribution. In Erdos-Renyi random graphs $G(n, p)$, the degree of each vertex follows $\operatorname{Binomial}(n-1, p)$, which is approximately $\operatorname{Poisson}(\lambda)$ with $\lambda = (n-1)p$. Most vertices have degree close to $\lambda$.
Power-law degree distribution. In many real-world networks (social, citation, web), the fraction of vertices with degree $k$ follows:
$$P(k) \propto k^{-\gamma}, \qquad \text{typically } 2 < \gamma < 3$$
This is called a scale-free distribution. It means a few "hub" vertices have enormously high degree while most vertices have low degree.
| Network | Type | Typical $\gamma$ |
|---|---|---|
| WWW (out-degree) | Scale-free | $2 < \gamma < 3$ |
| Citation networks | Scale-free | $\gamma \approx 3$ |
| Protein interactions | Scale-free | $2 < \gamma < 3$ |
| Erdos-Renyi $G(n,p)$ | Poisson | N/A |
| Road networks | Near-uniform | N/A (bounded degree) |
For AI: Power-law degree distributions create challenges for GNNs. Hub vertices with extremely high degree dominate mini-batch sampling (GraphSAGE, Hamilton et al., 2017) and can cause representation collapse - the "over-squashing" problem. Solutions include degree-based sampling, virtual nodes, and graph transformers that bypass local message passing.
Clustering coefficient. Beyond degree, an important local structural measure is the clustering coefficient, which quantifies how clustered a vertex's neighbourhood is.
Definition (Local Clustering Coefficient). For a vertex $v$ with $\deg(v) \geq 2$:
$$C(v) = \frac{|\{\{u, w\} \in E : u, w \in N(v)\}|}{\binom{\deg(v)}{2}}$$
This is the fraction of possible edges among $v$'s neighbours that actually exist. $C(v) = 1$ means the neighbourhood is a clique; $C(v) = 0$ means no two neighbours are connected.
Definition (Global Clustering Coefficient). The average over all vertices: $C = \frac{1}{n} \sum_{v \in V} C(v)$.
Alternative: Transitivity. $T = \dfrac{3 \times \text{(number of triangles)}}{\text{number of connected triples}}$.
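A direct sketch of the local clustering coefficient on a dictionary-of-sets adjacency structure. The test graph here, a triangle with one pendant vertex, is chosen purely for illustration:

```python
from itertools import combinations

def local_clustering(adj, v):
    """Fraction of pairs of neighbours of v that are themselves adjacent."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0   # clustering is undefined/zero for deg < 2
    links = sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])
    return 2 * links / (k * (k - 1))

# Vertices 0, 1, 2 form a triangle; vertex 3 hangs off vertex 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(local_clustering(adj, 1))  # 1.0: both neighbours of 1 are adjacent
print(local_clustering(adj, 0))  # 0.333...: only {1,2} of 0's three
                                 # neighbour pairs is an edge
```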
| Network type | Typical $C$ | Interpretation |
|---|---|---|
| Erdos-Renyi $G(n,p)$ | $\approx p$ | Low, unclustered |
| Social networks | High | Clustered (friends of friends are friends) |
| Protein interactions | Medium | Moderate clustering |
| Regular lattice | High | Very clustered but large diameter |
| Small-world (Watts-Strogatz) | High | High clustering AND small diameter |
For AI: The clustering coefficient is a standard graph-level feature in GNN benchmarks. High clustering indicates the presence of many triangles - triangle-counting is a canonical task that standard 1-WL GNNs cannot perform exactly (triangles require 3-WL). This motivates higher-order GNN architectures that explicitly count triangular motifs.
Note: The full theory of degree distributions and random graph models is developed in 06 Random Graphs. Here we introduce the concept; there we formalise it.
4. Paths, Walks, and Cycles
4.1 Walks, Trails, and Paths
These three concepts form a hierarchy of increasing restriction on vertex/edge repetition:
Definition (Walk). A walk of length $k$ in $G$ is a sequence of vertices $v_0, v_1, \ldots, v_k$ such that $\{v_{i-1}, v_i\} \in E$ for each $i = 1, \ldots, k$. Vertices and edges may repeat.
Definition (Trail). A trail is a walk in which no edge is repeated (but vertices may repeat).
Definition (Path). A path is a walk in which no vertex is repeated (and hence no edge is repeated).
WALK vs. TRAIL vs. PATH
========================================================================
Graph: A --- B --- C
| | |
D --- E --- F
Walk: A -> B -> E -> B -> C (B visited twice - OK for walk)
Trail: B -> A -> D -> E -> B -> C (no edge repeated, B visited twice)
Path: A -> B -> E -> F -> C (no vertex repeated)
Restriction: Walk \supseteq Trail \supseteq Path
(every path is a trail, every trail is a walk)
========================================================================
Notation. The length of a walk/trail/path is the number of edges traversed (not vertices). A path from $u$ to $v$ is called a $u$-$v$ path.
Proposition. If there exists a walk from $u$ to $v$, then there exists a $u$-$v$ path. (Proof: repeatedly remove closed subwalks from the walk until no vertex repeats.)
For AI: In a GNN with $k$ message-passing layers, information from vertex $u$ can reach vertex $v$ if and only if there exists a walk of length at most $k$ from $u$ to $v$. The GNN's receptive field at depth $k$ is exactly the set of vertices reachable by walks of length $\leq k$. Note: it is walks (not paths) because GNNs can re-visit vertices through different neighbours at different layers.
4.2 Cycles
Definition (Cycle). A cycle of length $k$ (also called a $k$-cycle) is a closed walk $v_0, v_1, \ldots, v_k$ where $v_k = v_0$, all other vertices are distinct, and $k \geq 3$.
Definition (Girth). The girth of a graph is the length of its shortest cycle. If the graph has no cycles (is acyclic), the girth is defined as $\infty$.
Examples:
- $K_3$ (triangle): girth $3$
- $K_n$ for $n \geq 3$: girth $3$ (contains many triangles)
- $C_n$ (cycle graph): girth $n$
- Any tree: girth $\infty$ (acyclic)
- Petersen graph: girth $5$ (smallest cycle has 5 vertices)
Definition (Acyclic Graph). A graph with no cycles. An undirected acyclic graph is a forest; a connected forest is a tree.
For AI: Cycles in computation graphs would create infinite loops during forward pass evaluation. This is why computation graphs are DAGs (directed acyclic graphs) - acyclicity ensures a valid topological ordering for sequential evaluation. Recurrent Neural Networks (RNNs) appear to have cycles, but when "unrolled" over time steps, the computation graph is a DAG.
4.3 Shortest Paths and Distance
Definition (Distance). The distance $d(u, v)$ between vertices $u$ and $v$ is the length of the shortest $u$-$v$ path. If no such path exists (they are in different components), $d(u, v) = \infty$.
Definition (Eccentricity). The eccentricity of a vertex is $\epsilon(v) = \max_{u \in V} d(v, u)$ - the distance to the farthest vertex.
Definition (Diameter). The diameter of a graph is $\operatorname{diam}(G) = \max_{v \in V} \epsilon(v) = \max_{u, v \in V} d(u, v)$ - the maximum distance between any two vertices.
Definition (Radius). The radius of a graph is $\operatorname{rad}(G) = \min_{v \in V} \epsilon(v)$.
Definition (Center). The center of a graph is the set of vertices with minimum eccentricity: $\{v \in V : \epsilon(v) = \operatorname{rad}(G)\}$.
Examples:
- Path graph $P_n$: diameter $n - 1$, radius $\lceil (n-1)/2 \rceil$
- Cycle $C_n$: diameter $\lfloor n/2 \rfloor$
- Complete graph $K_n$: diameter $1$ (every pair is adjacent)
- Star graph (one center connected to $n - 1$ leaves): diameter $2$
Proposition. For any connected graph: $\operatorname{rad}(G) \leq \operatorname{diam}(G) \leq 2 \operatorname{rad}(G)$.
Proof sketch. The left inequality is immediate: the maximum eccentricity is at least the minimum. For the right inequality: let $u, v$ achieve the diameter and let $c$ be a center vertex (achieving the radius). Then $\operatorname{diam}(G) = d(u, v) \leq d(u, c) + d(c, v) \leq \operatorname{rad}(G) + \operatorname{rad}(G) = 2 \operatorname{rad}(G)$.
Distance properties (metric space). The graph distance function is a metric:
- Non-negativity: $d(u, v) \geq 0$, with $d(u, v) = 0 \iff u = v$
- Symmetry: $d(u, v) = d(v, u)$ (for undirected graphs)
- Triangle inequality: $d(u, w) \leq d(u, v) + d(v, w)$
This means the vertex set of any connected graph forms a metric space under graph distance. This observation connects graph theory to metric geometry and is used in graph embedding methods (e.g., embedding graph vertices into Euclidean space while approximately preserving distances).
Distance matrix. The distance matrix $D$ has entries $D_{ij} = d(v_i, v_j)$. This is distinct from the degree matrix (also sometimes denoted $D$). The distance matrix contains the full pairwise distance information of the graph and is used as a positional encoding in graph transformers (Li et al., 2020).
DISTANCE MATRIX EXAMPLE (P_4: 1-2-3-4)
========================================================================
Distance matrix:              Eccentricities:
      1  2  3  4
  1 [ 0  1  2  3 ]            eps(1) = 3 (farthest: vertex 4)
  2 [ 1  0  1  2 ]            eps(2) = 2 (farthest: vertex 4)
  3 [ 2  1  0  1 ]            eps(3) = 2 (farthest: vertex 1)
  4 [ 3  2  1  0 ]            eps(4) = 3 (farthest: vertex 1)
Diameter = 3, Radius = 2
Center = {2, 3}
========================================================================
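The eccentricities, diameter, radius, and center in this example can be recomputed with a plain BFS sketch (BFS itself is developed in 03; this is just a verification):

```python
from collections import deque

def bfs_dist(adj, s):
    """Unweighted shortest-path distances from source s via BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# P_4: the path 1 - 2 - 3 - 4.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
ecc = {v: max(bfs_dist(adj, v).values()) for v in adj}
print(ecc)                                   # {1: 3, 2: 2, 3: 2, 4: 3}
print(max(ecc.values()), min(ecc.values()))  # diameter 3, radius 2
print(sorted(v for v in ecc if ecc[v] == min(ecc.values())))  # center [2, 3]
```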
For AI: The diameter of a graph determines the minimum number of GNN layers needed to propagate information between the two most distant vertices. If $\operatorname{diam}(G) = d$, a GNN with fewer than $d$ layers cannot use the full graph structure for prediction. The "small-world" property - most real-world graphs have diameter $O(\log n)$ - explains why GNNs with 2-3 layers often suffice in practice.
Note: Algorithms for computing shortest paths (BFS for unweighted, Dijkstra for weighted, Bellman-Ford for negative weights) are developed in 03 Graph Algorithms.
4.4 Eulerian and Hamiltonian Paths
Definition (Eulerian Trail/Circuit). An Eulerian trail is a trail that visits every edge exactly once. An Eulerian circuit is a closed Eulerian trail (starts and ends at the same vertex).
Theorem (Euler, 1736). A connected graph has an Eulerian circuit if and only if every vertex has even degree. It has an Eulerian trail (but not a circuit) if and only if exactly two vertices have odd degree.
This was the first theorem of graph theory, proved by Euler to show that the Konigsberg bridge problem has no solution (the graph of Konigsberg had four vertices, all of odd degree).
THE KONIGSBERG BRIDGE PROBLEM
========================================================================
The city of Konigsberg (now Kaliningrad) had 7 bridges connecting
4 landmasses. Can you walk across each bridge exactly once?
Landmasses as vertices, bridges as edges (|| marks two parallel bridges):

     A                 deg(A) = 3 (odd): two bridges to B, one to D
    || \
    ||  \              deg(B) = 5 (odd): two to A, two to C, one to D
     B --- D
    ||  /              deg(C) = 3 (odd): two bridges to B, one to D
    || /
     C                 deg(D) = 3 (odd): one bridge each to A, B, C

All degrees odd -> no Eulerian trail exists
(Euler's theorem: need 0 or 2 odd-degree vertices)
========================================================================
Proof sketch (Euler's theorem - necessity). If a graph has an Eulerian circuit, every time the walk enters a vertex on one edge it must leave on a different edge. So the edges at each vertex are paired: enter-leave, enter-leave, ... This means every vertex uses an even number of edges, so $\deg(v)$ is even for every $v$. For an Eulerian trail (not circuit), the start and end vertices each use one unpaired edge, so exactly two vertices have odd degree.
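Euler's condition is easy to check in code. The sketch below assumes the input graph is connected and counts odd-degree vertices from a (multi)edge list; `eulerian_status` is an illustrative helper, not a library function:

```python
from collections import Counter

def eulerian_status(edges):
    """Classify a connected (multi)graph given as an edge list.
    Parallel edges are simply listed multiple times."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    odd = sum(1 for d in deg.values() if d % 2 == 1)
    if odd == 0:
        return "circuit"   # Eulerian circuit exists
    if odd == 2:
        return "trail"     # Eulerian trail but no circuit
    return "none"

# The seven Konigsberg bridges (parallel bridges listed twice).
bridges = [("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(eulerian_status(bridges))                  # none: all four degrees odd
print(eulerian_status([(1, 2), (2, 3), (3, 1)]))  # circuit: triangle, all even
```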
Definition (Hamiltonian Path/Cycle). A Hamiltonian path is a path that visits every vertex exactly once. A Hamiltonian cycle is a cycle that visits every vertex exactly once.
Contrast. Euler: every edge once. Hamilton: every vertex once.
| Property | Eulerian | Hamiltonian |
|---|---|---|
| Visits | Every edge once | Every vertex once |
| Existence test | Polynomial (check degrees) | NP-complete |
| $K_n$ | Has Eulerian circuit iff $n$ is odd | Always has Hamiltonian cycle ($n \geq 3$) |
| $K_{m,n}$ | Has Eulerian circuit iff $m$ and $n$ both even | Has Hamiltonian cycle iff $m = n$ |
For AI: The Travelling Salesman Problem (TSP) - finding the shortest Hamiltonian cycle in a weighted complete graph - is a canonical NP-hard combinatorial optimization problem. Neural combinatorial optimization (Vinyals et al., 2015; Kool et al., 2019) uses attention-based models to learn heuristic TSP solutions. The graph structure of TSP is $K_n$ with distance weights.
4.5 Counting Walks with the Adjacency Matrix
Theorem. Let $A$ be the adjacency matrix of a graph $G$. The entry $(A^k)_{ij}$ counts the number of walks of length $k$ from vertex $i$ to vertex $j$.
Proof sketch. By induction on $k$. Base case: $k = 1$, and $A_{ij} = 1$ iff there is a walk of length 1 (an edge) from $i$ to $j$. Inductive step: $(A^k)_{ij} = \sum_{l} (A^{k-1})_{il} A_{lj}$. A walk of length $k$ from $i$ to $j$ consists of a walk of length $k - 1$ from $i$ to some intermediate vertex $l$, followed by an edge from $l$ to $j$. Summing over all possible intermediate vertices gives the matrix product.
Example. For the triangle $K_3$ with adjacency matrix:
$$A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, \qquad A^2 = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$$
$(A^2)_{11} = 2$: there are 2 walks of length 2 from vertex 1 to itself ($1 \to 2 \to 1$ and $1 \to 3 \to 1$). The diagonal entries of $A^2$ equal the vertex degrees.
Corollary. The number of triangles containing vertex $i$ is $(A^3)_{ii} / 2$ (each triangle is counted twice, once in each direction). The total number of triangles in $G$ is $\operatorname{tr}(A^3) / 6$.
Example: walk counts in $K_3$.
$(A^3)_{11} = 2$, so vertex 1 is in $2/2 = 1$ triangle. $\operatorname{tr}(A^3) = 6$, so there is $6/6 = 1$ triangle total. Correct for $K_3$.
Extended corollary: subgraph counting.
| Subgraph | Formula using $A$ |
|---|---|
| Edges | $\operatorname{tr}(A^2)/2$ |
| Triangles | $\operatorname{tr}(A^3)/6$ |
| Walks of length $k$ (total) | $\mathbf{1}^\top A^k \mathbf{1}$ |
| Paths of length $k$ | Harder - requires inclusion-exclusion |
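The worked $K_3$ example and these counting formulas can be verified with NumPy matrix powers:

```python
import numpy as np

# Adjacency matrix of the triangle K_3.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

A2 = A @ A
A3 = A2 @ A
print(A2[0, 0])                # 2 walks of length 2 from vertex 1 to itself
print(np.diag(A2))             # [2 2 2]: diagonal of A^2 = vertex degrees
print(int(np.trace(A3)) // 6)  # 1 triangle in K_3
print(int(np.trace(A2)) // 2)  # 3 edges
```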
Connection to GNN layers. After $k$ GNN layers, the aggregated representation of vertex $v$ contains information from all vertices reachable by walks of length $\leq k$ from $v$. The contribution of vertex $u$ to vertex $v$'s representation after exactly $k$ layers is proportional to $(A^k)_{uv}$ (in the linear, non-normalised case). The normalised GCN layer uses $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$, so the $k$-layer GCN computes $\hat{A}^k X$ - a matrix whose entries count walks of length $k$ normalised by vertex degrees.
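A minimal sketch of the normalised propagation matrix for $K_3$, assuming the standard GCN form with self-loops (no learned weights or nonlinearity):

```python
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

A_tilde = A + np.eye(3)                 # add a self-loop to every vertex
d = A_tilde.sum(axis=1)                 # degrees incl. self-loops: all 3
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

print(np.allclose(A_hat, A_hat.T))        # True: symmetric normalisation
print(np.allclose(A_hat.sum(axis=1), 1))  # True here: K_3 is regular, so
                                          # every row of A_hat sums to 1
```

The second check illustrates the regular-graph observation from 3.4: on a $k$-regular graph the normalisation is a constant $1/(k+1)$ per entry, so propagation reduces to plain averaging.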
For AI: This theorem connects graph theory directly to linear algebra. The spectral decomposition $A = Q \Lambda Q^\top$ gives $A^k = Q \Lambda^k Q^\top$, so the number of walks of length $k$ is controlled by the eigenvalues of $A$. The dominant eigenvalue $\lambda_1$ determines the exponential growth rate of walk counts - this is the mathematical foundation of PageRank (stationary distribution of a random walk = dominant eigenvector of the transition matrix). Full spectral analysis is in 04 Spectral Graph Theory.