"Sets tell us what objects exist in a mathematical world; logic tells us what must be true about them."
Overview
Sets and logic are the foundation beneath every other mathematical object in this repository. Before we can speak precisely about vectors, functions, probabilities, losses, or neural network layers, we need a language for collections, membership, relations, predicates, and valid inference.
In AI, these ideas show up constantly. A vocabulary is a set, an attention mask is a relation on token positions, a training rule is a logical implication, and a theorem about optimization or generalization is only as strong as the quantifiers and assumptions that define it. This section develops that language so later chapters can use it rigorously rather than implicitly.
Prerequisites
- Basic arithmetic and symbolic notation from Number Systems
- Comfort reading simple mathematical statements and equations
- Familiarity with elementary programming concepts such as conditions and Boolean values
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive set operations, truth tables, logical normal forms, and AI-flavored demonstrations |
| exercises.ipynb | Guided practice on sets, relations, quantifiers, predicates, and proof structure |
Learning Objectives
After completing this section, you will:
- Define sets, subsets, relations, and Cartesian products precisely
- Distinguish naive set intuition from axiomatic safeguards against paradox
- Translate English mathematical statements into propositional and predicate logic
- Work fluently with quantifiers, implication, equivalence, and normal forms
- Recognize how set and logical structure appears in ML pipelines and model design
- Explain cardinality, countability, and why they matter for computation and learning theory
- Read formal statements in later chapters without losing track of hypotheses and conclusions
- Use sets and logic as the conceptual bridge to functions, proof techniques, probability, and beyond
Table of Contents
- Intuition
- Naive Set Theory
- Set Operations
- Relations
- Propositional Logic
- Predicate Logic (First-Order Logic)
- Proof Techniques
- Axiomatic Set Theory (ZFC)
- Logic in Computation and AI
- Cardinality and Infinite Sets
- Logic and Set Theory in Machine Learning
- Advanced Topics
- Common Mistakes
- Exercises
- Why This Matters for AI
- Conceptual Bridge
1. Intuition
1.1 What Are Sets and Logic?
Set theory is the mathematical language for talking about collections of objects; logic is the mathematical language for talking about truth and valid reasoning. Together they form the foundation of all mathematics: every other mathematical structure - numbers, functions, probability, linear algebra - is built on sets and governed by logic.
A set answers the question: what objects belong to this category?
Logic answers the question: given what we know, what can we validly conclude?
These are not abstract curiosities. When you tokenise a sentence, the vocabulary is a set. When you compute attention, the mask selects a subset of positions. When you train a model, the loss function maps from a set of parameters to real numbers. When you evaluate whether a model's output is "correct", you're applying logical predicates. Sets and logic are not prerequisites you study once and forget - they are the operational language of every equation in this curriculum.
For AI and LLMs specifically:
- Sets formalise vocabularies, token sequences, and model outputs
- Logic underpins reasoning, constraint satisfaction, knowledge representation, and formal verification of AI systems
- Every probability distribution is defined over a set with a logical structure (sigma-algebra)
- Neural network verification expresses safety properties as logical formulas
1.2 Why Sets and Logic Matter for AI
| AI Concept | Set/Logic Foundation |
|---|---|
| Tokenisation | Vocabulary is a set; the set of all token sequences is a formal language |
| Probability | A probability space is built on sets: events are subsets of sample space |
| Neural network architecture | Each layer is a function between sets (domain and codomain) |
| Attention masks | A mask selects a subset of positions; causal mask is a specific set relation |
| Structured generation | Constrained decoding enforces logical constraints on output tokens at each step |
| Knowledge graphs | Entities are sets; relations are functions between sets; reasoning is predicate logic |
| Formal verification | Proving properties of AI systems requires predicate logic and SAT/SMT solvers |
| Training data | Deduplication is a set operation; a dataset is a multiset of examples |
| Retrieval (RAG) | A query retrieves a subset of a corpus satisfying a relevance predicate |
| Type checking | Tensor shape inference is a form of type checking that prevents dimension mismatches |
1.3 The Hierarchy of Mathematical Foundations
Every mathematical structure used in AI rests on a hierarchy, and sets and logic form the very bottom:
Layer 7: Machine Learning (SGD, backprop, transformers)
Layer 6: Probability (distributions, Bayes, entropy)
Layer 5: Analysis (continuity, limits, derivatives)
Layer 4: Linear Algebra (vectors, matrices, eigenvalues)
Layer 3: Algebra (groups, rings, fields)
Layer 2: Arithmetic (natural numbers, integers, reals)
Layer 1: Set Theory (ZFC) (what objects exist? what collections?)
Layer 0: Logic (what is valid reasoning? what follows from what?)
Each layer is defined in terms of the layers below. Natural numbers are constructed from sets (von Neumann ordinals). Real numbers are equivalence classes of Cauchy sequences of rationals. Vector spaces are sets with operations satisfying axioms stated in predicate logic. Probability measures are functions from sets to [0, 1]. The entire edifice stands on sets and logic.
This is not a historical accident - it is the result of 150 years of foundational work from Cantor, Frege, Russell, Hilbert, Gödel, and Turing to make mathematics rigorous and unambiguous.
1.4 Formal vs Informal Mathematics
Informal mathematics is what most research papers use: proofs in natural language, intuitions supported by diagrams, appeals to "clearly" and "obviously". This is adequate for communication between mathematicians who share common training and conventions.
Formal mathematics is where every step is justified by an explicit rule, and proofs are machine-checkable. Nothing is taken for granted:
| Aspect | Informal | Formal |
|---|---|---|
| Language | Natural language + notation | Formal logic (FOL or type theory) |
| Proof verification | Human reviewer | Machine (proof assistant) |
| Ambiguity | Tolerated | Forbidden |
| Tools | Paper, LaTeX | Lean 4, Coq, Isabelle, Agda |
| Scale | Research papers | Verified libraries (Mathlib ~150K theorems) |
Proof assistants encode logic and set theory precisely and verify proofs automatically. They are no longer academic curiosities:
- Lean 4 (Microsoft Research): Mathlib library; used by AlphaProof (Google DeepMind, 2024) to win IMO gold medals
- Coq (INRIA): CompCert verified C compiler; Four Colour Theorem formalised
- Isabelle (Cambridge/Munich): seL4 verified operating system kernel
- AI and formal mathematics: LLMs increasingly used to assist with formal proofs - Lean Copilot, Hypertree Proof Search, AlphaProof. Understanding sets and logic is prerequisite for understanding what LLMs are attempting when doing mathematical reasoning
1.5 Historical Timeline
The development of sets and logic spans 2,400 years. Here are the critical milestones:
| Year | Person | Contribution |
|---|---|---|
| ~350 BCE | Aristotle | Syllogistic logic - first formal system of deductive reasoning |
| 1679 | Leibniz | Calculus ratiocinator - dream of universal logical calculus |
| 1847 | Boole | The Mathematical Analysis of Logic - Boolean algebra; logic as algebra |
| 1874 | Cantor | Set theory - infinite sets, cardinality, continuum hypothesis |
| 1879 | Frege | Begriffsschrift - first complete formal logic system; predicate calculus |
| 1901 | Russell | Russell's Paradox - naive set theory is inconsistent |
| 1908 | Zermelo | Axiomatic set theory - foundation for modern mathematics |
| 1910-13 | Russell & Whitehead | Principia Mathematica - attempt to reduce all mathematics to logic |
| 1922-30s | Fraenkel, Skolem | ZFC axioms completed - standard foundation of modern mathematics |
| 1929 | Gödel | Completeness theorem - every valid FOL formula has a proof |
| 1931 | Gödel | Incompleteness theorems - no consistent system can prove all true arithmetic |
| 1936 | Church & Turing | Lambda calculus and Turing machines - computability theory |
| 1936 | Tarski | Formal semantics - truth defined precisely for formal languages |
| 1965 | Zadeh | Fuzzy logic - graded truth values in [0, 1] |
| 1972 | Martin-Löf | Dependent type theory - constructive foundations connecting logic to programming |
| 2005-10 | - | SAT solvers industrialised - automated reasoning becomes practical |
| 2024 | Google DeepMind | AlphaProof - neural theorem prover achieves IMO gold medal level |
1.6 The Two Pillars
Mathematics rests on two pillars, and understanding their relationship is essential:
                     MATHEMATICS
                          |
          +---------------+---------------+
          |                               |
        LOGIC                        SET THEORY
(What is valid reasoning?)     (What objects exist?)
          |                               |
  Propositional Logic            Naive Set Theory
          ↓                               ↓
  Predicate Logic (FOL)           Axiomatic (ZFC)
          ↓                               ↓
  Modal / Temporal Logic           Type Theory
          |                               |
          +---------------+---------------+
                          |
           Both together: the language in which
           all of AI's mathematical foundations
           are written
Logic provides the rules of valid inference - what follows from what. It tells you that if "all transformers have attention" and "GPT is a transformer", then "GPT has attention" - and it tells you exactly why this inference is valid (universal instantiation plus modus ponens).
Set theory provides the objects - the collections, the spaces, the structures. It tells you what ℝⁿ is (a Cartesian product of n copies of ℝ), what a function is (a special kind of relation, which is a subset of a Cartesian product), and what a probability space is (a triple (Ω, 𝓕, P) of a sample space, a set of events, and a measure).
Neither is sufficient alone. Logic without sets has nothing to reason about. Sets without logic have no way to draw conclusions. Together they form the complete foundation.
2. Naive Set Theory
2.1 What Is a Set?
A set is an unordered collection of distinct objects called elements (or members). This is the simplest and most fundamental concept in all of mathematics.
Membership notation:
- x ∈ A means "x is an element of A" (read: "x is in A")
- x ∉ A means "x is not an element of A" (read: "x is not in A")
Example: let A = {1, 2, 3}.
Then 1 ∈ A, 2 ∈ A, and 3 ∈ A, but 4 ∉ A and 0 ∉ A.
The fundamental principle - Axiom of Extensionality: A set is entirely determined by its members. Two sets are equal if and only if they have exactly the same elements:
A = B ⟺ for every x, x ∈ A iff x ∈ B
This means:
- Order does not matter: {1, 2, 3} = {3, 2, 1}
- Repetition does not matter: {1, 1, 2} = {1, 2} (sets have distinct elements)
Important distinction: Sets have no notion of order or multiplicity. If you need order, use a sequence (tuple). If you need multiplicity, use a multiset (see 2.6). In ML, training batches are typically ordered sequences, not sets - but the mathematical specification of "training data" is a multiset (same example can appear multiple times).
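The three notions map directly onto Python types - `set` for sets, `tuple` for sequences, and `collections.Counter` for multisets - so the distinctions can be checked at a REPL (a small illustrative sketch):

```python
from collections import Counter

# A set: order and repetition are ignored
assert {1, 2, 3} == {3, 2, 1} == {1, 1, 2, 3}

# A tuple (sequence): order matters
assert (1, 2, 3) != (3, 2, 1)

# A multiset (Counter): multiplicity matters, order does not
assert Counter([1, 1, 2]) == Counter([2, 1, 1])
assert Counter([1, 1, 2]) != Counter([1, 2])
```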
2.2 Describing Sets
There are three standard ways to describe which elements belong to a set:
Roster notation (listing): Enumerate elements explicitly inside braces.
| Example | Description |
|---|---|
| {1, 2, 3, 4, 5} | Finite set with 5 elements |
| {a, b, c} | Set of three letters |
| {0, 1, 2, 3, …} | Infinite set with clear pattern (use carefully) |
Set-builder notation (predicate): Describe the property that members satisfy.
Read: {x : P(x)} is "the set of all x such that P(x) holds."
| Set-builder | Meaning |
|---|---|
| {n ∈ ℤ : n is even} | Even integers |
| {x ∈ ℝ : x > 0} | Positive real numbers |
| {t : t is a token the tokeniser can emit} | Vocabulary as a set |
| {(i, j) : j ≤ i} | Causal attention mask positions |
Parametric description: Define elements by explicit construction.
| Parametric | Meaning |
|---|---|
| {2k : k ∈ ℤ} | Even integers |
| {1/n : n ∈ ℕ, n ≥ 1} | Reciprocals of positive integers |
| {Ax : x ∈ ℝⁿ} | Range of a linear map A |
2.3 Special Sets
The following sets are used so frequently that they have special symbols:
| Symbol | Name | Elements |
|---|---|---|
| ∅ | Empty set | No elements |
| ℕ | Natural numbers | 0, 1, 2, 3, … (sometimes starting from 1) |
| ℤ | Integers | …, −2, −1, 0, 1, 2, … |
| ℚ | Rationals | Fractions p/q with p, q ∈ ℤ, q ≠ 0 |
| ℝ | Real numbers | All points on the number line |
| ℂ | Complex numbers | Numbers a + bi with a, b ∈ ℝ |
| U | Universal set | All objects under consideration (context-dependent) |
The containment chain:
ℕ ⊂ ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ
Each step adds new elements: ℤ adds negatives, ℚ adds fractions, ℝ adds irrationals (√2, π, e), ℂ adds imaginary numbers.
Critical distinctions with the empty set:
| Expression | Elements | Cardinality |
|---|---|---|
| ∅ | None | 0 |
| {∅} | One element: ∅ | 1 |
| {∅, {∅}} | Two elements: ∅ and {∅} | 2 |
Warning: ∅ ≠ {∅}. The empty set has no elements. The set containing the empty set has one element (which happens to be the empty set). This distinction is not pedantic - it is exactly how natural numbers are constructed from sets in ZFC (see 8.3).
2.4 Subset and Superset
Subset: A is a subset of B (written A ⊆ B) if every element of A is also an element of B:
A ⊆ B ⟺ for every x, x ∈ A implies x ∈ B
Proper subset: A is a proper subset of B (written A ⊊ B or A ⊂ B) if A ⊆ B and A ≠ B - that is, B has at least one element not in A.
Superset: B ⊇ A means A ⊆ B (read: "B is a superset of A").
Properties of the subset relation:
| Property | Statement | Meaning |
|---|---|---|
| Reflexivity | A ⊆ A | Every set is a subset of itself |
| Antisymmetry | A ⊆ B and B ⊆ A imply A = B | Standard proof technique for set equality |
| Transitivity | A ⊆ B and B ⊆ C imply A ⊆ C | Subset chains compose |
| Empty set | ∅ ⊆ A for every set A | Vacuously true |
The antisymmetry property gives us the standard method for proving two sets are equal: show A ⊆ B (every element of A is in B) and B ⊆ A (every element of B is in A). This "double containment" proof is used constantly in mathematics.
Vacuous truth explanation: Why is ∅ ⊆ A true? The definition says: for every x, if x ∈ ∅ then x ∈ A. Since nothing is in ∅, the "if" clause is never satisfied, so the implication is vacuously true. There is no counterexample - we would need an x that is in ∅ but not in A, and no such x exists. See 5.2 for more on vacuous truth and implication.
2.5 Russell's Paradox
Naive comprehension (the principle that doomed early set theory): For any property P, there exists a set {x : P(x)}. This seems natural - "the set of all things with property P" - but it leads to disaster.
Russell's construction (1901): Define
R = {x : x ∉ x}
The set of all sets that don't contain themselves. Now ask: is R ∈ R?
Case 1: Assume R ∈ R. Then R satisfies its own membership condition, which says R ∉ R. Contradiction.
Case 2: Assume R ∉ R. Then R satisfies the condition x ∉ x, so by the definition of R, we have R ∈ R. Contradiction.
Either way we get a contradiction. Naive set theory is inconsistent.
The resolution: We cannot form "the set of all sets" or unrestricted collections. The axiom of Separation (in ZFC) restricts comprehension: you can only form {x ∈ A : P(x)} for an already-existing set A. You cannot form a set "from scratch" using just a property - you must separate from an existing set. This blocks Russell's paradox because there is no universal set of all sets from which to separate.
This is not merely a historical curiosity. Russell's paradox is the reason mathematics has axioms for set theory at all, and understanding it is prerequisite for understanding ZFC (8), type theory (9.4), and the limits of formal systems (12.1).
2.6 Multisets (Bags)
Standard sets require distinct elements: {1, 1, 2} = {1, 2}. But in practice, we often need to track multiplicity - how many times each element appears.
A multiset (or bag) generalises sets by assigning each element a multiplicity (count). Formally, a multiset over a base set S is a function m : S → ℕ where m(x) gives the number of times x appears.
Notation: {a, a, b} as a multiset has m(a) = 2, m(b) = 1, and m(x) = 0 for every other x.
Multiset operations:
| Operation | Definition | Example |
|---|---|---|
| Union (max) | m(x) = max(m₁(x), m₂(x)) | {a, a, b} ∪ {a, b, b} = {a, a, b, b} |
| Sum (add) | m(x) = m₁(x) + m₂(x) | {a, a, b} ⊎ {a, b, b} = {a, a, a, b, b, b} |
| Intersection (min) | m(x) = min(m₁(x), m₂(x)) | {a, a, b} ∩ {a, b, b} = {a, b} |
AI applications of multisets:
- Training data: A dataset is a multiset of examples - the same document may appear multiple times (data repetition, which affects training dynamics)
- Token counting: Term frequency in TF-IDF and BM25 is a multiset operation - how many times each word appears matters
- Gradient accumulation: Sum of gradient multiset over a batch: each micro-batch contributes a gradient, and we sum (multiset addition)
- Batch statistics: BatchNorm computes mean and variance of a multiset of activations per channel
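Python's `collections.Counter` implements exactly these multiset operations (`|` for max, `+` for add, `&` for min), so the table above can be verified directly (illustrative sketch):

```python
from collections import Counter

a = Counter("aab")   # m(a) = 2, m(b) = 1
b = Counter("abb")   # m(a) = 1, m(b) = 2

print(a | b)  # union (max):        Counter({'a': 2, 'b': 2})
print(a + b)  # sum (add):          Counter({'a': 3, 'b': 3})
print(a & b)  # intersection (min): Counter({'a': 1, 'b': 1})
```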
3. Set Operations
Set operations are the algebraic toolkit for combining, comparing, and transforming sets. Every one of these operations appears directly in modern AI systems: attention masks use complement and intersection, retrieval uses union and difference, and probability theory uses all of them.
3.1 Union
The union A ∪ B = {x : x ∈ A or x ∈ B} contains every element that belongs to at least one of the two sets. The "or" is inclusive - elements in both sets are included (once, since sets don't repeat).
Visual: Think of taking both sets and pouring them together. Any element from either set goes in.
Properties of union:
| Property | Statement | Why |
|---|---|---|
| Commutativity | A ∪ B = B ∪ A | "or" is commutative |
| Associativity | (A ∪ B) ∪ C = A ∪ (B ∪ C) | Can chain without parentheses |
| Identity | A ∪ ∅ = A | Adding nothing changes nothing |
| Idempotence | A ∪ A = A | Duplicates collapse in sets |
| Domination | A ∪ U = U | Everything plus anything is everything |
Generalised union: For a collection of sets A₁, A₂, …, Aₙ:
A₁ ∪ A₂ ∪ … ∪ Aₙ = {x : x ∈ Aᵢ for at least one i}
AI examples:
- Vocabulary merging: When combining tokenisers from different models, V = V₁ ∪ V₂
- Search results: "Documents matching query A OR query B" = D_A ∪ D_B
- Multi-dataset training: Training set = D₁ ∪ D₂ ∪ … ∪ Dₖ
3.2 Intersection
The intersection A ∩ B = {x : x ∈ A and x ∈ B} contains only elements that belong to both sets simultaneously.
Properties of intersection:
| Property | Statement | Why |
|---|---|---|
| Commutativity | A ∩ B = B ∩ A | "and" is commutative |
| Associativity | (A ∩ B) ∩ C = A ∩ (B ∩ C) | Can chain without parentheses |
| Identity | A ∩ U = A | Intersecting with everything keeps all |
| Idempotence | A ∩ A = A | Set intersected with itself is itself |
| Annihilation | A ∩ ∅ = ∅ | Intersecting with nothing gives nothing |
Disjoint sets: Two sets A and B are disjoint if A ∩ B = ∅ - they share no elements.
Generalised intersection: For a collection A₁, A₂, …, Aₙ:
A₁ ∩ A₂ ∩ … ∩ Aₙ = {x : x ∈ Aᵢ for every i}
AI examples:
- Common tokens: Tokens shared by two vocabularies: V₁ ∩ V₂
- Data deduplication: Documents appearing in both train and test: D_train ∩ D_test (should be ∅ for valid evaluation!)
- Attention intersection: Tokens attended by both head A and head B
Distributive laws (union and intersection interact):
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
These mirror the distributive laws of multiplication over addition (a(b + c) = ab + ac), which is no coincidence - Boolean algebra (5.7) makes these analogies precise.
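As a sanity check, the distributive laws can be tested on randomly sampled subsets of a small universe (an illustrative sketch; the universe size and sample count are arbitrary choices):

```python
import random

# Check the distributive laws on random subsets of a small universe
U = set(range(8))
random.seed(0)
for _ in range(100):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    C = {x for x in U if random.random() < 0.5}
    assert A | (B & C) == (A | B) & (A | C)   # union distributes over intersection
    assert A & (B | C) == (A & B) | (A & C)   # intersection distributes over union
print("distributive laws hold on all sampled triples")
```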
3.3 Set Difference
A \ B = {x : x ∈ A and x ∉ B} - the elements of A that are not in B. Think of "removing" B's elements from A.
Properties of set difference:
| Statement | Meaning |
|---|---|
| A \ B ≠ B \ A in general | Not commutative |
| A \ ∅ = A | Removing nothing changes nothing |
| A \ A = ∅ | Removing everything leaves nothing |
| A \ B ⊆ A | Can only reduce the original set |
| (A \ B) ∩ B = ∅ | What remains shares nothing with what was removed |
Relationship to other operations:
A \ B = A ∩ Bᶜ
Difference is intersection with complement (this is often the most useful way to think about it).
AI examples:
- Vocabulary difference: Tokens in GPT-4's vocabulary but not LLaMA's: V_GPT4 \ V_LLaMA
- Data filtering: Training data after removing test examples: D_train \ D_test
- Feature selection: Selected features = all features minus dropped features
3.4 Symmetric Difference
A △ B = (A \ B) ∪ (B \ A) - the elements in exactly one of the two sets, not in both.
Properties of symmetric difference:
| Property | Statement | Why |
|---|---|---|
| Commutativity | A △ B = B △ A | Definition is symmetric |
| Associativity | (A △ B) △ C = A △ (B △ C) | Can chain |
| Identity | A △ ∅ = A | XOR with nothing = original |
| Self-inverse | A △ A = ∅ | XOR with self cancels |
| Criterion | A △ B = ∅ ⟺ A = B | Zero difference means equal |
Algebraic structure: The symmetric difference operation △ makes the power set P(U) into an abelian group with ∅ as the identity (each set is its own inverse). Combined with intersection as multiplication, (P(U), △, ∩) forms a Boolean ring - connecting set theory to abstract algebra.
AI examples:
- Dataset drift detection: Tokens that changed between vocabulary v1 and v2: V₁ △ V₂
- Model comparison: Features used by model A XOR model B
- Edit distance: Symmetric difference of character sets is a crude string similarity measure
3.5 Complement
Aᶜ = U \ A - everything in the universal set U that is not in A. The complement depends on what U is (must specify context).
Properties of complement:
| Property | Statement |
|---|---|
| Double complement | (Aᶜ)ᶜ = A |
| Complement of U | Uᶜ = ∅ |
| Complement of ∅ | ∅ᶜ = U |
| Union with complement | A ∪ Aᶜ = U |
| Intersection with complement | A ∩ Aᶜ = ∅ |
AI example: An attention mask M selects positions to attend to. The complement Mᶜ gives the positions that are masked out (set to −∞ before softmax). In causal masking, the complement is the "future" positions that must not be attended to.
3.6 De Morgan's Laws for Sets
These are the most important algebraic identities for sets. They connect union and intersection through complementation:
First law - complement of union = intersection of complements:
(A ∪ B)ᶜ = Aᶜ ∩ Bᶜ
"Not in A-or-B" is the same as "not in A and not in B."
Second law - complement of intersection = union of complements:
(A ∩ B)ᶜ = Aᶜ ∪ Bᶜ
"Not in A-and-B" is the same as "not in A or not in B."
Proof of the first law (by double containment):
(⊆): Let x ∈ (A ∪ B)ᶜ. Then x ∉ A ∪ B, which means x ∉ A and x ∉ B (if x were in either, it would be in the union). Hence x ∈ Aᶜ and x ∈ Bᶜ, so x ∈ Aᶜ ∩ Bᶜ.
(⊇): Let x ∈ Aᶜ ∩ Bᶜ. Then x ∉ A and x ∉ B. Hence x is in neither A nor B, so x ∉ A ∪ B, which means x ∈ (A ∪ B)ᶜ.
Both containments hold, so (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ.
Generalised De Morgan's Laws:
(⋃ᵢ Aᵢ)ᶜ = ⋂ᵢ Aᵢᶜ and (⋂ᵢ Aᵢ)ᶜ = ⋃ᵢ Aᵢᶜ
AI examples:
- Attention mask logic: "Not attending to position A or position B" = "not attending to A and not attending to B"
- Retrieval negation: "Documents not matching (query1 OR query2)" = "documents not matching query1 AND not matching query2"
- Filter composition: Negating a disjunctive filter becomes conjunctive
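Both laws are easy to verify on concrete finite sets in Python (illustrative sketch; the universe U and the sets A, B are arbitrary choices):

```python
U = set(range(10))          # universal set for taking complements
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def complement(S):
    return U - S

assert complement(A | B) == complement(A) & complement(B)  # first law
assert complement(A & B) == complement(A) | complement(B)  # second law
print("De Morgan's laws verified")
```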
3.7 Cartesian Product
A × B = {(a, b) : a ∈ A, b ∈ B} - the set of all ordered pairs where the first element comes from A and the second from B. Unlike sets, order matters in pairs: (a, b) ≠ (b, a) unless a = b.
Cardinality: |A × B| = |A| · |B|
Not commutative: A × B ≠ B × A (different ordered pairs, unless A = B).
Higher products:
- A × B × C - ordered triples
- Aⁿ = A × A × … × A (n times) - n-tuples of elements from A
AI examples - Cartesian products are everywhere:
| Expression | Meaning |
|---|---|
| ℝᵈ | d-dimensional embedding space |
| Vⁿ | All possible token sequences of length n |
| ℝᵈ × ℝᵈ | Domain of attention: (queries, keys) |
| {1, …, n} × {1, …, n} | All position pairs for the attention matrix |
| ℝᵖ | Parameter space of a model with p parameters |
Formal definition of ordered pair (Kuratowski): (a, b) = {{a}, {a, b}}. This reduces ordered pairs to pure sets - the pair is a set that encodes the order by treating {a} as the "singleton" element.
3.8 Power Set
P(A) = {S : S ⊆ A} - the set of all subsets of A, including ∅ and A itself.
Cardinality: |P(A)| = 2^|A| for finite A.
The reason: for each element of A, you independently choose "include" or "exclude" - this gives 2 choices for each of the |A| elements, hence 2^|A| subsets total.
Examples:
| Set | Power set | Cardinality |
|---|---|---|
| {a, b, c} | {∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c}} | 8 subsets (including ∅ and {a, b, c}) |
Cantor's theorem (critical): The power set is always strictly larger than the original set, even for infinite sets:
|A| < |P(A)|
This means there are always more subsets than elements - a fact that generates the entire hierarchy of infinite cardinalities (see 10.4).
AI implications:
- If a vocabulary V has n tokens, then P(V) has 2ⁿ elements - the number of possible token subsets is astronomically larger than the vocabulary itself
- Structured generation constrains output to a subset of V at each step - it selects one element of P(V)
- The exponential blowup of power sets is why exhaustive search over subsets is intractable, driving the need for heuristic and approximate algorithms throughout AI
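A standard way to enumerate P(A) in Python is via itertools, taking combinations of every size from 0 to |A| (an illustrative sketch; `power_set` is a hypothetical helper name):

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

subsets = power_set({"a", "b", "c"})
print(len(subsets))         # 8 == 2**3
print(set() in subsets)     # True: the empty set is always a subset
```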
3.9 Operations Summary Table
For quick reference, here is the complete table of set operations with their logical counterparts:
| Set Operation | Notation | Logic Equivalent | Python |
|---|---|---|---|
| Union | A ∪ B | ∨ (OR) | A \| B or A.union(B) |
| Intersection | A ∩ B | ∧ (AND) | A & B or A.intersection(B) |
| Difference | A \ B | ∧ ¬ (AND NOT) | A - B or A.difference(B) |
| Symmetric difference | A △ B | ⊕ (XOR) | A ^ B or A.symmetric_difference(B) |
| Complement | Aᶜ | ¬ (NOT) | U - A |
| Cartesian product | A × B | - | itertools.product(A, B) |
| Power set | P(A) | - | itertools.combinations for each size |
| Subset test | A ⊆ B | ⟹ | A <= B or A.issubset(B) |
4. Relations
Relations generalise the concept of "connection" between objects. Functions, orderings, equivalences, and even neural network computations are all special kinds of relations. Understanding relations precisely is what separates informal intuition from rigorous mathematics.
4.1 Binary Relations
A binary relation R from a set A to a set B is a subset of the Cartesian product:
R ⊆ A × B
If (a, b) ∈ R, we write aRb or R(a, b) and read "a is related to b by R."
A relation on A is a relation from A to itself: R ⊆ A × A.
Examples:
| Relation | Sets | Definition |
|---|---|---|
| ≤ on ℝ | ℝ × ℝ | (x, y) ∈ R iff x is less than or equal to y |
| Parent-of | Person × Person | (p, c) ∈ R iff p is a parent of c |
| Token adjacency | V × V | (u, v) ∈ R iff u can precede v in valid text |
| Attention mask | {1, …, n} × {1, …, n} | (i, j) ∈ R iff query at position i attends to key at position j |
| Similarity | ℝᵈ × ℝᵈ | (u, v) ∈ R iff cos(u, v) exceeds a chosen threshold |
A relation can be represented as:
- A set of pairs (the mathematical definition)
- A matrix (a Boolean matrix M with Mᵢⱼ = 1 iff (i, j) ∈ R; this is the attention mask representation)
- A directed graph (nodes = elements; edge from a to b iff aRb)
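The representations coincide on a concrete example - here the causal attention relation "j ≤ i", built both as a set of pairs and as its Boolean-matrix (mask) form (illustrative sketch):

```python
n = 4  # sequence length

# Causal attention relation: (i, j) in R iff j <= i  (position i may attend to j)
R = {(i, j) for i in range(n) for j in range(n) if j <= i}

# The same relation as a Boolean matrix (the attention-mask view)
M = [[1 if (i, j) in R else 0 for j in range(n)] for i in range(n)]
for row in M:
    print(row)   # lower-triangular matrix of ones
```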
4.2 Properties of Relations
Let R be a relation on a set A (i.e., R ⊆ A × A). There are six fundamental properties a relation may or may not have:
Reflexivity: aRa for every a ∈ A. Every element is related to itself.
- ✓ ≤ is reflexive (a ≤ a for all a)
- ✗ < is not reflexive (a < a never holds)
- AI: identity attention (token attends to itself) is a reflexive relation
Irreflexivity: ¬(aRa) for every a ∈ A. No element is related to itself.
- ✓ < is irreflexive
- ✓ "parent-of" is irreflexive (no one is their own parent)
- A relation can be neither reflexive nor irreflexive (some self-loops, not all)
Symmetry: aRb ⟹ bRa. If a is related to b, then b is related to a.
- ✓ "is sibling of" is symmetric
- ✓ "has same length as" is symmetric
- ✗ ≤ is not symmetric (3 ≤ 5 but 5 ≤ 3 fails)
- AI: cosine similarity is symmetric; attention is NOT symmetric (query → key direction)
Antisymmetry: aRb and bRa imply a = b.
- ✓ ≤ is antisymmetric (a ≤ b and b ≤ a implies a = b)
- ✓ ⊆ on sets is antisymmetric
- Note: antisymmetry ≠ "not symmetric". A relation can be both symmetric and antisymmetric (e.g., equality)
Asymmetry: aRb ⟹ ¬(bRa). If a is related to b, then b is NOT related to a.
- ✓ < is asymmetric
- ✓ "parent-of" is asymmetric
- Asymmetry implies irreflexivity (if aRa held for some a, asymmetry would demand ¬(aRa))
Transitivity: aRb and bRc imply aRc.
- ✓ ≤ is transitive
- ✓ "ancestor-of" is transitive
- ✗ "friend-of" is not transitive (my friend's friend need not be my friend)
- AI: causal mask transitivity - if position i attends to j and j attends to k, does i attend to k? In a causal mask, yes (by transitivity of ≤)
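For finite relations stored as sets of pairs, each property is a direct check (illustrative sketch; the helper names are hypothetical):

```python
def is_reflexive(R, A):
    return all((a, a) in R for a in A)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R
               for (a, b) in R for (c, d) in R if b == c)

A = {0, 1, 2, 3}
leq = {(a, b) for a in A for b in A if a <= b}   # the relation <= on A

print(is_reflexive(leq, A))   # True
print(is_symmetric(leq))      # False: (0, 1) is in R but (1, 0) is not
print(is_transitive(leq))     # True
```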
4.3 Equivalence Relations
A relation ~ on A is an equivalence relation if it is:
- Reflexive: a ~ a for all a
- Symmetric: a ~ b ⟹ b ~ a
- Transitive: a ~ b and b ~ c ⟹ a ~ c
Equivalence relations capture the idea of "sameness" or "interchangeability" - elements related by an equivalence relation are considered equivalent for the purpose at hand.
Examples of equivalence relations:
| Relation | Domain | Equivalence classes |
|---|---|---|
| Equality | Any set | Each class is a singleton |
| Same remainder mod n | ℤ | n classes: [0], [1], …, [n − 1] |
| Same length | Strings | Strings of equal length grouped together |
| Same BPE encoding | Character sequences | Sequences mapping to same token(s) |
| Same prediction | Inputs | Inputs producing identical model output |
Equivalence class of a:
[a] = {x ∈ A : x ~ a}
All elements equivalent to a form the equivalence class [a].
Partition theorem: An equivalence relation on A partitions A into disjoint equivalence classes:
- Every element belongs to exactly one equivalence class
- The union of all classes equals A: ⋃ [a] = A (union over all a ∈ A)
- Distinct classes are disjoint: [a] ≠ [b] ⟹ [a] ∩ [b] = ∅
Quotient set: A/~ is the set of all equivalence classes.
AI example - tokenisation as equivalence: BPE tokenisation defines an equivalence relation on character sequences. Two character sequences are "equivalent" if they merge to the same token. The equivalence classes are precisely the tokens. The vocabulary is the quotient set (restricted to the merge rules).
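For a finite set, the partition theorem can be observed directly by grouping elements under an equivalence predicate (illustrative sketch; `equivalence_classes` is a hypothetical helper and assumes `equiv` really is an equivalence relation):

```python
def equivalence_classes(A, equiv):
    """Partition A into classes under the equivalence relation equiv(x, y)."""
    classes = []
    for x in A:
        for cls in classes:
            if equiv(x, next(iter(cls))):   # transitivity: one representative suffices
                cls.add(x)
                break
        else:
            classes.append({x})
    return classes

# "Same remainder mod 3" on {0, ..., 8} partitions it into exactly 3 classes
A = set(range(9))
classes = equivalence_classes(A, lambda x, y: x % 3 == y % 3)
print(sorted(sorted(c) for c in classes))
# [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```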
4.4 Partial Orders
A relation ⪯ on A is a partial order if it is:
- Reflexive: a ⪯ a for all a
- Antisymmetric: a ⪯ b and b ⪯ a imply a = b
- Transitive: a ⪯ b and b ⪯ c imply a ⪯ c
A partial order is written ⪯ or ≤ conventionally. The pair (A, ⪯) is called a partially ordered set (poset).
"Partial" means: Not all pairs need be comparable. Some elements may be incomparable - neither a ⪯ b nor b ⪯ a holds.
Total order (linear order): A partial order where additionally
for all a, b ∈ A: a ⪯ b or b ⪯ a
All pairs are comparable. Total orders arrange elements in a single line.
Examples:
| Relation | Domain | Type | Incomparable? |
|---|---|---|---|
| ≤ on ℝ | Real numbers | Total order | No |
| Lexicographic order | Strings | Total order | No |
| ⊆ on P(S) | Power set | Partial order | Yes: {1} and {2} |
| Divisibility: a \| b iff a divides b | ℕ | Partial order | Yes: 2 and 3 |
| Prefix: u ⪯ v iff u is a prefix of v | Strings | Partial order | Yes: "ab" and "ba" |
Hasse diagram: A graphical representation of a finite poset. Draw b above a with an edge if a ⪯ b and there is no element between them (no c distinct from a and b with a ⪯ c ⪯ b). This gives a compact picture of the partial order structure.
AI connection - dependency parsing: In a syntactic parse tree, token u dominates token v if v is a descendant of u. This dominance relation is a partial order on sentence tokens. Similarly, in DAG-structured computation graphs, the execution order is a partial order on operations.
4.5 Functions as Relations
A function f : A → B is a special case of a relation - a relation f ⊆ A × B satisfying the condition that every element of A appears in exactly one pair:
for every a ∈ A, there is exactly one b ∈ B with (a, b) ∈ f
We write f(a) = b to mean (a, b) ∈ f.
Terminology:
| Term | Symbol | Meaning |
|---|---|---|
| Domain | A | Set of inputs |
| Codomain | B | Set of allowed outputs |
| Range (image) | f(A) = {f(a) : a ∈ A} | Actual outputs (f(A) ⊆ B) |
Types of functions:
| Type | Definition | Meaning |
|---|---|---|
| Injective (one-to-one) | f(a₁) = f(a₂) ⟹ a₁ = a₂ | Distinct inputs → distinct outputs |
| Surjective (onto) | Every b ∈ B has some a ∈ A with f(a) = b | Every output is hit |
| Bijective | Injective and surjective | Perfect pairing between A and B |
AI examples - functions are the core abstraction:
| Function | Type Signature | Properties |
|---|---|---|
| Tokeniser | strings → V* | Not injective (multiple strings → same tokens) |
| Embedding | V → ℝᵈ | Injective (each token gets a unique vector) |
| LM head | ℝᵈ → ℝ^\|V\| | Linear map (matrix multiplication) |
| Attention | ℝⁿˣᵈ → ℝⁿˣᵈ | Highly non-linear |
| Softmax | ℝⁿ → probability simplex | Surjective onto the probability simplex |
| argmax | ℝⁿ → {1, …, n} | Not injective (ties); not differentiable |
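On finite domains, injectivity and surjectivity reduce to counting distinct images and comparing the image to the codomain (illustrative sketch; the helper names are hypothetical):

```python
def is_injective(f, domain):
    """Injective iff no two inputs share an output."""
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    """Surjective iff the image covers the whole codomain."""
    return {f(x) for x in domain} == set(codomain)

domain = range(4)
print(is_injective(lambda x: 2 * x, domain))            # True: doubling is one-to-one
print(is_injective(lambda x: x % 2, domain))            # False: 0 and 2 collide
print(is_surjective(lambda x: x % 2, domain, {0, 1}))   # True: both 0 and 1 are hit
```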
4.6 Composition and Inverse
Composition: Given f : A → B and g : B → C, the composition g ∘ f : A → C is defined by:
(g ∘ f)(a) = g(f(a))
Read right to left: "apply f first, then g."
Properties of composition:
- Associativity: h ∘ (g ∘ f) = (h ∘ g) ∘ f
- Identity: f ∘ id_A = f and id_B ∘ f = f, where id_A(a) = a
- Not commutative: g ∘ f ≠ f ∘ g in general
Inverse: If f : A → B is bijective, there exists a unique f⁻¹ : B → A satisfying:
f⁻¹(f(a)) = a for all a ∈ A, and f(f⁻¹(b)) = b for all b ∈ B
Only bijections have inverses. Injective-but-not-surjective functions have left inverses; surjective-but-not-injective functions have right inverses.
AI example - neural networks as function composition: A deep neural network is fundamentally a composition of functions:
f = f_L ∘ f_{L−1} ∘ … ∘ f₂ ∘ f₁
Each f_i is one layer (linear map + activation). Deep learning is deeply compositional - the power comes from composing many simple functions. The chain rule of calculus (backpropagation) exploits this compositional structure: (g ∘ f)′(x) = g′(f(x)) · f′(x).
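The compositional view can be sketched with plain Python functions standing in for layers (illustrative; the "layers" here are toy scalar functions, not real network layers):

```python
from functools import reduce

def compose(*fs):
    """compose(h, g, f)(x) == h(g(f(x))) -- rightmost function applied first."""
    return reduce(lambda g, f: lambda x: g(f(x)), fs)

# A toy "network": linear -> nonlinearity -> linear, as a composition
f1 = lambda x: 2 * x + 1        # layer 1: affine map
f2 = lambda x: max(0, x)        # layer 2: ReLU
f3 = lambda x: x - 3            # layer 3: affine map

net = compose(f3, f2, f1)
print(net(2))   # f3(f2(f1(2))) = f3(f2(5)) = f3(5) = 2
```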
5. Propositional Logic
Propositional logic is the simplest complete system of logical reasoning. It studies how the truth of compound statements depends on the truth of their components, without looking inside those components. It is the foundation for Boolean circuits, attention mask logic, SAT solvers, and the systematic reasoning that predicate logic (6) and proof techniques (7) build upon.
5.1 Propositions
A proposition is a declarative statement that is either true (T, or 1) or false (F, or 0). Never both, never neither.
Examples of propositions:
- "The sky is blue" - true proposition
- "" - false proposition
- "LLaMA-3 has 8B parameters" - true proposition
- "Every even integer greater than 2 is the sum of two primes" - proposition (truth value unknown; Goldbach's conjecture)
Not propositions:
- "What is your name?" - question, no truth value
- "Close the door" - command, no truth value
- "" - depends on ; becomes a proposition only when is specified (this is a predicate, see 6)
Propositional variables: Letters stand for unspecified propositions. Each has a truth value - either or .
5.2 Logical Connectives
Connectives combine propositions into compound statements. There are six standard connectives:
Negation (NOT): ¬p
"Not p." Flips the truth value: ¬p is true exactly when p is false.
AI: Bitwise NOT in attention masks; complement of a token set.
Conjunction (AND):
" and ." True only when both are true.
AI: Combining attention masks (token must be unmasked AND within window); intersecting constraints in structured generation.
Disjunction (OR):
" or ." True when at least one is true (inclusive or).
AI: Union of token sets in structured generation; "match either pattern" in retrieval.
Exclusive Or (XOR):
" xor ." True when exactly one is true.
AI: Symmetric difference of sets; error detection (XOR of prediction and ground truth bits); XOR is the simplest function not linearly separable - a single perceptron cannot learn it (Minsky & Papert 1969).
Implication (IF...THEN): $p \to q$
"If $p$ then $q$." Also read "$p$ implies $q$." False only when $p$ is true and $q$ is false - a true hypothesis cannot lead to a false conclusion.
Vacuous truth: When $p$ is false, $p \to q$ is always true regardless of $q$. This is often counterintuitive but essential:
- "If pigs fly, then the moon is green" is true (vacuously)
- The reason: $p \to q$ promises "whenever $p$ holds, $q$ holds." If $p$ never holds, the promise is never violated, so it's true by default.
Vacuous truth in mathematics and AI:
- "$\forall x \in \emptyset$, property $P(x)$ holds" - true, because there's nothing to check
- "$\emptyset \subseteq A$" - true, because every element of $\emptyset$ (there are none) is in $A$
- "If the model reaches 100% accuracy, then it has generalised" - vacuously true if the model never reaches 100%
Biconditional (IF AND ONLY IF): $p \leftrightarrow q$
"$p$ if and only if $q$" (abbreviated "iff"). True when $p$ and $q$ have the same truth value.
Equivalently: $(p \to q) \land (q \to p)$.
AI: Set equality ($A = B \iff A \subseteq B \land B \subseteq A$); defining exactly when a condition holds.
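The connectives can be tabulated mechanically. A minimal sketch in Python - the uppercase names are our own definitions, not a library API:

```python
import itertools

NOT = lambda p: not p
AND = lambda p, q: p and q
OR  = lambda p, q: p or q
XOR = lambda p, q: p != q
IMPLIES = lambda p, q: (not p) or q   # false only when p is true and q is false
IFF = lambda p, q: p == q

print("p     q     | and   or    xor   ->    <->")
for p, q in itertools.product([True, False], repeat=2):
    row = [AND(p, q), OR(p, q), XOR(p, q), IMPLIES(p, q), IFF(p, q)]
    print(f"{p!s:<5} {q!s:<5} | " + " ".join(f"{v!s:<5}" for v in row))

# Vacuous truth: a false hypothesis makes the implication true.
assert IMPLIES(False, True) and IMPLIES(False, False)
```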
5.3 Operator Precedence
When multiple connectives appear without parentheses, the standard precedence (highest first) is: $\lnot$, then $\land$, then $\lor$, then $\to$, then $\leftrightarrow$.
Examples:
- $\lnot p \land q$ means $(\lnot p) \land q$, not $\lnot(p \land q)$
- $p \land q \lor r$ means $(p \land q) \lor r$, not $p \land (q \lor r)$
- $p \to q \land r$ means $p \to (q \land r)$, not $(p \to q) \land r$
Best practice: Always use parentheses when there's any ambiguity. Clarity is more important than brevity.
5.4 Tautologies, Contradictions, and Contingencies
Tautology: A formula that is true for all possible truth value assignments to its variables. Logically necessary - cannot possibly be false.
| Tautology | Name |
|---|---|
| $p \lor \lnot p$ | Law of excluded middle |
| $p \to p$ | Self-implication |
| $(p \land q) \to p$ | Simplification |
| $(p \to q) \leftrightarrow (\lnot p \lor q)$ | Implication equivalence |
Contradiction (Unsatisfiable): A formula that is false for all truth value assignments.
| Contradiction | Why |
|---|---|
| $p \land \lnot p$ | Cannot be both true and false |
| $(p \land (p \to q)) \land \lnot q$ | Modus ponens violated |
Contingency: A formula that is true for some assignments and false for others.
- $p \land q$ - true when both are true, false otherwise
- $p \to q$ - false only when $p = \top$ and $q = \bot$
Satisfiability: A formula is satisfiable if at least one truth assignment makes it true. Tautologies and contingencies are satisfiable; contradictions are not.
The SAT problem: Given a propositional formula $\varphi$, is $\varphi$ satisfiable? This is the canonical NP-complete problem (Cook-Levin theorem, 1971). Despite worst-case intractability, modern SAT solvers handle millions of variables - they are critical for AI constraint solving, formal verification, and automated planning.
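For small formulas, satisfiability can be checked by brute force over all $2^n$ assignments. This is a sketch for intuition only - real solvers use DPLL/CDCL, not enumeration - and `is_satisfiable` is our own helper:

```python
import itertools

def is_satisfiable(formula, variables):
    """Brute-force SAT check: try all 2^n truth assignments."""
    return any(
        formula(**dict(zip(variables, values)))
        for values in itertools.product([True, False], repeat=len(variables))
    )

assert is_satisfiable(lambda p: p or not p, ["p"])        # tautology
assert not is_satisfiable(lambda p: p and not p, ["p"])   # contradiction
assert is_satisfiable(lambda p, q: p and q, ["p", "q"])   # contingency
```

The exponential loop is exactly why SAT is hard in the worst case: each extra variable doubles the search space.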
5.5 Logical Equivalence
Two formulas $\varphi$ and $\psi$ are logically equivalent (written $\varphi \equiv \psi$) if they have the same truth value for every possible truth assignment. Equivalently: $\varphi \leftrightarrow \psi$ is a tautology.
Here are the fundamental equivalences - the "algebra rules" of logic:
Double Negation: $\lnot\lnot p \equiv p$
De Morgan's Laws: $\lnot(p \land q) \equiv \lnot p \lor \lnot q$ and $\lnot(p \lor q) \equiv \lnot p \land \lnot q$
These are the most used laws in practice. Notice they mirror the set-theoretic De Morgan's Laws (3.6) exactly - this is the Boolean algebra correspondence (5.7).
Implication Equivalences: $p \to q \equiv \lnot p \lor q$ and $p \to q \equiv \lnot q \to \lnot p$ (contrapositive)
The contrapositive equivalence is used constantly in proofs (see 7.3). It says "$p$ implies $q$" is the same as "not $q$ implies not $p$." Distinguish this from the converse ($q \to p$), which is NOT equivalent to $p \to q$.
Distributive Laws: $p \land (q \lor r) \equiv (p \land q) \lor (p \land r)$ and $p \lor (q \land r) \equiv (p \lor q) \land (p \lor r)$
Absorption Laws: $p \land (p \lor q) \equiv p$ and $p \lor (p \land q) \equiv p$
Idempotent Laws: $p \land p \equiv p$ and $p \lor p \equiv p$
Identity Laws: $p \land \top \equiv p$ and $p \lor \bot \equiv p$
Domination Laws: $p \lor \top \equiv \top$ and $p \land \bot \equiv \bot$
Complement Laws: $p \lor \lnot p \equiv \top$ and $p \land \lnot p \equiv \bot$
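Every equivalence in this list can be verified mechanically by enumerating all truth assignments. A small sketch - the `equivalent` helper is ours, not a library function:

```python
import itertools

def equivalent(f, g, n):
    """Formulas are equivalent iff they agree on all 2^n assignments."""
    return all(f(*vals) == g(*vals)
               for vals in itertools.product([True, False], repeat=n))

implies = lambda p, q: (not p) or q

# De Morgan's law
assert equivalent(lambda p, q: not (p and q),
                  lambda p, q: (not p) or (not q), 2)
# Contrapositive: p -> q is equivalent to (not q) -> (not p)
assert equivalent(lambda p, q: implies(p, q),
                  lambda p, q: implies(not q, not p), 2)
# The converse q -> p is NOT equivalent
assert not equivalent(lambda p, q: implies(p, q),
                      lambda p, q: implies(q, p), 2)
```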
5.6 Normal Forms
Literal: A variable or its negation ($p$ or $\lnot p$).
Clause: A disjunction of literals ($p \lor \lnot q \lor r$).
Term: A conjunction of literals ($p \land \lnot q \land r$).
Conjunctive Normal Form (CNF)
A formula in CNF is a conjunction of clauses - an AND of ORs, e.g. $(p \lor \lnot q) \land (\lnot p \lor r) \land (q \lor \lnot r)$.
Every propositional formula can be converted to an equivalent CNF. SAT solvers require CNF input - the DPLL and CDCL algorithms operate exclusively on CNF.
Disjunctive Normal Form (DNF)
A formula in DNF is a disjunction of terms - an OR of ANDs, e.g. $(p \land \lnot q) \lor (\lnot p \land r)$.
Every formula can be converted to DNF. A DNF is satisfiable iff at least one of its terms is satisfiable (i.e., contains no complementary pair of literals), which is easy to check - but the conversion to DNF may be exponentially large.
Conversion to CNF (Algorithm)
Given an arbitrary formula $\varphi$:
- Eliminate biconditionals: $p \leftrightarrow q \;\Rightarrow\; (p \to q) \land (q \to p)$
- Eliminate implications: $p \to q \;\Rightarrow\; \lnot p \lor q$
- Push negations inward (De Morgan's + double negation): $\lnot(p \land q) \Rightarrow \lnot p \lor \lnot q$, $\lnot\lnot p \Rightarrow p$
- Distribute $\lor$ over $\land$: $p \lor (q \land r) \Rightarrow (p \lor q) \land (p \lor r)$
Example: Convert $p \to (q \land r)$ to CNF:
- Step 2: $\lnot p \lor (q \land r)$
- Step 4: $(\lnot p \lor q) \land (\lnot p \lor r)$ - this is CNF ✓
5.7 Boolean Algebra
The laws of propositional logic form an algebraic structure called a Boolean algebra: a set $\{\bot, \top\}$ equipped with operations $\land$, $\lor$, $\lnot$ satisfying all the equivalences listed in 5.5.
The fundamental connection - Stone Representation Theorem: Every Boolean algebra is isomorphic to an algebra of sets:
| Logic | Sets | Python |
|---|---|---|
| $\land$ (AND) | $\cap$ (intersection) | & |
| $\lor$ (OR) | $\cup$ (union) | \| |
| $\lnot$ (NOT) | complement | ~ |
| $\bot$ (false) | $\emptyset$ | set() |
| $\top$ (true) | universal set $U$ | - |
| $\to$ (implies) | $\subseteq$ | <= |
| $\leftrightarrow$ (iff) | $=$ | == |
This is why De Morgan's laws look the same for sets and for logic - they are the same, in the precise algebraic sense. Boolean algebra unifies propositional logic, set operations, and circuit design under a single framework.
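The logic-sets-Python correspondence can be checked directly with built-in set operators; the small universe `U` below is an arbitrary choice for illustration:

```python
U = frozenset(range(8))      # an arbitrary small universal set
A = frozenset({0, 1, 2, 3})
B = frozenset({2, 3, 4, 5})

assert A & B == frozenset({2, 3})                 # AND  <-> intersection
assert A | B == frozenset({0, 1, 2, 3, 4, 5})     # OR   <-> union
assert U - A == frozenset({4, 5, 6, 7})           # NOT  <-> complement
assert A <= U                                     # implication <-> subset
assert U - (A & B) == (U - A) | (U - B)           # De Morgan, set form
```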
AI connections:
- Attention masks as Boolean matrices: An attention mask is a Boolean matrix $M \in \{0, 1\}^{n \times n}$ where $M_{ij} = 1$ iff query $i$ attends to key $j$. Combining masks (causal AND sliding window) is element-wise Boolean AND.
- Structured generation FSMs: The transition function of a finite state machine operates on Boolean states. Valid token sets at each state are computed via Boolean operations.
- Circuit complexity of neural networks: A ReLU network computes a piecewise linear function; each linear region corresponds to a Boolean pattern of active/inactive neurons - the network's behaviour is described by a Boolean function over activation patterns.
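The mask-combination idea can be sketched with plain Python lists rather than a tensor library; the sequence length and window size below are arbitrary:

```python
n = 6       # sequence length (arbitrary)
window = 3  # hypothetical sliding-window size

# mask[i][j] is True iff query position i may attend to key position j.
causal  = [[j <= i for j in range(n)] for i in range(n)]
sliding = [[i - window < j <= i for j in range(n)] for i in range(n)]

# Combining constraints is entry-wise Boolean AND.
combined = [[causal[i][j] and sliding[i][j] for j in range(n)]
            for i in range(n)]

# Position 5 may attend only to positions 3, 4, 5: causal AND within window.
assert [j for j in range(n) if combined[5][j]] == [3, 4, 5]
```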
6. Predicate Logic (First-Order Logic)
Propositional logic treats statements as atomic units - it cannot look inside them. Predicate logic (FOL) opens them up, exposing the objects, properties, and quantifiers that make mathematical reasoning powerful. Every formal specification in AI - from type systems to knowledge bases to verification conditions - ultimately rests on predicate logic.
6.1 Predicates and Quantifiers - Motivation
The statement "every prime greater than 2 is odd" cannot be expressed in propositional logic. We need:
- A way to talk about individual objects (numbers, tokens, vectors)
- A way to express properties of objects ("is prime," "is odd")
- A way to say "for all" or "there exists"
Predicate logic provides all three.
6.2 Syntax of FOL
Terms: The objects we talk about.
- Constants: Specific objects ($0$, $1$, $\pi$, `[PAD]`, `[CLS]`)
- Variables: Placeholders for objects ($x$, $y$, $z$)
- Function symbols: Operations that produce objects ($+$, $\times$, $f(\cdot)$)
Predicates (relation symbols): Properties of or relationships between objects that evaluate to true or false.
- Unary: $\mathrm{Prime}(x)$, $\mathrm{Odd}(x)$, $\mathrm{Token}(x)$
- Binary: $x < y$, $x \in A$, $x = y$
- $n$-ary: $\mathrm{Between}(x, y, z)$ meaning $x$ is between $y$ and $z$
Quantifiers: The two pillars of FOL.
Universal Quantifier: $\forall$ ("for all")
"For every $x$, $P(x)$ holds." $\forall x\, P(x)$ is true when $P$ is true for every object in the domain.
Examples:
- $\forall x \in \mathbb{R}: x^2 \ge 0$ - "every real number squared is non-negative"
- $\forall t \in V: \mathrm{embed}(t) \in \mathbb{R}^d$ - "every token maps to a $d$-dimensional vector"
- $\forall i, j: j > i \Rightarrow M_{ij} = 0$ - causal mask definition
Existential Quantifier: $\exists$ ("there exists")
"There exists an $x$ such that $P(x)$ holds." $\exists x\, P(x)$ is true when $P$ is true for at least one object in the domain.
Examples:
- $\exists x \in \mathbb{R}: x^2 = 2$ - "$\sqrt{2}$ exists (in $\mathbb{R}$)"
- $\exists t \in V: P(t) > 0.5$ - "there's a token with probability above 0.5"
- $\exists \theta^*\, \forall \theta: L(\theta^*) \le L(\theta)$ - "a global minimum exists" (not always true!)
Unique Existential: $\exists!$ ("there exists exactly one")
"There is exactly one $x$ satisfying $P$": $\exists! x\, P(x) \equiv \exists x\, (P(x) \land \forall y\, (P(y) \to y = x))$. This is a defined symbol - it's syntactic sugar for the expression on the right.
6.3 Free and Bound Variables
A variable $x$ is bound in a formula if it appears within the scope of a quantifier $\forall x$ or $\exists x$. Otherwise, it is free.
- In $\forall x\, P(x, y)$: $x$ is bound, $y$ is free
- In $\exists x\, R(x, y, z)$: $x$ is bound, $y$ and $z$ are free
- In $\forall x\, \exists y\, P(x, y)$: both $x$ and $y$ are bound
A sentence (closed formula) has no free variables - it has a definite truth value. A formula with free variables is a predicate - it becomes a sentence only when specific values are substituted for the free variables or when they are quantified.
AI connection: In a loss function $L(\theta) = \sum_i \ell(x_i, \theta)$, the index $i$ is bound (by the sum, which binds $i$ just as a quantifier would), while $\theta$ is free - this is why we optimise over $\theta$.
6.4 Negating Quantifiers
Negation interacts with quantifiers via two fundamental rules:
$\lnot \forall x\, P(x) \equiv \exists x\, \lnot P(x)$ - "Not everything satisfies $P$" $\equiv$ "Something fails to satisfy $P$."
$\lnot \exists x\, P(x) \equiv \forall x\, \lnot P(x)$ - "Nothing satisfies $P$" $\equiv$ "Everything fails to satisfy $P$."
These are the quantifier analogues of De Morgan's laws. The pattern generalises:
- Negation swaps $\forall$ and $\exists$
- Negation swaps $\land$ and $\lor$
- Negation swaps $\top$ and $\bot$
AI example: "Not all tokens are attended to" $\equiv$ "There exists a token that is not attended to": $\lnot \forall j\, A(i, j) \equiv \exists j\, \lnot A(i, j)$
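Over a finite domain, Python's `all` and `any` are exactly $\forall$ and $\exists$, so the quantifier-negation rules can be checked directly. The `attended` list below is a made-up example row:

```python
# Hypothetical attention row: attended[j] is True iff token j is attended to.
attended = [True, True, False, True]

# not forall  ==  exists not
assert (not all(attended)) == any(not a for a in attended)
# not exists  ==  forall not
assert (not any(attended)) == all(not a for a in attended)
```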
6.5 Nested Quantifiers
Quantifiers can be nested. The order matters for mixed quantifiers: $\forall x\, \exists y\, P(x, y)$ is not the same as $\exists y\, \forall x\, P(x, y)$.
The first says: "for every $x$, there is a $y$ (possibly different for each $x$) such that $P(x, y)$." The second says: "there is a single $y$ that works for all $x$."
Comparison with examples:
| Statement | Formal | True? |
|---|---|---|
| "For every real, there's a larger one" | $\forall x \in \mathbb{R}\ \exists y \in \mathbb{R}: y > x$ | True |
| "There's a real larger than all reals" | $\exists y \in \mathbb{R}\ \forall x \in \mathbb{R}: y > x$ | False |
| "Every continuous function on $[a, b]$ attains its max" | $\forall f\ \exists x^* \in [a, b]\ \forall x \in [a, b]: f(x) \le f(x^*)$ | True (EVT) |
| "For every $\varepsilon > 0$, there exists $\delta > 0$ ..." | $\forall \varepsilon > 0\ \exists \delta > 0: \ldots$ | Continuity def |
Order rule: Swapping $\exists$ and $\forall$ is valid in only one direction: $\exists y\, \forall x\, P(x, y) \implies \forall x\, \exists y\, P(x, y)$
The converse is NOT generally true. A single $y$ that works universally is a strictly stronger claim than an individual $y_x$ for each $x$.
AI example - convergence of SGD:
- $\forall \varepsilon > 0\ \exists N\ \forall n > N: |L(\theta_n) - L^*| < \varepsilon$ means "the loss converges to $L^*$"
- Here $N$ depends on $\varepsilon$ - this is the $\forall\exists$ pattern
6.6 Translating Between English and FOL
Translating natural language to FOL is a critical skill for formal verification, knowledge representation, and understanding what mathematical theorems actually say.
Systematic approach:
- Identify the domain (what are the objects?)
- Identify the predicates (what properties/relations?)
- Identify the quantifier structure ($\forall$, $\exists$, their order)
- Write the formal expression
- Check by reading it back in English
Practice translations:
| English | FOL |
|---|---|
| "All vectors in $S$ have unit norm" | $\forall v \in S: \lVert v \rVert = 1$ |
| "Some weight is negative" | $\exists w \in W: w < 0$ |
| "No gradient is zero" | $\lnot \exists i: g_i = 0$, or equivalently $\forall i: g_i \neq 0$ |
| "If all gradients vanish, we're at a critical point" | $(\forall i: g_i = 0) \to \mathrm{Critical}(\theta)$ |
| "For every layer, there exists a neuron that fires" | $\forall \ell\ \exists n: \mathrm{Fires}(n, \ell)$ |
| "There's a learning rate that works for all tasks" | $\exists \eta\ \forall t: \mathrm{Works}(\eta, t)$ (probably false!) |
6.7 FOL in AI Contexts
Knowledge graphs store facts as FOL ground atoms, e.g. $\mathrm{trainedBy}(m, o)$ for specific entities $m$ and $o$. Rules are FOL implications, e.g. $\forall x, y: \mathrm{trainedBy}(x, y) \to \mathrm{Organisation}(y)$.
Description logics (OWL, used in ontologies) are decidable fragments of FOL. They balance expressiveness with computational tractability - reasoning in full FOL is undecidable.
Program specification: Hoare logic uses FOL to specify pre/post-conditions: $\{P\}\ C\ \{Q\}$ means "if precondition $P$ holds before executing program $C$, postcondition $Q$ holds after."
LLM reasoning: When an LLM performs chain-of-thought reasoning, it is (approximately) performing FOL inference - applying modus ponens, instantiating universals, existential witnesses. The fidelity of this approximation is an active research question.
7. Proof Techniques
This chapter only needs a working preview of proof methods so later sections can state theorems precisely and you can follow the logic of an argument without being surprised by the structure. The full treatment lives in Proof Techniques, where each method is developed in much more depth.
For now, the right goal is recognition:
- What kind of claim is being proved?
- What assumptions are being used?
- Why is a specific proof strategy a natural fit?
7.1 What is a Proof?
A proof is a finite chain of justified steps from assumptions, definitions, axioms, and previously established results to a conclusion. In this section, sets and logic give us the vocabulary; proofs tell us how that vocabulary is used to certify statements rather than merely suggest them.
That distinction matters in AI. A numerical experiment may suggest that an optimizer converges or that a masking rule preserves causality, but a proof tells you exactly under which assumptions the conclusion must hold.
7.2 Direct Proof
In a direct proof, you start from the hypothesis and push forward to the conclusion. This is the default style for statements of the form "$P \Rightarrow Q$" when the path from premises to result is transparent.
Example: to show that the sum of two even integers is even, write them as $2m$ and $2n$, add them to get $2(m + n)$, and observe the result is again divisible by 2. This style mirrors many derivations in ML, where you begin with model assumptions and algebraically derive a bound or identity.
7.3 Proof by Contrapositive
Sometimes the forward direction is awkward, but the logically equivalent contrapositive $\lnot Q \Rightarrow \lnot P$ is easier to prove. That is proof by contrapositive, and it is why the implication equivalence in 5.5 matters operationally, not just symbolically.
Classic example: to prove "if $n^2$ is even, then $n$ is even," it is simpler to show "if $n$ is odd, then $n^2$ is odd." In theory-heavy AI writing, contraposition shows up in impossibility and lower-bound arguments where failure of the conclusion exposes useful structure in the assumptions.
7.4 Proof by Contradiction
In proof by contradiction, you assume the statement is false and derive an impossibility. This is especially common when proving that some object cannot exist, that a property must hold universally, or that a certain finite description is impossible.
Famous examples include proving that $\sqrt{2}$ is irrational and that there are infinitely many primes. In AI-flavored mathematics, contradiction often appears when ruling out pathological counterexamples or showing that a supposed optimum or invariant would violate earlier assumptions.
7.5 Proof by Cases
When a domain naturally splits into a few exhaustive possibilities, you can prove a statement separately in each branch. This is proof by cases.
For example, the claim $x \le |x|$ is easiest to prove by splitting into the cases $x \ge 0$ and $x < 0$. The same structure appears in algorithms and model analysis whenever behavior differs across regimes such as active vs inactive ReLU units, interior vs boundary points, or short-context vs long-context cases.
7.6 Mathematical Induction
Induction proves statements indexed by the natural numbers. You establish a base case, then show that if the claim holds at step $n$, it also holds at step $n + 1$. Once both parts are in place, the result propagates to all later indices.
This matters for AI because many objects are recursive or sequential: token prefixes, layers in a stack, rollout horizons, and sequence length arguments are all natural induction targets. We will use the full machinery in the dedicated proof section when sequence-model reasoning becomes more central.
7.7 Existence and Uniqueness Proofs
An existence proof shows that at least one object with the desired property exists. A uniqueness proof shows there can be at most one. Together, they establish that there is exactly one such object.
This distinction is everywhere in optimization and model theory. "A minimizer exists" and "the minimizer is unique" are very different statements, with different consequences for learning dynamics and interpretation.
7.8 Proof by Construction
A constructive proof does more than assert existence: it explicitly builds the object. This is especially valuable in computer science and AI because a constructive argument often doubles as an algorithm.
For instance, if a theorem says there exists a network, partition, mask, or encoding with certain properties, a constructive proof tells you how to produce one. That is why constructive arguments are often more actionable than purely existential ones.
7.9 Why This Is Only a Preview
At this stage, proof techniques should feel like a toolbox you can recognize, not yet one you have fully mastered. The important bridge is:
- logic gives the form of valid inference
- sets provide the objects and predicates we talk about
- proofs combine both into reliable mathematical arguments
When you are ready to go deeper, continue with Proof Techniques, where these methods are expanded with full templates, worked examples, and more explicit AI applications.
8. Axiomatic Set Theory (ZFC)
Naive set theory (2) is intuitive but broken - Russell's Paradox (2.5) shows that unrestricted set formation leads to contradictions. ZFC (Zermelo-Fraenkel axioms with the Axiom of Choice) is the standard rigorous foundation for all of modern mathematics. Every mathematical object - numbers, functions, spaces, probability measures, neural networks - can be built from ZFC sets.
8.1 Why Axioms?
Axioms serve two purposes:
- Prevent paradoxes: By restricting which sets exist, ZFC avoids Russell's Paradox and similar contradictions.
- Ensure consistency: All mathematical reasoning ultimately reduces to statements provable from these axioms.
The axioms tell you exactly which sets you are allowed to construct and what operations you may perform on them. Nothing more, nothing less.
8.2 The ZFC Axioms
There are 9 axioms (some formulations give 8 or 10; the exact count depends on grouping). Here they are, with intuition and notation.
Axiom 1: Extensionality
Two sets are equal if and only if they have the same elements. A set is determined entirely by its membership - there is no "hidden structure."
Consequence: $\{a, b\} = \{b, a\} = \{a, a, b\}$. Order and repetition are irrelevant.
Axiom 2: Empty Set
There exists a set $\emptyset$ with no elements. By Extensionality, this set is unique - there is only one empty set.
Axiom 3: Pairing
For any two objects $a$ and $b$, the set $\{a, b\}$ exists. This lets us form pairs, which underpins ordered pairs and Cartesian products.
Axiom 4: Union
For any collection of sets $\mathcal{F}$, there exists a set $\bigcup \mathcal{F}$ containing exactly the elements that belong to at least one set in $\mathcal{F}$.
Axiom 5: Power Set
For any set $A$, the power set $\mathcal{P}(A)$ exists. This is the set of all subsets of $A$, and $|\mathcal{P}(A)| = 2^{|A|}$ for finite $A$.
Axiom 6: Separation (Specification / Comprehension)
For any set $A$ and any property $P$, the set $\{x \in A : P(x)\}$ exists. Crucially, you can only separate out elements from an existing set - you cannot form $\{x : P(x)\}$ without a bounding set $A$.
This is what blocks Russell's Paradox. The "set of all sets that don't contain themselves" would require $R = \{x : x \notin x\}$ - but without a bounding set $A$, Separation doesn't let you form it.
Axiom 7: Infinity
There exists an infinite set - specifically, a set containing $\emptyset$, $\{\emptyset\}$, $\{\emptyset, \{\emptyset\}\}$, and so on forever. This axiom guarantees the existence of the natural numbers (and prevents mathematics from being stuck in a finite world).
Axiom 8: Replacement
If $F$ is a definable function and $A$ is a set, then the image $F[A] = \{F(x) : x \in A\}$ is a set. This is like Separation but more powerful - it lets you apply a function to every element and collect the results.
AI analogy: Replacement is `map`. Given a set (dataset) and a function (model), the outputs form a set (predictions).
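The map analogy fits in one line of Python; the "dataset" and "model" below are placeholders chosen for illustration:

```python
dataset = {1, 2, 3, 4}                  # a set A
model = lambda x: x * x % 5             # a definable function F (placeholder)

predictions = {model(x) for x in dataset}   # the image F[A] is again a set
assert predictions == {1, 4}
```

Note that the image can be smaller than the original set: distinct inputs may collide, but Replacement still guarantees the collection of outputs is a set.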
Axiom 9: Foundation (Regularity)
Every non-empty set $A$ has an element disjoint from $A$. This prevents circular membership chains ($a \in b \in a$) and self-containing sets ($a \in a$). Sets are "well-founded" - they are built from the bottom up (from $\emptyset$).
8.3 The Axiom of Choice (AC)
For any collection of non-empty sets, there exists a choice function that selects exactly one element from each set. This seems obvious but has deep consequences.
Why it's controversial:
- It's non-constructive: it asserts the existence of a choice function without building one
- It implies paradoxical results: Banach-Tarski paradox (decompose a sphere into 5 pieces and reassemble into two spheres of the same size)
- It implies the well-ordering theorem: every set can be well-ordered (including $\mathbb{R}$, despite having no known explicit well-ordering)
Why it's accepted:
- Without AC, many theorems fail (every vector space has a basis; Tychonoff's theorem; Hahn-Banach theorem)
- ZFC (with AC) has not been shown inconsistent
- AC is independent of ZF: both ZF+AC and ZF+$\lnot$AC are consistent (if ZF is consistent), by Gödel (1938) and Cohen (1963)
AI applications of AC: Most machine learning theory implicitly uses AC:
- "There exists a minimiser of the loss function" - may require AC for non-compact spaces
- "Every vector space has a basis" - needs AC for infinite-dimensional spaces (function spaces in kernel methods)
- In practice, everything is finite-dimensional, so AC is not needed for actual computations
8.4 Constructing Numbers from Sets
One of the most remarkable achievements of axiomatic set theory is building all number systems from pure sets:
Natural numbers (von Neumann construction): $0 = \emptyset$, $1 = \{0\}$, $2 = \{0, 1\}$, $3 = \{0, 1, 2\}$, ...
The rule: $n + 1 = n \cup \{n\}$. Each natural number is the set of all smaller natural numbers: $n = \{0, 1, \ldots, n-1\}$, so $|n| = n$.
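The von Neumann construction can be carried out literally with Python `frozenset`s - a sketch under the obvious encoding $\emptyset \mapsto$ `frozenset()`:

```python
def von_neumann(n):
    """Build n as the set of all smaller naturals: n = {0, 1, ..., n-1}."""
    s = frozenset()                   # 0 is the empty set
    for _ in range(n):
        s = s | frozenset({s})        # successor: n + 1 = n ∪ {n}
    return s

zero, one, two, three = (von_neumann(k) for k in range(4))
assert one == frozenset({zero})
assert two == frozenset({zero, one})
assert len(three) == 3                # |n| = n
assert two in three and two <= three  # each natural is both element and subset
```

The last assertion shows the characteristic von Neumann property: every smaller natural is simultaneously an element and a subset of a larger one.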
Integers: $\mathbb{Z}$ is constructed as equivalence classes of pairs of natural numbers: $(a, b)$ represents $a - b$.
Rationals: $\mathbb{Q}$ is constructed as equivalence classes of pairs of integers: $(p, q)$ represents $p/q$, where $q \neq 0$.
Reals: $\mathbb{R}$ is constructed via Dedekind cuts or Cauchy sequences of rationals.
The takeaway: Every mathematical object - including the real numbers that neural network weights live in - is built from the empty set using ZFC axioms. This is why set theory is called the "foundation of mathematics."
8.5 Ordinals and Cardinals (Brief)
Ordinal numbers extend the natural numbers to describe the "length" of well-orderings: $0, 1, 2, \ldots, \omega, \omega + 1, \omega + 2, \ldots$
Here $\omega$ is the first infinite ordinal (the order type of $\mathbb{N}$). Ordinals are used to measure the "height" of sets in the cumulative hierarchy.
Cardinal numbers measure the "size" of sets. For finite sets, the cardinal and ordinal notions coincide ($|A| = n$ means $A$ has $n$ elements). For infinite sets, cardinality is defined via bijections - two sets have the same cardinality iff a bijection exists between them. More on this in 10.
9. Logic in Computation and AI
This section bridges the pure theory of 5-8 to the concrete computational world. The key idea: logic is not just a tool for proofs - it is the foundation of computation itself.
9.1 Boolean Circuits
Every digital computer is built from logic gates - physical implementations of the logical connectives:
| Gate | Logic | Symbol | Output |
|---|---|---|---|
| AND | $p \land q$ | & | 1 iff both inputs are 1 |
| OR | $p \lor q$ | \| | 1 iff at least one input is 1 |
| NOT | $\lnot p$ | ~ | Flips the bit |
| NAND | $\lnot(p \land q)$ | - | 0 iff both inputs are 1 |
| XOR | $p \oplus q$ | ^ | 1 iff inputs differ |
Functional completeness: $\{\land, \lor, \lnot\}$ can compute any Boolean function. Even more remarkably, NAND alone is functionally complete - every Boolean function can be built from NAND gates.
From gates to GPUs: GPU cores perform massively parallel Boolean and arithmetic operations. The floating-point units, comparison circuits, and memory addressing logic in a GPU are all Boolean circuits. When you run torch.matmul(A, B), underneath it is billions of transistors implementing Boolean functions.
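NAND-completeness can be demonstrated by building the other gates from NAND alone and checking every input combination; the gate definitions here are our own:

```python
NAND = lambda p, q: not (p and q)

# Every other gate, built from NAND alone.
NOT = lambda p: NAND(p, p)
AND = lambda p, q: NOT(NAND(p, q))
OR  = lambda p, q: NAND(NOT(p), NOT(q))
XOR = lambda p, q: AND(OR(p, q), NAND(p, q))

for p in (True, False):
    assert NOT(p) == (not p)
    for q in (True, False):
        assert AND(p, q) == (p and q)
        assert OR(p, q) == (p or q)
        assert XOR(p, q) == (p != q)
```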
9.2 SAT Solving
The Boolean satisfiability problem (SAT): given a propositional formula in CNF, is there a truth assignment that makes it true?
DPLL Algorithm (Davis-Putnam-Logemann-Loveland, 1962):
- Unit propagation: If every literal in a clause but one is already false, the remaining literal must be set true
- Pure literal elimination: If a variable appears with only one polarity, set it to satisfy those clauses
- Branching: Pick an unset variable, try both truth values, recurse
- Backtracking: If a contradiction is found, undo the last choice
CDCL (Conflict-Driven Clause Learning): Modern SAT solvers extend DPLL with learned clauses - when a conflict occurs, they analyse why and add a clause preventing the same conflict pattern. This is a form of learning from mistakes.
SAT in AI:
- Constraint satisfaction: Many combinatorial AI problems reduce to SAT
- Planning: Classical AI planning (SATPLAN) encodes action sequences as SAT instances
- Formal verification: Proving neural network properties (robustness, fairness) reduces to SAT/SMT
- Hardware design: Chip verification uses SAT solvers extensively
- Structured generation: Ensuring LLM output satisfies a grammar can be framed as constraint satisfaction
9.3 Resolution and Automated Reasoning
Resolution is a single inference rule that is complete for refuting CNF formulas: from $(p \lor C)$ and $(\lnot p \lor D)$, derive $(C \lor D)$.
If one clause contains $p$ and another contains $\lnot p$, their resolvent combines the remaining literals.
Completeness (Robinson, 1965): A set of clauses is unsatisfiable if and only if the empty clause ($\square$) can be derived by repeated resolution.
This is the backbone of Prolog and logic programming - computation as proof search.
9.4 Type Theory and Programming
Type systems are deductive systems closely related to logic. The Curry-Howard correspondence establishes a deep isomorphism:
| Logic | Type Theory | Programming |
|---|---|---|
| Proposition | Type | Specification |
| Proof | Term (program) | Implementation |
| $P \to Q$ (implication) | Function type | Function from $P$ to $Q$ |
| $P \land Q$ (conjunction) | Product type | Pair/tuple |
| $P \lor Q$ (disjunction) | Sum type | Tagged union / either |
| $\forall$ | Dependent function | Generic/polymorphic function |
| $\exists$ | Dependent pair | Record with witness |
| $\bot$ (false) | Empty type | Unreachable code |
| Proof of $P$ | Value of type $P$ | Running program |
Consequence: A valid type-checked program is a proof of its type signature. Type errors are logical contradictions.
AI connection - type-safe tensors: Projects like `jaxtyping` and `beartype` enforce tensor shape constraints at the type level: `Float[Array, "batch seq d_model"]`. A shape mismatch is caught as a type error - i.e., a logical contradiction between what the function promises and what it receives.
9.5 Modal Logic (Brief)
Modal logic extends propositional logic with necessity ($\Box$) and possibility ($\Diamond$):
- $\Box p$: "$p$ is necessarily true" (true in all possible worlds)
- $\Diamond p$: "$p$ is possibly true" (true in some possible world)
In AI:
- Epistemic logic: $K_a p$ means "agent $a$ knows $p$." Used in multi-agent systems.
- Temporal logic: $\Box p$ means "$p$ always holds" (safety); $\Diamond p$ means "$p$ eventually holds" (liveness). Used in model checking and formal verification.
- Deontic logic: What an AI system "should" do - the logic of obligations and permissions, relevant to AI alignment.
9.6 Description Logics and Knowledge Representation
Description logics (DLs) are decidable fragments of FOL designed for knowledge representation:
- Concepts (unary predicates): `Person`, `Model`, `LargeLanguageModel`
- Roles (binary predicates): `trainedBy`, `hasParameter`, `outputs`
- Constructors: $\sqcap$ (intersection), $\sqcup$ (union), $\lnot$ (complement), $\forall R.C$ (all $R$-successors are in $C$), $\exists R.C$ (some $R$-successor is in $C$)
DLs underpin the Web Ontology Language (OWL), used in knowledge graphs, biomedical ontologies, and semantic web. The reasoning complexity depends on which constructors are included - a careful balance between expressiveness and decidability.
9.7 LLM Reasoning as Approximate Logic
When an LLM performs chain-of-thought (CoT) reasoning, it is (approximately) performing logical inference:
| Logical step | LLM behaviour | Fidelity |
|---|---|---|
| Modus ponens ($p,\ p \to q \vdash q$) | "Since $p$, and $p$ implies $q$, therefore $q$" | High for simple cases |
| Universal instantiation ($\forall x\, P(x) \vdash P(c)$) | "Since all X are Y, this X is Y" | Moderate |
| Contradiction detection | "But this contradicts..." | Fragile |
| Multi-step chains | Chaining 5+ inference steps | Degrades rapidly |
Open research questions:
- Can LLMs perform reliable deductive reasoning, or only pattern-matched approximation?
- Can we formally verify LLM reasoning chains against logical rules?
- Can symbolic logic modules be integrated with neural networks (neuro-symbolic AI)?
- How does the "chain-of-thought" prompt format affect logical correctness?
Current evidence suggests LLMs are surprisingly good at local logical steps but struggle with long chains and quantifier reasoning (especially nested patterns). This is precisely the gap that formal methods aim to fill.
10. Cardinality and Infinite Sets
Cardinality answers the question: how big is a set? For finite sets the answer is obvious - just count. For infinite sets, the answer is one of the most profound discoveries in mathematics: not all infinities are the same size.
10.1 Finite Cardinality
If $A$ is finite, $|A| = n$ means there exists a bijection $f: A \to \{1, 2, \ldots, n\}$. Counting is bijection.
Key counting facts for AI:
| Formula | Name | AI Example |
|---|---|---|
| $\lvert A \cup B \rvert = \lvert A \rvert + \lvert B \rvert - \lvert A \cap B \rvert$ | Inclusion-exclusion | Deduplicating token sets |
| $\lvert A \times B \rvert = \lvert A \rvert \cdot \lvert B \rvert$ | Product rule | Size of joint state space |
| $\lvert \mathcal{P}(A) \rvert = 2^{\lvert A \rvert}$ | Power set | Number of feature subsets |
| $\lvert B^A \rvert = \lvert B \rvert^{\lvert A \rvert}$ | Exponential | Number of functions from $A$ to $B$ |
| $n!$ | Factorial | Permutation count |
| $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ | Binomial | Ways to choose $k$ items from $n$ |
10.2 Comparing Infinite Sets - Bijections
Definition: Two sets $A$ and $B$ have the same cardinality ($|A| = |B|$) if and only if there exists a bijection $f: A \to B$.
This is the only sensible definition for infinite sets - we cannot "count" them, but we can pair their elements one-to-one.
Example - $|\mathbb{Z}| = |\mathbb{N}|$:
This seems wrong - $\mathbb{Z}$ "has twice as many elements." But pair them: $0 \mapsto 0,\ 1 \mapsto -1,\ 2 \mapsto 1,\ 3 \mapsto -2,\ 4 \mapsto 2, \ldots$
Formally: $f(n) = \begin{cases} n/2 & \text{if } n \text{ is even} \\ -(n+1)/2 & \text{if } n \text{ is odd} \end{cases}$
This is a bijection, so $|\mathbb{Z}| = |\mathbb{N}|$. Infinity doesn't work like finite numbers.
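The pairing of naturals with integers is easy to check computationally - a direct transcription of the even/odd case split:

```python
def f(n):
    """Bijection from N to Z: 0, 1, 2, 3, 4, ... -> 0, -1, 1, -2, 2, ..."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

# The first naturals hit every integer in a symmetric range exactly once.
assert [f(n) for n in range(5)] == [0, -1, 1, -2, 2]
assert sorted(f(n) for n in range(11)) == list(range(-5, 6))
```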
10.3 Countable Sets
A set $A$ is countably infinite if $|A| = |\mathbb{N}|$ - i.e., there exists a bijection $f: \mathbb{N} \to A$. A set is countable if it is finite or countably infinite.
Equivalently, $A$ is countable iff its elements can be listed as a sequence $a_0, a_1, a_2, \ldots$ (possibly terminating).
Countable sets include:
| Set | Why countable |
|---|---|
| $\mathbb{N}$ | By definition |
| $\mathbb{Z}$ | Bijection shown above |
| $\mathbb{Q}$ | Cantor's zig-zag (diagonal enumeration of a table) |
| $\mathbb{N} \times \mathbb{N}$ | Cantor pairing function: $\pi(m, n) = \frac{(m+n)(m+n+1)}{2} + n$ |
| All finite strings over a finite alphabet | $\Sigma^* = \bigcup_{n \ge 0} \Sigma^n$, a countable union of finite sets |
| All programs (source code) | Programs are finite strings |
| All rational polynomials | Countable coefficients, finite degree |
| Algebraic numbers | Roots of polynomials with integer coefficients |
Key theorem: A countable union of countable sets is countable: if each $A_n$ is countable, then $\bigcup_{n \in \mathbb{N}} A_n$ is countable.
AI consequence: The set of all possible tokenised prompts (finite strings over a finite vocabulary $V$) is countable. There are only countably many ($\aleph_0$) possible inputs to an LLM.
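The Cantor pairing function $\pi(m, n) = \frac{(m+n)(m+n+1)}{2} + n$ is easy to implement and to test for injectivity on a finite grid:

```python
def cantor_pair(m, n):
    """Bijection N x N -> N: pi(m, n) = (m + n)(m + n + 1)/2 + n."""
    return (m + n) * (m + n + 1) // 2 + n

assert cantor_pair(0, 0) == 0
assert cantor_pair(1, 0) == 1 and cantor_pair(0, 1) == 2
# Injective on a finite grid: 400 pairs map to 400 distinct codes.
assert len({cantor_pair(m, n) for m in range(20) for n in range(20)}) == 400
```

The enumeration walks the grid diagonal by diagonal, which is the same idea behind the zig-zag argument for $\mathbb{Q}$.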
10.4 Uncountable Sets - Cantor's Diagonal Argument
Theorem (Cantor, 1891). $\mathbb{R}$ is uncountable - there is no bijection $f: \mathbb{N} \to \mathbb{R}$.
Proof (diagonal argument). We show that even the interval $(0, 1)$ is uncountable. Assume, for contradiction, that $(0, 1)$ is countable. Then we can list all real numbers in $(0, 1)$ by their decimal expansions: $r_1 = 0.d_{11}d_{12}d_{13}\ldots$, $r_2 = 0.d_{21}d_{22}d_{23}\ldots$, $r_3 = 0.d_{31}d_{32}d_{33}\ldots$, and so on.
Construct $x = 0.e_1 e_2 e_3 \ldots$ where $e_n$ is any digit $\neq d_{nn}$ (and $e_n \notin \{0, 9\}$ to avoid alternative decimal representations).
Then $x \in (0, 1)$ but $x \neq r_n$ for every $n$ (they differ at the $n$-th digit). This contradicts the assumption that all reals in $(0, 1)$ were listed.
The symbol $\aleph_0$ ("aleph-null") denotes the first infinite cardinal, $|\mathbb{N}|$. The set $\mathbb{R}$ has cardinality $2^{\aleph_0} = \mathfrak{c}$ (the cardinality of the continuum), which is strictly larger.
10.5 Cantor's Theorem (Generalised)
Theorem. For any set $A$: $|A| < |\mathcal{P}(A)|$.
No set is as large as its power set. This holds for every set, finite or infinite.
Proof sketch. The map $x \mapsto \{x\}$ injects $A$ into $\mathcal{P}(A)$, so $|A| \le |\mathcal{P}(A)|$. To show strict inequality, suppose $f: A \to \mathcal{P}(A)$ is a bijection. Let $D = \{x \in A : x \notin f(x)\}$. Then $D \in \mathcal{P}(A)$, so $D = f(d)$ for some $d \in A$ (by surjectivity). Is $d \in D$?
- If $d \in D$, then $d \notin f(d) = D$ - contradiction.
- If $d \notin D$, then $d \in f(d) = D$ - contradiction.
Either way, contradiction. So no such bijection exists.
Consequence - the infinite tower: $|\mathbb{N}| < |\mathcal{P}(\mathbb{N})| < |\mathcal{P}(\mathcal{P}(\mathbb{N}))| < \cdots$
There is no largest infinity - the hierarchy of infinities never terminates.
10.6 The Continuum Hypothesis
Claim (Cantor, 1878): There is no set $S$ with $\aleph_0 < |S| < 2^{\aleph_0}$.
In other words, there is no "intermediate" infinity between the countable and the continuum.
Resolution (Gödel 1940, Cohen 1963): The Continuum Hypothesis is independent of ZFC - it can neither be proved nor disproved from the standard axioms. Both ZFC + CH and ZFC + $\lnot$CH are consistent (if ZFC is).
This is one of the most surprising results in mathematics: there are meaningful mathematical questions that our axioms simply cannot answer.
10.7 Cardinality and AI
| Object | Cardinality | Consequence |
|---|---|---|
| Finite vocabulary $V$ | $\lvert V \rvert$ (finite, e.g., 32,000) | Discrete, tractable |
| All prompts $V^*$ | $\aleph_0$ (countable) | Enumerable in principle |
| All real-valued weight vectors $\mathbb{R}^n$ | $\mathfrak{c} = 2^{\aleph_0}$ | Uncountable - can't enumerate models |
| All functions $\mathbb{R} \to \mathbb{R}$ | $2^{\mathfrak{c}}$ | Vastly larger than the set of computable functions |
| All computable functions | $\aleph_0$ | Programs are countable strings |
The gap: There are uncountably many functions but only countably many programs (and neural network architectures, at any fixed precision). Most functions are not computable - and most mathematical objects are not representable by any model. This is a fundamental limitation.
11. Logic and Set Theory in Machine Learning
This section shows how the abstract tools of 2-10 appear concretely in modern ML.
11.1 Formal Languages and Automata
Formal language theory is set theory applied to strings:
- An alphabet $\Sigma$ is a finite set of symbols
- A string over $\Sigma$ is a finite sequence of symbols from $\Sigma$
- A language $L$ is a set of strings: $L \subseteq \Sigma^*$
Connection to AI:
| Concept | ML Instantiation |
|---|---|
| Alphabet | Vocabulary (set of tokens) |
| String | Token sequence (prompt or completion) |
| Language | Set of valid outputs |
| Grammar | Rules defining valid outputs (JSON schema, Python syntax) |
| Automaton (DFA/NFA) | Constrained decoding state machine |
| Regular language | Languages recognisable by finite automata |
| Context-free language | Languages parseable by pushdown automata |
Structured generation constrains an LLM to produce only strings from a target language $L$. At each decoding step $t$, the valid next tokens form a set $A_t \subseteq V$, computed by the automaton/parser. Tokens outside $A_t$ are masked (logits set to $-\infty$). This is Boolean logic applied to decoding.
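The masking step can be sketched in a few lines. This is a simplified illustration, not a real decoder: the vocabulary, logits, and allowed set are made up, and a real system would compute the allowed set from a grammar or automaton state.

```python
import math

def mask_logits(logits, allowed):
    """Set the logit of every token outside `allowed` to -inf,
    so softmax assigns those tokens probability exactly zero."""
    return [x if i in allowed else -math.inf
            for i, x in enumerate(logits)]

# Hypothetical 5-token vocabulary; suppose a parser says only
# token ids {1, 3} are legal continuations at this step.
logits = [2.0, 0.5, -1.0, 1.5, 0.0]
masked = mask_logits(logits, allowed={1, 3})
assert masked == [-math.inf, 0.5, -math.inf, 1.5, -math.inf]
```

The set intersection "model's preferences AND grammar's constraints" happens implicitly: softmax over the masked logits renormalises probability over $A_t$ alone.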
11.2 Sets in Probability
The entire framework of probability theory is built on set theory (Kolmogorov axioms):
| Probability concept | Set-theoretic object |
|---|---|
| Sample space $\Omega$ | Set of all possible outcomes |
| Event $A$ | Subset of $\Omega$ ($A \subseteq \Omega$) |
| Probability of $A$ or $B$ | $P(A \cup B)$ |
| Probability of $A$ and $B$ | $P(A \cap B)$ |
| Probability of not $A$ | $P(A^c)$; equals $1 - P(A)$ |
| $\sigma$-algebra $\mathcal{F}$ | Collection of measurable subsets of $\Omega$ |
| Independence: $P(A \cap B) = P(A)\,P(B)$ | Multiplicativity under intersection |
Measure theory (the rigorous foundation of probability) relies heavily on ZFC: $\sigma$-algebras are sets of sets, measures are set functions, and the Axiom of Choice is needed for results like the Vitali set (a non-measurable subset of $\mathbb{R}$).
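For a finite sample space, the set-theoretic rules above can be verified directly with Python sets. A minimal sketch using a uniform distribution on a die roll (the events chosen are arbitrary examples):

```python
from fractions import Fraction

# Uniform probability on a finite sample space: P(A) = |A| / |Omega|.
omega = frozenset(range(1, 7))            # one roll of a fair die

def P(A):
    return Fraction(len(A), len(omega))

A = frozenset({2, 4, 6})                  # event "even"
B = frozenset({4, 5, 6})                  # event "at least 4"

# Probability rules are set-operation rules:
assert P(A | B) == P(A) + P(B) - P(A & B)  # inclusion-exclusion (union)
assert P(omega - A) == 1 - P(A)            # complement rule
```

Using `Fraction` keeps the arithmetic exact, so the set identities hold with `==` rather than approximately.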
11.3 Sets in Linear Algebra
| Linear algebra concept | Set structure |
|---|---|
| Vector space $V$ | Set with addition and scalar multiplication |
| Subspace $W \subseteq V$ | Subset that is itself a vector space |
| Span | $\operatorname{span}(S) = \{\text{all linear combinations of } S\}$ - a set |
| Basis | Linearly independent spanning set |
| Eigenspace | $E_\lambda = \{v : Av = \lambda v\}$ - defined by set-builder notation |
| Column space | $\operatorname{col}(A) = \{Ax : x \in \mathbb{R}^n\}$ - image of $A$ (a set) |
| Null space | $\operatorname{null}(A) = \{x : Ax = 0\}$ - kernel of $A$ (a set) |
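Set-builder notation translates directly into a set comprehension. The sketch below (matrix and grid are illustrative; a real computation would use linear algebra, not enumeration) samples the null space $\{x : Ax = 0\}$ of a rank-1 matrix over a small integer grid:

```python
def matvec(A, x):
    """Multiply matrix A (a list of rows) by vector x."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

A = [[1, 2],
     [2, 4]]                 # rank 1, so null(A) is a line through 0

# Set-builder in code: {x : Ax = 0}, restricted to a finite grid.
grid = range(-3, 4)
null_vectors = {(x1, x2) for x1 in grid for x2 in grid
                if matvec(A, (x1, x2)) == (0, 0)}

assert (0, 0) in null_vectors                       # subspaces contain 0
assert null_vectors == {(-2, 1), (0, 0), (2, -1)}   # multiples of (2, -1)
```

The comprehension syntax `{x ... if condition}` is Python's direct analogue of $\{x : P(x)\}$.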
11.4 Logic in Training
Loss functions encode logical conditions:
| Condition | Logic | Loss |
|---|---|---|
| Output $=$ target | $f(x) = y$ | MSE: $(f(x) - y)^2$ |
| Output matches distribution | $p_\theta \approx p_{\text{data}}$ | KL divergence, cross-entropy |
| Margin condition | $y \cdot f(x) \ge 1$ | Hinge loss: $\max(0,\, 1 - y\,f(x))$ |
| Constraint satisfaction | $g(x) \le 0$ | Lagrangian: $L(x) + \lambda\, g(x)$ |
Training as logic: gradient descent searches for $\theta^*$ satisfying $\nabla L(\theta^*) = 0$ - this is an existential claim: $\exists\, \theta^* : \nabla L(\theta^*) = 0$.
Early stopping as logic: "Stop when the validation loss has not decreased for $k$ epochs" is a quantified condition: stop at epoch $t$ iff $\forall\, t' \in \{t-k+1, \ldots, t\} : L_{\text{val}}(t') \ge \min_{s \le t-k} L_{\text{val}}(s)$.
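The quantified condition maps directly onto Python's `all` (a finite universal quantifier) and `min`. A minimal sketch, assuming losses are recorded once per epoch:

```python
def should_stop(val_losses, patience):
    """Stop when NO loss in the last `patience` epochs improved on the
    best loss seen before that window. `all(...)` is the finite version
    of the universal quantifier in the stopping condition."""
    if len(val_losses) <= patience:
        return False                       # not enough history yet
    window = val_losses[-patience:]        # the last k epochs
    best_before = min(val_losses[:-patience])
    return all(loss >= best_before for loss in window)

assert should_stop([1.0, 0.8, 0.9, 0.85, 0.9], patience=3)       # stuck
assert not should_stop([1.0, 0.8, 0.9, 0.7, 0.9], patience=3)    # improved
```

Negating the rule ("keep training") flips the quantifier: continue iff $\exists\, t'$ in the window with a new best loss.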
11.5 Sets in Model Architecture
Transformer attention as set operations:
The key insight of attention is that it operates on a set of key-value pairs (modulo positional encoding). Self-attention is permutation-equivariant: it treats the input as a set, not a sequence.
- Query: "What am I looking for?"
- Key set: $\{k_1, \ldots, k_n\}$ - the set of keys to match against
- Value set: $\{v_1, \ldots, v_n\}$ - the set of values to aggregate
- Attention mask: $M \subseteq \{1, \ldots, n\} \times \{1, \ldots, n\}$ - the set of allowed (query, key) pairs
Combining masks:
- Causal AND local: $M = M_{\text{causal}} \cap M_{\text{local}}$ - intersection (Boolean AND)
- Multi-head union: $M = \bigcup_h M_h$ - each head sees a different subset of positions
- Padding mask: $M_{\text{pad}} = \{(i, j) : \text{position } j \text{ is not padding}\}$
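Mask combination really is elementwise Boolean logic. A minimal sketch with plain Python lists (a real implementation would use tensor operations; the window size 3 is an arbitrary example):

```python
n = 6  # sequence length; mask[i][j] = True means query i may see key j

causal = [[j <= i for j in range(n)] for i in range(n)]
local  = [[abs(i - j) < 3 for j in range(n)] for i in range(n)]

# Intersection of the two relations = elementwise Boolean AND.
combined = [[causal[i][j] and local[i][j] for j in range(n)]
            for i in range(n)]

# Query 5 sees exactly keys {3, 4, 5}: causal (j <= 5) AND local (|5-j| < 3).
assert [j for j in range(n) if combined[5][j]] == [3, 4, 5]
```

Swapping `and` for `or` gives the union of the two relations instead - the same switch as between $\cap$ and $\cup$.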
11.6 Formal Verification of ML Systems
Logic provides tools to prove properties of ML systems rather than merely test them:
| Property | Formal statement | Method |
|---|---|---|
| Robustness | $\forall x' : \lVert x' - x\rVert \le \epsilon \Rightarrow f(x') = f(x)$ | SMT solving, MILP |
| Fairness | $P(\hat{y} = 1 \mid A = a) = P(\hat{y} = 1 \mid A = b)$ | Statistical testing + formal constraints |
| Safety | $\forall$ reachable states $s : s \notin \text{Unsafe}$ | Reachability analysis |
| Monotonicity | $x_1 \le x_2 \Rightarrow f(x_1) \le f(x_2)$ | Lattice-based verification |
Certified defences use logical proofs (often via abstract interpretation or interval arithmetic) to guarantee that no adversarial perturbation within an $\epsilon$-ball can change the classification. This is the gold standard for ML safety - moving from empirical robustness to provable robustness.
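The interval-arithmetic idea can be shown in one dimension. This toy sketch (the "classifier" $2x - 1 > 0$ and the numbers are invented for illustration) propagates an input interval through a linear map and certifies that the sign of the output cannot flip:

```python
def interval_linear(w, b, lo, hi):
    """Propagate the input interval [lo, hi] through y = w*x + b.
    The sign of w decides which endpoint yields the min and max."""
    y1, y2 = w * lo + b, w * hi + b
    return min(y1, y2), max(y1, y2)

# Toy 1-D classifier: predict class 1 iff 2*x - 1 > 0.
x, eps = 1.0, 0.3
lo, hi = interval_linear(2.0, -1.0, x - eps, x + eps)

# Output interval [0.4, 1.6] is strictly positive, so NO perturbation
# within the eps-ball around x can change the predicted class.
certified = lo > 0
assert certified
```

Real certifiers (interval bound propagation, abstract interpretation) apply the same reasoning layer by layer through a deep network, at the cost of looser bounds.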
12. Advanced Topics
12.1 Gödel's Incompleteness Theorems
The most profound results in mathematical logic, with deep implications for AI.
First Incompleteness Theorem (Gödel, 1931): Any consistent formal system $S$ capable of expressing basic arithmetic contains a statement $G$ such that:
- $G$ is true (in the standard model of arithmetic)
- $G$ is not provable in $S$
- $\neg G$ is not provable in $S$
In other words, there are true mathematical statements that cannot be proved. No axiom system - ZFC, or any extension - can be both consistent and complete for arithmetic.
Gödel's self-referential trick: Gödel constructs $G$ to say, essentially, "I am not provable in system $S$." If $G$ were provable, it would be false (since it says it's unprovable), contradicting consistency. If $\neg G$ were provable, then $S$ would prove that $G$ is provable (since $\neg G$ says exactly that), even though $G$ is not - again a contradiction. So neither $G$ nor $\neg G$ is provable.
Second Incompleteness Theorem: No consistent system $S$ capable of expressing arithmetic can prove its own consistency.
AI implications:
- No AI system (which is a formal system) can verify all true mathematical statements - there will always be truths beyond its reach
- The claim "this AI is guaranteed safe" cannot be proved within the AI's own framework (a version of the second theorem)
- Gödel's theorems do NOT mean mathematics is broken - they mean it is inexhaustible
12.2 The Halting Problem
Theorem (Turing, 1936). There is no algorithm $H$ that can determine, for every program $P$ and input $x$, whether $P$ halts on $x$.
Proof (by contradiction). Suppose $H(P, x)$ is a program that correctly returns "halts" or "loops" for every pair $(P, x)$. Define a new program $D$ that, on input $P$, runs $H(P, P)$ and does the opposite: if $H$ says "halts", $D$ loops forever; if $H$ says "loops", $D$ halts.
What does $D(D)$ do?
- If $H(D, D) = $ "halts" $\Rightarrow$ $D(D)$ loops forever. But $H$ said it halts. Contradiction.
- If $H(D, D) = $ "loops" $\Rightarrow$ $D(D)$ halts. But $H$ said it loops. Contradiction.
Either way, contradiction. So $H$ cannot exist.
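The construction can be written out literally in Python. This is a sketch of the argument, not working software: `halts` is the hypothetical oracle, and the point is that defining `D` in terms of it forces a contradiction.

```python
# Hypothetical: suppose `halts(program, arg)` were a total algorithm
# returning True ("halts") or False ("loops"). Turing's construction
# shows why no such function can exist.

def halts(program, arg):
    raise NotImplementedError("no total halting oracle can exist")

def D(program):
    """Do the opposite of whatever the oracle predicts for
    `program` run on its own source."""
    if halts(program, program):   # oracle says "halts"...
        while True:               # ...so loop forever
            pass
    return "halted"               # oracle says "loops" -> halt at once

# D(D) is the contradiction: whichever answer halts(D, D) gives,
# D(D) does the opposite, so the oracle is wrong either way.
```

Note how closely this mirrors Russell's set $\{x : x \notin x\}$ and Cantor's diagonal set: self-application plus negation.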
Connections to AI:
- This is the computational analogue of Russell's Paradox - self-reference plus negation creates paradox
- You cannot write a "universal bug detector" that catches all infinite loops
- Theorem provers and code verifiers must be incomplete (they may time out or return "unknown")
- LLM code generation cannot guarantee termination of generated code
12.3 Zorn's Lemma
Zorn's Lemma (equivalent to the Axiom of Choice and the Well-Ordering Theorem):
If every chain in a partially ordered set $(P, \le)$ has an upper bound in $P$, then $P$ has a maximal element.
A chain is a totally ordered subset; a maximal element $m$ is one with no element strictly above it ($\neg\exists x \in P : m < x$).
Applications in mathematics:
- Every vector space has a basis (even infinite-dimensional ones)
- Every ideal in a ring is contained in a maximal ideal
- Every filter can be extended to an ultrafilter
AI relevance: Zorn's Lemma is used in the theoretical foundations of kernel methods (Hilbert space bases) and functional analysis (Hahn-Banach theorem), which underlies support vector machines and reproducing kernel Hilbert spaces.
12.4 Fuzzy Logic
Classical logic is crisp: every statement is either true or false ($\mu \in \{0, 1\}$). Fuzzy logic (Zadeh, 1965) allows truth values in $[0, 1]$:
$\mu_A(x) \in [0, 1]$, where $\mu_A(x)$ is the degree of membership of $x$ in fuzzy set $A$.
Fuzzy operations:
| Classical | Fuzzy |
|---|---|
| $A \cap B$ | $\mu_{A \cap B}(x) = \min(\mu_A(x), \mu_B(x))$ |
| $A \cup B$ | $\mu_{A \cup B}(x) = \max(\mu_A(x), \mu_B(x))$ |
| $A^c$ | $\mu_{A^c}(x) = 1 - \mu_A(x)$ |
AI connections:
- Softmax outputs are like fuzzy membership values - a token has degree-of-membership in each class
- Attention weights are fuzzy: not binary "attend or not" but "how much to attend"
- Fuzzy control systems (early AI) used fuzzy rules: "IF temperature IS high THEN fan IS fast"
- Modern neural networks have largely replaced fuzzy systems, but the conceptual framework persists in the soft/differentiable operations that make gradient descent possible
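The Zadeh min/max/complement operators fit in three one-liners. A minimal sketch (the membership values `hot` and `fast` are arbitrary illustrations):

```python
def f_and(a, b): return min(a, b)    # fuzzy AND (Zadeh min)
def f_or(a, b):  return max(a, b)    # fuzzy OR  (Zadeh max)
def f_not(a):    return 1.0 - a      # fuzzy NOT (complement)

# Degrees of membership in [0, 1] instead of crisp True/False:
hot, fast = 0.75, 0.25

assert f_and(hot, fast) == 0.25
assert f_or(hot, fast) == 0.75
# De Morgan's law survives the generalisation to [0, 1]:
assert f_not(f_and(hot, fast)) == f_or(f_not(hot), f_not(fast))
```

Restricting the inputs to $\{0, 1\}$ recovers classical Boolean AND, OR, and NOT exactly - fuzzy logic is a strict generalisation.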
12.5 Second-Order Logic (Brief)
First-order logic quantifies over objects: $\forall x\ \exists y\ R(x, y)$
Second-order logic quantifies over properties (sets, predicates, functions): $\forall P\ \exists x\ P(x)$
Example - Completeness of $\mathbb{R}$:
- First-order: CANNOT express "every bounded set has a least upper bound" (needs quantification over sets)
- Second-order: $\forall S \subseteq \mathbb{R}\ \big(S \ne \emptyset \wedge S \text{ bounded above} \Rightarrow \sup S \text{ exists}\big)$
Trade-off:
- First-order logic: semi-decidable (complete proof systems exist), but less expressive
- Second-order logic: more expressive, but no complete proof system exists (incompleteness is "worse")
In AI, first-order logic is preferred because it has better computational properties. Description logics, Datalog, and Prolog all work within FOL or its fragments.
13. Common Mistakes and Misconceptions
| # | Mistake | Why it's wrong | Correct version |
|---|---|---|---|
| 1 | Confusing $\in$ and $\subseteq$ | "$a \in A$" (membership) and "$A \subseteq B$" (subset) are different relations | $a \in A$: element belongs; $A \subseteq B$: every element of $A$ is in $B$ |
| 2 | $\{a\} \in \{a, b, c\}$ | The set $\{a\}$ is not an element of $\{a, b, c\}$; its elements are $a$, $b$, $c$ | $a \in \{a, b, c\}$ OK but $\{a\} \in \{a, b, c\}$ NO (though $\{a\} \subseteq \{a, b, c\}$) |
| 3 | $\emptyset = \{\emptyset\}$ | $\emptyset$ has 0 elements; $\{\emptyset\}$ has 1 element (namely $\emptyset$) | $\lvert\emptyset\rvert = 0$; $\lvert\{\emptyset\}\rvert = 1$, so $\emptyset \ne \{\emptyset\}$ |
| 4 | "Universal set exists" | In ZFC, there is no set of all sets (leads to Russell's Paradox) | Work relative to a fixed domain $U$ (a set large enough for your purpose) |
| 5 | Confusing $P \Rightarrow Q$ and $Q \Rightarrow P$ | $P \Rightarrow Q$ does NOT mean $Q \Rightarrow P$ (converse) | Implication is one-directional; biconditional $\Leftrightarrow$ is both |
| 6 | "Vacuous truth is a bug" | $\forall x \in \emptyset : P(x)$ is always true - this is a feature, not a bug | Vacuous truth prevents edge-case contradictions; it's why $\emptyset \subseteq A$ for all $A$ |
| 7 | Converse = contrapositive | $Q \Rightarrow P$ (converse) $\ne$ $\neg Q \Rightarrow \neg P$ (contrapositive) | Contrapositive $\equiv$ original; converse is NOT equivalent |
| 8 | Swapping $\forall$ and $\exists$ | Quantifier order matters; $\exists y\,\forall x\, P(x, y) \Rightarrow \forall x\,\exists y\, P(x, y)$ only (not the reverse) | Read quantifiers left to right; later variables may depend on earlier ones |
| 9 | "Countable = small" | $\mathbb{Q}$ is countable but dense in $\mathbb{R}$; countable sets can be structurally complex | Countable means "no bigger than $\mathbb{N}$" - still infinite |
| 10 | Induction hypothesis forgotten | Claiming $P(k+1)$ without using $P(k)$ | The inductive step MUST use the hypothesis $P(k)$ (or $P(1), \ldots, P(k)$ for strong induction) |
| 11 | Using induction without base case | Proving only the inductive step | Without a base case, the chain has no starting point; the "proof" proves nothing |
| 12 | Confusing "not proven" with "false" | Gödel: some true statements are unprovable | Unprovability $\ne$ falsity; independence results are about axiom limitations |
| 13 | $A \setminus B = B \setminus A$ | Set difference is NOT symmetric (unlike symmetric difference) | In general $A \setminus B \ne B \setminus A$; $A \triangle B = B \triangle A$ |
| 14 | $\lvert A\rvert = \lvert\mathcal{P}(A)\rvert$ for infinite $A$ | Cantor's theorem: $\lvert A\rvert < \lvert\mathcal{P}(A)\rvert$ always, even for infinite sets | There is no bijection between $A$ and $\mathcal{P}(A)$ - ever |
| 15 | Treating attention as symmetric | $(i, j) \in M$ does NOT imply $(j, i) \in M$; attention is a directed relation | Attention is asymmetric: the query$\to$key direction matters |
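Several of these mistakes can be checked directly with Python sets, whose operators mirror the mathematical ones (`in` for $\in$, `<=` for $\subseteq$, `-` for $\setminus$, `^` for $\triangle$). A small sketch covering mistakes 1, 3, and 13:

```python
A = {1, 2, 3}

# Mistake 1: membership (in) vs subset (<=) are different relations.
assert 1 in A                  # element membership
assert {1} <= A                # subset comparison
assert {1} not in A            # {1} is NOT an element of A

# Mistake 3: the empty set vs the set containing the empty set.
empty, nested = frozenset(), frozenset({frozenset()})
assert len(empty) == 0 and len(nested) == 1 and empty != nested

# Mistake 13: set difference is not symmetric; symmetric difference is.
B = {3, 4}
assert A - B != B - A          # {1, 2} vs {4}
assert A ^ B == B ^ A          # == {1, 2, 4} either way
```

(`frozenset` is needed for the nested example because mutable `set` objects are unhashable and cannot be elements of other sets.)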
14. Exercises
Exercise 1: Set Operations (Warm-up)
Given three small finite sets $A$, $B$, $C$ of your choosing, compute:
- $A \triangle B = (A \setminus B) \cup (B \setminus A)$ (symmetric difference)
- Verify De Morgan's law: $(A \cup B)^c = A^c \cap B^c$ (relative to a universe $U \supseteq A \cup B \cup C$)
- $A \times B$ - list all elements
Exercise 2: Relations
Let $A$ be a finite set and $R \subseteq A \times A$ a relation on it.
- Is $R$ reflexive? Symmetric? Transitive?
- Is $R$ an equivalence relation? If yes, find the equivalence classes.
- Draw the relation as a directed graph.
- Write $R$ as a Boolean matrix.
Exercise 3: Partial Orders
Consider the divisibility relation $\mid$ on a finite set of positive integers ($a \mid b$ iff $a$ divides $b$).
- Verify that $\mid$ is a partial order (check reflexivity, antisymmetry, transitivity).
- Draw the Hasse diagram.
- Find all maximal elements, minimal elements, greatest element, least element.
- Is $\mid$ a total order? Why or why not?
- Find all chains of maximum length.
Exercise 4: Propositional Logic
- Construct truth tables for:
- - is this a tautology?
- - simplification law
- Prove or disprove: (using truth tables).
- Convert to CNF: .
- Convert to DNF: .
Exercise 5: Predicate Logic
Translate to FOL (define your predicates):
- "Every transformer has at least one attention head."
- "No model with zero parameters can learn."
- "There exists a learning rate that makes every batch converge."
- "For every $\epsilon > 0$, there exists $N$ such that for all $n > N$, $|a_n - L| < \epsilon$."
- Negate statement (3) formally and interpret in English.
Exercise 6: Proof Practice
Prove each of the following using the specified technique:
- Direct proof: If $n$ is odd, then $n^2$ is odd.
- Contrapositive: If $n^2$ is divisible by 3, then $n$ is divisible by 3.
- Contradiction: $\sqrt{2}$ is irrational.
- Induction: For all $n \ge 1$: $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$.
- Cases: For all integers $n$: $n^3 - n$ is divisible by 6.
Exercise 7: Cardinality
- Prove that $|\mathbb{N}| = |\mathbb{Z}|$ by constructing an explicit bijection.
- Prove that $|\mathbb{N} \times \mathbb{N}| = |\mathbb{N}|$ using the Cantor pairing function.
- Explain why the set of all Python programs is countable.
- Explain why the set of all functions $\mathbb{N} \to \{0, 1\}$ is uncountable. (Hint: relate it to $\mathcal{P}(\mathbb{N})$.)
- A vocabulary has $|V|$ tokens. How many distinct prompts of length exactly $n$ exist? Of length at most $n$?
Exercise 8: AI Applications
- Attention mask logic: A sliding-window attention mask allows query $i$ to attend to key $j$ iff $|i - j| \le w$; a causal mask allows $i$ to attend to $j$ iff $j \le i$. Write the combined mask as a set $M$ in set-builder notation. Is $M$ an equivalence relation? A partial order?
- Tokeniser as equivalence: Define an equivalence relation on strings where two strings are equivalent iff they map to the same token sequence under the BPE merges: $s_1 \sim s_2 \iff \mathrm{BPE}(s_1) = \mathrm{BPE}(s_2)$. List the equivalence classes (up to length 3).
- Loss convergence as FOL: Write the formal statement "SGD converges to a critical point" using quantifiers. Then write its negation and interpret what it means for training to fail.
- Function composition: A model has an embedding $E : V \to \mathbb{R}^d$, an encoder of 12 layers $T_1, \ldots, T_{12}$, and a head $H : \mathbb{R}^d \to \mathbb{R}^{\lvert V\rvert}$. Write the full model as a composition. Is composition associative? Does the order matter?
15. Why This Matters for AI
15.1 Summary of Connections
| Topic in This Chapter | Where It Appears in AI/ML | Chapter Reference |
|---|---|---|
| Sets, subsets, operations | Data partitioning (train/val/test), vocabularies, token sets | 2-3 |
| Relations | Attention masks, knowledge graphs, dependency parsing | 4 |
| Equivalence relations | Tokenisation, clustering, quotient spaces | 4.3 |
| Partial orders | Computation graphs, dependency ordering, beam search | 4.4 |
| Functions | Neural network layers, loss functions, activations | 4.5 |
| Composition | Deep networks as function chains, backpropagation | 4.6 |
| Propositional logic | Boolean masks, circuit design, SAT solving | 5 |
| Truth tables, CNF | Constraint satisfaction, structured generation | 5.2, 5.6 |
| Predicate logic (FOL) | Formal specifications, knowledge representation | 6 |
| Quantifiers | Convergence definitions, generalization bounds | 6.2-6.5 |
| Proof techniques | Understanding theoretical papers, convergence proofs | 7 |
| Induction | Proofs about recursive algorithms and sequence models | 7.6 |
| ZFC axioms | Why mathematics is consistent; foundational security | 8 |
| Axiom of Choice | Existence of bases, minima in non-compact spaces | 8.3 |
| Cardinality | Expressiveness limits, countability of programs | 10 |
| Diagonal argument | Impossibility results, undecidability | 10.4, 12.2 |
| Godel's theorems | Limits of formal verification and AI reasoning | 12.1 |
| Formal verification | Provable robustness, safety guarantees | 11.6 |
16. Conceptual Bridge
16.1 What to Study Next
Now that you have the foundations of sets and logic, the natural next steps are:
- Functions and Mappings (Chapter 03): Deepens the treatment of functions from 4.5 - injectivity, surjectivity, inverse functions, and their role in defining neural network layers precisely.
- Linear Algebra (Chapters 02-03): Sets and logic provide the language for defining vector spaces, subspaces, and linear maps rigorously. Every "for all" and "there exists" in a linear algebra textbook uses the predicate logic from 6.
- Calculus (Chapters 04-05): The $\epsilon$-$\delta$ definition of continuity ($\forall \epsilon > 0\ \exists \delta > 0 \ldots$) is a nested quantifier statement from 6.5. Optimisation theory builds on this.
- Probability Theory (Chapters 06-07): Built entirely on measure theory, which is set theory on steroids. Events are sets, probability is a set function, and conditional probability uses set intersection.
- Information Theory (Chapter 09): Cross-entropy and KL divergence are defined using sets (probability distributions), logarithms (built from real numbers, which are built from sets), and summations (which are disguised quantifiers).
16.2 The Big Picture
+-----------------------------------------------------+
| THIS CHAPTER |
| |
| Sets ---- Logic ---- Proofs ---- Axioms |
| | | | | |
| v v v v |
| Data Masks Theory Foundations |
| structures & rules papers of math |
| | | | | |
| +----------+----------+------------+ |
| | |
| v |
| +-----------------+ |
| | EVERYTHING | |
| | IN MATH/CS | |
| | BUILDS HERE | |
| +-----------------+ |
+-----------------------------------------------------+
Sets and logic are the lingua franca of mathematics. Every mathematical definition, theorem, and proof uses the language developed in this chapter. When you encounter a statement like:
"For every $\epsilon > 0$, there exists a neural network $f$ with at most $N(\epsilon)$ neurons such that $\sup_x |f(x) - g(x)| < \epsilon$"
You now know:
- $\forall \epsilon > 0$ - universal quantifier (6.2)
- $\exists f$ - existential quantifier (6.2)
- $\sup_x |f(x) - g(x)| < \epsilon$ - a predicate (6.1)
- The whole statement is a nested $\forall\exists$ claim (6.5)
- It can be proved by construction (7.8) or contradiction (7.4)
- The set of all such networks has a specific cardinality (10)
You can read mathematics now. The rest of this course builds on this foundation.