Sets and Logic: Part 5: Propositional Logic to 12. Advanced Topics
5. Propositional Logic
Propositional logic is the simplest complete system of logical reasoning. It studies how the truth of compound statements depends on the truth of their components, without looking inside those components. It is the foundation for Boolean circuits, attention mask logic, SAT solvers, and the systematic reasoning that predicate logic (6) and proof techniques (7) build upon.
5.1 Propositions
A proposition is a declarative statement that is either true ($T$, 1) or false ($F$, 0). Never both, never neither.
Examples of propositions:
- "The sky is blue" - true proposition
- "" - false proposition
- "LLaMA-3 has 8B parameters" - true proposition
- "Every even integer greater than 2 is the sum of two primes" - proposition (truth value unknown; Goldbach's conjecture)
Not propositions:
- "What is your name?" - question, no truth value
- "Close the door" - command, no truth value
- "" - depends on ; becomes a proposition only when is specified (this is a predicate, see 6)
Propositional variables: Letters $p, q, r, \dots$ stand for unspecified propositions. Each has a truth value - either $T$ or $F$.
5.2 Logical Connectives
Connectives combine propositions into compound statements. There are six standard connectives:
Negation (NOT): $\lnot p$
"Not $p$." Flips the truth value.
AI: Bitwise NOT in attention masks; complement of a token set.
Conjunction (AND): $p \land q$
"$p$ and $q$." True only when both are true.
AI: Combining attention masks (token must be unmasked AND within window); intersecting constraints in structured generation.
Disjunction (OR): $p \lor q$
"$p$ or $q$." True when at least one is true (inclusive or).
AI: Union of token sets in structured generation; "match either pattern" in retrieval.
Exclusive Or (XOR): $p \oplus q$
"$p$ xor $q$." True when exactly one is true.
AI: Symmetric difference of sets; error detection (XOR of prediction and ground truth bits); XOR is the simplest function not linearly separable - a single perceptron cannot learn it (Minsky & Papert 1969).
Implication (IF...THEN): $p \to q$
"If $p$ then $q$." Also read "$p$ implies $q$." False only when $p$ is true and $q$ is false - a true hypothesis cannot lead to a false conclusion.
Vacuous truth: When $p$ is false, $p \to q$ is always true regardless of $q$. This is often counterintuitive but essential:
- "If pigs fly, then the moon is green" is true (vacuously)
- The reason: $p \to q$ promises "whenever $p$ holds, $q$ holds." If $p$ never holds, the promise is never violated, so it's true by default.
Vacuous truth in mathematics and AI:
- ", property holds" - true, because there's nothing to check
- "" - true, because every element of (there are none) is in
- "If the model reaches 100% accuracy, then it has generalised" - vacuously true if the model never reaches 100%
Biconditional (IF AND ONLY IF): $p \leftrightarrow q$
"$p$ if and only if $q$" (abbreviated "iff"). True when $p$ and $q$ have the same truth value.
Equivalently: $(p \to q) \land (q \to p)$.
AI: Set equality ($A = B$ iff $A \subseteq B \land B \subseteq A$); defining exactly when a condition holds.
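The connectives above can be written as one-line Boolean functions and tabulated exhaustively (a minimal Python sketch; the function names are illustrative, not from any library):

```python
from itertools import product

# Each connective as a Boolean function (names are illustrative).
def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q
def XOR(p, q): return p != q
def IMPLIES(p, q): return (not p) or q   # false only for True -> False
def IFF(p, q): return p == q

# Full truth table for the binary connectives (1 = true, 0 = false).
print("p q | AND OR XOR -> <->")
for p, q in product([True, False], repeat=2):
    row = [AND(p, q), OR(p, q), XOR(p, q), IMPLIES(p, q), IFF(p, q)]
    print(int(p), int(q), "|", *[int(v) for v in row])

# Vacuous truth: a false hypothesis makes the implication true.
assert IMPLIES(False, True) and IMPLIES(False, False)
```

Note that `IMPLIES` is defined via the equivalence $p \to q \equiv \lnot p \lor q$ rather than by branching.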
5.3 Operator Precedence
When multiple connectives appear without parentheses, the standard precedence (highest first) is: $\lnot$, then $\land$, then $\lor$, then $\to$, then $\leftrightarrow$.
Examples:
- $\lnot p \land q$ means $(\lnot p) \land q$, not $\lnot(p \land q)$
- $p \land q \lor r$ means $(p \land q) \lor r$, not $p \land (q \lor r)$
- $p \to q \land r$ means $p \to (q \land r)$, not $(p \to q) \land r$
Best practice: Always use parentheses when there's any ambiguity. Clarity is more important than brevity.
5.4 Tautologies, Contradictions, and Contingencies
Tautology: A formula that is true for all possible truth value assignments to its variables. Logically necessary - cannot possibly be false.
| Tautology | Name |
|---|---|
| $p \lor \lnot p$ | Law of excluded middle |
| $p \to p$ | Self-implication |
| $(p \land q) \to p$ | Simplification |
| $(p \to q) \leftrightarrow (\lnot p \lor q)$ | Implication equivalence |
Contradiction (Unsatisfiable): A formula that is false for all truth value assignments.
| Contradiction | Why |
|---|---|
| $p \land \lnot p$ | Cannot be both true and false |
| $(p \to q) \land p \land \lnot q$ | Modus ponens violated |
Contingency: A formula that is true for some assignments and false for others.
- $p \land q$ - true when both true, false otherwise
- $p \to q$ - false only when $p = T, q = F$
Satisfiability: A formula is satisfiable if at least one truth assignment makes it true. Tautologies and contingencies are satisfiable; contradictions are not.
The SAT problem: Given a propositional formula $\varphi$, is $\varphi$ satisfiable? This is the canonical NP-complete problem (Cook-Levin theorem, 1971). Despite worst-case intractability, modern SAT solvers handle millions of variables - they are critical for AI constraint solving, formal verification, and automated planning.
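For small formulas, satisfiability can be decided by brute force over all $2^n$ assignments - a toy sketch that makes the definition concrete (real solvers are vastly more clever):

```python
from itertools import product

def satisfiable(formula, variables):
    """Brute-force SAT check: try all 2^n truth assignments.
    Exponential, unlike real DPLL/CDCL solvers, but fine for tiny formulas."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if formula(env):
            return env          # a satisfying assignment
    return None                 # unsatisfiable

# (p OR q) AND (NOT p OR q) AND (NOT q OR p) -- satisfied only by p = q = True
f = lambda e: (e['p'] or e['q']) and (not e['p'] or e['q']) and (not e['q'] or e['p'])
print(satisfiable(f, ['p', 'q']))   # {'p': True, 'q': True}

# p AND NOT p is a contradiction: unsatisfiable
g = lambda e: e['p'] and not e['p']
print(satisfiable(g, ['p']))        # None
```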
5.5 Logical Equivalence
Two formulas $P$ and $Q$ are logically equivalent (written $P \equiv Q$) if they have the same truth value for every possible truth assignment. Equivalently: $P \leftrightarrow Q$ is a tautology.
Here are the fundamental equivalences - the "algebra rules" of logic:
Double Negation: $\lnot\lnot p \equiv p$
De Morgan's Laws: $\lnot(p \land q) \equiv \lnot p \lor \lnot q$ and $\lnot(p \lor q) \equiv \lnot p \land \lnot q$
These are the most used laws in practice. Notice they mirror the set-theoretic De Morgan's Laws (3.6) exactly - this is the Boolean algebra correspondence (5.7).
Implication Equivalences: $p \to q \equiv \lnot p \lor q$ and $p \to q \equiv \lnot q \to \lnot p$ (contrapositive)
The contrapositive equivalence is used constantly in proofs (see 7.3). It says "$p$ implies $q$" is the same as "not $q$ implies not $p$." Distinguish it from the converse ($q \to p$), which is NOT equivalent to $p \to q$.
Distributive Laws: $p \land (q \lor r) \equiv (p \land q) \lor (p \land r)$ and $p \lor (q \land r) \equiv (p \lor q) \land (p \lor r)$
Absorption Laws: $p \lor (p \land q) \equiv p$ and $p \land (p \lor q) \equiv p$
Idempotent Laws: $p \land p \equiv p$ and $p \lor p \equiv p$
Identity Laws: $p \land T \equiv p$ and $p \lor F \equiv p$
Domination Laws: $p \lor T \equiv T$ and $p \land F \equiv F$
Complement Laws: $p \land \lnot p \equiv F$ and $p \lor \lnot p \equiv T$
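Because a propositional formula has finitely many variables, every equivalence above can be verified by exhausting all truth assignments - a small self-check sketch:

```python
from itertools import product

def equivalent(f, g, n):
    """Two n-variable formulas are equivalent iff they agree on all 2^n assignments."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n))

imp = lambda p, q: (not p) or q     # material implication

# De Morgan's laws
assert equivalent(lambda p, q: not (p and q), lambda p, q: (not p) or (not q), 2)
assert equivalent(lambda p, q: not (p or q), lambda p, q: (not p) and (not q), 2)
# Contrapositive: p -> q  ==  not q -> not p
assert equivalent(imp, lambda p, q: imp(not q, not p), 2)
# Distributivity
assert equivalent(lambda p, q, r: p and (q or r),
                  lambda p, q, r: (p and q) or (p and r), 3)
# The converse q -> p is NOT equivalent to p -> q
assert not equivalent(imp, lambda p, q: imp(q, p), 2)
print("all listed equivalences verified")
```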
5.6 Normal Forms
Literal: A variable or its negation ($p$ or $\lnot p$).
Clause: A disjunction of literals ($p \lor \lnot q \lor r$).
Term: A conjunction of literals ($p \land \lnot q \land r$).
Conjunctive Normal Form (CNF)
A formula in CNF is a conjunction of clauses - an AND of ORs, e.g. $(p \lor \lnot q) \land (\lnot p \lor r) \land (q \lor r)$.
Every propositional formula can be converted to an equivalent CNF. SAT solvers require CNF input - the DPLL and CDCL algorithms operate exclusively on CNF.
Disjunctive Normal Form (DNF)
A formula in DNF is a disjunction of terms - an OR of ANDs, e.g. $(p \land \lnot q) \lor (\lnot p \land r) \lor (q \land r)$.
Every formula can be converted to DNF. A DNF is satisfiable iff any single term is satisfiable (no complementary literals in a term), which is easy to check - but the conversion to DNF may be exponentially large.
Conversion to CNF (Algorithm)
Given an arbitrary formula $\varphi$:
- Eliminate biconditionals: $p \leftrightarrow q \;\Rightarrow\; (p \to q) \land (q \to p)$
- Eliminate implications: $p \to q \;\Rightarrow\; \lnot p \lor q$
- Push negations inward (De Morgan's + double negation): $\lnot(p \land q) \Rightarrow \lnot p \lor \lnot q$, $\lnot\lnot p \Rightarrow p$
- Distribute $\lor$ over $\land$: $p \lor (q \land r) \;\Rightarrow\; (p \lor q) \land (p \lor r)$
Example: Convert $p \to (q \land r)$ to CNF:
- Step 2: $\lnot p \lor (q \land r)$
- Step 4: $(\lnot p \lor q) \land (\lnot p \lor r)$ - this is CNF ✓
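Taking $p \to (q \land r)$ as a sample input, a brute-force truth-table comparison confirms that this style of conversion preserves meaning (a sketch, not a general CNF converter):

```python
from itertools import product

# Sample formula p -> (q AND r), after eliminating the implication (step 2)...
step2 = lambda p, q, r: (not p) or (q and r)
# ...and after distributing OR over AND (step 4): a CNF with two clauses.
cnf = lambda p, q, r: ((not p) or q) and ((not p) or r)

assert all(step2(p, q, r) == cnf(p, q, r)
           for p, q, r in product([False, True], repeat=3))
print("conversion preserved truth values on all 8 assignments")
```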
5.7 Boolean Algebra
The laws of propositional logic form an algebraic structure called a Boolean algebra: $(\{T, F\}, \land, \lor, \lnot)$ satisfying all the equivalences listed in 5.5.
The fundamental connection - Stone Representation Theorem: Every Boolean algebra is isomorphic to an algebra of sets:
| Logic | Sets | Python |
|---|---|---|
| $\land$ (AND) | $\cap$ (intersection) | `&` |
| $\lor$ (OR) | $\cup$ (union) | `\|` |
| $\lnot$ (NOT) | complement | `~` |
| $F$ (false) | $\emptyset$ | `set()` |
| $T$ (true) | universal set $U$ | - |
| $\to$ (implies) | $\subseteq$ | `<=` |
| $\leftrightarrow$ (iff) | $=$ | `==` |
This is why De Morgan's laws look the same for sets and for logic - they are the same, in the precise algebraic sense. Boolean algebra unifies propositional logic, set operations, and circuit design under a single framework.
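The correspondence can be exercised directly with Python's `set` type (small illustrative sets):

```python
# Logic <-> sets correspondence, checked on concrete sets.
U = set(range(10))          # universal set
A = {0, 2, 4, 6, 8}         # "even"
B = {0, 3, 6, 9}            # "divisible by 3"

# De Morgan for sets: complement of intersection = union of complements
assert U - (A & B) == (U - A) | (U - B)
# Implication corresponds to subset: X "implies" A iff X <= A
assert {0, 6} <= A and {0, 6} <= B
# "iff" corresponds to set equality
assert (A & B) == {x for x in U if x % 2 == 0 and x % 3 == 0}
print("Boolean algebra laws hold for these sets")
```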
AI connections:
- Attention masks as Boolean matrices: An attention mask is a Boolean matrix $M \in \{0, 1\}^{n \times n}$ where $M_{ij} = 1$ iff query $i$ attends to key $j$. Combining masks (causal AND sliding window) is elementwise Boolean AND.
- Structured generation FSMs: The transition function of a finite state machine operates on Boolean states. Valid token sets at each state are computed via Boolean operations.
- Circuit complexity of neural networks: A ReLU network computes a piecewise linear function; each linear region corresponds to a Boolean pattern of active/inactive neurons - the network's behaviour is described by a Boolean function over activation patterns.
6. Predicate Logic (First-Order Logic)
Propositional logic treats statements as atomic units - it cannot look inside them. Predicate logic (FOL) opens them up, exposing the objects, properties, and quantifiers that make mathematical reasoning powerful. Every formal specification in AI - from type systems to knowledge bases to verification conditions - ultimately rests on predicate logic.
6.1 Predicates and Quantifiers - Motivation
The statement "every prime greater than 2 is odd" cannot be expressed in propositional logic. We need:
- A way to talk about individual objects (numbers, tokens, vectors)
- A way to express properties of objects ("is prime," "is odd")
- A way to say "for all" or "there exists"
Predicate logic provides all three.
6.2 Syntax of FOL
Terms: The objects we talk about.
- Constants: Specific objects ($0$, $1$, $\pi$, `[PAD]`, `[CLS]`)
- Variables: Placeholders for objects ($x$, $y$, $z$)
- Function symbols: Operations that produce objects ($+$, $\times$, $f(x)$)
Predicates (relation symbols): Properties of or relationships between objects that evaluate to true or false.
- Unary: $\mathrm{Prime}(x)$, $\mathrm{Even}(x)$, $\mathrm{Token}(x)$
- Binary: $x < y$, $x = y$, $x \in A$
- $n$-ary: $\mathrm{Between}(x, y, z)$ meaning $x$ is between $y$ and $z$
Quantifiers: The two pillars of FOL.
Universal Quantifier: $\forall$ ("for all")
$\forall x\, P(x)$: "For every $x$, $P(x)$ holds." True when $P$ is true for every object in the domain.
Examples:
- - "every real number squared is non-negative"
- - "every token maps to a -dimensional vector"
- - causal mask definition
Existential Quantifier: $\exists$ ("there exists")
$\exists x\, P(x)$: "There exists an $x$ such that $P(x)$ holds." True when $P$ is true for at least one object in the domain.
Examples:
- - " exists (in )"
- - "there's a token with probability above 0.5"
- - "a global minimum exists" (not always true!)
Unique Existential: $\exists!$ ("there exists exactly one")
$\exists! x\, P(x)$: "There is exactly one $x$ satisfying $P$." This is a defined symbol - syntactic sugar for $\exists x\, (P(x) \land \forall y\, (P(y) \to y = x))$.
6.3 Free and Bound Variables
A variable is bound in a formula if it appears within the scope of a quantifier $\forall$ or $\exists$. Otherwise, it is free.
- In $\forall x\, P(x, y)$: $x$ is bound, $y$ is free
- In $\forall x\, (x + y > z)$: $x$ is bound, $y$ and $z$ are free
- In $\forall x\, \exists y\, (x < y)$: both $x$ and $y$ are bound
A sentence (closed formula) has no free variables - it has a definite truth value. A formula with free variables is a predicate - it becomes a sentence only when specific values are substituted for the free variables or when they are quantified.
AI connection: In a loss function $L(\theta) = \sum_i \ell(f(x_i; \theta), y_i)$, the index $i$ is bound (by the sum, which binds it just as a quantifier would), while $\theta$ is free - this is why we optimise over $\theta$.
6.4 Negating Quantifiers
Negation interacts with quantifiers via two fundamental rules:
"Not everything satisfies " "Something fails to satisfy ."
"Nothing satisfies " "Everything fails to satisfy ."
These are the quantifier analogues of De Morgan's laws. The pattern generalises:
- Negation swaps $\forall$ and $\exists$
- Negation swaps $\land$ and $\lor$
- Negation swaps $<$ and $\ge$ (in $\varepsilon$-$\delta$ style statements)
AI example: "Not all tokens are attended to" "There exists a token that is not attended to":
6.5 Nested Quantifiers
Quantifiers can be nested. The order matters for mixed quantifiers: $\forall x\, \exists y\, P(x, y)$ versus $\exists y\, \forall x\, P(x, y)$.
The first says: "for every $x$, there is some $y$ (possibly different for each $x$) such that $P(x, y)$." The second says: "there is a single $y$ that works for all $x$."
Comparison with examples:
| Statement | Formal | True? |
|---|---|---|
| "For every real, there's a larger one" | $\forall x \in \mathbb{R}\ \exists y \in \mathbb{R}:\ y > x$ | True |
| "There's a real larger than all reals" | $\exists y \in \mathbb{R}\ \forall x \in \mathbb{R}:\ y > x$ | False |
| "Every continuous function on $[a, b]$ attains its max" | $\forall f \in C[a,b]\ \exists x^* \in [a,b]\ \forall x \in [a,b]:\ f(x) \le f(x^*)$ | True (EVT) |
| "For every $\varepsilon > 0$, there exists $\delta > 0$ ..." | $\forall \varepsilon > 0\ \exists \delta > 0\ \forall x:\ \lvert x - a \rvert < \delta \to \lvert f(x) - f(a) \rvert < \varepsilon$ | Continuity def |
Order rule: Swapping $\forall$ and $\exists$ is valid in only one direction of implication: $\exists y\, \forall x\, P(x, y) \implies \forall x\, \exists y\, P(x, y)$.
The converse is NOT generally true. A single universal $y$ is a strictly stronger claim than an individual $y_x$ for each $x$.
AI example - convergence of SGD:
- means "the loss converges to "
- Here depends on - this is the pattern
6.6 Translating Between English and FOL
Translating natural language to FOL is a critical skill for formal verification, knowledge representation, and understanding what mathematical theorems actually say.
Systematic approach:
- Identify the domain (what are the objects?)
- Identify the predicates (what properties/relations?)
- Identify the quantifier structure ($\forall$, $\exists$, their order)
- Write the formal expression
- Check by reading it back in English
Practice translations:
| English | FOL |
|---|---|
| "All vectors in $S$ have unit norm" | $\forall v \in S:\ \lVert v \rVert = 1$ |
| "Some weight is negative" | $\exists w \in W:\ w < 0$ |
| "No gradient is zero" | $\lnot \exists i:\ g_i = 0$ or $\forall i:\ g_i \ne 0$ |
| "If all gradients vanish, we're at a critical point" | $(\forall i:\ g_i = 0) \to \mathrm{Critical}(\theta)$ |
| "For every layer, there exists a neuron that fires" | $\forall \ell\ \exists n:\ \mathrm{Fires}(n, \ell)$ |
| "There's a learning rate that works for all tasks" | $\exists \eta\ \forall t:\ \mathrm{Works}(\eta, t)$ (probably false!) |
6.7 FOL in AI Contexts
Knowledge graphs store facts as FOL ground atoms, e.g. $\mathrm{CapitalOf}(\mathrm{Paris}, \mathrm{France})$. Rules are FOL implications: $\forall x, y, z:\ \mathrm{Parent}(x, y) \land \mathrm{Parent}(y, z) \to \mathrm{Grandparent}(x, z)$.
Description logics (OWL, used in ontologies) are decidable fragments of FOL. They balance expressiveness with computational tractability - reasoning in full FOL is undecidable.
Program specification: Hoare logic uses FOL to specify pre/post-conditions: $\{P\}\ C\ \{Q\}$ means "if $P$ holds before executing $C$, then $Q$ holds after."
LLM reasoning: When an LLM performs chain-of-thought reasoning, it is (approximately) performing FOL inference - applying modus ponens, instantiating universals, existential witnesses. The fidelity of this approximation is an active research question.
7. Proof Techniques
This chapter only needs a working preview of proof methods so later sections can state theorems precisely and you can follow the logic of an argument without being surprised by the structure. The full treatment lives in Proof Techniques, where each method is developed in much more depth.
For now, the right goal is recognition:
- What kind of claim is being proved?
- What assumptions are being used?
- Why is a specific proof strategy a natural fit?
7.1 What is a Proof?
A proof is a finite chain of justified steps from assumptions, definitions, axioms, and previously established results to a conclusion. In this section, sets and logic give us the vocabulary; proofs tell us how that vocabulary is used to certify statements rather than merely suggest them.
That distinction matters in AI. A numerical experiment may suggest that an optimizer converges or that a masking rule preserves causality, but a proof tells you exactly under which assumptions the conclusion must hold.
7.2 Direct Proof
In a direct proof, you start from the hypothesis and push forward to the conclusion. This is the default style for statements of the form "$P \to Q$" when the path from premises to result is transparent.
Example: to show that the sum of two even integers is even, write them as $2m$ and $2n$, add them to get $2m + 2n = 2(m + n)$, and observe the result is again divisible by 2. This style mirrors many derivations in ML, where you begin with model assumptions and algebraically derive a bound or identity.
7.3 Proof by Contrapositive
Sometimes the forward direction is awkward, but the logically equivalent contrapositive $\lnot Q \to \lnot P$ is easier. That is proof by contrapositive, and it is why the implication equivalence in 5.5 matters operationally, not just symbolically.
Classic example: to prove "if $n^2$ is even, then $n$ is even," it is simpler to show "if $n$ is odd, then $n^2$ is odd." In theory-heavy AI writing, contraposition shows up in impossibility and lower-bound arguments where failure of the conclusion exposes useful structure in the assumptions.
7.4 Proof by Contradiction
In proof by contradiction, you assume the statement is false and derive an impossibility. This is especially common when proving that some object cannot exist, that a property must hold universally, or that a certain finite description is impossible.
Famous examples include proving that $\sqrt{2}$ is irrational and that there are infinitely many primes. In AI-flavored mathematics, contradiction often appears when ruling out pathological counterexamples or showing that a supposed optimum or invariant would violate earlier assumptions.
7.5 Proof by Cases
When a domain naturally splits into a few exhaustive possibilities, you can prove a statement separately in each branch. This is proof by cases.
For example, the claim $\lvert x \rvert \ge x$ for every real $x$ is easiest by splitting into the cases $x \ge 0$ and $x < 0$. The same structure appears in algorithms and model analysis whenever behavior differs across regimes such as active vs inactive ReLU units, interior vs boundary points, or short-context vs long-context cases.
7.6 Mathematical Induction
Induction proves statements indexed by the natural numbers. You establish a base case, then show that if the claim holds at step $n$, it also holds at step $n + 1$. Once both parts are in place, the result propagates to all later indices.
This matters for AI because many objects are recursive or sequential: token prefixes, layers in a stack, rollout horizons, and sequence length arguments are all natural induction targets. We will use the full machinery in the dedicated proof section when sequence-model reasoning becomes more central.
7.7 Existence and Uniqueness Proofs
An existence proof shows that at least one object with the desired property exists. A uniqueness proof shows there can be at most one. Together, they establish that there is exactly one such object.
This distinction is everywhere in optimization and model theory. "A minimizer exists" and "the minimizer is unique" are very different statements, with different consequences for learning dynamics and interpretation.
7.8 Proof by Construction
A constructive proof does more than assert existence: it explicitly builds the object. This is especially valuable in computer science and AI because a constructive argument often doubles as an algorithm.
For instance, if a theorem says there exists a network, partition, mask, or encoding with certain properties, a constructive proof tells you how to produce one. That is why constructive arguments are often more actionable than purely existential ones.
7.9 Why This Is Only a Preview
At this stage, proof techniques should feel like a toolbox you can recognize, not yet one you have fully mastered. The important bridge is:
- logic gives the form of valid inference
- sets provide the objects and predicates we talk about
- proofs combine both into reliable mathematical arguments
When you are ready to go deeper, continue with Proof Techniques, where these methods are expanded with full templates, worked examples, and more explicit AI applications.
8. Axiomatic Set Theory (ZFC)
Naive set theory (2) is intuitive but broken - Russell's Paradox (2.5) shows that unrestricted set formation leads to contradictions. ZFC (Zermelo-Fraenkel axioms with the Axiom of Choice) is the standard rigorous foundation for all of modern mathematics. Every mathematical object - numbers, functions, spaces, probability measures, neural networks - can be built from ZFC sets.
8.1 Why Axioms?
Axioms serve two purposes:
- Prevent paradoxes: By restricting which sets exist, ZFC avoids Russell's Paradox and similar contradictions.
- Ensure consistency: All mathematical reasoning ultimately reduces to statements provable from these axioms.
The axioms tell you exactly which sets you are allowed to construct and what operations you may perform on them. Nothing more, nothing less.
8.2 The ZFC Axioms
There are 9 axioms (some formulations give 8 or 10; the exact count depends on grouping). Here they are, with intuition and notation.
Axiom 1: Extensionality
Two sets are equal if and only if they have the same elements. A set is determined entirely by its membership - there is no "hidden structure."
Consequence: $\{1, 2\} = \{2, 1\} = \{1, 1, 2\}$. Order and repetition are irrelevant.
Axiom 2: Empty Set
There exists a set $\emptyset$ with no elements. By Extensionality, this set is unique - there is only one empty set.
Axiom 3: Pairing
For any two objects $a$ and $b$, the set $\{a, b\}$ exists. This lets us form pairs, which underpins ordered pairs and Cartesian products.
Axiom 4: Union
For any collection of sets $\mathcal{F}$, there exists a set $\bigcup \mathcal{F}$ containing exactly the elements that belong to at least one set in $\mathcal{F}$.
Axiom 5: Power Set
For any set $A$, the power set $\mathcal{P}(A)$ exists. This is the set of all subsets of $A$, and $\lvert \mathcal{P}(A) \rvert = 2^{\lvert A \rvert}$ for finite $A$.
Axiom 6: Separation (Specification / Comprehension)
For any set $A$ and any property $\varphi(x)$, the set $\{x \in A : \varphi(x)\}$ exists. Crucially, you can only separate out elements from an existing set - you cannot form $\{x : \varphi(x)\}$ without a bounding set $A$.
This is what blocks Russell's Paradox. The "set of all sets that don't contain themselves" would require $R = \{x : x \notin x\}$ - but without a bounding set $A$, Separation doesn't let you form it.
Axiom 7: Infinity
There exists an infinite set - specifically, a set containing $\emptyset$, $\{\emptyset\}$, $\{\emptyset, \{\emptyset\}\}$, and so on forever. This axiom guarantees the existence of the natural numbers (and prevents mathematics from being stuck in a finite world).
Axiom 8: Replacement
If $F$ is a definable function and $A$ is a set, then the image $F[A] = \{F(x) : x \in A\}$ is a set. This is like Separation but more powerful - it lets you apply a function to every element and collect the results.
AI analogy: Replacement is `map`. Given a set (dataset) and a function (model), the outputs form a set (predictions).
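A one-line sketch of that analogy with a set comprehension (the `model` function is a stand-in):

```python
# Replacement as `map`: apply a definable function to every element
# of a set; the image is again a set.
dataset = {1, 2, 3, 4}
model = lambda x: x * x           # stand-in "definable function"

predictions = {model(x) for x in dataset}   # the image F[A]
print(sorted(predictions))   # [1, 4, 9, 16]
```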
Axiom 9: Foundation (Regularity)
Every non-empty set $A$ has an element disjoint from $A$. This prevents circular membership chains ($x \in y \in x$) and self-containing sets ($x \in x$). Sets are "well-founded" - they are built from the bottom up (from $\emptyset$).
8.3 The Axiom of Choice (AC)
For any collection of non-empty sets, there exists a choice function that selects exactly one element from each set. This seems obvious but has deep consequences.
Why it's controversial:
- It's non-constructive: it asserts the existence of a choice function without building one
- It implies paradoxical results: Banach-Tarski paradox (decompose a sphere into 5 pieces and reassemble into two spheres of the same size)
- It implies the well-ordering theorem: every set can be well-ordered (including $\mathbb{R}$, despite having no known explicit well-ordering)
Why it's accepted:
- Without AC, many theorems fail (every vector space has a basis; Tychonoff's theorem; Hahn-Banach theorem)
- ZFC (with AC) has not been shown inconsistent
- AC is independent of ZF: both ZF+AC and ZF+$\lnot$AC are consistent (if ZF is consistent), by Gödel (1938) and Cohen (1963)
AI applications of AC: Most machine learning theory implicitly uses AC:
- "There exists a minimiser of the loss function" - may require AC for non-compact spaces
- "Every vector space has a basis" - needs AC for infinite-dimensional spaces (function spaces in kernel methods)
- In practice, everything is finite-dimensional, so AC is not needed for actual computations
8.4 Constructing Numbers from Sets
One of the most remarkable achievements of axiomatic set theory is building all number systems from pure sets:
Natural numbers (von Neumann construction): $0 = \emptyset$, $1 = \{0\}$, $2 = \{0, 1\}$, $3 = \{0, 1, 2\}$, ...
The rule: $n + 1 = n \cup \{n\}$. Each natural number is the set of all smaller natural numbers: $n = \{0, 1, \dots, n - 1\}$, so $3 = \{\emptyset, \{\emptyset\}, \{\emptyset, \{\emptyset\}\}\}$.
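The construction can be mimicked in a few lines (a sketch; `frozenset` is used because Python set elements must be hashable):

```python
# Von Neumann naturals as pure sets: n + 1 = n UNION {n}.
def successor(n):
    return n | frozenset({n})

zero = frozenset()                # 0 = {}
one = successor(zero)             # 1 = {0}
two = successor(one)              # 2 = {0, 1}
three = successor(two)            # 3 = {0, 1, 2}

assert len(three) == 3            # |n| = n: n contains all smaller naturals
assert zero in one and one in two and two in three   # membership is "less than"
print("von Neumann construction checks out")
```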
Integers: $\mathbb{Z}$ is constructed as equivalence classes of pairs of natural numbers: $(a, b)$ represents $a - b$.
Rationals: $\mathbb{Q}$ is constructed as equivalence classes of pairs of integers: $(p, q)$ represents $p/q$ where $q \ne 0$.
Reals: $\mathbb{R}$ is constructed via Dedekind cuts or Cauchy sequences of rationals.
The takeaway: Every mathematical object - including the real numbers that neural network weights live in - is built from the empty set using ZFC axioms. This is why set theory is called the "foundation of mathematics."
8.5 Ordinals and Cardinals (Brief)
Ordinal numbers extend the natural numbers to describe the "length" of well-orderings: $0, 1, 2, \dots, \omega, \omega + 1, \omega + 2, \dots, \omega \cdot 2, \dots, \omega^2, \dots$
Here $\omega$ is the first infinite ordinal (the "ordinal type" of $\mathbb{N}$). Ordinals are used to measure the "height" of sets in the cumulative hierarchy.
Cardinal numbers measure the "size" of sets. For finite sets, the cardinal and ordinal notions coincide ($\lvert A \rvert = n$ means $A$ has $n$ elements). For infinite sets, cardinality is defined via bijections - two sets have the same cardinality iff a bijection exists between them. More on this in 10.
9. Logic in Computation and AI
This section bridges the pure theory of 5-8 to the concrete computational world. The key idea: logic is not just a tool for proofs - it is the foundation of computation itself.
9.1 Boolean Circuits
Every digital computer is built from logic gates - physical implementations of the logical connectives:
| Gate | Logic | Symbol | Output |
|---|---|---|---|
| AND | $\land$ | `&` | 1 iff both inputs are 1 |
| OR | $\lor$ | `\|` | 1 iff at least one input is 1 |
| NOT | $\lnot$ | `~` | Flips the bit |
| NAND | $\lnot(p \land q)$ | - | 0 iff both inputs are 1 |
| XOR | $\oplus$ | `^` | 1 iff inputs differ |
Functional completeness: $\{\land, \lor, \lnot\}$ can compute any Boolean function. Even more remarkably, NAND alone is functionally complete - every Boolean function can be built from NAND gates.
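NAND's completeness can be demonstrated by deriving the other connectives from it and checking all inputs:

```python
from itertools import product

def nand(p, q): return not (p and q)

# NOT, AND, OR built purely from NAND.
def NOT(p):    return nand(p, p)
def AND(p, q): return nand(nand(p, q), nand(p, q))
def OR(p, q):  return nand(nand(p, p), nand(q, q))

for p, q in product([False, True], repeat=2):
    assert NOT(p) == (not p)
    assert AND(p, q) == (p and q)
    assert OR(p, q) == (p or q)
print("NOT, AND, OR recovered from NAND alone")
```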
From gates to GPUs: GPU cores perform massively parallel Boolean and arithmetic operations. The floating-point units, comparison circuits, and memory addressing logic in a GPU are all Boolean circuits. When you run torch.matmul(A, B), underneath it is billions of transistors implementing Boolean functions.
9.2 SAT Solving
The Boolean satisfiability problem (SAT): given a propositional formula in CNF, is there a truth assignment that makes it true?
DPLL Algorithm (Davis-Putnam-Logemann-Loveland, 1962):
- Unit propagation: If a clause has one unset literal, it must be true
- Pure literal elimination: If a variable appears with only one polarity, set it to satisfy those clauses
- Branching: Pick an unset variable, try both truth values, recurse
- Backtracking: If a contradiction is found, undo the last choice
CDCL (Conflict-Driven Clause Learning): Modern SAT solvers extend DPLL with learned clauses - when a conflict occurs, they analyse why and add a clause preventing the same conflict pattern. This is a form of learning from mistakes.
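A minimal DPLL sketch (unit propagation plus branching; clauses as frozensets of signed integers - real solvers add watched literals, clause learning, and branching heuristics):

```python
def simplify(clauses, lit):
    """Set literal `lit` true: drop satisfied clauses, remove -lit elsewhere."""
    return [c - {-lit} for c in clauses if lit not in c]

def dpll(clauses, assignment=None):
    """Toy DPLL on CNF. Positive int = variable, negative int = its negation."""
    if assignment is None:
        assignment = {}
    # Unit propagation: a one-literal clause forces that literal's value.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if len(clause) == 1:
                lit = next(iter(clause))
                clauses = simplify(clauses, lit)
                assignment[abs(lit)] = lit > 0
                changed = True
                break
    if not clauses:
        return assignment                 # all clauses satisfied
    if frozenset() in clauses:
        return None                       # empty clause: conflict
    # Branch: try both polarities of an arbitrary unset variable.
    lit = next(iter(next(iter(clauses))))
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x1): only x1 = x2 = True works
cnf = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 1})]
print(dpll(cnf))   # {1: True, 2: True}
```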
SAT in AI:
- Constraint satisfaction: Many combinatorial AI problems reduce to SAT
- Planning: Classical AI planning (SATPLAN) encodes action sequences as SAT instances
- Formal verification: Proving neural network properties (robustness, fairness) reduces to SAT/SMT
- Hardware design: Chip verification uses SAT solvers extensively
- Structured generation: Ensuring LLM output satisfies a grammar can be framed as constraint satisfaction
9.3 Resolution and Automated Reasoning
Resolution is a single inference rule that is complete for refuting CNF formulas: from $(A \lor p)$ and $(B \lor \lnot p)$, infer $(A \lor B)$.
If one clause contains $p$ and another contains $\lnot p$, their resolvent combines the remaining literals.
Completeness (Robinson, 1965): A set of clauses is unsatisfiable if and only if the empty clause ($\square$) can be derived by repeated resolution.
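The rule and the refutation idea fit in a few lines (literals as signed integers; a toy sketch):

```python
def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of signed int literals)."""
    resolvents = []
    for lit in c1:
        if -lit in c2:
            resolvents.append((c1 - {lit}) | (c2 - {-lit}))
    return resolvents

# Refute {p}, {NOT p OR q}, {NOT q}: derive the empty clause.
p, q = 1, 2
c1, c2, c3 = frozenset({p}), frozenset({-p, q}), frozenset({-q})
step1 = resolve(c1, c2)[0]        # resolvent: {q}
step2 = resolve(step1, c3)[0]     # resolvent: {} -- the empty clause
assert step1 == frozenset({q})
assert step2 == frozenset()       # unsatisfiable, by Robinson completeness
print("derived the empty clause: the clause set is unsatisfiable")
```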
This is the backbone of Prolog and logic programming - computation as proof search.
9.4 Type Theory and Programming
Type systems are deductive systems closely related to logic. The Curry-Howard correspondence establishes a deep isomorphism:
| Logic | Type Theory | Programming |
|---|---|---|
| Proposition | Type | Specification |
| Proof | Term (program) | Implementation |
| $\to$ (implication) | Function type $A \to B$ | Function from $A$ to $B$ |
| $\land$ (conjunction) | Product type $A \times B$ | Pair/tuple |
| $\lor$ (disjunction) | Sum type $A + B$ | Tagged union / either |
| $\forall$ | Dependent function type | Generic/polymorphic function |
| $\exists$ | Dependent pair type | Record with witness |
| $\bot$ (false) | Empty type | Unreachable code |
| Proof of $P$ | Value of type $P$ | Running program |
Consequence: A valid type-checked program is a proof of its type signature. Type errors are logical contradictions.
AI connection - type-safe tensors: Projects like `jaxtyping` and `beartype` enforce tensor shape constraints at the type level: `Float[Array, "batch seq d_model"]`. A shape mismatch is caught as a type error - i.e., a logical contradiction between what the function promises and what it receives.
9.5 Modal Logic (Brief)
Modal logic extends propositional logic with necessity ($\square$) and possibility ($\lozenge$):
- $\square p$: "$p$ is necessarily true" (true in all possible worlds)
- $\lozenge p$: "$p$ is possibly true" (true in some possible world)
In AI:
- Epistemic logic: $K_a p$ means "agent $a$ knows $p$." Used in multi-agent systems.
- Temporal logic: $\mathbf{G}\, p$ means "$p$ always holds" (safety); $\mathbf{F}\, p$ means "$p$ eventually holds" (liveness). Used in model checking and formal verification.
- Deontic logic: What an AI system "should" do - the logic of obligations and permissions, relevant to AI alignment.
9.6 Description Logics and Knowledge Representation
Description logics (DLs) are decidable fragments of FOL designed for knowledge representation:
- Concepts (unary predicates): `Person`, `Model`, `LargeLanguageModel`
- Roles (binary predicates): `trainedBy`, `hasParameter`, `outputs`
- Constructors: $\sqcap$ (intersection), $\sqcup$ (union), $\lnot$ (complement), $\forall R.C$ (all $R$-successors are in $C$), $\exists R.C$ (some $R$-successor is in $C$)
DLs underpin the Web Ontology Language (OWL), used in knowledge graphs, biomedical ontologies, and semantic web. The reasoning complexity depends on which constructors are included - a careful balance between expressiveness and decidability.
9.7 LLM Reasoning as Approximate Logic
When an LLM performs chain-of-thought (CoT) reasoning, it is (approximately) performing logical inference:
| Logical step | LLM behaviour | Fidelity |
|---|---|---|
| Modus ponens ($p,\ p \to q \vdash q$) | "Since $p$, and $p$ implies $q$, therefore $q$" | High for simple cases |
| Universal instantiation ($\forall x\, P(x) \vdash P(a)$) | "Since all X are Y, this X is Y" | Moderate |
| Contradiction detection | "But this contradicts..." | Fragile |
| Multi-step chains | Chaining 5+ inference steps | Degrades rapidly |
Open research questions:
- Can LLMs perform reliable deductive reasoning, or only pattern-matched approximation?
- Can we formally verify LLM reasoning chains against logical rules?
- Can symbolic logic modules be integrated with neural networks (neuro-symbolic AI)?
- How does the "chain-of-thought" prompt format affect logical correctness?
Current evidence suggests LLMs are surprisingly good at local logical steps but struggle with long chains and quantifier reasoning (especially nested patterns). This is precisely the gap that formal methods aim to fill.
10. Cardinality and Infinite Sets
Cardinality answers the question: how big is a set? For finite sets the answer is obvious - just count. For infinite sets, the answer is one of the most profound discoveries in mathematics: not all infinities are the same size.
10.1 Finite Cardinality
If $A$ is finite, $\lvert A \rvert = n$ means there exists a bijection $\{1, \dots, n\} \to A$. Counting is bijection.
Key counting facts for AI:
| Formula | Name | AI Example |
|---|---|---|
| $\lvert A \cup B \rvert = \lvert A \rvert + \lvert B \rvert - \lvert A \cap B \rvert$ | Inclusion-exclusion | Deduplicating token sets |
| $\lvert A \times B \rvert = \lvert A \rvert \cdot \lvert B \rvert$ | Product rule | Size of joint state space |
| $\lvert \mathcal{P}(A) \rvert = 2^{\lvert A \rvert}$ | Power set | Number of feature subsets |
| $\lvert B^A \rvert = \lvert B \rvert^{\lvert A \rvert}$ | Exponential | Number of functions from $A$ to $B$ |
| $n!$ | Factorial | Permutation count |
| $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$ | Binomial | Ways to choose $k$ items from $n$ |
10.2 Comparing Infinite Sets - Bijections
Definition: Two sets $A$ and $B$ have the same cardinality ($\lvert A \rvert = \lvert B \rvert$) if and only if there exists a bijection $f: A \to B$.
This is the only sensible definition for infinite sets - we cannot "count" them, but we can pair their elements one-to-one.
Example - $\lvert \mathbb{Z} \rvert = \lvert \mathbb{N} \rvert$:
This seems wrong - $\mathbb{Z}$ "has twice as many elements." But pair them: $0 \leftrightarrow 0$, $1 \leftrightarrow -1$, $2 \leftrightarrow 1$, $3 \leftrightarrow -2$, $4 \leftrightarrow 2$, ...
Formally: $f(n) = \begin{cases} n/2 & \text{if } n \text{ is even} \\ -(n+1)/2 & \text{if } n \text{ is odd} \end{cases}$
This is a bijection, so $\lvert \mathbb{Z} \rvert = \lvert \mathbb{N} \rvert$. Infinity doesn't work like finite numbers.
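The pairing of $\mathbb{N}$ with $\mathbb{Z}$ is easy to check computationally on a finite prefix (a sanity check, not a proof):

```python
def f(n):
    """Bijection N -> Z: even n -> non-negative integers, odd n -> negatives."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

print([f(n) for n in range(9)])   # [0, -1, 1, -2, 2, -3, 3, -4, 4]

# Injective on a large prefix, and the first 2001 naturals hit exactly [-1000, 1000]:
assert len({f(n) for n in range(1000)}) == 1000
assert {f(n) for n in range(2001)} == set(range(-1000, 1001))
```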
10.3 Countable Sets
A set $A$ is countably infinite if $\lvert A \rvert = \lvert \mathbb{N} \rvert$ - i.e., there exists a bijection $\mathbb{N} \to A$. A set is countable if it is finite or countably infinite.
Equivalently, $A$ is countable iff its elements can be listed as a sequence $a_0, a_1, a_2, \dots$ (possibly terminating).
Countable sets include:
| Set | Why countable |
|---|---|
| $\mathbb{N}$ | By definition |
| $\mathbb{Z}$ | Bijection shown above |
| $\mathbb{Q}$ | Cantor's zig-zag (diagonal enumeration of a table) |
| $\mathbb{N} \times \mathbb{N}$ | Cantor pairing function: $\pi(i, j) = \frac{(i+j)(i+j+1)}{2} + j$ |
| All finite strings over a finite alphabet | $\Sigma^* = \bigcup_{n \ge 0} \Sigma^n$, countable union of finite sets |
| All programs (source code) | Programs are finite strings |
| All rational polynomials | Countable coefficients, finite degree |
| Algebraic numbers | Roots of polynomials with integer coefficients |
Key theorem: A countable union of countable sets is countable: if each $A_n$ is countable, then $\bigcup_{n \in \mathbb{N}} A_n$ is countable.
AI consequence: The set of all possible tokenised prompts (finite strings over a finite vocabulary $V$) is countable. There are only countably many ($\aleph_0$) possible inputs to an LLM.
10.4 Uncountable Sets - Cantor's Diagonal Argument
Theorem (Cantor, 1891). $\mathbb{R}$ is uncountable - there is no bijection $\mathbb{N} \to \mathbb{R}$.
Proof (diagonal argument). We show that even the interval $(0, 1)$ is uncountable. Assume, for contradiction, that $(0, 1)$ is countable. Then we can list all real numbers in $(0, 1)$: $r_1 = 0.d_{11}d_{12}d_{13}\ldots$, $r_2 = 0.d_{21}d_{22}d_{23}\ldots$, $r_3 = 0.d_{31}d_{32}d_{33}\ldots$, and so on.
Construct $x = 0.c_1 c_2 c_3 \ldots$ where $c_i$ is any digit $\ne d_{ii}$ (and $c_i \notin \{0, 9\}$ to avoid alternative decimal representations).
Then $x \in (0, 1)$ but $x \ne r_i$ for every $i$ (they differ at the $i$-th digit). This contradicts the assumption that all reals in $(0, 1)$ were listed.
The symbol $\aleph_0$ ("aleph-null") denotes the first infinite cardinal $\lvert \mathbb{N} \rvert$. The set $\mathbb{R}$ has cardinality $2^{\aleph_0}$ (the cardinality of the continuum), which is strictly larger.
10.5 Cantor's Theorem (Generalised)
Theorem. For any set $A$: $\lvert A \rvert < \lvert \mathcal{P}(A) \rvert$.
No set is as large as its power set. This holds for every set, finite or infinite.
Proof sketch. The map $x \mapsto \{x\}$ injects $A$ into $\mathcal{P}(A)$, so $\lvert A \rvert \le \lvert \mathcal{P}(A) \rvert$. To show strict inequality, suppose $f: A \to \mathcal{P}(A)$ is a bijection. Let $D = \{x \in A : x \notin f(x)\}$. Then $D \subseteq A$, so $D = f(d)$ for some $d \in A$ (by surjectivity). Is $d \in D$?
- If $d \in D$, then $d \notin f(d) = D$ - contradiction.
- If $d \notin D$, then $d \in f(d) = D$ - contradiction.
Either way, contradiction. So no such bijection exists.
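For a small finite set, the theorem can be confirmed by exhausting every map $f: A \to \mathcal{P}(A)$ - feasible only because there are just $8^3 = 512$ of them (a brute-force sketch):

```python
from itertools import product, combinations

def powerset(A):
    A = list(A)
    return [frozenset(c) for r in range(len(A) + 1)
            for c in combinations(A, r)]

# Check that NO map f: A -> P(A) is surjective when |A| = 3.
A = {0, 1, 2}
P = powerset(A)                          # 8 subsets
surjection_found = False
for values in product(P, repeat=len(A)): # all 8^3 = 512 maps
    f = dict(zip(sorted(A), values))
    if set(f.values()) >= set(P):        # does f hit every subset?
        surjection_found = True
        break
assert not surjection_found              # |A| = 3 < 8 = |P(A)|
print("no surjection from A onto its power set")
```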
Consequence - the infinite tower: $\lvert \mathbb{N} \rvert < \lvert \mathcal{P}(\mathbb{N}) \rvert < \lvert \mathcal{P}(\mathcal{P}(\mathbb{N})) \rvert < \cdots$
There is no largest infinity. The hierarchy of infinities is itself infinite (and larger than any infinity in the hierarchy).
10.6 The Continuum Hypothesis
Claim (Cantor, 1878): There is no set $S$ with $\aleph_0 < |S| < 2^{\aleph_0}$.
In other words, there is no "intermediate" infinity between the countable and the continuum.
Resolution (Gödel 1940, Cohen 1963): The Continuum Hypothesis is independent of ZFC - it can neither be proved nor disproved from the standard axioms. Both ZFC + CH and ZFC + $\neg$CH are consistent (if ZFC is).
This is one of the most surprising results in mathematics: there are meaningful mathematical questions that our axioms simply cannot answer.
10.7 Cardinality and AI
| Object | Cardinality | Consequence |
|---|---|---|
| Finite vocabulary $V$ | Finite (e.g., 32,000) | Discrete, tractable |
| All prompts $V^*$ | $\aleph_0$ (countable) | Enumerable in principle |
| All real-valued weight vectors $\mathbb{R}^n$ | $2^{\aleph_0}$ | Uncountable - can't enumerate models |
| All functions $\mathbb{R}^n \to \mathbb{R}$ | $2^{2^{\aleph_0}}$ | Vastly larger than the set of computable functions |
| All computable functions | $\aleph_0$ | Programs are countable strings |
The gap: There are uncountably many functions but only countably many programs (and neural network architectures, at any fixed precision). Most functions are not computable - and most mathematical objects are not representable by any model. This is a fundamental limitation.
11. Logic and Set Theory in Machine Learning
This section shows how the abstract tools of sections 2-10 appear concretely in modern ML.
11.1 Formal Languages and Automata
Formal language theory is set theory applied to strings:
- An alphabet $\Sigma$ is a finite set of symbols
- A string over $\Sigma$ is a finite sequence of symbols from $\Sigma$
- A language $L$ is a set of strings: $L \subseteq \Sigma^*$
Connection to AI:
| Concept | ML Instantiation |
|---|---|
| Alphabet | Vocabulary (set of tokens) |
| String | Token sequence (prompt or completion) |
| Language | Set of valid outputs |
| Grammar | Rules defining valid outputs (JSON schema, Python syntax) |
| Automaton (DFA/NFA) | Constrained decoding state machine |
| Regular language | Languages recognisable by finite automata |
| Context-free language | Languages parseable by pushdown automata |
Structured generation constrains an LLM to produce only strings from a target language $L$. At each decoding step $t$, the valid next tokens form a set $V_t \subseteq V$, computed by the automaton/parser. Tokens outside $V_t$ are masked (logits set to $-\infty$). This is Boolean logic applied to decoding.
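A minimal sketch of this masking step, assuming a toy two-state DFA over an illustrative three-token vocabulary (none of these names come from a real library): the automaton's current state determines the allowed token set, and all other logits are set to $-\infty$.

```python
import math

VOCAB = ["a", "b", "<eos>"]

# Toy DFA accepting strings of the form (ab)*: state 0 expects "a",
# state 1 expects "b"; "<eos>" is only allowed in accepting states.
TRANSITIONS = {(0, "a"): 1, (1, "b"): 0}
ACCEPTING = {0}

def valid_next(state: int) -> set[str]:
    """The set V_t of tokens the automaton allows from this state."""
    allowed = {tok for (s, tok) in TRANSITIONS if s == state}
    if state in ACCEPTING:
        allowed.add("<eos>")
    return allowed

def mask_logits(logits: dict[str, float], state: int) -> dict[str, float]:
    """Boolean masking: tokens outside V_t get logit -inf."""
    allowed = valid_next(state)
    return {t: (v if t in allowed else -math.inf) for t, v in logits.items()}

logits = {"a": 2.0, "b": 1.5, "<eos>": 0.5}
masked = mask_logits(logits, 0)
print(masked)  # only "a" and "<eos>" keep finite logits
```

After the mask, softmax assigns zero probability to every token outside $V_t$, so every completed sequence is guaranteed to lie in $L$.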
11.2 Sets in Probability
The entire framework of probability theory is built on set theory (Kolmogorov axioms):
| Probability concept | Set-theoretic object |
|---|---|
| Sample space $\Omega$ | Set of all possible outcomes |
| Event $A$ | Subset of $\Omega$ ($A \subseteq \Omega$) |
| Probability of $A$ or $B$ | Union: $P(A \cup B)$ |
| Probability of $A$ and $B$ | Intersection: $P(A \cap B)$ |
| Probability of not $A$ | Complement: $P(A^c) = 1 - P(A)$ |
| $\sigma$-algebra | Collection of measurable subsets of $\Omega$ |
| Independence: $P(A \cap B) = P(A)\,P(B)$ | Multiplicativity under intersection |
Measure theory (the rigorous foundation of probability) relies heavily on ZFC: $\sigma$-algebras are sets of sets, measures are set functions, and the Axiom of Choice is needed for results like the existence of the Vitali set (a non-measurable set).
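On a finite sample space, events literally are subsets and the table's identities can be checked with Python's set operators. A small sketch using a fair six-sided die and exact rational arithmetic:

```python
from fractions import Fraction

Omega = set(range(1, 7))                     # sample space: die faces 1..6
P = lambda E: Fraction(len(E), len(Omega))   # uniform probability measure

A = {2, 4, 6}    # event "even"
B = {4, 5, 6}    # event "at least 4"

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
assert P(A | B) == P(A) + P(B) - P(A & B)
# Complement: P(not A) = 1 - P(A)
assert P(Omega - A) == 1 - P(A)
print(P(A | B))  # 2/3
```

The set operators `|`, `&`, and `-` are exactly the union, intersection, and complement rows of the table above.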
11.3 Sets in Linear Algebra
| Linear algebra concept | Set structure |
|---|---|
| Vector space $V$ | Set with addition and scalar multiplication |
| Subspace | Subset that is itself a vector space |
| Span | $\operatorname{span}(S) = \{c_1 v_1 + \cdots + c_k v_k : c_i \in \mathbb{R},\ v_i \in S\}$ - all linear combinations of $S$, a set |
| Basis | Linearly independent spanning set |
| Eigenspace | $E_\lambda = \{v : Av = \lambda v\}$ - defined by set-builder notation |
| Column space | $\operatorname{col}(A) = \{Ax : x \in \mathbb{R}^n\}$ - image of $A$ (a set) |
| Null space | $\operatorname{null}(A) = \{x : Ax = 0\}$ - kernel of $A$ (a set) |
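Membership in these sets can be tested directly from the set-builder definitions. A sketch in plain Python (no libraries), with an illustrative rank-1 integer matrix so the null space is nontrivial:

```python
A = [[1, 2], [2, 4]]   # rank-1: second row is twice the first

def matvec(A, x):
    """Compute Ax for a matrix given as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def in_null_space(A, x):
    """x is in null(A) iff Ax = 0 - a direct membership test."""
    return all(v == 0 for v in matvec(A, x))

assert in_null_space(A, [2, -1])     # [2, -1] satisfies Ax = 0
assert not in_null_space(A, [1, 0])  # A @ [1, 0] = [1, 2] != 0
print(matvec(A, [2, -1]))  # [0, 0]
```

The predicate `in_null_space` is literally the defining condition $Ax = 0$ from the table; the null space is the set of all `x` for which it returns `True`.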
11.4 Logic in Training
Loss functions encode logical conditions:
| Condition | Logic | Loss |
|---|---|---|
| Output $\approx$ target | Equality: $\hat{y} = y$ | MSE: $\|\hat{y} - y\|^2$ |
| Output matches distribution | $\hat{p} \approx p$ | KL divergence, cross-entropy |
| Margin condition | $y f(x) \geq 1$ | Hinge loss: $\max(0,\ 1 - y f(x))$ |
| Constraint satisfaction | $g(\theta) \leq 0$ | Lagrangian: $L(\theta) + \lambda g(\theta)$ |
Training as logic: gradient descent searches for a $\theta^*$ with $L(\theta^*) \leq L(\theta)$ for all nearby $\theta$ - an existential claim: $\exists \theta^* \, \forall \theta : L(\theta^*) \leq L(\theta)$.
Early stopping as logic: "Stop when the validation loss has not decreased for $k$ epochs" is a quantified condition: stop at epoch $t$ if $\forall i \in \{t-k+1, \ldots, t\} : L_{\text{val}}(i) \geq \min_{j \leq t-k} L_{\text{val}}(j)$.
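The early-stopping condition translates almost word for word into code: the universal quantifier becomes `all(...)`. A minimal sketch, assuming the loss history is a plain list indexed by epoch:

```python
def should_stop(val_losses: list[float], k: int) -> bool:
    """Stop when none of the last k epochs beat the best loss seen before them.

    Implements: forall i in the last k epochs, L_val(i) >= min of earlier losses.
    """
    if len(val_losses) <= k:
        return False                     # not enough history yet
    best_before = min(val_losses[:-k])   # best loss before the window
    return all(loss >= best_before for loss in val_losses[-k:])

losses = [1.0, 0.8, 0.6, 0.61, 0.62, 0.63]
print(should_stop(losses, 3))  # True: the last 3 epochs never beat 0.6
print(should_stop(losses, 5))  # False: 0.6 itself lies inside the window
```

Note how `all(...)` is $\forall$ and `min(...)` plays the role of the inner $\min_{j \leq t-k}$; negating the rule ("keep training while...") would turn `all` into `any`, i.e. $\forall$ into $\exists$.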
11.5 Sets in Model Architecture
Transformer attention as set operations:
The key insight of attention is that it operates on a set of key-value pairs $\{(k_1, v_1), \ldots, (k_n, v_n)\}$ (modulo positional encoding). Self-attention is permutation-equivariant: it treats the input as a set, not a sequence.
- Query: "What am I looking for?"
- Key set: - the set of keys to match against
- Value set: - the set of values to aggregate
- Attention mask: - the set of allowed (query, key) pairs
Combining masks:
- Causal AND local: $M = M_{\text{causal}} \cap M_{\text{local}}$ - intersection (Boolean AND)
- Multi-head union: $\bigcup_h M_h$ - each head sees a different subset of positions
- Padding mask: $M_{\text{pad}} = \{(i, j) : \text{position } j \text{ is not padding}\}$
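Representing masks explicitly as sets of (query, key) index pairs makes the combination rules above literal set operations. A small sketch for sequence length 6 and an illustrative local window of 2:

```python
n = 6
window = 2

# Masks as sets of allowed (query index i, key index j) pairs.
causal = {(i, j) for i in range(n) for j in range(n) if j <= i}
local  = {(i, j) for i in range(n) for j in range(n) if abs(i - j) <= window}

combined = causal & local        # causal AND local: set intersection

assert (5, 4) in combined        # recent past: visible
assert (5, 0) not in combined    # too far back: cut by the local window
assert (0, 1) not in combined    # future: cut by causality
print(len(causal), len(local), len(combined))  # 21 24 15
```

In a real implementation these sets become Boolean tensors and `&` becomes elementwise AND, but the set semantics is identical.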
11.6 Formal Verification of ML Systems
Logic provides tools to prove properties of ML systems rather than merely test them:
| Property | Formal statement | Method |
|---|---|---|
| Robustness | $\forall x' : \|x' - x\| \leq \epsilon \Rightarrow f(x') = f(x)$ | SMT solving, MILP |
| Fairness | e.g., demographic parity: $P(\hat{y} = 1 \mid A = a) = P(\hat{y} = 1 \mid A = b)$ | Statistical testing + formal constraints |
| Safety | The system never reaches a bad state set $B$ | Reachability analysis |
| Monotonicity | $x \leq x' \Rightarrow f(x) \leq f(x')$ | Lattice-based verification |
Certified defences use logical proofs (often via abstract interpretation or interval arithmetic) to guarantee that no adversarial perturbation within an $\epsilon$-ball can change the classification. This is the gold standard for ML safety - moving from empirical robustness to provable robustness.
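Interval arithmetic makes the universal claim checkable without enumerating the (uncountable!) $\epsilon$-ball. This sketch propagates input intervals through a single tiny linear layer with illustrative weights; if the output lower bound stays above the decision threshold, every point in the ball is certified.

```python
def interval_linear(lo, hi, W, b):
    """Sound output intervals for y = Wx + b given x in [lo, hi] (per coordinate).

    Each output bound uses the sign of each weight: positive weights take the
    matching input bound, negative weights take the opposite one.
    """
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        l = bi + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bi + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

x, eps = [1.0, -0.5], 0.1
lo = [xi - eps for xi in x]      # the eps-ball as per-coordinate intervals
hi = [xi + eps for xi in x]

W, b = [[2.0, -1.0]], [0.5]      # illustrative classifier weights
out_lo, out_hi = interval_linear(lo, hi, W, b)
print(out_lo, out_hi)
# out_lo[0] > 0: every x' in the ball gets a positive score - certified.
assert out_lo[0] > 0
```

The bounds are conservative (they may be loose), but soundness is what turns a single interval computation into a proof of the $\forall x'$ statement in the robustness row above.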
12. Advanced Topics
12.1 Gödel's Incompleteness Theorems
The most profound results in mathematical logic, with deep implications for AI.
First Incompleteness Theorem (Gödel, 1931): Any consistent formal system $S$ capable of expressing basic arithmetic contains a statement $G$ such that:
- $G$ is true (in the standard model of arithmetic)
- $G$ is not provable in $S$
- $\neg G$ is not provable in $S$
In other words, there are true mathematical statements that cannot be proved. No axiom system - ZFC, or any extension - can be both consistent and complete for arithmetic.
Gödel's self-referential trick: Gödel constructs $G$ to say, essentially, "I am not provable in system $S$." If $G$ were provable, it would be false (since it says it's unprovable), so $S$ would prove a falsehood, contradicting consistency. If $\neg G$ were provable, then $G$ would be provable (since $\neg G$ asserts exactly that) - contradiction again. So neither $G$ nor $\neg G$ is provable.
Second Incompleteness Theorem: No consistent system capable of expressing arithmetic can prove its own consistency.
AI implications:
- No AI system (which is a formal system) can verify all true mathematical statements - there will always be truths beyond its reach
- The claim "this AI is guaranteed safe" cannot be proved within the AI's own framework (a version of the second theorem)
- Gödel's theorems do NOT mean mathematics is broken - they mean it is inexhaustible
12.2 The Halting Problem
Theorem (Turing, 1936). There is no algorithm that can determine, for every program $P$ and input $x$, whether $P$ halts on $x$.
Proof (by contradiction). Suppose $H(P, x)$ is a program that returns "halts" or "loops" for any pair $(P, x)$. Define:

$D(P)$: if $H(P, P) = $ "halts", loop forever; otherwise, halt.

What does $D(D)$ do?
- If $H(D, D) = $ "halts" → $D(D)$ loops forever. But $H$ said it halts. Contradiction.
- If $H(D, D) = $ "loops" → $D(D)$ halts. But $H$ said it loops. Contradiction.
So $H$ cannot exist.
Connections to AI:
- This is the computational analogue of Russell's Paradox - self-reference plus negation creates paradox
- You cannot write a "universal bug detector" that catches all infinite loops
- Theorem provers and code verifiers must be incomplete (they may time out or return "unknown")
- LLM code generation cannot guarantee termination of generated code
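Since a complete halting checker cannot exist, practical verifiers settle for an incomplete one: run with a step budget and answer "halts" or "unknown" - never "loops". A sketch for toy programs modelled as step functions (all names here are illustrative):

```python
def bounded_halts(step, state, budget: int) -> str:
    """Incomplete halting check: step(state) yields the next state, or None
    when the program halts. Returns "halts" or "unknown" - never "loops",
    since deciding that in general would solve the halting problem."""
    for _ in range(budget):
        state = step(state)
        if state is None:
            return "halts"
    return "unknown"

countdown = lambda n: None if n == 0 else n - 1   # halts after n steps
forever   = lambda n: n + 1                       # never halts

print(bounded_halts(countdown, 10, budget=100))   # halts
print(bounded_halts(forever, 0, budget=100))      # unknown
```

The "unknown" branch is exactly the incompleteness the bullet points describe: a timeout can never distinguish "loops forever" from "halts eventually, after the budget".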
12.3 Zorn's Lemma
Zorn's Lemma (equivalent to the Axiom of Choice and the Well-Ordering Theorem):
If every chain in a partially ordered set $P$ has an upper bound in $P$, then $P$ has a maximal element.
A chain is a totally ordered subset; a maximal element $m$ is one with no element strictly above it (there is no $x \in P$ with $m < x$).
Applications in mathematics:
- Every vector space has a basis (even infinite-dimensional ones)
- Every ideal in a ring is contained in a maximal ideal
- Every filter can be extended to an ultrafilter
AI relevance: Zorn's Lemma is used in the theoretical foundations of kernel methods (Hilbert space bases) and functional analysis (Hahn-Banach theorem), which underlies support vector machines and reproducing kernel Hilbert spaces.
12.4 Fuzzy Logic
Classical logic is crisp: $v(p) \in \{0, 1\}$. Fuzzy logic (Zadeh, 1965) allows truth values in $[0, 1]$:

$\mu_A(x) \in [0, 1]$,

where $\mu_A(x)$ is the degree of membership of $x$ in fuzzy set $A$.
Fuzzy operations:
| Classical | Fuzzy |
|---|---|
| $\neg p$ | $1 - \mu_A(x)$ |
| $p \land q$ | $\min(\mu_A(x), \mu_B(x))$ |
| $p \lor q$ | $\max(\mu_A(x), \mu_B(x))$ |
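Zadeh's connectives are one-liners, and they reduce to classical Boolean logic when memberships are exactly 0 or 1:

```python
f_not = lambda a: 1 - a          # fuzzy NOT
f_and = lambda a, b: min(a, b)   # fuzzy AND
f_or  = lambda a, b: max(a, b)   # fuzzy OR

print(f_and(0.7, 0.4))  # 0.4
print(f_or(0.7, 0.4))   # 0.7
print(f_not(0.25))      # 0.75

# Boolean special case: on {0, 1} the fuzzy operators agree with and/or/not.
assert f_and(1, 0) == (1 and 0) == 0
assert f_or(1, 0) == (1 or 0) == 1
```

Note that min/max is one choice of fuzzy operators (Zadeh's); other t-norms, such as the product $a \cdot b$ for AND, also reduce correctly on $\{0, 1\}$.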
AI connections:
- Softmax outputs are like fuzzy membership values - a token has degree-of-membership in each class
- Attention weights are fuzzy: not binary "attend or not" but "how much to attend"
- Fuzzy control systems (early AI) used fuzzy rules: "IF temperature IS high THEN fan IS fast"
- Modern neural networks have largely replaced fuzzy systems, but the conceptual framework persists in the soft/differentiable operations that make gradient descent possible
12.5 Second-Order Logic (Brief)
First-order logic quantifies over objects: $\forall x \, \exists y \, P(x, y)$.
Second-order logic quantifies over properties (sets, predicates, functions): $\forall P \, \exists x \, P(x)$.
Example - Completeness of $\mathbb{R}$:
- First-order: CANNOT express "every bounded set has a least upper bound" (needs quantification over sets)
- Second-order: $\forall S \subseteq \mathbb{R} : (S \neq \emptyset \land S \text{ bounded above}) \Rightarrow \sup S \text{ exists}$
Trade-off:
- First-order logic: semi-decidable (complete proof systems exist), but less expressive
- Second-order logic: more expressive, but no complete proof system exists (incompleteness is "worse")
In AI, first-order logic is preferred because it has better computational properties. Description logics, Datalog, and Prolog all work within FOL or its fragments.