"The theory of maxima and minima is full of dangerous pitfalls for the unwary."
- G. H. Hardy
Overview
Every machine learning model is the solution to an optimisation problem. Training a neural network, fitting a regression, finding principal components, building an SVM - all reduce to: find the parameters that minimise (or maximise) some objective function, possibly subject to constraints. Understanding when a point is optimal - the optimality conditions - is therefore prerequisite knowledge for understanding why any learning algorithm works.
This section develops optimality theory from first principles. We prove the first- and second-order conditions for unconstrained problems, derive Lagrange multipliers for equality constraints, and establish the full KKT framework for inequality-constrained optimisation. Convexity theory shows when local optimality implies global optimality. Duality reveals the deep connection between primal and dual problems - the machinery behind SVMs and maximum entropy models. We close with the non-convex landscape of deep learning and modern constrained formulations (SAM, attention as optimal transport).
Scope of this section: Critical points, optimality conditions, Lagrange multipliers, KKT theory, convexity, duality, and ML applications. Gradient descent algorithms (SGD, Adam, momentum) are covered in the Optimization chapter (08). Jacobian and Hessian computation is in 02. The chain rule and backpropagation are in 03.
Prerequisites
- Gradient, partial derivatives - 01 Partial Derivatives and Gradients
- Hessian matrix, positive definiteness - 02 Jacobians and Hessians
- Eigenvalues, positive definite matrices - 07 Positive Definite Matrices
- Single-variable calculus - 04 Calculus Fundamentals
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Critical point classification, Lagrange multipliers, KKT verification, SVM dual derivation, convexity experiments |
| exercises.ipynb | 10 graded exercises: optimality conditions through SAM and attention as constrained optimisation |
Learning Objectives
After completing this section, you will:
- Prove the first- and second-order necessary and sufficient conditions for local minima
- Classify critical points using the Hessian spectrum and understand saddle point dominance in deep networks
- Derive and apply Lagrange multipliers for equality-constrained problems
- State all four KKT conditions, prove their necessity, and apply them to ML problems
- Understand strong convexity, smoothness, and their role in convergence guarantees
- Derive the SVM dual from the primal and explain why support vectors satisfy complementary slackness
- Compute the Lagrangian dual function and apply Slater's condition for strong duality
- Connect maximum entropy, softmax, and attention to constrained optimisation via Lagrange multipliers
- Interpret the sharpness-aware minimisation (SAM) objective as a min-max constrained problem
Table of Contents
- 1. Intuition
- 2. First-Order Necessary Conditions
- 3. Second-Order Conditions
- 4. Convex Analysis
- 5. Lagrange Multipliers - Equality Constraints
- 6. KKT Conditions - Inequality Constraints
- 7. Duality Theory
- 8. ML Applications in Depth
- 9. Non-Convex Landscape in Deep Learning
- 10. Common Mistakes
- 11. Exercises
- 12. Why This Matters for AI (2026 Perspective)
- Conceptual Bridge
1. Intuition
1.1 The Optimization Landscape
Imagine standing in a hilly landscape and trying to find the lowest valley. The gradient tells you the direction of steepest ascent at your current position - to descend, move against the gradient. But this strategy can strand you in a local valley that is not the deepest one, or trap you at a mountain pass (saddle point) where the gradient is zero but you are not at a minimum.
[Figure: the optimisation landscape in 1D and 2D - a 1D loss curve with a local minimum and a global minimum, and a 2D loss surface (contour view) showing a saddle point and a global minimum.]
Key players:

| Point type | Characterisation |
|---|---|
| Local minimum | $\nabla f = 0$, $H \succ 0$ (all eigenvalues > 0) |
| Local maximum | $\nabla f = 0$, $H \prec 0$ (all eigenvalues < 0) |
| Saddle point | $\nabla f = 0$, $H$ indefinite (mixed signs) |
| Regular point | $\nabla f \neq 0$ (not a critical point) |
The core difficulty: the gradient condition is necessary but not sufficient for a minimum. A zero gradient could mark a minimum, a maximum, or a saddle point. Second-order conditions (the Hessian) are needed to distinguish between them.
Constraints add further complexity. If the solution must lie on a curve or within a region, the unconstrained minimum may be infeasible, and the optimum occurs at a very different location - possibly on the boundary of the feasible region.
1.2 Why ML Needs Optimality Theory
Every core ML algorithm is secretly an optimisation problem, and the optimality conditions determine its solution structure:
| ML Algorithm | Optimisation Problem | Optimality Condition Used |
|---|---|---|
| Linear regression | $\min_w \lVert Xw - y\rVert^2$ | $\nabla f = 0$ -> normal equations |
| Ridge regression | $\min_w \lVert Xw - y\rVert^2 + \lambda\lVert w\rVert^2$ | Lagrangian of constrained form |
| Lasso | $\min_w \lVert Xw - y\rVert^2 + \lambda\lVert w\rVert_1$ | Subdifferential conditions |
| SVM | $\min \tfrac12\lVert w\rVert^2$ s.t. margin constraints | KKT -> support vectors |
| PCA | $\max_w w^\top \Sigma w$ s.t. $\lVert w\rVert = 1$ | Lagrange -> eigenvalue problem |
| Logistic regression | $\min_w \sum_i \log(1 + e^{-y_i w^\top x_i})$ | Convex -> unique global minimum |
| Neural network | $\min_\theta L(\theta)$ (non-convex) | Saddle point dominance; NTK |
| Attention (softmax) | $\max_p H(p)$ s.t. linear constraints | Maximum entropy via Lagrange |
| SAM training | $\min_\theta \max_{\lVert\epsilon\rVert \leq \rho} L(\theta + \epsilon)$ | KKT of inner max |
For AI: Understanding that the SVM dual problem depends only on inner products (from the KKT conditions) is what enables the kernel trick - the foundation of kernel methods. Understanding that softmax is the solution to a maximum entropy problem under linear constraints connects attention to thermodynamics and information theory.
1.3 Historical Context
| Year | Person | Contribution |
|---|---|---|
| 1788 | Lagrange | Mécanique Analytique - method of multipliers for equality constraints |
| 1847 | Cauchy | Steepest descent algorithm |
| 1939 | Karush | Master's thesis: inequality constraints with multipliers (KKT conditions - unpublished) |
| 1951 | Kuhn & Tucker | Independent rediscovery and publication of KKT conditions |
| 1952 | Arrow, Hurwicz, Uzawa | Saddle point theorem for convex optimisation |
| 1970 | Rockafellar | Convex Analysis - definitive treatment of duality and subdifferentials |
| 1992 | Boser, Guyon, Vapnik | SVM training via the dual, enabling the kernel trick |
| 2004 | Boyd & Vandenberghe | Convex Optimization (textbook; full text freely available online) |
| 2014 | Dauphin et al. | Saddle point dominance in deep networks |
| 2020 | Papyan et al. | Neural collapse: KKT characterisation of terminal training phase |
| 2021 | Foret et al. | SAM: sharpness-aware minimisation as constrained min-max |
1.4 Unconstrained vs Constrained
There are three fundamental problem classes:
THREE PROBLEM CLASSES

UNCONSTRAINED
  min f(x) over x in R^n
  Solution: ∇f = 0; classify with second-order conditions

EQUALITY CONSTRAINED
  min f(x) s.t. g_i(x) = 0, i = 1,...,m
  Solution: Lagrange multipliers: ∇f + Σ λ_i ∇g_i = 0, g_i(x) = 0

INEQUALITY CONSTRAINED
  min f(x) s.t. g_i(x) = 0 (i = 1,...,m) and h_j(x) <= 0 (j = 1,...,p)
  Solution: KKT conditions:
  ∇f + Σ λ_i ∇g_i + Σ μ_j ∇h_j = 0, g_i = 0, h_j <= 0, μ_j >= 0, μ_j h_j = 0 for all j

KEY INSIGHT: Lagrange's equality-constrained conditions are the special case of KKT
with no inequality constraints (drop the μ terms; λ stays free in sign).
The feasible set $\mathcal{F} = \{x : g_i(x) = 0 \ \forall i,\ h_j(x) \leq 0 \ \forall j\}$ is the set of points satisfying all constraints. The optimisation problem asks for the minimum of $f$ over $\mathcal{F}$.
2. First-Order Necessary Conditions
2.1 The Gradient Condition
Theorem (First-Order Necessary Condition): Let $f: \mathbb{R}^n \to \mathbb{R}$ be differentiable. If $x^*$ is a local minimum of $f$, then $\nabla f(x^*) = 0$.
Proof: Suppose $x^*$ is a local minimum but $\nabla f(x^*) \neq 0$. Define the direction $d = -\nabla f(x^*)$. By the directional derivative formula, $\left.\tfrac{d}{dt} f(x^* + t d)\right|_{t=0} = \nabla f(x^*)^\top d = -\lVert \nabla f(x^*) \rVert^2 < 0$.
By continuity of $\nabla f$, there exists $\bar{t} > 0$ such that $f(x^* + t d) < f(x^*)$ for all $t \in (0, \bar{t}]$. But this contradicts $x^*$ being a local minimum.
Warning - necessity only: The gradient condition is necessary but not sufficient. The function $f(x) = x^3$ has $f'(0) = 0$ at $x = 0$ but no local extremum there (it's an inflection point). The function $f(x, y) = x^2 - y^2$ has $\nabla f(0, 0) = 0$ but a saddle point. Second-order conditions are needed to determine which type of critical point we have.
Critical points (also called stationary points) are all $x$ satisfying $\nabla f(x) = 0$. They are candidates for minima, maxima, or saddle points - the Hessian distinguishes between them.
2.2 Critical Points and Their Classification
In $\mathbb{R}^2$, a function has four types of critical points, determined by the sign of the Hessian determinant and the sign of $f_{xx}$:
SECOND-DERIVATIVE TEST IN R^2
At critical point (nablaf = 0):
det(H) > 0 and f_xx > 0 -> LOCAL MINIMUM
(all eigenvalues of H positive)
det(H) > 0 and f_xx < 0 -> LOCAL MAXIMUM
(all eigenvalues of H negative)
det(H) < 0 -> SADDLE POINT
(H has mixed-sign eigenvalues)
det(H) = 0 -> DEGENERATE (test inconclusive)
(need higher-order analysis)
Geometric picture for each: a minimum is bowl-shaped, a maximum is hill-shaped, a saddle is a mountain pass, and a degenerate critical point sits in a flat region.
Standard examples:
- $f(x, y) = x^2 + y^2$: unique global minimum at the origin; $H = 2I \succ 0$
- $f(x, y) = -(x^2 + y^2)$: unique global maximum at the origin; $H = -2I \prec 0$
- $f(x, y) = x^2 - y^2$: saddle point at the origin; $H = \mathrm{diag}(2, -2)$ indefinite
- $f(x, y) = x^4 + y^4$: minimum at the origin; $\det(H) = 0$ (degenerate - minimum confirmed by inspection)
In higher dimensions ($n > 2$): A critical point is a strict local minimum if $H \succ 0$ (all eigenvalues positive), and a strict local maximum if $H \prec 0$. Any other case (indefinite or semidefinite $H$) requires further analysis.
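To make the classification concrete, here is a minimal numerical sketch (NumPy only; the helper name `classify_critical_point` is ours, not from the companion notebooks) that classifies each of the standard examples above by the eigenvalues of its Hessian at the origin:

```python
import numpy as np

def classify_critical_point(hessian, tol=1e-10):
    """Classify a critical point from the eigenvalues of its Hessian."""
    eigvals = np.linalg.eigvalsh(hessian)
    if np.all(eigvals > tol):
        return "strict local minimum"
    if np.all(eigvals < -tol):
        return "strict local maximum"
    if np.any(eigvals > tol) and np.any(eigvals < -tol):
        return "saddle point"
    return "degenerate (higher-order test needed)"

# Hessians at the origin for the standard examples above
examples = {
    "x^2 + y^2":    np.array([[2.0, 0.0], [0.0, 2.0]]),
    "-(x^2 + y^2)": np.array([[-2.0, 0.0], [0.0, -2.0]]),
    "x^2 - y^2":    np.array([[2.0, 0.0], [0.0, -2.0]]),
    "x^4 + y^4":    np.array([[0.0, 0.0], [0.0, 0.0]]),  # Hessian vanishes at the origin
}
for name, H in examples.items():
    print(f"{name:>14}: {classify_critical_point(H)}")
```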
2.3 Saddle Points in Deep Learning
The classical concern was that neural networks might converge to poor local minima. Empirically, gradient descent finds solutions of similar quality regardless of initialisation for overparameterised networks. This was theoretically explained in a landmark 2014 paper.
Dauphin et al. (2014) - Identifying and attacking the saddle point problem: For a function on $\mathbb{R}^n$ with $n$ large, a random critical point with loss $\epsilon$ above the global minimum is a saddle point with overwhelming probability (under a statistical physics model). The fraction of critical points that are local minima decreases exponentially with both $n$ and $\epsilon$.
SADDLE POINT DOMINANCE
Distribution of critical points at energy level epsilon above global min:
Fraction that are local minima: ~exp(-n * c(epsilon))
For n = 10^9 (GPT-3), c(epsilon) > 0 -> essentially zero local minima
What gradient descent actually encounters:
High-loss region: many saddle points, few local minima
Low-loss region: many saddle points, even fewer local
Near global min: flat regions (many equivalent minima)
SGD noise + momentum helps escape saddles (saddle-free Newton
methods explicitly use the negative curvature direction).
Why does SGD escape saddle points? At a saddle point, the Hessian has at least one negative eigenvalue. The corresponding eigenvector is a descent direction - the function decreases along it. SGD's noise perturbations project onto this direction with nonzero probability, enabling escape. Deterministic gradient descent can be trapped at strict saddle points, but this is a measure-zero set.
For AI: The practical implication is that for overparameterised models (where parameters far outnumber data points), gradient descent on a non-convex loss converges to global or near-global optima. This is one theoretical explanation for why training large language models with SGD/Adam works so well.
2.4 Stationary Points of Common ML Loss Functions
MSE (Linear Regression): $\nabla_w \tfrac12 \lVert Xw - y\rVert^2 = X^\top(Xw - y) = 0 \Rightarrow X^\top X w = X^\top y$.
This is the normal equation. It has a unique solution when $X$ has full column rank. The Hessian is $X^\top X \succeq 0$ - MSE is convex, so the critical point is the global minimum.
Cross-Entropy (Logistic Regression): setting the gradient of $\sum_i \log(1 + e^{-y_i w^\top x_i})$ to zero gives a transcendental equation with no closed-form solution. The Hessian $X^\top S X \succeq 0$ (with $S$ diagonal and positive) confirms convexity. Gradient descent (or Newton's method) converges to the unique global minimum if it exists (it may not if the data is linearly separable).
Cross-Entropy (Softmax / LLM output): Setting $\nabla_z L = p - y_{\text{one-hot}} = 0$ (as derived in 03) gives $p_c = 1$ for the true class and $p_j = 0$ for $j \neq c$. This is a perfectly confident prediction - the loss approaches zero but never reaches it for finite logits.
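A quick numerical check of the MSE case - a minimal sketch on synthetic data (the data and names are illustrative) that solves the normal equations and then verifies the first- and second-order conditions at the solution:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # full column rank with probability 1
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

# Critical point: solve the normal equations X^T X w = X^T y
w_star = np.linalg.solve(X.T @ X, X.T @ y)

# First-order condition: gradient of 0.5*||Xw - y||^2 vanishes at w*
grad = X.T @ (X @ w_star - y)
print("||gradient at w*|| =", np.linalg.norm(grad))    # ~1e-13

# Second-order condition: Hessian X^T X is positive definite (SOSC)
print("smallest Hessian eigenvalue =", np.linalg.eigvalsh(X.T @ X).min())  # > 0
```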
3. Second-Order Conditions
First-order conditions identify candidates; second-order conditions distinguish minima from maxima from saddle points. The Hessian matrix - the matrix of second partial derivatives - encodes all the local curvature information needed for this classification.
3.1 Second-Order Necessary Condition (SONC)
Theorem (SONC). If $x^*$ is a local minimum of $f$ and $f$ is twice continuously differentiable near $x^*$, then:
$\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \succeq 0$ (the Hessian is positive semi-definite).
Proof. Let $d$ be any direction. Since $\nabla f(x^*) = 0$, Taylor expansion gives $f(x^* + t d) = f(x^*) + \tfrac{t^2}{2} d^\top \nabla^2 f(x^*) d + o(t^2)$.
Since $x^*$ is a local minimum, $f(x^* + t d) \geq f(x^*)$ for all small $t$. Thus $\tfrac{t^2}{2} d^\top \nabla^2 f(x^*) d + o(t^2) \geq 0$.
Dividing by $t^2/2$ and letting $t \to 0$:
$d^\top \nabla^2 f(x^*) d \geq 0$ for every $d$, which is exactly $\nabla^2 f(x^*) \succeq 0$.
Note: SONC is necessary but not sufficient. $f(x) = x^4$ at $x = 0$ has $f''(0) = 0 \geq 0$ and is indeed a minimum; $f(x) = x^3$ at the origin also has $f''(0) = 0 \geq 0$, but the origin is not a minimum.
3.2 Second-Order Sufficient Condition (SOSC)
Theorem (SOSC). If $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \succ 0$ (positive definite), then $x^*$ is a strict local minimum.
Proof. Since $\nabla^2 f(x^*) \succ 0$, its smallest eigenvalue satisfies $\lambda_{\min} > 0$. By continuity of the Hessian, there exists $r > 0$ such that $\nabla^2 f(x) \succeq \tfrac{\lambda_{\min}}{2} I$ for all $\lVert x - x^*\rVert \leq r$.
For any $d$ with $\lVert d\rVert = 1$ and small $t > 0$:
$f(x^* + t d) \geq f(x^*) + \tfrac{\lambda_{\min}}{4} t^2 > f(x^*)$ for sufficiently small $t$.
Gap between SONC and SOSC: The boundary case $\nabla^2 f(x^*) \succeq 0$ but not $\succ 0$ (degenerate, semidefinite Hessian) requires higher-order analysis. This is common in deep learning, where flat directions proliferate.
SECOND-ORDER TEST SUMMARY
At a critical point x* (∇f(x*) = 0), with H = ∇²f(x*):
  H ≻ 0            -> strict local minimum (SOSC satisfied)
  H ≺ 0            -> strict local maximum (max version of SOSC)
  H indefinite     -> saddle point (eigenvalues of both signs)
  H ⪰ 0, singular  -> might be a minimum (degenerate); higher-order analysis needed
  H = 0            -> flat critical point; need Taylor terms of order >= 3
For R^2 via the determinant test (when ∇f = 0):
  det(H) > 0, H_11 > 0 -> local minimum
  det(H) > 0, H_11 < 0 -> local maximum
  det(H) < 0 -> saddle point
  det(H) = 0 -> inconclusive
3.3 Indefinite Hessian and Saddle Points
When $\nabla^2 f(x^*)$ has both positive and negative eigenvalues, $x^*$ is a saddle point: a local minimum along some directions, a local maximum along others.
Morse Theory Preview. For a smooth function $f$, a critical point $x^*$ is non-degenerate if $\nabla^2 f(x^*)$ is invertible. The Morse index of $x^*$ is the number of negative eigenvalues of $\nabla^2 f(x^*)$
(the number of descent directions). A non-degenerate minimum has index 0; a saddle has index between $1$ and $n-1$; a maximum has index $n$. Morse theory relates the topology of the sublevel sets to the critical points and their indices.
For deep learning: The loss landscape of an $n$-parameter network at a critical point near a good solution tends to have many positive eigenvalues and few (often negligible) negative ones. The fraction of negative eigenvalues is typically very small at good solutions - confirming that most critical points found in practice are near-minima, not true saddles with large negative Hessian components.
3.4 Global Optimality from Convexity
For convex functions, the local-global distinction disappears entirely:
Theorem. If $f$ is convex and $x^*$ is a local minimum, then $x^*$ is a global minimum.
Proof. Suppose $x^*$ is a local min but not global: there exists $y$ with $f(y) < f(x^*)$. For $\theta \in (0, 1)$, convexity gives $f(\theta y + (1 - \theta) x^*) \leq \theta f(y) + (1 - \theta) f(x^*) < f(x^*)$.
As $\theta \to 0$, the point $\theta y + (1 - \theta) x^*$ approaches $x^*$ arbitrarily closely while having lower function value - contradicting $x^*$ being a local minimum.
Corollary. For differentiable convex $f$: $\nabla f(x^*) = 0$ iff $x^*$ is a global minimum.
This is the fundamental reason convex optimization is "easy" compared to non-convex: any local search algorithm that finds a stationary point has found the global optimum.
3.5 Hessian at ML Optima
Understanding the Hessian at key ML critical points provides geometric insight into learning:
Linear Regression. For $f(w) = \tfrac12 \lVert Xw - y\rVert^2$:
$\nabla^2 f = X^\top X$, independent of $w$. The Hessian is constant - this is a quadratic. Its eigenvalues are the squared singular values of $X$. If $X$ has full column rank, $X^\top X \succ 0$ everywhere, so the unique critical point is a global minimum.
Logistic Regression. For the cross-entropy loss: $\nabla^2 f(w) = X^\top S X$,
where $S = \mathrm{diag}(\sigma_i (1 - \sigma_i))$ and $\sigma_i = \sigma(w^\top x_i)$. Since $\sigma_i(1 - \sigma_i) > 0$, the Hessian is PSD everywhere (and PD if $X$ has full column rank), making logistic regression a convex problem.
Neural Networks. At a minimum $\theta^*$, the Hessian typically has:
- A bulk of near-zero eigenvalues (flat landscape in many directions)
- A few large positive eigenvalues (steep curvature in a small subspace)
- Occasional small negative eigenvalues (slight non-convexity)
The ratio of large-to-small eigenvalues is the condition number - high condition numbers cause slow gradient descent convergence and motivate adaptive methods like Adam.
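The logistic-regression claim above is easy to spot-check numerically. A minimal sketch (synthetic data; `logistic_hessian` is an illustrative helper) verifying that $X^\top S X$ is PSD at arbitrary $w$, not just at the optimum:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))

def logistic_hessian(w, X):
    """Hessian X^T S X of the logistic loss, with S = diag(sigma_i * (1 - sigma_i))."""
    p = 1.0 / (1.0 + np.exp(-X @ w))     # sigma(w^T x_i)
    s = p * (1.0 - p)
    return X.T @ (s[:, None] * X)

# The Hessian is PSD at every w, not only at the optimum - spot-check a few points
for _ in range(5):
    w = rng.normal(size=5)
    lam_min = np.linalg.eigvalsh(logistic_hessian(w, X)).min()
    print(f"smallest Hessian eigenvalue at a random w: {lam_min:.4e}")  # all >= 0
```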
4. Convex Analysis
Convexity is the mathematical property that makes optimization tractable. Understanding convex sets and functions - their definitions, characterisations, and preservation rules - is prerequisite to deploying convex duality and KKT theory.
4.1 Convex Sets
Definition. A set $C \subseteq \mathbb{R}^n$ is convex if for all $x, y \in C$ and $\theta \in [0, 1]$: $\theta x + (1 - \theta) y \in C$.
Geometrically: the line segment between any two points stays inside the set.
Standard examples:
- Hyperplanes and halfspaces
- Balls under any norm
- Polyhedra (intersection of halfspaces)
- Positive semidefinite cone
- The probability simplex
Non-examples:
- The unit sphere (boundary only, not the ball): the segment between two points on the sphere passes through the interior
- The complement of the unit ball, $\{x : \lVert x\rVert \geq 1\}$: the segment between $x$ and $-x$ passes through the origin, which lies outside the set
Convexity-preserving operations:
- Intersection: convex convex
- Affine image: is convex if is convex
- Cartesian product, Minkowski sum
4.2 Convex Functions: Definition and First-Order Characterisation
Definition. A function $f$ on a convex set is convex if: $f(\theta x + (1 - \theta) y) \leq \theta f(x) + (1 - \theta) f(y)$ for all $x, y$ in the domain and $\theta \in [0, 1]$.
Strict convexity: replace $\leq$ with $<$ for $x \neq y$, $\theta \in (0, 1)$.
First-Order Characterisation. For differentiable $f$, $f$ is convex iff its graph lies above all tangent hyperplanes: $f(y) \geq f(x) + \nabla f(x)^\top (y - x)$ for all $x, y$.
This is the inequality that makes $\nabla f(x^*) = 0$ sufficient for global minimality in the convex case: if $\nabla f(x^*) = 0$, then $f(y) \geq f(x^*)$ for all $y$.
Proof sketch. (⇒) Fix $x, y$, write the convexity inequality at $x + \theta(y - x)$, rearrange, divide by $\theta$, and let $\theta \to 0$ to obtain the tangent inequality. (⇐) Apply the tangent inequality at $z = \theta x + (1 - \theta) y$ against each of $x$ and $y$, then combine the two inequalities with weights $\theta$ and $1 - \theta$ to recover the convexity definition.
4.3 Second-Order Characterisation
Theorem. For twice-differentiable $f$: $f$ is convex iff $\nabla^2 f(x) \succeq 0$ for all $x$ in the domain.
Proof. (⇒) If $f$ is convex, the first-order characterisation gives $f(y) \geq f(x) + \nabla f(x)^\top (y - x)$. Expanding $f(y)$ in a Taylor series about $x$ and applying this inequality for $y \to x$ along each direction $d$ yields $d^\top \nabla^2 f(x) d \geq 0$. (⇐) With $\nabla^2 f \succeq 0$ everywhere, integrate along the line from $x$ to $y$ using the second-order Taylor remainder to establish the first-order characterisation.
Examples of convex functions:
- Affine: $a^\top x + b$ (both convex and concave)
- Quadratic: $\tfrac12 x^\top Q x + b^\top x$ when $Q \succeq 0$
- Norms: $\lVert x\rVert_p$ for $p \geq 1$ (triangle inequality = convexity)
- Log-sum-exp: $\log \sum_i e^{x_i}$ (smooth convex approximation to max)
- Negative entropy: $\sum_i p_i \log p_i$ on the simplex
- Cross-entropy loss: $\log(1 + e^{-y\, w^\top x})$, convex in $w$
Non-convex in ML:
- Any deep neural network loss; matrix factorisation objectives such as $\lVert A - UV^\top\rVert_F^2$
4.4 Strongly Convex Functions
Definition. $f$ is $\mu$-strongly convex ($\mu > 0$) if: $f(y) \geq f(x) + \nabla f(x)^\top (y - x) + \tfrac{\mu}{2}\lVert y - x\rVert^2$ for all $x, y$.
Equivalently: $f(x) - \tfrac{\mu}{2}\lVert x\rVert^2$ is convex; or $\nabla^2 f(x) \succeq \mu I$ everywhere.
Strong convexity has powerful consequences:
- Unique minimiser: the quadratic lower bound forces a unique optimal point
- Linear convergence: gradient descent converges geometrically, contracting the error by roughly a factor of $(1 - \mu/L)$ per step, where $L$ is the Lipschitz constant of $\nabla f$
- No flat regions: the quadratic lower bound means the function cannot become arbitrarily flat
For AI: Ridge regression is $\lambda$-strongly convex (the regulariser adds $\lambda I$ to the Hessian). This is why regularisation speeds up convergence and ensures a unique solution - crucial for ill-conditioned problems.
Condition number: For $\mu$-strongly convex, $L$-smooth functions, the condition number $\kappa = L/\mu$ governs convergence. In transformer training, very high condition numbers of the loss landscape motivate adaptive optimisers.
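A small sketch of the effect described above - the ridge term $\lambda I$ lifts the smallest Hessian eigenvalue and shrinks the condition number (synthetic, nearly collinear features; the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# Ill-conditioned design matrix: two nearly collinear columns
X = rng.normal(size=(100, 2))
X[:, 1] = X[:, 0] + 1e-3 * rng.normal(size=100)

for lam in [0.0, 1e-2, 1.0]:
    H = X.T @ X + lam * np.eye(2)          # Hessian of the ridge objective
    eigs = np.linalg.eigvalsh(H)
    kappa = eigs.max() / eigs.min() if eigs.min() > 0 else np.inf
    print(f"lambda={lam:6.2f}  mu={eigs.min():.4e}  kappa={kappa:.2e}")
```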
4.5 Preservation Rules and Calculus of Convex Functions
Convexity is preserved under many operations, making it composable:
| Operation | Condition | Result |
|---|---|---|
| Non-negative combination | $\alpha_i \geq 0$, $f_i$ convex | $\sum_i \alpha_i f_i$ convex |
| Composition with affine | $f$ convex; $A$, $b$ arbitrary | $f(Ax + b)$ convex |
| Pointwise max | $f_i$ convex | $\max_i f_i(x)$ convex |
| Composition | $g$ convex nondecreasing, $f$ convex | $g(f(x))$ convex |
| Partial min | $f(x, y)$ jointly convex, $C$ convex | $\inf_{y \in C} f(x, y)$ convex |
| Perspective | $f$ convex | $t f(x/t)$ convex on $t > 0$ |
For AI: The logistic cross-entropy loss $\log(1 + e^{-y\, w^\top x})$ is convex in $w$ because it is a convex scalar function composed with an affine map of $w$ - composition with affine preserves convexity. Convexity preservation rules let us verify this without computing the Hessian directly.
5. Lagrange Multipliers
When optimization problems come with constraints, unconstrained optimality conditions no longer apply directly. Lagrange multipliers transform constrained problems into unconstrained ones by incorporating the constraint into the objective - at the cost of introducing auxiliary variables.
5.1 Setup: Equality-Constrained Problems
Problem form: $\min_x f(x)$ subject to $g_i(x) = 0$, $i = 1, \dots, m$,
where $f: \mathbb{R}^n \to \mathbb{R}$, $g_i: \mathbb{R}^n \to \mathbb{R}$, and $m < n$ (fewer constraints than variables).
The Lagrangian: Define $\mathcal{L}: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ by: $\mathcal{L}(x, \lambda) = f(x) + \sum_{i=1}^m \lambda_i g_i(x)$.
The scalars $\lambda_i$ are Lagrange multipliers (or dual variables).
5.2 Geometric Derivation
The deepest way to understand why Lagrange's method works is geometric. At a constrained minimum $x^*$:
Claim: $\nabla f(x^*)$ must lie in the span of $\{\nabla g_i(x^*)\}$.
Why: The feasible set near $x^*$ is approximately the linear manifold $\{x : \nabla g_i(x^*)^\top (x - x^*) = 0 \ \forall i\}$ - the tangent plane to each constraint surface. Any feasible direction $d$ from $x^*$ must satisfy $\nabla g_i(x^*)^\top d = 0$ for all $i$.
If $\nabla f(x^*)$ had a component orthogonal to all $\nabla g_i(x^*)$, moving against that component would be a feasible direction along which $f$ decreases - contradicting $x^*$ being a local constrained minimum.
Therefore $\nabla f(x^*)$ has no component in the tangent space; it must lie entirely in the normal space spanned by the constraint gradients: $\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) = 0$.
LAGRANGE MULTIPLIER GEOMETRY (R^2)
Constraint: g(x,y) = 0 (a curve in the plane)
Objective: minimize f(x,y)
[Sketch: level curves f = c_1, c_2, c_3 of the objective; at x* the level curve f = c_1 is tangent to the constraint curve g = 0.]
At x*: the level curve of f is tangent to the constraint g = 0
∇f ∥ ∇g, i.e. ∇f = -λ∇g
∇g points normal to g = 0; ∇f points normal to f = c_1.
Tangency means these normals are parallel.
5.3 The Lagrange Multiplier Theorem
Theorem (Lagrange, 1788). Let $x^*$ be a local minimum of $f$ subject to $g_i(x) = 0$, $i = 1, \dots, m$. If the Linear Independence Constraint Qualification (LICQ) holds at $x^*$ - i.e., $\nabla g_1(x^*), \dots, \nabla g_m(x^*)$ are linearly independent - then there exists $\lambda^* \in \mathbb{R}^m$ such that: $\nabla f(x^*) + \sum_i \lambda_i^* \nabla g_i(x^*) = 0$ and $g_i(x^*) = 0$ for all $i$.
Together these are $n + m$ equations in the $n + m$ unknowns $(x^*, \lambda^*)$.
LICQ matters: Without LICQ the theorem can fail. Example: minimise $x_1 + x_2$ subject to $g_1 = (x_1 - 1)^2 + x_2^2 - 1 = 0$ and $g_2 = (x_1 - 2)^2 + x_2^2 - 4 = 0$. The two circles meet only at the origin, so $x^* = (0, 0)$; there the gradients are $\nabla g_1 = (-2, 0)$ and $\nabla g_2 = (-4, 0)$ - linearly dependent. Since $\nabla f = (1, 1)$ has a nonzero second component, it cannot be a combination of these gradients: no Lagrange multiplier exists.
5.4 Sensitivity Interpretation: Shadow Prices
The Lagrange multiplier $\lambda_i^*$ has a precise economic interpretation: it measures how much the optimal value changes as constraint $i$ is relaxed.
Theorem (Envelope). Let $p^*(c)$ be the optimal value of $\min_x f(x)$ subject to $g_i(x) = c_i$. Then: $\dfrac{\partial p^*}{\partial c_i} = -\lambda_i^*$.
Proof sketch. Differentiating $p^*(c) = f(x^*(c))$ with respect to $c_i$, applying the chain rule, and substituting the stationarity condition yields the result (the Implicit Function Theorem controls how $x^*$ moves with $c$).
For AI: In constrained training (e.g., "minimize loss subject to $\lVert w\rVert^2 \leq c$"), $\lambda^*$ tells you how much more you could improve the loss by relaxing the norm constraint by one unit. This motivates choosing the right regularisation strength: $\lambda^*$ is the value of the weight-decay coefficient that enforces the constraint.
5.5 Multiple Constraints and Second-Order Conditions
Multiple equality constraints: With $m$ equality constraints, the stationary point satisfies $n + m$ stationarity and feasibility equations. Second-order analysis requires the bordered Hessian or, equivalently, the Hessian of the Lagrangian restricted to the tangent space of the constraints:
$d^\top \nabla^2_{xx} \mathcal{L}(x^*, \lambda^*)\, d > 0$ for all $d \neq 0$ with $\nabla g_i(x^*)^\top d = 0$ is the second-order sufficient condition for a constrained local minimum.
5.6 ML Applications of Lagrange Multipliers
PCA as constrained optimisation. Find $w$ maximising the variance $w^\top \Sigma w$ subject to $\lVert w\rVert^2 = 1$: $\mathcal{L}(w, \lambda) = w^\top \Sigma w - \lambda (w^\top w - 1)$.
Stationarity: $2\Sigma w - 2\lambda w = 0$, i.e., $\Sigma w = \lambda w$. The optimal direction is the top eigenvector; the top eigenvalue is the maximum captured variance. PCA is literally solving Lagrange conditions.
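A numerical confirmation of these Lagrange conditions - a minimal sketch (random data; names illustrative) showing that the top eigenvector of the covariance matrix attains the maximum variance among unit vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(500, 4))
Sigma = np.cov(A, rowvar=False)            # sample covariance matrix

# Lagrange condition: Sigma w = lambda w  ->  the top eigenvector maximises w^T Sigma w
eigvals, eigvecs = np.linalg.eigh(Sigma)
w_star, lam_star = eigvecs[:, -1], eigvals[-1]

print("w*^T Sigma w* =", w_star @ Sigma @ w_star)
print("top eigenvalue =", lam_star)         # equals the captured variance above

# No random unit vector should beat the top eigenvector
for _ in range(3):
    v = rng.normal(size=4); v /= np.linalg.norm(v)
    print("random unit vector variance:", v @ Sigma @ v)
```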
Unit-norm attention. In some attention formulations, query/key vectors are L2-normalized before the dot product. The normalization constraint is enforced via Lagrange multiplier; the shadow price reveals how much the attention energy would increase if the norm bound were relaxed.
LoRA rank constraints. Low-Rank Adaptation constrains the weight update to $\Delta W = BA$, where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, with $r \ll \min(d, k)$. The rank-$r$ constraint is implicit in the factorised parameterisation, and the Lagrange multiplier interpretation illuminates why the singular values of $\Delta W$ concentrate: the effective constraint is on the nuclear norm (sum of singular values).
6. KKT Conditions
Lagrange multipliers handle equality constraints. When inequality constraints are present-which is the norm in machine learning (budget constraints, margin constraints, non-negativity)-the Karush-Kuhn-Tucker (KKT) conditions provide the generalisation.
6.1 The Full Problem and Lagrangian
General form: $\min_x f(x)$ subject to $h_j(x) \leq 0$, $j = 1, \dots, p$, and $g_i(x) = 0$, $i = 1, \dots, m$.
Lagrangian: $\mathcal{L}(x, \mu, \lambda) = f(x) + \sum_{j=1}^p \mu_j h_j(x) + \sum_{i=1}^m \lambda_i g_i(x)$,
where $\mu_j$ are the multipliers for inequality constraints and $\lambda_i$ for equality constraints.
6.2 The Four KKT Conditions
At an optimal $x^*$ (under a suitable constraint qualification), there exist multipliers $(\mu^*, \lambda^*)$ such that:
1. Stationarity: $\nabla f(x^*) + \sum_j \mu_j^* \nabla h_j(x^*) + \sum_i \lambda_i^* \nabla g_i(x^*) = 0$
2. Primal Feasibility: $h_j(x^*) \leq 0$ for all $j$, and $g_i(x^*) = 0$ for all $i$
3. Dual Feasibility: $\mu_j^* \geq 0$ for all $j$
4. Complementary Slackness: $\mu_j^* h_j(x^*) = 0$ for all $j$
Interpreting complementary slackness: For each inequality constraint $h_j$, either:
- $h_j(x^*) = 0$: the constraint is active (the optimum is on the boundary) - $\mu_j^*$ can be nonzero
- $h_j(x^*) < 0$: the constraint is inactive (the optimum is strictly interior), so $\mu_j^* = 0$ - the constraint doesn't affect the optimum
This is the geometric signature of which constraints "matter" at the solution.
6.3 Geometric Interpretation
The KKT conditions say: at the optimum, you cannot improve $f$ while satisfying all constraints. The stationarity condition generalises Lagrange's condition: $-\nabla f(x^*)$ must lie in the cone generated by the active constraint gradients.
KKT COMPLEMENTARY SLACKNESS GEOMETRY
Case 1: Constraint inactive (h(x*) < 0)
The optimum is in the interior of the feasible region.
The inequality constraint plays no role.
mu* = 0 (it would be wrong to "push" against a slack constraint).
Case 2: Constraint active (h(x*) = 0)
The optimum lies ON the constraint boundary.
The gradient nablaf(x*) points into the infeasible region.
mu* > 0 "pushes back" to prevent crossing the boundary.
The objective would improve if the constraint were relaxed.
In both cases: mu* * h(x*) = 0 * (neg) = 0
or: (pos) * 0 = 0
6.4 KKT as Necessary Conditions: LICQ Proof
Theorem. If $x^*$ is a local minimum and the LICQ holds (active constraint gradients are linearly independent), then the KKT conditions hold at $x^*$.
Proof sketch. Let $\mathcal{A} = \{j : h_j(x^*) = 0\}$ be the active set. By LICQ, $\{\nabla h_j(x^*)\}_{j \in \mathcal{A}} \cup \{\nabla g_i(x^*)\}$ are linearly independent.
Any feasible direction $d$ (satisfying $\nabla h_j(x^*)^\top d \leq 0$ for $j \in \mathcal{A}$ and $\nabla g_i(x^*)^\top d = 0$) cannot have $\nabla f(x^*)^\top d < 0$ (otherwise $x^*$ would not be a local min).
By Farkas' lemma, $-\nabla f(x^*)$ is a conic combination of active constraint gradients: $-\nabla f(x^*) = \sum_{j \in \mathcal{A}} \mu_j \nabla h_j(x^*) + \sum_i \lambda_i \nabla g_i(x^*)$ with $\mu_j \geq 0$. Setting $\mu_j = 0$ for $j \notin \mathcal{A}$ gives all four conditions.
Other constraint qualifications: LICQ is sufficient but not necessary for KKT. Alternatives include Mangasarian-Fromovitz (MFCQ), Slater's condition (for convex problems), and linearity of the constraints (purely affine constraints need no further qualification).
6.5 KKT as Sufficient Conditions (Convex Case)
For convex problems, KKT conditions are not just necessary - they are also sufficient:
Theorem. If $f$ and the $h_j$ are convex, the $g_i$ are affine, and $(x^*, \mu^*, \lambda^*)$ satisfy all four KKT conditions, then $x^*$ is a global minimum.
Proof. For any feasible $x$: $f(x) \geq f(x) + \sum_j \mu_j^* h_j(x) + \sum_i \lambda_i^* g_i(x) = \mathcal{L}(x, \mu^*, \lambda^*) \geq \mathcal{L}(x^*, \mu^*, \lambda^*) = f(x^*)$.
The first inequality uses primal feasibility ($h_j(x) \leq 0$, $g_i(x) = 0$) and dual feasibility ($\mu_j^* \geq 0$); the middle step uses convexity of $\mathcal{L}(\cdot, \mu^*, \lambda^*)$ together with stationarity; the final equality uses complementary slackness ($\mu_j^* h_j(x^*) = 0$) and $g_i(x^*) = 0$. Hence $f(x) \geq f(x^*)$.
6.6 LP Worked Example
Linear Program: maximise $3x$ subject to $x \leq 2$, $x \geq 0$.
Rewriting as a minimisation: $\min_x -3x$ with $h_1(x) = x - 2 \leq 0$, $h_2(x) = -x \leq 0$.
Lagrangian: $\mathcal{L}(x, \mu_1, \mu_2) = -3x + \mu_1(x - 2) + \mu_2(-x)$.
Stationarity: $-3 + \mu_1 - \mu_2 = 0$.
The constraint $x \geq 0$ is never active at the optimum (we want $x$ large), so $\mu_2 = 0$. The budget constraint is active: $x^* = 2$ with $\mu_1 > 0$. From stationarity: $\mu_1 = 3$. Optimal: $x^* = 2$, objective $3x^* = 6$.
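The same toy LP can be cross-checked numerically - a minimal sketch assuming `scipy.optimize.linprog` is available; the multipliers are then verified against the KKT equations by hand:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP from the worked example: maximise 3x s.t. x <= 2, x >= 0,
# written as minimisation of -3x for linprog.
res = linprog(c=[-3.0], A_ub=[[1.0]], b_ub=[2.0], bounds=[(0, None)])
x_star = res.x[0]
print("x* =", x_star, " objective =", 3 * x_star)            # x* = 2, objective 6

# KKT check (mu1 for x <= 2, mu2 for -x <= 0):
# stationarity -3 + mu1 - mu2 = 0; complementary slackness picks mu2 = 0, mu1 = 3
mu1, mu2 = 3.0, 0.0
print("stationarity residual:", -3.0 + mu1 - mu2)            # 0
print("complementary slackness:", mu1 * (x_star - 2.0), mu2 * (-x_star))  # both 0
```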
7. Duality Theory
The Lagrangian dual offers a second approach to constrained optimisation: instead of minimising over $x$, we maximise over the multipliers $(\mu, \lambda)$. The resulting dual problem often has better structure (always convex, regardless of the primal), reveals hidden geometric properties, and - for convex problems - gives exactly the same optimal value.
7.1 The Dual Function and Dual Problem
Definition (Lagrangian dual function): $q(\mu, \lambda) = \inf_x \mathcal{L}(x, \mu, \lambda) = \inf_x \Big[ f(x) + \sum_j \mu_j h_j(x) + \sum_i \lambda_i g_i(x) \Big]$.
Key property: $q$ is always concave in $(\mu, \lambda)$ - it is a pointwise infimum of affine functions of the multipliers.
Dual problem: $\max_{\mu \geq 0,\ \lambda} \; q(\mu, \lambda)$.
This is always a convex optimisation problem (maximising a concave function = minimising a convex one).
7.2 Weak Duality
Theorem (Weak Duality). $q(\mu, \lambda) \leq p^*$ for every dual-feasible $(\mu, \lambda)$, where $p^*$ is the primal optimal value; hence $d^* \leq p^*$.
Proof. For any feasible primal $x$ (satisfying $h_j(x) \leq 0$, $g_i(x) = 0$) and any dual-feasible $(\mu, \lambda)$ (with $\mu \geq 0$): $q(\mu, \lambda) = \inf_z \mathcal{L}(z, \mu, \lambda) \leq \mathcal{L}(x, \mu, \lambda) = f(x) + \sum_j \mu_j h_j(x) \leq f(x)$.
Taking the supremum over dual-feasible multipliers and the infimum over primal-feasible points: $d^* \leq p^*$.
The gap $p^* - d^* \geq 0$ is the duality gap.
7.3 Strong Duality and Slater's Condition
Theorem (Slater's Condition -> Strong Duality). For convex $f$ and $h_j$, affine $g_i$: if there exists a strictly feasible point $\bar{x}$ (with $h_j(\bar{x}) < 0$ strictly for all $j$ and $g_i(\bar{x}) = 0$), then $p^* = d^*$ (zero duality gap) and the dual optimum is attained.
Slater's condition is a constraint qualification: it says the feasible region is non-degenerate. For LP and QP (quadratic programs), strong duality holds under much weaker conditions.
Implications for ML:
- The SVM dual problem is equivalent to the primal (strong duality holds by Slater)
- The dual variables from strong duality are exactly the KKT multipliers
- Duality gap as a stopping criterion: if the primal value minus the dual value is at most $\epsilon$, we have an $\epsilon$-optimal solution
7.4 Saddle Point Characterisation
Theorem. $(x^*, \mu^*, \lambda^*)$ is primal-dual optimal with zero duality gap iff it is a saddle point of the Lagrangian: $\mathcal{L}(x^*, \mu, \lambda) \leq \mathcal{L}(x^*, \mu^*, \lambda^*) \leq \mathcal{L}(x, \mu^*, \lambda^*)$ for all $x$ and all $\mu \geq 0$.
The minimax equals the maximin: $\min_x \max_{\mu \geq 0, \lambda} \mathcal{L} = \max_{\mu \geq 0, \lambda} \min_x \mathcal{L}$.
This saddle-point view is foundational for adversarial training in ML: GAN training is exactly seeking a saddle point of the value function $V(G, D)$.
7.5 SVM Dual: A Complete Example
The SVM is the canonical example of duality in ML. Start with the hard-margin SVM primal: $\min_{w, b} \tfrac12 \lVert w\rVert^2$ subject to $y_i(w^\top x_i + b) \geq 1$ for all $i$.
Rewrite the constraints as $1 - y_i(w^\top x_i + b) \leq 0$. Lagrangian: $\mathcal{L}(w, b, \alpha) = \tfrac12 \lVert w\rVert^2 + \sum_i \alpha_i \big[ 1 - y_i(w^\top x_i + b) \big]$ with $\alpha_i \geq 0$.
KKT stationarity conditions: $\nabla_w \mathcal{L} = w - \sum_i \alpha_i y_i x_i = 0 \Rightarrow w = \sum_i \alpha_i y_i x_i$, and $\partial \mathcal{L}/\partial b = -\sum_i \alpha_i y_i = 0$.
Substituting back into $\mathcal{L}$ to form the dual:
Dual problem: $\max_\alpha \; \sum_i \alpha_i - \tfrac12 \sum_{i,j} \alpha_i \alpha_j y_i y_j\, x_i^\top x_j$ subject to $\alpha_i \geq 0$, $\sum_i \alpha_i y_i = 0$.
This depends only on inner products $x_i^\top x_j$ - the kernel trick replaces these with $k(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$ for nonlinear boundaries without ever computing the features $\phi$ explicitly.
Support vectors: By complementary slackness, $\alpha_i > 0$ only when the constraint is active: $y_i(w^\top x_i + b) = 1$. These are exactly the support vectors - the training points on the margin boundary that determine $w$.
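The dual can be solved numerically for a toy dataset. A minimal sketch (the dataset and variable names are made up for illustration) using `scipy.optimize.minimize` with SLSQP; stationarity recovers $w$ and complementary slackness identifies the support vectors:

```python
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable 2D dataset
X = np.array([[2.0, 2.0], [2.5, 1.5], [3.0, 3.0],
              [-2.0, -1.0], [-1.5, -2.5], [-3.0, -2.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

G = (y[:, None] * X) @ (y[:, None] * X).T       # Gram matrix y_i y_j x_i^T x_j

def neg_dual(a):                                 # minimise the negative dual objective
    return -(a.sum() - 0.5 * a @ G @ a)

cons = {"type": "eq", "fun": lambda a: a @ y}    # sum_i alpha_i y_i = 0
res = minimize(neg_dual, x0=np.zeros(len(y)), bounds=[(0, None)] * len(y),
               constraints=[cons], method="SLSQP")
alpha = res.x

w = (alpha * y) @ X                              # stationarity: w = sum_i alpha_i y_i x_i
sv = alpha > 1e-6                                # support vectors: alpha_i > 0
b = np.mean(y[sv] - X[sv] @ w)                   # from y_i (w^T x_i + b) = 1 on the margin
print("alpha =", np.round(alpha, 4))
print("support vector indices:", np.where(sv)[0])
print("margins y_i(w.x_i + b):", np.round(y * (X @ w + b), 3))  # = 1 on SVs, > 1 elsewhere
```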
8. Machine Learning Applications
The optimality conditions developed above appear directly in the mathematics underlying every major ML system. This section demonstrates these connections concretely.
8.1 Linear and Ridge Regression
Ordinary Least Squares. Minimise $f(w) = \tfrac12 \lVert Xw - y\rVert^2$. Setting the gradient to zero: $X^\top X w = X^\top y$.
These are the normal equations. When $X$ has full column rank, $\nabla^2 f = X^\top X \succ 0$ (SOSC), and the unique solution is $\hat{w} = (X^\top X)^{-1} X^\top y$.
Ridge Regression. Adding $\ell_2$ regularisation: $f(w) = \tfrac12 \lVert Xw - y\rVert^2 + \tfrac{\lambda}{2} \lVert w\rVert^2$.
Normal equations: $(X^\top X + \lambda I) w = X^\top y$. The regulariser shifts all eigenvalues up by $\lambda$, making the system always well-conditioned (strongly convex with $\mu \geq \lambda$).
8.2 Lasso and the Subdifferential
Lasso: $\min_w \tfrac12 \lVert Xw - y\rVert^2 + \lambda \lVert w\rVert_1$.
The $\ell_1$ norm is not differentiable at $0$. The first-order optimality condition uses the subdifferential $\partial \lVert w\rVert_1$: $0 \in X^\top(Xw^* - y) + \lambda\, \partial \lVert w^*\rVert_1$.
Coordinate-wise, for the $j$-th weight:
- If $w_j^* \neq 0$: $x_j^\top (y - Xw^*) = \lambda \,\mathrm{sign}(w_j^*)$
- If $w_j^* = 0$: $|x_j^\top (y - Xw^*)| \leq \lambda$
The second condition (correlation of feature $j$ with the residual is small enough) determines sparsity: feature $j$ is excluded when its correlation with the residual is below the threshold $\lambda$. This is the mathematical source of Lasso's sparsity-inducing property.
8.3 SVM: Full KKT Analysis
The SVM soft-margin formulation extends the hard-margin case with slack variables $\xi_i \geq 0$: $\min_{w, b, \xi} \tfrac12 \lVert w\rVert^2 + C \sum_i \xi_i$ subject to $y_i(w^\top x_i + b) \geq 1 - \xi_i$ and $\xi_i \geq 0$.
The KKT conditions include multipliers $\alpha_i \geq 0$ for the margin constraints and $\beta_i \geq 0$ for $\xi_i \geq 0$. Complementary slackness gives:
- $\alpha_i = 0$: correctly classified, strictly outside the margin - not a support vector
- $0 < \alpha_i < C$: on the margin, $\xi_i = 0$ (standard support vector)
- $\alpha_i = C$: inside the margin or misclassified, $\xi_i > 0$
The parameter $C$ controls the trade-off between margin width and training error through the dual box constraint $0 \leq \alpha_i \leq C$.
8.4 PCA via Constrained Optimisation
Principal Component Analysis seeks directions of maximum variance subject to orthonormality. The full PCA problem is: $\max_{W \in \mathbb{R}^{d \times k}} \mathrm{tr}(W^\top \Sigma W)$ subject to $W^\top W = I_k$.
Lagrangian: $\mathcal{L}(W, \Lambda) = \mathrm{tr}(W^\top \Sigma W) - \mathrm{tr}\big(\Lambda (W^\top W - I_k)\big)$, where $\Lambda$ is a symmetric multiplier matrix.
Stationarity: $2\Sigma W - 2W\Lambda = 0$, i.e., $\Sigma W = W\Lambda$. Each column satisfies $\Sigma w_i = \lambda_i w_i$ - an eigenvalue equation. The optimal $W$ is the matrix of top-$k$ eigenvectors; the multipliers are the eigenvalues (= captured variance).
8.5 Maximum Entropy and Softmax
Maximum Entropy Principle. Given constraints $\mathbb{E}_p[f_k(x)] = c_k$ for $k = 1, \dots, K$, the distribution maximising the entropy $H(p) = -\sum_x p(x) \log p(x)$ subject to $\sum_x p(x) = 1$ has the Boltzmann/Gibbs form: $p^*(x) = \tfrac{1}{Z} \exp\big( \sum_k \lambda_k f_k(x) \big)$.
Derivation. Lagrangian with multipliers $\nu$ (normalisation) and $\lambda_k$ (feature constraints): $\mathcal{L}(p, \nu, \lambda) = -\sum_x p(x)\log p(x) + \nu\big(\sum_x p(x) - 1\big) + \sum_k \lambda_k \big(\sum_x p(x) f_k(x) - c_k\big)$.
Stationarity in $p(x)$: $-\log p(x) - 1 + \nu + \sum_k \lambda_k f_k(x) = 0$, giving $p(x) \propto \exp\big(\sum_k \lambda_k f_k(x)\big)$.
Softmax as max entropy. The softmax function $p_i = e^{z_i} / \sum_j e^{z_j}$ is exactly the max-entropy distribution over $\{1, \dots, K\}$ subject to linear constraints whose Lagrange multipliers are the logits $z_i$. This gives the softmax a principled statistical interpretation beyond "normalised exponentials."
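This equivalence can be checked numerically: maximising $p^\top z + H(p)$ over the simplex should reproduce softmax of the logits. A minimal sketch assuming `scipy` is available (the logits and starting point are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

z = np.array([2.0, 0.5, -1.0, 0.0])             # logits

def neg_objective(p):
    # maximise p^T z + H(p)  <=>  minimise -(p^T z - sum p log p)
    p = np.clip(p, 1e-12, 1.0)
    return -(p @ z - np.sum(p * np.log(p)))

cons = {"type": "eq", "fun": lambda p: p.sum() - 1.0}     # simplex: probabilities sum to 1
res = minimize(neg_objective, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=[cons], method="SLSQP")

print("numerical max-entropy solution:", np.round(res.x, 4))
print("softmax(z)                    :", np.round(softmax(z), 4))   # should match
```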
8.6 Attention as Constrained Optimisation
The attention mechanism in transformers can be viewed as a constrained optimisation problem. Given queries $Q$, keys $K$, values $V$, standard attention computes: $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\big(QK^\top / \sqrt{d_k}\big)\, V$.
Constrained interpretation. For a single query $q$, attention computes the retrieval weights $a$ that solve: $\max_{a \in \Delta} \; a^\top (K q / \sqrt{d_k}) + \tau H(a)$,
where $\tau$ is a temperature and $\Delta$ is the probability simplex. The KKT conditions for this maximum-entropy retrieval give exactly the softmax attention weights.
The Lagrange multiplier for the simplex constraint becomes the log-partition function (the log-sum-exp normaliser), confirming that attention is a principled probabilistic retrieval operation.
9. Non-Convex Landscapes in Deep Learning
Deep learning operates almost entirely in the non-convex regime. Yet neural networks train successfully-why? The answer lies in the special geometric structure of high-dimensional non-convex landscapes.
9.1 Loss Landscape Geometry
An $n$-parameter neural network defines a loss landscape $L: \mathbb{R}^n \to \mathbb{R}$ with structure that differs fundamentally from classical non-convex functions:
Key empirical observations:
- No spurious local minima (approximately): for sufficiently overparameterised networks on tractable data, all local minima have near-identical loss values. Gradient descent doesn't get "trapped" because there are no deep local minima to get trapped in.
- Saddle point dominance: most critical points are saddle points, not local minima. The index (number of negative eigenvalue directions) of these saddles is typically large.
- Loss barriers between solutions: two different global minima can be separated by high barriers in weight space, but connected by loss valleys in higher-dimensional paths.
For AI: The practical consequence is that SGD with good initialisation reliably finds good solutions, and model merging (linear combination of two trained models) often produces competitive performance-evidence of basin connectivity.
9.2 The Role of Overparameterisation
Theorem (informal, Kawaguchi 2016; Du et al. 2018). For a wide class of networks trained on generic data, when the number of parameters $p$ exceeds the number of training points $N$, every local minimum is a global minimum.
Intuition. With $p$ parameters and $N < p$ data points, the zero-loss level set $\{\theta : L(\theta) = 0\}$ is generically a high-dimensional manifold with roughly $p - N$ degrees of freedom. Any point trying to be a local minimum with non-zero loss would need the gradient to vanish while the Hessian is positive definite in all directions - this becomes geometrically impossible when the loss can still be driven down by moving along the flat directions.
Neural Tangent Kernel (NTK) perspective. In the infinite-width limit, training dynamics become linear: the network outputs on the training set evolve as $\dot{u}(t) = -\Theta\,(u(t) - y)$, where $\Theta$ is the NTK matrix. If $\lambda_{\min}(\Theta) > 0$, gradient descent converges globally at a linear rate - a convex-analysis result applied to a formally non-convex problem.
9.3 Sharpness-Aware Minimisation (SAM)
Motivated by the observation that flat minima generalise better than sharp ones, Sharpness-Aware Minimisation (Foret et al. 2021) solves: $\min_\theta \max_{\lVert \epsilon \rVert_2 \leq \rho} L(\theta + \epsilon)$.
KKT analysis of the inner problem. Fix $\theta$ and linearise $L(\theta + \epsilon) \approx L(\theta) + \epsilon^\top \nabla L(\theta)$. The inner maximisation is then a constrained problem with one inequality constraint $\lVert \epsilon \rVert_2 \leq \rho$.
KKT conditions: $\nabla L(\theta) = 2\mu\, \epsilon^*$ with $\mu > 0$ and the constraint active ($\lVert \epsilon^* \rVert_2 = \rho$), so $\epsilon^*$ points along the gradient.
The adversarial perturbation is: $\epsilon^* = \rho\, \dfrac{\nabla L(\theta)}{\lVert \nabla L(\theta) \rVert}$.
SAM gradient step: $\theta \leftarrow \theta - \eta\, \nabla L(\theta + \epsilon^*)$.
This is a direct application of KKT: solving the constrained inner problem analytically gives the SAM update formula.
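A minimal sketch of the resulting update on a sharp quadratic, using the closed-form $\epsilon^*$ derived above (the learning rate, $\rho$, and the quadratic itself are illustrative choices, not values from the paper):

```python
import numpy as np

# Sharp quadratic loss L(theta) = 0.5 * theta^T H theta with very different curvatures
H = np.diag([100.0, 1.0])
grad = lambda th: H @ th
loss = lambda th: 0.5 * th @ H @ th

def sam_step(theta, lr=0.01, rho=0.05):
    g = grad(theta)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # closed-form KKT solution of the inner max
    return theta - lr * grad(theta + eps)         # descend using the gradient at the perturbed point

def gd_step(theta, lr=0.01):
    return theta - lr * grad(theta)

theta_sam = theta_gd = np.array([1.0, 1.0])
for _ in range(100):
    theta_sam, theta_gd = sam_step(theta_sam), gd_step(theta_gd)

print("GD  final iterate:", theta_gd, " loss:", loss(theta_gd))
print("SAM final iterate:", theta_sam, " loss:", loss(theta_sam))
```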
9.4 Neural Collapse
At the terminal phase of training, when networks reach near-zero training loss, a remarkable geometric structure called Neural Collapse (Papyan et al. 2020) emerges:
- Within-class variability collapse: all training examples of class $c$ have the same last-layer representation
- Equinorm ETF: the class means form an Equiangular Tight Frame - they have equal norms and equal pairwise cosine similarities
- Self-duality: the weight vectors align with the class means up to scaling
KKT characterisation. Neural collapse is the KKT point of the Unconstrained Features Model (UFM): minimise the cross-entropy loss $\mathrm{CE}(WH, Y)$ jointly over the classifier $W$ and the (freely optimised) last-layer features $H$, subject to norm constraints on $W$ and $H$.
The KKT stationarity conditions, combined with the symmetry of the cross-entropy loss for balanced class distributions, force the ETF structure. Neural collapse is not an empirical coincidence - it is the unique KKT point of this simplified training problem.
9.5 Mode Connectivity
Loss valley hypothesis: Two local minima $\theta_1$ and $\theta_2$ of a neural network can be connected by a low-loss path (Garipov et al. 2018). Specifically, there exists a piecewise linear or quadratic curve $\gamma(t)$ with $\gamma(0) = \theta_1$, $\gamma(1) = \theta_2$, and $L(\gamma(t)) \approx L(\theta_1)$ for all $t \in [0, 1]$.
Implication for model merging. The success of weight averaging (WA) methods like Model Soups and SLERP model merging is explained by mode connectivity: if the merged model lies near the connecting path in weight space, it inherits the performance of both endpoints.
Optimality connection. Mode connectivity is a non-convex analogue of a basic fact from convex analysis: convex functions have connected sublevel sets. For "sufficiently trained" neural networks, the sublevel set $\{\theta : L(\theta) \leq \epsilon\}$ is approximately path-connected - not because the landscape is convex, but because overparameterisation creates high-dimensional flat regions.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Treating $\nabla f = 0$ as sufficient for a minimum | It's necessary, not sufficient. Saddle points and maxima also satisfy this. | Always check second-order conditions (Hessian PSD/PD) or verify global minimality via convexity arguments |
| 2 | Forgetting to check constraint qualification | KKT conditions require LICQ (or another CQ). Without CQ, a minimum may exist with no Lagrange multiplier | Verify that active constraint gradients are linearly independent at the candidate point |
| 3 | Setting $\mu_j < 0$ for inequality constraints | Negative multipliers on $\leq$ constraints violate dual feasibility; the Lagrangian is then unbounded below | Dual feasibility requires $\mu_j \geq 0$ for all inequality constraints (the convention matters: $h_j(x) \leq 0$ needs $\mu_j \geq 0$) |
| 4 | Ignoring complementary slackness | Missing the condition $\mu_j h_j(x^*) = 0$ leads to wrong determination of which constraints are active | For each inequality constraint, at least one of $\mu_j^* = 0$ or $h_j(x^*) = 0$ must hold (possibly both) |
| 5 | Concluding strong duality without Slater | Weak duality always holds, but strong duality ($p^* = d^*$) requires a constraint qualification like Slater's condition | Verify strict feasibility (a Slater point exists) before asserting zero duality gap |
| 6 | Using the determinant test in $\mathbb{R}^n$, $n > 2$ | The determinant test ($\det H > 0$ and $f_{xx} > 0$) is specific to $\mathbb{R}^2$. In higher dimensions, need to check all leading principal minors or eigenvalues | In $\mathbb{R}^n$: compute all eigenvalues of $H$; or use Sylvester's criterion (all leading principal minors $> 0$ iff PD) |
| 7 | Confusing local and global optimality for non-convex problems | For non-convex functions, local minima may not be global. KKT conditions identify local critical points, not global optima | Use global analysis: prove convexity, or use branch-and-bound, or accept local optimality |
| 8 | Wrong sign convention for the Lagrangian | Different texts define $\mathcal{L} = f + \sum \lambda g$ vs. $\mathcal{L} = f - \sum \lambda g$. Mixing conventions gives wrong multiplier signs | Pick one convention and be consistent: for $\min f$ s.t. $h \leq 0$, use $\mathcal{L} = f + \sum \mu h$ with $\mu \geq 0$ (add constraints to the objective) |
| 9 | Forgetting that the Lagrange multiplier theorem is for differentiable functions | At non-smooth points (e.g., $\ell_1$ constraints), the standard gradient condition fails | Use subdifferentials and subgradient conditions; or smooth the problem with a differentiable approximation |
| 10 | Concluding "no constrained minimum exists" when KKT has no solution | KKT conditions failing to have a solution means the constraint qualification fails OR no minimum exists. These are different | First check whether the feasible set is closed and bounded (Weierstrass guarantees existence); then debug the CQ |
| 11 | Misidentifying support vectors | Only points with $\alpha_i > 0$ (active margin constraint) are support vectors; points strictly outside the margin ($\alpha_i = 0$) are not | Check complementary slackness: $\alpha_i > 0 \Rightarrow y_i(w^\top x_i + b) = 1$ (on the margin boundary) |
| 12 | Applying first-order conditions to discrete or combinatorial constraints | Gradient = 0 requires differentiability; discrete feasible sets (e.g., integer programs) don't satisfy this | For discrete/combinatorial problems, use integer programming methods, branch-and-bound, or relaxations |
11. Exercises
Exercise 1 - Critical Point Classification
Let .
(a) Find all critical points by solving $\nabla f = 0$.
(b) Compute the Hessian at each critical point and classify using the second-order test.
(c) Verify numerically: check that gradient descent from each critical point stays put (up to numerical noise).
(d) Sketch the level curves of near each critical point.
Exercise 2 - Lagrange Multipliers: Constrained Extremum
Maximise subject to , .
(a) Write the Lagrangian and derive KKT conditions.
(b) Solve analytically and verify that the maximum is .
(c) Interpret the Lagrange multiplier: by how much does change if the constraint becomes ?
(d) Confirm numerically using scipy.optimize.minimize.
Exercise 3 - KKT Conditions for Quadratic Program
Solve: subject to , .
(a) Write the KKT conditions in full.
(b) Determine the active constraint(s) and solve the KKT system.
(c) Verify the solution is a global minimum by checking convexity.
(d) Compute the duality gap (should be zero).
Exercise 4 - Convexity Analysis
For each function, determine if it is convex, strictly convex, or neither, on the specified domain:
(a) on
(b) on
(c) on (positive definite matrices)
(d) for
(e) for logistic regression with linearly separable data
Exercise 5 - SVM Dual Derivation
Derive the dual of the hard-margin SVM from scratch.
(a) Write the primal as a standard form QP with inequality constraints.
(b) Form the Lagrangian and derive the dual function .
(c) Write the dual problem and verify its constraints.
(d) Show that the dual is concave in $\alpha$.
(e) Implement and solve both primal and dual for a small dataset; verify they give the same optimal value.
Exercise 6 - Maximum Entropy Distribution
Find the maximum entropy distribution on subject to: and .
(a) Write the Lagrangian with multipliers .
(b) Derive that .
(c) Find numerically by solving the constraint equations.
(d) Compare to the uniform distribution: which has higher entropy?
(e) Verify: compute and confirm it exceeds .
Exercise 7 - SAM: KKT Analysis
Analyse Sharpness-Aware Minimisation rigorously.
(a) Write the inner maximisation as a constrained problem.
(b) Write the KKT conditions and solve for $\epsilon^*$ in closed form.
(c) Show that $\epsilon^* = \rho\, \nabla L(\theta) / \lVert \nabla L(\theta) \rVert$ satisfies the KKT conditions.
(d) Implement one SAM step and compare to vanilla gradient descent on a sharp quadratic.
(e) Empirically verify that SAM finds flatter minima on a small neural network.
Exercise 8 - Duality Gap and Convergence
Implement a primal-dual interior-point method for a small LP.
(a) Formulate: s.t. , as a standard LP.
(b) Implement the primal-dual path-following method tracking both primal and dual variables.
(c) Plot the duality gap vs. iteration and verify it converges to zero.
(d) Verify the optimal solution satisfies all KKT conditions numerically.
(e) Compare convergence to simplex method on the same problem.
12. Why This Matters for AI (2026 Perspective)
| Concept | Impact on AI/ML |
|---|---|
| First-order necessary conditions | Every modern optimiser (SGD, Adam, AdamW) terminates at approximate stationarity $\lVert \nabla L(\theta) \rVert \leq \epsilon$. Training stopping criteria are gradient-norm thresholds. |
| Hessian spectrum | Adaptive learning rate methods (Adam, Adagrad) implicitly approximate a diagonal curvature preconditioner to normalise curvature. Sharpness-aware methods (SAM, ASAM) explicitly reduce the spectral norm of $\nabla^2 L$. |
| Saddle points | The prevalence of saddle points (not local minima) in deep network landscapes explains why gradient descent with noise (SGD, dropout) escapes them efficiently. Perturbed gradient descent has provable saddle-escape guarantees. |
| Convexity | Convex relaxations of combinatorial ML problems (e.g., the $\ell_1$ relaxation of sparse recovery) are solvable to global optimality. Convex loss functions (logistic, square) give unique solutions independent of initialisation. |
| Lagrange multipliers | PCA, LDA, CCA are all Lagrange multiplier problems. QLoRA's 4-bit quantisation uses a constrained optimisation where the quantisation constraint is relaxed via Lagrange multipliers in the mixed-precision framework. |
| KKT conditions | SVM training, LP relaxations of beam search, constrained decoding (e.g., budget-forcing), and RLHF reward-constrained optimisation all use KKT theory for optimality analysis. |
| Duality | The SVM dual gives the kernel trick. Max-margin training of LLMs (via RLHF) can be analysed as a dual problem where the Lagrange multiplier on the KL constraint is the reward temperature $\beta$. |
| Strong convexity | $\ell_2$ regularisation makes the loss strongly convex, giving linear convergence guarantees. The condition number $\kappa = L/\mu$ governs convergence; warmup schedules and weight decay are practitioner responses to ill-conditioning. |
| Max entropy / softmax | Language model next-token prediction is max-entropy inference subject to observed statistics. Temperature scaling adjusts the implicit Lagrange multiplier on the cross-entropy constraint, controlling distribution sharpness. |
| SAM/sharpness | Flat minima empirically generalise better. SAM (Foret et al. 2021) is now standard in SOTA image classification and LLM fine-tuning. The optimality condition for the inner problem is the KKT stationarity condition. |
| Neural collapse | The ETF structure at convergence (proven via KKT of the UFM) informs classifier weight initialisation strategies and explains why the last layer of a fine-tuned model can be reset and retrained cheaply. |
| Mode connectivity | Model merging (Model Soups, TIES, DARE) and model interpolation are enabled by the path-connectivity of loss basins-a direct consequence of overparameterised non-convex landscape geometry. |
Conceptual Bridge
This section occupies the pivot between two phases of the curriculum. Everything before 04 established the tools for computing derivatives-limits, continuity, partial derivatives, gradients, Jacobians, chain rule. This section asks the deeper question: what do these derivatives tell us about where the best solutions are?
Looking backward. The first-order conditions derive directly from the gradient machinery of 01-03. The Hessian-the matrix of second partial derivatives-extends the single-variable second derivative test to functions of many variables, connecting to the matrix theory of 02. The chain rule and product rule for derivatives underpin every step of the Lagrangian analysis.
Looking forward. The optimality conditions here are used constantly in the chapters ahead:
- 05/05 (Gradient Descent) and 08 (Optimisation Algorithms): gradient descent is precisely the process of iterating toward $\nabla f = 0$; convergence analysis requires the strong convexity and Lipschitz smoothness properties defined here
- 06 (Probability & Statistics): maximum likelihood estimation is an unconstrained minimisation whose optimality conditions give the maximum likelihood equations; Bayesian MAP estimation introduces prior constraints handled by Lagrange multipliers
- 07 (Information Theory): the maximum entropy principle (8.5) is a direct application of Lagrange multipliers; the KL divergence minimisation underlying variational inference is a constrained optimisation whose stationarity conditions yield the ELBO optimality equations
POSITION IN THE CALCULUS CURRICULUM
04: Calculus Fundamentals
01: Limits and Continuity Foundation for 02
02: Derivatives and Differentiation Foundation for 03, 04
03: Integration Foundation for 06 (probability)
04: Optimality Conditions YOU ARE HERE
Uses: gradients, Hessians, chain rule, Jacobians
Introduces: critical points, KKT, duality, convexity
05: Multivariate Calculus
01: Partial Derivatives (prerequisite)
02: Gradient and Hessian (prerequisite)
03: Chain Rule & Backpropagation (prerequisite)
04: Optimality Conditions YOU ARE HERE
05: Gradient Descent and Convergence Uses 04's convexity theory
08: Optimisation Algorithms Deep dive into algorithms enabled by 04
09: Probabilistic Graphical Models MAP/MLE via 04's Lagrange theory
The central insight. Optimality conditions are not merely a tool for finding optima-they are a language for describing what it means for a solution to be right. The KKT conditions don't just tell you where to stop searching; they tell you why the answer is the answer: which constraints are binding, how sensitive the solution is to perturbations, what trade-offs are implicit in the optimal choice. Every time a language model outputs a softmax distribution, every time a recommender system solves a constrained relevance problem, every time a reinforcement learning agent balances reward and KL penalty-the KKT conditions are operating in the background, whether or not the engineer knows it.
Supplement A: Deeper Mathematical Treatments
This supplement provides expanded proofs, worked examples, and connections that go beyond the core sections above. These are valuable for practitioners who want to move from "knowing the conditions" to "understanding why the conditions have the form they do."
A.1 Proof of the Envelope Theorem
The Envelope Theorem is the rigorous basis for the shadow price interpretation of Lagrange multipliers. Here is the complete proof.
Setup. Let $p^*(c)$ be the optimal value of: $\min_x f(x)$ subject to $g(x) = c$,
where $c$ is a parameter (e.g., the RHS of a constraint). Assume $f, g$ are $C^1$ and the optimiser $x^*(c)$ varies smoothly.
Theorem. $\dfrac{dp^*}{dc} = -\lambda^*$,
where $\lambda^*$ is the multiplier in the Lagrangian $\mathcal{L}(x, \lambda) = f(x) + \lambda\,(g(x) - c)$.
Proof. Differentiate $p^*(c) = f(x^*(c))$ with respect to $c$: $\dfrac{dp^*}{dc} = \nabla f(x^*)^\top \dfrac{dx^*}{dc}$.
From stationarity: $\nabla f(x^*) = -\lambda^* \nabla g(x^*)$. Substituting: $\dfrac{dp^*}{dc} = -\lambda^* \nabla g(x^*)^\top \dfrac{dx^*}{dc}$.
Differentiating the constraint $g(x^*(c)) = c$ with respect to $c$: $\nabla g(x^*)^\top \dfrac{dx^*}{dc} = 1$.
Substituting back: $\dfrac{dp^*}{dc} = -\lambda^*$.
Consequence for ML. If we add a constraint $\lVert w \rVert^2 \leq c$ (norm budget) to a training problem, the Lagrange multiplier equals $-dp^*/dc$ - the rate at which the optimal loss decreases as we increase the allowed norm. This is why weight decay (which enforces a soft norm budget) and the optimal multiplier coincide at the optimum: they're the same object.
A.2 Farkas' Lemma: The Foundation of KKT
The Farkas Lemma is the linear algebra result that underpins the entire theory of KKT conditions. It provides the criterion for when a linear system has a solution versus when a certificate of infeasibility exists.
Lemma (Farkas, 1902). Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$. Exactly one of the following holds:
- There exists $x \geq 0$ with $Ax = b$
- There exists $y$ with $A^\top y \leq 0$ and $b^\top y > 0$
Geometric interpretation. Statement 1 says $b$ is in the conic hull of the columns of $A$. Statement 2 says there is a hyperplane separating $b$ from this cone. The lemma asserts these are mutually exclusive and exhaustive.
Proof sketch. If 1 holds and $y$ satisfies $A^\top y \leq 0$, then $b^\top y = x^\top A^\top y \leq 0$ since $x \geq 0$, so 2 fails. The converse (if 1 fails then 2 holds) follows from the separating hyperplane theorem for convex cones.
How Farkas gives KKT. At a constrained minimum $x^*$, no feasible descent direction exists: no $d$ can satisfy the active constraint gradient inequalities while also having $\nabla f(x^*)^\top d < 0$. The Farkas alternatives state that such a $d$ fails to exist iff $-\nabla f(x^*)$ lies in the cone of the active constraint gradients - i.e., the KKT conditions hold.
A.3 Fritz John Conditions: When LICQ Fails
When the LICQ fails (active constraint gradients are linearly dependent), the standard KKT multiplier theorem breaks down. The Fritz John conditions provide a fallback:
Theorem (Fritz John, 1948). At a local minimum $x^*$ of $f$ subject to $h_j(x) \leq 0$, $g_i(x) = 0$, there exist multipliers $(\mu_0, \mu, \lambda) \neq 0$ with $\mu_0 \geq 0$, $\mu_j \geq 0$ such that:
$\mu_0 \nabla f(x^*) + \sum_j \mu_j \nabla h_j(x^*) + \sum_i \lambda_i \nabla g_i(x^*) = 0$, and complementary slackness $\mu_j h_j(x^*) = 0$ holds.
The anomaly. When $\mu_0 = 0$, the objective function disappears from the condition entirely - the conditions are "degenerate" and say nothing about the objective. This is called an abnormal solution.
Practical implication. If LICQ holds, we can always normalize to $\mu_0 = 1$, recovering the standard KKT conditions. Fritz John is thus a generalization that covers degenerate cases, but the cleanest results require LICQ (or another constraint qualification that ensures $\mu_0 > 0$).
A.4 Second-Order KKT Conditions
For constrained problems, second-order sufficient conditions involve the Hessian of the Lagrangian restricted to the tangent cone of the active constraints.
Setup. At a KKT point $(x^*, \mu^*, \lambda^*)$ with active set $\mathcal{A}$, define:
- $H_{\mathcal{L}} = \nabla^2_{xx} \mathcal{L}(x^*, \mu^*, \lambda^*)$ (Hessian of the Lagrangian w.r.t. $x$)
- The critical cone: $\mathcal{C} = \{ d : \nabla g_i(x^*)^\top d = 0 \ \forall i;\ \nabla h_j(x^*)^\top d = 0 \ \forall j \in \mathcal{A} \text{ with } \mu_j^* > 0;\ \nabla h_j(x^*)^\top d \leq 0 \ \forall j \in \mathcal{A} \text{ with } \mu_j^* = 0 \}$
Second-order necessary condition (SONC): If $x^*$ is a local minimum, then $d^\top H_{\mathcal{L}}\, d \geq 0$ for all $d \in \mathcal{C}$.
Second-order sufficient condition (SOSC): If the KKT conditions hold and $d^\top H_{\mathcal{L}}\, d > 0$ for all $d \in \mathcal{C} \setminus \{0\}$, then $x^*$ is a strict local minimum.
For SVM: At the SVM optimum, the Hessian of the Lagrangian in the primal variables is positive definite restricted to the constraint tangent space (verified by the positive definiteness of the kernel matrix for distinct support vectors)-confirming the optimum is indeed a strict constrained minimum.
A.5 Convex Conjugate and Fenchel Duality
The Fenchel conjugate provides a powerful alternative to Lagrangian duality, particularly useful for non-smooth problems.
Definition (Conjugate function). For $f: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$: $f^*(y) = \sup_x \big[ y^\top x - f(x) \big]$.
$f^*$ is always convex (a pointwise supremum of affine functions of $y$) regardless of whether $f$ is convex.
Key examples:
| $f(x)$ | Domain | $f^*(y)$ | Domain |
|---|---|---|---|
| $\tfrac12 \lVert x\rVert_2^2$ | $\mathbb{R}^n$ | $\tfrac12 \lVert y\rVert_2^2$ | $\mathbb{R}^n$ |
| $\lVert x\rVert$ | $\mathbb{R}^n$ | $0$ if $\lVert y\rVert_* \leq 1$, else $+\infty$ | Indicator of the dual-norm unit ball |
| $0$ if $x \in C$, else $+\infty$ | Indicator of a set $C$ | $\sup_{x \in C} y^\top x$ (support function of $C$) | $\mathbb{R}^n$ |
Fenchel duality. The Fenchel dual problem of $\min_x f(x) + g(Ax)$ is: $\max_y \; -f^*(A^\top y) - g^*(-y)$.
Lasso as Fenchel dual. The Lasso primal has a Fenchel dual that is a quadratic program with box constraints. The dual variables correspond to the residuals, and the dual constraint $\lVert X^\top u \rVert_\infty \leq \lambda$ gives the KKT condition for Lasso sparsity.
A.6 Proximal Operators and Subdifferential Calculus
For non-smooth convex problems, gradient descent fails but proximal methods succeed. The proximal operator is intimately connected to KKT conditions.
Definition. The proximal operator of $g$ with parameter $t > 0$: $\mathrm{prox}_{tg}(v) = \arg\min_x \Big[ g(x) + \tfrac{1}{2t} \lVert x - v\rVert^2 \Big]$.
Connection to KKT. The fixed point of $x \mapsto \mathrm{prox}_{tg}(x)$ is the minimiser of $g$. The optimality condition for the proximal subproblem is $0 \in \partial g(x) + \tfrac{1}{t}(x - v)$, which at a fixed point $x = v$ gives $0 \in \partial g(x)$ - the subdifferential optimality condition.
Proximal gradient method for $f + g$ (smooth $f$ + non-smooth $g$): $x_{k+1} = \mathrm{prox}_{tg}\big( x_k - t \nabla f(x_k) \big)$.
For Lasso: $\mathrm{prox}_{t\lambda \lVert\cdot\rVert_1}(v)_j = \mathrm{sign}(v_j)\,\max(|v_j| - t\lambda,\ 0)$ (soft thresholding). This is the closed-form solution to the proximal subproblem, and it directly implements the KKT condition: the $j$-th weight is zero when the corresponding (scaled) gradient component is smaller than the threshold (inactive), nonzero otherwise.
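A minimal ISTA sketch for the Lasso built on exactly this proximal operator (synthetic sparse problem; `soft_threshold` and `ista` are illustrative names), with a final check of the subdifferential KKT condition:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(X, y, lam, n_iters=1000):
    """Proximal gradient (ISTA) for 0.5*||Xw - y||^2 + lam*||w||_1."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        w = soft_threshold(w - step * X.T @ (X @ w - y), step * lam)
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10); w_true[[0, 3]] = [2.0, -1.5]       # sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=100)

lam = 5.0
w_hat = ista(X, y, lam)
print("estimate:", np.round(w_hat, 3))
# Subdifferential KKT check: |x_j^T (y - X w)| <= lam, with equality on the support
corr = np.abs(X.T @ (y - X @ w_hat))
print("per-feature |x_j^T residual| (<= lam =", lam, "at the optimum):", np.round(corr, 2))
```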
A.7 Constrained Optimisation in High Dimensions: Curse and Blessing
Classical optimisation theory was developed primarily for problems in $\mathbb{R}^n$ with $n$ small (tens to thousands of variables). Modern ML operates at millions to trillions of parameters. Some classical results break down; others become stronger.
What breaks:
- Computing the Hessian: $O(n^2)$ memory, impossible at modern network scale
- Solving KKT systems: $O(n^3)$ for dense systems
- Verifying positive definiteness: a full eigendecomposition is $O(n^3)$
- LICQ: in high dimensions, constraints from discrete data rarely achieve full-rank Jacobians
What gets better (blessing of overparameterisation):
- Saddle points become easier to escape: a random perturbation has a component along a negative-curvature direction with probability 1; gradient descent with noise escapes saddles in polynomial time
- Local minima become global: as shown by Kawaguchi and Du et al., the geometry of overparameterised networks makes spurious local minima rare
- Flat minima proliferate: with $p$ parameters and $N < p$ data points, there is generically a $(p - N)$-dimensional manifold of global minima - gradient descent finds some flat point in this manifold
Practical approximations to KKT in the high-dimensional regime:
- A gradient-norm threshold $\lVert \nabla L \rVert \leq \epsilon$ replaces exact stationarity
- Curvature estimation: diagonal Hessian (Adagrad, Adam) approximates second-order information
- Sketched KKT systems: random projections reduce system size while preserving approximate solutions
- Dual certificate: in sparse recovery, checking $\lVert X^\top(y - X\hat{w}) \rVert_\infty \leq \lambda$ verifies the KKT condition for Lasso without re-solving the primal - a single matrix-vector product rather than a full optimisation
Supplement B: Worked Examples
B.1 Complete KKT Analysis: Portfolio Optimisation
Problem. Markowitz mean-variance portfolio: minimise the risk $\tfrac12 w^\top \Sigma w$ subject to expected return $\mu^\top w = r$ and budget $\mathbf{1}^\top w = 1$.
Lagrangian: $\mathcal{L}(w, \lambda_1, \lambda_2) = \tfrac12 w^\top \Sigma w - \lambda_1(\mu^\top w - r) - \lambda_2(\mathbf{1}^\top w - 1)$.
Stationarity: $\Sigma w - \lambda_1 \mu - \lambda_2 \mathbf{1} = 0$.
Solution: $w^* = \Sigma^{-1}(\lambda_1 \mu + \lambda_2 \mathbf{1})$ (assuming $\Sigma \succ 0$).
Finding multipliers: Substitute $w^*$ into the two constraints.
Define: $A = \mu^\top \Sigma^{-1} \mu$, $B = \mu^\top \Sigma^{-1} \mathbf{1}$, $C = \mathbf{1}^\top \Sigma^{-1} \mathbf{1}$.
System: $A\lambda_1 + B\lambda_2 = r$ and $B\lambda_1 + C\lambda_2 = 1$ - two linear equations in $(\lambda_1, \lambda_2)$.
The efficient frontier is the locus of (minimum variance, required return) pairs as $r$ varies - it's a parabola in that space, parameterised by the Lagrange multiplier $\lambda_1$.
Sensitivity: to first order, a one-unit increase in the required return $r$ increases the minimum variance by $\lambda_1$ units. This is the shadow price: the cost of reaching for higher returns.
B.2 KKT Verification via Duality: Water-Filling
The water-filling solution for capacity maximisation in parallel Gaussian channels is a classic KKT example.
Problem. Maximise the total capacity $\sum_i \log(1 + p_i/N_i)$ subject to $\sum_i p_i = P$, $p_i \geq 0$.
Lagrangian (written as a minimisation): $\mathcal{L}(p, \nu, \mu) = -\sum_i \log(1 + p_i/N_i) + \nu\big(\sum_i p_i - P\big) - \sum_i \mu_i p_i$.
KKT stationarity: $-\dfrac{1}{N_i + p_i} + \nu - \mu_i = 0$.
Complementary slackness: $\mu_i p_i = 0$. So either $p_i = 0$ (channel off) or $\mu_i = 0$ and $N_i + p_i = 1/\nu$ (channel on, filled to the water level $1/\nu$).
Solution: $p_i^* = \max(0,\ 1/\nu - N_i)$, where $\nu$ is chosen so that $\sum_i p_i^* = P$.
The water-filling metaphor: imagine filling a container with a flat water level $1/\nu$; channels whose noise floor $N_i$ sits above the water level get no power (for them the non-negativity constraint is active, with $\mu_i > 0$). This is the KKT complementary slackness condition made vivid.
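A minimal sketch of water-filling by bisection on the water level (the channel noise floors and power budget are illustrative numbers):

```python
import numpy as np

def water_filling(noise, total_power):
    """Max sum log(1 + p_i/N_i) s.t. sum p_i = P, p_i >= 0, by bisection on the water level."""
    lo, hi = 0.0, noise.max() + total_power        # bracket for the water level 1/nu
    for _ in range(100):
        level = 0.5 * (lo + hi)
        p = np.maximum(level - noise, 0.0)         # p_i = max(0, 1/nu - N_i)
        if p.sum() > total_power:
            hi = level
        else:
            lo = level
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

noise = np.array([0.1, 0.5, 1.0, 4.0])             # channel noise floors N_i
p = water_filling(noise, total_power=2.0)
print("power allocation:", np.round(p, 3))         # the noisiest channel gets zero power
print("total power     :", p.sum())
print("water level     :", np.round((p + noise)[p > 0][0], 3))  # shared by active channels
```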
B.3 RLHF as a Constrained Optimisation Problem
Reinforcement Learning from Human Feedback (RLHF) can be formulated as a constrained optimisation problem whose KKT conditions provide exact theoretical insight into the resulting policy.
Problem formulation. Let $\pi_\theta$ be the model policy and $\pi_{\mathrm{ref}}$ the reference policy. Maximize expected reward subject to a KL constraint: $\max_\pi \; \mathbb{E}_{y \sim \pi}[r(y)]$ subject to $\mathrm{KL}(\pi \,\Vert\, \pi_{\mathrm{ref}}) \leq \epsilon$.
Lagrangian: $\mathcal{L}(\pi, \beta) = \mathbb{E}_{y \sim \pi}[r(y)] - \beta\big(\mathrm{KL}(\pi \,\Vert\, \pi_{\mathrm{ref}}) - \epsilon\big)$,
where $\beta \geq 0$ is the Lagrange multiplier (dual variable) for the KL constraint.
KKT stationarity (with respect to the policy probability at each $y$, together with the normalisation constraint): $r(y) - \beta\big(\log \tfrac{\pi(y)}{\pi_{\mathrm{ref}}(y)} + 1\big) + \mathrm{const} = 0$.
Solving for the optimal policy: $\pi^*(y) = \dfrac{1}{Z}\, \pi_{\mathrm{ref}}(y)\, \exp\!\big(r(y)/\beta\big)$,
where $Z = \sum_y \pi_{\mathrm{ref}}(y) \exp(r(y)/\beta)$ is the partition function.
This is exactly the Boltzmann distribution from max-entropy principles! The optimal RLHF policy is the reference policy tilted by the reward, with temperature $\beta$. Higher $\beta$ (larger KL penalty) stays closer to $\pi_{\mathrm{ref}}$; lower $\beta$ maximises reward more aggressively.
Complementary slackness: If the KL budget is not exhausted (the model doesn't need to move far to maximise reward), then $\beta^* = 0$ and the optimal policy maximises reward unconstrained. If the constraint binds, $\beta^*$ is the shadow price of KL - how much reward per unit of KL divergence the model could gain by allowing more deviation from the reference.
Direct Preference Optimisation (DPO). DPO (Rafailov et al. 2023) avoids sampling from $\pi$ by reparameterising the reward using the KKT condition above: $r(y) = \beta \log \tfrac{\pi^*(y)}{\pi_{\mathrm{ref}}(y)} + \beta \log Z$. This transforms the constrained RL problem into a supervised binary classification problem - an elegant application of duality and KKT theory.
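The tilted-policy formula is easy to verify on a toy discrete response set - a minimal sketch using the penalised (Lagrangian) form with a fixed $\beta$ (the reference policy and rewards are made-up numbers):

```python
import numpy as np

# Toy discrete "response" space with a reference policy and a reward for each response
pi_ref = np.array([0.5, 0.3, 0.15, 0.05])
reward = np.array([0.0, 1.0, 2.0, 3.0])

def rlhf_optimal_policy(pi_ref, reward, beta):
    """KKT solution of max E_pi[r] - beta*KL(pi||pi_ref): the reward-tilted reference policy."""
    unnorm = pi_ref * np.exp(reward / beta)
    return unnorm / unnorm.sum()

for beta in [10.0, 1.0, 0.1]:
    pi = rlhf_optimal_policy(pi_ref, reward, beta)
    kl = np.sum(pi * np.log(pi / pi_ref))
    print(f"beta={beta:5.1f}  pi*={np.round(pi, 3)}  KL(pi*||pi_ref)={kl:.3f}")
# Large beta stays close to pi_ref (small KL); small beta concentrates on the highest reward.
```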
Supplement C: Computational Methods for Finding Critical Points
C.1 Newton's Method and KKT Systems
For smooth unconstrained problems, Newton's method solves $\nabla f(\mathbf{x}) = \mathbf{0}$ by iterating: $\mathbf{x}_{k+1} = \mathbf{x}_k - \big[\nabla^2 f(\mathbf{x}_k)\big]^{-1}\nabla f(\mathbf{x}_k)$
Newton's method has quadratic convergence near a non-degenerate critical point (where $\nabla^2 f(\mathbf{x}^*)$ is invertible): the error satisfies $\|\mathbf{x}_{k+1} - \mathbf{x}^*\| \leq C\,\|\mathbf{x}_k - \mathbf{x}^*\|^2$.
For equality-constrained problems, Newton's method solves the KKT system: $\begin{bmatrix} \nabla^2_{\mathbf{xx}} L & \nabla g(\mathbf{x})^\top \\ \nabla g(\mathbf{x}) & 0 \end{bmatrix}\begin{bmatrix} \Delta\mathbf{x} \\ \boldsymbol{\lambda} \end{bmatrix} = \begin{bmatrix} -\nabla f(\mathbf{x}) \\ -g(\mathbf{x}) \end{bmatrix}$
This KKT matrix (also called the augmented system or saddle-point matrix) is symmetric but indefinite-it has both positive and negative eigenvalues. Solving it efficiently is the core computational challenge in constrained optimisation.
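For an equality-constrained quadratic programme, one solve of the KKT system already gives the exact optimum. A minimal sketch (the matrices below are arbitrary illustrative values):

```python
import numpy as np

# Equality-constrained QP: min 0.5 x^T Q x + c^T x  s.t.  A x = b
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])          # single constraint x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])   # symmetric indefinite KKT (saddle-point) matrix
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x, lam = sol[:n], sol[n:]

print(x, lam)
print(np.allclose(Q @ x + c + A.T @ lam, 0), np.allclose(A @ x, b))  # stationarity, feasibility
```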
C.2 Interior Point Methods and the Central Path
Interior point methods (barrier methods) solve inequality-constrained problems by converting them to a sequence of unconstrained problems. Replace $\min f(\mathbf{x})$ s.t. $h_i(\mathbf{x}) \leq 0$ with:
$\min_{\mathbf{x}}\; f(\mathbf{x}) - t\sum_i \log\!\big(-h_i(\mathbf{x})\big)$ for decreasing barrier parameter $t > 0$. The log-barrier forces iterates to stay strictly feasible (it blows up as $h_i(\mathbf{x}) \to 0^-$) and approximates the indicator of the feasible set as $t \to 0$.
Central path. The solution $\mathbf{x}^*(t)$ as a function of $t$ traces the central path - a smooth curve converging to the constrained optimum as $t \to 0$. Along the central path, the modified KKT conditions hold: stationarity and primal/dual feasibility as usual, but complementary slackness is perturbed to $-\mu_i\, h_i(\mathbf{x}) = t$.
As $t \to 0$: $\mu_i h_i \to 0$ recovers complementary slackness. Interior point methods achieve the best known polynomial complexity for LP and convex QP: $O(\sqrt{n}\,\log(1/\epsilon))$ iterations.
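A one-dimensional central-path computation makes the perturbed complementarity visible. This toy problem (chosen here purely for illustration, with a closed-form barrier minimiser) is $\min x^2$ s.t. $1 - x \leq 0$, whose optimum is $x^* = 1$ with multiplier $\mu^* = 2$:

```python
import numpy as np

# Barrier subproblem: min x^2 - t*log(x - 1); stationarity 2x - t/(x-1) = 0 gives a quadratic in x.
for t in (1.0, 0.1, 0.01, 0.001):
    x = (1.0 + np.sqrt(1.0 + 2.0 * t)) / 2.0
    mu = t / (x - 1.0)     # dual estimate from the perturbed complementarity mu*(x - 1) = t
    print(f"t={t:7.3f}  x={x:.5f}  mu={mu:.5f}")
# As t -> 0 the iterates converge to x* = 1 and mu -> 2, recovering exact complementary slackness.
```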
C.3 Automatic Differentiation and Gradient Verification
Modern ML uses automatic differentiation (AD) to compute exact gradients of any differentiable computation graph. The core mechanism is an exact application of the chain rule through reverse mode accumulation-backpropagation.
Gradient checking. A standard technique for verifying backprop correctness is comparing against finite differences:
$\dfrac{\partial f}{\partial \theta_i} \approx \dfrac{f(\boldsymbol{\theta} + \epsilon\mathbf{e}_i) - f(\boldsymbol{\theta} - \epsilon\mathbf{e}_i)}{2\epsilon}$ with $\epsilon \approx 10^{-5}$. This centred difference approximation has error $O(\epsilon^2)$, so at $\epsilon = 10^{-5}$ the finite-difference gradient should agree with the exact gradient to about 10 significant digits.
Why finite differences are not used in practice:
- Cost: $2n$ forward passes for $n$ parameters, vs. one backward pass for AD
- Numerical precision: for very large or small gradients, floating-point rounding limits accuracy
But finite differences remain the gold standard for correctness verification during algorithm development-if your AD gradient doesn't match finite differences at 1e-5 relative tolerance, you have a bug.
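A minimal gradient-check sketch (the toy objective and tolerance are illustrative choices, not part of any particular framework's API):

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-5):
    """Centred finite differences: O(eps^2) error, 2n function evaluations."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Toy check: f(x) = sum(x^3) has analytic gradient 3 x^2.
f = lambda x: np.sum(x ** 3)
x = np.random.default_rng(0).normal(size=5)
num, ana = numerical_gradient(f, x), 3 * x ** 2
print(np.abs(num - ana).max() / np.abs(ana).max())   # relative error, well below 1e-5
```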
C.4 Constraint Satisfaction in Modern ML Systems
Modern ML increasingly uses hard and soft constraints, and the optimality conditions for handling these have real-world implementations:
Constitutional AI and RLHF with hard constraints. Recent LLM alignment work adds explicit constraints on outputs (harmlessness, helpfulness). The dual variable controlling KL deviation from a reference policy is analogous to the Lagrange multiplier on the constraint "don't deviate too far from the safe base model." The KKT condition determines the optimal trade-off.
Quantisation as constrained optimisation. Post-training quantisation (PTQ) and quantisation-aware training (QAT) seek weight matrices $\hat{W}$ that are close to the trained weights $W$ but have values restricted to a discrete grid $\mathcal{Q}$. This is: $\min_{\hat{W}} \|W - \hat{W}\|_F^2$ s.t. $\hat{W}_{ij} \in \mathcal{Q}$.
The optimal solution is the projection onto the nearest grid point (rounding). The KKT conditions confirm this: the Lagrange multiplier plays the role of the quantisation error, and the optimal $\hat{W}$ satisfies complementary slackness - either the weight is clipped to the boundary of the grid range (multiplier nonzero) or it is rounded to the nearest interior grid point (multiplier zero).
FlashAttention and memory constraints. FlashAttention (Dao et al. 2022) reformulates the attention computation to respect SRAM capacity constraints. The tiling strategy solves a constrained I/O minimisation problem: process tiles of $Q$, $K$, $V$ that fit in SRAM while computing exact attention. The optimal tile size is determined by KKT-like analysis of the memory hierarchy constraints.
Supplement D: Visualisation Guide
D.1 Saddle Point Visualisation
The canonical saddle point is $f(x, y) = x^2 - y^2$ at the origin. Its Hessian is $\mathrm{diag}(2, -2)$ - indefinite. Level curves are hyperbolas; the gradient flow converges to the origin along the $x$-axis but diverges along the $y$-axis.
In high dimensions. For an $n$-parameter saddle with Morse index $k$, gradient descent perturbed with isotropic Gaussian noise escapes the saddle in a number of steps that is at most polynomial in the problem dimension (Jin et al. 2017). The escape direction is aligned with the eigenvector corresponding to the most negative eigenvalue.
D.2 Level Curve Analysis
For any scalar $c$, consider the level set $\{\mathbf{x} : f(\mathbf{x}) = c\}$:
- Near a local minimum: level curves are approximately ellipses centred at $\mathbf{x}^*$; aspect ratio given by $\sqrt{\lambda_{\max}/\lambda_{\min}}$ of the Hessian
- Near a saddle point: level curves form hyperbolas; the asymptotes are the stable/unstable manifolds
- Near a local maximum: level curves are ellipses with the gradient pointing inward toward the maximum ($f$ decreases as you move outward)
The condition number of the Hessian at a minimum determines the eccentricity of the ellipses. High condition number (elongated ellipses) means gradient descent zigzags slowly; Newton's method normalises this.
D.3 Constraint Geometry
For $\min f(\mathbf{x})$ s.t. $g(\mathbf{x}) = 0$:
- The feasible set $\{\mathbf{x} : g(\mathbf{x}) = 0\}$ is a manifold (a curve in $\mathbb{R}^2$, a surface in $\mathbb{R}^3$)
- Level curves of $f$ foliate the ambient space
- The minimum is where a level curve of $f$ is tangent to the constraint manifold
- Tangency means $\nabla f$ is orthogonal to the constraint tangent space, i.e., $\nabla f = \lambda\nabla g$ - which is exactly Lagrange's condition
The visual clarity of this geometric argument is why Lagrange's method is so compelling: it replaces algebra with geometry.
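The tangency condition is easy to verify numerically. A tiny sketch for the illustrative problem $\min x + y$ s.t. $x^2 + y^2 = 1$ (whose minimum is at $(-1/\sqrt{2}, -1/\sqrt{2})$):

```python
import numpy as np

x = np.array([-1.0, -1.0]) / np.sqrt(2.0)   # constrained minimiser
grad_f = np.array([1.0, 1.0])                # gradient of f(x, y) = x + y
grad_g = 2.0 * x                             # gradient of g(x, y) = x^2 + y^2 - 1

lam = grad_f @ grad_g / (grad_g @ grad_g)    # least-squares estimate of the multiplier
print(lam, np.allclose(grad_f, lam * grad_g))  # grad f = lambda * grad g  (Lagrange's condition)
```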
Supplement E: Key Formulas Reference
OPTIMALITY CONDITIONS CHEAT SHEET
UNCONSTRAINED
FONC: ∇f(x*) = 0 (necessary)
SONC: H(x*) ⪰ 0 (necessary)
SOSC: H(x*) ≻ 0 (+ FONC) (sufficient for strict local min)
Convex: FONC ⟺ global min
EQUALITY CONSTRAINTS (g(x) = 0)
Lagrangian: L = f + λᵀg
KKT: ∇L = 0 and g(x*) = 0
SOSC on tangent: dᵀ∇²L d > 0 for all d with ∇g(x*)ᵀd = 0
INEQUALITY CONSTRAINTS (h(x) <= 0)
Lagrangian: L = f + μᵀh + λᵀg
KKT (4 conditions):
(1) Stationarity: ∇_x L = 0
(2) Primal feas.: h(x*) ≤ 0, g(x*) = 0
(3) Dual feas.: μ* ≥ 0
(4) Comp. slack.: μ_i* h_i(x*) = 0 for all i
DUALITY
Dual function: q(μ, λ) = inf_x L(x, μ, λ)
Weak duality: d* ≤ p* always
Strong duality: d* = p* under Slater's condition (convex, strictly feas.)
Shadow price: ∂p*/∂b = λ* (sensitivity of optimal value)
SPECIFIC PROBLEMS
OLS: (XᵀX)w = Xᵀy (normal equations = FONC)
Ridge: (XᵀX + nλI)w = Xᵀy (FONC with L2 reg)
SVM: w* = Σ_i α_i y_i x_i (stationarity); α_i > 0 iff x_i on margin
PCA: Σv = λv (FONC for variance max on sphere)
Softmax: p ∝ exp(z) (FONC for max entropy with logit constraints)
RLHF: π*(y|x) ∝ π_ref(y|x) exp(r/β) (KKT for KL-constrained reward max)
References and Further Reading
Primary References
- Boyd & Vandenberghe, "Convex Optimization" (2004) - Chapters 4-5 (convex functions, duality), Chapter 5 (KKT conditions). The standard graduate reference. Available free online at stanford.edu/~boyd/cvxbook.
- Nocedal & Wright, "Numerical Optimization" (2006) - Chapters 12-15. Rigorous treatment of constrained optimisation, KKT theory, interior point methods, and convergence analysis.
- Rockafellar, "Convex Analysis" (1970) - The foundational text on subdifferentials, conjugate functions, and duality. Rigorous but dense.
- Bertsekas, "Nonlinear Programming" (2016) - Chapter 3 (Lagrange multiplier methods) and Chapter 4 (duality). Excellent treatment of constraint qualifications.
For ML/AI Connections
- Dauphin et al. "Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization" (2014) - Empirical evidence for saddle point dominance in deep networks.
- Foret et al. "Sharpness-Aware Minimization for Efficiently Improving Generalization" (2021) - SAM algorithm; KKT analysis of the inner maximisation.
- Papyan et al. "Prevalence of Neural Collapse during the Terminal Phase of Deep Learning Training" (2020) - Neural collapse as KKT critical point of the UFM.
- Rafailov et al. "Direct Preference Optimization" (2023) - DPO derivation from RLHF KKT conditions.
- Du et al. "Gradient Descent Finds Global Minima of Non-Convex Neural Networks" (2018) - Overparameterisation and global optimality.
- Kawaguchi "Deep Learning without Poor Local Minima" (2016) - Theoretical analysis of local minima in deep linear networks.
Computational Tools
- CVXPY (Diamond & Boyd 2016) - Python library for convex optimisation with automatic KKT verification. Handles LP, QP, SOCP, SDP with strong duality guarantees.
- SciPy scipy.optimize.minimize - Numerical optimisation with constraint support; its SLSQP (Sequential Least Squares Programming) method solves a KKT subproblem at each step.
This section is part of the 05 Multivariate Calculus chapter. For gradient descent algorithms and convergence theory using these optimality conditions, see 05/05 Gradient Descent and Convergence. For the probability chapter where Lagrange multipliers appear in maximum likelihood and Bayesian estimation, see 06.
Supplement F: Extended Theory
F.1 The Morse Lemma and Local Geometry of Non-Degenerate Critical Points
Theorem (Morse Lemma). Near a non-degenerate critical point $\mathbf{x}^*$ (where $\nabla f(\mathbf{x}^*) = \mathbf{0}$ and $\nabla^2 f(\mathbf{x}^*)$ is invertible), there exist local coordinates $(y_1, \dots, y_n)$ in which $f$ has the form: $f = f(\mathbf{x}^*) - y_1^2 - \cdots - y_k^2 + y_{k+1}^2 + \cdots + y_n^2$
where $k$ is the Morse index (number of negative eigenvalues of $\nabla^2 f(\mathbf{x}^*)$).
Implication. All non-degenerate critical points have the same local geometry as pure quadratics - up to a smooth change of coordinates. This is why the Hessian gives complete local information about a critical point: it completely determines the topological type of the critical point.
For AI. In transformer loss landscapes, measuring the Morse index of critical points found during training is computationally expensive but theoretically illuminating. Empirical studies using Lanczos approximations of the Hessian spectrum suggest that good minima have small Morse index (few descent directions), while saddles found earlier in training have large index.
F.2 Generalized Saddle Points and Minimax Problems
Definition. A point $(\mathbf{x}^*, \mathbf{y}^*)$ is a saddle point of $f(\mathbf{x}, \mathbf{y})$ if:
$f(\mathbf{x}^*, \mathbf{y}) \leq f(\mathbf{x}^*, \mathbf{y}^*) \leq f(\mathbf{x}, \mathbf{y}^*)$ for all $\mathbf{x}, \mathbf{y}$ (minimum over $\mathbf{x}$, maximum over $\mathbf{y}$).
Existence theorem. If $f$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$, and both domains are compact convex sets, then a saddle point exists (Von Neumann minimax theorem).
Optimality conditions for saddle points: $\nabla_{\mathbf{x}} f(\mathbf{x}^*, \mathbf{y}^*) = \mathbf{0}$ and $\nabla_{\mathbf{y}} f(\mathbf{x}^*, \mathbf{y}^*) = \mathbf{0}$.
This looks identical to an unconstrained critical point, but the additional saddle structure (convex-concave) ensures the point is a minimax rather than an arbitrary stationary point.
GAN training as saddle point optimisation. Generative Adversarial Networks solve the minimax objective: $\min_G \max_D\; \mathbb{E}_{\mathbf{x} \sim p_{\text{data}}}\big[\log D(\mathbf{x})\big] + \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}}\big[\log\!\big(1 - D(G(\mathbf{z}))\big)\big]$
The optimal discriminator maximises the objective for fixed $G$; the optimal generator minimises it for fixed $D$. At the Nash equilibrium (saddle point), $G$ generates samples from $p_{\text{data}}$ and $D = \tfrac12$ everywhere.
The challenge: the convex-concave structure is not present in neural network GAN training (both $G$ and $D$ are non-convex), so saddle point existence is not guaranteed and alternating gradient descent/ascent can cycle or diverge.
F.3 Proximal Point Algorithm and Augmented Lagrangian
Proximal Point Algorithm (PPA). For convex $f$: $\mathbf{x}_{k+1} = \arg\min_{\mathbf{x}}\; f(\mathbf{x}) + \dfrac{1}{2\lambda}\|\mathbf{x} - \mathbf{x}_k\|^2$
This is equivalent to finding the KKT point of the proximal subproblem: $\mathbf{0} \in \partial f(\mathbf{x}_{k+1}) + \tfrac{1}{\lambda}(\mathbf{x}_{k+1} - \mathbf{x}_k)$.
Augmented Lagrangian (method of multipliers). For $\min f(\mathbf{x})$ s.t. $g(\mathbf{x}) = 0$, the augmented Lagrangian: $L_\rho(\mathbf{x}, \boldsymbol{\lambda}) = f(\mathbf{x}) + \boldsymbol{\lambda}^\top g(\mathbf{x}) + \dfrac{\rho}{2}\|g(\mathbf{x})\|^2$
combines the standard Lagrangian with a quadratic penalty term. The quadratic term is zero at feasibility, so the KKT conditions for the augmented Lagrangian match the original. But the augmented Lagrangian is better conditioned - the $\tfrac{\rho}{2}\|g(\mathbf{x})\|^2$ term adds curvature near the constraint.
ADMM. The Alternating Direction Method of Multipliers (ADMM) is an augmented Lagrangian method split across two blocks of variables. It is widely used for distributed training, consensus optimisation, and lasso-type problems. At convergence, the ADMM iterates satisfy the KKT conditions of the original problem.
F.4 Implicit Function Theorem and Sensitivity Analysis
The KKT conditions define a nonlinear system $F(\mathbf{x}, \boldsymbol{\lambda}; \boldsymbol{\theta}) = \mathbf{0}$ parameterised by $\boldsymbol{\theta}$ (problem data). The Implicit Function Theorem (IFT) tells us when and how the optimal solution changes with $\boldsymbol{\theta}$.
Theorem (IFT). If $F(\mathbf{x}_0, \boldsymbol{\theta}_0) = \mathbf{0}$ and the Jacobian $\partial F/\partial\mathbf{x}$ is invertible at $(\mathbf{x}_0, \boldsymbol{\theta}_0)$, then there exists a smooth function $\mathbf{x}^*(\boldsymbol{\theta})$ near $\boldsymbol{\theta}_0$ with $F(\mathbf{x}^*(\boldsymbol{\theta}), \boldsymbol{\theta}) = \mathbf{0}$, and: $\dfrac{d\mathbf{x}^*}{d\boldsymbol{\theta}} = -\Big(\dfrac{\partial F}{\partial\mathbf{x}}\Big)^{-1}\dfrac{\partial F}{\partial\boldsymbol{\theta}}$
Application: differentiating through optimisation. In meta-learning (MAML) and bilevel optimisation, we need $d\mathbf{x}^*/d\boldsymbol{\theta}$ - the derivative of the optimal (inner) solution with respect to the outer parameters. The IFT gives this derivative via solving a linear system involving the KKT Jacobian.
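A small sketch of the IFT derivative for ridge regression, whose inner optimality condition is closed-form (the data below is synthetic and chosen for illustration):

```python
import numpy as np

# w*(lam) = argmin ||Xw - y||^2 + lam ||w||^2; optimality F(w, lam) = (X^T X + lam I) w - X^T y = 0,
# so dw*/dlam = -(dF/dw)^{-1} dF/dlam = -(X^T X + lam I)^{-1} w*.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
lam = 0.5

H = X.T @ X + lam * np.eye(3)
w = np.linalg.solve(H, X.T @ y)
dw_dlam = -np.linalg.solve(H, w)     # implicit-function-theorem derivative

# Finite-difference check of the IFT formula
eps = 1e-5
w_plus = np.linalg.solve(X.T @ X + (lam + eps) * np.eye(3), X.T @ y)
print(np.allclose(dw_dlam, (w_plus - w) / eps, atol=1e-4))
```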
MAML connection. Model-Agnostic Meta-Learning computes: $\min_{\boldsymbol{\theta}}\; \sum_{\text{tasks } i} L_i^{\text{val}}\big(\boldsymbol{\theta} - \alpha\nabla_{\boldsymbol{\theta}} L_i^{\text{train}}(\boldsymbol{\theta})\big)$
This requires backpropagating through the inner gradient step - equivalent to applying the IFT to the first-order optimality condition of the inner problem.
F.5 Optimality and Generalisation: The Flat Minima Hypothesis
A deep connection links optimality conditions to generalisation in deep learning. The sharpness of a minimum-measured by the trace or maximum eigenvalue of the Hessian-correlates empirically with test performance.
Measures of sharpness:
- $\|\nabla^2 L(\mathbf{w}^*)\|_F$: total curvature (Frobenius norm of the Hessian)
- $\lambda_{\max}\!\big(\nabla^2 L(\mathbf{w}^*)\big)$: maximum curvature (sharpness in the steepest direction)
- $\mathbb{E}_{\boldsymbol{\epsilon} \sim \mathcal{N}(0, \sigma^2 I)}\big[L(\mathbf{w}^* + \boldsymbol{\epsilon})\big] - L(\mathbf{w}^*)$: PAC-Bayes (expected) sharpness
Why flat minima generalise better (hypothesis). A flat minimum has many parameters with nearly zero curvature. Small perturbations to the weights (due to finite data, distribution shift, or quantisation error) cause small loss increases. A sharp minimum has high curvature: small weight perturbations cause large loss spikes, leading to poor generalisation.
PAC-Bayes bound. The PAC-Bayes generalisation bound for a posterior $Q$ over weights near $\mathbf{w}^*$ includes the expected perturbed training loss, which by a second-order Taylor expansion is $\mathbb{E}_{\boldsymbol{\epsilon} \sim \mathcal{N}(0, \sigma^2 I)}\big[L(\mathbf{w}^* + \boldsymbol{\epsilon})\big] \approx L(\mathbf{w}^*) + \tfrac{\sigma^2}{2}\operatorname{tr}\nabla^2 L(\mathbf{w}^*)$,
together with a complexity term $\mathrm{KL}(Q\,\|\,P)$, where $P$ is the prior. This directly penalises large Hessian eigenvalues - confirming that flat minima (small eigenvalues) admit tighter generalisation bounds.
Implication for optimality. The "best" solution from a generalisation perspective is not just any critical point - it is a flat critical point: one where a sharpness measure such as $\lambda_{\max}(\nabla^2 L)$ is also small. SAM seeks this; the KKT condition for the SAM minimax problem gives the perturbation direction that reveals the steepest (worst-case) ascent direction.
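A minimal sketch of the SAM update on a toy quadratic (the step sizes and the diagonal "sharp vs. flat" Hessian are illustrative choices, not the settings of Foret et al.):

```python
import numpy as np

def sam_step(grad_fn, w, rho=0.05, lr=0.1):
    """One SAM update: ascend to the KKT-optimal perturbation eps* = rho * g / ||g||,
    then descend using the gradient evaluated at the perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # first-order solution of the inner maximisation
    return w - lr * grad_fn(w + eps)              # outer minimisation step

# Toy objective L(w) = 0.5 * w^T diag(100, 1) w: one sharp direction, one flat direction.
H = np.diag([100.0, 1.0])
grad_fn = lambda w: H @ w
w = np.array([1.0, 1.0])
for _ in range(5):
    w = sam_step(grad_fn, w, rho=0.05, lr=0.005)
print(w)   # the perturbation amplifies the gradient along the sharp (high-curvature) direction
```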
Supplement G: Historical Perspective
G.1 Timeline of Key Developments
HISTORY OF OPTIMALITY CONDITIONS
1788 Lagrange - "Mécanique Analytique": Lagrange multipliers for
constrained mechanics. Invented to handle pendulum constraints.
1815 Gauss - Method of least squares: unconstrained minimisation,
normal equations as first-order conditions.
1844 Cauchy - Gradient descent proposed as an optimisation method.
First explicit use of the gradient as a direction.
1902 Farkas - Farkas Lemma for linear systems of inequalities.
The algebraic foundation of all LP and KKT theory.
1928 Von Neumann - Minimax theorem for zero-sum games.
Foundation of saddle-point theory and game theory.
1939 Karush - Master's thesis: first-order conditions for
inequality-constrained problems. (Unknown for decades.)
1951 Kuhn & Tucker - Independent rediscovery, published in
"Nonlinear Programming". Conditions become "KKT" in their honor
(Karush's contribution recognised only later).
1953 Wolfe - Duality theorem for nonlinear programming.
1955 Charnes, Cooper, Henderson - Simplex method, LP duality.
1963 Zoutendijk - Feasible direction methods, active set concepts.
1979 Khachiyan - Ellipsoid method: first polynomial algorithm for LP.
1984 Karmarkar - Interior point method for LP, O(n^3.5 L) time.
Practical polynomial LP solver.
1993 Cortes & Vapnik - Support Vector Machine: KKT dual is the
modern SVM. Complementary slackness identifies support vectors.
1996 Byrd et al. - L-BFGS-B: limited memory quasi-Newton for
bound-constrained problems. Industry standard for large-scale
constrained optimisation.
2011 Duchi, Hazan, Singer - Adagrad: adaptive learning rate
implicitly approximates inverse Hessian structure (second-order).
2014 Dauphin et al. - Saddle point dominance in deep learning.
First systematic empirical study of neural network landscape.
2015 Kingma & Ba - Adam optimiser: diagonal Hessian approximation
with momentum. Becomes default for LLM training.
2018 Du et al. - Gradient descent finds global minima in wide nets.
Theoretical basis for why deep learning "works."
2021 Foret et al. - SAM: Sharpness-Aware Minimisation. KKT-derived
update for flatter minima. New SOTA on multiple benchmarks.
2021 Papyan et al. - Neural Collapse: KKT analysis of terminal
training geometry. UFM framework.
2023 Rafailov et al. - DPO: RLHF framed as KKT-constrained
optimisation. Replaces PPO for LLM alignment.
G.2 The Karush Omission
The Karush-Kuhn-Tucker conditions are named for Kuhn and Tucker (1951), but Karush derived essentially the same conditions in his 1939 master's thesis at the University of Chicago under L.M. Graves. Karush never published his thesis, and his work remained unknown until the 1970s when Kuhn acknowledged the priority.
This is a cautionary tale for researchers: unpublished work does not enter the scientific record. Karush's conditions are mathematically equivalent to KKT and were derived first, but priority in science is determined by publication and dissemination.
Modern convention is to include Karush's name (KKT), and some authors use "FOOC" (first-order optimality conditions) to avoid the naming entirely.
G.3 From Constrained Mechanics to Language Models
Lagrange invented his multiplier method to handle constraints in mechanics - specifically, the rigid-rod constraint in a pendulum. The constraint that "the rod length is fixed" is $g(x, y) = x^2 + y^2 - \ell^2 = 0$, and the Lagrange multiplier is the tension in the rod.
Two centuries later, the same mathematical structure appears in:
- The tension (KL penalty) in RLHF that prevents language models from deviating too far from the base policy
- The margin constraint in SVMs, where the Lagrange multiplier is the "tension" pulling the decision boundary toward the support vectors
- The softmax temperature, which is the Lagrange multiplier for the entropy constraint in maximum entropy next-token prediction
The deep unity of Lagrange's insight-that constraints can be absorbed into an objective via auxiliary multipliers-is one of the most productive ideas in the history of mathematics.
Supplement H: Problem Reduction Patterns
H.1 Reducing Constrained Problems to Unconstrained
Many constrained problems have hidden structure that allows elimination of constraints, converting a constrained problem to unconstrained.
Pattern 1: Substitution. If constraint can be solved for one variable, substitute to eliminate it.
Example: $\min_{x, y}\; x^2 + y^2$ s.t. $x + y = 1$. Substitute $y = 1 - x$: minimise $x^2 + (1 - x)^2$. The constraint disappears; the FONC $2x - 2(1 - x) = 0$ gives $x = y = \tfrac12$ - an unconstrained problem.
Pattern 2: Reparameterisation. If the constraint defines a manifold , parameterise directly.
Example: the unit sphere $\{x^2 + y^2 = 1\}$ in $\mathbb{R}^2$: parameterise as $(x, y) = (\cos\theta, \sin\theta)$. Optimise over $\theta$ without any constraint.
For ML: Stiefel manifold optimisation (orthonormal matrices) uses geodesic gradient descent on the manifold, avoiding explicit constraint handling. Used in RNN training and low-rank optimisation.
Pattern 3: Lagrangian method. When substitution is impractical, Lagrange multipliers convert the constrained problem to a (possibly harder) unconstrained problem in augmented variables.
Pattern 4: Penalty methods. Replace $\min f(\mathbf{x})$ s.t. $g(\mathbf{x}) = 0$ with $\min f(\mathbf{x}) + \tfrac{\rho}{2}\|g(\mathbf{x})\|^2$. As $\rho \to \infty$, the penalised solution approaches the constrained solution. Less accurate than Lagrange but simpler to implement; a small numerical sketch follows.
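A minimal quadratic-penalty sketch (the objective, constraint, and penalty schedule are made-up illustrative values; the true constrained optimum is $x = y = 0.5$):

```python
import numpy as np
from scipy.optimize import minimize

# min (x - 2)^2 + (y - 2)^2  s.t.  x + y = 1
f = lambda v: (v[0] - 2) ** 2 + (v[1] - 2) ** 2
g = lambda v: v[0] + v[1] - 1

for rho in (1.0, 10.0, 100.0, 1000.0):
    penalised = lambda v, rho=rho: f(v) + 0.5 * rho * g(v) ** 2
    sol = minimize(penalised, x0=np.zeros(2)).x
    print(f"rho={rho:7.1f}  x={sol}  constraint violation={g(sol):.4f}")
# The violation shrinks like O(1/rho): the penalty solution only reaches feasibility in the limit.
```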
H.2 Problem Recognition Guide
When you encounter an optimisation problem, use this decision tree:
OPTIMIZATION PROBLEM RECOGNITION
Does it have constraints?
NO -> Unconstrained
Is f convex?
YES -> FONC (∇f = 0) is sufficient; any local = global
NO -> FONC necessary; check SOSC; may have local optima
YES -> Constrained
Are constraints equality only?
YES -> Lagrange multipliers; solve n+m equations
NO (has inequalities) -> KKT conditions
Is problem convex?
YES -> KKT iff global optimum (Slater's -> strong duality)
NO -> KKT necessary (LICQ needed); local optimum only
H.3 Standard Problem Forms and Their KKT Systems
Linear Program (LP): $\min_{\mathbf{x}}\; \mathbf{c}^\top\mathbf{x}$ s.t. $A\mathbf{x} \leq \mathbf{b}$
KKT: $\mathbf{c} + A^\top\boldsymbol{\mu} = \mathbf{0}$, $\boldsymbol{\mu} \geq \mathbf{0}$, $\mu_i(\mathbf{a}_i^\top\mathbf{x} - b_i) = 0$
Quadratic Program (QP): $\min_{\mathbf{x}}\; \tfrac12\mathbf{x}^\top Q\mathbf{x} + \mathbf{c}^\top\mathbf{x}$ s.t. $A\mathbf{x} \leq \mathbf{b}$, with $Q \succeq 0$
KKT: $Q\mathbf{x} + \mathbf{c} + A^\top\boldsymbol{\mu} = \mathbf{0}$, standard feasibility and complementary slackness
Second-Order Cone Program (SOCP): $\min_{\mathbf{x}}\; \mathbf{c}^\top\mathbf{x}$ s.t. $\|A_i\mathbf{x} + \mathbf{b}_i\|_2 \leq \mathbf{c}_i^\top\mathbf{x} + d_i$
Subsumes QP and LP; solvable in polynomial time; KKT conditions involve second-order cone complementarity
Semidefinite Program (SDP): $\min_{X}\; \langle C, X\rangle$ s.t. $\langle A_i, X\rangle = b_i$, $X \succeq 0$
KKT: matrix complementarity $XZ = 0$, where $Z \succeq 0$ is the dual variable; powerful for combinatorial relaxations and spectral graph problems
These standard forms are recognised and solved by CVXPY and other convex optimisation packages. Expressing your ML problem in one of these forms gives you access to provably correct, polynomial-time solvers.
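A minimal CVXPY sketch of a QP in standard form (the matrices are arbitrary illustrative values); the solver exposes the KKT multipliers via each constraint's dual_value:

```python
import cvxpy as cp
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, -1.0])
x = cp.Variable(2)
constraints = [x >= 0, cp.sum(x) <= 1]
prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x), constraints)
prob.solve()

print("x* =", x.value)
print("mu* (x >= 0):      ", constraints[0].dual_value)   # multipliers for the bound constraints
print("mu* (sum(x) <= 1): ", constraints[1].dual_value)   # shadow price of the budget
# Complementary slackness: each multiplier is (numerically) zero unless its constraint is active.
```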
Supplement I: Numerical Experiments and Verification
I.1 Verifying KKT Conditions Numerically
In practice, KKT conditions are verified numerically after solving an optimisation problem. Here is a systematic protocol:
import numpy as np

def numerical_gradient(f, x, eps=1e-6):
    # Centred finite differences (see C.3)
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def numerical_jacobian(F, x, eps=1e-6):
    # Stack the gradient of each component of the vector-valued F
    return np.array([numerical_gradient(lambda z, i=i: np.atleast_1d(F(z))[i], x, eps)
                     for i in range(np.atleast_1d(F(x)).size)])

def verify_kkt(x_star, lambda_star, mu_star, f, g, h, tol=1e-6):
    """
    Verify KKT conditions for problem:
        min f(x)  s.t.  g(x) = 0,  h(x) <= 0
    at candidate solution (x_star, lambda_star, mu_star).
    """
    # 1. Stationarity: grad f + J_g^T lambda + J_h^T mu = 0
    grad_f = numerical_gradient(f, x_star)
    grad_g = numerical_jacobian(g, x_star)  # (m, n)
    grad_h = numerical_jacobian(h, x_star)  # (p, n)
    stationarity = grad_f + lambda_star @ grad_g + mu_star @ grad_h
    cond1 = np.allclose(stationarity, 0, atol=tol)
    # 2. Primal feasibility
    cond2 = np.allclose(g(x_star), 0, atol=tol) and np.all(h(x_star) <= tol)
    # 3. Dual feasibility
    cond3 = np.all(mu_star >= -tol)
    # 4. Complementary slackness
    cond4 = np.allclose(mu_star * h(x_star), 0, atol=tol)
    print(f"Stationarity:        {'PASS' if cond1 else 'FAIL'}")
    print(f"Primal feasibility:  {'PASS' if cond2 else 'FAIL'}")
    print(f"Dual feasibility:    {'PASS' if cond3 else 'FAIL'}")
    print(f"Complementary slack: {'PASS' if cond4 else 'FAIL'}")
    return cond1 and cond2 and cond3 and cond4
This systematic verification-checking all four KKT conditions numerically-is a key debugging tool when implementing constrained optimisation algorithms.
I.2 Duality Gap Monitoring
For convex problems, monitoring the duality gap during optimisation provides both a convergence certificate and a diagnostic:
def duality_gap(primal_val, dual_val):
"""
Compute duality gap. Should be >= 0 (weak duality).
Converges to 0 at optimality for convex problems (strong duality).
"""
gap = primal_val - dual_val
assert gap >= -1e-8, f"Negative duality gap: {gap}" # weak duality violation
return gap
A duality gap of $\epsilon$ certifies that both the primal and dual solutions are within $\epsilon$ of optimality - a certificate of near-optimality without knowing the true optimal value.
I.3 Critical Point Classification Checklist
When you find a critical point numerically, this checklist determines its type:
| Step | Check | How | Interpretation |
|---|---|---|---|
| 1 | Verify stationarity | $\|\nabla f(\mathbf{x}^*)\| \leq$ tol | True critical point (not just near-critical) |
| 2 | Compute Hessian | Via AD or finite differences | $H = \nabla^2 f(\mathbf{x}^*)$ |
| 3 | Eigenvalue decomposition | Sort eigenvalues $\lambda_1 \leq \dots \leq \lambda_n$ | Spectrum reveals local geometry |
| 4 | Count negatives | $k = \#\{\lambda_i < -\text{tol}\}$ | Morse index |
| 5 | Count near-zero | $m = \#\{|\lambda_i| \leq \text{tol}\}$ | Degenerate / flat directions |
| 6 | Classify | $k = 0$: min; $k = n$: max; else: saddle | Standard classification |
| 7 | Check SOSC | All $\lambda_i > 0$ strictly | Strict local min |
| 8 | Verify global (if convex) | FONC alone suffices | No need for SOSC |
This procedure is the numerical implementation of the second-order test and is embedded in standard optimisation post-processing.
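A compact sketch of the checklist as code (tolerance and the toy example are illustrative choices):

```python
import numpy as np

def classify_critical_point(grad, hess, tol=1e-8):
    """Classify a candidate critical point from its gradient and Hessian (steps 1-7 above)."""
    if np.linalg.norm(grad) > tol:
        return "not a critical point"
    eigs = np.linalg.eigvalsh(hess)               # sorted eigenvalues of the symmetric Hessian
    k = int(np.sum(eigs < -tol))                  # Morse index
    m = int(np.sum(np.abs(eigs) <= tol))          # near-zero (degenerate) directions
    if m > 0:
        return f"degenerate (Morse index {k}, {m} flat directions) - second-order test inconclusive"
    if k == 0:
        return "strict local minimum"
    if k == len(eigs):
        return "strict local maximum"
    return f"saddle point (Morse index {k})"

# f(x, y) = x^2 - y^2 at the origin: gradient 0, Hessian diag(2, -2) -> saddle.
print(classify_critical_point(np.zeros(2), np.diag([2.0, -2.0])))
```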
Summary: The Three Pillars of Optimality
This section has developed three interconnected pillars that together form the mathematical foundation of constrained optimisation:
Pillar 1: First and Second-Order Conditions. The gradient and Hessian encode all local information about a critical point. FONC ($\nabla f(\mathbf{x}^*) = \mathbf{0}$) is necessary; SOSC ($\nabla f = \mathbf{0}$ and $\nabla^2 f(\mathbf{x}^*) \succ 0$) is sufficient. For convex problems, FONC is sufficient for global optimality - the Hessian never needs to be checked.
Pillar 2: Lagrange Multipliers and KKT. Constraints are not obstacles but information: each active constraint generates a Lagrange multiplier (shadow price) measuring the sensitivity of the objective to that constraint. The KKT conditions generalize FONC to handle both equality and inequality constraints, with complementary slackness providing the geometric signature of which constraints matter.
Pillar 3: Duality. Every constrained problem has a dual that provides a lower bound (weak duality) and-for convex problems with a Slater point-an exact alternative formulation (strong duality). The dual reveals hidden structure (SVM kernel trick), provides certificates of optimality (duality gap), and connects constrained optimisation to game theory (saddle points).
These three pillars appear throughout AI and ML: gradient-based training (Pillar 1), SVM/PCA/attention (Pillar 2), and RL/alignment (Pillar 3). Understanding them deeply is not optional for a serious ML researcher-they are the grammar of the field.
Supplement J: Connections to Information Geometry
J.1 Fisher Information and the Natural Gradient
The gradient $\nabla_{\boldsymbol{\theta}} L$ is computed with respect to the Euclidean metric on parameter space. But parameter spaces of probability distributions carry a natural Riemannian metric - the Fisher information metric: $F(\boldsymbol{\theta}) = \mathbb{E}_{\mathbf{x} \sim p_{\boldsymbol{\theta}}}\big[\nabla_{\boldsymbol{\theta}}\log p_{\boldsymbol{\theta}}(\mathbf{x})\,\nabla_{\boldsymbol{\theta}}\log p_{\boldsymbol{\theta}}(\mathbf{x})^\top\big]$
Natural gradient. The gradient of $L$ in the Fisher metric is: $\tilde{\nabla} L = F(\boldsymbol{\theta})^{-1}\nabla L$
The natural gradient step is the KKT condition for:
$\min_{\Delta\boldsymbol{\theta}}\; \nabla L(\boldsymbol{\theta})^\top\Delta\boldsymbol{\theta}$ s.t. $\mathrm{KL}\big(p_{\boldsymbol{\theta}}\,\|\,p_{\boldsymbol{\theta} + \Delta\boldsymbol{\theta}}\big) \approx \tfrac12\Delta\boldsymbol{\theta}^\top F\,\Delta\boldsymbol{\theta} \leq \varepsilon$ (minimise the linearised loss subject to a KL-ball constraint).
K-FAC and Adam as natural gradient approximations. K-FAC (Kronecker-Factored Approximate Curvature) approximates $F$ via Kronecker products for efficiency. Adam's adaptive learning rates approximate a diagonal preconditioner $\mathrm{diag}(F)^{-1/2}$ - a (square-rooted) diagonal approximation to the Fisher inverse. Both are principled from the viewpoint of natural gradient descent, whose KKT formulation makes the Fisher metric (= second-order constraint) explicit.
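A tiny sketch of a natural-gradient step for a categorical softmax model, whose Fisher matrix has the closed form $\mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^\top$ (the logits and loss gradient below are made-up values; the damping term is added because this Fisher matrix is singular along the all-ones direction):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

theta = np.array([0.5, -0.2, 0.1])
p = softmax(theta)
F = np.diag(p) - np.outer(p, p)              # Fisher information of the categorical softmax
grad = np.array([0.3, -0.1, -0.2])           # hypothetical loss gradient in theta

nat_grad = np.linalg.solve(F + 1e-6 * np.eye(3), grad)   # damped solve: F^{-1} grad
print(grad, nat_grad)   # the natural gradient rescales directions by the local KL geometry
```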
J.2 Optimal Transport and Wasserstein Duality
The Wasserstein distance between distributions $\mu$ and $\nu$ over $\mathcal{X}$ is defined as: $W_c(\mu, \nu) = \inf_{\gamma \in \Pi(\mu, \nu)} \mathbb{E}_{(\mathbf{x}, \mathbf{y}) \sim \gamma}\big[c(\mathbf{x}, \mathbf{y})\big]$
where $\Pi(\mu, \nu)$ is the set of joint distributions $\gamma$ with marginals $\mu$ and $\nu$.
Kantorovich duality. By Lagrange duality (the marginal constraint is linear in $\gamma$), the Wasserstein distance admits a dual: $W_c(\mu, \nu) = \sup_{\varphi(\mathbf{x}) + \psi(\mathbf{y}) \leq c(\mathbf{x}, \mathbf{y})} \mathbb{E}_{\mu}[\varphi] + \mathbb{E}_{\nu}[\psi]$
The dual functions $\varphi$ and $\psi$ are the Lagrange multipliers for the marginal constraints on $\mu$ and $\nu$.
For LLMs. Wasserstein GANs (WGANs) use the Kantorovich dual to train discriminators (critics) with a 1-Lipschitz constraint (the dual constraint specialised to the cost $c = \|\mathbf{x} - \mathbf{y}\|$). The dual formulation is more stable than the Jensen-Shannon divergence of standard GANs - a direct application of duality theory to generative model training.
End of 04 Optimality Conditions notes. This section is 2000+ lines covering unconstrained/constrained optimality from first principles through modern AI applications.
Navigation: <- Chain Rule and Backpropagation | Next: Gradient Descent and Convergence ->
Supplement K: Extended Exercise Solutions (Hints and Partial Solutions)
Exercise 1 Hints
(a) Finding critical points of :
Cases: ( or ) combined with ( or ).
For : from get . Critical point: .
For , or : combine with .
(b) At : . Hessian is zero-degenerate critical point, inconclusive test. Must use higher-order analysis (origin is a saddle of ).
Exercise 4 Hints
(c) $f(X) = -\log\det X$ on $\mathbb{S}^n_{++}$: this is convex. Key: compute the Hessian in the matrix sense. For a perturbation $X + tV$: the second-order term is $\tfrac12 t^2\,\mathrm{tr}\big((X^{-1}V)^2\big) = \tfrac12 t^2\,\|X^{-1/2} V X^{-1/2}\|_F^2 \geq 0$.
(d) $f(x, y) = x^2/y$ for $y > 0$. Hessian: $\nabla^2 f = \dfrac{2}{y^3}\begin{bmatrix} y^2 & -xy \\ -xy & x^2 \end{bmatrix}$. Check: $\det\nabla^2 f = 0$, $\mathrm{tr}\,\nabla^2 f > 0$. PSD but not PD. Actually convex: it's a perspective function - the perspective of the convex $x^2$.
Exercise 7 Extended Notes: SAM Convergence
The SAM update can be viewed as gradient descent on the perturbed objective $L^{\text{SAM}}(\mathbf{w}) = \max_{\|\boldsymbol{\epsilon}\| \leq \rho} L(\mathbf{w} + \boldsymbol{\epsilon})$.
The KKT condition of the inner maximisation shows that $\boldsymbol{\epsilon}^* \approx \rho\,\nabla L(\mathbf{w})/\|\nabla L(\mathbf{w})\|$ is correct (to first order) when the constraint is active (which it always is for nonzero gradients). The multiplier is the shadow price of relaxing the perturbation budget $\rho$.
This connection shows SAM is not heuristic - it is the solution to a well-defined KKT problem. The sharpness regularisation effect comes from the fact that at a sharp minimum, $\|\nabla L(\mathbf{w} + \boldsymbol{\epsilon}^*)\|$ is large (the gradient grows quickly along high-curvature directions), so SAM takes a larger effective step away from sharp directions.
Supplement L: Quick Reference Tables
Lagrange vs KKT: Choosing the Right Tool
| Situation | Method | Key Condition |
|---|---|---|
| No constraints | First/Second order | $\nabla f = 0$, check $\nabla^2 f$ |
| Equality constraints only | Lagrange multipliers | $\nabla f = \sum_i \lambda_i \nabla g_i$, $g(\mathbf{x}^*) = 0$ |
| Inequality constraints | KKT | 4 conditions including $\mu \geq 0$, CS |
| Mixed equality + inequality | KKT | Full 4-condition system |
| Convex problem | KKT (global) | KKT $\iff$ global min (Slater holds) |
| Non-convex problem | KKT (local) | KKT necessary (LICQ needed); local min only |
| LP/QP | Interior point or simplex | KKT system solved directly |
| Non-smooth | Subdifferential | $0 \in \partial f(\mathbf{x}^*)$ |
Shadow Price Interpretation Guide
| Context | Multiplier meaning |
|---|---|
| Budget constraint | $\lambda^*$ = value of one more unit of budget |
| Norm constraint | $\lambda^*$ = decrease in loss per unit norm increase |
| SVM margin | $\alpha_i$ = importance of sample $i$ to the decision boundary |
| KL constraint in RLHF | $\beta$ = reward per unit KL allowed (the "temperature") |
| Power budget (water-filling) | $\nu$ = value of one more unit of transmit power |
| Expected return (portfolio) | $\lambda_1$ = variance cost per unit extra expected return |
Convexity Verification Checklist
| Check | Method | Outcome |
|---|---|---|
| $\nabla^2 f(\mathbf{x}) \succeq 0$ | Check Hessian everywhere | Convex iff true |
| $f(\mathbf{y}) \geq f(\mathbf{x}) + \nabla f(\mathbf{x})^\top(\mathbf{y} - \mathbf{x})$ | Check first-order condition everywhere | Convex iff true |
| $f$ is a sum | Each term convex? | Sum is convex |
| $f(A\mathbf{x} + \mathbf{b})$ (affine precompose) | Is $f$ convex? | Composition is convex |
| $\max_i f_i$ | Each $f_i$ convex? | Max is convex |
| $\log\sum_i \exp(\cdot)$ | Each argument affine? | Convex (log-sum-exp) |
| $\mathbf{x}^\top Q\mathbf{x}$ | Always (for $Q \succeq 0$) | Convex (quadratic, $Q \succeq 0$) |