Optimality Conditions, Part 1: Intuition to Conceptual Bridge
1. Intuition
1.1 The Optimization Landscape
Imagine standing in a hilly landscape and trying to find the lowest valley. The gradient tells you the direction of steepest ascent at your current position - to descend, move against the gradient. But this strategy can strand you in a local valley that is not the deepest one, or trap you at a mountain pass (saddle point) where the gradient is zero but you are not at a minimum.
(Figure: the optimisation landscape in 1D and 2D. Left - a 1D loss curve with a local minimum and a global minimum. Right - a 2D loss surface in contour view, with a saddle point sitting between a local and the global minimum.)
Key players:
| Point type | Characterisation |
|---|---|
| Local minimum | $\nabla f = 0$, $H \succ 0$ (all eigenvalues > 0) |
| Local maximum | $\nabla f = 0$, $H \prec 0$ (all eigenvalues < 0) |
| Saddle point | $\nabla f = 0$, $H$ indefinite (mixed signs) |
| Regular point | $\nabla f \neq 0$ (not a critical point) |
The core difficulty: the gradient condition is necessary but not sufficient for a minimum. A zero gradient could mark a minimum, a maximum, or a saddle point. Second-order conditions (the Hessian) are needed to distinguish between them.
Constraints add further complexity. If the solution must lie on a curve or within a region, the unconstrained minimum may be infeasible, and the optimum occurs at a very different location - possibly on the boundary of the feasible region.
1.2 Why ML Needs Optimality Theory
Every core ML algorithm is secretly an optimisation problem, and the optimality conditions determine its solution structure:
| ML Algorithm | Optimisation Problem | Optimality Condition Used |
|---|---|---|
| Linear regression | $\min_w \|Xw - y\|^2$ | $\nabla f = 0$ -> normal equations |
| Ridge regression | $\min_w \|Xw - y\|^2 + \lambda\|w\|^2$ | Lagrangian of constrained form |
| Lasso | $\min_w \|Xw - y\|^2 + \lambda\|w\|_1$ | Subdifferential conditions |
| SVM | $\min \frac{1}{2}\|w\|^2$ s.t. margin constraints | KKT -> support vectors |
| PCA | $\max_v v^\top \Sigma v$ s.t. $\|v\| = 1$ | Lagrange -> eigenvalue problem |
| Logistic regression | $\min_w \sum_i \log(1 + e^{-y_i w^\top x_i})$ | Convex -> unique global minimum |
| Neural network | $\min_\theta L(\theta)$ (non-convex) | Saddle point dominance; NTK |
| Attention (softmax) | $\max_p H(p)$ s.t. linear constraints | Maximum entropy via Lagrange |
| SAM training | $\min_w \max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon)$ | KKT of inner max |
For AI: Understanding that the SVM dual problem depends only on inner products (from the KKT conditions) is what enables the kernel trick - the foundation of kernel methods. Understanding that softmax is the solution to a maximum entropy problem under linear constraints connects attention to thermodynamics and information theory.
1.3 Historical Context
| Year | Person | Contribution |
|---|---|---|
| 1788 | Lagrange | Mécanique Analytique - method of multipliers for equality constraints |
| 1847 | Cauchy | Steepest descent algorithm |
| 1939 | Karush | Master's thesis: inequality constraints with multipliers (KKT conditions - unpublished) |
| 1951 | Kuhn & Tucker | Independent rediscovery and publication of KKT conditions |
| 1952 | Arrow, Hurwicz, Uzawa | Saddle point theorem for convex optimisation |
| 1970 | Rockafellar | Convex Analysis - definitive treatment of duality and subdifferentials |
| 1992 | Boser, Guyon & Vapnik | SVM uses the dual to derive the kernel trick |
| 2004 | Boyd & Vandenberghe | Convex Optimization (textbook; full text freely available online) |
| 2014 | Dauphin et al. | Saddle point dominance in deep networks |
| 2020 | Papyan et al. | Neural collapse: KKT characterisation of the terminal training phase |
| 2021 | Foret et al. | SAM: sharpness-aware minimisation as constrained min-max |
1.4 Unconstrained vs Constrained
There are three fundamental problem classes:
THREE PROBLEM CLASSES
- Unconstrained: $\min_{x \in \mathbb{R}^n} f(x)$. Solution characterised by $\nabla f = 0$; theory: second-order conditions.
- Equality constrained: $\min f(x)$ s.t. $g_i(x) = 0$, $i = 1, \dots, m$. Solution: Lagrange multipliers, $\nabla f + \sum_i \lambda_i \nabla g_i = 0$ with $g(x) = 0$.
- Inequality constrained: $\min f(x)$ s.t. $g_i(x) = 0$, $i = 1, \dots, m$, and $h_j(x) \le 0$, $j = 1, \dots, p$. Solution: KKT conditions, $\nabla f + \sum_i \lambda_i \nabla g_i + \sum_j \mu_j \nabla h_j = 0$, $g = 0$, $h \le 0$, $\mu \ge 0$, $\mu_j h_j = 0$ for all $j$.

KEY INSIGHT: Equality-constrained problems are a special case of KKT (set $\mu = 0$ for absent inequalities, keep $\lambda$ free for equalities).
The feasible set $\mathcal{F} = \{x : g_i(x) = 0 \text{ for all } i, \; h_j(x) \le 0 \text{ for all } j\}$ is the set of points satisfying all constraints. The optimisation problem asks for the minimum of $f$ over $\mathcal{F}$.
2. First-Order Necessary Conditions
2.1 The Gradient Condition
Theorem (First-Order Necessary Condition): Let $f : \mathbb{R}^n \to \mathbb{R}$ be differentiable. If $x^*$ is a local minimum of $f$, then:
$\nabla f(x^*) = 0.$
Proof: Suppose $x^*$ is a local minimum but $\nabla f(x^*) \neq 0$. Define the direction $d = -\nabla f(x^*)$. By the directional derivative formula:
$\left.\frac{d}{dt} f(x^* + t d)\right|_{t = 0} = \nabla f(x^*)^\top d = -\|\nabla f(x^*)\|^2 < 0.$
By continuity of $\nabla f$, there exists $\bar{t} > 0$ such that $f(x^* + t d) < f(x^*)$ for all $t \in (0, \bar{t}]$. But this contradicts $x^*$ being a local minimum. $\blacksquare$
Warning - necessity only: The gradient condition is necessary but not sufficient. The function $f(x) = x^3$ has $f'(0) = 0$ but no local extremum at $x = 0$ (it's an inflection point). The function $f(x, y) = x^2 - y^2$ has $\nabla f(0, 0) = 0$ but a saddle point there. Second-order conditions are needed to determine which type of critical point we have.
Critical points (also called stationary points) are all $x$ satisfying $\nabla f(x) = 0$. They are candidates for minima, maxima, or saddle points - the Hessian distinguishes between them.
2.2 Critical Points and Their Classification
In $\mathbb{R}^2$, a function has four types of critical points, determined by the sign of the Hessian determinant $\det(H)$ and the sign of $f_{xx}$:
SECOND-DERIVATIVE TEST IN $\mathbb{R}^2$ (at a critical point, $\nabla f = 0$):
- $\det(H) > 0$ and $f_{xx} > 0$ -> LOCAL MINIMUM (both eigenvalues of $H$ positive)
- $\det(H) > 0$ and $f_{xx} < 0$ -> LOCAL MAXIMUM (both eigenvalues of $H$ negative)
- $\det(H) < 0$ -> SADDLE POINT ($H$ has mixed-sign eigenvalues)
- $\det(H) = 0$ -> DEGENERATE (test inconclusive; higher-order analysis needed)
Geometric picture for each: a minimum is a bowl, a maximum is a hill, a saddle is a mountain pass, and a degenerate critical point sits in a flat region.
Standard examples:
- $f(x, y) = x^2 + y^2$: unique global minimum at the origin; $H = 2I \succ 0$
- $f(x, y) = -(x^2 + y^2)$: unique global maximum at the origin; $H = -2I \prec 0$
- $f(x, y) = x^2 - y^2$: saddle point at the origin; $H = \mathrm{diag}(2, -2)$ indefinite
- $f(x, y) = x^4 + y^4$: minimum at the origin; $H(0, 0) = 0$ (degenerate - minimum confirmed by inspection)
In higher dimensions ($n > 2$): A critical point is a strict local minimum if $\nabla^2 f(x^*) \succ 0$ (all eigenvalues positive), and a strict local maximum if $\nabla^2 f(x^*) \prec 0$. Any other case (indefinite or semidefinite $H$) requires further analysis.
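The eigenvalue test is mechanical enough to automate. A minimal sketch in NumPy (the helper `classify_critical_point` is illustrative, not a library routine):

```python
import numpy as np

def classify_critical_point(H, tol=1e-8):
    """Classify a critical point from the eigenvalues of its (symmetric) Hessian."""
    eig = np.linalg.eigvalsh(H)
    if np.all(eig > tol):
        return "strict local minimum"
    if np.all(eig < -tol):
        return "strict local maximum"
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point"
    return "degenerate (higher-order analysis needed)"

# Hessians at the origin for the standard examples above:
print(classify_critical_point(np.diag([2.0, 2.0])))    # x^2 + y^2  -> minimum
print(classify_critical_point(np.diag([-2.0, -2.0])))  # -(x^2+y^2) -> maximum
print(classify_critical_point(np.diag([2.0, -2.0])))   # x^2 - y^2  -> saddle
```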
2.3 Saddle Points in Deep Learning
The classical concern was that neural networks might converge to poor local minima. Empirically, gradient descent finds solutions of similar quality regardless of initialisation for overparameterised networks. This was theoretically explained in a landmark 2014 paper.
Dauphin et al. (2014) - Identifying and attacking the saddle point problem: For a function $f : \mathbb{R}^n \to \mathbb{R}$ with $n$ large, a random critical point with loss above the global minimum is a saddle point with overwhelming probability (under a statistical physics model). The fraction of critical points that are local minima decreases exponentially with both $n$ and the height $\epsilon$ above the global minimum.
SADDLE POINT DOMINANCE
- Among critical points at energy level $\epsilon$ above the global minimum, the fraction that are local minima scales like $\exp(-n \, c(\epsilon))$ with $c(\epsilon) > 0$; for $n \sim 10^9$ parameters this is essentially zero.
- What gradient descent actually encounters: high-loss regions contain many saddle points and few local minima; low-loss regions contain even fewer local minima; near the global minimum the landscape is dominated by flat regions (many equivalent minima).
- SGD noise and momentum help escape saddles; saddle-free Newton methods explicitly exploit the negative-curvature direction.
Why does SGD escape saddle points? At a saddle point, the Hessian has at least one negative eigenvalue. The corresponding eigenvector is a descent direction - the function decreases along it. SGD's noise perturbations project onto this direction with nonzero probability, enabling escape. Deterministic gradient descent can be trapped at strict saddle points, but this is a measure-zero set.
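A minimal sketch of this mechanism on the toy saddle $f(x, y) = x^2 - y^2$ (plain NumPy; step size and noise scale are illustrative): deterministic gradient descent started on the stable manifold $y = 0$ converges to the saddle, while a tiny noise injection picks up the negative-curvature direction and escapes.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(p):
    """Gradient of f(x, y) = x^2 - y^2, which has a strict saddle at the origin."""
    x, y = p
    return np.array([2 * x, -2 * y])

for noise in [0.0, 1e-3]:
    p = np.array([1.0, 0.0])              # start exactly on the stable manifold y = 0
    for _ in range(300):
        p = p - 0.05 * (grad(p) + noise * rng.standard_normal(2))
    print(f"noise={noise}: final point {p}")
# noise=0.0: converges to the saddle (0, 0) and stays there
# noise>0:   y is kicked off zero and grows along the negative-curvature direction
```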
For AI: The practical implication is that for overparameterised models (where the number of parameters far exceeds the number of data points), gradient descent on a non-convex loss converges to global or near-global optima. This is one theoretical explanation for why training large language models with SGD/Adam works so well.
2.4 Stationary Points of Common ML Loss Functions
MSE (Linear Regression): $L(w) = \frac{1}{2}\|Xw - y\|^2$, so
$\nabla L(w) = X^\top (Xw - y) = 0 \quad\Longrightarrow\quad X^\top X w = X^\top y.$
This is the normal equation. It has a unique solution when $X$ has full column rank. The Hessian $\nabla^2 L = X^\top X \succeq 0$ - MSE is convex, so the critical point is the global minimum.
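A quick numerical check of both conditions on synthetic data (a sketch; `np.linalg.solve` on the normal equations is fine at this scale, though `lstsq` is the numerically safer route):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
y = rng.standard_normal(100)

w = np.linalg.solve(X.T @ X, X.T @ y)        # solve the normal equations

# First-order condition: the gradient X^T (Xw - y) vanishes at the solution.
print(np.linalg.norm(X.T @ (X @ w - y)))     # ~1e-13

# Second-order condition: X^T X is positive definite for full-column-rank X.
print(np.linalg.eigvalsh(X.T @ X).min() > 0) # True
```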
Cross-Entropy (Logistic Regression): $\nabla L(w) = X^\top (\sigma(Xw) - y) = 0$.
No closed-form solution (a transcendental equation). The Hessian $\nabla^2 L = X^\top S X \succeq 0$, with $S = \mathrm{diag}(\sigma_i (1 - \sigma_i))$, confirms convexity. Gradient descent (or Newton's method) converges to the unique global minimum if it exists (it may not if the data is linearly separable).
Cross-Entropy (Softmax / LLM output): Setting $\partial L / \partial z = \mathrm{softmax}(z) - e_y = 0$ (as derived in 03) gives $p_y = 1$ and $p_k = 0$ for $k \neq y$. This is a perfectly confident prediction - the loss approaches zero but never reaches it for finite logits.
3. Second-Order Conditions
First-order conditions identify candidates; second-order conditions distinguish minima from maxima from saddle points. The Hessian matrix - the matrix of second partial derivatives - encodes all the local curvature information needed for this classification.
3.1 Second-Order Necessary Condition (SONC)
Theorem (SONC). If $x^*$ is a local minimum of $f$ and $f$ is twice continuously differentiable near $x^*$, then:
$\nabla f(x^*) = 0 \quad \text{and} \quad \nabla^2 f(x^*) \succeq 0$
(the Hessian is positive semi-definite).
Proof. Let $d$ be any direction. Taylor expansion gives:
$f(x^* + t d) = f(x^*) + t \, \nabla f(x^*)^\top d + \frac{t^2}{2} d^\top \nabla^2 f(x^*) d + o(t^2).$
Since $x^*$ is a local minimum, $\nabla f(x^*) = 0$ and $f(x^* + t d) \ge f(x^*)$ for all small $t$. Thus:
$\frac{t^2}{2} d^\top \nabla^2 f(x^*) d + o(t^2) \ge 0.$
Dividing by $t^2 / 2$ and letting $t \to 0$:
$d^\top \nabla^2 f(x^*) d \ge 0,$
which is exactly $\nabla^2 f(x^*) \succeq 0$. $\blacksquare$
Note: SONC is necessary but not sufficient. $f(x) = x^4$ at $x = 0$ has $f''(0) = 0$ yet is a minimum; $f(x) = x^3$ at the origin also has $f''(0) = 0$ (so SONC holds), but the origin is not a minimum.
3.2 Second-Order Sufficient Condition (SOSC)
Theorem (SOSC). If $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \succ 0$ (positive definite), then $x^*$ is a strict local minimum.
Proof. Since $\nabla^2 f(x^*) \succ 0$, its smallest eigenvalue satisfies $\lambda_{\min} > 0$. By continuity of the Hessian, there exists $r > 0$ such that $\nabla^2 f(x) \succeq \frac{\lambda_{\min}}{2} I$ for all $\|x - x^*\| \le r$.
For any $d$ with $\|d\| = 1$ and small $t > 0$, the Taylor expansion with $\nabla f(x^*) = 0$ gives:
$f(x^* + t d) \ge f(x^*) + \frac{\lambda_{\min}}{4} t^2 > f(x^*)$
for sufficiently small $t$. $\blacksquare$
Gap between SONC and SOSC: The boundary case $\nabla^2 f(x^*) \succeq 0$ but not $\succ 0$ (a degenerate, semidefinite Hessian) requires higher-order analysis. This is common in deep learning, where flat directions proliferate.
SECOND-ORDER TEST SUMMARY (at a critical point $x^*$ with $\nabla f(x^*) = 0$, $H = \nabla^2 f(x^*)$):

| $H$ | Result | Name |
|---|---|---|
| $H \succ 0$ | Strict local minimum | SOSC satisfied |
| $H \prec 0$ | Strict local maximum | Max version of SOSC |
| $H$ indefinite | Saddle point | Eigenvalues of both signs |
| $H \succeq 0$, $H \neq 0$ | Might be a minimum (degenerate) | Higher-order analysis needed |
| $H = 0$ | Flat critical point | Need Taylor terms of order $\ge 3$ |

For $\mathbb{R}^2$ via the determinant test (when $\nabla f = 0$):
- $\det(H) > 0$, $H_{11} > 0$ -> local minimum
- $\det(H) > 0$, $H_{11} < 0$ -> local maximum
- $\det(H) < 0$ -> saddle point
- $\det(H) = 0$ -> inconclusive
3.3 Indefinite Hessian and Saddle Points
When $\nabla^2 f(x^*)$ has both positive and negative eigenvalues, $x^*$ is a saddle point: a local minimum along some directions, a local maximum along others.
Morse Theory Preview. For a smooth function $f : \mathbb{R}^n \to \mathbb{R}$, a critical point $x^*$ is non-degenerate if $\nabla^2 f(x^*)$ is invertible. The Morse index of $x^*$ is the number of negative eigenvalues of $\nabla^2 f(x^*)$
(the number of descent directions). A non-degenerate minimum has index $0$; a saddle has index $k$ with $0 < k < n$; a maximum has index $n$. Morse theory relates the topology of the sublevel sets to the critical points and their indices.
For deep learning: The loss landscape of an $n$-parameter network at a critical point near a good solution tends to have many positive eigenvalues and few (often negligible) negative ones. The fraction of negative Hessian eigenvalues is typically very small at good solutions - confirming that most critical points found in practice are near-minima, not true saddles with large negative Hessian components.
3.4 Global Optimality from Convexity
For convex functions, the local-global distinction disappears entirely:
Theorem. If $f$ is convex and $x^*$ is a local minimum, then $x^*$ is a global minimum.
Proof. Suppose $x^*$ is a local min but not global: there exists $y$ with $f(y) < f(x^*)$. For $t \in (0, 1)$, convexity gives:
$f(x^* + t (y - x^*)) \le (1 - t) f(x^*) + t f(y) < f(x^*).$
As $t \to 0$, the point $x^* + t (y - x^*)$ approaches $x^*$ arbitrarily closely while having a lower function value - contradicting $x^*$ being a local minimum. $\blacksquare$
Corollary. For differentiable convex $f$: $\nabla f(x^*) = 0$ iff $x^*$ is a global minimum.
This is the fundamental reason convex optimization is "easy" compared to non-convex: any local search algorithm that finds a stationary point has found the global optimum.
3.5 Hessian at ML Optima
Understanding the Hessian at key ML critical points provides geometric insight into learning:
Linear Regression. For $L(w) = \frac{1}{2}\|Xw - y\|^2$:
$\nabla^2 L = X^\top X,$
independent of $w$. The Hessian is constant - this is a quadratic. Its eigenvalues are the squared singular values of $X$. If $X$ has full column rank, $\nabla^2 L \succ 0$ everywhere, so the unique critical point is a global minimum.
Logistic Regression. For the cross-entropy loss with $p_i = \sigma(w^\top x_i)$:
$\nabla^2 L = X^\top S X, \quad \text{where } S = \mathrm{diag}(p_i (1 - p_i)).$
Since $p_i (1 - p_i) > 0$, the Hessian is PSD everywhere (and PD if $X$ has full column rank), making logistic regression a convex problem.
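A short sketch verifying the PSD claim at a random weight vector (synthetic data):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))
w = rng.standard_normal(4)

p = 1.0 / (1.0 + np.exp(-X @ w))     # sigmoid probabilities at this w
S = np.diag(p * (1 - p))             # diagonal weights p_i(1 - p_i) > 0
H = X.T @ S @ X                      # Hessian of the logistic loss

print(np.linalg.eigvalsh(H))         # all eigenvalues >= 0 (up to float error)
```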
Neural Networks. At a minimum $\theta^*$, the Hessian typically has:
- A bulk of near-zero eigenvalues (flat landscape in many directions)
- A few large positive eigenvalues (steep curvature in a small subspace)
- Occasional small negative eigenvalues (slight non-convexity)
The ratio of the largest to the smallest (positive) eigenvalue is the condition number - high condition numbers cause slow gradient descent convergence and motivate adaptive methods like Adam.
4. Convex Analysis
Convexity is the mathematical property that makes optimization tractable. Understanding convex sets and functions - their definitions, characterisations, and preservation rules - is a prerequisite to deploying convex duality and KKT theory.
4.1 Convex Sets
Definition. A set $C \subseteq \mathbb{R}^n$ is convex if for all $x, y \in C$ and $t \in [0, 1]$:
$t x + (1 - t) y \in C.$
Geometrically: the line segment between any two points stays inside the set.
Standard examples:
- Hyperplanes $\{x : a^\top x = b\}$ and halfspaces $\{x : a^\top x \le b\}$
- Balls $\{x : \|x - x_0\| \le r\}$ under any norm
- Polyhedra (intersections of halfspaces)
- The positive semidefinite cone $S^n_+$
- The probability simplex $\Delta = \{p : p_i \ge 0, \; \sum_i p_i = 1\}$
Non-examples:
- The unit sphere $\{x : \|x\|_2 = 1\}$ (the boundary only, not the ball): the segment between two points on the sphere passes through the interior
- The integers $\mathbb{Z} \subset \mathbb{R}$: the midpoint of $0$ and $1$ is not an integer
Convexity-preserving operations:
- Intersection: any intersection of convex sets is convex
- Affine image: $A C + b$ is convex if $C$ is convex
- Cartesian product, Minkowski sum
4.2 Convex Functions: Definition and First-Order Characterisation
Definition. A function $f$ on a convex set is convex if for all $x, y$ in the domain and $t \in [0, 1]$:
$f(t x + (1 - t) y) \le t f(x) + (1 - t) f(y).$
Strict convexity: replace $\le$ with $<$ for $x \neq y$ and $t \in (0, 1)$.
First-Order Characterisation. For differentiable $f$, $f$ is convex iff its graph lies above all tangent hyperplanes:
$f(y) \ge f(x) + \nabla f(x)^\top (y - x) \quad \text{for all } x, y.$
This is the inequality that makes $\nabla f(x^*) = 0$ sufficient for global minimality in the convex case: if $\nabla f(x^*) = 0$, then $f(y) \ge f(x^*)$ for all $y$.
Proof sketch. ($\Rightarrow$) Fix $x, y$ and let $t \to 0^+$ in $f(x + t(y - x)) \le (1 - t) f(x) + t f(y)$: rearranging gives $\frac{f(x + t(y - x)) - f(x)}{t} \le f(y) - f(x)$, and the left side tends to $\nabla f(x)^\top (y - x)$, which is the tangent inequality. ($\Leftarrow$) Apply the tangent inequality at $z = t x + (1 - t) y$ against both $x$ and $y$; combining the two inequalities with weights $t$ and $1 - t$ yields the convexity definition.
4.3 Second-Order Characterisation
Theorem. For twice-differentiable $f$: $f$ is convex iff $\nabla^2 f(x) \succeq 0$ for all $x$ in the domain.
Proof. ($\Rightarrow$) If $f$ is convex, the first-order characterisation gives $f(x + t d) \ge f(x) + t \nabla f(x)^\top d$. Expanding the left side in a Taylor series and letting $t \to 0$ yields $d^\top \nabla^2 f(x) d \ge 0$. ($\Leftarrow$) With $\nabla^2 f \succeq 0$ everywhere, the second-order Taylor remainder along the line from $x$ to $y$ is non-negative, which establishes the first-order characterisation.
Examples of convex functions:
- Affine: $a^\top x + b$ (both convex and concave)
- Quadratic: $\frac{1}{2} x^\top Q x + b^\top x$ when $Q \succeq 0$
- Norms: $\|x\|_p$ for $p \ge 1$ (triangle inequality = convexity)
- Log-sum-exp: $\log \sum_i e^{x_i}$ (a smooth convex approximation to $\max$)
- Negative entropy: $\sum_i p_i \log p_i$ on the simplex
- Cross-entropy loss: convex in the weights $w$ of a linear model
Non-convex in ML:
- Any neural network loss with at least one hidden layer; matrix factorisation $\|A B - Y\|_F^2$, which is non-convex jointly in $(A, B)$
4.4 Strongly Convex Functions
Definition. $f$ is $\mu$-strongly convex ($\mu > 0$) if for all $x, y$:
$f(y) \ge f(x) + \nabla f(x)^\top (y - x) + \frac{\mu}{2} \|y - x\|^2.$
Equivalently: $f(x) - \frac{\mu}{2}\|x\|^2$ is convex; or $\nabla^2 f(x) \succeq \mu I$ everywhere.
Strong convexity has powerful consequences:
- Unique minimiser: the quadratic lower bound forces a unique optimal point
- Linear convergence: gradient descent converges geometrically, at rate $(1 - \mu/L)$ per step, where $L$ is the Lipschitz constant of $\nabla f$
- No flat regions: the function cannot become arbitrarily flat in any direction
For AI: Ridge regression is $\lambda$-strongly convex (the regulariser adds $\lambda I$ to the Hessian). This is why $\ell_2$ regularisation speeds up convergence and ensures a unique solution - crucial for ill-conditioned problems.
Condition number: For $\mu$-strongly convex, $L$-smooth functions, $\kappa = L / \mu$ governs convergence. In transformer training, very high condition numbers of the loss landscape motivate adaptive optimisers.
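The dependence on $\kappa$ is easy to see on a two-dimensional quadratic. A minimal sketch (the $1/L$ step size is the standard choice; iteration counts are approximate):

```python
import numpy as np

def gd_iterations(mu, L, tol=1e-6):
    """Gradient descent on f(x) = 1/2 x^T diag(mu, L) x with step size 1/L."""
    H = np.diag([mu, L])
    x = np.array([1.0, 1.0])
    for k in range(1, 20001):
        x = x - (1.0 / L) * (H @ x)      # gradient of the quadratic is Hx
        if np.linalg.norm(x) < tol:
            return k
    return None

print(gd_iterations(1.0, 10.0))    # kappa = 10:  ~1.3e2 iterations
print(gd_iterations(1.0, 100.0))   # kappa = 100: ~1.4e3 iterations (rate 1 - mu/L)
```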
4.5 Preservation Rules and Calculus of Convex Functions
Convexity is preserved under many operations, making it composable:
| Operation | Condition | Result |
|---|---|---|
| Non-negative combination | $\alpha, \beta \ge 0$; $f, g$ convex | $\alpha f + \beta g$ convex |
| Composition with affine | $f$ convex; any $A, b$ | $f(Ax + b)$ convex |
| Pointwise max | $f_i$ convex | $\max_i f_i(x)$ convex |
| Composition | $g$ convex nondecreasing, $f$ convex | $g(f(x))$ convex |
| Partial min | $f(x, y)$ convex in $(x, y)$; $C$ convex | $\inf_{y \in C} f(x, y)$ convex in $x$ |
| Perspective | $f$ convex | $t f(x / t)$ convex on $t > 0$ |
For AI: The logistic loss $\log(1 + e^{-y \, w^\top x})$ is convex in $w$ because it is the convex function $z \mapsto \log(1 + e^{-z})$ composed with an affine function of $w$. Convexity preservation rules let us verify this without computing the Hessian directly.
5. Lagrange Multipliers
When optimization problems come with constraints, unconstrained optimality conditions no longer apply directly. Lagrange multipliers transform constrained problems into unconstrained ones by incorporating the constraints into the objective - at the cost of introducing auxiliary variables.
5.1 Setup: Equality-Constrained Problems
Problem form:
$\min_x f(x) \quad \text{s.t.} \quad g_i(x) = 0, \; i = 1, \dots, m,$
where $f, g_i : \mathbb{R}^n \to \mathbb{R}$ and $m < n$ (fewer constraints than variables).
The Lagrangian: Define $\mathcal{L} : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ by:
$\mathcal{L}(x, \lambda) = f(x) + \sum_{i=1}^m \lambda_i g_i(x).$
The scalars $\lambda_i$ are Lagrange multipliers (or dual variables).
5.2 Geometric Derivation
The deepest way to understand why Lagrange's method works is geometric. At a constrained minimum $x^*$:
Claim: $\nabla f(x^*)$ must lie in the span of $\{\nabla g_1(x^*), \dots, \nabla g_m(x^*)\}$.
Why: The feasible set near $x^*$ is approximately the linear manifold $\{x : \nabla g_i(x^*)^\top (x - x^*) = 0 \text{ for all } i\}$ - the tangent plane to each constraint surface. Any feasible direction $d$ from $x^*$ must satisfy $\nabla g_i(x^*)^\top d = 0$ for all $i$.
If $\nabla f(x^*)$ had a component orthogonal to all $\nabla g_i(x^*)$, that component would be a feasible direction along which $f$ decreases - contradicting $x^*$ being a local constrained minimum.
Therefore $\nabla f(x^*)$ has no component in the tangent space; it must lie entirely in the normal space spanned by the constraint gradients:
$\nabla f(x^*) + \sum_{i=1}^m \lambda_i \nabla g_i(x^*) = 0.$
(Figure: Lagrange multiplier geometry in $\mathbb{R}^2$. The level curves $f = c_1, c_2, c_3$ of the objective cross the constraint curve $g(x, y) = 0$; at the optimum $x^*$ the level curve of $f$ is tangent to the constraint. $\nabla g$ points normal to $g = 0$ and $\nabla f$ points normal to the level curve, so tangency means the two normals are parallel: $\nabla f = -\lambda \nabla g$.)
5.3 The Lagrange Multiplier Theorem
Theorem (Lagrange, 1788). Let $x^*$ be a local minimum of $f$ subject to $g_i(x) = 0$, $i = 1, \dots, m$. If the Linear Independence Constraint Qualification (LICQ) holds at $x^*$ - i.e., $\nabla g_1(x^*), \dots, \nabla g_m(x^*)$ are linearly independent - then there exists $\lambda^* \in \mathbb{R}^m$ such that:
$\nabla f(x^*) + \sum_{i=1}^m \lambda_i^* \nabla g_i(x^*) = 0, \qquad g_i(x^*) = 0 \text{ for all } i.$
Together these are $n + m$ equations in the $n + m$ unknowns $(x^*, \lambda^*)$.
LICQ matters: Without LICQ the theorem can fail. Example: minimise $f(x, y) = x$ subject to $g_1 = x^2 + (y - 1)^2 - 1 = 0$ and $g_2 = x^2 + (y + 1)^2 - 1 = 0$. The feasible set is the single point $(0, 0)$, where the constraint gradients are $(0, -2)$ and $(0, 2)$ - linearly dependent. $\nabla f = (1, 0)$ is not in their span, so no Lagrange multiplier exists.
5.4 Sensitivity Interpretation: Shadow Prices
The Lagrange multiplier $\lambda_i^*$ has a precise economic interpretation: it measures how much the optimal value changes as constraint $i$ is relaxed.
Theorem (Envelope). Let $v(c)$ be the optimal value of $\min f(x)$ subject to $g_i(x) = c_i$. Then:
$\frac{\partial v}{\partial c_i} = -\lambda_i^*.$
Proof sketch. Differentiating the Lagrangian optimality conditions with respect to $c$ and applying the chain rule yields $\nabla_c v = -\lambda^*$ (the Implicit Function Theorem controls how $x^*(c)$ moves with $c$).
For AI: In constrained training (e.g., "minimize loss subject to $\|w\|^2 \le B$"), $\lambda^*$ tells you how much more you could improve the loss by relaxing the norm constraint by one unit. This motivates choosing the right regularisation strength: the optimal multiplier $\lambda^*$ is exactly the weight-decay strength that enforces the constraint.
5.5 Multiple Constraints and Second-Order Conditions
Multiple equality constraints: With $m$ equality constraints, the stationary point satisfies $n + m$ stationarity and feasibility equations. Second-order analysis requires the bordered Hessian or, equivalently, the Hessian of the Lagrangian restricted to the tangent space of the constraints:
$d^\top \nabla^2_{xx} \mathcal{L}(x^*, \lambda^*) \, d > 0 \quad \text{for all } d \neq 0 \text{ with } \nabla g_i(x^*)^\top d = 0 \text{ for all } i$
is the second-order sufficient condition for a constrained local minimum.
5.6 ML Applications of Lagrange Multipliers
PCA as constrained optimisation. Find $v$ maximising the variance $v^\top \Sigma v$ subject to $\|v\|^2 = 1$:
$\mathcal{L}(v, \lambda) = v^\top \Sigma v - \lambda (v^\top v - 1).$
Stationarity: $2 \Sigma v - 2 \lambda v = 0$, i.e., $\Sigma v = \lambda v$. The optimal direction is the top eigenvector; the top eigenvalue is the maximum variance. PCA is literally solving Lagrange conditions.
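A numerical confirmation on synthetic data that the top eigenvector satisfies the Lagrange condition and maximises the variance (sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.3])  # anisotropic data
Sigma = np.cov(X, rowvar=False)

lam, V = np.linalg.eigh(Sigma)               # eigenvalues in ascending order
v_top, lam_top = V[:, -1], lam[-1]

# Lagrange condition Sigma v = lambda v, and variance along v equals lambda:
print(np.allclose(Sigma @ v_top, lam_top * v_top))   # True
print(v_top @ Sigma @ v_top, lam_top)                # equal

# Any other unit direction captures less variance:
u = rng.standard_normal(3); u /= np.linalg.norm(u)
print(u @ Sigma @ u <= lam_top)                      # True
```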
Unit-norm attention. In some attention formulations, query/key vectors are L2-normalized before the dot product. The normalization constraint is enforced via Lagrange multiplier; the shadow price reveals how much the attention energy would increase if the norm bound were relaxed.
LoRA rank constraints. Low-Rank Adaptation constrains the weight update $\Delta W = B A$, where $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times k}$ with $r \ll \min(d, k)$. The rank-$r$ constraint is implicit in the factorised parameterisation, and the Lagrange multiplier interpretation illuminates why the singular values of $\Delta W$ concentrate: the effective constraint is on the nuclear norm (sum of singular values).
6. KKT Conditions
Lagrange multipliers handle equality constraints. When inequality constraints are present-which is the norm in machine learning (budget constraints, margin constraints, non-negativity)-the Karush-Kuhn-Tucker (KKT) conditions provide the generalisation.
6.1 The Full Problem and Lagrangian
General form:
$\min_x f(x) \quad \text{s.t.} \quad g_i(x) = 0, \; i = 1, \dots, m; \qquad h_j(x) \le 0, \; j = 1, \dots, p.$
Lagrangian:
$\mathcal{L}(x, \lambda, \mu) = f(x) + \sum_{i=1}^m \lambda_i g_i(x) + \sum_{j=1}^p \mu_j h_j(x),$
where $\mu_j$ are the multipliers for inequality constraints and $\lambda_i$ for equality constraints.
6.2 The Four KKT Conditions
At an optimal $x^*$ (under a suitable constraint qualification):
1. Stationarity: $\nabla f(x^*) + \sum_i \lambda_i^* \nabla g_i(x^*) + \sum_j \mu_j^* \nabla h_j(x^*) = 0$
2. Primal Feasibility: $g_i(x^*) = 0$ for all $i$; $h_j(x^*) \le 0$ for all $j$
3. Dual Feasibility: $\mu_j^* \ge 0$ for all $j$
4. Complementary Slackness: $\mu_j^* h_j(x^*) = 0$ for all $j$
Interpreting complementary slackness: For each inequality constraint $h_j$, either:
- $h_j(x^*) = 0$: the constraint is active (the optimum is on the boundary) - $\mu_j^*$ can be nonzero
- $\mu_j^* = 0$: the constraint is inactive, $h_j(x^*) < 0$ (the optimum is strictly interior) - the constraint doesn't affect the optimum
This is the geometric signature of which constraints "matter" at the solution.
6.3 Geometric Interpretation
The KKT conditions say: at the optimum, you cannot improve $f$ while satisfying all constraints. The stationarity condition generalises Lagrange's condition: $-\nabla f(x^*)$ must lie in the cone generated by the active constraint gradients.
KKT COMPLEMENTARY SLACKNESS GEOMETRY
- Case 1, constraint inactive ($h(x^*) < 0$): the optimum is in the interior of the feasible region; the inequality constraint plays no role; $\mu^* = 0$ (it would be wrong to "push" against a slack constraint).
- Case 2, constraint active ($h(x^*) = 0$): the optimum lies on the constraint boundary; $\nabla f(x^*)$ points into the infeasible region; $\mu^* > 0$ "pushes back" to prevent crossing the boundary, and the objective would improve if the constraint were relaxed.
- In both cases $\mu^* h(x^*) = 0$: either $0 \cdot (\text{negative})$ or $(\text{positive}) \cdot 0$.
6.4 KKT as Necessary Conditions: LICQ Proof
Theorem. If $x^*$ is a local minimum and LICQ holds (the active constraint gradients are linearly independent), then the KKT conditions hold at $x^*$.
Proof sketch. Let $A = \{j : h_j(x^*) = 0\}$ be the active set. By LICQ, $\{\nabla g_i(x^*)\}_i \cup \{\nabla h_j(x^*)\}_{j \in A}$ are linearly independent.
Any feasible direction $d$ (satisfying $\nabla g_i(x^*)^\top d = 0$ for all $i$ and $\nabla h_j(x^*)^\top d \le 0$ for $j \in A$) cannot be a descent direction, i.e., cannot have $\nabla f(x^*)^\top d < 0$ (otherwise $x^*$ is not a local minimum).
By Farkas' lemma, $-\nabla f(x^*)$ is then a conic combination of the active constraint gradients: $-\nabla f(x^*) = \sum_i \lambda_i \nabla g_i(x^*) + \sum_{j \in A} \mu_j \nabla h_j(x^*)$ with $\mu_j \ge 0$. Setting $\mu_j = 0$ for $j \notin A$ gives all four conditions.
Other constraint qualifications: LICQ is sufficient but not necessary for KKT. Alternatives include Mangasarian-Fromovitz (MFCQ) and, for convex problems, Slater's condition.
6.5 KKT as Sufficient Conditions (Convex Case)
For convex problems, KKT conditions are not just necessary-they are also sufficient:
Theorem. If $f$ and the $h_j$ are convex, the $g_i$ are affine, and $(x^*, \lambda^*, \mu^*)$ satisfy all four KKT conditions, then $x^*$ is a global minimum.
Proof. For any feasible $x$:
$f(x) \ge f(x) + \sum_i \lambda_i^* g_i(x) + \sum_j \mu_j^* h_j(x) = \mathcal{L}(x, \lambda^*, \mu^*) \ge \mathcal{L}(x^*, \lambda^*, \mu^*) = f(x^*).$
The first inequality uses primal feasibility ($g_i(x) = 0$, $h_j(x) \le 0$) and dual feasibility ($\mu_j^* \ge 0$); the middle inequality holds because $\mathcal{L}(\cdot, \lambda^*, \mu^*)$ is convex and stationary at $x^*$, hence minimised there; the final equality uses complementary slackness ($\mu_j^* h_j(x^*) = 0$). $\blacksquare$
6.6 LP Worked Example
Linear Program: $\max\, x$ subject to $x \le b$ and $x \ge 0$ (budget $b > 0$).
Rewriting in standard form: $\min f(x) = -x$, $h_1(x) = x - b \le 0$, $h_2(x) = -x \le 0$.
Lagrangian: $\mathcal{L}(x, \mu) = -x + \mu_1 (x - b) - \mu_2 x$.
Stationarity: $-1 + \mu_1 - \mu_2 = 0$, so $\mu_1 = 1 + \mu_2$.
The constraint $x \ge 0$ is never active at the optimum (we want $x$ large), so $\mu_2 = 0$. The budget constraint is active: $x^* = b$, $h_1(x^*) = 0$. From stationarity: $\mu_1 = 1$, $\mu_2 = 0$. Optimal: $x^* = b$, objective $b$. Complementary slackness checks out: $\mu_1 h_1(x^*) = 1 \cdot 0 = 0$ and $\mu_2 h_2(x^*) = 0 \cdot (-b) = 0$.
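A sketch checking the worked example with SciPy for the concrete instance $b = 1$ (the `ineqlin.marginals` attribute assumes a recent SciPy with the default HiGHS backend):

```python
import numpy as np
from scipy.optimize import linprog

# max x  s.t.  x <= 1, x >= 0   <=>   min -x  (the LP above with b = 1)
res = linprog(c=[-1.0], A_ub=[[1.0]], b_ub=[1.0], bounds=[(0, None)])
print(res.x[0], -res.fun)        # x* = 1.0, optimal objective 1.0

# Dual of the active budget constraint: HiGHS reports d(obj)/d(b_ub) = -1,
# i.e. multiplier magnitude mu_1 = 1, matching stationarity -1 + mu_1 = 0.
print(res.ineqlin.marginals)     # [-1.]
```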
7. Duality Theory
The Lagrangian dual offers a second approach to constrained optimisation: instead of minimising over $x$, we maximise over the multipliers $(\lambda, \mu)$. The resulting dual problem often has better structure (it is always convex, regardless of the primal), reveals hidden geometric properties, and - for convex problems - gives exactly the same optimal value.
7.1 The Dual Function and Dual Problem
Definition (Lagrangian dual function):
$q(\lambda, \mu) = \inf_x \mathcal{L}(x, \lambda, \mu).$
Key property: $q$ is always concave in $(\lambda, \mu)$ - it is a pointwise infimum of affine functions of the multipliers.
Dual problem:
$\max_{\lambda, \; \mu \ge 0} q(\lambda, \mu).$
This is always a convex optimisation problem (maximising a concave function = minimising a convex one).
7.2 Weak Duality
Theorem (Weak Duality). $d^* \le p^*$ always, where $p^*$ is the primal optimal value and $d^*$ the dual optimal value.
Proof. For any primal-feasible $x$ (satisfying $g_i(x) = 0$, $h_j(x) \le 0$) and any dual-feasible $(\lambda, \mu)$ (with $\mu \ge 0$):
$q(\lambda, \mu) = \inf_z \mathcal{L}(z, \lambda, \mu) \le \mathcal{L}(x, \lambda, \mu) = f(x) + \sum_i \lambda_i g_i(x) + \sum_j \mu_j h_j(x) \le f(x).$
Taking the supremum over dual-feasible $(\lambda, \mu)$ and the infimum over primal-feasible $x$: $d^* \le p^*$.
The gap $p^* - d^* \ge 0$ is the duality gap.
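Weak duality is visible on a one-line example. For $\min x^2$ s.t. $x \ge 1$ (so $p^* = 1$), the dual function works out to $q(\mu) = \mu - \mu^2 / 4$; a sketch checking $q(\mu) \le p^*$:

```python
import numpy as np

# Primal: min x^2  s.t.  1 - x <= 0, with p* = 1 at x* = 1.
# L(x, mu) = x^2 + mu * (1 - x); the infimum over x is at x = mu / 2, giving:
def q(mu):
    return mu - mu**2 / 4.0                        # concave dual function

mus = np.linspace(0.0, 6.0, 13)
print(all(q(m) <= 1.0 + 1e-12 for m in mus))       # True: q(mu) <= p* everywhere
print(max(q(m) for m in mus))                      # 1.0 at mu = 2: zero duality gap
```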
7.3 Strong Duality and Slater's Condition
Theorem (Slater's Condition -> Strong Duality). For convex $f$ and $h_j$, and affine $g_i$: if there exists a strictly feasible point $\tilde{x}$ (with $h_j(\tilde{x}) < 0$ strictly for all $j$), then $p^* = d^*$ (zero duality gap) and the dual optimum is attained.
Slater's condition is a constraint qualification: it says the feasible region is non-degenerate. For LP and QP (quadratic programs), strong duality holds under much weaker conditions.
Implications for ML:
- The SVM dual problem is equivalent to the primal (strong duality holds by Slater)
- The dual variables from strong duality are exactly the KKT multipliers
- Duality gap as a stopping criterion: if (primal value) $-$ (dual value) $\le \epsilon$, we have an $\epsilon$-optimal solution
7.4 Saddle Point Characterisation
Theorem. $(x^*, \lambda^*, \mu^*)$ is primal-dual optimal with zero duality gap iff it is a saddle point of the Lagrangian:
$\mathcal{L}(x^*, \lambda, \mu) \le \mathcal{L}(x^*, \lambda^*, \mu^*) \le \mathcal{L}(x, \lambda^*, \mu^*) \quad \text{for all } x \text{ and all } (\lambda, \mu) \text{ with } \mu \ge 0.$
The minimax equals the maximin: $\min_x \max_{\lambda, \, \mu \ge 0} \mathcal{L} = \max_{\lambda, \, \mu \ge 0} \min_x \mathcal{L}$.
This saddle-point view is foundational for adversarial training in ML: GAN training is exactly seeking a saddle point of the value function $V(G, D)$.
7.5 SVM Dual: A Complete Example
The SVM is the canonical example of duality in ML. Start with the hard-margin SVM primal:
$\min_{w, b} \; \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i(w^\top x_i + b) \ge 1, \; i = 1, \dots, N.$
Rewrite the constraints as $h_i(w, b) = 1 - y_i(w^\top x_i + b) \le 0$. Lagrangian:
$\mathcal{L}(w, b, \alpha) = \frac{1}{2}\|w\|^2 + \sum_i \alpha_i \left(1 - y_i(w^\top x_i + b)\right), \quad \alpha_i \ge 0.$
KKT stationarity conditions:
$\nabla_w \mathcal{L} = 0 \;\Rightarrow\; w = \sum_i \alpha_i y_i x_i, \qquad \frac{\partial \mathcal{L}}{\partial b} = 0 \;\Rightarrow\; \sum_i \alpha_i y_i = 0.$
Substituting back into $\mathcal{L}$ to form the dual:
$q(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_{i, j} \alpha_i \alpha_j y_i y_j \, x_i^\top x_j.$
Dual problem: $\max_\alpha q(\alpha)$ subject to $\alpha_i \ge 0$ and $\sum_i \alpha_i y_i = 0$.
This depends only on inner products $x_i^\top x_j$ - the kernel trick replaces these with $k(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$ for nonlinear boundaries without ever computing the features $\phi$ explicitly.
Support vectors: By complementary slackness, $\alpha_i > 0$ only when the constraint is active: $y_i(w^\top x_i + b) = 1$. These are exactly the support vectors - the training points on the margin boundary that determine $w$.
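A small scikit-learn sketch making this structure visible (a large `C` approximates the hard margin; `dual_coef_` stores $\alpha_i y_i$ for the support vectors):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1e6).fit(X, y)

# Complementary slackness: only a handful of alpha_i are nonzero.
print(len(clf.support_), "support vectors out of", len(X))

# Stationarity: w = sum_i alpha_i y_i x_i.
w_from_dual = clf.dual_coef_ @ clf.support_vectors_
print(np.allclose(w_from_dual, clf.coef_))   # True
```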
8. Machine Learning Applications
The optimality conditions developed above appear directly in the mathematics underlying every major ML system. This section demonstrates these connections concretely.
8.1 Linear and Ridge Regression
Ordinary Least Squares. Minimise $L(w) = \|Xw - y\|^2$. Setting the gradient to zero:
$X^\top X w = X^\top y.$
These are the normal equations. When $X$ has full column rank, $\nabla^2 L = 2 X^\top X \succ 0$ (SOSC), and the unique solution is $w^* = (X^\top X)^{-1} X^\top y$.
Ridge Regression. Adding $\ell_2$ regularisation: $L(w) = \|Xw - y\|^2 + \lambda \|w\|^2$.
Normal equations: $(X^\top X + \lambda I) w = X^\top y$. The regulariser shifts every eigenvalue up by $\lambda$, making the system always invertible and better conditioned (strongly convex with $\mu \ge \lambda$).
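A sketch of the conditioning effect on nearly collinear features (synthetic data; `np.linalg.cond` reports the ratio of extreme singular values):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 5))
X[:, 4] = X[:, 3] + 1e-6 * rng.standard_normal(50)   # two nearly identical columns

lam = 1.0
A_ols = X.T @ X                      # nearly singular
A_ridge = A_ols + lam * np.eye(5)    # every eigenvalue shifted up by lambda

print(f"cond(X^T X)            = {np.linalg.cond(A_ols):.2e}")
print(f"cond(X^T X + lambda*I) = {np.linalg.cond(A_ridge):.2e}")
```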
8.2 Lasso and the Subdifferential
Lasso: $\min_w \frac{1}{2}\|Xw - y\|^2 + \lambda \|w\|_1$.
The $\ell_1$ norm is not differentiable at $0$. The first-order optimality condition uses the subdifferential $\partial \|w\|_1$ (where $\partial |w_j| = \{\mathrm{sign}(w_j)\}$ for $w_j \neq 0$ and $[-1, 1]$ at $w_j = 0$):
$0 \in X^\top (Xw - y) + \lambda \, \partial \|w\|_1.$
Coordinate-wise, for the $j$-th weight:
- If $w_j \neq 0$: $x_j^\top (y - Xw) = \lambda \, \mathrm{sign}(w_j)$
- If $w_j = 0$: $|x_j^\top (y - Xw)| \le \lambda$
The second condition (the correlation of feature $j$ with the residual is small enough) determines sparsity: feature $j$ is excluded when its correlation with the residual is below the threshold $\lambda$. This is the mathematical source of Lasso's sparsity-inducing property.
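A sketch verifying both subdifferential conditions on a fitted scikit-learn `Lasso` (note that scikit-learn scales the squared error by $1/(2n)$, so the threshold is `alpha`; comparisons hold up to solver tolerance):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n = 100
X = rng.standard_normal((n, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]                     # only 3 truly active features
y = X @ w_true + 0.1 * rng.standard_normal(n)

alpha = 0.1                                       # objective: (1/2n)||y-Xw||^2 + alpha*||w||_1
model = Lasso(alpha=alpha).fit(X, y)
w = model.coef_
r = y - X @ w - model.intercept_
corr = X.T @ r / n                                # per-feature correlation with residual

active = w != 0
print(np.allclose(corr[active], alpha * np.sign(w[active]), atol=1e-2))  # pinned at threshold
print(np.all(np.abs(corr[~active]) <= alpha + 1e-2))                     # below threshold
print("zero coefficients:", int(np.sum(~active)))
```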
8.3 SVM: Full KKT Analysis
The SVM soft-margin formulation extends the hard-margin case with slack variables $\xi_i \ge 0$:
$\min_{w, b, \xi} \; \frac{1}{2}\|w\|^2 + C \sum_i \xi_i \quad \text{s.t.} \quad y_i(w^\top x_i + b) \ge 1 - \xi_i, \; \xi_i \ge 0.$
The KKT conditions include multipliers $\alpha_i$ for the margin constraints and $\beta_i$ for $\xi_i \ge 0$. Complementary slackness gives:
- $\alpha_i = 0$: correctly classified with room to spare, not a support vector
- $0 < \alpha_i < C$: on the margin, $\xi_i = 0$ (standard support vector)
- $\alpha_i = C$: inside the margin or misclassified, $\xi_i > 0$
The parameter $C$ controls the trade-off between margin width and training error through the dual feasibility constraint $0 \le \alpha_i \le C$.
8.4 PCA via Constrained Optimisation
Principal Component Analysis seeks directions of maximum variance subject to orthonormality. The full PCA problem is:
$\max_{V \in \mathbb{R}^{d \times k}} \; \mathrm{tr}(V^\top \Sigma V) \quad \text{s.t.} \quad V^\top V = I_k.$
Lagrangian: $\mathcal{L}(V, \Lambda) = \mathrm{tr}(V^\top \Sigma V) - \mathrm{tr}\left(\Lambda (V^\top V - I_k)\right)$, where $\Lambda$ is a symmetric multiplier matrix.
Stationarity: $2 \Sigma V - 2 V \Lambda = 0$, i.e., $\Sigma V = V \Lambda$. Each column satisfies $\Sigma v_i = \lambda_i v_i$ - an eigenvalue equation. The optimal $V$ is the matrix of top-$k$ eigenvectors; the multipliers are the eigenvalues (= captured variance).
8.5 Maximum Entropy and Softmax
Maximum Entropy Principle. Given constraints $\mathbb{E}_p[f_k] = c_k$ for $k = 1, \dots, K$, the distribution maximising entropy $H(p) = -\sum_x p(x) \log p(x)$ subject to $\sum_x p(x) = 1$ has the Boltzmann/Gibbs form:
$p(x) = \frac{1}{Z(\lambda)} \exp\!\Big(\sum_k \lambda_k f_k(x)\Big).$
Derivation. Lagrangian with multipliers $\nu$ (normalisation) and $\lambda_k$ (feature constraints):
$\mathcal{L}(p, \nu, \lambda) = -\sum_x p(x) \log p(x) + \nu \Big(\sum_x p(x) - 1\Big) + \sum_k \lambda_k \Big(\sum_x p(x) f_k(x) - c_k\Big).$
Stationarity in $p(x)$: $-\log p(x) - 1 + \nu + \sum_k \lambda_k f_k(x) = 0$, giving $p(x) \propto \exp(\sum_k \lambda_k f_k(x))$.
Softmax as max entropy. The softmax function $p_i = e^{z_i} / \sum_j e^{z_j}$ is exactly the max-entropy distribution over $\{1, \dots, K\}$ subject to linear constraints whose Lagrange multipliers are the logits $z_i$. This gives the softmax a principled statistical interpretation beyond "normalised exponentials."
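A numerical sketch of this equivalence: maximise entropy subject to matching the softmax distribution's expected logit, and the softmax distribution itself comes back (SciPy's SLSQP handles the equality constraints):

```python
import numpy as np
from scipy.optimize import minimize

z = np.array([2.0, 0.5, -1.0])                        # logits
softmax = np.exp(z) / np.exp(z).sum()

target = softmax @ z                                  # constrain E_p[z] to this value
neg_entropy = lambda p: np.sum(p * np.log(p + 1e-12))
cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ z - target}]
res = minimize(neg_entropy, x0=np.ones(3) / 3,
               bounds=[(1e-9, 1.0)] * 3, constraints=cons)

print(np.round(res.x, 4))       # max-entropy solution ...
print(np.round(softmax, 4))     # ... matches the softmax distribution
```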
8.6 Attention as Constrained Optimisation
The attention mechanism in transformers can be viewed as a constrained optimisation problem. Given queries $Q$, keys $K$, values $V$, standard attention computes:
$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V.$
Constrained interpretation. For a single query $q$, attention computes the memory retrieval $\sum_i a_i v_i$ whose weights solve:
$a^* = \arg\max_{a \in \Delta} \; \Big[ \sum_i a_i \, \frac{q^\top k_i}{\sqrt{d_k}} + \tau H(a) \Big],$
where $\tau$ is a temperature and $\Delta$ is the probability simplex. The KKT conditions for this maximum-entropy retrieval give exactly the softmax attention weights.
The Lagrange multiplier for the simplex constraint becomes the log-partition function (the log-sum-exp normaliser), confirming that attention is a principled probabilistic retrieval operation.
9. Non-Convex Landscapes in Deep Learning
Deep learning operates almost entirely in the non-convex regime. Yet neural networks train successfully-why? The answer lies in the special geometric structure of high-dimensional non-convex landscapes.
9.1 Loss Landscape Geometry
An $n$-parameter neural network defines a loss landscape $L : \mathbb{R}^n \to \mathbb{R}$ with structure that differs fundamentally from classical non-convex functions:
Key empirical observations:
- No spurious local minima (approximately): for sufficiently overparameterised networks on tractable data, all local minima have near-identical loss values. Gradient descent doesn't get "trapped" because there are no deep local minima to get trapped in.
- Saddle point dominance: most critical points are saddle points, not local minima. The index (number of negative eigenvalue directions) of these saddles is typically large.
- Loss barriers between solutions: two different global minima can be separated by high barriers in weight space, but connected by loss valleys in higher-dimensional paths.
For AI: The practical consequence is that SGD with good initialisation reliably finds good solutions, and model merging (linear combination of two trained models) often produces competitive performance - evidence of basin connectivity.
9.2 The Role of Overparameterisation
Theorem (informal; Kawaguchi 2016, Du et al. 2018). For a wide class of networks trained on generic data, when the number of parameters $n$ exceeds the number of training points $m$, every local minimum is a global minimum.
Intuition. With $n$ parameters and $m < n$ data points, the zero-loss level set $\{\theta : L(\theta) = 0\}$ is typically a manifold with roughly $n - m$ degrees of freedom. Any point trying to be a local minimum with non-zero loss would need the gradient to vanish while the Hessian is positive definite in all directions - this becomes geometrically impossible when the loss can still be driven down by moving in any of the flat directions.
Neural Tangent Kernel (NTK) perspective. In the infinite-width limit, training dynamics become linear: $\dot{u}(t) = -K (u(t) - y)$, where $K$ is the NTK matrix. If $\lambda_{\min}(K) > 0$, gradient descent converges globally at a linear rate - a convex-analysis result applied to a formally non-convex problem.
9.3 Sharpness-Aware Minimisation (SAM)
Motivated by the observation that flat minima generalise better than sharp ones, Sharpness-Aware Minimisation (Foret et al. 2021) solves:
$\min_w \; \max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon).$
KKT analysis of the inner problem. Fix $w$ and linearise $L(w + \epsilon) \approx L(w) + \nabla L(w)^\top \epsilon$. The inner maximisation is then a constrained problem with one inequality constraint $\|\epsilon\|_2^2 - \rho^2 \le 0$.
KKT conditions: $\nabla L(w) - 2 \mu \epsilon = 0$ with $\mu > 0$ on the active constraint $\|\epsilon\|_2 = \rho$, giving $\epsilon \propto \nabla L(w)$.
The adversarial perturbation is:
$\epsilon^* = \rho \, \frac{\nabla L(w)}{\|\nabla L(w)\|_2}.$
SAM gradient step: $w \leftarrow w - \eta \, \nabla L(w + \epsilon^*)$, i.e., the descent gradient is evaluated at the adversarially perturbed point.
This is a direct application of KKT: solving the constrained inner problem analytically gives the SAM update formula.
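A minimal sketch of the resulting two-step update on a toy quadratic (learning rate and $\rho$ are illustrative):

```python
import numpy as np

H = np.diag([1.0, 100.0])                 # one flat direction, one sharp direction
grad = lambda w: H @ w                    # gradient of L(w) = 1/2 w^T H w

def sam_step(w, lr=0.005, rho=0.05):
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # KKT solution of the inner max
    return w - lr * grad(w + eps)                 # descend on the perturbed gradient

w = np.array([1.0, 1.0])
for _ in range(200):
    w = sam_step(w)
print(w)   # compare with vanilla GD: w - lr * grad(w)
```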
9.4 Neural Collapse
At the terminal phase of training, when networks reach near-zero training loss, a remarkable geometric structure called Neural Collapse (Papyan et al. 2020) emerges:
- Within-class variability collapse: all training examples of class $c$ converge to the same last-layer representation $\mu_c$
- Equinorm ETF: the class means form an Equiangular Tight Frame - they have equal norms and equal pairwise cosine similarities
- Self-duality: the classifier weight vectors $w_c$ align with the class means $\mu_c$ up to scaling
KKT characterisation. Neural collapse is the KKT point of the Unconstrained Features Model (UFM), which treats the last-layer features as free optimisation variables; a standard regularised form is:
$\min_{W, H} \; L_{\mathrm{CE}}(W H, Y) + \frac{\lambda_W}{2}\|W\|_F^2 + \frac{\lambda_H}{2}\|H\|_F^2.$
The KKT stationarity conditions, combined with the symmetry of the cross-entropy loss at balanced class distributions, force the ETF structure. Neural collapse is not an empirical coincidence - it is the unique KKT point of this simplified training problem.
9.5 Mode Connectivity
Loss valley hypothesis: Two local minima $\theta_1$ and $\theta_2$ of a neural network can be connected by a low-loss path (Garipov et al. 2018). Specifically, there exists a piecewise-linear or quadratic curve $\gamma(t)$ with $\gamma(0) = \theta_1$, $\gamma(1) = \theta_2$, and $L(\gamma(t)) \approx L(\theta_1)$ for all $t \in [0, 1]$.
Implication for model merging. The success of weight averaging (WA) methods like Model Soups and SLERP model merging is explained by mode connectivity: if the merged model lies near the connecting path in weight space, it inherits the performance of both endpoints.
Optimality connection. Mode connectivity is a non-convex analogue of a basic fact of convex analysis: convex functions have connected sublevel sets. For "sufficiently trained" neural networks, the sublevel set $\{\theta : L(\theta) \le c\}$ is approximately path-connected - not because the landscape is convex, but because overparameterisation creates high-dimensional flat regions.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Treating $\nabla f = 0$ as sufficient for a minimum | It's necessary, not sufficient. Saddle points and maxima also satisfy it. | Always check second-order conditions (Hessian PSD/PD) or verify global minimality via convexity arguments |
| 2 | Forgetting to check a constraint qualification | KKT conditions require LICQ (or another CQ). Without a CQ, a minimum may exist with no Lagrange multiplier | Verify that the active constraint gradients are linearly independent at the candidate point |
| 3 | Allowing $\mu_j < 0$ for inequality constraints | Negative multipliers on $h_j \le 0$ constraints violate dual feasibility; the Lagrangian is then unbounded below | Dual feasibility requires $\mu_j \ge 0$ for all inequality constraints (the convention matters: $h_j \le 0$ needs $\mu_j \ge 0$) |
| 4 | Ignoring complementary slackness | Missing the condition $\mu_j h_j(x^*) = 0$ leads to wrong determination of which constraints are active | For each inequality constraint, at least one of $\mu_j = 0$ or $h_j(x^*) = 0$ must hold (possibly both) |
| 5 | Concluding strong duality without Slater | Weak duality always holds, but strong duality ($p^* = d^*$) requires a constraint qualification like Slater's condition | Verify strict feasibility (a Slater point exists) before asserting a zero duality gap |
| 6 | Using the $\det(H)$ test in $\mathbb{R}^n$, $n > 2$ | The determinant test ($\det H$ and $f_{xx}$) is specific to $\mathbb{R}^2$ | In $\mathbb{R}^n$: compute all eigenvalues of $H$, or use Sylvester's criterion (all leading principal minors $> 0$ iff PD) |
| 7 | Confusing local and global optimality for non-convex problems | For non-convex functions, local minima may not be global. KKT conditions identify local critical points, not global optima | Use global analysis: prove convexity, or use branch-and-bound, or accept local optimality |
| 8 | Wrong sign convention for the Lagrangian | Different texts define $\mathcal{L} = f + \sum_i \lambda_i g_i$ vs. $f - \sum_i \lambda_i g_i$. Mixing conventions gives wrong multiplier signs | Pick one convention and be consistent: for $\min f$ s.t. $g = 0$, use $\mathcal{L} = f + \lambda^\top g$ (add constraints to the objective) |
| 9 | Forgetting that the Lagrange multiplier theorem is for differentiable functions | At non-smooth points (e.g., $\ell_1$ constraints), the standard gradient condition fails | Use subdifferentials and subgradient conditions; or smooth the problem with a differentiable approximation |
| 10 | Concluding "no constrained minimum exists" when KKT has no solution | The KKT system having no solution means the constraint qualification fails OR no minimum exists. These are different | First check whether the feasible set is closed and bounded (Weierstrass guarantees existence); then debug the CQ |
| 11 | Misidentifying support vectors | Only points with $\alpha_i > 0$ (active margin constraint) are support vectors; points strictly outside the margin ($\alpha_i = 0$) are not | Check complementary slackness: $\alpha_i > 0 \Rightarrow y_i(w^\top x_i + b) = 1$ (on the margin boundary) |
| 12 | Applying first-order conditions to discrete or combinatorial constraints | Gradient = 0 requires differentiability; discrete feasible sets (e.g., integer programs) don't satisfy this | For discrete/combinatorial problems, use integer programming methods, branch-and-bound, or relaxations |
11. Exercises
Exercise 1 - Critical Point Classification
Let $f(x, y) = x^3 - 3x + y^2$ (any smooth function with several critical points works as well).
(a) Find all critical points by solving .
(b) Compute the Hessian at each critical point and classify using the second-order test.
(c) Verify numerically: check that gradient descent from each critical point stays put (up to numerical noise).
(d) Sketch the level curves of near each critical point.
Exercise 2 - Lagrange Multipliers: Constrained Extremum
Maximise $f(x, y) = xy$ subject to $g(x, y) = x + y - 2 = 0$ with $x, y > 0$ (one standard instance).
(a) Write the Lagrangian and derive the KKT conditions.
(b) Solve analytically and verify that the maximum is $f^* = 1$ at $(x, y) = (1, 1)$ with $\lambda^* = 1$.
(c) Interpret the Lagrange multiplier: by how much does $f^*$ change if the constraint becomes $x + y = 2 + \epsilon$?
(d) Confirm numerically using scipy.optimize.minimize.
Exercise 3 - KKT Conditions for Quadratic Program
Solve: $\min \; (x_1 - 1)^2 + (x_2 - 2)^2$ subject to $x_1 + x_2 \le 2$, $x_1 \ge 0$ (one standard instance).
(a) Write the KKT conditions in full.
(b) Determine the active constraint(s) and solve the KKT system.
(c) Verify the solution is a global minimum by checking convexity.
(d) Compute the duality gap (should be zero).
Exercise 4 - Convexity Analysis
For each function, determine if it is convex, strictly convex, or neither, on the specified domain:
(a) $f(x) = e^{ax}$ on $\mathbb{R}$
(b) $f(x) = \log x$ on $(0, \infty)$
(c) $f(X) = -\log \det X$ on $S^n_{++}$ (positive definite matrices)
(d) $f(x) = |x|^p$ for $p \ge 1$
(e) the cross-entropy loss for logistic regression with linearly separable data
Exercise 5 - SVM Dual Derivation
Derive the dual of the hard-margin SVM from scratch.
(a) Write the primal as a standard form QP with inequality constraints.
(b) Form the Lagrangian and derive the dual function $q(\alpha)$.
(c) Write the dual problem and verify its constraints ($\alpha_i \ge 0$, $\sum_i \alpha_i y_i = 0$).
(d) Show that the dual is concave in $\alpha$.
(e) Implement and solve both primal and dual for a small dataset; verify they give the same optimal value.
Exercise 6 - Maximum Entropy Distribution
Find the maximum entropy distribution on $\{1, 2, \dots, 6\}$ subject to: $\sum_{i=1}^6 p_i = 1$ and $\mathbb{E}[X] = \sum_{i=1}^6 i \, p_i = 4.5$.
(a) Write the Lagrangian with multipliers $\nu, \lambda$.
(b) Derive that $p_i \propto e^{\lambda i}$.
(c) Find $\lambda$ numerically by solving the constraint equation.
(d) Compare to the uniform distribution: which has higher entropy?
(e) Verify: compute $H(p^*)$ and confirm it exceeds the entropy of other distributions satisfying the constraints (sample a few feasible alternatives).
Exercise 7 - SAM: KKT Analysis
Analyse Sharpness-Aware Minimisation rigorously.
(a) Write the inner maximisation as a constrained problem.
(b) Write the KKT conditions and solve for $\epsilon^*$ in closed form.
(c) Show that $\epsilon^* = \rho \, \nabla L(w) / \|\nabla L(w)\|$ satisfies the KKT conditions.
(d) Implement one SAM step and compare to vanilla gradient descent on a sharp quadratic.
(e) Empirically verify that SAM finds flatter minima on a small neural network.
Exercise 8 - Duality Gap and Convergence
Implement a primal-dual interior-point method for a small LP.
(a) Formulate a small LP - $\min \; c^\top x$ s.t. $Ax \le b$, $x \ge 0$ - in standard form.
(b) Implement the primal-dual path-following method tracking both primal and dual variables.
(c) Plot the duality gap vs. iteration and verify it converges to zero.
(d) Verify the optimal solution satisfies all KKT conditions numerically.
(e) Compare convergence to simplex method on the same problem.
12. Why This Matters for AI (2026 Perspective)
| Concept | Impact on AI/ML |
|---|---|
| First-order necessary conditions | Every modern optimiser (SGD, Adam, AdamW) terminates at approximate stationarity $\|\nabla L(\theta)\| \le \epsilon$. Training stopping criteria are gradient-norm thresholds. |
| Hessian spectrum | Adaptive learning-rate methods (Adam, Adagrad) implicitly approximate a diagonal curvature preconditioner to normalise step sizes. Sharpness-aware methods (SAM, ASAM) explicitly minimise the spectral norm of $\nabla^2 L$. |
| Saddle points | The prevalence of saddle points (not local minima) in deep network landscapes explains why gradient descent with noise (SGD, dropout) escapes them efficiently. Perturbed gradient descent has provable saddle-escape guarantees. |
| Convexity | Convex relaxations of combinatorial ML problems (e.g., the $\ell_1$ relaxation of sparse recovery) are solvable to global optimality. Convex loss functions (logistic, square) give unique solutions independent of initialisation. |
| Lagrange multipliers | PCA, LDA, CCA are all Lagrange multiplier problems. QLoRA's 4-bit quantisation uses a constrained optimisation where the quantisation constraint is relaxed via Lagrange multipliers in the mixed-precision framework. |
| KKT conditions | SVM training, LP relaxations of beam search, constrained decoding (e.g., budget-forcing), and RLHF reward-constrained optimisation all use KKT theory for optimality analysis. |
| Duality | The SVM dual gives the kernel trick. KL-constrained reward maximisation in RLHF can be analysed as a dual problem in which the Lagrange multiplier on the KL constraint is the reward temperature $\beta$. |
| Strong convexity | $\ell_2$ regularisation makes the loss strongly convex, giving linear convergence guarantees. The condition number $\kappa = L/\mu$ governs convergence; warmup schedules and weight decay are practitioner responses to ill-conditioning. |
| Max entropy / softmax | Language model next-token prediction is max-entropy inference subject to observed statistics. Temperature scaling adjusts the implicit Lagrange multiplier on the cross-entropy constraint, controlling distribution sharpness. |
| SAM/sharpness | Flat minima empirically generalise better. SAM (Foret et al. 2021) is now standard in SOTA image classification and LLM fine-tuning. The optimality condition for the inner problem is the KKT stationarity condition. |
| Neural collapse | The ETF structure at convergence (proven via KKT of the UFM) informs classifier weight initialisation strategies and explains why the last layer of a fine-tuned model can be reset and retrained cheaply. |
| Mode connectivity | Model merging (Model Soups, TIES, DARE) and model interpolation are enabled by the path-connectivity of loss basins-a direct consequence of overparameterised non-convex landscape geometry. |
Conceptual Bridge
This section occupies the pivot between two phases of the curriculum. Everything before 04 established the tools for computing derivatives-limits, continuity, partial derivatives, gradients, Jacobians, chain rule. This section asks the deeper question: what do these derivatives tell us about where the best solutions are?
Looking backward. The first-order conditions derive directly from the gradient machinery of 01-03. The Hessian - the matrix of second partial derivatives - extends the single-variable second derivative test to functions of many variables, connecting to the matrix theory of 02. The chain rule and product rule for derivatives underpin every step of the Lagrangian analysis.
Looking forward. The optimality conditions here are used constantly in the chapters ahead:
- 05/05 (Gradient Descent) and 08 (Optimisation Algorithms): gradient descent is precisely the process of iterating toward $\nabla f = 0$; convergence analysis requires the strong convexity and Lipschitz-smoothness properties defined here
- 06 (Probability & Statistics): maximum likelihood estimation is an unconstrained minimisation whose optimality conditions give the maximum likelihood equations; Bayesian MAP estimation introduces prior constraints handled by Lagrange multipliers
- 07 (Information Theory): the maximum entropy principle (8.5) is a direct application of Lagrange multipliers; the KL-divergence minimisation underlying variational inference is a constrained optimisation whose Lagrangian yields the ELBO
POSITION IN THE CALCULUS CURRICULUM
- 04: Calculus Fundamentals
  - 01: Limits and Continuity (foundation for 02)
  - 02: Derivatives and Differentiation (foundation for 03, 04)
  - 03: Integration (foundation for 06, probability)
  - 04: Optimality Conditions <- YOU ARE HERE
    - Uses: gradients, Hessians, chain rule, Jacobians
    - Introduces: critical points, KKT, duality, convexity
- 05: Multivariate Calculus
  - 01: Partial Derivatives (prerequisite)
  - 02: Gradient and Hessian (prerequisite)
  - 03: Chain Rule & Backpropagation (prerequisite)
  - 05: Gradient Descent and Convergence (uses 04's convexity theory)
- 08: Optimisation Algorithms (deep dive into algorithms enabled by 04)
- 09: Probabilistic Graphical Models (MAP/MLE via 04's Lagrange theory)
The central insight. Optimality conditions are not merely a tool for finding optima - they are a language for describing what it means for a solution to be right. The KKT conditions don't just tell you where to stop searching; they tell you why the answer is the answer: which constraints are binding, how sensitive the solution is to perturbations, what trade-offs are implicit in the optimal choice. Every time a language model outputs a softmax distribution, every time a recommender system solves a constrained relevance problem, every time a reinforcement learning agent balances reward and KL penalty - the KKT conditions are operating in the background, whether or not the engineer knows it.