Optimality Conditions: Supplements A (Deeper Mathematical Treatments) through K (Extended Exercise Hints and Partial Solutions)
Supplement A: Deeper Mathematical Treatments
This supplement provides expanded proofs, worked examples, and connections that go beyond the core sections above. These are valuable for practitioners who want to move from "knowing the conditions" to "understanding why the conditions have the form they do."
A.1 Proof of the Envelope Theorem
The Envelope Theorem is the rigorous basis for the shadow price interpretation of Lagrange multipliers. Here is the complete proof.
Setup. Let $p^*(b)$ be the optimal value of:
$$p^*(b) = \min_{\mathbf{x}} f(\mathbf{x}) \quad \text{s.t.} \quad g(\mathbf{x}) = b,$$
where $b$ is a parameter (e.g., the RHS of a constraint). Assume $f, g$ are $C^1$ and $\mathbf{x}^*(b)$ varies smoothly.
Theorem.
$$\frac{dp^*}{db} = \lambda^*(b),$$
where $\lambda^*$ is the multiplier satisfying the stationarity condition $\nabla f(\mathbf{x}^*) = \lambda^* \nabla g(\mathbf{x}^*)$.
Proof. Differentiate $p^*(b) = f(\mathbf{x}^*(b))$ with respect to $b$:
$$\frac{dp^*}{db} = \nabla f(\mathbf{x}^*)^\top \frac{d\mathbf{x}^*}{db}.$$
From stationarity: $\nabla f(\mathbf{x}^*) = \lambda^* \nabla g(\mathbf{x}^*)$. Substituting:
$$\frac{dp^*}{db} = \lambda^* \nabla g(\mathbf{x}^*)^\top \frac{d\mathbf{x}^*}{db}.$$
Differentiating the constraint $g(\mathbf{x}^*(b)) = b$ with respect to $b$:
$$\nabla g(\mathbf{x}^*)^\top \frac{d\mathbf{x}^*}{db} = 1.$$
Substituting back:
$$\frac{dp^*}{db} = \lambda^*. \qquad \blacksquare$$
Consequence for ML. If we add a constraint $\|\mathbf{w}\|^2 \le c$ (norm budget) to a training problem, the Lagrange multiplier $\mu^*$ equals $-\,dp^*/dc$: the rate at which the optimal loss decreases as we increase the allowed norm. This is why weight decay (which enforces a soft norm budget) and the optimal multiplier coincide at the optimum: they are the same object.
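The theorem is easy to check numerically. Below is a sketch on an assumed toy instance, $\min x^2 + y^2$ s.t. $x + y = b$ (not an example from the text), where the closed-form solution makes the multiplier explicit:

```python
# Numeric check of the Envelope Theorem on: min x^2 + y^2  s.t.  x + y = b.
# Closed form: x* = y* = b/2, so p*(b) = b^2/2, and stationarity
# (2x, 2y) = lambda * (1, 1) gives lambda* = b.
def p_star(b):
    x = y = b / 2.0                  # optimal point for this toy problem
    return x**2 + y**2

b, eps = 3.0, 1e-6
dp_db = (p_star(b + eps) - p_star(b - eps)) / (2 * eps)   # finite difference
lam_star = b                         # multiplier predicted by stationarity
print(dp_db, lam_star)               # both approximately 3.0
```

The finite-difference slope of the optimal value matches the multiplier, which is exactly the shadow-price reading of the theorem.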
A.2 Farkas' Lemma: The Foundation of KKT
The Farkas Lemma is the linear algebra result that underpins the entire theory of KKT conditions. It provides the criterion for when a linear system has a solution versus when a certificate of infeasibility exists.
Lemma (Farkas, 1902). Let $A \in \mathbb{R}^{m \times n}$, $\mathbf{b} \in \mathbb{R}^m$. Exactly one of the following holds:
- There exists $\mathbf{x} \ge \mathbf{0}$ with $A\mathbf{x} = \mathbf{b}$
- There exists $\mathbf{y} \in \mathbb{R}^m$ with $A^\top \mathbf{y} \le \mathbf{0}$ and $\mathbf{b}^\top \mathbf{y} > 0$
Geometric interpretation. Statement 1 says $\mathbf{b}$ is in the conic hull of the columns of $A$. Statement 2 says there is a hyperplane separating $\mathbf{b}$ from this cone. The lemma asserts these are mutually exclusive and exhaustive.
Proof sketch. If 1 holds: $\mathbf{b}^\top \mathbf{y} = \mathbf{x}^\top A^\top \mathbf{y} \le 0$ since $\mathbf{x} \ge \mathbf{0}$ and $A^\top \mathbf{y} \le \mathbf{0}$, so 2 fails. The converse (if 1 fails then 2 holds) follows from the separating hyperplane theorem for convex cones.
How Farkas gives KKT. At a constrained minimum $\mathbf{x}^*$, a feasible descent direction $\mathbf{d}$ would satisfy $\nabla h_i(\mathbf{x}^*)^\top \mathbf{d} \le 0$ for every active constraint and $\nabla f(\mathbf{x}^*)^\top \mathbf{d} < 0$. The Farkas alternatives state that no such $\mathbf{d}$ can exist iff $-\nabla f(\mathbf{x}^*)$ is in the cone of the active constraint gradients, i.e., the KKT conditions hold.
A.3 Fritz John Conditions: When LICQ Fails
When the LICQ fails (active constraint gradients are linearly dependent), the standard KKT multiplier theorem breaks down. The Fritz John conditions provide a fallback:
Theorem (Fritz John, 1948). At a local minimum $\mathbf{x}^*$ of $f$ subject to $g_j(\mathbf{x}) = 0$, $h_i(\mathbf{x}) \le 0$, there exist multipliers $(\mu_0, \boldsymbol{\mu}, \boldsymbol{\lambda}) \neq \mathbf{0}$ with $\mu_0 \ge 0$, $\mu_i \ge 0$ such that:
$$\mu_0 \nabla f(\mathbf{x}^*) + \sum_j \lambda_j \nabla g_j(\mathbf{x}^*) + \sum_i \mu_i \nabla h_i(\mathbf{x}^*) = \mathbf{0}$$
and complementary slackness $\mu_i h_i(\mathbf{x}^*) = 0$ for all $i$.
The anomaly. When $\mu_0 = 0$, the objective function disappears from the condition entirely; the conditions are "degenerate" and say nothing about the objective. This is called an abnormal solution.
Practical implication. If LICQ holds, then $\mu_0 > 0$ is guaranteed and we can normalize to $\mu_0 = 1$, recovering the standard KKT conditions. Fritz John is thus a generalization that covers degenerate cases, but the cleanest results require LICQ (or another constraint qualification that ensures $\mu_0 > 0$).
A.4 Second-Order KKT Conditions
For constrained problems, second-order sufficient conditions involve the Hessian of the Lagrangian restricted to the tangent cone of the active constraints.
Setup. At a KKT point $(\mathbf{x}^*, \boldsymbol{\lambda}^*, \boldsymbol{\mu}^*)$ with active set $\mathcal{A} = \{i : h_i(\mathbf{x}^*) = 0\}$, define:
- $H = \nabla^2_{\mathbf{xx}} L(\mathbf{x}^*, \boldsymbol{\lambda}^*, \boldsymbol{\mu}^*)$ (Hessian of Lagrangian w.r.t. $\mathbf{x}$)
- The critical cone: $\mathcal{C} = \{\mathbf{d} : \nabla g_j(\mathbf{x}^*)^\top \mathbf{d} = 0 \;\forall j;\ \nabla h_i(\mathbf{x}^*)^\top \mathbf{d} = 0 \;\forall i \in \mathcal{A} \text{ with } \mu_i^* > 0;\ \nabla h_i(\mathbf{x}^*)^\top \mathbf{d} \le 0 \;\forall i \in \mathcal{A} \text{ with } \mu_i^* = 0\}$
Second-order necessary condition (SONC): If $\mathbf{x}^*$ is a local minimum (and a constraint qualification holds), then $\mathbf{d}^\top H \mathbf{d} \ge 0$ for all $\mathbf{d} \in \mathcal{C}$.
Second-order sufficient condition (SOSC): If the KKT conditions hold and $\mathbf{d}^\top H \mathbf{d} > 0$ for all $\mathbf{d} \in \mathcal{C} \setminus \{\mathbf{0}\}$, then $\mathbf{x}^*$ is a strict local minimum.
For SVM: At the SVM optimum, the Hessian of the Lagrangian in the primal variables is positive definite restricted to the constraint tangent space (verified by the positive definiteness of the kernel matrix for distinct support vectors)-confirming the optimum is indeed a strict constrained minimum.
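The mechanics of the SOSC check can be sketched on an assumed toy problem (not the SVM case): $\min x^2 - y^2$ s.t. $y = 0$, whose Lagrangian Hessian is indefinite yet positive definite on the constraint tangent space:

```python
# SOSC on the tangent space for: min x^2 - y^2  s.t.  y = 0.
# The Hessian diag(2, -2) is indefinite, but the critical cone is the x-axis
# (directions d with grad_g . d = 0), and there d^T H d = 2 > 0, so the
# origin is a strict constrained minimum despite the indefinite Hessian.
H = [[2.0, 0.0], [0.0, -2.0]]    # Hessian of the Lagrangian at (0, 0)
grad_g = [0.0, 1.0]              # constraint gradient of g(x, y) = y
d = [1.0, 0.0]                   # basis vector for {d : grad_g . d = 0}
quad = sum(d[i] * H[i][j] * d[j] for i in range(2) for j in range(2))
print(quad > 0)                  # True: SOSC holds on the tangent space
```

This is the whole content of the restricted second-order test: never ask whether $H$ is positive definite on all of space, only on the critical cone.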
A.5 Convex Conjugate and Fenchel Duality
The Fenchel conjugate provides a powerful alternative to Lagrangian duality, particularly useful for non-smooth problems.
Definition (Conjugate function). For $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$:
$$f^*(\mathbf{y}) = \sup_{\mathbf{x}} \big( \mathbf{y}^\top \mathbf{x} - f(\mathbf{x}) \big)$$
$f^*$ is always convex (pointwise supremum of affine functions of $\mathbf{y}$) regardless of whether $f$ is convex.
Key examples:
| $f(\mathbf{x})$ | $f^*(\mathbf{y})$ | Note |
|---|---|---|
| $\tfrac{1}{2}\lVert\mathbf{x}\rVert_2^2$ | $\tfrac{1}{2}\lVert\mathbf{y}\rVert_2^2$ | Self-conjugate |
| $\lVert\mathbf{x}\rVert_1$ | $0$ if $\lVert\mathbf{y}\rVert_\infty \le 1$, $+\infty$ else | Indicator of the $\ell_\infty$ unit ball |
| $\lVert\mathbf{x}\rVert$ (any norm) | $0$ if $\lVert\mathbf{y}\rVert_* \le 1$, $+\infty$ else | Indicator of the dual-norm unit ball |
Fenchel duality. The Fenchel dual problem of $\min_{\mathbf{x}} f(\mathbf{x}) + g(A\mathbf{x})$ is:
$$\max_{\mathbf{y}} \; -f^*(A^\top \mathbf{y}) - g^*(-\mathbf{y})$$
Lasso as Fenchel dual. The Lasso primal $\min_{\mathbf{w}} \tfrac12\lVert\mathbf{y} - X\mathbf{w}\rVert^2 + \lambda\lVert\mathbf{w}\rVert_1$ has a Fenchel dual that is a quadratic program with box constraints. The dual variables are (up to scaling) the residuals, and the dual constraint $\lVert X^\top \mathbf{u}\rVert_\infty \le \lambda$ gives the KKT condition for Lasso sparsity.
A.6 Proximal Operators and Subdifferential Calculus
For non-smooth convex problems, gradient descent fails but proximal methods succeed. The proximal operator is intimately connected to KKT conditions.
Definition. The proximal operator of $g$ with parameter $\alpha > 0$:
$$\operatorname{prox}_{\alpha g}(\mathbf{v}) = \arg\min_{\mathbf{x}} \Big( g(\mathbf{x}) + \frac{1}{2\alpha}\lVert\mathbf{x} - \mathbf{v}\rVert^2 \Big)$$
Connection to KKT. The fixed point of $\mathbf{x} \mapsto \operatorname{prox}_{\alpha g}(\mathbf{x} - \alpha \nabla f(\mathbf{x}))$ is the minimiser of $f + g$. The KKT condition for the proximal subproblem is $\mathbf{0} \in \partial g(\mathbf{x}) + \frac{1}{\alpha}(\mathbf{x} - \mathbf{v})$, which at the fixed point gives $\mathbf{0} \in \nabla f(\mathbf{x}) + \partial g(\mathbf{x})$: the subdifferential optimality condition.
Proximal gradient method for $\min_{\mathbf{x}} f(\mathbf{x}) + g(\mathbf{x})$ (smooth $f$ + non-smooth $g$):
$$\mathbf{x}_{k+1} = \operatorname{prox}_{\alpha g}\big(\mathbf{x}_k - \alpha \nabla f(\mathbf{x}_k)\big)$$
For Lasso ($g = \lambda\lVert\cdot\rVert_1$): $\operatorname{prox}_{\alpha\lambda\lVert\cdot\rVert_1}(\mathbf{v})_j = \operatorname{sign}(v_j)\max(|v_j| - \alpha\lambda,\, 0)$ (soft thresholding). This is the closed-form solution to the proximal subproblem, and it directly implements the KKT condition: the $j$-th weight is zero when the corresponding gradient component is smaller than $\lambda$ in magnitude (inactive constraint), nonzero otherwise.
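The soft-thresholding operator is a few lines of code; this is a straightforward sketch of the standard formula (the input vector and threshold are illustrative):

```python
import math

def soft_threshold(v, t):
    """Elementwise prox of t*||.||_1: sign(v_j) * max(|v_j| - t, 0)."""
    out = []
    for x in v:
        mag = max(abs(x) - t, 0.0)
        out.append(math.copysign(mag, x) if mag > 0 else 0.0)
    return out

w = soft_threshold([3.0, -0.5, 1.2], t=1.0)
print(w)   # small coordinates are zeroed exactly; large ones shrink by t
```

Note that small coordinates become exactly zero, not merely small: this is how the prox step produces genuine sparsity rather than shrinkage alone.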
A.7 Constrained Optimisation in High Dimensions: Curse and Blessing
Classical optimisation theory was developed primarily for problems in $\mathbb{R}^n$ with small $n$ (tens to thousands of variables). Modern ML operates with $n$ in the millions to trillions of parameters. Some classical results break down; others become stronger.
What breaks:
- Computing the Hessian: $O(n^2)$ memory, impossible at modern scale
- Solving KKT systems: $O(n^3)$ for dense systems
- Verifying positive definiteness: eigendecomposition costs $O(n^3)$
- LICQ: in high dimensions, constraints from discrete data rarely achieve full-rank Jacobians
What gets better (blessing of overparameterisation):
- Saddle points become easier to escape: at a saddle, a random perturbation has a component along some descent direction with high probability; gradient descent with noise escapes saddles in polynomial time
- Local minima become global: as shown by Kawaguchi and Du et al., the geometry of overparameterised networks makes spurious local minima rare
- Flat minima proliferate: with $p$ parameters and $n$ data points ($p \gg n$), there is generically a $(p - n)$-dimensional manifold of global minima; gradient descent finds some flat point in this manifold
Practical approximations to KKT in high- regime:
- Gradient norm: $\lVert\nabla f(\mathbf{x})\rVert \le \epsilon$ replaces exact stationarity
- Curvature estimation: diagonal preconditioners (Adagrad, Adam) approximate second-order information
- Sketched KKT systems: random projections reduce system size while preserving approximate solutions
- Dual certificate: in sparse recovery, checking $\lVert X^\top(\mathbf{y} - X\hat{\mathbf{w}})\rVert_\infty \leq \lambda$ verifies the KKT condition for Lasso without solving the primal: an $O(np)$ check rather than a full re-solve
Supplement B: Worked Examples
B.1 Complete KKT Analysis: Portfolio Optimisation
Problem. Markowitz mean-variance portfolio: minimise risk $\tfrac12 \mathbf{w}^\top \Sigma \mathbf{w}$ subject to expected return $\boldsymbol{\mu}^\top \mathbf{w} = r$ and budget $\mathbf{1}^\top \mathbf{w} = 1$.
Lagrangian:
$$L(\mathbf{w}, \lambda_1, \lambda_2) = \tfrac12 \mathbf{w}^\top \Sigma \mathbf{w} - \lambda_1(\boldsymbol{\mu}^\top \mathbf{w} - r) - \lambda_2(\mathbf{1}^\top \mathbf{w} - 1)$$
Stationarity:
$$\Sigma \mathbf{w} = \lambda_1 \boldsymbol{\mu} + \lambda_2 \mathbf{1}$$
Solution: $\mathbf{w}^* = \Sigma^{-1}(\lambda_1 \boldsymbol{\mu} + \lambda_2 \mathbf{1})$ (assuming $\Sigma \succ 0$)
Finding multipliers: Substitute $\mathbf{w}^*$ into the two constraints:
Define: $A = \boldsymbol{\mu}^\top \Sigma^{-1} \boldsymbol{\mu}$, $B = \boldsymbol{\mu}^\top \Sigma^{-1} \mathbf{1}$, $C = \mathbf{1}^\top \Sigma^{-1} \mathbf{1}$.
System: $A\lambda_1 + B\lambda_2 = r$ and $B\lambda_1 + C\lambda_2 = 1$.
The efficient frontier is the locus of $(\sigma^*(r), r)$ as $r$ varies; it is a parabola in $(\sigma^2, r)$ space, parameterised by the Lagrange multiplier $\lambda_1$.
Sensitivity: $\partial p^*/\partial r = \lambda_1^*$; a one-unit increase in required return increases the minimum variance by $\lambda_1^*$ units (to first order). This is the shadow price: the cost of reaching for higher returns.
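The closed form above can be exercised numerically. The two-asset numbers below are hypothetical, chosen only so the linear algebra stays hand-checkable:

```python
# Toy 2-asset Markowitz solve via w* = Sigma^{-1}(l1*mu + l2*1).
# Sigma = diag(0.04, 0.09), so Sigma^{-1} = diag(25, 100/9). Illustrative only.
Sigma_inv = [[25.0, 0.0], [0.0, 100.0 / 9.0]]
mu, r, ones = [0.08, 0.12], 0.10, [1.0, 1.0]

def quad(u, M, v):                        # u^T M v for 2x2 M
    return sum(u[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

A, B, C = quad(mu, Sigma_inv, mu), quad(mu, Sigma_inv, ones), quad(ones, Sigma_inv, ones)
det = A * C - B * B
l1 = (r * C - B) / det                    # Cramer's rule on [A B; B C][l1 l2]=[r; 1]
l2 = (A - B * r) / det
w = [sum(Sigma_inv[i][j] * (l1 * mu[j] + l2 * ones[j]) for j in range(2))
     for i in range(2)]
print(w, mu[0] * w[0] + mu[1] * w[1], w[0] + w[1])  # return ~0.10, budget ~1
```

Both constraints hold by construction once the 2x2 multiplier system is solved, which is the point: the multipliers carry all the constraint information.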
B.2 KKT Verification via Duality: Water-Filling
The water-filling solution for capacity maximisation in parallel Gaussian channels is a classic KKT example.
Problem. Maximise total capacity $\sum_{i=1}^n \log(1 + p_i/\sigma_i^2)$ subject to $\sum_i p_i = P$, $p_i \ge 0$.
Lagrangian:
$$L = \sum_i \log\!\Big(1 + \frac{p_i}{\sigma_i^2}\Big) - \nu\Big(\sum_i p_i - P\Big) + \sum_i \mu_i p_i$$
KKT stationarity:
$$\frac{1}{\sigma_i^2 + p_i} - \nu + \mu_i = 0$$
Complementary slackness: $\mu_i p_i = 0$. So either $p_i = 0$ (channel off) or $\mu_i = 0$ and $\sigma_i^2 + p_i = 1/\nu$ (channel on, filled to water level $1/\nu$).
Solution: $p_i^* = \max(0,\, 1/\nu - \sigma_i^2)$, where $\nu$ is chosen so $\sum_i p_i^* = P$.
The water-filling metaphor: imagine filling a container with flat water level $1/\nu$; channels with noise floor $\sigma_i^2$ above the water level get no power (inactive constraint). This is the KKT complementary slackness condition made vivid.
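Finding $\nu$ reduces to a one-dimensional search for the water level; a bisection sketch, with hypothetical noise levels and power budget:

```python
def water_filling(noise, P, iters=100):
    """Bisect on the water level w = 1/nu until sum_i max(0, w - noise_i) = P."""
    lo, hi = 0.0, max(noise) + P          # the water level cannot exceed this
    for _ in range(iters):
        w = (lo + hi) / 2
        used = sum(max(0.0, w - n) for n in noise)
        lo, hi = (w, hi) if used < P else (lo, w)
    return [max(0.0, w - n) for n in noise]

p = water_filling([1.0, 2.0, 4.0], P=3.0)
print(p)   # water level settles at 3: channel 3's floor (4.0) stays dry
```

The monotonicity of total used power in the water level is what makes bisection valid; the noisiest channel receiving exactly zero power is complementary slackness in action.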
B.3 RLHF as a Constrained Optimisation Problem
Reinforcement Learning from Human Feedback (RLHF) can be formulated as a constrained optimisation problem whose KKT conditions provide exact theoretical insight into the resulting policy.
Problem formulation. Let $\pi$ be the model policy and $\pi_{\text{ref}}$ the reference policy. Maximize expected reward subject to a KL constraint:
$$\max_\pi \; \mathbb{E}_{y \sim \pi(\cdot|x)}[r(x, y)] \quad \text{s.t.} \quad D_{\mathrm{KL}}\big(\pi(\cdot|x) \,\|\, \pi_{\text{ref}}(\cdot|x)\big) \le \epsilon$$
Lagrangian:
$$L(\pi, \beta) = \mathbb{E}_{y \sim \pi}[r(x, y)] - \beta\big(D_{\mathrm{KL}}(\pi \,\|\, \pi_{\text{ref}}) - \epsilon\big)$$
where $\beta \ge 0$ is the Lagrange multiplier (dual variable) for the KL constraint.
KKT stationarity (with respect to $\pi(y|x)$ at each $y$, with an additional multiplier $\alpha(x)$ for normalisation):
$$r(x, y) - \beta\Big(\log\frac{\pi(y|x)}{\pi_{\text{ref}}(y|x)} + 1\Big) - \alpha(x) = 0$$
Solving for the optimal policy:
$$\pi^*(y|x) = \frac{1}{Z(x)}\, \pi_{\text{ref}}(y|x)\, \exp\!\big(r(x, y)/\beta\big)$$
where $Z(x) = \sum_y \pi_{\text{ref}}(y|x)\, \exp\!\big(r(x, y)/\beta\big)$ is the partition function.
This is exactly the Boltzmann distribution from max-entropy principles! The optimal RLHF policy is the reference policy tilted by the reward, with temperature $\beta$. Higher $\beta$ (larger KL penalty) stays closer to $\pi_{\text{ref}}$; lower $\beta$ maximises reward more aggressively.
Complementary slackness: If the KL budget is not exhausted (the model doesn't need to move far to maximise reward), then $\beta = 0$ and the optimal policy maximises reward unconstrained. If the constraint binds, $\beta > 0$ is the shadow price of KL: how much reward per unit of KL divergence the model could gain by allowing more deviation from the reference.
Direct Preference Optimisation (DPO). DPO (Rafailov et al. 2023) avoids sampling from $\pi$ by reparameterising the reward using the KKT condition above: $r(x, y) = \beta \log\frac{\pi^*(y|x)}{\pi_{\text{ref}}(y|x)} + \beta \log Z(x)$. This transforms the constrained RL problem into a supervised binary classification problem; an elegant application of duality and KKT theory.
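For a discrete toy action space, the tilted-policy formula is one line; the reference probabilities and rewards below are made up purely for illustration:

```python
import math

def rlhf_policy(pi_ref, rewards, beta):
    """Closed-form KKT solution: pi*(y) proportional to pi_ref(y) * exp(r(y)/beta)."""
    tilted = [p * math.exp(r / beta) for p, r in zip(pi_ref, rewards)]
    Z = sum(tilted)                      # partition function
    return [t / Z for t in tilted]

pi_ref, rewards = [0.5, 0.3, 0.2], [0.0, 1.0, 2.0]
low_beta  = rlhf_policy(pi_ref, rewards, beta=0.1)    # nearly all mass on best y
high_beta = rlhf_policy(pi_ref, rewards, beta=100.0)  # stays close to pi_ref
print(low_beta, high_beta)
```

Sweeping $\beta$ makes the temperature interpretation concrete: small $\beta$ collapses the policy onto the argmax-reward action, large $\beta$ barely perturbs $\pi_{\text{ref}}$.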
Supplement C: Computational Methods for Finding Critical Points
C.1 Newton's Method and KKT Systems
For smooth unconstrained problems, Newton's method solves $\nabla f(\mathbf{x}) = \mathbf{0}$ by iterating:
$$\mathbf{x}_{k+1} = \mathbf{x}_k - \big[\nabla^2 f(\mathbf{x}_k)\big]^{-1} \nabla f(\mathbf{x}_k)$$
Newton's method has quadratic convergence near a non-degenerate critical point (where $\nabla^2 f(\mathbf{x}^*)$ is invertible): the error satisfies $\lVert\mathbf{x}_{k+1} - \mathbf{x}^*\rVert \le C\,\lVert\mathbf{x}_k - \mathbf{x}^*\rVert^2$.
For equality-constrained problems, Newton's method solves the KKT system:
$$\begin{bmatrix} \nabla^2_{\mathbf{xx}} L & \nabla g(\mathbf{x})^\top \\ \nabla g(\mathbf{x}) & 0 \end{bmatrix} \begin{bmatrix} \Delta\mathbf{x} \\ \Delta\boldsymbol{\lambda} \end{bmatrix} = -\begin{bmatrix} \nabla_{\mathbf{x}} L \\ g(\mathbf{x}) \end{bmatrix}$$
This KKT matrix (also called the augmented system or saddle-point matrix) is symmetric but indefinite-it has both positive and negative eigenvalues. Solving it efficiently is the core computational challenge in constrained optimisation.
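The unconstrained iteration can be sketched in one dimension, using the assumed test function $f(x) = x^4/4 - x$ with critical point $x^* = 1$:

```python
# Newton's method on f(x) = x^4/4 - x: gradient x^3 - 1, Hessian 3x^2.
# The unique critical point is x* = 1, and convergence near it is quadratic.
def newton(x, steps=8):
    errs = []
    for _ in range(steps):
        grad, hess = x**3 - 1.0, 3.0 * x**2
        x = x - grad / hess              # Newton update
        errs.append(abs(x - 1.0))
    return x, errs

x, errs = newton(2.0)
print(x)         # approximately 1.0
print(errs[:4])  # errors shrink roughly quadratically step over step
```

Printing the error sequence makes the quadratic rate visible: the number of correct digits roughly doubles per iteration once the iterate is close to $x^*$.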
C.2 Interior Point Methods and the Central Path
Interior point methods (barrier methods) solve inequality-constrained problems by converting them to a sequence of unconstrained problems. Replace $\min f(\mathbf{x})$ s.t. $h_i(\mathbf{x}) \le 0$ with:
$$\min_{\mathbf{x}} \; f(\mathbf{x}) - t \sum_i \log\big(-h_i(\mathbf{x})\big)$$
for decreasing $t > 0$. The log-barrier forces iterates to stay strictly feasible (it diverges to $+\infty$ as $h_i \to 0^-$) and approximates the indicator of the feasible set as $t \to 0$.
Central path. The solution $\mathbf{x}^*(t)$ as a function of $t$ traces the central path: a smooth curve converging to the optimal $\mathbf{x}^*$ as $t \to 0$. Along the central path, the modified KKT conditions hold:
$$\mu_i \,\big(-h_i(\mathbf{x})\big) = t \quad \text{for all } i$$
As $t \to 0$: $\mu_i h_i \to 0$ recovers complementary slackness. Interior point methods achieve the best known polynomial complexity for LP and convex QP: $O(\sqrt{m}\,\log(1/\epsilon))$ iterations.
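The central path has a closed form for a one-constraint toy problem, $\min x$ s.t. $x \ge 1$ (an assumed example, not from the text):

```python
# Barrier subproblem for: min x  s.t.  x >= 1  (i.e. h(x) = 1 - x <= 0).
# min_x  x - t*log(x - 1)  has stationarity 1 - t/(x - 1) = 0, so x*(t) = 1 + t:
# the central path slides toward the true optimum x* = 1 as t -> 0.
def central_path_point(t):
    return 1.0 + t

for t in [1.0, 0.1, 0.01]:
    x = central_path_point(t)
    mu = t / (x - 1.0)                 # barrier estimate of the multiplier
    print(t, x, mu * (1.0 - x))        # last column is -t: relaxed comp. slackness
```

The printed product $\mu \cdot h(x)$ equals $-t$ exactly along the path, which is the "modified KKT" statement in miniature.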
C.3 Automatic Differentiation and Gradient Verification
Modern ML uses automatic differentiation (AD) to compute exact gradients of any differentiable computation graph. The core mechanism is an exact application of the chain rule through reverse mode accumulation-backpropagation.
Gradient checking. A standard technique for verifying backprop correctness is comparing against finite differences:
$$\frac{\partial f}{\partial \theta_j} \approx \frac{f(\boldsymbol{\theta} + \epsilon \mathbf{e}_j) - f(\boldsymbol{\theta} - \epsilon \mathbf{e}_j)}{2\epsilon}$$
with $\epsilon \approx 10^{-5}$. This centred difference approximation has error $O(\epsilon^2)$, so at $\epsilon = 10^{-5}$ the finite-difference gradient should agree with the exact gradient to about 10 significant digits.
Why finite differences are not used in practice:
- Cost: $2n$ forward passes for $n$ parameters, vs. 1 backward pass for AD
- Numerical precision: for very large or small gradients, floating-point rounding limits accuracy
But finite differences remain the gold standard for correctness verification during algorithm development-if your AD gradient doesn't match finite differences at 1e-5 relative tolerance, you have a bug.
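A self-contained sketch of the check on an assumed quadratic objective, where the analytic gradient is known exactly:

```python
# Central-difference gradient check: f(w) = sum(w_i^2), analytic gradient 2w.
def f(w):
    return sum(x * x for x in w)

def num_grad(f, w, eps=1e-5):
    g = []
    for i in range(len(w)):
        wp = list(w); wp[i] += eps
        wm = list(w); wm[i] -= eps
        g.append((f(wp) - f(wm)) / (2 * eps))   # centred difference
    return g

w = [0.3, -1.2, 2.0]
analytic = [2 * x for x in w]
numeric = num_grad(f, w)
max_err = max(abs(a - n) for a, n in zip(analytic, numeric))
print(max_err)   # tiny: O(eps^2) truncation plus floating-point roundoff
```

In a real pipeline the analytic gradient would come from the AD framework rather than a hand-derived formula; everything else is unchanged.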
C.4 Constraint Satisfaction in Modern ML Systems
Modern ML increasingly uses hard and soft constraints, and the optimality conditions for handling these have real-world implementations:
Constitutional AI and RLHF with hard constraints. Recent LLM alignment work adds explicit constraints on outputs (harmlessness, helpfulness). The dual variable controlling KL deviation from a reference policy is analogous to the Lagrange multiplier on the constraint "don't deviate too far from the safe base model." The KKT condition determines the optimal trade-off.
Quantisation as constrained optimisation. Post-training quantisation (PTQ) and quantisation-aware training (QAT) seek weight matrices $\hat{W}$ that are close to the trained weights $W$ but have values restricted to a discrete grid $\mathcal{G}$. This is:
$$\min_{\hat{W}} \lVert\hat{W} - W\rVert^2 \quad \text{s.t.} \quad \hat{W} \in \mathcal{G}$$
The optimal solution is the projection onto the grid: round each weight to the nearest allowed value. In the KKT view, the multiplier on the grid constraint plays the role of the quantisation error: it is nonzero exactly where the constraint forces a weight away from its unconstrained value.
FlashAttention and memory constraints. FlashAttention (Dao et al. 2022) reformulates the attention computation to respect SRAM capacity constraints. The tiling strategy solves a constrained I/O minimisation problem: process tiles of that fit in SRAM while computing exact attention. The optimal tile size is determined by KKT-like analysis of the memory hierarchy constraints.
Supplement D: Visualisation Guide
D.1 Saddle Point Visualisation
The canonical saddle point is $f(x, y) = x^2 - y^2$ at the origin. Its Hessian is $\operatorname{diag}(2, -2)$: indefinite. Level curves are hyperbolas; the gradient flow converges to the origin along the $x$-axis but diverges along the $y$-axis.
In high dimensions. For an $n$-parameter saddle with Morse index $k$, gradient descent perturbed with isotropic Gaussian noise escapes in a number of steps polynomial in the dimension and in the inverse magnitude of the most negative eigenvalue (Jin et al. 2017). The escape direction is the eigenvector corresponding to the most negative eigenvalue.
D.2 Level Curve Analysis
For any scalar $c$, consider the level set $\{\mathbf{x} : f(\mathbf{x}) = c\}$:
- Near a local minimum: level curves are approximately ellipses centred at $\mathbf{x}^*$; the aspect ratio is given by the square roots of the eigenvalue ratios of the Hessian
- Near a saddle point: level curves form hyperbolas; the asymptotes are the stable/unstable manifolds
- Near a local maximum: level curves are ellipses, with the gradient pointing inward toward the maximum ($f$ decreasing outward)
The condition number $\kappa = \lambda_{\max}/\lambda_{\min}$ of the Hessian at a minimum determines the eccentricity of the ellipses. High condition number (elongated ellipses) means gradient descent zigzags slowly; Newton's method normalises this.
D.3 Constraint Geometry
For $\min f(\mathbf{x})$ s.t. $g(\mathbf{x}) = 0$:
- The feasible set is a manifold (a curve in $\mathbb{R}^2$, a surface in $\mathbb{R}^3$)
- Level curves of $f$ foliate the ambient space
- The minimum is where a level curve is tangent to the constraint manifold
- Tangency means $\nabla f$ is orthogonal to the constraint tangent space, i.e., $\nabla f = \lambda \nabla g$; which is exactly Lagrange's condition
The visual clarity of this geometric argument is why Lagrange's method is so compelling: it replaces algebra with geometry.
Supplement E: Key Formulas Reference
OPTIMALITY CONDITIONS CHEAT SHEET
UNCONSTRAINED
FONC: ∇f(x*) = 0 (necessary)
SONC: H(x*) ⪰ 0 (necessary)
SOSC: H(x*) ≻ 0 (+ FONC) (sufficient for strict local min)
Convex: FONC iff global min
EQUALITY CONSTRAINTS (g(x) = 0)
Lagrangian: L = f + λᵀg
KKT: ∇ₓL = 0 and g(x*) = 0
SOSC on tangent: dᵀ∇²ₓₓL d > 0 for all d ≠ 0 with ∇g(x*)ᵀd = 0
INEQUALITY CONSTRAINTS (h(x) ≤ 0)
Lagrangian: L = f + μᵀh + λᵀg
KKT (4 conditions):
(1) Stationarity: ∇ₓL = 0
(2) Primal feas.: h(x*) ≤ 0, g(x*) = 0
(3) Dual feas.: μ* ≥ 0
(4) Comp. slack.: μᵢ* hᵢ(x*) = 0 for all i
DUALITY
Dual function: q(μ, λ) = inf_x L(x, μ, λ)
Weak duality: d* ≤ p* always
Strong duality: d* = p* under Slater's condition (convex, strictly feas.)
Shadow price: ∂p*/∂b = λ* (sensitivity of optimal value)
SPECIFIC PROBLEMS
OLS: (XᵀX)w = Xᵀy (normal equations = FONC)
Ridge: (XᵀX + nλI)w = Xᵀy (FONC with L2 reg)
SVM: w* = Σᵢ αᵢyᵢxᵢ (stationarity); αᵢ > 0 iff xᵢ on margin
PCA: Σv = λv (FONC for variance max on sphere)
Softmax: p ∝ exp(z) (FONC for max entropy with logit constraints)
RLHF: π*(y|x) ∝ π_ref(y|x) exp(r/β) (KKT for KL-constrained reward max)
References and Further Reading
Primary References
- Boyd & Vandenberghe, "Convex Optimization" (2004) - Chapters 3-4 (convex functions, convex problems) and Chapter 5 (duality, KKT conditions). The standard graduate reference. Available free online at stanford.edu/~boyd/cvxbook.
- Nocedal & Wright, "Numerical Optimization" (2006) - Chapters 12-15. Rigorous treatment of constrained optimisation, KKT theory, interior point methods, and convergence analysis.
- Rockafellar, "Convex Analysis" (1970) - The foundational text on subdifferentials, conjugate functions, and duality. Rigorous but dense.
- Bertsekas, "Nonlinear Programming" (2016) - Chapter 3 (Lagrange multiplier methods) and Chapter 4 (duality). Excellent treatment of constraint qualifications.
For ML/AI Connections
- Dauphin et al. "Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization" (2014) - Empirical evidence for saddle point dominance in deep networks.
- Foret et al. "Sharpness-Aware Minimization for Efficiently Improving Generalization" (2021) - SAM algorithm; KKT analysis of the inner maximisation.
- Papyan et al. "Prevalence of Neural Collapse during the Terminal Phase of Deep Learning Training" (2020) - Neural collapse as KKT critical point of the UFM.
- Rafailov et al. "Direct Preference Optimization" (2023) - DPO derivation from RLHF KKT conditions.
- Du et al. "Gradient Descent Finds Global Minima of Non-Convex Neural Networks" (2018) - Overparameterisation and global optimality.
- Kawaguchi "Deep Learning without Poor Local Minima" (2016) - Theoretical analysis of local minima in deep linear networks.
Computational Tools
- CVXPY (Diamond & Boyd 2016) - Python library for convex optimisation with automatic KKT verification. Handles LP, QP, SOCP, SDP with strong duality guarantees.
- SciPy `scipy.optimize.minimize` - Numerical optimisation with constraint support; uses SLSQP (Sequential Least Squares Programming), which solves KKT subproblems at each step.
This section is part of the 05 Multivariate Calculus chapter. For gradient descent algorithms and convergence theory using these optimality conditions, see 05/05 Gradient Descent and Convergence. For the probability chapter where Lagrange multipliers appear in maximum likelihood and Bayesian estimation, see 06.
Supplement F: Extended Theory
F.1 The Morse Lemma and Local Geometry of Non-Degenerate Critical Points
Theorem (Morse Lemma). Near a non-degenerate critical point $\mathbf{x}^*$ (where $\nabla f(\mathbf{x}^*) = \mathbf{0}$ and $H = \nabla^2 f(\mathbf{x}^*)$ is invertible), there exist local coordinates $(u_1, \dots, u_n)$ in which $f$ has the form:
$$f = f(\mathbf{x}^*) - u_1^2 - \cdots - u_k^2 + u_{k+1}^2 + \cdots + u_n^2$$
where $k$ is the Morse index (the number of negative eigenvalues of $H$).
Implication. All non-degenerate critical points have the same local geometry as pure quadratics-up to a smooth change of coordinates. This is why the Hessian gives complete local information about a critical point: it completely determines the topological type of the critical point.
For AI. In transformer loss landscapes, measuring the Morse index of critical points found during training is computationally expensive but theoretically illuminating. Empirical studies using Lanczos approximations of the Hessian spectrum suggest that good minima have small Morse index (few descent directions), while saddles found earlier in training have large index.
F.2 Generalized Saddle Points and Minimax Problems
Definition. A point $(\mathbf{x}^*, \mathbf{y}^*)$ is a saddle point of $L(\mathbf{x}, \mathbf{y})$ if:
$$L(\mathbf{x}^*, \mathbf{y}) \le L(\mathbf{x}^*, \mathbf{y}^*) \le L(\mathbf{x}, \mathbf{y}^*) \quad \text{for all } \mathbf{x}, \mathbf{y}$$
(minimum over $\mathbf{x}$, maximum over $\mathbf{y}$).
Existence theorem. If $L$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$, and both domains are compact convex sets, then a saddle point exists (Von Neumann minimax theorem).
Optimality conditions for saddle points:
$$\nabla_{\mathbf{x}} L(\mathbf{x}^*, \mathbf{y}^*) = \mathbf{0}, \qquad \nabla_{\mathbf{y}} L(\mathbf{x}^*, \mathbf{y}^*) = \mathbf{0}$$
This looks identical to an unconstrained critical point, but the additional saddle structure (convex-concave) ensures the point is a minimax rather than an arbitrary stationary point.
GAN training as saddle point optimisation. Generative Adversarial Networks minimise the objective:
$$\min_G \max_D \; \mathbb{E}_{\mathbf{x} \sim p_{\text{data}}}[\log D(\mathbf{x})] + \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}}\big[\log\big(1 - D(G(\mathbf{z}))\big)\big]$$
The optimal discriminator maximises for fixed $G$; the optimal generator minimises for fixed $D$. At the Nash equilibrium (saddle point), $G$ generates samples from $p_{\text{data}}$ and $D = \tfrac12$ everywhere.
The challenge: the convex-concave structure is not present in neural network GAN training (both and are non-convex), so saddle point existence is not guaranteed and gradient descent alternation can cycle or diverge.
F.3 Proximal Point Algorithm and Augmented Lagrangian
Proximal Point Algorithm (PPA). For convex $f$:
$$\mathbf{x}_{k+1} = \operatorname{prox}_{\alpha f}(\mathbf{x}_k) = \arg\min_{\mathbf{x}} \Big( f(\mathbf{x}) + \frac{1}{2\alpha}\lVert\mathbf{x} - \mathbf{x}_k\rVert^2 \Big)$$
This is equivalent to finding the KKT point of the proximal subproblem: $\mathbf{0} \in \partial f(\mathbf{x}_{k+1}) + \frac{1}{\alpha}(\mathbf{x}_{k+1} - \mathbf{x}_k)$.
Augmented Lagrangian (method of multipliers). For $\min f(\mathbf{x})$ s.t. $g(\mathbf{x}) = 0$, the augmented Lagrangian:
$$L_\rho(\mathbf{x}, \boldsymbol{\lambda}) = f(\mathbf{x}) + \boldsymbol{\lambda}^\top g(\mathbf{x}) + \frac{\rho}{2}\lVert g(\mathbf{x})\rVert^2$$
combines the standard Lagrangian with a quadratic penalty term. The quadratic term is zero at feasibility, so the KKT conditions for the augmented Lagrangian match the original. But the augmented Lagrangian is better conditioned: the $\rho$ term adds curvature near the constraint.
ADMM. The Alternating Direction Method of Multipliers (ADMM) is an augmented Lagrangian method split across two blocks of variables. It is widely used for distributed training, consensus optimisation, and lasso-type problems. At convergence, the ADMM iterates satisfy the KKT conditions of the original problem.
F.4 Implicit Function Theorem and Sensitivity Analysis
The KKT conditions define a nonlinear system $F(\mathbf{x}, \boldsymbol{\theta}) = \mathbf{0}$ parameterised by the problem data $\boldsymbol{\theta}$. The Implicit Function Theorem (IFT) tells us when and how the optimal solution changes with $\boldsymbol{\theta}$.
Theorem (IFT). If $F(\mathbf{x}^*, \boldsymbol{\theta}_0) = \mathbf{0}$ and the Jacobian $\partial F/\partial \mathbf{x}$ is invertible at $(\mathbf{x}^*, \boldsymbol{\theta}_0)$, then there exists a smooth function $\mathbf{x}^*(\boldsymbol{\theta})$ near $\boldsymbol{\theta}_0$ with $F(\mathbf{x}^*(\boldsymbol{\theta}), \boldsymbol{\theta}) = \mathbf{0}$, and:
$$\frac{d\mathbf{x}^*}{d\boldsymbol{\theta}} = -\Big(\frac{\partial F}{\partial \mathbf{x}}\Big)^{-1}\frac{\partial F}{\partial \boldsymbol{\theta}}$$
Application: differentiating through optimisation. In meta-learning (MAML) and bilevel optimisation, we need $d\mathbf{x}^*/d\boldsymbol{\theta}$: the derivative of the optimal solution with respect to parameters. The IFT gives this derivative via solving a linear system involving the KKT Jacobian.
MAML connection. Model-Agnostic Meta-Learning computes:
$$\boldsymbol{\theta}' = \boldsymbol{\theta} - \alpha \nabla_{\boldsymbol{\theta}} L_{\text{task}}(\boldsymbol{\theta}), \qquad \boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \beta\, \nabla_{\boldsymbol{\theta}} L_{\text{meta}}(\boldsymbol{\theta}')$$
This requires backpropagating through the inner gradient step, which is equivalent to applying the IFT to the first-order optimality condition of the inner problem.
F.5 Optimality and Generalisation: The Flat Minima Hypothesis
A deep connection links optimality conditions to generalisation in deep learning. The sharpness of a minimum-measured by the trace or maximum eigenvalue of the Hessian-correlates empirically with test performance.
Measures of sharpness:
- $\operatorname{tr}(H)$ or $\lVert H\rVert_F$: total curvature (trace or Frobenius norm of the Hessian)
- $\lambda_{\max}(H)$: maximum curvature (sharpness in the steepest direction)
- $\max_{\lVert\boldsymbol{\epsilon}\rVert \le \rho} L(\mathbf{w} + \boldsymbol{\epsilon}) - L(\mathbf{w})$: PAC-Bayes / SAM sharpness
Why flat minima generalise better (hypothesis). A flat minimum has many parameters with nearly zero curvature. Small perturbations to the weights (due to finite data, distribution shift, or quantisation error) cause small loss increases. A sharp minimum has high curvature: small weight perturbations cause large loss spikes, leading to poor generalisation.
PAC-Bayes bound. The PAC-Bayes generalisation bound for a posterior $Q$ over weights near $\mathbf{w}^*$ includes a term of the form $\sqrt{D_{\mathrm{KL}}(Q \,\|\, P)/n}$,
where $P$ is the prior. For a Gaussian posterior centred at $\mathbf{w}^*$, this KL term grows with the Hessian eigenvalues; confirming that flat minima (small eigenvalues) admit tighter generalisation bounds.
Implication for optimality. The "best" solution from a generalisation perspective is not just any critical point: it is a critical point that is also flat, one where the sharpness measure is minimised. SAM seeks this; the KKT condition for the SAM minimax problem gives the perturbation direction that reveals the sharpest ascent direction.
Supplement G: Historical Perspective
G.1 Timeline of Key Developments
HISTORY OF OPTIMALITY CONDITIONS
1788 Lagrange - "Mécanique Analytique": Lagrange multipliers for
constrained mechanics. Invented to handle pendulum constraints.
1815 Gauss - Method of least squares: unconstrained minimisation,
normal equations as first-order conditions.
1844 Cauchy - Gradient descent proposed as an optimisation method.
First explicit use of the gradient as a direction.
1902 Farkas - Farkas Lemma for linear systems of inequalities.
The algebraic foundation of all LP and KKT theory.
1928 Von Neumann - Minimax theorem for zero-sum games.
Foundation of saddle-point theory and game theory.
1939 Karush - Master's thesis: first-order conditions for
inequality-constrained problems. (Unknown for decades.)
1951 Kuhn & Tucker - Independent rediscovery, published in
"Nonlinear Programming". Conditions become "KKT" in their honor
(Karush's contribution recognised only later).
1953 Charnes, Cooper & Henderson - "An Introduction to Linear
Programming": popularises the simplex method and LP duality.
1961 Wolfe - Duality theorem for nonlinear programming.
1963 Zoutendijk - Feasible direction methods, active set concepts.
1979 Khachiyan - Ellipsoid method: first polynomial algorithm for LP.
1984 Karmarkar - Interior point method for LP, O(n^3.5 L) time.
Practical polynomial LP solver.
1995 Cortes & Vapnik - Support Vector Machine: KKT dual is the
modern SVM. Complementary slackness identifies support vectors.
1996 Byrd et al. - L-BFGS-B: limited memory quasi-Newton for
bound-constrained problems. Industry standard for large-scale
constrained optimisation.
2011 Duchi, Hazan, Singer - Adagrad: adaptive learning rate
implicitly approximates inverse Hessian structure (second-order).
2014 Dauphin et al. - Saddle point dominance in deep learning.
First systematic empirical study of neural network landscape.
2015 Kingma & Ba - Adam optimiser: diagonal Hessian approximation
with momentum. Becomes default for LLM training.
2018 Du et al. - Gradient descent finds global minima in wide nets.
Theoretical basis for why deep learning "works."
2021 Foret et al. - SAM: Sharpness-Aware Minimisation. KKT-derived
update for flatter minima. New SOTA on multiple benchmarks.
2020 Papyan et al. - Neural Collapse: KKT analysis of terminal
training geometry. UFM framework.
2023 Rafailov et al. - DPO: RLHF framed as KKT-constrained
optimisation. Replaces PPO for LLM alignment.
G.2 The Karush Omission
The Karush-Kuhn-Tucker conditions are named for Kuhn and Tucker (1951), but Karush derived essentially the same conditions in his 1939 master's thesis at the University of Chicago under L.M. Graves. Karush never published his thesis, and his work remained unknown until the 1970s when Kuhn acknowledged the priority.
This is a cautionary tale for researchers: unpublished work does not enter the scientific record. Karush's conditions are mathematically equivalent to KKT and were derived first, but priority in science is determined by publication and dissemination.
Modern convention is to include Karush's name (KKT), and some authors use "FOOC" (first-order optimality conditions) to avoid the naming entirely.
G.3 From Constrained Mechanics to Language Models
Lagrange invented his multiplier method to handle constraints in mechanics; specifically, the rigid-rod constraint in a pendulum. The constraint that "the rod length is fixed" is $g(\mathbf{x}) = \lVert\mathbf{x}\rVert - \ell = 0$, and the Lagrange multiplier is the tension in the rod.
Two centuries later, the same mathematical structure appears in:
- The tension (KL penalty) in RLHF that prevents language models from deviating too far from the base policy
- The margin constraint in SVMs, where the Lagrange multiplier is the "tension" pulling the decision boundary toward the support vectors
- The softmax temperature, which is the Lagrange multiplier for the entropy constraint in maximum entropy next-token prediction
The deep unity of Lagrange's insight-that constraints can be absorbed into an objective via auxiliary multipliers-is one of the most productive ideas in the history of mathematics.
Supplement H: Problem Reduction Patterns
H.1 Reducing Constrained Problems to Unconstrained
Many constrained problems have hidden structure that allows elimination of constraints, converting a constrained problem to unconstrained.
Pattern 1: Substitution. If constraint can be solved for one variable, substitute to eliminate it.
Example: minimise $f(x, y)$ s.t. $x + y = 1$. Substitute $y = 1 - x$: minimise $\tilde{f}(x) = f(x, 1 - x)$. The constraint disappears, and the FONC is $\tilde{f}'(x) = 0$: an unconstrained problem.
Pattern 2: Reparameterisation. If the constraint defines a manifold , parameterise directly.
Example: Unit circle in $\mathbb{R}^2$: parameterise as $(\cos\theta, \sin\theta)$. Optimise over $\theta$ without any constraint.
For ML: Stiefel manifold optimisation (orthonormal matrices) uses geodesic gradient descent on the manifold, avoiding explicit constraint handling. Used in RNN training and low-rank optimisation.
Pattern 3: Lagrangian method. When substitution is impractical, Lagrange multipliers convert the constrained problem to a (possibly harder) unconstrained problem in augmented variables.
Pattern 4: Penalty methods. Replace $\min f(\mathbf{x})$ s.t. $g(\mathbf{x}) = 0$ with $\min f(\mathbf{x}) + \frac{\rho}{2}\lVert g(\mathbf{x})\rVert^2$. As $\rho \to \infty$, the penalised solution approaches the constrained solution. Less accurate than Lagrange multipliers at any finite $\rho$, but simpler to implement.
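Pattern 4 can be seen in closed form on an assumed toy problem, $\min x^2$ s.t. $x = 1$, where the penalised minimiser slides toward the constrained solution as $\rho$ grows:

```python
# Penalty method on: min x^2  s.t.  x = 1, via  min x^2 + (rho/2)*(x - 1)^2.
# Stationarity 2x + rho*(x - 1) = 0 gives x(rho) = rho / (2 + rho) -> 1.
def penalised_min(rho):
    return rho / (2.0 + rho)

for rho in [1.0, 10.0, 1000.0]:
    print(rho, penalised_min(rho))   # approaches the constrained solution x* = 1
```

The bias $1 - x(\rho) = 2/(2+\rho)$ decays only like $1/\rho$, which is exactly why augmented Lagrangian methods (Supplement F.3) add a multiplier term instead of pushing $\rho$ to extremes.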
H.2 Problem Recognition Guide
When you encounter an optimisation problem, use this decision tree:
OPTIMIZATION PROBLEM RECOGNITION
Does it have constraints?
NO -> Unconstrained
Is f convex?
YES -> FONC (∇f = 0) is sufficient; any local = global
NO -> FONC necessary; check SOSC; may have local optima
YES -> Constrained
Are constraints equality only?
YES -> Lagrange multipliers; solve n+m equations
NO (has inequalities) -> KKT conditions
Is problem convex?
YES -> KKT iff global optimum (Slater's -> strong duality)
NO -> KKT necessary (LICQ needed); local optimum only
H.3 Standard Problem Forms and Their KKT Systems
Linear Program (LP): $\min_{\mathbf{x}} \mathbf{c}^\top\mathbf{x}$ s.t. $A\mathbf{x} = \mathbf{b}$, $\mathbf{x} \ge \mathbf{0}$
KKT: $A^\top\boldsymbol{\lambda} + \mathbf{s} = \mathbf{c}$, $\mathbf{s} \ge \mathbf{0}$, $s_i x_i = 0$ (plus primal feasibility)
Quadratic Program (QP): $\min_{\mathbf{x}} \tfrac12\mathbf{x}^\top Q\mathbf{x} + \mathbf{c}^\top\mathbf{x}$ s.t. $A\mathbf{x} \le \mathbf{b}$
KKT: $Q\mathbf{x} + \mathbf{c} + A^\top\boldsymbol{\mu} = \mathbf{0}$, standard feasibility and complementary slackness
Second-Order Cone Program (SOCP): $\min_{\mathbf{x}} \mathbf{c}^\top\mathbf{x}$ s.t. $\lVert A_i\mathbf{x} + \mathbf{b}_i\rVert_2 \le \mathbf{c}_i^\top\mathbf{x} + d_i$
Subsumes QP and LP; solvable in polynomial time; KKT conditions involve second-order cone complementarity
Semidefinite Program (SDP): $\min_X \langle C, X\rangle$ s.t. $\langle A_i, X\rangle = b_i$, $X \succeq 0$
KKT: matrix complementarity $XZ = 0$, where $Z \succeq 0$ is the dual variable; powerful for combinatorial relaxations and spectral graph problems
These standard forms are recognised and solved by CVXPY and other convex optimisation packages. Expressing your ML problem in one of these forms gives you access to provably correct, polynomial-time solvers.
Supplement I: Numerical Experiments and Verification
I.1 Verifying KKT Conditions Numerically
In practice, KKT conditions are verified numerically after solving an optimisation problem. Here is a systematic protocol:
import numpy as np

def numerical_gradient(f, x, eps=1e-6):
    """Central-difference gradient of scalar f at x."""
    g = np.zeros(x.size)
    for i in range(x.size):
        e = np.zeros(x.size); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def numerical_jacobian(F, x, eps=1e-6):
    """Central-difference Jacobian of vector-valued F at x, shape (len(F(x)), len(x))."""
    F0 = np.atleast_1d(F(x)).astype(float)
    J = np.zeros((F0.size, x.size))
    for i in range(x.size):
        e = np.zeros(x.size); e[i] = eps
        J[:, i] = (np.atleast_1d(F(x + e)) - np.atleast_1d(F(x - e))) / (2 * eps)
    return J

def verify_kkt(x_star, lambda_star, mu_star, f, g, h, tol=1e-6):
    """
    Verify KKT conditions for problem:
        min f(x)  s.t.  g(x) = 0,  h(x) <= 0
    at candidate solution (x_star, lambda_star, mu_star).
    """
    # 1. Stationarity: grad f + J_g^T lambda + J_h^T mu = 0
    grad_f = numerical_gradient(f, x_star)
    grad_g = numerical_jacobian(g, x_star)   # (m, n)
    grad_h = numerical_jacobian(h, x_star)   # (p, n)
    stationarity = grad_f + lambda_star @ grad_g + mu_star @ grad_h
    cond1 = np.allclose(stationarity, 0, atol=tol)
    # 2. Primal feasibility
    cond2 = np.allclose(g(x_star), 0, atol=tol) and np.all(h(x_star) <= tol)
    # 3. Dual feasibility
    cond3 = np.all(mu_star >= -tol)
    # 4. Complementary slackness
    cond4 = np.allclose(mu_star * h(x_star), 0, atol=tol)
    print(f"Stationarity:        {'PASS' if cond1 else 'FAIL'}")
    print(f"Primal feasibility:  {'PASS' if cond2 else 'FAIL'}")
    print(f"Dual feasibility:    {'PASS' if cond3 else 'FAIL'}")
    print(f"Complementary slack: {'PASS' if cond4 else 'FAIL'}")
    return cond1 and cond2 and cond3 and cond4
This systematic verification-checking all four KKT conditions numerically-is a key debugging tool when implementing constrained optimisation algorithms.
I.2 Duality Gap Monitoring
For convex problems, monitoring the duality gap during optimisation provides both a convergence certificate and a diagnostic:
def duality_gap(primal_val, dual_val):
    """
    Compute duality gap. Should be >= 0 (weak duality).
    Converges to 0 at optimality for convex problems (strong duality).
    """
    gap = primal_val - dual_val
    assert gap >= -1e-8, f"Negative duality gap: {gap}"  # weak duality violation
    return gap
A duality gap of $\epsilon$ certifies that both the primal and dual solutions are within $\epsilon$ of optimality: a certificate of near-optimality without knowing the true optimal value.
I.3 Critical Point Classification Checklist
When you find a critical point numerically, this checklist determines its type:
| Step | Check | How | Interpretation |
|---|---|---|---|
| 1 | Verify stationarity | $\lVert\nabla f(\mathbf{x}^*)\rVert < \mathrm{tol}$ | True critical point (not just near-critical) |
| 2 | Compute Hessian $H$ | Via AD or finite differences | Local curvature model |
| 3 | Eigenvalue decomposition | Sort eigenvalues $\lambda_1 \le \dots \le \lambda_n$ | Curvature spectrum |
| 4 | Count negatives | $k = \#\{i : \lambda_i < -\mathrm{tol}\}$ | Morse index |
| 5 | Count near-zero | $m = \#\{i : \lvert\lambda_i\rvert \le \mathrm{tol}\}$ | Degeneracy / flat directions |
| 6 | Classify | $k = 0$: min; $k = n$: max; else: saddle | Standard classification |
| 7 | Check SOSC | All $\lambda_i > 0$ strictly | Strict local min |
| 8 | Verify global (if convex) | FONC alone suffices | No need for SOSC |
This procedure is the numerical implementation of the second-order test and is embedded in standard optimisation post-processing.
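The checklist above can be sketched in a few lines (the test function $f(x, y) = x^2 - y^2$ and its Hessian are chosen here purely for illustration):

```python
import numpy as np

# Numerical classification of a critical point (illustrative): the saddle of
# f(x, y) = x^2 - y^2 at the origin, whose Hessian is diag(2, -2).
H = np.array([[2.0, 0.0], [0.0, -2.0]])
tol = 1e-8

eigvals = np.sort(np.linalg.eigvalsh(H))
k = int(np.sum(eigvals < -tol))          # Morse index: number of negative eigenvalues
m = int(np.sum(np.abs(eigvals) < tol))   # degenerate (near-zero) directions

if m > 0:
    kind = "degenerate (second-order test inconclusive)"
elif k == 0:
    kind = "strict local minimum"
elif k == len(eigvals):
    kind = "strict local maximum"
else:
    kind = "saddle"
assert kind == "saddle" and k == 1 and m == 0
```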
Summary: The Three Pillars of Optimality
This section has developed three interconnected pillars that together form the mathematical foundation of constrained optimisation:
Pillar 1: First and Second-Order Conditions. The gradient and Hessian encode all local information about a critical point. FONC ($\nabla f(x^*) = 0$) is necessary; SOSC ($\nabla f(x^*) = 0$ with $\nabla^2 f(x^*) \succ 0$) is sufficient. For convex problems, FONC is sufficient for global optimality-the Hessian never needs to be checked.
Pillar 2: Lagrange Multipliers and KKT. Constraints are not obstacles but information: each active constraint generates a Lagrange multiplier (shadow price) measuring the sensitivity of the objective to that constraint. The KKT conditions generalize FONC to handle both equality and inequality constraints, with complementary slackness providing the geometric signature of which constraints matter.
Pillar 3: Duality. Every constrained problem has a dual that provides a lower bound (weak duality) and-for convex problems with a Slater point-an exact alternative formulation (strong duality). The dual reveals hidden structure (SVM kernel trick), provides certificates of optimality (duality gap), and connects constrained optimisation to game theory (saddle points).
These three pillars appear throughout AI and ML: gradient-based training (Pillar 1), SVM/PCA/attention (Pillar 2), and RL/alignment (Pillar 3). Understanding them deeply is not optional for a serious ML researcher-they are the grammar of the field.
Supplement J: Connections to Information Geometry
J.1 Fisher Information and the Natural Gradient
The gradient is computed with respect to the Euclidean metric on parameter space. But parameter spaces of probability distributions carry a natural Riemannian metric-the Fisher information metric:

$$F(\theta) = \mathbb{E}_{x \sim p_\theta}\left[\nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^\top\right]$$
Natural gradient. The gradient of $L(\theta)$ in the Fisher metric is:

$$\tilde{\nabla} L(\theta) = F(\theta)^{-1} \nabla L(\theta)$$

The natural gradient step is the KKT condition for:

$$\min_{\delta}\; \nabla L(\theta)^\top \delta \quad \text{s.t.} \quad \mathrm{KL}\!\left(p_\theta \,\|\, p_{\theta+\delta}\right) \le \epsilon$$

(minimise the linearised loss subject to a KL-ball constraint; to second order, $\mathrm{KL}(p_\theta \| p_{\theta+\delta}) \approx \tfrac{1}{2}\delta^\top F(\theta)\,\delta$).
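Spelling out this KKT step (a sketch, using the standard second-order approximation $\mathrm{KL}(p_\theta \| p_{\theta+\delta}) \approx \tfrac{1}{2}\delta^\top F \delta$):

```latex
% Lagrangian of the trust-region subproblem:
%   min_delta  grad L . delta   s.t.   (1/2) delta^T F delta <= eps
\mathcal{L}(\delta, \mu)
  = \nabla L^\top \delta
  + \mu \left( \tfrac{1}{2}\, \delta^\top F\, \delta - \epsilon \right)
% Stationarity:
\nabla_\delta \mathcal{L} = \nabla L + \mu\, F\, \delta = 0
\quad \Longrightarrow \quad
\delta^* = -\tfrac{1}{\mu}\, F^{-1} \nabla L
```

The step direction is $-F^{-1}\nabla L$, i.e. the natural gradient; the multiplier $\mu$ only sets the step length via the budget $\epsilon$.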
K-FAC and Adam as natural gradient approximations. K-FAC (Kronecker-Factored Approximate Curvature) approximates $F$ via Kronecker products for efficiency. Adam's adaptive learning rates approximate $\mathrm{diag}(F)^{-1/2}$-a diagonal approximation to the inverse (square root of the) Fisher. Both are principled from the viewpoint of natural gradient descent, whose KKT formulation makes the Fisher metric (= the second-order expansion of the KL constraint) explicit.
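A minimal natural-gradient sketch (the categorical model, one-hot target, and damping value below are illustrative choices, not from the text; damping is needed because the softmax Fisher is singular):

```python
import numpy as np

# Natural-gradient step for a categorical model p = softmax(theta) (illustrative).
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.array([1.0, 0.0, -1.0])
p = softmax(theta)
F = np.diag(p) - np.outer(p, p)        # Fisher information w.r.t. the logits

target = np.array([0.0, 0.0, 1.0])     # cross-entropy against a one-hot target
grad = p - target                      # gradient of -log p[target] w.r.t. theta

lam = 1e-3                             # damping, as used in practical K-FAC
nat_grad = np.linalg.solve(F + lam * np.eye(3), grad)

# Still a descent direction, but rescaled by (inverse) curvature of the model.
assert nat_grad @ grad > 0
```

Replacing the dense solve with a diagonal of `F` recovers (roughly) the Adam-style preconditioner; replacing it with a Kronecker-factored approximation recovers the K-FAC idea.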
J.2 Optimal Transport and Wasserstein Duality
The Wasserstein distance between distributions $\mu$ and $\nu$ over $\mathcal{X}$ is defined as:

$$W(\mu, \nu) = \inf_{\gamma \in \Pi(\mu, \nu)} \int c(x, y)\, d\gamma(x, y)$$

where $\Pi(\mu, \nu)$ is the set of joint distributions $\gamma$ with marginals $\mu$ and $\nu$.

Kantorovich duality. By Lagrange duality (the constraint is linear in $\gamma$), the Wasserstein distance admits a dual:

$$W(\mu, \nu) = \sup_{\phi(x) + \psi(y) \le c(x, y)} \int \phi\, d\mu + \int \psi\, d\nu$$

The dual functions $\phi$ and $\psi$ are the Lagrange multipliers for the marginal constraints on $\mu$ and $\nu$.
For generative models. Wasserstein GANs (WGANs) use the Kantorovich-Rubinstein form of the dual: for $c(x, y) = \|x - y\|$ it simplifies to $W_1(\mu, \nu) = \sup_{\|f\|_{\mathrm{Lip}} \le 1} \mathbb{E}_\mu[f] - \mathbb{E}_\nu[f]$, so the discriminator is trained under a 1-Lipschitz constraint. The dual formulation is more stable than the Jensen-Shannon divergence of standard GANs-a direct application of duality theory to generative model training.
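For discrete distributions the primal problem is just a linear program, which makes the marginal constraints (and their multipliers) concrete. A small sketch (the point sets, weights, and use of SciPy's HiGHS solver are illustrative choices):

```python
import numpy as np
from scipy.optimize import linprog

# Discrete optimal transport as an LP (illustrative): mu puts mass 1/2 on {0, 1},
# nu puts mass 1/2 on {1, 2}, cost c(x, y) = |x - y|.  W1 should equal 1.0.
xs, ys = np.array([0.0, 1.0]), np.array([1.0, 2.0])
mu, nu = np.array([0.5, 0.5]), np.array([0.5, 0.5])
C = np.abs(xs[:, None] - ys[None, :])          # cost matrix

# Variables: the transport plan gamma_ij >= 0, flattened row-major.
A_eq = np.zeros((4, 4))
for i in range(2):
    A_eq[i, 2 * i: 2 * i + 2] = 1.0            # row sums = mu (first marginal)
for j in range(2):
    A_eq[2 + j, j::2] = 1.0                    # column sums = nu (second marginal)
b_eq = np.concatenate([mu, nu])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
assert abs(res.fun - 1.0) < 1e-9               # W1(mu, nu) = 1
```

The multipliers HiGHS reports for the equality constraints (`res.eqlin.marginals`) are, up to sign convention, the discrete Kantorovich potentials $\phi$ and $\psi$.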
End of 04 Optimality Conditions notes. This section is 2000+ lines covering unconstrained/constrained optimality from first principles through modern AI applications.
Navigation: <- Chain Rule and Backpropagation | Next: Gradient Descent and Convergence ->
Supplement K: Extended Exercise Solutions (Hints and Partial Solutions)
Exercise 1 Hints
(a) Finding critical points of :
Cases: ( or ) combined with ( or ).
For : from get . Critical point: .
For , or : combine with .
(b) At : . The Hessian is zero-a degenerate critical point, so the second-order test is inconclusive. Must use higher-order analysis (the origin is a saddle of ).
Exercise 4 Hints
(c) on : this is convex. Key: compute the Hessian in the matrix sense. For a perturbation : The second-order term is .
(d) $f(x, y) = x^2 / y$ for $y > 0$. Hessian: $\nabla^2 f = \frac{2}{y^3}\begin{pmatrix} y^2 & -xy \\ -xy & x^2 \end{pmatrix}$. Check: $\det \nabla^2 f = 0$ with positive trace. PSD but not PD. Actually convex: it's a perspective function-the perspective of the convex $g(x) = x^2$.
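The "PSD but never PD" claim for $f(x, y) = x^2/y$ is easy to confirm numerically (a sketch; the sample ranges are arbitrary illustrative choices):

```python
import numpy as np

# Numerical check (illustrative) that f(x, y) = x^2 / y is convex on y > 0:
# its Hessian is PSD everywhere there, yet singular (det = 0), so never PD.
def hessian(x, y):
    return np.array([[2 / y,            -2 * x / y ** 2],
                     [-2 * x / y ** 2,   2 * x ** 2 / y ** 3]])

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(), rng.uniform(0.5, 5.0)
    H = hessian(x, y)
    assert np.linalg.eigvalsh(H).min() >= -1e-9    # PSD
    assert abs(np.linalg.det(H)) < 1e-9            # rank 1: never PD
```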
Exercise 7 Extended Notes: SAM Convergence
The SAM update can be viewed as gradient descent on the perturbed objective $L^{\mathrm{SAM}}(\theta) = \max_{\|\epsilon\| \le \rho} L(\theta + \epsilon)$.
The KKT condition of the (linearised) inner maximisation shows that $\epsilon^* = \rho\, \nabla L(\theta) / \|\nabla L(\theta)\|$ is correct when the constraint is active (which it always is for nonzero gradients). The multiplier is the shadow price of relaxing the perturbation budget $\rho$.
This connection shows SAM is not heuristic-it is the exact solution to a well-defined KKT problem. The sharpness regularisation effect comes from the fact that at a sharp minimum the perturbed gradient $\nabla L(\theta + \epsilon^*)$ is large (large Hessian eigenvalues amplify it), so SAM takes a larger effective step away from sharp directions.
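One SAM step on a toy quadratic makes the effect visible (the quadratic, $\rho$, and learning rate are invented for illustration):

```python
import numpy as np

# One SAM step on L(theta) = 0.5 * theta^T H theta (illustrative):
# eps* = rho * g / ||g|| is the linearised solution of the inner maximisation
#   max_{||eps|| <= rho} L(theta + eps).
H = np.diag([100.0, 1.0])                  # one sharp direction, one flat direction
grad = lambda th: H @ th

theta = np.array([1.0, 1.0])
rho, lr = 0.05, 1e-2

g = grad(theta)
eps_star = rho * g / np.linalg.norm(g)     # ascend to the worst-case point
sam_step = grad(theta + eps_star)          # gradient evaluated at that point
theta_new = theta - lr * sam_step

# SAM pushes harder along the sharp coordinate than plain gradient descent.
gd_step = g
assert sam_step[0] > gd_step[0]
```

Along the sharp direction the perturbed gradient exceeds the plain gradient, so the effective step away from the sharp minimum is larger-exactly the regularisation effect described above.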