Positive Definite Matrices: Part 12: Exercises to Appendix F: Quick Reference
12. Exercises
Exercise 1 * - Verify PD Axioms for $X^\top X + \lambda I$
Let $X \in \mathbb{R}^{m \times n}$ and $\lambda > 0$. Define $A = X^\top X + \lambda I_n$.
(a) Prove $A \succ 0$ using the definition: $v^\top A v > 0$ for all $v \neq 0$.
(b) What is the smallest eigenvalue of $A$? Express it in terms of the singular values of $X$ and $\lambda$.
(c) What happens as $\lambda \to 0$? Under what condition on $X$ does $X^\top X$ itself become PD?
(d) Show $A \succeq \lambda I \succ 0$ using the Loewner order.
(e) AI connection: Identify $A$ as the normal equations matrix for ridge regression. Why does $\lambda > 0$ guarantee a unique ridge solution regardless of the rank of $X$?
Exercise 2 * - Implement Cholesky from Scratch
(a) Implement the Cholesky algorithm (Algorithm 4.2) in pure NumPy without using np.linalg.cholesky. Your function should return the lower triangular $L$ satisfying $A = LL^\top$, or raise a ValueError if $A$ is not PD.
(b) Test on and verify .
(c) Compare the result with np.linalg.cholesky - they should agree to machine precision.
(d) Test what happens when you apply your function to an indefinite matrix.
(e) Efficiency: What is the flop count of your implementation for an $n \times n$ input? Compare to LU factorization.
Exercise 3 * - Sylvester's Criterion
Let .
(a) Compute the three leading principal minors of $A$ as functions of the parameter.
(b) Use Sylvester's criterion to find all parameter values for which $A \succ 0$.
(c) At the boundary values (where $A$ transitions from PD to indefinite), what is the nullspace of $A$? Interpret this geometrically.
(d) Verify your answer computationally by plotting the smallest eigenvalue of $A$ as a function of the parameter.
Exercise 4 ** - LDL^T Factorization
(a) Implement the LDL^T algorithm (Algorithm 4.4) in NumPy. Your function returns unit lower triangular $L$ and diagonal $D$ such that $A = LDL^\top$.
(b) Test on . Verify .
(c) Using your factorization, solve $Ax = b$ by: (i) solve $Lz = b$, (ii) solve $Dy = z$, (iii) solve $L^\top x = y$.
(d) Explain why LDL^T avoids the numerical issues of Cholesky when $A$ is indefinite.
(e) AI connection: Why do second-order optimizers (like KFAC) prefer working with $D$ (the pivots) rather than $\ell_{ii}$ (the Cholesky diagonal)?
Exercise 5 ** - Schur Complement and Block PD Test
Let $M = \begin{pmatrix} A & B \\ B^\top & C \end{pmatrix}$ where:
(a) Compute the Schur complement $M/A = C - B^\top A^{-1} B$.
(b) Use Theorem 6.2 to determine whether $M \succ 0$.
(c) Compute $\det M$ using $\det M = \det(A)\,\det(M/A)$ and verify numerically.
(d) Use the block inverse formula to compute $M^{-1}$. Verify $M M^{-1} = I$.
(e) Gaussian conditioning: Interpret $M$ as the joint covariance matrix of two blocks of Gaussian variables. What is the conditional covariance of one block given the other? Interpret: does conditioning reduce uncertainty?
Exercise 6 ** - Log-Det via Cholesky and Gradient Verification
(a) Compute $\log\det A$ for a given PD matrix $A$ using: (i) direct eigenvalues ($\sum_i \log\lambda_i$), (ii) the Cholesky diagonal formula ($2\sum_i \log\ell_{ii}$), (iii) np.log(np.linalg.det(A)). Verify all three agree.
(b) Implement a function log_det(A) using Cholesky. Compare speed vs the eigenvalue method for random PD matrices.
(c) Numerically verify the gradient formula $\nabla_A \log\det A = A^{-1}$ using finite differences.
(d) For the Gaussian log-likelihood: given 50 samples from $\mathcal{N}(0, A)$ where $A$ is the matrix from part (a), compute the log-likelihood $\ell = -\tfrac{N}{2}\left(d\log 2\pi + \log\det A + \mathrm{tr}(A^{-1} S)\right)$, where $S$ is the sample covariance.
Exercise 7 *** - Gram Matrix / Kernel Matrix and Mercer Check
(a) Given 5 points $x_1, \dots, x_5 \in \mathbb{R}^d$ (design your own interesting configuration), compute the Gram matrix $G_{ij} = x_i^\top x_j$.
(b) Verify $G \succeq 0$ by checking eigenvalues. Explain why $G$ is singular whenever the points span fewer than 5 dimensions ($\mathrm{rank}(G) \le d$).
(c) Implement and test three kernels on these points: (i) linear $k(x, y) = x^\top y$, (ii) RBF $k(x, y) = \exp(-\|x - y\|^2 / 2\sigma^2)$, (iii) polynomial $k(x, y) = (x^\top y + 1)^p$. For each, compute the kernel matrix and verify PSD.
(d) Now try the "not-a-kernel" (Laplacian kernel IS a valid kernel) vs (NOT always a valid kernel). Determine empirically whether each produces a PSD Gram matrix.
(e) AI connection: Implement a simple kernel regression: given training inputs and outputs , compute predictions at test points using (Nadaraya-Watson/GPR predictive mean).
Exercise 8 *** - Cholesky Reparameterization for VAE Sampling
(a) Implement a function sample_mvn(mu, L, n_samples) that samples vectors from $\mathcal{N}(\mu, \Sigma)$ with $\Sigma = LL^\top$ using the reparameterization trick $z = \mu + L\epsilon$, $\epsilon \sim \mathcal{N}(0, I)$.
(b) For a given mean $\mu$ and PD covariance $\Sigma$, compute $L = \mathrm{chol}(\Sigma)$. Sample 1000 points, compute the sample mean and covariance, and verify they match $\mu$ and $\Sigma$ (up to sampling noise).
(c) Simulate a VAE encoder: the encoder takes a scalar input $x$ and outputs $\mu(x)$ and a lower triangular $L(x)$ (a diagonal $L$ for simplicity). Sample $z = \mu(x) + L(x)\epsilon$ and pass it through a simple decoder $\hat{x} = g(z)$. Compute the ELBO:
$\mathcal{L} = \mathbb{E}_{q(z \mid x)}[\log p(x \mid z)] - \mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)$, where $p(z) = \mathcal{N}(0, I)$.
(d) Verify the KL formula using both the analytical expression and Monte Carlo estimation from samples.
(e) Gradient check: Confirm that the reparameterization trick allows gradients to flow through $\mu$ and $L$ (not through the noise $\epsilon$) by computing the gradient of the ELBO with respect to $\mu$ analytically and comparing with finite differences.
13. Why This Matters for AI (2026 Perspective)
| Concept | Concrete AI/LLM Role | Where / Method |
|---|---|---|
| Positive definite covariance | Valid multivariate Gaussian distributions | VAEs (Kingma & Welling, 2013); diffusion models (DDPM, score matching) |
| Cholesky factorization | Reparameterization trick for differentiable sampling | All variational inference; NUTS-HMC sampler in Pyro, NumPyro |
| LDL^T factorization | Stable Hessian factorization in trust-region optimizers | L-BFGS, trust-region Newton; scipy.optimize |
| Schur complement | Conditional Gaussian inference; GP prediction | GPyTorch, GPflow; Bayesian neural networks |
| Matrix inversion lemma (Woodbury) | Low-rank updates without full matrix inversion | LoRA weight merging; Kalman filter; GP with inducing points |
| Log-determinant | Normalizing constant in Gaussian log-likelihood | GP hyperparameter optimization; normalizing flows (RealNVP, Glow) |
| Gradient of log-likelihood w.r.t. covariance | Fitting GP kernel parameters; structured covariance learning | |
| Gram matrix / kernel matrix | Kernel methods, self-attention similarity | SVM training; transformer attention; Performer / FAVOR+ |
| Fisher information matrix | Riemannian metric on parameter space | K-FAC optimizer (Google Brain, 2015); natural gradient variational inference |
| PSD Hessian at minima | Second-order optimality condition | SAM (flatness-seeking optimizer); loss landscape analysis |
| Sharpness | Generalization proxy; flat minima theory | SAM, ASAM, Stein Self-Repulsive (2024) |
| PSD cone / SDP | Constrained optimization; certified robustness | α-CROWN, β-CROWN neural network verification; fairness constraints |
| Mercer's theorem | Theoretical foundation for kernel machines | SVMs; kernel PCA; connections to infinite-width NTK theory |
| Cholesky reparameterization | Parameterizing full covariance in deep models | Deep covariance estimation; Matrix-variate distributions in meta-learning |
| Log-det stochastic estimator | Scalable GP training on millions of points | GPyTorch (Gardner et al., 2018); SVGP; stochastic trace estimation |
14. Conceptual Bridge
Looking Back
This section builds on a foundation laid across the previous six sections of Advanced Linear Algebra. From eigenvalues and eigenvectors (01), we borrow the spectral theorem - the fact that symmetric matrices diagonalize orthogonally - which provides the cleanest proof that positive definiteness is equivalent to positive eigenvalues. From SVD (02), we inherit the language of singular values and the connection between the spectral norm, condition number, and the stability of Cholesky. The orthogonality theory from 05 underpins the spectral theorem itself and the proof that the PSD square root $A^{1/2}$ is well-defined and unique. Matrix norms from 06 give precise bounds on how a PD matrix behaves numerically: the condition number $\kappa(A) = \lambda_{\max}/\lambda_{\min}$ measures how far $A$ is from singular, and ill-conditioned PD matrices lead to numerical difficulties in Cholesky.
Most fundamentally, positive definite matrices are the matrices that have all four of the desirable properties we want from a symmetric matrix: uniquely solvable linear systems (invertible), well-defined quadratic energy (positive), computable square root (Cholesky), and monotone response to matrix operations (Loewner order). Understanding this quadrant of symmetry × positivity is the analytical key to all probabilistic machine learning.
Looking Forward
Positive definite matrices are the entry point to the next major development: 08 (Matrix Decompositions) uses the Cholesky factorization developed here as one member of a family that includes LU (general matrices) and QR (orthogonal factorization). The Cholesky decomposition studied here in full generality is previewed (only briefly) in 08 - by the time students reach 08, the Cholesky algorithm should feel familiar.
The deeper application of positive definiteness comes later in the curriculum. In Chapter 6 (Probability Theory), covariance matrices appear in every multivariate distribution. In Chapter 8 (Optimization), the second-order conditions for minima require $\nabla^2 f(x^*) \succeq 0$, and convexity of a function is equivalent to its Hessian being PSD everywhere. In Chapter 12 (Functional Analysis), Mercer's theorem and RKHS theory extend PSD kernels to infinite-dimensional inner product spaces. The PSD cone and semidefinite programming previewed in 9 appear extensively in Chapter 8 (Optimization) and in the robotics/control applications in later chapters.
Curriculum Position
POSITIVE DEFINITE MATRICES IN THE CURRICULUM
========================================================================
Chapter 2: Linear Algebra Basics
+-- 01 Vectors and Spaces -> inner products, quadratic forms
+-- 04 Determinants -> Sylvester's criterion
+-- 06 Vector Spaces -> subspaces, null space
Chapter 3: Advanced Linear Algebra
+-- 01 Eigenvalues -> spectral characterization of PD
+-- 02 SVD -> singular values, condition number
+-- 05 Orthogonality -> spectral theorem, square root
+-- 06 Matrix Norms -> condition number, jitter scale
+-- 07 Positive Definite <== YOU ARE HERE
| | Cholesky, LDL^T, Schur, log-det, Gram, SDP
+-- 08 Matrix Decompositions -> LU, QR, brief Cholesky recap
Downstream:
+-- Ch. 6 Probability: Gaussian covariance \Sigma \succ 0
+-- Ch. 8 Optimization: Hessian \succ 0 at local minima; SDP
+-- Ch. 12 Functional Analysis: Mercer, RKHS
+-- Ch. 13 ML Math: K-FAC, VAE, GP, normalizing flows
========================================================================
The central insight to carry forward: positive definiteness is not a restriction but a characterization. A matrix that fails to be PD is not "almost OK" - it is categorically different, with no unique minimum, no valid probability interpretation, no Cholesky factorization. The condition $A \succ 0$ is the boundary between well-posedness and ill-posedness in a wide range of mathematical problems, and recognizing that boundary - and the tools to work with it (Cholesky, Schur, log-det) - is an essential skill for the working ML practitioner.
Appendix A: Closure Properties and Algebraic Operations
A.1 Preserving Positive Definiteness Under Operations
Understanding which operations preserve PD/PSD structure is practically important - it tells you when a composed matrix is guaranteed to be PD without recomputing.
Direct results:
| Operation | Inputs | Result | Condition |
|---|---|---|---|
| $A + B$ | $A \succ 0$, $B \succeq 0$ | $\succ 0$ | Always |
| $cA$ | $A \succ 0$, $c > 0$ | $\succ 0$ | Always |
| $A^{-1}$ | $A \succ 0$ | $\succ 0$ | Always |
| $P^\top A P$ | $A \succ 0$, $P$ invertible | $\succ 0$ | Always |
| $B^\top A B$ | $A \succ 0$, $B$ full column rank | $\succ 0$ | Always |
| $A^{1/2}$ | $A \succeq 0$ | $\succeq 0$ | Always |
| $tA + (1-t)B$ | $A, B \succ 0$, $t \in [0, 1]$ | $\succ 0$ | Always |
| $A^t$ | $A \succ 0$, $t \in \mathbb{R}$ | $\succ 0$ | Matrix power via $\exp(t \log A)$ |
| $AB$ | $A, B \succ 0$ | NOT necessarily | Only if $AB = BA$ |
| $A \otimes B$ | $A, B \succ 0$ | $\succ 0$ | Always (Kronecker) |
| $A \otimes I + I \otimes B$ | $A, B \succ 0$ | $\succ 0$ | Always (Kronecker sum) |
| $A \circ B$ | $A, B \succeq 0$ | $\succeq 0$ (Hadamard) | Schur product theorem |
The Schur product theorem. The Hadamard (element-wise) product $A \circ B$ of two PSD matrices is PSD. This is non-obvious! It is used in kernel methods: if $k_1$ and $k_2$ are valid kernels, their pointwise product $k_1 k_2$ is also a valid kernel.
Proof of Schur product theorem. If $A, B \succeq 0$, write $A_{ij} = \langle u_i, u_j \rangle$ and $B_{ij} = \langle v_i, v_j \rangle$ (Gram representations). Then $(A \circ B)_{ij} = A_{ij} B_{ij} = \langle u_i \otimes v_i,\; u_j \otimes v_j \rangle$, so $A \circ B$ is the Gram matrix of the vectors $u_i \otimes v_i$ (Kronecker products of the generating vectors). Gram matrices are PSD.
Why $AB$ is not always PSD. The product of two PD matrices need not even be symmetric: for example, $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$ are both PD, but $AB = \begin{pmatrix} 2 & 2 \\ 1 & 2 \end{pmatrix}$ is not symmetric (hence not PSD). The product of symmetric matrices is symmetric iff they commute.
For AI: Kronecker product structure is central to K-FAC: the Kronecker-factored Fisher approximation $A \otimes B$ is PSD since $A \succeq 0$ and $B \succeq 0$. The Schur product theorem ensures that combining PD kernels (feature interactions) produces valid kernels.
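A quick numerical check of these closure results; the matrices below are random illustrative examples, not ones from the lesson:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    """Random PSD matrix via a Gram construction B @ B.T."""
    B = rng.standard_normal((n, n))
    return B @ B.T

def min_eig(M):
    return np.linalg.eigvalsh(M).min()

A, B = random_psd(4), random_psd(4)

print(min_eig(A * B) >= -1e-10)          # Hadamard product stays PSD (Schur product theorem)
print(min_eig(np.kron(A, B)) >= -1e-10)  # Kronecker product stays PSD
print(np.allclose(A @ B, (A @ B).T))     # ordinary product: generally not even symmetric
```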
A.2 Principal Submatrix Inheritance
Theorem A.1. If $A \succ 0$, every principal submatrix of $A$ is PD. If $A \succeq 0$, every principal submatrix is PSD.
A principal submatrix of $A$ is obtained by deleting rows and columns with the same index set; what remains is $A_S = (a_{ij})_{i,j \in S}$ for some $S \subseteq \{1, \dots, n\}$.
Proof. For any $y \in \mathbb{R}^{|S|}$, embed it in $\mathbb{R}^n$ via $x_i = y_i$ for $i \in S$ and $x_i = 0$ for $i \notin S$. Then $y^\top A_S y = x^\top A x > 0$ for $y \neq 0$ (since $x \neq 0$).
Consequence. Every diagonal entry $a_{ii} > 0$ for $A \succ 0$ (take $S = \{i\}$). Every $2 \times 2$ principal submatrix is PD, so $a_{ij}^2 < a_{ii} a_{jj}$ for all $i \neq j$ (a Cauchy-Schwarz-type bound for matrix entries).
A.3 Indefinite Matrices and Saddle Points
For completeness: an indefinite symmetric matrix $A$ has both positive and negative eigenvalues. The corresponding quadratic form $q(x) = x^\top A x$ takes positive values in some directions and negative values in others. The level sets $\{x : x^\top A x = c\}$ are hyperboloids.
Connection to saddle points in optimization. A critical point with indefinite Hessian is a saddle point - not a local minimum. In neural network optimization:
- Gradient descent near a saddle point may slow down dramatically (gradient is small, Hessian is indefinite)
- The negative eigenvalue directions are "escape directions" - following them reduces the loss
- Stochastic gradient descent (SGD) with noise "escapes" saddles; deterministic gradient descent can get stuck
Sylvester's law of inertia. For a symmetric matrix $A$, the signature (number of positive, negative, zero eigenvalues) is invariant under congruence transformations $A \mapsto P^\top A P$ (for invertible $P$). This means no invertible change of basis can turn a saddle into a minimum - the number of descent directions is a geometric invariant of the quadratic form.
Appendix B: Numerical Methods and Implementation
B.1 Cholesky for Linear System Solving
A primary application of Cholesky is solving $Ax = b$ for PD $A$.
Algorithm:
- Factor: $A = LL^\top$ (Cholesky, $\sim n^3/3$ flops)
- Forward solve: $Ly = b$ (substitute from the top, $O(n^2)$)
- Backward solve: $L^\top x = y$ (substitute from the bottom, $O(n^2)$)
For multiple right-hand sides $b_1, \dots, b_k$: factor once, solve $k$ times at $O(n^3/3 + kn^2)$ cost total.
Numerical example: Solve $Ax = b$ with $A = \begin{pmatrix} 4 & 2 \\ 2 & 5 \end{pmatrix}$, $b = \begin{pmatrix} 6 \\ 7 \end{pmatrix}$.
Cholesky: $L = \begin{pmatrix} 2 & 0 \\ 1 & 2 \end{pmatrix}$ (since $\ell_{11} = \sqrt{4} = 2$, $\ell_{21} = 2/2 = 1$, $\ell_{22} = \sqrt{5 - 1} = 2$).
Forward: $Ly = b$ gives $y_1 = 6/2 = 3$; $y_2 = (7 - 1 \cdot 3)/2 = 2$.
Backward: $L^\top x = y$ gives $x_2 = 2/2 = 1$; $x_1 = (3 - 1 \cdot 1)/2 = 1$.
Verify: $Ax = \begin{pmatrix} 4 + 2 \\ 2 + 5 \end{pmatrix} = \begin{pmatrix} 6 \\ 7 \end{pmatrix} = b$. OK
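The same three steps in NumPy on a small 2x2 system (the numbers are illustrative; scipy.linalg.solve_triangular performs the two triangular solves):

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[4., 2.], [2., 5.]])
b = np.array([6., 7.])

L = np.linalg.cholesky(A)                  # L = [[2, 0], [1, 2]]
y = solve_triangular(L, b, lower=True)     # forward solve  L y = b     -> y = [3, 2]
x = solve_triangular(L.T, y, lower=False)  # backward solve L^T x = y   -> x = [1, 1]

print(x, np.allclose(A @ x, b))            # [1. 1.] True
```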
B.2 Cholesky Update and Downdate
When $A$ changes by a rank-1 update $A' = A + vv^\top$ (or downdate $A' = A - vv^\top$), it is wasteful to recompute the full Cholesky factorization. Rank-1 Cholesky update algorithms (LINPACK's dchud) update $L$ in $O(n^2)$ time.
Algorithm (rank-1 update, Gill-Golub-Murray-Saunders style):
Given $A = LL^\top$ and $v$, compute $L'$ such that $L'L'^\top = A + vv^\top$:
x = v (working copy; the whole update costs O(n^2))
for k = 1 to n:
r = sqrt(L[k,k]^2 + x[k]^2)
c = r / L[k,k]
s = x[k] / L[k,k]
L[k,k] = r
L[k+1:n, k] = (L[k+1:n,k] + s*x[k+1:n]) / c
x[k+1:n] = c*x[k+1:n] - s*L[k+1:n,k]
This uses Givens rotations (-> 05: Orthogonality) to update $L$ in place.
For AI: Online learning algorithms that add one data point at a time use rank-1 Cholesky updates to maintain the Cholesky factorization of the Gram matrix. Sequential Bayesian updating (online GP regression) uses rank-1 Cholesky updates at $O(n^2)$ cost per update instead of a full $O(n^3)$ refactorization.
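A direct NumPy translation of the loop above (a sketch; the function name is ours and the code is not optimized), checked against a fresh factorization of $A + vv^\top$:

```python
import numpy as np

def chol_update(L, v):
    """Rank-1 update: given A = L @ L.T, return L' with L' @ L'.T = A + v v^T."""
    L = L.copy()
    x = np.asarray(v, dtype=float).copy()
    n = L.shape[0]
    for k in range(n):
        r = np.hypot(L[k, k], x[k])
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
        x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)            # comfortably PD test matrix
v = rng.standard_normal(5)

L_up = chol_update(np.linalg.cholesky(A), v)
print(np.allclose(L_up @ L_up.T, A + np.outer(v, v)))   # True
```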
B.3 Condition Number and Numerical Stability of Cholesky
Condition number. For $A \succ 0$, the condition number is $\kappa(A) = \lambda_{\max}(A)/\lambda_{\min}(A)$. The relative error in solving $Ax = b$ satisfies:
$\frac{\|\hat{x} - x\|}{\|x\|} \;\lesssim\; \kappa(A)\,\epsilon_{\text{machine}}$
For Cholesky specifically: the computed factor satisfies $\hat{L}\hat{L}^\top = A + \Delta A$ with $\|\Delta A\| = O(\epsilon_{\text{machine}}\,\|A\|)$. This means Cholesky is unconditionally backward stable for PD matrices. In contrast, LU without pivoting can have backward errors proportional to the element growth factor, which can be exponentially large.
Ill-conditioning warning. For large $\kappa(A)$:
- The log-det computation involves some very small $\ell_{kk}$ (near-zero for near-singular directions) - those log terms dominate and may lose digits
- Linear solves lose approximately $\log_{10}\kappa(A)$ digits of accuracy
- Cholesky itself remains stable; the error comes from the ill-conditioning of the problem, not the algorithm
Practical rule: If $\kappa(A) \approx 10^k$ and your floating-point arithmetic carries about 16 decimal digits (double precision), you lose roughly $k$ digits in the solution. For $\kappa(A) \gtrsim 10^{16}$, the solution has no reliable digits.
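A quick illustration of the digit-loss rule using the notoriously ill-conditioned Hilbert matrix (a sketch; scipy.linalg.hilbert supplies the matrix):

```python
import numpy as np
from scipy.linalg import hilbert, cho_factor, cho_solve

for n in (4, 8, 10):
    A = hilbert(n)                        # PD but extremely ill-conditioned
    x_true = np.ones(n)
    b = A @ x_true
    x = cho_solve(cho_factor(A), b)       # Cholesky-based solve
    kappa = np.linalg.cond(A)
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"n={n:2d}  kappa={kappa:.1e}  relative error={err:.1e}")
# The relative error grows roughly in step with kappa * machine epsilon.
```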
B.4 Sparse Cholesky
For sparse PD matrices (many zero entries, arising in graph-structured models, FEM discretizations, and spatial statistics), general Cholesky is wasteful because it may "fill in" zeros and produce a dense $L$.
The fill-in problem. Even if $A$ is sparse, $L$ may be dense. The fill-in pattern depends on the sparsity structure of $A$ and the ordering of variables.
Reordering algorithms. The approximate minimum degree (AMD) and nested dissection algorithms reorder the variables (permute $A \mapsto PAP^\top$ for a permutation matrix $P$) to minimize fill-in. The standard library for sparse Cholesky is SuiteSparse (CHOLMOD).
For AI: In Gaussian Markov random fields (GMRFs), the precision matrix (inverse covariance) is sparse. Sparse Cholesky enables efficient inference in spatial models, graph neural networks with Gaussian priors, and structured prediction. GPyTorch's KeOps and PyTorch Sparse backends exploit sparsity for large-scale GP approximations.
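Fill-in depends dramatically on ordering. Below is a minimal dense illustration with an "arrowhead" sparsity pattern (our own toy example): putting the dense hub row/column first fills in the entire factor, while putting it last produces no fill-in. Real sparse solvers such as CHOLMOD choose such orderings automatically.

```python
import numpy as np

n = 8
# Arrowhead PD matrix: strong diagonal plus one dense "hub" row/column
A = 4.0 * np.eye(n)
A[0, 1:] = A[1:, 0] = 1.0                  # hub ordered first

L_hub_first = np.linalg.cholesky(A)

perm = np.r_[1:n, 0]                       # reorder so the hub comes last
A_perm = A[np.ix_(perm, perm)]
L_hub_last = np.linalg.cholesky(A_perm)

def nnz(M):
    return int(np.count_nonzero(np.abs(M) > 1e-12))

print("nonzeros, hub first:", nnz(L_hub_first))  # dense lower triangle: full fill-in
print("nonzeros, hub last :", nnz(L_hub_last))   # diagonal + last row only: no fill-in
```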
Appendix C: Positive Definiteness in Complex Vector Spaces
C.1 Hermitian Positive Definite Matrices
Over $\mathbb{C}$, positive definiteness generalizes to Hermitian positive definite (HPD) matrices.
Definition. $A \in \mathbb{C}^{n \times n}$ is Hermitian positive definite if:
- $A = A^H$ (where $A^H$ is the conjugate transpose)
- $x^H A x > 0$ for all $x \in \mathbb{C}^n \setminus \{0\}$
(Note: $x^H A x$ is always real for Hermitian $A$, so the positivity condition makes sense.)
Cholesky for HPD. Every HPD matrix has a unique Cholesky factorization $A = LL^H$ where $L$ is lower triangular with positive real diagonal entries.
For AI: Complex-valued neural networks (used in signal processing, quantum computing simulation) use HPD covariance matrices. Radar signal processing, MRI reconstruction, and quantum chemistry all work in $\mathbb{C}^n$ with HPD matrices.
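np.linalg.cholesky also handles the complex Hermitian case; a minimal check on a randomly constructed HPD matrix (our own example):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B @ B.conj().T + 4 * np.eye(4)       # Hermitian positive definite by construction

L = np.linalg.cholesky(A)                # lower triangular, positive real diagonal
print(np.allclose(L @ L.conj().T, A))    # True: A = L L^H
print(np.allclose(np.diag(L).imag, 0))   # True: diagonal of L is real
```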
Appendix D: Advanced Theory
D.1 The Polar Decomposition and Positive Definite Factors
Every invertible matrix $A \in \mathbb{R}^{n \times n}$ has a unique polar decomposition:
$A = QP,$
where $Q$ is orthogonal ($Q^\top Q = I$) and $P \succ 0$ is the unique symmetric positive definite "right polar factor." This is the matrix analogue of the polar form of a complex number, $z = e^{i\theta} r$.
Existence and uniqueness. Using the SVD $A = U\Sigma V^\top$:
$A = (UV^\top)(V\Sigma V^\top) = QP,$
where $Q = UV^\top$ (orthogonal) and $P = V\Sigma V^\top$ (the PSD square root of $A^\top A$).
Uniqueness of $P$: $P^2 = V\Sigma^2 V^\top = A^\top A$, so $P = (A^\top A)^{1/2}$ is the unique PSD square root (Theorem 5.1). Given $P$ (invertible because $A$ is), $Q = AP^{-1}$ is uniquely determined.
For AI: The polar decomposition appears in:
- Weight matrix orthogonalization: in the Muon optimizer (Kosson et al., 2024), gradient updates are "orthogonalized" using the orthogonal polar factor $Q$, which projects the weight update onto the manifold of orthogonal matrices, maintaining a stable singular value spectrum during training
- Group equivariance: equivariant neural networks (E(n)-equivariant GNNs) use orthogonal matrices as symmetry group elements; the polar decomposition provides a differentiable projection onto the orthogonal group
Computing the polar factor. Newton iteration: $Q_{k+1} = \tfrac12\left(Q_k + Q_k^{-\top}\right)$ converges to $Q$ starting from $Q_0 = A$. Quadratic convergence once close to the limit. Each step costs one matrix inversion.
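A sketch of this Newton iteration (function name ours), checked against the SVD construction:

```python
import numpy as np

def polar_newton(A, iters=30, tol=1e-12):
    """Newton iteration Q_{k+1} = (Q_k + Q_k^{-T}) / 2 for the orthogonal polar factor."""
    Q = A.copy()
    for _ in range(iters):
        Q_next = 0.5 * (Q + np.linalg.inv(Q).T)
        if np.linalg.norm(Q_next - Q) < tol:
            Q = Q_next
            break
        Q = Q_next
    P = Q.T @ A                       # then A = Q P with P symmetric PD
    return Q, 0.5 * (P + P.T)         # symmetrize against round-off

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))       # invertible with probability 1

Q, P = polar_newton(A)
U, s, Vt = np.linalg.svd(A)
print(np.allclose(Q, U @ Vt))                                    # matches the SVD polar factor
print(np.allclose(Q @ P, A), np.all(np.linalg.eigvalsh(P) > 0))  # A = Q P with P PD
```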
D.2 Positive Definite Functions and Bochner's Theorem
A function $k(x, x')$ is a positive definite function if for every $n$ and every set of points $x_1, \dots, x_n$, the matrix $K_{ij} = k(x_i, x_j)$ is PSD.
Bochner's theorem. A continuous stationary kernel $k(x, x') = \kappa(x - x')$ is a positive definite function on $\mathbb{R}^d$ if and only if it is the Fourier transform of a non-negative measure (spectral density):
$\kappa(\delta) = \int_{\mathbb{R}^d} e^{i\,\omega^\top \delta}\, d\mu(\omega), \qquad \mu \ge 0.$
Implications for kernel design:
- RBF kernel: $\kappa(\delta) = \exp(-\|\delta\|^2 / 2\ell^2)$ is PD (its Fourier transform is a Gaussian, which is non-negative)
- Matern kernels: PD for all valid smoothness parameters
- Checking validity: A stationary kernel is valid iff its Fourier transform is non-negative - this is the ultimate test
Random Fourier features (Rahimi & Recht, 2007). Bochner's theorem gives a randomized approximation to stationary PD kernels:
$k(x, x') \approx \varphi(x)^\top \varphi(x'), \qquad \varphi(x) = \sqrt{\tfrac{2}{D}}\,\big[\cos(\omega_i^\top x + b_i)\big]_{i=1}^{D},$
where $\omega_i \sim \mu$ (the spectral distribution) and $b_i \sim \mathrm{Uniform}[0, 2\pi]$. This approximates the PD kernel as a finite inner product, enabling scalable kernel methods without forming the full kernel matrix. Used in Performer transformer attention (FAVOR+, Choromanski et al., 2020).
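A minimal random-Fourier-features sketch for the RBF kernel with unit length-scale (for which the spectral distribution is a standard Gaussian), compared against the exact kernel matrix; sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 50, 3, 2000                       # n points in R^d, D random features
X = rng.standard_normal((n, d))

# Exact RBF kernel matrix, length-scale 1
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)

# Bochner: spectral density of this RBF kernel is N(0, I), so draw omega ~ N(0, I)
omega = rng.standard_normal((D, d))
b = rng.uniform(0, 2 * np.pi, size=D)
Phi = np.sqrt(2.0 / D) * np.cos(X @ omega.T + b)   # n x D feature map

K_rff = Phi @ Phi.T                          # low-rank PSD approximation of K_exact
print(np.abs(K_rff - K_exact).max())         # small; shrinks like O(1/sqrt(D))
```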
D.3 The Determinant and Volume
Geometric interpretation of $\det A$ for $A \succ 0$.
For $A \succ 0$, the quadratic form defines the ellipsoid $E = \{x : x^\top A x \le 1\}$ in $\mathbb{R}^n$. Its volume is:
$\mathrm{vol}(E) = \frac{\omega_n}{\sqrt{\det A}},$
where $\omega_n$ is the volume of the unit ball in $\mathbb{R}^n$. So $\det A$ controls the "volume" of the ellipsoid associated with $A$; for a covariance matrix $\Sigma$, larger $\det\Sigma$ means a more "spread out" distribution.
In the Gaussian context: The normalizing constant of $\mathcal{N}(\mu, \Sigma)$ is $\big((2\pi)^n \det\Sigma\big)^{-1/2}$; its reciprocal is the volume of the "effective support" ellipsoid scaled by $(2\pi)^{n/2}$.
D-optimal experimental design. In Bayesian experimental design, the D-optimal criterion selects experiments to maximize the determinant of the posterior precision matrix, which equals $1/\det(\text{posterior covariance})$. Maximizing it minimizes the volume of the posterior confidence ellipsoid. This is an SDP with the information matrix as the optimization variable.
Maximum entropy Gaussians. Among all distributions with mean $\mu$ and covariance $\Sigma$, the maximum entropy distribution is $\mathcal{N}(\mu, \Sigma)$, with entropy $\tfrac12\log\det(2\pi e\,\Sigma)$. Maximizing entropy subject to a trace constraint on $\Sigma$ gives $\Sigma \propto I$ (isotropic Gaussian) - the most "uninformed" Gaussian with bounded total variance.
D.4 Matrix Exponential and the PD Manifold
The set of PD matrices is an open smooth manifold (in fact, a Riemannian manifold with the affine-invariant metric). This means concepts from differential geometry apply.
The matrix exponential. For any symmetric $S = Q\Lambda Q^\top$, $\exp(S) = Q\,\mathrm{diag}(e^{\lambda_i})\,Q^\top \succ 0$ (positive definite, since exponentials of real numbers are positive). The map $S \mapsto \exp(S)$ is a bijection from symmetric matrices to PD matrices - PD matrices are exactly exponentials of symmetric matrices.
Log-Euclidean metric. A simple Riemannian metric on the PD cone (used in diffusion tensor imaging) defines the "distance" between $A, B \succ 0$ as:
$d_{\mathrm{LE}}(A, B) = \|\log A - \log B\|_F.$
This treats PD matrices as if they lived in a flat space via the log map. Computations (means, geodesics, interpolations) become Euclidean operations on $\log A$ and $\log B$.
Affine-invariant metric (Riemannian). The more geometrically natural metric:
$d_{\mathrm{AI}}(A, B) = \|\log(A^{-1/2} B A^{-1/2})\|_F$
is invariant under congruence $A \mapsto P^\top A P$, $B \mapsto P^\top B P$ (affine-invariant), making it suitable for geometric statistics. A code sketch of both distances follows the list below.
For AI: The SPD (symmetric positive definite) manifold appears in:
- Diffusion tensor MRI: each voxel has a diffusion tensor ; Riemannian means avoid non-PD artifacts
- Covariance-based classifiers: classifying neural activation covariance matrices using Riemannian geometry (SPDNet, 2017)
- Transformer positional encodings with structured covariances: attention with SPD constraints on the key-query metric tensors
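Both distances reduce to symmetric eigendecompositions; a small sketch (helper names are ours):

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.log(w)) @ Q.T

def spd_inv_sqrt(A):
    """Inverse PSD square root A^{-1/2}."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(w ** -0.5) @ Q.T

def dist_log_euclidean(A, B):
    return np.linalg.norm(spd_log(A) - spd_log(B), "fro")

def dist_affine_invariant(A, B):
    C = spd_inv_sqrt(A)
    return np.linalg.norm(spd_log(C @ B @ C), "fro")

def random_spd(n, rng):
    B = rng.standard_normal((n, n))
    return B @ B.T + np.eye(n)

rng = np.random.default_rng(0)
M1, M2 = random_spd(3, rng), random_spd(3, rng)
print(dist_log_euclidean(M1, M2), dist_affine_invariant(M1, M2))
```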
D.5 Inequalities for Positive Definite Matrices
Several fundamental inequalities apply specifically to PD matrices:
Hadamard's inequality. For $A \succ 0$:
$\det A \le \prod_{i=1}^n a_{ii}.$
Equality iff $A$ is diagonal. Interpretation: the volume of the ellipsoid defined by $A$ is at most the product of the diagonal entries (the volumes along coordinate axes).
Minkowski's inequality. For $A, B \succ 0$ ($n \times n$):
$\det(A + B)^{1/n} \ge \det(A)^{1/n} + \det(B)^{1/n}.$
This is the matrix analogue of the scalar Minkowski inequality. It follows from the concavity of $A \mapsto (\det A)^{1/n}$ on the PD cone.
Fan-Ky inequality. For and :
(The sum of the largest log-eigenvalues of is at least the sum from plus the sum from .)
Anderson's inequality. For with :
Interpretation: combining two experiments (adding their information matrices) gives more information than either alone, but the combined precision matrix is "sandwiched" by the sum of individual inverse covariances.
For AI: Hadamard's inequality provides a cheap upper bound on the log-det: $\log\det A \le \sum_i \log a_{ii}$, useful when only the diagonal entries are cheaply computed. Minkowski's inequality is used in information-geometric proofs about combining Gaussian priors in Bayesian learning.
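A one-line numerical check of Hadamard's bound on a random PD matrix (sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + np.eye(6)                        # random PD matrix

L = np.linalg.cholesky(A)
log_det = 2 * np.sum(np.log(np.diag(L)))       # exact log-det via Cholesky
hadamard_bound = np.sum(np.log(np.diag(A)))    # sum of log diagonal entries

print(log_det <= hadamard_bound + 1e-12)       # True (Hadamard's inequality)
```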
Appendix E: Extended Examples and Case Studies
E.1 Case Study: Covariance Estimation in High Dimensions
One of the most practically important applications of PSD matrix theory is estimating covariance matrices from data when the number of features $d$ is comparable to or larger than the number of samples $n$.
The problem. Given samples $x_1, \dots, x_n \in \mathbb{R}^d$ (zero mean for simplicity), the sample covariance is:
$\hat\Sigma = \frac{1}{n}\sum_{i=1}^n x_i x_i^\top.$
$\hat\Sigma \succeq 0$ always (it's a Gram matrix). But:
- If $d > n$: $\mathrm{rank}(\hat\Sigma) \le n < d$, so $\hat\Sigma$ is singular (not PD)
- Even for $d < n$: $\hat\Sigma$ can be ill-conditioned (extreme ratio of largest to smallest eigenvalue)
The Marchenko-Pastur law. For data with identity population covariance, $d, n \to \infty$ and $d/n \to \gamma \in (0, 1]$, the empirical eigenvalue distribution of $\hat\Sigma$ converges to the Marchenko-Pastur distribution:
$\rho(\lambda) = \frac{\sqrt{(\lambda_+ - \lambda)(\lambda - \lambda_-)}}{2\pi\gamma\lambda}$
for $\lambda \in [\lambda_-, \lambda_+]$, where $\lambda_\pm = (1 \pm \sqrt{\gamma})^2$. This means even under the identity covariance, sample eigenvalues are spread over a wide range - the smallest eigenvalues of $\hat\Sigma$ are much smaller than 1 and the largest are much larger than 1.
Ledoit-Wolf shrinkage. The Ledoit-Wolf estimator regularizes the sample covariance toward a multiple of identity:
$\hat\Sigma_{\mathrm{LW}} = (1 - \alpha)\,\hat\Sigma + \alpha\,\mu I,$
where $\mu = \mathrm{tr}(\hat\Sigma)/d$ (the average eigenvalue) and $\alpha \in (0, 1]$ is chosen to minimize mean squared error under the Frobenius norm. The result is always PD (since $\hat\Sigma \succeq 0$ and $\alpha\mu > 0$).
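In practice the shrinkage intensity is usually estimated by a library; a sketch with scikit-learn (assuming it is installed) in the $d > n$ regime:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, d = 50, 80                                   # fewer samples than features
X = rng.standard_normal((n, d))                 # true covariance is the identity

S = X.T @ X / n                                 # sample covariance: singular (rank <= n)
lw = LedoitWolf().fit(X)

print(np.linalg.matrix_rank(S), "of", d)                 # rank deficit of the sample covariance
print(np.linalg.eigvalsh(lw.covariance_).min() > 0)      # shrunk estimate is PD
print(lw.shrinkage_)                                     # data-driven shrinkage intensity alpha
```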
Graphical lasso. Assuming the precision matrix $\Theta = \Sigma^{-1}$ (inverse covariance) is sparse (many zeros encode conditional independence), the graphical lasso estimates:
$\hat\Theta = \arg\min_{\Theta \succ 0}\;\Big\{-\log\det\Theta + \mathrm{tr}(\hat\Sigma\,\Theta) + \rho\,\|\Theta\|_1\Big\},$
where $\|\Theta\|_1$ is the element-wise $\ell_1$ norm. The constraint $\Theta \succ 0$ is the PD cone constraint. The first two terms are the negative Gaussian log-likelihood (up to constants). Optimized by block coordinate descent (glasso algorithm) using the Schur complement for each block update.
For LLMs: Large language models trained with structured sparse attention patterns (local + global attention, as in Longformer and BigBird) can be understood as learning sparse precision matrices of the attention distributions.
E.2 Case Study: Gaussian Process Regression, Step by Step
Let's trace through a complete GP regression computation to see all the PD machinery in action.
Setup. Training data: inputs $x_1, \dots, x_n$ with noisy targets $y \in \mathbb{R}^n$. Test point: $x_*$. Kernel: $k(x, x') = \exp\!\big(-\tfrac12 (x - x')^2\big)$ (RBF with length-scale 1). Noise: $\sigma_n^2$.
Step 1: Compute the kernel matrix.
$K_{ij} = k(x_i, x_j).$
Step 2: Add noise and Cholesky factor.
Compute $L = \mathrm{chol}(K + \sigma_n^2 I)$. (Details in theory.ipynb.)
Step 3: Predictive mean.
$\mu_* = k_*^\top (K + \sigma_n^2 I)^{-1} y$, computed via two triangular solves with $L$, where $(k_*)_i = k(x_i, x_*)$.
Step 4: Predictive variance.
$\sigma_*^2 = k(x_*, x_*) - k_*^\top (K + \sigma_n^2 I)^{-1} k_*.$
The variance is the Schur complement of $K + \sigma_n^2 I$ in the joint covariance of the training targets and the test function value, evaluated at $x_*$.
Interpretation. The Cholesky factorization appears in every step: solving for $\mu_*$, computing the predictive variance, and computing the log-marginal likelihood for hyperparameter optimization. The PD condition on $K + \sigma_n^2 I$ guarantees $\sigma_*^2 \ge 0$ (the Schur complement is non-negative).
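The full computation as code, on an assumed toy data set (the numbers are illustrative, not the ones used in the lesson):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Toy 1-D training data (illustrative) and a test point
x_train = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_train = np.sin(x_train)
x_star = 0.5
sigma_n2 = 0.1                                    # noise variance

def k(a, b):
    return np.exp(-0.5 * (a - b) ** 2)            # RBF kernel, length-scale 1

K = k(x_train[:, None], x_train[None, :])         # Step 1: kernel matrix
L = cholesky(K + sigma_n2 * np.eye(len(x_train)), lower=True)   # Step 2

# Step 3: predictive mean  mu_* = k_*^T (K + sigma_n^2 I)^{-1} y
k_star = k(x_train, x_star)
alpha = solve_triangular(L.T, solve_triangular(L, y_train, lower=True), lower=False)
mu_star = k_star @ alpha

# Step 4: predictive variance  sigma_*^2 = k(x_*, x_*) - k_*^T (K + sigma_n^2 I)^{-1} k_*
v = solve_triangular(L, k_star, lower=True)
var_star = k(x_star, x_star) - v @ v

print(mu_star, var_star)                          # var_star >= 0 by the Schur argument
```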
E.3 Case Study: The Multivariate Normal in Deep Learning
Parameterizing structured covariances. In deep generative models (VAEs, normalizing flows, diffusion models), we frequently need to parameterize a multivariate Gaussian $\mathcal{N}(\mu, \Sigma)$ with $\Sigma$ learnable. The challenge: $\Sigma$ must be PD, and the parameterization must be differentiable.
Approach 1: Diagonal. $\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_d^2)$ with $\sigma_i^2 > 0$. Free parameters: $d$. Correlation structure: none. Used in the standard VAE (Kingma & Welling, 2013).
Approach 2: Lower triangular Cholesky. $\Sigma = LL^\top$ where $L$ is lower triangular:
- Diagonal: $\ell_{ii} = \exp(\tilde\ell_{ii}) > 0$ ensures positivity
- Off-diagonal: $\ell_{ij} = \tilde\ell_{ij}$ for unconstrained $\tilde\ell_{ij}$ (any real number). Free parameters: $d(d+1)/2$. Full correlation structure. Used in full-covariance VAEs, normalizing flows.
Approach 3: Low-rank + diagonal. $\Sigma = FF^\top + D$ where $F \in \mathbb{R}^{d \times k}$ ($k \ll d$) and $D$ diagonal with positive entries. Free parameters: $dk + d$. Captures "directions of correlation." Used in structured variational Bayes, mean-field approximations.
The KL divergence between Gaussians. For $\mathcal{N}(\mu_1, \Sigma_1)$ and $\mathcal{N}(\mu_2, \Sigma_2)$:
$\mathrm{KL} = \tfrac12\Big(\mathrm{tr}(\Sigma_2^{-1}\Sigma_1) + (\mu_2 - \mu_1)^\top \Sigma_2^{-1} (\mu_2 - \mu_1) - d + \log\tfrac{\det\Sigma_2}{\det\Sigma_1}\Big).$
For the standard VAE prior $\mathcal{N}(0, I)$:
$\mathrm{KL}\big(\mathcal{N}(\mu, \Sigma)\,\|\,\mathcal{N}(0, I)\big) = \tfrac12\big(\mathrm{tr}\,\Sigma + \|\mu\|^2 - d - \log\det\Sigma\big).$
For diagonal $\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_d^2)$:
$\mathrm{KL} = \tfrac12\sum_{i=1}^d\big(\sigma_i^2 + \mu_i^2 - 1 - \log\sigma_i^2\big).$
For Cholesky $\Sigma = LL^\top$: $\mathrm{tr}\,\Sigma = \|L\|_F^2$ and $\log\det\Sigma = 2\sum_i \log\ell_{ii}$.
All these computations - $\mathrm{tr}\,\Sigma$, $\log\det\Sigma$, $\Sigma^{-1}$ - exploit the Cholesky and log-det tools developed in 4 and 7.
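The standard-prior KL term computed directly from the Cholesky parameterization (a sketch; the diagonal case is the usual VAE special case and serves as a cross-check):

```python
import numpy as np

def kl_to_standard_normal(mu, L):
    """KL( N(mu, L L^T) || N(0, I) ) using only mu and the Cholesky factor L."""
    d = mu.shape[0]
    trace_sigma = np.sum(L ** 2)                   # tr(Sigma) = ||L||_F^2
    log_det_sigma = 2 * np.sum(np.log(np.diag(L)))
    return 0.5 * (trace_sigma + mu @ mu - d - log_det_sigma)

# Diagonal special case: Sigma = diag(sigma^2), L = diag(sigma)
mu = np.array([0.5, -1.0])
sigma = np.array([0.8, 1.5])
kl_chol = kl_to_standard_normal(mu, np.diag(sigma))
kl_diag = 0.5 * np.sum(sigma**2 + mu**2 - 1 - np.log(sigma**2))
print(np.isclose(kl_chol, kl_diag))                # True
```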
E.4 The Normal Equations and Ridge Regression
Least squares. Given $X \in \mathbb{R}^{m \times n}$ (data) and $y \in \mathbb{R}^m$ (targets), ordinary least squares (OLS) minimizes:
$\|Xw - y\|^2.$
The normal equations: $X^\top X\, w = X^\top y$. The matrix $X^\top X$ is always PSD (Gram matrix). If $X$ has full column rank ($m \ge n$, rank $n$), then $X^\top X \succ 0$ and the unique solution is $w = (X^\top X)^{-1} X^\top y$.
Ridge regression. Add regularization: minimize $\|Xw - y\|^2 + \lambda\|w\|^2$.
Normal equations: $(X^\top X + \lambda I)\, w = X^\top y$.
The matrix $X^\top X + \lambda I \succ 0$ for any $\lambda > 0$ (by Proposition 2.3.7). This is why ridge regression is always numerically stable: the PD matrix has all positive pivots, and Cholesky never fails.
Condition number improvement. $\kappa(X^\top X + \lambda I) = \dfrac{\sigma_1^2 + \lambda}{\sigma_n^2 + \lambda}$, where $\sigma_1 \ge \dots \ge \sigma_n$ are the singular values of $X$. For small $\lambda$, $\kappa \approx \sigma_1^2/\sigma_n^2 = \kappa(X)^2$. As $\lambda \to \infty$, $\kappa \to 1$ (well-conditioned). Choosing $\lambda$ balances bias (regularization) vs numerical stability.
Kernel ridge regression. For nonlinear regression using a PD kernel $k$, form the kernel matrix $K_{ij} = k(x_i, x_j)$ and solve $(K + \lambda I)\alpha = y$. Predictions: $\hat f(x_*) = k_*^\top \alpha$, where $(k_*)_i = k(x_i, x_*)$. This is equivalent to GP regression with the kernel $k$ and noise variance $\lambda$. The PD condition on $K + \lambda I$ is exactly the Cholesky solvability condition.
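Both solves in code, on synthetic data (a minimal sketch; for the linear kernel the primal and kernel solutions coincide by the push-through identity):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
m, n, lam = 100, 10, 1e-2
X = rng.standard_normal((m, n))
y = X @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

# Ridge: (X^T X + lam I) w = X^T y  -- always PD, so Cholesky never fails
w = cho_solve(cho_factor(X.T @ X + lam * np.eye(n)), X.T @ y)

# Kernel ridge with the linear kernel K = X X^T: (K + lam I) alpha = y
K = X @ X.T
alpha = cho_solve(cho_factor(K + lam * np.eye(m)), y)
w_kernel = X.T @ alpha                           # recover the primal weights

print(np.allclose(w, w_kernel, atol=1e-6))       # the two views agree for the linear kernel
```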
Appendix F: Quick Reference
F.1 Four Equivalent Characterizations of $A \succ 0$
POSITIVE DEFINITENESS: FOUR EQUIVALENT TESTS
========================================================================
Let A \in \mathbb{R}^{n \times n} be symmetric. The following are equivalent:
DEFINITION   \forall x \neq 0 :  x^T A x > 0                  (quadratic form positive)
SPECTRAL     \forall i :  \lambda_i(A) > 0                    (all eigenvalues positive)
SYLVESTER    \forall k :  \Delta_k = det(A[1:k,1:k]) > 0      (leading minors positive)
CHOLESKY \exists! lower triangular L with positive diagonal: A = LL^T
COST:
Definition O(n^2) per test vector <- manual/symbolic
Spectral O(n^3) eigendecomposition <- most informative
Sylvester O(n^4) for all n determinants <- manual/symbolic
Cholesky O(n^3/3) factorization <- fastest in practice
========================================================================
F.2 Key Formulas
| Formula | Notes |
|---|---|
| $A = LL^\top$ | Cholesky factorization ($A$ PD <-> exists with positive diagonal) |
| $A = LDL^\top$ | LDL^T: unit lower triangular + diagonal |
| $A = QP$ | Polar: orthogonal $\times$ PD |
| $\log\det A = 2\sum_i \log\ell_{ii}$ | Log-det via Cholesky |
| $\nabla_A \log\det A = A^{-1}$ | Matrix calculus: gradient of log-det |
| $M/A = C - B^\top A^{-1} B$ | Schur complement of $A$ in the block matrix $M = \begin{pmatrix} A & B \\ B^\top & C \end{pmatrix}$ |
| $M \succ 0 \iff A \succ 0$ and $M/A \succ 0$ | Schur PD criterion |
| $(A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}$ | Woodbury identity |
| $z = \mu + L\epsilon$, $\epsilon \sim \mathcal{N}(0, I)$ | Reparameterization trick |
| $\tfrac12\big(\mathrm{tr}\,\Sigma + \|\mu\|^2 - d - \log\det\Sigma\big)$ | VAE KL term $\mathrm{KL}(\mathcal{N}(\mu,\Sigma)\,\|\,\mathcal{N}(0,I))$ |
F.3 Notation Summary
Following the NOTATION_GUIDE.md conventions:
| Symbol | Meaning |
|---|---|
| $A \succ 0$ | $A$ is positive definite |
| $A \succeq 0$ | $A$ is positive semidefinite |
| $A \prec 0$ | $A$ is negative definite |
| $\mathbb{S}^n$ | Space of symmetric $n \times n$ matrices |
| $\mathbb{S}^n_{++}$ | Cone of PD matrices (interior) |
| $\mathbb{S}^n_{+}$ | Cone of PSD matrices |
| $L$ | Cholesky factor (lower triangular, positive diagonal) |
| $A^{1/2}$ | Symmetric PSD square root |
| $A^{-1/2}$ | Inverse PSD square root |
| $\Sigma$ | Covariance matrix ($\Sigma \succeq 0$) |
| $\Lambda$ | Precision matrix ($\Lambda = \Sigma^{-1}$, $\Lambda \succ 0$) |
| $F$ | Fisher information matrix ($F \succeq 0$) |
| $M/A$ | Schur complement |
| $G$ | Gram matrix ($G_{ij} = \langle x_i, x_j \rangle$, $G \succeq 0$) |
| $K$ | Kernel matrix ($K_{ij} = k(x_i, x_j)$) |
F.4 Python Cheatsheet
import numpy as np
import scipy.linalg as la
# === Cholesky factorization ===
L = np.linalg.cholesky(A) # A = L @ L.T
# Or via scipy (can also compute upper factor):
L = la.cholesky(A, lower=True)
# === LDL^T factorization ===
L_ldl, D_ldl, _ = la.ldl(A) # A = L @ D @ L.T
# === Log-determinant (numerically stable) ===
L = np.linalg.cholesky(A)
log_det = 2 * np.sum(np.log(np.diag(L)))
# === Solve A x = b via Cholesky ===
L = np.linalg.cholesky(A)
x = la.cho_solve((L, True), b) # uses LAPACK dpotrs
# === PSD square root ===
eigvals, Q = np.linalg.eigh(A) # symmetric eigendecomp
A_sqrt = Q @ np.diag(np.sqrt(eigvals)) @ Q.T
# === Woodbury: (A + U C V)^{-1} ===
# via np.linalg.solve on the smaller system
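# Sketch (assumes U: n x k, C: k x k, V: k x n, and b are defined; shapes are illustrative):
# (A + U C V)^{-1} b = A^{-1} b - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1} b
Ainv_b = np.linalg.solve(A, b)
Ainv_U = np.linalg.solve(A, U)
S_small = np.linalg.inv(C) + V @ Ainv_U          # only a k x k system
x_wood = Ainv_b - Ainv_U @ np.linalg.solve(S_small, V @ Ainv_b)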
# === Reparameterization trick ===
L = np.linalg.cholesky(Sigma) # Sigma = L @ L.T
eps = np.random.randn(n_samples, d)
z = mu + (L @ eps.T).T # z ~ N(mu, Sigma)
# === Gram matrix ===
G = X @ X.T # G = X X^T, PSD
# === Check PD ===
try:
np.linalg.cholesky(A)
is_pd = True
except np.linalg.LinAlgError:
is_pd = False
# === Check PSD ===
eigvals = np.linalg.eigvalsh(A)
is_psd = np.all(eigvals >= -1e-10)