Second-Order Methods, Part 7: Applications in Machine Learning
7. Applications in Machine Learning
This block develops applications in machine learning for Second-Order Methods. It keeps the scope local to this section while pointing forward when a neighboring topic owns the full treatment.
7.1 K-FAC and natural-gradient style preconditioners for neural networks
In this section, Hessian matrix is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "K-FAC and natural-gradient style preconditioners for neural networks" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, Hessian matrix is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the iterate $x_t$, the gradient $\nabla f(x_t)$, the Hessian $\nabla^2 f(x_t)$, the step size $\eta$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where Hessian matrix can be computed directly and compared with theory.
- A logistic-regression or softmax objective where Hessian matrix affects optimization but the model remains interpretable.
- A transformer training diagnostic where Hessian matrix appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating Hessian matrix as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$f(x_t + p) \approx f(x_t) + \nabla f(x_t)^\top p + \tfrac{1}{2}\, p^\top \nabla^2 f(x_t)\, p$
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the Hessian matrix, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the Hessian matrix visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
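As a concrete anchor for the curvature object in this subsection, here is a minimal sketch (not part of the lesson's own code) that builds the closed-form Hessian of an L2-regularized logistic regression and checks it against finite differences of the gradient; the problem sizes, random data, and regularizer `lam` are illustrative assumptions.

```python
import numpy as np

# Sketch: the Hessian of L2-regularized logistic regression in closed form,
# checked against a finite-difference approximation built from the gradient.
# Problem sizes, the random data, and lam are illustrative assumptions.
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-2
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n).astype(float)

def grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / n + lam * w

def hessian(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    D = p * (1.0 - p)                     # per-example curvature weights
    return (X.T * D) @ X / n + lam * np.eye(d)

w = rng.normal(size=d)
H = hessian(w)

# Column j of H should match (grad(w + eps*e_j) - grad(w)) / eps.
eps = 1e-6
H_fd = np.stack([(grad(w + eps * e) - grad(w)) / eps for e in np.eye(d)], axis=1)
err = np.abs(H - H_fd).max()
print("max |H - H_fd| =", err)
```

A check like this is the cheapest way to catch a wrong curvature term before it silently corrupts a preconditioner.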
Diagnostic questions:
- Which assumption about Hessian matrix is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
7.2 L-BFGS for small-batch fine-tuning and classical ML objectives
In this section, quadratic model is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "L-BFGS for small-batch fine-tuning and classical ML objectives" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, quadratic model is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the iterate $x_t$, the gradient $\nabla f(x_t)$, the curvature estimate $B_t$, the model $m_t(p)$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where quadratic model can be computed directly and compared with theory.
- A logistic-regression or softmax objective where quadratic model affects optimization but the model remains interpretable.
- A transformer training diagnostic where quadratic model appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating quadratic model as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$m_t(p) = f(x_t) + \nabla f(x_t)^\top p + \tfrac{1}{2}\, p^\top B_t p$, with $B_t$ built by L-BFGS from the recent pairs $s_i = x_{i+1} - x_i$ and $y_i = \nabla f(x_{i+1}) - \nabla f(x_i)$
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the quadratic model, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the quadratic model visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
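To make the classical-ML use case concrete, here is a hedged sketch of L-BFGS on a small regularized logistic-regression objective via SciPy; the synthetic data and the regularizer `lam` are assumptions chosen only so the run is quick and deterministic.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: L-BFGS on a small regularized logistic-regression objective.
# The synthetic data and the regularizer lam are illustrative assumptions.
rng = np.random.default_rng(1)
n, d, lam = 300, 4, 1e-2
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)

def loss_and_grad(w):
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))
    # log(1 + exp(z)) - y*z written stably via logaddexp, plus L2 penalty
    loss = np.mean(np.logaddexp(0.0, z) - y * z) + 0.5 * lam * w @ w
    grad = X.T @ (p - y) / n + lam * w
    return loss, grad

# jac=True tells scipy that loss_and_grad returns (value, gradient).
res = minimize(loss_and_grad, np.zeros(d), jac=True, method="L-BFGS-B")
print(res.success, np.abs(res.jac).max())
```

Note the division of labor: you supply value and gradient, and L-BFGS builds its own low-rank curvature model from iterate history.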
Diagnostic questions:
- Which assumption about quadratic model is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
7.3 Hessian-vector products for sharpness and interpretability diagnostics
In this section, Newton step is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "Hessian-vector products for sharpness and interpretability diagnostics" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, Newton step is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the iterate $x_t$, the gradient $\nabla f(x_t)$, the Hessian-vector product $\nabla^2 f(x_t)\, v$, the step $p_t$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where Newton step can be computed directly and compared with theory.
- A logistic-regression or softmax objective where Newton step affects optimization but the model remains interpretable.
- A transformer training diagnostic where Newton step appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating Newton step as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$p_t = -[\nabla^2 f(x_t)]^{-1} \nabla f(x_t)$, where the product $\nabla^2 f(x_t)\, v$ can be computed without ever forming the Hessian
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the Newton step, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the Newton step visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
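Here is a minimal sketch of the diagnostic this subsection names: estimate sharpness (the top Hessian eigenvalue) by power iteration on Hessian-vector products, never forming the matrix. The quadratic test objective with a known spectrum is an assumption chosen so the estimate can be checked exactly.

```python
import numpy as np

# Sketch: Hessian-vector products without forming the Hessian, used to
# estimate sharpness (top Hessian eigenvalue) by power iteration.
rng = np.random.default_rng(2)
d = 8
Q = np.linalg.qr(rng.normal(size=(d, d)))[0]
A = Q @ np.diag(np.linspace(1.0, 10.0, d)) @ Q.T   # SPD, top eigenvalue 10

def grad(x):
    return A @ x                          # f(x) = 0.5 x^T A x

def hvp(x, v, eps=1e-5):
    # central difference of the gradient approximates (Hessian @ v)
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

x = rng.normal(size=d)
v = rng.normal(size=d)
v /= np.linalg.norm(v)
for _ in range(200):                      # power iteration on v -> Hv
    v = hvp(x, v)
    v /= np.linalg.norm(v)
sharpness = v @ hvp(x, v)                 # Rayleigh quotient at the fixed point
print("top eigenvalue estimate:", sharpness)
```

In a deep-learning framework the finite-difference `hvp` would typically be replaced by an exact double-backward product, but the power-iteration loop is unchanged.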
Diagnostic questions:
- Which assumption about Newton step is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
7.4 Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods
In this section, Newton decrement is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, Newton decrement is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the iterate $x_t$, the gradient $\nabla f(x_t)$, the Hessian $\nabla^2 f(x_t)$, the decrement $\lambda(x_t)$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where Newton decrement can be computed directly and compared with theory.
- A logistic-regression or softmax objective where Newton decrement affects optimization but the model remains interpretable.
- A transformer training diagnostic where Newton decrement appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating Newton decrement as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$\lambda(x) = \left( \nabla f(x)^\top [\nabla^2 f(x)]^{-1} \nabla f(x) \right)^{1/2}$, and $\lambda(x)^2 / 2$ estimates the remaining suboptimality
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the Newton decrement, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the Newton decrement visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
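To ground the structured-preconditioning idea, here is a hedged Shampoo-style sketch for one matrix-shaped gradient: accumulate left and right second-moment factors and apply their inverse fourth roots on each side. The shapes, the stream of toy gradients, and `eps` are illustrative assumptions, not a full optimizer.

```python
import numpy as np

# Sketch of a Shampoo-style structured preconditioner for a matrix gradient G:
# accumulate left/right factor statistics, precondition with inverse 4th roots.
def inv_fourth_root(M, eps=1e-8):
    vals, vecs = np.linalg.eigh(M)        # M is symmetric PSD by construction
    return vecs @ np.diag((vals + eps) ** -0.25) @ vecs.T

rng = np.random.default_rng(3)
m, n = 6, 4
L = np.zeros((m, m))
R = np.zeros((n, n))
for _ in range(50):                       # simulate a stream of gradients
    G = rng.normal(size=(m, n))
    L += G @ G.T                          # left Kronecker-factor statistics
    R += G.T @ G                          # right Kronecker-factor statistics

update = inv_fourth_root(L) @ G @ inv_fourth_root(R)
print(update.shape, np.linalg.norm(update))
```

The two small factor matrices stand in for a full $mn \times mn$ preconditioner, which is the whole point of the structured family.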
Diagnostic questions:
- Which assumption about Newton decrement is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
7.5 Diagnostic checklist for real experiments
In this section, damped Newton is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "Diagnostic checklist for real experiments" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, damped Newton is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the iterate $x_t$, the gradient $\nabla f(x_t)$, the Hessian $\nabla^2 f(x_t)$, the damping factor $\eta_t$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where damped Newton can be computed directly and compared with theory.
- A logistic-regression or softmax objective where damped Newton affects optimization but the model remains interpretable.
- A transformer training diagnostic where damped Newton appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating damped Newton as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$x_{t+1} = x_t - \eta_t\, [\nabla^2 f(x_t)]^{-1} \nabla f(x_t)$, with $\eta_t \in (0, 1]$ chosen by line search
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the damped Newton update, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the damped Newton update visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
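The checklist is easiest to internalize on a runnable example, so here is a hedged sketch of damped Newton with Armijo backtracking on a classic smooth convex test function $f(x) = \sum_i \exp(a_i^\top x + b_i)$; the data $(A, b)$, the Armijo constants, and the stopping tolerance are illustrative assumptions.

```python
import numpy as np

# Sketch: damped Newton with Armijo backtracking; eta in (0, 1] is the damping.
A = np.array([[1.0, 3.0], [1.0, -3.0], [-1.0, 0.0]])
b = -0.1 * np.ones(3)

def f(x):    return np.exp(A @ x + b).sum()
def grad(x): return A.T @ np.exp(A @ x + b)
def hess(x): return A.T @ (np.exp(A @ x + b)[:, None] * A)

x = np.array([1.0, 1.0])
alpha, beta = 0.25, 0.5                   # Armijo parameters (assumptions)
for _ in range(50):
    g, H = grad(x), hess(x)
    p = -np.linalg.solve(H, g)            # undamped Newton direction
    lam2 = -g @ p                         # squared Newton decrement
    if lam2 / 2 < 1e-12:                  # decrement-based stopping rule
        break
    eta = 1.0                             # damping via backtracking
    while f(x + eta * p) > f(x) + alpha * eta * (g @ p):
        eta *= beta
    x = x + eta * p
print(x, f(x))                            # minimizer has x1 = -ln(2)/2, x2 = 0
```

Logging `eta` and `lam2` per step is exactly the kind of diagnostic signal this subsection asks for: `eta` stuck below 1 flags the damped phase, and `lam2` is the convergence certificate.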
Diagnostic questions:
- Which assumption about damped Newton is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
8. Implementation and Diagnostics
This block develops implementation and diagnostics for Second-Order Methods. It keeps the scope local to this section while pointing forward when a neighboring topic owns the full treatment.
8.1 Minimal NumPy experiment for natural gradient
In this section, Newton decrement is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "Minimal NumPy experiment for natural gradient" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, Newton decrement is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the iterate $x_t$, the gradient $\nabla f(x_t)$, the Fisher matrix $F(x_t)$, the decrement $\lambda(x_t)$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where Newton decrement can be computed directly and compared with theory.
- A logistic-regression or softmax objective where Newton decrement affects optimization but the model remains interpretable.
- A transformer training diagnostic where Newton decrement appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating Newton decrement as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$x_{t+1} = x_t - \eta\, F(x_t)^{-1} \nabla f(x_t)$ for the Fisher matrix $F$, with decrement analogue $\lambda(x)^2 = \nabla f(x)^\top F(x)^{-1} \nabla f(x)$
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the Newton decrement, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the Newton decrement visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
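Here is one way the minimal NumPy experiment could look: natural gradient on L2-regularized logistic regression, where the Fisher information is $F = X^\top \mathrm{diag}(p(1-p))\, X / n$ and, with the canonical link, equals the loss Hessian. The synthetic data, `lam`, and the damping `tau` are illustrative assumptions.

```python
import numpy as np

# Minimal NumPy natural-gradient experiment on regularized logistic regression.
rng = np.random.default_rng(4)
n, d, lam, tau = 400, 3, 0.1, 1e-6
X = rng.normal(size=(n, d))
y = (X @ np.array([1.0, -2.0, 0.5]) + 0.5 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
for _ in range(15):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    g = X.T @ (p - y) / n + lam * w                       # regularized gradient
    F = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)   # Fisher + reg curvature
    w -= np.linalg.solve(F + tau * np.eye(d), g)          # natural-gradient step
print("final gradient norm:", np.linalg.norm(g))
```

Because Fisher equals Hessian here, the experiment also demonstrates why natural gradient inherits Newton-like convergence on this family of objectives.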
Diagnostic questions:
- Which assumption about Newton decrement is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
8.2 Monitoring signal for K-FAC
In this section, damped Newton is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "Monitoring signal for K-FAC" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, damped Newton is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the iterate $x_t$, the gradient $\nabla f(x_t)$, the curvature $\nabla^2 f(x_t)$, the damping $\tau$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where damped Newton can be computed directly and compared with theory.
- A logistic-regression or softmax objective where damped Newton affects optimization but the model remains interpretable.
- A transformer training diagnostic where damped Newton appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating damped Newton as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$x_{t+1} = x_t - \eta_t\, [\nabla^2 f(x_t) + \tau I]^{-1} \nabla f(x_t)$ with damping $\tau > 0$
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the damped Newton update, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the damped Newton update visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
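As a hedged sketch of the kind of monitoring signal one might log for a K-FAC-style layer: track the condition numbers of the Kronecker factors and the ratio between the preconditioned update norm and the raw gradient norm. The factor statistics, the damping `tau`, and the layer shapes below are illustrative assumptions.

```python
import numpy as np

# Sketch: monitoring signals for one K-FAC-style linear layer.
rng = np.random.default_rng(5)
d_in, d_out, batch, tau = 8, 5, 64, 1e-3
a = rng.normal(size=(batch, d_in))        # layer inputs (activations)
g = rng.normal(size=(batch, d_out))       # backpropagated output gradients

A = a.T @ a / batch                       # input-side Kronecker factor
G = g.T @ g / batch                       # output-side Kronecker factor
W_grad = a.T @ g / batch                  # raw weight gradient, d_in x d_out

# Preconditioned update: (A + tau I)^{-1} @ W_grad @ (G + tau I)^{-1}
update = np.linalg.solve(A + tau * np.eye(d_in), W_grad) \
         @ np.linalg.inv(G + tau * np.eye(d_out))
ratio = np.linalg.norm(update) / np.linalg.norm(W_grad)
print("cond(A + tau I):", np.linalg.cond(A + tau * np.eye(d_in)),
      "update/grad norm ratio:", ratio)
```

A sudden spike in either factor's condition number, or in the update-to-gradient norm ratio, tends to show up well before the scalar loss does.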
Diagnostic questions:
- Which assumption about damped Newton is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
8.3 Failure signature for Shampoo
In this section, modified Cholesky is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "Failure signature for Shampoo" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, modified Cholesky is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the Hessian $\nabla^2 f(x_t)$, the shift $E_t$, the factor $L_t$, the scalar $\tau$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where modified Cholesky can be computed directly and compared with theory.
- A logistic-regression or softmax objective where modified Cholesky affects optimization but the model remains interpretable.
- A transformer training diagnostic where modified Cholesky appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating modified Cholesky as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$\nabla^2 f(x_t) + E_t = L_t L_t^\top$, with $E_t \succeq 0$ kept as small as possible so the factor defines a descent direction
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the modified Cholesky factorization, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the modified Cholesky shift visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
Diagnostic questions:
- Which assumption about modified Cholesky is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
8.4 Framework-level implementation pattern
In this section, trust-region preview is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "Framework-level implementation pattern" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, trust-region preview is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the model $m_t(p)$, the radius $\Delta_t$, the ratio $\rho_t$, the step $p_t$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where trust-region preview can be computed directly and compared with theory.
- A logistic-regression or softmax objective where trust-region preview affects optimization but the model remains interpretable.
- A transformer training diagnostic where trust-region preview appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating trust-region preview as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$\min_p\; m_t(p)$ subject to $\|p\| \le \Delta_t$, with acceptance ratio $\rho_t = (f(x_t) - f(x_t + p_t)) / (m_t(0) - m_t(p_t))$
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the trust-region step, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the trust-region radius visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
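As a preview of the pattern, here is a hedged sketch of trust-region iterations using the Cauchy point: minimize the quadratic model along the steepest-descent direction inside the radius, then grow or shrink the radius from the agreement ratio. The diagonal quadratic test problem and the 0.25/0.75 thresholds are conventional assumptions.

```python
import numpy as np

# Sketch: trust-region loop with the Cauchy point on a diagonal quadratic.
def cauchy_point(g, B, delta):
    p = -(delta / np.linalg.norm(g)) * g  # step to the boundary along -g
    gBg = g @ B @ g
    if gBg > 0:                           # interior minimizer along -g?
        p *= min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
    return p

A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
x, delta = np.array([3.0, 1.0]), 1.0
for _ in range(30):
    g, B = A @ x, A
    if np.linalg.norm(g) < 1e-10:
        break
    p = cauchy_point(g, B, delta)
    pred = -(g @ p + 0.5 * p @ B @ p)     # model decrease m(0) - m(p)
    rho = (f(x) - f(x + p)) / pred        # actual vs predicted decrease
    if rho > 0.75:
        delta *= 2.0                      # model trusted: expand the region
    elif rho < 0.25:
        delta *= 0.25                     # model poor: shrink the region
    if rho > 0.1:                         # accept the step
        x = x + p
print(x, f(x))
```

On an exact quadratic the ratio is identically 1, so this run mainly illustrates the bookkeeping; on a nonquadratic objective, `rho` and `delta` are the numbers worth logging.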
Diagnostic questions:
- Which assumption about trust-region preview is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
8.5 Reproducibility and logging checklist
In this section, Gauss-Newton is treated as a concrete optimization object rather than a slogan. The goal is to understand how it changes the objective, the update rule, the convergence story, and the diagnostics a practitioner should inspect when training a modern model. For Second-Order Methods, the phrase "Reproducibility and logging checklist" means a precise mathematical habit: state the assumptions, write the update, identify what can be measured, and connect the result to a real AI training decision.
Definition.
For this section, Gauss-Newton is the part of Second-Order Methods that controls how the objective, feasible region, or update rule behaves under the assumptions currently in force.
Symbolically, we track it through the residual $r(x_t)$, the Jacobian $J_t$, the matrix $J_t^\top J_t$, the step $p_t$, and any auxiliary state used by the algorithm.
Examples:
- A small synthetic quadratic where Gauss-Newton can be computed directly and compared with theory.
- A logistic-regression or softmax objective where Gauss-Newton affects optimization but the model remains interpretable.
- A transformer training diagnostic where Gauss-Newton appears through gradient norms, update norms, curvature, or validation loss.
Non-examples:
- Treating Gauss-Newton as a hyperparameter recipe without checking the objective assumptions.
- Inferring global behavior from one noisy minibatch when the section requires a population or full-batch statement.
Useful formula:
$p_t = -(J_t^\top J_t)^{-1} J_t^\top r_t$ for $f(x) = \tfrac{1}{2}\, \|r(x)\|^2$, with Jacobian $J_t = \partial r / \partial x \,|_{x_t}$
Proof sketch or reasoning pattern:
Start with the local model around $x_t$, isolate the term involving the Gauss-Newton matrix, and use the section assumptions to bound the change in objective value. If the assumption is geometric, the proof turns a picture into an inequality. If the assumption is stochastic, the proof takes conditional expectation before applying the bound. If the assumption is algorithmic, the proof checks that the proposed update is a descent, projection, or preconditioning step. This pattern is reusable across optimization theory.
Implementation consequence:
- Log a metric that makes the Gauss-Newton matrix visible; otherwise a training run can fail while the scalar loss hides the cause.
- Compare the measured update with the mathematical update above before blaming data or architecture.
- Keep units straight: parameter norm, gradient norm, update norm, objective value, and validation metric are different objects.
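The second bullet can be made mechanical. A minimal sketch, assuming a synthetic quadratic objective and plain gradient descent (both illustrative, not repository conventions), that logs the measured update next to the mathematical one with units kept separate:

```python
import numpy as np

def grad(x, A, b):
    # Gradient of the quadratic 0.5 x^T A x - b^T x
    return A @ x - b

A = np.array([[3.0, 0.5], [0.5, 1.0]])  # SPD curvature (assumed)
b = np.array([1.0, -2.0])
x = np.zeros(2)
lr = 0.1

for step in range(5):
    g = grad(x, A, b)
    intended = -lr * g            # the mathematical update
    x_before = x.copy()
    x = x + intended              # apply the update
    measured = x - x_before       # what actually happened to the parameters
    # Log each quantity separately; they have different units/roles.
    print(f"step {step}: |x|={np.linalg.norm(x):.3f} "
          f"|g|={np.linalg.norm(g):.3f} "
          f"|update|={np.linalg.norm(measured):.3f} "
          f"match={np.allclose(intended, measured)}")
```

In a real run `match=False` at this point would indicate optimizer-state or precision interference, before any blame lands on data or architecture.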
Diagnostic questions:
- Which assumption about Gauss-Newton is most fragile in the current training setup?
- What number would you log to catch the failure one thousand steps before divergence?
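One concrete answer to the second question, under the assumption that oscillation across a sharp valley is the failure mode of interest, is the cosine similarity between successive gradients: values pinned near -1 appear long before the scalar loss diverges. A sketch on an assumed ill-conditioned quadratic:

```python
import numpy as np

A = np.diag([100.0, 1.0])   # ill-conditioned curvature (assumed)

def grad(x):
    return A @ x

x = np.array([1.0, 1.0])
lr = 0.019                  # just under the 2/L stability edge for L = 100

g_prev = None
for step in range(6):
    g = grad(x)
    if g_prev is not None:
        # Cosine similarity between successive gradients; near -1 means
        # the iterate is bouncing across a sharp direction.
        cos = g @ g_prev / (np.linalg.norm(g) * np.linalg.norm(g_prev))
        print(f"step {step}: grad cosine = {cos:+.3f}")
    g_prev = g
    x = x - lr * g
```

Here the loss still decreases every step, yet the logged cosine sits near -1, flagging the fragile sharp direction well ahead of any divergence.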
AI connection:
- K-FAC and natural-gradient style preconditioners for neural networks.
- L-BFGS for small-batch fine-tuning and classical ML objectives.
- Hessian-vector products for sharpness and interpretability diagnostics.
- Structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods.
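The Hessian-vector products mentioned above never need an explicit Hessian. A sketch using central finite differences of gradients on an assumed toy objective (autodiff frameworks provide the same primitive exactly):

```python
import numpy as np

def grad(x):
    # Gradient of f(x) = x0^4 + 3*x0*x1 + x1^2 (illustrative objective)
    return np.array([4 * x[0]**3 + 3 * x[1],
                     3 * x[0] + 2 * x[1]])

def hvp(x, v, eps=1e-5):
    # H(x) v ~= (grad(x + eps v) - grad(x - eps v)) / (2 eps)
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

x = np.array([1.0, -1.0])
v = np.array([1.0, 0.0])
# Exact Hessian at x is [[12 x0^2, 3], [3, 2]] = [[12, 3], [3, 2]],
# so H v should be approximately [12, 3].
print(hvp(x, v))
```

Repeated HVPs against random or power-iterated vectors estimate the top curvature eigenvalue, which is the sharpness number these diagnostics log.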
Local scope boundary: This subsection may reference neighboring material, but the full canonical treatment stays in its own folder. For example, stochastic gradient noise belongs to Stochastic Optimization, external schedule shapes belong to Learning Rate Schedules, and cross-entropy as an information measure belongs to Cross-Entropy.
9. Common Mistakes
| # | Mistake | Why It Is Wrong | Fix |
|---|---|---|---|
| 1 | Using a recipe without checking assumptions | Optimization guarantees depend on smoothness, convexity, stochasticity, or feasibility assumptions. | Write the assumptions next to the update rule before choosing hyperparameters. |
| 2 | Confusing objective decrease with validation improvement | The optimizer sees the training objective; validation behavior also depends on generalization and data split quality. | Track objective, train metric, validation metric, and update norm separately. |
| 3 | Treating all norms as interchangeable | The geometry changes when the norm changes, especially for constraints and regularizers. | State whether you use the ℓ1, ℓ2, Frobenius, spectral, or another norm. |
| 4 | Ignoring scale | Learning rates, penalties, curvature, and gradient norms are all scale-sensitive. | Normalize units and inspect the effective update size ‖Δx‖ / ‖x‖. |
| 5 | Overfitting to a single seed | Optimization can look stable for one seed and fail under another. | Run small seed sweeps for important claims. |
| 6 | Hiding instability behind smoothed plots | A moving average can hide spikes, divergence, and bad curvature events. | Plot raw metrics alongside smoothed metrics. |
| 7 | Using test data during tuning | This contaminates the final evaluation. | Reserve test data until after model and hyperparameter selection. |
| 8 | Assuming large models make theory irrelevant | Large models often make diagnostics more important because failures are expensive. | Use theory to decide what to log, not to pretend every theorem applies exactly. |
| 9 | Mixing optimizer state with model state carelessly | State corruption changes the effective algorithm. | Checkpoint parameters, gradients if needed, optimizer moments, scheduler state, and random seeds. |
| 10 | Not checking numerical precision | BF16, FP16, FP8, and accumulation choices can change the observed optimizer. | Cross-check suspicious runs against higher precision on a small batch. |
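The fix for mistake 10 can be scripted. A sketch, with synthetic data and a least-squares model standing in for a real batch, that cross-checks a float16 gradient against float64:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                 # synthetic small batch
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=64)
w = np.zeros(8)                              # evaluate away from the optimum

def mse_grad(X, y, w):
    # Gradient of (1/2n) ||Xw - y||^2
    return X.T @ (X @ w - y) / len(y)

g64 = mse_grad(X, y, w)
# Same computation with every input rounded to float16.
g16 = mse_grad(X.astype(np.float16), y.astype(np.float16),
               w.astype(np.float16)).astype(np.float64)
rel_err = np.linalg.norm(g16 - g64) / np.linalg.norm(g64)
print(f"fp16 vs fp64 relative gradient error: {rel_err:.2e}")
```

A suspicious full-scale run warrants the same comparison on one small batch; a relative error far above the expected precision level localizes the problem to numerics rather than data.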
10. Exercises
- Exercise 1 [*] - Newton Step (a) Define the Newton step using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 2 [*] - Damped Newton (a) Define damped Newton using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 3 [*] - Trust-Region Preview (a) Define the trust-region preview using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 4 [**] - Levenberg-Marquardt (a) Define Levenberg-Marquardt using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 5 [**] - BFGS (a) Define BFGS using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 6 [**] - Two-Loop Recursion (a) Define the two-loop recursion using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 7 [**] - Fisher Information (a) Define Fisher information using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 8 [***] - K-FAC (a) Define K-FAC using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 9 [***] - SOAP (a) Define SOAP using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
- Exercise 10 [***] - Curvature Diagnostics (a) Define curvature diagnostics using the notation of this repository. (b) Give three valid examples and two non-examples. (c) Derive the relevant update or inequality. (d) Implement a NumPy check on a synthetic two-dimensional objective. (e) Explain what metric you would log in a real LLM or fine-tuning run.
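A shared starting point for part (d) of the exercises above, assuming a synthetic two-dimensional quadratic; the helper names are illustrative, not repository conventions:

```python
import numpy as np

A = np.array([[5.0, 1.0], [1.0, 2.0]])   # SPD Hessian (assumed)
b = np.array([1.0, 1.0])

def f(x):
    # Quadratic objective 0.5 x^T A x - b^T x
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def check_step(x, step, name):
    # Log the quantities the section asks for, then verify descent.
    x_new = x + step
    print(f"{name}: f {f(x):.4f} -> {f(x_new):.4f}, "
          f"|g|={np.linalg.norm(grad(x)):.4f}, "
          f"|step|={np.linalg.norm(step):.4f}")
    return f(x_new) < f(x)

x0 = np.array([2.0, -1.0])
newton = -np.linalg.solve(A, grad(x0))   # Exercise 1: full Newton step
damped = 0.5 * newton                    # Exercise 2: damped Newton, t = 0.5
print(check_step(x0, newton, "newton"), check_step(x0, damped, "damped"))
```

Each exercise then swaps in its own step rule (trust-region, LM-damped, BFGS-approximated, and so on) and reuses the same descent check and logging.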
11. Why This Matters for AI (2026 Perspective)
| Concept | AI Impact |
|---|---|
| Hessian matrix | K-FAC and natural-gradient style preconditioners for neural networks |
| quadratic model | L-BFGS for small-batch fine-tuning and classical ML objectives |
| Newton step | Hessian-vector products for sharpness and interpretability diagnostics |
| Newton decrement | structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods |
| damped Newton | K-FAC and natural-gradient style preconditioners for neural networks |
| modified Cholesky | L-BFGS for small-batch fine-tuning and classical ML objectives |
| trust-region preview | Hessian-vector products for sharpness and interpretability diagnostics |
| Gauss-Newton | structured preconditioning proposals such as Shampoo, SOAP, and Muon-family methods |
| Levenberg-Marquardt | K-FAC and natural-gradient style preconditioners for neural networks |
| secant equation | L-BFGS for small-batch fine-tuning and classical ML objectives |
12. Conceptual Bridge
Second-Order Methods sits inside a chain. Earlier sections give the calculus, probability, and linear algebra needed to write the objective and interpret the update. Later sections use this material to reason about noisy gradients, adaptive state, regularization, tuning, schedules, and finally information-theoretic losses.
Backward link: Gradient Descent supplies the immediate prerequisite vocabulary.
Forward link: Constrained Optimization uses this section as a building block.
+------------------------------------------------------------+
| Chapter 8: Optimization |
| 01-Convex-Optimization Convex Optimization |
| 02-Gradient-Descent Gradient Descent |
| >> 03-Second-Order-Methods Second-Order Methods |
| 04-Constrained-Optimization Constrained Optimization |
| 05-Stochastic-Optimization Stochastic Optimization |
| 06-Optimization-Landscape Optimization Landscape |
| 07-Adaptive-Learning-Rate Adaptive Learning Rate |
| 08-Regularization-Methods Regularization Methods |
| 09-Hyperparameter-Optimization Hyperparameter Optimization |
| 10-Learning-Rate-Schedules Learning Rate Schedules |
+------------------------------------------------------------+
Appendix A. Extended Derivation and Diagnostic Cards
References
- Nocedal and Wright, Numerical Optimization.
- Martens and Grosse, Optimizing Neural Networks with Kronecker-factored Approximate Curvature.
- Amari, Natural Gradient Works Efficiently in Learning.
- Gupta et al., Shampoo: Preconditioned Stochastic Tensor Optimization.
- Goodfellow, Bengio, and Courville, Deep Learning.
- Bottou, Curtis, and Nocedal, Optimization Methods for Large-Scale Machine Learning.
- PyTorch optimizer and scheduler documentation.
- Optax documentation for composable optimizer transformations.