Exercises Notebook
Converted from exercises.ipynb for web reading.
Generative Models: Exercises
Ten exercises cover autoregressive likelihood, the VAE KL term and reparameterization, GAN losses, flow densities, diffusion noising and denoising, score updates, FID intuition, and diagnostics.
Code cell 2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
try:
    import seaborn as sns
    sns.set_theme(style="whitegrid", palette="colorblind")
    HAS_SNS = True
except ImportError:
    plt.style.use("seaborn-v0_8-whitegrid")
    HAS_SNS = False
mpl.rcParams.update({
    "figure.figsize": (10, 6),
    "figure.dpi": 120,
    "font.size": 13,
    "axes.titlesize": 15,
    "axes.labelsize": 13,
    "xtick.labelsize": 11,
    "ytick.labelsize": 11,
    "legend.fontsize": 11,
    "legend.framealpha": 0.85,
    "lines.linewidth": 2.0,
    "axes.spines.top": False,
    "axes.spines.right": False,
    "savefig.bbox": "tight",
    "savefig.dpi": 150,
})
np.random.seed(42)
print("Plot setup complete.")
Exercise 1: Autoregressive likelihood
Compute the log-probability of a sequence from its conditional probabilities.
Code cell 4
# Your Solution
probs = np.array([0.5, 0.25])
print("Starter: sum log probabilities.")
Code cell 5
# Solution
probs = np.array([0.5, 0.25])
logp = np.log(probs).sum()
print("logp:", logp)
Exercise 2: VAE KL
Compute the KL divergence from a diagonal Gaussian to the standard normal.
Code cell 7
# Your Solution
mu = np.array([0.0, 1.0])
logvar = np.array([0.0, 0.0])
print("Starter: 0.5*sum(exp(logvar)+mu^2-1-logvar).")
Code cell 8
# Solution
mu = np.array([0.0, 1.0])
logvar = np.array([0.0, 0.0])
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1 - logvar)
print("KL:", kl)
Exercise 3: Reparameterization
Compute the reparameterized sample z = mu + sigma*eps.
Code cell 10
# Your Solution
mu, sigma, eps = 1.0, 2.0, -0.5
print("Starter: mu + sigma*eps.")
Code cell 11
# Solution
mu, sigma, eps = 1.0, 2.0, -0.5
z = mu + sigma * eps
print("z:", z)
Exercise 4: GAN discriminator loss
Compute the discriminator loss -log(Dreal) - log(1 - Dfake).
Code cell 13
# Your Solution
Dreal = 0.8
Dfake = 0.3
print("Starter: -log(Dreal)-log(1-Dfake).")
Code cell 14
# Solution
Dreal = 0.8
Dfake = 0.3
loss = -np.log(Dreal) - np.log(1 - Dfake)
print("D loss:", loss)
Exercise 5: Flow log density
Compute log p_x(x) for x = a*z with a = 2 and z ~ N(0, 1).
Code cell 16
# Your Solution
z = 0.5
a = 2.0
log_pz = -0.5*z**2 - 0.5*np.log(2*np.pi)
print("Starter: log_px=log_pz-log(abs(a)).")
Code cell 17
# Solution
z = 0.5
a = 2.0
log_pz = -0.5*z**2 - 0.5*np.log(2*np.pi)
log_px = log_pz - np.log(abs(a))  # change of variables: subtract log|dx/dz| = log|a|
print("log_px:", log_px)
Exercise 6: Diffusion noising
Compute the noised sample x_t from x0 and eps via the closed-form forward process.
Code cell 19
# Your Solution
x0, eps, abar = 1.0, -0.5, 0.64
print("Starter: sqrt(abar)*x0 + sqrt(1-abar)*eps.")
Code cell 20
# Solution
x0, eps, abar = 1.0, -0.5, 0.64
xt = np.sqrt(abar) * x0 + np.sqrt(1 - abar) * eps
print("xt:", xt)
Exercise 7: Denoising MSE
Compute MSE between true and predicted noise.
Code cell 22
# Your Solution
eps = np.array([1.0, 0.0])
pred = np.array([0.8, 0.2])
print("Starter: mean((eps-pred)^2).")
Code cell 23
# Solution
eps = np.array([1.0, 0.0])
pred = np.array([0.8, 0.2])
mse = np.mean((eps - pred) ** 2)
print("MSE:", mse)
Exercise 8: Score update
Take one deterministic score step for a standard normal.
Code cell 25
# Your Solution
x = np.array([2.0])
eta = 0.1
print("Starter: score=-x, x_new=x+eta*score.")
Code cell 26
# Solution
x = np.array([2.0])
eta = 0.1
score = -x  # for N(0, 1), the score is d/dx log p(x) = -x
x_new = x + eta * score
print("x_new:", x_new)
Exercise 9: FID mean term
Compute the squared distance between the two mean vectors.
Code cell 28
# Your Solution
mu1 = np.array([0.0, 1.0])
mu2 = np.array([1.0, 3.0])
print("Starter: sum((mu1-mu2)^2).")
Code cell 29
# Solution
mu1 = np.array([0.0, 1.0])
mu2 = np.array([1.0, 3.0])
dist = np.sum((mu1 - mu2) ** 2)
print("mean term:", dist)
Exercise 10: Checklist
Write four generative-model diagnostics.
Code cell 31
# Your Solution
print("Starter: include objective, quality, diversity, sampling cost.")
Code cell 32
# Solution
checks = [
    "state the optimized objective",
    "evaluate sample quality and diversity",
    "check for mode collapse or poor coverage",
    "report sampling cost and number of steps",
]
for check in checks:
    print("-", check)
Closing Reflection
Generative modeling is not one algorithm. It is a family of choices about likelihood, latent structure, sampling, adversarial feedback, denoising, and evaluation.