Exercises Notebook
Converted from exercises.ipynb for web reading.
Probabilistic Models: Exercises
Ten exercises cover normalization, likelihood, Bayes updates, naive Bayes, mixture responsibilities, EM, HMM forward recursion, Monte Carlo estimates, and diagnostics.
Code cell 2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
try:
    import seaborn as sns
    sns.set_theme(style="whitegrid", palette="colorblind")
    HAS_SNS = True
except ImportError:
    plt.style.use("seaborn-v0_8-whitegrid")
    HAS_SNS = False
mpl.rcParams.update({
    "figure.figsize": (10, 6),
    "figure.dpi": 120,
    "font.size": 13,
    "axes.titlesize": 15,
    "axes.labelsize": 13,
    "xtick.labelsize": 11,
    "ytick.labelsize": 11,
    "legend.fontsize": 11,
    "legend.framealpha": 0.85,
    "lines.linewidth": 2.0,
    "axes.spines.top": False,
    "axes.spines.right": False,
    "savefig.bbox": "tight",
    "savefig.dpi": 150,
})
np.random.seed(42)
print("Plot setup complete.")
Exercise 1: Normalize
Normalize positive scores into probabilities.
Code cell 4
# Your Solution
scores = np.array([2.0, 3.0, 5.0])
print("Starter: divide by sum.")
Code cell 5
# Solution
scores = np.array([2.0, 3.0, 5.0])
p = scores / scores.sum()
print("p:", p)
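A related sketch (not part of the exercise): when scores live on the log scale, such as classifier logits, the analogue of dividing by the sum is a softmax, with the maximum subtracted first for numerical stability.

```python
import numpy as np

# Softmax sketch: normalizing log-scale scores into probabilities.
log_scores = np.log(np.array([2.0, 3.0, 5.0]))
shifted = log_scores - log_scores.max()  # subtract max to avoid overflow
p = np.exp(shifted) / np.exp(shifted).sum()
print("p:", p)  # matches direct normalization: [0.2, 0.3, 0.5]
```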
Exercise 2: Bernoulli MLE
Find the maximum-likelihood estimate of p for Bernoulli data.
Code cell 7
# Your Solution
y = np.array([1, 1, 0, 1])
print("Starter: mean of y.")
Code cell 8
# Solution
y = np.array([1, 1, 0, 1])
p = y.mean()
print("MLE:", p)
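A quick sanity check, not part of the exercise: evaluating the Bernoulli log-likelihood on a grid of candidate values for p should confirm the peak sits at the sample mean.

```python
import numpy as np

# Grid check: the Bernoulli log-likelihood peaks at y.mean().
y = np.array([1, 1, 0, 1])
grid = np.linspace(0.01, 0.99, 99)
loglik = y.sum() * np.log(grid) + (len(y) - y.sum()) * np.log(1 - grid)
print("argmax p:", grid[np.argmax(loglik)])  # 0.75, equal to y.mean()
```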
Exercise 3: Beta posterior
Update Beta(2,2) with 3 heads and 1 tail.
Code cell 10
# Your Solution
alpha, beta = 2, 2
heads, tails = 3, 1
print("Starter: add heads to alpha and tails to beta.")
Code cell 11
# Solution
alpha, beta = 2, 2
heads, tails = 3, 1
print("posterior:", (alpha + heads, beta + tails))
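One illustrative property of conjugacy (a sketch beyond the exercise): updating the Beta(2, 2) prior one flip at a time produces the same posterior as adding all the counts at once.

```python
# Sequential conjugate updates: 3 heads and 1 tail, one flip at a time.
alpha, beta = 2, 2
for flip in [1, 1, 1, 0]:
    alpha += flip       # a head adds one to alpha
    beta += 1 - flip    # a tail adds one to beta
print("posterior:", (alpha, beta))  # (5, 3), same as the batch update
```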
Exercise 4: Posterior mean
Compute the posterior mean after a Beta update.
Code cell 13
# Your Solution
a, b = 5, 3
print("Starter: a/(a+b).")
Code cell 14
# Solution
a, b = 5, 3
print("posterior mean:", a / (a + b))
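For comparison, a sketch beyond the exercise: the posterior mean a/(a+b) differs from the MAP estimate (the posterior mode), which for a Beta(a, b) with a, b > 1 is (a-1)/(a+b-2).

```python
# Posterior mean vs. MAP for Beta(5, 3).
a, b = 5, 3
post_mean = a / (a + b)           # 0.625
post_map = (a - 1) / (a + b - 2)  # 4/6 ~ 0.667, the posterior mode
print("mean:", post_mean, "MAP:", post_map)
```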
Exercise 5: Naive Bayes
Choose class from log scores.
Code cell 16
# Your Solution
log_scores = np.array([-2.0, -1.0, -3.0])
print("Starter: argmax.")
Code cell 17
# Solution
log_scores = np.array([-2.0, -1.0, -3.0])
print("class:", np.argmax(log_scores))
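Beyond the argmax, an illustrative sketch: the log scores can also be turned into class probabilities with a numerically stable exponentiate-and-normalize step (subtracting the maximum first), which is useful when you want calibrated outputs rather than just the winning class.

```python
import numpy as np

# Converting naive Bayes log scores into normalized class probabilities.
log_scores = np.array([-2.0, -1.0, -3.0])
shifted = log_scores - log_scores.max()  # stability: avoid exp overflow
probs = np.exp(shifted) / np.exp(shifted).sum()
print("probs:", probs, "sum:", probs.sum())
```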
Exercise 6: Responsibilities
Normalize mixture-component joint probabilities into responsibilities.
Code cell 19
# Your Solution
joint = np.array([0.2, 0.3, 0.5])
print("Starter: joint / joint.sum().")
Code cell 20
# Solution
joint = np.array([0.2, 0.3, 0.5])
gamma = joint / joint.sum()
print("gamma:", gamma)
Exercise 7: EM mean update
Compute responsibility-weighted mean.
Code cell 22
# Your Solution
x = np.array([0.0, 2.0, 4.0])
gamma = np.array([0.2, 0.5, 0.3])
print("Starter: sum(gamma*x)/sum(gamma).")
Code cell 23
# Solution
x = np.array([0.0, 2.0, 4.0])
gamma = np.array([0.2, 0.5, 0.3])
mu = np.sum(gamma * x) / gamma.sum()
print("mu:", mu)
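The same update extends to a full mixture. A vectorized sketch (the responsibility matrix below is made up for illustration): with gamma[n, k] holding the responsibility of component k for point n, every component mean comes from one matrix product.

```python
import numpy as np

# Vectorized EM mean update for all components at once.
x = np.array([0.0, 2.0, 4.0])
gamma = np.array([[0.8, 0.2],
                  [0.5, 0.5],
                  [0.1, 0.9]])  # rows sum to 1 over the two components
mu = gamma.T @ x / gamma.sum(axis=0)  # one weighted mean per component
print("mu:", mu)  # [1.0, 2.875]
```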
Exercise 8: HMM forward
Run one forward step.
Code cell 25
# Your Solution
alpha = np.array([0.3, 0.2])
trans = np.array([[0.8, 0.2], [0.1, 0.9]])
emit = np.array([0.5, 0.7])
print("Starter: (alpha@trans)*emit.")
Code cell 26
# Solution
alpha = np.array([0.3, 0.2])
trans = np.array([[0.8, 0.2], [0.1, 0.9]])
emit = np.array([0.5, 0.7])
new = (alpha @ trans) * emit
print("new alpha:", new)
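Extending the single step, a sketch of a full forward pass (the initial distribution and emission probabilities below are made up for illustration): normalizing alpha at each step avoids underflow on long sequences, and accumulating the log normalizers recovers the sequence log-likelihood.

```python
import numpy as np

# Scaled forward recursion over a short observation sequence.
trans = np.array([[0.8, 0.2], [0.1, 0.9]])
emits = np.array([[0.5, 0.7], [0.9, 0.1], [0.3, 0.6]])  # one row per step
alpha = np.array([0.5, 0.5]) * emits[0]
log_lik = 0.0
c = alpha.sum(); alpha /= c; log_lik += np.log(c)
for e in emits[1:]:
    alpha = (alpha @ trans) * e   # predict, then weight by emission
    c = alpha.sum(); alpha /= c   # rescale to a proper distribution
    log_lik += np.log(c)
print("filtered state probs:", alpha, "log-likelihood:", log_lik)
```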
Exercise 9: Monte Carlo
Estimate E[X^2] from samples.
Code cell 28
# Your Solution
samples = np.array([1.0, 2.0, 3.0])
print("Starter: mean(samples**2).")
Code cell 29
# Solution
samples = np.array([1.0, 2.0, 3.0])
estimate = np.mean(samples**2)
print("estimate:", estimate)
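A convergence sketch beyond the exercise: for a standard normal, E[X^2] = 1 exactly, so the Monte Carlo estimate should tighten around 1 as the sample size grows.

```python
import numpy as np

# Monte Carlo estimates of E[X^2] for X ~ N(0, 1), which equals 1.
rng = np.random.default_rng(0)
for n in [100, 10_000, 1_000_000]:
    samples = rng.standard_normal(n)
    print(n, np.mean(samples**2))
```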
Exercise 10: Checklist
Write four probabilistic-model diagnostics.
Code cell 31
# Your Solution
print("Starter: include normalization, finite logs, held-out likelihood, predictive checks.")
Code cell 32
# Solution
checks = [
    "probabilities normalize",
    "log probabilities are finite",
    "held-out log likelihood is tracked",
    "posterior predictive checks are run",
]
for check in checks:
    print("-", check)
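A sketch of the first three checklist items as runnable assertions (the probability vector here is a placeholder for a real model's outputs; a posterior predictive check needs simulated replicate data and is omitted):

```python
import numpy as np

# Placeholder model output standing in for real predicted probabilities.
probs = np.array([0.2, 0.3, 0.5])
assert np.isclose(probs.sum(), 1.0), "probabilities must normalize"
assert np.all(np.isfinite(np.log(probs))), "log probabilities must be finite"
held_out_ll = np.log(probs).mean()  # stand-in for a tracked held-out metric
assert np.isfinite(held_out_ll), "held-out log likelihood must be finite"
print("basic diagnostics passed")
```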
Closing Reflection
Probabilistic models teach you to ask what is observed, what is hidden, what is uncertain, and what decision the distribution supports.