Fourier Transform: Appendix B: Worked Examples and Techniques to Appendix F: The Fourier Transform in Practice - Case Studies
Appendix B: Worked Examples and Techniques
B.1 The Fourier Transform Table - Complete Reference
The following table lists the most important Fourier Transform pairs, using the $2\pi$-convention $\hat f(\xi) = \int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x\xi}\,dx$. All functions are assumed to be in $L^1(\mathbb{R})$ unless otherwise noted.
| # | $f(x)$ | $\hat f(\xi)$ | Notes |
|---|---|---|---|
| 1 | $e^{-\pi x^2}$ | $e^{-\pi\xi^2}$ | Self-dual Gaussian |
| 2 | $e^{-ax^2}$, $a>0$ | $\sqrt{\pi/a}\;e^{-\pi^2\xi^2/a}$ | Scaled Gaussian |
| 3 | $e^{-a\lvert x\rvert}$, $a>0$ | $\frac{2a}{a^2+4\pi^2\xi^2}$ | Double-sided exponential |
| 4 | $e^{-ax}u(x)$, $a>0$ | $\frac{1}{a+2\pi i\xi}$ | One-sided exponential |
| 5 | $\mathbf{1}_{[-T/2,\,T/2]}(x)$ | $T\,\mathrm{sinc}(T\xi)$ | Rectangular pulse of width $T$ |
| 6 | $2W\,\mathrm{sinc}(2Wx)$ | $\mathbf{1}_{[-W,\,W]}(\xi)$ | Ideal low-pass filter (by duality) |
| 7 | $\Lambda(x) = (1-\lvert x\rvert)^+$ | $\mathrm{sinc}^2(\xi)$ | Triangle function |
| 8 | $\frac{2a}{a^2+4\pi^2 x^2}$ | $e^{-a\lvert\xi\rvert}$ | Lorentzian / Cauchy |
| 9 | $\delta(x)$ | $1$ | Dirac impulse -> white spectrum |
| 10 | $1$ | $\delta(\xi)$ | Constant -> DC spike (distributional) |
| 11 | $\delta(x-x_0)$ | $e^{-2\pi i x_0\xi}$ | Shifted impulse |
| 12 | $e^{2\pi i\xi_0 x}$ | $\delta(\xi-\xi_0)$ | Pure tone (distributional) |
| 13 | $\cos(2\pi\xi_0 x)$ | $\tfrac12[\delta(\xi-\xi_0)+\delta(\xi+\xi_0)]$ | Cosine |
| 14 | $\sin(2\pi\xi_0 x)$ | $\tfrac{1}{2i}[\delta(\xi-\xi_0)-\delta(\xi+\xi_0)]$ | Sine |
| 15 | $\sum_{n\in\mathbb Z}\delta(x-nT)$ | $\tfrac1T\sum_{k\in\mathbb Z}\delta(\xi-k/T)$ | Dirac comb |
| 16 | $\mathrm{sgn}(x)$ | $\frac{1}{i\pi\xi}$ (distributional, p.v.) | Sign function |
| 17 | $u(x)$ (unit step) | $\tfrac12\delta(\xi)+\frac{1}{2\pi i\xi}$ | Heaviside step (distributional) |
| 18 | $x^n e^{-ax}u(x)$, $a>0$, $n\in\mathbb N$ | $\frac{n!}{(a+2\pi i\xi)^{n+1}}$ | Polynomial × exponential |
B.2 Four Full Worked Examples
Worked Example 1: Triangular Pulse
The triangular function is $\Lambda(x) = 1 - |x|$ for $|x| \le 1$, zero outside $[-1, 1]$. We compute $\hat\Lambda(\xi) = \int_{-1}^{1}(1-|x|)\,e^{-2\pi i x\xi}\,dx$ directly, splitting at $x = 0$:

$$\hat\Lambda(\xi) = \int_{-1}^{0}(1+x)\,e^{-2\pi i x\xi}\,dx + \int_{0}^{1}(1-x)\,e^{-2\pi i x\xi}\,dx.$$

For the first integral, let $x \mapsto -x$:

$$\int_{-1}^{0}(1+x)\,e^{-2\pi i x\xi}\,dx = \int_{0}^{1}(1-x)\,e^{2\pi i x\xi}\,dx.$$

Adding both integrals: $\hat\Lambda(\xi) = 2\int_{0}^{1}(1-x)\cos(2\pi\xi x)\,dx$.

Integrating by parts:

$$2\int_{0}^{1}(1-x)\cos(2\pi\xi x)\,dx = 2\left[\frac{(1-x)\sin(2\pi\xi x)}{2\pi\xi}\right]_{0}^{1} + \frac{1}{\pi\xi}\int_{0}^{1}\sin(2\pi\xi x)\,dx = \frac{1-\cos(2\pi\xi)}{2\pi^2\xi^2}.$$

Using $1 - \cos\theta = 2\sin^2(\theta/2)$:

$$\hat\Lambda(\xi) = \frac{2\sin^2(\pi\xi)}{2\pi^2\xi^2} = \frac{\sin^2(\pi\xi)}{(\pi\xi)^2} = \mathrm{sinc}^2(\xi).$$

The triangular pulse's FT is $\mathrm{sinc}^2(\xi)$. Alternatively: $\Lambda = \mathrm{rect} * \mathrm{rect}$ (convolution of two unit rect functions), so $\hat\Lambda = \widehat{\mathrm{rect}}^{\,2} = \mathrm{sinc}^2$ by the convolution theorem. The spectrum decays faster than $\widehat{\mathrm{rect}}$ ($1/\xi^2$ vs $1/\xi$) because $\Lambda$ is continuous (no jumps), while $\mathrm{rect}$ has jump discontinuities.
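As a quick cross-check, a short numerical quadrature (an illustrative sketch, not part of the lesson's original material; it uses the $2\pi$-convention throughout) compares the Riemann-sum approximation of the integral against $\mathrm{sinc}^2$:

```python
import numpy as np

# Approximate \hat{Lambda}(xi) = int_{-1}^{1} (1-|x|) e^{-2 pi i x xi} dx by a
# Riemann sum, then compare with sinc^2(xi). Grid sizes are arbitrary choices.
x = np.linspace(-1, 1, 4001)          # support of the triangle
dx = x[1] - x[0]
tri = 1 - np.abs(x)

xi = np.linspace(-5, 5, 11)
ft_numeric = np.array([np.sum(tri * np.exp(-2j * np.pi * x * s)) * dx for s in xi])
sinc2 = np.sinc(xi) ** 2              # np.sinc(x) = sin(pi x)/(pi x)

print(np.max(np.abs(ft_numeric - sinc2)))   # tiny: quadrature error only
```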
Worked Example 2: Gaussian via Differential Equation
An elegant alternative proof that $\widehat{e^{-\pi x^2}} = e^{-\pi\xi^2}$ uses the fact that $f(x) = e^{-\pi x^2}$ satisfies $f'(x) + 2\pi x\,f(x) = 0$.

Taking the FT of both sides and using the differentiation rules:

- LHS: $\widehat{f'}(\xi) = 2\pi i\xi\,\hat f(\xi)$
- RHS: for the $xf$ term, use $\widehat{xf(x)}(\xi) = \frac{i}{2\pi}\,\hat f'(\xi)$

So $2\pi i\xi\,\hat f(\xi) + 2\pi\cdot\frac{i}{2\pi}\,\hat f'(\xi) = 0$.

This gives the ODE $\hat f'(\xi) + 2\pi\xi\,\hat f(\xi) = 0$, with solution $\hat f(\xi) = \hat f(0)\,e^{-\pi\xi^2}$.

To find $\hat f(0)$: $\hat f(0) = \int_{-\infty}^{\infty} e^{-\pi x^2}\,dx = 1$ (Gaussian integral). So $\hat f(0) = 1$ and $\hat f(\xi) = e^{-\pi\xi^2}$.

This proof is superior for teaching: it shows that the Gaussian's self-duality follows from the fact that $e^{-\pi x^2}$ satisfies the simplest possible first-order ODE, and the FT converts this ODE into the same ODE in frequency space.
Worked Example 3: FT of $e^{-a|x|}$ and a Contour Argument

Let $f(x) = e^{-a|x|}$ with $a > 0$. Split the integral at $x = 0$:

$$\hat f(\xi) = \int_{-\infty}^{0} e^{ax}\,e^{-2\pi i x\xi}\,dx + \int_{0}^{\infty} e^{-ax}\,e^{-2\pi i x\xi}\,dx.$$

For the first integral ($x < 0$):

$$\int_{-\infty}^{0} e^{(a - 2\pi i\xi)x}\,dx = \frac{1}{a - 2\pi i\xi}.$$

For the second integral, let $s = a + 2\pi i\xi$ ($x > 0$, $\operatorname{Re} s = a > 0$):

$$\int_{0}^{\infty} e^{-sx}\,dx = \frac{1}{a + 2\pi i\xi}.$$

Therefore:

$$\hat f(\xi) = \frac{1}{a - 2\pi i\xi} + \frac{1}{a + 2\pi i\xi} = \frac{2a}{a^2 + 4\pi^2\xi^2}.$$

This is a Lorentzian (Cauchy distribution) centered at $\xi = 0$ with half-width $a/2\pi$. (Recovering $e^{-a|x|}$ from the Lorentzian via the inversion formula is a classic contour-integration exercise: close the contour in the upper or lower half-plane and pick up the residue at $\xi = \pm\, ia/2\pi$.) As $a \to 0^+$, the time-domain signal approaches the constant $1$, while $\frac{2a}{a^2 + 4\pi^2\xi^2}$ has unit integral for every $a > 0$ and concentrates at $\xi = 0$ - it is a rescaled version of the standard Poisson kernel. The limit confirms $\hat 1 = \delta$.
Worked Example 4: The FT and Convolution for the Heat Equation Kernel
The heat kernel at time $t > 0$ is $\Phi_t(x) = \frac{1}{\sqrt{4\pi t}}\,e^{-x^2/4t}$. We verify $\hat\Phi_t(\xi) = e^{-4\pi^2 t\xi^2}$ using the scaling theorem.

We know $\widehat{e^{-\pi x^2}} = e^{-\pi\xi^2}$. The heat kernel is a rescaled Gaussian: writing $g(x) = e^{-\pi x^2}$ and $\lambda = \sqrt{4\pi t}$, we have $e^{-x^2/4t} = e^{-\pi(x/\lambda)^2}$ and hence

$$\Phi_t(x) = \frac{1}{\lambda}\,g(x/\lambda).$$

By the scaling theorem, $\widehat{\tfrac{1}{\lambda}g(\cdot/\lambda)}(\xi) = \hat g(\lambda\xi)$, so:

$$\hat\Phi_t(\xi) = e^{-\pi\lambda^2\xi^2} = e^{-\pi\cdot 4\pi t\,\xi^2} = e^{-4\pi^2 t\,\xi^2}.$$

Alternatively, directly from table pair #2 with $a = 1/4t$: $\widehat{e^{-x^2/4t}} = \sqrt{4\pi t}\;e^{-4\pi^2 t\xi^2}$, and the prefactor $\frac{1}{\sqrt{4\pi t}}$ cancels the $\sqrt{4\pi t}$.

This confirms that in the $2\pi$-convention, the heat kernel's FT is $e^{-4\pi^2 t\xi^2}$ - exactly the solution we derived from the heat equation in Section A.4.
B.3 Decay Rates and Smoothness: The Complete Picture
The relationship between smoothness of and decay of is one of the deepest qualitative results in Fourier analysis. The complete picture:
Theorem B.1 (Smoothness ⇒ Spectral Decay). Let $f \in L^1(\mathbb{R})$.

| Condition on $f$ | Decay of $\hat f(\xi)$ as $\lvert\xi\rvert \to \infty$ |
|---|---|
| $f$ has a jump discontinuity | $\sim 1/\lvert\xi\rvert$ |
| $f$ continuous with $f' \in L^1$ (one derivative) | $o(1/\lvert\xi\rvert)$ |
| $f \in C^{k-1}$, $f^{(k)} \in L^1$ | $O(1/\lvert\xi\rvert^k)$ |
| $f \in \mathcal{S}(\mathbb{R})$ (smooth, rapidly decreasing) | Faster than any polynomial: $O(\lvert\xi\rvert^{-N})$ for all $N$ |
| $f$ analytic (holomorphic in a strip $\lvert\operatorname{Im} z\rvert < a$) | Exponential decay: $O(e^{-2\pi b\lvert\xi\rvert})$ for every $b < a$ |

Proof sketch: For the $k$-derivative case: $\widehat{f^{(k)}}(\xi) = (2\pi i\xi)^k\,\hat f(\xi)$, so $|\hat f(\xi)| \le \|f^{(k)}\|_1/(2\pi|\xi|)^k$. For the analytic case, shift the contour of integration to $\operatorname{Im} z = \mp b$ (for $\xi \gtrless 0$), gaining an exponential factor $e^{-2\pi b|\xi|}$.
Applications:
- Signal compression: Smooth signals (like audio with no sharp transients) have rapidly decaying spectra -> can be compressed by truncating high frequencies with small error.
- Neural network generalization: Smooth target functions (natural image statistics) have rapidly decaying spectra -> a network with limited bandwidth can approximate them well. This connects to the spectral bias of neural networks (Rahaman et al., 2019): networks preferentially learn low-frequency components first.
- Discretization error: The aliasing error when sampling a signal at rate is bounded by if - smoother signals alias less.
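These decay rates are easy to observe numerically. The following informal sketch (my own illustration, not from the lesson) approximates the FTs of rect, triangle, and Gaussian via the FFT and compares peak spectral magnitudes over dyadic frequency bands:

```python
import numpy as np

# Peak |FT| over the bands [2,4), [4,8), [8,16) for three signals of
# increasing smoothness. Grid parameters are arbitrary choices.
N, T = 2**14, 64.0
x = (np.arange(N) - N // 2) * (T / N)          # grid on [-T/2, T/2)
signals = {
    "rect (jump)":         (np.abs(x) <= 0.5).astype(float),
    "triangle (C^0)":      np.clip(1 - np.abs(x), 0, None),
    "gaussian (analytic)": np.exp(-np.pi * x**2),
}
xi = np.fft.fftshift(np.fft.fftfreq(N, d=T / N))
for name, f in signals.items():
    F = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f)))) * (T / N)
    peaks = [F[(xi >= a) & (xi < 2 * a)].max() for a in (2, 4, 8)]
    print(f"{name:22s}", [f"{p:.2e}" for p in peaks])
# rect roughly halves per octave (~1/|xi|), the triangle quarters (~1/xi^2),
# and the Gaussian drops straight to the numerical noise floor.
```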
B.4 The Fourier Transform in Different Function Spaces
A complete picture of where the FT lives:
DOMAIN OF DEFINITION - FOURIER TRANSFORM
========================================================================
L1(R)    ->  C0(R)    (bounded, continuous, vanishes at infty)
             ||f̂||_∞ <= ||f||_1

L2(R)    ->  L2(R)    (Plancherel: FT is unitary)
             ||f̂||_2 = ||f||_2

L1 ∩ L2  ->  L2       (both apply; easiest to work with)

S(R)     ->  S(R)     (Schwartz space: FT preserves rapidly
(bijection)            decreasing smooth functions)

S'(R)    ->  S'(R)    (tempered distributions: largest domain;
(bijection)            includes delta, constants, periodic signals)

Key inclusions:  S ⊂ L1 ∩ L2 ⊂ L1 ⊂ S'
                 S ⊂ L1 ∩ L2 ⊂ L2 ⊂ S'

For practical computation: work in S' (distributions),
which unifies all cases under one theory.
========================================================================
B.5 Numerical Computation of the Fourier Transform
In practice, the Fourier Transform is computed numerically via the DFT/FFT (Section 20-03). Here we note the key relationships.
Approximation scheme. To numerically approximate $\hat f(\xi)$ for $f \in L^1(\mathbb{R})$:

- Truncate the signal to $[-T/2, T/2]$ for $T$ large enough (error: $\int_{|x| > T/2} |f(x)|\,dx$)
- Sample at rate $f_s = N/T$ to get $N$ equally-spaced samples $x_n = f(-T/2 + nT/N)$, $n = 0, \dots, N-1$
- Apply the DFT: $X_k = \sum_{n=0}^{N-1} x_n\,e^{-2\pi i kn/N}$
- The result approximates the FT on the grid $\xi_k = k/T$: $\hat f(k/T) \approx \frac{T}{N}(-1)^k X_k$ for $|k| < N/2$ (the $(-1)^k$ phase comes from the grid starting at $-T/2$).
Sources of error:
- Aliasing: Undersampling causes high-frequency components to fold back - reduced by sampling faster (higher )
- Leakage: Truncation at multiplies by a rect function -> convolves spectrum with sinc -> spectral leakage. Reduced by windowing (multiplying by a smooth window before DFT)
- Resolution: The frequency resolution is $\Delta\xi = 1/T$ - the smallest frequency spacing distinguishable in the DFT output. For finer resolution, use longer time windows.
All three error types are direct consequences of the Fourier Transform properties developed in this section. Full treatment in Section 20-03.
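Here is the scheme as a few lines of numpy (a sketch; the function name and parameter values are mine). The `ifftshift` plays the role of the $(-1)^k$ phase factor noted above:

```python
import numpy as np

# Approximate the continuous FT of f on [-T/2, T/2) by an N-point DFT,
# with the T/N amplitude scaling from the Riemann sum.
def continuous_ft(f, T=64.0, N=2**14):
    x = (np.arange(N) - N // 2) * (T / N)        # uniform grid on [-T/2, T/2)
    F = np.fft.fft(np.fft.ifftshift(f(x))) * (T / N)
    xi = np.fft.fftfreq(N, d=T / N)              # frequencies k/T
    return np.fft.fftshift(xi), np.fft.fftshift(F)

# check against a known pair: e^{-a|x|}  ->  2a / (a^2 + 4 pi^2 xi^2)
a = 1.0
xi, F = continuous_ft(lambda x: np.exp(-a * np.abs(x)))
exact = 2 * a / (a**2 + 4 * np.pi**2 * xi**2)
print(np.max(np.abs(F - exact)))   # small: truncation + aliasing + quadrature
```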
Appendix C: Connections, Extensions, and Deep Dives
C.1 The Fourier Transform on Locally Compact Abelian Groups
The Fourier Transform generalizes far beyond $\mathbb{R}$. For any locally compact abelian (LCA) group $G$ with Haar measure $\mu$, there is a Fourier Transform $\hat f(\chi) = \int_G f(g)\,\overline{\chi(g)}\,d\mu(g)$, where $\chi$ ranges over the Pontryagin dual group $\hat G$ (the group of continuous homomorphisms $G \to \mathbb{T}$, the unit circle).
| Group | Dual | Fourier Transform | Name |
|---|---|---|---|
| $\mathbb{R}$ | $\mathbb{R}$ | $\hat f(\xi) = \int f(x)\,e^{-2\pi i x\xi}\,dx$ | Fourier integral |
| $\mathbb{Z}$ | $\mathbb{T}$ | $\hat f(\omega) = \sum_n f[n]\,e^{-i\omega n}$ | Discrete-time FT (DTFT) |
| $\mathbb{T}$ | $\mathbb{Z}$ | $c_k = \frac{1}{2\pi}\int_0^{2\pi} f(\theta)\,e^{-ik\theta}\,d\theta$ | Fourier series |
| $\mathbb{Z}/N\mathbb{Z}$ | $\mathbb{Z}/N\mathbb{Z}$ | $X_k = \sum_{n=0}^{N-1} x_n\,e^{-2\pi i kn/N}$ | Discrete Fourier Transform (DFT) |
| $\mathbb{R}^d$ | $\mathbb{R}^d$ | $\hat f(\xi) = \int f(x)\,e^{-2\pi i x\cdot\xi}\,dx$ | Multidimensional FT |
Plancherel's theorem, the convolution theorem, and the inversion theorem all hold in this general setting. The unification of Fourier series and Fourier Transform as special cases of the same construction on different groups is one of the most elegant results in abstract harmonic analysis.
For AI: The group structure is what makes RoPE work. The circle group $\mathbb{T}$ (the group of complex numbers of modulus 1 under multiplication) is a compact abelian group. Multiplying by $e^{im\theta}$ is the group action of rotation by angle $m\theta$. The relative-position property of RoPE is the statement that the inner product is invariant under the simultaneous group action: it only sees the group element corresponding to relative position, $e^{i(m-n)\theta}$.
C.2 Non-Commutative Fourier Analysis
For non-abelian groups (e.g., $SO(3)$, the permutation groups $S_n$), Pontryagin duality breaks down, but a Fourier Transform still exists, with irreducible unitary representations playing the role of frequencies.
Graph Fourier Transform: For a graph with Laplacian $L = D - A$ (where $D$ is the degree matrix and $A$ the adjacency matrix), the eigenvectors of $L$ play the role of complex exponentials. The graph Fourier Transform is $\hat x = U^\top x$, where $U$ is the matrix of eigenvectors. The eigenvalues $\lambda_i$ represent "graph frequencies" - how rapidly the corresponding eigenvector oscillates across the graph.
This is the foundation of spectral graph neural networks (Bruna et al., 2014; Defferrard et al., 2016, ChebNet; Kipf & Welling, 2017, GCN). A spectral GNN applies a learned filter in graph Fourier space: $y = U\,g_\theta(\Lambda)\,U^\top x$.
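A minimal sketch (my own toy setup on a path graph) shows the analysis-filter-synthesis pattern:

```python
import numpy as np

# Graph Fourier transform on a small path graph.
n = 8
A = np.zeros((n, n)); idx = np.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1           # path graph adjacency
L = np.diag(A.sum(axis=1)) - A                  # combinatorial Laplacian L = D - A
lam, U = np.linalg.eigh(L)                      # graph frequencies, eigenvectors

x = np.sin(2 * np.pi * np.arange(n) / n)        # a smooth signal on the nodes
x_hat = U.T @ x                                 # GFT: analysis
print(np.allclose(x, U @ x_hat))                # True: U is orthogonal

g = (lam < lam.mean()).astype(float)            # a low-pass spectral filter g(lambda)
x_smooth = U @ (g * x_hat)                      # y = U g(Lambda) U^T x
```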
Full treatment in Section 11-04 Spectral Graph Theory.
C.3 The Fourier Transform and Learning Theory
The Fourier Transform connects deeply to several foundational results in learning theory:
Spectral Bias (Rahaman et al., 2019): Neural networks trained by gradient descent learn the Fourier components of the target function in order of increasing frequency. This is because the NTK (Neural Tangent Kernel) has eigenvalues that decay rapidly with frequency - the network converges faster on low-frequency components. Consequence: networks generalize well on smooth targets and may overfit on high-frequency features.
Frequency Principle: Let $f^*$ be the target function and $f_t$ the network at training step $t$. Then $|\hat f^*(\xi) - \hat f_t(\xi)|$ decreases at a rate governed by $\lambda(\xi)$ (the NTK eigenvalue at frequency $\xi$), and $\lambda(\xi)$ decreases with $|\xi|$. High-frequency components therefore converge last.
Implications:
- Data augmentation that adds high-frequency perturbations (e.g., random cropping, noise injection) improves generalization precisely because it forces the network to represent these frequencies.
- Adversarial examples often correspond to high-frequency perturbations of inputs that fool the network but are invisible to humans - the network hasn't fully learned the high-frequency structure.
- Neural networks with spectral inductive bias (e.g., SIREN - sinusoidal activation functions) can represent high-frequency targets more efficiently.
Rademacher Complexity and Fourier Norms: The generalization error of a hypothesis class can be bounded via a Fourier norm such as $\|f\|_{\mathcal F} = \int |\hat f(\xi)|\,d\xi$. Functions with small Fourier norm have low complexity and generalize well. This connects to the theory of Barron functions (Barron, 1993): functions with $C_f = \int |\xi|\,|\hat f(\xi)|\,d\xi < \infty$ can be approximated to $L^2$ error $O(C_f/\sqrt{n})$ by a two-layer neural network with $n$ neurons.
C.4 Phase Retrieval - When Magnitude Alone Is Insufficient
The FT consists of a magnitude $|\hat f(\xi)|$ and a phase $\arg\hat f(\xi)$. Phase retrieval asks: can you recover $f$ from $|\hat f|$ alone (without the phase)?

In general, the answer is no - many different signals can have the same magnitude spectrum. For example, $f(x)$ and any shift $f(x - x_0)$ have the same magnitude spectrum but may look entirely different. More strikingly, phase-randomized images (keeping $|\hat f|$ but using a random $\arg\hat f$) look like noise - the phase carries most of the perceptual information.
Phase is more important than magnitude for perception: A classic experiment: take image $A$ and image $B$, swap their phase spectra (keep $A$'s magnitude, use $B$'s phase), and reconstruct. The result looks like image $B$ - demonstrating that phase dominates perception.
For AI: This has implications for:
- Adversarial robustness: Many adversarial attacks perturb the high-frequency phase of input images - imperceptible to humans but highly disruptive to CNNs.
- Generative models: Diffusion models and GANs must learn both the magnitude and phase of the data distribution's spectrum to generate perceptually realistic images.
- FNet limitation: FNet takes the real part of the FT output, effectively discarding half the phase information. This is one reason FNet performs worse than attention on tasks requiring precise token interactions.
C.5 Fast Algorithms Beyond FFT
The FFT computes the $N$-point DFT in $O(N\log N)$ operations, a vast improvement over the naive $O(N^2)$. Several modern algorithms push further:
Sparse FFT: If the signal has only $k$ significant frequencies, the sparse FFT (Hassanieh et al., 2012; implemented in the MIT sFFT library) computes the DFT in roughly $O(k\log N)$ time (up to logarithmic factors) - sublinear in $N$. Applications: energy-efficient spectrum sensing, MRI acceleration.
Non-Uniform FFT (NUFFT): The standard FFT requires uniformly-spaced samples. The NUFFT handles non-uniform grids with $O(N\log N)$ complexity (plus an accuracy-dependent constant), using oversampling and interpolation. Used in MRI reconstruction (non-Cartesian k-space trajectories) and radio astronomy.
Butterfly factorization (Monarch matrices): The DFT matrix can be written as a product of $O(\log N)$ sparse matrices (butterfly factors), each applicable in $O(N)$ operations - this is exactly the FFT. Monarch matrices (Dao et al., 2022) generalize this structure for learned efficient linear layers: they parameterize a rich class of structured matrices with sub-quadratic parameter count and apply time, interpolating between dense matrices ($O(N^2)$) and diagonal ones ($O(N)$).
The FFT algorithm and its applications - Cooley-Tukey, Bluestein, Rader - are covered in full in Section 20-03 DFT and FFT.
C.6 Summary: The Central Theorems of the Fourier Transform
This section has developed the following central results, in order of importance:
1. Definition and well-posedness (Section 2): $\hat f(\xi) = \int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x\xi}\,dx$.
Well-defined, bounded, continuous for $f \in L^1$. Riemann-Lebesgue: $\hat f(\xi) \to 0$ as $|\xi| \to \infty$.
2. Properties (Section 3): Linearity, shift, modulation, scaling, time reversal, differentiation, convolution-multiplication duality. The Master Properties Table encodes all relationships between time and frequency operations.
3. Inversion (Section 4): Under sufficient conditions, $f(x) = \int_{-\infty}^{\infty} \hat f(\xi)\,e^{2\pi i x\xi}\,d\xi$. The Gauss-Weierstrass regularization gives a constructive proof.
4. Plancherel (Section 5): $\|f\|_2 = \|\hat f\|_2$. The FT is a unitary isometry on $L^2$ - energy is preserved.
5. Uncertainty Principle (Section 6): $\sigma_x\,\sigma_\xi \ge \frac{1}{4\pi}$, with equality iff $f$ is a Gaussian. Time-frequency concentration is fundamentally limited.
6. Distribution extension (Section 7): The FT extends to tempered distributions, giving $\hat\delta = 1$, $\hat 1 = \delta$, $\widehat{e^{2\pi i\xi_0 x}} = \delta(\xi - \xi_0)$, and the Dirac comb formula. Poisson summation unifies Fourier series and Fourier Transform.
These six results form a complete, self-consistent theory that has shaped mathematics, physics, engineering, and AI for two centuries - and shows no sign of becoming less central.
Appendix D: Machine Learning Deep Dives
D.1 FNet Architecture - Full Mathematical Analysis
We give a complete mathematical analysis of the FNet architecture (Lee-Thorp et al., 2022) using the formalism developed in this section.
Standard Transformer token mixing (attention):
Given token matrix $X \in \mathbb{R}^{n\times d}$ (sequence length $n$, embedding dimension $d$):

$$\mathrm{Attention}(X) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V, \qquad Q = XW_Q,\; K = XW_K,\; V = XW_V.$$

This is $O(n^2)$ in memory and $O(n^2 d)$ in time (the $n\times n$ attention matrix dominates for large $n$). The attention matrix is data-dependent: each row is a different weighted sum of the value vectors, where the weights depend on the specific input.
FNet token mixing:
$$Y = \Re\big(\mathcal{F}_{\mathrm{seq}}(\mathcal{F}_{\mathrm{hidden}}(X))\big),$$

where $\mathcal{F}_{\mathrm{seq}}$ applies the DFT along the sequence dimension (rows) and $\mathcal{F}_{\mathrm{hidden}}$ along the embedding dimension (columns).
The DFT as a matrix multiplication:
The 1D DFT can be written as $\hat x = Fx$, where $F_{jk} = \frac{1}{\sqrt N}\,\omega^{jk}$ with $\omega = e^{-2\pi i/N}$ (unitary DFT matrix). The 2D FNet mixing is:

$$Y = \Re(F_n\, X\, F_d),$$

where $F_n \in \mathbb{C}^{n\times n}$ and $F_d \in \mathbb{C}^{d\times d}$ are DFT matrices.
Key properties:

- Unitarity: $F_n^* F_n = I_n$ and $F_d^* F_d = I_d$. The FNet mixing is an isometry (before taking the real part): $\|F_n X F_d\|_F = \|X\|_F$ by Plancherel.
- Global mixing: Each output position depends on all input values. FNet is a global mixer - exactly like attention.
- No parameters: Unlike attention, FNet has zero parameters in the mixing layer ($F_n$ and $F_d$ are fixed). All learning happens in the feed-forward network after mixing.
- Complexity: $O(nd(\log n + \log d))$ via the 2D FFT, vs $O(n^2 d)$ for attention. For BERT-scale shapes (e.g., $n = 512$, $d = 768$), the mixing layer needs over an order of magnitude fewer operations.
Why take the real part? The DFT output is complex even for real inputs. Taking $\Re$ projects back to real numbers (required since the subsequent feed-forward layers expect real inputs). This discards the imaginary part - half the phase information. This is one reason FNet is weaker than attention: it cannot separately process magnitude and phase information.
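The whole mixing sublayer is a few lines of numpy (an illustrative sketch with assumed shapes, not the authors' code):

```python
import numpy as np

# FNet token mixing: 2D DFT over hidden then sequence dims, keep the real part.
def fnet_mixing(X):
    # X: (n, d) real token matrix
    return np.real(np.fft.fft(np.fft.fft(X, axis=1), axis=0))

n, d = 512, 768
X = np.random.randn(n, d)
print(fnet_mixing(X).shape)             # (512, 768); zero learned parameters

# Plancherel check (before the real-part projection, with unitary scaling):
Z = np.fft.fft2(X, norm="ortho")
print(np.isclose(np.linalg.norm(Z), np.linalg.norm(X)))   # True: isometry
```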
Theoretical analysis: The FNet paper shows that attention with random (untrained) weights performs similarly to FNet. This suggests the attention mechanism's power comes primarily from its (trained) data-dependent mixing, not merely from being a global mixer. FNet is the "global but data-independent" baseline.
When to use FNet vs Attention:
| Task type | FNet | Attention | Reason |
|---|---|---|---|
| Sentence classification | ~92-97% of BERT accuracy | 100% (baseline) | Global mixing is sufficient |
| Named entity recognition | ~90% | 100% | Moderate position sensitivity |
| Extractive QA (SQuAD) | ~80% | 100% | Requires precise token matching |
| Speed-critical inference | ~7× faster | baseline | FNet wins when the quality tradeoff is acceptable |
D.2 Random Fourier Features - Implementation Details
Setup for RBF kernel: $k(x, y) = e^{-\|x - y\|^2/(2\sigma^2)}$.

The spectral density is $p(\omega) = \mathcal{N}(0, \sigma^{-2} I_d)$.

(With $\sigma = 1$: $p(\omega) = (2\pi)^{-d/2}\,e^{-\|\omega\|^2/2}$, which is the standard form.)
Feature map construction (two variants):
Random Fourier Features (RFF): Sample $\omega_i \sim p(\omega)$, $b_i \sim \mathrm{Unif}[0, 2\pi]$:

$$\phi_i(x) = \sqrt{2/D}\,\cos(\omega_i^\top x + b_i), \qquad i = 1, \dots, D.$$

Random Fourier Features (paired, no bias): Sample $\omega_i \sim p(\omega)$:

$$\phi(x) = \frac{1}{\sqrt D}\big[\cos(\omega_1^\top x),\, \sin(\omega_1^\top x),\, \dots,\, \cos(\omega_D^\top x),\, \sin(\omega_D^\top x)\big].$$

Both give $\mathbb{E}\big[\phi(x)^\top\phi(y)\big] = k(x, y)$. The paired version has slightly lower variance.
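A short sketch (variable names mine) of the paired estimator for the RBF kernel with $\sigma = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000                          # input dim, number of frequency samples
omega = rng.standard_normal((D, d))     # omega_i ~ N(0, I): the spectral density

def phi(x):
    # paired features: [cos(omega x), sin(omega x)] / sqrt(D)
    proj = omega @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

x, y = rng.standard_normal(d), rng.standard_normal(d)
k_exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)   # RBF kernel, sigma = 1
print(k_exact, phi(x) @ phi(y))         # close; error shrinks like 1/sqrt(D)
```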
Concentration bound (Rahimi & Recht, 2007): for a compact set $\mathcal M \subset \mathbb{R}^d$,

$$\Pr\Big[\sup_{x,y\in\mathcal M}\big|\phi(x)^\top\phi(y) - k(x,y)\big| \ge \epsilon\Big] \le 2^8\left(\frac{\sigma_p\,\mathrm{diam}(\mathcal M)}{\epsilon}\right)^2 \exp\!\left(-\frac{D\epsilon^2}{4(d+2)}\right),$$

where $\sigma_p^2 = \mathbb{E}_{\omega\sim p}\|\omega\|^2$ and $\mathrm{diam}(\mathcal M)$ is the diameter of $\mathcal M$.
Variance reduction via Quasi-Monte Carlo: Instead of random $\omega_i$, use quasi-random sequences (Sobol, Halton) to improve coverage of frequency space. Gives error $O((\log D)^d/D)$ vs $O(1/\sqrt D)$ for random sampling - a significant improvement in practice.
Orthogonal Random Features (ORF) (Yu et al., 2016): Sample the frequencies as rows of $S\,H D_1 H D_2 H D_3$, where $H$ is a Hadamard matrix, the $D_i$ are diagonal with random $\pm 1$ entries, and $S$ is a diagonal scaling. This gives:

- Same unbiasedness: $\mathbb{E}\big[\phi(x)^\top\phi(y)\big] = k(x, y)$
- Lower variance: the same $O(1/D)$ rate as iid RFF, but with smaller constants
- Faster computation: $O(D\log d)$ via the fast Hadamard transform, vs $O(Dd)$ for a dense Gaussian projection
Connection to Performer: Performer attention (Choromanski et al., 2021) approximates $\exp(q^\top k)$ (the unnormalized attention softmax kernel) using positive orthogonal random features:

$$\exp(q^\top k) = \mathbb{E}_{\omega\sim\mathcal N(0, I_d)}\big[e^{\omega^\top q - \|q\|^2/2}\; e^{\omega^\top k - \|k\|^2/2}\big].$$

This reduces attention from $O(n^2 d)$ to $O(nrd)$ with $r$ random features - linear in sequence length $n$.
D.3 Spectral Analysis of Neural Network Weight Matrices
The spectral norm $\|W\|_2$ and the spectral distribution (the eigenvalues of $W^\top W$) of weight matrices encode important information about the training state of a neural network.
Heavy-tailed self-regularization (Martin & Mahoney, 2021): In well-trained neural networks, the eigenvalue distribution of $W^\top W$ follows a power law $\rho(\lambda) \sim \lambda^{-\alpha}$, typically with $\alpha \in [2, 4]$. Under-trained or over-regularized networks have bulk (Marchenko-Pastur) distributions instead. This is detected by fitting a power law to the sorted singular values.
WeightWatcher: The open-source tool weightwatcher analyzes the spectral distribution of weight matrices in PyTorch/TensorFlow models and reports $\alpha$ values as a proxy for model quality - without needing test data. Reported analyses of well-trained open models (e.g., LLaMA) find $\alpha \approx 2$-$4$ across layers; poorly trained models show $\alpha$ well outside this range or non-power-law distributions.
Spectral norm and Lipschitz constant: For a multi-layer network $f = \sigma_L \circ W_L \circ \cdots \circ \sigma_1 \circ W_1$, the Lipschitz constant satisfies:

$$\mathrm{Lip}(f) \le \prod_{\ell=1}^{L} \mathrm{Lip}(\sigma_\ell)\,\|W_\ell\|_2.$$

For 1-Lipschitz activations ($\mathrm{ReLU}$, $\tanh$): $\mathrm{Lip}(f) \le \prod_\ell \|W_\ell\|_2$.
Spectral normalization (Section 8.3) rescales each layer to $W_\ell/\|W_\ell\|_2$, so every factor equals 1 and the entire network is 1-Lipschitz.
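Spectral normalization typically estimates $\|W\|_2$ with a few steps of power iteration. Here is a self-contained numpy sketch (not any particular library's API):

```python
import numpy as np

# Power iteration for the largest singular value sigma_max(W).
def spectral_norm(W, n_iter=50):
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u; v /= np.linalg.norm(v)     # right singular direction
        u = W @ v;  u /= np.linalg.norm(u)      # left singular direction
    return u @ W @ v                            # Rayleigh quotient = sigma_max

W = np.random.default_rng(1).standard_normal((256, 128))
print(spectral_norm(W), np.linalg.svd(W, compute_uv=False)[0])  # agree

W_sn = W / spectral_norm(W)                     # spectrally normalized layer
```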
For LLMs: Analyzing the spectral properties of attention weight matrices ($W_Q$, $W_K$, $W_V$, $W_O$) is an active area of mechanistic interpretability research. Low-rank structure in these matrices (small effective rank, i.e. few large singular values) indicates specialized computation. Anthropic's interpretability work has found "singular value signatures" of specific computational patterns in attention heads.
D.4 The Fourier Perspective on Attention
Attention and the Fourier Transform are both global linear mixing operations on sequences. Understanding their relationship clarifies when each is appropriate.
Attention as a data-dependent filter: The output of attention at position $i$ can be written as:

$$\mathrm{out}_i = \sum_j \alpha_{ij}\,v_j, \qquad \alpha_{ij} = \frac{\exp(q_i^\top k_j/\sqrt{d_k})}{\sum_l \exp(q_i^\top k_l/\sqrt{d_k})}.$$

For a fixed query $q_i$, this is a weighted average of value vectors with data-dependent weights $\alpha_{ij}$. In signal processing terms, it is a position-dependent filter: the filter coefficients change for each output position $i$.
Fourier Transform as a fixed filter bank: FNet applies the DFT, which is a fixed (non-data-dependent) filter: the output at frequency $k$ is always $\sum_n x_n\,e^{-2\pi i kn/N}$, regardless of the content of $x$. Each DFT frequency is a specific pattern of oscillation at rate $k/N$.
Comparison table:
| Property | Self-Attention | FNet (DFT) |
|---|---|---|
| Filter type | Data-dependent | Fixed |
| Parameters in mixer | $3d^2$ ($W_Q$, $W_K$, $W_V$) | 0 |
| Computational complexity | $O(n^2 d)$ | $O(nd\log n)$ |
| Expressive power | High (can select specific tokens) | Low (global uniform mixing) |
| Position awareness | Via PE or RoPE | Via DFT phases |
| Best for | Selective retrieval tasks | Global mixing tasks |
| Example success | Machine translation, QA | Text classification, embedding |
The analogy: Attention is like a learnable short-time Fourier Transform with adaptive windows - for each output position, it selects which "frequencies" (patterns) to attend to. The DFT in FNet is like a fixed global frequency analysis with no adaptivity. The power of attention comes from its ability to compute different effective filters for different positions and different inputs.
Appendix E: Reference Summary and Quick Checks
E.1 The Six Most Important Facts
If you retain only six things from this section, retain these:
1. Definition: $\hat f(\xi) = \int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x\xi}\,dx$ (use the $2\pi$-convention; no $2\pi$ prefactor in the inverse).
2. Self-dual Gaussian: $\widehat{e^{-\pi x^2}} = e^{-\pi\xi^2}$. The Gaussian maps to itself. All other transforms can be derived from this one via properties.
3. Plancherel: $\|f\|_2 = \|\hat f\|_2$. Energy is conserved. The FT is a unitary operator on $L^2$.
4. Uncertainty principle: $\sigma_x\,\sigma_\xi \ge \frac{1}{4\pi}$, equality iff Gaussian. You cannot have both time and frequency concentration.
5. Distributions: $\hat\delta = 1$, $\hat 1 = \delta$, $\widehat{e^{2\pi i\xi_0 x}} = \delta(\xi - \xi_0)$. Pure tones map to spikes. Spikes have flat spectra.
6. AI: The FT is used in RoPE (modulation theorem), FNet (global mixing via DFT), RFF (Bochner + spectral sampling), spectral normalization (spectral norm of weight matrices), and FNO (learn in Fourier space).
E.2 Quick Reference: Which Theorem to Use When
| Problem | Theorem/Tool |
|---|---|
| Compute FT of $e^{-a\lvert t\rvert}$ | Direct split integral (Worked Example 3) or table pair #3 |
| Compute FT of $f(t - t_0)\,e^{2\pi i\xi_0 t}$ | Time-shift + modulation properties |
| Show $\int\lvert f\rvert^2 = \int\lvert\hat f\rvert^2$ | Plancherel's theorem |
| Find $\int_{-\infty}^{\infty}\mathrm{sinc}^2(\xi)\,d\xi$ | Parseval applied to rect |
| Show $\hat f(\xi) \to 0$ as $\lvert\xi\rvert \to \infty$ | Riemann-Lebesgue lemma |
| Compute FT of $f'(t)$ | Differentiation property: multiply by $2\pi i\xi$ |
| Compute FT of $f(t)\cos(2\pi\xi_0 t)$ | Modulation theorem or distribution: two delta spikes |
| Verify $f$ recoverable from $\hat f$ | Inversion theorem (check $\hat f \in L^1$ or use $L^2$ theory) |
| Lower bound on signal bandwidth | Uncertainty principle: $\sigma_\xi \ge \frac{1}{4\pi\sigma_x}$ |
| FT of a periodic signal | Distribute over the Fourier series: $\sum_k c_k\,\delta(\xi - k/T)$ |
| FT of $f(at)$ (rescaled signal) | Scaling theorem: $(1/\lvert a\rvert)\,\hat f(\xi/a)$ |
| Show a kernel is PD | Bochner: show the spectral density is $\ge 0$ |
E.3 Sign Convention Reference
Different sources use different sign conventions. This table gives the conversion formulas.
| Convention | Forward FT | Inverse FT | $e^{-\pi x^2}$ maps to |
|---|---|---|---|
| $2\pi$ in the exponent (this section) | $\int f(x)\,e^{-2\pi i x\xi}\,dx$ | $\int \hat f(\xi)\,e^{2\pi i x\xi}\,d\xi$ | $e^{-\pi\xi^2}$ |
| $\omega$, no $2\pi$ | $\int f(t)\,e^{-i\omega t}\,dt$ | $\frac{1}{2\pi}\int \hat f(\omega)\,e^{i\omega t}\,d\omega$ | $e^{-\omega^2/4\pi}$ |
| $\omega$, symmetric | $\frac{1}{\sqrt{2\pi}}\int f(t)\,e^{-i\omega t}\,dt$ | $\frac{1}{\sqrt{2\pi}}\int \hat f(\omega)\,e^{i\omega t}\,d\omega$ | $\frac{1}{\sqrt{2\pi}}\,e^{-\omega^2/4\pi}$ |

Conversion: If $F_{2\pi}(\xi)$ is the transform in the $2\pi$-convention, then $F_\omega(\omega) = F_{2\pi}(\omega/2\pi)$ for the non-symmetric $\omega$-convention.
Properties in the $\omega$-convention (no $2\pi$ in the exponent):

- Differentiation: $\widehat{f'}(\omega) = i\omega\,\hat f(\omega)$ (not $2\pi i\xi$)
- Time-shift: $\widehat{f(t - t_0)}(\omega) = e^{-i\omega t_0}\,\hat f(\omega)$ (not $e^{-2\pi i t_0\xi}$)
- Parseval: $\int |f|^2\,dt = \frac{1}{2\pi}\int |\hat f|^2\,d\omega$ (extra $\frac{1}{2\pi}$ factor!)
The $2\pi$-convention is self-symmetric and avoids all these factors - strongly recommended for all new work.
E.4 Prerequisites Revisited: What You Now Know
This section assumed you knew Fourier series (Section 01) and functional analysis basics. Here is what each prerequisite contributed:
| Prerequisite | How it was used |
|---|---|
| Complex exponentials $e^{i\theta}$ | The basis functions of the FT; the Euler formula $e^{i\theta} = \cos\theta + i\sin\theta$ is used constantly |
| $L^2$ inner product | Plancherel's theorem is the FT version of Parseval from Fourier series |
| Orthonormality of $\{e^{2\pi i n x}\}$ | Generalized: the FT is the "orthonormal expansion" in the continuous limit |
| Parseval's identity (series) | Generalized to Plancherel's theorem in Section 5 |
| Gibbs phenomenon | Reinterpreted as spectral leakage - the frequency-domain manifestation of truncation |
| Dirichlet conditions | Sufficient conditions for pointwise inversion (Section 4.1) |
| Integration / IBP | Proofs of differentiation theorem, inversion via approximate identity |
| Convergence of series | $L^2$ convergence in the Plancherel proof; distributional convergence in Section 7 |
E.5 Glossary
| Term | Definition |
|---|---|
| Fourier Transform | $\hat f(\xi) = \int f(x)\,e^{-2\pi i x\xi}\,dx$; maps time-domain functions to frequency-domain functions |
| Spectrum | The function $\hat f(\xi)$; describes which frequencies are present and at what amplitude/phase |
| Power spectrum | $\lvert\hat f(\xi)\rvert^2$; energy density per unit frequency |
| Plancherel theorem | $\|f\|_2 = \|\hat f\|_2$; FT is a unitary isometry on $L^2$ |
| Parseval's relation | $\langle f, g\rangle = \langle\hat f, \hat g\rangle$; the FT preserves inner products |
| Uncertainty principle | $\sigma_x\sigma_\xi \ge \frac{1}{4\pi}$; time and frequency spread cannot simultaneously be small |
| Riemann-Lebesgue lemma | $\hat f(\xi) \to 0$ as $\lvert\xi\rvert \to \infty$ for $f \in L^1$ |
| Dirac delta | Distributional unit impulse; $\int \delta(x)\,f(x)\,dx = f(0)$; FT is the constant 1 |
| Tempered distribution | Continuous linear functional on Schwartz space; includes all $L^p$ functions, $\delta$, and more |
| Schwartz space | Smooth functions decaying faster than any polynomial; FT maps $\mathcal S$ to $\mathcal S$ bijectively |
| Dirac comb | $\Sha_T(x) = \sum_n \delta(x - nT)$; its FT is another comb with spacing $1/T$ |
| Poisson summation | $\sum_n f(n) = \sum_k \hat f(k)$; connects Fourier series and FT |
| Modulation theorem | $\widehat{e^{2\pi i\xi_0 x} f(x)} = \hat f(\xi - \xi_0)$; multiplication by a tone = frequency shift |
| Sinc function | $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$; FT of the rectangular pulse; ideal low-pass filter kernel |
| Spectral norm | $\|W\|_2 = \sigma_{\max}(W)$; largest singular value; Lipschitz constant of the linear map |
| RFF | Random Fourier Features; $k(x,y) \approx \phi(x)^\top\phi(y)$ via frequencies sampled from the spectral density |
| STFT | Short-Time Fourier Transform; localizes FT analysis to a time window; gives the spectrogram |
| FNet | Transformer variant replacing attention with 2D DFT; $\Re(\mathcal F_{\mathrm{seq}}(\mathcal F_{\mathrm{hidden}}(X)))$ mixing |
| FNO | Fourier Neural Operator; learns PDE solutions by parameterizing the spectral operator |
| RoPE | Rotary Position Embedding; encodes position $m$ as rotation by $m\theta_i$ in the complex plane |
Appendix F: The Fourier Transform in Practice - Case Studies
F.1 Case Study: Audio Preprocessing for Whisper
OpenAI's Whisper (Radford et al., 2022) is a speech recognition model that accepts raw audio and produces text. Its preprocessing pipeline is a direct application of STFT and spectral analysis.
Step 1: Resampling. Raw audio is resampled to 16kHz. By Shannon's sampling theorem (Poisson summation): a signal sampled at 16kHz can faithfully represent frequencies up to 8kHz - which covers all speech frequencies (human voice: 80Hz-8kHz, formants: 200Hz-4kHz).
Step 2: STFT. Apply a windowed Fourier Transform with:
- Window: Hann window (smooth tapering reduces spectral leakage)
- Window length: 25ms × 16kHz = 400 samples
- Hop length: 10ms × 16kHz = 160 samples (60% overlap)
- FFT size: 512 points (zero-padded for frequency resolution)
This gives a spectrogram of shape $257 \times T_{\text{frames}}$, where $257 = 512/2 + 1$ is the number of unique (non-negative) frequencies.
Step 3: Mel filterbank. Map the 257 linear frequencies to 80 mel-scale frequency bins using triangular filters. The mel scale is perceptually uniform - equal mel intervals are equally distinguishable to humans. The uncertainty principle is not limiting here: the 25ms window is longer than the shortest perceptible phoneme cues (~10-20ms), so a little time resolution is sacrificed for good frequency resolution.
Step 4: Log compression. Take $\log_{10}(S_{\text{mel}} + \epsilon)$. The log operation compresses the dynamic range of audio energy (a ~120 dB energy range becomes a span of ~12 log-units).
Result: log-mel spectrogram, treated as a 2D "image" and fed to a CNN + Transformer encoder. The entire preprocessing is fixed (not learned) - it embeds centuries of signal processing knowledge into the model architecture.
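Steps 2-4 are easy to sketch in plain numpy (an illustration with a simplified triangular mel filterbank, not Whisper's exact implementation; parameter values follow the list above):

```python
import numpy as np

sr, n_fft, win, hop, n_mels = 16000, 512, 400, 160, 80
audio = np.random.randn(sr)                       # 1 s of dummy audio
window = np.hanning(win)

frames = [audio[s:s + win] * window
          for s in range(0, len(audio) - win, hop)]
spec = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2  # (T, 257) power spectrogram

# triangular mel filterbank: 80 filters between 0 Hz and sr/2
def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
fb = np.zeros((n_mels, n_fft // 2 + 1))
for m in range(1, n_mels + 1):
    l, c, r = bins[m - 1], bins[m], bins[m + 1]
    fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
    fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge

log_mel = np.log10(spec @ fb.T + 1e-10)           # (T, 80) log-mel spectrogram
print(log_mel.shape)
```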
Fourier analysis used:
- Fourier Transform -> STFT window analysis
- Uncertainty principle -> window length tradeoff
- Sampling theorem -> 16kHz minimum rate
- Spectral leakage -> Hann window choice
F.2 Case Study: LLaMA-3 Context Extension via RoPE Scaling
LLaMA-3 uses RoPE with base $\theta = 500{,}000$ and supports up to 128K tokens of context (compared to LLaMA-2's 4K with $\theta = 10{,}000$).
The problem: Standard RoPE with base $\theta$ uses frequencies $\theta_i = \theta^{-2i/d}$, ranging from $1$ (dimension pair 0) down to $\theta^{-(d-2)/d} \approx 1/\theta$ (last dimension pair). At frequency $\theta_i$, the RoPE encoding completes one full rotation every $2\pi/\theta_i$ tokens. The maximum distinguishable context is set by the longest period, on the order of $2\pi\theta$ tokens - about $63\mathrm{K}$ for $\theta = 10{,}000$ - which helps explain why LLaMA-2 degrades beyond its ~4K training window (the exact effective limit also depends on $d$ and the training context length).
The Fourier interpretation: Each dimension pair $i$ is an oscillator at frequency $\theta_i$. For two positions $m$ and $n$ to be distinguishable, their phase difference $\theta_i(m - n)$ must differ from $0$ mod $2\pi$ for at least one $i$. If $|m - n|$ exceeds every period $2\pi/\theta_i$, all dimensions have wrapped around - the positions become effectively indistinguishable. This is aliasing in the position domain, caused by every frequency being too high for the separation.
The fix: Reduce the minimum frequency by increasing the base:

$$\theta_i = \theta^{-2i/d}, \qquad \theta:\; 10{,}000 \longrightarrow 500{,}000.$$

LLaMA-3 uses $\theta = 500{,}000$, giving a longest period of roughly $2\pi\theta \approx 3.1\mathrm{M}$ tokens - more than enough for 128K context.
The cost: Higher $\theta$ means the low-frequency dimensions ($i$ large) oscillate very slowly, providing little positional information for short contexts. There is a fundamental tradeoff between context length and short-range positional precision - a manifestation of the uncertainty principle in the discrete position domain.
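The period arithmetic is a one-liner to check (an illustrative sketch; $d = 128$ is an assumed rotary dimension):

```python
import numpy as np

# Shortest and longest RoPE rotation periods as a function of the base theta.
def rope_periods(base, d=128):
    freqs = base ** (-2 * np.arange(d // 2) / d)   # theta_i = base^(-2i/d)
    return 2 * np.pi / freqs                       # tokens per full rotation

for base in (10_000, 500_000):
    p = rope_periods(base)
    print(f"base={base:>7,}: shortest period {p[0]:.1f}, "
          f"longest {p[-1]:,.0f} tokens")
# The longest period scales like 2*pi*base^((d-2)/d): tens of thousands of
# tokens at base 10,000, millions at base 500,000.
```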
YaRN (Peng et al., 2023): Instead of uniformly rescaling all frequencies, YaRN uses a non-uniform rescaling: high-frequency dimensions (small $i$) are scaled minimally (they already handle short-range position well), while low-frequency dimensions (large $i$) are scaled aggressively. The interpolation factor is applied per frequency, weighted by the needed extension. This maintains short-range precision while extending long-range capacity - a principled application of the time-frequency tradeoff.
F.3 Case Study: Spectral Normalization in Stable Diffusion
Modern diffusion models (Stable Diffusion, DALL-E 3, Flux) train a discriminator or guidance network to distinguish real from generated data. Spectral normalization (Section 8.3) is used to stabilize this training.
Setup: The discriminator has convolutional layers with weights $W \in \mathbb{R}^{c_{\text{out}}\times c_{\text{in}}\times k\times k}$. For a convolution layer, the relevant spectral norm is that of the induced linear operator:

$$\|W\|_2 = \sup_{x \ne 0} \frac{\|W * x\|_2}{\|x\|_2}.$$

For a single-channel convolution with kernel $h$ and periodic boundary conditions, the spectral norm equals $\max_\xi |\hat h(\xi)|$ - the maximum modulus of the 2D Fourier Transform of the kernel, because the DFT diagonalizes circular convolution. This connects spectral normalization directly to the FT: normalizing by the spectral norm makes the filter's frequency response bounded by 1 at every frequency.
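A small numpy check (my own sketch, under the single-channel, periodic-boundary assumption stated above) computes the spectral norm both ways:

```python
import numpy as np

# Spectral norm of a circular 2D convolution = max |FFT2(kernel)|.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))
H = np.fft.fft2(kernel, s=(64, 64))              # frequency response on a 64x64 grid
sigma_fft = np.abs(H).max()

# cross-check by power iteration on W^T W, applying W via the FFT
x = rng.standard_normal((64, 64))
for _ in range(200):
    Wx = np.fft.ifft2(np.fft.fft2(x) * H)                    # W x
    x = np.real(np.fft.ifft2(np.fft.fft2(Wx) * np.conj(H)))  # W^T W x
    x /= np.linalg.norm(x)
sigma_pi = np.linalg.norm(np.fft.ifft2(np.fft.fft2(x) * H))
print(sigma_fft, sigma_pi)                       # the two estimates agree
```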
Effect on generated images: A spectrally-normalized discriminator applies Lipschitz constraints that prevent it from exploiting arbitrary high-frequency artifacts in generated images. This forces the generator to eliminate high-frequency artifacts (checkerboard patterns from transposed convolutions, aliasing from bilinear upsampling) - improving sample quality.
Alias-free GAN / StyleGAN3 (Karras et al., 2021): Takes spectral control further by enforcing alias-free processing throughout the generator using carefully designed low-pass filters at every upsampling step. The key insight from Fourier analysis: any nonlinearity applied to a bandlimited signal creates harmonics beyond the original bandwidth - these must be filtered out before downsampling to avoid aliasing. StyleGAN3's architecture enforces this principle throughout, producing significantly more temporally consistent video outputs.