This short course provides a rigorous overview of the current state-of-the-art in generative modeling, transitioning from foundational adversarial techniques to modern diffusion and flow-based paradigms.
Designed for senior undergraduate and graduate students, the curriculum balances theoretical derivation (SDEs, ODEs, Flow Matching) with practical architectural implementation (Diffusion Transformers, LoRA, ControlNet). The course concludes with an exploration of frontier applications in engineering sciences and the ethical implications of synthetic media.
Introduction and Motivation: why study generative models?; Taxonomy of Generative Models (Implicit vs. Explicit); Likelihood Maximization.
Adversarial Learning Dynamics (GAN Min-Max Objective), WGAN, the StyleGAN Paradigm (Disentanglement, AdaIN), and GAN Applications. Introduction to Diffusion Models.
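To make the min-max objective concrete, here is a minimal pure-Python sketch of the two GAN losses covered in this session: the discriminator's binary cross-entropy and the non-saturating generator loss. The probability values are hypothetical discriminator outputs chosen for illustration, not outputs of a trained model.

```python
import math

def bce(p, label):
    # Binary cross-entropy of a single probability p against a label in {0, 1}.
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def discriminator_loss(d_real, d_fake):
    # D maximizes log D(x) + log(1 - D(G(z))): equivalently, it minimizes the
    # BCE that pushes real samples toward 1 and generated samples toward 0.
    return bce(d_real, 1) + bce(d_fake, 0)

def generator_loss_nonsaturating(d_fake):
    # Non-saturating trick: instead of minimizing log(1 - D(G(z))),
    # G maximizes log D(G(z)), which keeps gradients large early in training.
    return bce(d_fake, 1)

# Hypothetical scores: D is confident (0.9 on real, 0.1 on fake),
# so D's loss is small while G's loss is large, driving G to improve.
print(discriminator_loss(0.9, 0.1))
print(generator_loss_nonsaturating(0.1))
```

The non-saturating variant matters because when D confidently rejects fakes, the original log(1 - D(G(z))) term is nearly flat and provides almost no gradient to G.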
Auto-encoders, Diffusion Models, and the mathematical derivation of the DDPM / DDIM objectives.
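The DDPM derivation hinges on the closed-form forward process x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps, which lets the model be trained by predicting the added noise at a random timestep. A minimal pure-Python sketch, assuming the linear beta schedule from the original DDPM setup (beta from 1e-4 to 0.02 over T = 1000 steps); the scalar x0 stands in for an image pixel:

```python
import math
import random

def alpha_bar(t, T=1000, beta_start=1e-4, beta_end=0.02):
    # Cumulative product of (1 - beta_s) up to step t for a linear schedule.
    prod = 1.0
    for s in range(t):
        beta = beta_start + (beta_end - beta_start) * s / (T - 1)
        prod *= 1.0 - beta
    return prod

def q_sample(x0, t, eps):
    # Closed-form forward noising: x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps,
    # so any timestep can be sampled directly without simulating the chain.
    ab = alpha_bar(t)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

eps = random.gauss(0.0, 1.0)
print(q_sample(1.0, 0, eps))      # t = 0: exactly x0, no noise yet
print(q_sample(1.0, 1000, eps))   # t = T: dominated by the Gaussian noise
```

The simplified training objective then reduces to a regression on this noise, E ||eps - eps_theta(x_t, t)||^2, which the session derives from the variational bound.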
Part A: Parameter-Efficient Fine-Tuning (LoRA), Structural Conditioning (ControlNet), and Zero-Shot Adapters (IP-Adapter). Part B: Math Preliminaries of Flow Matching (ODEs, Vector Fields, and Probability Paths).
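The core idea of LoRA from Part A can be sketched in a few lines: the frozen weight W is augmented with a trainable low-rank product, y = W x + (alpha / r) B A x, so only r * (d_in + d_out) parameters are updated. The matrices below are hypothetical toy values, not weights from any real model; a minimal sketch using list-of-lists linear algebra:

```python
def matvec(W, x):
    # y = W x for a list-of-lists matrix and a flat list vector.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def lora_forward(x, W, A, B, alpha=8, r=2):
    # Frozen base output plus the scaled low-rank update (alpha/r) * B (A x).
    # A is r x d_in, B is d_out x r; only A and B receive gradients.
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

W = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]      # frozen 3x3 base weight
A = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]]     # 2x3, trained, init small
B = [[0.1, 0.0], [0.0, 0.1], [0.0, 0.0]]   # 3x2, trained (init 0 in practice)
print(lora_forward([1.0, 2.0, 3.0], W, A, B))
```

Because B is initialized to zero in practice, fine-tuning starts exactly at the pretrained model, and the update B A can later be merged into W at no inference cost.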
Part A: Rectified Flows, Conditional Flow Matching, and Reflow procedures. Part B: Scaling Laws, Diffusion Transformers (DiT), Patchification, and Adaptive Layer Norm (AdaLN). Case Studies: Flux, SD3.
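The rectified-flow material in Part A reduces to a simple recipe: pair noise x0 with data x1, define the straight-line path x_t = (1 - t) x0 + t x1, and regress a velocity field onto its constant derivative u_t = x1 - x0; sampling then integrates the learned ODE. A minimal scalar sketch, assuming access to the exact conditional velocity (in training this target is what the network fits):

```python
def cfm_target(x0, x1):
    # For the straight-line (rectified) path x_t = (1 - t) x0 + t x1,
    # the conditional flow-matching velocity target is constant: x1 - x0.
    return x1 - x0

def euler_sample(x0, velocity_field, steps=10):
    # Integrate dx/dt = v(x, t) from t = 0 to t = 1 with explicit Euler.
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_field(x, t)
    return x

# With a perfectly straight field, Euler reaches x1 even in a single step --
# the intuition behind Reflow: straighter paths need fewer sampling steps.
x0, x1 = -1.0, 2.0
v = lambda x, t: cfm_target(x0, x1)
print(euler_sample(x0, v, steps=1))
print(euler_sample(x0, v, steps=10))
```

Real velocity fields are only approximately straight, which is why Reflow re-trains on the model's own (x0, x1) couplings to straighten trajectories and cut the step count.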
Engineering: Biomedical (MRI/CT), Materials (Inverse Design), Civil (Topology Optimization/Digital Twins). Ethics: Cross-Cultural Performance, Fairness & Bias Metrics, Safety & Provenance (Glaze, SynthID, C2PA).