Stanford CS236: Deep Generative Models | 2023 | Lecture 7 - Normalizing Flows

  • Published 5 May 2024
  • For more information about Stanford's Artificial Intelligence programs visit: stanford.io/ai
    To follow along with the course, visit the course website:
    deepgenerativemodels.github.io/
    Stefano Ermon
    Associate Professor of Computer Science, Stanford University
    cs.stanford.edu/~ermon/
    Learn more about the online course and how to enroll: online.stanford.edu/courses/c...
    To view all online courses and programs offered by Stanford, visit: online.stanford.edu/

COMMENTS • 1

  • @CPTSMONSTER • 4 days ago

    8:00 Without the KL term, this is similar to a stochastic autoencoder, which takes an input and maps it to a distribution over latent variables
    8:30 Reconstruction term plus KL term: the KL term encourages the latent variables generated through the encoder to be distributed similarly to the prior (a Gaussian in this case); the ELBO is sketched after these notes
    10:00? Trick decoder
    12:50? q is also stochastic
    14:10 Both p and q are generative models; this is only regularizing the latent space of an autoencoder (q)
    15:10 Matching the marginal distribution of z under p and under q seems like a possible training objective, but the integrals are intractable
    24:10? If p is a powerful autoregressive model, then z is not needed
    32:05? Sampling from p(z|x) inverts the generative process to find the z's likely under that posterior; intractable to compute
    34:25? Sample from the conditional, not selecting the most likely z
    53:50 Change of variables formula
    56:40 Mapping unit hypercube to parallelotope (linear invertible transformation)
    59:10 Area of the parallelogram is the absolute value of the determinant of the matrix
    59:50 Parallelotope pdf (the linear case is restated after these notes)
    1:08:00 Non-linear invertible transformation formula, generalized to the determinant of the Jacobian of f (restated after these notes). The dimensions of x and z are equal, unlike in VAEs. The determinant of the Jacobian of the inverse of f equals the inverse of the determinant of the Jacobian of f.
    1:15:00 Worked example of the non-linear transformation pdf formula (a standard example is worked after these notes)
    1:17:45 Two interpretations of diffusion models: stacked VAEs and infinitely deep flow models
    1:21:20 Flow model intuition: the latent variables z don't compress dimensionality; the flow views the data from another angle to make it easier to model (a toy NumPy sketch follows these notes)
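
    A minimal LaTeX sketch of the ELBO from the 8:00–8:30 notes, using standard VAE notation assumed here rather than copied from the lecture slides (encoder q_\phi(z|x), decoder p_\theta(x|z), prior p(z)):

        \mathcal{L}(\theta, \phi; x) =
            \underbrace{\mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right]}_{\text{reconstruction}}
            - \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z|x) \,\|\, p(z)\right)}_{\text{pulls } q_\phi \text{ toward the prior}}

    Dropping the KL term leaves only the reconstruction term, which is the stochastic-autoencoder reading at 8:00.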
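
    The linear case from 56:40–59:50 in LaTeX, assuming x = Az with A an invertible matrix: A maps the unit hypercube to a parallelotope of volume |det A|, so the density rescales by that factor:

        p_X(x) = p_Z\!\left(A^{-1}x\right) \left|\det A\right|^{-1}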
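
    The general non-linear formula from the 1:08:00 note, for x = f(z) with f invertible and differentiable and x, z of equal dimension:

        p_X(x) = p_Z\!\left(f^{-1}(x)\right) \left|\det \frac{\partial f^{-1}(x)}{\partial x}\right|
               = p_Z(z) \left|\det \frac{\partial f(z)}{\partial z}\right|^{-1}

    The second equality is the inverse-Jacobian fact in that note.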
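
    A standard worked example of the formula for the 1:15:00 note (an illustrative choice, not necessarily the one used in the lecture): take z ~ \mathcal{N}(0, 1) and x = f(z) = e^z, so f^{-1}(x) = \ln x with derivative 1/x, giving the log-normal density

        p_X(x) = \frac{1}{x\sqrt{2\pi}} \exp\!\left(-\frac{(\ln x)^2}{2}\right), \qquad x > 0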
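
    A toy NumPy sketch of the last two notes (an assumed elementwise affine flow x = exp(s) * z + t with made-up parameters, not the course's code): x and z keep the same dimension, sampling pushes the prior through f, and the exact log-likelihood comes straight from the change-of-variables formula.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        d = 4                    # x and z have the same dimension (unlike a VAE)
        s = rng.normal(size=d)   # hypothetical log-scales
        t = rng.normal(size=d)   # hypothetical shifts

        def f(z):
            # Forward map z -> x: elementwise affine flow, trivially invertible.
            return np.exp(s) * z + t

        def f_inv(x):
            # Exact inverse x -> z.
            return (x - t) * np.exp(-s)

        def log_px(x):
            # Change of variables: log p_X(x) = log p_Z(f^{-1}(x)) + log|det J_{f^{-1}}(x)|.
            # For this elementwise map the Jacobian is diagonal, so the log-det is -sum(s).
            z = f_inv(x)
            return norm.logpdf(z).sum() - s.sum()   # standard normal prior on z

        # Sampling: draw z from the prior and push it through f.
        x = f(rng.normal(size=d))

        # Sanity check: this particular flow makes x exactly N(t, exp(s)^2) elementwise.
        assert np.isclose(log_px(x), norm.logpdf(x, loc=t, scale=np.exp(s)).sum())

    Because this toy flow is affine, p_X is still Gaussian; real flow models stack many such invertible layers (e.g. coupling layers) so that p_X can be far from Gaussian while the log-likelihood stays exact.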