Stanford CS236: Deep Generative Models | 2023 | Lecture 5 - VAEs

  • Published 5 May 2024
  • For more information about Stanford's Artificial Intelligence programs visit: stanford.io/ai
    To follow along with the course, visit the course website:
    deepgenerativemodels.github.io/
    Stefano Ermon
    Associate Professor of Computer Science, Stanford University
    cs.stanford.edu/~ermon/
    Learn more about the online course and how to enroll: online.stanford.edu/courses/c...
    To view all online courses and programs offered by Stanford, visit: online.stanford.edu/

COMMENTS • 2

  • @dohyun0047 6 days ago

    The class atmosphere around @56:38 is so nice lol

  • @CPTSMONSTER 11 days ago

    29:30 Infinitely many values of the latent variable z (a mixture of infinitely many Gaussians)
    30:10 Finite mixture of Gaussians: the parameters of each component can be chosen arbitrarily (lookup table)
    30:30 Infinite mixture of Gaussians: parameters are not arbitrary, they are produced by feeding z through a neural network (see the decoder sketch after this list)
    39:30 Parameters of the infinite Gaussian model
    40:30? Positive semi-definite covariance matrix
    41:30? Latent variable represented by part of image obscured
    50:00 Number of latent variables (binary variables, Bernoulli)
    52:00 Naive Monte Carlo approximation of the likelihood for partially observed data (see the sketch after this list)
    1:02:30? Modify the learning objective to do semi-supervised learning
    1:04:00 Importance sampling with Monte Carlo (see the sketch after this list)
    1:07:00? Unbiased estimator; is q(z^(j)) supposed to be maximized?
    1:09:00 The estimator becomes biased when taking the log-likelihood; proof by Jensen's inequality for concave functions (log is concave)
    1:14:30 Summary: we want log p_theta(x). Conditioning on latent variables z gives an infinite mixture of Gaussians, which is intractable, so use importance sampling with Monte Carlo. The base case k=1 shows the estimator of log p_theta(x) is biased; Jensen's inequality yields the ELBO, which is optimized by choosing q (derivation written out after this list).
    1:17:00 CUBO and other techniques for an upper bound; getting an upper bound is much trickier
    1:18:40? Entropy term; equality holds when q is the posterior distribution
    1:19:40? E step of EM algorithm
    1:20:30? Loop when training? x to z and z to x
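
A minimal sketch of the 30:30 point, in PyTorch: every value of z indexes one Gaussian component, but the component parameters come from a shared network rather than a lookup table. The class name, layer sizes, and dimensions here are illustrative assumptions, not the lecture's exact architecture.

```python
import torch
import torch.nn as nn

class GaussianDecoder(nn.Module):
    """Maps a latent z to the parameters of p(x | z) = N(mu(z), diag(sigma(z)^2)).

    Sizes are hypothetical; the lecture only says "feed z through a neural network".
    """
    def __init__(self, latent_dim=2, data_dim=784, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, data_dim)         # mean of the component
        self.log_sigma = nn.Linear(hidden, data_dim)  # log std keeps sigma > 0

    def forward(self, z):
        h = self.net(z)
        return self.mu(h), self.log_sigma(h).exp()

# A continuous z gives infinitely many components, yet the model has finitely
# many parameters (the network weights), so they cannot be chosen arbitrarily.
```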
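The 52:00 naive Monte Carlo idea, sketched for binary (Bernoulli) latents as at 50:00: p(x) = sum_z p(x, z) = |Z| * E_{z ~ Uniform(Z)}[p(x, z)], so sample z uniformly and average. `joint_prob` is a hypothetical callable standing in for p_theta(x, z), not something from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_mc_likelihood(x_obs, joint_prob, num_latent_bits=10, k=1000):
    """Naive Monte Carlo estimate of p(x) = sum_z p(x, z) over binary latents.

    Uses p(x) = |Z| * E_{z ~ Uniform(Z)}[p(x, z)] with k uniform samples;
    joint_prob(x, z) is a placeholder assumed to return p_theta(x, z).
    """
    num_z = 2 ** num_latent_bits                        # |Z| for Bernoulli z
    zs = rng.integers(0, 2, size=(k, num_latent_bits))  # uniform binary samples
    vals = np.array([joint_prob(x_obs, z) for z in zs])
    return num_z * vals.mean()
```

In practice most uniformly sampled z contribute almost nothing to the sum, which is why the lecture moves on to importance sampling.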
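The 1:04:00-1:09:00 point in one runnable toy, assuming a model that is not from the lecture (z ~ N(0,1), x|z ~ N(z,1), so p(x) = N(x; 0, 2) in closed form): the importance-sampling estimator is unbiased for p(x), but its log is biased low for log p(x), exactly the Jensen argument.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy model with a known marginal: z ~ N(0,1), x|z ~ N(z,1) => p(x) = N(x; 0, 2).
x = 1.5
true_log_px = norm.logpdf(x, loc=0.0, scale=np.sqrt(2.0))

def is_estimate(k, q_loc=0.0, q_scale=1.0):
    """One importance-sampling estimate of p(x): mean of w = p(x, z) / q(z)."""
    z = rng.normal(q_loc, q_scale, size=k)
    log_w = (norm.logpdf(z, 0.0, 1.0)             # prior p(z)
             + norm.logpdf(x, z, 1.0)             # likelihood p(x | z)
             - norm.logpdf(z, q_loc, q_scale))    # proposal q(z)
    return np.exp(log_w).mean()

ests = np.array([is_estimate(k=1) for _ in range(100_000)])
print(np.log(ests.mean()), "vs", true_log_px)  # unbiased for p(x): logs agree
print(np.log(ests).mean(), "<", true_log_px)   # E[log w] biased low (Jensen)
```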
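Written out, the 1:14:30 summary is the standard derivation (same symbols as the lecture):

```latex
\log p_\theta(x)
  = \log \mathbb{E}_{z \sim q(z)}\!\left[\frac{p_\theta(x,z)}{q(z)}\right]
  \;\ge\; \mathbb{E}_{z \sim q(z)}\!\left[\log \frac{p_\theta(x,z)}{q(z)}\right]
  = \underbrace{\mathbb{E}_{q}\big[\log p_\theta(x,z)\big] + H(q)}_{\text{ELBO}}
```

with equality iff q(z) = p_theta(z | x), which is the 1:18:40 entropy note and the E-step choice at 1:19:40.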