Understanding Variational Autoencoders (VAEs) | Deep Learning
- Published 28 May 2024
- Here we delve into the core concepts behind the Variational Autoencoder (VAE), a widely used representation-learning technique that uncovers the hidden factors of variation in a dataset.
Timestamps
--------------------
Introduction 00:00
Latent variables 01:53
Intractability of the marginal likelihood 05:08
Bayes' rule 06:35
Variational inference 09:01
KL divergence and ELBO 10:14
ELBO via Jensen's inequality 12:06
Maximizing the ELBO 12:57
Analyzing the ELBO gradient 14:34
Reparameterization trick 15:55
KL divergence of Gaussians 17:40
Estimating the log-likelihood 19:04
Computing the log-likelihood 19:58
The Gaussian case 20:17
The Bernoulli case 21:56
VAE architecture 23:33
Regularizing the latent space 25:37
Balance of losses 28:00
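Two of the derivations listed above, the reparameterization trick and the closed-form KL divergence between a diagonal Gaussian and the standard-normal prior, can be sketched in a few lines. A minimal NumPy illustration (not the video's own code; function names and the toy encoder outputs are invented for this sketch, assuming q(z|x) = N(mu, sigma²) with a N(0, I) prior):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    so the sampling step becomes differentiable w.r.t. mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_gaussian(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims:
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Toy encoder outputs: batch of 4 samples, 2 latent dimensions
mu = np.array([[0.0, 0.0], [1.0, -1.0], [0.5, 0.5], [-0.2, 0.3]])
log_var = np.zeros_like(mu)  # sigma = 1 everywhere

z = reparameterize(mu, log_var)  # stochastic latent codes, same shape as mu
kl = kl_gaussian(mu, log_var)    # per-sample KL penalty on the latent space

# With sigma = 1, the KL reduces to 0.5 * ||mu||^2
print(kl)  # [0.    1.    0.25  0.065]
```

The KL term is the regularizer discussed in "Regularizing the latent space": it pulls each posterior toward the prior, and the "balance of losses" section concerns how this term is weighted against the reconstruction log-likelihood.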
Useful links
------------------------
Original VAE paper: arxiv.org/abs/1312.6114
More detailed explanation: arxiv.org/abs/1906.02691
Nice discussion of the reparameterization trick: gregorygundersen.com/blog/201...
Intro to variational inference and the ELBO: www.cs.cmu.edu/~epxing/Class/...
On the problem of learnt variance in the decoder: arxiv.org/abs/2006.13202
VAE tutorial in Keras: keras.io/examples/generative/...
MIT lecture on deep generative modelling: • MIT 6.S191 (2023): Dee...
Deriving the KL divergence for Gaussians: leenashekhar.github.io/2019-0...
Article with a nice discussion of regularized latent spaces: towardsdatascience.com/unders...