DeepMind x UCL | Deep Learning Lectures | 11/12 | Modern Latent Variable Models

  • Published 17 Apr 2024
  • This lecture, by DeepMind Research Scientist Andriy Mnih, explores latent variable models, a powerful and flexible framework for generative modelling. After introducing this framework along with the concept of inference, which is central to it, Andriy focuses on two types of modern latent variable models: invertible models and intractable models. Special emphasis is placed on understanding variational inference as a key to training intractable latent variable models.
    Note this lecture was originally advertised as lecture 9.
    Download the slides here:
    storage.googleapis.com/deepmi...
    Find out more about how DeepMind increases access to science here:
    deepmind.com/about#access_to_...
    Speaker Bio:
    Andriy Mnih is a Research Scientist at DeepMind. He works on generative modelling, representation learning, variational inference, and gradient estimation for stochastic computation graphs. He did his PhD on learning representations of discrete data at the University of Toronto, where he was advised by Geoff Hinton. Prior to joining DeepMind, Andriy was a post-doctoral researcher at the Gatsby Unit, University College London, working with Yee Whye Teh.
    About the lecture series:
    The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved as the leading artificial intelligence paradigm providing us with the ability to learn complex functions from raw data at unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications are touching all of our lives in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.
    In this lecture series, research scientists from DeepMind, a leading AI research lab, deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks via advanced ideas around memory, attention, and generative modelling to the important topic of responsible innovation.
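The variational inference the description highlights comes down to maximising the evidence lower bound (ELBO) on the log-likelihood. A minimal sketch for a toy linear-Gaussian model (all numbers and parameter names here are hypothetical illustrations, not taken from the lecture), where the true evidence is tractable and can be used as a sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent variable model (hypothetical values, for illustration only):
#   prior            p(z)    = N(0, 1)
#   likelihood       p(x|z)  = N(w*z, 1)
#   variational q    q(z|x)  = N(mu, sigma^2)
w = 2.0
x = 1.5                     # a single observed data point
mu, log_sigma = 0.6, -0.5   # variational parameters of q(z|x)

def log_normal(v, mean, std):
    """Log-density of N(mean, std^2) evaluated at v."""
    return -0.5 * np.log(2 * np.pi) - np.log(std) - 0.5 * ((v - mean) / std) ** 2

def elbo_estimate(n_samples=100_000):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    # which makes the Monte Carlo estimate differentiable in (mu, log_sigma).
    eps = rng.standard_normal(n_samples)
    z = mu + np.exp(log_sigma) * eps
    # ELBO = E_q[ log p(x|z) + log p(z) - log q(z|x) ]  <=  log p(x)
    log_joint = log_normal(x, w * z, 1.0) + log_normal(z, 0.0, 1.0)
    log_q = log_normal(z, mu, np.exp(log_sigma))
    return np.mean(log_joint - log_q)

est = elbo_estimate()
# For this linear-Gaussian model the evidence is tractable: p(x) = N(0, w^2 + 1),
# so we can verify that the estimated ELBO really lower-bounds log p(x).
log_px = log_normal(x, 0.0, np.sqrt(w ** 2 + 1.0))
```

The gap between the ELBO and log p(x) is exactly KL(q(z|x) ‖ p(z|x)); maximising the ELBO over the variational parameters shrinks that gap, which is the training principle behind the intractable models and VAEs covered later in the lecture.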
  • Science & Technology

COMMENTS • 20

  • @leixun
    @leixun 3 years ago +25

    *DeepMind x UCL | Deep Learning Lectures | 11/12 | Modern Latent Variable Models*
    *My takeaways:*
    1. Lecture outline 0:38
    2. Generative modeling 1:45
      2.1 Introduction to generative models 1:50
      2.2 Progress in generative models 6:30
      2.3 Types of generative models 8:00
    3. Latent variable models & inference 15:11
    4. Invertible models & exact inference 30:15
    5. Variational inference (VI) 41:47
    6. Gradient estimation in VI 1:10:25
    7. Variational autoencoders 1:22:15
    8. Conclusion 1:27:04

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago +9

    Best presentation on the generative approach so far.

  • @SempoiGiler
    @SempoiGiler 3 years ago +4

    Thank you, really grateful for these kinds of educational videos, especially coming from a country where AI research is virtually non-existent while I have a strong desire to learn it. The knowledge shared is so precious to me. Thanks, thanks, thanks.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago +2

    Thank you so much for posting these videos!

  • @youvenzful
    @youvenzful 3 years ago

    Big thanks for this incredible overview of latent variable models in such a short presentation! That said, some additional material or literature recommendations on the subject would have been helpful.

  • @learnml7034
    @learnml7034 3 years ago +1

    Great presentation - very clear!

  • @patricknnamdi2203
    @patricknnamdi2203 3 years ago +2

    This was great, thanks!

  • @lukn4100
    @lukn4100 3 years ago

    Great lecture and big thanks to DeepMind for sharing this great content.

  • @colevfrank
    @colevfrank 1 year ago

    Superb lecture--very clear explanation of variational autoencoders and the associated tradeoffs between ease of inference and modeling flexibility/power

  • @bryanbosire
    @bryanbosire 2 years ago

    Good work expounding on the applications of statistical inference in generative models.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    How about a conjugate prior for tractability?

  • @justiny.8365
    @justiny.8365 3 years ago

    Any recommended literature to learn more about this?

  • @bingeltube
    @bingeltube 3 years ago

    Unfortunately, Mnih provides very few references in the video and slides.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    What is a factorial prior?

  • @freemind.d2714
    @freemind.d2714 3 years ago

    The slides seem to be missing, please fix the link!

  • @tunestar
    @tunestar 3 years ago

    Love the topic!! On to the lesson...

    • @tunestar
      @tunestar 3 years ago

      Ok, it sucked! Why? Too much theory and math, very few practical examples.

  • @user-ls7mr4rq5x
    @user-ls7mr4rq5x 3 years ago

    I never tease people like you