MIT 6.S191 (2020): Deep Generative Modeling

  • Published 7 May 2024
  • MIT 6.S191 (2020): Introduction to Deep Learning
    Deep Generative Modeling
    Lecturer: Ava Soleimany
    January 2020
    For all lectures, slides, and lab materials: introtodeeplearning.com
    Lecture Outline
    0:00 - Introduction
    4:37 - Why do we care?
    6:36 - Latent variable models
    8:12 - Autoencoders
    13:30 - Variational autoencoders
    20:18 - Reparameterization trick
    23:55 - Latent perturbation
    26:12 - Debiasing with VAEs
    30:40 - Generative adversarial networks
    32:40 - Intuitions behind GANs
    35:12 - GANs: Recent advances
    39:38 - Summary
    Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
  • Science & Technology

COMMENTS • 86

  • @wangxiao_ahu
    @wangxiao_ahu 4 years ago +118

    It's very kind of you to share the course with researchers all over the world!

  • @VALedu11
    @VALedu11 4 years ago +34

    All this time I was wondering whether probabilistic ML techniques had been lost, but this lecture has put a smile on my face. A perfect blend of adding uncertainty by defining Gaussian priors and overcoming the backpropagation hurdle... this is simply SUPERB. Kudos to you, Ms. Ava.

  • @bhargavasavi
    @bhargavasavi 3 years ago +18

    It's quite impressive how such a complex topic is explained so accurately... Thank you for the lecture!

  • @owaisahussain
    @owaisahussain 4 years ago +6

    Hands down the best lecturers I've seen on deep learning after Andrew Ng. Ms. Soleimany especially is quite comprehensive in this video. Thanks a lot, Ava, Alexander, and the MIT team for putting this up.

  • @ilfat_khairullin
    @ilfat_khairullin 4 years ago +3

    I have just watched the previous lectures on this topic, and I can't wait to see this one!!! Thank you so much for such amazing content!!! Mind-blowing!!!

  • @singhprabhjinder
    @singhprabhjinder 4 years ago +5

    What a wonderful series of lectures! Thoroughly enjoying. Many thanks to the instructors and MIT.

  • @Sofalovesmusic
    @Sofalovesmusic 3 years ago +4

    This is the best explanation of a VAE I've ever seen/heard, THANK YOU for sharing!

  • @PyMoondra
    @PyMoondra 4 years ago +8

    Thanks for the updated lectures. I haven't watched any MIT deep learning lectures as of yet, but I am looking forward to it.

  • @user-us3ny6ii9r
    @user-us3ny6ii9r 3 years ago +2

    Thanks for providing great introductory lectures to DL!

  • @stickynote18
    @stickynote18 3 years ago +5

    Excellent lecture, thanks. I'd been trying to understand GAN implementation, and now I do.

  • @atriantafy
    @atriantafy 3 years ago +1

    Thank you Ava for a great explanation of the subject! Really intuitive!

  • @shivamwadhwa537
    @shivamwadhwa537 3 years ago +1

    Wow! While explaining the regularization term, how wonderfully you explained the use of Bayesian statistics!
    Thanks a lot for such a great explanation :)

  • @EjiroOnose
    @EjiroOnose 4 years ago +3

    I enjoyed every bit of all the lectures.
    Best deep learning refresher class I've come across.

    • @ahmeds4
      @ahmeds4 3 years ago +1

      Refresher?!

  • @miguelramirez4037
    @miguelramirez4037 1 year ago +2

    Thank you very much Ava, excellent presentation. God bless you.

  • @trulyspinach
    @trulyspinach 4 years ago +2

    ahhh, can't wait to see this!!!!!

  • @Samuel-wl4fw
    @Samuel-wl4fw 4 years ago +4

    Great video. I think the explanation of the difference between an autoencoder and a variational autoencoder might be a little clearer with example output, though.

  • @burlemanimounika7631
    @burlemanimounika7631 3 years ago +1

    Such a good lecture, so clear. I appreciate you for this.

  • @YT-di3do
    @YT-di3do 3 years ago +2

    Thanks! Heartfelt thanks!

  • @jitendrakr9171
    @jitendrakr9171 3 years ago +1

    Thank you, ma'am. Good explanation of GANs & autoencoders.

  • @Iamine1981
    @Iamine1981 3 years ago +4

    Thanks for sharing this great material. I come from a maths background and only recently got to dive a bit into ML and deep learning. One question comes to my mind, though, and this is more of a philosophical question than a practical one, I would guess: why isn't there more focus on the QUALITY of the parameters learned by SGD? Mathematically speaking, we are only guaranteed a global optimum in the case of a convex loss function, so how do we evaluate the quality of parameters learned from local optima in the case of non-convex loss functions? Is there any mathematical research that guarantees certain properties of these parameters, or their variability, or any other measure of stability, for example? Thanks.

  • @peymannoorbakhsh4749
    @peymannoorbakhsh4749 3 years ago

    Thanks for illuminating these areas.
    Thank you; I'm glad this came up as the first Google search result. Stay steadfast and victorious!

  • @vincentlee4513
    @vincentlee4513 2 years ago +1

    Finally... I was stuck on reparameterization for a long time...

  • @gihanna
    @gihanna 1 year ago +1

    This is awesome!

  • @arieljumba9754
    @arieljumba9754 2 years ago

    Thank you for the concrete lessons. Any chance I can access the MIT lab contents, including the lectures?

  • @sinikishan1408
    @sinikishan1408 3 years ago

    Just awesome explanations...

  • @praveenkumarverma3767
    @praveenkumarverma3767 3 years ago

    Really great learning and explanation.

  • @2000sunnybunny
    @2000sunnybunny 3 years ago

    Amazing lecture!

  • @alexandermacleod1672
    @alexandermacleod1672 3 years ago

    You know what's interesting is that GANs seem to have issues with symmetry. For the GAN-generated faces they show a lot, the easiest way to tell them from real images is by looking at the ears and the teeth. For instance, both B and C have one earring, and one earlobe larger than the other. GANs do things like lighting so well, but I guess that tends to be more continuous, whereas ear symmetry is spread further apart, or understanding that each tooth doesn't just have a general tooth look but a specific one, and it's important that they're all there.

  • @anandkl3009
    @anandkl3009 3 years ago +1

    Awesome and explained very well.

  • @marcospereira6034
    @marcospereira6034 3 years ago +6

    This course is awesome, and thank you so much for sharing it with us! I have one question though: why is it such a common practice to include equations without labeling all the variables?
    For example, at 19:06 there is no mention (or maybe I missed it?) of what the "D()" function is, or what the || symbol represents.

    • @Suraj-rb8kf
      @Suraj-rb8kf 3 years ago +2

      D denotes the distance between the distribution that the encoder learns (i.e. q_phi(z|x)) and the prior that we chose (i.e. p(z)).
      I'd recommend you watch last year's lecture on it by Amini himself: ua-cam.com/video/yFBFl1cLYx8/v-deo.html

    • @hmzshk2201
      @hmzshk2201 3 years ago

      In this context, || isn't the programming "or"; it's part of the divergence notation D(q || p), where it just separates the two distributions being compared.
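
      For readers following along in code, here is a minimal sketch of that regularization term: the closed-form KL divergence D(q_phi(z|x) || p(z)) between the encoder's Gaussian and a standard-normal prior. The function name and the log-sigma parameterization are illustrative assumptions, not the course's lab code.

      import tensorflow as tf

      def kl_regularizer(mu, logsigma):
          # Closed-form KL divergence between the encoder's Gaussian
          # N(mu, sigma^2) and the standard-normal prior N(0, I):
          #   0.5 * sum(sigma^2 + mu^2 - 1 - log(sigma^2))
          return 0.5 * tf.reduce_sum(
              tf.exp(2.0 * logsigma) + tf.square(mu) - 1.0 - 2.0 * logsigma,
              axis=-1)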

  • @CGCSVIDYASHREEKS
    @CGCSVIDYASHREEKS 2 years ago

    Mind blowing!!!

  • @shashidharpai8298
    @shashidharpai8298 4 years ago

    @Alexander @Ava, you mention the generator sees the real data, but we don't pass the generator the real data anywhere; I thought only the discriminator sees the real data. I'm a bit unclear on that, and it would be great if you could clarify.
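
    A minimal sketch of a standard GAN training step may help here (illustrative names, not the course's lab code): real data is only ever fed to the discriminator, while the generator receives only noise and learns from the discriminator's gradients.

    import tensorflow as tf

    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

    def train_step(generator, discriminator, g_opt, d_opt, real_x, z_dim=100):
        # The generator only ever sees random noise z.
        z = tf.random.normal([tf.shape(real_x)[0], z_dim])
        with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
            fake_x = generator(z)
            d_real = discriminator(real_x)  # real data: discriminator only
            d_fake = discriminator(fake_x)
            d_loss = (bce(tf.ones_like(d_real), d_real)
                      + bce(tf.zeros_like(d_fake), d_fake))
            g_loss = bce(tf.ones_like(d_fake), d_fake)  # try to fool D
        d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
        g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
        d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
        g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))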

  • @checktv6460
    @checktv6460 4 years ago +4

    Hello Ava, I'm from Brazil and can't wait to see this too. I'm following the classes.
    Where can I ask my doubts about previous lessons?

    • @shivangagarwal6703
      @shivangagarwal6703 4 years ago +1

      You can join the WhatsApp group; you can find the link in the comments of the second lecture.

    • @poojanpujara8643
      @poojanpujara8643 4 years ago

      @@shivangagarwal6703 Can you please share the link? Thanks in advance!

  • @lizgichora6472
    @lizgichora6472 3 years ago +1

    Thank you.

  • @aiwithr
    @aiwithr 4 years ago +3

    Great!

  • @MrRynRules
    @MrRynRules 3 years ago

    Thank you!

  • @kaustavchaudhury2620
    @kaustavchaudhury2620 4 years ago

    What if we want to learn the latent space in GANs? How can we embed a VAE into that?

  • @judisjeevan4908
    @judisjeevan4908 3 years ago

    I have a doubt, Mr. Alexander: can GANs and reinforcement learning be used side by side?

  • @mrigankasaikia6453
    @mrigankasaikia6453 3 years ago +1

    Amazing❤🖤🖤🖤🖤

  • @sayakpaul3152
    @sayakpaul3152 4 years ago

    Shouldn't the term be dz/dphi instead of df/dphi?
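
    For context, the question refers to the reparameterization trick: sampling z ~ N(mu, sigma^2) directly is not differentiable, so z is instead computed as a deterministic function of the encoder outputs plus fixed noise, letting gradients flow back to the encoder parameters phi. A minimal sketch (illustrative names, assuming a log-sigma parameterization):

    import tensorflow as tf

    def reparameterize(mu, logsigma):
        # Draw fixed noise eps ~ N(0, I), then compute z = mu + sigma * eps
        # deterministically, so backprop can reach mu and logsigma.
        eps = tf.random.normal(shape=tf.shape(mu))
        return mu + tf.exp(logsigma) * eps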

  • @mediaanalysis4708
    @mediaanalysis4708 2 years ago

    I have a plan now: I'll generate some fake data with a GAN and finally try to classify it using an SVM; hopefully the margin will tell me its goodness.

    • @mediaanalysis4708
      @mediaanalysis4708 2 years ago

      For example, I have some official COVID data and some fake tweet and FB data about COVID; how can I model this to identify the policies the government has to take?

  • @user-kg3cl1lq7v
    @user-kg3cl1lq7v 3 years ago

    Great video!

  • @vincent_hall
    @vincent_hall 3 years ago

    So far, 23:57 in. VAEs and GANs are so sexy!
    They're amazingly attractive from a learning POV (point of view).
    Thanks to Ava for explaining this to us.
    I actually understand it.

  • @AlexSmith-zn5sf
    @AlexSmith-zn5sf 4 years ago +1

    This is an outstanding lecture.

  • @hussainkaleem6770
    @hussainkaleem6770 3 years ago +1

    very nice

  • @hashimosmanmusa8715
    @hashimosmanmusa8715 4 years ago

    Thanks

  • @ShraddhaSurana
    @ShraddhaSurana 3 years ago +2

    Would it be correct to say that autoencoders are a form of lossy compression?

    • @kyriakostp
      @kyriakostp 3 years ago +2

      Yes: they are a learnt model that does lossy compression (the encoder part, from the input to the smallest layer) and then decompression (the decoder part, from the layer just after the smallest one to the output).
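
      To make the compression analogy concrete, here is a minimal sketch with hypothetical layer sizes (not the course's lab code): a 784-dimensional input is squeezed through a 32-dimensional bottleneck, so the reconstruction is necessarily lossy.

      import tensorflow as tf

      # Encoder compresses 784-dim inputs down to a 32-dim code z.
      encoder = tf.keras.Sequential([
          tf.keras.layers.Dense(128, activation="relu"),
          tf.keras.layers.Dense(32),
      ])
      # Decoder "decompresses" z back to 784 dims.
      decoder = tf.keras.Sequential([
          tf.keras.layers.Dense(128, activation="relu"),
          tf.keras.layers.Dense(784),
      ])

      x = tf.random.uniform([8, 784])              # stand-in batch of inputs
      x_hat = decoder(encoder(x))                  # lossy reconstruction
      loss = tf.reduce_mean(tf.square(x - x_hat))  # reconstruction error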

  • @judemarks5965
    @judemarks5965 3 years ago +1

    I wish I was smart enough to find a way to unify art & perception with deep neural networks, or a system that could help me interpret brain activity. But I make no sense beyond my own mental limitations.

  • @moisesmanuelmorinhevia.8774
    @moisesmanuelmorinhevia.8774 4 years ago

    Love.

  • @bofloa
    @bofloa 4 years ago +1

    I see a problem, especially in image recognition, as it will be biased toward a specific race of people since the training data is biased...

    • @njqm1065
      @njqm1065 3 years ago

      Lab 2 is about precisely that: debiasing with CNNs.

  • @candyreid59
    @candyreid59 3 years ago

    Deep Generative Modeling: is this lecture 4?

  • @abdulwaheed_cs
    @abdulwaheed_cs 3 years ago

    In the style transfer example (CycleGAN) at 39:00, where Alexander's speech style is transformed into Obama's style, why does Obama's lip sync match the spoken utterance when it's just speech-to-speech style transfer?

    • @Suraj-rb8kf
      @Suraj-rb8kf 3 years ago

      I have the same question. I think they might have used another model for it, but it's just a guess. Who knows if they actually had Obama say that xD

  • @mohammedfawaaz693
    @mohammedfawaaz693 3 years ago

    Can anyone recommend the best online course for deep learning in Python? I mean, the course should have detailed explanations of deep learning, especially CNNs.

    • @badreddineberrehal1624
      @badreddineberrehal1624 3 years ago

      You should check Coursera's Deep Learning by Andrew Ng; after that, there is also TensorFlow in Practice. But NOTE:
      - you should already be familiar with Python
      - you should know the difference between deep learning and machine learning so that you'll know what to learn

    • @mohammedfawaaz693
      @mohammedfawaaz693 3 years ago

      @@badreddineberrehal1624 Thanks a lot

  • @TheAnubhav27
    @TheAnubhav27 4 years ago +1

    Where can I get all the lectures?

    • @victorsergio
      @victorsergio 4 years ago

      introtodeeplearning.com/ is the course website.

    • @TheAnubhav27
      @TheAnubhav27 4 years ago

      @@victorsergio Thanks

  • @dermitdembrot3091
    @dermitdembrot3091 3 years ago

    Autoencoder stands for "automatically encoding data"? I thought it was "self-encoding"?

    • @dermitdembrot3091
      @dermitdembrot3091 3 years ago

      2. I don't think the VAE was inspired by the autoencoder. Instead, it was derived from variational methods and only happens to be autoencoding.

    • @dermitdembrot3091
      @dermitdembrot3091 3 years ago

      3. The reparameterization noise is not drawn "from a prior distribution"; its distribution just happens to coincide with the typically used prior.

  • @jonathan364
    @jonathan364 3 years ago

    The link for the database, www.dropbox.com/s/bp54q547mfg15ze/train_face.h5?dl=1, does not exist anymore. I cannot run the Google Colab code.

    • @AAmini
      @AAmini 3 years ago +2

      github.com/aamini/introtodeeplearning/issues/82

    • @jonathan364
      @jonathan364 3 years ago

      @@AAmini Thanks!

  • @manishahajare2470
    @manishahajare2470 1 year ago

    7:48

  • @caveman4659
    @caveman4659 2 years ago +1

    Two big reasons I watched the video.

  • @vishnyas
    @vishnyas 3 years ago +1

    It's a lie, the first face is mine. Or wait...

  • @allandogreat
    @allandogreat 4 years ago +10

    Big

  • @sawchawn
    @sawchawn 4 years ago +1

    I want to be at MIT too :(
    Stuck at an IIT

    • @nikhilsaini1597
      @nikhilsaini1597 4 years ago +1

      Which IIT? That is what matters.

    • @rohitsinha1843
      @rohitsinha1843 4 years ago +5

      Where our dreams come true is where your struggle begins...

    • @sawchawn
      @sawchawn 3 years ago +1

      @Demon King hahah! That's true, but the heart wants more. Instructors at universities abroad seem really good; that's why I was saying it.

    • @amanbansiwal37
      @amanbansiwal37 3 years ago

      Weird flex, but okay.

    • @sawchawn
      @sawchawn 3 years ago

      @@amanbansiwal37 it's not much but it's honest work XD
