[Classic] Generative Adversarial Networks (Paper Explained)

  • Published 25 Jun 2024
  • #ai #deeplearning #gan
    GANs are one of the main models in modern deep learning. This is the paper that started it all! While the task of image classification was making progress, the task of image generation was still cumbersome and prone to artifacts. The main idea behind GANs is to pit two competing networks against each other, thereby creating a generative model that only ever has implicit access to the data through a second, discriminative model. The paper combines architecture, experiments, and theoretical analysis beautifully.
    OUTLINE:
    0:00 - Intro & Overview
    3:50 - Motivation
    8:40 - Minimax Loss Function
    13:20 - Intuition Behind the Loss
    19:30 - GAN Algorithm
    22:05 - Theoretical Analysis
    27:00 - Experiments
    33:10 - Advantages & Disadvantages
    35:00 - Conclusion
    Paper: arxiv.org/abs/1406.2661
    Abstract:
    We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
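    The minimax objective at the heart of the paper is V(D, G) = E_{x ~ p_data}[log D(x)] + E_{z ~ p_z}[log(1 - D(G(z)))], which the discriminator maximizes and the generator minimizes. In practice it is optimized by alternating gradient steps on the two networks. Below is a minimal sketch of that alternating loop; the PyTorch framing, the network shapes, latent_dim, and data_loader are illustrative assumptions rather than the authors' code, and the generator step uses the non-saturating log D(G(z)) heuristic the paper recommends for early training.

        import torch
        import torch.nn as nn

        latent_dim = 100  # dimensionality of the noise prior p(z); an illustrative choice

        # Hypothetical MLP generator and discriminator, in the spirit of the paper's experiments
        G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
        D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())

        opt_D = torch.optim.SGD(D.parameters(), lr=0.01)
        opt_G = torch.optim.SGD(G.parameters(), lr=0.01)
        bce = nn.BCELoss()  # binary cross-entropy realizes the log terms of V(D, G)

        for real in data_loader:  # assumed to yield batches of flattened real images
            n = real.size(0)
            ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

            # Discriminator step: ascend V, pushing D(x) -> 1 and D(G(z)) -> 0
            z = torch.randn(n, latent_dim)  # sample from the noise prior p(z)
            fake = G(z).detach()            # detach so no gradients flow into G here
            loss_D = bce(D(real), ones) + bce(D(fake), zeros)
            opt_D.zero_grad(); loss_D.backward(); opt_D.step()

            # Generator step: non-saturating heuristic, i.e. maximize log D(G(z))
            z = torch.randn(n, latent_dim)
            loss_G = bce(D(G(z)), ones)
            opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    At the unique optimum the paper derives, the generator distribution equals p_data and D outputs 1/2 everywhere, matching the abstract above.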
    Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
    Links:
    YouTube: / yannickilcher
    Twitter: / ykilcher
    Discord: / discord
    BitChute: www.bitchute.com/channel/yann...
    Minds: www.minds.com/ykilcher
    Parler: parler.com/profile/YannicKilcher
    LinkedIn: / yannic-kilcher-488534136
    If you want to support me, the best thing to do is to share out the content :)
    If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
    SubscribeStar: www.subscribestar.com/yannick...
    Patreon: / yannickilcher
    Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
    Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
    Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
    Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
  • Science & Technology

COMMENTS • 68

  • @Youtoober6947 • 2 years ago +23

    I don't know if you realize it, but I want to tell you that you have NO idea how helpful (especially for time management) your Paper Explained series is for me. These are SERIOUSLY invaluable, thank you so much.

  • @Aniket7Tomar • 3 years ago +105

    I am loving these classic paper videos. More of these, please.

  • @TheInfinix • 3 years ago +90

    I think that such an initiative will be useful for fresh researchers and beginners.

  • @kateyurkova6384 • 3 years ago +10

    These reviews are priceless, you add so much more value than just reading the paper would bring, thank you for your work.

  • @MinecraftLetstime • 3 years ago +14

    These are absolutely amazing, please keep them coming.

  • @datamlistic • 3 years ago +3

    The classic papers are amazing! Please continue making them!

  • @sulavojha8322 • 3 years ago +5

    The classic papers are so good. I hope you upload more such videos. Thank you!

  • @maltejensen7392 • 3 years ago +6

    It's extremely helpful to hear your thoughts on what the authors must have been thinking, and things like researchers trying to put MCMC somewhere it was not intended to be. This gives a better idea of how machine learning in academia works. Please continue this, and thanks!

  • @fulin3397 • 3 years ago +6

    Classic paper and a very awesome explanation. Thank you!

  • @andresfernandoaranda5498 • 3 years ago +5

    Thank you for making these resources free to the community ))

  • @SallyZhang-vt2oi • 3 years ago

    Thank you very much. I really appreciate your understanding of these papers. Please keep on releasing these kinds of videos. They helped me a lot. Thanks again!

  • @benjaminbenjamin8834 • 3 years ago +1

    @Yannic, this is such a great initiative and you are doing a great, great job. Please carry on.

  • @agbeliemmanuel6023 • 3 years ago +2

    It's great to have the origins of most models in ML today. Good work.

  • @narinpratap8790 • 2 years ago +1

    This was awesome! I am currently a graduate student, and I have to write a paper review for my Deep Learning course. Loved your explainer on GANs. This has helped me understand so much of the intuition behind GANs, and also the developments in Generative Models since the paper's release. Thank you for making this.

  • @bjornhansen9659 • 3 years ago +1

    I like these videos on the papers. It is very helpful to hear how another person views the ideas discussed in these papers. Thanks!

  • @aa-xn5hc • 3 years ago +3

    I love these historical videos of yours!!

  • @falachl • 2 years ago

    Yannic, thank you. In this overloaded ML world you are providing a critical, informative service. Please keep it up.

  • @ambujmittal6824 • 3 years ago +1

    You're truly a godsend for people who are comparatively new to the field (maybe even for experienced ones). Thanks a lot and keep up the good work!

  • @YtongT • 3 years ago +3

    Very useful, thank you for such quality content!

  • @avishvj • 2 years ago

    Brilliant, would love more of these!

  • @frankd1156 • 3 years ago

    Wow... this is gold. Keep it up, man. Be blessed.

  • @kristiantorres1080 • 3 years ago

    Beautiful paper and superb review!

  • @herp_derpingson • 3 years ago +15

    12:00 I never quite liked the min-max analogy. I think a better analogy would be a teacher-student analogy. The discriminator says, "The image you generated does not look like a real image, and here are the gradients which tell you why. Use the gradients to improve yourself."

    32:30 I am pretty sure these interpolations already existed in the auto-encoder literature.

    Mode collapse is pretty common for human teachers and students. Teachers often say you need to solve the problems the way they taught them in class. "My way or the highway" XD

    • @YannicKilcher • 3 years ago +8

      Yes, the teacher-student phrasing would make more sense. I think the min-max is just the formal way of expressing the optimization problem to be solved, and then people go from there into game theory etc.
      The mode collapse could also be the student who knows exactly what to write in any essay to make one particular teacher happy :D

  • @alexandravalavanis2282 • 2 years ago

    Damn. I’m enjoying this video very much. Very helpful. Thank you!

  • @AnassHARMAL • 1 year ago

    This is amazing, thank you! As a materials scientist trying to utilize machine learning, this just hits the spot!

  • @goldfishjy95 • 3 years ago

    Hi, this is incredibly useful, thank you so much!

  • @aman6089 • 2 years ago

    Thank you for the explanation.
    It is a great resource for a beginner like myself!

  • @bosepukur • 3 years ago

    Great initiative... would love to see some classic NLP papers.

  • @Throwingness • 2 years ago

    I'd appreciate more explanation of the math in the future. This kind of math is rarely encountered by most programmers.

  • @sergiomanuel2206 • 3 years ago +3

    Very good paper!! Can you please cover the paper for the next big step toward the state of the art in GANs? Thank you!

  • @flyagaric23 • 3 years ago

    Thank you, excellent.

  • @DasGrosseFressen • 3 years ago +3

    "Historical" in ML : 6 years :D
    The series ist nice, thanks! one question though: you said that the objective is to minimize the exoectations in (1), but the minmax is already performed to get to the equality, right? How does V look?
    Edit: oh, never mind. In (3) you see that (1) is in the typical CS-sloppy notation...

  • @AltafHussain-gk2xe • 2 years ago

    Sir, I'm a big fan of yours. I have been following you for the last year, and I find every one of your videos full of information and really useful. Sir, I request you to please make a few videos on segmentation as well; I shall be thankful to you.

  • @utku_yucel • 3 years ago

    YES! THANKS!

  • @kvawnmartin1562 • 3 years ago

    Best GAN explanation ever

  • @lcslima45 • 3 years ago

    This channel is awesome

  • @TheKoreanfavorites • 2 years ago

    Great!!!

  • @Notshife • 3 years ago +1

    Hey @Yannic, I followed up on the BYOL paper you covered. While I'm not super familiar with machine learning, I do feel I implemented something which is mechanically the same as what was presented, and I thought it might interest you that the result for me was that it converged to a constant, every time. The exponential-moving-average-weighted network and the separate augmentations did not prevent it. I will be going back through to see if I maybe made a mistake. But I have been trying a bit of everything, and so far nothing has been able to prevent the trivial solution. Maybe I'm missing something, which I hope, because I liked the idea. My experimentation with parameters and network architecture has not been exhaustive... But yeah, so far: no magic.

    • @YannicKilcher • 3 years ago +1

      Yes, I was expecting most people to have your experience, and yet apparently someone else can somehow make it work sometimes.

  • @jintaoren6755 • 3 years ago +1

    Why hasn't YouTube recommended this channel to me earlier?

  • @dl569 • 1 year ago

    Thanks a lot!

  • @westcott2204 • 9 months ago

    Thank you for providing your insights and current point of view on the paper. It was very helpful.

  • @jeromeblanchet3827 • 3 years ago +1

    Most people tell stories with data insights and model predictions. Yannic tells stories with papers.
    An image is worth a thousand words, and a good story is worth a thousand images.

  • @robo2.069 • 3 years ago

    Nicely explained, thank you. Can you make a video on Dual Motion GAN (DMGAN)?

  • @rameshravula8340 • 3 years ago

    Yannic, could you give application examples at the end of each paper you review?

  • @dandy-lions5788 • 3 years ago

    Thank you so much!! Can you do a paper on UNet?

  • @hahawadda • 3 years ago +3

    Funny how we can now say the original GAN paper is a classic.

  • @ehza • 2 years ago

    Thanks

  • @shivombhargava2166 • 3 years ago +1

    Please make a video on pix2pix GANs

  • @paulijzermans7637 • 8 months ago

    I'm writing my thesis on GANs at the moment. I would enjoy an interesting conversation with an expert :)

  • @vigneshbalaji21 • 2 years ago

    Can you please post a video on GAIL?

  • @sweatobertrinderknecht3480 • 3 years ago +2

    I'd like to see a mix of papers and actual (Python) code.

  • @DANstudiosable • 3 years ago +1

    What do you mean by a prior on the input distribution?

    • @YannicKilcher • 3 years ago

      It's the way the inputs are distributed.
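      In the GAN setting, the prior is the fixed noise distribution p(z) from which the generator's inputs are sampled; the paper uses simple choices such as uniform or Gaussian noise. A minimal sketch, assuming PyTorch and illustrative sizes:

          import torch

          batch_size, latent_dim = 64, 100         # illustrative sizes, not from the paper
          z = torch.randn(batch_size, latent_dim)  # z ~ N(0, I): the prior over generator inputs
          # sample = G(z)                          # the generator maps prior noise to data space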

  • @XOPOIIIO • 3 years ago +3

    In the future there'll be an algorithm to transform scientific papers into your videos.

    • @adamantidus • 3 years ago +1

      No matter how efficient this algorithm might be, Yannic will still be faster

  • @jithendrayenugula7137 • 3 years ago

    Very awesome explanation! Thanks, man!
    Is it too late or a waste of time to play with and explore GANs in 2020, when BERT/GPT are hot and trending in the AI community?

    • @ssshukla26 • 3 years ago +1

      Is it too late to learn something? No... Is it too late to research GANs? Absolutely not... Nothing is perfect, GANs included; there will be decades of research on these same topics. Whether you can make money out of knowing GANs... Ummm, debatable...

  • @aishwaryadhumale1278 • 3 years ago

    Can we please have more content on GANs?

  • @chinbold • 3 years ago

    Just watching your videos inspires me 😢😢😢

  • @timothyschollux • 3 years ago

    The famous Schmidhuber-Goodfellow moment: ua-cam.com/video/HGYYEUSm-0Q/v-deo.html

  • @sadface7457 • 3 years ago

    Revisit "Attention Is All You Need", because that is now a classic paper.

    • @audrius0810 • 3 years ago

      He's done the actual paper already