Understand the Math and Theory of GANs in ~ 10 minutes

  • Published May 27, 2024
  • Join my Foundations of GNNs online course (www.graphneuralnets.com)! This video takes a deep dive into the math of Generative Adversarial Networks. It explains the optimization function (written out below for reference), steps through an algorithm for solving it, and theoretically proves that solving it leads to the perfect generative model.
    Part 1 of this series gave a high-level overview of how GANs work: • Gentle Intro to Genera...
    My blog series on GANs: blog.zakjost.com/tags/generat...
    The original paper from Ian Goodfellow: papers.nips.cc/paper/5423-gene...
    Mailing List: blog.zakjost.com/subscribe
    Discord Server: / discord
    Blog: blog.zakjost.com
    Patreon: / welcomeaioverlords
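
    For reference, the optimization function the video walks through is the two-player minimax value function from the Goodfellow et al. paper linked above (notation as in that paper):

      \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

    where D is the discriminator, G is the generator, p_data is the real-data distribution, and p_z is the prior on the generator's input noise.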

COMMENTS • 81

  • @jlee-mp4
    @jlee-mp4 3 months ago +4

    Holy sh*t, this guy is diabolically, criminally, offensively underrated. THE best explanation of GANs I have ever seen, somehow rooting it deeply in the mathematics while keeping it surface level enough to fit in a 12 min video. Wow

  • @elliotha6827
    @elliotha6827 6 months ago +6

    The hallmark of a good teacher is when they can explain complex topics simply and intuitively. And your presentation on GANs in this video truly marks you as a phenomenal one. Thanks!

    • @fidaeharchli4590
      @fidaeharchli4590 1 month ago

      I agreeeeeeeee, you are the best, thank you sooo much

  • @luisr1421
    @luisr1421 4 years ago +10

    Didn't think in a million years I'd get the math behind GANs. Thank you man

  • @shivammehta007
    @shivammehta007 4 years ago +27

    This is Gold!!! Pure Gold!!

  • @alaayoussef315
    @alaayoussef315 4 years ago +7

    Brilliant! Never thought I could understand the math behind GANs

  • @shaoxuanchen2052
    @shaoxuanchen2052 3 years ago +11

    OMG, this is the best explanation of GANs I've found these days!!!!! Thank you so much and I'm so lucky to find this video!!!!!!

  • @TheTakenKing999
    @TheTakenKing999 2 years ago +12

    Awesome explanation. The original GAN paper isn't too hard to read, but the "maximize the Discriminator" part always irked me. My understanding was correct, but I would always have trouble explaining it to someone else; this is a really well put together video. Clean, concise, and a good explanation. I think because of the way Goodfellow et al. phrased it, as "ascending the gradient", many people get stuck here, because beginners like us have gradient "descent" stuck in our heads lol.

  • @gianfrancodemarco8065
    @gianfrancodemarco8065 2 years ago

    Short, concise, clear. Perfect!

  • @deblu118
    @deblu118 5 months ago +1

    This video is amazing! You make things intuitive and really dig down to the core idea. Thank you! And I also subscribed to your blog!

  • @user-rr1jk1ws2n
    @user-rr1jk1ws2n 1 year ago

    Nice explanation! The argument at 7:13 once felt like a jump to me, but I found it similar to the 'calculus of variations' I learned in classical physics class.

  • @janaosea6020
    @janaosea6020 7 months ago +1

    Wow. This video is so well explained and well presented!! The perfect amount of detail and explanation. Thank you so much for demystifying GANs. I wish I could like this video multiple times.

  • @bikrammajhi3020
    @bikrammajhi3020 21 days ago

    Best mathematical explanation of GANs on the internet so far

  • @tusharkantirouth5605
    @tusharkantirouth5605 9 months ago

    Simply the best... short and crisp... thanks, and keep uploading such beautiful videos.

  • @williamrich3909
    @williamrich3909 3 years ago +1

    Thank you. This was very clear and easy to follow.

  • @dipayanbhadra8332
    @dipayanbhadra8332 4 months ago

    Great Explanation! Nice and clean! All the best

  • @wenhuiwang4439
    @wenhuiwang4439 5 months ago

    Great learning resource for GAN. Thank you.

  • @siddhantbashisth5486
    @siddhantbashisth5486 2 months ago

    Awesome explanation man.. I loved it!!

  • @superaluis
    @superaluis 4 years ago +1

    Thanks for the detailed video.

  • @dingusagar
    @dingusagar 4 years ago +1

    Best video explaining the math of GANs. Thanks!!

  • @DavesTechChannel
    @DavesTechChannel 4 years ago +1

    Great explanation man, I've read your article on Medium!

  • @Daniel-ed7lt
    @Daniel-ed7lt 4 years ago +5

    I have no idea how I found this video, but it has been very helpful.
    Thanks a lot and please continue making videos.

    • @welcomeaioverlords
      @welcomeaioverlords  4 years ago +2

      That's awesome, glad it helped. I'll definitely be making more videos. If there are any particular ML topics you'd like to see, please let me know!

    • @Daniel-ed7lt
      @Daniel-ed7lt 4 years ago +4

      @@welcomeaioverlords
      I'm currently interested in CNNs, and I think it would be really useful if you described their base architecture, the same as you did for GANs, while simultaneously explaining the underlying math from a relevant paper.

  • @jovanasavic4357
    @jovanasavic4357 3 years ago +1

    This is awesome. Thank you so much!

  • @shashanktomar9940
    @shashanktomar9940 3 years ago +2

    I have lost count of how many times I have paused the video to take notes. You're a lifesaver man!!

  • @EB3103
    @EB3103 2 years ago

    Best explainer of deep learning!

  • @toheebadura
    @toheebadura 2 years ago

    Many thanks, dude! This is awesome.

  • @symnshah
    @symnshah 3 years ago

    Such a great explanation.

  • @dman8776
    @dman8776 3 years ago +1

    Best explanation I've seen. Thanks a lot!

  • @tarunreddy7
    @tarunreddy7 7 months ago

    Lovely explanation.

  • @psychotropicalfunk
    @psychotropicalfunk 1 year ago

    Very well explained!

  • @paichethan
    @paichethan 2 years ago

    Fantastic explanation

  • @anilsarode6164
    @anilsarode6164 3 years ago

    God bless you, man !! Great Job !! Excellent !!!

  • @walidb4551
    @walidb4551 4 years ago +2

    THANK GOD I FOUND THIS ONE THANK YOU

  • @adeebmdislam4593
    @adeebmdislam4593 11 months ago

    Man, I immediately knew you listen to prog and play guitar when I heard the intro hahaha! Great explanation

  • @maedehzarvandi3773
    @maedehzarvandi3773 2 years ago +1

    you helped a lot 👏🏻🙌🏻👍🏻

  • @manikantansrinivasan5261
    @manikantansrinivasan5261 1 year ago

    thanks a ton for this!

  • @architsrivastava8196
    @architsrivastava8196 3 years ago +1

    You're a blessing.

  • @caiomelo756
    @caiomelo756 2 years ago

    Four years ago I read the original GAN paper for more than a month and could not understand what I was reading, and now it makes sense

  • @ishanweerakoon9838
    @ishanweerakoon9838 2 years ago +1

    Thanks, very clear

  • @muneebhashmi1037
    @muneebhashmi1037 3 years ago

    tbvh, couldn't have asked for a better explanation!

  • @ramiismael7502
    @ramiismael7502 3 years ago +1

    great video

  • @StickDoesCS
    @StickDoesCS 3 years ago +2

    Really great video! I have a little question, however, since I'm new to this field and I'm a little confused. Why is it that at 5:02 you mention ascending the gradient to maximize the cost function? I'd like to know exactly why this is the case, because I initially thought the cost function generally has to be minimized, so the smaller the cost, the better the model. Maybe it's because of how I'm looking at cost functions in general? Like, is there a notion of the cost already being something we want to be small, so now we'd simply treat it as the negative of a number, where that number is the one you're referring to as the one we want to maximize? Subscribed by the way, keep up the good work! :>

    • @welcomeaioverlords
      @welcomeaioverlords  3 years ago +5

      In most ML, you optimize such that the cost is minimized. In this case, we have two *adversaries* that are working in opposition to one another. One is working to increase the objective (the discriminator) and one is trying to decrease it (the generator).

  • @bernardoolisan1010
    @bernardoolisan1010 1 year ago

    When the training process is done, do we only use the generator model? Or what? How do we use it in production?

  • @shourabhpayal1198
    @shourabhpayal1198 2 years ago +1

    Good one

  • @bernardoolisan1010
    @bernardoolisan1010 1 year ago

    I have a question: at 4:49, where do we take the real samples from? For example, if we want to generate "faces", the generator's m samples are just random vectors with the dimensions of a face image, so they can be super ugly, blurry pictures, right? But what about the real samples? Are they just face images taken from the internet?

  • @123epsilon
    @123epsilon 2 years ago

    Does anyone know any good resources to learn more ML theory like how it’s explained in this video? Specifically content covering proofs and guaranteeing convergence

  • @bernardoolisan1010
    @bernardoolisan1010 1 year ago

    Also, where it says "theory alert", does that mean it's only for proving that the model is kind of good? Like, that the min value is a good value?

  • @jrt6722
    @jrt6722 10 months ago

    Would the loss function work the same if I switched the labels of the real and fake samples (0 for real samples and 1 for fake samples)?

  • @friedrichwilhelmhufnagel3577
    @friedrichwilhelmhufnagel3577 8 months ago

    CANNOT UPVOTE ENOUGH. EVERY STATISTICS OR ML MATH VIDEO SHOULD BE AS CLEAR AS THIS. YOU DEMONSTRATE THAT EXPLAINING MATH AND THEORY IS ONLY A MATTER OF AN ABLE TEACHER

  • @Darkev77
    @Darkev77 2 years ago +2

    This was really good! Though could someone explain to me what he means by maximizing the loss function for the discriminator? Shouldn't you also train your discriminator via gradient descent to improve classification accuracy?

    • @welcomeaioverlords
      @welcomeaioverlords  2 years ago +1

      To minimize the loss, you use gradient descent. You walk down the hill. To maximize the loss, you use gradient ASCENT. You calculate the same gradient, but walk up the hill. The discriminator walks up, the generator walks down. That’s why it’s adversarial. You could multiply everything by -1 and get the same result.

    • @sunnydial1509
      @sunnydial1509 2 years ago +1

      I'm not sure, but in this case I think we maximize the discriminator's loss function as it is expressed as log(1 - D(G(z))), which is equivalent to minimizing log(D(G(z))) as happens in normal neural networks... so the discriminator is learning by maximizing the loss in this case
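
      A minimal sketch of the alternating updates described in this thread, assuming a PyTorch-style setup with toy networks, optimizers, and data that are purely illustrative (nothing below is from the video itself). The discriminator ascends the value function by descending on its negation; the generator descends the same function:

        # Illustrative GAN training loop: D ascends V, G descends V.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        noise_dim, data_dim = 8, 2

        # Tiny stand-in networks (assumptions, not the video's architecture).
        G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
        D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

        opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
        opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)

        def sample_real(m):
            # Stand-in for real data: points from a shifted Gaussian.
            return torch.randn(m, data_dim) + 2.0

        eps = 1e-8  # numerical safety inside the logs
        for step in range(1000):
            m = 64

            # Discriminator step: ASCEND V, implemented as descent on -V.
            x_real = sample_real(m)
            x_fake = G(torch.randn(m, noise_dim)).detach()  # freeze G for this step
            V = torch.log(D(x_real) + eps).mean() + torch.log(1 - D(x_fake) + eps).mean()
            opt_D.zero_grad()
            (-V).backward()  # maximizing V == minimizing -V
            opt_D.step()

            # Generator step: DESCEND V (only this term depends on G).
            x_fake = G(torch.randn(m, noise_dim))
            V_G = torch.log(1 - D(x_fake) + eps).mean()
            opt_G.zero_grad()
            V_G.backward()  # walk down the same objective
            opt_G.step()

      Multiplying V by -1 and swapping "ascend" and "descend" gives the same procedure, which is the point made in the reply above; the original paper also suggests a non-saturating alternative for the generator (maximize log D(G(z))), but the sketch follows the minimax form discussed here.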

  • @koen199
    @koen199 4 years ago +1

    @7:20 Why are p_data(x) and p_g(x) assumed constant over x in the integral (a and b)? In my mind the probability changes for each sample...

    • @welcomeaioverlords
      @welcomeaioverlords  4 years ago +2

      Hi Koen. When I say "at any particular point" I mean "at any particular value of x". So p_data(x) and p_g(x) change with x. Those are, for example, the probabilities of seeing any particular image either in the real or generated data. The analysis that follows is for any particular x, for which p_data and p_g have a single value, here called "a" and "b" respectively. The logical argument is that if you can find the D that maximizes the quantity under the integral for every choice of x, then you have found the D that maximizes the integral itself. For example: imagine you're integrating over two different curves and the first curve is always larger in value than the second. You can safely claim the integral of the first curve is larger than the integral of the second curve. I hope this helps.

    • @koen199
      @koen199 4 years ago

      @@welcomeaioverlords Oh wow, it makes sense now! Thanks man... keep up the good work
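
      Writing out the pointwise argument from the reply above in the video's notation, with a = p_data(x) and b = p_g(x) held fixed for a particular x, the integrand is maximized over D by:

        f(D) = a \log D + b \log(1 - D), \qquad f'(D) = \frac{a}{D} - \frac{b}{1 - D} = 0 \;\Rightarrow\; D^*(x) = \frac{a}{a + b} = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}

      Choosing this D at every x maximizes the integrand everywhere, and therefore maximizes the integral itself.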

  • @goodn1051
    @goodn1051 4 years ago +2

    Thaaaaaaank youuuuuu

    • @welcomeaioverlords
      @welcomeaioverlords  4 years ago +1

      I'm glad you got value from this!

    • @goodn1051
      @goodn1051 4 years ago

      @@welcomeaioverlords Yup... when you're self-taught, it's videos like this that really help so much

  • @adityarajora7219
    @adityarajora7219 3 years ago

    The cost function isn't the difference between the true and predicted value, right? It's the actual predicted value in the range [0, 1], right??

    • @welcomeaioverlords
      @welcomeaioverlords  3 years ago

      It's structured as a classification problem where the discriminator estimates the probability of the sample being real or fake, which is then compared against the ground truth of whether the sample is real, or was faked by the generator.

    • @adityarajora7219
      @adityarajora7219 3 years ago

      @@welcomeaioverlords Thank you sir for your reply, Got it.

  • @adityarajora7219
    @adityarajora7219 3 years ago

    what do you do for a living?

  • @abdulaziztarhuni
    @abdulaziztarhuni 1 year ago

    This was hard for me to follow; where should I get more resources?

  • @saigeeta1993
    @saigeeta1993 3 years ago

    PLEASE EXPLAIN TEXT TO SPEECH SYNTHESIS EXAMPLE USING GAN

  • @jorgecelis8459
    @jorgecelis8459 3 years ago

    Very good explanation. One question: If we know the form of the optimal discriminator, don't we only need to get the Pg(x), as we have all the statistics of P(x) in advance? And that would be 'just' sampling from the z?

    • @welcomeaioverlords
      @welcomeaioverlords  3 years ago +1

      Thanks for the question, Jorge. I would point out that knowing the statistics of P(x) is very different than knowing P(x) itself. For instance, I could tell you the mean (and higher-order moments) of a sample from an arbitrary distribution and that wouldn't be sufficient for you to recreate it. The whole point is to model P(x) (the probability that a particular pixel configuration is of a face) , because then we could just sample from it to get new faces. Our real-life sample, which is the training dataset, is obviously a small portion of all possible faces. The generator effectively becomes our sampler of P(x) and the discriminator provides the training signal. I hope this helps.

    • @jorgecelis8459
      @jorgecelis8459 3 years ago

      @@welcomeaioverlords Right... the statistics of P(x) ≠ the distribution P(x); if we knew P(x) we could just generate images and there would be no problem for a GAN to solve. Thanks.

  • @theepicguy6575
    @theepicguy6575 2 years ago +1

    Found a gold mine

  • @sarrae100
    @sarrae100 4 years ago +1

    What the fuck, u explained it like it's a toy story, u beauty 😍

  • @kelixoderamirez
    @kelixoderamirez 3 years ago

    permission to learn sir

  • @samowarow
    @samowarow 1 year ago

    ua-cam.com/video/J1aG12dLo4I/v-deo.html
    How exactly did you do this variable substitution? Seems not legit to me

    • @JoesMarineRush
      @JoesMarineRush 1 year ago

      I also stopped at this step. I think it is valid.
      Remember that the transform g is fixed. In the second term, the distributions of z and g(z) are the same, so we can set x = g(z) and replace the z with x. Then we can merge the first and second integrals together, the main difference being that the first and second terms have different probabilities for x, since they are sampled from different distributions.

    • @samowarow
      @samowarow 1 year ago

      @@JoesMarineRush It's not in general legit to say that the distributions of Z and g(Z) are the same. Z is a random variable. A non-linear function of Z changes its distribution.

    • @JoesMarineRush
      @JoesMarineRush 1 year ago

      @@samowarow I looked at it again the other day. Yes, you are right, g can change the distribution of z.
      There is a clarification step missing: when setting x = g(z) and swapping out z for x, the distribution of x is the one induced by pushing z through g. The link between the distributions of z and g(z) needs clarification. I'll try to think on it.
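
      The step in question is the change-of-variables identity (law of the unconscious statistician): if p_g denotes the distribution of x = G(z) when z ~ p_z, i.e. the pushforward of p_z through the fixed generator G, then

        \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))] = \mathbb{E}_{x \sim p_g}[\log(1 - D(x))]

      so the two expectations in the value function can be merged into a single integral over x, weighted by p_data(x) in the first term and p_g(x) in the second.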