VQ-GAN: Taming Transformers for High-Resolution Image Synthesis | Paper Explained

  • Published Jan 21, 2025

COMMENTS • 32

  • @TheAIEpiphany
    @TheAIEpiphany  3 years ago +12

    What do you get when you combine DeepMind's VQ-VAE, GANs, a perceptual loss, and OpenAI's GPT-2 and CLIP? Well, I dunno, but the results are awesome haha!

  • @johnpope1473
    @johnpope1473 3 years ago +7

    I like the low-level stuff. I attempt to read these papers, and your grasp and explanations give me confidence that I can decode them too. Almost always they're built on top of other work. I liked when you distilled that history out in the StyleGAN session.

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago +1

      Thanks! It's a fairly complex tradeoff to decide when to stop digging into more nitty-gritty details. 😅 I am still figuring it out.

    • @johnpope1473
      @johnpope1473 3 years ago

      @@TheAIEpiphany I once came across some Python code I cloned on GitHub that could take a PDF and create multiple-choice quiz questions based on any content. Maybe I could help you one day and have you nut out the answers. You remember that sort of stuff from physics class, where the teacher makes things clear, eliminating nonsense and elucidating the correct answer.

  • @moaidali874
    @moaidali874 3 years ago +4

    The in-depth explanation is pretty useful. Thank you so much.

  • @jisujeon5799
    @jisujeon5799 3 years ago +2

    YouTube should have recommended me this channel a year ago. What quality content! Keep it up :D

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago +1

      Hahah, mysterious are the ways of the YT algorithm. 😅

  • @ronitrastogi9016
    @ronitrastogi9016 1 year ago

    In-depth explanations are a game changer. Keep doing the same. Great work!!

  • @daesoolee1083
    @daesoolee1083 2 years ago +1

    I think you cover both the high-level explanation and details fairly well :) Keep it up, please.

  • @akashsuryawanshi6267
    @akashsuryawanshi6267 1 year ago

    Keep it up with the detailed explanations. Those who aren't interested in the low-level stuff can just skip the detailed parts, so it's a win for both. Thank you.

  • @alexijohansen
    @alexijohansen 3 years ago

    So great! Love the explanation of the loss functions.

  • @hoomansedghamiz2288
    @hoomansedghamiz2288 3 years ago +3

    Great work and explanation. You have probably noticed, but VQ-VAE is a bit rough to train since the quantization step is not differentiable. In parallel there is the Gumbel-Softmax, which is differentiable and therefore easier to train; wav2vec 2.0 uses that. It might be interesting to cover that next :) cheers
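
The difference this comment points at: VQ-VAE's nearest-neighbour codebook lookup has no gradient, so it is normally trained with a straight-through estimator, whereas a Gumbel-Softmax relaxation makes the code selection itself differentiable (the route wav2vec 2.0 takes). Below is a minimal PyTorch-style sketch of a Gumbel-Softmax quantizer; the module name, tensor shapes, and default sizes are illustrative assumptions, not code from the paper or the video:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelQuantizer(nn.Module):
    """Illustrative Gumbel-Softmax vector quantizer (names and sizes are assumed)."""
    def __init__(self, num_codes=1024, code_dim=256, hidden_dim=256):
        super().__init__()
        self.to_logits = nn.Conv2d(hidden_dim, num_codes, kernel_size=1)  # per-location code logits
        self.codebook = nn.Embedding(num_codes, code_dim)                  # learnable code vectors

    def forward(self, h, tau=1.0, hard=True):
        # h: encoder features of shape (B, hidden_dim, H, W)
        logits = self.to_logits(h)                                   # (B, num_codes, H, W)
        # Gumbel-Softmax yields (approximately) one-hot code weights while
        # keeping the selection differentiable end to end.
        one_hot = F.gumbel_softmax(logits, tau=tau, hard=hard, dim=1)
        # Mix the code vectors with the (soft) one-hot weights.
        z_q = torch.einsum('bnhw,nd->bdhw', one_hot, self.codebook.weight)
        indices = one_hot.argmax(dim=1)                               # discrete ids for the transformer stage
        return z_q, indices
```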

  • @MostafaTIFAhaggag
    @MostafaTIFAhaggag 2 years ago

    this is a master pieceee.

  • @akashraut3581
    @akashraut3581 3 years ago +2

    U are on fire 🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥.
    This video was much needed for me. Thank you so much.

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago +1

      I am just getting started 😂 awesome!

  • @vinciardovangoughci7775
    @vinciardovangoughci7775 3 years ago +1

    Great job! The conditioning part is super useful. The paper is confusing there.

  • @rikki146
    @rikki146 1 year ago

    15:56 I thought it was arbitrary at first but later realized it is just balancing between the loss terms, namely L_{rec} and L_{GAN}. If the gradients of L_{GAN} are big, then less weight is put on L_{GAN}, and vice versa.
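
The balancing described here is the paper's adaptive weight: λ is computed from the ratio of the two losses' gradients with respect to the last decoder layer, so a large L_{GAN} gradient shrinks λ (less weight on the GAN term), exactly as the comment says:

```latex
\lambda \;=\; \frac{\nabla_{G_L}\!\left[\mathcal{L}_{\mathrm{rec}}\right]}
                   {\nabla_{G_L}\!\left[\mathcal{L}_{\mathrm{GAN}}\right] + \delta}
```

Here \nabla_{G_L}[\cdot] denotes the gradient with respect to the decoder's last layer G_L, and \delta is a small constant (10^{-6} in the paper) for numerical stability.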

  • @MuhammadAli-mi5gg
    @MuhammadAli-mi5gg 3 years ago

    Thanks again, a masterpiece like the VQ-VAE one. But it would be great if you also added a code walkthrough like in the VQ-VAE video, perhaps an even more detailed one.
    Thanks aloooot again!

  • @xxxx4570
    @xxxx4570 3 years ago

    Thanks for your awesome explanation of this paper. I want to ask a question: how does the model use the transformer to achieve autoregressive prediction?
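
Roughly answering the question: in the second stage the flattened grid of codebook indices is treated as a sequence, and a GPT-style transformer with causal (masked) self-attention is trained to predict each index from all previous ones; at inference time the indices are sampled one at a time, and the VQGAN decoder turns the completed grid back into an image. A small sketch of that sampling loop; the function and variable names are assumptions for illustration, not the authors' code:

```python
import torch

@torch.no_grad()
def sample_indices(transformer, cond, num_tokens):
    """Autoregressively sample `num_tokens` codebook indices after a conditioning prefix.

    `transformer` is assumed to be a causal (GPT-style) model mapping an index
    sequence of shape (B, T) to next-token logits of shape (B, T, vocab_size).
    """
    seq = cond                                     # conditioning indices (or just a start token)
    for _ in range(num_tokens):
        logits = transformer(seq)                  # (B, T, vocab_size)
        next_logits = logits[:, -1, :]             # last position predicts the next index
        probs = torch.softmax(next_logits, dim=-1)
        next_idx = torch.multinomial(probs, num_samples=1)  # sample one index per batch element
        seq = torch.cat([seq, next_idx], dim=1)    # append and repeat
    return seq[:, cond.shape[1]:]                  # drop the prefix; decode these with the VQGAN decoder
```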

  • @fly-code
    @fly-code 3 years ago +1

    thank you sooo much

  • @vinhphanxuan5654
    @vinhphanxuan5654 3 years ago

    How did you do it? Can you share with me? Thank you.

  • @jonathanballoch
    @jonathanballoch 2 years ago

    I feel like you lost me on the semantic segmentation → image generation step. You say that the semantic token vector from the semantic VQGAN is appended to the front of the CLS token and then the token vector of... the output VQGAN? And then this 2N+1-length vector is the input, and the output is a length-N vector? How is this possible? Don't transformers necessarily have input and output of the same length?
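
On the length question raised here: the transformer does emit one output per input position, but the conditioning indices are just a prefix; the loss is computed only on the positions that predict image tokens, and at sampling time the image tokens are generated one by one after the prefix, so exactly N image indices come out. A rough sketch of a training step under those assumptions (all names are illustrative, not the authors' code):

```python
import torch
import torch.nn.functional as F

def training_step(transformer, cond_idx, img_idx):
    """cond_idx: (B, Nc) segmentation-codebook indices; img_idx: (B, N) image-codebook indices."""
    inp = torch.cat([cond_idx, img_idx[:, :-1]], dim=1)     # prefix + image tokens shifted by one
    logits = transformer(inp)                                # (B, Nc + N - 1, vocab): one output per input position
    img_logits = logits[:, cond_idx.shape[1] - 1:, :]        # keep only the N positions that predict image tokens
    loss = F.cross_entropy(img_logits.reshape(-1, img_logits.shape[-1]),
                           img_idx.reshape(-1))              # cross-entropy over the image tokens only
    return loss
```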

  • @kirtipandya4618
    @kirtipandya4618 3 years ago

    Answer: I find the in-depth explanation very, very useful. 🙂 You could also explain the code here. But great work. Thanks. 👍🏻🙂
    Could you please also review the paper „A Disentangling Invertible Interpretation Network for Explaining Latent Representations“ from the same authors? It would be great. Thank you. 🙂

  • @marcotroster8247
    @marcotroster8247 1 year ago

    It's always interesting to me how a bit of resource constraint can produce very intelligent, next-gen results instead of just pumping up the model with more weights and using crazy amounts of compute 😂

  • @yasmimrodrigues5437
    @yasmimrodrigues5437 3 years ago +1

    Some timestamped segments in the video are not adjacent to each other.

  • @TF2Shows
    @TF2Shows 6 months ago

    The adversarial loss - I think the explanation is wrong.
    You said the discriminator tries to maximize it; however, you have just shown that it tries to minimize it (the term becomes 0 if D(x) is 1 and D(\hat{x}) is 0). So the discriminator tries to minimize it (and because it's a loss function, that makes sense), and the generator tries to do the opposite, maximize it, to fool the discriminator.
    So I think you mislabeled the objective: L_GAN is what we try to minimize (minimize the loss) in order to train the discriminator.
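
One point of reference for this thread: with the adversarial term written as in the paper, L_GAN = log D(x) + log(1 - D(x̂)), both logarithms are at most 0, so the value 0 reached at D(x) = 1 and D(x̂) = 0 is the term's maximum rather than its minimum. The complete objective is a min-max, with the discriminator maximizing L_GAN while the encoder, decoder, and codebook minimize it:

```latex
\min_{E,\,G,\,\mathcal{Z}} \;\max_{D}\;
\mathbb{E}_{x \sim p(x)}\Big[\,
  \mathcal{L}_{\mathrm{VQ}}(E, G, \mathcal{Z})
  \;+\; \lambda\,\mathcal{L}_{\mathrm{GAN}}\big(\{E, G, \mathcal{Z}\},\, D\big)
\,\Big],
\qquad
\mathcal{L}_{\mathrm{GAN}} = \log D(x) \;+\; \log\!\big(1 - D(\hat{x})\big)
```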

  • @dfergrg4053
    @dfergrg4053 3 years ago

    How did you do it? Can you share with me? Thank you.