DeepMind x UCL | Deep Learning Lectures | 9/12 | Generative Adversarial Networks

  • Published 24 Nov 2024

COMMENTS • 28

  • @leixun 4 years ago +26

    *DeepMind x UCL | Deep Learning Lectures | 9/12 | Generative Adversarial Networks (GANs)*
    *My takeaways:*
    *1. Overview: why are we interested in GANs 0:25*
    1.1 GANs advances 4:22
    1.2 Learning an implicit model through a two-player game: discriminator and generator 5:28
    -Generator 6:38
    -Discriminator 8:03
    1.3 Training GANs 9:02
    1.4 Unconditional and conditional generative models 41:18
    *2. Evaluating GANs 43:52*
    *3. The GAN Zoo 50:55*
    3.1 Image Synthesis with GANs: MNIST to ImageNet 51:46
    -The original GANs 52:02
    -Conditional GANs 53:16
    -Laplacian GANs 54:08
    -Deep convolutional GANs 57:30
    -Spectrally Normalised GANs 1:00:20
    -Projection discriminator 1:01:54
    -Self-attention GANs 1:03:12
    -BigGANs 1:04:49
    -BigGANs-deep 1:11:24
    -LOGAN 1:14:12
    -Progressive GANs 1:15:38
    -StyleGANs 1:16:58
    -Summary: from simple images to large-scale databases of high-resolution images 1:19:23
    3.2 GANs for representation learning 1:21:05
    -Why GANs?
    --Motivation example 1: semantics in DCGAN latent space 1:21:28
    --Motivation example 2: unsupervised category discovery with BigGANs 1:22:16
    -InfoGANs 1:23:59
    -ALI/bidirectional GANs 1:25:54
    -BigBiGANs 1:29:28
    *3.3 GANs for other modalities and problems 1:33:05*
    -Pix2Pix: paired translation between two image domains 1:33:18
    -CycleGANs: unpaired translation between two image domains 1:34:48
    -GANs for audio synthesis: WaveGAN, MelGAN, GAN-TTS 1:36:19
    -GANs for video synthesis and prediction: TGAN-2, DVD-GAN, TriVD-GAN 1:37:19
    -GANs are everywhere 1:39:10
    --Imitation learning: GAIL
    --Image editing: GauGAN
    --Program synthesis: SPIRAL
    --Motion transfer: Everybody dance now
    --Domain adaptation: DANN
    --Art: Learning to see

    • @harshvardhangoyal5362 3 years ago +1

      mvp

    • @leixun 3 years ago +2

      @harshvardhangoyal5362 Feel free to check out my research on my channel.

  • @agamemnonc 1 year ago

    Great lecture, thank you! One small note: I believe the phrase "distance between two probability distributions" is not quite rigorous. Even the KL divergence is not really a distance metric, since it is not symmetric.
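
    To make the asymmetry concrete, here is a small worked example (the Bernoulli parameters are my own illustrative choice, not from the lecture):

        \mathrm{KL}(p \,\|\, q) = \sum_x p(x) \log \frac{p(x)}{q(x)}

        % with p = \mathrm{Bernoulli}(0.5) and q = \mathrm{Bernoulli}(0.9):
        \mathrm{KL}(p \,\|\, q) = 0.5 \log\tfrac{0.5}{0.9} + 0.5 \log\tfrac{0.5}{0.1} \approx 0.51
        \mathrm{KL}(q \,\|\, p) = 0.9 \log\tfrac{0.9}{0.5} + 0.1 \log\tfrac{0.1}{0.5} \approx 0.37

    Since the two values differ, KL is asymmetric and therefore not a metric (it also violates the triangle inequality).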

  • @lukn4100 3 years ago +2

    Great lecture and big thanks to DeepMind for sharing this great content.

  • @shivtavker 4 years ago

    At 17:48, why does KL(p, p^*) look like that? The divergence would be minimised when p(x) is as low as possible, so p could be a distribution that does very badly on both Gaussians.
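
    A sketch of why p cannot simply be "low everywhere" (my note, not from the lecture): p must integrate to 1, and

        \mathrm{KL}(p \,\|\, p^*) = \int p(x) \log \frac{p(x)}{p^*(x)} \, dx

    Any mass that p places where p^*(x) \approx 0 makes the ratio p(x)/p^*(x), and hence the integral, blow up. So the best unimodal p concentrates on one of the two Gaussians rather than spreading thinly over both, which is the mode-seeking behaviour the slide illustrates.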

  • @robertfoertsch 4 years ago +2

    Excellent. Added to my research library; sharing through TheTRUTH Network...

  • @CSEAsapannaRakeshRakesh 4 years ago

    @10:58 "We only do few steps of SGD for discriminator" Is it 1 k-sized step for 1-epoch (iteration)

  • @awadelrahman 4 years ago +4

    Apart from the extremely wonderful lecture!!!!! I always wonder why GAN people have a very similar "talking" style and tone to Goodfellow!! @ Jeff :D ... Thanks a lot ;)

  • @kirtipandya4618 3 years ago +1

    Can we access the code exercises?

  • @mohitpilkhan7003 4 years ago +1

    It's an amazing overview. Loved it very much. Thank you, DeepMind, and love you.

    • @pervezbhan1708 2 years ago

      ua-cam.com/video/r_Q12UIfMlE/v-deo.html

  • @mathavraj9378 4 years ago

    Could someone tell me why we call it "latent" noise? Latent means something hidden, right? So what is hidden about the input noise?

    • @haejinsong1835 3 years ago +1

      The idea is that the latent noise (the input to the generator) is not an observable variable. People often use "unobservable" / "hidden" / "latent" for variables that are not observed in the dataset. Cf. if we have a collection of images, the images are the observable variables.
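
      A tiny self-contained sketch of that distinction (the toy generator and shapes are illustrative): the latent z is drawn from a fixed prior rather than read from any dataset, and only G(z) lives in data space:

          import torch
          import torch.nn as nn

          latent_dim = 8
          G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))

          z = torch.randn(4, latent_dim)   # latent: sampled from a prior, never observed
          x_fake = G(z)                    # output in observable (data) space

          # interpolating between two latent points moves smoothly through data space,
          # which is where the "semantics in the latent space" observations come from
          z0, z1 = torch.randn(latent_dim), torch.randn(latent_dim)
          ts = torch.linspace(0, 1, 5).unsqueeze(1)
          samples = G((1 - ts) * z0 + ts * z1)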

  • @CSEAsapannaRakeshRakesh 4 years ago

    @9:17 Why does the binary cross-entropy function have no negative sign?

    • @CSEAsapannaRakeshRakesh 4 years ago +1

      @10:12 Is it because we are "maximizing" D's prediction accuracy, i.e. cost(D) = -cost(G)?
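
      That is the idea: the slide writes the discriminator's objective as a value to be maximized, which is just the negative of the usual BCE loss (the notation below is mine, following the standard minimax formulation):

          V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]

      The discriminator ascends on V (equivalently, descends on the BCE loss -V), while the generator descends on it: \min_G \max_D V(D, G). In this zero-sum formulation, cost(D) = -cost(G).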

  • @Daniel-mj8jt 1 year ago

    Excellent lecture!

  • @jayanthkumar9637 3 years ago

    I just loved her voice

  • @quosswimblik4489 3 years ago

    GANs are cool, but what could you do with CIANs (clown-and-identifier adversarial networks)? You would have one AI trying to identify things and another network trying to fool the identifying AI into making a mistake.
    The clown AI tries to find holes in the identifier's mindset so as to give the identifier a more general fit, and is for training identification, whereas the GAN is the other way round, training the generator on a specific imitation task.

  • @sanjeevi567 4 years ago

    Wonderful, thanks guys... GANs (wow)!

  • @lizgichora6472 3 years ago

    Thank you, very interesting work on CycleGAN translating between domains.

  • @luksdoc 4 years ago

    A wonderful lecture.

  • @GeneralKenobi69420 4 years ago

    1:31:10 Lol are we just gonna ignore the pic of a woman wearing black latex pants? 👀
    (Also do NOT zoom in on that picture in the bottom left... It's like some of the worst nightmare fuel I've ever seen in my life. JFC)

  • @myoneuralnetwork3188 4 years ago

    If you'd like a beginner-friendly, easy-to-read guide to GANs and building them with PyTorch, you might find "Make Your First GAN With PyTorch" useful: www.amazon.com/dp/B085RNKXPD. All the code is open source on GitHub: github.com/makeyourownneuralnetwork/gan

  • @iinarrab19 4 years ago +1

    Great. My only feedback is that she needs to master how to speak effectively, as in when to pause and breathe properly.