Really impressive presentation and storytelling!
the creepy thing is, you can now find a latent code of your own face, then generate a true alter ego!
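(For the curious: "finding a latent code" for a real face is usually done by gradient descent on the latent itself. Below is a toy, runnable Python/PyTorch sketch of the idea only; the real pipelines, like the encode_images.py script mentioned further down, load the pretrained StyleGAN generator and add a VGG perceptual loss, so the tiny generator and plain pixel loss here are stand-ins.)

    import torch
    import torch.nn as nn

    # Stand-in generator with random weights; real code loads StyleGAN's synthesis network.
    G = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 3 * 64 * 64), nn.Tanh())
    for p in G.parameters():
        p.requires_grad_(False)                # the generator stays frozen

    target = torch.rand(3 * 64 * 64) * 2 - 1   # stand-in for your (preprocessed) photo
    z = torch.zeros(512, requires_grad=True)   # the latent code we optimize
    opt = torch.optim.Adam([z], lr=0.05)

    for step in range(200):
        opt.zero_grad()
        loss = ((G(z) - target) ** 2).mean()   # real encoders add a VGG perceptual loss here
        loss.backward()                        # gradients flow through the frozen generator into z
        opt.step()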
I love your weekly updates in A.I., Henry
@@pixel7038 Thank you so much!!
Why do we not get people like that in college? I just had a 2 period course about _basic_ ML techniques (linear and logistic regression, classification and basic concepts like features, labels and hypothesis space) that took 2 hours to introduce each concept and then 10+ hours of self-study, most of which was spent ploughing through insane math jargon to realize just how unnecessary it is to explain the relatively simple concepts underneath. This is 100 times more complicated and the guy just explained the whole thing in 25 minutes and I have no questions! Bravo to him, this is why online education is replacing universities. Good teachers get a chance to reach people through platforms like this.
I know, right? They overcomplicate things so much, and then justify it by claiming it's only "rigour" to learn this way.
This guy explains the concepts correctly. There are a lot of channels here on YouTube that explain a paper based on their speculations rather than the facts and information presented in the paper. You get my sub :)
I miss your videos. Your channel is definitely the single best ML YouTube channel I've seen. And that by a very wide margin.
Please tell us you're alive and that you'll do more videos. We miss you.
I swear some college professors spend a year of lectures to convey the same amount of information that this video does in 25 mins
Or maybe it's because you don't pay attention bro lmao
@@ReaperOnRepo Or maybe the prof just sucks? Could be both.
I've seen other video explanations of this, but they were not clear at all, especially for people outside the deep learning world. This was so well explained that it helped me make a real leap in understanding the whole process. Even the notebooks are so well curated, with accompanying information that makes them worth keeping safe.
Great job, and keep it up.
These videos are gold. Cleanly explained; they let me grasp the concept even though I am an enthusiast (and not an expert in the field).
Extremely well explained; a beginner can easily get the gist of GANs and an overview. This channel is so useful for anyone who tries to read and understand ideas through papers and finds it difficult to keep up with the complexity.
Superb - lucid, comprehensive, well-paced, and designed for human beings, unlike so much material in this space!
Hands down the best GANs video I've seen on YouTube so far!
WOW!! That was simply excellent!!! That has to be one of the best videos on GANs I have seen yet. Please continue doing more of these videos (with a notebook) - that is just outstanding work.
Fantastic explanation and presentation. I have been learning GANs for a few days and your video is by far the best resource I have found. Keep the hard work and the good content coming. Thank you!
You solved in 20 minutes problems and questions I had been trying to figure out for weeks. Thank you.
I'm glad that YouTube suggested this channel to me. There are very few channels with quality content on these topics, and this is the best.
this is the most amazing thing I've seen on YouTube!
It's an in-depth video, but it feels casual. It's technical, but it feels fun. It's 25 minutes but it feels short. Marvelous!
I cannot convey how much I adore your diamond cellar of a channel
Thank you!
I'm new to GANs but the explanation was so intuitive and clear!
Kudos and keep up the good work mate!
A fantastic talk about GANs. A lot of ideas are incorporated; I got a great brainstorming session out of watching this video.
I've always been a fan of autoencoders over GANs because of all the control you get with autoencoders, but damn, it appears GANs have come a long way since I last checked up on them. Amazing video Xander!
Mah boi @Jabrils!
VQ-VAE for the win!
There's a new autoencoder paper at CVPR 2020 that leverages these ideas from StyleGAN; it's called Adversarial Latent Autoencoders. Pretty cool idea too
I literally wrote an AutoEncoder (with a precision error of +- 0.95) ONCE, and now it's popping up everywhere!
Incredible video, Xander!! Thanks so much for this. GANs are one of the most interesting topics in ML and AI at the moment. This is definitely one of the best, if not the best, video on GANs I've seen. And thanks for providing the notebook. I'm sure I will have a lot of fun messing around with it :)
It was really great. You've done a really awesome job clarifying the issues I had after reading the original papers. Really a great fan. Hats off to you.
So happy to see you're back! I needed a StyleGAN video so much. Thank you for your great job, keep doing this!
Well done! Clear and informative! So far the best DL channel I know
What a brilliant video! Thanks a ton, I finally understood why the mapping network is needed and how we find the latent vector for an image :)
Thank you so much for this video. I struggled to understand GANs as someone with no background in AI, but your video simplified it for me.
Your presentation style was superb. It made a complex topic more understandable. Thank you.
Very impressive, this gives a really good insight & intuition into GANs.
Thanks & also thanks to Ian Goodfellow and everyone who has in one way or another contributed POSITIVELY to this powerful innovation.
Best video on GANs & StyleGAN! You should do these more often. And I hope to see one on StyleGAN2.
With such high-quality videos, you should consider making an online course or something so we can help sponsor your work and the creation of more videos.
With the first-ever video I watched on this channel, it took second place in my favorite channels list, just after TwoMinutePapers. And boy, the placement may change at any moment. I can't even list all the things done very well in this video. Thank you for the work.
Very good work! The notebook to play/learn with is an awesome idea. Thanks!
Can't find any better explanation than this. Thanks a lot for all the effort.
The most useful channel I've ever seen. You should start your Patreon page. Please keep doing what you are doing.
Great that you are back, Xander! Keep doing your videos, they are awesome!
Hey, it's amazing. Your video totally sums up the paper. Keep posting more videos. Your explanations are far, far better than Siraj's.
This channel is criminally under subscribed. Keep up the good work!
Best video I have watched about GAN. Great job!
The efficiency of this video has gone through the roof!!!
One of the best explanations of GANs. The way you presented it was awesome.
I'm glad I clicked [2] on thispersondoesnotexist.com because [1] is complete gibberish to me. This video is very clear and easy to understand and you speak at an easy to understand pace. Thank you for making this.
21:42 for uncanny valley
2:14 you don't have to make Emilia Clarke smile... She is always smiling...
Truth
This channel is so good! Thanks for sharing your code. I am excited to dive in and play with it.
Excellent presentation, very intuitive explanations, complemented by ready-to-play code, pre-optimized and trained? 🤯
I've watched all the videos you have posted, and they were amazing. I hope you guys can bring us more exciting videos : )
Man, I love your videos. You disappeared a long time ago; thanks for coming back.
One of the best YouTube channels. Is it possible for you to do a detailed hands-on course for GANs???
Props for the good notes and some humor in your notebooks. Awesome.
It looks like a movie. :) Thanks a lot, you saved me a lot of time in understanding the actual paper. Now I can understand it more easily and quickly. Good presentation.
Beautifully explained, you removed a lot of the uncertainty I had whilst reading the paper and have made it far easier for me to begin implementing myself!
I was so excited to try out "2. Face Editing Notebook (with pretrained networks)" so I could prepare this as a project for my high school class next year. BUT it failed at the point of installing/downgrading to TF v1.12.2 and CUDA 9.0. Apparently Colab defaults to TF 1.15 now, and the oldest tensorflow-gpu its pip can see is 1.13.1. If you could update the notebooks, it would be so awesome!
Perfect YouTube channel doesn't exis...
and somewhere inside the latent space... there are "coordinates" where psychopaths and serial killers are located... who knows...
How is this not the top YouTube comment of all time??
Anyone else looking at people who can't get the coronavirus on that website before they came here?
@@dayhookah Because the comment rating is part of the channel, it does not exist!
I went to that website wanting to find fake faces to use as a Roblox ID, to see what would happen, but I never did it. And I saw some creepy faces whenever two people show up in one image.
one of the most informative GAN videos I've seen
Could you recommend approaches that use the GAN architecture to train a DRL policy (generator) on demonstrations? (4:37)
I'm lost in an Internet rabbit hole and do not understand a word of what I am watching, but I am watching anyway!!
Your presentation skill is outstanding, thank you so much sir :)
You are simply great!! Thanks for such a nice video on the GAN topic
Highly motivational presentation. Deserves a lot more attention. Subbed and followed you on Twitter.
A distribution without gaps is called continuous, not uniform. Uniform is a special kind of distribution. Other than that, thank you for a great video!
Beautiful explanations! This channel is like a needle in a haystack
Hey i have a question: where can i get GAN? Is it a page or something? I don't get it 😭
@@milyreina204 A GAN, as the name suggests, is a neural net. You can get the code on GitHub or Kaggle, but it depends on your use case.
@@uddhavdave908 I still don't get it, I just want to create faces for fun 😭😂
This is a beautiful explanation of GANs!
Amazing work!
Fantastic work and clear exposition. Very close to what I wish I could do. Thanks.
For some reason I previously thought that there wasn't a latent vector for most REAL images (think: partial mode collapse). But now you show me that even cars and other stuff are possible! Brilliant video!
Btw: wouldn't getting the latent vector be much easier if you used a VQ-VAE instead of the GAN, since VAEs provide an encoder? VQ-VAE generated images are pretty close in quality to GANs.
I never leave comments but this was actually really good and so deserved a comment :)
Having a problem with the Python notebook.
Google Drive quota exceeded. Help plz.
ERROR: Could not find a version that satisfies the requirement tensorflow-gpu==1.12.2 - your notebook works on this TensorFlow version, but it is outdated and no longer available. What to do in this case?
Awesome video please make a complete course on GANs if possible
Dude, can you please give links for the "additional tweaks" at 21:17? I would like to know where they are in the literature, especially the one about applying a "mask" to the generation process.
Hi Xander! You explained every complicated concept beautifully. I bet you spent a lot of time preparing this video. Perhaps we need another GAN to help us generate videos like this one :)
This is so scary. Was so convinced theres no way those people looked so real but didnt exist
Been following your channel since the Variational Autoencoders video; it's an actual treat to watch such educational content for free, particularly for students like me.
Glad to see you back!
You really need to post more content. One video in 6 months is too little for such amazing content.
Great video! Thanks. However, when running notebook II I get an error because of the TensorFlow version required (1.12.2). It seems like Colab made some changes recently. Do you have a fix for this?
Tutorial 2 of the files in Google Drive doesn't work on Colab; it gives errors.
This video is super freaking AWESOME! Very clearly explained.
This is an amazing channel, please don't let this channel die, I literally just found it
Unfortunately the first notebook is broken and no code can be tested, nor can the second notebook be run.
Traceback (most recent call last):
  File "encode_images.py", line 241, in <module>
    main()
  File "encode_images.py", line 230, in main
    img_array = mask*np.array(img_array) + (1.0-mask)*np.array(orig_img)
ValueError: operands could not be broadcast together with shapes (1048,1048,1) (1024,1024,3)
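(For anyone hitting this: the error just says the mask and the images have mismatched shapes, (1048,1048,1) vs (1024,1024,3). A minimal NumPy sketch of one possible repair, assuming the mask simply came out at the wrong resolution; the notebook's actual fix may differ:)

    import numpy as np
    from PIL import Image

    mask = np.ones((1048, 1048, 1))   # (H, W, 1) mask at the wrong resolution
    img = np.zeros((1024, 1024, 3))   # generated image
    orig = np.zeros((1024, 1024, 3))  # original image

    # Resize the mask to the image resolution; its single channel then
    # broadcasts cleanly against the 3 RGB channels.
    m = Image.fromarray((mask[:, :, 0] * 255).astype(np.uint8)).resize((1024, 1024))
    mask = (np.array(m) / 255.0)[:, :, None]    # back to (1024, 1024, 1)
    blended = mask * img + (1.0 - mask) * orig  # shapes now broadcast fine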
I'm having the same issue. Even if I turn face_mask=False, the code seems to run into some threading issue :(
Same issue here. No idea how to proceed.
Hey, I contacted him on twitter and he fixed the notebook. Check his twitter for specifics if you're interested.
@@shinohara9218 thank you very much, yes I'm going to check. 👏👏👏
Bruh, are you a god? How did you explain it so simply... great job, keep it up :D
Loved the video and the first notebook. The second notebook, "2. Face_Editing_Notebook.ipynb", however doesn't work. I get the error "ERROR: Could not find a version that satisfies the requirement tensorflow-gpu==1.12.2". I'd love it if it were fixed! It looks as though the lowest tensorflow-gpu version is now 1.13.1.
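(A workaround people used in Colab at the time, instead of pip-installing an exact version, was the TF 1.x runtime magic. Note this magic has since been removed from Colab entirely, so treat it as historical, and 1.15 is close to but not exactly the pinned 1.12.2, so the notebook may still need small compatibility fixes:)

    # In a Colab cell:
    %tensorflow_version 1.x
    import tensorflow as tf
    print(tf.__version__)   # reported 1.15.x back then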
Wonderful presentation! Looking forward to your other videos!
What is JSD divergence? (at 09:48)
From the first insight, really impressed. Thank you!
Google Drive quota exceeded now; stick your files on Google Cloud Storage (you'll have to pay a little), or some other file-sharing service.
Otherwise - excellent material - excellent work, please keep doing more of such content - it's highly appreciated! :)
Wow! Amazing explanation, wonder how much effort it takes to produce such a video.
A rough estimate for this video:
- Shaping the content outline: ~1 day
- Making all the animated slides, visual assets, videos, GIFs.. : 2 - 3 days
- Recording the video: 2 hours
- Making the IPython notebooks: 1 day
- Editing the video: 3 days
In total I'd say this video took about two weeks of full-time work, but I combine YouTube stuff with so many other projects in parallel that it's hard to keep track exactly :)
So yes, that's a lot, but it is a 25-min video and the idea is to pack as much information into those 25 mins as possible + I love doing these things :p
@@ArxivInsights haha, worth it though. Thanks a lot! One more question maybe :P
Do you see any chance of getting a 1x512 latent space for existing faces, instead of the 18x512 vector? It seemed to me that the decoder only provides the 18x512 space...
@@shipper611 You can definitely do this; however, you'd have to train a model yourself & it might work slightly worse, because the z-space has fewer degrees of freedom to fit random input faces compared to w-space.
But if you browse through the InterFaceGAN repo, there are options for doing everything in z-space!
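(For context on the 18x512 question above, a small runnable PyTorch sketch; the mapping network here is a random-weight stand-in for StyleGAN's real 8-layer MLP. A single 1x512 z maps to a single 1x512 w, and the 18x512 "w+" latents that encoders return start out as per-layer copies of w that are then optimized independently:)

    import torch
    import torch.nn as nn

    # Stand-in for StyleGAN's mapping network (random weights, 4 layers instead of 8).
    mapping = nn.Sequential(*[m for _ in range(4) for m in (nn.Linear(512, 512), nn.LeakyReLU(0.2))])

    z = torch.randn(1, 512)      # one latent in z-space
    w = mapping(z)               # still 1x512, now in w-space
    w_plus = w.repeat(18, 1)     # 18x512: the same w copied once per style layer

    # Encoders usually let all 18 rows drift apart during optimization ("w+ space"),
    # which is why fitted latents come back as 18x512 rather than 1x512.
    print(w_plus.shape)          # torch.Size([18, 512])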
Thank you for this great video. What I don't get is why moving perpendicular to the plane gives you different categories of images of the same person. Is there any video explaining this? Many thanks.
Very nicely explained... especially for someone who is starting afresh.
Why so memetastic man? Really brings the overall aura of the video down when you're plastering in pictures of kids on a trike captioned Fuck yes, or whatever.
Very well explained. I was redirected here from a retweet of your tweet. I am impressed by the quality and content of your channel. Keep up the good work.
The notebook is really fun. Does anybody know any resources on creating custom latent directions? I imagine it involves using active learning to efficiently label data for the characteristic/dimension you're targeting, but I have no idea how.
You create your features, then train a ResNet, and afterwards use that feature vector (which already consists of predefined feature dimensions) and feed it to the GAN. In the GAN, you can move along the latent direction by assigning different feature values to the current image.
I tried to explain, but...
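(On the question about custom latent directions: one common recipe, roughly what the InterFaceGAN repo mentioned above does, is to label a batch of latents for your attribute, fit a linear classifier on them, and take the normal of its decision boundary as the direction. A runnable sketch with made-up data; in practice the labels would come from a pretrained attribute classifier or manual annotation:)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Fake data: 500 latent codes with synthetic binary labels (e.g. smiling / not smiling).
    rng = np.random.default_rng(0)
    latents = rng.standard_normal((500, 512))
    labels = (latents[:, 0] + 0.1 * rng.standard_normal(500) > 0).astype(int)

    clf = LogisticRegression(max_iter=1000).fit(latents, labels)
    direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # unit normal of the boundary

    # Moving a latent along this direction should push the attribute up:
    edited = latents[0] + 3.0 * direction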
Excellent presentation, Arxiv Insights. Thank you very much.
Awesome video again! One note: it's usually called min-maxing, not mini-maxing unless I'm wrong and you're referring to something else.
You should make a video that covers how an app called Remini upscales faces. I think it uses some type of Stylegan or Cyclegan to do so.
1:15 GANime... I'll be going now.
Dude, do you have a Patreon?
Your videos are so good, we need more of them.
Great video! But I do have a few questions:
1. I would think that the Discriminator trying to maximize loss would get a bit too good at balancing out the Generator's attempt at minimizing loss. Is there some type of weighting that's applied to give the Generator more say than the Discriminator?
2. Seen at 9:03, why must the number of noise samples fed to the Discriminator be the same as the number of examples fed to it? Is this only because of the input architecture of the Discriminator NN or something else? It seems that someone would more likely have fewer noise samples/fake images than examples/real images...since you'd likely be trying to create a single fake image in the end. And by "examples", you mean real images, correct?
3. During the training of the Generator, I'm assuming the resulting image is somehow iteratively fed back into the input layer, or no? If not, given no label, what is the backpropagation of the Generator's training measuring up against?
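(On question 3: nothing is fed back into the input layer. The generator's "label" is simply that the discriminator should call its output real, and the gradient flows back through the discriminator into the generator. A toy runnable PyTorch sketch of one generator update, not the paper's exact code:)

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))   # toy generator -> 2-D "images"
    D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))    # toy discriminator -> logit
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    z = torch.randn(32, 64)                    # batch of noise
    fake = G(z)                                # generated samples
    loss_g = bce(D(fake), torch.ones(32, 1))   # target: D should output "real" (1) for the fakes
    opt_g.zero_grad()
    loss_g.backward()                          # gradient flows through D into G...
    opt_g.step()                               # ...but only G's optimizer steps, so D's weights don't move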
Hello Arxiv, thanks for the great explanation. I need some help regarding SageMaker. For almost half a week I have been trying to convert a simple GAN model into script mode so that I can run training jobs, but I cannot. Secondly, I want to understand how to configure SageMaker locally, so that we can prototype locally first and then use the AWS resources. If you can help me, please let me know.
Brother, this TensorFlow version is deprecated.
The GAN explanation - extremely well done. The videos at the end - terrifying!
Could you share the slides with us? Thanks so much!!!!!!!