Even the lecture on GANs had an adversary.
ahahah
Schmidhuber vs Goodfellow @ 1:03:00
God bless you. I came here for that!
@@sudharsank5780 +1
@@markmcelroy1872 and after I spent so much brainpower trying to follow the question smh
@nerd I'm sorry, I don't understand.
*Grabs popcorn*
Jürgen Schmidhuber starts talking at 1:02:59.
An adversarial interaction about adversarial methods.
Schmidhuber: Can I ask a question?
Goodfellow: Oh, You_Again?
Watch from 1:03:00 to 1:05:04 at double speed. The art of schmidhubering vs the art of patience while being schmidhubered.
He took the time to answer the question for a larger audience than just Schmidhuber, by directing them to the places where it matters, while also saving time in his presentation.
1:03:00 is what you're all here for
So you come to a video about an amazing technology just for petty drama?
@@y.o.2478 yes
@@y.o.2478 1000%
@@y.o.2478 Indeed.
Schmidhuber putting the ADVERSARIAL in GANs @ 1:03:00
I came here just to understand GANs a bit better; didn't realize I would strike gold at 1:03:00.
Can you share your study plan, as in, how did you learn this?
@@kanishktantia7899 Just watch a couple of videos on this, derive the min-max loss at least once on paper, and see a basic implementation of a GAN from scratch. I was doing it just to make a ppt on a research paper (StyleGAN2), so this was more than enough for me. The same goes for any other topic in DL. I was just dipping my feet into the deep sea of DL; I ain't touching it again.
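For anyone taking that advice, this is the min-max loss in question, as it appears in the original GAN paper (the two-player value function):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

The discriminator D is trained to maximize V (score real samples x high and generated samples G(z) low), while the generator G is trained to minimize it.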
@@ruchirjain1163 Sure, that might help. I'm looking for higher-study opportunities abroad in this space. Can you help me in any capacity?
@@kanishktantia7899 It depends, mate; I would say you should try going for a thesis if you have that option at your university.
@@ruchirjain1163 I graduated last year and am working these days; I'm not currently enrolled at any university. That's exactly what I'm looking for: any lab or university in the US, or maybe somewhere else.
So many people in the wild came up with this idea of training two networks against each other. Some lacked the deeper knowledge to continue; some tackled specific problems instead of generalizing. Ian Goodfellow is just another person who came up with this exact idea (reportedly while drunk, though that detail may be wrong - it was in a casual context for sure). Schmidhuber's paper is based on this exact idea. Goodfellow should acknowledge the overwhelming overlap, let alone the similarity, but he doesn't, because if he did, the attention would shift away from him, since the earlier work is essentially the same. He probably wasn't even aware of Schmidhuber's paper at first, since it dates back more than 20 years earlier, but that doesn't justify not giving proper credit. Goodfellow popularized GANs; he certainly didn't invent them. In fact, I bet there were others even before Schmidhuber who came up with this core idea but just didn't keep at it.
Agree. Goodfellow could and should have just cited Schmidhuber and explained the similarities and differences. Schmidhuber wants recognition, not fame. Goodfellow wanted fame and got it.
Can someone explain in plain English the reason for the confrontation?
stats.stackexchange.com/questions/251460/were-generative-adversarial-networks-introduced-by-j%c3%bcrgen-schmidhuber/301280#301280
Thanks, Schmidhuber, I had the same questions while reading the paper...
then I read the final copy 😂
I don't know if it's just me, but I enjoyed and understood Sir Jürgen Schmidhuber more than Sir Ian Goodfellow. I will definitely go check out his contributions.
How do you study? Can you share your plan with me?
Really good fellow
Has anyone kept track of the plagiarism Jürgen mentioned, and whether the claims are actually based on facts? Because he does make good points. We should not allow plagiarism, accidental or otherwise, and corrections have to be made.
If only Schmidhuber had more friends in the field, he wouldn't have been outmaneuvered in this manner.
It should not matter whether someone has friends in the field!!!
@@hummingbird7579 It shouldn't but it does!
@@yoloswaggins2161 History has shown time and time again... you are right.
It's really a shame.
Very well organised lecture, thank you Ian!
1:03:00 what a way of getting called out
I think he misspoke at 31:39 - he said that we want to make sure that x has a higher dimension than z. Instead, z should have a higher dimension than x, to provide full support over the space of x and to avoid learning a lower-dimensional manifold.
In practice it's almost never the case that z is bigger than x; typically z has about 100 entries while x is 500x500x3 or something similar. If it were the other way around, the noise input would have superfluous entries, which I believe is what he means by "learning lower-dimensional manifolds". However, in order to perfectly reconstruct the training distribution, I agree that it makes sense to have it exactly that way, and I'm confused by the way he worded that whole part (see the note after this thread).
yeah, that confused me too. Good to see someone agrees that it should be the other way round.
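To make the dimension argument in this thread precise, here is a quick sketch (my own notation, writing d_z and d_x for the dimensions of z and x):

```latex
% If G : \mathbb{R}^{d_z} \to \mathbb{R}^{d_x} is smooth and d_z < d_x,
% the image of G has dimension at most d_z:
\dim G(\mathbb{R}^{d_z}) \le d_z < d_x
% so the model distribution is supported on a measure-zero subset of
% \mathbb{R}^{d_x}: the generator covers at best a lower-dimensional
% manifold and cannot have full support over x-space. Full support
% requires d_z \ge d_x (together with a suitable G).
```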
Thank you very much for uploading.
Thank you for uploading the wonderful video! The explanations were really clear.
13:24 - when searching for the best theta, Ian sums over the log of the probabilities rather than over the probabilities themselves. Why?
If it's still relevant, I'll try to help. This is called maximum likelihood estimation: we maximize the product of the estimated probabilities over the training dataset. Since the derivative of a product is not easy to work with, we take the log, which turns the product into a sum of logs. Because log is a monotonically increasing function, this doesn't change where the optimum lies, and the sum is easier to differentiate. Hope it helped.
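Spelled out (this is standard maximum likelihood, nothing specific to the talk):

```latex
\theta^* = \arg\max_{\theta} \prod_{i=1}^{m} p_{\text{model}}\!\left(x^{(i)}; \theta\right)
         = \arg\max_{\theta} \sum_{i=1}^{m} \log p_{\text{model}}\!\left(x^{(i)}; \theta\right)
```

Since log is strictly increasing, both expressions have the same maximizer, and a sum of logs is far easier to differentiate than a product of probabilities.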
predictability minimization is almost literally the same thing as GANs
almost literally kind of exactly vaguely the same thing.
Can someone give more insight into the exercises?
I don't think people realize this is one of the most important lectures of the past 100 years... GANs... they will be everywhere soon.
Who is using GANs?
GANs are dead lol
Lol
I have a question: is this kind of network only good for the same kind of data as the training data?
No
The generator, if successful, will recover the training distribution, and nothing else, if that answers the question.
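That claim is the main theoretical result of the original GAN paper: for a fixed generator with density p_g, the optimal discriminator is

```latex
D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}
```

and at the global optimum of the min-max game, p_g = p_data, so D^*(x) = 1/2 everywhere; the generator has exactly recovered the training distribution.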
Good speech, thx
THERE IS NO BETTER LECTURE ON INTRODUCTION TO GAN. PERIOD.
THANKS FOR SHARING
Schmidhuber has a point
I feel bad for him. He deserves more recognition.
Pure Genius, available in R?
That's just sampling with a new name. The innovation is in naming, not method.
Well well well, another prodigy from Stanford who has made AI more complicated and sophisticated for the greater good of humanity 😔... Isn't GAN what's CONDUCTING the war now lol 😂. I remember the movie WarGames.
Ok, people like Dr. Schmidhuber need to understand that a tutorial is not the place for this kind of discussion. It could easily have been taken offline. Also, once the presenter expresses that they have no desire to discuss it at that moment, back off. It's too conceited and self-important to think your argument with the presenter is more important than everyone else - who paid for NIPS and had been looking forward to this tutorial - getting their time and money's worth.
That being said, I'm not downplaying what Dr. Schmidhuber was trying to point out, simply that it could've been discussed differently and elsewhere.
If Schmidhuber had only raised his concern offline, it would not have drawn academic publicity the way it did. It's reasonable to assume that the publicity was his intention, so it didn't matter whether Goodfellow's presentation was a tutorial.
Why is he obfuscating everything with needless mathematical jargon that most ML researchers won't understand? This stuff actually isn't so complicated that you need a degree in mathematics to understand it. You can understand it on a deep level with only high school mathematics if it is explained properly.
Oh sorry, what would you like to talk about instead? Visual Studio Code themes, or the latest JavaScript framework? Any university ML course is heavy on math, Einstein.
The engineering details are straightforward; however, this is a NIPS lecture, so it has to give theoretical justification for why it works. If you just want to engineer a working system you don't need all this, I agree with that.
Man, it’s almost like Computer Science is a subset of Mathematics...
@@sZlMu2vrIDQZBNke8ENmEKvzoZ lmao
GO write your HTML code, kid.
Such a fake. Schmidhuber is the true creator of GANs, and Goodfellow had the nerve to shut him up.
Two people inventing more or less the same thing independent of each other has happened many times in history, for example Newton's method or Darwin's theory of natural selection. Calling either of them fake is rather presumptuous.
@@busTedOaS Bro, Goodfellow came decades after this guy; that's not called simultaneous invention.
@@pranav7471 That's why I said independent, not simultaneous. Bro.
@@busTedOaS I completely understand that, but you need to credit the first creator too; that's the only problem I have. As far as I know, Goodfellow was the first to make adversarial networks actually work; Schmidhuber worked with 1000x worse hardware and wasn't able to get any real results, so it was just a cool idea with nothing solid backing it up. Thus Goodfellow deserves credit, but not all of it. Even after all these arguments, Goodfellow refused to acknowledge the clear similarity between the works and cite the earlier one, which is blatantly unethical from an academic standpoint.
@@pranav7471Schmidhuber had modern hardware in 2014. Plus years of experience with adversarial models ahead of Goodfellow, presumably. I don't see any disadvantage for Schmidhuber there.
I agree that one should cite related work and Goodfellow does this consistently - in fact that's what he did right before the confrontation. Why would he specifically ignore Schmidhuber's work while citing many other works with even stronger similarities? The reasonable explanations I can come up with are 1) personal spite or 2) he honestly thinks the techniques are sufficiently different.
This Ian kid is rude and not so Goodfellow. Schmidhuber just politely asked a question and got attacked.
Also, Schmidhuber could be his father; he should show a little more respect.
@@EB3103 Ok boomer
We're talking about a guy who says he invented "generative adversarial networks" when the paper clearly lists seven other people, plus university staff and more, working on the project. Of course he's gonna talk like that.
@@Nickyreaper2008 He also likes to call himself "The industry lead"
My role model we are science 🔭🧪