Thank you, Alfredo. Words fall short in describing the amount of hard work and effort you've put into organizing the material and structuring it so well. Bless you. From India.
😇😇😇
While I was bored listening to the lectures at my own university, I never felt that way with this video series. I even feel like I'm taking the class interactively at NYU. It's great that Alfredo interrupts LeCun, asks the questions on our minds, adds fun to the lesson, and continues the lesson in an unconventional way.
🥳🥳🥳
I may not always like the jokes, but I admire how much you care about helping others learn. Thanks for sharing the course!
🥳🥳🥳
Thank you so much! This playlist is a gem.
💎💎💎
Starting this playlist on 3rd October 2024. Extremely thankful for open-source education.
I'm excited as well as afraid of the mathematics involved.
Haha, don't be! Also, Marc's Mathematics for Machine Learning book is a good resource to check out. See my blog on suggested visual prerequisites.
I was always interested in this subject. But was procrastinating. Today I finally decided to learn this.
Yay! 🥳🥳🥳
I wanted to thank you for your efforts, Sir. The way you described that a NN basically warps the space has cleared up so many things in my mind.
😇😇😇
Open Source Teaching. Beautiful stuff, Alfredo. Grazie amico!
Prego 😇😇😇
Thank you for sharing this with everyone. I was bored by my university courses, which are not very understandable, passionate, or advanced.
You’re welcome! 😉 Let me know if you have any questions about the course material! 🤓
Thank you, Alfredo! I just started this course, and I think it's incredibly detailed and amazing. I can't express my excitement in words! ❤️❤️❤️
That’s awesome! 🥳🥳🥳
Amazing that we are able to hear one of the people who developed deep learning.
😊😊😊
Rosenblatt really started deep learning, and I hope he continues to get more and more recognition. His devising the Perceptron, and on top of that building it by hand, is nothing short of genius. He died tragically and too young. But the Perceptrons book by Minsky and Papert misconstrued his work (I believe disingenuously, as it is known that Rosenblatt and Minsky were vocal rivals dating back years), presenting what Rosenblatt was proposing as only a linear network; in the original work Rosenblatt discussed multilayer architectures, but he had not yet devised backprop. He also laid the foundations for ideas about spiking networks and dynamical activity. The only reason he used a rate-encoding sum plus squashing function was, in his words, the technological hardware limitations of the time.
Deep learning ≠ shallow learning. I'm not sure Rosenblatt can be considered to have started _deep_ learning. _Shallow_ learning, perhaps.
@@alfcnz I just mean that, in his original paper, he does discuss multilayered learning (with hidden units), contrary to the popular perception that he only discusses single-layer perceptrons; this misconception came about because of Minsky's characterization.
Thank you, Alfredo. We can't thank you enough for providing these video lectures for free. One small query, btw: I was going through the course websites for the 2020 and 2021 versions. Which version would you suggest going through? The 2020 one looks comprehensible. I just wanted to confirm whether the contents of the two versions (2020 and 2021) are the same? Thanks again :)
They are not.
The 2021 edition will be completely online in two weeks. Only the new lectures have been summarised with text, though.
@@alfcnz Thanks for the quick reply. So, which version would you suggest going through end-to-end? Thanks.
The 2021 one is fresher.
I'll write on the 2021 homepage what lectures have not been included from the previous year.
@@alfcnz Thanks. It is very humble of you to respond even to small queries :)
No worries ☺️☺️☺️
Great video, Alfredo. Thanks for sharing!
You're welcome 😊
One of the most beautiful, interesting, and joyful classes I've ever come across. Can I join the entire module? Are there any specific steps I should follow prior to starting? I want to dedicate a couple of weeks this summer to seriously absorbing all the contents, start to finish. Grateful for any recommendations. Thank you kindly, Alf.
Just keep an eye on the class website, where I'm posting the homework and schedule. For now, the first theme is out. The second theme comes out this coming week.
@@alfcnz Thanks a million, Alf. Looking forward to it.
💜💜💜
First lesson... for non-French-speaking people... Try to pronounce LeCun with a French accent ;-)
"un" in French is like the number "1"... so... pronouncing the number 1 in French will give you "un"... then integrate it into LeCun...
A lot of English-speaking people say "LeKoune" instead of LeCun, which sounds weird from our side.
So ask Google Translate to select French, then type "un" and listen to it ;-)
...
You tried?
You got it?
You are doing great!
;-)
🤣🤣🤣
Guessing: the five lines at the end of the video are basis vectors in the unwarped or transformed space, where we can draw lines to separate the colored dots. Support vector machines apply the same principle, but with pre-defined functions, and they are not as sensitive as neural nets.
Where did I get those vectors? 😀
@@alfcnz comment check
@@alfcnz Ideally, if we consider a 5-D space, there are basis vectors, meaning unit vectors that tell us which direction to go in that space to reach a vector. In the 2-D case these are i, j. If we are in a 5-D space, then i, j, k, l, m are the unit vectors. If we start with 2 features, a neural network will help us find combinations of these two to get 5 features, and the corresponding weights will get us to w1·i, w2·j, w3·k, w4·l, w5·m. I may be wrong or unclear, and I need to rethink this problem at the end of this playlist 😀.
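If it helps make the idea concrete, here is a tiny PyTorch sketch (all sizes and variable names here are made up, purely for illustration) of the related reading of the five arrows as class embeddings: one vector per class, with a point classified by the direction it aligns with most.

```python
import torch

# Hypothetical sizes: a 2-D feature space and 5 classes.
h = torch.randn(2)       # a point in the (warped) feature space
E = torch.randn(5, 2)    # one embedding vector per class: the five "arrows"
scores = E @ h           # dot product of h with each class direction
print(scores.argmax())   # predicted class: the direction h aligns with most
```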
Thank you, Alfredo. Very energetic and informative.
Hahaha 😁😁😁
Hello :) At 48:55, are the arrows the basis vectors of the classes?
Yup, called class embeddings as well.
Cool. I hope I have time to watch all the videos of this course over the weekend.
And if the time is not there, you can always make it! 💪🏻💪🏻💪🏻
Thank you for sharing Grant Sanderson's website. Really inspiring, and it finally proposes a great way to link many views and applications of linear algebra!
😀😀😀
What is the best way to use the videos together with the course website? It looks like there's a lot of information, and I don't know where to begin or in what order... I need help.
My suggestion is to check out my website, where I explain how to begin and what order to follow. 😇😇😇
Beautiful framing. You look like a decision boundary for your background colours :)
😀😀😀
Wow, so happy I found this. Yann's intro and historical survey were really interesting and useful. I remember being in grad school in the early 80s and feeling sorry for the guys studying AI: we all felt that they had no future. :) But now, after most of the progress in basic processor design has been made, DNNs and the rest of AI are looking like the best thing to investigate! The tables have turned... :) Now the AI guys can feel sorry for the rest of us. Ha! :)
😅😅😅
Not a content-related question, I just became very curious: was there any point in your life when you decided to be just the way you are? You know, making a conscious decision... Thanks for the education. Much appreciated.
Conscious decision about… being happy?
@@alfcnz Being happy, being a bit silly, being yourself. Although you recognise some may find you weirdly funny, you are okay with that; you are not imposing your characteristics on anyone, you are just being yourself and communicating that this is who you are.
Did it come naturally to you, or did you somehow make a conscious effort to preserve yourself, your personality?
Because too many of us lose ourselves in order to, you know, please other people, to maintain a certain standard.
twitter.com/alfcnz/status/1251234277351739392
Great to hear the voice of the father of the new AI era!
🎙️🎙️🎙️
Awesome class. Keep them coming. Thanks!
🥳🥳🥳
Coolest prof I have seen in a while! And the course seems extensive too. Thank you so much for uploading 🙃
🥰🥰🥰
How does one get into NYU? These lectures are too good.
🤑🤑🤑
Hey, dear Alfredo, as a business student, can I watch this course and do the assignments? Or should I choose another course? :(
You can do as you please 🙂
Thanks! Could you please list the DOIs of the cited articles in the description of the video?
Right, that would make sense. I'll let Yann know.
Thanks for your courses! They are awesome!
Many thanks, Alfredo!! Awesome job!!
Of course, you're welcome 😊😊😊
How were neural nets trained before 1985, i.e., before backprop was invented?
I have a few videos on that in my most recent playlist, second chapter. There, I explain how the Perceptron (a binary neuron with an arbitrary number of inputs) used an error-correction strategy for learning. Let me know if you have any other questions. 😇😇😇
Chapter 2, video 4-6
ua-cam.com/video/g4sSU6B99Ek/v-deo.html
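For a taste of it, here is a minimal sketch of that error-correction rule (the classic Perceptron update) on made-up, linearly separable toy data; everything in it is just for illustration.

```python
import numpy as np

# Toy, linearly separable data: 4 points with 2 features, labels in {-1, +1}.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

w, b = np.zeros(2), 0.0
for _ in range(10):                 # a few sweeps over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:  # example is misclassified (or on the boundary)
            w += yi * xi            # error correction: nudge the weights toward it
            b += yi
print(w, b)                         # weights of a separating hyperplane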
@@alfcnz thanks! 🙏
Magnificent class! Thank you :) 13:00 was so funny btw haha
🤣🤣🤣
Brilliant animation at the end. Cheers! Does an interactive version of these animations exist??
Or the code...?
Sure, everything is on the course website and repo.
Amazing lecture!! Thanks for sharing
Of course, anytime 😋😋😋
Can you tell me why neurons in a neural net use this mathematical model, i.e. f(Ax + b), where f is the activation function and inside is an affine transformation of the inputs, and not something else? Is it just because they are inspired by biological neurons?
Yup, that's a very very very rough approximation of how a biological neuron works.
@@alfcnz Just to be clear, I am asking whether the method of using an affine transformation of the inputs to have some control over the function phi(x), which equals an affine transformation of the inputs passed through some nonlinearity, is completely inspired by biological neurons.
Yes.
@@alfcnz Thanks. Does that mean the method of using an affine transformation, which is inspired by biological neurons, just happens to be useful for having some control over phi(x)?
It's my last question for today, sir.
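To make the f(Ax + b) model in this exchange concrete, here is a minimal PyTorch sketch; the sizes are arbitrary and chosen only for illustration.

```python
import torch
from torch import nn

x = torch.randn(3)     # an input with 3 features (arbitrary size)
A = nn.Linear(3, 5)    # the affine map Ax + b, with learnable A and b
f = torch.relu         # the nonlinearity (activation function)
phi = f(A(x))          # phi(x) = f(Ax + b), here a 5-dimensional output
print(phi.shape)       # torch.Size([5])
```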
Thank you very much for this awesome course. Are there open-access exercises with solutions that I can use to test my understanding after each lecture?
See the course website.
@@alfcnz Thank you very much, Sir.
👍🏻👍🏻👍🏻
Thanks, Alfredo, for being a nice human being.
PS: I like the sarcasm, keep it coming :P
Sarcasm? 😮😮😮
LeCun aged like fine wine :)
🍷🍷🍷
Hello Alf, I am so grateful for this course, which I am watching and enjoying very much. I have an unrelated question for you: could you please point me to a good book or tutorial on how to deploy an ML model? I have trained a decent classification model and need to deploy it to a website where users can upload their CSV input and get predictions, also in a CSV file. Thank you so much in advance.
I've never deployed anything. Regardless, you may want to look into ML-ops and madewithml.com/#foundations
@@alfcnz Thank you so much!
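For anyone else with the same need, here is a minimal sketch of such a CSV-in / CSV-out service using FastAPI; the "model.pkl" file name and the assumption that every CSV column is a model feature are hypothetical, just for illustration.

```python
import io

import joblib
import pandas as pd
from fastapi import FastAPI, UploadFile
from fastapi.responses import StreamingResponse

app = FastAPI()
model = joblib.load("model.pkl")  # hypothetical path to a trained classifier

@app.post("/predict")
async def predict(file: UploadFile):
    df = pd.read_csv(io.BytesIO(await file.read()))  # parse the uploaded CSV
    df["prediction"] = model.predict(df)             # append a prediction column
    buf = io.StringIO()
    df.to_csv(buf, index=False)                      # send the table back as CSV
    buf.seek(0)
    return StreamingResponse(buf, media_type="text/csv")
```

You would run it with uvicorn and POST a CSV file to /predict; the response is the same table with an extra prediction column.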
Grazie. Bravo 👏🏻👏🏻
Prego 😊😊😊
Do we not have hands-on in this series, or am I missing something? Thanks for the upload, sir!
I don't understand the question. All I've posted is PyTorch implementations.
@@alfcnz Ohh, my bad... I was thinking it's only theory, as there is no ipynb file in the git repo.
@@abhishekaiem the notebooks covered so far are in last year's repo. I'll add the new ones to this year's repo.
@@alfcnz Thanks Professor!
Great video!!!! Waiting for more :)
Way too many are just sitting on my hard drive, waiting to get edited. This *is* a full-time job, LOL. Aaaaah!
Like your channel, professor, subscribed.
🥳🥳🥳
Nice video. Thank you.
You're welcome! 😇
I was waiting for the next video because this one was the first on your channel (sorted by date), but today I found that the other videos were already there :((((
Happy watching! 😀😀😀
Great work, thank you..
You're welcome 😊😊😊
Nice content 👍. Please upload coding videos as well. Thanks.
About what?
@@alfcnz I mean how to implement an architecture in code, like a CNN.
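In the meantime, here is a minimal PyTorch sketch of the kind of CNN implementation being asked about; all layer sizes (and the 28×28 grayscale input) are illustrative assumptions.

```python
import torch
from torch import nn

# A tiny CNN for, say, 28×28 grayscale images and 10 classes.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),   # 28 → 14 → 7 after two rounds of pooling
)
x = torch.randn(8, 1, 28, 28)    # a batch of 8 fake images
print(model(x).shape)            # torch.Size([8, 10])
```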
Great job...
🤓🤓🤓
Mass 🎉 video
Only a few words on his own masterpiece of a work, haha.
🤭🤭🤭
Shame on me, I was eating ice cream right when Alfredo mentioned it...
I like your food pictures... but it looks like you don't have a picture of "Arrosticini" ;-)
My… food pictures? 🙈🙈🙈
Wow, my idol!
🥰🥰🥰
🙏
😇😇😇
Why does Yann look like some Star Lord?
Because he is one! ✴️🌟✨
Before 1986 neurons were binary?! Sounds like a nightmare
Yup, backprop was not and could not be a thing.
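A quick illustrative sketch of why, in PyTorch (purely a demonstration, not historical code): the step function a binary neuron computes is flat almost everywhere, so the gradient is zero and backprop would have nothing to propagate.

```python
import torch

w = torch.tensor([0.5, -0.3], requires_grad=True)
x = torch.tensor([1.0, 2.0])
out = torch.sign(w @ x)  # a binary (±1) threshold neuron, as used pre-backprop
out.backward()
print(w.grad)            # tensor([0., 0.]): the step is flat, no gradient flows
```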
Hasn't self-supervised learning got the new fancy name of "meta-learning" these days? ... Correct me if I am wrong.
You're wrong.
«Meta learning [or learning to learn] is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments.»
en.wikipedia.org/wiki/Meta_learning_(computer_science)
@@alfcnz He isn't 100% wrong ;-) Self-supervised learning is one route to meta-learning (in a limited context, of course). A good way to look at the relationship between the two: if you were to plot the capabilities of meta-learning and self-supervised learning as a Venn diagram, there would be an intersection.
I think the question asked is whether self-sup is now called meta-learning. Although meta-learning is automated, I wouldn't call it self-sup learning.
The two things are rather distinct, from what I understand.
One deals with hyperparameter search, while the other pre-learns weights.
@@alfcnz yes, they are not equivalent for sure.
Yann started dyeing his hair like Alf's. It will take some time to reach his level. (Joking.)
Hahahaha! He started doing that _before_ me.
@@alfcnz Maybe, as an exercise, develop machine-learning-based software to add the dye onto the hair of the person on the call...
😁😁😁
What's the teddy bear doing on the bed?
His name is Vincenzo. He comes to class with me every week 😊😊😊