01 - History and resources

  • Published 16 Dec 2024

COMMENTS • 152

  • @sudiptoghosh5740
    @sudiptoghosh5740 3 years ago +58

    Thank you Alfredo. Words fall short to describe the amount of hard work and effort you've put into organizing the material and structuring it so well. Bless you. From India.

    • @alfcnz
      @alfcnz  3 years ago +6

      😇😇😇

  • @oguzhanercan4701
    @oguzhanercan4701 2 years ago +5

    While I was bored listening to the lectures at my own university, I did not experience that with this video series. I even feel like I'm taking the class interactively at NYU. It's great that Alfredo interrupts LeCun to ask the questions on our minds, adds fun to the lesson, and runs it in an unconventional way.

    • @alfcnz
      @alfcnz  8 months ago

      🥳🥳🥳

  • @rindraramamonjison9704
    @rindraramamonjison9704 3 years ago +4

    I may not always like the jokes, but I admire how much you care about others' learning. Thanks for sharing the course!

    • @alfcnz
      @alfcnz  3 years ago

      🥳🥳🥳

  • @vaibhavsingh8715
    @vaibhavsingh8715 3 years ago +1

    Thank you so much! This playlist is a gem.

    • @alfcnz
      @alfcnz  3 years ago

      💎💎💎

  • @hasanrants
    @hasanrants 2 months ago

    Starting this playlist on 3rd October 2024. Extremely thankful for open-source education.
    I'm excited as well as afraid of the mathematics involved.

    • @alfcnz
      @alfcnz  2 months ago

      Haha, don’t be! Also, Marc’s Mathematics of Machine Learning book is a good resource to check out. See my blog on suggested visual prerequisites.

  • @all462
    @all462 3 years ago

    I was always interested in this subject but kept procrastinating. Today I finally decided to learn it.

    • @alfcnz
      @alfcnz  3 years ago +1

      Yay! 🥳🥳🥳

  • @anmolgautam9572
    @anmolgautam9572 3 years ago +1

    I wanted to thank you for your efforts, Sir. The way you described how a NN basically warps the space has cleared up so many things in my mind.

    • @alfcnz
      @alfcnz  3 years ago

      😇😇😇

  • @MarkWitucke
    @MarkWitucke 1 year ago

    Open Source Teaching. Beautiful stuff, Alfredo. Grazie amico!

    • @alfcnz
      @alfcnz  1 year ago

      Prego 😇😇😇

  • @dimitri30
    @dimitri30 6 months ago

    Thank you for sharing this with everyone. I was bored by my university courses, which are not very clear, passionate, or advanced.

    • @alfcnz
      @alfcnz  6 months ago +1

      You’re welcome! 😉 Let me know if you have any questions about the course material! 🤓

  • @iamumairjaffer
    @iamumairjaffer 4 months ago

    Thank you, Alfredo! I just started this course, and I think it's incredibly detailed and amazing. I can't express my excitement in words! ❤️❤️❤️

    • @alfcnz
      @alfcnz  4 months ago +1

      That’s awesome! 🥳🥳🥳

  • @The-Daily-AI
    @The-Daily-AI 2 years ago

    Amazing that we are able to hear one of the people who developed deep learning.

    • @alfcnz
      @alfcnz  2 years ago

      😊😊😊

  • @matthewevanusa8853
    @matthewevanusa8853 3 years ago

    Rosenblatt really started deep learning, and I hope he continues to get more and more recognition. Devising the Perceptron, and on top of that building it by hand, was nothing short of genius. He died tragically and too young. The Perceptron book by Minsky and Papert misconstrued his work, I believe disingenuously (it is known that Rosenblatt and Minsky had been vocal rivals for years), presenting what Rosenblatt proposed as only a linear network; in the original work Rosenblatt discussed multilayer architectures, but he had not yet devised backprop. He also laid the foundations for ideas about spiking networks and dynamical activity. The only reason he used a rate-encoding sum plus squashing function was, in his words, due to the hardware limitations of the time.

    • @alfcnz
      @alfcnz  3 years ago

      Deep learning ≠ shallow learning. I'm not sure Rosenblatt can be considered to have started _deep_ learning. _Shallow_ learning, perhaps.

    • @matthewevanusa8853
      @matthewevanusa8853 3 years ago

      @@alfcnz I just mean that, in his original paper, he does discuss multilayered learning (with hidden units), contrary to the popular perception that he only discusses single-layer perceptrons. This misconception came about because of Minsky's characterization.

  • @atharvaingle3567
    @atharvaingle3567 3 years ago +7

    Thank you, Alfredo. We can't thank you enough for providing these video lectures for free. I have one small query, btw: I was going through the course websites of the 2020 and 2021 versions. Which version would you suggest going through? The 2020 one looks comprehensible. Just wanted to confirm whether the contents of both versions (2020 and 2021) are the same? Thanks again :)

    • @alfcnz
      @alfcnz  3 years ago +2

      They are not.
      The 2021 edition will be completely online in two weeks. Only the new lectures have been summarised with text, though.

    • @atharvaingle3567
      @atharvaingle3567 3 years ago

      @@alfcnz Thanks for the quick reply. So, which version would you suggest going through end-to-end? Thanks.

    • @alfcnz
      @alfcnz  3 years ago +3

      The 2021 one is fresher.
      I'll write on the 2021 homepage which lectures have not been included from the previous year.

    • @atharvaingle3567
      @atharvaingle3567 3 years ago

      @@alfcnz Thanks. It is very humble of you to respond even to small queries :)

    • @alfcnz
      @alfcnz  3 years ago +3

      No worries ☺️☺️☺️

  • @bulkrivero
    @bulkrivero 3 years ago +3

    Great video, Alfredo. Thanks for sharing!

    • @alfcnz
      @alfcnz  3 years ago

      You're welcome 😊

  • @cambridgebreaths3581
    @cambridgebreaths3581 3 years ago +2

    One of the most beautiful, interesting, and joyful classes I ever came across. Can I join the entire module? Are there any specific steps I should follow prior to starting? I want to dedicate a couple of weeks this summer to seriously absorb all the content from start to finish. Grateful for any recommendations. Thank you kindly, Alf.

    • @alfcnz
      @alfcnz  3 years ago +1

      Just keep an eye on the class website, where I'm posting homework and schedule. For now the first theme is out. The second theme comes out this coming week.

    • @cambridgebreaths3581
      @cambridgebreaths3581 3 years ago

      @@alfcnz Thanks a million, Alf. Looking forward to it.

    • @alfcnz
      @alfcnz  3 years ago

      💜💜💜

  • @alexandrevalente9994
    @alexandrevalente9994 3 years ago

    First lesson... for non-French-speaking people... try to pronounce Lecun with a French accent ;-)
    "un" in French is the number "1"... so pronouncing the number 1 in French will give you "un"... then integrate it into Lecun...
    A lot of English-speaking people say "LeKoune" instead of Lecun, which sounds weird to us.
    So open Google Translate, select French, type "un" and listen to it ;-)
    ...
    Did you try?
    Did you get it?
    You are doing great!
    ;-)

    • @alfcnz
      @alfcnz  3 years ago

      🤣🤣🤣

  • @donthulasumanth5415
    @donthulasumanth5415 1 month ago

    I'm guessing the five lines at the end of the video are basis vectors in the unwarped or transformed space, where we can draw lines to separate the colored dots. Support vector machines apply the same principle, but with pre-defined functions, and they are not as sensitive as neural nets.

    • @alfcnz
      @alfcnz  1 month ago +1

      Where did I get those vectors? 😀

    • @donthulasumanth5415
      @donthulasumanth5415 1 month ago

      @@alfcnz comment check

    • @donthulasumanth5415
      @donthulasumanth5415 21 days ago

      @@alfcnz Ideally, if we consider a 5-D space, there are basis vectors, i.e. unit vectors, which tell us which direction to go in that space to reach a vector. In the 2-D case these are i, j; if we are in a 5-D space, then i, j, k, l, m are the unit vectors. If we start with 2 features, a neural network helps us find combinations of these two to get 5 features, and the corresponding weights get us to w1·i, w2·j, w3·k, w4·l, w5·m. I may be wrong or unclear, and I need to rethink the same problem at the end of this playlist 😀.

  • @rishiganeshv840
    @rishiganeshv840 3 years ago

    Thank you, Alfredo. Very energetic and informative.

    • @alfcnz
      @alfcnz  3 years ago +1

      Hahaha 😁😁😁

  • @marco.r
    @marco.r 3 years ago

    Hello :) 48:55 are the arrows the basis vectors of the classes?

    • @alfcnz
      @alfcnz  3 years ago +1

      Yup, called class embeddings as well.
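
      As a rough illustration of those class-embedding arrows (a minimal sketch, not the course's notebook; the layer sizes and the 5-class setup are assumptions), the last linear layer of a small classifier holds one learnt vector per class, and the logits are dot products between the warped input and each of those vectors:

      import torch
      from torch import nn

      torch.manual_seed(0)

      # Assumed sizes: 2-D input, 100-D warped space, 5 classes.
      net = nn.Sequential(
          nn.Linear(2, 100),   # warps the 2-D input into a 100-D space
          nn.ReLU(),
          nn.Linear(100, 5),   # its 5 weight rows are the class embeddings ("arrows")
      )

      x = torch.randn(8, 2)             # a batch of made-up 2-D points
      logits = net(x)                   # score of each point against each class
      class_embeddings = net[2].weight  # shape (5, 100): one vector per class
      print(logits.shape, class_embeddings.shape)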

  • @НиколайНовичков-е1э

    Cool. I hope I have time to watch all the videos of this course over the weekend.

    • @alfcnz
      @alfcnz  3 years ago

      And if the time is not there, you can always make it! 💪🏻💪🏻💪🏻

  • @alexandrevalente9994
    @alexandrevalente9994 3 years ago

    Thank you for sharing Grant Sanderson's website. Really inspiring, and it finally proposes a great way to link many views and applications of linear algebra!

    • @alfcnz
      @alfcnz  3 years ago

      😀😀😀

  • @oseikofi4953
    @oseikofi4953 8 months ago

    What is the best way to use the videos together with the course website? It looks like there's a lot of information, and I don't know where to begin or in what order... I need help.

    • @alfcnz
      @alfcnz  8 months ago

      My suggestion is checking out my website, where I explain how to begin and what order to follow. 😇😇😇

  • @ravikiranrao05
    @ravikiranrao05 2 years ago +1

    Beautiful framing. You look like a decision boundary for your background colours :)

    • @alfcnz
      @alfcnz  2 years ago

      😀😀😀

  • @briancase6180
    @briancase6180 3 years ago +4

    Wow, so happy I found this. Yann's intro and historical survey was really interesting and useful. I remember being in grad school in the early 80s and feeling sorry for the guys studying AI: we all felt that they had no future. :) But now, after most of the progress in basic processor design has been made, DNNs and the rest of AI are looking like the best thing to investigate! The tables have turned... :) Now the AI guys can feel sorry for the rest of us. Ha! :)

    • @alfcnz
      @alfcnz  3 years ago

      😅😅😅

  • @nourinahmedeka9518
    @nourinahmedeka9518 2 years ago

    Not a content related question - I just became very curious - Was there any point in your life when you decided to be just the way you are? You know, making a conscious decision... Thanks for the education. Much appreciated.

    • @alfcnz
      @alfcnz  2 years ago

      Conscious decision about… being happy?

    • @nourinahmedeka9518
      @nourinahmedeka9518 2 years ago

      @@alfcnz Being happy, being a bit silly, being yourself. Although you recognise some may find you weirdly funny, you are okay with that; you are not imposing your characteristics upon anyone, you are just being yourself, and communicating that this is who you are.
      Did it come naturally to you, or did you make a conscious effort somehow to preserve yourself, your personality?
      Because too many of us lose ourselves in order to, you know, please other people, to maintain a certain standard.

    • @alfcnz
      @alfcnz  2 years ago +1

      twitter.com/alfcnz/status/1251234277351739392

  • @alexandrevalente9994
    @alexandrevalente9994 3 years ago

    Great to hear the voice of the father of the new AI era!

    • @alfcnz
      @alfcnz  3 years ago

      🎙️🎙️🎙️

  • @helloworldcsofficial
    @helloworldcsofficial 8 months ago

    Awesome class. Keep them coming. Thanks!

    • @alfcnz
      @alfcnz  8 months ago

      🥳🥳🥳

  • @snowflake5204
    @snowflake5204 2 years ago

    Coolest prof I have seen in a while! And the course seems extensive too... Thank you so much for uploading 🙃

    • @alfcnz
      @alfcnz  2 years ago +1

      🥰🥰🥰

  • @dipeshshrestha7287
    @dipeshshrestha7287 3 years ago

    How do I get into NYU? These lectures are too good.

    • @alfcnz
      @alfcnz  3 years ago +10

      🤑🤑🤑

  • @ErfanKhodabakhsh
    @ErfanKhodabakhsh 2 years ago

    Hey dear Alfredo, as a business student, can I watch this course and do the assignments? Or should I choose another course? :(

    • @alfcnz
      @alfcnz  2 years ago

      You can do as you please 🙂

  • @WilliamGacquer
    @WilliamGacquer 3 years ago

    Thanks! Could you please list the DOIs of the cited articles in the description of the video?

    • @alfcnz
      @alfcnz  3 years ago

      Right, that would make sense. I'll let Yann know.

  • @songhualiu4813
    @songhualiu4813 3 years ago

    Thanks for your courses! They are awesome!

  • @bernabesolideogloria
    @bernabesolideogloria 3 years ago

    Many thanks Alfredo!! Awesome Job!!

    • @alfcnz
      @alfcnz  3 years ago

      Of course, you're welcome 😊😊😊

  • @aloklal99
    @aloklal99 5 months ago

    How were neural nets trained before 1985, i.e. before backprop was invented?

    • @alfcnz
      @alfcnz  5 months ago

      I have a few videos on that in my most recent playlist, second chapter. There, I explain how the Perceptron (a binary neuron with an arbitrary number of inputs) used an error-correction strategy for learning. Let me know if you have any other questions. 😇😇😇
      Chapter 2, videos 4–6
      ua-cam.com/video/g4sSU6B99Ek/v-deo.html

    • @aloklal99
      @aloklal99 5 months ago

      @@alfcnz thanks! 🙏
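
      For the curious, here is a minimal sketch of the error-correction strategy mentioned above (Rosenblatt's Perceptron rule), assuming ±1 labels and a made-up toy dataset; this is an illustration, not the course's code:

      import numpy as np

      def perceptron_train(X, y, epochs=10):
          # Rosenblatt's error-correction rule: update the weights only on mistakes.
          # X: (n_samples, n_features); y: labels in {-1, +1}.
          w = np.zeros(X.shape[1])
          b = 0.0
          for _ in range(epochs):
              for x_i, y_i in zip(X, y):
                  y_hat = 1 if x_i @ w + b >= 0 else -1
                  if y_hat != y_i:  # mistake, so nudge the boundary towards the example
                      w += y_i * x_i
                      b += y_i
          return w, b

      # Tiny usage example on a linearly separable toy set (assumed data).
      X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
      y = np.array([1, 1, -1, -1])
      w, b = perceptron_train(X, y)
      print(w, b)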

  • @oyupanquiunalm
    @oyupanquiunalm 3 years ago

    Magnificent class! Thank you :) 13:00 was so funny btw haha

    • @alfcnz
      @alfcnz  3 years ago

      🤣🤣🤣

  • @kanchankumar1519
    @kanchankumar1519 3 years ago +1

    Brilliant animation at the end. Cheers! Does an interactive version of these animations exist??

    • @kanchankumar1519
      @kanchankumar1519 3 years ago

      Or the code ...

    • @alfcnz
      @alfcnz  3 years ago +2

      Sure, everything is on the course website and repo.

  • @Tiojuan-mx
    @Tiojuan-mx 3 years ago

    Amazing lecture!! Thanks for sharing

    • @alfcnz
      @alfcnz  3 years ago +1

      Of course, anytime 😋😋😋

  • @Anujkumar-my1wi
    @Anujkumar-my1wi 3 years ago

    Can you tell me why neurons in a neural net use this mathematical model, i.e. f(Ax+b), where f is the activation function and inside is an affine transformation of the inputs, and not something else? Is it just because they are inspired by biological neurons?

    • @alfcnz
      @alfcnz  3 years ago

      Yup, that's a very very very rough approximation of how a biological neuron works.

    • @Anujkumar-my1wi
      @Anujkumar-my1wi 3 years ago

      @@alfcnz Just to be clear, I am asking whether the method of using an affine transformation of the inputs to have some control over the function phi(x), which is equal to an affine transformation of the inputs passed through some nonlinearity, is completely inspired by biological neurons.

    • @alfcnz
      @alfcnz  3 years ago

      Yes.

    • @Anujkumar-my1wi
      @Anujkumar-my1wi 3 years ago

      @@alfcnz Thanks, does that mean that the method of using an affine transformation, which is inspired by biological neurons, just happens to be useful for having some control over phi(x)?
      It's my last question for today, sir.
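
      To make the model being discussed concrete, here is a minimal sketch of a single artificial neuron computing f(Ax + b), with a ReLU standing in for the activation f; all numbers are made up for illustration:

      import torch

      x = torch.tensor([1.0, -2.0, 0.5])    # input features (made-up values)
      A = torch.tensor([[0.3, -0.1, 0.8]])  # one row of weights, i.e. one neuron
      b = torch.tensor([0.2])               # bias

      z = A @ x + b        # affine transformation Ax + b
      h = torch.relu(z)    # nonlinearity f applied element-wise
      print(z, h)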

  • @mostafaatallah7001
    @mostafaatallah7001 3 years ago

    Thank you very much for this awesome course. Are there open-access exercises with solutions that I can use to test my understanding after each lecture?

    • @alfcnz
      @alfcnz  3 years ago +1

      See course website.

    • @mostafaatallah7001
      @mostafaatallah7001 3 years ago

      @@alfcnz Thank you very much, Sir.

    • @alfcnz
      @alfcnz  3 years ago

      👍🏻👍🏻👍🏻

  • @anuraganand9675
    @anuraganand9675 3 years ago

    Thanks, Alfredo, for being a nice human being.
    PS: I like the sarcasm, keep it coming :P

    • @alfcnz
      @alfcnz  3 years ago

      Sarcasm? 😮😮😮

  • @doyourealise
    @doyourealise 3 years ago

    LeCun aged like fine wine :)

    • @alfcnz
      @alfcnz  3 years ago

      🍷🍷🍷

  • @beticuben
    @beticuben 2 years ago

    Hello Alf, I am so grateful for this course, which I am watching and enjoying very much. I have an unrelated question for you. Could you please point me to a good book or tutorial on how to deploy an ML model? I have trained a decent classification model and need to deploy it to a website where users can upload their CSV input and get predictions back, also in a CSV file. Thank you so much in advance.

    • @alfcnz
      @alfcnz  2 years ago

      I've never deployed anything. Regardless, you may want to look into ML-ops and madewithml.com/#foundations

    • @beticuben
      @beticuben 2 years ago

      @@alfcnz Thank you so much!
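
      Not from the course, but as a minimal sketch of the CSV-in/CSV-out serving pattern asked about above (the model file name, the purely numeric feature columns, and the choice of Flask plus a scikit-learn-style classifier are all assumptions):

      import io

      import joblib
      import pandas as pd
      from flask import Flask, Response, request

      app = Flask(__name__)
      model = joblib.load("model.joblib")  # hypothetical: a fitted classifier saved with joblib

      @app.route("/predict", methods=["POST"])
      def predict():
          # Expect a multipart upload with the CSV under the "file" field.
          df = pd.read_csv(request.files["file"])
          df["prediction"] = model.predict(df.values)  # assumes numeric feature columns only
          buf = io.StringIO()
          df.to_csv(buf, index=False)
          return Response(buf.getvalue(), mimetype="text/csv")

      if __name__ == "__main__":
          app.run(port=8000)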

  • @extrememike
    @extrememike 3 years ago

    Grazie. Bravo 👏🏻👏🏻

    • @alfcnz
      @alfcnz  3 years ago

      Prego 😊😊😊

  • @abhishekaiem
    @abhishekaiem 3 years ago

    Do we not have hands-on sessions in this series, or am I missing something? Thanks for the upload, sir!

    • @alfcnz
      @alfcnz  3 years ago

      I don't understand the question. All I've posted is PyTorch implementations.

    • @abhishekaiem
      @abhishekaiem 3 years ago

      @@alfcnz Ohh, my bad... I was thinking it's only theory, as there is no ipynb file in the git repo.

    • @alfcnz
      @alfcnz  3 years ago

      @@abhishekaiem the notebooks covered so far are in last year's repo. I'll add the new ones to this year's repo.

    • @abhishekaiem
      @abhishekaiem 3 years ago

      @@alfcnz Thanks Professor!

  • @wolfisraging
    @wolfisraging 3 years ago

    Great video!!!! waiting for more :)

    • @alfcnz
      @alfcnz  3 years ago

      Way too many are just waiting in my hard drive to get edited. This *is* a full-time job, LOL. Aaaaah!

  • @GenerativeDiffusionModel_AI_ML
    @GenerativeDiffusionModel_AI_ML 2 years ago

    Like your channel, professor, subscribed.

    • @alfcnz
      @alfcnz  2 years ago

      🥳🥳🥳

  • @kemaleddinkara
    @kemaleddinkara 3 years ago +1

    Nice video. Thank you.

    • @alfcnz
      @alfcnz  3 years ago +1

      You're welcome! 😇

  • @captainfrio4244
    @captainfrio4244 3 years ago

    I was waiting for the next video because this one was the first on your channel (sorted by date), but today I found that the other videos were already there :((((

    • @alfcnz
      @alfcnz  3 years ago

      Happy watching! 😀😀😀

  • @mubashirmufti5941
    @mubashirmufti5941 3 years ago

    Great work, thank you.

    • @alfcnz
      @alfcnz  3 years ago

      You're welcome 😊😊😊

  • @abhishekkumar-qi3is
    @abhishekkumar-qi3is 3 years ago

    Nice content 👍. Please upload coding videos as well. Thanks.

    • @alfcnz
      @alfcnz  3 years ago

      About what?

    • @abhishekkumar-qi3is
      @abhishekkumar-qi3is 3 years ago

      @@alfcnz I mean how to implement architectures through code, like CNNs.

  • @mahdiamrollahi8456
    @mahdiamrollahi8456 3 years ago

    Great job...

    • @alfcnz
      @alfcnz  3 years ago

      🤓🤓🤓

  • @heyyou1143
    @heyyou1143 2 months ago

    Mass 🎉 video

  • @TomChenyangJI
    @TomChenyangJI 5 months ago

    only a few words on his own masterpiece work haha

    • @alfcnz
      @alfcnz  5 months ago

      🤭🤭🤭

  • @WaiTingKuo0527
    @WaiTingKuo0527 2 years ago

    Shame on me, I was eating ice cream while Alfredo was just mentioning it...

  • @alexandrevalente9994
    @alexandrevalente9994 3 years ago

    I like your food pictures... but it looks like you don't have a picture of "Arrosticini" ;-)

    • @alfcnz
      @alfcnz  3 years ago

      My… food pictures? 🙈🙈🙈

  • @ianzhang8331
    @ianzhang8331 3 years ago

    Wow, my idol!

    • @alfcnz
      @alfcnz  3 years ago

      🥰🥰🥰

  • @Bobby-bz8bk
    @Bobby-bz8bk 3 years ago

    🙏

    • @alfcnz
      @alfcnz  3 years ago

      😇😇😇

  • @capeandcode
    @capeandcode 3 years ago

    Why does Yann look like some Star-Lord?

    • @alfcnz
      @alfcnz  3 years ago

      Because he is one! ✴️🌟✨

  • @andres_pq
    @andres_pq 3 years ago

    Before 1986 neurons were binary?! Sounds like a nightmare

    • @alfcnz
      @alfcnz  3 years ago

      Yup, backprop was not and could not be a thing.

  • @zeeshanakhtar3786
    @zeeshanakhtar3786 3 years ago

    Hasn't self-supervised learning got the new fancy name of meta-learning these days?... Correct me if I am wrong.

    • @alfcnz
      @alfcnz  3 years ago

      You're wrong.
      «Meta learning [or learning to learn] is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments.»
      en.wikipedia.org/wiki/Meta_learning_(computer_science)

    • @DelipRao
      @DelipRao 3 years ago

      @@alfcnz He isn’t 100% wrong ;-) Self-supervised learning is a route to meta-learning (in a limited context, of course). A good way to look at the relationship between the two: if you were to plot the capabilities of meta-learning and self-supervised learning as a Venn diagram, there would be an intersection.

    • @alfcnz
      @alfcnz  3 years ago

      I think the question asked is whether self-sup is now called meta-learning. Although meta-learning is automated, I wouldn't call it self-sup learning.
      The two things are rather distinct, from what I understand.
      One deals with hyperparameter search, while the other pre-learns weights.

    • @DelipRao
      @DelipRao 3 years ago

      @@alfcnz yes, they are not equivalent for sure.

  • @antonispolykratis3283
    @antonispolykratis3283 3 years ago

    Yann started dyeing his hair like Alf's. It will take some time to reach his level. (Joking.)

    • @alfcnz
      @alfcnz  3 years ago +1

      Hahahaha! He started doing that _before_ me.

    • @Maeda_Toshiie
      @Maeda_Toshiie 3 years ago

      @@alfcnz Maybe, as an exercise, develop machine-learning-based software to add the dye onto the hair of the person on the call...

    • @alfcnz
      @alfcnz  3 years ago

      😁😁😁

  • @sandipanhaldar8609
    @sandipanhaldar8609 3 years ago

    What's the teddy bear doing on the bed?

    • @alfcnz
      @alfcnz  3 years ago +7

      His name is Vincenzo. He comes to class with me every week 😊😊😊