Norms and Unit Vectors - Topic 6 of Machine Learning Foundations

  • Published 26 Dec 2024

COMMENTS • 68

  • @namichwan577
    @namichwan577 3 days ago

    Coming from a non-math/CS background, I can't even tell you how much these videos are helping me. I was so hopeless and lost, then I found this, and I am so grateful.

  • @sumitbasu6703
    @sumitbasu6703 2 months ago +1

    I never thought of starting to learn AI, as I was scared and confused. Now, after watching your videos, I think I am headed in the right direction. Thank you for making this rich content available for free.🙏

  • @aashishrana9356
    @aashishrana9356 3 years ago +16

    I feel very lucky to have found your channel.
    Such an awesome explanation, especially the way you map it to the machine learning concepts.
    Thank you so much.
    I will share your channel in my network.

    • @JonKrohnLearns
      @JonKrohnLearns  3 years ago +5

      I'm lucky you found my channel too, Aashish! Glad you're enjoying the content :)

  • @ilolita_
    @ilolita_ 6 months ago +1

    I love how you explain in an intuitive way and then showcase examples.

  • @DM-py7pj
    @DM-py7pj 2 years ago +6

    Love the explanations that are immediately followed by code examples

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago +2

      Heck yes! I'm glad you do because that is definitely my modus operandi for video tutorials :)

    • @joelausten
      @joelausten 6 months ago

      @@JonKrohnLearns Is your tutorial series leading us beginners toward becoming machine learning engineers? I want to be one. :) Will these foundations help me understand more about how machine learning models are built?

  • @alexmattyou
    @alexmattyou 2 years ago +2

    Thank you for these nice explanations.
    I learned a lot from these videos, more than in school.
    Thank you for the hard work behind the presentations and code demos.
    Also, love your weightlifting.

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago +1

      haha glad you're enjoying the weightlifting videos while also finding value in my tutorials! It means a lot to hear this feedback, thank you :)

  • @elp09bm1
    @elp09bm1 2 years ago +1

    Thanks for one more excellent video. I am practising them as I watch.

  • @subramaniamananthakrishnan6593
    @subramaniamananthakrishnan6593 7 months ago

    At 6:38 you talk about unit vectors and say the vector x has a unit norm of 1. However, the coordinates of x appear to be (1,1), so the norm in this case would be sqrt(2). Can you explain how the norm is 1?

  • @아아아-v3g
    @아아아-v3g 1 year ago

    7:41 Could you explain more about the second and third bullet points?

    • @JonKrohnLearns
      @JonKrohnLearns  1 year ago +1

      Sure! Both of these bullets are easy to understand when contrasted with the L2 Norm. The L2 Norm, because it involves a square, is inherently NON-linear (unlike L1), meaning that L2 starts to vary a LOT as we move away from zero, whereas the L1 Norm continues to vary at the same rate as it did near zero.
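
      A minimal NumPy sketch of this contrast (the sample points below are illustrative, not from the video):

      import numpy as np

      x = np.linspace(-3, 3, 7)    # a few points on either side of zero
      l1 = np.abs(x)               # L1 contribution: changes at a constant rate
      l2_sq = x ** 2               # squared L2 contribution: flat near zero,
                                   # then grows much faster away from it
      print(np.column_stack((x, l1, l2_sq)))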

  • @abhis17121995
    @abhis17121995 3 years ago +2

    Thanks, Jon, for the wonderful videos.
    I had a few queries:
    1. You gave a few points explaining how the L2 norm is computationally cheaper, but I didn't get the reasons. Will you explain them later, or will you skip them because they are not used much in ML?
    2. On the L1 norm slide you said it is used whenever the difference between zero and non-zero is key. Any examples of where this would be used? Just to get a practical feel for it.

    • @JonKrohnLearns
      @JonKrohnLearns  3 years ago +2

      Glad you're enjoying the videos, Abhi!
      W.r.t. Q1, do you mean that the **Squared** L2 Norm is computationally cheaper? If so, I explain why at 9:17.
      W.r.t. Q2, this is beyond the scope of my "ML Foundations" series because it's a big ML-specific topic in its own right. If you'd like to learn more, here's a good Stack Exchange post: stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models

    • @abhis17121995
      @abhis17121995 3 years ago

      Thanks @@JonKrohnLearns

    • @joelausten
      @joelausten 6 months ago

      @@JonKrohnLearns How can I continue after I finish the foundations?

  • @Amarjeet-fb3lk
    @Amarjeet-fb3lk 1 year ago

    Hi, earlier in the video you said that arrays are 1-dimensional, but at 5:16 you said the array x is in 3-dimensional space.
    Can you explain that?

    • @joelausten
      @joelausten 6 months ago

      From the preceding video, from my POV, an array is 1-D if it is just [ ], but if it has an extra bracket, [ [ ] ], then it becomes 2-D.

    • @joelausten
      @joelausten 6 months ago

      As I watched, he also gave these notes:
      scalar = 0-D = magnitude only
      vector = 1-D = array
      matrix = 2-D = flat table, e.g., square
      3-tensor = 3-D = 3D table, e.g., cube
      n-tensor = n-D = higher dimensional
      Hope these help :)
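
      A quick NumPy sketch of those ranks (the shapes below are arbitrary examples):

      import numpy as np

      scalar = np.array(25)                    # 0-D: magnitude only
      vector = np.array([25, 2, 5])            # 1-D: array
      matrix = np.array([[25, 2], [5, 26]])    # 2-D: flat table
      tensor3 = np.zeros((2, 2, 2))            # 3-D: cube-shaped table

      for t in (scalar, vector, matrix, tensor3):
          print(t.ndim, t.shape)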

  • @EjazAhmed-pf5tz
    @EjazAhmed-pf5tz 1 year ago +1

    Thank you so much for your help. Can we get the slides you are teaching from? Thank you.

    • @JonKrohnLearns
      @JonKrohnLearns  1 year ago

      Certainly! Head to jonkrohn.com/talks and search for "linear algebra" on the page

    • @EjazAhmed-pf5tz
      @EjazAhmed-pf5tz 1 year ago +1

      @@JonKrohnLearns Thank you so much, sir.

    • @JonKrohnLearns
      @JonKrohnLearns  1 year ago

      @@EjazAhmed-pf5tz you're welcome!

  • @RahulSID500
    @RahulSID500 10 months ago +1

    Sir, can I please get the PDF notes for your lectures? 🙏
    These lectures are extremely helpful to me 🔥🔥

  • @juanfranciscorinaldi6109
    @juanfranciscorinaldi6109 1 year ago

    Thanks a lot for these amazing videos! Greetings from Argentina

  • @mrohit1617
    @mrohit1617 1 year ago

    I really like your videos, I learn a lot from them

  • @ungoyboy2006
    @ungoyboy2006 2 years ago +1

    Thanks for the great course, Jon. Are there any additional good references on norms you can recommend?
    I understand at a high level what they are, but I'm still wondering why they were derived in the first place and how the different types of norms are used in more detail. Cheers

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago

      Great question! Now that you understand what norms are from this video, you can apply them whenever you encounter them in a machine learning textbook, e.g., this classic on the "deep learning" subfield of ML: deeplearningbook.org

  • @kareemzad9952
    @kareemzad9952 2 years ago +1

    Dude, you're AWESOME! 🔥

  • @re-srchr3882
    @re-srchr3882 1 year ago

    Aha, I found your channel through one of my YouTube subscriptions, known as "ishan sharama", and then I just hopped into your channel, Jon sir! I am so glad that you made this complete course free on YouTube!! I love the way you explain, Jon Krohn sir, so simple and easy to understand!! I am also making 🗒 notes along with it! Thank you for this knowledge 😄🤝!! I have subscribed to your channel and am also going to complete the course 🎯, and I am going to teach it to my friends!! Lots of love from India, Jon Krohn sir.

  • @karelsukup2824
    @karelsukup2824 1 year ago

    Just wonderful, thank you so much for these videos, great explanation

  • @akashchristopher
    @akashchristopher 6 months ago

    Jon, can we get that presentation file?

  • @MonilMunjal
    @MonilMunjal 1 year ago +1

    loved your videos❤❤❤❤❤❤

  • @Alias.Nicht.Verfügbar
    @Alias.Nicht.Verfügbar 2 months ago

    thanks for the knowledge!

  • @coldnlonely
    @coldnlonely 3 years ago

    Hi Jon. Thanks for the videos. I'm really enjoying the way you are presenting the material - very clear and unrushed.
    I'm a little confused, though, in this particular one by the diagram of the unit vector. Wouldn't the vector be [1,1] based on how it's drawn in the diagram? If that's the case, then how do we get to ||x|| = 1? I must be misinterpreting something along the way. Thank you.

    • @JonKrohnLearns
      @JonKrohnLearns  3 years ago +3

      Ah, the vector does NOT stretch all the way to [1,1], but I should have made that more obvious in the figure. The point truly is downward and to the left relative to [1,1], but it's difficult to see. I'll ensure this is more clear in the ML Foundations book I'm writing - thank you for the feedback, Alex!
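
      A quick NumPy check of the idea (the 45-degree unit vector below is illustrative, not the exact vector from the figure):

      import numpy as np

      x = np.array([1, 1]) / np.sqrt(2)   # scale [1, 1] down to unit length
      print(x)                            # [0.7071... 0.7071...], short of [1, 1]
      print(np.linalg.norm(x))            # 1.0 (up to float rounding): x is a unit vector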

    • @coldnlonely
      @coldnlonely 3 years ago +1

      @@JonKrohnLearns Thank you for the clarification.

  • @richarda1630
    @richarda1630 3 years ago

    Since you are demonstrating both PyTorch and TensorFlow, can they be used for different types of machine learning problems, like images, text, sound, tabular data, etc.? Is one better than the other for certain types of applications?

    • @JonKrohnLearns
      @JonKrohnLearns  3 years ago

      I summarize the key relative strengths of PyTorch and TensorFlow here: ua-cam.com/video/9hT5AKAI924/v-deo.html

    • @richarda1630
      @richarda1630 3 years ago +1

      @@JonKrohnLearns Thanks so much! :)

  • @leroiduday
    @leroiduday 9 months ago

    Great job

  • @aashishrana9356
    @aashishrana9356 3 years ago

    Are the L1 and L2 norms the same as ridge and lasso regularization?

    • @JonKrohnLearns
      @JonKrohnLearns  3 years ago +1

      They're not the "same" exactly but they are related to regularization.
      We need to calculate the L1 norm in order to implement L1 regularization, which is itself a part of Lasso regression (as well as being a part of other optimization approaches).
      Likewise, we need to calculate the L2 norm in order to implement L2 regularization, which is itself a part of Ridge regression (as well as being a part of other optimization approaches).
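
      A minimal NumPy sketch of that relationship (the weights, base loss, and lambda below are made-up illustrative values):

      import numpy as np

      w = np.array([0.5, -1.2, 0.0, 3.1])   # hypothetical model weights
      base_loss = 0.42                      # hypothetical unregularized loss
      lam = 0.01                            # hypothetical regularization strength

      lasso_loss = base_loss + lam * np.sum(np.abs(w))  # L1 norm -> L1 regularization (Lasso)
      ridge_loss = base_loss + lam * np.sum(w ** 2)     # squared L2 norm -> L2 regularization (Ridge)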

  • @manajitdas7619
    @manajitdas7619 1 year ago

    I hope I will complete the series

  • @fadilyassin4597
    @fadilyassin4597 1 year ago

    superb

  • @stolfjr
    @stolfjr 2 years ago +1

    Wish I had taken these courses before enrolling in an ML class, where I was so lost, lol. Now I will have to take a recovery exam! But I will ace it, God willing!!

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago +1

      Damn, well hope these videos help!! Glad you've found them :)

  • @Soul.game.
    @Soul.game. 2 years ago +1

    thanks

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago

      You're welcome, Shamil!

    • @Soul.game.
      @Soul.game. 2 years ago

      @@JonKrohnLearns when will there be a sequel?

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago

      @@Soul.game. haha, a sequel to what?

    • @Soul.game.
      @Soul.game. 2 years ago

      @@JonKrohnLearns
      this course ua-cam.com/video/Qc19jQWHdL0/v-deo.html)))

  • @faithcyril513
    @faithcyril513 2 years ago +3

    I learnt that the L1 norm is also called the Manhattan distance 😅

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago +1

      Yes! Which is a fun way to describe L1 distance. If you want to travel from Point A to Point B on much of Manhattan, you can only move North-South or East-West along the avenues and streets, respectively, so the distance you drive or walk from Point A to Point B is the L1 distance :)
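
      A tiny NumPy illustration (the two points below are arbitrary):

      import numpy as np

      a = np.array([0, 0])   # Point A, e.g., one street corner
      b = np.array([3, 4])   # Point B, a few blocks away

      print(np.sum(np.abs(b - a)))    # L1 (Manhattan) distance: 7 blocks along the grid
      print(np.linalg.norm(b - a))    # L2 (Euclidean) distance: 5.0, as the crow flies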

  • @elp09bm1
    @elp09bm1 2 years ago

    So we can say the squared L2 norm is the dot product of two vectors.

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago

      Kinda, but not exactly. The Squared L2 Norm of some vector x is a dot product of the x vector with itself. If we take the dot product of two different vectors, that is not a Squared L2 Norm.
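
      A quick NumPy check of that distinction (x and y below are arbitrary example vectors):

      import numpy as np

      x = np.array([2.0, 5.0, 26.0])
      y = np.array([1.0, 0.0, 3.0])

      print(np.dot(x, x))             # 705.0: the squared L2 norm of x
      print(np.linalg.norm(x) ** 2)   # same value (up to float rounding)
      print(np.dot(x, y))             # 80.0: dot of two DIFFERENT vectors, not a norm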

    • @elp09bm1
      @elp09bm1 2 years ago

      @@JonKrohnLearns Thanks for the clarification. Is the L1 norm also known as the Manhattan distance? If yes, why so? I understand L2 is also known as the Euclidean distance, after the mathematician Euclid. Why do we have so many types of norms?

    • @JonKrohnLearns
      @JonKrohnLearns  2 years ago

      @@elp09bm1 I appreciate all of the comments, but I feel like some of your questions could be answered by a quick Google search :)
      Here's a blog post on Manhattan (L1) vs Euclidean (L2) distance: towardsdatascience.com/3-distances-that-every-data-scientist-should-know-59d864e5030a
      Norms come in handy in lots of places in machine learning, including for model regularization: www.analyticssteps.com/blogs/l2-and-l1-regularization-machine-learning

  • @barthsparks9791
    @barthsparks9791 1 year ago

    So the L2 norm is just the Pythagorean theorem.
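
    Indeed, for a 2-D vector the L2 norm is exactly the hypotenuse length. A one-line NumPy check (the 5-12-13 triangle here is just an example):

    import numpy as np
    print(np.linalg.norm([5, 12]))   # 13.0: the hypotenuse of a 5-12-13 right triangle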
