SVM (The Math) : Data Science Concepts

  • Published 24 Nov 2020
  • Let's get mathematical.
    SVM Intuition Video: • Support Vector Machine...

COMMENTS • 186

  • @stanlukash33
    @stanlukash33 3 years ago +182

    This guy is underrated for real. YouTube - throw him into recommendations.

    • @jmspiers
      @jmspiers 2 years ago +5

      I know... I recommend him all the time on Reddit.

    • @backstroke0810
      @backstroke0810 2 years ago +1

      True! He deserves way more subscribers. He should prepare a booklet of his own, like statquest did. Would definitely buy it!

    • @aravind_selvam
      @aravind_selvam 2 years ago +1

      True!!

  • @supersql8406
    @supersql8406 3 years ago +62

    This guy is super smart and he takes sophisticated concepts and explains them in a way that's digestible without mocking the theory! What a great teacher!

  • @ragyakaul6027
    @ragyakaul6027 2 years ago +29

    I can't explain how grateful I am for your channel! I am doing an introductory machine learning course at uni and it's extremely challenging, as it's full of complex concepts and the basics aren't explored thoroughly. Many videos I came across on YouTube were overly simplified and only briefly helped me make sense of my course. However, your videos offer the perfect balance: you explore the complex maths and don't oversimplify it, but do so in a way that's easy to understand. I read through this concept several times before watching your video, but only now do I feel as if I TRULY understand it. I HIGHLY appreciate the work you do and look forward to supporting your channel.

  • @shusrutorishik8159
    @shusrutorishik8159 3 years ago +14

    This has been simultaneously the simplest, most detailed and yet most concise explanation of this topic I've come across so far. Much appreciated! I hope you keep making awesome content!

  • @velevki
    @velevki 2 years ago +1

    You answered all the questions I had in mind without me even asking them to you. This was an amazing walkthrough. Thank you!

  • @pavelrozsypal8956
    @pavelrozsypal8956 2 years ago

    Another great video on SVM. As a mathematician I do appreciate your succinct yet accurate exposition not playing around with irrelevant details.

  • @nickmillican22
    @nickmillican22 2 years ago +7

    Question on the notation.
    The image shows that the vector between the central line and decision line is w. So, I think, that w is the length of the decision boundary. But then we go on to show that the length of the decision boundary is k=1/||w||. So I'm not clear on what w (or k, for that matter) are actually representing.

    • @WassupCarlton
      @WassupCarlton 1 month ago

      I too expected k to equal the length of that vector w :-/
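
A short note on what w and k stand for in that picture (standard SVM geometry, stated here as a sketch rather than a quote from the video): w is the normal vector of the separating hyperplane, so it points across the margin, but its length is arbitrary; k is the distance from the central line w·x - b = 0 to either margin line. Starting from a point x_0 on the central line and moving a distance k along the unit normal w/||w||:

    w \cdot \Big(x_0 + k\,\frac{w}{\|w\|}\Big) - b
      = \underbrace{(w \cdot x_0 - b)}_{=\,0} + k\,\frac{w \cdot w}{\|w\|}
      = k\,\|w\|

Setting this equal to 1 (the margin line) gives k = 1/||w||, so the margin width is controlled by the length of w even though w itself is not a segment of that length.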

  • @KARINEMOOSE
    @KARINEMOOSE 2 years ago +2

    I'm a PhD student studying data mining and I just wanted to commend you for this SUPERB explanation. I can't thank you enough for explaining this so clearly. Keep up the excellent work!!

  • @FPrimeHD1618
    @FPrimeHD1618 1 year ago +1

    Just to add onto all the love, I'm a data scientist in marketing and you are my number one channel for reviewing concepts. You are a very talented individual!

  • @vedantpuranik8619
    @vedantpuranik8619 2 years ago

    This is the best and most comprehensible math video on hard-margin SVM I have seen to date!

  • @srivatsa1193
    @srivatsa1193 3 years ago +4

    This is the best and most intuitive explanation of SVM. It is really hard for me to read research papers and understand what story each line of an equation is telling, but you made it so intuitive. Thanks a ton! Please, please make more videos like this

  • @stephonhenry-rerrie3997
    @stephonhenry-rerrie3997 2 years ago

    I think this might be one of the top 5 explanations of SVM mathematics of all time. Very well done

  • @nikkatalnikov
    @nikkatalnikov 3 years ago

    Great video as usual!
    A possible side note - I find a 3D picture even more intuitive.
    Adding a z-direction, which can basically be shrunk to [-1; 1], gives our class prediction dimension, while x1 and x2 are the feature dimensions.
    Hence, the margin hyperplane "sits" exactly on (x1; x2; 0).
    This is also helpful for further explaining what SVM kernels are and why a kernel alters the norms (e.g. distances) between data points, but not the data points themselves.

  • @polarbear986
    @polarbear986 2 years ago

    I finally get SVM after watching a lot of tutorials on YouTube. Clever explanation. Thank you

  • @yangwang9688
    @yangwang9688 3 years ago +1

    Very easy to follow the concept! Thanks for this wonderful video! Looking forward to seeing the next video!

  • @usmanabbas7
    @usmanabbas7 2 years ago +1

    You and statquest are the perfect combination :) Thanks for all of your hard work.

  • @luchomame1
    @luchomame1 9 months ago +1

    Dude, thank you! Now these equations don't feel like they were pulled out of thin air, and the best part is I can work them out too! I haven't done linear algebra in almost a decade, so I got stuck on the ||w||/(w*w) part for a good bit, but this pushed me to refresh some concepts and figure it out! Thank you

  • @lisaxu1848
    @lisaxu1848 2 years ago

    Studying my master's in data science, and this is a brilliant, easy-to-understand explanation tying graphical and mathematical concepts together - thank you!

  • @mindyquan3141
    @mindyquan3141 2 years ago

    So simple, so clear!!! Wish all teachers were like this!

  • @clifftondouangdara6249
    @clifftondouangdara6249 1 year ago

    Thank you so much for this video! I am learning about SVM now and your tutorial perfectly breaks it down for me!

  • @more-uv4nl
    @more-uv4nl 1 month ago

    this guy explained what my professors couldn't explain in 2 hours 😂😂😂

  • @Shaan11s
    @Shaan11s 2 months ago

    your videos are what allowed me to take a spring break vacation bro, saved me so much time thank you

  • @WassupCarlton
    @WassupCarlton 1 month ago

    This is giving "Jacked Kal Penn clearly explains spicy math" and I am HERE for it

  • @nishanttailor4786
    @nishanttailor4786 1 year ago

    Just Amazing Clarity of Topics!!

  • @honeyBadger582
    @honeyBadger582 3 years ago +9

    That's what I've been waiting for! Thanks a lot. Great video!

  • @TheWhyNotSeries
    @TheWhyNotSeries 3 years ago +7

    At 5:10, I don't get how you obtain K from the last simplification. Can you/someone please explain?
    Btw beautiful video!

    • @ritvikmath
      @ritvikmath  3 years ago +10

      thanks! I did indeed kind of skip a step. The missing step is that the dot product of a vector with itself is the square of the magnitude of the vector, i.e. w · w = ||w||^2

    • @TheWhyNotSeries
      @TheWhyNotSeries 3 years ago +1

      @@ritvikmath right, thank you!!
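
Written out, the step skipped around 5:10 is just the following algebra (a standard identity, not a transcript of the video):

    k \cdot \frac{w \cdot w}{\|w\|} = 1
    \;\Longrightarrow\; k \cdot \frac{\|w\|^{2}}{\|w\|} = 1
    \;\Longrightarrow\; k\,\|w\| = 1
    \;\Longrightarrow\; k = \frac{1}{\|w\|}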

  • @gdivadnosdivad6185
    @gdivadnosdivad6185 6 months ago

    I love your channel. You explain difficult concepts in a way that could be understood by my dear grandmother, who never went to college. Excellent job, sir! You should become a professor one day. You would be good.

  • @jaibhambra
    @jaibhambra 2 years ago

    Absolutely amazing channel! You're a great teacher

  • @aashishkolhar8155
    @aashishkolhar8155 3 years ago

    Great, thanks for this lucid explanation about the math behind SVM

  • @houyao2147
    @houyao2147 3 years ago

    It's so easy to understand this math stuff! Best explanation ever in such a short video.

  • @ifyifemanima3972
    @ifyifemanima3972 1 year ago

    Thank you for this video. Thanks for simplifying SVM.

  • @lakhanpal1987
    @lakhanpal1987 1 year ago

    Great video on SVM. Simple to understand.

  • @techienomadiso8970
    @techienomadiso8970 1 year ago

    This is seriously good stuff. I have not seen a better SVM explanation

  • @chimetone
    @chimetone 2 months ago

    Best high-level explanation of SVMs out there, huge thanks

  • @badermuteb4552
    @badermuteb4552 3 years ago +2

    Thank you so much. This is what I have been looking for for such a long time. Would you please do the math behind other ML and DL algorithms too?

  • @ht2239
    @ht2239 3 years ago

    You explained this topic really well and helped me a lot! Great work!

  • @dcodsp_
    @dcodsp_ 8 months ago

    Thanks for such a brilliant explanation, really appreciate your work!!

  • @sejmou
    @sejmou 8 months ago +1

    In case you're also having trouble figuring out how we arrive at k = 1/||w|| from k * (w*w/||w||) = 1:
    Remember that the dot product of any vector with itself is equal to its squared magnitude, so w*w can also be expressed as ||w||^2.
    ||w||^2/||w|| simplifies to just ||w||. Finally, bring ||w|| to the other side by dividing the whole equation by ||w||, and you're done :)
    If you also have trouble understanding why exactly the dot product of any vector with itself is equal to its squared magnitude, it helps to know that the magnitude of a vector is the square root of the sum of squares of its components, and that sqrt(x) * sqrt(x) = x.
    I hope that somehow makes sense if you're struggling; it surely took me a while to get that lol

    • @FootballIsLife00
      @FootballIsLife00 5 months ago

      I almost forgot this rule, thank you brother for saving my day
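
For anyone who prefers to check the identities above numerically, here is a tiny sketch with a made-up weight vector (illustrative only, not code from the video):

    import numpy as np

    w = np.array([3.0, 4.0])           # an arbitrary example weight vector
    norm_w = np.linalg.norm(w)         # ||w|| = 5 for this choice

    # the dot product of a vector with itself equals its squared magnitude
    assert np.isclose(w @ w, norm_w ** 2)

    # solving k * (w.w / ||w||) = 1 for k recovers k = 1 / ||w||
    k = 1.0 / (w @ w / norm_w)
    assert np.isclose(k, 1.0 / norm_w)
    print(k)                           # 0.2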

  • @emid6811
    @emid6811 2 years ago +1

    Such a clear explanation! Thank you!!!

  • @zz-9463
    @zz-9463 3 years ago +1

    Very informative and helpful video for understanding SVM! Thanks for such a great video! You deserve more subscribers

  • @user-ik5vu8rf9d
    @user-ik5vu8rf9d 2 months ago

    Thanks man, great explanation. I was trying to understand the math for 2 days and finally got it

  • @learn5081
    @learn5081 3 years ago

    very helpful! I always wanted to learn math behind the model! thanks!

  • @himanshu1056
    @himanshu1056 2 years ago

    Best video on large margin classifiers 👍

  • @maheshsonawane8737
    @maheshsonawane8737 8 months ago

    🌟Magnificent🌟 I actually understood this loss function by watching it once. Very nice explanation of the math. I saw a lot of other lectures, but you can't understand the math without graphical visualization.

  • @SreehariNarasipur
    @SreehariNarasipur 1 year ago

    Excellent explanation Ritvik

  • @Snaqex
    @Snaqex 4 months ago

    You're so unbelievably good at explaining :)

  • @Jayanth_mohan
    @Jayanth_mohan 2 years ago

    This really helped me learn the math of svm thanks !!

  • @BlueDopamine
    @BlueDopamine 1 year ago

    I am very happy that I found your YT channel. Awesome videos. I was unable to understand SVM until now!!!!

  • @maurosobreira8695
    @maurosobreira8695 2 years ago

    Amazing teaching skills - thanks a lot!

  • @TheOilDoctor
    @TheOilDoctor 7 months ago

    great, concise explanation !

  • @asharnk
    @asharnk 9 months ago

    What an amazing video bro. Keep going.

  • @pedrocolangelo5844
    @pedrocolangelo5844 10 months ago

    Once again, ritvikmath being a lifesaver for me. If I understand the underlying math behind these concepts, it is because of him

  • @NiladriBhattacharjya
    @NiladriBhattacharjya 1 year ago

    Amazing explanation!

  • @mensahjacob3453
    @mensahjacob3453 2 years ago

    Thank you, Sir. You really simplified the concept. I have subscribed already and am waiting patiently for more videos 😊

  • @AnDr3s0
    @AnDr3s0 3 years ago

    Nice explanation and really easy to follow!

  • @zarbose5247
    @zarbose5247 1 year ago

    Incredible video

  • @jingzhouzhao8609
    @jingzhouzhao8609 22 days ago

    Thank you for your genius explanation. At 5:11, before getting the value of k, the equation k * (w * w) / ||w|| = 1 contains w * w, so why doesn't the final k contain w?

  • @ziaurrahmanutube
    @ziaurrahmanutube 3 years ago

    Amazing explanation, from the theoretical to the mathematical. Please tell me how you do it, so I can teach myself how to understand and then explain these and other concepts. What resources do you use?

  • @godse54
    @godse54 3 years ago +1

    Please also make one for SVM regression... you are amazing

  • @harshalingutla7318
    @harshalingutla7318 2 years ago

    brilliant explanation!

  • @trishulcurtis1810
    @trishulcurtis1810 2 years ago

    Great explanation!

  • @borisshpilyuck3560
    @borisshpilyuck3560 10 days ago

    Great video! Why can we assume that the right-hand side of wx - b in those three lines is 1, 0, -1?
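
One standard way to see this (a sketch of the usual scaling argument, not an answer from the video): the hyperplane w·x - b = c and the rescaled hyperplane (w/c)·x - (b/c) = 1 contain exactly the same points for any c > 0, so whatever values the support vectors actually attain can be absorbed into w and b:

    w \cdot x - b = \pm c
    \quad\Longleftrightarrow\quad
    w' \cdot x - b' = \pm 1,
    \qquad w' = \frac{w}{c},\; b' = \frac{b}{c}

Choosing +1, 0, -1 is therefore just a normalization of the scale of w, not an extra assumption about the data.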

  • @walfar5726
    @walfar5726 1 year ago

    Very well explained, thank you !

  • @bhuvaneshkumarsrivastava906
    @bhuvaneshkumarsrivastava906 3 years ago

    Eagerly waiting for your video on SVM Soft margin :D

  • @akashnayak6144
    @akashnayak6144 2 years ago +2

    Loved it!

  • @yashshah4172
    @yashshah4172 3 years ago +1

    Hey Ritvik, nice video, can you please cover the kernelization part too?

  • @sukritgarg3175
    @sukritgarg3175 1 month ago

    Holy shit what a banger of a video this is

  • @germinchan
    @germinchan 1 year ago +1

    This is very clearly defined. Thank you.
    But could someone explain to me what w is? How can I visualize it and calculate it?

  • @wildbear7877
    @wildbear7877 10 months ago

    You explained this topic perfectly! Amazing!

  • @SESHUNITR
    @SESHUNITR 1 year ago +1

    very informative and intuitive

  • @fengjeremy7878
    @fengjeremy7878 2 years ago +1

    Hi ritvik! I wonder what the geometric intuition of the vector w is. We want to minimize ||w||, but what does w look like on the graph?

  • @superbatman1462
    @superbatman1462 3 years ago

    Easily explained 👍.
    Can you also explain how SVM works with respect to regression problems?

  • @almonddonut1818
    @almonddonut1818 1 year ago

    Thank you so much!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 2 years ago

    Wow, that was so well explained.

  • @suckockshititties2599
    @suckockshititties2599 10 months ago

    You are an amazing elucidator👍

  • @user-bp5go3ds5t
    @user-bp5go3ds5t 10 months ago

    phenomenal

  • @stephanecurrie1304
    @stephanecurrie1304 2 years ago

    That was crystal clear !

  • @amanbagrecha
    @amanbagrecha 3 years ago +1

    What about the points within the margin? Are they support vectors as well?

  • @samt3825
    @samt3825 5 months ago

    It was amazing, thank you so much

  • @Pazurrr1501
    @Pazurrr1501 2 years ago

    BRILLIANT!

  • @rndtnt
    @rndtnt 1 year ago +1

    Hi, how exactly did you choose 1 and -1, the values for wx - b where x is a support vector? wx - b = 0 for x on the separating line makes sense, however. Could it have other values?

  • @user-hu5qf6lg8u
    @user-hu5qf6lg8u 4 months ago +1

    you are my savior

  • @sorrefly
    @sorrefly 2 years ago +1

    I'm not sure, but I think you forgot to say that in order to have margin = +-1 you should scale w and b by multiplying constants. Otherwise I can't explain how we could have a distance of 1 from the middle.
    The rest of the video is awesome, thank you very much :)

  • @Max-my6rk
    @Max-my6rk 3 years ago

    Smart! This is the easiest way to come up with the margin when given theta (or weight)... gosh..

  • @xviktorxx
    @xviktorxx 3 years ago +1

    Great video, great underappreciated channel! Thank you and keep up the good work!

  • @salzshady8794
    @salzshady8794 3 years ago +9

    Could you do the math behind each machine learning algorithm? Also, would you be doing neural networks in the future?

    • @marthalanaveen
      @marthalanaveen 3 years ago

      Along with the assumptions of supervised and unsupervised ML algorithms that deal specifically with structured data.

    • @ritvikmath
      @ritvikmath  3 years ago +3

      Yup neural nets are coming up

    • @jjabrahamzjjabrhamaz1568
      @jjabrahamzjjabrhamaz1568 3 years ago

      @@ritvikmath CNN's and Super Resolution PLEASE PLEASE PLEASE

  • @TheCsePower
    @TheCsePower 2 years ago

    You should mention that your w is an arbitrary-length normal vector to the hyperplane (it is not the same size as the margin).

  • @blackforest449
    @blackforest449 2 years ago

    So good .. ThnQ

  • @madshyom6257
    @madshyom6257 1 year ago

    Bro, you're a superhero

  • @robfurlong8868
    @robfurlong8868 5 months ago

    @ritvikmath - Thanks for this great explanation. I have noticed other material online says the equation for the hyperplane is w.x+b=0 rather than w.x-b=0. Can you confirm which is accurate?
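
Both sign conventions describe the same set of hyperplanes, since the sign can be absorbed into the intercept (a general observation, not a statement about which form the video prefers):

    w \cdot x + b = 0
    \quad\Longleftrightarrow\quad
    w \cdot x - b' = 0, \qquad b' = -b

So either form is accurate as long as it is used consistently throughout the derivation.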

  • @zeinramadan
    @zeinramadan 3 years ago +1

    great video as always. thank you

  • @ramankutty1245
    @ramankutty1245 3 years ago

    Great explanation

  • @manishbolbanda9872
    @manishbolbanda9872 3 years ago

    explained it so well.

  • @emmanuelibrahim6427
    @emmanuelibrahim6427 2 years ago

    Gifted teacher!

  • @fengjeremy7878
    @fengjeremy7878 2 years ago

    Thank you! I am wondering why we use "+1 and -1" instead of "+1 and 0" to classify these two areas?

  • @stevehan3498
    @stevehan3498 2 years ago

    thank you so much

  • @Reojoker
    @Reojoker 3 years ago

    Are SVMs only useful for binary classification, or can they be extended to multi-class predictions?

  • @Cobyboss12345
    @Cobyboss12345 1 year ago

    you are the smartest person I know

  • @yatinarora9650
    @yatinarora9650 1 year ago

    super explanation

  • @acidaly
    @acidaly 1 year ago +2

    The equations for points on the margins are:
    w.x - b = 1
    w.x - b = -1
    That means we have fixed our margin to "2" (from -1 to +1). But our problem is to maximize the margin, so shouldn't we keep it a variable? Like:
    w.x - b = +r
    w.x - b = -r
    where maximizing r is our goal?

    • @davud7525
      @davud7525 11 months ago

      Have you figured it out?
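
A small numeric sketch of why fixing the right-hand side to +-1 loses nothing (made-up numbers for illustration, not taken from the video): dividing w and b by r turns w·x - b = ±r into the unit form without moving any of the three lines, and the geometric half-margin is unchanged.

    import numpy as np

    w, b, r = np.array([2.0, -1.0]), 0.5, 3.0   # arbitrary example values
    w2, b2 = w / r, b / r                        # rescaled parameters

    x = np.array([1.0, -1.5])                    # a point satisfying w.x - b = r
    assert np.isclose(w @ x - b, r)              # on the "+r" margin line...
    assert np.isclose(w2 @ x - b2, 1.0)          # ...and on the "+1" line after rescaling

    # the geometric half-margin is identical under either scaling:
    # r / ||w||  ==  1 / ||w2||
    assert np.isclose(r / np.linalg.norm(w), 1.0 / np.linalg.norm(w2))
    print(r / np.linalg.norm(w))                 # ~1.3416

So one can indeed write ±r, but maximizing r/||w|| with r free is the same problem as maximizing 1/||w|| after rescaling, which is why it is conventional to fix the value at 1.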