Support Vector Machines (SVMs): A friendly introduction

  • Published Sep 28, 2024

COMMENTS • 134

  • @hichamsabah31 · 3 years ago +1

    Best explanation of SVM on YouTube. Keep up the good work.

  • @ብሌናይጻዕዳ · 4 years ago +1

    The best SVM explanation I've listened to. Thank you.

  • @drewlehe3763 · 5 years ago +5

    This is a great explanation of the concepts, it helped me.
    But isn't this video about the Support Vector Classifier and not the SVM (which uses kernelization)?
    The SVC uses the maximal margin classifier, with a budget parameter for errors, and the SVM uses the SVC in an expanded feature space made by kernelization.

  • @giannismaris13 · 2 years ago

    BEST explanation of SVM so far!

  • @ismailcezeri1691 · 4 years ago

    The best explanation of SVM I have ever seen

  • @imagnihton2 · 2 years ago

    I am way too late here...but so happy to have found a gold mine of information! Amazing explanation!!

  • @mohammedhasan6522 · 5 years ago +17

    As always, very nicely and easily explained. Looking forward to seeing your explanations of PCA, t-SNE and some topics in reinforcement learning.

  • @ocarerepairlab8218 · 1 year ago

    Hey Luis, I have recently come across your videos and I am blown away by your simple approach to delivering the mathematics and logic, especially the mention of the applications. A quick one: DO YOU TAKE STUDENTS? I WOULD LIKE TO ENROLL. I am most interested in the analysis of biological data and I rarely find a video as good as this. I'm simply in love with your methods!!!!!

  • @obheech · 5 years ago

    Very nice explanations.. May your channel flourish !!

  • @bassimeledath2224 · 5 years ago +2

    Legend. Keep doing what you do!

  • @polarbear986 · 3 years ago

    Best SVM explanation. Thanks a lot!

  • @EngineeringChampion · 4 years ago

    Thank you for simplifying the concepts! I enjoyed watching this video!

  • @gitadanesh7496 · 4 years ago

    Explained very simply. Thanks a lot.

  • @gammaturn · 5 years ago +2

    Thank you very much for this amazing video. I have come across your channel only recently and I do like your way of explaining these complicated topics.
    I have got two (hopefully not too dumb) questions regarding SVMs:
    Given the similarity of SVMs and logistic regression, would it be a good idea to start from an LR-result instead of a random line?
    Did I understand correctly, that the distance between the two lines can only increase during the search for the best solution? Wouldn't it be conceivable that at some point the combined error function decreases by decreasing the distance between the lines?

    • @SerranoAcademy · 5 years ago +1

      Thank you, great questions!
      1. That's a good idea; it's always good to start from a good position rather than a random one. Since the two algorithms are of similar speed (complexity), I'm not sure if starting from LR is necessarily better than just doing an SVM from the start, but it's definitely worth a try.
      2. Actually, in the process of moving the line, one could change the coefficients in such a way that the lines get a little closer again (for example, if a and b are both increased in magnitude, the lines get closer together).
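
      (A quick numeric check of point 2, as a minimal Python sketch. It assumes the video's setup, where the margin lines are ax + by + c = 1 and ax + by + c = -1, so the width of the band is 2/sqrt(a^2 + b^2):)

          import math

          def margin_width(a, b):
              # Width of the band between a*x + b*y + c = 1 and a*x + b*y + c = -1.
              return 2 / math.sqrt(a**2 + b**2)

          print(margin_width(1.0, 1.0))   # ~1.414
          print(margin_width(1.2, 1.2))   # ~1.179: increasing |a| and |b| narrows the band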

  • @dante_calisthenics · 4 years ago

    Can I ask whether the step of separating the lines is just for optimizing the model? Like in the case when two lines have already separated the training data, do you expand the lines to see how wide the margin is?

  • @XunZhong · 2 months ago

    The "Margin Error" part is confusing. Didn't get it.

  • @scientific-reasoning · 3 years ago

    Hi Luis, I like your youtube video animations, they are great! Can I know what software you use for animations?

  • @pushkarparanjpe · 5 years ago

    Great work!

  • @sakcee · 8 months ago

    excellent

  • @eisamqassim1169 · 4 years ago

    Is SVM only for separating the points of two classes?!

  • @ravindrasonavane1469 · 4 years ago

    Please, can anyone tell me how the 1 and -1 came into the equation of the line?

  • @scherwinn · 5 years ago

    Clever great!

  • @bertobertoberto242 · 2 years ago +1

    SVMs are a non-parametric algorithm, and you are explaining it as a parametric algorithm... are you sure that this is the way to go? SVMs usually deal with a matrix of alphas to determine the support vectors and get the best separation. However, in your algorithm you are always multiplying a, b, c by 0.99, which makes no sense. Why should you get a wider band every time? (Multiplying a, b, c by 0.99 each iteration will cause them to become smaller and smaller, and so the boundaries will become wider and wider.)
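
    (For readers following this thread, here is a minimal Python sketch of the iterative scheme being debated, assuming the video's recipe: a perceptron-style correction step plus the multiply-by-0.99 step. Variable names, the learning rate, and the hinge-style condition are assumptions, not the video's exact code:)

        import random

        def train_svm(points, eta=0.01, shrink=0.99, epochs=1000):
            # points: list of (p, q, label) with label +1 (blue) or -1 (red).
            a, b, c = (random.uniform(-1, 1) for _ in range(3))
            for _ in range(epochs):
                p, q, label = random.choice(points)
                # Correction step: nudge the line toward a point that is
                # misclassified or inside the margin.
                if label * (a*p + b*q + c) < 1:
                    a += eta * label * p
                    b += eta * label * q
                    c += eta * label
                # Shrinking a, b, c widens the band a*x + b*y + c = +/-1
                # (its width is 2/sqrt(a^2 + b^2)); the correction step
                # above pushes back, so the width does not diverge.
                a, b, c = a*shrink, b*shrink, c*shrink
            return a, b, c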

  • @divyanshupandey9870 · 5 years ago

    Your videos are good, but the problem is they are not sorted properly... please group them properly.

  • @KoreaRwkz · 4 years ago

    22:00: Can anyone derive that expression?

    • @SerranoAcademy · 4 years ago

      It takes a bit of calculations, but here's a place where it's done: www.ck12.org/geometry/Distance-Between-Parallel-Lines/lesson/Distance-Between-Parallel-Lines-GEOM/
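
      (For readers who skip the link, the derivation is short. Take any point (x0, y0) on the line ax + by + c = 1, so that a*x0 + b*y0 + c = 1, and apply the point-to-line distance formula with respect to the other line ax + by + c = -1, i.e., ax + by + (c + 1) = 0:)

          \[
          d = \frac{|a x_0 + b y_0 + c + 1|}{\sqrt{a^2 + b^2}}
            = \frac{|1 + 1|}{\sqrt{a^2 + b^2}}
            = \frac{2}{\sqrt{a^2 + b^2}}.
          \]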

  • @sunilsarode4295 · 5 years ago

    Thanks a lot! Is there another way to come up with the equation of the line without iterative gradient descent? Can you explain that, and also the math part? :P And what does "support vector" mean?

  • @SRAVANAM_KEERTHANAM_SMARANAM · 4 years ago

    Sir, in the video at ua-cam.com/video/Lpr__X8zuE8/v-deo.html you are showing a good SVM, but you are showing blue points in the red area and red points in the blue area. Why is it then a good SVM model?

    • @SerranoAcademy · 4 years ago

      Thanks for noticing! That was a typo, the colors of the areas should have been backwards. I’ll add a comment correcting it.

    • @SRAVANAM_KEERTHANAM_SMARANAM · 4 years ago

      @@SerranoAcademy Machine learning would have been a nightmare for me had you not posted your videos. Thanks a lot for your support.

  • @LuisDanielLucioQuiroz · 2 years ago

    Hi there, your video is good, but is this an AI video or a linear-function lesson? You waste too much time explaining linear functions; I think if you are studying AI, your math basics must be solid.
    Now, we are supposed to find a band that separates both sets of points and is as wide as possible without error (a wide margin without errors). What if, instead of growing gradually, you used genetic algorithms? If we are going to iterate anyway, I think a GA would help to discover the best possible lines.

  • @naps9249 · 5 years ago +15

    The best machine learning / deep learning channel I've learnt from.

  • @blesucation4417 · 10 months ago +2

    Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!

  • @JohnTheStun · 4 years ago +6

    Visual, thorough, informal - perfect!

  • @JimmyGarzon · 5 years ago +4

    Thank you, this is fantastic! Your visual explanations are great; they've really helped me understand the intuition behind these techniques.

  • @macknightxu2199 · 3 years ago +1

    I think the SVM's loop should use one line, ap+bq+c-1>0, for blue points and another line, ap+bq+c+1<0, for red points.

    • @olayomateoreynaud9956 · 3 years ago

      I think that you are right; I don't know anything about SVM (which is why I ended up here), but I was thinking during the entire video that it doesn't make sense to create parallel lines if they are not used.

  • @08ae6013 · 5 years ago +2

    Thank you very much for this video. As usual, you are so good at explaining complex things in a simple way. For the first time I am able to understand the motivation behind the SVC and also how it differs from logistic regression. Can you please make a video on SVM kernels (polynomial, Gaussian, radial...)?

  • @xruan6582 · 4 years ago +1

    Great tutorial. (16:23) "if point is blue, and ap + bq + c > 0": I think the equation should have BLUE color (to indicate the BLUE dash on the graph) rather than RED. Similarly, in "if point is red, and ap + bq + c < 0", the equation should be RED (to indicate the RED dash on the graph) instead of BLUE. Pardon me if I am wrong.

  • @nguyenbaodung1603 · 3 years ago +1

    This is terrifying, omg. You approach it soooooo perfectly, and all the math behind it just guided me to the point where I had to say WOW! Such a good observation; this video is by far gold. I love your approach at 22:56 so much: you guide me to that point and say "that's the regularization term", and I was like, omg, that's what I was trying to understand all this time, and you just explained it in a few minutes. Really appreciate it.

  • @anujshah645 · 3 years ago

    In the pseudo-algorithm of the SVM, in the last step we multiply a, b, c by 0.99; shouldn't the right-hand side also be multiplied by 0.99, making it 0.99 and not 1? Am I missing something?

  • @tangledweb5265 · 23 days ago

    You made SVMs look like a walk in the park. Thoroughly enjoyed this, as I enjoyed your Math for ML specialisation on Coursera.

  • @sofiayz7472 · 3 years ago +1

    This is the best SVM explanation! I never truly understood it until I watched your video!

  • 4 years ago

    So if the data is separable with a large margin, the margin error is small... even though the model produces a worse classification than a model with a small margin and a high margin error. Is that correct?

  • @Pulorn1 · 8 months ago

    Thank you for the good explanation. However, I missed some introduction: what is its added value compared to logistic regression? And some recommendations on when to prioritize this algorithm over others...

  • @macknightxu2199 · 3 years ago

    16:36: We multiply a, b, c by 0.99, so in the loop 0.99ap + 0.99bq + 0.99c = 0 is the same as ap + bq + c = 0. Is multiplying by 0.99 senseless, then?
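
    (It is not senseless: scaling a, b, c leaves the central line ax + by + c = 0 unchanged, but it does move the margin lines ax + by + c = +/-1. A tiny Python check, with an arbitrary example line rather than anything from the video:)

        a, b, c = 2.0, -1.0, 0.5
        x, y = 1.0, 2.5                       # on the central line: 2.0 - 2.5 + 0.5 = 0
        print(a*x + b*y + c)                  # 0.0
        print(0.99*a*x + 0.99*b*y + 0.99*c)   # still 0.0, so the central line is the same
        # To satisfy a*x + b*y + c = 1 after the scaling, a point must lie
        # farther from the central line: the band width 2/sqrt(a^2 + b^2)
        # grows as a and b shrink, so the +/-1 lines move apart.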

  • @konradpietras8030 · 2 years ago

    In my opinion, the suggestion that the margin increases on every iteration is misleading. If I understood correctly, the margin-error step does make it bigger, but there is also the classification error, which can easily compensate for this and make the margin decrease overall in a single iteration.

  • @SRAVANAM_KEERTHANAM_SMARANAM · 4 years ago

    Excellent explanation. When you are moving a line, we need to adjust the rotation and also the offset. How is it that you accomplish both at the same time, i.e., a, b, c are all modified together? ua-cam.com/video/Lpr__X8zuE8/v-deo.html. How will it achieve both slope and offset at the same time?

  • @MANISHMEHTAIIT · 5 years ago +2

    Nice Sir, best teaching style.
    Love the way you teach...

  • @bodenseeboys · 3 years ago +1

    I really like your accent, could listen all day. Living legend Luis

  • @Vikram-wx4hg · 3 years ago +1

    Super explanation Luis!
    It's great when someone can bring out the intuitions and meaning behind mathematics in such a clear way!

  • @AnilAnvesh · 2 years ago

    Thank You for this video ❤️

  • @KundanKumar-zu1xk · 3 years ago

    As always, an excellent and easy-to-understand video.

  • @meenakshichoudhary4554 · 5 years ago +1

    Sir, thank you for the video, extremely well explained in a short duration. Really appreciated.

  • @sandeepgill4282 · 2 years ago

    Thanks a lot for such a nice explanation.

  • @kimsethseu6596 · 3 years ago

    thank you for the good explanation.

  • @vitor613 · 3 years ago

    HOLY SHIT, BEST EXPLANATION EVER

  • @chetantanwar8561 · 4 years ago

    Sir, please also teach its kernel method in depth.

  • @sorooshp.8086 · 1 year ago

    No need to show yourself or a photo of your friends.

  • @ardhidattatreyavarma5337 · 6 months ago

    awesome explanation

  • @sharangkulkarni1759 · 1 year ago

    Oh Luis, my love of life.

  • @chetantanwar8561 · 4 years ago

    Especially the RBF kernel.

  • @geogeo14000 · 3 years ago

    Amazing work, thx!

  • @ronaktiwari7041 · 3 years ago

    You are the best Luis.

  • @keshav2136 · 4 years ago

    Best!

  • @letyrodridc · 1 month ago

    Amazing explanation, Luis! As usual. You are a great professor, turning complex topics into very simple explanations.

  • @sandipansarkar9211 · 3 years ago

    Great explanation

  • @andresalmodovar3473 · 3 years ago

    Hi Luis, amazing job. But just one question: could there be a typo in the criteria for misclassification of points? I mean, I think the criteria should be: for blue, ap+bq+c > -1, and for red, ap+bq+c < 1.

  • @ignaciosanchezgendriz1457 · 1 year ago

    Luis, your videos are simply marvelous! I think about how much knowledge and clarity were needed. Quote by Dejan Stojanovic: "The most complicated skill is to be simple."

  • @alyyahfoufy6222 · 4 years ago

    Hello,
    When we multiply the equation by the expansion rate of 0.99, should the right-hand sides of the equations be 0.99, 0, and -0.99? Thanks.

  • @terryliu3635 · 2 years ago

    Great video!!!

  • @robertpollock8617 · 10 months ago

    Excellent!!!!!

  • @frankhendriks2637 · 3 years ago

    Hi Luis, Thanks very much for these videos. I watch them with great pleasure. I have some questions though about this one. The questions are preceded by the moment in the video (in mm:ss) where I have my question.
    14:26: For determining whether a point is correctly classified, should you compare the red points to the red (dashed) line and the blue points to blue (dashed) line? Or should we compare all points to the black line? I assume it is the first although this is not mentioned explicitly.
    22:07: The margin is different when you start with a different value of d in ax+by+c=d. Would you always start with d=1 and -1 or are there situations you start with other values of d (see also my question below)?
    27:33: Two questions here. 1) In the second example the margin is actually not increased but decreased. Your video however only talks about expansion, not the opposite. How does reduction of the margin happen? Or does this only work by starting the algorithm with a smaller expansion so with a smaller value of d than 1 in ax+by+c=d? 2) It seems to me that the first solution will also be the result of minimizing the log-loss function as this maximizes the probabilities that a point is classified correctly. So the further the points are away from the line in the correct area, the better it is. And that seems to be the case for the first solution. So what is the difference between this log-loss approach and the SVM approach? Do they deliver different results? If so, when would you choose the one or the other?
    Thanks, Frank

  • @AmitSharma-rj2rp · 4 years ago

    Can someone explain why the margins don't keep diverging infinitely? The final step of the SVM algorithm involves multiplying a, b and c by 0.99. If you keep doing that, don't you just get lines that are infinitely far apart? Thank you.
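
    (One way to see why they don't: the total error also contains a classification (hinge) term, which grows once points fall inside the widening band. Below is a sketch of the standard soft-margin objective, which the video's error function resembles; the weight C and the toy points are assumptions, not the video's numbers:)

        def total_error(a, b, c, points, C=1.0):
            # classification (hinge) error + margin error
            hinge = sum(max(0.0, 1 - label * (a*p + b*q + c)) for p, q, label in points)
            margin = a*a + b*b                 # small margin error <=> wide band
            return C * hinge + margin

        pts = [(1, 1, +1), (-1, -1, -1)]
        for s in (1.0, 0.25, 0.05):            # shrink the coefficients further each time
            print(s, total_error(2*s, 2*s, 0.0, pts))
        # Prints 8.0, then 0.5, then ~1.62: shrinking helps only until the
        # points enter the margin, after which the hinge term pushes back.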

  • @krishnanarra5578 · 3 years ago

    Awesome.. I liked your videos so much that I bought your book and the book is great too.

    • @SerranoAcademy · 3 years ago

      Thank you Krishna, so glad to hear you liked it! ;)

  • @zullyholly · 4 years ago

    Very succinct way of explaining the eta and C hyperparameters. Normally I just take things for granted and just do hyperparameter tuning.

  • @karanpatel1906 · 4 years ago

    Simply awesome... even "thank you" is not enough to describe how good this video is... it explains the toughest things in a kid's language.

  • @raviankitaava · 4 years ago

    Would be grateful if you could add explanations of Gaussian processes and hyperparameter optimization techniques.

  • @rafaelborbacs · 4 years ago

    How do you generalize these algorithms to many dimensions? My problem has about 50 attributes instead of 2, and I need to classify data as "red or blue".
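
    (The same math carries over directly: with n attributes, ap + bq + c becomes a dot product w·x + b in n dimensions, and the separating line becomes a hyperplane. A minimal sketch with scikit-learn, using random made-up data in place of your 50-attribute problem:)

        import numpy as np
        from sklearn.svm import SVC

        X = np.random.randn(200, 50)                    # 200 samples, 50 attributes
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy "red or blue" labels
        clf = SVC(kernel="linear").fit(X, y)            # linear SVM in 50 dimensions
        print(clf.predict(X[:5]))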

  • @johncyjoanofarc · 3 years ago

    This video should go viral... so that people benefit from it... Great teaching.

  • @ruskinchem4300 · 3 years ago

    Hi Luis, the explanation is great, no doubt, but the equations that you wrote for the margin error should be ax+by=1 and ax+by=-1.

  • @dante_calisthenics · 4 years ago

    And at step 5, I think that after adding/subtracting 0.01 you should also do gradient descent, right?

  • @manjunatharadhya4361 · 3 years ago

    Very Nice

  • @todianmishtaku6249 · 2 years ago

    Superb!

  • @souravkumar-yu9vi · 4 years ago

    Excellent

  • @lucycai3356 · 3 years ago

    thanks!

  • @khatiwadaAnish · 1 year ago

    You made complex topic very easily understandable 👍👍

  • @humzaiftikhar1130 · 2 years ago

    Thank you very much for that hard work. It was so informative and well described.

  • @samirelzein1978 · 4 years ago

    The more you speak, the better it gets. Please keep giving practical examples of applications at the end of each video.

  • @macknightxu2199 · 3 years ago

    In the loop, when do you use the parallel lines ax+by+c=1 and ax+by+c=-1?

  • @말바른-e7f · 3 years ago

    Very insightful lecture. Thank you very much Dr Serrano.

  • @keshavkumar7769 · 4 years ago

    What an explanation. Damn good. You are great, sir.
    Please make some videos on XGBoost and other algorithms too.

  • @creativeuser9086 · 1 year ago

    Awesome video. Can you do more videos about LLMs?

    • @SerranoAcademy · 8 months ago

      Thanks for the suggestion! I did some recently, here they are: ua-cam.com/play/PLs8w1Cdi-zva4fwKkl9EK13siFvL9Wewf.html

  • @rajeshvarma2162 · 2 years ago

    Thanks for your easy and understandable explanation

  • @mudcoff · 5 years ago

    Mr. Serrano, you are the only one who explains the logic of ML and not just the technicalities. Thank you.

  • @john22594 · 5 years ago

    Nice tutorial. Thank you so much. It would be easy for us if you added code for this algorithm.

  • @jaikumaranandapadmanaban1525 · 3 years ago

    Hi sir, why are the parallel lines equated to +1 and -1?

  • @farzadfarzadian8827 · 5 years ago

    SVM is constrained optimization, so does it need Lagrange multipliers?

  • @hanfei3468 · 4 years ago

    Thanks Luis, great video and explanation! How do you do the animation in the video?

  • @AA-yk8zi · 3 years ago

    Really good explanation! Thank you, sir.

  • @shrisharanrajaram4766 · 4 years ago

    Hats off to you,sir. Very clear with the concept

  • @RIYASHARMA-he9vz · 3 years ago

    The best explanation of SVM I have ever read.

  • @ikramullahmohmand · 4 years ago

    Very well explained. Thanks, mate :-)

  • @iidtxbc · 5 years ago

    What is the name of the algorithm you have introduced in the lecture?