Support Vector Machines - THE MATH YOU SHOULD KNOW

  • Published 5 Sep 2024

COMMENTS • 153

  • @CodeEmporium
    @CodeEmporium  5 years ago +37

    Like I said at the end of the video, I have some ideas for this channel. Here are some types of videos:
    1. Concepts with Comedy - Short but pithy explanations of certain big-idea concepts in the Data Science space (e.g. the AB Testing video I made) with some bad comedy to keep it engaging.
    2. Algorithm Concepts - Explaining the details of Machine Learning algorithms (like this one).
    3. The Math You Should Know - Subset of (2) with a ton of math (the Linear Regression, Logistic Regression and Kernel vids I made)
    4. Papers and Discussions - Reading and explaining the concepts and math behind recent research papers in deep learning and machine learning (e.g. the Attention, Mask R-CNN, CycleGANs vids)
    I made explicit playlists of these videos, so check 'em out in the "playlist" section. Add the entire playlist for some continuous quality content ;)
    I'll have a video on high-dimensional data and PCA coming up - either the next one or the one after that. So subscribe to keep an eye out for it!
    I have some goals for this channel. Let's make it big! Thanks for all your support!

    • @TheAnirudhable
      @TheAnirudhable 5 years ago +1

      You explain well. Please try to do a Kaggle competition or a computer vision project with code.

    • @mohammedfaisal6714
      @mohammedfaisal6714 4 years ago

      You will very soon have 1M subs...
      Keep growing 👑👑👑

    • @vineethm6930
      @vineethm6930 4 years ago

      I would say go for 3 and 4, excellent content brother.

    • @hemantchaudhary5902
      @hemantchaudhary5902 4 years ago

      Hi, please let me know if I can contact you in some way.

    • @zahirrussell234
      @zahirrussell234 3 years ago

      InstaBlaster.

  • @alicezhou6252
    @alicezhou6252 4 years ago +15

    Thank you for saving my summer project essay! It's really helpful to have someone show me the details, rather than the one sentence saying "do this with Lagrange multipliers" in Boswell's paper 🥳

  • @muhammadshaheer3741
    @muhammadshaheer3741 5 years ago +24

    Great stuff, hats off!
    Dude, kindly keep making "Math You Should Know" videos.
    Can you please go for Hidden Markov Models and Kalman filters?

  • @ericaltf4
    @ericaltf4 4 years ago +59

    Why doesn't anyone go through an example with real numbers so we can actually see these formulas in action?

    • @henrychen1544
      @henrychen1544 4 years ago +5

      because most of these variables are vectors or matrices

    • @asutriasorem9364
      @asutriasorem9364 4 years ago +2

      @@henrychen1544 ...which contain numbers, so it is a reasonable question

    • @iventuresmov-dj9tu
      @iventuresmov-dj9tu 4 years ago +4

      It does contain numbers, but it would take super long to go through each calculation, and if you’re watching this video, you should already know how to plug in those “numbers”

    • @yavarjn2055
      @yavarjn2055 2 years ago +1

      There are a lot of videos with examples and numbers. This video is for when you want to know the hows, not the whats.

    • @maxpatrickoliviermorin2489
      @maxpatrickoliviermorin2489 1 year ago +1

      @@henrychen1544 an illustration with actual numbers is definitely possible. Real-life data science problems involve actual numbers.

  • @pingchuanma1191
    @pingchuanma1191 5 years ago +5

    The best video of SVM! Thank you for your excellent work!

  • @PRUTHVIRAJRGEEB
    @PRUTHVIRAJRGEEB 4 months ago

    This is exactly what I was looking for. End to end explanation clearly showing the steps involved. Thanks a ton man!❤

  • @priyankakaswan7528
    @priyankakaswan7528 4 years ago +3

    You were just wonderful! You explained the concept amazingly, exactly what I needed to hear and at the correct speed.

  • @yuktikaura
    @yuktikaura 3 years ago

    @2:20, in the equation for the distance of a point vector from the hyperplane, the denominator should be ||w|| rather than ||w|| squared.
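
    For reference, the standard point-to-hyperplane distance the comment refers to is (written out here, not quoted from the video):

        $d_H(x_0) = \dfrac{|w^\top \phi(x_0) + b|}{\lVert w \rVert}$

    The squared norm appears only later, in the primal objective $\min_{w} \tfrac{1}{2}\lVert w \rVert^2$.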

  • @muhtasirimran
    @muhtasirimran 2 years ago +1

    I would be thankful if you could tell me what math is required for SVM, and any reference to learn it. I have searched the internet for a looooooong time and every single one says just basic math. I'm too frustrated with the internet rn.

  • @87dammi
    @87dammi 5 years ago +16

    Maybe a tad too rushed, but balancing time to make and release a video is probably a tricky business :)
    That being said, I like your videos 1000x more than Siraj's; wish I could move some of his views to you ;)
    Keep it up!

    • @CodeEmporium
      @CodeEmporium  5 years ago +2

      Thanks for the constructive criticism, and the compliments ;). I admit the math was rushed. Things that took me 30 minutes to type were shown on screen for 5 seconds. But I am worried that long and drawn out videos with a ton of math may be overwhelming. And I didn't just want this to be some 1 hour lecture on the topic. That said, I'm trying out different things -- hopefully I can find a sweet spot.

  • @wolfisraging
    @wolfisraging 5 years ago +30

    Plz for God's sake keep doing this

    • @CodeEmporium
      @CodeEmporium  5 years ago +11

      God has spoken. Your prayers have been answered. Your messiah is here.
      But yeah. I don't plan on stopping anytime soon. Thanks for the support!

  • @ananyadas3257
    @ananyadas3257 1 year ago

    Yoooooo!! With super crisp explanations of the math behind it, super-awesome visualization and picture-perfect presentation, this video is a nice contribution to the ML community. Super awesome work!! Keep it up.

    • @CodeEmporium
      @CodeEmporium  1 year ago +1

      Thanks for the compliments :D I shall be pumping out more content (video in 2 days!)

  • @Saravananmicrosoft
    @Saravananmicrosoft 4 years ago +1

    Very well done on the explanation, and I am obsessed with your math explanations, especially the terminology and how you use it.

  • @maria4880
    @maria4880 7 months ago

    Thank you for the deep dive into RBF kernels. My hope is to fill in the gaps in my math so I can watch these videos and get better intuition for these topics... not quite there yet, but we're still getting there!

  • @karthin3839
    @karthin3839 5 years ago +3

    The most informative and easily understandable video... Thanks a lot ❤❤❤

  • @yasserothman4023
    @yasserothman4023 3 years ago

    Why did you change the minimization in the primal form to maximization in the dual form?

  • @theghostwhowalk
    @theghostwhowalk 4 years ago

    Most crisp and to the point explanation.

  • @rtgunti
    @rtgunti 5 years ago +2

    Dude.. You are a saviour! Great work. Keep it coming.. 👍🏻

  • @sampathkumargajawada1595
    @sampathkumargajawada1595 4 years ago +1

    The math could be shown using an example... writing equations alone may not help.

  • @bladongarland8635
    @bladongarland8635 1 month ago

    Why is phi used to denote the transformation function at 1:22? I asked ChatGPT and it doesn't know why, and I looked online and did not see phi used in this context at all. At the risk of sounding ignorant, I don't believe this symbol should have been used; any other alternative would have been suitable without the extra confusion over the symbol's meaning in the context of the equation. You'd assume the symbol phi adds some context, but it adds nothing to the understanding of the function.

  • @masood4744
    @masood4744 2 years ago

    Why have we multiplied y_n with (w^T · phi(x_n) + b)?

  • @yavarjn2055
    @yavarjn2055 2 years ago

    Could you please offer some theorems and proofs?

  • @ochanabondhu
    @ochanabondhu 1 year ago

    This is wonderful. Can you please make a video on Pegasos: Primal Estimated sub-GrAdient SOlver for SVM

  • @exoticcoder5365
    @exoticcoder5365 1 year ago

    0:42 that “kernalization” had me laughing 😂😂😂😂

  • @blmppes9876
    @blmppes9876 2 years ago

    OMG this is so helpful!!! So simple but so general. Thank you very much, sir!

    • @CodeEmporium
      @CodeEmporium  1 year ago

      So glad this is helpful!! Thanks for commenting and watching!!

  • @vulturebeast
    @vulturebeast 5 years ago +2

    I'd suggest you keep your good work going... you have great potential.

    • @CodeEmporium
      @CodeEmporium  5 years ago

      Thanks for the support Gaurav! You can look forward to more videos.

  • @cdsjatin
    @cdsjatin 5 years ago +3

    Excellent explanation, keep up the good work.

    • @CodeEmporium
      @CodeEmporium  5 years ago

      Thanks a ton! Look forward to some more videos homie!

  • @miquelmarti5024
    @miquelmarti5024 4 years ago +1

    I think you have a typo in the last term of the equation appearing at 9:06. It should be "sum_n(xi_n(C - alpha_n - lambda_n))".
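
    For context, this is the regrouping in the standard soft-margin Lagrangian (written out here in the video's notation, with α for the margin multipliers and λ for the slack multipliers):

        $L = \tfrac{1}{2}\lVert w \rVert^2 + C\sum_n \xi_n - \sum_n \alpha_n \left[ y_n(w^\top \phi(x_n) + b) - 1 + \xi_n \right] - \sum_n \lambda_n \xi_n$

    Collecting the coefficient of each $\xi_n$ gives exactly $\sum_n \xi_n (C - \alpha_n - \lambda_n)$.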

  • @mohitsinha1994
    @mohitsinha1994 4 years ago +1

    brooooooooooooo.....you saved my life...!!!
    thanks a ton.....

    • @CodeEmporium
      @CodeEmporium  4 years ago +1

      Saving lives is what I do... you are welcome

  • @drummatick
    @drummatick 3 years ago

    One edit: in the dual formulation you dropped the double summation, which might create some confusion.

  • @user3sej90rga8
    @user3sej90rga8 5 years ago +3

    Wow, this is what I was looking for!

  • @rahulbball9395
    @rahulbball9395 5 years ago +1

    Good video man. However, for those of us who are not excellent in math, I feel like it would be a lot more helpful if you could derive these equations by hand.

    • @CodeEmporium
      @CodeEmporium  5 years ago +1

      Thanks for the feedback homie. I'll try to make a video that's more accessible soon :)

  • @nbndanzo3685
    @nbndanzo3685 9 months ago

    Hi bro, thank you for the short and clear material. Can you help me understand in detail, with the mathematics, how you use the constant c in c(w^T x + b) = 0 to scale the equation to 1 (at 04:30)? I'm just watching your video in translation.

  • @nilkamalthakuria4030
    @nilkamalthakuria4030 3 years ago +2

    Great video! I was wondering how exactly the final optimization is implemented to find the alphas and the lambdas, which is the whole point. At first glance, it seemed to me that the Lagrangian has far too many variables (about the size of the dataset) and too few constraints to work with. So I poked around and found this: en.wikipedia.org/wiki/Sequential_minimal_optimization
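
    A minimal sketch of this in practice, assuming scikit-learn is available (its SVC wraps libsvm's SMO-style dual solver); the dataset is illustrative:

        import numpy as np
        from sklearn.svm import SVC

        # Tiny illustrative dataset: two classes in 2D.
        X = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 3.0],
                      [6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])
        y = np.array([-1, -1, -1, 1, 1, 1])

        # SVC solves the dual problem with an SMO-style solver.
        clf = SVC(kernel="rbf", C=1.0).fit(X, y)

        # Only support vectors end up with alpha_n > 0, which is why the
        # "one dual variable per data point" problem stays tractable.
        print(clf.support_)    # indices of the support vectors
        print(clf.dual_coef_)  # alpha_n * y_n, one entry per support vector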

  • @s25412
    @s25412 3 years ago

    At 4:07 in the top right corner, if you multiply the hyperplane equation by y_i, wouldn't that expression always be greater than or equal to 0?
    If your hyperplane equation is

    • @keren718
      @keren718 1 year ago

      I guess it's because y_i can be either less than zero or greater than zero, but I am not sure and I am also wondering about this...

    • @HazotKleim
      @HazotKleim 9 months ago

      @@keren718 It's because y_i is worth either 1 or -1; it's the "label" of each point. It's -1 when the points are above the line and 1 when they are underneath it.
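
      In symbols, this is the standard way the two class constraints collapse into one:

          $y_i = +1:\; w^\top \phi(x_i) + b \ge 1 \qquad y_i = -1:\; w^\top \phi(x_i) + b \le -1$

      so in both cases $y_i \left( w^\top \phi(x_i) + b \right) \ge 1$.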

  • @manojbhatt7010
    @manojbhatt7010 4 years ago

    Can you make a video on support vector regression? Can you also make a video on the steps for writing SVR MATLAB code?

  • @harry5094
    @harry5094 5 years ago

    How did the constraint change for the linearly inseparable case at 5:49? Why should ξ_n satisfy y_n(w^T x + b) >= 1 - ξ_n?

  • @arghachakraborty7088
    @arghachakraborty7088 2 years ago

    We get a closed form for w in terms of the Lagrange multipliers. But for prediction, we also need the intercept 'b'. How do we obtain it?

  • @touchdownConfirmed
    @touchdownConfirmed 3 years ago

    1:15 What is a feature space? What is Phi of x?

    • @jorgel.parishuanaortega948
      @jorgel.parishuanaortega948 2 years ago

      Usually it's a mapping from the variables x to some function of them.
      Just like when you see a 3D graph of a function of two variables: the graph gives some points a "height". Same idea for more dimensions.
      So you add a height to each point based on a function. Maybe there's a function where the heights given make the linearly inseparable points actually separable.
      For example, (x^2 + y^2) makes points around (0,0) sit at a "low height" while further points rise, so you can separate points lying in a circle around (0,0) by giving height to the far points.
      That mapping (x1, x2, ..., xn, f(x)) is phi.
      In the end we don't need to reformulate the problem, since the solution of the optimization problem only ever needs phi(x1)·phi(x2). We define this as Kernel(x1, x2).
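
      A minimal Python sketch of that lifting (the dataset and the feature map phi(x) = (x1, x2, x1^2 + x2^2) here are illustrative, assuming scikit-learn):

          import numpy as np
          from sklearn.svm import LinearSVC

          rng = np.random.default_rng(0)
          # Inner disc (class 0) surrounded by a ring (class 1):
          # not linearly separable in the original 2D space.
          r = np.concatenate([rng.uniform(0, 1, 100), rng.uniform(2, 3, 100)])
          t = rng.uniform(0, 2 * np.pi, 200)
          X = np.column_stack([r * np.cos(t), r * np.sin(t)])
          y = np.concatenate([np.zeros(100), np.ones(100)])

          # phi(x) = (x1, x2, x1^2 + x2^2): the added "height" feature.
          X_lifted = np.column_stack([X, (X ** 2).sum(axis=1)])

          clf = LinearSVC(C=1.0).fit(X_lifted, y)
          print(clf.score(X_lifted, y))  # ~1.0: separable in the lifted space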

  • @cbasile22
    @cbasile22 3 years ago

    Hi, really good video, thanks for posting it. I had a question: did you do a video about the non-perfect-separation case and the solution of its optimization problem? I see that the dual problem is posed here, but how is it solved: subgradient/gradient descent? Any book you recommend other than the standard 'statistical learning'? Thanks!

  • @sudeshkumar8377
    @sudeshkumar8377 1 year ago

    How can you define the optimal weight vector to be the argmax(min...)? I am confused, buddy.

    • @sudeshkumar8377
      @sudeshkumar8377 1 year ago

      Also, why are we taking the argmax of the minimum distance?

    • @sudeshkumar8377
      @sudeshkumar8377 1 year ago

      And I also noticed that the optimal w given by argmax(1/||w||) suddenly changed to min (1/2)||w||^2 in the primal form.
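
      For what it's worth, the standard equivalence behind that step is (a worked identity, not a quote from the video):

          $\arg\max_{w} \frac{1}{\lVert w \rVert} = \arg\min_{w} \lVert w \rVert = \arg\min_{w} \tfrac{1}{2}\lVert w \rVert^2$

      since $1/t$ is decreasing and $\tfrac{1}{2}t^2$ is increasing for $t > 0$; the square and the 1/2 are there only to make the gradient cleaner.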

  • @siddharthsahu1130
    @siddharthsahu1130 5 years ago

    VERY nice video containing an easy and good explanation... may God keep giving you lots of intelligence

  • @1UniverseGames
    @1UniverseGames 3 years ago

    Please make some videos about the math calculations behind ML/CNNs in detail

  • @akashkhunt4191
    @akashkhunt4191 4 years ago +1

    Could someone please write the steps for how to find the w and b parameters of a hyperplane given a labeled training data set?

    • @priyankakaswan7528
      @priyankakaswan7528 4 years ago

      You set up the convex optimisation problem with the variables w, b and k (whatever that symbol was). Now you need to solve the convex optimisation problem to find these optimum variables: just put your set of equations with constraints into a quadratic solver.

    • @goyobetoalvarez7102
      @goyobetoalvarez7102 3 years ago +1

      You might have found the answer to this question by now, but I think it's worth explaining since I could not find similar information elsewhere.
      After you get the alphas from the quadratic solver, you can substitute them into the w·phi equation. After you get this vector, you can then substitute it into the constraint y(w·phi + b) = 1 and solve for b.
      To predict new values, a new kernel matrix between phi_k and phi_test has to be computed.
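
      A minimal sketch of those steps with a linear kernel, assuming scikit-learn performs the dual solve (the data and names are illustrative):

          import numpy as np
          from sklearn.svm import SVC

          X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0],
                        [6.0, 5.0], [7.0, 8.0], [8.0, 6.0]])
          y = np.array([-1, -1, -1, 1, 1, 1])

          clf = SVC(kernel="linear", C=10.0).fit(X, y)

          # Alphas from the solver: sklearn exposes alpha_n * y_n directly.
          alpha_times_y = clf.dual_coef_[0]

          # w = sum_n alpha_n * y_n * x_n over the support vectors.
          w = alpha_times_y @ clf.support_vectors_

          # b from a margin support vector x_s, where y_s * (w.x_s + b) = 1,
          # so b = y_s - w.x_s (compare with sklearn's own intercept_).
          x_s, y_s = clf.support_vectors_[0], y[clf.support_[0]]
          b = y_s - w @ x_s
          print(w, b, clf.intercept_)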

  • @kirannbhavaraju5978
    @kirannbhavaraju5978 4 years ago +1

    Either the derivations you show in the Lagrangian simplification step are wrong, or you have messed up your optimization assumption. I could point out the mistakes if you wish to correct your video. :P

  • @akshitbhalla874
    @akshitbhalla874 3 years ago

    I just thought I should let you know you're awesome. Thanks for sharing. You have a subscriber!

    • @CodeEmporium
      @CodeEmporium  3 years ago

      Thank you. Really appreciate it :)

  • @saurabh1791
    @saurabh1791 3 years ago

    Wow, this is exactly what I needed. Thanks!

  • @RojeshManShikhrakar
    @RojeshManShikhrakar 4 years ago

    Did I miss it? Where do we introduce the support vectors?

  • @meirgoldenberg5638
    @meirgoldenberg5638 4 years ago

    At 6:09, shouldn't that be ξ > 2, so that one minus it is less than -1?

  • @mohammedfareedh
    @mohammedfareedh 4 years ago +1

    Dude, I like your presentation and the husky voice coming in in the middle

  • @kamakshim8001
    @kamakshim8001 5 years ago

    Sir, please help me if possible. What is the difference between PSO and SVM? How can one prove SVM is better than PSO?

  • @arturomorenopilar2035
    @arturomorenopilar2035 2 years ago

    I love you, bro. Thanks!!

  • @astrid6744
    @astrid6744 2 years ago

    It's so helpful, thanks for sharing! But there's a mistake at 9:13: g() is wrong.

    • @HazotKleim
      @HazotKleim 5 months ago

      I think you are right: the sum Σ α_n y_n (w^T·φ(x_n)) isn't equal to 0. He made a mistake by treating w^T·φ(x_n) as a constant, which gave him w^T·φ(x_n) · Σ α_n y_n = 0 because Σ α_n y_n = 0; but indeed w^T·φ(x_j) ≠ w^T·φ(x_k) if you choose j and k properly.
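
      Spelling this out with the stationarity condition $w = \sum_n \alpha_n y_n \phi(x_n)$:

          $\sum_n \alpha_n y_n\, w^\top \phi(x_n) = w^\top \sum_n \alpha_n y_n\, \phi(x_n) = w^\top w = \lVert w \rVert^2$

      which is nonzero in general; only the bare sum $\sum_n \alpha_n y_n$ vanishes.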

  • @TorqueFinAna
    @TorqueFinAna 4 years ago +2

    Awesome! Keep it up. Waiting to see more amazing videos like this from you.

  • @yavarjn2055
    @yavarjn2055 2 years ago

    Great video. Thanks for making it.

  • @kabylfx
    @kabylfx 2 years ago

    Such a good explanation, thanks!

  • @josebordon46
    @josebordon46 3 years ago

    What is phi of X?

  • @RanjithKumar-vu4og
    @RanjithKumar-vu4og 3 years ago

    You are simply awesome, bro!!!!!!!

  • @sairamreddypulikanti1153
    @sairamreddypulikanti1153 5 years ago

    Obsessed with your videos ! Many Thanks :)

  • @sandeepharshavardhan4560
    @sandeepharshavardhan4560 1 year ago

    At @4:57, how did you convert the maximization problem into a minimization problem? As far as I know, you should multiply by -1 if you want to convert a maximization problem into a minimization one.

  • @goldfishjy95
    @goldfishjy95 3 years ago

    What does w* represent? Thank you

  • @farooq8fox
    @farooq8fox 4 years ago

    Great video, but it's hard to follow. I've done Andrew's course, so I was able to make some sense of this.

  • @andreamarkos
    @andreamarkos 5 years ago +1

    How should one go about optimizing an SVM (decision procedure: SVM type, kernel type, shrinking, etc.)? And hey, you don't need to be "funny" at all costs. Your content is great and we, your audience, are mostly nerds... so please keep up the good work, because we need to understand the math behind the algos.

    • @kamakshim8001
      @kamakshim8001 5 years ago

      How can we prove SVM is better than PSO?

    • @andreamarkos
      @andreamarkos 5 years ago

      @@kamakshim8001 I dunno, master!!! I subscribed to your channel, not the other way around!
      Post a detailed video on this topic.

  • @spiritmoon3457
    @spiritmoon3457 6 months ago

    9:15 - the 3rd equation on the left looks incorrect

    • @HazotKleim
      @HazotKleim 5 months ago

      I think you are right: the sum Σ α_n y_n (w^T·φ(x_n)) isn't equal to 0. He made a mistake by treating w^T·φ(x_n) as a constant, which gave him w^T·φ(x_n) · Σ α_n y_n = 0 because Σ α_n y_n = 0; but indeed w^T·φ(x_j) ≠ w^T·φ(x_k) if you choose j and k properly.

  • @landonbuechner7330
    @landonbuechner7330 5 years ago +2

    Take my sub, this is a great video.

    • @CodeEmporium
      @CodeEmporium  5 years ago

      Thanks Piqcyl! Glad you got something out of it ;)

  • @shivadityameduri9973
    @shivadityameduri9973 2 years ago

    Explained very well!!!!

  • @ttiigogo5897
    @ttiigogo5897 2 years ago

    Hey, sorry if I missed it, but what is w (and why do we transpose it)?

    • @jagannathan1014
      @jagannathan1014 2 years ago +2

      I think each column of w represents the weights of x1 to xn in y = w1x1 + ...
      So when we transpose, we get w1 to wn in rows, and multiplying that with the column vector x yields the y = w1x1 + ... equation
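
      In symbols, with w and x as column vectors (standard notation, for clarity):

          $w^\top x = (w_1 \;\cdots\; w_n)\,(x_1, \ldots, x_n)^\top = \sum_{i=1}^{n} w_i x_i$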

    • @ttiigogo5897
      @ttiigogo5897 1 year ago

      @@jagannathan1014 thanks Jagan!

  • @laadee3669
    @laadee3669 5 years ago

    At 2:19, in the hyperplane point-vector distance d_H, should the x_0, in both cases, be just "x"?

    • @CodeEmporium
      @CodeEmporium  5 years ago

      When I use "x", it is a variable. x_0, on the other hand, is a _specific_ point. I use d_H with x_0 to show this as the distance of a _specific_ point from the hyperplane. If you replace both x_0s with x, that is also correct, and it gives the generalized formula. I just applied it to a specific instance where x = x_0.

    • @laadee3669
      @laadee3669 5 years ago

      @@CodeEmporium Yes, that makes sense, but x_0 (as a specific point) should also contain a y component inside, right? I mean, the non-vector formula on the left side intuitively uses x_0 and y_0 for 2D, but d_H contains x_0 and no y_0. This leads to my confusion: the y coordinate seems to be omitted from the calculation, since only x_0 is used, and it was previously used explicitly for one component of the 2D point...? So the x_0 in d_H is a vector of the x, y coordinates of a specific point in the dataset, OK?

  • @AaronDennis1111
    @AaronDennis1111 4 years ago

    Shades of Milhouse: "My mom says I'm cool!" No, great explanation. Thanks.

  • @tanchienhao
    @tanchienhao 2 years ago

    Nice video!!

  • @SaimKhan-xj5um
    @SaimKhan-xj5um 5 years ago

    Great tutorial, please keep it up

    • @CodeEmporium
      @CodeEmporium  5 years ago

      Definitely. Thanks for the support!

  • @95Bloulou
    @95Bloulou 4 years ago

    I love "The Math You Should Know"

  • @ivycao5394
    @ivycao5394 3 years ago

    ahhhhh I need to review linear algebra and calc

  • @anikar1302
    @anikar1302 3 years ago

    You're brilliant!

  • @anoriginalnick
    @anoriginalnick 3 years ago

    I keep hearing you say "five x" instead of "phi x".

  • @fethiourghi
    @fethiourghi 4 years ago

    You are the best, man!!

    • @CodeEmporium
      @CodeEmporium  4 years ago

      Thanks! I'm thinking of starting my own religion soon.

  • @glaswasser
    @glaswasser 3 years ago

    Why do you put a reverb on your voice? :/

  • @halimamounsit6838
    @halimamounsit6838 5 years ago +5

    My god, hard to understand

  • @marcinw6701
    @marcinw6701 1 year ago

    After the Lagrangian application, I can't believe that you can be funny ;), no possibility :-]

    • @CodeEmporium
      @CodeEmporium  1 year ago

      I is funny I swear ;) it’s just all this math talk makes me sound like a robot hahah

    • @marcinw6701
      @marcinw6701 1 year ago

      @@CodeEmporium Sure, I believe you; I just wanted to pull your leg a little :):). Very good math videos, please continue at this level!

  • @taylorchung1322
    @taylorchung1322 3 years ago

    Ahhhh, why couldn't my prof link me to this video

  • @Harish-ou4dy
    @Harish-ou4dy 3 years ago

    good video

  • @mahimasinghthakur6381
    @mahimasinghthakur6381 4 years ago

    Amazing

  • @fuzzyip
    @fuzzyip 5 years ago

    Is the indicator function described in this video at all? I watched the entire thing, but maybe I missed it or am not connecting it to something in the video.

  • @Tyokok
    @Tyokok 4 years ago

    Great video that clearly explains ML topics! Liked & subbed! Is this video part of a series? If so, can you post the links to the previous and next videos, or a playlist? Thanks a lot!

  • @tıbhendese
    @tıbhendese 1 month ago

    Hard to grasp. An excess of math notation without explanation

  • @shivamkaushik6637
    @shivamkaushik6637 1 year ago

    I failed to explain how SVM works in an interview. If only I had watched this video before.

    • @CodeEmporium
      @CodeEmporium  1 year ago

      Thanks for the compliment! I feel like this video went way too technical. I honestly wish I had made this explanation easier to grasp.

    • @shivamkaushik6637
      @shivamkaushik6637 1 year ago +1

      @@CodeEmporium Yes it did, but you don't need to remember every step; just the intuition is enough. And I got my intuition here.

  • @kop0164
    @kop0164 1 year ago

    So harrrddd

  • @sniper9143
    @sniper9143 5 years ago +2

    I’m lost haha

  • @harshmankodiya9397
    @harshmankodiya9397 3 years ago

    Felt a bit rushed in the math part.

  • @alecampos1725
    @alecampos1725 2 years ago

    An ACTUAL math example would've helped

  • @monishmathpal6774
    @monishmathpal6774 3 years ago

    It's too math-heavy; throwing out so many equations without providing the intuitive context makes it very much gibberish.

  • @blancalopez-bravogarcia1251
    @blancalopez-bravogarcia1251 4 years ago

    I got dizzy

  • @MuhammadArnaldo
    @MuhammadArnaldo 3 years ago

    I still don't understand lol

  • @tamojitmaiti
    @tamojitmaiti 1 year ago

    If you can't do better than putting a bunch of symbols on a PPT and explaining them in a rushed manner, you might as well not make an explainer video. This is the most jargon-y yet superficial video I have come across in a while; please take an intro to linear algebra class.

    • @CodeEmporium
      @CodeEmporium  1 year ago

      Yea. Made this about 4 years ago. At the time, the goal of making this video was more to teach myself the material for an exam. This is less of an "explainer" and more "my note-taking and prep work for the exam". So the explanation isn't great per se. I'll say though that my more recent videos are better at this, since now I am trying to explain concepts instead. Thanks for watching!

  • @sinefinehabitarevolo
    @sinefinehabitarevolo 4 years ago

    one of the worst explanations of SVMs I've seen.

    • @CodeEmporium
      @CodeEmporium  4 years ago

      Maybe. Made this while learning about the topic at the time. This wasn't made to teach, more to work out some math for a test. I have a very different teaching style these days. Maybe I'll remake it.

  • @BlazeRiver
    @BlazeRiver 2 years ago

    Additional SVM blog: medium.com/sfu-cspmp/a-practical-guide-to-support-vector-machines-svm-ccd6a4d4dd04