SVM Kernels : Data Science Concepts

  • Published 14 Oct 2024
  • A backdoor into higher dimensions.
    SVM Dual Video: • SVM Dual : Data Scienc...
    My Patreon : www.patreon.co...

COMMENTS • 121

  • @cassie8324
    @cassie8324 2 years ago +61

    You have been teaching me the fundamentals of SVMs better than my expensive professor at my university. Thank you, man.

  • @matattz
    @matattz 1 year ago +6

    Hey, I love that everything we learn in the video is already written on the board. It's so clean and compact, yet so much information. Just great, man.

  • @Gibson-xn8xk
    @Gibson-xn8xk 2 years ago +25

    I started learning SVM looking for material that would give an intuitive understanding of how this model works. By this time, I have already covered in depth all the mathematics behind it, and I have spent almost a month on it. It sounds like an eternity, but I can't feel confident until I have considered everything in detail. In my opinion, basic intuition is the most important thing in exploring a model, and you did this extremely well. Thank you for your time and work. For those who are new to this channel, I highly recommend you subscribe. This guy makes awesome content!

  • @norebar5848
    @norebar5848 1 year ago +2

    You are blowing my mind sir, thank you for this amazing explanation! No one else has been able to teach the subject of SVM this well.

  • @flvstrahl
    @flvstrahl 1 year ago +4

    By far the best explanation of kernels that I've seen/read. Fantastic job!

  • @samruddhideshmukh5928
    @samruddhideshmukh5928 3 years ago +10

    Amazing explanation!! Finally kernels are way clearer to me than they have been in the past.

  • @tgross2
    @tgross2 7 months ago

    Was stuck for 3 days on kernels looking at numerous lectures online. You just made it clear. Thank you so much!

  • @undertaker7523
    @undertaker7523 1 year ago +2

    I'd love to see a video on Gaussian Process Regression, or just Gaussian Processes in general! Thanks for this video - very helpful

  • @bztomato3131
    @bztomato3131 2 months ago

    When someone has tried hard to understand something, he can explain it much better than others. Thanks a lot.

  • @adelazhou5900
    @adelazhou5900 6 months ago

    The two paths diagram explains everything so clearly! Thank you!!

    • @ritvikmath
      @ritvikmath  6 months ago

      You're very welcome!

  • @DavidLolxe
    @DavidLolxe 1 year ago +1

    As someone who's searched everywhere for an explanation about this topic, this is the only good one out there. Thanks so much!

  • @twincivet9668
    @twincivet9668 1 year ago +5

    Note: to get the inner product after the transformation to be equivalent to (1 + x_i * x_j)^2, the transformation needs to include some constants. Specifically, the transformation should be [x1, x2] --> [1, sqrt(2)*x1, sqrt(2)*x2, x1^2, x2^2, sqrt(2)*x1*x2]. (A quick numerical check is sketched after this thread.)

    • @durgeshmishra-fn6kx
      @durgeshmishra-fn6kx 8 months ago

      Alternatively, just ignore the coefficients: the expansion will have a term 2*xi^(1)*xj^(1), so only consider xi^(1)*xj^(1) and drop the 2; term by term, the expansion will then match.
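
A quick numerical check of the mapping in the thread above, as a sketch (the example vectors and the helper name phi are mine, not from the video):

```python
import numpy as np

def phi(x):
    """Explicit 6-D feature map matching the kernel (1 + x . x')^2, per the comment above."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     x1 ** 2,
                     x2 ** 2,
                     np.sqrt(2) * x1 * x2])

xi = np.array([1.0, 2.0])   # made-up point in the original 2-D space
xj = np.array([3.0, -1.0])  # made-up point in the original 2-D space

kernel_value = (1.0 + xi @ xj) ** 2   # kernel trick: stays in 2-D
explicit_value = phi(xi) @ phi(xj)    # explicit transform to 6-D, then dot product

print(kernel_value, explicit_value)   # both print 4.0
```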

  • @1MrAND
    @1MrAND 4 months ago +1

    Dude, you are a legend. Finally I understood the power of Kernel functions. Thanks!

  • @DevanshKhandekar
    @DevanshKhandekar 3 years ago +2

    Great, man. After months of stumbling over convex optimization theory, KKT, and whatnot, this video made everything clear. Highly appreciated.👏👏

  • @moravskyvrabec
    @moravskyvrabec 1 year ago

    Dude, like the other commenters say, you are so good at just laying stuff out in plain English. Just for this and the prior video I'm going to hit subscribe...you deserve it!

  • @AndBar283
    @AndBar283 3 years ago +4

    A huge, big thank you for your hard work and for spreading the knowledge. Nice, brave explanation.

  • @obakasan31
    @obakasan31 1 year ago

    This is the clearest explanation of this topic I've seen so far. Thank you

  • @johnstephen8041
    @johnstephen8041 6 months ago

    Bro - thanks so much!!
    The way you teach, and your understanding, is crazy!

  • @guygirineza4001
    @guygirineza4001 3 years ago +1

    Might be one of the best videos I have seen on SVM. Crazy

  • @mehdi_mbh
    @mehdi_mbh 9 months ago

    You are amazing! Thank you so much for explaining the math and the intuition behind all of this. Fantastic teaching skills.

  • @CodeEmporium
    @CodeEmporium 3 years ago +5

    Good stuff

  • @gufo__4922
    @gufo__4922 2 years ago

    I found you by chance and it was a damn miracle; I will constantly check for new videos.

  • @alimurtaza4904
    @alimurtaza4904 1 year ago +1

    This explanation cleared up everything for me! Amazing work, I can’t thank you enough!

  • @qiguosun129
    @qiguosun129 2 years ago +1

    You summed up all the needed knowledge about SVM, and the discussion in this episode is more philosophical. Thank you very much for the course.

  • @danielbriones6171
    @danielbriones6171 2 years ago

    Been struggling to grasp this even after watching a bunch of YouTube videos. Finally understand! Must be the magic of the whiteboard!

  • @amaramar4969
    @amaramar4969 6 months ago

    Amazing, amazing, you are my true guru while I prepare for the university exam. You are far, far above my college professors, whom I barely understand. Hope you get your true due somehow. Subscribed already. 🙏

  • @SiddhantSethi02
    @SiddhantSethi02 1 year ago

    Hey man,
    Just wanted to commend you for your beautiful work on making some of the key complex fundamentals, such as this one, easy. :D

  • @zzzzzzzmr9759
    @zzzzzzzmr9759 1 year ago

    Very clear and well-organized explanation. Thank you!

  • @xKikero
    @xKikero 10 months ago

    This is the best video I've seen on this topic. Thank you, sir.

  • @Palapi_H
    @Palapi_H 1 year ago

    Can't thank you enough for explaining it so simply.

  • @anurondas3853
    @anurondas3853 2 years ago +1

    Much better than other YouTubers explaining the same concept.

  • @aalailayahya
    @aalailayahya 3 years ago +4

    Absolutely great!

  • @DeltaPi314
    @DeltaPi314 3 years ago +2

    Marketer studying Data Science here. Amazing content!

  • @uoohknk6881
    @uoohknk6881 2 years ago

    You spittin knowledge, GD! This needs to go viral

  • @alessandro5847
    @alessandro5847 2 years ago

    Such a great explanation. First time I get it after many attempts

  • @Ranjithbhat444
    @Ranjithbhat444 2 years ago

    Can’t get any better explanation than this 👌🏼

  • @BiKey91
    @BiKey91 10 months ago

    Dude, I hit like before even watching the vids because I know I won't be disappointed.

  • @yt-1161
    @yt-1161 3 years ago

    Your data science concepts video series is one of a kind

  • @Daily_language
    @Daily_language 5 months ago

    Clearly explained! Thank you!

  • @process6996
    @process6996 3 years ago +2

    Awesome explanation. Thank you!

  • @zwitter689
    @zwitter689 1 year ago

    You have done a very good job here - thank you! How about a list of the YouTube videos you have done? (I just subscribed.)

  • @hareemesahar6140
    @hareemesahar6140 5 months ago

    That's a great video. Thank you for making this.

  • @axadify
    @axadify 3 years ago

    That's the best video I have seen on kernels on YT! Great content.

  • @abdelrahmantaha9785
    @abdelrahmantaha9785 1 year ago

    very well explained, thank you!

  • @martian.07_
    @martian.07_ 1 year ago

    Very underrated video

  • @thecamelbackfiles3685
    @thecamelbackfiles3685 2 years ago

    Smart AND fit - these videos are like candy for my eyes and brain 🧠 😂

  • @morisakomasaru8020
    @morisakomasaru8020 3 years ago

    I finally understood what a kernel does! Thanks!

  • @pranavjain9799
    @pranavjain9799 3 years ago +1

    This is an incredible explanation. It helped me a lot. Thank you so much.

  • @harshitlamba155
    @harshitlamba155 2 years ago +2

    Hi Ritvik, this is an excellent explanation of the kernel trick concept. I have a doubt though. When we apply the 2-degree polynomial trick to the dot product of the two vectors, we apply the (a+b+c)^2 formula. Doing this introduces a factor of 2 for a few terms. Is it ignored since it will just scale the dot product? (A worked expansion is sketched after this thread.)

    • @durgeshmishra-fn6kx
      @durgeshmishra-fn6kx 8 months ago

      Ignore the coefficients: the expansion will have a term 2*xi^(1)*xj^(1), so only consider xi^(1)*xj^(1) and drop the 2; term by term, the expansion will then match.
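
For reference, a worked expansion of the kernel in the 2-D case (my notation: $x_i^{(k)}$ is the $k$-th component of $x_i$):

$$
\left(1 + x_i^\top x_j\right)^2
= 1 + 2\,x_i^{(1)}x_j^{(1)} + 2\,x_i^{(2)}x_j^{(2)}
+ \left(x_i^{(1)}x_j^{(1)}\right)^2 + \left(x_i^{(2)}x_j^{(2)}\right)^2
+ 2\,x_i^{(1)}x_i^{(2)}x_j^{(1)}x_j^{(2)}
$$

With the feature map $\phi(x) = \big(1,\ \sqrt{2}\,x^{(1)},\ \sqrt{2}\,x^{(2)},\ (x^{(1)})^2,\ (x^{(2)})^2,\ \sqrt{2}\,x^{(1)}x^{(2)}\big)$, each factor of 2 is absorbed by a $\sqrt{2}$, so $\phi(x_i)^\top\phi(x_j)$ reproduces every term above exactly and nothing actually has to be dropped.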

  • @manishbolbanda9872
    @manishbolbanda9872 3 years ago +13

    We get inner products of high-dimensional data without ever converting the data into the higher dimension; that's the conclusion I drew. Correct me if I am wrong. (See the code sketch after this thread.)

    • @ritvikmath
      @ritvikmath  3 years ago +4

      Yup, that's exactly the main point!
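
A minimal scikit-learn sketch of that point (the toy dataset and parameter values below are mine, not from the video; sklearn's polynomial kernel is (gamma * <x, x'> + coef0) ** degree, so degree=2, coef0=1, gamma=1 corresponds to the (1 + x_i . x_j)^2 kernel discussed here):

```python
import numpy as np
from sklearn.svm import SVC

# Made-up 2-D data that is not linearly separable (inside vs. outside a circle).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

# Training works entirely through kernel evaluations (1 + x_i . x_j)^2;
# the 6-dimensional feature vectors are never built explicitly.
clf = SVC(kernel="poly", degree=2, coef0=1.0, gamma=1.0)
clf.fit(X, y)
print(clf.score(X, y))
```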

  • @e555t66
    @e555t66 1 year ago

    Really well explained. If you want the theoretical concepts, you could try doing the MIT MicroMasters. It's rigorous and demands 10 to 15 hours a week.

  • @liat978
    @liat978 1 year ago

    This is the first time I get it! Thank you!

  • @hazema.6150
    @hazema.6150 2 years ago

    Masha'Allah man, like really Masha'Allah. This is just beautiful and truly a piece of gold. Thank you for this

  • @GAZ___
    @GAZ___ 4 months ago

    This is a good explanation, but I'm a bit confused about the terms in the bottom right corner. Did we reach those by squaring the parentheses and then expanding? That's going to result in a sum of terms, so what did we do next - take each term independently and treat it as its own dimension?

  • @victorsun9802
    @victorsun9802 3 years ago +1

    Amazing explanation! Thanks for making this series of videos on SVM. One question: can the kernel trick also be applied to other models like logistic regression? I saw some online posts saying kernels can be applied to logistic regression, but it seems very unpopular. I wonder if it's because logistic regression and other models can't really be written in terms of the dot product, which makes the computation expensive, or if there are other reasons? Thanks!

    • @durgeshmishra-fn6kx
      @durgeshmishra-fn6kx 8 months ago

      A little late, but still: it can be applied to any ML algorithm, for example kernelized linear regression and so on, to include higher-dimensional polynomial features instead of just the linear attributes. (A small sketch follows this thread.)
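
Along those lines, a hedged sketch of a kernelized regression (scikit-learn's KernelRidge; the data and parameter values are made up):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Made-up 1-D regression problem with a nonlinear target.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

# Ridge regression using the same polynomial kernel family,
# (gamma * <x_i, x_j> + coef0) ** degree, evaluated pairwise instead of
# explicitly building polynomial features.
model = KernelRidge(kernel="poly", degree=2, coef0=1.0, gamma=1.0, alpha=1.0)
model.fit(X, y)
print(model.predict(X[:5]))
```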

  • @asdadasasdsaasd
    @asdadasasdsaasd 3 months ago

    Nice explanation

  • @geogeo14000
    @geogeo14000 2 years ago

    Very insightful thanks a lot

  • @eyuelmelese944
    @eyuelmelese944 1 year ago

    This is amazing

  • @shaktijain8560
    @shaktijain8560 2 years ago

    Simply amazing 🤩

  • @eacd2743
    @eacd2743 1 year ago

    Great video man thanks a lot!

  • @kevinmeyer3863
    @kevinmeyer3863 3 years ago +1

    Hi Ritvik, in the end you have to sum the values in the 6-tuple to get the equivalent to the kernel output, right? (in order to get a proper scalar from the scalar product)

  • @softerseltzer
    @softerseltzer 3 years ago

    Your videos are of exquisite quality.

  • @jasonwang9990
    @jasonwang9990 2 years ago

    Amazing explanation!

  • @JOHNREINKER
    @JOHNREINKER 5 months ago

    this video is goated

  • @javiergonzalezarmas8250
    @javiergonzalezarmas8250 1 year ago

    Beautiful

  • @zahratebiyaniyan1592
    @zahratebiyaniyan1592 1 year ago

    You are GREAT!

  • @loveen3186
    @loveen3186 1 year ago

    amazing teacher

  • @lechx32
    @lechx32 1 year ago

    Thank you. I just imagined what a hard time I would have if I tried to grind through all of this math on my own. It is not a good idea for a beginner)

  • @thirdreplicator
    @thirdreplicator 2 years ago +1

    Ritvik for president!

  • @jalaltajdini7959
    @jalaltajdini7959 2 years ago

    Thanks, this was just what I wanted 😙

  • @mahdimoosavi2109
    @mahdimoosavi2109 2 years ago

    dude I love you

  • @manishbolbanda9872
    @manishbolbanda9872 3 years ago +1

    What do you mean by "inner products of the original data"?

  • @dungtranmanh7820
    @dungtranmanh7820 2 years ago

    Thank you very much ❤, you save us a lot of time and effort, hope I can work with you someday

  • @ireoluwaTH
    @ireoluwaTH 1 year ago

    Your videos rank pretty high on the 'binge-ability' matrix...

  • @mainakmukherjee3444
    @mainakmukherjee3444 1 year ago

    Why do we calculate the inner products? I understand the data points need to be transformed into higher dimensions so that they become linearly separable. But why do we compute a 6-dimensional space for that? Say we have a 2-D space (the original feature space); couldn't we just transform it to a 3-D space to get the job done?

    • @moatzmaloo
      @moatzmaloo 3 months ago

      That's correct; applying a polynomial kernel (quadratic, for example) will convert it to 3 dimensions, but the RBF kernel can convert it to infinite dimensions.
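
For reference, a small sketch of that last point: the RBF kernel is a one-line computation in the original space, even though its implicit feature space is infinite-dimensional (the example points and gamma value are made up):

```python
import numpy as np

def rbf_kernel(xi, xj, gamma=1.0):
    """RBF kernel exp(-gamma * ||xi - xj||^2); its implicit feature space is infinite-dimensional."""
    diff = xi - xj
    return np.exp(-gamma * np.dot(diff, diff))

xi = np.array([1.0, 2.0])   # made-up points in the original 2-D space
xj = np.array([3.0, -1.0])
print(rbf_kernel(xi, xj))   # similarity computed without any explicit transformation
```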

  • @damialesh2109
    @damialesh2109 2 years ago

    If we plugged the kernel function output (the similarity of our points in the higher-dimensional space) into the primal version of the cost function, i.e. used the similarity instead of the inputs themselves, would it be equivalent to solving the dual function? Just a lot more inefficient?

  • @MauroAndretta
    @MauroAndretta 3 months ago

    What is not clear to me: is the output of the kernel function a scalar?

  • @walidghazouani9427
    @walidghazouani9427 2 years ago

    What is xj exactly? Am I understanding it right if I consider it as the triangle data point and xi as the x data points...? So xj is like the feature variables within our data...?

  • @oscargonzalez-barrios9502
    @oscargonzalez-barrios9502 2 years ago

    Wow, thank you so much!

  • @arvinds7182
    @arvinds7182 1 year ago

    quality👏

  • @samuelrojas3766
    @samuelrojas3766 3 months ago

    I am still confused about how you developed the kernels in the first place. I know what they do but don't know how to obtain them without using the transformed space.

  • @nimeesha0550
    @nimeesha0550 3 years ago

    Great Job! Thank you soo much!!

  • @maged4087
    @maged4087 2 years ago

    I love you, man. I am a VT student. I wish I had known this a month ago :(

  • @Kirill-xp9jq
    @Kirill-xp9jq 3 years ago

    What is the purpose of finding the relationship between two separate vectors? Why can't you just take the polynomial of a vector with respect to itself (xi_1^T xi_1+c)^2? Wouldn't your number of terms just blow up when you have to find K(xa,xb) for every a and b in X?
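
For context on why every pair (a, b) shows up: the dual problem only ever needs the n-by-n Gram matrix of pairwise kernel values, not the kernel of a vector with itself alone. A minimal sketch using scikit-learn's pairwise helper (the data are made up):

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

# Made-up dataset: 5 points in the original 2-D space.
X = np.random.default_rng(3).normal(size=(5, 2))

# Gram matrix K[a, b] = (1 + x_a . x_b)^2 for every pair (a, b); shape (5, 5).
K = polynomial_kernel(X, degree=2, gamma=1.0, coef0=1.0)
print(K.shape)
```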

  • @Fat_Cat_Fly
    @Fat_Cat_Fly 10 months ago

    magic

  • @PF-vn4qz
    @PF-vn4qz 3 years ago

    Thank you!

  • @revycayolivia
    @revycayolivia 2 years ago

    Sorry, may I ask: what if we have 4 or 5 classes? How do we describe or use it then?

  • @mattkunq
    @mattkunq 2 years ago

    Can someone elaborate on how a kernel exactly does that? At the end of the day, we still need the higher-dimension data, no? I'm confused.

  • @murilopalomosebilla2999
    @murilopalomosebilla2999 3 years ago

    Thanks!

  • @iidtxbc
    @iidtxbc 3 years ago +1

    What does the 1 mean in the transformed matrix?

    • @ritvikmath
      @ritvikmath  3 years ago +2

      1 is just for the "intercept". It's like the "b" term in the linear equation "y=mx+b"

  • @ccuuttww
    @ccuuttww 3 years ago +1

    The Phi is often impossible to compute directly.
    If you don't mind, I can give you a simple kernel PCA example to help viewers,
    because this concept is hard to understand if you are new to these topics.

    • @ritvikmath
      @ritvikmath  3 years ago

      Sure! Any resources are always welcome.
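
In the spirit of that offer, a minimal kernel PCA sketch using scikit-learn (the ring-shaped toy data are made up; KernelPCA works from pairwise kernel values rather than an explicit Phi):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Made-up 2-D data: two concentric rings, which ordinary (linear) PCA cannot unfold.
rng = np.random.default_rng(2)
angles = rng.uniform(0, 2 * np.pi, size=200)
radii = np.where(rng.random(200) < 0.5, 1.0, 3.0)
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
X += 0.05 * rng.normal(size=X.shape)

# Principal components in the implicit RBF feature space, computed only from
# the n-by-n kernel matrix.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0)
X_kpca = kpca.fit_transform(X)
print(X_kpca[:3])
```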

  • @hussameldinrabah5018
    @hussameldinrabah5018 2 years ago

    Why do we add the 1 term to the dot product in the kernel?

    • @richardbloemenkamp8532
      @richardbloemenkamp8532 2 years ago

      He did not derive the kernel. He showed that if you use (1 + <x_i, x_j>)^2 as a kernel, then if you work it out, you get exactly the same terms as when you explicitly compute <phi(x_i), phi(x_j)> (except for a few factors of 2). If you took the kernel <x_i, x_j>^2, you would not get the same terms. Probably some clever person invented the kernel (1 + <x_i, x_j>)^2, but it is not explained here how he or she found it. Note there are also other kernel functions that work well for SVMs, but with different basis functions.

  • @nuclearcornflakes3542
    @nuclearcornflakes3542 11 months ago

    let him cook

  • @skelgamingyt
    @skelgamingyt 10 months ago

    Are you from India, bro?

  • @KernaaliKehveli
    @KernaaliKehveli 3 years ago

    Hey, I know your videos follow the current theme, but it would be great to have a projection matrix/subspace video at some point in the future! Keep up the great content.

  • @giantplantofweed6061
    @giantplantofweed6061 1 year ago

    That was well explained. Thank you.