Lecture 15 - Kernel Methods

  • Published 16 Dec 2024

COMMENTS • 83

  • @sakules 8 years ago +76

    "if I had gone to the Z space, you would have never heard from me again" haha, so great

  • @guptaachin 6 years ago +35

    This puts him in the category of artists like Prof. Gilbert Strang. I wonder how one develops such a skill for explaining the most intricate concepts so lucidly.

    • @clarkupdike6518 2 years ago +4

      Obviously one would have to know the material inside and out but also at both a theoretical and applied level. I'm guessing he has honed his technique through years of experience with students and is able to dial into that sweet spot of just enough, but not too much, detail--while providing lots of context and interlinking of related concepts. He only goes theoretical, and judiciously at that, when it adds insight... instead of grandstanding to show off. He is truly a master at bringing students along for the ride on complex subjects.

  • @amoughnieh 5 months ago

    This man answered a lot of lingering questions I had, even after reading multiple articles and papers and watching experts on YouTube.

  • @chilling00000 7 years ago +3

    Lectures 14, 15, and 16 are the best SVM videos on YouTube.

  • @Kristofcoddens 5 years ago +1

    By far the best explanation of kernels in SVMs I have found online.

  • @livnatje 11 years ago +3

    An amazing lecturer. His talks are perfectly clear, insightful and interesting. Thanks for putting this online!

  • @vasjaforutube1 3 years ago

    Professor Abu-Mostafa is such a cheerful person. His explanation is very clear, but I still have to pause the video every once in a while just to have a laugh.

  • @TheCpHaddock 7 years ago +2

    Sir you are one of the best professors ever! And not just in machine learning!

  • @saiprasad8311 10 years ago +8

    A very valuable addition to all the ML textbooks, in which one can easily drown in the mathematics involved. He is superb at eliciting the meaning of the mathematics without going into its complexities. Thanks for this course.

  • @alvincay100 7 years ago +1

    Just to reiterate what other commenters are saying: simply excellent. I had consulted multiple sources and could not wrap my head around the kernel trick until I found these lectures. Abu-Mostafa separates the important concepts from the mathematical details so that you can understand the ideas at hand; it is easy to fill in the details later once you understand the concepts.

  • @gip8507 10 years ago +1

    These lectures are really great. Exceptionally clear and fun to watch. Thank you so much for this.

  • @JulienAmelot 12 years ago +42

    Conclusion: What happens in the Z space stays in the Z space :P

  • @ahlammallak8853 9 years ago +39

    You are just amazing ^^ Many thanks, professor. I hope you can add more videos on further techniques such as PCA, ICA, and deep learning.

  • @zuodongzhou3334 9 years ago +4

    Excellent lecture! Best explanation I have ever seen.

  • @pt77780 9 years ago +30

    "... terms will be dropping like flies" lol

  • @kevinlin4157 8 years ago

    Thank you, Professor Yaser. This clearly explains how the kernel reduces computation.

  • @etiennedoumazane7556 3 years ago

    I think you just gave me a bit of intuition about what that mysterious kernel trick is... thanks!

  • @jakebruce1617 11 years ago +3

    This has been extremely helpful. Thanks for posting!

  • @vman049 11 years ago

    18:17 blew my mind. LOL'ed at 26:30. All the elements of a great lecture! Excellent!

  • @brainstormingsharing1309 4 years ago +1

    Absolutely well done and definitely keep it up!!! 👍👍👍👍👍

  • @michaelmellinger2324 2 years ago

    @45:30 Establish that the Z-space exists even if we don’t know what it is

  • @denoleyn 9 years ago +1

    Thank you for this great lecture. Everything is explained very, very clearly.

  • @5up5up 6 years ago +1

    I'm the happiest guy in the world: I finally understood what the freaking kernel trick is. Thank you! Thank you so much, sir!

  • @ShaymaaKhalifa 7 years ago +1

    This professor is brilliant!

  • @Darshanhegde 12 years ago +2

    And again at 41:42 Prof. Yaser says the third way to check that a kernel is valid is "who cares", re Mercer's theorem :)

  • @nicktgr152 10 years ago +2

    Fantastic presentation. Thank you very much.

  • @go2chayan 7 years ago +4

    I burst out laughing when he described a positive-semidefinite matrix in terms of a "sleeping vector" and a "standing vector" at 45:00.
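
For reference, the "sleeping vector" and "standing vector" at 45:00 are the two sides of the quadratic form in Mercer's condition. A minimal statement of that condition, paraphrased rather than quoted from the lecture:

```latex
% K is a valid kernel iff, for any points x_1, ..., x_N, the Gram matrix
% is positive semi-definite: the "sleeping vector - matrix - standing
% vector" product is never negative.
z^{\top}
\begin{bmatrix}
K(x_1, x_1) & \cdots & K(x_1, x_N) \\
\vdots      & \ddots & \vdots      \\
K(x_N, x_1) & \cdots & K(x_N, x_N)
\end{bmatrix}
z \;\ge\; 0
\qquad \text{for all } z \in \mathbb{R}^{N} .
```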

  • @jandal487 6 years ago

    Excellent introduction to ML. Thank you, professor :)

  • @siddharthsvnit 6 years ago +2

    1:02:30
    Can't the slack still be zero? Since 0*0 = 0, the condition is still satisfied.

    • @markh1462 6 years ago

      No, because you're also maximizing beta. So the only reason we would ever let beta be zero is when xi is nonzero.

    • @mementomori6734 5 years ago

      Mark H, I don't understand.
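
A sketch of the step behind this exchange, using the lecture's soft-margin formulation (the commentary in the block is mine):

```latex
% Stationarity of the Lagrangian in each slack variable gives
\frac{\partial \mathcal{L}}{\partial \xi_n} \;=\; C - \alpha_n - \beta_n \;=\; 0
\quad\Longrightarrow\quad \beta_n = C - \alpha_n ,
% and complementary slackness requires
\beta_n \, \xi_n \;=\; 0 .
% Both factors CAN be zero at once (0 * 0 = 0 does satisfy the condition),
% but whenever \xi_n > 0 the product forces \beta_n = 0, i.e. \alpha_n = C;
% and whenever \alpha_n < C we get \beta_n > 0, which forces \xi_n = 0.
```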

  • @Darshanhegde 12 years ago +6

    It's hilarious :) Prof. Yaser at 31:30 says: "If I had gone to the Z space (which is infinite here), you would have never heard from me again" :D

  • @ajayram198 6 years ago

    On slide 1 at 5:04 he talks about using the SVM with a nonlinear transform. Could someone explain the difference between h and H? (Complex h but simple H.)

  • @hson198 7 years ago

    Can you explain, at 1:03:18 (slide 19), why 0

  • @NicolaPiovesan 11 years ago +3

    24:00 "So by doing this operation, you have done an inner product in an infinite-dimensional space. Congratulations!" - LOL :D

  • @DAsiaView_ 4 years ago

    Awesome lecture, had my interest the entire time!

  • @kennethnavarro3496 3 years ago

    I am pretty sure the equation for b at 36:59 is wrong. When I solved it, I got almost the same thing, except instead of y_m I got 1/y_m in the same spot.
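
A one-line reconciliation of the two forms, assuming the lecture's setup where x_m is a margin support vector:

```latex
% Solving y_m (w^{\top} x_m + b) = 1 for b:
b \;=\; \frac{1}{y_m} - w^{\top} x_m \;=\; y_m - w^{\top} x_m ,
% since y_m \in \{-1, +1\} implies 1/y_m = y_m: the slide's form and
% the 1/y_m form are the same number.
```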

  • @fierydino9402 4 years ago

    Thank you for the precious lectures!!

  • @emademad4 5 years ago +1

    A question: he said that an objective function counting the number of misclassifications is NP-hard. Why? If that is so, then in the soft-margin SVM the amount of violation needs to be minimized, and to do that you need to check every sample for whether it violates the margin, so it looks like the same action he called NP-hard. If anyone knows where I'm wrong, I'd be glad to hear it.
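
One standard way to see the distinction raised here (my paraphrase, not a quote from the lecture): what is hard is not checking the samples but searching over (w, b) when the objective is a sum of discrete indicators, whereas the slack version is convex:

```latex
% Counting violations is a combinatorial 0-1 objective,
E_{0\text{-}1}(w, b) \;=\; \sum_{n=1}^{N}
  \mathbf{1}\bigl\{\, y_n (w^{\top} x_n + b) < 1 \,\bigr\},
% while the soft-margin SVM minimizes the convex surrogate
\sum_{n=1}^{N} \xi_n , \qquad
\xi_n \;=\; \max\!\bigl(0,\; 1 - y_n (w^{\top} x_n + b)\bigr),
% which measures how MUCH each sample violates rather than WHETHER it
% violates, and so can be handed to a quadratic programming solver.
```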

  • @aztmln 8 years ago

    Very useful lecture. Thanks, Prof! Would love to hear more.

  • @dergarten776 5 years ago

    Excellent demonstration of kernel methods!

  • @acgalt 3 years ago

    Excellent lecture. Congratulations!

  • @fengzhengkite 10 years ago +7

    You are excellent

  • @3198136 11 years ago

    Thank you so much, it's much better than the class I attended for Pattern Recognition!

  • @sarnathk1946 6 months ago

    You are Pedhanna (Big brother) from now on! Thank you!

  • @alisiena7009 8 years ago

    I have a problem: my dataset is very small, with values between [-1, 0], and the approximation target is between [0, 1], but the training performance never reaches the best solution. How can I solve this problem?

  • @michaelmellinger2324 2 years ago

    @39:50 The whole idea of the kernel is that you don’t visit the Z-Space
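
A minimal numerical check of that idea (not from the lecture; the data and names are mine). The second-order polynomial kernel is evaluated entirely in the X space, and the same value is then recomputed the long way with an explicit feature map into the Z space:

```python
import numpy as np

def kernel(x, xp):
    # Second-order polynomial kernel, computed without leaving X space.
    return (1.0 + x @ xp) ** 2

def phi(x):
    # Explicit feature map into the 6-dimensional Z space for 2-D inputs,
    # chosen so that phi(x) . phi(x') = (1 + x . x')^2.
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1**2, x2**2,
                     np.sqrt(2) * x1 * x2])

x  = np.array([0.3, -1.2])
xp = np.array([2.0,  0.4])

print(kernel(x, xp))     # inner product obtained without visiting Z space
print(phi(x) @ phi(xp))  # same value (up to rounding), the long way in Z space
```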

  • @dipanjans 8 years ago

    Thanks a lot Prof. Yaser.

  • @111rave 6 years ago +2

    You are a really good lecturer!!! "Okay" :D

  • @achronicstudent 2 months ago

    I am a rookie MSc student, and this is my first time learning these... uh... whatever these are. Everyone in the comments is saying "Woah, now I understand, great explanation", etc., and I am just looking at the screen feeling dumb.

  • @apeman5291 11 years ago

    56:51 - I don't know, that still looks pretty complicated.
    59:01 - Okay, that was pretty neat.
    59:29 - Jaw hit the floor.

  • @BreatheThePureSerene 8 years ago +1

    Brilliant teacher

  • @chaitanyatanwar8151 10 years ago +2

    Thanks, superb lecture!

  • @gcgrabodan 8 years ago

    If you take the derivative of the Lagrangian of the soft-margin SVM with respect to w, why does xi (the error) drop out?
    It should depend on w, doesn't it? I.e., different margins will have different errors. So it seems to me like a super complicated problem... Thanks for the help ;)

    • @Bing.W 7 years ago

      Different margins do have different errors, but in the optimization xi is a free variable, not a function of w. That's why xi does not depend on w. In other words, for the same hyperplane (fixed w), you can define different allowed errors (xi).
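
The algebra behind this thread, written out in the lecture's soft-margin notation (the comments in the block are mine):

```latex
% Full Lagrangian, with multipliers \alpha_n, \beta_n \ge 0:
\mathcal{L}(w, b, \xi, \alpha, \beta)
= \tfrac{1}{2}\, w^{\top} w \;+\; C \sum_{n=1}^{N} \xi_n
\;-\; \sum_{n=1}^{N} \alpha_n \bigl( y_n (w^{\top} x_n + b) - 1 + \xi_n \bigr)
\;-\; \sum_{n=1}^{N} \beta_n \xi_n .
% Each \xi_n is an independent optimization variable, not a function of w,
% so differentiating with respect to w leaves every \xi_n term untouched:
\nabla_w \mathcal{L} \;=\; w - \sum_{n=1}^{N} \alpha_n y_n x_n \;=\; 0 .
```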

  • @fatimatayeb677 5 years ago

    Great, dude. Keep it up.

  • @mohamedsalem9806 2 years ago

    This is brilliant!

  • @DiegoAToala 2 years ago

    Great lecture! Thank you.

  • @reinerwilhelms-tricarico344

    I like this a lot for his great clarity. Except this: when you get to "Then call your quadratic programming code to hand over the alphas", you may end up with a big can of worms, because nobody seems to know how to call any of the damn quadratic programming software that is available. There seem to be hundreds of packages around, usually with miserable documentation. You may be left rolling your own. 😁
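
For what it's worth, a minimal sketch of handing the dual from the lecture to one off-the-shelf QP package (cvxopt here; the function name and toy data are mine, and other solvers differ):

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_alphas(X, y, C=1.0):
    """Minimize (1/2) a'Qa - 1'a  s.t.  0 <= a <= C  and  y'a = 0,
    where Q = (y y') * K elementwise: the soft-margin SVM dual."""
    N = X.shape[0]
    K = (1.0 + X @ X.T) ** 2                  # second-order polynomial kernel
    P = matrix(np.outer(y, y) * K)            # quadratic coefficient Q
    q = matrix(-np.ones(N))                   # linear term: maximize sum of alphas
    G = matrix(np.vstack([-np.eye(N), np.eye(N)]))        # -a <= 0 and a <= C
    h = matrix(np.hstack([np.zeros(N), C * np.ones(N)]))
    A = matrix(y.reshape(1, -1))              # equality constraint y'a = 0
    b = matrix(0.0)
    solvers.options['show_progress'] = False
    sol = solvers.qp(P, q, G, h, A, b)
    return np.ravel(sol['x'])

# Toy data: two points per class.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
print(svm_dual_alphas(X, y, C=10.0))  # nonzero entries mark the support vectors
```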

  • @PradiptoDas-SUNYBuffalo 11 years ago

    56:53 - did not see that coming - why was he so proud of the equation?
    59:46 - memorial service for beta!
    Classic!

  • @Nestorghh 12 years ago

    Great class! Thanks a lot!

  • @Waynema8 11 years ago

    Great lecture !

  • @nishanthkanala 8 years ago

    Just Brilliant!!!

  • @明焕李 6 years ago +1

    Really well explained!

  • @sddyl 12 years ago

    Fabulous! Great Intuition!

  • @0xaugustus 7 years ago

    Absolute genius!

  • @nooneknown 5 years ago

    31:20

  • @jiewang7713 10 years ago

    Excellent "OKs"

  • @behrozkhan2000 12 years ago

    Ok!

  • @jackeown 5 years ago

    The previous lecture is very helpful for understanding this: ua-cam.com/video/eHsErlPJWUU/v-deo.html

  • @MohamedAtia 12 years ago

    ok?

  • @brainstormingsharing1309 4 years ago

    👍👍👍👍👍

  • @roknyakhavein5833 4 years ago

    We R in Z space.

  • @diegoiruretagoyenaoliveri6050 6 years ago

    OKAY

  • @robbertkarry4392 8 years ago

    like russian

  • @Nestorghh 11 years ago

    Haha, great.

  • @tianchi1989 11 years ago

    This is almost the best explanation of kernels I've found. But the tone he uses makes me really sleepy. :(

  • @petar29able 3 years ago

    I'm too stupid for this. Why am I here anyway?

  • @nha1481 7 years ago

    Who cares?

  • @AndyLee-xq8wq 1 year ago

    Great explanation!