(ML 15.1) Newton's method (for optimization) - intuition

  • Published 1 Aug 2024

COMMENTS • 39

  • @walete
    @walete 1 year ago

    The way you explain this is so helpful - love the comparison to the linear approximation. Thank you!

  • @AjaySharma-pg9cp
    @AjaySharma-pg9cp 6 years ago

    Wonderful video clarifying how Newton's method is used for finding the minimum of a function in machine learning.

  • @evilby
    @evilby 1 year ago +1

    man, perfect explanation. clear and intuitive!

  • @TheCoolcat0
    @TheCoolcat0 7 years ago

    Illuminating! Thank you

  • @johnjung-studywithme
    @johnjung-studywithme 1 year ago

    This was exactly what I needed, thank you!
    After learning Newton's method for finding the x-intercept, I was confused at first about how it was being used for minimization problems.
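
    The link being described, written out (a standard identity, not a quote from the video): Newton's root-finding update x_{t+1} = x_t - g(x_t)/g'(x_t), applied to g = f' (the function whose zero marks a stationary point), becomes

        x_{t+1} = x_t - f'(x_t) / f''(x_t)

    which is exactly the second-order update used for minimization.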

  • @amirreza08
    @amirreza08 8 months ago

    It was one of the best explanations, so informative and helpful. Thank you!

  • @abhinavarora6574
    @abhinavarora6574 8 years ago

    Your videos are awesome!

  • @rounaksinghbuttar9083
    @rounaksinghbuttar9083 2 years ago

    Sir, your way of explaining is really good.

  • @minivergur
    @minivergur 11 years ago +2

    This was actually quite helpful :)

  • @danielseita5552
    @danielseita5552 9 years ago

    Thank you for the video!

  • @kevin-fs5ue
    @kevin-fs5ue 4 years ago

    really appreciate your work :)

  • @anhthangyeu
    @anhthangyeu 12 years ago

    Thanks so much for posting!!

  • @nikpapan
    @nikpapan 9 years ago +1

    Thanks for posting these videos. They are quite helpful. So, to ensure that we minimize rather than maximize, is it sufficient to check that the Newton step has the same sign as (goes in the same direction as) the gradient? Is it OK to just flip the sign of the step if that's not the case? (My experiments seem to indicate it's not, but what should be done then?)
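
    One common safeguard for the situation this question describes, sketched below under assumed names (newton_minimize_1d and the example objective are illustrative, not from the video): check the sign of the second derivative and fall back to a plain gradient step whenever the curvature is not positive, since the raw Newton step would then point towards a maximum or saddle.

        def newton_minimize_1d(f_prime, f_double_prime, x0, lr=0.1, tol=1e-8, max_iter=100):
            """Newton's method for 1-D minimization with a gradient-descent fallback."""
            x = x0
            for _ in range(max_iter):
                g = f_prime(x)
                h = f_double_prime(x)
                if h > 0:
                    step = g / h      # Newton step: minimizes the local quadratic model
                else:
                    step = lr * g     # fallback: plain gradient step, always a descent direction
                x_new = x - step
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        # Example: minimize f(x) = x**4 - 3*x**2 + x starting from x = 2
        print(newton_minimize_1d(lambda x: 4*x**3 - 6*x + 1,
                                 lambda x: 12*x**2 - 6,
                                 x0=2.0))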

  • @AJ-et3vf
    @AJ-et3vf 1 year ago

    Great video. Thank you

  • @moazzammalik1410
    @moazzammalik1410 7 years ago +1

    Can you please make a video on the Levenberg method? There is no lecture available on this topic.

  • @aviraj017
    @aviraj017 8 years ago

    Thanks, very informative.

  • @ericashivers5489
    @ericashivers5489 1 year ago

    Amazing! Thanks

  • @MrPaulrael
    @MrPaulrael 11 years ago +4

    I am a PhD student and I will be using optimization methods in my research.

  • @lordcasper3357
    @lordcasper3357 4 months ago

    this is so good man

  • @rafaellima8146
    @rafaellima8146 10 years ago

    cool! ;D

  • @vijayd15
    @vijayd15 4 years ago

    damn good!

  • @mfurkanatac
    @mfurkanatac 1 year ago

    THANK YOU

  • @KKyrou
    @KKyrou 4 years ago

    very good. thank you

    • @dellpi3911
      @dellpi3911 3 years ago

      ua-cam.com/video/kxftUHk7NDk/v-deo.html

  • @max2buzz
    @max2buzz 11 years ago +1

    So here is the thing... my function is y = (guess^2 - x).
    Now I want to minimize y by refining guess,
    so I use the first-order update guess = guess - (guess^2 - x)/(2*guess),
    which is x_{t+1} = x_t - f(x_t)/f'(x_t).
    But if I take one more derivative,
    then x_{t+1} = x_t - f'(x_t)/f''(x_t), which gives guess = guess - (2*guess)/2 = guess - guess = 0.
    What should I do?
    The function is for finding a square root by Newton's method.
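
    What is happening there, in a small sketch (the function names below are illustrative, not from the video): guess^2 - x is the right function for Newton root finding (its zero is at sqrt(x)), but it is the wrong objective for Newton minimization, because as a function of guess its derivative 2*guess is zero at guess = 0, which is exactly the collapse described above. To phrase the square root as a minimization, use an objective whose minimum sits at sqrt(x), for example (guess^2 - x)^2.

        def sqrt_newton_root(x, guess=1.0, iters=20):
            # Root-finding form: solve f(guess) = guess**2 - x = 0
            # with guess <- guess - f(guess)/f'(guess).
            for _ in range(iters):
                guess = guess - (guess**2 - x) / (2 * guess)
            return guess

        def sqrt_newton_minimize(x, guess=1.0, iters=20):
            # Minimization form: minimize g(guess) = (guess**2 - x)**2
            # with guess <- guess - g'(guess)/g''(guess).
            for _ in range(iters):
                g1 = 4 * guess * (guess**2 - x)    # g'(guess)
                g2 = 12 * guess**2 - 4 * x         # g''(guess)
                guess = guess - g1 / g2
            return guess

        print(sqrt_newton_root(2.0))      # ~1.41421356
        print(sqrt_newton_minimize(2.0))  # ~1.41421356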

  • @akulsinator7680
    @akulsinator7680 3 years ago

    Thank you, you god among men.

    • @dellpi3911
      @dellpi3911 3 years ago

      ua-cam.com/video/kxftUHk7NDk/v-deo.html

  • @alexn2566
    @alexn2566 3 years ago +1

    Soooo, if 2nd order is faster than 1st order, why not try 3rd order too?

    • @rushipatel5241
      @rushipatel5241 3 years ago

      Hi @Alex N, as far as I know these methods are used in machine learning, where gradient descent is the classical algorithm for finding the minimum of a function (not always a zero). If you know the basics of ML you will be familiar with the loss function: we have to minimize it, which means we need its derivative to be zero. The gradient gives the direction in which the function changes fastest, but not the step magnitude, so first-order methods use a constant learning rate for that. Second-order methods use the curvature to get a step magnitude, so the point where the derivative is zero is reached in fewer iterations. A third-order method would end up finding the minimum of the derivative of the loss function, but we need the minimum of the loss function itself, so it would be useless. Hope this was helpful.
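
      The first-order vs second-order contrast in that reply, as a minimal sketch (the toy loss and variable names are illustrative, not from the video):

          # Toy 1-D loss L(w) = (w - 3)**2 + 1, minimized at w = 3.
          loss_grad = lambda w: 2 * (w - 3)   # L'(w)
          loss_hess = lambda w: 2.0           # L''(w), constant for a quadratic

          # First order: gradient descent, direction from the gradient,
          # magnitude from a hand-picked constant learning rate.
          w, lr = 0.0, 0.1
          for _ in range(100):
              w -= lr * loss_grad(w)
          print("gradient descent:", w)   # close to 3 after many steps

          # Second order: Newton's method scales the step by the curvature;
          # on an exactly quadratic loss it reaches the minimum in one step.
          w = 0.0
          w -= loss_grad(w) / loss_hess(w)
          print("newton:", w)             # exactly 3.0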

  • @AK-vb2dp
    @AK-vb2dp 6 years ago +13

    The video makes sense up until the point where the "pictorial" representation of the 2nd-order method comes in. That to me makes no sense whatsoever: the "pictorial" should not be the function itself but rather the 1st derivative of the function, and you apply Newton's method to that.

    • @Dupet
      @Dupet 4 years ago +1

      I think the visualization makes sense if we think about approximating the function f(x) by its second-order Taylor expansion around x_t. Taking the derivative of that expansion and setting it equal to zero leads to the formula of Newton's method for optimization. This operation is the same as minimizing the second-order approximation of the function at x_t, as depicted in the video.
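
      Written out (a standard derivation, not a quote from the video): the second-order Taylor model around x_t is

          q(x) = f(x_t) + f'(x_t) (x - x_t) + (1/2) f''(x_t) (x - x_t)^2

      Setting q'(x) = f'(x_t) + f''(x_t) (x - x_t) = 0 and solving for x gives

          x_{t+1} = x_t - f'(x_t) / f''(x_t)

      which is the Newton update for minimization, and also what you get by applying root-finding Newton to f'.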

  • @MolotovWithLux
    @MolotovWithLux 5 years ago

    #IntuitiveAlgorithm for finding the zero of a function

  • @fireboltthegod
    @fireboltthegod 10 years ago +1

    Shreyas Rane, now I see where you study from.

  • @albertyao6181
    @albertyao6181 5 years ago +2

    Compared to Andrew Ng's explanation, this one is hard to understand.

  • @chonssdw
    @chonssdw 11 years ago

    Thanks for the video. Could you please check your inbox? I have some further questions, thanks!!!