Levenberg-Marquardt Algorithm

  • Published Nov 16, 2024

COMMENTS • 52

  • @janplechaty1702
    @janplechaty1702 1 month ago +1

    I usually don't look at videos longer than 30 minutes but WOW.. I saw it whole and it was amazing. Many thanks to you!

  • @capsbr2100
    @capsbr2100 1 year ago +4

    Fantastic. You made a complex subject seem easier to understand by your way of explaining it in a clear, intuitive, illustrative and easy language. Thank you very much.

  • @ut971
    @ut971 2 years ago +5

    Thank you so much for uploading this. It means A LOT to every engineering student in different parts of the world who is struggling to understand this algorithm.

    • @mehran1384
      @mehran1384  2 years ago +2

      You are welcome. Happy that you like the video. Please share this Channel with your friends.

  • @thedanebear
    @thedanebear 11 months ago

    Incredibly intuitive and helpful. Easily the best way out there to spend an hour to better understand this topic

  • @gabrielperez1369
    @gabrielperez1369 2 years ago +2

    Excellent explanation! Your English is very good and easy to understand! Thank you very much!

  • @neoneo1503
    @neoneo1503 2 years ago +2

    Thanks for your explanation!! The Levenberg-Marquardt method balances convergence speed (Newton's method) with convergence robustness (gradient descent)

    • @mehran1384
      @mehran1384  2 years ago

      You are welcome. Happy to hear that you found the video useful. Please share this channel with your friends.

    • @neoneo1503
      @neoneo1503 2 years ago

      @@mehran1384 Yeah I will😊, Thanks!

  • @smchiew7708
    @smchiew7708 2 years ago +1

    Very clear explanation for the Levenberg-Marquardt algorithm. Thank you so much!

  • @martvald
    @martvald 9 months ago +1

    Thanks for the explanation. I will add that this is not LM though; this is a trust-region method using GD and NR, while LM is a trust-region-based method using GD and Gauss-Newton (GN). They look similar, but you would end up with x_(n+1) = x_n - (J^T*J + kI)^(-1)*J^T*E_n, where k is lambda, J is the Jacobian matrix, and E_n is the error vector (see GN). But other than that, the explanation of how the weights etc. are used is very descriptive.

    • @mauriciogonzalez1998
      @mauriciogonzalez1998 8 months ago

      Hi, where could I find an explanation this clear of the real LM method?

    • @eaglezhou1243
      @eaglezhou1243 4 months ago

      You are right. Strictly speaking, the LM method is a trust-region-based method that solves the nonlinear least-squares problem, in which the Hessian is approximated by J^T*J instead of the conventional second-order derivative, and the gradient term is formed from the Jacobian and the error vector.
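
The damped Gauss-Newton update described in this thread can be sketched in Python as follows (an illustrative sketch; the function and variable names are not from the video):

```python
import numpy as np

def lm_step(x, residual, jacobian, lam):
    """One damped Gauss-Newton (Levenberg-Marquardt) update:
    x_{n+1} = x_n - (J^T J + lam*I)^{-1} J^T E_n."""
    J = jacobian(x)                         # m x n Jacobian of the residuals
    E = residual(x)                         # m-dimensional error vector E_n
    A = J.T @ J + lam * np.eye(J.shape[1])  # damped normal-equations matrix
    return x - np.linalg.solve(A, J.T @ E)  # solve A, rather than invert it
```

With lam = 0 this reduces to a pure Gauss-Newton step; as lam grows, lam*I dominates the matrix and the step direction approaches (scaled) gradient descent, which is exactly the GD/GN blending the thread describes.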

  • @justman7656
    @justman7656 1 year ago +1

    Great and very clear explanation! Thank you so much for your work

  • @pedrohenriquesiscato9768
    @pedrohenriquesiscato9768 3 months ago +1

    Thank you for that video. Excellent explanation!

  • @mokhaladhasan6937
    @mokhaladhasan6937 1 year ago

    Many thanks to you, it was a very clear and simple explanation from a professional. My understanding of this algorithm was stuck at some points (such as GD 😊😊) until this video.

  • @skymanaditya
    @skymanaditya 3 years ago +1

    Great video. Explained with utmost clarity!

    • @mehran1384
      @mehran1384  3 years ago +1

      thanks. happy you liked it.

  • @vlado.erdman
    @vlado.erdman 3 years ago +1

    Great, easy to understand explanation. Thank you.

    • @mehran1384
      @mehran1384  3 years ago

      Happy that you found the video easy to follow. Please share this channel with your friends.

  • @polinba
    @polinba 1 year ago +1

    Thank you for the amazing video! It helped me a lot!

  • @priyachimurkar6058
    @priyachimurkar6058 2 years ago +1

    Nice Videos with excellent demonstration

    • @mehran1384
      @mehran1384  2 years ago

      Happy to hear that you liked the video. Please share this channel with your friends.

  • @shafqatjabeen1104
    @shafqatjabeen1104 1 year ago

    Thank you so much for this video. Very clear information

  • @minute_machine_learning5362
    @minute_machine_learning5362 9 months ago

    Great talk and highly informative.
    Can you provide the sheet that you are presenting?

  • @zheka47
    @zheka47 2 years ago +1

    Amazing explanations!

  • @workaccount6597
    @workaccount6597 3 years ago +1

    I have been binge-watching your videos about nonlinear equations and their solvers and optimizers. With every video I am getting more clarity. Your background in teaching students at different levels really helps you explain very clearly. One question though: do you think we (as in, viewers) could get the material from your videos?

    • @mehran1384
      @mehran1384  3 years ago

      Thanks. I am not sure if I understood your question about getting the material? Could you elaborate?

    • @workaccount6597
      @workaccount6597 3 years ago

      @@mehran1384 The OneNote notes are what I meant.

  • @kihoon2217
    @kihoon2217 2 years ago +1

    Great lecture

    • @mehran1384
      @mehran1384  2 years ago

      Thank you. Please share this channel with your friends.

  • @Chadwikj
    @Chadwikj 8 months ago

    Fantastic. Thank you!

  • @kleanthiskaramvasis9512
    @kleanthiskaramvasis9512 2 years ago +1

    Excellent presentation :) :)

    • @mehran1384
      @mehran1384  2 years ago

      Thank you. Please share this channel with your friends.

  • @RLDacademyGATEeceAndAdvanced
    @RLDacademyGATEeceAndAdvanced 2 years ago +1

    Excellent video

  • @tsalex1992
    @tsalex1992 1 year ago

    Thanks for the video! From my understanding, the most common heuristic for lambda is to have the increase factor be smaller than the decrease factor. However, I'm not sure that I understand the rationale, since we expect the algorithm to have more decreasing steps. At some point lambda will reach zero, or at least zero in the numerical sense - can you elaborate a bit more on this point?

  • @tshipmatic
    @tshipmatic 3 years ago +1

    Awesome video! Easy to follow along. One question: is there a way to choose the initial value of lambda, or would any value work?

    • @mehran1384
      @mehran1384  3 years ago

      Sorry for the late response. Since lambda changes by an order of magnitude each time, its initial value is not so critical. An imperfect lambda just slows down the entire convergence by only a few iterations.
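
The order-of-magnitude schedule mentioned in this reply can be sketched like this (an illustrative Python sketch; the factor of 10 is the conventional choice and the function name is hypothetical, not taken from the video):

```python
def update_lambda(lam, cost_new, cost_old, factor=10.0):
    """Adjust the LM damping parameter by an order of magnitude:
    shrink it after a successful step (behave more like Newton),
    grow it after a failed step (behave more like gradient descent)."""
    if cost_new < cost_old:
        return lam / factor  # step accepted: trust the model more
    return lam * factor      # step rejected: damp more heavily
```

Because lambda moves multiplicatively, even a poor initial value is corrected within a few iterations, which is why the reply says the starting value is not critical.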

  • @danielhelmanlee5126
    @danielhelmanlee5126 3 years ago +1

    Is this least squares and levenberg-marquardt algorithm? I see things like Jacobian matrix in other resources...

    • @mehran1384
      @mehran1384  3 years ago

      This is the standard LM algorithm. It has least squares as a part of it.

  • @wwefan9391
    @wwefan9391 2 years ago +1

    Thank you for this great video, but I'm just wondering: in the MATLAB code for the gradient descent method, why did you divide by norm(temp)? What's the purpose of it?

    • @mehran1384
      @mehran1384  2 years ago

      You are welcome. Dividing by the norm gives a unit vector (direction only) for the motion, and its magnitude is determined by alpha.

    • @wwefan9391
      @wwefan9391 2 years ago

      @@mehran1384 I'm a bit weak in linear algebra, so I'm not sure what alpha is. Also, norm(temp) is taking the norm of a 2×2 matrix, correct? Does dividing by the norm of a matrix also give us a unit vector, like when dividing by the norm of a vector? Because I thought taking the norm of a matrix gives us info about how big its elements are.
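
The normalization being discussed in this thread can be sketched in Python (an illustrative sketch for a gradient stored as a vector; the names temp and alpha echo the thread but are not the actual code from the video):

```python
import numpy as np

def gd_step(x, gradient, alpha):
    """Gradient-descent step with a normalized direction: dividing the
    gradient by its norm keeps only the direction, so the step size
    alpha alone controls how far each iteration moves."""
    temp = gradient(x)                       # raw gradient at x
    return x - alpha * temp / np.linalg.norm(temp)
```

On the follow-up question: the norm of a matrix measures its overall size, so dividing a matrix by its norm rescales it but does not yield a "unit direction" in the same geometric sense as for a vector; the normalization above is meaningful when the gradient is treated as a vector.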

  • @ИльяЧугунов-д1с

    That's great!

  • @DongIncheonExpress
    @DongIncheonExpress 2 years ago

    Great work! Thank you for the good explanation. Can I get your OneNote lecture notes that you showed us in this lecture?

  • @mohammadsheikhpour6612
    @mohammadsheikhpour6612 2 years ago

    thank you so much

  • @gianmarcoalarcon6185
    @gianmarcoalarcon6185 3 years ago +1

    Nice Video!!!

  • @sephgeodynamics9246
    @sephgeodynamics9246 2 years ago +1

    thank you

    • @mehran1384
      @mehran1384  2 years ago

      You are welcome. Please share this channel with your friends.