Gauss-Newton algorithm for solving non-linear least squares explained

  • Published on 16 Oct 2019
  • Gauss-Newton algorithm for solving non-linear least squares explained.
    ros-developer.com/2019/10/17/g...

COMMENTS • 17

  • @familywu3869
    @familywu3869 3 months ago

    Thank you for the clear explanation too!

  • @Clem000
    @Clem000 3 years ago +3

    Thanks for the clear explanation!

  • @muhannedalsaif153
    @muhannedalsaif153 1 year ago

    Love how simple and clear the explanation is!

  • @anirudhk2659
    @anirudhk2659 1 year ago

    Thank you so much for the neat explanation

  • @mojtabakhosravi7248
    @mojtabakhosravi7248 4 years ago +1

    Excellent!

  • @wolpumba4099
    @wolpumba4099 1 month ago

    *Summary*

    *Problem:*
    * *(**0:00**)* Fitting non-linear functions to data (e.g., camera projection models).
    * *(**1:40**)* Minimize the distance between the values predicted by the function and the actual data points (the residuals).

    *Approach:*
    1. *(**2:16**)* *Define Residual Function:*
       * `r(x) = g(x) - y`
       * `g(x)`: non-linear function with parameters `x`
       * `y`: observed data
    2. *(**2:25**)* *Minimize Sum of Squared Residuals:*
       * Minimize `F(x) = ||r(x)||^2` (the sum of squared residuals).
    3. *(**6:09**)* *Iterative Optimization (Gauss-Newton Method):*
       * Start with an initial guess for the parameters `x`.
       * Approximate `r(x)` with a first-order Taylor series (linearization).
       * Calculate the Jacobian matrix `J` (derivatives of `r(x)` w.r.t. `x`).
       * Update the parameters iteratively:
         `x_(n+1) = x_n - (J^T * J)^(-1) * J^T * r(x_n)`
       * This step finds the minimum of the approximating quadratic function.
    4. *(**8:20**)* *Repeat* until convergence (a minimum is reached).

    *Key Points:*
    * Gauss-Newton is an iterative method for finding the minimum of a sum of squared residuals.
    * It uses a linear approximation of the non-linear function at each iteration.
    * The Hessian matrix in the update equation is approximated by `J^T * J`, neglecting a smaller term.
    * The algorithm iteratively refines the parameter estimates until a minimum is reached.

    I used Gemini 1.5 Pro to summarize the transcript. (A minimal code sketch of the update rule follows below.)
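    Below is a minimal, illustrative Python sketch of the update rule summarized above. The exponential model `g(x; t) = x[0] * exp(x[1] * t)`, the synthetic data, and all names are assumptions chosen for demonstration; they are not taken from the video.

    ```python
    import numpy as np

    def gauss_newton(g, jac, x0, t, y, iterations=20):
        """Iterate x_(n+1) = x_n - (J^T J)^(-1) J^T r(x_n) on residuals r(x) = g(x) - y."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iterations):
            r = g(x, t) - y                           # residuals
            J = jac(x, t)                             # Jacobian of r w.r.t. the parameters x
            step = np.linalg.solve(J.T @ J, J.T @ r)  # solve (J^T J) d = J^T r
            x = x - step
        return x

    # Illustrative model: g(x; t) = x[0] * exp(x[1] * t)
    def g(x, t):
        return x[0] * np.exp(x[1] * t)

    def jac(x, t):
        # Columns: partial derivatives of the residual w.r.t. x[0] and x[1].
        return np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])

    t = np.linspace(0.0, 1.0, 50)
    y = 2.0 * np.exp(1.5 * t)                              # noiseless data generated with x = (2.0, 1.5)
    print(gauss_newton(g, jac, x0=[1.0, 1.0], t=t, y=y))   # should converge to approx. [2.0, 1.5]
    ```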

  • @hamedmajidian4451
    @hamedmajidian4451 4 years ago

    Thanks Behnam, it was a very lucid explanation.

    • @RoboCodeHub
      @RoboCodeHub  4 years ago

      I'm glad that you found it useful.

  • @balajilakshminarayan1593
    @balajilakshminarayan1593 3 years ago

    excellent

  • @heissheis
    @heissheis 2 years ago

    Does defining the objective function as r = g(x) - y make any difference compared to r = y - g(x)?

    • @matthewjames7513
      @matthewjames7513 2 years ago

      No, because the residuals get squared: (g(x) - y)^2 = (y - g(x))^2.
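      A quick numeric check of this claim, using illustrative random arrays (not from the video): flipping the sign of the residual also flips the sign of its Jacobian, so both the objective `||r||^2` and the Gauss-Newton step `(J^T J)^(-1) J^T r` come out identical under either convention.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      J = rng.standard_normal((10, 3))   # Jacobian of r = g(x) - y (illustrative values)
      r = rng.standard_normal(10)        # residuals under the convention r = g(x) - y

      # Same step under r = g(x) - y and under r = y - g(x) (both r and J change sign).
      step_a = np.linalg.solve(J.T @ J, J.T @ r)
      step_b = np.linalg.solve((-J).T @ (-J), (-J).T @ (-r))

      print(np.allclose(step_a, step_b))       # True: identical parameter update
      print(np.isclose(r @ r, (-r) @ (-r)))    # True: identical objective value
      ```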

  • @conimar7940
    @conimar7940 2 years ago

    7:02 Why is the second term so small that it's negligible? Thanks.

    • @RoboCodeHub
      @RoboCodeHub  2 years ago

      You can think of it as approximating a curve at a particular point by a line. At that point, over a small vicinity, the curve and the line are very close, almost equal. You could include higher-order terms (second order, third, ...) to make a better approximation, but their effect is small, since the line is already close enough to the curve.
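      For reference, a hedged sketch of the algebra (assuming the "second term" at 7:02 is the second-order Hessian term mentioned in the summary above; the notation is mine, not from the video):

      ```latex
      % With F(x) = \tfrac{1}{2}\lVert r(x)\rVert^2 and J the Jacobian of r, the exact Hessian is
      \nabla^2 F(x) = J^\top J + \sum_i r_i(x)\,\nabla^2 r_i(x)
      % Near a good fit the residuals r_i(x) are small, so the sum is dominated by J^\top J;
      % Gauss-Newton neglects it and uses the approximation H \approx J^\top J.
      ```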

  • @amirhosseintalebi6770
    @amirhosseintalebi6770 1 year ago

    Where has the 1/2 gone?

    • @RoboCodeHub
      @RoboCodeHub  1 year ago +1

      A constant factor plays no role in finding the min/max.
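      A one-line sketch of that point, in notation assumed from the summary above (not taken from the video):

      ```latex
      % Scaling the objective by a constant does not move its minimizer:
      \arg\min_x \tfrac{1}{2}\lVert r(x)\rVert^2 = \arg\min_x \lVert r(x)\rVert^2
      % and when differentiating, the 1/2 cancels the 2 produced by the chain rule:
      \nabla\!\left(\tfrac{1}{2}\lVert r(x)\rVert^2\right) = J^\top r(x)
      ```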

  • @jackstarkey76
    @jackstarkey76 1 year ago +1

    Personally I cannot stand this video explanation. You are not really explaining, you are saying what you have written down. Maybe I'm just stupid but I don't follow at all.

    • @muhannedalsaif153
      @muhannedalsaif153 1 year ago +2

      Do not be harsh on yourself... he explained it very well, but you need to have a good background in linear algebra and calculus.