Conjugate Gradient Method

  • Published 17 Jan 2025

COMMENTS • 49

  • @zhangkin7896 · 3 years ago · +8

    It's 2021 and your class is still great. ❤️

  • @pigfaced9985 · 1 year ago

    You are a life saver! I have an assignment that's related to this method and I understood it pretty well! THANK YOU!

  • @louisdavies5367 · 8 years ago · +7

    Thank you for making this video!! It's really helpful with my studies :)

  • @mari0llly · 6 years ago · +22

    Good video, but you used the Laplace operator instead of the nabla operator for the gradient.
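
    For reference, the standard notation (not from the video itself): the
    gradient is written with nabla, while the Laplacian is the divergence
    of the gradient:

        \nabla f = (\partial f/\partial x_1, \dots, \partial f/\partial x_n)
        \Delta f = \nabla \cdot \nabla f = \sum_i \partial^2 f/\partial x_i^2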

  • @from-chimp-to-champ1 · 2 years ago

    Good job, Priya, elegant explanation!

  • @alexlagrassa8961 · 5 years ago · +3

    Good video, clear explanation.

  • @valentinzingan1151 · 5 years ago · +11

    The first method you described is called Steepest Descent, not Gradient Descent. Gradient Descent is the simplest one; Steepest Descent is an improvement on it, exactly as you described.
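
    For context, the standard distinction (not stated in the video): both
    methods step along the negative gradient; they differ in how the step
    length is chosen. For the quadratic f(x) = (1/2) x^T A x - b^T x the
    exact line search has a closed form:

        x_{k+1} = x_k - \alpha \nabla f(x_k)                             % gradient descent: fixed \alpha
        \alpha_k = \arg\min_{\alpha > 0} f(x_k - \alpha \nabla f(x_k))   % steepest descent: exact line search
        \alpha_k = \frac{r_k^T r_k}{r_k^T A r_k}, \quad r_k = b - A x_k  % closed form in the quadratic case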

  • @edoardostefaninmustacchi2232 · 3 years ago · +2

    Excellent stuff. Really helped.

  • @songlinyang9248 · 6 years ago · +3

    Very very clear and helpful, thank you very much

  • @dmit10 · 1 year ago · +1

    Another interesting topic is Newton-CG and what to do if the Hessian is indefinite.

  • @kandidatfysikk86 · 7 years ago · +1

    Great video!

  • @Koenentom · 4 years ago

    Great video. Thanks!!

  • @aboubekeurhamdi-cherif6962 · 9 years ago · +1

    Please note that x* is the minimizer and the minimum.

  • @ryanmckenna2047 · 7 months ago

    What is a TOD?

  • @pablocesarherreraortiz5239 · 2 years ago

    Thank you very much.

  • @frankruta4701 · 3 years ago

    Is alpha_k a matrix or a scalar quantity?

    • @frankruta4701 · 3 years ago

      Scalar... I just didn't flatten my residual (which was a matrix in my case).
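
      For reference, in the standard CG iteration the step length is a
      ratio of two inner products, hence always a scalar:

          \alpha_k = \frac{r_k^T r_k}{p_k^T A p_k}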

  • @narvkar6307 · 11 years ago · +1

    How is the value of alpha1 updated?

  • @lenargilmanov7893 · 1 year ago

    What I don't understand is: why use an iterative process if we know there's exactly one minimum? Just set the gradient to 0 and solve the resulting system of equations, no?

    • @mrlolkar6229 · 1 year ago

      Those methods are used when you have, say, 10^6+ equations (for example in the Finite Element Method). With them you solve much faster than by setting all the derivatives equal to 0. And even though it seems you would need every step to reach the minimum, that's not true: even with that humongous number of equations you are usually close enough to the minimum well before the end that you can stop early, and that's why these methods are so powerful. (See the note after this thread.)

    • @lenargilmanov7893 · 1 year ago

      @@mrlolkar6229 Yeah, I kinda figured it out now.
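
      To make the connection concrete (standard reasoning, not from the
      thread): for the quadratic f(x) = (1/2) x^T A x - b^T x, setting the
      gradient to zero *is* the linear system CG solves,

          \nabla f(x) = A x - b = 0 \iff A x = b,

      so the real choice is direct versus iterative. A dense direct solve
      (e.g. Cholesky) costs O(n^3) time and O(n^2) memory, while each CG
      iteration needs only one matrix-vector product, which is why sparse
      systems with n ~ 10^6 are feasible and early stopping still gives a
      good approximation.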

  • @yubai6549 · 6 years ago

    Many thanks!

  • @bigsh0w1 · 9 years ago

    Please can you share the code?
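
    No code is posted in the thread, but here is a minimal sketch of the
    textbook conjugate gradient iteration for solving A x = b with A
    symmetric positive definite. It is not the author's implementation;
    the function name and parameters (conjugate_gradient, tol, max_iter)
    are illustrative.

        # Textbook linear CG sketch; assumes A is symmetric positive definite.
        import numpy as np

        def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
            n = b.shape[0]
            x = np.zeros(n) if x0 is None else x0.astype(float)
            r = b - A @ x   # residual; also -gradient of (1/2)x^T A x - b^T x
            p = r.copy()    # first search direction is the residual
            rs_old = r @ r
            for _ in range(n if max_iter is None else max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)      # scalar step length (exact line search)
                x += alpha * p                 # move along the search direction
                r -= alpha * Ap                # update residual without recomputing A @ x
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:      # residual small enough: converged
                    break
                p = r + (rs_new / rs_old) * p  # beta_k = rs_new / rs_old
                rs_old = rs_new
            return x

        # Example: solve 4x + y = 1, x + 3y = 2  ->  (1/11, 7/11)
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))  # ~[0.0909 0.6364]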

  • @exploreLehigh · 3 years ago

    gold

  • @Aarshyboy96 · 4 years ago

    I don't understand how you updated alpha1.

  • @beeseb · 1 year ago

    🍵

  • @aboubekeurhamdi-cherif6962 · 9 years ago · +1

    Sorry! Something was missing in my last comment. Please note that x* is the minimizer and NOT the minimum.

    • @kokori100 · 9 years ago

      +Aboubekeur Hamdi-Cherif Yep, I noticed the same.

    • @yashvander · 4 years ago

      Hmm, that means x1 = x0 + x*, right?
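
      A brief note on the notation (standard usage, not stated in the
      thread): x* is the limit the iterates converge to, not an increment
      added at each step,

          x_{k+1} = x_k + \alpha_k p_k, \qquad \lim_{k \to \infty} x_k = x^*,

      so x_1 = x_0 + \alpha_0 p_0 rather than x_0 + x*.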

  • @xruan6582 · 5 years ago · +5

    Lacks detailed explanation and is hard to understand.

  • @bellfish188 · 9 months ago

    low volume

  • @AdityaPrasad007 · 5 years ago · +5

    Wow, interesting how she made one technical video and stopped. Lost motivation, I guess?

    • @nickp7526 · 4 years ago · +2

      Have you not seen Bear and Simba, dumbass?

    • @AdityaPrasad007 · 4 years ago

      @@nickp7526 I said technical video, my dear chap.

    • @ethandickson9490 · 4 years ago · +1

      @@AdityaPrasad007 Think he was joking bruh

    • @AdityaPrasad007 · 4 years ago

      Really? I'm pretty bad at sarcasm... @Nick, was it a joke?

    • @PriyaDeo · 4 years ago · +6

      I made the video for a class. I guess I didn't expect it to get so many views and comments, especially for people to keep watching it after some years. But if there's a lot of interest I can make another video. Do you have any suggestions for topics?

  • @DLSMauu · 8 years ago · +1

    cute lecture :P

  • @Marmelademeister · 5 years ago · +2

    It's okay... It's too slow at the beginning and too fast at the end. And why would you start with gradient descent? I would think that most people studying CG are already miles beyond gradient descent, have seen Newton's method, and are now studying Newton-like methods.

  • @MyName-gl1bs · 3 years ago

    I like fud

  • @erickdanielperez9463 · 6 years ago

    You can't solve every problem graphically. With more than 3 variables it isn't possible to visualize the solution without abstract mathematics. Multidimensional problems, e.g. chemical ones (pressure, temperature, flux, composition, and rate), can only be handled with math, not with graphs. Use your mathematics and numbers.

  • @vijayakrishnanaganoor9335 · 4 years ago

    Great video!