See the MATLAB Code of Steepest Descent Method (This theory lecture)
ua-cam.com/video/JfREfGtFTLA/v-deo.html
10:55 S1 is [-1; 1]
Dear Dr. Garg, first of all, thanks for the video. I am a PhD student in the USA. I solved the steepest descent method the way you showed and got an acceptable result. But my professor did not give me marks, and he is asking me for some PDFs, links, or proofs that support this formula, especially the formula you showed for getting lambda. I could not find PDFs supporting exactly the same formula on the internet. Can you please help me with this?
Check any numerical optimization book
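For a self-check, the closed-form step length can also be verified numerically: for a quadratic f, setting d f(X + lambda*S)/d(lambda) = 0 with S = -grad f(X) gives lambda = (S^T S)/(S^T H S). Below is a minimal Python sketch; the quadratic f = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2 and starting point (0, 0) are assumptions chosen to match the S1 = [-1, 1] and lambda = 1 values quoted in other comments, not something fixed by this thread:

```python
# Exact step length for a quadratic: lambda = (S^T S) / (S^T H S).
# Assumed example: f(x1, x2) = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2, X0 = (0, 0).

def grad(x1, x2):
    # partial derivatives of f
    return (1 + 4*x1 + 2*x2, -1 + 2*x1 + 2*x2)

H = [[4, 2], [2, 2]]  # constant Hessian of the quadratic

def exact_lambda(s):
    # from d/dlambda f(X + lambda*S) = 0 for quadratic f
    Hs = (H[0][0]*s[0] + H[0][1]*s[1], H[1][0]*s[0] + H[1][1]*s[1])
    return (s[0]*s[0] + s[1]*s[1]) / (s[0]*Hs[0] + s[1]*Hs[1])

x0 = (0.0, 0.0)
g = grad(*x0)
s = (-g[0], -g[1])                 # steepest-descent direction S1 = -grad f
lam = exact_lambda(s)
x1 = (x0[0] + lam*s[0], x0[1] + lam*s[1])
print(s, lam, x1)                  # S1 = (-1, 1), lambda = 1, X1 = (-1, 1)
```

This reproduces the first-iteration figures mentioned elsewhere in the comments; the derivation itself appears in any numerical optimization text under "exact line search".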
S1 = [-1, 1], right? But you put [1, 1].
How does that come about!?
Thanks, final exam in 4 hrs. Very helpful ❤
The way you are explaining is amazing. Voice is soft.
Very clear video, the method is excellently explained, the logic is good and the example is also good.
Thanks for liking the content.
Hi, dear professor. Your teaching is very eloquent and instructive. Thanks a lot.
While running this code, why am I getting this error?
Error in ==> gradient at 59
g = zeros(size(f),class(f)); % case of singleton dimension
Error in ==> Untitled at 5
grad = gradient(func);
Help needed...
Sir, I need the secant method for optimization problems. Kindly provide it.
Sir, in the lambda formula, shouldn't the value of S1 have been [-1, 1]?
Watch the Matlab code of this steepest descent method ... It is uploaded now
Yes... even if you take [-1, 1], the answer comes out the same.
Lambda 1 is 1; there was some problem with the transpose.
Can we find lambda for such a function using the same method, f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2, since the H matrix is not numeric in this case?
No... as this function is not quadratic. For such a function, find X1 = X0 + lambda*S and hence f(X1). Then differentiate this f with respect to lambda, i.e., set df/d(lambda) = 0, and solve for lambda... I hope it is clear now.
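A numeric sketch of that recipe in Python; the starting point X0 = (0, 0) and the ternary line search are illustrative choices (the line search stands in for solving df/d(lambda) = 0 analytically):

```python
# Line search on the non-quadratic f from the question:
# f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2.
# Form X1 = X0 + lambda*S and pick lambda minimizing phi(lambda) = f(X1).

def f(x, y):
    return (x*x + y - 11)**2 + (x + y*y - 7)**2

def grad(x, y, h=1e-6):
    # central finite differences; an analytic gradient works just as well
    return ((f(x + h, y) - f(x - h, y)) / (2*h),
            (f(x, y + h) - f(x, y - h)) / (2*h))

x0 = (0.0, 0.0)
g = grad(*x0)
s = (-g[0], -g[1])                       # steepest-descent direction

def phi(lam):                            # f restricted to the search line
    return f(x0[0] + lam*s[0], x0[1] + lam*s[1])

lo, hi = 0.0, 1.0                        # bracket for lambda (phi is unimodal here)
for _ in range(100):                     # ternary search for the minimizer
    m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
    if phi(m1) < phi(m2):
        hi = m2
    else:
        lo = m1
lam = (lo + hi) / 2
x1 = (x0[0] + lam*s[0], x0[1] + lam*s[1])
print(lam, x1, f(*x1))                   # f(X1) drops well below f(X0) = 170
```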
@DrHarishGarg I was not expecting such a quick reply... thanks a lot, sir... You are really doing a great job and making students' lives easy... really appreciated...
My pleasure always... Keep watching and sharing the videos with other students too, so that they can also learn easily... Thanks!
Sir, about the lambda value you calculated in iteration 1: if you solve the S-transpose and S1 matrices above, shouldn't it come out to 2? Please check once and tell me whether it is right or wrong; by my calculation it is wrong. Apart from that, your concepts are absolutely brilliant.
I have one question: what happens if you want to use the next term in the Taylor series at 7:42? The gradient represents the first-order derivative and the Hessian the second order, but how would you do the third-order one? And what about the (delta X)^n terms where n is larger than 2? When n is 2, for example, we took the transpose of delta X times delta X (with the Hessian in the middle, because otherwise the matrix multiplication wouldn't work), but how would a third delta X be multiplied?
For a quadratic function, the third term (corresponding to the third derivative) is always zero... However, for a non-quadratic function, you can write the next point in terms of lambda and then find f(new point). Then take the derivative of that function with respect to lambda and solve for lambda (according to the condition for a maximum or minimum).
I hope this clears it up for you.
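To spell out the term the question asks about (standard multivariate Taylor notation, not shown in the video): the n-th term contracts an n-th-order tensor of partial derivatives with n copies of delta X, so the "third delta X" multiplies a third-order tensor rather than a matrix:

```latex
f(X + \Delta X) = f(X) + \nabla f(X)^{T}\,\Delta X
  + \frac{1}{2!}\,\Delta X^{T} H(X)\,\Delta X
  + \frac{1}{3!}\sum_{i,j,k}
      \frac{\partial^{3} f}{\partial x_i\,\partial x_j\,\partial x_k}
      \,\Delta x_i\,\Delta x_j\,\Delta x_k + \cdots
```

For a quadratic f, every third and higher partial derivative vanishes, which is why the expansion stops exactly at the Hessian term.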
See the quadratic form lecture... New lecture uploaded
ua-cam.com/video/6jjTLDX_JOk/v-deo.html
@DrHarishGarg Thanks!
Watch the Matlab code of this steepest descent method ... It is uploaded now
If in this question the step size is given as 0.5... what does it mean? Is it the value of lambda?
Thank you for your help
Thank you sir! Many of the lectures are super helpful!
Glad to hear that.... Keep watching
Watch the Matlab code of this steepest descent method ... It is uploaded now
wow!
If you want to understand the topic, listen to the end.
See the quadratic form lecture... New lecture uploaded
ua-cam.com/video/6jjTLDX_JOk/v-deo.html
Watch the Matlab code of this steepest descent method ... It is uploaded now
Hello sir!
Can I find the lambda value by using this: argmin over λ >= 0 of f(Xi − λ*∇f(Xi))?
Yes... you can... but it will take a lot of computational steps/time.
Great, Thanks for lecture
Extremely helpful. Thanks a lot, sir. And sir, here you used the analytical method (to determine lambda) and didn't use the other methods mentioned (like Newton, secant, etc., which are perhaps only used to calculate the optimum lambda). Are these methods called exact or inexact line search? I mean, I am confused about the methods.
Newton and secant methods are used to find an approximate value of lambda... Since this is a quadratic function, you can easily find the value of lambda using the analytical method to get the exact answer...
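As an illustrative Python sketch of the inexact route, the secant method can be applied to phi'(lambda) = 0, where phi(lambda) = f(X0 + lambda*S). The quadratic f = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2 with X0 = (0, 0) and S1 = (-1, 1) is assumed here so the result can be compared against the exact lambda = 1; the starting guesses 0 and 2 are arbitrary:

```python
# Secant iteration on the line-search derivative dphi/dlambda = 0.

def f(x1, x2):
    return x1 - x2 + 2*x1*x1 + 2*x1*x2 + x2*x2

x0, s = (0.0, 0.0), (-1.0, 1.0)      # starting point and direction S1 = -grad f

def dphi(lam, h=1e-6):
    # derivative of phi(lambda) = f(X0 + lambda*S) by central difference
    p = lambda t: f(x0[0] + t*s[0], x0[1] + t*s[1])
    return (p(lam + h) - p(lam - h)) / (2*h)

a, b = 0.0, 2.0                      # two starting guesses for the secant method
for _ in range(20):
    fa, fb = dphi(a), dphi(b)
    if abs(fb) < 1e-6:               # slope is (numerically) zero: done
        break
    a, b = b, b - fb*(b - a)/(fb - fa)
lam = b
print(lam)                           # approaches the exact lambda = 1
```

On a quadratic, phi' is linear in lambda, so the secant step lands on the exact answer almost immediately; on non-quadratic functions it only approximates lambda, which is the exact-versus-inexact distinction being asked about.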
@DrHarishGarg OK. Since in the example you took a quadratic function, you went for the exact value of lambda (and if we go with Newton, secant, etc., the inexact ones, we will get an approximate value of lambda, so we may need more iterations than 6 [here we got the optimal value within at most 6 iterations]).
And sir, are those methods (Newton, secant, quasi-Newton) present in your playlist? I can't find them, though.
Thanks, sir. With respect ❤️
Yes, they are also available... See the playlist "MATLAB code Numerical Methods".
See the quadratic form lecture... New lecture uploaded
ua-cam.com/video/6jjTLDX_JOk/v-deo.html
Watch the Matlab code of this steepest descent method ... It is uploaded now
ThankYou!
Hello, in which book and chapter can I find the equations shown in the video? Thank you so much.
Exactly which book, I don't know... because I prepared it from my experience in teaching... but you can see the book link given in the description of the video.
Thanks for watching
@DrHarishGarg Thank you so much :)
Watch the Matlab code of this steepest descent method ... It is uploaded now
If the Hessian matrix contains x and y terms, what should I do?
Then substitute the values of x and y (the critical point) into the Hessian matrix... This is already explained in the Hessian matrix lecture... you may watch that lecture too.
Thank you, Sir, for this clear and concise video.
See the quadratic form lecture... New lecture uploaded
ua-cam.com/video/6jjTLDX_JOk/v-deo.html
Watch the Matlab code of this steepest descent method ... It is uploaded now
Very elaborate video
Glad you like it!.... Keep watching
In S1^T · S1 (Step 2), the S1 value is wrong. It should be [-1; 1], not [1; 1].
See the quadratic form lecture... New lecture uploaded
ua-cam.com/video/6jjTLDX_JOk/v-deo.html
Watch the Matlab code of this steepest descent method ... It is uploaded now
Thank you sir.
Welcome
Watch the Matlab code of this steepest descent method ... It is uploaded now
Which book is this content taken from??
Sir, I can't find the "Univariate method" and "Powell's method". Could you please drop the links?
Univariate methods are Golden section, Fibonacci search ... Both are available... See from the playlist
NonLinear Programming Techniques: ua-cam.com/play/PLO-6jspot8AKg6Pov9fDHd3ys5_JlyUXv.html
@DrHarishGarg Sir, I can't find Powell's method.
Powell's method has not been covered to date.
@DrHarishGarg Oh OK, thanks. BTW, your lectures are awesome. 👌
Thanks... Keep watching and sharing with others too
Thank you Sir. This lecture is very helpful.
You are most welcome
Watch the Matlab code of this steepest descent method ... It is uploaded now
Thank you! Very clear lecture.
Thank you for this:)
Thank you
See the quadratic form lecture... New lecture uploaded
ua-cam.com/video/6jjTLDX_JOk/v-deo.html
Watch the Matlab code of this steepest descent method ... It is uploaded now
Sir, I am in the 2nd semester of M.Tech (Production). Can you please provide the solutions for classical optimization?
Please make a video on the quasi-Newton method also.
See the quadratic form lecture... New lecture uploaded
ua-cam.com/video/6jjTLDX_JOk/v-deo.html
Watch the Matlab code of this steepest descent method ... It is uploaded now
Sir, please share the answers to the practice questions so that we can check our answers. Regards
Sure... I will. In the meantime, you can watch the MATLAB code of the steepest descent method and run the problem to verify your answers step by step...
Thank you, sir.
My pleasure.... Keep watching other videos too
Great! Appreciated
Thanks .... My pleasure. Keep watching other content too and share with others.
Watch the Matlab code of this steepest descent method ... It is uploaded now
How do you compute the Hessian matrix in iteration 2?
Basically, just differentiate twice.
For d^2f/dx1^2 you differentiate with respect to x1 two times, and for d^2f/dx2^2 with respect to x2 two times.
Likewise, for d^2f/dx1dx2, first differentiate with respect to x1, then x2; for d^2f/dx2dx1, the reverse.
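The double-differentiation recipe above can be checked numerically. Here is a Python sketch using finite differences on the quadratic from the video, whose analytic Hessian is [[4, 2], [2, 2]]; since a quadratic's Hessian is constant, iteration 2 reuses the same matrix, and the test point (-1, 1) below is arbitrary:

```python
# Each Hessian entry is a second partial derivative, estimated here by
# central differences. f is the quadratic x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2.

def f(x1, x2):
    return x1 - x2 + 2*x1*x1 + 2*x1*x2 + x2*x2

def hessian(x1, x2, h=1e-4):
    d11 = (f(x1 + h, x2) - 2*f(x1, x2) + f(x1 - h, x2)) / h**2
    d22 = (f(x1, x2 + h) - 2*f(x1, x2) + f(x1, x2 - h)) / h**2
    d12 = (f(x1 + h, x2 + h) - f(x1 + h, x2 - h)
           - f(x1 - h, x2 + h) + f(x1 - h, x2 - h)) / (4*h**2)
    return [[d11, d12], [d12, d22]]   # d21 = d12 for smooth f

H = hessian(-1.0, 1.0)                # e.g. at the iterate X1 = (-1, 1)
print(H)                              # close to [[4, 2], [2, 2]]
```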
I have prepared for my exam. Now I am ready for it.
How did you get the gradient?
Partial derivative of the function with respect to the variables
Watch the Matlab code of this steepest descent method ... It is uploaded now
You should watch kk sir ❤
❤️❤️
thanks
🙏🙏🙏
See the quadratic form lecture... New lecture uploaded
ua-cam.com/video/6jjTLDX_JOk/v-deo.html
Watch the Matlab code of this steepest descent method ... It is uploaded now
Is there no Turkish language option?
My pleasure