Power Method with Inverse & Rayleigh
- Published Aug 6, 2024
- Discussion of Eigenvalues & Eigenvectors, the Power Method, the Inverse Power Method, and the Rayleigh Quotient, with a brief overview of Rayleigh Quotient Iteration. Example code hosted on GitHub github.com/osveliz/numerical-...
Chapters
0:00 Title Card
0:12 Terminology
0:37 Eigenvalue Example
1:04 Power Method
1:29 Power Method Example
2:33 Notes on Power Method
2:50 Inverse Power Method
3:21 Inverse Power Method Example
4:12 Rayleigh Quotient
4:54 Solve using Determinant
5:15 Inverse Power Method with Shift
5:34 Inverse Power Method with Shift Example
6:12 Rayleigh Quotient Iteration
6:37 Summary
7:01 Thank You
#PowerMethod #RayleighQuotient #NumericalAnalysis
Just found your channel. Great content, absolutely love it! I was completely baffled by that golden ratio eigenvalue for a moment haha, keep it up!
Wow, you taught me more in 7 minutes than I learned in a 2-hour lecture... Why did my professor feel the need to overcomplicate things so much? Amazing video!
Thank you so much, found this video after browsing for quite a while. Helped a lot
Thank you very much! Finally understood how to implement the inverse power method.
dude this covers 4 lectures from Trefethen & Bau... covered in just 7 mins... amazing stuff!
Great video, man! It helped me a lot! Wish you all the best!
You are the best 😭😭😭😭. Your teaching is like spoon-feeding. It helped me a lot, thank you.
Extremely Useful Video. Thank You!
Thank you from the bottom of my heart!
Extremely helpful. Thank you.
Thank you a lot! Your video helped me very much!
this is the best explanation ever
wonderful video!
Not all heroes wear capes
good work
thank u
Great video! I have a question: how do you know when you have reached the highest eigenvalue, i.e., how do we know it has converged? For example, why didn't you stop after 8 iterations in the first example?
At 2:26 with b_16 = [0.618034; 1], if you kept iterating then b_17 = [0.618034; 1], which is the same, so you know you can stop. This was of course using 6 decimal digits of accuracy. You could use a higher level of precision and the process would take longer. Optionally, you can stop the iterations when the distance between consecutive b's is less than some small epsilon. That is normally measured with the L2-norm:
||b_(k+1) - b_k|| < eps.
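As a rough illustration, here is a minimal Python sketch of the power method with that stopping test (my own sketch, not the code from the linked repo), using the 2×2 matrix from the video whose dominant eigenvalue is the golden ratio:

```python
def mat_vec(A, b):
    """Multiply matrix A (list of rows) by vector b."""
    return [sum(a * x for a, x in zip(row, b)) for row in A]

def power_method(A, b, eps=1e-6, max_iter=100):
    """Power method with infinity-norm scaling.
    Stops when ||b_{k+1} - b_k||_inf < eps."""
    lam = None
    for _ in range(max_iter):
        Ab = mat_vec(A, b)
        lam = max(Ab, key=abs)              # entry with the largest magnitude
        b_next = [x / lam for x in Ab]
        if max(abs(p - q) for p, q in zip(b_next, b)) < eps:
            return lam, b_next
        b = b_next
    return lam, b

A = [[0.0, 1.0], [1.0, 1.0]]   # eigenvalues: the golden ratio and its conjugate
lam, v = power_method(A, [1.0, 1.0])
print(round(lam, 6), [round(x, 6) for x in v])  # 1.618034 [0.618034, 1.0]
```

The scaling factor lam doubles as the eigenvalue estimate once b has settled.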
I thought normalizing meant to take the sum of squares, not taking the maximum of the two vector elements?
There are different types of norms, which have different uses. The L-2 norm (or Euclidean norm) is the square root of the sum of squares you're referring to. The L-1 norm just sums the absolute values. In this case the L-Infinity norm (or max norm) is the largest absolute value among the elements.
Ah, I see. Thank you! @OscarVeliz
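For concreteness, the three norms from that reply can be written out in a few lines of plain Python (my sketch, not from the video):

```python
def norm_1(v):
    """L-1 norm: sum of absolute values."""
    return sum(abs(x) for x in v)

def norm_2(v):
    """L-2 (Euclidean) norm: square root of the sum of squares."""
    return sum(x * x for x in v) ** 0.5

def norm_inf(v):
    """L-infinity (max) norm: largest absolute value."""
    return max(abs(x) for x in v)

v = [3.0, -4.0]
print(norm_1(v), norm_2(v), norm_inf(v))  # 7.0 5.0 4.0
```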
How does the norm yield the maximum of the vector? Are you using the infinity norm?
The infinity norm is also known as the max norm.
Hello, can I request a video about subspace and Lanczos methods for finding natural frequencies and mode shapes of matrices larger than 3×3?
I can add this to the queue. It will take a while to get to.
What would be the terminating condition for the Inverse Power Method? Picking the infinity norm of (b_{k+1} - b_k) does not seem to suffice for the example demonstrating the inverse power method.
That is the usual terminating condition and something more sophisticated might look at the changes in lambda's, instead of b's, or detect cycles in b's.
@OscarVeliz Could you take the dot product of each adjacent pair of iterate eigenvectors, normalized to unit length, until the change in direction is adequately small? Select your threshold for "small" and test for when the angular change in the eigenvector's direction falls below it.
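The angular test suggested in that comment could be sketched like this (hypothetical helper name and an arbitrary threshold, just to make the idea concrete):

```python
def cos_angle(u, v):
    """Cosine of the angle between u and v (each normalized to unit length)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

# Two consecutive iterates pointing in nearly the same direction:
b_prev, b_next = [0.618, 1.0], [0.618034, 1.0]
converged = 1.0 - abs(cos_angle(b_prev, b_next)) < 1e-6
print(converged)  # True
```

Taking the absolute value of the cosine makes the test insensitive to sign flips between iterates.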
.75 speed and he sounds just like Toby from the Office
What happens if, in the Rayleigh Quotient Iteration, I use the normal power iteration instead of the inverse iteration?
With the Power Method (even if you use the Rayleigh Quotient to compute the eigenvalue) it will head towards the dominant eigenvector. You have to use RQI or the Inverse Power Method to find non-dominant eigenstuff.
At 5:31 you mentioned the inverse power method can be used to compute an eigenvector given an eigenvalue. Can you elaborate on how that can be done? I cannot find any readings on this. Thanks!
Certainly. Use your given eigenvalue as your value for μ, then take a guess for the eigenvector and iterate until the vector converges.
@OscarVeliz I see! Thank you very much!!
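That procedure can be sketched in Python for the 2×2 example (my sketch: Cramer's rule stands in for a general linear solve, and μ is the given eigenvalue, rounded to 6 digits so that A - μI stays invertible in floating point):

```python
def solve_2x2(M, b):
    """Solve M x = b for a 2x2 system via Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    x0 = (b[0] * M[1][1] - M[0][1] * b[1]) / det
    x1 = (M[0][0] * b[1] - b[0] * M[1][0]) / det
    return [x0, x1]

def inverse_power_shift(A, mu, b, eps=1e-6, max_iter=50):
    """Inverse power method with shift: repeatedly solve (A - mu*I) x = b,
    scaling x by its largest-magnitude entry, until the vector settles."""
    M = [[A[0][0] - mu, A[0][1]], [A[1][0], A[1][1] - mu]]
    for _ in range(max_iter):
        x = solve_2x2(M, b)
        m = max(x, key=abs)                 # infinity-norm style scaling
        x = [xi / m for xi in x]
        if max(abs(p - q) for p, q in zip(x, b)) < eps:
            return x
        b = x
    return b

A = [[0.0, 1.0], [1.0, 1.0]]   # eigenvalues 1.618034 and -0.618034
v = inverse_power_shift(A, -0.618034, [1.0, 0.0])
print(v)  # close to [1.0, -0.618034]
```

Because μ sits very near an eigenvalue, the solve amplifies the corresponding eigenvector direction and the iteration converges in only a couple of steps.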
How do you calculate ||A*b_k||? Is it the maximum norm (largest absolute value) or just the largest value? Some people calculate ||A*b_k|| as the 2-norm. Is that right?
This is using the L-Infinity (or Max Norm).
What is a suitable value for a close guess? And what about a zero eigenvalue, is it the least eigenvalue?
For a suitable guess you can use Gershgorin circle theorem which I might make a video on in the future. If you need a very simple heuristic use the values in the diagonal of the matrix. An eigenvalue of zero would be the least in magnitude but you can have values less than zero (like in my example).
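The Gershgorin heuristic mentioned in that reply can be sketched in a few lines (my own illustration, not code from the repo):

```python
def gershgorin_discs(A):
    """Gershgorin circle theorem: every eigenvalue lies in at least one disc
    centered at a diagonal entry A[i][i], with radius equal to the sum of the
    absolute values of the off-diagonal entries in row i."""
    discs = []
    for i, row in enumerate(A):
        radius = sum(abs(x) for j, x in enumerate(row) if j != i)
        discs.append((row[i], radius))
    return discs

A = [[0, 1], [1, 1]]
print(gershgorin_discs(A))  # [(0, 1), (1, 1)]
# Both eigenvalues (1.618034 and -0.618034) lie in [-1, 1] union [0, 2].
```

The disc centers (the diagonal entries) are exactly the simple heuristic starting guesses mentioned above.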
Does it also work with complex eigenvalues? Complex eigenvalues are a special case since there is no real eigenvalue and no real eigenvector, just a rotation in R^n.
Actually, the power method shows that A^k x tends towards the eigenvector in R^n.
Since complex eigenvalues come in conjugate pairs it usually will oscillate or converge into a repeating sequence if you start with real numbers. You can instead try a starting vector with complex numbers.
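Python's built-in complex type lets the same iteration run in complex arithmetic unchanged. As a toy illustration (my own example matrix, chosen so that one complex eigenvalue, 2j, strictly dominates the other, 1):

```python
def mat_vec(A, b):
    """Multiply matrix A (list of rows) by vector b."""
    return [sum(a * x for a, x in zip(row, b)) for row in A]

def power_method_complex(A, b, steps=60):
    """Power method in complex arithmetic; the scaling factor lam
    approaches the dominant (complex) eigenvalue."""
    lam = 0
    for _ in range(steps):
        Ab = mat_vec(A, b)
        lam = max(Ab, key=abs)      # scale by the largest-magnitude entry
        b = [x / lam for x in Ab]
    return lam, b

A = [[2j, 0], [1, 1]]              # eigenvalues 2j and 1; 2j dominates
lam, v = power_method_complex(A, [1.0, 1.0])
print(lam)  # approaches 2j
```

Note that for a real matrix the complex eigenvalues come in conjugate pairs of equal magnitude, which is exactly the oscillating case described in the reply above; this sketch sidesteps that by using a complex matrix with a single dominant eigenvalue.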
Please make a video on Schur decomposition
Will add to the queue
4:06 I don't understand, why did you factor out the smaller element -0.6? I thought the norm was supposed to be the maximum value of the resulting vector, which is 1?
That step is trying to find an eigenvalue given an eigenvector. Think back to Av=λv. Or use the Rayleigh quotient.
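The Rayleigh quotient mentioned in that reply is easy to write out directly (a minimal sketch of the standard formula v^T A v / v^T v, not code from the repo):

```python
def rayleigh_quotient(A, v):
    """Rayleigh quotient v^T A v / v^T v: for a vector v, the scalar
    that best approximates an eigenvalue in the least-squares sense."""
    Av = [sum(a * x for a, x in zip(row, v)) for row in A]
    numerator = sum(x * y for x, y in zip(v, Av))
    denominator = sum(x * x for x in v)
    return numerator / denominator

A = [[0.0, 1.0], [1.0, 1.0]]
print(round(rayleigh_quotient(A, [0.618034, 1.0]), 6))  # 1.618034
```

Feeding it the converged eigenvector from 2:26 recovers the golden-ratio eigenvalue directly, with no factoring step needed.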
Can some eigenvalue methods be used to find roots of polynomials?
Every eigenvalue method can be used to solve polynomials once you create the corresponding Companion Matrix. The example matrix from this video with solutions from x^2-x-1=0 wasn't a coincidence. I planned to make more eigen-videos but other things have taken priority in the queue.
@OscarVeliz Yet, for some reason, when I use the companion matrix for the function x^3 - 1, it doesn't seem to converge...
The trouble with the Power Method is that it is very sensitive to the starting values. If you use Octave's eig() function on the Companion Matrix for x^3-1 it finds all three roots, but it is likely using a decomposition, or another method, to solve it.
@OscarVeliz Small but important question: what are the terminating conditions of the power iteration, the inverse power iteration, and the Rayleigh quotient iteration?
Great question! Norm of the change in step, ||b_{k+1} - b_k|| < eps, or the change in mu (or lambda) < eps, or a maximum number of iterations.
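The Companion Matrix construction discussed in this thread can be sketched as follows (my sketch; the last-column convention used here is one of several equivalent layouts):

```python
def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0,
    given coeffs = [c_0, c_1, ..., c_{n-1}].
    Its eigenvalues are exactly the polynomial's roots."""
    n = len(coeffs)
    C = [[0.0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1.0            # ones on the subdiagonal
    for i in range(n):
        C[i][n - 1] = -coeffs[i]     # negated coefficients in the last column
    return C

# x^2 - x - 1: eigenvalues are the golden ratio and its conjugate,
# the example matrix from the video.
print(companion([-1.0, -1.0]))       # [[0.0, 1.0], [1.0, 1.0]]

# x^3 - 1: eigenvalues are the three cube roots of unity, all of
# magnitude 1, which is why the power method fails to converge on it.
print(companion([-1.0, 0.0, 0.0]))
```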
It's not 100% clear, but I think one must be consistent in the manner of normalization.
Wish it had a proof
When does the power method fail?
With a matrix that has no dominant eigenvalue it can't pick between them (imagine two eigenvalues equal in magnitude but opposite in sign). Or if your eigenvalues are complex numbers but you start iterating with real ones (you'll need to start with complex numbers in that case).
Is this narrated by an Indian?