omg, you are really great at explaining things by using only a pen and a whiteboard, without the need for fancy digital animation, this is definitely what I call a REAL "Education"!!!
Does fancy digital animation make education worse for you? It offers the insights of experts in those subjects directly to you, without your having to study a subject for 20 years to understand it in depth.
This is definitely a great explanation of eigendecomposition.
I kind of got into this rabbit hole trying to understand singular value decomposition, and this video helped me understand that as well.
Thanks for your help understanding this.
Lmao I'm in the exact same rabbithole :D
holy shit I guess I'm not alone lmao
+1
same here bro
haha me too
Holy shit you're literally blowing my mind (in a positive way) with your videos. I've never understood Eigendecomposition (and many more of the topics you're explaining) but now it all makes sense. Please never stop with your videos!
Thank you so much for explaining this so clearly. I was struggling to understand this for so long and you just made it so much easier. You are an excellent teacher!!
And it is affordable. Thirty lectures like this could make up an entire linear algebra course priced at a fraction of what universities charge in tuition, saving money, time, classroom space, and the energy of the campus commute. However, we can only go so far doing matrices by hand, so the course would need a software package like Mathematica, Matlab or Maple to crunch the numbers. Thanks a great deal for the quality presentation.
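For what it's worth, a free option works too. Here is a minimal sketch (assuming Python with numpy rather than the commercial packages named above, and a small matrix made up purely for illustration) of letting software crunch an eigendecomposition:

```python
import numpy as np

# A small symmetric matrix, chosen only as an example
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Eigenvalues and (unit-norm) eigenvectors; the columns of U are the eigenvectors
eigenvalues, U = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

# Reconstruct A = U @ Lambda @ U^{-1} and confirm it matches the original
A_reconstructed = U @ Lambda @ np.linalg.inv(U)
print(np.allclose(A, A_reconstructed))  # True
```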
Finally, someone that shows it simple and clear and answers the most important question: why? Thank you!
No problem!
Wish I could give more than one like. This channel is so underrated.
Surprisingly good explanation. Thanks a lot! I especially liked that all the information goes in order without gaps and an example of practical application is given.
Never seen such a clear explanation! Thank you so much!
This is a great explanation, been stuck trying to understand PCA and this really helps
I'm just learning these basics and your videos are very comprehensive and highly informative. Looking forward to completing all the videos in the playlist!!
Watched this video as a refresher for my ML class and it was super helpful. Thanks!!!
Great video, love the clarity of the explanation
Honestly... you deserve at least a million subscribers... A moron professor in our Econometrics class didn't even try to do this in his class! Thanks, professor Ritvik!
While Ritvik is indeed A-MA-ZING, perhaps you should be a bit nicer to your econometrics professor :-)
Brief and clear! Thank you. Short and clear!
Wow this is the best video on Eigen Decomposition. Thanks a lot man!
A superb explanation that I got the first time through. Liked and subscribed!
Best video on eigenvalue decomposition on any platform. Thanks man!
Wow, thanks!
Hey! This video is great and it has helped me a lot. As feedback, I'll mention that when the video began, everything was already on the whiteboard, which felt really overwhelming to me. This might be something you want to think about in the future.
This channel is extremely useful, thank you very much
Beautifully explained Ritvik. 👍
Such a succinct explanation... can you just explain why we normalised the eigenvectors?
Thank you so much. I always love to learn why things are important. Makes studying much more interesting :)
Beautiful explanation........ Thanks.............
Best help I found online. Thanks :)
You're welcome!
This gives a lot of information about the process of doing it and its value in data science. Thanks.
I really love your explanations, really helpful
Appreciated!
Thank you very much for your detailed answer, with appropriate examples and their benefits.
I liked the video, very explanatory and understandable
Love bro! This explanation was so clear
Glad to hear it!
You made it so easy to understand! Thank you!
Glad it helped!
Outstanding explanation!
It is very difficult to find that subject in a linear algebra college textbook.
Great explanation! Can you please give an example in machine learning or data science when we need to do the same linear transformation again and again?
Amazing clear explanation! Love u dude! Thx a million!
Only one doubt, what's the reason behind normalizing eigenvectors?
Btw, your content, the way of explaining these scary concepts taught me something that even MIT lectures couldn't. Thank you so much sir, please keep making such videos!
More power to you sir :)
Because any scalar multiple of an eigenvector is still an eigenvector, we generally take the unit vector.
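A minimal numpy check of that point (the matrix below is a made-up example, not the one from the video): scaling an eigenvector leaves Av = λv intact, and np.linalg.eig happens to return unit-norm eigenvectors by convention.

```python
import numpy as np

A = np.array([[2.0, 7.0],
              [7.0, 2.0]])   # arbitrary symmetric example

eigenvalues, U = np.linalg.eig(A)
v = U[:, 0]                  # first eigenvector (unit norm)
lam = eigenvalues[0]

# Any nonzero scalar multiple is still an eigenvector with the same eigenvalue
for c in (1.0, 5.0, -0.3):
    print(np.allclose(A @ (c * v), lam * (c * v)))   # True each time

print(np.isclose(np.linalg.norm(v), 1.0))            # True: eig returns unit vectors
```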
Thanks, it was very easy to follow your thought process. Helped me very much!
Glad it helped!
OMG the application part was amazing😍
Great Clear explanations... Thanks a lot!
Great short explanation! Thanks!
Awesome Explanation.. Keep it up!
Thanks a lot!
Really well explained, good job.
Thanks for posting it; it would have been nicer to show how matrix powers are used in data science.
OMG, literally understood the eigen shit in 8 minutes, thank you so much
Awesome!
Your videos are helpful and concise at the same time; that's rare on today's YouTube.
Thanks a lot for this clear explanation!
Damn, just a good video. Thank you very much for explaining
Thanks a lot. This was sublime.
You're very welcome!
Wow, such a good explanation!
Glad it was helpful!
Great explanation !
Thanks...Very nice explanation...
You are welcome
Thank you so much, you are a saviour.
7:54 Shouldn't you do the rightmost multiplication first? Lambda * U inverse.
Thank you for this amazingly simple explanation!
Could you give me an example of that kind of multiplication used in Machine Learning?
You have a great channel! Thanks for the insight which is hard to come by. Just one confusing area to me at the time was the definition of the 2x2 matrices for u1 and u2. They look like 3x2 matrices with values 1 & u1 (or u2). I did figure it out though. Thanks!
Thank you!
can you elaborate on this? I still don't get how it isn't a 3x2 matrix.
@Galmion It shouldn't have been written the way it was, in my opinion, as it causes confusion. Those "1"s are just dots (...), meant to stand for arbitrary entries.
@Galmion The matrix U is the two eigenvectors, u1 and u2, put next to each other in one matrix. And since u1 and u2 are 2x1 vectors, putting them together makes U a 2x2 matrix.
@Galmion I would have chosen an example with no square roots as the first example, personally. Say your eigenvectors are
u1 = [2]
     [3]
u2 = [4]
     [5]
Then U, the eigenvector matrix, is:
U = [2 4]
    [3 5]
Hope this helps.
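To restate that reply as a quick sketch (same made-up eigenvectors as above; the eigenvalues are hypothetical and not from the video): the columns of U are just u1 and u2 placed side by side.

```python
import numpy as np

u1 = np.array([2.0, 3.0])
u2 = np.array([4.0, 5.0])

# Stack the eigenvectors next to each other as columns -> a 2x2 matrix
U = np.column_stack([u1, u2])
print(U)
# [[2. 4.]
#  [3. 5.]]

# With some hypothetical eigenvalues on the diagonal
Lambda = np.diag([6.0, -1.0])

# The matrix whose eigendecomposition this would be: A = U Lambda U^{-1}
A = U @ Lambda @ np.linalg.inv(U)
print(np.linalg.eig(A)[0])   # recovers 6 and -1 (possibly in a different order)
```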
Best intro ever
But do most matrices admit an eigendecomposition? If not, doesn't that mean limited use?
Great video ! Can you also touch on the topic of LU Decomposition, Jordan Canonical Form, Rayleigh quotient, etc. ?
very nice explanation
Thanks for liking
Man, this rocks! thank you!
Great video, thanks!
Love this. Thank u❤
You are AWESOME! thank you!
Super helpful. Thanks
Can you tell me what the pros of this topic are?
What is the difference between decomposition and factorisation?
I think they're often used interchangeably
Your explanation is the best I have ever seen. But it does not explain what each component really means, i.e., first U^-1 maps/rotates the input vector (into the eigenvector basis), then Λ stretches the result in each eigenvector direction, and finally U reverse-rotates the vector (restoring the original axes).
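One way to see that three-step reading, as a sketch (using a symmetric matrix invented for the example, so that U is orthogonal and the rotation picture is exact): applying U^-1, then Λ, then U to a vector gives the same result as applying A directly.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])     # symmetric, so U is orthogonal (a pure rotation/reflection)

eigenvalues, U = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

x = np.array([1.0, 2.0])       # any input vector

step1 = np.linalg.inv(U) @ x   # express x in the eigenvector basis
step2 = Lambda @ step1         # stretch along each eigenvector direction
step3 = U @ step2              # rotate back to the original axes

print(np.allclose(step3, A @ x))   # True
```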
this is awesome!
Why do we need normalized eigenvectors? Won't any eigenvectors from the family of eigenvectors suffice?
great explanation
Great video!
If p=6 or p=7, is this arbitrary p=8?
Thank you. Thank you. Thank you.
Any time!
Thanks for your help!
That was beautiful !!!! :')
thanks!
Fantastic!!!!!!!!!!!!!!!!!!
Hey, did anyone solve for the eigenvectors?
Maybe I am wrong, but I got x1 = -(2/3) x2 (equivalently, x2 = -(3/2) x1) when solving the equations for lambda = -5.
If anyone got the answer, please let me know.
Pff, great video. I feel bad I didn't know about this guy earlier; it saves a lot of time.
Great job! I had no idea before the video; now I know everything.
Great job, peace
Excellent
Excellent. I was struggling to understand how the form A = UΛU^-1 is reached from the definition of an eigenvalue (Au = λu) as explained in my textbook, but the way you explained it made it all click for me. Thanks!
Excellent!
awesome thanks
I wish it had been explained to me this simply when I was studying it almost 30 years ago.
Exceptional explanation.
Thanks for the kind words!
Good video
very nice
Thanks man!
Beautiful
Amazing
Thanks
Hang on, if a matrix times its inverse is the identity matrix, why can't the formula for eigendecomposition (U * lambda * U^-1) be simplified as just lambda?
You cannot rearrange matrix multiplications the way you would with ordinary numbers/variables.
Exactly, matrix multiplication is not commutative!
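A tiny numeric check of that (U and Λ made up for illustration): because UΛ ≠ ΛU in general, you cannot slide Λ past U to cancel U against U^-1, so UΛU^-1 is genuinely different from Λ.

```python
import numpy as np

U = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # invertible, but chosen so it does not commute with Lambda
Lambda = np.diag([2.0, 5.0])

print(np.allclose(U @ Lambda, Lambda @ U))                 # False: no commuting
print(np.allclose(U @ Lambda @ np.linalg.inv(U), Lambda))  # False: nothing cancels
```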
Thank you
Of course!
10/10 ty
Awesome
nice
damn i like you, good job
SVD is superior imo
thanks*10^10000
thanksssssssssssssssssssssssssssssssssssssssss
Coool
Beautiful and handsome and pretty and