PCA 4: principal components = eigenvectors
- Published 18 Jan 2014
- Full lecture: bit.ly/PCA-alg
We can find the direction of the greatest variance in our data from the covariance matrix. It is the vector that does not rotate when we multiply it by the covariance matrix. Such vectors are called eigenvectors, and they have corresponding eigenvalues. The eigenvectors with the largest eigenvalues are the principal components (the new dimensions of our data). - Science & Technology
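The procedure the description outlines can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the lecture; the toy data and variable names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 2-D data with correlated features (illustrative, not from the video)
X = rng.normal(size=(100, 2)) @ np.array([[2.0, 1.0], [0.0, 0.5]])

Xc = X - X.mean(axis=0)               # 1. center the data
C = np.cov(Xc, rowvar=False)          # 2. covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # 3. eigenvectors of the covariance matrix
order = np.argsort(eigvals)[::-1]     # 4. sort by eigenvalue, largest first
components = eigvecs[:, order]        #    columns = principal components
projected = Xc @ components[:, :1]    # 5. project onto the top component
```

The column of `components` with the largest eigenvalue is the direction of greatest spread that the video talks about.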
Probably THE BEST explanation of PCA on the internet.
Thank you.
Finally someone who is able to explain the relationship between PCA and eigenvectors. Thank you!
This is the first time that PCA has actually made sense mathematically. Great video
Without this video I don't think I'd ever intuitively understand the inherent link between PCA and covariance eigenvectors. And to think that it only took 7 minutes of my time. THANK YOU
Just a couple of minutes of this video, when you explain how the covariance matrix acts on any vector, are the glue for all the different pieces of the PCA puzzle in my mind. Thanks for such excellent explanation.
AWESOOOME! :D Now I understand not only PCA but also what eigenvectors are
Thank you!
exactly !
Me Too !!!!!!!!!!!!!
Even without any fancy visualization, you managed to explain things in an intuitive and clear manner! Amazing!
Possibly the best explanation of eigenvectors, in the context of PCA, I've seen. Great work!
3 years... I did data science without this clear understanding of PCA. I have used it, I know how to calculate everything, but I never really get to see the essence of it. This is such a great lecture, Thank you.
This is amazing, thank you so much! This series of videos, combined with the one you have on Eigenvectors and Eigenvalues helped me to completely and thoroughly understand PCA and math behind it!
Only today did I understand the exact meaning of PCA ... I had been struggling for more than a month.
No match for these lectures ... great job
You're an insanely good teacher! Really appreciate you putting in time to make these videos, thanks :D
He was a professor at this point actually.
This is the best explanation of finding the eigenvector in PCA I have seen so far and is easy to understand. Thank you very much
"How do you extract these dimensions of greatest spread of the data."
Really love the plain-English intuitive explanation of eigenvectors and principal component analysis. Thanks Victor!
I have been really confused about why eigenvectors were used in PCA for the longest time! Love the explanation.
Wow! Such an elegant explanation of the covariance matrix and eigenvectors.
THANKS, you've answered a lot of questions in my mind with your amazing explanation!!!!
Great video! I have always used PCA but this is the first time I actually understand the underlying principle. Thank you
Went through countless videos to understand why we try to find vectors which don't change their direction when multiplied by the covariance matrix. Finally found the answer here. Thanks for posting.
I don't comment on videos very often, but this is an amazing explanation of PCA. Thank you! Helped a lot :)
I have a feeling that most teachers do not themselves have a sound understanding and will confuse the heck out of you to cover their own inadequacies. Thank you for explaining this so clearly - the part where you mention that multiplying the covariance matrix by any vector rotates it towards the principal component is superb and is understandable even by a novice.
Dude, I always wondered why you find eigenvectors when finding principal components. Now I understand. Thanks man, really appreciated.
You are a great teacher Victor, thank you for being able to explain this so well.
Your videos are doing an amazing job helping me understand these subjects better. Thanks a lot
Very clear lecture! Well-organized contents!
Thank you for the clear explanation. The extra work on colour and graph helped.
Thanks for the video! Now I got the sense why calculate eigenvector of the covariance matrix to get PC. Really great explanation.
Thanks a lot for this video; this is the first time I understood covariance matrix eigenvectors and eigenvalues... superb video
One of the best explanations I could find!! Thank you
Excellent explanation. Finding information on this exact process proved difficult - thank you for posting.
Brilliant explanation. Very good presenter and orator.
You are simply wonderful....these stuff must be taught everywhere
Thank you for the instructional and helpful videos! Is there any reference to what you said about the multiplication of the covariance matrix with any vector yields a vector that points more towards the maximum variance?
Thank you for the explanation, Mr. Robot!
Amazing explanation of eigenvectors and PCA, helped me a lot, Thank you.
Great explanation in such a short video !!!
Awesome! I understood the connection between covariance matrix and principal components now.
WoW!!! Finally I can understand the intuition behind using eigenvalues/vectors for PCA....Thank U so much !!!!! More Power to u :-) \m/
MAN YOU ARE GREAT! My brain lighted thanks to you!
The best intuitive explanation of PCA. Thank you!
Thank you!
Your videos really helped me. I have a doubt about this video: how did we know that multiplying the covariance matrix with the vector (the red one in the video) will rotate it towards the principal component?
Awesome explanation of principal components.
Excellent Explanation! Thank you so much
Saw so many videos, But cleared my doubt. Thanks for the awesome explanation.
More exciting than watching a movie!
Gained a lot from this video! Thanks Victor!
Ohh, so that's the idea. Great explanation, thanks very much. It is so simple, yet no one explains it properly like that.
Thanks for your video, it solved my puzzle.
This was very helpful. Thank you!
Impressive explanation. Thank you very much
This is really helpful and easy to understand! Thank you!
Thank you, happy to know you find it useful.
Really great lecture! Thanks!
4:10 multiply by the covariance matrix; slopes converging; that is the PCA
5:50 e2 * cov: it does NOT rotate
6:48 choose the eigenvectors that have the BIGGEST eigenvalues
Thank you for the video! But may I ask you something? How can you find a slope when the eigenvector is of size 17x1 and not 2x1? My initial dataset has 17 variables of 4 observations each (a 4x17 matrix), so the covariance matrix that derives from it is 17x17, and we have 17 eigenvalues and 17 eigenvectors.
Thank you in advance!
incredible explanation
Awesome explanation, very very intuitive.......
Great explanation. Thank you.
Awesomeness...thank you Prof !!
Deep understanding!
Excellent explanation
For the covariance matrix, shouldn't the diagonal equal 1, as it is the covariance of each feature with itself?
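For context on the question above: the diagonal of the covariance matrix holds each feature's variance, so it equals 1 only when the data has been standardized (i.e., for the correlation matrix). A small NumPy check, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
# illustrative features with deliberately different scales
X = rng.normal(size=(200, 3)) * np.array([1.0, 2.0, 5.0])

C = np.cov(X, rowvar=False)
# the diagonal is the variance of each feature, not 1...
assert np.allclose(np.diag(C), X.var(axis=0, ddof=1))

# ...it becomes 1 only for the correlation matrix (standardized data)
R = np.corrcoef(X, rowvar=False)
assert np.allclose(np.diag(R), 1.0)
```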
dude this lecture is awesome
Awesome. Thank you very much.
Great explanation - Is there any intuition for why the covariance matrix tends to transform vectors towards the eigenvectors? Or to put this another way: what is the reason that the eigenvectors are not rotated by the linear transformation arising from the covariance matrix?
This is great. Thank you!
Why does the covariance matrix rotate vectors towards the direction of greatest variance?
thank you! amazing explanation
great explanation !! double thumbs up :-)
How do you plot vectors real time on top of a document. Are you using a Python notebook by chance?
Great lecture - wonderful explanation
Thank you!
Great video, thanks man
Hi Victor, thanks for the great videos, they're a fantastic study aid. For PCA, some people recommend using the correlation matrix over the covariance matrix; can you explain why you would use one over the other? Are there any advantages of one vs. the other?
+John Garrigan Chiming in. You can use both; the correlation is just the normalized form of the covariance. The correlation of a dataset always lies between -1 and 1, while the covariance can take on any value from negative infinity to infinity. Specifically, the covariance is scale-dependent.
So the correlation is a better measure if you wish to compare different systems because it does not depend on the scale of the systems.
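The point made in this reply can be checked numerically: PCA on the correlation matrix gives the same directions as PCA on the covariance matrix of the z-scored data. A hedged sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
# two features on wildly different scales (illustrative data)
X = rng.normal(size=(300, 2)) * np.array([1.0, 100.0])

cov_dirs = np.linalg.eigh(np.cov(X, rowvar=False))[1]
corr_dirs = np.linalg.eigh(np.corrcoef(X, rowvar=False))[1]

# PCA on the correlation matrix equals PCA on the covariance
# matrix of the standardized (z-scored) data:
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
z_dirs = np.linalg.eigh(np.cov(Z, rowvar=False))[1]
assert np.allclose(np.abs(corr_dirs), np.abs(z_dirs))
```

With such different scales, the covariance-based components are dominated by the high-variance feature, which is exactly why the scale-free correlation matrix is preferred when features use incomparable units.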
Such a wonderful explanation. Please, how can I get the slides?
Appreciate it man.
Step 1 is to center; should we also scale so the variance = 1?
Are eigenvectors:
1) the ones that "don't change" when you multiply (vector * co-var)
2) or the ones that "do change" when you multiply (vector * co-var) ?
+darchz Neither. Eigenvectors are vectors that do not change their direction when multiplied by the matrix (matrix * vector); they might, however, change their length.
Eigenvectors of the covariance matrix DON'T rotate when multiplied by it.
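The definition in the reply above is easy to verify numerically: multiplying an eigenvector by the matrix only rescales it, while any other vector gets rotated. The covariance matrix below is made up for illustration.

```python
import numpy as np

C = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # an illustrative covariance matrix

eigvals, eigvecs = np.linalg.eigh(C)
v = eigvecs[:, -1]                  # eigenvector with the largest eigenvalue

# C @ v points in the same direction as v: only the length changes
w = C @ v
assert np.allclose(w, eigvals[-1] * v)

# a non-eigenvector, by contrast, is rotated
u = np.array([1.0, 0.0])
cu = C @ u
cos_angle = cu @ u / np.linalg.norm(cu)
assert not np.allclose(cos_angle, 1.0)
```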
Thank you sir.
How are we getting the vector (-2.5, -1) ? Victor said multiply again but by multiplying the covariance matrix with (-1.2, -0.2) I do not get (-2.5, -1). Could anybody explain this?
Hi Akshay, it's already late to reply, but answering for others: you need to multiply the covariance matrix by the new vector (-1.2, -0.2) to get (-2.5, -1), and then you follow the same process.
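The "multiply again" idea in this exchange is power iteration: repeatedly multiplying any starting vector by the covariance matrix (and renormalizing) converges to the top eigenvector, i.e., the first principal component. A sketch with an illustrative matrix, not the one from the video:

```python
import numpy as np

C = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # illustrative covariance matrix

v = np.array([1.0, 0.0])            # start from an arbitrary vector
for _ in range(50):
    v = C @ v                       # "multiply again" by the covariance matrix
    v = v / np.linalg.norm(v)       # renormalize so the length stays 1

top = np.linalg.eigh(C)[1][:, -1]   # true top eigenvector, for comparison
assert np.allclose(np.abs(v), np.abs(top))
```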
very useful thanks a lot.
Thanks, glad it worked for you!
Thanks
Good video... but it is difficult to follow what 'here' and 'there' refer to.
Man, you are awesome
Awesome
why do we subtract the mean from each attribute?
v9 The pragmatic reason -- it makes all the maths a lot simpler: covariances become dot-products, and the maths in parts 7 and 8 are a lot easier.
The conceptual reason -- PCA is all about preserving variance; it is not intended to model the mean of the data, so we factor out the mean, model what remains (the variance around that mean), and then add the mean back (if we need to reconstruct some semblance of the original data from the low-dimensional representation -- see: ua-cam.com/video/_lY74pXWlS8/v-deo.html for an example)
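The "covariances become dot-products" point can be checked directly: once each column is centered, the whole covariance matrix is just a matrix of dot products divided by n-1. A small NumPy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(loc=5.0, size=(50, 2))   # data with a nonzero mean

Xc = X - X.mean(axis=0)                 # center: subtract each column's mean
n = X.shape[0]

# after centering, every covariance is a dot product of two columns
C = Xc.T @ Xc / (n - 1)
assert np.allclose(C, np.cov(X, rowvar=False))
```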
Victor Lavrenko Thank you.
SVD
3:08
Hugs and kisses!!!!
wow, so this is what exactly eigen vectors really are....
Glad I could help :-)
British Gale from Breaking Bad
Andrew Tate doing statistics