Wow, this is by far the only tutorial demonstrating a clear description of the CCA, and how to compute it. Thanks!
Oh My! This is the best explanation about CCA I have ever seen.
Beautiful explanation … 3 min into the video and I understood the whole gist of CCA! Thank you so much!!! Who said that complicated things cannot be explained simply?
Thanks!
Thank you very much for your clear explanation. Just wanted to say your voice is very similar to Professor Schmidt's. Keep up the good work. Best regards :)
Thank you!
You are the best stats professor!! Thanks so much
Thank you!
Is there further theory behind the equation introduced at 6:25? Can you suggest some reading material for concrete proofs?
Check wiki
en.wikipedia.org/wiki/Canonical_correlation
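As a hedged pointer (assuming the equation at 6:25 is the standard CCA eigenproblem derived in that article), the canonical weight vector a for the X variables satisfies

R_{xx}^{-1} R_{xy} R_{yy}^{-1} R_{yx} \, a = \rho^2 a

where \rho is the canonical correlation; the Y-side weights solve the mirrored equation, and the eigenvalues of these product matrices are the squared canonical correlations (e.g., if the Ry matrix discussed below is that product matrix, the first canonical correlation would be sqrt(0.5194) ≈ 0.72).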
Can you share a link to a nice multivariate linear regression dataset with at least 4 dependent variables and at least 2 outcome variables, if possible?
So well explained!! Thank you!!
Thanks for your very didactic demonstration. I was wondering why you didn't mention data transformation and standardization before starting the analysis, mainly because blood pressure and body size have distinct scales.
Yes, you can standardize the data, but you will get the same correlations with unstandardized data because you instead standardize the scores later on, as I explain at 10:56.
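A minimal sketch of that invariance, assuming scikit-learn and made-up data (not the video's numbers): the first canonical correlation comes out the same whether or not the inputs are standardized first.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 2))
X = (Z + rng.normal(size=(100, 2))) * [1.0, 100.0]  # deliberately different column scales
Y = Z + rng.normal(size=(100, 2))

def first_canonical_corr(X, Y):
    # Correlate the first pair of canonical scores
    sx, sy = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(sx[:, 0], sy[:, 0])[0, 1]

# Same correlation on raw data and on column-standardized copies
print(first_canonical_corr(X, Y))
print(first_canonical_corr((X - X.mean(0)) / X.std(0),
                           (Y - Y.mean(0)) / Y.std(0)))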
you're a life-saver
What would happen if we did not take the inverse at the 6:46 timestamp? What if we multiplied all of them as they are? Thank you.
Does anybody have step-by-step notes for this calculation? Please reply.
Your stats videos are great.
Thank you!
Thanks a lot! Very helpful!
You are a very knowledgeable person.
Thank you!
Very clear! Thank you =)
Thank you so much for your explanation! It is very helpful.
Thank you!
Despite the negative coefficient value, a taller person has lower blood pressure and a heavier person has higher blood pressure. This is not clear to me. I also got this type of result in CCA but can't interpret it. Would anyone please explain it to me?
This is just a small data set, so do not draw any biological conclusions from it.
Excellent video. One question though: How to choose whether to use CCA or PLS? The difference is that PLS maximises the covariance between the datasets whereas CCA maximises the correlation.
I would use CCA for correlation and PLS for regression. I have a video about PLS as well:
ua-cam.com/video/Vf7doatc2rA/v-deo.html
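A minimal sketch of the distinction, assuming scikit-learn and invented data: CCA chooses weights to maximize the correlation between the X and Y scores, while PLS (PLSCanonical here) maximizes their covariance, which makes PLS the more natural fit when the goal is predicting Y from X.

import numpy as np
from sklearn.cross_decomposition import CCA, PLSCanonical

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
Y = X @ rng.normal(size=(3, 2)) + rng.normal(size=(100, 2))

for name, model in [("CCA", CCA(n_components=1)),
                    ("PLS", PLSCanonical(n_components=1))]:
    sx, sy = model.fit_transform(X, Y)
    # CCA's objective is the correlation of these scores; PLS's is their covariance
    print(name, "score correlation:", np.corrcoef(sx[:, 0], sy[:, 0])[0, 1])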
Great lecture
Hi, I tried to reproduce what you are showing here in Python, but I got totally different results. Are the calculations you are showing based on the numbers shown in the video, or are you using something else as input?
Yes, I used the example data in R. What is your output?
Thank you
Thank youuuu
The eigenvectors for Rx and Ry look wrong; I calculated different results. Are you sure about how you calculated the eigenvalues of Rx and Ry? The first and second eigenvectors and eigenvalues are in swapped places.
If you run the following code in R for, for example, Ry,
mat=matrix(c(-0.164,0.430,
-0.322,0.722),2,2)
eigen(mat)
you will get the following eigenvectors and eigenvalues:
$values
[1] 0.51939343 0.03860657
$vectors
[,1] [,2]
[1,] 0.4262338 -0.8463918
[2,] -0.9046130 0.5325607
Please share your own calculations so that I can have a look.
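One way to settle whose numbers are right, as a minimal numpy sketch using the same matrix: check that each returned eigenvector v satisfies Ry v = lambda v; any (lambda, v) pair passing that check is valid, whatever its position or sign.

import numpy as np

Ry = np.array([[-0.164, -0.322],
               [ 0.430,  0.722]])
eigenvalues, eigenvectors = np.linalg.eig(Ry)

# Each column of 'eigenvectors' pairs with the eigenvalue at the same index
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(Ry @ v, lam * v))  # prints True for every pair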
Ry = [ -0.164 -0.322
        0.430  0.722 ]
But the code you gave in R is the transpose of this matrix. Did you enter the input matrix incorrectly? Or should we take the transpose before computing the eigenvectors? @tilestats
No, you fill in the numbers by column in R. If you would like to fill in by rows instead, you do it like this, which gives the exact same matrix and eigenvectors:
mat=matrix(c(-0.164,-0.322,
0.430,0.722),2,2,byrow = TRUE)
eigen(mat)
@@tilestats
import numpy as np

A = np.array([[-0.164, -0.322],
              [ 0.430,  0.722]])
# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print("Eigenvalues:", eigenvalues)
print("Eigenvectors:", eigenvectors)
This code prints them in reverse order; I don't know why there is a difference in Python.
The way you rotate the data is arbitrary, so it does not matter if you get the values in reverse order. The eigenvalues are correct, right?
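A minimal numpy sketch of why the reversal is harmless, assuming the same matrix: sort the eigenpairs by decreasing eigenvalue (the order R's eigen() reports), and remember that an eigenvector multiplied by -1 is still an eigenvector.

import numpy as np

A = np.array([[-0.164, -0.322],
              [ 0.430,  0.722]])
vals, vecs = np.linalg.eig(A)

# Reorder eigenpairs by decreasing eigenvalue to match R's eigen() output
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

print(vals)  # ~ [0.5194, 0.0386], matching the R output above
print(vecs)  # each column matches R's, up to an overall sign flip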