Hi, could you help me with this question, please? Xnp = np.asarray(X.todense()) # Run a principal component analysis on Xnp # How much of the variance can be explained by the first 10, 50, 100, 200, and 500 principal components?
Please could someone tell me which are the variables/dimensions we want to reduce and which are the observations/samples? I'm a little confused, especially since I found 10 principal components in the scree plot. The genes are the variables/features, and wt and ko are the samples, right?
The genes are features and the types of mice are the samples. Did you run the code that I wrote, or did you write your own? You can download my code for free.
@@statquest Thank you very much. Due to the reversed order of variables and samples, I got a little confused. Also, the scree plot you showed in the video had 10 principal components, which equals the number of samples; perhaps the scree plot was just for visualization and there are more than 10 principal components? (I didn't run the code.) I have another small question, please: what if there are far fewer samples than variables? Say each sample is an image whose variables are its 4000 pixels, and there are only 200 samples. In that case I would not get more than 200 principal components, right? Or in other words, only 200 PCs or fewer would be useful to me, right?
Thanks for the "easy" to follow tutorial. I am trying to do a PCA for my RNA-seq data, but when I run scaled_tmp = StandardScaler().fit_transform(tmp.T) I get an error message: could not convert string to float: 'lcl|NC_000913.3_cds_NP_414542.1_1'. The lcl... is my target gene ID and I cannot edit it since I will need it later on to identify specific genes. Please, how do I solve this error message?
It looks like one of the columns in your matrix is some sort of identifier instead of sequencing data. In the video, when we create the data, we move identifiers to be row names or column names (see: 3:17). Other than the row and column names, the matrix that we do math on can only contain numbers because... how do we do math with identifiers?
@@statquest Thanks I was able to rectify the issue. I did not indicate "my gene_id" column as my index when loading data. After setting the index column it now works well.
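For anyone hitting the same could-not-convert-string error, a minimal sketch of the fix; the file contents and column names below are made up for illustration:

```python
import io

import pandas as pd
from sklearn.preprocessing import StandardScaler

# A tiny stand-in for an RNA-seq count file whose first column holds gene IDs.
csv_text = """gene_id,wt1,wt2,ko1,ko2
lcl|NC_000913.3_cds_NP_414542.1_1,10,12,103,99
lcl|NC_000913.3_cds_NP_414543.1_2,55,60,12,9
"""

# index_col=0 moves the identifiers into the row index, so the
# remaining columns are purely numeric and safe to do math on.
data = pd.read_csv(io.StringIO(csv_text), index_col=0)

# Samples become rows after transposing; StandardScaler now only sees numbers.
scaled = StandardScaler().fit_transform(data.T)
print(scaled.shape)  # (4, 2): 4 samples, 2 genes
```

The gene IDs stay available as `data.index`, so they can still be used later to label the loading scores.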
What if I already have a dataset that I will just upload? What do I pass as the index in this line? pca_df = pd.DataFrame(pca_data, index=[], columns=labels)
Suppose a number of items exist of type 1 and 40 variables associated with each. Further items of type 2 exist, also having the same 40 associated variables. Is there a way to find which variables, or combination of variables, best discriminates whether an arbitrary item belongs to type 1 or type 2? Is this supervised PCA? Thank you for any help.
Your channel has helped me immeasurably :) I just had one question here: how precisely do you go from the data sample array you start with to the scaled data, by hand? I tried but didn't get the correct answer. I did watch the PCA Explained video as well, but just didn't get the same result here, and wonder if you could clarify exactly how it gets from one to the other. It should be: scaled_data = (data['wt1'][i] - np.mean(data['wt1'])) / np.std(data['wt1']) ... for each datapoint i and each column, right? This isn't real code; I'm just making the point that it's z = (x - u) / s :)
Hi Josh!, this is a very excellent video that helped me a lot!!! I have a question, what if PC3 PC4 is also essential? Do I need to draw 2 2-D graphs, or what do I need to do?
If you want to draw the PCs and the data, then you'll have to draw multiple graphs. Or you can use the projections from the first 4 PCs and input to a dimension reduction algorithm like t-SNE: ua-cam.com/video/NEaUSP4YerM/v-deo.html
BAM!!! I understood what you said; I show my gratitude. But I have a query. I am confused about which to treat as rows and which as columns in my dataset. My dataset concerns Phasor Measurement Units (PMUs) used in the electrical grid, on the distribution lines we see around us. A single PMU measures 21 electrical parameters per timestamp. We use around four PMUs, each measuring the 21 parameters at different locations at the same time, continuously over a period of time. How can I arrange the above data for performing PCA, sir?
Sir, those two cases you mentioned where PCA would work are what I am also interested in computing, apart from the combination of all of the PMU timestamps. Can you mention how to arrange the data (rows and columns) for both of the viable cases you mentioned? Thank you so much!! You are really awesome, sir.
Oh i see! Thank you so much Josh. I watch your video from time to time in the past, and a lot more recently, and I'm always amazed at how extremely talented you're at teaching and explaining things!! Do you have somewhere that I can show some appreciation (aka pay tuition) if I don't plan to buy anything?@@statquest
Hi Joshua, thanks for that, really helpful. I'm quite new to Python myself, and I'm trying to compute a PCA across a range of macro-economic factors (inflation, GDP, FX, policy rate, etc.). In all that you've done above, where is the display of the PCA, i.e. the newly uncorrelated dataset? Is it the loading scores you printed, or the wt and ko variables you plotted? Thanks
It depends on the field you are in. I used to work in Genetics and this is the format they used. So it's always worth checking to make sure you have the data correctly oriented.
Hi Josh, thank you for your efforts; StatQuest really is a magnificent channel. Could you please make a video on Singular Value Decomposition (SVD)? Thanks.
You don't have to scale the data, but it is highly recommended. For more details why scaling is important, see this StatQuest: ua-cam.com/video/oRvgq966yZg/v-deo.html
Correction:
3:23 The array should only have wt through wt5, ko1 through ko5.
Support StatQuest by buying my books The StatQuest Illustrated Guide to Machine Learning, The StatQuest Illustrated Guide to Neural Networks and AI, or a Study Guide or Merch!!! statquest.org/statquest-store/
Thank you, I was mentioning 3:23. Your videos are great.
I am a medical doctor from Turkey and currently, I am planning a career change to data science and I have been watching your videos to get prepared for a data scientist position. Could you create a few videos regarding data science interviews if it is relevant for your channel content? Best Regards, Göktuğ Aşcı, MD.
@@GoktugAsc123 I'll keep that in mind.
@@keerthik3791 Unfortunately the random forest implementations for Python are really bad and they don't have all of the features. If you're going to use a random forest, I would highly recommend that you do it in R instead.
@@statquest Thank you for the suggestion. I am good at Python and MATLAB. Can I do random forests in MATLAB? Or is learning R necessary here?
@@keerthik2168 I have no idea. I've never tried to do random forests in Matlab.
Dude you deserve a humanitarian award.
Thanks! :)
he is a good human in my eyes
@@joshuamcguire4832 bam!
@@rezab314 super bammm!!!
Not only the best PCA demonstration but also THE BEST introduction to Python. Hats off to you man!!
Thank you! :)
I have been dabbling in data science for a while now, and only now learned that pandas stands for "panel data" xd
This channel never ceases to amaze
:)
Whenever I search for some machine learning based explanation, I add 'by statquest' in it ^_^. Keep up the great work :')
Thank you very much!
@@statquest It's True I do the same thing ..thank you for your hard work
"Note: We use samples as columns in this example because... but there is no requirement to do so."
"Alternatively, we could have used..."
"One last note about scaling with sklearn vs scale() in R"
This is some of the gold that sets StatQuest apart. Thank you! ❤
Thank you! :)
YOU ARE SAVING MY DEGREE I LOVE YOU SO MUCH I CANT EVEN BELIEVE THIS IS THE SAME MATERIAL IM LEARNING IN MY MACHINE LEARNING CLASS RIGHT NOW.
Happy to help!
Finally! You explain in the language I understand much better than English haha Thanks !!!
:)
but you are watching a tutorial \(-_-)/
One of the best videos ever made on this topic. This channel has helped me a lot in understanding machine learning in greater detail. Keep up the good work !!
Thank you!
The fact that you said bam when the plot showed what we wanted really shows that even if you are a pro python programmer, you still feel happy when you code correct, relatableeee
bam! :)
This channel is the best UA-cam channel that I discovered. Thank you, sir!
Thanks!
Python. Now you're speaking my language :)
want me to take out my python?
@@HK-sw3vi ...weirdo
Simply loving StatQuest. Concise, clear and fun videos. One point I noted while watching this video is that the latest version of sklearn PCA() will center the data for you, but not scale it. So if you just need centering for doing pca, you don't need to worry about preprocessing.
Thanks for the update!
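To make the difference concrete, here is a small sketch with made-up data: sklearn's PCA subtracts each column's mean for you, but standardizing remains a separate step.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two features on wildly different scales.
X = rng.poisson(lam=[5, 500], size=(20, 2)).astype(float)

# PCA() centers each column (subtracts the mean) but does NOT
# divide by the standard deviation.
pca = PCA(n_components=2).fit(X)
print(pca.mean_)  # the per-column means PCA used for centering

# If the scales differ like this, standardize first so the big
# feature doesn't dominate PC1 just because of its units.
X_scaled = StandardScaler().fit_transform(X)
pca_scaled = PCA(n_components=2).fit(X_scaled)
```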
I learn so much better in Python for some reason, I think it's because it's more interactive and you can play around with the data! Good one. Stattttquueeeeeest.
Thanks! There should be a lot more Python videos and learning material out soon.
@@statquest looking forward to it :).
I push the like button even before I play the video. Because Josh never fails to amaze me.
bam!
I am watching the 1st minute and I'm already super excited. Thanks!!
Hooray!!!!!! :)
You've got the right formula for simple explanations. Teach me dawg
Thank you! :)
The only good step by step explanation I found on the web. Thank you so much!
Hooray!!! Thank you so much! :)
You are one of the best teachers that I've ever found. Thank you very much!
Thank you! :)
Awesome. Please create more videos about how to implement the machine learning as well as data science concepts explained here into Python. That would be super helpful for us, in particular beginners.
Thanks, will do!
Hi Josh... Simply incredible all StatQuest videos... Triple Bam!!!
Thank you! :)
Always can find a new and detailed explanation of steps from your videos! Thank you!
Thank you! :)
It's awesome to have the explanation based on python code. Thanks a lot!
No problem. I'm doing a lot more python coding these days, so hopefully I'll make more of these "in python" videos.
Wish there were more statquest coding in python videos, they are the best! Much prefer to regular content although that is still really high quality
Noted.
Thank you Josh. Such practice is important and valuable!! And you really also taught some Python tricks that I don’t know.
Thank you! :)
Wow Josh.. Thanks for that unpacking concept. I never knew that my whole life...
You bet!
Another great StatQuest in the books!
Wow, your explanation is so clear!!
Thank you! 😃
6:31 using scikit PCA
8:35 plotting scree plot
10:37 loading scores for each principal component
Thanks for the time point! I'll add those to the description to divide the video into chapters.
Really appreciate this and would love to see more concepts implemented in python.
Thanks!
I like the way you plot the ratio of each PC~~
It is really easy to read!
BAM~~~~~~~~~~
Thank you!
MAKE MORE PYTHON CONTENT PLEASE I LOVE IT
I'm working on it. :)
This was so clear, thanks! Finally I can do PCA in python, BAM 😊 You DA BEST!
Thanks!
Man, u r a gem. I will pay for the knowledge later after my graduation bro. lol
Wow! Thank you! :)
Hi Josh. The best PCA explanation. Thanks a lot :-) May GOD bless you 😊
Thank you! :)
Yes, May god bless you 100 times. May the troubles of today’s world not reach your doorstep. You’re a great person.
Thanks for the tutorial! One thing I don't understand is why the PC1 can separate the wt and ko samples. Their gene expression values are generated in a same way.
Just stating I have the same question 2 years later.
What a playlist, I simply loved it 😘
Thank you!
Woww! That was absolutely awesome!!! Thank you so much!
Glad you liked it!
You are the best!!!! It would be great if you could make a video on speculative decoding using medusa and quantization of neural networks in general
@statquest
I'll keep that in mind! :)
This was a really good explanation using Python
As always a great presentation, and the python code just gives the extra bite...
Thanks!
Hello Josh, thank you for the amazing video! Quick question: at 9:18, how can I adapt "index=[*wt, *ko]" for an Excel input? Let's say that we have the same variables (genes vs. wt/ko) but in an Excel file. How can I add these labels to the final plot (9:47)? Thank you again!!
I'm not sure I understand your question. You can export your data from excel and import it into python (or R or whatever). Or are you asking about something else?
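In case it helps, a sketch of one way to do it (the file contents below are invented): once the spreadsheet is loaded, the sample names come along as the column labels, so there is no need to rebuild [*wt, *ko] by hand.

```python
import io

import pandas as pd

# Stand-in for an Excel/CSV export: genes as rows, samples as columns.
csv_text = """gene,wt1,wt2,ko1,ko2
gene1,10,12,100,105
gene2,50,55,9,11
"""
data = pd.read_csv(io.StringIO(csv_text), index_col=0)

# The sample labels for the PCA plot are just the column names:
sample_names = data.columns
print(list(sample_names))  # ['wt1', 'wt2', 'ko1', 'ko2']

# For a real Excel file, pd.read_excel('file.xlsx', index_col=0)
# works the same way, and sample_names can be passed as the
# index when building pca_df for plotting.
```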
Excellent work!!! 👏👏
Thanks a lot!
That's a cool one. The fact that observations are columns makes it so confusing though. I'm really used to the tidy data notation
Noted
very much enjoy your explanation style.
many thanks for the great videos!
Thanks!
@@statquest excellent going - really.
difficult to know what's up and down in data science, and so i'm happy your videos cover subjects from mathematical concepts to code implementation.
excellent spirit and explanations, again.
(sorry about the superlative avalanche - in the vast ocean that's the net, it's difficult finding authoritative sources covering subjects well )
bests from Germany/Denmark ;)
@@miskaknapek BAM! :)
Amazing video! I initially watched the video explaining PCA and I was mind-blown, thank you so much! I was hoping to ask if anyone in the comment section, or even StatQuest if possible, would know how to implement PCA on a multivariate time-series dataset and also "examine the loading scores" in such a dataset. Thanks in advance! :)
P.S. I'm extremely clueless about anything coding or ML, but I've got to use PCA (and other dimensionality reduction methods) on my time-series dataset, so I would greatly appreciate any direction on how to proceed.
See: stats.stackexchange.com/questions/158281/can-pca-be-applied-for-time-series-data
Thank you! This video helped a lot with what I'm trying to do.
Awesome!
i really like your clear explanation. please do some videos about deep learning and NLP.
I'm working on them.
@@statquest yeah! I am waiting for that
Your videos are great! Thanks
Thanks!
Amazing! this is so important, thanks a lot.
Thanks! :)
Question please...
09:50 wt and ko samples are both created with the same random function, Poisson (10, 1000). Why are the wt samples (and the ko samples) more correlated with each other?
Because rd.randrange(10, 1000) returns a random number between 10 and 1000. Once we get that random value, we use it to generate 5 values for the wt samples using a poisson distribution. Then we select another random value between 10 and 1000 and use it to generate 5 values for the ko using a different (because the random value is different) poisson distribution.
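A rough sketch of that generating scheme (using numpy instead of the video's exact code, so the details here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_genes = 100

wt = np.empty((n_genes, 5))
ko = np.empty((n_genes, 5))
for g in range(n_genes):
    # One fresh mean for this gene's five wt measurements...
    wt[g] = rng.poisson(lam=rng.integers(10, 1000), size=5)
    # ...and an independent mean for its five ko measurements.
    ko[g] = rng.poisson(lam=rng.integers(10, 1000), size=5)

# Within a gene, wt1..wt5 share a mean, so the wt samples correlate
# with each other; the ko samples share a different mean, so the
# two groups end up separable.
```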
Good explanation. Thank you so much.
at 5:10 why do we scale our data?
I explain why we scale the data in this video: ua-cam.com/video/oRvgq966yZg/v-deo.html
Very concise, I will surely be coming back to this video, however I would like to know why PCA is able to group these two categories (wt and ko), when it's shown they are generated from the same random method. If all indexes were generated at the same time, I would get it, but as they are generated index by index, I seem not to be able to grasp it.
The trick is at 3:48. For each group, wt and ko, we select a different parameter for the poisson distribution and generate 5 measurements from each of those two different distributions. One set is for wt and the other set is for ko.
@@statquest I think my confusion comes from the fact that these will make the two groups different from one another (all w's different from ko's), but I wouldn't predict them to be similar within the group (wt1 is close in vertical to wt2, and to wt3...,), thus I tend to believe PCA should tell them apart, but not in exactly two groups (wt's vs ko's), I would predict more like two clouds instead of two "vertical line of points" in the 2-D.
@@3stepsahead704 Remember how PCA actually works: it finds the axis that has the most variation (which is between WT and KO) and focuses on that. It then finds the secondary differences (among the WT and among the KO). However, because the differences between WT and KO are big, the scale on the x-axis will be much bigger than the scale on the y-axis. Thus, the samples will appear to be in a vertical line rather than spaced apart like you might guess they should be. In short, check the scales of the axes; they will explain the difference between what you see and what you expect.
@@statquest Thank you very much for taking the time to explain this. I now get it!
is centering included in sklearn's pca model and that's why there is no extra step to center?
I believe so.
I don't know why in the PCA graph you plot the "features"; in some other videos, they plot all the data points and visualize the data in the new subspace... And I don't know the meaning of the x-axis in that plot: what does -10 mean on PC1 (89.9%)? Thanks
I don't plot the features, I plot the subjects. For details, see: ua-cam.com/video/FgakZw6K1QQ/v-deo.html
Hi, you are a lifesaver. I am trying to do a PCA analysis on my own data, but every demo video either uses built-in databases or, as here, creates its own data, so I am missing some crucial steps, especially in defining the index when doing it with my data. Would it be too much to ask for a few more machine learning videos where you use Excel-sheet data from your laptop?
I am a newbie in data science and programming. I am a Molecular Biologist who would love to learn machine learning.
I'll keep that in mind for a future video.
great video! thanks for these!!! have you done a redundancy analysis and dbRDA plot video? thank you for contributing to our education
I haven't done that yet.
@@statquest let us know if you ever do! It would be a double bam from me. It just clicks the way you explain! Thank you again for your content!!!
COOOOOL, so easy to understand!
Wow, thanks. One question: while verifying loading scores, I saw that 'False' command. Typically, for PCA, the data needs to be scaled, right? But false means it is not scaled, so I am confused. Please clarify this.
The data are already all on the same scale.
Thank you! I’ve been struggling with this problem for so long !
Hooray! I'm glad the video was helpful. :)
If you end up using the PCA data... wouldn't that cause data leakage in our predictive model, since scaling should be done after the train/test split?
If you're using it for machine learning, presumably you can come up with a standard scaling and centering procedure.
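A minimal sketch of such a procedure with made-up data: fit the scaler and PCA on the training split only, then reuse those fitted objects on the test split.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

X_train, X_test = train_test_split(X, test_size=0.25, random_state=0)

# Fit the preprocessing on training data only...
scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=2).fit(scaler.transform(X_train))

# ...then apply the SAME fitted transforms to the test data,
# so no test-set statistics leak into the model.
X_test_pcs = pca.transform(scaler.transform(X_test))
print(X_test_pcs.shape)  # (25, 2)
```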
Question please. This line transforms the original data to a 10x10 array: pca_data = pca.transform(scaled_data)
The video says: it generates the coordinates for PCA graph based on loading score and scaled data.
Apart from the coordinates in graph, what do the values actually represent? How should I interpret them - Is it the amount of variance of sample values attributed to each PC? The distance of each sample on PC line to the origin? What is the unit?
The coordinates do not have units. And, as far as I know, they are just coordinates.
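One way to see that they really are "just coordinates": each value is the centered data projected onto a PC direction, and the spread of each coordinate column matches the variance that PC explains. A small sketch with made-up data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
scaled = StandardScaler().fit_transform(rng.normal(size=(10, 4)))

pca = PCA().fit(scaled)
coords = pca.transform(scaled)  # what the video calls pca_data

# Each coordinate is the (centered) data dotted with a PC direction:
manual = (scaled - pca.mean_) @ pca.components_.T
print(np.allclose(coords, manual))  # True

# The variance of each coordinate column is that PC's explained variance:
print(np.allclose(coords.var(axis=0, ddof=1), pca.explained_variance_))  # True
```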
Oops!! I made a mistake and deleted your follow-up comment. Sorry about that. However, my response is "Yes, the PCA graph is a graph that uses PCs as the axes."
@@statquest No problem. Thanks for confirming.
what happens if we give the n_components=d, d being the no of dimensions? Does PCA denoise the data because there won't be any reduction in dimensions?
I don't think it will. However, there is still value because you can draw the scree plot and see how many PCs are really useful (it might only be a few, or it could be all of them).
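A sketch of that idea with invented data: keep every component, then read the scree values to see how many actually matter.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 3 informative directions hidden inside 6-dimensional data,
# plus a little noise.
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 6)) + 0.05 * rng.normal(size=(200, 6))

# n_components equal to the number of dimensions: no reduction yet.
pca = PCA(n_components=6).fit(X)

# The scree values reveal that only the first 3 PCs carry real signal.
per_var = np.round(pca.explained_variance_ratio_ * 100, 1)
print(per_var)
```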
This video is really awesome! I am just confused on one thing, what are your predictors and what is your target?
PCA does not have predictors and targets. All variables are just...variables. For more details about PCA, see: ua-cam.com/video/FgakZw6K1QQ/v-deo.html
Thanks for the great video! :)
:)
What about the source of the dataset??
The dataset is created within the code.
@@statquest thanks for your quick reply
Hi Joshua, Great Videos!
In the loading_scores, this error appears: ValueError: Length of passed values is 8, index implies 9686. I'm using my own dataset.
Maybe look at the first few items to make sure it is what you expect it to be.
@@statquest yes I got it, thanks
What do negative values in the loading scores indicate?
Loading scores are explained here: ua-cam.com/video/FgakZw6K1QQ/v-deo.html
@@statquest Thank you.
Dear instructor, will you release a Python version of your ML course? Super fan here!
One day I will.
@@statquest hope that day comes quick. stay well.
Dude, I'm trying to do isotonic regression with a toy dataset, but the error says X is not a 1D array. Can I use PCA to turn it into 1D?
Maybe!
Great tutorial. Sorry if my question is amateur, but how did the final PCA tell WT and KO apart? I thought the dataset was randomly generated?
Early on we gave the rows and columns names and kept track of them.
What is the point to look at loading scores at the final step? My understanding is as follows. Each gene is a sample. If their loading scores on PC1 are similar, it means a lot of samples are projecting around a similar position on PC1. So they are clustering apparently. Am I right?
In this case, loading scores tell us which genes have the most influence on the PCs. This can tell us which genes have the most variation and are the most useful for determining why the cells cluster the way they do. For more details, see: ua-cam.com/video/FgakZw6K1QQ/v-deo.html
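A sketch of pulling those scores out of sklearn; the gene names and data below are invented:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
genes = ['gene1', 'gene2', 'gene3', 'gene4']

# gene1 varies a lot across the 30 samples; the others barely move.
X = np.column_stack([
    rng.normal(0, 10, size=30),  # gene1
    rng.normal(0, 1, size=30),   # gene2
    rng.normal(0, 1, size=30),   # gene3
    rng.normal(0, 1, size=30),   # gene4
])

pca = PCA().fit(X)

# components_[0] holds PC1's loading scores, one per gene.
loading_scores = pd.Series(pca.components_[0], index=genes)

# Rank by magnitude; the sign only says which way a gene pushes.
top_genes = loading_scores.abs().sort_values(ascending=False)
print(top_genes.index[0])  # gene1 dominates PC1
```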
@@statquest Thank you! Just found you replied to my response very fast! Wish I knew how to look at those notifications earlier!
Thank you!!! When we are speaking about variation in PCA, is that the same as variance?
Yep.
@@statquest Thank you very much for the clarification! I googled it, and seems that it's two different things, but sometimes they can be used interchangeably or be the same thing.
@@Cat_Sterling Yes, I guess it depends on how you want to use them and whether you divide by 'n' or 'n-1', but, at least on a conceptual level, they are the same.
@@statquest Thank you so much again! Really appreciate your reply! Your channel helped me so much!!!
What if I get 4 variables with maximum variation in the scree plot? How would I then plot a PCA plot?
You can draw multiple PCA graphs (PC1 vs PC2, PC1 vs PC3, etc.)
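A minimal sketch of that idea with toy data (assumes numpy, sklearn, and matplotlib are installed; the figure file name is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(10, 6))          # toy data: 10 samples, 6 features
pca_data = PCA().fit_transform(data)     # samples projected onto the PCs

# Draw PC1 vs PC2 and PC1 vs PC3 side by side
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, pc in zip(axes, (1, 2)):
    ax.scatter(pca_data[:, 0], pca_data[:, pc])
    ax.set_xlabel("PC1")
    ax.set_ylabel(f"PC{pc + 1}")
fig.savefig("pca_pairs.png")
```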
fantastic, like always.
I wonder how the Poisson distribution caused the wt samples and ko samples to each be correlated with each other?
Because we generated the data, I selected different lambda values for the wt samples than for the ko samples.
4:46 Why does gene4,ko1 have a value over 1000 if the command says "get a random value between 0 and 1000"?
Thanks for the value !!
We select a random number between 10 and 1000 to be the mean of a poisson distribution. That's just the average value, and there can be larger and smaller values.
@@statquest Oh! I see!! Thank you so much. I'm still learning about this.
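A minimal sketch of what that data-generation step does (assuming numpy; the exact lambda drawn is random):

```python
import numpy as np

rng = np.random.default_rng(42)
lam = rng.integers(10, 1000)            # a random mean (lambda) between 10 and 1000
counts = rng.poisson(lam=lam, size=5)   # counts drawn from a Poisson with that mean

# lambda is only the *average*; individual counts can be above or below it,
# so a lambda near 1000 can easily yield counts over 1000
print(lam, counts)
```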
how do we interpret positive and negative load factors of features in terms of separating the sample?
I show some examples of this in my main PCA video: ua-cam.com/video/FgakZw6K1QQ/v-deo.html
@@statquest I saw that video. I want to use PCA to determine weights of variable in my index. I have 2 doubts:
1) Shall I take square of load factor and multiply by variation of the principal component to get weight for that variable?
2) what is the role of negative and positive sign if I am using PCA method to derive weights. Shall I give negative sign load factors, negative weights in my index?
@@manannawal8466 1) The loading scores are weights to begin with and determine how much of each variable is combined to make the principal component. In other words, you should not need to transform the loading scores.
2) The positive and negative signs are just relative to the other variables and how they contribute to the slope of the principal component. For example, if you had two genes, 1 and 2, then loading scores 5 and -2 (for genes 1 and 2 respectively) would give us the exact same slope as -5 and 2.
Since the positive and negative signs are somewhat arbitrary, people often ignore them and instead concentrate on the magnitude of the loading scores as a way to determine variable importance.
@@statquest thank you :)
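A minimal sketch of that advice (toy data, hypothetical gene names): rank the variables by the magnitude of their PC1 loading scores, ignoring the signs.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
data = rng.normal(size=(10, 4))                 # 10 samples, 4 genes
genes = ["gene1", "gene2", "gene3", "gene4"]

pca = PCA()
pca.fit(data)

# Loading scores for PC1; the signs are relative, so rank by absolute value
loading_scores = pd.Series(pca.components_[0], index=genes)
ranked = loading_scores.abs().sort_values(ascending=False)
print(ranked)
```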
Hi, could you help me with this question, please?
Xnp = np.asarray(X.todense())
# Run a principal component analysis on Xnp
# How much of the variance can be explained
# by the first 10, 50, 100, 200, and 500 principal components?
If I had more time I could help, but today is super busy. Maybe someone else wants to help.
Please, could someone tell me which are the variables/dimensions we want to reduce and which are the observations/samples? I'm a little confused, especially since I found 10 principal components in the scree plot.
The genes are the variables/features, and wt and ko are the samples, right?
The genes are features and the types of mice are the samples. Did you run the code that I wrote, or did you write your own? You can download my code for free.
@@statquest Thank you very much. Due to the reversed order of variables and samples, I got a little confused. Also, the scree plot in the video had 10 principal components, which equals the number of samples; perhaps the scree plot was just for visualization and there are more than 10 principal components? However, I didn't run the code.
I have another small question, please: what if there are far fewer samples than variables? Say it's an image and the variables are the pixels, with 4000 pixels per image, and there are just 200 samples. In that case I would not get more than 200 principal components, right? In other words, only 200 PCs or fewer would be useful to me, right?
@@timokimo61 I answer that question in this video: ua-cam.com/video/oRvgq966yZg/v-deo.html
@@statquest I have seen the video before, yeah.
But a simple answer here would help me a lot 😄
Thanks for the "easy" to follow tutorial. I am trying to do a PCA on my RNA-seq data, but when I run scaled_tmp = StandardScaler().fit_transform(tmp.T) I get an error message: 'could not convert string to float: 'lcl|NC_000913.3_cds_NP_414542.1_1''. The lcl... is my target gene ID, and I cannot edit it since I will need it later on to identify specific genes. Please, how do I solve this error?
It looks like one of the columns in your matrix is some sort of identifier instead of sequencing data. In the video, when we create the data, we move identifiers to be row names or column names (see: 3:17). Other than the row and column names, the matrix that we do math on can only contain numbers because... how do we do math with identifiers?
@@statquest Thanks I was able to rectify the issue. I did not indicate "my gene_id" column as my index when loading data. After setting the index column it now works well.
@@godsperson5571 Hooray!
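A minimal sketch of that fix (hypothetical gene and sample names): move the gene IDs into the index so the remaining cells are purely numeric. With read_csv, the same effect comes from passing index_col.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "gene_id": ["lcl|NC_000913.3_cds_NP_414542.1_1", "geneB", "geneC"],
    "wt1": [10.0, 20.0, 30.0],
    "ko1": [5.0, 25.0, 45.0],
}).set_index("gene_id")            # IDs become row names, not data
# Equivalent when loading from a file: pd.read_csv(..., index_col="gene_id")

scaled = StandardScaler().fit_transform(df.T)   # transpose: samples in rows
print(scaled.shape)
```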
How can I select the final components to apply them to new data?
You can use the loading scores. For details, see: ua-cam.com/video/_UVHneBUBW0/v-deo.html
@@statquest Thanks! Where can I find the loading scores in Python?
@@dafran500 For details on how to do PCA in python, see: ua-cam.com/video/Lsue2gEM9D0/v-deo.html
What if I already have a dataset that I will just upload?
What do I pass as the index in this line?
pca_df = pd.DataFrame(pca_data, index=[], columns=labels)
Whatever you want the row names to be
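For example (hypothetical sample names, with a stand-in for the PCA output):

```python
import numpy as np
import pandas as pd

pca_data = np.array([[1.0, 2.0], [3.0, 4.0]])   # stand-in for PCA output
labels = ["PC1", "PC2"]
sample_names = ["wt1", "ko1"]                    # whatever your samples are called

pca_df = pd.DataFrame(pca_data, index=sample_names, columns=labels)
print(pca_df)
```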
Suppose a number of items exist of type 1 and 40 variables associated with each. Further items of type 2 exist, also having the same 40 associated variables. Is there a way to find which variables, or combination of variables, best discriminates whether an arbitrary item belongs to type 1 or type 2? Is this supervised PCA? Thank you for any help.
Consider using LDA instead of PCA for your problem. For details, see: ua-cam.com/video/azXCzI57Yfc/v-deo.html
@@statquest That's the perfect solution to this problem, thanks very much!
Your channel has helped me immeasurably :) I just have one question: how exactly do you go from the starting data array to the scaled data by hand? I tried but didn't get the same result, even after watching the PCA Explained video. Could you clarify exactly how it gets from one to the other? It should be: scaled_data = (data['wt1'][i] - np.mean(data['wt1'])) / np.std(data['wt1']) for each datapoint i and each column, right? This isn't real code; I'm just making the point that it's z = (x - u) / s :)
It depends on how the data are oriented. Sometimes it's in columns, sometimes rows. So check to make sure your data is in columns.
@@statquest for the test code you supplied, so columns, am I using the correct method?
@@RachelDance It sounds like it.
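A minimal check of that idea (toy data with samples in rows and variables in columns; note that sklearn's StandardScaler uses the population standard deviation, i.e. ddof=0):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
data = rng.normal(loc=5.0, scale=2.0, size=(20, 3))

by_sklearn = StandardScaler().fit_transform(data)

# By hand, per column: z = (x - mean) / std, with ddof=0 to match sklearn
by_hand = (data - data.mean(axis=0)) / data.std(axis=0, ddof=0)

print(np.allclose(by_sklearn, by_hand))   # → True
```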
Are loading scores eigenvalues? I wish to see a more linear-algebra-based explanation of PCA!
For more details on how PCA works, see: ua-cam.com/video/FgakZw6K1QQ/v-deo.html
Hi Josh! This is an excellent video that helped me a lot!!!
I have a question: what if PC3 and PC4 are also essential? Do I need to draw two 2-D graphs, or what should I do?
If you want to draw the PCs and the data, then you'll have to draw multiple graphs. Or you can take the projections from the first 4 PCs and use them as input to a dimension reduction algorithm like t-SNE: ua-cam.com/video/NEaUSP4YerM/v-deo.html
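A minimal sketch of the second option with toy data (assumes sklearn's TSNE; perplexity must be smaller than the number of samples):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(5)
data = rng.normal(size=(50, 20))                  # 50 samples, 20 features

pcs = PCA(n_components=4).fit_transform(data)     # keep the first 4 PCs
embedded = TSNE(n_components=2, perplexity=10,
                random_state=0).fit_transform(pcs)
print(embedded.shape)   # (50, 2)
```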
Please post some intuitions on sparse deconvolution and compressive sensing..Would love to understand your approach..❤️
BAM!!! I understood what you said, and I'm grateful. But I have a query.
I am confused about which to treat as rows and which as columns in my dataset.
My dataset concerns phasor measurement units (PMUs) used in the electrical grid, in the distribution lines we see around us.
A single PMU measures 21 electrical parameters per timestamp.
We use around four PMUs, each measuring the 21 parameters at different locations at the same time, continuously over a period of time.
How can I arrange the above data for performing PCA, sir?
The two cases you mentioned where PCA would work are what I am also interested in calculating, apart from the combination of all of the PMUs' timestamps.
Can you mention how to arrange the data (rows and columns) for both of the viable cases?
Thank you so much!! You are really awesome, sir.
How do we import all those libraries???
Do we have to download anything extra???
It probably depends on what Python you have. I believe I used Anaconda which comes with all of these libraries.
wt and ko are chosen randomly from the same distribution, so why are they so different from the perspective of PCA?
They use the same distribution, but different parameters for that distribution. Specifically, they use different values for lambda.
Oh, I see! Thank you so much, Josh. I watched your videos from time to time in the past, and a lot more recently, and I'm always amazed at how extremely talented you are at teaching and explaining things!! Do you have somewhere that I can show some appreciation (aka pay tuition) if I don't plan to buy anything? @@statquest
@@koalaggcc There are lots of ways to support StatQuest. Here's a link that describes them all: statquest.org/support-statquest/
Is scaling to be done for both the test and train datasets?
Yes.
loading_scores = pd.Series(pca.components_[0], index=genes)
What should I write in place of genes?
If you changed the index, as described at 3:17, you should probably use the same thing you changed it to.
Incredible French accent on "Poisson distribution"! I watched it three times 😆
:)
Hi Joshua, thanks for that, really helpful. I'm quite new to Python myself, and I'm trying to compile a PCA across a range of macro-economic factors (inflation, GDP, FX, policy rate, etc.). In all that you've done above, where is the display of the PCA, i.e. the newly uncorrelated dataset? Is it the loading scores you printed, or the wt and ko variables you plotted? Thanks
Generally, in ML, we use columns as features (variables) and rows as examples, but in the video it is the reverse. It's not a big deal, though.
It depends on the field you are in. I used to work in Genetics and this is the format they used. So it's always worth checking to make sure you have the data correctly oriented.
Thank you very much for this tutorial. Please, can you explain how to get the correlation matrix?
With numpy, you use corrcoef().
@@statquest Thank you very much
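For example, a minimal sketch with two toy variables (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100)
y = 2 * x + rng.normal(scale=0.1, size=100)   # y is strongly correlated with x

corr = np.corrcoef(x, y)    # 2x2 correlation matrix; diagonal entries are 1
print(corr)
```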
Hi Josh, thank you for your efforts.
StatQuest really is a magnificent channel.
Could you please make a video on Singular Value Decomposition (SVD)?
Thanks
Is it necessary to scale the data? Because sometimes a variable might have a std near 0, which generates NaNs.
You don't have to scale the data, but it is highly recommended. For more details why scaling is important, see this StatQuest: ua-cam.com/video/oRvgq966yZg/v-deo.html
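A minimal sketch of why scaling matters (toy data; two independent variables on very different scales). As a side note on the near-zero-std concern above: sklearn's StandardScaler leaves constant columns at zero rather than producing NaNs, whereas a hand-rolled z-score would divide by zero.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Two independent variables on wildly different scales
data = np.column_stack([rng.normal(scale=1000.0, size=50),
                        rng.normal(scale=1.0, size=50)])

unscaled = PCA().fit(data).explained_variance_ratio_
scaled = PCA().fit(StandardScaler().fit_transform(data)).explained_variance_ratio_

# Without scaling, PC1 is dominated by the large-scale variable
print(unscaled[0], scaled[0])
```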