Machine Learning Tutorial Python - 19: Principal Component Analysis (PCA) with Python Code

  • Published 21 Sep 2024

COMMENTS • 168

  • @codebasics
    @codebasics  2 years ago +6

    Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced

  • @mayishamaliha6369
    @mayishamaliha6369 1 year ago +4

    Super helpful for newbies, not scaring them off or overwhelming them with too many statistical terms. Thank you so much.

  • @AqsaChappalwala
    @AqsaChappalwala 4 months ago +1

    Doing a Masters in Data Science in the UK and I still love watching your videos :-)

  • @Rainbow-lj5pp
    @Rainbow-lj5pp 5 months ago +1

    This is a really easy-to-understand and thorough explanation of principal component analysis. Many others I watched were either too technical and math-theory oriented, or too basic, showing how to use the function but not what it does. This is a great balance of understanding and practicality.

  • @rakeshbullet7363
    @rakeshbullet7363 1 year ago +10

    Awesome videos - simple explanations. A balanced approach to teaching, with the right mixture of theory and practice, that doesn't overwhelm the learners. I loved the approach - after seeing numerous ML training videos from across the spectrum, this is by far the best one I have seen. Thank you for taking the time to create these videos.

  • @dushyyanta5305
    @dushyyanta5305 2 years ago +4

    You are the best! I am doing a PG in DS, but still I watch your videos for better understanding. Kudos! Keep it up!

  • @ernestanonde3218
    @ernestanonde3218 2 years ago +5

    This is the best channel on YouTube. You are simply amazing. You just saved my career. Thanks a million.

  • @mohitupadhayay1439
    @mohitupadhayay1439 2 years ago +2

    The last few minutes were BANG ON! This is what I wanted to hear. Thanks!

  • @bhaskarg8438
    @bhaskarg8438 2 years ago +3

    Thank you, the PCA concept is clearly explained.
    I need to understand what we prioritize in actual real-life scenarios: performance or processing time.

  • @luciamatamorospava4382
    @luciamatamorospava4382 11 months ago

    It's like the 10th video I'm watching on PCA and the FIRST one I understand, thank you so much!

  • @prakashkoneti7630
    @prakashkoneti7630 1 year ago +1

    I really appreciate your hard work in making these videos and making the complex easy.

  • @swagsterfut9992
    @swagsterfut9992 2 years ago +13

    At 17:35, shouldn't we be doing pca.fit_transform() on our scaled dataset (X_scaled in our case) rather than on X?

    • @geoafrikana
      @geoafrikana 2 years ago +1

      This came to my mind also.
      Perhaps the accuracy would have been higher if he had scaled before PCA.

    • @anirudhgangadhar6158
      @anirudhgangadhar6158 2 years ago +2

      Yes, it should be on X_scaled.
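
A minimal sketch of the scaled workflow these commenters are suggesting (assuming the digits dataset from the video; variable names are illustrative):

    from sklearn.datasets import load_digits
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    X, y = load_digits(return_X_y=True)

    # Standardize first so no single feature dominates the variance
    X_scaled = StandardScaler().fit_transform(X)

    # Then fit PCA on the *scaled* data, keeping 95% of the variance
    pca = PCA(0.95)
    X_pca = pca.fit_transform(X_scaled)
    print(X_pca.shape)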

  • @maruthiprasad8184
    @maruthiprasad8184 2 years ago +1

    Thank you very much for the simple and great explanation. I got the highest accuracy with SVM = 86.74%; after PCA I got the best accuracy with RF = 73.06%.

  • @akinloluwababalola6666
    @akinloluwababalola6666 2 years ago +9

    Hello Code basics. I usually enjoy your videos, as I learn a lot from them. Can you make a video on association rules, the apriori algorithm, and any machine learning model that deals with determining interrelationships among variables? Thank you.

  • @BG4INDIA
    @BG4INDIA 1 year ago

    Impressed with the clarity of the explanation.

  • @mukeshkumaryadav350
    @mukeshkumaryadav350 2 years ago

    It was an amazing explanation of PCA without much mathematics or the eigenvalues and eigenvectors that scare me. Interesting learning: we can know the variance explained by each PC, which helps.

  • @guillermokinderman8267
    @guillermokinderman8267 1 year ago

    I was trying to understand PCA, and this video helped me a lot.

  • @K.Charz0
    @K.Charz0 2 years ago +2

    Thanks sir for the great work, your explanation makes ML easier for sure 🙏

  • @gnaneshgn8341
    @gnaneshgn8341 2 years ago +6

    Nice video, sir. Please make a video on the math behind PCA. Thanks in advance, sir.

    • @geekyprogrammer4831
      @geekyprogrammer4831 2 years ago +1

      ua-cam.com/video/FgakZw6K1QQ/v-deo.html should be sufficient if you want to learn the mathematics.

    • @shubhanshugupta9754
      @shubhanshugupta9754 2 years ago +1

      You can get the best number of PCs by taking log(total features).

  • @sevicore
    @sevicore 2 years ago +5

    I have some questions:
    1) If you use PCA data that has been scaled before doing any train/test split... wouldn't it cause data leakage?
    2) Shouldn't the target be dropped?

    • @hrithiksarma1204
      @hrithiksarma1204 2 years ago

      I had the same doubt; have you got any update on this?
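
To avoid the leakage raised in this thread, a common pattern is to split first and let a pipeline fit the scaler and PCA on the training fold only; a sketch under those assumptions:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    X, y = load_digits(return_X_y=True)  # X holds features only; the target y never enters PCA
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=30)

    # fit() learns the scaling and projection from X_train alone;
    # score() applies those same learned transforms to X_test.
    model = make_pipeline(StandardScaler(), PCA(0.95), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))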

  • @talkingbirb2808
    @talkingbirb2808 7 months ago

    I would add that reducing the number of columns should help with overfitting.

  • @ogobuchiokey2978
    @ogobuchiokey2978 1 year ago +4

    Your videos have helped me complete my MSc research. Thank you for being a great teacher. I do have a question: during the explanation you said we should always use PCA on the scaled data, but during implementation you used the unscaled data. Could you explain this?

    • @kreativeaman7688
      @kreativeaman7688 8 months ago

      I had the same question while following along.

    • @kreativeaman7688
      @kreativeaman7688 8 months ago

      I tried using PCA on the scaled data and used it with SVM, logistic regression, and random forest classifiers, but the results were almost the same as using the unscaled data with PCA.

  • @TK-fx8dh
    @TK-fx8dh 2 years ago

    My long-awaited topic!!!! Thank you for posting this PCA lesson.

  • @boogersincoffee
    @boogersincoffee 2 years ago +1

    Ahhhhhh I've been struggling to understand this and this cleared everything up, thank you

  • @slainiae
    @slainiae 6 months ago

    Highest accuracy 0.8729 with SVM (linear) and with PCA n_components = 11.

  • @asamadawais
    @asamadawais 1 year ago

    I am watching this video for the 2nd or 3rd time. @Dhaval you are the best among equals...👍👍

  • @nastaran1010
    @nastaran1010 8 months ago +1

    Hi. I have a question: when you performed PCA, why did you give X as the input rather than X_scaled?

  • @richardshaw8326
    @richardshaw8326 6 months ago

    Great explanation of PCA. @codebasics: I must have missed it, but after running PCA to identify which features give the results, where does one actually get those features?

  • @nikhilanand9022
    @nikhilanand9022 1 year ago +1

    Here are my two questions for you:
    1 - Why don't you scale the target column (y)?
    2 - For the accuracy score, why don't you compare actual vs. predicted? You pass X_test and y_test to score; why not y_pred and y_test?

    • @vanshoberoi2154
      @vanshoberoi2154 1 month ago

      1 - y is the target; it doesn't influence training the way the X inputs do. The raw values of y are used to compute errors (e.g., loss functions) directly, so scaling y is generally unnecessary and could alter the model's predictions in unintended ways.
      2 - As sir has previously explained, when you use accuracy_score you pass y_pred and y_test, but model.score takes X_test and y_test and internally converts X_test into y_pred.
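
A quick runnable sketch of the score equivalence described in this reply (illustrative setup):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=30)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # model.score predicts internally from X_test...
    score_a = model.score(X_test, y_test)

    # ...which matches predicting first and comparing explicitly.
    y_pred = model.predict(X_test)
    score_b = accuracy_score(y_test, y_pred)
    print(score_a == score_b)  # True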

  • @dees900
    @dees900 1 year ago

    Great explanation of PCA. It's an abstract concept to grasp. Well done.

  • @anirudhgangadhar6158
    @anirudhgangadhar6158 2 years ago

    Highest accuracy: SVM - 85.83%, after PCA (3 PCs), accuracy was 83.87%. For all 3 models, accuracy slightly (

  • @leamon9024
    @leamon9024 1 year ago +1

    Thanks for this amazing tutorial. Hope you could do a video about when to use feature selection vs. feature extraction, or even a combination of them.

  • @jinks3669
    @jinks3669 2 years ago

    Another very informative video.
    DHANYAVAAD (thank you)! :)

  • @arjunprashanth7824
    @arjunprashanth7824 5 months ago +3

    Shouldn't X_scaled be passed to the pca.fit_transform() method? Because if you're passing X, there's no point in our having done the scaling, right?

    • @AnasAhmadKhan-me7ur
      @AnasAhmadKhan-me7ur 3 months ago

      Exactly, I was about to ask the same question.

    • @MinhTamNguyen-u7b
      @MinhTamNguyen-u7b 3 months ago +1

      I tried both. When passing X to PCA without scaling, I got a higher score. But you're right, I also believe X_scaled should be passed for a fair comparison.

  • @DrizzyJ77
    @DrizzyJ77 4 months ago

    Thank you code basics❤

  • @ogochukwustanleyikegbo2420
    @ogochukwustanleyikegbo2420 1 year ago

    After completing the assignment, I got a best score of 0.85 with the SVM RBF kernel, and after PCA my best score dropped to 0.68, still with the SVM RBF kernel.

  • @Maniclout
    @Maniclout 2 years ago

    Amazing explanation, I understand PCA now.

  • @ahsanurrahman8915
    @ahsanurrahman8915 1 year ago +1

    Very nicely described! I have a question:
    In your example, PCA(0.95) reduces the dimension to 29. But how do we know which dimensions it picked? I am asking because I want to use PCA to determine the principal drivers of the targets.

    • @akashbhargava906
      @akashbhargava906 9 months ago

      Hey buddy, PCA doesn't pick any existing dimension. It creates new dimensions which, to the naked eye, won't make much sense to you.
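
As this reply says, the PCs are new axes rather than picked features, but pca.components_ records how much each original feature contributes to each PC, so the loadings can be inspected; a sketch, assuming the digits data from the video:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    X, y = load_digits(return_X_y=True)
    pca = PCA(0.95).fit(StandardScaler().fit_transform(X))

    # components_ has shape (n_components, n_original_features);
    # each row holds the weights (loadings) of the original features in one PC.
    loadings = pca.components_
    top5 = np.argsort(np.abs(loadings[0]))[::-1][:5]
    print("Original features contributing most to PC1:", top5)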

  • @punnarahul4068
    @punnarahul4068 2 years ago +1

    Great, looking forward to more videos, Dhaval bhai.

  • @namantiwari8251
    @namantiwari8251 1 year ago +1

    Sir, can you please tell which features it reduces? How can I get those particular selected (reduced) features as output?

  • @mohammadhosseinkazemi8558
    @mohammadhosseinkazemi8558 1 year ago +1

    Thank you for the video. I have one question, though: shouldn't we first split the data into training and test sets, and then scale each set using StandardScaler(), RobustScaler(), etc.?

  • @sarangali4595
    @sarangali4595 2 years ago

    Sir, also make a video on how PCA actually works and what information we can gain from the loadings, like how these features affect the label.

  • @AkaExcel
    @AkaExcel 2 years ago

    @codebasics Thank You for Teaching and helping us!

  • @krishnadaskv2197
    @krishnadaskv2197 2 years ago

    I am getting around an 80% score when using PCA(0.99999) in the exercise, which is higher than the score before using PCA, and I am also getting a better score without removing outliers.

    • @codebasics
      @codebasics  2 years ago +2

      That’s the way to go kv, good job working on that exercise

  • @jjanna07751
    @jjanna07751 2 years ago

    Thank you... PCA explained very easily.

  • @youktinathbhowmick4673
    @youktinathbhowmick4673 2 years ago +2

    Thanks for the explanation. I have one question: when you do PCA, you take the whole dataset, and only after that do you do the train/test split. Isn't that a bit problematic? Also, if I do PCA on the train data, can the same PCA be applied to the test data? Is there any way to store the PCA transformation so it can be applied to the test data?

    • @hrithiksarma1204
      @hrithiksarma1204 2 years ago

      I had the same doubt; have you got any update on this?
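
A sketch of the fit-on-train, transform-on-test pattern asked about here, with the fitted PCA persisted via joblib (the file name is illustrative):

    import joblib
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.decomposition import PCA

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=30)

    pca = PCA(0.95)
    X_train_pca = pca.fit_transform(X_train)  # learn the projection from train only
    X_test_pca = pca.transform(X_test)        # reuse that same projection on test

    joblib.dump(pca, "pca.joblib")            # store the transformation...
    pca_later = joblib.load("pca.joblib")     # ...and reload it to transform new data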

  • @tobe7602
    @tobe7602 2 years ago +1

    Hi, good tutorial. I think you should use X_train in pca.fit_transform and not X. Thanks.

  • @mansijswarnkar4389
    @mansijswarnkar4389 2 years ago

    Wonderful, as always - thanks for making this video, it has helped me a lot! Regards

  • @Ooo12376
    @Ooo12376 2 years ago +1

    Please also explain the math behind it. You get questions on the math behind PCA in interviews; people ask for the derivation of PCA.

  • @anirudh7150
    @anirudh7150 6 months ago

    Thank you Sir. It was really helpful.

  • @souhamahmoudi7745
    @souhamahmoudi7745 1 year ago

    Thanks for sharing, it's highly appreciated

  • @anonymous-bi6ul
    @anonymous-bi6ul 2 months ago

    Why didn't you use X_scaled as the parameter to the fit_transform function of PCA?

  • @MohammadYs77
    @MohammadYs77 2 years ago

    Very informative and practical.

  • @tamirat9797
    @tamirat9797 7 months ago

    Thank you 🙏

  • @LamNguyen-jp5vh
    @LamNguyen-jp5vh 2 years ago

    Hi, I just want to ask why we use StandardScaler instead of MinMaxScaler in the lecture (not the exercise). Thank you so much for your help!

  • @kalluriyaswanthkumar2275
    @kalluriyaswanthkumar2275 1 year ago

    Sir, you said that we should scale before PCA, but in the code you are applying PCA to the non-scaled data.

  • @ShanthoshKumaarSomiRajesh
    @ShanthoshKumaarSomiRajesh 8 months ago +1

    I have a query. You said we should pass the data to PCA after scaling, but you passed the original X instead of X_scaled. Why?

    • @dhineshv2590
      @dhineshv2590 27 days ago

      Since we are using grayscale images, the pixel values are already roughly on a common scale.

  • @self.__osman
    @self.__osman 2 years ago +1

    Hi. I might not be making any sense here, but I wanted to know if the same thing could be achieved with entropy and information gain. Information gain gives a number for how much information or importance a feature carries, so in theory we could remove all the features with really low information gain. I think this would work better with discrete data. I don't know if it already exists; if it does, what method does this? If it doesn't, is this solution practical?
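
The idea in this comment does exist: scikit-learn offers mutual-information feature selection, which scores and keeps original features (unlike PCA, which creates new axes; note that it is supervised); a sketch on the digits data:

    from sklearn.datasets import load_digits
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    X, y = load_digits(return_X_y=True)

    # Score each original feature by its mutual information with the target,
    # then keep only the 20 most informative features.
    selector = SelectKBest(mutual_info_classif, k=20)
    X_reduced = selector.fit_transform(X, y)
    print(X_reduced.shape)  # (1797, 20)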

  • @purebackend1993
    @purebackend1993 2 years ago

    You killed it, amazing!

  • @sarangali4595
    @sarangali4595 2 years ago

    Sir, please also make a video on how to find relationships using descriptive techniques.

  • @mr.luvnagpal7407
    @mr.luvnagpal7407 2 years ago

    Thank you so much for this amazing video

  • @manjularathore1076
    @manjularathore1076 2 years ago

    You are absolutely amazing.

  • @lostcat7494
    @lostcat7494 1 year ago

    thank you so much

  • @nriezedichisom1676
    @nriezedichisom1676 7 months ago

    Thank you

  • @salahmahmoud2119
    @salahmahmoud2119 1 year ago

    You are the best!!! 👏

  • @ujjwalchetan4907
    @ujjwalchetan4907 1 month ago

    Thanks.

  • @sohailshaikh786
    @sohailshaikh786 1 year ago

    Thanks

  • @siddheshmhatre2811
    @siddheshmhatre2811 1 year ago

    Thanks ❤

  • @BG4INDIA
    @BG4INDIA 1 year ago +1

    Hi Mr. Dhaval, I am so thankful for such a good, informative video. Like "ogobuchiokey2978", I also wanted to know whether there is a specific reason for not selecting X_scaled while fitting the PCA. In the above demo, if I fit raw X I get 29 new PCA features, but if I fit scaled X I get 40 new PCA features.
    Similarly, in your exercise, if I fit scaled X I get 10 features (only 1 attribute is reduced) with an accuracy of 85%, and if I fit raw X I get 2 attributes, but accuracy dips to 69% (Random Forest).
    I believe this depends on the data as well.

  • @usamaalicraft3646
    @usamaalicraft3646 2 years ago +1

    Thanks sir 😊😊

  • @pateltapasvi7277
    @pateltapasvi7277 9 months ago

    How can I get the selected features in a DataFrame along with their feature names, instead of the numbers 1, 2, 3, etc.?

  • @albertoachavalrodriguez2461
    @albertoachavalrodriguez2461 2 years ago

    Great video!

  • @aaditya1267
    @aaditya1267 1 year ago

    Nice explanation!!

  • @RiteshKumar-yv8nx
    @RiteshKumar-yv8nx 2 months ago

    Why didn't you normalise y (i.e., dataset.target)?

  • @babalolamayowamercy186
    @babalolamayowamercy186 2 years ago

    Nice video
    Thank you

  • @Cat_Sterling
    @Cat_Sterling 1 year ago

    Should you scale the data before PCA?

  • @girishtripathy3354
    @girishtripathy3354 2 years ago

    Isn't the number of dimensions to reduce to just another hyperparameter? For 2 dimensions, yes, you can visualize. For > 2, visualization is not possible. How do you decide what dimension to reduce your dataset to?
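
One common way to choose, per this question, is to inspect the cumulative explained variance instead of tuning the dimension blindly; a sketch on the digits data:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    X, y = load_digits(return_X_y=True)
    pca = PCA().fit(X)  # keep every component just to inspect its variance share

    # Cumulative share of variance explained as components are added
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    n_for_95 = int(np.argmax(cumulative >= 0.95)) + 1
    print(f"{n_for_95} components explain 95% of the variance")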

  • @MonilModi10
    @MonilModi10 1 year ago

    Why does PCA rotate the axes? What is the significance of that?

  • @Lnd2345
    @Lnd2345 7 months ago

    I'm confused when you say it explains the most variation in the data relevant to the target variable. PCA doesn't know what the target variable is; it just looks at the data as a whole and decides what explains the most variation within it. Am I right?

    • @chanduvenna
      @chanduvenna 3 months ago +1

      Yeah, you are right. PCA looks at the data and calculates the principal components. It doesn't know the target variable.

  • @vanshoberoi2154
    @vanshoberoi2154 1 month ago

    Doesn't 0.95, i.e. 95% retention, mean that 60 out of 64 features should have been retained? Why/how is it 25?

  • @mayank66
    @mayank66 1 year ago

    amazing

  • @HT-xt4cn
    @HT-xt4cn 2 months ago

    What was the purpose of scaling X at 14:18?

  • @makoriobed
    @makoriobed 2 months ago

    Is it X_pca = pca.fit_transform(X) or X_pca = pca.fit_transform(X_scaled)?

  • @farahamirah2091
    @farahamirah2091 1 year ago

    I have a question: we can train a model using PCA, but what about an imbalanced dataset? Don't we need to handle the imbalance?

  • @suriyaprakashgopi
    @suriyaprakashgopi 11 months ago

    nicely done

  • @zainnaveed267
    @zainnaveed267 2 years ago

    Sir, I have a question:
    how can one predict target values when PCA creates all-new columns based on its own calculations?

  • @tigrayrimey6418
    @tigrayrimey6418 2 years ago

    Nice points.

  • @bommubhavana8794
    @bommubhavana8794 2 years ago

    Hello, I have newly started working on a PCR project. I am stuck at a point and could really use some help, asap.
    Thanks a lot in advance.
    I am working in Python. We created a PCA instance using PCA(0.85) and transformed the input data.
    We ran a regression on the principal components explaining 85 percent of the variance (say N components), so we have a regression equation in terms of N PCs. We took this equation and tried to express it in terms of the original variables.
    Then, to QC those coefficients, we took the N components (85% variance), derived the data back from them, and ran a regression on this reconstructed data, hoping it would give the same coefficients and intercept as the equation derived above.
    The issue is that the coefficients do not match when we take N components, but when we take all the components the coefficients and intercept match exactly.
    Also, the R-squared value and the predictions from the two equations are exactly the same even though the coefficients don't match.
    I am so confused as to why this is happening; I might be missing something about PCA. Any help is greatly appreciated. Thank you!
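
For the PCR question above: regression on data reconstructed from only N components is rank-deficient, so its coefficients are not unique even though its predictions agree; mapping the PC-space coefficients back through the loadings is a more direct QC. A sketch under those assumptions (synthetic data, illustrative names):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=200)

    pca = PCA(0.85)
    X_pca = pca.fit_transform(X)
    reg = LinearRegression().fit(X_pca, y)

    # X_pca = (X - mean_) @ components_.T, so the equation in the original
    # variables is y ~ X @ beta_orig + intercept_orig, with:
    beta_orig = pca.components_.T @ reg.coef_
    intercept_orig = reg.intercept_ - pca.mean_ @ beta_orig

    # Both parameterizations give identical predictions
    print(np.allclose(reg.predict(pca.transform(X)),
                      X @ beta_orig + intercept_orig))  # True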

  • @nyangwindicollins1018
    @nyangwindicollins1018 2 years ago

    Superb

  • @ajaxx627
    @ajaxx627 2 years ago

    Please, I have a problem with some work.
    I was given a list of about 200 different words, and I'm meant to write code that generates groups of 3 random words.
    E.g. wordlist = [a, b, c, d, e, ................ z]
    The output should be: a, d, z
    c, o, x
    and so on.
    Please, how do I do it?
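
For this side question, random.sample from the standard library draws distinct items; a minimal sketch (the wordlist is illustrative):

    import random

    wordlist = ["apple", "brick", "cloud", "delta", "ember", "flint"]  # ...imagine ~200 words

    # Print a few groups of 3 distinct, randomly chosen words
    for _ in range(3):
        print(", ".join(random.sample(wordlist, 3)))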

  • @sharmilasenguptachowdhry509
    @sharmilasenguptachowdhry509 2 years ago

    Thanks very much! Can you please help explain eigenvalues and eigenvectors from the data science perspective? Thanks again.

  • @taimoorneutron2940
    @taimoorneutron2940 2 years ago

    Hello sir, I am 27 now and my masters is in progress.
    Sir, I have teaching experience,
    but now I want to start my career in machine learning or data science.
    Is that possible? Every company needs fresh newcomers, so what should I do?

  • @FutureAIDev2015
    @FutureAIDev2015 1 year ago

    I have no idea where to start on the exercise or even what "z-score" means for getting rid of outliers.

    • @slainiae
      @slainiae 6 months ago +1

      Check out video #41 in this series; it teaches everything about z-scores.
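
A sketch of z-score outlier removal for the exercise (synthetic data with one planted outlier; the |z| < 3 threshold is the usual convention):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    heights = np.append(rng.normal(5.5, 0.3, size=50), 9.9)  # 9.9 is the outlier
    df = pd.DataFrame({"height": heights})

    # z-score: how many standard deviations a value sits from the column mean
    z = (df - df.mean()) / df.std()

    # Keep rows where every column's |z| stays below 3
    df_clean = df[(np.abs(z) < 3).all(axis=1)]
    print(len(df), "->", len(df_clean))  # expect 51 -> 50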

  • @gulnawaz9670
    @gulnawaz9670 2 years ago

    Hi Sir, very informative video.
    I have a problem:
    I loaded a local dataset, and when I use the code
    dataset.keys()
    it shows
    Index(['Unnamed: 0', 'Flow ID', ............
    Now at
    pd.DataFrame(dataset.data, columns=dataset.feature_names)
    it shows an error; even when I changed data to Unnamed, the same problem occurs:
    AttributeError: 'DataFrame' object has no attribute 'data'
    Waiting for your kind reply.
    Thanks.
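
For this error: .data and .feature_names exist on the Bunch object returned by scikit-learn's loaders, not on a DataFrame read from a local file, which is already a DataFrame. A sketch of both cases (the CSV file name is illustrative):

    import pandas as pd
    from sklearn.datasets import load_digits

    # scikit-learn loaders return a Bunch, which has a .data attribute
    bunch = load_digits()
    df1 = pd.DataFrame(bunch.data)

    # A local CSV is read straight into a DataFrame -- use it directly
    df2 = pd.read_csv("my_dataset.csv")
    X = df2.drop(columns=["Unnamed: 0"])  # drop the stray index column if present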

  • @VickyKumar-dk6rd
    @VickyKumar-dk6rd 2 years ago

    The feature_names column is now not showing up in the load_digits() dataset.

  • @MyManiratnam
    @MyManiratnam 2 years ago

    Hi, I have seen your videos on PCA; they are really informative and your explanation is really cool. I have a doubt: we apply PCA on the dataset and later fit a model, for example a classification model for a classification problem. My doubt is, after building the model and validating it with the test set, if I have a new observation, i.e. a new row in the dataset, how do I predict its label?

    • @kirubakaran6145
      @kirubakaran6145 9 months ago

      Hello Maniratnam, have you got the answer to the above question?

  • @wangjessica1275
    @wangjessica1275 6 months ago

    How do I interpret PCA results in regression?

  • @krishnapatel8852
    @krishnapatel8852 1 year ago

    Hello, if I want to visualize this data in 3D, what will the z-axis be?

  • @jayuchawla1892
    @jayuchawla1892 2 years ago

    You applied PCA on the unscaled dataframe, whereas in the theory you explained that we need to apply it on the scaled dataframe.

  • @einnairo
    @einnairo 1 year ago

    I have a question. After going through scaling and then PCA, the features are now all different from the original values between 0 and 16. When I have a new digit to classify and am provided with the same 64 features, how do I make this new prediction?
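
For this last question: keep the fitted scaler and PCA, and push the new 64-feature row through those same objects before predicting; a sketch under the video's digits setup:

    from sklearn.datasets import load_digits
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    X, y = load_digits(return_X_y=True)

    scaler = StandardScaler().fit(X)
    pca = PCA(0.95).fit(scaler.transform(X))
    model = LogisticRegression(max_iter=1000).fit(pca.transform(scaler.transform(X)), y)

    # A new digit arrives as the same 64 raw features (values 0-16):
    new_digit = X[[0]]                                    # stand-in for a genuinely new sample
    new_pca = pca.transform(scaler.transform(new_digit))  # same scaling, same projection
    print(model.predict(new_pca))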