How to implement PCA (Principal Component Analysis) from scratch with Python

  • Published 29 Jan 2025

COMMENTS • 28

  • @pragyantiwari3885
    @pragyantiwari3885 3 months ago +3

    To make the PCA code more robust, also check that the n_components value does not exceed the number of eigenvectors available; otherwise the result is meaningless.
    Here is the updated version, with the explained variance ratio method too:
    # assumption: the data is already standardized
    import numpy as np

    class PCA:
        def __init__(self, n_components):
            self.n_components = n_components

        def fit(self, X):
            cov_matrix = np.cov(X.T)
            # NumPy >= 1.25: eig returns a namedtuple (eigenvalues, eigenvectors)
            eigens = np.linalg.eig(cov_matrix)
            eigenvectors = eigens.eigenvectors
            eigenvalues = eigens.eigenvalues
            # max eigenvectors we can retrieve
            self.max_components = eigenvectors.shape[0]
            if self.n_components > self.max_components:
                raise ValueError("n_components cannot exceed the number of eigenvectors")
            # sort by descending eigenvalue and keep the top n_components as rows
            order = np.argsort(eigenvalues)[::-1]
            self.components = eigenvectors[:, order[:self.n_components]].T
            # explained variance ratio of the kept components
            self.explained_variance_ratio = eigenvalues[order][:self.n_components] / eigenvalues.sum()

  • @luis96xd
    @luis96xd 2 years ago +1

    Wow, amazing video of the course! I liked the theory part and how it is implemented with numpy 😄👍
    It was all well explained, thanks! 😄👏💯😁

  • @business_central
    @business_central 1 year ago +2

    All the ones explained by the girl are very clearly explained and walked through; this guy seems like he just wants to be done and isn't really explaining much at all.

  • @martinemond1207
    @martinemond1207 11 months ago

    How would you go about reconstructing the original data from the X_projected based on PC1 and PC2, which kept only 2 dimensions from the original 4 dimensions?
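    One way to sketch an answer (this is an assumption about how the video stores things, not its exact variable names): project back with the transposed components and re-add the mean. With only 2 of the 4 components kept, the reconstruction is necessarily an approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))            # stand-in for the 4-feature data
mean = X.mean(axis=0)
Xc = X - mean

# fit: eigen-decomposition of the covariance matrix, keep the top 2 components
cov = np.cov(Xc.T)
eigenvalues, eigenvectors = np.linalg.eig(cov)   # values first, then vectors
order = np.argsort(eigenvalues)[::-1]
components = eigenvectors[:, order[:2]].T        # shape (2, 4), rows are PC1, PC2

X_projected = Xc @ components.T                  # shape (150, 2)

# reconstruction: map back to 4-D and re-add the mean (lossy with 2 components)
X_reconstructed = X_projected @ components + mean
```

    This is the same operation scikit-learn performs in PCA.inverse_transform.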

  • @pranavgandhiprojects
    @pranavgandhiprojects 1 year ago

    Loved the video... thanks man

  • @kachunpang7543
    @kachunpang7543 1 year ago +4

    Hi, I am wondering about the output of 'np.linalg.eig(cov)' in line 20. According to the NumPy documentation, the first output is the eigenvalues and the second is the set of eigenvectors stored in a matrix. However, in line 20 you swap the names between eigenvectors and eigenvalues but still get a reasonable-looking plot after PCA. Could someone explain this part to me? Thanks.

    • @dylan.savoia
      @dylan.savoia 1 year ago

      Great observation, and I think you're right. I've run the code with the two variables swapped - i.e. eigenvalues, eigenvectors = np.linalg.eig(cov) - and you get a different plot. Multiplying a matrix and a vector shouldn't even work when the dimensions don't match, but I suspect NumPy's implicit broadcasting in the np.dot call inside the transform() method (line 35) makes the operation go through anyway.
      TL;DR: NumPy doesn't raise an error, but the result you get is in fact wrong.
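      The documented return order is easy to confirm with a small sketch: NumPy returns the eigenvalues first, then a matrix whose columns are the eigenvectors, and each column satisfies the eigenvalue equation.

```python
import numpy as np

cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])

# np.linalg.eig returns the eigenvalues first, then the eigenvector matrix
eigenvalues, eigenvectors = np.linalg.eig(cov)

# each COLUMN of `eigenvectors` satisfies cov @ v = lambda * v
for i in range(2):
    v = eigenvectors[:, i]
    assert np.allclose(cov @ v, eigenvalues[i] * v)
```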

    • @pragyantiwari3885
      @pragyantiwari3885 3 months ago

      See my comment on this video... I mentioned the solution there.

  • @yusmanisleidissotolongo4433
    @yusmanisleidissotolongo4433 9 months ago

    Thanks so much for sharing.

  • @michelebersani7294
    @michelebersani7294 8 months ago

    Good morning, this playlist is amazing and I had been searching for it for several weeks. I have a question about the interpretation of the eigenvectors. Why do the eigenvectors of the covariance matrix point in the directions of maximum variance?

    • @MahmouudTolba
      @MahmouudTolba 5 months ago

      Take a unit vector and project the data onto it; the variance of that projection measures how much information the direction captures about the relationships between the dimensions of the data. Maximizing that projection variance under the unit-norm constraint with Lagrange multipliers gives Σv = λv - that is, the directions of maximum variance are exactly the eigenvectors of the covariance matrix.
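      The argument above can be checked numerically; here is a small sketch on centered synthetic data, comparing the projection variance along the top eigenvector with many other unit directions:

```python
import numpy as np

rng = np.random.default_rng(1)
# correlated 2-D data, centered
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])
X = X - X.mean(axis=0)

cov = np.cov(X.T)
eigenvalues, eigenvectors = np.linalg.eig(cov)
top = eigenvectors[:, np.argmax(eigenvalues)]   # unit eigenvector, largest eigenvalue

var_top = (X @ top).var(ddof=1)

# the variance along random unit directions never beats the top eigenvector
for _ in range(100):
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)
    assert (X @ d).var(ddof=1) <= var_top + 1e-9
```

      The projection variance along the top eigenvector equals the largest eigenvalue; no other unit direction exceeds it.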

  • @ernestbonat2440
    @ernestbonat2440 2 years ago +3

    You should implement PCA with NumPy only. In fact, you should use NumPy everywhere possible; it is the fastest Python numerical library today. We should not teach based on some notion of student understanding - we should teach students with real Python production code so they can find a job. Everyone needs to pass job interviews.

    • @iDenyTalent
      @iDenyTalent 2 years ago

      stop talking grandpa

    • @projectaz77
      @projectaz77 2 years ago

      @@iDenyTalent

    • @badi1072
      @badi1072 4 months ago

      No, I'll not implement it. What will you do?

  • @MinhNguyen-cl9pq
    @MinhNguyen-cl9pq 1 year ago +1

    Line 19 seems to have a bug, as the return values should be swapped according to the NumPy documentation.

    • @shadow-x1s3c
      @shadow-x1s3c 1 month ago

      Yes. I don't know how he ran the code without a problem.

  • @ASdASd-kr1ft
    @ASdASd-kr1ft 1 year ago

    Nice video! But I have one doubt: why is there more variance in principal component 2 than in principal component 1? Is it because of the scale?

  • @thejll
    @thejll 1 year ago

    Could you show how to do PCA on a GPU?

    • @gokul.sankar29
      @gokul.sankar29 1 year ago +1

      You could try PyTorch: replace the NumPy arrays with PyTorch tensors and the NumPy functions with their PyTorch equivalents. You will have to read up a bit on how to use a GPU with PyTorch.
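      A minimal sketch of that suggestion (this assumes PyTorch is installed; the `pca_torch` helper name is made up, and the code falls back to CPU when no GPU is available):

```python
import torch

def pca_torch(X, n_components):
    # X: (n_samples, n_features) tensor, moved to the GPU when available
    device = "cuda" if torch.cuda.is_available() else "cpu"
    X = X.to(device)
    Xc = X - X.mean(dim=0)
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    # eigh: eigendecomposition for symmetric matrices, eigenvalues ascending
    eigenvalues, eigenvectors = torch.linalg.eigh(cov)
    # take the last n_components columns and reverse them -> descending order
    components = eigenvectors[:, -n_components:].flip(-1).T
    return Xc @ components.T

X = torch.randn(200, 4)
X_proj = pca_torch(X, 2)
```

      torch.linalg.eigh is preferable to eig here since the covariance matrix is symmetric, and it returns eigenvalues in ascending order, hence the flip.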

  • @eugenmalatov5470
    @eugenmalatov5470 1 year ago +2

    Sorry, the theory part did not explain anything to me

  • @igordemetriusalencar5861
    @igordemetriusalencar5861 2 years ago

    Excellent video and beautiful OOP Python programming - clean and easy to understand for a programmer. But OOP in data analysis is terribly ugly and unproductive, with a lot of unnecessary abstraction in classes and methods. The functional paradigm is far better for data analysis thanks to its simple (initial) concepts of data flow and functions that transform data; that way anyone who has learned general systems theory (managers, biologists, physicists, psychologists...) could follow. If you could do the same in a functional style (in Python, R, or Julia) it would be amazing!

  • @chyldstudios
    @chyldstudios 2 years ago +1

    You should implement PCA without using Numpy, just vanilla python (no external libraries). It's more pedagogically rigorous and leads to a deeper understanding.
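    For what it's worth, the first principal component can be computed in vanilla Python via power iteration on the covariance matrix; a rough sketch with no external libraries (the `top_component` helper is a made-up name):

```python
import random

def top_component(rows, iters=500):
    """Power iteration on the sample covariance: returns the first PC as a unit vector."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    # sample covariance matrix (ddof = 1)
    cov = [[sum(a[i] * a[j] for a in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    random.seed(0)
    v = [random.random() for _ in range(d)]
    for _ in range(iters):
        # multiply by the covariance matrix and renormalize
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# points lying roughly on the line y = 2x, so PC1 should point along (1, 2)
data = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]]
pc1 = top_component(data)
```

    Power iteration only finds the dominant eigenvector; further components would need deflation, which is where NumPy starts earning its keep.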

  • @0MVR_0
    @0MVR_0 10 months ago

    > states 'from scratch'
    > proceeds to import numpy

    • @prithvimarwadi345
      @prithvimarwadi345 9 months ago

      Well, NumPy is just a mathematical computation tool; you are using it to make your life simpler. 'From scratch' means you are not using models already made by other people.

    • @0MVR_0
      @0MVR_0 9 months ago

      @@prithvimarwadi345 proceeds to import numpy.cov and numpy.linalg.eig and calls the method 'from scratch'

    • @HarshavardhanaSrinivasan-c2e
      @HarshavardhanaSrinivasan-c2e 7 months ago +1

      Are you asking to code from an assembly language standpoint?

    • @0MVR_0
      @0MVR_0 7 months ago

      @@prithvimarwadi345 I would argue that 'from scratch' means translating all the relevant mathematical equations into plain Python algorithms. Principal Component Analysis can be demonstrated through eigenvectors and linear algebra; relying on imports is honestly lazy when exemplifying the process.
      I am going to refuse to acknowledge the comment on assembly language.