The Moore-Penrose Pseudoinverse - Topic 37 of Machine Learning Foundations

  • Published 29 Sep 2024
  • This video introduces the Moore-Penrose pseudoinverse, a linear algebra concept that generalizes matrix inversion to non-square matrices. The pseudoinverse is a critical machine learning concept because it solves for the unknown variables in the non-square systems of equations that are common in machine learning. To show you how it works, we’ll use a hands-on code demo.
    There are eight subjects covered comprehensively in the ML Foundations series and this video is from the second subject, "Linear Algebra II: Matrix Operations". More detail about the series and all of the associated open-source code is available at github.com/jonkrohn/ML-foundations
    The next video in the series is: • Regression with the Ps...
    The playlist for the entire series is here: • Linear Algebra for Mac...
    This course is a distillation of my decade-long experience working as a machine learning and deep learning scientist, including lecturing at New York University and Columbia University, and offering my deep learning curriculum at the New York City Data Science Academy. Information about my other courses and content is at jonkrohn.com
    Dr. Jon Krohn is Chief Data Scientist at untapt, and the #1 Bestselling author of Deep Learning Illustrated, an interactive introduction to artificial neural networks. To keep up with the latest from Jon, sign up for his newsletter at jonkrohn.com, follow him on Twitter @JonKrohnLearns, and on LinkedIn at linkedin.com/in/jonkrohn
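    To make the pseudoinverse's role concrete, here is a minimal sketch of solving an overdetermined system with it (using the 3x2 matrix that recurs in the comments below; the right-hand side b is illustrative, not from the video):
    import torch

    # 3 equations, 2 unknowns: A has no exact inverse, but A+ exists
    A = torch.tensor([[-1., 2.], [3., -2.], [5., 7.]])
    b = torch.tensor([1., 2., 3.])  # illustrative right-hand side

    x = torch.linalg.pinv(A) @ b    # x = A+ b is the least-squares solution
    print(x)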

COMMENTS • 20

  • @Victor-ji1rz · 8 months ago +1

    import torch

    A_p = torch.tensor([[-1, 2], [3, -2], [5, 7.]])
    U, d, V = torch.svd(A_p)  # reduced SVD: U is 3x2, d has 2 entries, V is 2x2
    UT = torch.transpose(U, 0, 1)
    d_diag = torch.diag(d)  # 2x2 diagonal matrix of singular values
    d_plus = torch.inverse(d_diag)  # D+ is the inverse of the square diagonal matrix
    torch.matmul(V, torch.matmul(d_plus, UT))  # A+ = V D+ U^T

    • @Victor-ji1rz · 8 months ago

      This works, but can someone explain to me why the matrix U is not the same size as the one returned by the NumPy SVD method?
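      A likely explanation, sketched under the assumption of recent NumPy/PyTorch versions: numpy.linalg.svd returns the full SVD by default (full_matrices=True), whereas the deprecated torch.svd returns the reduced SVD by default, so the two U matrices have different shapes.
      import numpy as np
      import torch

      A = np.array([[-1, 2], [3, -2], [5, 7.]])

      U_np, d_np, VT_np = np.linalg.svd(A)           # full SVD: U_np is 3x3
      U_pt, d_pt, V_pt = torch.svd(torch.tensor(A))  # reduced SVD: U_pt is 3x2

      print(U_np.shape, U_pt.shape)  # (3, 3) vs torch.Size([3, 2])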

  • @literallynobody4840 · 1 month ago

    You have given it your best shot at explaining... but my brain is not able to catch up 😢

  • @bastianian2939 · 3 years ago +3

    Insanely helpful video. Hope my comment helps boost this in the YouTube algorithm!

  • @subhashmishra8665 · 1 year ago +1

    My frustration level at -2:09 🙂🙂🙂

  • @EmmanuelPeter-y4d · 2 months ago

    Thanks, Jon Krohn. Do you have a course teaching ML from the ground up? Thanks in anticipation of your response; I love your pedagogical skills.

  • @Grobulia1 · 3 years ago +2

    Thank you so much for your clear explanation! It would also be helpful if you posted a link to the Jupyter notebook that is shown in the video so that we could peruse it.

    • @JonKrohnLearns · 3 years ago

      You're most welcome, Ksenia! Glad you found my explanation of MPP helpful :)
      A link to the accompanying open-source code is provided in the video description and, separately, as a text overlay when the notebook is first brought up during the video. For convenience, here is the full URL: github.com/jonkrohn/ML-foundations/blob/master/notebooks/2-linear-algebra-ii.ipynb

  • @Jamming0ut · 2 years ago +3

    Excellent video, it has helped me in an incredible way. Greetings from Colombia.

    • @JonKrohnLearns · 2 years ago

      You're most welcome, Santiago! Glad you found this video helpful :)

  • @mohamedsamsudeensoofiba8382 · 7 months ago

    First of all, the video and the playlist are super helpful. One small thing I found is that the final output after doing the calculations with torch.svd does not match torch.pinverse(). But surprisingly, the pseudoinverse computed from the torch.linalg.svd() output does match torch.pinverse(). Is this expected?
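    One plausible cause, sketched below on the assumption that the two functions' return conventions were mixed up: torch.svd returns V, while torch.linalg.svd returns Vh (V already transposed). Handled consistently, both reproduce torch.pinverse().
    import torch

    A = torch.tensor([[-1., 2.], [3., -2.], [5., 7.]])

    U1, d1, V1 = torch.svd(A)                               # deprecated API: returns V
    U2, d2, Vh2 = torch.linalg.svd(A, full_matrices=False)  # newer API: returns Vh = V^T

    A_plus1 = V1 @ torch.diag(1. / d1) @ U1.T     # V used directly
    A_plus2 = Vh2.T @ torch.diag(1. / d2) @ U2.T  # Vh transposed back to V

    print(torch.allclose(A_plus1, torch.pinverse(A), atol=1e-5))  # True
    print(torch.allclose(A_plus2, torch.pinverse(A), atol=1e-5))  # True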

  • @theseusRJ7 · 2 years ago +1

    Hey, is it too far-fetched for me to try to recreate a linear regression algorithm on my own at this point?

    • @JonKrohnLearns · 2 years ago +1

      Certainly not! I think this video provides you with precisely a way to do it, as long as the dataset is not too large.
      In my "Calculus for ML" playlist (which I recommend undertaking after the Linear Algebra one you've been working through), we thoroughly detail how to create a linear regression algorithm using a machine learning approach that scales to any number of data points: ua-cam.com/play/PLRDl2inPrWQVu2OvnTvtkRpJ-wz-URMJx.html

  • @NavnilDas-o1n · 3 months ago

    The code in PyTorch is as follows:
    import torch

    A_pt = torch.tensor([[-1, 2], [3, -2], [5, 7]]).float()
    U_pt, d_pt, Vt_pt = torch.linalg.svd(A_pt)  # full SVD: U_pt is 3x3, Vt_pt is 2x2
    U_pt_T = U_pt.T
    V_pt = Vt_pt.T  # un-transpose Vt to recover V
    D_pt = torch.diag(d_pt)  # torch.diag, not np.diag, so everything stays a tensor
    D_plus_pt = torch.linalg.inv(D_pt)
    D_conc_plus_pt = torch.concatenate((D_plus_pt, torch.tensor([[0.], [0.]])), axis=1)  # pad D+ to 2x3
    A_plus_pt = torch.matmul(V_pt, torch.matmul(D_conc_plus_pt, U_pt_T))
    A_plus_pt
    I am getting the following result:
    tensor([[-0.0877, 0.1777, 0.0758],
    [ 0.0766, -0.1193, 0.0869]])
    Can somebody please tell me if I am correct?
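    One way to check the result yourself (a sketch using PyTorch's built-in pseudoinverse):
    import torch

    A_pt = torch.tensor([[-1, 2], [3, -2], [5, 7]]).float()
    print(torch.linalg.pinv(A_pt))  # compare against the hand-computed A_plus_pt above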

  • @etherioussanjudraganeel3163 · 3 years ago +1

    Man, you are the best. You don't know how much you helped me through this video. Thank you so much!

    • @JonKrohnLearns · 3 years ago

      YES! I am so happy to hear this. You are so welcome and I'm delighted to be able to help :D

  • @taraskuzyk8985 · 2 years ago

    Are there any resources on why pseudoinversion works so well as an ML rule with noisy data? (compared to something like Hebbian learning)

  • @ali-qq6cp · 2 years ago

    V transpose was not in the Moore-Penrose formula, but when you calculated, you used V transpose!!

    • @JonKrohnLearns · 2 years ago

      Yep, I explain this in the audio: The method we used to create V automatically creates "V transpose", so in order to create an untransposed V (for use in the Moore-Penrose formula), we need to transpose "V transpose"!
      That is why we annotated with "VT.T" in the code: We're taking "V transpose" and transposing it to make it plain old V.
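      For example, a minimal NumPy sketch of that step (assuming the same 3x2 matrix used elsewhere in this thread):
      import numpy as np

      A = np.array([[-1, 2], [3, -2], [5, 7.]])
      U, d, VT = np.linalg.svd(A)   # NumPy returns V already transposed (VT)

      # build D+ as a 2x3 matrix: reciprocal singular values on the diagonal, zero-padded
      D_plus = np.zeros(A.T.shape)
      D_plus[:len(d), :len(d)] = np.diag(1 / d)

      A_plus = VT.T @ D_plus @ U.T  # VT.T recovers plain V for A+ = V D+ U^T
      print(np.allclose(A_plus, np.linalg.pinv(A)))  # True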