Singular Value Decomposition: Data Science Basics

  • Published 30 Jun 2020
  • So ... what is the SVD and why is it so useful for data science?
    *Note*: At 4:06 I meant to say "since all the u vectors are orthogonal to each other, U'U = I holds". Linearly independent columns alone don't guarantee this property.
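    As a quick check of this note, here is a minimal NumPy sketch (using an arbitrary small matrix, not the example from the video): the U returned by an SVD routine has orthonormal columns, so U'U = I, while a matrix whose columns are merely linearly independent does not satisfy this.

```python
import numpy as np

# Any small matrix works here; this is an arbitrary example, not the one from the video.
M = np.random.rand(5, 3)
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# The columns of U are orthonormal, so U^T U = I.
print(np.allclose(U.T @ U, np.eye(3)))   # True

# A matrix with linearly independent (but not orthogonal) columns does not satisfy this.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.allclose(A.T @ A, np.eye(2)))   # False
```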

COMMENTS • 67

  • @VR-fh4im
    @VR-fh4im 3 years ago +14

    I looked all over for SVD for 3 hours, and your video in 10 minutes explained it, so nicely. Thanks.

    • @ritvikmath
      @ritvikmath  3 years ago +1

      no problem! happy to help

  • @subandhu39
    @subandhu39 1 year ago +1

    I think this video is amazing. I have been wanting to watch videos from this channel for the past 2 years but never could, because I lacked the basic knowledge to gain from the explanations here. I was taught this concept in class very poorly, and I immediately knew I could finally come here. The way this ties in with PCA, if I am correct, and the ease with which the mathematical kinks were explained were phenomenal. Glad to finally benefit from this channel. Thanks a ton.

  • @saidisha6199
    @saidisha6199 3 years ago +10

    Just amazing explanation. I had a blurry understanding of SVD after taking a class, and your video made the concept absolutely clear for me. Thanks a lot.

  • @JD-jl4yy
    @JD-jl4yy 3 years ago +9

    You have a gift for explaining things clearly! This is so much better than the 5 SVD videos I watched prior to this haha

  • @chunchen3450
    @chunchen3450 4 years ago +3

    I really like your videos! 👍 The methods are explained very clearly and concisely, and explaining their applications at the end also helps a lot to remember what each one really does. The length of the video is also perfect! Thanks, and I hope to see more videos from you.

  • @SLopez981
    @SLopez981 6 months ago

    Been watching your videos for months now. I very much enjoy how general your videos can be for someone outside of data science. I generally like watching math videos from non-math educators because they strike a great balance in their explanations.
    One thing I really enjoy about your videos is that at the end you bring it back to your field and explain why this is useful in your world.
    Reducing the number of entries for storage or for further calculations makes the real-world application very tangible.

  • @Andynath100
    @Andynath100 4 years ago +1

    Thanks for the regular quality content !!

  • @SupremeChickenx
    @SupremeChickenx 3 years ago +4

    bros who make youtube math tutorials are the real MVPs

  • @EdouardCarvalho82
    @EdouardCarvalho82 3 years ago +10

    Absolutely love your videos! Just to clear up possible confusion for learners at 4:05: VtV = I holds because of orthonormality, not merely independence, which is only a consequence. Great job!

  • @scorpio19771111
    @scorpio19771111 2 years ago

    Thank you. Short video that packs all the right punches.

  • @iWillieDR
    @iWillieDR 3 years ago

    Great vid man, keep up the good work!

  • @AkshayRakate
    @AkshayRakate 2 years ago

    Thanks so much for clarifying the concepts. Now I can connect the dots on why we use SVD in recommender systems. 👍

  • @nujelnigsns5376
    @nujelnigsns5376 2 years ago

    I was really struggling with linear algebra, your videos are really a saviour

  • @amritpalsingh6440
    @amritpalsingh6440 3 years ago +1

    I was struggling to understand the concept in the class and this video made it very clear for me. Thank you so much. Keep them coming :)

  • @TrangTran-dw2vw
    @TrangTran-dw2vw 3 years ago +4

    Once you learn PCA and revisit this video, everything really makes sense!!

  • @matthewchunk3689
    @matthewchunk3689 4 years ago +1

    Very relevant subject right now. Thanks

  • @haiderali-wr4mu
    @haiderali-wr4mu 1 year ago

    You explained the most important things about SVD. Thank you

  • @hamade7997
    @hamade7997 3 years ago +2

    I like this explanation too, thank you. Wish I had discovered you earlier during my data analysis course, but oh well :P

  • @lingtan6742
    @lingtan6742 1 year ago

    Super clear! Thank you so much!

  • @rugahun
    @rugahun 2 years ago

    I am glad about the note on 4:06, I freaked out when that was said. GREAT VIDEO!!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    Your explanation is awesome

  • @Fat_Cat_Fly
    @Fat_Cat_Fly 3 years ago +1

    Super well explained.

  • @hameddadgour
    @hameddadgour 1 year ago

    Wow, now I have a real appreciation for SVD :)

  • @Josh-di2ig
    @Josh-di2ig 1 year ago

    Thanks for a great video. Do you also have a video on how to find those lambda values?

  • @qiushiyann
    @qiushiyann 4 years ago +1

    This is really helpful

  • @JonasNgYF
    @JonasNgYF 1 year ago +3

    The columns of M, before SVD, could mean features. Do the columns of U and V (the left and right singular vectors) carry any physical meaning? The video keeps two singular values. How many do people usually keep?

  • @IrfanPeer
    @IrfanPeer 2 years ago

    marker flip on point

  • @mohammadmuneer6463
    @mohammadmuneer6463 3 years ago

    Awesome !!

  • @msrizal2159
    @msrizal2159 3 years ago +1

    Thanks Ritvik, I'm a PhD candidate from Malaysia. Your videos are helping me a lot.

  • @yulinliu850
    @yulinliu850 4 years ago

    Awesome 👍

  • @abhyuditgupta7652
    @abhyuditgupta7652 1 year ago

    Thank you so much!

    • @abhyuditgupta7652
      @abhyuditgupta7652 1 year ago

      I have always thought of such applications of matrices but never worked on finding out how. The beauty of maths.

  • @srs.shashank
    @srs.shashank 5 months ago

    Thanks, the applications I can think of right away are PCA and matrix factorization. What could be other possible applications?

  • @sanjeetwalia5077
    @sanjeetwalia5077 3 years ago +1

    Hi Ritvik, thanks for the very explanatory video. Really very helpful for understanding. However, when you say that you achieved a 75% computation reduction in this case, was it really because we assumed sigma(3) onwards to be approximately equal to zero? Does this assumption stray far from reality, or is this how it always happens? Eager to hear your thoughts. Happy to learn from this video.

    • @sanjeetwalia5077
      @sanjeetwalia5077 3 years ago

      Also, if you could do a Moore-Penrose pseudoinverse video as well. TIA

  • @Mertyy3
    @Mertyy3 3 years ago +1

    Unbelievable it has only 9k views... The video is great!

  • @sarvagyagupta1744
    @sarvagyagupta1744 1 year ago

    This is really really informative. Just one question. What are the sigmas? Are they eigenvalues from the SVD or something else? How did you get 2 and 3 in your example?

  • @singnsoul6443
    @singnsoul6443 10 months ago

    tysm

  • @Pukimaxim
    @Pukimaxim 3 years ago

    Hi Ritvik, I might have missed it in your video, but how do you get sigma?

  • @Actanonverba01
    @Actanonverba01 3 years ago +1

    I usually hear SVD used synonymously with PCA. The way you described it, SVD is like a compression of the data, but how is that different from PCA?

  • @kurtji8170
    @kurtji8170 1 year ago

    Hi Ritvik, what if the data is well constructed and there are 10 significant non-zero singular values? What can we do about this data?

  • @mohamedhossam1717
    @mohamedhossam1717 2 years ago

    I don't understand how to get the independent vectors of a matrix in order to find its rank, or what is meant by an independent vector.

  • @saraaltamirano
    @saraaltamirano 4 years ago +15

    I am a big fan of your videos, but I think I liked the old format better, where you do the math step-by-step and write it on the whiteboard :/

    • @ritvikmath
      @ritvikmath  4 years ago +9

      Thanks for the comment! I was debating whether to go back to the marker and paper style where I would write more stuff in real time. This suggestion is very helpful to me.

    • @saraaltamirano
      @saraaltamirano 4 years ago

      @@ritvikmath thanks for the reply! I am especially grateful for your PCA math video, since I am currently doing research with a functional data analysis algorithm that uses multivariate functional PCA and I've looked EVERYWHERE for an easy explanation. Your PCA video (and the prerequisite videos) is hands down the best explanation out there. I am forever grateful :-)

    • @teegnas
      @teegnas 3 years ago

      @@saraaltamirano I initially had the same PoV, but after consuming the whiteboard-type content for a while I've gotten used to it, and recently he has started moving away from the board at the end of the video so that we can pause and ponder it.

    • @kristofmeszaros5505
      @kristofmeszaros5505 3 years ago +2

      I greatly prefer this current style; there is no need to spend time writing, so you can concentrate more on explaining.

  • @kickfloeb
    @kickfloeb 2 years ago

    I feel like such a baby because I laughed every time you said u p and sigma p. Anyway, great video as always :).

  • @juanguang5633
    @juanguang5633 1 year ago +1

    Can we say that using SVD we are extracting significant features?

  • @kancherlapruthvi
    @kancherlapruthvi 3 years ago

    The explanations are good, but for linear algebra the best videos are from Prof. Gilbert Strang.

  • @robertleo3561
    @robertleo3561 3 years ago +1

    Wait, what? At 4:06 you said that matrices with full rank always have their transposes as inverses?

    • @ritvikmath
      @ritvikmath  3 years ago

      Almost, it's a small but important distinction.
      A full rank matrix has its columns linearly independent of each other.
      An orthogonal matrix (like the ones in this video) is full rank but also satisfies the property that its rows and columns are orthogonal to each other (any pair of different rows/columns has dot product 0).
      So for an orthogonal matrix, like you said, its transpose is its inverse. But that's not generally true for any full rank matrix. Looking back, I did say the wrong thing, and I'll go correct it in the comments. Thanks for pointing it out!
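      A minimal NumPy sketch of this distinction (the rotation matrix and the second matrix below are illustrative choices, not examples from the video): an orthogonal matrix's transpose is its inverse, while a generic full-rank matrix's transpose is not.

```python
import numpy as np

# Orthogonal matrix (a 2D rotation): its transpose equals its inverse.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(Q.T, np.linalg.inv(Q)))   # True

# Full rank but not orthogonal: invertible, yet its transpose is not its inverse.
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(np.allclose(B.T, np.linalg.inv(B)))   # False
```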

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    But don't orthonormal matrices have to be square?

  • @zigzagarmchair2367
    @zigzagarmchair2367 4 months ago

    OMG again!!!

  • @SoumyajitGanguly_ALKZ
    @SoumyajitGanguly_ALKZ 3 years ago +1

    This is a ~75% reduction, from 1000 to 222.

    • @ritvikmath
      @ritvikmath  3 years ago

      True! Thanks for pointing that out. I think I meant that now you only need around 25% of the storage.
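      For anyone wanting to check the arithmetic, here is a back-of-envelope sketch; the 100 x 10 shape and 2 retained singular values are assumptions chosen only because they reproduce the 1000 and 222 counts quoted in this thread, and may differ from the video's actual example.

```python
# Hypothetical shapes: original matrix is m x n, and we keep k singular values.
m, n, k = 100, 10, 2

full_entries      = m * n                # store M directly: 1000 entries
truncated_entries = m * k + k + k * n    # U_k (m x k) + k singular values + V_k^T (k x n)

print(full_entries, truncated_entries)       # 1000 222
print(truncated_entries / full_entries)      # 0.222 -> roughly 25% of the original storage
```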

  • @erlint
    @erlint 7 months ago

    Isn't this the thin form of the SVD? And aren't you using the numerical rank in the practical example? Because A = U [Σ; 0] V^T, but since Σ is n x n and the 0 block is (m-n) x n, we can write U = [U_1 U_2] where U_1 is m x n and U_2 is m x (m-n), and thereby A = U_1 Σ V^T. Also, U_2 will be in the null space of A (?). And there is the skip to the rank-truncated matrix instead of explaining how u_(r+1), ..., u_m form a basis for N(A^T) and v_(r+1), ..., v_n form a basis for N(A).
    Also, I'm still unsure how the eigenvectors of A^T A and A A^T tell you the most important information in A. Are we projecting the data onto the eigenvectors like in PCA?
    The eigendecomposition and SVD videos are some of the most compact and understandable videos I have found on those topics; they made the link between changing basis to the eigenbasis, applying the linear transformation, and then going back to the original basis much clearer to me, thanks.
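    For anyone comparing the two forms discussed in this comment, here is a small NumPy sketch (with an arbitrary tall matrix, not one from the video) contrasting the shapes of the full and thin SVD; both reconstruct A, since the extra columns of the full U only multiply zero rows of Σ.

```python
import numpy as np

A = np.random.rand(6, 3)   # a tall m x n matrix, m = 6, n = 3 (arbitrary example)

# Full SVD: U is m x m; the extra columns multiply zero rows of Sigma.
U_full, s, Vt = np.linalg.svd(A)
# Thin SVD: U is m x n, which is all you need to reconstruct A.
U_thin, s, Vt = np.linalg.svd(A, full_matrices=False)

print(U_full.shape, U_thin.shape)                 # (6, 6) (6, 3)
print(np.allclose(U_thin @ np.diag(s) @ Vt, A))   # True
```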

  • @pixieofhugs
    @pixieofhugs 1 year ago

    Thank you! You have been a life saver 🛟

  • @xinyuan6649
    @xinyuan6649 2 years ago

    The best! This is a 101 on how to have fun with math in DS 🫰