Computing the Singular Value Decomposition | MIT 18.06SC Linear Algebra, Fall 2011

  • Published Sep 29, 2024
  • Computing the Singular Value Decomposition
    Instructor: Ben Harris
    View the complete course: ocw.mit.edu/18-...
    License: Creative Commons BY-NC-SA
    More information at ocw.mit.edu/terms
    More courses at ocw.mit.edu

COMMENTS • 263

  • @SnackPacks10
    @SnackPacks10 8 years ago +553

    5 min of this video taught me more than two lectures, two chapters of my book, one office hour with the TA, and 4 hours trying to figure it out on my own.

    • @jugsma6676
      @jugsma6676 7 years ago +3

      Same here :))) haha

    • @TheR971
      @TheR971 6 years ago +1

      nice thought!

    • @sunghunet
      @sunghunet 6 years ago +1

      Yes. It saved me the time of untangling the difficult steps of SVD. Very helpful to me.

    • @oguztopcu9048
      @oguztopcu9048 6 years ago +4

      Dude, you can accomplish anything by watching 2 YouTube videos.

    • @deepgee9214
      @deepgee9214 5 years ago +1

      Hahaha 😂🤣

  • @vincenzosolinas6495
    @vincenzosolinas6495 3 years ago +7

    There is a problem: the \Sigma matrix must have positive, decreasing values, so it can't be diag([2*sqrt(5) 4*sqrt(5)]); it should be \Sigma = diag([4*sqrt(5) 2*sqrt(5)]). Indeed, if you multiply the U, Sigma and V' from the video, you don't get C; you obtain [-1 7; 5 5] instead.
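A quick way to settle threads like this is to multiply the factors back together. Below is a sketch in numpy, assuming (as other comments in this thread do) that the video's matrix is C = [[5, 5], [-1, 7]] and applying the sign fix in U that several commenters point out:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])

# Factors as in the video, with the sign error corrected
# (the minus belongs on U[0,0], not U[1,0]):
U = np.array([[-1.0, 1.0], [1.0, 1.0]]) / np.sqrt(2)
S = np.diag([2 * np.sqrt(5), 4 * np.sqrt(5)])          # sqrt(20), sqrt(80)
V = np.array([[-3.0, 1.0], [1.0, 3.0]]) / np.sqrt(10)

# With the corrected sign, U @ S @ V.T reproduces C exactly, even though
# Sigma is not in the conventional decreasing order.
print(U @ S @ V.T)
```

Swapping the columns of Sigma together with the matching columns of U and V gives the conventional decreasing order without changing the product.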

  • @poiu850
    @poiu850 10 years ago +1

    V1 and V2 are orthonormal vectors, using the Gram-Schmidt process :)

  • @royxss
    @royxss 7 years ago

    The sigma matrix values should have been switched so that a11 >= a22. That's what SVD prescribes. Correct me if I am wrong.

  • @skilstopaybils4014
    @skilstopaybils4014 9 years ago +2

    This is the closest I've come to understanding the process of SVD to date! So thank you for that, but it would be great if you didn't use such a convenient example for the V matrix. Does anybody know of a video that works through the algebra?
    Also, the identity matrix is Sigma*Sigma^T, right?

    • @hminhph
      @hminhph 9 years ago

      +skilstopaybils Sigma*Sigma^T equals a matrix with the eigenvalues of (C^T)*C on the diagonal.
      In this case that's 20 and 80 on the diagonal.
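That claim is easy to verify numerically. A sketch in numpy, assuming the video's matrix is C = [[5, 5], [-1, 7]]:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])
CtC = C.T @ C                        # [[26, 18], [18, 74]]

eigvals = np.linalg.eigvalsh(CtC)    # eigvalsh: for symmetric matrices
print(eigvals)                       # 20 and 80, up to floating-point error
print(np.sqrt(eigvals))              # singular values sqrt(20), sqrt(80)
```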

  • @GoJMe
    @GoJMe 8 years ago +3

    How does he get U????? Can someone show me how he did it? This just makes it more complicated...

    • @alierencelik2188
      @alierencelik2188 8 years ago

      +GoJMe He probably didn't want to lose time on it, as the matrix U is just obvious.

    • @GasMaskedMidget
      @GasMaskedMidget 8 years ago +9

      +Ali Eren Çelik Judging by the fact that he says he got U by "dividing" CV by Sigma, no, it's obviously not clear. You can't divide matrices, so saying this just creates confusion. I myself can't figure out how he turned CV into unit vectors, and then how multiplying that by Sigma simply changed some negative signs. So if it's so obvious to you, why don't you stop being pompous towards other people and explain?

    • @mubashshirali571
      @mubashshirali571 8 years ago +11

      +GasMaskedMidget U*Sigma = CV. After computing CV, multiply both sides on the right by Sigma inverse, and that gives you U. Or, instead of computing Sigma inverse, just divide each column of CV by the square root of the corresponding eigenvalue; it's the same thing. Hope that's clear.

    • @giovanni-cx5fb
      @giovanni-cx5fb 8 years ago +2

      Your comment was SO helpful to me! As soon as I read it, it came back to me that the inverse of a diagonal matrix is just another diagonal matrix whose diagonal entries are the reciprocals of the ones in the initial matrix, and BAM! All clear.
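The recipe in this thread can be written out in a few lines. A sketch in numpy with the video's numbers (assuming C = [[5, 5], [-1, 7]] and V built from the eigenvectors of C^T C); note that inverting the diagonal Sigma just means taking reciprocals on the diagonal:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])
V = np.array([[-3.0, 1.0], [1.0, 3.0]]) / np.sqrt(10)  # eigenvectors of C^T C
S = np.diag([np.sqrt(20), np.sqrt(80)])                # singular values

# U = C V S^{-1}; for a diagonal S, S^{-1} = diag(1/sigma_i).
U = C @ V @ np.diag(1.0 / np.diag(S))
print(U)   # [[-1/sqrt(2), 1/sqrt(2)], [1/sqrt(2), 1/sqrt(2)]]
```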

  • @jw-jackson3333
    @jw-jackson3333 3 years ago

    So many errors in this video; not sure why people think it is good. Please refer to other sources.

  • @D3tyHuff
    @D3tyHuff 6 years ago +96

    7:24 No need to look for me man, I'm right here.

  • @Georgesbarsukov
    @Georgesbarsukov 4 years ago +47

    I said 'minus lambda' out loud at 3:44; he then turns around and says 'minus lambda, thank you'. You're welcome.

  • @jacobfields8111
    @jacobfields8111 8 years ago +123

    You explained singular value decomposition better in 11 minutes than my linear algebra professor did in 50. Thanks.

    • @carl6167
      @carl6167 6 years ago +6

      I'm afraid, mate, that then you don't understand anything but how to compute it... See, SVD is way more than that: it is a way of decomposing a homomorphism into three separate maps. One rotates vectors in the input space, one scales them accordingly while bringing them from one dimension to another, and then the vectors get rotated once again in the output space (unitary matrices can be decomposed into rotation matrices).

    • @sathasivamk1708
      @sathasivamk1708 4 years ago +5

      @@carl6167 I don't think the first one necessarily rotates; orthogonal doesn't mean it's a rotation, it can be a reflection/inversion too.

  • @ArdianUmam
    @ArdianUmam 7 years ago +32

    I'm always amazed by MIT OCW videos. The way they teach is just ideal: writing on the board that is clear and big enough, systematic explanation, and comfortable to understand.

  • @luislobo6986
    @luislobo6986 10 years ago +20

    To find U it is simple. The professor found U*Sigma from CV, didn't he?
    So, to find U, we solve U*Sigma = CV:
    [a11 a12; a21 a22]*[2*sqrt(5) 0; 0 4*sqrt(5)] = [-sqrt(10) 2*sqrt(10); sqrt(10) 2*sqrt(10)].
    So we have a11 = -sqrt(10)/(2*sqrt(5)) = -1/sqrt(2)
    a12 = 2*sqrt(10)/(4*sqrt(5)) = 1/sqrt(2)
    a21 = sqrt(10)/(2*sqrt(5)) = 1/sqrt(2)
    a22 = 2*sqrt(10)/(4*sqrt(5)) = 1/sqrt(2)

    • @SKCSK792
      @SKCSK792 10 years ago +1

      Perfect

    • @plavix221
      @plavix221 8 years ago

      +Luis Lobo Isn't the order of the matrices in the first equation, CtC, wrong?
      I don't get why V is at the front and Vt at the end; I thought matrix multiplication is not commutative...
      Hope someone can help me out.

    • @GasMaskedMidget
      @GasMaskedMidget 8 years ago +2

      This still doesn't make sense to me. How is -sqrt(10)/(2*sqrt(5)) not equal to -sqrt(2)/2???

    • @danielwenhao
      @danielwenhao 7 years ago +1

      I just came across your comment; hope you have already solved your problem, but I'll give you the reason behind it with this link :) www.wikihow.com/Divide-Square-Roots

  • @rockfordlines3547
    @rockfordlines3547 3 years ago +23

    Thanks for this, McLovin.

  • @mehmetkazakli956
    @mehmetkazakli956 5 years ago +17

    Singular values need to be ordered decreasingly. When you write Sigma, shouldn't the 1st and 4th entries be switched?

  • @DivinityinLove
    @DivinityinLove 1 year ago +4

    At 5:08 I don't understand how you get -3 and 1. Then again at 6:04, how do you get 1 and 3?

  • @Kitiara306
    @Kitiara306 10 years ago +59

    U is wrong. The first entry is negative, not the third.

    • @ortollj4591
      @ortollj4591 5 years ago +5

      I highly recommend watching this other video about SVD, very well explained:
      ua-cam.com/video/EokL7E6o1AE/v-deo.html
      Yes, a little sign error in the U matrix:
      sagecell.sagemath.org/?q=yuwvgr

    • @fxelixe
      @fxelixe 4 years ago +2

      Thank you, finally I can go to sleep. He didn't seem to notice it at all.

    • @mdichathuranga1
      @mdichathuranga1 3 years ago

      @@ortollj4591 Thanks for sharing this link; it seems like the clearest explanation I've found so far.

  • @sophiaxmc2167
    @sophiaxmc2167 4 years ago +19

    I think V and Sigma need to be ordered with the largest singular value (and its vector) on the left?

    • @kanakambaran
      @kanakambaran 4 years ago

      Yeah, this is what I thought as well. But if you do that and rearrange V also, you don't get the original matrix when you multiply the 3 components. So there must be some reason why he has put it like that.

    • @heathmatthews2643
      @heathmatthews2643 3 years ago

      They don't need to be, but it's super useful for things like Principal Component Analysis, so arrange Sigma from greatest to least :)

  • @parthgadoya5690
    @parthgadoya5690 6 years ago +18

    Last-minute mistake: he put the wrong signs in the u11 and u21 positions.
    Correction: u11 = -1/sqrt(2), u21 = 1/sqrt(2).
    Correct me if I am going wrong somewhere.

    • @GarbageMan144
      @GarbageMan144 3 years ago +3

      No, you're right. These mistakes can be so annoying sometimes.

    • @dpccn6969
      @dpccn6969 3 years ago +1

      I think the same as you.

  • @antonivanov3830
    @antonivanov3830 4 years ago +8

    Thank you for this video! Very nicely done! Just one correction: it seems U11 and U21 should be swapped :)

  • @anuraagnarang8982
    @anuraagnarang8982 7 years ago +92

    This is a bit confusing. If anyone wants to know how to find U, have a look at my workings. (S = Sigma.)
    Firstly, when calculating the eigenvector for the eigenvalue 20, swap the signs around so that the vector is (3/root10, -1/root10).
    Note that this is also a correct eigenvector for the given matrix and its eigenvalue 20. You can find out why by reading up on eigendecomposition.
    Secondly, swap the order of the columns in S, so that the values go from high to low when looking from left to right. This is the conventional format of the Sigma matrix.
    Now, when finding U, I'm not sure why he's done the unit-length thing, and I can't even see how he got his final answer from it.
    Anyway, we know that CV = US, which means CVS^-1 = USS^-1.
    Since S is diagonal (and invertible), SS^-1 = I, the identity matrix, i.e. it can be dropped.
    So now we have CVS^-1 = U.
    To find S^-1: since S is diagonal, just invert the values along the diagonal, i.e. any value x on the diagonal becomes 1/x.
    Now multiply your CV by your S^-1 and you should get the same result for U as in the video, but with the columns swapped around, i.e. in the correct format.
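For anyone who wants to check these workings, here is a sketch in numpy that follows them step by step, assuming (like the rest of the thread) the video's matrix C = [[5, 5], [-1, 7]]: the sign-flipped eigenvector for lambda = 20, Sigma reordered largest-first, and U = C V S^-1:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])

# Largest singular value first, as the comment suggests:
S = np.diag([np.sqrt(80), np.sqrt(20)])
# Column 1: eigenvector for lambda = 80, (1, 3)/sqrt(10).
# Column 2: the sign-flipped eigenvector for lambda = 20, (3, -1)/sqrt(10).
V = np.array([[1.0, 3.0], [3.0, -1.0]]) / np.sqrt(10)

U = C @ V @ np.diag(1.0 / np.diag(S))  # S^{-1} = reciprocals on the diagonal
print(U)                # [[1/sqrt(2), 1/sqrt(2)], [1/sqrt(2), -1/sqrt(2)]]
print(U @ S @ V.T)      # reconstructs C
```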

    • @cdahdude51
      @cdahdude51 5 years ago +2

      Excellent, thanks. I was wondering how he calculated U.

    • @Yustynn
      @Yustynn 5 years ago +5

      Regarding getting U after he retrieves US, there are 2 ways.
      Both rely on the following: notice that S, being a diagonal matrix, stretches U; each column gets scaled by the corresponding diagonal entry of S.
      Method 1 (need to know the value of S): divide each column j by the corresponding S_jj value.
      Method 2 (no need to know the value of S): observe that
      i) S is by definition diagonal => all that happened to U was a scaling of each column vector;
      ii) U is by definition orthogonal => each column vector has magnitude 1.
      So just normalize all the columns.
      He made a mistake: the minus sign belongs on U_11, not U_21.
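Method 2 above is essentially a one-liner in numpy. A sketch with the video's numbers (assuming C = [[5, 5], [-1, 7]] and the video's V):

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])
V = np.array([[-3.0, 1.0], [1.0, 3.0]]) / np.sqrt(10)

US = C @ V                            # this is U * Sigma
U = US / np.linalg.norm(US, axis=0)   # normalize each column (Method 2)
print(U)   # columns (-1, 1)/sqrt(2) and (1, 1)/sqrt(2)
```

As a bonus, the column norms `np.linalg.norm(US, axis=0)` are exactly the singular values sqrt(20) and sqrt(80), which is Method 1 in reverse.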

    • @forthrightgambitia1032
      @forthrightgambitia1032 3 years ago

      When he gets the eigenvectors, did he just have them prepared, or is there a mental trick to see them without Gaussian elimination?

    • @KnowledgeGuide859
      @KnowledgeGuide859 2 years ago +1

      Excellent. I spent an hour wondering how he calculated U. Thanks again.

  • @김서현-n2r5n
    @김서현-n2r5n 7 years ago +14

    I sincerely thank MIT for giving me this wonderful lecture for free.
    It's really helpful for learning SVD.

  • @vishnus2567
    @vishnus2567 7 years ago +5

    Why did he swap the negative sign in the last section when finding U (in column 1)?
    -(1/2) changed to +(1/2), and +(1/2) changed to -(1/2), in column 1?
    I got confused in the last section.
    Please help.
    Thank you.

    • @ausamahassan9559
      @ausamahassan9559 4 years ago +1

      You are right, though this is too late a reply!! u11 should be negative rather than u21. You can check by multiplying U*Sigma*V^T to get C.

  • @samytee3282
    @samytee3282 3 years ago +4

    Very helpful. You delivered it very nicely, stay blessed :) ... But I am confused about the V1 and V2 part: how did you get entries like -3 and 1?

    • @Sammyd123samuel
      @Sammyd123samuel 10 months ago

      It's part of finding the eigenvectors: set (CtC - 20I)v = 0 and v will be (-3, 1).
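The step this reply describes (reading an eigenvector off a singular 2x2 matrix) can be sketched in numpy. The helper `null_vector` is a name introduced here for illustration; CtC = [[26, 18], [18, 74]] follows from the video's C = [[5, 5], [-1, 7]]:

```python
import numpy as np

def null_vector(A):
    """Unit vector spanning the null space of a singular 2x2 matrix
    (assumes the first row of A is nonzero)."""
    a, b = A[0]
    v = np.array([-b, a])        # orthogonal to the first row (a, b)
    return v / np.linalg.norm(v)

CtC = np.array([[26.0, 18.0], [18.0, 74.0]])    # C^T C

v1 = null_vector(CtC - 20 * np.eye(2))   # proportional to (-3, 1)
v2 = null_vector(CtC - 80 * np.eye(2))   # proportional to (1, 3), up to sign
print(v1 * np.sqrt(10), v2 * np.sqrt(10))
```

Eigenvectors are only determined up to a nonzero scalar, so (-3, 1) and (3, -1) are equally valid; the choice affects signs in U but not the product U Sigma V^T, as long as U is derived from V.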

  • @Rajbhandari88
    @Rajbhandari88 12 years ago +5

    Note to viewers: the Sigma is wrong. The elements s in Sigma should satisfy s1 > s2.

  • @PaMS1995
    @PaMS1995 8 years ago +8

    This is the best SVD tutorial I could wish for; thanks for making it easy.

  • @yaweli2968
    @yaweli2968 4 years ago +4

    There is a much simpler way of doing this, with fewer errors, on one of the MIT pages on SVD. The columns of U are the unit eigenvectors of C*C^T, and V^T comes from the unit eigenvectors of C^T*C. The eigenvalues are the same for both matrices. Just find your diagonal matrix of eigenvalues and you are done.

  • @cosminanton5419
    @cosminanton5419 2 years ago +1

    I really thought he was going to leave for a whole minute.

  • @rafaellima8146
    @rafaellima8146 10 years ago +73

    Just some advice: next time, try to compute without skipping steps. When you compute step by step, calmly, your chances of success increase a lot. Anyway, thank you for the lecture. It was very helpful.

  • @rivaldijulviar1765
    @rivaldijulviar1765 5 years ago +4

    Great way of teaching!! You've taught me SVD in 11 minutes. A few errors at the end, but that's understandable.

  • @ajayjadhav5699
    @ajayjadhav5699 5 years ago +3

    I think he skipped the step where the product CV is multiplied by the inverse of the Sigma matrix.

  • @DrAndyShick
    @DrAndyShick 10 months ago

    Did anyone here confirm the values of U? Because I'm not confident they were correct. Also, he was doing a bunch of things that didn't seem to make sense. Edit: I did it myself. It is correct, but I'm still unclear on the logic, and on why the columns of Sigma and V are switched. Also confused by that unit-vector thing at the end.

  • @GarbageMan144
    @GarbageMan144 3 years ago +1

    Dude, when I compute the U*Sigma*V^T you have, I don't get the A matrix back.

  • @companymen42
    @companymen42 7 months ago

    Sometimes the old ways are better. All this newfangled technology, and we forget how to teach along the way. Blackboard and chalk is the way to go!

  • @JP-re3bc
    @JP-re3bc 4 years ago

    Horrible. This guy glosses over explanations and pulls results out of the, well, hat. Bad teacher, bad.

  • @dw61w
    @dw61w 2 years ago

    It's confusing to call the matrix C, as if it were the covariance matrix. In fact, it is not; C.T @ C is the covariance matrix that gets passed to eig().

  • @arianoori5924
    @arianoori5924 3 years ago

    CV = U*Sigma. He computed CV; call it Y. So U*Sigma = Y, and U = Y*inv(Sigma)?????

  • @DuuAlan
    @DuuAlan 8 years ago +8

    You should go over the case where some of the eigenvalues are 0.

    • @andibritz
      @andibritz 7 years ago

      Eigenvalues can never be trivial.

    • @carl6167
      @carl6167 5 years ago

      Well, you just subtract 0 from the starting matrix. So you basically just compute Ker(C^T C).

  • @vedprakashdubey5839
    @vedprakashdubey5839 5 years ago +4

    In the final step there is a mistake: the signs of the elements in the first column of U are reversed.

  • @harshitarawat1411
    @harshitarawat1411 3 years ago +1

    Why did he take -3 and not 3 at 5:03 for V1, when there are no negatives, and then 3 for V2, when there are negatives?

  • @sakshimantri7548
    @sakshimantri7548 3 years ago +1

    Something I've noticed: only a student can teach a student well; most professors complicate stuff for no reason.

  • @muhammaddanialshafqat4576
    @muhammaddanialshafqat4576 7 years ago +1

    MIT OCW is the big reason I pass courses like Linear Algebra at my university...
    THANKS MIT_OCW

  • @vaga8bnd
    @vaga8bnd 3 years ago +1

    @7:04 Are you sure your diagonal matrix should be like that? I think the diagonal elements have to be monotonically decreasing from upper left to lower right, i.e. it has to be [sqrt(80) 0; 0 sqrt(20)].

    • @peter2485
      @peter2485 2 years ago

      You are right. The order has to decrease from upper left to lower right, so v_1 and v_2 need to be swapped around, and the same for \sigma_1 and \sigma_2.

  • @jacksonmckenzie2172
    @jacksonmckenzie2172 6 years ago +1

    I thought that in the Sigma matrix the singular values were listed in descending order, so it should be sqrt(80) then sqrt(20). Is this true, or does it not matter?

  • @andrewhutchings2176
    @andrewhutchings2176 5 years ago +3

    Explained a complicated problem in a simple way. Amazing work.

  • @soundman1992
    @soundman1992 8 years ago +3

    Great tutorial, but in the Sigma matrix the values are the wrong way round; the max singular value should be top left. This leads to an incorrect U matrix, I think.

    • @PaMS1995
      @PaMS1995 8 years ago +1

      It's actually arbitrary what order you choose, but yeah, the convention is to go in decreasing value AFAIK.

    • @Evan490BC
      @Evan490BC 5 years ago

      @@PaMS1995 Exactly.

  • @ccss31319
    @ccss31319 9 years ago +16

    At the end of the answer for U, did he make a mistake with the signs of u11 and u12?

    • @ccss31319
      @ccss31319 9 years ago +10

      Correction: u11 & u21.

    • @bhat2087
      @bhat2087 8 years ago +2

      I was thinking the same...
      Did he???

    • @harshitsaini4629
      @harshitsaini4629 7 years ago

      It's U(1,1) and U(2,1).

  • @jhabriel
    @jhabriel 8 years ago +1

    The correct values according to MATLAB are:
    u = [0.7071  0.7071;
         0.7071 -0.7071]
    s = [8.9443  0;
         0       4.4721]
    v = [0.3162  0.9487;
         0.9487 -0.3162]
    Regards!
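The same check works with numpy's `svd`. A sketch (the matrix C = [[5, 5], [-1, 7]] is inferred from the thread; the signs of individual columns may differ from MATLAB's, which is harmless):

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])
U, s, Vt = np.linalg.svd(C)

print(s)    # sqrt(80) ≈ 8.9443, sqrt(20) ≈ 4.4721, largest first
print(U)    # columns agree with MATLAB's u up to sign
print(Vt)   # rows agree with MATLAB's v up to sign
print(np.allclose(U @ np.diag(s) @ Vt, C))   # True
```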

    • @eiriklid
      @eiriklid 7 years ago

      You have just changed the order of the eigenvectors/-values, and your v_2 equals minus the video's v_1. But the values are equally correct!

  • @Joujau9
    @Joujau9 6 years ago +1

    Can anyone explain to me precisely what is happening @10:00? How does he get that first matrix? What is he dividing by? I'm a bit confused.

    • @incognito5438
      @incognito5438 4 years ago

      I am also confused about that moment :(

  • @eddiechen6389
    @eddiechen6389 2 years ago

    How did he get, in one step, that Sigma is that matrix in diagonal form?

  • @anirudh7137
    @anirudh7137 2 years ago

    Is it necessary for the diagonal of the Sigma matrix to hold the square roots of the eigenvalues of A*A? If I try to find U, I need to go with AA*, which means the diagonal of Sigma should now contain the square roots of the AA* eigenvalues.

  • @jonathansum9084
    @jonathansum9084 6 years ago +1

    Look at us, we are the failures. Look at this kid; he is a success. They have the money. They have the better education! Our professors at the community college are just a joke.

    • @VladaFurman
      @VladaFurman 6 years ago +1

      It's not just community college professors that are jokes. :(

  • @jahbini
    @jahbini 8 months ago

    Wonderful hand-waving at 1:57.

  • @peop.9658
    @peop.9658 2 years ago +1

    How did we get the values for v1 and v2 at 5:07?

  • @mariestolk3794
    @mariestolk3794 4 years ago +2

    This video is really great! Thank you for walking through this example so clearly, I really get it now! :)

    • @eevibessite
      @eevibessite 1 year ago

      ua-cam.com/video/0Ahj8SLDgig/v-deo.html

  • @debbya.8277
    @debbya.8277 8 years ago +9

    Thanks for the video. Kinda confused about the signs in the final answer for U, though. Did we need to change the signs of a11 and a21?

    • @giovanni-cx5fb
      @giovanni-cx5fb 8 years ago +18

      No, he just screwed up.

    • @moshiurchowdhury3673
      @moshiurchowdhury3673 6 years ago

      He just screwed up at the end with the U matrix! There should be a minus sign in front of U11 instead of U21.

  • @sujeetiitd
    @sujeetiitd 4 years ago

    At the end, U was calculated incorrectly. U is not C*V*Sigma; rather, it's C*V*Sigma^-1.

  • @aggressivetourist1818
    @aggressivetourist1818 4 years ago

    One of the best ways to teach people is to show them an example.
    I couldn't find an example like this in my LA class; the teacher just uses MATLAB to calculate the U, Sigma, V.
    Thanks to this man. He must have earned a PhD already, I hope.

  • @diskokaktus
    @diskokaktus 3 years ago

    Anyone from HSE Moscow 2021? :)

  • @majidkh2695
    @majidkh2695 3 years ago

    Quick question! Is it fair to say that the left singular vectors reflect the (statistical) distribution of the column vectors?

  • @ethanbartiromo2888
    @ethanbartiromo2888 1 year ago

    I'm going to be completely honest: you are exactly like me! I hold a master's degree, but I still get confused multiplying matrices in my head because it's just so much to keep track of.

  • @hhafizhan
    @hhafizhan 7 years ago

    Why can we just put the square roots of 20 and 80 in the Sigma matrix? I mean, shouldn't it be just 20 and 80?

  • @madhuw1642
    @madhuw1642 7 years ago

    I think he made a mistake when writing the final V. It should be written so that the eigenvector of the largest eigenvalue is column one, the eigenvector of the next largest eigenvalue is column two, and so on, until the eigenvector of the smallest eigenvalue is the last column. Right?

  • @iasm
    @iasm 11 years ago +1

    Directly computing CC* and C*C will definitely work, but an important issue arises: the uniqueness of eigenvectors! Since negating any eigenvector also gives a valid solution, the U and V obtained from CC* and C*C separately might not be able to reconstruct the original C. However, if one uses the 2nd equation mentioned in this video, this potential issue is avoided, because U is derived from C, V and Sigma!

    • @kuzminkg
      @kuzminkg 1 year ago

      Underrated comment! Indeed, the demonstrated method finds the matrix U that corresponds to the previously found matrix V.
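The pitfall this thread describes is easy to reproduce. A sketch in numpy, assuming the video's C = [[5, 5], [-1, 7]]: eigendecomposing C C^T and C^T C independently leaves each eigenvector's sign arbitrary, so the product may fail to reconstruct C, while deriving U from C, V and Sigma always works:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])

# Independent eigendecompositions (eigh: for symmetric matrices):
_, U = np.linalg.eigh(C @ C.T)
lam, V = np.linalg.eigh(C.T @ C)
S = np.diag(np.sqrt(lam))

# With independently chosen eigenvector signs, U S V^T may or may not equal C:
print(np.allclose(U @ S @ V.T, C))

# Deriving U from C, V and S (the video's 2nd equation) always works:
U_fixed = C @ V @ np.diag(1.0 / np.diag(S))
print(np.allclose(U_fixed @ S @ V.T, C))   # True
```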

  • @soumodeepsen3448
    @soumodeepsen3448 2 years ago

    How did the negative sign in U move from the 1st element to the 3rd? I didn't understand that part.

  • @jeetendraahuja1728
    @jeetendraahuja1728 7 years ago

    Not for someone who has no idea about it; it's like practicing what you already know. Not worth watching.

  • @meimei9948
    @meimei9948 3 years ago

    I'm a beginner with SVD. I would like to know how to calculate the Sigma matrix and the matrix U.

  • @priyanshuverma9844
    @priyanshuverma9844 5 years ago +1

    At 5:08, can you explain how you calculated the value of V1?

    • @MrCorpsy6
      @MrCorpsy6 4 years ago

      The second row is 3 times the first one, so you can drop it, and you find that y = -(1/3)x (or alternatively, x = -3y); therefore you can take v1 = (-3, 1) (or v1 = (3, -1)).
      Basically, you're solving the linear system C^TC * X = 20X, where X = (x, y).

  • @rocioalvarado8116
    @rocioalvarado8116 7 months ago

    Thank you! You are a genius!

  • @amandasgupta8609
    @amandasgupta8609 2 years ago

    Why did the first element in v1 become -3 and not 3? Can anyone please explain?

  • @jinudaniel6487
    @jinudaniel6487 3 years ago

    Skipping steps just confuses me more.

  • @uthmanzubairoluwatos
    @uthmanzubairoluwatos 7 years ago

    What if C is not a square matrix? You'd need to show that you have to find C^T*C and C*C^T, then find their corresponding eigenvectors.

  • @williamgomez6226
    @williamgomez6226 20 days ago

    Oof, very useful.

  • @totetnhitnhitjt
    @totetnhitnhitjt 6 years ago +3

    Perhaps there's a faster way. Let C' denote the transpose of C, and S the Sigma matrix.
    Use the two equations: C'C = V S'S V'
    CC' = U SS' U'
    You can compute S'S by finding the eigenvalues of C'C, and these happen to be the same as the eigenvalues of CC'. So after finding V, instead of using the second equation from the video, you can find the eigenvectors of CC' directly by computing:
    CC' - 20I
    CC' - 80I
    You'll find the vectors of U faster and without inverting S.
    Also, factoring out 1/sqrt(10) makes the computation easier.

  • @TorresVr1
    @TorresVr1 7 years ago

    When finding the eigenvectors for a given eigenvalue, why is it enough to find a single element of the null space instead of the whole null space?

  • @GoJMe
    @GoJMe 8 years ago

    Instead of showing new ways of solving the SVD, can someone explain the basic way, the idiot-proof way?

  • @saadsaeed354
    @saadsaeed354 11 years ago +1

    Shouldn't you multiply U*Sigma by Sigma^-1 to find U at the end?

  • @ferox7878
    @ferox7878 7 years ago

    Funny that you say eigen(vector) and eigen(value) in English, just like in German.

  • @jasont7604
    @jasont7604 5 years ago +1

    Thank you; most examples I found for this were simple ones. This helped me figure out the more complex problems.

  • @johngillespie8724
    @johngillespie8724 8 years ago +2

    Good job. Very helpful.

  • @SophiaZhang-j2u
    @SophiaZhang-j2u 6 months ago

    Thank you sooooooo much

  • @jayalakshmiravi3324
    @jayalakshmiravi3324 1 year ago

    How to find v1 and v2?

  • @ChauNguyen-wd6fm
    @ChauNguyen-wd6fm 2 years ago

    Really helpful!! Tysm

  • @TankNSSpank
    @TankNSSpank 2 years ago

    Thank you, Sir, it is.

  • @anjanbasumatary1510
    @anjanbasumatary1510 8 years ago +1

    What's the formula to find v1 and v2?

  • @arceuslegend4605
    @arceuslegend4605 4 years ago

    To find U, it would've been a lot easier to multiply each side on the right by A^T, so that you get another diagonalization equation, for AA^T; then all you need to do is find the eigenvectors of AA^T. Much faster IMO.

  • @AnhTran-nm9if
    @AnhTran-nm9if 4 years ago

    Ben Harris is a cutie.

  • @nayankadam9866
    @nayankadam9866 7 years ago +8

    How do you make the matrix entries unit length? 9:58

    • @PelaajaSuomi
      @PelaajaSuomi 7 years ago +2

      Divide all coordinates by the length of the vector (here length = sqrt((-3)^2 + 1^2)).

  • @SnackPacks10
    @SnackPacks10 8 years ago +1

    Does it matter whether I use C'C or CC' at the beginning when calculating the determinant?

    • @danielschut3170
      @danielschut3170 7 years ago +1

      No, if I'm right, it doesn't matter.

    • @lekhaniray7851
      @lekhaniray7851 6 years ago

      Yes, it does matter. C'C as a matrix is different from CC'.

  • @melihhan1335
    @melihhan1335 6 years ago

    What if one of the lambda values equals 0? How can I solve the question then?

  • @bennyb.8966
    @bennyb.8966 7 years ago

    I don't get where the matrix he writes down at 10:00 comes from; can somebody help?
    The matrix with (1/sqrt(2)).

    • @gainauntu
      @gainauntu 5 years ago

      So it's... calculating the length of each column, which is sqrt((-sqrt(10))^2 + sqrt(10)^2) = sqrt(20), and dividing the components by that value!

  • @SanjuktaDawn
    @SanjuktaDawn 7 years ago +2

    Hi, this video was very helpful. However, I think the eigenvectors for the corresponding eigenvalues have been interchanged somehow; that is, the eigenvector for lambda = 20 is the one that's been shown against lambda = 80.

  • @sanjaykrish8719
    @sanjaykrish8719 6 years ago

    You are a born teacher. Justified, because you are a student of Mr. Gilbert Strang.

  • @kstahmer
    @kstahmer 12 years ago

    A Singular Value Decomposition (SVD) computation is easy to screw up. It's wise to take this video reciter's (Ben Harris') advice: please try a few SVD computations on simple 2x2 matrices. Once you have an answer, check that it works. This will help demystify SVD.
    This link may help:
    ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-29-singular-value-decomposition/

  • @sultanatasnimjahan5114
    @sultanatasnimjahan5114 4 years ago

    C^TC is wrong

  • @dramaqueen9475
    @dramaqueen9475 2 years ago

    Thank you, sir.

  • @Rajbhandari88
    @Rajbhandari88 12 years ago +1

    How the fuck do we determine which one is V1 and which one is V2?

    • @gfredtech
      @gfredtech 6 years ago

      You can do it in any order. However, you must make sure that your Sigma matrix with the eigenvalues also matches that order.

  • @수수-v2m
    @수수-v2m 2 years ago

    Best explanation ever! Finally I got it 😍😍 Thanks.

  • @gregorybehm4890
    @gregorybehm4890 11 years ago

    I'm not sure what you're saying, Alex. There is no problem using AA* and A*A; that is the way to solve it. Are you implying we shouldn't use AA* and A*A to solve?
    Can you give an actual example where using A*A and AA* to solve for U and V separately doesn't work? Because as I see it, solving for each by construction gives you the solution, just as well as going one way, getting just U or just V, and solving for the other.

  • @Karmakameleeon
    @Karmakameleeon 7 years ago

    Are the values of the Sigma matrix determined by the eigenvalues of C*C-transpose or C-transpose*C?

    • @alexdowad947
      @alexdowad947 2 years ago

      Either. The eigenvalues of C*Ct are the same as those of Ct*C.

  • @tarumankhatri8887
    @tarumankhatri8887 3 years ago

    Very nice