This channel is so underrated; your explanations and overall video presentation are really good!
Don't know why you think it's underrated...
Everyone who is watching these videos knows how great they are.
What a wonderful way to simplify a complicated topic like the SVD. I wish more people in academia emulated your way of teaching, Mr. Brunton.
Wonderful explanation, clear and easy to understand. Thank you very much
love the video, well explained and aesthetically good.
Just amazing explanation.
Thank you for this video series.
It is very clear when you compare the vector x with the water flow: 'U' gives the eigenflows and 'V' gives how the eigenflows evolve in time. For the faces it is also very clear that 'U' gives the eigenfaces, but I wonder what the 'V' vector would be?
I wish I had been his student during my college days; my grades would have improved.
What if we change our norm? Does this remain the best approximation or not?
I'm confused by the transpose notation. Does the SVD of X give U*Sigma*V, for which you manually have to transpose V afterwards, or does the SVD of X give you U*Sigma*V-transpose, such that the SVD transposes V for you automatically as part of the calculation?
Are the u vectors organized in descending order?
This series is by far the best explanation of SVD that I have seen.
The best explanation of SVD. Your videos are excellent. Thank you very much!
SVD was at the very end of my college LinAlg class so I never got a very good understanding of it before the final - this is truly amazing; you say "thank you" at the end of every video but it should be us saying it to you- keep doing your thing! I'm loving it.
The best thing about your lectures is that you do coding implementations along with the heavy math. That makes you different from the rest of the traditional instructors. Kudos to you!!!
It's my pleasure
Amazing lectures, immediately bought the book, thank you!
The book is great, but relatively terse for someone like me who needs to brush up on his linear algebra. These video lectures are an excellent complement to the book and really help drive home the concepts.
I just finished my exam and I see this lmaoo 😭😭
Part 2 of the Eckart-Young theorem is that this video is the best explanation of the theorem's part 1 :P
@14:52
No Steve, thank YOU!
Can you do a series on QR decomposition as well? This is so useful!
_"I've been assuming the whole time in these lectures that_ *n* _is_ *much much larger* _than_ *m,* _meaning I have many more_ *entries* _in each column than I have_ *columns."*
[question]: To agree with his choice of wording, didn't he actually mean to say that _m_ (the number of rows; you could also say the number of entries down each column) is much much larger than _n_ (the count of columns)? I think he got his _m_ and _n_ mixed up when he wrote *n>>m* . I think, instead, he meant to write *m>>n* .
n -- is the number of entries in each column
m -- is the number of columns (can be interpreted as number of samples/individual observations)
so, I think in your interpretations, after the [question], you have swapped m and n. m is the number of columns, not the number of rows
I believe in this video he makes m = number of columns and n = number of rows. It is standard to have it the other way around, and this confused me as well.
Not an engineer/student, but I'm watching this to get a better understanding of PCA in statistics. I'm going to check the book and research this, but my only complaint (nit-picky) is trying to tell the difference when Steve speaks between "M" and "N" which I know refers to the number of rows or columns of the matrix. But really, this was great and I am thankful that this is something I can study on my own. Much appreciated.
You explain math in such a way as to not make someone feel stupid, but feel like they're taking steps into understanding a larger concept, and the tools they need are the ones we already have. Big ups!
Very, very nice explanation and presentation. Thank you!
Explanations like this for a dummy like me makes my life so much easier
Gosh, what a class! As Mr. Ayush said, this was indeed by far the best SVD explanation I've seen. You've made such a complicated subject much more approachable! I wish you all the best, Steve! Greetings from Brazil!
Thanks so much! That is great to hear!!
Please keep making these high quality lectures. They are some of the best I have seen on UA-cam and that goes a long way because I watch a lot of lectures online.
It was a pleasure to watch. You should do more educational videos, Mr. Brunton.
What about m > n? (For example, what happens if my dataset is composed of 5000 images of 32x32?)
You are a very very gifted teacher! Thank you for sharing this! :)
How could he write in mirror image? Amazing!
I also feel amazed by that
@@NG-lx1kx Remember that image processing is one of the applications of SVD. Now think, how does image processing relate to this video?
In post-processing he flips the video left-right. Notice which hand has the wedding ring.
This all sounded like gibberish until I started to think of the first term of the expansion (Sigma1*U1*V1T) as the (strongest) "signal" and the rest of the terms as ever decreasing amounts of "signal" and ever increasing amounts of "noise". So the last term (Sigmam*Um*VmT) is essentially all background "noise" in the data. Thinking of it that way, it all makes perfect sense.
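That picture is easy to check numerically. Here is a small NumPy sketch (my own example, with a made-up rank-2 "signal" and noise level, not from the video): the first singular values carry the signal, and truncating there throws away mostly noise.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: a rank-2 "signal" plus small random "noise".
signal = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 20))
X = signal + 0.01 * rng.standard_normal((100, 20))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The first two singular values dominate; the rest sit at noise level.
print(s[:4])

# Truncating at r = 2 keeps essentially all of the signal.
X2 = U[:, :2] * s[:2] @ Vt[:2, :]
err = np.linalg.norm(X2 - signal) / np.linalg.norm(signal)
print(f"relative error of rank-2 truncation: {err:.4f}")
```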
In case anyone didn't know, there's an entire playlist for SVD: ua-cam.com/video/gXbThCXjZFM/v-deo.html
Others: "wow, great explanation, thanks for the lesson"
Me: how is this man writing perfectly backwards onto thin air?
Mr. Brunton: I'm glad the green screen, glass board, and $1000 of Adobe products really paid off
It doesn't get better than this. I am so thankful to you. I don't know how to repay this help... And yes, this is a highly underrated channel.
Thank you for this great explanation. I just lost you on one point: why does this matrix multiplication equal sig1*U1*V1T + sig2*U2*V2T + ... + sigm*Um*VmT?
Can someone explain how this completes the entire matrix multiplication? I somehow got lost in the columns of U and rows of V.
ua-cam.com/video/xy3QyyhiuY4/v-deo.html I am not sure what "[increasingly improves]" means. The singular values were stated to be decreasing. I was expecting something such as "[improves]".
@2:39 "the first column sigma1 u1 only multiplies the v1 transpose column, the 2nd column only multiplies the v2 transpose column and so forth": did he mean 'the first column sigma1 u1 only multiplies the v1 transpose ROW', like his hand motion shows? When I multiply the matrices by hand, it seems to be sigma1 u1 times the v1 transpose row.
Hello, do we pass U, S, or VT as input into the SINDy algorithm, or do we pass in the approximation of X obtained using U, S, and VT?
Hey, nice tutorial. So you are saying that if we pass an incomplete EDM as input, we can find the complete EDM with this approximation?
Hello once again (sorry, this will be the last one, I think). Is there somewhere I can get some pictures like the waveform you showed several timesteps of (to be processed by SINDy, I think), along with the PDE of the waveform? I want to use images with a known PDE to see if my compressed images will give something the same or similar :)
I don't understand why we are multiplying the columns of U by the rows of V... shouldn't it be the opposite?
The assumption n >> m is contrary to what we quite often have in data science. In many problems, the number of samples (here m) is bigger than the number of features (here n). In such a case, do we just take the transpose and keep going the same way? Or are there additional considerations (apart from swapping the interpretations of the eigenvectors, of course)?
This is so excellent. I just have one question: you keep saying that you will multiply a column of U with a row of V, but matrix multiplication is row by column. So how are you doing it the other way around?
What am I missing?
One of the best channels I have ever followed, appreciate it so much!
Just started watching this playlist, excellent explanations and a great way to promote while sharing knowledge; bought your book and can't wait to revisit w/the text!
Awesome, thank you!
Top rate education, I'm happily learning a lot.
Nicely done. Thank you
Hello, I can't find a way to compress all my images into one X matrix, the only examples I've seen are for doing SVD for one image at a time. How do I make one X matrix from images please?
In the example, m is the number of images you have and n is their pixel values. So m doesn't have to be large for the basis to represent your data.
It is like saying: if you have 2 photos, 2 dimensions are enough to represent the image features.
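In case it helps with the original question, here is one common way to build X (a sketch assuming all images have the same size; random arrays stand in for real images, which you would load with e.g. PIL or imageio):

```python
import numpy as np

# Stand-in "images": 10 grayscale images of size 64 x 48.
images = [np.random.rand(64, 48) for _ in range(10)]

# Flatten each image into a column vector and stack the columns,
# so X is (64*48) x 10, i.e. one image per column and n >> m.
X = np.column_stack([img.ravel() for img in images])
print(X.shape)  # (3072, 10)
```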
I’m coding up the exercises in his book in Python but somebody must have done this before. Does anybody know?
Shouldn't the u vectors be row vectors? How do you multiply a column vector by another column vector in the other matrix?
To the point. Answers all the important questions. I mean, you should come to the party knowing some lin alg, but great for the intermediate level.
It took me some time to accept the conclusion at 1:22. If I understood you right, we have an n-dimensional vector space in which our data can live, so it should be okay to use all n vectors of U as a new basis, unless we want dimensionality reduction and not just a matrix decomposition. Or am I just missing something?
For the last point, (u~)*transpose(u~) is still an identity matrix, but it is an n by n matrix instead of an r by r matrix. Am I right?
Thank you very much for these videos; they are very explanatory. Keep up the good work, we need your lessons for our academic improvement.
Waww ... 202 likes, 0 dislikes.
It shows the quality of the content.
Very nice lecture and clearly understandable... Thanks, Steve... 🤗
Lightbulbs are finally going off when it comes to the SVD. Can't thank you enough!
You have a talent for taking complicated topics and breaking them down into digestible pieces. That's the sign of a good teacher. Thank you.
What if u is a 2x2 matrix and V a 3x3 matrix? Then how would you calculate that outer product uV?
Watch out, Khan Academy, Steve Brunton is coming for ya! Seriously though, these videos are fantastic :)
Is the economy SVD not also an approximation of X? (Since we lose some columns of U)
I could take a nap during each topic and still not lose any important notes. You should get prepared before you start presenting.
I like your series. Also, the dark background makes my eyes feel more at ease than the white backgrounds other channels use.
One of the best videos on singular value decomposition. It not only covers the math but also the intuition. Thanks!
Thank you for making linear algebra less boring and really connected to data science and machine learning. This series is so much more interpretable than what my professor explains.
Hey I know it's been nine months but I just came across your comment and was curious. How'd the rest of your class go?
I do not understand why the sigma values are in descending order, or why the first sigma values are more important than the later ones.
How do you determine the truncation criterion? It might depend on the specific case, but is there a general guideline for it?
Superlative production! Lighting, sound, set, rehearsals, material: these videos are among the best productions on UA-cam. Even I understood some of it! :-)
This was, by far, the most comprehensible explanation of what the SVD is, mathematically and visually. The SVD is an incredible algorithm! Amazing how little you can keep and still understand the original system.
I find everything about these courses, even the way the board is arranged, just great. Many, many thanks for this wonderful explanation and all your effort to make it understandable and yet complete.
Anybody else unable to shake the fact that this guy looks like Dr. Harrison Wells?
Is truncating at 'r' similar to filtering out the highest frequencies in the FFT?
Very, very nice explanation and presentation. Thank you!
I found this hard to follow; it doesn't teach any ways to think about this intuitively.
Just in time for the new semester!
Why is the X matrix n by m, but your SVD is written out for X as an m by n matrix?
If the U columns after m don't matter, why is U unique?
Excellent explanation. Thank you very much.
I didn't get why your Sigma matrix (X in the eigenvector basis) has so many rows of zeros. Is it because of the non-square dimensions of X, or due to the precision limitations of the computer? I guess you just cut it off at some point?
Should have waited for a bit :P no explanation needed anymore..
Thank you so much for this easy to understand explanation. I was really struggling with the topic and this helped a lot. Thanks again 😊
Glad it was helpful!
Thank you for presenting us an amazing experience to learn about SVD!
What are you, some kind of eigengenius?
Help me out; are you right or left-handed?
Really appreciate your efforts. Wish you all the best!
@11:08 his face when he said Frobenius norm xD
I need to work on the SVD where n < m.
Absolutely, this is no problem. The SVD should work for any size matrix, I just considered the case where n>>m for simplicity. But if you want to use the notation in this lecture series, you can just transpose your matrix so that n>>m, compute the SVD, and transpose the result.
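If it helps, here is a small NumPy sketch of that transpose trick (my own illustration, with a made-up matrix size):

```python
import numpy as np

# A "wide" matrix: n = 30 rows, m = 5000 columns, so m >> n.
X = np.random.rand(30, 5000)

# Take the SVD of the tall transpose instead, where rows >> columns.
U, s, Vt = np.linalg.svd(X.T, full_matrices=False)

# Transposing back: X = (U S V^T)^T = V S U^T, so V holds the left
# singular vectors of the original X and U holds the right ones.
X_back = (U * s @ Vt).T
assert np.allclose(X_back, X)
```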
what drove you to watch this channel?
me: money
I am interested in knowing what software he is using to write.
Really, really nice explanation! You are a great teacher!
Thank you so much, sir, very helpful.
That's so helpful! thank you!
I am trying to identify dominant modes/ coherent structures and inner-outer interaction in the turbulent wall jets using PIV images. Can you give any suggestion?
Thank You Professor. Respects from India
You're welcome :)
He writes everything mirrored? WOW....
Super informative series on the SVD.
Awesome explanation
You are at the tip-top; I like your explanation.
I got confused at about 2:40. Shouldn't it be a row of the left matrix (U) times a column of the right matrix (V transposed)?
Yes, it is. But there are different ways to visualize matrix multiplication. In your visualization, row i of U times column j of V gives a scalar at position ij of the result matrix. In his visualization, each column of U times the corresponding row of V gives a rank-1 matrix, and the result is the sum of those rank-1 matrices.
This can be a bit tricky to visualize, especially since there is a diagonal matrix between U and V. So I would recommend actually writing this out and checking that it makes sense.
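To make that concrete, here is a quick NumPy check (my own sketch, not from the video) that U*Sigma*V-transpose equals the sum of the rank-1 matrices sigma_i * u_i * v_i^T:

```python
import numpy as np

# Random small matrix to check the rank-1 expansion of the SVD.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # economy SVD

# Sum over i of s[i] times (column i of U) outer (row i of Vt).
rank1_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))

# The sum of rank-1 matrices rebuilds X exactly (up to floating point).
assert np.allclose(rank1_sum, X)
```

The diagonal Sigma in the middle is what pairs column i of U with row i of V-transpose, which is why no cross terms appear.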
Please, could you tell me how I can run the economy SVD in Python? I always use up all my memory when I use the function "np.linalg.svd()". My matrix has 100 thousand rows and 27 columns. Thanks!
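If I understand the problem, the memory blowup likely comes from NumPy's default full_matrices=True, which for a 100000 x 27 matrix allocates a 100000 x 100000 U (roughly 80 GB in float64). Passing full_matrices=False computes the economy SVD instead, a sketch:

```python
import numpy as np

# A tall matrix like the one described: 100000 rows, 27 columns.
X = np.random.rand(100_000, 27)

# Economy SVD: U is only 100000 x 27 instead of 100000 x 100000.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(U.shape, s.shape, Vt.shape)  # (100000, 27) (27,) (27, 27)
```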