Thank you! It is great when people share this kind of explanatory gift with the world. Alongside the Gilbert Strang MIT lectures, these were the videos that helped me most.
Wow, I'm flattered to be in such company!
Great vids, thanks! Hope you'll get many more subscribers and views!
Is there not a playlist for your linear algebra videos? I can't seem to find one.
There is Intro to Numerical Mathematics, ua-cam.com/play/PLvUvOH0OYx3AZ_WXhkLXo3JLSeGT2wLj3.html. It's not specific to linear algebra.
@@TobyDriscoll Ah, thank you!
Are you saying that for any arbitrary linear transformation it is *always* possible to find an orthonormal basis for the input space which maps to an orthogonal basis for the output space? How would you justify that?
Yes, as long as you allow length scaling along the output axes.
Look for a proof in Trefethen and Bau, or Golub and Van Loan.
@@TobyDriscoll Thanks for the reply. I believe I allowed for scaling when I switched from "orthonormal" to just "orthogonal". I'll see if I can find a copy of one of those books.
@@godfreypigott It's not an especially deep proof. You maximize the norm of Av over all unit vectors v, then use induction on the dimension.
Isn't S supposed to be square, with U and V non-square (see your definitions at the beginning)?
There are two variants. In the "full" type, U and V are square, while S has the same shape as A. In the "reduced" or "thin" type, S is square and either U or V is rectangular. Sometimes we use hats over the variables in the thin type to help distinguish them.
Hi, the definition says USV*. Is (V*) the conjugate of the matrix? I have found some books defining it as (V^H), which is the Hermitian, i.e. the conjugate transpose. Which one is right? Are they the same?
hema raju They are the same.
U and V have different dimensions, so U^T U cannot be equal to V^T V. They are both identity matrices, but of different sizes.
I just wanted to add the same comment😅
Great tutorial. But why do most programs, e.g. MATLAB and Python, give square U and V matrices, whereas in most conceptual teachings of the SVD, U is not square? I mean, the number of singular values/vectors should be less than the original dimension, right?
E.g. public.lanl.gov/mewall/kluwer2002.html
That's a fair question, although most books I know introduce the square form first. Perhaps the lesson has been learned: in Julia, the "thin" form is the default.