Singular value decomposition

  • Published 12 Jan 2025

COMMENTS •

  • @trummelbummelfriedlich4658 8 years ago +8

    Thank you! It is great when people share this kind of explanatory gift with the world. Alongside the Gilbert Strang MIT lectures, these were the videos that helped me most.

    • @TobyDriscoll 8 years ago +4

      Wow, I'm flattered to be in such company!

  • @geogeo14000 4 years ago

    Great vids, thanks! Hope you'll get many more subscribers and views!

  • @MasayoMusic 5 years ago +1

    Is there not a playlist for your linear algebra videos? I can't seem to find one.

    • @TobyDriscoll 5 years ago +1

      There is Intro to Numerical Mathematics, ua-cam.com/play/PLvUvOH0OYx3AZ_WXhkLXo3JLSeGT2wLj3.html. It's not specific to linear algebra.

    • @MasayoMusic 5 years ago

      @TobyDriscoll Ah, thank you!

  • @godfreypigott 3 years ago

    Are you saying that for any arbitrary linear transformation it is *always* possible to find an orthonormal basis for the input space which maps to an orthogonal basis for the output space? How would you justify that?

    • @TobyDriscoll 3 years ago +1

      Yes, as long as you allow length scaling along the output axes.

    • @TobyDriscoll 3 years ago +1

      Look for a proof in Trefethen and Bau, or Golub and Van Loan.

    • @godfreypigott 3 years ago

      @TobyDriscoll Thanks for the reply. I believe I allowed for scaling when I switched from "orthonormal" to just "orthogonal". I'll see if I can find copies of one of those books.

    • @TobyDriscoll 3 years ago

      @godfreypigott It's not an especially deep proof. You maximize the norm over all unit vectors, then use induction on the dimension.
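
      A minimal sketch of that argument, following the setup in Trefethen & Bau (the notation here is assumed, not taken from the video). Let

        \sigma_1 = \max_{\|v\|_2 = 1} \|Av\|_2 ,
        \qquad u_1 = \frac{A v_1}{\sigma_1} ,

      where the unit vector v_1 attains the maximum. Extend v_1 and u_1 to unitary matrices V_1 and U_1. Then

        U_1^* A V_1 = \begin{pmatrix} \sigma_1 & w^* \\ 0 & B \end{pmatrix} ,

      and maximality of \sigma_1 forces w = 0: otherwise applying this matrix to the unit vector (\sigma_1; w) / \|(\sigma_1; w)\|_2 yields norm at least \sqrt{\sigma_1^2 + \|w\|_2^2} > \sigma_1. Induction on the smaller block B then gives the full factorization A = U S V^*.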

  • @ZAUpdates 5 years ago +1

    Isn't S supposed to be square, with U and V non-square (see your definitions at the beginning)?

    • @TobyDriscoll 5 years ago +1

      There are two variants. In the "full" type, U and V are square, while S has the same shape as A. In the "reduced" or "thin" type, S is square and either U or V is rectangular. Sometimes we use hats over the variables in the thin type to help distinguish them.
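
      A quick NumPy check of those shapes (the matrix here is invented example data, nothing from the video; NumPy returns the singular values as a vector, so the full S would be the 5-by-3 diagonal matrix built from it):

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 3))                 # m = 5, n = 3

        # Full form: U and V are square, S has the same shape as A.
        U, s, Vh = np.linalg.svd(A, full_matrices=True)
        print(U.shape, s.shape, Vh.shape)               # (5, 5) (3,) (3, 3)

        # Thin form: S is square (k = min(m, n)), U is rectangular.
        Ut, st, Vht = np.linalg.svd(A, full_matrices=False)
        print(Ut.shape, st.shape, Vht.shape)            # (5, 3) (3,) (3, 3)

        # The thin form still reconstructs A exactly.
        print(np.allclose(A, Ut @ np.diag(st) @ Vht))   # True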

  • @hemaraju8498 7 years ago +1

    Hi, the definition says USV*. Is (V*) the conjugate of the matrix V? I have found some books writing it as (V^H), the Hermitian, i.e. conjugate, transpose. Which one is right? Are they the same?

  • @Enerdzizer 5 years ago +1

    U and V have different dimensions, so U^T U cannot equal V^T V. They are both identity matrices, but of different dimensions (see the check after this thread).

    • @KasparJohannes 5 years ago +1

      I just wanted to add the same comment😅
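
      A quick NumPy check of that point, using the full form (the matrix is invented example data):

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.standard_normal((5, 3))

        U, s, Vh = np.linalg.svd(A, full_matrices=True)   # U is 5x5, V is 3x3
        V = Vh.T

        print(np.allclose(U.T @ U, np.eye(5)))   # True: the 5x5 identity
        print(np.allclose(V.T @ V, np.eye(3)))   # True: the 3x3 identity
        # Both products are identities, but of different sizes, as noted above.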

  • @flyerwolftang 7 years ago +2

    Great tutorial. But why do most programs, e.g. Matlab and Python, give square U and V matrices, whereas most conceptual teachings of the SVD show a non-square U? I mean, the number of singular vectors should be less than the original dimension, right?
    E.g. public.lanl.gov/mewall/kluwer2002.html

    • @TobyDriscoll 7 years ago +1

      That's a fair question, although most books I know introduce the square form first. Perhaps the lesson has been learned: in Julia, the "thin" form is the default.
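
      A small NumPy illustration of why the square form can still be useful (example data invented here): the extra columns of the full U complete an orthonormal basis, spanning the directions that A can never produce, i.e. its left null space.

        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.standard_normal((5, 3))

        U, s, Vh = np.linalg.svd(A)    # NumPy's default is the full (square-U) form
        U_extra = U[:, 3:]             # the columns beyond min(m, n) = 3

        # The extra columns are orthogonal to the range of A.
        print(np.allclose(U_extra.T @ A, 0))   # True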