The Matrix Exponential

  • Published 5 Dec 2024

COMMENTS • 50

  • @cantetrodolfojuancarlos2124
    @cantetrodolfojuancarlos2124 4 years ago +24

    I watched a seminar by Gilbert Strang many years ago, and now it seems he's even better: he's a master teacher!

  • @sahilsingh1
    @sahilsingh1 5 years ago +21

    Always been a fan of Gilbert Strang, and his Linear Algebra book

  • @georgesadler7830
    @georgesadler7830 3 years ago +6

    This is a beautiful lecture on the matrix exponential by Dr. Gilbert Strang. Linear algebra flows into many areas of science and engineering. Control engineering uses linear algebra and differential equations to solve its complex linear systems.

  • @26623661
    @26623661 7 years ago +22

    He is 1000 times better than my instructor!! Finally some lectures with a blackboard lol

  • @GireeshwarDM
    @GireeshwarDM 5 years ago +12

    The explanation was excellent.
    Thank you, Sir. I hope I will meet you one day.

  • @kcbandyopadhyay6747
    @kcbandyopadhyay6747 2 years ago

    Incredibly Magnificent Teaching Guidance

  • @marshall7253
    @marshall7253 5 years ago +30

    "I'm not doing anything brilliant here"... it's beautiful how it comes together, though

  • @Titurel
    @Titurel 3 years ago

    Jimmy Stewart is my favorite Linear Algebra teacher!

  • @saeida.alghamdi1671
    @saeida.alghamdi1671 2 years ago +1

    Very valuable lecture! It should be the springboard for solving a system of first-order linear differential equations.
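
    A minimal sketch of that springboard, assuming NumPy/SciPy (the matrix A and initial value u0 are illustrative, not from the lecture): u(t) = e^(At) u(0) solves u' = A u.

    # Sketch: solve u' = A u with the matrix exponential, u(t) = e^(At) u(0).
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # any constant coefficient matrix
    u0 = np.array([1.0, 0.0])      # initial condition u(0)

    t = 0.5
    u_t = expm(A * t) @ u0         # u(t) = e^(At) u(0)
    print(u_t)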

  • @tobechukwublessed4274
    @tobechukwublessed4274 8 months ago

    Mhenn, genius-level guy 😵😵‍💫😵‍💫 he's too good. I want to be like him when I grow up

  • @romanemul1
    @romanemul1 6 years ago +107

    Poor blind people, they always thought he was speaking about e^80

  • @mrflibble5717
    @mrflibble5717 7 years ago +6

    Brilliant Lectures!

  • @thedujin
    @thedujin 5 years ago +1

    Saved me before the final, thank you!

  • @likisafile2568
    @likisafile2568 5 years ago +2

    Best of best lecture

  • @matthewfairfield9354
    @matthewfairfield9354 6 years ago +3

    Thank you, this is amazing !!!

  • @roycekraakman9846
    @roycekraakman9846 4 years ago

    Great instructor; really good

  • @gabe1006
    @gabe1006 5 years ago +2

    Brilliant lecture! How can I like this twice?

    • @mohab.m.metwally
      @mohab.m.metwally 4 years ago +3

      L = (like matrix) with eigenvalues Lev; liking it twice would be L*L = V * Lev^2 * V^(-1), for V the eigenvector matrix of L. So this comes down to solving for Lev and V of your personal liking matrix, which you have to provide for us.

  • @IoT_
    @IoT_ 1 year ago

    Why do you need to define the derivative of the exponential function using derivatives of its Taylor series terms? The Taylor series is itself derived using derivatives. You can just use the derivative of the exponential function itself and multiply the first Taylor series by the matrix A.
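
    For matrices, e^(At) is defined by its power series, so differentiating term by term is the standard way to justify the rule; a short sketch of that step (not a transcript of the lecture):

    \frac{d}{dt} e^{At}
      = \frac{d}{dt} \sum_{n=0}^{\infty} \frac{(At)^{n}}{n!}
      = \sum_{n=1}^{\infty} \frac{n\, A^{n} t^{n-1}}{n!}
      = A \sum_{n=1}^{\infty} \frac{(At)^{n-1}}{(n-1)!}
      = A\, e^{At}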

  • @omarkhalifa4621
    @omarkhalifa4621 6 years ago

    I like this instructor! Super!

  • @jesussantos8234
    @jesussantos8234 5 years ago

    Excellent professor, great explanation

  • @hamzaasif4775
    @hamzaasif4775 6 years ago

    Thank you, Thank you, Thank you!

  • @AS-il7fm
    @AS-il7fm 5 years ago

    13:53 e^At = I at t = 0

  • @fabiotp5068
    @fabiotp5068 1 year ago

    3:07 Since exp(At) is a matrix, in which order do I put those? A exp(At) or exp(At) A? It should matter, right?

    • @julianbruns7459
      @julianbruns7459 1 year ago +1

      Great question. In the Wikipedia article on the matrix exponential, under the topic "the exponential map",
      you can find the equality X exp(tX) = exp(tX) X (where X is a matrix, of course). However, I wasn't able to find a proof for this statement in the article. If I had to guess, I'd say the reason they commute is that if you multiply A with the series representation of exp(At), the only matrix you multiply with is A itself, which obviously commutes. (A quick numerical check is sketched after this thread.)

    • @fabiotp5068
      @fabiotp5068 1 year ago

      @@julianbruns7459 thanks man that makes sense. Appreciate your answer :)
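
      A quick numerical check of that commutation claim, assuming NumPy/SciPy (the matrix A here is an arbitrary illustration):

      # exp(At) = sum_n (At)^n / n! is built only from powers of A,
      # and A commutes with its own powers, hence with the whole sum.
      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(0)
      A = rng.standard_normal((3, 3))    # arbitrary illustrative matrix
      E = expm(A * 0.7)                  # exp(At) at t = 0.7
      print(np.allclose(A @ E, E @ A))   # True, up to floating point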

  • @muhammadseyab9032
    @muhammadseyab9032 7 years ago

    Excellent lecture

  • @nargizn3183
    @nargizn3183 4 years ago +1

    magnificent!

  • @alexandrosanastasiou1964
    @alexandrosanastasiou1964 4 years ago

    Amazing. Thank you.

  • @shivrajahirwar4334
    @shivrajahirwar4334 5 years ago

    nice explanation sir

  • @abdessalamderbale5432
    @abdessalamderbale5432 5 years ago

    Thank you

  • @ajsdoa6282
    @ajsdoa6282 6 years ago

    Thank you, very clear and understandable!

  • @jinshuenjameslo9647
    @jinshuenjameslo9647 2 years ago

    amazing

  • @sporefergieboy10
    @sporefergieboy10 6 years ago

    Sheldon Axler sweatin

  • @rohanyuttham
    @rohanyuttham 5 years ago +1

    saand teeps and trixx for MIT pleax

  • @dariodrigo3778
    @dariodrigo3778 4 years ago

    Yes, but what the hell is a series with matrices as terms...?

    • @Eric-xh9ee
      @Eric-xh9ee 3 years ago +1

      A power series

    • @muttleycrew
      @muttleycrew 2 years ago

      It can be summed in closed form when the powers of the matrix repeat. Consider any of the 2x2 Pauli matrices, for instance the one below.
      0 1
      1 0
      Square that and you get the identity. Once you have the identity as the square, the cube is the original matrix back again. The fourth power of that matrix is the identity again. You are collecting and distributing powers against two different diagonals. What you end up with, after exponentiating and collecting power series terms, is the power series for cosh(t) on the main diagonal and the power series for sinh(t) on the off-diagonal.
      cosh(t) sinh(t)
      sinh(t) cosh(t)
      The Pauli spin matrices, denoted σ, are usually expanded using exp(i σ θ/2), where i = sqrt(-1) of course.
      That gets you half-integer spins instead.
      cos(θ/2)   i sin(θ/2)
      i sin(θ/2) cos(θ/2)
      So yes, as Eric said, you get a power series. The point is that some very special matrices, like the Pauli spin matrices, hand you the power series for one or more familiar elementary functions. That is, so to speak, why the Pauli spin matrices are what they are. Each of them gives you a familiar-looking power series when you exponentiate it.
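
      A numerical check of both collapses above, assuming NumPy/SciPy (the values of t and theta are arbitrary):

      # sigma_x^2 = I, so the even powers collect into cosh(t) on the diagonal
      # and the odd powers into sinh(t) off the diagonal.
      import numpy as np
      from scipy.linalg import expm

      sigma_x = np.array([[0.0, 1.0],
                          [1.0, 0.0]])
      t = 0.3
      rhs = np.array([[np.cosh(t), np.sinh(t)],
                      [np.sinh(t), np.cosh(t)]])
      print(np.allclose(expm(t * sigma_x), rhs))                 # True

      # With the factor i*theta/2 the same powers collect into cos and i*sin.
      theta = 0.9
      rhs2 = np.array([[np.cos(theta / 2), 1j * np.sin(theta / 2)],
                       [1j * np.sin(theta / 2), np.cos(theta / 2)]])
      print(np.allclose(expm(1j * sigma_x * theta / 2), rhs2))   # True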

  • @dave1stfan
    @dave1stfan 5 years ago

    nequation

  • @ObitoSigma
    @ObitoSigma 7 years ago +1

    Is the expansion series method the only way we can compute e^At when given only A? How could we use the exponential rule e^((B+C)t) (where A = B+C) to compute e^At?

    • @homerobse
      @homerobse 6 years ago +1

      As he showed on the second blackboard, when you have n independent eigenvectors, you can compute e^At using e^At = V e^(Lambda*t) V^(-1),
      with V being the matrix of eigenvectors (each column is one of the eigenvectors of A) and Lambda being the diagonal matrix of eigenvalues.
      As for using A = B+C, I found this Wikipedia page (en.wikipedia.org/wiki/Matrix_exponential#The_exponential_of_sums) saying that for matrix exponentials you can only equate e^((B+C)t) = e^(Bt) * e^(Ct) if B and C commute (meaning that BC = CB).
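
      A numerical version of both points, assuming NumPy/SciPy (the matrices here are arbitrary illustrations, not from the lecture):

      # 1) e^(At) = V e^(Lambda t) V^(-1) when A has n independent eigenvectors.
      # 2) e^(B+C) = e^B e^C only holds in general when BC = CB.
      import numpy as np
      from scipy.linalg import expm

      A = np.array([[1.0, 2.0],
                    [0.0, 3.0]])
      t = 0.4
      lam, V = np.linalg.eig(A)          # eigenvalues Lambda and eigenvectors V
      via_eig = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)
      print(np.allclose(via_eig, expm(A * t)))            # True

      B = np.array([[0.0, 1.0], [0.0, 0.0]])
      C = np.array([[0.0, 0.0], [1.0, 0.0]])
      print(np.allclose(B @ C, C @ B))                    # False: BC != CB
      print(np.allclose(expm(B + C), expm(B) @ expm(C)))  # False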

  • @homeomorfisme
    @homeomorfisme 3 years ago +1

    Wink twice if your grandson forced you to make a yt video.

  • @peterevans3310
    @peterevans3310 5 years ago

    "The exponential pops a t in"

  • @godthisisannoying
    @godthisisannoying 6 years ago

    Is he coming on to me? Or is he asking for help in Morse? Just kidding, brilliant lecture.

  • @saicharanmarrivada5077
    @saicharanmarrivada5077 3 years ago

    8:00 lol

  • @mohamed.m.saleh.m.m
    @mohamed.m.saleh.m.m 8 months ago