I watched a seminar of Gilbert Strang's many years ago, and now it seems he's even better: he's a master teacher!
Always been a fan of Gilbert Strang, and his Linear Algebra book
This is a beautiful lecture on the matrix exponential by Dr. Gilbert Strang. Linear algebra flows into many forms of science and engineering. Control engineers use linear algebra and differential equations to solve their complex linear systems.
He is 1000 times better than my instructor!! Finally I get some lectures with a blackboard lol
Explanation was excellent.
Thank you Sir. I hope I will meet you one day.
Incredibly Magnificent Teaching Guidance
"I'm not doing anything brilliant here" ... it's beautiful how it comes through, though
Jimmy Stewart is my favorite Linear Algebra teacher!
Very valuable lecture! It should be the springboard for solving a system of first-order linear differential equations.
Mhenn, genius-level guy 😵😵‍💫😵‍💫 he's too good. I want to be like him when I grow up.
Poor blind people, they always thought he was speaking about e^80
LOOOOOOOOOOOOL
No it is "e 2 The 8 t"
lullll
Don't kill us with laughter. I beg you.
Brilliant Lectures!
Saved me before the final, thank you!
Best of best lecture
Thank you, this is amazing !!!
Great instructor; really good
Brilliant lecture! How can I like this twice?
Let L = (your "like" matrix) with eigenvalue matrix Λ. Liking it twice would be L·L = V Λ² V⁻¹, with V the eigenvector matrix of L, so this comes down to solving for Λ and V of your preferred liking matrix, which you'd have to provide for us.
Why do you need to define the derivative of the exponential function using derivatives of its Taylor series terms? The Taylor series is itself derived using derivatives. You can just use the derivative of the exponential function itself and multiply the Taylor series by the matrix A.
I like this instructor! Super!
excellent professor great explanation
Thank you, Thank you, Thank you!
13:53 e^(At) = I at the start (t = 0)
3:07 Since exp(At) is a matrix, in which order do I put those? A·exp(At) or exp(At)·A? It should matter, right?
Great question. Under the Wikipedia article on the matrix exponential, under the topic "the exponential map", you can find the equality X exp(tX) = exp(tX) X (where X is a matrix, of course). However, I wasn't able to find a proof of this statement in the article. If I had to guess, I'd say the reason they commute is that if you multiply A with the series representation of exp(At), every term of that series is a scalar times a power of A, and powers of A obviously commute with A.
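A quick numerical sanity check of that commutation claim (a sketch, assuming numpy and scipy are available; the matrix A below is made up for illustration):

```python
import numpy as np
from scipy.linalg import expm

# A commutes with every term of I + At + (At)^2/2! + ...,
# since each term is a scalar times a power of A.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.7
E = expm(A * t)

# Both orders agree up to floating-point error.
assert np.allclose(A @ E, E @ A)
```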
@julianbruns7459 Thanks man, that makes sense. Appreciate your answer :)
Excellent lecture
magnificent!
Amazing. Thank you.
nice explanation sir
Thank you
Thank you, very clear and understandable!
amazing
Sheldon Axler sweatin
saand teeps and trixx for MIT pleax
Yes, but what the hell is a series with matrices as terms...?
A power series
It can be convergent when each matrix in the infinite series repeats. Consider any of the 2x2 Pauli matrices, for instance the one below.
0 1
1 0
Square that and you get the identity; the cube is the original matrix back again; the fourth power is the identity again, and so on. So the even powers all collect on the main diagonal and the odd powers all collect on the off-diagonal. What you end up with, after exponentiating and collecting power series terms, is the power series for cosh(t) on the main diagonal and the power series for sinh(t) on the off-diagonal.
cosh(t) sinh(t)
sinh(t) cosh(t)
The Pauli spin matrices, denoted σ, are usually exponentiated as exp(i σ θ/2), where i = √(−1), of course. That gets you half-integer spins instead.
cos(θ/2) i·sin(θ/2)
i·sin(θ/2) cos(θ/2)
So yes, as Eric said, you get a power series. The point is that some very special matrices, like the Pauli spin matrices, will give you the power series for one or more familiar transcendental functions. That is, so to speak, why the Pauli spin matrices are what they are: each of them hands you a familiar-looking power series when you exponentiate it.
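Both closed forms above are easy to verify numerically (a sketch, assuming numpy and scipy; the values of t and θ are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

# sigma_x: squares to the identity, so its exponential closes up.
sx = np.array([[0.0, 1.0],
               [1.0, 0.0]])

# exp(t*sx) = [[cosh t, sinh t], [sinh t, cosh t]]
t = 0.5
E = expm(t * sx)
assert np.allclose(E, np.array([[np.cosh(t), np.sinh(t)],
                                [np.sinh(t), np.cosh(t)]]))

# exp(i*theta/2 * sx) = [[cos(th/2), i sin(th/2)], [i sin(th/2), cos(th/2)]]
theta = 1.2
U = expm(1j * theta / 2 * sx)
c, s = np.cos(theta / 2), np.sin(theta / 2)
assert np.allclose(U, np.array([[c, 1j * s],
                                [1j * s, c]]))
```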
Is the series expansion method the only way we can compute e^(At) when given only A? How could we use the exponential rule e^((B+C)t) (where A = B + C) to compute e^(At)?
As he showed on the second blackboard, when you have n independent eigenvectors, you can compute e^(At) using e^(At) = V e^(Λt) V^(-1), with V the matrix of eigenvectors (each column is an eigenvector of A) and Λ the diagonal matrix of eigenvalues.
As for using A = B + C: I found this Wikipedia page (en.wikipedia.org/wiki/Matrix_exponential#The_exponential_of_sums) saying that for matrix exponentials you can only equate e^((B+C)t) = e^(Bt) · e^(Ct) if B and C commute (meaning BC = CB).
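Both points can be checked numerically (a sketch, assuming numpy and scipy; the example matrices are my own, chosen so that A has distinct real eigenvalues and B, C don't commute):

```python
import numpy as np
from scipy.linalg import expm

# Diagonalization route: A = V Lambda V^{-1}, so e^{At} = V e^{Lambda t} V^{-1}
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # eigenvalues -1 and -2
t = 0.3
lam, V = np.linalg.eig(A)
E_diag = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)
assert np.allclose(E_diag, expm(A * t))

# e^{(B+C)t} = e^{Bt} e^{Ct} fails when BC != CB:
B = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(B @ C, C @ B)
assert not np.allclose(expm((B + C) * t), expm(B * t) @ expm(C * t))
```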
Wink twice if your grandson forced you to make a yt video.
"The exponential pops a t in"
Is he coming on to me? or is he asking for help in morse? just kidding, brilliant lecture.
8:00 lol
❤