I don't know what he's doing differently, but everything is easier when I listen to this guy. It might be the way he repeats certain phrases or makes everything less abstract. I'm currently struggling in my Linear Algebra class, so seeing this video and realizing that it isn't as hard as my current professor makes it seem is a huge relief. This guy is great. Thanks MIT.
Did you pass your class with a good mark?
@@aibattileubaiuly7830 A+ boiiiiii
Damn, I share the exact same feeling, bro. My linear algebra prof thought the outside was way too noisy, so she always closed the classroom door. It was super stuffy in there, and I felt like I was running out of O2 and got sleepy. Everything I've learned is from Mr. Gilbert.
The difference, I think, is that this guy actually teaches linear algebra as part of a bigger picture. Most math profs just teach this stuff as a series of theorems that don't really have anything to do with each other, except that you need theorem t to answer questions a), b), c), etc. But he's trying to explain linear algebra as a holistic system.
The only difference is his passion and love for the subject and his quest to impart the same to his students. Without passion, love, and that quest, it would just be lecturing the text with articulation, which might bring clarity but does not bind the audience.
Fun fact, not mentioned in the video:
If the matrix A is singular, then the process is losing information because two linearly independent vectors in the input space could be mapped to the same output vector, making them indistinguishable. When the professor discussed the matrix for ordinary differentiation of a polynomial function, notice that it was singular. This corresponds to the arbitrary constant of integration i.e. information will be lost, so you save it ahead of time via initial conditions so you can reconstruct the input vector.
Linear algebra is amazing for this kind of insight.
Cool observation!
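To make the singular-differentiation point concrete, here is a minimal numpy sketch (the 4x4 matrix below, for the basis {1, x, x^2, x^3}, is my own illustration, not from the video):

```python
import numpy as np

# Differentiation on cubics in the basis {1, x, x^2, x^3}:
# d/dx maps coefficients (c0, c1, c2, c3) to (c1, 2*c2, 3*c3, 0).
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

print(np.linalg.matrix_rank(D))   # 3, not 4 -> D is singular
print(np.linalg.det(D))           # 0.0

# Two different polynomials, x^2 and x^2 + 5, differ only by a constant...
p = np.array([0, 0, 1, 0], dtype=float)   # x^2
q = np.array([5, 0, 1, 0], dtype=float)   # x^2 + 5
print(D @ p, D @ q)   # same output [0. 2. 0. 0.]: the constant is lost
```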
Gilbert Strang, the name that all college students around the world know. He's a genius and a great teacher
Don't talk bullshit, bro.
Oscar Hernandez ??
Now, 06/02/2021, this is true. Hello, Afonso.
@@os_car.s2768 Haha, do you dislike Gilbert Strang?
I'm doing a master's in Germany and we were supposed to go through all the lectures as an introduction to a course called Advanced Mathematics for Engineers. At the beginning I was quite reluctant to do that, but after the first lecture I couldn’t wait for the next one. Professor Strang is indeed a great teacher. Thank you MIT for sharing these splendid lectures!
Sorry, what was your master's in? I'm really eager to know :)
@@rosadovelascojosuedavid1894 Master of baiting.
These lectures make me so regret that I was not fortunate enough to go to MIT and sit in this legend's classes.
"if you cannot explain what you learnt to a little child in the way that she/he can understand easily, you definitily didint understood what you a trying to explain". This Professor Understood deepily lenear algebra. Congrates and thanks
Are you saying you can show this video to a little child and they will understand it easily?
If you think you fully understood this lecture the first time you watched it, it probably means either that you knew this subject very well before you watched it, or that you don't understand the subject as deeply as you think. Please watch it again and think about why everything he said is true. Then you will have more unanswered questions in your mind after watching this lecture.
Well, that's a nice saying, but in the end it's not true for some topics.
This guy is something else. I go to my linear algebra class and usually come out feeling more confused than when I went in; then I watch these videos and things make sense in minutes. It's not just what he knows; he knows how to explain it. Simply brilliant.
Wow what an amazing professor. If only every math professor was as clear as him!
THANK YOU MIT!!
Prof. Strang, you may not be a true physicist, but you are a true magician.
He explains it so clearly it's amazing. Instead of just throwing around definitions, he actually explains and gives examples of why and how it works, in a simplified manner.
Is multivariable calculus a prerequisite to this course?
@@anonym498 no
Gilbert Strang is one of the best teachers in the world. The keywords and key sentences he uses in his lectures make it simple to understand. Loved this. ❤❤
I've been watching all the preceding videos, understanding most of what happens, but I always found that I had no link between what I saw and what I already knew. I could not integrate the new set of concepts into my current knowledge. It was quite puzzling. Now, with the first 18 minutes of this lecture, everything just clicked. It's like I was floating in space, having a hard time navigating, but could still move from one point to another. Now I've got gravity. This is quite impressive.
sounds like you found a basis
This is how you teach. I wish professors would learn how to teach like this man. Gawd it makes sense now.
At BYU right now. Chem E.
Studying this material for the past few days now to get ahead. This lecture is an excellent tool for taking all these concepts and placing them in a clear and simple context.
I have greatly benefited from listening to this lecture. Thank you.
12 years later, how is it going?
Feels like 3b1b's video was inspired by this lecture!
I switched from engineering to math/physics during college, and so was taught linear algebra with and without coordinates. One course, entirely with matrices. Another course almost entirely without matrices. What a beautiful topic.
Pretty incredible to be sitting in my living room while learning from a world class professor. Thank you MIT for sharing these!
This professor has always been able to clear my confusion on these higher level math courses.
This is a very nice explanation of linear transformations and their matrices; thanks once again to the godfather of linear algebra, Dr. Gilbert Strang. I took introductory linear algebra at the University of Maryland Baltimore County in the late 1980s, and my professor was nothing close to what Dr. Strang is. This MIT legend is a rare find.
This guy is so great; my linear algebra teacher is horrible. He actually makes it kind of interesting!
I think the reason Professor Strang's courses are so popular lies with his personality. He is very considerate, so he tries all means to find a way to explain linear algebra concepts in an easy-to-understand way. It is the same in the workplace: nice people who always take other people's feelings into account tend to provide popular products and services. Some professors offer hard-to-understand, unpopular courses because they don't care about how their students feel at all; they mainly care about whether they are promoted or receive more funding.
Thank you! I'm learning good stuff for free.
If I had tattoos, I would make a tattoo with Gilbert Strang's name on it:) Amazing guy, amazing lecture, amazing MIT
Thanks to MIT for explaining that one. I can think of an explanation for the origin of that phrase. You see, the Summer gets "tricked" into thinking it's gonna last. But alas! The Winter inevitably comes and the few days of its warm reign are over. And just like this short-lived summer period, the Indians got tricked by their conquerors too. Thus the term, "Indian Summer".
And Thank You MIT for these wonderful videos. Keep up the great work!!
Great Profs teaching in a great way!
The choice of basis is important for a linear transformation. It actually becomes quite easy to use a convenient basis, as we can represent things in polar as well as Cartesian coordinates.
We humans are breaking things and making things.
This lecture is the greatest one ever
That is correct, it's not a linear transformation.
For me, it helps to think of this in terms of similar triangles: T(2v) takes two times the original vector but only adds one times v0 to it. For the resultant vector to be proportional (i.e., =2T(v) ), one would have to add 2v0 to T(2v).
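A quick numpy check of that argument (v0 and v below are made-up values): for a shift T(v) = v + v0, doubling the input picks up only one copy of v0, so T(2v) ≠ 2T(v).

```python
import numpy as np

# Shift transformation T(v) = v + v0, with made-up example values.
v0 = np.array([1.0, 2.0])
T = lambda v: v + v0

v = np.array([3.0, 4.0])
print(T(2 * v))   # [ 7. 10.] : 2v picks up only one copy of v0
print(2 * T(v))   # [ 8. 12.] : proportionality fails, so T is not linear
```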
@valtih1978 I'm not exactly sure what portion of the lecture you're referring to, but if you're talking about the later 3/4, I think he means that A acts on vectors expressed in the basis {v1, v2, ..., vn} and produces outputs expressed in a new basis {w1, w2, ..., wm}, where T(v1), T(v2), etc. are the images used to express the original basis vectors {v1, v2, etc.} "in terms of" {w1, w2, etc.}. So the system of equations is just a generalization of that concept.
this guy is amazing, thank you from Spain!!
We have this subject at the moment and I don't understand anything about it: low self-confidence. But watching this lecture, I understand everything again.
finally, Zodiark EX Prog
Very nice explanation. From now on, I am a big fan of his.
worth spending time to see this wonderful video
wtf this fella just said see ya next monday after thanksgiving and it is actually this weekend for me o.O
There are essential key points behind the example at 36:59 that connect to eigenvalues/eigenvectors, but they're not well explained; I wish the professor could have extended it right there.
I go to Texas Tech University, and amidst COVID-19 I have been having a hard time with linear algebra; this was a nice relief. My question: I completely get transforming the basis of numbers, and I want to know if this applies to unit conversion with physical properties, like doing stoichiometry to convert between units (same plane), or the relationships we get from integration and differentiation, like mass and force. Thank you for this resource; incredibly helpful.
The "derivative is linear" thing blew my mind.
Professor, really respect you ! Thank you and God bless :)
Wow, yea this guy is a great teacher!
first thought: that is some good ass chalk.
Same
The famous MIT chalk
Thank you!..every thing makes sense now!
Professor Strang rules!
A great video! I have to note that the T(0)=0 "proof" is a bit funny :)
this lecture made my night before the midterm
"Every linear transformation is associated with a matrix":- Gilbert Strang.
An 'Indian summer' is used in the UK to describe a summer that lasts later than usual.
In the states, it's the warm spell after the first frost.
Ahh! Suddenly everything is easy to understand, there is great enlightenment. In the previous lessons I sometimes had difficulties and now the dark forest is thinning out.
A teacher you'd really like to have.
32:24 HE WORKS
He gives a lot of examples; very helpful, still useful now.
this guy is a legend
44:50: why is a11, a21, ..., am1 the first column of matrix A?
Isn't this equation right? T(v1) = a11*w1 + a21*w2 + ... + am1*wm
In General, A = [column_1 column_2 ... column_n]
input coordinate vector v = [v1 v2 ... vn]^T
So:
output vector w = A * v = (v1 * column_1) + (v2 * column_2) + ... + (vn * column_n)
i.e. an n-term linear combination of m-dimensional vectors.
Now imagine v1 = 1 and v2 through vn being zero. Then column 1 is the vector you get when you pass in a unit vector according to your input basis. This works for v2 = 1 while everything else is zero, and so on, and so on.
Intuition: say you have a 2x3 rectangular matrix, A, and a 3x1 input vector v. The shape of the output vector w is 2x1. Looking back at A, it is composed of 3 columns, and each column is 2x1, which matches the 2x1 output. What's going on? The output vector is a linear combination of the columns of A. This is always true.
If you have any other questions or want me to clarify something, lemme know
------------
Fun fact, not mentioned in the video:
If the matrix A is singular, then the process is losing information because two linearly independent vectors in the input space could be mapped to the same output vector, making them indistinguishable. When the professor discussed the matrix for ordinary differentiation of a polynomial function, notice that it was singular. This corresponds to the arbitrary constant of integration i.e. information will be lost, so you save it ahead of time via initial conditions so you can reconstruct the input vector.
Linear algebra is amazing for this kind of insight.
Best wishes, friend
@@ozzyfromspace But what happens when an input basis vector isn't one element 1 and the others 0 (the standard basis)?
@@benjaminlin7918 As in the comment above,
A = [column_1 column_2 ... column_n]. Take, for example, (v1 v2 ... vn) as my standard basis vectors ([1 0 0 ...]^T, [0 1 0 0 ...]^T, ...).
Any coordinate vector (c1, c2, ..., cn) represents the corresponding vector v = c1*v1 + c2*v2 + ...
Say the c's are my coordinates in the input basis, and I want to change them to the new coordinate system (the output basis). I am choosing eigenvectors as the output basis (w1, w2, w3, ..., wm), because that is usually the basis of interest, but it can be anything (even exactly the input basis). I want my new coordinates (d1, d2, d3, ..., dm), which are the coefficients of the linear combination of eigenvectors that gives the same vector.
d = A * c = (c1 * column_1) + (c2 * column_2) + ... + (cn * column_n)
Since this is a linear transformation, we only need to know what the new coordinates look like if we transform just v1, just v2, and so on.
This is equivalent to saying we know what (1,0,0,0,...), (0,1,0,0,...), ... will become under the linear transformation into the output basis.
Thus we can say that
column_1 = the coordinates of v1, i.e., (1,0,0,...), transformed into the output basis.
Hence we can write this transformation as a linear combination of eigenvectors (my output basis) with column_1 as the coefficients (my new coordinates). Similarly, I now know what the remaining columns of A represent. Thus I know my matrix completely.
If you are with me up to this point, then for any arbitrary c's we can get the new d's in the output basis, as A*c = d.
Your question: what happens when the input basis isn't one element 1 and the others 0 (the standard basis)? If I know my input and output bases and what my (1,0,0,...), (0,1,0,...), and so on look like in the new coordinates (the output basis), I can just write those coordinates as the columns of my matrix A, and I can then find the transformation for any arbitrary c's. The whole point is to find A such that we get what you asked for.
This should answer your question.
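A small numpy sanity check of this construction (M, V, and W below are arbitrary made-up examples, not from the lecture): build A one column at a time from the images of the input basis vectors, then verify that A @ c gives the output-basis coordinates of T(v) for any c.

```python
import numpy as np

M = np.array([[2.0, 1.0],      # the underlying linear map T, acting on R^2
              [0.0, 3.0]])
V = np.array([[1.0, 1.0],      # columns = input basis v1, v2
              [0.0, 1.0]])
W = np.array([[1.0, 0.0],      # columns = output basis w1, w2
              [1.0, 1.0]])

# Column i of A = coordinates of T(v_i) in the output basis,
# i.e. solve W @ (column i) = M @ v_i.  In matrix form: A = W^{-1} M V.
A = np.linalg.solve(W, M @ V)

c = np.array([0.7, -1.3])              # arbitrary input coordinates
v = V @ c                              # the actual vector they represent
d = A @ c                              # claimed output coordinates
print(np.allclose(W @ d, M @ v))       # True: A @ c gives T(v) in the w basis
```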
It's because he leaves all the written-out proofs to the book and gets to the point, explaining what he's doing, whereas most teachers explain something by writing down a proof that no one really understands.
Damn it, why aren't there teachers who try to explain mathematics simply... Why do you think so many students fail in Mexico? Honestly, this Dr. Strang knows a lot! Thanks for making these videos!! Greetings from Mexico!
:0
That's right. Gilbert Strang is making an impact in Mexico even 9 years after your comment. And this truly helps me a lot.
bro helped me beat Zodiark. thanks man
I am not clear about Prof Strang's explanation at 37:22. He said we choose the eigenvector basis [0 1]^T and [1 0]^T, but this is the standard basis in R^2, right?
Does he mean that if the input and output basis are the eigenvector basis, then the transformation matrix A is just Λ?
Thanks.
That's exactly what it means. If x_i is an eigenvector, and lambda_i the associated eigenvalue, then Ax_i = lambda_i * x_i. Notice the thing on the right only includes x_i; it doesn't need any contribution from any of the other eigenvectors (basis vectors). Thus, when you build the matrix A by putting into its columns what you want it to do to the basis vectors, when the basis vectors are independent eigenvectors, you will get the eigenvalue matrix as a result.
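A quick numpy check of this (the symmetric matrix M below is a made-up example): with independent eigenvectors as both input and output basis, the matrix of the transformation comes out as the diagonal eigenvalue matrix Λ.

```python
import numpy as np

# Made-up example: a matrix with distinct eigenvalues.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, X = np.linalg.eig(M)   # columns of X are eigenvectors

# In the eigenvector basis, the matrix of the transformation is
# X^{-1} M X, which should be diagonal with the eigenvalues on it.
Lambda = np.linalg.solve(X, M @ X)
print(np.round(Lambda, 10))     # diagonal: eigenvalues 1 and 3, in some order
```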
Thank You Professor!!!
@RingWarrior12 maybe it's for you, it helped me a lot!
This gives the easiest geometric interpretation of why singular matrices have no inverse. These kinds of matrices take n-dimensional vectors as input, and the output is a vector of smaller dimension; different inputs can have the same output, so you can't have an inverse transformation, because you don't know which of all those original vectors corresponds to the vector in the smaller-dimensional space. More important, that's why you need an initial condition in integrals to recover the original function: the derivative performs this same process, and with the inverse (the integral) alone you can't tell which was the original function.
Sorry, but I beg to differ: in a linear transformation, an input coordinate can have an output of smaller or larger dimension, as long as the latter is not infinite, and I'd say that's the geometric interpretation of a matrix not being invertible: if the output coordinates happen to be infinite.
who is this magnificent instructor?! He puts all of my math professors to shame!
A brilliant series of lectures. However, this seems to be an unclear one. I failed to understand the derivation of A. What does the system of equations do? Can you explain what Gilbert wanted to say? Wikipedia, meanwhile, does it trivially: A = [T(v1) T(v2) ... T(vn)].
The input basis v1 to vn is the same as the output basis w1 to wm. Professor Gilbert Strang didn't mention it when talking about how to find matrix A.
You can't construct a transformation matrix A without a basis. The transformation matrix A transforms the basis v1, ..., vn to T(v1), ..., T(vn). And the coordinates of each T(vi) are given by column i of matrix A. Since the output basis is the same as the input basis, each T(vi) can be written as a linear combination of the basis w1 to wm, where the coefficients are the entries of column i of A.
Gilbert Strang's course alone cannot help you fully understand the subject. Checking other sources like Khan Academy might help. This lecture and the next one, on change of basis, will only leave you with more unanswered questions after you watch them.
Matrices of linear transformations: isn't this part supposed to be in the first courses?
I hope that'll help me pass my linear algebra exam
Well, I have studied these at the Technion, Israel Institute of Technology; it's an easy subject.
Remember on Looney Tunes when they would run in place before taking off? That's how this guy is. He just seems so excited his mouth wants to blurt it out, but his mind is a fraction behind... Whose idea was it to include stutters in the subtitles?
Thank you.
I think it's just plotting a vector on new axes with reference to given axes.
The new axes are not necessarily orthogonal to each other.
Thanks, sir, you improved my math knowledge.
God bless you!
aww he is so cute
Sorry for my humble input, but this lecture on linear transformation should have been at the beginning of the course.
Once I finished watching the lecture, I understood that it is where it must be! Brilliant.
It's easier to understand than in my native language :)
What an amazing man
God bless you.
40:01 how did he get the projection matrix? what is the a??
a is a vector along the line through the origin that makes a 45-degree angle with the x-axis. Since it's 45 degrees, both the x and y coordinates have to be the same, giving vector a = [1, 1]^T. He took that as a unit vector with each value 1/sqrt(2). So performing the operation a*a^T gives you the projection matrix. I hope that helps :)
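A minimal numpy sketch of that computation: P = a*a^T with a the unit vector along the 45-degree line, which also shows why the perpendicular vector gets "killed".

```python
import numpy as np

# Unit vector along the 45-degree line.
a = np.array([1.0, 1.0]) / np.sqrt(2)

# Projection matrix onto the line: P = a a^T (valid because a is a unit vector).
P = np.outer(a, a)
print(P)                             # [[0.5 0.5] [0.5 0.5]]

print(P @ np.array([1.0, 1.0]))      # [1. 1.] : vectors on the line are unchanged
print(P @ np.array([1.0, -1.0]))     # [0. 0.] : the perpendicular vector is "killed"
```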
Why do physicists avoid using/thinking in terms of basis? why are they unwilling to bring them in?
Because in Relativity the point is that the laws of physics are the same, and independent, for any choice of coordinate system
You lose generality and introduce artifacts that have nothing to do with what you describe, but are just there based on your choice of coordinates. Look at tensors for a general approach.
I have a confusion; maybe someone can resolve it. How did the professor say with conviction at around 42:14 that the 1st column of A tells us what happens to the first basis vector?
37:11- what's perpendicular here?
Godly explanation
he is legend
This helped so much!
I'm struggling so much with the paradigm of linear transformations. I can deal with matrices just fine, "see" the concepts even, but throw in the T(x) whateverthefuck and I can feel myself melting with tears. Hoping this will be the click for me!
35:00 why does it kill the basis vector v2? Is it because it transforms it?
+Ahmed Jan If you project the basis vector v_2 onto the line (which goes through the origin), you will get the zero vector. In other words: the length of v_2 in the direction of the line is zero.
v2 is perpendicular to the line it is projected onto, so the projection kills v2, which has no component along the line.
Thanks guys
This dude speaks fluent
Is w1 = T(v1), w2 = T(v2), ...? Because the transformation output doesn't always have to be a basis for the space; it only happens to be one when the transformation is an isomorphism between the two spaces.
The answer to your first question is no. You can choose the basis for the input space (v1,...), the basis for the output space (w1,...), and the particular transformation T independently. What is required to form the columns of the matrix A is to ask, if input basis vector v1 were expressed in its own coordinates, what would its image under T look like, expressed in coordinates of the output basis? That is column 1 of A. Repeat for v2, ...
If the input basis and the output basis span the same space, then you can do something similar to what you are describing. Construct a matrix that has, as its columns, the w basis vectors written in their v basis coordinates. This matrix represents the change-of-basis transformation from the w basis to the v basis. When applied to a vector of coordinates in the w basis, it gives you back the same vector, except expressed in coordinates of the v basis. This matrix is invertible, and its inverse is the change-of-basis transformation in the other direction. When the input and output bases span different spaces, you can still construct this matrix, but it is not square, and hence not invertible.
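A short numpy sketch of the square, invertible case (V and W below are made-up bases of R^2): the change-of-basis matrix has the w basis vectors, written in v coordinates, as its columns.

```python
import numpy as np

# Made-up bases for R^2: columns of V and W (both span the same space).
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])
W = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Change-of-basis matrix from w coordinates to v coordinates: its columns are
# the w basis vectors written in v coordinates, i.e. solve V @ C = W.
C = np.linalg.solve(V, W)

cw = np.array([0.4, -2.0])            # some vector, expressed in the w basis
cv = C @ cw                           # the same vector, expressed in the v basis
print(np.allclose(V @ cv, W @ cw))    # True: both coordinate lists name one vector

# The inverse matrix converts back from v coordinates to w coordinates.
print(np.allclose(np.linalg.inv(C) @ cv, cw))   # True
```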
At 43:15, how does he know that the constants are a11, etc.? Can someone explain it to me? I know this works, but I can't understand why it works.
I also have doubts about this example.
Those are the components of the matrix A, written in row-column format
beautifully taught!
His name is Gilbert Strang, mathematician.
magician
wohooo, all makes sense now :D
I love these kinds of playlists; now I can learn linear algebra from YouTube videos and chill. xD
thank youuuuu!!!
wow it actually makes sense now :D
amazing!!!
What a great video!!
Still don't understand; however, I feel that I need to do some examples and I'll get there...
great lecture regardless
Do you understand now ?
@@srinikethvelivela9877 I'd hope so, 10 years later
This lecture could just as well have appeared in the beginning, as the very first one, to introduce the meaning of a matrix (except for the part about using eigenvectors as a basis, of course).
that I just noticed
I saw example 2 which shifts (moves?) the whole plane. Does this mean that Translation is not a linear transformation? If not what is it? I thought it was a linear one...
It is affine, not linear.
It is a non-linear transformation: the zero vector as input gives a non-zero vector after shifting, which violates the linearity property, since we want the zero vector as output.
43:00 Climax
Please tell me best books for linear algebra
Gilbert Strang's and David C. Lay.
Reeshabh Ranjan
I have both of those textbooks!!
It’s the best of both worlds. I love Lay!!
Besides the textbooks from Strang and Lay, and after them, for numerical/computational linear algebra I favor William Ford.
For the precise topic of this lecture, linear transformations w/o coordinates "Linear Algebra Done Right" by Axler.
For the best of the three worlds combined: great pedagogy, computational Linear Algebra and linear transformations with and without coordinates, the classic text by Cornelius Lanczos (He invented the SVD) "Linear Differential Operators"
48:06 How does he know that the transformation matrix A must be a 2x3 matrix?
+Ahmed Jan The linear transformation he is describing goes from vector space V to vector space W. A basis for V is the set (1, x, x²) and a basis for W is the set (1, x). In order to transform elements of V into elements of W, you are going to need a 2x3 matrix (it could be helpful to have a look at the rules for matrix multiplication).
thanks for the help !
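Concretely, a small numpy illustration of that 2x3 shape (the coefficient vector below is a made-up example): the derivative takes three input coordinates, for 1, x, x², to two output coordinates, for 1, x.

```python
import numpy as np

# d/dx from quadratics (basis 1, x, x^2) to linear polynomials (basis 1, x):
# coefficients (c0, c1, c2) map to (c1, 2*c2), so A is 2x3.
A = np.array([[0, 1, 0],
              [0, 0, 2]], dtype=float)

p = np.array([5, 3, 4], dtype=float)   # p(x) = 5 + 3x + 4x^2
print(A @ p)                           # [3. 8.], i.e. p'(x) = 3 + 8x
```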
at 39:45
can anyone please explain how we got that matrix?
Check out the lectures about orthogonality, Gram-Schmidt, and projections.