Thank you MIT, thank you Prof Strang.
00:00 Error from last lecture, row dependent.
04:28 4 Fundamental subspaces.
08:30 Where are those spaces?
11:45 Dimension of those spaces.
21:20 Basis for those spaces.
30:00 N(A^T) "Left nullspace"?
42:10 New "matrix" space?
I am asking you because yours is the most recent comment.
1) At 9:15, how is the column space in R^m? For an m x n (m rows x n columns) matrix there are n columns, so there are n column vectors, so it's supposed to be R^n, right?
@@lokahit6940 Because each vector in the column space has m components. Yes, there are n vectors, but the number of components of a vector determines the dimension of the space it's in.
This is different once you get to a basis, where the number of vectors describes its dimension, but even that is a subspace of R^(# of components). So a two-vector basis where each vector has 5 components is a 2d subspace of R^5.
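To illustrate the point (a minimal sketch assuming NumPy; the matrix here is just an arbitrary 3x4 example):

```python
import numpy as np

A = np.arange(12).reshape(3, 4)        # m = 3 rows, n = 4 columns
columns = [A[:, j] for j in range(A.shape[1])]

# There are n = 4 column vectors, but each one has m = 3 components,
# so the column space is a subspace of R^3 = R^m, not of R^4.
assert len(columns) == 4
assert all(col.shape == (3,) for col in columns)
```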
@@aarongreenberg159 Thanks for the clarification.
"But, after class - TO MY SORROW - a student tells me, 'Wait a minute that [third vector] is not independent...'"
I love it. What other professor brings this kind of passion to linear algebra? This is what makes real, in-the-flesh lectures worthwhile.
Give that brave student a medal.
xoppa09 I think here it is the Professor that's honorable. He elaborated on his mistake, which is reasonably embarrassing for him, and made important concepts clear. I think most others would just correct it, apologize, and move on. You can see his embarrassment when he used words like 'bury', and his reaction when he accidentally uncovered the board again later.
@@fanzhang3746 I don't think he is much embarrassed. He talked about doing math in class in the first video of this series, if you've watched it. He said that it might be inevitable to make mistakes, and that it's great to go through all the processes with the students, including making errors and correcting them.
Lies again? FAS FUS Sheng Siong
What's so passionate about accepting & correcting your own mistake?
"No mathematics went on there; we just got some vectors that were lying down to stand up."
Gotta know the bases for the spaces.
AHAHHAHHAHAHHAHAH
😂
Can't lie, being able to pause the video and ponder the ideas is so nice to have. Goes to show how much work those students had to put in.
I am a 4th year, double engineering student re-learning linear algebra so I can have a stronger basis for ML, DL and AI. Never in my college classes, or in independent studying, have I been so amazed by the way a concept is introduced as I was when Prof. Strang got to the computation of the left null space. The way this man teaches is just astonishing, thank you very much.
Have you checked out his newest book "Linear Algebra and Learning from Data"? That plus "Introduction to Statistical Learning", given a foundation in programming, probability, and statistical inference, is a killer combo. I'm a statistics graduate student wanting to specialize in ML. I've been watching these at 2x speed as a review.
OMG I'm literally the same. I jumped on ML and AI early in my 2nd year, but could not understand any concepts thoroughly. Now I really feel the need to relearn the basics and prof. Strang is like the savior for me.
I am so fascinated by the way professor G. Strang gives his lectures; he does it in such a great way that even a 5-year-old boy could understand. On the other side, teachers from my university make the subject so complicated that even students highly above the average struggle to understand the concepts properly.
+Daniel Couto Fonseca What about a 5 year old girl?
Only 5 years old WHITE BOYS I would say
Are you joking? I can't tell
I guess it's more funny if you dont
+Daniel Couto Fonseca I challenge you to teach a 5 yr old linear algebra. Good luck with that.
What a perfect thing to be a great mathematician and a great teacher at the same time! Especially being a great teacher, which is priceless!
The best teacher ever. I really admire what MIT has done. As the phrase on its website says: "Unlocking Knowledge, Empowering Minds."
These lectures are saving my bachelors in Engineering. Thanks MIT!
Woah, your icon image shows very precisely that you survived engineering after all... wish me luck.
"I see that this fourth space is getting second class citizen treatment..it doesn't deserve it"
Kaveri Chatra by coincidence I read this exactly when he said it
@@NG-we8uu me too, i just read this while i was listening to it 😂
Me too 😂😂
The end portion really showed how matrix algebra theory can be applied to computer vision; really glad he added that in.
For the first time I envy students at MIT, because they have such genius lectures to attend.
I don't. I've got it better. No time pressure to watch the lectures, and I don't NEED to do the exercises, nor the exams. It's great! 😁
@@NostraDavid2 and no hefty fees either. So yeah.
Correct me if I'm wrong, but Strang was introducing abstract algebra at the end. Once you have all of these linear transformations transforming other linear transformations, you have an even greater transformation of space. Absolutely love this man.
Bob Mike yes, and in an earlier lecture he was talking about how n x n permutation matrices form a group
Yes, abstract vector spaces are quite important in linear algebra
It's my honor to have met you even virtually, sir!
Conclusion: the four fundamental subspaces of A (m*n):
1. The column space is the span of the column vectors; it lives in R^m, notation C(A), dimension r.
2. The nullspace of A is spanned by the special solutions corresponding to the free variables; it lives in R^n, notation N(A), dimension n-r.
3. The row space is the span of the row vectors; it lives in R^n, notation C(A'), dimension r.
4. The left nullspace of A is spanned by the special solutions of A'; it lives in R^m, notation N(A'), dimension m-r.
Other conclusions: dim(C(A')) + dim(N(A)) = n, and dim(C(A)) + dim(N(A')) = m.
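A sketch of those facts in code, using the 3x4 example matrix quoted later in the thread (a minimal sketch assuming SymPy, whose Matrix class provides columnspace/rowspace/nullspace methods):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])           # m = 3, n = 4, rank r = 2
m, n = A.shape
r = A.rank()

col   = A.columnspace()              # basis of C(A)  in R^m, r vectors
null  = A.nullspace()                # basis of N(A)  in R^n, n - r vectors
row   = A.rowspace()                 # basis of C(A') in R^n, r vectors
lnull = A.T.nullspace()              # basis of N(A') in R^m, m - r vectors

assert len(row) + len(null) == n     # r + (n - r) = n
assert len(col) + len(lnull) == m    # r + (m - r) = m
```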
I'm into the fifth minute and wondering whether he made that mistake in last lecture knowingly
man, exactly. Due to this error, I came to know that if a matrix is non-invertible, the columns are linearly dependent.
40:54 There's no one in the class...
Same thought and maybe he did. Great chance
@@eduardoschiavon5652 nah, it's because they reduced the rows of the class; what we see are the rows of zeros.
I'm actually pretty sure he did this on purpose to trick the audience. Since the first two rows are identical, it's too obvious once you learn that a matrix must have the same number of linearly independent columns and rows (and it's a GREAT introduction to the lecture).
This lecture about the four subspaces is the most beautiful Linear Algebra lecture I have ever had.
This man has dedication!
Also, that girl in the beginning must have been a sharp genius.
Bruh its MIT they got Gods in there you talk about sharp
Well, when you have an intuition of just the row space and column space and the connection between them, it's quite obvious, and you don't have to be a genius to recognize the dependency of those row vectors. In fact, the first half of linear algebra is relatively simple.
I think professor just made that up and he intentionally did wrong in the previous lecture just to introduce the row space.
Professor just planned it like in "Money Heist"
@@sreenjaysen927
I agree
I am really grateful for your wonderful explanation about the four fundamental subspaces. My mathematics exam is tomorrow. It is a wonderful source for me to learn and refresh my memory. Thank you so much!
At t = 38:00, Strang shows a way that expedites finding L: find E, then row-reduce [E | I] to get E inverse, which = L. So we can quickly decompose A into LU if we do Gaussian elimination only--not Gauss-Jordan elimination--from the beginning (see the sketch below this comment).
At t = 43:00, he defines a vector space out of 3x3 matrices, call it M_33.
At t = 47:00, he covers the dimensions of subspaces of M.
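A sketch of that E bookkeeping (assuming SymPy; the rref([A | I]) = [R | E] trick is the one from the lecture, since every row operation multiplies both blocks on the left):

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])
m, n = A.shape

# Row-reduce the augmented matrix [A | I]; the result is [R | E] with E*A = R.
reduced, _ = A.row_join(eye(m)).rref()
R = reduced[:, :n]
E = reduced[:, n:]

assert E * A == R   # E records the row operations that take A to R
# In the square invertible case R = I, so E would be A^-1 (and E^-1 = L
# if only Gaussian elimination, without the Jordan half, were used).
```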
Thank you so much!! Your explanation is so amazing! Now I finally get why the column spaces of A and R are different, and why the row spaces of A and R are the same!! Btw, I'm saving 24:00 for the explanation of the subspaces of A and R.
I want to write on that chalkboard with that chalk.
It seems quite satisfying indeed
the chalk looked like a big stone
Nah man u got sm severe autism
Thank you MIT for enabling us to enjoy these treats... And Prof. Strang is just pure genius.
I was totally astonished by the idea of computing left nullspace!
Thank you Dr. Gilbert.
This is the second time we see nobody in the classroom. The cameraman is really happy to be a VIP student, I believe.
How can you tell? He seemed to be talking to an audience.
@@phil97n the cameraman avoids pointing at the chairs, and at the end you don't hear the usual chatter, just silence.
Thank God for dr.Strang. I am understanding concepts that have eluded me for over a decade.
It is amazing how he can do these lectures in front of no students and still be so engaging. In a way he is a great actor.
There are students in back rows
The mistake professor Strang made turned into a great connection to the new topic. That's why he is a genius
Loved the bit at the end where he showed that upper triangular or symmetric or diagonal matrices form a subspace.
At 3:15 - 3:20: instead of looking at the row picture to realize the dependence, we may also see that 2*(column 2) - (column 1) gives (column 3) :)
This is correct, but his mistake actually illuminates the importance of understanding independence from both the row space and the column space. With most matrices it won't be this easy to spot column space independence, so conceptualizing both of those spaces will give you a deeper, richer understanding of vector spaces in general.
He explains in the first three minutes why you didn't even have to look at the columns. The girl who pointed this out was quick!
@@dhruvg550 I think the girl was Gilbert Strang himself
My, I feel so….dense. What a sense of humor this brilliant man must have to have penned a book entitled “Linear Algebra for Everyone”.
Sir, I can’t even subract!
Incorporating MATLAB commands in the lecture is a great way for students to learn about matrices and linear algebra in context. The overall lecture is another classic by Dr. Gilbert Strang.
It's interesting that he constantly remarks on the fact that he presents things without proving them, but in fact I think he explains things so clearly and understandably that he doesn't need to prove them, because we can see them almost axiomatically.
Strang proves things without you even realizing that you've just experienced a 'proof.' He makes it very conversational and intuitive.
I really like how he talks. He sounds so friendly in his explanations.
"Poor misbegotten fourth subspace"
-Gilbert Strang, 1999
Remember when Elizabeth Sobeck decided to give GAIA feelings? These guys gave math feelings. And I love him for that. I didn't even know that was possible.
That was a great performance! Thank you MIT.
I really don't know what to say..... Satisfying? Grateful? OMG I just love it!!!!
So?
This can mean a lot of things, and one of them is that they couldn't tape this class, so Strang had to repeat it in front of the cameras, and they didn't pay people to just sit there so that people like you would stop commenting on that fact.
Great classes. I do not speak English as a native language, but this is certainly awesome; I really appreciate it.
So much thanks to MIT and Professor Strang!!
The four fundamental subspaces are the column space, null space, row space, and left null space. The dimensions of these spaces are related to the rank of the matrix, with the sum of the dimensions of the null space and row space equaling the number of columns, and the sum of the dimensions of the column space and left null space equaling the number of rows.
Highlights:
00:10 The lecture focuses on correcting errors from the previous lecture and introducing the concept of four subspaces associated with a matrix, including column space, null space, row space, and the left null space.
-Explanation of the error correction process from the previous lecture and the significance of having different bases for spaces in linear algebra.
-Introduction and explanation of the row space as a fundamental subspace, its basis, and its connection to the rows of a matrix through combinations.
-Discussion on transposing matrices to work with column vectors, leading to the column space of the transposed matrix and the null space of the transposed matrix.
07:52 Understanding the four spaces in linear algebra - null space of A, column space of A, row space of A, and null space of A transpose - is crucial as they provide insights into the properties of matrices and their dimensions.
-The importance of the four spaces in linear algebra and their relation to matrices' properties and dimensions.
-The process of determining bases and dimensions for each of the four spaces, providing a systematic approach to understanding and analyzing matrices.
-Explanation of the dimensions of the column space, row space, and null space of A transpose, highlighting their significance in understanding linear algebra concepts.
15:47 Understanding the dimensions of row space, column space, and null space in a matrix is crucial. The row space and column space have the same dimension, while the null space dimension is determined by the number of free variables.
-The relationship between row space, column space, and null space dimensions. The row space and column space have the same dimension, while the null space dimension is determined by the number of free variables.
-Determining the basis and dimensions of the null space. Special solutions from free variables form a basis for the null space, with the dimension being n-r, where n is the total variables and r is the number of pivot variables.
-Exploring the dimensions of the left null space. The left null space dimension is m-r, where m is the number of rows of A (the number of columns of A transpose). A transpose follows the same rules as the original matrix in terms of dimensions.
23:24 Understanding row space and column space in matrix operations is crucial. The row space and column space of a matrix can have different bases, but the row space basis can be identified by the matrix's rows.
-Difference between row space and column space in matrix operations. Identifying the basis for the row space using the matrix's rows.
-Exploring the concept of basis for the row space and its significance in matrix transformations. The importance of independence in determining the basis for the row space.
-Understanding the left null space of a matrix and its relation to the null space of the matrix's transpose. Exploring the concept of vectors in the null space of A transpose.
31:06 Understanding the left null space involves transforming A to R using row reduction, resulting in a matrix E. In the invertible square case, E is the inverse of A, but for rectangular A, E connects A to R.
-Explanation of left null space and its connection to row reduction and matrix E in transforming A to R.
-Comparison of E in the invertible square case to the case of a rectangular A, where E does not represent the inverse of A.
38:58 Understanding the concept of subspaces in linear algebra is crucial. The video discusses row space, null space, column space, and left null space, emphasizing their dimensions and relationships in a matrix. It also introduces a new vector space using three by three matrices.
-Exploring the dimensions and relationships of row space, null space, column space, and left null space in a matrix is essential in linear algebra.
-Introducing a new vector space using three by three matrices and discussing the rules that define vectors within this space.
-Discussing subspaces within the matrix space, such as upper triangular matrices and symmetric matrices, and how the intersection of subspaces forms a subspace.
46:36 The dimension of different subspaces of matrices can be determined by finding a basis. Diagonal matrices have a dimension of three and can be spanned by three independent matrices.
-Understanding the concept of dimension in linear algebra and how it relates to subspaces of matrices.
-Exploring the basis of diagonal matrices and how they form a subspace.
He makes everything look so clear.
Thank you so much for this lecture series. This helps a lot! Great professor with great and easy to understand explanations.
Thank you Dr. Strang and MIT. These videos are amazing and keeping me afloat in my class.
A mathematician with a great sense of humour, Mr. Strang!
I was nodding my head, keeping up just swimmingly, it all made perfect sense. He wrapped up the diagram and it seemed like we were done. Then he stepped over to the far board and replaced vectors with matrices and just turned everything upside down. Didn't see that coming.
lol funny I'm just first watching this today and it was posted exactly 7 years ago xD
thanks for the video, really helpful! I was struggling with this concept for my current linear algebra 2 course since I took the non-specialist version of linear algebra 1 which didn't really test us on proofs at all. I think I have a better understanding of the four fundamental subspaces now! :)
I believe Prof. Strang deliberately made the mistake at the end of Lec 9, in order to transition the focus from column space to row space. The transition was too smooth for this to be an accident. This is also a great show of humility that he didn't mind being perceived making a mistake!
*Question:* what is the relationship between rank(A) and rank(A^T)? Does rank(A) = rank(A^T) in general?
The professor seems to be hinting at this, but rref(A) only preserves the column space, so it doesn’t seem so trivial to me. Any insight is highly appreciated.
Edit: I found the answer. rank(A) = rank(A^T) by virtue of the fact that linear independence of the columns implies linear independence of the rows, even for non-square matrices. I proved this for myself this evening. The main idea for the proof (at least how I did it) is that if you have two linearly dependent rows, one above the other say, row reduction kills the lower one (reduces number of possibly independent rows). Killing off the row (making the row all zeros) also makes it so that the given row can’t have a pivot. Thus, we’ve reduced the number of potential pivot columns by one. That’s the relationship in a nutshell. The math is only slightly more involved
rref(A) does not preserve the column space, only the null and row spaces. It does preserve the dim(Row(A)) however, which suffices to prove that the row and column ranks are equal.
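A quick numerical spot-check of that rank fact (a minimal sketch assuming NumPy; np.linalg.matrix_rank is the standard rank routine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random rectangular matrices of known rank r: rank(A) should equal rank(A^T).
for m, n, r in [(3, 4, 2), (5, 2, 1), (6, 6, 3)]:
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r by construction
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == r
```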
If all youtube content would be deleted today, the most upsetting thing for me would probably be losing this series of lessons.
Worth mentioning: if row-reduction of the matrix generates the most natural row space basis without much effort, we can also generate the most natural basis of the column space of said matrix by doing row-reduction on the transpose of the matrix. This is all so incredibly fascinating!
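A sketch of both halves of that observation (assuming SymPy, with the 3x4 example matrix quoted elsewhere in the thread):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])
r = A.rank()

# Natural row space basis: the r nonzero rows of rref(A).
R, _ = A.rref()
row_basis = [R.row(i) for i in range(r)]

# Natural column space basis: row-reduce the transpose, then flip the
# nonzero rows of rref(A^T) back into columns.
RT, _ = A.T.rref()
col_basis = [RT.row(i).T for i in range(r)]
```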
45:26 transform an exclamation mark into an M. Brilliant!
Brilliant. This lecture connects the complex puzzle.
Beginning from the natural numbers N, then the integers Z, then the real numbers, then the complex numbers, which are just 2-dimensional numbers, then vectors, which are n-dimensional numbers. A vector space of vectors is like the range of N, or Z, or the real numbers; with vector spaces a confident definition of vectors can be obtained. In terms of calculation, linear algebra uses computers. The key is to find the algorithm by studying examples of low-dimensional vectors and matrices; with the algorithm, the calculation can scale to vectors and matrices of high dimension. Obviously a vector is a much, much more expressive number than any preceding kind of number, so vectors, and linear algebra with them, have become very powerful mathematical tools in many applications.
It's not that she found a numerical error, it was the power of her reasoning for it. I'm shook, whoever that girl is, she's clearly brilliant.
He made that story up to drive a point
Thank you MIT and Professor Strang!
min 18:50 If it's helpful for anybody: the dimension of the null space is the number of basis vectors that form the null space. Just as the dimension of the column space (the rank) is the number of linearly independent columns (i.e. vectors within the matrix), the dimension of the null space is the number of linearly independent special solutions, i.e. the number of basis vectors that form the null space.
He is not only a master lecturer, he is a master of writing on a chalkboard. I swear, it looks like he is using a paint pen.
There is a problem with Dr. Strang's lectures.
The problem is, he makes it so intuitive that I'm literally nodding in agreement the entire lecture. I've now watched the lectures once, read the book chapters, and watched the lectures a second time. And while I have a good grasp of everything discussed so far, they all sort of blend in. I couldn't list the things I learnt one by one for these 10 lectures. :D (well, I sort of can.)
ya gotta do the problems. That pulls it together in your head.
Class is crowded these days, no worries. Don't know why no one was attending back in 2005!
It was actually recorded in 2000 but uploaded to the web in Spring 2005. The dates written in the video titles are dates of upload, not dates of recording.
What fascinates me are some stats you can find below this video. Maybe it's some bug but youtube tells us that this video is most popular among:
1) men 45-54 yo
2) men 35-44
3) men 25-34
Which I find really strange because I thought most of the viewers would be actual students.
Also, popularity by region is an interesting stat.
where exactly can you find these stats? When I click on statistics I just get a viewcount graph
You could also argue that it isn't a basis because -1 times the first vector plus 2 times the second vector gives us the third vector... You really dropped the ball there, professor G. Hahahaha, just kidding; this man is the best thing that ever happened to Linear Algebra right after Gauss.
Just when I thought he ran out of blackboard to write he moves to the right and lo and behold there's more of them
He is lecturing to an empty classroom if you look at time 40'53'' !! Even more wonders!
For finding a basis for N(A), why can't we use the same approach as for finding a basis for the left nullspace?
1) trans(A) - - - -> RREF
2) E' × trans(A) = RREF
3) finding basis from E'
Yes, I have the same question, and this way sounds more mechanical (programmable) than the earlier way.
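That approach does work, since N(A) is exactly the left nullspace of A^T. A minimal sketch (assuming SymPy; E' comes from row-reducing [A^T | I], and its last n - r rows pair with the zero rows of rref(A^T)):

```python
from sympy import Matrix, eye, zeros

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])
m, n = A.shape
r = A.rank()

# Row-reduce [A^T | I] to get [rref(A^T) | E'], so E' * A^T = rref(A^T).
reduced, _ = A.T.row_join(eye(n)).rref()
E_prime = reduced[:, m:]

# The last n - r rows of E' multiply A^T into zero rows: they span N(A).
null_basis = [E_prime.row(i).T for i in range(r, n)]
for v in null_basis:
    assert A * v == zeros(m, 1)
```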
Thank you Prof. Strang
😖😖 He's the best professor I know and yet my brain doesn't get it at once😂
That's fine. All at once doesn't matter. What matters is "forever and always". Do what you must to understand it deeply so that you will know it the rest of your life. It may take watching the video many times and will probably require writing down some matrices and doing them yourself. Math is a subject which is hard to learn by observation; it really depends on participation. Remember, the students in the audience were MIT students, so they had proven they were quite talented. Those students saw what you saw in the video. Those students had the ability to talk to this professor after class. Those students had homework practice. Still, when the quiz was administered, I guarantee the average score was below 100%. Even after all that help, some students didn't quite get it all. They didn't get it "all at once". How can you expect yourself to do better than that, especially if you demand it happen "all at once"?
Why is he such a good lecturer? My prof used to just read from the textbook.
this is mind-blowing
i don't fully understand it
but i know it's mind-blowing
40:50
is the best plot twist awesomee
9:45
Professor Strang subtly integrates class consciousness into his lecture of the Four Fundamental Subspaces.
Truly a genius.
Last hired First fired?
I'm just gonna write this for people like me.
If you are like me (didn't finish high school yet, first year), you will have to put some effort into studying this. You've got to search for some basics you may not have been taught yet, and it will take a lot of time and a lot of searching, but just try, and take the reason you are studying this so early as motivation, because some stuff will be frustrating and hard to understand while you haven't mastered the basics yet. But I know for sure it's worth it.
For me, I'm studying this to learn machine learning. I also want to build physics engines and things like that to help people, and I found that everything I want to do is related to linear algebra; once I get it and go deep into it, I can do whatever I want with just some basic research.
Trust me, it's hard but worth it ❤
Hello! Does anybody know any other lecturers like Dr. Strang with such passion in fields like convex optimization, detection estimation or probability theory?
Look up lectures by Steven Boyd. "Stanford Engineering Everywhere" is like Stanford's version of OCW and has some great courses in convex optimization: EE263 and EE364A. They aren't quite as good as Strang's lectures, but he's hard to beat!
John N. Tsitsiklis has great probability lectures on MIT open courseware here on UA-cam. Highly recommended.
It's like an enlightenment moment when he says, "she said, it's got two identical rows"
25:06 So performing row eliminations doesn't change the row space but changes the column space?
So to get the basis for the column space, would you have to do column elimination for matrix [A]? Or could you take the transpose, do row elimination, and just use that row basis for [A] transpose as the column basis for [A]?
+Pablo P That's what I was thinking as I watched that part of the video. It seems that approach would work. Before this lecture, it's the approach I probably would have used, but now that I see the tie-in to pseudo-Gauss-Jordan, I think I prefer pseudo-Gauss-Jordan.
My mind got blown when I realized you could get the basis for the left null space from row transformation. I mean, it seems completely obvious after he points it out but I never thought much of it until then.
There are no students sitting there, but the lecture is still so good.
Oh cool. I've never computed the nullspace of the row space before. Initially, I thought of computing the nullspace of the column space of the transpose, but the method he provides - calculating E - is so easy once you've already done all the work computing the other subspaces.
Nice lecture. Would like to have seen that N(A) and C(A^T) are independent (or even orthogonal!)
May I say that the vectors in R span the same space as vectors in A after row operation because you can do a reverse ROW operation and construct the same vectors in A from R? It can't be true for column space because after row operations you most likely can't reverse and reconstruct the original column vectors from R through COLUMN combinations.
Take another look at the list... it's the first time I've felt glad at having so many left unwatched.
Here's a paper by prof. Strang related to this lecture.
web.mit.edu/18.06/www/Essays/newpaper_ver3.pdf
The twist at the end was better than GOT's.
Great lecture. I am so grateful to Prof. Gilbert.
I think that to figure out what a space looks like, you must ask yourself what a space looks like... because a subspace is in fact a space inside another space. So just imagine that instead of being in a three-dimensional space, we are in a 3-dimensional subspace inside another, bigger-dimensional space; the null space then turns out to be the set of all vectors in that bigger-dimensional space that nullify our space (multiplying any array of the 3 basis vectors of our "subspace R^3" gives 0).
I love these lectures
Thanks Dr. Strang
God, dude, my school combined multivariable calc and linear algebra into one class, so this entire lecture was only one of four parts of my most recent lecture.
I don't know if I made my point, but if you watch the other lectures you will understand better... in fact I just realized I made a grammar mistake at the end... next time I'll review before posting anything.
At 14:00 that's *not* true in general. The number of basis vectors of C(A) is rank(A), *yes*, but those basis vectors are the pivot columns of A itself, not the pivot columns of rref(A). The professor probably misspoke or something.
Doing rref(A) of a matrix means we’re taking linear combinations of the rows of A, so there’s no reason to believe the column space will be preserved during the row-reduction operations. In general, C(A) is not equal to C(rref(A)).
Let A^T stand for transpose of matrix A. Then the following is always true: *C(A) = C( ( rref(A^T) )^T )* . He’s kind of implied this already in previous lectures without using the rote notation, I’m just connecting the dots like a game.
Edit:
A simple example is A = [1,2; 2,4]. rref(A) = [1,2; 0,0]. The column space of rref(A) is along the "x" axis, whereas the column space of A is along the line "y" = 2"x". It should be clear that aside from the zero vector, they have no other vectors in common. Further, rref(A) has one pivot column, so the dimension of C(A) should be 1. Indeed, 2*[1;2] = [2;4] (the columns are linearly dependent, so we really only have *1* vector to scale). Hence, dimension of C(A) = r = 1, as earlier predicted 😊
Hopefully this helps someone. Professor Strang is one of the best educators out there, I see why people admire MIT! This playlist in linear algebra is the perfect way to prepare for the vectorized implementations of machine learning (if you actually wanna understand what you’re doing, that is). Best wishes to you all. 👨🏽🏫
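A quick check of that example (a minimal sketch assuming SymPy):

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4]])
R, pivots = A.rref()                  # R = [[1, 2], [0, 0]], pivots = (0,)

# The spans differ: C(A) is the line through [1, 2], C(R) the line through [1, 0].
assert A.columnspace() != R.columnspace()
# But the dimensions agree: one pivot, so both column spaces are 1-dimensional.
assert A.rank() == R.rank() == 1
```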
I think he breaks with the usual convention for an m * n matrix being m rows, n columns ... which confused me, but great lecture anyways ...
edit: I was wrong
9:12 ... I misunderstood what he said there; of course the column space has m components, because columns go m (rows-many) components down ... thanks Robert Smits
He keeps the convention. His example A = [[1 2 3 1], [1 1 2 1], [1 2 3 1]] has 3 rows and 4 columns. The 1st row [1 2 3 1] has 4 components and is thus an element of R^4.
He doesn't but it is definitely a convention that confuses me and I have to think twice about the coordinates every time. Usually in mathematics (and in programming) the dimensions are X (meaning horizontal offset) and then Y - here it's reversed. If somebody tells me coordinates [100, 1] I expect it to be far to the right, not way down.
I was confused too! So I scrolled down in the comments looking for a lost soul like me xD Thank you for the explanation Robert!
thanks Luis .. you cleared my doubt !! ^^
3:30 Funny, I was looking in col space and noticed that -1 * C1+ 2 * C2 = C3 and completely missing the far more obvious fact that R1 = R2. Hey ho.
Target fixation. It's what sunk the Kaga and Akagi.
Somehow, after some thought, I figured out why the Prof says all upper triangular matrices form a subspace of the 3x3 matrices: if we add any 2 upper triangular matrices we are still in that space, and if we multiply any upper triangular matrix by a scalar we are still in that space. Same with symmetric and diagonal matrices. But the Prof assumes that all students will decipher this, and he did not spell it out. Otherwise it's a privilege to hear from a genius!
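A tiny closure check along those lines (a sketch assuming NumPy; np.triu zeroes everything below the diagonal):

```python
import numpy as np

def is_upper_triangular(M):
    # A matrix is upper triangular when zeroing below the diagonal changes nothing.
    return np.allclose(M, np.triu(M))

rng = np.random.default_rng(1)
U1 = np.triu(rng.standard_normal((3, 3)))   # two arbitrary 3x3 upper triangular matrices
U2 = np.triu(rng.standard_normal((3, 3)))

assert is_upper_triangular(U1 + U2)         # closed under addition
assert is_upper_triangular(4.2 * U1)        # closed under scalar multiplication
```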
great that we can E=AR this lecture.
Guess you could say the students who understood that were all E=ARs
He is so damn good at explaining! I love him!!!!!!!!!!!
I watched this video hoping to learn what the row space and left null space were good for and learned nothing new. This lecture recounts only the definitions of the four fundamental subspaces and their dimensions and works through an example of finding their respective bases. As in his book "Introduction to Linear Algebra", Dr. Strang takes so many shortcuts, skipping over the precise definitions of many concepts and the proofs and precise statements of so many important theorems, that I find his lectures (and book as well) useful only to a limited extent. I appreciate the effort, but I believe there should be a better way of teaching linear algebra.
I felt that too, both his lectures and his book lack in rigor and depth. Do you know some other resource?
@@syedsaad6929 It has significant shortcomings as well, but I found Dr. van de Geijn's course on edX, "Linear Algebra: Foundations to Frontiers", to suit my taste better, having a better balance of theory vs applications. The fall class has just ended, but they have a new one starting 16 January 2019 (totally free if you like), and the professors themselves answer the questions during the course! The course also provides downloadable pdf notes along with the class.
Having made the above criticism against Dr. Strang's way of teaching linear algebra, I have to acknowledge that nearly all of the (worked) examples I have studied and quite a few of the exercises in his book "Introduction to Linear Algebra" were excellent. Though I have reservations about his approach to teaching linear algebra, I still recommend studying his book but not as the only source.
Also check out these resources which I found helpful at times: 1) Linear Algebra Done Right ( www.linear.axler.net ), 2) immersive linear algebra ( immersivemath.com/ila/index.html ), 3) A First Course in Linear Algebra ( linear.ups.edu/html/fcla.html ).
The first one is the site for Dr. Sheldon Axler's book, which links to videos based on the book. The videos only provide a summary of his book but are still helpful, like an abridged version. Unfortunately, you may need his unabridged book to make the most of his teachings, and it is not (legally) freely available. Axler's approach is totally different from that of Strang and more suitable for math majors than for other science and engineering majors, but it is so clean and fundamental. It is a good source whenever you want to understand some basic linear algebra concept deeply.
@@apaksoy There are also a number of other courses and resources available for linear algebra on MIT OpenCourseWare. We recommend you check out Herb Gross' "Calculus Revisited: Complex Variables, Differential Equations, and Linear Algebra" (ocw.mit.edu/RES18-008 and/or UA-cam playlist: ua-cam.com/play/PLD971E94905A70448.html ) To see the complete listing of courses related to linear algebra, visit our Course Finder: ocw.mit.edu/courses/find-by-topic/#cat=mathematics&subcat=linearalgebra. Best wishes on your studies!
@@apaksoy I agree, I do follow the book. Its good for applications of linear algebra which is what I need, but not what satisfies me.
Correction of error from previous lecture 0:43
Introduction to the four fundamental subspaces
(column space, null space, row space, left null space) 4:20
Basis and dimension of each fundamental subspace 11:44
Basis and dimension of the column space 12:50
Dimension of the row space (it is the rank) 14:41
Basis and dimension of the null space 17:05
Dimension of the left null space (m - rank) 19:41
Basis of the row space (nonzero rows in the rref) 21:08
Basis of the left null space 29:48
Review of the four fundamental subspaces 42:09
A new vector space of all 3 by 3 matrices 42:32
vector spaces of matrices! mindblow!
thanks Prof and MIT
Thank You Prof. Strang...
36:24
...SPORTS. IT'S IN THE GAME!