In the original example we used all of the identity elements (1,0;0,1) in the solution x, but in the transpose example it reduces to just "1". Why?
The identity Matrix of a 1x1 Matrix is just 1. Since the result was a 1x3 and not a 4x2, the identity part only had a 1x1 spot in the result, thus it just gives 1.
You can also use this algorithm to get the cross product of two vectors in R3. Solve Ax=0 for 2x3 matrix. The dotproduct of each rows and x will be 0. Can you derive the cross product matrix from this algorithm?
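The claim above can be checked concretely. A sketch in SymPy (the two example vectors are made up; `nullspace()` and `cross()` are standard SymPy `Matrix` methods):

```python
from sympy import Matrix

# Two assumed example vectors in R^3
u = [1, 2, 3]
v = [4, 5, 6]

# Stack them as the rows of a 2x3 matrix A; solving Ax = 0 finds the
# direction orthogonal to both rows, i.e. the cross product direction.
A = Matrix([u, v])
(n_vec,) = A.nullspace()             # one special solution, since rank = 2

cross = Matrix(u).cross(Matrix(v))   # SymPy's built-in cross product
# n_vec is parallel to u x v (equal up to a scalar factor)
assert cross.cross(n_vec) == Matrix([0, 0, 0])
```

Note the null space only determines the cross product up to scale; recovering the exact length and sign takes an extra normalization step.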
3:07 Why does elimination change the column space? All you do is take rows to be linear combinations of each other (crucially, preserving the original row), which leaves the column space unchanged. The basis vectors you use to form the column space are different, but the space itself should be the same because all combinations are in the original space and no vectors are lost by multiplication by zero.
We are taking linear combinations of the rows, hence the row space remains unchanged. But the column space changes: each column is transformed independently of the others, so the resulting column vectors can span a different space.
I'm curious why we should choose 1 and 0 as our free variables? Choosing different values will produce different, but still numerically consistent results, so what makes 1 and 0 the appropriate choice? Are other solutions valid?
One thing I didn't understand: when he solved for A transpose, the 2nd row becomes 0 because it depends on the 1st row. The 4th row also became 0, but it doesn't depend on any other single row. He said a row of 0's means the original row is a combination of the other rows, but the 4th row is not a multiple of another row, so why did it become 0? Please help if anyone knows the answer!
I just read the latest edition of the book for this course and it is brilliant , the best Linear Algebra textbook! Thank you Dr. Gilbert Strang! But I like the cover of older edition with the houses being transformed. Is there any software for online hw for the book? Please let me know.
Its kind of cool and odd that someone who has taught this subject for so long can keep it so fresh...like he's stumbling across the Null Space Matrix for the first time. Thank you Dr. Strang and thank you MITOCW.
Well said.
one is 9 yrs ago and the other one is 9 months ago lol
This is more than a lecture on linear algebra, it's a demo on perfect teaching presentation. His way of pinpointing each question along the way that our brains need to ask and then solve is truly beautiful.
Every student should have at least one professor like Prof Strang. Motivating, illuminating and such great energy. I truly appreciate these classes. Thank you!
This guy is giving me such a good intuitive understanding of linear algebra, rather than just presenting seemingly semi-random algorithms without explanation.
Conclusion: 1. To solve Ax = 0, in other words to find the null space of A, we can use the reduced row echelon form (rref) method.
2. The rank of A equals the number of pivots (pivot rows) after row reduction, denoted r.
The number of columns of A equals the number of variables, denoted n.
So n - r equals the number of free variables.
3. Considering the solutions of Ax = 0: if the reduced row echelon form of A consists of the blocks [T F], the null space matrix is [-F; T] (that is, -F stacked on top of T), where T stands for the identity matrix and F for the free-column matrix.
The null space matrix has shape n x (n - r).
bro i think its I instead of T for Identity matrix
@@muhammadwahajkhalil6577 It's just notation; we can use any letter to represent the identity matrix for our own convenience. It's not mandatory to stick with I.
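The recipe in the conclusion above can be sketched in SymPy (the matrix below is a lecture-style example, assumed here; `rref()` and `nullspace()` are standard SymPy methods):

```python
from sympy import Matrix

# Assumed lecture-style example matrix; swap in any matrix of interest
A = Matrix([[1, 2, 2, 2],
            [2, 4, 6, 8],
            [3, 6, 8, 10]])

R, pivot_cols = A.rref()      # reduced row echelon form, pivot column indices
r = len(pivot_cols)           # rank = number of pivots
n = A.cols                    # number of columns = number of variables
free = n - r                  # number of free variables

# The special solutions are the columns of the n x (n - r) null space matrix
N = A.nullspace()
assert len(N) == free
for x in N:
    assert A * x == Matrix.zeros(A.rows, 1)   # each special solution solves Ax = 0
```

For this matrix r = 2 and n = 4, so there are two special solutions, matching the n - r count from the summary.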
long live professor strang!
May glory come to him.
given how old he looks now I suppose you're right
MIT has high-quality blackboards
+YOUWEI QIN I thought the same thing. There's just like infinite sliding blackboards stacked on top of each other :)
quantity?
MIT has everything of a very high quality! It's such a pity that you only noticed the blackboards!
Agree, nobody mentions the chalk.
I took linear algebra 30 years ago and I thought it was pretty hard at the time. Prof. Strang makes it easy!
For anyone confused by the block matrix explanation --- the I and F and blocks of zeros --- hang in there until lecture 8 where it all becomes clearer. And yes, F may be interspersed with the I, and, contrary to the top rated answer on Stack Exchange, this cannot be remedied with permutation matrices. Basically it's just a visual cue that allows you to pluck out the relevant numbers.
This is exactly what got me confused at first. I really appreciate professor Strang and MIT for making this gem of a lecture available online, but I felt he presented a few tricks like the block matrix one for finding the spanning set of the null space of a linear map (a linearly independent one*, too, because of that I block) in a rather hand-wavy manner.
Perhaps a better way to visualize it is as follows:
1. Draw the RREF matrix as staircases with pivots, preferably with interlaced free columns for generality.
2. If there are any all-zero rows at the bottom of the RREF matrix, trim off that part.
3. Pluck out a free column from the staircase, then turn it sideways (90 degrees counterclockwise.)
4. Multiply each component in the free column by -1, to reverse their signs. (This is for building the -F block)
5. Insert the "selector" (coefficient of 1) component at the same index as the index of the extracted free column in the RREF matrix.
6. Insert "N/A" (coefficient of 0) components at the same indices as the indices of the rest of the free columns in the RREF matrix.
7. Now turn the free column back to its original position (90 degrees clockwise)
8. Put the finished column in the "special solutions" matrix.
9. Do the same with the rest of the free columns in the RREF.
10. In the special cases where F is NOT interspersed with the I in the original RREF matrix, what you get is a matrix with the -F block stacked on top of the I block.
P.S. The point of plucking out a free column and then laying it on its side is to make the step 5 and 6 easier to visualize.
*If only one column vector has a non-zero entry at a specific row index in a set of columns, then there is no linear combination of the rest of columns in the set that is equal to that column. That is why the special solutions matrix built this way always contains a linearly independent set of columns.
This really bothered me. The block presentation wasn't exactly blocked. But I'll stick with it.
yes, I'm a bit confused about what to do with matrices whose rref is something like
[ 1 * 0 * 0 ]
[ 0 0 1 * 0 ]
[ 0 0 0 0 1 ]
They are clearly not [I F].
I will check this answer
@@eulerappeareth What I've noticed is how switching columns 2 and 3 in this scenario's rref of A (compared to the usual [I F] form) caused ROWS 2 and 3 to be switched in the special solutions. So, since the solutions would be [-2 0 1 0] and [2 -2 0 1] if the rref were just [I F], we switch rows 2 and 3 and get [-2 1 0 0] and [2 0 -2 1] instead. Pretty late, but I hope this helps somebody.
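A recipe that handles free columns interspersed with the pivot columns can be written directly from the rref, following the steps described earlier in this thread; a minimal sketch (the tiny example matrix is made up for illustration):

```python
from sympy import Matrix, zeros

def special_solutions(A):
    """Null space matrix, one column per free variable, built from the rref.

    Works even when free columns are interspersed with pivot columns:
    the "selector" 1 goes in the free variable's own row, and the
    negated rref entries (-F) land in the pivot variables' rows.
    """
    R, pivots = A.rref()
    n = A.cols
    free = [j for j in range(n) if j not in pivots]
    N = zeros(n, len(free))
    for k, f in enumerate(free):
        N[f, k] = 1                 # set this free variable to 1, the rest to 0
        for i, p in enumerate(pivots):
            N[p, k] = -R[i, f]      # back out the pivot variables
    return N

# Made-up example with the column pattern pivot, free, pivot:
A = Matrix([[1, 2, 0],
            [0, 0, 1]])
N = special_solutions(A)
assert A * N == zeros(A.rows, N.cols)
```

When the free columns all sit to the right of the pivots, this reduces to the familiar [-F; I] stack.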
infinite blackboards...
It really is!
domain expansion typa shit
I wonder if the couple at 15:03 in the second row is still together.
lol
i was noticing them
Why don't we ask them?
Makes you feel like you're in the classroom even more...
Exactly haha they’re in most of the lectures
Does anyone else feel a nice smooth buttery feeling when the chalk glides against the board?
I have never seen a teacher like you... your way of teaching and clearing up students' concepts is amazing, sir.
learning linear algebra with you is like watching movies. it's fascinating, exciting, convincing and fun. thank you so much Professor Strang! I'm so lucky to be learning this subject with you!
Indeed! It's like a story - with characters, and plot, and plot twists... Mr Strang is a shining example of what education should be - accessible, engaging and with a sense of discovery!
Yes! this is what I felt as I was watching! And I felt that I was as happy as I would be watching a favourite movie.
I have my linear algebra class early in the morning, and I never make it to class.
I was frustrated trying to catch up on all this stuff, but watching these videos is helping me so much.
Sincere thanks to Professor Strang and this channel!
Seriously the best thing that I could have found on the Internet. Too bad my final is in 4 days. Naturally I will be staying on youtube for quite a few hours this week
I was studying for my engineering degree when this was filmed. I just wish I'd had professors like Dr. Strang and Dr. Lewin. Clear-cut and practical explanations of the most abstruse branch of mathematics!
this is the best way possible to describe the rank of a matrix! for so long I have struggled with this concept! And now it feels so rudimentary, so basic! Thank you Professor Strang for such a fantastic way of explaining things
No way anyone can explain linear algebra like Professor Strang!
W. Gilbert Strang, you are a gem of a teacher! Thank you so very much!!
It's fun pausing the video and trying to figure out how the process ends before he's shown it ...
There's a lot of magic going on here that Dr. Strang doesn't state explicitly. It makes this lecture worth a couple listen throughs.
definitely more than a couple. I don't know why people are saying it's magical
This guy has figured out how to access the 12th dimension. Infinite chalkboards; some crazy wizardry shit.
For those like me who did not at first get the free columns and pivot columns business:
First, note that the free columns are linear combinations of the pivot columns (you can do some scribbling to confirm this).
This gives some intuition as to why we can let the free columns be scaled freely by any number and then solve for the scalars on the pivot columns, such that all these scaled columns add up to the zero vector.
Pivot variables and free variables are the names for those respective scalars.
I hope this cleared some doubts... wish you the best of luck.
thank you so much for clearing this doubt.
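The observation above, that each free column is a linear combination of the pivot columns, can be checked directly; a sketch in SymPy (the lecture-style matrix is assumed here):

```python
from sympy import Matrix

# Assumed lecture-style example; pivots land in columns 0 and 2
A = Matrix([[1, 2, 2, 2],
            [2, 4, 6, 8],
            [3, 6, 8, 10]])
R, pivots = A.rref()

# Each free column of A is a combination of A's pivot columns, with
# coefficients read straight off that free column in the rref.
n = A.cols
free = [j for j in range(n) if j not in pivots]
for f in free:
    combo = sum((R[i, f] * A[:, p] for i, p in enumerate(pivots)),
                Matrix.zeros(A.rows, 1))
    assert combo == A[:, f]        # free column reproduced exactly
```

Those same coefficients, negated, are exactly the pivot-variable entries of the special solutions, which is why the -F block appears in the null space matrix.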
The last thing he wrote at the end of the lecture was "FIN"... like the end of an old-fashioned French film.
I thought it was funny.
readap427
Or Spanish film.
Both from Latin
fin means the end
It is mind blowing how elegant linear algebra really is
I literally want to cry after watching this. Thank you so much for saving my ass.
Love you Prof. Strang.....I am beginning to fall in love with Linear Algebra....You are a genius Prof. Strang....
When you think there is no more sliding boards 33:16
Truly agree. It's quite impressive how MIT has so many sliding boards... the # of blackboards at MIT is INFINITE. LOL
The second part of the lecture, going from Ux = 0 to Rx = 0 and further on to RN = 0, and proposing what N is, seemed full of leaps that I could not follow completely. I have done an ML course and a neural network course without deeper knowledge of linear algebra, and thought of filling that gap. The rabbit hole seems to go deep, and again I seem to be taking a few magical things as axiomatic. I will persist. If I cannot get it from Prof Strang, I may not get it at all. Hope the pennies will drop as I move forward, and I will get rich!
Any updates?
same issue with me
Dr. Strang wanted us to realize that the reduced row echelon form of the original matrix consisted of the identity matrix (when only looking at the pivot columns) and some other matrix, which he called F, when only looking at the free columns.
He generalized this notion by defining the matrix R using placeholders I and F for the identity matrix (I) and the matrix formed by the free columns (F), with possible rows of 0s beneath. Since he was generalizing, he wrote R as a block matrix (where I and F represent matrices).
We know I has dimensions r x r (since I is the identity matrix formed by the pivot columns, and the number of pivot columns = number of pivot variables = rank = r)
We know F has dimensions n - r x n - r (since F is the matrix formed by the free columns, and we know there are n - r free columns).
So our original Ax = 0 can be rewritten, throughout the whole process of his lecture, as Rx = 0. He then wonders what the solution of this matrix equation would be.
Well, since he defined R generally using I and F, he was (unintentionally, I assume, given how pleasantly surprised he sounded) defining R as a block matrix, and he decided to find all the special solutions at once, in what he called a null space matrix N.
This N would solve the Rx=0 equation, i.e., would make RN = 0 true.
Well, knowing how matrix multiplication works, N needs to be a matrix that, when multiplied with the row(s) of R, would produce 0's.
Since the first block row of R is [I F], what combination of I and F would equal 0? We would need to multiply I by -F, and F by I (because then we'd have -F + F = 0).
That is how to look at it purely algorithmically. Dr. Strang actually uses wonderful logic: if the first block row of R = [I F], then of course we want I in the free-variable rows (the second block of N) in order to preserve them, and of course we need -F in the pivot-variable rows (the first block of N) in order to cancel out the F in the free-variable block of R.
This is how he knows the null space matrix N is always going to be [-F I] (obviously written as a column, but I can't type that out in this comment).
He then goes further to show us how this actually is not surprising. Going back to Rx = 0, remember that R (as a block matrix) = [I F]. x = [x_pivot, x_free] (as a column matrix).
If we actually did the matrix multiplication we would have:
I * x_pivot + F * x_free = 0. Solving for x_pivot we get:
x_pivot = -F * x_free
So, if in our solution we make our free variables the identity (remember when Dr. Strang said "hey, these are free variables. Let's make them whatever we want. Let's make x_2 = 1 and x_4 = 0" and later he said "hey, let's make x_2 = 0 and x_4 = 1"), then by the above equation, of course x_pivot HAS to be -F.
@@toanvo2829 F has dimensions r x n - r (NOT n - r x n -r), no?
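Right: both blocks called I are identities, but of different sizes (r x r in R, (n-r) x (n-r) in N). The block identity can be checked with a tiny SymPy example (the numbers are assumed):

```python
from sympy import Matrix, eye, zeros

r, n = 2, 3                        # assumed rank and column count
F = Matrix([[4], [5]])             # r x (n - r) block of free-column entries

R = eye(r).row_join(F)             # R = [I F], with I the r x r identity
N = (-F).col_join(eye(n - r))      # N = [-F; I], with I the (n-r) x (n-r) identity

# Block multiplication: I*(-F) + F*I = -F + F = 0
assert R * N == zeros(r, n - r)
```

The shapes work out because R is r x n and N is n x (n - r), so the product is the r x (n - r) zero matrix.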
Why does my university not allow for students to record the lectures? It is so good to have those at home in video format. You can re-watch them and rewind time anytime you missed something because you weren't paying attention. Mighty helpful.
MIT has done a wonderful job giving us quality education for free. Very many thanks!
Just wanted to say that blocking the rref matrix into [[I F], [0...]] form and solving for the nullspace matrix like that is one of the greatest things I've ever seen. It seems like it shouldn't work because F could have different shape than I, but it does. And it generalizes to when F doesn't exist, which helps you remember the ideas in the next lecture.
enjoying these lectures tremendously - can't say I expected to find linear algebra this interesting
the amount of blackboards that an MIT classroom has is baffling.
Great Lecturer ! Never has learning linear algebra been so interesting and well explained !
36:20 "I quit without trying, I shouldn't have done that." So true
hahahahahah me with linear algebra 2 years ago. will get a 10 now easy
From watching this lecture, Dr. Strang continues to strengthen my knowledge of linear algebra. He makes the subject look so simple.
I've never understood null space, rref, and how the null space basis drops straight out of the rref as well as I do now.
I'd recommend Dr. Strang to anyone that tries to learn linear algebra.
The way he connects the flow of ideas..........
26:31 even the ghost gets mind-blown
F will have the same number of rows as I, but maybe not the same number of columns. So to make N, you just put -F on top and fill in the bottom with the identity matrix of the correct size (the number of columns of F). So say I is m by m and F is m by n, then N will have (m + n) rows and R will have (m + n) columns, so it works out. And each block multiplication (I * -F and F * I) also work out.
Yes. So the dimension of the identity matrix in R is not the same as the dimension of the identity matrix in N. And the sum of the dimensions of these identity matrix should be equal to the number of columns in A.
I'm in love with these lecs
The part I found confusing is when we write [I F] * [-F; I] = 0. F and -F have the same dimensions, but the identity on the left-hand side is r x r and on the right-hand side (n-r) x (n-r).
Thinking it through, it makes sense. The RHS has the same number of columns as the number of free variables. It's just a little unusual to see the same letter on both sides meaning slightly different things.
Thank you! This helped enormously! And 9 years later!
A great prof... abstract maths can be taught so easily... It's amazing... Great... hats off to you
You should come up with similar lectures on analysis.
Have you observed that the first lecture got millions of views, and the view count slowly drops from video to video?
I LOVE THIS GUY
GREAT! I was a bit confused at first but in the end, he rocked my world as always! Thaaaank you!
I still do not understand how it works. F and I can definitely have different shapes? This part is not clear from the video.
15:06 lovebirds
lol would be annoying af if sittin behind em.
A magician telling all his secret tricks...
Prof. Strang is a Magician, he shows that Matrix is synonymous to Magic.
every math loving student would love this great man
He is the best teacher i've ever had. How can i get in touch with him? Please!!! Thank you so much.
+hoan huynh See his department page for contact information: www-math.mit.edu/~gs/
43:02 "Fridi"
God I love this man.
After so many years it’s still magic❤
@ Dr. Strang: OUTSTANDING!
8:35 pivot columns and free columns
34:00 If rank(A) = 3, then U*x = 0 has only trivial solution. But A*(-1, -1, 1)^T = 0. So rank(A) is not equal to 3.
23:09 pivot rows 1 and 2😊
19:30 Reduced row echelon form(RREF)
its incredible what 15 years does 🙌🏽
I believe that Mr. Strang is really amazing and incredible, but I got stuck when he talked about the pivot and free variables. He explained the algorithm very well, but I am not able to connect it with what it means. Why can the non-pivot columns' variables be set to anything? What effect does that have on the graph if I wish to plot it? And what's the concept behind pivot and free variables; where did they come from?
So, if you guys could help me out with this, it would really be appreciated!!
same here! I've been listening to this part again and again, but I'm having a hard time understanding the logic behind pivot and free variables
The pivot is the first non-zero entry on a row. If the system has a unique (single) solution, each row will have exactly one non-zero entry when the matrix is in reduced row echelon form, and there won't be any free variables. In the null space example, if each row has exactly one non-zero entry, the only vector that solves the system when the RHS is zero is the zero vector.
You are right about pivots, because he did not really explain that. The point of there being only two pivots is that the columns of this matrix are not linearly independent, but I agree it should be explained much better and in more detail than in this lecture.
16:30 the free variable, rank, and special solution amount relation
n-r free variables (column minus rank)
I study at a German uni and everything is so goddamn formal, I couldn't fathom until today why the dimension of the kernel plus the rank equals the number of columns
thank you MIT OpenCourseWare and thank you Dr. Strang
No one teaches this better than him
@ Rahul Duggal, the prof doesn't mean to change the column, he's just highlighting it to be obvious.
@gavilanch I'm glad they were made to be found! MIT rocks, more should follow their example!
hmmm thanks for the explanation. I had to play with examples for some time to get a hang of it.
truly brilliant and impeccably clear
just hope you remember the minus F on the test.
Finally null space column has a combination of identity with free variables 👏👏
For a second, I'm always surprised that people don't clap at the end of the lecture...
I know!!😂 He's the best professor I know.
Non-trivial solutions don't exist for an n×n matrix if its columns are linearly independent
correct
indeed, rocked my world! listening to his lecture is a kind of pleasure!
@ 32:10 X subscript "pivot" and X subscript "free" are being treated as submatrices to enable block multiplication. Hope I'm right
Using null matrices, sound waves can be converted into a no-sound domain (zero sound power), like a switch application in hearing aids.
Really happy with these lectures... delivered by Prof. Strang
So the independent columns are associated with dependent variables whereas dependent columns give us free variables that can be arbitrarily assigned in the equation.
15:10 special solutions
9:11 "I can assign anything that I like for X2 and X4..."
So, what's stopping us from choosing the free variables as X2 and X3? Because, it seems clear from the equations that they can be assigned any value arbitrarily.
Somebody!
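For what it's worth, in this example nothing does stop you: any n − r variables can serve as the free ones, as long as the remaining columns are independent; taking the non-pivot columns is just the convention that makes back-substitution automatic. A quick pure-Python check with the lecture's matrix (the particular numbers are my own worked example: picking x2 = 0 and x3 = 1 forces x4 = −1/2 from 2x3 + 4x4 = 0 after elimination, and then x1 = −1):

```python
from fractions import Fraction as F

A = [[1, 2, 2, 2], [2, 4, 6, 8], [3, 6, 8, 10]]

def times(A, x):
    """Matrix-vector product A x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Treat x2 and x3 as the free pair instead of x2 and x4:
# x2 = 0, x3 = 1  =>  x4 = -1/2  and  x1 = -1 by back-substitution.
x = [F(-1), F(0), F(1), F(-1, 2)]
print(times(A, x))  # [0, 0, 0]: a perfectly good null space vector
```

So {x2, x3} works too here; the RREF's non-pivot columns are simply the choice the algorithm hands you for free.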
Oh he is such a great teacher!!! with appropriate pause and a moderate speed!! I'm glad that I learn a lot.
Anyone understand the equation at 32:15? I think x_free should be above x_pivot?
In the original example we used all of the identity elements (1,0;0,1) in the solution (x), but in the transpose we just reduce it to "1". Why?
The identity Matrix of a 1x1 Matrix is just 1. Since the result was a 1x3 and not a 4x2, the identity part only had a 1x1 spot in the result, thus it just gives 1.
19:54 "Let me suppose I got as far as u" lol
6:36 echelon... staircase
I think the best title for this video is Understanding Null Space.
17:13 how many free variables
You can also use this algorithm to get the cross product of two vectors in R3. Solve Ax=0 for 2x3 matrix. The dotproduct of each rows and x will be 0.
Can you derive the cross product matrix from this algorithm?
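The first claim is easy to sanity-check: for a 2×3 matrix A with rows u and v, the cross product u × v is perpendicular to both rows, so it solves Ax = 0. A small self-contained sketch (the example vectors are arbitrary, my own choice):

```python
def cross(u, v):
    """Cross product of two vectors in R^3."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = [1, 2, 3], [4, 5, 6]   # rows of a 2x3 matrix A
x = cross(u, v)
print(x)                       # [-3, 6, -3]
print(dot(u, x), dot(v, x))    # 0 0: x solves Ax = 0
```

Conversely, the special solution the elimination algorithm produces for such a 2×3 system is a scalar multiple of u × v (the null space is one-dimensional when the rows are independent).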
huge teacher -professor
3:07 Why does elimination change the column space? All you do is take rows to be linear combinations of each other (crucially, preserving the original row), which leaves the column space unchanged. The basis vectors you use to form the column space are different, but the space itself should be the same because all combinations are in the original space and no vectors are lost by multiplication by zero.
We are taking linear combinations of the rows, hence the row space remains unchanged; but the column space can change, because each column's entries are being recombined independently of the others, so the resulting columns can span a different space.
all of a sudden i'm heavily interested in linear algebra and math
I'm curious why we should choose 1 and 0 as our free variables? Choosing different values will produce different, but still numerically consistent results, so what makes 1 and 0 the appropriate choice? Are other solutions valid?
@Baysungur Alparslan Makes sense, thanks!
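In case others land on this thread: 1 and 0 aren't the only valid choice, just the tidiest one. Setting one free variable to 1 and the others to 0 guarantees the special solutions are linearly independent and easy to read off, but any values for the free variables give a null space vector. A pure-Python check with the lecture's matrix (my own `solution` helper encodes the back-substitution x1 = −2·x2 + 2·x4, x3 = −2·x4 from the RREF):

```python
from fractions import Fraction as F

A = [[1, 2, 2, 2], [2, 4, 6, 8], [3, 6, 8, 10]]

def solution(x2, x4):
    """Build a null space vector from any values of the free variables."""
    return [-2 * x2 + 2 * x4, x2, -2 * x4, x4]

def times(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# The (1,0)/(0,1) pattern gives the two special solutions...
print(times(A, solution(1, 0)))            # [0, 0, 0]
print(times(A, solution(0, 1)))            # [0, 0, 0]
# ...but any free-variable values land in the null space too:
print(times(A, solution(F(7), F(-3, 2))))  # [0, 0, 0]
```

Every solution of Ax = 0 is some combination of the special solutions, which is exactly why the 1/0 pattern is enough to describe the whole null space.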
One thing I didn't understand: when he solved for A transpose, the 2nd row becomes 0 because it's dependent on the 1st row. The 4th row also became 0, but it doesn't depend on any other row. He said that a row of 0's means the original row is a combination of the other rows, but the 4th row is not such a combination, so why did it become 0??? Please help if anyone knows the answer!
33:18 Flirting starts at row 2, not the matrix row I mean.
@shinyralle all he wants is, that you get a better view for that what he wants you to see. that why he writes it that way
Love him so much.
can someone please explain how he got from R = [1 2 0 -2; 0 0 1 2; 0 0 0 0] to R = [I F; 0 0] ?
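Not an authoritative answer, but one way to see it: [I F; 0 0] is a grouping by column type, not a literal left-to-right split. The pivot columns (1st and 3rd here) hold an identity matrix, the free columns (2nd and 4th) hold F, and the all-zero bottom row is the [0 0] block; in this example they interleave as I-col, F-col, I-col, F-col. A tiny sketch that plucks the blocks out:

```python
R = [[1, 2, 0, -2],
     [0, 0, 1,  2],
     [0, 0, 0,  0]]
pivot_cols, free_cols = [0, 2], [1, 3]

# Read only the pivot rows (the zero row at the bottom is the [0 0] block).
I_block = [[R[i][c] for c in pivot_cols] for i in range(2)]
F_block = [[R[i][c] for c in free_cols] for i in range(2)]
print(I_block)  # [[1, 0], [0, 1]]   the I in [I F]
print(F_block)  # [[2, -2], [0, 2]]  the F in [I F]
```

The special solutions then put −F in the free-variable slots and I in the... well, the same interleaved positions, which is the visual cue the lecture is after.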
What a class! Sensational!
I just read the latest edition of the book for this course and it is brilliant, the best linear algebra textbook! Thank you Dr. Gilbert Strang! But I like the cover of the older edition with the houses being transformed. Is there any software for online homework for the book? Please let me know.
This guy is definitely PERFECTTTT!!!!