Note for myself: inverting an elementary matrix is simple, just flip the negative entry to positive. Also, when we do no row exchanges, L comes out directly: take the operations used to zero out the E21, E31, and E32 positions on the way to U, reverse the sign of each multiplier, and write it in the corresponding position of the identity matrix. And when we do row exchanges in order to get the U matrix, it is still very simple; compared to the no-exchange case, permutation matrices also appear among the elementary matrices, and their inverses are trivial to compute: a single row-exchange permutation matrix is its own inverse.
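A quick numpy check of both facts in this note, using made-up matrices:

```python
import numpy as np

# Elementary matrix E21: subtract 4 x row1 from row2.
# Its inverse just flips the sign of the multiplier.
E21 = np.array([[1., 0., 0.],
                [-4., 1., 0.],
                [0., 0., 1.]])
E21_inv = np.array([[1., 0., 0.],
                    [4., 1., 0.],
                    [0., 0., 1.]])
print(np.allclose(E21_inv @ E21, np.eye(3)))  # True

# A single row-exchange matrix (swap rows 1 and 2) is its own inverse.
P = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])
print(np.allclose(P @ P, np.eye(3)))  # True
```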
He just nonchalantly planted the idea of group theory at the very end of the lecture. Genius! If only one could make a math playlist of all the best lecturers in the world... maybe I will do this.
I will never understand the people who disliked this video. I have never liked revisiting a topic, unless it is from deep learning. But this! This is a gem that I will revisit my entire life, any given day. Bored? Pick a topic from this series. Depressed? Pick a topic from this series. Need inspiration? Pick a topic from this series. Time on your hands? Still need a hint?
Can you clear one thing up: why is it not 99² or (n-1)² for the first step? What is the significance of saying about 100² when there is no operation specifically on the first row?
Ajitesh Bhan That is because we only want to know the highest-order term of the answer, as an estimate of the so-called "cost" or "complexity". Try n from 1 to 10 and you will find n³ is significantly greater than n² as n grows bigger and bigger, so we just estimate the highest order, which is n³; that is good enough to know the cost, because n² and the other terms are so small compared with n³.
Ajitesh Bhan The first step is n(n-1) if we do an accurate count, but n² is fine for the same reason: we just want the approximate order of the cost. It is obviously order 2, so n² is okay instead of the more accurate n(n-1).
I feel so sorry for the younger version of me who didn't know about this great course and the nicest of instructors. Poor guy just hated his math classes. Thanks MIT, thanks dear Gilbert Strang :)
n² + (n-1)² + (n-2)² + ... + 2² + 1² = n(n+1)(2n+1)/6. As n becomes "big", this sum approaches n³/3. His approach in the lecture is also good: plot the graph of y = x², mark the points x = n, n-1, n-2, ..., 1, and you'll find that if n is big enough, the discrete plot looks more and more like the curve y = x², which lets you approximate the sum by the area under the curve. Again, what you get for a reasonably large n is n³/3. One final thing: if these operations are performed element by element in a loop, you'll need way more time, because his analysis counts an operation on an entire row. To achieve that, you would need vectorized code that operates on the whole row at once. Hope this helped someone :)
Shouldn't it be n(n+1)/2?
2x2 -> 1 operation
3x3 -> 1+2 = 3 operations
4x4 -> 3+3 = 6 operations
5x5 -> 6+4 = 10 operations
6x6 -> 10+5 = 15 operations
7x7 -> 15+6 = 21 operations
Hence 1+2+3+4+5+6+...+n = n(n+1)/2. I considered multiplying a row by a constant and then subtracting it from another row as one operation.
@@mkjav596 If you multiply a row (say of size n) by a constant, then the cost is O(n). We generally treat that as linear complexity rather than a constant, since for bigger n (say > 10000) counting the whole-row operation as constant-time hides real expense.
Dude, your comment is really helpful to me. May I ask you for more details about the second point of your comment, the loop thing? I'm not getting that point clearly.
Another visualization is building a pyramid starting with a block of base n^2 and height equal to one, then another smaller block with base (n-1)^2 and height one on top and so on eventually resulting in the overall height of n. When n grows and the point of observation moves away from the pyramid such that the height appears to be constant, the blocky pyramid becomes increasingly smooth and the volume approaches 1/3n^3.
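The n³/3 growth discussed in this thread is easy to check numerically (the function name is mine):

```python
def elimination_cost(n):
    # One "multiply + subtract" per entry touched: n^2 + (n-1)^2 + ... + 1^2
    return sum(k * k for k in range(1, n + 1))

n = 100
exact = elimination_cost(n)
assert exact == n * (n + 1) * (2 * n + 1) // 6   # the closed form
print(exact, round(n**3 / 3))  # 338350 vs 333333: within about 1.5%
```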
The explanation is so clear: for a simple matrix (3x3) I can directly write out L given the multiplier at each elimination step, without going through matrix inversion and multiplication. Here is the process: (1) Flip the sign of the multiplier at each elimination step. (2) Put it directly into the L matrix at the same position (index). In the example at 18:00, flipping (-2) I get 2, and put 2 in position L[2,1]; flipping (-5) I get 5, and put 5 in position L[3,2]. So I get L.

BTW, another way to understand why L is better than E: (1) When producing E, interfering operations happen, so a new (implicit) relationship between row 1 and row 3 is formed. As a result, a new entry (10) appears at position E[3,1] to reflect that implicit relationship. (2) When producing L, the operations are in the right order, so there are no interfering operations and no implicit relationships are generated; we can just plug the multipliers directly into L without worrying about missing any entries. (3) As a side note, the entry value tells you the multiplier, BUT more importantly, the index of each entry tells you the relationship between rows. E.g., in matrix E32 the entry E[3,2] = -5 means the change coming from row 2 to row 3, with (-5) as the amount. Getting an explicit explanation of the role of entry indexes helps a lot to build intuition in the long run. Thank you Dr. Strang. You are the hero of Linear Algebra!
"When producing L, the operations are in the right order" I don't understand why that is. The order is determined by the order of the E_i, just in each case it is the inverse of the respective E... How can the order be "right" if it was determined by the order of the E_i. There are multiple possible sequences of E_i after all.
The best teacher teaching this material as far as I know. I wonder whether his books are as good as his lectures. May he still have a long and healthy life. :)
But do you put on one sock, then one shoe, then the other sock, then the other shoe? Or both socks first, then both shoes? And do you have to take them off in the same order?
@Wilhelm Eley If he changes his outfit daily with a periodicity coprime with 7 (basically "any number"), at the end of the period we would see him using all his clothes, thanks to Bézout's identity. u.u We just need to spot the minimal differences in his outfits and, statistically, hope we are right about the number of clothes he uses. We are doing our jobs right. u.u
When I first ran into linear algebra at university I was so stuck trying to understand even the basic topics of my courses. Then, after 2-3 years, I discovered Mr. Strang's lectures, and I have to say I am so grateful for this professor, because his teaching approach made me understand the whole concept of linear algebra, and I actually found it very interesting for the first time in my life. Plus, I finally passed my courses after all these years xD. God bless you, Mr. Strang :)
(1/3)n³, magically! Hmm! But hey, it makes sense: it is a sum of all (n-x)², and since he treated the pivot index as a continuous variable, the discrete sum of squares turns into an integral, and there you have it: (1/3)n³.
Since all eliminations must be done by computers for large matrices, intuitive approaches fail quickly; precise, rigorous algorithms are the only practical way to do elimination. Gilbert Strang's style defies the rigorous approach, and does it on purpose to breathe life into the dull process of elimination.
When he's going over the number of operations to "solve" a matrix, what exactly does he mean by "solve" ? Finding E, L, I, or something else? EDIT: Aha, at 35:17 he mentions it's going from A to U.
A 100×100 matrix will take at most 4950 steps of elimination to form a triangular matrix, if and only if none of the elements equal 0. Explanation (formula as per me 😁): a 100×100 matrix has 10000 elements, so 100 pivots, which are not to be changed, leaving 10000 - 100 = 9900 elements. We are supposed to make either an upper or a lower triangular matrix, so we have to zero out half of the elements, skipping the pivots, i.e. 1/2 × 9900 = 4950 elements, so 4950 steps 😎
Great video! I love this series. In this lecture, Dr. Strang briefly mentions that the cost of operations for the det(A) will be (n!) He also shows us how the cost of getting A into upper triangular form U will be (1/3)n^3. But from Lecture 2 we know that one way of finding det(A) is to get A into U and then simply find the product of all n pivots. So it seems like the cost for finding det(A) would just be a bit more than (1/3)n^3, perhaps (1/3)n^3 + n. I must be missing something here; any thoughts?
If I understand you correctly, the new term n is ignored because it is not significant compared to n³ as n goes to infinity. I'm no expert, and I'm not even sure you are right; but if you are, and you are wondering why the n is ignored compared to n³, I think this is the reason.
Weston Loucks You're right. When you calculate the upper triangular form and then multiply the pivots, the work scales with n³. When he says the effort is n!, he is referring to a calculation with the Laplace (cofactor) expansion formula.
The lower-order term almost vanishes for large n, so it isn't significant in Big O terms; n³ will dominate as n goes to infinity. The n! comes from the popular cofactor-expansion algorithm for determinants.
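A toy sketch of the n³ route (my own code, not a library routine): reduce to U with partial pivoting, multiply the pivots, and track the sign from row exchanges:

```python
import numpy as np

def det_via_lu(A):
    U = np.array(A, dtype=float)
    n = len(U)
    sign = 1.0
    for j in range(n):
        p = j + np.argmax(np.abs(U[j:, j]))  # partial pivoting
        if p != j:
            U[[j, p]] = U[[p, j]]            # each row swap flips the sign
            sign = -sign
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    return sign * np.prod(np.diag(U))        # det = +/- product of pivots

print(det_via_lu([[2., 1.], [1., 2.]]))  # 3.0
```

This touches about n³/3 entries, versus the n! terms of cofactor expansion.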
Yeah, I was thinking about it, and it came to me that, say we have a 100 by 100 matrix. If we count a multiplication and a subtraction as two operations, then to reach a state where the first column has only one nonzero element (our pivot in row 1), we do 100 subtractions for each of the 99 rows below the first (which stays unchanged), and, since we multiply all the elements of a row by the multiplier, also 100 multiplications for each of those 99 rows. So the answer should be about 2·100·99. Generalizing to n, it comes to 2·n·(n-1), and so the total operations will be 2·[(1/3)n³ - (1/2)n²]. PS: If we take into account the discreteness, the total number of operations = 2·[n(n+1)(n-1)/3].
Can anyone explain why the E21 at 11:16 is easy to invert? Did he teach the trick in the previous lecture, or is it covered in the readings?
If there is -4, just put 4 in the same place. It is related to the ways of understanding matrix multiplication. Start with EA = U; this means you are doing one step of elimination to A, say step E21. The step is: take row 2 minus 4 times row 1 (of A). That is what the second row of E, i.e., (-4, 1), means. Now you want to undo this step E to get A = LU. You need to add the 4 times row 1 back to row 2. So the new (4, 1) means: to get the original row 2 back, add 4 times row 1.
If you scroll down you will find someone asked the same question. "It takes 100 operations to make (2,1) into 0, because the rows are 100 deep. Each of those elements changed as well. Then it’s done 98 more times to the rest of the rows."
I would say that the most efficient way of solving Ax=b would be solving AᵗAx = Aᵗb (the least-squares problem) using the CG algorithm, due to my personal amazement with the CG method. Pretty sure that's not actually the case, but this method gives me chills. Hahaha
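For fun, here is a bare-bones CG sketch applied to the normal equations (my own toy implementation; a real solver would use a library, and, as the commenter suspects, plain elimination or QR is usually the saner choice for dense Ax=b):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Plain CG for symmetric positive definite A; a sketch, not production code.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)

# A^T A is symmetric positive definite (almost surely), so CG applies.
x = conjugate_gradient(A.T @ A, A.T @ b)
print(np.allclose(A.T @ A @ x, A.T @ b))  # the least-squares solution
```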
I don't understand why the cost is n² to zero out the first column. A single operation is a multiplication and a subtraction; therefore I need to multiply the first row by some constant and subtract it from the 2nd row to get a zero. It takes 1 operation to make (2,1) zero, so 99 or 100 operations in total to make the first column zero except the pivot. Help me out here please.
It seems that every change to every number in every row has to be considered as 1 operation. When you change (2,1) to zero, you also change (2,2)...(2,100), because you multiply the whole row 1 by some constant and subtract it from row 2. So you have already used 100 operations by the time you finish fixing row 2, and then doing row 3 costs 100, row 4 costs 100... until row 100. That's why you have already used 100 squared just after finishing the first column of A. It's just my guess; in the video he said "a multiply plus a subtraction of a row" is defined as "1 operation", so it confused me as well. Hope another smart person can answer us.
Gilbert said " the cost comes from the number of multiplication and subtraction that you did when you multiply 2 rows by some number to make the (2,1) component zero, you have to subtract every element in 2nd row from the 1st row multiplied by some number so the number of calculation you did equals to 100 the rest of 98 rows take the same step. Therefore the number of calculation you did equals to 100+100+100... 100 (= 100*99) and the last 100 comes from the fact that you have went through 100 times multiplication of 1st rows to make the 1st column except for (1,1) component zero. So (100*99) + 100 = 100*100 ^^
Man the chalk glides so smoothly across the black board, when I give tutorials at my university it's usually a huge pain in the ass to draw stuff on there because it just feels like shit haha
This Python program shows that the total sum grows like n cubed over 3:

```python
import matplotlib.pyplot as plt

def r(n):
    # sum of squares: 1^2 + 2^2 + ... + n^2
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

def y_axes(n):
    return [r(i) for i in range(1, n + 1)]

plt.plot(list(range(1, 1001)), y_axes(1000))
plt.show()
```
Because once you eliminate a single element from a row, the whole row changes; since each row has 100 elements, all of them change, i.e., a single row operation changes 100 elements. So in the first step, when you're eliminating the first element of all 99 rows below the first row, a total of 100·99 elements change, which is roughly taken as 100².
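Here is a tiny counter for that first stage (a made-up helper, counting one multiply-subtract per entry touched):

```python
import numpy as np

def first_column_ops(n):
    # Zero out the first column below the pivot, counting one
    # "multiply + subtract" per entry touched.
    A = np.random.default_rng(0).standard_normal((n, n))
    ops = 0
    for i in range(1, n):
        m = A[i, 0] / A[0, 0]
        A[i] -= m * A[0]   # the whole row changes: n entries touched
        ops += n
    return ops

print(first_column_ops(100))  # 9900, i.e. 100 * 99, roughly 100^2
```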
At 24:20, when he says that the multiplier goes directly into L, he means the negative of it, right? If you keep track of the operation on the left side, it inverts when you bring it to the right side.
At 39:00 the cost of b is not EXACTLY n² operations like he says in the video. Yes, there are n elements, and we assume all the elements are non-zero from the beginning, but that doesn't mean EVERY element is being changed. For this to be true (that we are using n² operations) we ALSO must assume there are no 1's in the pivot positions from the beginning. For example, if there were 1's in all the pivot positions from the beginning, the cost of b would be n² - 100 (unlikely, but as an example). So the cost of b is not exactly, but CLOSE to, n².
How is the number of operations n²?
2x2 -> 1 operation
3x3 -> 1+2 = 3 operations
4x4 -> 3+3 = 6 operations
5x5 -> 6+4 = 10 operations
6x6 -> 10+5 = 15 operations
7x7 -> 15+6 = 21 operations
So it should be 1+2+3+4+5+6+...+n = n(n+1)/2. Correct me if I am wrong.
4:57 "if I transpose these guys, that product, then again...": why did he jump straight to this theorem without any definition of the transpose or a logical derivation? Did I miss something?
I finally got stuck at this video. I guess I haven't mastered what was taught in the last lecture. I will revise it by solving the assigned problem sets like the MIT students. I cannot expect the same progress while I don't practice as much as the MIT students do.
He proves that you multiply in reverse order with inverses, but he doesn't prove it for transposes. I want to work it out. I think it has to do with the fact that you multiply row × matrix, not column × matrix; the rows of A-transpose are the columns of A, so you have to reverse the order to multiply row × matrix.
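A quick numerical spot-check of the reversal rule (AB)ᵀ = BᵀAᵀ, with rectangular shapes chosen so that only the reversed order is even conformable:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

# The transpose of a product reverses the order, just like the inverse does.
print(np.allclose((A @ B).T, B.T @ A.T))  # True
# B.T @ A.T is (2x4)(4x3); the un-reversed A.T @ B.T wouldn't even multiply.
```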
At 32:35: shouldn't the first cost be 99·2 instead of 100², because there are 99 rows below the first one? To clear each row below, you multiply the first row by some constant and then subtract it from the row you are working on. So aren't we essentially doing 2 operations per row for those 99 rows, hence 99·2 instead of 100²?
I think I have figured it out. For each element in any row below the first one, you compute (that element - multiplier × corresponding first-row element), and that is 1 operation. So you have 99 rows below the first row, each having 100 elements, so it should be 100·99 operations, which Prof. Strang writes as *about 100²*. Thanks.
@@bridge5189 But he said that after the 1st step, the 1st row (which is obviously not changed), the 2nd row, and the 1st column are the only clean ones. So I think he only meant cleaning up the 1st column (just like Gauss); in that case the operations for just the 1st step would only be 99.
@@anuragagarwal5480 Elimination steps are nothing but writing linear combinations of the given system of algebraic equations, multiplying by constants and then adding/subtracting them from each other. So when you do the first elimination step, bringing a zero into the second row's first column by subtracting some constant times the first row from the second row, you have to do the same operation across the whole second row. That gives n operations, one for each element in the second row; here n = 100. Similarly, you keep doing this to bring a zero into the first element of all the rows beneath the second row, 98 more in total. Hence, you end up doing 100·99 operations in total.
@@bridge5189 Yes, but we also don't want to make the 2nd row's 2nd column (and correspondingly in further rows) zero, since we want our diagonal elements to be non-zero. So if the total for the first step is 100·99 operations, why does sir take another (n-1)² + (n-2)² + ... + 2² + 1² operations, ≈ n(n+1)(2n+1)/6 in total, to complete the whole elimination process?
So we can take the first-stage cost as about n² (100²). Now what we need next is bringing zeros into the second column, from the 3rd row to the 100th row. This is just like bringing zeros into the first column from the second row to the last row of an (n-1)×(n-1) matrix, so for this we need about (n-1)² operations. Continuing this way, we have 1² + 2² + ... + n² = n(n+1)(2n+1)/6.
Hmm, is there a companion book we should follow? I'm puzzled, as I don't think the elementary matrices were introduced in previous lectures, nor upper/lower triangular matrices. I feel this is not beginner friendly.
We recommend you view the course materials along with the videos at: ocw.mit.edu/courses/18-06sc-linear-algebra-fall-2011. For further study, there are suggested readings in Professor Strang’s textbook (both the 4th and 5th editions): Strang, Gilbert. Introduction to Linear Algebra. 4th ed. Wellesley, MA: Wellesley-Cambridge Press, February 2009. ISBN: 9780980232714 Strang, Gilbert. Introduction to Linear Algebra. 5th ed. Wellesley, MA: Wellesley-Cambridge Press, February 2016. ISBN: 9780980232776 Best wishes on your studies!
Think about what happens to the entries in the matrix every time you do a row exchange. You can never get more than a single 1 in each row, since rows are only ever exchanged; in other words, every exchange is a new ordering of the rows of 1's, leaving no COLUMNS with more than one 1 either, since no extra 1's are added to any rows, and the starting point was the identity matrix. This is what the permutations are all about. You can think of it this way: there are four ways to put a 1 in the first row, but then only three ways to put a 1 in the next row you choose (since you cannot put two 1's in the same column), then two ways for the third 1, and for the last row only one position is left in which to complete the ordering. Altogether this makes 4x3x2x1 ways to choose the order of the 1's, or n! ways to order a given n x n matrix. This type of rearranging is called "permuting", and each such ordering is a single permutation.
I'm a year late, but here comes a somewhat intuitive explanation. Whenever you are trying to calculate how many ways there are to rearrange some set of length N, think like this: there are N ways to choose the 1st object (a row here), then we are left with N-1 options for the 2nd object, and so on until the only remaining object becomes the last one. Thus we have a product of N · (N-1) · (N-2) · ... · 1, or, simply put, N!. Hope that helps.
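The count is easy to verify by brute force: build every permutation matrix by reordering the rows of the identity (itertools does the choosing described above):

```python
from itertools import permutations

import numpy as np

n = 4
I = np.eye(n)
perm_mats = [I[list(p)] for p in permutations(range(n))]

print(len(perm_mats))  # 24 = 4!
# Every one really is a permutation matrix: each row and column sums to 1.
print(all(m.sum(axis=0).tolist() == [1] * n and
          m.sum(axis=1).tolist() == [1] * n for m in perm_mats))  # True
```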
So we know that b is an n by 1 vector (if you don't remember, go to lecture 2 around minute 15, where he talks about it), and we just need to apply what we did to each row of A to the same row of b: a constant number of operations per step, like dividing by some factor or adding a multiple of some row. Since each of the n elements of b can get updated on the order of n times, in total it is going to cost us Θ(n²).
Does this mean the cost of b is equal to the cost of doing the inverse matrix? Because right before he explained the cost of the matrix, he was saying why the inverse matrix is better to do.
But it's not EXACTLY n² operations like he says in the video. Yes, there are n elements, and we assume all the elements are non-zero from the beginning, but that doesn't mean EVERY element is being changed. For this to be true (that we are using n² operations) we ALSO must assume there are no 1's in the pivot positions from the beginning. For example, if there were 1's in all the pivot positions from the beginning, the cost of b would be n² - 100 (unlikely, but as an example). So it's not exactly, but CLOSE to, n².
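One way to see the bookkeeping for b: carry it along as an extra column of the augmented matrix, so each row operation costs one extra multiply-subtract (a made-up 2x2 example):

```python
import numpy as np

A = np.array([[2., 1.],
              [6., 4.]])
b = np.array([[5.],
              [17.]])
Ab = np.hstack([A, b])        # augmented matrix [A | b]

m = Ab[1, 0] / Ab[0, 0]       # multiplier 3
Ab[1] -= m * Ab[0]            # b's entry is updated in the same sweep

U, c = Ab[:, :2], Ab[:, 2]
print(U)  # [[2. 1.], [0. 1.]]
print(c)  # [5. 2.]
```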
18:52 He says "I'm subtracting rows from lower rows" when we are multiplying the two matrices. I was not able to get this; I was only able to simply multiply the matrices using combinations of columns and rows.
That would give n³, which is off by a factor of 3. Take into account that the elementary matrices are special matrices with plenty of 1s and 0s, so his way is more precise.
Kinda late, but I'm pretty sure it is because we want the first pivot. When we subtract this row from the rows below to make it the pivot, the other numbers in those rows need to change too (the ones in the other parts of the row); you must update every other number in the row. Anyway, it's cool that you are learning during a time like this :)
Thanks a lot, Sir & MIT, for bringing out this excellent lecture series on Linear Algebra. May I know where one can find the corresponding problems & assignments for these lectures? Thanks.
The course materials are on MIT OpenCourseWare at: ocw.mit.edu/18-06S05. We also recommend you look at the OCW Scholar version of the course. It has more materials to help self-learners out: ocw.mit.edu/18-06SCF11. Best wishes on your studies!
The shoes and socks analogy for inverses of matrix products is probably the cutest thing a math genius has ever said.
It is in every textbook.
@Reed Morris Who put together the ranking you speak of? This is not something one can evaluate objectively. Try again.
And of course this is basic math. This is a freshman or sophomore undergraduate math class. How does that relate to my original comment at all?
@@SilverArro I know not of the ranking you speak of. How can you rank the cuteness of all geniuses by your own mechanisms? He isn't very bright in my book, but I guess he can still be a math genius. How do you know he didn't just study hard? Not every competent person is a genius just because they seem so to you.
@@SilverArro Most places require 2 years of college calculus before linear algebra; that's the third year, btw, just saying. Saying it's entry level doesn't uplift you at all.
@@Gojam12 The ranking thing was in reply to a comment that has apparently since been deleted. I really have no idea what you’re talking about and I don’t particularly care. You do not need 2 years of college calculus to take linear - 2 semesters maybe (that’s a single year). I took AP Calc in high school and went straight into Multivariate Calc and Linear my freshman year. Since this is MIT, I’m going to guess many students are in a similar boat. Linear Algebra is indeed basic math, sorry. It is where most students just begin to get their feet wet in digging into mathematical theory. If you want to argue further over something so trivial, feel free to keep arguing with yourself here - I won’t be replying again. This was a lighthearted comment and certainly not meant to be a treatise on mathematical genius - I find it amusing that you should need that pointed out to you. Goodbye.
Lecture timeline Links
Lecture 0:00
What's the Inverse of a Product 0:25
Inverse of a Transposed Matrix 4:02
How's A related to U 7:51
3x3 LU Decomposition (without Row Exchange) 13:53
L is product of inverses 16:45
How expensive is Elimination 26:05
LU Decomposition (with Row exchange) 40:18
Permutations for Row exchanges 41:15
awesome
Solomon Xie You are the hero everyone needs :D
What is the book that the students use for this course?
@@saurycarmona5716 The professor himself is the author of the book. It's called 'Introduction to Linear Algebra' by Gilbert Strang.
The internet is such a wonder! Thanks to it, I can learn from great educators like Prof. Strang from the comfort of my home. What a nice era to be a human in.
Gilbert Strang lecture: "and this is a matrix..."
Gilbert Strang textbook: "Find the corners of a square in n dimensions and whether vectors a, s, d,e w,wieidwdjdkdk are contained in the cube...."
lmaoo omg
THAT IS SO TRUE OMG
So in other words he ain't worth a shit, is that what you mean? Because I agree.
he even admits that some of its examples are dumb
@@Gojam12 what do you mean?
How many times has watching a lecture brought a smile to your face? I was constantly smiling, every time he pointed out something that I hadn't thought of in the way he mentions it. Such an amazing teacher!
The moment he said the inverses of these matrices (permutation matrices) are just their transposes...
Blew my mind I had to pause to check all of them... wow
@@ranjanachaudhary2110 Same lol that blew my mind
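The self-inverse property holds for a single swap; in general the inverse of a permutation matrix is its transpose. A quick numpy check with a 3-cycle (made-up example):

```python
import numpy as np

# A cyclic permutation of 3 rows: NOT its own inverse, but the transpose works.
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])

print(np.allclose(P @ P, np.eye(3)))    # False: P is a 3-cycle
print(np.allclose(P.T @ P, np.eye(3)))  # True:  the transpose is the inverse
```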
Thank you MIT for this... giving the world the privilege to view and learn from your content... it's commendable, and I am grateful.
46:15 Holy shit! He gave a teaser of abstract algebra right there! I just finished abstract algebra and was just watching these lectures because... I don't need a reason, and I just noticed that now! Prof Strang is amazing. I am glad I can watch these lectures from anywhere and at any time I want :)
Jͼan Yep. He gives a very nice and basic example of a group. The permutation group is often one of the first examples you examine when studying group theory.
Holy Beautiful
“Because... I don’t need a reason,” you are damn right
I studied discrete maths set theory today, and it just clicked that it's an algebraic group the moment he said closure. Small moments of happiness :)
In Germany our linear algebra courses open with group and field theory, and some even do modules later on if they're feeling mean. I kind of like the approach of geometry and computations first though, at least for physics.
I remember being a student and rushing out after class, as these students did. But now at the ripe age of 35, I see these students doing the same and I think "HOW DARE THEY NOT STOP AND APPLAUD FOR SUCH A MASTERFUL PERFORMANCE"
I feel exactly the same
@Supersnowva he's great
It would be hard for a beginning linear algebra student to appreciate how much better Strang’s teaching is than most courses.
I did my maths degree in the late 1970s-early 1980s. I did a load of linear algebra, I didn't realize how lucky I was. Watching professor Strang just makes me want to pick up an algebra book, and work through it. Bravo Professor.
If reincarnation and time travel both turn out to be things, I want to come back as a student in his class.
i literally waited years for this video. im going to binge watch this bitch
Professor Strang really enjoys teaching. I so appreciate that I could learn from him! I like the way he teaches so much!
He just gave an intro to computational complexity in CS, measured in order notation such as Big O, Omega, etc. Pure gold how he also almost stated the definition of little o right there. Applause!
for those unaware, this video was originally uploaded in very bad quality (you will see complaints about this in the next one) and MIT OCW claimed to have lost the original recording and thus were unable to upload it in higher quality. Fortunately, they seem to have found the tapes. Thanks MIT OCW!
yeah, those unaware fools...what do they know ! ...i remember the bad quality and was afraid of lecture 4, when i repeated the course...but let´s not scare the young folks with stories from the past... let it rest...
And where exactly is this HQ version? Or is there one that's more echo-y than this one?
@@windowsforvista ua-cam.com/video/5hO3MrzPa0A/v-deo.html
@@briann10 Damn that is bad!
I know of no soft power more effective than these lectures. Thank you MIT for the generosity and commitment.
Note for myself: inverting elementary matrices is simple - just where there is a -ve entry, make it +ve.
Also, when we do no row exchanges, L is obtained simply: for the operations used to zero out the E21, E31, and E32 positions, reverse the sign of each multiplier and write it at its respective position in the identity matrix.
And when we do row exchanges to get the U matrix, it is still very simple: compared to the previous case without row exchanges, permutation matrices will also be present among the elementary matrices, and their inverses are also very easy to calculate.
A permutation matrix that swaps two rows is its own inverse (in general, a permutation matrix's inverse is its transpose).
God bless MIT and Professor Strang. Such a bright light for a wonderful course!
"I'm sorry that's on tape" Strang 2005
isn't it yr 2000? 2005 is the year they published the "tape", I think.
Congratulations for finding this recording! Thank you a lot!!!
He just nonchalantly planted the idea of Group Theory at the very end of the lecture - Genius!
If only one can make a math playlist of all the best lecturers in the world... may be I will do this.
kindly share that genius playlist here..
Yes pls share playlist
yes please do
@@ManishKumar-xx7ny This guy is doing most of the same. Follow him.
"The Bright Side of Mathematics"
I can't understand the people who disliked this video. I have never liked revisiting a topic, unless it is from Deep Learning. But this! This is a gem that I will revisit my entire life, any given day.
Bored? Pick a topic from this series.
Depressed? Pick a topic from this series.
Need inspiration? Pick a topic from this series.
Time on your hands? Still need a hint?
36:49 it's (1/3)n^3 + (1/2)n^2 + (1/6)n for those wanting to know the exact answer.
Why?
Can you clear one thing up: why is it not 99 squared, or (n-1) squared, for the first step? What is the significance of saying about 100 squared when there is no operation specifically on the first row?
Ajitesh Bhan That is because we only want to know the highest order of the answer, as an estimate of the so-called "cost" or "complexity". Try n from 1 to 10 and you will find n cubed is significantly greater than n squared as n grows bigger and bigger, so we just estimate that the highest-order term is n cubed, which is good enough to know the cost, because the n squared and other terms are so small compared with n cubed.
Ajitesh Bhan The first step is n(n-1) if we do an accurate count, but n squared is fine for the same reason: we just want the approximate order of the cost. It is order 2 obviously, so n squared is okay instead of the more accurate n(n-1).
Still an approximation. You're assuming that the cost of the first step for an n x n matrix is approximately n squared when it's n squared minus n.
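For anyone who wants to check the exact count quoted above, here is a small sketch (my own, not from the lecture) comparing the brute-force sum of squares with the closed form (1/3)n^3 + (1/2)n^2 + (1/6)n:

```python
from fractions import Fraction

# Brute-force sum n^2 + (n-1)^2 + ... + 1^2
def sum_of_squares(n):
    return sum(k * k for k in range(1, n + 1))

# Closed form quoted above: (1/3)n^3 + (1/2)n^2 + (1/6)n,
# computed with exact rational arithmetic to avoid float error
def closed_form(n):
    f = Fraction(n)
    return f**3 / 3 + f**2 / 2 + f / 6

for n in (1, 2, 10, 100):
    assert sum_of_squares(n) == closed_form(n)
```

The cubic term dominates, which is why the lecture keeps only n^3/3.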
I feel so sorry for the younger version of me who didn't know about this great course and the nicest instructor. Poor guy just hated his math classes. Thanks MIT, thanks dear Gilbert Strang :)
26:05 "What did it cost?"
~40:00 "Everything"
I bet 2$
n^2 + (n-1)^2 + (n-2)^2 + ... + 2^2 + 1^2 = n * (n+1) * (2n + 1) / 6. As n becomes "big", this sum approaches n^3 / 3. His approach in the lecture is also good. Plot a graph of y = x^2 and identify the points x = n, x = n-1, x = n-2, ..., x = 1, and you'll find that if n is big enough, the discrete plots start to look more and more like the curve y = x^2, which then allows you to approximate the area under the curve. Again, what you get for a reasonably large n is n^3 / 3. One final thing is, if these operations are performed in a loop, you'll need way more time, because his analysis assumes an operation on an entire row. To achieve this, you would need vectorized code that can operate on the entire row at once. Hope this helped someone :)
Shouldn't it be : n(n+1)/2
2x2 ----> 1 operations
3x3 ------>1+2=3 operations
4x4 -------> 3+3=6 operations
5x5-------->6+4=10 operations
6x6--------->10+5 = 15 operations
7x7---------> 15+6 = 21 operations
Hence 1+2+3+4+5+6+.....n = n(n+1)/2
I considered multiplying a row by a constant and then subtracting it from another row as one operation.
@@mkjav596 If you multiply a row (say of size n) by a constant, then the cost is O(n).
We generally treat it as linear complexity rather than as a single constant-time operation, since for big n (say > 10000) treating the whole row operation as a constant hides real expense.
@@mkjav596 thanks
dude your comment is really helpful to me, may I ask you for more details about the second point of your comment, that loop thing? I'm not getting that point clearly
Another visualization is building a pyramid, starting with a block of base n^2 and height one, then a smaller block of base (n-1)^2 and height one on top, and so on, eventually reaching an overall height of n. As n grows and the point of observation moves away so that the height appears constant, the blocky pyramid becomes increasingly smooth and its volume approaches (1/3)n^3.
Good God these lectures are a perfect addendum when trying to learn this topic from the book alone. Thank you thank you thank you
The explanation is so clear that for a simple (3x3) matrix I can directly write down L given the multiplier at each elimination step, without going through matrix inversion and multiplication.
Here is the process:
(1) Flip the sign of multiplier at each elimination step.
(2) Directly add it in the L matrix in the same position (index) of L.
In the example at 18:00, flip (-2) to get 2, then put 2 in position L[2,1]; flip (-5) to get 5, and put 5 in position L[3,2]. So I got L.
BTW, another way to understand that L is better than E is that:
(1) When producing E, the operations interfere, and a new (implicit) relationship between row1 and row3 is formed. As a result, a new entry (10) appears, at position E[3,1], to reflect this newly created relationship between row1 and row3.
(2) When producing L, the operations are in the right order, so no operations interfere and no implicit relationships are generated. We can just plug the multipliers directly into L, without worrying about missing any entries.
(3) As a side note, the entry value tells you the multiplier, BUT more importantly, the index of each entry tells you the relationship between rows. E.g., in matrix E32, the entry E[3,2] = -5 means the change to row3 comes from row2, with (-5) as the amount. Having an explicit explanation of the role of entry indexes helps a lot in building intuition in the long run.
Thank you Dr. Strang. You are the hero of Linear Algebra!
"When producing L, the operations are in the right order"
I don't understand why that is. The order is determined by the order of the E_i; each factor is just the inverse of the respective E. How can the order be "right" if it was determined by the order of the E_i? There are multiple possible sequences of E_i, after all.
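To make the "multipliers drop straight into L" point concrete, here is a small sketch (assuming NumPy; the 3x3 matrix is just an example, not necessarily the one from the lecture) that records each multiplier, sign unchanged, at its position in L, and then checks A = LU:

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
n = A.shape[0]
U = A.copy()
L = np.eye(n)
for j in range(n - 1):             # eliminate below pivot j
    for i in range(j + 1, n):
        m = U[i, j] / U[j, j]      # the multiplier for this step
        U[i, :] -= m * U[j, :]     # row_i <- row_i - m * row_j
        L[i, j] = m                # store m directly in L, no sign flip
assert np.allclose(L @ U, A)       # A = LU holds
```

Because the multipliers are applied top-down, each one acts on rows that earlier steps no longer touch, which is why no "interference" terms (like the 10 in E) ever appear in L.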
great lectures, much better than any paid courses on Udemy or other sites
i guess I am the lucky one since i've just started to watch this video lecture series today!
This is SO helpful, more than thankful for this upload. I really like this professor too.
The best teacher teaching this material as far as I know. I wonder whether his books are as good as his lectures. May he still have a long and healthy life. :)
I have Strang's Linear Algebra for Everyone. It's a decent book. I prefer Bretscher's Linear Algebra with Applications
Thank you Dr. W. G. Strang, for all this knowledge you have favored us all with.
This man is just a genius in the purest meaning of the word. He is like Neo, he can see matrices everywhere.
An extraordinary teacher. Thank you MIT.
Thank you so much. This is the proper education people should receive.
Ah that explains it. I don’t understand the math but now I finally understand why my socks are getting wet when it rains
Put on socks > put on shoes; thus to invert the process, take off shoes > take off socks 😂 Great analogy!
But do you put on one sock, then one shoe, then the other sock, then the other shoe? Or both socks first, then both shoes? And do you have to take them off in the same order?
Thanks, Dr. Strang. I always enjoy your lectures.
His closet consists of 7 versions of the exact same outfit.
My exact inference
@Emanuel D Underrated comment LMAO
He's a superhero! What do you expect?
@Wilhelm Eley Dayum!
@Wilhelm Eley If he changes his outfit daily with periodicity coprime with 7 (basically "any number"), then over time we would see him wearing all his clothes, thanks to Bézout's identity. u.u We just need to spot the minimal differences in his outfits and, statistically, hope we are right about how many outfits he owns. We are doing our jobs right. u.u
thank god someone uploaded a better audio/video quality version the other one was abysmal
Thanks, MIT, for sharing these kinds of documents with the world. Thanks...
I don't understand how some people find it easy to disrespect teachers. In Bangladesh, we respect our teachers.
when i first ran into linear algebra at university i was so stuck trying to understand even the basic topics of my courses. then after 2-3 years i discovered mr. Strang's lectures and i have to say i am so grateful for this professor, because his teaching approach made me understand the whole concept of linear algebra and i actually found it very interesting for the first time in my life. Plus i finally passed my courses after all these years xD god bless you mr. Strang :)
uuuu the quality of this video is better than the one in the previous playlist
I just repeated watching this video twice and then got the idea of why L is better than E. Thanks Dr. Strang
36:49 the precise answer would be n(n²-1)/3..
thanks, Profesor Gilbert Strang
The way he connects the dots. Wow!
(1/3)*n^3, magically! Hmm! But hey, it makes sense: it is a sum of all the (n-x)^2 terms, and since he treats the pivot index as a continuous variable, the discrete sum turns into an integral, and there you have it, (1/3)n^3.
36:05 to sleep in front of 300k people... a fucking legend
Since all eliminations must be done by computers for large matrices, intuitive approaches fail quickly, so precise, rigorous algorithms are the only practical way to do elimination. Gilbert Strang's style defies the rigorous approach, and does so on purpose, to breathe life into the dull process of elimination.
Wonderful explanation, prof. Gilbert. Thank you!
When he's going over the number of operations to "solve" a matrix, what exactly does he mean by "solve" ? Finding E, L, I, or something else?
EDIT: Aha, at 35:17 he mentions it's going from A to U.
Finally! Where did you end up finding this??? Cached in someone's IE?
They restored it by training a neural net on all the other videos together and feeding it the low quality one as input to transform
@@nickpayne4724 no way this is the work of a neural net.
A 100x100 matrix will take at most 4950 elimination steps to reach triangular form, if and only if no needed elements equal 0. My reasoning: a 100x100 matrix has 10000 entries, of which the 100 pivots are not to be changed, leaving 10000 - 100 = 9900 entries. To make an upper or a lower triangular matrix we only have to zero out half of them, skipping the pivots: (1/2) x 9900 = 4950 entries, so 4950 steps 😎
Didn't know there were videos of his classes. I have been learning from his books at my school.
Great video! I love this series. In this lecture, Dr. Strang briefly mentions that the cost of operations for the det(A) will be (n!) He also shows us how the cost of getting A into upper triangular form U will be (1/3)n^3. But from Lecture 2 we know that one way of finding det(A) is to get A into U and then simply find the product of all n pivots. So it seems like the cost for finding det(A) would just be a bit more than (1/3)n^3, perhaps (1/3)n^3 + n. I must be missing something here; any thoughts?
If I understand you correctly, I think the new term n is ignored because it's not significant compared to n^3 as n goes to infinity. I'm no expert and I'm not even sure if you are right. But if you are, and you are wondering why the n is ignored compared to n^3, i think this is the reason.
Weston Loucks you're right. When you calculate the upper triangular form and then multiply the pivots, the work scales with n^3.
When he says the effort is n!, he is referring to a calculation with the Laplace (cofactor) formula.
plug in some numbers and see how rational the outcomes seem
The lower term for large n almost vanishes so it isn't significant in Big O notation stuff. n^3 will dominate as n goes to infinity. The n! comes from a popular determinant based algorithm.
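As a sanity check on the two cost figures discussed in this thread, here is a sketch (my own, assuming NumPy) of both routes to the determinant: recursive cofactor expansion, which does on the order of n! work, and elimination to U followed by multiplying the pivots, which takes about n^3/3 operations:

```python
import numpy as np

# Cofactor (Laplace) expansion along the first row: ~n! work.
def det_cofactor(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_cofactor(minor)
    return total

# Eliminate to upper triangular U, then multiply the pivots: ~n^3/3 work.
# Assumes no zero pivots turn up (no row exchanges needed).
def det_via_pivots(M):
    U = np.array(M, dtype=float)
    n = U.shape[0]
    for j in range(n - 1):
        for i in range(j + 1, n):
            U[i, :] -= (U[i, j] / U[j, j]) * U[j, :]
    return float(np.prod(np.diag(U)))

A = [[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]]
assert abs(det_cofactor(A) - det_via_pivots(A)) < 1e-9
```

Both agree on small matrices; the difference is purely in how the work scales, which is why the lower-order term mentioned above never matters.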
Yeah, I was thinking about it, and it occurred to me that, say we have a 100 by 100 matrix. If we count a multiplication and a subtraction as two separate operations, then to reach a state where the first column has only one nonzero element (our pivot in row 1), we do 100 subtractions for each of the 99 rows, as the first row remains unchanged; and since we multiply all the elements of a row by the multiplier, we also have 100 multiplications, 99 times. So the answer should be about 2*100*99. Generalizing to n, that is 2*n*(n-1), and so the total operation count is 2*[(1/3)(n^3) - (1/2)(n^2)].
PS: Taking the discreteness into account, the total number of operations = 2*[n(n+1)(n-1)/3].
Thanks for this one ! Was awaiting it for a long time :D
Can anyone explain why the E21 at 11:16 is easy to invert? Did he teach that skill in the previous lecture, or is it covered in the readings?
If there is a -4, just put a 4 in the same place. It is related to the ways of understanding matrix multiplication. Start with EA = U; this means you are doing one step of elimination to A, say step E21. The step is: row 2 minus 4 times row 1 (of A). This is what the second row of E, i.e., (-4, 1), means. Now you want to cancel this step E to get A = LU. You need to add the 4 times row 1 back to row 2. So the new (4, 1) means: take the changed row 2 and add 4 times row 1 back.
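A tiny check of the sign-flip rule (assuming NumPy; E21 here is the "row 2 minus 4 times row 1" step described in the reply above):

```python
import numpy as np

E21 = np.array([[1., 0., 0.],
                [-4., 1., 0.],    # row2 <- row2 - 4*row1
                [0., 0., 1.]])
E21_inv = np.array([[1., 0., 0.],
                    [4., 1., 0.], # row2 <- row2 + 4*row1 undoes it
                    [0., 0., 1.]])
# The sign-flipped matrix really is the inverse:
assert np.allclose(E21 @ E21_inv, np.eye(3))
assert np.allclose(np.linalg.inv(E21), E21_inv)
```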
This professor is awesome!
Correct me if I'm wrong.
I was following the lecture series in order but I don't think transpose was taught in any of the previous lectures.
Lol u are watching MIT courseware...they expect u would be knowing the basics..they won't spoon feed u everything!!!!
This lecture on factorization is very helpful; however, I don't fully understand why this concept is important.
Can someone please explain how he got 100^2 and not 100 x 2 (multiply + subtract) operations at 33:22?
If you scroll down you will find someone asked the same question.
"It takes 100 operations to make (2,1) into 0, because the rows are 100 deep. Each of those elements changed as well.
Then it’s done 98 more times to the rest of the rows."
I would say that the most efficient way of solving Ax=b would be solving A^T A x = A^T b (the least squares problem) using the CG algorithm, due to my personal amazement with the CG method. Pretty sure that's not actually the case, but this method gives me chills. Hahaha
I don't understand why the cost is n^2 to zero out the first column. A single operation is a multiplication and a subtraction, so I need to multiply the first row by some constant and subtract it from the 2nd row to get a zero. It takes 1 operation to make (2,1) zero, so 99 or 100 operations in total to clear the first column except the pivot. Help me out here please
It seems that every update of every number in every row is counted as 1 operation.
When you change (2,1) to zero, you also change (2,2)...(2,100), because you multiply the whole of row 1 by some constant and subtract it from row 2.
So you have already used 100 operations by the time row 2 is fixed; then row 3 costs 100, row 4 costs 100... up to row 100.
That's why you have already used about 100 squared operations just to fix the first column of A.
It's just my guess; in the video he said "a multiply plus a subtraction of a row" is defined as "1 operation", which confused me as well. Hope another smart person can answer us.
I got the same question. Why isn't it just (99+98+97+...+1)? Where does the ^2 come from?
Thanks a lot. I got it. You are correct.
Thanks a lot!
Gilbert said the cost comes from the number of multiplications and subtractions you do. When you multiply row 1 by some number and subtract it from row 2 to make the (2,1) component zero, you have to subtract every element of the scaled first row from the 2nd row, so that step takes about 100 operations. The remaining 98 rows take the same steps. Therefore the count is 100 + 100 + ... (= 100*99), and the last 100 comes from the 100 multiplications of row 1 needed to clear the first column below the (1,1) component. So (100*99) + 100 = 100*100 ^^
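The counting in this thread can be instrumented directly. A sketch (my own, not from the lecture) that charges one "operation" (a multiply plus a subtract) per matrix entry touched, the way the lecture does:

```python
# Charge one "operation" (a multiply plus a subtract) per entry updated.
def elimination_ops(n):
    ops = 0
    for j in range(n - 1):            # pivot column j (0-based)
        active = n - j                # entries still in play in each row
        rows_below = n - 1 - j        # rows that get a zero in column j
        ops += rows_below * active    # one op per entry updated
    return ops

n = 100
exact = n * (n * n - 1) // 3          # closed form n(n^2 - 1)/3
assert elimination_ops(n) == exact    # 333300, close to n^3/3 = 333333
```

The first step alone (j = 0) contributes 100 * 99 ≈ 100^2 operations, which is exactly the number debated in this thread.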
Man the chalk glides so smoothly across the black board, when I give tutorials at my university it's usually a huge pain in the ass to draw stuff on there because it just feels like shit haha
I didn't understand on the first viewing, nor the 2nd, but on the 3rd or even 4th time, wow, CRAZY approach!
The educational lead up to 40:07 "We really have discussed the most fundamental algorithm for a system of equations."
This Python program shows that the running sum of squares grows like n cubed divided by 3:

import matplotlib.pyplot as plt

def r(n):
    # sum of squares 1^2 + 2^2 + ... + n^2
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

def y_axes(n):
    # partial sums r(1), r(2), ..., r(n)
    return [r(i) for i in range(1, n + 1)]

plt.plot(list(range(1, 1001)), y_axes(1000))
plt.show()
So glad there's a solution to the Sox Shoe Primacy Dilemma.
A = LU
Group. They are a nice little GROUP.
Can someone explain why the computatinal expense is roughly 100^2 for the first step? I thought it would be roughly proportional to n = 100
because once you eliminate a single element from a row, that whole row changes; since each row has 100 elements, all 100 of them are updated in a single row operation. So in the first step, when you eliminate the first element of all 99 rows below the first row, a total of 100*99 elements are changed, which is roughly taken as 100^2.
@@raqeebkhan8678 ooookey ty. so as i understand we calculated expense for per number, not for per row.
@@raqeebkhan8678 thanks that confused me too
I really find it funny how towards the end of the lecture most students can't wait to go..😂
At 24:20 when he says that the multiplier goes directly into L, he means the negative right? If you keep a track of the operation on the left side, it inverts while bringing it to the right side.
He defines operations as multiplication + subtractions so by definition the inverses have the positive multipliers.
Yes in fact, the shoes-socks rule stands well in this science
At 39:00
The cost for b is not EXACTLY n^2 operations as he says in the video.
Yes, there are n elements, and we assume that all the elements are nonzero from the beginning. But that doesn't mean that EVERY element gets changed. For the count to be exactly n^2 we must ALSO assume there are no 1's in the pivot positions from the beginning. For example, if there were 1's in all the pivot positions from the start, the cost for b would be n^2 - 100 (unlikely, but as an example). So the cost for b is not exactly, but CLOSE to, n^2.
Why is it n^2 to obtain a column of n zeros
how is the number of operations n^2.
2x2 ----> 1 operations
3x3 ------>1+2=3 operations
4x4 -------> 3+3=6 operations
5x5-------->6+4=10 operations
6x6--------->10+5 = 15 operations
7x7---------> 15+6 = 21 operations
So it should be 1+2+3+4+5+6+.....n = n(n+1)/2
Correct me if I am wrong
4:57 "if i transpose these guys, that product, then again, ...": why did he jump suddenly to this theorem without any definition of transpose or a logical derivation? Did I miss something?
I finally got stuck at this video. I guess I haven't mastered what was taught in the last lecture. I will revise it by solving the assigned problem set like the MIT students. I cannot demand the same progress when I don't practice as much as the MIT students do.
He proves that you multiply in reverse order with inverses. He doesn't prove that with transpose. I want to work it out - I think it has to do with the fact that you multiply row * matrix, you don't multiply column * matrix. The rows of A-transpose are the columns of A. So you have to reverse order to multiply row * matrix.
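For completeness, the transpose rule the commenter wants to work out follows from a standard entry-by-entry derivation (not given in the lecture):

```latex
\[
\bigl[(AB)^{T}\bigr]_{ij} = (AB)_{ji}
  = \sum_{k} A_{jk}\,B_{ki}
  = \sum_{k} (B^{T})_{ik}\,(A^{T})_{kj}
  = (B^{T}A^{T})_{ij},
\qquad \text{hence } (AB)^{T} = B^{T}A^{T}.
\]
```

The order reverses for exactly the reason the commenter suspects: the rows of the transpose are the columns of the original, so the roles of the two factors swap in the inner sum.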
at 32:35 => Shouldn't the first cost be 99*2 instead of 100^2, because there are 99 rows below the first one? For pivoting each row below, you multiply the first row by some constant and then subtract it from the row you are pivoting. So aren't we essentially ending up with 2 operations per row for those 99 rows, hence 99*2 instead of 100^2?
I think I have figured it out. It would be like for each element in any row below first one, you would be getting (that element - multiplier*corresponding first row element) and this is 1 operation. So, you have 99 rows below first row each having 100 elements, so, it should be 100*99 operations, which Prof. Strang writes as *about 100*.
Thanks
@@bridge5189 But he said that after 1st step 1st row(which is obviously not changed) , 2nd row and 1st column only these are clean. So, for this I think he only meant by cleaning up the 1st column (just like Gauss), So, in that case the operations for just 1st step would only be 99
@@anuragagarwal5480 Elimination steps are nothing but writing linear combinations of the given system of algebraic equations by multiplying by some constants and then adding/subtracting them from each other.
So, when you do first elimination step for bringing in zero at second row's first column by subtracting the second row from some constant times the first row, you would have to do the same operation for the whole second row. Thus, you would get n operations, one for each element in the second row. Here, n = 100.
Similarly, you would keep on doing this for bringing zero in the first element of all the rows beneath the second row, which would be 98 in total.
Hence, you would end up doing 100*99 operations in total.
@@bridge5189 Yes, but we also don't want to make the 2nd row's 2nd column (and the corresponding entries in further rows) zero, since we want our diagonal elements to be nonzero.
2. So, if the total number of operations is 100*99 to finish the first column, why does the professor take another (n-1)² + (n-2)² + ... + 2² + 1² ≈ n(n+1)(2n+1)/6 operations to complete the whole elimination process?
The cost of the first elimination step is what we approximate as about n^2 (100^2).
Now, what we would need in our matrix is bringing zero in the second column starting from 3rd row to 100th row. This would be just like bringing zeros in the first column from the second row to last row in any (n-1)×(n-1) matrix. So, for this we can say we would need about (n-1)^2 operations.
So, we have 1^2 + 2^2 + ...... + n^2 = n(n+1)(2n+1)/6
Tnx to MIT for this kind of stuff.
hmm, is there a companion book we should follow? I'm puzzled, as I don't think elementary matrices or upper/lower triangular matrices were introduced in previous lectures. I feel this is not beginner friendly.
We recommend you view the course materials along with the videos at: ocw.mit.edu/courses/18-06sc-linear-algebra-fall-2011. For further study, there are suggested readings in Professor Strang’s textbook (both the 4th and 5th editions):
Strang, Gilbert. Introduction to Linear Algebra. 4th ed. Wellesley, MA: Wellesley-Cambridge Press, February 2009. ISBN: 9780980232714
Strang, Gilbert. Introduction to Linear Algebra. 5th ed. Wellesley, MA: Wellesley-Cambridge Press, February 2016. ISBN: 9780980232776
Best wishes on your studies!
@@mitocw Thank you much. Already looking in the resources and things are making more sense. Thank you for taking the time.
How did transpose suddenly come into the picture?
Was about to ask the same thing!
thank you mit for this
If the homeostasis and the parameters are the same, take it as 7 in base five and base ten and solve the problem, then derive the problem from the contrasting number and solve it. Enter the problem into a decision support system and solve it.
lower and upper
Gilbert Strang a legend
June 15/2019-Very good lecture!
This actually makes sense now! Thank you!
@32:20 can anyone help me undertand why does it cost 100^2 instead of 99 operations?
Because, supposing we don't have SIMD, you have to operate on each number one by one
Thanks, Dr. Strang
At 10:20 how do you get elimination matrix E21?
Why does a 4x4 matrix have 24 permutation matrices when a 3x3 matrix has 6? I don't get how they calculated it so quickly.
No need to search further, that makes perfect sense! Thank you :)
same question before seeing this solution, thanks!
Think about what happens to the entries in the matrix every time you do a row exchange. You can never get more than a single 1 in each row, since rows are only ever exchanged.
In other words, every exchange is a new ordering of the rows of 1's, leaving no COLUMNS with more than one 1 either, since no extra 1's are added to any rows - and the starting point was the identity matrix.
This is what the permutations are all about. Think of it this way: there are four ways to put a 1 in the first row you choose, but then only three ways to put a 1 in the next row (since you cannot get any 1's above or below each other), then two ways for the third row, and for the last row only one position is left in which to complete the ordering.
Altogether this makes 4x3x2x1 ways to choose the order of the 1's, or n! ways to reorder a given n x n matrix. This type of rearranging is called "permuting", and every instance of an ordering is a single permutation.
I'm a year late, but here comes a somewhat intuitive explanation. Whenever you are trying to count how many ways there are to rearrange a set of N things, think of it like this: there are N ways to choose the 1st object (a row here), then N-1 options are left for the 2nd object, and so on until only one object is left for the last slot. Thus we have a product of N * (N-1) * (N-2) * ... * 1, or, simply put, N!. Hope that helps.
4P4 = 24
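Both the n! count and the inverse-equals-transpose fact can be checked by brute force. A sketch (assuming NumPy), where each permutation matrix is built by reordering the rows of the identity:

```python
import numpy as np
from itertools import permutations

# All n x n permutation matrices: the identity with its rows reordered.
def perm_matrices(n):
    I = np.eye(n)
    return [I[list(p)] for p in permutations(range(n))]

for n in (3, 4):
    Ps = perm_matrices(n)
    assert len(Ps) == np.math.factorial(n) if hasattr(np, "math") else True
    for P in Ps:
        # P is orthogonal: its inverse is its transpose.
        assert np.allclose(P @ P.T, np.eye(n))
```

For n = 3 there are 6 such matrices and for n = 4 there are 24, matching the thread above.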
Can anyone explain to me why the answer for cost of B is $n^2$ around 40:00?
So we know that b is an n by 1 vector (if you don't remember, go to lecture 2 around minute 15 where he talks about it), and we just apply to b the same row operations we applied to the rows of A, each costing O(1) per entry, like adding a multiple of one entry to another. Since over the whole elimination each of the n entries of b gets updated on the order of n times, the total cost is on the order of n^2.
Does this mean the cost of B is equal to the cost of doing an inverse matrices? Because right before he explained the cost of matrix, he was saying why the inverse matrices is better to do.
But it's not EXACTLY n^2 operations like he says in the video.
Yes, there are n elements, and we assume that all the elements are nonzero from the beginning. But that doesn't mean that EVERY element gets changed. For the count to be exactly n^2 we must ALSO assume there are no 1's in the pivot positions from the beginning. For example, if there were 1's in all the pivot positions, the cost for b would be n^2 - 100 (unlikely, but as an example). So it's not exactly, but CLOSE to, n^2.
Thanks Gilbert!
18:52 He says "I'm subtracting rows from lower rows" when we are multiplying the two matrices.
I was not able to follow this; I was only able to multiply the matrices directly using combinations of columns and rows.
That would give n^3, which is off by a factor of 3. Take into account that the elementary matrices are special matrices with plenty of 1s and 0s, so his way is more precise.
Why is space a subspace of itself? A container is not an object inside of itself, and the components of an object are not the sum of its components.
39:35, the cost of the columns: is it about n^2 or (1/2)n^2? I think it should be about (1/2)n^2.
Yes, I think so too. And to be precise it would be n*(n-1)/2
love this course
Why does he go directly into LU factorization? Shouldn't we first learn how to solve Ax = b or Ax = 0?
sorry, I'm confused at 32:22: it says 100^2, but I think it should be 100. Why isn't it 100? Could anyone explain that to me? thx a lot
kinda late, but I'm pretty sure it is because we want the first pivot. When we subtract a multiple of the pivot row from another row, the other numbers in that row (the entries in the other columns) have to change too, so you must update every other number in the row. Anyway, it is cool that you are learning during a time like this :)
Thanks a Lot, Sir & MIT for bringing out these excellent lecture series on Linear Algebra. May I know where one can find the corresponding problems & assignments for these lectures. Thanks.
The course materials are on MIT OpenCourseWare at: ocw.mit.edu/18-06S05. We also recommend you look at the OCW Scholar version of the course. It has more materials to help self-learners out: ocw.mit.edu/18-06SCF11. Best wishes on your studies!
@@mitocw Thanks for sharing the requested course contents