7. Solving Ax = 0: Pivot Variables, Special Solutions

  • Published 20 Jan 2025

COMMENTS • 352

  • @markptak5269
    @markptak5269 11 years ago +640

    It's kind of cool and odd that someone who has taught this subject for so long can keep it so fresh... like he's stumbling across the Null Space Matrix for the first time. Thank you Dr. Strang and thank you MITOCW.

  • @Gorbleray
    @Gorbleray 1 year ago +40

    This is more than a lecture on linear algebra, it's a demo on perfect teaching presentation. His way of pinpointing each question along the way that our brains need to ask and then solve is truly beautiful.

  • @MrPink029
    @MrPink029 2 years ago +52

    Every student should have at least one professor like Prof Strang. Motivating, illuminating and such great energy. I truly appreciate these classes. Thank you!

  • @EclecticSceptic
    @EclecticSceptic 13 years ago +94

    This guy is giving me such a good intuitive understanding of linear algebra, rather than just presenting seemingly semi-random algorithms without explanation.

  • @yanshudu9370
    @yanshudu9370 2 years ago +63

    Conclusion: 1. To solve Ax = 0, i.e., to find the null space of A, we can use the reduced row echelon form (rref) method.
    2. The rank of A equals the number of pivots (pivot rows) after row reduction; denote it r.
    The number of columns of A equals the number of variables; denote it n.
    So n - r is the number of free variables.
    3. For the solutions of Ax = 0: if the reduced row echelon form of A consists of [T F], the solution matrix is [-F T] written as a column (-F stacked on top of T), where T stands for the identity matrix and F stands for the free-column matrix.
    The solution matrix has shape n x (n - r). (A small SymPy check of this recipe follows this thread.)

    • @muhammadwahajkhalil6577
      @muhammadwahajkhalil6577 4 months ago

      Bro, I think it's I instead of T for the identity matrix.

    • @hariprasath7050
      @hariprasath7050 3 months ago

      @@muhammadwahajkhalil6577 It's just notation; we can use any letter for the identity matrix for our own convenience. It's not mandatory to stick with I.
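
A minimal SymPy sketch of the recipe summarized in the thread above (the example matrix and the use of SymPy are assumptions for illustration, not taken from the lecture): rref() reports the pivot columns, the rank is the number of pivots, and nullspace() returns the n - r special solutions.

```python
import sympy as sp

# Illustrative matrix with 4 columns (n = 4) and 2 pivots (r = 2)
A = sp.Matrix([[1, 2, 2, 2],
               [2, 4, 6, 8],
               [3, 6, 8, 10]])

R, pivot_cols = A.rref()        # reduced row echelon form and pivot column indices
n = A.cols                      # number of variables
r = len(pivot_cols)             # rank = number of pivots
print("rank r =", r, "free variables =", n - r)

# Each special solution sets one free variable to 1 and the others to 0;
# nullspace() returns exactly these n - r columns.
for s in A.nullspace():
    print(s.T)
    assert A * s == sp.zeros(A.rows, 1)   # every special solution solves Ax = 0
```

Stacking those special solutions as columns gives the n x (n - r) solution matrix mentioned in the comment.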

  • @thelastcipher9135
    @thelastcipher9135 8 years ago +192

    long live professor strang!

  • @tylerhuttenlocher5481
    @tylerhuttenlocher5481 6 years ago +174

    I wonder if the couple at 15:03 in the second row is still together.

    • @npundir29
      @npundir29 4 years ago +5

      lol

    • @elliotpolinsky9422
      @elliotpolinsky9422 4 years ago +8

      i was noticing them

    • @crocopie
      @crocopie 4 years ago +9

      Why don't we ask them?

    • @AkinduDasanayake
      @AkinduDasanayake 4 years ago +22

      Makes you feel like you're in the classroom even more...

    • @peggy767
      @peggy767 4 years ago +7

      Exactly haha they’re in most of the lectures

  • @kellypainter7625
    @kellypainter7625 7 years ago +41

    I took linear algebra 30 years ago and I thought it was pretty hard at the time. Prof. Strang makes it easy!

  • @youweiqin2416
    @youweiqin2416 9 years ago +173

    MIT has high-quality blackboards

    • @iwtwb8
      @iwtwb8 8 years ago +19

      +YOUWEI QIN I thought the same thing. There's just like infinite sliding blackboards stacked on top of each other :)

    • @dostoguven
      @dostoguven 7 years ago +2

      quantity?

    • @mainakbiswas2584
      @mainakbiswas2584 6 years ago +4

      MIT has everything of a very high quality! It's such a pity that you only noticed the blackboards!

    • @danwu7275
      @danwu7275 6 years ago +32

      Agreed, nobody mentions the chalk.

  • @abdulghanialmasri5550
    @abdulghanialmasri5550 10 months ago +2

    No way anyone can explain linear algebra like Professor Strang!

  • @quirkyquester
    @quirkyquester 4 years ago +8

    Learning linear algebra with you is like watching movies: it's fascinating, exciting, convincing and fun. Thank you so much, Professor Strang! I'm so lucky to be learning this subject with you!

    • @НиколайТодоров-и9т
      @НиколайТодоров-и9т 2 years ago +2

      Indeed! It's like a story - with characters, and plot, and plot twists... Mr. Strang is a shining example of what education should be - accessible, engaging and with a sense of discovery!

    • @judepope6196
      @judepope6196 2 years ago +1

      Yes! this is what I felt as I was watching! And I felt that I was as happy as I would be watching a favourite movie.

  • @gizmopossible
    @gizmopossible 11 years ago +47

    Does anyone else feel a nice smooth buttery feeling when the chalk glides against the board?

  • @eswyatt
    @eswyatt 2 years ago +13

    For anyone confused by the block matrix explanation --- the I and F and blocks of zeros --- hang in there until lecture 8 where it all becomes clearer. And yes, F may be interspersed with the I, and, contrary to the top rated answer on Stack Exchange, this cannot be remedied with permutation matrices. Basically it's just a visual cue that allows you to pluck out the relevant numbers.

    • @yuriyroman7132
      @yuriyroman7132 1 year ago +5

      This is exactly what got me confused at first. I really appreciate professor Strang and MIT for making this gem of a lecture available online, but I felt he presented a few tricks like the block matrix one for finding the spanning set of the null space of a linear map (a linearly independent one*, too, because of that I block) in a rather hand-wavy manner.
      Perhaps a better way to visualize it is as follows:
      1. Draw the RREF matrix as staircases with pivots, preferably with interlaced free columns for generality.
      2. If there are any all-zero rows at the bottom of the RREF matrix, trim off that part.
      3. Pluck out a free column from the staircase, then turn it sideways (90 degrees counterclockwise.)
      4. Multiply each component in the free column by -1, to reverse their signs. (This is for building the -F block)
      5. Insert the "selector" (coefficient of 1) component at the same index as the index of the extracted free column in the RREF matrix.
      6. Insert "N/A" (coefficient of 0) components at the same indices as the indices of the rest of the free columns in the RREF matrix.
      7. Now turn the free column back to its original position (90 degrees clockwise)
      8. Put the finished column in the "special solutions" matrix.
      9. Do the same with the rest of the free columns in the RREF.
      10. In the special cases where F is NOT interspersed with the I in the original RREF matrix, what you get is a matrix with the -F block stacked on top of the I block.
      P.S. The point of plucking out a free column and then laying it on its side is to make the step 5 and 6 easier to visualize.
      *If only one column vector has a non-zero entry at a specific row index in a set of columns, then there is no linear combination of the rest of columns in the set that is equal to that column. That is why the special solutions matrix built this way always contains a linearly independent set of columns.

    • @mistergooseman7047
      @mistergooseman7047 1 year ago +3

      This really bothered me. The block presentation wasn't exactly blocked. But I'll stick with it.

    • @eulerappeareth
      @eulerappeareth 8 months ago

      Yes, I'm a bit confused about what to do with matrices whose rref is something like
      [ 1 * 0 * 0 ]
      [ 0 0 1 * 0 ]
      [ 0 0 0 0 1 ]
      They are clearly not [I F].
      I will check this answer. (A sketch of this interleaved case follows this thread.)

    • @YoussefSherief-z6j
      @YoussefSherief-z6j 4 months ago

      @@eulerappeareth What I've noticed is that switching columns 2 and 3 in this scenario's rref of A (compared to the usual [I F] form) causes ROWS 2 and 3 to be switched in the special solutions. So since the solutions would be [-2 0 1 0] and [2 -2 0 1] if the rref were just [I F], we switch rows 2 and 3 and get [-2 1 0 0] and [2 0 -2 1] instead. Pretty late, but I hope this helps somebody.

    • @cutestbear3327
      @cutestbear3327 1 month ago

      thank you for the heads up my man, thnx~
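
A sketch of the interleaved case asked about above (the specific rref below is made up: the starred entries are replaced by 3, 5 and 7 purely for illustration). The recipe still works when free columns sit between pivot columns; the -F entries land in the pivot-variable positions and the 1/0 selector entries land in the free-variable positions, which is what SymPy's nullspace() produces.

```python
import sympy as sp

# An rref whose pivot columns are 1, 3, 5 and whose free columns 2 and 4 are interleaved
R = sp.Matrix([[1, 3, 0, 5, 0],
               [0, 0, 1, 7, 0],
               [0, 0, 0, 0, 1]])

_, pivots = R.rref()                    # pivots = (0, 2, 4); columns 1 and 3 are free
for s in R.nullspace():
    print(s.T)                          # e.g. [-3, 1, 0, 0, 0] and [-5, 0, -7, 1, 0]
    assert R * s == sp.zeros(R.rows, 1)
```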

  • @christoskettenis880
    @christoskettenis880 1 year ago +2

    I was studying for my engineering degree when this was filmed. I just wish I'd had professors like Dr. Strang and Dr. Lewin. Clear-cut and practical explanations of the most abstract branch of mathematics!

  • @arsalanwani2436
    @arsalanwani2436 3 years ago +7

    I have never seen a teacher like you... your way of teaching and making concepts clear to students is amazing, sir.

  • @rajarshighoshal6256
    @rajarshighoshal6256 3 years ago +5

    This is the best way possible to describe the rank of a matrix! For so long I have struggled with this concept, and now it feels so rudimentary, so basic! Thank you, Professor Strang, for such a fantastic way of explaining things.

  • @shinyeong7188
    @shinyeong7188 4 years ago +7

    I have my linear algebra class early in the morning, and I never make it to class.
    I was frustrated trying to catch up on all this stuff, but watching these videos is helping me so much.
    Sincere thanks to Professor Strang and this channel!

  • @JoshuaJEMarin
    @JoshuaJEMarin 12 years ago +10

    Seriously the best thing that I could have found on the Internet. Too bad my final is in 4 days. Naturally I will be staying on youtube for quite a few hours this week

  • @17mjankowski
    @17mjankowski 4 years ago +22

    This guy has figured out how to access the 12th dimension. Infinite chalkboards; some crazy wizardry shit.

  • @genidor
    @genidor 5 years ago +3

    W. Gilbert Strang, you are a gem of a teacher! Thank you so very much!!

  • @michaelmolter6180
    @michaelmolter6180 3 years ago +4

    There's a lot of magic going on here that Dr. Strang doesn't state explicitly. It makes this lecture worth a couple of listen-throughs.

    • @sahil0094
      @sahil0094 2 years ago +3

      Definitely more than a couple. I don't know why people are saying it's magical.

  • @yufanlin352
    @yufanlin352 5 years ago +18

    I literally want to cry after watching this. Thank you so much for saving my ass.

  • @oilotnoM
    @oilotnoM 14 years ago +25

    It's fun pausing the video and trying to figure out how the process ends before he's shown it ...

  • @naterojas9272
    @naterojas9272 5 years ago +5

    It is mind blowing how elegant linear algebra really is

  • @xiangzhang8508
    @xiangzhang8508 8 years ago +432

    infinite blackboards...

  • @phatimakhatoon9835
    @phatimakhatoon9835 6 years ago +4

    MIT has done a wonderful job of giving us quality education for free. Thank you very much!

  • @citiblocsMaster
    @citiblocsMaster 7 years ago +109

    When you think there are no more sliding boards: 33:16

    • @샤페인
      @샤페인 4 years ago +2

      Truly agree. It's quite impressive how MIT has so many sliding boards... the # of blackboards at MIT is INFINITE. LOL

  • @abhinavasthana20061
    @abhinavasthana20061 5 years ago +1

    Love you Prof. Strang.....I am beginning to fall in love with Linear Algebra....You are a genius Prof. Strang....

  • @readap427
    @readap427 8 years ago +59

    The last thing he wrote at the end of the lecture was "FIN"... like the end of an old-fashioned French film.
    I thought it was funny.

    • @lucasm4299
      @lucasm4299 7 years ago

      readap427
      Or Spanish film.
      Both from Latin

    • @Antonio-gn6iq
      @Antonio-gn6iq 5 years ago +1

      fin means the end

  • @blackpepper9828
    @blackpepper9828 4 years ago +4

    For those like me who did not get the free columns and pivot columns business at first:
    First, note that the free columns are linear combinations of the pivot columns (you can do some scribbling to confirm this).
    This gives some intuition as to why we can let the free columns be scaled freely by any number and then solve for the scalars of the pivot columns, so that adding all these scaled columns gives the zero vector.
    Pivot variables and free variables are the names for those respective scalars.
    I hope this cleared some doubts... wish you the best of luck.
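
A quick numeric check of the first point (a sketch; the example matrix is assumed for illustration): each free column of A is exactly the combination of A's pivot columns that R records in the corresponding column, because row operations preserve the relationships among columns.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 2],
               [2, 4, 6, 8],
               [3, 6, 8, 10]])
R, pivots = A.rref()
free = [j for j in range(A.cols) if j not in pivots]

A_pivot = A.extract(list(range(A.rows)), list(pivots))   # A restricted to its pivot columns
for j in free:
    coeffs = R[:len(pivots), j]          # entries of R's free column in the pivot rows
    assert A_pivot * coeffs == A[:, j]   # the free column of A is that combination of pivot columns
```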

  • @jakeaus
    @jakeaus 5 years ago +8

    36:20 "I quit without trying, I shouldn't have done that." So true

    • @miketh4434
      @miketh4434 3 years ago +1

      hahahahahah me with linear algebra 2 years ago. will get a 10 now easy

  • @warnford
    @warnford 8 years ago +5

    Enjoying these lectures tremendously - can't say I expected to find linear algebra this interesting.

  • @georgesadler7830
    @georgesadler7830 3 years ago

    From watching this lecture, Dr. Strang continues to strengthen my knowledge of linear algebra. He makes the subject look so simple.

  • @AryanPatel-wb5tp
    @AryanPatel-wb5tp 8 months ago

    Great Lecturer ! Never has learning linear algebra been so interesting and well explained !

  • @rishabhdwivedi2516
    @rishabhdwivedi2516 14 days ago

    this is such a cool video..... thanks for making it possible for his teachings to be recorded and available to public.... 🙏🙏🙏🙏

  • @animeshguha9649
    @animeshguha9649 1 year ago +2

    I'm in love with these lecs

  • @avidreader100
    @avidreader100 4 years ago +9

    The second part of the lecture, going from Ux = 0 to Rx = 0 and further on to RN = 0, and proposing what N is, seemed to be full of leaps that I could not follow completely. I have done an ML course and a neural network course without a deeper knowledge of linear algebra. I thought of filling that gap. The rabbit hole seems to go deep, and again I seem to be taking a few magical things as axiomatic. I will persist. If I cannot get it from Prof. Strang, I may not get it at all. Hope the pennies will drop as I move forward, and I will get rich!

    • @Upgradezz
      @Upgradezz 3 years ago +1

      Any updates?

    • @sahil0094
      @sahil0094 2 years ago +1

      same issue with me

    • @toanvo2829
      @toanvo2829 2 years ago +5

      Dr. Strang wanted us to realize that the reduced row echelon form of the original matrix consists of the identity matrix (when looking only at the pivot columns) and some other matrix, which he called F, when looking only at the free columns.
      He generalized this notion by defining the matrix R using the placeholders I and F for the identity matrix (I) and the matrix formed by the free columns (F), with possible rows of 0s beneath. Since he was generalizing, he wrote R as a block matrix (where I and F represent matrices).
      We know I has dimensions r x r (since I is the identity matrix formed by the pivot columns, and the number of pivot columns = number of pivot variables = rank = r).
      We know F has dimensions n - r x n - r (since F is the matrix formed by the free columns, and we know there are n - r free columns).
      So our original Ax = 0 can be rewritten -- throughout the whole process of his lecture -- as Rx = 0. He then wonders what the solution of this matrix equation would be.
      Well, since he defined R generally using I and F, he was unintentionally (I am assuming, given how pleasantly surprised he sounded) treating R as a block matrix, so he decided to find all the special solutions at once, in what he called a null space matrix N.
      This N would solve the Rx = 0 equation, i.e., would make RN = 0 true.
      Well, knowing how matrix multiplication works, N needs to be a matrix that, when multiplied with the row(s) of R, produces 0's.
      Since the first block row of R is [I F], what linear combination of I and F would equal 0? We would need to multiply I by -F, and F by I (because then we'd have -F + F = 0).
      That is how to look at it purely algorithmically. Dr. Strang actually uses wonderful logic: if the first block row of R = [I F], then of course we want I in the free-variable block (the second block of N) in order to preserve it, and to cancel it out we need -F in the identity block (the first block of N) in order to cancel the F in the free-variable block of R.
      This is how he knows the null space matrix N is always going to be [-F I] (obviously written as a column, but I can't type that out in this comment).
      He then goes further to show us how this actually is not surprising. Going back to Rx = 0, remember that R (as a block matrix) = [I F], and x = [x_pivot, x_free] (as a column matrix).
      If we actually did the matrix multiplication we would have:
      I * x_pivot + F * x_free = 0. Solving for x_pivot we get:
      x_pivot = -F * x_free
      So, if in our solution we make our free variables the identity (remember when Dr. Strang said "hey, these are free variables. Let's make them whatever we want. Let's make x_2 = 1 and x_4 = 0" and later "hey, let's make x_2 = 0 and x_4 = 1"), then by the above equation, of course x_pivot HAS to be -F. (A small symbolic check of this follows the thread.)

    • @attilakun7850
      @attilakun7850 9 months ago

      @@toanvo2829 F has dimensions r x n - r (NOT n - r x n -r), no?
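
A small symbolic check of the block computation described in the reply above (a sketch; the F block is an illustrative 2 x 2 choice, and, as the last reply notes, F is r x (n - r) in general):

```python
import sympy as sp

F = sp.Matrix([[2, -2],
               [0, 2]])                      # illustrative F block (r = 2, n - r = 2)

R = sp.Matrix.hstack(sp.eye(2), F)           # R = [ I  F ]
N = sp.Matrix.vstack(-F, sp.eye(2))          # N = [ -F ; I ]
assert R * N == sp.zeros(2, 2)               # I*(-F) + F*I = 0, so RN = 0

# And x_pivot = -F * x_free follows from I*x_pivot + F*x_free = 0:
x_free = sp.Matrix(sp.symbols('x2 x4'))      # the free variables
x_pivot = -F * x_free                        # forced values of the pivot variables
x = sp.Matrix.vstack(x_pivot, x_free)        # pivot block on top for this column order
assert R * x == sp.zeros(2, 1)
```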

  • @nota2938
    @nota2938 2 years ago +1

    I've never understood the null space, rref, and how the null space basis follows immediately from rref better than I do now.
    I'd recommend Dr. Strang to anyone who is trying to learn linear algebra.

  • @rudrajyotidas1538
    @rudrajyotidas1538 4 years ago +3

    The way he connects the flow of ideas..........

    • @briann10
      @briann10 3 years ago +1

      26:31 even the ghost gets mind-blown

  • @CadrinTheWerecat
    @CadrinTheWerecat 12 years ago +3

    Why does my university not allow for students to record the lectures? It is so good to have those at home in video format. You can re-watch them and rewind time anytime you missed something because you weren't paying attention. Mighty helpful.

  • @thehyphenator
    @thehyphenator 12 years ago +1

    Just wanted to say that blocking the rref matrix into [[I F], [0...]] form and solving for the nullspace matrix like that is one of the greatest things I've ever seen. It seems like it shouldn't work because F could have different shape than I, but it does. And it generalizes to when F doesn't exist, which helps you remember the ideas in the next lecture.

  • @antoniolewis1016
    @antoniolewis1016 8 years ago +17

    I LOVE THIS GUY

  • @alirazi9198
    @alirazi9198 7 months ago +1

    I study at a German uni and everything is so goddamn formal. I couldn't fathom until today why the dimension of the kernel plus the rank equals the number of columns.
    Thank you MIT OpenCourseWare and thank you Dr. Strang.

  • @reshobrouth8123
    @reshobrouth8123 7 years ago

    Prof. Strang is a magician; he shows that "matrix" is synonymous with magic.

  • @mushtaqdass7421
    @mushtaqdass7421 5 years ago +4

    every math loving student would love this great man

  • @thehyphenator
    @thehyphenator 12 years ago +2

    F will have the same number of rows as I, but maybe not the same number of columns. So to make N, you just put -F on top and fill in the bottom with the identity matrix of the correct size (the number of columns of F). So say I is m by m and F is m by n; then N will have (m + n) rows and R will have (m + n) columns, so it works out. And each block multiplication (I * -F and F * I) also works out.

    • @qinglu6456
      @qinglu6456 5 years ago

      Yes. So the dimension of the identity matrix in R is not the same as the dimension of the identity matrix in N, and the sum of the dimensions of these identity matrices should equal the number of columns in A.

  • @Kamillascookie
    @Kamillascookie 15 years ago +1

    GREAT! I was a bit confused at first but in the end, he rocked my world as always! Thaaaank you!

  • @makoto0423
    @makoto0423 2 months ago

    After so many years it’s still magic❤

  • @chaeeuijin5912
    @chaeeuijin5912 3 months ago +2

    The number of blackboards that an MIT classroom has is baffling.

  • @amarenpdas1975
    @amarenpdas1975 15 years ago +1

    A great prof... abstract maths can be taught so easily... It's amazing... Hats off to you.
    You should come up with similar lectures in analysis.

  • @effortless35
    @effortless35 12 years ago +3

    The part I found confusing is when we write [I F]*[-F, I] = 0: F and -F have the same dimensions, but the identity on the left-hand side is r x r and on the right-hand side (n-r) x (n-r).
    Thinking it through, it makes sense. The RHS identity has the same number of columns as the number of free variables. It's just a little unusual to see the same letter on both sides meaning slightly different things.

  • @turgaysengoz
    @turgaysengoz 4 years ago

    43:02 "Fridi"
    God I love this man.

  • @maxhuang4650
    @maxhuang4650 4 years ago +2

    Anyone understand the equation at 32:15? I think x_free should be above x_pivot?

  • @reiriley1780
    @reiriley1780 3 years ago

    its incredible what 15 years does 🙌🏽

  • @hanzvonkonstanz
    @hanzvonkonstanz 14 years ago

    @ Dr. Strang: OUTSTANDING!

  • @ΜιχαήλΣάπκας
    @ΜιχαήλΣάπκας 2 years ago +3

    15:06 lovebirds

    • @turokg1578
      @turokg1578 2 years ago +2

      lol would be annoying af if sittin behind em.

  • @SphereofTime
    @SphereofTime 3 months ago +1

    34:13 how many pivot variableS?

  • @zakeerp
    @zakeerp 5 years ago

    19:30 Reduced row echelon form (RREF)

  • @crystallai1002
    @crystallai1002 4 years ago

    Oh, he is such a great teacher!!! With appropriate pauses and a moderate speed!! I'm glad that I learned a lot.

  • @chhayankmulchandani4812
    @chhayankmulchandani4812 3 years ago

    No one teaches this better than him

  • @hoanhuynh782
    @hoanhuynh782 9 years ago +13

    He is the best teacher i've ever had. How can i get in touch with him? Please!!! Thank you so much.

    • @mitocw
      @mitocw 9 years ago +14

      +hoan huynh See his department page for contact information: www-math.mit.edu/~gs/

  • @shubhamtalks9718
    @shubhamtalks9718 5 years ago +2

    A magician telling all his secret tricks...

  • @reginaldarbruthnot1766
    @reginaldarbruthnot1766 3 years ago +1

    truly brilliant and impeccably clear

  • @eswyatt
    @eswyatt 2 years ago

    @ 32:10 X subscript "pivot" and X subscript "free" are being treated as submatrices to enable block multiplication. Hope I'm right

  • @imrans7545
    @imrans7545 12 years ago +7

    I still do not understand how it works. F and I can definitely have different shapes? This part is not clear from the video.

  • @sureshsadasivuni8367
    @sureshsadasivuni8367 4 years ago

    Really happy with these lectures... delivered by Prof. Strang.

  • @SphereofTime
    @SphereofTime 3 months ago

    23:09 pivot rows 1 and 2😊

  • @stevenjames5874
    @stevenjames5874 2 years ago

    16:30 the relation between free variables, rank, and the number of special solutions

  • @bearcharge
    @bearcharge 15 years ago

    indeed, rocked my world! listening to his lecture is a kind of pleasure!

  • @SphereofTime
    @SphereofTime 3 months ago +1

    17:13 how many free variables

  • @SphereofTime
    @SphereofTime 3 months ago

    8:35 pivot columns and free columns

  • @Nolimits-l6y
    @Nolimits-l6y 11 months ago

    huge teacher -professor

  • @rguktiiit371
    @rguktiiit371 3 years ago +2

    Have you noticed?
    The first lecture got millions of views,
    and the view count slowly drops from video to video.

  • @competitivedoritos4294
    @competitivedoritos4294 6 years ago +6

    I believe that Mr. Strang is really amazing and incredible, but I got stuck when he talked about "the pivot and free variables". I mean, he explained the algorithm very well, but I am not able to connect it with what it actually means. Why can the non-pivot columns' variables be assigned anything? And what is the effect on my graph if I wish to plot it? And what's the concept behind these pivot and free variables? Where does it come from?
    So, if you guys could help me out with this, it would really be appreciated!! (A small worked check follows this thread.)

    • @rohi9594
      @rohi9594 6 years ago +1

      Same here! I've been listening to this part again and again, but I'm having a hard time understanding the logic behind pivot and free variables.

    • @danieljulian4676
      @danieljulian4676 5 years ago +5

      The pivot is the first non-zero entry on a row. If the system has a unique (single) solution, each row will have exactly one non-zero entry when the matrix is in reduced row echelon form, and there won't be any free variables. In the null space example, if each row has exactly one non-zero entry, the only vector that solves the system when the RHS is zero is the zero vector.

    • @Jirnyak
      @Jirnyak 4 years ago +2

      You are right about pivots, because he did not really explain that. The point of there being only two pivots is that the columns of this matrix are not linearly independent, but I agree it should be explained much better and in more detail than in this lecture.
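
A small worked check of the question in this thread (a sketch; the example matrix is assumed for illustration): whatever values the free variables are given, the pivot equations from the rref determine the pivot variables, and the resulting x always solves Ax = 0. That is all "free" means here.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 2],
               [2, 4, 6, 8],
               [3, 6, 8, 10]])
R, pivots = A.rref()              # R = [[1, 2, 0, -2], [0, 0, 1, 2], [0, 0, 0, 0]]

for x2, x4 in [(1, 0), (0, 1), (7, -3)]:     # any assignment of the free variables works
    x1 = -2*x2 + 2*x4             # first pivot equation:  x1 + 2*x2 - 2*x4 = 0
    x3 = -2*x4                    # second pivot equation: x3 + 2*x4 = 0
    x = sp.Matrix([x1, x2, x3, x4])
    assert A * x == sp.zeros(3, 1)
```

Choosing (1, 0) and (0, 1) is just the most convenient assignment: it makes the special solutions easy to read off and obviously independent.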

  • @tchappyha4034
    @tchappyha4034 4 years ago

    34:00 If rank(A) were 3, then U*x = 0 would have only the trivial solution. But A*(-1, -1, 1)^T = 0, so rank(A) is not equal to 3.

  • @karthik3685
    @karthik3685 3 years ago

    This is so spectacularly good.

  • @scorpionboy3
    @scorpionboy3 14 years ago

    @gavilanch I'm glad they were made to be found! MIT rocks; more should follow their example!

  • @gangren1453
    @gangren1453 10 years ago

    @ Rahul Duggal, the prof doesn't mean to change the column; he's just highlighting it to make it obvious.

  • @yevgeniygorbachev5152
    @yevgeniygorbachev5152 4 years ago

    3:07 Why does elimination change the column space? All you do is take rows to be linear combinations of each other (crucially, preserving the original row), which leaves the column space unchanged. The basis vectors you use to form the column space are different, but the space itself should be the same because all combinations are in the original space and no vectors are lost by multiplication by zero.

    • @geethasaikrishna8286
      @geethasaikrishna8286 4 years ago

      We are taking linear combinations of the rows, hence the row space remains unchanged, but the column space changes: the columns are being changed independently of each other, so the resulting vectors can now lie in a different space.

  • @bassmaiasa1312
    @bassmaiasa1312 5 months ago

    You can also use this algorithm to get the cross product of two vectors in R3 (up to a scalar multiple). Solve Ax = 0 for the 2x3 matrix whose rows are the two vectors; the dot product of each row with x will be 0.
    Can you derive the cross product matrix from this algorithm?
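
A quick check of this idea (a sketch with assumed example vectors): the 2 x 3 matrix with rows u and v has a one-dimensional null space, and its special solution is parallel to the cross product u x v, since x must be orthogonal to both rows.

```python
import numpy as np
import sympy as sp

u = [1, 2, 3]
v = [4, 5, 6]

A = sp.Matrix([u, v])            # 2 x 3 matrix with rows u and v; rank 2, one free variable
(s,) = A.nullspace()             # the single special solution spans the null space

print(list(s))                   # [1, -2, 1]
print(np.cross(u, v))            # [-3  6 -3]  -- a scalar multiple of the special solution
```

The special solution is scaled differently (its free variable is set to 1), so it matches the cross product only up to a scalar factor.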

  • @SphereofTime
    @SphereofTime 3 months ago

    6:36 echelon... staircase

  • @tachyon7777
    @tachyon7777 6 years ago +3

    19:54 "Let me suppose I got as far as u" lol

  • @TMAC02010
    @TMAC02010 4 years ago +1

    15:10 special solutions

  • @sivarajchinnasamy11
    @sivarajchinnasamy11 3 years ago

    Finally null space column has a combination of identity with free variables 👏👏

  • @imrans7545
    @imrans7545 12 years ago +1

    Hmm, thanks for the explanation. I had to play with examples for some time to get the hang of it.

  • @psibarpsi
    @psibarpsi 2 years ago +1

    9:11 "I can assign anything that I like for X2 and X4..."
    So, what's stopping us from choosing the free variables as X2 and X3? Because, it seems clear from the equations that they can be assigned any value arbitrarily.
    Somebody!

  • @neverbendorbreak
    @neverbendorbreak 7 years ago +1

    Love him so much.

  • @kubilayistikam6382
    @kubilayistikam6382 3 years ago +1

    How can I find the number of pivots in a matrix?

  • @durgeshmishra4005
    @durgeshmishra4005 1 year ago

    At 32:10, shouldn't the x be [xfree xpivot]? So that xfree + F * xpivot = 0.

  • @shadownik2327
    @shadownik2327 1 year ago

    Can't I sub 0 and 1 into the pivot variables just as easily as into the free variables? It just gives new vectors and all these vectors are just linear combinations.

  • @mertduran2023
    @mertduran2023 6 years ago

    This guy is definitely PERFECTTTT!!!!

  • @nandakumarcheiro
    @nandakumarcheiro 1 year ago

    Using the null matrix, sound waves could be converted into a no-sound domain (zero sound power), as a switching application in hearing aids.

  • @thoniageo
    @thoniageo 7 years ago +1

    Wow, great professor! Thank you!

  • @ChetanPalicherla
    @ChetanPalicherla 4 years ago

    8:49 How can we assign any number to the free variables (the variables that multiply those free columns)?

    • @andreykasyanov1063
      @andreykasyanov1063 4 years ago

      It seems you just use some pattern of a 1 and zeroes elsewhere for each column, e.g. (1,0,0); (0,1,0); (0,0,1).
      PS: Someone correct me if I have the wrong intuition.

    • @real-investment-banker
      @real-investment-banker 3 years ago

      Just don't take a linear combination: the two choices for the free variables should be linearly independent. (1,0) and (0,1) happen to be linearly independent; if you analyze them further, these are nothing but the unit vectors (i cap and j cap). Since we can assign the two free variables anything, why not take i cap and j cap, so we cover the full span of the plane.

  • @iqbalazmee2616
    @iqbalazmee2616 12 years ago

    I think the best title for this video is Understanding Null Space.

  • @marnk1950
    @marnk1950 6 years ago +1

    Such a legendary professor. He rocks! :)

  • @yiyu9519
    @yiyu9519 3 years ago

    love this course

  • @gustavoleal413
    @gustavoleal413 1 year ago +1

    What a class! Sensational!

  • @shinyralle
    @shinyralle 13 years ago +3

    At 26:13 I don't get how he can switch column 2 and column 3 to get the identity matrix in the first block of [I F]? You can't change the order of pivot columns just like that? PLEASE answer this for me someone! 1love

    • @googlywoodstudioagency325
      @googlywoodstudioagency325 6 years ago +1

      If you changed the order of the pivot columns, you would also have to change the order of the variables in the solution vector.

  • @Zinpinko
    @Zinpinko 2 years ago

    So the independent columns are associated with the dependent (pivot) variables, whereas the dependent columns give us free variables that can be assigned arbitrarily in the equation.