30. Linear Transformations and Their Matrices

  • Published 25 Nov 2024

COMMENTS • 188

  • @jstreet465 11 years ago +354

    I don't know what he's doing differently, but everything is easier when I listen to this guy. It might be the way he repeats certain phrases or makes everything less abstract. I'm currently struggling in my Linear Algebra class, so seeing this video and realizing that it isn't as hard as my current professor makes it seem is a huge relief. This guy is great. Thanks MIT.

    • @aibattileubaiuly7830 4 years ago +4

      Did you pass your class with a good mark?

    • @noddlexxx9161 4 years ago +3

      @@aibattileubaiuly7830 A+ boiiiiii

    • @youngjim7987 3 years ago +3

      Damn, I share the identical feeling, bro. My prof in linear algebra thought the outside was way too noisy, so she always closed the classroom door. It was super stuffy in there and I felt like I was running out of O2, and therefore sleepy. Everything I've learned came from Mr. Gilbert.

    • @flashg3292 3 years ago +6

      The difference, I think, is that this guy actually teaches linear algebra as part of a bigger picture. Most math profs just teach this stuff as a series of theorems that don't really have anything to do with each other, except that you need theorem t to answer questions a), b), c), etc. But he's trying to explain linear algebra as a holistic system.

    • @vsshappy 3 years ago +2

      The only difference is his passion and love for the subject and his quest to impart the same to his students. Without passion, love, and that quest, it would just be lecturing the text with articulation, which might bring clarity but does not bind the audience.

  • @ozzyfromspace 4 years ago +57

    Fun fact, not mentioned in the video:
    If the matrix A is singular, then the process is losing information because two linearly independent vectors in the input space could be mapped to the same output vector, making them indistinguishable. When the professor discussed the matrix for ordinary differentiation of a polynomial function, notice that it was singular. This corresponds to the arbitrary constant of integration i.e. information will be lost, so you save it ahead of time via initial conditions so you can reconstruct the input vector.
    Linear algebra is amazing for this kind of insight.
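    A quick NumPy sketch of that information loss (the rank-1 matrix below is a made-up example, not the one from the lecture):

      import numpy as np

      # A rank-1 (singular) matrix; its null space is spanned by (2, -1)
      A = np.array([[1.0, 2.0],
                    [2.0, 4.0]])
      print(np.linalg.matrix_rank(A))   # 1

      u = np.array([1.0, 1.0])
      v = u + np.array([2.0, -1.0])     # u plus a null-space vector: a different input
      print(A @ u)                      # [3. 6.]
      print(A @ v)                      # [3. 6.] -- same output, so u and v
                                        # can no longer be told apart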

  • @f_0_n 11 years ago +285

    Gilbert Strang, the name that all college students around the world know. He's a genius and a great teacher

  • @jorgemartinez9892 12 years ago +61

    I'm doing a master's in Germany and we were supposed to go through all the lectures as an introduction to a course called Advanced Mathematics for Engineers. At the beginning I was quite reluctant to do that, but after the first lecture I couldn’t wait for the next one. Professor Strang is indeed a great teacher. Thank you MIT for sharing these splendid lectures!

  • @abdulmukit4420 3 years ago +32

    These lectures make me deeply regret that I was not fortunate enough to go to MIT and sit in this legend's classes.

  • @victormbebe3797 3 years ago +10

    "if you cannot explain what you learnt to a little child in the way that she/he can understand easily, you definitily didint understood what you a trying to explain". This Professor Understood deepily lenear algebra. Congrates and thanks

    • @webstime1 3 years ago +1

      Are you saying you can show this video to a little child and they will understand it easily?

    • @Robocat754 2 years ago

      If you think you fully understood this lecture the first time you watched it, it probably means either that you knew the subject very well beforehand or that you don't understand it as deeply as you think. Please watch it again and think about why everything he says is true. You will still have unanswered questions in your mind after you watch this lecture.

    • @kingplunger1 2 months ago

      Well, that's a nice saying, but in the end it's not true for some topics.

  • @zionen01 14 years ago +8

    This guy is something else. I go to my Linear Algebra class and usually come out feeling more confused than when I went in; then I watch these videos and things make sense in minutes. It's not just what he knows, it's that he knows how to explain it. Simply brilliant.

  • @fakwater 12 years ago +27

    Wow what an amazing professor. If only every math professor was as clear as him!
    THANK YOU MIT!!

  • @omrastogi5258 4 years ago +8

    Prof. Strang, you may not be a true physicist, but you are a true magician.

  • @fallensach 2 years ago +8

    He explains it so clearly, it's amazing. Instead of just throwing around definitions, he actually explains and gives examples of why and how it works, in a simplified manner.

  • @_RajRanjanOjha 3 months ago

    Gilbert Strang is one of the best teachers in the world. The keywords and key sentences he uses in his lectures make it simple to understand. Loved this. ❤❤

  • @neurolife77 4 years ago +14

    I've been watching all the preceding videos, understanding most of what happens, but I always found that I had no link between what I saw and what I already know. I could not integrate the new set of concepts into my current knowledge. It was quite puzzling. Now, with the first 18 minutes of this lecture, everything just clicked. It's like I was floating in space, having a hard time navigating, but could still move from one point to another. Now I've got gravity. This is quite impressive.

    • @nielsota63 3 years ago +6

      sounds like you found a basis

  • @adamjahani4494 1 month ago

    This is how you teach. I wish professors would learn how to teach like this man. Gawd it makes sense now.

  • @divinusnobilite 15 years ago +2

    At BYU right now. Chem E.
    Studying this material for the past few days now to get ahead. This lecture is an excellent tool for taking all these concepts and placing them in a clear and simple context.
    I have greatly benefited from listening to this lecture. Thank you.

  • @sarvasvarora 4 years ago +34

    Feels like 3b1b's video was inspired by this lecture!

  • @astrophilip 12 years ago +10

    I switched from engineering to math/physics during college, and so was taught linear algebra with and without coordinates. One course, entirely with matrices. Another course almost entirely without matrices. What a beautiful topic.

  • @hubbadubchub 1 year ago +1

    Pretty incredible to be sitting in my living room while learning from a world class professor. Thank you MIT for sharing these!

  • @evanmthw 10 years ago +11

    This professor has always been able to clear my confusion on these higher level math courses.

  • @georgesadler7830 3 years ago

    This is a very nice explanation of Linear Transformations and Their Matrices; thanks once again to the godfather of linear algebra, Dr. Gilbert Strang. I took introductory linear algebra at the University of Maryland Baltimore County in the late 1980's, and my professor was nothing close to what Dr. Strang is. This MIT legend is a rare find.

  • @billz281 12 years ago +4

    This guy is so great; my linear algebra teacher is horrible. He actually makes it kind of interesting!

  • @freeeagle6074 2 years ago

    I think the reason Professor Strang's courses are so popular lies in his personality. He is very considerate, so he tries every means to explain linear algebra concepts in a way that is easy to understand. It is the same in the workplace: nice people who always take other people's feelings into account tend to provide popular products and services. Some professors offer hard-to-understand, unpopular courses because they don't care about how their students feel at all; they mainly care about whether they are promoted or receive more funding.

  • @MoonLight11023 8 years ago +25

    Thank you! I'm learning good stuff for free.

  • @АлександрСницаренко-р4д

    If I had tattoos, I would make a tattoo with Gilbert Strang's name on it:) Amazing guy, amazing lecture, amazing MIT

  • @666mafiamongers 11 years ago +4

    Thanks to MIT for explaining that one. I can think of an explanation for the origin of that phrase. You see, the Summer gets "tricked" into thinking it's gonna last. But alas! The Winter inevitably comes and the few days of its warm reign are over. And just like this short-lived summer period, the Indians got tricked by their conquerors too. Thus the term, "Indian Summer".
    And Thank You MIT for these wonderful videos. Keep up the great work!!
    Great Profs teaching in a great way!

  • @SahilZen42 1 year ago

    The choice of basis is important for a linear transformation. It actually becomes quite easy when we use a convenient basis, just as we can represent things in polar as well as Cartesian coordinates.
    We humans are breaking things and making things.

  • @konobel8705 7 years ago +4

    This lecture is the greatest one ever

  • @ehorganv 15 years ago

    That is correct, it's not a linear transformation.
    For me, it helps to think of this in terms of similar triangles: T(2v) takes two times the original vector but adds only one copy of v0 to it. For the resultant vector to be proportional (i.e., = 2T(v)), one would have to add 2v0 in T(2v).
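    A short numeric check of the same point (the shift v0 and test vector v here are arbitrary choices):

      import numpy as np

      v0 = np.array([1.0, 2.0])         # a fixed nonzero shift
      T = lambda v: v + v0              # "shift the whole plane" map

      v = np.array([3.0, 5.0])
      print(T(2*v))                     # [ 7. 12.]
      print(2*T(v))                     # [ 8. 14.] -- T(2v) != 2T(v)
      print(T(np.zeros(2)))             # [1. 2.]  -- and T(0) != 0, so not linear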

  • @arrangement 13 years ago +1

    @valtih1978 I'm not exactly sure what portion of the lecture you're referring to, but if you're talking about the later 3/4, I think he means that A transforms vectors expressed in the basis {v1, v2, ..., vn} into the output basis {w1, w2, ..., wm}, where T(v1), T(v2), etc. are used to express the original basis vectors {v1, v2, etc.} "in terms of" {w1, w2, etc.}. So the system of equations is just a generalization of that concept.

  • @elcarle95 10 years ago +4

    this guy is amazing, thank you from Spain!!

  • @dahl121 14 years ago +1

    I'm studying this subject at the moment and don't understand anything about it - low self-confidence. But watching this lecture, I understand everything again.

  • @arbiter771 2 years ago +9

    finally, Zodiark EX Prog

  • @salomiranimerugu9669 3 years ago

    Very nice explanation. From now on I am a big fan of his.

  • @dennisyangji 15 years ago +2

    worth spending time to see this wonderful video

  •  6 years ago +7

    wtf this fella just said see ya next monday after thanksgiving and it is actually this weekend for me o.O

  • @Tyokok 1 year ago

    There are essential key points behind the example at 36:59 that connect to eigenvalues/eigenvectors, but they are not well explained; I wish the professor had expanded on them right there.

  • @blakehouldsworth9918 4 years ago +4

    I go to Texas Tech University, and amidst COVID-19 I have been having a hard time with linear algebra, so this was a nice relief. My question: I completely get transforming the basis of numbers, and I want to know if this applies to unit conversion with physical properties, like doing stoichiometry to convert between units (same plane), or the relationships we get from integration and differentiation, like mass and force. Thank you for this resource, incredibly helpful.

  • @523101997 6 years ago +8

    The "derivative is linear" thing blew my mind.

  • @vishakp89 12 years ago +3

    Professor, really respect you ! Thank you and God bless :)

  • @Enlightenchannel 2 years ago +1

    Wow, yea this guy is a great teacher!

  • @MirageScience 13 years ago +51

    first thought: that is some good ass chalk.

  • @rasikajayathilaka3516 4 years ago +1

    Thank you! Everything makes sense now!

  • @juhxmn2 14 years ago +8

    Professor Strang rules!

  • @jovanjovancevic915 10 years ago +8

    A great video! I have to note that the T(0)=0 "proof" is a bit funny :)

  • @mehmetaliozer2403 3 years ago

    this lecture made my night before the midterm

  • @raghav9o9 2 years ago +1

    "Every linear transformation is associated with a matrix":- Gilbert Strang.

  • @michaelashcroft6028 12 years ago +1

    An 'Indian summer' is used in the UK to describe a summer that lasts later than usual.

    • @hektor6766 5 years ago +2

      In the states, it's the warm spell after the first frost.

  • @volkerblock 4 years ago +2

    Ahh! Suddenly everything is easy to understand, there is great enlightenment. In the previous lessons I sometimes had difficulties and now the dark forest is thinning out.

  • @imbsalstha 12 years ago +2

    a teacher you'd really like to have

  • @coal2710 7 years ago +2

    32:24 HE WORKS

  • @zhangjerry2731 4 years ago +1

    He gives a lot of examples; very helpful, and still useful now.

  • @didyoustealmyfood8729 3 years ago +1

    this guy is a legend

  • @김유진-m3o3h 4 years ago +4

    44:50: Why is a11 a21 ... am1 the first column of matrix A?
    Isn't this equation right? >> T(v1) = A v1 = a11 w1 + a21 w2 + ... + am1 wm

    • @ozzyfromspace 4 years ago

      In general, A = [column_1 column_2 ... column_n],
      with input coordinate vector v = [v1 v2 ... vn]^T.
      So:
      output vector w = A * v = (v1 * column_1) + (v2 * column_2) + ... + (vn * column_n),
      i.e. an n-term linear combination of m-dimensional vectors.
      Now imagine v1 = 1 and v2 through vn all zero. Then column 1 is the vector you get when you pass in a unit vector according to your input basis. The same works for v2 = 1 while everything else is zero, and so on.
      Intuition: say you have a 2x3 rectangular matrix A and a 3x1 input vector v. The shape of the output vector w is 2x1. Looking back at A, it is composed of 3 columns, and each column is 2x1, which matches the 2x1 output. What's going on? The output vector is a linear combination of the columns of A. This is always true.
      If you have any other questions or want me to clarify something, let me know.
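      A small NumPy check of this (the 2x3 matrix and input vector are made up for illustration):

        import numpy as np

        A = np.array([[1.0, 4.0, 7.0],
                      [2.0, 5.0, 8.0]])   # 2x3: three 2x1 columns
        v = np.array([3.0, -1.0, 2.0])

        combo = v[0]*A[:, 0] + v[1]*A[:, 1] + v[2]*A[:, 2]
        print(A @ v)                      # [13. 17.]
        print(combo)                      # [13. 17.] -- the same linear combination

        e1 = np.array([1.0, 0.0, 0.0])    # unit coordinate vector
        print(A @ e1)                     # [1. 2.] == column 1 of A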
      Best wishes, friend

    • @benjaminlin7918 3 years ago

      @@ozzyfromspace But what happens when the input basis isn't one element 1 and the others 0 (the standard basis)?

    • @vidhankashyap7663 1 year ago

      @@benjaminlin7918 As in the comment above,
      A = [column_1 column_2 ... column_n]. Take, for example, (v1, v2, ..., vn) as the standard basis vectors ([1 0 0 ...]^T, [0 1 0 0 ...]^T, ...).
      Any coordinate vector c = (c1, c2, ..., cn) represents the corresponding vector v = c1*v1 + c2*v2 + ...
      Say the c's are my coordinates in the input basis and I want to change them to a new coordinate system (the output basis). I am choosing eigenvectors as the output basis (w1, w2, w3, ..., wm), because that is usually the basis of interest, but it can be anything (even exactly the input basis). I want the new coordinates (d1, d2, d3, ..., dm), which are the coefficients of the corresponding linear combination of eigenvectors:
      d = A * c = (c1 * column_1) + (c2 * column_2) + ... + (cn * column_n)
      Since we are doing a linear transformation, we must know what the new coordinates look like if we transform just v1, just v2, and so on.
      This is equivalent to saying that we know what (1,0,0,...), (0,1,0,0,...), ... become under the linear transformation, expressed in the output basis.
      Thus we can say that
      column_1 = the coordinates we get when transforming v1, i.e. (1,0,0,...), into the output basis.
      Hence we can write this transformation as a linear combination of the eigenvectors (the output basis) with the entries of column_1 as coefficients (the new coordinates). Similarly, we now know what the remaining columns of A represent, so we know the matrix completely.
      If you are with me up to this point, then for any arbitrary c's we can get the new d's in the output basis, as A*c = d.
      Your question: what happens when the input basis isn't one element 1 and the others 0 (the standard basis)? If I know my input and output bases and what my (1,0,0,...), (0,1,0,...), and so on look like in the new coordinates (the output basis), I can just write those coordinates as the columns of my matrix A, and I can then find the transformation for any arbitrary c's. The whole point of the matrix of a linear transformation is to find A such that we get what you asked for.
      This should answer your question.

  • @3andhalfpac 12 years ago

    It's because he leaves all the written-out proofs to the book and gets to the point, explaining what he's doing, whereas most teachers explain something by writing down a proof that no one really understands.

  • @89HouseMusic 13 years ago +1

    Damn it, why aren't there teachers who try to explain mathematics in a simple way... Why do you think there are so many failing students in Mexico? The truth is, this Dr. Strang knows a lot! Thanks for making these videos!! Greetings from Mexico!

    • @rosadovelascojosuedavid1894 3 years ago +1

      :0
      That's right. Gilbert Strang making an impact in Mexico even 9 years after your comment. And this truly helps me a lot.

  • @xmioinsterx4179 2 years ago

    bro helped me beat Zodiark. thanks man

  • @yakunli9111 7 years ago +2

    I am not clear about Prof Strang's explanation at 37:22. He said we choose the eigenvector basis [0 1]^T and [1 0]^T, but this is the standard basis in R^2, right?
    Does he mean that if the input and output basis are the eigenvector basis, then the transformation matrix A is just Λ?
    Thanks.

    • @davidlovell729 7 years ago +1

      That's exactly what it means. If x_i is an eigenvector, and lambda_i the associated eigenvalue, then Ax_i = lambda_i * x_i. Notice the thing on the right only includes x_i; it doesn't need any contribution from any of the other eigenvectors (basis vectors). Thus, when you build the matrix A by putting into its columns what you want it to do to the basis vectors, when the basis vectors are independent eigenvectors, you will get the eigenvalue matrix as a result.
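      A quick numeric check of this, using the lecture's projection onto the 45-degree line (whose eigenvectors are (1,1) with eigenvalue 1 and (1,-1) with eigenvalue 0):

        import numpy as np

        P = np.array([[0.5, 0.5],
                      [0.5, 0.5]])        # the projection in standard coordinates

        S = np.array([[1.0,  1.0],
                      [1.0, -1.0]])       # eigenvectors of P as columns

        Lam = np.linalg.inv(S) @ P @ S    # same map, eigenvector basis in and out
        print(np.round(Lam, 10))          # [[1. 0.]
                                          #  [0. 0.]] -- the eigenvalue matrix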

  • @chummyigbo8844 9 years ago +5

    Thank You Professor!!!

  • @UberMarauder 14 years ago +1

    @RingWarrior12 Maybe it's just you; it helped me a lot!

  • @pablo_CFO 4 years ago

    This gives an easy geometric interpretation of why singular matrices have no inverse. These matrices take n-dimensional vectors as input, and the output is a vector in a space of smaller dimension; different input vectors can have the same output, so you can't have an inverse transformation, because you don't know which of all those original vectors the vector in the smaller-dimensional space corresponds to. More importantly, that's why you need an initial condition in integrals to recover the original function: the derivative performs this same process, and with the inverse (the integral) alone you can't tell which was the original function.

    • @elyepes19 3 years ago

      Sorry, but I beg to differ: under a linear transformation an input can have an output of smaller or larger dimension, as long as the latter is not infinite, and I'd say that's the geometric interpretation of a matrix not being invertible: when the output coordinates happen to be infinite.

  • @Ghostly123456789 11 years ago +1

    who is this magnificent instructor?! He puts all of my math professors to shame!

  • @valtih1978 14 years ago +5

    A brilliant series of lectures. However, this seems to be an unclear one. I failed to understand the derivation of A. What does the system of equations do? Can you explain what Gilbert wanted to say? Wikipedia, meanwhile, does it trivially: A = [T(v1) T(v2) ... T(vn)].

    • @Robocat754 2 years ago +1

      The input basis v1 to vn is the same as the output basis w1 to wm; Professor Gilbert Strang didn't mention that when talking about how to find the matrix A.
      You can't construct a transformation matrix A without a basis. The transformation matrix A transforms the basis v1, ..., vn to T(v1), ..., T(vn), and each T(vi) can be written as a linear combination of the basis w1 to wm, where the coefficients are the entries of column i of A.
      Gilbert Strang's course alone may not make you fully understand the subject; checking another source like Khan Academy might help. This lecture and the next one, on change of basis, may leave you with more questions after you watch them.

  • @roronoa_d_law1075 7 years ago +1

    Matrices of linear transformations - isn't this part supposed to be in the first lectures of a course?

  • @ДанилЕлфимов-ш8г 4 years ago +1

    I hope that'll help me pass my linear algebra exam

  • @hmodywakid 14 years ago +1

    Well, I have studied this at the Technion Israel Institute of Technology; it's an easy subject.

  • @janitarjanitar 14 years ago +1

    Remember on Looney Tunes when they would run in place before taking off? That's how this guy is: he just seems so excited that his mouth wants to blurt it out, but his mind is a fraction behind... Whose idea was it to include the stutters in the subtitles?

  • @alijoueizadeh8477 5 years ago +2

    Thank you.

  • @aku7598 3 years ago

    I think it's just plotting a vector on new axes with reference to a given set of axes.
    The new axes are not necessarily orthogonal to each other.

  • @vamshikrishnam9206 4 years ago

    Thanks, sir, you improved my math knowledge.

  • @ArabicLearning-MahmoudGa3far 2 years ago +1

    God bless you!

  • @ILOVEMINTBREEZE 13 years ago +5

    aww he is so cute

  • @АлександрСницаренко-р4д

    Sorry for my humble input, but this lecture on linear transformations should have been at the beginning of the course.

  • @НикитаЮрченко-э3ь 4 years ago +1

    It's easier to understand than in my native language :)

  • @the_basement_files 10 years ago +4

    What an amazing man

  • @jamesdiangson9568 10 years ago +3

    God bless you.

  • @billwong2039 4 years ago +2

    40:01 how did he get the projection matrix? what is the a??

    • @balajikalva188 4 years ago +4

      a is a vector along the line through the origin at 45 degrees. Since it's 45 degrees, both the x and y coordinates have to be the same, giving the vector a = [1,1] (transposed). He took it as a unit vector with each entry 1/sqrt(2). Performing the operation a*aT then gives you the projection matrix. I hope that helps :)
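      In NumPy, the same computation (which also shows the 35:00 point that the perpendicular direction gets killed):

        import numpy as np

        a = np.array([1.0, 1.0]) / np.sqrt(2)   # unit vector along the 45-degree line
        P = np.outer(a, a)                       # the projection matrix a a^T
        print(P)                                 # [[0.5 0.5]
                                                 #  [0.5 0.5]]

        print(P @ np.array([1.0, -1.0]))         # [0. 0.] -- the perpendicular
                                                 # direction is projected to zero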

  • @renesax6103 8 years ago +2

    Why do physicists avoid using/thinking in terms of bases? Why are they unwilling to bring them in?

    • @elyepes19 3 years ago

      Because in Relativity the point is that the laws of physics are the same, independent of any choice of coordinate system.

    • @kingplunger1 2 months ago

      You lose generality and introduce artifacts that have nothing to do with what you are describing but are just there because of your choice of coordinates. Look at tensors for a general approach.

  • @rishavdhariwal4782 10 months ago

    I have a point of confusion; maybe someone can resolve it. How could the professor say with conviction, at around 42:14, that the 1st column of A tells us what happens to the first basis vector?

  • @zack_120 5 months ago

    37:11- what's perpendicular here?

  • @mayanksinghshishodia 1 year ago

    Godly explanation

  • @aviralrai6803 2 months ago

    He is a legend.

  • @rodolfonavarro3645 5 years ago +2

    This helped so much!

  • @icono__7136 3 years ago

    I'm struggling so much with the paradigm of linear transformations. I can deal with matrices just fine - I even "see" the concepts - but throw in the T(x) whateverthefuck and I can feel myself melting into tears. Hoping this will be the click for me!

  • @Abss0966 9 years ago +2

    35:00 Why does it kill the basis vector v2? Is it because it transforms it?

    • @BongboBongbong 9 years ago

      +Ahmed Jan If you project the basis vector v_2 onto the line (which goes through the origin), you will get the zero vector. In other words: the length of this vector v_2 in the direction of the line is zero.

    • @yryj1000 8 years ago

      v2 is perpendicular to the line it is projected onto, so the projection kills v2, which has no component along the line.

    • @Abss0966 8 years ago +1

      Thanks guys

  • @dhruvilshah6405 4 years ago +1

    This dude speaks fluent

  • @alullabyofpain 8 years ago

    Is w1 = T(v1), w2 = T(v2), ...? Because the transformation output doesn't always have to be a basis for the space; it only happens to be one when the transformation is an isomorphism between the two spaces.

    • @davidlovell729 7 years ago +1

      The answer to your first question is no. You can choose the basis for the input space (v1, ...), the basis for the output space (w1, ...), and the particular transformation T independently. What is required to form the columns of the matrix A is to ask: if input basis vector v1 were expressed in its own coordinates, what would its image under T look like, expressed in coordinates of the output basis? That is column 1 of A. Repeat for v2, ...
      If the input basis and the output basis span the same space, then you can do something similar to what you are describing. Construct a matrix that has, as its columns, the w basis vectors written in their v basis coordinates. This linear transformation is the change-of-basis transformation from the w basis to the v basis. When applied to a vector of coordinates in the w basis, it gives you back the same vector, except expressed in coordinates of the v basis. This matrix is invertible, and its inverse is the change-of-basis transformation in the other direction. When the input and output bases span different spaces, you can still construct this matrix, but it is not square, and hence not invertible.
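      A minimal NumPy sketch of that recipe (the bases below are made-up choices, and T is the lecture's 45-degree projection; column i of A holds the w-coordinates of T(v_i), found by solving W a_i = T(v_i)):

        import numpy as np

        T = np.array([[0.5, 0.5],
                      [0.5, 0.5]])        # the transformation in standard coordinates

        V = np.array([[1.0, 1.0],         # input basis v1=(1,0), v2=(1,1) as columns
                      [0.0, 1.0]])
        W = np.array([[1.0, 0.0],         # output basis w1=(1,1), w2=(0,1) as columns
                      [1.0, 1.0]])

        A = np.linalg.solve(W, T @ V)     # column i = w-coordinates of T(v_i)

        c = np.array([2.0, -3.0])         # any v-coordinates
        print(np.allclose(W @ (A @ c),    # T(v) rebuilt from its w-coordinates...
                          T @ (V @ c)))   # ...matches applying T directly: True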

  • @leonardosoto5669 5 years ago +1

    At 43:15, how does he know that the constants are a11, etc.? Can someone explain it to me? I know this works, but I can't understand why it works.

    • @pa15i 5 years ago

      I also have doubts about this example.

    • @elyepes19 3 years ago

      Those are the components of the matrix A, written in row-column format

  • @padhai_karle_bhai 2 years ago

    beautifully taught!

  • @peter_castle 11 years ago +1

    His name is Gilbert Strang, mathematician.

  • @patrikisaksson7823 11 years ago +8

    wohooo, all makes sense now :D

    • @Peter_1986 10 years ago

      I love these kinds of playlists; now I can learn Linear Algebra from YouTube videos and chill. xD

  • @quirkyquester 4 years ago +1

    thank youuuuu!!!

  • @BlahBlooBlee4205 8 years ago +1

    wow it actually makes sense now :D

  • @UberMarauder 14 years ago +2

    amazing!!!

  • @geverayala8978 1 year ago

    What a good video!!

  • @MRsegolego 14 years ago +1

    I still don't understand; however, I feel that I need to do some examples and I'll get there.
    Great lecture regardless.

  • @landsgevaer 13 years ago +1

    This lecture could just as well have appeared at the beginning, as the very first one, to introduce the meaning of a matrix (except for the part about using eigenvectors as a basis, of course).

  • @VangelisPs 15 years ago

    I saw example 2, which shifts (moves?) the whole plane. Does this mean that translation is not a linear transformation? If not, what is it? I thought it was linear...

    • @davidlovell729 7 years ago +1

      It is affine, not linear.

    • @rathnam7818 1 year ago

      It is a non-linear transformation, since the zero vector as input gives a non-zero vector after shifting, which violates the linearity property: we need the zero vector as output.

  • @coding99 2 years ago

    43:00 Climax

  • @RahulKumar-en7dv 7 years ago +1

    Please tell me the best books for linear algebra.

    • @mrobjectoriented 7 years ago +3

      Gilbert Strang's and David C. Lay.

    • @lucasm4299 6 years ago

      Reeshabh Ranjan
      I have both of those textbooks!!
      It’s the best of both worlds. I love Lay!!

    • @elyepes19 3 years ago

      Besides the textbooks from Strang and Lay, and after them: for numerical/computational linear algebra, I favor William Ford.
      For the precise topic of this lecture, linear transformations w/o coordinates, "Linear Algebra Done Right" by Axler.
      For the best of the three worlds combined (great pedagogy, computational linear algebra, and linear transformations with and without coordinates), the classic text by Cornelius Lanczos (he invented the SVD), "Linear Differential Operators".

  • @Abss0966 9 years ago

    48:06 How does he know that the transformation matrix A must be a 2x3 matrix?

    • @BongboBongbong 9 years ago +1

      +Ahmed Jan The linear transformation he is describing goes from vector space V to vector space W. A basis for V is the set (1, x, x²) and a basis for W is the set (1, x). In order to transform elements of V into elements of W, you are going to need a 2x3 matrix (it could be helpful to have a look at the rules for matrix multiplication).
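      For concreteness, a sketch of that matrix in those bases (input (1, x, x²), output (1, x)):

        import numpy as np

        # d/dx sends a0 + a1*x + a2*x^2 to a1 + 2*a2*x, so in these bases:
        D = np.array([[0.0, 1.0, 0.0],    # constant term of p' is a1
                      [0.0, 0.0, 2.0]])   # x-coefficient of p' is 2*a2

        p = np.array([5.0, 3.0, 2.0])     # p(x) = 5 + 3x + 2x^2
        print(D @ p)                      # [3. 4.] -> p'(x) = 3 + 4x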

    • @Abss0966 9 years ago

      thanks for the help !

  • @shivanshkhare8756 2 years ago

    At 39:45, can anyone please explain how we got that matrix?