14. Orthogonal Vectors and Subspaces

  • Published May 5, 2009
  • MIT 18.06 Linear Algebra, Spring 2005
    Instructor: Gilbert Strang
    View the complete course: ocw.mit.edu/18-06S05
    UA-cam Playlist: • MIT 18.06 Linear Algeb...
    14. Orthogonal Vectors and Subspaces
    License: Creative Commons BY-NC-SA
    More information at ocw.mit.edu/terms
    More courses at ocw.mit.edu

COMMENTS • 241

  • @priyankkharat7407
    @priyankkharat7407 5 years ago +226

    Thank you professor! I am amazed that professors from top institutes like MIT explain the basics without any expectation that we already know them. Our university professors, on the other hand, avoid the whole thing by saying "it isn't part of the syllabus; you are expected to know this already." A huge salute and thanks to Professor Strang and the MIT team for publishing these videos free of cost.

    • @yogeshporwal7219
      @yogeshporwal7219 4 years ago +6

      Yes, this one line is the fix: "you are supposed to know this; learn it on your own."
      And here this great professor gives knowledge from the basic level to a very advanced level.

  • @adamlevin6328
    @adamlevin6328 8 years ago +169

    That smile at the end, he knew he'd done a good job

  • @ozcan3686
    @ozcan3686 12 years ago +90

    I don't know how, but whenever I need it, he repeats it. Thanks, Mr. Strang.

    • @9888565407
      @9888565407 4 years ago +1

      Hey, that's true, mate. So did you watch the whole series?

    • @kub1031
      @kub1031 3 years ago +1

      You must have had terrible teachers too, my companion in fate.

    • @rosadovelascojosuedavid1894
      @rosadovelascojosuedavid1894 3 years ago +1

      @@9888565407 lol let's hope he has the same UA-cam account he had 8 years ago

  • @dougiehwang9192
    @dougiehwang9192 3 years ago +28

    I really encourage you to buy Introduction to Linear Algebra, which Prof. Strang wrote. If I say these videos are rank r, then I can definitely say the book is the orthogonal complement of these videos, completing the full dimension of Linear Algebra.

    • @rosadovelascojosuedavid1894
      @rosadovelascojosuedavid1894 3 years ago

      Dude I read this comment and literally TODAY I recommended this book to a guy in a Facebook group and he already ordered it. 👌

  • @nenadilic9486
    @nenadilic9486 3 years ago +20

    Finding this course on the web is tantamount to finding a massive gold treasure.

  • @corey333p
    @corey333p 7 years ago +217

    The dot product of orthogonal vectors equals zero. All of a sudden it clicked when I remembered my conclusion as to what a dot product actually is: "what amount of one vector goes in the direction of another." Basically, if vectors are orthogonal, then no amount of one goes in the direction of the other. Like how a tree casts no shadow at noon. (See the quick numerical check after this thread.)

    • @robertorama8284
      @robertorama8284 5 years ago +8

      Thank you for this comment! That's a great conclusion.

    • @estebanl2354
      @estebanl2354 4 years ago +4

      it was very enlightening

    • @anilsarode6164
      @anilsarode6164 4 years ago +4

      ua-cam.com/video/LyGKycYT2v0/v-deo.html to get the concept of the dot product.

    • @indiablackwell
      @indiablackwell 3 years ago +1

      This helped, a lot

    • @kevinliang5568
      @kevinliang5568 2 years ago +2

      Oh my this is enlightening, I've never thought it that way
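
A quick NumPy check of the interpretation in the thread above; the vectors here are invented for illustration, not taken from the lecture:

```python
import numpy as np

# Two orthogonal vectors: their dot product is zero.
u = np.array([1.0, 2.0, 0.0])
v = np.array([-2.0, 1.0, 5.0])
print(np.dot(u, v))  # 0.0

# "What amount of one vector goes in the direction of another":
# the projection of v onto u, (u.v / u.u) * u, is the zero vector
# exactly when u.v = 0 -- the tree casting no shadow at noon.
proj = (np.dot(u, v) / np.dot(u, u)) * u
print(proj)  # [0. 0. 0.]
```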

  • @steveecila
    @steveecila 11 years ago +74

    Mr Strang makes me feel, for the first time in my life, that linear algebra is interesting!

  • @dmytrobondal4127
    @dmytrobondal4127 6 years ago +94

    Gilbert Strang, you are truly an outstanding teacher! I am currently doing my Master's thesis in Finite Element Analysis and started watching these video lectures just for fun, since I already had some linear algebra back in my bachelor's. Your little sidenote at the end of the lecture about multiplying a system by A transpose actually helped me crack a problem I'm dealing with right now. My finite element system had more equations than unknowns (because I'm fixing some internal degrees of freedom, not the nodes themselves) and I just couldn't figure out how to solve such a system. I completely forgot about this trick of multiplying by the transpose!! THANK YOU SO MUCH!! My final system now has "good" dimensions and the stiffness matrix has full rank!!! (A sketch of the trick follows this thread.)

    • @dmytrobondal4127
      @dmytrobondal4127 6 years ago +19

      And also, his strict mathematical proof (I believe in 1971) that completeness is a necessary condition for FEM convergence is actually something I'm using right now! This guy played such a great role in FEM.
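
A minimal sketch of the trick described in the thread above, with a made-up overdetermined system (the matrix and data are invented, not from the lecture): multiplying both sides of Ax = b by A^T gives a square system that is solvable whenever A has full column rank.

```python
import numpy as np

# 4 equations, 2 unknowns: Ax = b has no exact solution in general.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.9, 5.1])

# Multiply by A^T: A^T A is 2x2 and invertible (A has full column rank),
# so the normal equations A^T A x = A^T b have exactly one solution.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# NumPy's least-squares solver finds the same x_hat directly.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat, x_ls)  # the two agree
```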

  • @nenadilic9486
    @nenadilic9486 3 years ago +7

    25:56 "I'm a happier person now." I love his interludes. Thank you, professor, a lot.

  • @debarshimajumder9249
    @debarshimajumder9249 6 years ago +71

    "the origin of the world is right here"

  • @elyepes19
    @elyepes19 3 years ago +9

    This lecture is a tour de force; every sentence he says, including the ancillary comments, is so well crafted that everything clicks with ease. Least Squares opens the gates to the twin fields of Optimization and Inverse Theory, so every bit of insight he shares has deep implications for those fields (and many others). It's no exaggeration to say that the whole lecture is an aha! moment. Very illuminating, thank you Professor Strang

  • @youmgmtube
    @youmgmtube 14 years ago +18

    This series is phenomenal. Every lecture a gem. Thank you Mr Strang!

  • @quirkyquester
    @quirkyquester 4 years ago +10

    so much fun, so much love. Thank you Professor Strang and MIT for inspiring more people around the world. I truly enjoy learning linear algebra with Professor Strang :) we know he's done it!

  • @LisaLeungLazyReads
    @LisaLeungLazyReads 8 years ago +65

    I remember falling asleep in all my linear algebra classes @ UWaterloo. Only now am I starting to like linear algebra!

    • @Neme112
      @Neme112 7 years ago +2

      "like linear algebra" Good one!

    • @lucasm4299
      @lucasm4299 6 years ago +1

      Lisa Leung
      Is that in Ontario, Canada?

    • @alpozen5347
      @alpozen5347 4 years ago +7

      same here for me, I study at EPFL, but Mr. Strang seems to have a natural gift for the subject

  • @condafarti
    @condafarti 5 years ago +15

    okkkkk, cameras are rolling, this is lecture 14. What an intro line!

  • @ninadgandhi9040
    @ninadgandhi9040 2 years ago +1

    Really enjoying this series! Thank you professor Strang and MIT. This is absolute service to humanity!

  • @niko97219
    @niko97219 3 years ago

    It is a pure joy watching these lectures. Many thanks to Prof. Gilbert Strang and MIT OCW.

  • @rabinadk1
    @rabinadk1 4 years ago +6

    Really a great lecture. He explains things so simply that they seem obvious. I never learned it as clearly as this in my college.

  • @LinhNguyen-st8vw
    @LinhNguyen-st8vw 7 years ago +11

    Linear algebra, it's been almost 3 years, but I think I've finally got you. *sob* Wish I could go back in time.

    • @dangernoodle2868
      @dangernoodle2868 6 years ago +1

      Man, I think a part of me died in the math class I took at the start of university. I feel like I'm resurrecting a part of my soul.

  • @georgesadler7830
    @georgesadler7830 3 years ago +1

    Dr. Strang, thank you for another classic lecture on orthogonal vectors and subspaces. Professor Strang, you are the grand poobah of linear algebra.

  • @professorfernandohartwig
    @professorfernandohartwig 2 years ago +3

    In many linear algebra courses that I have seen, the student is simply told about the various relationships between the fundamental subspaces. But in this course these ideas are convincingly yet accessibly presented. This is very important because it allows students to really understand such key ideas of linear algebra to the point where they become intuitive, instead of simply memorizing properties and formulas. Another great lecture by professor Strang!

  • @carlostrebbau2516
    @carlostrebbau2516 24 days ago

    I have never felt the Platonic injunction to "carve nature at its joints" more strongly than after watching this lecture.

  • @palashnandi4165
    @palashnandi4165 1 year ago +3

    00:00:00 to 00:02:50 : Introduction
    00:02:51 to 00:13:45 : What is orthogonality?
    00:13:50 to 00:20:49 : What is orthogonality for subspaces?
    00:20:50 to 00:26:00 : Why is RS(A) ⊥ NS(A)?
    00:26:01 to 00:34:00 : What is the orthogonal complement?
    00:39:45 to End : Properties of A^T A

  • @AryanPatel-wb5tp
    @AryanPatel-wb5tp 1 month ago +1

    "Let me cook up a vector that's orthogonal to it" - the GOAT, Professor Strang 8:25

  • @mikesmusicmeddlings1366
    @mikesmusicmeddlings1366 3 years ago +2

    I am learning so much more from these lectures than from any teacher I have ever had

  • @georgeyu7987
    @georgeyu7987 4 years ago +23

    "blackboard extends to infinity..." yeah, MIT does have an infinitely long blackboard...

    • @akselai
      @akselai 3 years ago +7

      * slides out the 45th layer of blackboard *

  • @onatgirit4798
    @onatgirit4798 3 years ago +2

    Omg, the orthogonality between the nullspace and the row space fits so well with the v=[1,2,3] example the prof gave last lecture. I've seen much less entertaining TV series than 18.06; this course should be on Netflix lol

  • @rajprasanna34
    @rajprasanna34 9 years ago +5

    It's extraordinary and amazing. No other lecturer is as good as Gilbert...
    Thank you sir........

  • @mreengineering4935
    @mreengineering4935 2 years ago +3

    Thank you very much, sir. I am watching the lectures and enjoying them. I have benefited from you because, with the war situation, we do not have a teacher in Yemen, so you became my teacher.

  • @tongqiao699
    @tongqiao699 11 years ago +6

    The greatest lecturer I have ever met in my life.

  • @adarshagrawal8510
    @adarshagrawal8510 2 years ago +1

    At 36:22, it is fascinating how he gets a bit into orbital mechanics, saying there are 6 unknowns (rightly known as the state vector, a 6 by 1 vector with positions (x, y, z) and velocities (xdot, ydot, zdot)).

  • @gokulakrishnancandassamy4995
    @gokulakrishnancandassamy4995 2 years ago

    Great summary at the end: A^T A is invertible if and only if A has full column rank! Just loved the lecture...
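
A small numerical illustration of that summary; both matrices below are invented for the example:

```python
import numpy as np

# Full column rank: A^T A is invertible.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 5.0]])
print(np.linalg.matrix_rank(A))        # 2 (= number of columns)
print(np.linalg.det(A.T @ A) != 0)     # True: A^T A is invertible

# Repeat a column so the rank drops: A^T A becomes singular.
B = np.array([[1.0, 1.0],
              [2.0, 2.0],
              [5.0, 5.0]])
print(np.linalg.matrix_rank(B))                  # 1
print(np.isclose(np.linalg.det(B.T @ B), 0.0))   # True: singular
```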

  • @Seanog1231
    @Seanog1231 6 years ago +1

    Can't wait for the next one!

  • @abdulazizabdu8362
    @abdulazizabdu8362 8 years ago +3

    But the lessons are great!!!! I'm enjoying every class. Thank you, Gilbert Strang

  • @vedantparanjape
    @vedantparanjape 4 years ago +12

    The second-best part of watching these lectures is the comment section

  • @zoltanczesznak976
    @zoltanczesznak976 9 years ago +10

    You are the king Mr Strang! Thanks

  • @hassannazeer5969
    @hassannazeer5969 4 years ago +8

    This is a 90-degree chapter, Strang meant business from the word go!

  • @prakhyathbhandary9822
    @prakhyathbhandary9822 3 years ago +6

    25:00 Why was the transpose of the rows taken to find a combination of the row space? Would we be able to multiply the transpose of row 1 by x?

    • @rjaph842
      @rjaph842 2 years ago +1

      I lost it there too, man. Idk if you've managed to figure out why.

    • @joaocosta3506
      @joaocosta3506 2 years ago

      @@rjaph842 Wasn't the point to prove that the left nullspace and the column space are orthogonal too?

    • @iamjojo999
      @iamjojo999 2 years ago +1

      I think it's a little mistake that Prof. Strang didn't notice, probably because he had just taught the property that orthogonal vectors satisfy x^T y = 0.
      But that requires x and y to be column vectors. Here the row is a row vector, not a column vector, so there is no need to transpose it before multiplying by another column vector: simply row vector * x (where x is a column vector) = 0 is fine. Nevertheless, I really like Prof. Strang's style. Thank you, Prof. Strang.
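
For what it's worth, a tiny NumPy check of what everyone in this thread agrees on: each row of A is orthogonal to a nullspace vector x, whether or not the transpose appears in the notation. The matrix is invented for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 7.0]])
x = np.array([-1.0, -1.0, 1.0])   # in the nullspace: Ax = 0

print(A @ x)               # [0. 0.]
for row in A:              # each (row_i)^T x, i.e. row_i . x, is zero
    print(np.dot(row, x))  # 0.0
```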

  • @omarelgazzar834
    @omarelgazzar834 6 years ago

    Great lecture, thanks Prof. Strang.

  • @antoniosaidwebbesales2418
    @antoniosaidwebbesales2418 2 years ago

    Amazing, thank you MIT and Prof. Gilbert Strang.

  • @miami360x
    @miami360x 12 years ago +2

    I love his explanations. My linear algebra prof will just give us definitions, state theorems, and prove them, and if we're lucky we'll get an example, but never a solid explanation.

  • @hanzvonkonstanz
    @hanzvonkonstanz 13 years ago +1

    I swear, these lectures with the Schaum's Outline of Linear Algebra can really help anyone learn the subject.

  • @anilsarode6164
    @anilsarode6164 4 years ago +3

    38:30 -Mr. Strang gives a hint about the Maximum Likelihood Estimate (MLE).

  • @muditsaxena3640
    @muditsaxena3640 5 years ago +18

    At 19:50 he said, "When is a line through the origin orthogonal to the whole plane? Never," but I think if we take any line through the origin and a plane whose normal vector is parallel to that line, then the two will be orthogonal. For example, the x-axis and the y-z plane. Help me out please.

    • @wasiimo
      @wasiimo 5 years ago +10

      By that he means any line passing through the origin that is in the plane (i.e. a subspace of the plane) cannot be orthogonal to the whole plane. Of course, if this line is parallel to the normal of the plane, as you stated, then yes, it will be orthogonal to every vector in that plane.

    • @Basta11
      @Basta11 5 years ago +7

      He's talking about a line through the origin (a subspace) that is also in the plane.

    • @khanhdovanit
      @khanhdovanit 3 years ago +1

      Thanks for your question

    • @jeffabc1997
      @jeffabc1997 3 years ago +1

      Thanks for the question and answer... it really helps!

  • @tanphan1618
    @tanphan1618 2 years ago

    Beautiful lecture and an amazing lecturer!!!

  • @abdelaziz2788
    @abdelaziz2788 3 years ago

    That's a VERY VERY ESSENTIAL lecture for machine learning.
    I used to do the transpose trick but didn't know where it came from; now I may die in peace.

  • @BigBen866
    @BigBen866 1 year ago

    The man puts his Soul into his lectures 🤔🙏🏼😀👍

  • @nateshtyagi
    @nateshtyagi 3 years ago +1

    Thanks Prof Strang, MIT!

  • @Brekhna
    @Brekhna 12 years ago

    He is such a great teacher!! Thank you, Professor Strang!!

  • @bca1037
    @bca1037 4 months ago

    The best linear algebra lecture.

  • @jnnewman90
    @jnnewman90 2 years ago +1

    This man cooked up some vectors AND insulted MIT's floor integrity. Legend

  • @cutieFAIZANHASSAN
    @cutieFAIZANHASSAN 4 years ago

    Thank you, sir. You are a great teacher.

  • @imegatrone
    @imegatrone 12 years ago

    I really like the video on orthogonal vectors and subspaces.

  • @VladimirDjokic
    @VladimirDjokic 9 years ago +2

    He's absolutely amazing!!!

    • @nenadilic9486
      @nenadilic9486 3 years ago

      I am delighted too. His lectures are full of enlightening moments, at least for us laypeople.

  • @emenikeanigbogu9368
    @emenikeanigbogu9368 4 years ago

    Loved this lecture

  • @Anaghish
    @Anaghish 3 years ago

    You're the best teacher in the world.

  • @hibvio
    @hibvio 13 years ago

    Really good videos!! This series is helping me a lot!! I'm from Brazil and I'm loving these videos!!

  • @mahneh7121
    @mahneh7121 11 months ago

    this was the best lesson I have taken

  • @vetonsadriu1216
    @vetonsadriu1216 9 years ago

    thank you g. strang.

  • @kaikim8402
    @kaikim8402 3 years ago

    감사합니다, Thank you.

  • @pawankumar-gc6ho
    @pawankumar-gc6ho 2 years ago

    Brilliant lecture

  • @soulmansaul
    @soulmansaul 3 years ago

    Recorded in 1999, still relevant in 2021. "Comes back 40 years later" - Yep still relevant

  • @shoumikghosal
    @shoumikghosal 4 years ago +8

    "The one thing about Math is you're supposed to follow the rules."

  • @axequalsb8431
    @axequalsb8431 4 years ago

    I love this guy!!!

  • @ozzyfromspace
    @ozzyfromspace 3 years ago

    This man is my hero 🙌🏽✨

  • @ArabicLearning-MahmoudGa3far
    @ArabicLearning-MahmoudGa3far 2 years ago

    God bless you, professor!

  • @ZehraAkbulut-my7fj
    @ZehraAkbulut-my7fj 3 months ago

    I can't stop watching the spinning pens 15:05

  • @eccesignumrex4482
    @eccesignumrex4482 7 years ago +12

    Gil uses his 'god' voice at ~8:00

  • @hits6620
    @hits6620 3 years ago +2

    At 25:00 Mr. Strang wrote (row 1) transpose times x equals 0, but I don't really understand.
    I was thinking about removing the "transpose" thing, and I was so confused.

    • @minagobran4165
      @minagobran4165 1 year ago

      me too, did you ever understand it?

    • @APaleDot
      @APaleDot 1 year ago

      @@minagobran4165
      All row vectors are written with the transpose symbol to indicate they are row vectors and not column vectors.

  • @BestBites
    @BestBites 2 years ago

    The cameraman must have become a pro in linear algebra by absorbing such a high level of teaching.

  • @BigBen866
    @BigBen866 1 year ago

    "Let me add the great name, 'Pythagoras'!" I love it 😂😂😊

  • @sachinranveer3452
    @sachinranveer3452 3 years ago +1

    Where are the next lectures on A^T A?

  • @alijoueizadeh8477
    @alijoueizadeh8477 5 years ago

    Thank you.

  • @jacksonsunny1261
    @jacksonsunny1261 1 year ago

    East or West, Prof Strang is the best!

  • @user-ud7nv6fp6q
    @user-ud7nv6fp6q 1 year ago +1

    thanks a lot

  • @bipashat4131
    @bipashat4131 2 years ago

    Why exactly is the nullspace of (A transpose)(A) equal to the nullspace of A?
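
One way to see it (with a numerical check; the matrix below is invented): if Ax = 0 then certainly A^T Ax = 0. Conversely, if A^T Ax = 0 then x^T A^T Ax = ||Ax||^2 = 0, which forces Ax = 0. So each nullspace contains the other.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])     # rank 1; nullspace spanned by (2, -1)
x = np.array([2.0, -1.0])

print(A @ x)        # [0. 0. 0.]  -- x is in N(A)
print(A.T @ A @ x)  # [0. 0.]     -- and in N(A^T A)
```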

  • @jockyitch8815
    @jockyitch8815 2 years ago

    41:32 recap point for A^T A x̂ = A^T b

  • @faustind
    @faustind 4 years ago +3

    At 25:17, is it necessary to transpose the rows of A before multiplying by x (since the dimensions already match)?

    • @nenadilic9486
      @nenadilic9486 3 years ago +2

      He didn't transpose the rows of A but the vectors named 'row-sub-i', which, like any vector, are always written in column form.
      In other words, it is a convention that if we want to write a vector corresponding to a row of a matrix A (the rows are not vectors by themselves), we write it as the proper vector that is the corresponding column of A transpose.
      This keeps our notation consistent: any time we write a vector name (e.g. 'a', 'row', 'q', 'spectrum', 'x', 'v'...), we can always replace it with some matrix column. So, if we want to multiply another vector or matrix by it from the left, we must first transpose it.
      And it is not a mere convention! It is an essential property of matrices: the columns are vectors, not the rows. If we could claim, at our own leisure, that rows are also vectors whenever we wanted, then the whole concept of a transposed matrix would be corrupted, and even the concept of a matrix itself.

    • @jimziemer474
      @jimziemer474 2 роки тому

      @@nenadilic9486 I’m not sure that’s completely correct. I’ve seen him show rows as vectors at times to compare what a row vector looks like compared to the column vectors.

  • @betobayona7812
    @betobayona7812 9 years ago +22

    25:39 Isn't there an error with the symbol T (for transpose)? Why transpose the rows? Please explain!

    • @RetroAdvance
      @RetroAdvance 9 years ago +19

      Here we treat the rows as vectors, and it is just a convention to write a vector vertically. So you have to transpose it if you mean to write it down horizontally.

    • @Nakameguro97
      @Nakameguro97 9 years ago +1

      RetroAdvance Thanks for this confirmation - I suspected this reason was much more likely than Prof. Strang making a mistake here (I have seen this convention in other textbooks). However, it's still confusing, as he sometimes draws an array of rows [row_1 row_2 ... row_m] vertically, implying that those are horizontal rows. Does the convention of writing all vectors as columns typically apply only to variables written in text?

    • @RetroAdvance
      @RetroAdvance 9 years ago +4

      Ken Feng Yes, he often writes things down without strict mathematical rigor, for didactic reasons. [row1 row2 row3] is probably just Strang's intuitive formulation to get the point across, so don't take it too seriously. As long as it is clear what he means, it is ok.
      But a vector is different from its transposed version in terms of its matrix representation:
      v (an element of R^n) is an n x 1 matrix;
      v transposed is a 1 x n matrix.

    • @longgy1123
      @longgy1123 7 years ago

      Beto ba Yona It is just a small mistake.

    • @thangibleword6854
      @thangibleword6854 5 years ago

      @@RetroAdvance no, it is a mistake

  • @WonJable
    @WonJable 11 years ago

    cuteness level off the charts @49:35

  • @theali8oras274
    @theali8oras274 5 years ago

    49:36 did he flip someone off?

  • @w4yn6
    @w4yn6 12 years ago

    thanks professor

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +2

    Here I was, thinking I was gonna breeze through this lecture when BAM! I got hit with subtle logic 👨🏽‍🏫

  • @ashutoshtiwari4398
    @ashutoshtiwari4398 5 years ago

    13:00 Subspace S being orthogonal to subspace T means: every vector in S is orthogonal to every vector in T.
    Can anyone explain what is going on there?

    • @nenadilic9486
      @nenadilic9486 3 years ago

      @@jigglygamer6887 I would say that a subspace is any chosen collection of vectors that is closed under linear combinations. What is particular here is the practical way we choose those subspaces ;)

  • @notslahify
    @notslahify 11 років тому

    well I don't think A' has an inverse. so you can't backtrack from eq 2 to eq 1

  • @trojanhorse8278
    @trojanhorse8278 1 year ago

    Hello, can someone please explain how the length formula for a vector is derived? Why is the squared length of a vector x equal to x^T x, where x^T is x transpose?

    • @thepruh1151
      @thepruh1151 1 year ago

      a^2 + b^2 = c^2, where a, b, c are the lengths of the vectors making up the right-angled triangle. Thus the a^2 in the Pythagorean formula corresponds to ||→a||^2, where →a indicates the vector a. The same logic applies to the rest of the theorem, giving
      ||→a||^2 + ||→b||^2 = ||→c||^2
      But →c is just →a + →b, so we can replace →c with the vector sum:
      ||→a||^2 + ||→b||^2 = ||→a + →b||^2
      That covers the vector interpretation of the Pythagorean theorem.
      As to why ||→x||^2 can be written as x^T * x: we can write
      ||→x||^2 = ||→x|| ||→x|| = →x ⋅ →x
      and, by a duality property, the dot product of any two vectors →v and →w is the same as the matrix multiplication of one of the vectors transposed with the other:
      →v ⋅ →w = v^T * w = w^T * v
      For example, say →v = <1, 2, 3> and →w = <3, 2, 1>, where <...> indicates a column vector. Then
      →v ⋅ →w = <1, 2, 3> ⋅ <3, 2, 1> = 1*3 + 2*2 + 3*1 = 10
      and likewise
      v^T * w = [1, 2, 3] * [3; 2; 1] = 1*3 + 2*2 + 3*1 = 10
      w^T * v = [3, 2, 1] * [1; 2; 3] = 3*1 + 2*2 + 1*3 = 10
      In conclusion, the dot product of two vectors is equal to the matrix multiplication of one of the vectors transposed with the other vector. Apply that logic to the dot product of a vector with itself, and you get ||→x||^2 = x^T * x.
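
A one-minute NumPy confirmation of the reply above (the vector is invented for the example):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])

# Three ways to compute the squared length of x; all print 9.0:
print(np.dot(x, x))            # the dot product x . x
print(x @ x)                   # x^T x as a matrix product
print(np.linalg.norm(x) ** 2)  # ||x||^2 via the norm
```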

  • @pelemanov
    @pelemanov 13 years ago

    @j4ckjs Thanks for the reply. It's just a confusing definition, and if you google orthogonality a bit, you find many other definitions contradicting this one. I think he should have pointed it out more clearly, but at least now I will be cautious when it comes to this topic. I guess that's good enough...

  • @HeartSeamstress
    @HeartSeamstress 13 years ago

    When finding an orthogonal vector, are there specific steps to find it, or do we have to find a vector (by inspection) whose dot product with the other vector is zero?

    • @bassmaiasa1312
      @bassmaiasa1312 2 years ago

      The normal vector to any plane (or hyperplane) is orthogonal to every vector in the plane. In the equation of a plane through the origin (ax + by + cz = 0), the normal vector is the coefficients (a, b, c). So given any vector (a, b, c), plug in any (x, y) and solve for z; then (x, y, z) is orthogonal to (a, b, c). The same applies to hyperplanes (A1x1 + A2x2 + ... + Anxn = 0): (A1, A2, ..., An) is the normal vector to the hyperplane in R^n.
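
A short sketch of that recipe, with an invented normal vector:

```python
import numpy as np

# Plane through the origin 1x + 2y + 3z = 0, normal n = (1, 2, 3).
n = np.array([1.0, 2.0, 3.0])

# Pick any (x, y) and solve for z so the point lies in the plane:
x, y = 1.0, 1.0
z = -(n[0] * x + n[1] * y) / n[2]
v = np.array([x, y, z])

print(np.dot(n, v))  # 0.0 -- v lies in the plane, orthogonal to n
```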

  • @Jack-gi6jp
    @Jack-gi6jp 8 years ago +6

    This shit is so good

  • @anshumittal889
    @anshumittal889 4 years ago

    Awesome!!

  • @MB-oc7ky
    @MB-oc7ky 4 years ago

    At 42:17, why does multiplying each side by A^T change x? In other words, why isn't x equal to x_hat?

    • @MrScattterbrain
      @MrScattterbrain 3 years ago +4

      Here, Prof. Strang is pointing towards statistics. Assume there is some "true" value of x, set by nature's law. We collect measurements A and b and try to find that true x by solving the equation Ax = b. But our measurements are noisy, so we will not get the true x. What we can find is some approximation, an "estimate" of x, usually denoted x-hat.
      That's one example of what statistics is about: finding estimates for unknown parameters from given observations. The true parameter usually remains unknown.
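
A small simulation in the spirit of that reply; the "true" x, the matrix, and the noise level are all invented. The estimate x_hat from the normal equations gets close to the true x but does not equal it:

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([3.0, 5.0])        # nature's "true" x, never observed

A = rng.normal(size=(50, 2))         # 50 noisy measurement rows
b = A @ x_true + 0.1 * rng.normal(size=50)

# x_hat solves A^T A x_hat = A^T b; it estimates x_true, not recovers it.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)  # close to (3, 5), but not exact
```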

  • @vijayamanikandanv8471
    @vijayamanikandanv8471 3 years ago

    In the section on the row space being orthogonal to the nullspace (time 25:40), do we need the transpose on the row? (row)*x is already the scalar product, not (row)^T*x.

    • @nenadilic9486
      @nenadilic9486 3 years ago

      (Same answer as my reply to @faustind above: the 'row-sub-i' are vectors, and vectors are written in column form by convention, so they must be transposed before multiplying the column vector x from the left.)

    • @nenadilic9486
      @nenadilic9486 3 years ago

      @CoeusQuantitative Read my comment and any textbook. He didn't make any mistake. And no, Vijaya is not correct. And Professor Strang certainly does not have "senior moments". His brain is more lucid than mine or yours has ever been or will ever be.

  • @wandileinvestment1413
    @wandileinvestment1413 1 year ago +1

    Thank you prof, I'm writing an exam tomorrow morning

  • @baconpenguin94
    @baconpenguin94 9 months ago

    HE'S THE GOAT. THE GOAAAAAAT

  • @Gisariasecas
    @Gisariasecas 8 years ago +1

    Could I see (A^T)(A) as a dot product of two vectors in the matrix space?

    • @antoniolewis1016
      @antoniolewis1016 8 years ago +2

      No, because A is a matrix.

    • @dacianbonta2840
      @dacianbonta2840 7 years ago +3

      No, because you're doing matrix multiplication. The result is a matrix, not a scalar.

    • @Gisariasecas
      @Gisariasecas 7 years ago

      Thanks for answering, but that's not quite correct. The answer comes at the end of this course, when the professor talks about linear transformations and their matrix representations. The correct answer is that I cannot see it as a dot product: this definition of the dot product only works with the coordinate representation of a vector, so to take the dot product of these elements I would first need their coordinate representations (as columns), and only then could I apply the definition.

    • @daxterminator5
      @daxterminator5 7 years ago

      In fact, every entry of the new matrix (A^T)(A) is a dot product between two columns of A. Since the columns of A are part of C(A) [thanks, Capt. Obvious!], you are right!
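
A quick check of that last point (the matrix is invented for illustration): A^T A is a matrix, not a scalar, but each of its entries is a dot product of two columns of A.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])
G = A.T @ A          # a 2x2 matrix, not a scalar

# Entry (i, j) of A^T A is the dot product of columns i and j of A:
for i in range(2):
    for j in range(2):
        assert np.isclose(G[i, j], np.dot(A[:, i], A[:, j]))
print(G)             # [[14. 32.] [32. 77.]]
```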

  • @woddenhorse
    @woddenhorse 2 years ago +1

    "I shouldn't do this, but I will"

  • @MrCricriboy
    @MrCricriboy 8 years ago +11

    Was that a burp at 48:48?

    • @SteamPunkLV
      @SteamPunkLV 5 years ago +1

      now we're asking the real questions

    • @winniejeng7402
      @winniejeng7402 5 years ago +1

      Sorry, had indigestion right before class

  • @Mike-mu3og
    @Mike-mu3og 5 years ago +1

    19:50 Why can't a line through the origin be orthogonal to a plane? It seems natural to me that the z-axis is orthogonal to the xy-plane.

    • @KaiyuKohai
      @KaiyuKohai 5 years ago

      it won't be orthogonal to every vector contained in the plane so it isn't orthogonal to the plane

    • @vishwajeetdamor2302
      @vishwajeetdamor2302 3 years ago +3

      I think he was only talking about a line in R2 space

    • @nenadilic9486
      @nenadilic9486 3 years ago

      @@KaiyuKohai It is orthogonal to every vector in that plane, but the point is that we are talking about 2D space: no vector in that space is orthogonal to a plane representing that space.

  • @ankanghosal
    @ankanghosal 3 years ago

    At 44:31, why did sir say that solving 3 equations with 2 variables is not possible? We learned in school that the number of variables should equal the number of equations in order to solve for the variables. Please explain.

    • @nenadilic9486
      @nenadilic9486 3 years ago +1

      Let's suppose some natural law determines that some attribute b is given by exactly 3 parts of some parameter a1 and exactly 5 parts of some parameter a2, and that the two parameters don't depend on each other. You would have the equation:
      a1*3 + a2*5 = b
      If you could measure the two parameters absolutely precisely, you would always end up with the correct b, and the other way around. But let's see what happens.
      Suppose you want to discover that natural law, i.e. you don't know the factors 3 and 5 and want to find them. You would measure the observable b while changing a1 and a2 by experiment (or measure cases where a1 and a2 naturally differ from case to case). Suppose an ideal world (which doesn't exist) with no measurement error. You get, for example:
      1*x1 + 1*x2 = 8
      2*x1 + 1*x2 = 11
      -1*x1 + 1*x2 = 2
      1*x1 - 0.2*x2 = 2
      You have more than 2 equations but only two unknowns, x1 and x2. Because all the equations are linear combinations of just two of them, you can keep any two equations, throw away the rest, and the solution will be x1 = 3, x2 = 5.
      If you put the coefficients of x1 in the first column of a matrix A and those of x2 in the second column, you get the 4 x 2 matrix
      A = [  1    1   ]
          [  2    1   ]
          [ -1    1   ]
          [  1   -0.2 ]
      You can write x1 and x2 as the coordinates of a column vector x, and the right-hand sides as the coordinates of a vector b = (8, 11, 2, 2). Then all the above equations take the matrix form Ax = b.
      A has rank 2 (only two independent equations; the other two are just linear combinations of them, so they provide no new information). You can keep any two of the equations, for example
      [  2  1 ] [ x1 ]   [ 11 ]
      [ -1  1 ] [ x2 ] = [  2 ]
      and solving this 2 x 2 system gives x1 = 3 and x2 = 5, the solution you wanted.
      But the world is not ideal. You do the measurements and actually get:
      1.01*x1 + 0.99*x2 = 8.04
      2.01*x1 + 1.04*x2 = 10.99
      -0.97*x1 + 1*x2 = 2.05
      1*x1 - 0.22*x2 = 2.01
      This is close to the first set of equations, but not exactly the same. You can still solve any two of them, but you cannot solve the whole system: no combination of x1 and x2 satisfies all four equations. We say that the observed vector b = (8.04, 10.99, 2.05, 2.01) is NOT in the column space of A. (In the idealized example above, b happened, by rare chance, to be in the column space.)
      Instead, you can find the BEST approximate solution, which is exactly the topic of the next lecture :)
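
Picking up the reply above where it ends: a sketch of that "best approximate solution" for its noisy 4 x 2 system, using the normal equations from the end of this lecture (the next lecture's topic):

```python
import numpy as np

# The noisy system from the reply above: no exact solution exists.
A = np.array([[ 1.01,  0.99],
              [ 2.01,  1.04],
              [-0.97,  1.00],
              [ 1.00, -0.22]])
b = np.array([8.04, 10.99, 2.05, 2.01])

# Least-squares estimate from A^T A x_hat = A^T b:
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)  # close to the "true" (3, 5)
```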

  • @matrixkernel
    @matrixkernel 12 years ago +1

    Wouldn't the z-axis be orthogonal to the entire x-y plane? It kind of goes against one of his remarks in the 20-21 minute part of the video.