Tensors for Beginners 9: The Metric Tensor

  • Published Dec 1, 2024

COMMENTS • 253

  • @cermet592
    @cermet592 6 years ago +131

    What an incredibly good and straightforward explanation of the metric tensor! Using the Pythagorean theorem approach was brilliant and made so much sense! Seeing how it fails for a non-orthonormal basis, and then how the metric tensor corrects both the basis and the vector coefficients, was very good. You have such an intuitive approach that makes total sense in the real world, rather than the rarefied math or physics approach (which, when applied blindly, forces the student to just accept the concept rather than "see" it as in this vid).

  • @srf221nyu
    @srf221nyu 5 years ago +59

    I've been trying to teach myself general relativity for a while, and tensors have without a doubt been my Achilles' heel. So glad I found these videos. You're doing a great job!

  • @ahmedabbas3998
    @ahmedabbas3998 2 years ago +13

    I've tried many videos and books on tensors but never come across such a clear explanation of the subject as this, which is in itself a sign that the teacher has mastered his stuff and understood the subject far better than those who cannot present it in this clear and lucid manner.

    • @eigenchris
      @eigenchris  2 years ago +4

      Thanks. I'm always happy when I hear an explanation I've given helps clear up the stuff in textbooks.

  • @brilinos
    @brilinos 6 years ago +60

    Your videos are a real pleasure to watch. Please continue as soon as possible. Looking forward to tensor calculus. Thanks for your work!

  • @stephaniebeiram2761
    @stephaniebeiram2761 2 years ago +4

    There needed to be a mic drop at 14:38. This series is by far the best and most intuitive explanation of tensors I have ever seen. Thank you for this gift!

  • @buramjakajil5232
    @buramjakajil5232 6 years ago +22

    I must say, your videos are superb! You beat many professors I know! Referring to other earlier comments here, I believe the ability to explain things relates very much to how well you actually understand what you're talking about. Quoting Einstein: "If you can't explain it simply, you don't understand it yourself well enough". And you clearly seem to know what you're talking about ;) You're doing important work. Thank you very much and keep it up!

  • @briancastle3298
    @briancastle3298 1 month ago

    This is the clearest, most concise short course I've ever seen. Thank you for this.

  • @TarunKumar-qr3fb
    @TarunKumar-qr3fb 2 years ago +1

    Just started this playlist yesterday, completing it today. I have never stayed with (or completed) any playlist this long. I am in love with this series. Looking forward to starting Tensor Calculus tomorrow.

  • @przadka
    @przadka 4 years ago +11

    I just wanted to express my gratitude for your work, Chris! I decided to learn tensors this year, as a gateway to better understanding differential geometry. Your series and the work of @MathTheBeautiful are extremely valuable to me on that journey :)

    • @eigenchris
      @eigenchris  4 years ago +3

      I'm glad they both helped! I remember watching MathTheBeautiful when I first started learning tensors as well.

  • @rebollo87
    @rebollo87 6 years ago +7

    You're a genius, bro! It looks very easy the way you say it. I am already an aerospace engineer, and I have never seen such a simple explanation. Very well done!
    Best regards from Amsterdam!

  • @vivalibertasergovivitelibe4111
    @vivalibertasergovivitelibe4111 5 years ago +1

    Thank you so much. My physics prof was just like "well, this is a tensor and this is what it does in relativity" without any explanation as to what they truly are. You give such a great, straightforward perspective on them. Just watching the few videos so far has given me so much more understanding of what we actually did in the lectures, and helped me freshen up and get a new perspective on linear algebra (my prof loved proofs but almost never gave any visual intuition).
    Thank you so much from a physics student who, thanks to you, is not afraid of tensors anymore ;)

    • @eigenchris
      @eigenchris  5 years ago +1

      I very much replied to the wrong comment. Sorry for the random flood of text. :) I'm glad you found these useful. I find the basics get skipped over in some professors' notes and it causes a lot of confusion later on.

  • @pellythirteen5654
    @pellythirteen5654 3 years ago

    Wow. For the first time in my life I am really getting a grasp of what tensors are and how they work.
    This is due to the excellent way it is presented: very to the point, very clear graphics, and a pleasant pace.
    The author assumes the reader has at least a high-school level of maths.

  • @humphreyearwicker4358
    @humphreyearwicker4358 6 years ago +4

    Really enjoyed these videos. Hope you produce more. The exposition is fantastically clear and concise!

  • @tzaidi2349
    @tzaidi2349 4 years ago

    Awesome series! It is a service to humanity to make these available for free. Thanks.

  • @ryanj748
    @ryanj748 10 months ago +1

    On the off-chance that a watcher happens to be working through Munkres's excellent Analysis on Manifolds while watching: this video explains exactly what's going on under the hood in Exercise 4 of Section 1.1. (I dimly remembered this problem while watching this fantastic video!)

  • @dennisbrown5313
    @dennisbrown5313 6 years ago

    Amazing! This clarifies so much, in a manner no one else ever does, and connects so much that I was previously confused about or where I simply did not see the connections. The parallel examples really explain this in such an intuitive manner. You need to have a "Tip" button. Your time is deserving of compensation - I'd certainly pay for a book like this!

  • @akshatpathrikar7080
    @akshatpathrikar7080 1 year ago

    The greatest explanation of the metric tensor there is. Thanks, Chris.

  • @UniversoNarrado
    @UniversoNarrado 4 years ago +5

    Man, you are a natural professor! Your videos are amazing and incredibly helpful. Thanks a lot for the job on tensor calculus. It's the best thing I have found on the entire internet (and man, I have searched for this).
    Thanks. Regards from Brazil! (Where are you from?)

  • @johannesaaen248
    @johannesaaen248 3 months ago

    Suddenly I actually understand what is going on with the metric tensors. Great video man

  • @AjinkyaNaikksp
    @AjinkyaNaikksp 5 years ago +1

    Thank you so much for all your videos. This was the best mathematical video series that I have ever watched; the explanation was lucid, clear, and direct, and it made me understand the metric tensor without much effort. Keep up the good work!!

  • @labalimbu829
    @labalimbu829 3 years ago

    Thanks a lot for making me understand what a tensor is. You are such an underrated mathematician.

  • @individuum4494
    @individuum4494 4 years ago +1

    The first eight minutes were everything I needed, thank you very much!

  • @IslamEldifrawi
    @IslamEldifrawi 4 years ago

    You are simply brilliant. Please continue explaining tensors, and mathematics in general. Great job, man!!!

  • @nelsonpalraj
    @nelsonpalraj 6 years ago +1

    I have been trying to understand tensors for the past few years, and only now have I gotten some insight. Great, thank you Chris!

  • @meenakshiiyer1644
    @meenakshiiyer1644 2 years ago +2

    I request you to make a series on linear and abstract algebra too.
    Please, please, please!
    You outclass every video that has been posted on YouTube so far.

    • @eigenchris
      @eigenchris  2 years ago +2

      Thanks. My series on Error Correcting Codes touches on some concepts from linear and abstract algebra (specifically finite fields), but I don't plan on making series on LA or AA specifically. You can try checking out 3blue1brown's "Essence of Linear Algebra" series.

  • @sweetytarika8068
    @sweetytarika8068 5 years ago +1

    You are a god to me. I was struggling to learn these concepts, and you made everything easy... thanks a lot from the core of my heart, sir.

  • @RealisiticEdgeMod
    @RealisiticEdgeMod 6 years ago

    This is the best exposition on the topic I've seen on YouTube, or indeed anywhere else.

  • @Viscoplasticbeing
    @Viscoplasticbeing 2 years ago

    Thank you very much, kind sir. It's amazing to see how that dot product in the end shows Pythagoras for the orthonormal case.

  • @xgozulx
    @xgozulx 2 years ago

    thank you so much, I've been struggling 2 weeks to understand this, and this was super clear :D

  • @MrSypratt
    @MrSypratt 1 year ago

    I’m really enjoying these videos as I’m in Nepal holed up with a head cold. Thank you so much for making them! So it appears from looking at the double summations one can get the New metric tensor from the old by G(tilde) = F(transpose)FG and the Old metric tensor from the new by G=BB(transpose)G(tilde). Doing the routine matrix multiplications is much easier than writing out the summations term by term. But I’m not sure I’m correct. It works for this example.

  • @aronvanveldhuizen
    @aronvanveldhuizen 4 years ago

    Thanks for these videos man! I was lost trying to understand my lecture notes on tensor calculus and your videos have explained it all so clearly.

  • @hadifar
    @hadifar 5 years ago

    Best video I've seen about metric tensor. You saved my day... Thank you

  • @johncamm3853
    @johncamm3853 6 years ago

    A brilliant and lucid explanation of some difficult ideas in this course - I hope you manage to complete it. Thank you for helping me to understand the ideas of contravariance and covariance. I have been reading books on general relativity but quickly realised I needed to do a maths course first! A long way to go, but having this improved understanding will help a lot - it has cleared up some confusions I had...

    • @eigenchris
      @eigenchris  6 years ago +1

      Thanks. In my ideal world, I'll be doing another video series on GR, but I don't think that will happen for another few months.

    • @akarshchaturvedi2803
      @akarshchaturvedi2803 6 years ago

      Please do it!!!

    • @eigenchris
      @eigenchris  6 years ago +1

      I'll do my best. I still don't understand GR at this point. Still have some reading and thinking to do.

  • @yashwantsarav2433
    @yashwantsarav2433 4 years ago

    Superb video. With this, I am able to study general relativity that much easier! Thank you so much!

  • @HighWycombe
    @HighWycombe 3 years ago +2

    Worried that you are "losing steam", eigenchris. These videos are wonderful... I really hope you make it to the end of GR.

    • @eigenchris
      @eigenchris  3 years ago +1

      I have this relativity playlist that I'm working on: ua-cam.com/play/PLJHszsWbB6hqlw73QjgZcFh4DrkQLSCQa.html
      It's about 8 hours long so far, and I'm just about to complete special relativity. General relativity will be next.

    • @HighWycombe
      @HighWycombe 3 years ago +2

      @@eigenchris That's brilliant. These videos are Sooo good at teaching. I've been working through your Relativity Playlist, slowly and carefully. I got as far as 103d, then realised that I needed to learn a bit about tensors (particularly the metric tensor) before continuing. With your help, my ultimate ambition is to understand the mathematics of General Relativity as best I can. You've reminded me why I had a passion for physics years ago.

  • @malihabintehasan7182
    @malihabintehasan7182 1 year ago

    your videos are literally saving my life

    • @eigenchris
      @eigenchris  1 year ago

      I'm sorry your life depends on the metric tensor, but I'm glad the videos are helping you.

    • @malihabintehasan7182
      @malihabintehasan7182 1 year ago

      😆😆😆

  • @rajanalexander4949
    @rajanalexander4949 3 years ago

    This is just insanely cogent, lucid, and helpful!

  • @thevegg3275
    @thevegg3275 1 year ago

    Thank you for replying so quickly to our questions. Please excuse any stupid words that autocorrect creates while I'm thinking rather than really reading.
    So at minute 14:28,
    you apply the rule that contravariant components are upstairs and covariant components are downstairs.
    This implies that what makes a component contravariant or covariant is simply its position next to the large T.
    I've also seen them moved from downstairs to upstairs, which would make, say, a contravariant component now a covariant component.
    And I'm wondering how the definitions of these components are not hardwired back to the definitions of vectors, whereby contravariant components use parallel projection and covariant components use perpendicular projection to find those components.
    So the conflict here is: if one moves a contravariant component downstairs, where the covariant components live, have we essentially, by this spooky action, gone back into the definition and changed what was perpendicular projection to parallel projection? How is all this possible?
    To summarize: the upstairs and downstairs components of a tensor have to be created from the geometric definition I described above, and as such should not be able to be moved up or down at all. Does this make any sense whatsoever? Thank you.

  • @HualinZhan
    @HualinZhan 1 year ago

    For the derivation at 11:30, I think using the matrix representation is convenient. Step 1: as the length is invariant, we know that v^T g v = \tilde{v}^T \tilde{g} \tilde{v}, where the superscript T indicates the matrix transpose. Step 2: we also know that v = F \tilde{v}. Step 3: combining these, we have \tilde{g} = F^T g F.
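    The identity g̃ = FᵀgF from the comment above is easy to check numerically. This is a minimal sketch with made-up values: the matrix F and the components of ṽ below are arbitrary examples, not taken from the video.

    ```python
    # Numeric check of g_tilde = F^T g F, plus the length invariance it encodes.
    # F, g, and v_tilde are made-up example values.

    def matmul(A, B):
        """Multiply two matrices given as lists of rows."""
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    g = [[1.0, 0.0], [0.0, 1.0]]       # old basis taken to be orthonormal
    F = [[2.0, -1.0], [1.0, 1.0]]      # forward transform: columns are the new basis vectors
    g_tilde = matmul(transpose(F), matmul(g, F))

    # Length is invariant: with v = F v_tilde, v^T g v equals v_tilde^T g_tilde v_tilde.
    v_tilde = [[1.0], [2.0]]
    v = matmul(F, v_tilde)
    len2_old = matmul(transpose(v), matmul(g, v))[0][0]
    len2_new = matmul(transpose(v_tilde), matmul(g_tilde, v_tilde))[0][0]
    print(len2_old, len2_new)  # both 9.0
    ```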

  • @thevegg3275
    @thevegg3275 1 year ago +1

    Hi there,
    My usual interpretation of the dot product is the lengths of two distinct vectors multiplied together, times the cosine of the angle between them.
    But since here we are dotting a vector with itself, the two vectors are the same, so I guess there's no need to ever see a cos of 90. Does that sound about right?

    • @eigenchris
      @eigenchris  1 year ago

      The two definitions end up being equivalent. If vectors are parallel the cos(0) becomes 1, and if vectors are perpendicular the cos(90) becomes 0. These are equivalent to the results I get for ex.ex = 1, ey.ey=1, and ex.ey=0.
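      The equivalence of the two dot-product definitions in the reply above can be sketched with a quick check; the two vectors below are made-up example values.

      ```python
      # Component form a1*b1 + a2*b2 versus |a||b|cos(theta), for example vectors.
      import math

      a = (3.0, 0.0)
      b = (3.0, 4.0)

      dot_components = a[0] * b[0] + a[1] * b[1]   # 9.0

      len_a = math.hypot(*a)                       # 3.0
      len_b = math.hypot(*b)                       # 5.0
      theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])
      dot_cosine = len_a * len_b * math.cos(theta) # also 9.0, since cos(theta) = 3/5

      print(abs(dot_components - dot_cosine) < 1e-9)  # True
      ```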

  • @mikamikamusic7792
    @mikamikamusic7792 1 year ago +1

    I'm in my first year of high school and I love your videos on tensors, because they make me look smart in front of my classmates AHAHAHAH

  • @allanrocha4647
    @allanrocha4647 6 years ago +4

    Awesome series! Thank you so much for your work and please keep it up! I have a quick question, would you have a book recommendation?

  • @h2ogun26
    @h2ogun26 1 year ago

    I have a question: is it safe to say that one vector has multiple lengths?
    Though we calculated the length w.r.t. the prior basis (the basis without a tilde) in this video,
    I thought we could also calculate the length w.r.t. a coordinate system where the tilde basis is seen as orthonormal and the prior basis as non-orthonormal.
    After all, which basis is orthonormal is relative to the reference frame.
    If that thought is sound, is there any symbol to distinguish which reference frame has been used when we calculate the length of the vector?
    Or am I using the wrong concept here, and there is a unique orthonormal basis?

  • @07alpadnan
    @07alpadnan 4 years ago

    What great content, keep up the good work! A suggested book, or content with solved questions and exercises, would also be very useful.

  • @gogl0l386
    @gogl0l386 6 years ago +18

    Your videos are totally awesome! If you spoke into something other than a potato then they would be perfect!

  • @doctormeister4566
    @doctormeister4566 8 months ago

    Can I just choose whether I want to represent my vector as contravariant components with covariant basis vectors, or as covariant components with contravariant basis vectors? Because the general object v should be the same, right?

  • @BB-bc1xv
    @BB-bc1xv 1 year ago

    How did you get the basis vector components at 4:49 using the forward transform? I am not getting that answer and when I plug your new basis vectors into the original vector equation, I do not get the right vector. I am very new to tensor calculus, sorry.

    • @eigenchris
      @eigenchris  1 year ago

      The equations at the top right of the screen are based on the image shown at 4:44. You can see e1~ is made of 2*e1 and 1*e2. And the 2nd equation applies for e2~. Is that what you mean?

  • @pablofernandezpena1045
    @pablofernandezpena1045 3 years ago +1

    Vectors and covectors have graphical representations, as stated in previous parts: arrows and stacks respectively. What about the metric tensor? Does it have a graphical representation? If you can suggest some guide to this, it will be greatly appreciated!

    • @eigenchris
      @eigenchris  3 years ago

      The best I can think of is putting a "circle" of radius 1 everywhere in space. This circle may appear to squash or stretch depending on the coordinate system. For example, on a map of the earth, the circle will stretch near the poles because shapes are distorted near the poles to appear bigger than they actually are... so the circles will be stretched bigger to show you what 1 unit of area looks like.

    • @pablofernandezpena1045
      @pablofernandezpena1045 3 years ago

      @@eigenchris First of all, thanks for your response! Regarding your visualization, I think my question was not clear enough, because the "graphical representation" I seek must be INVARIANT, i.e. must be independent of the coordinate system you choose, in the same way as vectors as arrows and covectors as stacks. This is why this graphical representation is elusive to me.

    • @eigenchris
      @eigenchris  3 years ago

      @@pablofernandezpena1045 I'm not sure how to draw the metric in a way other than I've described. I don't really know how to draw any tensors of rank 2 or higher, unfortunately.

    • @pablofernandezpena1045
      @pablofernandezpena1045 3 years ago

      @@eigenchris Yes! I was rethinking the subject, and what you say is the key! This graphical representation is elusive because I try to "draw" the metric tensor in the same plane where vectors and covectors are drawn (supposing our vectors/covectors are two-dimensional), but the metric tensor doesn't live in that plane; instead it lives in a space which is the result of the tensor product of that plane with itself, which, based on your comment, appears not to have a graphical representation. Again, thanks very much for your videos and your dedication to answering our questions!

  • @kirancp4758
    @kirancp4758 2 years ago

    At 0:39, how did you get the components 5/4 and 3 for the new basis? Is it just arbitrary? From the graph it looks like 1 instead of 5/4.

  • @MaxxTosh
    @MaxxTosh 2 years ago +1

    Thank you so much for the series! It's incredibly helpful, you explain it so much better than any other resource I've seen. One question I have, is there any interesting real world use of the 2,0 tensor?

    • @eigenchris
      @eigenchris  2 years ago

      All of special and general relativity is based on the concept of a metric. Time dilation, length contraction, black holes, gravitational waves, and the expansion of the universe are all explained using metrics. I have a whole series on relativity you can watch in my playlists section.

  • @b43xoit
    @b43xoit 1 year ago

    When you talk about dot product, I'm used to the idea of dotting a row with a column. But in the definition of length (squared), you are dotting a vector with itself. Should an alarm bell go off to the effect that some reasoning from this could go awry? Would it be more strictly correct to talk about dotting it with its transpose? And where transpose comes up, is Hermitian transpose usually what should go in, in case the components can be complex numbers? Or is that just something that never comes up in the uses of tensors?

    • @eigenchris
      @eigenchris  1 year ago

      So, when you first learn vectors, it's common to be taught that vectors are lists of numbers arranged in a column (and the transpose of that is the list of numbers arranged in a row).
      In this series I treat vectors as abstract symbols that can be added together and scaled. The vector COMPONENTS can be written in a column, FOR A CERTAIN BASIS. But if we change basis, the components will also change, even though the vector has stayed the same. This is why I prefer to treat vectors abstractly: because the list of numbers that describes them depends on the basis and can change at any time.
      Taking "v" to be a column, the "v^T v" formula only works assuming an orthonormal basis (basis vectors have length 1, and are at right angles to each other). If you want to generalize this to any basis, we need the metric tensor matrix in the middle: "v^T g v". But the components of "v" and "g" will change as soon as we change basis. And indeed, for complex numbers, this should be the Hermitian conjugate, as you say.
      I'm not sure if that answers your question. The main goal of tensors is to be able to write out formulas that are true for everyone, regardless of what their personal favourite basis is.
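      The contrast in the reply above between vᵀv and vᵀgv can be sketched in a few lines. The non-orthonormal basis below is a made-up example: the plain component sum gives the wrong squared length, while inserting the metric matrix gives the right one.

      ```python
      # Basis e1=(1,0), e2=(1,1) expressed in a background orthonormal frame (example values).
      e1 = (1.0, 0.0)
      e2 = (1.0, 1.0)

      def dot(u, v):
          return u[0] * v[0] + u[1] * v[1]

      # Metric components g_ij = e_i . e_j
      g = [[dot(e1, e1), dot(e1, e2)],
           [dot(e2, e1), dot(e2, e2)]]       # [[1, 1], [1, 2]]

      # Vector v = 1*e1 + 1*e2 = (2, 1); its true squared length is 2^2 + 1^2 = 5.
      comp = [1.0, 1.0]

      naive = comp[0]**2 + comp[1]**2        # v^T v = 2 -- wrong in this basis
      with_metric = sum(comp[i] * g[i][j] * comp[j]
                        for i in range(2) for j in range(2))  # v^T g v = 5 -- correct
      print(naive, with_metric)
      ```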

    • @b43xoit
      @b43xoit 1 year ago

      @@eigenchris So dotting a vector with itself makes perfect sense, I take it, within the conception of vectors as abstract objects that obey the postulates.

  • @WilEngl
    @WilEngl 5 years ago

    THANK YOU SO MUCH for making tensor calculus crystal clear !!!

  • @faeancestor
    @faeancestor 1 year ago +1

    you are a genius ...

  • @xinyuli6603
    @xinyuli6603 2 years ago

    And sorry for another consecutive question here, but does it matter if we put both F and B on the left side of T when calculating the transformation? Since I don't think it matters whether they are on the left or the right in a summation?

  • @marat61
    @marat61 4 years ago +1

    Would I be right to claim that the dot product of two vectors is not just a naive sum of per-component multiplications?

    • @eigenchris
      @eigenchris  4 years ago +1

      It's a bit hard to answer this because "dot product" can mean different things to different people. For some people (especially beginners) the definition of the dot product is a.b = a1b1 + a2b2 + a3b3, and that works fine for them. But in tensor calculus, this formula alone is pretty meaningless because the coordinate system might change. In tensor calc we need formulas to work in all coordinate systems, so we need the metric tensor components to define it properly.

    • @______________
      @______________ 4 years ago

      @@eigenchris this answer helped me

  • @yizhu5275
    @yizhu5275 21 days ago

    Very nice video, very inspiring. At 12:41 in this video, can anyone explain why that bunch of matrix entries can switch locations (or commute)?

  • @nityanandadas5575
    @nityanandadas5575 3 years ago

    Dear Chris,
    Suppose a tensor is symmetric in an orthonormal x-y co-ordinate system. Now I want to transform the tensor (a 2×2 matrix) to an oblique co-ordinate system by tilting the y axis by some angle, so that the new co-ordinate system has one axis along x as before, and the other making an angle theta with the first axis.
    Will the tensor remain symmetric in the oblique co-ordinates or not?
    Thanks and Regards

    • @eigenchris
      @eigenchris  3 years ago +1

      The metric tensor is always symmetric because (e_i · e_j) = (e_j · e_i) always. Oblique angles for the basis vectors will make some of the off-diagonal matrix elements non-zero, but the matrix will still be symmetric.

    • @nityanandadas5575
      @nityanandadas5575 3 years ago

      Dear Chris,
      Thanks for your reply. But actually I want to know:
      If Ax and Ay are the components of vector A in the orthonormal x-y system [(1,0) and (0,1)], what are the (contravariant) vector components of A with respect to the (1,0) and (cosθ, sinθ) basis system?
      Please discuss this clearly. I request you.
      Thanks

    • @eigenchris
      @eigenchris  3 years ago

      You have just described a change-of-basis from basis {e1,e2} to basis {f1,f2}, which has these equations: f1 = e1, f2 = cosθ e1 + sinθ e2. So the "Forward" transform matrix would be:
      [1 cosθ]
      [0 sinθ]
      Since vectors are contravariant, it will transform with the inverse matrix:
      [1 -cosθ/sinθ]
      [0 1/sinθ ]
      So the new components of the vector A would be Ax' = Ax - cosθ/sinθ Ay and Ay' = 1/sinθ Ay.
      I think I have that correct, but please double-check.
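      The double-check invited in the reply above can be done numerically. This sketch uses an arbitrary example angle (60°) and example components (Ax, Ay) = (2, 3): it verifies that the claimed inverse really undoes F, and that the transformed components rebuild the same vector A.

      ```python
      # f1 = e1, f2 = cos(t) e1 + sin(t) e2, with e1, e2 orthonormal (example t = 60 deg).
      import math

      t = math.radians(60)
      c, s = math.cos(t), math.sin(t)

      F = [[1.0, c], [0.0, s]]               # forward transform
      B = [[1.0, -c / s], [0.0, 1.0 / s]]    # claimed inverse (backward transform)

      # F B should be the identity matrix.
      FB = [[sum(F[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

      # New components of A = (Ax, Ay): Ax' = Ax - (c/s) Ay, Ay' = (1/s) Ay.
      Ax, Ay = 2.0, 3.0
      Axp = Ax - (c / s) * Ay
      Ayp = (1.0 / s) * Ay

      # Reconstruct A from the new basis: A = Axp*f1 + Ayp*f2, with f1=(1,0), f2=(c,s).
      rx = Axp * 1.0 + Ayp * c
      ry = Axp * 0.0 + Ayp * s
      print(rx, ry)  # recovers (2.0, 3.0) up to rounding
      ```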

  • @roberttrask6826
    @roberttrask6826 6 years ago

    Excellent presentation. Thanks very much for this series.

  • @AndreaPancia1
    @AndreaPancia1 1 year ago

    Hi, thanks for the explanation... I'm not sure I get why you write (at 7:04) v1 and v2 both horizontally as a covector and vertically as a vector.

    • @eigenchris
      @eigenchris  1 year ago

      That's just what we need to write out to get the correct formulas at the top of the screen.

  • @steffenleo5997
    @steffenleo5997 2 years ago

    Good day Chris, I see here the dot product of basis vectors e(i) . e(j) = Kronecker delta ij at 4:43 in the video, but another e(i) . e(j) = g(ij), the metric tensor, at 8:11 in your video... Does it depend on the angle between these 2 basis vectors: if perpendicular (90 degrees)/orthonormal basis we get the Kronecker delta, and if not 90 degrees we get the metric tensor? Thanks for this video and have a nice weekend... 👍👍

    • @eigenchris
      @eigenchris  2 years ago

      e_i · e_j = δ_ij is only true for an orthonormal basis, where all basis vectors have length 1, and are at right angles with respect to each other. More generally, we write e_i · e_j = g_ij for any basis.
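      The reply above can be illustrated by computing g_ij = e_i · e_j for two bases; the 60-degree skewed basis below is a made-up example. The orthonormal basis gives the Kronecker delta, the skewed one gives off-diagonal metric components.

      ```python
      # Build the metric matrix from pairwise dot products of the basis vectors.
      import math

      def dot(u, v):
          return u[0] * v[0] + u[1] * v[1]

      def metric(basis):
          return [[dot(ei, ej) for ej in basis] for ei in basis]

      orthonormal = [(1.0, 0.0), (0.0, 1.0)]
      skewed = [(1.0, 0.0),
                (math.cos(math.radians(60)), math.sin(math.radians(60)))]

      print(metric(orthonormal))   # identity matrix: delta_ij
      print(metric(skewed))        # approximately [[1, 0.5], [0.5, 1]]
      ```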

  • @xinyuli6603
    @xinyuli6603 2 years ago

    Thanks for the intuitive explanation. But I think vectors are of (1,0) type while covectors are of (0,1) type? (If we take a look at the formal definition of a tensor as a multilinear function from Wikipedia.) (PS: this is consistent with the metric tensor being of (0,2) type, since it requires 0 covectors and 2 vectors to produce a real number.) If we would like to explain things by how they are changed (whether contravariantly or covariantly), this should be counted by whether the components of the tensor change contravariantly or otherwise. The metric tensor's components change covariantly in both indices, so it is (0,2); a vector's components change contravariantly, so it is (1,0); and vice versa with covectors.

  • @petershotts8571
    @petershotts8571 6 years ago +1

    Brilliant videos. Thank you so much. One question. In establishing the metric tensor are you not identifying a special pair of basis vectors - namely the orthonormal vectors - and that that is a consequence of working within Euclidean geometry? Thus, having established the definition of vector length using an orthonormal basis, length (and angle) can then be calculated for any other basis.

    • @eigenchris
      @eigenchris  6 years ago

      You are right that I started out with a special pair of basis vectors... but I had to start somewhere. I need to make up the answers to the dot product for the basis I start with, or else I can never get the components of the metric tensor.
      I didn't have to start with an orthonormal basis. I could have started in a basis where e1.e1 = 2, e1.e2 = 1, e2.e2 = 8 and that would also be acceptable. But most people are familiar with orthonormal basis vectors, so I started with those.

    • @petershotts8571
      @petershotts8571 6 years ago

      Thanks. I see now! Peter

    • @narfwhals7843
      @narfwhals7843 4 years ago

      @@eigenchris I feel like I'm missing something fundamental about the dot product. How can e1.e1=2 unless you already specified a basis in which e1 has length 2. 2 is already relative to something. This seems like starting from an orthonormal basis is necessary, which can't be right. In your example if I just multiply the orthonormal basis vectors by 2 and chose that as my starting basis they are still orthonormal but the vector will have a different numerical length. How is the dot product defined without a preferred basis?

  • @gullumpanie
    @gullumpanie 4 years ago

    At 12:33 the squared length of a vector is given as a product of several matrices,
    and then the order of the matrices is rearranged. But generally the result of matrix multiplication is different if you change the order of the matrices being multiplied. What is the "trick" behind it?

    • @eigenchris
      @eigenchris  4 years ago

      Those aren't matrices, they are matrix components (numbers), and so we can change the order if we like.

    • @gullumpanie
      @gullumpanie 4 years ago

      @@eigenchris But the matrix multiplication rule about inverses is used to get Kronecker Delta from BxF=I.

  • @owen7185
    @owen7185 2 years ago

    Chris, is that why the Minkowski metric has ds^2 = -dt^2 + dx^2 + dy^2 + dz^2? Because the dot product gives the lengths?

    • @eigenchris
      @eigenchris  2 years ago

      Yes. A "metric" and a "dot product" are basically the same thing in my mind.

  • @JL-jc5fj
    @JL-jc5fj 10 months ago

    For the transformation of L, does it matter in what order we write L, F, and B on the RHS?

  • @mr.es1857
    @mr.es1857 4 years ago +1

    Hey Chris! Is the inertia tensor some kind of metric tensor when we compute kinetic energy as 1/2 w^(transpose) I w?

    • @eigenchris
      @eigenchris  4 years ago

      I haven't thought about the inertia tensor in almost 10 years, and I'm not sure I ever properly understood it. At first glance, I wouldn't call it a metric, since it's not used for measuring distance, but it does have a few similarities with a metric tensor, like being symmetric.

    • @mr.es1857
      @mr.es1857 4 years ago

      @@eigenchris First of all, thanks for your lectures. I had this thought while looking at some formulas for computing kinetic energy. I thought that since kinetic energy stays the same no matter the frame, and it takes 2 inputs, it might be some kind of metric tensor.
      Right now I'm working on a "review actively" notebook for this video series using Notion.
      Now I can challenge myself by doing some of the derivations myself. The notebook guides you through the topics, and at the same time you have time to practice.
      After these two series on tensors I want to start with differential geometry. Is this background in tensors helpful for starting diff geo?

    • @eigenchris
      @eigenchris  4 years ago +1

      @@mr.es1857 The boundary between Tensor Calculus and Differential Geometry is unclear in my opinion. There is quite a lot of overlap. The 2nd half of my "Tensor Calculus" series covers selected topics from differential geometry (mostly the stuff needed for general relativity), starting around video 15. If you want to study Differential Geometry for its own sake, you can check out these 4 PDFs from a math professor liavas.net/courses/math430. You'll find there is a lot of overlap with tensors.

  • @geniusgamer8046
    @geniusgamer8046 3 years ago

    1:50 Pythagoras isn't a lie, it just doesn't work there. There's the cosine rule for that: a² + b² - 2ab×cos(angle that replaced the right angle) = c².
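    The cosine rule mentioned above, c² = a² + b² − 2ab·cos(C) with C the angle between sides a and b, reduces to Pythagoras when C is a right angle. The side lengths below are made-up examples.

    ```python
    # Cosine rule check: at 90 degrees it collapses to the Pythagorean theorem.
    import math

    def c_squared(a, b, angle_deg):
        return a * a + b * b - 2 * a * b * math.cos(math.radians(angle_deg))

    right = c_squared(3.0, 4.0, 90)        # cos(90)=0: reduces to 3^2 + 4^2 = 25
    equilateral = c_squared(1.0, 1.0, 60)  # two unit sides at 60 degrees: c = 1
    print(right, equilateral)
    ```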

  • @steffenleo5997
    @steffenleo5997 2 years ago

    Good day Chris, is g^uv * g_uv equal to 1 or -1? Here g^uv is the contravariant metric tensor and g_uv is the covariant metric tensor.

    • @eigenchris
      @eigenchris  2 years ago

      It gives a summation over the identity matrix, or "Kronecker Delta": δ^u_u. The summation over the diagonal elements of the identity matrix is equal to the dimension of the space.
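      The reply above can be sketched with a 2D example: contracting the inverse metric with the metric sums the diagonal of the identity, giving the dimension of the space. The polar-style metric at a made-up point r = 3 is just an example.

      ```python
      # Contraction g^uv g_uv for a diagonal polar-style metric at r = 3 (example value).
      r = 3.0
      g = [[1.0, 0.0], [0.0, r * r]]              # metric components g_uv
      g_inv = [[1.0, 0.0], [0.0, 1.0 / (r * r)]]  # inverse metric components g^uv

      contraction = sum(g_inv[u][v] * g[u][v] for u in range(2) for v in range(2))
      print(contraction)  # 2.0 -- the dimension of the space, not +1 or -1
      ```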

  • @gabrielsara3947
    @gabrielsara3947 2 years ago

    These videos are magnificent!!! Thanks!!!!

  • @pacchutubu
    @pacchutubu 4 years ago

    Thanks for these videos, I am greatly indebted to you. I have a question about the dot product of two vectors in polar form, (r1, theta1) and (r2, theta2). In the metric tensor, what value should I use for r² (the g_22 component)?

    • @eigenchris
      @eigenchris  4 years ago

      Polar coordinates are curvilinear coordinates, so the answer is more complicated. In curvilinear coordinates it only makes sense to take the dot product of vectors at a specific point (r, theta), because the metric tensor matrix changes from point to point. So use the value of r² for the value of r at the point you are interested in.

    • @pacchutubu
      @pacchutubu 4 years ago

      @@eigenchris Thanks for the reply. In this video ua-cam.com/video/BbQmTmSzUCI/v-deo.html , you used '2' as the radius value. Is this because the vector dR/d lambda is at radius 2? It is not clear to me with respect to which point the components of the original position vector 'R' are defined.

    • @eigenchris
      @eigenchris  4 years ago

      @@pacchutubu Yes, everywhere on the circular path has radius 2, so I used r=2.
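      To make that concrete, a small sketch using the polar metric diag(1, r²) evaluated at r = 2 (the vector components below are made-up examples):

```python
import numpy as np

r = 2.0  # every point on the circular path has radius 2, so evaluate the metric there
g = np.array([[1.0, 0.0],
              [0.0, r**2]])  # polar-coordinate metric at that point

# Two vectors given by their (r-component, theta-component) at that point (made up)
u = np.array([1.0, 0.5])
v = np.array([0.0, 1.0])

dot = u @ g @ v  # g_ij u^i v^j
print(dot)  # 2.0
```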

  • @chymoney1
    @chymoney1 5 years ago

    I understand that the tensor remains invariant, but is this manifested through some mathematical property of matrices? For example, suppose I do a coordinate transform (x,y) -> (r, theta); then what about the metric tensor remains invariant? Maybe the eigenvalues? I am fully aware the components and basis transform in knowable ways; I've just been wondering what about a 2nd-rank tensor remains the same. I know for a rank-1 tensor (a vector) the length remains the same under a coordinate transform. What property does that generalize to for matrices? The eigenvalues?

    • @eigenchris
      @eigenchris  5 years ago

      I don't think the eigenvalues or determinant or trace or anything like that will remain constant. What stays constant is the g function itself. g(u,v) will give the same result regardless of which basis you use. The numerical arrays depend on the basis, but the function output (the length) does not.

    • @chymoney1
      @chymoney1 5 years ago

      eigenchris I've found sources online saying the eigenvalues and the coefficients of the characteristic polynomial of a tensor remain invariant. Idk why they can't be more blunt; I feel like everyone just keeps repeating that the magnitude of a vector remains the same, which I know already.

    • @chymoney1
      @chymoney1 5 years ago

      eigenchris I found on Physics Stack Exchange that the double contraction of a tensor (analogous to the magnitude of a vector), |T|² = g_ik g_jl T^kl T^ij, remains invariant.
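      That invariant can be checked numerically; a sketch using the transformation rules from this series (the forward transform F and the tensor components T below are made up):

```python
import numpy as np

g = np.eye(2)                 # metric in the original (orthonormal) basis
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # components of a (2,0)-tensor (made up)

F = np.array([[2.0, 1.0],
              [-0.5, 0.25]])  # arbitrary invertible forward transform
B = np.linalg.inv(F)          # backward transform

T_new = B @ T @ B.T           # contravariant components transform with B
g_new = F.T @ g @ F           # covariant metric components transform with F

# Double contraction |T|^2 = g_ik g_jl T^kl T^ij in each basis
norm_old = np.einsum('ik,jl,kl,ij->', g, g, T, T)
norm_new = np.einsum('ik,jl,kl,ij->', g_new, g_new, T_new, T_new)
print(norm_old, norm_new)  # both 30.0 (up to rounding)
```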

  • @aviveshed2412
    @aviveshed2412 5 years ago

    Can someone explain the transition at 12:50? I don't see why I get the Kronecker delta there. It does not work with the indices of B and F.

    • @doctor-mad
      @doctor-mad 5 years ago

      In the first (B,F) pair the i's cancel out and leave you with a and k. In the second (B,F) pair the j's cancel out and you are left with b and l.

  • @______________
    @______________ 4 years ago

    Is an orthonormal basis necessary to obtain the length of a vector? Is it possible to calculate length without an orthonormal basis?

    • @eigenchris
      @eigenchris  4 years ago +1

      Yes, the red "tilde" basis is non-orthogonal in this video.

  • @jaeimp
    @jaeimp 6 years ago +1

    Fabulous work! Easily the most accessible and up-to-date presentations online. I presume the advantage you take of the orthonormality of the first vector basis in deriving the metric tensor matrix is translatable to the tangent vectors to a curve at a point on a manifold also being orthogonal, also allowing the use of the Kronecker delta to end up with zeros and ones. I'll have to wait for your calculus presentations...

    • @eigenchris
      @eigenchris  6 years ago

      Thanks! Yes, I plan on describing the metric tensor on curved surfaces later on, and it's a similar idea.

  • @СпасСтоилов-с2ю
    @СпасСтоилов-с2ю 3 years ago

    You make the explanation more difficult than it is!

  • @ahmadfaraz9279
    @ahmadfaraz9279 3 years ago

    How is the dot product defined? I mean, here the dot product of orthonormal basis vectors is defined as delta(i,j). How is the dot product defined for a general basis? The dot product between two basis vectors is defined as cos(theta), right?

    • @eigenchris
      @eigenchris  3 years ago +1

      You have to define how the basis vectors are oriented with respect to each other, and what their sizes are. One way to do this is just declare what their dot products are. Another way is to define their lengths and angles with respect to each other, and then work backwards to get the dot products. But you need to start somewhere, so you need to invent these quantities on your own, or else give them in terms of another basis.

  • @Lucky10279
    @Lucky10279 4 years ago

    Question: Your generalized metric for vector length in a flat plane should be equivalent to the law of cosines, as that lets us get the length of the hypotenuse of any triangle. But that has a minus sign and yours has a plus sign. What's up with that?

    • @eigenchris
      @eigenchris  4 years ago +1

      In this video, the angle theta is the angle between the basis vectors e1 and e2, both extending from the origin. In the law of cosines, the base of e2 is "slid" along to the end of e1's tip, and the 3 sides of the triangle are e1, e2 and e1+e2. The angle used in the law of cosines is opposite the side e1+e2, and so is pi-theta (or 180-theta in degrees). This results in a sign change in the formula.
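      Written out (taking unit-length basis vectors for simplicity), the sign flip comes from the angle substitution C = π − θ:

```latex
% With unit basis vectors and \theta the angle between \vec{e}_1 and \vec{e}_2:
\|a\vec{e}_1 + b\vec{e}_2\|^2 = a^2 + b^2 + 2ab\cos\theta
% Law of cosines, with interior angle C = \pi - \theta opposite the side a\vec{e}_1 + b\vec{e}_2:
c^2 = a^2 + b^2 - 2ab\cos C = a^2 + b^2 + 2ab\cos\theta
\quad\text{since } \cos(\pi - \theta) = -\cos\theta
```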

  • @MrGarry1410
    @MrGarry1410 3 years ago

    At 12:08, how do you get from g~_ij = FF g_kl to BB g~_ij = g_kl? By multiplying with two Bs of course, but where did the resulting Kronecker deltas go? I can't get the indices right.

    • @eigenchris
      @eigenchris  3 years ago

      The actual index letters used don't matter. What's important is how the index letters are paired together, which determines the summations. If you have an answer, try re-labeling the indices pair-by-pair so that it looks like the answer in the video.

    • @MrGarry1410
      @MrGarry1410 3 years ago

      @@eigenchris I am aware of the possibility of renaming indices. My problem lies in finding something to cancel the deltas with.
      Instead of g~ I will write G:
      G_ij = F^k_i F^l_j g_kl | multiply by B^i_s
      B^i_s G_ij = B^i_s F^k_i F^l_j g_kl | inverses to delta
      B^i_s G_ij = d^k_s F^l_j g_kl | multiply by B^j_t
      B^j_t B^i_s G_ij = d^k_s B^j_t F^l_j g_kl | inverses to delta
      B^j_t B^i_s G_ij = d^k_s d^l_t g_kl | what now?

    • @MrGarry1410
      @MrGarry1410 3 years ago

      Ah!:
      B^j_t B^i_s G_ij = d^k_s d^l_t g_kl | make use of d^k_s g_kl = g_sl
      B^j_t B^i_s G_ij = d^l_t g_sl | analogous to above
      B^j_t B^i_s G_ij = g_st | rename s -> k and t -> l
      B^j_l B^i_k G_ij = g_kl
      I was too blind to see it for several hours. Thank you very much for conversing with me, that solved the problem.

    • @eigenchris
      @eigenchris  3 years ago

      It looks like you figured it out. Well done!

  • @pablogriswold421
    @pablogriswold421 3 years ago

    Maybe this is answered somewhere else in the series, but why do some of the tensors in the overview at 13:41 seem to disobey the general transformation rule at 15:00? Specifically, the rule shows that to go from old to new, backward transforms are on the left and forward transforms are on the right, but only linear maps seem to follow that, while basis vectors etc. are the opposite. Obviously the dimensions wouldn't agree in that case if the rule were followed; is that how the order is decided?

    • @eigenchris
      @eigenchris  3 years ago +1

      The left and right position of tensor summations doesn't matter. You can flip the letters back and forth and the formula still has the same meaning.

    • @pablogriswold421
      @pablogriswold421 3 years ago

      @@eigenchris Oh, gotcha. Thanks for the quick response, and your videos are really wonderful!

  • @manaayek8091
    @manaayek8091 5 months ago

    This is the infinity war of this series.

  • @lobnasaeed
    @lobnasaeed 2 years ago

    Great and incredible explanation!!!

  • @adamb7088
    @adamb7088 1 year ago

    OMG! Is this what is known as a Hilbert space? Is it possible to take a derivative wrt a metric tensor?

    • @eigenchris
      @eigenchris  1 year ago

      Hilbert spaces are required to have "inner products", which is basically the same thing as having a metric tensor. Although in most quantum classes you always have an orthonormal basis, so you can just take the metric tensor to be the identity matrix. Hilbert spaces also have extra properties, like being compatible with complex numbers, and being "complete", meaning you can take limits and do calculus in them.

  • @felix1840
    @felix1840 9 months ago

    Awesome playlist !

  • @nahimafing
    @nahimafing 4 years ago +1

    At 11:50, where did you get the new components k and l? But great vid, keep it up :)

    • @eigenchris
      @eigenchris  4 years ago

      I introduced a new summation, so I just made up the summation index out of thin air. I could have used any letter.

    • @nahimafing
      @nahimafing 4 years ago

      @@eigenchris Ah, thanks so much, I get it: the Einstein summation convention.

  • @MrGarry1410
    @MrGarry1410 3 years ago

    Until this video, we were only referring to vector spaces. Here you also bring up the dot product. So is it safe to say that in order to define (1,0)-, (0,1)- and (1,1)-tensors one has to have at least a vector space, whereas in order to define any other tensors, like the (0,2)-tensor, one has to have at least an inner product space?

    • @eigenchris
      @eigenchris  3 years ago

      I brought up the example of the metric tensor (dot product function) because it's a very common example of a tensor (particularly in relativity). But I don't think we need an inner product space to define tensors in general. Video 10 shows how bilinear forms in general are also tensors. In the last video of this series, I show how to "partner" vectors and covectors together, and that part does require the definition of an inner product.

    • @MrGarry1410
      @MrGarry1410 3 years ago

      @@eigenchris Hm, another way of saying what I mean is: there is no way to define the metric tensor without getting the definition of the dot product for free. Would you still disagree?

    • @eigenchris
      @eigenchris  3 years ago +1

      I guess I'd agree with that. In my mind, the metric tensor is basically the same thing as a dot product/inner product. So if a metric tensor is defined, you have an inner product space by definition.

  • @compphysgeek
    @compphysgeek 5 years ago

    Pythagoras' theorem is not a lie (for non-Cartesian coordinate systems). The formula c² = a² + b² is simply only applicable to right triangles, and if you use it for other triangles you are simply making a mistake. Don't blame others for your mistakes ;)

  • @seungsoolee1949
    @seungsoolee1949 6 years ago

    Hi, thank you so much for your video series. I don't think I would've made it without it. I'm slightly confused as to why the dot product of orthonormal basis vectors is the same as the Kronecker delta. Is it because a • b = ||a|| ||b|| cos(theta)? Thank you!

    • @eigenchris
      @eigenchris  6 years ago

      You can think of it that way, since orthogonal vectors are at 90-degree angles, and cos(90 degrees) = 0.
      However, I think this involves circular reasoning at 9:30, since I derive that formula using dot products and Kronecker deltas in the first place.
      I didn't do a proper derivation of the dot product formulas because I felt it was too tedious and not very interesting. I can link you to a derivation from the ground up if you like.
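      For reference, the core of that derivation is short; it only needs bilinearity of the dot product plus e_i · e_j = δ_ij taken as the defining property of an orthonormal basis:

```latex
% Expand both vectors in the orthonormal basis and use bilinearity:
\vec{v}\cdot\vec{w}
  = (v^i \vec{e}_i)\cdot(w^j \vec{e}_j)
  = v^i w^j (\vec{e}_i \cdot \vec{e}_j)
  = v^i w^j \delta_{ij}
  = \sum_i v^i w^i
```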

  • @ram_c
    @ram_c 3 years ago

    Great, please can you make such lectures for classical physics and QFT?

    • @eigenchris
      @eigenchris  3 years ago

      Sorry, but I don't know QFT very well.

  • @simonmultiverse6349
    @simonmultiverse6349 4 years ago

    I have been trying to translate these (as I go) into ordinary matrix notation. With that, I get g(new) = F(transpose) . g(old) . F, and since the old basis here is orthonormal (g(old) = identity), this reduces to g = F_T . F. This is much simpler than using g (with two subscripts), then F with 2 indices and F with 2 more indices.
    Ordinary matrix & vector notation makes this much simpler to write.
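    As a sanity check, the index formula and the matrix form agree (a sketch with a made-up F, old basis taken orthonormal as in the video):

```python
import numpy as np

g_old = np.eye(2)             # old basis orthonormal
F = np.array([[2.0, 1.0],
              [0.0, 1.0]])    # made-up forward transform

# Component rule from the video: g~_ij = F^k_i F^l_j g_kl
g_new_components = np.einsum('ki,lj,kl->ij', F, F, g_old)
# Matrix form: g~ = F^T g F (reduces to F^T F when g is the identity)
g_new_matrix = F.T @ g_old @ F

print(np.allclose(g_new_components, g_new_matrix))  # True
```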

  • @DargiShameer
    @DargiShameer 2 years ago

    Great explanation 😍😍😍😍

  • @daniel97144
    @daniel97144 1 year ago

    You are the GOAT

  • @quinnculver
    @quinnculver 3 years ago

    Bravo, eigenchris!

  • @AlvaroZevallosDkP88
    @AlvaroZevallosDkP88 4 years ago

    I wonder: a vector v represents something, for example a stick of length 5, and we can measure its components with a vector basis. We can change the basis and apply a transformation to the vector's components to fit the new basis, but that vector v still means a stick of the same length (5) and direction. The metric tensor follows the same rules: you have it, and you apply transformations to change the numbers inside it to fit a different vector basis. But doesn't that mean the metric tensor actually represents an absolute object, and we just change the components we see based on the vector basis we are using? I mean, the tensor g_ij represents object A, and we see and represent A with different basis vectors; but could we define an object B, measure it with a metric tensor Z_ij, and also represent it with different basis vectors? What are A and B? Is A absolute, and does B not exist at all in this universe? What does the metric tensor mean?
    PS: love your videos, you're an awesome teacher

    • @trogdorstrngbd
      @trogdorstrngbd 4 years ago +1

      To the extent that length and angles are "real" and not just fabrications of the mind, the metric tensor is as well.

  • @Onegod40-v4h
    @Onegod40-v4h 4 years ago

    At 11:28, I'm a bit confused about why you used 'i' as a superscript. In your previous videos i & j were both subscripts in the case of vector bases.

    • @eigenchris
      @eigenchris  4 years ago +1

      The particular letters that are used for indexes/indices don't matter. The thing that matters are how the indexes are summed with each other.

    • @Onegod40-v4h
      @Onegod40-v4h 4 years ago

      @@eigenchris Got it. I finished the beginner videos & am approaching the "Tensor calculus" playlist. Thank you a lot for your valuable time.

  • @rahmatkhan3982
    @rahmatkhan3982 4 years ago

    Your lectures are very helpful, thanks!

  • @sajidhaniff01
    @sajidhaniff01 5 years ago

    Thanks so much! Crystal clear explanation

  • @bulentgucsav8782
    @bulentgucsav8782 5 years ago

    I guess there is something wrong at 13:25 where you give the contravariant (1,0) component change rules and the covariant (0,1) component change rules. I think the boxes (showing component change rules) should switch places. Shouldn't they?

    • @bulentgucsav8782
      @bulentgucsav8782 5 years ago

      No, he is right. It was a quick response without listening to him :) Sorry

    • @eigenchris
      @eigenchris  5 years ago

      @@bulentgucsav8782 I think I did make a copy paste error in one of my videos regarding what is covariant and contravariant, but I think this video has it right.

    • @bulentgucsav8782
      @bulentgucsav8782 5 years ago

      @@eigenchris Yes, you are absolutely right. Plus I realized that I did not thank you for your great effort and very good explanation, which is simply a sign of your desire to understand it deeply, i.e. real understanding. Highly appreciated.

  • @amrabboud3170
    @amrabboud3170 2 years ago

    I think it is an outer product, not an inner product (an inner product ends up with a scalar).

  • @marcp3743
    @marcp3743 6 years ago

    Awesome video series. Keep on working!

  • @rlicinio1
    @rlicinio1 6 years ago +1

    Excellent! Very good! Please continue!