Tensors for Beginners 3: Vector Transformation Rules

  • Published 19 Nov 2024

COMMENTS • 152

  • @samuelbignardi8318
    @samuelbignardi8318 5 years ago +72

    You know it's a good video when, in the process of refreshing some basic concepts, you find out that you learn something new!!

  • @Mayank-mf7xr
    @Mayank-mf7xr 4 years ago +5

    Eigenchris is the best teacher.
    Edit: I have had this playlist stored safely for years now (pro procrastinator here) for future study. Now, as finals approach, I am semi-forced to finish what I started, and I have realised how great this series is. I have backed it up so I can never forget it. This channel is a gem, I tell you.

  • @sebaufiend
    @sebaufiend 3 years ago +20

    I have been wanting to understand tensors so badly for so long, I would always get hung up on "contra" and "co" variant vectors. Your simple explanation at 4:28 is just... wow.

    • @abnereliberganzahernandez6337
      @abnereliberganzahernandez6337 1 year ago +1

      Same. When reading books, no one explains that they are defined as covectors simply because of the fact that they transform in different ways. It is weird that that information is not covered in the books.

  • @braytonbailey3782
    @braytonbailey3782 1 year ago +1

    I went to graduate school and never had a complete understanding of what contravariant meant. Now I do, great videos Chris.

  • @ozzyfromspace
    @ozzyfromspace 6 years ago +82

    Nearly 8000 views and not a single dislike.... that's how you know this intro video series is fire. :) My plan is to watch the video series two or three times over, then recreate the logic on my own to *finally* have some intuition I can depend on. I don't mind the little errors here and there, I'd much rather resolve the bigger picture. After that, I'll start reading books on this stuff...then it's on to GR and eventually EM.
    +EigenChris I'm not college educated so videos like these are about as close as I get to human-interactive education. I thank you for the awesome content you're creating, and hope your channel expands.
    Best,
    -Float.

    • @mr.es1857
      @mr.es1857 6 years ago

      For what kind of job do you do this? Or just for fun?

    • @JayLikesLasers
      @JayLikesLasers 6 years ago +9

      Do you mean 'in what job do you use tensors?'
      They are used in physics to understand general relativity, and by electrical engineers, software engineers, and signal analysts dealing with satellite information, due to the effects of general relativity (such as time dilation).
      Tensors are used by all sorts of programmers and businesses who use artificial intelligence and machine learning to make predictions or automatically identify people and objects in images and videos. This is used for self-driving cars, for weather prediction, and for finding out what sort of online ads people might be interested in. See the software 'Tensorflow' for more info.
      Tensors are also used by mechanical engineers and researchers to understand stress and strain in a material (see the Cauchy Stress Matrix), and in understanding Mohr's circle, which describes the 3D state of stress within a material element. This is useful in Finite Element Analysis. There will be lots more applications I haven't mentioned.

    • @martinibarra4903
      @martinibarra4903 6 years ago

      x2

    • @darkinferno4687
      @darkinferno4687 6 years ago +1

      well, the guy asked what he will use it for

    • @JoeHynes284
      @JoeHynes284 4 years ago +2

      I hope it went well for you, I am doing the same... for fun

  • @EnolaGe
    @EnolaGe 1 month ago

    Best and clearest explanation I've ever seen for basis transformation. Thanks!

  • @wycliffdembe3968
    @wycliffdembe3968 4 years ago +8

    Best tutorial on the internet. Keep up the passion... and about those simple errors... I don't even realize until you point them out! Thank you!

  • @sebsunda
    @sebsunda 6 years ago +3

    Holy Shit...
    I got bogged down so hard trying to understand tensors & no one was able to explain it step by step as clearly as this.
    Keep going man, you are doing great work!

  • @marcgarrett2427
    @marcgarrett2427 5 years ago +1

    Eigenchris is a rock star. Simply the best presentations of Tensors on the planet.

  • @Mouse-qm8wn
    @Mouse-qm8wn 3 years ago +4

    Dear Chris,
    Thank you so much for these videos. I am a 48-year-old chemist self-studying physics, and since it is more than 20 years since I did mathematics at university, your videos are excellent for refreshing some of the mathematics and for the new stuff. I would never be able to self-study without these great videos. Best, Camilla 🇩🇰

  • @CardiganBear
    @CardiganBear 6 years ago +9

    Thanks a lot for this. It is the clearest, most succinct and easiest to understand explanation of contravariance that I have ever seen. Promising start to the series...

  • @indexladder5432
    @indexladder5432 3 years ago

    This is the best source on tensors I have found over the years.

  • @stantackett107
    @stantackett107 6 years ago +6

    In short, if I'd had this series of videos when I took my graduate course in general relativity I'd have passed the class with a better grade. If it weren't for the final project, I'd have failed miserably. Where were you 20 years ago? hehehe. Anything beyond the Kronecker Delta and Minkowski Metric escaped me.
    I had a math and physics background going in, but tensors were my undoing. Your series is totally comprehensible. Good work!!!

    • @eigenchris
      @eigenchris  6 years ago +6

      I was in a similar position, except I was self-teaching GR. I find most sources don't take the time to make you comfortable with tensors. They just throw them at you.

    • @stantackett107
      @stantackett107 6 years ago

      You're telling me! The text we used was Weinberg, from 1972. Excellent text, one of the most famous in the business, IF you know what you're doing. I slowly lost my mind that semester.

  • @ahmedabbas3998
    @ahmedabbas3998 2 years ago

    The best and most lucid presentation and explanation of the subject I've ever seen.

  • @boba5729
    @boba5729 3 years ago +1

    Excellent! This is the best explanation I've found of what the term contravariant means.

  • @rajanalexander4949
    @rajanalexander4949 5 years ago +2

    What an excellent explanation of covariant and contravariant -- easily the best I've seen!

  • @dipayanbhadra8332
    @dipayanbhadra8332 2 years ago

    Brilliant, brilliant presentation! Very clean concepts! You establish a standard for how cleanly an idea can be represented with a minimum number of words! Hats off sir!

  • @guthrietapvideos8950
    @guthrietapvideos8950 2 years ago

    Excellent lessons… I sincerely appreciate these videos… clear and just the right length. It’s clear you have taught this stuff a lot… and thanks for staying out of the way and not making it about you.

  • @varungoyal6351
    @varungoyal6351 4 years ago

    Excellent! There have to be people making such series on advanced topics rather than thousands of series on the basic ones. Thanks, @eigenchris.

  • @rajanalexander4949
    @rajanalexander4949 4 years ago

    Thank you for making these videos. They're clear, concise, and accessible. You've simply nailed the topic, and have done what many a math professor has failed to!

  • @ralphrazor788
    @ralphrazor788 2 years ago

    A lucid explanation, in particular in relation to the distinction between covariant and contravariant vectors. Even a concise technical exposition of General Relativity is replete with these terms, and it's nice to now have a sounder intuitive footing with these concepts.

  • @danv8718
    @danv8718 3 years ago

    Brother, this video series is pure gold. Thanks a lot.

  • @cermet592
    @cermet592 6 years ago

    Very intuitive! Gives both the math and the physical process showing the why and how of the definitions.

  • @ΝίκοςΓιαννόπουλος-λ5θ

    Excellent! I unsuccessfully grappled with this topic during first year, and only now, through relativity, am I starting to get the hang of it. You've decisively cleared the waters for me, thanks!

  • @peterbluhm668
    @peterbluhm668 6 years ago

    I have heard about contravariant tensors for years. Now I finally understand them. Great explanation.

  • @saadabdisalam1057
    @saadabdisalam1057 3 years ago +1

    Oh my, it's just a perfect explanation of tensors. Well done!

  • @pukaman2000
    @pukaman2000 4 years ago +4

    This is where I had problems. My professor did not make this point clear; I kept trying to figure out why it was what seemed backward and wrong, and kept failing tests. This led to my low grade and eventually to being kicked out of the university on academic grounds.

    • @DrMcFly28
      @DrMcFly28 2 years ago +2

      They should make a Netflix drama series about this

  • @stevec4899
    @stevec4899 3 years ago

    A very lucid and clear development of these concepts.

  • @Gismho
    @Gismho 3 years ago

    Another excellent video. Thank you so much. None of this appears in my textbook and study guide on tensors, i.e. the fundamental reason for defining a vector as a contravariant tensor!!!

  • @himme8471
    @himme8471 3 years ago

    One of the clearest examples of the transformation rules I've seen. Very good job.
    Another thing one can note is that for the vector v to be invariant when expressed in the two bases, we can "sandwich" B*F = identity in between the vector components and the basis vectors; thus the vector v is invariant. But that's much less descriptive than what you showed. :)
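
    A worked version of that "sandwich" argument, as a hedged sketch (convention assumed: basis vectors written as a row e with the forward map F acting on the right, components as a column v with the backward map B = F^(-1) acting on the left):

        % Sandwich B*F = I (equivalently F*B = I) between components and basis:
        %   new basis:      e~ = e F   (row times matrix)
        %   new components: v~ = B v   (matrix times column)
        \[
        \tilde{e}\,\tilde{v} = (e\,F)(B\,v) = e\,(FB)\,v = e\,I\,v = e\,v = \vec{v},
        \]
        % so F and B cancel in the middle and v is the same object in either basis.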

    • @eigenchris
      @eigenchris  3 years ago +1

      Yes, I came to this realization later and it's exactly how I explain it in the relativity series I'm working on now.
      I also realized that basis vectors can be written as rows and vector components can be written as columns so you can do the same thing with array notation if you like (with B and F matrices instead).
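
      A minimal numpy sketch of that array notation (the basis-doubling numbers are an illustrative assumption, not values from the video):

        import numpy as np

        # Row-of-basis-vectors / column-of-components bookkeeping.
        F = np.array([[2.0, 0.0],
                      [0.0, 2.0]])   # assumed forward transform: both basis vectors doubled
        B = np.linalg.inv(F)         # backward transform

        e = np.eye(2)                # columns of e are the old basis vectors
        v = np.array([3.0, 4.0])     # components of some vector in the old basis

        e_new = e @ F                # basis vectors transform with F
        v_new = B @ v                # components transform with B (contravariant)

        assert np.allclose(e @ v, e_new @ v_new)  # the vector itself is unchanged
        print(v_new)                 # [1.5 2.] : basis doubled, components halved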

    • @himme8471
      @himme8471 3 years ago

      The more perspectives offered, the merrier :)
      I'm working through Gravitation (Misner, Thorne, Wheeler) at my own pace and plan to cover it in full, and felt I should brush up on some of the Tensor Analysis again, so I'm glad I could find this series of videos you've made. It was a great refresher to see your videos on the Geodesic Equation and Covariant Derivative as well!

  • @kellyramsay4896
    @kellyramsay4896 5 years ago +8

    I just wanted to thank you so much for this video

  • @黎銘-s9n
    @黎銘-s9n 3 years ago

    Correction: the transformation formula at 03:15 in this lecture is correct, i.e. \tilde{e}_1 = \sum_{i=1}^{n} F_{1i} e_i. In fact, everything in this lecture is correct, but there is a critical typo in the formula at 08:50 in lecture 1. I finally figured it out, and became confident enough to draw that conclusion, when I tried to verify the points learned so far. FYI.
    I have a book by Jim Hefferon, who says that if you don't do enough exercises, you're not learning LA. That's great advice.

  • @abstractnonsense3253
    @abstractnonsense3253 2 years ago

    Excellent. Considering how good this series has been, I hope you consider remaking video number 1.

  • @ahmedgaafar5369
    @ahmedgaafar5369 5 years ago

    this is the best explanation I ever came across... well done indeed.

  • @kingplunger1
    @kingplunger1 4 months ago

    Thank you so much. I was always confused about this in linear algebra

  • @bethsuni4011
    @bethsuni4011 6 years ago +12

    >I really don't like video editing.
    Well, I quite enjoy the content!

  • @jjbnair
    @jjbnair 4 years ago

    I learn it for fun too... It's fantastic... And now I will learn it to graduate, whatever my age (I am 57 years old, lol).

  • @Dusefull
    @Dusefull 6 years ago

    Well done, could not have hoped for a better explanation!

  • @icedhockey1
    @icedhockey1 6 years ago

    Wonderfully presented and clear. Thank you very much.

  • @timreisinger7078
    @timreisinger7078 6 years ago +1

    I was waiting for something like this for a long time. Thanks.

  • @davidprice1875
    @davidprice1875 6 years ago

    Excellent intro to the vector components and basis transformation relationship

  • @fernandomaron
    @fernandomaron 5 years ago +1

    Thank you Dr. Eigenchris

  • @retiredreplicant.2195
    @retiredreplicant.2195 3 years ago

    Man. You know this subject par excellence.

  • @rakhuramai
    @rakhuramai 2 years ago

    This series is great. I really hope you can do one on Quantum Mechanics too!

    • @eigenchris
      @eigenchris  2 years ago +1

      Thanks. Unfortunately I don't understand QM very well, so I don't plan on doing any QM videos in the near future. My goal right now is relativity.

    • @rakhuramai
      @rakhuramai 2 years ago

      @@eigenchris That's fine. Really glad you did a relativity series too, btw. I'm taking a course this term and the lectures have been very difficult to follow.

  • @fsh3702
    @fsh3702 6 years ago

    This video series is pretty exciting.

  • @marcoventura9451
    @marcoventura9451 6 days ago

    Well... let me watch this video another couple of times... ;-) Very good indeed, kind of tough to grasp!

  • @user-pk5rc4or2w
    @user-pk5rc4or2w 6 years ago +2

    CHAPEAU!!!... I cannot stop watching your videos.

  • @vazgen.balayan
    @vazgen.balayan 7 months ago

    It would be worth noting that in the matrix representation of the vector-transformation expression, the transformation matrix appears in transposed form.
    This would avoid possible misunderstandings, since a given matrix is, in the general case, not symmetric; I mean, its indices cannot be swapped.

  • @FloppyDobbys
    @FloppyDobbys 5 years ago

    I honestly think it's easier to just visualize the inverse transformation in order to get to the vector in the new basis. It takes a bit more imagination at first, I admit, but then you never have to remember any of this, really. What I mean is: just stick the vectors e1~ and e2~ (the new basis) as column vectors in a matrix, e1~ as the first column and e2~ as the second column. Now take the inverse, which geometrically means that this matrix will transform e1~ to the [1,0] vector and e2~ to [0,1]. This is important to visualize: just follow the vector e1~ as it moves in space to [1,0], and do this for all column vectors.
    Now, if you can imagine "carrying" any other vector along while the transformation moves e1~ to [1,0] and e2~ to [0,1], the result will be that vector in the new basis. This is why doubling the lengths of e1~ and e2~ results in 1/2 the length in the other coordinate frame: because of the inverse transformation.
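
    A minimal numpy sketch of that picture (the specific basis values are an illustrative assumption, not from the video):

        import numpy as np

        # The new basis vectors as the columns of a matrix, per the comment above.
        e1_new = np.array([2.0, 0.0])      # assumed: the new basis is the old one doubled
        e2_new = np.array([0.0, 2.0])
        M = np.column_stack([e1_new, e2_new])

        Minv = np.linalg.inv(M)            # sends e1~ -> [1,0] and e2~ -> [0,1]
        assert np.allclose(Minv @ e1_new, [1.0, 0.0])
        assert np.allclose(Minv @ e2_new, [0.0, 1.0])

        v = np.array([3.0, 4.0])           # any other vector, "carried along"
        print(Minv @ v)                    # [1.5 2.] : its components in the new basis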

  • @remusgogu7545
    @remusgogu7545 3 years ago

    🧡 this, very good refresher (in my case)

  • @shekharsuman1271
    @shekharsuman1271 6 years ago +16

    @3:17 the equations for the forward and backward transformations seem wrong. You have forgotten to switch the ij for F and B. So instead of Fij it should be Fji, and the same for B.

    • @eigenchris
      @eigenchris  6 years ago +9

      Yep, you're right. I'm going crazy with all these little errors I made. It's really hard to catch them all.
      I'll add that error to the description.

    • @handlepan1938
      @handlepan1938 6 years ago +2

      Yes, so the relationship between the components in the two systems is inverse and transpose. Thank you for your video anyway, it really helps me.

    • @er4255
      @er4255 5 years ago

      @@eigenchris Just to be sure: the basis transformation matrix is Fij, and the coordinate transformation matrix is the transpose of Fij^-1.

    • @Mikey-mike
      @Mikey-mike 5 years ago +1

      @@eigenchris
      Don't worry, your little index errors are serving as an unexpected pedagogical device.
      Also, if you've made an even number of mistakes with coordinate dummy suffixes (Dirac called coordinate indices 'suffixes'; 'index' was reserved for labelling 'which one'), the mistakes often undo each other.

    • @IlanTarabula
      @IlanTarabula 5 years ago

      Sorry, but I don't understand why. Could you please explain?

  • @notu483
    @notu483 1 year ago

    This concept is the same as the case of function transformations. For example, when you scale f(x) to f(kx), you're essentially manipulating the entire x-axis, and the graph of the function therefore experiences the opposite transformation.
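
    To spell the analogy out as a formula (a gloss on the comment above, not from the video): if the whole x-axis is rescaled so that g(x) = f(kx), then

        \[
        g\!\left(\frac{x_0}{k}\right) = f(x_0),
        \]
        % the feature of f that sat at x_0 now sits at x_0/k: the graph transforms
        % oppositely ("contravariantly") to the scaling applied to the axis.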

  • @marctverdostup8346
    @marctverdostup8346 1 year ago

    Bro, that is so crazy. I love your videos!

  • @didierfavre2356
    @didierfavre2356 2 years ago

    That question could be considered this way: since the vector is constant, the change in the components has to be compensated. Therefore the inverse transformation matrix is to be used.

  • @TorrediPisa52
    @TorrediPisa52 5 years ago

    Hi, very grateful for this work you've made. Please could you clarify what you mean when you say (frame 5:53) "the basis rotates CW and the components rotate CCW"? My question is: why do you say the components rotate, and not that the components vary? In a CW rotation, the vector component on e1 is greater than the one on e2. The basis rotates and the components vary...

  • @AkiraNakamoto
    @AkiraNakamoto 10 months ago

    5:40 I cherish the hope that you can write the right subscript to the left instead of as an exponent (if you really don't want to keep the conventional subscript notation).

  • @gguevaramu
    @gguevaramu 6 years ago +1

    Dear Chris, I have come to the conclusion that there is no mistake in what you have done in your videos.
    For example, in your 3rd video you multiply a matrix by a column of basis vectors; in your 4th video you present the same information another way, just transposing the former matrix equation to get a row of basis vectors times a matrix, and from there you keep that convention for the next videos.
    In this video there is also no error: when you take the index equation that transforms from the new basis to the old basis, you are multiplying a row of basis vectors times a matrix, as you did in the 4th video.
    When you write the vector equation in index form, you can have row components times column basis vectors, or the opposite; in index form this has no impact on the matrix representation. But if you use your former equation for the basis transformation (where a row of basis vectors is multiplied by a matrix), you are forced to write the vector equation in the form row basis times column components, and in the end you get the relation between the new column components and the old column components.
    So the only thing to say here is that in the relation between bases you are comparing row bases, and in the equation between components you are comparing column components. There are no mistakes; it is only that the information is presented in a non-symmetrical way: when you transform bases you use row bases, and when you transform components you use column components.
    I hope this is helpful rather than confusing, saying it in words and not equations. If you like, I can send you a file with the equations; my email is ggm7317@hotmail.com
    Regards

  • @raylandon11
    @raylandon11 3 years ago

    I noticed in this video that the indices in the summation forms are different from the forms in "Tensors for Beginners 1". Was that due to the transpose-matrix error in video 1 and its subsequent correction?

  • @Saturos02
    @Saturos02 6 years ago

    Immensely helpful videos, thank you!

  • @ויקטורגורביץ
    @ויקטורגורביץ 3 years ago

    THANK YOU VERY MUCH EIGENCHRIS.

  • @BlueSoulTiger
    @BlueSoulTiger 2 years ago +1

    4:37 "Now because vector components behave contrary to the basis vectors, we say that the vector components are contravariant"

  • @aleksanderaksenov1363
    @aleksanderaksenov1363 4 years ago

    Sir, what literature do you suggest for a deep study of linear algebra?

  • @gabrielsara3947
    @gabrielsara3947 2 years ago

    So, a vector is a grade-2 tensor. This tensor is the combination of two grade-1 tensors: the vector components (which are contravariant tensors) and the basis vectors (which are covariant tensors). Am I right?

  • @biblebot3947
    @biblebot3947 4 years ago +2

    Why do people not write indices as subscripts and superscripts instead of above and below the letters?
    It seems like a better system, with less confusion.

  • @nahblue
    @nahblue 1 year ago

    Opposite in two ways: the bases transform as "row vectors" multiplied from the left, and vectors multiplied from the right (presumably), but they also interchange F and B. I guess it will be clear soon, but at this point my question is why it's this way.

  • @alexw335
    @alexw335 6 years ago

    Awesome video series, thank you!

  • @suave319
    @suave319 6 years ago

    bruh, these vids are so good

  • @thegamergem3820
    @thegamergem3820 3 years ago

    Excellent!!!

  • @fredrickcampbell8198
    @fredrickcampbell8198 5 months ago

    Ok. I think I do not know what the horizontally written vector is supposed to represent.

  • @acatisfinetoo3018
    @acatisfinetoo3018 5 years ago +1

    I understand the notion of change of basis, but for what reason do we need this?

    • @eigenchris
      @eigenchris  5 years ago

      Some problems are easier to solve in certain coordinate systems. With circular motion, it's easier to use polar coordinates instead of cartesian coordinates. I talk about this in my tensor calculus series.
      Also, we need the laws of physics to work in every basis, so we should be able to change basis and get the same physical results.

  • @josephahles7529
    @josephahles7529 2 years ago

    So we use the backwards transformation in a way to essentially "normalize" or re-define the vector through a different set of unit vectors?

    • @eigenchris
      @eigenchris  2 years ago +1

      Not sure what you mean by "normalize". We just take some vector "v" and build it using a basis. The backward transformation tells us how to change the vector components when we change basis.

    • @josephahles7529
      @josephahles7529 2 years ago

      @@eigenchris Thank you for clarifying.

  • @yeonjoon-choi
    @yeonjoon-choi 6 years ago

    THIS VIDEO MAKES COORDINATE TRANSFORMATIONS CRAZY EASY TO UNDERSTAND!

  • @sajidhaniff01
    @sajidhaniff01 5 years ago

    Hi Chris,
    Should F not have an upper and a lower index, as it represents a linear map, as opposed to two lower indices?

    • @eigenchris
      @eigenchris  5 years ago +1

      That is correct. I hadn't introduced the upper/lower index notation at this point yet.

    • @sajidhaniff01
      @sajidhaniff01 5 years ago

      Thanks

  • @ertugrulkarademir9440
    @ertugrulkarademir9440 7 years ago +1

    Great video!

    • @eigenchris
      @eigenchris  7 years ago +1

      Thanks! I'm really glad to know someone enjoyed this... I do apologize for the visual glitches. I will try to reupload a fixed version at a later time.

  • @numoru
    @numoru 2 years ago +1

    Man nice, but someone needed a high pass filter super bad

    • @numoru
      @numoru 2 years ago

      sibilance of a snake being spaghettified

    • @catholicgamer1345
      @catholicgamer1345 11 months ago

      You mean low pass? High pass clears out the low end and keeps the high end...

  • @thevegg3275
    @thevegg3275 7 months ago

    You say that vector components transform in the opposite manner to the basis vectors, but is that not only half true? Is it not the case that dual basis vector components, the components found through perpendicular projection, transform in the same way as the dual basis vectors?
    And just to be clear what I mean: dual basis vectors are vectors calculated from skewed basis vectors by the equation 1/(|e1| cos theta), where theta is the angle between e1 in the skewed coordinate system and e1 in the dual basis system.
    Also, can you direct me to the videos of yours where you talk about parallel projection to find vector components versus perpendicular projection to find dual basis vector components?

    • @eigenchris
      @eigenchris  7 months ago

      As I present in the next few videos:
      Vector components transform contravariantly (opposite the basis vectors).
      Dual vector components transform covariantly (same as basis vectors).
      Dual basis vectors transform contravariantly (opposite the basis vectors).
      In this series I draw dual basis vectors as "stacks" (as seen in the next video) to make it clear they are different from vectors. The "perpendicular projection" is just the result of counting the number of stack lines that a vector pierces.
      I'm not so familiar with that cosine formula you gave. Do you have a link to a page that explains it?
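
      A compact restatement of those four rules (schematic; F is the forward basis transform, B = F^(-1), and index placement conventions vary between texts):

        \[
        \begin{aligned}
        \text{basis vectors:}     &\quad \tilde{e}_i = \textstyle\sum_j F_{ji}\, e_j && \text{(covariant, the reference)}\\
        \text{vector components:} &\quad \tilde{v}^i = \textstyle\sum_j B_{ij}\, v^j && \text{(contravariant)}\\
        \text{dual basis:}        &\quad \tilde{\epsilon}^i = \textstyle\sum_j B_{ij}\, \epsilon^j && \text{(contravariant)}\\
        \text{dual components:}   &\quad \tilde{\alpha}_i = \textstyle\sum_j F_{ji}\, \alpha_j && \text{(covariant)}
        \end{aligned}
        \]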

  • @MrCooldude4172
    @MrCooldude4172 6 years ago

    Hi, I am really confused about this:
    @3:17 yes, it should be B_ji instead of B_ij, so if I work through the rest myself I get:
    v~_i = sum (from j = 1 to n) B_ji v_j, i.e. here we also have B_ji instead of B_ij, so not only is it the backwards matrix, it is also its transpose. Is this correct? Thanks.

    • @eigenchris
      @eigenchris  6 years ago

      Looking at this video again... I think what I've written at 3:17 is perfectly fine. I'm not sure why I wrote it down as a mistake earlier...
      The correct matrix multiplication formula is v~_i = sum(over j) B_ij v_j. That basically means entry 1 of v~ is row 1 of B times the column v, and entry 2 of v~ is row 2 of B times the column v.
      Does that make sense to you?

    • @MrCooldude4172
      @MrCooldude4172 6 years ago

      Please correct me if I am wrong.
      At 3:17 your formula for ej is wrong, you need to change B_ij to B_ji or simply swap the indices for ei and ej around.
      Referring to your previous video: when you give the example for the transformation of vector components, instead of writing the transformation matrices down (forwards and backwards), you write their transposes. Thus, in your calculation showing how to go from old vector components to new ones, you are actually applying B transposed instead of B.
      This is why I think v~_i = sum (from j = 1 to n) B_ji v_j is correct. The only difference between this and what you just wrote in your comment is that I have B_ji, i.e. the transpose of B_ij.
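
      One way to reconcile this thread (my sketch, not from the video): whether B_ij or B_ji shows up depends on how the basis transformation is written.

        % Convention A (matrix acting on a column of basis vectors):
        %   e~_i = sum_j F_ij e_j   forces   v~_i = sum_j B_ji v_j   (a transpose appears)
        % Convention B (row of basis vectors times a matrix):
        %   e~_i = sum_j F_ji e_j   gives    v~_i = sum_j B_ij v_j   (a plain product)
        % Under convention B, invariance of v fixes the component rule:
        \[
        \vec{v} = \sum_i \tilde{v}_i \tilde{e}_i
                = \sum_j \Big( \sum_i F_{ji}\,\tilde{v}_i \Big) e_j
                \stackrel{!}{=} \sum_j v_j\, e_j
        \;\Rightarrow\; \tilde{v}_i = \sum_j B_{ij}\, v_j .
        \]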

  • @andrerossa8553
    @andrerossa8553 5 years ago

    excellent, thanks

  • @TheBigBangggggg
    @TheBigBangggggg 6 years ago

    You call v-subscript i (or v-superscript i) the component of the vector. I always learned that v-subscript i (or v-superscript i) times a basis vector is the component. In my opinion, v-subscript i (or v-superscript i) is just a scaling factor. Am I wrong?

    • @eigenchris
      @eigenchris  6 years ago

      That's interesting... I'm not really a math teacher and I'm sort of "winging" these videos based on my intuition, not formal mathematical reasoning, so I might not have all my terminology correct. You may be right.
      I think I am going to keep using the terminology I use now, just for consistency. I hope it's not too confusing for you or others.

  • @curtpiazza1688
    @curtpiazza1688 2 years ago

    Interesting!

  • @phb1955
    @phb1955 3 years ago

    Beware: the B and F indexing has been swapped from the previous video. Vectors now seem to be indexed as rows (i.e. index 1).

  • @cirrusmusic5522
    @cirrusmusic5522 4 years ago

    I wish all teachers were able to tell what they know so clearly. Like, if it's clear in your head you should be able to explain it clearly too.

  • @Physics_PI
    @Physics_PI 4 years ago

    Thank you sir

  • @charlesfogel7272
    @charlesfogel7272 6 years ago

    What software/hardware are you using?

  • @harveybernard6873
    @harveybernard6873 6 years ago

    Can you make a video on the metric tensor and Riemannian geometry?

    • @eigenchris
      @eigenchris  6 years ago

      Does this video help?
      ua-cam.com/video/SmjbpIgVKFs/v-deo.html

    • @harveybernard6873
      @harveybernard6873 6 years ago

      I am really touched that you are doing a great job. I hope there will be more knowledge in the future.

  • @aldopietrolingua9692
    @aldopietrolingua9692 2 years ago

    Why don't you write a book? Greetings.

  • @ashishzachariah3721
    @ashishzachariah3721 3 years ago

    In Lecture 2:
    F = [(2, 1), (-0.5, 0.25)]
    B = [(0.25, -1), (0.5, 2)]
    In Lecture 3 and the following lectures:
    F = [(2, -0.5), (1, 0.25)]
    B = [(0.25, 0.5), (-1, 2)]
    The matrices/arrays appear transposed, while the figure shown is the same.
    Have I missed something?
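
    A quick numerical check of the comment above (assuming those matrices are transcribed accurately): both pairs are mutually inverse, and the Lecture 3 pair is the transpose of the Lecture 2 pair, consistent with the video-1.5 correction mentioned in the reply below.

        import numpy as np

        # Matrices as quoted in the comment above (rows as listed).
        F2 = np.array([[2.0, 1.0], [-0.5, 0.25]])   # lecture 2
        B2 = np.array([[0.25, -1.0], [0.5, 2.0]])
        F3 = np.array([[2.0, -0.5], [1.0, 0.25]])   # lecture 3
        B3 = np.array([[0.25, 0.5], [-1.0, 2.0]])

        # Each pair is mutually inverse...
        assert np.allclose(F2 @ B2, np.eye(2))
        assert np.allclose(F3 @ B3, np.eye(2))
        # ...and the lecture-3 matrices are the transposes of the lecture-2 ones.
        assert np.allclose(F3, F2.T)
        assert np.allclose(B3, B2.T)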

    • @eigenchris
      @eigenchris  3 years ago

      I made a video 1.5 where I corrected an error I made in video 1. Sorry about this.

  • @scottraywelty
    @scottraywelty 2 years ago

    Not clear to me what the difference is between basis vectors and vector components.

    • @eigenchris
      @eigenchris  2 years ago

      Basis vectors are the arrows we use as building blocks to make other vectors. Vector components are how much of each basis vector we use when building new vectors.

  • @taiucnguyenvo3115
    @taiucnguyenvo3115 2 years ago

    You can read it off from the matrix; I think that is better.

  • @Sharikkhursheed
    @Sharikkhursheed 6 years ago

    Hello, I have a question... Can I ask it?

    • @eigenchris
      @eigenchris  6 years ago

      Go ahead. :)

    • @Sharikkhursheed
      @Sharikkhursheed 6 years ago

      eigenchris, are covectors the covariant vectors?

    • @eigenchris
      @eigenchris  6 years ago +1

      Covector COMPONENTS follow the covariant transformation rule. I cover this in videos 4,5,6. Does that answer your question?

    • @MrCooldude4172
      @MrCooldude4172 6 years ago

      He's not your servant, mate, haha. Ask questions if you want, but don't expect him to personally send you notes.

    • @Sharikkhursheed
      @Sharikkhursheed 6 years ago

      cooldude 4172 do you have any material, buddy?

  • @akshitkushwaha9479
    @akshitkushwaha9479 7 months ago

    Wow

  • @tombouie
    @tombouie 5 years ago

    Hmmmmmmmmmmm, matrix analysis would have been much clearer/more effective/more efficient than the scalar analysis. Thanks anyway, though.

  • @shobhitkhajuria7464
    @shobhitkhajuria7464 6 years ago

    Shocked!!!!! Vectors and the basis transform the same way. Think of it this way: when transforming the basis, we are indirectly but actually transforming a vector with components 1. So why would there be an opposite behaviour?

    • @eigenchris
      @eigenchris  6 years ago +1

      I'm not sure how to explain it in a better way. If the basis vectors get longer, then you need less of each basis vector to build other vectors. So when basis vectors get big, the components of other vectors get small.

    • @shobhitkhajuria7464
      @shobhitkhajuria7464 6 years ago

      eigenchris, this has got me confused for an hour now. What if I take a pure rotation of the basis, not changing the magnitudes of the basis vectors? I am talking about a pure rotation of an orthonormal basis. What do you think in this case?

    • @eigenchris
      @eigenchris  6 years ago

      If you rotate the basis vectors clockwise, it *appears* as if a vector V sitting in space is rotating counter-clockwise. The rotation matrix for the basis vectors will be the exact same as the rotation matrix for the vector V's components, except the angle of rotation will be positive in one and negative in the other.

    • @rathikmurtinty5902
      @rathikmurtinty5902 2 years ago

      @@shobhitkhajuria7464 eigenchris's reply is a very good explanation; I was stuck on the rotation aspect rather than the scaling aspect as well. Imagine you are sitting at the origin of the basis vectors and you rotate to the right along with them. The invariant vector/tensor will appear to rotate to the left from your point of view. This is basic relativity. If you calculate the components of the tensor vs the basis vectors, you can use the same rotation matrix for both, but the angles of rotation have opposite signs. This is why the components seem to oppose a change in the basis vectors even under rotation.
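
      A numerical sketch of that point (the angle and vector here are assumptions for illustration): rotating the basis by +theta rotates the components of a fixed vector by -theta.

        import numpy as np

        def rot(theta):
            """2D rotation matrix."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s], [s, c]])

        theta = np.deg2rad(30)
        e = np.eye(2)             # columns are the old orthonormal basis vectors
        v = np.array([3.0, 4.0])  # components of a fixed vector in that basis

        e_new = rot(theta) @ e    # rotate the basis by +theta
        v_new = rot(-theta) @ v   # the components rotate by -theta (contravariant)

        # The geometric vector (basis columns times components) is unchanged:
        assert np.allclose(e @ v, e_new @ v_new)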

  • @annesanila1897
    @annesanila1897 6 years ago

    Is scaling enough ++ is it enough?

  • @braigetori
    @braigetori 3 years ago

    I don't understand why, at 3:13, the subscripts for the vector suddenly change from only "j" to "i" and "j". This is not random, as F and B have "j" elements as well, and these are used in the simplification after substitution. Please help explain why the "j" and "i" subscripts for the vector suddenly appear.

    • @eigenchris
      @eigenchris  3 years ago

      I can't tell which summation you're talking about. Can you write out the formula to help me understand?

    • @braigetori
      @braigetori 3 years ago

      @@eigenchris Ok, the two equations on the top left at 3:13: V = SUM(j=1 to n) vj x ej = SUM(i=1 to n) vi x ei. Why the different subscripts (i and j)? And why do these i and j correspond to Fij and Bij? - thank you!!

    • @eigenchris
      @eigenchris  3 years ago

      @@braigetori The letter used for summations doesn't matter. I changed the letter to make the letters in the final summation formula match. But you can always change the summation letter without changing the meaning of the formula.

    • @braigetori
      @braigetori 3 years ago

      @@eigenchris The letters used for summation do matter - because you relied on the fact that the letters in V match the letters in E (basis vector). The substitution would not work unless the subscripts match. So there is an error in logic here - the subscript letters are clearly not arbitrary.

    • @eigenchris
      @eigenchris  3 years ago

      @@braigetori Yes, what I meant to say is "the summation index doesn't matter, as long as they match". I mean you can replace the summation letter, as long as you do it consistently.
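
      The renaming step in symbols (a paraphrase of the exchange above): a dummy summation index can be renamed freely, as long as it is renamed everywhere inside that one sum,

        \[
        \vec{v} = \sum_{j=1}^{n} v_j\, e_j = \sum_{i=1}^{n} v_i\, e_i ,
        \]
        % which is what lets the indices in the final formula line up with F_ij and B_ij.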

  • @richardneifeld7797
    @richardneifeld7797 2 years ago

    Before the next video, read the Wikipedia article on Covariance and Contravariance: en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors