You know it's a good video when, while refreshing some basic concepts, you find out you're learning something new in the process!!
Eigenchris is the best teacher.
Edit: I have had this playlist stored safely for years now (pro-procrastinator here) for future study. Now, as finals approach, I am semi-forced to finish what I started, and I've realised how great this series is. I have backed this up so I can never forget it. This channel is a gem, I tell you.
I have been wanting to understand tensors so badly for so long; I would always get hung up on "contra" and "co" variant vectors. Your simple explanation at 4:28 is just... wow.
Same here. When reading books, no one explains that they are defined as covectors simply because of the fact that they transform in different ways. It is weird that this information is not covered in the books.
I went to graduate school and never had a complete understanding of what contravariant meant. Now I do. Great videos, Chris.
Nearly 8000 views and not a single dislike.... that's how you know this intro video series is fire. :) My plan is to watch the video series two or three times over, then recreate the logic on my own to *finally* have some intuition I can depend on. I don't mind the little errors here and there, I'd much rather resolve the bigger picture. After that, I'll start reading books on this stuff...then it's on to GR and eventually EM.
+EigenChris I'm not college educated so videos like these are about as close as I get to human-interactive education. I thank you for the awesome content you're creating, and hope your channel expands.
Best,
-Float.
For what kind of job do you do this? Or is it just for fun?
Do you mean 'in what job do you use tensors?'
They are used in physics to understand general relativity, and by electrical engineers, software engineers, and signal analysts dealing with satellite information, due to the effects of general relativity (such as time dilation).

Tensors are used by all sorts of programmers and businesses who use artificial intelligence and machine learning to make predictions or automatically identify people and objects in images and videos. This is used for self-driving cars, for weather prediction, and for finding out what sort of online ads people might be interested in. See the software 'TensorFlow' for more info.

Tensors are used by mechanical engineers and researchers to understand stress and strain in a material (see the Cauchy stress tensor). They are used in understanding Mohr's circle, which describes the 3D state of stress within a material element. This is useful in Finite Element Analysis. There are lots more applications I haven't mentioned.
x2
Well, the guy asked what he will use it for.
I hope it went well for you; I am doing the same... for fun.
Best and clearest explanation I've ever seen for basis transformation. Thanks!
Best tutorial on the internet. Keep up the passion... and about those simple errors... I don't even realize until you point them out! Thank you!
Holy shit...
I got bogged down so hard trying to understand tensors, and no one was able to explain it step by step as clearly as this.
Keep going, man, you are doing great work!
Eigenchris is a rock star. Simply the best presentations of Tensors on the planet.
Dear Chris,
Thank you so much for these videos. I am a 48-year-old chemist self-studying physics, and since it is more than 20 years since I did mathematics at university, your videos are excellent for refreshing some of the mathematics and learning new material. I would never be able to self-study without these great videos. Best, Camilla 🇩🇰
Thanks a lot for this. It is the clearest, most succinct and easiest to understand explanation of contravariance that I have ever seen. Promising start to the series...
This is the best source on tensors I have found over the years.
In short, if I'd had this series of videos when I took my graduate course in general relativity I'd have passed the class with a better grade. If it weren't for the final project, I'd have failed miserably. Where were you 20 years ago? hehehe. Anything beyond the Kronecker Delta and Minkowski Metric escaped me.
I had a math and physics background going in, but tensors were my undoing. Your series is totally comprehensible. Good work!!!
I was in a similar position, except I was self-teaching GR. I find most sources don't take the time to make you comfortable with tensors. They just throw them at you.
You're telling me! The text we used was Weinberg, from 1972. Excellent text, one of the most famous in the business, IF you know what you're doing. I slowly lost my mind that semester.
The best and most lucid presentation and explanation of the subject I've ever seen.
Excellent! This is the best explanation I've found of what the term contravariant means.
What an excellent explanation of covariant and contravariant -- easily the best I've seen!
Brilliant, brilliant presentation! Very clean concepts! You establish a standard for how cleanly an idea can be represented with a minimum number of words! Hats off sir!
Excellent lessons… I sincerely appreciate these videos… clear and just the right length. It’s clear you have taught this stuff a lot… and thanks for staying out of the way and not making it about you.
Excellent! There need to be more people making series like this on advanced topics, rather than thousands of series on the basic ones. Thanks, @eigenchris.
Thank you for making these videos. They're clear, concise, and accessible. You've simply nailed the topic, and have done what many a math professor has failed to do!
A lucid explanation, in particular in relation to the distinction between covariant and contravariant vectors. Even a concise technical exposition of General Relativity is replete with these terms, and it's nice to now have a sounder intuitive footing with these concepts.
Brother, this video series is pure gold. Thanks a lot.
Very intuitive! Gives both the math and the physical process showing the why and how of the definitions.
Excellent! I unsuccessfully grappled with this topic during first year, and only now, through relativity, am I starting to get the hang of it. You've decisively cleared the waters for me, thanks!
I have heard about contravariant tensor for years. Now I finally understand them. Great explanation.
Oh my, it's just a perfect explanation of tensors. Well done!
This is where I had problems. My professor did not make this point clear; I kept trying to figure out why it seemed to be backward and wrong, and kept failing tests. This led to my low grade and eventually to my being kicked out of the university on academic grounds.
They should make a Netflix drama series about this
A very lucid and clear development of these concepts.
Another excellent video. Thank you so much. None of this appears in my textbook and study guide on tensors, i.e. the fundamental reason for defining a vector as a contravariant tensor !!!
One of the clearest examples of the transformation rules I've seen. Very good job.
Another thing one can note is that for the vector v to be invariant when expressed in the two bases, we can "sandwich" B*F = identity in between the vector components and the basis vectors. Thus the vector v is invariant. But that's much less descriptive than what you showed. :)
Yes, I came to this realization later and it's exactly how I explain it in the relativity series I'm working on now.
I also realized that basis vectors can be written as rows and vector components can be written as columns so you can do the same thing with array notation if you like (with B and F matrices instead).
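To make that "sandwich" concrete, here is a minimal numpy sketch of the row-basis/column-components layout described above. The F matrix and the vector are example numbers I made up, not taken from the video:

```python
import numpy as np

# Old basis arrows stored as the COLUMNS of E (in a background Cartesian frame).
E = np.eye(2)

# Forward transform F (made-up example) and its inverse, the backward transform B.
F = np.array([[2.0, 1.0],
              [0.0, 1.0]])
B = np.linalg.inv(F)            # B @ F = identity

E_new = E @ F                   # basis (a "row" of arrows) gets F  -> covariant
v = np.array([3.0, 4.0])        # components of a fixed vector in the old basis
v_new = B @ v                   # components get B                  -> contravariant

# Sandwich B @ F = identity between components and basis:
# E @ v = E @ (F @ B) @ v = (E @ F) @ (B @ v) = E_new @ v_new
print(E @ v)                    # the actual arrow
print(E_new @ v_new)            # the same arrow, rebuilt in the new basis
```

Both prints give [3. 4.], which is exactly the invariance being described.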
The more perspectives offered, the merrier :)
I'm working through Gravitation (Misner, Thorne, Wheeler) at my own pace and plan to cover it in full, and felt I should brush up on some of the Tensor Analysis again, so I'm glad I could find this series of videos you've made. It was a great refresher to see your videos on the Geodesic Equation and Covariant Derivative as well!
I just wanted to thank you so much for this video
Correction: the transformation formula at 03:15 in this lecture is correct, i.e., $\tilde{e}_1 = \sum_{i=1}^{n} F_{1i}\, e_i$. Well, everything in this lecture is correct, but there's a critical typo in the formula at 08:50 in lecture 1. I finally figured it out and became confident enough to draw a conclusion when I tried to verify the points learned so far. FYI.
I have a book by Jim Hefferon who says that if you don't do enough exercises, you're not learning LA. That's great advice.
Excellent. Considering how good this series has been, I hope you consider remaking video number 1.
This is the best explanation I ever came across... well done indeed.
Thank you so much. I was always confused about this in linear algebra
>I really don't like video editing.
Well, I'm quite enjoying the content!
I'm learning it for fun too... It's fantastic... And now I will learn it to graduate, whatever my age (I am 57 years old, lol).
Well done, could not have hoped for a better explanation !
Wonderfully presented and clear. Thank you very much.
I was waiting for something like this for a long time. Thanks.
Excellent intro to the vector components and basis transformation relationship
Thank you Dr. Eigenchris
Man. You know this subject par excellence.
This series is great. I really hope you can do one on Quantum Mechanics too!
Thanks. Unfortunately I don't understand QM very well, so I don't plan on doing any QM videos in the near future. My goal right now is relativity.
@@eigenchris That's fine. Really glad you did a relativity series too, btw. I'm taking a course this term and the lectures have been very difficult to follow.
This video series is pretty exciting.
Well... let me watch this video another couple of times... ;-) Very good indeed, kind of tough to grasp!
Chapeau!!! ... I cannot stop watching your videos.
It would be correct to note that in the matrix representation of the vector-transformation expression, the transformation matrix appears in transposed form.
This will avoid possible misunderstandings, since a given matrix is, in the general case, not symmetric; I mean, its indices cannot simply be swapped.
I honestly think it's easier to just visualize the inverse transformation in order to get to the vector in the new basis. It takes a bit more imagination at first, I admit, but then you never have to remember any of this, really.

What I mean is: just stick the vectors e1~ and e2~ (the new basis) as column vectors in a matrix, e1~ being the first column and e2~ the second. Now take the inverse, which geometrically means that this matrix will transform e1~ to the [1,0] vector and e2~ to [0,1]. This is important to visualize. Just follow the vector e1~ as it moves in space to [1,0], and do this for all column vectors.

Now, if you can imagine "carrying" any other vector along while the transformation moves e1~ to [1,0] and e2~ to [0,1], the result will be that vector in the new basis. This is why doubling the lengths of e1~ and e2~ results in half the length in the other coordinate frame: because of the inverse transformation.
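A quick numpy version of this recipe (the numbers are my own example): put the new basis vectors in as columns, take the inverse, and apply the result to any vector you want to re-express.

```python
import numpy as np

# New basis vectors e1~ and e2~ as the columns of a matrix
# (here: the standard basis doubled in length -- my example).
E_new = np.array([[2.0, 0.0],
                  [0.0, 2.0]])

v = np.array([1.0, 3.0])          # a vector written in the old (standard) basis
v_new = np.linalg.inv(E_new) @ v  # "carry" v along the inverse transformation

print(v_new)  # [0.5 1.5] -- doubling the basis lengths halves the components
```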
🧡 this, very good refresher (in my case)
@3:17 the equations for the forward and backward transformations seem wrong. You have forgotten to switch the ij for F and B. So instead of F_ij it should be F_ji, and the same for B.
Yep, you're right. I'm going crazy with all these little errors I made. It's really hard to catch them all.
I'll add that error to the description.
Yes, so the relationship of the components between the two systems is inverse and transposed. Thank you for your video anyway; it really helps me.
@@eigenchris Just to be sure: the basis transformation matrix is F_ij, and the coordinate transformation matrix is the transpose of F_ij^-1.
@@eigenchris
Don't worry, your little index errors are serving as an unexpected pedagogical device.
Also, if you've made an even number of mistakes with coordinate dummy suffixes (Dirac called coordinate indices "suffixes"; "index" was reserved for labeling which one), the mistakes often undo each other.
Sorry, but I don't understand why. Please could you explain?
This concept is the same as the case of function transformations. For example, when you scale f(x) to f(kx), you're essentially manipulating the entire x-axis, and the graph of the function therefore experiences the opposite transformation.
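A tiny numeric check of this analogy (the function and numbers are mine): the peak of f(x) = sin(x) sits at x = pi/2, but in g(x) = f(2x) it appears at x = pi/4. Scaling the input by k moves every feature of the graph to 1/k of its position, just as stretching basis vectors shrinks components.

```python
import numpy as np

f = np.sin
k = 2.0
g = lambda x: f(k * x)     # "stretch" the x-axis by k

x = np.linspace(0.0, np.pi, 100001)
print(x[np.argmax(f(x))])  # ~1.5708 = pi/2, the peak of f
print(x[np.argmax(g(x))])  # ~0.7854 = pi/4, same feature at 1/k the position
```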
Bro, that is so crazy. I love your videos!
That question can be thought of this way: the vector is constant, so a change in the basis has to be compensated by a change in the components. Therefore the inverse transformation matrix is used.
Hi, I am very grateful for this work you have made. Please could you clarify what you mean when you say (frame 5:53) that "the basis rotates cw and the components rotate ccw"? My question is: why do you say the components rotate, and not that the components vary? In a cw rotation the vector's component along e1 is greater than the one along e2. The basis rotates and the components vary...
5:40 I cherish the hope that you can write the right subscript to the left instead of as an exponent (if you really don't want to keep the conventional subscript notation).
Dear Chris, I have come to the conclusion that there is no mistake in what you have done in your videos.
For example, in your 3rd video you are multiplying a matrix by a column of basis vectors; in your 4th video you present the same information another way, just transposing the former matrix equation to get a row of basis vectors times a matrix, and from there you keep that convention for the following videos.
In this video there is also no error: when you take the index equation that transforms from the new basis to the old basis, you are multiplying a row of basis vectors times a matrix, like you did in the 4th video.
When you write the vector equation in index form, you can have row components times a column basis, or the opposite; in index form it has no impact on the matrix representation. But if you use your former equation for the basis transformation (a row of basis vectors times a matrix), then you are forced to write the vector equation in the form of a row basis times column components, and in the end you get the relation between the new column components and the old column components.
So the only thing to say here is that in the relation between bases you are comparing row bases, and in the equation between components you are comparing column components. There are no mistakes; it is only that your information is presented in an asymmetrical way: when you transform the basis you use row bases, and when you transform components you use column components.
I hope this is helpful, and that saying this in words rather than equations does not confuse things. If you would like, I can send you a file with the equations; my email is ggm7317@hotmail.com
Regards
I noticed in this video that the indices are different in the summation forms than the forms in "Tensors for Beginners 1". Was that due to the transpose matrix error in video 1 and its subsequent correction?
Immensely helpful videos, thank you!
THANK YOU VERY MUCH EIGENCHRIS.
4:37 "Now because vector components behave contrary to the basis vectors, we say that the vector components are contravariant"
Sir, what literature do you suggest for a deep study of linear algebra?
So, a vector is a grade-2 tensor? This tensor is the combination of two grade-1 tensors: the vector components (which are contravariant tensors) and the basis vectors (which are covariant tensors). Am I right?
Why do people not write indices as subscripts and superscripts, instead of above and below the letters?
It seems like a better system, with less confusion.
Opposite in two ways: the bases transform as "row vectors" multiplied from the left, and vectors are multiplied from the right (presumably), but F and B are also interchanged. I guess it will become clear soon, but at this point my question is why it is this way.
Awesome video series, thank you!
bruh, these vids are so good
Excellent!!!
Ok. I think I do not know what the horizontally written vector is supposed to represent.
I understand the notion of change of basis, but for what reason do we need this?
Some problems are easier to solve in certain coordinate systems. With circular motion, it's easier to use polar coordinates instead of Cartesian coordinates. I talk about this in my tensor calculus series.
Also, we need the laws of physics to work in every basis, so we should be able to change basis and get the same physical results.
So we use the backwards transformation in a way to essentially "normalize" or re-define the vector through a different set of unit vectors?
Not sure what you mean by "normalize". We just take some vector "v" and build it using a basis. The backward transformation tells us how to change the vector components when we change basis.
@@eigenchris Thank you for clarifying.
THIS VIDEO MAKES COORDINATE TRANSFORMATIONS CRAZILY EASY TO UNDERSTAND!
Who clicked 1 dislike?
Hi Chris,
Should F not have an upper and a lower index, as it represents a linear map, as opposed to two lower indices?
That is correct. I hadn't introduced the upper/lower index notation at this point yet.
Thanks
Great video!
Thanks! I'm really glad to know someone enjoyed this... I do apologize for the visual glitches. I will try to reupload a fixed version at a later time.
Man, nice, but someone needed a high-pass filter super bad.
The sibilance of a snake being spaghettified.
You mean low-pass? A high-pass clears out the low end and keeps the high end...
You say that vector components transform in the opposite manner to the basis vectors, but is that not only half true? Is it not the case that dual-basis vector components, i.e. components found through perpendicular projection, transform in the same way as the dual basis vectors?
And just to be clear about what I mean: dual basis vectors are vectors calculated from skewed basis vectors by the equation 1/(|e1| cos theta), where theta is the angle between e1 in the skewed coordinate system and e1 in the dual basis system.
Also, can you point me to the videos where you talk about parallel projection to find vector components versus perpendicular projection to find dual-basis vector components?
As I present in the next few videos:
Vector components transform contravariantly (opposite the basis vectors).
Dual vector components transform covariantly (same as basis vectors).
Dual basis vectors transform contravariantly (opposite the basis vectors).
In this series I draw dual basis vectors as "stacks" (as seen in the next video) to make it clear they are different from vectors. The "perpendicular projection" is just the result of counting the number of stack lines that a vector pierces.
I'm not so familiar with that cosine formula you gave. Do you have a link to a page that explains it?
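As a compact cheat sheet of the four rules just listed (a sketch in my own notation: F is the forward basis transform, B = F^-1 the backward one, and epsilon and alpha are my labels for the dual basis and dual components):

```latex
\begin{align*}
  \text{basis vectors } e_i        &\;\to\; \text{transform with } F          && \text{(covariant)} \\
  \text{vector components } v^i    &\;\to\; \text{transform with } B = F^{-1} && \text{(contravariant)} \\
  \text{dual basis } \epsilon^i    &\;\to\; \text{transform with } B          && \text{(contravariant)} \\
  \text{dual components } \alpha_i &\;\to\; \text{transform with } F          && \text{(covariant)}
\end{align*}
```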
Hi, I am really confused about this:
@3:17 yes, it should be B_ji instead of B_ij. So if I work through the rest myself I get:
v~_i = sum (from j = 1 to n) B_ji v_j, i.e. here we also have B_ji instead of B_ij, so not only is it the backwards matrix, it is also its transpose. Is this correct? Thanks.
Looking at this video again... I think what I've written at 3:17 is perfectly fine. I'm not sure why I wrote it down as a mistake earlier...
The correct matrix multiplication formula is v~_i = sum(over j) B_ij v_j. That basically means entry 1 of v~ is row 1 of B times the column v, and entry 2 of v~ is row 2 of B times the column v.
Does that make sense to you?
Please correct me if I am wrong.
At 3:17 your formula for ej is wrong; you need to change B_ij to B_ji, or simply swap the indices on ei and ej.
Referring to your previous video: when you give the example for the transformation of vector components, instead of writing the transformation matrices down (forwards and backwards), you write their transposes. Thus, in your calculation showing how to go from old vector components to new, you are actually applying B transposed instead of B.
This is why I think v~_i = sum (from j = 1 to n) B_ji v_j is correct. The only difference between this and what you just wrote in your comment is that I have B_ji, i.e. the transpose of B_ij.
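Whichever convention the video intends, the distinction being argued here is easy to see numerically. A tiny sketch with made-up numbers: sum_j B_ij v_j is B @ v, while sum_j B_ji v_j is B.T @ v, and the two differ whenever B is not symmetric.

```python
import numpy as np

# A deliberately non-symmetric B (example numbers mine):
B = np.array([[0.25, 0.5],
              [-1.0, 2.0]])
v = np.array([1.0, 2.0])

print(B @ v)    # sum_j B_ij v_j -> [ 1.25  3.  ]
print(B.T @ v)  # sum_j B_ji v_j -> [-1.75  4.5 ]
# The two only coincide when B equals its own transpose.
```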
excellent, thanks
You call v-subscript i (or v-superscript i) the component of the vector. I always learned that v-subscript i (or v-superscript i) times a basis vector is the component. In my opinion, v-subscript i (or v-superscript i) is just a scaling factor. Am I wrong?
That's interesting... I'm not really a math teacher and I'm sort of "winging" these videos based on my intuition, not formal mathematical reasoning, so I might not have all my terminology correct. You may be right.
I think I am going to keep using the terminology I use now, just for consistency. I hope it's not too confusing for you or others.
Interesting!
Beware: the B and F indexing has been swapped from the previous video. Vectors now seem to be indexed as rows (i.e., by the first index).
I wish all teachers were able to explain what they know this clearly. Like, if it's clear in your head, you should be able to explain it clearly too.
Thank you sir
what soft/hardware are you using?
Microsoft PowerPoint.
Can you make a video of metric tensor, Riemann geometry?
Does this video help?
ua-cam.com/video/SmjbpIgVKFs/v-deo.html
I am really touched that you are doing such a great job. I hope there will be more knowledge to come in the future.
Why don't you write a book? I would welcome it.
In Lecture 2
F = [ (2 , 1), (-0.5, 0.25)]
B = [ (0.25, -1), (0.5, 2)]
In Lecture 3 and the following lectures..
F = [ (2 , -0.5), (1, 0.25)]
B = [ (0.25, 0.5), (-1, 2)]
The matrices/arrays appear transposed, but the figure shown is the same.
Have I missed something?
I made a video 1.5 where I corrected an error I made in video 1. Sorry about this.
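For anyone who wants to check this numerically, here is a small numpy sketch using the matrices quoted above (I typed the entries from the comment, so verify them against the videos). Both layouts satisfy B @ F = identity; the two lectures just use transposed conventions:

```python
import numpy as np

# Lecture 2's matrices, rows as written in the comment above:
F2 = np.array([[2.0, 1.0], [-0.5, 0.25]])
B2 = np.array([[0.25, -1.0], [0.5, 2.0]])

# Lecture 3's versions are the transposes:
F3, B3 = F2.T, B2.T

print(np.allclose(B2 @ F2, np.eye(2)))  # True: B is the inverse of F
print(np.allclose(B3 @ F3, np.eye(2)))  # True in the transposed layout too
```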
Not clear to me what the difference is between basis vectors and vector components.
Basis vectors are the arrows we use as building blocks to make other vectors. Vector components are how much of each basis vector we use when building new vectors.
You can read it off in the matrix; I think that's better.
Hello, I have a question... Can I ask it?
Go ahead. :)
eigenchris, are covectors the covariant vectors?
Covector COMPONENTS follow the covariant transformation rule. I cover this in videos 4,5,6. Does that answer your question?
He's not your servant, mate, haha. Ask questions if you want, but don't expect him to personally send you notes.
cooldude 4172, do you have any material, buddy?
Wow
Hmmm, matrix analysis would have been much clearer/more effective/more efficient than the scalar analysis. Thanks anyway, though.
Shocked!!! Vectors and the basis transform the same way. Think of it this way: when transforming the basis, we are indirectly but actually transforming a vector whose components are all 1. So why would there be an opposite behaviour?
I'm not sure how to explain it in a better way. If the basis vectors get longer, then you need less of each basis vector to build other vectors. So when basis vectors get big, the components of other vectors get small.
eigenchris, this has had me confused for an hour now. If I take a pure rotation of the basis, without changing the magnitudes (I am talking about a pure rotation of an orthonormal basis), what do you think happens in this case?
If you rotate the basis vectors clockwise, it *appears* as if a vector V sitting in space is rotating counter-clockwise. The rotation matrix for the basis vectors will be the exact same as the rotation matrix for the vector V's components, except the angle of rotation will be positive in one and negative in the other.
@@shobhitkhajuria7464 eigenchris's reply is a very good explanation; I was stuck on the rotation aspect rather than the scaling aspect as well. Imagine you are sitting at the origin of the basis vectors and you rotate to the right along with them. The invariant vector/tensor will appear to rotate to the left from your point of view. This is basic relativity. If you calculate the components of the tensor vs. the basis vectors, you can use the same rotation matrix for both, but the angles of rotation have opposite signs. This is why the components seem to oppose a change in the basis vectors even under rotation.
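A small numpy sketch of this point (the angle and vector are my own example values): rotating the basis by +theta means the components of a fixed vector get rotated by -theta, i.e. the same rotation matrix with the opposite angle.

```python
import numpy as np

def R(theta):
    """2D rotation matrix by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta = np.pi / 6             # rotate the basis by +30 degrees (example value)
E_new = R(theta) @ np.eye(2)  # new basis arrows as columns

v = np.array([2.0, 1.0])      # components of a fixed vector in the old basis
v_new = R(-theta) @ v         # new components: same matrix, opposite angle

# The arrow itself never moved: components times basis is unchanged.
print(np.allclose(np.eye(2) @ v, E_new @ v_new))  # True
```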
Is scaling alone enough? Is that sufficient?
I don't understand why, at 3:13, the subscripts for the vector suddenly change from only "j" to both "i" and "j". This is not random, as F and B have "j" indices as well, and these are used in the simplification after substitution. Please help explain why the "j" and "i" subscripts for the vector suddenly appear.
I can't tell which summation you're talking about. Can you write out the formula to help me understand?
@@eigenchris OK, the two equations at the top left at 3:13: V = SUM(j=1 to n) vj ej = SUM(i=1 to n) vi ei. Why the different subscripts (i and j)? And why do these i and j correspond to Fij and Bij? Thank you!!
@@braigetori The letter used for summations doesn't matter. I changed the letter to make the letters in the final summation formula match. But you can always change the summation letter without changing the meaning of the formula.
@@eigenchris The letters used for summation do matter, because you relied on the fact that the letters in v match the letters in e (the basis vector). The substitution would not work unless the subscripts match. So there is an error in logic here; the subscript letters are clearly not arbitrary.
@@braigetori Yes, what I meant to say is "the summation index doesn't matter, as long as they match". I mean you can replace the summation letter, as long as you do it consistently.
Before the next video read the Wikipedia article on Covariance and Contravariance. en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors