Tensors for Beginners 4: What are Covectors?

  • Published 1 Oct 2024
  • These are really tedious to make... I'm starting to lose steam. I'll make sure I finish this series, but I'm not sure how much I'll be able to manage afterward.

COMMENTS • 329

  • @ismaeel747
    @ismaeel747 5 years ago +221

    I'm a PhD student, honestly your videos are a godsend for me, your explanations are so good and the number of views also testifies to that fact. Thank you very much, looking forward to watching the rest of the series.

    • @eigenchris
      @eigenchris 5 years ago +28

      I'm glad you find them helpful. Can I ask what you are studying for your PhD?

    • @ismaeel747
      @ismaeel747 5 years ago +32

      @@eigenchris I am doing theoretical chemistry; my aim in my research is to develop a tool for predicting quantum forces between atoms to a reasonably high degree of accuracy, and quickly.

    • @AyanKhan-if3mm
      @AyanKhan-if3mm 4 years ago +4

      @@eigenchris I understood covectors even though I am in high school. Is that normal, or a big achievement?

    • @johnrainwater5249
      @johnrainwater5249 4 years ago +50

      @@AyanKhan-if3mm it's normal if you're in 2nd grade, catch up dude

    • @shayanmoosavi9139
      @shayanmoosavi9139 4 years ago +20

      @@johnrainwater5249 psh. It's normal if you're an embryo. Catch up.

  • @MrCri1tical
    @MrCri1tical 3 years ago +26

    I am genuinely crying right now. As an engineering student minoring in physics, I am struggling so much with my general relativity classes. These videos explain everything clearly and simply. Your ability to break down concepts into simpler form reminds me a lot of Feynman. Keep it up man +1 sub

  • @AnanyaChadha
    @AnanyaChadha 1 year ago +20

    I can't imagine how long these videos must take, the effort & hard work is so clear. THANK YOU!!!!! seriously, I'm taking a class that's teaching us about tensors and I didn't understand at all and was considering dropping it in school. But the only thing getting me through it is your videos so I'm sticking with it! So much better than any textbook I could find. Thank you so much again!!!!!
    p.s. how did u learn all this stuff? is there a textbook you'd recommend? just curious bc your understanding is so stellar

    • @eigenchris
      @eigenchris 1 year ago +7

      Thanks for the praise. I didn't have a very good way of learning it. I just read as many articles on tensors as I could until it made sense. The wikipedia article on "linear forms" (another word for "covector") has this "stack" visualization, so that's part of what helped.

  • @miguelaphan58
    @miguelaphan58 6 years ago +104

    ...Jesus! GOD must bless you! For the very first time in history, there is someone capable of graphically explaining what a covector is in the context of tensor analysis... thank you so much... please keep doing this for differential forms and odd creatures like that...

    • @eigenchris
      @eigenchris 6 years ago +13

      I'm glad you found this helpful. I have a few more videos planned after this, but mostly on other tensors like linear maps and the metric tensor. Is there anything in particular about differential forms you don't understand? I could maybe give you some other articles or videos to learn from.
      David Metzler on YouTube has a very good playlist on them.

    • @cermet592
      @cermet592 6 years ago +2

      This might (I am guessing) be the use of the Jacobian for tracking the tensor transformation of differential bases. That is the heart of these tensors in physics and especially relativity.

    • @miguelaphan58
      @miguelaphan58 4 years ago +2

      yeah, pushforward and pullback operators in the tangent space context... final goal? The Lie derivative... nobody can explain it as you do with covectors

  • @BlueSoulTiger
    @BlueSoulTiger 6 years ago +204

    Congrats! eigenchris. Your vids are models of clarity and quality exposition. You have done the universe a big favour ;) Thank you.

    • @eigenchris
      @eigenchris 6 years ago +46

      Thanks. As I've said to many others, it really means a lot to me that people find these videos useful.

    • @ozzyfromspace
      @ozzyfromspace 6 years ago +16

      Useful is an understatement.... just saying.

    • @dantekaufmann7238
      @dantekaufmann7238 6 years ago +4

      indeed

    • @rajanalexander4949
      @rajanalexander4949 3 years ago +4

      @@eigenchris You made these heretofore cryptic concepts totally accessible. This is no easy feat! You've basically "opened the vault" and dispelled all the mystery behind these difficult (and usually poorly explained) concepts. A very deep thank you for this.

    • @markseidel19
      @markseidel19 2 years ago +1

      @@eigenchris U teach 10,000,000x better than university professors.

  • @cherma11
    @cherma11 3 years ago +30

    Man I know these are really tedious derivations but for the sake of completeness and accurate understanding of what is going on I need this. So thank you chris.

  • @philandthai
    @philandthai 6 years ago +47

    This is really jaw-dropping. I had no idea! Why didn't anybody tell me! How can I get to be 67 years old and still not know this, LOL. Excellent video. I'm watching the whole series in one big swallow, gulp.

  • @kshitijjha6737
    @kshitijjha6737 3 years ago +18

    You are great. I am so interested in General Relativity but didn't know tensors. I was waiting for almost 2 years to find something like this.

    • @eigenchris
      @eigenchris 3 years ago +1

      I'm doing a relativity playlist now if you want to watch that: ua-cam.com/video/bEtBncTEc6k/v-deo.html

  • @carlosantoniogaleanorios4580
    @carlosantoniogaleanorios4580 4 years ago +24

    Awesome videos. I just wanted to comment that colourblind people (like me) struggle to see the red as different from black. Red colour blindness is by far the most common sort of colour blindness, so if you could try to avoid the red-black pair when looking for contrast there will be about 1% of men that will be very thankful.
    Just trying to give a tip that would make your videos even better than they already are. Thanks for the amazing work.

    • @eigenchris
      @eigenchris 4 years ago +15

      Thanks a lot for pointing that out. I have worried at various points that my videos may be hard for colourblind people. I assume it's the part around 12:30 that is the problem?
      Feel free to point out any other parts in future videos that are hard for you so I know what type of stuff to avoid doing.

  • @signorellil
    @signorellil 3 years ago +9

    In the end we're all very happy you've stuck to doing this. Thanks from all your fans!

  • @guancongm
    @guancongm 4 years ago +8

    these are some of the best explanations i have seen so far! thank you!

  • @cat-.-
    @cat-.- 3 years ago +7

    You achieve StatQuest-level clarity in explaining algebra! I cannot think of a good enough compliment for you! I owe you a lot for these videos!!!!

  • @connorbeaton8375
    @connorbeaton8375 3 years ago +6

    I needed this and didn’t know I needed it. Tasty vid.

  • @karankhm6001
    @karankhm6001 3 years ago +7

    Thanks, Chris. These videos are helping me a lot. I am a Master's student but was taught tensors really badly, and your videos are helping me understand them clearly.

  • @amritanshraghav3793
    @amritanshraghav3793 6 years ago +21

    Amazing video series! I have tried to teach myself Linear Algebra and Tensors using a lot of online resources and yours has been the best! As an example, I had of course read about Linear Functionals and map from V -> R but the way you presented it here as a row vector being a function that takes a column vector was a complete 'Aha' moment for me. Thank you so much!

    • @eigenchris
      @eigenchris 6 years ago +9

      Thanks. It seems a number of people like this video in particular. I got the idea of the "stacks" from Wikipedia and a textbook by Misner, Thorne and Wheeler, but it seems this explanation doesn't get used very much elsewhere, which is a shame.

  • @purim_sakamoto
    @purim_sakamoto 3 years ago +1

    Oh, whoa.
    It's as if that covector makes the coordinate field no longer orthogonal.
    This is good stuff.

  • @zerefdragneel6344
    @zerefdragneel6344 4 years ago +4

    Thank you for giving this beautiful insight into covectors. I'm a beginner in this topic and am finding these videos interesting. Could you recommend a book where I can get insights like these?

  • @crehenge2386
    @crehenge2386 3 years ago +1

    At last, a teacher that isn't afraid of actually showing the numbers

  • @jayjun2435
    @jayjun2435 4 years ago +2

    Hello Mr. Chris, may I ask why the individual "up & down" numbers of a vector are written as v¹ and v² while the individual numbers of a matrix (matrix components) are written as F₁₂ and F₂₂?

    • @jayjun2435
      @jayjun2435 4 years ago

      I am just wondering when I use a superscript notation or when I use a subscript notation. Thank you :)

    • @eigenchris
      @eigenchris 4 years ago

      @@jayjun2435 I think in my early videos I don't get too technical with the "up and down" because I'm still introducing the ideas. In the later videos I get more consistent. A simple (but slightly wrong) explanation is that a superscript gives you the row and a subscript gives you the column. According to this rule F^1_2 would denote the 1st row and 2nd column. The more correct (but also more complicated) explanation is that a superscript denotes a "contravariant" index (transforms in the opposite way to the basis vectors) and a subscript denotes a "covariant" index (transforms in the same way as the basis vectors). If you have patience I will be explaining all this much better in my relativity series... it will just take me time to get it done.
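
      For concreteness, a minimal numerical sketch of that "opposite way / same way" rule (the 2x2 change-of-basis matrix and components below are made up for illustration, not taken from the video):

      ```python
      import numpy as np

      # Hypothetical 2D change of basis: the columns of F are the new basis vectors
      # written in the old basis ("forward" transform); B is the "backward" transform.
      F = np.array([[2.0, 1.0],
                    [0.0, 1.0]])
      B = np.linalg.inv(F)

      v_old = np.array([3.0, 4.0])   # vector components (upper index, contravariant)
      a_old = np.array([2.0, 1.0])   # covector components (lower index, covariant)

      v_new = B @ v_old              # contravariant components transform with the backward matrix
      a_new = a_old @ F              # covariant components transform with the forward matrix

      # The pairing alpha(v) comes out the same in either basis:
      print(a_old @ v_old, a_new @ v_new)   # 10.0 10.0
      ```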

    • @jayjun2435
      @jayjun2435 4 years ago

      @@eigenchris Thank you Mr. Chris. You know, the reason I like your videos and you is that I get most of my curiosity answered in about 10 minutes, unlike Khan Academy or other YouTubers. Thank you for constantly checking your comments! (btw: I think you are the only YouTuber without rude comments!)

  • @MultiAblee
    @MultiAblee 5 years ago +4

    like my dude that graphical explanation of covectors is a thing of beauty, thank you for putting in this work

  • @paulmcc8155
    @paulmcc8155 5 years ago +3

    Thank you for a wonderful set of videos. I know first-hand it takes a lot of work .
    A couple of things have helped me along the way:
    1) The very simplest covector that I can think of is: a linear function f:V-->R, where V is the one-dimensional vector space formed from the Real numbers R over the field of Real numbers: (R, R, +, .). On a graph, let V be the x-axis, and let the target R be the y-axis. Then each covector f is represented by a straight line through the origin, and the dual space V* would be the collection of all such straight line functions.
    2) I like the intuitive description of a geodesic as the path that an otherwise freely moving particle, constrained to a frictionless surface, would take.

  • @thedorantor
    @thedorantor 5 years ago +4

    Finally I understand this concept. Thank you so much!!

  • @ajaypotdar7161
    @ajaypotdar7161 1 year ago +2

    Thank you so much for these videos!!! I can't even begin to imagine the amount of effort that must have gone into making them so comprehensive and at the same time illustrative. You're amazing 👏👏

  • @RalphDratman
    @RalphDratman 3 years ago +1

    This is good. But I think you forgot to say, "Let's try to draw some contour lines"

  • @kachunli9853
    @kachunli9853 6 years ago +3

    from Hong Kong: a covector is a generalized function that produces a contour of a vector in any dimension, with its own axioms in another vector space, the so-called dual vector space?

    • @eigenchris
      @eigenchris 6 years ago +7

      Yes. Covectors are also called "dual vectors", "one-forms" and "linear functionals". They are basically linear maps from N-dimensional space to 1-dimensional space. And, yes, they have their own rules for forming a vector space.
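
      A tiny sketch of "covector = linear map that eats a vector and spits out a number", with made-up components (not from the video):

      ```python
      import numpy as np

      alpha = np.array([2.0, 1.0])   # covector components, written as a row vector

      def apply_covector(a, v):
          # Linear map R^n -> R: input a vector, output a single scalar.
          return float(a @ v)

      v = np.array([3.0, 4.0])
      w = np.array([-1.0, 2.0])

      print(apply_covector(alpha, v))   # 10.0

      # Linearity, which is what the "vector space of covectors" rules rest on:
      print(apply_covector(alpha, v + w) == apply_covector(alpha, v) + apply_covector(alpha, w))  # True
      print(apply_covector(alpha, 3.0 * v) == 3.0 * apply_covector(alpha, v))                     # True
      ```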

  • @michalbotor
    @michalbotor 3 years ago +1

    hey @eigenchris,
    when i was learning about covectors i was told that what they do is eat a vector and spit out its projection on a certain line associated with this covector. more precisely, if α(v) := ax + by is the covector, then we can write it as α(v) = t·v = projₜ(v), where t := 〈a, b〉 is a certain vector and projₜ(v) := t·v is the projection operator projecting vectors onto the line spanned by the vector t. this line has equation Ax + By = 0 or n·v = 0 or projₙ(v) = 0, where n := 〈A, B〉 is the vector normal to this line. since n ⟂ t, or in other words n · t = Aa + Bb = 0, we can guess that n can be chosen as n = 〈b, -a〉 and write the equation of the line as bx - ay = 0 or (assuming that a != 0) as y = (b/a)x.
    now, by watching your video i have learnt that i can think of covectors as a sort of stack of equally spaced ticks that eats a vector and spits out how many of these ticks were pierced by that vector. you also notice that these ticks are always perpendicular to the vector t and so also to the line spanned by it. which made it click in my mind and made me realize what covectors are... rulers! or more precisely, they eat vectors and spit out their projection on a certain ruler associated with this covector. you get the line of this ruler by the process given by me, and you get the ticks of this ruler by the process given by you. what do you think? it makes it even more beautiful, no?

  • @lucaolmastroni6270
    @lucaolmastroni6270 3 years ago +1

    Hi Chris, wonderful videos indeed, thank you. At time code 1:57 you generalize the sum with low and high indices running from value 1 to value n, but on the right you characterize the big sigma summation symbol as starting from i = 0. I suppose you meant i = 1? Regards.

  • @DboyLiao
    @DboyLiao 3 years ago +1

    Great job! eigenchris.
    Your videos are the best that I've ever watched about tensors and theory around it.
    Thanks a lot.

  • @zestyorangez
    @zestyorangez 2 years ago +1

    This really helps explain why a vector dot product with itself is related to its magnitude as well, cool.

  • @sudoscience5084
    @sudoscience5084 3 years ago +1

    Is there any relation between the row vector’s “linearity” property, and a homomorphism from group theory/ abstract algebra, where you have a function f such that f(a*b)=f(a)*f(b)?

    • @eigenchris
      @eigenchris 3 years ago +3

      All linear maps in linear algebra are group homomorphisms because they obey the law L(a+b) = L(a) + L(b). Covectors are a special case of linear maps where the output is always a scalar number (so a 1D vector space, in the case of real numbers).

  • @AniSepherd972
    @AniSepherd972 4 years ago +1

    first time i am able to grasp these and m thankful i found ur channel sooner

  • @peterrobinherbert
    @peterrobinherbert 3 years ago +1

    I have no doubt that these videos are tedious to make, but they are absolutely brilliant. I have never seen this explained so well.

  • @jayjun2435
    @jayjun2435 4 years ago +1

    Hello again Mr. Chris. It's that "annoying 12 year old kid who always asks dumb questions". But I do not understand how diagonal lines have anything to do with [2 1].

    • @eigenchris
      @eigenchris 4 years ago +1

      Are you familiar with the idea of plotting a line on a pair of x,y-axes? Usually this is done with the equation y = (slope) * x + (y-intercept). All I'm doing is plotting the lines where the expression at 6:00 equals zero, equals one, equals 2, etc.
      The collection of lines we get tells us what the covector looks like. The covector is a function, and the lines tell us how quickly the covector's output values change over space in the x,y-plane.
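
      For concreteness, a worked version of that plotting recipe, assuming the covector α = [2 1] used earlier in this thread (illustrative numbers only):

      $$\alpha(x, y) = 2x + 1y, \qquad \text{level lines: } 2x + y = c, \quad c = \dots, 0, 1, 2, \dots \;\Leftrightarrow\; y = -2x + c.$$

      Each value of c gives one straight line of slope −2, and taking consecutive integer values of c produces the evenly spaced "stack" of parallel lines shown in the video.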

    • @jayjun2435
      @jayjun2435 4 years ago

      @@eigenchris Thank you very much Mr. Chris. I am very sorry for my stupid 12 year old brain.

    • @eigenchris
      @eigenchris 4 years ago

      @@jayjun2435 I would hardly call you stupid for taking an interest in this subject at your age. I learned it starting at age 20 and only started making these videos when I was 26.

  • @nicholasflowers4251
    @nicholasflowers4251 4 days ago

    Very good! Small comment is that as you're taking your underlying field of scalars to be S, you ought to define the dual space to be functionals to the base field S. In the video you say/write "R" which is probably what you really have in mind for S anyway, but better to be consistent

  • @jordansmirnov7291
    @jordansmirnov7291 4 years ago +1

    😯I studied these concepts in many books, I spent hours and hours trying to understand them without any result... It was sufficient to watch your videos for a few minutes and now, not only do I understand them, but I can see them! Thanks a lot!

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +2

    Covectors are trippy, I love it!

  • @Salmanul_
    @Salmanul_ 4 years ago +1

    2:11, shouldn't i start from 1 to n?

    • @jaykim3662
      @jaykim3662 3 years ago

      ya I think it's a mistake.

  • @ansofficial709
    @ansofficial709 20 days ago

    You are an absolute GENIUS in how well you explain these concepts. I have never seen anything that presents difficult concepts so clearly!

  • @jayjun2435
    @jayjun2435 4 years ago

    Sorry for the interruption Mr. Chris. This is the "12-year-old kid who is interested in high school math" here (my name is Jay if you are wondering). But I am confused about the following quoted phrases --> We can _____ "inputs" or _____ "outputs" and "get the same answer". Are the inputs and outputs the v and w? Does "get the same answer" just mean that we can use different properties of addition and multiplication and "get the same answer"? Sorry again if I am asking idiotic questions... thank you :)

  • @brummi9869
    @brummi9869 1 month ago

    I had like 5 Jimmy Neutron "my brain is expanding" moments during this. Great explanation, thank you so much for making this.

  • @reup6943
    @reup6943 1 year ago

    I understand the definition of a covector as a function given in this video. But I don't understand why, or in which context, column and row vectors are so different, especially when the basis vectors are not orthonormal (no example of why they are different was given in the end). So I'm a bit confused. To me it's just a matter of how I lay the numbers down on paper, and it should not change any computation🤔 It sounds like it's just a matter of interpretation.

  • @antoniorojas2408
    @antoniorojas2408 6 months ago

    You son of a bitch. I've been trying to understand these topics for a week, and you just made me realize how everything is related. This is awesome, thanks a lot

  • @adarshchaturvedi3498
    @adarshchaturvedi3498 6 years ago +1

    Hi, I cannot understand how it works around 9:00... why is the number of points of intersection of v with the stack of lines equal to alpha(v)?

    • @eigenchris
      @eigenchris 6 years ago +1

      Around 7:00 I show the level sets of the function (which you could call alpha if you want). The level sets of the covector show the output values at a given point on the plane. When you give the covector a vector input, to get the output value, you need to look at the level set line where the tip of the vector arrow is--the number associated with that level set is the output value of the function. This number is also the same as the number of lines that the vector pierces.
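
      A small worked example of that counting, again assuming the covector α = [2 1] (illustrative numbers only):

      $$\alpha(\vec v) = \begin{bmatrix} 2 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ 1 \end{bmatrix} = 2\cdot 2 + 1\cdot 1 = 5.$$

      The tail of the vector sits on the 0 level line and its tip on the 5 level line; since the stack draws one line at every integer output value, the output climbs from 0 to 5 as you move along the arrow, so the arrow crosses exactly 5 lines, which is α(v).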

    • @adarshchaturvedi3498
      @adarshchaturvedi3498 6 years ago

      got it, Thanks a lot ....

  • @ricardodelzealandia6290
    @ricardodelzealandia6290 1 year ago

    I had to rewind the beginning of this several times because you state that vectors are contravariant, but in your previous video you said that vectors are invariant and vector *components* are contravariant. So I think there's an error at the beginning of this.

  • @mcalkis5771
    @mcalkis5771 7 months ago

    8:03 after some effort, I finally managed to do a nice geometric proof of this statement.

  • @alexanderwong6244
    @alexanderwong6244 2 years ago

    Just to confirm that covectors need not necessarily be represented by row vectors, as long as they are linear functions that map a vector space to scalars. You have chosen to do so for ease of presentation and explanation, is that right?

  • @brk1953
    @brk1953 3 years ago

    Vectors are not contravariant or covariant!! That's inaccurate.
    A vector is an invariant geometric object composed of two parts, one covariant and the other contravariant, so when you find the length of the vector it stays the same in all coordinate systems.
    This is productive in physics

  • @khandakerahmed7408
    @khandakerahmed7408 6 years ago +2

    EigenChris, you’re a phenomenon

  • @a52productions
    @a52productions 4 years ago

    Hm, seems more like covectors themselves are not functions, rather covectors equipped with some product operator are functions. Covectors themselves are just objects... right???
    Anyway, it's really cool how they provide an unambiguous definition of the dot product, especially with your grid formulation instead of using arrows! I was looking at all the change of basis stuff and worrying about how on earth a dot product would work in a non-orthonormal basis

    • @eigenchris
      @eigenchris 4 years ago +1

      I guess it depends on how you define things. The stack of lines can be thought of as a "thing"/"object". But the stack can also be interpreted as a function that eats a vector and outputs a scalar. Both can be true.

  • @fate_map1592
    @fate_map1592 2 years ago

    It would be a huge help if you could share the textbook/material you referred to when making this video.

  • @peronianguy
    @peronianguy 6 months ago

    I don't see how the technique you showed for drawing covectors taking them as a linear function f(x) = y (6:00) would lead to the plot of beta in 10:40. In fact it doesn't seem to be a function at all

    • @eigenchris
      @eigenchris 6 months ago +1

      Beta would be given by the row vector [3, 0]. So the function is [3, 0] * [x, y]^T = C => 3x + 0y = 3x = C. So you're right that particular one can't be written as f(x)=y.

  • @High_Priest_Jonko
    @High_Priest_Jonko 1 year ago

    Starting to lose steam 4 vids into basic linear algebra...congratulations

  • @michaelnemeth6952
    @michaelnemeth6952 5 years ago +1

    Nicely done! It would be useful to tie in the reciprocal basis often used by engineers.

    • @paulmcc8155
      @paulmcc8155 4 years ago

      I agree. I was able to get a handle on the Metric Tensor only after seeing where Reciprocal Basis Vectors fit into the picture. The way I see it, the Metric Tensor Matrix transforms a vector's components from those of the old Basis Vectors to those of the new Reciprocal Basis Vectors. When used in a dot product, those new Reciprocal Basis Vector components are exactly equal to those of the Covector, and those Covector components show up in the notation. Seen this way, the tensor notation hides all of this from you, in the background.
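
      A numerical sketch of that picture, with a made-up skewed basis (the basis and components here are assumptions for illustration, not from the video):

      ```python
      import numpy as np

      # Skewed 2D basis: the columns of E are the basis vectors e1=(1,0), e2=(1,1)
      # written in Cartesian coordinates.
      E = np.array([[1.0, 1.0],
                    [0.0, 1.0]])
      g = E.T @ E                      # metric tensor: g_ij = e_i . e_j
      E_recip = np.linalg.inv(E).T     # columns are the reciprocal basis vectors

      v = np.array([1.0, 2.0])         # contravariant components of a vector V
      V = E @ v                        # the actual arrow in Cartesian coordinates: (3, 2)

      v_cov = g @ v                    # metric maps contravariant -> covariant components
      print(v_cov)                                 # [3. 5.]
      print(V @ E[:, 0], V @ E[:, 1])              # 3.0 5.0 -- dot products with the original basis
      print(V @ E_recip[:, 0], V @ E_recip[:, 1])  # 1.0 2.0 -- dot products with the reciprocal basis
      print(v_cov @ v, V @ V)                      # 13.0 13.0 -- same invariant length squared
      ```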

  • @user-pk5rc4or2w
    @user-pk5rc4or2w 6 years ago +5

    it is a pleasure to watch this. Simple, accurate and unique explanation. This is top-notch.

  • @delq
    @delq 1 year ago

    This is the best explanation i've had about visualizing the dual vectors. Thank you sooo much !!!

  • @hangchen
    @hangchen 5 years ago +1

    @0:12 Vectors are invariant. Vector components are contravariant. Right?

  • @kajalkhirwar176
    @kajalkhirwar176 6 years ago +1

    Since alpha is a function from V to R, it could take non-integral values too. Why did you consider only integral values to form the stack? Is it because of the basis?

    • @eigenchris
      @eigenchris 6 years ago

      I use integer output values because they are easy to understand. I think I showed one example where the output was 0.5 around 8:19.

    • @kajalkhirwar176
      @kajalkhirwar176 6 years ago

      No, I have a problem with those equidistant lines you've drawn. How did you decide on that spacing? It affects the density, which ultimately affects what value the covector gives when applied to a certain vector. I failed to understand that.

    • @kajalkhirwar176
      @kajalkhirwar176 6 years ago

      Okay got it. Sorry and Thank you. Loved your approach.

  • @warrenchu6319
    @warrenchu6319 3 years ago

    For alpha(v) = 5, you chose the column vector v = [2 1] to satisfy 2*2 + 1*1 = 5. But there is an infinite number of other column vectors that also satisfy alpha(v) = 5, such as the column vector [1 3] such that 2*1 + 1*3 = 5, or [-1 7] such that 2*(-1) + 1*7 = 5. It was unfortunate that you chose [2 1] for the column vector, because those are the same numbers as the covector [2 1]. That was the source of confusion.
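
    One way to phrase the same point, using the α = [2 1] from the video (a short note, not a quote from it):

    $$\{\, \vec v : \alpha(\vec v) = 5 \,\} = \{\,(x, y) : 2x + y = 5\,\},$$

    which is exactly the "5" line of the stack; (2, 1), (1, 3) and (-1, 7) are just three points on that one line. Picking (2, 1) is one arbitrary choice among infinitely many, and the fact that its entries match the covector's components [2 1] is a coincidence of that choice, not a general rule.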

  • @IntegralMoon
    @IntegralMoon 6 years ago +1

    Hey eigenchris! Again, thanks so much for this series. You’ve helped more people than you can imagine with these! I do have a slight question though.
    At the beginning you say that the transpose of a column vector in an orthonormal basis is the same as a row vector. I’m not quite sure what you mean by that.
    My best guess is that the components of the vector in the dual space would be the same as the components in the original space. But its not clear how that relates to the idea of a row vector to me.

    • @eigenchris
      @eigenchris 6 years ago +1

      There is an idea I'm implicitly using here that I don't fully explain in detail until the end of the series, but it is possible to "pair up" a vector from a vector space with a partner covector in the dual space.
      In an orthonormal basis, it's possible to switch between the components of the vector and the components of its partner covector just by taking the transpose. In non-orthonormal coordinate systems, we can't just take the transpose or else we'll get the wrong components and the wrong partner covector.
      Does that help at all or is that just raising more questions? You might try watching videos 5 and 6 if you haven't already.
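
      A numeric illustration of the orthonormal vs. non-orthonormal point (the skewed basis below is chosen arbitrarily for the example):

      ```python
      import numpy as np

      # Two arrows given in an orthonormal (Cartesian) basis:
      V_cart = np.array([3.0, 2.0])
      W_cart = np.array([1.0, 1.0])
      print(V_cart @ W_cart)            # 5.0 -- here the transpose trick works: covector = V^T

      # The same arrows expressed in a skewed basis e1=(1,0), e2=(1,1):
      E = np.array([[1.0, 1.0],
                    [0.0, 1.0]])
      v = np.linalg.solve(E, V_cart)    # components (1, 2)
      w = np.linalg.solve(E, W_cart)    # components (0, 1)

      print(v @ w)                      # 2.0 -- the naive transpose now gives the wrong number
      g = E.T @ E                       # metric of the skewed basis
      print((g @ v) @ w)                # 5.0 -- the correct partner covector uses the metric
      ```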

    • @IntegralMoon
      @IntegralMoon 6 years ago +1

      Awesome. Then perhaps I'll hold off on this question for the time being till I get through the others. I've been sitting down with a notebook and pen trying to understand everything.
      I thought it made sense the first time I went through it, but then I thought about the action of a matrix on a vector and a covector and potentially confused myself. It would be true that if vT . M = M . v, then M would be a symmetric matrix, and not an orthonormal matrix. So I will just get through the rest of this series and come back to this later.
      Thanks for the quick reply :)

  • @MrMeltdown
    @MrMeltdown 1 year ago

    Thank God for @eigenchris. I've spent too long trying to read texts about this but never really "got it", getting lost in the detail and confusing terms... These videos do a seriously good job of showing each part simply, and also where some misconceptions can arise from assumptions I was making from previous experience which do not apply

  • @perryrice6573
    @perryrice6573 1 year ago

    Retired physics prof, I do quantum optics and never "bonded" with GR. This series is amazing, and I wish I could teach GR again, now that I have a much better idea of the PHYSICS

  • @karimshariff7379
    @karimshariff7379 1 year ago

    Excellent! I have seen the stack visualization in Misner, Thorne, and Wheeler, but you made it so clear! After the beta + gamma part as piercings in the x and y directions, I said: Of course! Now why couldn't I think of that.

  • @DegradationDomain_stuff
    @DegradationDomain_stuff 1 year ago

    ~ 10:16 you say that 2*alpha(v)=4 and 0.5*alpha(v)=0.5
    But why is this so and not the other way around or not something else entirely? Just by definition, right? Then... What is the definition?

    • @DegradationDomain_stuff
      @DegradationDomain_stuff 1 year ago

      Same question to the addition of the covectors.
      It is just by definition, right?

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 1 year ago

    This is an amazing explanation, I saw explanations of the dual vector space, but they don't tell you it's actually meant for forcefields, nor that it has a different addition/scaling method, it's very important to get this basic concept. I was wondering about the dual space and was like, yes but what about it, what's the point ? Thanks for providing that insight.

  • @MesbahSalekeen
    @MesbahSalekeen 2 years ago

    What is the logic behind the number of lines v crosses in the alpha stack being alpha(v)? As far as I know, alpha(v) = alpha1 * v^1 + alpha2 * v^2 + ...

  • @jacklam5658
    @jacklam5658 1 year ago

    your video is extremely helpful!
    you clarified a concept that I had watched so many videos about but still found confusing!
    thank you so much !
    Jack Lam from Hong Kong

  • @Pavel.Zhigulin
    @Pavel.Zhigulin 1 year ago

    I broke my mind trying to understand why line crossing is actually the result of applying a covector to a vector.
    It's used as a super obvious fact at 9:22, but actually, I still do not truly understand how this magic works. You just declare it and it works, but I cannot see any connection between this line-drawing magic and the formulas

    • @eigenchris
      @eigenchris 1 year ago

      The lines in the covector stack are the "level sets" of a linear function that takes a vector as input and outputs a scalar. I try to show this at 6:00.

  • @ssrbaqri
    @ssrbaqri 3 years ago

    All your previous videos are simply superb, except that the indices got messed up in places... and even though you have made corrections in the following videos, I'd urge you to remake those videos without any errors... that'll immensely improve the value of this great resource.
    However, I have some reservations about this video... although you explained the concept of covectors in a technically correct manner, I find Dirac's illustration the best because it is much more intuitive... in that construction, contravariant components are obtained by drawing lines parallel to the coordinate axes, but in obtaining covariant components, perpendiculars are drawn from the tip of the vector onto the coordinate axes. The method makes great intuitive sense when the sum of the products of covariant and contravariant components with the same indices gives the length of the vector.

  • @alexanderwong6244
    @alexanderwong6244 2 years ago

    Thanks very much for the very good videos and explanations. Just one question, when adding two covectors, what happens if beta and gamma are oriented differently from the horizontal and vertical directions. Is the sum the same as the case shown?

  • @BigBrother4Life
    @BigBrother4Life 3 years ago

    Could someone tell me? At 4:08, the alpha row vector has i & j components [2 1] while the v vector has [0.5, 2], but v is vertical, so it should have only an i component, not a j component. Is the drawing wrong?

  • @gianniskara2709
    @gianniskara2709 1 year ago

    Thank you so much for your excellent videos!
    This is the first time that everything you analyze is presented to me in such an organized way.
    Thanks, thanks so much!

  • @user-hh5bx8xe5o
    @user-hh5bx8xe5o 3 years ago

    Covectors could be called antivectors.
    They cancel the action of vectors.
    A vector can be seen as a linear map from the 0 dimension space to the 1 dimension vector space.
    A vector extends space. A covector contracts space.
    Covectors map 1 dimension vectors to 0 dimension space.
    As such they are -1 dimension object.

  • @thevegg3275
    @thevegg3275 6 years ago

    Also, this animation is confusing since it says that the contravariant characteristic is that as the basis vectors increase, the components decrease.
    ua-cam.com/video/CliW7kSxxWU/v-deo.html
    They then say a covariant characteristic is that as the dot product of the basis vectors with the vector increases with increases in the basis vector (covariant). The problem is when the covariant basis vectors are 1, the perp projection touches the tip of the vector. When they increase it to show how the dot product increases...there is no way that the perp projection of the basis vectors will be even close to the tip. If they are drawing an inverse analogy to the contravariant basis vector growing, they break the symmetry. When the contravariant basis vector grows or shrinks the components make up the difference and always touch the tip of the vector. I'm wondering what I'm missing here. Thanks!

  • @goranbutkovic9380
    @goranbutkovic9380 1 year ago

    A covector is also called a differential 1 form or a linear functional. The total derivative dx is one example of a differential 1 form and it is dual to the gradient, which is a vector.
    Great video, clearly explained, nice!

  • @vipinx8881
    @vipinx8881 1 year ago

    Saw covectors appear in two classes at the same time, in different ways, and didn't quite understand either. Watched this video, and now they make perfect sense. Thanks!

  • @paulrodriguez1116
    @paulrodriguez1116 5 months ago

    What a great series of videos, thanks for making them possible. I'd like to ask which program you use to create the graphics and diagrams in this video?

    • @eigenchris
      @eigenchris 5 months ago +1

      All my videos are made by recording Microsoft Powerpoint presentations and exporting them to video.

  • @mauette2000
    @mauette2000 1 year ago

    As another commenter noted below, these level sets are critical for any of this integration by forms to be correct. The linear example was straightforward, but when I tried to research a more detailed explanation of the proper generation of these sets, the results were daunting.

  • @ThePiloks
    @ThePiloks 3 years ago

    Awesome video series. I read Fecko's differential geometry book and it is so difficult to grasp, but this clears it all up, so thanks, thank you so much man, you are so generous. Anyway, greetings from Argentina

  • @fayz100
    @fayz100 3 years ago

    Fantastic videos. Years ago I took advanced courses in GR in which we were given no geometric intuition. I bought Misner, Thorne and Wheeler to help with that, but your videos are even better. Keep up the great work!

  • @alexanderwong6244
    @alexanderwong6244 2 years ago

    Around 11:53, the diagram on the right seems to be a little misleading. It appears to show that the vector v is pointing in the direction of the stack lines for beta + gamma, when in fact this is not necessarily the case.

  • @raulavila4986
    @raulavila4986 3 years ago

    This is what I was looking for. I'm from Spain and there are no good videos on this subject in Spanish.
    Thank u so much, Chris.

  • @warrenchu6319
    @warrenchu6319 3 years ago

    I now think of covectors as measuring something and vectors are the things being measured. Thus when a covector acts on a vector, the output is a measurement - a real number.

  • @acebdf5101
    @acebdf5101 1 year ago

    Is the scalar resulting from a covector and vector multiplied together always representing the number of lines that the vector pierces?

    • @eigenchris
      @eigenchris 1 year ago +1

      Yes. Although for simplicity in some parts of this video, I only drew 1 line out of every 100 to make the picture easier to read.

  • @cermet592
    @cermet592 6 years ago +4

    This is a subject that can baffle even someone with experience in Linear Algebra (LA) and Calculus. This is, again, an excellent video because you refuse to just define the concepts via generalized math or use math-specific theory statements based in LA, but instead use very intuitive examples. Without a doubt the best example I have seen of these concepts.

    • @eigenchris
      @eigenchris 6 years ago +5

      Thanks for the kind words. I just explained things in the way I would want to hear them if I was learning it. I prefer to motivate math using practical examples ("bottom-up") rather than just using abstract reasoning ("top-down").

  • @Oh4Chrissake
    @Oh4Chrissake 2 years ago

    7:10 And since [2, 1] [2 1] ^T = 5, and alpha cuts 5 lines, this illustrates what is given in the next slide. Cool!

  • @gregoriocuesta5551
    @gregoriocuesta5551 6 years ago

    Yes, you did, I see: perpendicularity of v to the alpha stack lines is not required; any direction of vector v is valid for counting the stack lines it meets. Many thanks.

  • @raydencreed1524
    @raydencreed1524 2 years ago

    I’m having trouble getting through this section because I can’t tell what these are for or why one would think to use them

    • @eigenchris
      @eigenchris 2 years ago +1

      Covectors are used any time we want to measure densities. The most obvious application is waves, which have a density in space (wavenumber) and a density in time (frequency). Each surface in the stack represents a wavefront. If you search "Relativity 106a" on YouTube, you'll see another video on covectors I did that shows how to use covectors to represent waves, and how we can use them to understand the doppler effect geometrically.

  • @Bigfoot1144
    @Bigfoot1144 17 days ago

    I pogged when he visualized the covectors

  • @-VHSorPlanetTelex
    @-VHSorPlanetTelex 5 years ago +2

    Prior to your videos we were living in darkness...

  • @RAZERZONE1000
    @RAZERZONE1000 6 years ago

    Just one note for the part from 10:27: If the covector [ 3 0 ] is applied to vector V^T [ 1 1 ] (transposed form), the result is 3, which is shown in the first picture from the left. Then, if the covector [ 0 2 ] is applied to vector V^T [ 1 1 ] (transposed form), the result is 2, which is represented by the second picture from the left. Applying the SUM of these covectors [ 3 2 ] to the same vector V doesn't mean that vector V is perpendicular to the covector lines, as seems to be shown in the third picture from the left. I mean vector V^T [ 1 1 ] doesn't lie in the direction of vector [ 3 2 ]. But we know that the covector lines are perpendicular to vector [ 3 2 ].

    • @eigenchris
      @eigenchris 6 years ago

      You are correct, although I wasn't trying to imply that the yellow vector was perpendicular to the stack lines.

  • @maxwellsequation4887
    @maxwellsequation4887 3 years ago

    This video helped me understand covectors AND helped me with geography
    Thanks

  • @AndreKowalczyk
    @AndreKowalczyk 2 years ago

    At 1:50 minute the summation should be from i=1. It's a typo, I think. There is no i=0.

  • @marcoe6704
    @marcoe6704 6 years ago +3

    This series of videos is the most valuable resource for understanding tensors I've ever found. Thanks a lot eigenchris.

  • @BioAbner
    @BioAbner 3 years ago

    So... basically they're just dot products?

  • @atomicgeneral
    @atomicgeneral 3 years ago

    Did in 14 minutes what my university lecturers could not achieve over a semester.

  • @davidhand9721
    @davidhand9721 4 years ago

    Doesn't this make covector(vector) = proj_covector vector? Or something like Re(vector/covector) in the imaginary plane? I would rather call it vector/covector, that's just me I guess.

    • @eigenchris
      @eigenchris 4 years ago

      I'm sorry, but I don't understand your comment. What do you mean by "proj_covector" and "vector/covector"?

    • @davidhand9721
      @davidhand9721 4 years ago

      @@eigenchris it's been a while since I was in school, but my recollection is that proj is an operator that gives the component of a vector in the direction of some other vector. The projection of the vector onto what in this case is the covector. It either returns the scalar component or that component multiplied by the given covector, I forget which. It's taught right along with the dot product, iirc. It's written with the subscript being the vector that serves as the new basis.
      The vector division I mention is in the 2 dimensional complex analogy. Dividing by a complex number gives a real part that is the dividend's component with the basis of the divisor and an imaginary part that is the dividend's component with the basis of a vector orthogonal to the divisor with the same magnitude.
      I guess neither of these is really filling quite the same role given that the bases of a covector need not be orthogonal. I'm trying to translate my knowledge of complex numbers, quaternions, and (especially) geometric algebra to get a good intuition for tensors. Is there a straightforward relationship between these concepts?

  • @SupriyoChowdhury5201
    @SupriyoChowdhury5201 9 months ago

    What a great video!!! Thank you so much.

  • @samuelmcdonagh1590
    @samuelmcdonagh1590 1 year ago

    HA! “take ten seconds to figure it out”
    >10 second unskippable ad pops up

    • @eigenchris
      @eigenchris 1 year ago +1

      I wish my videos didn't have ads, but YouTube forces them in. I guess you can use the time to solve imaginary homework problems.

  • @tuyenpham1756
    @tuyenpham1756 2 years ago

    at 1:55 I think the summation is from i=1 to n? small typo

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 3 years ago

    Very good and refreshing, with some stuff I was supposed to know and things that are new.