Dual Basis - Covariant & Contravariant Components

  • Published 6 Sep 2024

COMMENTS • 21

  • @thevegg3275
    @thevegg3275 a year ago +1

    At 9:40 you say that as you close the skewed angle towards the vector, the components get smaller, but I say they get larger. I've done this on paper, and if you measure from the origin to the point of intersection with the horizontal line of the vector, the length of the line formed increases as the angle decreases.
    To be specific, the parallel projection onto the skewed axis gets larger, while the parallel projection onto the horizontal axis gets smaller.

  • @thevegg3275
    @thevegg3275 5 months ago +1

    Regarding the dual basis: with the exception of Daniel Fleisch, no one else on UA-cam discusses the dual basis in this way. It's always about vector spaces and rules, and I just don't understand why dual bases are not explained graphically like you have done. Or are they talking about different animals?
    One question would be: how does the geometric explanation take the contravariant and covariant components and apply them to tensors? In other words, could you do another video explaining the relationship of this video to tensors?

  • @svenwindpassinger2170
    @svenwindpassinger2170 a year ago +1

    May I answer some questions here like so:
    (Please correct me if I'm wrong!)
    There are 2 kinds of transformations:
    1) rotation only, without changing the angle of the coordinate system (contravariant);
    2) changing the angle of the basis vectors (covariant).
    So the wrong names don't matter much, as long as you know which transformation you have.
    That's why this video is the best I've seen so far.
    If you know whether your situation is transformation 1) or 2), you know what to call it and whether the indices stand up (superscript) or down (subscript).
    Also of interest:
    Contravariant:
    - basis has index down
    - vector components have index up
    Covariant:
    - basis has index up
    - vector components down
    That means the opposite way round!
    I think that was the main obscurity.
    Hope I'm right 😊

  • @svenwindpassinger2170
    @svenwindpassinger2170 a year ago +2

    I've heard a lot of takes on co- & contravariant, but this was the first explanation with consistent quality.
    My compliments! And thank you!
    I have to understand things; knowledge alone is too little for me.
    Greetings, Sven

  • @swan2799
    @swan2799 2 years ago +2

    Rarely seen anywhere else. Thanks!

  • @rockyshepheard6054
    @rockyshepheard6054 2 years ago +1

    This is just what I've been seeking for years! Thank you. One question.
    Why wouldn't it be more understandable to write, for example, covariant components like this:
    A sub 1 e sub 1 instead of A sub 1 e sup 1?
    This way you know that A sub 1 is the covariant component and e sub 1 is the covariant basis vector.
    The standard way makes it seem like A sub 1 is covariant and e sup 1 is the contravariant basis vector in the x direction.

    • @TheCynicalPhilosopher
      @TheCynicalPhilosopher  2 years ago +2

      It's mostly because of the Einstein summation convention, where indices that are opposite (one up, one down) are assumed to be summed over. It also has to do with index lowering and raising when working with higher order tensors, where the Einstein summation convention is the standard.
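A minimal numerical sketch of the pairing the reply describes (not from the video; the skew angle and variable names are illustrative). An upper index paired with a lower index is summed, and either pairing rebuilds the same vector:

```python
import numpy as np

# Skewed (non-orthogonal) covariant basis vectors e_1, e_2, stored as rows.
e_down = np.array([[1.0, 0.0],
                   [np.cos(np.pi / 3), np.sin(np.pi / 3)]])

# Dual (contravariant) basis e^1, e^2, defined so that e^i . e_j = delta^i_j.
e_up = np.linalg.inv(e_down).T

# A vector and its two sets of components.
V = np.array([2.0, 1.0])
A_up = e_up @ V      # contravariant components A^i = e^i . V
A_down = e_down @ V  # covariant components     A_i = e_i . V

# Einstein summation convention: one index up, one down, summed over.
V_from_up = np.einsum('i,ij->j', A_up, e_down)    # A^i e_i
V_from_down = np.einsum('i,ij->j', A_down, e_up)  # A_i e^i
# Both reconstruct V exactly.
```

The raising/lowering the reply mentions is visible here: `e_down @ V` and `e_up @ V` convert between the two component types via the basis and its dual.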

  • @ahmedouebnou400
    @ahmedouebnou400 10 months ago

    Very helpful.
    Thanks.

  • @shuewingtam6210
    @shuewingtam6210 2 years ago

    Some videos describe the covariant vector components as the orthogonal projection onto the conventional basis vector e sub i (the dot product of the vector and the basis vector). You describe the covariant component as the projection onto the basis vector e sup i. What is the difference? I am somewhat confused.

    • @TheCynicalPhilosopher
      @TheCynicalPhilosopher  2 years ago

      By convention, for covariant vectors, the basis should have a superscript for the index. The difference is only conventional (it wouldn't change the meaning of covariance if it was done the opposite way), used for the Einstein summation convention.

    • @TheCynicalPhilosopher
      @TheCynicalPhilosopher  2 years ago

      You can check it out on the Wikipedia article if you like: en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors

    • @shuewingtam6210
      @shuewingtam6210 2 years ago

      For tensor algebra, the dot product of the dual basis vector e sup i and the basis vector e sub j is the Kronecker delta, equal to 1 if i=j. But you mentioned that the product of a dual basis vector and a conventional basis vector is 1/cos(theta). Is there any difference between these two statements?

    • @TheCynicalPhilosopher
      @TheCynicalPhilosopher  2 years ago

      @@shuewingtam6210 The dot product gives the Kronecker delta (1 for i = j and 0 for i ≠ j) if the bases are orthonormal and rotated 0° from each other (when both the contravariant and covariant basis vectors are unit length, orthogonal, and pointed in the same direction).
      I will use the caret ^ to denote a superscript (like e^1) and the underscore _ to denote a subscript (like e_1).
      Say we have a 2D vector *V* = v_1e^1 + v_2e^2.
      If the contravariant and covariant bases for the vector *V*, say e^1 and e_1, are at different angles, then the dot product will be 1/cosθ, because as the angle θ between e^1 and e_1 approaches 90°, the length of the covariant basis e_1 approaches infinity, making the product e^1 · e_1 go towards infinity, the same as 1/cosθ as θ→90°.

    • @shuewingtam6210
      @shuewingtam6210 2 years ago

      The wiki you mentioned cites an example of rescaling the basis vectors such that e^1•e_1=1, e^1•e_2=0, e^2•e_1=0, e^2•e_2=1. That works, rather than e^1•e_1=1/cos(theta). What is the difference?
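The two statements in this thread can be reconciled numerically (a sketch, not the video's derivation; the skew angle and names are illustrative): with the Wikipedia normalization e^1·e_1 = 1 always holds, and 1/cosθ turns out to be the *length* of the dual vector e^1, where θ is the angle between e^1 and a unit-length e_1:

```python
import numpy as np

phi = np.pi / 3  # angle between the two unit basis vectors e_1 and e_2
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(phi), np.sin(phi)])

# Dual basis via the matrix inverse: rows d1, d2 satisfy e^i . e_j = delta^i_j.
E = np.array([e1, e2])
D = np.linalg.inv(E).T
d1, d2 = D

# Kronecker-delta property holds exactly after this rescaling:
assert np.isclose(d1 @ e1, 1.0)
assert np.isclose(d1 @ e2, 0.0)

# 1/cos(theta) is the LENGTH of e^1, with theta the angle between e^1 and e_1.
theta = np.arccos((d1 @ e1) / (np.linalg.norm(d1) * np.linalg.norm(e1)))
print(np.linalg.norm(d1), 1 / np.cos(theta))  # both ≈ 1.1547 for phi = 60°
```

So the dot product is always 1 under the reciprocal normalization; 1/cosθ describes how long the dual vector must be for that to happen, and it blows up as θ→90°.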

  • @thevegg3275
    @thevegg3275 a year ago

    Ok, after months of thinking on this, I have an explanation of covariant vs contravariant that no one has ever uttered, as far as I know. Here goes. Hold my beer!
    ---
    You combine the contravariant pieces (components times their regular basis vectors) tip to tail to reach the tip of the vector you're defining.
    But with covariant, you combine (covariant components times their dual basis vectors) tip to tail to get to the tip of the vector you're defining.
    Why does no one explain it like this?
    But my question is: how do covariant components and dual basis vectors relate to the dot product? Please correct me if I'm wrong on the following...
    DOT PRODUCT: A (vector) dot B (vector) = a scalar quantity
    CONTRAVARIANT: described by the combination of contravariant (components times regular basis vectors) added tip to tail of
    A (vector) dot B (vector).
    COVARIANT: described by the combination of covariant (components times dual basis vectors) added tip to tail of
    A prime (vector) dot B prime (vector).
    QUESTION:
    If we dot product A prime (vector) with B prime (vector), does that scalar quantity equal
    A lower 1 prime times e upper 1 prime PLUS A lower 2 prime times e upper 2 prime?
    If so, aren't we then saying that a scalar is equal to a vector???
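A quick numerical check of the distinction this question runs into (a sketch with an illustrative skewed basis, not anyone's official answer): A_1 e^1 + A_2 e^2 is a vector — it rebuilds A tip to tail — while the dot product is the scalar obtained by pairing the covariant components of one vector with the contravariant components of the other:

```python
import numpy as np

# Skewed basis e_1, e_2 (rows) and its dual e^1, e^2.
E = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])
D = np.linalg.inv(E).T

A = np.array([2.0, 1.0])
B = np.array([-1.0, 3.0])

A_up, A_down = D @ A, E @ A  # contravariant / covariant components of A
B_up, B_down = D @ B, E @ B

# A_i e^i added tip to tail is a VECTOR: it reconstructs A itself.
assert np.allclose(A_down @ D, A)

# The dot product is a SCALAR: pair covariant with contravariant components.
assert np.isclose(A_down @ B_up, A @ B)
assert np.isclose(A_up @ B_down, A @ B)
```

So no scalar ever equals a vector: the component-times-dual-basis sum is the vector itself, and the scalar A·B comes from contracting components of *two* vectors, one set covariant and one contravariant.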

  • @truthprevail2742
    @truthprevail2742 a month ago

    Unclear diagram, writing and notation

  • @jameshopkins3541
    @jameshopkins3541 10 months ago

    ANOTHER SCRIBBLER

  • @godfreypigott
    @godfreypigott 3 years ago +3

    I recommend you script your videos. The stumbling makes you appear somewhat unsure of what you are saying.

    • @dan-js1mc
      @dan-js1mc 5 months ago

      Rubbish, it was perfectly clear

    • @godfreypigott
      @godfreypigott 5 months ago

      @@dan-js1mc 4 to 1 says you're wrong.