Oh man, really thankful this series exists. I literally cannot understand Rudin's multivariable analysis chapters.
Probably telling you what you already know, but you're a very good teacher, Michael. You manage to condense a lot of teaching into a relatively brief yet densely packed video series on some of the most abstract topics.
You also communicate great enthusiasm for and understanding of your subject, which is inspiring. Please keep doing this.
It's incredible how you manage to communicate deep mathematical intuitions. Congratulations!
Do you plan to extend this series to also cover the basics of differential geometry, like differentiable manifolds, or is that too advanced?
In the first example it should be 3 dx + 2 *dy*
The very best videos on this topic in the internet. Clear, logical, with good examples. Thank you!
2:14 For the dual space, see Chris's "Differential Forms are Covectors" for intuitions. ua-cam.com/video/XGL-vpk-8dU/v-deo.html
Really great material, thank you Michael Penn.
Fun fact:
If vectors can be represented as a column of numbers, 1-forms can be represented as a row of numbers.
Why? Because if you apply a 1-form to a vector, it outputs a number, just like the scalar (dot) product of the vector with a fixed vector. Just think of it as matrix multiplication ;)
And when vectors are columns, the dual space with 1-forms is rows. :)
@@nahblue Ah, yes, here we are getting to dual spaces. Suppose we have a vector space V that contains the typical column vectors we usually work with. Let V* be the dual space of V. Elements of V* are exactly the 1-forms (covectors; the dual space has the same dimension as V). But since 1-forms are elements of a vector space (albeit a dual one), we can consider them vectors themselves! Then we can form another dual space: let V** be the dual space of V*. Can we make 1-forms on 1-forms? Yes, we can! It turns out that V** is isomorphic to V, so we can represent these 1-forms on 1-forms as our classical column vectors again. That means we can work only with vectors and 1-forms, which I find quite interesting.
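A small worked example of the row-times-column picture above (the numbers are made up just for illustration): write a 1-form as the row [3 2] and a vector as the column [1, 4]^T; the pairing is the matrix product [3 2][1, 4]^T = 3·1 + 2·4 = 11, a single number. Reading the same product the other way around, the column [1, 4]^T eats the row [3 2] and returns the same 11, which is exactly the V** ≅ V identification described above.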
I like the video, but it would have been nice to stick with the definition from the first video, where ⟨dx, dy⟩ was a function TpR^n -> R^n. Now ⟨dx, dy⟩ itself is the vector in TpR^n and goes in as the argument of the 1-form ω, even though dx and dy are themselves 1-forms, each with the definition TpR^n -> R.
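One way to reconcile the two uses of the symbols, for what it's worth: individually dx and dy are the coordinate 1-forms, with dx(⟨a, b⟩) = a and dy(⟨a, b⟩) = b, so the pair ⟨dx, dy⟩ in the argument slot can be read as shorthand for a generic tangent vector whose components are read off by dx and dy.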
Great video, really instructive! Thanks!
In the end, why is the negative branch for a not mentioned? (It gives another solution for ω.)
N.B.: at 6:21 (and after), there's a small typo: you wrote "2y" where it should be "2dy" (which gives ω(⟨dx, dy⟩) = 3dx + 2dy).
da peanut gallery doan miss
When I was studying differential forms at university, I had a textbook where coordinates carried upper indices, not lower. I got really confused when symbols like dx^1, dx^2, ..., dx^n started appearing, because I totally could not understand why dx^m ∧ dx^n was not equal to mn·x^(m+n-2) dx ∧ dx = 0, and so I was confused by the whole topic. Your videos are much appreciated; many thanks for the work you are putting into this.
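In case it helps anyone else with that notation: the upper index in dx^i is only a label for the i-th coordinate, not an exponent. The wedge product is antisymmetric, dx^m ∧ dx^n = -dx^n ∧ dx^m, so dx^m ∧ dx^m = 0, but for m ≠ n the product dx^m ∧ dx^n is a genuinely new basis 2-form and has nothing to do with powers like x^(m+n-2).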
At 1:05 you mention choosing any orthogonal coordinate system at P. It's drawn the same as the x and y axes chosen on the initial plane, but could this be any pair of axes not parallel to x and y?
Is there a video about Lie Groups and Algebras on this beautiful channel?
How did you know I was just confused about this exact topic? ;)
Everyone is because most books are crap
@DnB and Psy Production agreed
professor penn, i have a question regarding the example that you give at 6:18.
you explain that the 1-form ω[dx, dy] = 3dx + 2dy, with direction vector [3, 2] parallel to the direction vector [1, 2/3], geometrically projects vectors [dx, dy] onto the line dy = (2/3)dx, which made me wonder about two things:
first: how is this 1-form ω related to the 1-form α[dx, dy] = -3dx - 2dy, with direction vector [-3, -2] antiparallel to the direction vector [1, 2/3], which is apparently also projecting vectors [dx, dy] onto the line dy = (2/3)dx? since the vectors [3, 2] and [-3, -2] point in opposite directions, shouldn't we somehow distinguish between these two copies of the line dy = (2/3)dx? for example like so:
dy = (2/3)dx for ω[dx, dy] = 3dx + 2dy:
(0)--->(1)--->(2)--->(3)--->...
where (n) denotes how many copies of direction vector [1, 2/3] we take to reach this point (notice that here the vectors are flowing out of the origin)
and
dy = (2/3)dx for α[dx, dy] = -3dx - 2dy:
...--->(-3)--->(-2)--->(-1)--->(0)
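A quick numeric check of the difference between the two: on the vector [3, 2], which points along the line, ω[3, 2] = 3·3 + 2·2 = 13, while α[3, 2] = -3·3 - 2·2 = -13. Both 1-forms single out the same line dy = (2/3)dx, but they assign opposite signs to the same displacement, which is exactly the orientation distinction asked about here.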
Great video! I have a small question. At 5:24, you say that a 1-form is a multiple of the scalar projection onto some line. The definition you wrote is ‖⟨a, b⟩‖ · (scalar projection of ⟨dx, dy⟩ onto that line). But a and b are coefficients of the 1-form, yet the way you wrote it seems to say that they are components of a vector. Is that vector in R^n?
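For reference, the identity behind that definition: with ω = a dx + b dy and a tangent vector v = ⟨v1, v2⟩, we have ω(v) = a·v1 + b·v2 = ⟨a, b⟩ · v = ‖⟨a, b⟩‖ ‖v‖ cos θ, and ‖v‖ cos θ is the scalar projection of v onto the line spanned by ⟨a, b⟩. So the coefficient pair ⟨a, b⟩ can indeed be pictured as an ordinary vector in R² (or R^n), identified with the 1-form through the dot product.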
Any chance we could get a video on de Rham's theorem? I have tried to grok that one like 3 times and it's still over my head. Thank youuuu =)
Is he not implicitly using a result from linear algebra? Namely, that every linear function from a finite-dimensional inner-product space to R can be expressed as an inner product with a fixed vector.
Excellent course for undergraduate physicists who don't want to delve into pure mathematics
so are the (x, y) and the ⟨dx, dy⟩ the same terms that make up the vector contribution and the covector contribution to a (1,1) tensor?
@5:35 you have a 1-form (omega) acting linearly on the basis vectors of TpM (in this case TpR^n) to produce omega(⟨dx1, ..., dxn⟩) = a1 dx1 + a2 dx2 + ... + an dxn. The 1-form omega is separate from the vector ⟨dx1, ..., dxn⟩. Now, forward one video in the series, and in here ua-cam.com/video/z2yRiMg92S0/v-deo.htmlsi=PwARWd3iH9i86VhP&t=502 you define a 1-form omega1 as 3 dx - 2 dy - dz. All of a sudden the 1-form appears already dotted with the basis vectors of TpR^n! And then it acts on some other vector. It is extremely confusing. Related to this is the use of the same basis symbols for TpM and TpM^*.
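One way to keep the two roles apart: omega1 = 3dx - 2dy - dz is a single element of the dual space (a machine waiting for a vector), while something like omega1(⟨1, 1, 1⟩) = 3·1 - 2·1 - 1·1 = 0 is its value on one particular vector, an ordinary real number. The confusion comes from the same symbols dx, dy, dz appearing both in the recipe for omega1 and, in this series' notation, in the generic input vector ⟨dx, dy, dz⟩.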
New subscriber here, so first of all thank you for teaching, and sorry if I'm missing something you explained in previous videos. At 3:30 you define omega as a 1-form, so the codomain should be R, but I see the differentials dx and dy: maybe it's a notation problem, but the result doesn't seem to be a real number to me. I recognize that omega has R^2 as its domain, but I fail to see how R is its codomain.
Now I 👀 see what I think is the "abuse of notation": the dx and dy in the domain of omega, i.e. as the input of the 1-form on the left side of your definition omega(dx, dy) = ..., are vectors, and they are different (this is misleading imho) from the dx and dy on the right side, = a dx + b dy, which are covectors. In other words your 1-form is really omega(c dx + d dy) = (a dx + b dy)·(c dx + d dy), or succinctly omega(_) = (a dx + b dy)·_, and indeed this goes from R^2 to R.
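A concrete instance of that reading, with made-up numbers: take a = 3, b = 2 and feed in the tangent vector with components c = 2, d = 1; then omega gives 3·2 + 2·1 = 8, a single real number, so the codomain really is R.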
Having read some texts on relativity using p-forms, they seem to amount to nothing but a pointless change of nomenclature for what used to be called covariant derivatives of vectors and tensors. I didn't gain any startling new insights to repay the effort of having to translate the new symbols.
Sir, which book do you follow? Please reply, sir 🙏🙏🙏
What prerequisites are needed for this?
just multivariable calculus
Which playlist is this in? I want to watch from the beginning
At the end you consider the solution a = 3/sqrt(5), but one could also choose a = -3/sqrt(5). Is the solution not unique, or am I missing something that should have determined the line's orientation? After all, the 1-forms ω and -ω are different things.
The stereo audio is flipped and it's bugging me out
Settings > Ease of Access > Audio > mono
No, it is not a good place to stop!
@Michael Penn one question: the function omega you define as a dx + b dy does not seem linear or multilinear. Maybe there is something I'm missing because I am just learning all this for the first time.
That's because it's not supposed to be multilinear, only linear on R²: multiples of the full vector are mapped to multiples of the image of the vector, and the same goes for sums. You can't scale each component of the vector individually.
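A concrete check of that linearity, using the ω = 3dx + 2dy discussed above: ω(⟨1, 1⟩) = 5, ω(⟨2, 2⟩) = 10 = 2·ω(⟨1, 1⟩), and ω(⟨1, 0⟩) + ω(⟨0, 1⟩) = 3 + 2 = 5 = ω(⟨1, 1⟩). "Multilinear" would only come up for something that takes several vector arguments at once, like a 2-form.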
If dy=2 dx, then it seems like a=2 and b=-1. This gives the wrong answer, however. What am I missing? What are the correct relations between the coefficients and the tangent vector components? If dx=1, then obviously dy=2; so, why isn't omega=2 dx - dy? Is this a covariant/contravariant thing?
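Guessing at the intended setup, a likely source of the mix-up: the coefficients a, b of ω = a dx + b dy are not the components of the tangent vector. For the tangent direction ⟨1, 2⟩ (along which dy = 2 dx), the 1-form 2dx - dy is the one that vanishes on it (2·1 - 1·2 = 0), whereas dx + 2dy is the one aligned with it, giving 1·1 + 2·2 = 5 = ‖⟨1, 2⟩‖² on that vector. So yes, this is essentially the covariant/contravariant distinction: (a, b) transform like covector components, not like the vector's components.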
Then the 1-form is just the line in the local coordinate system?
At about 8:10 in the video I see omega = 3dx + 2dy, at the top of the board, but at the bottom I see something that is equivalent to 3dy - 2 dx = 0. How should I interpret this ?
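One plausible reading, without seeing the board again: 3dy - 2dx = 0 is the same condition as dy = (2/3)dx, i.e. it describes the line through the direction vector ⟨3, 2⟩ of ω at the top of the board (as a check, plugging ⟨3, 2⟩ into 3dy - 2dx gives 3·2 - 2·3 = 0). So the bottom equation would be the projection line associated with ω, not the 1-form itself.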
Maybe I missed an introductory video, but what is the motivation for this, or what would be considered the prerequisite knowledge to start talking about diff. forms
This is the second video in the series. The idea is to pick up just after the information from a multi variable calculus class.
n00bkillerleo We covered this in the last 2 weeks of my Honors Multivariable Calculus class, which was basically an advanced Calc III for math majors
Very talented lecturer, thank you very much, Sir.
Subtitles please.
🙏🙏🙏👍👍👍👍
I wanted to understand the use of differential forms in electrodynamics, but I am already lost after 2 videos :(
Grade: 100000000♾♾👏👏👏👏👏👏👏👏👏👏👏