i need a joint
Fr.
I need a ligament
Hi eigenchris, just wanted to say that while the video is clear, the Kronecker product is usually computed by shoving the tensor on the right into the tensor on the left, which is how it's done at least in quantum computing.
Yeah, I also found that confusing when I read a paper from Tamara Kolda (epubs.siam.org/doi/pdf/10.1137/07070111X, cited 5000 times). I think right-to-left is more common, but it's still explained brilliantly in this video :)
@@KleineInii There is a math book called Abstract Algebra by Dummit and Foote. Take a look; the Kronecker product is just the tensor product with respect to a different ordering of the basis.
My man this is exactly what I needed thank you
Just amazingly clear and good
This channel is a goldmine
Awesome! Thanks for this :)
Thank you, eigenchris. I never understood tensors when I tried to learn them before; you explain them from the start, for beginners. These videos are really helpful.
Please correct your spellings
I told my colleague to study tensor calculus on this channel only.
Thanks a million!
Hi, I'd just like to clarify something. At 2:21, is the summation sign implied for v^j e_i? I understand Einstein notation is used when the same letter appears on top and on the bottom, but in this case the letters on top and bottom are different.
The tensor product of e_i and epsilon^j is a stack of vectors all parallel to e_i, and the result of acting with this linear map on a vector is a linear combination of these parallel vectors. So the result is a vector parallel to e_i (which is v^j e_i).
It's not a sum. v^j e_i is the *number* v^j multiplying the *vector* e_i. The *number* v^j is the jth component of the *vector* v.
Same thing, different context
Got it, cheers
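To make the "no summation" point concrete, here is a small worked example in a 2D basis (my own illustration, using the same notation as the video):

$$
v = 3\,e_1 + 4\,e_2 ,\qquad
\varepsilon^2(v) = v^2 = 4 ,\qquad
v^2 e_1 = 4\,e_1 .
$$

Here $i = 1$ and $j = 2$ are both free indices, so $v^j e_i$ is a single scaled basis vector; only a repeated upper/lower pair like $v^k e_k = 3\,e_1 + 4\,e_2$ triggers the implicit sum.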
Hi! It seems to me that at 2:49 the new vector components should be represented by a latin letter and upper indices (like [w^1 over w^2] rather than [omega_1 over omega_2]). Thank you again for the videos.
Yes, my mistake.
great video. thank you
Would the array at 3:02 be a rank 3 tensor?
Am I correct in guessing that the rank = m+n, where m is the # of covariants and n is the number of contravariants?
Hold a second... now a row vector is an array. Good.
I didn't quite get it. What is the definition of the Kronecker product? Is it just "we distribute the array on the left into the array on the right"?
Something seems off at 1:36. It is my understanding that the tensor product takes a (p,q)-tensor and an (a,b)-tensor and produces a (p+a,q+b)-tensor. In the example shown, we have the vector e_i, which is a (1,0) tensor, and eps^j, which is a (0,1) tensor. This means that the tensor product of e_i and eps^j is a (1,1) tensor. What's confusing about 1:36 is that the tensor resulting from the tensor product of e_i and eps^j should take a covector and a vector as input. That would make sure the output of the tensor is a scalar and not a vector.
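The two pictures are consistent: a (1,1) tensor can be read either as a bilinear map eating a covector and a vector and returning a scalar, or, by filling only the vector slot, as a linear map from vectors to vectors. A short worked line (my own summary of this standard identification, not something stated in the video):

$$
(e_i \otimes \varepsilon^j)(\alpha, v) = \alpha(e_i)\,\varepsilon^j(v) = \alpha_i\, v^j
\qquad\text{vs.}\qquad
(e_i \otimes \varepsilon^j)(\,\cdot\,, v) = \varepsilon^j(v)\, e_i = v^j e_i .
$$

Leaving the covector slot empty is what makes the output a vector in the video.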
If we could download all the slides, it would be perfect, since we could print them out and study them without electronic devices, and also jot down notes for deeper understanding.
I have the slides uploaded here: github.com/eigenchris/MathNotes/tree/master/TensorsForBeginners
Your notation with the array of arrays suggests that the Kronecker product produces arrays of dimension higher than two. However, the Kronecker product of matrices is always a matrix; for instance, $u \otimes v \otimes w$ will not produce an object having three indices, it will still be a matrix. Of course you can relate this to the object made from arrays of arrays, but technically they are not the same thing.
At 1:56, when the j-th basis covector of V* acts on the k-th basis vector of V to become the Kronecker delta, why does the tensor product operator disappear? Is it a valid operation to take the tensor product between a vector and a scalar? And does it merely equal the scalar times the vector?
It probably should have disappeared in the 2nd line, since epsilon(v) is a scalar, not a vector.
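Spelling out that evaluation step by step (a worked line of my own, expanding what the slide does):

$$
(e_i \otimes \varepsilon^j)(v)
= e_i\,\varepsilon^j(v^k e_k)
= v^k\,\varepsilon^j(e_k)\, e_i
= v^k\,\delta^j_{\;k}\, e_i
= v^j e_i .
$$

Once $\varepsilon^j$ has been fed the vector $v$, it returns the scalar $v^j$, and a scalar times a vector is ordinary scalar multiplication, so the $\otimes$ symbol is no longer needed.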
I'm actually pretty sure the Kronecker product as used in this video (and the previous one) is backwards. Both in my university's coursework and on the Wikipedia page, it seems like you're actually supposed to distribute the array on the right over the one on the left, that is, the left array is meant to be used as a "template" and the right array is copied for each block. You can take a look here:
en.wikipedia.org/wiki/Kronecker_product
Yeah, I noticed that. I'm not sure if it's just two different conventions or if the wikipedia one is fundamentally correct.
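For anyone who wants to check the convention numerically, NumPy's np.kron follows the Wikipedia definition: each entry of the left array gets replaced by that entry times the whole right array (a quick sketch, not code from the video):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# np.kron(A, B) uses A as the "template": block (i, j) is A[i, j] * B,
# i.e. the right array B is distributed into the left array A.
K = np.kron(A, B)
print(K)
# [[ 0  5  0 10]
#  [ 6  7 12 14]
#  [ 0 15  0 20]
#  [18 21 24 28]]
assert np.array_equal(K[:2, :2], A[0, 0] * B)  # top-left block is 1 * B
```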
I'm hazy now. I followed you up until this video and the previous one.
Which part is confusing you?
@@eigenchris I get it. Thank you, Eigenchris.
Is the dot product a kind of tensor product?
Thanks eigen chris
Hey eigenchris, at 2:04, I'm a bit confused about what v^j e_i actually is. Should this not result in a vector? Therefore, shouldn't the top and bottom indices end up being the same?
v^j e_i is a vector. It is the ith basis vector of V scaled by the jth component of v.
In regards to your last question, the top and bottom indices should not be the same because we are not summing over anything. We do not need to do an implicit sum to get a vector.
@@jonathanchippett4036 Ahh ok, I now understand :) Thank you!
1:57 so I can just as well put a co-vector in as argument, the epsilon of the co-vector "eats" the e_i of the (1,1) tensor, and the result would be a co-vector?
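For what it's worth, the computation with a covector in the argument works out the same way (a worked line of my own): for $\beta = \beta_k\,\varepsilon^k$,

$$
(e_i \otimes \varepsilon^j)(\beta)
= \beta(e_i)\,\varepsilon^j
= \beta_k\,\varepsilon^k(e_i)\,\varepsilon^j
= \beta_k\,\delta^k_{\;i}\,\varepsilon^j
= \beta_i\,\varepsilon^j ,
$$

which is indeed a covector (a scalar multiple of $\varepsilon^j$).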
Linear map...how?
Hey, as far as I know the Kronecker product is a special case of the tensor product?
so.... I understood a little bit of that :)
Typically, you also flatten when using the Kronecker product. The tensor product increases tensor order, but the Kronecker product does not. This is quite an important distinction both in theory and in practice. The Kronecker product as you explain it here is not how it works in e.g. NumPy or PyTorch.
The Kronecker product is just the tensor product with respect to a different ordering of the basis.
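A quick way to see the "flattening" point in code (a sketch assuming NumPy; np.tensordot with axes=0 gives the outer/tensor product):

```python
import numpy as np

A = np.ones((2, 2))
B = np.ones((3, 3))

print(np.kron(A, B).shape)               # (6, 6)     -- still order 2 (a matrix)
print(np.tensordot(A, B, axes=0).shape)  # (2, 2, 3, 3) -- order 4 (a true tensor product)
```

The Kronecker product keeps the result 2D by interleaving the index pairs, whereas the tensor/outer product keeps all four indices separate.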
Since the Kronecker product can act on nested arrays, is there an index notation formula that I can find anywhere? I've scoured Google and can't find anything related to this specifically.
I think the Kronecker product, as seen in most math classes, doesn't use "nested arrays". If you take the Kronecker product of a 2x2 matrix with a 3x1 matrix, you will just get a 6x2 matrix, without any "nesting". The "nested arrays" thing is something that I came up with myself. I was desperately trying to make "array multiplication" work for arrays beyond 2D matrices. But my view now is to "give up" trying to multiply arrays bigger than 2D and just use tensor index notation instead.
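A NumPy sketch connecting the flat 6x2 matrix to the "nested" picture (the einsum-plus-reshape identity is the standard way to relate the two views):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])            # 2x2
B = np.array([[5], [6], [7]])     # 3x1 column

K = np.kron(A, B)                        # shape (6, 2): a flat matrix, no nesting
outer = np.einsum('ij,kl->ikjl', A, B)   # order-4 array, shape (2, 3, 2, 1)
assert np.array_equal(K, outer.reshape(6, 2))
```

So the nested-array picture and the flat Kronecker matrix hold the same numbers; the Kronecker product just groups the indices in pairs and flattens.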
2:07 The "circle times" shouldn't be there from the second row onwards as epsilon acting on v is a number
If I want to do a Kronecker product between a 1-column array and an nxn matrix, do I distribute the column to each row?
You distribute the column to each element of the matrix. If the column is m elements tall, you'll end up with an nxnxm array.
@@eigenchris I see, Thank You eigenchris!
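A NumPy sketch of both readings (the nxnxm shape follows the video's nested-array convention; np.multiply.outer and np.kron are standard NumPy functions):

```python
import numpy as np

M = np.arange(1, 10).reshape(3, 3)   # a 3x3 matrix (n = 3)
c = np.array([1, 10])                # a column with m = 2 entries

# Nested-array picture from the video: every scalar entry M[i, j]
# becomes the vector M[i, j] * c, giving an n x n x m array.
nested = np.multiply.outer(M, c)
print(nested.shape)                          # (3, 3, 2)

# The standard (flattened) Kronecker product of the column with the
# matrix stays a matrix instead:
print(np.kron(c.reshape(-1, 1), M).shape)    # (6, 3)
```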
Nothing fancy here.
The Kronecker product is just the tensor product with respect to a different ordering of the basis.
Aren’t they just the same thing but in different forms?
The Kronecker product is just the tensor product with respect to a different ordering of the basis. You are right.
Hey Eigenchris, can we do some problems on some of these? I'm kinda getting a grasp of what tensors are... what are these good for besides computer programs and/or GR?
I'd like to get another 2-3 videos out of the way just to finish the series, but after that I can make a video or two on applications. In addition to GR, there are applications in quantum mechanics, electricity & magnetism, and continuum mechanics. Unfortunately I am less experienced with the physics compared to the pure math, so I don't have a ton to say on those topics right now.
Thank you very much
Is a kronecker product between 2 same-size square matrices valid?
Yes. You can do the Kronecker product of any two matrices.
@@eigenchris thanks
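A quick check of that (a NumPy sketch): the Kronecker product of two n x n matrices is an n^2 x n^2 matrix.

```python
import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
print(np.kron(A, B).shape)   # (9, 9)
```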
It looks like
the tensor product is an operation on a vector and a covector,
while the Kronecker product is an operation on the vector and covector components.
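One way to write that distinction down (my own summary, consistent with how the series uses the two terms, with $\otimes_K$ as ad hoc notation for the Kronecker product): the tensor product acts on the abstract objects, while the Kronecker product acts on their component arrays. For a vector $v = v^i e_i$ and a covector $\alpha = \alpha_j \varepsilon^j$,

$$
v \otimes \alpha = v^i \alpha_j \; e_i \otimes \varepsilon^j ,
\qquad
\begin{pmatrix} v^1 \\ v^2 \end{pmatrix} \otimes_{K}
\begin{pmatrix} \alpha_1 & \alpha_2 \end{pmatrix}
=
\begin{pmatrix} v^1\alpha_1 & v^1\alpha_2 \\ v^2\alpha_1 & v^2\alpha_2 \end{pmatrix},
$$

so the Kronecker product produces the component array of the tensor $v \otimes \alpha$.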
This is so confusing. When you did the first Kronecker product you distributed the RHS into the LHS, but when you did the second Kronecker product you distributed the LHS into the RHS. This is like, WTF!!
Kinda lost here.
I guess it's probably intended for physicists or engineers. As a mathematician, I don't think it's a good way to learn it.
The Kronecker product is just the tensor product with respect to a different ordering of the basis.
Same as the previous video: when, why, and how can this information be used...? Please start with some practical approach.
This is hopeless - it's supposed to be for beginners!!
Which parts are you struggling with? Have you watched the rest of the series up until this point?
I think your definition of Kronecker product is reversed.
The definition on Wiki says that A⊗B distributes B into A, but you just distribute A into B