The reason complex conjugation corresponds to transposition in the matrix rep is that the inner product ⟨z, w⟩ := z* w on the complex plane is mapped to the standard dot product on R^2, and complex conjugation on C and transposition on R^2 both represent the Hermitian adjoint.
That's such a cool connection! I have been thinking about making a video about adjoints for a while, but I haven't gotten around to it yet.
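A quick sanity check of this correspondence (a minimal Python sketch, assuming the common convention a+bi ↦ [[a, -b], [b, a]]; the video may use the transpose):

```python
import numpy as np

def mat(z):
    # One common convention: a + bi -> [[a, -b], [b, a]]
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 3 + 4j, 1 - 2j

# Conjugation on C corresponds to transposition of the representing matrix:
assert np.allclose(mat(z.conjugate()), mat(z).T)

# The real part of <z, w> := conj(z) * w is the standard dot product on R^2:
assert np.isclose((z.conjugate() * w).real,
                  np.dot([z.real, z.imag], [w.real, w.imag]))
```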
Shout out to all my hermitians
I have always known that complex numbers could be represented as matrices, but never stopped to think about their application to matrix theory. Perfect video!
now try the same with quaternions :P
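For anyone tempted by that exercise, here is a minimal sketch of one common 2x2 complex-matrix representation of the quaternions (my own illustration, not from the video):

```python
import numpy as np

# Quaternions as 2x2 complex matrices (one common convention):
one = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = np.array([[0, 1j], [1j, 0]])

for q in (i, j, k):
    assert np.allclose(q @ q, -one)   # i^2 = j^2 = k^2 = -1
assert np.allclose(i @ j, k)          # ij = k
assert np.allclose(j @ i, -k)         # non-commutative!
```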
Just helped me in my group theory course, thank you
I knew there was some connection between complex numbers and vectors. They're really similar: the notation (a+bi and ai+bj), how they work, etc. That connection is ✨matrices✨
tysm for making this video
Wow - thank you!
So glad there are people like you who understand this stuff well enough to combine seemingly disparate concepts and provide a high-level summary without the details and rigor that obscure many of the important concepts. Well done! I kept hoping you would work my favorite subject in there: how the Fourier transform, SVD (singular value decomposition), tensors, and roots of unity are related. Keep up the great work! Love your channel.
Sounds cool! Where would you recommend reading about the connection between the Fourier transform and the SVD, for those of us who are impatient?
I can see the connection between Fourier and roots of unity, but where do tensors and the SVD come in? If you have a good reference where I can learn more, I will definitely consider making a video about that.
Right now, you might be interested in a video we have on Patreon that looks at Fourier analysis as the study of the characters of a circle.
This is my personal favorite subject. A fun exercise is to think about complex-valued matrices; for example, try looking at the Pauli matrices as a set of nested matrices.
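As a concrete way into that exercise, here is a minimal Python sketch (my own illustration, using the a+bi ↦ [[a, -b], [b, a]] convention): each complex entry of a Pauli matrix is replaced by its 2x2 real block, giving a 4x4 real matrix with the same algebra.

```python
import numpy as np

def real_block(z):
    # a + bi -> [[a, -b], [b, a]], the 2x2 real representation
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

def nest(M):
    # Replace each complex entry of a 2x2 matrix by its 2x2 real block: 2x2 complex -> 4x4 real
    return np.block([[real_block(M[0, 0]), real_block(M[0, 1])],
                     [real_block(M[1, 0]), real_block(M[1, 1])]])

sigma_y = np.array([[0, -1j], [1j, 0]])
Y = nest(sigma_y)
# The nested matrix squares to the 4x4 identity, just as sigma_y squares to the 2x2 identity:
assert np.allclose(Y @ Y, np.eye(4))
```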
I learned about complex numbers while programming a Mandelbrot set generator in Pascal on my parents' PC, based on an algorithm published in Scientific American. I remember getting pretty deep into complex numbers, and proving what I later knew to be famous basic theorems. Namely, if an equation is true among complex numbers, then it is also true among their complex conjugates; and that if you multiply two complex numbers represented as vectors, the rotation of the product is the sum of the rotations of the factors, and the length of the product is the product of the lengths. Of course now I know my way around the complex plane in terms of r·e^(iθ), which makes these facts self-evident, but I remember proving them just using algebra.
That's pretty cool! When you proved that second theorem, especially the part where the angle of the product is the sum of the angles, did you realize that you had discovered an exponential relation?
This is basically the reason why z* doesn't have a derivative: the Jacobian of the transformation is a reflection, not a rotation and scaling (the only transformations possible when multiplying by a complex number). This is effectively the reasoning behind the Cauchy-Riemann equations. It took me ages to understand the link between the linear transformation given by complex multiplication and the restricted set of matrices the Jacobian must belong to: since the Jacobian must itself represent complex multiplication, it must be of this form, and that gives the CR equations. (Whenever I found the Jacobian, I assumed that a derivative must exist, but no.)
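In symbols (a short sketch of this standard argument, using the convention that multiplication by a+bi acts as the matrix [[a, -b], [b, a]]):

$$
f = u + iv, \qquad
J_f = \begin{pmatrix} u_x & u_y \\ v_x & v_y \end{pmatrix}, \qquad
\text{complex multiplication} \;\sim\; \begin{pmatrix} a & -b \\ b & a \end{pmatrix}.
$$

Requiring $J_f$ to have that rotation-and-scaling form gives exactly the Cauchy-Riemann equations $u_x = v_y$ and $u_y = -v_x$. For $f(z) = \bar z$ we have $u = x$, $v = -y$, so $J_f = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$: a reflection, not of that form, hence no complex derivative.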
Your video made this idea very clear, well done!
I always wondered why the "i" matrix has the negative sign in one off-diagonal corner rather than the other; this explains it perfectly.
Fun fact: i is an anti-symmetric matrix here. You can always decompose a square matrix into the sum of a symmetric and an anti-symmetric component: ½(A + Aᵀ) is the symmetric part and ½(A - Aᵀ) is the anti-symmetric part, where Aᵀ is the transposed matrix. The analogy isn't perfect, because a symmetric matrix is not necessarily a multiple of the identity matrix, but it still looks like decomposing a complex number into a real and an imaginary component.
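A quick numerical check of that decomposition on the matrix form of a complex number (a minimal Python sketch, assuming the a+bi ↦ [[a, -b], [b, a]] convention):

```python
import numpy as np

def mat(z):
    # a + bi -> [[a, -b], [b, a]]
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

A = mat(3 + 4j)
sym  = (A + A.T) / 2    # symmetric part
anti = (A - A.T) / 2    # anti-symmetric part

print(sym)    # 3 * identity: the real part
print(anti)   # 4 * (the i matrix): the imaginary part
```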
Bro I swear I love you
Just failed an exam and got so frustrated that I almost felt like I should never do math again.
Your intuitive explanations really keep me motivated.
@@dan0_0nad76 Sorry to hear that you had a bad time with an exam. It happens to the best of us. I hope you'll be able to keep doing what you love.
This series looks really promising! It's great!
I would suggest speeding up the narration a little; I had to listen at 1.5x to stay engaged.
Geometric algebra gives you a hint of how to do this properly. Let's go to G(2,0), the geometric algebra of the two-dimensional plane. Your i matrix is simply a transformation; in GA it's a bivector, and the pseudoscalar of this space. We have the x axis and the y axis, but y can be written simply as y = i x. From then on, we can describe every point in the plane as a x + b i x. This gives an algebra entirely isomorphic to the complex numbers. Or we can choose G(0,1), which is much simpler and still isomorphic to the complex numbers.
As it happens, I'm preparing the series on GA right now. And you've hit the nail on the head. We will publish 2 videos about this later. Also, G(0,1) is trivially isomorphic to C, but I prefer the way they emerge from G(2,0) because it's much more surprising. You define 2 elements that square to 1, and magically you get a new element that squares to -1. Math is full of such surprises.
@@AllAnglesMath Sorry for the spoiler :)
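That surprise is easy to verify with 2x2 matrices (a minimal Python sketch of one standard matrix representation of G(2,0); my own illustration, not from the video):

```python
import numpy as np

# A standard 2x2 matrix representation of G(2,0):
e1 = np.array([[1, 0], [0, -1]])   # squares to +1
e2 = np.array([[0, 1], [1, 0]])    # squares to +1
I2 = np.eye(2)

e12 = e1 @ e2                      # the bivector / pseudoscalar

assert np.array_equal(e1 @ e1, I2)
assert np.array_equal(e2 @ e2, I2)
assert np.array_equal(e12 @ e12, -I2)  # two elements squaring to +1 produce one squaring to -1
```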
Wow, this is amazing. I've got to go watch all your videos real quick.
Feel free to ask questions in the comments if needed.
In computational physics this is a very, very useful tool.
This is rather good so far
All angles never disappoints 🙌🏻
Thanks for having so much faith in our channel.
heavily underrated content, instant subscription
This should be taught on day one of any linear algebra curriculum.
This is great. I've been trying to understand capacitance and inductance without resorting to the square root of -1, which makes the math work but is far from intuitive. This is taking me in a good direction.
You can often replace complex numbers by something else (such as matrices or 2D vectors). The math still works, but it's often more verbose. You no longer package the 2 parts into a single number, which leads to twice as many equations. By packaging those back into vectors/matrices, you may be able to recover the original elegance.
@@AllAnglesMath Yes. The thing I like about representing impedance as [[R, Z], [-Z, R]] is that it's 2 vectors: the top one being the impedance, and the bottom one being a vector with the same magnitude but rotated exactly 90 deg counter-clockwise. So you do V·[[cos t, sin t], [-sin t, cos t]] = [[R, Z], [-Z, R]] · I·[[cos t, sin t], [-sin t, cos t]].
This shows exactly how the real elements turn in-phase and 90-deg-out-of-phase currents into in-phase and out-of-phase voltages, while the reactive elements turn in-phase and out-of-phase currents into out-of-phase and in-phase voltages. It's all quite intuitive when you do it this way.
It doesn't depend on an understanding that multiplying by i means rotation.
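To see that closure numerically, here is a minimal Python sketch in the same convention (all component values are made up; I write X for the reactive entry that the comment calls Z):

```python
import numpy as np

# The commenter's convention: x + iy -> [[x, y], [-y, x]]
def to_mat(x, y):
    return np.array([[x, y], [-y, x]])

R, X = 3.0, 4.0                            # hypothetical resistance and reactance
Zmat = to_mat(R, X)                        # impedance matrix [[R, X], [-X, R]]

t, I0 = 0.7, 2.0                           # a sample instant and current amplitude
Imat = I0 * to_mat(np.cos(t), np.sin(t))   # current "phasor" as a matrix

Vmat = Zmat @ Imat                         # Ohm's law, V = Z * I

# The product is again of the form [[x, y], [-y, x]]: a scaled, phase-shifted phasor.
assert np.isclose(Vmat[0, 0], Vmat[1, 1]) and np.isclose(Vmat[0, 1], -Vmat[1, 0])
print(Vmat)
```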
This is super interesting!
Nice idea to write i as a matrix ⭐❣️
honey wake up new allangles series 🗣️🗣️🔥🔥
What an amazing video, I hoped it would never end!!
Thank you! Sorry that it had to end 🤷
@@AllAnglesMath It would be great to see another video in this series soon!
Somewhere between the complex numbers and GL(2) is the subset of GL(2) with positive determinant, since that subset must be closed under multiplication as well.
True, but be careful: the determinant has to be strictly positive. Zero is not allowed (because matrices with zero determinant have no inverse).
Great work, great video. Thank you!
Is it possible to represent signed numbers in terms of just unsigned numbers in vector form?
I'm not sure. I like this question though, it makes me think.
Amazing, keep it up!!
These matrix representations remind me a lot of functors. Could there maybe be a connection, or could it be that I'm just confusing things...?
Every matrix representation is a homomorphism; and those in turn are studied as the "arrows" in category theory. Functors are an arrow between such arrows. The condition for functors is indeed very similar to the conditions you see for many other kinds of arrows such as isomorphisms, linear transformations, or diffeomorphisms.
Please note that i, sin, and cos are not variables. This is why one should not write them in italics.
This topic is so based. I had a lot of representation matrices stuff in a group theory course.
Wow that's awesome
Ah, among other things, this introduction reminds me of how I derived the quaternion matrices by myself, not without errors at first.
Nothing ever happens without errors at first 😉
6:46 Really... how?
Let's say we want to make an angle of (1/7)*2π out of the available angles:
(1/3)*2π , (1/4)*2π , (1/5)*2π
So we want to find integers x,y,z such that [0]:
(1/7)*2π = (1/3)*2πx + (1/4)*2πy + (1/5)*2πz
Cancelling 2π gives [1]:
(1/7) = (1/3)x + (1/4)y + (1/5)z
Multiplying through by the common denominator 3*4*5*7 = 420 gives [2]:
(3*4*5) = (4*5*7)x + (3*5*7)y + (3*4*7)z
Reducing modulo 7 gives [3]:
60 ≡ 0 (mod 7)
⇒ 4 ≡ 0 (mod 7)
[3] is clearly not true.
The equation [1] also fails because (via congruence relations) it implies [4]:
x=3a , y=4b , z=5c for integers a,b,c
⇒ 1 = 7a + 7b + 7c
⇒ 1/7 = a + b + c
[4] is clearly not true.
The group of integers under addition is closed (otherwise it wouldn't be a group).
If x, y, z are allowed to be rational numbers, then [1] has an infinite number of rational solutions.
But how is this fundamentally different from taking all rational roots of unity on the unit circle to begin with?
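The mod-7 obstruction above can also be checked mechanically; here is a small Python sketch that just verifies the equations in this comment (my own addition):

```python
from fractions import Fraction

# Every coefficient on the right of [2] is divisible by 7, but the left side 60 is not:
assert all(v % 7 == 0 for v in (140, 105, 84))
assert 60 % 7 == 4

# Brute-force sanity check over a small window of integers: no solutions to [1].
sols = [(x, y, z)
        for x in range(-20, 21) for y in range(-20, 21) for z in range(-20, 21)
        if Fraction(x, 3) + Fraction(y, 4) + Fraction(z, 5) == Fraction(1, 7)]
assert sols == []
```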
16:06 I laughed at this one. Of course it's not wrong, since they're still 1x1 matrices, but I don't know, it's just funny 😭😭
Very good
Thank you very much!
What is “number samba”?
That's just the phrase I use for the product between two matrices, or between a matrix and a vector. I adopted this phrase half jokingly in the series on linear algebra.
@@AllAnglesMath Oh. I was hoping I had misheard, and that it was something less like a secret club handshake/wink.
Eigenvalue = i?
The little hole you punched in the middle is why phi doesn't start at zero.
Real is dual to imaginary -- complex numbers are dual.
"Always two there are" -- Yoda.
1:11; 6:46-6:56; 8:10-8:22; 9:03-9:15