I enjoy my Friday nights with Dr Barker. There are so many low beer poseurs out there it is just nice to see someone doing some bare hands maths at a craftsman level! Your style is appreciated. I'm >> 70 and have been taught by some famous people (one of whom was invited by Dirac to Cambridge during WW2 to discuss his papers on the statistical foundations of quantum mechanics) and I get really sick of cheesy flummery.
"Flummery" is great. Thank you sir.
The quality of the channel far outstrips the sub count. I love watching these videos over dinner, thanks Dr Barker :)
Thank you!
Yesiree! This guy is well worth watching.
For all the beautiful theorems I have encountered, I remember this one left me completely floored. Thanks so much for this vid!!
Nice video as always!! It just struck me while watching this video that this is exactly how characteristic curves work in PDEs too.
Such an excellent and beautiful lecture. Thank you, Doctor, and more power to you. Hoping for more videos.
Earned me as a new subscriber from this video. I've seen and liked some of your videos previously, and this one just tipped me over the threshold for subscribing. Cheers!
This video really helped me to get some intuition that made many of the remaining puzzles about Linear Algebra suddenly start to make sense and now I can see how to go about solving them.
I saw a demonstration of Cayley-Hamilton by Dr. Peyam using group theory. But as an electrical engineer and math enthusiast, I definitely prefer your approach! ☺
I was just working on something like this while trying to find a function that generates itself :)) thank you!
Awesome video, thank you so much!
In your last case, for a "degenerate" 2x2 matrix transformation that decreases the dimension of the image to a line, I was wondering whether the fact that "any arbitrary vector is transformed to a multiple of the eigenvector" isn't immediately obvious: whether or not you start with an eigenvector, the transformed vector has to lie in the image of the transformation, that is, on the line. (I suppose that what you did was effectively prove that the image is a line.)
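If it helps to see this numerically, here is a small sketch (the matrix is my own illustrative choice, not one from the video) showing that a degenerate, i.e. rank-1, 2x2 matrix sends every vector onto the line spanned by its eigenvector:

```python
import numpy as np

# Illustrative degenerate (rank-1) 2x2 matrix: both columns are
# multiples of v = (2, 1), so the image is the line spanned by v.
A = np.array([[2.0, 4.0],
              [1.0, 2.0]])
v = np.array([2.0, 1.0])

# v is an eigenvector: A v = 4 v.
assert np.allclose(A @ v, 4 * v)

# Any vector w is mapped onto the line spanned by v:
# the 2D cross product of A w with v vanishes.
rng = np.random.default_rng(0)
for _ in range(5):
    w = rng.standard_normal(2)
    Aw = A @ w
    assert abs(Aw[0] * v[1] - Aw[1] * v[0]) < 1e-9
```

The final loop checks exactly the point made above: the image of every vector, eigenvector or not, lands on the line.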
What is this timing??
Yesterday I had a class about most of what he went over in this video, haha!
Nice video.
I’ll just mention that I found the comment that a non-parallel second eigenvector would be a problem a little strange.
Of course it’s entirely possible that there would be a second non-parallel eigenvector when the algebraic multiplicity is 2, it’s just not necessarily the case.
Excellent👏
The proof was done for matrices over the real or complex numbers. Probably it works in a similar way for matrices over other fields? Probably one needs to use some field extensions? I don't know enough abstract algebra to see exactly how that would work... :/
Interesting question! If we want to work over a different field, I'd imagine the argument breaks down unless there is some equivalent of the result that a degree-n polynomial has n roots over the field (counting multiplicities).
@@DrBarker Serge Lang's notorious Algebra gives a proof in the more general setting of a commutative ring k and a free module E of dimension n over k. Chapter XV section 4 presents it in half a page, BUT it references Chapter XIII for a lengthy and intricate development of matrices, so I've never managed to understand it. However, I can see that he is doing the thing with a polynomial ring k[t] and linear map A from E to E where you turn the module E over k into a module E over k[t] by substituting A in place of t to specify how an element of k[t] multiplies an element of E. So the division in the field and the vector space goes unused, and the theorem is unexpectedly general.
@@Alan-S-Crowe Every commutative ring is a quotient of a domain, so we know it holds anyway.
@@DrBarker Surely we can just extend the field to include the roots of the characteristic polynomial?
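For a concrete sanity check that the statement makes sense over fields other than the reals or complexes, here is a minimal sketch (the matrix and prime are my own illustrative choices) verifying Cayley-Hamilton for a 2x2 matrix over the finite field F_7:

```python
# Check Cayley-Hamilton for a 2x2 matrix over F_7 (integers mod 7).
# The characteristic polynomial is t^2 - tr(A) t + det(A); substituting
# A for t should give the zero matrix.
p = 7
A = [[1, 2],
     [3, 5]]

tr = (A[0][0] + A[1][1]) % p
det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

A2 = matmul(A, A)

# p_A(A) = A^2 - tr(A) A + det(A) I, reduced mod p.
result = [[(A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)) % p
           for j in range(2)] for i in range(2)]
assert result == [[0, 0], [0, 0]]
```

Of course this only tests one matrix over one finite field; the general proofs discussed above (splitting fields, or Lang's module-over-k[t] argument) are what make it a theorem.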
Your argument from 18:55 to 20:10 isn't really a proof by contradiction. What you *actually* have is a direct proof that w is a scalar multiple of v1. But before you start the proof, you have this superfluous assumption that w and v1 are linearly independent, and after you have proven directly that they are linearly dependent, you pull up this assumption to claim you've reached a contradiction.
To make an exaggerated example of what I mean, here is a proof by contradiction that if n is even, then n^2 is even. Assume n is even, and for contradiction assume n^2 is odd. Since n is even we can write n = 2m for some integer m. Squaring this yields n^2 = 4m^2 = 2(2m^2). So n^2 is even, which contradicts our assumption that it is odd. Therefore by contradiction n^2 must be even.
Can you see what I mean when I say that the contradiction framing contributes nothing to the proof? In contrast, look at the archetypal proof by contradiction: that the square root of 2 is irrational. I like to phrase the actual statement as "If x^2 = 2, then x is irrational". Then for contradiction we assume x is rational. Note, however, that in this case, the assumption that x is rational is *crucial* to progress with the proof. And it isn't a proof by contraposition either (another variation on not-really-proof-by-contradiction, similar to my earlier example), because the assumption that x^2 = 2 is also crucial to progress with the proof. So that one is a true proof by contradiction.
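The point is easiest to see by simply stripping the contradiction wrapper off the even-square example; what remains is a complete direct proof:

```latex
\textbf{Claim.} If $n$ is even, then $n^2$ is even.

\textbf{Direct proof.} Since $n$ is even, write $n = 2m$ for some
integer $m$. Then $n^2 = 4m^2 = 2(2m^2)$, so $n^2$ is even. \qed
```

Nothing in the argument ever used the assumption that $n^2$ is odd, which is exactly the sign that the contradiction framing was superfluous.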