I'm 37, learning this for fun. I can't believe someone wrote a book at 26.
I love how you respect your mentors. You're amazing Sheldon.
It's really great to see someone at the age of 37 learning math, especially these topics! Keep it up, sir. You're doing great ✨
I often feel demotivated seeing someone younger than me acing such topics in maths tbh!
@COSMOPHILE_1729 It's oftentimes hard for me to see people half my age pick up things so quickly and solve problems with no issues. But it's fine. I enjoy the pain!
Oh my, I can't believe you made these videos. I was studying quantum physics and was a little bit stuck with linear algebra. For the topics in your book I had difficulty with, I used to search YouTube for explanations, but now I've found the author's channel itself :D Thanks for writing this book.
"I was studying Quantum Physics and was little bit stuck with linear algebra". same haha
Ayo! That's exactly what I've been going through rn as well
BS/MS in math, nearing a PhD in statistics, and linear algebra comes up so often that I'm going through these to brush up on intuition, since I want my understanding of linear algebra to be "done right." Thank you for uploading!
8:17 Wow, I never thought of polynomials as linear combinations of linearly independent vectors in the space of all functions R → R, but it makes sense
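To make that concrete, here's a small pure-Python sanity check (the sample points -1, 0, 1 and the example polynomial are my own choices, not from the video): if a·1 + b·x + c·x² were the zero function, it would vanish at those three points, and that 3×3 linear system has only the trivial solution because its determinant is nonzero — so the monomials 1, x, x² really are linearly independent as functions.

```python
# Sanity check: the monomials 1, x, x^2, viewed as functions R -> R,
# are linearly independent. Evaluate them at three sample points
# (my own arbitrary choice) and check the resulting matrix is invertible.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

points = [-1, 0, 1]
rows = [[1, x, x * x] for x in points]  # row = (1, x, x^2) evaluated at x

# Nonzero determinant => a*1 + b*x + c*x^2 = 0 at all three points
# forces a = b = c = 0, i.e. only the trivial linear combination works.
assert det3(rows) == 2
```

The same idea scales: n + 1 sample points witness the independence of 1, x, …, xⁿ, which is one way to see that the polynomials sit inside the function space as linear combinations of independent vectors.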
These are good for revision after I have finished reading the chapter. Thank you, Prof. Axler!
Halmos sounds like a great mathematician. I tried to read some of his book the other day, but some of the exercises are hard!
Do you mean the Linear Algebra Problem Book?
"Because theorems, indeed, are victories".
Pretty cool to define a 'finite-dimensional vector space' without defining 'dimension' first - I like it!
Also, I find a very common mistake when students try to prove Theorem 2.26: they say that every element of the subspace U is a linear combination of finitely many vectors in V, and immediately conclude that U is finite-dimensional. (The gap is that the finite list can differ from element to element, so this doesn't produce a single finite list spanning U.)
I find that the video skips Theorem 2.26 (every subspace of a finite-dimensional space is finite-dimensional). Is it a major result?
The videos discuss only the most important results from each section of the book, not all the results. Because other results in this section are more important than 2.26, it was not included in the video. The videos are meant to highlight the key points in the book, not be a complete replacement for the book.
There is a very famous linear algebra book in Japan in which the author doesn't state Theorem 2.26 but uses it. It is not hard to prove, but I think it is very important to state it in a linear algebra book.
Thank you!
Hey, I've been trying to self-learn this, and I'm currently struggling with the statement that the length of every linearly independent list is less than or equal to the length of every spanning list.
In step 1 of the multi-step proof, we have u_1, ..., u_m ∈ V linearly independent and w_1, ..., w_n ∈ V spanning V, so u_1, w_1, ..., w_n must be linearly dependent; I get that. But then it says you can remove one of w_1, ..., w_n by the linear dependence lemma...
I don't understand why that logic is valid. The linear dependence lemma only says that if you have a linearly dependent list v_1, ..., v_m, then there is some vector you can remove while keeping the span the same; it doesn't say you can remove whichever vector you like. In the case of u_1, w_1, ..., w_n, the proof says we can remove one of w_1, ..., w_n, but if we're basing this only on the linear dependence lemma, nothing stops us from saying it might only be possible to remove u_1. I know it has something to do with u_1, ..., u_j being linearly independent at any jth step, but I can't immediately make the connection clear to myself...
So I made a proof I could understand: if v_1, ..., v_m is linearly independent and there are w_1, ..., w_n such that v_2, ..., v_m, w_1, ..., w_n spans V (which implies v_1 is in that span), then from the list v_1, ..., v_m, w_1, ..., w_n you can always remove one of w_1, ..., w_n, say w_1, so that the resulting list v_1, ..., v_m, w_2, ..., w_n still spans V. Then I tried restating the book's multi-step proof from there.
My question from before still remains, though.
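For what it's worth, the point being asked about can be checked concretely. In the version of the lemma where j is taken to be the smallest index with v_j in the span of its predecessors, u_1 can never be that vector: u_1 ≠ 0 (it comes from a linearly independent list), so u_1 is not in span() = {0}. Here's a pure-Python sketch with my own example vectors (rank computed exactly over the rationals, so there's no floating-point issue):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of vectors over Q, via Gaussian elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def first_dependent_index(vs):
    """Smallest j (0-based) with vs[j] in span(vs[0..j-1]), i.e. the first
    vector that adds nothing to the rank; None if the list is independent."""
    for j in range(len(vs)):
        if rank(vs[:j + 1]) == rank(vs[:j]):
            return j
    return None

# Step 1 of the proof with concrete vectors: in the dependent list
# (u_1, w_1, ..., w_n), the first vector lying in the span of its
# predecessors cannot be u_1 itself (u_1 != 0), so it must be a w.
u1 = (1, 1, 0)
ws = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
j = first_dependent_index([u1] + ws)
assert j is not None and j > 0  # the removable vector is one of the w's
```

The same argument carries through the later steps: at step j the list starts with u_1, ..., u_j, which is linearly independent, so none of those can be the first vector in the span of its predecessors, and the lemma's removable vector is again one of the remaining w's.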
awesome video, thanks! the book is also amazing :)
I have a question: (7,3,8) belongs to span((2,3,1), (1,-1,2)), but (1,-1,2) doesn't belong to span((2,3,1)). But in the 2nd edition the lemma says v_j belongs to span(v_1, ..., v_{j-1}) for j in {2, ..., m}. So why, in this example, doesn't v_2 belong to span(v_1)?
The Linear Dependence Lemma states that there exists j such that v_j belongs to span(v_1, ..., v_{j-1}) (take j = 3 in your example). It does not assert that v_j belongs to span(v_1, ..., v_{j-1}) for every j. Thus you cannot necessarily take j = 2.
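A quick numerical check of the example above (plain Python; the coefficients are worked out by hand from the two equations 2a + b = 7 and 3a − b = 3):

```python
# (7,3,8) = 2*(2,3,1) + 3*(1,-1,2), so v_3 is in span(v_1, v_2): take j = 3.
v1 = (2, 3, 1)
v2 = (1, -1, 2)
v3 = (7, 3, 8)
a, b = 2, 3  # solves 2a + b = 7 and 3a - b = 3; third coord: 2 + 6 = 8
assert tuple(a * x + b * y for x, y in zip(v1, v2)) == v3

# v_2 is NOT in span(v_1): the only candidate scalar is forced by the
# first coordinate, c = 1/2, and c*v_1 = (1, 3/2, 1/2) != (1, -1, 2),
# which is why the lemma's "there exists j" cannot be read as "for all j".
c = v2[0] / v1[0]
assert tuple(c * x for x in v1) != v2
```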