Another perfect lecture; finally we can understand such a beautiful subject and not just memorize it like mindless robots. Thank you so much, Ritvik, you're our hero! Gratitude from Brazil
You are most welcome
"I want to make sure to show you the actual applications..." God bless this man.
Thanks :)
Man, you should have been my math teacher at undergrad level. I would have scored more than what I actually did. Simple yet effective explanation.
3 years later and still the goat
This is a huge gem! I love all your videos; they're always a beautiful mix of theory, application, and visual examples. I also think they're the perfect length, as well as depth and breadth of connected material covered. That's a delicate balance most technical YouTube videos fail at, and it's what makes yours special. 👍
Wow, thank you!
wonderfully explained. thanks
4-5 years spent to understand the real-world use case; that's so true, brother, and for many other concepts as well.
Best explanation in the world of the rank of a matrix and how it relates to data science.
Straight to the point and elegantly explained. Love it!
Your explanation is awesome man. I simply love the way you explain the concept.
I like the way you link these things with applications, which is mind-blowing. Whenever I look for an answer, I come here. Thanks for all your videos.
PERFECT! As a programmer, I found the process just like "data normalization", which is indeed recommended and useful. Amazing. One possibly stupid question: what's the difference between the column-by-column check you did and (row-by-row) echelon form? I've seen some people use echelon form.
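For what it's worth, the echelon (row-by-row) approach and the column check count the same thing, since row rank always equals column rank. A quick sketch with numpy and a made-up example matrix:

```python
import numpy as np

# Example matrix: the second row is 2x the first, so only two rows
# (and, equivalently, only two columns) are linearly independent.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

# Row reduction to echelon form counts independent rows; the video's
# check counts independent columns. The two counts always agree.
print(np.linalg.matrix_rank(A))    # column rank: 2
print(np.linalg.matrix_rank(A.T))  # row rank: also 2
```

So echelon form is just a systematic way of carrying out the same dependence check, one row at a time.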
This is the best linear algebra explanation I've ever heard, and I've watched basically everything. The only thing you missed was the geometric interpretation: the point that the basis axes don't change.
Still, absolutely excellent. 3b1b is the one everyone praises, when actually he makes simple things confusing. You did the reverse.
Thank you for making this so clear and specific!
Amazing content as always Ritvik!
Thank you, Ritvik, you explained it in a much-needed, beautiful way.
Thank you so much, you're great at explaining and I appreciate you including the application of the concept in the real world, that helps to connect the points!
This was the best, and filled many gaps in my mind, bravo👏
Incredible! Thank you so much for the intuitive video.
No problem!
You’re so gifted at explaining things in an easy to understand way! Thank you!
Happy to help!
The explanation is really Awesome!!!
Thank you so much!!
Awesome explanation!!
Outstanding video; the best I have seen on the subject!
so clear and easy to understand! amazing!!
Very good Video! Keep up the good work!!!
Thank you so much. I was struggling to learn this topic from every resource but didn't understand a bit of it. :)
Really, thank you; it is a very beneficial video. It's the first time I've understood the rank of a matrix.
Excellent explanation.
I'm majoring in Economics in South Korea. This video helped me so much. Thank you!
excellent explanation! Thank you so much!
Cool! This is the first time that I really get the rank of a matrix.
fantastic explanation!
Helped me for my JEE exam and I learnt something new. Good video!
Great video, thanks so much!!
Crystal Clear, very well explained.
Superb explanation
OMG you are an excellent teacher!
Thanks, it was really useful. Hope you get more views ! ;)
Gem content. Worth subscribing.
Great explanation!
🙏 thanks
very nice video
Amazing!
Thank you, sir!
Can there be any connection to eigenvectors given the relation to PCA?
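There is a direct connection: the nonzero eigenvalues of AᵀA are the squared singular values of A, PCA keeps the eigenvectors belonging to the largest ones, and the rank is the count of nonzero ones. A small sketch with a made-up rank-2 matrix (my own example, not from the video):

```python
import numpy as np

# Rank-2 by construction: the third column is the sum of the first two.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])

# Eigenvalues of A^T A are the squared singular values of A. PCA ranks
# directions (eigenvectors) by these values; rank = number of nonzero ones.
eigvals = np.linalg.eigvalsh(A.T @ A)
rank = int(np.sum(eigvals > 1e-8))
print(rank)                       # 2
print(np.linalg.matrix_rank(A))   # 2, agrees
```

So a low-rank matrix is exactly one whose AᵀA has few nonzero eigenvalues, which is why PCA and low-rank approximation are two views of the same idea.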
nice explanation
Another great video, thanks Ritvik! Could you please make one about the determinant / trace / diagonalization? Many of us see these topics in Linear Algebra courses; I specifically wonder how they are used in Data Science.
Can you explain its use in solving physical problems?
Good topic. It turns out that a deep neural network framework is pretty convenient for solving for the two low-rank approximation matrices, or for finding the exact solution matrices if they exist. I came up with the following technique: in TensorFlow, you use two Embedding layers with your choice of k and one Lambda layer to do a matrix multiply. Your loss function can be a typical choice like L2 distance between the result of the Lambda layer and the entry of the original big matrix. Each entry of the original big matrix constitutes one training example. The optimizer is your choice, like Adam (everyone loves the Adam optimizer). I came up with this arrangement to do movie recommendations on the MovieLens dataset, and it's better than the Alternating Least Squares algorithm for many reasons. One big one is that with the DNN technique you completely avoid the dumb assumption that the missing entries of the original matrix are zeros. Of course, if you are not missing any values, then ALS is probably fine.
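The embedding-layer setup described above can be sketched framework-free. Here is a minimal numpy version of the same idea (my own toy example, not the commenter's TensorFlow code): two small factor matrices trained by gradient descent on the observed entries only, so missing values are never treated as zeros.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix with missing entries (np.nan), MovieLens-style.
R = np.array([[5., 3., np.nan, 1.],
              [4., np.nan, np.nan, 1.],
              [1., 1., np.nan, 5.],
              [1., np.nan, np.nan, 4.]])
mask = ~np.isnan(R)

k = 2                                             # chosen latent dimension
U = 0.1 * rng.standard_normal((R.shape[0], k))    # "user embedding"
V = 0.1 * rng.standard_normal((R.shape[1], k))    # "item embedding"

lr = 0.05
for _ in range(2000):
    # Error only on observed entries; missing ones contribute nothing.
    E = np.where(mask, U @ V.T - R, 0.0)
    U -= lr * E @ V                               # gradient step on both factors
    V -= lr * E.T @ U

# Observed entries are fit closely; missing ones are genuine predictions.
print(np.round(U @ V.T, 1))
```

This uses plain gradient descent instead of Adam for brevity, but the key property the comment highlights carries over: the loss is summed only over observed entries, so no zero-filling assumption is made.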
At 9:10, how does A' have 8 numbers? How come it's 4x2? Can anyone please explain this to me? I don't get it.
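I'm not certain of the exact matrix at 9:10, but the general pattern is: a rank-2 matrix with 4 rows, however wide, can be stored as a (4 x 2) factor (those 8 numbers) times a (2 x width) factor. A hedged numpy sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a 4x5 matrix that is exactly rank 2 (product of two thin factors).
B = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 5))
A = B @ C

# Truncated SVD recovers a 4x2 left factor (8 numbers) and a 2x5 right
# factor: 8 + 10 = 18 stored entries instead of 4x5 = 20. The savings
# grow quickly for bigger matrices.
U, s, Vt = np.linalg.svd(A)
A2_left = U[:, :2] * s[:2]     # 4x2
A2_right = Vt[:2, :]           # 2x5
print(np.allclose(A2_left @ A2_right, A))  # True: product rebuilds A
```

So the 4x2 shape comes from the rank: one column per independent direction, regardless of how many columns the original matrix has.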
Off topic, but you should make a video on implementing linear bayes/bayesian logistic regression/similar. Would be on-topic for your channel and would also compliment your non-bayesian implementations.
Thank you!
You're welcome!
nice
Hi :) thank you for this video. I wish I'd watched it before the SVD video. Would you please make a video about latent factor decomposition and the CUR model for approximation?
Masterclass
Brilliant
Such a simple idea used by a major paper: LoRA - Low Rank Adaptation for Large Language Models
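For readers curious about that connection: LoRA freezes the pretrained weight W and learns only a low-rank update BA in its place. A rough sketch of the parameter savings, using hypothetical layer sizes (not figures from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8                 # hypothetical layer sizes, LoRA rank r
W = rng.standard_normal((d, k))       # frozen pretrained weight, never updated

# LoRA trains only the low-rank update delta_W = B @ A.
B = np.zeros((d, r))                  # zero init, so the update starts at 0
A = 0.01 * rng.standard_normal((r, k))

full_params = d * k                   # parameters if we fine-tuned all of W
lora_params = d * r + r * k           # parameters LoRA actually trains
print(lora_params / full_params)      # 0.03125, about 3% of the full count
```

Same trick as the video's low-rank storage argument, applied to the *change* in the weights rather than the weights themselves.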
Thanks sir
Can you make a video on the trace of a matrix? Does it have any particular purpose? Thank you.
Great 👍
And which math book do you recommend for an in-depth understanding of data science, ML, and AI at the same time, with practical concepts? Just the way you teach (not pure, useless math formulas without any data-science-related explanation).
Neat...👌🏽
Thanks!
Very very good lecture
Just, isn't it K/p + p/N?
Is this the fundamental idea behind LoRA finetuning of AI models?
Brilliant!!! Do teachers know this?
Revenge of the dorks, never mind the nerds.
I'll probably come back again after getting some more background (because it's the first time I've heard that this kind of concept exists :/)
Wow, you make it so simple, thanks!
You're good alright
I can't see the left side of the board, though.
What I couldn't understand in a whole year of my varsity life.
What about this matrix?
1 2 3
4 5 6
7 8 9
The actual rank is 2, but with your method it would be 1.
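Actually, the column check does give 2 here. The catch is that no column is a plain scalar multiple of another, yet the third is still a linear combination of the first two, so dependence has to be checked against combinations, not just multiples. Verifying with numpy:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# No column is a scalar multiple of another, but the third column is
# still a linear combination of the first two: col3 = 2*col2 - col1.
print(np.allclose(A[:, 2], 2 * A[:, 1] - A[:, 0]))  # True
print(np.linalg.matrix_rank(A))                     # 2
```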
Never mind I see it.
Why is A' 4x2?
I can't speak English.