I really found that the little coffee bean animation helped me concentrate on the video! Thank you for this strategy!
Such a great explanation, awesome work!!
So glad you liked it!
A very good explanation with details of how it works.
Thanks! Great explanation and visuals, and I especially enjoyed the whispered parts. I'm sure I'm not the only one who gets this emotional reaction to information that is whispered instead of plainly said. And emotions are very important for remembering things. Please consider making a fully whispered redub of your videos!
ASMR with Machine Learning content. 😅
Nice presentation. Please, what tool did you use for the presentation?
Thanks! I used good old Powerpoint. 😅
Ha ha ha, I was hoping to learn how to calculate the eigenvectors. I love this channel!
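For anyone else curious, here is a minimal sketch of how PCA's eigenvectors can be computed by hand with NumPy (the data array `X` below is a made-up placeholder, not anything from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # made-up data: 200 samples, 5 features

# Center the data and form the covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)          # shape (5, 5)

# eigh is for symmetric matrices; it returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(cov)

# Reorder so the largest-variance directions come first
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the top-2 principal components
Z = Xc @ eigvecs[:, :2]
```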
You are really awesome! Thanks a lot!
🙂 Thanks for watching and leaving this awesome comment!
I tried doing some dimensionality reduction using yellowbrick and sklearn on what I _thought_ was a very modestly-sized data set, and I was surprised by how long it took! I guess it was probably the manifold learning methods that took longer than PCA, but I don't recall PCA being exactly quick either. Is that expected? Are there techniques for subsampling data to get some faster approximation?
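Yes, that's expected: PCA is comparatively cheap, but manifold methods like t-SNE or Isomap scale much worse with the number of samples. A minimal sketch of the usual workaround, assuming your data is a NumPy array `X` (the shapes and parameter values below are placeholders, not a recommendation): use sklearn's randomized PCA solver, and subsample before fitting any manifold method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
X = rng.normal(size=(50_000, 100))  # placeholder for your own data

# PCA with the randomized SVD solver stays fast even on large matrices
pca = PCA(n_components=2, svd_solver="randomized", random_state=42)
Z_pca = pca.fit_transform(X)

# Manifold methods are the usual bottleneck: they scale much worse than
# PCA in the number of samples, so subsample before fitting
idx = rng.choice(X.shape[0], size=5_000, replace=False)
Z_tsne = TSNE(n_components=2, random_state=42).fit_transform(X[idx])
```

Another common trick is to first reduce to a few dozen dimensions with PCA and hand that result to t-SNE, which sklearn's own documentation suggests for dense, high-dimensional data.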
First off - loving the videos! Thanks for the fun and clear explanations. Quick clarification, though: The matrix V at 5:22 is drawn as a D' x D, no? Are we meant to actually have z_i = x_i V^T (V-transpose)?
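For reference, a quick dimension check that supports the V-transpose reading, assuming each x_i is a 1 x D row vector and V is drawn as D' x D at 5:22:

```latex
% x_i : 1 \times D (row vector),  V : D' \times D (as drawn)
% x_i V     -> (1 \times D)(D' \times D): inner dimensions do not match
% x_i V^T   -> (1 \times D)(D \times D'): valid, result is 1 \times D'
z_i = x_i V^{\top} \in \mathbb{R}^{1 \times D'}
```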
I replaced my morning coffee with AI coffee. :)
Haha, this is funny and wholesome! :)
@C me too, lol!
Wouldn't a coffee bean be afraid of a coffee break because that's the point in time when it would be most likely ground up?
Shhh!!! Don't tell Ms. Coffee Bean that! 🤫😱
thank u
this is so cute
Funny? Ya(y.)
🤣