Piece of art!!
🎯 Key Takeaways for quick navigation:
00:00: 🧠 *Intro to mathematical foundations in ML, emphasizing unsupervised learning and representation learning.*
04:01: 🗂️ *"Comprehension is compression": Learning involves compressing information for key aspects.*
09:48: 🔄 *Use of representative points and coefficients for significant dataset compression.*
15:23: 🔄 *Flexibility in choosing representative vectors along a line for robust compression.*
21:26: ⚖️ *Trade-off between compression and exact reconstruction, introducing the concept of proxy points.*
23:51: 📏 *Determining proxy points by projecting data points onto a line to minimize distances.*
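For anyone who wants to see the compression idea from the takeaways concretely, here is a minimal NumPy sketch of storing each point as a single coefficient along a line and reconstructing a proxy point from it. The toy data, the direction w, and all variable names are made up for illustration, not taken from the lecture:

```python
import numpy as np

# Toy dataset: 5 points in R^2 lying roughly along a line.
X = np.array([[1.0, 2.1],
              [2.0, 3.9],
              [3.0, 6.2],
              [4.0, 7.8],
              [5.0, 10.1]])

# An assumed unit-norm direction w for the representative line.
w = np.array([1.0, 2.0])
w = w / np.linalg.norm(w)

# Compression: each point is stored as one scalar coefficient c_i = x_i^T w.
c = X @ w                      # shape (5,), one number per point

# Reconstruction: the proxy point for x_i is (x_i^T w) w.
X_proxy = np.outer(c, w)       # shape (5, 2)

# The trade-off mentioned at 21:26: reconstruction is not exact.
err = np.linalg.norm(X - X_proxy, axis=1)
print("per-point reconstruction error:", err)
```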
Really liked the Professor 🙂 His way of teaching is excellent.
Wow what a lecture
Great Representation
a joy to watch
Amazing explanation
amazing
At the end of the lecture, c* = x₁w₁ + x₂w₂ = xᵀw, right? But you have written (xᵀw)w.
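If it helps, I think the two expressions name different objects, so both can be right. A minimal worked version, assuming w has unit norm (the symbol x̂ for the proxy point is my notation, not necessarily the lecture's):

```latex
% Assuming \|w\| = 1: the stored coefficient is a scalar,
c^* = x^\top w = x_1 w_1 + x_2 w_2 \in \mathbb{R},
% while the proxy point is that scalar times the direction, a vector:
\hat{x} = c^* w = (x^\top w)\, w \in \mathbb{R}^2.
% So x^\top w and (x^\top w)\, w are both correct, but name different things.
```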
23:23 But won't we actually end up using a slightly different line from the blue line? Unless the loss function is changed to a boolean one, like a classification loss, the optimal representation will never be exactly the blue line.
I think he was just building the intuition for projections. He'll eventually do that later?
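In case it helps with the "slightly different line" question: the direction that minimizes the total squared projection distance can be computed in closed form, so you can check how close it lands to any line you guess. A minimal NumPy sketch under my own assumptions (toy data, mean-centering; the lecture may set this up differently):

```python
import numpy as np

# Toy data; center it first, since the optimal line should pass
# through the mean (my assumption about the setup, not the lecture's).
X = np.array([[1.0, 2.1],
              [2.0, 3.9],
              [3.0, 6.2],
              [4.0, 7.8],
              [5.0, 10.1]])
Xc = X - X.mean(axis=0)

# The direction minimizing total squared projection distance is the
# top right-singular vector of the centered data matrix.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
w_opt = Vt[0]
print("optimal direction:", w_opt)
```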
How can I get these lecture slides? Can someone tell me, or if anyone already has them, please share them with me.
I've been watching the week 1 lectures for two months now and still haven't understood a single one, even though I've already studied linear algebra and multivariable calculus too...
Poetry
The people teaching this aren't explaining everything within the course itself. What are its prerequisites? I've already gone through the MLF videos again and again.
How will I register for it?