The prof repeatedly mentions that entropy is the opposite of information... I always thought entropy was the expected Shannon information content. Am I missing something? By the way, amazing content!
Entropy measures uncertainty, so it moves in the opposite direction to information: as a probability gets close to 1 you already know almost everything about the outcome, so there is little uncertainty left.
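To make that concrete, here's a small numerical sketch of my own (not from the lecture): entropy is the expected surprisal -log2 p(x), so a near-deterministic distribution has low entropy, i.e. little uncertainty and little expected information gain from observing the outcome.

```python
import math

def entropy(p):
    """Shannon entropy H(p) = E[-log2 p(x)] = sum_x p(x) * (-log2 p(x))."""
    return sum(-px * math.log2(px) for px in p if px > 0)

# A fair coin is maximally uncertain: each outcome carries 1 bit of surprisal.
print(entropy([0.5, 0.5]))    # 1.0 bit

# A near-certain coin: almost no uncertainty, so almost no expected information gain.
print(entropy([0.99, 0.01]))  # ~0.08 bits
```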
Excellent lecture on logistic regression... I'll have to go through it two or three times to absorb it completely.
Maybe I'm misunderstanding the 3D plot, but shouldn't it be Y=0={RED} and Y=1={BLUE} at 12:40?
I really love your lecture sir...
Thank you. The lecture is quite easy to understand. You delivered it very well.
22:00 I think J comes from Jacobian and H from Hessian.
But the question is about the J of the cost function.
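For anyone else confused by the overloaded symbols: in this context J(theta) usually denotes the logistic-regression cost, while the Hessian H of that cost shows up in the Newton update. A minimal sketch in my own notation (not the professor's code), assuming a design matrix X with a bias column and labels y in {0,1}:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_step(theta, X, y):
    """One Newton update for the logistic-regression cost
    J(theta) = -sum_i [ y_i*log(h_i) + (1-y_i)*log(1-h_i) ],
    using its gradient and Hessian H = X^T diag(h*(1-h)) X."""
    h = sigmoid(X @ theta)                 # predicted probabilities
    grad = X.T @ (h - y)                   # gradient of the cost J
    H = X.T @ np.diag(h * (1 - h)) @ X     # Hessian of the cost J
    return theta - np.linalg.solve(H, grad)
```

Iterating newton_step until the gradient is small is the usual Newton/IRLS fit; the J here is the scalar cost, not a Jacobian.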
thank you for making this lecture available. very helpful.
This is just mind-blowing, I should say.
This was a nice lecture.. Thank you..
I was also looking for lectures on Constrained Optimization and Support Vector Machines. Are these available?
Great lecture. By 'simulating' thetas, I reckon you mean that we are drawing thetas from regions in proportion to their probabilities?
Awesome lecture. How do we estimate w(theta_i) using MC in the Bayesian setting?
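I'm not sure exactly which weights the lecture has in mind, but one common reading (a sketch under that assumption, not the professor's derivation) is self-normalised importance sampling: draw theta_i from a proposal q, weight each sample by prior times likelihood over q, and normalise so the weights sum to 1. Predictions are then a weighted average over the sampled thetas.

```python
import numpy as np

def importance_weights(thetas, log_prior, log_likelihood, log_q):
    """Self-normalised importance-sampling weights for samples theta_i ~ q:
    w_i proportional to p(theta_i) * p(D | theta_i) / q(theta_i)."""
    log_w = np.array([log_prior(t) + log_likelihood(t) - log_q(t) for t in thetas])
    log_w -= log_w.max()              # subtract the max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()                # normalise so the weights sum to 1

# The Bayesian predictive then averages over the sampled thetas:
# p(y | x, D) ~= sum_i w_i * p(y | x, theta_i)
```

If the thetas are instead drawn directly from the posterior (e.g. by MCMC), the weights are simply uniform, 1/N per sample.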
Superlike!