This whole set of videos on machine learning is so well done and everything was explained in molecular details. Great teacher with exceptional teaching ability! I feel truly blessed.
I feel bad for people trying to learn Machine Learning who weren't lucky enough to find this class as I was. Thanks Prof. Freitas!
I truly appreciate these lectures. Thank you very much professor, great pacing, great structure, great content!
beautifully linked the idea of maximising likelihood by illustrating the 'green line' @ 51:41
Great stuff, although I wonder, should the normalisation constant for the multivariate normal pdf at 19:00 contain a factor (2*pi)^(-n/2) (since it's stated as a general multivariate Gaussian)? If it's still supposed to be the bivariate example, I missed that...
Exactly, as you pointed out it should have negative n over 2 since it talks about n random variables
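For anyone who wants to check this numerically: a small NumPy sketch (the function name is mine) of the general n-dimensional Gaussian density, where the (2*pi)^(-n/2) factor shows up in the sanity checks against the known 1-D and 2-D values.

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Density of an n-dimensional Gaussian N(mu, Sigma) at x.

    Note the normalisation constant (2*pi)^(-n/2) * |Sigma|^(-1/2):
    the exponent on 2*pi depends on the dimension n, not a fixed -1/2.
    """
    n = len(mu)
    diff = x - mu
    norm_const = (2 * np.pi) ** (-n / 2) * np.linalg.det(Sigma) ** (-0.5)
    return norm_const * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))

# Sanity check against the 1-D formula: N(0 | 0, 1) = 1/sqrt(2*pi)
p = mvn_pdf(np.array([0.0]), np.array([0.0]), np.array([[1.0]]))
print(abs(p - 1 / np.sqrt(2 * np.pi)) < 1e-12)  # True

# In 2-D the constant becomes (2*pi)^(-2/2) = 1/(2*pi)
p2 = mvn_pdf(np.zeros(2), np.zeros(2), np.eye(2))
print(abs(p2 - 1 / (2 * np.pi)) < 1e-12)  # True
```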
I wish I had watched this video earlier, before the midterm. Your explanations are always amazing. Thank you!
Very, very clear explanation. I had spent a lot of time learning probability, and just now everything became clear.
really very smart professor!
God bless you, professor Freitas!
Smart professor!
great intuition for MLE
Thank you. This lecture alone has consolidated many fragments of knowledge I had about linear regression! It's as if almost everything clicked for me. I still have a big question, though: why is the standard deviation also estimated by maximizing the log-likelihood? What makes that an appropriate estimate of the standard deviation of the same normal distribution whose mean is (x^T)*theta_ML?
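On the question above: once theta_ML is plugged in, differentiating the log-likelihood with respect to sigma and setting it to zero gives sigma_ML^2 = (1/N) * sum_i (y_i - x_i^T theta_ML)^2, i.e. the mean squared residual around the fitted line. A minimal NumPy sketch on synthetic data (all names and data are mine) showing both estimates recovering the true values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = x^T theta + Gaussian noise with known sigma
N, true_theta, true_sigma = 10_000, np.array([2.0, -1.0]), 0.5
X = rng.normal(size=(N, 2))
y = X @ true_theta + rng.normal(scale=true_sigma, size=N)

# Step 1: theta_ML = argmax of the log-likelihood = least-squares solution
theta_ml = np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: setting the sigma-derivative of the log-likelihood to zero gives
# sigma_ML^2 = mean squared residual around x^T theta_ML
sigma_ml = np.sqrt(np.mean((y - X @ theta_ml) ** 2))

print(theta_ml)   # close to [2, -1]
print(sigma_ml)   # close to 0.5
```

So sigma_ML is "appropriate" in the same sense as theta_ML: it is the value that makes the observed residual spread most probable under the assumed Gaussian noise model.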
39:00 - Maximum likelihood
45:20 - Linear regression
This is when it got really interesting. At 22:02: "typically, I'm given points and I'm trying to learn the mus and the sigmas."
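To illustrate what "given points, learn the mus and sigmas" looks like in the simplest case: a minimal NumPy sketch with synthetic data (all names mine). For a univariate Gaussian, the maximum-likelihood estimates are just the sample mean and the (biased) sample standard deviation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Points drawn from a Gaussian whose parameters we pretend not to know
data = rng.normal(loc=3.0, scale=2.0, size=50_000)

# Maximum-likelihood estimates: sample mean and biased sample std
mu_ml = data.mean()
sigma_ml = data.std()  # ddof=0 (the default) is the ML estimator

print(mu_ml)     # close to 3.0
print(sigma_ml)  # close to 2.0
```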
The lecture is great! It is really helpful. Thank you.
+1 for your sense of humor! :) Great lecture.
The part at 1:12:38 is a bit confusing. I think it should be that the information the unfair coin toss reveals to us is less than one bit (one heads-or-tails). Am I missing something?
I'm not sure about this, but the way I understand entropy is as a measure of randomness: when you have a fair coin, you have the highest entropy, since all events in the state space are equally likely. If you have an unfair coin, you gain more information about what the value will be the next time the coin is flipped. In the limiting cases you have maximum information gain and minimum entropy, since every throw will result in 0 or 1. In later lectures, when he talks about decision trees and information gain, he explains this.
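The fair-vs-unfair coin intuition is easy to verify numerically. A small sketch (the function name is mine) of the Bernoulli entropy in bits: a fair coin gives exactly one bit per toss, a biased coin strictly less, and the limiting cases give zero.

```python
import numpy as np

def bernoulli_entropy(p):
    """Entropy in bits of a coin with P(heads) = p."""
    if p in (0.0, 1.0):
        return 0.0  # outcome is certain: zero randomness
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

print(bernoulli_entropy(0.5))  # 1.0 bit: fair coin, maximum entropy
print(bernoulli_entropy(0.9))  # ~0.469 bits: biased coin, less surprise
print(bernoulli_entropy(1.0))  # 0.0: limiting case, every toss is heads
```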
Great lesson!
awesome lecture
Excellent lecture!
Superb!
Perfecto
Thanks, Internet, for making this accessible in India.
Machine learning... Linear regression