Wanted a quick intro, and this was fantastic; so clear and grounded in real-world issues. I guess that reflects the applied background of the prof… Stuff from maths and physics depts is generally incomprehensible and bogged down in technicalities. Thanks for creating - let's have more!
I found this video very helpful for my assignment, which was about the difference between MLE and the Bayesian approach. Thanks a lot for explaining in minutes what I had been trying to understand for hours on the web.
Thanks! This video cleared up a lot of the basic concepts that I was trying hard to grasp in my Statistical Machine Learning class. Please keep up the good work, Sir!
Possibly a minor correction: at 7:08, f(x|alpha) is the likelihood, not the prior; f(alpha) is the prior (the initial degree of belief in the parameter alpha). Thank you for the video!
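The correction above can be illustrated with a small sketch (the coin-flip data and variable names here are hypothetical, not from the video): f(x|alpha) is the likelihood of the observed data given the parameter, f(alpha) is the prior, and the posterior is proportional to their product.

```python
import numpy as np

# Grid over a Bernoulli parameter alpha (probability of heads).
alpha = np.linspace(0.001, 0.999, 999)

# Hypothetical observed coin flips: 7 tosses, 5 heads.
x = np.array([1, 1, 0, 1, 0, 1, 1])
heads, n = x.sum(), len(x)

prior = np.ones_like(alpha)                       # f(alpha): flat prior
likelihood = alpha**heads * (1 - alpha)**(n - heads)  # f(x|alpha)

# Posterior f(alpha|x) ∝ f(x|alpha) * f(alpha), normalized over the grid.
posterior = likelihood * prior
posterior /= posterior.sum()

# With a flat prior, the posterior mode coincides with the MLE, heads/n = 5/7.
map_estimate = alpha[np.argmax(posterior)]
```

With a non-flat prior the mode would shift away from 5/7, which is exactly the role f(alpha) plays in the formula at 7:08.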
Thanks for all your efforts. They help a lot and encourage us.
Thank you, sir. What I had been finding difficult to understand for days, your video made possible in 20 minutes. Thanks a lot!
Any concrete example?
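As a minimal concrete example of maximum likelihood estimation (the data values here are made up for illustration): for Gaussian data with known variance, maximizing the log-likelihood over the mean gives the sample mean in closed form.

```python
import numpy as np

# Hypothetical measurements assumed to come from N(mu, sigma^2), sigma known.
x = np.array([2.1, 1.9, 2.4, 2.0, 1.6])

# Setting d/dmu of the log-likelihood to zero gives mu_hat = sample mean.
mu_hat = x.mean()

def log_likelihood(mu, sigma=1.0):
    # Gaussian log-likelihood with constant terms dropped.
    return -0.5 * np.sum((x - mu) ** 2) / sigma**2

# Numerical sanity check: no nearby mu does better than mu_hat.
assert all(log_likelihood(mu_hat) >= log_likelihood(m) for m in [1.5, 1.8, 2.2, 2.5])
```

For these numbers the MLE is simply the average of the five measurements, 2.0.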
Very good summary of estimation techniques. Very helpful.
Very helpful tutorial.
thank you! very good video
Best of the best :D Thank you so much.
Thanks! Excellent introduction to the estimator classes.
Kudos to you
In the asymptotic case, does the MLE always achieve the smallest possible variance of any unbiased estimator?
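Under standard regularity conditions, yes: asymptotically the MLE attains the Cramér-Rao lower bound, 1/(n·I(θ)). A small simulation sketch for Bernoulli(p), where the MLE is the sample mean and the bound works out to p(1-p)/n (the numbers here are my own illustration, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 0.3, 1000, 100_000

# Each trial: n coin flips; the MLE of p is the fraction of heads,
# so we can draw the head count directly and divide by n.
mle = rng.binomial(n, p, size=trials) / n

empirical_var = mle.var()           # variance of the MLE across trials
cramer_rao = p * (1 - p) / n        # Cramér-Rao lower bound for Bernoulli

# The empirical variance of the MLE sits right at the bound.
```

For Bernoulli the match is exact even at finite n, since the sample mean is unbiased with variance exactly p(1-p)/n; for general models the bound is attained only as n → ∞.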
Hi, could you explain at 11:09 why the Bayesian approach allows us to find the best estimator? Is it because of the prior knowledge incorporated?
Zheng Jia Probably because it uses a mixture of the prior estimate and the sample estimate, depending on the amount of data.
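The "mixture" in the reply above can be made concrete with a Beta-Bernoulli model (the prior and data values here are hypothetical): with a Beta(a, b) prior, the posterior mean is (a + heads)/(a + b + n), a weighted average of the prior mean a/(a+b) and the sample mean heads/n, with the weight shifting toward the data as n grows.

```python
# Prior Beta(a, b): prior mean a/(a+b) = 0.3, pseudo-count a+b = 10.
a, b = 3.0, 7.0
prior_mean = a / (a + b)

def posterior_mean(heads, n):
    # Equivalently: w * prior_mean + (1 - w) * heads/n, with w = (a+b)/(a+b+n).
    return (a + heads) / (a + b + n)

# Few data (5 flips, sample mean 0.8): estimate stays pulled toward the prior.
small = posterior_mean(4, 5)        # (3+4)/(10+5) = 7/15 ≈ 0.467

# Lots of data (1000 flips, sample mean 0.8): estimate ≈ the sample mean.
large = posterior_mean(800, 1000)   # 803/1010 ≈ 0.795
```

With 5 flips the estimate lands between the prior mean 0.3 and the sample mean 0.8; with 1000 flips the prior's 10 pseudo-counts barely matter.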
Dear Professor, what kind of presentation software do you use?
I put the graphics together using Photoshop with a Bamboo tablet, and then record/edit in Camtasia.
Give a concrete example, please.