23. Generalized Linear Models (cont.)
- Published Oct 15, 2024
- MIT 18.650 Statistics for Applications, Fall 2016
View the complete course: ocw.mit.edu/18-...
Instructor: Philippe Rigollet
In this lecture, Prof. Rigollet talked about strict concavity, optimization methods, quadratic approximation, Newton-Raphson method, and Fisher-scoring method.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
Why do you put the whole slide on the screen while he is pointing at something on it? There is no way to see where he is pointing. Just point the camera at the slide.
I've never heard such silence before lol @33:30
In the Fisher scoring algorithm, why do we take the expectation only of the Hessian and not also of the gradient? I mean, we want to minimize the KL divergence, i.e. the expectation of the negative log-likelihood, which we approximate by a second-order Taylor expansion. So shouldn't we take the expectation of the whole approximation?
I think the key is the Taylor expansion: when you do the quadratic approximation, the only term that matters is the quadratic one. Hence the log-likelihood is concave and locally looks like that quadratic term.
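To make the exchange above concrete, here is a minimal sketch of the Newton-Raphson update in one dimension, fitting the rate of an exponential distribution by maximum likelihood. The data values and function name are made up for illustration. In this example the observed information -l''(lambda) = n/lambda^2 does not depend on the data, so it equals the Fisher information and Newton-Raphson coincides with Fisher scoring.

```python
# Newton-Raphson for the MLE of an exponential rate parameter lambda.
# Log-likelihood: l(lam) = n*log(lam) - lam*sum(y)
# (1-D illustration with made-up data; not from the lecture itself.)
def newton_raphson_mle(y, lam0=1.0, tol=1e-10, max_iter=50):
    n, s = len(y), sum(y)
    lam = lam0
    for _ in range(max_iter):
        grad = n / lam - s        # l'(lam)
        hess = -n / lam ** 2      # l''(lam); -hess is the (Fisher) information
        step = grad / hess
        lam = lam - step          # Newton update: lam - l'(lam)/l''(lam)
        if abs(step) < tol:
            break
    return lam

y = [0.5, 1.2, 0.3, 2.0, 0.8]
lam_hat = newton_raphson_mle(y)   # converges to the closed-form MLE n/sum(y)
```

Because the log-likelihood here is strictly concave, the quadratic approximation is excellent near the maximum and the iteration converges in a handful of steps, which is the point the reply above is making.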
"Maybe I wanted you to figure out yourself " He is funny
It is mentioned that "phi is a known positive value" (16:28). I am wondering whether the exponential distribution belongs to the canonical exponential family, with theta = lambda, b(theta) = ln(theta), and phi = -1?
Yes exponential distribution is (of course) in the exponential family, see here en.wikipedia.org/wiki/Exponential_family
You need to pick a parametrization that works. Let theta = -1/lambda, b(theta) = -log(-theta), and phi = 1 instead.
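As a quick check of the parametrization in the reply above (assuming lambda denotes the mean, so the density is $f(y;\lambda)=\tfrac{1}{\lambda}e^{-y/\lambda}$; for the rate convention take $\theta=-\lambda$ instead), plug $\theta = -1/\lambda$, $b(\theta) = -\log(-\theta)$, $\phi = 1$ into the canonical form:

$$
\exp\!\left(\frac{y\theta - b(\theta)}{\phi}\right)
= \exp\!\left(-\frac{y}{\lambda} + \log(-\theta)\right)
= \exp\!\left(-\frac{y}{\lambda} - \log\lambda\right)
= \frac{1}{\lambda}\,e^{-y/\lambda},
$$

which recovers the exponential density with $\phi = 1 > 0$, as required.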
May I have the link to the entire series, please?
AWESOME!