Estimating the posterior predictive distribution by sampling
- Published 14 May 2018
- Illustrates how to build up an approximate posterior predictive distribution by sampling for a simple example.
This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: www.amazon.co.uk/Students-Gui...
For more information on all things Bayesian, have a look at: ben-lambert.com/bayesian/. The playlist for the lecture course is here: • A Student's Guide to B...
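For readers who want to try the video's procedure themselves, here is a minimal sketch in Python (assuming NumPy). It uses the Beta(3, 9) posterior discussed in the comments, which would arise from, say, 2 successes in 10 Bernoulli trials under a flat Beta(1, 1) prior; predicting 10 new trials is an illustrative assumption, not something fixed by the video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior over theta: Beta(3, 9), e.g. a flat Beta(1, 1) prior
# updated with 2 successes in 10 Bernoulli trials (illustrative).
a, b = 3, 9
n_new = 10          # number of new trials to predict (an assumption)
n_draws = 100_000

# Step 1: draw theta from the posterior.
theta = rng.beta(a, b, size=n_draws)

# Step 2: for each theta, draw a new data point from the likelihood.
x_new = rng.binomial(n_new, theta)

# The histogram of x_new approximates the posterior predictive;
# its mean should be close to n_new * a / (a + b) = 2.5.
print(x_new.mean())
```

Note that each predictive draw uses its own theta, so the histogram reflects both parameter uncertainty and sampling variability in the new data.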
I'm paying 1000s of dollars for a uni course and come here to actually learn what is going on. Thanks!
Fantastic explanation, thank you so much! I failed in understanding so many other explanations, but yours really made it click for me:)
Excellent Ben! Thank you!
Awesome explanation. Thank you so much, Dr. Lambert.
Awesome!!! Thanks a lot for such valuable information!!! And a clear explanation
Thanks for the content. I guess here we are implicitly assuming the predicted value $\tilde{x}_{i}$ does not depend on the data $x$?
It's a bit confusing
In the video you build up an approximation to the posterior predictive distribution from the sample data by sampling.
You mentioned that the left panel is the beta distribution, which is already the posterior, so what are we trying to approximate, and how exactly are the samples drawn?
Thanks Ben. It is a nice video. I am trying to simulate the posterior predictive distribution for an NHPP. I have an expression for P(X tilde | alpha, beta) * P(alpha, beta | X). Can you please help with how I can simulate X tilde using MCMC in R or WinBUGS? Thanks
Really great work, thank you sir for all the videos. When will the solution manual for your book be available?
Hi, thanks for your comment. It should be available ASAP on the book website (waiting on publisher). If you email me on Ben.c.lambert@gmail.com, however, I can share it with you. Best, Ben
Very cool video! So if our posterior was not conjugate and was instead approximated using a gibbs sampler, could we do something similar? I'm imagining randomly selecting a gibbs iteration (excluding burn in), and recording that vector of parameters as a sample from the posterior. Plug these parameters into the likelihood, sample, repeat. It seems especially important to sample the entire vector at once, since the marginal posteriors might not be independent. Sound reasonable?
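The approach described in this comment can be sketched for a simple case: a normal model with unknown mean and variance, Gibbs-sampled using its conjugate conditionals (under the standard noninformative prior p(mu, sigma2) proportional to 1/sigma2), with each post-burn-in (mu, sigma2) pair kept together and plugged into the likelihood for one predictive draw. The model, prior, and synthetic data are illustrative assumptions, not from the video.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(5.0, 1.0, size=200)   # synthetic data (illustrative)
n, xbar = len(x), x.mean()

n_iter, burn_in = 5000, 500
mu, sigma2 = xbar, x.var()           # initial values
x_pred = []

for i in range(n_iter):
    # Conditional mu | sigma2, x is Normal(xbar, sigma2 / n).
    mu = rng.normal(xbar, np.sqrt(sigma2 / n))
    # Conditional sigma2 | mu, x is Inverse-Gamma(n/2, sum((x - mu)^2)/2);
    # drawn here as the reciprocal of a Gamma variate.
    sigma2 = 1.0 / rng.gamma(n / 2, 2.0 / np.sum((x - mu) ** 2))
    if i >= burn_in:
        # Use the joint (mu, sigma2) pair, then sample the likelihood.
        x_pred.append(rng.normal(mu, np.sqrt(sigma2)))

x_pred = np.array(x_pred)
print(x_pred.mean(), x_pred.std())   # roughly the data mean and sd
```

As the commenter notes, keeping mu and sigma2 from the same iteration matters: their marginal posteriors are not independent, and mixing draws across iterations would distort the predictive spread.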
Hi there, do you know whether I can take the mean of each parameter in a Gaussian mixture and then obtain the posterior predictive, or should I take each Gaussian-mixture simulation draw and then obtain the predictive? Is the result the same??
In practice, how is the posterior distribution related to AQLs and RQLs in real-life sampling?
I'm a little confused about how the sampling of the posterior distribution is done. Looking at the Mathematica simulation, I didn't see any samples taken from the right side of the Beta(3,9)... is the sampling restricted somehow to only a portion of the posterior distribution? Or are those samples discarded because they have no effect on the marginal?
The chance of a particular theta being selected depends on the probability density of the posterior distribution. Because the right side of the Beta(3,9) has a very low probability density, it's very unlikely to be selected (sampled).
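This point can be checked numerically by drawing from Beta(3, 9) and counting how rarely samples land in the right tail (a quick sketch, assuming NumPy; the 0.6 cutoff is just an arbitrary point in the right tail):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.beta(3, 9, size=100_000)   # draws from the Beta(3, 9) posterior

# The posterior mean is 3 / (3 + 9) = 0.25, so values beyond 0.6
# (an arbitrary right-tail cutoff) are drawn only rarely.
print((theta > 0.6).mean())  # under 1% of draws
```

So the right tail is not excluded from the sampling; it is simply visited in proportion to its (very small) density.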
🎯 Key Takeaways for quick navigation:
00:00 *Predicting new data*
01:21 *Sampling procedure steps*
10:51 *Dominant uncertainty source*
Hi sir, could you please explain how Bayesian model averaging works,
including how the parameters are estimated, in a simple way?
And if possible, could you demonstrate it with a problem?
Thanks in advance
In your simulation towards the end of the video, I'm having some difficulty keeping track of what each process represents.
Left process output = sample-theta from actual posterior
Middle process output = sample-x (from some distribution?) using the output of the previous step
Right process output = histogram of sample-x values from previous step
Definitely missed something important here, yikes
I think the left process output is the parameter likelihood, the middle is the distribution based on the parameter, and the right is the sampled posterior.
If we knew the formula for the actual posterior, there would be no point in sampling it; we already have the formula, so we can compute all the parameters and fits we want from the formula itself.
I could be wrong, and I agree that the video was quite confusing, but at least intuitively it kind of makes sense.