Estimating the posterior predictive distribution by sampling

  • Published 14 May 2018
  • Illustrates how to build up an approximate posterior predictive distribution by sampling for a simple example.
    This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: www.amazon.co.uk/Students-Gui...
    For more information on all things Bayesian, have a look at: ben-lambert.com/bayesian/. The playlist for the lecture course is here: • A Student's Guide to B...
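The procedure the video describes (draw a parameter from the posterior, then draw a new data point from the likelihood at that parameter, and repeat) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the video's actual code: the Beta(3, 9) posterior is the one mentioned in the comments, while the Binomial(n=10, θ) likelihood and the number of draws are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_draws = 100_000
# Step 1: draw theta values from the posterior, here Beta(3, 9).
theta = rng.beta(3, 9, size=n_draws)
# Step 2: for each theta, draw a new observation x_tilde from the
# likelihood, here assumed to be Binomial(n=10, theta).
x_tilde = rng.binomial(10, theta)

# The histogram of the x_tilde draws approximates the posterior
# predictive distribution p(x_tilde | x).
predictive = np.bincount(x_tilde, minlength=11) / n_draws
print(predictive)
```

Because each draw passes through both the posterior (step 1) and the likelihood (step 2), the resulting histogram reflects parameter uncertainty as well as sampling variability.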

COMMENTS • 19

  • @Taysky • 4 years ago +20

    I'm paying 1000s of dollars for a uni course and come here to actually learn what is going on. Thanks!

  • @NeverHadMakingsOfAVarsityAthle • 3 months ago

    Fantastic explanation, thank you so much! I failed in understanding so many other explanations, but yours really made it click for me:)

  • @kiranskamble • 1 month ago

    Excellent Ben! Thank you!

  • @tranle5614 • 8 months ago

    Awesome explanation. Thank you so much, Dr. Lambert.

  • @engenhariaquimica6590 • 9 months ago

    Awesome!!! Thanks a lot for such valuable information!!! And clear explanation.

  • @user-rv3ct3wi9p • 4 months ago

    Thanks for the content. I guess here we are implicitly assuming the predicted value $\tilde{x}_{i}$ does not depend on the data $x$?

  • @mirotivo • 2 years ago +2

    It's a bit confusing.
    In the video you are trying to come up with the posterior approximation, given the sample data, by sampling methods.
    You mentioned the left is the beta distribution, which is the posterior already. What are we trying to approximate, then? And how are the samples drawn, to be clear?

  • @GuruprakashAcademy • 2 years ago

    Thanks Ben. It is a nice video. I am trying to simulate the posterior predictive distribution for an NHPP. I have an expression for P(X_tilde | alpha, beta) * P(alpha, beta | X). Can you please help with how I can simulate X_tilde using MCMC in R or WinBUGS? Thanks

  • @abhinavtyagi7231 • 6 years ago +2

    Really great work, thank you sir for all the videos. When will the solution manual of your book be available?

    • @SpartacanUsuals • 6 years ago

      Hi, thanks for your comment. It should be available ASAP on the book website (waiting on publisher). If you email me on Ben.c.lambert@gmail.com, however, I can share it with you. Best, Ben

  • @jacobschultz7201 • 3 years ago

    Very cool video! So if our posterior was not conjugate and was instead approximated using a Gibbs sampler, could we do something similar? I'm imagining randomly selecting a Gibbs iteration (excluding burn-in) and recording that vector of parameters as a sample from the posterior. Plug these parameters into the likelihood, sample, repeat. It seems especially important to sample the entire vector at once, since the marginal posteriors might not be independent. Sound reasonable?

  • @Gatitohomicida • 3 years ago

    Hi there, do you know if I can take the mean of each parameter in a Gaussian mixture and then obtain the posterior predictive, or should I obtain the predictive from each Gaussian-mixture simulation? Is the result the same?

  • @ZezaoCH • 3 years ago

    In practice, how is the posterior distribution related to AQLs and RQLs in real-life sampling?

  • @gregoryhall9276 • 5 years ago +2

    I'm a little confused about how the sampling of the posterior distribution is done. Looking at the Mathematica simulation, I didn't see any samples taken from the right side of the Beta(3, 9)... is the sampling restricted somehow to only a portion of the posterior distribution? Or are those samples discarded because they have no effect on the marginal?

    • @jimip6c12 • 3 years ago

      The chance of a particular theta being selected depends on the probability density of the posterior distribution. Because the right side of the Beta(3, 9) has a very low probability density, it's very unlikely to be selected (sampled).
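This reply can be checked numerically: drawing directly from the posterior automatically weights the samples in proportion to its density, so right-tail values are rare but never excluded. A small sketch (the 0.6 cutoff is just an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(42)
# Draws land in proportion to the Beta(3, 9) density.
theta = rng.beta(3, 9, size=100_000)

# The right tail does get sampled, just very rarely
# (well under 1% of draws exceed 0.6).
tail_fraction = (theta > 0.6).mean()
print(tail_fraction)
```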

  • @AshutoshRaj • 4 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 *Predicting new data*
    01:21 *Sampling procedure steps*
    10:51 *Dominant uncertainty source*

  • @abhijithv3047 • 2 years ago

    Hi sir, could you please explain how Bayesian model averaging works,
    including how the parameters are estimated, in a simple way?
    And if possible, could you demonstrate it with a problem?
    Thanks in advance

  • @jacobmoore8734 • 4 years ago

    In your simulation towards the end of the video, I'm having some difficulty keeping track of what each process represents.
    Left process output = sample-theta from actual posterior
    Middle process output = sample-x (from some distribution?) using output of previous step
    Right process output = histogram of sample-x values from previous step
    Definitely missed something important here, yikes

    • @holloloh • 4 years ago

      I think the left process output is the parameter likelihood, the middle is the distribution based on the parameter, and the right is the sampled posterior.
      If we knew the formula for the actual posterior, there would be no point in sampling it; we already have the formula, so we can compute all the parameters and the fits we want from the formula itself.
      I could be wrong, and I agree that the video was quite confusing, but at least intuitively it kind of makes sense.