Introducing Bayes factors and marginal likelihoods

  • Published 16 May 2018
  • Provides an introduction to Bayes factors, which are often used for model comparison. In using Bayes factors, it is necessary to calculate the marginal likelihood — another term for the denominator of Bayes' rule. This video explains that marginal likelihoods are notoriously difficult to calculate and are sensitive to the choice of priors, even when changes to the priors do not affect the posterior distribution.
    This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: www.amazon.co.uk/Students-Gui...
    For more information on all things Bayesian, have a look at: ben-lambert.com/bayesian/. The playlist for the lecture course is here: • A Student's Guide to B...
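The description's central point — that the marginal likelihood is sensitive to the prior even when the posterior is not — can be illustrated with a small sketch. The Beta-Bernoulli coin-flip model below is my own choice of example, not one taken from the video: with a conjugate Beta(a, b) prior, the marginal likelihood of k heads in n flips is available in closed form as B(a+k, b+n-k)/B(a, b), so we can compare a uniform Beta(1, 1) prior against a very diffuse Beta(0.01, 0.01) prior directly.

```python
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b) = Gamma(a)Gamma(b)/Gamma(a+b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal_likelihood(k, n, a, b):
    # log p(data | model) for k heads in n Bernoulli trials,
    # integrating theta out against a Beta(a, b) prior (conjugacy)
    return log_beta(a + k, b + n - k) - log_beta(a, b)

k, n = 7, 10  # observed: 7 heads in 10 flips

for a, b in [(1.0, 1.0), (0.01, 0.01)]:
    ml = exp(log_marginal_likelihood(k, n, a, b))
    post_mean = (a + k) / (a + b + n)  # posterior mean of theta
    print(f"Beta({a}, {b}) prior: marginal likelihood = {ml:.2e}, "
          f"posterior mean = {post_mean:.3f}")
```

Running this, the posterior mean of theta barely moves between the two priors, but the marginal likelihood drops by well over an order of magnitude under the diffuse prior — so a Bayes factor built from these quantities would change substantially even though the posterior inference about theta is essentially unchanged, which is exactly the sensitivity the video warns about.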

COMMENTS • 25

  • @omidghasemi1401
    @omidghasemi1401 3 years ago +2

    I think I have watched this video more than 10 times. I come back to it every time I get lost in the formulas and explanations in other sources. Thanks, Ben, for such amazing and clear tutorials.

  • @GK-oj3cn
    @GK-oj3cn 6 years ago +11

    I cannot believe you are back. I have been watching your videos for 4 years; you are incredible. Thank you for all the stuff you make, it has always been really helpful.

  • @rodneycummings7319
    @rodneycummings7319 4 years ago +6

    Thanks, Dr. Lambert, for all your hard work and videos on Bayesian analysis. I'm currently taking a Bayesian statistics course this semester at UCF in Orlando, FL. I wish my professor would teach this course the way you do. I use your videos to study and fill in knowledge gaps. Your videos have been very helpful. I plan on buying your book over the summer and reading it cover to cover.

  • @anush9065
    @anush9065 4 months ago

    One of the best videos I have watched!! Your videos are helping me get through my grad courses (Statistical ML)!! Thank you

  • @hedgehog1962
    @hedgehog1962 2 years ago

    I am taking a statistics course, but your video is much better than my university's. It really works.

  • @domlaukemp2772
    @domlaukemp2772 2 years ago

    Thank you so much for making these videos. They help a lot and you answered questions I didn't know I had :)

  • @u0432865
    @u0432865 6 years ago

    Great introduction to the Bayes factor

  • @itsfabiolous
    @itsfabiolous 3 years ago

    Thank you very much for your help!

  • @musicarroll
    @musicarroll 6 years ago

    Btw, I didn't mean to seem a troll in my last comment. I should have prefaced it with the fact that I think your video is excellent. But I am interested in seeing/hearing your further discussion of WAIC and LOO-CV. Thanks for the video. Very helpful.

  • @ChandraReddy-vd3gb
    @ChandraReddy-vd3gb 1 year ago

    Which tool did you use for the blackboarding? It is really awesome

  • @Eta_Carinae__
    @Eta_Carinae__ 2 years ago

    What happens when M1 and M2 are not mutually disjoint? Or if they're not exhaustive of the data? Wondering if there is an analogue for a residue term in P(data). Thanks!

  • @khandelwaltarun
    @khandelwaltarun 6 years ago +1

    Please let us know the playlist and the sequence number in which this video would/should appear. Thanks

    • @SpartacanUsuals
      @SpartacanUsuals 6 years ago

      Hi, thanks for your message. The playlist can be found here: m.ua-cam.com/play/PLwJRxp3blEvZ8AKMXOy0fc0cqT61GsKCG.html Best, Ben

    • @khandelwaltarun
      @khandelwaltarun 6 years ago

      Thanks Ben

  • @rafeedrahman5420
    @rafeedrahman5420 2 years ago +1

    What is the difference between theta and a model? Aren't they supposed to be the same?

  • @Marteenez_
    @Marteenez_ 1 year ago

    What is the distinction between m1, m2 and theta?

  • @janiceliu5473
    @janiceliu5473 3 years ago +2

    What is theta here? Is it a vector of the parameters in model M1 or M2?

  • @Mizunt
    @Mizunt 3 years ago

    Why would you ascribe a lower prior to more complex models?

    • @gordongoodwin6279
      @gordongoodwin6279 3 years ago

      Because of principles like Occam's razor, which favor simpler models for a variety of reasons, such as utility (imagine a selection model for employees: utility goes down as model complexity and cost go up). The prior is a potential way of constraining/penalizing complexity.

  • @musicarroll
    @musicarroll 6 years ago +2

    You are assuming, by means of p(M1) = 1 - p(M2), that these two models are the only ones possible. Perhaps an oversimplification for didactic purposes?

    • @SpartacanUsuals
      @SpartacanUsuals 6 years ago

      Hi, thanks for your comment and, no worries, most trolls don’t comment on the intricacies of Bayesian inference! Yes, I am considering two models here, sorry if that wasn’t clear. Best, Ben
