An introduction to the Random Walk Metropolis algorithm

  • Published 29 Sep 2024
  • This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: www.amazon.co....
    For more information on all things Bayesian, have a look at: ben-lambert.co.... The playlist for the lecture course is here: • A Student's Guide to B...

COMMENTS • 32

  • @vman049 · 5 years ago · +9

    Best explanation of MH on YouTube. Thank you!

  • @lemyul · 5 years ago · +4

    thank God there's a video about this

  • @هشامأبوسارة-ن7و · 7 months ago

    Very insightful.

  • @collincherubim2698 · 4 years ago · +19

    Finally, visuals for MCMC! Highly illuminating, thank you.

  • @darcycordell7156 · 3 years ago · +1

    Maybe a dumb question, but at 5:46 aren't you using the unknown distribution to calculate r? Isn't the black line on the graph the unknown distribution you are trying to estimate?

    • @GabeNicholson · 3 years ago

      No, that's a good question. The way I understand it, all you need to compare is the ratio of the numerators of Bayes' rule, evaluated at different values of the parameter (theta). You run through lots of possible parameter values, and the walk moves across the graph when it hits values that correspond to the true posterior distribution. Most of the proposed parameter values are wrong and don't lead anywhere, which is why the animation at around 8 minutes shows many more rejected guesses than accepted ones.
      The numerator of Bayes' rule dictates where those high-posterior regions are; the denominator only sets their overall height. The sampling method estimates that height without ever computing the denominator. So, for your question about r: all we ever need is the numerator, not the denominator. (See the sketch below.)

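A note on the reply above: the acceptance ratio r involves only the numerator of Bayes' rule, likelihood × prior, so the unknown normalizing constant cancels in the division. Below is a minimal Python sketch of random walk Metropolis making that explicit; the Gaussian target, step size, and chain length are invented for illustration and are not the video's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def unnormalized_posterior(theta):
    # Stand-in for likelihood(theta) * prior(theta): the numerator of
    # Bayes' rule. The denominator (normalizing constant) never appears.
    return np.exp(-0.5 * theta**2)

theta = 0.0    # arbitrary starting point for the walker
sigma = 1.0    # step size of the Gaussian jumping distribution
samples = []
for _ in range(10_000):
    proposal = theta + sigma * rng.normal()
    # r compares unnormalized heights, so any constant factor cancels.
    r = unnormalized_posterior(proposal) / unnormalized_posterior(theta)
    if rng.uniform() < r:    # accept with probability min(1, r)
        theta = proposal
    samples.append(theta)    # the kept values approximate the posterior
```

Because r is a ratio, multiplying unnormalized_posterior by any constant leaves the chain unchanged, which is exactly why the denominator of Bayes' rule is never needed.
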
  • @amenaalhassan2807 · 2 years ago

    I wish I had money to celebrate Tết.

  • @jiachengchen7828 · 4 years ago · +6

    This video is 10,000x better than the equations in class.

    • @distrologic2925 · 2 years ago · +2

      Right? I don't understand how actual lecturers can be THAT terrible at conveying knowledge. It's like they don't want people to understand it.

  • @AmrutaOfAllTrades · 4 years ago · +3

    Finally found out why it's called Monte Carlo. This is the best explanation of the algorithm I have ever seen. Thanks for this.

  • @johnedwardhills4529 · 3 years ago · +3

    Thanks Ben. This is a really clear visual representation of what the algorithm is doing and how it works in principle. Excellent stuff!

  • @tergl.s · 6 months ago

    the simulation is so helpful! thanks

  • @johannesluttmer1285 · 3 years ago

    Metropolis 1927: illumination = Moloch. Conspiracy of dunces = owl of Minerva = Satan.

  • @bradh2649 · 2 months ago

    Beautifully explained

  • @raycyst-k9v · 7 months ago

    Where can I find the code for this?

  • @distrologic2925 · 2 years ago

    Don't gaps in the true distribution skew the samples toward the borders of those gaps, since the random walk is less likely to cross the gap, especially with a low sigma in the jumping distribution? (See the sketch below.)

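The question above touches a real failure mode of random walk Metropolis: the sampler is still correct asymptotically, but with a small proposal sigma the chain can take a very long time to cross a low-density gap, so a finite run under-samples the far mode. A rough sketch with an invented two-mode target:

```python
import numpy as np

rng = np.random.default_rng(1)

def two_modes(theta):
    # Invented bimodal target: two bumps at -4 and +4 with a deep gap between.
    return np.exp(-0.5 * (theta + 4)**2) + np.exp(-0.5 * (theta - 4)**2)

def run_chain(sigma, n=20_000):
    theta, samples = -4.0, []   # start the walker in the left-hand mode
    for _ in range(n):
        proposal = theta + sigma * rng.normal()
        if rng.uniform() < two_modes(proposal) / two_modes(theta):
            theta = proposal
        samples.append(theta)
    return np.array(samples)

for sigma in (0.5, 5.0):
    time_in_right_mode = (run_chain(sigma) > 0).mean()  # ideally ~0.5
    print(f"sigma={sigma}: fraction in right mode = {time_in_right_mode:.2f}")
```

With sigma = 0.5 the chain typically stays trapped near the mode it started in (fraction near 0), while sigma = 5.0 lets proposals jump the gap (fraction near 0.5).
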
  • @MisterCactus777 · 2 years ago

    I used this for my Bachelor's thesis to simulate ultracold fermions in a harmonic trap, which was a replication of real experiments! Thank you for explaining, I had forgotten what it did...

  • @mikolajwojnicki2169 · 3 years ago

    Great video. Way easier to understand than my uni lectures.

  • @Penrodyn · 5 years ago

    Are the Mathematica programs you used in the video available? I particularly liked the last one where you showed a more complicated surface. I also just ordered your book; are they available with that?

  • @FluxProGaming · 4 years ago

    Subscribed. Good voice, good explanations!

  • @ahmedjunaidkhalid3929 · 5 years ago

    I have a question. Suppose rather than having just one value theta, I have multiple values [A,B,C] in my state. Each variable can only have four values [0,1,2,3]. How would I choose a new state from the previous one? Would I calculate a new value for each variable and call it a proposed state and then calculate the value for the complete system?

    • @jelmerdevries7827 · 4 years ago · +3

      Probably too late, but for a multi-variable model you want to use a Gibbs sampler (see the sketch below).

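To make the suggestion above concrete: a full Gibbs sampler draws each variable from its exact conditional distribution, but a simpler variant that needs only the unnormalized target is to update one randomly chosen variable per step (Metropolis-within-Gibbs). A sketch under an invented target over states in {0,1,2,3}^3; the scoring function is a placeholder for your model:

```python
import numpy as np

rng = np.random.default_rng(2)

def unnormalized_p(state):
    # Invented target over states in {0,1,2,3}^3; substitute your own model.
    a, b, c = state
    return np.exp(-((a - b)**2 + (b - c)**2))

state = [0, 0, 0]
chain = []
for _ in range(10_000):
    proposal = state.copy()
    i = rng.integers(3)            # pick one of A, B, C at random
    proposal[i] = rng.integers(4)  # propose a new value in {0, 1, 2, 3}
    # The proposal is symmetric, so the plain Metropolis ratio applies.
    if rng.uniform() < unnormalized_p(proposal) / unnormalized_p(state):
        state = proposal
    chain.append(tuple(state))
```

Updating one coordinate at a time keeps each proposal cheap and avoids the near-zero acceptance rates that full-state proposals can suffer as the number of variables grows.
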
  • @karimaelouahmani7078 · 2 years ago

    Brilliant honestly.

  • @mikotokitahara9923 · 3 years ago

    Best one on YouTube, thanks a lot.

  • @wunderjahr · 3 years ago

    👏👏👏

  • @mojiheydari · 3 years ago

    awesome

  • @DJRaagaMuffin · 5 years ago

    Great explanation. Thank you

  • @siarez · 6 years ago · +2

    I don't get where the likelihood term and the prior term come from. Here we assume they exist. What is an example of a practical application where we have these two terms but don't have the posterior?

    • @GeoffRuddock · 6 years ago · +1

      @Siarez the difficult part in calculating the posterior is usually the denominator (marginal distribution). This algorithm uses the ratio of unnormalized posteriors, so the cumbersome marginal distribution cancels out.

    • @AP-rs5wz · 5 years ago

      Yes, the marginal can be quite costly to compute, as you have to integrate out potentially many unknown dimensions.

    • @payam-bagheri · 4 years ago

      I agree with you. There's a disconnect in the explanation in the video. The video mentions that the prior can be anything (just to have a starting point) but doesn't explain where the likelihood comes from.

    • @jimbocho660 · 3 years ago · +1

      @payam-bagheri The likelihood is computed in the usual way from your data and proposed data-generating model. This video is about how to sample from your unnormalized posterior once you obtain an expression for it as likelihood × prior. (See the sketch below.)
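
To tie this thread together, a hedged end-to-end sketch: the data and the Normal(mu, 1) data-generating model below are invented, the prior is a vague Normal(0, 10) on mu, and random walk Metropolis samples the unnormalized posterior, working on the log scale for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(2.0, 1.0, size=50)  # invented observations

def log_numerator(mu):
    # log(likelihood * prior): the only quantity the sampler needs.
    log_lik = -0.5 * np.sum((data - mu)**2)   # i.i.d. Normal(mu, 1) model
    log_prior = -0.5 * (mu / 10.0)**2         # vague Normal(0, 10) prior
    return log_lik + log_prior

mu, step, samples = 0.0, 0.5, []
for _ in range(10_000):
    proposal = mu + step * rng.normal()
    # Accept with probability min(1, r), where log r is the difference below.
    if np.log(rng.uniform()) < log_numerator(proposal) - log_numerator(mu):
        mu = proposal
    samples.append(mu)

print(np.mean(samples[2_000:]))  # posterior mean estimate; should be near 2.0
```

The marginal likelihood (the denominator asked about earlier in the thread) never appears: only the log of likelihood × prior is ever evaluated.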