Gibbs Sampling : Data Science Concepts

  • Published 26 Jan 2021
  • Another MCMC method. Gibbs sampling is great for multivariate distributions where the conditional densities are easy to sample from.
    To emphasize a point in the video:
    - First sample is (x0, y0)
    - Next sample is (x1, y1)
    - Next sample is (x2, y2)
    ...
    That is, we update all variables once to get a new sample.
    Intro MCMC Video : • Markov Chain Monte Car...
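
A minimal sketch in Python of the scheme above, assuming the bivariate standard normal example with correlation ρ discussed in the video (conditionals p(x|y) = N(ρy, 1 − ρ²) and p(y|x) = N(ρx, 1 − ρ²)); the function and variable names are illustrative, not from the video:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=10_000, x0=0.0, y0=0.0, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    One full sweep (update x, then y) produces the next sample (x_i, y_i),
    matching the note above: (x0, y0) -> (x1, y1) -> (x2, y2) -> ...
    """
    rng = np.random.default_rng(seed)
    cond_sd = np.sqrt(1.0 - rho**2)       # std. dev. of each conditional
    x, y = x0, y0
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, cond_sd)  # draw x_i | y_{i-1}
        y = rng.normal(rho * x, cond_sd)  # draw y_i | x_i
        samples[i] = (x, y)
    return samples

samples = gibbs_bivariate_normal(rho=0.8)
# Sample moments should approach the target: means ~0, correlation ~0.8
print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])
```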

COMMENTS • 70

  • @Adam-ec9dk · 3 years ago +69

    I like that you wrote all the major points on the board and fit everything into one slide. Super easy to take a screenshot so I can remember the gist of the video.

  • @ResilientFighter · 3 years ago +34

    Ritvik, your videos are ranking at the top when a person searches "Metropolis-Hastings" and "Gibbs sampling". Great job man!

  • @rahulchowdhury9739 · 11 months ago +2

    You're one of the best teachers of statistics. Thanks for taking the time to share the way you understand theories and problems.

  • @musclesmalone · 3 years ago +9

    Fantastic, concise explanation and excellent visualisations. It's also very appreciated that everything is written prior to recording, so there aren't thousands of people (and in some cases millions) waiting while watching you draw a graph or write a formula. Huge appreciation for your work, thank you!

  • @DhruveeChauhan · 1 year ago +1

    You are literally saving us one day before an exam!

  • @Ciavi-ar · 4 months ago +1

    This did actually help to finally wrap my brain around this topic. Thanks!

  • @des6309 · 2 years ago +1

    Dude, you're so talented at explaining.

  • @user-me9mw5oc7u · 2 years ago +2

    Thanks, you are so good at explaining. I will recommend that my professor take a look at your videos.

  • @squidgeypea · 2 years ago

    Thank you! Your videos are all really helpful and well explained.

  • @adamtran5747 · 1 year ago

    Absolutely love the content, brother. Please keep up the amazing work.

  • @thename305 · 1 year ago

    Excellent video, your explanation was clear and helpful!

  • @Mv-pp7is · 1 year ago

    This is incredibly helpful, thank you!

  • @salahlaaroussi9896 · 2 years ago +1

    Really well explained. Nice job!

  • @monicamilagroshuaytadurand2076 · 2 years ago

    Thank you very much! Your explanation helped me a lot!

  • @aalailayahya · 3 years ago +1

    Great video, keep up the work, I love it!

  • @shuangli5466 · 7 months ago

    Thank you for giving me probably 15 marks on my exam and lowering my probability of failing from 10% to 5%.

  • @christophersolomon633 · 3 years ago +1

    Excellent video - wonderfully clear.

  • @mrocean1293 · 3 years ago

    Great explanation, love it!

  • @dddd-ci2zm · 2 years ago

    Thank you! I finally understand it now!

  • @rmb706 · 3 months ago

    I had to write a Gibbs sampler for my Bayes midterm. That moment when I checked it with PyMC and it was spot on on the first attempt just felt amazing. 🎉 🔥

  • @vitorsantana2795 · 1 year ago

    You just saved my ass so hard right now. Thanks a lot

  • @praveenkumarkazipeta · 9 months ago

    This post is awesome, keep going!

  • @RollingcoleW · 1 year ago

    Thank you! I am a hobbyist and this is helpful.

  • @AdrianYang · 3 years ago

    Thank you for your video, Ritvik. Can I understand it this way: searching a multi-dimensional space is difficult because there are infinitely many choices of direction, while by fixing all the other dimensions and leaving only one movable, searching a one-dimensional space becomes super easy because there are only two choices of direction?

  • @mikeshin77 · 1 year ago

    Fantastic and easy explanation. I like the way you explain!

  • @anushaavyukt6381 · 2 years ago +3

    Hi Ritvik, thanks for such a clear explanation. Would you please make a video on the EM algorithm? I saw a lot of videos on it and understand the basics, but I'm not sure how to implement it for any problem. Thanks a lot.

  • @filosofiadetalhista · 2 years ago

    Tight video. Thanks!

  • @Alexander-pk1tu · 2 years ago

    Thank you! Very good video.

  • @marcoantoniocoutinho · 3 years ago +2

    Great video, thanks. How could I associate (conceptually or intuitively) Gibbs sampling with Markov chain modeling of the variables, given that I'm building a sampler based on their conditional probabilities?

  • @AleeEnt863 · 10 months ago

    A big thanks!

  • @senyaisavnina · 2 years ago

    This high-density bubble is like a supermassive black hole: once you get there, you never go out :)

  • @MirGlobalAcademy · 2 years ago +2

    Simple explanation, just like spoon-feeding. Good!

  • @Reach41 · 3 years ago +1

    This is one of the few channels left where p(x), with p(1) = Democrat, etc., is not a factor. Now to apply this to LIDAR ranging to produce either a Bayesian occupancy grid or a point cloud. Laser beams expand in diameter and lose energy (in air) going out from the device lens, and vary in intensity both as the distance increases and, independently, across the beam as a function of horizontal and vertical beam width.

  • @shirleygui6533 · 1 year ago

    so clear

  • @MoodyMooMoo · 8 months ago

    Thanks!

  • @Abhilashaisgood · 4 days ago

    amazing

  • @LL-lb7ur · 2 years ago

    Thank you for the video. What real-life problems can you use Gibbs sampling for, and what do you get at the end of sampling?

  • @eduardo.garcia · 2 years ago

    Thanks a lot for all your videos!!! Please do Hamiltonian Monte Carlo next :D

  • @snehanjalikalamkar2268 · 1 year ago +4

    Hey Ritvik, your videos are very helpful; I learned a lot from them.
    Could you also provide some references for points that you don't cover (mostly prerequisites)?
    In this video, I could not figure out why p(x|y) = N(ρy, 1 - ρ²). Could you please provide a reference for this?
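
For reference, the conditional quoted here is the standard bivariate-normal conditioning formula, specialized to the zero-mean, unit-variance case used in the video's example:

```latex
% General bivariate-normal conditional:
X \mid Y = y \;\sim\; \mathcal{N}\!\left(\mu_X + \rho\,\frac{\sigma_X}{\sigma_Y}\,(y - \mu_Y),\;\; (1-\rho^2)\,\sigma_X^2\right)
% With \mu_X = \mu_Y = 0 and \sigma_X = \sigma_Y = 1, this reduces to:
X \mid Y = y \;\sim\; \mathcal{N}(\rho y,\; 1-\rho^2)
```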

  • @prof1math · 3 years ago +1

    Great explanation, keep it up, thanks!

  • @Gasgar800 · 3 years ago

    Sick! Thanks.

  • @princessefleure8360 · 3 years ago

    Thank you so much for this video, it helps me a lot!
    I just had a question: if I understood correctly, with 3 variables we have to calculate p(x|(y,z)).
    But how do we know the "ρ" in this case? I guess we need a 3×3 covariance matrix.
    Have a good day!
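
For what it's worth, with three (or more) jointly Gaussian variables the scalar correlation ρ is indeed replaced by blocks of the full covariance matrix. A rough NumPy sketch of the conditional parameters, assuming a zero-mean Gaussian with a 3×3 covariance (names and numbers are illustrative, not from the video):

```python
import numpy as np

def normal_conditional(mu, Sigma, i, rest_values):
    """Mean and variance of x_i | x_rest for a multivariate normal N(mu, Sigma).

    With three variables this gives e.g. p(x | y, z); the covariance blocks
    play the role that the correlation rho played in the two-variable case.
    """
    rest = [j for j in range(len(mu)) if j != i]
    S_ir = Sigma[i, rest]                  # cross-covariance row
    S_rr = Sigma[np.ix_(rest, rest)]       # covariance of the conditioned-on block
    w = np.linalg.solve(S_rr, rest_values - mu[rest])
    cond_mean = mu[i] + S_ir @ w
    cond_var = Sigma[i, i] - S_ir @ np.linalg.solve(S_rr, S_ir)
    return cond_mean, cond_var

# Example: 3x3 covariance; parameters of p(x | y = 0.4, z = -1.2)
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
m, v = normal_conditional(mu, Sigma, i=0, rest_values=np.array([0.4, -1.2]))
```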

  • @tsen6367 · 1 year ago +2

    Hello sir. First things first, I want to say thank you very much for your incredible explanations in your videos.
    I am currently working on my thesis, which uses a hierarchical Bayesian method, but I am still confused and don't understand how to determine the right prior for my data. If you don't mind and have free time, could I discuss it with you through social media? I really need someone to guide me 🙏 Thank you very much in advance, sir.

  • @chainonsmanquants1630 · 3 years ago

    Am I right in saying that Gibbs sampling is possible only when you know the marginal probability distribution for each variable?

  • @PeterSmitGroningen · 3 years ago +2

    With the "probably spikes" example, I think a more formal explanation would be "steep gradient" or lack of gradient even. Many approximation techniques have problems with steep or sudden gradients, think neural networks

    • @ritvikmath · 3 years ago +1

      Thanks for putting a name to it! Indeed, many ML algorithms and stat methods are not happy with quick, unexpected changes.

  • @vs7185 · 2 years ago

    Is there no accept/reject step here like in Metropolis-Hastings or rejection sampling?

  • @anondsml · 8 months ago

    Do you offer any tutoring in Bayesian statistics?

  • @leohsusolid · 2 years ago

    Great videos! They make the concept very clear! Thank you!
    I have a question about the correction: after sampling (X0, Y0), how do we sample (X1, Y1)? In other words, what do we condition on when we change both? Or do we just sample X1 and Y1 separately?

    • @leohsusolid · 2 years ago

      The other question is: if we go from (X0, Y0) to (X1, Y1), then we don't face the "probability spike" situation, do we?

    • @apah · 2 years ago

      The reason he made the correction is that what we call a sample is (xi, yi). Therefore an iteration of Gibbs updates both variables with the method he gave: sampling x1 given y0, then y1 given x1.

    • @leohsusolid · 2 years ago

      @@apah Thank you for replying!
      Do you mean that we can sample (X1, Y1), but within this sample there is an order: X1 is sampled first given Y0, then Y1 given X1?

    • @apah · 2 years ago

      @@leohsusolid My pleasure! Exactly, starting with either one is fine. As I said earlier, a sample is by definition the pair (Xi, Yi). The point of Gibbs sampling is to find a way to make these samples grow closer and closer to samples drawn from the actual distribution P(X, Y), and the method to do so is to alternately sample from the conditional distributions.

  • @AshokKumar-lk1gv · 3 years ago +2

    nice

  • @shahf13 · 3 years ago +2

    Great channel! Can you do a video about autoencoders?

  • @cleansquirrel2084 · 3 years ago +4

    I'm watching.

  • @edwardhartz1029 · 2 years ago

    At around 4:30, you started at (x0, y0), but then the value of x0 was never used. Why is this?

    • @vs7185 · 2 years ago

      I think you can use either one to start the process. If you are using x0, then next you will use p(y1 | x0); if you are using y0, then next you will use p(x1 | y0).

  • @juanpabloaguilarcabezas8089 · 3 years ago +1

    Can you do a video on Hamiltonian Monte Carlo?

  • @BreezeTalk · 2 years ago

    Please show a code implementation

  • @apicasharma2499 · 2 years ago

    Could you please explain hands-on?