Bayesian Data Science: Probabilistic Programming | SciPy 2019 Tutorial | Eric Ma

  • Published 22 Nov 2024

COMMENTS • 24

  • @bilboswaggins7629
    @bilboswaggins7629 4 years ago +23

    Could listen to this dude talk all day, does such a great job of making it all seem so non-threatening and interesting.

  • @Mutual_Information
    @Mutual_Information 2 years ago +1

    Bayesian statistics makes for such satisfying modeling.

  • @galaxymariosuper
    @galaxymariosuper 4 years ago +2

    you guys are gold

  • @henrmota
    @henrmota 4 years ago +2

    Great workshop.

  • @carstenlimberger9427
    @carstenlimberger9427 4 years ago +4

    Was the tutorial code accelerated through C/C++ compilation by Theano in the background? When I execute the examples, the computation is much slower than in the video, especially when sampling from the posterior in the baseball example.

    • @christiansmith2547
      @christiansmith2547 3 years ago +3

      Can’t remember exactly because I’m midway through, but I think he’s using an online Jupyter kernel rather than a local instance.

  • @flowy-moe
    @flowy-moe 10 months ago

    Would someone be able to share the Jupyter Notebooks? The link in the description is not working for me ...

  • @juliocardenas-rodriguez1986
    @juliocardenas-rodriguez1986 2 years ago +1

    These guys rock !!

  • @bhishanpoudel8707
    @bhishanpoudel8707 5 years ago +2

    Great tutorial, lots to learn. One aside: how do you select text and paste it to another place so neatly? Which app do you use?

  • @zzhou3894
    @zzhou3894 4 years ago +2

    Any link for the demo notebook? Thx.

  • @TomerBenDavid
    @TomerBenDavid 5 years ago +2

    Thank you too

  • @nano7586
    @nano7586 3 years ago +1

    1:40:00 I was able to follow so well, but then it was simply too fast. I keep hearing the term "posterior", but it wasn't explained. I also didn't understand why p follows a uniform distribution and not, e.g., a normal. Does it mean that p has equal probability of reaching a certain value for n data sets?

    • @ihgnmah
      @ihgnmah 3 years ago +1

      So he was talking about the parameter p of the groups (control and test), which he computed using the groupby() function. This p relates to P(model | data), the probability of the distribution/model given the observed data. The question, though, is how reliable that point estimate is. That's what you're left with if you don't have PyMC3.
      Then he moved on to explain how to achieve the same thing with PyMC3. Since p is unknown, the most agnostic guess is that it follows a uniform distribution, which means any value between 0 and 1 has the same chance of being the true p. And because each sample is a Bernoulli trial, he used the Bernoulli distribution as the likelihood for estimating p. Given the observed data, the likelihood, and the prior (the distribution you believe p follows), you can estimate its real value.
      The posterior he mentioned is the distribution of p after conditioning on the data: the sampler draws from P(model | data) (remember, we can go back and forth between data and model via Bayes' formula), and that is what he referred to as the posterior.
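      The uniform-prior-plus-Bernoulli-likelihood setup described above actually has a closed form: Uniform(0, 1) is the same as Beta(1, 1), and Beta is conjugate to the Bernoulli likelihood, so the posterior is again a Beta. A minimal sketch in plain Python (the counts here are invented for illustration, not taken from the tutorial):

      ```python
      # Uniform(0, 1) prior on p is Beta(alpha=1, beta=1).
      # After observing k successes in n Bernoulli trials, conjugacy
      # gives the posterior Beta(alpha + k, beta + n - k).

      def posterior_params(k, n, alpha=1.0, beta=1.0):
          """Beta posterior parameters after k successes in n trials."""
          return alpha + k, beta + (n - k)

      def posterior_mean(k, n, alpha=1.0, beta=1.0):
          """Mean of the Beta posterior: (alpha + k) / (alpha + beta + n)."""
          a, b = posterior_params(k, n, alpha, beta)
          return a / (a + b)

      # Hypothetical example: 7 successes out of 10 trials.
      print(posterior_params(7, 10))  # -> (8.0, 4.0)
      print(posterior_mean(7, 10))    # -> 0.666...
      ```

      What PyMC3 does with pm.Uniform and pm.Bernoulli is the sampling-based version of this same update; the closed form is just a way to check that the sampler's answer is sensible.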

  • @matthewmeadows2456
    @matthewmeadows2456 4 years ago

    Cannot run the Jupyter notebooks properly now. Very frustrating.

  • @bhavinmoriya9216
    @bhavinmoriya9216 2 years ago

    Could anybody please send me the link to Justin Boyce's blog?

  • @joaopedrorocha5693
    @joaopedrorocha5693 1 year ago

    We have uncles that don't change political views here in my country too ... maybe we could hierarchically model the probability of someone in some country having such an uncle hehe
    We could make a website called the "uncle project" with a form asking people worldwide how many uncles they have and how many get emotional when someone slightly disagrees with their views. I think we could use a binomial likelihood in this case, since we get N uncles and a Bernoulli trial on each fitting the description.
    Then each time someone submitted an answer we could update our hierarchical model ... So when enough people in enough countries send their answers, we would have a world map showing in which countries people are luckiest on the uncle subject and in which ones they are unluckiest. 🤣
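    The partial-pooling intuition behind the joke can be mocked up without a survey site. A toy sketch with invented country counts, using empirical-Bayes Beta-Binomial shrinkage (a simplification of the full hierarchical model from the tutorial; the prior strength s is an assumed knob, and the data are made up):

    ```python
    # Invented data: per country, (uncles surveyed, uncles fitting the description).
    surveys = {"A": (50, 20), "B": (8, 7), "C": (200, 60)}

    # Global rate across all countries.
    n_total = sum(n for n, _ in surveys.values())
    k_total = sum(k for _, k in surveys.values())
    m = k_total / n_total

    # Treat the global rate as a Beta(m*s, (1-m)*s) prior and update each
    # country's binomial counts; s (assumed here) controls shrinkage strength.
    s = 10.0
    pooled = {
        country: (m * s + k) / (s + n)   # Beta posterior mean per country
        for country, (n, k) in surveys.items()
    }
    print(pooled)
    ```

    Country B has only 8 uncles surveyed, so its raw rate of 7/8 gets pulled strongly toward the global rate, while country C, with 200 surveyed, barely moves: that shrinkage of small-sample estimates is exactly what the hierarchical models in the tutorial buy you.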

  • @programminginterviewsprepa7710
    @programminginterviewsprepa7710 2 years ago

    Watch the first person on Earth to understand Bayes

  • @NidhiSinha4U
    @NidhiSinha4U 2 years ago

    Anyone who can help me with Bayesian analysis? I'd really appreciate it 😁

  • @perrygrossman2008
    @perrygrossman2008 4 years ago

    Cool presentation, Eric!
    Estimation is the core of all statistical inference.
    ua-cam.com/video/2wvt6GPZl1U/v-deo.html
    Nice one: "Calculating p-values is not even... the point of statistical inference."

  • @OriginalBernieBro
    @OriginalBernieBro 4 years ago +1

    Holy shit no timestamps?!