The better way to do statistics

  • Published May 21, 2024
  • To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/VeryNormal. You’ll also get 20% off an annual premium subscription.
    Non-clickbait title: A gentle, but progressively rough introduction to Bayesian statistics
    Stay updated with the channel and some stuff I make!
    👉 verynormal.substack.com
    👉 very-normal.sellfy.store
    This video was sponsored by Brilliant.

COMMENTS • 252

  • @cameronhill7769
    @cameronhill7769 1 month ago +158

    I used to be a frequentist, but then I updated my beliefs.

  • @kadaj131313
    @kadaj131313 1 month ago +251

    Half my professors would fight you over this title, the other half would agree with you

    • @very-normal
      @very-normal 1 month ago +26

      😈

    • @jasondads9509
      @jasondads9509 1 month ago +2

      I swear the title changed, what was it before?

    • @very-normal
      @very-normal 1 month ago +9

      nah it didn’t change

    • @ThatOneAmpharos
      @ThatOneAmpharos 8 days ago +1

      @@very-normal what was the probability it would have changed if the probability of change was the probability of it not changing?

    • @tracywilliams7929
      @tracywilliams7929 5 days ago

      Lol!

  • @RomanNumural9
    @RomanNumural9 1 month ago +305

    Math finance PhD student here. Great video! Just so you know there's a book called "Deep Learning" by Ian Goodfellow et al. It covers Bayesian stats, including MCMCs and other things. It's a great resource and if you wanna know more about this stuff I found it a pretty reasonable read! :)

    • @Nino21370
      @Nino21370 1 month ago +3

      🔥

    • @JetJockey87
      @JetJockey87 1 month ago +8

      Ray Kurzweil's book "How to Create a Mind" goes into this as well, with his description of the Markov chain Monte Carlo models he used to build his dictation software, Dragon NaturallySpeaking, which has been an absolute staple in the medical industry and still continues to outperform Transformer models.
      Which is really hilarious if you're the kind of nerd who knows that software as well as who Ray Kurzweil is and what he is more famous for: Singularity-esque neohuman futurism propaganda.

    • @lupino652
      @lupino652 1 month ago +1

      Yep, a classic, well paced for someone with a background

    • @raphaelscaff2399
      @raphaelscaff2399 1 month ago +1

      Cool

    • @luisjuarez7291
      @luisjuarez7291 1 month ago +2

      Hey, out of curiosity: if you have a doctorate in math finance, do you find that a lot of job opportunities are still based on whether you have a CFA or CPA on top of your degree? (If you don’t pursue being a professor or researcher)

  • @Tom_Het
    @Tom_Het 1 month ago +53

    Wow, nobody has ever explained to me that Bayes' Theorem is derived from p(a|b)p(b) = p(b|a)p(a). That makes way more sense now.
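The symmetry the comment points at is just the two chain-rule factorizations of the joint distribution; a minimal Python check with an invented joint-probability table:

```python
# Joint probability table p(a, b) for two binary events (values invented).
joint = {
    (True, True): 0.12, (True, False): 0.28,
    (False, True): 0.18, (False, False): 0.42,
}

def marginal_a(a):
    return sum(p for (ai, _), p in joint.items() if ai == a)

def marginal_b(b):
    return sum(p for (_, bi), p in joint.items() if bi == b)

def cond_a_given_b(a, b):
    return joint[(a, b)] / marginal_b(b)   # p(a|b) = p(a,b) / p(b)

def cond_b_given_a(b, a):
    return joint[(a, b)] / marginal_a(a)   # p(b|a) = p(a,b) / p(a)

# Both factorizations recover the same joint probability p(a, b),
# which is exactly why Bayes' theorem follows by dividing through by p(b).
lhs = cond_a_given_b(True, True) * marginal_b(True)
rhs = cond_b_given_a(True, True) * marginal_a(True)
print(abs(lhs - rhs) < 1e-12)                  # prints True
print(abs(lhs - joint[(True, True)]) < 1e-12)  # prints True
```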

    • @durg8909
      @durg8909 7 days ago +1

      Same, that was such a 3Blue1Brown moment.

  • @ricafrod
    @ricafrod 1 month ago +27

    As a PhD student who has used frequentist statistics for as long as I remember, I’d only ever heard gossip and rumours about Bayesian statistics, but your video hooked me from start to finish on such a fascinating subject! Great video!!!

    • @very-normal
      @very-normal 1 month ago +7

      Thanks! Despite the title name, I think having both under your belt is better than just choosing a “side” in this nerdy debate lol

  • @rtg_onefourtwoeightfiveseven
    @rtg_onefourtwoeightfiveseven 1 month ago +6

    I'm an astrophysicist, and in our field Bayesian statistics is the way. Great and all, except everyone seemingly expected me to know what an MCMC analysis was (it wasn't mentioned anywhere in the refresher lectures at the start of my PhD) despite never having heard of it before I started. This video was a massive help.

    • @very-normal
      @very-normal 1 month ago +6

      If that isn’t the PhD student experience, I don’t know what is
      Also, I’m using your comment to brag to my friends that I spoke to an astrophysicist

    • @rtg_onefourtwoeightfiveseven
      @rtg_onefourtwoeightfiveseven 1 month ago +1

      @@very-normal Haha, glad I count as clout to someone. I'm just a humble 1st-year PhD student, but no need to tell them that. ;-)

    • @very-normal
      @very-normal 1 month ago +3

      Oof, first year is tough, it was definitely the most character building I’ve done in a short span of time. Hang in there, and best of luck!

  • @qwerty11111122
    @qwerty11111122 1 month ago +59

    As an introduction to Bayes' theorem, I think 3b1b really helped me form an intuition about these statistics with his "Bayes ratio": multiplying your prior by a ratio formed by the likelihood and the marginal to form the posterior, a new prior

    • @barttrudeau9237
      @barttrudeau9237 1 month ago +2

      3b1b was where I first heard of Bayes Theorem. I've been hooked ever since.

    • @leassis91
      @leassis91 1 month ago

      this video from 3b1b is a life saver

  • @tomalapapa100
    @tomalapapa100 1 month ago +4

    I studied math as a degree and specialized in statistics and finance. I had the same experience: numerous frequentist classes but few Bayesian ones. I studied on my own and with a couple of classes that were available to me in grad school, and struggled a lot to get the gist of Bayesian statistics. This video is perfect for people with knowledge of the frequentist view who wish to then learn Bayesian statistics.

  • @avenger1825
    @avenger1825 1 month ago +31

    I always get excited when I see one of your uploads; I've been studying heavily about statistics coming from a pure mathematics background, and your videos are always very helpful to build the conceptual foundations that textbooks often obscure in favor of specialized, theoretical language. This has already cleared up several things I didn't quite understand about Bayesian statistics, so thank you (for this and your other videos)! :^)

    • @very-normal
      @very-normal 1 month ago +9

      These kinds of comments are the ones that get me really fired up (in a good way). The videos are doing what I want them to do; thanks for letting me know and taking the time to comment!

  • @elinope4745
    @elinope4745 1 month ago +3

    YouTube recommended this video to me on my recommended feed. Only about one in twenty videos is any good. The odds were low that someone would make a video worthwhile. Subbed, liked.

  • @nzt29
    @nzt29 1 month ago +1

    Best video I’ve seen on this so far. I like the comparison between the two methods and the fact that you map the data and parameter variables back to the typical A and B seen in the Bayes' theorem definition.
    edit: I should have phrased this instead as how you connected Bayes' theorem to distributions.

  • @mmilrl5768
    @mmilrl5768 1 month ago +2

    I’m currently finishing up my first of many statistics courses; we spent the first month on Bayesian statistics, then started focusing more on the frequentist side of things. I had no idea these were even considered different things. Very cool video!

  • @trevorgalvez9127
    @trevorgalvez9127 1 month ago +2

    I used Bayes' theorem for a simple learning model for establishing categories for various phrases that were similar but not exactly the same. Going through thousands of records manually was possible, but using this allowed me to do it in a day with the help of Excel and Python.
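A toy sketch of the kind of Bayes-rule category scoring described above, written as a naive Bayes classifier; the categories, phrases, and add-one smoothing are invented for illustration, not the commenter's actual setup:

```python
import math
from collections import Counter, defaultdict

# Tiny labeled examples (invented): phrase -> category.
training = [
    ("invoice payment due", "billing"),
    ("payment overdue invoice", "billing"),
    ("reset my password", "support"),
    ("password login help", "support"),
]

# Count words per category and category frequencies.
word_counts = defaultdict(Counter)
cat_counts = Counter()
for phrase, cat in training:
    cat_counts[cat] += 1
    word_counts[cat].update(phrase.split())

vocab = {w for counts in word_counts.values() for w in counts}

def score(phrase, cat):
    """Log p(cat) + sum of log p(word | cat), with add-one smoothing."""
    logp = math.log(cat_counts[cat] / sum(cat_counts.values()))
    total = sum(word_counts[cat].values())
    for w in phrase.split():
        logp += math.log((word_counts[cat][w] + 1) / (total + len(vocab)))
    return logp

def classify(phrase):
    # Pick the category with the highest posterior score.
    return max(cat_counts, key=lambda c: score(phrase, c))

print(classify("overdue payment"))   # prints billing
print(classify("login password"))    # prints support
```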

  • @justdave9195
    @justdave9195 1 month ago +8

    Could you please make a video on Generalized Linear Models too?
    These explanations are soooo helpful.

  • @TheOnlyNightmare
    @TheOnlyNightmare 1 month ago +1

    Loved this! Definitely a subscriber now 🎉
    I got confronted with Bayes in a seminar where we used various machine learning and deep learning models, with the expectation that we already knew all this prior to starting. It left me with no confidence in the model results, even though they outperformed some other approaches.

  • @tylerwalton7659
    @tylerwalton7659 27 days ago

    For some reason I always love when someone says “hi mom” in a video. It’s just wholesome and nice to know they are getting their mom’s support.

  • @thegimel
    @thegimel 1 month ago +1

    Great video as the rest of your content. You have a pleasantly simple, intuitive and concise way of presenting the D :)
    I would very much like for you to dive deeper into the Likelihood in particular, and why it isn't a real PDF even though it can look like one.
    Cheers!

  • @charlesbwilliams
    @charlesbwilliams 1 month ago +5

    It's so cool to see MCMCs get some love. The only use I’ve ever seen of it in my field (Psychology) is in Item Response Theory. Awesome video!

    • @ym35325
      @ym35325 1 month ago +1

      It's also used quite a bit in economics (industrial organization) and in quantitative marketing!

  • @xanmos
    @xanmos 1 month ago +2

    Very comprehensible video about Bayesian Statistics.
    I have seen most of your videos and I will recommend them to my students. I am teaching undergraduate basic statistics and I must say, your videos are very well made. I will put them in my course sites so my students can learn from you as well. ❤

  • @cameronkhanpour3002
    @cameronkhanpour3002 1 month ago +2

    Great video once again! You mentioned MCMC algorithms; to add some details, they work by constructing a regular/ergodic Markov chain that has a unique stationary distribution, and we want that stationary distribution to be the target distribution so you can, say, sample it for inference. So the real question is: how do you design a transition kernel that (if it converges) leads an initial vector to the proper stationary distribution you wish to sample after some burn-in time?
    I know this from a probabilistic graphical models perspective, where this is used extensively in the form of Gibbs sampling, a special case of the Metropolis-Hastings algorithm, and Rao-Blackwellized particles, which sample certain (more complex/loopy) parts of a network and then do exact (analytical) inference on the rest.
    Variational inference is a form of inference as an optimization problem, such as in mean field approximation, where you choose a simpler distribution Q and compute new parameters that get it close to your actual distribution P. The main way I have learned is to minimize the KL divergence (relative entropy) between Q and P (in that order, since KL is not symmetric). If anyone would like to read more, Bayesian Reasoning and Machine Learning by David Barber is really good (IIRC variational inference in particular is in chapter 28).
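The transition-kernel idea can be made concrete with a minimal random-walk Metropolis-Hastings sampler; here the target is an unnormalized Beta(3, 5) density, and the step size and burn-in are illustrative tuning choices:

```python
import math
import random

random.seed(0)

def log_target(theta):
    # Unnormalized log density of a Beta(3, 5): theta^2 * (1 - theta)^4.
    if not 0.0 < theta < 1.0:
        return float("-inf")
    return 2 * math.log(theta) + 4 * math.log(1 - theta)

def metropolis(n_steps=20_000, step=0.2, burn_in=2_000):
    theta = 0.5                      # initial state of the chain
    samples = []
    for i in range(n_steps):
        proposal = theta + random.uniform(-step, step)   # symmetric proposal
        # Accept with probability min(1, target(proposal) / target(theta)).
        if math.log(random.random()) < log_target(proposal) - log_target(theta):
            theta = proposal
        if i >= burn_in:             # discard draws before the chain settles
            samples.append(theta)
    return samples

draws = metropolis()
mean = sum(draws) / len(draws)
print(round(mean, 2))   # should land near E[Beta(3, 5)] = 3/8 = 0.375
```

Because only the ratio of target densities is used, the normalizing constant never needs to be computed, which is exactly why MCMC sidesteps the intractable marginal.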

  • @entivreality
    @entivreality 1 month ago +1

    Really great explanation! Love the progression from elementary to more advanced topics. A video on empirical Bayes methods could also be cool :)

  • @marcovitturini9481
    @marcovitturini9481 1 month ago +1

    Thanks to your channel I'm considering choosing a biostat MSc. Thanks for explaining and inspiring

  • @joelbeeby866
    @joelbeeby866 29 days ago

    My university UG finance course has taught me rudimentary statistics, but not to the level that I want. Your videos are genuinely amazing for self-studying and really bring out the logic in statistics, which textbooks almost never do. Thank you! Please keep it up!

    • @very-normal
      @very-normal 29 days ago

      Thanks for watching! It’s always very encouraging to see they’re helping people out, thanks for taking the time out to tell me

  • @alishermirmanov5608
    @alishermirmanov5608 1 month ago

    Amazing video, provides great intuitive understanding!

  • @figmundsreud9363
    @figmundsreud9363 1 month ago +8

    Very nice introduction to Bayesian statistics. What I like about Bayesian statistics is that one can, in many contexts, interpret frequentist methods just as a special case of Bayesian methods with uninformative priors. Also, from my experience, many people are Bayesian simply for pragmatic reasons, because for many problems Bayesian methods just work better (frequentists are also just discovering the importance of shrinkage estimation, and the most general way to apply shrinkage is through Bayesian priors). So I think the philosophical debate between frequentist and Bayesian methods is somewhat overhyped. The biggest downside of Bayesian methods is their computational cost. I'm currently working with a model that doesn't even have a computable expression for the likelihood. MCMC somehow still works (magic), but estimation even for a relatively small model takes several hours.

    • @very-normal
      @very-normal 1 month ago +3

      I think the debate is overhyped too, it’s truly just nerd drama
      I feel your MCMC pains, best of luck 🫡

  • @stevenjackson8226
    @stevenjackson8226 28 days ago

    Cool. Nice overview. This is the most rigorous presentation I've seen. I get Bayesian statistics at an intuitive level, but was curious about how it works mathematically. And there it is.

  • @RexAstrum
    @RexAstrum 19 days ago

    Thank you for this!

  • @derWeltraumaffe
    @derWeltraumaffe 1 month ago +1

    I'm still learning frequentist statistics right now in university (first-year psychology), so this video still goes way over my head, but it was a really nice overview to get a general idea of the concept of Bayesian statistics. All we learned about it was "btw, there's a thing called Bayesian statistics. OK... let's move on." Not kidding.

    • @very-normal
      @very-normal 1 month ago +3

      This was my experience word for word in undergrad too

  • @jrlearnstomath
    @jrlearnstomath 21 days ago

    Looking forward to more on variational inference, it's really doing my head in

  • @Unaimend
    @Unaimend 1 month ago

    Another good one. Thanks

  • @barttrudeau9237
    @barttrudeau9237 1 month ago

    This is a great video on the subject. I really hope you produce more content on Bayesian stats. (Maybe dive into PyMC?) Thank you!

  • @BrakeForLoop
    @BrakeForLoop 26 days ago

    Very helpful! I did get a little lost when trying to think about applications from my experiences.

  • @cyberwolf575
    @cyberwolf575 1 month ago

    This video was great. As someone who is studying data science on my own online (I'm 32 right now), I had a hard time understanding Bayes' theorem for a bit there. I wish I'd seen this video way earlier; it would have saved me weeks of wrapping my brain around it.

    • @very-normal
      @very-normal 1 month ago +1

      Thank you! Best of luck on your data science journey, I hope I can help you out more with the statistical portion of it

  • @tobiaspeelen4395
    @tobiaspeelen4395 12 days ago

    When I first saw the concept of Bayesian statistics, I thought "wow, that's dumb, having probabilities rely only on your own belief," but now I see that it is a way of deducing what is likely the real probability, and I'm like "WOW, AMAZING"

  • @davidl.e5203
    @davidl.e5203 6 days ago

    If my understanding is correct, Bayesians are basically frequentists plus a moving average.
    Frequentists draw conclusions about probabilities based on historic observations and take the resulting probability for granted.
    Bayesians subset the historic observations by time scale, then make predictions for the next time scale. If the next time scale's probabilities don't match the historic probabilities, they update the probability.

  • @hyunsunggo855
    @hyunsunggo855 1 month ago +28

    The cool thing about variational inference is that it converts the problem of computing the intractable integral into a more manageable optimization problem of, with respect to the parameters, optimizing some quantity, the variational free energy! This not only makes the problem often easier (through the more flexible variational graphical model) and more tractable (than e.g. MCMC, etc..), but also enables borrowing insights from mathematical optimization theory, to solve the particular formulation of the problem. By the way, this connection to mathematical optimization is why it is called "variational" inference in the first place, directly connected to calculus of variations! Also, VI has amazing applications in deep learning, namely, variational autoencoders (VAEs), in which it's applied to the latent space for the induced probability distribution, for explaining the data distribution, to become much, much more complex, compared to the classical examples you've shown in this video. For example, diffusion models, that can create those amazing images, can indeed be seen as an instance of VAE! Thank you for this great video! I learned a lot! :)

    • @waylonbarrett3456
      @waylonbarrett3456 1 month ago

      I'm developing an AI model based on variational inference

    • @hyunsunggo855
      @hyunsunggo855 1 month ago

      @@user-ju2pu8cf2l Well, LDMs such as SD make use of VAEs to reduce resource requirements but that's not what I was talking about. You see, mathematically, you can think of the noising & denoising steps themselves as an instance of a VAE. Which includes non-LDMs that operate on the original data domain such as the pixels themselves.

    • @hyunsunggo855
      @hyunsunggo855 1 month ago +1

      @user-ju2pu8cf2l While LDMs such as SD indeed utilize VAEs to reduce resource requirements, I wasn't talking about that specific use case. The noising and de-noising steps as a whole are also an instance of a VAE under a certain interpretation, not limited to LDMs but also pure non-LDMs that operate on the original data domain, such as the pixels themselves.
      (Idk why but my previous reply was deleted. 🤔)

  • @Y45HV1N
    @Y45HV1N 25 days ago

    Really cool video and it's easy to follow. I just think it would be better without the frequentist bashing. I think the parts about what frequentists hope they could do or fool themselves into thinking are more a reflection of misinformed/poorly trained frequentists. Ultimately, frequentism itself has two main approaches: the Neyman-Pearson NHST approach and the Fisher compatibility approach. The search for p

    • @very-normal
      @very-normal 25 days ago

      Bayesians aren’t allowed to take two steps without making fun of frequentist methods

  • @jrlearnstomath
    @jrlearnstomath 21 days ago

    Great video, thanks!!

  • @VTdarkangel
    @VTdarkangel 12 days ago

    I don't know much about Bayesian methods beyond the most basic premises of it. However, even at that basic level, I can see the power of it. As an engineer, I was really only taught the fundamentals of frequentist statistics. While it has proven useful to understand that (despite my grumbling at the time I took the class), I could see the problem of assumptions being required in the analysis. Bayesian methods seem to account for that.

  • @KirinDave
    @KirinDave 1 month ago +2

    What's funny about this presentation is that it makes it look like the MCMC approach is the one only the deep practitioners use.
    In reality, non-statisticians who need to use this stuff prefer the MCMC approach because it's *very flexible* and just requires that we provide a model and priors, and then the optimizer and the data fight it out in the computation.
    So in a very real sense, the MCMC approach is easier for non-statisticians and preferred.

    • @very-normal
      @very-normal 1 month ago +2

      Ah yeah, in hindsight my “levels of Bayes” framing does give off this feeling. Not intended, but something for me to think about for future videos.
      Thanks for your insight!

  • @bringbackthedislikecount6767
    @bringbackthedislikecount6767 1 month ago

    Currently taking statistical physics, and I've found some similarities between the two, such as the prior probability in Bayesian statistics and the a priori probabilities for the microstates of a system in statistical physics. Interesting to learn about Bayesian statistics from a physics major’s perspective nonetheless

  • @K33go175
    @K33go175 1 month ago

    Fantastic as always, I am starting on the biostatistics track myself!

  • @EkShunya
    @EkShunya 1 month ago

    My good sir, you have my attention and subscription.
    Thank you

  • @jamesmcadory1322
    @jamesmcadory1322 1 month ago +1

    It’s funny because when I was a Physics major two professors in the department argued over whether frequentist or Bayesian Stats were better and would teach their labs differently based on their preference.

  • @godlyradmehr2004
    @godlyradmehr2004 1 month ago

    That was the best explanation, for me at least. Thanks a lot, keep going!

  • @dandandan18
    @dandandan18 1 month ago

    My university teaches both approaches to statistics, but professors and lecturers don't make the distinction known (at least for a bachelor's degree). Bayes' theorem is taught, including how probability densities may differ, how we arrive at the prior belief (or distribution), and the difference in philosophy that affects where Bayesian statistics are often used. Ultimately though, we employ frequentist approaches for theses since it's easier to teach and more common for bachelor's degrees since there's less credibility to design models (i.e., there's little mastery of the field to justify the prior beliefs that will affect the distribution).
    However, as a civil engineering student, I most often see studies that do not incorporate prior beliefs when modeling real world phenomena that would incredibly affect data interpretation. For instance, studies on flooding, landslides, groundwater flow, the structural health of bridges and concrete buildings, and project management are heavily time-dependent, which makes prior beliefs more significant, but I only encounter risk reports and models that only focus on the current data. I do think that for most studies, frequentist methods are more applicable and would quite suffice given the more theoretical nature of the data and of the methodologies, since most call for single-parameter hypothesis testing. But given the cost of testing materials and tools, I believe the Bayesian approach (incorporating expert knowledge and credible intervals instead of "typical confidence intervals") could be incredibly helpful for handling smaller sample sizes.

  • @mikestein5983
    @mikestein5983 1 day ago

    Anyone wanting a deep dive into Bayesian stats as it applies to research should consider the book Statistical Rethinking by Richard McElreath. He has also prepared a semester’s worth of lectures available on YouTube. This is not a quick fix, but essentially a graduate course. It is IMHO quite accessible and doesn’t assume too much in terms of math background.

  • @hopefullysoonaweldingengineer
    @hopefullysoonaweldingengineer 1 month ago

    Haha, a brilliant example in the introduction. Subscribed.

  • @KinomaroMakhosini
    @KinomaroMakhosini 1 month ago +1

    What a coincidence, I am starting Bayesian analysis next week in my Stochastic Probability class 😂

  • @oterotube13
    @oterotube13 1 month ago

    First time. Thank you for sharing!

  • @tommys4809
    @tommys4809 28 days ago

    Using p notation sometimes indicates discrete random variables, where the marginal is calculated by summation; f would denote the probability density function of a continuous random variable, where the marginal is calculated by integrating with respect to theta.
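In the discrete case the marginal is a plain sum over the parameter's support; a tiny Python illustration with an invented prior over three coin biases:

```python
# Discrete case: the marginal p(x) = sum over theta of p(x | theta) * p(theta).
prior = {0.2: 0.25, 0.5: 0.50, 0.8: 0.25}   # p(theta), an invented prior

def likelihood(x_heads, theta):
    # p(x | theta) for a single coin flip.
    return theta if x_heads else 1 - theta

marginal_heads = sum(likelihood(True, t) * p for t, p in prior.items())
print(round(marginal_heads, 10))   # 0.2*0.25 + 0.5*0.5 + 0.8*0.25 -> prints 0.5
```

In the continuous case the sum becomes an integral over theta, which is exactly the quantity that is often intractable and motivates MCMC or variational approximations.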

  • @ottoludewig1244
    @ottoludewig1244 1 month ago +4

    I began studying Bayesian Statistics last year for developing the tools toward an applied study with Climate Change model data, and the main tool that facilitates the posterior calculation is somewhat recent, called INLA. The theory that gives it structure is pretty dense and difficult, but by far is the easiest to implement and has the least computational costs for the cases that it is applied to.
    A year deep into this I've found it fascinating how these perspectives open up so much compared to the restrictive nature of frequentist analysis.
    I recommend the Statistical Rethinking course for all audiences and the Gelman book (Bayesian Data Analysis) for a more mature audience with a background in math and statistics.

  • @maltez6446
    @maltez6446 1 month ago

    Where was this video two weeks ago when I was writing my exam project on HMMs? This shit is too hard to comprehend for a second-year bachelor student :( Such a great video!!

  • @paulshaughnessy8182
    @paulshaughnessy8182 24 days ago

    So, I never knew the thing I hated most about statistics was called the frequentist approach, and I always hated p-values and the null hypothesis, which in my opinion are worthless. I knew of Bayes, but when I went to school, it was never taught. Great video, and now I'm subscribed.

  • @ulrichtietz1327
    @ulrichtietz1327 25 days ago

    In the realm of chance where priors dwell,
    Bayesian paths weave complex spells.
    With data new, we seek to blend
    Beliefs of old, to truths we bend.
    A prior’s whisper, soft and slight,
    Adjusts with evidence brought to light.
    Yet minds may twist and turn in vain,
    To grasp the likelihood's arcane chain.
    Conjugates and posteriors deep,
    In nested sums that never sleep.
    A dance of numbers, hard to track,
    Where certainty, forever, lacks.
    Through the haze of dense equations,
    We seek insight, past frustrations.
    A Bayesian view, so broad, so vast,
    Yet understanding, comes not fast.

  • @RabbitLLLord
    @RabbitLLLord 1 month ago +1

    To understand variational distribution better, perhaps understanding variational autoencoder can be a good start

    • @very-normal
      @very-normal 1 month ago

      Thanks! I’ve heard about it vaguely, but I’ll look more deeply into it!

  • @Jaylooker
    @Jaylooker 1 month ago

    The Bayesian method sounds similar to how neural networks update their nodes with new data. There are Bayesian neural networks that implement Bayes' theorem, which allows them to have a confidence percentage instead of just an answer to some presented data.

    • @very-normal
      @very-normal 1 month ago +1

      Oh that’s cool, I didn’t know that was a thing. I really do be learning a lot from my comment section nowadays

  • @frankjohnson123
    @frankjohnson123 1 month ago +7

    Love the videos, bit of friendly criticism: when working with a specific example (like 3:35 on), it would help to switch from generic variable names like A and B to something more specific to the example (e.g., S and W in this case). It could even be looser for pedagogical purposes, like replacing A with "Sub" and B with "Watch", though I know not everyone likes that.

    • @very-normal
      @very-normal 1 month ago +5

      Ah that’s a good idea I also find it helpful to see the events in the expression but didn’t make the connection there. thanks for pointing that out!

  • @brainsify
    @brainsify 1 month ago +1

    A pdf is a density function, not a distribution function. At least in the textbooks I’ve taught from. I understand this isn’t a big deal, but you made a whole thing.

    • @very-normal
      @very-normal 1 month ago

      Based on my experience, the terms can be used interchangeably, but I can see where the confusion can come from

    • @James-bv4nu
      @James-bv4nu 1 month ago

      Isn't a distribution function a density function?
      The area under the curve f(x) is probability; therefore, f(x) is the probability density.
      That is, f(x) dx is in units of probability, which makes f(x) in units of probability per unit x.

    • @very-normal
      @very-normal 1 month ago

      For me, it’s mostly a semantics thing. The pdf f(x) can be referred to as a “probability density function” or a probability distribution function. When I refer to the cumulative distribution function, I’ll make sure to say “cumulative” instead of just saying “distribution”.
      This is one of those topics where it’s really easy to get lost in the sauce. If I say “probability density function”, then someone will invariably say that I should also include “probability mass function”. It just becomes too wordy for the script, so I stick to probability distribution. As long as I show what I’m referring to, my hope is that people will get what I’m saying.

  • @TN-cx4qi
    @TN-cx4qi 20 days ago

    We used Bayes' theorem a lot in discrete math, stats, machine learning, and AI classes. I used Markov chains in a couple of personal programming projects.

    • @very-normal
      @very-normal 20 days ago

      Man the machine learning classes get more exposure to it than the statistics classes lol

    • @TN-cx4qi
      @TN-cx4qi 20 days ago

      @@very-normal They really do. When it first popped up on the screen and the professor asked if anyone had seen this formula before, it was like a crazy joke. It begs to be used in something like HVAC for temperature control.

  • @patrickegan8866
    @patrickegan8866 1 month ago

    Yep, in undergrad psych we got told to avoid it or we'd be excommunicated lol. In my master's of org psych we ended up using BN models and it was amazing

  • @simonpedley9729
    @simonpedley9729 1 month ago

    What I love about Bayesian statistics is the way I can get almost whatever results I want by changing the prior

    • @very-normal
      @very-normal 1 month ago

      lol i wish that’s how it worked

  • @iamjinse
    @iamjinse 9 days ago

    Very interesting video. Can you suggest some books? I really want to learn about Bayesian statistics.

    • @very-normal
      @very-normal 9 days ago +1

      Thanks! A more technical textbook would be Bayesian Data Analysis by Andrew Gelman (and others), while a lighter intro could be Bayesian Statistics The Fun Way by Will Kurt

    • @iamjinse
      @iamjinse 9 days ago

      @@very-normal Thank you so much. I am new to Bayesian statistics. Shall I start with the second one?

  • @ufuoma833
    @ufuoma833 1 month ago

    What a time to be alive!

    • @very-normal
      @very-normal 1 month ago +1

      👀 two minute papers watcher?

    • @nyx211
      @nyx211 1 month ago

      Ice cream for my eyes!

  • @Megasteel32
    @Megasteel32 1 month ago

    lmao I'm taking the required entry-level stats/prob class for my comp sci major and this was all our last test was about. Good to know that me being super confused was normal.

  • @provocateach
    @provocateach 1 month ago +1

    Likelihoodists: are we a joke to you?

  • @zegpi1821
    @zegpi1821 1 month ago

    How do you read the notation at 7:14? You have the estimate through MLE, the true parameter over an arrow, and the parameter set.

    • @very-normal
      @very-normal 1 month ago

      It reads as “the MLE will approach the true parameter value as the sample size tends to infinity”. The “p” denotes convergence in probability, and the theta on the right is the true parameter. I slightly mixed up my notation throughout the video, but that’s what I intended here.
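That convergence-in-probability claim is easy to see by simulation; a small sketch for a Bernoulli parameter (the true value 0.3 is invented for illustration):

```python
import random

random.seed(1)
true_theta = 0.3   # true Bernoulli parameter (illustrative)

def mle(n):
    # For Bernoulli data the MLE is just the sample proportion of successes.
    return sum(random.random() < true_theta for _ in range(n)) / n

# The estimation error |theta_hat - theta| typically shrinks as n grows,
# which is the consistency property the reply describes.
errors = [abs(mle(n) - true_theta) for n in (100, 10_000, 1_000_000)]
print([round(e, 4) for e in errors])
```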

    • @zegpi1821
      @zegpi1821 1 month ago

      @@very-normal First time encountering "p" as convergence

  • @lucianozaffaina9853
    @lucianozaffaina9853 1 month ago

    Can you recommend a good book for learning Bayesian statistics? I am a master's degree student in statistics, and I have done an exam in Bayesian statistics and other exams on Markov chains and computational statistics.

  • @strayorion2031
    @strayorion2031 1 month ago

    I still don't quite follow. Does the prior significantly affect the results? If so, what qualifies as a good guess: is it how close it is to the posterior distribution? Also, what kind of data is good to use in Bayesian statistics? I often work with rats, so data hardly achieves double digits, but its quality is really good since there is little noise in it; is that type of data fit for Bayesian methods? Are there "equivalents" of frequentist statistical tests in Bayesian statistics, like tests that do the same job as a Student's t or ANOVA? I have a lot of questions, and they are kind of hard to answer by reading, because in my bachelor's degree I only received basic training in biostatistics.

    • @very-normal
      @very-normal  Місяць тому +1

Hopefully, I can clarify a bit.
Choosing a prior is hard, but in many practical applications people opt for an uninformative prior. In your case, with such a small dataset, the prior will have more influence over the posterior, so you'd want to be careful not to choose one that is too opinionated. The definition of a good guess will vary a lot based on the research context, but usually it's derived from past research; in other words, the parameters of a prior distribution can be chosen so that it resembles the results of past work.
It's not that certain types of data are good with Bayesian statistics. Given a particular research question and data, there will most likely be a Bayesian model that can be made to answer that question. I do mostly applied work, so it's hard to answer this question without specific information about what kind of question you're trying to answer. At the end of the day, with Bayesian statistics, you're going to use the posterior distribution to characterize some parameter of interest about these rats (e.g. body weight, proportion of rats with symptoms, size of litter). The posterior distribution tells you which range of values is most likely for this parameter and what the most likely value is.
With frequentist hypothesis tests, you get an estimate for what this parameter is, and you can reject or fail to reject the null hypothesis. It doesn't tell you directly about the parameter itself.
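To make the small-sample point concrete, here's a minimal conjugate Beta-Binomial sketch (the counts and both priors are made up for illustration, not from the video): with only 9 observations, an opinionated prior visibly pulls the posterior mean.

```python
def beta_binomial_posterior(a, b, successes, trials):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta posterior."""
    return a + successes, b + trials - successes

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical small dataset: 7 of 9 animals respond.
successes, trials = 7, 9

# Uninformative prior: Beta(1, 1), i.e. uniform on [0, 1].
a1, b1 = beta_binomial_posterior(1, 1, successes, trials)

# Opinionated prior with mean 0.2: Beta(2, 8).
a2, b2 = beta_binomial_posterior(2, 8, successes, trials)

print(round(beta_mean(a1, b1), 3))  # 0.727 under the flat prior
print(round(beta_mean(a2, b2), 3))  # 0.474, pulled toward the prior's 0.2
```

With hundreds of observations instead of 9, the two posterior means would nearly coincide, which is why prior choice matters most in small studies.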

    • @strayorion2031
      @strayorion2031 Місяць тому +1

@@very-normal thanks, I think I'm starting to understand. Basically frequentist statistics and Bayesian statistics ask different questions about the data, and based on the interpretation of the answer you make a decision; it's just that I'm still unfamiliar with Bayesian "questions" and "answers". Your channel caught my interest and I will keep researching Bayesian statistics. Thank you

  • @TheThreatenedSwan
    @TheThreatenedSwan Місяць тому +1

    Updating my priors right now 🤖

  • @varbias
    @varbias Місяць тому +1

    TRIGGERED! Frequentist for life 😤 nice video though, as usual

  • @durg8909
    @durg8909 7 днів тому

    Aspiring Biostatistician here, my target school has some current PhD students specializing in Bayesian inference so I came here to learn what I might be in for. A part of me worries that specializing in Bayesian techniques could downsize my potential job pool, is there any validity to this concern?

    • @very-normal
      @very-normal  7 днів тому +1

      Hi! I assume you’re doing an MS or PhD in biostat, so I’ll answer from this perspective. Let me know if I’m off the mark
      In your coursework, you will most likely be trained in frequentist methods. If you’re lucky, you’ll get some exposure to Bayesian methods but I think it’ll be unlikely you’ll use it much. Therefore, your coursework will help you cover your bases for basic skills expected of a statistician. I’m of the opinion that learning how to do Bayesian analyses will help expand your opportunities, since you simply have more tools/skills.
      I suppose if you were a PhD student specializing in esoteric Bayesian methods, you might have trouble finding positions where you apply those methods, but that’s a general PhD problem.

    • @durg8909
      @durg8909 7 днів тому

      @@very-normalThanks for the speedy reply! That’s a fair point about the general PhD problem, sadly. It sounds like Bayesian inference could be a tool in the belt that may or may not be used in the workplace, but it’s definitely something I want to study. Thanks for the awesome video man!

  • @me5ng3
    @me5ng3 Місяць тому +3

I had to take Bayesian statistics for my machine learning classes. I didn't know that they aren't taught as much in other places, since in Germany they're fairly popular. They're even taught in high school.

    • @PR-cj8pd
      @PR-cj8pd Місяць тому +4

      No, everyone will see Bayesian probability, not Bayesian statistics

    • @huhuboss8274
      @huhuboss8274 5 днів тому +1

I am from Germany too and I don't think Bayesian statistics is taught in high school. Maybe you're confusing it with Bayes' theorem, which also has a frequentist interpretation?

  • @oscarlacueva
    @oscarlacueva Місяць тому

Is the Bayesian approach just a generalization of the frequentist approach, where the conclusions of the two methodologies match if you choose a completely uninformative prior?

    • @very-normal
      @very-normal  Місяць тому

      I think you’re mostly right, but I’d change the phrasing. It’s not that the Bayesian approach is a “generalization” of the frequentist approach, but that these two approaches do come to the same conclusion under that prior

  • @someonespotatohmm9513
    @someonespotatohmm9513 Місяць тому +2

I am surprised the statisticians don't use Bayesian methods more often. They are the basis for a lot of estimation algorithms. I want to say that it requires too much modeling for the statisticians and too much statistics for the engineers :P.

  • @bokehbeauty
    @bokehbeauty Місяць тому +1

This video could be a nice kick-off to a series where you explain each of the layers and the related methods. As is, it is too packed for me. The assumptions and limitations of the methods would need explanation, as in my experience this is where even profs get sloppy.

  • @waylonbarrett3456
    @waylonbarrett3456 Місяць тому

    Isn't it interesting that P(A|B) is directly proportional to P(B|A)? This seems to imply that when one finds a sequence running one direction, they are likely to also find it running in the opposite direction. Everything is an oscillator.

  • @dullyvampir83
    @dullyvampir83 22 дні тому

Am I correct that frequentists just set P(θ) = 1, which probably also makes the integral easy?

    • @very-normal
      @very-normal  22 дні тому

      I’m not sure, I’ve never heard of that before. I do know that Bayesian analyses agree with Frequentist analysis when you use uniform priors, since it essentially boils down to maximum likelihood.
      But I don’t think they can assign a probability to the parameter since it’s viewed as just a number, not random
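A tiny grid sketch of that agreement under a uniform prior, with made-up binomial counts: since a flat prior makes the posterior proportional to the likelihood, the posterior mode lands exactly on the MLE.

```python
# Made-up data: 6 successes in 10 trials.
successes, trials = 6, 10

def likelihood(p):
    """Binomial likelihood, up to a constant."""
    return p ** successes * (1 - p) ** (trials - successes)

# Flat prior over a grid of candidate values: posterior ∝ likelihood,
# so maximizing the posterior is the same as maximizing the likelihood.
grid = [i / 10_000 for i in range(10_001)]
posterior_mode = max(grid, key=likelihood)
mle = successes / trials

print(posterior_mode, mle)  # both 0.6
```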

    • @dullyvampir83
      @dullyvampir83 22 дні тому

@@very-normal Thanks for the answer. I made a mistake. What I meant to say was: shouldn't they agree if we set the prior to dirac_delta(x-θ)? Wouldn't that express the conviction that θ is simply a number? Then the numerator would equal the denominator and the posterior would be 1, so not worth investigating further.

  • @thesoundofscience
    @thesoundofscience 5 днів тому

    I clearly missed something ... if P(D) is just a number, and we know that the posterior must be normalized over some range of theta, then isn't P(D) just the normalization constant?

    • @very-normal
      @very-normal  5 днів тому

      It is! We know that it’s a number, but this integral is usually difficult to calculate
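A numerical sketch of that point with a made-up binomial example: in one dimension, P(D) = ∫ P(D|θ)P(θ) dθ can be approximated on a grid and checked against the known Beta-function answer. It's in higher dimensions that this integral becomes the hard part.

```python
import math

# Made-up data: 6 successes in 10 trials, flat prior on [0, 1].
successes, trials = 6, 10

def unnormalized_posterior(theta):
    """Likelihood times a flat prior (prior density 1 on [0, 1])."""
    return theta ** successes * (1 - theta) ** (trials - successes)

# Trapezoid rule; the endpoint terms vanish since the integrand is 0 there.
n = 100_000
h = 1.0 / n
p_data = h * sum(unnormalized_posterior(i * h) for i in range(1, n))

# Closed form for comparison: the Beta function B(7, 5) = 6! * 4! / 11!
exact = math.factorial(6) * math.factorial(4) / math.factorial(11)
print(p_data, exact)
```

Dividing the unnormalized posterior by this number makes it integrate to 1, which is exactly the normalization role P(D) plays.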

  • @KevinBalch-dt8ot
    @KevinBalch-dt8ot 20 днів тому

    I think Reverend Bayes came a little before 1963. Still a great video!

  • @austenmoore7326
    @austenmoore7326 19 днів тому

Bayesian stats is cool in theory, but I've never found anything on how to deal with confounders with it. Do they just deal with univariate data?

    • @very-normal
      @very-normal  19 днів тому

      Theoretically, you can handle confounders through your design and just use a Bayesian analysis, but I’m not sure about the observational setting. I know Bayesian causal methods exist but I haven’t used them myself

  • @jayceh
    @jayceh Місяць тому

As head of experimentation at a tech company, we have to disabuse people of frequentist-only interpretations of A/B test results
Turns out once you care about being right and predictive, rather than academically accurate, you have to become Bayesian

    • @very-normal
      @very-normal  Місяць тому

      Is this head of experimentation hiring 👀
      (jk bayesians rise up tho)

    • @jayceh
      @jayceh Місяць тому

      @@very-normal in our early days we had some frequentist disasters from the product management team
      Wasted a lot of time making things blink because one blinky element accidentally showed significant results
      Thankfully we've come a long way since then (thanks Thomas Bayes!)

  • @tracywilliams7929
    @tracywilliams7929 5 днів тому

    I wonder if Marilyn vos Savant used Bayesian statistics for her infamous solution to the "Monty Hall Problem"? People who disagreed with her solution were probably frequentists. That would include me.

    • @very-normal
      @very-normal  5 днів тому

      So you would choose to not switch doors?
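For anyone who'd rather check than argue, a quick simulation of the standard three-door Monty Hall setup (this is a generic sketch, not anything from the video):

```python
import random

def monty_hall(switch, n_trials=100_000, seed=0):
    """Return the empirical win rate for staying or switching."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a door that hides a goat and isn't the player's pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / n_trials

print(monty_hall(switch=True))   # close to 2/3
print(monty_hall(switch=False))  # close to 1/3
```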

  • @wp9860
    @wp9860 Місяць тому

The fastest way to grasp Bayes' rule is graphically; the mathematical expression falls naturally out of that. Every time I want to derive Bayes' rule (I don't use it frequently), I illustrate it graphically. This really sets in the intuitive understanding of Bayes' rule. The symbolic derivation does prove Bayes, but symbol manipulation imparts a formal and rather sterile understanding. I really fault this tutorial for not introducing Bayes' rule with a literal picture of how it works.

    • @very-normal
      @very-normal  Місяць тому

      There were literal graphs but okay

    • @wp9860
      @wp9860 Місяць тому

@@very-normal I'm talking set theory: Venn diagrams, or diagrams of dots representing A, B, and A intersection B. Open the discussion with such images. Motivate the equation that way. Only then move on to symbolic math. I also don't mean illustrations of distributions, e.g., prior, posterior. Those have their place. My focus is the very start. Pedagogically, a graphic illustration is the best introductory device. A picture is worth a thousand words, as the saying goes ... BTW, is it a theorem? Many refer to it as a rule, it being basically an identity, not that it matters much.

    • @very-normal
      @very-normal  Місяць тому

      That’s fair. To your point, I originally had a Venn diagram for the first section, but it took too much time for me to explain to my liking, so it was cut in editing.

  • @christopherellis2663
    @christopherellis2663 21 день тому

    Portable Document Format. After ten years, I still have no idea of how to do this. I suppose that I need to take a class. Unless I have no real use for it

  • @dangernoodle2868
    @dangernoodle2868 Місяць тому

    Very normal is the chad bayesian doge

  • @michelebensa
    @michelebensa Місяць тому

    Any book on that?

    • @very-normal
      @very-normal  Місяць тому

      Bayesian Data Analysis by Gelman is usually what I use to refresh myself on Bayesian concepts!

    • @michelebensa
      @michelebensa Місяць тому

      @@very-normal thanks a lot!

  • @KennethKamhw
    @KennethKamhw Місяць тому

Bayesian vs frequentist is kinda like Newtonian vs Hamiltonian

  • @charleslynch340
    @charleslynch340 25 днів тому

    Respect my probabilitah

  • @jasondads9509
    @jasondads9509 Місяць тому +1

    I'm now more confused.

  • @elijahmansur5710
    @elijahmansur5710 Місяць тому

    My AI class is teaching me about Bayesian statistics

  • @rajibkudas123
    @rajibkudas123 Місяць тому

After making it confusing enough, it's now your duty to simplify it with a few examples

  • @cheerymoya
    @cheerymoya Місяць тому +1

You lost the opportunity to say "this is the probability of someone watching this video given that they're subscribed, and according to YouTube statistics only a small percentage of people who watch my videos are actually subscribed" and then show your YouTube statistics

    • @very-normal
      @very-normal  Місяць тому +1

      aw man you’re right, that didn’t even occur to me. Thanks for reminding me that’s something I can do lol

  • @gustavomajano5148
    @gustavomajano5148 Місяць тому

    Frequentists say that the approximate posterior is always in the "wrong place" and Bayesians says that frequentists can't give real confidence intervals... both are right, but only one is wrong 🤣

  • @tylerduncan5908
    @tylerduncan5908 Місяць тому

    Was not expecting the copium pepe in a video about Bayesian statistics. Lmaoo

    • @very-normal
      @very-normal  Місяць тому

      I’m huffed up on copium 24/7

  • @lucianozaffaina9853
    @lucianozaffaina9853 Місяць тому

In Bayesian statistics you can't test H0: u = u0 vs H1: u > u0. You are always going to accept H0 no matter what the data tell you

    • @very-normal
      @very-normal  Місяць тому

      Instead you can directly calculate the probability that u > u0, if it’s high enough then awesome. With null hypothesis testing, you don’t know anything about u0 in the end
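A small sketch of that calculation under simplifying assumptions (normal data with a known standard deviation and a flat prior; all numbers are made up): the posterior for u is then normal, and P(u > u0) is just the fraction of posterior draws above u0.

```python
import random

rng = random.Random(42)
data = [10.4, 11.1, 9.8, 10.9, 11.3, 10.7]  # made-up measurements
u0 = 10.0     # hypothesized threshold
sigma = 0.7   # assumed-known standard deviation

# With a flat prior, the posterior of the mean u is
# Normal(sample mean, sigma^2 / n).
n = len(data)
post_mean = sum(data) / n
post_sd = sigma / n ** 0.5

# Monte Carlo estimate of the posterior probability that u > u0.
draws = [rng.gauss(post_mean, post_sd) for _ in range(100_000)]
prob_gt = sum(d > u0 for d in draws) / len(draws)
print(round(prob_gt, 3))
```

The printed number is a direct answer to "how plausible is u > u0 given the data", which a p-value never provides.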

    • @lucianozaffaina9853
      @lucianozaffaina9853 Місяць тому

      @@very-normal you're right , I only wanted to cite that paradox

  • @user-nb6el9te7m
    @user-nb6el9te7m Місяць тому

The inference at 4:30 is very wrong. It is not an independent term, so you cannot increase it alone without affecting the other terms. Also, it should not be that you need current subscribers to watch your videos more, but that you need more of your watchers to come from your subscriber pool, since neither the total number of watchers nor the number of subscribers is constant.

    • @very-normal
      @very-normal  Місяць тому

      Not my best example, thanks for your point

  • @jakubkopczynski779
    @jakubkopczynski779 28 днів тому

    Frqusitin Stattstitcs

  • @prod.kashkari3075
    @prod.kashkari3075 Місяць тому

We know you're a Bayesian now from that title 😂

  • @goodfortunetoyou
    @goodfortunetoyou Місяць тому

    Nerd drama: The second best kind of drama.