I've been hearing about Bayesian thinking for the last 6 months and have watched multiple videos about it. I never understood priors, likelihood, and posterior from a real-life perspective, but YouTube recommended this hidden gem to me. I'm glad I came across your channel :)
Your work is opening doors for me. Thank you!
So glad!
Wow, I never understood how Bayesian thinking revolved around updating a prior belief. It was always so obscure to me, trying to reason about three distributions (prior, likelihood, posterior). But your one sentence, "prior beliefs get updated w/ new data," really puts it into perspective. I had no idea the posterior was the updated prior. Thank you for such a great video!
Great stuff. I've been trying to understand the Frequentist vs. Bayes reasoning for a long time, and now I get it. Thanks so much.
Great to hear!
Zero fluff and exceptional clarity. Updating my prior belief that I understood Bayesian thinking. Thank you Sir!
nicely put
God, every minute of this video was heavy. It took me around an hour to understand this 20-minute video, but thank you, you have literally made this concept so clear to me.
So nicely explained! I really like how you go beyond the formulas, explaining the concepts with clear examples.
This is the easiest, and thus the best, introduction to Bayesian statistics I ever came across. It moves from frequentist probability to conditional probability, then to Bayesian probability in a concise manner. Thanks for this one and the others...
Thanks!
Excellent work!!! You've made the content SO EASY to understand!
I really don't know where I'd be without ritvikmath to explain these complex concepts in statistics. Thank you for an amazing video. This is certainly one of the best videos on Bayesian statistics on YouTube.
Happy to help!
Excellent explanation. Thank you so much for your hard work. I'm watching your vids just for entertainment after work :)
The explanation of this concept is often presented in a dry manner with formulas, while some use engaging and intricate animations to explain it. However, you excel in your ability to intuitively convey how we should understand it. This approach is highly effective as it links science directly to our lives. I particularly appreciate your style.
Hey ritvikmath
This is pure genius - literally the best video I have seen explaining the basics of Bayesian statistics. I really understand it now, and as I am writing my thesis about predicting carbon prices with Bayesian stats (and the new shrinkTVP package in R), this really helps me a lot.
One thing that tripped me up a bit is between 9:30-10:00, where you talk about dividing 15/170 and I thought: isn't the probability that I hear the noise 20/170? After rewatching a few times I know what you mean - just something that might make it even better than it already is. Thanks for providing real value!
What a great explanation of Bayesian reasoning. I have a test on Bayesian inference and this video helped me get more of a handle on the topic. Thanks for the kind introduction to this subject, mate!
Cheers from Brazil!
Just a piece of advice, mate: every time you make a video using the whiteboard, give us a few seconds at the end to take a screenshot of it. It helps a lot with note-taking. But, again, nicely done! Thanks for the amazing class!
I was lost in the middle but got it by the end, lovely! Good job.
The best intro for understanding the fundamentals of Bayesian statistics, thank you.
This channel needs more subs, more likes and more views.
How is Bayesian Stats being explained here better than my undergrad prof did live? HOW?!
Good job Ritvik. I have seen a few other explanations in the past, but I will recommend your video from now on :)
Been hearing about it and asked around many times, but I still didn't get it. Thank you for finally elucidating this concept in a clear manner!
Never have I ever thought of bayes theorem in this way. This video has changed my thinking. Thank you
I was waiting for this video, please do a series about Bayesian statistics and explain how we can actually do it, estimating parameters....!!! 🙏🙏🙏
Thanks for the suggestion!
@@ritvikmath Yes, please do this if you have time.
Agreed, that would be sooo so very helpful - your other videos make it a lot easier to understand. thank you!!
I agree. Please do that. 🙏🙏🙏
this is, seriously, top notch stuff ❤️❤️
would love to have more Bayesian topics ...
how does the Markov chain Monte Carlo algorithm work? Gibbs sampling? all those Bayesian concepts
Excellent! Now I've finally got it, thanks to you. Congratulations!
Another great video. Thanks a lot, you have no idea how much you have helped me.
Very well explained. Bayesian statistics is always confusing to grasp, but this video made it clear. People without a lot of statistics background can also follow it very comfortably.
Glad it was helpful!
I used the Naive Bayes classifier in my final class project last semester. You explained Bayesian stats nicely. Keep posting good content 👍.
Thanks, will do!
Nice! I was asking my professor exactly this kind of question this morning: how can I differentiate between conditional probability and Bayes' theorem, and when should I use which? This video breaks down very well the problems you need to understand to notice the difference. By the way, your videos keep getting better, and you've started to speak more calmly and are a bit easier to understand.
Thank you!
Thanks for the kind words!
this is insane
your videos are so valuable
glad you think so!
You made it so clear and easy to understand, thank you very much.
You are a really good educator. I'm excited to watch other videos in your channel.
Awesome explanation! Keep up the excellent work 👏👏
Thanks, will do!
hands down one of the best explanations!
I still remember that I took Bayesian Statistics in college, and it was one of my favorite classes!
Thank you for this explanation. I like your teaching style.
Glad it was helpful!
Wow! That was an incredibly skillful and clear explanation of Bayesian stats. I had real trouble understanding it for the past year even after applying it in my work. This really spelled out what I was actually trying to do. Thank you!
People like you are the reason why I pay my internet bill ❤
This just cleared most of the doubts on the fundamentals I had during my undergrad days. Very intuitive and very helpful!
Glad it was helpful!
A very good video. A suggestion: it would be better to use different numbers in the example to avoid confusion. For example, in your first table the "15" appears twice, once at (B,N) and again at (S,~N). If you instead said (S,~N)=20, it would again be confusing, since when you add things up the total for N is also 20 (possible confusion). If (S,~N)=16, all the numbers would be different and the example would limit possible confusion. I got confused at the beginning, so I thought you might find this observation useful.
Good suggestion! Thank you
Great work. Really like your videos!
Thank You.
Awesome video. Finally understand. Thanks so much for your help!
thank you for explaining this so well!
I wish you had taken even longer to explain these concepts; the cuts in the video are a little distracting.
Thank you very much for clearly explaining the concept.
You are welcome!
I love how you explain the concepts. That's very clear and helpful. Could you please do some deep learning (CNN, RNN, etc.) videos? And also for ML, could you talk about the specific validation methods that we can apply to time series? Thank you!!!!
Great suggestion!
Hi, I am also working on time series forecasting (a very interesting area for me)... If @ritvikmath makes videos on it, it will be even more interesting.
Absolutely love the content
Amazing explanation. Thank you so much.
Wow, amazing elucidation. Thanks a lot.
This is really good. Please do a complete series on forecasting and time series, including autoregression, moving averages, ARMA models, nonlinear models, GARCH/ARCH, cointegration, etc.
You're a great resource. Thanks very much
What a perfect video! Thank you!
This is the 4th video I’ve watched trying to understand what Bayesian stats are… and it’s the first one that makes sense….
Dude! You are good! Really good.
Regarding your final point, what's wrong with just having a weak prior? The data you collect will then drive the posterior and as you collect more data, the priors will become less and less relevant.
100% right: as you collect more data, the updating mechanism of Bayes' theorem gives more and more weight to the incoming data, rendering the prior belief less and less impactful on your decision.
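A minimal sketch of that washout effect, assuming a uniform Beta(1, 1) prior and a made-up 10% event rate (neither number comes from the video); the conjugate Beta-Binomial update makes the point concrete:

```python
# Weak prior getting washed out by data: Beta(1, 1) prior (uniform over [0, 1])
# updated with binomial observations via the conjugate Beta-Binomial rule.

def beta_posterior(alpha, beta, successes, failures):
    """Beta(alpha, beta) prior + binomial data -> Beta posterior parameters."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

alpha0, beta0 = 1.0, 1.0          # weak prior, prior mean 0.5
true_rate = 0.10                  # hypothetical "true" event rate

for n in (10, 100, 1000):
    successes = round(true_rate * n)            # stylized data: ~10% successes
    a, b = beta_posterior(alpha0, beta0, successes, n - successes)
    print(n, round(beta_mean(a, b), 3))         # 0.167, 0.108, 0.101: drifts toward 0.1
```

With a strongly opinionated prior such as Beta(90, 10), the early posterior means would instead sit near 0.9 until the data volume overwhelms it, which is the trade-off the two comments above describe.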
Great explanation!
great video - very well explained
Glad you liked it!
Thank you for the best explanation. :)
Wish I had discovered this BEFORE finals week!
Nicely explained concepts, formulas, and examples. Can you please make a video on hierarchical time series and multiple time series forecasting?
thank you, really good video, easy to understand
So well explained!
4:09 In approach 1, why do you calculate the frequentist probability of the phone being in the bedroom upon hearing the cell phone noise as 15/150? Why should the denominator be 150, since 150 is the sum of both kinds of noises?
In 135 of those 150 times it was in the bedroom, you heard some other noise. So in approach 1, asking for P(N|B), you're asking for the probability of it being in the bedroom after only the cell phone noise, not some other noise. So how are the 135 instances of some other noise relevant? Why isn't the relevant stat for "in the bedroom given the cell phone noise" 15/20 even in approach 1?
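For anyone stuck on the same point, here is a small sketch that separates the two conditionals, assuming the counts implied elsewhere in this thread (bedroom: 15 cell-phone-noise trials out of 150; study: 5 out of 20; 170 trials in total); the numbers are reconstructed from the comments, not copied from the video:

```python
# Two different conditionals from the same 2x2 table of counts.
counts = {
    ("bedroom", "noise"): 15, ("bedroom", "other"): 135,
    ("study",   "noise"): 5,  ("study",   "other"): 15,
}

bedroom_total = counts[("bedroom", "noise")] + counts[("bedroom", "other")]  # 150
noise_total   = counts[("bedroom", "noise")] + counts[("study",   "noise")]  # 20

# Approach 1 conditions on the room: "given the phone is in the bedroom,
# how often do I hear the cell phone noise?"  Denominator = all 150 bedroom trials.
p_noise_given_bedroom = counts[("bedroom", "noise")] / bedroom_total         # 15/150 = 0.10

# The commenter's quantity conditions on the noise instead: "given I hear the
# cell phone noise, how often is the phone in the bedroom?"  Denominator = 20.
p_bedroom_given_noise = counts[("bedroom", "noise")] / noise_total           # 15/20 = 0.75

print(p_noise_given_bedroom, p_bedroom_given_noise)
```

The 135 other-noise cases matter for the first quantity because they are bedroom trials where the cell phone noise did not happen; they drop out of the second quantity, which only looks at the 20 noise trials. Approach 1 is computing P(N|B), so 15/150 is the right ratio there, while 15/20 answers the reversed question P(B|N).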
sick flip!
I watched this 3 times. Finally, I think I got it !
I'm glad!
@@ritvikmath Many thanks !
Could approach 1 be regarded as a kind of frequentist method?
Man you are the best
I am currently doing a seminar on Data Science for Risk Evaluation in Banking. And I am trying to apply this to calculating the Default Probability. Hope I got it right.
Do you have any social media. Your videos on Bayesian statistics are amazing, and would love to share them with my network
Might be a silly question, but why doesn't the result in approach 2 equal the final result? For example, P(B|N) is 75% in approach 2, but after you expand it, it equals 8.8%?
Because he doesn't divide by P(N)
Also because 0.88 is a rounded value; it is actually 0.8823…
Well done
Yes I got a good understanding
To anyone wondering why the Bayesian approach didn't match the "Approach 2" probabilities: it's because P(B) = 150/170 is not 0.88 but 0.882352… If you calculate precisely, the 1/3 proportion remains and you recover exactly 75%.
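An exact-arithmetic check of that rounding point, again assuming the counts reconstructed from this thread (150 of 170 trials in the bedroom, noise in 15 of those; noise in 5 of the 20 study trials):

```python
# Redo the Bayesian update with exact fractions instead of rounded decimals.
from fractions import Fraction

p_B         = Fraction(150, 170)   # prior: phone in bedroom = 0.882352..., not 0.88
p_S         = Fraction(20, 170)    # prior: phone in study
p_N_given_B = Fraction(15, 150)    # likelihood of the noise if it is in the bedroom
p_N_given_S = Fraction(5, 20)      # likelihood of the noise if it is in the study

p_N = p_B * p_N_given_B + p_S * p_N_given_S   # marginal P(N) = 20/170
posterior_B = p_B * p_N_given_B / p_N         # Bayes' rule

print(posterior_B)    # 3/4 exactly, matching the count-based 15/20 = 75%
print(float(p_B))     # 0.882352941..., which is what gets rounded to 0.88
```

So the "Bayesian update" and "Approach 2" agree exactly; the 8.8% figure discussed above is only the unnormalized numerator P(B)·P(N|B) = 15/170 before dividing by P(N).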
So, if the objective is to find the probability of whether to check the bedroom or the study, was approach 1 never the correct approach?
Ok, so I have one question:
Should we accept the probability *P(B|N)* from the "Bayesian update" (which is roughly 0.088 / (0.03 + 0.088) = 74.5%), or should we accept the probability stated with "Approach 2"? Or are they completely different things? The reason I point this out is that you used the same symbol for both quantities and did not explain the difference.
Very clear and very interesting, but now I'm thinking about the next natural step: how do we apply it in data science? It could be nice to see different implementations, like Bayesian optimization, to get an idea of its power.
Great suggestion!
The perfect video for understanding Bayesian stats, priors, likelihood... thanks bhai.... I usually never comment, but this thing was making me sick trying to understand it 🙂
Excellent
This was awesome!
Thank you.
So correct me if I'm wrong: Bayesian thinking is just taking the way you'd normally do conditional probability and doing the inverse of that? My question makes me think I didn't fully grasp the concept lol
awesomeeeee
You da best!!
@ritvikmath If you start a Patreon or even add donation options on YouTube, I'll gladly pay, and I'm sure many folks would do the same. These explanations are truly remarkable and they drive the point home so effortlessly. Thanks a lot for your contribution to the ML/DS community.
This example is somewhat confusing. If I know it's in the apartment, then I would just check the place where it's most likely based on P(B) vs P(S). Why would I go through the extra step of calling my phone?
This guy is genius.
I'm your biggest fan in Shanghai!!
Are you sure? I watch his videos more, and I'm in Guangzhou. :)
Well, a big thanks to both of you haha :)
@@ritvikmath you are welcome and keep the great work going!!!
Thanks Ritvik.
Rajavel KS
Bengaluru
Very good
Thanks
In summary: given there is noise, the phone being in the bedroom is 3.00x more likely than the phone being in the study. Given there is no noise, the phone being in the bedroom is 9.00x more likely than the phone being in the study. The phone being in the bedroom is 1.20x more likely given there is no noise than given there is noise. The phone being in the study is 2.50x more likely given there is noise than given there is no noise. There is noise given the phone being in the study is 2.50x more likely than there is noise given the phone being in the bedroom. There is no noise given the phone being in the bedroom is 1.20x more likely than there is no noise given the phone being in the study.
Let’s say the prevalence or prior probability of the phone being in the bedroom is 88.24% (odds of 7.50x or chances of 100 for every 113), and of the phone being in the study is 11.76% (0.13x or 100 for every 850), whether or not there is noise. In a world where the phone is in the bedroom, let’s say there is noise 10.00% of the time (0.11x or 100 for every 1000) and no noise 90.00% of the time (9.00x or 100 for every 111). In a world where the phone is in the study, let’s say there is noise 25.00% of the time (0.33x or 100 for every 400) and no noise 75.00% of the time (3.00x or 100 for every 133). Thus, there being noise is 0.40x as likely when the phone is in the bedroom as when it is in the study. Also, there being no noise is 1.20x as likely when the phone is in the bedroom as when it is in the study. We know this ratio as the Likelihood Ratio, Risk Ratio, or Bayes Factor.
The prevalence of there is noise, or there is no noise, regardless of the phone being in the bedroom or the phone being in the study, is 11.76% (0.13x or 100 for every 850), and 88.24% (7.50x or 100 for every 113), respectively. Therefore, which is more likely? In a world of there is noise, the posterior probability of the phone being in the bedroom is 75.00% (3.00x or 100 for every 133), and the phone being in the study is 25.00% (0.33x or 100 for every 400). In a world of there is no noise, the posterior probability of the phone being in the bedroom is 90.00% (9.00x or 100 for every 111), and the phone being in the study is 10.00% (0.11x or 100 for every 1000). The probability of the phone being in the bedroom, and there is no noise is 79.41% (3.86x or 100 for every 126). The probability of the phone being in the study, and there is noise is 2.94% (0.03x or 100 for every 3400). The probability of the phone being in the study, and there is no noise is 8.82% (0.10x or 100 for every 1133).
Sensitivity analysis:
What would the prevalence or prior probabilities for the phone being in the bedroom, and the phone being in the study, whether or not there is noise, need to be such that in a world where the phone being in the bedroom given there is noise, and the phone being in the study given there is noise, that both these posterior probabilities are equally likely? In other words, we’d be indifferent? The prevalence of the phone being in the bedroom would need to be 71.43% (2.50x or 100 for every 140), and the phone being in the study would need to be 28.57% (0.40x or 100 for every 350), all else being equal. Similarly, what would the prevalence or prior probabilities for the phone being in the bedroom, and the phone being in the study, whether or not there is no noise, need to be such that in a world where the phone being in the bedroom given there is no noise, and the phone being in the study given there is no noise, that both these posterior probabilities are equally likely? In other words, we’d be indifferent? The prevalence of the phone being in the bedroom would need to be 45.45% (0.83x or 100 for every 220), and the phone being in the study would need to be 54.55% (1.20x or 100 for every 183), all else being equal.
What would the consequent probabilities or likelihoods for there is noise given the phone being in the bedroom, and there is no noise given the phone being in the bedroom, need to be such that in a world where the phone being in the bedroom given there is noise, and the phone being in the study given there is noise, that both these posterior probabilities are equally likely? In other words, we’d be indifferent? The likelihood of there is noise given the phone being in the bedroom would need to be 3.33% (0.03x or 100 for every 3000), and there is no noise given the phone being in the bedroom would need to be 96.67% (29.00x or 100 for every 103), all else being equal. Similarly, what would the consequent probabilities or likelihoods for there is noise given the phone being in the study, and there is no noise given the phone being in the study, need to be such that in a world where the phone being in the bedroom given there is noise, and the phone being in the study given there is noise, that both these posterior probabilities are equally likely? In other words, we’d be indifferent? The likelihood of there is noise given the phone being in the study would need to be 75.00% (3.00x or 100 for every 133), and there is no noise given the phone being in the study would need to be 25.00% (0.33x or 100 for every 400), all else being equal.
What would the consequent probabilities or likelihoods for there is noise given the phone being in the bedroom, and there is no noise given the phone being in the bedroom, need to be such that in a world where the phone being in the bedroom given there is no noise, and the phone being in the study given there is no noise, that both these posterior probabilities are equally likely? In other words, we’d be indifferent? The likelihood of there is noise given the phone being in the bedroom would need to be 90.00% (9.00x or 100 for every 111), and there is no noise given the phone being in the bedroom would need to be 10.00% (0.11x or 100 for every 1000), all else being equal.
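For anyone who wants to reproduce a few of the figures in the summary above, here is a minimal sketch of the odds form of Bayes' rule, using the same counts assumed earlier in the thread (a 150/20 split between the rooms, noise rates of 15/150 and 5/20); the variable names are illustrative, not from the video:

```python
# Posterior odds = prior odds * likelihood ratio (Bayes factor).
prior_odds_bedroom = (150 / 170) / (20 / 170)      # 7.5, the "7.50x" prior odds above
bayes_factor_noise = (15 / 150) / (5 / 20)         # P(N|bedroom) / P(N|study) = 0.4

posterior_odds = prior_odds_bedroom * bayes_factor_noise    # 3.0
posterior_prob = posterior_odds / (1 + posterior_odds)      # 0.75, the "75.00%" above
print(posterior_odds, posterior_prob)

# Sensitivity / indifference point: hearing the noise leaves the two rooms
# equally likely when the prior odds equal 1 / Bayes factor.
indifference_odds  = 1 / bayes_factor_noise                 # 2.5
indifference_prior = indifference_odds / (1 + indifference_odds)
print(indifference_prior)                                   # 0.7142..., the "71.43%" above
```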
I'm so confused as to how he can calculate that P(B|N) = 15/20 = 0.75, whilst with Bayes' theorem he calculates that P(B|N) = 0.88/P(N). Wouldn't that give us P(B|N) = 0.88/20 = 0.04?? That's a totally different number from 0.75!
Vaov!
So Bayesian would work really well in cases where you have lots of data on past events that have already occurred, good to know! Now to learn how to apply it lol.
What's your insta or fb account?
I think Bayesian methods receive too much criticism. Any analysis can be run with a really uninformative prior distribution that doesn't bake in analyst bias.