What happens when you're 58 and you decide to (re)learn discrete math, logic and probabilities? You watch this series and have a fun ride. Liked and subbed: it's brilliant, lively, entertaining and a great (re)learning experience. Thank you so much.
the best part is how it goes a bit deeper by exploring what happens if you test positive twice (probability of disease given you test positive 2 times in a row)
that ish hit different
This global pandemic is the perfect time to learn this theorem
For sure, if there was ever a more perfect application it is hard to imagine
Yup. The number of arguments I have had with people who claim vaccinated and unvaccinated are both spreading covid equally... ignoring all the vaccinated who did not get infected in the first place and so were not in the studies....
First, by examining how the results were gamed by manipulating cycle thresholds and changing the criteria for a "positive" to include similar symptoms of any illness, the sudden "died with" as opposed to "died from" (most of whom had 4+ comorbidities) becomes quite shocking. The only remaining question is at what confidence level we can deem it a for-profit scam, with CEOs and board members of oversight approving their own profits. Whoops!
I agree with the majority of the comments. This was masterfully explained. I used to be a TA on discrete maths, probability and statistics and this felt like a breath of fresh air. Thanks a lot!
Beautiful wrapping up of the concept! "The whole point of Bayesian analysis is that as I get more information, I get to update the probabilities by which I believe events are going to occur."
Today you taught me something in 12 minutes which my teachers couldn't teach in 12 months!
Worth explicitly showing are the relationships of TP (True Positive), TN (True Negative), FP (False Positive), and FN (False Negative). These relationships are often glossed over, and people frequently mix them up, leading to wrong answers! True Positive and False Positive are NOT complements, nor are True Negative and False Negative. Instead, the TP/TN/FP/FN relationships are:
1. TP and FN are complements, so TP = 1 - FN and FN = 1 - TP
2. TN and FP are complements, so TN = 1 - FP and FP = 1 - TN
Thanks, I was confused about them.
Yes, and even worse, they claim a certain reliability but then increase and decrease cycle thresholds to make big numbers, then "prove" their product works after self-approving it through nepotistic relationships. ;)
I found this more intuitive:
TP + FN = Total Positive ==> TP = Total Positive - FN. (This was mentioned in the video: getting 90% from 10%.)
@@seyedhamidazimidokht3569 I still don't get why TP + FP = Total positive is not true
Like, you got tested positive, and it means that 5% you don’t have the disease right, and 95% you have it
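For the TP/FP confusion in this thread, here's a minimal numeric sketch in Python (using the rates quoted in these comments: 1% prevalence, 10% false negative rate, 5% false positive rate). TP and FN rates are fractions of the diseased group, while FP and TN rates are fractions of the healthy group, so TP + FP has no reason to equal 100%:

```python
# Hypothetical population of 10,000 people, using the rates quoted in the comments above.
N = 10_000
diseased = int(N * 0.01)   # 100 people actually have the disease (1% prevalence)
healthy = N - diseased     # 9,900 people do not

TP = 0.90 * diseased       # true positives: 90% of the diseased test positive
FN = 0.10 * diseased       # false negatives: the other 10% of the diseased
FP = 0.05 * healthy        # false positives: 5% of the healthy test positive
TN = 0.95 * healthy        # true negatives: the other 95% of the healthy

print(TP / diseased + FN / diseased)  # 1.0 -- complements within the diseased group
print(TN / healthy + FP / healthy)    # 1.0 -- complements within the healthy group
print(TP / diseased + FP / healthy)   # 0.95 -- NOT 1, different denominators
```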
Last year you saved my calculus course; this year you are saving my statistics course.
I have watched at least 10 other videos on Bayes. After watching yours I finally get it. Thanks, so much!
Glad it helped!
All the lessons about Bayes' Theorem are great. Thanks for explaining them in a simple and interesting way.
This is exceptionally well explained.
I have real trouble assigning the events. For example, "P(A|B) means have disease having tested positive, and P(B) is testing positive)". The breakdown has really helped wrap my mind around it.
Thank you!
Your enthusiasm for teaching math is simultaneously disturbing and infectious. Thanks for the work you do
I was really struggling with this theorem. Your video helped tons. Thanks a lot!
You're very welcome!
Best video I have watched to get an intuition for Bayes theorem. Thank you!
Sir... what power of explanation and confidence you have.
Thank you so much sir..
you have a great way of explaining things and this is random but you sound like ryan gosling
Sir, your explanation of the concepts is so clear that anyone can understand. Thank you so much.
Doctor you are the best. Thanks for breaking this down for me.
WONDERFULLY EXPLAINED CONTENT...I'm surprised this has so few views...
Well, he has a huge number of subscribers... so that makes sense
thanks!
I think you need more explanation going from the original formula to the expanded denominator, but it's a great example and helped me dearly. Thank you very much
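For anyone else stuck on that step, here is a short Python sketch (using the numbers quoted elsewhere in the comments: 1% prevalence, 90% sensitivity, 5% false positive rate) of how the denominator expands by the law of total probability, a positive test coming either from a sick person or from a healthy one:

```python
P_A = 0.01             # prior probability of having the disease
P_B_given_A = 0.90     # P(positive | disease), i.e. sensitivity
P_B_given_notA = 0.05  # P(positive | no disease), i.e. false positive rate

# Law of total probability: every positive test comes from one of the two groups.
P_B = P_B_given_A * P_A + P_B_given_notA * (1 - P_A)  # 0.009 + 0.0495 = 0.0585

# Bayes' theorem.
P_A_given_B = P_B_given_A * P_A / P_B
print(P_A_given_B)     # ~0.1538, i.e. about 15.4%
```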
Thank you so much for putting in the second scenario where you go through the test twice!
Bravo! gotta update my prostate-cancer probability!
Thanks, had only been given a week to understand this theorem and your videos really helped me understand it 👍
The importance of knowing your initial risk (and how it differs from the population incidence) can't be stressed enough.
When I see my doctor it is because something is wrong. The doctor looks at the presentation and effectively puts me in a sub-population with an elevated risk of various diseases - the results of relevant tests then update those risks until there is enough confidence to prescribe a treatment. (Well, that's the theory.) In practice the diagnosis involves the doctor's experience, training and judgement.
Bayes theorem allows that subjective judgement to be replaced or at least reinforced by calculation.
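To make that concrete, here is a small sketch; the 1% prior is the population incidence used in the video, while the 20% "symptomatic patient" prior is purely a made-up number for illustration:

```python
sensitivity, false_positive_rate = 0.90, 0.05  # the test characteristics quoted in the comments

for prior in (0.01, 0.20):  # population incidence vs. a hypothetical symptomatic sub-population
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    posterior = sensitivity * prior / p_positive
    print(f"prior {prior:.2f} -> posterior {posterior:.3f}")
# prior 0.01 -> posterior 0.154
# prior 0.20 -> posterior 0.818  (same test, very different conclusion)
```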
This principle has applications in information retrieval too. I was struggling to understand it but thanks to you I am out of the woods. Cheers mate
*Wow, excellently explained!! By the way, it's a little like a tongue twister!!*
Why am I thinking about corona tests rn?
And the word positive for it is haunting!
Illness, diseases, these are the examples to understand Bayes' Theorem :)
The example of repeating the test assumes that the two tests are uncorrelated (independent). It is often the case that when a medical test fails to give the correct result, it is for a reason and repeating the test may fail for the same reason.
That was also my concern.
Great! the best explanation I've ever heard
Fun to watch in COVID times. Case numbers being reported using lateral flow could be far off.
amazing explanation sir ! thanks a lot for this tutorial
superb method of teaching which everyone can easily understand.
thank you sir
Very well explained, it helped a lot. Thanks.
makes it seem like grade 6 content, so perfectly explained
Excellent explanation. Thank you sir
Doing the test twice is not necessarily independent events. What is really needed is the chance that someone who doesn't have the disease but had a false positive gets a second false positive.
Ideally the second test would be a different test for the same disease where the results are independent.
better than my lecture, moreee better, you are the best. Thanks for sharing, hope you stay well during this pandemic.
I arrive at the same answer but my "priors" have changed on the second test. It appears that you use the same prior of 1% on the second test for the probability of having the disease, notwithstanding the positive first test.
post-test odds = pre-test odds x likelihood ratio (LR) for a +'ve test,
where pre-test odds = 0.01/0.99 ≈ 0.0101 and LR is sensitivity/(1 - specificity).
So, post-test odds = 0.0101 x 0.9/0.05
= 0.181818
probability = odds/(1 + odds)
= 0.181818/1.181818
= 15.38%.
For a second test, the pre-test odds are no longer based on 1%, but are 0.181818:
post-2nd-test odds = 0.181818 x LR for a positive test (which has not changed)
= 0.181818 x 0.9/0.05
= 3.27
probability = 3.27/4.27
= 76.6%
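A quick check of this odds-form calculation in Python (same assumptions: 1% prior, sensitivity 0.9, specificity 0.95, and independent tests):

```python
prior = 0.01
sensitivity, false_positive_rate = 0.90, 0.05
LR = sensitivity / false_positive_rate  # likelihood ratio for a positive test = 18

odds = prior / (1 - prior)              # pre-test odds ~ 0.0101
for test in (1, 2):
    odds *= LR                          # each positive result multiplies the odds by LR
    print(f"after positive test {test}: probability = {odds / (1 + odds):.3f}")
# after positive test 1: probability = 0.154
# after positive test 2: probability = 0.766
```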
I found this video very helpful and I thank you for presenting it. However, does not the analysis for the case of testing positive twice in a row depend on an assumption that errors in the tests are independent? I can imagine situations where successive tests are far from independent - for example I might use covid test kits from the same production batch or there might be some peculiarity of my blood chemistry that routinely confuses some enzyme test.
(I used to calculate reliability of communication networks. I found that even very small correlations between link failures could completely change results calculated on the assumption of independence between link failures.)
Great work, this helped me a lot. I see you just published this, and with the growth in popularity and relevance of probabilistic programming and machine learning, it's right on time.
As a side note, I heard a baby crying around 8 minutes... assuming that's yours, congratulations!
The second part (taking the test twice) assumes that the events are independent. If it's something stable in the test subject's body that isn't the disease that triggers the false positive, then taking the test many times would have no effect on the probabilities.
Outstanding explanation
Glad it was helpful!
I'll have to rewatch this a couple of times ✌️
very helpful! Thank you so much!
pretty good explanation
Excellent way of teaching. Subscribing!
Welcome aboard!
Wow! I got it! Thank you so much!
Excellent exposition
There are some things that I did not understand:
1) Why are we dividing by P(B)? (5:57)
2) Why is it 90% of that 90%? What is the idea behind that?
This is quite interesting.
Thank you for the videos, very helpful
You are welcome!
you explained this so well go off unc
This was a great video, it really helped so much, thank you, you're really helping me love math! :)
Greatly explained.. thank you 😊
Excellent
I am wondering if someone could use a Bayesian approach to estimate undetected covid-19 cases? I mean, obtain the proportion of the infected population that is not being tested in a country or in a specific region, especially in those places where the government is not giving out much information about the spread of the virus. If you can in fact use Bayes' Theorem for this, can you make a video about that?
Very good video, one of the best I have ever watched about this subject. But at 2:32 shouldn't he consider 10%, not 5%, as he said at 2:09 that the test also has a false negative rate of 10%? Or am I wrong?
thanx a lot....true life saver
Thank you so much!
Great video. Shame there is so much boom and echo in the sound.
Awesome explanation!
That's a baby crying or a cat at 8:10😂
haha that's my baby:D
@@DrTrefor That's beautiful, best wishes man,
And you really have been of great help
08:09 baby sound?
haha yup!
From past experience it is known that a machine is set up correctly 90% of the time. If it is set up correctly, 95% of parts are expected to be good, but if the machine is not set up correctly then the probability of a good part is only 30%. On a given day the machine is set up and the first component produced was found to be good. What is the probability that the machine is set up correctly?
solution for this?
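A sketch of one way to set it up, just applying the theorem from the video to the numbers as stated in the question:

```python
P_correct = 0.90               # machine set up correctly 90% of the time
P_good_if_correct = 0.95       # chance of a good part when set up correctly
P_good_if_incorrect = 0.30     # chance of a good part when set up incorrectly

# Total probability of the first part being good.
P_good = P_good_if_correct * P_correct + P_good_if_incorrect * (1 - P_correct)  # 0.885

# Bayes: probability the machine is set up correctly given a good first part.
print(P_good_if_correct * P_correct / P_good)  # ~0.966
```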
Amazing! 😀😁😍😎
Most underwatched video on youtube! 😐
Śmiem Wątpić because he stole the idea from Veritasium
Well made video! I am a college professor and aspire to this level also but I have a few questions: (1) Do you get tired through having to be as expressive (this is a good thing!) as you are, through an online medium? I see that you make a great effort in projecting your voice and also gesticulating to drive home "the point". This must be tiring (2) What recording/capturing software do you use? Thank you for your time!
Excellent video
Thanks a lot, you explained it very well.
Make some videos on systems and signals
I have a question:
There is a store. 40% of the store contains products from company A, the remainder from company B. The store is also composed of 30% Large items, the rest being Small items. Suppose that 50% of the store is composed of items that are either from company B or is Large, what is the probability of choosing an item belonging to company A given that the item you chose is Small?
So this is how I did it:
P[B] = 40% so the other 10% must be the large items from company A to make P[B & L] = 50%. Which means that P[L|A] = (1/6) because 60% x (1/6) gives me the 10% I needed. This also means that P[S|A] = 5/6.
Since company A supplies 10% of the Large items, this must mean that company B must supply 20% of the Large items to make a storewide total of 30%. Which means P[L|B] = (1/2) and P[S|B] = (1/2).
Using Bayes' Theorem, I got P[A|S] = (1/2). Is this correct?
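One way to check the arithmetic: plugging the conditionals derived above straight into Bayes' theorem (taking P[A] = 60%, P[S|A] = 5/6 and P[S|B] = 1/2 at face value, as in the working above, which is one reading of the problem) gives about 5/7 rather than 1/2, so it looks like the final division by P[S] may have been dropped:

```python
# Intermediate values taken from the working in the comment above (not from the video).
P_A, P_B = 0.60, 0.40
P_S_given_A = 5 / 6
P_S_given_B = 1 / 2

P_S = P_S_given_A * P_A + P_S_given_B * P_B  # 0.5 + 0.2 = 0.7
print(P_S_given_A * P_A / P_S)               # ~0.714, i.e. 5/7
```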
Please, what is the name of the software you are using for the video? It's a great way to present a lecture, thank you.
I have a whole vid about my process here: ua-cam.com/video/hmQd_P_qj1w/v-deo.html&ab_channel=Dr.TreforBazett
Why don't you use the first test's posterior probability of 15.4%, which then becomes a prior, to figure out the second test's posterior probability?
This is great 👍
Sir, do you have a video regarding Bernoulli trials?
Thank you.
LOVE THE VIDEO! But, I think you confused FP with FN. If there is 10% chance that test will give a FN, then there is 90% chance that when test gives negative, we actually DO NOT have the illness. On the other hand, if there is 5% chance that test will give a FP, then there is 95% chance that when test gives positive we actually DO have the illness. So, P(A) should be 0.95, correct?
Let me clear this up a bit for you. I am restating your sentence with small modifications. If there is a 10% chance that the test will give a FN, then there is a 90% chance that when we actually DO have the illness, the test gives positive. On the other hand, if there is a 5% chance that the test will give a FP, then there is a 95% chance that when we actually DO NOT have the illness, the test gives negative.
When do we get an answer to this question...
The opposite is also true. Given a positive test, the probability that you don't actually have the disease (a false result) is 84.6% (approximately 5/6) after the first test. After a second positive test it would drop to 23.4%. Only after the 3rd test would it be close to zero (i.e. 1.7%). Therefore most medical tests/statistics are not trustworthy if taken only once. However, this is also true for the distributed data itself. Because IF all the 100 subjects are only tested once, how trustworthy is the distributed data that you depend on initially?
How would one apply this concept to a model that is fairly well calibrated but has a pretty large false positive rate? Instead of just a binary output it gives a probability. Would I use that probability as the prior?
Trefor, wouldn't we use 15.4% as the "prior" that you do have the disease when you run the test a 2nd time? I'm thinking of the posterior becoming the prior.
yes
Very informative!
I'd love this video with just the numbers and formulas visible while you explain, instead of recalling numbers from 10 minutes prior. You waving your hands and being wild is pretty distracting. Thanks for your help with Bayes.
2:06 Why positive test might have cases?
I'm corona infected,
But now I'm not sure.
Great work
I don't know why people don't watch this work instead of pewdiepie
Thank you for your detailed explanation, but shouldn't it be 0.95 for P(B|A) instead of 0.9? Because P(B|A) represents the probability of a positive test result given that one is actually sick. With a 5 percent false positive rate, it means that 95 percent of sick people would receive a positive test result (which aligns with P(B|A) of 0.95). 7:41
No! In that part you're just deducting the 10% probability of having a false negative.
ty ty ty, my teacher didn't explain shit throughout the course
A genuine question. Doesn’t the FPR reset each time? Meaning every individual test has a 95% chance of being correct. This isn’t the same as 5 out of 100 being false.
If the accuracy of every individual test is 95%, then each individual tests is 95% accurate. Does that in reality equate to 5 out of 100 being wrong? Can you apply specific accuracy to bulk testing?
Indeed, there is a big difference between 95% and 5 in 100 people. The most likely outcome for 100 people is 5, but in any specific group of 100 people sometimes it will be less and sometimes more than this. So it is ok to build intuition like I did at the beginning of the video with a sample of 100 people, but you can't only look at that.
Dr. Trefor Bazett thanks for this! I was having an argument about the COVID PCR FPR - 0.8% (ish). I argued that out of 100k tests, if only 80 are positive then they could all be false, as the FPR suggests around 800 FPs. I was told "no", that's statistically highly improbable, as the likelihood of each individual positive being correct is 99.2%.
I don’t know how to reconcile the two - I’m not maths smart!
I am puzzled by your calculation of P(A|B) after the second test. Instead of using the probability of testing positive twice, why don't you simply update the prior P(A) to be 0.154 instead of 0.01? Given that the first test is positive, the probability that the patient has the disease is no longer the general prevalence of 1% but is now 0.154. The sensitivity and specificity of the test are the same, so you end up with P(A|B) ≈ .77.
That's exactly my thought. The new (2nd test) prior is the 1st test's posterior probability of .154.
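A quick sketch confirming that recycling the posterior as the new prior gives the same ~77% the video reaches with the two-tests-at-once approach (assuming 90% sensitivity, a 5% false positive rate, and independent tests):

```python
def update(prior, sensitivity=0.90, false_positive_rate=0.05):
    """One Bayes update after a single positive test result."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = update(0.01)  # after the first positive test: ~0.154
p = update(p)     # posterior becomes the prior; after the second positive: ~0.766
print(p)
```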
Great stuff :) Thank you! :)
Awesome
I got tested positive for amphetamine and ecstasy but I haven't used anything, so what will happen? They told me that they will send the same urine again and contact me.
Hello sir, thanks for that clear explanation, however I have one question. Shouldn't we use the result of the first calculation, which is 0.154, as a prior for the 2nd test where it resulted in another positive? I'm new to this so I'm quite confused, so please correct me on which part I misunderstood. Thank you so much :D
I was wondering the same thing
Shouldn't P(B|A) be .95? I'm confused on this part, other than that the video was amazing!
A false negative rate of 10% means that the test will reflect positive for the presence of the disease 90% of the time. The sensitivity of the test is 0.90 (it will be positive when the disease is present).
What does the 77 percent represent?
That you actually have the disease, given you have just done the test twice and both times it came up positive.
Suppper video!!
The video by Veritasium says that P(Having Disease) is prior information, so it is updated using the previous result. But you updated P(Testing Positive | Having Disease). What am I missing here?
Found out there are two ways to get to the same answer: either update the prior probability or update P(HD | test positive).
I tested positive for covid, with a 6% chance of false positive (and 96% true positive). Then tested negative twice. Wasn't able to crunch the numbers, though.
This is the most confusing and incoherent explanation I have ever heard for this scenario. Wow.
If you don't understand why True Positive + False Negative = 100%, check out this wikipedia picture:
en.wikipedia.org/wiki/Sensitivity_and_specificity#/media/File:Sensitivity_and_specificity.svg