If the presents labeled 'Phone' went through the machine a second time:
- the expected number of real phones still labeled correctly would be 4 × 0.8 = 3.2 (let's say 3), and
- the expected number of presents incorrectly labeled as phones would be 19 × 0.2 = 3.8 (let's say 4).
- That means you would have a ~46% chance of picking a phone from the pile!
And that's why getting a second opinion is not a bad idea when you get something tested ;)
I get that in order to have a digestible example for the target audience, it has to be "accuracy" in the broad sense, but anyone who finds this compelling should take the next step and look into specificity and sensitivity.
What would happen if one removed the gift wrap and put the box through the machine again? Would the probability of finding a phone increase, decrease, or remain unaffected?
Just put the remaining 23 packages through the machine again. It will give 3 phones out of a total of 7; that is already better than a 42% chance of getting a phone. Do it once more, and there is a 2 out of 3 chance. And after 4 passes, you would be guaranteed 2 (or 1, if rounding down) phones remaining. Of course, this only works if "80% accurate" means it always returns exactly 80% (rounded down to the nearest whole number) without deviation, which in reality it won't, but that assumption was already made at the start by guaranteeing 4 phones.
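The repeated-pass idea in the comments above can be sketched in a few lines of Python, assuming each pass errs independently at the same 20% rate (real tests often fail for systematic reasons, so treat this as a best-case sketch):

```python
# Expected makeup of the "phone" pile after repeated passes, assuming
# each pass errs independently at the same 20% rate. Real tests often
# fail for systematic reasons, so this is the best case, not a guarantee.
phones, junk = 4.0, 19.0   # expected contents after the first pass

for n in range(2, 6):
    phones *= 0.8   # real phones that survive another pass
    junk *= 0.2     # false positives that slip through again
    share = phones / (phones + junk)
    print(f"pass {n}: ~{phones:.2f} phones vs ~{junk:.2f} junk "
          f"-> {share:.0%} chance a random pick is a phone")
```

Under independence the pile converges to nearly all phones within a few passes, which is exactly why repeat testing is common practice for medical screening.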
I was half expecting a machine that would label all the presents as 'not a phone'. I mean, you could bump up your accuracy to 95% with that!
^ Underrated comment.
that's why a single number for accuracy of the test is misleading and two numbers should be given instead.
Matt's machine had a 20% chance of a type I error (false positive) and a 20% chance of a type II error (false negative).
The "better" 95% accurate test of "everything is not a phone" would really have a 0% type I error rate and a 100% type II error rate.
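The point about quoting two error rates instead of one accuracy number can be checked numerically; a minimal sketch using the video's 100 boxes:

```python
# Two classifiers on the video's 100 boxes (5 phones, 95 non-phones).
# A single "accuracy" number hides the two separate error rates.
phones, non_phones = 5, 95

def error_summary(fpr, fnr):
    """Return (false positives, false negatives, overall accuracy)."""
    fp = non_phones * fpr   # type I errors
    fn = phones * fnr       # type II errors
    accuracy = 1 - (fp + fn) / (phones + non_phones)
    return fp, fn, accuracy

# Matt's machine: a 20% error rate of each kind -> 80% accurate overall.
fp, fn, acc = error_summary(fpr=0.20, fnr=0.20)
print(f"machine: {fp:.0f} FP, {fn:.0f} FN, accuracy {acc:.0%}")
# The lazy "nothing is a phone" classifier: more "accurate", but useless.
fp, fn, acc = error_summary(fpr=0.00, fnr=1.00)
print(f"lazy:    {fp:.0f} FP, {fn:.0f} FN, accuracy {acc:.0%}")
```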
True. That's why, when evaluating binary classification systems, we usually don't use accuracy as a metric. Instead, we use precision (the percentage of selected items that are relevant), recall (the percentage of relevant items that are selected), and an F1-score (which is a combination of precision and recall).
en.wikipedia.org/wiki/Precision_and_recall
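For the machine in the video (4 true positives, 19 false positives, 1 false negative), these metrics work out as follows; a quick sketch:

```python
# The machine in the video: 4 true positives, 19 false positives,
# 1 false negative (the phone that ended up on the floor).
tp, fp, fn = 4, 19, 1

precision = tp / (tp + fp)   # share of flagged boxes that are phones
recall = tp / (tp + fn)      # share of actual phones that got flagged
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.3f}, recall={recall:.3f}, F1={f1:.3f}")
# The "80% accurate" machine scores a rather less flattering F1 of ~0.29.
```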
My math could be wrong, but it would still be 80% accurate. Out of the boxes containing phones, it can accurately predict them 80% of the time, so it will correctly flag 4 out of the 5 phones.
However, the accuracy also applies to the ones that are not phones (as it must determine there are no phones in those either), so it will be 80% accurate there too, meaning it will correctly detect that 76 of the 95 boxes aren't phones and label the other 19 as phones.
They came up with 17% at the end because the machine would label a total of 23 boxes as phones. Out of those, 4 are correct, or 17.39(...)% of the newly wrapped boxes are phones.
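That arithmetic is easy to verify; a minimal sketch using the video's numbers:

```python
# The video's setup: 100 boxes, 5 contain phones, and the machine
# labels each box correctly 80% of the time.
boxes, phones, accuracy = 100, 5, 0.80

true_positives = phones * accuracy                    # 4 real phones flagged
false_positives = (boxes - phones) * (1 - accuracy)   # 19 non-phones flagged
flagged = true_positives + false_positives            # 23 boxes labelled "phone"

# Positive predictive value: the chance a flagged box really holds a phone.
ppv = true_positives / flagged
print(f"{flagged:.0f} boxes flagged, {ppv:.2%} of them actually phones")
```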
@@TheZotmeister literally the highest rated comment
I would like to reiterate that this machine is 80% accurate.
I think Matt was only 80% accurate on that statement.
I would like to reiterate that this comment is 80% accurate.
I really didnt understand. How acurrate is it?
@@doublespoonco, 80%!
Huh, must have missed that bit...
I came for Matt Parker and his machine, but getting a bonus of Hannah Fry as well is like my own perfect Christmas Present.
80% of the video was Hannah.
I came for Hannah Fry. I'm tolerating that Matt Parker is involved.
4:32 My machine *slaps roof* is 80% accurate.
This bad boy can fit so many f***in phones inside
Solid joke.
Don't take 50% of the credit. Take 80% of the credit.
So 17%?
Can I have the 20% of not-a-credit then?
ʷᵃᶦᵗ ᵗʰᵃᵗˢ ⁿᵒᵗ ʰᵒʷ ᶦᵗ ʷᵒʳᵏˢ ᶦˢ ᶦᵗˀ
I want ‘It’s 80% accurate’ merch. And you only get sent the right merch 80% of the time.
Is it a phone 17% of the time?
It's a T-shirt with the legend "I am a T-shirt" on the front.
Except that it randomly changes to read "I am an off-duty Czechoslovakian traffic warden" 20% of the time.
(Internet points to anyone who gets the reference).
80%Acbutate
but you’d have to send a shirt to 20% of people who didn’t ask for one too
@@sixstringedthingYou Smeeeeeeg Heeeeaaaad!
I played the video 4 to 5 times just to see Matt's expression each time he repeated, "80% accurate!". He managed to vary his expression in subtle ways each time, not to mention the complacent little nod after each iteration.
I can already see people providing 80% accurate solutions to the next Matt Parker's Math Puzzle.
Jokes aside though, what a neat way to explain this at first sight unintuitive part of probability theory!
Not completely sure about this but it feels very related to Bayes' Theorem.
EDIT: grammar
Parker math is math that is 80% accurate, or 80% to the goal.
For instance, choose any positive integer n and form a regular n-gon. If congruent copies of that n-gon can be used as faces to form a solid that is 80% enclosed, then the n-hedron constructed is a Parker Platonic Solid.
Never have I seen Bayes' theorem explained so simply and with such fun. Call me a skeptic, but I suspect the 5 mobile phones were fake news; got to watch those budgets.
in reality there were 5 brand new mobile phones in the boxes but they were in the /red/ boxes. when the boxes got switched all of the actual phones got dumped on the ground.
@@gildedbear5355 Wow, a 0.032% occurrence, bet they didn't see that coming at all
I'd say that the idea of the phones all being fake is about 80% accurate.
Probably. Not only for budget reasons, but because having the person at the end pick out an actual phone would defeat the point of the demonstration.
Phones are cheaper than socks.
Better title:
Parker Machine
FINE. Updated.
@@standupmaths Yey!!!! We did it boys
@@standupmaths only you would embrace that joke so gracefully hahaha
For those who came later, the title was "Matt's machine which is 80% accurate."
There are currently 80 likes on this comment, by the way.
I like the part where he says the machine is 80% accurate.
When did he say that?
Sorry, when? I didn't catch that one
timestamp?
He's only saying that for 17% of the video.
@@gasdive Your 17% of the video is 80% accurate!
Well, I mean, the machine improved the girl's chances of finding a smartphone by 247.8%, so it wasn't a total loss...
Or something. ;p
Parker Salesman:
*slaps top of machine*
This bad boy is 80% accurate
Matt's embracing the true Parker spirit
I watched all 3 progs when first broadcast, and the 80% accurate machine struck me then as an absolute highlight of brilliant & clear explanation. Bravo, both
another less-mentioned but equally important result of this sort of error: one of the positives ended up on the floor. In the video that means a smartphone, but with the current events that means a contagious case of the human malware that didn't get caught by screening. Testing helps, but is part of the equation, not the entire solution.
That's why, depending on the problem, you optimize for either false positives or false negatives, depending on whether it is worse to misclassify an uninfected person as infected or to miss an infected case.
That was *literally* the point of the video.
3Blue1Brown vid modelling virus spreading, including testing & quarantine, and when that testing isn't perfect.
ua-cam.com/video/gxAaO2rsdIs/v-deo.html (23mins)
@@olmostgudinaf8100 I would say the main point of the demonstration was False Positives. They focused on the fact that even with 80% accuracy there were 19 false positives on the table, while the false negative was barely given a passing mention and never brought up again.
Ahh so it’s a Parker machine. This comment looks stupid since he changed the title lol.
Anthony Flanders Not only is it a Parker machine, it's also made by Matt Parker. That means it's a Parker Parker machine. Or, simplifying, a Parker² machine. We've come full circle to the Parker square.
Matt Parker: The King of Imperfection
@@mikeuk1927 Surely not a full circle 😜
@Hyasconi "Matt's machine which is 80% accurate"
@@mikeuk1927 you mean a parker circle to a parker square?
This has got to be near-optimal for children's education. You've got 3 hours -- can we get 6 hours * 5 days * 40 weeks * 12 years - 3 hours = 14397 more, if we all work together?
I remember this! It was a most effective way of demonstrating the principle. Well done, you and Hannah!
that poor child. imagine getting hyped at an 80% chance of getting a phone only to have it destroyed
Omg you changed the title to "Parker Machine"
80% of the time, I'm 80% right.
@@epsi What if 80% of the time he's 80% accorate, and 20% of the time he's 100% accurate? :P
@@epsi He could be 1% right and his comment would be a true statement.
your statement works 60% of time everytime.
@@epsi -- so I'm 14 percent more reliable than a coin flip? This gets 7% more confusing as time goes by.
So you are at least 64% right and at most 84% right (considering that this time you are right!) ✌️
Love the Christmas lectures usually, even more so with you and Hannah doing it
I love the non-intuitive way that a low incidence rate can make the tests pretty much meaningless. But on a serious note, how often is the accuracy on these tests symmetric? In a lot of cases, you can adjust the sensitivity to give more positives or more negatives. It would be interesting to look at the impact that could have, e.g. if your machine correctly flagged _every_ phone as a phone but incorrectly flagged 50% of non-phones as phones. I guess in different situations, it might be more beneficial to swing the bias one way or the other, depending on what you are looking for.
Indeed, tests are biased a bit for best accuracy at a specific incidence rate. Also, tests used for screening will be biased more towards the positive, because false positives can be checked, false negatives can be deadly.
And like that you've rediscovered the tradeoff between error types. It all depends on how you want to measure failure (what statisticians call the "loss" or "error function") and minimizing that. For instance, if false positives are worse than false negatives, you might construct a test that minimizes an error function which assigns a higher penalty to false positives than to false negatives.
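A toy sketch of such an asymmetric error function (the 10:1 weighting is made up purely for illustration; flip the weights if false positives are the costlier mistake in your setting):

```python
# Sketch of an asymmetric error function: penalise the two error types
# differently, then pick the classifier with the lower total cost.
def cost(fp, fn, w_fp=1.0, w_fn=10.0):
    """Total penalty when a false negative is 10x worse than a false positive."""
    return w_fp * fp + w_fn * fn

# The video's machine (19 FP, 1 FN) vs the 95%-"accurate" classifier
# that flags nothing (0 FP, 5 FN):
print(cost(fp=19, fn=1))   # machine
print(cost(fp=0, fn=5))    # flag-nothing baseline
```

Under this weighting, the "less accurate" machine is the cheaper classifier, which is the whole point of choosing the error function before choosing the test.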
In biophysics we are keenly aware of such issues and tend to use a different metric known as the Matthews correlation coefficient (MCC) rather than things like accuracy and sensitivity. MCC works very nicely even with extremely unbalanced data. Oh, and this Matthews is not Parker, lol. The machine learning folk are only catching on to this metric much more recently. MCC won't always be the most appropriate choice, for instance if you're working on a problem where false positives have no consequences, but I think it is a better general-use metric than any other, even if it's less intuitive to derive and a bit more involved to calculate.
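For reference, here is the MCC worked out for the machine in the video (TP=4, FP=19, FN=1, TN=76), using the standard formula:

```python
import math

# Matthews correlation coefficient for the machine in the video
# (TP=4, FP=19, FN=1, TN=76). MCC runs from -1 to +1, with 0 = chance.
tp, fp, fn, tn = 4, 19, 1, 76

mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(f"MCC = {mcc:.3f}")
# Despite "80% accuracy", the correlation with reality is modest.
```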
It's still a bit unclear, how accurate is the machine?
80%
Not very
Somewhere between 75% and 85% accurate
i'm 80% sure he mentioned that
The process (finding a phone) is now 17% accurate instead of just 5% because of the machine being 80% accurate
I’ve thought about this part of the Christmas lectures at almost every government briefing. Fully worth watching (RI lectures, not necessarily Matt Hancock)
It's a classic science communication problem. (... and not science too)
Normally you do it through the "in a study, doctors were presented with..." spiel. And yeah, solving people's misunderstanding of false positives/negatives is real research.
The visual of the 17 presents at the end is stunning, and probably the best presentation I've seen yet.
Who's Matt Hancock?
Already saw this on the RI channel, but I’m watching it again on here anyway.
From the beginning, I wanted to chuck the 23 positives back into the machine. Rounding down, the 19 junk and 4 wins becomes 3 of each.
Stripping back the metaphor, this is not always possible. Perhaps something in the original sample was what led to the false positives and negatives. But if the fault was in the testing, then retesting will improve the results.
Please stop making things that sort of work or else the word "Parker" will be everywhere
Nice Parker comment.
He changed the title!
By this comment and the message of the video, all medical tests are Parker tests...
But it does work!
It's 80% accurate!!
Parker pens xd
That machine operates very smoothly, the craftsmanship is clearly top-notch. I'd love to see it taken apart.
Watching the full lecture, the bit that comes after this one is completely relevant today.
the RI channel really has some great videos. Long form but informative and thought provoking
I love that the name is "X-mas ray detector 0.80"
I wonder how many spotted that.
At our Uni (Würzburg, Germany) we have a physics Christmas lecture in which it's discussed whether "Santa" can deliver all the presents in time.
Because X-rays were discovered here by Mr Röntgen, we use a real X-ray machine to identify the presents :)
I remember going on a school trip to one of the 2013 lectures. I was 12, that's made me feel old.
When I was 12 Matt Parker had yet to be born.
I read an article a few years ago about mammograms and their effectiveness... or maybe it was a video that I saw... regardless, it talked about how there was a distressingly high number of false positives as well as a regrettable number of false negatives. The math in the article/video called into question the effectiveness of mammograms as a screening method.
What a fantastic clip and visualisation, well done.
I was very much looking forward to this video!
I died at “Xmas ray detector .80”
This is some Monty Python level maths comedy going on here!
Bloody hell Matt, it went from some kids excited to win an iPhone to kids worrying about their inevitable demise! That got dark quick!
This video provides a nice visual explanation of the pitfalls of Covid-19 antibody testing with some of the current tests...
That's why I just run the "phones" back through the machine!
That would only work if those were random false positives.
What if there was a specific property that caused the false positives? You'd end up with the same 23 rewrapped boxes.
@@arnhelmkrausson8445 There shouldn't be a specific property causing the false positives, since that would be a systematic error, and propagation of uncertainty only takes random errors into account. And since that's how you get your accuracy figure, you're assuming there are no systematic errors.
The crushing disappointment as the child realised the phone was not coming...
I really like the RI channel.. and gosh I love Hannah, and well.. you.. are a Parker in my heart... (?
Of course a Parker Machine wouldn't be 100% accurate.
but it gives it a go and that's what's important.
Well, ain't that a Parker Square of a machine?
woow I like how you guys deliver this with the children and all.
nice show! good vibes!
Good job Matt and Hannah. Disease screening explained in 5 mins.
When your doctor administers you a test, they should tell you up front, "if you test positive, it really means you have an X% chance of having it. If you test negative, it really means you have a Y% chance," because X will often be actually pretty small. Then, if you test positive, your doctor should give you more tests to increase your certainty.
But X and Y depend on a priori probabilities. A positive result from an 80% accurate test for a very rare disease, performed on a random person, will almost always be a false positive. Likewise, a person from a subpopulation with a much higher probability of having the disease (e.g., someone with matching symptoms) will have a much lower false positive rate.
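This dependence on the prior can be made concrete with Bayes' theorem; a small sketch assuming the test's sensitivity and specificity are both 80%, as in the video:

```python
# Positive predictive value of an "80% accurate" test as a function of
# the prior probability (prevalence), via Bayes' theorem. Assumes the
# test's sensitivity and specificity are both 80%, as in the video.
def ppv(prior, sensitivity=0.8, specificity=0.8):
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for prior in (0.001, 0.05, 0.5):
    print(f"prior {prior:>6.1%} -> P(condition | positive) = {ppv(prior):.1%}")
```

At the video's 5% prevalence this reproduces the 17.4% figure; at a 0.1% prevalence the same test's positive result means less than a 1% chance of actually having the condition.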
Elling (did I hear that name correctly) should've asked to rescan the wrapped gifts that came through from the first sort. That would've improved the selection odds.
Bit of a Parker Square of a machine.
I've just seen you on 'World's Top 5' planes commenting on a spy plane. A repeat on Quest, but still found it interesting.
The machine has produced a smaller pile of boxes with a higher % of phones. You could feed those back into the machine repeatedly until only one box remains, and it would have a much higher chance of being a phone than picking a box after just one pass.
(Assuming, of course, that it's an actual machine and not just a box with a guy in it who switches out some of the boxes...)
You can't know that.
If the machine randomly selects something as false/true with 80% accuracy, then yes, this strategy would work.
In real life, tests usually fail for some reason (it can't determine what's in the box; the medicine does not work for patients with a particular condition), so putting a box through again would yield the same result.
ARE YOU IMPLYING THE MACHINE ISN'T REAL?!?!?!!??! Heresy!
@Liku Just take the gold-wrapped ones, tear off the paper, and put red on them again.
You're assuming that, for any given box, the machine wouldn't just reproduce identical results each time.
Parker Mathematical Consultants "Where perfect is the enemy of good"
Hey Matt, dunno if you'll see this:
seems like a good follow up to this would be a discussion of Precision, Recall, Specificity, and any other relevant terms for this kind of thing.
You are really good on stage!
6:12 - machine and presents
6:24 - empty stage
Well, I already saw the full lecture, but it's still fun
This is a really good explanation of the prosecutor's fallacy
2:08 I only _just_ noticed the "X-Mas Ray" pun 😂
80% of the time it works 100% of the time.
100% of the time it works 80% of the time.
I'm not accustomed to seeing Hannah with short hair.
That sorting machine reminds me of an old Soviet joke:
The Americans build a potato peeling machine and show it off at a conference where Gorbachev is present. It's a TV-sized box. They drop in a potato; it ends up in the dispenser after a few seconds, nicely peeled. They throw in two; a few seconds later they appear as well. They pour in a whole sack; within 30 seconds all are perfectly peeled. Gorbachev goes home and gives the order that Soviet engineers have to develop their own peeling machine, so they can present it to the world too. They come back after barely a week. It's the size of a small car. They drop in a potato, wait a minute, and a much smaller, rough-edged one slides out of the dispenser. They drop in two, wait two minutes, and another two small ones slide out. They pour in a whole sack. They wait ten minutes. Then twenty. After half an hour a small slip of paper slides out: "Sasha can't take it any more."
I don't get it. Is it the fact that they just cut out a lot of potato instead of peeling it? Or is it that Sasha is sitting inside and peeling them? What is the funny part?
@@nanigopalsaha2408 Sasha is inside peeling them, and doing a worse job at it than the American machine did.
@@nanigopalsaha2408 The Parker machine didn't work very well and just has a person inside it, their joke was similar on those two points.
I don't see how this is a joke, or funny at all.
@@joshuacollins385 It worked perfectly; it demonstrated the problem. Its job wasn't to find phones.
Thank god you just filled 3 hours of my 24 hours allocated daily :D
Who did you get to work inside the machine!? That's awesome.
Depending on how the machine works, maybe you can put the gold presents back into the machine with the bow removed and see if they come back with or without a bow. Basically, re-test a box.
I must have misheard the number, how accurate is the machine?
Somewhere between 0% and 100%, if I remember correctly.
He forgot to mention it, the machine is in fact 80% accurate
your delivery is so good in this. very charming and makes the point understandable. honestly look up to you on that
Fantastic demonstration!
We watched this in school on our last day before it closed
Loved the lectures. I doubt any will ever top the ones from the Polish(?) chemist gentleman for me though.
I would have loved if the kid started throwing the yellow boxes back into the machine.
Also: This is why they always assess things like age and history before screening for diseases.
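That comment is exactly the base-rate effect from the video. As a rough sketch (using the video's numbers: 5% of boxes contain phones, and the test is 80% accurate in both directions), here is the Bayes' rule calculation showing why pre-screening to raise the prior matters so much:

```python
# Positive predictive value (PPV): P(actually a phone | labeled "phone").
# Assumes sensitivity = specificity = 0.8, as in the video's setup.

def ppv(prior, sensitivity=0.8, specificity=0.8):
    """Bayes' rule: true positives over all positives."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.05), 3))  # low prior (5 phones in 100): ~0.174, the video's 17%
print(round(ppv(0.50), 3))  # high prior (pre-screened group): 0.8
```

The same 80% accurate test jumps from 17% to 80% predictive value purely because the prior changed, which is why screening programs target high-risk groups first.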
A bit of Fry and Parker!
Not only is it 80% accurate at predicting which present contains a phone, it's also bad at counting. 2:46 Throws in 2 presents, 5 come out
Parker machine indeed :P
A suggestion for a topic of a new video:
The baker's dozen and the baker's math.
And yes, I am (tongue in cheek) serious!
That poor volunteer will never again trust mathematicians bearing gifts
This is a much more fun way to talk about probability than talking about marbles in a jar
80% accurate coulda been my college motto! 90% accuracy required so much more work!
I had a formula for how many hours I would be willing to study or do homework depending on the class and the weight of each on my grade!
But seriously why did we not put the 20 plus newly wrapped presents back in? Give me another 80%!!!!
Smart money is on none of the boxes having a phone, Hannah lied, and Matt had no idea that his machine couldn't tell a sock from an orange. It did turn boring red presents into fancy gold ones sometimes, so there's that, I guess
80% of the time, it works *every* time
I liked those gold Parker Cubes it spat out.
It's like a Parker Square machine!
My idea of how your machine actually works is someone sitting inside there and exchanging certain boxes...
Though I can't help feeling bad when I think of that whole bag of boxes that landed on their head.
Btw, were there actually any phones there?
I don't think anyone was counting the number of dud presents going in and out, so the 'machine' just had to put most of the dud presents out and mix in 23 'phones'
@@Septimus_ii do you think the maths loving Matt Parker, who loves accuracy in numbers, would go on stage making a mathematically inaccurate machine?
I would have put the 'phones' in the machine again. Just to mess with Matt.
If the presents labeled as 'Phone' had gone through the machine a second time:
- The expected number of real phones still labeled 'Phone' would be 4 × 0.8 = 3.2, and
- The expected number of non-phones incorrectly labeled 'Phone' would be 19 × 0.2 = 3.8.
- That means you would have a 3.2/7 ≈ 46% chance of picking a phone from the pile!
And that's why getting a second opinion is not a bad idea when you get something tested ;)
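The second-pass numbers are easy to check, assuming each pass is an independent 80% test applied to the 23 boxes the machine labeled 'Phone' (4 real phones, 19 not):

```python
# Second pass over the boxes labeled "phone": 4 real, 19 not.
real, fake = 4, 19
acc = 0.8

real2 = real * acc        # phones correctly re-labeled: 3.2
fake2 = fake * (1 - acc)  # non-phones wrongly re-labeled again: 3.8

print(real2, fake2, round(real2 / (real2 + fake2), 3))  # ~0.457
```

So one extra pass takes the chance of picking a phone from ~17% up to ~46%.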
Then you should try unwrapping the labeled boxes and reinserting them. It works, I think 🤔. Maybe over many repetitions 😄.
Hey, I looked at this sort of test, completely by coincidence, in one of my last stats classes
MY EYES, MY EYES, IT BURNS
~ unnamed SpongeBob character
Ah it's a real parker square of a machine.
I get that in order to have a digestible example for the target audience, it has to be "accuracy" in the broad sense, but anyone who finds this compelling should take the next step and look into specificity and sensitivity.
I know I'm late to this, but I need to know what you told the red and yellow hats that was so funny.
That's a real Parker Square™ of a machine
Love Hannah Fry!
That's how a proper Parker machine should work!
What would happen if one removed the gift wrap and put the box through the machine again? Would the probability of finding a phone increase, decrease, or remain unaffected?
Just put the remaining 24 packages through the machine again. It will give 3 phones out of a total of 7. That is already more than a 42% chance of getting a phone. Do it once more, and there is a 2 out of 3 chance. And after 4 passes, it would be guaranteed to leave 2 (or 1, if rounding down) phones remaining. Of course, this only works if 80% accurate means it always returns exactly 80% (rounded down to the nearest whole number) without deviation, which in reality it won't, but that assumption was already made at the start by guaranteeing 4 phones.
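The comment above rounds down at each step; working with expected values instead (and assuming each pass is an independent 80% test that keeps 80% of the phones and 20% of the non-phones among the boxes labeled 'Phone'), a quick sketch of repeated passes:

```python
# Iterate the "keep only boxes labeled phone" filter, starting from
# the video's 5 phones and 95 non-phones (expected values, not rounded).
phones, others = 5.0, 95.0
acc = 0.8

for n in range(1, 5):
    phones *= acc        # each pass keeps 80% of the real phones
    others *= 1 - acc    # ...and 20% of the non-phones
    print(n, round(phones / (phones + others), 3))
# pass 1: ~0.174, pass 2: ~0.457, pass 3: ~0.771, pass 4: ~0.931
```

Each pass costs some real phones (only ~2 of the original 5 survive four passes), but the pile it leaves behind gets purer every time.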
80% accurate, but also 82.6% likely to deceive the phone seeker. I love how maths works.
The 80% accurate stuff does suit you, Matt
It would have been funny if Matt fell in and came out wearing a gold ribbon on his head that said "Phone"
crossover of the century huh