Newcomb's Problem

  • Published 18 Dec 2024

COMMENTS •

  • @KaneB
    @KaneB  6 years ago +51

    A brief outline of solutions to Newcomb’s Problem:
    The One-Boxer: “Take only box B.”
    The Two-Boxer: “Take both boxes.”
    The Dialetheist: “Take only box B and take both boxes.”
    The Fundamentalist Teenager: “Until we’re married, take only box A.”
    The G.E. Moore: “Here is £500,000. Here is another £500,000. Therefore, take only box B.”
    The Schroedinger: “Take only box B, but never open it.”
    The Radical Skeptic: “What boxes?”
    The Rational Skeptic: “When I appeared alongside this so-called “predictor” on Larry King Live, I invited it to take my One Million Dollar Paranormal Challenge, and it accepted. Unfortunately, when I approached it after the show to arrange a date, it made a quick exit without responding to me, and my subsequent attempts to contact it have all been ignored.”
    The Modal Realist: “In some possible world, you take both boxes and receive £1,001,000.”
    The Talking Heads: “And you may find yourself taking both boxes. And you may say to yourself, MY GOD, WHAT HAVE I DONE?!”
    The Behaviourist: “I can see that you’re keen on two-boxing; how do I feel about it?”
    The Humean: “One box? Two boxes? I prefer to just scratch my finger.”
    The Nihilist: “Vere iz dat money, Lebowski? Ve vant da money, Lebowski!! Ya zink ve are kidding?”
    The Neutrino Physicist: “Take two boxes. Then do the same again another hundred trillion trillion times.”
    The Heisenberg: “I worked out exactly what’s in box B, but now I have no idea where it is.”
    The Penn & Teller: “This time, we’ll do Newcomb’s problem with *both* boxes transparent!” [Naturally you take two boxes, but as you leave you notice that all the money has disappeared from both of them.]
    The Timothy Leary: “Take all three boxes.”
    The Kaczynski: “Run away from box B.”
    The Donald: “We’re gonna open boxes, okay? Because I know boxes. I have the best boxes. Tremendous boxes. We’re gonna open so many boxes, you’re gonna get sick of opening boxes. Believe me, I’ve opened boxes many times. Look, having boxes - my uncle, good genes, very smart, okay, very smart - he opened so many boxes, you wouldn’t believe it. Crooked Hillary, she couldn’t open even one box - it’s true, it’s true. Lots of people are saying that. Sad!”
    The John Cage: “*Music for Two Boxes* for solo performer. Flip a coin. If heads, take box B. If tails, take both boxes. To be performed with maximum amplification.”
    The Nicolas Cage: “No, not the boxes! NOT THE BOXES! AAAHHHHH!! AHH, MY EYES, MY EYES!!”

    • @KaneB
      @KaneB  4 years ago +1

      @Oners82 Because I'd be pretty much guaranteeing that I'll end up with aggressive cancer.
      With respect to causality, I'm inclined to say either that causality is just irrelevant (if one is adopting a realist view of causality), or that in fact, choosing to do philosophy does cause cancer in this case (if one is adopting certain antirealist views of causation). A good article on causality in Newcomb's problem is Price and Liu, "Click Bait for Causalists" - philarchive.org/archive/PRIQBF

    • @KaneB
      @KaneB  4 years ago

      @Oners82 Well, I'd say the same about the Newcomb case. Either causality is irrelevant (if one is adopting a realist view of causality), or choosing to one-box does cause the Predictor to place the money in the box (if one is adopting certain antirealist views of causation).

    • @bishopbrennan3558
      @bishopbrennan3558 2 years ago

      @Oners82 I'm definitely in agreement with Oners here rather than Kane. As Oners says: if I have the gene, I'm probably going to get cancer, whether I choose to do philosophy or not. To be honest, I don't even really get how this scenario is supposed to be different to the previous one, and my intuitions are the same as in the previous case.

    • @tudornaconecinii3609
      @tudornaconecinii3609 1 year ago

      @@KaneB You forgot the most meme response to Newcomb's Problem, the Zero Boxer:
      "The difficulty of perfectly predicting human brains from the outside is probably greater than the difficulty of simulating human brains. Therefore, on priors, I should assume I'm in a simulation. Looking at the world as a simulation from this vantage point, it seems like the resolution of my box picking is at least mildly relevant/interesting to whomever is doing the simulating. So me delaying my box picking, perhaps even indefinitely, has positive odds of prolonging my existence."
      Now, I call it a meme response, but it has some nugget of validity to it. Namely this: Suppose that, instead of basing your decision of whether to one box/two box on your desires, you base it on bringing a 6-sided die with you, rolling it 2000 times, and then one-boxing if the sum is even or two-boxing if the sum is odd. THAT would be absolute bullshit for a mere predictor to get right, but plausible for a simulator. And Omega does get it right. So... zero box.

  • @valsyrie
    @valsyrie 6 years ago +21

    Kane: "I'm a militant one-boxer"
    Me, out loud: "Yesssssss!"

  • @alexmeyer7986
    @alexmeyer7986 6 years ago +17

    You've never been so prejudiced in a video before...
    and I LOVE it!

  • @m3morizes
    @m3morizes 6 months ago

    When you said the opinions on Newcomb's problem were near 50/50 and discussed the argument at 12:30, it got me thinking about whether people's opinion on Newcomb's problem correlates with their economic political stance, i.e. that two-boxers correlate more with being economically right-wing and that one-boxers correlate more with being economically left-wing. I have no idea, though; just a random feeling.

  • @GodisgudAQW
    @GodisgudAQW 2 years ago +3

    I think we have to consider how the predictor came to be such a good predictor. We can imagine two scenarios:
    1) The predictor has a time machine. She goes into the future, sees whether you one-boxed, and goes back in time and adjusts the boxes accordingly to reward or punish you.
    2) The predictor does not have a time machine, and is simply an unparalleled analyst with excellent abilities to read intention.
    I believe everyone would agree that if it's scenario 1), then one-boxing is the best strategy. And, if it's scenario 2), it's at least understandable why some philosophers are two-boxers, for they would argue that the money has already been placed and cannot be un-placed based on your decision.
    However, at 33:25, when we consider a perfect predictor, we are basically committing to a scenario in which beating the predictor (getting $0 or $1M+1k) entails a contradiction. That would mean that your choice causes the prediction to have already taken place. This is plausible if we consider that we are in scenario 1).
    When we go back to the original Newcomb's problem, it is precisely because it's not 100% that some philosophers seem to assume that we are not in scenario 1) and hence can become two-boxers. But, I would say that if the prediction is so accurate as to be nearly 100%, it is entirely possible that we are still in scenario 1), and the time-machine-wielding predictor occasionally misremembers the correct prediction, or even occasionally purposefully makes the wrong prediction just to throw us off and cause some of us to think we have a chance of beating the predictor. And it's entirely clear to me that as long as we know the predictor has a time machine and generally uses it to reward the 1-boxers, that one should become a 1-boxer.
    ----------
    Now I would like to say that even if we know for a fact that we are in scenario 2), then I would still argue in favor of 1-boxing, but I can see why someone would consider 2-boxing. My argument is that if you really think what it would take to become such a skilled predictor, any future effects, such as a friend's last-minute attempt at persuasion to become a two-boxer (and the impact that would have on your decision), would have already been accounted for in the analysis. I believe that the only way someone could become such a near-perfect predictor is precisely if your future decisions will influence the past analysis of the predictor. I think a two-boxer fails to account for this.

    • @dumbledorelives93
      @dumbledorelives93 2 years ago +3

      I have always tended to view the abilities of the predictor as deriving from a "Maxwell's demon" level of knowledge about the current and past state of the world, and using that to make an incredibly accurate prediction of the future state. Based on what it understands about the arrangements of all the particles in the universe, it already knows whether a friend will try to persuade you to change your choice and already knows the outcome of this to high precision. I don't think we would even need to bring time travel into the equation. With this type of predictor, I still think one boxing is the most appropriate choice, as the predictor has all the relevant information to be a true oracle.

    • @GodisgudAQW
      @GodisgudAQW 2 years ago +2

      @@dumbledorelives93 Yes, I agree. That's why I think even scenario 2 leads to 1-boxing. I think the fundamental difference of opinion is that two-boxers believe so strongly that your future decisions can't affect past predictions that they cannot fathom things being otherwise. I think considering the possibility of time travel would help a two-boxer understand where one-boxers like us are coming from.
      A Maxwell demon basically knows the future state in advance just as well as someone who actually saw the future outcomes and then went back in time.

    • @neoginseng436
      @neoginseng436 2 years ago +1

      I don't see the logic or method of those who decide to consider how the predictor is good. The fact that Newcomb decided to leave that out tells me 1) Newcomb wanted to prove that our education system has failed half the participants or 2) He knew people, he understood thoroughly how people make decisions, and knew/predicted the participant results would be nearly 50/50, and that such results would be shockingly apparent in the scientific community to the point where we are forced to acknowledge that half the world is just a bit stupider than us 1 boxers 🤣🤣

  • @philster5918
    @philster5918 2 years ago +1

    Kane, just finding your videos now, and loving them.
    I feel like when I first read Newcomb's Problem, it just seemed to me like the problem leaves some details vague, and sneaks in assumptions about causality without explicitly stating them.
    Imagine a relatively poor predictor. The 2-boxer wants to ask immediately before their decision "are the contents of the boxes set and invariant, no matter what I choose?" If the answer is yes, then they can choose 2 boxes (would the 1 boxer deny that, if the 2 boxer gets $1000, 1 boxing here would have given them $0.00?).
    But it seems when you add in a perfect predictor (and maybe even a near perfect predictor), it's hard to answer the 2 boxer's question, "Are the boxes set and invariant, no matter what I choose?" If the answer is yes, then it feels like we either have to deny the possibility of a perfect predictor or we have to say something weird about retroactive causality.
    If it's possible to have a perfect predictor, then maybe we have to say that the boxes are not "set and invariant, no matter what I choose." But then we are just saying that the perfect predictor allows for retroactive causality, and that's where all the confusion happens (imo).
    Curious to hear your thoughts.

  • @warptens5652
    @warptens5652 1 year ago

    28:30 "it seems insane to do philosophy"
    yeah, no. It seems just as sensible as being a 2 boxer, which I am. So as a 1 boxer you do face an inconsistency (with the earlier analogy), but as a 2 boxer I don't

  • @LitotheLlanito
    @LitotheLlanito 2 years ago +1

    Kane I'm a fan of your work. Re this old vid I'd be interested whether you'd still be a one-boxer if A was £1m and B £10m for example?

  • @andrewmoy5855
    @andrewmoy5855 2 months ago

    I suspect that the talk about causality is a red herring. This problem would still arise even if the content of the boxes was determined at the same time at which you make a decision. There are two decisions occurring in the scenario; the optimal choice for decision 1 is dependent on the outcome of decision 2, but decision 2 is dependent on decision 1. Even if you flipped the order of the decisions or changed the accuracy of the predictor, the circularity remains. Attempting to determine which choice is optimal would just be a mistake.

  • @colegiesbrecht335
    @colegiesbrecht335 6 years ago +3

    I'm not sure I understand your cancer/philosophy counter, are you saying that given I have the gene, if I choose to do philosophy I will get cancer? Or will I get cancer regardless of whether or not I choose to do philosophy, given that I have the gene?

    • @Ansatz66
      @Ansatz66 6 years ago

      The point of the objection is that philosophy and cancer are causally independent, which means that neither one causes the other. They are both caused by the gene.
      The tricky point is the question of whether the gene causes a person to do philosophy, or whether it merely causes a person to desire to do philosophy. If it is only causing a desire, then the choice is irrelevant since we'll have the desire no matter what we choose. If the gene actually causes the choice, then by resisting our desire to do philosophy we can effectively prove that we don't have the gene.

  • @paulblart8568
    @paulblart8568 1 year ago +1

    I think it’s fair to assume the predictor has no access to information about the situation (your demeanor, the boxes, the room, etc.) except for, let’s say a button that they press to put a million or 0 pounds in box B. The reason I believe this assumption is fair is because otherwise, the 1 boxer can give some BS reason that the predictor has just noticed something about you that aids in their accurate prediction, and the problem is derailed into arguments of biology/psychology. So, if we take that assumption and continue, two-boxing is the best strategy because the predictor has essentially stumbled into their reliability at random. Given that, you can bet that the next prediction given by the predictor will be wrong and choose two boxes, resulting in $1001000.

  • @detroyerdiscord2160
    @detroyerdiscord2160 4 years ago +2

    Quick comment on the "well-wishing" friend argument. First, you might say that the argument begs the question since it's supposed that the well-wisher will always recommend what's in your best interest and that he will recommend taking BOTH. Second, and more forcefully: the friend, so long as he is rational, would likely not make any recommendation at all. After all, he either sees that there is $M in there and will rejoice that you're about to pick ONE, or else is disappointed in the blunder you're about to make. He knows that whatever he says is unlikely to influence the outcome and, in the limit case, it cannot do so. See Craig (1987).

    • @detroyerdiscord2160
      @detroyerdiscord2160 4 years ago +2

      If anything, the well-wisher supports ONE, since you would want to do the action that the well-wisher would be happy to find out that you're going to do. Before you make your choice, he'd be happy that there is $M and that you'll almost certainly choose ONE, or else would be unhappy that there's $0 and that you'll almost certainly choose BOTH. Therefore, etc.

  • @myreneario7216
    @myreneario7216 6 years ago +2

    32:53 "If you can see what´s in the boxes, then there´s just no question at all. You take both."
    Why?
    Even if you can see what´s in the boxes, the fact still remains that the people who only take one box tend to become millionaires, while the people who take two boxes get a mere thousand dollars.
    If you walk into the room, and see the box filled with a million dollar then it´s of course very tempting to take both boxes, but the only people who get to see a box filled with a million dollars are the people who do not take the second box.

    • @Ansatz66
      @Ansatz66 6 years ago +3

      If we could see the content of both boxes then there would be no one-boxers, and therefore box B would always be empty. We wouldn't need a fantastic prediction machine to make the prediction, because it would be crazy to take just the empty box when you know that it's empty. If we did put the million in box B, then no one would have any reason to not take box A since they already know they've won the million either way, so therefore it would always be the wrong prediction to put the million in box B.

    • @myreneario7216
      @myreneario7216 6 years ago +1

      "If we could see the content of both boxes then there would be no one-boxers"
      I would still take one box. Even if you can see what's in the box, the fact remains that most one-boxers walk away with a million, while most two-boxers get a mere thousand dollars.
      As far as I can tell, all the arguments for one-boxing and two-boxing still work exactly as well when you can see what's in the box.

    • @Ansatz66
      @Ansatz66 6 years ago +4

      "I would still take one box. Even if you can see what´s in the box, the fact remains that most one-boxers walk away with a million."
      It's incredible that you could be faced with a box with $1000 and an empty box, and freely choose to only take the empty box for no reason. Maybe you went in there intent on being a one-boxer, but once you see what's in the boxes you have no reason to remain a one-boxer. You already know the one-boxing strategy has failed, so there's no point in going through with it.
      Similarly, it's incredible that you could be faced with a $1000 box and a $1000000 box and choose to leave behind the $1000 box for no reason. Perhaps that doesn't seem like much money in comparison to the $1000000, but it's your money for the taking. You may have gone into the challenge intending to be a one-boxer, but as soon as you see the content of box B you know you've already won whether you take box A or not, so one-boxing can no longer give you any advantage.
      Once you already know what the predictor has decided, it's too late to go back and change its decision. If you see an empty box B, you can't fill it with money by becoming a one-boxer, no matter how hard you try.

    • @donanderson3653
      @donanderson3653 2 years ago

      @@Ansatz66 Have you heard of "Kavka's toxin puzzle"? It sounds similar to what you're describing.
      "An eccentric billionaire places before you a vial of toxin that, if you drink it, will make you painfully ill for a day, but will not threaten your life or have any lasting effects. The billionaire will pay you one million dollars tomorrow morning if, at midnight tonight, you intend to drink the toxin tomorrow afternoon. He emphasizes that you need not drink the toxin to receive the money; in fact, the money will already be in your bank account hours before the time for drinking it arrives, if you succeed. All you have to do is. . . intend at midnight tonight to drink the stuff tomorrow afternoon. You are perfectly free to change your mind after receiving the money and not drink the toxin."
      You seem to be saying "It's impossible to imagine anyone actually being paid, because after either receiving or not receiving the million dollars in the morning, there's absolutely no reason to actually go through with drinking the poison. Everyone would realize this, and therefore, intend not to drink the poison (because they know there will be no incentive to do it when the time comes)."
      My intuition is pretty clear in this case: You intend to drink the poison, then actually drink the poison. Similarly, it seems reasonable (assuming you actually believe in Newcomb's amazing prediction powers) to 1-box even in the clear box case.
      Another way of motivating this belief: Suppose Newcomb actually creates 1,000,000 simulated copies of you, and offers this choice to each of them. Then, whatever the vast majority of simulations do, he uses that as his prediction in the real world (simulated copies cease to exist after the choice). In this case, there's a 99.9999% chance you're a simulation, and the best action to ensure the real version of you gets the million dollars is to 1-box.
      This re-inserts causality, by making the causal chain:
      Have a 1-box thought process -> Choose to 1-box in simulations -> Get offered $1,000,000
      Getting the million isn't causally dependent on *choosing* to 1-box, but it is causally dependent upon having the *predisposition* to 1-box. The decision to 1-box and the box being filled with $1,000,000 both have a common cause.

    • @gJonii
      @gJonii 1 month ago

      I'd one-box with transparent boxes, and would consider it lunacy to do otherwise

  • @zornrose3547
    @zornrose3547 2 years ago

    I'm not clear on what the difference is between Kane's cancer hypothetical and the one before. Is it just that it's specified that the correlation is 99.9%? Would one-boxers think quite a bit differently if the predictor were 60% accurate?
    (Incidentally, I would endorse two-boxing in the perfect predictor case and doing philosophy in Kane's cancer case.)

    • @warptens5652
      @warptens5652 1 year ago

      it's just that the risk is greater
      I'm pretty sure one boxers are only one boxers because the risk of 2boxing is losing 1million while the risk of 1boxing is just losing 1k

  • @jayadenuja3796
    @jayadenuja3796 2 years ago +1

    I'm not so sure your modified gene scenario makes a difference. Remember, the Newcomb problem would arise even if the predictor is 51% accurate, so if you accept that in those cases you would still one-box, then in the original gene scenario you should also not do philosophy. If you wouldn't one-box if the predictor is 51% accurate, the question would be: why not? The same calculations are being made to arrive at the desired outcome.
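
For what it's worth, the 51% figure can be checked on the evidential expected-value reading this comment assumes. Writing A for the visible 1,000, B for the 1,000,000 opaque prize, and p for the predictor's accuracy (symbols introduced here only for the check):

```latex
\[
\begin{aligned}
E[\text{one-box}]  &= pB, \qquad E[\text{two-box}] = A + (1-p)B,\\
E[\text{one-box}] > E[\text{two-box}] &\iff p > \tfrac{1}{2} + \tfrac{A}{2B} = 0.5005 .
\end{aligned}
\]
```

So on that reading any accuracy above 50.05% already favours one-boxing, and 51% clears the bar.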

  • @rogerwitte
    @rogerwitte 10 months ago

    Sweet argument! Personally I would question the premise. If the predictor can predict with better than 50% accuracy, then either the predictor can genuinely see into the future or there will be a clear majority of people for one of the two options. The evidence that there is not a clear majority means that the claim amounts to your having a genuine prophet available. If I one box and win, you may be correct, but if I one box and lose, then the problem setter is a liar (and I would ascribe high probability to there never having been any money). In this way, if I don't get the money, I get that nice smug feeling of intellectual superiority.

  • @Reddles37
    @Reddles37 1 year ago

    This is 100% a question about free will vs determinism. If you believe in free will then at the point of picking the boxes you can freely choose which to take without affecting the prediction, so you should take both boxes. If you believe in determinism then whatever decision you make was fixed all along and presumably known by the predictor, so you should take one box. But, the existence of the nearly perfect predictor in the definition of the problem already implies that we're in the deterministic case, so taking one box is the right answer.
    Basically, if determinism is correct then your decision *can* cause the predictor to change what's in the boxes earlier in time, because they've already made a perfect model of your brain and 'you' have already made the decision in the past. Just imagine replacing the contestant with a computer program, and the predictor can look at the code or just run the program in a VM or something before setting the prediction. The code gets run twice but it's effectively the same decision both times, so you can treat it as if the future decision causally influences the prediction. Also, it would be pretty embarrassing to go for both boxes only to find out that you're actually the simulated version of yourself being used to make the prediction, and you've just guaranteed that the real you won't get a million dollars...

  • @warptens5652
    @warptens5652 1 year ago

    My problem with this version is that the $1k vs $1m makes it very biased towards 1 boxing. That is, if you 1box and you're right, you get +999k, but if you 1box and you're wrong, you only lose 1k. I suspect if the bet was balanced, that is, with $2k in the opaque box instead of $1m, there would be far fewer 1boxers.
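
Plugging the comment's numbers into the same evidential break-even formula used above (A the clear-box prize, B the opaque-box prize, p* the accuracy at which the two options tie; again just an illustrative check) bears the suspicion out:

```latex
\[
\begin{aligned}
p^{*} &= \tfrac{1}{2} + \tfrac{A}{2B},\\
A = 1000,\ B = 1{,}000{,}000 &\;\Rightarrow\; p^{*} = 0.5005,\\
A = 1000,\ B = 2000 &\;\Rightarrow\; p^{*} = 0.75 .
\end{aligned}
\]
```

With a $2k opaque box, one-boxing only pays on that reading if the predictor is at least 75% accurate, rather than just over 50%.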

  • @warptens5652
    @warptens5652 1 year ago +1

    One boxers consistently get more money out of the boxes... but they were consistently offered boxes with more money in them! So this is hardly a testament to one boxing being a good decision.

    • @gJonii
      @gJonii 1 year ago

      It's not like one boxers have some birthright to better opportunities. You know the same as anyone else that a one boxer in such a scenario gets offered more money, so why not be a one boxer there?

    • @warptens5652
      @warptens5652 1 year ago

      @@gJonii "It's not like one boxers have some birthright to better opportunities."
      Actually yes it precisely is like that
      "one boxer in such a scenario gets offered more money"
      People who were one-boxers at the time when the predictor scanned their brain to make a prediction, those people get offered more money. And of course nobody disagrees that it would have been good to be a one boxer at that time. But that time has passed. One boxing now has no effect on the content of the boxes. One boxing because "one boxers get offered more money" is like buying an expensive watch and hoping that makes you rich because "people who buy expensive watches are rich". There's merely a correlation there.

    • @gJonii
      @gJonii 1 year ago

      @@warptens5652 No. People who were predicted to be one-boxers when the choice came up get offered money. You reveal your choice by... Making the choice.
      If you are deemed to be irrational enough to two-box in that scenario, willing to pass money despite fully understanding the situation, you get offered less money. But that's entirely based on prediction of how you behave in that choice situation. Something you have full control over. Your motives literally do not matter, only your action.
      You can justify choosing badly however you want, that's the funny part. You're still walking away with less money, predictably, virtually every time, with there being nothing preventing you from being among the winners besides your bad choices that you insist on.

    • @warptens5652
      @warptens5652 1 year ago

      @@gJonii "No. People who were predicted to be one-boxers when the choice came up get offered money."
      Idk why you would say "no" and then proceed to repeat what I said
      "You reveal your choice by... Making the choice. "
      revealing doesn't make you richer
      getting a box with money in it does
      if i reveal to you that your bank account has $100 more than what you thought, you didn't gain $100, you gained $0
      "If you are [predicted to two box], you get offered less money. But that's entirely based on prediction of how you behave in that choice situation. Something you have full control over"
      No, you don't have control over how you were predicted to act. You can't cause things to have happened in the past. You can merely learn new information about what happened in the past. This isn't a two boxer talking, even one boxers agree with this.
      You can justify choosing badly however you want, that's the funny part. You're still walking away with less money, predictably, LITERALLY every time, with there being nothing preventing you from getting an extra $1000 besides your bad choice that you insist on.

  • @brymonk8892
    @brymonk8892 5 years ago +6

    If the predictor has no causal effect on what is in the boxes, then its accuracy at predicting the outcome has to be an extremely improbable coincidence. If the predictor is perfect, then it literally causes there to be a million because everyone one boxes. I don't see how the predictor even matters when the boxes are already set unless the predictor itself is the cause.

    • @ThePiotrekpecet
      @ThePiotrekpecet 2 years ago +2

      I know it's an old comment, but this really boils down to the interpretation of probability. If you accept the Bayesian view that the probability of an event is just your confidence in that event happening, then changing your mind does change the probability. If you accept the frequentist view, then it obviously doesn't change.

    • @femboyorigami
      @femboyorigami 1 year ago

      I honestly don't understand the claim of no causal effect, since the boxes are literally set depending on the prediction. The predictor's accuracy also can't be a coincidence in any meaningful sense, since the setup of the problem assumes that the >99% accuracy is the true probability. (For instance, it would be extremely improbable to flip heads 10 times in a row on a fair coin, but only because the true probability of heads is 50%; if that wasn't the true probability, it wouldn't be a fair coin and thus not a coincidence.)
      Even if we didn't know that the true probability was 99.9% in real life, the fact that a coincidence is extremely improbable should be enough to reject the assumption of coincidence.

    • @paulblart8568
      @paulblart8568 1 year ago

      Why can’t the predictor’s reliability be a coincidence? Do people not get hit by lightning multiple times, and do people not win the lottery multiple times? Does life not occur on one planet over countless others? I believe, if we live in the current reality, the predictor has achieved their reliability purely by chance, and that each subsequent prediction is less and less likely to be correct. Because of this, two-boxing seems the logical choice, no?

  • @stephenlawrence4821
    @stephenlawrence4821 2 years ago

    Very good.
    I'm a one boxer and one of my arguments is "the betting argument". I didn't know it was a thing; I did come up with it myself. It seems to me that if somebody should bet on me only if I one box, then I should one box. My one boxing seems to be the same bet.
    I also think counterfactual reasoning is about moving to the nearest possible world in which I made a different choice. So I don't have a problem with me having a different past in that world. It's a bit like asking what I would have done if I had got home and found I'd been burgled. It's very different, but for sure we're imagining a different possible world with a different past prior to my arriving home. It doesn't seem to matter. Nobody says "but you can't change the past".
    And I think maybe the concerns about one boxing are based on a common illusion, which is that you can't change the past but you can change the future. Really! "From what to what?", as Dan Dennett points out.
    Having said that, the one argument that made me wonder in this video is the argument, from the two boxer's point of view, about a correlation between doing philosophy and getting cancer with a common cause in the past. That does seem right and does seem similar to the two boxer's argument.
    I shall ponder.

    • @paulblart8568
      @paulblart8568 1 year ago

      Would you change your bet if the predictor's accuracy was based on luck completely, akin to a person winning the lottery a couple times in a row?

  • @edvardm4348
    @edvardm4348 4 months ago

    I'm a simple guy, but to me it looks like two-boxers want to forget that the predictor is very accurate, or think that they can cheat it somehow. The way I see it is that even if I made a decision and then changed it to improve it, I'd simply accept the premise that the Predictor was able to see that coming: it's very accurate. So I guess I'm treating it more like a magical device (in which I don't believe, btw), and no matter how smart I tried to be, it would almost certainly be able to predict my choice => I'm with the one-boxers too. Oh, and a friend watching and always seeing the boxes is a different game already, so that should be an easy one to rule out(?)
    Then again, I'm quite certain I'm missing something, which is why it seems like a non-paradox to me.

  • @jamessmith4172
    @jamessmith4172 11 months ago

    “If you can see what’s in the boxes there’s no question that you take both.”
    So you admit the following: Even when you can’t see, it’s a given that if you could, you would take both no matter if there’s a million or not.
    How could the rational choice change based on whether you're blind or not?

  • @cliffordhodge1449
    @cliffordhodge1449 6 years ago +2

    This problem has a sort of reflexive quality that makes it difficult to evaluate. As chooser, I must decide what the predictor will himself decide, based on what he thinks I will decide, based on what I think he will decide based on what he thinks I will decide about what he will decide, ad infinitum. Since this looks like an infinite regress, it seems at least reasonable - maybe not optimal - to treat this as a sort of educated gambler problem. As a gambler/chooser, I would wish to gather probabilistic information. But as this is not a simple event, like the proverbial coin flip, I must face the further problem that probabilities will be mere measures of my ignorance, based on limited conditions, and not based on an entire universe as the starting assumption. That is why different choices seem more or less reasonable based on adjustments to the set of given facts. But in the real world, a gambler goes with that. So if I buy a lottery ticket, I give, let's say, $2, and I receive a probability P > 0 that I will win $1M. What matters to the gambler is not the actual probability of winning, but the fact that he has increased his probability from a perfect 0 to something > 0.

  • @danwylie-sears1134
    @danwylie-sears1134 9 months ago +1

    Before watching:
    Newcomb's problem, singular? It could be a completely different question, depending how the Predictor works. In the most likely interpretation, the Predictor is magic, so of course you should take only box B. There's this entity identified by a capitalized version of a common noun. Obviously it's a genericized version of the Christian notion of God, and any God-knockoff worthy of its capital letter can reach backward through time to add or remove money. There's also the scenario where people turn out to be depressingly predictable, and the predictor does well enough just by checking your zip code, age, sex, income, ethnic background, and education level. But that scenario is boring, so it probably isn't the one that's being posed. So you should ignore that scenario, and take only box B.
    Now to watch the video.
    An addendum, as I'm a little bit in. There are four questions, not two. Not only is there the matter of whether it's a Predictor or just a predictor, but there's also the question of whether you're deciding what to do or deciding what to be going to do. If you're just encountering the problem, posed to you in a book or a YouTube video, you're not in the situation deciding what to do. You're in the present, deciding what to be going to do in a possible future. In that case, you should definitely decide to take both boxes. The likelihood of encountering a Predictor is incalculably smaller than that of encountering a mere predictor, which in turn is vastly less than that of never encountering either. Neither encounter is any more likely than an encounter with an anti-predictor, who presents the same scenario but lies to you and puts the million pounds in the box for precisely those people it thinks are going to take both. The question of your intellectual integrity is the only question that actually matters here, and you shouldn't become a woo-swilling fluff-brain just for the sake of a Predictor who doesn't even exist.
    About half-way through
    Hey, we've seen this one before. It's not Newcomb's Wager. It's Pascal's. The whole thing is about the singular focus on the scenario in which the existence of the Predictor is presumed to be plausible while the existence of either the anti-predictor or the anti-Predictor is presumed to be implausible. It's circular. You assume your conclusion, and arrive at your conclusion effortlessly from your assumption.
    And a bit later
    Of course I would still two-box in the Perfect Predictor case, because I would have no way of knowing that I was in that case, no reason to believe that I was.
    We can _make_ me believe that I'm in that case, just by stipulating it. And in that alternate version, I would be compelled to one-box. But even stipulation cannot enable me to _know_ that I'm in that case.
    If an omnipotent, omniscient, omnibenevolent God exists, and for His own mysterious reasons He wants to convince me that He does not exist, it is more important to submit to His will than to be correct about His existence. So even if God does exist, I'm right not to believe in Him. Of course, a tri-omni God could also annihilate me and replace me with a one-boxer who's otherwise quite similar to me. And if that's what He wants to do, again, I will not oppose Him.

  • @drivoiliev1667
    @drivoiliev1667 3 months ago

    Whilst I understand the arguments for two-boxing, your first argument for one-boxing seals the case for me.
    I think it's even stronger if we modify it in the following manner.
    Imagine that you wait in line with an arbitrarily large number of people, all going into a room with two boxes inside, set up as defined by Newcomb's problem. Furthermore, imagine that neither you nor anybody else is told about the predictor or the distribution of money. In actuality it doesn't matter if you tell them that the boxes contain money at all. You then observe that people who one-box have, on average, significantly larger winnings. Since arbitrarily many people have gone into the room before you, you can make that statement with arbitrary certainty. So, obviously, when it's your turn to enter the room you take one box.
    In this limited information version of the problem it does seem like one-boxing is clearly correct and I don't think that any case for two-boxing could be made. Your decision seems perfectly rational.
    Now let's return to the original problem. It would seem that a two-boxer has to concede that had he been in the situation in which he had less information, he would have one-boxed. But it seems like a contradiction to me that having more information rationally leads to worse outcomes, or at least it clashes with our intuition about information and rational decision-making.

  • @Alice98561
    @Alice98561 6 years ago +1

    You kind of gloss over this point so I'm wondering what the actual problem is with saying that the decision causes the placing of money into the box. This seems to me like the simplest way to resolve the paradox. What's the argument for causes always preceding their effects?

    • @thehairblairbunchjones6209
      @thehairblairbunchjones6209 6 years ago

      Alice Svensson, if you’re interested in the debate around retrocausality, Huw Price’s work is well worth checking out.

    • @Alice98561
      @Alice98561 6 years ago

      Hold on there, how can you say the decision and the contents are independent? They're clearly not. And are you saying that retro-causation only makes sense if there is no more than one possible outcome? Because that seems obviously untrue. What am I missing here?
      Also, thank you for the recommendation, Jones, I'll take a look.

    • @MitBoy_
      @MitBoy_ 6 years ago

      It seems to me that Newcomb's problem is not even a paradox, but just has some built-in contradiction. Perhaps I don't understand something, and I definitely don't know much about logic, but this problem just has to have some sort of contradiction in its premises!

    • @MitBoy_
      @MitBoy_ 6 years ago +1

      It feels to me like the problem implies that both "The choice influences the outcome" and "The choice doesn't influence the outcome" are true, which is a contradiction. And people on different sides just pick one premise or the other.

    • @MitBoy_
      @MitBoy_ 6 years ago

      No way! Just by virtue of having an IF for the person's choice and a THEN for the outcome, there has to be a link! Perhaps it's not a causal one, but some sort of possible link between the two! I can't believe this!

  • @bishopbrennan3558
    @bishopbrennan3558 2 years ago

    I guess I would be a cautious one-boxer. Some of the cases for two-boxing appeal to me, but if I was actually in this situation, I would just tell myself "One-box, one-box, one-box!" and one-box, because I don't have enough confidence in the two-boxing arguments to risk possibly losing a million pounds.

  • @dumbledorelives93
    @dumbledorelives93 2 years ago +1

    I have always tended to view the abilities of the predictor as deriving from a "Maxwell's demon" level of knowledge about the current and past state of the world, and using that to make an incredibly accurate prediction of the future state. Based on what it understands about the arrangements of all the particles in the universe, it already knows whether a friend will try to persuade you to change your choice and already knows the outcome of this to high precision. I don't think we would even need to bring time travel into the equation. With this type of predictor, I still think one boxing is the most appropriate choice, as the predictor has all the relevant information to be a true oracle.

    • @fimdalinguagem
      @fimdalinguagem 1 year ago +1

      isn't that Laplace's demon?

    • @dumbledorelives93
      @dumbledorelives93 1 year ago +1

      @@fimdalinguagem you're right. I always get them mixed up. Maxwell's demon is related to thermodynamics

  • @neoginseng436
    @neoginseng436 2 years ago

    I prefer using statistics over philosophical assumptions when it comes to being a contestant in Newcomb's "Who wants to be a Millionaire" crazy gameshow.
    Newcomb is laughing in his grave because he knew that half of us would overthink it and choose to become empty-handed 2-boxers

  • @jamessmith4172
    @jamessmith4172 11 months ago

    Rewarding irrationality is actually a very good way of putting it. For the life of me I can’t recall the name of this thought experiment, but let’s say you’re a perfectly rational and self interested hitch hiker. You come across a driver who’s willing to give you a ride, but only on the condition that you pay them 100 dollars for the service. Additionally, the driver happens to be a nearly perfect predictor of human behaviour. Since you desperately need to get back home, you would gladly give him the 100 dollars now if you had it on you- but since you forgot your wallet, you’ll have to wait till he drops you off.
    Now, since you’re so rational and self interested, you both know that when he actually drops you off, you won’t give him the money. Therefore, since he knows what you’re going to do, he’s not going to give you the ride.
    You would be REWARDED for being irrational when he drops you off- but the problem is, it’s impossible to commit yourself to that course of action. Because, no matter what, once you get home, you’ll realise “hey, my actions now cannot possibly change the fact that he drove me all the way here. After all, he doesn’t have a Time Machine.”
    It’s the same thing here. Ideally, I could commit myself to irrationality and take only the one box. But if I am a purely rational actor, I will take two no matter what once the time arrives. My actions then cannot change what’s in the box, just as my actions after hitch hiking cannot undo the ride I was already given. The only difference is that you have some ignorance about what’s already happened. Just imagine the boxes are both clear the whole time. You would take both no matter what, yes?

  • @darrellee8194
    @darrellee8194 9 months ago

    One boxers are living in a nonrational world of reverse causation. They seem to believe that they have a choice, but also that that choice is predetermined.

    • @gJonii
      @gJonii 1 month ago

      Also notably, one-boxers are making way more money than two-boxers in Newcomb-like problems.
      What is it about rationality that makes you give up a million dollars?

  • @MrNishiike
    @MrNishiike 2 years ago

    I would two box even in the perfect predictor case because the contents of the boxes are already set, the type of person I am is already set, and now all I can do is accept my own destiny. If it’s a perfect predictor, he would know I’m a two boxer already, there’s nothing I can do about it.

    • @MrNishiike
      @MrNishiike 2 years ago

      I think it’s insane to be a one boxer.

    • @spongbobsquarepants3922
      @spongbobsquarepants3922 2 years ago

      No he would not know you are a two boxer, because you can be a one boxer, and decide to take the one box. The predictor will see this, and put the money in. You will be rich. But if you stubbornly stay a two boxer you will only get a thousand. That is a pretty stupid choice.

    • @MrNishiike
      @MrNishiike 2 years ago

      @@spongbobsquarepants3922 In both the nearly perfect predictor and the perfect predictor cases, the money is already in the box or not. At the point that the box is presented to you, everything is already set. That’s the point of the whole experiment. That’s why I think there’s never a reason to take just one box.

    • @MrNishiike
      @MrNishiike 1 year ago

      @@spongbobsquarepants3922 That's not the thought experiment. The whole point is that the money is either in the box or not from the beginning. If it's a near-exact predictor, that is the scenario, and it's the same with the perfect predictor too. That is one of the essential components of the whole thought experiment. If you change that, the thought experiment loses all significance.

    • @marco_mate5181
      @marco_mate5181 1 year ago

      @@MrNishiike "The money is already in the box or not." And this is based on whether or not you will ACTUALLY choose one or two. So yeah, the only rational choice is to take 1 box. Whatever mental strategy or change of mind you are going to have has also been predicted, directly or indirectly.

  • @Oskar1000
    @Oskar1000 3 years ago

    I disagree that the question is asking me to answer what I'll do at T2. I don't know what I'll do at T2, I can only answer what I think I will do at T2. And the correct answer for what I should want to do at T2 is pick box 1.

  • @Trynottoblink
    @Trynottoblink 6 years ago +1

    46-minute video? Nice, man.

  • @adreaminxy
    @adreaminxy 3 years ago +1

    This is so funny! I'm no philosopher but one boxing seems so completely obvious. Two boxing seems like just a childish attempt to exert yourself against the predictor with no possible justification. Similar to believing in a god or moral realism or something like that.

    • @superstartop6763
      @superstartop6763 10 months ago +1

      You think it's childish to think moral realism is true or to think an intelligent designer exists? Yet, from your comment it seems like you're committed to the existence of "justification". Can you tell me what justification is?

  • @Oskar1000
    @Oskar1000 3 years ago

    At 39:00 you took up what I was going to say as a joke (you faking being a one-boxer). Good one.

  • @cliffordhodge1449
    @cliffordhodge1449 6 years ago

    I don't think the cancer gene paradigm is a good analogy for this reason: My doing philosophy, or refraining therefrom, does not play any causal role respecting a future cancer. My choice of boxes does determine my reward. E.g., my avoidance of philosophy does not save me from cancer, but my box choice does play a causal role in deciding what amount I receive, so much so in fact, that if I choose box A only, as I understand it, I am certain of receiving exactly 1,000 pounds. In the cancer case, the outcome is determined prior to any choice or act of mine; whereas in Newcomb's case, the outcome remains unknowable pending my choice. My lack of knowledge of the prediction does not eliminate me as a causal factor.

    • @Ansatz66
      @Ansatz66 6 years ago

      "If I choose box A only, as I understand it, I am certain of receiving exactly 1,000 pounds."
      Choosing only box A is not an option. We either choose both boxes or only box B. That's why the two camps are called one-boxers and two-boxers.
      "My choice of boxes does determine my reward."
      Your choice only determines whether you get the 1000 in box A. You'll get the content of box B no matter what you decide, and the content of box B is potentially far larger than the 1000.

  • @rath60
    @rath60 1 year ago

    The expected value of 1-boxing is E(1 box) = 1,000,000×P, where P is the probability of the predictor predicting correctly. The expected value of 2-boxing is E(2 box) = 1,000 + 1,000,000×(1−P). The value of P at which your choice is irrelevant is where E(1 box) = E(2 box); for the values stated, that is 50.05%. If the predictor is better than 50.05% (only slightly better than flipping a quarter, for instance by using the philosophy survey distribution to randomly predict your stance), then 1-boxing is the dominant strategy. Case closed: if you know P, the strategy becomes obvious. What's more, as the unknown box holds smaller sums of money, 2-boxing becomes preferable for a greater range of values of P, until at 1k in the opaque box 2-boxing becomes dominant irrespective of P, which makes sense: you get the 1k even if the predictor is perfect. Conversely, as the value in the opaque box tends to infinity, the break-even value of P tends to 50%, meaning that 2-boxing is dominant only so long as the predictor does worse than flipping a fair coin.

    • @warptens5652
      @warptens5652 1 year ago

      E(2 box) = E(1 box) + 1000
      => E(2 box) > E(1 box)
      => 2boxing is dominant
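
Both formulas in this exchange can be run side by side. Below is a minimal Python sketch, assuming the usual payoffs (1,000 in the clear box, 1,000,000 in the opaque box if one-boxing was predicted); the names `evidential_ev`, `causal_ev`, `p`, and `q` are just labels introduced here for illustration. The first function is the calculation in the comment above; the second is the dominance-style calculation in the reply.

```python
A = 1_000       # visible (clear-box) prize
B = 1_000_000   # opaque-box prize, placed only if one-boxing was predicted

def evidential_ev(p):
    """Evidential reading (the parent comment): with predictor accuracy p,
    one-boxers expect p*B and two-boxers expect A + (1-p)*B."""
    return p * B, A + (1 - p) * B

def causal_ev(q):
    """Dominance reading (the reply): the contents are already fixed, with
    probability q that the opaque box was filled; two-boxing adds exactly A."""
    return q * B, q * B + A

# Evidential break-even accuracy: p*B = A + (1-p)*B  =>  p = (A + B) / (2*B)
print((A + B) / (2 * B))       # 0.5005, i.e. 50.05%
print(evidential_ev(0.999))    # (999000.0, 2000.0)    -> one-boxing looks better
print(causal_ev(0.999))        # (999000.0, 1000000.0) -> two-boxing looks better, by A
```

The arithmetic itself is not in dispute; the disagreement in the thread is over which of the two expectations is the right one for the chooser to maximise.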

  • @rath60
    @rath60 1 year ago

    Nearly perfect predictors of human behavior have not existed; they must act on their beliefs. Being a two-boxer is bad for your chances of winning the million, but the one-boxer should still take two boxes, as the predictor would use your preference to decide.
    But in a world with nearly perfect predictors, being a one-boxer is reasonable.

  • @cliffordhodge1449
    @cliffordhodge1449 6 years ago

    I think the 2-box strategy may seem foolish because there is a tendency to see an extremely high accuracy rate as suggesting some sort of psychological causal relation somewhere, so that the problem has a superficial appearance of allowing for the formation of strategy. The idea of forming a strategy based on some sort of hypothesis about what is predicted comes from a belief that somehow or other there is a causal relation between the act of choosing and the act of predicting. If this assumption does obtain, no matter how vaguely perceived, one tends to look at this in somewhat economic terms. So one concludes that I two-box to get the extra 1T, but what I would thereby be doing is significantly diminishing my chances for the 1M just so that I can get an extra 1T. But then I am just trading valuable probability points for a marginal gain of a mere 1T, and that is not a sufficient marginal gain to justify the sacrifice of probability points for the big prize, the 1M. It remains unclear to me that this problem actually amounts to anything more than the evaluation of a garden variety gambling problem based on a game of chance. Therefore, I am not convinced that it makes much sense to speak of strategy formulation. The only thing the chooser knows is that to win he must play; he must raise his probability of winning from the perfect 0 at which it begins, and from that point he can do nothing except play with the ratio of possible gain/loss - he can be safe and choose the A box only, or he can minimize his cost in the lottery by buying only one ticket.

  • @pmyou2
    @pmyou2 6 years ago +1

    The answer is mu, to unask the question. (I think that is the correct word. I hope.)
    The problem seems to devolve into a question of happiness, where you are only happy if you are successful at being maximally greedy. That is irrational. If I am actually convinced by the statement that the predictor is really good (not to worry about 'nearly perfect') then I would one-box and take the $1M. The $1K is not going to dramatically affect my subsequent life. I could be quite pleased with that. If I lose and get nothing, then, sure, the $1K I don't have would have been nice to have, but then so would the $1M that I don't have either. So no worries. I don't have to try to outthink this Great Predictor.
    It isn't like if I were to 1 box and get $1M, leaving $1K on the table, that as a side effect 1,000 babies will be eaten by boa constrictors. The down side of missing out on the extra $1K is pretty trivial all things considered.

    • @ChaiElephant
      @ChaiElephant 6 years ago +2

      Bad answer. Replace money with human lives.