See these videos for an introduction to Bayesian epistemology:
ua-cam.com/video/jvw74s9yu1U/v-deo.html
ua-cam.com/video/ClVIw7_ZzSE/v-deo.html
Also, yes, I'm aware that I might be taking Dutch book arguments too literally here. But the point I'm making can be generalized. The question is: whatever the pragmatic costs of failing to conform my degrees of belief to the probability laws, if I judge that there are benefits of doing this that outweigh these costs, then why (by the subjective Bayesian's own lights) would this be irrational?
Just a note to wish you good luck with the PhD. I'm sure you'll sail through.
10:17 I think the Dutch book argument assumes that you are an expected utility maximizer, so by definition you must be willing to accept both bets, since taking both is no worse in expectation than taking only one or neither of them.
12:02 I think the argument "Dutch books rarely/never happen" sort of misses the point. The idea is that the universe itself is constantly trying to Dutch book you, in the sense that you need to make decisions under uncertainty, and if your beliefs don't follow the probability axioms then you'll end up making decisions that *provably, predictably* leave you worse off in expectation. If P(A) + P(not A) > 1 for some unpleasant event A, you might end up paying costs to mitigate the effects of A while also, simultaneously, making plans that assume A probably won't happen.
14:35 "My incoherent probability assignments might well bring me pleasure" If this is actually true, then the Dutch book argument probably fails, since you may not be able to be a consistent expected utility maximizer if your utility function is "nosy" about how you assign probabilities. Outside of that though, utility functions can be ~arbitrarily weird.
>> and if your beliefs don't follow the probability axioms then you'll end up making decisions that provably, predictably leave you worse off in expectation
From whose point of view would it provably, predictably leave me worse off? I take it that this will be from my own point of view -- this is why Dutch book arguments are compelling, because it will be clear even to the person who assigns the "incoherent" probabilities that the series of bets will leave them worse off. But then if, while assigning credences that violate the probability axioms, I can predict that some decision will leave me worse off, why shouldn't I continue to assign such credences and just not make that decision? Analogously, I might assign p(A)=0.7 and p(~A)=0.7 but then refuse to pay £70 for a £100 prize if A is true and £70 for a £100 prize if A is false. The reason why I would refuse the bet is precisely that I recognize that such a bet will guarantee I lose money. I recognize this even while assigning p(A)=0.7 and p(~A)=0.7.
Maybe there's a dilemma for the Dutch book argument here. Is it the case that, while assigning p(A)=0.7 and p(~A)=0.7, I can recognize that the Dutch book will leave me worse off? If yes, then I can just refuse the Dutch book, and I need not change my credences. Even with my current credences, I can see that the Dutch book guarantees a loss, and that's enough for me to refuse it. If no, then the Dutch book argument does not give me any reason to change my credences. If by my lights the Dutch book does not guarantee a loss, then it's no problem that my credences would lead me to accept a Dutch book.
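For what it's worth, the sure loss this dilemma turns on is easy to compute. Here is a minimal sketch using the £70-for-£100 stakes from the example above:

```python
# Dutch book against the credences p(A) = 0.7 and p(~A) = 0.7:
# pay £70 for a ticket worth £100 if A, and £70 for a ticket worth
# £100 if not-A. Exactly one ticket pays out, whatever happens.
stake = 70
prize = 100

for a_is_true in (True, False):
    payout = prize              # one of the two tickets always wins
    net = payout - 2 * stake    # total outlay is £140 either way
    print(f"A = {a_is_true}: net = £{net}")

# The net is 100 - 140 = -£40 whether or not A is true: a loss in
# every possible outcome, which is what makes the pair a Dutch book.
```

The question in the dilemma is then whether recognizing this guaranteed -£40 requires revising the credences, or merely refusing the pair of bets.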
I think the obvious solution here is to take the line you suggest in your first sentence -- to assign p(A)=0.7 and p(~A)=0.7 just by definition involves being willing to accept the Dutch book. Perhaps the person who is willing to pay £70 for a £100 prize if A is true, or £70 for a £100 prize if A is false, but not both, just doesn't really have any credences with respect to A and ~A. But they do have some sort of attitude to A and to ~A -- call it a "schmedence". Then the question is why we would have to have credences rather than schmedences. Are schmedences irrational? Why?
@@KaneB sorry to jump in with my ‘3-years-out-of-study-barely-recollected-Philosophy of Science Master’s degree’ level of understanding, but my thought is that the ‘schmedences’ you describe would be irrational and still should reveal that you are opening up yourself to Dutch Book arguments.
‘If by my lights the Dutch Book does not guarantee a loss, then it’s no problem that my credences lead me to accept a Dutch book.’
Doesn’t this quote amount to saying that knowledge of the fact that you are in a Dutch Book scenario defeats your desire to act on your credences (despite its having been revealed to you that your credences are nevertheless irrational, since you should, in principle, be happy to accept both bets and lose due to your probability assignments)? Would this not also mean your ‘schmedences’ are irrational, as they require this defeater to avoid irrationally poor outcomes? I’m not saying this (probably) doesn’t describe how this sort of thing may play out in reality with humans rather than subjective probability robots (i.e. that we have schmedences that often have a practical defeater, rather than a Bayesian-rational credence), but I think it still reveals the irrationality of such schmedences even if they are often defeated in this way.
If so, it might be true in some (most) circumstances that schmedences don’t cause someone to behave irrationally (I certainly think I am more conscious of needing to defeat my more irrational probability assignments in real world scenarios where I could make a loss), but do you not think you are given a reason to change your credences for situations where it might not be so obvious that you have the action-defeating knowledge that you are being Dutch Booked?
@@KaneB I've been thinking about whether there are examples where having schmedences p(A)=0.7 and p(~A)=0.7 differs (in your actions) from just having some other credences. Perhaps if after taking one bet, if you refuse further bets on A that would leave you dutch booked, that might practically look identical to just having some "more rational" credences about A after being offered the first bet. I'm not sure if this works out though.
One example I can come up with where credences and schmedences seem to differ is the following. Suppose a bookie flips a coin about whether to offer you a bet on A or a bet on ~A. The bookie can pick the two possible bets in advance such that you are going to accept either one, and so that over the randomness just in the coinflip, you lose money in expectation. (I can give an example with specific numbers if you like.) So you can calculate that this coinflip-then-usual-bet-on-A-or-~A sequence has negative expectation for you, but if you get to decide after the coinflip, you are still going to always take the bet, whichever way the coinflip went. This seems like a less bad property than just being absolutely guaranteed to lose money, but it still seems undesirable.
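Specific numbers for this scheme might look like the following sketch (the stakes here are hypothetical: the agent with p(A) = 0.7 and p(~A) = 0.7 values any £100 ticket at £70 by their own lights, so a £65 price is acceptable whichever ticket is offered):

```python
# The bookie flips a fair coin: heads, they offer "pay £65 for £100
# if A"; tails, they offer "pay £65 for £100 if ~A". The agent's
# schmedences (0.7 on each) make either offer look worth £70, so
# they accept whichever one arrives.
price, prize = 65, 100

def net(bet_on, a_true):
    """Net payoff of buying the ticket 'bet_on' when A's truth is a_true."""
    won = (bet_on == "A") == a_true
    return (prize if won else 0) - price

for a_true in (True, False):
    # Expectation over the fair coinflip alone, holding A's truth fixed.
    expected = 0.5 * net("A", a_true) + 0.5 * net("~A", a_true)
    print(f"A = {a_true}: expected net over the coinflip = £{expected}")
```

Whichever way the coin lands the agent takes the offered bet, yet the expectation over the coinflip alone is (35 - 65)/2 = -£15, and this holds whether A is actually true or false.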
On a somewhat different note, I think it's sort of true that even the best (honest) bayesians would admit that their probability assignment is such that there are almost certainly ways they could be dutch booked, for instance because they are computationally bounded and can't really check that all their beliefs are consistent according to their rules. I think the maxim they are actually following is something like "when you find out about a way you could be dutch booked, try to change your probabilities so that this is no longer the case". When phrased in this way, what bayesians practically do starts to sound somewhat more like what your schmedence guy does.
On a different note, I think one reason to be more worried about dutch books is that if you really had these beliefs, your expected value calculus would tell you that it's actually a super great idea to be constantly seeking dutch book bets on yourself; even though there might not be dutch bookies walking around constantly trying to dutch book you (well, perhaps the universe is, but I agree that there aren't many such people, anyway). Like you should be walking up to a trading firm and trying to get them to bet with you, with your bets being equivalent to just giving them all your money, with your expected value calculus telling you that this is an amazing deal.
On a final different note, one possible stronger way to think of bayesianism is that you actually have a range of possible worlds in mind, with various probabilities. You know everything there is to know about each possible world, you just don't know which one you are in! And whenever you make an observation, the "only thing" that happens is that it rules out your being in any of the possible worlds where you would not have made that observation. (By the way, this picture makes it really simple to understand Bayes' rule. If initially the possible worlds where A is true vs the worlds where A is false are in ratio 1:5, and you make an observation about B which is compatible with half the worlds where A is true and a third of the worlds where A is false, that updates this ratio to (1/2):(5/3), or more simply 3:10.) Now if you believed p(A)=0.7 and p(~A)=0.7, that would actually mean that there is some possible world where you believe both A and ~A are true. But this is a contradiction of the first (stronger) kind you mention in the video!
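The ratio update in that parenthetical can be checked mechanically; a small sketch using exact fractions:

```python
from fractions import Fraction

# Possible-worlds picture of Bayes' rule, with the numbers above:
# prior odds A : not-A are 1 : 5; the observation is compatible with
# 1/2 of the A-worlds and 1/3 of the not-A worlds.
prior_a, prior_not_a = Fraction(1), Fraction(5)

post_a = prior_a * Fraction(1, 2)          # surviving A-worlds: 1/2
post_not_a = prior_not_a * Fraction(1, 3)  # surviving not-A worlds: 5/3

# Clear the denominators: (1/2) : (5/3) = 3 : 10.
scale = 6
print(post_a * scale, ":", post_not_a * scale)  # → 3 : 10
```

Multiplying each side of the odds by the fraction of compatible worlds is all the update amounts to on this picture.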
To clarify, I don't think I have some very coherent point I'm trying to make here, just a bunch of small points that seem interesting.
Good Luck with the Viva - I’m sure you’ll do amazingly. You’re so well informed and present issues so clearly. You deserve a teaching job somewhere because you’d be a teacher students would thoroughly enjoy
Thanks, I appreciate it!
Bayesianism saves analytic philosophy in the same way that a parachute saves a person when it opens after he has hit the ground.
I wish you the best of luck on defending your PhD thesis!
As someone who's doing their PhD on bayesian statistics (among other things), it's really cool to hear your thoughts on this!
What if I hate the person making a Bayesian argument so much that I dont care that in rejecting their argument Dutch books can be cooked against me?
Btw good luck with your PhD
All these Bayesians admonishing me about how ignoring the probability laws is irrational don't seem to understand that I'd rather be Dutch booked than Dutch uncled.
Have you heard of Deductive dependence (aka contra-probability)?
This hair style is fascinating
Kane, you’ve been uploading a lot. Don’t be this hard on yourself, since all you owe us should only be instrumental to what you owe yourself.
I appreciate it! Don't worry, I'm not beating myself up about anything. It's just that I enjoy uploading videos, and I also wanted to keep viewers updated on where I am with things.
Good luck dawg!!
Good luck!
I think a part-answer is that if you assign probabilities incoherently and so does someone else then there's an opportunity for someone to come in and dutch book across the two of you. (I would have called this person 'an arbitrageur' but I think that's more similar to hedging.) That may not always cause a loss for you but it would mean that coherent probabilities are rewarded. This argument might need more work due to risk aversion. I know Taleb mentions a story about someone he worked with who thought that the market was going one way but they should make a trade as though it was going the other. I think that was a case where it was better to avoid risk (losing money) than to seek reward (gaining money).
Another point is that if you're okay with accepting the bets individually then you would have to take an inventory of all the bets you've ever taken. There is then a dutch book argument across time which Taleb basically covers in a couple papers (I'll have to find them later). His argument is that there's only so much volatility you can have in your probability assignments if you assume they have to be tradable.
I have to go but I believe the papers are 'Election predictions as martingales: An arbitrage approach' and 'All Roads Lead to Quantitative Finance'
TLDR: The only thing I would say if P(a)+P(~a)>1 is that you are not playing the game. Definitionally, for any event x in the sample space, P(x)+P(~x)=1; that is what it means to have a probability.
Given events E in a sample space Ω, the axioms are:
1. P(E) ≥ 0
2. P(Ω) = 1
3. For pairwise disjoint events E1, E2, ...: P(E1 ∪ E2 ∪ ...) = P(E1) + P(E2) + ...
Taking axiom 1: if a probability is a probability, then P(a) ≥ 0 and P(~a) ≥ 0.
Taking axiom 3: since a and ~a are disjoint, P(a)+P(~a) = P(a ∪ ~a) = P(a ∪ (Ω\a)) = P(Ω).
Taking axiom 2: P(Ω) = 1, so P(a)+P(~a) = 1.
Therefore P(a)+P(~a) cannot be greater than 1 (or less than it).
To say otherwise is to assert a contradiction.
Good luck with your defense Kane! You know, a defense of a PhD thesis is more of a pro forma exercise than anything. If your thesis is good (and I have lots of reasons to believe it is) you don't have to worry. I say this because it happened to me: my defense was terrible, I was incredibly nervous and couldn't articulate even some very simple ideas. But the thesis was good, so I ended up with the highest mark nonetheless. You'll do great!!
Thanks! Yeah, I've been told that the verbal defense has little influence on the final decision, and my supervisors are both confident that the thesis is good. It's still kinda terrifying though!
good luck with your thesis defense!
I just found out about this channel because of the video you made about emotivism (I'm sure you did it several years ago). Watching your videos is helping me understand more than one concept. I mean, I'm new to philosophy. Thank you for your work!
P.S. I'm a little curious if you're still a supporter of emotivism or have changed your metaethical stance along the way.
Every lottery is a Dutch book in the sense that if you bought all the tickets you would be guaranteed to lose. It’s also not clear that people who buy lottery tickets estimate the probability of winning in a way consistent with the probability calculus. Assuming that each ticket is equally likely to be drawn, they will probably vastly overestimate the probability of their ticket winning. However, I suspect that they will have no opinion about the probability of any other particular ticket winning.
I once knew someone who bought five lottery tickets a week and clearly enjoyed the excitement of watching the numbers being drawn each week.
Best of luck in the defence, (almost) Dr Baker 👨⚕️ 👨🍳
Good luck!
is it worse to be Dutch booked or Dutch oven'd?
If someone believes that whenever they flip a coin, there is a 70% chance of heads and there is a 70% chance of tails; then they can flip 10000 coins and see that they are wrong.
In the case that the person is a contrarian, or otherwise gets some additional benefit from being wrong, then that additional benefit must be included in their rationality. To not include all benefits in a decision would be irrational. But in these hypothetical problems, we usually hold these additional conditions ceteris paribus.
Anyway, best wishes for your adventure next week. I'm sure you will do a great job. If they reject your dissertation, it's because they are ninnies and not because you don't know your shit. Clearly, you know philosophy very well. And in the case that they are ninnies, it's not the end - you can revise your paper and go again. Regardless, we are all in your corner! Go man go!!
Thanks for the support!
>> If someone believes that whenever they flip a coin, there is a 70% chance of heads and there is a 70% chance of tails; then they can flip 10000 coins and see that they are wrong.
Bear in mind I'm only talking to subjective Bayesians here. They don't think there is such a thing as the "real" or "objective" probability of landing heads. If I assign 70% probability to landing heads, but then I flip the coin 10,000 times and it lands tails every time, my initial probability assignment was not incorrect at that time (though it would be irrational for me not to update that probability as the evidence of the coin flips comes in). Take your example but with two people: Verity assigns 70% to H, Sydney assigns 70% to tails. For a subjective Bayesian, it makes no sense to ask which of these is right. All we can say is that, if they are rational, Verity will assign 30% to T and Sydney 30% to H.
@@KaneB okay, so even in the case where the coin came up tails 10000 times, the person could see that they had a surplus of tails of 3000 and a deficit of heads of 7000 - for a net deficit of 4000. No matter what the outcome is, the expected counts sum to 14000 flips while only 10000 occurred, so there is always a total deficit of 4000. There is no possibility of getting 70% heads and 70% tails. Thus, their subjective belief is something that is objectively impossible - hence it is irrational.
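Restating that arithmetic as a minimal sketch (assuming the 70%/70% assignment and the 10,000 flips from the example):

```python
# If every flip had a 70% chance of heads AND a 70% chance of tails,
# the expected counts over 10,000 flips would be:
flips = 10_000
expected_heads = flips * 7 // 10   # 7000
expected_tails = flips * 7 // 10   # 7000

# But heads and tails must sum to the number of flips, so the
# assignment over-books the sample space by:
excess = expected_heads + expected_tails - flips
print(excess)  # → 4000
```

No sequence of outcomes can deliver both expected counts at once, which is the sense in which the assignment is impossible in the long run.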
I wonder what it would mean to say someone assigns a .7 subjective probability to A but is not willing to take a .7 to 1 bet on A...
my understanding would rather be that willingness to take a .7 to 1 bet is precisely what it means to have a .7-strength belief in something. it nicely matches ordinary life situations where one, confronted with someone expressing divergent beliefs about an issue, reacts by saying "wanna bet?" that reaction is precisely aimed at sorting merely verbal doubts from doubts expressing actual beliefs.
I'm inclined to think someone claiming they believe something without being prepared to act upon those beliefs is (at least) probably talking about a form of belief different from the one described by subjective probabilities
(or maybe they are just lying)
also: the Dutch book bet may also be seen as a mathematical model of everyday risk-assessment-based behaviour. most of our actions can be understood as bets, where one wagers the effort involved against the expected reward.
in that sense, even though we rarely face an explicit bet, one may claim we constantly deal with them
Best of luck - I bet £70 you smash it :)
Hi, you're cute. Greetings from Poland, from Lena
Thanks. I agree, and I'm not sure why more people don't notice this obvious fact. Lol.
Shedule!? 😜
Cool