I'm studying Rawls (in no great depth) at sixth form, and I've wanted for a while a more thorough but clearly explained look at his ideas, without excessive jargon. The algorithm works in mysterious ways. Thank you!
Oooh. Jeffrey Kaplan has competition!!
Great work 🎉🎉🎉
My impression before watching: the original position behind the veil of ignorance is a useful way of thinking about justice, but the absolute maximin conclusion commonly associated with Rawls is not a reasonable one. If a reasonable mind contemplates various possible social orders from the original position, they'll prefer a society where things are worse in some completely trivial way in the one-in-a-billion case where they end up worst off, paired with massive gains the other 99.99-etc. percent of the time, over the reverse. For that matter, even if they know they're going to be worst off, any reasonable person would accept an absolutely trivial additional bit of suffering in order to spare others from similar (but trivially lesser) suffering. People aren't pure selfless utilitarians, or anywhere close. But neither are they pure selfish maximiners. The point is that the original position is a useful way of framing the question.
I don't share Rawls's particular set of premises, though, so I don't feel bound by his particular conclusions.
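To make that trade-off concrete, here's a minimal sketch with made-up numbers (the payoffs and the one-in-a-billion probability are purely illustrative assumptions, not anything from Rawls): strict maximin ranks societies by their worst-off position alone, while an expected-payoff view weighs the trivial downside against the large gain in every other position.

```python
# Illustrative only: two hypothetical social orders viewed from behind the veil.
# Society A is the cautious, maximin-friendly one; society B makes the worst-off
# slot trivially worse but everyone else enormously better off.
p_worst = 1e-9  # assumed chance of landing in the worst-off position

society_a = {"worst_off": 100.0, "everyone_else": 101.0}
society_b = {"worst_off": 99.0,  "everyone_else": 1000.0}

def maximin(society):
    """Rank a society purely by its worst-off position (Rawls's criterion)."""
    return min(society.values())

def expected_payoff(society, p):
    """Rank a society by the expected payoff of a randomly assigned position."""
    return p * society["worst_off"] + (1 - p) * society["everyone_else"]

print(maximin(society_a), maximin(society_b))          # 100.0 vs 99.0   -> A wins on maximin
print(expected_payoff(society_a, p_worst),
      expected_payoff(society_b, p_worst))             # ~101  vs ~1000  -> B wins on expectation
```

The two rankings come apart arbitrarily far once the bad outcome is both mild and astronomically unlikely, which is all the point above needs.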
Where can I find more sources on feminist critiques of how Rawls's political theory applies within the household?
Note that an essential part of the original position is that each person rationally determines the rules of the game so as to serve their own self-interest. This is essential because the argument behind the original position is that it reveals what is just/fair precisely because it's what rational, self-interested people would agree to when they don't know which person they will end up being in society. If people were allowed to not be self-interested in the original position, that would undermine the strength of the conclusions derived from it.
So people can't be self-sacrificing in the original position, but you are right that there is another criticism nearby: maybe it would be rational, purely out of self-interest, to bet on a riskier arrangement when the chance of being worst off is small but the payoff of landing in one of the better positions is very large. In a society that doesn't apply the difference principle, there is a small chance you'll end up in the worst-off position, but a large chance you'll end up in a much better one.
Rawls argues that rational people would be risk-averse in the original position, but you could reject that. The reason he gives is that the stakes involve fundamental rights and opportunities, which he claims are too significant to gamble with (but again, surely some risks carry a very small cost and a high reward, and rational people would be willing to take them).
But anyway, there is an interesting paper that responds to the risk-aversion concerns: google "RAWLS AND RISK AVERSION DREW SCHROEDER" and you will find the PDF.
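A rough way to see how much risk aversion the maximin argument needs: in the toy sketch below (my own numbers and utility function, nothing from Rawls or from Schroeder's paper), risk aversion is modelled as curvature in a CRRA utility function, and the question is how curved it has to be before the cautious society beats the riskier one in expected utility.

```python
import math

# Toy numbers: (payoff if worst off, payoff otherwise) for a cautious society in the
# spirit of the difference principle, and a riskier one that doesn't apply it.
p_worst = 0.05
cautious = (100.0, 110.0)
risky = (50.0, 1000.0)

def crra(rho):
    """CRRA utility: rho = 0 is risk-neutral; larger rho means more risk-averse."""
    if rho == 1:
        return math.log
    return lambda x: (x ** (1 - rho) - 1) / (1 - rho)

def expected_utility(society, p, u):
    worst, rest = society
    return p * u(worst) + (1 - p) * u(rest)

for rho in (0.0, 1.0, 3.0, 5.0):
    u = crra(rho)
    pick = "risky" if expected_utility(risky, p_worst, u) > expected_utility(cautious, p_worst, u) else "cautious"
    print(f"rho = {rho}: prefers the {pick} society")
# With these particular numbers, only the strongly risk-averse agent (rho = 5 here)
# ends up siding with the cautious, difference-principle-style option.
```

CRRA is just a convenient one-parameter way to dial risk aversion up and down; any concave utility family would make the same point, namely that the conclusion hinges on how much risk aversion the parties are assumed to have.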
@@lolroflmaoization Thank you for the paper.
The relevant criterion of rationality that I remember from decision theory class is that a rational agent won't respond to a set of options with unknown outcomes in a way that amounts to letting someone make book against them. That doesn't assume anything about whether they're betting under conditions of known risk, or under conditions of uncertainty. The conclusion (based on some other criteria of rationality in addition to that one) was that in order to be a rational agent as so defined, one must have some set of guesses about the probabilities of all the unknown outcomes, regardless of whether the available information provides any basis for assigning objective probabilities or not. If someone satisfies a set of reasonable-sounding criteria of rationality, they act as though it's all risk, and risk-aversion is completely subsumed into diminishing marginal utility. Unfortunately, I don't remember what the rest of the criteria were.
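For anyone who hasn't met the "make book against them" idea, here is a toy illustration (my own example, not anything from that class or from the Schroeder paper): if an agent's betting prices for an event and its complement don't sum to 1, a bookie can sell them both bets and lock in a profit however things turn out.

```python
# Dutch-book toy: the agent will pay `price_event` for a ticket paying 1 if the event
# happens, and `price_complement` for a ticket paying 1 if it doesn't. Exactly one
# of the two tickets pays out.
def bookie_profit(price_event, price_complement, payout=1.0):
    """Guaranteed profit from selling the agent both tickets, whatever happens."""
    collected = price_event + price_complement  # the agent buys both tickets
    return collected - payout                   # exactly one ticket must be paid out

print(bookie_profit(0.6, 0.6))  # incoherent prices (sum 1.2): 0.2 profit, guaranteed
print(bookie_profit(0.6, 0.4))  # coherent prices (sum 1.0): no guaranteed profit
```

An agent who refuses every such sure loss ends up acting as if they assign a single probability to each unknown, which is the "act as though it's all risk" point above.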
It's also worth noting that (as far as I remember) the decision theory class didn't impose any restrictions about selfishness. People prefer whatever they prefer: they can be benevolent, or purely selfish, or even malevolent (willing to accept some harm to themselves in order to inflict sufficient harm on people they hate), without violating the criteria of rationality that were used. There was just a set of outcomes and a set of preferences over them.