Another thing I think is significant for our intuitions is how they evolved.
Our intuitions evolved to provide the best chance of survival as a species. This often involves strong intuitions to protect existing members of society, for obvious reasons, as well as strong intuitions to increase the happiness of existing people. We did not, however, need ethical intuitions that having children improved the world. Our sexual desires meant that people would have as many children as was possible regardless, and specific intuitions regarding the value of life were unnecessary.
This is why we feel that there is an imbalance between refusing to have a child if their life will be one of suffering and murdering people who happen to live lives of suffering. Our intuitions strongly oppose harming a living person, regardless of their happiness, but we don’t have a coherent intuition on the value of life.
In the case of the RC, we don’t have any intuition of the vast utility caused by such a large population simply existing. We only have intuitions about the suffering of the people in the society. In a rational ethical theory, we can reject these contradictory and irrational intuitions.
I think the actual problem here is that the "minimally positive" life is not well defined: if we zoom in a bit I think the problem is solvable. How mediocre exactly are we envisioning here? Even a fairly meager existence, when you zoom in, can still be a rich and interesting experience for the person actually having it, even if it also contains hardship and not much luxury. If you degrade that experience further, to the point where it no longer has those positive qualities, then we should attribute to it zero or negative value; then increasing the population can no longer be used as a trick to equal any arbitrarily happy population. So either A) the "minimally positive" life is alright and we can safely accept the Repugnant Conclusion, or B) the "minimally positive" life actually super sucks and we haven't assigned it a low enough value, so the Repugnant Conclusion should never have been arrived at in the first place.
In short, if your version of utilitarianism judges 10 people who are doing fine to be of less value than 1 person living in untold luxury, that doesn't show that utilitarianism is wrong, it just shows that you haven't properly calibrated your outcomes.
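The commenter's calibration point can be sketched with toy numbers (all invented here) under a plain total-utility rule:

```python
# Sketch of the calibration argument above (illustrative numbers only).
# Under total utilitarianism, a world's value = population * welfare per person.

def total_value(population, welfare):
    return population * welfare

luxury_world = total_value(1, 100)   # 1 person living in untold luxury
fine_world = total_value(10, 15)     # 10 people doing fine

assert fine_world > luxury_world     # 150 > 100: the "repugnant" trade

# If the "minimally positive" life is recalibrated to zero (or negative)
# welfare, no amount of population growth can beat the luxury world:
barely_living = total_value(10**12, 0)
assert barely_living <= luxury_world  # 0 <= 100: the trick no longer works
```

On this framing the dispute is entirely about which welfare number the marginal life deserves, not about the summing rule itself.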
You explained this so well!
Haha I love how you say Rembrandt produced lesser art than Pollock and the like.
An argument for eugenics, abortion, population control and euthanasia is that the average happiness and welfare of the population will be higher. This is in contrast to the repugnant conclusion, which values having more individuals with lower ability and happiness. Having a higher average seems better than preserving all life regardless of quality.
Well, you can make arguments for these (eugenics, abortion, population control and euthanasia) without appealing to average happiness. In fact I'm for all of these (well, it's more complicated with eugenics and population control) without ever considering whether they would increase average happiness...
Well then you just arrive at a different repugnant conclusion, namely that we should kill all people except the most happy, thus increasing the average happiness of the population. More generally, under average-welfare utilitarianism, there are situations in which a society that is a Pareto disimprovement is considered a better society, which IMO is absurd.
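One standard way to generate such a case, sketched with invented numbers: adding a new person whose life is worth living harms no one, yet it drags the average down, so averagism ranks the smaller world above the larger one.

```python
# Invented numbers illustrating the worry about average-welfare utilitarianism:
# world B is world A plus one extra person whose life is worth living
# (welfare 5), yet B's average is lower than A's.

def average_welfare(welfares):
    return sum(welfares) / len(welfares)

world_a = [10, 10, 10]      # three people, all at welfare 10
world_b = world_a + [5]     # same three people, plus one happy newcomer

assert average_welfare(world_a) == 10.0
assert average_welfare(world_b) == 8.75
# Averagism ranks A above B, even though no one in B is worse off than in A.
```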
I suspect there's a number of equally worthy worlds, like budgets that bring an equal amount of utility on a production possibility curve. A world with very few people (or even 1 utility monster) that are very happy, a world with trillions of people whose lives are just worth living, or a world anywhere in between on a straight line, will be equally good. Maybe some external factor could determine where on that curve we should be. That the extremes of a curve of equally valuable worlds seem extreme doesn't really say much. Of course the result is counter-intuitive, because you're thinking about an extreme situation; it'd be surprising if our intuitions worked when dealing with things they didn't evolve for!
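As a sketch (numbers arbitrary), the curve of equally worthy worlds under a plain total-utility rule is just the set of (population, per-person happiness) pairs with the same product:

```python
# Points on a curve of equally good worlds, assuming value is simply
# population * per-person happiness. All figures are made up.

TOTAL_UTILITY = 1_000_000

worlds = [
    (1, 1_000_000),      # one extremely happy utility monster
    (1_000, 1_000),      # mid-sized, moderately happy population
    (1_000_000, 1),      # huge population, lives barely worth living
]

for population, happiness in worlds:
    assert population * happiness == TOTAL_UTILITY  # all equal on totals
```

On totals alone nothing privileges any point on this curve, which is the commenter's suggestion that some external factor would have to pick a spot on it.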
7:50 a blanket opposition to any and all inequality would seemingly lead to even more extreme problems than the repugnant conclusion, as I doubt it'd be possible to abolish inequality if more than 1 person existed.
15:00 the role of intuition depends on what you think a moral theory is for. I thought moral theories existed to try and find objective values, things we "should" believe no matter what values evolution and society have actually given us. These could bear any relation to the values we actually have (our "intuitions"); they could even say everything we feel is good is bad and vice versa. If this is the case, then a theory having counter-intuitive conclusions is never a problem. Your description in the video on egoism of the role of moral theories (to make a coherent framework from our intuitions) clearly relies on intuition, so from that perspective a moral theory needs to be intuitive.
@Oners82 - I decided it wasn't worth arguing over. "Moral philosophy" has multiple purposes. If the purpose is just to organize our intuitions into something workable, then basing it on intuition is a must. But if you want to judge whether your intuitions are actually valid, or to decide between different intuitions, then it can't be based on them.
10:49 - oh no no no no Rothko over Vermeer? You can keep that world for yourself!
The thing with utilitarian deliberations about happiness is that millions of people tend to flock together into nations, or tribes, or neo-tribes in our day and age. People identify themselves with some overarching concepts, i.e., they tend to resonate on cultural barycenters. Not everyone in the same way, people still are individuals, but the tendency cannot be ignored from the perspective of what defines people, or, how they seem to like to be identified.
The perception of happiness is also quite affected by cultural backgrounds, which are based on continuously maintained cultural traditions and deviations from traditions, evolving into new views of what "happiness" may mean to the individuals that feel bound by those concepts.
So the mere agglomeration of billions of people's sense of happiness as discrete objects does not reflect the actual perception of "happiness" very well. Most people do not like to dwell as isolated individuals on a lonely planet, but tend to get associated through social binding.
Formally, I would object to the option of adding all individual cases of "happiness" into one global total, or a distribution among nations, because the definition of happiness is not something that can be projected onto a scalar variable. People experience a multitude of sensations, ideas and deliberations that define their perception of happiness. It is a multi-dimensional vector, which would need to be added according to all of these dimensions in an orthogonal way, thereby also agreeing on the validity of the assumed coordinates. Simply adding all of the millions and billions of "happiness" values along a scalar is not going to cut it.
That is also the reason why inequality can be a disruptive property of any society. Some coordinates in the multi-dimensional happiness-vector of one part of society could be pointing in a different direction than those of another part, and yet the sum over all dimensions, once added according to all of the coordinates, might be larger than in another case where fewer people populate the same society, thereby creating smaller vectors of happiness.
This theoretical thought experiment about utilitarian conglomerate happiness is trickier than it may seem at first glance. And I'm not even a philosopher.
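The vector-versus-scalar point above can be sketched concretely; the dimensions below are invented for illustration, not a claim about what happiness actually decomposes into.

```python
# Treating each person's happiness as a multi-dimensional vector rather
# than a single scalar. The three dimensions are hypothetical examples.

people = [
    {"security": 7, "belonging": 9, "autonomy": 3},
    {"security": 4, "belonging": 2, "autonomy": 9},
]

# Component-wise ("orthogonal") aggregation keeps the dimensions separate:
dimensions = people[0].keys()
vector_total = {d: sum(p[d] for p in people) for d in dimensions}
# -> {"security": 11, "belonging": 11, "autonomy": 12}

# Collapsing each person to a scalar first discards that structure:
scalar_total = sum(sum(p.values()) for p in people)  # -> 34
```

The scalar total hides the fact that the two people's vectors point in quite different directions, which is exactly the information the commenter says matters for inequality.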
I think the problem is solved by simply realizing that the worth of a world is actually only defined in terms of the worst experience in that world. So a world (A) in which there are just 100 people having 100 pleasure for all of their existence is a better world than one (B) in which there are 100 trillion people having 100 trillion pleasure for all of their lives, but there is also one person experiencing just 99 pleasure. A is better simply because in B one person has 99 and not at least 100. And yes, a world in which one person suffers 100 trillion pain is worse than a world in which 100 trillion people suffer 99 trillion pain. This is because the evaluation of a world, from the pov of someone who is evaluating, only makes sense if the person determines the worth of the world on the basis of individual experiences, since each experience is separate and not part of a unified world. Thus even asking which world is better makes no sense in aggregate terms; "better" and "worse" here only refer to individual experiences, so if we want to make sense of any notion of "worth of a world" it must depend on individual experience.
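This rule is essentially maximin, and it can be sketched directly; the populations below are tiny stand-ins for the comment's 100 people and 100 trillion people.

```python
# A minimal sketch of the rule above (maximin): a world's worth is the
# welfare of its worst experience. Small stand-in populations, not the
# full counts from the comment.

def world_value(welfares):
    return min(welfares)  # worth of a world = its worst experience

world_a = [100, 100, 100]        # everyone at 100 pleasure
world_b = [10**14, 10**14, 99]   # enormous pleasures, but one person at 99

assert world_value(world_a) == 100
assert world_value(world_b) == 99
assert world_value(world_a) > world_value(world_b)  # A beats B under maximin
```

Note how extreme the rule is: no amount of extra bliss in B can compensate for the single person at 99.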
Isn't the reason that we want the human race to continue for as long as possible that we want/assume that, if "all goes to plan", over the long term the world will in many ways get better and better? The lives of people will improve as time goes on and we will live more and more worthwhile lives as enabled by technology and better societal structures. If it emerged that the world would continue for a million years, but for some bizarre reason progress in many ways stopped, and life continued pretty much as it is in 2016, a lot of us would be very dismayed and disappointed, and we might not want the world to go on for as long as possible any more.
One thought: what if we say that the life and happiness of already existent people are more important than those of hypothetical people?
We need a basis to accept that. You can’t just say that, you need to back it up
Yeah that's definitely a line of thought taken by some philosophers. You can definitely argue that the repugnant conclusion is true for future generations but it's still moral to prioritise the welfare of existing people more.
This could become a practical problem when we start colonizing the galaxy.
"All we've done here is create more happiness." No, you've done more than you think: You've also added suffering to a world that was absent of it before. Thus A+ is not "just as good as A." A more basic problem here is the idea that the concept of "total happiness" has meaning. It doesn't. There's nothing in the universe to care about "total happiness." The only thing that matters is the individual happiness of those in existence, whether it's 1 person or 1 billion. An analogy is U.S. citizens saying they hate Congress overall, but they like their own Congressman. The former is meaningless, whether there are 10 or 10,000 Congressmen; the only thing that matters is what each individual voter thinks about his Congressman. Bottom line: Stopping population growth, or even reversing it, would not necessarily be a bad thing, since the only 100% effective way to prevent suffering is to stop reproducing. Otherwise, life is a crap shoot. To give birth is to roll the dice on behalf of someone else without his consent.
But there isn’t suffering. The extra lives in A+ aren’t bad, they’re just not as good
...leading towards a person-affecting view of ethics
Thanks for this, though I think Huemer's argument isn't as trivial as you suggest. If the number of worthwhile lives in the past is irrelevant (thereby negating their "worthwhile" categorization in a modern context), then I would argue there will always exist a time in the future, after everyone living in the present has died, when the number of worthwhile lives would fall to zero from that futuristic point of view on our present. This would imply that we are misled in thinking our lives have any value, or that our lives only have value while we are living them, and lack any sort of objective value in the context of past or future moral theories.
Just because our lives won't have value from the perspective of future generations, this doesn't mean our lives don't have value to us right now.
Yes
I agree with the idea that average happiness is more important than total happiness. Nobody can enjoy the lives of a billion slightly-happy people at once. I'm not more happy living in a world with seven billion people than I would have been in a world with seven million, and I'd be no more happy in a world with seven trillion people, if the world was able to support that many people as well as it does its current population.
I also don't agree completely with your analysis of the world with one person living in constant agony vs the same but many people living in constant agony plus ten seconds of pleasure. If asked which you would rather live in, you would probably say the second, although neither would be desirable. But if you continually add to the population people living progressively better and better lives, you could have a few billion people living in relative pleasure, a few million living in constant pain, and several billion others somewhere in between; that isn't far off from the world we live in now.
Considering Jackson Pollock one of the best artists ever just undermined your credibility... severely...
All this quibbling over the question like lawyers just proves this is all bogus. Good philosophy isn't like this. This isn't the love of wisdom.