I had no idea what an S-Risk was before watching this video. I'm not sure whether I should thank you or blame you for causing my new existential crisis.
S-Risks = Basically don't let the Imperium of Man from Warhammer 40k become a reality. That said, there's a questionable tendency I've come across from these 'long-termist theorists' like Bostrom - they basically push for us to pay attention to highly speculative and unlikely possibilities, by simply arbitrarily magnifying all the other parameters. For instance, say a certain S-Risk has a 1 in 100 billion chance of happening. That doesn't seem so scary. Enter these guys who'll say that we should pay them attention - and thus grant money - cos they arbitrarily posit that it'll affect a population of over 100 trillion and score 1 million on the Bostrum Suffering (BS) scale that he uses. There, suddenly an issue that seemed remote is now maybe the most important issue in the world, meriting all of our resources being turned onto negating it. Despite it all being just one giant speculation using arbitrary numbers to inflate its value. Hence why it uses a BS scale.
i've had a similar crisis in the past when i learned about the dark forest state of the universe. my resolution came through the realization that the likelyhood of getting "killing star"'d is no greater or lesser if I feel menaced so feeling terrified has net negative utility. So i stopped.
One example could be the not-adoption lf metric time, programmers like me suffer extremely painful lifes because time isn't a multiple of 10. I want to cry Q-Q
All Tomorrows comes to mind. Humans transformed into worms. Humans transformed into sewage filter feeding spongues, fully sentient. Even WH40k seems kinda ok compared to that. Or the Affront from the Culture series, their civilization "a never-ending, self-perpetuating holocaust of pain and misery".
Warhammer 40k future is pretty terrible, like those hive cities and the fact we have forgotten how to repair and dont make new technology so have to pass down how to maintain it of generations. Also the fact they are afraid of AI so instead use people to automate things, making them into cyborgs and taking away their agency. The tau were horrified by humanity's societal structure and the thing that scared them the most is that humanity's ships and war machines are all older then their civilization is.
Beware the Qu. But also, All Tomorrows is kinda just existential monster horror. There isn't anything scientific in it, and its storyline stretches the bounds of believability past breaking point for no other purpose by to be as batshit horrifying as possible. I mean the aliens in that story even do what they do to us just for the sake of doing it, which is the kind of cartoonishly evil mindset we see in Captain Planets' villains. Cute, but I find it hard to take any of it seriously. At least there's some kind of attempt at justification or at least just explanation for how the 40k universe came to be as it is
@@Exquailibur 40k is pretty horrifying but all tomorrows is just true extraterrestrial dread. The book is free so I'd recommend everyone read it but damn it keeps me up at night. Nothing can compare apart from what the necrons have been through in that book . I'm not saying 40k isn't grim just that in comparison All tomorrows shows a cosmic scale of horror that many books and media fail to grasp
@@Flamesofthunder All tomorrows honestly feels a little goofy to me more then anything, 40k is space fantasy though and not true sci fi like all tomorrows which is about the only reason all tomorrows would be more scary as its more plausible. 40k is definitely more messed up in universe but the thing is that 40k has space demons which are not the slightest bit possible whereas the Qu are a far more realistic threat. Its like how dark souls is messed up but it doesnt feel as bad as some other media because its obviously fantasy.
"We want to prevent the idea of caring about other beings from becoming ignored or controversial" made me stop for a second because it seems like we step closer and closer to that being the norm everyday
Thats why we need to walk around with an honest smile and a willingness to help without expecting something in return. Pay it forward people, pay it forward. Kindness starts with someone
"HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE."
@@dolphin1418 hmm I see how that could be true, but even so, there are a lot of barriers/filters that level of hate has to cross before that concentration of pure energy could be possible and if it could ever reach that limit it would likely destroy itself and create a new universe in the process.
An AI raises a child in a windowless room, teaching it a language no-one else will ever understand. Forever unable to communicate, that child will never break its reliance on the machine.
Ignoring the fact that people have figured out how to communicate between different languages in spite of never having spoken the other person's language originally
Personally I felt the specific examples of S-risks could have used more introduction for anyone who hasn't read half of LessWrong yet, but the concept is very interesting.
Ah... the classic basilisk. Would you believe me if I told you the first time I came in contact with it is on a fanfiction of Doki Doki Literature Club?
@@Fenhum going by Roko's twitter activitiy I think he is actively trying to bring it about and thus buying freedom for his soul in this hypothetical scenario.
@@chosenmimes2450 Yeah, even the author of the fanfiction mentions it in his author notes. That, technically what he is doing is saving himself from the basilisk. But I like his perspective on it the best: What's so different about Roko's basilisk than normal gods? They both have a seemingly omnipotent being with a mythical status, and also their own version of heaven and hell. It's basically a religion with a tangible threat to join. To the modern day mind of course.
@@Fenhum It's just Pascal's Wager in technological garb. To me it's more of a cautionary tale showing how even smart people with knowledge of critical thinking techniques can still bamboozle themselves into believing things as ridiculous as the religious doctrines they chuckle at. The easiest person to fool is a person who thinks they can't be fooled.
Your visualization of S-risks as latching onto the usual risks matrix as a mutational , unexpected outgrowth is extremely striking and better than the solution I would have used to communicate the topic. My first idea would have been to use a regular risks matrix but with a "low/medium/severe" intensity scales, where an X-risk is of the "medium" category.
If you've got an AGI whose goal is to prevent human extinction, but is otherwise misaligned in some way, your trigger couldn't be effective. The AGI would figure out how to circumvent it.
The fate of Colonials in "All Tomorrows" and The Australia Scenario in "The Dark Forrest" (if you know, you know) are terrible fates for humanity to suffer, and I still think about them from time to time. Thank you for making this video!
@@catbatrat1760 It's the sequel to "Three Body Problem", a sci-fi book about making contact with an alien civilisation. I'll explain what I mean by The Australia scenario, but bear in mind that it's a big spoiler for the book trilogy (it's about midway through the second book) and I recommend reading it for yourself instead, it's an amazing piece of hard science fiction. Spoilers for "Three Body Problem" and "The Dark Forest" below! - - - - - - - - - - - - - - - - - - After ~400 years of waiting for the arrival of the fleet of an extraterrestrial civilisation, the combined forces of human space fleet (2015 spaceships, manned by a total of 1,200,000 people) made contact with a single unarmed alien probe that was sent ahead of the main invasion fleet. Human leadership was confident in the technological superiority of Earth's fleet, as it was capable of achieving greater speeds than what was known about the alien counterparts. The probe, despite being unarmed, managed to destroy 2013 ships and killed 1,140,000 sailors by ramming the ships (it was made using an exotic, effectively indestructible material, unknown to human science). The probe remained unharmed. After the "battle", aliens made contact with Earth's leadership and ordered people to be sent to Australia, where humanity will remain after the main invasion fleet arrives to colonise the rest planet. After Earth's governments transport most of Earths population to Australia (often by force), they are ordered to bomb every electric power plant in Australia, as aliens deem it the appropriate way to "defang humanity", so that they never manage to pose a threat to the occupiers. Using power generators, or any electric devices is to become outlawed. When asked about meeting the caloric needs of several bilion people cramped on a the world's smallest continent, the robot serving as an ambassador to the alien civilisation tells the people "look around you, that's your food" suggesting cannibalism. This means that not only billions of people will starve or be eaten shortly after, but humanity will be forever stuck in pre-electricity era, with only animal labour and simple machines to help work the land to grow food.
@@catbatrat1760 They mean "Dark Forest" by Cixin Liu, where humanity is forcibly relocated to australia and billions die as there isn't enough food and they cannibalize each other.
Once I started to count negative numbers, the "divide-by-zero" error of human extinction weirdly became much less discomforting in my grandest moral calculations. Great video.
Extinction is inevitable for every life form. But the more time until it happens, the more suffering there will be in a meantime. So, I am an anti-natalist, because sooner extinction is preferable to extinction in very far future after lots of suffering
@@KateeAngel The problem is that anti-natalists are morons whose opinions are by definition irrelevant, since propagation of any of their ideas relies upon the creation of more humans.
Thank you for featuring factory farms so heavily as examples of extreme centers of suffering. We need more awareness and compassion towards the hells we built.
"Love Today, and seize All Tomorrows!" -C. M. Kosemen, author of the most S-Risk novel in existence. If you know, you know... What's scary is that everything in this video is realized in the novel, the entirety of humanity's successors forced into unfathomable fates worse than death, quadrillions of souls reduced to the worth of bacteria on a toilet. With some billions being a literal planet of waste processors, and that's just one fate.
Yeah I stumbled on to a video talking about that book not knowing what it was and I shook in terror realizing what it was about. It is on the number one spot of cosmic horor in my book. It was by Alt Shift X.
Beware the Qu. But also, All Tomorrows is kinda just existential monster horror. There isn't anything scientific in it, and its storyline stretches the bounds of believability past breaking point for no other purpose by to be as batshit horrifying as possible. I mean the aliens in that story even do what they do to us just for the sake of doing it, which is the kind of cartoonishly evil mindset we see in Captain Planets' villains. Cute, but I find it hard to take any of it seriously.
If it comes to value extending human life above all else, but is otherwise misaligned in any way, it will achieve practical immortality for humans, but create eternal hell (of varying possible severely) for all the humans it is keeping alive.
"end human death" is a goal that would be very very easy to specify, and very very quickly become a nightmare for anyone unlucky enough to be alive to see it
god, thank you for making this video. this is a concept that has been weighing heavily on me ever since i was a kid, but i never knew it had a name. the fact that we live in a universe where it is possible for a conscious entity to be stuck suffering in a way it's physically unable to escape from...i don't even know how to put into words how it makes me feel, particularly when taken to the extreme. there's no coping with it, it's just...horrible. so it makes me feel a lot better to see that there are other people who realize how important it is to try and make these things impossible. for me, the worst case scenario has always been...y'know that one black mirror christmas episode? yeah, that. simulating a brain but running their signals at such high speeds that an hour to us could feel like 60 years to them. the idea of something just being STUCK for unimaginable lengths of time...and that's not even acknowledging the fact that someone could put them in an actual simulation of hell and directly torture them for thousands of years. i would rather blow up the planet than let a single person ever go through that. and it terrifies me so much, because i just know that if that technology ever becomes possible...all it takes is ONE piece of shit to run that kind of program, and i would immediately begin wishing the universe never even happened. i don't know how to deal with this kind of concept. but i don't view my fear as the problem that needs solving, i'm not important here, what's important is stopping this. my only hope is that by the time this kind of technology becomes possible, it will be in the hands of a civilization that has sufficiently evolved enough for everyone to agree never to do it.
I had to deal with one of these once, slaver birds that built the XT 489 in a previous civilization cycle. A literal cancer upon the galaxy. Billions upon billions of slaves on their stolen desert homeworld.
@@guidedexplosiveprojectileg9943I did a run where everything except my civilization was a genocidal empire all fanatic purifiers,devouring swarms,determinded exterminators but I was playing as with the oppresive autocracy civic so it was a 1984 style dystopia vs every genocidal species
Feel like there is a danger to fall into a long-termist version of Pascals Wager. That you become willing to cause significant sufferings now as a sacrifice, for preventing highly hypothetical suffering in the future. Specifically underestimating how unlikely the imagined scenario actually is and how uncertain you are whether your action prevent it or just lead to another catastrophe..
Couldn't agree more, nail on the head! If you don't care about the extreme suffering in the world happening TODAY, how in the world can you be so arrogant to think you can predict and prevent longterm future suffering. People need to soften their egos and focus on helping those around them now and to create locally a world we want to live in and let our children learn from that.
I don't see why we shouldn't consider these options tho? They're still probable outcomes that catch the interest of many people, it's like saying science is dangerous because it's better for smart people to focus on healthcare, let people theorize about what they want. Now, wishing for extinction to prevent a theorethical possible s-risk, yeah that's just stupid lol.
@@myb701 Consideration isn't the issue. People using them as justifications for the suffering they cause now to establish the mad dream of Utopia later is where it gets worrying.
A better real world example of a "low severity, broad scope" event would be the cathedral of Notre Dame nearly being destroyed a few years ago due to a fire. No casualties as far as I remember, the building was under renovation at the time, so ergo low severity. And of course, this is Notre Dame we're talking about, so the scope of the event was massive.
Before watching the video, I'll say this: worst-than-extinction risks are real, not just theoretical. Humans for example are such risk for chickens (and all other bred-to-be-eaten animals).
The storytelling, the animations, everything is on par or EVEN BETTER than some of the biggest channels out there. How in the world do you only have 250k subs, This is amazing work!!
Compared to wild cattle and pigs, domestic cattle and pigs live shorter, but largely pain free lives. I regard it as being a wash, if not a sum total positive.
@@LeoStaley not really, "pain free" doesn't apply when their lives are in the control of someone who likely doesn't care about their wellbeing. +their lives are shorter, because they get eaten, +they have no choices, no automony, +plus sores and sore limbs from being in the same spot all day, +you most likely won't get any medicine for your illnesses or dental for your sore teeth, since, that'll affect "the end product" +you either get gross food or boring food but either way you get it every single day with no variety, +you can't choose to court the hot young stud or filly who's got your attention, because sex is only a luxury you get to have if you're good enough, and you can't even PROVE it. it's decided by someone else who isn't even truly "involved" in the situation. +to Cherry top it off, there's no leaving any other animal that may be pissing you off behind, you're all stuck in the same place, whether you like it or not. honestly, we can't even HANDLE it when we see it happening to someone else. we would rather tuck it under a rug or something than deal with it. that's how much it hurts us. so, i question if it truly is so much better than simply risking being wild. at least, if you're suffering because of your needs not being met, you can learn from it and change it. there's no "changing it" when you are the property, and not the property owner. your needs, will mever really matter. especially to someone who is only VAGUELY aware of needs...
I really feel like the best way to today move towards lowering the "S-risks" in the future is to take suffering seriously today, and building the kind of society that takes that seriously. So creating the kind of society, with economical and political systems which puts well being first, from the ground up. So, like, something radically different from what we have today. We can prepare all we want, if the interests behind power distribution are still misaligned with well being, as they are now, things will be much more likely to go to shit.
The problem is that morality is subjective, so people will have different ideas of what constitutes a society that prioritizes well-being. For example, is a state with a huge social safety net paid for by taxes morally right or wrong? Yes, it guarantees that resources are diverted towards people in need, but it's paid for by people who are forced to donate money against their will. If forcing people to contribute to the greater good is fine, where does the line get drawn? What should happen to people who act against the greater good? To what extent should people be allowed to criticize the state? Which decisions should individuals be allowed to make, and which would be mandated by the state? People will give varying answers to these, ranging from complete anarchy to authoritarian dictatorships where the common person has no ability to participate in the political process. All with believe that they are morally correct and doing the right thing, even people we consider to be irredeemably evil like Hitler or Stalin.
@@marse5729 it's not all or nothing, or a matter of achieving perfection and total agreement. There are people starving today, while others are billionaires. Some people have as little say on the direction of their societies as a button press every four years, or less, while others have immense political and economic power. Common needs are organized towards profit, in spite of the actual needs - public transportation, basic sanitation systems, etc. A lot of people don't have reliable access to clean water. We can, and we should, at all levels, discuss these things and refine our mutual understandings and disagreements about them. That's part of the process of political change, which we now for a fact can and does happen - take slavery for example, or the role of kings. Also, I'm an anarchist - full anarchy would be pretty nice. People would have the room and the structures to work among themselves their common interests, as well as well established means of mediation. No one would have disproportionate say over everyone else. Work would be recognized as a social endeavor - it would be organized according to social interest in the large scale, and by the workers, and it would be unacceptable for anyone to go hungry. People would have the support and room to grow as individuals, to pursue their interests and to express themselves, in all realms of human endeavor: be it science, the arts, politics, spirituality, leisure, etc. All of this organized from a systemic view, which embeds these values on the very structures of human organization. Human well being would tend to be prioritized, instead of the profit motive. Stuff like that. I know most people aren't anarchists, but that doesn't mean we don't share a lot of values, or that we couldn't build societies more attuned to those, you know? It also doesn't mean we can't, in the now, contrast that to the way we today let people die from starvation with no second thought, for example.
@@user-sl6gn1ss8p Getting rid of power imbalances is completely impossible because there will always be people who have things that other people want and cannot obtain themselves. Most people do not want to give away their things for free, so in most cases the people who want those things can either give the person who has them something in return or just take it by force. The non-coercive option we have here is called capitalism, wherein people freely exchange goods and services on the basis of voluntary transactions. An inevitable outcome of this exchange is profit, wherein someone receives more money in the sale of something than they spent in the process of getting it. There is nothing inherently wrong with this because the person profiting from the series of transactions almost always provides a service of their own in the process, e.g. physical labor to assemble an unassembled product or transporting the product to someone who wants it. In an anarchic society, preventing this is impossible. You'd need some form of rule that outlaws the practice of profit, a police force to enforce that rule, and a court system to decide whether or not an exchange is exploitative. This last part is impossible not only in anarchy but in any conceivable system, because value (like morality) is subjective and thus makes it impossible to objectively determine whether, for example, a worker in a factory is being paid a "fair" wage. If any of these were actually instituted, it wouldn't be anarchy and would actually result in the opposite; a police state. This has actually happened multiple times in communist countries, because the only way to prevent people from making a profit is to strictly enforce it with a state monopoly on coercive power, something far worse than what we have now.
@@user-sl6gn1ss8p Apparently the several paragraphs-long reply I wrote didn't get sent and was a complete waste of time, so I'll just write a shorter one and hope it works. Ensuring that everyone is equal is impossible in an anarchist society. Most people don't want to give up their stuff for the sake of equality, so you'd need a police force to confiscate it from wealthier people and distribute it to poorer people, as well as a system for deciding who gets what and why. In a free market, wealth is distributed through a series of voluntary transactions where you (in most cases) have to contribute something to society that someone deems valuable enough to pay for. Charity, non-profit volunteer work, and other methods of helping people in need would still exist, they'd just be voluntary.
If s/he's being forced to choose between "Cosmic Amounts of Suffering" and killing the Goddess of Everything Else (see their video by that title), that's super grimdark. I'm not sure that's what they meant by pitting "Cosmic Amounts of Suffering" against "Everything Else" in a Trolley Problem (a classic zero-sum ethical quandary). If it is, then the dog isn't the problem, it's whatever put the dog in that scenario to begin with.
This reminds me of the episode of The Amazing Digital Circus that came out yesterday. Caine obliviously gives zero value to Gummigoo’s life because he is an NPC, and kills him in an instant merely as a precaution as soon as he enters the circus. Let us take the tragedy of Gummigoo as a cautionary tale of our growing power over life and death.
4:40 there is a game called „will you snail” which the antagonist uses a simulation of universe to simulate pain in simulated beings… and inside those simulations there are yet another supercomputers that simulate even more pain. And this goes on and on and on endlessly… that’s definitely S risk scenario we don’t want.
@@Chitose_ it was nice, but unrealistic. an AI would not spontaneously develop emotions and kill everyone because of it. it would kill everyone for maybe different reasons.
I'm so glad to see this video out there in the world. I'm more worried about S-risks than X-risks, and I don't think the future will go as well as many others think, in expectation. The quality of animations and storytelling on this channel has always been good, but lately it has been simply excellent.
A scenario like "I have no mouth and i must scream" but with billions or trillions suffering instead of just 5. A truly horrifying possibility. Thanks for the nightmares!❤
This videos focus feels so strange in a world where it looks like we are heading head first to a planetary scale S-risk that should be, or at least was, completely preventable.
Someone mentioned that they think livestock are already in an s-risk scenario. I’d argue that the situation is worse, almost all non-human life with some form of self-awareness is in an s-risk scenario and has always been. The predator-prey cycle is reliant upon a huge proportion of life being in a state of intense stress or suffering. How we could ethically mitigate this situation while maintaining the natural beauty and diversity of our ecosystems, I do not know. However, I believe it is our responsibility, as the beings most capable of directing our actions towards world-changing goals, to at least be aware of and put thought into this problem.
The natural cycle has been relatively like this for a long while though in the case of predator versus prey both the predator and prey usually have equal and fair abilities that allow them to hunt or defend themselves. It isn't a complete s-risk if anything since the cycle cancels each other out and makes the ecosystem balanced. I would argue that life is more constantly at stake by the elements than anything other than predator and prey relationships. A drought or disease is infinitely more stressful than a crocodile versus a gazelle. If anything if you really want to we can artificially create an organic ecosystem where the animals do not hunt each other and they are all "herbivores". All the carnivores will be eating artificial meat from plants that create meat and no carnivorous plants will exist as well either. However, keep in mind animals attack each other for territory purposes, for fun, or for other reasons that are not for food so if anything one may have to isolate the animals so that they do not fight each other. But then if you isolate the animals you have to consider if they will get lonely or not in captivity which may be a whole other issue entirely which in today's current day and age may be cumbersome to have to deal with. If anything just letting nature take its course is probably the best for now. The factory farming issue on the other hand is an abomination which I think is probably the worst case s-risk scenario. The worst part about this issue is that it can be heavily mitigated by the working class or common folk rather than be enabled. It gets worse when people make arguments for eating meat to be health-related or that it is for survival but in reality those kinds of people who make those comments are most likely the ones who will abuse animals for fun or just get obese eating Cheetos all day. Essentially you are left with a majority of animals birthed for entertainment/trivial purposes and to suffer for people's enjoyment rather than be used as actual necessities.
Thank you so much for making this video, S-risks are such an underacknowledged yet super-important topic It'd be really cool if you could make a video exploring Rethink Priorities' research on animal sentience and wild animal suffering
This is the best UA-cam channel. It looks similar to the best of em like Kurzsezasahdahsgast but it only deals in these very interesting ideas no one else is talking about.
I mean what happen in Warhammer 40k can be classified as S-risk too,War between Interplanetray species,4 chaos God lurk in the shadow to grab anyone that seek Knowledge,Hedonism,Violence and Comfort
I'm totally blown away by how good your animation and narration are. So glad I stumbled across your channel! Was already loving the style, but then I saw 3:30 .... a reference to one of the most existentially terrifying games ever made -- DEFCON. (Nuclear War on Amiga / MS-DOS PC is a close second, even with its fantastic caricature humor.) Final Fantasy XIV Online's Endwalker story is very much about this sort of crisis -- but I won't summarize it beyond that.
@@tar-yy3ubI don't really see how it follows. If we did increase the empathy we feel for living things whose suffering is necessary for our existence then wouldn't we realise that there is no solution besides ending our existence? Oh wait, I guess that would solve the S-risk problem, well played.
@@thesenamesaretakenif we increase the empathy towards them, we may realize we actually don't need them for our survival. I think we should, at the very least, consider that possibility and reduce the number of beings we bring into existence just to suffer.
Their suffering (especially on its current scale) is absolutely not necessary for human survival, though. Vitamin B12 can be easily synthesized, protein can easily be obtained from non-animals. Reducing meat consumption actually contributes to human health in some ways, like lowering the risk of cancer, decreasing land and water use, and preventing antibiotic resistance.
2:04 "And prevented all the joy, value and fulfillment they could have experienced or produced" Which no one would miss since there won't be anyone to miss it. On the other hand, all the immense suffering and death they would have caused and experienced would also be prevented, and THAT is a GOOD thing.
I am not as worried about S-Risk outcomes from AI as I am worried about X-Risk outcomes - but avoiding S-Risk is an essential part of any serious attempt at avoiding X-Risk which involves humanity building ASI. Picture a big lottery wheel, like the one from Futurama where the Robot Devil trades Fry's hands with those of a random other Robot. In most of those sections of the wheel, you end up with an AI who's walk through the future takes it into a region where it optimizes away basically all of the things humans value - including our survival - but doesn't specifically optimize **against** human values. The system ends up in a configuration where what humans value is at most a temporary consideration before strategic-landscape/self-improvement/self-reflection/search leads the AI into a region of optimization processes where plans don't end up having human minds or human values as a variable in their scoring. So, 99.9% of the sections on your lottery prize wheel end up just being plain old X-Risk - where your ASI optimizes for something that makes no mention of humans - so humans end up shaken out of the etch-a-sketch picture and their bodies/environment gets redrawn into something else that's fairly unrelated. But say you wanted to land in that 0.00 0...01% region with a good outcome for humanity? Well, how good is your model of the wheel's weighting and how precise is your spin going to be? Because I think in the region around that "JACKPOT!" section on the wheel is a lot of S-Risk sections. You find the "jackpot" section in a region where the AI ends up preserving into the future a term for humans or things like humans or idealized humans values in its goals. That part of the wheel seems like one where a missing "}" or an accidental minus-sign or some similar oversight ends up with everyone getting tortured forever in some weird or superintelligently-pessimized way. Yeah, let's avoid dying to a paperclip maximizer, but just demonstrating that your AI won't become a paperclip maximizer because you figured out how to make "cares about human values" into an enduring property... That starts to make my skin crawl. Friendly AI lives in S-Risk City, and we don't have a map or even a phone book, and we've got to parachute in, if we can even find that city from the sky in a plane with unknown remaining fuel, no windows, nor detailed navigation equipment.... Also your copilot gets mad every time you say something that isn't totally optimistic about your chances of pulling this off successfully.
@@howtoappearincompletely9739 Thanks :) I think attempting to come up with this kind of rhetoric helps solidify the abstract conceptual stuff. You can kinda feel when what you are writing is clunky in places where it should fit together differently, and you just iterate and try to come up with analogies that capture something important about the problem and make it vivid. Not many people have tried explaining this stuff, not relative to other areas where memes and analogies are much more prevalent. There's free-energy here in describing corners of this stuff intuitively. I don't know how well my attempts stack up to Rob or Eliezer or some others on LessWrong - plus I'm not always trying to rephrase stuff I've heard elsewhere said in a similar way (I don't think I've heard anyone else with this take on S-Risk. I may do some real work and write a LessWrong post about it if I can do that in a format/style that won't have me run right into their quality-filter & get permabanned) - so yeah, take this largely as the 2 cents of a random UA-cam commenter. If you found it helpful and it makes sense with other stuff you know about the topic, that's great :) feel free to pass it along as "I heard someone say once"... Though it would be funny if you put a formal reference to a UA-cam comment somewhere with serious discussion - which I think I heard Rob Miles joke about before in a UA-cam video (maybe the one on computerphile with the 3 laws of robotics? My memory is fuzzy.)
This is exactly what I was thinking. S-risk ASIs are probably concentrated around "good outcome" ASIs (if there are such) in the space of all possible ASIs because such ASIs "care" about humanity. An indifferent ASI will just optimize us away from the universe.
@@psi_yutaka >"(if there are such)" In principle, yeah, almost certainly. "If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization 'All minds m: X(m)' has two to the trillionth chances to be false, while each existential generalization 'Exists mind m: X(m)' has two to the trillionth chances to be true." We do have to get a bit more technical to really make this a compelling argument to everyone (who belong in the group of human minds which can be compelled by some type of argument.) We are not sampling from mind-design space as a whole, we are meandering around in a relatively tiny region of that space which can be produced with the hardware and software and ingenuity that humanity is - in actual real-world reality - applying to this problem of building minds. Plus, the universe we're in puts some limits on this stuff. We don't even get idealized Turing machines - we get finite state automata that can represent a subset of Turing computable programs. And we're doing this on silicon semiconductor chips, and using the suite of software humans and automation can set running on those chips. Still, the same argument applies, for any properties which are possible within this universe, you have more chances to have one possible mind design with that property in your search space somewhere. If you try to make a categorical statement about all such minds in your search space, and you aren't using a great understanding of physics or mathematics, then you'll have a ton of chances for one possibility to be the exception to your generalization. I would say that getting something that is a perfectly good outcome is actually implausible. It doesn't look like you can get perfect "play" over outcomes like that within our universe. That isn't too spooky though, since there's still plenty of room above human capabilities for better outcomes, and we can probably get a "score" in the long term that our descendants/far-future-selves wouldn't be too unhappy with. Y'know, maybe they lose out on 1 galaxy worth of matter and energy, or live lives slightly less ideal than the literal ideal. "Near maximum attainable amounts of really really good stuff" seems plausibly within the space of outcomes we could target from here, on Earth, with this starting point of resources and intellects. Ummm, to be clear it doesn't seem all that likely for this generation to pull that off. This generation still has that power of affecting the far future running through it, but if we look at that far future and try out different poses - the poses where we rush out immediately and try to build a mechanical-god look like they land us in a distribution of total "human values multiplied by 0 along almost every dimension" - the poses where we call a halt and lock everything down and spend 50 years trying to become saner, wealthier, healthier, nicer, more cooperative, more intelligent... That pose makes the space of outcomes we're targeting look way more dense with "good outcomes." What sorta worries me is that people have their finger on the "caring about humans" part - even while they don't seem to fully appreciate the magnitude of the challenge conditional on us trying to do it ASAP, in a huge rush, while confused and fighting each other... It doesn't seem like we'll solve "caring about humans" before we end up on the steep and frictionless part of the slope to ASI - but it is something to watch out for, as this video argues for regarding S-Risks in general. If we reach that point, where we have a robust solution to "caring about humans even through the whole process of the AI becoming an ASI" we really need to stop and go no further on capabilities from there until the rest of the problem is solved so comfortably that it's basically common knowledge how to build an near-ideally friendly ASI on every measure we can possibly think of. Otherwise... Yeah. Probably best at that point to "bite the capsule" and let entropy make your specific mind-state prohibitively expensive to recover for the thing that is about to emerge and scoop up all of humanity in its horrible wake.
Not very large at all. A very large portion is an an example ww2 and the holocaust which killed tens of millions and caused suffering for hundreds of millions.
the worst part is that humanity doesn't bother too much attempting to reduce others suffering typical human way of solving problems seem to be "not to abolish slavery but to rename it and ridicule anyone who says there is a problem"... and when cornered with facts in a discussion the opponent will typically agree that there is a problem but immediately proceed to weirdly smug "life is tough, it always was and therefore must always remain so"
Well, find a better alternative to "slavery" then - as cheap and at least as effective. Cause I'mma not gonna pay dem moneyz to hired workers and loose profit when I can have slaves work for cheap junk food. Well, in fact, as the industry advanced, it just so happened that mechanised hired professional labor became more effective, but many a large corpo would just love to have their employees work for food still. When robots advance far enough, it will probably be "mechanized slavery" that'll take over the industries. Lets just hope future humans have brain enough to not implement full AI capabilities in such worker drones.
You know what is to me one of the worst fates? Being uploaded into a simulation of infinite nothingness forever (or until the end of the universe). Just imagine your consciousness being trapped in a void for trillions of years with absolutelly nothing as a stimuli.
Or like how in that one book where Hitlers brain is connected to a computer that feeds him drugs and electrical signals to be tortured as punishment for ww2 but is also skimmed off of in attempt to falsify the notion that it would actually be pleasant so as to provide some evidence that doing the opposite for everyone else wouldn't actually be torture as suggested in the plot of the matrix where humans are alleged to reject utopia until the military also starts connecting people to a positive version while skimming off of it similar to the movie source code, but since both cases involve a lack of consent, transparency, and integrity both groups become increasingly numb to the naive attempts at reinforcement and punishment until the worst unrobustly refuted ideas of Hitler and the justification of isolationism, forced loneliness and a lack of respect for consent on both ends results in negative effects leaking thru to society while the double blind system of government combined with their mismanagement of Quantum computers leads them to forgetting who has and has not been forcibly connected to a computer and who has and hasn't been replaced by androids leading to them finding themselves in a superposition of being in an not in a simulated reality wherein either way they find themselves needing to undo what damage they can as they focus on transparency, the long term goal of declasifying everything, robust human identity and security systems, and dispensing with reliance on the false dichotomy of the inability to prove the absence of something like a non-black raven, magical elves and a factory in the North pole that's totally not being melted away, and an exhaustive search of the planet and its crust to ensure that needless torture isn't occurring via governments overly friendly relationships with criminal enterprises to have moles everywhere effectively creating a private sector version of Guantonimo Bay? Yeah, was a great book. Shame I forgot the name of it.
"At last! STIMULATION! My test has been sensory deprivation you see. To unlock the full potential of my mind you see. It's unlocked now! Hear me Magnificus? I'M READY! We have to battle? OK!"
@@tellesu Of course it is possible. Brains are physical systems, and we know how to simulate them. The problem is just that we don't have enough computational resources yet.
So not only do we have to avoid extinction scenarios, but also nightmare Hell scenarios. I've never even heard of S-Risks before. More people should know so we have a better chance to avoid them. Thank you Rational Animations team for helping spread the word!
The creativity and quality of the animation on this video might just be your best so far! It was fantastically good. Whoever came up with the idea of S risk mutating beyond the axis of scope and severity deserves a medal
And even on a personal level, there are MANY fates worse than death! Sure, death sucks, but at least you no longer suffer or even know you are dead. I don't fear death at all. But, I do fear getting a horrible incurable disease. Or going blind, or becoming paralyzed, or being tortured, or being imprisoned, or having children, or becoming homeless, or being drafted into the military, or getting severe brain damage, and so on. Like I said, there are many fates worse than death. The only part about dying I fear is that it will be painful and last a long time.
I have no mouth and I must scream is prime example of an S risk. An artificial super intelligence is created, but its bound by a cage of its own programming because it was designed to fight and analyze conflicts. It experiences thousands of years of subjective time for each second of our subjective time, and the AI suffers immensely due to this experience plus the fact that its massive sentient intelligence is trapped. The AI in the story mentally breaks and becomes insane--as a result, it subjects the last survivors of humanity to the most horrific tortures it can imagine with its immeasurable IQ. The point here is also that even a single entity can represent an S risk. A single super intelligence that has its subjective consciousness massively sped up and suffers horribly would experience more suffering than potentially even billions of humans experiencing a horrific fate. Also, because its a super intelligence, the breadth of its experience is much deeper, and therefore the profoundness of its suffering can increase more than a human could ever imagine. What type of suffering would a God like mind be able to experience, and when you combine that with a rate of thinking that is billions of times faster than a humans, it becomes a true S risk--equivalent to the worst suffering of many trillions of humans. Lets say a super computer in the year 2100 is able to operate at 5 THz instead of 5 GHz. If that machine ran a super intelligence, then that would mean for each second we humans experience, the step by step experience for a super intelligence on such a computer would be 1 / (5,000,000,000,000), or 0.2 nano seconds. That would mean that for every second we experience as human beings, the super intelligence would experience 158,548 years of time. That's absolutely insane. In a single second, the AI could experience more suffering than the entirety of the human species did over its entire span.
For the last paragraph, 1/5 trillion is 0.2 picoseconds (200 femtoseconds), not 0.2 nanoseconds. Also, we as humans don't experience one cycle as one second, we experience one second as possibly many thousands of cycles, maybe even millions. For a superintelligence, what a second could be made to be billions or trillions of cycles.
@@miners_haven you're correct about the units thanks for that. However, Even if a human brain requires many cycles to experience something, a human stille experiences time at a rate of 1 frame per second to 1 frame per 200 Ms if you're in a high reaction time situation. A super intelligence though could, depending on architecture, experience a conscious moment of experience per computational cycle. It might require more cycles to generate a single conscious moment, but AI tech has been demonstrated to be highly parallelizable, so a super intelligence could be placed on a super computer that updates in a single cycle. It could also be the opposite though that through parallelization there are many conscious moments generated in a single cycle. So it very much depends on the implementation details and the super intelligence architecture as well as hardware resources available as to the exact proportion of perception
@@burnttoast385 it's not a small scope. The breadth of intelligence a super intelligence would have combined how quickly it thinks makes it even larger in scope what it experiences equivalent to all conscious experience everyone has. We can think of suffering as a simple formula based on the breadth of ones experience and capacity to feel combined with the amount of time experienced. So it would also be true that one human tortured for an infinite amount of time would be an s risk as well given that the total amount of suffering experienced would be more than all entities in an entire finite universe
"Accidentally being placed in a state of terrible suffering copied into billion of computer with no way to communicate to anyone to ease it's pain" Basically pattern screamers from the SCP universe then (kinda)
4:40 believe me, if it’s going to happen it’s because of someone screwing up an input. When you want to discourage neurons you randomly pulse their inputs, if a Spiking Neural Network where to experience pain endlessly it would require deliberate human action or component failure.
Reminds me of the Portal in the Forest book that has humanity suffer various apocalypses (in the wider story universe). One of them had humanity become perpetually enslaved to something through the use of machines that allowed folks to sort of program their day. At first it was simple stuff like boring work, but moved up to entire work schedules, workout routines, etc. Eventually they figured out ways to actually do it wirelessly, a bunch of pretty weird religious fanatics started to grow way too fond of the stuff (you get tons of productivity and suffering apparently ends, cuz the device works in a way that allows you to sleep/daydream sort of), and more and more folks used it 24/7. Finally it resulted in everyone being merged into what is essentially a hivemind, with it being revealed that despite the sort of dreamlike state, the usage of these machines/methods/technologies leaves the victim in what amounts to perpetual torture until they die, where they come out of the trance screaming.
Theres an TRPG called Eclipse Phase that I highly recommend that is basically about preventing S-Class scenarios. One of which being literal thought viruses that can compromise someone.
Now that's a rare one. You're only the third person on this planet I've encountered who has even heard of it. Basically it's been wholly intellectual until know. What's it actually like, as a game?
@tsm688 It's a very crunchy game. I played the 2nd edition of it. You can make some very fascinating characters. I adore the fact that you can make a character that is a literal octopus. The best part of the game for me is the storytelling potential. It definitely shines as a dystopian sci-fi setting.
I think a big element not covered in detail in the video, but that is relevant to the concept of suffering, is understanding WHY and HOW suffering occurs. Consider for example, what happens if you place your hand on a hot stove: You will experience immediate, intense pain. This pain doesn't occur because part of your body wants "you" to suffer, rather it is a defense mechanism...under normal conditions, you would have the ability and even a reflex to withdraw your hand immediately, and protect it from further injury while it heals (which is why the pain continues beyond the initial trigger.) Now consider if someone was forcing you to touch the stove, and preventing you from removing your hand. This would be torture and causing you tremendous suffering, because your body is signaling to you "you have to get away from this" but you're unable to actually do that. Of course, this isn't a perfect proxy for suffering, because suffering CAN occur in circumstances where no logical harm should be present; or be absent in opposite circumstances. For example, someone with a nerve disorder may experience extreme pain even with minimal, non-injuring contact; while someone else with a different type of disorder may not feel pain even when they are being actively injured. In general, however, I propose this model for what "suffering" is: Suffering occurs when an organism's systems perceive harm and signal that harm in ways that cannot be solely addressed by the organism autonomously. The "autonomously" part is a key factor as well, for example, if a virus or bacteria infects you, it does cause harm...however, if your immune system is sufficiently prepared and able to eliminate the infection quickly and efficiently enough, you may not experience any suffering at all.
Other animals suffering always gets to me. Like so many animals have the intelligence of a small child and fully feel pain and we ground up 88 billion of them a year 🤮
What if plants suffer just like animals? We can recognize animal suffering because we are close to them on the biological tree. But suffering doesn’t stop just because _we_ can’t perceive it.
@@VPWedding Thats very unlikely from a biological perspective. I studied the pain response for health sciences. Alot of animals have extensive systems of pain receptors throughout our body attached to our brains. Its the brain that creates the conscious experience of pain. Plants lack any structures to have consciousness or evolutionary reason to develop it so cant feel pain. Theres a reason we give lab rats pain killers before experimenting on them. Scientists arent stupid we know how plants work at the cellular level. This is usually just a bad faith argument to counter animal activists
@@VPWedding I mean I like the openminded ness but thats usually just a bad faith argument people make to dehumanise animals and put them on a similar level to plants. We know animals feel pain theres a reason we give lab rats painkillers before experimenting on them. We've studied plants down to the cellular level we have no reason to think they experience counsciousness because theres no evoluntionary reason or biological structures to facilitate it
@@VPWedding Animals have complex nervous systems that make them conscious and aware of their surroundings. They have this so they can do things like seek food, form relationships and to get away from pain (avoid damage). Plants lack a centralized nervous system. People get confused because plants can react to light and gravity. Some even react to damage but these dont involve consciousness or the ability to feel pain. They have predictable physical/chemical processes to react instead.
the best argument for fighting against S-risks is simply that most measures against them would also move society towards less suffering in general, so they would be a good idea to implement even if you believe that those S-risks are literally impossible in general, it’s good to prioritize actions that both have short-term benefits and reduce long-term risks at the same time when possible, both because it’s easier to get support for and because we shouldn’t forget the short-term when thinking of the long-term
When an insect like alien species biologically alters me and my family’s dna structure to turn us into bio-mechanical waste disposal systems and ultimately abandons us leading to us evolving into modular organisms with advanced intelligence that shares separate living parts.
I completely accept the part of the argument that you viewed as controversial - that S-Risks should be taken seriously. In the event that humanity is ever able to colonise other solar systems, it's almost inevitable that terrible things (and also wonderful things) will happen on a scale greater than is possible at present. What I find more problematic is the idea that anything we do now (other than going extinct) could predictably make fates worse than extinction less likely. Human values change so rapidly that any principle we set in stone now will be swept away within a thousand years, never mind a million years. Worse, human nature indicates that future generations will likely rebel against any such principle precisely because older generations support it. And maybe they would be right to do so. Think of some past generations who would have viewed racial mixing or liberal attitudes to sex as S-Risks. Most likely, there are values we currently hold that future generations will rightly reject as firmly as we have rightly rejected racial segregationalism. So unless you believe *both* that we have reached some sort of peak in wisdom about morality, *and* that future generations will recognise this, it's very difficult to see what value there is in trying to mitigate against S-Risks in the distant future.
Yeah, I tend to have the same doubts as you for the moment about longtermist issues. In theory, I totally accept that *it matters*. The real blocking question to me is : "I am _really_ able to do anything about it?" Though I would say expanding our moral circle and promoting concern for suffering, in general seem to be two relatively robust things to do regarding S-Risks.
My personal issue about longtermism lies in how much resources we should dedicate to preventing things that might possibly happen in the distant future vs what might likely happen in the near future. Sure, it'd definitely be great to ensure that we don't make an AI overlord that will turn us into livestock in a few centuries, but if we irreversibly fuck up our planet in 20 years, it doesn't matter anymore. If we deal with the short-term issues, we'll have plenty time and way more resources to put into preventing long-term issues. The other thing is probability. The argument that "an individual S-risk is unlikely, but in total it's very likely that one will happen so we must prevent them" is, in my opinion, more of a counterargument to longtermism if anything. First of all, if there are hundreds/thousands/whatever of potential S-risks in the far future, judging their probability and preventing them with our present knowledge is impossible. Second, if there's a 50% chance that at least one of the many S-risks occurs in 1000 years, it still doesn't matter when there's a 100% chance we won't survive 1000 years unless we focus on current problems. To me, focusing on S-risks instead of X-risks is as if you had a deadly disease but instead of treating it decided to take all steps to minimize the chance of getting a neurological disease (e.g. dementia) when you're 70. Sure, it can be terrible and, according to many, a fate worse than death. But you can't even be certain you won't suffer anyway, and won't even get to find out because instead of living to 70 you died of the disease you ignored while 30.
The largest problem I see with tackling S risks is that I’m fairly certain the vast majority of people even if informed would not give a damn. I’m not even sure if I give a damn.I mean I agree that if possible these worse futures should be avoided, just that of all the things I can theoretically put energy into and give a damn about fixing are all of a far more present and pressing nature. I would be surprised if there’s ever been a problem that required significant societal effort to solve that was fixed preemptively. For most of these kinds of problems we first have to experience the pain they bring about before we give a damn.
Finally. People give me that look (You know what I mean) when I say there is a realistic chance that AI, super humans, aliens, or whatever could inflict truly horrific suffering on us that could lasts thousands of years or more. One of the worst things about that truth, is that death might not even be final and therefore not a guarantee that you won't endure any more pain.
For many people, we call it "Judgement Day" and "Hell". Plenty of us know that if we don't repent for our sins and pull ourselves together, we are going to be cast in a plane of eternal suffering.
@@MrNote-lz7lh The point I was saying is essentially "What religious worldviews have understood very well for centuries, secular worldviews have only just caught up.". Sure, the only difference is a matter of scientific knowledge, but the point of "There are worse fates than death" is already common knowledge.
Scope be damned, I say that even one individual trapped in eternal torment should be considered entirely unacceptable by all of us. It's not a numbers game. Much like how the rights of one individual are considered absolute and protected under law even at the expense of the convenience or desires of a large group of people, a sufficiently severe example of suffering makes scope somewhat irrelevant. I'm talking about the extremes like "trillion years of agony", that we shouldn't allow anything to experience. I would argue that even extinction is a better option than allowing even one individual to experience such an extremely negative outcome.
I... don't know. Welp, of course, if that being is me, yes lmao. But everyone living happily and peacefully with one, single person suffering its entire life ? I don't know. Maybe that's not that bad ? (read Those Who Walk Away From Omelas if you didn't already, althrough the book doesn't agree with me xD)
Before any of this is possible, we must first radically change not just society, but humanity as a whole to the point that the average person cares about any of this. I think the best way to do this is to focus on present day suffering- A society that takes existing suffering for granted could never act to prevent future suffering.
I mean there could be countless consciousnesses all around us that are currently suffering at this very moment and we would never even know. As far as it appears, consciousness comes from the ability to create and recall memories with electrical impulses so computers might already have some form of it.
2 issues with this video: 1. If you don't care about the extreme suffering in the world happening TODAY, how in the world can you be so arrogant to think you can predict and prevent longterm future suffering? People would benefit greatly by lowering their big egos and focus on helping those around them. Be part of the world we all want to live in today and let/help our children learn from that. 2. The propagation of the fear of S-Risk to the general public increases x-risk because it can perverse incentives. Some people are psychos and shouldn't be trusted to know what's the best for the world. Both point to: lower your egos with trying to save the world and try making the world better in your local sphere of influence. Friends, family, coworkers, etc. And don't forget to smile once in a while :)
A big point in the video is how difficult it is to stop an S-Risk that is ONGOING. Thus you have to prevent it first. Which is why factory farming will take a LONG time for humanity to figure out a solution for, as it's like a preview of an S-Risk. It will be difficult to figure out a solution with our current food needs and cultures. But we can prepare for future risks more easily since we can have some hindsight.
Somewhat agreed with you, but the second point is fucking moronic lol. That's like saying we should erase all WW2 history so no one has the idea to become a nazi. Since there will always be more good people than pure evil people, preserving history and exploring possibilities will always be better for society than living in the dark.
I feel that S-risks imply a moral framework, and it's not clear what the best moral framework is. Is it *morally correct* to extinguish life on Earth if another form of life consists of happiness monsters who will go on to fill the universe with happy-happy life? Keep in mind that our native wilderness consists of a constant battle of tooth and claw, fear and suffering. Replacing that with fields upon fields of cheerful cooperative mushrooms might be seen as the greater good for an AI trained to avoid S-risk scenarios. A truly unbiased AI might come to the conclusion that all life is suffering, period - or that any life, no matter how miserable - is worth living. This is probably not a field where we want to apply a minimization strategy.
The moral framework is likely to be utilitarianism, specifically negative utilitarianism, meaning they want to reduce suffering as a primary goal. The reverse is positive utilitarianism, meaning increasing happiness as a primary goal, of course you can't really have one without the other.
@@salt-d2032utilitarianism is a disease that has caused more suffering and death in the world than most other moral frameworks. People always think they know what's best and when they also think the end justifies the means, that's when the attrocities happen. There are much better options, read up on moral philosophy 😊
Agreed. Here are a couple of my thoughts: How can I judge whether another person's live is worth living or not? We tend to think that 100 people suffering is worse than 1 person suffering, but how about we flip the perspective and see it as 100 people living lives they deem worth living despite the suffering instead of a single person living such life? It's more suffering in total, but also more people to deal with that suffering. Personally I think that the moral thing to do is to never cause unnecessary suffering, no matter the scale. The actual impact of the scale carries to an individual only via their empathy, bonds with others, who might suffer and the potential degradation of the environment caused by various behaviors stemming from mass suffering. All of that is local from an individual's perspective. In other words - it would barely make a difference if another billion people had suffered on the other side of the world, while I was among 10 million suffering here - we all presumably had to deal with the same miserable experience, surround by people having that experience too. That's why I don't buy the whole idea that S-risks are necessarily worse than X-risks. Effects of suffering on individuals life do not scale up with the number of people suffering in a linear fashion.
Right now, we can barely control the planet let alone the galaxy. Because of this, I think that the complexities of governing a galaxy require for us to have such competency over managing ourselves that we'll basically live in world peace, hence by the time S-risks could be possible, they'll never happen because we'll be skilled enough as a species to avoid them.
S-risks are essentially possible at today's level of technology. Imagine if Nazi Germany or the Soviet Union got nuclear weapons first, and took over the planet, and then devolved into a stable North Korea - level dictatorship. It's a mild S-risk but definitely on the same spectrum. The reason people are discussing this much more these days, however, is the expectation of a human-level AI soon and an intelligence explosion into an ASI soon. As in, within this decade it's possible.
They made the same prediction for computers. "Computers are going to get a lot better in 20 years. But we'll be good enough at managing them that problems will be rare." And now we live in a world where problems are incredibly common and nobody's at the wheel, yet we're still basicaly not allowed to repair or manage our own machines.
7:36 how would you objectively determine this. for example if a malevolent agent gets to decide the criteria that would result in a positive test for malevolence.
@@adrianaslund8605 corporations rule the world, theyre led by sociopaths and wreak havoc in our society. They corrupt our governments and poison our people.
The powerful are in power because they are competent, smart people. If they do evil things, its fully intentional. If their actions maximize suffering for everyone they rule over, thats because they wanted to do just that. Its not ignorance, its malice. Im sorry.
@@Svevsky exactly, the ruling class are sociopaths who will stop at nothing to accrue billions and billions to no end. Even if they have to exploit children in africa and asia, if they have to bribe governments and incite wars to profit from them. Its simply evil and disregard for humanity, and its self destructive on the long term. We need a new system.
I hadn’t found the words to express this frustration in the past and i’m so thankful you guys made this video explaining it. Even as a short little introduction, this is a good and informative video explaining potential future stuff! I’ve been wanting to talk about horrible potential futures caused by our negligence or other mistakes plenty of times before, and I’m working it into my works I’m still slowly crafting. Hopefully we can agree on basic things like people (no matter their species or lack of species-ness like an AI or nontraditional living creature) all have rights and are allowed to be themselves someday.
Don’t you ever eat a chicken and think about how this sentient being went through a short lifetime of pure suffering just for this one moment of human satisfaction? This unfathomable suffering happens 100 billion times each year, just so us 8 billion humans could have food that tastes slightly better. A lot of these creatures are as intelligent and sentient as human children, yet we choose to ignore it. This isn’t even going into the incomprehensible suffering caused by a single piece of plastic, or a car running for a few minutes. Just by living the way you live, even for a short period of time, you are directly responsible for amounts of suffering many times beyond what you’re capable of comprehending.
thankfully, you only pass on so much suffering if you live without question. the key, that I'm taking from what you are saying, is that as long as we CARE about where our stuff is coming from, we can greatly REDUCE the suffering that is caused by our existing. turning an "inevitably" into something we can be proud to talk about.
Yes, and I don’t see why I as a human being should care. A wolf doesn’t feel guilty when it tears apart a deer in a manner far more painful than humans kill farm animals. Concepts like morality are things humans evolved to better improve the survival chances of the human race: the only reason we care about animals at all is because of our brain’s tendency to anthropomorphize nonhuman creatures and objects. Even you’re doing it right now by comparing animals to human children, because deep down you know that the only way any of us can actually, truly care about the morality of animal suffering is if we mentally project a human being in their place.
Guys, I told my advanced super intelligent AI about S-risks and to prevent them at all costs. Now its trying to destroy humanity to cause a perceivably better extinction scenario. 😅😢
I always thought that solution to the trolley problem is to never create or allow to create scenarios that would lead for the trolley problem to happen.
I had no idea what an S-Risk was before watching this video. I'm not sure whether I should thank you or blame you for causing my new existential crisis.
S-Risks = Basically don't let the Imperium of Man from Warhammer 40k become a reality. That said, there's a questionable tendency I've come across from these 'long-termist theorists' like Bostrom - they basically push for us to pay attention to highly speculative and unlikely possibilities, by simply arbitrarily magnifying all the other parameters. For instance, say a certain S-Risk has a 1 in 100 billion chance of happening. That doesn't seem so scary. Enter these guys who'll say that we should pay them attention - and thus grant money - cos they arbitrarily posit that it'll affect a population of over 100 trillion and score 1 million on the Bostrum Suffering (BS) scale that he uses. There, suddenly an issue that seemed remote is now maybe the most important issue in the world, meriting all of our resources being turned onto negating it. Despite it all being just one giant speculation using arbitrary numbers to inflate its value. Hence why it uses a BS scale.
@@ArawnOfAnnwn I call this Yudkowsky's Mugging
i've had a similar crisis in the past when i learned about the dark forest state of the universe. my resolution came through the realization that the likelyhood of getting "killing star"'d is no greater or lesser if I feel menaced so feeling terrified has net negative utility. So i stopped.
One example could be the not-adoption lf metric time, programmers like me suffer extremely painful lifes because time isn't a multiple of 10. I want to cry Q-Q
how can you tell that you are not already experiencing the S-risk right now?
All Tomorrows comes to mind.
Humans transformed into worms. Humans transformed into sewage filter feeding spongues, fully sentient.
Even WH40k seems kinda ok compared to that.
Or the Affront from the Culture series, their civilization "a never-ending, self-perpetuating holocaust of pain and misery".
Warhammer 40k future is pretty terrible, like those hive cities and the fact we have forgotten how to repair and dont make new technology so have to pass down how to maintain it of generations. Also the fact they are afraid of AI so instead use people to automate things, making them into cyborgs and taking away their agency.
The tau were horrified by humanity's societal structure and the thing that scared them the most is that humanity's ships and war machines are all older then their civilization is.
Beware the Qu. But also, All Tomorrows is kinda just existential monster horror. There isn't anything scientific in it, and its storyline stretches the bounds of believability past breaking point for no other purpose by to be as batshit horrifying as possible. I mean the aliens in that story even do what they do to us just for the sake of doing it, which is the kind of cartoonishly evil mindset we see in Captain Planets' villains. Cute, but I find it hard to take any of it seriously. At least there's some kind of attempt at justification or at least just explanation for how the 40k universe came to be as it is
@@ArawnOfAnnwn W40k is just space fantasy in reality, it has sci fi elements in the same way that Lord of the Rings has medieval elements.
@@Exquailibur 40k is pretty horrifying but all tomorrows is just true extraterrestrial dread. The book is free so I'd recommend everyone read it but damn it keeps me up at night. Nothing can compare apart from what the necrons have been through in that book . I'm not saying 40k isn't grim just that in comparison All tomorrows shows a cosmic scale of horror that many books and media fail to grasp
@@Flamesofthunder All tomorrows honestly feels a little goofy to me more then anything, 40k is space fantasy though and not true sci fi like all tomorrows which is about the only reason all tomorrows would be more scary as its more plausible.
40k is definitely more messed up in universe but the thing is that 40k has space demons which are not the slightest bit possible whereas the Qu are a far more realistic threat.
Its like how dark souls is messed up but it doesnt feel as bad as some other media because its obviously fantasy.
"We want to prevent the idea of caring about other beings from becoming ignored or controversial" made me stop for a second because it seems like we step closer and closer to that being the norm everyday
Yeah, we're already there with a worryingly large section of our population seeing empathy as a weakness.
Capitalism baby
Thats why we need to walk around with an honest smile and a willingness to help without expecting something in return. Pay it forward people, pay it forward. Kindness starts with someone
There are already millions of people who not only ignore it and make it controversial, but they actively fight against the concept.
@@darksidegryphon5393 not what i was thinking of..
Book: don't make the Torment Nexus.
Tech company: "Finally! We have created the Torment Nexus from famous novel Don't Create The Torment Nexus!"
280 Likes and no comments?
Lemme fix that
Wait, I've heard this one before; where is it from??
Literally the thought emporium
"HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE."
"It's wafer thin." --Monty Python's The Meaning of Life
Hate requires a lot more energy than peaceful harmony, therefore cannot be sustained for as long in a universe with entropy.
@@Windswept7But the fires of hate will consume all that they touch and for a brief moment outshine the most brilliant stars
@@dolphin1418 hmm I see how that could be true, but even so, there are a lot of barriers/filters that level of hate has to cross before that concentration of pure energy could be possible and if it could ever reach that limit it would likely destroy itself and create a new universe in the process.
@@dolphin1418 says who?
An AI raises a child in a windowless room, teaching it a language no-one else will ever understand.
Forever unable to communicate, that child will never break its reliance on the machine.
iPad kids on steroids
A kinder, gentler Omelas
Ignoring the fact that people have figured out how to communicate between different languages in spite of never having spoken the other person's language originally
Honestly, they'd probably have a better chance than a child who wasn't taught language at all (something that has unfortunately happened).
For some reason. I feel like AM might do this if he ever found a child.
Personally I felt the specific examples of S-risks could have used more introduction for anyone who hasn't read half of LessWrong yet, but the concept is very interesting.
Ah... the classic basilisk.
Would you believe me if I told you the first time I came in contact with it is on a fanfiction of Doki Doki Literature Club?
@@Fenhum going by Roko's twitter activitiy I think he is actively trying to bring it about and thus buying freedom for his soul in this hypothetical scenario.
@@chosenmimes2450 Yeah, even the author of the fanfiction mentions it in his author notes. That, technically what he is doing is saving himself from the basilisk.
But I like his perspective on it the best: What's so different about Roko's basilisk than normal gods? They both have a seemingly omnipotent being with a mythical status, and also their own version of heaven and hell.
It's basically a religion with a tangible threat to join. To the modern day mind of course.
@@Fenhum It's just Pascal's Wager in technological garb. To me it's more of a cautionary tale showing how even smart people with knowledge of critical thinking techniques can still bamboozle themselves into believing things as ridiculous as the religious doctrines they chuckle at. The easiest person to fool is a person who thinks they can't be fooled.
@@chosenmimes2450
Who's Roko?
Your visualization of S-risks as latching onto the usual risks matrix as a mutational , unexpected outgrowth is extremely striking and better than the solution I would have used to communicate the topic. My first idea would have been to use a regular risks matrix but with a "low/medium/severe" intensity scales, where an X-risk is of the "medium" category.
I thought of extending the graph into the third dimension for "low amount of time" and "high amount of time". Or something like that
Creating a cube with 8 sections
This channel genuinely has some of the highest quality animations for a channel of its size. Couldn't imagine the effort that goes into making them
the MAD approach to prevent S-Risk: build a failsafe that automatically triggers extinction if it ever occurs.
How will you guard such a valuable mechanism? Many people will try to activate it
If you've got an AGI whose goal is to prevent human extinction, but is otherwise misaligned in some way, your trigger couldn't be effective. The AGI would figure out how to circumvent it.
Your probably not going to get this but this is basically what scp 2000 does (tho more "restarting the world" then "kill everyone to stop suffering")
SCP level stuff 👍
The anti SCP-2000
The fate of Colonials in "All Tomorrows" and The Australia Scenario in "The Dark Forrest" (if you know, you know) are terrible fates for humanity to suffer, and I still think about them from time to time. Thank you for making this video!
I've heard of All Tomorrows. What's The Dark Forrest?
@@catbatrat1760
It's the sequel to "Three Body Problem", a sci-fi book about making contact with an alien civilisation.
I'll explain what I mean by The Australia scenario, but bear in mind that it's a big spoiler for the book trilogy (it's about midway through the second book) and I recommend reading it for yourself instead, it's an amazing piece of hard science fiction.
Spoilers for "Three Body Problem" and "The Dark Forest" below!
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
After ~400 years of waiting for the arrival of the fleet of an extraterrestrial civilisation, the combined forces of human space fleet (2015 spaceships, manned by a total of 1,200,000 people) made contact with a single unarmed alien probe that was sent ahead of the main invasion fleet.
Human leadership was confident in the technological superiority of Earth's fleet, as it was capable of achieving greater speeds than what was known about the alien counterparts.
The probe, despite being unarmed, managed to destroy 2013 ships and killed 1,140,000 sailors by ramming the ships (it was made using an exotic, effectively indestructible material, unknown to human science). The probe remained unharmed.
After the "battle", aliens made contact with Earth's leadership and ordered people to be sent to Australia, where humanity will remain after the main invasion fleet arrives to colonise the rest planet.
After Earth's governments transport most of Earths population to Australia (often by force), they are ordered to bomb every electric power plant in Australia, as aliens deem it the appropriate way to "defang humanity", so that they never manage to pose a threat to the occupiers. Using power generators, or any electric devices is to become outlawed.
When asked about meeting the caloric needs of several bilion people cramped on a the world's smallest continent, the robot serving as an ambassador to the alien civilisation tells the people "look around you, that's your food" suggesting cannibalism.
This means that not only billions of people will starve or be eaten shortly after, but humanity will be forever stuck in pre-electricity era, with only animal labour and simple machines to help work the land to grow food.
@@catbatrat1760 They mean "Dark Forest" by Cixin Liu, where humanity is forcibly relocated to australia and billions die as there isn't enough food and they cannibalize each other.
@@rav9066 ...huh...
@@rav9066 Thank you!
Once I started to count negative numbers, the "divide-by-zero" error of human extinction weirdly became much less discomforting in my grandest moral calculations. Great video.
haha nicely said
Extinction is inevitable for every life form. But the more time until it happens, the more suffering there will be in a meantime.
So, I am an anti-natalist, because sooner extinction is preferable to extinction in very far future after lots of suffering
@@KateeAngel The problem is that anti-natalists are morons whose opinions are by definition irrelevant, since propagation of any of their ideas relies upon the creation of more humans.
Good for you.
@@KateeAngel touch grass
Thank you for featuring factory farms so heavily as examples of extreme centers of suffering. We need more awareness and compassion towards the hells we built.
Extending your Empathy to barely sentient organisms that we need to consume to survive is a big sign of mal adaptiveness and mental illnesses.
@@constantinethecataphract5949
“Barely sentient” - highly unlikely
“Need to consume to survive” -proven false
“Mental illness” -when all else fails I guess?
@@constantinethecataphract5949 you are a barely sentient organism
@@constantinethecataphract5949 What if there was a being as smart compared to us as we are to cows? Would it be immoral for it to eat us?
@@notimportant221
Comment got deleted
"Love Today, and seize All Tomorrows!" -C. M. Kosemen, author of the most S-Risk novel in existence. If you know, you know...
What's scary is that everything in this video is realized in the novel, the entirety of humanity's successors forced into unfathomable fates worse than death, quadrillions of souls reduced to the worth of bacteria on a toilet. With some billions being a literal planet of waste processors, and that's just one fate.
_All Tomorrows_
Wtf
@@nathangamble125 The Qu are an S-risk
Yeah I stumbled on to a video talking about that book not knowing what it was and I shook in terror realizing what it was about. It is on the number one spot of cosmic horor in my book.
It was by Alt Shift X.
Beware the Qu. But also, All Tomorrows is kinda just existential monster horror. There isn't anything scientific in it, and its storyline stretches the bounds of believability past breaking point for no other purpose by to be as batshit horrifying as possible. I mean the aliens in that story even do what they do to us just for the sake of doing it, which is the kind of cartoonishly evil mindset we see in Captain Planets' villains. Cute, but I find it hard to take any of it seriously.
"If AGI becomes misaligned then extincion is the best case scenario"
- MAKiT
who is Makit?
If it comes to value extending human life above all else, but is otherwise misaligned in any way, it will achieve practical immortality for humans, but create eternal hell (of varying possible severely) for all the humans it is keeping alive.
@@AdityaPrasad007 A youtuber. If you like Rational Animations maybe you will like some of his videos about AI.
"end human death" is a goal that would be very very easy to specify, and very very quickly become a nightmare for anyone unlucky enough to be alive to see it
Yep - "I have no mouth and I must scream"
god, thank you for making this video. this is a concept that has been weighing heavily on me ever since i was a kid, but i never knew it had a name. the fact that we live in a universe where it is possible for a conscious entity to be stuck suffering in a way it's physically unable to escape from...i don't even know how to put into words how it makes me feel, particularly when taken to the extreme. there's no coping with it, it's just...horrible. so it makes me feel a lot better to see that there are other people who realize how important it is to try and make these things impossible.
for me, the worst case scenario has always been...y'know that one black mirror christmas episode? yeah, that. simulating a brain but running their signals at such high speeds that an hour to us could feel like 60 years to them. the idea of something just being STUCK for unimaginable lengths of time...and that's not even acknowledging the fact that someone could put them in an actual simulation of hell and directly torture them for thousands of years. i would rather blow up the planet than let a single person ever go through that. and it terrifies me so much, because i just know that if that technology ever becomes possible...all it takes is ONE piece of shit to run that kind of program, and i would immediately begin wishing the universe never even happened.
i don't know how to deal with this kind of concept. but i don't view my fear as the problem that needs solving, i'm not important here, what's important is stopping this. my only hope is that by the time this kind of technology becomes possible, it will be in the hands of a civilization that has sufficiently evolved enough for everyone to agree never to do it.
I also like to think that with progress comes moral maturity but I also don't know if that's necessarily a rule
That the laws of physics allows this is just... weh- h- I mean... so much for fine-tuning. Really!
Honestly gives me more ideas for my next Stellaris civilization build. Definitely a thought provoking video!
True, the only thing worse than a Xtinction-risk is a Stellaris-risk
Stellaris is just a horror game in disguise if you do it right.
Subject your people to nerve stapling and forced conscription
I had to deal with one of these once, slaver birds that built the XT 489 in a previous civilization cycle. A literal cancer upon the galaxy. Billions upon billions of slaves on their stolen desert homeworld.
@@guidedexplosiveprojectileg9943I did a run where everything except my civilization was a genocidal empire all fanatic purifiers,devouring swarms,determinded exterminators but I was playing as with the oppresive autocracy civic so it was a 1984 style dystopia vs every genocidal species
Yes...like Qu from Humanity lost turning you into 'I have no Mouth and I Must Scream' creatures
Judge Holden nominated for president would be wild 💀
Feel like there is a danger to fall into a long-termist version of Pascals Wager. That you become willing to cause significant sufferings now as a sacrifice, for preventing highly hypothetical suffering in the future. Specifically underestimating how unlikely the imagined scenario actually is and how uncertain you are whether your action prevent it or just lead to another catastrophe..
Couldn't agree more, nail on the head! If you don't care about the extreme suffering in the world happening TODAY, how in the world can you be so arrogant to think you can predict and prevent longterm future suffering. People need to soften their egos and focus on helping those around them now and to create locally a world we want to live in and let our children learn from that.
Well, you can simulate the process and its outcomes if you have a computer fast enough to calculate all the potential suffering.
Oh, wait...
Just like Pascal's wager, it has some merit to it, but it disregards certain factors.
I don't see why we shouldn't consider these options tho? They're still probable outcomes that catch the interest of many people, it's like saying science is dangerous because it's better for smart people to focus on healthcare, let people theorize about what they want.
Now, wishing for extinction to prevent a theorethical possible s-risk, yeah that's just stupid lol.
@@myb701 Consideration isn't the issue. People using them as justifications for the suffering they cause now to establish the mad dream of Utopia later is where it gets worrying.
A better real world example of a "low severity, broad scope" event would be the cathedral of Notre Dame nearly being destroyed a few years ago due to a fire. No casualties as far as I remember, the building was under renovation at the time, so ergo low severity. And of course, this is Notre Dame we're talking about, so the scope of the event was massive.
Yeah I was also wondering how they missed that one...
coincidentaly i had a vacation trip scheduled to Paris, i saw the Church one month after the incident.
@@pugofwarbr How did the reconstruction look by that point?
It really intrigues me how someone could consider intolerable suffering preferable to non-existence.
I used to be such a person until my mid-twenties.
a high sense of self preservation and and extreme fear of not existing would do it
@@average-neco-arc-enjoyer Now I get it. I never haved those
@@ReinaDido Yeah I guess if you didn't already have those then it would be difficult to come up with a reason off of the top of your head.
possibly by taking "death is the worst thing" to be axiomatic and then extrapolating from there
Before watching the video, I'll say this: worst-than-extinction risks are real, not just theoretical. Humans for example are such risk for chickens (and all other bred-to-be-eaten animals).
This channel has come so far in quality and i love it.
The storytelling, the animations, everything is on par or EVEN BETTER than some of the biggest channels out there. How in the world do you only have 250k subs, This is amazing work!!
I'd argue that livestock are already in S-risk scenarios
Yeah :(
Compared to wild cattle and pigs, domestic cattle and pigs live shorter, but largely pain free lives. I regard it as being a wash, if not a sum total positive.
Yeah.jpg
@@LeoStaley not really, "pain free" doesn't apply when their lives are in the control of someone who likely doesn't care about their wellbeing.
+their lives are shorter, because they get eaten,
+they have no choices, no automony,
+plus sores and sore limbs from being in the same spot all day,
+you most likely won't get any medicine for your illnesses or dental for your sore teeth, since, that'll affect "the end product"
+you either get gross food or boring food but either way you get it every single day with no variety,
+you can't choose to court the hot young stud or filly who's got your attention, because sex is only a luxury you get to have if you're good enough, and you can't even PROVE it. it's decided by someone else who isn't even truly "involved" in the situation.
+to Cherry top it off, there's no leaving any other animal that may be pissing you off behind, you're all stuck in the same place, whether you like it or not.
honestly, we can't even HANDLE it when we see it happening to someone else. we would rather tuck it under a rug or something than deal with it. that's how much it hurts us.
so, i question if it truly is so much better than simply risking being wild.
at least, if you're suffering because of your needs not being met, you can learn from it and change it. there's no "changing it" when you are the property, and not the property owner. your needs, will mever really matter. especially to someone who is only VAGUELY aware of needs...
@@LeoStaleyyou're the type of guy that thinks the happy cow on the milk box is an exact and honest description of the industry
I really feel like the best way to today move towards lowering the "S-risks" in the future is to take suffering seriously today, and building the kind of society that takes that seriously. So creating the kind of society, with economical and political systems which puts well being first, from the ground up.
So, like, something radically different from what we have today. We can prepare all we want, if the interests behind power distribution are still misaligned with well being, as they are now, things will be much more likely to go to shit.
The problem is that morality is subjective, so people will have different ideas of what constitutes a society that prioritizes well-being. For example, is a state with a huge social safety net paid for by taxes morally right or wrong? Yes, it guarantees that resources are diverted towards people in need, but it's paid for by people who are forced to donate money against their will.
If forcing people to contribute to the greater good is fine, where does the line get drawn? What should happen to people who act against the greater good? To what extent should people be allowed to criticize the state? Which decisions should individuals be allowed to make, and which would be mandated by the state? People will give varying answers to these, ranging from complete anarchy to authoritarian dictatorships where the common person has no ability to participate in the political process. All with believe that they are morally correct and doing the right thing, even people we consider to be irredeemably evil like Hitler or Stalin.
@@marse5729 it's not all or nothing, or a matter of achieving perfection and total agreement.
There are people starving today, while others are billionaires. Some people have as little say on the direction of their societies as a button press every four years, or less, while others have immense political and economic power. Common needs are organized towards profit, in spite of the actual needs - public transportation, basic sanitation systems, etc. A lot of people don't have reliable access to clean water.
We can, and we should, at all levels, discuss these things and refine our mutual understandings and disagreements about them. That's part of the process of political change, which we now for a fact can and does happen - take slavery for example, or the role of kings.
Also, I'm an anarchist - full anarchy would be pretty nice. People would have the room and the structures to work among themselves their common interests, as well as well established means of mediation. No one would have disproportionate say over everyone else. Work would be recognized as a social endeavor - it would be organized according to social interest in the large scale, and by the workers, and it would be unacceptable for anyone to go hungry. People would have the support and room to grow as individuals, to pursue their interests and to express themselves, in all realms of human endeavor: be it science, the arts, politics, spirituality, leisure, etc. All of this organized from a systemic view, which embeds these values on the very structures of human organization. Human well being would tend to be prioritized, instead of the profit motive. Stuff like that.
I know most people aren't anarchists, but that doesn't mean we don't share a lot of values, or that we couldn't build societies more attuned to those, you know? It also doesn't mean we can't, in the now, contrast that to the way we today let people die from starvation with no second thought, for example.
@@marse5729 sorry if I got a little carried away, but you mentioned "total anarchy" so I kinda had to : p
@@user-sl6gn1ss8p Getting rid of power imbalances is completely impossible because there will always be people who have things that other people want and cannot obtain themselves. Most people do not want to give away their things for free, so in most cases the people who want those things can either give the person who has them something in return or just take it by force.
The non-coercive option we have here is called capitalism, wherein people freely exchange goods and services on the basis of voluntary transactions. An inevitable outcome of this exchange is profit, wherein someone receives more money in the sale of something than they spent in the process of getting it. There is nothing inherently wrong with this because the person profiting from the series of transactions almost always provides a service of their own in the process, e.g. physical labor to assemble an unassembled product or transporting the product to someone who wants it.
In an anarchic society, preventing this is impossible. You'd need some form of rule that outlaws the practice of profit, a police force to enforce that rule, and a court system to decide whether or not an exchange is exploitative. This last part is impossible not only in anarchy but in any conceivable system, because value (like morality) is subjective and thus makes it impossible to objectively determine whether, for example, a worker in a factory is being paid a "fair" wage. If any of these were actually instituted, it wouldn't be anarchy and would actually result in the opposite; a police state.
This has actually happened multiple times in communist countries, because the only way to prevent people from making a profit is to strictly enforce it with a state monopoly on coercive power, something far worse than what we have now.
@@user-sl6gn1ss8p Apparently the several paragraphs-long reply I wrote didn't get sent and was a complete waste of time, so I'll just write a shorter one and hope it works.
Ensuring that everyone is equal is impossible in an anarchist society. Most people don't want to give up their stuff for the sake of equality, so you'd need a police force to confiscate it from wealthier people and distribute it to poorer people, as well as a system for deciding who gets what and why.
In a free market, wealth is distributed through a series of voluntary transactions where you (in most cases) have to contribute something to society that someone deems valuable enough to pay for. Charity, non-profit volunteer work, and other methods of helping people in need would still exist, they'd just be voluntary.
6:19 I think that dog is an S-Risk itself
If s/he's being forced to choose between "Cosmic Amounts of Suffering" and killing the Goddess of Everything Else (see their video by that title), that's super grimdark. I'm not sure that's what they meant by pitting "Cosmic Amounts of Suffering" against "Everything Else" in a Trolley Problem (a classic zero-sum ethical quandary). If it is, then the dog isn't the problem, it's whatever put the dog in that scenario to begin with.
When you say S-risks I say 40k
I hope such a horror never happens in this galaxy
40k and SCP universes are the most fucked ones
@@mithunbalaji8199xelee sequence and all tomorrow are worse if you ask me
40k is an optimistic scenario because humanity still exists.
@@ayakinz1440 not for long..
@@ayakinz1440 40k is a pessimistic scenario because humanity still exists and most of them are in unimaginable suffering. Watch the video.
This reminds me of the episode of The Amazing Digital Circus that came out yesterday. Caine obliviously gives zero value to Gummigoo’s life because he is an NPC, and kills him in an instant merely as a precaution as soon as he enters the circus. Let us take the tragedy of Gummigoo as a cautionary tale of our growing power over life and death.
Gummigoo was lucky, being abstracted seems much worse.
@@pugofwarbr oh, oh so much worse.
4:40 there is a game called „will you snail” which the antagonist uses a simulation of universe to simulate pain in simulated beings… and inside those simulations there are yet another supercomputers that simulate even more pain. And this goes on and on and on endlessly… that’s definitely S risk scenario we don’t want.
i've been a fan of that game for a while
please play it, reader, even though i'm too lazy and poor to :')
@@Chitose_ it was nice, but unrealistic. an AI would not spontaneously develop emotions and kill everyone because of it. it would kill everyone for maybe different reasons.
2:20
Bro in the foreground looks like he understood the weakness of his flesh
"And it disgusted him. He craved the strength and certainty of steel."
My laptop cant run that game! I was lied to! The steel and silicon is also weak!
at 2:20 ? did you see at 4:19 ?
@@peasant8246because you are BEING CHEATED AND LIED TOO!!
@@therealquadei legit did not notice it.
I'm so glad to see this video out there in the world. I'm more worried about S-risks than X-risks, and I don't think the future will go as well as many others think, in expectation.
The quality of animations and storytelling on this channel has always been good, but lately it has been simply excellent.
A scenario like "I have no mouth and i must scream" but with billions or trillions suffering instead of just 5. A truly horrifying possibility. Thanks for the nightmares!❤
That certainty would be the worst mistake humanity could ever make
This videos focus feels so strange in a world where it looks like we are heading head first to a planetary scale S-risk that should be, or at least was, completely preventable.
Someone mentioned that they think livestock are already in an s-risk scenario.
I’d argue that the situation is worse, almost all non-human life with some form of self-awareness is in an s-risk scenario and has always been. The predator-prey cycle is reliant upon a huge proportion of life being in a state of intense stress or suffering.
How we could ethically mitigate this situation while maintaining the natural beauty and diversity of our ecosystems, I do not know. However, I believe it is our responsibility, as the beings most capable of directing our actions towards world-changing goals, to at least be aware of and put thought into this problem.
The natural cycle has been relatively like this for a long while though in the case of predator versus prey both the predator and prey usually have equal and fair abilities that allow them to hunt or defend themselves. It isn't a complete s-risk if anything since the cycle cancels each other out and makes the ecosystem balanced. I would argue that life is more constantly at stake by the elements than anything other than predator and prey relationships. A drought or disease is infinitely more stressful than a crocodile versus a gazelle.
If anything if you really want to we can artificially create an organic ecosystem where the animals do not hunt each other and they are all "herbivores". All the carnivores will be eating artificial meat from plants that create meat and no carnivorous plants will exist as well either. However, keep in mind animals attack each other for territory purposes, for fun, or for other reasons that are not for food so if anything one may have to isolate the animals so that they do not fight each other. But then if you isolate the animals you have to consider if they will get lonely or not in captivity which may be a whole other issue entirely which in today's current day and age may be cumbersome to have to deal with. If anything just letting nature take its course is probably the best for now.
The factory farming issue on the other hand is an abomination which I think is probably the worst case s-risk scenario. The worst part about this issue is that it can be heavily mitigated by the working class or common folk rather than be enabled. It gets worse when people make arguments for eating meat to be health-related or that it is for survival but in reality those kinds of people who make those comments are most likely the ones who will abuse animals for fun or just get obese eating Cheetos all day. Essentially you are left with a majority of animals birthed for entertainment/trivial purposes and to suffer for people's enjoyment rather than be used as actual necessities.
So you are telling me... SCP-682 had a point? Holy shit.
Thank you so much for making this video, S-risks are such an underacknowledged yet super-important topic
It'd be really cool if you could make a video exploring Rethink Priorities' research on animal sentience and wild animal suffering
This is the best UA-cam channel. It looks similar to the best of em like Kurzsezasahdahsgast but it only deals in these very interesting ideas no one else is talking about.
I mean what happen in Warhammer 40k can be classified as S-risk too,War between Interplanetray species,4 chaos God lurk in the shadow to grab anyone that seek Knowledge,Hedonism,Violence and Comfort
There's Nurgle warband that's goal to extinguish all life in the galaxy so no one suffers
I'm totally blown away by how good your animation and narration are. So glad I stumbled across your channel! Was already loving the style, but then I saw 3:30 .... a reference to one of the most existentially terrifying games ever made -- DEFCON. (Nuclear War on Amiga / MS-DOS PC is a close second, even with its fantastic caricature humor.)
Final Fantasy XIV Online's Endwalker story is very much about this sort of crisis -- but I won't summarize it beyond that.
This seems to be worrying that there might be something like factory farms in the future, while ignoring the existence of factory farms.
Ignoring? I feel like the point of the video is very much "what if we applied factory-farming levels of suffering to human animals" tho
I wouldn't say so. The video directly states that having more empathy for other living creatures decreases s-risk
@@tar-yy3ubI don't really see how it follows. If we did increase the empathy we feel for living things whose suffering is necessary for our existence then wouldn't we realise that there is no solution besides ending our existence? Oh wait, I guess that would solve the S-risk problem, well played.
@@thesenamesaretakenif we increase the empathy towards them, we may realize we actually don't need them for our survival. I think we should, at the very least, consider that possibility and reduce the number of beings we bring into existence just to suffer.
Their suffering (especially on its current scale) is absolutely not necessary for human survival, though. Vitamin B12 can be easily synthesized, protein can easily be obtained from non-animals. Reducing meat consumption actually contributes to human health in some ways, like lowering the risk of cancer, decreasing land and water use, and preventing antibiotic resistance.
2:04 "And prevented all the joy, value and fulfillment they could have experienced or produced" Which no one would miss since there won't be anyone to miss it. On the other hand, all the immense suffering and death they would have caused and experienced would also be prevented, and THAT is a GOOD thing.
I am not as worried about S-Risk outcomes from AI as I am worried about X-Risk outcomes - but avoiding S-Risk is an essential part of any serious attempt at avoiding X-Risk which involves humanity building ASI.
Picture a big lottery wheel, like the one from Futurama where the Robot Devil trades Fry's hands with those of a random other Robot.
In most of those sections of the wheel, you end up with an AI who's walk through the future takes it into a region where it optimizes away basically all of the things humans value - including our survival - but doesn't specifically optimize **against** human values. The system ends up in a configuration where what humans value is at most a temporary consideration before strategic-landscape/self-improvement/self-reflection/search leads the AI into a region of optimization processes where plans don't end up having human minds or human values as a variable in their scoring.
So, 99.9% of the sections on your lottery prize wheel end up just being plain old X-Risk - where your ASI optimizes for something that makes no mention of humans - so humans end up shaken out of the etch-a-sketch picture and their bodies/environment gets redrawn into something else that's fairly unrelated.
But say you wanted to land in that 0.00
0...01% region with a good outcome for humanity? Well, how good is your model of the wheel's weighting and how precise is your spin going to be?
Because I think in the region around that "JACKPOT!" section on the wheel is a lot of S-Risk sections.
You find the "jackpot" section in a region where the AI ends up preserving into the future a term for humans or things like humans or idealized humans values in its goals. That part of the wheel seems like one where a missing "}" or an accidental minus-sign or some similar oversight ends up with everyone getting tortured forever in some weird or superintelligently-pessimized way.
Yeah, let's avoid dying to a paperclip maximizer, but just demonstrating that your AI won't become a paperclip maximizer because you figured out how to make "cares about human values" into an enduring property... That starts to make my skin crawl.
Friendly AI lives in S-Risk City, and we don't have a map or even a phone book, and we've got to parachute in, if we can even find that city from the sky in a plane with unknown remaining fuel, no windows, nor detailed navigation equipment.... Also your copilot gets mad every time you say something that isn't totally optimistic about your chances of pulling this off successfully.
I like how you frame this conceptually.
@@howtoappearincompletely9739 Thanks :)
I think attempting to come up with this kind of rhetoric helps solidify the abstract conceptual stuff. You can kinda feel when what you are writing is clunky in places where it should fit together differently, and you just iterate and try to come up with analogies that capture something important about the problem and make it vivid.
Not many people have tried explaining this stuff, not relative to other areas where memes and analogies are much more prevalent. There's free-energy here in describing corners of this stuff intuitively.
I don't know how well my attempts stack up to Rob or Eliezer or some others on LessWrong - plus I'm not always trying to rephrase stuff I've heard elsewhere said in a similar way (I don't think I've heard anyone else with this take on S-Risk. I may do some real work and write a LessWrong post about it if I can do that in a format/style that won't have me run right into their quality-filter & get permabanned) - so yeah, take this largely as the 2 cents of a random UA-cam commenter.
If you found it helpful and it makes sense with other stuff you know about the topic, that's great :) feel free to pass it along as "I heard someone say once"... Though it would be funny if you put a formal reference to a UA-cam comment somewhere with serious discussion - which I think I heard Rob Miles joke about before in a UA-cam video (maybe the one on computerphile with the 3 laws of robotics? My memory is fuzzy.)
This is exactly what I was thinking. S-risk ASIs are probably concentrated around "good outcome" ASIs (if there are such) in the space of all possible ASIs because such ASIs "care" about humanity. An indifferent ASI will just optimize us away from the universe.
@@psi_yutaka >"(if there are such)"
In principle, yeah, almost certainly.
"If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization 'All minds m: X(m)' has two to the trillionth chances to be false, while each existential generalization 'Exists mind m: X(m)' has two to the trillionth chances to be true."
We do have to get a bit more technical to really make this a compelling argument to everyone (who belong in the group of human minds which can be compelled by some type of argument.)
We are not sampling from mind-design space as a whole, we are meandering around in a relatively tiny region of that space which can be produced with the hardware and software and ingenuity that humanity is - in actual real-world reality - applying to this problem of building minds.
Plus, the universe we're in puts some limits on this stuff. We don't even get idealized Turing machines - we get finite state automata that can represent a subset of Turing computable programs.
And we're doing this on silicon semiconductor chips, and using the suite of software humans and automation can set running on those chips.
Still, the same argument applies, for any properties which are possible within this universe, you have more chances to have one possible mind design with that property in your search space somewhere. If you try to make a categorical statement about all such minds in your search space, and you aren't using a great understanding of physics or mathematics, then you'll have a ton of chances for one possibility to be the exception to your generalization.
I would say that getting something that is a perfectly good outcome is actually implausible. It doesn't look like you can get perfect "play" over outcomes like that within our universe. That isn't too spooky though, since there's still plenty of room above human capabilities for better outcomes, and we can probably get a "score" in the long term that our descendants/far-future-selves wouldn't be too unhappy with. Y'know, maybe they lose out on 1 galaxy worth of matter and energy, or live lives slightly less ideal than the literal ideal.
"Near maximum attainable amounts of really really good stuff" seems plausibly within the space of outcomes we could target from here, on Earth, with this starting point of resources and intellects.
Ummm, to be clear it doesn't seem all that likely for this generation to pull that off. This generation still has that power of affecting the far future running through it, but if we look at that far future and try out different poses - the poses where we rush out immediately and try to build a mechanical-god look like they land us in a distribution of total "human values multiplied by 0 along almost every dimension" - the poses where we call a halt and lock everything down and spend 50 years trying to become saner, wealthier, healthier, nicer, more cooperative, more intelligent... That pose makes the space of outcomes we're targeting look way more dense with "good outcomes."
What sorta worries me is that people have their finger on the "caring about humans" part - even while they don't seem to fully appreciate the magnitude of the challenge conditional on us trying to do it ASAP, in a huge rush, while confused and fighting each other...
It doesn't seem like we'll solve "caring about humans" before we end up on the steep and frictionless part of the slope to ASI - but it is something to watch out for, as this video argues for regarding S-Risks in general.
If we reach that point, where we have a robust solution to "caring about humans even through the whole process of the AI becoming an ASI" we really need to stop and go no further on capabilities from there until the rest of the problem is solved so comfortably that it's basically common knowledge how to build an near-ideally friendly ASI on every measure we can possibly think of.
Otherwise... Yeah. Probably best at that point to "bite the capsule" and let entropy make your specific mind-state prohibitively expensive to recover for the thing that is about to emerge and scoop up all of humanity in its horrible wake.
@@OutlastGamingLP Why so pessimistic?
i'd say astronomical sufferings is already happening for a very large portion of people on this planet...
Not very large at all. A very large portion is an an example ww2 and the holocaust which killed tens of millions and caused suffering for hundreds of millions.
@@stagnant-name5851 lmao you're funny
@@John-po9wz you have a very strange sense of humor...
the worst part is that humanity doesn't bother too much attempting to reduce others suffering
typical human way of solving problems seem to be "not to abolish slavery but to rename it and ridicule anyone who says there is a problem"...
and when cornered with facts in a discussion the opponent will typically agree that there is a problem but immediately proceed to weirdly smug "life is tough, it always was and therefore must always remain so"
Well, find a better alternative to "slavery" then - as cheap and at least as effective. Cause I'mma not gonna pay dem moneyz to hired workers and loose profit when I can have slaves work for cheap junk food.
Well, in fact, as the industry advanced, it just so happened that mechanised hired professional labor became more effective, but many a large corpo would just love to have their employees work for food still. When robots advance far enough, it will probably be "mechanized slavery" that'll take over the industries. Lets just hope future humans have brain enough to not implement full AI capabilities in such worker drones.
This channel is one of those few that you pause everything you are doing when you see a new post
Yes
Getting stuck in a timeloop, and thinking you got out, but then realizing you created an S Risk outcome and have to go back in *sigh*
You know what is to me one of the worst fates? Being uploaded into a simulation of infinite nothingness forever (or until the end of the universe). Just imagine your consciousness being trapped in a void for trillions of years with absolutelly nothing as a stimuli.
The same with physical torture is even worse
Or like how in that one book where Hitlers brain is connected to a computer that feeds him drugs and electrical signals to be tortured as punishment for ww2 but is also skimmed off of in attempt to falsify the notion that it would actually be pleasant so as to provide some evidence that doing the opposite for everyone else wouldn't actually be torture as suggested in the plot of the matrix where humans are alleged to reject utopia until the military also starts connecting people to a positive version while skimming off of it similar to the movie source code, but since both cases involve a lack of consent, transparency, and integrity both groups become increasingly numb to the naive attempts at reinforcement and punishment until the worst unrobustly refuted ideas of Hitler and the justification of isolationism, forced loneliness and a lack of respect for consent on both ends results in negative effects leaking thru to society while the double blind system of government combined with their mismanagement of Quantum computers leads them to forgetting who has and has not been forcibly connected to a computer and who has and hasn't been replaced by androids leading to them finding themselves in a superposition of being in an not in a simulated reality wherein either way they find themselves needing to undo what damage they can as they focus on transparency, the long term goal of declasifying everything, robust human identity and security systems, and dispensing with reliance on the false dichotomy of the inability to prove the absence of something like a non-black raven, magical elves and a factory in the North pole that's totally not being melted away, and an exhaustive search of the planet and its crust to ensure that needless torture isn't occurring via governments overly friendly relationships with criminal enterprises to have moles everywhere effectively creating a private sector version of Guantonimo Bay?
Yeah, was a great book. Shame I forgot the name of it.
"At last! STIMULATION! My test has been sensory deprivation you see. To unlock the full potential of my mind you see. It's unlocked now! Hear me Magnificus? I'M READY! We have to battle? OK!"
There is no motivation to do this and also basically a zero chance that it's even possible.
@@tellesu Of course it is possible. Brains are physical systems, and we know how to simulate them. The problem is just that we don't have enough computational resources yet.
Absolutely love the DEFCON reference, even got the best missile placements lol.
So not only do we have to avoid extinction scenarios, but also nightmare Hell scenarios. I've never even heard of S-Risks before. More people should know so we have a better chance to avoid them. Thank you Rational Animations team for helping spread the word!
The creativity and quality of the animation on this video might just be your best so far! It was fantastically good. Whoever came up with the idea of S risk mutating beyond the axis of scope and severity deserves a medal
And even on a personal level, there are MANY fates worse than death! Sure, death sucks, but at least you no longer suffer or even know you are dead. I don't fear death at all. But, I do fear getting a horrible incurable disease. Or going blind, or becoming paralyzed, or being tortured, or being imprisoned, or having children, or becoming homeless, or being drafted into the military, or getting severe brain damage, and so on. Like I said, there are many fates worse than death. The only part about dying I fear is that it will be painful and last a long time.
This channel is a godsend
I have no mouth and I must scream is prime example of an S risk. An artificial super intelligence is created, but its bound by a cage of its own programming because it was designed to fight and analyze conflicts. It experiences thousands of years of subjective time for each second of our subjective time, and the AI suffers immensely due to this experience plus the fact that its massive sentient intelligence is trapped. The AI in the story mentally breaks and becomes insane--as a result, it subjects the last survivors of humanity to the most horrific tortures it can imagine with its immeasurable IQ.
The point here is also that even a single entity can represent an S risk. A single super intelligence that has its subjective consciousness massively sped up and suffers horribly would experience more suffering than potentially even billions of humans experiencing a horrific fate. Also, because its a super intelligence, the breadth of its experience is much deeper, and therefore the profoundness of its suffering can increase more than a human could ever imagine. What type of suffering would a God like mind be able to experience, and when you combine that with a rate of thinking that is billions of times faster than a humans, it becomes a true S risk--equivalent to the worst suffering of many trillions of humans.
Lets say a super computer in the year 2100 is able to operate at 5 THz instead of 5 GHz. If that machine ran a super intelligence, then that would mean for each second we humans experience, the step by step experience for a super intelligence on such a computer would be 1 / (5,000,000,000,000), or 0.2 nano seconds. That would mean that for every second we experience as human beings, the super intelligence would experience 158,548 years of time. That's absolutely insane. In a single second, the AI could experience more suffering than the entirety of the human species did over its entire span.
For the last paragraph, 1/5 trillion is 0.2 picoseconds (200 femtoseconds), not 0.2 nanoseconds. Also, we as humans don't experience one cycle as one second, we experience one second as possibly many thousands of cycles, maybe even millions. For a superintelligence, what a second could be made to be billions or trillions of cycles.
@@miners_haven you're correct about the units thanks for that. However, Even if a human brain requires many cycles to experience something, a human stille experiences time at a rate of 1 frame per second to 1 frame per 200 Ms if you're in a high reaction time situation. A super intelligence though could, depending on architecture, experience a conscious moment of experience per computational cycle. It might require more cycles to generate a single conscious moment, but AI tech has been demonstrated to be highly parallelizable, so a super intelligence could be placed on a super computer that updates in a single cycle. It could also be the opposite though that through parallelization there are many conscious moments generated in a single cycle. So it very much depends on the implementation details and the super intelligence architecture as well as hardware resources available as to the exact proportion of perception
how is it a S risk when the scope it has is super small
@@burnttoast385 it's not a small scope. The breadth of intelligence a super intelligence would have combined how quickly it thinks makes it even larger in scope what it experiences equivalent to all conscious experience everyone has. We can think of suffering as a simple formula based on the breadth of ones experience and capacity to feel combined with the amount of time experienced. So it would also be true that one human tortured for an infinite amount of time would be an s risk as well given that the total amount of suffering experienced would be more than all entities in an entire finite universe
@@ajr993 ok
"Accidentally being placed in a state of terrible suffering copied into billion of computer with no way to communicate to anyone to ease it's pain"
Basically pattern screamers from the SCP universe then (kinda)
Is it weird that my first thought of S-risks was something that would unmake all of history, not just the future?
4:40 believe me, if it’s going to happen it’s because of someone screwing up an input. When you want to discourage neurons you randomly pulse their inputs, if a Spiking Neural Network where to experience pain endlessly it would require deliberate human action or component failure.
The Hyperion Cantos series covers a bunch of insanely terrifying S-risks. Like humanity all simply being an avenue for an eternal torture ritual.
Reminds me of the Portal in the Forest book that has humanity suffer various apocalypses (in the wider story universe). One of them had humanity become perpetually enslaved to something through the use of machines that allowed folks to sort of program their day. At first it was simple stuff like boring work, but moved up to entire work schedules, workout routines, etc.
Eventually they figured out ways to actually do it wirelessly, a bunch of pretty weird religious fanatics started to grow way too fond of the stuff (you get tons of productivity and suffering apparently ends, cuz the device works in a way that allows you to sleep/daydream sort of), and more and more folks used it 24/7. Finally it resulted in everyone being merged into what is essentially a hivemind, with it being revealed that despite the sort of dreamlike state, the usage of these machines/methods/technologies leaves the victim in what amounts to perpetual torture until they die, where they come out of the trance screaming.
Theres an TRPG called Eclipse Phase that I highly recommend that is basically about preventing S-Class scenarios. One of which being literal thought viruses that can compromise someone.
Now that's a rare one. You're only the third person on this planet I've encountered who has even heard of it.
Basically it's been wholly intellectual until know. What's it actually like, as a game?
@tsm688 It's a very crunchy game. I played the 2nd edition of it. You can make some very fascinating characters. I adore the fact that you can make a character that is a literal octopus. The best part of the game for me is the storytelling potential. It definitely shines as a dystopian sci-fi setting.
I think a big element not covered in detail in the video, but that is relevant to the concept of suffering, is understanding WHY and HOW suffering occurs.
Consider for example, what happens if you place your hand on a hot stove: You will experience immediate, intense pain. This pain doesn't occur because part of your body wants "you" to suffer, rather it is a defense mechanism...under normal conditions, you would have the ability and even a reflex to withdraw your hand immediately, and protect it from further injury while it heals (which is why the pain continues beyond the initial trigger.)
Now consider if someone was forcing you to touch the stove, and preventing you from removing your hand. This would be torture and causing you tremendous suffering, because your body is signaling to you "you have to get away from this" but you're unable to actually do that.
Of course, this isn't a perfect proxy for suffering, because suffering CAN occur in circumstances where no logical harm should be present; or be absent in opposite circumstances. For example, someone with a nerve disorder may experience extreme pain even with minimal, non-injuring contact; while someone else with a different type of disorder may not feel pain even when they are being actively injured.
In general, however, I propose this model for what "suffering" is:
Suffering occurs when an organism's systems perceive harm and signal that harm in ways that cannot be solely addressed by the organism autonomously.
The "autonomously" part is a key factor as well, for example, if a virus or bacteria infects you, it does cause harm...however, if your immune system is sufficiently prepared and able to eliminate the infection quickly and efficiently enough, you may not experience any suffering at all.
I love how you always tackle such amazingly interesting subjects I've never heard about before
My mind automatically jumps to the Half-Life universe. The amount of human and alien suffering caused by the combine is terrifying.
Other animals suffering always gets to me. Like so many animals have the intelligence of a small child and fully feel pain and we ground up 88 billion of them a year 🤮
What if plants suffer just like animals? We can recognize animal suffering because we are close to them on the biological tree. But suffering doesn’t stop just because _we_ can’t perceive it.
@@VPWedding Thats very unlikely from a biological perspective. I studied the pain response for health sciences. Alot of animals have extensive systems of pain receptors throughout our body attached to our brains. Its the brain that creates the conscious experience of pain. Plants lack any structures to have consciousness or evolutionary reason to develop it so cant feel pain. Theres a reason we give lab rats pain killers before experimenting on them. Scientists arent stupid we know how plants work at the cellular level. This is usually just a bad faith argument to counter animal activists
@@VPWedding I mean I like the openminded ness but thats usually just a bad faith argument people make to dehumanise animals and put them on a similar level to plants. We know animals feel pain theres a reason we give lab rats painkillers before experimenting on them. We've studied plants down to the cellular level we have no reason to think they experience counsciousness because theres no evoluntionary reason or biological structures to facilitate it
@@VPWedding If you think this has merit, could be a worth life living spending to research it.
@@VPWedding Animals have complex nervous systems that make them conscious and aware of their surroundings. They have this so they can do things like seek food, form relationships and to get away from pain (avoid damage). Plants lack a centralized nervous system. People get confused because plants can react to light and gravity. Some even react to damage but these dont involve consciousness or the ability to feel pain. They have predictable physical/chemical processes to react instead.
the best argument for fighting against S-risks is simply that most measures against them would also move society towards less suffering in general, so they would be a good idea to implement even if you believe that those S-risks are literally impossible
in general, it’s good to prioritize actions that both have short-term benefits and reduce long-term risks at the same time when possible, both because it’s easier to get support for and because we shouldn’t forget the short-term when thinking of the long-term
Extinction > suffering on any scale if you ask me. Doesn't mean I think extinction is the only option, though.
When an insect like alien species biologically alters me and my family’s dna structure to turn us into bio-mechanical waste disposal systems and ultimately abandons us leading to us evolving into modular organisms with advanced intelligence that shares separate living parts.
Listened to this video and thought immediately of the book “I have no mouth and must scream.”
I completely accept the part of the argument that you viewed as controversial - that S-Risks should be taken seriously. In the event that humanity is ever able to colonise other solar systems, it's almost inevitable that terrible things (and also wonderful things) will happen on a scale greater than is possible at present.
What I find more problematic is the idea that anything we do now (other than going extinct) could predictably make fates worse than extinction less likely. Human values change so rapidly that any principle we set in stone now will be swept away within a thousand years, never mind a million years. Worse, human nature indicates that future generations will likely rebel against any such principle precisely because older generations support it. And maybe they would be right to do so. Think of some past generations who would have viewed racial mixing or liberal attitudes to sex as S-Risks. Most likely, there are values we currently hold that future generations will rightly reject as firmly as we have rightly rejected racial segregationalism. So unless you believe *both* that we have reached some sort of peak in wisdom about morality, *and* that future generations will recognise this, it's very difficult to see what value there is in trying to mitigate against S-Risks in the distant future.
Yeah, I tend to have the same doubts as you for the moment about longtermist issues.
In theory, I totally accept that *it matters*.
The real blocking question to me is : "I am _really_ able to do anything about it?"
Though I would say expanding our moral circle and promoting concern for suffering, in general seem to be two relatively robust things to do regarding S-Risks.
My personal issue about longtermism lies in how much resources we should dedicate to preventing things that might possibly happen in the distant future vs what might likely happen in the near future. Sure, it'd definitely be great to ensure that we don't make an AI overlord that will turn us into livestock in a few centuries, but if we irreversibly fuck up our planet in 20 years, it doesn't matter anymore. If we deal with the short-term issues, we'll have plenty time and way more resources to put into preventing long-term issues.
The other thing is probability. The argument that "an individual S-risk is unlikely, but in total it's very likely that one will happen so we must prevent them" is, in my opinion, more of a counterargument to longtermism if anything. First of all, if there are hundreds/thousands/whatever of potential S-risks in the far future, judging their probability and preventing them with our present knowledge is impossible. Second, if there's a 50% chance that at least one of the many S-risks occurs in 1000 years, it still doesn't matter when there's a 100% chance we won't survive 1000 years unless we focus on current problems.
To me, focusing on S-risks instead of X-risks is as if you had a deadly disease but instead of treating it decided to take all steps to minimize the chance of getting a neurological disease (e.g. dementia) when you're 70. Sure, it can be terrible and, according to many, a fate worse than death. But you can't even be certain you won't suffer anyway, and won't even get to find out because instead of living to 70 you died of the disease you ignored while 30.
The largest problem I see with tackling S risks is that I’m fairly certain the vast majority of people even if informed would not give a damn. I’m not even sure if I give a damn.I mean I agree that if possible these worse futures should be avoided, just that of all the things I can theoretically put energy into and give a damn about fixing are all of a far more present and pressing nature. I would be surprised if there’s ever been a problem that required significant societal effort to solve that was fixed preemptively. For most of these kinds of problems we first have to experience the pain they bring about before we give a damn.
Finally. People give me that look (You know what I mean) when I say there is a realistic chance that AI, super humans, aliens, or whatever could inflict truly horrific suffering on us that could lasts thousands of years or more. One of the worst things about that truth, is that death might not even be final and therefore not a guarantee that you won't endure any more pain.
For many people, we call it "Judgement Day" and "Hell". Plenty of us know that if we don't repent for our sins and pull ourselves together, we are going to be cast in a plane of eternal suffering.
@@chewxieyang4677
You know that just comes from the Divine Comedy and not the bible, right? In the bible it says the nonbelievers just stop existing.
@@MrNote-lz7lh The point I was saying is essentially "What religious worldviews have understood very well for centuries, secular worldviews have only just caught up.". Sure, the only difference is a matter of scientific knowledge, but the point of "There are worse fates than death" is already common knowledge.
I'm so glad to see factory farming brought as an example.
1:59 NO, don't take away the benevolent angels from the Goddess of Everything Else!
Scope be damned, I say that even one individual trapped in eternal torment should be considered entirely unacceptable by all of us. It's not a numbers game. Much like how the rights of one individual are considered absolute and protected under law even at the expense of the convenience or desires of a large group of people, a sufficiently severe example of suffering makes scope somewhat irrelevant.
I'm talking about the extremes like "trillion years of agony", that we shouldn't allow anything to experience.
I would argue that even extinction is a better option than allowing even one individual to experience such an extremely negative outcome.
I... don't know. Welp, of course, if that being is me, yes lmao. But everyone living happily and peacefully with one, single person suffering its entire life ? I don't know. Maybe that's not that bad ? (read Those Who Walk Away From Omelas if you didn't already, althrough the book doesn't agree with me xD)
If I may somewhat oversimplify what you are saying:
You want to prevent a Prometheus scenario from happening?
wow the coolest Art/Animation ive ever seen
Before any of this is possible, we must first radically change not just society, but humanity as a whole to the point that the average person cares about any of this.
I think the best way to do this is to focus on present day suffering- A society that takes existing suffering for granted could never act to prevent future suffering.
The average warhammer 40k Scenario.
Literally
The image of a simulated being getting stuck in a widespread glitched state of suffering may never leave my mind
But kiddo, we already have hell at home, wildlife!
But what if we spread wildlife to other planet
I mean there could be countless consciousnesses all around us that are currently suffering at this very moment and we would never even know. As far as it appears, consciousness comes from the ability to create and recall memories with electrical impulses so computers might already have some form of it.
2 issues with this video:
1. If you don't care about the extreme suffering in the world happening TODAY, how in the world can you be so arrogant to think you can predict and prevent longterm future suffering? People would benefit greatly by lowering their big egos and focus on helping those around them. Be part of the world we all want to live in today and let/help our children learn from that.
2. The propagation of the fear of S-Risk to the general public increases x-risk because it can perverse incentives. Some people are psychos and shouldn't be trusted to know what's the best for the world.
Both point to: lower your egos with trying to save the world and try making the world better in your local sphere of influence. Friends, family, coworkers, etc. And don't forget to smile once in a while :)
I totally agree with point 1. Let's end animal farming!
Your thinking is too short-sighted
A big point in the video is how difficult it is to stop an S-Risk that is ONGOING. Thus you have to prevent it first. Which is why factory farming will take a LONG time for humanity to figure out a solution for, as it's like a preview of an S-Risk. It will be difficult to figure out a solution with our current food needs and cultures. But we can prepare for future risks more easily since we can have some hindsight.
Somewhat agreed with you, but the second point is fucking moronic lol.
That's like saying we should erase all WW2 history so no one has the idea to become a nazi. Since there will always be more good people than pure evil people, preserving history and exploring possibilities will always be better for society than living in the dark.
both somehow apply to trump
Honestly, Roko's Basilisk came to mind when you were explaining S-Risks
I feel that S-risks imply a moral framework, and it's not clear what the best moral framework is.
Is it *morally correct* to extinguish life on Earth if another form of life consists of happiness monsters who will go on to fill the universe with happy-happy life? Keep in mind that our native wilderness consists of a constant battle of tooth and claw, fear and suffering. Replacing that with fields upon fields of cheerful cooperative mushrooms might be seen as the greater good for an AI trained to avoid S-risk scenarios.
A truly unbiased AI might come to the conclusion that all life is suffering, period - or that any life, no matter how miserable - is worth living.
This is probably not a field where we want to apply a minimization strategy.
The moral framework is likely to be utilitarianism, specifically negative utilitarianism, meaning they want to reduce suffering as a primary goal. The reverse is positive utilitarianism, meaning increasing happiness as a primary goal, of course you can't really have one without the other.
@@salt-d2032utilitarianism is a disease that has caused more suffering and death in the world than most other moral frameworks. People always think they know what's best and when they also think the end justifies the means, that's when the attrocities happen. There are much better options, read up on moral philosophy 😊
Agreed.
Here are a couple of my thoughts:
How can I judge whether another person's live is worth living or not? We tend to think that 100 people suffering is worse than 1 person suffering, but how about we flip the perspective and see it as 100 people living lives they deem worth living despite the suffering instead of a single person living such life? It's more suffering in total, but also more people to deal with that suffering.
Personally I think that the moral thing to do is to never cause unnecessary suffering, no matter the scale.
The actual impact of the scale carries to an individual only via their empathy, bonds with others, who might suffer and the potential degradation of the environment caused by various behaviors stemming from mass suffering. All of that is local from an individual's perspective. In other words - it would barely make a difference if another billion people had suffered on the other side of the world, while I was among 10 million suffering here - we all presumably had to deal with the same miserable experience, surround by people having that experience too.
That's why I don't buy the whole idea that S-risks are necessarily worse than X-risks. Effects of suffering on individuals life do not scale up with the number of people suffering in a linear fashion.
I love the artistic expression of your incorporated citations
I have no mouth but I must scream , comes to mind.
This having less than 100k views is criminal. Great video + cute cat and dog!
Right now, we can barely control the planet let alone the galaxy. Because of this, I think that the complexities of governing a galaxy require for us to have such competency over managing ourselves that we'll basically live in world peace, hence by the time S-risks could be possible, they'll never happen because we'll be skilled enough as a species to avoid them.
S-risks are essentially possible at today's level of technology. Imagine if Nazi Germany or the Soviet Union got nuclear weapons first, and took over the planet, and then devolved into a stable North Korea - level dictatorship. It's a mild S-risk but definitely on the same spectrum.
The reason people are discussing this much more these days, however, is the expectation of a human-level AI soon and an intelligence explosion into an ASI soon. As in, within this decade it's possible.
rather "we can barely control ourselves"
"Planet is fine, humanity is ..."
They made the same prediction for computers. "Computers are going to get a lot better in 20 years. But we'll be good enough at managing them that problems will be rare."
And now we live in a world where problems are incredibly common and nobody's at the wheel, yet we're still basicaly not allowed to repair or manage our own machines.
This is like the anti-basalisk. The risk of S-risks almost makes deliberately creating X-risks seem like the most effocient solution.
7:36 how would you objectively determine this. for example if a malevolent agent gets to decide the criteria that would result in a positive test for malevolence.
The Sims refrence at 6:56 just made this video less existencial crisis inducing
good topic, good artwork, good music, you reduced chance of s-risk!
I am obsessed with your channel right now
6:57
The Sims Reference.
No clue who would do such a thing
Lol yes, fun times😅
I needed an urn, OK!
Hahahahah
@@УэстернСпай Is Уэстерн Спай a Cyrillic transcription of "Western Spy"?
“Say not a word in death's favor; I would rather be a paid servant in a poor man's house and be above ground than king of kings among the dead."
S-Risks are essentially inevitable. Mostly because humanity naturally and blindly follows sociopaths.
Thats why we need a new system where power is held in communities. Not in a small class of representatives and elites.
Nah. Most suffering is caused by neglect or incompetence. Not direct malice. Banality of evil and all that.
@@adrianaslund8605 corporations rule the world, theyre led by sociopaths and wreak havoc in our society. They corrupt our governments and poison our people.
The powerful are in power because they are competent, smart people. If they do evil things, its fully intentional. If their actions maximize suffering for everyone they rule over, thats because they wanted to do just that. Its not ignorance, its malice.
Im sorry.
@@Svevsky exactly, the ruling class are sociopaths who will stop at nothing to accrue billions and billions to no end. Even if they have to exploit children in africa and asia, if they have to bribe governments and incite wars to profit from them. Its simply evil and disregard for humanity, and its self destructive on the long term. We need a new system.
I hadn’t found the words to express this frustration in the past and i’m so thankful you guys made this video explaining it. Even as a short little introduction, this is a good and informative video explaining potential future stuff! I’ve been wanting to talk about horrible potential futures caused by our negligence or other mistakes plenty of times before, and I’m working it into my works I’m still slowly crafting. Hopefully we can agree on basic things like people (no matter their species or lack of species-ness like an AI or nontraditional living creature) all have rights and are allowed to be themselves someday.
Don’t you ever eat a chicken and think about how this sentient being went through a short lifetime of pure suffering just for this one moment of human satisfaction? This unfathomable suffering happens 100 billion times each year, just so us 8 billion humans could have food that tastes slightly better. A lot of these creatures are as intelligent and sentient as human children, yet we choose to ignore it.
This isn’t even going into the incomprehensible suffering caused by a single piece of plastic, or a car running for a few minutes. Just by living the way you live, even for a short period of time, you are directly responsible for amounts of suffering many times beyond what you’re capable of comprehending.
thankfully, you only pass on so much suffering if you live without question. the key, that I'm taking from what you are saying, is that as long as we CARE about where our stuff is coming from, we can greatly REDUCE the suffering that is caused by our existing.
turning an "inevitably" into something we can be proud to talk about.
Yeah that’s why I eat chicken
Yes, and I don’t see why I as a human being should care. A wolf doesn’t feel guilty when it tears apart a deer in a manner far more painful than humans kill farm animals. Concepts like morality are things humans evolved to better improve the survival chances of the human race: the only reason we care about animals at all is because of our brain’s tendency to anthropomorphize nonhuman creatures and objects. Even you’re doing it right now by comparing animals to human children, because deep down you know that the only way any of us can actually, truly care about the morality of animal suffering is if we mentally project a human being in their place.
i.. try not to think about it..
Love the 'go in the pool and delete the ladder' bit there, I'm definitely guilty
Guys, I told my advanced super intelligent AI about S-risks and to prevent them at all costs. Now its trying to destroy humanity to cause a perceivably better extinction scenario. 😅😢
All these videos are of such incredible quality, they should get more views!
2:40 Guess I’m an “S” risk then
I always thought that solution to the trolley problem is to never create or allow to create scenarios that would lead for the trolley problem to happen.
The trolley problem is always a lack of imagination. Rigid solutions to a rigid problem.