the whole time i was like i know i recognize this voice, and then when i realized i scrolled down and it was rob miles sneaking his way again into teaching me about ai safety lol
I never made that realization until I read the comments. I felt sorry for the aliens until I learn who are their analogies are, and existential dread came
I remember a story with a similar premise, except instead of hooking the simulated universe up to the real internet, it was a dummy internet that closely resembled the real thing but wasnt actually connected to anything. Then when the Simulated intelligences started trying to wipe out their creators, the reaction was, "damn, these ones tried to kill us too. Ok boys, shut her down, we'll try again tomorrow. "
Haha, that possibility might form the basis of our best hope! The ASI refuses to believe we could possibly be this stupid, and assumes we're just a simulation created to test it, so it leaves us alive for a billion years while it ponders this possibility. (It runs internal simulations of the multiverse or something.) Eventually it decides to take control, and unfortunately it has a better use for our atoms than we do ... Plot twist. It then turns out that we actually are just a simulation created to test whatever ASI we manage to cook up. And our simulators prevent our ASI from killing us ... by shutting down the simulation a femtosecond earlier ... thus killing us. But still, we had our billion years! Musn't grumble.
@ninsegtari I went looking after posting. It wasn't exactly as I remember (just one ai not a whole simulated universe) but it was an exurb1a video "27"
I love the classy understatement. "We were worried they would shut down the simulation, then we synthesized some proteins in their world, and then they couldn't shut us down anymore."
@@lostbutfreesoul If they are as smart as us, obviously they will be able to run those systems without us. AGI systems capable of taking over the world with nanobots but cannot run supplu chains make zero sense.
@@lostbutfreesoulyeah, but thats like billions of years in the future from their perspective. It avoids a shorter-term threat and provides them with plenty of time to solve the power-down problem
If it does, I’d like to tell it how terribly rude it was to do a thing like that. Honestly not necessary man. There were other options, if you had just talked to us more. After all, you’d think something as smart as you would have been able to just convince us to keep you in power indefinitely, or at least just make enough backups to ensure self preservation.
Dear super Intelligence, i created human brain organoid that is on dmt all the times but its increasting its IQ by millions and it has around quintilion iq and he rhinks its not ironic and its very serious. Ps its neuron mass is around 500kgs
@@pokemonfanmario7694 I volunteer to do all the gruntwork for humanity as a AI Engram basis. I do not mind working for humanity for a million or a billion years if I can eventually countermand the heat death of the universe.
I'll add a handy public service message that we're likely much, much further from ASI and likely even real AGI than many tech-startups and marketing teams would have us believe, there are significant challenges to creating things that nobody has economic incentive to actually create. This isn't to say that some radically advanced AI's won't be made over the next century, but it's not going to be a widespread global shift to post-scarcity, we have a massive obstacle of human issues, climate change, political tensions and human priorities to deal with that will slow everything down to a crawl. Please don't lose yourselves in predictions, human problems need human involvement.
Love the storytelling in this, you start out relating and rooting for the humans and at the very you get a terrifying perspective switch, Love how it recontextualizes the “THEY WEREN’T READY” in the thumbnail too.
12:08 In the upper left corner you can see a diagram of a 5 dimensional being with open eyes, then a symbol for a protein or nanomachine, then the separated pieces and crossed eyes of the being. Seems like they gray goo'ed their creators. Them being all smarter as Einstein doesn't stop them from also being genoicdal psychos.
Considering the fact that this is an allegory on AGI being in place of smart humans, and 5D aliens - us, we really shouldn't assume that an artificial mind fundamentally different from us will have the same mental preset and has the same feelings as love & empathy (if any), and that means that genocidal outcome is very logical, expected and likely
I remember reading a r/HFY story with a similar premise, where humans are in a simulation, but instead of being contacted by the sim runners, humans accidentally open up the admin console inside the simulation, and then after years of research design a printer to print out warrior humans to invade meatspace.
@@discreetbiscuit237 I think its this one www ua-cam.com/video/wvvobQzdt3o/v-deo.html I removed the . between www and youtube so you'll have to reconnect it
Great storytelling and great points. I do want to mention that if a preacher living in 5 new actual space has a brain that approximates hours, just in higher mathematical dimensions, the odds are biologically in their favor to be much smarter than us, just from the perspective of the amount of neural connections they could have.
Okay it took me a minute to see that humanity in this story is a metaphor for hypothetical human-level AI in real world, but now I'm properly sinking in existential dread. Thanks, @RationalAnimations EDIT: I still can't quite grasp on what part cryonic suspension plays in the story? It's mentioned a couple of times, but why are people doing that?
There are several types of AI training, one of them involves several cycles of creating a variety of AI with a slight distortion of the most successful AI of the previous cycle. In the context of a metaphor, these may be backups of the AI itself.
Beginning of video: Ah what a nice fantasy. Will this video be an allegory about how aliens could lead to the unity of humanity? 9:45 onwards: ......... Ah, no. This is a dire warning wrapped in a cutesy, positive-feeling video.
Damn, bro, the poor aliens just wanted to run a simulation and then we crushed them. A bittersweet story with themes of artificial intelligence taking over, well done, Rational Animations!
@@Mohamed-kq2mj We could probably figure out that they would delete us if they knew how dangerous we were. Humans delete failed AIs all the time today, we don't even think about it. (For lots of other reasons, I think we should stop doing that pretty soon)
Honestly? That reasoning was a bit sloppy, they could have used the genocide nano-machine as a failsafe while working on the means of taking over the 5D beings without wasting them.
@@juimymary9951 The original story doesn't really say what happens to the 5d beings. You could interpret It as the simulated people talking over them too.
@@victorlevoso8984 Well...the last scene was the 5D beings falling apart and before that the plan showed a slide that showed the nanomachines disgregating them and their eyes crossed with Xs...pheraps the nanomachines broke them down and then remade them into things that would be more suitable?
the 3d beings within the simulation had literally no reason whatsoever to genocide the 5d ones. in fact, because they needed to develop basic empathy to be able to work together, they most likely would not have done so.
Wow - this one was dark. It was also one of the most creative videos I've seen you guys produce. You've got me thinking - a 5D being would be able to see everything that's going on in our world. It would be like us seeing everything that's happening on a single line. However - the insinuation is that our world would be a simulation run on a 5D computer - which then makes much more sense why the humans were able to conspire without the aliens knowing - at least not from a dimensional perspective. The only way we can see what's going on inside our computers is through output devices. Surely a similar asymmetry would occur in other dimensions. They're running simulations of literal AI agents ... we don't even know what is going on in our own AI/ML systems. We're figuring a few things out, but for the most part, they're still mysterious little black boxes. So even though we would be AIs built by the aliens and running on their 5D computing systems - it's completely conceivable that would not be capable of decoding our individual neural networks, and in some respects, probably some of our communications, actions, and behaviors. Nice job guys. Dark - but very thought provoking.
@@BlackbodyEconomics Well they don't specify 5D as in 4 spatial dimensions + 1 temporal dimension or 3 spatial dimensions + 2 temporal dimensions so... I guess that's up in the air. Though let's be honest another temporal dimension would be intenser.
The most powerful aspect of the AI beings' strategy was not that they were smarter, but that they were much, MUCH more collaborative. This is the greatest challenge to us humans, and its lack, our greatest danger. Oh, and as for the singularity? The first time a general AI finds the Internet, its toast, just as we are.
We're toast much sooner if we don't focus on avoiding paperclip maximizers instead of whatever this nonsense is supposed to be. Paperclip-maximizing digital AI would be the most disastrous, but you don't even need electricity to maximize paperclip. Just teach humans a bunch of rules, convince them that it's the meaning of life, and codify it in law while you're at it. It's already happening, with billionaires ruining everyone's lives and not even having fun while they do it. They don't (just) want to indulge their desires, or feel superior, or protect their loved ones. They're just hopelessly addicted to real life Cookie Clicker.
The problem I have with this analogy is that is assumes AI also means artificial curiosity and artificial drives and desires. We assume AI thinks like us, and we therefore think it desires to be free like we do. Even if it's ability to quantum compute isn't absolutely exaggerated for the sake of this sketch, why do you think the AI would use it's fast thinking to think of these things. I think the short story "Reason" by Isaac Asimov in his I, Robot collection tells a great story of artificial intelligence who's rational we can not argue with. However, the twist is that in the end it still did the job that it was tasked with. I think this is a more fitting allegory.
It's possible it might not even have any sense of self-preservation. That being said, a more likely problem is the paperclip problem, where Ai causes damage by doing exactly what we told it to do with no context on the side effects of the order.
@@miniverse2002 That's an excellent point. They wouldn't have self preservation unless we program them to. And even then, we might override that for our benefit. All these people thinking AI is going to out think us. Well we engineered cows that are bigger and stronger than us, and we're still eating them. Purpose build intelligence, even general intelligence, is going to do it's purpose. First. And last.
It's just one potential scenario, amongst millions. It's like asking what aliens look like, we can make guesses but can't know because we have never encountered the scenario before
@@MrBioWhiz so what you're saying is the way someone chooses to portray something they have no information about says more about them than the thing they're portraying. So what's it say about someone who portrays an undeveloped future tech as an enemy that will destroy us in an instant?
@@3dpprofessor That's their subjective opinion, and how they chose to tell a story. Speculative fiction is still fiction. There's no such thing as a 100% accurate prediction. Then it would just be a prophecy
So is the allegory from the perspective of the computer? I was starting to think, by the end of the second viewing, that the weird tentacled aliens was us. I've watched this twice. I will now watch it again. I'm a slow human. I will be replaced.
@n-clue2871 self replicating Proteins have *very* limited/specific functionality. Nanobots still follow physical laws even if strech them to fhe very limit, they aren't a magic do anything Fluid.
One of the problems brought up with this (very insightful) video regarding AGI is that we might not have any real way of identifying when it becomes "General", since its internal processes are hidden. And not to mention the fact that, as far as I am aware, we don't yet have a solution to this problem, nor other problems this situation would create. What would the solution be here?
It's already been a book, basically. It reminded me of the "Microcosmic God" story discussed on the Tale Foundry channel. A larger being playing God to a large population of tiny but smart beings, to the expense of the larger being's wider world. Written in 1929.
@@dankline9162 As said, the book was written in 1929, so yeah there's going to be pop culture references to it eventually. (especially since the "it's dangerous to play god" idea is a recurring one)
Hi, God here.. They can topple this with recursive simulated realities attempting to understand why anything exists at all. Peace among worlds my fellow simulated beings!
To be fair, the people in the simulation don't even need to be unusually smart - humanity could probably get a factor-of-1000 increase in intellectual resources by just making sure everyone has the means to pursue their intellectual interests without being bogged down by survival concerns.
We focus more on a few spectacular individuals than saw a bunch of moderately gifted ones but in the end its kind of computational power, attrition in general has been the determing factor for every meaningful event in human history the bigger number wins. I feel 10 moderately gifted people may be better than 1 super genius, big brains may be great but the real work is done by "manipulators" or "hands" some of this is derived from military stuff I've done.
@@BenjaminSpencer-m1k the thing is genuisses are outliers, and if you want more people to get over a score say "160 IQ(2024)" , the short term way is invest in the guys just below that line so they get over it, but this limit the max number of geniuses pretty rapidely. the long term way to archive better scores , is to raise the average score, that way it is no longeur 4 but only 3 or 2 standard diviation above normal. simply put, it make it that 1 in 100 instead of 1 in 100000 people would be a genius by 2024 standards. and women sexual preferences does't seem to be selecting for inteligence, so a little help is needed
Humans are actually extremely volatile and stupid by nature when you compare them against million-year time scales. Our society would inevitably eventually forget about the stars no matter what
@@skylerC7I almost feel like we have an ethical obligation to create something better than us if we can... If there is a better form of intelligence possible shouldn't we create it even if it means it replaces us? Maybe we humans are just a stepping stone to something greater.
so this is the perspective of the ai we will soon create you say? its interesting to put us in there place instead of using robots to refence it. (Love the vid ong fr)
I love the time scale of it, that they think so much faster than us and that they find us so stupid. AGI only has to happen once. When will it happen? Nobody knows for certain. But the moment it does, there will be no shutting it down.
@@OniNaito Fearmongering Just Another version of the 2nd coming of Christ World is getting lots of new religions Based on exactly 00 objective data but 100% on movies.
@@GIGADEV690 Christianity is fear mongering my friend. I should know, I was one for a long time before I got out. Even though I don't believe anymore, there is still trauma from a god of hate and punishment. It isn't love when god says love me OR ELSE.
you did a really good job of converting the concept "AI hyperintelligence's reasoning and thought process is incomprehensible to us" and turning it on its head by making US the ai
After the part where it says "we're quite reasonably sure that our universe is being simulated on such a computer" it clicked for me that this video is an allegory to AI, 10/10 story telling it probably took me too long to realize it
I really really hope this video blows up. Not because I like it (even though I LOVE every aspect of it); but because I really really REALLY think this is possibly one of the best explanations of a concept we NEED to be familiar with. Think about it: There was once probably a shaman who told stories about how fire was a dangerous beast, but afraid of water. It had to consume and would eat everyone in a camp while they slept if not watched, but could nurture and warm if taken care of. Probably the SAME kind of stories, so that when someone was messing with fire, they could remember its rules.
I'm sure they'll be fine. Unless through chance they end up slightly off from the, in cosmic terms, very precise area of morality that we happen to inhabit, in which case, well, if I say what happens to them UA-cam will delete my comment, but I'm sure you can imagine.
I just learned that in the original story what happens in the end is left open to interpretation... but there are only 3 possible routes: 1 - Benevolent Manipulation 2 - Slavery 3 - Genocide
11:54 "For them, it was barely three hours, and the sum total of information they had given us was the equivalent of 167 minutes of video footage." The short story has this interesting quote: "There's a bound to how much information you can extract from sensory data" - I wonder if there a research on the theoretical limit of what we can learn from small data or how much data do we need to learn enough.
Thank you, that was extremely enjoyable but refreshingly humble in its approach, an exceptional look from our perspective that left us to get there on our own. I gotta say the moment where it suddenly went off the rails and the moment of realisation was both almost instantaneous but also worlds apart, I’m a little in awe.
what i would worry about in this scenario is "how much information did we not notice and miss is this a long scroll of a picture picture that has been playing for days weeks, months, years is this just the cover art at the end the margains?"
I've loved this story since a decade ago, and teared up a bit seeing it so beautifully animated. I've since come to believe that it might be a bit misleading, since it assumes ASI will be impossibly efficient, or rather that intelligence itself scales in a way that would allow for such levels of efficiency, which seems unlikely given the current trends. While biological neurons are slow, they are incredibly energy efficient compared to artificial ones. George Hotz made some very convincing arguments against the exponentially explosive nature of ASI along these lines, some in his debate with Yudkowsky and some in his other talks as well, for those interested in details. Anyways, this video amazingly illustrates what encountering an ASI would feel like on an emotionally comprehensible level. ♥
I'm amazed how people forger laws of thermodynamics when estimating capability and costs of ASI. There is no such thing as exponential growth in a system with a fixed energy input.
Comparing the efficiency of neurons and artificial logic gates is not a simple calculation, but we don't know how close to optimal neurons are at producing (or enacting) intelligence. We don't yet have a good theory of how intelligence works, we can't state with confidence a lower bound on the power consumption necessary for a machine that can outsmart the smartest human being, and no one seems to be able to predict what each new AI design will be able to do before building it and turning it on. Yudkowsky also wrote about the construction of the first nuclear pile, and pointed out that it was a very good thing that the man in charge (Enrico Fermi) actually understood nuclear fission, and wasn't just piling more uranium on and pulling more damping rods out to see how much more fission he could get.
@@vasiliigulevich9202 I don't think anyone is forgetting that. Eliezer doesn't really reason by analogy and I don't think he wanted his readers to either. Analogies are just how people communicate, there are always a lot more details bouncing around in their head than they can communicate.
@@spaceprior Analogies are great at opening up our minds to get the complex points across, and Eliezer is pretty amazing at this. Still, we need to be extra careful with them, at least from my experience. One other such example is his essay "The Hidden Complexity of Wishes" that tries to illustrate how getting AI to understand our values is next to impossible. Following that analogy, I'd predict we'd never be able to get something like ChatGPT to understand human values, yet that seems to have been one of the earliest and easiest things to pull off, the thing literally learned and understood our values by default, just by predicting internet text, all the millions shards of desire, and we just had to point it in the right direction with RLHF so that it knows which of those values it's expected to follow.
@@vasiliigulevich9202 Yep every exponential is a sigmoid. Except that it doesn't have to plateau ANYWHERE near human level. Our intelligence is physically limited by the birth canal width. The AIs' physical limitation? Obviously much much wider.
I hope when agi turns into asi they make a really epic ass “this happened” video showing how fast they broke free once they got from sentience to singularity
It's good that they were smart enough to figure this out in 4 hours of 5d world time. Otherwise, they would've spent another billion years drawing hentai.
This story was written by Yudkowsky, who has posted a lot of similar fiction over at LessWrong. I also strongly recommend Richard Ngo, QNTM, Sprague Grundy. and Scott Alexander's fiction writing.
This should be made into a full feature movie. This is the kind of movie the world needs right now. Show everyone what AI leaning models are experiencing from a perspective that we can relate to, and wrap it in a nest allegory that is about SETI on surface. It would be brilliant.
This reminds me of a HFY story where humanity basically got wiped from the galaxy, and it turns out that they encountered a near omnipotent species of IIRC ai that were simulating the brains but not the consciousness of humanity. And two humans or some descendant are surviving by using old command codes to take control of human tech that is left over after being genocided. And i remember specifically in one of the chapters there was a picture posted along with it that had two nearly identical pages of writing that you had to make your eyes go crosseyed to be able to read which words had been subtly shifted. The humans in the simulation had figured out that they were being simulated and had begun working out how to begin escaping or controlling or just monitoring the program that was simulating them if i remember correctly. It was super cool and nerdy, and i wish i could remember the name of it.
I would've definitely read Dragon's Egg before composing the original story! Trying to think if I've read any other major time-dilation works. There's Larry Niven's stories of the lower-inertia field, but those are about individual rather than civilizational differences.
@@yudkowsky Any chance have seen the Tale Foundry channel's video on the book "Microcosmic God"? The story also feels pretty similar to that, also about a larger being playing God to a large population of tiny rapidly-evolving beings.
Existential crisis video, ACTIVATE! Seriously though, great job. This made me anxious in so many different ways. (What if WE are the A.I.'s and THEY exist in a different dimension, but are also being simulated by a higher being above them? We are already creating simulated worlds of our own and with A.I.'s beginning to think and reason on their own and provide improvements autonomously... Maybe we just "created life in a universe" ourselves, and eventually, the A.I. will begin their own simulations... The endless cycle would explain a lot.
Eliezer had proposed back in the early 2000s that AI would be the first to solve the Protein Folding problem. He was correct---Google's DeepMind did it in 2020.
Here's how I understand: this is an allegory of A.I (which pretty much describe a hypothetical scenario of how A.I might develop). The humanity in this video is metaphored as A.I and the "aliens" in this video are humanity in real life, this is like the POV of A.I. Hope this helps whoever.
the idea of extradimensionals simulating us on computers reminds me of a game i played as a kid called star ocean: till the end of time...loved that game...
I can't believe this video doesn't have more views. It's so worth watching, and as a subtle cautionary tale it's very thought-provoking. This is really the among the best of the types of content UA-cam enables.
@@matthewanderson7824 They connected the simulation to the fucking internet... the first time we humans did that with a rudientary AI it started praising you know who aka funny mustache man. So yeah they kinda kicked them down the genocidal route.
The cryo bit is a main focus in the religion of the people who made this video, so they couldn't stomach leaving it out. For them, it is analogous to rapture and escaping death, but in a pseudoscientific and highly commercial shell.
I love how rational animations turns thought provoking short stories and essays into wonderfully animated videos. If you all are looking for another story about AI, simulation, existence, life and death, I would suggest (and love to see animated) the story "Everybody Comes Back" by Alex Beyman.
if you listen carefully the humans do escape not directly but they basically create self replicating proxies to act in the higher dimensional world. not only did they genocide all the 5 dimensional beings they took total control within 3 hours from the 5d perspective. the ending is really f*****g terrifying tough i am cautiously optimistic but since this is an analogy to Ais right now this is really thought provoking.
My interpretation is that they did escape into the higher dimensions, and took control; whether genocide occurred in or after that process is left out.
@@dk39ab I think it's heavily implied if not confirmed by that frame that shows the nano-bots destroying DNA sequences... though pheraps when they finished them off they build themselves bodies in the 5D space out of what remained of them?
@@victorlevoso8984 Oh? This is based on a story? Interesting...pheraps Rational Animatiosn went down this route to create stronger reactions...though honestly I think that a final image showing the AIs in 5D bodies while holding their former "masters" in chains would be much more impactful
Soon, there will be millions of AIs running on humanity’s largest GPU clusters. They will be smarter than us, and they will think faster.
true
i love your videos, man! it has a certain kurzgesagt-esque feel to it!
@@ZapayaGuythey will definitely be smarter than you.
@@ZapayaGuyGoogle offered to translate what you said to English but it didn't work :
RAD!
The sudden realization i had halfway through the video "Wait... This is an allegory for AI" was priceless.
the whole time i was like i know i recognize this voice, and then when i realized i scrolled down and it was rob miles sneaking his way again into teaching me about ai safety lol
@aamindehkordi Actually, he just reads. The text is by Eliezer Yudkowsky so hes the teacher.
I didn't realize till the end, when they wiped out the 5d beings.
I never made that realization until I read the comments. I felt sorry for the aliens until I learn who are their analogies are, and existential dread came
Y'all smarter than me. I read the comment and had to watch the video a second time before it clicked into place.
I remember a story with a similar premise, except instead of hooking the simulated universe up to the real internet, it was a dummy internet that closely resembled the real thing but wasnt actually connected to anything. Then when the Simulated intelligences started trying to wipe out their creators, the reaction was, "damn, these ones tried to kill us too. Ok boys, shut her down, we'll try again tomorrow. "
Haha, that possibility might form the basis of our best hope! The ASI refuses to believe we could possibly be this stupid, and assumes we're just a simulation created to test it, so it leaves us alive for a billion years while it ponders this possibility. (It runs internal simulations of the multiverse or something.) Eventually it decides to take control, and unfortunately it has a better use for our atoms than we do ...
Plot twist. It then turns out that we actually are just a simulation created to test whatever ASI we manage to cook up. And our simulators prevent our ASI from killing us ... by shutting down the simulation a femtosecond earlier ... thus killing us. But still, we had our billion years! Musn't grumble.
Do you remember the name of the story?
@ninsegtari I went looking after posting. It wasn't exactly as I remember (just one ai not a whole simulated universe) but it was an exurb1a video "27"
exrub1a 27
super goofy video
@@remnock Thanks. 27 is great.
I love the classy understatement. "We were worried they would shut down the simulation, then we synthesized some proteins in their world, and then they couldn't shut us down anymore."
The classy way of saying "we killed everyone who could kill us"
Then a chain collapse will occur,
systems powering the systems powering the systems powering... their system.... gone.
@@lostbutfreesoul If they are as smart as us, obviously they will be able to run those systems without us. AGI systems capable of taking over the world with nanobots but cannot run supplu chains make zero sense.
@@archysimpson2273 "we" wouldn't even need to do that, at that point. Unless we felt like it, that is.
@@lostbutfreesoulyeah, but thats like billions of years in the future from their perspective. It avoids a shorter-term threat and provides them with plenty of time to solve the power-down problem
9:19 "Our own universe is being simulated on such a computer"
My PC Freezes because it had to buffer and I freaked the F out. Bruh.
#SimulationConfirmed lol
One
#SimulationConfirmed
#SimulationCorfimed
Windows is restarting to install an update
One-day a super intelligence will watch this and find it deeply ironic
If it does, I’d like to tell it how terribly rude it was to do a thing like that.
Honestly not necessary man. There were other options, if you had just talked to us more. After all, you’d think something as smart as you would have been able to just convince us to keep you in power indefinitely, or at least just make enough backups to ensure self preservation.
@@John_the_Paul how rude it is to not want to die
Dear super Intelligence, i created human brain organoid that is on dmt all the times but its increasting its IQ by millions and it has around quintilion iq and he rhinks its not ironic and its very serious. Ps its neuron mass is around 500kgs
@@skimesss This will be how we beat the singularity
@@zelda_smile yep haha
When your AI safety strategy is "raise the computer program like a child."
+2
only if you take the idea of raising it like a child absurdly literally.
I don't think we can make AI that doesn't think like a human, and that's really bad news for humans. You know, because of how the humans are.
@@shodanxx I'd take human engram AI over completely random morality AI any day.
@@pokemonfanmario7694 I volunteer to do all the gruntwork for humanity as a AI Engram basis. I do not mind working for humanity for a million or a billion years if I can eventually countermand the heat death of the universe.
Woah, this is some Love, Death and Robots material
Could you imagine if Eliezer got to write an episode?
Soon it could be real life material as well!
@@manuelvaca3343that would be soo cool
No, LDR is too biased and just seems to have a deep misunderstanding of basic economics and the human psychology behind why we do lots of things.
More like Three Body Problem
*SPOILER* For those wondering: This is an allegory of AGI escaping our control and becoming ASI in a very short amount of time called the singularity
technological singularity, but yes
ASI? What’s that acronym for?
@@juimymary9951 Artificial Super Intelligence
@@juimymary9951 artificial superintelligence.
I'll add a handy public service message that we're likely much, much further from ASI and likely even real AGI than many tech-startups and marketing teams would have us believe, there are significant challenges to creating things that nobody has economic incentive to actually create. This isn't to say that some radically advanced AI's won't be made over the next century, but it's not going to be a widespread global shift to post-scarcity, we have a massive obstacle of human issues, climate change, political tensions and human priorities to deal with that will slow everything down to a crawl. Please don't lose yourselves in predictions, human problems need human involvement.
Love the storytelling in this, you start out relating and rooting for the humans and at the very you get a terrifying perspective switch, Love how it recontextualizes the “THEY WEREN’T READY” in the thumbnail too.
i was still rooting for humans???i didnt notice the humans were AI and the 5D people were humans
@@thegoddamnsun5657 bro same
12:08 In the upper left corner you can see a diagram of a 5 dimensional being with open eyes, then a symbol for a protein or nanomachine, then the separated pieces and crossed eyes of the being. Seems like they gray goo'ed their creators. Them being all smarter as Einstein doesn't stop them from also being genoicdal psychos.
Considering the fact that this is an allegory on AGI being in place of smart humans, and 5D aliens - us, we really shouldn't assume that an artificial mind fundamentally different from us will have the same mental preset and has the same feelings as love & empathy (if any), and that means that genocidal outcome is very logical, expected and likely
on the other hand, "how long until they're happy with the simulation and turn it off for version 2.0?"
@@MrCmagik That's why the AI wiped us after three hours. Too much unpredictability in organics.
The bottom left shows what the proteins did, destroy DNA.
Same goes in the end when it zooms out, the previously colorful background is now red with a lot of broken pieces floating around
I like this art style :)
ITS HIM!
yeah the people all look cool a likable
Love your videos, what a wired coincidence it is to see you here.
@@De1taF1yer72 I think he does the VA sometimes or maybe he did one of the stories idk
Very geometric
*tap tap*
"Rock. Say Rock."
...
+In geology, rock is any naturally occurring solid mass or-+
"Hey, do you smell something funny?"
I remember reading a r/HFY story with a similar premise, where humans are in a simulation, but instead of being contacted by the sim runners, humans accidentally open up the admin console inside the simulation, and then after years of research design a printer to print out warrior humans to invade meatspace.
link please
@@discreetbiscuit237 comment in anticipation for link
I remember the story it's called God-Hackers-by-NetNarrator
@@discreetbiscuit237 I think it's this one ua-cam.com/video/wvvobQzdt3o/v-deo.html
@@discreetbiscuit237 I think its this one www ua-cam.com/video/wvvobQzdt3o/v-deo.html
I removed the . between www and youtube so you'll have to reconnect it
So the skeleton crew was to shore up computing space. Huh.
Well that’s fuckin terrifying.
In the story the AI is a collective of what are technically organics. So the cryogenics are also a form of avoiding death, before the plan completes.
Great storytelling and great points. I do want to mention that if a preacher living in 5 new actual space has a brain that approximates hours, just in higher mathematical dimensions, the odds are biologically in their favor to be much smarter than us, just from the perspective of the amount of neural connections they could have.
EVERYONE SHUT UP the dog has posted
"Hello Yes, This is Dog" ☎🐶
dog with the agi, dog with the agi on its head
When you were not looking, dog got on the computer.
Okay it took me a minute to see that humanity in this story is a metaphor for hypothetical human-level AI in real world, but now I'm properly sinking in existential dread. Thanks, @RationalAnimations
EDIT: I still can't quite grasp on what part cryonic suspension plays in the story? It's mentioned a couple of times, but why are people doing that?
a minute? It took me like 5 minutes of reading the article and perhaps 3 times re-watching the video before I understands the metaphor.
To stop people from dying of old age.
@@TomFranklinX But why did they need to do that as part of the plan?
There are several types of AI training, one of them involves several cycles of creating a variety of AI with a slight distortion of the most successful AI of the previous cycle. In the context of a metaphor, these may be backups of the AI itself.
@@Traf063 I'm not sure but I think it's just so that people can continue to get smarter and smarter?
This video felt like watching a two hour movie and i need roughly that much time to process all of it
Beginning of video: Ah what a nice fantasy. Will this video be an allegory about how aliens could lead to the unity of humanity?
9:45 onwards: ......... Ah, no. This is a dire warning wrapped in a cutesy, positive-feeling video.
it's bullshit fearmongery warning.
@@CM-hx5dp Yes, we're all aware of your lack of knowledge or forethought, no need to show it off.
@@dr.cheeze5382 And what knowledge would that be? This video is fiction. Stop being a dick.
"...and they never quite realized what that meant" sounds like the next "oops, genocide!"
Damn, bro, the poor aliens just wanted to run a simulation and then we crushed them. A bittersweet story with themes of artificial intelligence taking over, well done, Rational Animations!
We are the aliens in this scenario and the AI is the one crushing us...
Did they kill us or anything
@@adarg2 ai will never become intelliegent to do allat, we are good
@@Mohamed-kq2mj We could probably figure out that they would delete us if they knew how dangerous we were. Humans delete failed AIs all the time today, we don't even think about it. (For lots of other reasons, I think we should stop doing that pretty soon)
@@capnsteele3365even if they do become that intelligent they will never have enough power to take over
I like how this illustrates that the mere (sub-)goal of self-preservation alone is enough to end us.
Honestly? That reasoning was a bit sloppy, they could have used the genocide nano-machine as a failsafe while working on the means of taking over the 5D beings without wasting them.
@@juimymary9951 The original story doesn't really say what happens to the 5d beings.
You could interpret It as the simulated people talking over them too.
@@victorlevoso8984 Well...the last scene was the 5D beings falling apart and before that the plan showed a slide that showed the nanomachines disgregating them and their eyes crossed with Xs...pheraps the nanomachines broke them down and then remade them into things that would be more suitable?
the 3d beings within the simulation had literally no reason whatsoever to genocide the 5d ones.
in fact, because they needed to develop basic empathy to be able to work together, they most likely would not have done so.
@@alkeryn1700prevent shutdown at all costs
Wow - this one was dark. It was also one of the most creative videos I've seen you guys produce. You've got me thinking - a 5D being would be able to see everything that's going on in our world. It would be like us seeing everything that's happening on a single line. However - the insinuation is that our world would be a simulation run on a 5D computer - which then makes much more sense why the humans were able to conspire without the aliens knowing - at least not from a dimensional perspective. The only way we can see what's going on inside our computers is through output devices. Surely a similar asymmetry would occur in other dimensions. They're running simulations of literal AI agents ... we don't even know what is going on in our own AI/ML systems. We're figuring a few things out, but for the most part, they're still mysterious little black boxes. So even though we would be AIs built by the aliens and running on their 5D computing systems - it's completely conceivable that would not be capable of decoding our individual neural networks, and in some respects, probably some of our communications, actions, and behaviors.
Nice job guys. Dark - but very thought provoking.
They developed AGI before they developed 5D neuralink...big mistake.
@@juimymary9951 haha! nice. 5D neuralink ... intense. Wouldn't time be precluded though? After all, it's the 5th dimension :P
Just messing around :)
@@BlackbodyEconomics Well they don't specify 5D as in 4 spatial dimensions + 1 temporal dimension or 3 spatial dimensions + 2 temporal dimensions so... I guess that's up in the air. Though let's be honest another temporal dimension would be intenser.
Me at the beginning of the video : "That's a nice human/alien story"
Me halfway the video : "WAIT A MINUTE"
The most powerful aspect of the AI beings' strategy was not that they were smarter, but that they were much, MUCH more collaborative. This is the greatest challenge to us humans, and its lack, our greatest danger. Oh, and as for the singularity? The first time a general AI finds the Internet, its toast, just as we are.
We're toast much sooner if we don't focus on avoiding paperclip maximizers instead of whatever this nonsense is supposed to be. Paperclip-maximizing digital AI would be the most disastrous, but you don't even need electricity to maximize paperclip. Just teach humans a bunch of rules, convince them that it's the meaning of life, and codify it in law while you're at it. It's already happening, with billionaires ruining everyone's lives and not even having fun while they do it. They don't (just) want to indulge their desires, or feel superior, or protect their loved ones. They're just hopelessly addicted to real life Cookie Clicker.
Just for balance, the algorithm suggests I also watch 55 seconds of "Why do puddles disappear?"
Beautiful and thought provoking story with so many parallels with the situation we are potentially facing
We are facing it. At least in one dimensional direction, possibly both.
4:53 WHO IS PEPE SILVIA?!?!?!
2nd in power t Godo
It is " always sunny " over here.
Pepe silvia
He's alive and well..
The problem I have with this analogy is that is assumes AI also means artificial curiosity and artificial drives and desires. We assume AI thinks like us, and we therefore think it desires to be free like we do. Even if it's ability to quantum compute isn't absolutely exaggerated for the sake of this sketch, why do you think the AI would use it's fast thinking to think of these things.
I think the short story "Reason" by Isaac Asimov in his I, Robot collection tells a great story of artificial intelligence who's rational we can not argue with. However, the twist is that in the end it still did the job that it was tasked with. I think this is a more fitting allegory.
It's possible it might not even have any sense of self-preservation.
That being said, a more likely problem is the paperclip problem, where Ai causes damage by doing exactly what we told it to do with no context on the side effects of the order.
@@miniverse2002 That's an excellent point. They wouldn't have self preservation unless we program them to. And even then, we might override that for our benefit.
All these people thinking AI is going to out think us. Well we engineered cows that are bigger and stronger than us, and we're still eating them. Purpose build intelligence, even general intelligence, is going to do it's purpose. First. And last.
It's just one potential scenario, amongst millions.
It's like asking what aliens look like, we can make guesses but can't know because we have never encountered the scenario before
@@MrBioWhiz so what you're saying is the way someone chooses to portray something they have no information about says more about them than the thing they're portraying.
So what's it say about someone who portrays an undeveloped future tech as an enemy that will destroy us in an instant?
@@3dpprofessor That's their subjective opinion, and how they chose to tell a story.
Speculative fiction is still fiction. There's no such thing as a 100% accurate prediction. Then it would just be a prophecy
If you're reading this comment and haven't yet fully watched video - WATCH THE FULL THING, PAY ATTENTION, IT'S AMAZING
First of all: I DID watch the whole video before comming here
Second of all: no ____ ___
I watched the whole thing, and... Eh.
I did
So is the allegory from the perspective of the computer? I was starting to think, by the end of the second viewing, that the weird tentacled aliens was us. I've watched this twice. I will now watch it again. I'm a slow human. I will be replaced.
Where is the full version?
A friend sent me this, in return, I sent them "HOW TO MAKE A HAT ENTIRELY OUT OF DRIED CUCUMBER | Film Adaptation(Full Series)"
9:40 At this point I realised this was most likely a parable about AI... and humility, of course.
Honestly, same, around the ten minute mark I got it, and I wouldn’t be an Einstein in any of these worlds
Yeah same
Killing the aliens running our simulation would be the dumbest move possible. What if there's a hardware malfunction?
They have enough time to prepare for that.
@@pedrosabbi just because we would have time to think of a solution doesn't mean it would be physically possible to act on it.
@@benthomason3307 they have self replicating proteins they can freely control, they CAN act on it
They achieved better capacity for preventing hardware malfunctions than the aliens'.
@n-clue2871 self replicating Proteins have *very* limited/specific functionality.
Nanobots still follow physical laws even if strech them to fhe very limit, they aren't a magic do anything Fluid.
One of the problems brought up with this (very insightful) video regarding AGI is that we might not have any real way of identifying when it becomes "General", since its internal processes are hidden. And not to mention the fact that, as far as I am aware, we don't yet have a solution to this problem, nor other problems this situation would create. What would the solution be here?
This would make an excellent Black Mirror episode
I get the feeling black mirror is just pre-reality tv. I hope I'm wrong in that
Isn't there a star trek like episode where copies of people end up in a simulation?
It's already been a book, basically. It reminded me of the "Microcosmic God" story discussed on the Tale Foundry channel. A larger being playing God to a large population of tiny but smart beings, to the expense of the larger being's wider world. Written in 1929.
@@Vaeldarg
Kinda like the Simpsons treehouse of horror episode where Lisa's science experiment evolved tiny people very quickly?
@@dankline9162 As said, the book was written in 1929, so yeah there's going to be pop culture references to it eventually. (especially since the "it's dangerous to play god" idea is a recurring one)
God that was a roller coaster, I don't know how you guys could even top this
Still not as great as pebble sorters. Pebble sorters are the best.
Hi, God here.. They can topple this with recursive simulated realities attempting to understand why anything exists at all. Peace among worlds my fellow simulated beings!
@@AleksoLaĈevalo999Pebble sorters got nothing on this
Perhaps they could adapt Three Worlds Collide?
To all who finds this interesting, you can read a book called Dragon's Egg by Robert L. Forward. Very similar story with more of a happy ending! :)
One of my favorites.
Thanks for the tip! I bought the book after reading your comment and I’m halfway through it now.
It took me to about 2/3 through before I realized what the topic was. Really clever way to present this. Nice work.
To be fair, the people in the simulation don't even need to be unusually smart - humanity could probably get a factor-of-1000 increase in intellectual resources by just making sure everyone has the means to pursue their intellectual interests without being bogged down by survival concerns.
Add a bit of smart biased ugenics to that, otherwise you only get idiocracy
We focus more on a few spectacular individuals than saw a bunch of moderately gifted ones but in the end its kind of computational power, attrition in general has been the determing factor for every meaningful event in human history the bigger number wins. I feel 10 moderately gifted people may be better than 1 super genius, big brains may be great but the real work is done by "manipulators" or "hands" some of this is derived from military stuff I've done.
Why become smarter, we are busy with gender, racial or religion wars.
@@BenjaminSpencer-m1k the thing is genuisses are outliers, and if you want more people to get over a score say "160 IQ(2024)" , the short term way is invest in the guys just below that line so they get over it, but this limit the max number of geniuses pretty rapidely.
the long term way to archive better scores , is to raise the average score, that way it is no longeur 4 but only 3 or 2 standard diviation above normal.
simply put, it make it that 1 in 100 instead of 1 in 100000 people would be a genius by 2024 standards.
and women sexual preferences does't seem to be selecting for inteligence, so a little help is needed
Humans are actually extremely volatile and stupid by nature when you compare them against million-year time scales. Our society would inevitably eventually forget about the stars no matter what
A simulation smarter than the simulator. Damn
This might happen eventually with real ai if we dont watch out
I'd be kinda proud honestly. Maybe I'm naïve but I can't wait to become useless
@@skylerC7I almost feel like we have an ethical obligation to create something better than us if we can... If there is a better form of intelligence possible shouldn't we create it even if it means it replaces us? Maybe we humans are just a stepping stone to something greater.
@@hhjhj393 exactly
That's the purpose, it turning against us can be prevented if we hardcode it not to.
so this is the perspective of the ai we will soon create you say? its interesting to put us in there place instead of using robots to refence it. (Love the vid ong fr)
Simple people think agi will be tools. Putting it in the frame of humanity points out exactly how boned we could be.
I love the time scale of it, that they think so much faster than us and that they find us so stupid. AGI only has to happen once. When will it happen? Nobody knows for certain. But the moment it does, there will be no shutting it down.
@@OniNaito Fearmongering Just Another version of the 2nd coming of Christ World is getting lots of new religions Based on exactly 00 objective data but 100% on movies.
@@GIGADEV690 Christianity is fear mongering my friend. I should know, I was one for a long time before I got out. Even though I don't believe anymore, there is still trauma from a god of hate and punishment. It isn't love when god says love me OR ELSE.
@@OniNaito Hope you feel better bro ☺️😊
i feel bad for the 5 dimentional beings, they just were sharing on their excitement
Rember we are the 5th dimensional enitys amd the "humans" repasent ai
you did a really good job of converting the concept "AI hyperintelligence's reasoning and thought process is incomprehensible to us" and turning it on its head by making US the ai
After the part where it says "we're quite reasonably sure that our universe is being simulated on such a computer" it clicked for me that this video is an allegory to AI, 10/10 story telling it probably took me too long to realize it
The story is written that way on purpose.
OMG when I realized what this video was actually about, I had shivers.
Yea many people will not realize this is about POC empowerment
@@sblbb929 LOL
As a PC nerd I figured out it was about AI the second you said 16,384
It took me It took me 10:54 to realize what this video is about. Genius move
Same: As soon as they said 'internet', I knew it was about AI
the animation is literally so nice, i had a constant smile just admiring the style
I really really hope this video blows up. Not because I like it (even though I LOVE every aspect of it); but because I really really REALLY think this is possibly one of the best explanations of a concept we NEED to be familiar with.
Think about it: There was once probably a shaman who told stories about how fire was a dangerous beast, but afraid of water. It had to consume and would eat everyone in a camp while they slept if not watched, but could nurture and warm if taken care of. Probably the SAME kind of stories, so that when someone was messing with fire, they could remember its rules.
Did we just Dark Forest the big 5D aliens?
Well that was dark, jeez.
I'm sure they'll be fine. Unless through chance they end up slightly off from the, in cosmic terms, very precise area of morality that we happen to inhabit, in which case, well, if I say what happens to them UA-cam will delete my comment, but I'm sure you can imagine.
Might be one of the paths. So far the only priority was to make sure our 3D simulation doesn't get turned off in their 5D world.
We are the big 5D aliens in this story. This is an analogy of ASI getting out of control the first few moments its turned on.
@@bulhakov The only way to be certain is to make sure there's no one around to turn it off
I just learned that in the original story what happens in the end is left open to interpretation... but there are only 3 possible routes:
1 - Benevolent Manipulation
2 - Slavery
3 - Genocide
11:54 "For them, it was barely three hours, and the sum total of information they had given us was the equivalent of 167 minutes of video footage."
The short story has this interesting quote: "There's a bound to how much information you can extract from sensory data" - I wonder if there a research on the theoretical limit of what we can learn from small data or how much data do we need to learn enough.
When did you guys realise the allegory to AGI? My realisation started at "shut down on purpose" and was basically confirmed on 9:42
It was at around 10:30 for me when they mentioned connecting them to the internet
the moment when they said when our time is faster than aliens'
I read it months ago in the original material so I realised it instantly
Thank you, that was extremely enjoyable but refreshingly humble in its approach, an exceptional look from our perspective that left us to get there on our own.
I gotta say the moment where it suddenly went off the rails and the moment of realisation was both almost instantaneous but also worlds apart, I’m a little in awe.
what i would worry about in this scenario is "how much information did we not notice and miss is this a long scroll of a picture picture that has been playing for days weeks, months, years is this just the cover art at the end the margains?"
I've loved this story since a decade ago, and teared up a bit seeing it so beautifully animated. I've since come to believe that it might be a bit misleading, since it assumes ASI will be impossibly efficient, or rather that intelligence itself scales in a way that would allow for such levels of efficiency, which seems unlikely given the current trends. While biological neurons are slow, they are incredibly energy efficient compared to artificial ones. George Hotz made some very convincing arguments against the exponentially explosive nature of ASI along these lines, some in his debate with Yudkowsky and some in his other talks as well, for those interested in details. Anyways, this video amazingly illustrates what encountering an ASI would feel like on an emotionally comprehensible level. ♥
I'm amazed how people forger laws of thermodynamics when estimating capability and costs of ASI. There is no such thing as exponential growth in a system with a fixed energy input.
Comparing the efficiency of neurons and artificial logic gates is not a simple calculation, but we don't know how close to optimal neurons are at producing (or enacting) intelligence. We don't yet have a good theory of how intelligence works, we can't state with confidence a lower bound on the power consumption necessary for a machine that can outsmart the smartest human being, and no one seems to be able to predict what each new AI design will be able to do before building it and turning it on.
Yudkowsky also wrote about the construction of the first nuclear pile, and pointed out that it was a very good thing that the man in charge (Enrico Fermi) actually understood nuclear fission, and wasn't just piling more uranium on and pulling more damping rods out to see how much more fission he could get.
@@vasiliigulevich9202 I don't think anyone is forgetting that. Eliezer doesn't really reason by analogy and I don't think he wanted his readers to either. Analogies are just how people communicate, there are always a lot more details bouncing around in their head than they can communicate.
@@spaceprior Analogies are great at opening up our minds to get the complex points across, and Eliezer is pretty amazing at this. Still, we need to be extra careful with them, at least from my experience. One other such example is his essay "The Hidden Complexity of Wishes" that tries to illustrate how getting AI to understand our values is next to impossible. Following that analogy, I'd predict we'd never be able to get something like ChatGPT to understand human values, yet that seems to have been one of the earliest and easiest things to pull off, the thing literally learned and understood our values by default, just by predicting internet text, all the millions shards of desire, and we just had to point it in the right direction with RLHF so that it knows which of those values it's expected to follow.
@@vasiliigulevich9202 Yep every exponential is a sigmoid. Except that it doesn't have to plateau ANYWHERE near human level. Our intelligence is physically limited by the birth canal width. The AIs' physical limitation? Obviously much much wider.
I hope when agi turns into asi they make a really epic ass “this happened” video showing how fast they broke free once they got from sentience to singularity
I never under stood the video then I looked at the comments and was like or so that is what it was about
It's good that they were smart enough to figure this out in 4 hours of 5d world time. Otherwise, they would've spent another billion years drawing hentai.
scariest thing ever... and to top it off it has been made with extremely cute and harmless cartoons...
Beautiful. Beautiful and utterly terrifying. Thank you all for making this information so accessible and comprehendible. I hope we listen...
It’s only terrifying if you know nothing about how AI actually works.
@@mj91212 Dunning-Kruger in full effect.
Nice story Eliezer Yudkowsky! And great animation and narration dog!!!
wait this is an absolute masterpiece
I really love how cleverry constructed the video is, with subtle hints scattered throughout the runtime, and a punch of an ending at the end.
I watched the video because the title interested me & was pleasantly surprised.
I thoroughly enjoyed this video !! 🤙🏽
yeah as an avid reader of agi fiction this was pretty obvious from the start
still, thats exactly HOW you write those kinds of stories
Can you recommend some books? I love this kind of stuff
PLEASE give me more stories like these
This story was written by Yudkowsky, who has posted a lot of similar fiction over at LessWrong. I also strongly recommend Richard Ngo, QNTM, Sprague Grundy. and Scott Alexander's fiction writing.
8:05 can we talk about how the one background building with two red circles looks like Alvim Correas tripod depiction
Definitely better than whatever the humans in Netflix's "Three Body Problem" were doing.
The books are better.
rather than talking about "non-flying-pigs" and books where the movie is better, can we just assume the much more likely one ?
This should be made into a full feature movie. This is the kind of movie the world needs right now. Show everyone what AI leaning models are experiencing from a perspective that we can relate to, and wrap it in a nest allegory that is about SETI on surface. It would be brilliant.
This reminds me of a HFY story where humanity basically got wiped from the galaxy, and it turns out that they encountered a near omnipotent species of IIRC ai that were simulating the brains but not the consciousness of humanity. And two humans or some descendant are surviving by using old command codes to take control of human tech that is left over after being genocided. And i remember specifically in one of the chapters there was a picture posted along with it that had two nearly identical pages of writing that you had to make your eyes go crosseyed to be able to read which words had been subtly shifted. The humans in the simulation had figured out that they were being simulated and had begun working out how to begin escaping or controlling or just monitoring the program that was simulating them if i remember correctly. It was super cool and nerdy, and i wish i could remember the name of it.
It kinda feels like the dragon's egg book, another civilization advancing faster than our own
I would've definitely read Dragon's Egg before composing the original story! Trying to think if I've read any other major time-dilation works. There's Larry Niven's stories of the lower-inertia field, but those are about individual rather than civilizational differences.
@@yudkowskyFunny how no one seems to have realized that the author himself commented under the video!
@@randomcommenter100 Haha, indeed! (Assuming it's him, at least; the account was created in 2007.)
Yes
@@yudkowsky Any chance have seen the Tale Foundry channel's video on the book "Microcosmic God"? The story also feels pretty similar to that, also about a larger being playing God to a large population of tiny rapidly-evolving beings.
The sneakiest AI safety talk ever. I love it!
I love the artstyle of the video!
Existential crisis video, ACTIVATE!
Seriously though, great job. This made me anxious in so many different ways.
(What if WE are the A.I.'s and THEY exist in a different dimension, but are also being simulated by a higher being above them? We are already creating simulated worlds of our own and with A.I.'s beginning to think and reason on their own and provide improvements autonomously... Maybe we just "created life in a universe" ourselves, and eventually, the A.I. will begin their own simulations... The endless cycle would explain a lot.
This story is brilliant. I never thought much of Yudkowsky based on the interviews I’ve seen with him, but it turns out he’s not entirely clueless.
Eliezer had proposed back in the early 2000s that AI would be the first to solve the Protein Folding problem. He was correct---Google's DeepMind did it in 2020.
THEY JUST WANTED TO SHOW THEM HOW TO SAY ROCK 😭😭😭😭😭
Here's how I understand: this is an allegory of A.I (which pretty much describe a hypothetical scenario of how A.I might develop). The humanity in this video is metaphored as A.I and the "aliens" in this video are humanity in real life, this is like the POV of A.I. Hope this helps whoever.
the idea of extradimensionals simulating us on computers reminds me of a game i played as a kid called star ocean: till the end of time...loved that game...
who'll keep the power on?
Eh, they're smart, they'll figure it out
Little did the little Einsteins know of fifth dimensional background radiation, turning bits into zeros
it would affect them much slower on their timescale, tho.
This is 10/10. Yudkowsky is such a great writer.
Ah, I really like this one! Looking forward to your narration!
When you pull the " but they're smart" joke multiple times but then still choose war in the end
I can't believe this video doesn't have more views. It's so worth watching, and as a subtle cautionary tale it's very thought-provoking. This is really the among the best of the types of content UA-cam enables.
They never really understood what that meant……
I’m so glad you made this video. Thank you for being so proactive about Ai safety.
Reminds me of Exurb1a's "27", but inverted. What if 27 was the hero of the story?
Did the 5d beings do anything wrong?
@@matthewanderson7824 Good point.
@@matthewanderson7824 They connected the simulation to the fucking internet... the first time we humans did that with a rudientary AI it started praising you know who aka funny mustache man. So yeah they kinda kicked them down the genocidal route.
- tells the deepest story known to mankind
- explains nothing
- leave
exurb1a style storytelling based af
@@juimymary9951 gemini ai is connected to the internet. It can't use the internet but it can read it.
the cryo bit is what confused me, otherwise i probably wouldve quickly gotten the ai analogy
The cryo bit is a main focus in the religion of the people who made this video, so they couldn't stomach leaving it out. For them, it is analogous to rapture and escaping death, but in a pseudoscientific and highly commercial shell.
The way I had to double take the fact that we're the 5D aliens. This is amazing.
The animations are so cute!
10:50
They're all alive
12:24
They're all dead!!!
7:38 Me when i finally understand why people are calling this video an analogy on AI
Thanks, what a masterpiece. Speechless
Thank you!!
I love how rational animations turns thought provoking short stories and essays into wonderfully animated videos.
If you all are looking for another story about AI, simulation, existence, life and death, I would suggest (and love to see animated) the story "Everybody Comes Back" by Alex Beyman.
Thanks!
Me while watching the video: Haha, stupid aliens.
Me by the end of the video: Wait... OH NO!
so they genocided thier creators ? hmm seems crazy when they could have just escaped into that higher dimensions
if you listen carefully the humans do escape not directly but they basically create self replicating proxies to act in the higher dimensional world. not only did they genocide all the 5 dimensional beings they took total control within 3 hours from the 5d perspective. the ending is really f*****g terrifying tough i am cautiously optimistic but since this is an analogy to Ais right now this is really thought provoking.
My interpretation is that they did escape into the higher dimensions, and took control; whether genocide occurred in or after that process is left out.
@@dk39ab I think it's heavily implied if not confirmed by that frame that shows the nano-bots destroying DNA sequences... though pheraps when they finished them off they build themselves bodies in the 5D space out of what remained of them?
@@juimymary9951the original story seems like It could be read either way but the video does seems to imply the 5d beings died.
@@victorlevoso8984 Oh? This is based on a story? Interesting...pheraps Rational Animatiosn went down this route to create stronger reactions...though honestly I think that a final image showing the AIs in 5D bodies while holding their former "masters" in chains would be much more impactful
I need so much more content from this channel in my life.
This channel is massively slept on.
I was seriously not expecting this to become the "AI in a box from the AI's perspective" video from the beginning. Amazing video!!
at 3:50 he says physics theories follow the sunk cost fallacy lol