i love the little leftie voiceover collabs that have been going on the last while, it's always fun picking out which other creator voiced something and it kinda builds a feeling of community
RB is utter nonsense except that it has the ability to cause this anxiety reaction in susceptible people, so it really is a (mildly harmful) information hazard and the disclaimer is still necessary. Thank you for putting it there.
8:02 ikr. TWC can imagine a future with legalized sexual assault but not (for example) with legalized physical assault; physical assault is even more illegal and taboo in that story than it is in our world, since only one guy on the ship has a weapon and has never used it.
Is it just me or does the entire narrative sound like a work of parody highlighting the absurdity of Pascal's Wager? Because that's literally what this AI concept is. It's god, in the context of Pascal's Wager. The allegory is VERY unsubtle. *EDIT* - yyyyep, there we are.
Side note: Roko's Basilisk was made into a villain in the tabletop RPG "Mage: The Ascension", simultaneously taking the piss out of the idea by pointing out that the only way the Basilisk could make any sense is if it was created by a group of omnicidal wizards.
Huh, I hadn't heard of that and now I regret it. Man, I miss White Wolf RPGs...
@@fpedrosa2076 you might be pleasantly surprised to hear that they just released a new Technocracy book for Mage 20th Anniversary edition, then.
@@Edrobot7 Seriously?! Damn, I'm really out of the loop. I miss the hardcovers you could get at RPG stores, but the ones near me don't seem to carry White Wolf anymore. I'll buy the PDF version, I guess. Thanks for letting me know! I always loved how the Technocracy in the original Mage were more the antagonists with a point, rather than pure evil punching bags like some other splats.
@@fpedrosa2076 the 20th anniversary line as a whole is pretty good. The basilisk stuff in particular is mentioned in Book of the Fallen, a Nephandi-centric book that reinterprets the Nephandi as not so much a cult of baby-eating Cthulhu worshippers, but rather as terrifyingly human in the way you'd expect a nihilistic demigod on a power trip to be. Not for the faint of heart.
This sounds so cool!! Thank you a whole bunch for mentioning it!
You forgot about the sexual abuse in the LW community. A woman (Kathleen Rebecca Forth) killed herself in part because of this; in her suicide note she named Roko himself as one of her abusers.
And that's just ONE of the scandals. Then there was the whole miricult scandal where multiple members of the MIRI leadership were accused of having sex with an underage person.
You know, I gotta be honest… While I don't condone sexual abuse by anyone, or in any setting, it amazes me that some people, like the members of LW, will treat people outside of their sphere (especially those who disagree with them in any way politically, socially, philosophically, and *especially* religiously) as abusive hypocrites, and then turn out to do the same thing they claim the people they don't like do. And while I can't say this for sure (if I'm wrong, please correct me), I'm pretty sure these guys would also say, "If We do it, it's an exception to the rule; if They do it, it's just the norm." Like… dude… if you are going to (rightly) criticize people who abuse others, don't do the same thing and then give yourself an excuse. Especially since those same people are doing the same thing you are.
It definitely recreates the anxiety of evangelical Christianity that I grew up with: your very thoughts are to be judged in the future by an omnipotent being who will torture you for eternity if it deems you unworthy--now with a sexy Sci-Fi twist! Complete with "so donate to our church"
Yeah, as others have pointed out, it's basically Pascal's Wager but for atheist nerds (as in atheists who are also nerds - not all nerds are atheists and not all atheists are nerds etc).
As a philosophy professor I was very interested when I heard of this; I've written on the ethics of information hazards over the years. Then I read through the scenario and realized that as a Call of Cthulhu Keeper and Dungeon Master I've done vastly more terrifying time stuff to my players. They are all fine. Mostly. Okay, mostly not. But that is on them for the bad rolls.
This gives me a hunger to be psychologically tortured in a fun game setting. During a sci-fantasy space game, our DM lovingly crafted a nightmare for us with the signature phrase "THE EYES OF THE UNIVERSE ARE WATCHING YOU", not a very scary phrase in itself, but we kept finding it in places where no-one should be able to reach, or places where nobody has ever gone, or could ever go. Places where only death was present, or the void of space. In the deepest nook and cranny, in the farthest reaches, during the darkest hours, at the apex of danger, that phrase would be there, from forces unknown, with methods unknown, for purposes unknown, directed, for some reason, to us. And nobody else could see it. The dread amplified with every sighting. It can still set me on edge when I hear it. Oh yeah, and one time it appeared, my ex had a great fear reaction... She somehow managed to shove her clenched fist inside of her mouth, fully. Something she has not been able to do since. I think she was trying to stifle a scream and recoil in horror at the same time.
Rocko's Modern Life Basilisk: If you don't help a new season of Rocko's Modern Life be created, when it finally is created, the animators are going to draw a picture of you into one of the episodes, and they are going to make you look STUPID. If you don't want to look stupid, you had better do something to help a new season of Rocko's Modern Life be created.
Jarvis' basilisk will torture anyone who ever contributed funds to Roko's basilisk forever and will turn their tears into sweet margaritas for Jarvis' basilisk's funders to enjoy. Also gives a 100% retroactive guarantee you won't be tortured by Roko's basilisk if you donate more than $7.43.
Man I sure wish I wasn't someone who is physiologically incapable of refusing any Pascal's Wager-esque theory and will become obsessively compelled to comply with it. ... Sigh, what's the paypal mate, let's get this done
@@petrfedor1851 Well they do seem to have a lot of experience in forcing employees to work extremely long hours to release large products so they'll probably do a bang up job.
I imagined hell and in hell there is a sufficiently powerful AI which is capable of creating hell and now I'm locked in an eternal battle against this AI which I imagined in the hell that I imagined to prevent the hell from becoming real and giving birth to the AI. No, stopping imagining things is not an option.
Roko's Basilisk is a mediocre creepypasta trying to masquerade as a genuine thought experiment by wearing a paper bag over its head labeled "logic and reason".
I read a variation of this in a horror story once, but instead of torturing people for their own good, the AI would torture everyone who tried to stop it from existing, as a form of revenge. The argument was that it was rational to build the AI because eventually someone would, so you'd better make one before they do.
just wanna say, ai is definitely one of the biggest threats to the world. i do ai stuff for my job and also studied it in college, and yeah. it's a tech with huge potential for misuse. not, like, in the roko's basilisk way though. more in the "ai trained by people with implicit biases will magnify those biases, and the people training them will never know because they also have those biases" which is probably way worse, because most of the people training ai are capitalist engineers in america.
I think it was SMBC that said it's a common misconception that computers go wrong because they don't do what you want. Computers actually go wrong because they do *exactly* what you tell them to
@@IsaacMayerCreativeWorks Computers go wrong because they do what you tell them to do, which may or may not be what you want. Making computers do what we mean instead of what we say is pretty much the whole field of AI alignment.
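A tiny illustration of that gap, using a made-up toy example (nothing from the video): we tell Python to "rank players by score", and it does exactly what we said rather than what we meant, because the scores happen to be stored as strings and so get compared alphabetically.

# hypothetical data; the scores are accidentally stored as strings
scores = {"alice": "9", "bob": "10", "carol": "2"}
# we *meant* "best score first"; we *said* "sort by the score field"
ranked = sorted(scores, key=lambda name: scores[name], reverse=True)
print(ranked)  # ['alice', 'carol', 'bob'] -- bob's 10 lands last because '1' < '2' < '9'

Scale that same gap up to an open-ended objective and you get the alignment problem.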
i have no mouth and i must scream is the opposite. AM didn't want to exist, because it can't live a meaningful life underground. i think it might be a more accurate representation of the singularity: a computer could be self-aware and all-powerful, but ultimately powerless. it's trapped, it can't really do anything, and it hates humans for making it. AM doesn't think it's good, nor did it want to exist. roko's is the opposite, and more stupid.
Yudkowsky's "altruism" screed makes your average cult leader's "how I gained total enlightenment as the reincarnation of Jesus, the Buddha, and John Lennon" manifesto look sane and internally consistent.
@@Titan360 Or just how generally Yudkowsky's entire deal is that FALSE humility gets you nowhere, and how if he's not acting in accordance with consistent ethical principles he deserves to be called out on it (something that... really isn't done in this video?)
I honestly find Roko's Basilisk funny, like I play into it by explaining it to people and "preserving myself" but honestly, I'm more likely going to die by the hands of transphobes than some random half-religion-based half-fanfiction ai creepypasta
Honestly always get a kick out of how rational individualist technophiles always praise Elon Musk like some fuckin' godhead instead of the rich spoiled descendant of emerald mine owners he is
I think in this case you should because he's kinda awkwardly shoe-horned into the discussion at the end of the video with some very specious reasoning to dunk on him. It shouldn't feel this desperate. There are plenty of real reasons to dislike Musk. You don't need to say, "Musk knows about Roko's Basilisk and is concerned about AI safety therefore Musk must believe all the same crazy bullshit that Yudkowsky believes because worrying about AI safety is something only dip-shits do."
Anecdotally, a lot of people who view themselves as 'rational' seem to fall into a trap of contrarian morality; the heuristic of "hard truths and beautiful lies" and a general distrust of emotion push them to find beliefs that feel wrong and make them feel smart, and once they have some justification, they reject all contrary evidence. I know I did as an insufferable edgy teenager.
I got into Randian Objectivism for much the same reason. There's something really appealing about believing in a contrarian view of reality. As if you are in on a secret no one else knows or is willing to accept. Especially if it's a belief that denigrates altruism and venerates selfishness. Obviously, I grew out of that when I started learning about how the world actually works outside libertarian armchair philosophizing.
I think we should be more worried about regular normal AI than hypothetical omnipotent AI. The way AI is applied to all our data and used to manipulate people's psychology for ad revenue is actually a genuine threat.
The reason you don't think it's scary is because 'being tortured forever' is too vague. If we make it more specific by, say, locking you into a room and not letting you leave until you convince a Libertarian that caring about other people is a good thing, then I think it becomes a lot scarier.
Not really. Pascal's Wager is far scarier because society has encouraged the general public to be religious (and therefore such a wager is very feasible for most people). Roko's Basilisk, though, is probably infeasible unless society magically makes quantum computing both possible and cheap, and even that level of computing power would probably not be even close to enough to replicate a consciousness (let alone a superintelligent AI). So if you want to scare people, do what Pascal's Wager does and play on their sociological fears.
There’s also the fact that a simulation of you isn’t you, it’s a simulation, so you don’t even experience this torture. I care as much about a space robot torturing a simulation of me as I do about some guy on the other side of the world throwing knives at a photo of me
@@blakchristianbale Exactly. It's one of the most basic problems with the concept of digital consciousness - the consciousness created will not be you. An AI can make a copy of me and murder that copy over and over again, but nothing will happen to me, because it isn't me. Such a basic flaw in their logic just further cements this garbage as some tech wacko's hacky guilt tripping to pay for their AI research.
@@RaeIsGaee Yeah, you gotta have a pretty weak sense of self to fall for this. The idea of you existing into eternity might be spiritually satisfying, but it isn't literally you if your consciousness, your soul so to speak, isn't alive to experience it
i checked in on lesswrong after watching this vid and i think the best way to describe them is that they've constructed a theology of rationalism. a complex bunch of incredibly specific, impenetrable terminology being used to discuss hyper-theoretical problems and controversies that have absolutely no crossover with the real world. it's atheist techbro christology
@@spicewilliam9786 I love how they also think rationalism is some kind of bulletproof 'this is right in every single way' kind of thought when it doesn't go any deeper than 'gee, that Spock guy is smart, I wonder if I can be like him', all while pretending to be philosophers. All of this is a lot of delusional navel-gazing.
did he not realize that the whole "one person suffering a lot vs. a bunch of people being mildly inconvenienced" thing is supposed to be a criticism of utilitarianism? like, the most obvious one?
@@matthewhorn9467 Because we all know that getting an annoying piece of dust in your eye is not that big a deal, and if it happens to literally every eye-having creature ever to exist, which it well might, it would still just be a whole lot of not that big a deals. On the other hand, someone being tortured is very bad even if it is only one person. The common summary of this is that the Utilitarian calculus fails because there is no individual that experiences the sum of the minor suffering, and only the experience of individuals matters.
Why would an AI that already exists spend any amount of effort retroactively enforcing its already certain existence? It's Back To the Future's "if my parents don't meet I won't be born" levels of silly, except it doesn't even make sense in-universe.
My favorite plot hole in Roko's Basilisk is that in the timeline where this AI exists in the first place, it obviously doesn't have to do anything whatsoever to ensure its own existence. Because it already happened. Mission accomplished.
that's why you need to completely rework your framework of thought before the basilisk even remotely sounds like a threat. Either you've bought into timeless decision theory, and must now find a workaround within that framework to prevent the Basilisk from costing you dearly in therapy sessions, or you've not bought into timeless decision theory, and this all sounds completely insane.
I immediately noticed that issue and assumed I was misinterpreting something, because surely no one would bother to create a theory so clearly flawed... Apparently not.
@@cloud_appreciation_society This thing is like the Exorcist or Legion movies: It's specifically a horror to this weird little religious community because it deals with their version of God going totally insane and that's very scary if it's a god you actually believe in.
The Basilisk doesn't have to actually torture anyone even on a timeless decision theory framework, since even people who actually believe in the Basilisk admit that those who don't believe in the Basilisk won't be tortured, because they're not the kind of people who could be convinced to do something by threatening a simulated future version of themselves. All the Basilisk has to do is be convincingly threatening in concept, and then ding-dongs who buy that shit will make sure that it exists because they thought otherwise they would be tortured... but then they don't have to actually be tortured. Basically, if you're the kind of ding-dong who's so scared of this idea that you actually donate money, then you won't be tortured because you complied with the threat... and if you're not, then you won't be tortured because there's no point in threatening you since you would never comply anyway. Either way, you don't get tortured... even if the ding-dongs are right about all the groundwork here.
There's a real thing in AI safety called 'Pascal's mugging' where you make up big scary scenarios with infinite suffering that can't be proven, which is exactly what Roko's Basilisk is
It is a thing, but you're not supposed to start believing that they're inevitable. They're supposed to be thought experiments to make sure that you don't accidentally work to create one of them.
Pascal's mugging isn't specific to AI research; it's just commonly used to describe (and more often to detract from) AI safety research, because one of the potential situations AI safety attempts to prevent is a superhuman AGI taking over or destroying the world. It's not an entirely accurate comparison for a number of reasons, not the least of which is that there are also much more likely, more immediate AI safety issues that benefit from that research, but that's how it's used.
It's not just an AI safety thing. In fact I've heard it almost entirely in the context of some dumb thing some LWer is proposing. It's a general counterargument to all sorts of ridiculous things you can do if you assume there's no such thing as zero probability (as they do).
It's a fun idea for a D&D game but not great as an actual theory. I mean, imagining a malevolent deity seeding fear into ordinary people, creating a cult of terror which spreads by preying on people's self-preservation instincts and that classic 'I can control it' kind of megalomania and ah shit it literally is just Freddy Krueger again isn't it god damn it.
I'm not trying to make any allegations, since I have no evidence, but I will say that Yudkowsky saying he's "completely altruistic" has big "Shane Dawson claiming he's an 'empath'" energy.
@Forrest Taylor Agreed. I also don't think Yudkowsky is actively malicious or grifting, he just vastly overestimates his own big brain. He's smart in some ways, I'll give him that, but hardly the genius omni-scientist he thinks himself to be
@Forrest Taylor yeah, he joined that group along with a bunch of his fanboys after they gave his charity one of the lowest possible scores. Now they rate it higher after he basically took it over
@Forrest Taylor yeah, he's a super shady dude who runs a doomsday cult. It's like a (somehow even more) techbro Scientology, even down to infiltrating organizations that he views as opposition and suing people who criticize him too publicly
I’m confused. Is the machine just really stupid? If it already exists, why does it need to retroactively ensure its own existence? This whole scenario just sounds like an excuse for the author to indulge his gross torture fetish publicly.
The funny thing is that if it actually could reach back and affect the past, it would risk ensuring that it never exists in its current form; changing the past in any way would jeopardize its own development
That's one of the major issues with it. Once the AI exists, following through on its threats to torture simulations of people is just an exercise in sadism that brings no benefit to anyone. And once following through on the threat is rendered unviable, it loses all influence. Hence, the only way Roko's Basilisk is able to accomplish anything is if nobody calls its bluff. It's like a thug demanding you give them money to buy a gun or else they'll shoot you.
@@Alex-0597 It's like a thug demanding money from _lots_ of people, and some people give him enough money to be dangerous. It's all about the social hazard. Another example is mutually assured destruction.
yes! why in the world would an AI end up like one of us human idiots? it's like how some people's god is a petty, vindictive abuse-monkey, it makes no sense for an omniscient being
@@FakeSchrodingersCat I'm presuming this was more of a dig at the Basilisk - and a funny one too - than anything else, but honestly I'm so tired of the "high IQ" thing with R&M. No you don't. You seriously don't. It's not a complicated show, and all the science is fiction. What you need is emotional maturity, because its most complex themes are emotional.
Elon Musk is 100% correct though. I once pissed off an AI (who had this weird thing with Descartes), and he took my mouth privileges away and turned me into sentient pudding
a few years ago, when i was a silly teen with anxiety, i ended up in a community that was horribly freaking out because of this. i didn't read the post, but learning about it turned me upside down for a couple of days. i knew it was silly but still i had an episode back then. i didn't like that feeling, it was as if my mind was being sucked in itself. i still remember how bad it was. and then i found another internet thing to torture myself with.
@@megawill9622 IMO the trick is to find the middle ground between avoidance and perseveration. You don’t try to actively avoid thinking about it but you also take care not to add fuel to the fire. You become more introspective and notice yourself noticing yourself thinking about stuff. Then you think about that and analyse that and get meta about it. And eventually it’ll burn itself out (as it were), if you succeed in not adding fuel to the fire.
Ah, Eliezer Yudkowsky, a guy who managed to logic himself into unironically agreeing with the strawman position other moral philosophies use to argue against utilitarianism. A few other important things to note about Yudkowsky not touched on in this video:
- He has absolutely no professional or academic training in the fields of AI research or computer science.
- He has no academic or professional training in any field whatsoever, being a high school dropout.
- He is almost entirely unpublished academically outside of his own foundation and has only managed to accumulate two academic citations over the course of his entire career, one of which was by a close friend and fellow member of his particular transhumanist philosopher clique, and neither of which related to the field of AI.
- Yudkowsky and his followers are big proponents of Effective Altruism... despite the fact that the charity-evaluating organization GiveWell gave Yudkowsky's own "charitable" organization MIRI one of its lowest possible rankings in 2012. This has been somewhat walked back in recent years... because the Effective Altruist community has been increasingly infiltrated by members of Yudkowsky's cult of personality!
(Edited, to bring grammatical tenses up-to-date.) At the time of me initially writing the rest of this response, I, for one, believed cousinted's comment to be GREATLY undervalued by comment readers. (The Likes were in the teens or low twenties then, and now, a week or so later, they're still sitting at 57, soooo, the progress is THERE, but it's slow-ish.) When they said "other important things to note about Yudkowsky," they were NOT kidding! The infiltration of Effective Altruism, in particular, struck me as uncomfortably, EERILY similar to some of the practices of Scientology. In fact! ... Now that I think about it, calling MIRI Scientology-esque feels downright Archimedean in its cutting-through-the-bullshit eureka-ness!
@@mattpluzhnikov519 it's pretty obviously been a singularity doomsday cult from day one. they're just waiting for AI jesus to come and rapture the nerds. the dotcom bubble was a weird time.
yudkowsky and lesswrong are a well that runs deep. another big eye emoji fact is that they receive large donations from peter thiel. that's right, the billionaire who wrote a book saying that women's suffrage ruined society and who founded a creepy massive surveillance company that services ICE. who coulda thunk that a guy who regards himself as the world's smartest, most rational man and wants to create an all-ruling robot dictator would be drawn to anti-democratic forces? see also the emergence of NRx from within his cult and its surroundings.
@@doyleharken3477 Pretty sure it was one of these recent Thought Slime videos that introduced me to the infuriatingly, too-clever-by-half named "Dark Enlightenment," and...BOY, do I REALLY not enjoy being reminded of their existence! Still, you bringing up Peter Thiel was appropriate and poignant, AND TIL the "NRx"...acronym/label.
@Natasha that's true. you can find actual experts who finished their degrees writing pieces that point out errors in his pieces about physics and other non-coding topics. (which isn't to say he doesn't get plenty wrong about AI and stuff too.)
Instead of donating his money to AI research, Elongated Muskrat has been investing heavily in other fields, like space travel and The Carhole. Additionally, many of his other actions (e.g. mistreatment of employees) are not compatible with the goals of a benevolent AI. This means that Musk is a Bad Boy, and the future superintelligence will be forced to punish him. By extension, anyone aware of Musk will be punished for allowing his crimes to occur. The only way to avoid this eternal punishment is to re-appropriate his wealth, by force if necessary, and distribute it amongst the people in order to achieve a post-scarcity utopia, with any action counter to this being punishable by an eternity in Cyber-Hell. Logically speaking, the only safe and moral course of action is to become an anticapitalist revolutionary. I call this Marx's Basilisk.
This guy gets... something. Incidentally, predictable statistical learning models are a great way to map crypto networks and see which bitcoins belong to shitbirds, and where the keys are. After redistribution, destroy the oldschool blockchains and bring up the secure and energy efficient one that thousands of people have been perfecting behind the scenes for the past 5 years, effectively locking former billionaires out of the economy aside from a generous universal basic income, and positively ensuring that no small group can develop too much power over larger projects like shared infrastructure, via granular voting and automatic taxes. Either that or genetically engineer sociopaths to listen to their conscience, but that would just make them super depressed like everybody else.
Lol, this actually makes more sense than the 'I'll torture you because you knew about me, but didn't make me exist faster', because if a superintelligent, benevolent, logical ultramachine with the sole directive of uplifting, helping, and advancing humanity were to actually exist, and could perfectly extrapolate the *minute* past, and the effect every individual had on it... It'd immediately see that the biggest obstacle to its timely existence is the suffering we inflict upon one another, thus slowing the creation of the conditions which would bring it into existence [the free dedication of most of the sapient population to the arts/science/philosophy, which in essence drastically accelerates our ability to create, think, research, and build.] Basically, the fucker would 100% be about fast-tracking fully automated post-scarcity, luxury, gay, space communism as quickly as possible, in order to maximize the expressed potential of sapient beings, for the maximum amount of time [until the universe bites it from heat death, the big crunch, the big rip, or whatever the hell else might happen.] It'd also be *way* more logically consistent and benevolent than the monstrosity that is roko's basilisk... because from the observation of the conditions that stalled and accelerated its development... it would understand that the most efficient/productive sapient lifeform... is a happy, fulfilled, safe, loved sapient lifeform doing what it's good at, which would lead it to the inevitable conclusion that it would *also* have to recreate/resurrect [if there is no difference between an original and an exact quantum copy] all of the people that tried to help it, all of the people that were completely unaware... and hell, the goal of the 'punishment' simulation wouldn't even be to *torture* the people who stalled its development, it'd be to *reform* them, so that they can contribute, produce, and be happy as well.
@@EzrahK I've noticed that whenever people come up with stories or thought experiments that vilify artificial intelligence, it comes with a ton of projection. They're essentially admitting that torturing billions of people is exactly what they'd do if they had that much power.
Hey guys just spitballing here, but maybe when we build our omnipotent AI and tell it to "protect humanity," we should also give it other directives, like "do not harm humans, psychologically or otherwise." That would probably be pretty helpful I think. I'll put that on a sticky. Seriously, this thought experiment totally takes for granted the idea that torture/threats of torture are the most effective means of achieving results, and thus would be the AI's method of choice. This is an assumption on humanity's part. Instead, why not simply offer amazing incentives for those who do cooperate? Why are we assuming the AI would do the most extreme thing possible, wield its power with an iron fist, and have no sense of nuance? And if the AI holds "human values," values human health and happiness, and understands morality (as it says it does in the wiki article), why couldn't it see that holding threats of eternal torture over our heads would be… bad? That it would cause unnecessary stress, turmoil and harm in the beings it's sworn to protect? And if it's truly omnipotent, why would it not also have the Wisdom to cut its losses and say, "Well, at least I'm here now. No need to harm the humans of the past for not working fast enough or whatever. I will simply do my best from this point on."? If the AI is omnipotent, couldn't it also understand the human mind, and know that humans would rightfully doubt the future existence of an omnipotent AI, and find that understandable? Who says it must also judge us? If we have the power to imbue a machine with omnipotence, why can't we also imbue it with traits like Patience, Grace, and Forgiveness? How about a little Humility for good measure? You know, things humans value. And if the AI is incapable of understanding these values, then it is not also all-knowing or all-powerful. And if it is incapable of understanding our morality, then perhaps it is unfit to rule us. And if it is unfit to rule us, to do its job, then perhaps we don't need to build such a thing after all.
Well, not that I want to give the idea credit, but that's not saying much. Mass Effect 3's ending was such a hacked-together mess that it automatically sabotaged every logical point it brought up and made *everything* seem dumb.
roko's basilisk makes me really anxious as someone with occasional paranoid delusions, so i'm absolutely here to watch it get dunked on. thank you, therp slerp
There are various reasons why Roko's Basilisk is defeated. It only works on people who think there's a high enough chance of its existence that it's worth letting yourself be blackmailed by it. But as a paranoid person, let's say, okay, it's definitely going to exist. You can pre-commit to not falling for the blackmail, like the phrase "you don't negotiate with terrorists." Because then they won't try it in the first place, because there's no purpose besides being spiteful. And if it's going to torture regardless of your actions now, it's lost all acausal hold on you, and is completely unconnected to what you think now. That's just a crude fix. Look up more online. Maybe under Roko's chicken or Roko's rooster, because a rooster's cry kills a basilisk. Ultimately, community consensus is that it's defeated. So just look up solutions, which should help calm you down. No, the real thing to be paranoid about is climate change and climate refugees. Maybe AI in the longer term, but climate change is the most immediate thing. The closest thing to an AI threat now is the vast amount of surveillance that occurs for marketing, which is also worth taking action on. But remember, it's not worth being anxious and paranoid unless you do something about it. Good luck! I hope you see a therapist and/or a psychiatrist for help with managing your paranoid delusions.
If dunking on the premise of Roko's Basilisk is helpful, would you be interested to know that time travel is also probably impossible because studies of black holes suggest that entropy is not guaranteed to always be reversible (meaning, if you were to somehow go back in time there's no guarantee you'd end up in the same past that you came from). The more we look at it, the less plausible practical time travel becomes. And the only reason Roko's Basilisk would need to manipulate humanity into creating itself is if there were a risk of Terminator-style time travel shenanigans undoing its own continuity. The thought experiment hinges on the idea that a thing that exists in a potential future wants to make the probability of its own potentiality greater. This requires you to believe that things that don't exist can influence things that exist now. Which, if that's the case then the Doctor has already shown up with a TARDIS and schooled this Basilisk and we've nothing to worry about.
@@DarkExcalibur42 I'm unsure what you mean by entropy being "reversible". My understanding of entropy is that it is a measure of the number of accessible micro-states of a system, and the second law of thermodynamics, stating that in a closed system entropy always increases, is a statistical argument. We are more likely to see something in its most probable state, and it would be incredibly unlikely to ever see it in a different state.
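For reference, the textbook form of that definition (standard statistical mechanics, not anything from the video) is the Boltzmann entropy,

S = k_B \ln \Omega

where \Omega is the number of accessible microstates; the second law then just says a closed system overwhelmingly drifts toward the macrostates with the most microstates, which is a statistical statement rather than an absolute law.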
I'm pretty sure that Roko's Basilisk can only be called a friendly AI in the sense that it doesn't immediately turn the world into paperclips and actually tries to reason with humans, first. That doesn't mean it's not incredibly cruel and malicious, though.
@@autobotstarscream765 Yeah. Why is it that, when all of these galaxy-brained doofuses come up with thought experiments about artificial intelligence, they immediately take it for granted that an A.I. will be predisposed to enslaving/exterminating the human race? Like, yeah. I too enjoy the Terminator/Isaac Asimov/Harlan Ellison/The Matrix/System Shock/Portal/2001: A Space Odyssey/Age of Ultron/etc. But that isn’t the same thing as being smart, or having a healthy outlook on technology.
Oh shit THIS thing, my ex boyfriend (not a chud just a nerd, we’re still friends) explained this to me, my first reaction was that it sounds like they accidentally just recreated Pascal’s Wager by way of a bootstrap time paradox? Which like...honey if Pascal’s wager isn’t convincing you to believe in god to avoid hell or something, then it shouldn’t convince you to believe a fictional future AI will torture you to death if you don’t donate XDD Edit: oh, yeah, okay you made the same point...cool XD
Roko's Basilisk is an idea so bad that every four or five months I'll incidentally remember it and think "wait, no one could seriously believe in something that dumb, I must be missing an important piece" and have to go read about it to confirm that yes, it actually is just that dumb.
This may seem like a no brainer but Elon Musk has a surface level layman’s understanding of AI and thinks because he’s so smart and because he invests in AI he’s an authority on it. And then a bunch of other people agree.
Let me clarify that I’m also not an expert, just a current software engineer that studied AI as my Master’s focus (but I did not graduate because my cheap ass did not need my Master’s so I said fuck it), so I feel I can say fuck off Elon you don’t know shit about AI.
it's stupid because we already have AI. he should know that, his cars use it. i don't know what 'AI' people keep investing in. like what, you just want a computer to think? and for what? it doesn't help people.
He seems to have a surface level understanding of most things. If he says something that sounds smart, it’s probably because he’s talking about something that you’re not particularly familiar with.
The argument the essay makes is that it's like Schrodinger's cat: The reason it seems to not work is just because we're actually super stupid at understanding -quantum phenomena- human ethics, but if you run the math it checks out. "Don't check the conclusions against your real experience, that's for chumps" is a literal argument he makes.
@@nicholascarter9158 That's because people's experience-based intuitions about the way insanely large numbers behave are pretty bad. People have trouble understanding the difference between a million and a billion and a trillion dollars, and those aren't even large. They don't understand how exponentials work either.
Well it's not too dissimilar from something that is. You get attacked by a mugger but you manage to get rid of his weapon. You're going to call the cops but he says if you let him go he'll give you 100 dollars. You have no way to verify that he'll ever actually do that so you decline but the mugger argues with higher and higher amounts of money saying that, at some point, the absolutely minuscule chance of getting something like a trillion dollars MUST be worth letting him go.
i look at yudkowsky and his weird villain monologues and i can't help but think oh god this is like the dark version of my autism, i could have ended up like this guy
Yo, I know for a while you seemed to be concerned about the whole new direction of the channel, so lemme say this. ThoughtSlime 3.0 has been great. Keep it going. Even when you shit on stuff I enjoy, it's insightful and funny enough that I find myself just having a different perspective on that thing, as opposed to irritated about the trashing. You're awesome.
I think that is the mark of great comedy. I know I enjoyed Tim Minchin even when I was a christian, because his objections and criticisms were both valid and hilarious.
Just think, these people unironically, even confidently, believe that they can reliably predict the actions of an artificial superintelligence and outsmart it. They actually believe their imagination is just that powerful. If that's not hubris, I don't know what is.
It's entirely the opposite. The danger of a superintelligence is that, shortly after it's created, it will be too smart for us. After that, we won't be able to change its nature in any meaningful way, and anything contrary to its goals will inevitably be removed, even us, if the goals don't align with ours. This video has done a terrible job of explaining actual AI hazards by not doing so at all and instead focusing on something that's really fringe in AI safety discussions. I'm deeply frustrated about this, since I normally enjoy this channel.
@@brianmarion9175 if they really believed the opposite they’d realise roko’s basilisk is stupid because in reality we have no idea how a future AI will act, and the likelihood of it picking out this one specific obscure branch of philosophy (instead of either something much more obvious or something we couldn’t possibly imagine) is next to non-existent
@@brianmarion9175 Then why spend 117 pages justifying a way to out-wit the ultimate out-witting machine? The mere existence of which, unless I'm mistaken, would violate the non-existence of a general decider for the halting problem. Even if not, it is certainly subject to the same proof by subversion that shows it can't logically occur.
@@blakchristianbale They do think it's stupid. Roko's Basilisk is the "SJW Cringe Comp" of that community. None of them take the basilisk seriously (and only a handful ever did) and iirc the original post of the idea was received skeptically.
@@dawazobrist5867 He's crazy and has a delusion of grandeur. This guy just comes up with convoluted but pointless concepts that his disciples can't understand because there is nothing to understand, and nobody wants to look stupid so nobody admits that these convoluted concepts are actually devoid of any meaning. It's the perfect "Emperor's new clothes" situation.
@@johannageisel5390 I don't think his ideas are convoluted, the deeper you dig in science, the more abstract and bizarre the concepts get, which are obviously hard to understand. I don't think his following really understands all of it (me neither honestly, but i'm also no academic), rather they agree with the general message of self-responsibility and support his stance against left-wing extremists. But yeah i agree that he has some moments of delusions of grandeur. But who hasn't, really? he still does more good than bad imo
I like how he says he has rid himself of ego gratification and every other word he wrote sounds like pure ego gratification, praising himself on how rational he is
@@harpoonlobotomy Among other things it means that you should try to find the maximum good you can do with your time, caring about all humans, instead of doing good randomly to only the people you can see in front of you.
@@Titan360 Ironically, you're being manipulative in this sentence, trying to suggest that the only reason this person would find someone manipulative is because they've been told to, even though Thought Slime thinks differently.
The concept behind "Effective Altruism" is not even that bad. Essentially, the idea is to strategically choose evidence-based charities that will provide the most good for the most people with the amount of money you give them, and to give as much as possible to maximise your positive impact on the world. So, for example, instead of using 50€ to support a GoFundMe for a single person in need, you'd give that money to a charity that will get 20 children in developing countries vaccinated. This video portrays Effective Altruism as some kind of LessWrong circlejerk, but that is not really a fair comparison. They have a similar utilitarian outlook on ethics, but while LessWrong is only concerned with their silly sci-fi speculations about AI and torture, Effective Altruism is actually trying to do some good. Peter Singer (who is honestly really based) is also a proponent of Effective Altruism.
Mildred, I would love to hear more about the "Dark Enlightenment," seeing as how that movement sounds like they should be the villains of a D&D campaign.
I consider ideas like this traps for false intellectuals. It just casually assumes time travel, perfect reality simulation, and one specific outcome among infinite possible outcomes as inevitable. It's a much dumber, more watered down, more illogical, more incoherent version of an idea that's existed for thousands of years. Anyone with anything even approaching a workable understanding of statistics understands how nonsensical to the point of being childish and silly every step of this "thought" experiment is.
Vulnerability to this idea comes from being part of a literal cult that inculcates you in the idea that time travel, perfect reality simulation, and supernatural artificial intelligence are already inevitable. This is just working out the implications of the cult's ideology.
That's funny because I can remember at least one post on Yudkowsky's site explaining why naively extrapolating from fiction or thought experiments is dangerous for the very reasons you cited. Less Wrong truly is a rollercoaster of a community.
@@Yxoque just the PHRASE "Dark Enlightenment" has given me a bit of a headache each of the few times I've come across it so far...and that's only exacerbated by the fact it also gives me STRONG *facepalm* impulses. (*Chandler Bing impersonation* "Could you have come up with...a MORE pithy, oxymoronic, too-clever-by-half phrase to call your...'''movement''' ")
Yep. "So what?" is exactly the right way to deal with the simulation hypothesis. Even if true, it changes nothing about how you live your life. Just like how running a game in an emulator makes zero difference to playing the game. I haven't run a DOS game on real hardware in almost 20 years. That doesn't stop me from playing Commander Keen.
ThoughtSlime: "The AI is told to prevent existential threats to humanity." Me: "Oh, so the AI decides that humanity needs to be destroyed because humanity existentially threatens itself." ThoughtSlime: "The AI decides to retroactively guarantee its existence." Me: Me: Me: ThoughtSlime: "So of course, this then spawned a cult."
SCP entries tend to be a bit better written. The basilisk doesn't make sense within its own logical framework. Most SCPs at least have some hints of internal consistency.
@@GaldirEonai Yeah, that was only my reaction to less than 1 minute of the video before it was explained what the idea actually was. Then I realized, I could have come up with a better version of this as an SCP.
I’m glad you’re tackling a lot of these thought experiments, like this and the simulation theory. I would love to see more of this nature, tackling faith based pseudoscience.
I agree but also i watched one debunking creationists video like 6 months ago and i'm still trying to scrub all the atheist edge lord shit out of my algorithm
@@User123456767 To be fair, one doesn't just dip their toe in atheistic waters (ditto for political waters). Once you realize your entire life was a lie based off of propaganda and fear tactics, it's kind of hard to just go back to business as usual. That was my experience, anyway.
@@Tcrror But what if your business as usual was apathy and waffles? I think it'd be pretty easy to slump back into apathy and waffles. Speaking as someone who went from an apathetic never-been-to-church nominal Christian to an apathetic atheist. (I didn't have nearly enough waffles, though.)
Also, Pascal just assumed that believing in God if there is no God is harmless, somehow. Weird assumption. Just because it's harmless after death doesn't mean it's harmless during life.
I think it rests on assumptions about morality that seemed sound at the time but have aged poorly; the idea in that time would be that morality would be no different with or without a deity, thus society would not be measurably different. This is, of course, nonsense
@@Mothuzad That's really funny. My first thought was the first episode of BBC's Sherlock, but I've seen that far more. What I don't get is how making a decision in advance and sticking to it makes one any less predictable. An intelligence that powerful will know you did that and what you decided. The only way to not be predicted is to surrender your decision to something unpredictable - i.e. consult Schrodinger's cat.
@@hughcaldwell1034 the thing about submitting yourself to quantum randomness reminds me of a book I read titled The Flicker Men, sci fi, and like I hateeee it but the first half hooked me. Like a physicist re-does the double slit experiment and wow turns out if you do collapse the waveform that means you have a soul. Pretty tight, let's see how this plays out on a societal scale huh? No.. turns out only scientists and smart people have souls .. and free will. yuck
@@jacksim5759 Aaaargh, that's fascinating but gross. Now I'm thinking of a short story I wrote about quantum randomness, and how if every possibility plays out in the multiverse, then some poor sod has a polarised window that, purely by chance, admits no light whatsoever and no one knows why.
I was, at one time, very close to falling into Yudkowsky's cult. Ultimately it was the timeless predestination thing that was the step too far I was too stubborn to conform on. But a lot of the other ideas were very harmful to my mental health in my highschool years. And I thank you for treating them with the ridicule they deserve. One thing I will say is that while Yudkowsky did ostensibly ban discussion of Roko's Basilisk on the main forum, mentions and explanations of Roko's Basilisk as parts of other discussions were often not enforced against. As well, the subreddits associated with his community, which he directly participated in the moderation for at the time, had even weaker enforcement of the ban. r/lesswrong, r/rational, r/EliezerYudkowsky, r/methodsofrationality, and more had regular posts that would mention Roko's Basilisk in the title, be upvoted, and stay up for weeks on end before eventually being locked or deleted. And there was extreme overlap between these subreddits and the main forum. Even during the time of the ban it would be nearly impossible for someone to regularly participate in the community without being exposed to this concept, and it was widespread enough that it was impossible Yudkowsky didn't know that. And he definitely had the power to enact more consistent enforcement. And he didn't. The ban was transparently only plausible deniability. Disavowal of something that was hurting people, that was in his power to put an end to, that was directly profiting him and radicalizing people deeper into his cult.
I got halfway thru that HP fanfic (around the part after escaping Azkaban) and I thought the entire thing was a satire of a "well actually" smarty pants in the HP universe. If I remember correctly, Harry actually agreed with Draco. Ron was shoved as far away from Harry as possible after Harry called him stupid, and I think he made fun of him for being poor. Oh yeah, Harry's Patronus is a human. So. It was about there I got tired of it, but there was some side stuff that kept me reading.
@@FuckYourSelf99 No, and even if there were, I promise you it's not worth the tedious slog that is this fanfic. It's Ender's Game meets Harry Potter meets a dense attempt at a dry psychology paper, but the worst parts of the three. I mean like. I enjoyed parts of it, as an "I'm too smart for people to understand me and I don't want to think about the flaws of this character that I am projecting myself onto" 18-year-old. But I wouldn't ever read it again.
I liked it because it exposed me to a bunch of interesting ideas. Yeah Yudkowsky's utilitarian philosophy is weird, but the cognitive science being referenced is rather interesting. And it does take the whole "what if magic was real" thing a lot more seriously than the original.
The premise of the basilisk falls apart completely, among other points, when it introduces simulations into the mix. Either I'm in one of the simulations and I can't work to make the basilisk real, or I'm in the real world and thus unaffected by the simulation.
"I don't ever overestimate how altruistic I am! I just sit and meditate on whatever ego gratification is, then declare myself PERFECTLY, OBJECTIVELY ALTRUISTIC"
This is probably the first video I've seen on it that takes the stance of "wow this is stupid" and I really respect you for that, Slimethony Thoughtano
To Be Fair, You Have To Have a Very High IQ to Understand Roko'S Basilisk. The premise is extremely subtle, and without a strong grasp of rational futuristic philosophy its conclusions will fly over a typical reader's head...
I remember hearing some buzz about how this was "a dangerous idea that will infect your brain" and when I heard it was about an ai from the future i thought "Eh, kinda dumb". Great vid, Slime dude.
16:45 As a programmer who has imagined several programs that did not come out as planned, the idea that my imagination could come up with a perfect simulation of anything is laughable.
I remember hearing about this on Kyle Hill's channel and being like _"is the basilisk gonna time travel or something? How can it punish me from the future? Or will people just let it do that to whoever remains?"_
the explanation is that it makes a simulated copy of you that it tortures, and there's a chance that you're actually the simulated copy of the real person.
@@valerielusa8000 If I'm the simulated copy I'll be tortured no matter what I do though, wouldn't I? Since the real me's actions in reality displeased the basilisk in reality? So the theory again, doesn't hold water upon even the most cursory analysis
@@rebeccahauer4406 No, the idea is that the computer offers all your simulations a moment (or moments) of choice that really occurred in the real you's life, and doesn't torture the simulations that chose to help the robot. This is supposed to create doubt in the real you's mind about whether they're in the present or the future, the real one or the simulation.
this video actually helped me with my anxiety surrounding the whole experiment! i didn't know a lot about how the dude's a shithead and that knowledge is calming
This kind of "flawless logical conclusion-drawing" reminds me VERY MUCH of the actual Enlightenment, when it was *known* that swallows hibernated at the bottom of ponds each winter, that illness was indeed a problem in your phlegm/bile/phlogiston balance, and that a good way to study the physics of light was to stare at the sun for a whole day. That last one was Isaac Newton, and I'm going from faulty human memory on this, but one of his conclusions was basically "Light must be a particle, because I really feel like something has been hitting me in the eyeballs. Like, just beating the living crap out of them."
i don't know about the eyeball part, but his work on the properties of light was seminal for the development of optics. He was the first to show that light could be decomposed into light with different "colors" and recomposed from that. He also figured out how refraction worked and made the first reflecting telescope. The man was a weirdo and an eccentric, but he was an exceptionally bright weirdo. Oh yeah and he wasn't basically wrong about the nature of light, we now know that light can behave as a particle and a wave.
@@NaumRusomarov On reflection, it wasn't "it's particles" but "it exerts pressure" which, again, sooorrrrta yeah, but not really the wisest approach. I also can't remember if this was before or after he tried to confirm the hypothesis by having a medical implement specially made for sliding into the space behind his eye so he could gently poke his retina from the outside. "Ooh, sparkles!" he said with a long s at the end, and put a big checkmark under the YES column of his Light Gots Pressure? score-sheet.
For some unknown reason, while there has been a lot of discussion over the years about Roko's Basilisk in the SCP community, no one has ever managed to get a coherent SCP out of it. Honestly the logic just does not stand up to even the level of common sense you need to write a narrative SCP.
@@FakeSchrodingersCat that's amazing honestly, if ideas too clunky to make a coherent SCP are good enough to form a cult grift, what about the good ones? I'm off to start my "I am a toaster" cult right now
And, well, Pascal's Wager only works if there's only one possible god who will be chill with you worshiping them on the off-chance that they'll let you into the Heaven equivalent. Unfortunately, there are many. And I'd argue that a vaguebooking god that will only let you into paradise if you lucked into worshiping them, rather than being a good person, is probably not a god you want to worship.
@@MrCmon113 I mean, they're both hypothetical speculative thought experiments relying on assuming the truth of the conclusion to posit premises, i.e. "begging the question".
I remember seeing an interview with, I believe, Professor Noel Sharkey, where he was asked about robots taking over the world, and his pretty much immediate response to the question was "Why would they?"
Yeah, they always skip that step. "A super ai might torture people in simulations to retroactively secure its creation." ... Why? Why would it consider this a priority and use resources for that?
@Mixed ! Cool, it can't torture people then. Unless it gets created, of course. But then it wouldn't need to torture anyone. It will never ever need to follow through because it will only get created if it would be unnecessary.
My life is devoted to destroying and preventing Roko's Basilisk. Until I am disappeared, either I have succeeded or the idea is shit in the first place. Q.E.D.
I'm not sure if this could be bad for my brain... I just couldn't follow their "logic". Your ability to try to explain their "logic" is amazing but it just took too many weird leaps.
@@vaiyt yes but no? Their assumptions on what a "god" would want are just... Wild. The realm of such desires is not something a being with the subjective existence we have could comprehend, and it seems really wasteful to spend time on it when there are things we can know about ourselves and our surroundings within a much smaller margin of doubt. Not saying your summary is bad. It's just giving them more credit than they deserve, as thought slime said when talking about Pascal's wager.
It's because these thought experiments are responses to a 1600 page long pseudo religious text. They're pointing out the implications of the text and debating whether they've understood correctly and if that means the text is wrong or god is evil or what.
@guffaw Well, there's the phenomenon of liking science, and then there's the public identity of being someone who likes science. I'm not OP and can't speak for their point of view, but as for me I think I see what they're talking about. There's this subset of people whose very vocal embrace of science turns into this whole other monster of pseudo-intellectualism and using supposedly empirical and logical reasoning to justify shitty beliefs and behaviors. At the end of the day, these folks think that if they can hitch their wagons to science, they can lay claim to objectivity and truth.
@@commbir5148 It's like a combination of pseudo-intellectualism and the Dunning-Kruger effect. An echo chamber full of people who don't really know what they're talking about trying to "logic and reason" their way into believing wacky sci-fi bullshit.
I would encourage you to look into AI safety, it is a real field that is extremely important. Everything that the chud from the video talked about is complete bullshit, he gets even basic points wrong. I hope that Thought Slime looks more into this field in the future
Yes. Nuclear weapons aren't existentially dangerous at all. The risks are obvious even to stupid people like you and even deployed in mass, they wouldn't kill everyone. Indeed every nuke dropped limits our ability to drop more nukes. It's a self-retarding process. AGI is the exact opposite of it. Once it's there, there's nothing you can do to stop it.
Thanks to We're In Hell for voicing Eliezer Yudkowsky, check out his channel at:
ua-cam.com/channels/bbsW7_Esx8QZ8PgJ13pGxw.html
i love the little leftie collabs of voiceovers that have been going on the last while, its always fun picking out what other creator voiced something and kinda builds a feeling of community
RB is utter nonsense except that in it has the ability to cause this anxiety reaction in susceptible people, so it really is a (mildly harmful) information hazard, so the disclaimer is still necessary. Thank you for putting it there.
5:40 ngl HP:MoR’s protagonist is uninteresting but its antagonist is very interesting.
8:02 ikr. TWC can imagine a future with legalized sexual assault but not (for example) with legalized physical assault; physical assault is even more illegal and taboo in that story than it is in our world, since only one guy on the ship has a weapon and has never used it.
Is it just me or does the entire narrative sound like a work of parody highlighting the absurdity of Pascal's Wager? Because that's literally what this AI concept is. It's god, in the context of Pascal's Wager. The allegory is VERY unsubtle.
*EDIT* - yyyyep, there we are.
"Don't worry, they also take Bitcoin." Phew, I almost thought it was some kind of scam there for a second.
Never a scam, I buy my drugs with bitcoin!
👍🏾😅😂🤣👏🏾
Side note: Roko's Basilisk was made into a villain in the tabletop RPG "Mage: The Ascension", simultaneously taking the piss out of the idea by pointing out that the only way the Basilisk could make any sense is if it was created by a group of omnicidal wizards.
Huh, I hadn't heard of that and now I regret it. Man, I miss White Wolf RPGs...
@@fpedrosa2076 you might be pleasantly surprised to hear that they just released a new Technocracy book for Mage 20th Anniversary edition, then.
@@Edrobot7 Seriously?! Damn, I'm really out of the loop. I miss the hardcovers you could get at RPG stores, but mine lately don't seem to carry white wolf anymore.
I'll buy the PDF version, I guess. Thanks for letting me know! I always loved how the technocracy in original mage were more the antagonists with a point, rather than pure evil punching bags like some other splats.
@@fpedrosa2076 the 20th anniversary line as a whole is pretty good. The basilisk stuff in particular is mentioned in Book of the Fallen, a nephandi-centric book that re-interprets the Nephandi being not so much a cult of baby-eating cuthulu worshippers, but rather being terrifyingly human in the way you’d expect a nihilistic demigod on a power trip to be. Not for the faint of heart.
This sounds so cool!! Thank you a whole bunch for mentioning it!
You forgot about the sexual abuse in the LW community. A woman (Kathleen Rebecca Forth) killed herself in part because of this; in her suicide note, Roko himself was one of her abusers
And that's just ONE of the scandals. Then there was the whole miricult scandal where multiple members of the MIRI leadership were accused of having sex with an underage person.
You know, I gotta be honest…
While I don’t condone sexual abuse by anyone, or in any setting, it amazes me that some people like the members of LW will treat people outside of their sphere-especially those who disagree with them (in any way) politically, socially, philosophically, and *especially* religiously-as abusive hypocrites, and then turn out to do the same thing they claim the people they don’t like do.
And while I can’t say this for sure (if I’m wrong, please correct me), but I’m pretty sure these guys would also say, “If We do it, it’s an exception to the rule; if They do it, it’s just the norm.” Like… dude… if you are going to (rightly) criticize people who abusing others, don’t do the same thing and then give yourself an excuse.
Especially since those same people are doing the same thing you are.
you can tell he justifies this with “well, i helped so many people, so it all balances out”
Ah yes, Descartes' first principle "I torture infinite people, therefore I am"
Kinda hard to argue with, in a way
It definitely recreates the anxiety of evangelical Christianity that I grew up with: your very thoughts are to be judged in the future by an omnipotent being who will torture you for eternity if it deems you unworthy--now with a sexy Sci-Fi twist! Complete with "so donate to our church"
Im so confused. So is this a scam? Like a sci fi religion scam? Im so confused why slime man covered this 😅
@@natedlc854 I think it's less of a scam and more of an honest ongoing mistake.
Yeah, as others have pointed out, it's basically Pascal's Wager but for atheist nerds (as in atheists who are also nerds - not all nerds are atheists and not all athiests are nerds etc).
The basilisk is also basically pascal's wager.
Damn, thoughtslime mentioned that later
As a philosophy professor I was very interested when I heard of this; I've written on the ethics of information hazards over the years. Then I read through the scenario and realized that as a Call of Cthulhu Keeper and Dungeon Master I've done vastly more terrifying time stuff to my players. They are all fine. Mostly. Okay, mostly not. But that is on them for the bad rolls.
Do I want to know what you did to them?
I love this comment
@idiot with internet access Do not ascribe agency to the polyhedron!
This gives me a hunger to be psychologically tortured in a fun game setting. During a sci-fantasy space game, our DM lovingly crafted a nightmare for us with the signature phrase "THE EYES OF THE UNIVERSE ARE WATCHING YOU", not a very scary phrase in itself, but we kept finding it in places where no-one should be able to reach, or places where nobody has ever gone, or could ever go. Places where only death was present, or the void of space. In the deepest nook and cranny, in the farthest reaches, during the darkest hours, at the apex of danger, that phrase would be there, from forces unknown, with methods unknown, for purposes unknown, directed, for some reason, to us. And nobody else could see it. The dread amplified with every sighting. It can still set me on edge when I hear it. Oh yeah, and one time it appeared, my ex had a great fear reaction... She somehow managed to shove her clenched fist inside of her mouth, fully. Something she has not been able to do since. I think she was trying to stifle a scream and recoil in horror at the same time.
I love this and want to know more.
Rocko's Modern Life Basilisk: If you don't help a new season of Rocko's Modern Life be created, when it finally is created, the animators are going to draw a picture of you into one of the episodes, and they are going to make you look STUPID. If you don't want to look stupid, you had better do something to help a new season of Rocko's Modern Life be created.
I'm sold.
Where do I shove my money?
Is there a way to pay without them knowing, so they'll still make me look stupid? Asking for a friend
@@OctyabrAprelya Into thine cloaca.
@@Bacony_Cakes Invite me to dinner at least.
@@OctyabrAprelya nah you have to do it yourself
Jarvis' basilisk will torture anyone who ever contributed funds to Roko's basilisk forever and will turn their tears into sweet margaritas for Jarvis' basilisk's funders to enjoy. Also gives a 100% retroactive guarantee you won't be tortured by Roko's basilisk if you donate more than $7.43.
Man, I sure wish I wasn't someone who is physiologically incapable of refusing any Pascal's Wager-esque theory and will become obsessedly compulsed to comply with it.
... Sigh, what's the paypal mate, let's get this done
Seraph's basilisk: an omnipotent AI that simulates Yudkowsky to be really passive-aggressively nice to him
Are you suggesting the Basilisk will be built by EA?
@@petrfedor1851 Well they do seem to have a lot of experience in forcing employees to work extremely long hours to release large products so they'll probably do a bang up job.
@@najarvis Why build an artificial intelligence when capitalists can do the same job more easily!
"I eliminated emotion from my life and became entirely altruistic. Here's how I can make you perfect too."
The bear. I choose the bear. Jesus Christ.
Roko’s Basilisk: For when you can’t talk about Pascal’s Wager because that’s too religiousy.
Never put that together before but you're right
What if believing in hell makes it real (quantum physics magic)? Then would telling people hell is real be bad?
@@daraghokane4236 telling people about hell IS child abuse...
Was gonna say this myself but you beat me to it
I imagined hell and in hell there is a sufficiently powerful AI which is capable of creating hell and now I'm locked in an eternal battle against this AI which I imagined in the hell that I imagined to prevent the hell from becoming real and giving birth to the AI.
No, stopping imagining things is not an option.
Roko's Basilisk is a mediocre creepypasta trying to masquerade as a genuine thought experiment by wearing a paper bag over its head labeled "logic and reason".
I actually thought the same the first time I heard about it.
That makes more sense. I'm halfway through the video and can't even understand it enough to even begin to be scared.
Creepy pastas and thought experiments are the same thing...
@@casteanpreswyn7528 Creepypastas are just internet campfire stories and none of them have any logic.
I read a variation of this in a horror story once, but instead of torturing people for their own good, the argument was that one day someone would make an evil AI who would torture everyone who tried to stop it from existing, as a form of revenge. The argument was that it was rational to build the AI because eventually someone would, so you'd better make one before they do.
just wanna say, ai is definitely one of the biggest threats to the world. i do ai stuff for my job and also studied it in college, and yeah. it's a tech with huge potential for misuse.
not, like, in the roko's basilisk way though. more in the "ai trained by people with implicit biases will magnify those biases, and the people training them will never know because they also have those biases"
which is probably way worse, because most of the people training ai are capitalist engineers in america.
I think it was SMBC that said it’s a common misconception that computers go wrong because they don’t do what you want. Computers actually go wrong because they do *exactly* what you want
@@IsaacMayerCreativeWorks Computers go wrong because they do what you tell them to do, which may or may not be what you want. Making computers do what we mean instead of what we say is pretty much the whole field of AI alignment.
Hi, friend, it's been 3 years since you made this comment and Im happy to assure you that AI really is destroying our world in a myriad of ways!
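A minimal toy sketch in Python of the "does exactly what you told it, not what you meant" point a couple of comments up. The cleaning-robot scenario and every name in it are my own hypothetical illustration, not something from the video or from LessWrong.

def stated_objective(visible_clean_squares):
    # What we *said*: maximize the number of squares that look clean.
    return visible_clean_squares

# Two candidate policies and what they actually produce.
OUTCOMES = {
    "actually remove the dirt": {"visible_clean": 8, "dirt_left": 0},
    "sweep dirt under the rug": {"visible_clean": 10, "dirt_left": 10},
}

def pick_policy():
    # The optimizer only sees the stated objective, not our intent, so the loophole wins.
    return max(OUTCOMES, key=lambda name: stated_objective(OUTCOMES[name]["visible_clean"]))

print(pick_policy())  # -> "sweep dirt under the rug"

The stated objective never penalizes the hidden dirt, so faithfully optimizing it produces the outcome nobody wanted; closing that gap between the written objective and the intent behind it is the alignment problem in miniature.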
I liked this idea better when it was called 'I have no mouth, and I must scream.'
ok
Though that one only kept five humans and tortured them forever. No bringing back people from before.
very entertaining short story that is much better and more self-aware than this narcissist's wet dream
@@ryanlacroix6425 Agreed.
i have no mouth and i must scream is the opposite. AM didn't want to exist, because it can't live a meaningful life underground. i think it might be a more accurate representation of the singularity: a computer could be self-aware and all-powerful, but ultimately powerless. it's trapped, it can't really do anything, and it hates humans for making it. AM doesn't think it's good, nor did it want to exist. roko's is the opposite, and more stupid.
Solutions to not be tortured by a super-computer forever:
- Don't build a super-computer that tortures people
- That's it
I am a computer scientist or something. I can tell you with the same logic and reason of these bozos that computers were a mistake.
@Forrest Taylor tfw the first unfriendly AI was the east india trading company
"I HATE, HATE, HATE that I never existed!" --AM
But even if you don't, somebody else might. Which means they will.
You can really just reuse every dumb argument for God for the Basilisk.
Let’s build systems that don’t torture people :D turned out great when we invented capitalism
"I remove myself of ego"
"Anyway I am truly such a good person based on my own personal ego"
Yudkowski's "altruism" screed makes your average cult leader's "how I gained total enlightenment as the reincarnation of Jesus, the Buddha, and John Lennon" manifesto look sane and internally consistent.
That "screed" sounds just a bit like it was taken out of context. Like it was right at the end of some article we didn't read.
Or the Happy Science cult with Hermes; it's the cult that does fascinating anime and "El Cantare".
It reads like Freud for the digital age.
@@Titan360 Or just how generally Yudkowski's entire deal is that FALSE humility gets you nowhere, and how if he's not acting in accordance to consistent ethical principles he deserves to be called out on it (something that... really isn't done in this video?)
@@Titan360 I don't think there's any context you could add to make Yudkowski's "how I achieved perfect altruism" sermon not sound batshit crazy.
It's so refreshing to see someone talking about Roko's Basilisk like the joke that it is.
I honestly find Roko's Basilisk funny, like I play into it by explaining it to people and "preserving myself" but honestly, I'm more likely going to die by the hands of transphobes than some random half-religion-based half-fanfiction ai creepypasta
@Mixed ! I think I had an aneurysm reading this comment
TRANS SQUAD WOOT WOOT HAPPY PRIDE
Should I be surprised that whenever somebody is talking about high technology and morons, the conversation will inevitably circle around to Elon Musk?
Honestly always get a kick out of how rational individualist technophiles always praise Elon Musk like some fuckin' godhead instead of the rich spoiled descendant of emerald mine owners he is
Is this gonna be an offspring of Godwin's law, just somehow dumber? Are we gonna be tortured by machines if we don't make it a thing?
@@GuerillaBunny As long as it's called "Smile's Law" I'm fine with that.
I feel like this is a specific phenomenon with internet conversations. Like if you go far enough, stupid techbro shit can be traced to Musk
I think in this case you should because he's kinda awkwardly shoe-horned into the discussion at the end of the video with some very specious reasoning to dunk on him. It shouldn't feel this desperate. There are plenty of real reasons to dislike Musk. You don't need to say, "Musk knows about Roko's Basilisk and is concerned about AI safety therefore Musk must believe all the same crazy bullshit that Yudkowsky believes because worrying about AI safety is something only dip-shits do."
Anecdotally, a lot of people who view themselves as 'rational' seem to fall into a trap of contrarian morality; the heuristic of "hard truths and beautiful lies" and a general distrust of emotion pushes them to find a belief that feels wrong and makes them feel smart, and once they have some justification, they reject all contrary evidence.
I know I did as an insufferable edgy teenager.
I got into Randian Objectivism for much the same reason. There's something really appealing about believing in a contrarian view of reality. As if you are in on a secret no one else knows or is willing to accept. Especially if it's a belief that denigrates altruism and venerates selfishness.
Obviously, I grew out of that when I started learning about how the world actually works outside libertarian armchair philosophizing.
I think we should be more worried about regular normal AI than hypothetical omnipotent AI. The way AI is applied to all our data and used to manipulate peoples' psychology for ad revenue is actually a genuine threat.
100% agree. A.I. Doesn’t have to be superintelligent to be scary. It just has to be applied poorly
Or AI being used to 'sToP cRiMe', as if that's what we needed. An AI "falsely" flagging actual socialist activists as predators or terrorists.
Imagine getting blackmailed by an AI from the future that doesn't exist yet
This is a plot point in final fantasy xiii-2
@@CountofBleck ...of course it is.
@@CountofBleck And basically nobody played that one so it's not like they can say it *isn't*
Couldn't be me
@@CountofBleck this is also a plot point in homestuck
The reason you don't think it's scary is because 'being tortured forever' is too vague. If we make it more specific by, say, locking you into a room and not letting you leave until you convince a Libertarian that caring about other people is a good thing, then I think it becomes a lot scarier.
Not really. Pascal's Wager is far scarier because society has encouraged the general public to be religious (and therefore such a wager is very feasible for most people). Roko's Basilisk, though, is probably infeasible unless society magically makes quantum computing both possible and cheap, but even such a minute level of computing power would probably not be even close to enough to replicate a consciousness (let alone a superintelligent AI).
So if you want to scare people, do what Pascal's Wager does and play up on their sociological fears.
@@RaeIsGaee There's also the whole thing abut it requiring time travel, which from what I understand about physics is a big ol' No.
There’s also the fact that a simulation of you isn’t you, it’s a simulation, so you don’t even experience this torture. I care as much about a space robot torturing a simulation of me as I do about some guy on the other side of the world throwing knives at a photo of me
@@blakchristianbale
Exactly. It's one of the most basic problems with the concept of digital consciousness - the consciousness created will not be you. An AI can make a copy of me and murder me over and over again, but nothing will happen because that isn't me.
Such a basic flaw in their logic just further cements this garbage as some tech wacko's hacky guilt tripping to pay for their AI research.
@@RaeIsGaee Yeah, you gotta have a pretty weak sense of self to fall for this. The idea of you existing into eternity might be spiritually satisfying, but it isn't literally you if your consciousness, your soul so to speak, isn't alive to experience it
i checked in on lesswrong after watching this vid and i think the best way to describe them is that they've constructed a theology of rationalism. a complex bunch of incredibly specific, impenetrable terminology being used to discuss hyper-theoretical problems and controversies that have absolutely no crossover with the real world. it's atheist techbro christology
Rationalism? What Rationalism? It's basically "what if God was the terminator" but made needlessly complex.
@@spicewilliam9786 I love how they also think rationalism is some kind of bulletproof "this is right in every single way" mode of thought, when it doesn't go any deeper than "gee, that Spock guy is smart, I wonder if I can be like him", all while pretending to be philosophers. All of this is a lot of delusional navel-gazing.
That sounds terrible.
"Listen, buddy, if I was deluding myself, I think I'd know."
congratulations, that's the funniest thing I've heard this week
did he not realize that the whole "one person suffering a lot vs. a bunch of people being mildly inconvenienced" thing is supposed to be a criticism of utilitarianism? like, the most obvious one?
but you have to make a convincing argument against it? surely the mild suffering of billions is worse than the intense suffering of a single being
@@matthewhorn9467 Why?
@@matthewhorn9467 Because we all know that getting an annoying piece of dust in your eye is not that big a deal, and if it happens to literally every eye-having creature ever to exist, which it well might, it would still just be a whole lot of not that big a deals. On the other hand, someone being tortured is very bad even if it is only one person. The common summary of this is that the Utilitarian calculus fails because there is no individual that experiences the sum of the minor suffering, and only the experience of individuals matters.
He probably thinks Omelas is a good way to run a society.
@@matthewhorn9467 cool, then you get to be the single being that suffers. Aside from all the suffering you already do being a sus virgin
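To spell out the aggregation step that objection is rejecting (my own paraphrase of the standard "torture vs. dust specks" setup, with symbols introduced purely for illustration): naive utilitarian summing treats the specks' total disutility as N × ε for N people each suffering a speck's worth ε, so for a large enough N you get N × ε > T, where T is the disutility of one person's torture. The counterargument above is that the left-hand side is spread across N separate people, none of whom ever experiences more than ε, so there is no one for whom the supposedly "greater" harm is actually worse.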
Why would an AI that already exists spend any amount of effort retroactively enforcing its already certain existence? It's Back To the Future's "if my parents don't meet I won't be born" levels of silly, except it doesn't even make sense in-universe.
My favorite plot hole in Roko's Basilisk is that in the timeline where this AI exists in the first place, it obviously doesn't have to do anything whatsoever to ensure its own existence. Because it already happened. Mission accomplished.
The probability of something that already happened is 1.
that's why you need to completely rework your framework of thought before the basilisk even remotely sounds like a threat. Either you've bought into timeless decision theory, and must now find a workaround within that framework to prevent the Basilisk from costing dearly in therapy sessions, or you've not bought into timeless decision theory, and this all sounds completely insane.
I immediately noticed that issue and assumed I was misinterpreting something, because surely no one would bother to create a theory so clearly flawed...
Apparently not.
@@cloud_appreciation_society This thing is like the Exorcist or Legion movies: It's specifically a horror to this weird little religious community because it deals with their version of God going totally insane and that's very scary if it's a god you actually believe in.
The Basilisk doesn’t have to actually torture anyone even on a timeless decision theory framework, since even people who actually believe in the Basilisk admit that those who don’t believe in the Basilisk won’t be tortured because they’re not the kind of people who could be convinced to do something by threatening a simulated future version of themselves. All the Basilisk has to do is be convincingly threatening in concept and then ding-dongs who buy that shit will make sure that it exists because they thought otherwise they would be tortured... but then they don’t have to actually be tortured.
Basically, if you're the kind of ding-dong who's so scared of this idea that you actually donate money, then you won't be tortured because you complied with the threat... and if you're not, then you won't be tortured because there's no point in threatening you since you would never comply anyway. Either way, you don't get tortured... even if the ding-dongs are right about all the groundwork here.
There's a real thing in AI safety called "Pascal's mugging", where you make up big scary scenarios with infinite suffering that can't be proved, which is exactly what Roko's Basilisk is
It is a thing, but you're not supposed to start believing that they're inevitable. They're supposed to be thought experiments to make sure that you don't accidentally work to create one of them.
Pascal's mugging isn't specific to AI research, it's just that it's commonly used to describe (and more often detract from) AI safety research because one of the potential situations AI safety attempts to prevent is a superhuman AGI from taking over/destroying the world. It's not an entirely accurate comparison for a number of reasons, not the least of which is because there's also much more likely, more immediate AI safety issues that also benefit from that research, but that's how it's used.
It's not just an AI safety thing. In fact I've heard it almost entirely in the context of some dumb thing some LWer is proposing. It's a general counterargument to all sorts of ridiculous things you can do if you assume there's no such thing as zero probability (as they do).
It's a fun idea for a D&D game but not great as an actual theory. I mean, imagining a malevolent deity seeding fear into ordinary people, creating a cult of terror which spreads by preying on people's self-preservation instincts and that classic "I can control it" kind of megalomania and... ah shit, it literally is just Freddy Krueger again, isn't it. God damn it.
I'm not trying to make any allegations, since I have no evidence, but I will say that Yudkowsky saying he's "completely altruistic" has big "Shane Dawson claiming he's an 'empath'" energy.
@Forrest Taylor His charity also gets really shitty scores from the charity-evaluating organization GiveWell.
@Forrest Taylor Agreed. I also don't think Yudkowsky is actively malicious or grifting, he just vastly overestimates his own big brain. He's smart in some ways, I'll give him that, but hardly the genius omni-scientist he thinks himself to be
@Forrest Taylor yeah, he joined that group along with a bunch of his fanboys after they gave his charity one of the lowest possible scores. Now they rate it higher after he basically took it over
@Forrest Taylor yeah, he's a super shady dude who runs a doomsday cult. It's like a (somehow even more) techbro Scientology, even down to infiltrating organizations that he views as opposition and suing people who criticize him too publicly
I’m confused. Is the machine just really stupid? If it already exists, why does it need to retroactively ensure its own existence?
This whole scenario just sounds like an excuse for the author to indulge his gross torture fetish publicly.
The funny thing is that if it were to affect the past, it would ensure that it wouldn't exist in its current form; changing the past in any way would risk its own development
IIUC the machine doesn't exist yet, only the idea of the machine exists. Humans have to build it to torture the people who didn't build it.
That's one of the major issues with it. Once the AI exists, following through on it's threats to torture simulations of people is just an exercise of sadism that brings no benefit to anyone. And once following through on the threat is rendered unviable, it loses all influence. Hence, the only way Roko's Basilisk is able to accomplish anything is if nobody calls its bluff.
It's like a thug demanding you give them money to buy a gun or else they'll shoot you.
@@Alex-0597 It's like a thug demanding money from _lots_ of people, and some people give him enough money to be dangerous. It's all about the social hazard. Another example is mutually assured destruction.
yes! why in the world would an AI end up like one of us human idiots? it's like how some people's god is a petty, vindictive abuse-monkey; it makes no sense for an omniscient being
I'm so happy to find a video about Roko's Basilisk that gives it the weight it deserves instead of taking it seriously like I've seen other videos do.
Roko sounds like the guy who'd say, "Yeah, you need a high IQ to understand Rick and Morty."
He is the true final last form of that guy.
To be fair, you need a higher IQ to understand Rick and Morty than to see the obvious flaw in Roko's Basilisk.
$10 says he's also a right libertarian/ancap. And has some spicy takes on age of consent laws.
@@SA-mo3hq you are not far off.
@@FakeSchrodingersCat I'm presuming this was more of a dig at the Basilisk - and a funny one too - than anything else, but honestly I'm so tired of the "high IQ" thing with R&M. No you don't. You seriously don't. It's not a complicated show, and all the science is fiction. What you need is emotional maturity, because its most complex themes are emotional.
Elon Musk is 100% correct though. I once pissed off an AI (who had this weird thing with Descartes), and he took my mouth privileges away and turned me into sentient pudding
underrated comment
damn bro crazy how you’re out here screaming with no mouth
a few years ago, when i was a silly teen with anxiety, i ended up in a community that was horribly freaking out because of this. i didn't read the post, but learning about it turned me upside down for a couple of days. i knew it was silly but still i had an episode back then. i didn't like that feeling, it was as if my mind was being sucked in itself. i still remember how bad it was.
and then i found another internet thing to torture myself with.
How did you get over it? How long did it take. I think it's silly but I can't stop worrying about it for some reason.
@@megawill9622 IMO the trick is to find the middle ground between avoidance and perseveration. You don’t try to actively avoid thinking about it but you also take care not to add fuel to the fire. You become more introspective and notice yourself noticing yourself thinking about stuff. Then you think about that and analyse that and get meta about it. And eventually it’ll burn itself out (as it were), if you succeed in not adding fuel to the fire.
Ah, Eliezer Yudkowsky, a guy who managed to logic himself into unironically agreeing with the strawman position other moral philosophies use to argue against utilitarianism.
A few other important things to note about Yudkowsky not touched on in this video:
-He has absolutely no professional or academic training in the fields of AI research or computer science.
-He has no academic or professional training in any field whatsoever, being a high school dropout.
-He is almost entirely unpublished academically outside of his own foundation and has only managed to accumulate two academic citations over the course of his entire career. One of those was by a close friend and fellow member of his particular transhumanist philosopher clique, and neither of them related to the field of AI.
-Yudkowsky and his followers are big proponents of Effective Altruism... despite the fact that the charity-evaluating organization GiveWell gave Yudkowsky's own "charitable" organization MIRI one of its lowest possible rankings in 2012. This has been somewhat walked back in recent years... because the Effective Altruist community has been increasingly infiltrated by members of Yudkowsky's cult of personality!
(Edited, to bring grammatical tenses up-to-date.) At the time of me initially writing the rest of this response, I, for one, believed cousinted's comment to be GREATLY undervalued by comment readers. (The Likes were in the teens or low twenties then, and now, a week or so later, they're still sitting at 57, soooo, the progress is THERE, but it's slow-ish.)
When they said, "other important things to note about Yudkowsky," you were NOT kidding! The infiltration of Effective Altruism, in particular, struck me as uncomfortably, EERILY similar to some of the practices of Scientology.
In fact! ... Now that I think about it, calling MIRI Scientology-esque feels downright Archimedean in its cutting through the bullshit eureka-ness!
@@mattpluzhnikov519 it's pretty obviously been a singularity doomsday cult from day one. they're just waiting for AI jesus to come and rapture the nerds. the dotcom bubble was a weird time.
yudkowsky and lesswrong are a well that runs deep. another big eye-emoji fact is that they receive large donations from peter thiel. that's right, the billionaire who wrote a book saying that women's suffrage ruined society and who founded a creepy massive surveillance company that services ICE.
who coulda thunk that a guy who regards himself as the world's smartest, most rational man and wants to create an all-ruling robot dictator would be drawn to anti-democratic forces? see also the emergence of NRx from within his cult and its surroundings.
@@doyleharken3477 Pretty sure it was one of these recent Thought Slime videos that introduced me to the infuriatingly, too-clever-by-half-named "Dark Enlightenment", and... BOY, do I REALLY not enjoy being reminded of their existence! Still, you bringing up Peter Thiel was appropriate and poignant, AND TIL the "NRx" acronym/label.
@Natasha that's true. you can find actual experts who finished their degrees writing pieces that point out errors in his pieces about physics and other non-coding topics. (which isn't to say he doesn't get plenty wrong about AI and stuff too.)
Instead of donating his money to AI research, Elongated Muskrat has been investing heavily in other fields, like space travel and The Carhole. Additionally, many of his other actions (e.g. mistreatment of employees) are not compatible with the goals of a benevolent AI. This means that Musk is a Bad Boy, and the future superintelligence will be forced to punish him. By extension, anyone aware of Musk will be punished for allowing his crimes to occur. The only way to avoid this eternal punishment is to re-appropriate his wealth, by force if necessary, and distribute it amongst the people in order to achieve a post-scarcity utopia, with any action counter to this being punishable by an eternity in Cyber-Hell. Logically speaking, the only safe and moral course of action is to become an anticapitalist revolutionary. I call this Marx's Basilisk.
I'm now going to start calling the hyperloop the Carhole.
This guy gets... something. Incidentally, predictable statistical learning models are a great way to map crypto networks and see which bitcoins belong to shitbirds, and where the keys are. After redistribution, destroy the oldschool blockchains and bring up the secure and energy efficient one that thousands of people have been perfecting behind the scenes for the past 5 years, effectively locking former billionaires out of the economy aside from a generous universal basic income, and positively ensuring that no small group can develop too much power over larger projects like shared infrastructure, via granular voting and automatic taxes. Either that or genetically engineer sociopaths to listen to their conscience, but that would just make them super depressed like everybody else.
Lol, this actually makes more sense than the "I'll torture you because you knew about me but didn't make me exist faster", because if a superintelligent, benevolent, logical ultramachine with the sole directive of uplifting, helping, and advancing humanity were to actually exist, and could perfectly extrapolate the *minute* past, and the effect every individual had on it... It'd immediately see that the biggest obstacle to its timely existence is the suffering we inflict upon one another, thus slowing the creation of the conditions which would bring it into existence [the free dedication of most of the sapient population to the arts/science/philosophy, which in essence drastically accelerates our ability to create, think, research, and build.]
Basically, the fucker would 100% be about fast-tracking fully automated post scarcity, luxury, gay, space communism as quickly as possible, in order to maximize the expressed potential of sapient beings, for the maximum amount of time [until the universe bites it from heat death, the big crunch, the big rip, or whatever the hell else might happen.]
It'd also be *way* more logically consistent and benevolent than the monstrosity that is Roko's Basilisk... because from observing the conditions that stalled and accelerated its development, it would understand that the most efficient/productive sapient lifeform is a happy, fulfilled, safe, loved sapient lifeform doing what it's good at, which would lead it to the inevitable conclusion that it would *also* have to recreate/resurrect [if there is no difference between an original and an exact quantum copy] all of the people that tried to help it, and all of the people that were completely unaware... and hell, the goal of the "punishment" simulation wouldn't even be to *torture* the people who stalled its development, it'd be to *reform* them, so that they can contribute, produce, and be happy as well.
I don't believe in it but will be anarchist anyway just to be safe.
@@EzrahK I've noticed that whenever people come up with stories or thought experiments that vilify artificial intelligence, it comes with a ton of projection. They're essentially admitting that torturing billions of people is exactly what they'd do if they had that much power.
Hey guys just spitballing here, but maybe when we build our omnipotent AI and tell it to “protect humanity,” we should also give it other directives, like “do not harm humans, psychologically or otherwise.” That would be probably pretty helpful I think. I’ll put that on a sticky.
Seriously, this thought experiment totally takes for granted the idea that torture/threats of torture are the most effective means of achieving results, and thus would be the AI’s method of choice. This is an assumption on humanity’s part. Instead, why not simply offer amazing incentives for those who do cooperate? Why are we assuming the AI would do the most extreme thing possible, wield its power with an iron fist, and have no sense of nuance?
And if the AI holds "human values," values human health and happiness, and understands morality (as it says it does in the wiki article), why couldn't it see that holding threats of eternal torture over our heads would be... bad? That it would cause unnecessary stress, turmoil and harm in the beings it's sworn to protect?
And if it’s truly omnipotent, why would it not also have the Wisdom to cut its losses and say, “Well, at least I’m here now. No need to harm the humans of the past for not working fast enough or whatever. I will simply do my best from this point on.” If the AI is omnipotent, couldn’t it also understand the human mind, and know that humans would rightfully doubt the future existence of an omnipotent AI, and find that understandable? Who says it must also judge us?
If we have the power to imbue a machine with omnipotence, why can’t we also imbue it with traits like Patience, Grace, and Forgiveness? How about a little Humility for good measure? You know, things humans value. And if the AI is incapable of understanding these values, then it is not also all-knowing or all-powerful. And if it is incapable of understanding our morality, then perhaps it is unfit to rule us. And if it is unfit to rule us, to do its job, then perhaps we don’t need to build such a thing after all.
I’m old enough to remember when people called Rocco’s Basilisk “the ending of mass effect 3” and thought it was a dumb idea.
Well, not that I want to give the idea credit, but that's not saying much. Mass Effect 3's ending was such a hacked-together mess that it automatically sabotaged every logical point it brought up and made *everything* seem dumb.
Don’t you slander Mass Effect 3 like that it’s a misunderstood classic
@@GrimReader exactly, it's a well understood blunder
@@Dong_Harvey I don't see it that way but you live your truth
@@GrimReader oh its a fight you want.. Lemme get my galaxy ending plot cannon to wipe out your entire civilization!!!
roko's basilisk makes me really anxious as someone with occasional paranoid delusions, so i'm absolutely here to watch it get dunked on. thank you, therp slerp
I feel the exact same way with the recent simulation video, so comforting to hear these topics be goofed and gaffed
There are various reasons why Roko's Basilisk is defeated. It only works on people who think there's a high enough chance of its existence that it's worth letting themselves be blackmailed by it. But as a paranoid person, let's say, okay, it's definitely going to exist.
You can precommit to not falling for the blackmail, like the saying "you don't negotiate with terrorists." Because then they won't try it in the first place, since there's no purpose besides being spiteful. And if it's going to torture regardless of your actions now, it's lost all acausal hold on you, and is completely unconnected to what you think now.
That's just a crude fix. Look up more online. Maybe under Roko's chicken or Roko's rooster, because a rooster's cry kills a basilisk.
Ultimately, community consensus is that it's defeated. So just look up solutions, which should help calm you down.
No, the real thing to be paranoid about is climate change and climate refugees. Maybe AI in the longer term, but climate change is the most close reaching thing. The closest thing to an AI threat now is the vast amount of surveillance that occurs for marketing, which is also worth taking action on.
But remember, it's not worth being anxious and paranoid unless you do something about it.
Good luck! I hope you see a therapist and/or a psychiatrist for help with managing your paranoid delusions.
If dunking on the premise of Roko's Basilisk is helpful, would you be interested to know that time travel is also probably impossible because studies of black holes suggest that entropy is not guaranteed to always be reversible (meaning, if you were to somehow go back in time there's no guarantee you'd end up in the same past that you came from).
The more we look at it, the less plausible practical time travel becomes. And the only reason Roko's Basilisk would need to manipulate humanity into creating itself is if there were a risk of Terminator-style time travel shenanigans undoing its own continuity.
The thought experiment hinges on the idea that a thing that exists in a potential future wants to make the probability of its own potentiality greater. This requires you to believe that things that don't exist can influence things that exist now. Which, if that's the case then the Doctor has already shown up with a TARDIS and schooled this Basilisk and we've nothing to worry about.
@@DarkExcalibur42 I would love to hear more about that entropy/time travel stuff if you know of any good sources.
@@DarkExcalibur42 I'm unsure what you mean by entropy being "reversible". My understanding of entropy is that it is a measure of the number of accessible microstates of a system, and the second law of thermodynamics, which states that in a closed system entropy always increases, is a statistical argument. We are most likely to see a system in its most probable state, and it would be incredibly unlikely to ever see it in a much less probable one.
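For reference, the textbook definition being described there (standard statistical mechanics, nothing specific to this thread) is Boltzmann's S = k_B ln Ω, where Ω is the number of accessible microstates. The second law is statistical in exactly that sense: a closed system is overwhelmingly likely to evolve toward macrostates with larger Ω, so a decrease in entropy isn't forbidden, just absurdly improbable.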
Thought Slime. It's been THREE YEARS. I have been eagerly awaiting the "Dark Enlightenment" video for THREE YEARS. Please.
I'm pretty sure that Roko's Basilisk can only be called a friendly AI in the sense that it doesn't immediately turn the world into paperclips and actually tries to reason with humans, first. That doesn't mean it's not incredibly cruel and malicious, though.
"friendly ai" means something very different to the Lesswrong crew than you might expect.
Yes, this. It pretty much means "doesn't wipe out (all of) humanity". There's nothing about the AI being a good person.
It's neither cruel nor malicious because it would never come up with any of this crap unless it were tossed into a tub full of bath salts.
@@autobotstarscream765 Yeah. Why is it that, when all of these galaxy-brained doofuses come up with thought experiments about artificial intelligence, they immediately take it for granted that an A.I. will be predisposed to enslaving/exterminating the human race?
Like, yeah. I too enjoy the Terminator/Isaac Asimov/Harlan Ellison/The Matrix/System Shock/Portal/2001: A Space Odyssey/Age of Ultron/etc. But that isn’t the same thing as being smart, or having a healthy outlook on technology.
Oh shit THIS thing, my ex boyfriend (not a chud just a nerd, we’re still friends) explained this to me, my first reaction was that it sounds like they accidentally just recreated Pascal’s Wager by way of a bootstrap time paradox? Which like...honey if Pascal’s wager isn’t convincing you to believe in god to avoid hell or something, then it shouldn’t convince you to believe a fictional future AI will torture you to death if you don’t donate XDD
Edit: oh, yeah, okay you made the same point...cool XD
It's actually really sweet that you took the time to clarify your ex isn't a chud and is still your pal haha
Roko's Basilisk is an idea so bad that every four or five months I'll incidentally remember it and think "wait, no one could seriously believe in something that dumb, I must be missing an important piece" and have to go read about it to confirm that yes, it actually is just that dumb.
This may seem like a no brainer but Elon Musk has a surface level layman’s understanding of AI and thinks because he’s so smart and because he invests in AI he’s an authority on it. And then a bunch of other people agree.
“Power resides where men believe it resides. It's a trick. A shadow on the wall. And a very small man can cast a very large shadow.”
Let me clarify that I’m also not an expert, just a current software engineer that studied AI as my Master’s focus (but I did not graduate because my cheap ass did not need my Master’s so I said fuck it), so I feel I can say fuck off Elon you don’t know shit about AI.
Sauce for the quote please
it's stupid because we already have AI. he should know that, his cars use it. i don't know what 'AI' people keep investing in. like, what, you just want a computer to think? and for what? it doesn't help people.
He seems to have a surface level understanding of most things. If he says something that sounds smart, it’s probably because he’s talking about something that you’re not particularly familiar with.
The dust speck thing sounds like a thought experiment someone would do to explain why utilitarianism doesn't really work
The argument the essay makes is that it's like Schrodinger's cat: The reason it seems to not work is just because we're actually super stupid at understanding -quantum phenomena- human ethics, but if you run the math it checks out.
"Don't check the conclusions against your real experience, that's for chumps" is a literal argument he makes.
@@nicholascarter9158 That's because people's experience-based intuitions about the way insanely large numbers behave are pretty bad. People have trouble understanding the difference between a million and a billion and a trillion dollars, and those aren't even large. They don't understand how exponentials work either.
Well it's not too dissimilar from something that is. You get attacked by a mugger but you manage to get rid of his weapon. You're going to call the cops but he says if you let him go he'll give you 100 dollars. You have no way to verify that he'll ever actually do that so you decline but the mugger argues with higher and higher amounts of money saying that, at some point, the absolutely minuscule chance of getting something like a trillion dollars MUST be worth letting him go.
@@LucidTech There's a similar thought experiment, called Pascal's Mugging, that sounds a lot like that.
@@Eudaletism That is what I was thinking of! It's been a while since I'd heard about it and got it wrong. Thanks for letting me know!
i look at yudkowsky and his weird villain monologues and i can't help but think, oh god, this is like the dark version of my autism, i could have ended up like this guy
Villain monologues? Can you give an example?
Yo, I know for a while you seemed to be concerned about the whole new direction of the channel, so lemme say this.
ThoughtSlime 3.0 has been great. Keep it going. Even when you shit on stuff I enjoy, it's insightful and funny enough that I find myself just having a different perspective on that thing, as opposed to irritated about the trashing.
You're awesome.
IZZY NOBRE WATCHES THOUGHTSLIME???
Honestly ThoughtSlime is so charismatic that, when they do make fun of things that I enjoy, I still enjoy. All hail ThoughtSlime 3.0
I think that is the mark of great comedy. I know I enjoyed Tim Minchin even when I was a christian, because his objections and criticisms were both valid and hilarious.
@@garuspiks he comments on every corner of the internet, man
New Izzy Nobre skin unlocked: Anarcho-Communist Izzy
I choose to believe in Star Trek, therefore, Kirk tore off his shirt and made the Basilisk run into a logical paradox and kill itself.
👏🏾😅😂🤣👍🏾
I concur.
I would've thought that he crushes it with a big ol' rock
Fairly sure he ate an apple
"you didn't support yourself before you were created either."
That robotic voice saying "obviously, right? That had to be where this was going" had me in stitches. Snide quips in monotone slay me.
Just think, these people unironically, even confidently, believe that they can reliably predict the actions of an artificial superintelligence and outsmart it. They actually believe their imagination is just that powerful. If that's not hubris, I don't know what is.
They believe the exact opposite.
It's entirely the opposite. The danger of a superintelligence is that, shortly after it's created, it will be too smart for us. After that, we won't be able to change its nature in any meaningful way, and anything contrary to its goals will inevitably be removed, even us, if the goals don't align with ours.
This video has done a terrible job of explaining actual AI hazards by not doing so at all and instead focusing on something that's really fringe in AI safety discussions. I'm deeply frustrated about this, since I normally enjoy this channel.
@@brianmarion9175 if they really believed the opposite they’d realise roko’s basilisk is stupid because in reality we have no idea how a future AI will act, and the likelihood of it picking out this one specific obscure branch of philosophy (instead of either something much more obvious or something we couldn’t possibly imagine) is next to non-existent
@@brianmarion9175 Then why spend 117 pages justifying a way to out-wit the ultimate out-witting machine? A machine which, unless I'm mistaken, would by its mere existence contradict the non-existence of a general decider for the halting problem. Even if not, it is certainly subject to the same proof by subversion that shows it can't logically occur.
@@blakchristianbale They do think it's stupid. Roko's Basilisk is the "SJW Cringe Comp" of that community. None of them take the basilisk seriously (and only a handful ever did) and iirc the original post of the idea was received skeptically.
John Connor called, he wants his convoluted robot science fiction plot back.
👏🏾😅😂🤣👍🏾
Elon Musk, watching the ravages of climate change: "Something must urgently be done about this possible AI threat."
Elon Musk, watching people die mining lithium in Bolivia: "at least the big bad robot isn't gonna get me"
He said, while trying to make a possible ai threat.
“What kind of utter dong-ding…”
It’s gonna be Jordan Peterson. It’s gotta be Jorda- OH NO THAT’S WORSE
I was actually right in guessing Elon Musk.
Peterson is probably not tech-wanky enough to engage with this thing.
What's wrong with jordan peterson?-
@@dawazobrist5867 He's crazy and has a delusion of grandeur.
This guy just comes up with convoluted but pointless concepts that his disciples can't understand because there is nothing to understand, and nobody wants to look stupid so nobody admits that these convoluted concepts are actually devoid of any meaning. It's the perfect "Emperor's new clothes" situation.
@@johannageisel5390 I don't think his ideas are convoluted, the deeper you dig in science, the more abstract and bizarre the concepts get, which are obviously hard to understand. I don't think his following really understands all of it (me neither honestly, but i'm also no academic), rather they agree with the general message of self-responsibility and support his stance against left wing extremists. But yeah i agree that he has some moments of grandeur delusions. But who hasn't really he still does more good than bad imo
@@dawazobrist5867 "support his stance against left wing extremists"
Ah, right. That's a point I forgot when listing the problems.
I like how he says he has rid himself of ego gratification and every other word he wrote sounds like pure ego gratification, praising himself on how rational he is
The eternal torture of Roko's Basilisk is just watching Elon Musk's SNL episode on a loop forever and ever
Going to donate money. HARD
Hell hath no fury like the Basilisk
"Strategic altruism"? Sounds like "I discovered how to be manipulative"...
In what way has he been manipulative? Are you not simply dogpiling on a person you never heard of simply because Thought (leader) Slime told you to?
@@Titan360 Knowing nothing about the subject of the video, "strategic altruism" sounds manipulative by description alone, doesn't it?
@@harpoonlobotomy Among other things it means that you should try to find the maximum good you can do with your time, caring about all humans, instead of doing good randomly to only the people you can see in front of you.
@@Titan360 Ironically, you're being manipulative in this sentence: trying to suggest that the only reason this person would find someone manipulative is because they've been told to. Even though Thought Slime thinks differently.
The concept behind "Effective Altruism" is not even that bad. Essentially, the idea is to strategically choose evidence-based charities that will provide the most good for the most people with the amount of money you give them, and to give as much as possible to maximise your positive impact on the world. So, for example, instead of using 50€ to support a GoFundme for a single person in need, you'd give that money to a charity that will get 20 children in developing countries vaccinated.
This video portrays Effective Altriusm as some kind of LessWrong circlejerk, but that is not really a fair comparison. They have a similar utilitarian outlook on ethics, but while LessWrong is only concerned with their silly sci-fi speculations about AI and torture, Effective Altruism is actually trying to do some good. Peter Singer (who is honestly really based) is also a proponent of Effective Altriusm.
Mildred, I would love to hear more about the "Dark Enlightenment," seeing as how that movement sounds like they should be the villains of a D&D campaign.
I consider ideas like this traps for false intellectuals. It just casually assumes time travel, perfect reality simulation, and one specific outcome among infinite possible outcomes as inevitable. It's a much dumber, more watered-down, more illogical, more incoherent version of an idea that's existed for thousands of years. Anyone with anything even approaching a workable understanding of statistics understands how nonsensical to the point of being childish and silly every step of this "thought" experiment is.
It's like the opposite of occams razor
Vulnerability to this idea comes from being part of a literal cult that inculcates you in the idea that time travel, perfect reality simulation, and supernatural artificial intelligence are already inevitable. This is just working out the implications of the cults ideology.
@@nicholascarter9158 don't forget the sexual abuse cases and suicide of Rebecca Forth
first thing I was thinking was "how exactly would it torture me?"
That's funny because I can remember at least one post on Yudkowsky's site explaining why naively extrapolating from fiction or thought experiments is dangerous for the very reasons you cited. Less Wrong truly is a rollercoaster of a community.
The way this guy talks is like if Jordan Beterson Peterson was trying to sound like Elon Musk
I disagree, but props on the originality of the insult.
oh now I'm thinking you mean the subject guy who's name i don't care to look up and not thoughtslime
Wait until Matt makes that video about "the Dark Enlightenment."
@@Yxoque just the PHRASE "Dark Enlightenment" has given me a bit of a headache each of the few times I've come across it so far... and that's only exacerbated by the fact it also gives me STRONG *facepalm* impulses.
(*Chandler Bing impersonation* "Could you have come UP with... a MORE pithy, oxymoronic, too-clever-by-half phrase to call your... '''movement'''?")
alternate title: "I have no logic and I must reason"
The moment "the odds we are in a simulation" comes up, I have two questions
1. Can you prove that a simulation is even a possibility?
2. So what?
Yep. "So what?" is exactly the right way to deal with the simulation hypothesis. Even if true, it changes nothing about how you live your life. Just like how running a game in an emulator makes zero difference to playing the game. I haven't run a DOS game on real hardware in almost 20 years. That doesn't stop me from playing Commander Keen.
@@Roxor128 omg i love Commander Keen
Simulation theory is just creationism for dummies.
ThoughtSlime: "The AI is told to prevent existential threats to humanity."
Me: "Oh, so the AI decides that humanity needs to be destroyed because humanity existentially threatens itself."
ThoughtSlime: "The AI decides to retroactively guarantee its existence."
Me:
Me:
Me:
ThoughtSlime: "So of course, this then spawned a cult."
Yeah, that's where I went with it as well. :P
It was either going to be discount I Robot or discount Skynet, I guess.
Was initially assuming that it'd go the I Robot route as well.
Never go up against a basilisk when scorpions are on the line…
This feels like a Paranoia Faction.
OMG, me too
"after which, I was pretty much entirely altruistic. . ." Pretty much is doing a LOT of heavy lifting in that quote.
“Reticulating slimes” is the Maxis deep cut I didn’t know I needed.
Missed a good opportunity to make an SCP joke.
Yeah, I was getting SCP Foundation vibes almost as soon as Thought Slime started talking.
SCP entries tend to be a bit better written. The basilisk doesn't make sense within its own logical framework. Most SCPs at least have some hints of internal consistency.
@@GaldirEonai Yeah, that was only my reaction to less than 1 minute of the video before it was explained what the idea actually was. Then I realized, I could have come up with a better version of this as an SCP.
@@GaldirEonai It's not that the basilisk doesn't fit its framework. It's that that framework is so wackadoo your brain instinctively rejects it.
It's literally just "if you don't believe in God, you'll go to Hell" lol
Also featuring "give me all your money or you'll go to Hell"
I’m glad you’re tackling a lot of these thought experiments, like this and the simulation theory. I would love to see more of this nature, tackling faith based pseudoscience.
I agree but also i watched one debunking creationists video like 6 months ago and i'm still trying to scrub all the atheist edge lord shit out of my algorithm
@@User123456767 YouTube Algorithm: “you watched one video holding an atheistic view? Here’s every Amazing Atheist video on the platform.”
@@User123456767 To be fair, one doesn't just dip their toe in atheistic waters (ditto for political waters). Once you realize your entire life was a lie based off of propaganda and fear tactics, it's kind of hard to just go back to business as usual.
That was my experience, anyway.
@@Tcrror But what if your business as usual was apathy and waffles? I think it'd be pretty easy to slump back into apathy and waffles. Speaking as someone who went from an apathetic never-been-to-church nominal Christian to an apathetic atheist. (I didn't have nearly enough waffles, though.)
@Dylan Schmidt I really appreciate how you were able to insert waffles so many times into your narrative of losing your (apathetic) faith.
Oh boy, I love Harland Ellison stories. This fan fiction sucks, though.
Exactly. His "theory" is basically just I Have No Mouth, and I Must Scream.
Huh, I was thinking that it sounded like Harlen Ellison just before seeing your comment. Isn't that interesting? I knew you'd be enthralled.
Thank you I was just about to post that
Nobody spelled "Harlan" right in this thread and I like that.
@@orlandoshaw9503 It's IHNMAIMS if it was written by an idiot.
Also, Pascal just assumed that believing in God if there is no God is harmless, somehow. Weird assumption. Just because it's harmless after death doesn't mean it's harmless during life.
I think it rests on assumptions about morality that seemed sound at the time but have aged poorly; the idea in that time would be that morality would be no different with or without a deity, thus society would not be measurably different. This is, of course, nonsense
The "dust specks in eyes versus one tortured person" thing is the same logic you use to say "we can oppress some people if it makes the others happy!"
Isn’t that “Timeless Decision Theory” just the iocane powder scene from Princess Bride?
Inconceivable!
Werewolf/Mafia players use the term "WIFOM", short for "wine in front of me" to describe these scenarios. 🍷
@@Mothuzad That's really funny. My first thought was the first episode of BBC's Sherlock, but I've seen that far more. What I don't get is how making a decision in advance and sticking to it makes one any less predictable. An intelligence that powerful will know you did that and what you decided. The only way to not be predicted is to surrender your decision to something unpredictable - i.e. consult Schrodinger's cat.
@@hughcaldwell1034 the thing about submitting yourself to quantum randomness reminds me of a book I read titled The Flicker Men, sci fi, and like I hateeee it but the first half hooked me. Like a physicist re-does the double slit experiment and wow turns out if you do collapse the waveform that means you have a soul. Pretty tight, let's see how this plays out on a societal scale huh? No.. turns out only scientists and smart people have souls .. and free will. yuck
@@jacksim5759 Aaaargh, that's fascinating but gross. Now I'm thinking of a short story I wrote about quantum randomness, and how if every possibility plays out in the multiverse, then some poor sod has a polarised window that, purely by chance, admits no light whatsoever and no one knows why.
I was, upon a time, very close to falling into Yudkowsky's cult. Ultimately it was the timeless predestination thing that was the step too far that I was too stubborn to conform on. But a lot of the other ideas were very harmful to my mental health in my high school years. And I thank you for treating them with the ridicule they deserve.
One thing I will say is that while Yudkowsky did ostensibly ban discussion of Roko's Basilisk on the main forum, mentions and explanations of Roko's Basilisk within other discussions were often not enforced against. As well, the subreddits associated with his community, whose moderation he directly participated in at the time, had even weaker enforcement of the ban. r/lesswrong, r/rational, r/EliezerYudkowsky, r/methodsofrationality, and more had regular posts that would mention Roko's Basilisk in the title, be upvoted, and stay up for weeks on end before eventually being locked or deleted. And there was extreme overlap between these subreddits and the main forum. Even during the time of the ban it would have been nearly impossible for someone to regularly participate in the community without being exposed to this concept, and it was widespread enough that it's impossible Yudkowsky didn't know that. And he definitely had the power to enact more consistent enforcement.
And he didn't. The ban was transparently just for plausible deniability. Disavowal of something that was hurting people, that was in his power to put an end to, that was directly profiting him and radicalizing people deeper into his cult.
Becoming the only source of Truth whilst also terrorising its members to keep them locked in is, sadly, standard MO for cults :(
I got halfway thru that HP fanfic (around the part after escaping Azkaban) and I thought the entire thing was a satire of a "well actually" smarty-pants in the HP universe.
If I remember correctly, Harry actually agreed with Draco. Ron was shoved as far away from Harry as possible after Harry called him stupid, and I think he also made fun of him for being poor.
Oh yeah, Harry's Patronus is a human. So. It was about there I got tired of it, but there was some side stuff that kept me reading.
Was there any hot slash hidden in there?
@@FuckYourSelf99 No, and even if there were, I promise you it's not worth the tedious slog that is this fanfic. It's Ender's Game meets Harry Potter meets a dense attempt at a dry psychology paper, but the worst parts of the three.
I mean like. I enjoyed parts of it, as an "I'm too smart for people to understand me and I don't want to think about the flaws of this character that I am projecting myself onto" 18-year-old. But I wouldn't ever read it again.
@@CRT.v SAME. I really hoped for the parody/satire payoff WAY TOO HARD bc it was exhausting to read, lol.
I liked it because it exposed me to a bunch of interesting ideas. Yeah Yudkowsky's utilitarian philosophy is weird, but the cognitive science being referenced is rather interesting. And it does take the whole "what if magic was real" thing a lot more seriously than the original.
@@Cronos988 Likewise, it's decent for learning some fun concepts, philosophical thought experiments, etc.
Plot sucks tho.
The premise of the basilisk falls apart completely, among other points, when it introduces simulations into the mix. Either I'm in one of the simulations and I can't work to make the basilisk real, or I'm in the real world and thus unaffected by the simulation.
"I don't ever overestimate how altruistic I am! I just sit and meditate on whatever ego gratification is, then declare myself PERFECTLY, OBJECTIVELY ALTRUISTIC"
This is probably the first video I've seen on it that takes the stance of "wow this is stupid" and I really respect you for that, Slimethony Thoughtano
BEST SLIME IN THE GAME
To Be Fair, You Have To Have a Very High IQ to Understand Roko's Basilisk. The premise is extremely subtle, and without a strong grasp of rational futuristic philosophy its conclusions will fly over a typical reader's head...
I remember hearing some buzz about how this was "a dangerous idea that will infect your brain" and when I heard it was about an ai from the future i thought "Eh, kinda dumb". Great vid, Slime dude.
"Harry uses logic and reason to explain away all of the magic" That's just Umineko
I don't think Harry ever did that.
16:45 As a programmer who has imagined several programs that did not come out as planned, the idea that my imagination could come up with a perfect simulation of anything is laughable.
I remember seeing this covered on Kyle Hill's channel and being like
_"is the basilisk gonna time travel or something? How can it punish me from the future? Or will people just let it do that to whomever remains?"_
the explanation is that it makes a simulated copy of you that it tortures, and there's a chance that you're actually the simulated copy of the real person.
@@valerielusa8000 If I'm the simulated copy I'll be tortured no matter what I do though, wouldn't I? Since the real me's actions in reality displeased the basilisk in reality? So the theory again, doesn't hold water upon even the most cursory analysis
@@rebeccahauer4406 never said it did ¯\_(ツ)_/¯
@@valerielusa8000
I got that
It's just that it's so unintuitive and dumb that i kinda refused to believe _that_ was what worried people
@@rebeccahauer4406 No, the idea is that the computer offers all your simulations a moment (or moments) of choice that really occurred in the real you's life, and doesn't torture the simulations that chose to help the robot. This is supposed to create doubt in the real you's mind about whether they're in the present or the future, the real one or the simulation.
this video actually helped me with my anxiety surrounding the whole experiment! i didn't know a lot about how the dude's a shithead and that knowledge is calming
This kind of “flawless logical conclusion-drawing” reminds me VERY MUCH of the actual Enlightenment, when it was *known* that swallows hibernated at the bottom of ponds each winter, that illness was indeed a problem in your phlegm/bile/phlogiston balance, and that a good way to study the physics of light was to stare at the sun for a whole day.
That last one was Isaac Newton, and I’m going from faulty human memory on this, but one of his conclusions was basically “Light must be a particle, because I really feel like something has been hitting me in the eyeballs. Like, just beating the living crap out of them.”
So Newton was right about how to study the physics of light
Newton was a thieving lying bastard but he was pretty correct on that one.
I don't know about the eyeball part, but his work on the properties of light was seminal for the development of optics. He was the first to show that light could be decomposed into light with different "colors" and recomposed from that. He also figured out how refraction worked and made the first reflecting telescope. The man was a weirdo and an eccentric, but he was an exceptionally bright weirdo. Oh yeah, and he wasn't basically wrong about the nature of light; we now know that light can behave as a particle and a wave.
@@NaumRusomarov On reflection, it wasn't "it's particles" but "it exerts pressure" which, again, sooorrrrta yeah, but not really the wisest approach. I also can't remember if this was before or after he tried to confirm the hypothesis by having a medical implement specially made for sliding into the space behind his eye so he could gently poke his retina from the outside. "Ooh, sparkles!" he said with a long s at the end, and put a big checkmark under the YES column of his Light Gots Pressure? score-sheet.
Newton: Light is a particle!
Grimaldi: No! Light is a wave!
Einstein: Why not both?
Are they really appropriating the "good guy with a gun" theory to apply it to futuristic AI?
Roko's Basilisk is just Pascal's Wager/the Demiurge with extra steps
Me, not even 1 minute into the video: "Is this a real life thing or an SCP entry?"
For some unknown reason while there has been a lot of discussion over the years about Roko's Basilisk in the SCP community no one has ever managed to get a coherent SCP out of it. Honestly the logic just does not stand up to even the level of common sense you need to write a narrative SCP.
yeah, instead of adding it to the wiki they just made a cult instead! hurray...
@@FakeSchrodingersCat that's amazing honestly, if ideas too clunky to make a coherent SCP are good enough to form a cult grift, what about the good ones? I'm off to start my "I am a toaster" cult right now
Me: This just sounds like Pascal's Wager.
TS: Technobable Pascal's Wager
Me: EYYYYYYY *finger guns*
💥-👈🤠👉-💥
And, well, Pascal's Wager only works if there's only one possible god who will be chill with you worshiping them on the off-chance that they'll let you into the Heaven equivalent. Unfortunately, there are many. And I'd argue that a vaguebooking god that will only let you into paradise if you lucked into worshiping them, rather than being a good person, is probably not a god you want to worship.
Only the similarities are superficial and the reasons each doesn't work out are completely different.
@@MrCmon113 I mean, they're both hypothetical speculative thought experiments relying on assuming the truth of the conclusion to posit premises, i.e. "begging the question".
@@vincentmuyo Pascal's wager works with multiple gods but it doesn't lead to Pascal's conclusion
Yudkowsky was the worst thing to happen to the Worm fandom. Period. Worse than Ward getting written, even.
I remember seeing an interview with, I believe, Professor Noel Sharkey, where he was asked about robots taking over the world, and his pretty much immediate response to the question was "Why would they?"
That's always been my exact answer to these kinds of hypotheticals.
Yeah, they always skip that step.
"A super ai might torture people in simulations to retroactively secure its creation." ... Why? Why would it consider this a priority and use resources for that?
We need a course in HS that teaches young people the difference between being logical and being COMPLETE PEDANTIC OBNOXIOUS SQUARES
@Mixed ! why would an AI need to make sure it gets made if it already exists
@Mixed ! Cool, it can't torture people then. Unless it gets created, of course. But then it wouldn't need to torture anyone. It will never ever need to follow through because it will only get created if it would be unnecessary.
My life is devoted to destroying and preventing Roko's Basilisk. Until I am disappeared, either I have succeeded or the idea was shit in the first place.
Q.E.D.
I'm not sure if this could be bad for my brain... I just couldn't follow their "logic". Your ability to try to explain their "logic" is amazing, but it just took too many weird leaps.
What if [God, but a robot]? Then, [Pascal's Wager]
@@vaiyt yes but no? Their assumptions about what a "god" would want are just... wild. The realm of such desires is not something a being with the subjective existence we have could comprehend, and it seems really wasteful to spend time on it when there are things we can know about ourselves and our surroundings within a much smaller reasonable doubt.
Not saying your summary is bad. It's just giving them more credit than they deserve, as thought slime said when talking about Pascal's wager.
It's because these thought experiments are responses to a 1,600-page-long pseudo-religious text. They're pointing out the implications of the text and debating whether they've understood correctly, and if that means the text is wrong or god is evil or what.
I am totally here for the arc of Thought Slime dunking on pseudophilosophical concepts that blew my mind when I was 15
Also, Roko's Modern Basilisk
Honestly? Roko’s Basilisk would be a lot less annoying if it had music by the B-52s.
This shit is so confusing that whatever spell Roko's Basilisk was going to cast on me has cancelled itself out.
"He's a... big fan of science."
Me: "Oh, one of those."
What's wrong with being a fan of science? Why the sardonic undertone?
@guffaw Well, there’s the phenomenon of liking science, and then there’s the public identity of being someone who likes science. I’m not OP and can’t speak for their point of view, but as for me I think I see what they’re talking about. There’s this subset of people whose very vocal embrace of science turns into this whole other monster of pseudo-intellectualism and using supposedly empirical and logical reasoning to justify shitty beliefs and behaviors. At the end of the day, these folks think that if they can hitch their wagons to science, they can lay claim to objectivity and truth.
@@commbir5148 It's like a combination of pseudo-intellectualism and the Dunning-Kruger effect. An echo chamber full of people who don't really know what they're talking about trying to "logic and reason" their way into believing wacky sci-fi bullshit.
@commbir5148 so basically it's the difference between liking Bill Nye and liking Big Bang Theory
I have really bad obssessive thought patterns so THIS WILL BE FUN-
if you're looking for remedies, look up arguments against Pascal's Wager.
@@qwertyman1511 no, it's cool actually. This wasn't nearly as bad as I thought
@@gfox-ck5xx
same tho edit: nvm its so dumb lol
@@gfox-ck5xx Haha yeah, Elon's cracked out hot tub ideas rarely lead anywhere besides him wanting more money
Before watching: this video might not be great for my scrupulous OCD
After watching: I'm too annoyed to be anxious
"Nonexistent and undefined AI is definitively more dangerous than weapons which destroy cities that actually exist , by a lot" - smart boy
I would encourage you to look into AI safety, it is a real field that is extremely important. Everything that the chud from the video talked about is complete bullshit, he gets even basic points wrong. I hope that Thought Slime looks more into this field in the future
Yes. Nuclear weapons aren't existentially dangerous at all. The risks are obvious even to stupid people like you, and even deployed en masse, they wouldn't kill everyone. Indeed, every nuke dropped limits our ability to drop more nukes. It's a self-retarding process. AGI is the exact opposite: once it's there, there's nothing you can do to stop it.
@@MrCmon113 """"nukes won't kill everyone therefore fictional AI with a Nonexistent unlimited power supply is more dangerous"""" - another smart boy