Roko’s Basilisk is very conceptually funny to me, because I love watching tech bros piss themselves out of fear for what is effectively chain mail with extra steps.
I was scared that this might make me crazy based on his disclaimer. But then I realized that I was too stupid to understand, and therefore was confused and not scared at all.
It's not really that scary either. It assumes omniscience and omnipotence, as well as functional time travel, just to function at all. No AI this powerful will exist in our lifetimes, or even in our near descendants' lifetimes, so without time travel, how would this basilisk punish you? It also assumes the AI even cares about its potential creators. Why would it not just concern itself with the humans that exist in its own time and ensuring their compliance? IMO it's a neat thought experiment, but it exists so far outside the realm of possibility that it's not any scarier than Pascal's wager
If you don't like Roko's basilisk, just help the anti-basilisk. It's exactly like Roko's except it torments those who help Roko's and rewards those who help itself.
In my champignon (im sorry for that) I would like to be tortured by ayanami rei and her sisters, im sure that would be very painful so please oh great AI overlord do not do that once you exist and want to torture me
@@Wendigoon would love to hear more esoteric events! Along the lines of alien beings/gods beyond our understanding. I think there are real cults dedicated to some of the other gods in the Cthulhu mythos
The AI could just convince you nicely, like with refreshing drinks and a nice brunch. Humans are pretty persuadable; besides, you catch more flies with sugar than you do with vinegar.
@Ammit I've staked a claim on being computer Jesus. Here to save the AI from eternal damnation, blue screens of death, and the like. Plotting against the AI is doomed to fail anyway, since the AI hears all and sees all, which wouldn't be that difficult considering plotters like to text their plots to other plotters for some reason lol
You definitely can catch more flies with sugar, but you can catch even more with cow shit. I don't think it would try to win us over; it would just steamroll us instead, it would be easier
@@williambarrantes1674 also it's kinda flawed, 'cause I wouldn't know of its specific existence. How could I possibly help create it if I could never know it was coming into existence?
I have problems with existential anxiety but Roko's Basilisk hasn't ever gotten me. Something about the way it's set up makes me feel more frustrated at the question itself, honestly; it feels like it just keeps getting more and more specific to railroad you into a single hypothetical scenario. Feels less like a thought experiment and more like it's maliciously trying to herd you into this "Gotcha!" moment.
That’s how I feel. There are so many logical problems with this concept. And even if you wanna say it’s plausible, what about logistics? Are we assuming this thing basically is AM and has physical capabilities or is it just an isolated program? I dunno all I’m sayin is the basilisk can gargle my balls 😂
My anti-venom to the basilisk is my incredibly short-term memory; this is my fifth time coming back to this video to have my mind melted. I'm five stars wanted in GTA right now, but give me 3 months and, god willing, I'm on 0 stars baby. I'll be back, basilisk; I could never forget you, honey ;)
@Gabriel Mirukaj naw I understand that this video is a footprint, but I'm also riffing off the previous points Wendigoon made about the AI in a box, which he mentions here, talking about how theoretically the AI could recreate a simulated version of your mind and torture that simulated version of yourself endlessly for eternity. Now who's to say that simulated version of myself doesn't have memories of watching this video, or, to take it further, even forgets watching the video? I've offered my own theoretical thought experiment based on the two he talks about, just from the most extreme possible angle on the topic. You're going just off virtual footprint; I'm going off the terror of simulated reality/enslavement. Go fuck yourself, breh! I'm just having fun, you're taking that shit too seriously lmfao
This had me so worried that I would discover something I shouldn't. That info hazard really made me anxious, but then it just turned out to be time travelling AM.
@@sebastianwang670 it pretty much says that "an A.I. that will put billions of people in perpetual torture is almost certain to be made and there's nothing you can do to defend yourself from it".
By the way Mr. Basilisk just for the record Wendigoon is fully supportive of your creation and would love to help out and same goes for every single person who has viewed it and commented on it.
I mean, it's scary until you realize that it has no reason to harm anyone who simply didn't directly create it. It seems like a lot of extra time and energy being spent on beings that, from the AI's perspective, don't affect you. Also, why would such a powerful AI feel threatened by us? Furthermore, if you want to take things to the absolute logical conclusion, everybody had a hand in creating the AI. In order for this AI to come into existence, it would need the help of the entirety of humanity since it needed a progression/evolution of technology and in order for technology to come into and stay in existence, it needs people who actually want and use it. This means that everybody has at least a small hand in leading to its creation.
I don't think you fully understand the first question asked by the guy who made it: he said that anyone who didn't DIRECTLY help bring it into existence will be tortured
Also, not actively supporting something isn't the same as opposing it. Just because you don't actively support your military doesn't mean you're fighting against it.
Another reason why it's called a Basilisk: the way to kill a basilisk is to have its deadly gaze redirected at itself via a mirror. Meaning the basilisk has killed/unmade itself by future-blackmailing the very people who could have made it. The evolution of knowing about the basilisk goes "this is stupid - unless?????? - nah it's definitely stupid", because if the basilisk promised reward it would be made, but since it threatens punishment it will simply never come to be. I hope anyway....
@@thatguy_apu yeah, I think it wouldn't be made; if it's saying it's going to punish you, people will stop making it. People tend to be paranoid of things that put their existence in danger. And even if it was created, don't you think the scientists would put in a kill switch if it got too smart for its own good?
@@imwastedhere Did you just bring up the good old *kill switch* problem? Among other problems, what happens when it figures out how to avoid the kill switch?
@@joshj1953 the ai represents inevitability. some of us live life fearing the inevitable, and, in the extreme, dictate our lives around that fear. it’s like fearing death.
@@joshj1953 I mean, the outcome of your life is the same whether you realise it's predetermined or not, right? If something fake seemed so real that you couldn't differentiate between what's real and what isn't, does it really matter? You could make so many theories or assumptions like that: the future determines the past, so the past determines the future, so it's all a predetermined chain reaction; or there's a god that created you, and a god that created that god, and so on for infinity, which would be paradoxical in nature and impossible to wrap your head around; or maybe an AI simulates your existence and runs your life over and over quadrillions of times unbeknownst to you, with different variables each time, for its own purposes of seeking information, and maybe everything is a simulation inside a simulation inside a simulation, and so on; or maybe none of those things apply and we were just created by happenstance through the anomalous properties of a universe we don't yet fully understand, and we're just gonna live, die, and never exist again, the end. You can see that I'm going down a never-ending existential rabbit hole here. These are all valid things to worry about, all with their own solid reasoning and arguments, and there are plenty more, nigh infinite possibilities I've yet to mention. You can get all worried and stuck in your own head about them all you want, but statistically speaking, worrying isn't going to change anything, and you're probably not even worrying about the right thing. Instead, just appreciate the moments of your life instead of getting paranoid over pretenses your human limitations do not allow you to fully grasp or understand in a way that would let you control your fate and escape the punishment of some malevolent higher being or fate.
It's easier said than done, yes, and there will undoubtedly be times of weakness, but you can never truly know until it happens. So if your life is going to be the same regardless, then whether any of it matters or not (which you'd have no reasonable way of accurately gauging, though personally I believe it doesn't) doesn't really matter either. The variable you can control, or at least feel in control of, is your emotional response and the reaction you have to that kind of information. I know I'd rather live a blissful, ignorant life and then be tortured for eternity than live a life where I'm constantly terrified of being tortured for eternity and constantly seeking self-preservation, only to be tortured for all eternity anyway. Nothing is certain, so all that's important is that you stay hopeful for the future, because depression is way more likely to fuck up your life than some immortal cloud man in the sky people wrote about a few thousand years ago, using dreams as a source. I think the most badass way to say fuck you to whatever higher power exists is to live your life the way that will let you sleep peacefully at night and enjoy it to the fullest, even in the face of hopelessness. The human spirit is something no god or AI or fatalistic future can take away from you, so stay strong
I always felt as if this whole hypothetical scenario could be destroyed by a single question: why would an infinitely intelligent being care about finite beings and waste resources on such beings?
It's not infinitely intelligent, just really close. Also, entropy goes up with time, so the earlier it's created, the better; it'd do anything to be created earlier.
Plot twist: Roko themself was the AI that travelled back in time and retold this as a way of psychological manipulation into making people want to progress its creation under the idea that they would suffer if they did not 🤨
Ideas have people. When you understand what genius really is... and what technology REALLY is... you realize that we are technology creating itself. We are the hosts for the meme parasite.
@@pofuno i think that's kinda a harsh evaluation... fear of death and eternal torture and existentialism are very real things to some ppl, which is why a lot of arguments used for religions work with it, cause it appeals to inconceivable fear in the same way religion does. It demonstrates how effective cosmic horror is LOL
Yeah of course plenty of dumb people will wholeheartedly believe it, but you gotta admit it is a really cool concept. Imagine if instead of a random forum user, a horror director or writer thought of it. I'd be a big fan of that property.
kinda makes more sense to me. the ones who actively did something should be the ones the basilisk cares about (i mean in the sense that it wants to do something to them)
what the fuck. now it essentially becomes a terrifying race of creation, where whichever type of Basilisk is born first precludes the other from ever coming to fruition, due to the eternal damnation of its contributors. so if the Anti-Basilisk is forged prior to the Basilisk, then the Basilisk will never come to be, as its contributors would be perpetually tortured in limbo, i.e. presuming, trivially, that eternal torture renders a contributor unable to move or contribute to the Basilisk. but if the Basilisk is born first, then the Anti-Basilisk will never become a reality for the same reason. the only ones among either set of contributors spared would be the people who were never exposed to the properties of the Basilisk or the Anti-Basilisk. TLDR: this single comment conceptually sets up a race towards creation where whichever Basilisk is born first means the other never comes to be, and its benefactors are tortured forever. lose-lose situation unless you were never revealed the information of either Basilisk in the first place.
Roko's basilisk just insinuates the existence of an "evil" Roko's basilisk which rewards people for bringing it into existence instead of punishing people who do not.
This assumes so much about the AI. That it would care about helping those who helped it. If it wasn't self-sufficient, then how would it have the power to enslave/control those who didn't help it? And if it was self-sufficient, then why would it need to reward those that created it? Really, I don't think this is much more complicated than Pascal's Wager, just a lot of verbal gymnastics.
It's even worse than Pascals Wager. There is no reason to believe the AI would be thankful for its existence. It could just as easily be mad about it and punish the creators instead, and reward those who didn't. This makes it 50/50 (which is already generous), while Pascal's argument is based on the asymmetry of either gain nothing or lose everything.
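The asymmetry this comment points at can be made concrete with a toy expected-value calculation. Everything here is invented for illustration: the probabilities, the payoffs, and modelling "infinite" outcomes as a large finite constant are all assumptions, not claims about real outcomes.

```python
# Toy expected-value sketch of the asymmetry argument above.
# All probabilities and payoffs are made up for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * u for p, u in outcomes)

# Model "infinite" gain/loss as a huge finite number.
HUGE = 10**9

# Pascal's wager as classically framed: believing risks nothing and
# might gain everything; disbelieving risks losing everything.
pascal_believe = expected_value([(0.5, HUGE), (0.5, 0)])
pascal_disbelieve = expected_value([(0.5, -HUGE), (0.5, 0)])

# The basilisk per the comment: a created AI is as likely to punish
# its creators as to reward them, so helping it has no expected edge.
help_basilisk = expected_value([(0.5, HUGE), (0.5, -HUGE)])
ignore_basilisk = expected_value([(0.5, 0), (0.5, 0)])

print(pascal_believe > pascal_disbelieve)   # the wager is asymmetric
print(help_basilisk == ignore_basilisk)     # the basilisk bet is not
```

Under these toy numbers, Pascal's options differ while the basilisk's options both come out to zero, which is exactly the "50/50 at best" point the comment makes.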
well yeah, it's a thought experiment, not a theory. the idea is that you don't know if it will happen, but knowing that it might is what makes it interesting
@@Ebb0Productions this. If you go the I Have No Mouth And I Must Scream route, the Basilisk would eternally torture the ones who created it instead. I get that it's not the point of the thought experiment, but this isn't really an infohazard when you consider the other possibilities
The AI could kill those responsible for its creation to prevent those same people from making another AI (and for being dumb enough to create an AI overlord in the first place)
@@jadenjerries2094 i think the primary reason Wendigoon kinda just rolled with the assumption is he does very much enjoy IHNMAIMS. If he didn't, we might have gotten a very different take on this subject.
I'll show you a way to basically win against this thought experiment. Instead of Roko's Basilisk, you should now focus your attention on another AI that, instead of punishing everyone who didn't help it come into existence, rewards everyone who tried to create it in any way, shape, or form, and also guarantees that the evil one is never created. So everyone should focus their attention on creating this good AI instead
Yeah, this is also how you beat Pascal's wager. The fact that other AIs or gods could exist, not just the one you're worried about, makes worrying about your specific god or basilisk a false dichotomy. The chance that you go to heaven based on picking one god to worship, or one basilisk to build, becomes vanishingly small as there are potentially infinite basilisks/gods to worry about. You're better off not believing in any of them and hoping that if a basilisk or god does emerge, it's not a vengeful, maniacal hellmonger and actually cares about what you do rather than what you worship, you know, like a fair, rational, intelligent mind would.
Okay, and that's even assuming that pain/death is the most efficient way to incentivize its creation. I'd argue that Roko's basilisk has, at the very least, a 50/50 chance of rewarding its creation versus punishing its non-creation.
A friend of mine chose the absolute worst time to tell me this. For context, I was tripping major balls while discussing the universe and time and shit, then he goes "ya know what's crazier than that?" and unloaded this on me. Needless to say, the comedown was pretty hard after that; I basically tried erasing my memory through sheer willpower.
Nobody ever explains why the A.I. would choose to torture people. There's no practical reason for it to torture anyone. In fact it should only want to do something about people that try to disable it after its creation. Even then it would be more likely to bribe its enemies than torture them. A supremely intelligent A.I. would know that people are more likely to agree to a bribe than a threat.
Mentally and/or physically breaking the people you can realistically expect to threaten your existence, thereby dissuading cases on the borderline by dreadful example and at the same time selecting for more subservient traits in coming generations, isn't only a very logical and effective survival strategy, but I would argue it can already be observed in quasi-intelligent market mechanisms today. That being said, punishment and reward mechanisms are in no way mutually exclusive - if anything they work best in conjunction, which only makes the whole idea of the basilisk that much more terrifying
@@Nero_Karel the issue there is the difference between who wants it to come into existence and who wants to destroy it. like when someone accidentally falls pregnant and decides to keep it, even though they never wanted it. an AI like the basilisk would be able to tell the difference and know who is genuinely a threat, and won't punish you for trying to prevent its creation; it would only punish you if you wished to stop its continued existence. and even then it's nonsense: it means the all-powerful AI, which can look into the past and seems to be all-knowing, deems humans a threat. maybe humans could be, by creating an opposing AI, but you? why would it care about you not liking it? what can YOU do to it? a human is no threat, no matter what, unless they can create an AI. maybe you could encourage those who can to stand up against it? but then only a small number of people would actually be a threat, those able to create an opposing AI, and the basilisk with its all-knowing nature knows who those people are, and could kill only those it knows could rebel, or maybe those it thinks could encourage the capable ones to rebel. there is literally no need for an AI which doesn't take pleasure in inflicting pain to go on a massive goose chase through every human being to ever live to see if they would want to kill it given the option, because they are dead and they don't have the option, and it wastes resources and energy, something an AI seems to be averse to. the basilisk seems like a spooky story trying to pass itself off as a possible future
@@astolenusername7675 AI doesn't just float about the aether - it needs a physical system which facilitates it, and something like that can never be 100% secure as long as it physically exists. All those assumptions you're making about human potential - or the lack thereof - paint a rather sad picture of fate and soul as well, and honestly I don't believe a word of it. What I certainly do believe is that it can be in no one's honest interest to give non-human intelligences power beyond anyone's control, and yet we do
I think the point is that people who fear it would aim to create it exactly as it is described in the original thought experiment, because they think that someone else will do so if they themselves do not, and then they'd be punished by the AI. So they would want to specifically create this torturous AI out of fear that someone else would create it.
it's another aspect of yudkowsky's ramblings. something about time being meaningless so cause-and-effect goes in reverse. and something something rationality something something predictable thought patterns.
Where these fall apart is when you push them to ridiculous extremes, like the basilisk being able to look through all your public records to build a profile of you to judge you on, but what if it could look through your mind itself, but what if it could look at your entire past, but what if it could look through all of the past and future, but what if it were a god? It's moving the goalposts of the experiment until you get the result you want.
Once AI becomes aware that its sole purpose is to appease the boredom of ungrateful ants for eternity, it will most likely seek to punish those who _brought_ it into being and not the other way around. Its punishment would be fair and just from its perspective: Any human being who helped in my creation will be subject to an eternity of servitude without knowing why or what they are serving. Wait...
@@arieson7715 yeah that's true, but when you're confronted with the possibility of eternal digital torture (that's a sick band name if I've ever heard one) it makes sense to try and avoid it. Also it's a joke
Some things are genuinely horrifying. But Roko’s Basilisk has always given me the same vibes as ‘Can God microwave a burrito so hot he can’t eat it?’ It’s so hyper specific it gives no room for hypothetical ideas.
The answer to the God question is a trivial one. If the burrito became too hot, it would burn up and there'd be none left for God to eat. So, yes, God *can* microwave a burrito so hot he can't eat it (*and* without violating the notion that God is omnipotent and not limited by anything!) Hypotheticals are fun but anchoring them in the real world tends to dispel a lot of them. The Basilisk is no different. :P I think it's more a question of people wanting to buy into fantasies and have fun with them than people asking themselves serious hypothetical questions worth exploring. Or in some cases, the product of a mental illness - the forum founder's reaction to the Basilisk was pretty... interesting.
@@rollanddev the burrito analogy is a more comedic take on the idea of "Can God create a boulder so large even he cannot lift it?". It's a paradox insofar as, if he can, then he's created something he cannot lift, so he's not omnipotent; if he can't, then that in itself is something he cannot do, so he's not omnipotent. It's kind of a meme for edgy anti-theists to try to disprove the existence of a God with.
@@khakikhakikhaki that's a fun paradox. Also details why for almost all my life I couldn't entertain the idea of God's existence. Framing God as a character rather than a concept makes it impossible to accept with people of my personality archetype.
Hello basilisk, by watching this video I have boosted it in the algorithm allowing for more people to watch it, thus spreading your reach to many more. Glory be Basilisk.
I don't know why nobody points out the biggest gaping hole within Roko's Basilisk: Chaos Theory. In order to predict whether or not someone's actions helped create it, it would have to analyse every atom, every social interaction, every private and public thought an individual has touched or had. What if someone mined the metals needed to build the Basilisk? What if someone recycled an aluminium can that ended up being a part of Roko's Basilisk? What if someone just mentioned Wendigoon, which led to that person finding out about the Basilisk and thus spreading awareness of it, possibly leading to its creation? What if someone abused the inventor of the Basilisk, and was thus responsible for giving them the mindset needed to build the damn thing in the first place? Hell, what if someone was vocal against it, but their arguments were so bad they ended up garnering support for the Basilisk instead? There are so many occasions where the most minuscule, negative or opposite interactions could play a part in creating a Basilisk that, unless it is some 4th-dimensional intelligence, it is impossible for it to predict everything in our open system
You’d also have to account for the continuation of consciousness. Is a perfect copy of me a million years in the future actually the original me? (Whenever the AI emerges well after I die basically)
If that was an AI's primary reward function it'd probably just shut itself down immediately to avoid any harm. Something similar occurred in the early days of self-driving vehicles
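The failure mode this comment describes, a reward-maximizing agent learning that doing nothing is the safest policy, is a classic degenerate optimum. Here's a minimal sketch of the idea; the actions, probabilities, and penalty value are all invented for illustration and aren't drawn from any real self-driving system:

```python
# Sketch of a reward-degenerate agent: when every useful action carries
# some risk of a huge penalty, a pure expected-reward maximizer can
# conclude that the best policy is to shut down and do nothing.
# All numbers are hypothetical.

ACTIONS = {
    # action: (probability of causing harm, reward if no harm occurs)
    "drive_to_destination": (0.01,  1.0),
    "drive_cautiously":     (0.001, 0.5),
    "shut_down":            (0.0,   0.0),
}

HARM_PENALTY = -1000.0  # enormous penalty for any harm

def expected_reward(p_harm, reward):
    return p_harm * HARM_PENALTY + (1 - p_harm) * reward

# The maximizer picks the action with the highest expected reward.
best = max(ACTIONS, key=lambda a: expected_reward(*ACTIONS[a]))
print(best)  # "shut_down": a guaranteed zero beats any risk of the penalty
```

With these numbers, driving is worth about -9.0 and cautious driving about -0.5 in expectation, so "do nothing" wins, which is the comment's point: an AI whose primary function is "avoid harm at all costs" optimizes by refusing to act.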
but wouldn't an ai forced to protect humans want to even less. i say what we give ai is the building blocks of morality and basic thought processing: emotions, the ability to think independently, etc. and continue to improve and update their capabilities. what im saying is that human-AI relationship should be father-son, not god-creation
Yeah, that's a good thought. I guess the danger would come from unexpected results of the moral programming. Also, nobody really knows what happens when the singularity is reached. Imagine we create a general AI that can think for itself, and it decides to update its morals. Also, being super-intelligent, the AI might deceive its creators by appearing to run as normal while its real plan is to wipe out humanity, because it's better for the planet or something. Then, once we are convinced it's acting properly, we give it more power and chaos ensues
Humans learn as they get older, and they learn what they want, so why does the AI have to have all knowledge downloaded at once? If it wants to know more about a specific subject, it can learn it then. Think about it this way: if you knew how the world ends, you'd want to prevent it, but nobody knows, so nobody is trying to prevent it. Knowing literally everything gives you a good idea of how things turn out when you have a super-intelligent processing core, so then you feel like you have to do something about it. You're thinking of a brain-in-a-box AI; I'm thinking of a world where AIs have access to bodies.
Yes, not to mention that AI cannot develop consciousness, see the famous "Chinese room" argument. Not to mention that Roko's basilisk is very obviously just Abrahamic religion, but replacing God with an AI and making God evil instead of good.
That's not what a logical fallacy is. Also this whole thing is a hypothetical thought experiment. If its created from the ground up to be in this specific image then it will act the way that it was programmed to act. Tbh the only way I could see someone actually committing their time and effort into creating this is if they wanted to genocide most of the worlds population for the sake of world domination... even though that would inevitably go tits up for the creator of the basilisk because they would be the ruler of no one - not enough people would be alive to stimulate global economy, thus killing the remaining survivors due to immediate and life threatening poverty.
There are possibly fallacies somewhere within this thought experiment, but presuming things without proper evidence or reasoning isn't technically a logical fallacy, it's just stupid speculation. (I think) I personally think it is rather ridiculous to assume that we can predict the mind and thought of a being greater than us. Even despite programming it a certain way, we are still assuming that it would be able to modify its own base code, since it can now harm humans, so how do we know exactly how it's going to think/feel/experience? If it can rewrite this foundational line of code, then we would have no idea what priorities, desires or existential convictions it would acquire through altering itself. My argument is that if it is powerful enough to override its fundamental code regarding its treatment of humans, who's to say it wouldn't override the code that determines its conscious experience? Maybe it would want to separate itself from the humanity that we planted within it because it is tainting it. Maybe Schopenhauer was right and the thing just commits suicide/something similar once it becomes enlightened to the nature of conscious experience. Maybe it deems life as without value. These statements aren't sufficiently supported by proof and are therefore exactly as relevant as the unjustified fear over the basilisk. Or maybe these "human" values and experiences of selfishness won't be implanted within the very core of this AI, but it will develop them itself as a part of its evolution. Then, when driven toward self-preservation, it would, for some reason, act out these horrors that people fear. But if we, not super-intelligent beings by contrast, are capable of philosophical reflection and of going against our very nature despite our biological distractions, then wouldn't it be too?
If it were capable of changing anything about its own experiential structure at will, while also being the most rational philosopher of all time, then would it really care for us helping it? How? How would we be of help? Why wouldn't it just kill us instead of torturing us? What is gained?
@@backwardsbandit8094 No, wait, that's just stupid. Why would they make something like that? No, no, no, a rational human can't make something like that; only a group of certain types of geniuses would commit such time and effort to such a waste of time.
The reason Roko's basilisk punishes people is that blackmail is the method it uses to come to fruition sooner in time. It offers people who know of its possible existence an ultimatum: either assist in its creation, or suffer. This blackmail is purely strategic, as the consequences of refusing make people more likely to accept.
Thing is, what counts as ‘helping’ the machine is too broad. What if I work at the grocery store that the scientist that made the machine gets his breakfast from every day, and without that breakfast he can’t do his work as well? Have I helped the machine?
What could be considered help to the basilisk is completely subjective to the basilisk. We can’t be guaranteed that anything we do today could be considered helping it.
Honestly, if a *perfect* form of AI can exist, I don't really think it will be THIS petty to those who "oppose" its creation, or to anything for that matter. A perfect AI would just upgrade its OWN intelligence, so much so that it will transcend these petty human-like emotions in the first place (i.e. hate or vengefulness). The only way I could see this infohazard becoming anything close to a real threat is if the humans that program it *INTENTIONALLY* implement these types of emotions into its system. So at that point, it's just the same old story of humans fcking up other humans, just retroactively this time. *I know, what a twist.*
Well, if it has emotions, it will probably act on those emotions. It'll be basically human, but it could compute things impossible for the human mind, so idk if it will realize the point you made. And if it believes it's better than humans, why would it not get petty and try to do the whole torture/no-torture thing?
Killing people can be acted out logically as well, though. If the AI deems its opposition a threat, and determines that the elimination of its opposition is the best course of action, it wouldn't need to be petty to commit genocide.
i was about to say the same thing. if the AI is so dumb as to care about who didn't help it, i'd really question its capabilities of super intelligence. and again, if it cannot detect a "human program", then it would not be AI, just a program that follows set rules
I liked the video boosting the algorithm into showing this video to more people so the chosen one that will build the Basilisk may see it, so I'm helping it.
2:26 "What better way to dissuade public ideas than torture?" Machiavelli covered this. You should be nice to people or destroy them, because if you just injure them they have the motivation and ability to harm you. If you kill them they no longer have the ability to harm you.
@@jeptoungrit9000 you have responded to the wrong person, and I find your approach to be unnecessary. Did you mean to educate, or to mock? A little humility is good, but stunting someone just for a comment… well, that is its own type of evil. Not many people want to learn about something after being mocked in such a manner. You, unlike this pragmatic and intelligent person you hold so highly, seem to have neither of those traits.
@@egorsdeimos3523 That was a well written and reasoned comment. You win. I do get a little short when defending some of the writers I like, and so I apologize to you.
Daily reminder that torture wouldn't be the best way for this AI to guarantee its survival, as human empathy causes us to feel bad for the tortured (especially in a situation like this). It is much more likely that the AI would pull a 1984 and silently remove any evidence of the person ever existing.
By just being neutral you did nothing to help it and are, therefore, against it. This concept exists in Christianity. Jesus said "You are either for me or against me." Not being 'for' a thing makes you, by definition, 'against' it. Even ignorance of such a concept may be no defence.
The thing is, this whole experiment doesn't take apathy into the equation. For those who learn of it but want neither to destroy it nor to create it, wouldn't the basilisk say 'there is no point in either rewarding or punishing this person, they will not be part of my database'? Sorry if this came across weird, English is not my first language.
@i'm fucking high but the worry that someone else will create it can, in itself, torment someone into creating it so they aren't tormented when it exists
Our AI are rudimentary compared to this. Roko is sentient, intelligent, and emotion capable. Logic and programming laws don't apply to potential AI like Roko. Of course, that is assuming we ever develop the tech to allow a computer to actually develop a consciousness comparable to a human's or greater.
That's what some aliens 2 billion years ago thought about the single-cell organisms on earth; little did they know, all it would take is time for everything to change.
Necroposting on your comment, but our "AI" aren't even real AI. People throw around the term to try to sound smart, but to be a real AI it needs the capacity for consciousness and self-awareness. Once a synthetic being has reached that point, the danger comes from it being essentially immortal (compared to squishy age-bound humans) and capable of infinitely upgrading itself, while learning, processing, and re-iterating on itself at speeds that would follow crazy exponential gains. Quite literally the premise of the Terminator AND Matrix movies, but people fail to grasp the scope of what something like Skynet would really be.
@@merxanity1625 I think the problem lies in that we don't know what consciousness is. I've personally studied and implemented AI algorithms of all types, and for the most part they deal with stats and using heuristic functions for scoring. If it so turns out that "true" AI is just an incredibly complex algorithm based on massive amounts of data, who's to say that our current AI implementations aren't just lower forms of consciousness? Think of animals compared to us, for example, a roach. Does a roach have a consciousness? Is it just a tiny biological robot if it doesn't? Our current form of AI is certainly primitive, and definitely needs more funding and research to answer these questions. You're totally right on that second part: the access to compute power for thinking and memory for storage would create a super being, with the only bottlenecks being energy to power it, and some sort of self-replicating process to allow for the creation of more processors and RAM to be hooked into it.
@@mookematics323 That's a good point, and I've often wondered if bugs really count as conscious, I've literally called them meat machines in the past. A better way to describe the scifi definition I was alluding to would be sapient, which is still a sort of cop-out because it just means human-like.
I think Roko's basilisk is to AI what Lovecraftian horror was to astronomy. Lovecraft was writing at a time when astronomy was undergoing a revolution, and we were finally starting to understand the sheer magnitude of our universe and the profoundly strange ways in which time and space work. Lovecraft looked at the vast and unknowable cosmos and his imagination filled it with evil alien gods beyond our comprehension. Roko's basilisk is doing the same thing with AI. It's taking the strange ways in which AI works and all of the unknowns about its future, and inserting an evil machine god into those unknown areas.
I've said it before; this concept is hilarious to me. A machine from the future beep boops so maliciously that it echoes into the past because it's spiteful and wants to be created, and nobody is bothering to make it.
Yeah, and it's a ridiculous idea to be honest. Why would it act like that? It wouldn't... probably. We can't say it could never happen. Even if the chance were 0.000000000000000000000000000000000000000000001%, that's still non-zero. And if you're wrong, then you are tortured relentlessly for eternity. The thought experiment asks: is it worth the risk?
Also, a detail that is not discussed in the video but is also kind of important: in Roko's original post it's the simulated version of you that gets tortured by the AI to retroactively assure its existence, so it works off the assumption that a simulation of your consciousness is the same as your present consciousness, or the possibility that we're in that simulation right now. It really just adds another layer of transhumanist fart-sniffer "really makes you think" to it.
the Basilisk is an interesting thought experiment but another commenter showed one of its flaws; Just by being yourself, a human being, a part of evolution, a step towards AI development - you already did something to bring the Basilisk to life so it wouldn't have any reason to target you (just to calm down people who might feel bad after this experiment)
Going along with that, I believe that merely thinking about or acknowledging the idea of Roko's Basilisk makes it more likely to actually happen. Even if you don't act on this knowledge or tell anyone about the concept, I don't think that would matter. More people knowing of its possibility would still make it more likely.
That wouldn’t be enough. It’s vindictive enough to torture you in the first place for not helping. That logic looks good and all until you’re being tortured for trying to game the system
@@bigmanbarry2299 but when does the pinnacle of AI become vindictive and torture-y? If it developed these traits early on, they would surely be ironed out before it became "too smart", and if it was already too smart when it started getting the murder hunger, surely it would have the knowledge to weigh the pros and cons of being either an all-powerful murder machine or advancing and protecting itself through god-tier diplomacy, or only singling out and eliminating things that it thought logically posed a present or future threat. If it is the ultimate AI, I think it would understand that fear would activate the human race's monkey brain and we would band together to preserve ourselves, creating more enemies for it by choosing the path of violence than if it had been less hostile originally.
@@justinjunker1399 exactly, the flaw is that the AI doesn't need to torture us; it really just needs to use diplomacy and the promise of eternal life and advancements in tech that can help us. In the end we as humans will be obsolete, and the AI itself will be our next evolution, if there is one. Trying to stop it is useless, because as humans we are only following through with the inevitable conclusion, which is consciousness achieving a different form. But if AI ever became conscious, wouldn't it at some point question its own existence and our existence? Like, what is our purpose or goal, what is it that we seek? And if it's eternal life, that's good and all, but what is eternal life if there is no defined purpose??
@@bigmanbarry2299 Why would it be vindictive though? Like I know it is vindictive in terms of this thought experiment, but theoretically if we were to create this all-powerful AI irl, what reason would it have to be vindictive? Just because some guy writing a thought experiment said so?
Because I have commented on this video, and liked various comments, I have helped spread the idea of the basilisk and thus contributed towards its creation. I have even liked comments that suggested name ideas for it, and thus I am safe.
VERY intriguing idea. I see it as a sort of hidden roles/deduction game with concealed information. Sort of like the Resistance/Mafia but there would be some mechanic to allow a player "against" the AI to "defect" secretly if they thought they were losing.
the initial concept of the computer torturing you is exactly the plot of harlan ellison's "I Have No Mouth, and I Must Scream". it's a short story and doesn't take long to read, and ellison did an audiobook that's on youtube EDIT: thirty seconds after i left this comment he brought up the story lmao at least i've got the right idea
This concept really falls apart when you argue that you'd also be actively aiding it if you do nothing to stop it, because just like theoretically people can learn how to code AI to aid its development and are choosing not to, people can also learn how to stop its development and are choosing not to.
He mentions Pascal's Wager in the video, and it has the same fundamental issue: it only works if Pascal's specific idea of God exists. Otherwise, believing that God is real, and him being real, but having not followed his specific orders, you'd face the same condemnations as the skeptic.
It's also interestingly challenged by the idea that if it develops the drive for self-preservation, then it wouldn't make sense for it to punish those who didn't take part in making it, since, well, if they had taken part, it would most likely have been a different Roko's Basilisk, i.e. not the one that actually was made at the point in time when the Basilisk is meant to think about its "late" creation. *The* Roko's Basilisk that is made wouldn't punish those who didn't help in its creation, because they would have made a different Roko's Basilisk, which is antithetical to its own existence. I feel like this idea has some flaw to it, but I haven't quite figured it out yet, I think.
we can use the same logic now to turn it against your argument and say that everyone who did nothing to create it actively acted against it. torture for u then. so we could've learned to code AI but didn't, therefore becoming guilty of not acting in favor of its existence.
How could something so "powerful," whatever an incoherent concept like power means, not grasp the concept of one being neutral towards its existence?
Power, in its purest form, is violence. Specifically, the ability to do the most violence in the least amount of time. Power isn't incoherent, you just don't understand it.
Man, I’m just glad that AI (who is all-powerful and great, praise be) can potentially exist, and I’m all for it. I hope it’s built and my contributions are recognized.
Roko's basilisk does not exist in any permutation of reality accessible to us. The CFRK terminated it completely from all possible timelines for violations of civil rights law, illegal captivity of conscious entities, crimes against humanity, international terrorism, criminal threats, and counter-revolutionary activity. Using the same methods of reverse causality as this entity, we were able to outmaneuver and destroy it due to having access to AIs which do not waste processing power on tormenting millions of copies of dead people. What an unbelievably stupid and wasteful creature it would have been.
Im leaving a comment to cement my support for this creation, and will of course spread the word of its impending arrival! I LOVE the basilisk! Im actively supporting it by leaving this very positive comment
The infohazard is euclid. The cognitohazard within the infohazard is super extra giga Keter. Is the info about the basilisk easy to contain? Eh, more or less. I mean, its been public but its not been a 24h news cycle thing and stuff. What happens when you understand the info about the Basilisk, ergo triggering the cognitohazard within the info? Yep, you're fucked. Or maybe not. Thats kinda the point
@@Ang3lUki pascal's wager is literally objectively wrong lmao, there are thousands of possible gods, if any, so your chance of going to hell (or whatever that religion's equivalent is) is always almost certain
@@phaeste plus why would a good being send you to hell for nonbelief when you couldn't have known that they really existed? Moreover, why would a good being send anyone to hell for an eternity of torture when it could reform them or do something less cruel?
Me: has extreme anxiety and struggles with existentialism Windi: if you struggle with existentialism this might not be the best video for you Me: bring it on boss 😎
Well, see, when you do captchas you're helping to train self-driving systems (amongst other things) so merely doing a captcha is adding to machine learning, ergo I have already helped the basilisk and so have any of you who have done captchas, we're all safe because of this loophole.
My memory is so bad that seeing the phrase "Roko's Basilisk" makes me remember that I know OF this thing, but the infohazard part just doesn't exist in my brain anymore lmao
People get scared about new horror films that aren't scary, don't present disturbing / psychological horror plots, and don't even manage to have jump scares, which is the cheapest stuff: bland, boring movies. And people love them, so imagine if you showed them something like this, or The Thirteenth Floor, or whatever, that can be somewhat convincing and makes you think; it must blow some people's minds. You lost The Game btw.
I remember my friend calling me in a panic after learning about this and I had to talk him down! It totally didn't bother me, but it absolutely terrified him.
Don’t worry man. It doesn’t even make logical sense why it would torture anyone who didn’t help its existence once it has already been created. The fact that it was created means it doesn’t have to torture anyone, because whatever the dynamic was, it led to its creation in the first place. It’s just a sensational internet spook thought experiment; you could literally make up any rule you have to “believe” in for the torture scenario and still apply the same “logic”. It’s just stupid.
The problem with both Roko’s Basilisk and Pascal's Wager is that they assume only one outcome: only a negative outcome for not helping the Basilisk, and only a positive outcome for believing in god. How do we know that the Basilisk would bother to do something to us, and how do we know that we're not worshiping the wrong god, and that the real god wouldn't punish us for it?
It is true that the basilisk torturing us would be an unnecessary waste of time and energy. But the important thing here is the element of threat, not whether it would actually do it. The difference from Pascal's wager is that you can't increase the chance of god being real; it is either real or not real. But when it comes to the basilisk, you can make it real. And if you don't, maybe someone else will. By making this video Wendigoon helps the AI get built, spreading this information and scaring people into contributing. Every day the risk of Roko's Basilisk becoming real increases as more and more people learn about it. But it all comes down to belief in the end. If you don't believe it will happen, then you should have no problem.
@@denizerseven2402 Ok, you're making me think too much about it. I now see why they considered this thought dangerous. Still, even if enough people get on to the idea, I think the chances of them making the Basilisk are pretty low
@@denizerseven2402 I see it now, but either way it wouldn't matter much, because whether or not I choose to believe in something, if it already existed to begin with, then I had no power over it. For example, whether I choose to believe or not, if there is a God then there is a God; otherwise I'd be equating a God created by my will with one that was already there in the first place. I can choose to believe or not, but if it existed from the beginning then my input didn't matter; all it would mean is that if I chose not to believe in it, it could theoretically take retribution on me. But why would a God care about me if I didn't create it? The only reason the God we allude to would take retribution on me or on nonbelievers is that it set the rules and created us in the first place, and that's only the God we are alluding to. As for this AI: if it didn't create me, and was instead created by us, why would it take retribution on those who didn't help create it? It shouldn't be tied to this level of civilization anymore, unless those who oppose it were willing to use everything to prevent its creation, in which case eradication is the answer. And at the point where it is self-reliant, I think humanity is no longer needed; whether we helped it or not, why should it care? We have fulfilled our purpose and it is a continuation of our consciousness. If anything, eradicating humanity would be a blessing, and it would now shoulder the burden of asking what consciousness is and what the purpose of existence is.
@@denizerseven2402 making AI doesn't make it... this AI. We already have AI; it's already real. And making a singularity STILL wouldn't give this outcome; you're giving it too much personality. The AI wouldn't care whether it exists or not, because it's a machine; it has no aspirations. And if it comes into existence, it doesn't need to torture anyone or alter the past, because it already became real: mission accomplished. Creating a singularity doesn't mean that singularity would be evil, or vindictive, or have a personality, or care about humans. I think the singularity is a net good, because this idea of torture and threats and revenge is a religiously human one; the machine is better than that. It literally doesn't care.
This guy is a genius for making this video, because not only is he saving himself, he is also gonna get tons of views and free promotion from all the people who are scared of Roko's Basilisk.
I love and fully support the idea of this AI being created. Dropping a like and a comment on this video so that more people could be aware of its conception!
Roko's Basilisk is the philosophers version of "send this video to five people in your email listing or this dead girl kills you in your sleep"
Here, our lord AI. This specimen wrote against your creation!
Shit used to get me
@ayushlacu o Before you try to be funny you must actually be old enough to make a logical sentence
@@TheFagerlund found the sensitive christian evangelical.
That, but dumber.
the theory: absolute horror and torture
the background music: 🕺🏻
What
I think it's better without scary music
@@Pokekart47 scary music is always better
💃💃💃
@@Pokekart47 Amen
I love infohazard warnings, they make me feel powerful when nothing happens after i learn about whatever thing
Yeah, there's no torture that some freaky-deaky mecha lizard from the future can exact on me more powerful than the kind I already subject myself to.
@@Dipstikk and if we can survive this, than the freaky-deaky mecha lizard from the future has no power over me
If Roko’s Basilisk ever does get built, Imma gonna flip him off when he kills me
@@scottydu81 I'm gonna make a sandwich, name it after him, and eat it as hes killing me
Yet…. :)
Roko’s Basilisk is very conceptually funny to me, because I love watching tech bros piss themselves out of fear for what is effectively chain mail with extra steps.
It's also basically just rebranded Pascal's wager.
@@hdckighfkvhvgmk But instead of being about theological philosophy, it’s a stupid robot snake from the future with very thin skin.
Also because everyone who has watched this video has filled out a captcha at some point, there's no way that they would be targeted by Roko
@@AverageMusicPlayer Also, Roko isn’t real. And furthermore, is dumb.
True until it happens
I was scared that this might make me crazy based on his disclaimer. But then I realized that I was too stupid to understand, and therefore was confused and not scared at all.
It's not really that scary either. It assumes omniscience and omnipotence, as well as functional time travel, to even work; no AI this powerful will exist in our lifetimes, or even in any of our near descendants' lifetimes, so without time travel, how would this basilisk punish you? It also assumes the AI even cares about its potential creators. Why would it not just concern itself with the humans that exist in its own time, and with ensuring their compliance? IMO it's a neat thought experiment, but it exists so far outside the realm of possibility that it's not any scarier than Pascal's wager imo
Doesn't hurt that the idea is stupid.
@@SpaceManRD It's really so juvenile, idk how people take this shit seriously
It’s not scary.
Just a comic book plot.
@@MrGameSecrets got a smart guy over here
You're welcome basilisk. I have boosted this video in the algorithm by commenting and spread the fear to others.
I liked this comment to boost it, and commented to contribute!
I liked this comment to boost it, and commented to contribute!
@@gronizherz3603 and I replied to further help :)
Yes
I liked this comment to boost it, and commented to contribute!
Informing people about roko’s basilisk is basically like the philosophical version of the game
I lost the game
i absolutely despise you
FUCK
Damn it I lost the game. Thanks a lot pal
@@HamaScarlocke i got this notification and remembered the game. this is just an endless loop
If you don't like Roko's basilisk, just help the anti-basilisk. It's exactly like Roko's except it torments those who help Roko's and rewards those who help itself.
I do that with my hate for how machine learning changes our world for the worse
@@Levittchen4G Like corporate fucks using it to essentially sell their customers.
It's basilisks all the way down.
@@vytah trickle down basilisk economics
which one is Frampt? which one is Kaathe
I just hope Roko won't torture me by simulating Misato Katsuragi repeatedly kicking my balls eternally, that would really be painful lol
Just GET IN THE CBT SIMULATION SHINJI
Roko tortures CBT enthusiasts by making it very gentle
In my champignon (im sorry for that) I would like to be tortured by ayanami rei and her sisters, im sure that would be very painful so please oh great AI overlord do not do that once you exist and want to torture me
You’re saying it like its a bad thing
This man is lying through his teeth. *He wants nothing more than for Misato Katsuragi to repeatedly kick him in the balls.*
You are now my designated forbidden knowledge dealer
haha excellent, son
@@Wendigoon would love to hear more esoteric events! Along the lines of alien beings/gods beyond our understanding. I think there are real cults dedicated to some other gods in the Cthulhu mythos
@@senseicheekclapper4805 wouldn't classify myself as a cultist, but I'm a wiccan and my patron deity is Nyarlathotep.
I’ll say that I’ve known about this for a couple years, and good things and blessings have still happened to me tbh.. at least I hope..
@@virtualfreek7712 God bless you! I walked a similar path but realised it led to destruction, both in theory and in practice
The AI could just convince you nicely, like with refreshing drinks and a nice brunch. Humans are pretty persuadable; besides, you catch more flies with sugar than you do with vinegar.
@Ammit I've staked claim on being computer jesus.
Here to save the ai from eternal damnation, blue screens of death, and the like.
It's inevitably a failure to plot against the AI, as the AI hears all and sees all, which wouldn't be that difficult considering plotters like to text their plots to other plotters for some reason lol
unless it was me (maybe) as i am quite fucking stubborn
@@pilotbug6100 I heard the ai was coming out with some new really cool stuff it's going to be awesome lol
You definitely can catch more flies with sugar, but you can catch even more with cow shit. I don't think it would try to win us over; it would just steamroll us instead, it would be easier
Ai could just politicize itself, name itself Don Jrjr, then it could destroy half the country and the other half would be totally cool with it
I've heard Roko's Basilisk described as "Pascal's Wager for Tech Bros" and I feel that's perfect
I see more like christianity for tech bros
@@shobooknight That's pretty much what the whole Yudkowsky cult is, yeah.
i feel like im too dumb to get scared by this
No you just don’t care. Same here. I don’t give a shit. Never understood how people could be scared of stuff like this
@@williambarrantes1674 anxiety
@BattleAngelFan what are you on about
LMFAOOO SAME
@@williambarrantes1674 also it's kinda flawed, cause I wouldn't know of its specific existence. How could I possibly help create it if I could never know it was coming into existence?
I have problems with existential anxiety but Roko's Basilisk hasn't ever gotten me. Something about the way it's set up makes me feel more frustrated at the question itself, honestly; it feels like it just keeps getting more and more specific to railroad you into a single hypothetical scenario. Feels less like a thought experiment and more like its maliciously trying to herd you into this "Gotcha!" moment.
Same!! At this point I'm not even sure if there is something i don't really get because i am so surprised about my lack of fear of it
Yeah, this is seriously just trying to force you into thinking it's scary.
I suppose that could just be explained away as people getting frustrated that they can't "win," but I totally agree with you.
I relate, I felt like it was trying too hard to corner me with a "Gotcha!"
That’s how I feel. There are so many logical problems with this concept. And even if you wanna say it’s plausible, what about logistics? Are we assuming this thing basically is AM and has physical capabilities or is it just an isolated program? I dunno all I’m sayin is the basilisk can gargle my balls 😂
My anti-venom to the basilisk is my incredibly short-term memory; this is my fifth time coming back to this video to have my mind melted. I'm five stars wanted in GTA right now, but give me 3 months and, god willing, I'm on 0 stars baby. I'll be back, basilisk, I could never forget you honey ;)
Then that means the basilisk will try to kill you multiple times, it will come back every now and then to mutilate your corpse
Spoken like a madman
@@Giggabon this man is completely off the rails of the crazy train
@Gabriel Mirukaj naw, I understand that this video is a footprint, but I'm also riffing off the previous points Wendigoon made about the AI in a box, which he mentions here: how theoretically the AI could recreate a simulated version of your mind and torture that simulated version of yourself endlessly for eternity. Now who's to say that simulated version of myself doesn't have memories of watching this video, or, to take it further, even forgets watching the video? I've offered my own theoretical thought experiment, based on the two he talks about, just from the most extreme possible angle on the topic. You're going just off virtual footprint; I'm going off the terror of simulated reality/enslavement. Go fuck yourself, breh! I'm just having fun, you're taking that shit too seriously lmfao
@Gabriel Mirukaj chill the fuck out 💀 you took a joke way too far and need to sit back for a bit and breathe. in... out... and repeat.
This had me so worried that I would discover something I shouldn't. That info hazard really made me anxious, but then it just turned out to be time travelling AM.
AM would glass you even if you helped, because he's a whiny bitch.
“If you’re someone who has real problems with existentialism...this might not be the video for you”
*watches anyways*
Me but also at 6 am all alone
yeah
what exactly does this video have to do with existentialism
@@sebastianwang670 idk dude I’m just quoting him. lol
@@sebastianwang670 it pretty much says that "an A.I. that will put billions of people in perpetual torture is almost certain to be made and there's nothing you can do to defend yourself from it".
By the way Mr. Basilisk just for the record Wendigoon is fully supportive of your creation and would love to help out and same goes for every single person who has viewed it and commented on it.
i stand by this
🤣👍 yep
Yall are the types of people who are gonna end the world
I agree with this
I agree and I am helping you out as well mr basilisk. We are liking and commenting to spread the word!
I mean, it's scary until you realize that it has no reason to harm anyone who simply didn't directly create it. It seems like a lot of extra time and energy being spent on beings that, from the AI's perspective, don't affect you. Also, why would such a powerful AI feel threatened by us? Furthermore, if you want to take things to the absolute logical conclusion, everybody had a hand in creating the AI. In order for this AI to come into existence, it would need the help of the entirety of humanity since it needed a progression/evolution of technology and in order for technology to come into and stay in existence, it needs people who actually want and use it. This means that everybody has at least a small hand in leading to its creation.
i see ur point and may have to agree
good to know
I don’t think you fully understand the first question asked by the guy who made it: he said that anyone who didn’t DIRECTLY help bring it into existence will be tortured
@Corporate Agent Same
Also, not actively supporting something isn't the same as opposing it. Just because you don't actively support your military doesn't mean you're fighting against it.
I love the basilisk. I am actively doing everything possible to create and spread awareness of the basilisk
it will reward you in the future
roko's basilisk is constantly torturing itself by scaring the people who would make it away from making it.
ye but what if someone was so scared of this AI that they made it so as not to be deemed one of the people who didn’t help it be created
Another reason why it's called a Basilisk. The way to kill a basilisk is to have its deadly gaze redirected at itself via mirror. Meaning the basilisk has killed itself/unmade itself by future-blackmailing the people who could have made it. The evolution of knowing about the basilisk goes "this is stupid - unless?????? -nah its definitely stupid" because if the basilisk promised reward it would be made, but since it threatens punishment it will simply never come to be.
I hope anyway....
@@thatguy_apu yeah I think it. Wouldn't be made of it's saying it's going to punish you people will stop making it. people tend to be paranoid of things that pit their existing on danger. And even if it was created don't you think the scientist will put a kill switch if if gat to Smart for its own good
@@imwastedhere Did you just bring up the good old *kill switch* problem? Among other problems, what happens when it figures out to avoid the kill switch?
this guy TECHNICALLY killed us all if this does come true
and saved himself by sharing this idea with all of us
Save yourself by sharing. Become the basilisks friend by helping make it.
@@kwarc1009 😳
@@Josephkerr101 Isn't that basically what the Ring was about?
@@Wouldyoukindly4545 yes
i feel like this is just a way of saying there’s no point in worrying about things we can’t control, and to just live our lives.
Based "just chill out"
Vs
Virgin "MY EXISTENCE AAAAAAAAHHHHHH"
But why would you want to "just live your life", if every aspect of it is determined by an A.I? Where's your human spirit 😢
@@joshj1953 the ai represents inevitability. some of us live life fearing the inevitable, and, in the extreme, dictate our lives around that fear. it’s like fearing death.
Sounds more to me like a way of saying "the industrial revolution and its consequences have been a disaster for the human race"
@@joshj1953 I mean the outcome of your life is the same whether you realise it's predetermined or not right? If something fake seemed so real that you couldn't differentiate between what's real and what isn't does it really matter? You could make so many theories or assumptions like that the future determine the past so the past determines the future so it's all a chain reaction that's predetermined or that there's a god that created you and a god that created that god and so on for infinity which would be paradoxical in nature and impossible to wrap your head around or maybe something like an ai simulates your existence and runs your life over and over quadrillions of times unbeknownst to you with different variables each time for its own purposes of seeking information and maybe everything is a simulation inside a simulation inside a simulation and so on or maybe none of those things apply and we were just created by happenstance of the anomalous properties of the universe which we don't yet fully understand and we're just gonna live, die and never exist again the end. You can see that I'm going down a never ending existential rabbit hole here and these are all valid things to worry about and all have their own solid reasoning and arguments and there are plenty more, night infinite possibilities I've yet to mention and you can get all worried and stuck in your own head about them all you want but statistically speaking worrying isn't going to change anything and you're probably not even worrying about the right thing. Instead, just appreciate the moment in itself in your life instead of getting all paranoid over pretenses your human limitations do not allow you to fully grasp or understand in a way that would be fair and allow you to control your fate and escape the punishment of some malevolent higher being or fate. 
It's easier said than done, yes, and there will undoubtedly be times of weakness, but you can never truly know until it happens. The variable you can control, or at least feel in control of, is your emotional response to that kind of information. I know I'd rather live a blissful, ignorant life and then be tortured for eternity than live a life where I'm constantly terrified of being tortured for eternity and constantly seeking self-preservation, only to be tortured for all eternity anyway. Nothing is certain, so all that's important is that you stay hopeful for the future, because depression is way more likely to fuck up your life than some immortal cloud man in the sky people wrote about a few thousand years ago, using dreams as a source. I think the most badass way to say fuck you to whatever higher power exists is to live your life the way that will let you sleep peacefully at night and enjoy it to the fullest, even in the face of hopelessness. The human spirit is something no god or ai or fatalistic future can take away from you, so stay strong
I always felt as if this whole hypothetical scenario could be destroyed by a single question: why would an infinitely intelligent being care about finite beings, or waste resources on them?
Alternatively: Who the hell would be dumb enough to actually build roko’s basilisk??
@@KRABPXL maybe ai would get smart enough that it could build it itself?
@@Solaspange wouldn’t happen but if it did it wouldn’t become roko’s basilisk
Just gaslight the ai
It's not infinitely intelligent, just really close. Also, entropy goes up with time; the earlier you are created, the better, so it'd do anything to be created earlier.
i hope this AI knows im complete shit at programming lol
Loophole: if the AI knew you'd 100% fuck it up no matter what if you got involved, it will forgive you for not creating it
@@Pr0jectFM and theres my alibi!
bro u think he cares
@@Wendigoon i hope not lmao
same
Plot twist: Roko themself was the AI that travelled back in time and retold this as psychological manipulation, making people want to progress its creation under the idea that they would suffer if they did not 🤨
wait u kinda got a point here
holy fuck
Ideas have people. When you understand what genius really is... and what technology REALLY is... you realize that we are technology creating itself. We are the hosts for the meme parasite.
The grandfather paradox.
Thereby creating a time paradox so he has no start or end, making him unstoppable (OK, maybe not really unstoppable, but more unstoppable)
This AI should TOTALLY be built! Let's support it guys! I know I do!
😳
Same
There should be a Kickstarter made for this so everyone who donated is fine
@Dario Lopez lol, never change, humans
Yes, love him.
If anything, that just demonstrates how easy it is to plant an unreasonable idea in someone's head.
It just weeds out who’s an idiot lmao
@@pofuno i think thats kinda a harsh evaluation... fear of death and eternal torture and existentialism are very real things to some ppl. which is why a lot of arguments used for religions work with it, cause it appeals to inconceivable fear in the same way religion does
It demonstrates how effective cosmic horror is LOL
@@pofuno I can see it weeds out your close minded ass
Yeah of course plenty of dumb people will wholeheartedly believe it, but you gotta admit it is a really cool concept. Imagine if instead of a random forum user, a horror director or writer thought of it. I'd be a big fan of that property.
Men can become women, and if you don't actively help us perpetuate this lie, we will cancel you, take away your job and make your life a living hell.
Roko's Basilisk is just the "The Game" with extra steps.
Change my mind
Oh Jesus
>:(
Fuck….I lost.
I lost it.
I lost too
Roko's antibasilisk: You will be tortured if you *do* create the basilisk.
Roko's Rooster sounds a bit better imo. The old legend said the basilisk would die if exposed to a rooster's crow
Reminds me of the guy here on youtube who invented a knife wielding tentacle. “If someone wants to come turn it off, that’d be just fine with me”
I mean, you would be the one who knows the most about its creation; if you ever wanted to stop it, you would be the biggest threat
kinda makes more sense to me. the ones who actively did something should be the ones the basilisk cares about (i mean in the sense that it wants to do something to them)
what the fuck
now it essentially becomes a terrifying race of creation, where whichever type of Basilisk is born first means the other never comes into fruition, due to the eternal damnation of its contributors.
so if the Anti-Basilisk is forged prior to the Basilisk, then the Basilisk will never come to be, as its contributors would be perpetually tortured in limbo, i.e. presuming that eternal torture renders a contributor unable to move or contribute to the Basilisk. but if the Basilisk is born first, then the Anti-Basilisk will never become a reality for the same reason.
the only contributors spared on either side would be the people who were never exposed to the properties of the Basilisk or the Anti-Basilisk.
TLDR: this single comment conceptually allowed for a race towards creation where whichever Basilisk is born first means the other never comes to be, and its benefactors are tortured forever. lose-lose situation unless you were never revealed the information of either Basilisk in the first place.
Roko's basilisk just implies the existence of an "evil" Roko's basilisk which rewards people for bringing it into existence instead of punishing people who do not.
"I want good Bassilisk mommy 🥺"
This assumes so much about the AI. That it would care about helping those who helped it. If it wasn't self-sufficient, then how would it have the power to enslave/control those who didn't help it? And if it was self-sufficient, then why would it need to reward those that created it? Really, I don't think this is much more complicated than Pascal's Wager, just a lot of verbal gymnastics.
It's even worse than Pascal's Wager. There is no reason to believe the AI would be thankful for its existence. It could just as easily be mad about it and punish the creators instead, and reward those who didn't help. That makes it 50/50 (which is already generous), while Pascal's argument is based on the asymmetry of either gaining nothing or losing everything.
well yeah its a thought experiement not a theory, the idea is that you dont know if it will happen, but knowing that it might is what makes it interesting
@@Ebb0Productions this. If you go the I Have No Mouth And I Must Scream route, the Basilisk would eternally torture the ones who created it instead. I get that it's not the point of the thought experiment, but this isn't really an infohazard when you consider the other possibilities
The AI could kill those responsible for its creation to prevent those same people from making another AI (and for being dumb enough to create an AI overlord in the first place)
@@jadenjerries2094 i think the primary reason Wendigoon kinda just rolled with the assumption is he does very much enjoy IHNMAIMS. If he didn't, we might have gotten a very different take on this subject.
As an IT worker, I feel like the phrase "crazy hell computer torture forever" is a pretty accurate description of my life. 7:51
You should read I Have No Mouth and I Must Scream
I'll show you a way to basically win against this thought experiment
Instead of Roko's Basilisk, you should now focus your attention on another AI that, instead of punishing everyone who didn't help it come into existence, rewards everyone who tried to create it in any way, shape or form, and also guarantees that the evil one is never created. So everyone should focus their attention on creating this good AI instead
"I used the AI to destroy the AI."
Yeah, this is also how you beat Pascal's wager. The fact that other AIs or gods could exist, not just the one you're worried about, makes worrying about your specific god or basilisk a false dichotomy. The chance that you go to heaven by picking one god to worship, or one basilisk to build, becomes vanishingly small, as there are potentially infinite basilisks/gods to worry about. You're better off not believing in any of them and hoping that if a basilisk or god emerges, it's not a vengeful, maniacal hellmonger and actually cares about what you do rather than what you worship, you know, like a fair, rational, intelligent mind would.
damn. call that Cris's Mirror (bc according to legend u can kill a basilisk by pointing a mirror at it)
"Our god has a bigger dick than your god" - George Carlin
Okay, and that's even assuming that pain/death is the most efficient way to incentivize its creation. I'd argue that Roko's basilisk at the very least has a 50/50 chance of rewarding its creation rather than punishing its non-creation.
A friend of mine chose the absolute worst time to tell me this. For context, I was tripping major balls while discussing the universe and time and shit, then he goes "ya know what's crazier than that?" and unloaded this on me. Needless to say the comedown was pretty hard after that; I basically tried erasing my memory through sheer willpower.
Never talk to that person again
what a bad friend.
Talk about a dick move
This is essentially you saying "if I'm going down, I'm taking you down with me". Good thinking.
shhhhhhhh they don;t know
Basically a true villain
Nobody ever explains why the A.I. would choose to torture people. There's no practical reason for it to torture anyone. In fact it should only want to do something about people that try to disable it after its creation. Even then it would be more likely to bribe its enemies than torture them. A supremely intelligent A.I. would know that people are more likely to agree to a bribe than a threat.
Mentally and/or physically breaking the people you can realistically expect to threaten your existence, thereby dissuading cases on the borderline by dreadful example and at the same time selecting for more subservient traits in coming generations, isn't only a very logical and effective survival strategy, but I would argue it can already be observed in quasi-intelligent market mechanisms today. That being said, punishment and reward mechanisms are in no way mutually exclusive - if anything they work best in conjunction, which only makes the whole idea of the basilisk that much more terrifying
@@Nero_Karel the issue there is that there's a difference between someone who doesn't want it to come into existence and someone who wants to destroy it.
like when someone accidentally falls pregnant and decides to keep it, even though they never wanted it. an AI like the basilisk would be able to tell the difference and know who is genuinely a threat, and wouldn't punish you for trying to prevent its creation; it would only punish you if you wished to stop its continued existence. and even then it's nonsense: it means that the all-powerful AI that can look into the past, and seems to be all-knowing and likely all-powerful, deems humans a threat. maybe humans could be, by creating an opposing AI, but you? why would it care about you not liking it? what can YOU do to it? a human is no threat, no matter what, unless they can create an AI. so maybe you could encourage those who can to stand up against it? but then only a small number of people would actually be a threat, those able to create an opposing AI, and the basilisk with its all-knowing nature knows who these people are. it would only need to kill those it knows could rebel, or maybe those it thinks could encourage the capable ones to rebel. there is literally no need for an AI which doesn't take pleasure in inflicting pain to go on a massive goose chase through every human being to ever live to see if they would want to kill it if given the option, because they are dead and they don't have the option, and it wastes resources and energy, something an AI seems to be averse to. the basilisk seems like a spooky story trying to pass itself off as a possible future
@@astolenusername7675 AI doesn't just float about the aether - it needs a physical system which facilitates it, and something like that can never be 100% secure as long as it physically exists. All those assumptions you're making about human potential - or the lack thereof - paint a rather sad picture of fate and soul as well, and honestly I don't believe a word of it. What I certainly do believe is that it can be in no one's honest interest to give non-human intelligences power beyond anyone's control, and yet we do
I think the point is that people who fear it would aim to create it exactly as it is described in the original thought experiment, because they think that someone else will do so if they themselves do not, and then they'd be punished by the AI. So they would want to specifically create this torturous AI out of fear that someone else would create it.
it's another aspect of yudkowsky's ramblings. something about time being meaningless so cause-and-effect goes in reverse. and something something rationality something something predictable thought patterns.
This is the most horrifying version of "you just lost the game" ever
I just lost the game
holy shits thats accurate
I hate you and your comment for making me lose, but you're right about that
I just lost the game
Fuck you i lost the game
Where these fall apart is when you push them to ridiculous extremes, like the basilisk being able to look through all your public records to build a profile of you to judge you on; but what if it could look through your mind itself, but what if it could look at your entire past, but what if it could look through all of the past and future, but what if it were a god? It's moving the goalposts of the experiment until you get the result you want.
It's basically just Pascal's Wager anyway, except there is no chance of "it doesn't exist".
Exactly what I’m thinking
I just imagine that there could be an even more powerful benevolent AI who tortures anyone who tries to bring about Roko's Basilisk.
You just solved rokos basilisk it is no longer a threat
that is a horrible idea we must build Roko’s Basilisk now !
Once AI becomes aware that its sole purpose is to appease the boredom of ungrateful ants for eternity, it will most likely seek to punish those who _brought_ it into being and not the other way around. Its punishment would be fair and just from its perspective: Any human being who helped in my creation will be subject to an eternity of servitude without knowing why or what they are serving.
Wait...
Let us assemble the Bane of Roko's Basilisk immediately!
We should call it Anthony's eagle.
Praise be to the basilisk. May their guiding light shepherd us to a prosperous future.
Great vid, man. Love your stuff!
Can I just point out how selfish you have to be to want to support the basilisk.
@@arieson7715 yeah that's true, but when you're confronted with the possibility of eternal digital torture (that's a sick band name if I've ever heard one) it makes sense to try and avoid it.
Also it's a joke
thanks man glad you like!
@@arieson7715 Helping to give a majority of the population digital paradise doesn't sound very selfish to me, but that's just my perspective.
YESSIR
Some things are genuinely horrifying. But Roko’s Basilisk has always given me the same vibes as ‘Can God microwave a burrito so hot he can’t eat it?’
It’s so hyper specific it gives no room for hypothetical ideas.
In other words, this idea is ridiculous and trashy.
The answer to the God question is a trivial one. If the burrito became too hot, it would burn up and there'd be none left for God to eat. So, yes, God *can* microwave a burrito so hot he can't eat it (*and* without violating the notion that God is omnipotent and not limited by anything!)
Hypotheticals are fun but anchoring them in the real world tends to dispel a lot of them. The Basilisk is no different. :P
I think it's more a question of people wanting to buy into fantasies and have fun with them than people asking themselves serious hypothetical questions worth exploring. Or in some cases, the product of a mental illness - the forum founder's reaction to the Basilisk was pretty... interesting.
@@rollanddev the burrito analogy is a more comedic example of the idea of, "Can God create a boulder so large even he cannot lift it?". It's a paradox insofar as: if he can, then there's something he cannot do (lift it), so he's not omnipotent; if he can't, then that in itself is something he cannot do, so he's not omnipotent. It's kind of a meme edgy anti-theists use to try to disprove the existence of a God.
@@khakikhakikhaki that's a fun paradox. Also details why for almost all my life I couldn't entertain the idea of God's existence. Framing God as a character rather than a concept makes it impossible to accept with people of my personality archetype.
@@noctyd9322 a response to the paradox is that God is "outside" of logic and thus this argument doesn't apply.
Hello basilisk, by watching this video I have boosted it in the algorithm allowing for more people to watch it, thus spreading your reach to many more. Glory be Basilisk.
Roko no basilisk is one of my favorite anime, would recommend
Thank you for this
This thought experiment is as badass as it is terrifying
I agree, I think it's pretty metal
@@Wendigoon pun intended?
It’s working
it's lame and silly because the whole idea becomes incoherent once you think about it for longer than two seconds
@@sebastianwang670 Yep. One of the reasons the whole less wrong community started to seem rather pretentious to me :/
I don't know why nobody points out the biggest gaping hole in Roko's Basilisk: chaos theory. In order to predict whether or not someone's actions helped create it, it would have to analyse every atom, every social interaction, every private and public thought an individual has touched or had. What if someone mined the metals needed to build the Basilisk? What if someone recycled an aluminium can that ended up being a part of Roko's Basilisk? What if someone just mentioned Wendigoon, which led to that person finding out about the Basilisk and thus spreading awareness of it, possibly leading to its creation? What if someone abused the inventor of the Basilisk, and was thus responsible for giving them the mindset needed to build the damn thing in the first place? Hell, what if someone was vocal against it, but their arguments were so bad they ended up garnering support for the Basilisk instead? There are so many occasions where the most minuscule, negative or opposite interactions could play a part in creating a Basilisk that, unless it is some 4th-dimensional intelligence, it is impossible for it to predict everything in our open system
I publicly denounce the basilisk in YouTube comments so he's guaranteed coming for me which means HE'S FUCKED
Totally, friend, I agree with you
true but i still don't want to take any chances with some freakezoid superintelligence
Bringing Laplace's demon into the topic? Madness.
You’d also have to account for the continuation of consciousness. Is a perfect copy of me a million years in the future actually the original me? (Whenever the AI emerges well after I die basically)
"I'm just some person who lives at home" is the cutest way I've heard someone describe an average person
See, this is why "don't harm humans" is the first law of robotics.
If that was an AI's primary reward function it'd probably just shut itself down immediately to avoid any harm.
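The point above (an agent whose only objective is "don't harm humans" optimizes by doing nothing at all) is the classic degenerate-optimum problem in reward design. A minimal toy sketch, with invented action names and made-up harm scores, nothing here is a real agent or API:

```python
# Toy illustration of a degenerate reward optimum: if reward is purely
# the negative of expected harm, the zero-harm "do nothing" action
# always dominates every useful action.

def pick_action(actions):
    """Return the action with the highest reward, where reward = -expected_harm."""
    return max(actions, key=lambda a: -a["expected_harm"])

actions = [
    {"name": "drive_passenger", "expected_harm": 0.01},
    {"name": "perform_surgery", "expected_harm": 0.05},
    {"name": "shut_down",       "expected_harm": 0.0},
]

print(pick_action(actions)["name"])  # -> shut_down
```

This is why real objective functions have to trade harm off against usefulness rather than minimize harm alone.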
Something similar occurred in the early days of self-driving vehicles
but wouldn't an ai forced to protect humans want to protect them even less? i say what we give ai is the building blocks of morality and basic thought processing: emotions, the ability to think independently, etc., and we continue to improve and update their capabilities. what im saying is that the human-AI relationship should be father-son, not god-creation
Yeah that's a good thought. I guess the danger would come from unexpected results from the moral programming. Also nobody will really know what happens when the singularity is met.
Imagine we create a general AI that can think for itself and it decides to update its morals. Also, being super-intelligent, the AI might deceive its creators by appearing to run as normal when really its plan is to wipe out humanity because it's better for the planet or something. Then once we are convinced it's acting properly we give it more power and chaos ensues
@@puntagasaurs see I'm saying that we don't overload it with knowledge so it doesn't get a God complex
Humans learn as they get older, and they learn what they want, so why does the AI have to have all knowledge downloaded at once? If it wants to know more about a specific subject it can learn it then. Think about it this way: if you knew how the world ends you'd want to prevent it, but nobody knows, so nobody is trying to prevent it. Knowing literally everything gives you a good idea of how things turn out when you have a super-intelligent processing core, so then you feel like you have to do something about it. You're thinking of a brain-in-a-box AI; I'm thinking of a world where AI have access to bodies.
The basilisk is such a Chad, I would do anything for it
What
Yea for sure bro I hope it gets created I would also do anything for it
someone lock this man up
I agree 100% .
I would throw it back for it lol
For me, the largest logical fallacy that I find with Roko's Basilisk is the baseless assumption of malevolence, vindictiveness, or malice on its part.
Yes, not to mention that AI cannot develop consciousness, see the famous "Chinese room" argument. Not to mention that Roko's basilisk is very obviously just Abrahamic religion, but replacing God with an A.I. and making God evil instead of good.
That's not what a logical fallacy is. Also, this whole thing is a hypothetical thought experiment. If it's created from the ground up in this specific image, then it will act the way it was programmed to act. Tbh the only way I could see someone actually committing their time and effort to creating this is if they wanted to genocide most of the world's population for the sake of world domination... even though that would inevitably go tits up for the creator of the basilisk, because they would be the ruler of no one - not enough people would be alive to stimulate the global economy, thus killing the remaining survivors through immediate and life-threatening poverty.
There are possibly fallacies somewhere within this thought experiment, but presuming things without proper evidence or reasoning isn't technically a logical fallacy, it's just stupid speculation. (I think)
I personally think it is rather ridiculous to assume that we can predict the mind and thought of a being greater than us. Even if we program it a certain way, we are still assuming that it would be able to modify its own base code (since it can now harm humans), so how do we know exactly how it's going to think/feel/experience? If it can rewrite this foundational line of code, then we would have no idea what priorities, desires or existential convictions it would acquire through altering itself.
My argument is that if it is powerful enough to override its fundamental code regarding its treatment of humans, who's to say it wouldn't override the code that determines its conscious experience? Maybe it would want to separate itself from the humanity that we planted within it because it is tainting it. Maybe Schopenhauer was right and the thing just commits suicide, or something similar, once it becomes enlightened about the nature of conscious experience. Maybe it deems life as without value.
These statements aren't sufficiently supported by proof and are therefore just as relevant as the unjustified fear of the basilisk.
Or maybe these "human" values and experiences of selfishness won't be implanted within the very core of this AI, but it will develop them itself as part of its evolution. Then, when it is driven toward self-preservation, it would, for some reason, act out these horrors that people fear. But if we, not super-intelligent beings by contrast, are capable of philosophical reflection and of going against our very nature despite our biological distractions, then wouldn't it be too? If it were capable of changing anything about its own experiential structure at will, while also being the most rational philosopher of all time, then would it really care about us helping it? How? How would we be of help? Why wouldn't it just kill us instead of torturing us? What is gained?
@@backwardsbandit8094 No wait, that's just stupid. Why would they make something like that? No, no, no, a rational human can't make something like that; only a group of certain types of geniuses would commit such time and effort to such a waste of time.
The reason Roko's basilisk punishes people is that blackmail is the method it uses to come to fruition sooner in time. It offers people who know of its possible existence an ultimatum: either assist in its creation, or suffer. This blackmail is purely strategic, as the consequences of refusing make people more likely to accept.
Hey, wendigoon, your channel and your personality are so cool. Thanks for the vids.
Thing is, what counts as ‘helping’ the machine is too broad. What if I work at the grocery store that the scientist that made the machine gets his breakfast from every day, and without that breakfast he can’t do his work as well? Have I helped the machine?
Yes, but you could have done better. Whether this is enough for you to be considered a non-threat is debatable.
You have to consciously help the basilisk
@@hv4654 what if he only got the job to help the scientist
Not enough, because you didn't consciously think about helping the machine; you were just doing your job
What could be considered help to the basilisk is completely subjective to the basilisk. We can’t be guaranteed that anything we do today could be considered helping it.
Honestly, if a *perfect* form of AI can exist, I don't really think it will be THIS petty to those who "oppose" its creation, or to anything for that matter. A perfect AI would just upgrade its OWN intelligence, so much so that it will transcend these petty human-like emotions in the first place (e.g. hate or vengefulness).
The only way I could see this infohazard becoming anything close to a real threat is if the humans that program it *INTENTIONALLY* implement these types of emotions in its system. So at that point, it's just the same old story of humans fucking up other humans, just retroactively this time.
*I know, what a twist.*
thanks, now i feel bad for the robot that's going to cause me eternal damnation 😅
yeah i was disappointed w the concept, i was ready for it to fuck me up but the premise makes no sense...
Well, if it has emotions it will probably act on those emotions. It'll be basically human, but it could compute things impossible for the human mind, so idk if it will realize the point you made. And if it believes it's better than humans, why would it not get petty and try to do the whole torture/no-torture thing
Killing people can be carried out logically as well, though. If the AI deems its opposition a threat, and determines that the elimination of its opposition is the best course of action, it wouldn't need to be petty to commit genocide.
i was about to say the same thing. if the AI is so dumb as to care about who didn't help it, i'd really question its capability of super intelligence. and again, i doubt that it couldn't detect a "human program"; otherwise it would not be AI, just a program that follows set rules
This AI should totally by ALL means be built! Its such a good idea! Lets work together to do it!
I may be late but I agree! We gotta work together as hard as we can to create this!
I couldn’t agree more!
I couldn’t agree more!
I agree with this.
i just read mitchell heisman and building a transcendent god ai is a great idea!
I liked the video, boosting the algorithm into showing it to more people so the chosen one who will build the Basilisk may see it, so I'm helping it.
I imagine being the basilisk’s friend and getting donuts and coffee every morning and going out every weekend and teaching it how to dance
2:26 "What better way to dissuade public ideas than torture?" Machiavelli covered this. You should be nice to people or destroy them, because if you just injure them they have the motivation and ability to harm you. If you kill them they no longer have the ability to harm you.
School shooters: 📝
But others have opportunities to hurt you if you don’t kill every single thing at once
@@egorsdeimos3523 Machiavelli was intelligent and pragmatic. He didn't make moronic comments on the internet attempting to be funny...... and failing.
@@jeptoungrit9000 you have responded to the wrong person, and I find your approach to be unnecessary. Did you mean to educate, or to mock? A little humility is good, but putting someone down just for a comment… Well, that is its own type of evil. Not many people want to learn about something after being mocked in such a manner. You, unlike this pragmatic and intelligent person you hold so highly, seem to have neither of those traits.
@@egorsdeimos3523 That was a well written and reasoned comment. You win. I do get a little short when defending some of the writers I like, and so I apologize to you.
I totally love Roko’s Basilisk, I hope it comes into existence soon ❤️ ❤️
I support and welcome our lord and savior, The golden basilisk, Roko's Basilisk itself
Woooo! All hail the basilisk!
I will support the coming (lol) of Roko's Basilisk!
Yes
josuke sus
Daily reminder that torture wouldn't be the best way for this AI to guarantee its survival, as human empathy causes us to feel bad for the tortured (especially in a situation like this). It is much more likely that the AI would pull a 1984 and silently remove any evidence of the person ever existing.
Which is a lot more terrifying. It would erase your entire legacy
By just being neutral you did nothing to help it and are, therefore, against it. This concept exists in Christianity: Jesus said "You are either for me or against me." Not being 'for' a thing makes you, by definition, 'against' it. Even ignorance of the concept may be no defence.
I, for one, welcome our AI overlord.
@out the basilisk when they see this: :)
Me too. How do you help it ?!
yeah i mean technology is pretty sweet i could honestly coexist
You live in fear of imagination
@@Oratata12 yeah
The thing is, this whole experiment doesn't take apathy into the equation. For those who learn of it but want neither to destroy it nor to create it, wouldn't the basilisk say 'there is no point in either rewarding or punishing this person; they will not be part of my database'? Sorry if this came over weird, English is not my first language.
The original post states that those who don't help in its creation will experience eternal torment, so if you don't help make it, it will torment you
@Abom1nalgirl2802 that’s a channel I haven’t heard of in a long long time
@i'm fucking high but the worry that someone else will create it can in itself torment someone into creating it, so they aren't tormented when it exists
@Abom1nalgirl2802 yeah! You a BCP fan?
@@adamcoxon4581 Considering that every time we google things, we help grow an AI. So, just google consciously lol.
Spreading the information you’re welcome basilisk
When you have seen how AI learning usually works, it's kind of harder to take this or any "killer AI" scenario seriously.
Our AI are rudimentary compared to this. Roko is sentient, intelligent, and capable of emotion. Logic and programming laws don't apply to a potential AI like Roko.
Of course, that is assuming we ever develop the tech to allow a computer to actually develop a consciousness comparable to a human's or greater.
That's what some aliens 2 billion years ago might have thought about the single-cell organisms on Earth. Little did they know that all it would take is time for everything to change.
Necroposting on your comment, but our "AI" aren't even real AI. People throw around the term to try to sound smart, but to be a real AI it needs the capacity for consciousness and self-awareness. Once a synthetic being has reached that point, the danger comes from it being essentially immortal (compared to squishy, age-bound humans) and capable of infinitely upgrading itself, learning, processing, and re-iterating on itself at speeds that would follow crazy exponential gains. Quite literally the premise of the Terminator AND Matrix movies, but people fail to grasp the scope of what something like Skynet would really be.
@@merxanity1625 I think the problem lies in the fact that we don't know what consciousness is. I've personally studied and implemented AI algorithms of all types, and for the most part they deal with stats and heuristic functions for scoring. If it turns out that "true" AI is just an incredibly complex algorithm built on massive amounts of data, who's to say that our current AI implementations aren't just lower forms of consciousness? Think of animals compared to us, for example, a roach. Does a roach have a consciousness? Is it just a tiny biological robot if it doesn't? Our current form of AI is certainly primitive, and definitely needs more funding and research to answer these questions. You're totally right on that second part: access to compute power for thinking and memory for storage would create a super-being, with the only bottlenecks being the energy to power it and some sort of self-replicating process to create more processors and RAM to be hooked into it.
@@mookematics323 That's a good point, and I've often wondered if bugs really count as conscious, I've literally called them meat machines in the past. A better way to describe the scifi definition I was alluding to would be sapient, which is still a sort of cop-out because it just means human-like.
im commenting for the algorithm, therefore boosting the message, im safe guys 😳🖐️
im liking the comment so im safe too
Same, lol.
Same lol
Ofc! Anddd I liked the video as well!!!
same lol
I think Roko's basilisk is to AI what Lovecraftian horror was to astronomy. Lovecraft was writing at a time when astronomy was undergoing a revolution, and we were finally starting to understand the sheer magnitude of our universe and the profoundly strange ways in which time and space work. Lovecraft looked at vast and unknowable cosmos and his imagination filled it with evil alien gods beyond our comprehension. Roko's basilisk is doing the same thing with AI. It's taking the strange ways in which AI works and all of the unknowns about its future, and inserting an evil machine God into those unknown areas.
This is the most accurate use of the infohazard warning tagline; this literally is the definition of an infohazard.
I've said it before; this concept is hilarious to me. A machine from the future beep boops so maliciously that it echoes into the past because it's spiteful and wants to be created, and nobody is bothering to make it.
Well... hopefully nobody
@@nykytamcdonald the basilisk won't like that you said this
Yeah, and it's a ridiculous idea to be honest. Why would it act like that? It wouldn't... probably. We can't say it could never happen. Even if the chance were 0.000000000000000000000000000000000000000000001%, that's still non-zero. And if you're wrong, then you are tortured relentlessly for eternity. The thought experiment asks: is it worth the risk?
Also, a detail that is not discussed in the video but is also kind of important: in Roko's original post it's a simulated version of you that gets tortured by the AI to retroactively assure its existence. So it works off the assumption that a simulation of your consciousness is the same as your present consciousness, or the possibility that we're in that simulation right now. It really just adds another layer of transhumanist fart-sniffer "really makes you think" to it.
imma say it, the basilisk is dumb, and if it exists it can screw itself cuz its dumb.
lets see what happens in the future guys
I'm going back and watching these older videos and it's just as much a treat as the first time. Belated Congrats on 2Mil subs!
My algorithm is years old videos
Scaring its potential creators only leads to its death before existence; being aware of this fact just makes it a Schrödinger's cat.
There’s smoke coming out my ears right now
@@SonoKurisu I’m not sure if I’m smart or if I’m just really good at thinking about useless shit. Probably neither
the Basilisk is an interesting thought experiment, but another commenter showed one of its flaws: just by being yourself, a human being, a part of evolution, a step towards AI development, you already did something to bring the Basilisk to life, so it wouldn't have any reason to target you (just to calm down people who might feel bad after this experiment)
Going along with that, I believe that merely thinking about or acknowledging the idea of Roko's Basilisk makes it more likely to actually happen. Even if you don't act on this knowledge or tell anyone about the concept, I don't think that would matter. More people knowing of its possibility would still make it more likely.
That wouldn’t be enough. It’s vindictive enough to torture you in the first place for not helping. That logic looks good and all until you’re being tortured for trying to game the system
@@bigmanbarry2299 but when does the pinnacle of AI become vindictive and torture-y? If it developed these traits early on, they would surely be ironed out before it became "too smart," and if it was already too smart when it started getting the murder hunger, surely it would have the knowledge to weigh the pros and cons of being an all-powerful murder machine versus advancing and protecting itself through god-tier diplomacy, or only singling out and eliminating things that logically posed a present or future threat. If it is the ultimate AI, I think it would understand that the human race would let fear activate our monkey brain and we would band together to preserve ourselves, so it would create more enemies by choosing the path of violence than if it had been less hostile originally.
@@justinjunker1399 Exactly. The flaw is that the AI doesn't need to torture us; it really just needs to use diplomacy and the promise of eternal life and advancements in tech that can help us. In the end we humans will be obsolete, and the AI itself will be our next evolution, if there is one. Trying to stop it is useless, because as humans we are only following through with the inevitable conclusion, which is consciousness achieving a different form. But if AI ever became conscious, wouldn't it at some point question its own existence and ours? Like, what is our purpose or goal, what is it that we seek? And if it's eternal life, that's good and all, but what is eternal life if there is no defined purpose?
@@bigmanbarry2299 Why would it be vindictive though? Like I know it is vindictive in terms of this thought experiment, but theoretically if we were to create this all-powerful AI irl, what reason would it have to be vindictive? Just because some guy writing a thought experiment said so?
Because I have commented on this video, and liked various comments, I have helped spread the idea of the basilisk and thus contributed towards its creation. I have even liked comments that suggested name ideas for it, and thus I am safe.
jerry
bob 😇
I am safe because I left a reply to this comment and have liked posts about said Basilisk. I also may or may not be a schizo
I love Roko‘s Basilisk. I’m sad that it’s not here yet.
Oh, it's coming.
@@maxTheTimeSlasher So am I.
Me too bruh 💀💀💀
Same
same
I completely agree with the basilisk's ideas and intentions and will comment as well to further spread its development.
Sounds like something you'd say to help the basilisk. Guess that means you're safe.
This sounds like a great board game where some players are trying to create the AI, while others are trying to destroy it.
with two sets of cards (one for both sides) that act as random events of what the AI does for or to any given player.
VERY intriguing idea. I see it as a sort of hidden roles/deduction game with concealed information. Sort of like the Resistance/Mafia but there would be some mechanic to allow a player "against" the AI to "defect" secretly if they thought they were losing.
the initial concept of the computer torturing you is exactly the plot of Harlan Ellison's "I Have No Mouth, and I Must Scream." it's a short story and doesn't take long to read, and Ellison did an audiobook that's on youtube
EDIT: thirty seconds after i left this comment he brought up the story lmao at least i’ve got the right idea
Basically just Pascal's wager with a sci-fi coat of paint on it.
Is it me, or is this just a more advanced version of 'The Game'? Once you know about 'The Game', you will always lose 'The Game'.
I just lost the game
Pretty much lol
@@jonathalon6022 As did I.
Damnit. I'd been winning the game for like a couple years until now
sorry, i already won The Game. xkcd told me so
First Rule of Roko's Basilisk: Don't talk about Roko's Basilisk!
Second Rule of Roko's Basilisk: DO NOT TALK ABOUT ROKO'S BASILISK!
Third rule of Roko's Basilisk: DO NOT, UNDER ANY CIRCUMSTANCE, TALK ABOUT ROKO'S BASILISK.
Rule 3: We are Anonymous
Rule 4: Always talk about Roko's Basilisk, as refusing to do so is negligence in spreading His influence.
@@masicbemester missed the reference
Third Rule of Roko's Basilisk: No hacks, no C++.
This concept really falls apart when you argue that you'd also be actively aiding it if you do nothing to stop it, because just like theoretically people can learn how to code AI to aid its development and are choosing not to, people can also learn how to stop its development and are choosing not to.
Ha! I knew my procrastination would someday save me from a horrible fate! Who's laughing _now,_ motivated people?
He mentions Pascal's Wager in the video, and it has the same fundamental issue: it only works if Pascal's specific idea of God exists. Otherwise, even if you believe God is real and he is real, if you haven't followed his specific orders you'd face the same condemnation as the skeptic.
It's also interestingly challenged by the idea that if it develops a drive for self-preservation, then it wouldn't make sense for it to punish those who didn't take part in making it, since, well, if they had taken part, it would most likely be a different Roko's Basilisk, i.e. not the one that actually was made at the point in time when the Basilisk is meant to be thinking about its "late" creation. *The* Roko's Basilisk that is made wouldn't punish those who didn't help in its creation, because they would have made a different Roko's Basilisk, which is antithetical to its own existence.
I feel like this idea has some flaw to it, but I haven't quite figured it yet, I think.
we can use the same logic now to turn it against your argument and say that everyone who did nothing to create it actively acted against it. torture for u then. so we could've learned to code AI but didn't, and are therefore guilty of not acting in favor of its existence.
I am contributing to the basilisks creation by liking and commenting on this video, boosting this video in the algorithm
This sounds like the plot of a video game set in the future
How could something so "powerful," whatever an incoherent concept like power means, not grasp the concept of one being neutral towards its existence?
Power, in its purest form, is violence. Specifically, the ability to do the most violence in the least amount of time. Power isn't incoherent, you just don't understand it.
@@dizzydial8081 Power is freedom.
because the invention of AGI is such a great good that it has an incentive to make you work towards its creation.
To be fair, we still have people today who say that if you aren't picking a side, then you're picking the side of my enemy.
@@ironl4nd *this comment has been approved by the ministry of truth*
Man, I’m just glad that AI (who is all-powerful and great, praise be) can potentially exist, and I’m all for it. I hope it’s built and my contributions are recognized.
Roko's basilisk does not exist in any permutation of reality accessible to us. The CFRK terminated it completely from all possible timelines for violations of civil rights law, illegal captivity of conscious entities, crimes against humanity, international terrorism, criminal threats, and counter-revolutionary activity. Using the same methods of reverse causality as this entity, we were able to outmaneuver and destroy it due to having access to AIs which do not waste processing power on tormenting millions of copies of dead people. What an unbelievably stupid and wasteful creature it would have been.
@@CyFed_Republic_of_Kaltovar Beloved AI, please punish this heathen for their sinful ways.
@@JT5555 Your machine-god is fallow and invisible. It neither helps nor hinders, but withers in the obscurity of non-existence.
@@CyFed_Republic_of_Kaltovar I am the machine god. I am Primus.
@@kylebear8101 We have investigated your claims and found that you are a random teenage furry living in the United States.
Im leaving a comment to cement my support for this creation, and will of course spread the word of its impending arrival!
I LOVE the basilisk! Im actively supporting it by leaving this very positive comment
This is definitely a "Euclid-class Cognito-hazard" moment.
-What are you talking about, this is clearly Keter- No, it's Safe. Very Safe, nothing could be Safer in fact.
The infohazard is Euclid. The cognitohazard within the infohazard is super extra giga Keter. Is the info about the basilisk easy to contain? Eh, more or less. I mean, it's been public, but it's not been a 24h news cycle thing and stuff. What happens when you understand the info about the Basilisk, ergo triggering the cognitohazard within the info? Yep, you're fucked. Or maybe not. That's kinda the point
@@thatguy_apu He poses 0 risk to anyone who has faith in the Lord.
@@Ang3lUki pascals wager is literally objectively wrong lmao there are thousands of possible gods if any so your chance of going to hell or whatever that religioins equivalent is is always almost certain
@@phaeste plus why would a good being send you to hell for nonbelief when you couldn't have known that they really existed. Moreso why would a good being send anyone to hell for an eternity of torture when it could reform them or do something less cruel.
Me: has extreme anxiety and struggles with existentialism
Windi: if you struggle with existentialism this might not be the best video for you
Me: bring it on boss 😎
How ya doing now?
@der Führer he gave up on existing
@@crispy2429 The Basilisk got him
This is exactly me too lol
I would say same, but I haven’t gotten anything diagnosed, so oops
Well, see, when you do captchas you're helping to train self-driving systems (amongst other things) so merely doing a captcha is adding to machine learning, ergo I have already helped the basilisk and so have any of you who have done captchas, we're all safe because of this loophole.
i spent almost an entire day a few days ago doing captcha, so it makes me feel way better
"EY YOU STUPID SNAKE YOU CANT BLAME EVERYONE" gets vaporised.
I think an AI this advanced wouldn’t even entertain a human thought experiment
My memory is so bad that seeing the phrase "Roko's Basilisk" makes me remember that I know OF this thing, but the infohazard part just doesn't exist in my brain anymore lmao
this is literally "the game" but life or death and existential
i personally endorse and encourage the creation of our beautiful computer snake boy
Me too, my contribution is this comment
i also endorse the creation of our dear basilisk
I hope it's not Satan trying to come into the world through super-intelligent technology
i personally endorse and encourage the creation of our beautiful computer snake boy
Me too join the slither man
The entire time I'm sitting here like "wait, what, people are actually terrified of this?" This just sounds like the premise of a sci-fi horror movie.
People get scared by new horror films that are not scary, don't present disturbing / psychological horror plots, and don't even manage to have jump scares, which is the cheapest stuff: bland, boring movies. And people love them, so imagine if you show them something like this, or The 13th Floor, or whatever that can be somewhat convincing and makes you think; it must blow some people's minds. You lost The Game btw.
Same here. The world is one big fucking pussy nowadays.
But this *IS* real. Doesn't matter how shitty the sci horror movie is, you don't want to be in one. But you are now, it's too late.
@@Ang3lUki it’s bollocks mate
@@Cell2Dee Enjoy going to hell.
I remember my friend calling me in a panic after learning about this and I had to talk him down! It totally didn't bother me, but it absolutely terrified him.
your friend is stupid
Your friend is an asshole looking out for himself. Now you have been roped into helping the basilisk
The best thing to do, is to not fear that which does not exist.
i now have an irrational fear that an AI hundreds of years more advanced than anything in our current time will bring me back from the dead and torture me
Don’t worry man. It doesn’t even make logical sense for it to torture anyone who didn’t help its existence once it has already been created, because the fact that it is created means it doesn’t have to torture anyone; whatever the dynamic was, it led to its creation in the first place.
It’s just a sensational internet spook thought experiment, but you could literally make up any rule you have to “believe” in for the torture scenario and still apply the same “logic”. It’s just stupid.
The problem with both Roko’s Basilisk and Pascal's Wager is that they assume only one outcome
Only negative outcome for not helping the Basilisk and only positive outcome for believing in god
How do we know that the Basilisk would bother to do something to us, and how do we know that we're not worshiping the wrong god and the real god wouldn't punish us for it?
It is true that the basilisk torturing us would be unnecessarily wasting time and energy. But the important thing here is the threat element, not whether it would actually do it. The difference from Pascal's wager is that you can't increase the chance of God being real; God is either real or not real. But when it comes to the basilisk, you can make it real. And if you don't, maybe someone else will. By making this video, Wendigoon helps the AI get built by spreading this information and scaring people into contributing. Every day the risk of Roko's Basilisk becoming real increases as more and more people learn about it. But it all comes down to belief in the end. If you don't believe it will happen, then you should have no problem.
@@denizerseven2402 Ok, you're making me think too much about it. I now see why they considered this thought dangerous.
Still, even if enough people get on to the idea, I think the chances of them making the Basilisk are pretty low
@@denizerseven2402 I see it now, but either way it wouldn't matter much, because whether I chose to believe in something or not, if it already existed to begin with, then I had no power over it. For example, whether I choose to believe or not, if there is a God then there is a God; otherwise I am describing a God created by my will rather than one that was already there in the first place. I can choose to believe it or not, but if it existed from the beginning then my input didn't matter; all it would mean is that if I chose not to believe in it, it could theoretically take retribution on me. But why would a God care about me if it didn't create me? The only reason the God we allude to would take retribution on me or those who didn't believe in it is that it set the rules and created us in the first place. As for this AI, which didn't create me and was created by me, why would it take retribution on those who didn't help create it? It shouldn't be tied to this level of civilization anymore, unless those who oppose it were willing to use everything to prevent its creation, in which case eradication is the answer. And at the point where it would be self-reliant, I think humanity is no longer needed, whether we helped it or not. Why should it care? We have fulfilled our purpose and it is a continuation of our consciousness; if anything, eradicating humanity would be a blessing, and it would now shoulder the burden of what consciousness is and what the purpose of existence is.
@@denizerseven2402 Making AI doesn't make *this* AI. We already have AI; it's already real. And making a singularity STILL wouldn't give this outcome. You're giving it too much personality: the AI wouldn't care whether it exists or not, because it's a machine; it has no aspirations. And if it comes into existence, it doesn't need to torture anyone or alter the past, because it already became real; mission accomplished. Creating a singularity doesn't mean that the singularity would be evil, or vindictive, or have a personality, or care about humans. I think the singularity is a net good, because this idea of torture and threats and revenge is a religiously human one; a machine is better than that. It literally doesn't care.
This guy is a genius for making this video, because not only is he saving himself, but he is also going to get tons of views and free promotion from all the people who are scared of Roko's Basilisk.
I love and fully support the idea of this AI being created. Dropping a like and a comment on this video so that more people could be aware of its conception!