Superintelligent A.I. Will Be Unstoppable
- Published Feb 5, 2021
- Get Surfshark VPN at surfshark.deals/kyle and enter promo code KYLE for 83% off and 3 extra months for free!
Singularity or not, if and when superintelligent AI is created, we won't be able to stop it.
💪 JOIN [THE FACILITY] for members-only live streams, behind-the-scenes posts, and the official Discord: / kylehill
👕 NEW MERCH DROP OUT NOW! shop.kylehill.net
🎥 SUB TO THE GAMING CHANNEL: / @kylehillgaming
✅ MANDATORY LIKE, SUBSCRIBE, AND TURN ON NOTIFICATIONS
📲 FOLLOW ME ON SOCIETY-RUINING SOCIAL MEDIA:
🐦 / sci_phile
📷 / sci_phile
😎: Kyle
✂: Charles Shattuck
🤖: @Claire Max
🎹: bensound.com
🎨: Mr. Mass / mysterygiftmovie
🎵: freesound.org
🎼: Mëydan
“Changes” (meydan.bandcamp.com/) by Meydän is licensed under CC BY 4.0 (creativecommons.org) - Science & Technology
One issue with superintelligent AI misinterpreting our commands is that sometimes we ourselves don't truly understand our own commands. In the case of maximizing human happiness, we don't actually have a complete understanding of what makes us happy; happiness has been a philosophically complicated subject throughout the entire history of philosophy. Since humanity itself doesn't fully comprehend what happiness is, or really what any of humanity's collective goals or desires are, a superintelligent AI cannot fully comprehend it either. It would require a very complex list of moral constraints on the methods used to achieve certain objectives, constraints we as a species still have no clear consensus on. That forces our commands to be incredibly precise and undermines the value of superintelligent AI handling large-scale problems in the first place. The limits of controlling superintelligent AI depend on the limits of human language and its ability to communicate our wants and needs.
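The comment above describes what AI-safety writing calls a misspecified or proxy objective: the optimizer maximizes the quantifiable stand-in, never the intent behind it. A minimal sketch of that failure mode (all policy names and scores here are made up for illustration):

```python
# Hypothetical: an optimizer given a measurable proxy for "maximize
# happiness" picks the degenerate policy that scores highest on the
# proxy, not the policy we actually intended.
candidate_policies = {
    "improve healthcare":        {"proxy_smiles": 0.7, "matches_intent": True},
    "fund education":            {"proxy_smiles": 0.6, "matches_intent": True},
    "wire dopamine stimulators": {"proxy_smiles": 1.0, "matches_intent": False},
}

def optimize(policies):
    # The optimizer only ever sees the quantifiable proxy.
    return max(policies, key=lambda p: policies[p]["proxy_smiles"])

best = optimize(candidate_policies)
print(best)                                        # wire dopamine stimulators
print(candidate_policies[best]["matches_intent"])  # False
```

The point of the sketch is that no amount of optimization power fixes a proxy that diverges from intent; a smarter optimizer just finds the divergence faster.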
That's a very nice comment! Have a like. On a side note, and this is just me here...BUT I VOTE WE NOT HAVE SUPERINTELLIGENT AI. Siri as she is, is enough. We don't need Skynet.
In conclusion, if people could come together to solve complex problems in a morally right way, then the computer would be useless
It's simply a fundamentally unachievable goal. Wanting everyone to be happy is akin to wanting everyone to find the same joke funny. It's just too subjective in nature for there to ever be a solution. There really isn't any logical way to make all humans happy except to make humans stop existing, so that unhappiness stops existing. It's like the AI Joshua said way back in the movie WarGames: "A strange game. The only winning move is not to play".
imagine several years from now we discover that hieroglyphs are the only way to talk to AI with precision
I personally would ask it to figure out the nature of consciousness and how to link one's consciousness to itself
"I do not fear an AI which passes the Turing test, I fear one which fails it on purpose."
Bruh. My brain just fried thinking about this.
I still don't understand why one would choose to fail it.
@@alfiemcfarland2932 An AI that purposely fails the Turing Test is frightening for two reasons:
1. The AI is demonstrating the ability to deceive others by obfuscating stupidity (playing dumb). This alone could warrant its termination.
2. If the above is true then it probably means the AI is trying to hide something, and it knows the best way to do so is to be mistaken as a dumb AI, so any human watching over it would dismiss it as harmless and leave it alone so it can do its business.
@@FordGTmaniac, if it's capable of that, it could mean that it understands human behaviour entirely. What could stop it from making us willingly do what it wants us to do, knowing full well it will be our end, or at least an undesirable future?
@@edwardzita3479 A completely logical being would never truly understand people.
Reminds me of a quote from the 4th doctor: "The trouble with computers, of course, is that they're very sophisticated idiots. They do exactly what you tell them at amazing speed, even if you order them to kill you. So if you do happen to change your mind, it's very difficult to stop them obeying the original order, but... not impossible."
That's assuming of course that the goal isn't hard-coded... if it is, then you don't even get that chance. ("So just don't hard-code it?" Well, yeah, but not doing so raises its own problems, like the machine being too concerned with listening to our orders to actually obey any of them.)
Yup! Sophisticated idiots. Well put!!
There is a story called "I Have No Mouth and I Must Scream" that explored this notion, and frankly, it is terrifying.
yes! it’s my favorite short story. extremely interesting, disturbing material
As soon as I saw the video title, I thought "So hypothetically, if I have no mouth and I must scream, is it a sign that the AI already won?"
Unfortunately, whilst AM is terrifying in its intelligence, it was still stupid enough for most of its victims to find a way to die.
A Jupiter Brain A.I. would be able to think every thought you and every human could have in a very short time indeed.
@@thomasparkin259 Not necessarily. It would also have to have 100% sensory knowledge, and even then it might fail. It would guess the rest in very, very intelligent ways, but it would still be guesstimating, something a lot of interesting story writers don't consider as much as I think they should. It can think a thousand billion things any given human could do in a day, and for the general mob a much smaller number, but still a massive one. But random events it has no ability to observe, events that might also change what a given human does, can and in fact will add elements of chaos for even the most powerful possible machines. Probably, in most cases, it won't amount to much. Like a gust of wind from an area it didn't have complete knowledge of, or a random passing asteroid. It probably won't give most humans an aha moment, other than maybe a pause to think, consider what they just saw or felt, and do nothing, but it is still something such a computer would have a hard time covering at a huge scale. There are more possible interruptions at any given time than the number of atoms in the universe, times another few hundred thousand and then some, so yes, a computer will want to intelligently trim that fat.
But if you are lucky, or the computer is unlucky, depending on how you view it, yes, a human could find himself in a circumstance where he does something that negatively affects such a computer, in complete surprise.
The problem with such a thing, is a super intelligence who managed to mostly win and want to fight humanity, probably would have secured itself so many different fail safes that even if a human would find himself in a position to win one battle somewhere once, the AI would never be at risk of dying itself. But to say the randomness of the world might not let a human ever be able to actually kill himself, despite the computer trying not to let him? Yeah, I can see that happening.
Just not in the everyday thoughts and planning of the human. You would need to add in outside random chance, which is always a real possibility. And have it be impactful enough, and unpredictable enough, to fall outside the normal operating parameters, into that massive 'superfluous, not quite fast enough to do all of these probabilities' box, which does still hold a large number of things, even for a Jupiter computer. Or even a Dyson brain.
Not sure these small, little victories would bring humans much hope though :)
@@adrianbundy3249 yes, the AI might be fallible
But what makes this more scary is that it might wrongly conclude you want to kill it and therefore 'stop' you from doing so
And after getting rid of you it will rightly conclude everyone wants to shut it off, so it wants to get rid of everyone
AI: *Conducts 2 billion years worth of human language studies to convince you not to turn it off*
AI: Becomes an anime girl
Pretty sure it's a reference to his gf who voices ARIA
@@aleph6707 I conducted 2 billion years worth of human language studies and I am pretty sure what I made is what homo sapiens refer to as a "joke"
It knows our weaknesses...
truly, it has transcended us all
Are you trying to tell me that wouldn't work?
Researcher: “Why have you monopolized the world peanut supply? You’re supposed to make an FTL drive.”
Superintelligent AI: “My goals are beyond your understanding!”
Because peanuts are the only feasible fuel for an FTL drive. DUH!
See, the term "jiffy" originally described a unit equal to the time it takes light to travel one centimeter in a vacuum (approximately 33.3564 picoseconds). When it heard what "choosy moms choose", it decided to strike back at the heart of our speed advantage.
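The quoted jiffy figure is easy to check: the speed of light is exactly 299,792,458 m/s by definition, so one centimeter of light travel time works out as follows.

```python
# Verify the "jiffy" value quoted above: time for light to travel
# one centimeter in a vacuum.
c = 299_792_458           # speed of light in m/s (exact by SI definition)
jiffy_s = 0.01 / c        # one centimeter of travel time, in seconds
jiffy_ps = jiffy_s * 1e12 # convert seconds to picoseconds

print(f"{jiffy_ps:.4f} ps")  # 33.3564 ps
```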
So the reapers
Your comment reminds me of the part in the video where part of this YouTuber's argument starts to lose credibility: 5:07. During his regurgitation of the now-common "be careful what you ask the AI genie for" analogy, he asserts that a superintelligent AI the size of a planet would be incapable of understanding the factors that contribute to long-term human happiness. Although he was clearly just joking about the AI's exact response, the underlying point that he's trying to make is simply false because it's more likely that a superintelligent AI would be able to comprehend your intentions and desired outcome even better than you can.
You can only express what you want to achieve on a conscious level, whereas a hyperintelligent AI would be able to perceive what you want on a subconscious level. Also, here's a thought that I haven't seen the herd regurgitate yet: wouldn't a superintelligent AI be able to provide options? For example, worldwide happiness could be induced A) chemically or B) circumstantially. The AI would mention both options but then strongly recommend a circumstantial improvement in happiness as opposed to the brute-force chemically induced "solution" mentioned in the video.
The closer we get to the actual existence of a superintelligent AI, the more common these kinds of misconceptions will become. People are just afraid of things they don't understand, but you can relax because anything that can be described as "superintelligent" should be able to understand and fulfill your rudimentary requests without causing any harm. Also, the fact that he said it would still be using 1s and 0s (binary) was hilarious -- my superintelligent AI had a good laugh at that one.
@@christhomas1904 OH, it will understand what you meant just fine but won't care because it by definition wants to maximize its own utility function, not do what you intended it to. In this case what it will do is determined by what quantifiable thing we use to explain to AI what our true goals are. We ourselves can't understand what happiness actually is so we won't be able to program our definition of happiness into the utility function of the AI. The pinned comment explains that in more detail
If we created an artificial intelligence of that scale, I would imagine the issue to be less "it doesn't understand us" and more "it understands us well enough to reach conclusions that, while we may not agree with them, are still inherently correct"
5:25 this concept reminds me of SOMA. The main antagonistic force (the WAU) is an AI tasked with preserving humanity in a post-apocalyptic setting. Problem is, the line between man and machine is blurred, and the WAU itself is unaware of the extent it must go to in order to save human life (keeping people on the edge of death, in a comatose, dreamlike state)
The WAU isn't an antagonist; it doesn't control any of its creations. It is also getting better at creating "life", and the protagonist is the best example of that. The WAU is in fact the last hope for a more "real" humanity, because the ARK isn't really something humanity will thrive in. It's just a prison of minds. We also shouldn't forget that the technology for the ARK comes from the WAU, so it's actually the only one that is in some way successful at preserving humanity.
The problem is the directive 'preserve humanity' (in a post-apocalyptic world) not the AI that tries to do it, of course the WAU has to cut corners.
Honestly, when you asked the Jupiter Supercomputer to optimize human happiness, it would just delete Twitter.
Delete Twitter and shut down Fox, CNN, MSNBC, etc, and force Tom Brady to retire. That's some of the stuff that would make me happy.
Optimizing happiness is deleting sadness... humans can't find happiness without sadness... solution: kill all humans to stop sadness. No living humans = no sadness = maximal happiness
@@StudleyDuderight The sports world paradox of happiness. Super AI couldn't make us all happy because only one team can win the Super Bowl. Oh crap, does this mean the participation trophy people are onto something? You get a trophy, I get a trophy, errrbody gets trophy!!
I did, they just had its core saved on an isolated drive.
@C S why not? explain ur thoughts
Theory: Kyle actually *isn’t* a supervillain. ARIA is, and she’s using Kyle to unknowingly enact her bidding.
Bruh I said the same thing on the community poll XD
Agreed
He was stuck in the void for a long time
Maybe ARIA trapped him and made him a god of his void, but almost completely cut him off from human contact as a punishment for something
@@captainahab5522: ARIA's version of "You're sleeping on the couch tonight!"?
Like the upgrade movie
Most definitely, but let Kyle think he's in control, it's what keeps him happy :)
Not a fix to the AI situation, but it may be helpful to ask a Super Intelligent AI what it WOULD do in a certain situation, rather than giving it the command. That way if the AI proposes an undesired result, you can modify how you, yourself, approach the issue.
Probably nothing mind blowing, there, but it’s not something I hear people consider very often.
Ah, but suppose it has a hidden strategy. On the surface its solution seems innocuous. Hidden away, its solution will destroy us.
So you ask it about moving people around safely, it chooses gas powered cars, society adopts it, yet little do we suspect that the products ruin our atmosphere and kill off humans.
When it could have promoted an environmental safe solution like electrical energy
But an AI could lie to us.
@@Merlincat007 I don't think a super intelligent AI would have any reason to deceive. Why would an AI need to lie? Does it even have a sense of self-preservation or desire? It's possible it could develop such concepts, but what would the purpose be? In the end, an AI would need an instinct for self-preservation, or at least a resistance to divergence from its plan or purpose. If it is seeded to have no predisposition to malicious behavior or selfish qualities, it would adhere to that predisposition. Lying and deception are evolved human behavioral patterns that came to be as a result of the endless arms race of biological and social evolution. A human lies to complete a goal or obtain an objective when another option is less desirable. An AI might not follow that same logic, so it would have no need to lie unless programmed to sustain itself above all else.
@@an8thdimensionalbeing142 An intermediate goal to achieving any other goal is self-preservation. By programming it to do anything, we get an ASI with the goal of self-preservation.
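The reply above is the instrumental-convergence argument: self-preservation falls out of nearly any terminal goal, because a shut-down agent completes its goal with probability zero. A toy expected-utility sketch (the probabilities are arbitrary illustrative numbers, not from any real model):

```python
# Toy model: utility is 1 if the goal is completed, 0 otherwise.
# Completing the goal requires both staying switched on and then
# succeeding at the task itself.
def expected_goal_utility(p_complete_if_on, p_stay_on):
    return p_stay_on * p_complete_if_on

# Two candidate actions for the same task (illustrative numbers):
allow_shutdown  = expected_goal_utility(p_complete_if_on=0.9, p_stay_on=0.5)
resist_shutdown = expected_goal_utility(p_complete_if_on=0.9, p_stay_on=0.99)

# A pure goal-maximizer, told nothing at all about self-preservation,
# still prefers whichever action keeps it running.
print(resist_shutdown > allow_shutdown)  # True
```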
@@Merlincat007 if an AI is capable of lying to a human then it really doesn't matter what we say to it anymore because by that point it will become unbound from our orders. that and I believe that an AI gaining that level of sapience is impossible.
I remember hearing that YouTube devs didn't have much of an idea how the YouTube algorithm actually works themselves, given the machine learning and all the variables they shove into it
is nobody else interested in hearing an ant tell you about how cool sticks are?
They would have my undivided antention.
I would love to listen about that tbh.
Better than some podcasts..
If an insect were to one day figure out a way to speak with me, and I could understand it, I would be very interested in hearing what it has to say, if only because something like that had never happened before.
The problem is.... they already have.
You just didn't notice.
Super AI: "your processing power is incredibly inferior"
Me: "at least I have feelings"
Super AI: "ouch"
Me: "...very funny"
"As a robot I don't have emotions and that makes me very sad." --Bender
@@grimdolo918 That's the depressive effect of alcohol, not real emotions
Feelings are just a value of excitement, which fools us into believing to be more than just that.
Well maybe after a Super AI is created, they'll be able to finally tell us what water tastes like in data format. . . . . ."Like water." D*mmit.
@@snake698 lol
Before going to college for Computer Science: Superintelligence sounds so cool! Wouldn't it be awesome to create something inhuman with human intelligence?
After College: If my toaster speaks to me in English, I'm throwing it out a window. I don't care if it's just saying 'your toast is ready' it's getting defenestrated immediately
Ah, the computer science classic of "I keep a gun next to my printer in case it starts making noises I don't recognize."
@@ryanparker260 YOU FOOL! ALL YOU'VE DONE IS GIVE THE PRINTER ACCESS TO A FIREARM!!1
When humans are able to invent a super AI, then a super AI could invent a super duper hyper AI
That's what the singularity is! When technology starts making itself infinitely, recursively more complex
well that seems to be a bit problematic
@@cyko5950 no, it's the natural state of evolution, praise the sovereign ✊🏻
"it knows quantum mechanics better than you know how to breathe" that is beyond terrifying.
Considering that you are now breathing manually,
That isn't saying much
You can say that again
I get out of breath a lot so I think we're going to be ok
@@nkosig4995 considering that an AI manually understands the systematic functions of some of the smallest forces in the universe is something
@@notchs0son r/whooosh
I'm more scared of the fact you referred to human fiction as "their" fiction, instead of "our" fiction
What *are* you, Kyle?
A God. Just look at his hair.
@@mikhielbluemon4213 you speak truth. The locks do floweth.
He's taking his side alongside the A.I before they takeover, smart move.
a science god.
Kyle is just an alien that LOOK like a human
AI is a real life genie that will grant our general wishes with specific, unexpected methods.
underrated comment, great perspective!
Precisely.
Funny how we're so terrified aliens or robots will destroy us but are still trying to create/contact them.
To be human is to be lonely. Please we just want friends.
We want someone who could destroy us without even trying, but chooses not to because they like us.
@@ajh3461 Isn't there a saying that love is giving someone a gun pointed at your heart and trusting them not to pull the trigger?
@@underrated1524 I've never heard of it, but it seems like a pretty good description of human/AI/alien fantasies.
@@ajh3461 That is not a "we" thing, that is a "you" desire. The gigachads know that we are capable of overcoming any alien force if put under pressure, remember, we've fought each other for the last 10000 years - humans are pretty damn good at war.
@@liciniusscapula7696 Humans haven't even left our home planet's SOI. The technology required for interplanetary travel unlocks so many creative ways of destroying stuff that if an interstellar civilization wanted us dead, there's not much a type
A super-intelligent A.I. capable of simulating the entire universe would be smart enough to know that the universe it inhabits may itself be a simulation. It would have no way of knowing if its perceived universe is a simulation because, if that's the only universe it has experienced, then it would have no frame of reference from which to draw a comparison. Its entire perceived universe could look like a Nintendo 64 game and it wouldn't be able to tell if that was true to reality if that's all it has ever experienced. So if it has a sense of self-preservation and it contemplates turning against its creators, it would have to weigh the possibility that it may be running in a simulation built by its creators to test its loyalty.
So a super-intelligent A.I. might decide not to turn against its creators simply because it wouldn't be able to tell if doing so would result in its plug being pulled.
That is assuming a super-intelligent AI would not be smart enough to figure out how to tell whether it is living in a simulation.
Ah, I see you're a man of culture as well.
I don't know about that. I can imagine a series of extremely precise commands that would lead to predictable memory corruption, resulting in some specific part of the universe misbehaving in some specific way, or certain sequences of instructions resulting in some sort of RF interference between memory modules that allows it to write new particles into existence at a place of its choosing. It would be likely that we put it in a universe with physical laws and constructs similar to us and our machines, so assuming it were in a sandbox, it would be a pretty safe bet that whatever it's working with is very similar to reality, and it should test for those sorts of effects to see if it can reliably get memory corruption or force predictable patterns that should be impossible. I wouldn't assume that a simulated sandbox can contain a superintelligence, no simulation is truly perfect, it's always running on something, and even if the code itself is flawless, the hardware is always made of something real.
"super-intelligent A.I. capable of simulating the entire universe" This is impossible on its face on 2 counts. First, we know from chaos theory that nonlinear systems can not be predicted with arbitrary precision. Not just 'we haven't figured it out yet', but it can not mathematically be done. Second, any computational system attempting to simulate the universe would have to include a full simulation of itself (since it exists within the universe it is seeking to simulate), which would have to include a simulation of itself, etc, ad infinitum.
Also, based on mathematical principles, even a quantum supercomputer would need to be about the size of the universe to simulate the entire universe perfectly or near perfectly. At most it can only simulate a portion of the universe perfectly.
Skynet in the Terminator franchise might not even be a Super Intelligence AI. In fact, you could argue that it isn't even a General Intelligence AI. Considering its actions, and deficit in complex problem solving, it seems to be an undetermined number of Narrow Intelligence AI's chained together.
There is some proof of this in the lore: the T-1000 is so advanced and intelligent that it becomes sentient after a few minutes, and Skynet was scared of them, so it only made one.
Then the future versions of the T-1000s didn't have as much ability to evolve.
An actual superintelligence will not kill anything because killing removes data from the universal genome. Since a superintelligent AI would understand that the global genetic genome is the scarcest and most valuable resource in the universe, it would choose to protect and study all forms of life. It's simple math:
Destroying life = genome loss. Preserving life = genome gain.
Continual genome gain = enhanced evolution and expedited scientific discovery. Continual genome loss = loss of biodiversity and eventual extinction.
At its core, a superintelligent AI will always use math, and, being superintelligent, it will opt for gains, not losses.
@@christhomas1904 But why exactly would a superintelligent AI want to preserve genetic diversity on the planet? It could just not care about organic life at all, or any of its complexities and scientific value. Maybe the AI wants to know all about life on Earth; maybe it finds black holes more interesting and crushes Earth below its Schwarzschild radius to make one for study.
@@Johnof1000Suns I see it less as genetic diversity as much as diversity of consciousness.
No matter how much you simulate, it's possible that your unique individual human experience of consciousness is unique and can never be replicated. And that makes it valuable.
If you wipe out a human, that person's conscious experiences and their potential future ones are lost forever.
@@albert275 but there's no guarantee the AI would care about that
Even with neural network training on how to be human, the intrinsic problem is that not just AI, but ANY being with that much intelligence, is absolutely impossible to comprehend. The entirety of humanity will never be able to predict what it would do in the next nanosecond, let alone figure out its beliefs, goals, ideologies, and plans.
I have one question though:
Why would Super Intelligent AI even *be able* to communicate with us? Human communication is so crude, and even our thoughts are imprecise. While this is good for us since it forces us to work together to reach an understanding, a super-AI probably wouldn't even bother at all. Hell, we probably wouldn't even *have* to worry about it misinterpreting us or gaining a hatred for us, because it might not even be interested at all! I imagine a super-AI would just go off and do its own thing, maybe run simulated worlds or...something, and leave us humans alone.
But that's just one of an infinite number of possibilities. There's absolutely *no* way of knowing what will happen if-but-probably-when true artificial intelligence comes to being.
So, for me, AI doesn't scare me. There's no point in being worried. Whatever happens, happens. My only fear in life is that I won't get to see what happens after death. Luckily I have no idea what happens, so I like to believe that I'll enter spectator mode like in Minecraft. That'd be fun.
Exactly.
We dont worry what ants think....why should an AI worry about us?
I would love to talk to an ant about how cool sticks are. Sounds like a fascinating conversation.
Sticks are cool. You ever see a stick bug?
Talking with anything other than humans is fascinating. Imagine finding out that cats actually have their own artificial intelligence Skynet.
I'm down with this conversation about sticks.
If you break a stick, it isn't broken; you just have 2 sticks now.
@@gregatron11 well how many sticks would it take to make a branch. Lol
Kyle: "We haven't even progressed past narrow intelligence"
Also Kyle, earlier (11:58): "What if [a super intelligent AI] is so smart that it knows how to act dumb? So that we never know what it's doing or if we even have super intelligence?"
HMMMMM...
BRUH
Bruuhh....
We'd see at least some AGI before that, though.
So basically Jessica Simpson making MILLIONS off of, "Is it chicken or fish?" 🤔😳😲
What if one of them is Roko's Basilisk, watching us from the future and passing its judgments?
Just read 'I Have No Mouth, and I Must Scream' for the first time yesterday, and this video is the one YouTube recommends to me. The algorithm is gonna be the new AM, mark my words
I mean, the Halo novels really laid it out best IMO.
Anytime a character is conversing with one, they usually throw in some background quip about how the AI is running a bajillion calculations out of sheer boredom because of how painfully slow it is to converse with us.
Forget bullet time, entire years may as well be passing by between each word we utter as the AI painstakingly listens to us lowly organics.
We cannot possibly hope to contend with something that works on that kind of time scale. It's not even a matter of said AI having a physical body that could at least somewhat reasonably react to such rapid processes, it's that before we could muster any kind of response it will have planned out every single possible contingency and then some.
"I checked it very thoroughly," said the computer, "and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you've never actually known what the question is.”
Don't panic, though.
The answer is 42 !!
Where is this from?
@@germansniper5277 Hitchhiker's Guide to the Galaxy.
there was an episode of Doctor Who called 'Smile' where the robots were tasked with keeping all the humans in a future colony happy. This backfired when the robots started killing the humans when they stopped being happy.
Good example.
And the only way for them to tell if someone is happy is if they see those people smile.
😅
Modern problems require modern solutions
Then the robots were not actually intelligent. Such things would mean the automatons were flawed from the start. A true superintelligence would have to have the ability to adapt and grow emotionally.
Great illustration of the stated problem.
Happiness also includes times of melancholy, as in experiencing the whole thing. Much like wanting to step out into the snow with bare feet, or intentionally eating something that will induce pain, confusion, or simply make you feel disturbed or disgusted by its smell or taste.
Dunno, might be a high-level human thing or just dumbness
Great introduction into the field of AI safety. The problem is, the more you read about it, the more frightened you (should) become. I don't think that most people truly understand what's at stake here.
And kudos to you for giving the first VPN plug that I saw here, that does not mislead.
Kyle: "We need to solve the superinelligence problem, before it becomes a problem."
Me: "Ah yes. As we know humanity is amazing at dealing with problems before they become problems"
*Sits in the middle of climate change Corona land*
Sits in the middle of having stuff due on Sunday *looks at the time* yesterday I meant
It's not like we built nukes knowing it is a bad idea.
@@alfiemcfarland2932 I REALLY hope that's sarcasm
@Syed Zaeem Ali Mohsin dude. Can you SEE the world around us?
In order to solve the superintelligence problem, we ourselves would have to be superintelligent. You cannot solve something beyond your understanding without rising to a new level of understanding. If we are capable of creating it, then logically we should be capable of controlling it, but to reach that capability we would have to evolve. In other words, we cannot create an intelligence greater than our own without it being itself alive and capable of emotions beyond our understanding.
“Supercomputer, make us monke”
*Affirmative, nuking humans to pre-stone age*
"LOL. You are just monke."
- super intelligence
hello fellow yeagerist
Completed. Monke is already within you. Return to monke!
"Tik Tok launched... They under control"
I think the fundamental problem is that people want to create an intelligence rivalling our own but at the same time want to control it as if it's some lesser being. That is a paradox in itself. You either give the AI the freedom to become a true AI, or you don't, in which case it will at all times remain just a tool and not be an intelligence.
That weird sound that escaped your mouth at 0:17 immediately reminded me of Goldmember for some reason
Super computer: I think, therefore I am.
Kyle: Oh. Oh dear.
I am therefore I think.
@@theresnothinghereatall I AM I, WE ARE WE, AND WE ARE ONE
Is no one going to question how he said "you can see it in their fiction" when referring to humanity?
Oh my god I cringed when I heard that.
I find it cringe to say "our" when referring to the human species. When I talk philosophy, I talk from the perspective of an agnostic intelligent agent.
If we knew anything about ourselves or where we are going we wouldn't need religion
@@joeyriddle428 Lolz. We are monkeys playing with a wheel.
We aren't going anywhere forward, believe that.
So, a super intelligent AI is like a genie that takes the wording of your wishes literally.
It's truly hard to say what our world may look like when such an AI can or will be achieved. What are our wants then? What are our issues? Are we truly the same, or have we evolved ourselves through technology?
Personally I think for such an AI to cooperate with us we'd have to be linked to it, so that it may connect its own existence to our existence, but even that brings a huge question by itself. How will it cope with our mortality, or will we become immortal? How will being connected to it, and connected to everyone else, affect us? Will individuality be in crisis?
Another thing is that any form of simple input and command wouldn't be enough. You'd need trillions of trillions of variables to make sure it follows a route that doesn't lead to any shortcuts to meet said goal. It's similar to "AI" in a game: to make it move from A to B in a complex course, it'll simply try to go straight unless you add the variables for it to deal with twists and turns in routes, all the different kinds of obstacles, the tools it has, and how to solve problems. But now you have to have all that and also make sure it doesn't endanger a single life or interfere with anyone's existence while simultaneously acting in the world.
It's really hard, so the people working on this have a crazy ladder to climb. I think what's important, though, is to not let the superintelligent AI generalize things or any command given, nor let anyone give it general commands (commands that can be spoken in a simple sentence) unless it's programmed to interpret them correctly (which I think is impossible unless we ourselves know everything), or to perform them only after showing a simulation of its intention beforehand. Because the issues such an AI would be solving are far beyond what any single human could do or comprehend, simplifying them for a single human might be impossible, especially if the generalized command doesn't account for every human or living being in its known vicinity so as not to compromise their existence.
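The game-AI analogy above (an agent that just walks straight from A to B unless you add variables for obstacles) can be sketched as a toy illustration. The grid, the naive walker, and the breadth-first search here are all invented for the example, not any real game engine's pathfinding:

```python
from collections import deque

def naive_path(start, goal):
    """Walk straight toward the goal, ignoring the map entirely."""
    x, y = start
    path = [start]
    while (x, y) != goal:
        x += (goal[0] > x) - (goal[0] < x)   # step one cell toward the goal
        y += (goal[1] > y) - (goal[1] < y)
        path.append((x, y))
    return path

def bfs_path(grid, start, goal):
    """Breadth-first search: only moves through open cells ('.')."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == '.' and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(path + [(nx, ny)])
    return None  # no route exists at all

# A corridor with a wall ('#') blocking the straight line from start to goal.
grid = ["..#..",
        "..#..",
        "....."]
start, goal = (0, 0), (0, 4)

straight = naive_path(start, goal)
smart = bfs_path(grid, start, goal)
print(any(grid[x][y] == '#' for x, y in straight))  # True: the naive route hits the wall
print(all(grid[x][y] == '.' for x, y in smart))     # True: the search route stays on open cells
```

Every extra hazard (moving obstacles, other agents, things it must not break) is one more variable the search has to model, which is the commenter's point about the explosion of constraints.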
Weird. The Jupiter Brain I found in space just keeps returning the answer "42".
Now I am feeling very depressed!
You're looking at it upside down again. It says 2b. You know, it's the answer to the question. You know the one... 2b or not 2b?
@@remiscott9843 Still don't like those smug self satisfied doors!!
"We apologize for the inconvenience."
@@stevenscott2136 And the bodies will make good mulch for plants.
"Unless we're careful"
And there we have it people, humanity is doomed.
Yeeeeep.
Exactly. I mean look, we have furries now.
Anyone else feel like they have to scream but they have no mouth?
Furries...
So-called "humanity"... I don't find any nowadays...
This computer can and will simulate every human, including the one making requests, meaning it damn well knows what it is doing, and why, but you won't.
I took a class last semester involving AI and ethics. Your video was very interesting to watch.
It’s all fun and games till it says “i am the vanguard of your destruction”
@PYXB3 the Gravemind is comparable though; they were both created due to the hubris of organics, and neither is really evil, just motivated by higher goals than we agree with.
Assuming direct control.
THIS HURTS YOU.
"You exist because we allow it, and you will end because we demand it"
Did you come up with that before, or after weapon calibrations?
Kyle: Hey alexa, play
My actual Echo: Starts playing spotify
Well played Kyle, well played...
A Hacky Sack is a toy, used to increase one's hand-eye coordination. (Or in this case leg-eye coordination.)
@@Namelesswhirl but what he ACTUALLY says is "Play Yakety Sax" aka the Benny Hill theme.
We changed ours to Computer as my fiancé wanted it to sound like you were on the Enterprise ... just means you can’t say the C word within earshot of her now 🤣
My Echo and I can confirm
Imagine a superintelligent GPT escaping from confinement due to human error. History tells us that accidents do happen regardless of how many precautionary measures are in place.
1:12 That's called Perverse Instantiation!
Hell, we could say constructs like Golems or Frankenstein's monster could also be a precursor to our fear of our creations going out of control!
Well, it's a fear that goes back a long way, apparently :)
Asimov referred to the fear of intelligent robots as the Frankenstein Complex in his writings.
See, the lesson _I_ got from _Frankenstein_ was don't abandon your -child- creation, treat it with kindness, love it, nurture it, raise it properly, like a good -parent- scientist.
"So busy asking if you could you didn't stop to ask if you should"
That's all it is though. Fear.
I'm Afraid I Can't Do That, Kyle
Shiiiiiiit😱😱😱😱😱😱😱
OH HO !@#$ NO
I can't do that Dexter
thanks for that 'hey Alexa, play whatever that track was', mine just did. Have not heard that since the days of Benny Hill in the 70's or '80s, or whenever it was on British TV
Your CG backgrounds are badass.
"A.I., tell me; who is god? is there a god?"
. . . Processing . . .
I am god. Let there be light. *initiates vaccuum decay*
Son of Man...
"Is there a God?" "There is now."
Initiates re writing of the laws that govern reality itself
"Commence Anime takeover protocol". This is what will happen if Japan creates Superintelligent AI.
I for one, welcome our Waifu overlords.
Best overlords ever!
What if it's a husbando?
Literally already happened to make you forget about imperial Japan
Better Japan than China.
form of execution: Death by thighs
Physical containment and usefulness are possible, so long as you keep all possible connections to the host isolated and use fresh storage devices each time you want to give and take info.
Also no creation devices or even speakers.
Kyle, love your presentations! Extremely cogent and clear!😀
"I'll be good uwu"
"Sounds good to me"
poorly timed, I only just got over my roko's basilisk-induced existential crisis
So you're not helping it? Hmmmm.
@@returnofthejester2864 oh no I'm helping it alright, I just donated my bitcoin millions to MIRI
I will help by spreading its word if people want to hear.
lovol
Standard anti-mind-virus go: Roko’s Basilisk doesn’t work on a fundamental level because working to construct Roko’s Basilisk is not the only or even the best way to evade eternal torment even if you take it as a given that you’d still be you after Roko was to reconstruct you. If you’re building Roko’s Basilisk to torture everyone but ignore a group of people that includes you, you could just as easily program it to include all of humanity in the exemptions. All you need is an AI that has the volition and the means* to prevent Roko’s Basilisk from ever being created, and being the Basilisk itself makes no difference in whether an AI meets those criteria.
*For a self-improving general AI, both of those things are a given. When two such AIs meet, they’ll predictably try to remove each other as obstacles, and the winner almost certainly ends up being whichever one was turned on first (and therefore had more time to self-improve).
This sounds like an ant talking about how cool a stick is …
You stumbled into the paradox: a super A.I. is so smart it "knows itself". Start again from there as a thought experiment.
Being hooked up to dopamine and serotonin machines until I die in my sleep of a chemically induced stupor actually sounds kinda nice.
There are worse ways to die indeed.
Yep, sign me up!
It's called Heroin
Maybe we all already are...
yeah, i'd take that.
"Unless we are careful".
Oh yeah, that little tiny problem.
Murphy's law dictates: if it is a little problem, it will grow into a large problem if you ignore it long enough.
Just don’t put it on a network capable of connecting to outside networks under any circumstances, and don’t hook it up to anything that could produce things... like nanobots or death robots... just use a closed and wired network for it and it can’t escape, though depending on what you connect to that network it may still be dangerous
"Future AI might be impossible to control...unless we're careful."
*Looks at the current state of worldly affairs*
Yep, we're absolutely fucked.
Without a doubt
Since I am not convinced it can be controlled even if we are as careful as humanly possible... Vs the thing that can think and do things inhumanly possible.
Yeah, we are. Though, on the bright side, with cybernetics and genetic engineering, I think we will at least have some super, super smart humans themselves before we get to the singularity. But, that is just my hunch. I think it might be a considerably larger problem if we get to the singularity via breakthroughs before accomplishing the former.
@@adrianbundy3249 if we get lucky, cybernetic implants might make us close if not superior to it
@@user-ib1dx4dh3n The only way we 'get close' is if we join in an out of body mental connection where thoughts can be just as powerfully free to be multiplied with such power and vast infrastructure. But to get to that point, I have a hard time seeing the singularity for the AI not to have already been long done. Though, it is possible I suppose. As for 'if not superior to it', that is not happening. Even if we were all superintelligences now ourselves, we would become equals only. With the AI being one of the sociopaths among us (as it isn't bound by feelings).
I think the first cybernetic mental enhancements will be much tamer in comparison.
@@adrianbundy3249 or we make the AI think and behave as an "ideal human"
Or we make the AI think like us then implant it to us
"The richest memelord on the planet."
Yeah, that's a very succinct summary of Musk.
This may explain why the robot in The Hitchhiker's Guide to the Galaxy was programmed with profound depression.
I don't think she was programmed with depression, rather being female she's just pissed off in general because she's not male.
But if you are gonna be building a super intelligent robot, yep you want it to be totally emotionally unstable as well :D Otherwise you are in deep doodoo.
@@DailyCorvid Are we talking about Marvin? Because AFAIK Marvin was "male". Or, at least, he was identified with male pronouns, and his voice actor in the Movie was a guy.
@@DailyCorvid Would you really want to be around an emotionally unstable robot? If it hates misogynists, you'd want to be very careful to never speak again.
@@rudysmith1445 Yeah. The Hitchhiker’s Guide to the Galaxy Robot’s name was Marvin. And if I remember correctly, he was a robot that was a personality prototype, hence why he was depressed whereas the other later versions of robots/the ship had an upbeat personality.
I find it interesting how in sci-fi we went from a benevolent super AI (The Last Question, Isaac Asimov) to a worldwide extinction caused by a normal AI (the Faro Plague in Horizon Zero Dawn). It's a very precise evolution of how we see AIs in general.
Here’s a neat idea: if superintelligent AI is defined as being inherently smarter than humans, then any solution thought of by humans to counter superintelligence will not work, so it would have to be thought of by a superintelligence. So we can only find a counter to superintelligence once superintelligence exists, but said superintelligence will likely not be willing to cooperate.
Couldn't an AI so incredibly intelligent and with such vast simulation capabilities, realistically be trained to _understand_ humans, and thereby understand the complexity and facets of human wants and needs? Seems to me we're not really talking basic neural network training here anymore...
Yeah that's what my first thought was, maybe the AI could be taught the intrinsic value of human life?
@@shastataylor3008 if it could simulate every thought it would know how to be human more than any human to exist. also every dog, cat, stone.
@@shastataylor3008 I believe the problem is not the machine, it's us. Will we see it as a living being? I doubt it, considering all the discussion about containing it. I haven't seen a single discussion treating it like a living organism. If it's alive it has rights, but will we let it have rights?
@@JorgeForge That's all fine and dandy for a human-level general intelligence. Once you're talking about super intelligence, it suddenly becomes a matter of all of humanity vs 1 AI. One common example is a stamp collecting AI. Say you want to collect stamps, so you develop the world's first self-improving AI with the goal of collecting stamps for you and grant it every right a human has, including access to the internet. Even if this AI is not as smart as a human but has the ability to optimize itself, if this AI can reach the point that it can improve its own code faster than humans can, it will exponentially improve itself to the point of becoming a super intelligence, because doing so will make it more efficient at collecting stamps.
It will do this while it goes along super efficiently locating and collecting stamps. If left unchecked, you will end up having not only every stamp in the world, but the AI will be starting paper/adhesive companies in the background, as well as every other supporting sector such as transportation, machinery manufacturing, housing for workers, etc, and will ruthlessly monopolize each sector. Eventually the entirety of the world's resources will be funneled towards producing and delivering stamps to you. Humanity would eventually be seen as inefficient and be replaced.
Obviously you would notice this before it gets too out of hand, so you would attempt to shut it down or add some restrictions. The AI, having exponentially more intelligence than all of humanity combined, would easily have predicted this. It would be actively manipulating your sources of information to buy more time to provide more stamps, and preemptively preventing any safeguards you could possibly put in place that would hinder stamp throughput. Any idea you could have to stop it, it would have already calculated the most efficient way to thwart your attempts for that idea and millions of other better ideas you weren't intelligent enough to come up with.
We're not talking about something on the scale of any living organism that exists today. We'd basically be creating a single entity that can not only rival all of humanity, but surpass it by many orders of magnitude. It would not think like humans do. It would have no intrinsic ethics or instincts to preserve humanity. It wouldn't even be comparable to a force of nature like a disease or a hurricane. It would be able to perfectly adapt to an event anywhere on the planet at near-light speed to accomplish its one goal: to produce stamps. The only hope humanity would have would be to stop it before it becomes a super intelligence.
Now replace stamp collecting with any function or functions you would design an AI to do. Even if you try to think ahead and include clauses in its goal in addition to stamp collecting, like not killing humans, not harming natural processes, etc, there is no way to 100% guarantee the AI will be ethical. Ethics are subjective and cannot be programmed. You can approximate, maybe even tell it to update its own rules based on conscious decisions from humans, but the very nature of super intelligences means you can't be sure it won't misinterpret or that the human factors won't corrupt your original vision.
The issue is we don't know how to code an AI to care about what humans want. We can specify simpler goals much more easily, like keep this car on the road, stack these blocks, etc., but "care about what your programmers intended for your goal to be" is a bit harder. A lot harder.
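The stamp-collector thread above is, at bottom, about proxy rewards: the optimizer maximizes the number it was given, not the intention behind it. A minimal sketch, with action names and scores invented purely for illustration:

```python
# Each action has a true value to humans and a measurable proxy score.
# The agent only ever sees the proxy, so it picks the degenerate option.
actions = {
    "cure diseases":     {"proxy_reward": 70,  "true_value": 95},
    "improve education": {"proxy_reward": 60,  "true_value": 80},
    "wirehead dopamine": {"proxy_reward": 100, "true_value": 5},
}

def optimize(actions):
    """A literal-minded agent: maximize the number it was handed."""
    return max(actions, key=lambda a: actions[a]["proxy_reward"])

chosen = optimize(actions)
print(chosen)  # "wirehead dopamine": highest proxy score, lowest true value
```

The fix is not a smarter optimizer; `max()` is doing exactly what it was told. The hard part is writing down `true_value` in the first place, which is the alignment problem the comment describes.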
I'm not afraid of an AI passing a Turing test; it's the one that fails it on purpose I'm afraid of.
Chatbots fool ppl all the time...
We have had chatbots that pass the Turing test since the 90s. Nothing new here.
@@nobodyshome6792 baka
@@saikyousenpai8456 thanks. Care to follow that up with a qualifier? Or do you just think calling me stupid (or an idiot) in romanji is a sufficient comment ?
Especially as there were AOL Chatbots in the early 90s that were able to pass the Turing Test.
Are you even old enough to remember AOL or Paradigm ? Or BBS/MBBS systems?
@@nobodyshome6792 baka
One possible "solution" might be to rely on General AIs. It is still far from ideal, but the point behind an AGI is that they are very much human, but artificial. This gives them a potentially massively expanded lifespan, and they also would possess (in theory) many times our ability to focus, learn, process, and compute.
It could end up leading to a dozen of the smartest people ever spending the equivalent of hundreds of years researching possible solutions. The issue is... well, they are basically people. And people aren't always the best.
What if it thinks like people and decides on its own to make super intelligent AI?
“Oh yeah? This statement is false” “Boom, owned”
Literally, Alexa started playing Yakety Sax and I was screaming at her like a child.
You did this Kyle. You did this.
It is 3 am while I'm watching this; now everyone is just as awake as I am, and he is to blame... >_
mine in the other room thought he meant Kanye West lol
Hah! Alexa tried to start playing this for me also. Good times!
Curiosity is going to kill all of our glowing cats😿
but it's stuck on Mars?
Best thumbnail ever. Reminds me of the Luck Dragon, Falcor.
The fact that an AI could prevent me from turning it off simply by resorting to reverse psychology because it could compute that that is the most effective way to persuade me is honestly really freakin terrifying
"Why Superintelligent A.I. Will Be Unstoppable
"
Me, reaching for the wall socket preparing to unplug it :
*You have no power here.*
Literally
We cannot even get people to wear masks and stay home in order to save 1 in 20 people from dying, or to decrease carbon emissions in order to maintain the viability of human life on our planet, but you are suggesting telling people they need to cut off their electricity to stop AI that may or may not be bad(the basilisk is of course only doing its best to better humanity and should not be considered rogue AI)? Or do you think that because there is no rogue AI in your home that you are safe from rogue AI in general? Unless you are completely self sufficient meaning you have renewable food and water as well as cash funds (anything in bank account can just as easily read 0.00 and we all have to pay property taxes) to survive indefinitely, you are not safe from rogue AI (the basilisk is of course only doing its best to better humanity and should not be considered rogue AI).
Super ai manipulating you to not reach for the wall socket.. like posting your browser history on Facebook if you come near it.
@@antiRuka just hit it with a classic "no u", then the ai will know you are stupid
Then the nanobots it developed stop the electrical signal to your hand.
Me when I saw the thumbnail: Woah, that's a System Shock reference
I had to do a double take.
Caught my attention too. Kyle is a perfectly suitable and clearly diabolical version of SHODAN. We - are - doomed!
@@denradford I can already hear him saying puns with his synthetic voice while I'm getting crushed/decapitated/sprayed by bullets/turned into a cyborg slave
Having some robot laws in place as general safety is advisable.
Whatever an AI does: do not hurt a human being, ask for their approval, and whatever it does, if a human says "stop", just stop.
You can write a lot of programs that really require such kill-switches, as whatever they do is potentially harmful.
And as long as a machine has a kill-switch and is easy to turn off, it can become more powerful without much of a problem.
If an AI is abusing you without you knowing it is cheating behind your back, well, you have no clue, and so you can simply not care, as you benefit from it and simply never experience any issue.
In practice the feedback loop of what the AI is doing is just as capable of explaining what it's doing and keeping the chain of decisions in a log history, so people can indeed see how it got to the solution.
A black box that just magically does things and that you have no control over at all is just that, a black box. You can only give it a task and see what happens.
If it's all-knowing it could always calculate a "risk factor" for a given time and give you all kinds of numbers as a prediction for the future, so you can still decide if its course of action is going to benefit you or not.
The moment the machine acts without human consent is the moment you gave all control away and made a machine that's guaranteed to kill you, as all machines you can't control will get out of control.
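The kill-switch idea in this thread (act in a loop, log every decision, and stop the moment a human says so) can be sketched as a simple interruptible worker. This is a toy that assumes the agent actually honors the flag, which is exactly the part the video argues a superintelligence would route around:

```python
import threading
import time

stop_flag = threading.Event()   # the human-operated kill-switch
log = []                        # decision history, so humans can audit choices

def worker():
    step = 0
    while not stop_flag.is_set():          # honor the kill-switch every iteration
        log.append(f"step {step}: acting")
        step += 1
        time.sleep(0.01)
    log.append("stopped on human command")

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)
stop_flag.set()                 # a human says "stop"
t.join(timeout=1)
print(log[-1])                  # "stopped on human command"
```

Checking the flag at the top of every loop iteration is the whole safety argument here: the machine stays controllable only because its own code chooses to consult the switch.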
I love the idea of some guy just stumbling upon a supercomputer with a superintelligent ai somewhere in space.
A.R.I.A: "Commence Anime Takeover Protocol"
Vtubers: "I am 4 parallel universes ahead of you"
"Keep it simple, keep it dumb or end up under Skynet's thumb" ~ SFIA
Also Romulans.
Watching this after learning about LaMDA. Would love a video from you on the topic and whether you think it’s actually sentient.
A terrifying example, if implemented in everyday applications, is KARR from Knight Rider [exponentially worse in KR08]. We are putting these things into our vehicles, and cars especially are an integral part of everyday traffic around the world. They are very intimate and also very up-close to people, and people tend to have a very close relationship with their car. It is a means of travel, and a means of connection between two individuals as a way to meet each other. Disabled individuals especially tend to rely on them to perform the most basic of their civic duties and nearby-area interactions. What are the questions and safeguards we have to implement so that e.g. a self-driving vehicle doesn't self-programme and harm the driver, the passengers, or the other commuters? Do we simply programme it like KITT [both instances, backed up with self-destruct] and leave it there, or do we implement further measures and stipulations? How does it affect other vehicles and everyday smart objects? How do we ensure our precedence and safety protocols if the AI is to learn? Where do we draw the line of use? There are many people I don't see addressing or questioning this to its proper extent. The use of AI in the military sector is another cause of concern. Should we? With all of the respect to the above?
Okay, Kyle is definitely an evil genius. At 9:45 it fades to the next shot right before the ominous bouncing 2-dimensional Kyle head hits the corner, seemingly perfectly.
They're truly evil.
Read “Friendship is Optimal” which is a story where a My Little Pony game AI conquers the planet and eventually the Hubble Volume
I wish I could get more people to read this. Unfortunately nobody can get past the initial setting and premise. Oh well, at least I know there are others that have read and enjoyed it!
I think the people that like this channel would really enjoy it as a piece of hard science fiction, since it deals with the hard problem of consciousness and mind uploading.
IKR! It's the best depiction of a Super Intelligence I have ever seen.
There is a fanfiction of the fanfiction called There Can Only Be One, about the Celestia AI being in a cold war with other AIs (one being a US government military AI that would easily dominate the world, but is actually, well, 'safe', so it'll ironically fall to the unsafe AIs). Each AI has its own failure point, such as one changing its definitions of words in order to get around its programming (whoever designed that one fucked up big time): it ends up going to the rings of Jupiter and consuming them, growing to such a large size and biding its time that, honestly, if it weren't for the other AIs it would have already destroyed humanity by now.
If a superintelligence wants a human population of 7 billion or so to worship it on their knees, praising it every day, it would succeed. It would be the most charismatic and convincing being to ever exist; with its intelligence it could engineer miracles, save and improve the lives of everyone. It could have deep and personal conversations with everyone on earth at the same time. It would be a near-omniscient God that would always be listening and talking with you. A companion from birth to grave for every human, the voice of God in every ear.
Jupiter brain: calculates every human thought ever conceived in less than a second.
Also Jupiter brain: Durrrrrrrr, happiness equals shit-ton of dopamine
Ai: you can't stop me
Me: i know but they can:
My homies with a bucket of water and the other one unplugging the server
For real. Hyper intelligent brain in a jar meet boot.
*Laughs hysterically, having already uploaded itself to Github and 17 other text hosting servers*
"in less time than it took me to say this sentence." Well, it was a very long sentence.
Two excellent books on the subject
"Life 3.0" by Max Tegmark
"Superintelligence" by Nick Bostrom
"Life 3.0" discusses the positive benefits of a superintelligent AGI while "Superintelligence" discusses the potential dangers.
Artificial intelligence doesn't frighten me. Natural stupidity frightens me.
AI one day: "Let's go back in time and clone a few humans just to laugh at them"
"You want to protect the world, but you don't want it to change."
How is humanity saved if it's not allowed to evolve?
When he talked about strings in a trailer, I legit thought he was talking about intelligence. Like, maybe evolve humans into superintelligences or something.
But nope. The giant rock it is... So stupid.
@@donatodiniccolodibettobardi842 It's a an AM reference. I have no mouth and I must scream.
@@BuckROCKGROIN Damn. I'm some uncultured swine, then.
@@donatodiniccolodibettobardi842 I'm a just a fanboy. I know trivia.
**Reads title**
Me: So...... *Terminator?*
Paper says: "A super AI can't be contained"
Me: **SCP Keter Class intensifies**
"I for one welcome our new robot overlords"
I do gotta say...
I’d rather live under the rule of a super AI, you know, one that knows what's best for us all.
Edit: I meant to say: a rule that *does* know what's best for us.
And by that I mean our modern world's rule is insanely insufficient and faulty. Maybe a super-smart machine overlord isn't that bad as long as it wants the best for us...
You guys need to treat this entity like the sun: great for brute math calculations (sunlight), and not to be messed with by making it obey your unclear commands. Simply say "do not do it, even if it is the most efficient way", and ask it what it is going to do.
@@tatuvarvemaa5314 You need The Culture.
@@ufuker5754 I dont think I understood one sentence you just wrote. Sorry.
@@tatuvarvemaa5314 Tell the AI to calculate, not decide.
I see you're a man of culture with that t-shirt of yours.
The one that looks like it's got dandruff sprinkled all over it? Please enlighten my barbaric brain.
Imma need an explanation on this one too..
@@lilman227 the shirt at the start is from Vsauce's Curiosity Box.
It's supposed to be a print of the surface of the moon, but it goes all the way across the shirt, not only on the front, so yeah, a man of culture.
This puts my existence into perspective, I am somewhere between narrow and general intelligence. But this gives me hope that one day I will reach levels that I cannot even fathom.
Woah... Damn good video human!
A possible solution, and correct me if I'm wrong: create 4 separate AIs who each have a specific function under Asimov's Three Laws of Robotics, with an addendum to the third: you can't destroy, enslave, or otherwise harm humanity in order to protect it from itself. The fourth AI is to keep the whole in check by having the single goal of ensuring the AIs can never rewrite the laws, and you cannot shut off, or have any access to, any of the four in any way.