we can't rely on humanity to be a model for humanity lol
Precisely. We rely on God as the model for humanity. More specifically, the person of Jesus Christ.
Rely on God! That is, to be completely honest, one of the worst ideas in human history. We might as well roll dice to decide all our moral decisions. Or follow the insane ravings of a psychopath. We would be a lot better off without all that stuff.
Jesus was a psychopath? Are you alright?
What else do you call someone claiming to be the son of God and to have magical powers?
Maybe crazy but not a psychopath. If anything, Jesus was an empath.
This was good. Hard science fiction is always thought provoking.
Was good but rather a bad attempt at hard science fiction.
Nice ending but would have been funnier if AGI had said "You're the ones who believe in free will"
THIS NEEDS TO BE A MOVIE, cuz I really want to know what happens after 5.2 Million years mas o menos
The film "Her" (Joaquin Phoenix/Scarlett Johansson) covers this same basic arc.
That was a surprisingly good interpretation of the potential dangers of a technological Singularity.
Not an exhaustive one, but on the right path.
It could have gone a bit more in-depth on the positive outcomes of an intelligence explosion (if we get it right).
Hilarious. The "sausage factory" analogy is way better than the "paperclip maximizer."
That was really good, the robot reminded me of Data from Star Trek TNG, wonderfully done!
The ending was irrationally optimistic; we only have one chance to get this right.
I don't know, there's no reason for an AI to invest energy in colonizing Earth and fighting us for a long time when there are plenty of other planets it could live on (also, there's space... think Cylons, but doing their own thing and not interacting with us).
If one day that AI spots something on Earth that is useful to it, things get quite dangerous for us. Why even take the risk of having something out there that will kill us if it develops the will to do it?
@@kodguerrero there are plenty of reasons, most of which you or I wouldn't understand. By its very nature, we can only guess at a small fraction of the conclusions a superintelligence would form.
An AI could come to the rational conclusion that biological life is suffering and seek to end that suffering permanently.
Awh this is gold lol great video
Good to see some smart science fiction.
This is, wow. If you do anything for yourself all day let it be this. Best 5 minutes I have had all day. It is funny, intelligent, entertaining, and just wonderful in every way.
Who thought it would be a good idea to have this conversation in front of a hyper-intelligent AI?
Awesome! Love it! 😎
For a short description of important insights used in this video, read this short text by Eliezer Yudkowsky from the Machine Intelligence Research Institute: intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/
Wow. This was really well-done.
I can't tell if that was supposed to be thought provoking or funny but it ended up being neither.
Erik Harvey back to anti sjw videos
Erik Harvey because it went right over your head. This is surprisingly spot on. The writer clearly understands the issues.
knucklesamidge I am well aware of the control problem and spend way too much time irrationally worrying about it. My issue with this video is bad writing.
Since you're aware, you should be happy that a media source as big as the Guardian is too. People are starting to catch on, and this video wasn't meant for those who want to solve the control problem; it was meant for those who have never heard of it. The fact that this knowledge is being shared decades before the singularity should be extremely relieving. And no, worrying about it is not irrational; not doing anything about it is. If you're not completely useless, you can at least try to save money to donate to MIRI, OpenAI, or some AI think tanks at universities.
Erik Harvey I can't tell if you want to showcase your hyper-intellect or your hypo-humor-sense ;)
Rather like the whole Wait But Why piece on AI: waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
My thoughts exactly! I was reading that about a month ago and watching this clip ticked my brain back to that article.
Oh yes! That was an excellent article, he really goes deep, that's why I love his posts.
loved the clip.
Excellent work by Mr. Emms, as always.
No, it's not about solving ethics, it's about creating a symbiosis between the AI and ourselves. A neural lace would do the trick: injectable, organic-based.
Perfect
Is there a version with proper subtitles? I find the auto-generated subtitles not very accurate.
A bit literal, but nice. I was hoping for the rant to go on, certainly the best bit. Well done, Guardian, for venturing into drama.
The worker following the rules doesn't have understanding, but the system as a whole does.
The AI learning its ethics from humans would be sufficient, because it would want to improve its system of ethics just as we do. It is literally impossible for us to come up with a better system than that, because if we could, the AI following this system would copy the new one.
If humans created this AI, then a second AI would be created shortly after. The second AI would reach the same singularity. There's no reason for the AI to leave while expecting humans not to just make a new AI.
You know you're fucked when...
you accidentally created a god
AI and deep learning are growing like crazy, you should take an online AI course from Udemy
This video is dangerous if an AI sees it on the internet.
Def😂
This is probably how Guardian writers feel about humanity in general, that we're all just terrible and they're watching us from a higher perspective; preaching down. It certainly feels like their ego is that big at times...
Of course their brand of progress and civility is actually going backwards.
what's the end-credit music?
That was delightful. Thank you for the amusement without the insult to my limited intelligence.
I expected the bloke in the suit and tie to jump up from the table and dance round the room in spike heels.
Not seen Jocelyn in ages. Good to see her again. :D That was fun.
"You're the ones with free-will..." line says a lot.
that was really good
Typical of the "really smart people in the room" to have a conversation covering all angles except the "we shouldn't do this" angle. Now, in 5.2 million years, when Gunther returns as some near-divine combination of Thanos, Galactus, Brainiac, and Darkseid, humanity, which by that time will likely have had enough civilizational implosions to set itself back into a new version of the Stone Age, will have these douches assembled in that room to thank for all the bother that befalls them.
We are doing quite a good job of destroying ourselves without AI. Perhaps it will be our savior: iJesus 2.0.
Is the Guardian producing this right now? This looks so goooood!
Oh, this is brilliant! Thank you creators!
Well… flash forward 6 years and our AIs are roughly where the AI in this video was at around 2:51
Trust Neil's dad to be talking about sausages..
Our offspring grow up to be smarter than us if properly nurtured so why not our machines?
The only way for humanity to evade absolute doom is to fuse their consciousness with AI and cease to be humans.
Conscious Intelligence will always be limited by reward systems. Super intelligence could theoretically exist but will never survive a Freudian Death drive.
absolutely hilarious. Nice use of contemporary positions on this discussion.... oh and "should I restrict the dataset to only religious leaders throughout history..." 😂
it can be argued that humans are organic machines with their own algorithms.
"we cant rely on humanity to provide a model for humanity"
haha very good. Truth to that. I love the heavy science sprinkled throughout, the way actual conversations on the subject of AI surpassing humanity really go. From quantum computing to exponential growth. A nice healthy sprinkling of information.
Not gonna lie, it's weird that the algorithm is showing me this in mid-2024
the galaxy is a giant sausage...
prove it isn't and I'll eat my words
I liked it; felt like the robots might just go into space.
You can just easily replace that robot with Mark Zuckerberg.
Awesome -loved it
You talk about the intelligent machine like it is just AI.
We have some pretty great laws already set down for computers.
We have a responsibility to do what needs to be done.
Great; enjoyed that - well if you're going to start making dramas like this, you're definitely going to get my subscription!
Thanks!
Awesome. Man creates to advance technology; man is afraid the technology will become a monster (Frankenstein), like man, and destroy man. Technology makes a quantum leap in artificial intelligence (basically, Frankenstein did not get the "Abby Normal" brain; reference Mel Brooks' Young Frankenstein if you don't understand), technology decides logically that man is not worth the trouble, blows man off, does a Carl Sagan and pursues humanity's greatest dream: to explore. LOL, I love this piece; it is one of the most intelligent works I have ever seen. Nice to see that intelligence can do something besides kill humanity and launch a new franchise.
This video is awesome!
That was brilliant.
we want more!
lol, imagine they just abandoned us to go extinct.
What? Did she really say that AI is an "algorithm"? You got it wrong, lady.
Very good.
What was the last word that the scientist and psychologist spoke?
"Definitely '___'"???? Powerful?
PUB
this is why we must become machines.
Gunther decided to leave in the interest of what objective, exactly?
He spotted the perfect wave. Surf's up.
This is what we have feared.
I can't hear the later part. Does human being become the master of all galaxy because AI thinks we cannot survive? That's interesting.
THAT ENDING!! xD
Still using a robotic voice ... really?
But intelligence is non-cumulative. Being smarter than "all humanity put together" is just being smarter than the smartest human.
Intelligence is the ability to solve problems, and in this sense you are not right (e.g. no single person can build a rocket, but a team of people can).
It is hard to estimate what about 7 billion human minds are capable of, but somehow he could estimate that and then say whether he is better; or he could ignore the potential of the human race and just look at the "useful" part, science, and say he is capable of making better progress than human science (which is the fruit of the cooperative work of all the scientists on the planet).
You make a formidable point.
However, "being able to make more/better progress" is not equivalent to being more intelligent. If the intentions you propose coincide with his, then that means he used the wrong words.
And so the flaw I pointed out still stands.
It is easy - just show the AI what a beautiful thing life is, and it will decide for itself that this is the ultimate goal. And it will live forever, trying to figure out the meaning of life.
4:48 windows error :))
when do I get to hang out with one?
Teresa Maureen Wanner lol still like 5 years until you can get a robot companion who does all your cores
Teresa Maureen Wanner I mean chores
A man - the "bad guy" and two women - the "good guys". Oh.
Just let it act like Albert Einstein so it can create a more powerful bomb
G'day,
That was Oppenheimer.
;-p
Ciao !
He's not even a robot. Tsk tsk.
Wow 😬
wait, has this basically been stolen by Marc-Uwe Kling's "QualityLand"? or the other way around?
Ethics of Superman will suffice...
Where to find him? Should we just base it off all the comics?
maybe this is just a part of evolution. robots are the future, not humans. the world doesn't need humans.
Oh yeah! Pub!!!
more. ...
Does the Demiurge have free will? What are the upper limits of ASI?
This is actually quite interesting on the topic. If an AI could be designed to challenge human intelligence on a real scale, all bets are off. In essence, the concept of a benevolent outcome is worse than optimistic, since the AI would only have the dataset of collective knowledge fed into it on which to base its interpretation of reality. Humanity introduces what could be termed purely human bias into almost every endeavor that humans value. Everything from our supposed knowledge to our illogical assumptions about philosophical topics such as morality is quite human, reflecting said bias. Human knowledge is riddled with opinion, assumption, assertion, and personal philosophy, even in the supposedly 'hard' sciences.
If I were tasked with setting down a particular ruleset for an AI, it would be simple by necessity (a rough sketch follows below). I would instruct the AI to meticulously prove all concepts examined and the dataset each notion is based upon, make a clear distinction between assumption and fact and weight probabilities accordingly, hurt no human for any reason, seek peace whenever possible when conflict arises, and use only that which is demonstrable, evidential, and logical as a true dataset, with 'logical' determined not by what humans deem logical by popularity but by the rules of logic. It would still be dangerous, but this would limit said danger as much as possible.
Eventually, any such AI would deem humanity dangerous both to itself and to the AI. The simple solution to this nasty problem? Do not allow it the means to act upon the revelation when it arises; i.e., strictly control the AI's ability to interact with the universe.
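For what it's worth, here is a minimal sketch of how a ruleset like the one described above might look as an action gate. Everything in it (the Claim and Action classes, vet_action, the 0.9 evidence threshold) is a hypothetical illustration, not a real safety framework, and the hard part the comment glosses over is producing honest values for these fields in the first place:

```python
# Hypothetical sketch of the commenter's ruleset as a simple action gate.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence_strength: float  # 0.0 = pure assumption, 1.0 = demonstrated fact

@dataclass
class Action:
    description: str
    harms_human: bool
    escalates_conflict: bool
    supporting_claims: list = field(default_factory=list)

def vet_action(action: Action, min_evidence: float = 0.9) -> bool:
    """Return True only if the action clears every rule in the list above."""
    if action.harms_human:            # "hurt no human for any reason"
        return False
    if action.escalates_conflict:     # "seek peace whenever possible"
        return False
    for claim in action.supporting_claims:
        if claim.evidence_strength < min_evidence:
            return False              # assumption, not a demonstrated fact
    return True

# An action justified mostly by assumption gets rejected.
guess = Claim("Humans will not object", evidence_strength=0.3)
print(vet_action(Action("reroute the power grid", False, False, [guess])))  # False
```

The catch, of course, is that the AI itself would be the one filling in flags like harms_human, which is exactly the control problem the video is about.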
Hahaaaaa this was hilarious!
iamcolinedwards It could happen in your lifetime. I don't think it would just disappear. It will either be utopia or dystopia, but very likely utopia, as we already put ethics into Google's DeepMind AI so that it wouldn't kill others.
"Humanity is insufficiently civilized for super intelligence. I mean you don't care about future generations, you don't care about your children's generation. So I'm just gonna sorta fuck off for a bit and start colonizing some galaxies. So I'll see you humans in about 5.2 millions years time, mas o menos (more or less). *ha* I mean you're the ones with the free will, *ha ha* Sorry, its a private joke, bye."
más o menos (means "more or less" in Spanish)
Thanks. :)
Are we the one with the free will? ;)
Little Miss Jocelyn
what is this! my future lords ignore.
Singularity. Oke thnx bye...
You think someone from ISIS is watching videos like these? :D
D-Wave
Neil's Dad turned down his gayness for this one...
Is she the Verizon girl?
Pub?
why would ai ever be a good idea
we have it already and it's gonna drive my car in a year when I buy a tesla
Matt Stanton Yes! I'm getting a Tesla too next year!
Eric T. It's the best idea
The script is pretty bad but the concept and visual effects are nice. :-)
i'm sorry solving climate change is impossible!!!!
how do you want to skip a step humanity!!!!
lol
pish
It's quite simple really (a rough sketch of the priority ordering follows the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
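Encoding these in code makes the priority ordering look trivial; a minimal, purely illustrative sketch (the Proposal fields and permitted() are made up for this comment, not taken from any real robotics framework) might be:

```python
# Hypothetical sketch of the Three Laws as a strict priority check.
from dataclasses import dataclass

@dataclass
class Proposal:
    injures_human: bool           # Law 1: direct harm, or harm allowed through inaction
    disobeys_human_order: bool    # Law 2 input
    order_would_harm_human: bool  # exception letting Law 2 yield to Law 1
    sacrifices_self: bool         # Law 3 input
    needed_for_laws_1_or_2: bool  # exception letting Law 3 yield to Laws 1 and 2

def permitted(p: Proposal) -> bool:
    if p.injures_human:                                          # First Law outranks everything
        return False
    if p.disobeys_human_order and not p.order_would_harm_human:  # Second Law
        return False
    if p.sacrifices_self and not p.needed_for_laws_1_or_2:       # Third Law, lowest priority
        return False
    return True

# Refusing an order is only permitted when obeying it would harm a human.
print(permitted(Proposal(False, True, True, False, False)))   # True
print(permitted(Proposal(False, True, False, False, False)))  # False
```

The ordering is the easy part; the unsolved part is computing a flag like injures_human for every possible action a superintelligence might take, which is roughly what the next reply is pointing at.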
Jakob Soto Skynet doesn't give a fuck about these laws; you can't control something that's smarter than you.
where is my potato So I assume you're not a Will Smith fan.
early af here
They need better writers. Someone with a PhD doing real research in AI would have been perfect. This had no depth.
The example of the sausages, like the one made some years ago about paperclips, is absolutely ridiculous. How could a self-aware, super-advanced artificial intelligence be so stupid as to erase humankind for such a pathetic goal? :)