I think we should put it through a simulation where it has the goal we want it to have in the real world, but pursues it in the simulated world (which I'm assuming won't have any glitches to exploit; if it does, we can fix them and reset it), until we've shaped it so that it does what we want it to.
Great video! It is a shame that you stopped making videos.
1:47 I saw an owl. Are you trying to say you're smarter than the human race? Probably.
Artificial intelligence is a super exciting topic and this video was very well made. I've been really enjoying your content, keep up the good work!
There is a Doctor Who episode in which an AI is designed to keep a group of humans happy. It works well for a while, but when a girl dies, the group is extremely unhappy. The AI is unsure what to do because it doesn't appear to be able to completely cheer them up, so it kills the ones that are the most upset. This leads to more of the group mourning, and so the entire group is eventually killed, except for those in stasis. The Doctor eventually educates the AI, but that episode really scared me.
Do you know what episode?
@@plaguedoctorowl I think Series 10 episode 2 (Smile)
1:41 I see what you did there.
What?
I mean he put his pfp as more intelligent than humans, suggesting that he's smarter than every other human
We need to start with Asimov's three laws of robotics, but eliminate any need for robot or AI self-preservation. If an AI has any self-preservation instinct, and it runs a trillion trillion trillion simulations and even one sim has humans wiping out AIs, we humans are toast. If an AI believes there is a non-zero chance humans would harm it, it will destroy humans as fast as it can (probably within hours).
"Gaps" that can only be completed by humans will need to be built into the system.
Well, I have seen that movie, and someone said: "I'll be back!"
This video should be titled 'What Sam Harris thinks about super intelligence'.
How does Ben Stiller know so much about artificial intelligence?
I think humanity ending wouldn't be the worst idea
I love your owl!
Will we develop super artificial intelligence?
What if we make the machine super intelligence's goal to improve our human intelligence?
What if the machine super intelligence decides that the best way to do that is to replace our biological systems with electronic systems? Personally I would welcome this, but there are probably some people in the world who wouldn't want that.
It's better to combine; it seems safer. Replacement might not work as intended, and you may lose yourself, which would be similar to death.
I guess my point is more that your suggestion is not unlike the "make humans happy" example from the video. We might not like the implementation that the computers decide is best for us.
Yes, it's complicated. What we value as a society matters, because machines will be optimized to increase that value. It would be nice if it were a gradual process where machines learn to educate us.
Alexandr Martynov, this won't work on many, many levels.
This is more relevant than ever with the recent advances in AI technology in 2023.
Hmm, the problem with this theory is that it implies that creativity is the same as intelligence.
Chess computers are already much smarter than humans.
Well done, but the thought that we could be taken over by 'machines' is quite frightening.
We need somehow to integrate with machines, so humans will have value and be in the loop.
I just hope we can integrate and stay in the loop.
Detroit: Become Human
@@joyview1 That just makes the problem even harder. What person would you want to become a god?
Well, we would probably be dead by then, so no worries.
A.G.I. will be man's last invention.
You mean A.I. creation by A.I.? Humans are A.I.
At 2:00, you make the incorrect assumption that quick logic is all that is needed to be 'smarter' than a person, which is false because computers already have quick logic. The requirement for intelligence isn't just extreme logic but also empathy and emotional thinking, which computers have never had. Emotion drives decisions, and in people a large portion of intelligence comes from their emotions.
And therefore, the rest of the video should have been: AI will never be created until we understand exactly what consciousness and intelligence are and their relationship to the brain, which has been an unsolved problem for hundreds of years. Ask experts what consciousness is and what AI is, and you'll get 1000 different answers, because we literally have no idea. But by its original definition, AI is meant to replace people, and we can't do that if we don't even understand ourselves.
>that quick logic is all that is needed to be 'smarter' than a person
No, he doesn't. He points out that a HUMAN LEVEL intelligence running on an electronic computer rather than a brain could immediately become superhuman just by running faster.
>isn't just extreme logic but also empathy and emotional thinking
I really wonder what you think those terms mean. : /
>until we understand exactly what consciousness
What does consciousness have to do with it? The video is about artificial intelligence, not artificial experience.
@@MrCmon113 What I'm saying is that a HUMAN LEVEL intelligence is not possible without empathy, emotional thinking, and consciousness.
@@solidwaterslayer I'm not sure what "emotional thinking" is supposed to be, but human level intelligence is possible without empathy, because that already exists: psychopaths.
There is also no good reason to think that consciousness and intelligence are inseparably linked.
@@MrCmon113 Psychopaths have apathy and indifference.
And regarding higher intelligence, it is a result of higher consciousness. [Edit: Kurzgesagt has a great video explaining this.]
In other words:
Most of human intelligence comes from our emotions. Knowing how to do logic, science, and math is useless if it can't solve any problems. Most of society derived its intelligence in order to combat pain and to give others a better life.
If AI is to become AGI, it must be able to accomplish its own goals. It cannot have a goal if it doesn't have a problem. And it cannot have a problem if it doesn't have the emotions to detect one.
Currently, there are lots of AIs capable of solving lots of cool problems. However, those are our problems, detected by our emotions.
I said in my original comment that the definition of AI is blurry. And that's because any AI we understand is just called a program lol.
The Scythe trilogy is set in the future and explores a benevolent AI that was programmed for the good of humanity, and it is awesome.
Is it available on Audible?
This is a great video, although you're extremely monotone.
I'm French (from Paris) and I can understand your accent; it's not too difficult for me. Thank you. In both image and sound, the video is elegant.