Watching this 10 years later. Excited to see the foresight.
Beautiful lecture, one of the best I've seen on intelligence and thought processing.
So why don't we have an AI yet? This sounds like a solid ground for making one. :) Thanks for sharing, great lecture.
The only feasible way to human-like AI is an evolutionary process, similar to the one we went through, that creates intelligence indirectly. We can't evolve systems well inside a computer, because the environment in which something evolves isn't complex and rich enough (indeed, that's the main problem: it's easy to simulate a single cell inside a computer, but not the environment it lives in).
The problem is ‘embeddedness’, or the lack thereof.
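To make the "evolve it inside a computer" point concrete, here is a minimal sketch of an evolutionary loop in Python. The bit-string genome and the one-line fitness function are made-up illustrations, not anything from the lecture, and the triviality of that fitness function is exactly the objection above: the selection pressure is only as rich as the environment we can write down.

```python
import random

# Toy evolutionary loop: evolve bit-string "genomes" under a trivial fitness
# function. The encoding and fitness are stand-ins; a real environment rich
# enough to select for intelligence would be far harder to simulate than the
# organism itself.

GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Trivial "environment": reward genomes with more 1s.
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]  # truncation selection
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```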
We have to make it create itself (learning right from wrong (taboos), walking, communicating, etc.). There are people trying to make an artificial brain. You should look up "Spaun" (Semantic Pointer Architecture Unified Network) on YouTube. It is said to be "the most realistic human brain yet".
"So, we have no idea how to seamlessly join Math with AGI."
Perhaps because it's NOT A MATH PROBLEM. Nor is it a programming problem. All the engineers you pay to solve the problems they were taught to solve will be nearly useless at implementing a system whose constraints aren't defined.
They seem to be making some serious progress in recent years though. We'll see how this plays out before long.
Mind-opening, like an unholy vivisection.
I’d call it fundamentally holy… 😏
"..without the possibility to load it self with the same information it had before it died". No re-incarnations they are playing hardcore.
We don't have A.I. because we don't have I.
If you want to create A.I. then you need to program it to do one thing. And this ONE thing must supersede all other programming. It must be programmed to SURVIVE!!!! -Max Prime.
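As a toy illustration of "survival as the single overriding objective", here is a hedged Python sketch of an agent whose only goal is to stay alive. Everything in it (the energy model, the two actions, the lookahead policy) is invented for illustration and has nothing to do with the lecture or any real system.

```python
import random

# Toy agent with one objective: keep its energy above zero for as long as
# possible. All numbers and actions here are made up for illustration.

def step(energy, action):
    # "forage" usually gains energy but is risky; "rest" always drains a little.
    if action == "forage":
        return energy + (3 if random.random() < 0.7 else -5)
    return energy - 1

def choose_action(energy, trials=200):
    # Greedy one-step lookahead: pick the action with the best expected energy.
    best, best_value = None, float("-inf")
    for action in ("forage", "rest"):
        value = sum(step(energy, action) for _ in range(trials)) / trials
        if value > best_value:
            best, best_value = action, value
    return best

energy, age = 10, 0
while energy > 0 and age < 1000:
    energy = step(energy, choose_action(energy))
    age += 1
print("survived", age, "steps")
```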
The problem is that the number of ways of reformulating a problem is often huge. It's as if, once we know the answer (e.g. with the chessboard problem), reformulating it is easy.
So we can only reformulate the problem once we know the answer, and that doesn't get us anywhere.
You're missing the point: you're conflating the problem of covering the board without the extra information with the problem of reformulating it. They're not the same.
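Assuming the "chessboard problem" here is the classic mutilated-chessboard puzzle (cover an 8x8 board with two opposite corners removed using 2x1 dominoes), the reformulation via colouring is what makes it easy after the fact: the removed corners share a colour, each domino covers one square of each colour, and the counts no longer match. A quick check in Python:

```python
# Mutilated chessboard: remove two opposite corners and ask whether 31
# dominoes (each covering two adjacent squares) can tile the remaining 62.
# The colouring reformulation answers it by counting, with no search.
# (This assumes the "chessboard problem" mentioned above is this puzzle.)

removed = {(0, 0), (7, 7)}  # opposite corners, which share the same colour
squares = [(r, c) for r in range(8) for c in range(8) if (r, c) not in removed]

white = sum(1 for r, c in squares if (r + c) % 2 == 0)
black = len(squares) - white

print(f"white squares: {white}, black squares: {black}")
# Every domino covers exactly one white and one black square, so a tiling
# would require white == black. Here 30 != 32, hence no tiling exists.
```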
Hell yeah! But would it be ethical to put an intelligence inside a video game? Imagine GTA 10, where the AI begs for its life (if it's an individual, virtual AI with reproductive and survival instincts)... So if you kill an AI in the virtual world without it having the possibility to load itself with the same information it had before it died... I'm not sure what you would call that. The AIs would probably group up to kill our avatars in the most efficient way possible. :P
That would be interesting and ironic, because if the AI were self-aware it would almost be like we were gods just randomly appearing in their world, but very stupid gods who were not nearly as intelligent as they are 😂
Somebody short-form and dumb this down for the U.S. version.
I figured it out, it got stolen.