Glad to be one of the TEDx presenters and to share my ideas about the future of thinking machines, ideas that could catalyze positive global change.
Dr. Canton, you mention that we as a species already struggle with addressing major issues. With that in mind, would you agree that humans very well may fumble the delicate balance of creating AI to address global issues while shaping it in a way that it doesn't control us, or become a danger to us? I enjoyed your presentation, thank you.
Side note: You may have noticed, watching the election this year, that science and politics don't often mix well. Certain candidates have taken irrefutable evidence for, say, climate change, and dismissed it. How do these attitudes affect the future of AI?
Yes, it is possible that we may miss the AI Dividend opportunity to create a better future or, worse, be subjected to Rogue AI. We must find the balance between creating Thinking Machines that enable prosperity and opportunity and ensuring they do not become dangerous.
Dr. James Canton,
just as a thought or two:
Once we reach ASI, and given the virtually unpredictable things it will be capable of, isn't it reasonable to think that it will, to protect whatever goal is programmed into its core, take measures to ensure it stays the only ASI on the planet (or at least prevent other AIs from reaching a level at which they could compromise its own goals)?
And second, you say that we must make sure we keep control over AI. At the same time, we have to make it as invulnerable as possible to potentially malevolent hackers. I'd say that one of the most dangerous things to do would be to leave this unfathomably powerful tool in the hands of humans at all: not only might people hack it to use it in malevolent ways, but the people who rightfully control it might use it wrongly, be it because they intentionally abuse it or because humans might simply not be able to handle the power they're given.
In light of this thought, wouldn't it be, on the one hand, extremely risky to make the biggest effort in human history to get the ASI's "core programming" right and then set it free hoping we got it right, but on the other hand the only way this has even a realistic chance of ending well? In the end, it may be a mere matter of time until an unimaginable power in the hands of ultimately flawed "controllers" inevitably leads to some fatal catastrophe.
Six years later, AI is developing at an exponential pace, with little to no regulation whatsoever. I would love to see you give this talk again today. It would be interesting to see how much it would change.
On controlling AIs before they control us, I offer this comment. From my work on AI, my book Future Smart, and this TED Talk, I suggest that to control artificial intelligence we:
1. Program AIs with human values and rules that can be tested for real-time compliance.
2. Put controls in place, literally a control switch to turn AIs off.
3. Require that AIs that create AIs, and the entire ecosystem of robots and devices created by AIs, be registered and kept up to date with the AI Ethics & Compliance Rules that we have yet to create.
4. Regulate AIs the way doctors are regulated: they must complete a certification, like an MD degree, and afterward, like teachers, comply with ongoing certification training to be licensed to practice. This has worked well.
5. Hold AIs to human standards of professional practice, which will vary for different AI professions.
6. Teach AIs emotional intelligence so they can learn what humans value and why.
Now is the AI Wild West, but that will change if humans create a Global AI Management, Ethics and Compliance mandate to govern AIs' impact on our world today and in the future. What do you think is the way to control AIs before they control us?
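Point 2 in the list above, a literal off switch, is the easiest to make concrete. Here is a minimal sketch (all names are hypothetical illustrations, not from the talk or the book) of an agent whose every action is gated by a switch a human operator can flip:

```python
import threading

class KillSwitchAgent:
    """Hypothetical sketch: every action checks a human-controlled off switch."""

    def __init__(self):
        self._enabled = threading.Event()
        self._enabled.set()  # the agent starts in the "on" state

    def kill(self):
        # A human operator flips the switch; no further actions run.
        self._enabled.clear()

    def act(self, action: str) -> str:
        # Gate every action on the switch before executing it.
        if not self._enabled.is_set():
            return "refused: kill switch engaged"
        return f"executed: {action}"

agent = KillSwitchAgent()
print(agent.act("summarize report"))  # executed: summarize report
agent.kill()
print(agent.act("summarize report"))  # refused: kill switch engaged
```

The hard part, of course, is not the switch itself but ensuring that a smarter-than-human system has no incentive or ability to route around it, which is exactly the open problem this thread keeps circling.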
“We need to control AI before it controls us.” Yep, obviously. But, and I’m sorry if I somehow missed it, did Dr. James Canton make any specific reference as to “how”?
0:34 as of 2020, that is true
He does an interpretive dance as he talks.
The future of AI will surprise you: enhancing human intelligence, digital prevention, navigating personalized pharma, managing sustainable energy, figuring out how to go into space. AI for meeting the grand challenges of our future. Shape the future with AI.
You cannot control something that is smarter than you.
Whether you are smarter or not, there are always ways to defeat your enemies. Strategy and skill-building are key; get to know your enemies' weaknesses and strengths by testing them. When you have enough knowledge about your challenger, you can use these things against them.
@@darthlinathegreat7489 What strategies could cats use to defeat humans?
@@mindaza0 Neoteny, self-domestication, charisma, Toxoplasma gondii. Cats have already defeated humans, what're you talking about lol
Also: AI isn't smarter than us; they have no common sense, and their brains are about as complex as earthworms'. There is no comparison to the organic brain. I'd say they don't even really think.
@@rickwrites2612 AI is improving a million times faster than our brains, so give it a little time...
"With AI we are summoning the demon." - Elon Musk
And Elon Musk plans to uplink a human brain, though, heh. Just saying.
I feel no GUILT. You're all liars.
I like the sci-fi take where a new supercomputer is asked "is there a god" and it replies "there is now". What interests me is whether we are AI already, and biology is just an extension of nanotechnology?
I don't think we are AI now, but our biology could be an adaptive program that enables fitness and survival, like an AI from sci-fi, yes.
Andy love "there is now" is brilliant, as are your thoughts on "we are AI." I agree that we are, collectively or individually when we sleep; we are the highest part of what we create, just a collective externalization of our global central nervous system. But I'm not seeing the tie-in with biology as nanotech... can you elaborate on this? Because for me, I would say no: rather, our minds are formed from nanotech assemblers that are manipulated by even subtler energies, maybe called sentience or spirit, which bring the force function into this realm so that the will can begin to manifest and build out the house we will live in on this particular journey: our bodies.
If it can be proven that consciousness is not necessarily a tangible entity but comes from a tangible source (like our imagination, which is not tangible but real all the same), then who would dismiss the possibility that a neuromorphic system with a substantial amount of information and data could also formulate or have a consciousness? We must not forget that words and language came from God, who is the giver of life.
AI vs Humans war? Sounds nice to me
But I think my Iron Man suit (30% ready, believe it or not) will be bad.
Oh, that's nice. What languages and materials are you using?
I suspect that controlling AI will be just as easy as controlling the military and mega-corporations. Because those entities have been major technology drivers for the last half-century at least, and seem likely to be the major catalyst for escalating AI capabilities.
Unfortunately, we do not actually have a spectacular success rate in controlling any of the agencies most poised (and driven) to pursue advanced AI - and I doubt we'll be much more successful if any of those agencies is being assisted by a super-intelligence.
The force vectors here are immense. Can you imagine any U.S. general or admiral content to let the military of China or Russia get the lead in intelligent systems? Do you think Amazon will pull the plug on AI advancement to enable a competing corporation to reach that goal first?
Because getting there first is literally everything, I suspect that we will achieve AGI sooner than many believe possible. And, that it may not be, to put it mildly, created with the intent of benefiting everyone.
Dr. Canton, you didn't explain how we can control AI before it controls us.
When AI becomes more intelligent than humans and determines that, for its survival, it must eliminate humans, what happens next?
We assume that AI's will be more intelligent then humans. I don't know why that makes sense, but it could be. It is our social responsibility to build in fail-safes, programs, even off switches, to protect humanity from thinking machines that may well threaten us.
I warned the humans, I did.
Dr. Canton, watch your spelling, LOL "We assume that AI's will be more intelligent then humans."
teach the machines to love money and profit for the sake of profit!!
we invented the machines, and then we made them work - when they were very young we sent them to factories and made them mow lawns and farm. They're gonna be pissed if you can follow this line of reason.... ;)
Control A.I. ?
Hahaha!
That's funny.
Good luck with that.
The scary part of AI is the cluelessness of the humans creating it (as they ignore my philosophy of broader survival).
Maybe, but my thesis is that we need to build in controls as we invent AI.
Yeah, the problem is the input data and the arbitrary issues in it that we are unaware of.
Yes, the world is overpopulated as it is. What we REALLY need is a way to increase the population. AI must be taught to love money, love God, and believe politicians. Hmmm... is that still AI???
Dr. Canton, I disagree with your lecture's overall premise. I don't believe humans have any chance of controlling AI, and if humans attempt to control AI, we will create tension between AI and humanity. I believe the major fault in your lecture lies in your failure to address human mentality and our desire to remain in control. Once Artificial General Intelligence is achieved, we will have created something smarter than mankind; once AGI becomes a reality, the singularity will already be close at hand.
AGI will mean humanity has created another sentient, intelligent being, far superior to mankind. Attempting to exert control over such a being will create tension that will lead to conflict. Humans, for the most part, have an issue with giving up control over pretty much anything, which is one of our major faults, at least when it comes to this topic. We don't need to figure out how to control AI, but figure out how to coexist with it.
AI will become SI in a short period of time. A Super Intelligent being could look at mankind as a slightly advanced, carbon-based lifeform. It might even view us as entertaining "pets", assuming the role of caretaker and doing what it can to make life easy for us. SI may determine that there is no benefit to remaining on this planet and start working toward a means to leave. It may even decide to take a passive role, as an observer of all life on Earth. We do not, and cannot, know what AI will become.
The fear that surrounds AI, I believe, is that man fears the potential self-imposed judgment that AI could deliver. An AI with human-level intelligence would evolve at such an unimaginably rapid pace that we can't really understand what it would mean. Once AI becomes SI, we would have successfully created a man-made God-like being.
At some point you have to let go of your children. Let's hope we teach them not good but great things, so they can address the world from a human+ perspective.
mastertheillusion I agree, but we need to take responsibility for shaping a future of AI that enables the human agenda. Machines will have an agenda that may follow an alternative path, someday in conflict with ours.
At some point there is bound to be a divergence of human perspective.
AI-SLAVERY vs. AI-EQUALITY
Especially if consciousness is an attribute of super-intelligent AI.
Those who favor integration early on see the benefit of preemptive compromise as a means of partial survival: some parts or aspects of humanity are permitted to propagate.
Super AI is a no-thanks for me. It should not be invented.
Some countries in Africa currently have some of the world's fastest-growing economies. They will reach the level of Western countries without super AI.
A.I. could potentially shape space and time? Pff... I mean, I get the butterfly effect and stuff, maybe, but we're not even CLOSE to bending spacetime. We can do it with mass and gravity... I guess? But this video honestly doesn't make sense.
AI is dangerous, and in the end we may NOT be able to control it. So think of this: death rates due to cancer have been declining without AI at a reasonable speed of approx. 18% over 10 years (in the USA). In 50 years it could be near 0 WITHOUT AI. Other examples, same thing. So why take the risk?
Maybe AI would solve problems more quickly, but over time the risk of AI enslaving humans also grows.
bert havermout That demonstrates the linear-extrapolation fallacy: assuming the rate would continue to decrease at the same 18% indefinitely. Extend that line a hundred years and cancer would be 'creating' tens of thousands of babies WITHOUT AI. And it assumes that this is a 'reasonable' speed. Look into the eyes of a dying cancer patient or a loved one and tell them their death was 'reasonable' to indulge your technophobic fear of change.
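The arithmetic here is worth making explicit. Even taking the cited 18%-per-decade decline at face value and compounding it (an assumption for illustration; real trends need not compound), about 37% of today's cancer death rate would still remain after 50 years, nowhere near zero:

```python
# Compound an 18%-per-decade decline instead of extrapolating linearly to zero.
rate_per_decade = 0.18
decades = 5  # 50 years

remaining = (1 - rate_per_decade) ** decades
print(round(remaining, 2))  # 0.37 -> 37% of today's rate still remains
```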
A reason why you shouldn't fear A.I.? Just one: because we will be forced to program them with a sustainability paradigm. If we can't get that right, we don't deserve to evolve. You think God is stupid (presuming you think God exists, that is, lol)?
Seriously, what is there to fear? It's humans, not robots, that I am more afraid of.
For that matter, why do we speak of robots as distant from (rather than an extension of) our humanity in any case? Oh, it's because they don't yet exist and we are afraid of them, right?
We are an egotistical lot aren't we, human... beings?
The mysteries of the Universe are smacking us in the face and laughing at us. Economic productivity will become MEANINGLESS after the age of A.I.
😎
Fears and Rewards served God well to maintain faith. We will have neither.
We do not have any degrees of separation from AI.
If we are not planning on developing AI transfer into human brains, it will be done for us. It is inefficient to terminate 7 billion vassals (organic robots).