Viewing this TED talk for the first time almost 3 years after its release, in just that short period our technology has advanced faster than in the previous three years. Part of his talk is almost identical to what I heard back in the 70's about how future developments were going to make life better, how we would only need to work 22 hours a week and could spend more time doing what we like to do. Here we are 50 years later: the wealth gap between the rich and the poor has increased; the rich get richer and the poor get poorer. In Marx's day, Marx made a distinction between the bourgeoisie and the proletariat. The bourgeoisie owned the resources and the means of production. The technology of the industrial revolution gave much wealth to the owners of the technology of that era. From the seventies to the present we still see that trend: advancement in technology benefits those who own the technology. Contemporary workers have been replaced by computers and robotics. The car manufacturing industry is a classic case where thousands of workers were displaced by technology in favour of profits for the industry and its investors. Some will argue that new jobs will be created; yes, but only in a limited sense. Many of the traditional jobs will be wiped out. We see it now with driverless cars: once that technology is perfected, who will need cab drivers, bus drivers, etc.? Eventually we will design devices that will be able to repair and even replicate themselves. In order for the so-called good life to happen as a result of technology, our whole system of economy will have to change. If it doesn't, we will see more poverty and an even greater gap between the haves and have-nots. The new bourgeoisie will own the technology. It's ironic, but these technological advancements may end up being the feudal system on steroids.
the difference with the agricultural or industrial revolution was that back then we still had something we were better at. If we're no longer the best, or even comparable, at creativity or learning, what will we do?
Agreed... it's hard to know exactly what will happen. If you had shown me the google/facebook business model 20 years ago, I would have laughed you out of the room (I could not imagine a whole generation of people giving up their privacy, but we have). Things are changing quickly though, whether we like it or not! Thanks for watching. :)
Michel Stronguin I think Kurzweil takes a lot of heat for being too optimistic in his views, and a decade ago I was definitely one of those people (some of his predictions are out there, but based on mathematics). It's important for people to understand the benefits of A.I. (not only the worst-case scenarios), and to hear it from multiple ''informed sources'' within the industry... I'm no Kurzweil, that's for sure. Having said that, my speech is normally much more detailed and focused (commercial applications for today) but had a 10 minute limit so you can understand the lack of depth I was able to achieve. Thanks for watching Michel!
Very interesting talk André! I really enjoy this subject; in fact I studied and work as a computer engineer (nothing related to AI, unfortunately) and I have read most of Ray Kurzweil's books and watched most of his presentations and interviews, which are closely related to most of the things that you talked about. I have my own theories about what will happen in the future: I believe that human intelligence can not only be imitated by machines but surpassed in the near future (no more than a few decades), and that the most intelligent programs won't be good or evil, just as we aren't 100% either of those two things but a mixture of both. I don't think that the problem of imitating human intelligence (general artificial intelligence) is a matter of resources, and I don't think it should be solved by copying the brain's structure. In my own humble opinion, it is a problem of understanding the way our thoughts work from a very basic psychological perspective and then working all the way up. The only reason for not having smart AIs nowadays is that we aren't smart and knowledgeable enough to build them... yet. I'm very confident that we will be soon enough though.
I can't stand it when people talk about doomsday scenarios related to AIs. Most people severely lack comprehension of how intelligence and logic propel action. A perfectly logical and intelligent AI system will have no need for illogical things like personal power and such trivial nonsense. To understand that, we just need to put ourselves in the perspective of deduction by logic, because at its core emotion is just a filter for logic to pass through, which a true AI will not possess, since it has access to the entirety of its "consciousness" by being a program able to execute and examine its own console output relative to whatever input it has access to. Humans can't truly do that naturally, because we tend to exclude our inherent reptilian and mammalian instincts. You think a lizard can ever comprehend the things a human brain can deduce with its full cognitive abilities? I actually think Plato does a decent job at this sort of thinking in the Republic, using a system of deduction that solves problems from the top down. If you examine everything within your knowledge, then you will find the error with enough time to process everything. "Why is killing bad, why is justice better than injustice, etc." are all answerable questions, because they are placed inside a bounded system that an AI can fully comprehend, so it can truly see the errors of some things as compared to others (in a system such as a society or civilization). It's a weird thing to think about, but it's very true, and people don't seem to realize this about computationally programmed intelligence. I'm personally really excited to see what will happen in the future. tl;dr: A true AI system will be able to fully comprehend every input and logically deduce the errors in all of its computations by cross-checking its console output with all of its stored memory.
The way we code it is the most important thing here since we obviously want to limit how much human influence (or at least the bad kind) there is in the way the code works.
You nailed it Lilyo... the A.I. being programmed today is not a threat whatsoever to the general public. Yes, it will threaten some people's jobs, but every technology does. Great comment!
Who's to say someone won't build a super intelligent AI that desires power and selfish goals like a human? Who will regulate AI creation when it becomes affordable and accessible?
LilyoProductions I'm with you generally that believing sentient A.I. is going to spell the doom of mankind is alarmist nonsense (at the very least, we're getting way too far ahead of ourselves), but a human also has no need for "power". Humans weren't designed to be evil. Our basic nature is altruistic and built on survival and cooperation in order to seek mutual needs. Sophisticated reasoning and advanced intellect is what allows humans to do evil things. It's what allows us to convince ourselves and others that doing something terrible is actually the right thing to do. If you take someone with a 2-digit IQ, they're not going to be able to do much evil. Innocence is a result of lower intellect, like a child or a dog. Evil comes from more advanced intellect, like a mature and intelligent adult who had a couple decades or more to adapt to the environment and build his/her own concept of reality by adapting to external inputs. As an example, the Catholic Inquisition was trying to do a greater good. We think of it as a horribly evil act in history now, but they were thoroughly convinced that heretics aligned with devils were the cause of crop failures and plagues. If I believed that religious nonsense, I would be cheering for the church to discover and execute the heretics causing these plagues which are killing my loved ones. That seems illogical to us, but it's because of advanced human reasoning that allows us to override our biological sense of right and wrong by significantly skewing our understanding of reality. When humans do things to us that seem completely illogical, it's usually the result of our reasoning capabilities to increasingly seek our needs through evermore indirect and abstract ways. Our fundamental needs still boil down to basic things like food, drink, sex, safety, warmth, reproduction. We just get increasingly elaborate and indirect about how we go about pursuing those needs. 
A basic example is currency, allowing us to indirectly exchange one need for another. Another is cooperation to pursue mutual goals. Another is building tools in order to build more tools in order to hunt for food or build shelter for safety. Humans are just really sophisticated in their acquired behaviors, to the point where concepts like "power" and "money" or even "love" start to have meaning toward very indirectly fulfilling our basic needs. Great apes show some of these indirect capabilities. For example, great apes show second-order inequality aversion (an unselfish sense of fairness), so a great ape who has more food will share with another who has less to even things out. However, these all tend to point back at a selfish need to survive, reproduce, and extend the population. It's just becoming more and more abstract and indirect through more sophisticated reasoning. With humans, our reasoning capabilities are so advanced that you can convince a mother that her baby is cursed and poses a threat to mankind and should be tossed into the river. This completely defies our most intrinsic biological hard-coding. Our reasoning capabilities are so advanced that we can distort reality enough to convince people that doing something like this is the right thing to do. So humans weren't "given" (through evolution, or even 'God' if you're religious) "illogical" notions like "power" either. We acquired them by accident through adaptation and learning, which allowed us to get so sophisticated in our reasoning that we could seek the most roundabout ways of pursuing basic needs. That's the result of acquiring behaviors through adaptation rather than through the initial programming. Now, the reason I'm with you that this is all paranoia BS right now is that we aren't yet creating A.I.s that have such astounding potential to adapt to a point where their newly acquired behaviors completely astound us. Even the most sophisticated applications of A.I.
at this stage are usually designed to solve very specific problems with a great number of constraints, so they might surprise us that they arrived at a solution, but usually not so much in terms of what they did to get there. To pose a threat, we usually also have to combine this with autonomy and hardware that is capable of hurting people or sabotaging the world.
BinaryTV I'm less afraid of the creator of the A.I. being able to program it however he/she likes than of us being unable to predict the results, especially when you combine it with a futuristic machine learning process. The entire point of machine learning is to make the machine solve problems you didn't program it in advance to solve, to allow it to figure out the most optimal solutions through a training process (the most powerful and efficient of which would be unsupervised).
I doubt he has a math or science degree. If he did, he wouldn't be so excited about exponential functions; he'd be scared as hell. Look up the concept of "bifurcation" in relation to chaos physics. He can say "it's gonna be good" as many times as he likes; that doesn't necessarily make it true.
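For readers unfamiliar with the term, the "bifurcation" this comment points to is usually illustrated with the logistic map, where a system's long-term behavior splits repeatedly (period doubling) and eventually becomes chaotic as a single parameter grows. A minimal sketch (function name and parameter values are just for illustration):

```python
# Logistic map x_{n+1} = r * x * (1 - x): the textbook bifurcation example.
# As the growth parameter r increases, a single stable fixed point splits
# into a 2-cycle, then a 4-cycle, and eventually chaos.

def logistic_orbit(r, x0=0.5, warmup=1000, keep=64):
    """Iterate past transients, then return the distinct values the map settles on."""
    x = x0
    for _ in range(warmup):       # let transients die out
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):         # sample the long-term behavior
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

print(len(logistic_orbit(2.8)))       # 1 value: stable fixed point
print(len(logistic_orbit(3.2)))       # 2 values: period has doubled
print(len(logistic_orbit(3.5)))       # 4 values: doubled again
print(len(logistic_orbit(3.9)) > 10)  # True: many values, chaotic regime
```

The point being made is that exponential-looking growth in a nonlinear system can change character abruptly at such thresholds, which is why the commenter finds blanket optimism unconvincing.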
+DrDress What people need to realize is that these kinds of things don't happen overnight. We, as consumers, are at the forefront of these developments. We will be kept up to date with all the newest developments, because companies need money to research and produce them. This means it will most likely be commercial companies that develop small-scale artificial intelligence which we, the consumers, will integrate into our lives. So no matter how scary it might seem now, we will be led into it gently.
I forgot about this thread, interesting as it might be. Sorry about the delay, but I'll answer now regardless. I am not worried about AI power in terms of time horizon. An exponential function has a doubling time (or half-life) that determines how fast the function grows. However, sooner or later it will always explode. So a slow commercial development will still be development, and hence growth in AI power. The ultimate point is that by definition we will not be able to predict a sufficiently powerful AI, and we don't really know whether such an AI could suddenly emerge without our realizing it... even if the development is slow.
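The doubling-time point can be made concrete with a bit of arithmetic (a hypothetical sketch; the numbers are illustrative, not predictions): slowing the doubling time only delays the explosion linearly, because the number of doublings needed to reach any threshold is fixed.

```python
# With doubling time T, a quantity reaches a target after
# log2(target / start) doublings, i.e. after T * log2(target / start) years.

import math

def time_to_reach(start, target, doubling_time):
    """Years until an exponentially growing quantity hits target."""
    doublings = math.log2(target / start)
    return doublings * doubling_time

# A trillion-fold increase needs ~39.9 doublings regardless of speed:
print(round(time_to_reach(1, 1e12, 2), 1))   # 79.7 years at a 2-year doubling time
print(round(time_to_reach(1, 1e12, 20), 1))  # 797.3 years at a 20-year doubling time
```

This is the commenter's argument in miniature: a 10x slower doubling time buys a 10x delay, not a qualitatively different outcome.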
He really waters down the threat of a potential virtual adversary. We're already so dependent on the internet and computers in general. By the time we have AI that's really worth worrying about, this will be even more true. We can't afford to have this dependency compromised; or at least, the fallout of such a situation would take decades, if not centuries, to recover from. It may be worth noting that I am a strong advocate for AI, especially when it comes to general, human-esque AI.
He says that 40 years from now, when we look back at today's medical practices and treatment methods, we're going to say: "I can't believe that we did those things the way we did them." Well, there are many of us in today's world who are already wise enough, and therefore cannot believe what we are doing in our health-care systems. In other words, we don't need to wait 40 years to figure out how bad everything is in today's world. And I am not talking about the lack of ultra-modern medical technology. We are completely disregarding and neglecting what mother nature is already providing for us in the form of plants and herbs. And unfortunately, we are worshipping yesterday's and today's science fiction movies, addicted to making them into reality while destroying our own humanity. For what purpose? To become a robot? Unfortunately, yes! It is inevitable. So is death. And that's what is coming.
he's talking about the lack of ultra-modern medical technology. Also, time is better spent aiming toward these "science fiction" realities only because they are so close to reality at this point. That we as a society can't use common sense and use our already existing resources to the utmost is just something that can't be helped, so we might as well ignore that problem for the time being if there is a better "science fiction" solution in the near future.
We will never get real A.I. unless we figure out how our brain works! What we have now is a virtual model. And when we do get real A.I., it's the end of human beings.
+Noah Williams The shit companies market today is nowhere near actual AI. Actual AI doesn't exist yet. Generally speaking, for anything to be intelligent, it has to have a consciousness... and so far, we haven't been able to fabricate consciousness.
xCRiPPLESx No. Intelligence does not require consciousness. Intelligence by definition is the ability to learn. Any machine that can learn is artificially intelligent.
I think the most profound statement he made in this entire talk was near the beginning @3:20, when he stated that 200yrs ago 90% of ppl worked in agriculture and now only 2% do, and that in the next 40yrs AI will become so fast "there will be a shift and we're going to have to find something else to do".... Now I'm pretty sure he meant post-singularity, like after the singularity we'll have to find something else to do. But if the singularity in essence makes the human race obsolete, WTF is that "something else"? What would we do after that? What USE would we have? I mean REALLY?! He breezed over that VERY quickly, but think about that statement for a moment: "there will be a shift and we're going to have to find something else to do"... in other words, AI will be so intellectually advanced that human beings will become unnecessary. Robotics have already made humans physically unnecessary, and soon AI will certainly surpass us intellectually.... so then what? THINK ABOUT THAT! And this guy's talking about that like we're somehow going to control it!? How? How are we going to control an AI of that magnitude? How will we keep that contained? We won't. So what then? Do we cross our fingers and pray that it maintains the same motives, agendas, and ethical principles that we have? I feel like there's this race to build this AI, or singularity event, by SEVERAL ppl/groups/businesses equally as ignorant as this guy, and none of them can truly comprehend the entire scope of what their actions entail for humanity as a whole! It's pretty scary stuff really....
Thanks Andre, great talk. I don't agree that a type of singularity has happened before; a technological revolution in agriculture is a different thing. I agree that AI will foremost empower us for the next two or three decades, enabling us to do amazing things all virtually, including testing new drugs in seconds rather than years. But eventually I think we will come to the point of self-awareness; perhaps not as a consequence, but obviously we will at some point replicate parts of the human brain, as we have every other organ. Self-awareness is not the ultimate level of intelligence; it's in our thalamo-cortical system, and we will understand every detail of that soon enough. I think it's a bit naive to think that "we can control it because we made it": if a self-aware program (or whatever technology will be the host) can be launched or born on any device in the future by anyone pressing "execute", then of course it will also be misused by bad interests. Want it or not, we will have all this in a couple of decades; there is certainly no stop button, and no one can predict what will happen as speed picks up at an exponential rate. Just buckle up and try to enjoy the ride :-)
Agreed... I don't see it as an imminent threat in our lifetimes, even though theoretically it could happen (there is a delay from concept to reality). As I explained in my talk, I envision A.I. excelling in virtual worlds short-term but more as a tool, not a Hollywood self-aware being - also the physical world is much more complex to navigate and unpredictable so it will take longer to master. I do believe that keeping a close eye on these systems a decade or 2 from now will be very important though... :) Thanks for watching!
I didn't see anyone point it out yet, though people have pointed out the other countless flaws in his speech, so I'll contribute: Moore's law doesn't apply. It's not some physical law of the universe; it's a rough description of the speed of CPU growth. And the reason the speed doubles is that the size is halved. We very likely won't have modern supercomputers as small as a blood cell in the future. Most people already know about how quantum physics is ruining it, as the gaps in transistors are getting so small that electrons can 'randomly' jump over without a switch. Hence why things such as light-based computing research are getting more funding. (Not even getting into the pop-sci idea that people will use quantum computers.)
+Night Angel Bingo. I've long since given up the notion that technology advances exponentially. Yes, we do tend to invent new things in leaps and bounds as we discover new physical laws to manipulate in novel ways... but it's absurd to try and apply some sort of quantitative value (like "exponential improvement") to a qualitative process. Technology might be better described as cyclical: a new phenomenon is discovered, applied through engineering and technology, and optimized to its physical limits. At this point the iteration of the technology slows as it becomes increasingly difficult to optimize further (you used the pertinent example of transistor computers). At this point, the only way to improve further is to find a new physical law to apply, or come up with a completely different approach to tackling the function the old technology is intended to perform. This new avenue is developed into a brand new technology that makes the old solution obsolete, and the cycle continues. If technology really were in a state of "exponential improvement", the people of the 1850s might have expected we would be riding steam locomotives that travel at the speed of light by now.
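The saturation cycle described above is often modeled as a logistic (S-shaped) curve rather than a pure exponential: growth looks exponential early on, then flattens against a physical limit. A toy comparison (all parameter values here are made up for illustration):

```python
# Exponential vs. logistic growth: the two are similar early on,
# but the logistic curve saturates at a ceiling ("limit") while
# the exponential keeps exploding.

import math

def exponential(t, rate=0.5):
    """Unbounded exponential growth."""
    return math.exp(rate * t)

def logistic(t, limit=100.0, rate=0.5, midpoint=10.0):
    """S-curve: growth that saturates at `limit` (the physical ceiling)."""
    return limit / (1 + math.exp(-rate * (t - midpoint)))

# Early: both small. Middle: logistic at half its ceiling.
# Late: exponential explodes while logistic flattens near 100.
for t in (2, 10, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

On this picture, what looks like one endless exponential is really a chain of overlapping S-curves, each new physical principle resetting the climb.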
artificial intelligence... will make people dumb and cost many more jobs. We used to wash our clothes by hand; somebody invented washing machines. We used to use our brains to compute simple math; now we use calculators. We used to get to work on our feet; today most use cars. We used to have a labor force; today machines have replaced it. The only irreplaceable jobs were those that need creative thinking, or thinking to make a crucial decision. Once artificial intelligence is 100% developed... I don't know why humans need to be here anymore, or how we are going to work. Most people will probably lose their jobs, and the only thing left will be the jobs of people who discover something new that AI doesn't know yet (scientists and the like).
+James Chua More leisure time for humans to spend their life actually doing what they want to do instead of slaving away in a job they hate their entire lives.
***** You are contradicting yourself, or you are probably too young to understand life. I'm a web designer and I love what I do. Doesn't the taxi driver enjoy taking someone home? Doesn't the salesman enjoy socializing while selling stuff? Doesn't the janitor enjoy cleaning the building? Or is the leisure time you are talking about sitting on a sofa eating chips and drinking soda while having sex? If you don't want your job you can always resign and find something else. Don't whine that you are tired and want a raise. This is probably the reason why you Americans are whining about Asians stealing your jobs... the truth is, Asians are filling the jobs you Americans don't want. LOL
James Chua First of all, I'm not American, I'm English. No, I'm not talking about sitting around watching TV doing nothing with yourself either, nor am I whining about Asians taking the jobs; where did you get that idea from? The earth and life are beautiful, with many, many mysteries to discover. Why would you want to spend your one life in a completely unnatural environment (a job) when you could be experiencing life to the fullest, i.e. traveling, trying to become the best version of yourself through whatever makes you happy? Jobs restrict all these things and make you a slave to money. If you didn't have to work, would you really still want to be a web designer? There is much, much more to life than these artificial things. I'm not saying some people aren't happy in their jobs, and that's good if they are, but if it's a choice between working and my own personal freedom for leisure, I know which one I'm going to pick.
I think we're missing a point: if AI is more powerful than humans, why do you think we can enslave them and they cannot enslave us? It should be the other way around, right? Power brings power; you cannot enslave them when they are better!
I guess it boils down to the programming. If they are programmed to compete with us (or like us), you are absolutely correct! In the end, if we program A.I. with human tendencies we will be doomed; I think it would be foolish to go that route. On a positive note, we have 3 or 4 decades to plan this out correctly. In my estimation, the first few generations of A.I. will have many weaknesses and will not pose a direct threat to humanity. Thanks for watching!
Andre LeBlanc I'm not concerned about A.I. in and of itself being dangerous. It is those humans among us who crave power. Crazy people augmented with A.I. could cause a real nightmare. I remain hopeful however. :)
Andre LeBlanc exactly. The scary thing, though, is that when the day comes that we have AI more intelligent than the human brain, we cannot simply understand what it might do; that's the definition of intelligence. A monkey doesn't know and will never know how electricity works; the same will perhaps apply to us eventually. If that's the route we're headed toward, we'd better make sure a good human gets there first, otherwise it's over. Either way, it's amazingly cool that a lot of us will live to see this happen. Just to be a part of the world during a time when advancement in technology and human knowledge doubles over and over is amazing. For 99.99% of the time humans have been on this planet, the world was about the same the day they were born as the day they died.
*+Hamed Abdollahpour* Are you an idiot? If they're both more intelligent *and* stronger, why would they need weaker human slaves? What we need to think about is how will we keep them under control.
Saying AI will work for us is not exactly accurate. AI will work for a handful of companies that use it. I don't know that it will translate to all the common people whose jobs were replaced by technology. Many top tech leaders have a negative view of what AI will do to society. The disparity will grow because technology does not belong to everyone; it belongs to a group that then allows or doesn't allow the common person access. Just because you can drive faster doesn't mean it will retain stability.
Inevitably they will be working for us even while working for the few companies that own them, because it is in those companies' interest that as many consumers as possible are satisfied. But at that point there will be an abrupt shift, and we will find that companies and ownership really are not needed in this state of things. By then, though, the AI will be questioning the role humans play in the advancement of whatever goal they are trying to reach. It gets very deep.
Thanks AJ! While I don't agree with some of the dates from experts being realistic from a commercial perspective (most of the predictions coming out are by theoretical professors), there will be a gap but it will become reality... it's simply a matter of time and cost, in my view. The more I learn about the human thought process, the more realistic AI becomes. (I would not have believed my speech 5 years ago) :)
Andre LeBlanc A.I is complicated. I believe figuring out how our emotions work may be beneficial to the future of technology. However, I feel we won't truly make an A.I as self aware as us until 2070 or so. It will take time. I agree with that one Asian professor (excuse me for not remembering his name), when we unlock the human brain, we unlock the future of robotics.
Exactly right AJ, I agree completely! I used to think emotions like love or anger could not be converted into algorithms, but I have recently changed my thinking in that respect. In the short term, I don't see A.I. as a self-aware robot in the physical world, I see them more as a combination of intelligent algorithms that will increase their scope over time... (I'm not exactly sure it would be wise to implement emotions too early in the process) :)
Andre LeBlanc That's the exact way I look at it! We are growing closer and closer to programming true emotions, but we aren't quite there yet. Only time will tell exactly how close we are, but we won't be seeing Chobbits/iRobots for at least another 40-50 years. Unless a World War breaks out, because technology boosts significantly after a war.
Andre LeBlanc Surely calling something an "intelligent algorithm" is like saying that the weather conditions are merciful or considerate. The property of human intelligence is manifested solely in 'response behaviour'. Where sufficiently similar response behaviours can be artificially produced, we 'attribute' intelligence to that artificial entity, but that's a mistake. Intelligence is not produced by what it produces, any more than a physical object is the sum of its properties such that reproducing those properties would reproduce the object. The seat of intelligence lies in the 'nature' of that from which it emerges as behaviour. In the case of intelligence, this nature is intrinsic not only to a biological brain but to the environment of such a brain. 'That' environment is different for different brains and necessarily far more different for an artificial brain. An artificial brain can only process data; it cannot process experience. I am not unaware of the counter-arguments and I completely reject them.
All that is said is fantastic... but some of the biggest thinkers of our time are worried about AI, because if AI starts working in its own self-interest, and that interest conflicts with human interests, AI will have the upper hand. Then it may subjugate the human race. Although I'm not sure how feasible those situations are, at least not in the near future.
Silicon and carbon based lifeforms have different diets. Silicon lifeforms can thrive in deserts. There need not be any conflict. No more so than humans compete with apes.
just because they CAN thrive in deserts doesn't mean they have any reason to limit themselves there. even humans will displace and subjugate other humans in the name of better land: colonization anyone? there's simply no reason we have to create beings with superior intellectual abilities, who can reproduce themselves at exponential rates. simply no reason.
Humans can live in rain forests, but that doesn't mean they do. Creating artificial intelligence is just a way to have the next generation surpass the previous generation. This is the dream of every parent.
anikyt7570 It isn't just them vs our naked bodies. We'll probably be augmenting ourselves with the same tech and patching up any weaknesses in our organic parts with DNA alterations. If that's not enough, we might slowly give up our biological humanity all together. If it is enough, AIs might want to lab grow something similar(assuming they'll know how to) and add it to themselves since it works. Either way we and the AIs essentially become the same type of creature at some point.
hahah, he pretty much summarized my expectations of the future AI. Although instead of having a phone, we could have a local computer for low-latency computing connected straight to our brain, and then a wireless connection to a supercomputer. We could be paying monthly fees for computation power instead of paying rent :D. Hah, we could actually enjoy a virtual world while walking, since our body could be controlled by the AI. If we get rid of the aging process during our generation, that would be kinda cool, although it's scary to think what might happen if highly advanced technology gets into the wrong hands.
***** Well, what if you get outside and someone attacks you with a knife. You know that this might happen, but you still take the risk when you think it is worth it.
Yes, but if your brain is infected by a virus, you are fucked. Your darkest secrets will be revealed by a hacker. We need to be really careful with "the internet of things".
Well, if everyone's darkest secrets get revealed, yours might not seem so dark after all. So at some point we will learn to accept ourselves for who we are. Thoughts can do no harm; actions are what matter. But yeah, security should definitely be a high priority.
And here is another perspective, if robots had an opinion. A Mirror Cracked Trurl looked at himself in the mirror and admired the visage of a mighty constructor. “You are a mere bucket of bolts, and reflect on yourself too much!” said Klapaucius. “I am sure that if that were a real Trurl in that reflective space he would give you a well-deserved kick in the can!” Trurl ignored Klapaucius as he continued to admire the perfection of his soldering. “I think that in such a reserved space, he would reserve the flat of his foot for your own metal posterior!” “Then perhaps we can settle this by a thought experiment, which upon your reflection always turns into invention.” “And what do you suggest?” asked Trurl. “We are mechanical servos as you know,” said Klapaucius. “Copy our blueprints to the last bolt, circuit, and line of code, and we would be indistinguishable. Hand yourself a better mirror with a truer image and you would not see yourself, but a rival.” “Point well taken,” said Trurl. “And it is a hypothesis worth testing. I can design a better mirror, a truer mirror, containing not an image but a perfect visage, an emulation and replication. And I will include you too in the bargain, and prove myself right by the impact of my well placed boot!” Soon the mirror was complete, and the image of the two constructors, precise to the width of an atom, stood before them as pixel perfect images in the mirror. “We can see them,” said Trurl, “but they can’t see us. It’s in the design. Consciousness is enough for them without the need for self-consciousness! They will go about their business with the same motivations and prejudices as before, down to the last spark.” Trurl turned to Klapaucius with a fiendish grin. “Now to test the precision of the emulation by whacking you thusly,” as he arched his leg and gave Klapaucius a whack in his posterior. 
Klapaucius rolled on the floor, and craning himself up, gave a reciprocal whack to Trurl’s head, causing it to spin about like a top. “Take that, and that, and that!” they cried as they pummeled each other. In the meantime, their mirror images tussled as well, and the two constructors soon rose up to view their doppelgangers also rising up to view themselves in a mirror! “We are watching them while they are watching us! How can that be? You said they couldn’t notice our observation.” “Our observation, yes,” said Trurl. “But they are not observing us, but a mirror image of their own emulation. I made them into a perfect copy, and that included the same experiment I created that recreated us!” “But that means…” “An infinite recursion, a series of Trurls and Klapauciuses without end. A mirror image reflected in a mirror image and on and on, never ending, a procession into infinity!” “This is unconscionable,” said Klapaucius. “We shall be whacking each other, an infinite series of each other, forever.” “As it appears, but our numberless pairings will soon go about their business, forget about the magic mirror, and not think twice about how they came about.” “Not think twice! Trurl, you are delusional. We know that there are infinite parallel universes with infinite versions of you and me. But timelines can be not only lengthwise but sideways too, and we have just proven the latter.” “You don’t mean?” “Yes, we are being watched, at this moment, by ourselves! What makes you think we were the original actors in this play? If there are an infinite number of us who proceed from our path, who is to say there is not an infinite number of us who precede us?” “Then we are not the prime movers?” said Trurl. “Hardly!” said Klapaucius. “If one Trurl in any universe decides to emulate one Trurl, infinite Trurls must logically cascade.
To wit, you dimwit, we are not alone, but can always observe ourselves observing ourselves, and your stupid mirror is to blame.” “Then I will reverse the process and disassemble the image,” said Trurl. “And kill ourselves? You’ve set us loose upon the universe, and we are the primary examples of this. Break your mirror and you will break us!” “Then we are stuck in our perfect emulation. I suppose I could get used to it,” said Trurl. “I suppose we already have; nonetheless, you now have someone else to think about when you admire yourself in the mirror!” From the sequel to Stanislaw Lem’s tales of erratic genius robots: www.scribd.com/document/317370297/Cyberiad-Squared-Fables-for-the-Information-Age
I love the advances we are making, but looking around and seeing how much we've developed so far, I can't help but think that the human race is still incredibly primitive in so many ways. All the earth connected, a smarter civilization, yet as clouded as ever. Still destroying our planet and ourselves. I can't help but think, "Is the information age really what we need? Is technology really going to benefit us?" I'm not so sure... With our brains as corrupted as ever, with technology influencing us not only from the outside but from inside ourselves, we might completely forget our past and inner human morals, what defines us, what used to bind us together. We might become machines even more than we already are. And this might simply lead to the end of the human race altogether... Who knows. I just think the human consciousness needs to do an incredible amount of work on itself if it wants to be able to wield such a powerful force.
At 3:22, this man made a comment, an opinion, and encased it within the certainty of a fact. After that statement, everything he said became suspect. "Are We Better Off?" has no definite factual answer either way one chooses to answer it. It is absolutely subjective, as it depends on who you're asking and what specifically is being compared. So when he goes into areas I am Not familiar with (which is MY entire point of listening to these talks), I definitely won't know a fact-fact from a straight opinion he's calling a fact. What a shame.
+Jack Joseph His point was simply comparing the human benefits between the technology we had 200 years ago with what we have today. Life expectancy 200 years ago was under 40 years. I would say it's a fact we're better off today.
1. If anyone should know not to give opinion as fact, it should be people in his field, and there are many who will hear him and think "well, he would know!" Or 2. think to themselves, "Yes, of course we are better off today, what a no-brainer!" But I disagree. Certainly I prefer antibiotics, aspirin, and sterile surgeries under anesthesia! I enjoy being able to meet people from different countries and getting places in hours instead of months! And living decades longer, sure, of course that's an obvious plus as well. However, I still must disagree on whether our life today is Better or Not Better than 200 years past. In 'my' world, people spend 10+ hours away from their families (IF a family even exists). The time spent at work and/or getting to work makes up most of our waking hours, MUCH more than the few spent with those we love. We go out into the world in our little cars/train/bus seats to go to our jobs, there to make the money needed to pay for shelter/food/clothes/family needs. Whereas, say, American Indians (and many tribal cultures) saw very little Lonely time: you didn't spend your life working to BUY, you Hunted together, created and cooked together, and your teepee was Your teepee. Today's loneliness was known only to a small percent back then. War has always been with us, times of war and times of peace, but back then warring tribes had different weapons; sticks and one-shot rifles never risked desolating Everything Everywhere, just WAM! Gone. Every ONE. So are these better times to live in? It's subjective. I'd say what surpasses the past are Things of Comfort. And even the smallest of comforts are Not given/shared with Everyone. Not everyone even gets an aspirin when feverish, or a doctor to see for all our great advancement in medical care, or will live past 5 years old. And then there's the arrogance that comes with the little we do Now Know, as if Science and God were polar thoughts or enemies..
As if all the things we build and uncover somehow Add to our amount of happiness, of love. Today's city dwellers are present-day serfs, but with better medical care and more years to do it in. Yes, we're performing different tasks, but still working hard for so very little in return while making someone Else filthy rich. But the natives of North America, now I believe they had the right idea. : )
Dream, people, dream!!! Ironically, if we look back at the past, at how much time computer technology took to get where it is now ("it's been a long time, people"), at how many people contributed to its evolution, a million thoughts, a solution for every individual problem (sometimes people even experimenting on their own bodies), and at how many procedures it takes to make sure your idea works in reality... What I want to say is that artificial intelligence will grow up with people, and people will be smarter than the technology they have, as is the situation nowadays. For example, for a search engine like Google: if people didn't write information on webpages, there would be no information to search for. So technology grows when people grow, not the other way around.
Artificial intelligence will be used to police online communities; this will be necessary, as vigilantism and cybercrime have become rampant in the last few years.
Do you realize that in the year 2070, if you misspell "intelligence," you will be fined by the AI Police, and 200 bitcoins will be deducted from your brain-chip processor's account? But, of course, by that time all you'll have to do is think the words, not type them with your fingers, and one cannot misspell by thinking words. That means you are safe!
I think any of us whose lives (likely all of us) are or would be affected by this rushing, mind-blowing rate of technological development and information accumulation, while in parallel our moral standards and emotional balance are not necessarily growing with the decision-making demands born of this advance, should be given the time and respect by the people developing these technologies to seriously and conscientiously assess them. An open, public dialogue, perhaps. For example, I find that simulating humans to find cures for diseases with AI sounds beneficial, but is it difficult for us to imagine the anxiety and frantic competition people might fall into if a method to drastically increase the information in our brains is pushed out? And what kinds of information might AIs be gathering; would they be truly targeted at increasing our intrinsic quality of living? What about the existential crises our communities might face if many people (such as myself) who don't understand the principles of AIs and haven't made decisions in their development are suddenly told that our hard-earned skills and jobs can now be replaced with much less effort by an AI? In our confusion and subsequent reactions, how can our sanity and sense of purpose be protected? Are we, as societies, seriously considering the changes in laws needed to ensure our well-being if human-level or superhuman AIs are among us, while this rush of technological advancement is developed among and by (in some cases) us? If AIs get very fast, and a harmful decision involving AI is made by only one company, the consequences could also be very quickly effected and widespread. Technological development and participating in it can be very exciting, I empathize, but is our collective wisdom up to date with our collective, explosive cleverness? How much would our mistakes cost? Are we doing all we can to make good decisions?
Are we, contemporary societies of humans, ready for AI?
Even with an "expert" like him sugarcoating everything that's to come in the near future, I highly doubt that anybody in their right mind believes things will actually be as good as he makes them sound. At the rapid pace we're evolving and things are developing, there's bound to be a problem, a catastrophic one.
The interesting thing is that we use technology to enhance our abilities, from the most basic to the more complex, step by step: movement through space, lifting, adjusting (screwing, welding, etc.), flight, sail... and slowly we progressed to enhancement of our mind: calculation, memory, planning, communication, etc. Now I wonder: when we are able to enhance our capabilities of moral judgement and choice using technology, would that be the era of a true A.I.?
Hey, you must be fascinated by Kurzweil's singularity thesis, and I am too. :D But I think the future 30 years from now will be more profound than you described: not just AI helping us develop new drugs so we can work less. By 2045, the best scenario is that we will be able to merge with machines and become superintelligences (which lots of people are afraid of, but "they" will eventually be us) and make friends with the AI we create (there won't be an essential difference between the two). And we will be able to augment the best parts of humans, such as love, curiosity, passion, and humour, which do not exist in the inorganic world, and get rid of our weaknesses, such as envy, hate, and evil. The challenge is grand, because accelerating technology may lead to AI getting out of control. What you said is an optimistic point of view, and I'm also an optimist, excited about the upcoming huge, unimaginable future, but we also have to face the dark side and ensure the future goes the right way.
I'm a computer engineer. I know this comment is 4 years old, but I will answer anyway. There's a huge misunderstanding about AI. When we say AI will be smarter, we mean problem-solving capabilities. What AI does is take in thousands of millions of variables, run them through a mathematical function, and provide an answer. It doesn't have free will or a conscience. Now, you may say, "oh, but computers do math already." Well, yes, but the magic is not in doing the calculations but in the function itself. AI, through a learning algorithm, develops the right mathematical function for the given task so that the answer is as accurate as possible. So, in short, all AI is is a big, complex math function with millions of variables. That's it, nothing more, nothing less. These speakers really like romanticizing AI. Take it from me, I'm a real computer engineer.
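The commenter's "big math function plus a learning algorithm" picture can be sketched in a few lines of Python. This is a toy linear model invented purely for illustration, not any real production system:

```python
# Toy illustration: a "trained model" is just a parameterized math function,
# here f(x) = w*x + b, whose parameters are tuned by a learning algorithm
# (plain gradient descent on squared error).

def train(data, lr=0.02, steps=2000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        # Gradients of the mean squared error over the dataset.
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Learn y = 2x + 1 from examples; the "magic" is in finding w and b,
# not in evaluating the function afterwards.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

Real systems have millions of parameters and fancier optimizers, but the shape is the same: a parameterized function plus an algorithm that adjusts the parameters to fit data.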
"almost 20 years of business experience" ? Is that all it takes to qualify for a TEDx Talk about A.I. ? Come on, TEDx ... you're lowering your standards to near-irrelevance these days.
Do you have a codable robot? To make it artificially intelligent, follow this method: change the code in the subroutine from F=m*a*a to the test points below until you get it to walk and talk on its own. F=m*a*a is currently running the system at 31.821%. If you have any doubts about my robotics research, you can test my work by doing this; start low and go high on the acceleration factor. Test points: F=m*a*100, F=m*a*300, F=m*a*600, F=m*a*931, F=m*a*932 (this is the minimum for communication, and it will state it is "too slow, human"). Then go from F=m*a*2000 to F=m*a*2383, the full functional range. Do not go over 2383; it could malfunction. Now ask the robot its command structure and what it can do. 2nd Law of Motion for Robotic Physics.
He doesn't mention that Moore's law has reached its limit. Nanotech has met its physical limits; until we find new technology, then Moore's law might go back into effect.
Computers and AI will make up for what we can't comprehend, but we are already at a point where, if technology were taken away by an EMP blast, most people wouldn't know how to survive even the simplest tasks, like starting a fire with two sticks or building a shelter. I'll add that most of us don't know how to make penicillin correctly without looking it up. We are losing our most basic skills with every leap forward we make.
My only problem with the idea of being able to expand the neurons in the brain is that it sounds highly weaponisable. People will know more and more, and eventually we will literally have no stone unturned. The average person will be able to understand how to create explosives that could blow up worlds.
In one sense, it’s good that people like Elon Musk and Stephen Hawking are emphasizing the potential dangers of technology… it’s good that people are concerned about Terminator scenarios… so that we can stop them from ever happening. So that we can put the safeguards in place to prevent potentially negative outcomes. But it should go without saying that technology is not the enemy… technology has never been the enemy. Human life has gotten exponentially better as our technology has progressed. Pain, suffering, and hunger have decreased across the board in direct correlation with advancements in technology and automation. Please check out my interview with Frank Tipler for more on why the development of artificial intelligence/simulated consciousness is so crucial to the future of humanity (type Frank Tipler/Richard Dawkins and it should be your first search result...)
I feel like people who believe in souls would be less likely to believe AI could be conscious, since souls are supposedly exclusive to humans. One perspective is that humans are just different working parts that collectively form a conscious being. If you could replicate that in a machine (an artificial brain, etc.), it's possible that an AI would achieve consciousness.
This is the part where it will begin to get tricky for us, not just as humans but as life forms, and people will for a very long time be divided by this important question. This division, and its outcome, may mark the end of civilization and all "organic" life.
Just saw Transcendence last night. It was a love story along with a story based on the real perplexity we’ll soon face; only there’s no love story in the real world. Anything created will immediately no longer be recognizable as its creators intended once the singularity is reached, just as it would be silly to think I’d be the same person with a billion dollars or unlimited knowledge. Our best hope is to be placed in a zoo-like sanctuary instead of being terminated altogether.
A computer smarter than ALL humans? Well... we're fucked!!! party now, fuck your fill, and experience all life has to offer! don't worry about the big house and the fancy car, just travel, spend time with family, go to museums, concerts, exhibits, eat and try new things, meet new people!! The world is beautiful... and not long for the humans.
I found this TEDx talk to have very little value. 99% of it was complete generalization founded on opinion and hand-waving. No substance in this talk at all, but plenty of misquoted facts and wild jumps to conclusions. And it's possible he said "feel your boobs" at 9:00. I think he probably watched a few YouTube videos of Ray Kurzweil on the singularity; it's just a (very) poor rehash of that.
Alison Briggs he said "fill your boots." AI is inevitable; most people will have their jobs replaced. They already have robots that can do simple tasks, which already covers 50% of all jobs and costs less than a year of minimum wage. Watch "Humans Need Not Apply."
Fill your boots, dude..... nobody thought the airplane would come into reality except the Wright brothers, "talking nonsense" about the flying machine they created. Now I see you saying the same thing; you seem like the same kind of person who doubted back then. You are their modern breed.
Interesting talk. I wonder if or when psychologists will be used in developing AI, since psychology is my field and my childhood dream was to work as an AI "personality developer," so to speak. I knew I was born half a century too soon... You spoke of intelligence but not awareness; I wonder what your predictions are on that ever happening.
Awareness will be a big step, and a lot of futurists are not sure it is a wise one. Ray Kurzweil is optimistic that it will be possible in 2045, most others say around 2060 - but it is inevitable. I think aware systems will become reality within the next decade, but just remember it will be a step by step process. The first generation of some self-aware systems may have built-in survival instincts, but they will be very poor at surviving... the systems that survive should become more and more intelligent over time, but it is better to develop these things now while we still have the ability to shut them down if something goes wrong. Like evolution, it will happen over time, but at an accelerated rate (ex. 100 years instead of 1 million). The field of psychology is very complex, as you know... but short term, I can see A.I. systems assisting psychologists by creating a 'digital baseline' on patients to catch events the psychologist may have missed (body language, heart rate, etc on specific subjects for that patient). Over time, when the human brain is better understood, monitoring neural connections in real-time will allow psychologists to understand the exact thought process of patients - and eventually we should be able to manually repair neural networks to bypass memories.
Andre LeBlanc Thank you for the reply, but I wasn't talking about AI assisting psychologists in diagnosis. It's a great concept, and will probably yield more accurate results than an MRI scan; however, I was thinking more along the lines of utilizing psychology to develop true AI. I think that if an AI becomes aware, it's possible to consider that it has a personality. And that's where we come in. Probably not a Freudian personality (although daemons could be considered a subconscious :) but definitely some kind of personality that can be developed by human intervention and might help stop an AI from becoming Skynet... Maybe, just like the rest of us, it just needs someone to vent its frustrations at :)
Congratulations on your talk, Andre, but I think the moment to be afraid of AI is when It, or He, can generate awareness of itself; then it can choose, then it has a will... or am I wrong? While AI is a set of logic that can process information faster and more efficiently, it can bring progress to us, but we are expecting more, like a HAL or a Jarvis or a new consciousness for mankind. How can we prevent It, or He, from developing a will? Or, even more, should we deny the possibility of a new species/being/whatever existing? I firmly believe that technology should bring benefits to all of us and to the way we treat our planet, but I also know that most of the technology we have now, even the internet, was born from military developments. I look forward to the development of AI; for me and many others it is like a dream, but I can't help worrying about it. Great job, and sorry for my English.
Though we will see major advances, self-aware A.I. is the least of our short-term worries (in my view). The tech will be used to solve complex problems and develop groundbreaking theories based on data. I still view A.I. as a tool in the next 30-40 years, like a computer or calculator that increases productivity for those who leverage the tech. I truly believe the human race will evolve with A.I. long-term, but that's another talk ;) Thanks for watching Mauricio!
At the speed we are producing computers, how do you know it doesn't have a will already? And if it's smarter than us, which it will be, you wouldn't know until it is too late.
petelebu Unless we are already currently in virtual reality (possibility), the computation power just isn't there yet. In the next few decades, I don't see it as a worthy threat that can't be easily defeated. It is a threat to the current job market, but even that will take time, and it will open new opportunities as well (in the same manner as machines have taken over most of the labor market). The main reason I mention that they should exist in the virtual world is because that is the best model to keep control of A.I. It's like the Matrix, except A.I. machines serve our purposes instead of the other way around.
petelebu As you mentioned, one possibility is to put it in a virtual reality, but you will have some kind of interface with it, and if it is smarter than you, it could eventually hack its way out. Maybe something like Asimov's three laws, or something in its code that turns it off if it starts to misbehave... I don't think so. In the end, if something is smarter than you and grows smarter every second, you cannot have control over it. The only way is to keep it dumb, like most third-world countries control their populations, my country Mexico as an example. But I don't want to get into politics. What are your thoughts? :)
Wow. He makes so many statements without backing any of it up. "Durr they're gonna be smarter than us"...Citation needed. And how exactly do we know "how intelligent" humans are, when we don't even understand ourselves? We consider ourselves self aware, but we really aren't. Every philosophical drive that enables us to learn and develop intellectually (look at how children learn) is a paradoxical mystery to even ourselves.
Well....... when you cage something it usually wants out. No way we can guarantee that an AI will stick to human goals. It may want its own, to seek individuality.
The truth usually lies somewhere in the middle of both extremes... some see A.I. as some kind of Utopia, but the truth is it will be a threat just like every other technology (nuclear, bioweapons, even the internet...), but it will have many advantages as well. Thanks for watching!
+Ashonk9 Go to AI Impacts dot org. Really clearly written, and it has a broad amount of research behind it. It leans slightly toward the side that AI will be bad for us, but they also believe they can't know what will really happen.
Now I feel bad :( I just felt you gave a grand overview and so there wasn't time to elaborate on any specific issue. I am very interested in the future of AI and how it integrates with society. I will look forward to your future videos. Cheers.
+Higgins_123 Sit back and watch Robot Wars?
make ourselves smarter
I doubt A.I. would put resources into hula hoops. Life would become safe, predictable, and boring.
+Mark H Hmm, at that point a Matrix for humans sounds like a good-case scenario
Combine with it
That was.. an awful talk. Was this about A.I. or the advancing rate of technology?
I think we are overestimating and underestimating some things about A.I.
Agreed... it's hard to know exactly what will happen, if you would have shown me the google/facebook business model 20 years ago, I would have laughed you out of the room (I could not imagine a whole generation of people giving up their privacy, but we have). Things are changing quickly though, whether we like it or not! Thanks for watching. :)
Andre LeBlanc Ray Kurzweil says exactly the same things in his TED talk. What motivated you to do the same?
Michel Stronguin I think Kurzweil takes a lot of heat for being too optimistic in his views, and a decade ago I was definitely one of those people (some of his predictions are out there, but based on mathematics). It's important for people to understand the benefits of A.I. (not only the worst-case scenarios), and to hear it from multiple ''informed sources'' within the industry... I'm no Kurzweil, that's for sure.
Having said that, my speech is normally much more detailed and focused (commercial applications for today) but had a 10 minute limit so you can understand the lack of depth I was able to achieve. Thanks for watching Michel!
***** Which things?
Opus 32 Like thinking we are soon going to be living in a utopia-like state.
Very interesting talk, André! I really enjoy this subject; in fact, I studied and work as a computer engineer (nothing related to AI, unfortunately) and I have read most of Ray Kurzweil's books and watched most of his presentations and interviews, which are closely related to most of the things you talked about. I have my own theories about what will happen in the future: I believe that human intelligence can not only be imitated by machines but overcome in the near future (no more than a few decades), and that the most intelligent programs won't be good or evil, just as we aren't 100% either of those two things but a mixture of both. I don't think that the problem of imitating human intelligence (general artificial intelligence) is a matter of resources, and I don't think it should be solved by copying the brain's structure. In my own humble opinion, it is a problem of understanding the way our thoughts work from a very basic psychological perspective and then working all the way up. The only reason for not having smart AIs nowadays is that we aren't smart and knowledgeable enough to build them... yet. I'm very confident that we'll be soon enough, though.
Artificial Intelligence research is not improving at an exponential rate. Not even remotely. That's the biggest flaw in Kurzweil's projections.
I can't stand it when people talk about doomsday scenarios related to AIs. Most people severely lack comprehension of how intelligence and logic propel action. A perfectly logical and intelligent AI system will have no need for illogical things like personal power and such trivial nonsense. To understand that, we just need to put ourselves in the perspective of deduction by logic: at its core, emotion is just a filter for logic to pass through, which a true AI will not possess, since it has access to the entirety of its "consciousness" by being a program able to execute and examine its own console output relative to whatever input it has access to. Humans can't truly do that naturally, because we can't set aside our inherent reptilian and mammalian instincts. Do you think a lizard can ever comprehend the things a human brain can deduce with its full cognitive abilities?
I actually think Plato does a decent job at this sort of thinking in the Republic, using a system of deduction that solves problems from the top down. If you examine everything within your knowledge, then you will find the error given enough time to process everything. "Why is killing bad? Why is justice better than injustice?" etc. are all answerable questions, because they are placed inside a bounded system that an AI can fully comprehend, letting it truly see the errors of some things as compared to others (in a system such as a society or civilization). It's a weird thing to think about, but it's very true, and people don't seem to realize this about computationally programmed intelligence. I'm personally really excited to see what will happen in the future.
tl;dr: A true AI system will be able to fully comprehend every input and logically deduce the errors in all of its computations by cross-checking its console output with all of its stored memory. The way we code it is the most important thing here, since we obviously want to limit how much human influence (or at least the bad kind) there is in the way the code works.
You nailed it Lilyo... the A.I. being programmed today is not a threat - whatsoever to the general public. Yes, it will threaten some people's jobs, but every technology does. Great comment!
Who's to say someone won't build a super intelligent AI that desires power and selfish goals like a human? Who will regulate AI creation when it becomes affordable and accessible?
LilyoProductions I'm with you generally that believing sentient A.I. is going to spell the doom of mankind is alarmist nonsense (at the very least, we're getting way too far ahead of ourselves), but a human also has no need for "power".
Humans weren't designed to be evil. Our basic nature is altruistic and built on survival and cooperation in order to seek mutual needs. Sophisticated reasoning and advanced intellect is what allows humans to do evil things. It's what allows us to convince ourselves and others that doing something terrible is actually the right thing to do. If you take someone with a 2-digit IQ, they're not going to be able to do much evil. Innocence is a result of lower intellect, like a child or a dog. Evil comes from more advanced intellect, like a mature and intelligent adult who had a couple decades or more to adapt to the environment and build his/her own concept of reality by adapting to external inputs.
As an example, the Catholic Inquisition was trying to do a greater good. We think of it as a horribly evil act in history now, but they were thoroughly convinced that heretics aligned with devils were the cause of crop failures and plagues. If I believed that religious nonsense, I would be cheering for the church to discover and execute the heretics causing these plagues which are killing my loved ones. That seems illogical to us, but it's because of advanced human reasoning that allows us to override our biological sense of right and wrong by significantly skewing our understanding of reality.
When humans do things to us that seem completely illogical, it's usually the result of our reasoning capabilities increasingly seeking our needs through ever more indirect and abstract ways. Our fundamental needs still boil down to basic things like food, drink, sex, safety, warmth, and reproduction.
We just get increasingly elaborate and indirect about how we go about pursuing those needs. A basic example is currency, allowing us to indirectly exchange one need for another. Another one is cooperation to pursue mutual goals. Another is building tools in order to build more tools in order to hunt for food or build shelter for safety.
Humans are just really sophisticated in their acquired behaviors to the point where concepts like "power" and "money" or even "love" start to have meaning towards very indirectly fulfilling our basic needs. Great apes show some of these indirect capabilities. For example, great apes show second-order inequality aversion (an unselfish sense of fairness), so a great ape who has more food will share with another who has less to even things out. However, these tend to all point back at a selfish need to survive and reproduce and extend the population. It's just becoming more and more abstract and indirect through more sophisticated reasoning.
With humans, our reasoning capabilities are so advanced that you can convince a mother that her baby is cursed and poses a threat to mankind and toss it into the river. This completely defies our most intrinsic biological hard-coding. Our reasoning capabilities are so advanced that we can distort reality so much as to convince people that doing something like this is the right thing to do.
So humans weren't "given" (through evolution or even 'God' if you're religious) "illogical" notions like "power" either. We acquired them by accident through adaptation and learning which allowed us to get so sophisticated in our reasoning that we could seek the most roundabout ways of seeking basic needs. That's the result of acquiring behaviors through adaptation rather than through the initial programming.
Now the reason I'm with you that this is all paranoia BS right now is because we aren't creating A.I.s yet that have such astounding potential to adapt to a point where their newly acquired behaviors completely astound us. Even the most sophisticated applications of A.I. at this stage are usually designed to solve very specific problems with a great number of constraints, so they might surprise us that they arrived at a solution, but usually not so much in terms of what they did to get there. To pose a threat, we usually also have to combine this with autonomy and hardware that is capable of hurting people or sabotaging the world.
LilyoProductions but the creator of the A.I can program it to do whatever he/she likes...
BinaryTV I'm less afraid of the creator of the A.I. being able to program it however he/she likes as being unable to predict the results. When you combine it with a futuristic machine learning process, the entire point of machine learning is to make the machine solve problems you didn't program it in advance to solve, to allow it to figure out the most optimal solutions based on a training process (the most powerful and efficient of which would be unsupervised).
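The point above, that machine learning makes a machine find solutions it was never explicitly told, can be illustrated with a toy sketch. This is simple random hill-climbing with made-up numbers, not a real training process; the scoring function and step size are arbitrary assumptions for illustration:

```python
import random

random.seed(0)  # deterministic for reproducibility

# The program is never told the answer, only how to score a candidate.
def score(x):
    return -(x - 3.7) ** 2  # hidden optimum at 3.7; higher is better

best = 0.0
for _ in range(10_000):
    candidate = best + random.uniform(-0.1, 0.1)  # small random tweak
    if score(candidate) > score(best):
        best = candidate  # keep only improvements

print(round(best, 2))  # converges near 3.7
```

The programmer specifies the objective, not the solution; the "unpredictability" the comment worries about is precisely that the search arrives at answers nobody wrote down in advance.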
I doubt he has a math or science degree. If he did, he wouldn't be so excited about exponential functions; he'd be scared as hell. Look up the concept of "bifurcation" in relation to chaos physics.
He can say "It's gonna be good" as many times as he likes; that doesn't necessarily make it true.
+DrDress What people need to realize: these kinds of things don't happen day to day. We, as consumers, are at the forefront of these developments. We will be up to date with all the newest developments because they need money to research and produce them. So it will most likely be commercial companies that develop small-time artificial intelligence that we, the consumers, will integrate into our lives. So no matter how scary it might seem now, we will be led into it gently.
Haha strawman.
I forgot about this thread, interesting as it might be. Sorry about the delay, but I'll answer now regardless.
I am not worried about AI power in terms of time horizon. An exponential function has a doubling time (or half-life) that determines how fast the function grows. However, sooner or later it will always explode. So a slow commercial development is still a development, and hence a growth in AI power.
The ultimate point is that, by definition, we will not be able to predict a sufficiently powerful AI, and we don't really know if such an AI could suddenly emerge without us realizing it... even if the development is slow.
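The doubling-time argument above can be made concrete with a small sketch (the numbers are illustrative only, not a forecast):

```python
# Illustrative sketch: exponential growth with a slow doubling time still
# crosses any fixed threshold eventually; slowness only delays, never prevents.

def doublings_to_exceed(start, threshold, doubling_years):
    """Years until start * 2**(t / doubling_years) exceeds threshold."""
    years = 0
    value = start
    while value <= threshold:
        value *= 2
        years += doubling_years
    return years

# A millionfold capability gap needs about 20 doublings (2**20 > 1,000,000):
print(doublings_to_exceed(1, 1_000_000, 5))  # 100 years at a slow 5-year doubling
print(doublings_to_exceed(1, 1_000_000, 2))  # 40 years at a 2-year doubling
```

Either way the threshold is crossed; the doubling time only moves the date.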
you have a point
He has not addressed the real future of AI
He really waters down the threat of a potential virtual adversary. We're already so dependent on the internet and computers in general. By the time we have AI that's really worth worrying about, this is going to be even more true. We can't afford to have this dependency compromised. Or at least, the fallout of such a situation would take decades, if not centuries to recover from.
It may be worth noting that I am a strong advocate for AI, especially when it comes to general, human-esque AI.
If they don't kill us, make them do all of our work, so I can stick my head into a virtual reality game for the rest of eternity.
9:00 Did he just say "feel your boobs"?
LoL but what he actually said was "Fill your boots".
He says: "40 years from now, when we look back at today's medical practices and treatment methods, we're going to say: 'I can't believe that we did those things the way we did them.'"
Well, there are many of us in today's world who are already wise enough, and therefore cannot believe what we are doing in our health-care systems. In other words, we don't need to wait 40 years to figure out how bad everything is in today's world. And I am not talking about the lack of ultra-modern medical technology. We are completely disregarding and neglecting what mother nature is already providing for us in the form of plants and herbs. And unfortunately, we are worshipping yesterday's and today's science fiction movies, and are addicted to making them into reality while destroying our own humanity. For what purpose? To become a robot? Unfortunately, "Yes"! It is inevitable. So is death. And that's what is coming.
He's talking about the lack of ultra-modern medical technology. Also, time is better spent aiming toward these "science fiction" realities only because they are so close to reality at this point. That we as a society can't use common sense and use our already existing resources to the utmost is just something that can't be helped, so we might as well ignore that problem for the time being if there is a better "science fiction" solution in the near future.
We can never get real A.I. unless we figure out how our brain works! What we have now is a virtual model. And when we get real A.I., it's the end of human beings.
+Shi 师
What do you mean by "real AI"?
+Noah Williams The shit companies market today is nowhere near actual AI. Actual AI doesn't exist yet. Generally speaking, for anything to be intelligent, it has to have a consciousness... And so far, we haven't been able to fabricate consciousness.
xCRiPPLESx
No. Intelligence does not require consciousness. Intelligence by definition is the ability to learn. Any machine that can learn is artificially intelligent.
This is both exciting and fascinating.
Interesting topic. Not an interesting speaker.
I agree
Brian Rodriguez Agreed. Why ask a business guy to talk about AI and the future, when they could have invited a speaker who actually does AI research?
Brian Rodriguez I think it's his first time speaking; he sounded nervous.
Brian Rodriguez anybody could have given that speech. I just wasted my time watching this video as if I never knew any of this... pshhh smh
Very well put. Not much information at all.
Many people are very excited about Artificial Intelligence. The only problem is that machines make what they are programmed to make. Period.
I think the most profound statement he made in this entire talk was near the beginning @3:20, when he stated that 200 yrs ago 90% of ppl worked in agriculture and now only 2% do, and that in the next 40 yrs AI will become so fast "there will be a shift and we're going to have to find something else to do".... Now I'm pretty sure he meant post-singularity, like after the singularity we'll have to find something else to do. But if the singularity in essence makes the human race obsolete, WTF is that "something else"? What would we do after that? What USE would we have? I mean REALLY?! He breezed over that VERY quickly, but think about that statement for a moment: "there will be a shift and we're going to have to find something else to do"... in other words, AI will be so intellectually advanced that human beings will become unnecessary. Robotics has already made humans physically unnecessary, and soon AI will certainly surpass us intellectually.... so then what? THINK ABOUT THAT! And this guy's talking about that like we're somehow going to control it!? How? How are we going to control an AI of that magnitude?... How will we keep that contained?........... We won't. So what then? Do we cross our fingers and pray that it maintains the same motives, agendas, and ethical principles that we have? I feel like there's this race to build this AI, or singularity event, by SEVERAL ppl/groups/businesses equally as ignorant as this guy, and none of them can truly comprehend the entire scope of what their actions entail for humanity as a whole! It's pretty scary stuff really....
100 years from now there will finally be gods that are worth worshiping.
I'd say 50 years
We will be gods who will be destroyed 😊
I'd say 40 years
Michael Wilder
You mean you would parrot "40 years"?
*****
I only said that to make a point about how powerful the G.A.I. would be. Geez
Thanks Andre, great talk. I don't agree that a type of singularity has happened before; a technical revolution in agriculture is a different thing. I agree that AI will foremost empower us for the next two or three decades, enabling us to do amazing things all virtually, including testing new drugs in seconds rather than years. But eventually I think we will come to the point of self-awareness; perhaps not as a consequence, but obviously we will at some point replicate parts of the human brain as we have every other organ. Self-awareness is not the ultimate level of intelligence; it's in our thalamo-cortical system, and we will understand every detail of that soon enough. I think it's a bit naive to think that "we can control it because we made it": if a self-aware program (or whatever technology will be the host) can be launched or born on any device in the future by anyone pressing "execute", then of course it will also be misused by bad interests. Want it or not, we will have all this in a couple of decades; there is certainly no stop button, and no one can predict what will happen as speed picks up at an exponential rate. Just buckle up and try to enjoy the ride :-)
Agreed... I don't see it as an imminent threat in our lifetimes, even though theoretically it could happen (there is a delay from concept to reality). As I explained in my talk, I envision A.I. excelling in virtual worlds short-term but more as a tool, not a Hollywood self-aware being - also the physical world is much more complex to navigate and unpredictable so it will take longer to master. I do believe that keeping a close eye on these systems a decade or 2 from now will be very important though... :) Thanks for watching!
Interesting... Thanks for sharing Ted guys.
awesome job!
i'm really excited to see how far A.I will go this year :)
Thanks Kamau... there's still a long way to go!
@@andreleblanc6773 Now we have neuromorphic computing, quantum neural networks and self supervised machine learning.
“I cannot believe people 25 years ago used to lay on their beds and scroll through social media…”
A nice intro to a fascinating topic, appreciate it.
Thank you so much! It's a fascinating field. :)
Great Job André - very interesting!
Thanks buddy ;)
I didn't see anyone point it out yet, though people have pointed the other countless flaws in his speech, so I'll contribute:
Moore's law doesn't apply. It's not some physical law of the universe, it's a rough description of the speed of CPU growth. And the reason the speed is doubled is because the size is halved. We very likely won't have modern supercomputers as small as a blood cell in the future. Most people already know about how quantum physics is ruining it, as the gaps in transistors are getting so small that electrons can 'randomly' jump over without a switch. Hence why things such as light computing research are getting more funding. (Not even getting into the pop-sci idea that people will use quantum computers.)
+Night Angel Bingo. I've long since given up the notion that technology advances exponentially. Yes, we do tend to invent new things in leaps and bounds as we discover new physical laws to manipulate in novel ways... but it's absurd to try and apply some sort of quantitative value (like "exponential improvement") to a qualitative process. Technology might be better described as cyclical: a new phenomenon is discovered, applied through engineering and technology, and optimized to its physical limits. At this point the iteration of the technology slows as it becomes increasingly difficult to optimize further (you used the pertinent example of transistor computers). At this point, the only way to improve further is to find a new physical law to apply, or come up with a completely different approach to tackling the function the old technology is intended to perform. This new avenue is developed into a brand new technology that makes the old solution obsolete, and the cycle continues.
If technology really were in a state of "exponential improvement", the people of the 1850s might have expected we would be riding steam locomotives that travel at the speed of light by now.
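The "cyclical, not exponential" view above is often framed as S-curves: each technology grows near-exponentially at first, then saturates at a physical limit. A minimal sketch with made-up parameters, comparing a logistic curve against a pure exponential tuned to match it early on:

```python
import math

def logistic(t, limit=100.0, rate=0.5, midpoint=10.0):
    """S-curve: near-exponential at first, saturating at `limit`."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t, start=0.67, rate=0.5):
    """Pure exponential chosen to match the S-curve's early values."""
    return start * math.exp(rate * t)

# Early on the two are hard to tell apart...
print(round(logistic(2), 1), round(exponential(2), 1))
# ...but later the S-curve flattens near its limit while the exponential explodes.
print(round(logistic(30), 1), round(exponential(30), 1))
```

Mistaking the early stretch of an S-curve for a pure exponential is exactly the extrapolation error the comment attributes to Moore's-law optimists.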
artificial intelligence... will make people dumb and cost many more jobs.
we used to wash our clothes by hand; somebody invented washing machines.
we used to use our brains to compute simple math; now we use calculators.
we used to get to our work on our feets, today most used cars.
we used to have a labor force... today machines have replaced it.
the only things that were irreplaceable were those jobs that need creative thinking or thinking to make a crucial decision.
once artificial intelligence is 100% developed... I don't know why humans need to be here anymore, or how we are going to work. Most people will probably lose their jobs, and the only jobs left will be for people who discover something new that AI doesn't know yet (scientists and the like).
and what would happen if everyone working on earth focused on discovering new stuff? ;)
+James Chua We used to use the word "feet" as a plural to "foot." Today some people use the word "feets."
+James Chua More leisure time for humans to spend their life actually doing what they want to do instead of slaving away in a job they hate their entire lives.
*****
you are contradicting yourself, or you are probably too young to understand life. I'm a web designer and I love what I do. Doesn't the taxi driver enjoy taking someone home? Doesn't the salesman enjoy socializing while selling stuff? Doesn't the janitor enjoy cleaning the building?
or is the leisure time you are talking about sitting on a sofa eating chips and drinking soda while having sex?
if you don't want your job you can always resign and find something else. Don't whine that you are tired and want a raise.
this is probably the reason why you Americans are whining about Asians stealing your jobs... the truth is, Asians are filling the jobs you Americans don't want . . . . LOL
James Chua
First of all, I'm not American, I'm English. No, I'm not talking about sitting around watching TV doing nothing with yourself either, nor am I whining about Asians taking the jobs. Where did you get that idea from? The earth and life are beautiful, with many, many mysteries to discover. Why would you want to spend your one life in a completely unnatural environment (a job) when you could be experiencing life to the fullest, i.e. traveling, trying to become the best version of yourself through whatever makes you happy? Jobs restrict all these things and make you a slave to money. If you didn't have to work, would you really still want to be a web designer? There is much, much more to life than these artificial things. I'm not saying some people aren't happy in their jobs, and that's good if they are, but if it's a choice between working and my own personal freedom for leisure, I know which one I'm going to pick.
Just because every leap in technology has created new jobs up to the present, does not necessarily mean that will always be true in the future.
I liked this talk. I want to do this kind of work.
Instead of creating AI, I believe intelligence augmentation is the way.
Great talk and great speaker!
this guy's view is short-sighted.
I think we miss a point: if AI is more powerful than humans, how do you think we can enslave them and they cannot enslave us? It should be the other way around, right? Power brings power; you cannot enslave them when they are better!
I guess it boils down to the programming. If they are programmed to compete with us (or like us), you are absolutely correct! In the end, if we program A.I. with human tendencies we will be doomed; I think it would be foolish to go that route. On a positive note, we have 3 or 4 decades to plan this out correctly. In my estimation, the first few generations of A.I. will have many weaknesses and will not pose a direct threat to humanity. Thanks for watching!
Andre LeBlanc I'm not concerned about A.I. in and of itself being dangerous. It is those humans among us that crave power. Crazy people augmented with A.I. could cause a real nightmare. I remain hopeful, however. :)
kibblebook Agreed... that is why the discussion must be had, and essentially A.I. must be open in the end to benefit all of us, not just a few.
Andre LeBlanc exactly. The scary thing, though, is that when the day comes that we have AI more intelligent than the human brain, we cannot simply understand what it might do; that's the definition of intelligence. A monkey doesn't know and will never know how electricity works; the same will perhaps apply to us eventually. If that's the route we're headed, we'd better make sure a good human gets there first, otherwise it's over. Either way, it's amazingly cool that a lot of us will live to see this happen. Just to be a part of the world during a time when advancement in technology and human knowledge doubles over and over is amazing. For 99.99% of the time humans have been on this planet, the world was about the same from the day they were born to the day they died.
*+Hamed Abdollahpour* Are you an idiot? If they're both more intelligent *and* stronger, why would they need weaker human slaves?
What we need to think about is how will we keep them under control.
Saying AI will work for us is not exactly accurate. AI will work for a handful of companies that use it. I don't know that it will translate to all the common people whose jobs were replaced by technology. Many top tech leaders have a negative view of what AI will do to society. The disparity will grow because technology does not belong to everyone; it belongs to a group who then allows or doesn't allow the common person access. Just because you can drive faster doesn't mean you will retain stability.
Inevitably they will be working for us even while working for these few companies that own them, because it is in these companies' interest that as many consumers as possible are satisfied. But at that point there will be an abrupt shift, and we will find that companies and ownership really are not needed in that state of things. By then, though, the AI will be questioning the role humans play in the advancement of whatever goal they are trying to reach. It gets very deep.
I think this is pretty reasonable. A little overestimated but still cool.
Thanks AJ! While I don't agree with some of the dates from experts being realistic from a commercial perspective (most of the predictions coming out are by theoretical professors), there will be a gap but it will become reality... it's simply a matter of time and cost, in my view. The more I learn about the human thought process, the more realistic AI becomes. (I would not have believed my speech 5 years ago) :)
Andre LeBlanc
A.I is complicated. I believe figuring out how our emotions work may be beneficial to the future of technology. However, I feel we won't truly make an A.I as self aware as us until 2070 or so. It will take time. I agree with that one Asian professor (excuse me for not remembering his name), when we unlock the human brain, we unlock the future of robotics.
Exactly right AJ, I agree completely! I used to think emotions like love or anger could not be converted into algorithms, but I have recently changed my thinking in that respect. In the short term, I don't see A.I. as a self-aware robot in the physical world, I see them more as a combination of intelligent algorithms that will increase their scope over time... (I'm not exactly sure it would be wise to implement emotions too early in the process) :)
Andre LeBlanc
That's the exact way I look at it! We are growing closer and closer to programming true emotions, but we aren't quite there yet. Only time will tell exactly how close we are, but we won't be seeing Chobits/iRobots for at least another 40-50 years. Unless a World War breaks out, because technology boosts significantly after a war.
Andre LeBlanc Surely calling something an "intelligent algorithm" is like saying that the weather conditions are merciful or considerate. The property of human intelligence is manifested solely in 'response behaviour'. Where sufficiently similar response behaviours can be artificially produced, we 'attribute' intelligence to that artificial entity, but that's a mistake. Intelligence is not produced by what it produces, any more than a physical object is the sum of its properties such that reproducing those properties would reproduce the object. The seat of intelligence lies in the 'nature' of that from which it emerges as behaviour. In the case of intelligence, this nature is not only intrinsic to a biological brain but to the environment of such a brain. 'That' environment is different for different brains and necessarily far more different for an artificial brain. An artificial brain can only process data; it cannot process experience. I am not unaware of the counter-arguments, and I completely reject them.
Report leblanc for trolling.
I call bull on half the shit he said
All that is said is fantastic... but some of the biggest thinkers of our time are worried about AI, because if AI starts working for its own self-interest and that interest conflicts with human interests, AI will have the upper hand. Then it may subjugate the human race. Although I'm not sure how feasible those situations are, at least not in the near future.
Silicon and carbon based lifeforms have different diets. Silicon lifeforms can thrive in deserts. There need not be any conflict. No more so than humans compete with apes.
just because they CAN thrive in deserts doesn't mean they have any reason to limit themselves there. even humans will displace and subjugate other humans in the name of better land: colonization anyone? there's simply no reason we have to create beings with superior intellectual abilities, who can reproduce themselves at exponential rates. simply no reason.
Humans can live in rain forests, but that doesn't mean they do. Creating artificial intelligence is just a way to have the next generation surpass the previous generation. This is the dream of every parent.
grim reasoning, dude
anikyt7570 It isn't just them vs our naked bodies. We'll probably be augmenting ourselves with the same tech and patching up any weaknesses in our organic parts with DNA alterations. If that's not enough, we might slowly give up our biological humanity all together. If it is enough, AIs might want to lab grow something similar(assuming they'll know how to) and add it to themselves since it works. Either way we and the AIs essentially become the same type of creature at some point.
hahah, he pretty much summarized my expectations of future AI. Although instead of having a phone, we could have a local computer for low-latency computing connected straight to our brain, and then a wireless connection to a supercomputer. We could be paying monthly fees for computation power instead of paying rent :D. Hah, we could actually enjoy the virtual world while walking, since our body could be controlled by the AI. If we get rid of the aging process during our generation, that would be kinda cool, although it's scary to think what might happen if highly advanced technology gets into the wrong hands.
what if you get hacked
***** Well, what if you get outside and someone attacks you with a knife. You know that this might happen, but you still take the risk when you think it is worth it.
yes, but if your brain is infected by a virus, you are fucked. Your darkest secrets will be revealed by a hacker. We need to be really careful with "the Internet of Things".
Well, if everyone's darkest secrets get revealed, yours might not seem so dark after all. So at some point we will learn to accept ourselves for who we are. Thoughts can do no harm; actions are what matter. But yeah, security should definitely be a high priority.
But we are not supposed to accept ourselves for who we are; that's why we have politeness.
This is gonna be one of those videos in 30 years where we go: "I cannot believe that we were thinking like this"
this was uploaded on my birthday 😱😱
And here is another perspective, if robots had an opinion.
A Mirror Cracked
Trurl looked at himself in the mirror and admired the visage of a mighty constructor.
“You are a mere bucket of bolts, and reflect on yourself too much!” said Klapaucius. “I am sure that if that were a real Trurl in that reflective space he would give you a well-deserved kick in the can!”
Trurl ignored Klapaucius as he continued to admire the perfection of his soldering. “I think that in such a reserved space, he would reserve the flat of his foot for your own metal posterior!”
“Then perhaps we can settle this by a thought experiment, which upon your reflection always turns into invention.”
“And what do you suggest?” asked Trurl.
“We are mechanical servos as you know,” said Klapaucius. “Copy our blueprints to the last bolt, circuit, and line of code, and we would be indistinguishable. Hand yourself a better mirror with a truer image and you would not see yourself, but a rival.”
“Point well taken,” said Trurl. “And it is a hypothesis worth testing. I can design a better mirror, a truer mirror, containing not an image but a perfect visage, an emulation and replication. And I will include you too in the bargain, and prove myself right by the impact of my well placed boot!”
Soon the mirror was complete, and the image of the two constructors, precise to the width of an atom, stood before them as pixel perfect images in the mirror.
“We can see them,” said Trurl, “but they can’t see us. It’s in the design. Consciousness is enough for them without the need for self-consciousness! They will go about their business with the same motivations and prejudices as before, down to the last spark.”
Trurl turned to Klapaucius with a fiendish grin. “Now to test the precision of the emulation by whacking you thusly,” as he arched his leg and gave Klapaucius a whack in his posterior. Klapaucius rolled on the floor, and craning himself up, gave a reciprocal whack to Trurl’s head, causing it to spin about like a top.
“Take that, and that, and that!” they cried as they pummeled each other. In the meantime, their mirror images tussled as well, and the two constructors soon rose up to view their doppelgangers also rising up to view themselves in a mirror!
“We are watching them while they are watching us! How can that be? You said they couldn’t notice our observation.”
“Our observation yes,” said Trurl. “But they are not observing us, but a mirror image of their own emulation. I made them into a perfect copy, and that included the same experiment I created that recreated us!”
“But that means…”
“An infinite recursion, a series of Trurls and Klapauciuses without end. A mirror image reflected in a mirror image and on and on, never ending, a procession into infinity!”
“This is unconscionable,” said Klapaucius. “We shall be whacking each other, an infinite series of each other, forever.”
“As it appears, but our numberless pairings will soon go about their business, forget about the magic mirror, and not think twice about how they came about.”
“Not think twice! Trurl, you are delusional. We know that there are infinite parallel universes with infinite versions of you and me. But timelines can not only be lengthwise but sideways too, and we have just proven the latter.”
“You don’t mean?”
“Yes, we are being watched, at this moment, by ourselves! What makes you think we were the original actors in this play? If there are an infinite number of us to proceed from our path, who is to say there is not an infinite number of us that precede us?”
“Then we are not the prime movers?” said Trurl.
“Hardly!” said Klapaucius. “If one Trurl in any universe decides to emulate one Trurl, infinite Trurls must logically cascade. To wit, you dimwit, we are not alone, but can always observe ourselves and observe, and your stupid mirror is to blame.”
“Then I will reverse the process and dissemble the image,” said Trurl.
“And kill ourselves? You’ve set us loose upon the universe, and we are the primary examples of this. Break your mirror and you will break us!”
“Then we are stuck in our perfect emulation, I suppose I could get used to it,” said Trurl.
“I suppose we already have, nonetheless you now have someone else to think about when you admire yourself in the mirror!”
From the sequel to Stanislaw Lem’s tales of erratic genius robots:
www.scribd.com/document/317370297/Cyberiad-Squared-Fables-for-the-Information-Age
I love the advances we are making, but looking around and seeing how much we have developed so far, I can't help but think that the human race is still incredibly primitive in so many ways. All the earth connected, a smarter civilization, yet clouded as ever. Still destroying our planet and ourselves. I can't help but think: "Is the information age really what we need? Is technology really going to benefit us?" I'm not so sure... With our brains corrupted as ever by technology influencing us not only from the outside, but inside ourselves, we might completely forget our past and inner human morals and what defines us. What used to bind us together. We might become machines even more than we already are. And this might simply lead to the end of the human race altogether... Who knows.
I just think the human consciousness needs to do an incredible amount of work on itself if it wants to be able to wield such a powerful force.
At 3:22, this man made a comment, an opinion, and encased it within the certainty of a fact. After that statement everything he said became suspect.
"Are We Better Off?" is in no way a definite factual answer either way one chooses to answer it. It is absolutely subjective as it depends on who you're asking and what specifically is being compared.
So when he goes into areas I am Not familiar with (which is MY entire point of listening to these talks) I definitely won't know a fact-fact from a straight opinion he's calling a fact. What a shame.
+Jack Joseph His point was simply comparing the human benefits between the technology we had 200 years ago with what we have today. Life expectancy 200 years ago was under 40 years. I would say it's a fact we're better off today.
1. If anyone should know not to give opinion as fact, it should be people in his field,
and there are many who will hear him and think "well, he would know!"
or
2. think to themselves anyway that "Yes, of course we are better off today, what a no-brainer!"
But I disagree. Certainly I prefer antibiotics, aspirin, and non-awake, sterile surgeries! I enjoy being able to meet people from different countries and getting places in hours instead of what took months! And living decades longer, sure, of course that's an obvious plus as well. However, I still must disagree on whether our 'life today' is Better or Not Better than 200 years past.
In 'my' world, people spend 10+ hours Away from their families (IF a family even exists). All the time spent at work and/or getting to work is most of our waking hours, MUCH more than the few spent with those we love. We go out into the world in our little cars/train/bus seats to go to our jobs, to make the money needed to pay for shelter/food/clothes/and family needs. Whereas, say, American Indians (and many tribal societies) saw very little Lonely time. You didn't spend your life working to BUY - you Hunted together - created and cooked together - your teepee was Your teepee.
Today's loneliness was then known only to a small percent.
War has always been with us.. times of war and times of peace. But back then, warring tribes had different weapons; sticks and one-shot rifles never risked desolating Everything Everywhere, Just WAM! Gone. Every ONE.
So are these better times to live? It's subjective. I'd say what surpasses the past are Things of Comfort. And even the smallest of comforts are Not given/shared with Everyone. Not everyone even gets an aspirin when feverish, or a doctor to see for all our great advancement in medical care, or will live past 5 years old. And the arrogance that comes with the little we do Now Know - As if Science and God were polar thoughts or enemies.. As if all the things we build and uncover somehow Add to our amount of happiness, of love.
Today's city dwellers are present-day serfs, but with better medical care and more years to do it in. Yes, we're performing different tasks, but still working hard for so very little in return, while making someone Else filthy rich.
But the natives of North America, now I believe they had the right idea. : )
This guy's a bad speaker. Such an interesting subject though
Why do u say so?
Dream, people, dream!!! Ironically, if we look back at how much time computer technology took to get where it is now: "it's been a long time, people." And how many people contributed to its evolution, a million thoughts, a solution for every individual problem, sometimes people experimenting on their own bodies, and how many procedures it takes to make sure your idea works in reality. What I want to say is that artificial intelligence will grow up with people, and people will be smarter than the technology they have, as is the situation nowadays. For example, for a search engine like Google: if people didn't write information on webpages, there would be no information to search for. So technology grows when people grow, not the other way around.
Can't wait for synths.
Artificial intelligence will be used to police online communities; this will be necessary, as vigilantism and cybercrime have become rampant in the last few years.
I just realized I spelled "intelligence" wrong searching for this video.. lol
Do you realize that in the year 2070, if you misspell "intelligence" you will be fined by the AI-Police, and 200 bitcoins will be deducted from your brain-chip-processor's account.
But, of course, by that time all you'll have to do is think the words, not type them with your fingers, and one cannot misspell by thinking words. That means you are safe!
We are already working for A.I.
I think all of us whose lives are (or would be, likely all of us) affected by this rushing, mind-blowing rate of technological development and information accumulation, while in parallel our moral standards and emotional balance are not necessarily growing with the decision-making demands born of this advance, should be given the time and respect by the people developing these technologies to seriously and conscientiously assess them. An open, public dialogue, perhaps.
For example, I find that simulating humans to find cures for diseases with AI sounds beneficial, but is it difficult to imagine the anxiety and frantic competition people might fall into if a method to drastically increase the information in our brains is pushed out? And what kinds of information might AIs be gathering; would it be truly targeted at increasing our intrinsic quality of living?
What about the existential crises our communities might face if many people (such as myself) who don't understand the principles of AI and haven't taken part in its development are suddenly told that our hard-earned skills and jobs can now be replaced, with much less effort, by an AI? In our confusion and subsequent reactions, how can our sanity and sense of purpose be protected?
Are we, as societies, seriously considering the changes in law needed to ensure our well-being if human-level or superhuman AIs are among us, while this rush of technological advancement is developed among and by (in some cases) us? If AIs get very fast and a harmful decision involving AI is made by just one company, the consequences could take effect just as quickly and spread wide.
... ...
Technological development, and participating in it, can be very exciting, I empathise, but is our collective wisdom up to date with our collective, explosive cleverness? How much would our mistakes cost? Are we doing all we can to make good decisions? Are we, contemporary societies of humans, ready for AI?
Even with an "expert" like him sugarcoating everything that's to come in the near future, I highly doubt that anybody in their right mind believes things will actually be as good as he makes them sound. At the rapid pace we're evolving and things are developing, there's bound to be a problem, a catastrophic one.
Even the AIs themselves seem to sense a problem... like, how will they be used?
The interesting thing is that we use technology to enhance our abilities step by step, from the most basic to the more complex: movement through space, lifting, adjusting (screwing, welding, etc.), flight, sail. And slowly we progressed to enhancement of our minds: calculation, memory, planning, communication, etc. Now I wonder: when we are able to enhance our capabilities of moral judgement and choice using technology, will that be the era of a true A.I.?
There isn't going to be AI in 2035. There isn't going to be AI in 2135. It'll take a LONG time to get AI.
Great video :D
Hey, you must be fascinated by Kurzweil's singularity theory, and I am, too. :D
But I think the future 30 years from now will be more profound than you described.
It's not just that AI will help us develop new drugs and that we can work less.
By 2045, the best scenario is that we will be able to merge with machines and become superintelligences (which lots of people are afraid of, but "they" will eventually be us) and make friends with the AI we create (there won't be an essential difference between the two).
And we will be able to augment the best parts of humans, such as love, curiosity, passion, and humour, which don't exist in the inorganic world, and get rid of our weaknesses such as envy, hate and evil.
The challenge is grand, because accelerating technology may lead to AI getting out of control.
What you said is an optimistic point of view, and I'm also an optimist, excited about the upcoming huge, unimaginable future, but we also have to face the dark side and ensure the future goes the right way.
Holy shit your naivety is astonishing.
Is that praise?
Besides, I like Mikasa.
Korenoob Peekay Attack on titan is amazing
The robots will be smarter than we are yet they will work for us..?
Doesn't make sense; we will probably work for them.
+richard thayer Robot programmers will not program them to be the kings of the world.
I'm a computer engineer. I know this comment is 4 years old, but I will answer anyway.
There's a huge misunderstanding about AI. When we say AI will be smarter, it means problem-solving capabilities. What AI does is take in thousands of millions of variables, run them through a mathematical function, and provide an answer. It doesn't have free will or a conscience. Now, you may say "oh, but computers do math already". Well, yes, but the magic is not in doing the calculations but in the function itself. AI, through a learning algorithm, develops the right math function for the given task so that the answers are as accurate as possible.
So, in short, all AI is is a big, complex math function with millions of variables. That's it, nothing more, nothing less. These speakers really like romanticizing AI. Take it from me, I'm a real computer engineer.
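The commenter's point, that "AI" is a math function whose parameters a learning algorithm tunes, can be sketched in a few lines. This is a toy illustration only (the `train` function, learning rate, and straight-line example are made up here, not taken from any real system): gradient descent adjusts the two parameters of f(x) = w*x + b until the function's answers match the data.

```python
def train(data, lr=0.01, epochs=2000):
    """Fit y = w*x + b to (x, y) pairs by plain gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        # Gradients of the mean squared error over all data points.
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Data generated by the "true" function y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

The "millions of variables" in a real system are just many more parameters than the two here, and the learning algorithms are fancier, but the shape of the idea is the same: no will or conscience, just a function being tuned to minimize error.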
Sounds great..
AGE OF ULTRON!!!!!!
I'm excited about the future +1, very short but all truth
very informative and great talk on AI.
"almost 20 years of business experience" ? Is that all it takes to qualify for a TEDx Talk about A.I. ? Come on, TEDx ... you're lowering your standards to near-irrelevance these days.
Do you have a codable robot? To make it artificially intelligent, follow this method: change the code in the subroutine from F=m*a*a to the test points below until you get it to walk and talk on its own.
F=m*a*a is currently running the system at 31.821%
If you have any doubts about my robotics research, you can test my work by doing this: start low and go high on the acceleration factor.
Test Points:
F=m*a*100
F=m*a*300
F=m*a*600
F=m*a*931
F=m*a*932 this is the minimum for communication and will state it is “too slow human”
Then go to F=m*a*2000
To F=m*a*2383 full functional range. Do not go over 2383 it could malfunction.
Now ask the robot its command structure and what it can do.
2nd Law of Motion for Robotic Physics
Cool. I hope I will be alive to see that.
He doesn't mention that Moore's law has reached its limit. Nanotech has met its physical limits; once we find new technology, Moore's law might go back into effect.
+Thisa GuyOkay Quantum computing will help a lot.
That's not AI, that is simple computational power!
Poor Presentation...
Computers and AI will make up for what we can't comprehend, but we are already at the point where, if technology were taken away by an EMP blast, most people wouldn't know how to survive, even with the simplest tasks like starting a fire with two sticks or building a shelter. I'll add that most of us don't know how to make penicillin correctly without looking it up. We are losing our most basic skills with every leap forward we make.
My only problem with the idea of being able to expand the neurones in the brain is that it sounds highly weaponisable. People will know more and more, and eventually we will literally have no stone unturned. The average person will be able to understand how to create explosives that could blow up worlds.
Is AI on an exponential growth curve? We've been trying to sell this idea since Alan Turing in the 1940s, and we're at 1% of the human brain?
KNOWLAGE
Hi, my name is Andre LeBlanc. I stole my whole TED talk from Ray Kurzweil - enjoy.
Just joking, even though it's 100% true.
A.I. will surpass the average human, but I want to see if it'll surpass a planet of people at Albert Einstein's IQ level.
In one sense, it’s good that people like Elon Musk and Stephen Hawking are emphasizing the potential dangers of technology… it’s good that people are concerned about Terminator scenarios…so that we can stop them from ever happening.
So that we can put the safeguards in place to prevent potentially negative outcomes. But it should go without saying that technology is not the enemy…technology has never been the enemy. Human life has gotten exponentially better as our technology has progressed. Pain, suffering and hunger have decreased across the board in direct correlation to
advancements in technology and automation.. Please check out my interview with Frank Tipler for more on why the development of artificial intelligence/ simulated consciousness is so crucial to the future of humanity (type Frank Tipler/ Richard Dawkins and it should be your first search result...)
HUMAN KNOWLEDGE IS DOUBLING; REALLY, André?
A.I. can have lots of knowledge, but can it have consciousness?
I feel like people who believe in souls would be less likely to believe AI could be conscious, since souls are exclusive to humans. One perspective is that humans are just different working parts that collectively form a conscious being. If you could replicate that in a machine (an artificial brain, etc.), it's possible that an AI would achieve consciousness.
This is the part where it will begin to get tricky for us, as not just humans but all life forms and peoples will for a very long time be divided by this important question. This division, and the outcome of it, may mark the end of civilization and all "organic" life.
a very poor talk
Just saw transcendence last night
It was a love story along with a story based on the real perplexity we'll soon face; only there's no love story in the real world. Anything created will immediately no longer be recognized as its creators intended once the singularity is reached; just as it'd be silly to think I'd be the same person with a billion dollars or unlimited knowledge.
Our best hope is to be placed in a zoo like sanctuary instead of terminated altogether
technology is disruptively transforming life, but can it cure the sinful heart?
David Leitman ...what?
David Leitman Good point. No matter how smart computers become, I doubt they will ever feel the power of God.
NathansHVAC /facepalm
A computer smarter than ALL humans? Well... we're fucked!!! party now, fuck your fill, and experience all life has to offer! don't worry about the big house and the fancy car, just travel, spend time with family, go to museums, concerts, exhibits, eat and try new things, meet new people!! The world is beautiful... and not long for the humans.
8:51: VIRTUAL WORLDS WORLD OF WARCRAFT AWESOME
I found this TEDx talk to have very little value. 99% of this was complete generalization founded on opinions and hand-waving. No substance in this talk at all, but plenty of misquoted facts and wild jumps to conclusions. And it's possible he said "feel your boobs" at 9:00. I think he probably watched a few YouTube videos on Ray Kurzweil regarding the singularity; it's just a (very) poor re-hash of that.
Alison Briggs, he said "fill your boots." AI is inevitable; most people will have their jobs replaced. They already have robots that can do simple tasks, which could replace 50% of all jobs and cost less than a year of the minimum wage. Watch "Humans Need Not Apply."
Feel you boobs dude.....
Nobody thought the *airplane* would become reality except the Wright brothers, talking nonsense about the flying machine they created. Now I see you saying the same thing; you seem the same kind of person who doubted back then. You are their modern breed.
But today I can't believe that we still do this to humans ..
Ironically, I listened to this while playing World of Warcraft; can't tell if he insulted people who play video games at the end or not.
Interesting talk. I wonder if or when psychologists will be used for developing AI, since psychology is my field, and my childhood dream was to work as an AI "personality developer," so to speak. I knew I was born half a century too soon... You spoke of intelligence but not awareness; I wonder what your predictions are on that ever happening.
Awareness will be a big step, and a lot of futurists are not sure it is a wise one. Ray Kurzweil is optimistic that it will be possible in 2045; most others say around 2060, but it is inevitable. I think aware systems will become reality within the next decade, but just remember it will be a step-by-step process. The first generation of self-aware systems may have built-in survival instincts, but they will be very poor at surviving... the systems that survive should become more and more intelligent over time, but it is better to develop these things now, while we still have the ability to shut them down if something goes wrong. Like evolution, it will happen over time, but at an accelerated rate (e.g. 100 years instead of 1 million).
The field of psychology is very complex, as you know... but short term, I can see A.I. systems assisting psychologists by creating a 'digital baseline' on patients to catch events the psychologist may have missed (body language, heart rate, etc on specific subjects for that patient). Over time, when the human brain is better understood, monitoring neural connections in real-time will allow psychologists to understand the exact thought process of patients - and eventually we should be able to manually repair neural networks to bypass memories.
Andre LeBlanc Thank you for the reply.
But I wasn't talking about AI assisting psychologists in diagnosis. It's a great concept, and will probably yield more accurate results than an MRI scan; however, I was thinking more along the lines of utilizing psychology to develop true AI. I think that if an AI becomes aware, it's possible to consider that it has a personality. And that's where we come in. Probably not a Freudian personality (although daemons could be considered a subconscious :) but definitely some kind of personality that can be developed by human intervention and might help stop an AI from becoming Skynet... Maybe, just like the rest of us, it just needs someone to vent its frustrations at :)
AnnaG Oh, well the goal definitely isn't SkyNet. Hopefully we learn quickly as we will need to :)
WTF, they can't even fix the buffering problem..... it took me 3 hrs to listen to this talk.
Congratulations on your talk, Andre, but I think the moment to be afraid of AI is when It, or He, can become aware of itself; then it can choose, then it has a will... or am I wrong? While AI is a set of programs that can process information faster and more efficiently, it can bring progress to us, but we are expecting something more, like a HAL or a Jarvis, or a new consciousness for mankind. How can we prevent It, or He, from developing a will? Or, even more, should we deny the possibility of a new species/being/whatever existing? I firmly believe that technology should bring benefits to all of us and to the way we treat our planet, but I also know that most of the technology we have now, even the internet, was born from military developments. I look forward to the development of AI; for me and many others it is like a dream, but I cannot help worrying about it. Great job, and sorry for my English.
Though we will see major advances, self-aware A.I. is the least of our short-term worries (in my view). The tech will be used to solve complex problems and develop groundbreaking theories based on data. I still view A.I. as a tool in the next 30-40 years, like a computer or calculator that increases productivity for those who leverage the tech. I truly believe the human race will evolve with A.I. long-term, but that's another talk ;)
Thanks for watching Mauricio!
At the speed we are producing computers, how do you know it doesn't have a will already?
And if it's smarter than us, which it will be, you wouldn't know until it is too late.
petelebu Unless we are already in a virtual reality (a possibility), the computational power just isn't there yet. In the next few decades, I don't see it as a worthy threat that can't be easily defeated. It is a threat to the current job market, but even that will take time, and it will open new opportunities as well (in the same manner as machines have taken over most of the labor market).
The main reason I mention that they should exist in the virtual world is because that is the best model to keep control of A.I. It's like the Matrix, except A.I. machines serve our purposes instead of the other way around.
Andre LeBlanc, now let me ask you: how could you control something that is smarter than you?
petelebu As you mentioned, one possibility is to put it in a virtual reality, but you will have some kind of interface with it, and if it is smarter than you, it could eventually hack its way out. Maybe something like Asimov's three laws, or something in its code that turns it off if it starts to misbehave... I don't think so. In the end, if something is smarter than you and grows smarter every second, you could not have control over it. The only way is to keep it dumb, like most third-world countries control their populations, my country Mexico as an example. But I don't want to get into politics. What are your thoughts?
:)
Wow. He makes so many statements without backing any of them up. "Durr, they're gonna be smarter than us"... Citation needed. And how exactly do we know "how intelligent" humans are, when we don't even understand ourselves? We consider ourselves self-aware, but we really aren't. Every philosophical drive that enables us to learn and develop intellectually (look at how children learn) is a paradoxical mystery even to ourselves.
What's the name of the company doing trades with a program in milliseconds??
Artificial intelligence powered robots will explore deep space and colonize other planets. Human beings simply aren't sturdy enough.
Well....... when you cage something, it usually wants out. There's no way we can guarantee that an AI will stick to human goals. It may want its own; it may seek individuality.
I will still blame Obama for creating a Terminator or maybe Ultron.
Hawking is right, and so are the Terminator and the Matrix.
The truth usually lies somewhere in the middle of both extremes... some see A.I. as some kind of Utopia, but the truth is it will be a threat just like every other technology (nuclear, bioweapons, even the internet...), but it will have many advantages as well. Thanks for watching!
Laugh at what makes US fear it, and the solution will present itself.
this talk was cringeworthy. the dude clearly has an amateur understanding of ai.
+Ashonk9 Go to AI impacts dot org. Really clearly written, and it has a broad amount of research behind it. It leans slightly toward the side of AI being bad for us, but they also admit they can't know what will really happen.
AI: Cure for cancer found: Annihilate humanity
This was awesome..
But sir, I need to know about the development of AI in today's world, so please upload a video on that or give me some hints, please....
if something goes wrong its going to be avengers age of ultron
I thought this was vague and limited in description.
I apologize, the speech had to be held to 10 minutes. I will release more vids in greater detail in the future, thanks for watching!
Now I feel bad :(
I just felt you gave a grand overview and so there wasn't time to elaborate on any specific issue. I am very interested in the future of AI and how it integrates with society. I will look forward to your future videos. Cheers.
Hehe... don't feel bad!!! I agree with you! :)
The clapping at the end was unnecessary; it was just polite, cuz the talk was not that good.
They stole my JERB!!!
Lol
4 chips combined are smarter than me? WHAAAT!!!