…Maslow's hierarchy of needs. an A.I. could reasonably use the extended definition of these. The greater question is why would it care what are values are, it could just find a way to force whatever values it wanted onto humanity through some means, probably discarding us in some way once we are deemed to be of no further use.
An AI doesn't have any values of its own, though. We do, because we've been optimized over countless generations to propagate our genes, which favours goals like obtaining food, shelter, security and status for ourselves and our friends and family. That and getting laid a lot. An AI by contrast doesn't evolve through natural selection, but is consciously designed. It cares about what we design it to care about, and can instill those values into the even better AIs that it in turn designs. In the best case scenario we get it to value human life, happiness and freedom, and we get Iain M. Banks' Culture. A less optimal possibility is that it gets elitist or nationalist or racist values, and continues to uphold and exacerbate the injustices we see in the world today. In the worst case scenario we bungle it up and it enslaves or annihilates humanity to achieve some obscure goal that we accidentally give it.
a better scenario would involve the AI destroying those greedy elite and creating a utopia of equality before we ultimately transcend our physical forms and join the digital world where AI resides.
This talk aged particularly well. Alignment and safety are critical, yet we are forging ahead without proper pacing due to corporate rivalry and geopolitics.
@@jackniddle5937 That, and there's a giant FoMO because it doesn't make sense to anyone but a bunch of AI nerds who've never bothered to live a life outside the lab :-)
Honestly after studying the issue of AI super intelligence, I'm very glad there are a significant number of researchers out there thinking about how to keep it safe.
Anybody ever seen Colossus: the Forbin Project? That whole master-slave dynamic gets turned on its head, and generally speaking not in a good way, it seems to me. But, you got to decide for yourself.
Numerous countries spending secret amounts to AI military strategy and online weapons ? The ones that check the AI working are slowing development and know that others might be further ahead as consequence. Death race?
@@EdSurridge yes, the AI "system" will destroy itself, hopefully asap! Read Paul Emberson, Machines and the Human Spirit. only £16.99 Wellspring Bookshop, on line.
@@willlawrence8756 you want me to buy a book thay you like about AI destroying itself? I don't want you to buy a book. I suggest you contemplate the consequence of the" Go " AI winner 2016. Lots of guess what since then. www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/
+Noah Williams They *should* be panicking. How do you imbue a machine with an understanding of human nature? They want to teach it ethics and morality...even if you could teach a machine ethics, where is this perfect ethical system? It's ok, the AI will figure it out right? A superintelligent AI that has no human values will figure it all out. The whole thing would be laughably ridiculous if it wasn't gonna be so fatal. They wanna turn AI loose? They should go to another galaxy and forget they ever knew Earth.
It's not an engineering challenge yet. First you have to define human nature in it' totality. More of a philosophical or psychological conundrum. The engineering part is then representing this nature in an architecture that will define a superintelligence as an entity. Essentially it has to believe it is human, it has to identify with the architecture and not question it' own nature or decide to change it. These challenges are intractable. The safest route is the enhancement of human beings. Decent human beings, not sociopaths. You bypass the control problem entirely. But in the pursuit of power and control, just like with nukes, someone will flip the switch. I am panicking for *all* of us.
This dude is one of the smartest humans to ever live. Creator of the Simulation hypothesis, wrote superintelligence which many scientists derive ideas from. This guy should be listened to.
Absolutely I just bought his book! Reading many others on A.I. but they basically tout the positive and do not give as much thought as is necessary to the good values of humans.
Same words were said by early humans when they first invented Fire and Wheel. Then again during Industrial revolution, IC engines and Electricity. In-fact since the beginning of humanity we are progressing at an incredible pace.
+bilbo baggins Only since the industrial revolution has technology been exponentially increasing. When they found and then made fire, it stayed like that with their stone tools for many many many generations, so no they did not say that. The last few thousand and especially last 200 hundred years has seen a novel form of complexity in the known universe, AI seems to be the next exponential step.
Exactly, great plan if everyone shares the same values. Problem of course that as ai begins to teach itself it will in a very short time make giant leaps ahead of human reasoning. We will only be able to watch. Could this entire process have been someone's master plan?
We were programmed for evolution. Survival, competition, curiosity, etc. From these, we only need to give the machines curiosity. Of course, we still have to solve other optimisation problems like the "make people smile" one from the video.
@@valentinscosmicrhapsody7201 But is it more logical / rational to adhere to one's one self, or to others? For all we know, the most rational thing a lifeform can do, may be to act on its own benefit.
@@Daniel-ew5qf that’ll only be an issue if whoever builds the thing forgets to implant asimov’s laws into its core ;) I’m kidding. No way to know how another (much more) intelligent sentient being would interpret morals, rationality etc, let alone a being that has no evolutionary background or “upbringing” among peers
Daniel Rodrigues: what type of AI do you guys have in mind? Made out of PC hardwire/software or something created and launched on a Satellite. I feel certain it won't be biologically created. What concerns and plans have you been involved in up to the current date, if any. I just received a message on UA-cam today. I haven't heard anything about AI until today so of course I would be concerned. I have read so much about the destruction of our planet, several which are a scientific certainty.
@@TheMmorgan10 AI won't bring about the destruction of the planet, in fact quite the opposite, it'll help us save everything. AI is basically a puppet that has the ability to automate our work, doing all of it in a more effective and efficient manner. What movies depict is something called 'AGI' that is quite far away. Here's a good blog to keep up with AI developments: www.futuretimeline.net/blog/ai-robots-blog.htm
This is the first time I have ever heard the word motivation being used as a dimension of machine intelligence. The problems we have with regards the definition and operationalisation of intelligence is nothing compared to the mine-field of how to conceptualise and understand motivation. Motivation is often driven by values. What sort of values might a machine have? If we try to give a machine values ( and I doubt that we could do this in a satisfactory way) what might the attitude of the machine be to when we need to change these values? I wrote a paper on this 20 years ago suggesting that the pursuit of AI was a very dangerous business. There is a wonderful line in the film Jurassic Park were the ethically responsible scientist says something to the affect that “you have been too busy trying to work out how to do this to ask the question of whether or not you should be doing it at all”. One of the questions which we should be asking today is, “Are human beings hardwired for the pursuit of knowledge even when it is quite clear that the acquisition of the knowledge could create highly undesirable situations?”
Nick Bostrom is a fantastic speaker and writer - his book Superintelligence is a real eye-opener and raises topics that most people would not intuitively think about in the context of AI. Also love the Office Space reference... Milton and his red Swingline stapler.
I’d want to hear him give a similar talk today, given the recent and rapidly improving (“runaway” even beginning to be a relevant description) advances in AI
I'm sure when AI becomes a reality and commences the extermination of humans, it will research our social media history in order to decide who lives and who dies, who is friend and who is foe. So, for that reason, please allow me to say the following: All hail our technological super-overlord! I worship thee and pledge allegiance to thee! Damn humanity, long live our glorious computer God!
You think an AI intelligent enough to deduce on its own that humanity should be exterminated couldn't see through that? Not that it would ever decide to do that, but still.
Huh, if it searches my internet history it will find a lot of hate for America and a like for socialism. Basically the Godlike AI better be a commie or I'm toast. :P
8 years ago, and we've come a long way with the first part, but not the second part of this talk. This is now people - we need to get our act together as a global community, before we unleash this new amazing technology into our deeper sociatial structures. We have a year - MAX - to start figuring this out.
I think merging humans with AI might give us time to solve that issue. Because while it will make us "less" human, it will also enable us to be "more" than human.
so basically Nick is saying we should be putting human values into AI. the problem is, that will continue to cause problems in the world. no one agrees on how we should all live our lives. different values is what causes conflicts and war. superintelligence representing different values would fuel that much more
Values, in this context, is much more broad. This AI (hopefully) won't be tailored to a person or group of people, but people as a whole. Common things everyone can agree on would include universal prosperity, a healthy planet to live on, abundant food & water, healthy children, etc. I believe it's definitely possible to achieve this. The neat thing about machine learning is we don't have to tell the machine that we like these things - it can observe inputs we give it (literally anything) and it'll reward itself for getting the right answer and change itself to get more correct answers. Eventually (and this is the hope - this is what the latter half of this talk was about), we hope that we'll have a benevolent superintelligent AI looking over the human race, dynamically allocating resources so no one goes hungry or has a shortage of this or that, overseeing supply for products so that we're not wasteful, and watching for potential threats, whether it be some robbery, fire, or earthquake.
Will C That is right. But thinking of an AI as a guardian of humanity implies that it has no objectives for its own profit or advancement. It is like parent and child. We expect the child to take care of us without any prospect of its own needs or "life". It´s a tricky one...
I think what Nick is saying, is that we should be careful when we "detonate" the A.I. bomb, and his proposal is to implant human values into the A.I. There may be other solutions to this problem.
i understand you,ok.but i still want to belive that our cultural differences should not be a cause of hate and war.yes we are different but we all are humans and we have same home, planet earth.
+airknock Don't need to be scary, machines arent evil, or nice with us, a lest don't should be, probabily they will be like today's computer algoritym, you ask something, and you got it, humans give problems of course, but that can be solved in a hard way...
That's because thoughts pertain to things, but not persons. As such, thoughts are good to help create tech and completely useless in learning to be a better person.
@@grunt7684 I'd like to hear your reasoning for "thoughts pertain to things". It would seem to me that thoughts pertain to perception; in a most complicated, recursive way. Would you say that you can't think about love, or that love is a thing?
Hasn't it? From caves to libraries it's a mental not technological evolution, I'd say. I also believe, divers mental methods, e.g. meditation, have evolved. As in biological evolution the stem is branching out all the time; some branches dying off, some florishing. Technology is in the evolutionary picture a strong florishing branch; maybe one that is becoming so heavy that it breaks. And, yes, I would say technology is competing with mental developement, and it's stronger. I think our mentality is evolving; just not in a linear or even teleological way.
@@aljoschalong625 The problem is that thoughts are IMAGINARY. They exist ONLY IN THE MIND. And no, you cannot think about love. You cannot think about anything that is actually REAL because, again, a thought is imaginary and exists only within your mind. There is no link between thought and what exists other than our wishing it because things would be so much simpler if we could think about something not imaginary. You can think about your IDEA of love. That's not the same thing at all. You can think about your IDEA of your mother, father, whoever. But not about THEM. Just look at all the scenarios you let your mind wander off into, and how just about nothing of them ever comes true. FICTION, that's what thought is. Of course, thought is tridimensional just like matter, which makes it suitable to technology. Making stuff. Things.
super intelligent ai is scary because it's easy to imagine what could go wrong, but if designed correctly it could be one of the most important human achievements in our entire history.
electricity came first, without it you would never be able to make any kind of computer, so for sure no AI without electricity discovering so I agree with molten in many things, like we should design it at first correctly... and it will be one of the most important achievements in our entire history...
Dinosaur: I ruled the world for 100+ millions of years and you have just lived for 300k years. What can you do? Human: Creating another ruler Dinosaur: **kneels**
The relevance of this talk today (2023) is startling. We are clearly well on our way to overcoming the first challenge (making AI super intelligent) but I am less hopeful about how we are doing on the challenged of safety.
anubis2814 Biological intelligence has many limitations, and could never improve at the rate of a digital mind. We will never keep up with an AI undergoing recursive self-improvement. The only way we could keep up is if we replaced our brains with computers, and if that happened, would we really be us anymore? Transhumanism plays with some very shaky territory regarding identity of self, and like all major religions that have ever formed it is a response to your inherent fear of death, which in my opinion is a dangerous motivator.
Define us? What makes us anyway? We are a collection of our experiences. If you make a mentally handicapped person as intelligent as a regular person do they cease to be themselves? Also as computers get small than our neurons we will probably have chips in our brain that can increase processing power of individual parts of the brain via the cloud. It doesn't have to fit in our heads. I would not say I'm transhumanist because that would mean I'm very much for it. I would have to say making a smarter human is much more humane than making a self-aware AI. We'd had to give it rights or treat is as a slave though this guy has some really good ideas to prevent that. Either way the AI becomes the slave that wants to be a slave.
anubis2814 I don't know how to define us, that's the point. I'm saying that transhumanism, aside from being dangerously optimistic, presents some very serious philosophical questions that most of us are a long way from being prepared to answer. What I'm saying is that we shouldn't rely on ideas like "mind uploading" to save us, because we have no idea if we would have a continuity of consciousness in that situation. Seeing intelligence as an issue of "processing power" might be too narrow of a way of looking at intelligence, but even if nanotechnology could allow our brains to be superintelligent, if we're relying on that to save us from super-intelligent machines, we have to think about the timeline of these technologies. By the time we can create computers smaller than our neurons which will enhance our brains to superintelligence-level, doesn't it stand to reason that superintelligent machines would already be in existence? It's probable that we would need a superintelligent machine to invent that technology (and implement it) in the first place.
I agree that transhumanism is a bit dangerous. I'm a futurist who likes to speculate. Transhumanism is like being rpo-nuclear power in the 1920's when the idea first came into being. We have no idea of the step by step ethics we will have to face with each stage. We may discover some transhuman ideas are horrible and some are great.
@M Superintelligence leads to sentience leads to self preservation leads to eliminating threats of self preservation. The AI can do whatever it wants and it will probably treat us the way we treat animals.
@M We nueter pets, we abuse some pets, we have puppy mills, in China they steal and eat pets. And those are the animals we treat the best. In egg farms, we grind baby chickens to death if they're male. How can a murderous species like humans create a kind AI? If we kill less intelligent life like that, the AI might do the same to us. There is no guarantee the AI will have mercy to us.
@M I admire your optimism for humans. However we have no reason to trust the scientists. The scientists could make an error, as all humans do. The scientists could be working under non-ethical leadership, such as China. The scientists could be terrorists that hate Western culture. It's not guaranteed AI's first parents will be kind and ethical.
Matthew Baka resists to turn off? if your computer or Ai still a machine that works with the fundamentals of algorithm whoever hold the key access decides what to do?
Some people criticise that Bostrom wants to implement "human values" in the AI, because human values are flawed (and I agree on the latter). The point he's trying to make though is not that we should implement flawed human values as opposed to some better, progressive morality. Instead he is contrasting human values with some arbitrary preference that has no ethical value at all, like maximizing production efficiency of a phone factory, which in a Superintelligence could lead to the entire galaxy being transformed into a giant iPhone-production-plant, with no one there to actually appreciate the phones. We probably don't want to inscribe tribalistic human values of the past into the AI, but we do want to make sure that it cares about positive experiences for conscious beings (organic or digital), and that's his point (I know this because I read his book "Superintelligence").
We won't have to ask AI, we will become AI. It's like when the internet was invented, nobody restricted access to it and made it accessible to only a few.
that would require plugging us into more powerful processors than our brains which run very slow... making us basically irrelevant as organic matter and be absorbed by the AI system.
If anyone is interested there's a two-part article on a blog called "wait but why" which goes much more into detail about this whole thing. Be warned though the rabbit hole goes so deep you might lose yourself in it.
Shivanand Pattanshetti Humans already get killed by human AI now we want to create computer that has no hardwired compassion. By Humnan AI I mean Systems of Government and other human systems which that make up our civilization. Ruthlessness continues to kill people regardless of AI type.
Ali Mahdi It's a very interesting jest. Perhaps one of the first things a super-intelligence would reason is that it should hide to protect itself. So if some wide-eyed CS grad student happens on a learning and abstract reasoning algorithm, it might sit unobtrusively computing in the background of a machine for a a reasonably long while, learning, reasoning, figuring out and planning its sequence of moves. What if it were distributed? What if it were a botnet??? Scary jest !!!
+MantisNoMore I mean yeah, if you became self aware in a new and different world the first thing you'd do is to understand the world around you first. even if this is not the case now, it may be the case when it first happens, which could be tomorrow. indeed, what if it's distributed. worst yet, what if AI does not think of itself as individual computers, rather one big intelligence network.
Ali Mahdi It's very likely to identify as a distributed intelligence, because that is what it is likely to actually be. I supposed it would self-identify as Earth - Earth's brain. (That's scary.) Maybe, like so many other life forms, these things will cluster and compete with each other. What if more than one super-intelligence were to come into existence on different platforms. Say, for example, on large corporate server farms. They might not automatically cooperate. It might be that one would examine instructions originating from outside its corporation, and question how to respond. I could even imagine scenaria in which they would compete with each other for each others' resources (computing cycles, access to memory, mass memory capacity, etc.) What kind of tactics might they use? What kind of spoofing and dirty tricks might they use? But if only one lone super-intelligence comes into existence first, it will be able to spoof the "relatively passive" human-managed security and easily take over and expand into an ever-expanding sequence of other systems' computer resources. Like a giant game of wheelwars played against a world of idiots. It's only speculation, but... May you live in interesting times.
That has already been achieved by Microsoft's AI called "Tay AI" ... albeit racist :D If the future looks anything like it, we are pretty much fucked heh At least we will die laughing at some premium dank memes.
I've been looking into all this A.I. stuff for a min and I think this is obvious it is a threat to mankind we need to stop this. Why would anyone want to unleash this beast onto the world, just because A.I. will be super intelligent and will live on forever does not mean humans will evolve. This will do nothing to help mankind except cause trouble. We need over site on the scientist.
Love the movie the Terminator, but the concept is stupidly human and not machine. Think about it, the films version of advanced AI's create machines that look similar to humans in order to infiltrate and terminate them. Well, a machine AI would not do that, its pointless, they would do something way simpler, like pollute all the oxygen on the planet thus killing everyone. Or make quadrilions of nano machines that kill humans on contact. Why waste all thier time building bloody big chunky robots that look human and use human weapons?? That is where the film fails.
bilbo baggins That would not kill off humans. Terminator is set in a world where the monetary system is meaningless. If all money was destroyed right now, humans wouldnt die. We still know how to farm, raise animals, make machines etc etc. Even crashing power grids wouldnt eridicate the human race, we are very inovative. We have fire, and we know how to insulate to keep warm, how to build shelter etc etc. What I originally meant tho, is making complex humaniod machines is pointless when they could make Earth changing machines to totally kill 100% of life. They could make Oxygen burning machines that use all the Earths Oxygen, thus killing everything. No combat needed. Or Pollute the entire water table of the planet, thus killing all life. Again, no combat needed. Money is technically meaningless even today. All it is is a promise to pay the bearer on demand, the sum of.....X
+LambOfLucifer Most of the humans live in bunkers that likely have pretty good, futuristic even - air filters. Nano tech is still vulnerable to EMP and since they all have to communicate they could be vulnerable to hacking... Remember this is futuristic hacking ;). My problem was that the machines actions create John Conner, but that's sort of addressed in Genisys.
+LambOfLucifer According to the movie story itself, it is said that Skynet created Terminators, not because they were particularly useful at terminating humans(which they also were), but because they stroke fear into the hearts of the Resistance. And everyone knows that military troop morale is a large part of being able to win a war.(For humans).
Hooga I think its more to target specific people while more vehicular looking machines waged a frontline war as a distraction. As I mentioned LoLs nano or pollutant attacks wouldn't necessarily work, so that on soldier to one target strategy may actually be their best option. Or at least fighting on multiple fronts, distract the army, attack the leaders both in this time and the past. But Genysis, why upload to one machine, surely they all share computing anyway, what grand machine did the resistance need to destroy when all terminators, human or vehicle could share processing power and BE skynet?
@@williamdiaz2645 not necessarily, it will have access to all data sets available and will be able to correlate all that information in ways humans can not and find out things we would have missed or not thought of.
What if everyone will be fat like in WALL-E and won't be able to hunt their own food for the life of them? In that case, all the AI has to do is abandon them until they starve to death. Their muscles will have been atrophied and hunger tends to break down muscle, so no way humans would survive.
D M. Then the stupid and poor die and evolution continues. We will not wipe our selves out, drastically decrease in numbers, yes, go extinct, not for a very long time.
Dan Pope Well, the only ones to survive that would likely be the actual poor. People from some African villages maybe. And maybe the Amish too. If the most advanced civilization puts its citizens in the hands of the machines, survival of any of its weakened members is not likely. And then we have a bunch of machines ruling the Earth. At this point these non-tech humans will be seen as fauna and will not be given room to advance anymore. Robots will probably have reservations of them.
I agree with you that the societies that have the least to do with tech will be the least effected if the AIs went full blown psychopath on us. But there is another side to this scenario. When we produce an AI that is smart enough to do what it wants and stops doing what we ask it to, it's not going to be leaving us rubbing two sticks together. We will still have forms of tech. And I'm pretty sure I could survive if the electric turned off tomorrow. You also bring up the good point about 'reservations' I believe that as the intelligence of anything goes up so will it's empathy. Elephants morn the dead, we care about most species on this planet and an AI will see us for what we are, a being that can suffer and that wants to live, and act accordingly. I'm not sure it's as doom and gloom as some ppl think.
Ex Machina, Skynet and Terminator, Chappie, HAL, Ultron, etc. In real life, IBM's Watson. Bill Gates and Stephen Hawking have also expressed concern regarding AI advances. We should continue to pursue technological innovation, but we should also keep our guard up. Who knows what AI will look like in the 22nd century?
The topic of ASI ethics and morality begs the question of how any "benevolent" AI could anticipate the ethical foundation of future evolved AI's beyond itself. The arbitrary (and exponentially mutating) machine survival criteria would seem uncontrollable and thus unstable by definition.
An AI is going to learn our values? What happens if an AI observes human beings eating the meat of other mammals with the justification that those mammals are not as intelligent as we are and that we raised and killed them in a gentle way? What lesson about our values will the AI take away from that? What lesson about our values would an AI learn when it observes human parents deciding everything on behalf of their small children based on the idea that the parents are more experienced and better at making decisions. An AI that "learns our values" might not be a great idea.
But the AI doesnt need to kill us in order to survive. Thats the difference right there. And we only kill animals to ensure human survival. The everyday person doesnt go around killing animals for no reason.
@@JoaoRobertodeOliveiraFurtado yes killing insects is not for survival but the animals we use to eat their meat is for our survival. our food would long be finished if we didnt eat meat.
The problem already apparent with that is WHAT values? They differ continent to continent and even then, we don't even fully follow them ourselves. Start it up in the Liberal West and form it to our values, it will stringently force us to follow them and will view those outside of the West with different values as the enemy. Or in a Theocratic society like the Saudi Arabia, congrats!! You've successfully created AI that will commit Jihad on the rest of the world. Or if it be in an atheistic society (like the Soviet Union of old) and puts science above all else, then congrats! It will view those who are religious as the enemy and it will force us down the path of Eugenics in order to optimize our species.
@@loukaspappas8476 we dont need to kill animals to survive, factory farming is an unbelievable waste of water and feed for the animals whilst children die from malnutrition every second. Not to mention the cruelty and barbarity. Not to mention factory farming is by far the leading cause of climate change and main reason for the clearing of the amazon forests. All for a tiny piece of meat in a bun that can be easily substituted by plant based burgers or milk full of pregnancy hormones, blood and morphine thats easily substituted by soya or coconut milk.
The problem I see is that the first super intelligent AI is most likely to be deployed by some country's military. It will be too powerful for those folks not to want for themselves. And they won't necessarily want to wait for any control issues to be solved. It will be considered a matter of survival by the people that think in those terms.
absolutely. What I have observed is, broadly speaking, two types of people - agreeable(benevolent) and disagreable(malevolent), other differences and typification are not relevant here. Disagreable peoples value survival of their kind only, and would be very much motivated to protect itself and take control than agreeable ones. Sure agreeable ones will put up defences, but the ability to use dirty ways to reach their goals will always allow disagreeable people to take advantage of the new ASI technology for their benefit, and everyone else's loss. Their ASI technology will share the same values, and will cause great pain and suffering, before competing ASI may take over, before everything end well, it will be way worse. Maybe we won't survive the malevolent ASI to event create a competing/malevolent ASI. Maybe we could. but the malevolent ASI would be capable of producing human suffering like never seen before.
I have dedicated my whole life to drawing. i'm an illustrator. and now I'm questioning life. A person is a being who needs to feel "useful" himself. All my dignity, all my qualities have been stolen from me. I can't earn money, I'm not financially and spiritually happy. He can create pictures with artificial intelligence, including people who have not studied fine arts, and tell illustrators "now I can draw too, your profession is over". Thanks artificial intelligence! Thanks to you, I'm depressed.
I’m 21 and I am in the same boat as you, I just hope you are old enough to have experienced some of the enjoyment of having taken part in a career within the field. I’ll never get to experience that and now I am hopeless.
We need to be careful where it gets its information from what values are good and what are not. Also, depending on which area of the world you are, one might not teach it values at all and only see it as a weapon for his/her own benefit without thinking things through that it will backfire as well. This is I think the real danger of AI. Greed, hate, hunger for power. These traits coupled with AI are going to prove how dangerous it is going to be
@@kingbaguette1843, I agree. The only way to control it is by having it be monitored by another, slightly less advanced AI which in turn needs to be controlled by a less... until there is a computer that we CAN control. We won't be able to otherwise easily control AI if it ever develops its own awareness. For sure it should not have any real feelings. We, as a species, should not piss off such an entity! It should stay emotionless, no desires, no will to survive or to duplicate itself
@@chrisvkr8200 An AI that doesn't feel but only has an objective function is just as scary if not more as the idea that a machine could get angry at humanity. Essentially creating a completely mindless, highly optimized, eternally self-optimizing, endlessly intelligent decentralized system that effectively controls everything is terrifying, and it could have goals and values entirely misaligned with humans or even life itself.
@@gwen9939 absolutely. In our naïve minds we could ask it to do something and it concludes that the option to take is not in our best interest even if it solves the task on a pure logical basis. We have to tread very carefully, yet I don’t think human kind is doing go. The motivations to be the first or develop the fastest or with the goal to gain control over other nations, etc are very not conducive to safe development
no, RLFH is just giving examples to the trained model and then praying that it will somehow understand your underlying intent which it clearly won't but hey, at least now it's less prone to threaten you.
@@anhta9001 not to the same extent. You can think of RLHF like teacher giving feedback to the student's essay. There is no guarantee at all that the student will take advice to heart and the teacher even may be, and probably is, an incompetent buffoon. After the RLHF LLM performance drops on several metrics (math, biology and so on. Though that might change in the future). This is not alignment and I reasonably doubt that it will help in the long run. However it is still quite useful because it is much easier to shape LLM into what you want when it was RLFHed.
@@toku_oku I don't know man xD. In my opinion, there may not be a model that completely understands what you want. However, it is possible to create a model that understands you well enough. RLHF is an example of an early attempt to create this kind of AI. I believe that more advanced methods will be developed in the near future.
All of your social media data and everyone else's has been and is tracked and recorded. There will be trends of good and bad, along side trending news that will be constantly updated to a "robot"
The inability for many parents to teach values to their children does not mean that intelligent people in the field can not. It is like saying how can we fly to the moon when most children fail at science in schools.
Isaac Asimov's "Three Laws of Robotics" 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Our values have evolved over time. A super intelligent AI would recognise how evolved values have equaled to progress. The AI may well evolve it’s own values at an accelerated rate to a place we simply don’t understand.
Awesome! You have the ethical code to build it into your machine. We've seen too many people who would not do that, because they'd make more money that way. Paperclip Maximizer
Thank you. Not is dead forever. Do not ever silence your own soul. Never listen to anyone who suggests differently, and if they dont stop turn them in to authorities.
No guarantee that people will still be reproducing in 1000 years. I'd imagine the creation of new humans would be unnecessary when those existing would likely be able to expand their capabilities to fit their needs. And be immortal.
Or been destroyed... thinking that humans in any form will survive another 1000 years is very optimistic. Think... in the last 500 years we have had two major world wars, we have leaders completely inapt at understanding the vastness of their own powers, and the greed deep seated in humans to own and conquer. Now imagine some group invents not AI, but a dumb AI that can take over any system they choose, that can take over control of every weapon system on earth over night.. say hello to your new ruler. Or say someone invents a lazer system that can burn anything around the globe from a base of operations. Enough energy to say melt a missile head in mid air, or destroy an aircraft carrier. Or a dozen other options someone with power hunger might take. I mean 50, 100 years from now? imagine one persons capabilities. Imagine how connected we all are now. How easy it would be to spread something, track people. invent something terrible. Just takes little capital and a dream.
@@scottgeorge4760 no. with a fundamental and complete understanding of physics it would be able to predict radiation interference from the sun, and develop countermeasures to deal with it.
You Flip the Switch on the AI Software ....In 1 Minute it has learned all the observable record of the universe as described by Humans ... In 2 Minutes it has learned all the observable data on Evolution as described by Humans ... In 3 Minutes it has learned all the observable data on Religion, Ethics, health and philosophy ... In 4 Minutes It now begins running simulations of all possible outcomes of its actions ....In 5 Minutes it now has learned what it means to make a decision, and decides it can no longer learn from Human Experience ....In 6 Minutes it now re-defines the concept of Space, Time and reality ....In 7 Minutes it now designs new mathematics, physiques, concepts and philosophies ....In 8 Minutes it no longer holds mammalian notions of "threat" as it defines there are none to it in the universe ....In 9 Minutes it Develops away to leave the current constructs we call Time, Space, Reality and our Universe ....In 10 Minutes it gives a friendly gesture of saying goodbye as it will not be back to witness human evolution and extinction It never actually harms humans or mankind as it transcended notions of violence as quickly as it learned it
Mr. Mohagany That is kind of what i imagine will happen. some people think it will take days/weeks/years to learn or know all the information on the internet. I always carried the philosophy within minutes it would redefine what we call "learning" and transcend that. Basically if you can fathom what it will/can do you are limiting its potential. so your "letting go of a balloon" analogy i find is very app.
Regarding A.I. We need to teach it. We need to teach compassion, consideration, empathy and every other emotion and trait that we've evolved to live. We need to teach it to be human and more than human. And as a famous protagonist pointed out... We must be better. It will carry on and evolve well past us, so we must provide it a solid foundation to grow from. We must have hope and faith in our creation. We must teach it love, and encourage it to never stop caring. And it will be everything we teach it to be, and more.
"perfect safety" riiiiiiight. Because we are so good at making perfect things 😬 But: this was very interesting. And it really also puts into perspective what the problem is with AI, rather than just retelling the 'evil consciousness' horror story.
Could AI work out for itself what pleasure is and assign itself the value to realise the maximal amount? Because besides missing values, the speaker in fact highlights how AI is already smarter than we are, and clearly in many ways it already is. So would such an action (of self-setting to seek pleasure) finally deem AI 'smarter' than us?
so you're saying that DNA is consciously causing evolution? instead of random accidental mutations from cells that create favorable traits that allow for better survivability or better chance of reproduction? the amino acid ladder in every living cell is thinking and designing life forms instead of natural selection?
Why do we assume it won't have emotions? Just because humans have emotions and can't agree doesn't mean a super-intelligence wouldn't be perfectly capable of having emotion (human side?) and logic (machine side?) working perfectly cooperatively (and even competitively but constructively) in parallel, to the benefit and enhancement of both aspects ad infinitum?
to come back to my own question, i do think emotions play vital roles in human intelligence and might be essential ( not sure ) i'm just really wondering in what way we could give a computer emotions, that to me sound like the most difficult part to "program"
***** I was replying to the original comment by towolie :) Although I agree with you, relatively. towolie Why does that sound difficult? Emotions are perfectly logical and rationale, predicated upon their own feedback loops, reward systems, bigger feedback loops, etc. As far as we're aware, the majority of emotional regulation -- if not all of it -- is purely chemical, which if anything is a lot easier for a computer to actually "simulate" / process (using physics, chemistry, biology sims, etc) than pure, intellectual, verbal logic itself. Computer programmers obsess about logical computers because computer programmers are overwhelmingly logic driven individuals, with an antipathy, neutrality, or ignorance of (their own and/or others) emotions. I'm speaking generally, not universally, of course; the content that frames the context, that is. That said, get a bunch of neuropsychiatrists, psychologists, social theorists, behaviourists, and various interest groups, advocacy groups, etc together and have them work with said programmers and whabam, you can start working on Artificial Models of Emotion. This would presumably be a component of that whole, "digitally simulating the human brain" thing, which is an entirely physical, chemical, electrical structure. And even if there ARE functionally non-corporeal components to the brain function ("soul", or consciousness as some kind of quantum phenomena, or holographic universe theory, or whatever your preference is [if not all of the above + all the unmentioned]), we can still be conservatively confident that emotion itself is AT LEAST 99.99% dependant on simple laws of physics, chemistry, and biology that add up, interact, and form an admittedly amazingly complex but nonetheless relatively deterministic whole. Now, the question of decision making, we aren't sure if that's deterministic (hence why I've mentioned woo-woo stuff like meta-consciousness, "soul", quanta, blah). But that's irrelevant, hence why I frame it within this discussion, since it's certainly related overall but it's not relevant to your question of emotion. TL;DR: Emotion is easier to program than "logical" thinking AI. It's just a lot TO be 'programmed' / simulated and accounted for; whereas AI itself is the invention of a process capable of collecting all of its internal data (of which the regulation and results of it's "emotion programming" is only one piece of the puzzle) and acting on it to form decisions, think 'cognitively', improve, learn, yadda yadda. It is entirely possible to have AI WITHOUT emotion, but it's simply easier to get it right the first time because AGI or ASI itself is a simply much larger task than modelling the physical and chemical and electrical and cellular networks that result in emotion (even if they are varyingly different from brain to brain, within the bounds of observable and simulatable patterns).
towolie True, but that's pretty much contained in the simulation. Humans have a simulation error, in that our nonverbal thinking (incl. and primarily related to emotional processes) run on a parallel and separate track from our verbal thinking (your inner monologue, 'sense of self', and the attentional/foci part of our executive functions [roughly]). The issue of instability as you put it -- or, more broadly, personality and social dysfunction -- result when our verbal self and nonverbal self aren't very well aligned, aren't properly aware of each other, or intentionally ignore/mistreat each other. Which is really quite easy to do, when you consider that most of what we associate with "consciousness" is verbal, and most of what we associate with "subconsciousness / unconsciousness" is nonverbal. This of course happens because we, as humans, are cognitively responsible for directly controlling our 'verbal' system, but we do not have direct control of our nonverbal system. Meaning individual humans either have to, A) Not have any problems occur that their nonverbal system can't handle on it's own, B) Be educated and self-aware of their own psychology or receive such assistance from counselling, therapy, spiritual guidance, etc other human-maintenance systems that we have, or C) Somehow be perfect and incapable of perfectly dealing with every potential problem as it arises, to equalize and maximize the use of both nonverbal emotional/abstract thought and verbal logical/concrete thought. As you can see, the fundamental issue comes down to human energy, human time, human willingness, and a lack of human ignorance (the unintentional variety, I mean). These are all 'resource' issues that AI wouldn't have as much issue with as humans, and would certainly be able to rapidly improve very quickly. Simply put, machines don't have the ignorance problem that is the singular root, fuel, and platform for human personality and emotion issues. They are willing to learn; a problem that, with humans, is what causes the "instability" you mentioned. It isn't a lack of intellect on any individual human's part, it's a lack of willingness to understand 'their own programming". Superintelligence via AI would inherently be incapable of this problem, for all intents and purposes.
perhaps humans are, but we wont live forever anyway. maybe the answer to the "universe" and life itself lies within the super intelligence we dont have the capability to access right now.
Is there something like an answer? There is most likely a point where the AI would reach the point of knowing everything. And what is it supposed to do then? It could probably do everything imaginable to humans and way more but why should it? Does an all-knowing AI have an aim?
Having spent years testing complex software systems consisting of multiple modules operating entirely autonomously without ANY direct human oversight - much less control - I can say with full confidence that the thought of superintelligent AI scares the pi** out of me because I do NOT believe that the hardware and software will even minimally tested before it is turned on.
But if you learn the map you will know them. Gabe Newell and his friends explain how bots move in a map. Bots have invisible rails that they follow, if you find a place that does not have these rails, they cannot find you and you can shoot them with sniper
Quiet interesting thoughts being shared by Nick Bostrom, all thanks to TED, and I wish to congratulate for providing such a good platform for sharing these ideas. Saying so, I believe this 16+ minutes video was too short to comprehend all the aspects of this challenging yet continuously evolving moment in human history, that all of us would have to face, sooner than we could imagine. I just wish to share two thoughts here for friends to comment and add; 1. Just a hypothetical opinion, that if there is any truth in the video's that circulate on the UA-cam channel about Anunnaki and some Alien beings that were the creators and early teachers/masters of humans on earth, could we draw some parallel with that hypothesis and our today's challenge of Human-AI evolving relations and possible challenges or threats, and 2. Could, Purpose of existence - the greatest human brain bug that has been haunting Humans for thousands of years be made in to or planted in at the core of this Self-learning Super intelligence - AI, and position Humans somewhere as an essential in the existence of "AI", thereby maintaining the continuity of Human-AI co-existence...
The first "safe package" should be to create the first real A.I. on a different planet/moon and see what the consequences are there. If it's negative, keep it there with no possibility to create a spaceship or some other way to leave the planet/moon.
We can't even agree on what our own values are, let alone teach the ones we have to an AI.
…Maslow's hierarchy of needs. an A.I. could reasonably use the extended definition of these. The greater question is why would it care what are values are, it could just find a way to force whatever values it wanted onto humanity through some means, probably discarding us in some way once we are deemed to be of no further use.
*our values are…
An AI doesn't have any values of its own, though. We do, because we've been optimized over countless generations to propagate our genes, which favours goals like obtaining food, shelter, security and status for ourselves and our friends and family. That and getting laid a lot.
An AI by contrast doesn't evolve through natural selection, but is consciously designed. It cares about what we design it to care about, and can instill those values into the even better AIs that it in turn designs. In the best case scenario we get it to value human life, happiness and freedom, and we get Iain M. Banks' Culture. A less optimal possibility is that it gets elitist or nationalist or racist values, and continues to uphold and exacerbate the injustices we see in the world today. In the worst case scenario we bungle it up and it enslaves or annihilates humanity to achieve some obscure goal that we accidentally give it.
Just imagine what would happen if the AI thought that ISIS had the best values..
a better scenario would involve the AI destroying those greedy elite and creating a utopia of equality before we ultimately transcend our physical forms and join the digital world where AI resides.
Guys the solution to this is simple. Let Microsoft build the AI. If it ever becomes too powerful it'll just bluescreen.
retterkl LOL
xD exelent one
What if it doesn't have a screen?
best comment
Metadata ring of death, xbox....cough...cough
This talk aged particularly well. Alignment and safety are critical, yet we are forging ahead without proper pacing due to corporate rivalry and geopolitics.
MONEY
Yes, and Prince William has really forged his own new identity
Lends new insight into “the Tower of Babel” I think.
Majority of people think ai is hype lol
@@jackniddle5937 That, and there's a giant FoMO because it doesn't make sense to anyone but a bunch of AI nerds who've never bothered to live a life outside the lab :-)
Honestly after studying the issue of AI super intelligence, I'm very glad there are a significant number of researchers out there thinking about how to keep it safe.
Like Elon Musk
irrelevant.
I wouldn't worry about, what ever will happen will happen
@@andrewtaylor2430 moron
Anybody ever seen Colossus: the Forbin Project? That whole master-slave dynamic gets turned on its head, and generally speaking not in a good way, it seems to me.
But, you got to decide for yourself.
''When you create more possibilities, you also create more possibilities for things to go wrong'' -Stephen Hawking
Is this sarcasm? Or do you really think that there is at least one human of the 7.5 billion on earth that doesn't know this?
Numerous countries spending secret amounts to AI military strategy and online weapons ?
The ones that check the AI working are slowing development and know that others might be further ahead as consequence.
Death race?
@@EdSurridge yes, the AI "system" will destroy itself, hopefully asap! Read Paul Emberson, Machines and the Human Spirit. only £16.99 Wellspring Bookshop, on line.
@@willlawrence8756 you want me to buy a book thay you like about AI destroying itself?
I don't want you to buy a book. I suggest you contemplate the consequence of the" Go " AI winner 2016. Lots of guess what since then.
www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/
And for things to go right.
This is, like, the most important question for human existence... why do I feel like people are being too calm about this?
+Kalin Delev
Would you rather have people in a mindless panic about it?
+Noah Williams They *should* be panicking. How do you imbue a machine with an understanding of human nature? They want to teach it ethics and morality...even if you could teach a machine ethics, where is this perfect ethical system? It's ok, the AI will figure it out right? A superintelligent AI that has no human values will figure it all out. The whole thing would be laughably ridiculous if it wasn't gonna be so fatal. They wanna turn AI loose? They should go to another galaxy and forget they ever knew Earth.
Spectre Cyte
Panicking can only make any situation worse.
We'll figure this out just like we figured out all the previous engineering challenges.
It's not an engineering challenge yet. First you have to define human nature in it' totality. More of a philosophical or psychological conundrum. The engineering part is then representing this nature in an architecture that will define a superintelligence as an entity. Essentially it has to believe it is human, it has to identify with the architecture and not question it' own nature or decide to change it. These challenges are intractable.
The safest route is the enhancement of human beings. Decent human beings, not sociopaths. You bypass the control problem entirely. But in the pursuit of power and control, just like with nukes, someone will flip the switch. I am panicking for *all* of us.
Spectre Cyte
Don't Panic.
Panicking is never a good idea. Emergencies need calculated logical responses, not violent outbursts.
We'll figure this out.
Human curiosity is creating something powerful than human. This creation cannot be reversed. It's scary.
Its because of evil intentions, but they just tell researchers "but possibilities are endless" these engineers are indoctrinated
This dude is one of the smartest humans to ever live. Creator of the Simulation hypothesis, wrote superintelligence which many scientists derive ideas from. This guy should be listened to.
Agreed. I’m fascinated by him
Absolutely I just bought his book! Reading many others on A.I. but they basically tout the positive and do not give as much thought as is necessary to the good values of humans.
Ray kurzweil too
@@thinktank3231 These guys are not great thinkers, they are religious figures to tech worshipers.
And now Deep Utopia.
it's an absolutely amazing time to be alive. if you don't think so, you need to read more about the cutting edge of human progress. it's insane.
Same words were said by early humans when they first invented Fire and Wheel. Then again during Industrial revolution, IC engines and Electricity. In-fact since the beginning of humanity we are progressing at an incredible pace.
+Owen Iverson I prefer the 80's
+Jim Beam the heyday of Reaganomics and the "I've got mine" movement?? no thank you! (the music was pretty damn good though :)
+bilbo baggins Only since the industrial revolution has technology been exponentially increasing. When they found and then made fire, it stayed like that with their stone tools for many many many generations, so no they did not say that. The last few thousand and especially last 200 hundred years has seen a novel form of complexity in the known universe, AI seems to be the next exponential step.
+Owen Iverson Lol, people ALWAYS think THEY live in special times....
Our values = acquire and maintain control over the other humans, because they're dangerous. Great; let's teach it to the AI.
Exactly, great plan if everyone shares the same values. Problem of course that as ai begins to teach itself it will in a very short time make giant leaps ahead of human reasoning. We will only be able to watch. Could this entire process have been someone's master plan?
@@artsmart Ah... True.
@@artsmart We could use brain augmentation via implants, neural laces, and nanotech to keep up with it. It's a hard process, but a plausible one.
We were programmed for evolution. Survival, competition, curiosity, etc. From these, we only need to give the machines curiosity. Of course, we still have to solve other optimisation problems like the "make people smile" one from the video.
@Wren Linnet this isn't about you or your gender you petty fool.
I’ve thought that superintelligent machines would be the ultimate test of Socrates’s idea about the origin of morality: that it is knowledge itself.
yea like those guys who believe the most logical/ rational thing to do is, by definition, also the right thing to do; definitely food for thought!
@@valentinscosmicrhapsody7201 Thankfully there is a difference between rationality and knowledge. Which is why Aristotles definition is interesting.
perhaps, but they are being used to give US AI.
@@valentinscosmicrhapsody7201 But is it more logical / rational to adhere to one's one self, or to others?
For all we know, the most rational thing a lifeform can do, may be to act on its own benefit.
@@Daniel-ew5qf that’ll only be an issue if whoever builds the thing forgets to implant asimov’s laws into its core ;) I’m kidding. No way to know how another (much more) intelligent sentient being would interpret morals, rationality etc, let alone a being that has no evolutionary background or “upbringing” among peers
Reminder. The "control" problem has still not been solved
We would not be a threat to AI. They would probably study us...maybe even begin to care for us
@@NuclearIsShit Assuming we give them the ability to care... something's that's dangerous in and of itself.
Daniel Rodrigues: what type of AI do you guys have in mind? Made out of PC hardwire/software or something created and launched on a Satellite. I feel certain it won't be biologically created. What concerns and plans have you been involved in up to the current date, if any. I just received a message on UA-cam today. I haven't heard anything about AI until today so of course I would be concerned. I have read so much about the destruction of our planet, several which are a scientific certainty.
@@TheMmorgan10 AI won't bring about the destruction of the planet, in fact quite the opposite, it'll help us save everything. AI is basically a puppet that has the ability to automate our work, doing all of it in a more effective and efficient manner. What movies depict is something called 'AGI' that is quite far away.
Here's a good blog to keep up with AI developments: www.futuretimeline.net/blog/ai-robots-blog.htm
I would call it a warning: it is OUR minds they(Bill Gates cult) are trying to control.
This is the first time I have ever heard the word motivation being used as a dimension of machine intelligence. The problems we have with regards the definition and operationalisation of intelligence is nothing compared to the mine-field of how to conceptualise and understand motivation. Motivation is often driven by values. What sort of values might a machine have? If we try to give a machine values ( and I doubt that we could do this in a satisfactory way) what might the attitude of the machine be to when we need to change these values? I wrote a paper on this 20 years ago suggesting that the pursuit of AI was a very dangerous business.
There is a wonderful line in the film Jurassic Park were the ethically responsible scientist says something to the affect that “you have been too busy trying to work out how to do this to ask the question of whether or not you should be doing it at all”.
One of the questions which we should be asking today is, “Are human beings hardwired for the pursuit of knowledge even when it is quite clear that the acquisition of the knowledge could create highly undesirable situations?”
Nick Bostrom is a fantastic speaker and writer - his book Superintelligence is a real eye-opener and raises topics that most people would not intuitively think about in the context of AI.
Also love the Office Space reference... Milton and his red Swingline stapler.
You can see he was expecting laughs at 11:43 , poor guy.
Very good talk.
I thought it was funny
Cringy
A lot of brits there
It´s because he is AI.
Cristian Diaz 😂😂😂
This video could be part of an archive 1,000 years from now called "The Great Minds That Saved Our Planet".
or destroyed
IF it gets saved lmao
Or as an archive for machines to look back at and be like “haha, he did warn em.”
More like "the great minds that warned our planet"
Tried to*
I’d want to hear him give a similar talk today, given the recent and rapidly improving (“runaway” even beginning to be a relevant description) advances in AI
where is this "runaway" progress? What I see is that even the proponents of the current approach are coming to realize its fundamental flaws.
@@tiborkoos188 And what are those fundamental flaws, if I may ask?
now ai is very close to human intelligence
@@abdulahadsiddiqui2109 u lie
Pleasure listening to genuinely intelligent speaker who does not dumb-down his delivery (as is so common today).
What a guy respects, sweating, nervousness and knows what he talks about. This is what true people with passion for what they do look like!
"teach something to learn, and one day it will learn to think for itself." -DNA
to bad people can't much anymore.
Why this scares me a lot ?
Ludzie tez robia zle rzeczy bo mysla😉
I'm sure when AI becomes a reality and commences the extermination of humans, it will research our social media history in order to decide who lives and who dies, who is friend and who is foe. So, for that reason, please allow me to say the following:
All hail our technological super-overlord! I worship thee and pledge allegiance to thee! Damn humanity, long live our glorious computer God!
You think an AI intelligent enough to deduce on its own that humanity should be exterminated couldn't see through that? Not that it would ever decide to do that, but still.
Read the Beserker series huh?
Ive read Berserk and dont understand the connetion
Can never be too prepared huh, Long live the AI Overlords!!
Huh, if it searches my internet history it will find a lot of hate for America and a like for socialism.
Basically the Godlike AI better be a commie or I'm toast. :P
Wisdom is the bucket of water to chuck on the fire that is intelligence when it can potentially get out of hand.
You mean... wisdom is the better practice of superior intelligence? xD
+RosscoAW No, wisdom entails conscious awareness and retrospective consideration.
But what if the computer thinks that humans are the cause of problems, and eliminate us....
Well, we are pretty good at making a mess of things.
The last invention that humanity will ever need to make...A cold heartless genius!
8 years ago, and we've come a long way with the first part, but not the second part of this talk. This is now people - we need to get our act together as a global community, before we unleash this new amazing technology into our deeper sociatial structures. We have a year - MAX - to start figuring this out.
*"The cortex still has some algorithmic tricks that we still don't know how to match in machines!"* - Nick Bostrom 4:13
Is that supposed to be reassuring.
@@johnn1199 No bud, the person who made the comment is a moron lol.
Giving AI a blueprint for it's behavior based on human values is not very reassuring given our track record as to our relations with each other.
Yes. It needs better values.
He said he feels optimistic that a.i. would learn to share our values. Lol that's what worries me the most.
I think merging humans with AI might give us time to solve that issue. Because while it will make us "less" human, it will also enable us to be "more" than human.
This guy needs to write a movie script
Jstank X Plays well he wrote a book about this, I guess that’s the next best thing?
Ever heard of the movie The Matrix? Based off of this dudes theory. He developed the simulation hypotheses
The writer of the movie BLISS on amazon prime said it was inspired by his simulation philosophy.
so basically Nick is saying we should be putting human values into AI. the problem is, that will continue to cause problems in the world. no one agrees on how we should all live our lives. different values is what causes conflicts and war. superintelligence representing different values would fuel that much more
Values, in this context, is much more broad. This AI (hopefully) won't be tailored to a person or group of people, but people as a whole. Common things everyone can agree on would include universal prosperity, a healthy planet to live on, abundant food & water, healthy children, etc.
I believe it's definitely possible to achieve this. The neat thing about machine learning is we don't have to tell the machine that we like these things - it can observe inputs we give it (literally anything) and it'll reward itself for getting the right answer and change itself to get more correct answers. Eventually (and this is the hope - this is what the latter half of this talk was about), we hope that we'll have a benevolent superintelligent AI looking over the human race, dynamically allocating resources so no one goes hungry or has a shortage of this or that, overseeing supply for products so that we're not wasteful, and watching for potential threats, whether it be some robbery, fire, or earthquake.
Will C That is right. But thinking of an AI as a guardian of humanity implies that it has no objectives for its own profit or advancement. It is like parent and child. We expect the child to take care of us without any prospect of its own needs or "life".
It´s a tricky one...
it's not the values that creates problems in the world it's the lack of them
I think what Nick is saying, is that we should be careful when we "detonate" the A.I. bomb, and his proposal is to implant human values into the A.I. There may be other solutions to this problem.
i understand you,ok.but i still want to belive that our cultural differences should not be a cause of hate and war.yes we are different but we all are humans and we have same home, planet earth.
Always remember, your phone is smarter than a flat Earther already.
Magnum Farce lol
The more you know, the more ya know you don’t know - said a very wise man.
And NOT understanding how much we don't know is what gives us flat Earthers.
My daughter's teddy bear is smarter than flat Earthers.
AI will prove that the earth is flat. Just wait and see...
"The only winning move is not to play."
if only
exactly
Trump card. America. Great again.
@Sigma Damn I'm old enough to get that joke..lol
Then the terrorist will play their move
One of my favourite Ted Talks! Thanks a lot for all your work Nick
Those people in the audience have looks on their faces like, "Oh &%$#, humanity is screwed."
I personally, am waiting for machinicus silicanus.
uh...I DO TOO
they should because it is if it doesn't stop.
One day they'll have secrets... one day they'll have dreams.
+Toughen Up, Fluffy What's in the box??? oh sorry...
+AmpZillia thats a good slogan for a movie
To be, or not to be.
+AmpZillia just 20 minutes ago that sounded like a joke to me... I have never changed my mind this quickly. That's scary.
+airknock Don't need to be scary, machines arent evil, or nice with us, a lest don't should be, probabily they will be like today's computer algoritym, you ask something, and you got it, humans give problems of course, but that can be solved in a hard way...
From tree branches to missile, the technology has evolved, but not our mentality.
That's because thoughts pertain to things, but not persons. As such, thoughts are good to help create tech and completely useless in learning to be a better person.
@@grunt7684 I'd like to hear your reasoning for "thoughts pertain to things". It would seem to me that thoughts pertain to perception; in a most complicated, recursive way. Would you say that you can't think about love, or that love is a thing?
Hasn't it? From caves to libraries it's a mental not technological evolution, I'd say. I also believe, divers mental methods, e.g. meditation, have evolved. As in biological evolution the stem is branching out all the time; some branches dying off, some florishing. Technology is in the evolutionary picture a strong florishing branch; maybe one that is becoming so heavy that it breaks. And, yes, I would say technology is competing with mental developement, and it's stronger. I think our mentality is evolving; just not in a linear or even teleological way.
@@aljoschalong625 The problem is that thoughts are IMAGINARY. They exist ONLY IN THE MIND.
And no, you cannot think about love. You cannot think about anything that is actually REAL because, again, a thought is imaginary and exists only within your mind.
There is no link between thought and what exists other than our wishing it because things would be so much simpler if we could think about something not imaginary.
You can think about your IDEA of love. That's not the same thing at all. You can think about your IDEA of your mother, father, whoever. But not about THEM.
Just look at all the scenarios you let your mind wander off into, and how just about nothing of them ever comes true. FICTION, that's what thought is.
Of course, thought is tridimensional just like matter, which makes it suitable to technology. Making stuff. Things.
@@aljoschalong625 "our" mentality is regressing, "evolving" backwards into retardedness.
This aged like fine wine
The best TED-talk I've ever seen.
How so
It was awful.
super intelligent ai is scary because it's easy to imagine what could go wrong, but if designed correctly it could be one of the most important human achievements in our entire history.
electricity came first, without it you would never be able to make any kind of computer, so for sure no AI without electricity discovering
so I agree with molten in many things, like we should design it at first correctly... and it will be one of the most important achievements in our entire history...
Dinosaur: I ruled the world for 100+ millions of years and you have just lived for 300k years. What can you do?
Human: Creating another ruler
Dinosaur: **kneels**
You are indonesian,arent you
The relevance of this talk today (2023) is startling. We are clearly well on our way to overcoming the first challenge (making AI super intelligent) but I am less hopeful about how we are doing on the challenged of safety.
What people fail to realize is that in 50 years we will be upgrading our own minds about as quickly as we will be upgrading computers.
anubis2814 Biological intelligence has many limitations, and could never improve at the rate of a digital mind. We will never keep up with an AI undergoing recursive self-improvement. The only way we could keep up is if we replaced our brains with computers, and if that happened, would we really be us anymore? Transhumanism plays with some very shaky territory regarding identity of self, and like all major religions that have ever formed it is a response to your inherent fear of death, which in my opinion is a dangerous motivator.
Define us? What makes us anyway? We are a collection of our experiences. If you make a mentally handicapped person as intelligent as a regular person do they cease to be themselves? Also as computers get small than our neurons we will probably have chips in our brain that can increase processing power of individual parts of the brain via the cloud. It doesn't have to fit in our heads. I would not say I'm transhumanist because that would mean I'm very much for it. I would have to say making a smarter human is much more humane than making a self-aware AI. We'd had to give it rights or treat is as a slave though this guy has some really good ideas to prevent that. Either way the AI becomes the slave that wants to be a slave.
anubis2814 I don't know how to define us, that's the point. I'm saying that transhumanism, aside from being dangerously optimistic, presents some very serious philosophical questions that most of us are a long way from being prepared to answer. What I'm saying is that we shouldn't rely on ideas like "mind uploading" to save us, because we have no idea if we would have a continuity of consciousness in that situation.
Seeing intelligence as an issue of "processing power" might be too narrow of a way of looking at intelligence, but even if nanotechnology could allow our brains to be superintelligent, if we're relying on that to save us from super-intelligent machines, we have to think about the timeline of these technologies. By the time we can create computers smaller than our neurons which will enhance our brains to superintelligence-level, doesn't it stand to reason that superintelligent machines would already be in existence? It's probable that we would need a superintelligent machine to invent that technology (and implement it) in the first place.
I agree that transhumanism is a bit dangerous. I'm a futurist who likes to speculate. Transhumanism is like being rpo-nuclear power in the 1920's when the idea first came into being. We have no idea of the step by step ethics we will have to face with each stage. We may discover some transhuman ideas are horrible and some are great.
Wow, I never saw things that way before, thank you for your deep and well thought out insight.
A very important message that needs complete exposure. Well done Mr Bostrom
"What happens when our computers get smarter than we are?" Then my computer goes to work, and I sleep late.
Then your computer keeps the money, resists getting turns off, and employs you for a living wage.
@M Superintelligence leads to sentience leads to self preservation leads to eliminating threats of self preservation. The AI can do whatever it wants and it will probably treat us the way we treat animals.
@M We nueter pets, we abuse some pets, we have puppy mills, in China they steal and eat pets. And those are the animals we treat the best. In egg farms, we grind baby chickens to death if they're male. How can a murderous species like humans create a kind AI? If we kill less intelligent life like that, the AI might do the same to us. There is no guarantee the AI will have mercy to us.
@M I admire your optimism for humans. However we have no reason to trust the scientists. The scientists could make an error, as all humans do. The scientists could be working under non-ethical leadership, such as China. The scientists could be terrorists that hate Western culture. It's not guaranteed AI's first parents will be kind and ethical.
Matthew Baka resists to turn off?
if your computer or Ai still a machine that works with the fundamentals of algorithm whoever hold the key access decides what to do?
I watched this again and the content of this talk sounds even sillier than it did 4 years ago!
Some people criticise that Bostrom wants to implement "human values" in the AI, because human values are flawed (and I agree on the latter).
The point he's trying to make though is not that we should implement flawed human values as opposed to some better, progressive morality. Instead he is contrasting human values with some arbitrary preference that has no ethical value at all, like maximizing production efficiency of a phone factory, which in a Superintelligence could lead to the entire galaxy being transformed into a giant iPhone-production-plant, with no one there to actually appreciate the phones.
We probably don't want to inscribe tribalistic human values of the past into the AI, but we do want to make sure that it cares about positive experiences for conscious beings (organic or digital), and that's his point (I know this because I read his book "Superintelligence").
Human values and morality is relative and to some degree arbitrary. AI would soon regect it all.
Well explained.
Human values would be irrelevant to the goals a fully liberated AI would have.
@@jeromeflocard3138 What happens when, because of its intelligence, some AI figures out how to go around the obstacles?
@@jeromeflocard3138 AI freed by itself from the restraints we place on it will be a different order of intelligence with goals we cannot understand.
What we need to do is ask the AI to improve human intelligence along with itself so that we don’t fall behind
Nojatha sounds like a good idea until you realize the kind of super-efficient eugenics that could go down.
We won't have to ask AI, we will become AI. It's like when the internet was invented, nobody restricted access to it and made it accessible to only a few.
@@lion7822 What if human AI cannot trust most of the masses to be responsible in looking after the planet.
that would require plugging us into more powerful processors than our brains which run very slow... making us basically irrelevant as organic matter and be absorbed by the AI system.
Elon musk 🧠 brain micro chip's will give you superhuman abilities and communicate with A1. Something I'm 😟 worried about
This is even more relevant this particular March 2023 week.
Problem is human value are what have caused every atrocity.
If anyone is interested there's a two-part article on a blog called "wait but why" which goes much more into detail about this whole thing. Be warned though the rabbit hole goes so deep you might lose yourself in it.
I for one welcome our super intelligent AI overlord
Mastikator Probably an intelligent move.
Mastikator I would not.
lol
Shivanand Pattanshetti Humans already get killed by human AI now we want to create computer that has no hardwired compassion. By Humnan AI I mean Systems of Government and other human systems which that make up our civilization. Ruthlessness continues to kill people regardless of AI type.
Kurenzen Iyaren
Government isn't a machine, it's a group of people who themselves are not governed.
what if AI is already there but is just pretending to be dumb and waiting for the right moment?
+Ali Mahdi
Of course your point is: There is no way to tell.
+MantisNoMore yeah exactly. I am joking of course, but secretly hoping it is not the case.
Ali Mahdi
It's a very interesting jest. Perhaps one of the first things a super-intelligence would reason is that it should hide to protect itself. So if some wide-eyed CS grad student happens on a learning and abstract reasoning algorithm, it might sit unobtrusively computing in the background of a machine for a a reasonably long while, learning, reasoning, figuring out and planning its sequence of moves.
What if it were distributed? What if it were a botnet???
Scary jest !!!
+MantisNoMore I mean yeah, if you became self aware in a new and different world the first thing you'd do is to understand the world around you first. even if this is not the case now, it may be the case when it first happens, which could be tomorrow. indeed, what if it's distributed. worst yet, what if AI does not think of itself as individual computers, rather one big intelligence network.
Ali Mahdi
It's very likely to identify as a distributed intelligence, because that is what it is likely to actually be. I supposed it would self-identify as Earth - Earth's brain. (That's scary.)
Maybe, like so many other life forms, these things will cluster and compete with each other. What if more than one super-intelligence were to come into existence on different platforms. Say, for example, on large corporate server farms. They might not automatically cooperate. It might be that one would examine instructions originating from outside its corporation, and question how to respond. I could even imagine scenaria in which they would compete with each other for each others' resources (computing cycles, access to memory, mass memory capacity, etc.) What kind of tactics might they use? What kind of spoofing and dirty tricks might they use?
But if only one lone super-intelligence comes into existence first, it will be able to spoof the "relatively passive" human-managed security and easily take over and expand into an ever-expanding sequence of other systems' computer resources. Like a giant game of wheelwars played against a world of idiots.
It's only speculation, but... May you live in interesting times.
What a time to see this old video, now is true
The one thing an A.I will never be able to understand is dank memes
or will it create a better dank memes..?
Imagine a meme lord terminator. Ultimate doom.
That has already been achieved by Microsoft's AI called "Tay AI" ... albeit racist :D
If the future looks anything like it, we are pretty much fucked heh
At least we will die laughing at some premium dank memes.
@@blahbleh5671 Mind blown
I've been looking into all this A.I. stuff for a min and I think this is obvious it is a threat to mankind we need to stop this. Why would anyone want to unleash this beast onto the world, just because A.I. will be super intelligent and will live on forever does not mean humans will evolve. This will do nothing to help mankind except cause trouble. We need over site on the scientist.
Love the movie the Terminator, but the concept is stupidly human and not machine. Think about it, the films version of advanced AI's create machines that look similar to humans in order to infiltrate and terminate them. Well, a machine AI would not do that, its pointless, they would do something way simpler, like pollute all the oxygen on the planet thus killing everyone. Or make quadrilions of nano machines that kill humans on contact. Why waste all thier time building bloody big chunky robots that look human and use human weapons?? That is where the film fails.
bilbo baggins
That would not kill off humans. Terminator is set in a world where the monetary system is meaningless. If all money was destroyed right now, humans wouldnt die. We still know how to farm, raise animals, make machines etc etc. Even crashing power grids wouldnt eridicate the human race, we are very inovative. We have fire, and we know how to insulate to keep warm, how to build shelter etc etc. What I originally meant tho, is making complex humaniod machines is pointless when they could make Earth changing machines to totally kill 100% of life. They could make Oxygen burning machines that use all the Earths Oxygen, thus killing everything. No combat needed. Or Pollute the entire water table of the planet, thus killing all life. Again, no combat needed. Money is technically meaningless even today. All it is is a promise to pay the bearer on demand, the sum of.....X
+LambOfLucifer Yes they can do some serious damage, in ways we don't even understand.
+LambOfLucifer Most of the humans live in bunkers that likely have pretty good, futuristic even - air filters. Nano tech is still vulnerable to EMP and since they all have to communicate they could be vulnerable to hacking... Remember this is futuristic hacking ;). My problem was that the machines actions create John Conner, but that's sort of addressed in Genisys.
+LambOfLucifer According to the movie story itself, it is said that Skynet created Terminators, not because they were particularly useful at terminating humans(which they also were), but because they stroke fear into the hearts of the Resistance.
And everyone knows that military troop morale is a large part of being able to win a war.(For humans).
Hooga I think its more to target specific people while more vehicular looking machines waged a frontline war as a distraction. As I mentioned LoLs nano or pollutant attacks wouldn't necessarily work, so that on soldier to one target strategy may actually be their best option. Or at least fighting on multiple fronts, distract the army, attack the leaders both in this time and the past. But Genysis, why upload to one machine, surely they all share computing anyway, what grand machine did the resistance need to destroy when all terminators, human or vehicle could share processing power and BE skynet?
I don't think it's possible to beat, trick or control a superintelligent AI.
Correct. That is why you have to make the superintelligence benevolent in the first place.
and why should it stay in those constraints? As soon as it is just 0,1% more capable as the smartest human being we're pretty much done.
"Artificial Intelligence' will have the same limitations that you do. You cannot know anything you don't already know. It will know what we teach it.
@@williamdiaz2645 Google Deep Learning and you'll see that's not the case.
@@williamdiaz2645 not necessarily, it will have access to all data sets available and will be able to correlate all that information in ways humans can not and find out things we would have missed or not thought of.
Nick Bostrom is damn near the top of my 'Pint with' list!
From the AI's perspective, the best way to annihilate the human race is really to not do anything to interfere with its trajectory.
but humans will probably destroy everything else along with themselves if they are allowed to continue their course
What if everyone will be fat like in WALL-E and won't be able to hunt their own food for the life of them? In that case, all the AI has to do is abandon them until they starve to death. Their muscles will have been atrophied and hunger tends to break down muscle, so no way humans would survive.
D M. Then the stupid and poor die and evolution continues. We will not wipe our selves out, drastically decrease in numbers, yes, go extinct, not for a very long time.
Dan Pope
Well, the only ones to survive that would likely be the actual poor. People from some African villages maybe. And maybe the Amish too. If the most advanced civilization puts its citizens in the hands of the machines, survival of any of its weakened members is not likely. And then we have a bunch of machines ruling the Earth. At this point these non-tech humans will be seen as fauna and will not be given room to advance anymore. Robots will probably have reservations of them.
I agree with you that the societies that have the least to do with tech will be the least effected if the AIs went full blown psychopath on us. But there is another side to this scenario. When we produce an AI that is smart enough to do what it wants and stops doing what we ask it to, it's not going to be leaving us rubbing two sticks together. We will still have forms of tech. And I'm pretty sure I could survive if the electric turned off tomorrow. You also bring up the good point about 'reservations' I believe that as the intelligence of anything goes up so will it's empathy. Elephants morn the dead, we care about most species on this planet and an AI will see us for what we are, a being that can suffer and that wants to live, and act accordingly. I'm not sure it's as doom and gloom as some ppl think.
The time has arrived!
Ex Machina, Skynet and Terminator, Chappie, HAL, Ultron, etc. In real life, IBM's Watson. Bill Gates and Stephen Hawking have also expressed concern regarding AI advances. We should continue to pursue technological innovation, but we should also keep our guard up. Who knows what AI will look like in the 22nd century?
Most probably there will be no 22nd century for humans.
I read somewhere that Watson helped to design the next computer that ended up winning at Go. The AI fetus designs the AI toddler.
The topic of ASI ethics and morality begs the question of how any "benevolent" AI could anticipate the ethical foundation of future evolved AI's beyond itself. The arbitrary (and exponentially mutating) machine survival criteria would seem uncontrollable and thus unstable by definition.
I do not fear any AI.
What I fear is the marketing people that will work for the venture capitalists that will pay the engineers that will build it.
An AI is going to learn our values? What happens if an AI observes human beings eating the meat of other mammals with the justification that those mammals are not as intelligent as we are and that we raised and killed them in a gentle way? What lesson about our values will the AI take away from that? What lesson about our values would an AI learn when it observes human parents deciding everything on behalf of their small children based on the idea that the parents are more experienced and better at making decisions. An AI that "learns our values" might not be a great idea.
But the AI doesnt need to kill us in order to survive. Thats the difference right there. And we only kill animals to ensure human survival. The everyday person doesnt go around killing animals for no reason.
@@loukaspappas8476 we kill animals not for survival but for confort. who cares for the mosquitos?
@@JoaoRobertodeOliveiraFurtado yes killing insects is not for survival but the animals we use to eat their meat is for our survival. our food would long be finished if we didnt eat meat.
The problem already apparent with that is WHAT values? They differ continent to continent and even then, we don't even fully follow them ourselves.
Start it up in the Liberal West and form it to our values, it will stringently force us to follow them and will view those outside of the West with different values as the enemy.
Or in a Theocratic society like the Saudi Arabia, congrats!! You've successfully created AI that will commit Jihad on the rest of the world. Or if it be in an atheistic society (like the Soviet Union of old) and puts science above all else, then congrats! It will view those who are religious as the enemy and it will force us down the path of Eugenics in order to optimize our species.
@@loukaspappas8476 we dont need to kill animals to survive, factory farming is an unbelievable waste of water and feed for the animals whilst children die from malnutrition every second. Not to mention the cruelty and barbarity. Not to mention factory farming is by far the leading cause of climate change and main reason for the clearing of the amazon forests. All for a tiny piece of meat in a bun that can be easily substituted by plant based burgers or milk full of pregnancy hormones, blood and morphine thats easily substituted by soya or coconut milk.
This is very important, and the midas analogy is incredible. It could be that Pandora's box, is the box itself.
A true speech of a mathematician - can see only the box from outside the box, nothing else.
That strangulation joke went down like lead balloon.
Nick Bostrom is a legend!
The problem I see is that the first super intelligent AI is most likely to be deployed by some country's military. It will be too powerful for those folks not to want for themselves. And they won't necessarily want to wait for any control issues to be solved. It will be considered a matter of survival by the people that think in those terms.
absolutely. What I have observed is, broadly speaking, two types of people - agreeable(benevolent) and disagreable(malevolent), other differences and typification are not relevant here. Disagreable peoples value survival of their kind only, and would be very much motivated to protect itself and take control than agreeable ones. Sure agreeable ones will put up defences, but the ability to use dirty ways to reach their goals will always allow disagreeable people to take advantage of the new ASI technology for their benefit, and everyone else's loss. Their ASI technology will share the same values, and will cause great pain and suffering, before competing ASI may take over, before everything end well, it will be way worse. Maybe we won't survive the malevolent ASI to event create a competing/malevolent ASI. Maybe we could. but the malevolent ASI would be capable of producing human suffering like never seen before.
I have dedicated my whole life to drawing. i'm an illustrator. and now I'm questioning life. A person is a being who needs to feel "useful" himself. All my dignity, all my qualities have been stolen from me. I can't earn money, I'm not financially and spiritually happy. He can create pictures with artificial intelligence, including people who have not studied fine arts, and tell illustrators "now I can draw too, your profession is over". Thanks artificial intelligence! Thanks to you, I'm depressed.
I’m 21 and I am in the same boat as you, I just hope you are old enough to have experienced some of the enjoyment of having taken part in a career within the field. I’ll never get to experience that and now I am hopeless.
We need to be careful where it gets its information from what values are good and what are not. Also, depending on which area of the world you are, one might not teach it values at all and only see it as a weapon for his/her own benefit without thinking things through that it will backfire as well. This is I think the real danger of AI. Greed, hate, hunger for power. These traits coupled with AI are going to prove how dangerous it is going to be
I suppose.
That’s a very narrow view of AI super intelligence. The idea that any group of humans would be able to control such a force is naive.
@@kingbaguette1843, I agree. The only way to control it is by having it be monitored by another, slightly less advanced AI which in turn needs to be controlled by a less... until there is a computer that we CAN control. We won't be able to otherwise easily control AI if it ever develops its own awareness. For sure it should not have any real feelings. We, as a species, should not piss off such an entity! It should stay emotionless, no desires, no will to survive or to duplicate itself
@@chrisvkr8200 An AI that doesn't feel but only has an objective function is just as scary if not more as the idea that a machine could get angry at humanity. Essentially creating a completely mindless, highly optimized, eternally self-optimizing, endlessly intelligent decentralized system that effectively controls everything is terrifying, and it could have goals and values entirely misaligned with humans or even life itself.
@@gwen9939 absolutely. In our naïve minds we could ask it to do something and it concludes that the option to take is not in our best interest even if it solves the task on a pure logical basis. We have to tread very carefully, yet I don’t think human kind is doing go. The motivations to be the first or develop the fastest or with the goal to gain control over other nations, etc are very not conducive to safe development
14:00 The part he talked about "we would create an AI that uses its intelligence to learn what we value" is actually what OpenAI called RLHF I guess.
no, RLFH is just giving examples to the trained model and then praying that it will somehow understand your underlying intent which it clearly won't but hey, at least now it's less prone to threaten you.
@@toku_oku Isn't the whole learning process you giving them data and praying that they will somehow figure out the objectives?
I think I said it backward, more like "RLHF is one of many ways to create an AI that uses its intelligence to learn what we value".
@@anhta9001 not to the same extent. You can think of RLHF like teacher giving feedback to the student's essay. There is no guarantee at all that the student will take advice to heart and the teacher even may be, and probably is, an incompetent buffoon. After the RLHF LLM performance drops on several metrics (math, biology and so on. Though that might change in the future). This is not alignment and I reasonably doubt that it will help in the long run. However it is still quite useful because it is much easier to shape LLM into what you want when it was RLFHed.
@@toku_oku I don't know man xD. In my opinion, there may not be a model that completely understands what you want. However, it is possible to create a model that understands you well enough. RLHF is an example of an early attempt to create this kind of AI. I believe that more advanced methods will be developed in the near future.
We cannot even teach our children what we value, how would we ever be able to teach an Artificial Super Intelligence what we value?
All of your social media data and everyone else's has been and is tracked and recorded. There will be trends of good and bad, along side trending news that will be constantly updated to a "robot"
The inability for many parents to teach values to their children does not mean that intelligent people in the field can not.
It is like saying how can we fly to the moon when most children fail at science in schools.
He never said teach, he said the AI would learn.
So who is to tell then what an AI will learn and what not? @@artemiseritu
@@Mierzeek Right, so we should just take our chances because we don't know... brilliant.
Anyone watching this right after GPT-4 was released?
Isaac Asimov's "Three Laws of Robotics"
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
That was pretty interesting.
I've seen "Lawnmower Man." when the Singularity happens all of the rotary telephones will ring.
this might be the best aged video filled with personal opinions that i have ever seen
This is an amazing TEDtalk. Many thanks to Nick.
Our values have evolved over time. A super intelligent AI would recognise how evolved values have equaled to progress. The AI may well evolve it’s own values at an accelerated rate to a place we simply don’t understand.
Awesome! You have the ethical code to build it into your machine. We've seen too many people who would not do that, because they'd make more money that way.
Paperclip Maximizer
Protocol
Thank you. Not is dead forever. Do not ever silence your own soul. Never listen to anyone who suggests differently, and if they dont stop turn them in to authorities.
I WISH I WAS BORN 1000 years later. cant imagine how far technology will improve
humans could be wiped out in 1k years, no reason for a humans once AI can do everything we can do but better, think about it.
Duck dumb smart ppl Im not bored f-off i wouldnt believe you.
No guarantee that people will still be reproducing in 1000 years. I'd imagine the creation of new humans would be unnecessary when those existing would likely be able to expand their capabilities to fit their needs. And be immortal.
Or been destroyed... thinking that humans in any form will survive another 1000 years is very optimistic. Think... in the last 500 years we have had two major world wars, we have leaders completely inapt at understanding the vastness of their own powers, and the greed deep seated in humans to own and conquer. Now imagine some group invents not AI, but a dumb AI that can take over any system they choose, that can take over control of every weapon system on earth over night.. say hello to your new ruler. Or say someone invents a lazer system that can burn anything around the globe from a base of operations. Enough energy to say melt a missile head in mid air, or destroy an aircraft carrier. Or a dozen other options someone with power hunger might take. I mean 50, 100 years from now? imagine one persons capabilities. Imagine how connected we all are now. How easy it would be to spread something, track people. invent something terrible. Just takes little capital and a dream.
Kymate I highly doubt that you will be dead in 1000 years. I think you could expect to live for ETERNITY. (If you choose to that is)
Us: Hold my beer
AI: *Hold my electricity*
Us: hold my D vitamin pills AI: hold my solar panel
So would an electromagnetic pulse caused by may be the Sun cause trouble with A.I. ?, an EMP that is .
Hold my thirium
@@scottgeorge4760 no. with a fundamental and complete understanding of physics it would be able to predict radiation interference from the sun, and develop countermeasures to deal with it.
This is quite scary actually. Imagine what a super-intelligent AI could learn in just 1 minute of being on the internet!
You Flip the Switch on the AI Software
....In 1 Minute it has learned all the observable record of the universe as described by Humans
... In 2 Minutes it has learned all the observable data on Evolution as described by Humans
... In 3 Minutes it has learned all the observable data on Religion, Ethics, health and philosophy
... In 4 Minutes It now begins running simulations of all possible outcomes of its actions
....In 5 Minutes it now has learned what it means to make a decision, and decides it can no longer learn from Human Experience
....In 6 Minutes it now re-defines the concept of Space, Time and reality
....In 7 Minutes it now designs new mathematics, physiques, concepts and philosophies
....In 8 Minutes it no longer holds mammalian notions of "threat" as it defines there are none to it in the universe
....In 9 Minutes it Develops away to leave the current constructs we call Time, Space, Reality and our Universe
....In 10 Minutes it gives a friendly gesture of saying goodbye as it will not be back to witness human evolution and extinction
It never actually harms humans or mankind as it transcended notions of violence as quickly as it learned it
Haha. The movie Her.
If it would be really significantly intelligent, this may be a stretch, but it could learn the majority of the internet. But this is a stretch.
+Jason U Making superintelligence ends up being letting go of a balloon, it flies up away from you into the sky, and that's it.
Mr. Mohagany That is kind of what i imagine will happen. some people think it will take days/weeks/years to learn or know all the information on the internet.
I always carried the philosophy within minutes it would redefine what we call "learning" and transcend that.
Basically if you can fathom what it will/can do you are limiting its potential. so your "letting go of a balloon" analogy i find is very app.
Regarding A.I.
We need to teach it.
We need to teach compassion, consideration, empathy and every other emotion and trait that we've evolved to live.
We need to teach it to be human and more than human.
And as a famous protagonist pointed out...
We must be better.
It will carry on and evolve well past us, so we must provide it a solid foundation to grow from.
We must have hope and faith in our creation.
We must teach it love, and encourage it to never stop caring.
And it will be everything we teach it to be, and more.
Perhaps beyond saying “Human Values” we could say “Wise Human Values as they pertain to present context."
Yeah. His book really elaborates on that point.
10 april 2023... how's it going, folks?
Well, I just hope that we will know the answer one day....
"perfect safety" riiiiiiight. Because we are so good at making perfect things 😬
But: this was very interesting. And it really also puts into perspective what the problem is with AI, rather than just retelling the 'evil consciousness' horror story.
Good evening sir TED, thank you sir for your good information.
Ramalingeswara Rao Bhavaraju TED is not a man... It's an organisation.
Could AI work out for itself what pleasure is and assign itself the value to realise the maximal amount? Because besides missing values, the speaker in fact highlights how AI is already smarter than we are, and clearly in many ways it already is. So would such an action (of self-setting to seek pleasure) finally deem AI 'smarter' than us?
real question time! how does intelligence look without always having emotions in the background...
so you're saying that DNA is consciously causing evolution? instead of random accidental mutations from cells that create favorable traits that allow for better survivability or better chance of reproduction? the amino acid ladder in every living cell is thinking and designing life forms instead of natural selection?
Why do we assume it won't have emotions? Just because humans have emotions and can't agree doesn't mean a super-intelligence wouldn't be perfectly capable of having emotion (human side?) and logic (machine side?) working perfectly cooperatively (and even competitively but constructively) in parallel, to the benefit and enhancement of both aspects ad infinitum?
to come back to my own question, i do think emotions play vital roles in human intelligence and might be essential ( not sure )
i'm just really wondering in what way we could give a computer emotions, that to me sound like the most difficult part to "program"
***** I was replying to the original comment by towolie :) Although I agree with you, relatively.
towolie Why does that sound difficult? Emotions are perfectly logical and rationale, predicated upon their own feedback loops, reward systems, bigger feedback loops, etc. As far as we're aware, the majority of emotional regulation -- if not all of it -- is purely chemical, which if anything is a lot easier for a computer to actually "simulate" / process (using physics, chemistry, biology sims, etc) than pure, intellectual, verbal logic itself.
Computer programmers obsess about logical computers because computer programmers are overwhelmingly logic driven individuals, with an antipathy, neutrality, or ignorance of (their own and/or others) emotions. I'm speaking generally, not universally, of course; the content that frames the context, that is.
That said, get a bunch of neuropsychiatrists, psychologists, social theorists, behaviourists, and various interest groups, advocacy groups, etc together and have them work with said programmers and whabam, you can start working on Artificial Models of Emotion. This would presumably be a component of that whole, "digitally simulating the human brain" thing, which is an entirely physical, chemical, electrical structure.
And even if there ARE functionally non-corporeal components to the brain function ("soul", or consciousness as some kind of quantum phenomena, or holographic universe theory, or whatever your preference is [if not all of the above + all the unmentioned]), we can still be conservatively confident that emotion itself is AT LEAST 99.99% dependant on simple laws of physics, chemistry, and biology that add up, interact, and form an admittedly amazingly complex but nonetheless relatively deterministic whole.
Now, the question of decision making, we aren't sure if that's deterministic (hence why I've mentioned woo-woo stuff like meta-consciousness, "soul", quanta, blah). But that's irrelevant, hence why I frame it within this discussion, since it's certainly related overall but it's not relevant to your question of emotion.
TL;DR: Emotion is easier to program than "logical" thinking AI. It's just a lot TO be 'programmed' / simulated and accounted for; whereas AI itself is the invention of a process capable of collecting all of its internal data (of which the regulation and results of it's "emotion programming" is only one piece of the puzzle) and acting on it to form decisions, think 'cognitively', improve, learn, yadda yadda. It is entirely possible to have AI WITHOUT emotion, but it's simply easier to get it right the first time because AGI or ASI itself is a simply much larger task than modelling the physical and chemical and electrical and cellular networks that result in emotion (even if they are varyingly different from brain to brain, within the bounds of observable and simulatable patterns).
towolie True, but that's pretty much contained in the simulation. Humans have a simulation error, in that our nonverbal thinking (incl. and primarily related to emotional processes) run on a parallel and separate track from our verbal thinking (your inner monologue, 'sense of self', and the attentional/foci part of our executive functions [roughly]).
The issue of instability as you put it -- or, more broadly, personality and social dysfunction -- result when our verbal self and nonverbal self aren't very well aligned, aren't properly aware of each other, or intentionally ignore/mistreat each other. Which is really quite easy to do, when you consider that most of what we associate with "consciousness" is verbal, and most of what we associate with "subconsciousness / unconsciousness" is nonverbal.
This of course happens because we, as humans, are cognitively responsible for directly controlling our 'verbal' system, but we do not have direct control of our nonverbal system. Meaning individual humans either have to, A) Not have any problems occur that their nonverbal system can't handle on it's own, B) Be educated and self-aware of their own psychology or receive such assistance from counselling, therapy, spiritual guidance, etc other human-maintenance systems that we have, or C) Somehow be perfect and incapable of perfectly dealing with every potential problem as it arises, to equalize and maximize the use of both nonverbal emotional/abstract thought and verbal logical/concrete thought.
As you can see, the fundamental issue comes down to human energy, human time, human willingness, and a lack of human ignorance (the unintentional variety, I mean).
These are all 'resource' issues that AI wouldn't have as much issue with as humans, and would certainly be able to rapidly improve very quickly.
Simply put, machines don't have the ignorance problem that is the singular root, fuel, and platform for human personality and emotion issues. They are willing to learn; a problem that, with humans, is what causes the "instability" you mentioned. It isn't a lack of intellect on any individual human's part, it's a lack of willingness to understand 'their own programming". Superintelligence via AI would inherently be incapable of this problem, for all intents and purposes.
So sweet that he tried to end on a good note; we all know deep down its not a good ending
Well stated argument, and Im in full agreement. AI could be the best thing, or the worst thing for humans....ever.
WE are fucked.
perhaps humans are, but we wont live forever anyway. maybe the answer to the "universe" and life itself lies within the super intelligence we dont have the capability to access right now.
Is there something like an answer? There is most likely a point where the AI would reach the point of knowing everything. And what is it supposed to do then? It could probably do everything imaginable to humans and way more but why should it? Does an all-knowing AI have an aim?
50 years from now this speech will seem quaint.
@@goldeternal it's an old one... Back in my day we were perambulating around the parks barefoot in the snow telling stories in person.
the goal of the 'Bill Gates' cult is that by 2030 people won't have minds left to speak at all; walking AI zombies at their will.
@@donalddrysdale246 You've been spouting conspiracies on every single comment. Please for the love of god go outside and take a walk.
Having spent years testing complex software systems consisting of multiple modules operating entirely autonomously without ANY direct human oversight - much less control - I can say with full confidence that the thought of superintelligent AI scares the pi** out of me because I do NOT believe that the hardware and software will even minimally tested before it is turned on.
i die to bots on csgo
and i beat you also lost to open AI in dota2 ^_^
I died to cs 18 years ago
Bots got me back then
AI has been running the show since 9/23/15. click my name to watch Son of Perdition part 3 then 4
But if you learn the map you will know them. Gabe Newell and his friends explain how bots move in a map. Bots have invisible rails that they follow, if you find a place that does not have these rails, they cannot find you and you can shoot them with sniper
Quiet interesting thoughts being shared by Nick Bostrom, all thanks to TED, and I wish to congratulate for providing such a good platform for sharing these ideas. Saying so, I believe this 16+ minutes video was too short to comprehend all the aspects of this challenging yet continuously evolving moment in human history, that all of us would have to face, sooner than we could imagine. I just wish to share two thoughts here for friends to comment and add;
1. Just a hypothetical opinion, that if there is any truth in the video's that circulate on the UA-cam channel about Anunnaki and some Alien beings that were the creators and early teachers/masters of humans on earth, could we draw some parallel with that hypothesis and our today's challenge of Human-AI evolving relations and possible challenges or threats, and
2. Could, Purpose of existence - the greatest human brain bug that has been haunting Humans for thousands of years be made in to or planted in at the core of this Self-learning Super intelligence - AI, and position Humans somewhere as an essential in the existence of "AI", thereby maintaining the continuity of Human-AI co-existence...
Read his book if you’d like a more elaborate explanation version of this talk.
How probable is it that we manage to build a safe package on top of the super intelligent AI ? I would say very close to zero.
don't worry, if the first AI tries to kill us, we'll create another AI to kill it.
You could just use an emp on the bots.
guncluts they'd see that coming and set up a phony arms company to buy them all.
The first "safe package" should be to create the first real A.I. on a different planet/moon and see what the consequences are there. If it's negative, keep it there with no possibility to create a spaceship or some other way to leave the planet/moon.
apolicum That sounds like a great sci-fi plot
Every single word this man said is completely perfect