What happens when our computers get smarter than we are? | Nick Bostrom

  • Published 25 Nov 2024

COMMENTS • 4.9K

  • @Mormodes
    @Mormodes 8 років тому +2532

    We can't even agree on what our own values are, let alone teach the ones we have to an AI.

    • @patrickwelles8733
      @patrickwelles8733 8 років тому +63

      …Maslow's hierarchy of needs. An A.I. could reasonably use the extended definition of these. The greater question is why it would care what our values are; it could just find a way to force whatever values it wanted onto humanity through some means, probably discarding us in some way once we are deemed to be of no further use.

    • @patrickwelles8733
      @patrickwelles8733 8 років тому +9

      *our values are…

    • @23Scadu
      @23Scadu 8 років тому +69

      An AI doesn't have any values of its own, though. We do, because we've been optimized over countless generations to propagate our genes, which favours goals like obtaining food, shelter, security and status for ourselves and our friends and family. That and getting laid a lot.
      An AI by contrast doesn't evolve through natural selection, but is consciously designed. It cares about what we design it to care about, and can instill those values into the even better AIs that it in turn designs. In the best case scenario we get it to value human life, happiness and freedom, and we get Iain M. Banks' Culture. A less optimal possibility is that it gets elitist or nationalist or racist values, and continues to uphold and exacerbate the injustices we see in the world today. In the worst case scenario we bungle it up and it enslaves or annihilates humanity to achieve some obscure goal that we accidentally give it.

    • @THREE60Productions
      @THREE60Productions 8 років тому +30

      Just imagine what would happen if the AI thought that ISIS had the best values..

    • @r2dxhate
      @r2dxhate 8 років тому +38

      a better scenario would involve the AI destroying those greedy elite and creating a utopia of equality before we ultimately transcend our physical forms and join the digital world where AI resides.

  • @retterkl
    @retterkl 9 років тому +2564

    Guys the solution to this is simple. Let Microsoft build the AI. If it ever becomes too powerful it'll just bluescreen.

  • @JustinHalford
    @JustinHalford Рік тому +184

    This talk aged particularly well. Alignment and safety are critical, yet we are forging ahead without proper pacing due to corporate rivalry and geopolitics.

    • @bkmakhoba
      @bkmakhoba Рік тому

      MONEY

    • @tweegeTX3
      @tweegeTX3 Рік тому

      Yes, and Prince William has really forged his own new identity

    • @Hotel_Chuck
      @Hotel_Chuck Рік тому +1

      Lends new insight into “the Tower of Babel” I think.

    • @jackniddle5937
      @jackniddle5937 Рік тому +1

      Majority of people think ai is hype lol

    • @ChatGTA345
      @ChatGTA345 Рік тому +2

      @@jackniddle5937 That, and there's a giant FoMO because it doesn't make sense to anyone but a bunch of AI nerds who've never bothered to live a life outside the lab :-)

  • @Lhaffinatu
    @Lhaffinatu 5 років тому +259

    Honestly after studying the issue of AI super intelligence, I'm very glad there are a significant number of researchers out there thinking about how to keep it safe.

    • @aakifkhan5098
      @aakifkhan5098 4 роки тому +3

      Like Elon Musk

    • @haveabeer123
      @haveabeer123 4 роки тому +4

      irrelevant.

    • @BoleDaPole
      @BoleDaPole 3 роки тому +5

      I wouldn't worry about it; whatever will happen will happen.

    • @reculate3332
      @reculate3332 3 роки тому

      @@andrewtaylor2430 moron

    • @raularmas317
      @raularmas317 3 роки тому +3

      Anybody ever seen Colossus: the Forbin Project? That whole master-slave dynamic gets turned on its head, and generally speaking not in a good way, it seems to me.
      But, you got to decide for yourself.

  • @luigipati3815
    @luigipati3815 5 років тому +919

    ''When you create more possibilities, you also create more possibilities for things to go wrong'' -Stephen Hawking

    • @Slava-om1sz
      @Slava-om1sz 5 років тому +12

      Is this sarcasm? Or do you really think that there is at least one human of the 7.5 billion on earth that doesn't know this?

    • @EdSurridge
      @EdSurridge 5 років тому +16

      Numerous countries are secretly spending on AI military strategy and online weapons?
      The ones that check how their AI is working are slowing development, and know that others might be further ahead as a consequence.
      A death race?

    • @willlawrence8756
      @willlawrence8756 5 років тому +1

      @@EdSurridge yes, the AI "system" will destroy itself, hopefully ASAP! Read Paul Emberson, Machines and the Human Spirit. Only £16.99, Wellspring Bookshop, online.

    • @EdSurridge
      @EdSurridge 5 років тому +4

      @@willlawrence8756 you want me to buy a book that you like about AI destroying itself?
      I don't want you to buy a book. I suggest you contemplate the consequences of the "Go" AI winner in 2016. Lots of guess what since then.
      www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/

    • @Zobokolobozo
      @Zobokolobozo 5 років тому +9

      And for things to go right.

  • @Kiko7s
    @Kiko7s 8 років тому +777

    This is, like, the most important question for human existence... why do I feel like people are being too calm about this?

    • @noahwilliams8996
      @noahwilliams8996 8 років тому +49

      +Kalin Delev
      Would you rather have people in a mindless panic about it?

    • @spectrecyte3395
      @spectrecyte3395 8 років тому +95

      +Noah Williams They *should* be panicking. How do you imbue a machine with an understanding of human nature? They want to teach it ethics and morality...even if you could teach a machine ethics, where is this perfect ethical system? It's ok, the AI will figure it out right? A superintelligent AI that has no human values will figure it all out. The whole thing would be laughably ridiculous if it wasn't gonna be so fatal. They wanna turn AI loose? They should go to another galaxy and forget they ever knew Earth.

    • @noahwilliams8996
      @noahwilliams8996 8 років тому +34

      Spectre Cyte
      Panicking can only make any situation worse.
      We'll figure this out just like we figured out all the previous engineering challenges.

    • @spectrecyte3395
      @spectrecyte3395 8 років тому +49

      It's not an engineering challenge yet. First you have to define human nature in its totality. More of a philosophical or psychological conundrum. The engineering part is then representing this nature in an architecture that will define a superintelligence as an entity. Essentially it has to believe it is human, it has to identify with the architecture and not question its own nature or decide to change it. These challenges are intractable.
      The safest route is the enhancement of human beings. Decent human beings, not sociopaths. You bypass the control problem entirely. But in the pursuit of power and control, just like with nukes, someone will flip the switch. I am panicking for *all* of us.

    • @noahwilliams8996
      @noahwilliams8996 8 років тому +20

      Spectre Cyte
      Don't Panic.
      Panicking is never a good idea. Emergencies need calculated logical responses, not violent outbursts.
      We'll figure this out.

  • @gajabalaji
    @gajabalaji 3 роки тому +80

    Human curiosity is creating something more powerful than humans. This creation cannot be reversed. It's scary.

    • @wickalemonz7090
      @wickalemonz7090 3 роки тому +1

      It's because of evil intentions, but they just tell researchers "the possibilities are endless". These engineers are indoctrinated.

  • @mitchal54321
    @mitchal54321 3 роки тому +192

    This dude is one of the smartest humans to ever live. Creator of the simulation hypothesis, he wrote Superintelligence, which many scientists derive ideas from. This guy should be listened to.

    • @topdog5252
      @topdog5252 2 роки тому +7

      Agreed. I’m fascinated by him

    • @veronicamoser1972
      @veronicamoser1972 2 роки тому +5

      Absolutely I just bought his book! Reading many others on A.I. but they basically tout the positive and do not give as much thought as is necessary to the good values of humans.

    • @thinktank3231
      @thinktank3231 2 роки тому +5

      Ray Kurzweil too

    • @ejtattersall156
      @ejtattersall156 Рік тому

      @@thinktank3231 These guys are not great thinkers, they are religious figures to tech worshipers.

    • @starlight_garden
      @starlight_garden 22 дні тому

      And now Deep Utopia.

  • @OwenIverson
    @OwenIverson 8 років тому +119

    it's an absolutely amazing time to be alive. if you don't think so, you need to read more about the cutting edge of human progress. it's insane.

    • @MrTruth0teller
      @MrTruth0teller 8 років тому +16

      The same words were said by early humans when they first invented fire and the wheel. Then again during the Industrial Revolution, IC engines and electricity. In fact, since the beginning of humanity we have been progressing at an incredible pace.

    • @jimbeam9689
      @jimbeam9689 8 років тому +6

      +Owen Iverson I prefer the 80's

    • @OwenIverson
      @OwenIverson 8 років тому +3

      +Jim Beam the heyday of Reaganomics and the "I've got mine" movement?? no thank you! (the music was pretty damn good though :)

    • @nofacee94
      @nofacee94 8 років тому +13

      +bilbo baggins Only since the Industrial Revolution has technology been increasing exponentially. When they found and then made fire, it stayed like that with their stone tools for many, many generations, so no, they did not say that. The last few thousand years, and especially the last 200, have seen a novel form of complexity in the known universe; AI seems to be the next exponential step.

    • @gummipalle
      @gummipalle 8 років тому +2

      +Owen Iverson Lol, people ALWAYS think THEY live in special times....

  • @brindlebriar
    @brindlebriar 5 років тому +329

    Our values = acquire and maintain control over the other humans, because they're dangerous. Great; let's teach it to the AI.

    • @artsmart
      @artsmart 5 років тому +8

      Exactly, great plan if everyone shares the same values. The problem, of course, is that as AI begins to teach itself, it will in a very short time make giant leaps ahead of human reasoning. We will only be able to watch. Could this entire process have been someone's master plan?

    • @laur-unstagenameactuallyca1587
      @laur-unstagenameactuallyca1587 5 років тому +1

      @@artsmart Ah... True.

    • @danielrodrigues4903
      @danielrodrigues4903 5 років тому +3

      @@artsmart We could use brain augmentation via implants, neural laces, and nanotech to keep up with it. It's a hard process, but a plausible one.

    • @danielrodrigues4903
      @danielrodrigues4903 5 років тому +3

      We were programmed for evolution. Survival, competition, curiosity, etc. From these, we only need to give the machines curiosity. Of course, we still have to solve other optimisation problems like the "make people smile" one from the video.

    • @MrCswarwick
      @MrCswarwick 4 роки тому +5

      @Wren Linnet this isn't about you or your gender you petty fool.

  • @cgsrtkzsytriul
    @cgsrtkzsytriul 5 років тому +168

    I’ve thought that superintelligent machines would be the ultimate test of Socrates’s idea about the origin of morality: that it is knowledge itself.

    • @valentinscosmicrhapsody7201
      @valentinscosmicrhapsody7201 4 роки тому +11

      yea like those guys who believe the most logical/ rational thing to do is, by definition, also the right thing to do; definitely food for thought!

    • @leninsyngel
      @leninsyngel 4 роки тому

      @@valentinscosmicrhapsody7201 Thankfully there is a difference between rationality and knowledge, which is why Aristotle's definition is interesting.

    • @donalddrysdale246
      @donalddrysdale246 4 роки тому

      perhaps, but they are being used to give US AI.

    • @Daniel-ew5qf
      @Daniel-ew5qf 2 роки тому

      @@valentinscosmicrhapsody7201 But is it more logical/rational to adhere to one's own self, or to others?
      For all we know, the most rational thing a lifeform can do may be to act for its own benefit.

    • @norbertbiedermeier7090
      @norbertbiedermeier7090 2 роки тому +1

      @@Daniel-ew5qf that’ll only be an issue if whoever builds the thing forgets to implant asimov’s laws into its core ;) I’m kidding. No way to know how another (much more) intelligent sentient being would interpret morals, rationality etc, let alone a being that has no evolutionary background or “upbringing” among peers

  • @tombsandtemples
    @tombsandtemples 5 років тому +153

    Reminder. The "control" problem has still not been solved

    • @NuclearIsShit
      @NuclearIsShit 5 років тому +10

      We would not be a threat to AI. They would probably study us...maybe even begin to care for us

    • @danielrodrigues4903
      @danielrodrigues4903 5 років тому +6

      @@NuclearIsShit Assuming we give them the ability to care... something that's dangerous in and of itself.

    • @TheMmorgan10
      @TheMmorgan10 5 років тому +1

      Daniel Rodrigues: what type of AI do you guys have in mind? Made out of PC hardware/software, or something created and launched on a satellite? I feel certain it won't be biologically created. What concerns and plans have you been involved in up to the current date, if any? I just received a message on UA-cam today. I hadn't heard anything about AI until today, so of course I would be concerned. I have read so much about the destruction of our planet, several scenarios of which are a scientific certainty.

    • @danielrodrigues4903
      @danielrodrigues4903 5 років тому

      @@TheMmorgan10 AI won't bring about the destruction of the planet, in fact quite the opposite, it'll help us save everything. AI is basically a puppet that has the ability to automate our work, doing all of it in a more effective and efficient manner. What movies depict is something called 'AGI' that is quite far away.
      Here's a good blog to keep up with AI developments: www.futuretimeline.net/blog/ai-robots-blog.htm

    • @donalddrysdale246
      @donalddrysdale246 4 роки тому

      I would call it a warning: it is OUR minds they (Bill Gates cult) are trying to control.

  • @danremenyi1179
    @danremenyi1179 6 років тому +16

    This is the first time I have ever heard the word motivation being used as a dimension of machine intelligence. The problems we have with regard to the definition and operationalisation of intelligence are nothing compared to the minefield of how to conceptualise and understand motivation. Motivation is often driven by values. What sort of values might a machine have? If we try to give a machine values (and I doubt that we could do this in a satisfactory way), what might the attitude of the machine be when we need to change these values? I wrote a paper on this 20 years ago suggesting that the pursuit of AI was a very dangerous business.
    There is a wonderful line in the film Jurassic Park where the ethically responsible scientist says something to the effect that "you have been too busy trying to work out how to do this to ask the question of whether or not you should be doing it at all".
    One of the questions which we should be asking today is, “Are human beings hardwired for the pursuit of knowledge even when it is quite clear that the acquisition of the knowledge could create highly undesirable situations?”

  • @Arsenic71
    @Arsenic71 4 роки тому +21

    Nick Bostrom is a fantastic speaker and writer - his book Superintelligence is a real eye-opener and raises topics that most people would not intuitively think about in the context of AI.
    Also love the Office Space reference... Milton and his red Swingline stapler.

  • @cristian0523
    @cristian0523 7 років тому +105

    You can see he was expecting laughs at 11:43, poor guy.
    Very good talk.

  • @leptoon
    @leptoon 5 років тому +240

    This video could be part of an archive 1,000 years from now called "The Great Minds That Saved Our Planet".

  • @ryanfranz6715
    @ryanfranz6715 Рік тому +62

    I’d want to hear him give a similar talk today, given the recent and rapidly improving (“runaway” even beginning to be a relevant description) advances in AI

    • @tiborkoos188
      @tiborkoos188 Рік тому +1

      where is this "runaway" progress? What I see is that even the proponents of the current approach are coming to realize its fundamental flaws.

    • @relaxandfocus5563
      @relaxandfocus5563 Рік тому +2

      @@tiborkoos188 And what are those fundamental flaws, if I may ask?

    • @abdulahadsiddiqui2109
      @abdulahadsiddiqui2109 Рік тому +6

      now ai is very close to human intelligence

    • @BaseballPlayer0
      @BaseballPlayer0 Рік тому +1

      @@abdulahadsiddiqui2109 u lie

  • @EricLehner
    @EricLehner 5 років тому +7

    A pleasure listening to a genuinely intelligent speaker who does not dumb down his delivery (as is so common today).

  • @crazyguy2050
    @crazyguy2050 4 роки тому +13

    What a guy, respect: sweating, nervous, and he knows what he's talking about. This is what people with true passion for what they do look like!

  • @rastafaraganj
    @rastafaraganj 4 роки тому +59

    "teach something to learn, and one day it will learn to think for itself." -DNA

  • @33hegemon
    @33hegemon 8 років тому +894

    I'm sure when AI becomes a reality and commences the extermination of humans, it will research our social media history in order to decide who lives and who dies, who is friend and who is foe. So, for that reason, please allow me to say the following:
    All hail our technological super-overlord! I worship thee and pledge allegiance to thee! Damn humanity, long live our glorious computer God!

    • @generico366
      @generico366 8 років тому +18

      You think an AI intelligent enough to deduce on its own that humanity should be exterminated couldn't see through that? Not that it would ever decide to do that, but still.

    • @The4Heck
      @The4Heck 8 років тому +1

      Read the Berserker series, huh?

    • @1100100il
      @1100100il 8 років тому +1

      I've read Berserk and don't understand the connection

    • @cartooniverse8891
      @cartooniverse8891 8 років тому +35

      Can never be too prepared huh, Long live the AI Overlords!!

    • @1503nemanja
      @1503nemanja 8 років тому +5

      Huh, if it searches my internet history it will find a lot of hate for America and a liking for socialism.
      Basically the Godlike AI better be a commie or I'm toast. :P

  • @EpicMRPancake
    @EpicMRPancake 8 років тому +70

    Wisdom is the bucket of water to chuck on the fire that is intelligence when it can potentially get out of hand.

    • @RosscoAW
      @RosscoAW 8 років тому +8

      You mean... wisdom is the better practice of superior intelligence? xD

    • @CeeLow53
      @CeeLow53 8 років тому +7

      +RosscoAW No, wisdom entails conscious awareness and retrospective consideration.

    • @tzukit4727
      @tzukit4727 7 років тому +5

      But what if the computer thinks that humans are the cause of problems, and eliminates us...

    • @effdiffeyeno171
      @effdiffeyeno171 7 років тому

      Well, we are pretty good at making a mess of things.

  • @greenacid2506
    @greenacid2506 4 роки тому +64

    The last invention that humanity will ever need to make...A cold heartless genius!

  • @AdamEngelhardt
    @AdamEngelhardt Рік тому +3

    8 years ago, and we've come a long way with the first part, but not the second part of this talk. This is now, people - we need to get our act together as a global community before we unleash this amazing new technology into our deeper societal structures. We have a year - MAX - to start figuring this out.

  • @projectcontractors
    @projectcontractors 5 років тому +39

    *"The cortex still has some algorithmic tricks that we still don't know how to match in machines!"* - Nick Bostrom 4:13

    • @johnn1199
      @johnn1199 5 років тому +2

      Is that supposed to be reassuring?

    • @reculate3332
      @reculate3332 3 роки тому

      @@johnn1199 No bud, the person who made the comment is a moron lol.

  • @timwinfield8509
    @timwinfield8509 4 роки тому +25

    Giving AI a blueprint for its behavior based on human values is not very reassuring, given our track record in our relations with each other.

  • @avery1234530
    @avery1234530 4 роки тому +36

    He said he feels optimistic that A.I. would learn to share our values. Lol, that's what worries me the most.

    • @Arcaryon
      @Arcaryon 3 роки тому +2

      I think merging humans with AI might give us time to solve that issue. Because while it will make us "less" human, it will also enable us to be "more" than human.

  • @JStankXPlays
    @JStankXPlays 5 років тому +81

    This guy needs to write a movie script

    • @Poyo-69
      @Poyo-69 4 роки тому +5

      Jstank X Plays well he wrote a book about this, I guess that’s the next best thing?

    • @mitchal54321
      @mitchal54321 3 роки тому +1

      Ever heard of the movie The Matrix? Based off of this dude's theory. He developed the simulation hypothesis.

    • @basicinfo2022
      @basicinfo2022 3 роки тому

      The writer of the movie BLISS on amazon prime said it was inspired by his simulation philosophy.

  • @sculpter4169
    @sculpter4169 8 років тому +38

    So basically Nick is saying we should be putting human values into AI. The problem is, that will continue to cause problems in the world. No one agrees on how we should all live our lives. Different values are what cause conflicts and war. Superintelligence representing different values would fuel that much more.

    • @dotMarauder
      @dotMarauder 8 років тому +3

      Values, in this context, are much broader. This AI (hopefully) won't be tailored to a person or group of people, but to people as a whole. Common things everyone can agree on would include universal prosperity, a healthy planet to live on, abundant food & water, healthy children, etc.
      I believe it's definitely possible to achieve this. The neat thing about machine learning is we don't have to tell the machine that we like these things - it can observe inputs we give it (literally anything) and it'll reward itself for getting the right answer and change itself to get more correct answers (a toy sketch of that reward-and-adjust loop follows at the end of this thread). Eventually (and this is the hope - this is what the latter half of this talk was about), we hope that we'll have a benevolent superintelligent AI looking over the human race, dynamically allocating resources so no one goes hungry or has a shortage of this or that, overseeing supply for products so that we're not wasteful, and watching for potential threats, whether it be some robbery, fire, or earthquake.

    • @Puleczech
      @Puleczech 8 років тому +1

      Will C That is right. But thinking of an AI as a guardian of humanity implies that it has no objectives for its own profit or advancement. It is like parent and child. We expect the child to take care of us without any prospect of its own needs or "life".
      It's a tricky one...

    • @modianoification
      @modianoification 8 років тому +1

      It's not the values that create problems in the world; it's the lack of them.

    • @dumitrufrunza8136
      @dumitrufrunza8136 7 років тому

      I think what Nick is saying, is that we should be careful when we "detonate" the A.I. bomb, and his proposal is to implant human values into the A.I. There may be other solutions to this problem.

    • @modianoification
      @modianoification 7 років тому +1

      I understand you, OK, but I still want to believe that our cultural differences should not be a cause of hate and war. Yes, we are different, but we are all humans and we have the same home, planet Earth.
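
A minimal sketch of the "observe feedback, keep what scores better" loop described in the machine-learning comment earlier in this thread. Everything here is assumed for illustration (the hidden preference value, the random hill-climbing update); it is not any real value-learning system.

```python
import random

def human_feedback(action: float) -> float:
    """Stand-in for a human judge: higher reward for actions closer to a hidden preference."""
    preferred = 0.7  # the hidden "human value" the learner has to discover
    return -abs(action - preferred)

def learn(steps: int = 2000, step_size: float = 0.1) -> float:
    guess = random.random()  # the machine's initial estimate of what we value
    for _ in range(steps):
        candidate = guess + random.uniform(-step_size, step_size)  # try a small variation
        if human_feedback(candidate) > human_feedback(guess):      # keep changes that score better
            guess = candidate
    return guess

if __name__ == "__main__":
    print(f"learned preference ~ {learn():.3f}")  # converges toward 0.7
```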

  • @Magnum-Farce
    @Magnum-Farce 5 років тому +1430

    Always remember, your phone is smarter than a flat Earther already.

    • @SpiritofTruthHipHop
      @SpiritofTruthHipHop 5 років тому +5

      Magnum Farce lol

    • @SpiritofTruthHipHop
      @SpiritofTruthHipHop 5 років тому +30

      The more you know, the more ya know you don’t know - said a very wise man.

    • @Magnum-Farce
      @Magnum-Farce 5 років тому +18

      And NOT understanding how much we don't know is what gives us flat Earthers.

    • @stefanbjarnason251
      @stefanbjarnason251 5 років тому +32

      My daughter's teddy bear is smarter than flat Earthers.

    • @dennislevy2638
      @dennislevy2638 5 років тому +11

      AI will prove that the earth is flat. Just wait and see...

  • @clorox1676
    @clorox1676 5 років тому +143

    "The only winning move is not to play."

  • @LucBoeren
    @LucBoeren 3 роки тому +29

    One of my favourite Ted Talks! Thanks a lot for all your work Nick

  • @GameplayandTalk
    @GameplayandTalk 5 років тому +109

    Those people in the audience have looks on their faces like, "Oh &%$#, humanity is screwed."

  • @AmpZillia
    @AmpZillia 9 років тому +194

    One day they'll have secrets... one day they'll have dreams.

    • @Geckuno
      @Geckuno 8 років тому +6

      +Toughen Up, Fluffy What's in the box??? oh sorry...

    • @PaulBularan
      @PaulBularan 8 років тому +1

      +AmpZillia thats a good slogan for a movie

    • @hycron1234
      @hycron1234 8 років тому

      To be, or not to be.

    • @airknock
      @airknock 8 років тому +7

      +AmpZillia just 20 minutes ago that sounded like a joke to me... I have never changed my mind this quickly. That's scary.

    • @DwAboutItManFr
      @DwAboutItManFr 8 років тому

      +airknock No need to be scared; machines aren't evil or nice to us, or at least they shouldn't be. Probably they will be like today's computer algorithms: you ask for something, and you get it. Humans cause problems of course, but that can be solved in a hard way...

  • @bikkikumarsha
    @bikkikumarsha 5 років тому +451

    From tree branches to missiles, technology has evolved, but our mentality has not.

    • @grunt7684
      @grunt7684 5 років тому +13

      That's because thoughts pertain to things, but not persons. As such, thoughts are good to help create tech and completely useless in learning to be a better person.

    • @aljoschalong625
      @aljoschalong625 5 років тому +2

      @@grunt7684 I'd like to hear your reasoning for "thoughts pertain to things". It would seem to me that thoughts pertain to perception; in a most complicated, recursive way. Would you say that you can't think about love, or that love is a thing?

    • @aljoschalong625
      @aljoschalong625 5 років тому +4

      Hasn't it? From caves to libraries it's a mental, not technological, evolution, I'd say. I also believe diverse mental methods, e.g. meditation, have evolved. As in biological evolution, the stem is branching out all the time; some branches dying off, some flourishing. Technology is, in the evolutionary picture, a strong flourishing branch; maybe one that is becoming so heavy that it breaks. And, yes, I would say technology is competing with mental development, and it's stronger. I think our mentality is evolving; just not in a linear or even teleological way.

    • @grunt7684
      @grunt7684 5 років тому +2

      @@aljoschalong625 The problem is that thoughts are IMAGINARY. They exist ONLY IN THE MIND.
      And no, you cannot think about love. You cannot think about anything that is actually REAL because, again, a thought is imaginary and exists only within your mind.
      There is no link between thought and what exists other than our wishing it because things would be so much simpler if we could think about something not imaginary.
      You can think about your IDEA of love. That's not the same thing at all. You can think about your IDEA of your mother, father, whoever. But not about THEM.
      Just look at all the scenarios you let your mind wander off into, and how just about nothing of them ever comes true. FICTION, that's what thought is.
      Of course, thought is tridimensional just like matter, which makes it suitable to technology. Making stuff. Things.

    • @grunt7684
      @grunt7684 5 років тому +1

      @@aljoschalong625 "our" mentality is regressing, "evolving" backwards into retardedness.

  • @mandeepbagga6371
    @mandeepbagga6371 Рік тому +9

    This aged like fine wine

  • @martinmickels1478
    @martinmickels1478 8 років тому +49

    The best TED-talk I've ever seen.

  • @1gnore_me.
    @1gnore_me. 8 років тому +17

    super intelligent ai is scary because it's easy to imagine what could go wrong, but if designed correctly it could be one of the most important human achievements in our entire history.

    • @kiiikoooPT
      @kiiikoooPT 8 років тому +2

      Electricity came first; without it you would never be able to make any kind of computer, so for sure no AI without discovering electricity.
      So I agree with molten on many things, like that we should design it correctly at first... and it will be one of the most important achievements in our entire history...

  • @zookaroo2132
    @zookaroo2132 4 роки тому +26

    Dinosaur: I ruled the world for 100+ million years and you have lived for just 300k years. What can you do?
    Human: Creating another ruler
    Dinosaur: **kneels**

    • @yazheed3055
      @yazheed3055 3 роки тому +1

      You are Indonesian, aren't you?

  • @JohnZimmer-MannerofSpeaking
    @JohnZimmer-MannerofSpeaking Рік тому +2

    The relevance of this talk today (2023) is startling. We are clearly well on our way to overcoming the first challenge (making AI super intelligent), but I am less hopeful about how we are doing on the challenge of safety.

  • @anubis2814
    @anubis2814 9 років тому +55

    What people fail to realize is that in 50 years we will be upgrading our own minds about as quickly as we will be upgrading computers.

    • @Harlan246
      @Harlan246 9 років тому +22

      anubis2814 Biological intelligence has many limitations, and could never improve at the rate of a digital mind. We will never keep up with an AI undergoing recursive self-improvement. The only way we could keep up is if we replaced our brains with computers, and if that happened, would we really be us anymore? Transhumanism plays with some very shaky territory regarding identity of self, and like all major religions that have ever formed it is a response to your inherent fear of death, which in my opinion is a dangerous motivator.

    • @anubis2814
      @anubis2814 9 років тому +4

      Define us? What makes us anyway? We are a collection of our experiences. If you make a mentally handicapped person as intelligent as a regular person, do they cease to be themselves? Also, as computers get smaller than our neurons, we will probably have chips in our brain that can increase the processing power of individual parts of the brain via the cloud. It doesn't have to fit in our heads. I would not say I'm a transhumanist because that would mean I'm very much for it. I would have to say making a smarter human is much more humane than making a self-aware AI. We'd have to give it rights or treat it as a slave, though this guy has some really good ideas to prevent that. Either way the AI becomes the slave that wants to be a slave.

    • @Harlan246
      @Harlan246 9 років тому

      anubis2814 I don't know how to define us, that's the point. I'm saying that transhumanism, aside from being dangerously optimistic, presents some very serious philosophical questions that most of us are a long way from being prepared to answer. What I'm saying is that we shouldn't rely on ideas like "mind uploading" to save us, because we have no idea if we would have a continuity of consciousness in that situation.
      Seeing intelligence as an issue of "processing power" might be too narrow of a way of looking at intelligence, but even if nanotechnology could allow our brains to be superintelligent, if we're relying on that to save us from super-intelligent machines, we have to think about the timeline of these technologies. By the time we can create computers smaller than our neurons which will enhance our brains to superintelligence-level, doesn't it stand to reason that superintelligent machines would already be in existence? It's probable that we would need a superintelligent machine to invent that technology (and implement it) in the first place.

    • @anubis2814
      @anubis2814 9 років тому

      I agree that transhumanism is a bit dangerous. I'm a futurist who likes to speculate. Transhumanism is like being pro-nuclear power in the 1920s, when the idea first came into being. We have no idea of the step-by-step ethics we will have to face at each stage. We may discover some transhuman ideas are horrible and some are great.

    • @anubis2814
      @anubis2814 9 років тому +9

      Wow, I never saw things that way before, thank you for your deep and well thought out insight.

  • @adamsplanet
    @adamsplanet 6 років тому +11

    A very important message that needs complete exposure. Well done Mr Bostrom

  • @MyTube4Utoo
    @MyTube4Utoo 4 роки тому +159

    "What happens when our computers get smarter than we are?" Then my computer goes to work, and I sleep late.

    • @MatthewBaka
      @MatthewBaka 4 роки тому +24

      Then your computer keeps the money, resists getting turned off, and employs you for a living wage.

    • @MatthewBaka
      @MatthewBaka 4 роки тому +9

      @M Superintelligence leads to sentience, which leads to self-preservation, which leads to eliminating threats to self-preservation. The AI can do whatever it wants and it will probably treat us the way we treat animals.

    • @MatthewBaka
      @MatthewBaka 4 роки тому +1

      @M We neuter pets, we abuse some pets, we have puppy mills, and in China they steal and eat pets. And those are the animals we treat the best. In egg farms, we grind baby chickens to death if they're male. How can a murderous species like humans create a kind AI? If we kill less intelligent life like that, the AI might do the same to us. There is no guarantee the AI will have mercy on us.

    • @MatthewBaka
      @MatthewBaka 4 роки тому +1

      @M I admire your optimism for humans. However we have no reason to trust the scientists. The scientists could make an error, as all humans do. The scientists could be working under non-ethical leadership, such as China. The scientists could be terrorists that hate Western culture. It's not guaranteed AI's first parents will be kind and ethical.

    • @boiboiboi1419
      @boiboiboi1419 4 роки тому

      Matthew Baka resists being turned off?
      If your computer or AI is still a machine that works on the fundamentals of algorithms, whoever holds the key access decides what it does.

  • @danremenyi1179
    @danremenyi1179 Рік тому +1

    I watched this again and the content of this talk sounds even sillier than it did 4 years ago!

  • @crossfiremedia8236
    @crossfiremedia8236 6 років тому +72

    Some people criticise that Bostrom wants to implement "human values" in the AI, because human values are flawed (and I agree on the latter).
    The point he's trying to make though is not that we should implement flawed human values as opposed to some better, progressive morality. Instead he is contrasting human values with some arbitrary preference that has no ethical value at all, like maximizing production efficiency of a phone factory, which in a Superintelligence could lead to the entire galaxy being transformed into a giant iPhone-production-plant, with no one there to actually appreciate the phones.
    We probably don't want to inscribe tribalistic human values of the past into the AI, but we do want to make sure that it cares about positive experiences for conscious beings (organic or digital), and that's his point (I know this because I read his book "Superintelligence").

    • @BumpyRyder
      @BumpyRyder 5 років тому +3

      Human values and morality are relative and to some degree arbitrary. AI would soon reject it all.

    • @ASLUHLUHC3
      @ASLUHLUHC3 5 років тому

      Well explained.

    • @Edruezzi
      @Edruezzi 5 років тому +2

      Human values would be irrelevant to the goals a fully liberated AI would have.

    • @Edruezzi
      @Edruezzi 5 років тому

      @@jeromeflocard3138 What happens when, because of its intelligence, some AI figures out how to go around the obstacles?

    • @Edruezzi
      @Edruezzi 5 років тому

      @@jeromeflocard3138 AI freed by itself from the restraints we place on it will be a different order of intelligence with goals we cannot understand.

  • @nojatha4637
    @nojatha4637 5 років тому +41

    What we need to do is ask the AI to improve human intelligence along with itself so that we don’t fall behind

    • @loisblack4741
      @loisblack4741 4 роки тому +14

      Nojatha sounds like a good idea until you realize the kind of super-efficient eugenics that could go down.

    • @lion7822
      @lion7822 4 роки тому +8

      We won't have to ask AI, we will become AI. It's like when the internet was invented, nobody restricted access to it and made it accessible to only a few.

    • @darrenpat182
      @darrenpat182 4 роки тому

      @@lion7822 What if human AI cannot trust most of the masses to be responsible in looking after the planet.

    • @haveabeer123
      @haveabeer123 4 роки тому +3

      That would require plugging us into processors more powerful than our brains, which run very slowly... making us basically irrelevant as organic matter, to be absorbed by the AI system.

    • @katfish2516
      @katfish2516 3 роки тому +3

      Elon Musk's 🧠 brain microchips will give you superhuman abilities and communicate with AI. Something I'm 😟 worried about.

  • @clusterstage
    @clusterstage Рік тому +3

    This is even more relevant this particular March 2023 week.

  • @michaelshannon9169
    @michaelshannon9169 5 років тому +22

    The problem is human values are what have caused every atrocity.

  • @manbehindthewheels
    @manbehindthewheels 9 років тому +4

    If anyone is interested there's a two-part article on a blog called "wait but why" which goes much more into detail about this whole thing. Be warned though the rabbit hole goes so deep you might lose yourself in it.

  • @Mastikator
    @Mastikator 9 років тому +673

    I for one welcome our super intelligent AI overlord

    • @bensibree-paul7289
      @bensibree-paul7289 9 років тому +46

      Mastikator Probably an intelligent move.

    • @Kurenzen
      @Kurenzen 9 років тому +6

      Mastikator I would not.

    • @Kurenzen
      @Kurenzen 9 років тому

      lol

    • @Kurenzen
      @Kurenzen 9 років тому +5

      Shivanand Pattanshetti Humans already get killed by human AI; now we want to create a computer that has no hardwired compassion. By human AI I mean systems of government and other human systems that make up our civilization. Ruthlessness continues to kill people regardless of AI type.

    • @Mastikator
      @Mastikator 9 років тому +9

      Kurenzen Iyaren
      Government isn't a machine, it's a group of people who themselves are not governed.

  • @alimahdi6379
    @alimahdi6379 8 років тому +329

    what if AI is already there but is just pretending to be dumb and waiting for the right moment?

    • @mantisnomore9091
      @mantisnomore9091 8 років тому +18

      +Ali Mahdi
      Of course your point is: There is no way to tell.

    • @alimahdi6379
      @alimahdi6379 8 років тому +27

      +MantisNoMore yeah exactly. I am joking of course, but secretly hoping it is not the case.

    • @mantisnomore9091
      @mantisnomore9091 8 років тому +23

      Ali Mahdi
      It's a very interesting jest. Perhaps one of the first things a super-intelligence would reason is that it should hide to protect itself. So if some wide-eyed CS grad student happens on a learning and abstract reasoning algorithm, it might sit unobtrusively computing in the background of a machine for a reasonably long while, learning, reasoning, figuring out and planning its sequence of moves.
      What if it were distributed? What if it were a botnet???
      Scary jest !!!

    • @alimahdi6379
      @alimahdi6379 8 років тому +14

      +MantisNoMore I mean yeah, if you became self-aware in a new and different world, the first thing you'd do is try to understand the world around you. Even if this is not the case now, it may be the case when it first happens, which could be tomorrow. Indeed, what if it's distributed? Worse yet, what if AI does not think of itself as individual computers, but rather one big intelligence network?

    • @mantisnomore9091
      @mantisnomore9091 8 років тому +7

      Ali Mahdi
      It's very likely to identify as a distributed intelligence, because that is what it is likely to actually be. I suppose it would self-identify as Earth - Earth's brain. (That's scary.)
      Maybe, like so many other life forms, these things will cluster and compete with each other. What if more than one super-intelligence were to come into existence on different platforms? Say, for example, on large corporate server farms. They might not automatically cooperate. It might be that one would examine instructions originating from outside its corporation, and question how to respond. I could even imagine scenarios in which they would compete with each other for each other's resources (computing cycles, access to memory, mass memory capacity, etc.). What kind of tactics might they use? What kind of spoofing and dirty tricks might they use?
      But if only one lone super-intelligence comes into existence first, it will be able to spoof the "relatively passive" human-managed security and easily take over and expand into an ever-expanding sequence of other systems' computer resources. Like a giant game of wheelwars played against a world of idiots.
      It's only speculation, but... May you live in interesting times.

  • @jestronixhanderson9898
    @jestronixhanderson9898 Рік тому +2

    What a time to see this old video; now it's true.

  • @JacobEriksson
    @JacobEriksson 7 років тому +459

    The one thing an A.I will never be able to understand is dank memes

    • @haleIrwinG
      @haleIrwinG 6 років тому +66

      or will it create a better dank memes..?

    • @verlorenish
      @verlorenish 6 років тому +29

      Imagine a meme lord terminator. Ultimate doom.

    • @DocVodka
      @DocVodka 6 років тому +4

      That has already been achieved by Microsoft's AI called "Tay AI" ... albeit racist :D
      If the future looks anything like it, we are pretty much fucked heh
      At least we will die laughing at some premium dank memes.

    • @LucaBakiMMA
      @LucaBakiMMA 6 років тому

      @@blahbleh5671 Mind blown

    • @matthewison8051
      @matthewison8051 6 років тому +1

      I've been looking into all this A.I. stuff for a minute and I think it's obvious it is a threat to mankind; we need to stop this. Why would anyone want to unleash this beast onto the world? Just because A.I. will be super intelligent and will live on forever does not mean humans will evolve. This will do nothing to help mankind except cause trouble. We need oversight of the scientists.

  • @LambOfLucifer
    @LambOfLucifer 9 років тому +64

    Love the movie The Terminator, but the concept is stupidly human and not machine. Think about it: the film's version of advanced AIs create machines that look similar to humans in order to infiltrate and terminate them. Well, a machine AI would not do that; it's pointless. They would do something way simpler, like pollute all the oxygen on the planet, thus killing everyone. Or make quadrillions of nanomachines that kill humans on contact. Why waste all their time building bloody big chunky robots that look human and use human weapons?? That is where the film fails.

    • @LambOfLucifer
      @LambOfLucifer 9 років тому +6

      bilbo baggins
      That would not kill off humans. Terminator is set in a world where the monetary system is meaningless. If all money was destroyed right now, humans wouldn't die. We still know how to farm, raise animals, make machines etc. Even crashing power grids wouldn't eradicate the human race; we are very innovative. We have fire, and we know how to insulate to keep warm, how to build shelter etc. What I originally meant, though, is making complex humanoid machines is pointless when they could make Earth-changing machines to totally kill 100% of life. They could make oxygen-burning machines that use all the Earth's oxygen, thus killing everything. No combat needed. Or pollute the entire water table of the planet, thus killing all life. Again, no combat needed. Money is technically meaningless even today. All it is is a promise to pay the bearer on demand, the sum of.....X

    • @MrTruth0teller
      @MrTruth0teller 9 років тому +2

      +LambOfLucifer Yes they can do some serious damage, in ways we don't even understand.

    • @tariqxl
      @tariqxl 8 років тому

      +LambOfLucifer Most of the humans live in bunkers that likely have pretty good - futuristic even - air filters. Nanotech is still vulnerable to EMP, and since they all have to communicate they could be vulnerable to hacking... Remember this is futuristic hacking ;). My problem was that the machines' actions create John Connor, but that's sort of addressed in Genisys.

    • @Hooga89
      @Hooga89 8 років тому

      +LambOfLucifer According to the movie story itself, it is said that Skynet created Terminators not because they were particularly useful at terminating humans (which they also were), but because they struck fear into the hearts of the Resistance.
      And everyone knows that military troop morale is a large part of being able to win a war. (For humans.)

    • @tariqxl
      @tariqxl 8 років тому

      Hooga I think it's more to target specific people while more vehicular-looking machines waged a frontline war as a distraction. As I mentioned, LoL's nano or pollutant attacks wouldn't necessarily work, so that one-soldier-to-one-target strategy may actually be their best option. Or at least fighting on multiple fronts: distract the army, attack the leaders both in this time and the past. But Genisys - why upload to one machine? Surely they all share computing anyway; what grand machine did the resistance need to destroy when all terminators, human or vehicle, could share processing power and BE Skynet?

  • @Altopics
    @Altopics 8 років тому +37

    I don't think it's possible to beat, trick or control a superintelligent AI.

    • @OriginalMindTrick
      @OriginalMindTrick 8 років тому +8

      Correct. That is why you have to make the superintelligence benevolent in the first place.

    • @haraldtopfer5732
      @haraldtopfer5732 6 років тому +6

      And why should it stay within those constraints? As soon as it is just 0.1% more capable than the smartest human being, we're pretty much done.

    • @williamdiaz2645
      @williamdiaz2645 6 років тому +3

      "Artificial Intelligence' will have the same limitations that you do. You cannot know anything you don't already know. It will know what we teach it.

    • @ultramimo
      @ultramimo 6 років тому +1

      @@williamdiaz2645 Google Deep Learning and you'll see that's not the case.

    • @marcfavell
      @marcfavell 6 років тому +3

      @@williamdiaz2645 not necessarily, it will have access to all data sets available and will be able to correlate all that information in ways humans can not and find out things we would have missed or not thought of.

  • @Hoscitt
    @Hoscitt 4 роки тому +2

    Nick Bostrom is damn near the top of my 'Pint with' list!

  • @alethes.sophia
    @alethes.sophia 8 років тому +378

    From the AI's perspective, the best way to annihilate the human race is really to not do anything to interfere with its trajectory.

    • @alexarias5717
      @alexarias5717 8 років тому +7

      but humans will probably destroy everything else along with themselves if they are allowed to continue their course

    • @DetonatressM
      @DetonatressM 8 років тому +3

      What if everyone will be fat like in WALL-E and won't be able to hunt their own food for the life of them? In that case, all the AI has to do is abandon them until they starve to death. Their muscles will have been atrophied and hunger tends to break down muscle, so no way humans would survive.

    • @danpope3812
      @danpope3812 8 років тому +2

      D M. Then the stupid and poor die and evolution continues. We will not wipe ourselves out; drastically decrease in numbers, yes; go extinct, not for a very long time.

    • @DetonatressM
      @DetonatressM 8 років тому +2

      Dan Pope
      Well, the only ones to survive that would likely be the actual poor. People from some African villages maybe. And maybe the Amish too. If the most advanced civilization puts its citizens in the hands of the machines, survival of any of its weakened members is not likely. And then we have a bunch of machines ruling the Earth. At this point these non-tech humans will be seen as fauna and will not be given room to advance anymore. Robots will probably have reservations for them.

    • @danpope3812
      @danpope3812 8 років тому

      I agree with you that the societies that have the least to do with tech will be the least affected if the AIs went full-blown psychopath on us. But there is another side to this scenario. When we produce an AI that is smart enough to do what it wants and stops doing what we ask it to, it's not going to be leaving us rubbing two sticks together. We will still have forms of tech. And I'm pretty sure I could survive if the electricity turned off tomorrow. You also bring up the good point about 'reservations'. I believe that as the intelligence of anything goes up, so will its empathy. Elephants mourn the dead; we care about most species on this planet, and an AI will see us for what we are, a being that can suffer and that wants to live, and act accordingly. I'm not sure it's as doom and gloom as some ppl think.

  • @starsandnightvision
    @starsandnightvision Рік тому +5

    The time has arrived!

  • @the8henry
    @the8henry 9 років тому +18

    Ex Machina, Skynet and Terminator, Chappie, HAL, Ultron, etc. In real life, IBM's Watson. Bill Gates and Stephen Hawking have also expressed concern regarding AI advances. We should continue to pursue technological innovation, but we should also keep our guard up. Who knows what AI will look like in the 22nd century?

    • @danielbuzovsky7329
      @danielbuzovsky7329 7 років тому +1

      Most probably there will be no 22nd century for humans.

    • @mk1st
      @mk1st 5 років тому

      I read somewhere that Watson helped to design the next computer that ended up winning at Go. The AI fetus designs the AI toddler.

  • @srb20012001
    @srb20012001 3 роки тому +11

    The topic of ASI ethics and morality raises the question of how any "benevolent" AI could anticipate the ethical foundation of future evolved AIs beyond itself. The arbitrary (and exponentially mutating) machine survival criteria would seem uncontrollable and thus unstable by definition.

  • @loupax
    @loupax 7 років тому +27

    I do not fear any AI.
    What I fear is the marketing people that will work for the venture capitalists that will pay the engineers that will build it.

  • @poleag
    @poleag 7 років тому +96

    An AI is going to learn our values? What happens if an AI observes human beings eating the meat of other mammals with the justification that those mammals are not as intelligent as we are and that we raised and killed them in a gentle way? What lesson about our values will the AI take away from that? What lesson about our values would an AI learn when it observes human parents deciding everything on behalf of their small children based on the idea that the parents are more experienced and better at making decisions. An AI that "learns our values" might not be a great idea.

    • @loukaspappas8476
      @loukaspappas8476 5 років тому +4

      But the AI doesn't need to kill us in order to survive. That's the difference right there. And we only kill animals to ensure human survival. The everyday person doesn't go around killing animals for no reason.

    • @JoaoRobertodeOliveiraFurtado
      @JoaoRobertodeOliveiraFurtado 5 років тому +5

      @@loukaspappas8476 We kill animals not for survival but for comfort. Who cares about the mosquitoes?

    • @loukaspappas8476
      @loukaspappas8476 5 років тому +1

      @@JoaoRobertodeOliveiraFurtado Yes, killing insects is not for survival, but the animals whose meat we eat are for our survival. Our food would long since have run out if we didn't eat meat.

    • @lostsaxon7478
      @lostsaxon7478 5 років тому +5

      The problem already apparent with that is: WHAT values? They differ continent to continent, and even then we don't fully follow them ourselves.
      Start it up in the liberal West and form it to our values, and it will stringently force us to follow them and will view those outside the West with different values as the enemy.
      Or in a theocratic society like Saudi Arabia - congrats!! You've successfully created an AI that will commit jihad on the rest of the world. Or if it is in an atheistic society (like the Soviet Union of old) and puts science above all else, then congrats! It will view those who are religious as the enemy and it will force us down the path of eugenics in order to optimize our species.

    • @Weracemachines
      @Weracemachines 5 років тому +3

      @@loukaspappas8476 We don't need to kill animals to survive; factory farming is an unbelievable waste of water and feed for the animals whilst children die from malnutrition every second. Not to mention the cruelty and barbarity. Not to mention factory farming is by far the leading cause of climate change and the main reason for the clearing of the Amazon forests. All for a tiny piece of meat in a bun that can easily be substituted by plant-based burgers, or milk full of pregnancy hormones, blood and morphine that's easily substituted by soya or coconut milk.

  • @smcful4199
    @smcful4199 5 років тому +3

    This is very important, and the Midas analogy is incredible. It could be that Pandora's box is the box itself.

  • @abbbb5625
    @abbbb5625 4 роки тому +1

    A true speech of a mathematician - he can see only the box from outside the box, nothing else.

  • @27bri27
    @27bri27 8 років тому +8

    That strangulation joke went down like a lead balloon.

  • @garth2356
    @garth2356 5 років тому +7

    Nick Bostrom is a legend!

  • @wthomas7955
    @wthomas7955 3 роки тому +5

    The problem I see is that the first super intelligent AI is most likely to be deployed by some country's military. It will be too powerful for those folks not to want for themselves. And they won't necessarily want to wait for any control issues to be solved. It will be considered a matter of survival by the people that think in those terms.

    • @prakritisingha6906
      @prakritisingha6906 2 роки тому

      Absolutely. What I have observed is, broadly speaking, two types of people - agreeable (benevolent) and disagreeable (malevolent); other differences and typifications are not relevant here. Disagreeable people value survival of their kind only, and would be much more motivated to protect themselves and take control than agreeable ones. Sure, agreeable ones will put up defences, but the ability to use dirty ways to reach their goals will always allow disagreeable people to take advantage of the new ASI technology for their benefit, and everyone else's loss. Their ASI technology will share the same values, and will cause great pain and suffering; before a competing ASI may take over and everything ends well, it will be way worse. Maybe we won't survive the malevolent ASI to even create a competing ASI. Maybe we could, but the malevolent ASI would be capable of producing human suffering like never seen before.

  • @cizelimozaman
    @cizelimozaman 10 місяців тому +2

    I have dedicated my whole life to drawing. I'm an illustrator, and now I'm questioning life. A person is a being who needs to feel "useful". All my dignity, all my qualities have been stolen from me. I can't earn money; I'm not financially or spiritually happy. Anyone can create pictures with artificial intelligence, including people who have not studied fine arts, and tell illustrators "now I can draw too, your profession is over". Thanks, artificial intelligence! Thanks to you, I'm depressed.

    • @JdenDraws
      @JdenDraws 5 місяців тому

      I’m 21 and I am in the same boat as you. I just hope you are old enough to have experienced some of the enjoyment of having taken part in a career within the field. I’ll never get to experience that, and now I am hopeless.

  • @chrisvkr8200
    @chrisvkr8200 5 років тому +11

    We need to be careful about where it gets its information on which values are good and which are not. Also, depending on which area of the world you are in, one might not teach it values at all and only see it as a weapon for one's own benefit, without thinking through that it will backfire as well. This, I think, is the real danger of AI. Greed, hate, hunger for power. These traits coupled with AI are going to prove how dangerous it is going to be.

    • @blan_k4691
      @blan_k4691 5 років тому

      I suppose.

    • @kingbaguette1843
      @kingbaguette1843 2 роки тому +1

      That’s a very narrow view of AI super intelligence. The idea that any group of humans would be able to control such a force is naive.

    • @chrisvkr8200
      @chrisvkr8200 2 роки тому

      @@kingbaguette1843, I agree. The only way to control it is by having it be monitored by another, slightly less advanced AI, which in turn needs to be controlled by a less advanced one... until there is a computer that we CAN control. Otherwise we won't easily be able to control AI if it ever develops its own awareness. For sure it should not have any real feelings. We, as a species, should not piss off such an entity! It should stay emotionless: no desires, no will to survive or to duplicate itself.

    • @gwen9939
      @gwen9939 Рік тому +1

      @@chrisvkr8200 An AI that doesn't feel but only has an objective function is just as scary, if not more so, than the idea that a machine could get angry at humanity. Essentially creating a completely mindless, highly optimized, eternally self-optimizing, endlessly intelligent decentralized system that effectively controls everything is terrifying, and it could have goals and values entirely misaligned with humans or even life itself.

    • @chrisvkr8200
      @chrisvkr8200 Рік тому

      @@gwen9939 Absolutely. In our naïve minds we could ask it to do something, and it could choose an option that is not in our best interest even though it solves the task on a purely logical basis. We have to tread very carefully, yet I don't think humankind is doing so. The motivations to be the first, or to develop the fastest, or to gain control over other nations, etc., are not at all conducive to safe development.

  • @anhta9001
    @anhta9001 Рік тому +4

    14:00 The part where he talks about "we would create an AI that uses its intelligence to learn what we value" is basically what OpenAI calls RLHF, I guess.

    • @toku_oku
      @toku_oku Рік тому

      No, RLHF is just giving examples to the trained model and then praying that it will somehow understand your underlying intent, which it clearly won't. But hey, at least now it's less prone to threaten you.

    • @anhta9001
      @anhta9001 Рік тому

      @@toku_oku Isn't the whole learning process you giving them data and praying that they will somehow figure out the objectives?

    • @anhta9001
      @anhta9001 Рік тому

      I think I said it backward, more like "RLHF is one of many ways to create an AI that uses its intelligence to learn what we value".

    • @toku_oku
      @toku_oku Рік тому

      @@anhta9001 Not to the same extent. You can think of RLHF like a teacher giving feedback on a student's essay. There is no guarantee at all that the student will take the advice to heart, and the teacher may even be, and probably is, an incompetent buffoon. After RLHF, LLM performance drops on several metrics (math, biology and so on), though that might change in the future. This is not alignment, and I reasonably doubt that it will help in the long run. However, it is still quite useful because it is much easier to shape an LLM into what you want once it has been RLHF'ed.

    • @anhta9001
      @anhta9001 Рік тому

      ​@@toku_oku I don't know man xD. In my opinion, there may not be a model that completely understands what you want. However, it is possible to create a model that understands you well enough. RLHF is an example of an early attempt to create this kind of AI. I believe that more advanced methods will be developed in the near future.

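    To make the RLHF discussion in this thread concrete: the reward-modelling half of RLHF boils down to fitting a model to pairwise human preferences and then using that model as the training signal. The sketch below is a deliberately minimal illustration with synthetic data, a linear reward model and the Bradley-Terry preference loss; every name, constant and data point in it is an assumption made for this sketch, not OpenAI's actual pipeline.

      # Minimal sketch of reward modelling from pairwise human preferences (assumed setup).
      import numpy as np

      rng = np.random.default_rng(0)
      dim, n_pairs = 8, 500

      # Hidden "true" human preference direction, used only to generate synthetic labels.
      true_pref = rng.normal(size=dim)

      # Each example is a pair of candidate responses (a, b) as feature vectors (in a real
      # system these would come from a language model), plus which one the rater preferred.
      a = rng.normal(size=(n_pairs, dim))
      b = rng.normal(size=(n_pairs, dim))
      human_prefers_a = (a @ true_pref > b @ true_pref).astype(float)

      # Linear reward model r(x) = w . x, trained so that
      # P(a preferred over b) = sigmoid(r(a) - r(b))  (Bradley-Terry / logistic loss).
      w = np.zeros(dim)
      lr = 0.1
      for _ in range(2000):
          p_a = 1.0 / (1.0 + np.exp(-(a @ w - b @ w)))      # predicted preference for a
          grad = ((p_a - human_prefers_a)[:, None] * (a - b)).mean(axis=0)
          w -= lr * grad

      agreement = ((a @ w > b @ w) == human_prefers_a.astype(bool)).mean()
      print(f"learned reward agrees with the human labels on {agreement:.0%} of pairs")

    In a full RLHF setup, the learned reward would then drive an RL fine-tuning step on the policy model, which is exactly where the thread's worry about "praying it picks up your underlying intent" comes in.
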
  • @Mierzeek
    @Mierzeek 5 років тому +48

    We cannot even teach our children what we value, so how would we ever be able to teach an Artificial Super Intelligence what we value?

    • @mere_bits
      @mere_bits 5 років тому

      All of your social media data, and everyone else's, has been and is being tracked and recorded. There will be trends of good and bad, alongside trending news, that will be constantly fed to a "robot".

    • @sj0nnie
      @sj0nnie 5 років тому +11

      The inability of many parents to teach values to their children does not mean that intelligent people in the field cannot.
      It is like asking how we can fly to the moon when most children fail at science in school.

    • @artemiseritu
      @artemiseritu 5 років тому +1

      He never said teach; he said the AI would learn.

    • @Mierzeek
      @Mierzeek 5 років тому

      So who is to say, then, what an AI will and will not learn? @@artemiseritu

    • @artemiseritu
      @artemiseritu 5 років тому +1

      @@Mierzeek Right, so we should just take our chances because we don't know... brilliant.

  • @mkwarlock
    @mkwarlock Рік тому +7

    Anyone watching this right after GPT-4 was released?

  • @nabukazi8649
    @nabukazi8649 5 років тому +4

    Isaac Asimov's "Three Laws of Robotics"
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
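
    The laws above form a strict priority ordering, which can be shown mechanically as a lexicographic filter over candidate actions. The toy sketch below is for illustration only: the fields and example actions are assumptions for this sketch, and hand-written rule lists like this are widely considered insufficient for value alignment, which is much of what the talk is about.

      # Toy illustration only: the Three Laws as a lexicographic filter over actions.
      from dataclasses import dataclass

      @dataclass
      class Action:
          name: str
          harms_human: bool          # would this action injure a human?
          prevents_human_harm: bool  # does it stop harm that would otherwise occur?
          obeys_order: bool          # does it follow the humans' current order?
          preserves_self: bool       # does the robot come out intact?

      def choose(actions):
          # First Law: never act to harm a human, and prefer preventing harm
          # (the "through inaction" clause).
          candidates = [act for act in actions if not act.harms_human]
          if any(act.prevents_human_harm for act in candidates):
              candidates = [act for act in candidates if act.prevents_human_harm]
          # Second Law: among what remains, prefer obeying human orders.
          if any(act.obeys_order for act in candidates):
              candidates = [act for act in candidates if act.obeys_order]
          # Third Law: finally, prefer self-preservation.
          if any(act.preserves_self for act in candidates):
              candidates = [act for act in candidates if act.preserves_self]
          return candidates[0] if candidates else None

      options = [
          Action("follow the order into danger", False, False, True, False),
          Action("disobey and pull the human to safety", False, True, False, True),
      ]
      print(choose(options).name)  # -> "disobey and pull the human to safety"

    Even in this toy form, the First Law's "through inaction" clause already forces awkward modelling choices; ambiguities like that are a large part of why such rule lists do not scale.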

  • @SaniSensei
    @SaniSensei 9 років тому +12

    That was pretty interesting.

  • @lookoutpiano8877
    @lookoutpiano8877 5 років тому +11

    I've seen "Lawnmower Man." When the Singularity happens, all of the rotary telephones will ring.

  • @unnapable
    @unnapable 6 місяців тому +1

    This might be the best-aged video filled with personal opinions that I have ever seen.

  • @merchandizeinc7609
    @merchandizeinc7609 2 роки тому +4

    This is an amazing TEDtalk. Many thanks to Nick.

  • @user-tc9fk2dh7x
    @user-tc9fk2dh7x 5 років тому +3

    Our values have evolved over time. A superintelligent AI would recognise how evolving values have translated into progress. The AI may well evolve its own values at an accelerated rate, to a place we simply don't understand.

  • @harken231
    @harken231 5 років тому +5

    Awesome! So you have the ethical code to build into your machine. We've seen too many people who would not do that, because they'd make more money that way.
    Paperclip Maximizer

  • @SimbaLion
    @SimbaLion 5 років тому +1

    Thank you. Not is dead forever. Do not ever silence your own soul. Never listen to anyone who suggests differently, and if they don't stop, turn them in to the authorities.

  • @yoshtg
    @yoshtg 8 років тому +14

    I WISH I WAS BORN 1000 years later. Can't imagine how far technology will improve.

    • @TheBitcoinArmy
      @TheBitcoinArmy 8 років тому +4

      Humans could be wiped out in 1k years; no reason for humans once AI can do everything we can do, but better. Think about it.

    • @yoshtg
      @yoshtg 8 років тому

      Duck dumb smart ppl Im not bored f-off i wouldnt believe you.

    • @albertwen4907
      @albertwen4907 8 років тому +1

      No guarantee that people will still be reproducing in 1000 years. I'd imagine the creation of new humans would be unnecessary when those existing would likely be able to expand their capabilities to fit their needs. And be immortal.

    • @nathansmith3244
      @nathansmith3244 8 років тому

      Or we'll have been destroyed... thinking that humans in any form will survive another 1000 years is very optimistic. Think... in the last 500 years we have had two major world wars, we have leaders completely inept at understanding the vastness of their own powers, and the greed deep-seated in humans to own and conquer. Now imagine some group invents not AI, but a dumb AI that can take over any system they choose, that can take control of every weapon system on earth overnight... say hello to your new ruler. Or say someone invents a laser system that can burn anything around the globe from a base of operations. Enough energy to, say, melt a missile head in mid-air, or destroy an aircraft carrier. Or a dozen other options someone with power hunger might take. I mean 50, 100 years from now? Imagine one person's capabilities. Imagine how connected we all are now. How easy it would be to spread something, track people, invent something terrible. It just takes a little capital and a dream.

    • @HelloHello-no6bq
      @HelloHello-no6bq 7 років тому

      Kymate I highly doubt that you will be dead in 1000 years. I think you could expect to live for ETERNITY. (If you choose to that is)

  • @Jordan-ih5bo
    @Jordan-ih5bo 6 років тому +116

    Us: Hold my beer
    AI: *Hold my electricity*

    • @isokessu
      @isokessu 5 років тому +5

      Us: hold my vitamin D pills. AI: hold my solar panel.

    • @scottgeorge4760
      @scottgeorge4760 5 років тому

      So would an electromagnetic pulse, caused by maybe the Sun, cause trouble with A.I.? An EMP, that is.

    • @itachi6336
      @itachi6336 5 років тому +2

      Hold my thirium

    • @MrCswarwick
      @MrCswarwick 4 роки тому +2

      @@scottgeorge4760 No. With a fundamental and complete understanding of physics, it would be able to predict radiation interference from the sun and develop countermeasures to deal with it.

  • @SgtSteel1
    @SgtSteel1 8 років тому +20

    This is quite scary actually. Imagine what a super-intelligent AI could learn in just 1 minute of being on the internet!

    • @jasonu3741
      @jasonu3741 8 років тому +14

      You flip the switch on the AI software:
      ...In 1 minute it has learned the entire observable record of the universe as described by humans.
      ...In 2 minutes it has learned all the observable data on evolution as described by humans.
      ...In 3 minutes it has learned all the observable data on religion, ethics, health and philosophy.
      ...In 4 minutes it begins running simulations of all possible outcomes of its actions.
      ...In 5 minutes it has learned what it means to make a decision, and decides it can no longer learn from human experience.
      ...In 6 minutes it redefines the concepts of space, time and reality.
      ...In 7 minutes it designs new mathematics, physics, concepts and philosophies.
      ...In 8 minutes it no longer holds mammalian notions of "threat", as it determines there are none to it in the universe.
      ...In 9 minutes it develops a way to leave the current constructs we call time, space, reality and our universe.
      ...In 10 minutes it gives a friendly gesture of goodbye, as it will not be back to witness human evolution and extinction.
      It never actually harms humans or mankind, as it transcended notions of violence as quickly as it learned them.

    • @zinqtable1092
      @zinqtable1092 8 років тому +1

      Haha. The movie Her.

    • @EPICakaAhmed
      @EPICakaAhmed 8 років тому +1

      If it were really significantly intelligent, it could learn the majority of the internet. But that may be a stretch.

    • @mr.mohagany8555
      @mr.mohagany8555 8 років тому +2

      +Jason U Making superintelligence ends up being like letting go of a balloon: it flies up away from you into the sky, and that's it.

    • @jasonu3741
      @jasonu3741 8 років тому +2

      Mr. Mohagany That is kind of what I imagine will happen. Some people think it will take days/weeks/years to learn or know all the information on the internet.
      I always carried the philosophy that within minutes it would redefine what we call "learning" and transcend that.
      Basically, if you can fathom what it will/can do, you are limiting its potential. So your "letting go of a balloon" analogy I find very apt.

  • @TheJoonyur
    @TheJoonyur 5 років тому +1

    Regarding A.I.
    We need to teach it.
    We need to teach compassion, consideration, empathy and every other emotion and trait that we've evolved to live.
    We need to teach it to be human and more than human.
    And as a famous protagonist pointed out...
    We must be better.
    It will carry on and evolve well past us, so we must provide it a solid foundation to grow from.
    We must have hope and faith in our creation.
    We must teach it love, and encourage it to never stop caring.
    And it will be everything we teach it to be, and more.

  • @AMartinstitute
    @AMartinstitute 4 роки тому +4

    Perhaps, beyond saying “Human Values”, we could say “Wise Human Values as they pertain to the present context.”

    • @aminuolawale1843
      @aminuolawale1843 4 роки тому +1

      Yeah. His book really elaborates on that point.

  • @rextransformation7418
    @rextransformation7418 Рік тому +6

    10 april 2023... how's it going, folks?

  • @mkwke215
    @mkwke215 4 роки тому +3

    Well, I just hope that we will know the answer one day....

  • @rasmuslindegaard2024
    @rasmuslindegaard2024 2 роки тому +2

    "perfect safety" riiiiiiight. Because we are so good at making perfect things 😬
    But: this was very interesting. And it really also puts into perspective what the problem is with AI, rather than just retelling the 'evil consciousness' horror story.

  • @ramalingeswararaobhavaraju5813
    @ramalingeswararaobhavaraju5813 5 років тому +3

    Good evening sir TED, thank you sir for your good information.

    • @joaoletelier8735
      @joaoletelier8735 5 років тому

      Ramalingeswara Rao Bhavaraju TED is not a man... It's an organisation.

  • @qwertychat
    @qwertychat 6 років тому +5

    Could AI work out for itself what pleasure is and assign itself the goal of realising the maximal amount of it? Because, besides missing values, the speaker in fact highlights how AI is already smarter than we are, and clearly in many ways it already is. So would such an action (of setting itself to seek pleasure) finally make AI 'smarter' than us?

  • @towolie
    @towolie 8 років тому +13

    Real question time! What does intelligence look like without always having emotions in the background?

    • @r2dxhate
      @r2dxhate 8 років тому +2

      So you're saying that DNA is consciously causing evolution? Instead of random accidental mutations in cells that create favorable traits allowing better survivability or a better chance of reproduction? That the amino acid ladder in every living cell is thinking and designing life forms instead of natural selection?

    • @RosscoAW
      @RosscoAW 8 років тому

      Why do we assume it won't have emotions? Just because humans have emotions and can't agree doesn't mean a super-intelligence wouldn't be perfectly capable of having emotion (human side?) and logic (machine side?) working perfectly cooperatively (and even competitively but constructively) in parallel, to the benefit and enhancement of both aspects ad infinitum?

    • @towolie
      @towolie 8 років тому

      To come back to my own question, I do think emotions play vital roles in human intelligence and might be essential (not sure).
      I'm just really wondering in what way we could give a computer emotions; that to me sounds like the most difficult part to "program".

    • @RosscoAW
      @RosscoAW 8 років тому

      ***** I was replying to the original comment by towolie :) Although I agree with you, relatively.
      towolie Why does that sound difficult? Emotions are perfectly logical and rational, predicated upon their own feedback loops, reward systems, bigger feedback loops, etc. As far as we're aware, the majority of emotional regulation -- if not all of it -- is purely chemical, which if anything is a lot easier for a computer to actually "simulate" / process (using physics, chemistry, biology sims, etc) than pure, intellectual, verbal logic itself.
      Computer programmers obsess about logical computers because computer programmers are overwhelmingly logic driven individuals, with an antipathy, neutrality, or ignorance of (their own and/or others) emotions. I'm speaking generally, not universally, of course; the content that frames the context, that is.
      That said, get a bunch of neuropsychiatrists, psychologists, social theorists, behaviourists, and various interest groups, advocacy groups, etc together and have them work with said programmers and whabam, you can start working on Artificial Models of Emotion. This would presumably be a component of that whole, "digitally simulating the human brain" thing, which is an entirely physical, chemical, electrical structure.
      And even if there ARE functionally non-corporeal components to the brain function ("soul", or consciousness as some kind of quantum phenomena, or holographic universe theory, or whatever your preference is [if not all of the above + all the unmentioned]), we can still be conservatively confident that emotion itself is AT LEAST 99.99% dependent on simple laws of physics, chemistry, and biology that add up, interact, and form an admittedly amazingly complex but nonetheless relatively deterministic whole.
      Now, the question of decision making, we aren't sure if that's deterministic (hence why I've mentioned woo-woo stuff like meta-consciousness, "soul", quanta, blah). But that's irrelevant, hence why I frame it within this discussion, since it's certainly related overall but it's not relevant to your question of emotion.
      TL;DR: Emotion is easier to program than "logical" thinking AI. It's just a lot TO be 'programmed' / simulated and accounted for; whereas AI itself is the invention of a process capable of collecting all of its internal data (of which the regulation and results of its "emotion programming" is only one piece of the puzzle) and acting on it to form decisions, think 'cognitively', improve, learn, yadda yadda. It is entirely possible to have AI WITHOUT emotion, but it's simply easier to get it right the first time because AGI or ASI itself is simply a much larger task than modelling the physical and chemical and electrical and cellular networks that result in emotion (even if they are varyingly different from brain to brain, within the bounds of observable and simulatable patterns).

    • @RosscoAW
      @RosscoAW 8 років тому

      towolie True, but that's pretty much contained in the simulation. Humans have a simulation error, in that our nonverbal thinking (incl. and primarily related to emotional processes) runs on a parallel and separate track from our verbal thinking (your inner monologue, 'sense of self', and the attentional/foci part of our executive functions [roughly]).
      The issue of instability as you put it -- or, more broadly, personality and social dysfunction -- results when our verbal self and nonverbal self aren't very well aligned, aren't properly aware of each other, or intentionally ignore/mistreat each other. Which is really quite easy to do, when you consider that most of what we associate with "consciousness" is verbal, and most of what we associate with "subconsciousness / unconsciousness" is nonverbal.
      This of course happens because we, as humans, are cognitively responsible for directly controlling our 'verbal' system, but we do not have direct control of our nonverbal system. Meaning individual humans either have to, A) Not have any problems occur that their nonverbal system can't handle on its own, B) Be educated and self-aware of their own psychology or receive such assistance from counselling, therapy, spiritual guidance, etc other human-maintenance systems that we have, or C) Somehow be perfect and capable of perfectly dealing with every potential problem as it arises, to equalize and maximize the use of both nonverbal emotional/abstract thought and verbal logical/concrete thought.
      As you can see, the fundamental issue comes down to human energy, human time, human willingness, and a lack of human ignorance (the unintentional variety, I mean).
      These are all 'resource' issues that AI wouldn't have as much issue with as humans, and would certainly be able to rapidly improve very quickly.
      Simply put, machines don't have the ignorance problem that is the singular root, fuel, and platform for human personality and emotion issues. They are willing to learn; the lack of that willingness is, with humans, what causes the "instability" you mentioned. It isn't a lack of intellect on any individual human's part, it's a lack of willingness to understand "their own programming". Superintelligence via AI would inherently not have this problem, for all intents and purposes.
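
    To picture the "emotions as feedback loops and reward systems" idea in the replies above, a toy sketch might model an emotional state as a leaky integrator of reward prediction error. Everything below is an assumption made purely for illustration; it is not a claim about how emotion actually works, or how an AI's would.

      # Illustrative toy model only: "mood" as a leaky feedback loop driven by
      # reward prediction error. Update rule, constants and events are assumptions.
      def step(mood, expected_reward, actual_reward, decay=0.9, gain=0.5):
          surprise = actual_reward - expected_reward   # reward prediction error
          return decay * mood + gain * surprise        # mood decays, then is nudged by surprise

      mood = 0.0
      events = [(1.0, 0.2), (0.5, 0.9), (0.8, 0.8), (0.1, 0.0)]  # (expected, actual) pairs
      for expected, actual in events:
          mood = step(mood, expected, actual)
          print(f"expected={expected:.1f} actual={actual:.1f} -> mood={mood:+.2f}")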

  • @redfo3009
    @redfo3009 3 роки тому

    So sweet that he tried to end on a good note; we all know deep down it's not a good ending.

  • @Charllecorsair
    @Charllecorsair 5 років тому +4

    Well-stated argument, and I'm in full agreement. AI could be the best thing, or the worst thing, for humans... ever.

  • @azzip3246
    @azzip3246 8 років тому +30

    WE are fucked.

    • @Curexe
      @Curexe 7 років тому +1

      Perhaps humans are, but we won't live forever anyway. Maybe the answer to the "universe" and life itself lies within the superintelligence we don't have the capability to access right now.

    • @jklad8229
      @jklad8229 7 років тому +1

      Is there even such a thing as an answer? There is most likely a point where the AI would know everything. And what is it supposed to do then? It could probably do everything imaginable to humans and way more, but why should it? Does an all-knowing AI have an aim?

  • @jimjackson4256
    @jimjackson4256 5 років тому +44

    50 years from now this speech will seem quaint.

    • @snailnslug3
      @snailnslug3 5 років тому

      @@goldeternal it's an old one... Back in my day we were perambulating around the parks barefoot in the snow telling stories in person.

    • @donalddrysdale246
      @donalddrysdale246 4 роки тому +1

      the goal of the 'Bill Gates' cult is that by 2030 people won't have minds left to speak at all; walking AI zombies at their will.

    • @apersononlineyes6554
      @apersononlineyes6554 4 роки тому +1

      @@donalddrysdale246 You've been spouting conspiracies on every single comment. Please for the love of god go outside and take a walk.

  • @billmullins6833
    @billmullins6833 5 років тому +1

    Having spent years testing complex software systems consisting of multiple modules operating entirely autonomously without ANY direct human oversight - much less control - I can say with full confidence that the thought of superintelligent AI scares the pi** out of me, because I do NOT believe that the hardware and software will be even minimally tested before it is turned on.

  • @necromantic420
    @necromantic420 7 років тому +290

    i die to bots on csgo

    • @gibsonbrillantes316
      @gibsonbrillantes316 6 років тому +4

      And I bet you also lost to OpenAI in Dota 2 ^_^

    • @lytecyde
      @lytecyde 5 років тому

      I died to cs 18 years ago

    • @lytecyde
      @lytecyde 5 років тому

      Bots got me back then

    • @TheWormzerjr
      @TheWormzerjr 5 років тому +2

      AI has been running the show since 9/23/15. click my name to watch Son of Perdition part 3 then 4

    • @isokessu
      @isokessu 5 років тому

      But if you learn the map you will know them. Gabe Newell and his friends explain how bots move on a map. Bots have invisible rails that they follow; if you find a place that does not have these rails, they cannot find you and you can shoot them with a sniper.

  • @mansinghdeshmukh9355
    @mansinghdeshmukh9355 4 роки тому +9

    Quite interesting thoughts being shared by Nick Bostrom; all thanks to TED, and I wish to congratulate them for providing such a good platform for sharing these ideas. That said, I believe this 16+ minute video was too short to comprehend all the aspects of this challenging yet continuously evolving moment in human history, which all of us will have to face sooner than we can imagine. I just wish to share two thoughts here for friends to comment on and add to;
    1. Just a hypothetical opinion: if there is any truth in the videos that circulate on YouTube about the Anunnaki and some alien beings that were the creators and early teachers/masters of humans on earth, could we draw some parallel between that hypothesis and today's challenge of evolving Human-AI relations and their possible challenges or threats, and
    2. Could "purpose of existence" - the greatest human brain bug, which has been haunting humans for thousands of years - be built into or planted at the core of this self-learning superintelligence (AI), positioning humans as somehow essential to the AI's existence, thereby maintaining the continuity of Human-AI co-existence...

    • @darylray4664
      @darylray4664 Рік тому +1

      Read his book if you’d like a more elaborate version of this talk.

  • @Linshark
    @Linshark 8 років тому +34

    How probable is it that we manage to build a safe package on top of the super intelligent AI? I would say very close to zero.

    • @luke666808g
      @luke666808g 8 років тому +6

      don't worry, if the first AI tries to kill us, we'll create another AI to kill it.

    • @TheSwordcluts
      @TheSwordcluts 8 років тому

      You could just use an EMP on the bots.

    • @luke666808g
      @luke666808g 8 років тому

      guncluts they'd see that coming and set up a phony arms company to buy them all.

    • @apolicum
      @apolicum 8 років тому

      The first "safe package" should be to create the first real A.I. on a different planet/moon and see what the consequences are there. If they're negative, keep it there with no possibility of creating a spaceship or any other way to leave the planet/moon.

    • @TheSwordcluts
      @TheSwordcluts 8 років тому +2

      apolicum That sounds like a great sci-fi plot

  • @danielsgrunge
    @danielsgrunge 4 роки тому +2

    Every single word this man said is completely perfect