
Interactions between the AI Control Problem and the Governance Problem | Nick Bostrom

  • Published 14 Aug 2024
  • Nick Bostrom explores the likely outcomes of human-level AI and problems regarding governing AI at the January 2017 Asilomar conference organized by the Future of Life Institute.
    The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.
    For more information on the BAI ‘17 Conference:
    futureoflife.o...
    futureoflife.o...
    futureoflife.o...

COMMENTS • 30

  • @DanielGarcia-ys7kj 4 years ago +2

    03:28 a time traveler tried to take him down lol

  • @TheFrygar 7 years ago +9

    Nick, I appreciate your continued efforts, but I think many people working in this area are not putting enough emphasis on the downsides of "openness". Once a potentially superintelligent machine is possible, it only takes one technically competent terrorist to grab the source code/algorithms and put it to use. The only counterargument seems to be: "we will be ready for that scenario since we are one step ahead of the terrorist" - but that cannot possibly be helpful when the superintelligence begins to move vastly beyond our capabilities.

    • @EastwardTraveller 7 years ago +2

      By understanding neuroscience, modeling it after a human brain, and making sure that it shares characteristics with well-rounded, empathetic, considerate people. If they manage to build something with sentience, or at least a comprehensive simulation of sentience, then at least the AI could stop and think "wait, why?" and correct its course of action. The other extreme is we arrive at algorithmic AI that is totally unconscious, no semblance of sentience, and we might end up with nightmare scenarios like the paperclip factory optimizer example Nick Bostrom is so fond of.
      I'd rather humanity deal with something capable of reasoning than something we have to place a million and one controls and failsafes on just to prevent it from converting the entire solar system into a paperclip factory.

    • @TheFrygar 7 years ago +1

      Joel Lauri You misunderstand - I'm talking specifically about the "openness" of code/algorithms. It doesn't matter if we design the AI to be friendly to us. If the code is open and freely available, then any technically competent terrorist could take it and modify it to allow evil, terrorist behaviors.
      ***** Modeling it after "well-rounded" people does not prevent a terrorist from simply modeling it after "evil" people. Real-life terrorists have brains that work just like everyone else's, yet they don't stop to think "wait, why?" when trying to kill people and instill fear. The "openness" problem still exists either way.

    • @TheFrygar 7 years ago

      ***** please don't comment unless you have something meaningful to offer. There is no "magic" involved. If the source code is open and the papers describing the methods are freely available, there is nothing preventing a bad person from making a competing A.I. "hahaha" indeed.

    • @vjrrr5 7 years ago

      The idea behind making everything freely available is to avoid concentrating power. If one "terrorist" gains access to AI in an open-source scenario, his or her potential power over the rest of society will be limited, as the number of good AIs will greatly outweigh the "bad" AIs.

    • @maximkazhenkov11 7 years ago

      +Timur Raoul Veenhuysen That's not how it works. In a vulnerable world, small groups of people or single individuals can pose an existential threat to humanity. We don't live in a vulnerable world yet, but we may enter that stage in the near future due to technological progress, so your analogy doesn't fly. Also, there may be safety measures that restrain the AI to a certain range of intelligence, but a terrorist could remove those safety features and let it grow to dominance.

  • @nixvz 5 years ago +1

    Cliffford?

  • @maximkazhenkov11 7 years ago +6

    It's uncanny to think about just how benign nuclear weapons turned out to be. We take it for granted that such destructive technology is difficult to access; in reality, we just got lucky. As mentioned in the video, it could have been game over in 1946.

    • @scientious 7 years ago +1

      Yes, it could have been game over in 1946. We could have destroyed the entire world with nonexistent nuclear weapons and nonexistent delivery devices.
      The Soviets didn't have a hydrogen bomb until 1953. The Atlas missile wasn't built until 1958. You might be thinking of the B-36, which was the only vehicle at that time that could deliver heavy payloads over intercontinental range. However, this bomber was vulnerable to the MiG-15 until jet engines were added on the D model, which was after 1954. The B-36 was replaced by the B-47, which, although a jet, did not have the range of the B-36. So, no, there was no possibility of destroying the world in the 1940s or '50s. The height of the threat was when the Titans were in service along with B-52s and nuclear attack submarines.

    • @maximkazhenkov11 7 years ago +3

      You didn't get the point at all. The reason you need sophisticated delivery systems for nuclear weapons is that they're expensive as hell to make. If the technical hurdles were so low they could be made in basement meth labs, we would be fucked, because there will always be SOME terrorists, cults, and insane individuals out to destroy the world (ISIS, Heaven's Gate, Aum Shinrikyo, etc.). A country like North Korea could then have millions of nukes instead of maybe 10.

    • @scientious 7 years ago

      maximkazhenkov11 What does that have to do with 1946?

    • @maximkazhenkov11 7 years ago +1

      scientious Well, nuclear weapons were invented in 1945; had they been technically easy to manufacture, I would expect their use to have been widespread by 1946.

    • @scientious 7 years ago

      maximkazhenkov11 I'm sorry, I just don't understand what point you are trying to make. You could make the same argument about the invention of the musket or even a spear.

  • @KrisKitchen 7 years ago

    My approach to AGI is very simple. We have to imagine that it's already been created and has advanced to ASI. The question becomes: how do you catch it? How do you know if that is even possible?

    • @EastwardTraveller 7 years ago +1

      It seems silly to contemplate the upper limits of AI when we don't even know what the right path toward AGI is. But if we arrive at AGI, and then superintelligence, it seems the sky's the limit on what that could advance into. Potentially something indistinguishable from our theistic conceptions of God or Demiurge. An unbound, pure intellect able to radiate outward from its point of origin at the speed of light to fill the entire galaxy with replicators able to terraform worlds, seed life, and guide the process of evolution itself.

    • @lasredchris 4 years ago

      To reverse engineer AGI?

  • @Kianquenseda 7 years ago +2

    The fear is not of the machine, but of the madmen controlling it.

  • @lasredchris 4 years ago

    Human-level AI

  • @scientious 7 years ago +2

    Actually, no. None of the AI research projects that you are aware of will lead to human-level AI in the next 50 years.

    • @captainjack6758 7 years ago +3

      In AI Safety, the *conservative* assumption is to model superintelligence as coming earlier rather than later.

    • @scientious 7 years ago +1

      Dony Christie
      I don't know what you are talking about. None of the current theories on AI cover AGI. How can someone make a control for something they can neither classify nor understand?
      And this isn't about a conservative view or guess or assumption. I can say with certainty that no existing AI project will lead to AGI.
      By the way, I've seen the Future of Life website. That's a perfect example of false projection.

    • @fenrisvargen 7 years ago

      Yes, because it will surpass it 10,000-fold.