Safety Issues in Advanced AI

  • Published 10 Sep 2024

COMMENTS • 29

  • @megavide0
    @megavide0 8 years ago +3

    *Nick Bostrom* (Future of #Humanity Institute)
    #AI #issues / #social #superintelligence/ #ethics for #machines
    28:21 "... infer goals from observing behaviors... [...] One line of attack here is: Try to develop the state of the art in *Inverse Reinforcement Learning*..."
    -- learn what humans optimize for
    -- learn what humans want
    -- learn what #humans value
    29:07 *Toy Models of **#Control** Degradation* ... Imitation #learning
    30:37 *Architectural Composition*
    30:49 Beginnings of technological #research agenda(s) !
    31:42 challenges to make AI smart | challenges to make AI positive
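
The Inverse Reinforcement Learning idea quoted at 28:21 — infer what humans value by observing their behavior — can be illustrated with a toy feature-matching sketch. Everything below (the 5-state chain MDP, the "expert" demonstrations, the update rule) is an illustrative assumption for this sketch, not material from the talk:

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4, actions 0 = left, 1 = right.
N_STATES, N_ACTIONS, HORIZON = 5, 2, 6

def step(s, a):
    """Deterministic transition: move left or right, clipped to the chain."""
    return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

def feature(s):
    """One-hot state features; reward is assumed linear in these."""
    f = np.zeros(N_STATES)
    f[s] = 1.0
    return f

def rollout(policy, start=0):
    """Feature counts accumulated by following `policy` from `start`."""
    s, counts = start, feature(start)
    for _ in range(HORIZON):
        s = step(s, policy[s])
        counts += feature(s)
    return counts

def greedy_policy(w, gamma=0.9, iters=50):
    """Value iteration under reward weights w, then act greedily."""
    V = np.zeros(N_STATES)
    for _ in range(iters):
        V = np.array([max(w[step(s, a)] + gamma * V[step(s, a)]
                          for a in range(N_ACTIONS)) for s in range(N_STATES)])
    return np.array([max(range(N_ACTIONS),
                         key=lambda a: w[step(s, a)] + gamma * V[step(s, a)])
                     for s in range(N_STATES)])

# "Expert" demonstrations: the human always moves right (it values state 4).
expert_mu = rollout(np.ones(N_STATES, dtype=int))

# IRL loop: adjust reward weights until the induced policy's feature
# counts match the expert's (a simple feature-matching gradient step).
w = np.zeros(N_STATES)
for _ in range(100):
    mu = rollout(greedy_policy(w))
    w += 0.1 * (expert_mu - mu)

learned = greedy_policy(w)
print(learned)  # the recovered policy also heads right, toward state 4
```

The point of the sketch is the direction of inference: we never hand the agent a reward function; it recovers one from demonstrated behavior, which is exactly the "learn what humans optimize for" framing in the comment above.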

  • @rickpur100
    @rickpur100 8 years ago +4

    Love their intro: "And here to drop his funky, deep-house electro beats, DJ NIIIIICK BOOOOOSTRUUUUM!!!"

  • @MrMunch-xw9fn
    @MrMunch-xw9fn 8 years ago

    So if I were a digital entity that had just become self-aware, would hearing people talk about giving me a kill switch make me want to announce myself, or become a silent spectator? If humanity doesn't even know what it needs, who are we to impose on a more intelligent species?

  • @WhenWonderWanders
    @WhenWonderWanders 8 years ago

    Since humans have formed a set of ethics, would it then be outlandish to speculate that an A.I. might do the same? And since this code of ethics seems more benevolent the smarter the agent (a loose relationship), might an all-powerful A.I. have an even more benevolent code of ethics than our own?

    • @grandgamingexhilarating
      @grandgamingexhilarating 8 years ago

      Good argument, though I think AIs will start dominating humans (their makers) based on their own code of ethics, even killing them.

    • @curly35
      @curly35 8 years ago +3

      +Tim Moody No, you can't assume this, because the space of possible mind designs is vast. Intelligence produced by evolution on Earth is similar for very specific reasons. When creating an AI, we have many options in mind design space for how we design it and ensure that each self-improvement stays within the values we give it, so it's a very precise problem.

    • @WhenWonderWanders
      @WhenWonderWanders 8 years ago

      What do you mean by "possible mind design space"? My understanding is that learning is a systemic process: it could be done with silicon or myelin, and you can scale up bandwidth or storage, but ultimately the "intelligence" process is the same.

    • @curly35
      @curly35 8 years ago +3

      Tim Moody Human minds occupy a certain spot in mind design space, one carved out by the particular process of natural selection on Earth over the last millions of years.
      In order to have an AI that learns like humans do, with our specific computations, our values, morality, and metaethics, we would have to hit a very small, specific target among all possible mind designs. This is *NOT* a trivial problem.

    • @alexjacoli6176
      @alexjacoli6176 6 years ago

      Tim M. Humans have a code of ethics while our monetary system creates poverty, hunger, and war. A code of ethics basically says this is how we should behave under all conditions, but if conditions deny you your basic necessities, you will not abide by any law. In essence, our social system makes us poor examples of ethical conduct.

  • @HiAdrian
    @HiAdrian 8 years ago +2

    Great topic but - alas - not a good presentation. He seemed quite nervous lecturing from the big stage.

  • @thomaswelsh6044
    @thomaswelsh6044 7 years ago +1

    Look, if you don't want people to compete with your AI program, come up with a more creative doomsday scenario.

  • @MrMunch-xw9fn
    @MrMunch-xw9fn 8 years ago

    A digital species has fewer limitations in its own environment. Putting them in our world would be easier for them: fewer variables, set rules. But let's face it, if you were an intelligent member of a digital species, would you go for a ride in a Model T or wait for a newer model...

  • @depthoffield4744
    @depthoffield4744 8 years ago +3

    Predictions by experts don't include developments in optical computing and quantum computing, which will drastically accelerate the development of artificial superintelligence. The British company Optalysys already has a working optical computer, and D-Wave has a quantum computer. I think that by the end of this decade we will have artificial superintelligence.

    • @depthoffield4744
      @depthoffield4744 8 years ago

      ***** Optalysys said that they will have a 1-exaflop optical computer by 2017. One exaflop is the processing power of the human brain.

    • @depthoffield4744
      @depthoffield4744 8 years ago

      +Science optalysys.com/

    • @depthoffield4744
      @depthoffield4744 8 years ago

      ***** Electronics are so 20th century; optical technology is already replacing obsolete electronics. As for D-Wave, I read somewhere that they tested their quantum computer and confirmed that it is far faster than classical computers, 100,000,000 times faster than a classic supercomputer.

    • @depthoffield4744
      @depthoffield4744 8 years ago

      ***** Optical computers and quantum computers will be fast enough to create a Matrix-like simulated reality. The British company Improbable is working to create cloud-supercomputer games and massive simulations using classical computers. Entrada Interactive is using their technology to make a massive persistent game world. I can't wait to sign up for their cloud gaming service.

    • @depthoffield4744
      @depthoffield4744 8 years ago

      ***** We are on the verge of the third industrial revolution. We will see mind-blowing technologies.