Debunking the AI Alignment Doomers

  • Published 18 Nov 2024

COMMENTS • 7

  • @fatjay9402 3 months ago +2

    Great points!

    • @SvilenK 3 months ago

      🙏🙏

  • @LelouchVelvet 3 months ago

    They would just say that the risk is potentially humanity-ending (too big), not like the example you gave or most risks taken in human history.

    • @DisruptionTheory 2 months ago

      High risk, high reward. It comes down to risk tolerance, I suppose.

  • @Blake-Householder 3 months ago

    It sounds like you're making the same arguments Yann LeCun used. I talked about those here: ua-cam.com/video/BygErhGbONA/v-deo.html

    • @DisruptionTheory 2 months ago

      I watched some of it. I disagree with your premise of binary framing, that we either can or can't align it; alignment is probabilistic.
      There's certainly a real chance it might destroy all of humanity; the question is one of risk tolerance.
      For some, 20% is perfectly fine; others freak out at 2%.
      The real problem is that there's no meaningful way to quantify any of it, which gives way to endless discussions, most of which are useless.