OpenAI's Safety Team Exodus: Ilya Departs, Leike Speaks Out, Altman Responds - Zvi Analyzes Fallout

  • Published 26 Aug 2024

COMMENTS • 19

  • @Nityavidyardhi 3 months ago +5

    Please invite Ilya onto this podcast if possible. I feel he is centrally responsible for the current progress in gen AI and LLMs.

  • @thevenomous1 3 months ago +7

    I really appreciate what you guys are doing but the audio quality is terrible again. How is it possible that Zvi doesn't have a gamer headset to dust off for the podcast at least?

    • @appipoo 3 months ago +1

      Come on Zvi! A decent mic could mean thousands more ears hearing your takes. This is not the thing to be lazy about.
      Come on man 😂

  • @GNARGNARHEAD 3 months ago +1

    That Schulman interview was brilliant, definitely worth a watch.

  • @dustinsuburbia 3 months ago +2

    Just an update to say Altman has addressed the equity issue. For what it's worth: "we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop." He then goes on to say they're fixing the agreements to this end.

  • @augmentos 3 months ago +4

    Woah, just watched yesterday's haha

  • @jeffspaulding43 3 months ago +1

    The point of knowing that you're in a simulation is that you can either hack it to find the infinite X glitch, or escape it up to the next level of reality, which I assume is less fragile than our simulated one.

  • @AlexanderGambaryan 3 months ago +3

    99% of people are not thinking about it at all
    Don't look up

    • @flickwtchr 3 months ago +1

      Most people aren't aware of the state of the technology. Of the people who are, your percentage estimate regarding those concerned is way off.

  • @alexlloyd5726 2 months ago

    Does anyone have a reference for SOFON(?), the alignment research proposal Zvi mentioned around 39:00?

  • @LDdrums20 3 months ago +1

    This conversation is heating up

  • @inkpaper_ 3 months ago

    Sadly, the audio quality makes this important piece almost unlistenable, especially for non-natives :c

  • @alexleonee 3 months ago

    If AGI technology is sophisticated enough to pose risks, why can’t we leverage that same intelligence to mitigate those risks and ensure it is safe and beneficial for everyone? I’m sick of hearing this same line of reasoning.

  • @AI_Opinion_Videos 3 months ago +3

    I would have no trust in anything less than a global treaty with a CERN-like organization and a ton of oversight. Private companies and single nations should get nowhere near this.

    • @kreek22 3 months ago

      The temptation to corruption would be far higher for AGI/ASI than it is for any of CERN's current operations. What miracles does such a technology not promise? It may not be able to deliver all of them, even at best, but the promise suffices to corrupt.
      Butlerian jihad or the abyss.

  • @CodepageNet 3 months ago +3

    Ah, to hell with safety. Let's go all in! (I mean it)

    • @flickwtchr 3 months ago

      So worrying how it affects other humans just doesn't register, eh?