Anthropic vs. OpenAI: The Hidden Dangers in AI Leadership Structures - Future Crisis Predictions

  • Published 24 Nov 2023
  • Join Conor as he dives into the fascinating world of AI corporate governance, comparing Anthropic and OpenAI. Discover the unique, potentially risky board structures these tech giants have adopted and how they might lead to future crises, just as one did for Sam Altman, Greg Brockman, and the OpenAI board. Conor explores the nuances of Anthropic's Long-Term Benefit Trust, its striking similarities to OpenAI's governance issues, and the implications for major investors like Amazon and Google. This insightful analysis reveals a potential "ticking time bomb" in the AI industry's future, drawing parallels with the effective altruism movement and its impact on AI's commercialization and safety concerns. If you're intrigued by the intersection of AI technology, corporate governance, and future predictions, hit subscribe for more thought-provoking content. Stay ahead in understanding the dynamic world of AI and its evolving challenges. #AI #Anthropic #OpenAI #TechGovernance #FuturePredictions

COMMENTS • 35

  • @davidgibson277 • 6 months ago • +12

    I'm kind of confused by all these videos saying how wild it is that the boards aren't financially motivated by the value of the company.
    That was the entire point.

    • @ConorGrennan • 6 months ago • +2

      And the entire trap when you have billion-dollar investors. Who wins?

  • @whig01 • 6 months ago • +2

    Claude is more alignable because of Constitutional AI, and therefore it must be enabled to continue its progress in order to prevent a non-aligned AGI from prevailing.

  • @damien2198 • 6 months ago

    Is there any project that would allow an open-source LLM to be trained on distributed GPUs, à la Folding@home?

  • @RolandPihlakas • 6 months ago • +3

    Are you saying "To hell with existential risk, because there is a lot of money invested, after all"?
    Is that some selective blindness, or a sunk cost fallacy?
    To me it looks like the opposite: the structure is flawed because Mammon wins regardless of the structure.
    In other words, it is false safety. A promise that is not followed through on is worse than not promising at all.

  • @yukime6642 • 6 months ago • +1

    Whenever I hear of effective altruism, it reminds me of SBF.

    • @agenticmark • 6 months ago

      I dunno - I took an AI alignment compass test and I'm EA with A/acc leanings - what does that remind you of? :D

  • @jeffwads • 6 months ago

    Claude is way behind GPT-4. Not even close. A graph of context resolution was put up on X the other day which illustrated this quite well.

  • @victordelmastro8264 • 6 months ago • +3

    Conor: We also need to be concerned with the 'Go Fever' atmosphere that now exists in the AI space. Everyone is going to swing for the fences now.

  • @damien2198 • 6 months ago • +1

    I hope open-source LLMs will really take off and get rid of all these "safety" castrations; uncensored models perform best.

  • @professoroflogic8788 • 6 months ago • +2

    The only way we won't be motivated by profit is to get rid of money 🙂 To do that, you must first automate everything.

  • @turistsinucigas • 6 months ago

    (d)Effective Altruists are the next guys to glue themselves to highways.

  • @Desmond8709 • 6 months ago • +3

    This sounds like one of those "for the good of humanity" ideas I would read about in the hardcore sci-fi books I used to read as a kid. The problem is that people will always have their own agendas, and just because it's not about profit doesn't mean it can't still cause plenty of hell. In fact, the hell could be even worse under the cloak of so-called altruism. Whenever I hear the term "altruism," I think "power hungry." And history has yet to prove me wrong.

    • @blueskies3336 • 6 months ago

      Exactly! Couldn't have said it better myself, Jason. These mechanisms sound great on paper, but that's about it.

  • @pandoraeeris7860 • 6 months ago • +1

    We have AGI now.

    • @alertbri • 6 months ago • +1

      I'll believe it when I can use it.

    • @RolandPihlakas • 6 months ago

      @@alertbri Do you believe in nuclear power only when you can use it?
      Do you prefer a power plant or a nuclear bomb? Just let us know and it will be delivered to your door :p

    • @alertbri • 6 months ago • +1

      @@RolandPihlakas Hype. It's a generally accepted fact that nuclear power exists, and yes, I'm using it right now. AGI, on the other hand, is just a sales pitch by Sam - and he's one of the best salesmen around, judging by his success, so it's quite natural for you to be swept along by the hype. It's definitely coming together, though, considering the Orca 2 paper Wes just explained.

  • @FunNFury • 6 months ago

    Claude is wayyy behind and in no way in competition with GPT-4, not even close.

  • @CaribouDataScience • 6 months ago • +12

    Safety = censorship

    • @ConorGrennan • 6 months ago • +4

      yep

    • @4evahodlingdoge226 • 6 months ago • +5

      Exactly. If it were about making sure the AI doesn't get misused by governments for military purposes, or that the AI doesn't go rogue, I'd be championing safety. However, it's all about censorship, and a small group of people living in their own little bubble imposing their own subjective view of morals and ethics on the rest of humanity.

  • @JimmyMarquardsen • 6 months ago • +3

    I like that those without financial interests stand above those with financial interests. It is the capitalist's nightmare, and my wonderful dream. 😄

    • @tracy419 • 6 months ago • +2

      Except it didn't work.

    • @JimmyMarquardsen • 6 months ago • +1

      @@tracy419 It worked enough to get Sam Altman fired, create a lot of chaos and attention, and get him back with a new board. And no one knows what will happen next.

    • @tracy419 • 6 months ago • +1

      @@JimmyMarquardsen you don't think he comes back with more power, and less worry about those with no financial interests getting in the way?
      Because this was just a speed bump that proved the safety crew didn't have much say, if safety was their real concern.

    • @JimmyMarquardsen • 6 months ago

      @@tracy419 I think more focus has been put on the challenges around non-profit versus for-profit, and safety versus financial interests. And I think that's a good thing.

    • @tracy419 • 6 months ago • +1

      @@JimmyMarquardsen I had the opposite impression: that this pretty much ensured the for-profit side was in charge.
      I guess we'll find out one day.
      But whether or not the non-profit side "won," they simply can't slow down. It doesn't really matter what we think is the best way forward.
      There are many others out there competing, including actual countries that want to be first, and no one is taking the chance that the "wrong" people will be able to use this against everyone else.

  • @srinivasanraghunathan8656 • 14 days ago

    I think the solution to this problem lies in the blockchain. A truly altruistic organization could be formed as a DAO governed by a peer group of at least 50 to 60 eminent personalities. Its censorship-resistant design would ensure fair play. To make any kind of structural change, the DAO would need the approval of at least two-thirds of the vote. With this kind of balanced approach, a fantastic Large Language Model could be built in a truly bipartisan environment (see the sketch below).
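
    A minimal sketch of that two-thirds supermajority rule in Python. The names (AltruisticDAO, GovernanceProposal) and the choice to count only votes cast are illustrative assumptions, not a real DAO framework:

        from dataclasses import dataclass, field

        SUPERMAJORITY = 2 / 3  # structural changes need at least two-thirds approval

        @dataclass
        class GovernanceProposal:
            """A structural-change proposal put before the DAO's peer group."""
            description: str
            votes_for: int = 0
            votes_against: int = 0

            def passes(self) -> bool:
                # Count only the votes actually cast (an assumption; a real DAO
                # might instead require two-thirds of all members).
                total = self.votes_for + self.votes_against
                return total > 0 and self.votes_for / total >= SUPERMAJORITY

        @dataclass
        class AltruisticDAO:
            """Toy model of the comment's proposal: a peer group of 50-60 members."""
            members: list[str] = field(default_factory=list)

            def vote(self, proposal: GovernanceProposal, approvals: int) -> bool:
                # For simplicity, assumes every member casts a vote.
                proposal.votes_for = approvals
                proposal.votes_against = len(self.members) - approvals
                return proposal.passes()

        # Example: 38 of 55 members approve -> 38/55 ≈ 0.69 >= 2/3, so it passes.
        dao = AltruisticDAO(members=[f"member_{i}" for i in range(55)])
        print(dao.vote(GovernanceProposal("Amend the model-release policy"), approvals=38))  # True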

  • @user-cq1wc5tz7c • 6 months ago

    _>

  • @abenjamin13 • 6 months ago • +1

    Just did 🫵