AI Safety Regulations: Prudent or Paranoid? with a16z's Martin Casado


COMMENTS • 21

  • @ArielTavori
    @ArielTavori 18 days ago +2

    Excellent and insightful discussion as usual! 🙏🏻
    Martin presents the most coherent arguments for his position that I've heard so far. However, it seems like every conversation I've encountered recently about the future is dominated by arguments over hypothetical AGI/ASI/p-doom timelines, with relatively little discussion of the impact and progress of smaller models, the possibility of unlocking new capabilities from existing models (Ilya at least has stated repeatedly and publicly that he believes there's massive untapped potential here), and the paradigm-shifting performance optimizations that are on the way.
    These improvements are not trivial or even (for the most part) hypothetical, and some of their obvious short-term implications are, IMHO, the most exciting, profitable, and potentially problematic areas we're likely to actually encounter in the coming months and years. Just a few off the top of my head, for example:
    - Mojo
    - Mamba
    - 1.58 bit
    - photonic hardware
    - thermodynamic hardware
    - quantum hardware...
    Many of these have already demonstrated stunning performance improvements, which in some cases may have the potential to stack multiplicatively, not to mention the potential for massive reductions in hardware requirements including total GPU memory, memory bandwidth, or even 1.58 bit outright eliminating matrix multiplication...
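The "1.58 bit" item refers to ternary-weight models in the BitNet b1.58 style: each weight takes a value in {-1, 0, +1} (log2 3 ≈ 1.58 bits of information), so a matrix-vector product reduces to additions and subtractions with no multiplications. A minimal sketch of that idea (the function name and NumPy formulation are illustrative, not from any actual BitNet implementation):

```python
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product for a ternary weight matrix W with
    entries in {-1, 0, +1}: each output element is just a sum of
    additions and subtractions of x's entries -- no multiplies."""
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i in range(W.shape[0]):
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return out
```

For a random ternary `W`, this matches `W @ x` exactly, which is why the commenter can say such quantization "eliminates" matrix multiplication rather than merely approximating it.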

  • @xinehat
    @xinehat 18 days ago +2

    Oof. It didn't feel like Martin was actually engaging with Nathan. It was basically two hours of "This is a stupid conversation that's not worth having, and you're stupid if you don't think it's stupid."

  • @_arshadm
    @_arshadm 18 days ago +2

    An excellent episode; the FSD discussion is so pertinent. Elon can spout BS about how close FSD is, and he could be 97% right. But the problem is that the remaining 3% of failures still happen every day, and any one of them would be a killer for FSD.

  • @Cagrst
    @Cagrst 18 days ago +10

    Well, this was a deeply frustrating watch. Reasoning by analogy to historical events when we are dealing with the most significant revolution in human history feels pretty asinine to me. He's brilliant, but I think he just doesn't get it.

    • @kyatt_
      @kyatt_ 18 days ago

      Yeah, stopped after 10 mins tbh

    • @Sporkomat
      @Sporkomat 18 days ago +1

      I agree. To me it seems like he doesn't extrapolate in the obvious ways and just sees things as they are, not how they (almost certainly) will be.

  • @AI_Opinion_Videos
    @AI_Opinion_Videos 17 days ago +1

    "If we identified one mechanism that has massive destructive power"
    The AI lab would identify it pre-release, and choose to race internally and harness that power. Who is going to protect us from that?

  • @mitsuman5555
    @mitsuman5555 18 days ago +5

    I’m sure the guest is a brilliant person, but his arrogant disposition is unsettling. As if anything in this field is a foregone conclusion.

    • @rasen84
      @rasen84 17 days ago

      It’s not like he’s telling OpenAI to stop scaling.
      He’s clearly excited about new uses enabled by scaling.
      He just doesn’t think it’s going to create god.

    • @militiamc
      @militiamc 16 days ago

      I think he's just knowledgeable and confident, not necessarily arrogant. I don't agree with his position, though.

    • @NuanceOverDogma
      @NuanceOverDogma 13 days ago

      lol, the interviewer is arrogant AF

  • @jarlaxle6591
    @jarlaxle6591 18 days ago +2

    This conversation was a tough listen. He repeats himself over and over. It's almost like he's in denial. And man, the way he talks, as if everything he says is fact.

    • @seanmchugh2866
      @seanmchugh2866 18 days ago

      Yeah, I can't say for sure if there's "denial", but I am seeing that kind of thing a lot with AI: no matter what it does, it's a joke. He also reframed the shifting goalposts from "it's not AI, it's not AI, it's not AI" to "it is AI (nope), it is AI (nope), it is AI (nope)".
      I suppose if I weren't a ChatGPT enthusiast at this point I would still be able to get work done with my head in the sand. But because I try to use it, I see use cases everywhere that someone who didn't try never would.

    • @seanmchugh2866
      @seanmchugh2866 17 days ago

      Okay, so just as a random example (out of a population of probably 100 worthy examples this month): I needed this code in Python. Now, before you judge me, I can easily write this in C#, but this saved me probably an hour in Python. If you aren't coding, or have your head stuck in the sand, you're going to miss how amazing this is.
      "can you write me a python function that takes in the path to two images, originalImage and portraitMask. you should know that originalImage and portraitMask are guaranteed to have the same dimensions.
      your function should replace every pixel in originalImage with a zero opacity pixel where the corresponding pixel in portraitMask is < (200,200,200).
      finally, your function should take a third argument which lets me tell it where to save the output"
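A sketch of what the prompt above describes, using Pillow. This is not the code ChatGPT actually generated for the commenter; the function name and the reading of "< (200, 200, 200)" as "all three channels below 200" are assumptions:

```python
from PIL import Image

def apply_portrait_mask(original_image_path, portrait_mask_path, output_path):
    """Make every pixel of originalImage fully transparent where the
    corresponding portraitMask pixel is darker than (200, 200, 200).
    Both images are assumed to have identical dimensions, per the prompt."""
    original = Image.open(original_image_path).convert("RGBA")
    mask = Image.open(portrait_mask_path).convert("RGB")
    pixels = original.load()
    mask_pixels = mask.load()
    width, height = original.size
    for y in range(height):
        for x in range(width):
            r, g, b = mask_pixels[x, y]
            if r < 200 and g < 200 and b < 200:
                pr, pg, pb, _ = pixels[x, y]
                pixels[x, y] = (pr, pg, pb, 0)  # zero the alpha channel
    original.save(output_path)  # save as PNG to preserve transparency
```

A vectorized NumPy version would be faster on large images, but the per-pixel loop mirrors the prompt most directly.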

  • @augmentos
    @augmentos 17 days ago +1

    Ironically, I disagreed with a lot of Martin's positions and don't think he understands a lot of the comparisons, though I'm totally in agreement and aligned with having no liability for LLM model creators and stopping any drive at regulation currently. I more often disagree with the host, who I really enjoy, but I think he's biased by his red-team time and is driving at the regulation hoop way too hard these days. To be liable if any LLM use results in criminality based on an answer is absurd. There are infinite ways to trick a model and a huge landscape of 'criminality'. These fear-mongering guys are eating out of Altman's hand, helping build his moat, and doing America a huge disservice.

    • @LiquidRR
      @LiquidRR 17 days ago +1

      The most powerful models will need heavy regulation. Unless you want people developing super-viruses in their garages, it's a necessity that we regulate the leading-edge models. Why would you think otherwise?

    • @augmentos
      @augmentos 14 days ago

      @@LiquidRR Yawn 🥱. All the info you need to do that exists already, and has since the advent of the internet. Motivated people will do bad things. I suppose you like and believe standing in TSA security theater keeps us safer too. I don't want Sam Altman or the SF woke brigade deciding what I can and cannot use or see. Look how that worked out for Twitter.

  • @NuanceOverDogma
    @NuanceOverDogma 13 days ago

    You are not very bright.