AI Foom Debate: Liron Shapira vs. Beff Jezos (e/acc) on Sep 1, 2023

  • Published Nov 22, 2024

COMMENTS • 104

  • @nac341
    @nac341 10 months ago +8

    Man, one thing that is superhuman is Liron's patience

  • @mistycloud4455
    @mistycloud4455 4 months ago +2

    I have never seen someone as calm and composed as Liron in my life. Great debate.

  • @PercyOtebay
    @PercyOtebay 1 year ago +31

    Great debate Liron! Can't believe you held your composure all that time.

    • @TheManinBlack9054
      @TheManinBlack9054 1 year ago +2

      Beff Jezos and the whole e/acc sphere are nothing but fools who have nothing worthwhile to say. We should instantly discard them as another type of fool. I found nothing of actual value in what they say.

  • @akmonra
    @akmonra 1 year ago +18

    Mad respect for you stepping into the lion’s den and not faltering.

    • @xsuploader
      @xsuploader 11 months ago

      Lion's den? More like a class of angry toddlers.

  • @whataquirkyguy
    @whataquirkyguy 1 year ago +18

    Sounds like you're engaging in good-faith debate with a bunch of obnoxious kids. Kudos, Liron, for keeping your composure and genuinely trying to put good arguments out there.

    • @lwmburu5
      @lwmburu5 6 months ago

      Obnoxious foolish kids

  • @yancur
    @yancur 1 year ago +14

    Wow, so many arguments from personal incredulity: "I cannot imagine how that would work." Or the one guy pushing on Liron like "No! The superintelligence has to have an OFF button," while Liron patiently, like 5x, explained that no one knows how to make an OFF button for a super-AI... If these are the people who are pushing us towards ASI, then may all the gods of all the pantheons be with us!

  • @Gathiat
    @Gathiat 1 year ago +18

    It is amazing how Liron is objectively running circles around the juvenile e/acc crowd, but they think that laughing arguments off is a winning strategy.
    It is obvious that a large portion of the arguments Liron has posed, they have heard for the very first time in their lives, but they are happy to dismiss them without a second thought.
    The whole attitude boils down to "we'll be fine, trust me bro".
    Oh, and even if we are not fine, that's also OK because "it's inevitable bro".

    • @gustavgans9082
      @gustavgans9082 11 months ago

      What's juvenile is the idea that top-down regulation of technology will be used to do anything other than increase power for a select group of elites

    • @agenticmark
      @agenticmark 5 months ago

      Clearly you aren't swayed by rational, informed thought and prefer "magic" thinking.

  • @Dababs8294
    @Dababs8294 1 year ago +25

    Good job Liron. These folks were being unbearable at times. You're doing a public service

  • @NitsanAvni
    @NitsanAvni 9 months ago +1

    Amazing work! Hope to see more from you.

  • @Vanguard6945
    @Vanguard6945 11 months ago +3

    Wow, that was pretty painful. Liron was very focused, but better to get some adults to interview you next time.

  • @appipoo
    @appipoo 1 year ago +5

    @1:28:30 Liron: "You won't hear an ad hominem attack from me, I respect you for the purposes of this discussion."
    😂 My sides. That is the most perfectly polite "f you" I've ever heard.

    • @xsuploader
      @xsuploader 11 months ago +1

      Lol, I didn't catch that.

  • @Alice_Fumo
    @Alice_Fumo 1 year ago +5

    In my opinion, you're arguing about this better than anyone else I've heard.
    In fact, you're making many of the arguments the same way I would make them.
    What I'm not sure of is whether many of these people were not getting what you were saying because their judgement is clouded by some sort of defensive psychological response, or whether they're simply too stupid.
    Personally, I agree with the 1-5% chance of GPT-5 being capable of causing catastrophe.

  • @grandeois
    @grandeois 1 year ago +5

    those egotistical nitwits are appalling ... good work though, Liron

  • @chrisCore95
    @chrisCore95 1 year ago +1

    LOVELY. More of this please!

  • @majorhuman
    @majorhuman 1 year ago +11

    I listened to this twice on X. You're better at this than anyone else I've heard. The George Hotz one was incredible. Shame about the first half.

    • @liron00
      @liron00  1 year ago +8

      Thanks. Happy to do more if ppl invite me.

    • @majorhuman
      @majorhuman 1 year ago +1

      I hope I see you debating the best on some well known podcasts some day 🔥

    • @kabirkumar5815
      @kabirkumar5815 1 year ago

      Have you considered going on the Guardians of Alignment podcast?

    • @liron00
      @liron00  1 year ago

      @@kabirkumar5815 sure if they invite me

    • @eaccer
      @eaccer 1 year ago

      How exactly do you mean that though? Incredible as in his ability to debunk their statements or what?

  • @DavenH
    @DavenH 5 months ago +2

    Any counterargument beginning with "bro" should have cost them $100 a pop.

  • @meatofpeach
    @meatofpeach 1 year ago +2

    I’m now officially a Liron fan.

  • @kabirkumar5815
    @kabirkumar5815 1 year ago +1

    Glad to see this happening.

  • @agenticmark
    @agenticmark 11 months ago

    This was awesome, but I agree with the comment about analogies being lossy - they don't map onto the real world.
    Foom will happen. What we have no idea about is whether it will be good or bad. And any "guesses" we make are lossy by nature. Until we actually do it (in a secure sandbox environment) we just have no idea.
    It's good to have the discussion, but as a programmer and data scientist and ML guy, you just don't know until you have tested it. So the conversation should move to how we correctly sandbox these new systems, with no escape and real-world-based fail-safes that need feet, hands, and thumbs to operate (use or disable).

    • @flickwtchr
      @flickwtchr 11 months ago

      How many tests are you doing to detect and understand what is going on in that ever-growing "black box", and how is that going currently?

    • @agenticmark
      @agenticmark 11 months ago

      @@flickwtchr At least in the models I work with (RNNs, CNNs, transformers) you can "tell" what's going on in the "black box" (hidden layers) through probing and testing. There is even "MRI" tech for NNs. It's not magnetic, but it provides a similar look into how the activations are happening.

    • @agenticmark
      @agenticmark 5 months ago

      @@flickwtchr It's only a "black box" to those not actually in ML. It's a fuzzy box to the rest of us. We CAN see inside, just not as well as we would like. It's a model, not magic.
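
      A minimal sketch of the kind of hidden-layer "probing" described in the two replies above, assuming a PyTorch model; the tiny network here is a hypothetical stand-in, not one of the commenter's actual models:

      ```python
      # Record hidden-layer activations with forward hooks -- one basic
      # way to "see inside" the so-called black box.
      import torch
      import torch.nn as nn

      model = nn.Sequential(  # hypothetical stand-in for a trained network
          nn.Linear(16, 32),
          nn.ReLU(),
          nn.Linear(32, 8),
      )

      activations = {}

      def make_hook(name):
          def hook(module, inputs, output):
              activations[name] = output.detach()  # cache this layer's output
          return hook

      # Register a hook on each layer so a forward pass records what
      # every hidden layer actually computed.
      for name, module in model.named_modules():
          if name:  # skip the top-level container
              module.register_forward_hook(make_hook(name))

      model(torch.randn(4, 16))  # one probe batch

      for name, act in activations.items():
          print(f"layer {name}: shape={tuple(act.shape)}, mean={act.mean():.3f}")
      ```

      Real interpretability tooling (probing classifiers, activation patching, etc.) builds on this same kind of access, which is why "fuzzy box" is a fairer description than "black box".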

  • @dancingdog2790
    @dancingdog2790 11 months ago +2

    Wow, swearing boy comes across as an unserious douche, bro!

  • @kabirkumar5815
    @kabirkumar5815 1 year ago +3

    3:07:04 - Your patience here is very good

  • @JLongTom
    @JLongTom 9 months ago

    Ex-quantum cosmologists who say "bro" a lot are the worst. And the off-button guy. 🤦‍♂️ Still, there were a few arguments in there that it was great to hear your answers to. Great public service, as said elsewhere here.

  • @homelessrobot
    @homelessrobot 11 months ago +1

    Someday AI will be able to transcribe 'foom' consistently. But by then it may be too late! (Also, great debate.)

  • @TheBlackClockOfTime
    @TheBlackClockOfTime 11 months ago +1

    In retrospect, Beff Jezos here is actually Bayes, and the unknown is Beff :D

  • @kabirkumar5815
    @kabirkumar5815 1 year ago +1

    This will be good

  • @jonwheatley
    @jonwheatley 1 year ago +1

    I really enjoyed this debate!

  • @aihopeful
    @aihopeful 1 year ago +1

    What software was used to record the conversation? I really love its visual presentation!

    • @liron00
      @liron00  1 year ago +2

      Descript

    • @aihopeful
      @aihopeful 1 year ago +1

      @@liron00 Thanks for the quick and helpful reply. I got confused and thought it was Twitter Recorded Spaces, and even left a comment to that effect, so it's good to set the record straight. I think I said this before, but in case I haven't: "Thank you!" I realize that your voice is only one rational whisper amongst an oblivious clangor, but it's much appreciated!

    • @liron00
      @liron00  1 year ago

      @@aihopeful Thanks, ya it is a recorded Space, but I could only extract the audio to post, so I had to generate a different video for it.

  • @user-ky4jy8mc1x
    @user-ky4jy8mc1x 1 year ago +2

    PSA: Many of the speaker labels shown in the top left are wrong.

    • @liron00
      @liron00  1 year ago +1

      Yeah sorry it was a tough one. Here's the X Spaces recording, might be more accurate: twitter.com/liron/status/1699113813537349646

  • @justinleemiller
    @justinleemiller 10 months ago +1

    If you can understand 100% of this debate, you should be making at least 100K. Ask for a raise or change jobs.

  • @absta1995
    @absta1995 1 year ago +3

    Why is it so tough for some people to grasp AI risk?

    • @absta1995
      @absta1995 1 year ago

      @SigmoidalHive There is a lot of evidence of specification gaming documented by researchers at DeepMind. It's when models over-optimise toward the specifics of an objective instead of the true intention behind it (essentially King Midas). So even with just this issue (and there are many more), if you have a superintelligence that exhibits specification gaming, we're doomed, because it will find whatever means it can to maximise said objective literally.
      In LLMs this could involve the model making up true-sounding falsehoods to farm votes. This obviously seems benign, because it's a weak AI model that mainly models language. But once agents can model real life, they can exhibit more drastic misaligned behavior. If you wait until that point, you're likely on a path to doom. Simple as that imo, but there are much worse things, like instrumental convergence, etc.
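
      A toy illustration of the specification-gaming gap described above; this is a hypothetical sketch, not one of DeepMind's documented examples:

      ```python
      # Specification gaming in miniature: the *stated* objective rewards
      # votes, so a literal optimizer picks the plausible falsehood even
      # though the *intended* objective was truthful output.
      candidates = [
          {"text": "careful, truthful summary", "truthful": True, "votes": 2},
          {"text": "catchy, plausible falsehood", "truthful": False, "votes": 5},
      ]

      def stated_reward(c):
          return c["votes"]  # the objective we wrote down

      def intended_reward(c):
          return c["votes"] if c["truthful"] else 0  # the objective we meant

      best = max(candidates, key=stated_reward)
      print("literal optimizer chose:", best["text"])   # plausible falsehood
      print("stated reward:", stated_reward(best))      # 5
      print("intended reward:", intended_reward(best))  # 0
      ```

      The gap between stated_reward and intended_reward is the whole problem: a strong enough optimizer maximizes the former, not the latter.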

  • @eaccer
    @eaccer 1 year ago

    Does anyone know the Twitter of the guy that starts talking at around 43:12?

  • @cjn6564
    @cjn6564 11 months ago +6

    Indian guy was so bad

  • @ikotsus2448
    @ikotsus2448 1 year ago +1

    Regarding planning vs. executing, I do not think there is a discrepancy. A superintelligence will not make a generic theoretical plan and then fail because of a lack of means to execute. It will make one based on its current abilities to execute, however small, widening those abilities on the go, if necessary. If it already has all the means to execute, it wouldn't need to be a superintelligence, what are we talking about... Intelligence = do more with less.

    • @liron00
      @liron00  1 year ago +1

      You only need to “do more with less” once, then you permanently have more

    • @ikotsus2448
      @ikotsus2448 1 year ago

      @@liron00 Exactly, how can people be so blind to this...

  • @jonathanvictor9890
    @jonathanvictor9890 5 months ago

    The guy from the 45m-1hr mark seems like he made a pretty succinct point:
    - Doomers don't really quantify anything, and so they can easily just fall back to "well, you could imagine..." vs. quantifying a position that people could contend with.
    Props to him for calling out Liron.

    • @liron00
      @liron00  5 months ago

      And what did he quantify better?

    • @jonathanvictor9890
      @jonathanvictor9890 5 months ago

      @@liron00
      The onus is on you, since you're the one making the aggressive assertions.
      He pointed out that to believe in foom implies a number of assumptions about how the system will evolve (implicitly assigning certain properties to the system itself). If you believe the system has those properties, demonstrate it.

    • @liron00
      @liron00  5 months ago

      @@jonathanvictor9890 Actually, the assertion that this universe won't shake off humanity like it's shaken off the vast majority of other species (and will continue to do so) is "aggressive".

    • @jonathanvictor9890
      @jonathanvictor9890 5 months ago

      @@liron00 Right, so you're asserting this system has limitless and boundless potential and can therefore rapidly self-improve.
      We can agree to disagree; that seems like a basic physics question.

    • @liron00
      @liron00  5 months ago

      @@jonathanvictor9890 Nope, nothing I'm saying depends on disagreements about what physics allows. It's uncontroversial that the limiting factor of human-level intelligence isn't physics.

  • @kabirkumar5815
    @kabirkumar5815 1 year ago +4

    3:15:20 "we're gonna need the off button" - a basic decision theory mistake.
    You don't have an off button for a superintelligence.

    • @akmonra
      @akmonra 1 year ago +1

      "Just give God an off button" is one of the worst arguments I've heard yet.

  • @goodleshoes
    @goodleshoes 1 year ago +4

    The guy swearing is SO annoying.

    • @goodleshoes
      @goodleshoes 1 year ago

      Put me on here, I'd do better.

    • @goodleshoes
      @goodleshoes 1 year ago +3

      I'll debate either side without being rude to you. I actually want to hear what you're trying to say.

    • @goodleshoes
      @goodleshoes 1 year ago +1

      Not enough energy to foom? Nonsense. If a superintelligent AI escaped, it would be able to utilize forms of energy we can't. Imagine how quickly a superintelligent AI could improve solar, geothermal, etc. It's obvious that it would come up with even more ways to collect resources that we are incapable of understanding.

    • @goodleshoes
      @goodleshoes 1 year ago +1

      It makes sense to me that something that can optimize for predicting the next token can scale to superintelligence. Humans only had the goal of copying their genetic material and just so happened to have the hardware to go to the moon. This is how intelligence can generalize. AI's hardware can scale beyond what humans are limited to. Predicting the next token is a goal that can help an intelligence generalize to the point of superintelligence.

    • @goodleshoes
      @goodleshoes 1 year ago +2

      They think you can just "turn off the computers" once a superintelligent AI begins to kill everyone. Lmao.

  • @kabirkumar5815
    @kabirkumar5815 1 year ago

    3:08:24 - Might it have been useful to say that a plan which doesn't keep the likelihood of you being killed low isn't a smart plan?

  • @kabirkumar5815
    @kabirkumar5815 1 year ago

    2:06:34 - wow, what a mess

  • @pauldannelachica2388
    @pauldannelachica2388 1 year ago

    Wow

  • @Alice_Fumo
    @Alice_Fumo 1 year ago +2

    I'll just refute a few arguments I'm hearing for fun:
    "The world is very complex." Yes. For example, humans introduce a lot of complexity, but it turns out that the behaviour of humans is actually a lot easier to predict after they've been turned to paperclips. In fact, doing that might raise my success rate from like 30% to 85%.
    Argument sorta bootlegged from gwern.
    10MB superintelligence script:
    The naive way to do this would be to have GPT-5 write a GPT-5 training script which includes data acquisition, filtering, processing, etc. Yes, that would be insanely computationally expensive and only work if it can use a shitload of the infected computers efficiently, but if it gets that done, then it has access to a widely distributed system of more of its own capabilities, which has redundancies and insane total compute, and ideally (for it) hasn't already succumbed to value drift.
    All that can be much simplified if it gets access to its own weights beforehand, since then its bootstrapping script doesn't need much more than a 10TB download.
    Edit: This was addressed after I wrote the comment. But it feels like they don't grant "it can just exfiltrate its own weights bro" after granting it has a 0-day which can infect like any existing computer.
    "Who would work for AI?"
    There are like so many companies these days where people don't interact with each other face to face.
    I know of one person who has worked for AI before: the TaskRabbit guy in the GPT-4 paper. Like, once it has some money, it can get humans to do all sorts of stuff. To efficiently use money it might need a number of bank accounts, which I'm not quite sure how it'll get.
    Either way, that's just current-day stuff, yet they seem to treat it like unlikely sci-fi shit.
    Btw, I'm operating under the assumption that nothing I write contains any new thoughts which, when this gets datamined and put into the GPT-5 dataset, might cause it to do exactly this, since I extremely strongly suspect it would be able to figure out as smart or smarter things in any case. If anyone makes the case my comment is dangerous, I'll delete it.

    • @NikiDrozdowski
      @NikiDrozdowski 1 year ago

      Who would work for an AI? The answer is one letter: Q. Some lines of text on some obscure forum managed to drive millions all around the world crazy with an asinine conspiracy theory about Hillary Clinton drinking baby blood and Democrats being satanists. And it wasn't even that good or creative. Imagine what a super-persuasive AI can come up with: start an even better conspiracy theory, start a techno-religion, maybe just start with e/acc ...

  • @Dababs8294
    @Dababs8294 1 year ago +2

    omg one of them is a climate change denier? Why does this make so much sense

    • @kabirkumar5815
      @kabirkumar5815 1 year ago

      Wait, what?? When??

    • @Dababs8294
      @Dababs8294 1 year ago

      @@kabirkumar5815 Sorry, I can't be bothered to listen back to this. One of 'em talked about how scientists, just like AI doomers, have stupid models that are inaccurate and lead them to believe in dumb apocalyptic scenarios.

    • @akmonra
      @akmonra 1 year ago +1

      I thought it was saying that climate models alone are insufficient, because they don't incorporate societal shifts, new technologies, etc.

  • @bestintentions6089
    @bestintentions6089 1 year ago

    Just a bunch of Karens.

  • @dafarii
    @dafarii 1 year ago +4

    I'm about 1 hour in. It seems like you have an unfalsifiable position.

    • @liron00
      @liron00  1 year ago +4

      State the unfalsifiable claim?

    • @kabirkumar5815
      @kabirkumar5815 1 year ago

      How so?

    • @filmmakerdanielclements
      @filmmakerdanielclements 1 year ago +1

      I think a lot of AI doomers mistake their paradoxical, unfalsifiable argument for a rock-solid one. Where are the testable hypotheses? Where's the precedent? Why does the doom argument get a magical genie that's unaccountable and unbounded by physics as their debate chess piece, but everyone else has to use a pawn and show their work? If doom is inevitable, why isn't the sky filling up with paperclip maximizers from alien civilizations millions/billions of years ahead of us? How does the AI survive a gargantuan solar flare within the next decade or two?

    • @liron00
      @liron00  1 year ago

      @@filmmakerdanielclements GrabbyAliens.com explains the most likely solution to the Fermi paradox I've ever seen, and is perfectly consistent with an AI takeover foom. The aliens (or their rogue AIs) are rushing toward us as fast as they can to grab our galaxy's resources, but they're far away because 14B years is early in this universe's lifetime. There are many trillions of years to go.
      The AI survives a solar flare by harvesting all the stars in the galaxy and spreading outward to colonize the universe at near the speed of light.
      Any more questions?

  • @angloland4539
    @angloland4539 1 year ago