e/acc Leader Beff Jezos vs Doomer Connor Leahy

  • Published 20 May 2024
  • The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, founder of the e/acc movement, to debate technology, AI policy, and human values.
    Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week!
    / mlst (public discord)
    / discord
    / mlstreettalk
    As the two discuss technology, AI safety, civilization advancement, and the future of institutions, they clash on their opposing perspectives on how we steer humanity towards a more optimal path.
    Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions.
    Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed to mediate order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism.
    MORE CONTENT!
    Post-interview with Beff and Connor: / 97905213
    Pre-interview with Connor and his colleague Dan: / connor-leahy-and-97631416
    This debate was mapped with the society library:
    www.societylibrary.org/connor...
    Beff Jezos (Guillaume Verdon):
    / basedbeffjezos
    / gillverd
    Connor Leahy:
    / npcollapse
    TOC:
    00:00:00 - Intro
    00:08:14 - Society library reference
    00:08:44 - Debate starts
    00:10:17 - Should any tech be banned?
    00:25:48 - Leaded Gasoline
    00:34:06 - False vacuum collapse method?
    00:40:05 - What if there are dangerous aliens?
    00:42:05 - Risk tolerances
    00:44:35 - Optimizing for growth vs value
    00:57:47 - Is vs ought
    01:07:38 - AI discussion
    01:12:47 - War / global competition
    01:16:11 - Open source F16 designs
    01:25:46 - Offense vs defense
    01:33:58 - Morality / value
    01:48:43 - What would Connor do
    01:55:45 - Institutions/regulation
    02:31:50 - Competition vs. Regulation Dilemma
    02:37:59 - Existential Risks and Future Planning
    02:46:55 - Conclusion and Reflection
    Pod version: podcasters.spotify.com/pod/sh...
  • Science & Technology

COMMENTS • 831

  • @sofia.eris.bauhaus
    @sofia.eris.bauhaus 3 months ago +182

    starts at 8:12. i think that intro is way too long btw.

    • @CipherOne
      @CipherOne 3 months ago +5

      Dear god, thank you 😂

    • @GarryGabriel
      @GarryGabriel 3 months ago +3

      10000%

    • @rysw19
      @rysw19 3 months ago +1

      Disagree the intro was the best part, downhill from there

    • @iFastee
      @iFastee 2 months ago +2

      pseud royale... none of these will be here in 5 years. especially Doomer Connor... clearly he lives off something that doesn't exist. it's cringe and sad. all these doomers are weak in the phenotype... i salute connor for not being as low of a phenotype as eliezer

    • @6AxisSage
      @6AxisSage 1 month ago +1

      Thanks, why do they have an 8-minute introduction?

  • @skitcostanza5130
    @skitcostanza5130 3 months ago +313

    This is my Super Bowl

    • @shauryai
      @shauryai 3 months ago +2

      True

    • @TTGTanner
      @TTGTanner 3 months ago +2

      Real

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +24

      Tbh, I really haven't heard any compelling arguments from the accelerationist side on how it is really safe to create smarter-than-human AI systems.
      So far it's just pseudophilosophic BS. If this is your intellectual originator, then you're not convincing me with those arguments. I really do think it would be wiser to be prudent than to be sorry.

    • @jdietzVispop
      @jdietzVispop 3 months ago

      Can Beff be the Packers and the other guy be the Panthers please?

    • @cliddily
      @cliddily 3 months ago

      The cheerleaders flip when they wiggle their hips.

  • @Alice_Fumo
    @Alice_Fumo 3 months ago +38

    I find this extremely agitating to listen to.
    Responding to a hypothetical with "I don't believe in that" is the most useless response anyone could possibly give. Beff seems to try insanely hard to avoid answering ANY questions to not fall into any 'gotchas' or whatever and thus goes off on these insane tangents which do nothing other than try to get the conversation off track.
    This makes it impossible for Connor to ever figure out at which point their reasoning actually has disagreements and really just makes any useful discussion impossible.
    Let's say someone asked me what should be done if we found a way to collapse a false vacuum, I'd answer "Destroy all the research, commit suicide or set up misinformation campaigns designed to prevent anyone else from ever figuring this out - unless it is likely this is going to naturally occur anyways in which case well firstly we're fully fucked, but also pour more research into this to figure out how to prevent this from happening."
    Honestly, the way Beff gets off topic is like trying to talk with schizophrenics.

    • @--LZ---
      @--LZ--- 1 month ago +3

      Sadly, some people who seem intelligent on some topics aren't intelligent on all topics, or lack social/conversational skills. For me this is also a learning experience: what not to do in a conversation and how to steer it in a more productive way. I also highly dislike this type of answer to hypotheticals; it means either a person is conflicted within himself, confused, or trying to hide his evil. Which of these options is better for people working on AI, I'm not sure, but all the options sound really bad.

    • @megaslayercho
      @megaslayercho 22 days ago

      Yup, watching Beff talk makes me wonder how such a smart person can be so clueless and utterly unable to give a straight answer to a single question or understand the concept of a hypothetical example.
      Like I can't figure out if he is just trolling and being opaque on purpose, or if he genuinely fails to understand what Connor is asking him.

  • @DrFlashburn
    @DrFlashburn 2 months ago +21

    I can picture Beff Jesos driving a car at high speed, closing his eyes and yelling "JESUS TAKE THE WHEEL"

    • @megaslayercho
      @megaslayercho 19 days ago

      I can imagine a scenario in which Two-Face escapes Arkham Asylum and kidnaps Beff Jezos, hanging him over a barrel full of acid, and when Two-Face spins the coin and asks "Heads or tails!" Beff goes: "Well, I think you should really consider the fact that the Roman Empire's downfall was partially caused by the degradation of their coins. Originally the Roman coins contained more gold, which over time began to be replaced with more and more copper. I mean, really, is there an objective difference between heads and tails? From the inside of the coin it's all just technically atoms, therefore I don't believe that..."
      *boom*
      (Two-Face shot himself)

  • @OzFush
    @OzFush 3 months ago +44

    Connor: If we keep inventing more and more powerful technologies without caring about safety, eventually we’ll destroy civilisation by accident.
    Guillaume: Nah, humanity has plot armour.
    1:03:25

    • @danielbrown001
      @danielbrown001 2 months ago +2

      Well, I mean, we’re not dead so far despite trying pretty hard to destroy one another. So maybe we do! 😂

  • @martinkunev9911
    @martinkunev9911 3 months ago +39

    Beff Jezos really has a problem with counterfactuals.

    • @ryanbigguy
      @ryanbigguy 29 days ago

      I wonder what he would have done if he didn't have breakfast this morning.

  • @OzFush
    @OzFush 3 months ago +53

    Implicit in Guillaume’s position of maximising growth is that you need to ensure survival to achieve this over time. He considers threats to civilisational survival to be very low probability and excludes them as a rounding error, leaving only “maximise growth” as the plan to be followed. Connor is more concerned about these threats and considers them to be much more likely, with historical and logical justification.

    • @TechyBen
      @TechyBen 3 months ago +7

      I'd argue there isn't historical and logical justification here. One side claims it's like a nuke, the other that it's like the printing press. Thus we can't yet decide which parts of history apply.

    • @RedmotionGames
      @RedmotionGames 3 months ago +1

      The threat to civilisation is, currently and very obviously (as determined by multiple corroborated scientific metrics, and ironically), the maximisation of growth. That's not a rounding error. lol.

    • @NullHand
      @NullHand 3 months ago +8

      My argument is that the advent of a non-human, non-biological intelligence is Out-of-Scope for any historical analysis.
      More akin to the emergence of humanity itself, or even aerobic multicellular life.
      Predicting the machine Singularity is futile.

    • @2CSST2
      @2CSST2 3 months ago +2

      Are we talking about there being threats at all, or about the odds of surviving them? There have been countless threats to humanity, and to life in general on planet Earth, yet all of them have been survived and led to us here, more advanced than ever and living the best overall conditions of life. So in fact the data is on Beff's side. In terms of logic, refer to thermodynamics: the system exponentially favors changes of state that lead to greater dissipation of heat, hence life and complexity, not the end of existence or annihilation. So still on Beff's side, and also coherent with the historical data.

    • @LucidLiquidity
      @LucidLiquidity 3 months ago +1

      The second someone tries to bring religion into a conversation like this, it's a little hard for me to trust their ability to think more practically, which is of dire importance given the stakes. We don't have time to be muddying the waters with religious ideology. We just need some solutions, and fast lol.

  • @MitchellPorter2025
    @MitchellPorter2025 3 months ago +61

    This is like an updated version of the opposition that used to exist between Robin Hanson and Eliezer Yudkowsky, in the late 00s. Hanson is a transhumanist but also an economist, and thinks in terms of social systems. Eliezer thought in terms of self-enhancement and a single agent bootstrapping to power over the entire world. Robin and Guillaume emphasize holistic principles like trade, self-organization, and robustness through decentralization and redundancy; Eliezer and Connor emphasize the contingency of human-friendly values and the need for policy precision.

    • @MitchellPorter2025
      @MitchellPorter2025 3 months ago +11

      @@NathanielKrefman Yes, and all earthly affairs are just epiphenomena of the surface chemistry of a cosmic dust mote... In other words, if you're a busy person who has no time or interest for any further details, then yes, you can boil it down to that. But if you are interested in history of ideas, historical context, or how any of these people think, the comparison is informative.

    • @kreek22
      @kreek22 3 months ago +3

      "the opposition that used to exist between Robin Hanson and Eliezer Yudkowsky"
      They never came to terms. Otherwise, great summary.
      The Hanson/Guillaume approach violates the Kelly Criterion.
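      [Editor's note] The Kelly point above can be made concrete: staking your whole bankroll maximizes single-round expected growth but guarantees eventual ruin, while the Kelly fraction (2p − 1 for even-money bets with win probability p) maximizes long-run log-wealth. A minimal simulation sketch, with hypothetical parameters chosen for illustration:

      ```python
      import random

      def simulate(fraction, rounds=100, trials=2000, p=0.6, seed=0):
          """Bet `fraction` of bankroll each round on a p=0.6 even-money coin.
          Returns the share of trials that avoid ruin (bankroll > 1% of start)."""
          rng = random.Random(seed)
          survived = 0
          for _ in range(trials):
              bankroll = 1.0
              for _ in range(rounds):
                  stake = bankroll * fraction
                  bankroll += stake if rng.random() < p else -stake
              if bankroll > 0.01:
                  survived += 1
          return survived / trials

      kelly = 2 * 0.6 - 1  # Kelly fraction for even-money odds: 2p - 1 = 0.2
      print(simulate(kelly))  # nearly all trials survive
      print(simulate(1.0))    # betting everything: a single loss wipes you out
      ```

      Maximizing per-step growth without bounding the chance of ruin is the analogy being drawn against the "maximise growth, round off existential risk" position.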

    • @Levi7hart
      @Levi7hart 3 months ago +3

      It's a good example because Robin has now moved to full-on antihumanism like Beff. Except I do think Robin's intellectual life is much more robust with original ideas, whereas Beff is extremely smart but isn't developing any new ideas. (Note Robin is an economist and Beff is working on actual tech, so the time allotted to these things is different.)
      But the concerning thing to me and most people is that Guillaume and Robin both believe humanity dying out for any form of technology (AI) is a good thing and an evolutionary stepping stone, and that's about the most antithetical-to-morality-and-sanity view a human can have of the world, imo.

    • @Gnaritas42
      @Gnaritas42 3 months ago

      @@NathanielKrefman no, it's more like Connor and Eliezer suggesting we just have to control everyone on earth, and Beff and Hanson are like: nah, that's not even a possibility, that's not how reality works, go sit at the kids' table you idiots.

    • @darklordvadermort
      @darklordvadermort 3 months ago

      @@Levi7hart on a long enough timeline humans are toast anyway without help, because our civilizational reproduction time (time to terraform Venus without AI; Mars honestly sucks more than the Moon for humans) is much longer than our expected civilizational lifespan (e.g. 10k years vs maybe 700 years). And that's ignoring the risk of total human extinction, or the concerns of the individual (we, you and I, are all gonna die much sooner than 700 years).

  • @AstroGray
    @AstroGray 3 months ago +105

    Starts at 8:44

    • @XShollaj
      @XShollaj 3 months ago +8

      Thank you!

    • @covle9180
      @covle9180 3 months ago +6

      Doing the ai lord's work

    • @a97807
      @a97807 3 months ago +1

      Wish I'd have scrolled down to read this first. Thanks!

    • @siroutrage1045
      @siroutrage1045 3 months ago

      YouTube Nobel prize award coming

  • @TranshumanismVideos
    @TranshumanismVideos 3 months ago +103

    Just watched a waterfall vs agile argument

    • @JohnVandivier
      @JohnVandivier 3 months ago +16

      underrated comment

    • @matinusdisseque
      @matinusdisseque 3 months ago +12

      Waterfall is linear and phase-based, criticized for its rigidity. Agile is iterative, promotes flexibility, and adapts to changes well.
      These are IT project-management concepts.

    • @willrocksBR
      @willrocksBR 3 months ago +6

      Where are those 'agile' guys? We have massive technical safety uncertainty and they aren't doing shit. Connor is the one doing the AI safety startup, not Beff.

    • @nessbrawlaaja
      @nessbrawlaaja 3 months ago

      This is a surprising take to me, I would have said 100% agile vs 95% agile or something 🤷

    • @christopherspavins9250
      @christopherspavins9250 3 months ago

      More than machine or man.

  • @tearnfourstar
    @tearnfourstar 3 months ago +77

    This would be so much better if it were just the debate. Imo there was too much pre-roll, and it was hard to find where the debate even started without any timestamps. Also, a few-minute portion is played twice near the start of the debate; I think less is more when it comes to editing on these types of videos. Aside from that I'm enjoying it very much, and thank you MLST for putting it together!

    • @Matt-yp7io
      @Matt-yp7io 3 months ago +6

      Yeah, the editing in general on this channel is a mess. I don't even know how to describe it. It's like a soup of videos chained together with no structure, and you don't even know what you're supposed to be watching.

    • @ArtOfTheProblem
      @ArtOfTheProblem 3 months ago +3

      he's havin fun

    • @dungeon_architect
      @dungeon_architect 3 months ago +1

      I believe Tim is looking to hire a video editor. He's aware he's not the best in the world at video editing. Fortunately his podcast is the best AI podcast so we let it slide 😁

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago +1

      @@dungeon_architect
      The best AI podcast may be "AI insiders" by AI Explained, but it's behind a paywall...

    • @dungeon_architect
      @dungeon_architect 3 months ago

      @@Hexanitrobenzene I enjoy AI Explained (I'll try to check out AI Insiders), but it can't really match the hardcore philosophical bent of MLST, which is really its unique selling point

  • @zachschillaci9533
    @zachschillaci9533 3 months ago +7

    As a physicist myself, I hate the way Beff abuses and relies on physics analogies. It’s just cringe

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago +1

      Yeah, I don't think you can apply statistical mechanics to agents who always try to outsmart "the rules of the game". Electrons in the material do not try to "outsmart" the measurement.
      Connor has mentioned in some other podcast, I believe "Future of Life", that we don't really have a framework to describe interactions between adversarial systems, such as in economy.

  • @jwilliamcase
    @jwilliamcase 3 months ago +68

    I didn't realize Beff was so fluent in Yappanese.

    • @EricDMMiller
      @EricDMMiller 3 months ago +7

      He sure knows how to speak! But he doesn't know how to say anything.

    • @amonkeysden
      @amonkeysden 1 month ago +1

      I was blown away by how he is unable to form a coherent view on the world state and current risks and opportunities.
      I say I was blown away, but then he was unable to respond to simple questions like "should weapons of mass destruction be open sourced?"
      He needs to watch Team America and ask himself some serious questions! 😢

    • @Rugg-qk4pl
      @Rugg-qk4pl 1 month ago

      First 25 minutes is actually crazy yapping

  • @micheldavidovich6940
    @micheldavidovich6940 3 months ago +9

    Why does the e/acc guy speak like that? It seems like he could say the same thing in simpler terms. If you are a layman watching this, it's very hard to understand him.

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago +3

      This was the best summary I have found in the comments:
      "@NathanielKrefman
      4 days ago
      I think his [Connor's] aim was to force Beff to actually make positive assertions about values or policy and to find opportunities to point out inconsistencies/contradictions/hypocrisies. Beff sensed Connor was baiting him, and I think he avoided agreeing or disagreeing to evade being trapped. It would have gone better if Beff had just answered plainly and trusted that he could make an argument against Connor's follow-up tactic. He might have also challenged the apparent posture of both of them that Beff was the only person who needed to justify his views. Beff never forced Connor to defend a position."

    • @megaslayercho
      @megaslayercho 19 days ago

      I think Beff is intentionally using complicated words and being vague on purpose.
      He either doesn't seem to understand Connor's questions or, most likely, feels like he is about to lose a certain argument, and rather than concede a point he just tries to sound as complicated as possible in the hope everyone gets confused.
      But if you are actually used to the terms he is using, and that doesn't throw you off, and you follow what he is saying, you'll quickly understand he is speaking high volumes of gibberish with very little volume of actual meaning/points being made.

  • @alancollins8294
    @alancollins8294 3 months ago +11

    Acceleration isn't bad. In fact we should accelerate alignment research. However, accelerating at the *cost* of safety is the problem. Any life saved on the way to hurtling towards unaligned AGI is meaningless, as it's ultimately destroyed. We can save more lives with safe AGI, without the cost of long-term extinction in exchange for short-term benefits.

    • @jamesmedina2062
      @jamesmedina2062 2 months ago

      very well stated sir. I concur.

    • @gotgunpowder
      @gotgunpowder 27 days ago

      There is literally zero evidence supporting the idea that AGI acceleration will lead to extinction. That is pure fear mongering not based on facts or research. The fact that you treat it as a given to the point where you think any life saved to create AGI is a waste of time speaks to how you've been brainwashed by it.
      The actually realistic issues with AI are not nearly as dramatic and their solutions are not as drastic as alignment zealots want you to believe.

  • @dexterdrax
    @dexterdrax 3 months ago +10

    The way he fumbled with the first question tells you everything...

  • @Dan-hw9iu
    @Dan-hw9iu 3 months ago +104

    Connor wants a plan right now. Provides no plan. Demands a plan from guy who said we don't need one. 10/10

    • @blackmartini7684
      @blackmartini7684 3 months ago +16

      😂 the perfect summary. To add one thing it's not that Beff doesn't think there should be a plan. It's that at the current moment implementing one could be detrimental and too early. Like he said, it needs to stabilize first.

    • @caparcher2074
      @caparcher2074 3 months ago +38

      It's not like that. Connor just wants him to admit that we need a plan. He's not asking him to solve alignment

    • @kensho123456
      @kensho123456 3 months ago +1

      They both made it clear they were talking thematically, so no need to reduce it to a "for and against"; they just expressed their differing viewpoints. BTW I agree with Conger Thingby.

    • @Dan-hw9iu
      @Dan-hw9iu 3 months ago +5

      @@caparcher2074 I believe Beff repeatedly retorted that we might not need one, etc. That said, Beff also called two things similar by _saying that their dot product was large._ You know, like socialized adults often say. So if Connor missed some of Beff's points, I frankly don't blame him...

    • @rickevans7941
      @rickevans7941 3 months ago +4

      Because it's maybe impossible, but he's trying. What's the problem with asserting we need a plan immediately WHILE TRYING TO MAKE IT? He's not just in armchair mode; he's putting in effort towards the change he claims is necessary!!

  • @Qumeric
    @Qumeric 3 months ago +98

    I learned that answering questions is apparently not aligned with growth maximisation

    • @nitroyetevn
      @nitroyetevn 3 months ago

      LOL

    • @andreipaven4388
      @andreipaven4388 3 months ago

      GOLD

    • @EliudLamboy
      @EliudLamboy 2 months ago

      😅

    • @benjaminkemper5876
      @benjaminkemper5876 2 months ago

      Lmao. Well to be fair he didn't want to be led into a trap that presupposes a flimsy analogy, so he was trying to cut through the analogies, a little bit too preemptively in some cases.

    • @ts4gv
      @ts4gv 2 months ago

      @@benjaminkemper5876 AI safety guys don't tend to "trap" people with analogies. they're almost always used just to clarify opponent's position.

  • @dexterdrax
    @dexterdrax 3 months ago +7

    It's better to have discussions rather than debates. Debates just highlight the merit of the speaker but not the topic itself. It's better to have questions prepared by either party beforehand so that the answers can be a bit more precise and understandable...

  • @andrewcampbell7011
    @andrewcampbell7011 3 months ago +130

    Man this is painful. It’s Socratic hypotheticals vs jargon laden tangents. Everyone loses

    • @superresistant8041
      @superresistant8041 3 months ago +4

      yeah it's a pain

    • @gulllars4620
      @gulllars4620 3 months ago +18

      It's an asymmetrical debate-skill-level discussion, and yes a bit cringe, but overall very informative. I think this could have been distilled down to about 30-45 minutes of proper debate if Beff were more of a debater and less of an abstract visionary optimist. Hopefully he comes back better prepared after having holes in his model(s) pointed out, like having no short- or long-term even hypothetical/contingency plans, and the naturalistic fallacy (is does not prescribe ought). Beff basically conceded e/acc is a version of might-makes-right without specifically calling it that, and he's not wrong factually looking back or projecting the current world state forward, but that isn't necessarily what we want or think should happen. He has sort of surrendered some agency and human-centric value systems to align his philosophy with the mechanics underlying his world model, to more predictably have a future aligned with him, rather than having it as an emergent separate guide for mutating the state of the world into something which fits his concrete contextualized values.
      I would definitely watch their follow-up chats or podcasts, as they have a lot of common ground in interesting areas and seemingly a good take on it, but slightly different perspectives.
      Creds to Connor here for being a good-spirited but critical debater and not just going for trying to destroy Beff politics-style.

    • @DaveKeil
      @DaveKeil 3 months ago +7

      Got to 10:49, with the question about whether any technologies should be banned, and he ducks it with "I don't think it's enforceable". I mean, come on. OBVIOUS technologies to ban: concentration camps. Is it enforceable? Yes, if some country starts using them everyone else invades them to stop it. SMH.

    • @TechyBen
      @TechyBen 3 months ago +9

      @@DaveKeil Those are not technologies. Like... that's not even what the word means.

    • @EdFormer
      @EdFormer 3 months ago +3

      @@DaveKeil I mean, it is potentially one of the most poorly thought out questions I've heard recently. What did Connor mean by "ban"? As @TechyBen pointed out, concentration camps are not a technology, so I'll ignore that, but most technologies that are restricted in many countries (e.g. firearms) are still available to those with a license or access to the black market. Are those things "banned" by Connor's definition? If not, Beff is right to question the idea that any technology can be banned and, if so, Connor's point is meaningless.

  • @ekhadley
    @ekhadley 3 months ago +13

    I feel like Connor gave up right at the finish line with the is-vs-ought tangent. I wish he'd asked whether 'growth' was Guillaume's terminal goal or an instrumental one. I imagine Guillaume would probably say it is terminal, but this isn't compatible with his response saying he wouldn't get rid of all humans if an ASI told him it was growth-optimal. I think this is what Connor meant when he said Guillaume "doesn't really want growth". If growth is an instrumental goal for him, then they probably both just want human flourishing and can move directly to the 'what are good policies' tangent.

    • @reidelliot1972
      @reidelliot1972 3 months ago +4

      You heard the man, his terminal goal is entropy.

    • @eSKAone-
      @eSKAone- 3 months ago

      Today's humans would perish to evolutionary change anyway. Over time species change into other species 💟🌌☮️

    • @eSKAone-
      @eSKAone- 3 months ago

      It's inevitable. Biology is just one step of evolution.
      So just chill out and enjoy life 💟🌌☮️

    • @mackiej
      @mackiej 2 months ago +2

      For other readers: instrumental goals are pursued to help reach other goals, not necessarily for their own sake (Regulation of AI and Tech, Promotion of Open Access to AI, Adaptable and Flexible Policy). Terminal goals are pursued for their own sake, representing intrinsic values (Growth Maximization, Balance Innovation and Stability, Preserve and Enhance Civilization). Of course, we can disagree whether a goal is instrumental or terminal.
      This info came from feeding the transcript in two halves into GPT-4 and asking, "What are instrumental and terminal goals in the context of the full transcript?" I haven't watched the video yet.

  • @blahblahsaurus2458
    @blahblahsaurus2458 3 months ago +11

    45:10 Connor asks why an AI that spends resources on protecting humans and making us happy would win a war against an AI that doesn't. Beff says "you can ask that about countries or companies". Well, for one thing, Saudi Arabia is very repressive and uncaring, but very successful. But more importantly: in an oppressive country people can rebel! The government is just a minority of people, they don't have the kind of advantage over the rest of the population that an AI would. Also, companies in particular can and do use child labor, pay pennies for back breaking work, and straight up use slave labor where they can get away with it. But it would probably be even worse if companies didn't care about their image and public opinion, another thing AI would not be vulnerable to.

    • @andersfant4997
      @andersfant4997 3 months ago +6

      Can people rebel in North Korea, Iran, Russia? Good luck with that🙂

    • @Rugg-qk4pl
      @Rugg-qk4pl 1 month ago

      There's no reason to think a sufficiently smart AI wouldn't care about its outward appearance. Safe to assume it will know that certain actions will end up leading to its shutdown

    • @blahblahsaurus2458
      @blahblahsaurus2458 1 month ago

      @@Rugg-qk4pl that's certainly possible and a fair point. And that's one reason I've always been much less concerned about the medium term danger of ASI disobeying its creators, and more concerned about the short term danger of AGI that's happy to obey its creators. What if the AGI is controlled by a dictator or evil billionaire? A small group of people could have an AGI automate a bunch of factories that build robots, and those robots could serve as an army. And as the AGI becomes more competent, the number of people necessary to build a robot army shrinks.
      It drives me insane when everyone assumes that all humans are on the same side, and skip the question of what humans could do with AI that *doesn't* go rogue. That will be a problem much sooner, and may be worse than anything ASI would choose to do.

  • @codeantlers485
    @codeantlers485 3 months ago +14

    Wait, wait, wait.
    At 8:00, "Like, I don't know how to say this in a polite way, but death is evil. Like game isn't evil. Beff is evil like Beff is an evil character. And I think you wrote him intentionally to be evil."
    But that's during the intro promo part. Where in the rest of the video is that part? I don't think it's there. You can't be throwing around the word evil in a promo, and then not include it in the actual video. Why isn't it part of the edit?
    I want to see Guillaume's response!

    • @SBalajii
      @SBalajii 3 months ago +2

      agreed, that's quite important

    • @danielbrown001
      @danielbrown001 2 months ago

      I think it might be part of the pre or post-interview parts that you have to be a member of their Patreon to access unfortunately.

  • @matten_zero
    @matten_zero 3 months ago +9

    2:08:00 I align with e/acc but I agree with that position. We had free markets and they devolved into the situation we have today. Maximally free markets are unstable because of power asymmetry within populations. Someone always wins and does things to maximize their own benefits

  • @naesone2653
    @naesone2653 3 months ago +4

    More of these longer talks please

  • @Mynestrone
    @Mynestrone 3 months ago +8

    Don't read the comments for an opinion. Watch it first.

  • @Aldraz
    @Aldraz 3 months ago +10

    Great conversation, so to sum up everything:
    Beff Jezos (optimist) wants to leave everything to "chance". He feels there is still a lot of time for action; he wants to open-source models no matter what their abilities are (maybe up to a certain point); he wants to stop regulations for now, wait a couple of years until things get clearer, and have regulations be gradual: very general at first, not impactful at the beginning, so a compute cap is an extreme regulation to him. He also believes we should fight for new decentralization methods that will replace democracy, while knowing that some centralization will likely always be the bigger entity and the two will co-exist. He wants to optimize for growth (natural progression or competition).
    Connor Leahy (doomer) thinks we can actually create smart laws and not leave everything to chance and natural evolution (or physics). He feels there is not a lot of time left for action; he wants to open-source models only up to a certain point of intelligence; he wants to create new institutions so as not to rely on government, and wants to see more cooperation in the world. He also supports some decentralization, although he thinks it would be extremely hard to implement. He wants to optimize for civilizational happiness.
    Both agree on a lot of points and both make good arguments, but they seem to miss the optimal solution here, which is quite obvious to me. Just do everything in the middle: yes, start regulating now, but very slowly, with laws that will not harm anyone, including the companies. No hard cap limits, etc. For example, start with laws that define what AI is, how it differs from other algorithms, how the data for it can be gathered, etc. Maybe over time say that you can't have more than 50% of data that is malicious in content, etc. Just do it gradually with the rate of AI progression.

    • @bobbsurname3140
      @bobbsurname3140 3 months ago +3

      I don't trust the current bureaucratic consensus on what "malicious" is.

    • @Aldraz
      @Aldraz 3 months ago +3

      @@bobbsurname3140 Oh me neither, that was just an example to imagine what's possible. Such a rule would be stupid.

    • @kreek22
      @kreek22 3 months ago

      Your solution isn't simple because the danger AI poses, now and in future, is not known and not agreed upon. The most dangerous system is no more likely to announce its intentions than Bernie Madoff was.

    • @potatodog7910
      @potatodog7910 3 months ago

      Ya

    • @Victor-kd9dh
      @Victor-kd9dh 3 months ago

      Nuance is always key

  • @smileifyoudontexist6320
    @smileifyoudontexist6320 3 months ago

    Important topics. I'd like to see the key points here expanded on more. Ahhh yes, I like the unseen 3rd person chiming in. For a moment the discussion felt like I was scrolling pointlessly through important topics. Thanks for the great work/perspectives. Appreciate it!

  • @1stPrinciplesFM
    @1stPrinciplesFM 3 months ago +55

    I don't agree with Connor on much, but the insane quality of his camera setup makes me WANT to agree with him

    • @ramonarobot
      @ramonarobot 3 months ago +16

      He even captures himself in different angles 😅

    • @Aziz0938
      @Aziz0938 3 months ago +1

      That's the trick

    • @karasira2696
      @karasira2696 3 months ago +2

      @@ramonarobot that was super cringe 🤣

    • @ageresequituresse
      @ageresequituresse 23 days ago

      As a photographer, his camera isn't even particularly high quality. He just turns up the equivalent of Photoshop's "luminance" in whatever software he's using.
      A rookie move.

  • @the3rdworlder293
    @the3rdworlder293 3 months ago +1

    Nuhh the editsss are funnnnny 😂😂 I love ittt. Whoever came up with it is my hero

  • @JD-jl4yy
    @JD-jl4yy 3 months ago +11

    55:03 - 55:55
    This seals the deal for me. I've never seen e/acc people give a good response to this.

    • @darklordvadermort
      @darklordvadermort 3 months ago +3

      Are you kidding? That was the biggest fail in the conversation on Connor's part up to that point, lol. An LLM is optimized to predict the next word/token, but so much grows out of that. A human (or maybe genes, if you like the selfish-gene thesis) is optimized to reproduce...
      Just obviously not true

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago +6

      The "AI discussion" section was also illustrative. First Beff said that the best way forward is to decentralize control, and then said that the entity with the most capital (that is, compute) wins. I agree with Connor - Beff doesn't follow his own premises to their logical conclusions.
      He is a classic libertarian. These people are fine with destroying the world as long as their freedom is not touched... without understanding somehow that they would be destroyed along with said world.

    • @onagain2796
      @onagain2796 3 months ago

      @@Hexanitrobenzene DRUMPF ALER!!!

    • @onagain2796
      @onagain2796 3 months ago

      This is actually utter horse shit. All of unsupervised learning is about doing exactly what he says but getting results out of it. Optimize for X to get Y result.

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago

      @@darklordvadermort
      This is what happens when the concepts are not rigorous enough: misunderstanding.
      Sure, if you program an ASI to optimise for growth, you could get nanorobots, thermonuclear fusion, quantum computing, etc., but all of these would be in service of growth.
      Such a system would terraform the Earth into something best suited to spreading to the rest of the Solar system, which would almost certainly make it unlivable.
      "Oh, you need air, water and food? Sorry, not in my objective function." ...and then nanobots disassemble you, because the ASI calculated a trillion ways to arrange your atoms into more useful things. More useful for growth, that is...

  • @fedel81
    @fedel81 3 months ago +1

    Fascinating dialogue! Love how both your visions complement each other. I'll add a peaceful thought: the gradient towards the reduction of violence also favors the coexistence of multiple subsystems, regardless of the values or speeds they choose.
    Some may choose to go at the highest speeds, while others may choose to live with the values of the current or previous eras. There are still nomadic peoples in some corners of the world, aren't there? And because it's not a zero-sum game, they can benefit too. I guess it's a way of saying "we're all in this together" in creating this fireball on the Earth while traveling at light speed through spacetime.

  • @damianlewis7550
    @damianlewis7550 3 months ago

    Thanks Dr Tim!

  • @drhxa
    @drhxa 3 months ago +4

    This is hilarious, thank you for sharing!

  • @10produz90
    @10produz90 3 months ago

    This was a great debate. Many new things to think through

  • @LuisManuelLealDias
    @LuisManuelLealDias 3 months ago +12

    Beff Jezos is just like any Silicon Valley tech genius: very proficient in maths, science and sci-fi, incredibly dumb at philosophy, morality and ethics. He refused to even begin to understand the is-ought fallacy; he just couldn't see that there was anything to understand here. His insistence that an objective moral reference lies in evolution (and even in increasing entropy) has to be the dumbest smartest idea I have ever encountered in my life. You really have to be very smart to not just come up with the concept but create an entire moral framework around entropy in this manner, and you really have to be incredibly dumb not to realise that this is an operation you are just not allowed to do. A cognitive dissonance the likes of which might just kill us all in the name of entropy increase.

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago +2

      Interesting summary :)

    • @potatodog7910
      @potatodog7910 3 months ago

      Interesting

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago

      Wait, why is it "an operation that you are just not allowed to do" ? I think it's unacceptable given a reasonable moral understanding, but not allowed ? Logically not allowed, you mean ?

    • @LuisManuelLealDias
      @LuisManuelLealDias 3 months ago +1

      @@Hexanitrobenzene Yes, it does not follow logically. You cannot say X *should* be this way because it's the way it *is*. That is not an acceptable syllogism.

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago +1

      @@LuisManuelLealDias
      Oh, you mean the same point as Connor makes - "is" is not "ought", Hume's guillotine.

  • @seanbradley562
    @seanbradley562 3 months ago +9

    Holy fucking shit.
    This is my evening now❤️🫡

  • @DrFlashburn
    @DrFlashburn 2 months ago +3

    How does this entire debate happen without discussing the difficulty of controlling or aligning superintelligence on short, accelerated timelines? The assumption that alignment of superintelligence is possible, and happens easily on accelerated AGI timelines, seems to have been granted, and the entire discussion was about who would control the superintelligence(s).

  • @randylefebvre3151
    @randylefebvre3151 3 months ago +13

    Makes me think about using a high discount factor vs low discount factor in RL. Guillaume is saying that the system is so chaotic that we can't and shouldn't plan, kinda like in a very hard POMDP which could resemble a bandit setting. Connor proposes to try and plan anyway, which could lead to a suboptimal policy in such a system.

    • @darklordvadermort
      @darklordvadermort 3 months ago +1

      high quality comment

    • @JD-jl4yy
      @JD-jl4yy 3 months ago +19

      Yep. And what makes more sense: trying our hardest to optimize for the best plan, or shrugging our shoulders, not even trying, and only accelerating?
      We're already accelerating at light speed atm. Do we really want to be so cynical that we shouldn't even try to come up with plans to steer things in better directions? Is that the best humanity has to offer?

    • @GuillaumeVerdonA
      @GuillaumeVerdonA 3 months ago +2

      good comment.

    • @SmittyWerbenjagermanjensen
      @SmittyWerbenjagermanjensen 3 months ago +1

      @@JD-jl4yy Yes. What are you talking about? We know for sure that even the best of us can't predict the weather or the market more than a couple of days out.
      The best route is acceleration on computers; what makes computers great is the level of control, and what science is about is control. Prediction and whatnot are byproducts; it's about control.
      Acceleration on current computers speeds up understanding while the medium is still limited; imagine slowing it down while it is still being explored on much more capable hardware in the future?
      Not that I care, I'm just fascinated and would like to see development happen faster than it needs to.
      Imagine externalities off a computer in the first place, wtf. If that were the case, ban Photoshop and Clip Studio for making false images of people, or bros materializing their lolicon fun lol

    • @jamesmedina2062
      @jamesmedina2062 2 months ago +1

      @Werbenjagermanjensen No. We were accelerating very quickly when we raced to put astronauts on the moon, but there were at least attempts at doing it safely, basically pulling out all the stops in favor of safety whilst still accomplishing the missions. So the costs were only money and some astronauts' lives. Today, safety is not being prioritized at all, and yet the penalty is not just a handful of astronauts but millions of human beings, and possibly even the fitness of the planet for organic life. Plus, the freedom of humanity from machines is at stake.

  • @shauryai
    @shauryai 3 months ago +23

    These debates should be premiered on TV! Prime time XD

  • @abby5493
    @abby5493 3 months ago +2

    The edits are so good, and I can tell you put a lot of time and effort into creating this. Thank you so much, MLST.

  • @joshismyhandle
    @joshismyhandle 3 months ago +1

    Amazing discussion

  • @metaronin
    @metaronin 1 month ago +1

    Finally watching this, way overdue

  • @shodan6401
    @shodan6401 1 month ago +1

    What was the line in Lord of the Rings?
    "Don't you understand? There won't BE a Shire anymore."

  • @leeeeee286
    @leeeeee286 3 months ago +10

    Kinda painful to watch, not going to lie. I think Beff was struggling to come to terms with the fact that we should perhaps try to optimise for things other than economic growth. Whether or not we can do that is another question of course, but at the very least we should understand that growth maximising may result in a world we either cannot or do not want to live in. Although I suppose if you're an optimist you can just ignore any probability of negative outcomes and maximise away. Hopefully we get lucky.

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago

      We cannot bet the fate of humanity on hope alone...

  • @vinith3773
    @vinith3773 3 months ago +5

    "Is is not ought" doesn't mean that literally every "is" is not "ought" -_- or that from a particular "is" you CANNOT derive an "ought".
    You need a deeper discussion to see if you have a framework to go from that particular "is" to an "ought".
    If each person had to summarise the other and make sure they were on the same page before going too deep, this would have been valuable. This is mostly just people talking over each other.
    The intro/thing before the debate is super confusing.
    It's pretty cool we're having these open discussions though

  • @AndersHansgaard
    @AndersHansgaard 3 months ago +54

    Maybe Connor Leahy's rolling eyes, blatant smugness, disinterest in questioning, habit of skipping any and every step of an argument to arrive at the most extreme conclusion, and his guardedness aren't the best ingredients for a thoughtful debate.

    • @osuf3581
      @osuf3581 3 months ago +7

      Don't think that's the side that is unable to actually argue

    • @ideacharlie
      @ideacharlie 3 months ago +3

      Thinking his perfect hair got to his head

    • @hehehe991
      @hehehe991 3 months ago +9

      Dude is insufferable

    • @willrocksBR
      @willrocksBR 3 months ago +2

      Your comment added nothing to the debate. Zero substance.

    • @2CSST2
      @2CSST2 3 months ago +3

      @@willrocksBR Neither did yours...

  • @ajohny8954
    @ajohny8954 3 months ago +33

    I am not a fan of the debate so far, BUT I love that you just let these guys talk; I hate structured/moderated debates.
    Edit: I have now listened to the whole thing. It gets slightly better towards the end, but Connor really was not interested in talking, it seems.
    He was hyperfixated on 2 things he wanted to say, and was trying to "guide" Beff into giving him the best entry for saying those 2 things. A bit disappointing

    • @Eggs-n-Jakey
      @Eggs-n-Jakey 3 months ago +3

      I deleted my previous comment. Over an hour in, and it doesn't seem like either has taken a position, or hell, even said anything of substance.

    • @Pianoblook
      @Pianoblook 3 months ago +16

      I couldn't make it past ~1:15; I tried my best. Feels like the Beff guy is trying to have a chat and share views, while Connor is stuck in a loop of swing-for-a-gotcha -> cry fallacy when it doesn't work -> rinse & repeat.
      Would love the channel to consider bringing in real ethicists or philosophers to discuss these very fascinating topics! Feels like what I'd expect lighting up in undergrad.

    • @2CSST2
      @2CSST2 3 months ago +3

      "was trying to “guide” Beff into giving him the best entry for saying those 2 things"
      I think you hit the nail on the head.

    • @jeffrey5602
      @jeffrey5602 3 months ago +4

      Connor trying so hard to establish a single point of absolute truth with these weird analogies and get Beff to agree, so he can then derive his whole belief system from it and checkmate him. If in the limit we're all gonna die anyway, why do you even get up in the morning, Connor?

    • @caelumforder9710
      @caelumforder9710 3 months ago +6

      I think Connor was trying to establish the bounds of Beff's position, which Beff persisted in keeping blurry. Rather than trying to understand the spirit of Connor's questions, Beff was trying to answer in as favourable a spirit as possible. It was good Connor didn't let that fly.
      It would have been better if Beff were less defensive and more eagerly shared his true model.
      I suspect the reason he was obscuring his true model is that he hasn't actually thought about the bounds of his own position very much. I guess he is under pressure from other growth ideologues not to show nuance, or risk getting replaced

  • @arde4
    @arde4 3 months ago +5

    Connor's argument is wrong. He assumes a regulation, even a prohibition, can be imposed on world-dominating technologies, which is simply absurd. Nuclear regulations are imposed by nuclear powers on others, but nobody imposes them on the nuclear powers themselves. Any restriction they obey is simply one they have agreed on with their peers for their own benefit. The fact is one can endlessly think about impossible things, and it may be interesting, but ultimately it is a waste of time.
    Effective prohibition of high-power technologies is impossible. Deal with it: use it to your advantage, despair, or spend your last days with your loved ones until its power is imposed on you, but the genie is not going back in the bottle.

    • @jeffrey970
      @jeffrey970 3 months ago +2

      China and Russia just announced they're discontinuing research on superintelligence because Connor doesn't think it's safe. Oh wait..

    • @Rugg-qk4pl
      @Rugg-qk4pl 1 month ago

      Is this not just a summary of Beff's claim, which just reduces down to ought-is? "You'll die if you try, therefore it's not right"

  • @joecunningham6939
    @joecunningham6939 3 months ago +4

    Absolutely painful. Connor constantly interrupting with pompous condescension, and Beff rambling on about simplistic economic ideologies and refusing to answer any questions or take any moral stances. No moderation to speak of. Just terrible, I'm sorry, and I am a die-hard fan of the channel

    • @Scott_Raynor
      @Scott_Raynor 3 months ago +1

      Surely interrupting pointless rambling is good though?

    • @joecunningham6939
      @joecunningham6939 3 months ago +4

      @@Scott_Raynor not when it's just more pointless rambling and grandstanding

  • @DJWESG1
    @DJWESG1 3 months ago +5

    No, Nick Land isn't the 'only....'
    I'd be more than happy to expand on this area.
    Please see the work of Ulrich Beck.

    • @masonlee9109
      @masonlee9109 3 months ago

      Thanks! Any specific work of Beck we should check out? Ray Kurzweil also comes to mind, seemingly in favor of an outcome where today's biological life is replaced.

  • @ultra.2500
    @ultra.2500 3 months ago +7

    Both are making the mistake of not really listening to the other

  • @RickDelmonico
    @RickDelmonico 3 months ago +4

    "Robust versus resilient.
    Levee versus estuary."
    Dave Snowden

  • @denishclarke4470
    @denishclarke4470 3 months ago

    I've heard this is one of the fiercest debates.
    Let's see

  • @arinco3817
    @arinco3817 3 months ago +3

    Holy shit! This is like the ultimate!

  • @espenglomsvoll
    @espenglomsvoll 3 months ago +14

    Just look at the world right now, in 2024. Peace and love are easy to say, not so easy to practice. We still have a long way to go, and I don't think we are ready for this AI race. Love from Norway.

    • @EricDMMiller
      @EricDMMiller 3 months ago

      We have never been able to align humans. And most of them are dumb.

  • @DeadtomGCthe2nd
    @DeadtomGCthe2nd 3 months ago +2

    Connor Leahy - "If the church said to murder all babies and people in the world, would you do it?"
    Any Christian - "No"
    Connor Leahy - "Then you're not Christian"
    Real good argument there 👏 😂

    • @happyduck1
      @happyduck1 3 months ago +5

      Connor's response would have been "Then you don't follow everything the church says", and if the other person had previously claimed "I always follow everything the church says", that would have been a very useful argument, showing that the claim was actually false.

  • @allinballsout1
    @allinballsout1 3 months ago +3

    😂😂😂 This was hilarious and rightly exposes both these bozos. Now, please, no more acid-tripping PhD students either. Just some really good technical talk on machine learning 🙏🏽

  • @TonyJMatos
    @TonyJMatos 3 months ago +28

    Connor's analogies are a little wide though; I wish he would stick to specific arguments about AI.

    • @matten_zero
      @matten_zero 3 months ago

      That's because he's an ethicist. He enjoys talking down to people as an "authority" because he's "concerned" about humanity. Or at least he likes posturing like that, because it's a socially powerful position. He doesn't have to build anything, just dictate morality to others

    • @arde4
      @arde4 3 months ago +2

      I wish he weren't so rude.

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 months ago

      @@NathanielKrefman
      Very good summary.

    • @OnigoroshiZero
      @OnigoroshiZero 2 months ago

      He can't support his BS logic that way. He is just a doomer who wants to create drama.

  • @tombjornebark
    @tombjornebark 3 months ago

    As our understanding deepens, it becomes clear that there's still much we don't comprehend about why certain algorithms yield the results they do. We recognize that some algorithms perform better than others under specific conditions, but the underlying reasons remain elusive. What concerns me isn't the technology itself but the oversimplified way in which it's often perceived by the younger generation. I frequently encounter the notion that happiness can be maximized by reducing our workload, a concept that, while appealing on the surface, overlooks the deeper value of having a purpose and the journey required to achieve it. It's through this journey, with its challenges and achievements, that we experience genuine moments of happiness.

  • @ikiphoenix9505
    @ikiphoenix9505 3 months ago

    Decentralised systems are hard; they all fall on this point. That's why Ohad Asor needs to come on MLST. Look at "Nullary second order logic with recurrence" and what their work means for IT.
    Thanks for your show, by the way!

  • @masonlee9109
    @masonlee9109 3 months ago +12

    Call me a luddite, but I don't think developing autopoietic computronium bombs should be legal right now.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +2

      Doesn't matter, we'll open source it anyways.

    • @41-Haiku
      @41-Haiku 2 months ago

      Hear hear!

  • @jdietzVispop
    @jdietzVispop 3 months ago +2

    Oh thank god for this.

  • @epador5348
    @epador5348 3 months ago +4

    Intro too long

  • @ErolCanAkbaba
    @ErolCanAkbaba 3 months ago +5

    IMO, Beff is demonstrating a great example of missing the forest for the trees.

  • @rerrer3346
    @rerrer3346 3 months ago +8

    Connor must be getting money now, the hair gets smoother every debate 😂😂😂 I wish he had expanded on his plan more; I wanted to hear him out, but his cynicism got the best of him. Hope there is a part 2 in 6 months.

  • @Thedeepseanomad
    @Thedeepseanomad 3 months ago

    They are both right, but about different parts. At the heart lies thermodynamics and the efficient use of energy to do work (cause change or resist it) according to preferences; it is just that these have to be physically and socially sustainable in a win-win type of setting.

  • @optimusprimevil1646
    @optimusprimevil1646 3 months ago +14

    the market isn't going to save us from Skynet, the market is Skynet.

  • @sheevys
    @sheevys 3 months ago +11

    Never heard of Beff before. He's like the Jordan Peterson of tech gibberish talk.

    • @Al-Storm
      @Al-Storm 1 month ago

      Read more, might help your brain.

  • @LuisManuelLealDias
    @LuisManuelLealDias 3 months ago +1

    I have one last comment to make. It was actually impressive how absent the moderation was. It was literally unnecessary, and that is kudos to both participants of the debate, who always kept it professional, objective, and respectful of each other's time. I don't remember listening to a debate that had this level of non-moderation.

  • @danielillner8187
    @danielillner8187 3 months ago +1

    Haha, nice job bringing them together. Great choice

  • @Custodian123
    @Custodian123 3 months ago +2

    I just skipped to 1:07:38... "You are a doomer, you are a doomer". Someone said this is their Super Bowl? No, this was a 💩 show. I am so happy that the industry will move forward regardless of what these two 🤡 think.

  • @atheistbushman
    @atheistbushman 3 months ago

    Respect, excellent discussion

  • @ngbrother
    @ngbrother 3 months ago

    The discussion about "new institutions" sounded hopeful. For me it was the silver lining of this debate. But I'm skeptical that I'll see these new institutions realized in my lifetime, or even in my children's lifetimes. IMHO, the trajectory we are on points toward greater consolidation of power through memetic control and regulatory capture by incumbents. No amount of "open sourcing of ideas" is going to redistribute access to capital or compute in the next 50 years. The incumbents have enough existing power to stretch out their lead to the point where it becomes impractical to talk about "maintaining a narrow power differential" between levels in the hierarchy.

  • @6AxisSage
    @6AxisSage 1 month ago +1

    I used to hold myself back from generating content because I thought it was so cringe, but this video has shown me that cringe is not a barrier to success.

  • @aaronweiss3294
    @aaronweiss3294 3 months ago +1

    The epistemic crux here (aside from the value debates and the cruxes at 2:48:00 and 2:52:00) is modeling Black Swans.
    Beff believes in optimizing based on the data we already have (bureaucratic failures, governmental corruption, technophobia, tech improving civilization), and as we get more data on AI itself, we'll worry more about alignment. By analogy: there's no point trying to create a martial art from first principles before you have ever thrown a punch.
    We aren't at the precipice of a FOOM yet, so any plan we make now will be irrelevant compared to plans we can make immediately prior to AGI onset.
    Connor is worried that we are headed in the direction of extinction and we aren't making real plans yet. We don't have adults who are capable of saying "timeout, let's start making a gameplan" if alignment isn't solved and we're about to create AGI.

  • @Hexanitrobenzene
    @Hexanitrobenzene 3 months ago +6

    1:36:26 "Oh, you fed Cthulhu, he'll be nice to you!"
    Connor's sarcasm is off the scale :D

  • @averyhaskell1577
    @averyhaskell1577 3 months ago +8

    This is literally the weirdest drama I've ever seen in tech, and I've lived in Silicon Valley since 2012

  • @kyneticist
    @kyneticist 28 days ago

    1:27:15 How does one maintain equilibrium, a "careful balance", in a high-entropy system with many empowered actors (remembering that e/acc says that policing, government and/or regulations are inherently incompatible with a self-balancing structure)?

  • @HanSolosRevenge
    @HanSolosRevenge 2 months ago +6

    I now understand why Connor was being SO genial at the top, which was very considerate, because he subsequently annihilated this guy to the point where I started to feel embarrassed for him

    • @gotgunpowder
      @gotgunpowder 27 days ago +2

      Imagine thinking a guy whose philosophy boils down to "AI will kill us because I say so" is ever the winner of any argument lol

  • @injustice4194
    @injustice4194 1 month ago

    How do you enforce "Thou shalt not kill"?

  • @Low_commotion
    @Low_commotion 3 months ago

    This conversation has made both of them seem more likable and reasonable to me than any other appearance I've seen of them. I think Connor underestimates the possibility of nuclear-fission-style indefinite stagnation, born of fear becoming a self-reinforcing cultural loop (I'm bearish on new/modular nuclear), but I also think the line between that and _his_ fear, of accelerating faster and faster while understanding less until catastrophe, might be razor-thin in the case of AI.
    I don't think it's unthinkable that democracies outright ban the technology out of the currently prevalent techno-pessimism, and China at least seems eager to shut the door on AI (at least consumer AI). But at the same time it might be hard to scale even _just_ interpretability as quickly as capability, and I think we all know interpretability is not alignment.

    • @41-Haiku
      @41-Haiku 2 months ago

      For clarity: The "AI Doomer" crowd -- the experts and others who claim that AI is somewhat or very likely to kill all of humanity in the relatively near future -- are mostly techno-optimists. If you asked almost any of these people "would you be glad if we completely stopped work on AGI, but accelerated most other technologies including narrow AI," they would say "Of course! That would be a wonderful world!"
      For most experts, the reasons they have for expecting doom from AI are mostly technical reasons related specifically to AI. Half of all published AI researchers think there's a 10% or greater chance that AGI will lead to a bad outcome on par with human extinction.

  • @dylanalexisalfaromonroy9468
    @dylanalexisalfaromonroy9468 2 months ago

    These guys should do this more often; interesting discussion.

  • @Guy_Reverse
    @Guy_Reverse 3 months ago +20

    What was that 8-minute intro? Just let me watch the debate.

  • @BestCosmologist
    @BestCosmologist 3 months ago +10

    We're so screwed.

    • @eyeofthetiger7
      @eyeofthetiger7 2 months ago +1

      😂

    • @timothy6966
      @timothy6966 2 months ago +2

      Yup.

    • @OnigoroshiZero
      @OnigoroshiZero 2 months ago

      Most likely not (a slightly dystopian future at worst), and a positive future is even more likely. Connor's doomer views are extreme, and most likely just BS speculation based on movies, with no actual reasoning to back it up.

    • @41-Haiku
      @41-Haiku 2 months ago +1

      @@OnigoroshiZero This is just not accurate, but I really wish it was. Half of all published AI researchers say there is a 10% or greater chance that AI will result in a bad outcome on par with human extinction. Geoffrey Hinton says 10%. Yoshua Bengio says 20%.
      I highly recommend that you look into the Alignment Problem. Experts never held these beliefs based on sci-fi. They hold them because we have absolutely no idea how to steer smarter-than-human intelligence. The theory of AI Safety paints a very bleak picture, and the more empirical data we've gotten, the worse it looks.
      We are extremely fucked, and it's for boring, technical reasons.

  • @deku6737
    @deku6737 3 months ago +2

    Joe Rogan: Did you hear they're genetically engineering super tigers? 1:51:40

  • @7TheWhiteWolf
    @7TheWhiteWolf 3 months ago +4

    The problem is that Connor, and other decels and doomers like him, have no better alternative. Acceleration is a fact of the universe; Nick Land is still correct about this: everything is rushing towards a hyper-abundance of Capital.

    • @nakedsquirtle
      @nakedsquirtle 3 months ago +3

      Capital is not a real thing. It's a game humans came up with to maximally extract and produce resources, and a capable AGI will ignore/work around it and repurpose our bodies into jet fuel. The dude talks about biology, a machine made by physics, being an entropy maximizer, and since biology and evolution eventually gave rise to humans, we should keep going in that trajectory... while ignoring the quadrillions of corpses we stand on, being the descendants of the tiny few that happened to fuck before dying.
      His stance is antihuman; he wants to add us to that corpse pile.

    • @7TheWhiteWolf
      @7TheWhiteWolf 3 months ago

      @@nakedsquirtle Take your schizophrenia medication.

    • @7TheWhiteWolf
      @7TheWhiteWolf 3 months ago +2

      @@nakedsquirtle People have been saying that for the last 3 million years about absolutely everything, and no, I trust AGI more than I trust Humans.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +1

      The plan is to stop and pause, then regulation and massive investment in AI safety. The only alternative is far too dangerous.

    • @7TheWhiteWolf
      @7TheWhiteWolf 3 months ago

      @@TheManinBlack9054 And how would you accomplish that? Other nations (especially China), open source, and *perhaps even corporations inside your own country* aren’t going to listen to you, and as we have seen with alcohol prohibition, police are corrupt and won’t enforce the law you drafted and passed.
      I’m sorry, but you both are completely disconnected from reality. Your solution would just make everything worse.

  • @davidrichards1302
    @davidrichards1302 3 months ago +1

    This discussion was inevitable. (vide "Determined", by Robert Sapolsky)

    • @melasonos6132
      @melasonos6132 3 months ago

      Everyone should read this book

  • @kuakilyissombroguwi
    @kuakilyissombroguwi 3 months ago +2

    Can this Beff just answer one question in a straightforward way? Born politician, that guy. Immediate mistrust of anything he says.

  • @user-cf1iw7tf3k
    @user-cf1iw7tf3k 3 months ago +22

    "e/acc movement vs AI doomerism vs some rando editing tf out of the whole discussion"

    • @simo4875
      @simo4875 3 months ago +9

      Did it not loop at some point early on? I thought I was having a stroke.

    • @captaincaption
      @captaincaption 3 months ago +5

      @@simo4875 yeah, I thought I had somehow clicked a button somewhere, but yep, it did loop, probably around 5 minutes in at the beginning.

  • @rubic0n2008
    @rubic0n2008 3 months ago +1

    2:50:11 😂 You're pro death ☠️. That killed me!

  • @vfwh
    @vfwh 3 months ago +6

    Imagine Tom Cotton or Nancy Pelosi listening to this conversation before drafting AI legislation.

    • @Houshalter
      @Houshalter 3 months ago

      Do you really think they write anything, or even read it?

    • @vfwh
      @vfwh 3 months ago +2

      @@Houshalter Not really, no, I just liked to imagine the scene.

    • @danielbrown001
      @danielbrown001 2 months ago

      @@vfwh They’d ignore the entire conversation, not understanding any of it. They’d ask the lobbyists who own them, “Hey, what laws should we pass with regards to this stuff?” The lobbyists would hand them a bill, and they’d put their signature on it.
      Then they’d go on CNN or Fox to talk about some random social issue to distract people into caring about that and ignore the AI stuff.

  • @DeadtomGCthe2nd
    @DeadtomGCthe2nd 3 months ago +1

    Extrapolating to infinity is a logical fallacy. It's never worked in the history of history.

  • @asdf8asdf8asdf8asdf
    @asdf8asdf8asdf8asdf 3 months ago +22

    One thing I know for sure: I do NOT, under any circumstances, want Connor in any position of power.

  • @JohnVandivier
    @JohnVandivier 3 months ago +3

    "a few edit distance away"... it repeated the whole related block twice.

  • @LucidLiquidity
    @LucidLiquidity 3 months ago +1

    Pretty clear to me which guy has thought through these things a little more deeply, and is more concerned with preserving humanity vs. making sure not to restrict the poor technocrats from steamrolling towards new, inventive ways to surveil, control, and monetize every aspect of our lives. Humanity seems like a top priority if you ask me... you kinda need that first before you can enjoy the fruits of innovation. Wild thought: maybe they should innovate some new ways to keep us from killing ourselves, directly or indirectly.

  • @disarmyouwitha
    @disarmyouwitha 3 months ago

    Line goes up!

  • @KCM25NJL
    @KCM25NJL 3 months ago

    I'm commenting early on this one, from around the 1 hour 5 minute mark... trying to soak in what I've listened to so far. I think this is an important debate that comes from two extremes and desperately attempts to seek common ground. I feel, however, that like many catalysing events, discoveries, and paradigm shifts we go through as a species, the one thing we have never been able to do... is "unrub the lantern". The genie is well and truly out of his little golden cave, and we'll probably do what we always do... we'll adapt and overcome. If we aren't capable of overcoming the age of AI as a species, then quite frankly, we were never supposed to. We often romanticise how simple things were in the past, but they never were simple. They were just times with much less knowledge, every bit as fraught with pressure, anxiety and danger as today. We only made it this far because we clung on to hope and dared to press forward... and I think that's all we have today... and tomorrow... and maybe even the day after that.

  • @TheFrenchGenius
    @TheFrenchGenius 3 months ago

    This was amazing! Gives me hope for a future where government is decided by a younger generation than the one in power right now.