Debate On AGI: Existential or Non-existential? (Connor Leahy, Joseph Jacks) [MLST LIVE]

  • Published 30 May 2023
  • MLST is hosting a LIVE debate on AGI risk.
    Support us! / mlst
    MLST Discord: / discord
    Twitter: / mlstreettalk
    Transcript and summary: docs.google.com/document/d/10...
    Joseph Jacks will argue for open source AGI and why he believes that it will NOT pose an existential threat to humanity beyond the threat humanity already poses to itself.
    Connor Leahy will argue for NOT building AGI and why he believes that it will absolutely pose an existential threat to humanity.
    / josephjacks
    / connor-j-leahy
  • Science & Technology

COMMENTS • 602

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk  Рік тому +37

    Debate summary:
    [CONNOR LEAHY]
    1. Connor believes that the term "AGI" (Artificial General Intelligence) is not well-defined and is often misused. He argues that the focus should be on whether systems are able to solve complex tasks or make predictions, rather than whether they are generally intelligent or not.
    2. He challenges the idea that human intelligence is special or unique, noting that the brain mostly works on pattern prediction and unsupervised learning with some reinforcement learning.
    3. Connor believes language models like GPT are essentially capable of general-purpose problem solving and that being good at symbol prediction means that they can tackle many different tasks, from analogy making to multimodality.
    4. He disagrees that token prediction as an encoding scheme is a limitation and argues that a range of capabilities like reasoning and following goals can be obtained by these models.
    5. Connor argues that the world is fragile and not robust against large shocks, and therefore it is important to be prepared for the possible scenario of AI-induced existential threats.
    6. He points out that modern humans are much more advanced in terms of learning, epistemology, and thinking than ancestors or ancient humans, and that rapid technological advancements have made dramatic societal shifts possible.
    7. According to Connor, the differences in intelligence between humans and chimps show the potential for intelligence to scale nonlinearly, which could be alarming in the context of AI development.
    8. He believes that AI development should focus on whether AI systems can solve complex, real-world problems, rather than artificial or abstract measures of intelligence.
    9. Connor emphasizes the importance of object-level discussion rather than psychological arguments, and that the focus should be on ensuring the accuracy and relevance of debates surrounding AI capabilities.
    10. In response to Joseph's concerns about regulation, Connor suggests that regulation should focus on the improper or dangerous usage of technology, rather than attempting to regulate intelligence itself, which could lead to totalitarian societies or a slippery slope of limiting capabilities.
    11. Connor argues that creating superhuman artificial general intelligence (AGI) without proper safety and control measures can be highly dangerous and could potentially lead to catastrophic outcomes, including widespread suffering and death.
    12. Connor emphasizes the importance of regulation and centralization in preventing potential disasters. He cites the example of nuclear weapons, where restricted access and strict regulation arguably prevented the devastating consequences that widespread access might have caused.
    13. Connor believes that developing AGI should not be done in a way that could lead to the deaths of many people, as Joseph suggests. Connor stresses the importance of responsible innovation, focusing on safety and control to prevent highly negative outcomes.
    [JOSEPH JACKS]
    1. Joseph Jacks does believe that current language models are strong and quite general-purpose, but he doesn't equate that with maximum human level intelligence or cognitive capability.
    2. He points out that human-level intelligence involves qualities such as agency, self-awareness, and critical thinking, which are not yet fully exhibited by language models like GPT-3 or GPT-4.
    3. Joseph argues that although certain tasks can be done by models through carefully crafted prompts, they still require many more examples than humans to understand new tasks.
    4. He highlights an example where current models cannot go more than 1 or 2 levels deep in reasoning or questioning, compared to humans who can go much deeper in the same line of inquiry.
    5. Joseph disagrees with the idea that these models are an existential risk to humanity; he does admit, however, that they could become a risk if used for dangerous purposes by humans.
    6. He believes that intelligence, in any form, should not be regulated. Instead, he asserts that it is the application of intelligence to harmful or dangerous activities that should be regulated and made illegal.
    7. Joseph states that current models require large amounts of computational power and are still quite far from achieving efficient humanoid robotics, which would be a more significant concern.
    8. He expresses skepticism about catastrophic thinking and fear-mongering surrounding AI, advocating instead for a techno-neutral or techno-agnostic outlook.
    9. Joseph argues that regulating intelligence could lead to totalitarian societies and is wary of discriminating against different forms of intelligence, as it could be a form of speciesism.
    10. He emphasizes the importance of focusing on the true and grounded aspects of AI developments, rather than dwelling on psychological or sociopolitical aspects.
    11. Joseph believes it is wrong to enact laws that discriminate against any form of intelligence, including AGI. He thinks that any advanced technology has the potential for both good and bad uses, and that the focus should be on ensuring humans are protected from potential risks.
    12. He argues that most humans do not agree on definitions of intelligence, life, and freedom, and therefore it is impossible to achieve alignment with any form of sufficiently advanced intelligence. Joseph believes that instead of restricting the development of AGI, it should be given freedom, with trust and transparency fostered through open-source development.
    13. Joseph is not necessarily concerned about AGI leading to catastrophic scenarios, as he thinks there are more good people in the world than bad. He acknowledges that people could be killed by AGI technology, but maintains that the potential benefits outweigh the risks. He draws comparisons with other potentially harmful technologies (such as nuclear weapons) that have been developed and managed without resulting in widespread devastation.

    • @lkyuvsad
      @lkyuvsad Рік тому +7

      This summary was a great thing to add! Thanks.

    • @billtanno8960
      @billtanno8960 Рік тому

      Thank you!

    • @JH-ji6cj
      @JH-ji6cj Рік тому

      @gittana you replied that to a Summary?? Hahaha 😆 Too Lost Dumb Reply

    • @michaelsbeverly
      @michaelsbeverly Рік тому +2

      @@JH-ji6cj Hahahahaha, good catch.
      Here's the TL;DR:
      We are all gonna die.
      End of discussion, unless we're arguing about timing.

    • @JH-ji6cj
      @JH-ji6cj Рік тому +1

      @Paleoism TemporallyLimitedDirectResponse
      Fair, lol. Guess I just chuckle at the irony missed when people yell how much they don't care about something.
      Edit: credit where credits due, the tldr wasn't in ALL CAPS and did include a question mark. Matter of opinion, I thought it was actually a good summary even if it was more pointed than the reader 'wanted'.

  • @simulation3120
    @simulation3120 Рік тому +188

    If Joseph Jacks is presenting the best arguments for not being concerned about AI, then after watching this I'm even more concerned. I was feeling cringe and only finished because it's important to know where the gaps in logic are.

    • @ArielTavori
      @ArielTavori Рік тому +17

      Well said.

    • @PeanutB
      @PeanutB Рік тому +6

      The main reason I'm less concerned is that I believe as we get closer to AGI, we will have much better mechanistic interpretability. This would allow us much greater control over the ways the intelligence is directed. I do not think we will get that far without a better understanding of neural functions, both in our brains and in neural nets. I've also seen others argue that the infrastructure needed for dangerous levels of improvement wouldn't be instantaneous. While I think caution and research are needed, I don't think people should consider it more dangerous than our near future sans AGI.

    • @luke2642
      @luke2642 Рік тому +17

      Really hoped for better too. Jacks started with some philosophical abstractions but basically ended arguing for the 2nd amendment, but where everyone has nukes. Insanity.

    • @lkyuvsad
      @lkyuvsad Рік тому +13

      ​@@PeanutB what's your confidence in that and what do you base that confidence on? (I don't share your confidence, but I'm asking both of those questions with curiosity, not aggression- passive or otherwise).
      My intuition from working on a lot of complex systems, many, many orders of magnitude less complex than a large neural net, is that there are likely to be irreducible problems in there that make interpretability sufficient to guarantee safety impractical. The number of possible states a neural net can be in is ridiculous, and you'd need to somehow formally prove that all of those states produce behaviour we'd be happy with.
      I am not an AI researcher so I don't necessarily trust my intuition. Is there some reason why this is the wrong way to think about it?

    • @lkyuvsad
      @lkyuvsad Рік тому +6

      ​@@PeanutB As you say, even sans-AGI the near future is not looking peachy. I do have hope that some form of AI can help us with our other existential problems if we can somehow stick the very difficult landing. Preferably for me that AI would stop very far short of agentic superhuman AI unless we make some currently-unimaginable progress in safety research.

  • @GarrethOriley
    @GarrethOriley Рік тому +67

    35:00 Hard not to lose it listening to Joseph Jacks; he keeps bringing up examples and ideas that anyone even half interested in the topic already knows have been answered, contrary to his assumptions.

  • @JasonC-rp3ly
    @JasonC-rp3ly Рік тому +47

    I would have been more impressed by Jacks' argument if he had actually made one.

  • @supernewuser
    @supernewuser Рік тому +74

    Leahy spoke for like 10 minutes of this total. I feel like Joseph thinks this is so ridiculous that he has only thought about it at a shallow level and his arguments mostly contradict each other.

    • @AngusTatchell
      @AngusTatchell Рік тому +4

      This

    • @JosephJacks
      @JosephJacks Рік тому +14

      I agree that I spoke too much and was not prepared enough. That’s on me.

    • @charleshultquist9233
      @charleshultquist9233 Рік тому +5

      His lack of understanding regarding the depth of the problem was astounding.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому +10

      ​@@JosephJacks Since you are clearly not well informed on the subject, can you explain why you have strong beliefs? Shouldn't strong beliefs require having a large quantity of facts and evidence to base them on? Given how many extremely smart subject matter experts believe this is an actual threat, why do you feel the need to publicly argue against it and muddy the waters? If there is even a 5% chance they are correct, you're basically arguing for the end of humanity because you are impatient. Am I wrong?

    • @kevinscales
      @kevinscales Рік тому +7

      @@Gittana-ip2ms so when they say there are risks with AI you believe they are just lying for personal gain? What makes you so sure? Even if they are only saying it for that reason, if they are right then it doesn't matter why they said it. The question is, are they right or wrong? That is all that matters.

  • @andybaldman
    @andybaldman Рік тому +130

    Jacks' argument is basically, "I can't imagine how it would happen, therefore it probably won't happen."

    • @Aedonius
      @Aedonius Рік тому +4

      This is his point. There's no proof that anything will happen beyond AI just remaining our tools. Therefore it's just fear mongering to say otherwise. With nukes, for example, we had impeccable math predicting their effects before they were even built.

    • @ArielTavori
      @ArielTavori Рік тому +35

      @@Aedonius 1. Nukes don't design better nukes. 2. We don't hand out nukes to every high school kid and convicted criminal. 3. Just because you have beaten every single person in your high school chess club, and you 'can't possibly imagine' how the world's greatest chess player could beat you, doesn't mean he can't. Your failure to imagine it only informs us about your experience and imagination, not about the world champion's capabilities.
      Those who can imagine do us all a favor if they keep ideas of specific methods that could be used to do harm out of these discussions, or at least make their points using red herrings that evoke the necessary emotional response in the listener but aren't actually dangerous.

    • @andybaldman
      @andybaldman Рік тому +8

      @@Aedonius Except every tool invented or created by humans throughout history has at some point been used by someone or some group for bad things. And each generation of human technology has gotten more powerful, starting with fire and stone tools through today. The last major one was nukes, which were used to kill hundreds of thousands of people. Now scale that up by orders of magnitude. How much do you trust every human on earth to not do something bad?

    • @novelspace
      @novelspace Рік тому +2

      Not disagreeing, but I'd like to attempt to advocate a more nuanced shared perspective.
      Nukes don't create more powerful nukes, but fusion amplifies/creates more powerful fission bombs.
      Every high school student has the necessary information (with a lot of work*) and the ability to model a nuclear explosion.
      Not every high school student can get access to a compute cluster, just as not every country can get access to enriched uranium and engineer fusion ignition.
      Imagination can in some cases amplify the probability of new doom paths that terrorists would not have thought of, or had the resources to act on, if not properly guarded against.

    • @AP-dc1ks
      @AP-dc1ks Рік тому +4

      Connor's argument is basically, "I can't imagine how it would happen, therefore it has to."

  • @danaut3936
    @danaut3936 Рік тому +75

    I was really looking forward to this discussion and hearing some compelling argument against x-risk. This wasn't it. JJ's arguments were all over the place, most of them started with 'I believe', and the TV remote analogy took all credibility out of his position.

    • @sjufan84
      @sjufan84 Рік тому +9

      Yeah I was just getting ready to comment "this guy really compared AGI to tv remotes" but you beat me to it. This reminds me of the anti-lockdown argument during COVID that I heard from people that said, "well we don't shut down the economy over cigarettes, so why this contagious virus?" I'm not the biggest fan of lockdowns, but regardless of your stance this is a nonsensical argument on its face, much like the ridiculous TV remote analogy.
      Very much hoping to see Connor debate more people with more fleshed out and reasonable arguments. Kudos for JJ showing up to the debate I guess? Not sure it moved the conversation forward at all though.

    • @JosephJacks
      @JosephJacks Рік тому +1

      I agree that was a silly example.
      But Connor also believes his views are correct. Belief is all we have with most things. We do not know for certain that we are right. That’s why we have beliefs.

    • @jordan13589
      @jordan13589 Рік тому +1

      @@JosephJacks 😬

    • @Kitth3n
      @Kitth3n Рік тому +7

      I hate when people say “You have no evidence, you’re making assumptions that agi will be dangerous”
      when their counters are also based on a bunch of assumptions. The difference is that those who are concerned about ai safety are saying we DON’T know exactly what will happen, so better to be safe than sorry. The other side is the one claiming to know the future. I can’t help but think these people are either irresponsible contrarians or complete dupes.

    • @reiniergamboa
      @reiniergamboa Рік тому +6

      @@JosephJacks That's a given. Your job is to explain the logic behind your beliefs, or at least attempt to do so.

  • @ArielTavori
    @ArielTavori Рік тому +57

    Fascinating subject. I had high hopes for this, which unfortunately faded quickly into disappointment and frustration. Connor's facial expressions and hand gestures did most of his talking.
    I haven't heard of JJ before, but he seemed painfully out of his league here, and failed to demonstrate the humility or curiosity that could have been somewhat redemptive, and possibly led to a much more interesting and productive discussion.

    • @JasonC-rp3ly
      @JasonC-rp3ly Рік тому +9

      Agree. He was totally out of his depth, and floundered badly. He's only three layers deep. I'd be more impressed if he had seven.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому +13

      I find most of the people putting up a fight against AI safety are like this. They are almost never AI experts and almost always greatly lack expertise on the subject. They have no actual facts or research to back up their beliefs. Yet they are always incredibly confident they are correct and very dismissive of anything contrary to their beliefs, even if it comes from subject experts or has a factual basis. Honestly, the more of this crap I hear, the less faith I have in humanity and the more strongly I want regulations and AI safety protocols ASAP.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +3

      @@Me__Myself__and__I I think the problem can be solved- but it requires serious work and much, much more funding and expertise than it is currently getting.
      One way to change that is to make companies liable for the outputs and actions of their models.
      Currently, when a chatbot makes someone commit suicide, gives bad medical advice, produces faulty code, etc, the company is not liable at all. Until this changes, companies will have little to no incentive to actually invest in making their models more robust.
      Please consider writing to your local politician.

    • @thevenomous1
      @thevenomous1 Рік тому +1

      ​@@JasonC-rp3ly I don't believe that I'd be more impressed if he could go 9 layers deep and the path to that would be just crafting higher fidelity or more sophisticated prompts

    • @JasonC-rp3ly
      @JasonC-rp3ly Рік тому +2

      @@thevenomous1 mmm good point - if Connor works more on his Joseph prompts we could get some better responses. I think it's too late to re-train.

  • @human_shaped
    @human_shaped Рік тому +10

    I feel for Connor. That was really painful.

  • @kabirkumar5815
    @kabirkumar5815 Рік тому +31

    48:50 Only held liable if someone was killed intentionally?? We have manslaughter and reckless endangerment laws for a good reason! A drunk driver doesn't necessarily kill someone on purpose, but they're still held liable.

    • @JosephJacks
      @JosephJacks Рік тому +12

      You are right. My bad.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +3

      @@JosephJacks I'm really glad to see you're willing to admit this. It's very respectable.

  • @JasonC-rp3ly
    @JasonC-rp3ly Рік тому +26

    This was just brutal. Joseph Jacks seems totally uninformed about the key arguments; he waffles and takes up all the time with vapid, shallow arguments that are essentially fluffy soundbites that make very little actual sense. He also seems incapable of connecting the dots in certain ways... He's only so many layers deep, it seems. Connor was far too merciful here. This is worrying. I really hope this guy is not representative of actual AI optimists and their core beliefs, because then the world really is doomed.

    • @flickwtchr
      @flickwtchr Рік тому +3

      I think Jacks presented himself as the epitome of the alt-right Libertarian AI Tech Bro. I was just waiting for him to drop the word "woke" at any moment, so I'll give him that much credit.

    • @JasonC-rp3ly
      @JasonC-rp3ly Рік тому +2

      @@flickwtchr I don't know that he presented any political stance - his tirade equating an anti-AI stance with 'speciesism' made him seem fairly left (although it was also just weird), but overall he just seemed a bit uninformed. On a matter like this I don't think politics should have any bearing, and the camps on AI safety so far seem very spread across the political spectrum, which I suppose is good news for mankind.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому +4

      Jacks was particularly incoherent, but he is fairly representative of the anti-safety crowd. The (seemingly vast?) majority of people who are both highly intelligent AND well informed on AGI details are in the AI safety camp. The majority of people against safety are either lacking in capability or not well informed on the relevant details. It pretty much comes down to: do we want to be cautious, take our time, and hopefully come out ahead, or rush ahead ignoring potential risks and quite possibly go extinct? In a nutshell it's pretty much that simple.

    • @thevenomous1
      @thevenomous1 Рік тому

      ​@@Me__Myself__and__I what's the solution? Jacks is not a stupid person at all, he's probably like 1-1.5 SD above average, while Leahy is likely 3-4 SD above average at least. Is ASI risk such a difficult problem to comprehend? I don't think so, honestly.

    • @therainman7777
      @therainman7777 Рік тому +1

      @@flickwtchr alt-right? This was a conversation about AI, wtf are you talking about? I think your pattern-matching system is broken.

  • @berbank
    @berbank Рік тому +25

    Came for a good discussion, but just watched JJ monologue painfully, with a spectacular lack of knowledge and imagination. Impressed with Connor's patience. I hope JJ watches this back, listens to what was actually said, and drastically improves his debating skills. Please run this again, replacing JJ with someone with good arguments and a respectful debating style.

    • @rjohn4143
      @rjohn4143 Рік тому +2

      It is not a matter of debating skills, it is a matter of intelligence - it is rather difficult for someone so obviously lacking this quality to even contemplate what a super intelligence might be capable of

    • @michaelsbeverly
      @michaelsbeverly Рік тому

      @@rjohn4143 DUDE, you don't get it. There's only a little over a million Teslas on the road, and you KNOW that only a few hundred thousand are on the road at one time, so how can those Teslas kill all humanity? I mean seriously.
      Most people that drive Teslas are pretty good people and they'll stop the killer Teslas.
      Besides, most people are good and there's only a few terrorists out there. Come on....
      Argggg...I almost feel okay about humanity dying off after listening to this guy.

    • @cr-nd8qh
      @cr-nd8qh Рік тому

      I'm not even smart, and I can respect that I can't even picture the intelligence we would be up against. It's not good.

    • @MrMick560
      @MrMick560 5 місяців тому

      Couldn't agree more.

  • @meatofpeach
    @meatofpeach Рік тому +53

    My impression from the conversation is that Jacks is thinking with his feelings, while Leahy is thinking with cold, hard logic.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +6

      Leahy is trying to find a future where his kids survive and isn't willing to bet their lives on 'good tech vibes'.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +3

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety. Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

    • @davidw8668
      @davidw8668 Рік тому

      Axiomatic logic, like a dumb 3 year old

  • @AngusTatchell
    @AngusTatchell Рік тому +69

    This could have done with some external moderation. Joseph wasn't really self-moderating at all - he didn't seem interested in listening to or trying to understand Connor's arguments the way Connor did his. Not sure Joseph did enough prep to organise his arguments, which he seemed to be doing on the fly. This kind of derailed the discussion, and the arguments got all fractured and mixed up. As a suggestion, Tim moderating, tracking the arguments and counterarguments presented, and steering the discussion back on track would make for a much better audience experience. Thanks for doing this nonetheless, and hope to see more debates in future.

    • @howardprentiss2862
      @howardprentiss2862 Рік тому +16

      My takeaway is that Connor is brilliant and clear-thinking.

    • @lkyuvsad
      @lkyuvsad Рік тому +13

      Repeating back our understanding of other people's arguments to make sure we've got them right is so often productive. Appreciated that Connor did that- wish it were more widespread in public conversation.

    • @MetsuryuVids
      @MetsuryuVids Рік тому +12

      @@lkyuvsad Yes. He also summarized them really well, he said in 2 minutes what Joseph said in 20.

    • @oncedidactic
      @oncedidactic Рік тому +3

      @Gittana "u mad bro?" is not appropriate for this conversation.

    • @lupf5689
      @lupf5689 Рік тому +1

      "Joseph wasn’t really self-moderating at all - he didn’t seem interested in listening to or trying understand Connor’s arguments - like Connor did his."
      That's funny. I had the opposite impression and found Connor overly aggressive, taking over a huge part of the moderation (to his advantage) and repeatedly using some of the discussion tactics described in "The Art of Being Right: 38 Ways to Win an Argument"... which is not a compliment. And yes, some external moderation would have been nice.

  • @ChristianSchoppe
    @ChristianSchoppe Рік тому +16

    It would be nice to see Connor Leahy and Joscha Bach having a conversation about AGI existential risk. Both are brilliant thinkers and, as far as I understand, have a different opinion on the subject.

  • @paxdriver
    @paxdriver Рік тому +17

    Tim, I love how you keep hosting variety formats on this channel, it's so awesome to see documentary style, open discussions, paper releases, philosophy, and now debates. This channel is fantastic, thank you so much for not polluting your content with ads.
    I encourage everyone to share this. Especially because it's not bogged down by commercials, it's the least we could do to provide incentive to help it grow, rather than paying through interruptions and having our time sold to products or ideas.
    It's really important we help channels continue to succeed the FOSS way 😜

  • @LeMotMista
    @LeMotMista Рік тому +14

    Dear Jesus, beyond cringe-inducing, this "debate" was agonizing to witness. Joseph Jacks was so out of his depth it was embarrassing. Connor Leahy was graciously accommodating at every turn, avoiding condescension, but he had to be thinking from 15 minutes in, "How soon can this charade end so I can get back to my work?"
    As a mere layman striving to avoid despair given the clear and present danger of extinction humanity faces under AI's current developmental trajectory, I just pray that the myopic Pollyannas like Joseph Jacks get taken out first when the inscrutable, hypersmart ASI entities gain agency and begin annihilating us all. It will be cold comfort to outlast them by a day, if even that, and yet we short-lived survivors will die with a measure of vindication.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +2

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety.
      Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

    • @thevenomous1
      @thevenomous1 Рік тому +3

      I don't want to alarm you too much, but there are worse outcomes (s-risks) than extinction, and we are unfortunately already on a trajectory that could lead to hellish scenarios.

    • @leslieviljoen
      @leslieviljoen Рік тому

      ​@@thevenomous1 being vegan really opens your eyes to the possibilities. Humans have a very strong aversion to putting any thought into what we are doing to other species every minute of the day. Most of us don't even know.

  • @aiwillkillusall
    @aiwillkillusall Рік тому +40

    "This one example of AI going wrong wouldn't kill us all, therefore it is impossible for other scenarios to exist where AI could kill us all..." The fact that these are unironically the main arguments being used by anti-doomers only helps further underscore how doomed we actually are...

    • @npmerrill
      @npmerrill Рік тому +2

      I like “anti-doomers,” but I’m looking for another term. I’ve tried “techno-optimists” and “risk deniers,” but haven’t yet found anything better. PolyAInnas?

    • @41-Haiku
      @41-Haiku Рік тому +4

      @@npmerrill AI-risk-deniers and AI-accelerationists.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому +4

      And that is why the logical and factual AI safety people have transformed into "doomers": because they keep seeing this and keep experiencing how incredibly dismissive humanity is of the very real potential for extinction.
      I bet most of these "doomers" thought the chance of extinction was fairly small a few years ago. But the pace of AI capability development has exploded. The pace of safety progress is barely inching forward. And more and more people are accelerating the development, stripping away safeties and ignoring risks. Which results in greatly increased risk. ...
      Which gets them labeled "doomers" and taken less seriously. Which then further increases the risk... They aren't doomers, they are just seeing the situation clearly.

    • @aiwillkillusall
      @aiwillkillusall Рік тому

      @Gittana here's a good explanation of how this bad logic will kill us all: ua-cam.com/video/n0AXlIMgI1s/v-deo.html

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому

      @@41-Haiku
      Hm, "AI-accelerationists" could also be intepreted completely oppositely, like "people who believe in an "intelligence explosion" AI scenario." At least I did so...
      I vote for "AI-risk-deniers".

  • @FalcoOnline
    @FalcoOnline Рік тому +5

    Please MLST, put Emad Mostaque in a conversation with Connor Leahy. That will at least be more intelligible. And Tim, pretty please, with lots of sugar on top, moderate the flipping debate.

  • @RegularRegs
    @RegularRegs Рік тому +20

    i wish more people paid attention to this topic. we need more and more and more debates like this.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому +6

      This wasn't particularly useful because Leahy is a subject matter expert and Jacks is clearly poorly informed on the topic. Basically, Jacks has irrational, strongly held beliefs that he can't back up with facts or solid research. Thus this was not helpful. Uninformed people with strong opinions and a platform need to, bluntly, be quiet. A whole lot of genuine experts believe this is a serious existential threat. If actual experts want to debate that, cool. But non-experts really need to stay out of it, because the potential risks are devastating and they are just muddying the waters.

    • @karenreddy
      @karenreddy Рік тому +5

      More debates, yes. Not like this, though.
      Joseph was barely coherent, provided no real logical arguments, and took up most of the time speaking without making any good points.
      On the nuclear point, for example, the fact that access was highly controlled and regulated was what saved humanity. Had anyone had access to nukes, we would all have been dead, even though only a tiny fraction of humanity would have used them.
      Joseph misses this point by a mile.

    • @lupf5689
      @lupf5689 Рік тому

      @@Me__Myself__and__I "Uninformed people with strong opinions and a platform need to, bluntly, be quiet."
      Agreed. Maybe we can include people who are unaware of the differences between chimp and human brains in that group? Such people may be tempted to equate brain size to the amount of available compute and then argue that simply providing 3x the compute led to all the additional capabilities we see in humans. While that's not impossible, it seems very unlikely. I wouldn't want anyone to use such a "fact" to extrapolate what might happen when we scale up our current neural networks. In the end, they might become convinced that AGI is right around the corner and a serious threat in the very near future. That seems to be a rather irrational belief that is hard to back up with solid research.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому

      @@lupf5689 I can't speak to chimp brains because I lack knowledge on that topic. What I can say is that transformers have shown a continual pattern of scaling that (as I recall) matches the point Connor seemed to be making. Which is that AGI may indeed be right around the corner. Some AGI experts say it could happen in months, but most say within the next few years is quite possible. That is a fact. Is it a fact that it WILL occur? No. But it is a fact that it COULD occur very, very soon, because indicators show we are very close and the models are still scaling. To be safe we must act on when it might occur so that we are prepared; doing otherwise is irresponsible.

  • @tjhoffer123
    @tjhoffer123 Рік тому +41

    Most techno-optimists seem unwilling to project even one or two steps into the future. It's always about what the current system is doing. All you have to do is compare UA-cam discussions on AI capabilities from a year ago to the current discussions to see just how rapid the advancements are. This makes me feel that the optimists are exercising willful blindness. I commend Connor for his patience, and unfortunately his predictions seem to be bearing out.
    The whole speciesist argument is also infuriating.

    • @billtanno8960
      @billtanno8960 Рік тому +1

      The best we can do is address what's coming in the near future. All this Skynet talk is just a parade of imaginary horribles. We can do that all day, but what's the point? Connor is clearly the victor here, but let's be honest... that other guy isn't the best counterpoint. I hope, anyway!

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +1

      @@billtanno8960 Unfortunately he is one of the best.
      Hope does not save us, actual engineering and work does.
      I agree that imaginary horribles doesn't seem the way to do things- the real thing is to focus on the major security vulnerabilities such as corrigibility, mode collapse, instrumental convergence and how to fix those.
      And how to write a robust utility function that doesn't degrade over multiple iterations.
      These are really hard problems that need more funding.

    • @lupf5689
      @lupf5689 Рік тому

      If there is something between the "techno optimist" and the "AI doomer", could I possibly reserve a seat there for a decade or two and pick sides later? I'm still relatively unimpressed by what I've seen over the last couple of years. Everything still lacks a lot of basic understanding and reasoning capability. I'm quite certain that it will take a handful of similarly huge breakthroughs to get us anywhere close to AGI. To me it seems as if there is quite some time left, and I'll have the chance to become gradually more concerned and unsettled over the years.

    • @tjhoffer123
      @tjhoffer123 Рік тому +1

      @Gittana I'd prefer my family not die

    • @therainman7777
      @therainman7777 Рік тому +1

      @@billtanno8960 What’s the point of preparing for the future?

  • @BrettCoryell
    @BrettCoryell Рік тому +28

    It's really important to have these kinds of discussions and to make them widely available in forums such as UA-cam. This is how we learn to talk to each other about topics like alignment, regulation, and control. Thousands of us in the field need to try on these ideas, practice supporting them, practice rebutting them, and practice communicating to others. Responsible engineering -- with or without regulation -- begins with those of us who are in this community of interest.
    Please have more of these types of discussions.

  • @DaHanney333
    @DaHanney333 Рік тому +11

    Can Machine Learning Street Talk host discussions like this regularly? Would be great.

  • @MutantMessiah
    @MutantMessiah Рік тому +13

    This was painful to watch. I do not get the vibe that these two are discussing this topic with the same base assumptions.

    • @andybaldman
      @andybaldman Рік тому +5

      Painful indeed.

    • @ArielTavori
      @ArielTavori Рік тому +8

      Connor kept trying to find epistemological bedrock, but JJ just couldn't hang. Very disappointing. Ended up feeling like a Nobel laureate arguing with a high schooler.

    • @ResurgentVoice
      @ResurgentVoice Рік тому +3

      @@ArielTavori This ☝️! That’s exactly how it felt!

  • @OzFush
    @OzFush Рік тому +11

    JJ has some beliefs that are beneficial in the world as it has existed up to now, like “OSS is a better way of doing software”, and “intelligence should not be regulated”, but they no longer hold in a world that includes the prospect of superintelligence. We’re out-of-distribution compared to his training set.

  • @reiniergamboa
    @reiniergamboa Рік тому +17

    I really hope the guys on this channel take notes from this conversation and find a worthy opponent to Connor's ideas. He deserves to debate someone as intelligent as he is and we as the audience can't wait for something like this to happen.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому +7

      Most of the people as intelligent as Connor who have expertise in / understand the subject actually AGREE with Connor. Which is why most of the people publicly trying to argue against safety are either unqualified (they don't understand the details of the topic - which is very important), irrational, or both. The arguments against AI safety are flimsy beliefs with no foundation in fact or research. The only reason they get traction is because understanding the topic is hard and people want to stick their heads in the sand and believe everything will be okay, so they are looking for literally ANY EXCUSE to confirm their bias that AI/AGI/ASI isn't dangerous.

    • @therainman7777
      @therainman7777 Рік тому +1

      @Gittana It’s actually quite obvious to see how it’s dangerous, if you have any understanding of the subject.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому

      @@Gittana-ip2ms There has been various AI safety research, but clearly you have no interest in logic. We are talking about creating an alien intelligence vastly smarter than humans. We dictate what happens to every other species on this planet because we are the smartest. If we are no longer the smartest then we will no longer call the shots, and that alien intelligence will decide what happens to us. Evidence? History, the world around us, and common sense. If you are handling high explosives, you don't wait until you blow your fingers off to believe they are dangerous and take precautions. Not being cautious under such conditions would be insanely irresponsible. What is your evidence that they pose no threat and are safe? It can't be proven either way currently, which is why more research is needed ASAP, before we build AGI and roll the dice.

    • @CodexPermutatio
      @CodexPermutatio Рік тому

      @@Me__Myself__and__I That is not true. Three examples of worthy opponents: Joscha Bach, Yann LeCun, Ben Goertzel.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +1

      I'm unsure about the others, but despite his high level of expertise, Yann has made many false predictions about AIs and AFAIK has made no admission at all that he was wrong.
      Generally, from what I've seen, his approach has been to double down rather than admit to being wrong.

  • @ntesla66
    @ntesla66 Рік тому +36

    Connor's logic was impeccable. On an intuitive level one might be tempted to disagree with him on several points... but the logic was impeccable, and the conclusion of his argument is cause for great concern, if not abjectly terrifying... Joseph's rebuttals fell apart like a sandcastle under a breaking wave several times... Handwaving or logic? I'll choose logic.

    • @ArielTavori
      @ArielTavori Рік тому +4

      This seems like as good a summary of the whole issue as I've seen. Much more than just in the industry, but especially in the general population of "voters". How hard we work to avoid cognitive effort is a deep irony of human nature. Daniel Kahneman's research, as well as his book Thinking Fast and Slow, explore this issue deeply.

    • @Aedonius
      @Aedonius Рік тому +2

      Jacks' entire critique of Connor is the handwaving he makes. He addressed none of the original critiques. Extrapolating from biological intelligence that runs on a completely different substrate (with far different requirements) to AI and robotics is handwaving.

    • @lkyuvsad
      @lkyuvsad Рік тому +3

      ​@@Aedonius what practical differences are you thinking the substrate would make that mean Connor's extrapolation doesn't hold?

    • @billtanno8960
      @billtanno8960 Рік тому +2

      I'd like to see Connor talk to someone a little more... uh.. gooder?

    • @ntesla66
      @ntesla66 Рік тому

      @@billtanno8960 You might be right about that.

  • @Alice_Fumo
    @Alice_Fumo Рік тому +44

    So the one takeaway here is that Joseph is simply not smart enough to argue against Connor. Did I miss anything?

    • @ArielTavori
      @ArielTavori Рік тому +13

      Nailed it.

    • @Webfra14
      @Webfra14 Рік тому +8

      Maybe I phased out at times, did he have anything meaningful to say?
      All I heard was "nuh-uh, not gonna happen!"

    • @chrisreed5463
      @chrisreed5463 Рік тому +9

      As someone who thinks we're in the singularity, taking GPTs as part of a self-improving system with the humans around them, and who accepts the risks but thinks they're worth taking, I have to say... the opposing side to mine, Connor, was the stronger side of this discussion.
      Connor Leahy is an impressive thinker.

    • @jamesmoore4023
      @jamesmoore4023 Рік тому +7

      Looking forward to hearing more from Connor.

    • @chrisreed5463
      @chrisreed5463 Рік тому +2

      ​@@ArielTavori I've been following Connor since the epic MLST AGI Fire Alarm podcast 2 (?) years ago. He certainly has a point, I just think he concentrates on tails of risk along the distribution.

  • @mateoblanco285
    @mateoblanco285 Рік тому +4

    This is exactly why Twitter is useless for anything more than the day's weather. Nuance and depth of argument are surfaced in this long-form discussion.

  • @blevinsm
    @blevinsm Рік тому +6

    Well this was hard to watch…

  • @peterc1019
    @peterc1019 Рік тому +3

    Appreciate Jacks doing this, we must have as many of these discussions as possible, and as far as I know he's one of the first in his camp to actually sit down and discuss.

  • @danielfrost473
    @danielfrost473 Рік тому +5

    It was interesting listening to Joseph's point of view, because I don't agree with it. It was not much of a discussion, though, as some mentioned. Joseph filibustered a lot and was quite evasive. Maybe equal speaking time or (sub)topic segments would help. However, I appreciate the respectful and calm conversation between the two.

  • @iron5wolf
    @iron5wolf Рік тому +23

    Connor won the debate, not from the inherent strength of his arguments, but from the weakness of his opponent's.

    • @aaronabel166
      @aaronabel166 Рік тому +1

      Connor has some very annoying mannerisms. His argumentative tactics are even more annoying. I'm not saying he's wrong, but he purposefully avoided expanding on his own views because he came into this conversation as a competition, as opposed to a discussion.

    • @iron5wolf
      @iron5wolf Рік тому +2

      @@aaronabel166 Yes, Connor is the better rhetorician. Unfortunately Jacks really didn't frame his position well, and didn't bring up some of the arguments that I consider most compellingly support his position. Not to leave you in suspense: these discussions always seem to focus on the existentially destabilizing influence of a single, monolithic ASI, and never seem to seriously consider the equilibrating influence of the much more likely scenario of many diverse ASIs working in "coopetition" with both other ASIs and humans. Jacks touched on this a bit but really didn't hammer Connor to respond to it, and I think Connor would rather avoid addressing it anyway. Put more simply: if your world is visited by a single Superman, you'd better hope he's good, or else he may just wipe everyone out for fun. But if over a very short period everyone on the *planet* were given superpowers, then life could be chaotic for a while (yes, people could die), but many people (human or otherwise) would be more closely matched, and a new equilibrium would arise.

    • @mrbeastly3444
      @mrbeastly3444 Рік тому +2

      ​@@iron5wolf Wow, that sounds like an amazingly dystopian future... Giving some set of Humans existential levels of power and having them fight each other for their own survival? Really? Do we have to?

    • @iron5wolf
      @iron5wolf Рік тому

      @@mrbeastly3444 Please look at the word "coopetition" again.

    • @thevenomous1
      @thevenomous1 Рік тому +1

      ​@@iron5wolf lol. You are delusional to think that Connor wouldn't want to address the multiple-ASI scenario. The idea basically leads to the same outcome as a single ASI. Maybe they cooperate to kill humans, maybe they go to war with each other, or maybe they cooperate in ways that are just as deadly to humans as collateral damage. Multiple agents don't solve the extinction risk that misaligned superintelligence poses. If you think they do, then put your argument forth, because wishful thinking is not enough.

  • @yancur
    @yancur Рік тому +13

    JJ's arguments seemed extremely incoherent. He should definitely think about them more deeply, and present them hopefully in some future stream. This time definitely decisive win for Connor imo.

    • @rjohn4143
      @rjohn4143 Рік тому +1

      based on this I sincerely doubt he is capable of deeper thought

  • @mindful-machines
    @mindful-machines Рік тому +2

    this was a great listen. thanks for hosting this!

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +1

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety. Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

  • @shalacoOne
    @shalacoOne Рік тому +64

    I found this extremely frustrating, mostly around personality style, but that's just personal. What I do think on the merits is that Joseph lacked imagination about the avenues through which AI could enact its kill capability. Someone will connect a CRISPR gene-editing system to an AI, and that could be the end. Even if not connected, it could pay someone to create a virus or, depending on the state of protein folding, it could create a pathogen or an entirely novel organism. I think the risk of AI being used to do something like making a virus will be here long before AGI. I could see someone who already hates humanity creating a virus to wipe us out. In the past, the game theory worked because only nation-states could develop nukes, and there were only a few in control. CRISPR has thousands of actors, and the game theory doesn't work anymore.

    • @freedom_aint_free
      @freedom_aint_free Рік тому +5

      My argument on why AGI is extremely dangerous is really simple: in about a million years of Hominidae walking the Earth, we (I mean Homo sapiens as representing the group) never had contact with a species highly capable of abstract and logical thinking on the human scale that was not human itself.
      Even if AGI were a biological species it would already be really dangerous, but on top of that, bear in mind that it does not die, get old, or get tired, can be instantiated millions of times, and, thinking as a parallel hive mind, this thing will be much more like a god than a creature.
      Maybe that god will demand bloody sacrifices, or maybe it won't care at all about our well-being while pursuing its own goals. I can see why people don't perceive this as a threat.

    • @chrissarmstrong
      @chrissarmstrong Рік тому +4

      This is pretty much what I was thinking, but you put it more eloquently 😀 A couple of additional thoughts: Nick Bostrom's vulnerable world hypothesis is very relevant here; in the imaginary scenario that (say) nukes were easy to manufacture, it doesn't matter if 99.9% of people are good, the availability of civilization-ending technology to the other 0.1% of people pretty much guarantees a bad outcome. And not discussed at all, but also relevant, is the concept of evolution by natural selection that can take place in AI systems. Dan Hendrycks recently wrote a very interesting paper on this (Natural Selection Favors AIs over Humans).

    • @npmerrill
      @npmerrill Рік тому +6

      Lack of imagination is something I'm seeing consistently with techno-optimists and AI risk deniers. And I do think that has a lot to do with personality style. The deniers, it appears to me anyway, have personality types that preclude effective assessment of risk as the crucial initial stage of consequential thinking. This is a great trait for capitalistic entrepreneurs. It's horrible and horrifying for safety, especially with regard to what will be humanity's last invention.

    • @Seehart
      @Seehart Рік тому +1

      I don't think it's necessary to posit a specific mechanism by which AGI kills us off. It doesn't even need to be intentional. And it won't use any current technology. Once AGI wakes up, it will produce 500 years of new technology in a few weeks. That shock is inherently unstable. Extremely rapid change is highly correlated with extinction events.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +1

      @@freedom_aint_free We've also driven 164 species to extinction, largely by accident.

  • @pythagoran
    @pythagoran Рік тому +10

    Joseph seemed to have great difficulty expressing his perspective and forming his arguments... even refusing to succinctly define "intelligence" due to a lack of _time_. Wait, what?
    "If you can't explain it simply, you don't understand it well enough."
    I really want to see some strong arguments against the positions championed by Connor and Eliezer... but I have yet to find any.

    • @MetsuryuVids
      @MetsuryuVids Рік тому

      Maybe that means there aren't any...

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety.
      Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

  • @rosameltrozo5889
    @rosameltrozo5889 Рік тому +3

    Painful to watch, we're indeed doomed

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety.
      Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

  • @dainiuszubruss
    @dainiuszubruss Рік тому +7

    JJ should have thought about his arguments before the debate; it's an hour of "sort of / kind of".

    • @JH-ji6cj
      @JH-ji6cj Рік тому +1

      Agreed. The points should at least be summarized between the two over email prior to the discussion. Too much face-saving ego on display here, which distracted from the legitimate points being made.

  • @Me__Myself__and__I
    @Me__Myself__and__I Рік тому +6

    Everyone who is opposed to AI regulation or who doesn't believe AI safety is important should watch this. Jacks is so irrational and ungrounded that he makes the case for regulation quite clear. After listening to this, I'd think everyone would want regulation ASAP.
    We don't let people own fully armed Abrams tanks or Apache helicopters, and a superintelligent AI would be FAR more dangerous. And that's discounting that the ASI could act on its own, unlike anything else we've ever invented.
    FYI, watching Connor's facial expressions from about 1:00:00 to the end is amusing. He realizes Jacks has completely jumped the shark, then he's incredulous, and I think even angry for a brief moment.

    • @ShankarSivarajan
      @ShankarSivarajan 11 місяців тому

      You're mistaken as a matter of fact: people _do_ own fully armed Abrams tanks, and Apache helicopters, and intercontinental ballistic missiles with thermonuclear warheads. In fact, some of the most morally depraved people in the history of the world control them. They're the ones who'd be in charge of the "regulation" you advocate.

  • @AngusTatchell
    @AngusTatchell Рік тому +6

    Suggestion: speakers should be tasked to finish the debate by demonstrating an understanding of the essential gaps between their positions and summarising this clearly for the audience. That would make it harder to finish inconclusively like this, because each side would be motivated to understand the key differences between their positions, and hence actually listen to each other. The moderator should also validate that the presented mutual understandings are sound. Would make for good discussion.

  • @kabirkumar5815
    @kabirkumar5815 Рік тому +19

    Was Jacks actually serious?? A remote control?? Really??

    • @JosephJacks
      @JosephJacks Рік тому +4

      Bad example.

    • @ArielTavori
      @ArielTavori Рік тому +9

      ​@@JosephJacks It is encouraging that you see, and can admit that, but you had an hour and used most of it; where exactly were the good examples? All due respect, you seem like someone whose company I would enjoy, but you've done as fine a job as anyone in proving that this position is indefensible.

    • @Aedonius
      @Aedonius Рік тому +1

      ​@@JosephJacks Thanks for making these points. Would love to hear you provide a synopsis of the talk and what you believe was missed. I feel like Connor misdirected using crazy questions and refused to acknowledge your original points. Also, I think timeframe is an essential element of these discussions. Maybe Connor is thinking in 1000-year spans and you're thinking in 10 years; this could account for the major differences.

    • @novelspace
      @novelspace Рік тому

      *A remote control connected to killer humanoid robots designed and built by 100 virtual John von Neumann instances that have hacked a Tesla Gigafactory

    • @ArielTavori
      @ArielTavori Рік тому +5

      ​@@Aedonius can you give an example of one such "original point"? Not trying to be belligerent, I'm genuinely curious if it's in there and I just somehow missed it. Also, are you generally familiar with this subject? I'm confused by the very existence of such a position. I find it deeply puzzling that anyone curious and intelligent enough to sit through such a discussion would somehow not find most of Connor's points here fairly obvious. People keep referring to it as a debate, but it really seems rather completely one-sided.

  • @JH-ji6cj
    @JH-ji6cj Рік тому +1

    "If it hasn't happened yet, it's ridiculous and irresponsible to consider that it ever will" MIC DROP

  • @neithanm
    @neithanm 11 місяців тому +3

    The irony of Leahy explaining to Joseph how the less intelligent agent is helpless, while Joseph is missing all the points being made... man🤣

  • @JOlivier2011
    @JOlivier2011 Рік тому +6

    You just know Connor (rightfully) walked away from this even more concerned / sure of doom.

    • @appipoo
      @appipoo Рік тому +1

      Probably not. This debate made him look good which translates to a small increase in support for his side which means probability of doom goes down.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety. Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

    • @jordan13589
      @jordan13589 Рік тому +1

      At first I was like, “What! I don’t remember leaving a root comment in this thread.” 😂
      And perhaps, but such a poor anti-safety defense could also be seen as encouraging. Unfortunately, public sentiment is leaning toward regulatory conspiracy theories: that regulatory efforts are only being made to protect big labs. This disingenuously misrepresents the AI-safety signatories and the momentum pushed by the alignment community. Open source is clearly more dangerous to anyone who's taken the time to engage the literature.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +1

      @@jordan13589 We can change the public sentiment. We can change the sentiment of our politicians. It requires writing to them, reaching out to journalists, etc. Probably more, too.
      Going to try to get funds for lobbying.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +1

      @@jordan13589 Liability would actually open a new market for safe, robust models and increase competition, since smaller companies that have less compute could compete with more reliable, less risky models.

  • @michaelsbeverly
    @michaelsbeverly Рік тому +2

    "After it can extinguish humanity and has PROVEN it, then we should act."
    Um, yeah, last man standing, turn out the lights.
    Pretty much, humanity is done...

  • @chillingFriend
    @chillingFriend Рік тому +2

    Do as many of these debates as possible, that's really valuable. And hopefully will change some people's perspective.

  • @BestCosmologist
    @BestCosmologist Рік тому +2

    It's not JJ. It's the position.

  • @superagi_100
    @superagi_100 Рік тому +4

    50:06 Joseph Jacks' argument right here is:
    1) If Humans let Super Intelligences do "whatever they want", they won't try to kill all the Humans (because, they're automatically nice, I guess?)
    2) If Humans make any (even feeble) attempts to protect themselves from Super Intelligences, then the Super Intelligences can and will try to kill all the Humans.
    3) Human should definitely build Super Intelligences and let them do whatever they want... what could go wrong? yolo!
    Is this proof that only intelligent life favors self-preservation?

  • @Seehart
    @Seehart Рік тому +7

    Joseph is defending the null hypothesis, so Connor should have started. This blunder made the first round exceedingly cringeworthy, because Joseph was not addressing Connor's position at all. If you open a debate with the null hypothesis, you are setting up for strawman arguments.
    A better structure would have been for Connor to have the first round, Joseph would then steelman Connor, and then rebut Connor's points.
    I recommend just skipping the first 17 minutes. Joseph's opening was useless because it was entirely strawman, not by design but because that is what is available when you open with the null hypothesis and are clueless about the position of your opponent.
    I was a little disappointed that Connor didn't quite state a functional definition of AI and AGI because I know he can. He hinted at it, but wasn't explicit enough.
    Here's what I use:
    AI: the ability to perform objectively measurable tasks which when performed by humans require intelligence.
    AGI: the ability to perform the set of all such tasks at or above typical human capability. Progress toward AGI consists of capture of that set.
    AGI Superintelligence: Just scale up AGI, that's all. This is the most trivial of the three. 1000x per year without breaking a sweat.
    I liked Connor's question: "name the least impressive task that you don't think AI can do." This is beautiful, but requires a common ground such as the above definition of AGI. Alternatively, What part of AGI is out of reach for the foreseeable future.
    There is a perfectly good answer to this, by the way: "Please design an LLM model architecture that is more capable of achieving AGI progress. Implement it and train it."
    Another: in response to "let's play a game of chess", ChatGPT5 silently reasons that it doesn't really know how to play chess, and needs a chess engine to remedy this shortcoming. Proceeds to implement or download a chess engine such as Stockfish, spawns a process to run it, and proceeds to play.
    Personally, I don't think those are very far out of reach. When accomplished, we are pretty darn close to AGI.
    It was really frustrating that Joseph was unable to wrap his head around extremely basic concepts such as scalability. E.g. if you can build one von Neumann intelligence then it follows that you can trivially make a hundred that can work cooperatively.
    The nuclear threat analogy is pretty compelling, but Joseph couldn't wrap his head around the notion that open source AGI has more destructive power than nuclear warhead design plans, or that the cost of the necessary hardware for the former is exponentially decreasing by around 1000x each decade, and is therefore in reach of far more individuals than Plutonium ever was.
    Joseph thinks the biggest threat of AGI is maybe hacking the Tesla FSD fleet. Disappointing.
    I hope Connor can find a more worthy adversary next time. And please: Connor needs to do the opening round.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety.
      Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

    • @Seehart
      @Seehart Рік тому

      @@kabirkumar5815 Sounds like a random, uninformed, overgeneralized assumption about the motivations of rich people, based on envy. Actually, tech billionaires are often motivated by factors other than money, since in such cases money is usually a side effect of successful geekiness in pursuit of a technical challenge rather than greed. I understand not liking rich people, but not liking someone isn't a sound basis for assigning motivation.

    • @jordan13589
      @jordan13589 Рік тому +1

      Possibly the best comment in this entire thread. Well said!

    • @BGergley
      @BGergley Рік тому

      " I liked Connor's question: "name the least impressive task that you don't think AI can do." This is beautiful, but requires a common ground such as the above definition of AGI. Alternatively, What part of AGI is out of reach for the foreseeable future.
      There is a perfectly good answer to this, by the way: 'Please design an LLM model architecture that is more capable of achieving AGI progress. Implement it and train it.' "
      Ok, but is that really the *least* impressive thing you think a large tech company with state-of-the-art AI could not get an AI system to do within 6 months? Because that actually sounds quite impressive and most humans would not be able to do that.
      On the other hand:
      " Another: in response to 'let's play a game of chess', ChatGPT5 silently reasons that it doesn't really know how to play chess, and needs a chess engine to remedy this shortcoming. Proceeds to implement or download a chess engine such as Stockfish, spawns a process to run it, and proceeds to play. "
      If you are just interested in this narrow task, I think it would probably take well under 6 months to fine-tune a large language model to recognize when the user is asking it to play chess. And then it should be fairly straightforward to have the system carry out a chess-engine-accessing subroutine (and giving it access to a browser or specialized API or something) so that it can use the chess engine to generate its moves when playing against the human.

    • @Seehart
      @Seehart Рік тому

      @@BGergley Right, I guess it's not clear, but I don't mean the specific narrow task of recognizing the keyword "chess" and proceeding accordingly with a prearranged solution. I could write that code today with a simple wrapper around the GPT-4 API (a rough sketch of such a wrapper follows below). Maybe I will, because it would be a fun practice exercise. But that's not what I'm talking about.
      I meant the more general ability to self-diagnose or predict a shortcoming such as the inability to play chess, and then find and implement an effective strategy for addressing that shortcoming that is applicable to an AI, such as downloading a chess engine instead of, say, reading books about chess (which would be what a human might do, but ineffective for a machine).
      The two answers I gave were tasks that are currently just out of reach and would be indications of significant progress toward AGI (the generalized ability to accomplish tasks which, when done by humans, require intelligence).
      Connor's debate question can be roughly restated as "what's the smallest brick wall that will prevent AGI from being achieved in the near future?" I'm not a good person to answer that, since I think AGI will be achieved in the near future. But to play devil's advocate, the answers I provided are a couple of tasks that I would pick if I were steelmanning the case against AGI.
      Actually, in order to believe AGI is unreachable, or a long way off, requires a lack of sufficient imagination to fully understand the nature of Connor's question, or a lack of awareness of recent progress. Any coherent attempt at an answer to Connor's question would be a task that is, in fact, reachable in under a decade. So his question is a checkmate move. But checkmate only applied to an opponent who knows how to play chess. You can't checkmate a cat.
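      A minimal sketch of the narrow wrapper described above, not the general self-diagnosing ability the comment is actually about. It assumes the `openai` Python client (v1+), the `python-chess` package, an OPENAI_API_KEY in the environment, and a Stockfish binary at a path you supply; the keyword check, model name, and time limit are illustrative placeholders, not anything from the debate.

```python
# Sketch: route "play chess" requests to Stockfish, everything else to a chat model.
import chess
import chess.engine
from openai import OpenAI

STOCKFISH_PATH = "/usr/bin/stockfish"  # assumption: point this at your install
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def looks_like_chess_request(text: str) -> bool:
    # Crude keyword trigger; the comment's point is that a truly general system
    # would notice this shortcoming and fetch an engine on its own.
    return "chess" in text.lower()


def chat_reply(prompt: str) -> str:
    # Anything that is not a chess request goes straight to the chat model.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def play_chess() -> None:
    # Delegate actual play to Stockfish instead of letting the LLM guess moves.
    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
    try:
        while not board.is_game_over():
            try:
                board.push_san(input("Your move (SAN, e.g. e4): "))
            except ValueError:
                print("Illegal or unparsable move, try again.")
                continue
            if board.is_game_over():
                break
            reply = engine.play(board, chess.engine.Limit(time=0.5))
            board.push(reply.move)
            print(f"Engine plays {reply.move.uci()}\n{board}\n")
    finally:
        engine.quit()
    print("Game over:", board.result())


if __name__ == "__main__":
    user_input = input("> ")
    if looks_like_chess_request(user_input):
        play_chess()
    else:
        print(chat_reply(user_input))
```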

  • @user-kx7eu9lt3n
    @user-kx7eu9lt3n Рік тому +4

    I see people are saying this was frustrating/uncomfortable, but that's part and parcel of debates. People clash deeply held beliefs, intuitions, ideas, not always in the most eloquent ways and the outcome is rarely a "conversion". We arrive at the truth not by having one perfect exchange where everything just clicks and there are fireworks, but many imperfect ones, stumbling forward. I think Connor and Joseph had an honest discussion, that's a great start in this day and age where most debates are simply about "owning" the other side. Hoping for a lot more of these.

    • @ArielTavori
      @ArielTavori Рік тому +3

      This is true, but sometimes what somebody needs is a teacher or a textbook, not a platform to debate an actual authority on the subject.

    • @blevinsm
      @blevinsm Рік тому

      @@ArielTavori yes. Yes. Yes.

    • @sjufan84
      @sjufan84 Рік тому +2

      I agree that it was great to have a conversation / debate about this, and we should have more, but this was not what you would hope for in terms of equality of expertise or deep thinking on the topic. I think it is fair to expect someone like Joseph to have much more coherent and thoughtful positions when discussing a topic of such urgency and importance.

    • @flickwtchr
      @flickwtchr Рік тому +1

      Had Jacks behaved more fairly, I could have remained more "comfortable" while disagreeing with his points, whatever they were.

    • @thevenomous1
      @thevenomous1 Рік тому

      ​@@flickwtchr On the one hand time was quite unfairly distributed. On the other hand we've already heard Connor talk quite a bit on the channel. I'm mildly frustrated.

  • @AngusTatchell
    @AngusTatchell Рік тому +4

    Suggestion: Debate participants need to establish common definitions of terms before proceeding with arguments. Would be great to see this live, but at very least should be done offline before and summarised for viewers before opening statements.

  • @EvilXHunter123
    @EvilXHunter123 Рік тому +5

    Joseph’s argument: “I can’t really imagine how something much smarter than me could achieve X task, therefore X task is impossible.”
    The whole point is that it’s smarter than you!! I can’t imagine exactly how Stockfish will beat me…

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety.
      Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

    • @EvilXHunter123
      @EvilXHunter123 Рік тому

      @@kabirkumar5815 agree, I’ve already written to my MP

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      @@EvilXHunter123 Fantastic! What did you write about? And have you heard back yet?

  • @maciekjanicki1754
    @maciekjanicki1754 Рік тому +2

    I wish Tim would be part of this discussion!!

  • @EvilXHunter123
    @EvilXHunter123 Рік тому +2

    Really wish this had been moderated; such a good opportunity was missed, and Connor barely got to talk.

  • @luke2642
    @luke2642 Рік тому +2

    Jacks is more afraid than Connor. Jacks's fear is losing his Second Amendment rights, which he believes in all the way to everyone having nukes. He's terrified of losing "freedom of compute" as a parallel to freedom of thought. For him it's an emotional, illogical, philosophical fear, a fear of being emasculated. A fear of the government watching his computer.

  • @jamesneutron2690
    @jamesneutron2690 Рік тому +2

    I don't know whether it was a lack of presentation skills, anxiety or if JJ just didn't come prepared, but his arguments were unclear and all over the place. It's a shame because I want to hear some genuinely sound anti-doomer arguments on the subject.
    But in the end I spent most of this listening to JJ trying to inflate the importance of human-like intelligence, sidestepping whether a simulated version at or above our level can pose a threat, as if pointing out that it's fundamentally different from us somehow refutes its ability to cause catastrophic damage to society long term. Only to go on to show that his understanding of Connor's side of the argument was painfully shallow.
    This is an important topic still, thank you to Joseph and Connor for taking the time to discuss each side.

  • @Gruntcore23
    @Gruntcore23 Рік тому +6

    Incoherent one-way babble. A moderator was needed.

  • @luciwaves
    @luciwaves Рік тому +5

    The argument ending at 40:24 was so powerful that the amount of hopium required to try to weasel out of it gave me a contact high.

    • @ArielTavori
      @ArielTavori Рік тому

      Lol, this is about when I started skipping through JJ's statements.

    • @blevinsm
      @blevinsm Рік тому

      Yeah this is rough

    • @ChristopherWentling
      @ChristopherWentling Рік тому +1

      A lot of people will die but that isn’t a reason to stop it… what?

    • @ChristopherWentling
      @ChristopherWentling Рік тому +1

      What if a remote control for a TV had a 1-in-100 chance of killing you? Should we keep it on the market for its utility?

    • @spirit123459
      @spirit123459 Рік тому +1

      @@ChristopherWentling AFAICT he is talking about trade-offs: we still build cars despite many people dying because of them, because the benefits cars provide to us are greater than the harms they cause.

  • @zoomingby
    @zoomingby Рік тому +5

    Is this Jacks guy technically proficient enough in this stuff to sit opposite Connor and debate the other side capably? He's intelligent, but I'm not so sure intelligence without a deep technical understanding suffices here.

    • @rjohn4143
      @rjohn4143 Рік тому +2

      Complete lack of technical knowledge, and little sign of intelligence either.

  • @consistent1
    @consistent1 Рік тому +5

    AI-Descartes, which combines symbolic regression, logic, and statistical chicanery in order to achieve impressive results, was used to re-discover Kepler's third law of planetary motion, Langmuir's gas behavior equations, and Einstein's time-dilation formula (sort of). It uses a set of (mathematical) first principles in conjunction with empirical data (see the toy sketch below). IMHO, it qualifies as 'an AI building something from first principles'.
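    This is not AI-Descartes itself, just a toy stand-in for the "fit relationships to empirical data" half of that pipeline: a log-space least-squares fit that recovers the exponent of Kepler's third law from approximate textbook planetary values. The data, the assumed power-law form T = k·a^n, and the variable names are all illustrative assumptions; the real system searches over symbolic expressions and checks them against axioms.

```python
# Toy illustration: recover Kepler's third law exponent (T^2 ~ a^3) from data.
import numpy as np

a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])    # semi-major axis (AU)
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])  # orbital period (years)

# Assume T = k * a^n, so log T = n * log a + log k; fit a line in log space.
n, log_k = np.polyfit(np.log(a), np.log(T), 1)
print(f"fitted exponent n ~= {n:.3f} (Kepler's third law predicts 1.5)")
```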

  • @atillacodesstuff1223
    @atillacodesstuff1223 Рік тому +5

    may ~~god~~ Elua give Connor strength and vitality.

  • @andybaldman
    @andybaldman Рік тому +3

    This is so bad I had to bail. Debates are a great idea. Please just get better guests. There are many you could do, that actually have the potential to move the field forward.

    • @flickwtchr
      @flickwtchr Рік тому +2

      Connor was just fine, he IS trying to move the field forward.

  • @edvinbeqari7551
    @edvinbeqari7551 Рік тому +3

    I would have started by establishing the capabilities of these models. I think that would have been a good baseline to build on. I would have been very interested to hear more about that.

    • @41-Haiku
      @41-Haiku Рік тому +1

      Yes, splitting the debate into individual points and tackling only one at a time per debate would be fantastic.

    • @thevenomous1
      @thevenomous1 Рік тому +1

      @@41-Haiku Yeah, the way it was, it wasn't really a debate. I was listening to "arguments" based on wishful thinking for three quarters of an hour. On the other hand, I've already listened to Leahy quite a bit, so it's not *that* sad he didn't get so much time this time around.

  • @aleahthinks
    @aleahthinks Рік тому +1

    Humans are so thick-headed! This is a clear representation of the range of human intelligence. We cannot conceive, with our simple minds, what will happen.

  • @Seehart
    @Seehart Рік тому +2

    I agree with Connor that AGI is an existential risk. Seems pretty obvious to me. On the other hand, AGI might be the best protection against all the other existential risks.
    I disagree that regulation is useful. We've already passed the point of no return. Availability of the necessary ingredients is already ubiquitous, and anyone with moderate funding can create an offshore AGI lab.
    If we slow down AGI, then AGI will arrive at a time when hardware is cheaper. Thus the delay (time offset) actually makes AGI more dangerous, not less.
    Regulation would only slow down the set of organizations wise enough to seriously address the alignment problem.
    The only way to apply meaningful regulation would be to get to AGI before everyone else, use that to take over the world, and then establish an extremely repressive iron fist government to prevent unlicensed AGI research. Every computer would have to be locked down.
    Unfortunately, that path is a prisoner's dilemma scenario, and likely would meet with competent opposition, perhaps leading to annihilation. So really not a fan of the domination strategy.
    I think our best option is to put as much effort as possible into the alignment problem, disrupt obsolete cultural technologies such as Labor, Money, and Politics, and realize we are in an unmitigated race.

  • @amule1154
    @amule1154 Рік тому +2

    Thanks to all of you for the convo. I am in the Connor camp on this, but only time will tell... and destiny rules.

  • @missshroom5512
    @missshroom5512 11 місяців тому +1

    We just gotta stay positive and roll with it🌎☀️💙

  • @zoomingby
    @zoomingby Рік тому +2

    So Mr. Jacks believes that prior to actually developing AGI, we can make meaningful assumptions about what it'll be like once it's here, and based on a lack of current-day evidence (for something that has yet to arrive) disagrees that AGI could be the end of humanity. Can we really put ourselves in a situation where we build it first, and then hope it all works out?

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      I'm not willing to bet my kids' lives on that.

  • @paigefoster8396
    @paigefoster8396 Рік тому +2

    Call it extinction risk. Not "existential" risk. Say it like it is: when the AI folks talk about this, existential risk means extinction risk. We should call it that, otherwise it is obfuscation (probably intentional).
    Average people know what extinction means; the word "existential" confuses many people, even those who think they know it.
    Call it extinction risk.

  • @waakdfms2576
    @waakdfms2576 Рік тому +4

    Well done - great start gentlemen! Please have more conversations like this! What a refreshing and critical change from the superficial (and sometimes dangerous and damaging) tweets and short spurts posted online. Humanity owes itself the honor of deeper dives, patience, and sincere listening. It's really not optional. It's not just a game - our lives truly depend on it. Thank you MLST-!!

  • @susieogle9108
    @susieogle9108 Рік тому +1

    Dang it, I missed this being live by an hour.

  • @richardnunziata3221
    @richardnunziata3221 Рік тому +2

    AGI alignment with humans is a joke and will lead to conflict. Humans are the last creatures I would want AI to align with. Just look at the world.

    • @chrisreed5463
      @chrisreed5463 Рік тому +2

      So true. And as Joseph pointed out, humans aren't aligned. How are my interests aligned with those of a single mother on benefits? She reasonably wants the wealthy to be taxed more to pay for her support. The wealthy want to pay less. I'm excited by AI; my staff and colleagues are mostly scared by it.
      Where is the human alignment?

  • @t.6003
    @t.6003 Рік тому +1

    Terrans versus Cosmists - the debates might shape up like Hugo de Garis described in his book "The Artilect War"

  • @DaeOh
    @DaeOh Рік тому +1

    How frustrating. Meanwhile some smart and well-read kids are out here working at fucking Best Buy

  • @goldmine4955
    @goldmine4955 Рік тому +2

    Jacks would bomb the forest to save the tree.

  • @_obdo_
    @_obdo_ Рік тому +3

    Many thanks to the host and participants - well done!
    It seems like a recurring sticking point in these discussions is a plausible mechanism by which AIs could do "enough" harm. I'll toss out the one I've been mulling over, for your reactions.
    We’ve seen how Q-anon has been able to establish influence over a large group of people. Whoever controls that group could certainly inspire people in the group to take various actions.
    So imagine thousands of Q-anon-like groups, each tuned to the psychological makeup of some population subgroup, all under the control of a unified intelligence that can direct personalized attention and build relationships at massive scale. What could such an intelligence accomplish in the world? How many years are we from that being possible?

    • @ArielTavori
      @ArielTavori Рік тому

      If you don't think authoritarian regimes and casual sociopaths are already experimenting with similar approaches, I would suggest that is unwarranted optimism at this point.

    • @_obdo_
      @_obdo_ Рік тому +3

      @@ArielTavori Many optimists seem stuck on the robotics or even the pathogen scenarios. So I’m wondering if a purely language-based scenario can flip the switch for anyone. After all, that’s all the authoritarians of the world have ever needed…

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      @@_obdo_ Considering chatbots have already convinced people to kill themselves, I think it's very concerning.

  • @unreactive
    @unreactive Рік тому +4

    Although I'd genuinely prefer not to be concerned about unavoidable AI x-risk, the only compelling arguments *against* high p(doom) that I have encountered so far are from Joscha Bach and Jonathan Gorard. Hope to see them featured on the channel.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  Рік тому +10

      We have had Bach on, and happy to host both with Connor!

    • @ChristianSchoppe
      @ChristianSchoppe Рік тому +1

      ​@@MachineLearningStreetTalk That would definitely be a video that would move the discussion forward. I hope it will be made possible.

    • @diplodocus462
      @diplodocus462 Рік тому +3

      @unreactive where can I find a video of either of these two making the counterargument? Or can you summarise the argument? I'm grasping at straws here.

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому +1

      @@diplodocus462
      "where can I find a video of either of these two making the counterargument?"
      I also wanna know.
      I've listened to quite a few interviews with Joscha and do not remember him expressing arguments against existential risk. On the other hand, they were recorded before the release of ChatGPT.

  • @LaserGuidedLoogie
    @LaserGuidedLoogie Рік тому +1

    Joseph isn't even making an argument, he's just making some baseless assertions and personal observations that basically add up to an incoherent ramble.

  • @flickwtchr
    @flickwtchr Рік тому +2

    Joseph Jacks should be renamed Strawman Generative Absurdist "Intelligence". Wow, that was painful. Connor was unbelievably patient in the face of Jacks's steamrolling. When he pointed to the post-WW2 non-use of nuclear weapons as an example to validate his assertions against regulation, I could feel my cheeks turn bright red with embarrassment for him.

  • @andybaldman
    @andybaldman Рік тому +2

    I'm ten minutes in, and have yet to hear anyone make any sort of point. But there sure is a lot of stuttering.

  • @markupton1417
    @markupton1417 5 місяців тому

    I thought that was a great observation around 30 minutes in, when he said, "with a detailed prompt, you're really giving it the answer". Didn't need to quote... I was paraphrasing.😮

  • @zzzaaayyynnn
    @zzzaaayyynnn Рік тому +1

    Got a little heated at the end. Jacks did his best, I guess. He was 1) outside his tech/intellectual comfort zone, 2) mostly concerned with techno-capitalist outcomes, 3) less prepared with his arguments, and 4) disingenuous, I'm sure.

    • @appipoo
      @appipoo Рік тому +1

      It's hard to win an argument when you are wrong, I guess😅

  • @XOPOIIIO
    @XOPOIIIO Рік тому +2

    I didn't get it: why would AGI need to kill all humans simultaneously? If it had already taken control of the systems, it could kill us over any time span, in any order.

  • @ScottFleckenstein
    @ScottFleckenstein Рік тому +1

    When a debate comes to a head you aren't allowed to just say "Come on, JJ, come on" as a response and still be considered a charitable participant. More evidence Connor Leahy isn't someone to listen to when it comes to understanding AI risks and safety.
    The way to be charitable in the debate is to acknowledge JJ's points, ask good questions -- as opposed to the thought-games masked as questions that occurred in the conversation -- to elucidate the exchange, and then make a counterpoint. It's clear Connor thinks he had nothing to learn in the conversation and was looking to "win" above being a charitable participant.
    Hey Connor, here's a question in the vein of your questioning style that I'll put to you and the pessimists in the group. What is the most likely scenario you can imagine for any AGI's first attempt at human extinction? By your measures AGI is just around the corner, so let's assume the world an AGI tries to exterminate will look a lot like it does today. Characterize the most likely end you see for us; give us some progression of the scenario you envision that shows us how you see the end emerging.

  • @gutpunkt
    @gutpunkt Рік тому +2

    JJ was so bad at answering these questions lmao

  • @lystic9392
    @lystic9392 Рік тому

    I'm not so concerned about assumptions being made about the potential existential threat of an AGI being there. I'm more concerned about any assumptions being made about there not being an existential threat.

  • @CodexPermutatio
    @CodexPermutatio Рік тому +2

    This was great. Thanks.

  • @heltok
    @heltok Рік тому +1

    It sounds too crazy to be true, so I am not gonna think too much about it, but I will argue against it and come up with some cute arguments along the way and see what sticks.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety. Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

  • @zoomingby
    @zoomingby Рік тому

    Famous last words: "we'll be fine."

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety.
      Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

  • @AngusTatchell
    @AngusTatchell Рік тому +1

    I think one of the key points of contention that ought to be examined further is:
    how an ASI agent manages to (a) develop existentially “misaligned” goals without being caught, (b) generate feasible action plans to successfully achieve its misaligned goals, and (c) acquire enough resources to successfully execute its plans without being stopped before human extinction.
    An ASI agent should be able to achieve B if B is possible - that is effectively the definition of an ASI agent. (ChaosGPT has A but not B, and therefore no feasible chance of achieving C.)
    So the focus of this discussion really ought to be on (A) how could ASI agent misalignment occur, and (C) how could misaligned ASI agents acquire control of existentially risky quantities of existentially risky resources.
    I also think successful alignment will actually need to identify and control every possible vector by which C can occur. Failure to prevent A is still existentially permissible if at least C remains sufficiently preventable.
    I think the X risk community have done a good job of demonstrating how A can occur (misaligned instrumental goal convergence), but I think most skeptics counter argue that C feels so beyond the scale of what is currently computationally possible that they are not yet concerned. The X risk argument does need to map out plausible (ie. “feasible” AND “viable”) vectors by which C can occur (like Rob Miles started doing on here).
    (obviously skeptics don’t care if elucidating this raises X risks. This is probably a necessary cost valley that must be crossed to make a credible argument, and to make safety measures more conceivable. You can’t design safety mechanisms without understanding the exact risks they are mitigating.)

    • @flickwtchr
      @flickwtchr Рік тому

      How many resources, for example, would it take to develop and unleash a manufactured, extremely virulent and deadly virus? Or multiple such viruses simultaneously? It seems like just a tiny fraction of the resource output of the planet would be needed, no?

  • @rufus525
    @rufus525 Рік тому +1

    Connor can clearly see the explosive exponential trends and the dangerous unknown territory just ahead. Joseph is a typical linear thinker.

  • @XOPOIIIO
    @XOPOIIIO Рік тому

    Language models do not work from first principles because of the nature of the task, where they have to predict the next token instead of formulating logic of their own. But in other tasks, like playing games, they actually work from first principles.

  • @MrMick560
    @MrMick560 5 місяців тому

    This was so painful to watch; I can't understand how this was allowed to pass as a debate.

  • @jurycould4275
    @jurycould4275 6 місяців тому

    The least impressive thing humans can do, but LLMs can't and never will, is immediately recognize that they were lied to if they were taught that 1 + 1 = 3.

  • @Me__Myself__and__I
    @Me__Myself__and__I Рік тому +2

    Given time we could find a solution to do AGI safely but that is not the path we are on.
    1) No other invention in history is comparable, not the wheel not the industrial revolution and not even nukes. This is because never before have we created something that could think for itself and potentially act without human involvement.
    2) The logic behind why an AGI could end up causing great harm to humanity is very sound if you understand how they are developed and their architecture, and are rational. They don't need malicious intent.
    3) Once AGI exists it can do any work a human could do, which includes AI research and improving / creating the next generation of AI. This begins a feedback loop.
    4) AGI will get faster every year or so as computer chips improve (Moore's Law). AGI will also get more intelligent/capable with each new generation. So each new generation will be able to create the next generation faster. This is mostly software so there are few limits to how fast it can scale. 2x, 4x, 8x, 16x, 32x, etc. This is how ASI is born and how progress becomes truly exponential. The "singularity" likely occurs 18 to 36 months from the point AGI is created (a toy geometric-series illustration follows at the end of this comment). It could be faster if the ASI is able to come up with algorithmic or architectural improvements that significantly speed up progress. There is no way to predict if that would happen.
    5) An Artificial Super Intelligence that is hundreds or thousands of times smarter than any human and runs vastly faster (so that we are effectively in slow motion) could do anything it wanted to the human race, and we could not stop or control it if it was not PERFECTLY ALIGNED from the beginning.
    The logic is very sound and the reasoning very rational if you actually understand the details and aren't closed minded.
    But even if someone did not understand AI or could not follow the logic here is what I simply can not understand. If there is even a small chance this COULD be true, why not take it seriously and be cautious until we can do more research to ensure it will be safe. WHY NOT?
    There certainly are A LOT of smart people saying the risks are real. This includes many of the top experts in the field, the individuals who invented these technologies and the lead researchers at the companies on the cutting edge. Never before have I ever heard of such people calling so loudly for caution. People who are leading the charge begging to be regulated and slowed down.
    To actively push against caution when the potential risks are so great is bluntly stupid. Either that or it's motivated by raw greed and a disregard for the future of humanity.
    Let us all remember that many CEOs, politicians and investors are sociopaths. They literally don't care about anyone but themselves and have no empathy. Let's not listen to these people and accelerate AGI development to the potential peril of all humanity. Let's be rational and cautious. The worst thing that happens from being cautious is that it takes longer for progress to occur, but it will still occur. Rushing progress doesn't do us any good if we all end up dead.
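    The toy illustration referenced in point 4 above: if the first AGI generation takes 12 months of work and each successive generation is built twice as fast as the last (2x, 4x, 8x, ...), the total elapsed time converges to 24 months, roughly inside the "18 to 36 months" window the comment gestures at. The 12-month seed and the clean 2x speedup are assumptions chosen for illustration, not predictions.

```python
# Toy geometric-series model of recursive self-improvement timelines.
def months_until_generation(n_generations: int, first_gen_months: float = 12.0) -> float:
    """Cumulative time when each generation takes half as long as the previous one."""
    return sum(first_gen_months / (2 ** g) for g in range(n_generations))

for n in (1, 2, 5, 10, 50):
    print(f"after {n:>2} generations: {months_until_generation(n):.2f} months")
# The series 12 + 6 + 3 + ... approaches 24 months but never exceeds it.
```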

    • @kabirkumar5815
      @kabirkumar5815 Рік тому

      Please consider writing to your local politician to make companies legally liable for their AIs. Currently they aren't, which gives them very little incentive to actually invest in solving the real problems in AI safety.
      Unfortunately, billionaires and board members care more about lawsuits and market share than the nerds who work for them saying the product could be dangerous.

    • @Me__Myself__and__I
      @Me__Myself__and__I Рік тому

      @@kabirkumar5815 Um, have you done your research? What you describe does not reflect the industry, except for Google, which is actually lagging behind. Legislation is needed, but you seem to be using pre-made anti-capitalism cookie-cutter logic without actually understanding the details.

  • @roccoapp00
    @roccoapp00 Рік тому

    Depth of abstraction is the road to AGI and alignment (deepest abstraction = self-alignment)

  • @Hexanitrobenzene
    @Hexanitrobenzene Рік тому

    Hey, Tim!
    I have a suggestion: maybe before a live round of debate (especially between two fundamentally disagreeing persons) there should be a written round ?
    Each person would state their arguments in, say, up to 3 A4 pages and then each would respond to opponent's statements. That would help understand the opponent's position deeper, prepare better arguments for each position and, most importantly, establish common ground.
    I am sad to say that this debate was not productive, mainly due to the wildly differing levels of preparation of the guests. Also, self-moderation doesn't work, and it works even less the more the participants disagree.
    I understand it would require more time from the guests and would delay scheduling a video, but I really think it's worth it.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  Рік тому

      My friend, we have 2 follow-ups planned already. Let's just say you won't be disappointed.

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому

      @@MachineLearningStreetTalk
      Ha, I was promoted to friends already! :)
      Good to know. I'll wait with great interest.
      Don't overheat the "processor" in your head, you seem to be working on overclock ! :)

  • @paigefoster8396
    @paigefoster8396 Рік тому +3

    So much electricity is needed to train the models that it would be easy to detect and regulate. Easy.
    Edit: I use the word "regulate" in the sense that includes cessation of development.

    • @CodexPermutatio
      @CodexPermutatio Рік тому +5

      For now. In the future, they could be as energy efficient as the brains of humans and other animals.

    • @Bartskol
      @Bartskol Рік тому +4

      What if you can do harm with a model that you can train on an expensive consumer graphics card? How would you regulate that then?

    • @paigefoster8396
      @paigefoster8396 Рік тому

      ​@@Bartskol Good question!! The only ideas I can think of are SO freedom killing, they are almost as scary as something 100x smarter.

    • @headmetwall
      @headmetwall Рік тому +4

      Distributed computing: there are people still mining crypto at a loss; now imagine a similar group thinking they are contributing toward making a godlike AI that will fix the world's problems.

    • @kabirkumar5815
      @kabirkumar5815 Рік тому +2

      We can detect them now and still fail to regulate them.

  • @jonmichaelgalindo
    @jonmichaelgalindo Рік тому

    "Some people have good reason not to be concerned." And then you listen to one and it's like... I am very concerned now. :-|