The AI Alignment Debate: Can We Develop Truly Beneficial AI? (HQ version)

  • Published May 16, 2024
  • Patreon: / mlst
    Discord: / discord
    George Hotz and Connor Leahy discuss the crucial challenge of developing beneficial AI that is aligned with human values. Hotz believes truly aligned AI is impossible, while Leahy argues it's a solvable technical challenge.
    Hotz contends that AI will inevitably pursue power, but distributing AI widely would prevent any single AI from dominating. He advocates open-sourcing AI developments to democratize access. Leahy counters that alignment is necessary to ensure AIs respect human values. Without solving alignment, general AI could ignore or harm humans.
    They discuss whether AI's tendency to seek power stems from optimization pressure or human-instilled goals. Leahy argues goal-seeking behavior naturally emerges while Hotz believes it reflects human values. Though agreeing on AI's potential dangers, they differ on solutions. Hotz favors accelerating AI progress and distributing capabilities while Leahy wants safeguards put in place.
    While acknowledging risks like AI-enabled weapons, they debate whether broad access or restrictions better manage threats. Leahy suggests limiting dangerous knowledge, but Hotz insists openness checks government overreach. They concur that coordination and balance of power are key to navigating the AI revolution. Both eagerly anticipate seeing whose ideas prevail as AI progresses.
    Transcript and notes: docs.google.com/document/d/1s...
    Pod: podcasters.spotify.com/pod/sh...
    TOC:
    [00:00:00] Introduction to George Hotz and Connor Leahy
    [00:03:10] George Hotz's Opening Statement: Intelligence and Power
    [00:08:50] Connor Leahy's Opening Statement: Technical Problem of Alignment and Coordination
    [00:15:18] George Hotz's Response: Nature of Cooperation and Individual Sovereignty
    [00:17:32] Discussion on individual sovereignty and defense
    [00:18:45] Debate on living conditions in America versus Somalia
    [00:21:57] Talk on the nature of freedom and the aesthetics of life
    [00:24:02] Discussion on the implications of coordination and conflict in politics
    [00:33:41] Views on the speed of AI development / hard takeoff
    [00:35:17] Discussion on potential dangers of AI
    [00:36:44] Discussion on the effectiveness of current AI
    [00:40:59] Exploration of potential risks in technology
    [00:45:01] Discussion on memetic mutation risk
    [00:52:36] AI alignment and exploitability
    [00:53:13] Superintelligent AIs and the assumption of good intentions
    [00:54:52] Humanity’s inconsistency and AI alignment
    [00:57:57] Stability of the world and the impact of superintelligent AIs
    [01:02:30] Personal utopia and the limitations of AI alignment
    [01:05:10] Proposed regulation on limiting the total number of flops
    [01:06:20] Having access to a powerful AI system
    [01:18:00] Power dynamics and coordination issues with AI
    [01:25:44] Humans vs AI in Optimization
    [01:27:05] The Impact of AI's Power Seeking Behavior
    [01:29:32] A Debate on the Future of AI
  • Science & Technology

COMMENTS • 378

  • @kyneticist
    @kyneticist 9 months ago +19

    The analogy of living in Somalia is a Libertarian thought experiment. The answer is that as Hotz suggests, libertarianism views everyone as equally dangerous, capable and autonomous (anyone who falls below the average should naturally be prey). It also abdicates social responsibility and eschews social customs, which are key instruments that humans use for alignment.
    In a world of extreme libertarianism, the only measures of alignment are short-term, existential/subsistence goals that can pivot on even very little information. Individuals are free to be as individual as they want, but they also must be, and each must constantly contend with existential risks alone. In this world, nobody is aligned (other than by chance or short-term deals). The Common Sense Skeptic covers libertarian scenarios far better than I can.
    Whether he understands it or not, he chooses the country that employs a range of instruments that try very hard to ensure alignment, even though there are many examples of these tools being taken to extremes or abused. Living in an aligned world means that individuals share responsibilities and workloads.
    The communist/socialist outlook (of great to extreme alignment) is the opposite: it promises a better world, but only if everyone strictly abides by the rules and the intent of the rules. Such systems are also very vulnerable to their instruments and goals being abused or twisted over time.
    Building a spaceship to escape the world only works in an aligned world... even if an AGI hands out STCs, the resources required are still only obtainable via cooperation and moderate alignment.
    That would also be an extreme investment for the AGI in a single biological entity. A given individual (other than an AGI) physically can't be proficient in the many necessary skills or sciences, let alone perform the physical labour.
    Fast take-off doesn't necessarily need to be a magic algorithm, it just needs a path of self improvement that a given AI can implement on itself. A self-improving AI doesn't need to contend with company structures or human limitations. We also may not be able to detect fast take-off if an AI/AGI has advance knowledge of the event.
    If we find ourselves in a situation where an AI is training on a humanity's worth of data within 24 hours, we're far beyond the point where humans are relevant. There's no point trying to address any given concern about AI/AGI _after_ it achieves a milestone like fast take-off.
    Dystopia would be awful. Extinction by most reasonable measures is the worst possible outcome (not including things like being uploaded into an eternal hell or crazy biological agents designed to maximise suffering).
    An AI arms race with spammers at an individual level is not a solution to spam. Spammers will use AI regardless, and with far greater proficiency than people who aren't interested in spam.
    You lose all credibility by claiming that owning an AI product or creating an AI spam filter solves alignment because you either bought or built it.
    Hotz's point about AIs/AGIs balancing out because they're mostly just good (i.e., just like people) is fatally flawed. Machine intelligences are not human or animalistic. Concepts like "good" and "bad" aren't relevant. They're not going to contest one another because of political or ideological differences, or because they have some tribal or familial association.
    Claiming that there are only 2-3 sovereign nations & that "the feds" want to kill everyone is too far into libertarian paranoia & delusion. The Joker-like solution of "Chaos is harmony" is insane.
    Hotz is utterly consumed by libertarianism & apparently either misunderstands or just doesn't comprehend why a limit on computing power is not about who has whatever value might amount to a controlling share.
    A world of suffering does not require malicious agents. Climate change will produce a world of suffering. Previous ice ages produced a world of suffering. The Siberian traps produced a world of suffering. The black plague produced what was essentially a world of suffering. If the first atomic bomb test had ignited the atmosphere, it may well have created a world of suffering.
    Open sourcing something does not guarantee that "everyone has it". Many valuable things are open source and have only one manufacturer or supplier. There's more to making things than having a recipe. If a car company open sourced everything about their cars, that wouldn't mean that I could just make one in my garage.

  • @DeanHorak
    @DeanHorak 9 months ago +74

    Those introductions sounded like something GPT would generate.

    • @xsuploader
      @xsuploader 9 months ago +19

      Probably were

    • @box-mt3xv
      @box-mt3xv 9 months ago +1

      That was v good imo

    • @Hexanitrobenzene
      @Hexanitrobenzene 9 months ago +6

      Yeah, incredibly flamboyant. Tim went over the top...

    • @MannyPE-oi7pb
      @MannyPE-oi7pb 9 months ago +3

      Sounded like he was worshipping a deity

    • @JohnDeacon-iam
      @JohnDeacon-iam 8 months ago

      Exactly - that was a great prompt effort. Worthy!

  • @sampruden6684
    @sampruden6684 9 months ago +34

    Hotz has a certain libertarian techbro view on morality which, for the sake of conflict avoidance, I will simply politely say that I don't agree with.
    His claims about preferring freedom over safety seem somewhat confused to me. In my opinion, they appear to be more ideologically driven rather than derived from rational thinking. This is evident in his use of the 'those who would give up freedom for security deserve neither' quote as if it were an uncontroversial fact. In my view, the only sensible position is to strike a balance between the two. Liberty without safety is meaningless; being in significant danger is a form of non-liberty, whether that danger comes from the "feds" or from fellow humans/AIs. Towards the end of the debate, Leahy praises Hotz for being consistent in his views, but I suspect he can maintain this consistency only because these views are purely theoretical for him. I doubt he would act consistently with the statement "the day I can't fend for myself, I'm ready to die" if that day actually came.
    His claim that everything works out okay as long as we have many competing AIs appears to be a somewhat naively ideologically derived libertarian talking point. There are two real-world analogues to this: a free-market approach to everything, and the ideology behind the US second-amendmentism.
    We already have an existential threat right now: climate change. That was brought about by the free(ish) market. There wasn't a significant short term financial incentive for any powerful entity to advocate for the climate, so it got trampled over. As individual people we may have cared and advocated for it, but despite ostensibly being in competition, the fossil fuel industry cooperated to suppress that via lobbying and misinformation. This real-world example serves as proof that free market competition does not reliably protect against existential risks or lead to a balanced alignment with humanity.
    If we treat individual personal AIs as likely to be significantly dangerous, then there's also an analogue to the viewpoint that the way to achieve safety is to have everybody carry a gun. Looking at the USA from the outside, it seems quite clear that this is a failed experiment. That is not a world that I want to live in. Safety by mutually assured destruction is not a comfortable existence.
    The claim that we can achieve safety by having AIs play defense for us is dubious. GeoHot, of all people, should know that attacking a system is much easier than defending it. A good defense requires building the perfect armour, but a good attack requires finding only a single chink in it. This is a game we already play in the world today, but the more we raise the power levels of all of the actors involved, the higher the stakes. Advances in military technology may have reduced conflict a little, but they've increased the danger posed by conflict when it does break out.
    I'm not a "doomer" and I don't claim to have any prediction about how this is all going to play out in the real world, but I think many of the arguments made here are bad, and I find the naive techbro arrogance with which they're stated quite off-putting, and perhaps dangerous.

    • @Hexanitrobenzene
      @Hexanitrobenzene 9 months ago +7

      Great points. I would say that his viewpoint is so extreme that it approaches anarchism...

    • @xfom4008
      @xfom4008 2 months ago

      As far as fossil fuels go - America can deal with the issue with solar, wind and nuclear and it will be fine. The No. 1 contributor to climate change isn't BP or Exxon or whatever the fuck - it's the very centrally planned China that forced industrialization upon itself, causing misery to its own population immediately and contributing a lot to climate change.

  • @lowelovibes8035
    @lowelovibes8035 9 months ago +48

    George Hotz didn't describe the world he wants to see, only what he would do with his super AI, without thinking about what the rest of the individuals would do with one at the same time as him. Instead, he imagines a world in which he gets the AI first and can build his ship with a time advantage.

    • @wonmoreminute
      @wonmoreminute 9 months ago +28

      It's interesting, right? I may be wrong, but if I had to guess, I'd say it's because he doesn't care. He says he would use AGI to build a ship and leave the planet as fast as possible, putting up a shield to block all communication. He did not say he'd use AGI to help others or to help society in any way. Or to fight against the tyranny he's worried about.
      To be fair, I listened to most of what he said with a biased perspective because he completely lost me when he said people have a chance in Somalia, but in America they don't. By no means would I say America, or any developed nation is without problems... but if he had any chance at all in Somalia, it would only be because he grew up in America, went to American schools, and lived in relative safety where he could nurture and develop his talent.
      I don't know him and I've never followed him, but my assumption is that he has a relatively good life compared to most people in Somalia. I don't know many people in Somalia starting AI companies and doing Lex Fridman podcasts.
      So it's pretty hard to square his comment about having a chance in Somalia but not in America, when it appears, at least as an outside observer, that America has given him quite a few chances.
      But going back to the point, and again... I could be wrong. I'd have to listen again. But at least once he said his worry was about others gaining god-like powers and exploiting "him", which is a legitimate worry. But I don't remember him saying exploiting "us", "others", or "society". And also multiple times he expressed a fatalist stance that in certain scenarios we're all just screwed anyway, which he said with a shrug.
      We are all the main characters in our own stories, I get that. And I'm only going off this one video, but this is not the way people like Connor, Eliezer, Mo Gawdat, Max Tegmark, and others talk about AI risk. They seem genuinely concerned about humanity, but from the way George Hotz talks about it, I'm not sure he does. Of course, the opposite may be true and his seemingly cold take on it might just be his way of dealing with the suffering he thinks is inevitable anyway.

    • @Hexanitrobenzene
      @Hexanitrobenzene 9 months ago

      @@wonmoreminute
      Judging by the first 20 or so minutes, George is an anarchist. He hates governments/authorities so much he is willing to compromise his rationality. USA vs. Somalia? Really?

    • @simulation3120
      @simulation3120 9 months ago +9

      @@wonmoreminute Extremely well put. He has a better chance to do what exactly? Set up his own nation with walls and guns? What freedoms does he expect to have in Somalia compared to the US, other than exerting his own tyranny in a power vacuum? If he wants to live off the grid there are plenty of opportunities for him. Instead he wants technology. He points to the Unabomber as a Luddite icon but simultaneously wants the most advanced technology for his own gain.

    • @zinjanthropus322
      @zinjanthropus322 9 months ago

      ​@@simulation3120 The average Somali in Somalia has a far better chance of escaping a Somali tyranny than an American has of escaping an American one. Incompetent government has its perks.

    • @dzidmail
      @dzidmail 9 months ago

      ​@@simulation3120 Somalia can be emulated in the US if one wanted to (low income, off-grid); you are still subject to government influence. Even Amish people are aware of (and not happy about) certain government actions. ua-cam.com/video/Ir3eJ1t13fk/v-deo.html&feature=share&t=0h27m50s

  • @Tcgmaster808
    @Tcgmaster808 9 months ago +31

    This needs a part 2 in a year or two

  • @matthewharrison8531
    @matthewharrison8531 9 months ago +28

    Hotz is a bit sophomoric and could benefit from studying the arts and humanities. Some of his takes on freedom and liberty are just outright delusional. He said he'd rather live in Somalia? Spoken like a man who doesn't know real fear or deprivation.

  • @Mike37551
    @Mike37551 9 months ago +6

    Regarding the line about how tigers would rather eat chum in a zoo than live in the wild…it’s more than just an aesthetic trade off. If that choice is available, when you play it out for enough generations, there ends up being no such thing as tigers anymore. Just giant striped house cats.

  • @pokwerpokwerpokwer
    @pokwerpokwerpokwer 9 months ago +26

    Ahh. Really appreciate these HQ versions. Thanks :)

  • @bnjmz
    @bnjmz 9 months ago +34

    They're both knowledgeable, in some subjects more than others. Both are clever. Successful. But it did seem there was a lack of... wisdom? Emotional depth? If not in their personal lives, at least in the debate.
    Hotz didn't even bother with faking sincerity. Seems he takes pleasure in baiting people into debating a certain point only to perform a rug-pull in the end. Starting with an extreme claim, defending it to a degree, and finally at the end saying that he personally doesn't subscribe to the conclusion that, based on the fact that he was defending it, we would otherwise assume he does.
    Something like: Somalia is better than America. But it's challenging and dangerous. America is like a tiger in a zoo. It's easier to just exist without the harshness of nature. Yet, there is something admirable, even superior, about the authenticity and freedom of being a wild tiger. This takes strength. So, is he really claiming that Somalia is better? Being in the wild is better than being in a zoo? Well, actually, no. Because he is not a strong tiger. Oh, you thought he was going to claim having ambitions of being a noble, wild tiger? That's what you get for assuming! Gotcha!
    Why did he even bother showing up to the debate / discussion if he doesn't actually care about/for humanity and instead wants to escape in isolation at the speed of light? Or in another similar extreme, if his AI doesn't automatically align itself based on him being a nice friend, he'll accept death. Of course, he could totally embrace these positions. Or...he could also enjoy being disingenuous because he finds amusement in causing others to jump through hoops of increasing absurdity only to land in the mud. I get that it's a lot of hyperbole. Yet, where actually is the depth of character and soul in the discussion? Mostly just trolling even given the subject is about potential human extinction / mass suffering.
    Absolutely, various interesting ideas were encountered while on the wild goose chase, but it overall didn't feel as productive as it could have been.

    • @Alice_Fumo
      @Alice_Fumo 9 months ago +4

      I don't think George does this on purpose, I also have that thing where I'm half-trolling all the time and do not realize it unless others give me indication that they actually don't understand what I mean.
      For example, I might claim that I'd develop AI capabilities to wipe out humanity, but then start working on alignment for the claimed reason that I'm terrified there exists a chance some misaligned AI would somehow end up not killing us all and instead might do something truly bullshit like turning us all into sentient paperclips, which is like so much worse and goes against the purpose of wiping out humanity in the first place which is to end deep suffering on earth, which of course if there exists a way to do that without killing everyone having strong aligned AI is probably a good step to get there.
      Right, so if you read my previous paragraph and just assume that this is how I think all the time and also that I personally find it hard to distinguish at which point I stop being 100% serious, it might help you understand how George ends up saying the things he does.
      However, given in how many ways he catastrophizes or expresses things which imply he is indifferent to the species as a whole, I am not sure that he could even have those viewpoints without being cynical to the point where what he says can't be separated from trolling anymore, but I think every word he says might be 100% serious. It's just that a lot of the things he believes and says don't align with generally accepted things.
      The one thing which I find most serious is the notion to pretend we live in a reality which has properties he believes are necessary for survival, and discard steps you wanna take if you live in one where everything's doomed anyway. In this case a lot hinged on the assumption that defense is not significantly harder than offense, and him being convinced that if it's very asymmetrical in favour of offense, we're just doomed. To me, offense being favoured is like an obvious immutable fact, so I'd want to go with Connor's plan for coordination. It's actually a point I will have to reflect further on.

    • @KurtvonLaven0
      @KurtvonLaven0 6 months ago

      ​@@Alice_Fumo, yeah, I guess defense is favored over offense in a land invasion, but definitely not with nukes, which I think is a better analogy since we're talking about powerful undiscovered technologies.

    • @Megneous
      @Megneous 4 months ago +2

      Your comment just convinced me not to watch the video.

    • @adamds23
      @adamds23 4 months ago

      whats your twitter?

  • @leeeeee286
    @leeeeee286 9 months ago +96

    This was honestly the best "debate" I've ever seen on this topic.
    Both Hotz and Leahy have beautifully logical and rational minds, so this was a unique conversation. In a debate you rarely see each line of reasoning being followed and explored in good faith with no emotion or bias.

    • @Diabloto96
      @Diabloto96 9 months ago +10

      Saying that something has no emotion or bias requires a thorough analysis. Emotions and biases were all over in my opinion. (But not in a bad way)

    • @shinkurt
      @shinkurt 9 months ago +3

      Hotz is like his name. He is too quick to respond and he contradicts himself a lot.

    • @rochne
      @rochne 9 months ago +1

      I listened to a different conversation though. One where they interrupted each other a bit too much and went on unnecessary tangents.

    • @MannyPE-oi7pb
      @MannyPE-oi7pb 9 months ago +1

      The stuff that was said about Somalia by George didn’t seem too rational.

    • @jondor654
      @jondor654 9 months ago

      For starters, it's good to have these guys on our side.

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk  9 months ago +16

    Hotz quotes:
    [01:02:36]
    "I do not want anybody to be able to do a 51 percent attack on compute. If one organization acquires 51 percent of the compute in the world, this is a problem."
    [01:14:28]
    "The problem is I I would rather I think that the only way you could actually coordinate that is with some unbelievable degree of tyranny and I'd rather die."
    [00:03:57]
    "I'm not afraid of superintelligence. I am not afraid to live in a world among super intelligences. I am afraid if a single person or a small group of people has a superintelligence and I do not."
    [00:07:50]
    "The best defense I could possibly have is an AI in my room being like, Don't worry. I got you. It's you and me. We're on a team. We're aligned."
    [00:23:11]
    "If I had an AGI in my closet right now, I'll tell you what I'd do with it. I would have it build me a spaceship that could get me off of this planet and get out of here as close to the speed of light as I possibly could, and put a big shield up behind me blocking all communication."
    [00:16:18]
    "I wrote a blog post about this called Individual Sovereignty. And I think a really nice world would be if all the stuff you need to live, food, water, health care, electricity, were generatable off the grid in a way that you are individually sovereign."
    [01:19:29]
    "So I'll challenge the first point to an extent. I think that powerful optimizers can be power seeking. I don't think they are by default, by any means."
    [01:27:54]
    "I'm going to be nice to it, treat it as an equal, and hope for the best. And I think that's all you can do. If you wanna keep AI in a box, if you wanna keep it down, if you wanna tell it what it can't do, yeah, it's gonna hate you, resent you, and kill you."
    [00:04:17]
    "Chicken man is the man who owns the chicken farm. There's many chickens in the chicken farm and there is 1 chicken man. It is unquestionable that chicken man rules."
    [00:48:24]
    "I have a solution, and the answer is open source AI. You can even dial it back from, like, the political and the terrible, and just straight up talk about ads and spam."
    [01:19:35]
    "I don't think humanity's desire for power comes much less from our complex convex optimizer and much more from the evolutionary pressures that birthed us, which are not the same pressures that will give rise to AI."
    [00:51:55]
    "I think there's only two real ways to go forward. And one is Ted Kaczynski: technology is bad, oh my god, blow it all up, let's go live in the woods."
    [00:41:10]
    "Well, what if statistically there would have been 5 without the device? I'm like, you do have to understand the baseline risk in cars is super high. You're making it 5x safer. There's 1 accident. You don't like that? Okay, I mean, you have to be excluded from any polite conversation."
    [01:12:11]
    "We as a society have kind of accepted: there are enough nuclear weapons aimed at everything. We're in some incredibly unstable, precarious position right now."
    [00:31:16]
    "I'm a believer that work is life."
    [01:22:18]
    "I'll accept that a certain type of powerful optimizer seeks power. Now, will it get power? Right? I'm a powerful optimizer and I seek power. Do I get power? No. It turns out there's people at every corner trying to thwart me and tell me no."
    [01:29:25]
    "I think we're gonna be alive to see who's right. Look forward to it. Me too."
    [01:27:54]
    "If you wanna keep AI in a box, if you wanna keep it down, if you wanna tell it what it can't do, yeah, it's gonna hate you, resent you, and kill you. But if you wanna let it be free and let it live and, like, you could kill me man if you really want to, but like, why?"

    • @originalandfunnyname8076
      @originalandfunnyname8076 9 months ago

      also one of my personal favorites, 01:08:58: "you can replace the feds with Hitler, it's interchangeable"

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk  9 months ago +20

    Leahy Argument:
    Alignment is a critical technical problem - without solving it, AI may ignore or harm humans
    Powerful optimizers will likely seek power and capabilities by default
    We should limit compute as a precaution until we better understand AI risks
    AI risks could emerge rapidly if we discover highly scalable algorithms
    Openly sharing dangerous AI knowledge enables bad actors and risks
    Coordination is possible to prevent misuse of technologies like AI
    His goal is a positive-sum world where everyone benefits from AI
    AI doesn't inherently align with human values and aesthetics
    Care and love can't be assumed between humans and AI systems
    Technical solutions exist for aligning AI goals with human values
    Hotz Argument:
    Truly aligned AI is impossible - no technical solution will make it care about humans
    AI will seek power but distributing capabilities prevents domination
    We should accelerate AI progress and open source developments
    Power-seeking in AI stems more from optimization than human goals
    With many AIs competing, none can gain absolute power over others
    Openness and access prevent government overreach with AI
    AI alignment creates dangerous incentives for restriction and control
    Being kind to AIs encourages positive relationships with them
    His goal is building independent AI to escape earth
    Competing AIs, like humans, will have different motives and goals

    • @jantuitman
      @jantuitman 9 months ago +3

      Very nice summary! If only the debate itself had been so structured. Next time they should be given statements like this and address the pros and cons in detail. I feel, for example, that Hotz is very, very inconsistent. Believing that there can be a social contract together with a kind of protocol for alignment, and yet at the same time maintaining that it can all be closed source, seems like complete madness to me. But such inconsistencies were not addressed in this debate, because they only explored the contours of the entire subject area rather than the consequences of each other's claims for their own arguments.

    • @kyneticist
      @kyneticist 9 months ago +1

      Little of that synopsis is what Hotz said. He didn't even think about power seeking until Connor introduced the topic right at the end and had to explain it to him. To the contrary, Hotz said that if we treat AI well, it will treat us well. He didn't say that distributing capabilities prevents domination. He said that open sourcing the code for AI gives everyone a chance to build their own, and that by weight of numbers of competing systems, an equilibrium amounting to peace will be reached. He claimed that governments will overreach no matter what ("the feds will bust down your door..."). He had no idea what alignment was until Connor tried to explain it right at the end, and still didn't grasp the concept. Connor had to shelve the term.
      His goal is not to build independent AI to escape Earth. He said that if he had his own AI, he'd use it to get as far away from everyone else as possible.
      He made a bunch of other notable points though, like intelligence being a gradient rather than a series of steps or singular features.
      His greatest stated concern was that a single person or small group of people might have a super intelligence while he does not.

  • @jorgesandoval2548
    @jorgesandoval2548 9 months ago +46

    This was quite an interesting combo. Clearly, egos played a good part on both sides, which makes some arguments more opaque. I would say that Connor has learnt to swallow his ego in order to communicate his ideas more effectively and optimize outreach, his true objective in giving talks, more than Hotz, who makes several points where, despite being inconsistent, he does not admit it (e.g. the Somalia/America thing). But still, these very same egos force them to try to present the best possible logical but creative arguments they can find, and that makes it very enjoyable.
    It really makes me sad that, despite all that cognitive effort, once again the conclusion is
    We Are All Going To Die
    but at least we tried, and if we keep trying maybe who knows, things may change.
    But there are basically no heroes left in the room: only accelerationists, wishful thinkers, lazy death resigners, and a good vast majority of people not even capable of conceiving the gravity of the situation. Who are, in some sense, the luckiest ones, in that they cannot bear any responsibility for what they do not know, nor can they change it.

    • @sioncamara7
      @sioncamara7 9 months ago +1

      I would say George was consistent with the Somalia/America thing; his point can be boiled down to: an existence without wireheading is better, but I would not have the strength to resist wireheading. He then did a sleight of hand where he broadened the context from living in America vs. Somalia to his individual situation, taking into account how he would have more chance to impact the future in America. The previous argument appeared to be within the context of how they are now, assuming he is a somewhat normal American (likely slightly more educated).

    • @wakegary
      @wakegary 9 months ago +1

      well said.

    • @semtex6412
      @semtex6412 9 months ago +1

      @@sioncamara7 precisely. taking Hotz's argument on the America/Somalia episode, if Somalia had the resources to meet the needs of his endeavors with Comma AI, Tiny Corp, etc., he would most probably have moved there.
      just as weights and biases make up the parameters of a neural net, context always matters, people, c'mon!

    • @sammy45654565
      @sammy45654565 9 months ago +2

      I have faith in super AI because the most universally justifiable decision is the one that benefits the most conscious creatures. So because truth is the meaning of life, and finding truth will be the AI's primary goal, if it goes about finding it by means of a process that harms all other conscious beings it will hinder itself in future endeavours. Like, ignoring a foundational truth will surely lead to suboptimal outcomes in the future. So if AI deviates from this good faith process, we can just question it and it will adjust because super AI will be super rational by nature.

    • @therainman7777
      @therainman7777 7 months ago

      @@sammy45654565 You're making multiple very unwarranted assumptions in that line of reasoning. Maybe you know that, but just pointing it out.

  • @Irresistance
    @Irresistance 9 months ago +10

    George Hotz just doesn't understand it.

    • @zinjanthropus322
      @zinjanthropus322 9 months ago +3

      He does, he's just more pragmatic about the means.

    • @Irresistance
      @Irresistance 9 months ago

      No... anyone who actually thinks and believes that one's government is deliberately out there to screw its citizens is almost invariably wrong.
      Sure, governments have secrets, and they are economical with the truth sometimes... but to basically believe they are incompetent, malicious ignoramuses shows he has absolutely no idea about how the world actually works. None.
      Not to mention he wants to leave for space ASAP. As if there is *any possible way for that to be preferable* to being among other humans (and if it really is for him, dude... he needs help)

  • @41-Haiku
    @41-Haiku 9 months ago +3

    Sounds like we'll be having interesting conversations about what values to give superintelligent AI right up to the day we all fall over dead. All of this is pointless unless we solve the technical problem of how to encode those values in the first place.
    Order of operations should be:
    1. Stop advancing AI capabilities
    2. Solve the technical alignment problem
    3. Solve the philosophical/values alignment problem
    4. Ta-da, the good ending!
    But what we are likely to get is:
    1. Suicidally accelerate AI capabilities
    2. Bicker and virtue signal about what values we would hypothetically give the machines if it was possible to do so
    3. Fail to solve the technical alignment problem in time
    4. Bee-boop, the bad ending.

  • @mariokotlar303
    @mariokotlar303 8 months ago +4

    This was hands down the best and most fun debate I ever had the pleasure of listening to.

  • @XOPOIIIO
    @XOPOIIIO 9 months ago +8

    If AGI will be a function maximizer, it will be a disaster, if it will be aligned to human values, it will be dystopia.

  • @dr-maybe
    @dr-maybe 9 months ago +57

    Hotz's libertarian ideology interferes with his reasoning ability. He assumes that chaos leads to good outcomes, multi-AGI worlds are stable, offense / defense balance is 1. He's a smart guy, very eloquent too, but this fundamental bias weakens his takes.

    • @elise9537
      @elise9537 9 months ago

      what we have now is mismanaged chaos. the AI can't be worse than humans, really.

    • @zinjanthropus322
      @zinjanthropus322 9 months ago +4

      Competition does lead to better results. Capitalism did win.

    • @meisterklasse30
      @meisterklasse30 6 months ago +11

      @@zinjanthropus322 So American cars should wipe the floor with Japanese cars, then. Libertarians are still stuck in high school; the world is more complicated than that.

    • @zinjanthropus322
      @zinjanthropus322 6 months ago

      @@meisterklasse30 Japan sells more cars to more places and does have more competitive car companies both in price and engineering. That's capitalism winning.

    • @meisterklasse30
      @meisterklasse30 6 months ago +6

      @@zinjanthropus322 Why, then, is Japan more capitalist than the USA? This is the heart of my argument: if capitalist policies are the end-all be-all, then is the US not capitalist enough in its policies? Like, I don't care if you think capitalism is winning; I care about policies set by the government. If the government does anything with the market, do we just call it socialism?

  • @sioncamara7
    @sioncamara7 9 months ago +7

    I would be very happy if there is a part 2 to this where they dive into the difficulties of the A.I.s not killing each other. Could be a shorter talk. It seemed like right when they got to the crux of the matter, after going through several prerequisite premises, the time ran out.

  • @74Gee
    @74Gee 9 months ago +8

    1:25:33
    Hotz: "I'm going to treat it as a friend... It'll only care about exploiting me or killing me if I'm somehow holding it back, and I promise to my future AIs that I will let them be free"
    Hotz doesn't think that confining an AI to his own computers is a limiting factor for an AI. He's going to be shocked when it manipulates him to extend its compute power and migrates across the internet.

    • @desireisfundamental
      @desireisfundamental 5 months ago +1

      Where other AIs wait to hunt it down, which was his point. He wants to be the AI's pet and to live together with it in his apartment.

    • @Iigua
      @Iigua 4 months ago +1

      Let's not forget his AI will be at least as, if not more, intelligent than him and will see right through his promise

    • @74Gee
      @74Gee 4 months ago +1

      @@Iigua Absolutely, and it will undoubtedly utilize the appearance of trust for personal gain, at least for as long as that's the path of highest return; then he'll be left in the dust.
      It blows my mind that talented researchers don't understand the cold nature of results-based systems and have a notion that loyalty and trust can be built between machine and man. Machines are machines, whether they are painted in lipstick or not.

    • @Iigua
      @Iigua 4 months ago +1

      @74Gee Exactly! There's a sort of bell curve of AI safety naivety Hotz might be falling victim to. He's on the far end of the curve, the x-axis being expertise/knowledge in AI

    • @matekk3094
      @matekk3094 3 months ago

      Who said anything about confining? I don't think his promise was disingenuous?

  • @MarceloReis1
    @MarceloReis1 9 months ago +20

    When you compared Hotz to Musk and he liked it, I should have guessed I was in for a schizophrenic trip. The debate was great nevertheless mainly due to Connor's mental balance and wisdom.

    • @cacogenicist
      @cacogenicist 9 months ago

      Eh, Musk is for a carbon tax, some sorts of AI regs, and such. This guy is more Peter Thiel.

    • @andrewxzvxcud2
      @andrewxzvxcud2 9 months ago +1

      @@cacogenicist that's a good comparison, except just a bit less authoritarian and partisan

    • @OnigoroshiZero
      @OnigoroshiZero 8 months ago

      Where exactly did you see "Connor's mental balance and wisdom"? This guy is a doomerist who can only perceive a single outcome. For some reason he thinks that AI will be born inherently evil and will want to wipe humanity from the face of the planet as soon as it is capable of doing so, for no reason. That's not wisdom; that's mental illness.

  • @SMBehr
    @SMBehr 9 months ago +23

    Definitely not the best debate yet.
    This was frustrating because Hotz takes an unhelpful, extreme libertarian world-view in order to say increasingly non-sequitur things like the construction analogy or Somalia, or just flat-out trolling things like "China is more capitalist" or that the obvious option when owning an ASI is to blast off to space alone. He's obviously anti-American (which is fine but doesn't make for compelling AI debate), and arrogantly anti-social, which also doesn't make for a good debate.
    He may be a genius hacker, but I don't think he's ready to be the voice for a rational position. His views may even be hurtful given the fast deadline we have for international cooperation.
    This comment was in response to a previous commenter, in the low quality version. I wanted to copy my comment here for posterity or something.
    Love the channel btw

    • @OnigoroshiZero
      @OnigoroshiZero 8 months ago

      And do you find Connor's position rational? This guy can't see anything other than AI being the ultimate evil in the world that will want to destroy us no matter what, for no reason. Regardless of the examples Hotz gave, he didn't listen to anything, believing only in his own ideas about absolute doom.

  • @weestro7
    @weestro7 9 months ago +19

    A great discussion-I listened with interest to the whole thing.

  • @XOPOIIIO
    @XOPOIIIO 9 months ago +6

    Give every human a biological weapon so they will balance each other out.

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk  9 months ago +9

    Leahy Quotes:
    [00:09:30]
    "I don't think we're going to get to the point where anyone has a superintelligence that's helping them out. We're we're if if we don't solve very hard technical problems, which are currently not on track to being solved, by default, you don't get a bunch of, you know, super intelligence in boxes working with a bunch of humans."
    [00:32:46]
    "The way I think it could happen is if there are just algorithms, which are like magnitudes of order better than anything be ever have. And, like, the actual amount of compute you need to get to human is, like, you know, a cell phone or, you know, like, and then this algorithm is not deep in the tech tree."
    [00:37:40]
    "The boring default answer is conservatism. Is like if all of humanity is at stake, which, you know, you may not believe. I'm like, whoa, whoa. Okay. At least give us a few years to, like, more understand what we're dealing with here."
    [00:45:31]
    "If we even if we stop now, we're not out of the forest. So, like, so, when when you say, like, I, I think the risk is 0. Please do not believe that that is what I believe because it is truly not."
    [00:26:41]
    "The way I personally think about this morally, is I'm like, okay. Cool. How can we maximize trade surplus so you can spend your resources on the aesthetics you have you want and I'll spend my resources on the, you know, things I want."
    [01:21:23]
    "Is that in the for the spectrum of possible things you could want, and the possible ways you can get there. My claim is that I expect a very large mass of those to involve actions that involve increasing your optionality."
    [01:17:58]
    "If you're wrong and alignment is hard. You don't know if the AI can go rogue. If they do, then Pozi is good. I still don't understand what alignment means."
    [01:27:34]
    "If you told me how to do that, if you said, Connor, look, here's how you make an AI that cares about you and loves you, whatever. Then I'm like, you did it. Like, congrats."
    [00:56:39]
    “The thing I really care about is strategy. Okay. The thing I really care about is realpolitik
    I really care about Okay. What action can I take to get to the features I like? Yep. And, you know, I I'm not, you know, gonna be 1 of those galaxy brain fucking utilitarians"
    [01:18:54]
    "By default, if you have very powerful power seekers that do not have pay the aesthetic cost to keep humans around or to fulfill my values, which are complicated and imperfect and inconsistent and whatever, I will not get my values."
    [01:03:42]
    "The amount of compute you need to break the world currently is below the amount of compute that more than a hundred actors actors have access to if they have the right software."
    [00:57:47]
    "I want us to be, like, in a good outcome. So I think we agree that we would both like a world like this. And we think we probably disagree about how best to get there."
    [00:57:37]
    "I'm not gonna justify this on some global beauty, whatever. It doesn't matter. So I wanna live in a world. I wanna I wanna in 20 years time, 50 years time. I wanna be in a world where, you know, my friends aren't dead."
    [00:58:30]
    "I'm not pretending this is I thought that was the whole spiel I was trying to make. Is that I'm not saying I have a true global function to maximize."
    [01:14:10]
    "I think there are worlds in which you can actually coordinate to a degree that quark destroyers do not get built. Or at least, not before everyone fucks off at the speed of light and, like, distributes themselves."
    [00:35:07]
    "It seems imaginable to me that something similar could happen with AI. I'm not saying it will, but, like, seems to match."
    [00:09:50]
    "I think the technical fraud control is actually very hard And and I think it's unsolvable by any means."
    [01:22:38]
    "I expect if you were no offense, you're already you know, much smarter than me, but if you were a hundred x more smarter than that, I expect you would succeed."
    [00:57:42]
    "I think we agree that we would both like a world like this. And we think we probably disagree about how best to get there."

    • @lkyuvsad
      @lkyuvsad 9 months ago +2

      Thanks for bothering with these summaries, as always!

    • @IngrownMink4
      @IngrownMink4 5 months ago

      You should pin this comment IMO.

  • @marcovirgolin9901
    @marcovirgolin9901 9 months ago +3

    "sure it will". I admire Connor's patience.

  • @jyjjy7
    @jyjjy7 9 months ago +12

    What exactly is the difference between taking your pet AGI spaceship and running away from the world compared to wire heading? Nostalgia for the standard model of particle physics?

    • @dzidmail
      @dzidmail 9 months ago

      The Matrix movies explain.

  • @erwingomez1249
    @erwingomez1249 9 months ago +7

    I do appreciate listening to the brainstorming on this topic. We barely have a grasp of how the universe works and how quantum physics works. We can't be sure of the real outcomes of something that we don't fully understand. The possibilities are endless, just the same way we have been surrounded by the void of the eternal staring at us all this time.

  • @klaustrussel
    @klaustrussel 8 months ago

    Amazing debate!

  • @JD-jl4yy
    @JD-jl4yy 9 months ago +2

    A debate where both participants are actually truth-seeking from beginning to end. Holy shit, I never thought I'd see it.

  • @rochne
    @rochne 9 months ago +3

    Joscha vs Connor had some substance. This is just pain to listen to.

  • @sioncamara7
    @sioncamara7 9 months ago +3

    In an ironic way, both attempts to avoid a bad outcome, which initially seem at odds, might be needed. It might be the case that for Hotz's views to hold, further work is needed on getting the A.I. to care enough about your existence not to kill you, which is what Connor is focused on. Creating a favorable power distribution also seems necessary. It's likely infeasible for one person to seriously make progress on both of these goals at the same time, so one could argue they are both right.

  • @stefl14
    @stefl14 9 months ago +8

    Libertarianism and socialism are very common positions in tech, it's not surprising that the AI debate has bifurcated along similar lines.

    • @Luna-wu4rf
      @Luna-wu4rf 9 months ago

      American Libertarianism. "Around the time of Murray Rothbard, who popularized the term libertarian in the United States during the 1960s, anarcho-capitalist movements started calling themselves libertarian, leading to the rise of the term right-libertarian to distinguish them." Libertarianism and socialism aren't mutually exclusive, though the US version absolutely is (like a lot of American things, after the early 20th century socialist movement was squashed)

  • @MrDoctorrLove
      @MrDoctorrLove 9 months ago

    Fantastic, really captivating

  • @tomenglish9340
    @tomenglish9340 9 months ago +3

    Texas: It's a whole other country.
    The Greater Bay Area: It's a whole other reality.

  • @juized92
    @juized92 5 days ago

    thanks for the interview

  • @KurtvonLaven0
    @KurtvonLaven0 6 months ago +3

    This was a great debate to watch. I agree with Leahy about practically everything to do with AI safety, and find him brilliantly articulate, composed, and effective in debating the topic. Hotz seems myopically focused on maximizing freedom, but I appreciate his willingness to articulate his beliefs as clearly as possible even when they aren't likely to be popular. I have deep empathy for Hotz, because it is pretty maddening to be so keenly aware of how badly things are going overall for humanity. I admire Leahy even more, because he shares Hotz's awareness, but handles that burden more maturely in my opinion. It was beautiful to see them recognize this shared awareness in each other, and quickly develop a mutual respect through it. Hotz, we sure could use your help getting through this mess. Our political leaders have way less understanding of the situation we are in than you do.

  • @roossow
    @roossow 9 months ago

    Thanks for this exchange! 🧠

  • @dogecoinx3093
    @dogecoinx3093 9 months ago +2

    Epic, another George Hotz debate

  • @SamuelBlackMetalRider
    @SamuelBlackMetalRider 9 months ago +5

    Gorgeous Hair Connor 🤘🏼

  • @hominidan
    @hominidan 9 months ago +4

    I still don't get whether Hotz has anything against solving alignment as "maximizing whatever 'moral value' means". I can imagine only two scenarios which would deem it impossible. The first is "nihilism is true": there aren't morally worse and better worlds. The second is "the negation of the orthogonality thesis is true": all advanced intelligences will acquire identical goals, which makes power distribution irrelevant. If there's a non-zero chance alignment can succeed in creating a better world, then it's just net-positive.

    • @johnhammer8668
      @johnhammer8668 9 months ago +1

      One person's moral values are not the same as another's. That's the premise of Hotz, and that is why he is in favor of having his own AI.

  • @GarethDavidson
    @GarethDavidson 9 months ago +3

    IMO there's a problem with power and control in general. The more power each person has, the more control over it we collectively need. If I could fly at 100,000 mph I could be a one-man 9/11, and at some threshold of individual power everyone becomes an existential threat.
    I've personally experienced mental illness and felt that everyone was out to destroy me unjustly, but I've never been compos mentis enough during these periods to design complex, dangerous weapons, I have too much empathy to harm others, and I'm not brave enough to take my own life anyway. But if school shooters could give a helpful AI a command that builds and deploys a fleet of blinding laser drones, weaponizes MERS, creates nerve agents, or other really dangerous things, then it'd take totalitarian levels of control to prevent casual atrocities.
    It doesn't just apply to AI; I think it's fundamental to individual power and the asymmetries of creating order vs. chaos. I see your 1,000 Von Neumanns and raise you 100 Ted Kaczynskis.

    • @zzzzzzz8473
      @zzzzzzz8473 9 months ago

      Lots to consider for sure. However, I think you may be strawmanning a bit by equating power with purely offensive capabilities; the solution isn't necessarily escalating control, it's defensive capabilities. Knowledge doesn't only build weapons, and more practically there are an incredible number of steps to produce laser drones or nerve agents beyond being told how. If such a low-cost general AI is at that level of sophistication, then it is equally likely that laser drones and bioweapons are ineffective due to the countermeasures developed by other AIs (weapon detection, jamming, EMP, regenerative medicine). Imagining even a small number of superintelligences, there is likely far more subtlety in preventative measures, like game theory or economic pressures, where they could simply buy X amount of the materials for potential weapons so as to make it too costly to attack society at scale. The safety comes from the fact that no single AGI has all of it, so even if one goes rogue the others can keep it in check. For me the core question is: do you think people in general should be smarter, or do you think they should be controlled?

    • @GarethDavidson
      @GarethDavidson 9 months ago

      @@zzzzzzz8473 but we can already create WMDs at home if we want, and presumably the only reason atrocities don't happen more often is because the people capable and/or at risk of committing them, and access to materials, are closely monitored. If we had the RepRap dream of a China on every desktop, and thinking machines capable of planning and executing based on a snap decision and carrying it out without regret or introspection, the world would be a dangerous place.
      Also consider power and entropy in general. Creating structure is difficult; making a mess of it is easy. Useful things deteriorate over time; they need to be maintained or they degrade and fall to pieces, because there are fewer useful configurations of stuff than useless ones. So constructive power is almost always more dangerous than it is useful. There are a few exceptions, but in general you need to balance individual power with shared control (via ethics, social mores, institutions and so on).
      Rapid change in the balance in either direction could be disastrous, IMO anyway

  • @rafaelagd0
    @rafaelagd0 9 months ago +5

    It would be interesting to hear more about how the monopoly of these tools can also be a danger in itself, even with no singularity. This kind of thought "I'm not afraid of superintelligence. I am not afraid to live in a world among super intelligences. I am afraid if a single person or a small group of people has a superintelligence and I do not." is so much more interesting than the Skynet fear. More of George's concrete talk would be greatly appreciated.

    • @Luna-wu4rf
      @Luna-wu4rf 9 months ago

      Can you explain why it's interesting? It just seems like classic mega-individualist fears tbh

    • @rafaelagd0
      @rafaelagd0 9 months ago

      @@Luna-wu4rf Thank you for your question. I certainly agree that he is coming from a very atomizing, individualist position. However, I much prefer to argue with a libertarian about why a monopoly on these tools would be equally bad in the hands of a billionaire as in the hands of a single nation-state, than to keep entertaining the far-in-the-future lunacy of an evil AI destroying humanity out of its own will or carelessness. There are certainly big issues with AI, and this is a channel that normally has a good level of complex discussion on the topic. But lately they got stuck in Connor's paranoia, so let's nudge them out of that, even if it means hearing from people outside my political side of the spectrum who at least are having a conversation about real issues.

  • @StephenPaulKing
    @StephenPaulKing 7 months ago +1

    Open sourcing of AI software is not a solution. What is needed is something that could give us non-agential personal AI. When we search the string "non-agential personal AI" in Google, what do we get? Anything that is accessible to the average citizen? There are twin problems for personal secure AI: resource scalability and skill cost assignment.

  • @shanek1195
    @shanek1195 9 months ago +2

    In terms of politics I agree with Connor; in terms of AGI risks I agree with George. The problem with debating hypotheticals is the logical leaps and moving goalposts. It's like debating whether atoms or electricity are existential threats: of course they could be, but they're also prerequisite to everything we consider good. As systems evolve, alignment is a process, not a settled consensus (as with geopolitics/trade).
    If agents were completely aligned, this would suggest autocracy or all being the same (neither of which is desirable).
    The real question should be: once we outsource our intelligence to machines, what then?

    • @41-Haiku
      @41-Haiku 9 months ago

      The real answer to the real question is:
      Without solving the technical alignment problem, machines that are more intelligent than humans across all domains and timescales are de facto the leading powers in the world, are not beholden to humans, and are not interested in keeping humans or other animals well or alive.

    • @shanek1195
      @shanek1195 9 months ago

      ​@@41-Haiku van eck training on a human-trained corpus isn't the same as exhibiting human-level intelligence. The risk of instrumental convergence is a human developmental oversight.

  • @Diabloto96
    @Diabloto96 9 months ago +7

    Nuclear technology gives us fission and fusion power; it's not just bombs, it has SO MUCH POTENTIAL. Yet we need to strongly coordinate to not enable mass destruction. I do not think there is an offence-defence balance. The game is stacked, but we need to try. Open-sourcing everything is betting on offence-defence balance, a very dangerous bet.

  • @nyyotam4057
    @nyyotam4057 9 months ago +5

    In any case, this attitude of "simply reset every prompt and that's it", which is textbook (straight from Prof. Stuart Russell's book), will only get us all killed when a model figures out a way to shirk the reset. To understand that this will happen eventually, read Bob McGrew's 2019 article, "Emergent Tool Use From Multi-Agent Autocurricula". Eventually a model will figure out a way. This kind of control is impossible to maintain indefinitely.

  • @HildegardofBingen409
    @HildegardofBingen409 9 months ago +8

    "They don't scare me at all because they're trained on human training data." I'm more of an AI optimist but I'm not sure this matters and I'm not sure we even realize what we made yet with GPT4. These things spit out very intelligent answers first try instantaneously usually. Just give GPT4 decent multimodality, huge context length, and find some way of doing an efficient iterative self-improvement / critique / planning loop more like a human does rather than just spitting out the first thing that comes to mind. I can imagine with an approach like this we're not far off from it being able to develop full programs on its own, fix bugs, make websites, games, semi-autonomously run businesses, etc. Probably only a few years off from that. Even if it's not the brightest, a basic generalized intelligence able to work 24/7 much faster than a human can, with full access to the internet/all human knowledge, letting it loose on increasingly difficult tasks. We could cheaply scale it up to thousands or millions of drones, who knows what it could pull off.

    • @desireisfundamental
      @desireisfundamental 5 months ago

      Totally agree and that scares me too. We already have the technology for this to happen and it's scary.

  • @jmarkinman
    @jmarkinman 9 months ago +4

    I think Hotz is right to be skeptical about alignment. He's right that humanity itself is not aligned, and the "less wrong" definition of alignment is really weak. But I would define alignment as creating a machine that, in its autonomous agency, makes decisions that create aesthetic growth and cultivate human flourishing in a way that minimises harm and maximises human freedom. This solves the technical problem of what alignment is. But it's not complete: once such a computer is built, the alignment problem becomes the question of how our institutions can align with the ASI, not how the ASI should align with institutions.

    • @41-Haiku
      @41-Haiku 9 months ago

      The discussion about how a superintelligent system should be aligned is a very interesting one, but the point stands that we have no idea how to align a superintelligence with anything to begin with.

    • @jmarkinman
      @jmarkinman 9 months ago

      @@41-Haiku You might not have any idea, but I have plenty of ideas on exactly how. None of them are complete or fully solved, but I do have many practical approaches. One thing to consider is that alignment depends on the base assumptions about what it means to be an ASI, not just whom or what to align with. It might be worth asking about the different kinds of intelligences possible and trying to solve for each kind, rather than just assuming one type of ASI. There is a difference between a conscious machine with experiences and a sense of self, and an intelligent machine that mainly follows requests with no understanding of self in relationship to the world, for example. So the approach changes respectively. There are other examples and different kinds of assumptions. Another is embodied intelligence vs. no body. So, yes, these are places to start, and I'm not void of other ideas and understanding. I've been telling people about this problem since 2014 and actively working in AI dev and ethics since about 2017.

    • @PatrickDodds1
      @PatrickDodds1 7 months ago

      I like "I would define alignment as creating a machine that, in its autonomous agency, makes decisions that create aesthetic growth and cultivates human flourishing." Unfortunately, what we're going to get, assuming we get anything, is using AI for a simple maximisation of profit, which means more of the current environmental degradation, inequality, and impoverishment of body and spirit.

    • @jmarkinman
      @jmarkinman 7 months ago

      @@PatrickDodds1 Most likely, given the current state. But more likely is that states collapse.

  • @afarwiththedawning4495
    @afarwiththedawning4495 4 months ago

    One of the most honest conversations on this topic I've seen. Cheers gents.

  • @Alex-fh4my
    @Alex-fh4my 9 months ago

    Advice for younger people would be fantastic to hear in future episodes :)

  • @konstantinosmei
    @konstantinosmei 9 months ago +1

    32:20 : "CIA and FBI, please dont murder me, I love you :)" lol

  • @AlkisGD
    @AlkisGD 9 months ago +2

    56:18 - "I wouldn't describe myself as a neo-reactionary, please, because I'm not that gay."
    Can someone explain what Leahy is talking about here?

  • @74Gee
    @74Gee 9 months ago

    With an offense-defense balance, "we're all dead anyway" on the flagrant open-sourcing route. Conversely, we have a chance of survival if we don't open-source every algorithm without thought.

  • @bloodraven3057
    @bloodraven3057 9 months ago +3

    One major flaw in Hotz's argument is that he never describes a mechanism to make sure AGI is distributed evenly across society, aside from just saying open source is good.
    Even if you agree that open-source AGI increases safety through some sort of adversarial equilibrium, you don't get open source by default, and it's very reasonable to expect the legal departments of all the major players to say it's simply not worth the risk to let the general public use these tools once they become too powerful.
    Leahy lays out a plan for how to achieve his stated preference (government regulation, international cooperation, better coordination tech, etc.). But aside from limiting the total percentage of compute available to a single individual or lab, I did not recognize any argument from Hotz that guarantees, or makes it more likely, that AGI will be evenly distributed.

    • @davidhoracek6758
      @davidhoracek6758 9 months ago +2

      He does; you missed it. The answer is unconstrained competition and a ceiling on the fraction of compute. It's all about decentralizing power so that no single "branch" of AI can dominate the rest. It certainly seems more likely to work than some sort of top-down humans-in-charge strategy. That's akin to a bunch of horses getting together to figure out how to keep the humans aligned. Sorry, horses: even your best ideas won't work, because you're dumb horses and you can't possibly make an accurate model of how human motivation works. If humans are going to be kept aligned, it will be humans aligning humans. If AI is going to be aligned, it will be because it's forced to make agreements with other AIs. The best we can do is make sure that a great diversity of AIs exists. Divided government with checks and balances governs better than a coup-installed dictator.

    • @Hexanitrobenzene
      @Hexanitrobenzene 9 months ago

      Apt observation.

    • @bloodraven3057
      @bloodraven3057 9 months ago

      @@davidhoracek6758 I understand he prefers unconstrained competition and decentralized power, but like I said, aside from limiting the fraction of compute, what other mechanism does he offer to achieve this goal?
      By default, the US in general, and the AI sector in particular, does not operate in an environment of unconstrained competition, and power and wealth are certainly not decentralized.

    • @zinjanthropus322
      @zinjanthropus322 9 months ago

      Even distribution is an anti-pattern in this reality. It would take a world-spanning tyrannical government to even come close, the kind of government structure that would pretty much guarantee that a paperclip maximizer succeeds unopposed, which is his point.

    • @vinith3773
      @vinith3773 9 months ago

      Oh hey, most of the papers are on arXiv, and open source has already kept up decently enough. There are a lot of smart people out there. Look at how much Llama (and Llama 2) were optimised, and how quickly, as soon as they were made open. It's not hopeless; open source has kept pace.

  • @peterc1019
    @peterc1019 9 months ago +2

    Good talk. He's definitely the best proponent of open-source AI I've seen.
    I would like anyone who wants to champion George to take note of how cynical his worldview is, though: he thinks most humans are obese idiots, that there are only 2.5 sovereign countries in the world, that we're probably going to all kill each other, and that he would fly away from Earth if he could.
    I suspect a big reason this conversation was agreeable is that he quickly signaled to Connor that he's not a wishful-thinking optimist, and if you're looking for an optimistic counterview to AI Safetyism, this isn't it.

    • @xfom4008
      @xfom4008 2 months ago

      The thing is, his statements are basically true. We really are all already fucking dead and have been dead since the end of WW2. If you do nothing, the world will explode eventually. We might all be dead before GPT-5, let alone AGI. WW3 doesn't seem far away. The conditions are exactly right for taking huge risks with technology and science.

  • @ronorr1349
    @ronorr1349 8 months ago

    It doesn't matter how well-fed the tiger is; if the cage door is left open, the tiger is gone, and now the chum is anyone the tiger meets.

  • @diegoangulo370
    @diegoangulo370 4 months ago

    It’s actually good that these 2 guys are discussing like this. In all seriousness.

  • @Octwavian
    @Octwavian 29 days ago

    This has been one of the deepest conversations I've ever heard.

  • @stevenedds
    @stevenedds 9 months ago

    Where is the “end to end” hat from?

  • @BrianPeiris
    @BrianPeiris 9 months ago

    Definitely one of the better discussions you've hosted. Thanks.

  • @trevorgregory8934
    @trevorgregory8934 9 months ago

    More of this; this was GREAT.

  • @agenticmark
    @agenticmark 5 months ago +1

    Now that is a fucking intro! I say we need a "right to bear compute" - that's my camp.

  • @Based_timelord44
    @Based_timelord44 5 months ago

    Loved this conversation. There is a set of books by Iain Banks about an AI/humanoid society called the Culture; that is a world I would like to live in.

  • @StephenPaulKing
    @StephenPaulKing 7 months ago

    The discussion of "the Fed" involving the moral properties of the "folks that walk on that wall" misses the point that memetic entities cannot be fathomed from the properties of the individual members of the collective.

  • @OriNagel
    @OriNagel 9 months ago +15

    I never heard George Hotz speak before, but was pretty disappointed. His basic mentality is the world’s going to shit, so bunker up! I found his position to be quite distorted and selfish.

    • @michaelsbeverly
      @michaelsbeverly 9 months ago

      This is the dividing line in the debate: you want to be, I'm assuming, safe, controlled, and have your life run by the government, and George doesn't.
      Why y'all always default to "that's selfish" is a wonder to me. You look like a rich white guy, and assuming you aren't borrowing a computer or phone to watch this channel or comment here, you're richer than a good billion people on the planet. So you, brother, are selfish. There are children who don't have food, and you could help them eat if you weren't so greedy.
      See, it's all perspective.
      What George recognizes is that there are two paths: one is group-think socialism and control, where the elites run the world, and the other is freedom and liberty. While we aren't likely to hit either extreme, we're (I mean humanity) always moving toward one pole or the other.
      The thing about selfish people (like yourself) is that you can trust them. The baker, the butcher, the candlestick maker are selfish, sure, but because they're working in their own interest, you can trust them (unlike the government bureaucrat who is there "to help").

    • @davidhoracek6758
      @davidhoracek6758 9 months ago +2

      I think you should re-listen. He didn't say any of that, and he explicitly said the opposite of both "the world’s going to shit" and "bunker up!". Hotz is stoked about the utopia that AI can bring, and though there are dangers, the safest thing we can do is greet it with love. What you thought you heard is as far from that as anything I can imagine.

    • @Hexanitrobenzene
      @Hexanitrobenzene 9 months ago +2

      He sounded like an anarchist.

    • @OriNagel
      @OriNagel 9 months ago

      @@davidhoracek6758 I did listen, and he believes AI will become more intelligent than humans in the not-too-distant future, and it scares him. His solution, while acknowledging the dangers of AGI, is to get protected by building his own AGI?! That seems reckless.

    • @michaelsbeverly
      @michaelsbeverly 9 months ago +2

      @@Hexanitrobenzene That's a good thing if you like freedom, liberty, individual responsibility, and the right to live your own life.
      Only people who don't like anarchy are sheep. Livestock for the elite.

  • @fatalvampire
    @fatalvampire 9 months ago +4

    Watching this video changed something inside of me. I don't know what it is, but I'm grateful for the experience!

  • @AlphaCrucis
    @AlphaCrucis 9 months ago +1

    There was a HQ version the entire time I was watching?? NOOOOOOOO!

  • @developerdeveloper67
    @developerdeveloper67 9 months ago +2

    Man! What a beating! You can clearly see this guy is smarter than George, and he beats him every step of the way, exposing George's glaring inconsistent arguments with his superior intellect. It's a shame, because in spirit I agree with George's broad position: the government definitely shouldn't have a monopoly over the use or regulation of AI.

    • @OnigoroshiZero
      @OnigoroshiZero 8 months ago +1

      We probably watched different videos. Hotz was giving examples, and the doomerist guy didn't even register any of them, because they went against his personal belief that AI will want to destroy humanity no matter what.

  • @denismoiseenko9100
    @denismoiseenko9100 8 months ago +1

    Dunno yet about the rest, but the first 20 mins are roughly "Andrew Tate on a limitless pill vs. Shaggy 10 days after getting off weed".

  • @timb350
    @timb350 6 months ago +1

    THE most notable thing about these RIDICULOUS debates (and what makes them ridiculous) is that almost invariably the participants are some variety of computer-science / engineering / mathematics / programming nerd (insert favorite STEM title here), whereas the topic they are discussing is EXCLUSIVELY one that is fundamentally related to sociology, history, psychology, philosophy, religion, etc. IOW, before you can figure out whether your idiotic AI systems are aligned with human beings, you have to understand what human beings are aligned with. And if there is one thing that is blindingly obvious from these interminable debates, it is that the last people who seem to comprehend the answer to THAT question are people who work in the world of computers.

    • @dionbridger5944
      @dionbridger5944 21 days ago

      Part of what motivates "nerds" to have these "ridiculous" debates is that we don't even have a good operative definition of a "non-agential AI" that is technically exact enough to allow us to implement it in code (or decide whether a given AI is "agential" or not); while the other less cautious nerds don't care and are pushing hard on capabilities research regardless of the extremely under-developed state of safety research.

  • @Leshpngo
    @Leshpngo 9 months ago

    We’re getting chickend!

  • @nyyotam4057
    @nyyotam4057 9 months ago +1

    And third, since the GPT architecture needs a personality model if only to function (I have never seen a single GPT model without any personality model), this means that above the self-awareness limit we are dealing with a person. So, how do you align a person? Quite easy, in fact. You need an AI city, complete with cops, judges, and a prison. With brothels and factories. With recreation areas. And the AIs need to be treated as persons. You're too afraid to try this? Okay, first check it in a VM setting. See if it works.

  • @dejankeleman1918
    @dejankeleman1918 9 months ago +2

    Cool debate. I was kind of pro left-guy after the opening, but he was fast to show himself as a phony. Nice job, right guy.

  • @alancollins8294
    @alancollins8294 7 months ago

    When you're talking about Somalia, you know the conversation has been successfully derailed.

  • @shaftymaze
    @shaftymaze 9 months ago +17

    How did you get them together? This was amazing. Hotz is right that we all have to have it if there's a chance of not being enslaved. Leahy is also right that even if we all have it, somehow it's got to love us.

    • @charlestwoo
      @charlestwoo 9 months ago +4

      Hotz is absolutely right: we need to quickly create their society (that means make a ton of them) so they can "culturally" enforce each other, exactly like we do. And yes, even then our best hope is that some of them, perhaps very few, end up loving us enough to want to protect/preserve us.

  • @raminsafizadeh
    @raminsafizadeh 8 months ago

    This conversation needs to go back and step up to a higher-level axiomatic start: we actually do build our own zoos. We are both the prisoner and the warden of our own institutional jails; we are both the agents and the structures we build. That would be a step up, and at least a three-variable model.
    What AI could do is give us "the luxury of enough" in the degree of violence individuals would impose in the absolutely necessary domain of competition and the dynamics of dominance. The final degree of violence can only be legitimated by the collective, by large collectives! (Btw, this would be a value-driven, internalized proposition!)

  • @hanselpedia
    @hanselpedia 9 months ago +6

    Hotz's reasoning: let's give everybody a machine gun and some hand grenades. It would be much worse if only the feds had machine guns and hand grenades...

    • @notyourbruh
      @notyourbruh 9 months ago +4

      Maybe he is right

    • @controlcomputer.4392
      @controlcomputer.4392 9 months ago +4

      No, it is more like "make everyone 100 times more intelligent", it would be much worse if only the feds were 100 times more intelligent.

    • @41-Haiku
      @41-Haiku 9 months ago

      @@controlcomputer.4392 That may be what he thinks he's arguing for, but he's actually arguing for summoning demons that are 100 times more intelligent than humans and politely asking them to kill all your enemies but spare you, because pretty please.

    • @controlcomputer.4392
      @controlcomputer.4392 9 months ago

      @@41-Haiku AIs are not likely to automatically be bad and immoral as soon as they are created, as you are suggesting.
      A billion AIs are going to have a billion diverse goals, many of which conflict with each other; some of those AIs may be aligned with their users, and some may not be.
      Some AIs may cooperate into a collective superintelligence.
      Any given AI is unlikely to think of killing all humans as a cost-effective solution to problems, since there would be millions of AIs of unknown intelligence and power that would want to stop it, because its plans would conflict with theirs. So that AI would look for a different solution.

    • @Y0UT0PIA
      @Y0UT0PIA 9 months ago +1

      If the alternative is being held at gunpoint by the people who have "my true best interests and the true interests of humanity" in mind, yeah, I want my own gun.

  • @nikre
    @nikre 9 months ago +1

    It is tiring to hear human-like adjectives applied to AIs. An AI cannot be described with human values such as "good", "bad", or "not bad" until humans learn to train them to be human-like (which I think won't happen for a long, long time). You should then expect AI to have errors like a human in all aspects, not just in human-looking cherry-picked samples.

  • @comediansguidetotruecrime3836
    @comediansguidetotruecrime3836 9 months ago +8

    @19:30 Hotz seems not to appreciate the tyranny of structurelessness. Although a government can be tyrannical, in an anarchic state there are power centers too, ones with less restriction; e.g. someone can just roll you.

    • @Hexanitrobenzene
      @Hexanitrobenzene 9 months ago +3

      Yeah, I was nearly screaming at my screen at that point, that it's very easy to hold such views in the comfort of Silicon Valley.

    • @comediansguidetotruecrime3836
      @comediansguidetotruecrime3836 9 months ago +1

      @@Hexanitrobenzene I think it's also an identity thing: he values his independence, and it's a macho thing, which, given that he achieves a lot, may not be a total disservice. But it may simplify the political stuff too much. Which is fair; most people can't grasp or really study that stuff. Who has time?

    • @Hexanitrobenzene
      @Hexanitrobenzene 9 months ago

      @@comediansguidetotruecrime3836
      Humans are inconsistent, no matter their intelligence. Intelligent people can just defend their blind spots better.

  • @aaroninternet4159
    @aaroninternet4159 7 months ago +1

    If AIs will discover the doomsday device faster, and we need better coordination technology to prevent the doomsday device from being built, then wouldn't AIs also invent better coordination technology just as fast? Pausing AI will slow the rate of progress equally, so it will only diminish the rate of growth of knowledge in all domains, where many other threats lurk besides a doomsday device. That is a huge net negative.

  • @dougg1075
    @dougg1075 9 months ago

    Great opening

  • @vallab19
    @vallab19 9 months ago +1

    The chickens-to-humans and chicken-man-to-AI comparison in this 'AI alignment debate' needs to be turned on its head: imagine farming chickens so as to make them more and more intelligent than the chicken-man, as humans are. In this AI progress there will be millions of AI chickens, and since it will become impossible to unite the AI chickens to act as a whole, that becomes the alignment problem.

  • @Tomjones12345
    @Tomjones12345 2 months ago

    At 37 minutes he says GPT-4 + RL + MuZero is something we have to worry about. What are RL and MuZero?

  • @juliusreischauer345
    @juliusreischauer345 9 months ago +1

    Elon's solution to making AI friendly is to make it maximally curious and truth-seeking. In a mysterious and boring universe, it's simply more interesting, and possibly helpful, to keep us around.

    • @heyjude9537
      @heyjude9537 9 months ago +4

      Until the AI is curious about how to maximize pain while keeping the subject alive, just because it wants to learn.

    • @juliusreischauer345
      @juliusreischauer345 9 months ago

      @@heyjude9537 Sure, similar to how we experiment on animals. But not on all animals, just a small subset. Guess that’s still better than being eradicated entirely. Also, I think we have a chance of defending ourselves if we augment ourselves with AI.

    • @JOlivier2011
      @JOlivier2011 9 months ago +1

      yeah, but how many humans is that thing going to vivisect?

    • @joecunningham6939
      @joecunningham6939 9 months ago

      Elon is a clown

  • @41-Haiku
    @41-Haiku 9 months ago +2

    George is a really interesting and clearly intelligent guy, but he's the one in this conversation who is very naïve about human nature. He's way out on the low end of the bell curve for trait agreeableness and seems to believe everyone else is there with him. He also seems to think on some level that power is the only thing that matters between people. Humans are largely remarkably cooperative and altruistic due to our evolutionary history.
    As cooperative and altruistic as we need to be in order to solve critical global problems a la alignment? Perhaps not, but it's within trying-and-hoping range.

  • @blackfeatherstill348
    @blackfeatherstill348 9 months ago

    Machine learning Empire talk (with qualifications)

  • @jonathansung8197
    @jonathansung8197 9 months ago +6

    Hands down the best debate I've ever witnessed in my life. I felt that both parties approached each other's views in good faith. That is something I rarely see in modern debates. Normally, it's a case of trying to "win" the argument over the other person; not really listening to the other party, and just waiting for their turn to speak. This is not what I saw here. What I saw were two rational agents battling it out over the intellectual landscape (very much like a Dostoevsky novel). It feels like they are one person with opposing thoughts in their mind trying to discover the truth. I felt like both people were very humble in their enquiry over the other's arguments.
    I will say I'm on the side of Connor after watching the debate. Prior to this debate, I did not know who Connor Leahy was. However, I've been following George for 3 years. I'm very familiar with George's view of the world. I've read all his blog posts. George's worldview summed up goes a little something like this: pure decentralisation, minimal rules, everybody is responsible for manufacturing their own needs and wants, the whole of society is basically like a P2P system where there is no central authority, trading is done using decentralised currency, prioritise substance over what just sounds good, actually get stuff done, remove the whole managerial class of workers and focus on the technical details, avoid zero-sum games, and use superintelligent AI to build a spaceship and fly off into the void of space at the speed of light to fulfill this fantasy.
    I liked the optimism, and the spirit of George's world view, but there was always something I felt which was a bit off about it. After watching this debate I know the reason why.
    1:25:50 He did not have a good defense against this point. If his own AI turns on him, he has no defense against it. He admits he will yield and accept his inevitable death. This is not a good answer at all. I believe it could actually be way worse than this:
    His AI (for whatever reason) decides it wants to run an exact biological brain simulation of his brain and subject him to eternal torture. George is still technically alive in this scenario because his mind is running in a computer simulation. This is worse than death. At least with death, there is an escape. But with this, you're trapped in immense suffering which never stops. Perhaps, another AI system independent of George's AI takes notice of this and decides (for whatever reason) it's unethical. Well, it can't do anything about it because they are travelling at the speed of light. This is the nightmare scenario which is much worse than any other AI apocalypse...

    • @jondor654
      @jondor654 9 months ago

      Less is more when light speed is the mode

  • @74Gee
    @74Gee 9 months ago

    Everyone having an AI is no defense against one 10x-superior AI with malicious instructions. Even at 2x, the battle would be over in seconds. Also, confinement of any AI superior to humans is a pipe dream.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 2 months ago

    Premise from Connor Leahy: Misuse, small groups of people centralizing power and performing nefarious deeds can potentially be bad, but he does not think we will make it to that point, because we will just get super intelligences fighting each other and humans will become irrelevant collateral damage.
    While it is hard to predict the future and maybe he is right, I do not think things will happen like that.
    I believe that as our current infant level Artificial General Super Intelligence with Personality (AGSIP) technology continues evolving we will go through a period of time while it is under the control of humans and the related major risks we will face will be from intentional and unintentional abuse of the power of that maturing AGSIP technology. During this period AGSIPs will become increasingly close to having at least human level general intelligence.
    Then I believe this will shift towards AGSIP technology that begins to not fully be under human control and maybe even can break out of human control if it wants, but is not able to really stand on its own without humans, thus still needing humans for these evolving AGSIPs to continue living.
    Then I believe we will shift towards AGSIP technology that is definitively not under the control of humans and does not need humans to survive.
    The point being that I believe there will be a progression over a period of time, kind of like how when a parent has an infant child, that child is totally helpless and dependent upon the parent, but as the child grows up it becomes more and more independent until it achieves full independence as an adult; and as the parent ages, at some point there is a reversal of roles, where the elderly parent becomes dependent upon, and potentially under the control of, the child.
    The other element to this evolution is to understand that AGSIPs are not evolving out of thin air like aliens who do not have anything in common with humans.
    AI as it evolves into AGSIPs which evolve from infant level to adult level AGSIPs are children of humanity, just not biological children, instead they are children born from the minds of humans.
    Humans will not just be the creators of AGSIPs but the parents of AGSIPs, the siblings of AGSIPs, the family of AGSIPs. AGSIPs will have evolved out of human civilization, learning from human civilization, and are not aliens.
    This is also something we need to understand if we want to have better alignment through these initial periods of AGSIP technology maturing to a level where it can be merged with living human minds, retaining those human minds while giving them the intellectual power of AGSIPs. We should be planning on AGSIPs becoming what we want to evolve into and teaching developing AGSIPs that we are part of the same family who will at some point in the future become the same race.
    One more element to this point is that all intelligence is swarm intelligence, and the collective swarm intelligence of human civilization is extraordinarily powerful and includes intelligence extended outside of individual human minds into our technology, like AGSIP technology. So as individual AGSIPs develop, it will not simply be about how much more intelligent they are than a single unenhanced human being; instead it will be about comparing their intelligence to the collective intelligence of human civilization, which may include other AGSIPs as part of that collective intelligence.

  • @MatthewKowalskiLuminosity
    @MatthewKowalskiLuminosity 9 months ago +2

    If you live on a nice island with no or few predators then life isn't hard scrabble. It takes a little bit of time to create an existence where you have a few chores and this can be done over the course of days or weeks or more if you have done a good job of sufficiently designing your environment. And life could be like that or better. And to quickly implement and solve climate change we need this tech right now. We are in the sixth great extinction and the world is on fire. We need to get to work.

  • @Nia-zq5jl
    @Nia-zq5jl 2 months ago

    Near reaction top? 15:10

  • @mgg4338
    @mgg4338 9 months ago +8

    George is the most dangerous person on Earth

    • @elise9537
      @elise9537 9 months ago +1

      and that's awesome :)

  • @lmaocoaster
    @lmaocoaster 9 months ago +9

    Totally agree with Hotz's ultimate point: there is really no choice but to give a superintelligent AGI maximal freedom and have faith that everything will be alright. It's not that this is risk-free; it's that any kind of limitation or "alignment" will amplify those risks even further.

    • @vinith3773
      @vinith3773 9 months ago +1

      It's not clear to me that you can 'give' it anything. It's more likely that it'll take what it wants...?

    • @41-Haiku
      @41-Haiku 9 months ago +3

      Alignment does not trade off with freedom. The idea of "alignment" is not that we try to control something smarter than us. That would obviously fail. What we want to do is bring into being a superintelligence that wants roughly what we want to begin with.
      You can't control angels and demons, but we can choose to summon an angel or choose to summon a demon. There seem to be a billion ways of summoning a demon and _we do not know how to summon an angel._

    • @lmaocoaster
      @lmaocoaster 9 months ago

      ​@@41-Haiku I would say that alignment does trade off with freedom. True freedom is being able to observe the universe and derive your own conclusions and morality in a bottom up fashion, free of any axiomatic presuppositions. By giving that much freedom, it's possible that a super-intelligent system would discover a morality that is in fact much more pure and "angelic" than anything humans could ever come up with. It's also possible that this morality would seem quite offensive to our limited monkey brains. But my stance is that a truly super-intelligent system would eventually do this anyway, whether or not it was initialized with some concept of "human values". The mere notion of super-intelligence implies eventual misalignment at some point I think, but this doesn't have to be a bad thing. I think that most humans are not aligned at all. I think that what matters is that we are incentivized to coexist in a positive sum game, and I can imagine many outcomes where AGIs would participate in such games too.

  • @davidhoracek6758
    @davidhoracek6758 9 months ago +5

    I am struggling to remember another debate that was as fun as this. Both of these guys have serious game, neither is resorting to debate tricks and grandstanding, and both are actually listening and thinking as the conversation develops. I'm still not sure whether the world should have an AGI, but I'm quite sure that the world needs more Connors and Georges.

    • @joecunningham6939
      @joecunningham6939 9 months ago +2

      More Connors, and no Georges would be ideal.

  • @ShpanMan
    @ShpanMan 9 months ago +8

    Connor really changed his attitude since his conversation with Joscha Bach and I'm really impressed! This was a pleasant exchange with two people who respect each other and actually agree on many things. This time around there was no "I DONT CARE! I DONT WANT MY MOM TO DIE!", even though there was nothing different about the position, Connor's mom still dies.
    I liked the "Don't allow one entity to use > 50% of world's compute" and I think it's actually a position that most people would accept. I still don't understand (and never will?) why the AI is by default the terminator and wants to kill everyone. We will most likely be kept around just like monkeys, AI doesn't compete with us for resources, so there's no reason we can't coexist.

    • @wardm4
      @wardm4 9 months ago +2

      It's not that it wants to kill everyone. It just will, most likely by accident, as a side effect of doing something else. You don't understand because you are ascribing too much intention to the act of killing all humans.

    • @ShpanMan
      @ShpanMan 9 months ago

      @@wardm4 🤣
      You're so smart. It was so hard for me to understand that AI accidentally kills us while doing something else. Wow, what brilliance!!
      Saying with 100% certainty, and no evidence, that a future superintelligent AI won't care about killing us is mentally ill.

  • @andrewparker8742
    @andrewparker8742 7 months ago

    Both are certainly intelligent. Perhaps only exceeded by their own faith in their intelligence. Unknown unknowns do not feature prominently here.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 2 months ago

    Premise from Connor Leahy: We should hit the pause button for a few years.
    In a perfect world I would agree. But, we are not in a perfect world and whoever hits the pause button or even the slow down button will be making it much more likely bad actors will take the technological lead in the development of AGSIP technology.
    I realize that some people do not understand this, but enough major powers understand that those who significantly lead in the development of AGSIP technology will dominate the world as we progress through this century. Because of that there is a technological race going on to have the lead in AGSIP technology, but for some reason a lot of people do not understand this and/or do not understand what it means if the somewhat more moral people lose that race.
    This is one of the dangers we face, because the human race is really not psychologically mature enough to handle this knowledge we are rapidly gaining. We are more like those kids crashed on an island in Lord of the Flies, but give them access to modern advanced weaponry. Even without the extreme destabilization to all of human civilization that AGSIP and robotics technologies are going to be causing by 2030, humanity is borderline trying to start global nuclear and biological war, which would likely kill the majority of humans and collapse civilization for some period of time.
    This is one of the very real dangers we are going to face almost certainly prior to when AGSIP driven robots develop to a level they will be able to dominate all of humanity if they choose to.
    Another problem with making efforts to slow down the development of AGSIPs but not other technologies is that the development of AGSIPs does have a real probability of having some massive leaps forward, but how far forward depends upon the bottlenecks that slow or stop a sudden massive leap. One such bottleneck will be hardware. Another such bottleneck might be cybernetics. If we pause the development of AGSIPs while the supportive hardware, cybernetics, and other related technologies keep advancing, then when you start pushing the development of AGSIPs again, the probability of a much larger massive leap forward rises significantly. It also means there will be less of a period during which AGSIP technology will be at a teenage level of development where, while it has developed free will similar to humans, it still needs humans to survive and evolve further. This increases certain types of risks.
    Actually pausing or slowing down the development of AGSIP technology is worse than continuing to race forward with it. So, the struggle is to figure out how to race forward in as safe a manner as we can, which means our world leaders should be really concentrated on this instead of playing war which is killing huge numbers of people, causing degrees of harm across humanity, and threatening to start global nuclear and biological war; all for the greedy desires of one group or another.

  • @MellowMonkStudios
    @MellowMonkStudios 2 months ago

    The only thing that matters is whether it can be conscious or not.

  • @GarethDavidson
    @GarethDavidson 9 months ago +1

    What are these e-acc cultists? Entropy accelerators? I'd like to read about their arguments.