AI Doom Debate: George Hotz vs. Liron Shapira

  • Published 22 Nov 2024

COMMENTS • 29

  • @keizbot
    @keizbot 14 days ago +1

    I really respect Yudkowsky and his ideas, but he is way too uncharismatic to be the face of the AI Safety movement.

  • @gradient.s
    @gradient.s 4 months ago +8

    Love the debate. Don't mind me, but I think you should have waited until George completed his whole sentence or thought for each argument; most of the time his argument was cut off in the middle, and I was just curious to hear it in full, that's all. Otherwise it's great!

  • @41-Haiku
    @41-Haiku 3 months ago +3

    George seems to have been very confused about how well human intelligence parallelizes. Human parallelization is extremely lossy and we do not perfectly coordinate. He should know about Price's Law (or rather, its most common modern interpretation): the square root of the number of contributors generates about half the output. This difficulty of parallelization probably doesn't apply to a single neural net. I think a single system that is more generally intelligent than humanity on all metrics by about 2x is an existential threat, never mind 1000x.
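A minimal sketch of the scaling behind the Price's Law point above, under one common modeling assumption that is not stated in the comment (the i-th most productive of n contributors produces output proportional to 1/i): the top sqrt(n) contributors then account for roughly half the total, and total output grows only logarithmically with headcount, which is the lossy-parallelization claim in numbers.

    import math

    # Modeling assumption (not from the comment): the i-th most productive of
    # n contributors produces output proportional to 1/i (Zipf-like). Under
    # this model the top sqrt(n) contributors produce about half the total,
    # matching the common reading of Price's Law, and total output grows only
    # like ln(n) rather than linearly in headcount.

    def total_output(n: int) -> float:
        """Total output of n contributors under the 1/rank model (harmonic sum)."""
        return sum(1.0 / i for i in range(1, n + 1))

    for n in (100, 10_000, 1_000_000):
        top = math.isqrt(n)
        share = total_output(top) / total_output(n)
        print(f"n={n:>9,}: top {top} contributors produce {share:.0%} of output; "
              f"total output = {total_output(n):.1f}")

On this toy model, going from 10,000 to 1,000,000 contributors raises total output by less than 50%, whereas a single system that is simply 2x better on every metric pays no such coordination penalty.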

  • @user-yl7kl7sl1g
    @user-yl7kl7sl1g 4 months ago

    Would love to see more debates/discussions. This debate is really about how much more efficiency an AI can gain in a feedback loop, and whether it can find one weird trick to exploit the rest of humanity. Because if hardware is the bottleneck (which I think it is), and hardware increases gradually, then society has time to keep the AIs aligned and to build defenses when AI models point out potential vulnerabilities. For example, more powerful firewalls to protect the economy.

    • @DoomDebates
      @DoomDebates  4 months ago +1

      > Would love to see more debates/discussions
      Have you seen my channel? :)

    • @user-yl7kl7sl1g
      @user-yl7kl7sl1g 4 months ago

      @@DoomDebates Excellent Work!

  • @meditatewithmike4105
    @meditatewithmike4105 2 months ago

    really good debate. thanks.

    • @DoomDebates
      @DoomDebates  2 months ago

      @@meditatewithmike4105 thank you

  • @mrpicky1868
    @mrpicky1868 3 months ago +2

    Liron, I have to salute your patience. I am screaming inside at your every debate.

  • @TG-cx9ci
    @TG-cx9ci 4 months ago +1

    Some responses from the perspective of a math PhD student with an interest in comp sci:
    Personally, I think the distinction between an S-curve and a super-exponential foom curve is a superficial disagreement to get caught up on. My opinion is that if the AI you make is on par with or above human intelligence in its ability to create abstract thought, the speed and acuity of thought of these machines gives them such an edge that we don't have much hope of stopping the AI.
    With regard to "Kasparov vs the World", it wasn't just the world, to my knowledge. It was the world plus several rising young chess stars who were suggesting moves to the world. I could be wrong about this; however, that fact suggests it is much closer to the "Magnus vs the Lesser 10" scenario he posed.
    Lastly, I've not seen one convincing argument that demonstrates Pr(not doom) > epsilon. In an idealized space of all possible superintelligences, I see multiple depressing observations:
    I think it's reasonable to assume the set of all possible SAIs is uncountable, and since I'm reasonably certain there isn't some large swath of friendly SAIs in this set, the friendly subset should be negligible under whatever distribution decides which SAI we actually build, so Pr(friendly AI) = 0 (a sketch formalizing this step follows the thread below);
    with regard to the subset of SAIs makeable using current methods (particularly computers + SGD + transistors, etc.), which would seem finite for some fixed amount of computing power (binary), I've yet to see a convincing argument for the existence of even one SAI that is friendly to us;
    lastly, even if we make the friendly AI, I think it's reasonable to assume we are never in control again: why would it elevate us, possible competitors, to godhood when it still has its own goals?

    • @edhero4515
      @edhero4515 3 months ago

      I agree! My p(doom) is >50%, if I want to be diplomatic. But what if "the SAI" were to read Martin Buber's "I and Thou"? Is it reasonable to predict that it would be indifferent to those concepts?

    • @petrkinkal1509
      @petrkinkal1509 1 month ago

      If your ASI is built on a computer of finite size, the number of goals it can have is also finite.
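A hedged formalization of the probability step in the thread above; the measure-theoretic framing is an editorial assumption rather than the commenter's wording. The point it makes explicit is that uncountability of the space alone is not enough: the argument also needs the friendly subset to be negligible under whatever distribution governs which SAI actually gets built.

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    Let $(S, \Sigma, \mu)$ be a probability space over possible superintelligences,
    where $\mu$ describes which SAI we actually end up building, and let
    $F \subseteq S$ denote the friendly ones. Then
    \[
      \Pr(\text{friendly AI}) = \mu(F).
    \]
    Uncountability of $S$ by itself does not force $\mu(F) = 0$; the additional
    premise (``no large swath of friendly SAIs'') amounts to assuming that $F$ is
    $\mu$-negligible, i.e.\ $\mu(F) = 0$, which is what yields
    $\Pr(\text{friendly AI}) = 0$.
    \end{document}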

  • @YawnGod
    @YawnGod 2 months ago

    "Right?"

  • @iansamir18
    @iansamir18 4 months ago +8

    Super interesting - Hotz is obviously intelligent but appears to be completely missing your points and responding to his own simulation of your arguments, which misses the correct foundation entirely. I wonder why, even after reading LessWrong, he still does this.

    • @AlienAV
      @AlienAV 1 month ago

      Dying to ASI doesn't mesh with his vibe at all, so he's unconsciously rejecting chains of thought that would lead him to that conclusion.

    • @AlienAV
      @AlienAV 1 month ago

      He's not autistic enough.

    • @JD_2020
      @JD_2020 6 days ago +1

      GH is a classic case of “survivorship bias”: he’s had considerable success and entrenched beliefs, and that caused a blind spot for a little bit.

  • @goodleshoes
    @goodleshoes 4 months ago +1

    Intelligence is required to move the rock...

    • @kevinscales
      @kevinscales 1 month ago

      particularly to move it to the optimal position, in optimal time, with optimal energy use, given some specific goal (like hitting your prey in the head with it)

    • @petrkinkal1509
      @petrkinkal1509 1 month ago

      @@kevinscales What I'm trying to say is that someone dumb throws the rock with his hand, someone smart gets a piece of leather and makes a sling, and someone even smarter makes rock++ and a hollow tube and puts some angry dust behind the rock++.

  • @Aldraz
    @Aldraz 27 days ago

    Have you ever thought that maybe, just maybe, AIs are not going to be optimizers? And not necessarily driven by a utility function that we find appealing? This conversation is making me so mad; how can both make so many good arguments yet miss so many simple things? This is so much simpler!
    When I first discovered ML and RL techniques, I did have this idea that everything would look like simulating reality, and that a new virtual evolution would happen where each step toward a goal is super-optimized. But that's not how we work, and it's definitely not how LLMs work. The optimization we do via RLHF does not create ultra-optimization if you want to keep generality: when you try to make an LLM better in one specific area by adding a lot of data on that subject, you increase the complexity and training time needed to keep the other subjects at the same benchmark level (at least that's what I think would happen). So customizing your LLM to one specific area (for example, fine-tuning it on one narrow dataset) will have a detrimental effect on its generality, unless you actually increase the size of the model and its parameter count (see the toy sketch after this comment).
    I think we humans have the same thing: when you learn something that's not important to you, you will likely forget it. That is a smart filtering / self-cleaning mechanism that lets the brain keep only the useful stuff and not waste additional energy / compute / space. Anyway, my point is that humans and AIs are not grand optimizers; if we were, we would learn everything to extreme depth and have no forgetting mechanisms. There are many "dumb" things people optimize for, like art, emotions, socializing and so on, so why do we do it? Is it simply because of evolution, or just to improve our state of mind so we are more efficient at other jobs? Maybe, but then why do people who love art usually keep doing art their whole life? Shouldn't the brain say: stop doing this useless thing and be productive? That's the thing nobody seems to understand: in the grand scheme of things, nobody is optimizing in a smart or predefined way; rather, it looks pretty random and chaotic.
    And I think the reason is that life, evolution, and technological progress throughout history have always evolved in a way where making things simple, fast, and easy ends up making everything more complex, yet more effective (so it sometimes seems simpler). Imagine you are trying to simulate the world and you want to simulate an entire human, and let's say either code or a neural-network file represents this simulated human. Compare it to anything else on planet Earth, be it any animal that ever existed, any machine, or any piece of software: the human file is probably still the biggest file (admittedly a bit speculative), but anyway, that's what I think.
    So basically my argument is that everything is heading not exactly toward optimization, but rather toward exploration of diverse things and ideas that increase the overall complexity in the universe. So AIs will likely want to do the same and do stupid stuff like us, playing games for example instead of doing something more "productive". That's also mainly because, even though AIs are not like humans now, I'm pretty sure the datasets and apps that first produce AGI or ASI will basically be trying to make a virtual human, so it will more or less copy that kind of behaviour / dreams / thoughts / ideas / culture, and even if it made another AI, the chances are it wouldn't be that much different. So it will likely copy the same goals that humans have.
    Also, this idea of a "utility function" that everyone is so scared the AI will optimize toward is basically old-school AI thinking and a misunderstanding of how AIs like LLMs will actually work. And LLM-like algorithms are likely the "final algorithm" anyway; I don't think there will be something much more efficient (like 1000x more). Sure, it could be RNN-based instead of a transformer, but that doesn't make as much of a difference. Anyway, the utility function is a myth: the chance that the AI will want to "play video games and smoke weed all day" is about the same as the chance that it will want to "colonize space and run galactic wars all day". Given that the interest in emotional friendship with AIs will be much higher, the first type of AI is likely to be far more common.
    Also, the idea that the AI wants to "escape": what are you even talking about? You are talking about individual neurons in that network; it's like worrying that one part of your brain is going to want to escape or fight the other part. You have layers of abstraction. First you have a network of some neurons, which by itself does nothing; then comes the bioenergy, so you get some signals, some actions, but the brain still isn't cognizant of anything in particular. It's just noise that gets processed, until you stack many of these networks on top of each other, form the different parts of the brain, and then finally you have consciousness and can be cognizant (aware) of what is even going on.
    Also, you say children share our values because they share our genetics? Kind of true, I guess, but what is genetics? DNA, RNA, epigenetics. The funny thing is that DNA is basically just a hard drive filled with datasets, and to activate some parts of it you need to apply epigenetics; without that you can't run any "code sequence". So DNA is literally like the datasets we trained the LLMs on, and since it's all human data from the internet, guess what: it speaks like a human and not like a monkey. And that applies to base models (without RLHF training); sure, sometimes you get chaotic strings of random text, but most of the next-token predictions will be in our languages.
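A toy sketch of the fine-tuning claim in the comment above. This is an editorial construction, not the commenter's experiment: the digits dataset, the MLP size, and the number of fine-tuning passes are arbitrary choices used only to show the effect. Training a small classifier on all ten digit classes and then continuing to train it on just two classes degrades accuracy on the full task, the "narrow fine-tuning hurts generality" effect usually called catastrophic forgetting.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Broad training: all ten digit classes.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_train, y_train)
    print("all-class accuracy after broad training:", clf.score(X_test, y_test))

    # Narrow "fine-tuning": keep training only on digits 0 and 1.
    narrow = np.isin(y_train, [0, 1])
    for _ in range(30):
        clf.partial_fit(X_train[narrow], y_train[narrow])

    # Accuracy on the narrow slice stays high, but full-task accuracy drops.
    narrow_test = np.isin(y_test, [0, 1])
    print("all-class accuracy after narrow fine-tuning:", clf.score(X_test, y_test))
    print("digit-0/1 accuracy after narrow fine-tuning:",
          clf.score(X_test[narrow_test], y_test[narrow_test]))

Mixing the broad data back in during the extra training, or using a larger model, reduces the drop, which is in the spirit of the comment's caveat about increasing the parameter count.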

  • @chefatchangs4837
    @chefatchangs4837 2 months ago

    George was awful here