ChatGPT and the Chinese Room

  • Published 10 Nov 2024

COMMENTS • 11

  • @cthoadmin7458 · 1 year ago +1

    So does it matter that ChatGPT can't think? That it deals with syntax, not semantics? Given how shockingly good ChatGPT is now, and that it's getting better all the time, if we can't tell the difference between it and a human at the other end, then who cares? If it quacks like a duck, flies like a duck, and can pass any test of duckness, then does it really matter if it isn't a duck?

    • @sonsieface · 1 year ago

      I don't think it really matters, as you suggested.

    • @philosophizeyourlife220 · 4 months ago +1

      No skepticism

    • @ChristianIce · 4 months ago

      ", if we can't tell the difference between it and a human at the other end, then who cares?"
      I do, and the more you can't tell the difference, the more I care.

    • @cthoadmin7458 · 4 months ago

      @ChristianIce Why? Wouldn't it be nice to converse with an AI philosopher? Machines might have all sorts of useful insights into the human condition that humans don't.

    • @ChristianIce · 4 months ago

      @cthoadmin7458
      Because I'm tired of the doomsday scenarios, of the lies Sam Altman spits out to get more hype and funding, of the rampant paranoia and the ignorance around it.
      LLMs cannot think; they don't understand a single word. That's a fact.
      Once that's clear, we can have all the fun we want with it ;)
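
A toy sketch can make the "text prediction" point in this thread concrete: a bigram model picks continuations by counted frequency alone, with no semantics anywhere in the pipeline. (The corpus below is invented for illustration; real LLMs learn a vastly richer version of the same statistical mapping.)

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next word purely from
# co-occurrence counts over a corpus. There is no meaning anywhere,
# only statistics over symbols. (The corpus is made up for this
# illustration.)
corpus = "the duck quacks and the duck flies and the duck swims".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev: str) -> str:
    """Greedy decoding: emit the most frequent continuation seen in training."""
    return counts[prev].most_common(1)[0][0]

print(predict("the"))  # -> "duck"
print(predict("and"))  # -> "the"
```

Whether a system built this way "understands" anything, or whether the question even matters, is what the commenters disagree about.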

  • @googleyoutubechannel8554 · 11 months ago +1

    The hubris in the original argument, and something almost nobody seems to have noticed in the past 40 years, is that whether the person in the room understands Chinese is not an interesting question at all and misses any core issue in AI. It's too bad this argument is famous; it's sound and fury swatting at nothing.
    To get anywhere, 1) Searle would have had to propose a thoroughly fleshed-out model of what 'understand' means in the context of all information processing, including human brains, and 2) Searle would have to show why the Chinese Room system AS A WHOLE does not meet the requirements of that model, and why that's interesting. He did no such thing. The Chinese Room argument is a boring waste of time, and since it's the most famous framework with any relevance to modern AI, our popular philosophers and leaders have left us in a pretty sorry state to even talk about these issues.

    • @sonsieface · 11 months ago

      Interesting. Personally, the Chinese Room analogy helped me get a sense of what is going on with these LLMs (of course, I may be completely misunderstanding the whole thing, and I'm not being sarcastic). Never mind; it sounds like AGI is closer than most of us believed, so all this may be moot pretty soon.

    • @ChristianIce · 4 months ago

      "is not an interesting question at all "
      It's actually the other way around.
      The fact it every day it *seems* more realistic underlines the importance to remind everybody that there is not a thinking agent behind text prediction.
      Once that is clear, all the doomsday scenario based on AI being "dangerous" are put to sleep, and we really need a little less paranoia on this planet.

    • @rockprime1136 · 1 month ago +1

      It's not hubris, really. The Chinese Room thought experiment perfectly encapsulates how a Turing machine functions, and every computer, no matter how technologically advanced, is one. Ultimately they are all just electronic switches turning on and off. Searle just added a twist by replacing the mindless machine doing the symbol manipulation with an actual intelligent human, which I have noticed confuses a lot of people.
      Your first point is unfair to Searle, since no one knows whether human brains actually are computers, or specifically Turing machines. Modeling "understanding" in physical or mathematical terms seems impossible, yet humans intuitively know what it is. Your second point, the Systems Reply, implies that any intelligent system is irreducible to its components, like some sort of black box. Disassemble that black box and somehow the "understanding" disappears. Ask the builder of that black box how it "understands" and he will insist that it just does. That's not an explanation at all. Might as well postulate the existence of a soul.
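
The symbol manipulation this thread argues over can also be sketched directly: the room's operator matches input symbols against a rulebook and copies out the listed response, pure syntax end to end. (The rulebook entries below are invented for illustration; Searle's thought experiment assumes one large enough to pass for a fluent speaker.)

```python
# Toy "Chinese Room": the operator (or CPU) looks up an input string
# in a rulebook and emits the listed response. Nothing in the process
# refers to what the symbols mean. (Entries are made up for this
# illustration.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会思考吗？": "当然会。",     # "Can you think?" -> "Of course."
}

def operate(symbols: str) -> str:
    # Find the pattern, copy out the listed output. The operator never
    # needs to know what either string means.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(operate("你好吗？"))      # -> 我很好，谢谢。
print(operate("你会思考吗？"))  # -> 当然会。
```

Whether the operator-plus-rulebook system as a whole nonetheless "understands" Chinese, as the Systems Reply holds, or nothing here understands anything, as Searle holds, is exactly the disagreement in the comments above.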