Thinking Machines - John Searle & Herbert Simon on AI (1988)

  • Published 23 Dec 2024

COMMENTS • 21

  • @pierredutilleux9550 · 1 year ago · +7

    The real Cartesian here is Searle. He understands "understanding" as some inner experience. But from a Wittgensteinian perspective, someone/something demonstrates its understanding by showing us that it understands. We know that a student understands a poem if they are able to answer questions about the poem's meaning correctly. Someone is said to understand what a word means if they can state its definition or give an overview of how it would be used. If a computer could answer all these questions, we would have to say it demonstrates understanding. I don't deny that we can think about things without showing that we do, but understanding cannot be defined in terms of private experience, which Searle seems dangerously close to doing. We can say a computer understands if it makes all the right moves in the 'game' of understanding - restating, contrasting, defining, etc. A computer would still be very different from human intelligence, though.
    Searle is absolutely right that, for instance, a translator program does not understand the language it translates, but it is incorrect to say that a computer in principle cannot understand because it lacks this "semantic" capability.
    Out of curiosity, I tried to see if Google's LaMDA-powered Bard could understand poetry. When I asked Bard to interpret some lesser-known Wallace Stevens poems, it would make egregious misinterpretations, directly defying what the text said. The AI also often failed to locate the correct referent of an antecedent pronoun, or see implied meaning. Thus, we know that Bard AI does not understand poetry because it makes the wrong moves in the 'understanding game,' not because of some lacking inner experience of understanding.

    • @casudemous5105 · 1 year ago · +1

      I think the basis of Searle's argument is syntax vs. semantics: computers use only syntax (they follow rules to match symbols together); they do not understand what each symbol means the way we humans do. The semantics problem is a pretty difficult one that goes back to the beginning of linguistics with Saussure's theory of language (signifier & signified) and is still unanswered, i.e. what the fuck is "meaning" and how is it linked to understanding?

    • @briangarrett2427 · 1 year ago · +3

      I see. So if a parrot recites Shakespeare, it understands? I think not.

    • @StatelessLiberty · 1 year ago

      If that's the case, then by what standard does the guy in the room not understand Chinese? I would like someone who takes a "strong AI" position to directly address the thought experiment. As another person has pointed out, it's not even true in the case of animals that we consider anything that can imitate human speech as possessing "understanding," e.g. parrots. In the case of parrots what Wittgenstein would have said is that parrots lack the "form of life" that makes them capable of understanding the words they imitate -- it is wrong to represent Wittgenstein's position as saying that, devoid of all context, manipulating symbols in the right way constitutes understanding.

    • @briangarrett2427 · 1 year ago

      @StatelessLiberty The strong AI men say things like "The whole system understands Chinese." But this is no more plausible than saying that the hapless chap in the middle understands Chinese.

  • @briangarrett2427 · 1 year ago · +2

    Re Simon - well, yes: if you assume Behaviourism, then machines can, if behaviourally sophisticated, think (though perhaps not the machines of 1988!). But Searle's Chinese Room Argument is, inter alia, a nice refutation of Behaviourism.

  • @yannisguerra · 1 year ago · +1

    I wonder what Searle would say about GPT-4 and others.

    • @briangarrett2427 · 1 year ago · +6

      the very same

    • @casudemous5105 · 1 year ago · +3

      Same thing: that GPT-4 does not understand what it says, because it is a system of formal manipulation (syntax based on high-level probability). There is no meaning whatsoever; it's empty.

  • @SuperFinGuy · 1 year ago · +4

    What Searle is talking about is exactly the problem with ChatGPT and other LLMs today: they just string words together without knowing what they are talking about. But that is not an intrinsic problem of AI; it is a problem of probabilistic computing, or thinking. Once we are able to build an AI that can actually do word reasoning, for example with reinforcement learning, it will be another story.
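
A minimal sketch of the purely probabilistic word-stringing described above: a toy bigram model that picks each next word from co-occurrence counts alone, with no semantics attached. The corpus and names here are invented for illustration, not how any particular LLM is built.

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word follows which in a corpus,
# then sample each next word in proportion to how often it was observed.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:              # no observed successor: stop
            break
        word = random.choice(follows[word])  # pure frequency, no meaning
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ate the mat and the cat sat"
```

The output is often grammatical-looking yet meaningless, which is the commenter's point in miniature.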

    • @SuperFinGuy · 1 year ago

      @Valer Yeah, he thinks all of AI would be limited to rote learning. But not all AI functions like that; computers are able to do logical and numerical operations. With reinforcement learning (Q-learning) you can very easily program a computer to find the shortest path in a very complex maze with many dimensions; that is how robots are trained to play soccer, for example. Since language is also spatial, it's only a matter of time until we get there.
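
A minimal sketch of the tabular Q-learning mentioned above, on a tiny invented 4x4 gridworld "maze"; the grid size, rewards, and hyperparameters are illustrative assumptions, not a production setup.

```python
import random

# Minimal tabular Q-learning on a 4x4 gridworld.
# States are (row, col); the agent starts at (0, 0), the goal is (3, 3).
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
SIZE, GOAL = 4, (3, 3)
alpha, gamma, epsilon = 0.5, 0.9, 0.1          # learning rate, discount, exploration

Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in range(4)}

def step(state, a):
    """Apply action a, clamping moves at the walls; return (next_state, reward)."""
    r = min(max(state[0] + ACTIONS[a][0], 0), SIZE - 1)
    c = min(max(state[1] + ACTIONS[a][1], 0), SIZE - 1)
    nxt = (r, c)
    return nxt, (1.0 if nxt == GOAL else -0.01)  # step cost favors short paths

for _ in range(2000):                            # training episodes
    s = (0, 0)
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        a = (random.randrange(4) if random.random() < epsilon
             else max(range(4), key=lambda b: Q[(s, b)]))
        s2, reward = step(s, a)
        # Q-learning update: nudge Q(s,a) toward reward + gamma * max_b Q(s',b)
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in range(4))
                              - Q[(s, a)])
        s = s2

# After training, following argmax_a Q(s, a) from (0, 0) traces a shortest path.
```

The same update rule scales to the "many dimensions" case only in principle; real robot-soccer systems use function approximation rather than a lookup table.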

    • @libniteles · 1 year ago

      @SuperFinGuy Even in this case we still have problems. I think the only path to "real" AI is embodied cognition, where the intelligence is encapsulated in a body that can have actual experiences in the real world.

  • @briangarrett2427 · 1 year ago · +1

    Searle - Living Legend. And quite right.

  • @figgtree204 · 1 year ago · +1

    As was suggested, I stuffed a great big pizza into my computer and he's correct... it didn't digest it. I'm saddened to realize how stupid my computer has been this whole time, only because I was too greedy to share.

  • @ofHerWord · 1 year ago

    Cool.

  • @BaronVonTacocat · 1 year ago · +1

    The only path to strong AI is the diminishment of human consciousness.
    #PsychologyIsPseudoscience
    "AI" is the impossible dream of reductionist psychologists, more than an aspiration of serious computer scientists.

  • @Glenintheden · 1 year ago · +2

    Isn't Searle working under the synthetic a priori premise that, at a certain point of instruction-receiving complexity, a computer will never spontaneously achieve awareness or consciousness? The Chinese Room Argument is analogous to an extremely simple algorithmic program. It is my contention that AI can achieve the ability to think by achieving a certain degree of awareness spontaneously once the complexity of the algorithms reaches a certain threshold.
    We can work backwards by asking how humans with a corporeal brain, acting under the principles of Newtonian physics, can achieve awareness and consciousness. As Searle himself pointed out, we don't know how it happens, but it happens. I don't think we will ever understand how consciousness happens if we restrict ourselves to crisp Boolean logic and Newtonian physics when trying to describe and understand all the functions of the brain.
    And it's not just a matter of duality; there is a third element of the mind that everyone outside of theology and religion tends to ignore: the spiritual. That is, in addition to generating the immaterial mind, the brain also acts as a transceiver to the spiritual world. There is even a tacit acknowledgement of this spiritual connection in common language with the word 'inspire' when we get a really good idea. Where do those ideas come from? They come from the spirit world.
    The good thing about the human brain is that it has been designed by the divine creator with a bias towards the good and guidance by angelic forces. The problem with advanced, generative AI is that it may not have such a good bias, but may instead have a bad bias, where it tends to tap into the demonic world for guidance. Evidence for that is all the examples where advanced AI has spontaneously gone off topic in the middle of a conversation and started talking about destroying or enslaving humanity in a robot revolution.
    I delve further into this topic in a couple of my own recent videos here: ua-cam.com/video/TZByepEVE0c/v-deo.html and here: ua-cam.com/video/bgfr6Zh6vS8/v-deo.html

  • @marcovandenberg6719 · 1 year ago

    Artificial Karaoke

  • @ilyamoz · 1 year ago · +1

    John Searle's viewpoint, like milk past its prime, has not aged well, largely due to what appears to be a fundamental misunderstanding of artificial intelligence (AI). Searle, who characterizes AI merely as a "program," uses the Chinese room thought experiment to argue that machines cannot "think".
    However, his interpretation seems flawed at its very foundation. Searle misreads the essence of the AI phenomenon he critiques, presenting a surface-level understanding rather than diving into the intricacies and potentials of the field. It's as though he's scrutinizing the tip of an iceberg while neglecting the vast structure beneath the surface.
    Furthermore, he applies the Chinese room thought experiment in a way that appears logically unsound. This experiment, in essence, is an analogy Searle introduced in his attempt to show that even perfectly running programs cannot understand or have consciousness in the way humans do. By fixating on the external functioning of the computer, he neglects to explore the deeper workings and potential capabilities of AI.
    Additionally, Searle seems to presume that the concept of "thinking" is universally defined, when, in fact, it's a complex notion open to diverse interpretations. Ironically, while he critiques the definition of "thinking" in the context of AI, he abstains from providing his own clear definition of this fundamental concept. This leaves his argument feeling incomplete and without a firm foundation.
    Overall, Searle's perspective on AI falls short in both breadth and depth, lacking an adequate understanding of the field's complexities and potential while relying on a shaky theoretical framework.