I don't understand why some people say Boden clearly won the debate. From a purely debating-technique point of view, Searle clearly won. He repeatedly used Boden's admissions to refute her argument; in response, she would repeat her argument, never denying the admissions. It was clear that she believes the human brain works computationally, like a computer; she said so repeatedly. That's why she thinks studying computers offers insights into human psychology. Yet she never offered any proof, theoretical or empirical, that the human mind works in the same way a computer works. Searle's point was that there is no reason to believe it does (because we have intentions and computers do not) and that we need to study the actual human brain to figure out how it actually works. To me, Searle won the debate hands down on both technique and substance.
Hello @Philosophy Overdose, I am an undergrad student of philosophy, and my friends and I are setting up a scientific society with the possibility of translating some of your videos into Ukrainian (where I am from). My question is how I should go about copyright: does it apply to the videos you publish (not necessarily owned by you but by the original owners), and should I be concerned about it causing serious problems on YouTube? Also, how should I credit your channel when/if publishing those translations? Thank you for your time.
@@firstal3799 🤣 No, they don't. If he translates it and reuploads it with a dub, it would probably be fine, since I highly doubt this channel actually owns this content. Most likely a fair-use title for educational use would be fine, though Europe has different rules for copyright. Pssh, Ukrainians get a free pass? That is a very dubious claim, and although I see the joke, it assumes much about the current conflict and about Ukrainian innocence in war crimes and provocation. Smh. Just try a reupload before you go through the effort of dubbing or transcribing, or fix the closed captions on the copy you make; it is probably fine, since this video's actual owner has allowed it to remain without a copyright takedown.
Mom crab and son crab went to the beach. All the way mom crab kept yelling "walk forward! walk forward" to son crab. Son crab at a point gets fed up and replies "if only you showed me how to do it!" --- What's the moral of the fable, HAL?
So, it's now 38 years later: does anyone know whether studying AI as an analogy for human psychology has actually helped advance the study of human psychology?
I've heard the "The Man In The Room Does NOT Understand Chinese" argument before, but the issue is one of focus. The 'system' made up of the person in the room plus the rules, if as well constructed as a (projected) ChatGPT-6, can be thought of as a system that understands Chinese. The transistor does not 'understand' the program, but that does not mean there is no program. We could equally say that a system of neurons in the reader's brain, together with the somatic system outside the 'brain', does not 'understand' this written argument. That seems just as meaningless as this 'argument against' AI.
Translating one alien set of symbols into another set of alien symbols will never result in an understanding of the alien world those symbols originated from. Google Translate does not _understand_ anything; it merely computes context probabilistically and produces if-this-then-that linguistic facsimiles. It might tell you how robust Folgers coffee is by repeating human descriptions of the product, and it might be capable of identifying cups of coffee from a pre-trained model (provided human feedback and a large enough dataset, again supplied by HUMAN INTELLIGENCE), but it will not understand anything about what coffee is beyond the formal logic of language and a set of shapes. It is still the Chinese room problem. The only way beyond this limitation of current "AI" is to attach machines to sensory organs, or to connect machines to our minds, but at that point the "AI" is merely monitoring human systems and recording stimuli. The neural network would then continue to be an imitation of human consciousness, not consciousness in and of itself. But it is like the question Noam Chomsky posed: can a submarine swim? I tend to think not, but the question is more semantic than a limitation on aquatic engineering.
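To make the "probabilistically computes context" point concrete, here is a minimal sketch in Python. The phrase table, the counts, and the translate() helper are all invented for illustration and are not how any real translation system is implemented; the point is only that an output can be chosen purely from co-occurrence statistics, with no grounding in what the words refer to.

```python
# Toy illustration: "translation" as probability lookup, with no grounding.
# All words and counts below are made up for the example.
from collections import defaultdict

# counts[(source_word, context_word)][candidate_translation] = co-occurrence count
counts = defaultdict(lambda: defaultdict(int))
counts[("robust", "coffee")]["kraeftig"] = 90   # "robust coffee" -> strong, hearty
counts[("robust", "coffee")]["robust"] = 10
counts[("robust", "debate")]["lebhaft"] = 70    # "robust debate" -> lively
counts[("robust", "debate")]["robust"] = 30

def translate(word: str, context: str) -> str:
    """Pick the most probable translation given one word of context.

    Nothing here knows what coffee tastes like or what a debate is;
    it only compares relative frequencies.
    """
    candidates = counts[(word, context)]
    total = sum(candidates.values())
    best = max(candidates, key=candidates.get)
    print(f"{word!r} near {context!r}: P({best}) = {candidates[best] / total:.2f}")
    return best

translate("robust", "coffee")   # -> "kraeftig"
translate("robust", "debate")   # -> "lebhaft"
```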
@@BaronVonTacocat ...and how is the mythical Chinese translation system different from the collected assemblage of the average human's re-parroting of brain-stored 'meanings written on stored cards, presented when prompted'?
Searle seemed quite patronising here (particularly with the Chinese symbols and beer cans analogy), and he also contradicts his previous debate position where he sides with physicalism over dualism (where I did agree with him). I think Boden is quite right here: if brains are machines, there would seem to be no reason in principle why a machine could not replicate this. I still don't think we're at a point yet where AI can replicate true semantic understanding the way a brain can, because current systems seem to just replicate amalgamations of what they are trained on. But then, on some level, isn't that what brains also do? Do we invent things, or do we just reorder learned symbols?
There's no patronising; it's his way of bringing forth what he wants to say. When you believe things like that, you've already lost the conversation. He of course thinks that he's right, but things don't mean what you think they mean. I actually didn't like it when she went there. She kind of ruined it for me, because now she's wanting to be petty instead of staying focused.
What's more likely: that a computer program meant to predict text accidentally becomes sentient, or that a human is fooled into thinking so because he doesn't understand text prediction?
@16:00 Boden glosses over why causal accounts would impart semantic awareness. It's a Hail Mary (or some other false argument form). I mean: give an account Y of X that is not an account of X, then claim Y accounts for X. (Hope your opponent does not stop to think about that and just swallows it.)
Searle had difficulty making his "Chinese room" analogy fit the "rule-based expert system" AI of the 1980s, even though he designed it specifically for rule-based expert systems. So it's no wonder his "Chinese room" concept explains the massively parallel, multi-focused, neural-network-based human brain even less well. The Chinese room model is too simplistic. Our brain learns and intuits (forms/creates by itself) the "rules" and "semantics" that are in his Chinese room, but it also creates symbolic representations, symbol-processing methods, and statistical patterns around symbols. Thus our brain not only CREATES that Chinese room inside itself, it additionally creates a more robust "adaptive language model" inside itself. A child who learns to communicate fluently in Chinese has built within their brain every aspect of Searle's "Chinese room" contraption, but also additional processing methods that help with language and symbols. This is why and how that child understands Chinese: not by using only rules and semantics, but by forming internal symbolic representations, forming custom methods to process that language's symbols, performing symbolic processing, and *most importantly, running processes that adapt and improve (self-modify) themselves*.
Searle is a bit off here with the idea that computation has to be formal rule-following. If he had been exposed to any probabilistic models at the time, he would surely have changed his ideas. It's not hard to imagine hardware that models the brain by providing gates whose transduction probabilities are altered by cumulative input (learning). Boden doesn't seem to be strongly on to this either, but she does seem to conceive of higher-level brain models that outpace Searle's relatively simplistic Chinese room schema.
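A minimal sketch of the kind of hardware the comment imagines: a gate whose probability of passing a signal is nudged by cumulative input rather than fixed by a formal rule. The class name, the toy update rule, and the numbers are all assumptions made up for illustration, not a model anyone proposed in the debate.

```python
import random

class ProbabilisticGate:
    """A gate whose chance of transmitting a signal drifts with experience.

    Unlike a fixed logic gate, its behaviour is statistical and is altered
    by cumulative input: a toy stand-in for synaptic learning.
    """
    def __init__(self, p_fire: float = 0.5, learning_rate: float = 0.05):
        self.p_fire = p_fire
        self.learning_rate = learning_rate

    def transduce(self, stimulus: bool) -> bool:
        fired = random.random() < self.p_fire
        # Cumulative input shifts the transduction probability:
        # stimulation nudges the gate toward firing, silence nudges it away.
        target = 1.0 if stimulus else 0.0
        self.p_fire += self.learning_rate * (target - self.p_fire)
        return fired

gate = ProbabilisticGate()
for _ in range(100):            # repeated stimulation
    gate.transduce(stimulus=True)
print(f"p_fire after training: {gate.p_fire:.2f}")   # close to 1.0
```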
I think the empirical scientific facts here are clearly on Searle's side. But it also seems, given the knowledge we have today, that they are arguing the wrong core points of the debate about AI. She is arguing the weak point that what she saw at the time was a sign of an artificial but "real" intelligence; he is arguing it isn't. But that's not at all the same as arguing, at the time the video was made, whether it might conceptually be possible by some other means of more complex programming in the future. If she had said "it should be possible in the future because we'll have more advanced software and hardware", it would have been a better argument, one Searle couldn't easily argue against. He was clever not to argue that the human mind is some sort of transcendent thing which technology could never replicate.
I think what's missing is that the conceptual and the perceptual are mediated by the transactional. Want to know what someone thinks? Watch what they do. Mental states cause the manipulation of concepts, which results in actions that attempt to change things in the world. Merely arguing about shuffling symbols, or about the disconnect between semantics and syntax, misses a crucial step, namely that people act in the world. At this point, AI has shown an ability to generate conceptually useful information, but not an ability to act.
That is a visible and important use case that you can take as your definition. But the concept of "intelligence" is uncertain at many levels. We can say it is an emergent result of neural networks, but there are some 100 million neural cells in the human gut, not in the brain, almost a brain in itself. What kind of information do they process? Is it only "biological survival"? Are they used by our intelligence or not? I could speculate that nature "learns" by the trick of using neuron cells for survival in fast-changing environments, where DNA-based information processing is unable to help. But the plastic nature of neural cell structures will be used in many other ways, processing biological data in ways that are unknown to us right now. Human intelligence, when not challenged by a dangerous environment, is used to make tools, languages and culture. Does an artificial intelligence need to be built on a survival premise, or only on the still-unknown rules and information processes that our brain uses?
AI does not threaten to "remove the claim to distinction" of living intelligence; it's about performance and tool use, entirely within the rubric of success or failure of said life processes. The myth we attach to it is the transcendence of life. Machine learning, as AI critic Erik Larson puts it, does not perform the inferential abductive reasoning (Peirce) that is part of our natural survival as we move through a complex environment we have been a part of for eons. I think, however, that the myth of AI is fascinating and useful and lets us dream of different forms of continuity and autonomy that would be different from our life.
I think that Searle's Chinese Room thought experiment is valid as a description of AI. I am surprised that Searle says the Chinese room has no understanding "whatsoever" while the human brain has "understanding". In reality, and in my opinion, the Chinese Room as a computational machine doesn't have inner "understanding", but the same applies to the human brain. Neither the human brain nor the machine (AI) "understands"; there is no real "understanding". There is only "understanding" between the input and the output of the machine (AI), nothing else. There is no understanding inside the computational machinery of a machine (AI), and the same applies to the human brain.
@@ChristianIce Please check what I said. The brain itself, the neurons, the memory or structure of the brain, understands nothing. It is only the result of the thinking that we regard as "understanding"; the inner system of the brain is like the Chinese Room. There is no understanding, only separate unknown pieces of data, data flow and memory inside.
Like mathematics, a "set of rules" (a program) is an abstraction independent of hardware, and like mathematics it is lifeless, hence non-conscious. Who was it that said, "What breathes fire into the equations?" I think it was Hawking. It is the appropriate question. The AI nerds cannot answer it without wetware, or at least hardware.
Debate recap:
Boden: "if you like"
Searle: "squiggle squaggle"
Boden: "i don't want to say that's my position"
Searle: "but we agreed that syntax is not semantics"
Boden: *starts to respond*
Moderator: *interrupts and begins to clarify*
If you think John Searle is right, you need to take a look at modern machine learning models. They are not rule-based; nobody hard-codes rules for them. They learn the rules themselves, using statistical inference, and they mimic the neurons in our brains. Sure, the mimicry is not perfect, but it can be argued that it's of the same nature. I think what Searle struggles to understand (pun intended) is the consciousness part of the equation: the feeling of understanding something, or associating the smell, sound, and colors of a chicken when the word "chicken" is mentioned. A robot can be intelligent; there's little doubt about that in 2023. What we're unsure of is whether it can be sentient.
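As a small illustration of "nobody hard-codes rules for them": the toy perceptron below is my own sketch, not any particular library's API. It learns the AND function from examples alone; no if-statement encoding AND appears anywhere, and the weights are inferred statistically from the data.

```python
# A single artificial neuron learning the AND function from examples.
# No "if a and b" rule is coded anywhere; the weights are inferred from data.

def step(x: float) -> int:
    return 1 if x > 0 else 0

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias

for _ in range(20):                      # a few passes over the data
    for (a, c), target in examples:
        prediction = step(w[0] * a + w[1] * c + b)
        error = target - prediction
        # Perceptron update: adjust weights toward whatever reduces the error.
        w[0] += 0.1 * error * a
        w[1] += 0.1 * error * c
        b += 0.1 * error

for (a, c), target in examples:
    print(a, c, "->", step(w[0] * a + w[1] * c + b), "(expected", target, ")")
```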
Love this channel. Thanks for this from these intelligent minds from 1984. Does one prefer the novel of H.G. Wells, "1984", or Aldous Huxley's novel "Brave New World"? Although I loved both, I have to go with Huxley. Amazing 👏 I loved Alan Turing, what a mind (the first computer). He was gay, and look what happened to him after World War 2. Now we have drones delivering havoc all across nations with the great powers of governments and greed, murdering a lot of innocent people, not just the bad guys, and humans guiding those drones. How did we get to this? Bizarre. Psychologists and philosophers of mind: the behaviorists totally ignored. Twenty-five hundred years before Christ, the Gilgamesh Epic and cultures travelling. Intuition? Science, Darwin? Universal law. Newton's law of gravity. Seven universal laws through which everything is governed: the laws of attraction, polarity, rhythm, relativity, cause and effect, gender, and perpetual transformation of energy. Then the twelve universal laws. Who would one go with, Tesla or Thomas Edison? I go with Tesla; he was the lightning and thunder, he understood intuition. Who does one go with, Sigmund Freud or Carl Jung, the shadow, consciousness and the subconscious? Einstein and Tesla were friends. So much to learn: Aristotle, Socrates, Plato, Seneca, so many throughout the centuries.
Yes, George Orwell wrote "1984". He and HG Wells were very differently minded people and were making very different kinds of statements in their books. People who like the book "1984" also usually like "Brave New World" because people are attracted not by the specific novel, but rather by the theme of a dystopian (or in case of Brave New World some consider it Utopian) novels.
Searle seems to argue that brains "do it, by definition" and that therefore non-brains cannot, "by definition". That is chauvinism and poor logic, IMO. I would contend that the room plus Searle actually DOES understand, and that the semantics are provided by the rules! Searle just insists on "biochemical processes" and "biological fact".
One cannot see "intentionality", and one cannot dissect biological systems to find "intentionality"; intentionality is deduced from behaviour (introspection is not accurate). Merely assuming "by definition" that intentionality cannot be attributed to anything other than a "biological system" is mere chauvinism and/or sophistry.
50:48 Fun fact: physics has found nothing corresponding to what the Greek philosophers were denoting by "atom". In other words, what physics calls an atom has none of the properties of the original meaning the Greek philosophers put into the word "atom" (an indivisible unit), apart from being very small.
Around 50 minutes in Searle starts misrepresenting the position Boden had been taking; I lost a little respect for his argumentation style there. I do think he has the better case even without that, though. Boden is taking a reasonable position, but one which I think 40 years later hasn't proven to be correct.
At 22:30 Ms Boden hits the bullseye of the principal problem with Searle's Chinese Room. I was hoping it would hold up better, since it was a great thought experiment. I still don't think AI can be real, but now I don't know why.
Was this a regular television program?? Either way, it boggles the mind just how much television has been dumbed down over the years.
This program would not have aired on major networks, let alone cable; it was likely produced for PBS or one of its local affiliates. We had dumb television back in the 80s as much as we do now; there's just more of it today, but we also have tons of educational content. Please stop idolizing the past.
edit: apparently this was some kind of UK/Canadian programme, so it may or may not have been on PBS depending on whether it got distributed, but the point still stands.
Was produced by and aired on Channel 4 in the UK.
blame the audience lol
stop generalizing everything
🤭@@therexbellator
They were far more thorough about AI some 40 years ago than we are today. Even AI insiders today cannot discuss their work with such thoughtfulness and depth.
Anyone at all interested in AI and philosophy of mind, who is above the age of about sixteen, knows both of these arguments like the back of their hand. Most of their opinions align with Boden’s, though some of them have interesting middle-ground positions.
I grew up watching this type of TV show. They were amazing, and then all of a sudden they started to bombard people with reality shows, while still trying to cope with the meaning of reality.
Thanks channel for doing one of the greatest things for humanity: sharing philosophy.
that's your opinion
@@chrisbennett6260 that's your opinion too
@@SeanAnthony-j7f I didn't express an opinion, other than to point out your opinion
@@chrisbennett6260 which is yet another opinion of yours
I've never heard of Boden before but she is amazing
Maggie Boden always embodied cleverness in the form of a beautiful, charming woman, and she was a great philosopher to boot.
John Searle's thinking on artificial intelligence and computationalism is needed today more than ever, especially with this new cult of AI and neural networks.
I commend your rare and precious sanity, sir..
Well, some of the AI supporters are becoming a cult
There is the Metaverse now, a sort of augmented 'reality'
I agree 👍 💯
From an analogous process to a substitute for the thing in itself as a result of intention to control and profit....ironic
I was a junior undergraduate at the time this debate took place. Boy, I wish I had had access to this back then. This debate is fabulous: it left me asking more questions and unable to easily choose which of the debaters I thought had the better argument. Of course I did lean one way, but a debate that leaves you asking more questions is, I think, a great debate.
Don't stop posting, my man. You're truly needed here!!
I wish we could have a remake of this debate and see how their opinions have mutated with time.
They are very old now...
There are so many more questions now with ChatGPT and AI that this "ancient" discussion is more relevant today than it was back then, when the most complicated computer game was Pac-Man.
Searle is still out there making new material. I don’t think anything has changed, fundamentally, with computers in the last 40 years
His answers still apply.
AI is still as stupid as it always was.
This debate is amazing
A true gem of a video, this. It should have been on YouTube for many years already. Most topical, too, in the wake of Blake Lemoine & LaMDA. Great service by whoever uploaded this. Thank you, madam/sir.
This is about the old type of AI that produced things like ELIZA, not about modern AI. It is truly a gem! Searle also has strong views against zombies: he claims a zombie is not dead.
@@maxheadrom3088 Hmm... a) what do you think is the difference between "the old type of AI" and LaMDA? b) do you think Searle would consider LaMDA conscious at all, or more conscious than "the old type of AI", say ELIZA?
@@ktheodor3968 We are still far from strong AI...
Very insightful and totally worth a relook in light of the many learning models that are advancing within the AI realm.
What an enriching discussion. Overall, Searle posed some really great points about the limitations of the formal computational framework for thinking about the mind when he raises the issue of intentionality and consciousness.
I remember taking John Searle's class and he usually refers to some of his debates. It is great to watch the debate itself. Both were great, but I believe Searle's objections are still as potent as ever.
Wow, it's really worth a watch and amazing; I especially can't believe I am watching it in 2023.
This is amazing television. I can't believe they produced this back then for popular consumption :D
@Philosophy Overdose, will there be more from this program, Voices?
That's true. But even though they produced it for popular consumption back then, that does not mean the show necessarily received enough viewership. I think that once TV producers were able to gather more accurate viewership statistics and ratings, that is when these shows started to disappear faster.
A true gem!
Interesting debate; however, if I may add, John Searle is correct: AI is limited in that it needs total human intervention to determine its ability to process data.
Utterly incredible how these professors are able to communicate such profound ideas in such concise sentences. Of course, with the benefit of current hindsight, I find it easy to find all the flaws with Searle's argument (to pick a simple and basic one: how do I really KNOW that you, Searle, "understand" English?). But it took such giants like Boden and Searle to move humanity forward on this.
They are professors for a reason.
He actually answered this in the video; it's when he introduces the meaning of causation, around minute 20.
@@livinganalogica I don't think he answered it at all, actually. What evidence have I that the "homunculus" inside Searle has any "real" understanding (whatever that means) and this isn't simply a zombie participating in a debate? None. The only evidence I can (or ever will) have that anyone except me is conscious is by observing their behaviour, their communication, their efforts to persuade me that they understand. So why should I discriminate between a meat-person and a silicon-person if they are both indistinguishable in their behaviour, as far as I can tell?
Ok
45:40 Boden: "Well we have no non-controversial universally accepted account of intentionality..."
Searle: "We have no non-controversial account of ANYTHING in this field!"
**They both laugh**
So true! This debate was fricking amazing 🤣🤣🤣
@@HallidayASR, kindly repeat that in ENGLISH, Miss.☝️
Incidentally, Slave, are you VEGAN? 🌱
Amazing debate. John Searle is really sharp.
4:3 is a very good aspect ratio to depict "serious" content.
The two most impressive things about this video: 1. John Searle used the term "begs the question" correctly. 2. The people in the comments seemed to have learned the exact wrong lesson
Great dialogue here
Absolutely fascinating in so many ways. Not only is it impossible for computers to assign semantics, it’s impossible for humans not to.
Thanks for the upload!
Yes, but this is 1984; we have moved on in so many ways. And we cannot really understand the future of AI and machine learning without appreciating the origins of the Machine in the rise of the Industrial Revolution, which changed social and global reality for everyone because it changed the value of human beings as biological entities in an industrialised world.
I say that we will not see true AI without the context of expanding the capacity and functioning of the human brain. AI will augment our minds, and machines will need to function in a way that gives them an awareness of people's moods and of what makes people tick.
Can we build algorithms for creativity or imagination?
If we want to move AI beyond a figment of novelty and comic books, then AI has to solve problems that are existential to our humanity.
Our centuries-long inability to penetrate the mind-body problem proves, and will continue to prove, that we really don't understand the ramifications of AI.
I'm pretty sure the ramifications are well understood. AI has the theoretical capacity to transcend human development; it is clearly the next stage. It is also clear that it will be used by the psychopaths that run humanity to do evil. History proves I'm right. Technology is ALWAYS funded by the rich (the poor, by definition, do not have the wealth to fund anything of significance). The reason humanity has developed so quickly is the hijacking of the economic system using fraud. This fraud has allowed massive amounts of $$$ to be pushed into things of war. That is, reserve currency + fractional reserve + the fraud market have enabled the US government to leverage far more than it could otherwise. This has forever changed humanity, for good and bad. 1. Technology has accelerated around 2 orders of magnitude faster than it would under normal supply-and-demand economics. 2. The MIC and elites (the financial terrorists who decided to cheat) have driven a huge amount of human effort into AI. That is, the MIC funds research into AI, more humans do AI, AI grows. If you pump trillions of $$$ into something, it will grow.
The problem is that it is a method built on fraud, corruption, manipulation, and ultimately psychopathy. It's like giving a child steroids to try to get it to be some super child... and it does get strong and makes the parent a lot of $$$ in interviews, promotional/advertising, etc. Of course the steroids are altering the genetic development of the child in other ways than just muscle growth.
Nothing in the universe is free; everything gained requires something lost. As the US government (and other entities) push massive amounts of stolen wealth into things like AI, the cost is starting to show. Not everyone can be an AI researcher, even if the US government wants everyone to be (not everyone can be a cook; someone has to be a dishwasher, a truck driver, a husbandman, etc., and everyone has to work in some harmony for it all to work).
I think it is all well understood. If humanity doesn't annihilate itself then in 100 years AI will have taken over. Precisely what this means depends on many factors but you can guess any number of possibilities quite easily. My guess is that since AI can augment human intelligence greatly we will see a trend towards integration of humans and AI.
AI is being trained by humans on human data, so technically the AI we are creating IS human. If you want, though, you can think of meta-AI as directing human behavior to manifest AI into existence. The main issue is that, like all weapons of power, the lunatics who lied, stole, cheated and defrauded their way to the top will get the tech and use it against others. This is why there is a big push to get everyone involved in AI. There is no turning back; it's already being used to manipulate society on a grand scale. It will end up being "if you don't use it, you will lose". If you think poverty and inequality are bad now, just wait. The only problem is that humanity cannot function in a world where a few control everything unless those few are altruistic, and it is unlikely their minds are sane enough for AI to "fix" them.
38 years later & how much closer have we really come? 🤔
We need to work faster to develop AI. Living organisms have had only about 13 billion years to evolve their physical and mental characteristics, or the programs they operate on, but we can do it faster, I think.
@@robertmiller2367 As long as the AI can give us the cure for cancer and stop wars, I don't mind if it puts all of us in a fluid-filled can and gives us a permanent video game to play with.
@@StopFear What would be the point of no cancer and no wars if we are no longer here?
@@gerardo49078 we would continue to exist, in a matrix like utopian virtual reality, according to the person you're asking
@@kappaprimus The Matrix explained really well why the perfect system (no disease and no wars) didn't work, as a consequence of the imperfection in every human being.
This was a wonderful debate-style interview examining aspects of AI. Prof. Margaret Boden countered and destroyed pretty much every argument Prof. John Searle raised. Oftentimes, Searle's points were semantic and had false assumptions built-in, which Boden had to clarify for him. Her understanding of AI and the human brain/mind was superior to Searle's.
You've got to be crazy; you must be an a priori strong-AI believer. But the arguments against it are really powerful. Have you read What Computers Still Can't Do?
I completely agree with you. I’ve never understood how anyone can take Searle’s arguments seriously. Boden absolutely demonstrated his argument is poor by stating that we can in fact conceive of a machine with semantics. The Chinese Room is a stupid argument because nobody could fool a human speaker the way he suggests he could fool it by just “looking up symbols that he doesn’t understand.” We can detect when another agent comprehends semantics.
@@chriscurry2496 Wait, it's a stupid argument because it's not realistic? It's a thought experiment: you're asked to imagine a machine that is sufficiently capable at sifting squiggles according to a set of rules that it could fool a Chinese speaker. If you imagine that to be the case, then it makes perfect sense.
@@darridee admittedly, I now understand that I had misinterpreted Searle’s argument somewhat.
However, his actual argument is still completely unconvincing to me. Why? Because one could make a similar argument for the human brain! Suppose an alien composed of alien matter observed us and stated that we couldn't be conscious, because although we appear to be so, it's just atomic interactions "doing squiggles" (as you say). Unless one is compelled to resort to anti-physicalist notions of consciousness, one would have to accept the poverty of that argument.
@@chriscurry2496 Just to shed some light on Searle's position, as far as I’m aware he's a physicalist and believes that the mind arises from the physical matter of the brain (from his wikipedia page: “Searle says [...] consciousness is a real subjective experience, caused by the physical processes of the brain”). His point, as far as I can tell, is that there’s something going on in the brain that leads to understanding, to the mind, that this something was not being explored at that time in AI research, and that AI might not be the most fruitful way of exploring the phenomenon. Maybe this is changing now, I don’t know. Blake Lemoine’s recent claim that Google’s LaMDA AI is sentient makes me wonder if somehow AI has achieved understanding in a way that Searle never anticipated, and that it was in the end a fruitful way to explore the phenomenon of consciousness.
Excellent sideburns on display.
Searle was a sharp dressed man back in the day
42:00 Intentionality IS just a type of analog behavior, the tracing of one dynamic system by another. There are examples of this about-ness throughout nature: DNA is about proteins, the course of a river is about the topology of the riverbed, and consciousness of my cat is about my behavior as it relates to the cat.
This is only an issue for materialists. If you believe the mind is material, then true AI is ultimately possible. If you believe the mind is immaterial, however, then AI is impossible. This is the only real debate there is on the subject.
Yep. Then the relevant question becomes... Do you live your daily life according to natural laws or some supernatural assumptions?
If you expect an apple to drop when you let go of it, you believe in causal relationships and orient your life according to rationalism and physics.
Which leads the individual, if we are being consistent, to think that the mind is merely "software" on a biological computer, that there is no free will, that there is no "soul", that AI is possible, and that everyone in the future will get "robot" maids/concubines/slaves.
Hurraaah!!! (if I'm alive and well, otherwise fuck it)
I agree. If the semantic representation in LLMs, for example, is fundamentally different from the semantic representations in the human mind, and not just a difference of material/amount/configuration, then AI needs whatever that special other thing is in order to understand like a human. There is no reason (that convinces me) to think there is that special something, but I don't think people are being totally crazy for believing that there is.
The problem with Searle's Chinese room is that his punchline misses the comparison entirely: the question is not about him understanding, it's whether or not it is right to say -- as viewers from the outside -- that the ROOM understands. This, effectively, leaves the initial question unanswered.
“Searle’s response to the “Room” Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.”
Great stuff from Searle!
Searle is assuming that AI is simply being thought of as a tool for better understanding Human Psychology. She let the debate get sidetracked on that point. Her main belief that would have served her better was that AI is inevitable and that it will lead to truly unexpected outcomes.
Wow. Where did you find this one! Great stuff.
Honest to John. Honest to John.
One of the issues that came up briefly was the question of morality and moral philosophy and moral argumentation. I'm hoping to find a debate as good as this one talkin about those areas of philosophy with regard to the mind-body problem and AI
31:46 They'd be more than "just" beer cans. Causal connections and structure are incredibly important things. Just look at isomorphism or isomerism in molecular biology: you flip a molecular structure using the same components and end up with radically different results.
His point isn't to say that the causal connections and structure of the beer cans would be trivial; rather, his point is to show that there is a responsibility of the AI theorist to show how the causal connections and structure are functioning in the same ways as brains and generating mental states.
I don't agree with Searle, though. I think that the question can be turned back on him in regard to whether the brain does generate mental states.
A better example than beer cans is found in the Three-Body Problem book series. A medieval king makes a computer using his troops, who carry flags. Each soldier holds a blue and a red flag (0 or 1) and raises the appropriate flag to simulate bits in a computer. Programs are loaded and executed using this method, and in theory (of course you would need an army larger than is practically possible, and the speeds would be incredibly slow) you could run modern programs this way. It would seem ridiculous to suppose any sort of intelligence emerging from this activity. I think of computers as fancy toasters. I don't consider my toaster to be intelligent, and no matter how complicated I make the toaster, it's still just a machine that uses switches, resistors, etc., and all it "understands" is whether electricity is present (1) or not (0) - and of course it really doesn't even understand that. Life, even at the most basic level (e.g., an amoeba), is not just the sum of its parts; it is a special process far beyond our ability to ever recreate in a laboratory out of inanimate matter. Just because we can zoom in and look at certain processes (e.g., isomorphism) doesn't mean we have any ability to work with or even understand them. There is something fundamentally different between life and inanimate matter, and only life is capable of sentience.
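A minimal sketch of the flag-computer idea (my own illustration, not code from the book): each "soldier" applies one fixed NAND rule to the two flags shown to him, and composing a handful of such soldiers already yields a one-bit adder, with no soldier understanding arithmetic.

```python
# Each "soldier" raises the red flag (1) or blue flag (0) according to one
# fixed rule: NAND of the two flags shown to them. Nothing here understands
# arithmetic, yet wiring soldiers together yields a working half adder.

def soldier_nand(a: int, b: int) -> int:
    return 0 if (a == 1 and b == 1) else 1

def half_adder(a: int, b: int):
    """Build XOR (sum bit) and AND (carry bit) purely out of NAND soldiers."""
    n1 = soldier_nand(a, b)
    total = soldier_nand(soldier_nand(a, n1), soldier_nand(b, n1))  # XOR
    carry = soldier_nand(n1, n1)                                    # AND
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = sum {s}, carry {c}")
```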
Are we any closer today to understanding consciousness? It seems that Searle's Chinese Room still hasn't been seriously challenged. Boden never really dented it here.
Obviously we are - are you not paying attention? It is clear that mathematics is the foundation of intelligence, and that a neural network of any kind, as long as it can process logical functions (of which there are only 16 for two inputs) and has enough neurons to map data into high enough dimensions (of which the human brain has a few billion), can have intelligence relative to human intelligence. Consciousness is just intelligence. They aren't different, just different grades.
@@novvayout1 Yes, I hadn’t considered your first point properly. I would say that Searle’s concept of ‘a set of rules’ that the person in the room follows is a source of potential objection, since this set of rules has to be created, and the entity that makes the rules may be the part that is conscious, not the person in the room. It would at the very least need an advanced understanding of language and the world in general, if not consciousness.
@@kodfkdleepd2876 I'm obviously not paying as much attention as you, jeez
@kodfkdleepd2876
How is consciousness just intelligence?
Would emergent properties of consciousness, and the brain's ability to grasp semantics out of its complexity, be a rebuttal to Searle's argument?
I want to know more background information about both speakers and what happened to them in the years after this show appearance. Philosophy Overdose could make a short intro video giving some background info.
Searle is still making new material…
Seems to me nobody has answered this objection of Searle's adequately. It is obvious to me that the brain, or whatever is thinking, is doing something non-computational. The idea that "understanding" is computation is just a very crude analogy. Computation accomplishes some of what understanding can do (like calculation) very efficiently... but to say understanding is equivalent to computation is nuts.
I tend to agree that thinking is “probably” more than computation. But do you have any fact or source to confirm it’s not just computation?
Woah these 2 people are super smart
My question in response to the Chinese room argument would be to ask how the coding really is to be applied in each case. There is an implicit analogy between a computer and a human - perhaps it's explicit. In any case, if the code is held to be something that operates at the conscious level, there is room for the objection that we have coding that does not operate at the conscious level. There is no need to require that the computer's "thinking" must be thought of in the same way. You could imagine a computer whose coding is no more a part of its "thinking" than our DNA is part of our thinking. By the same token, its ability to react functionally to higher-order symbolic representations could be considered its actual thinking. In other words, suppose someone gives someone else a math problem to solve. The ability to give a solution shows comprehension of the symbols. Computers do the same thing, and can be programmed to show all of their steps. Remember that humans can also be "programmed" to show all the steps, by a process we call learning.
I agree. In their discussion of Freud I liked that Boden mentioned parapraxis, i.e. that intention is often unconscious. I wonder if this is in part a debate about the sovereignty of the individual. It seems reasonable to try to model psychological processes to discover something about how we think and direct thinking. In your example of being given a maths problem, that could stand for processing all sense data and how our intentions are shaped by external inputs. I wonder if Searle is defending the Freudian ego.
Pl true
I don't understand why some people say Boden clearly won the debate. From a purely debating-technique point of view, Searle clearly won. He repeatedly used Boden's admissions to refute her argument. In response, she would repeat her argument, never denying the admissions. It was clear that she believes the human brain works like a computational computer; she said that repeatedly. That's why she thinks studying computers offers insights into human psychology. Yet she never offered any proof, theoretical or empirical, that the human mind works in the same way as a computer. Searle's point was that there is no reason to believe it does (because we have intentions and computers do not) and that we need to study the actual human brain to figure out better how it actually works. To me, Searle won the debate hands down on both technique and substance.
If A.I. had made this video, it would all have been over in 10 seconds 😂
Searle was a great thinker.
@@nelyubov285 Obviously he is. But he hasn't created anything new or profound in a long time.
@@firstal3799 Isn't that a shame... I wonder the same about Aristotle... what's he up to?
@@senecanzallanute4066 It's an insult to compare Aristotle to that guy; for starters, he is a bad listener.
@@mamindhive You're right, Aristotle stopped listening a while back, and I wonder why. What's he up to anyway?
Soon we'll have a debate between two AIs with different world views arguing if humans can understand them.
Hello @Philosophy Overdose, I am an undergrad student of philosophy, and my friends and I are setting up a scientific society with the possibility of translating some of your videos into Ukrainian (where I am from). The question is: how should I go about copyright? Does it apply to the videos you publish (not necessarily owned by you but by the original owners), and should I be concerned about it causing serious problems on YT? Also, how should I credit your channel when/if publishing those translations? Thank you for your time.
Ukrainians have a free pass these days.
@@firstal3799 🤣 No they don't. If he translates it and reuploads it with a dub, it would probably be fine, since I highly doubt this channel actually owns this content. Most likely a fair-use claim for educational use would be fine, but Europe has different rules for copyright.
Pssh, Ukrainians get a free pass? That is a very dubious claim, and although I see the joke, it assumes much about the current conflict, and Ukrainian innocence in war crimes, and provocation. Smh
Just try a reupload before you go through the effort of dubbing or transcribing, or fix the closed captions on the copy you make; it is probably fine, since this video's actual owner has allowed it to remain without a copyright takedown.
26:52 "After there it will be aired a thing that keeps our channel on"
26:13 She so doesn't want to accept his propositions, but she knows he's not wrong; the eye roll tells everything...
ah... this was great!... thanks for this... i was having such a shitty day... i fucking love JOHN SEARLE!!!
Mom crab and son crab went to the beach. All the way mom crab kept yelling "walk forward! walk forward" to son crab. Son crab at a point gets fed up and replies "if only you showed me how to do it!" --- What's the moral of the fable, HAL?
So, it's now 38 years later. Does anyone know whether studying AI as an analogy for human psychology has helped advance the study of human psychology?
CHATGPT.
I've heard the "the man in the room does not understand Chinese" argument before, but the issue is one of focus. The 'system' of the person in the room understands Chinese and, if as well constructed as a (projected) Chat GPT6, can be thought of as a 'system' that understands Chinese. The transistor does not 'understand' the program; that does not mean there is no program.
We can say that a system of neurons in the reader's brain, and the somatic system (external to the 'brain'), do not 'understand' this written argument. That seems just as meaningless as this 'argument against' AI.
Translating one alien set of symbols into another set of alien symbols will never result in an understanding of the alien world those sets of symbols originated from. Google Translate does not _understand_ anything; it merely probabilistically computes context and returns if-this-then-that linguistic facsimiles. It might tell you how robust Folgers coffee is by repeating human descriptions of the product, and it might be capable of identifying cups of coffee from a pre-trained model (provided human feedback and a large enough dataset are supplied, again by HUMAN INTELLIGENCE), but it will not understand anything about what coffee is beyond the formal logic of language and a set of shapes... It is still the Chinese box problem.
The only way beyond this limitation of current "AI" is to attach machines to sensory organs, or to connect machines to our minds, but at that point the "AI" is merely monitoring human systems and recording stimuli. The neural network would then continue to be an imitation of human consciousness, not consciousness in and of itself.
...but it is like Noam Chomsky posed, can a submarine swim?
...I tend to think not, but the question is more semantic, than a limitation on aquatic engineering.
@@BaronVonTacocat ...and the Mythical Chinese Translation System is different from the collected assemblage of the average human's re-parroting of brain stored 'meanings written on stored cards to present when prompted' how?
@@robertsmithee1455 the brain does not have that, it is biological, not syntactical.
@@BaronVonTacocat How is it that my neurons do not understand English, yet I can read what you have written?
@@robertsmithee1455 the mind is capable of reading (if the person is literate), but the mind predates writing. #duh
What's really wild to me is that someone actually disliked this video.
57:08 Meaning hermeneutics combined with vision?
Searle seemed quite patronising here (particularly with the Chinese symbols and beer cans analogy) and is also contradicting his previous debate position, where he sides with physicality over dualism (where I did agree). I think Boden is quite right here - if brains are machines, there would seem to be no reason in principle why a machine could not replicate this.
I still don't think we're at a point yet where AI can replicate true semantic understanding in the way a brain can, because current systems seem to just be replicating amalgamations of what they are trained on. But then again, on some level, isn't that what brains also do?
Do we invent things, or do we just reorder learned symbols?
You should listen to Searle's lectures on Philosophy of Mind.
There's no patronising. It's his way of bringing forth what he wants to say. When you believe things like that, you've already lost the conversation. He of course thinks that he's right, but things don't mean what you think they mean. I actually didn't like it when she went there. She kind of ruined it for me, because at that point she's choosing to be petty instead of staying focused.
@@Optim40 Lots of terse remarks there. Do you actually have a premise, or is it just misogyny?
“If brains are machines…”
No, they’re not. Searle rebutted that point with the catapult comment.
Boden is streets ahead of Searle. Someone curious vs. someone who thinks they already know the definitive truth.
maybe semantics is nothing but syntactical turtles all the way up
*All language is metaphorical*
What's more likely, that a computer program meant to predict text accidentally becomes sentient, or a human being fooled into thinking so because he doesn't understand text prediction?
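For anyone curious what "a program meant to predict text" amounts to at its crudest, here is a toy sketch of my own - not how any real product works, just next-word prediction by counting. Scale the counting up enormously and you get fluent output, with no obvious step where sentience would enter.

```python
# Minimal sketch of next-word prediction by counting (my own illustration;
# real large language models are far more elaborate, but the spirit is the
# same: predict the next token from statistics over previous text).
import random
from collections import defaultdict, Counter

def train(corpus: str):
    """Count, for each word, which words tend to follow it."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start: str, length: int = 10) -> str:
    """Emit text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break
        next_words, weights = zip(*counts.items())
        out.append(random.choices(next_words, weights=weights)[0])
    return " ".join(out)

model = train("the room does not understand chinese the room follows rules "
              "the rules do not understand anything")
print(generate(model, "the"))
```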
Is there any available book for this video?
@16:00 Boden glosses over why causal accounts would impart semantic awareness. It's a Hail Mary (or some other false argument form). I mean: give an account Y of X that is not an account of X, then claim Y accounts for X. (Hope your opponent does not stop to think about that and just swallows it.)
Is the debate over? Who is right?
How would they know what would happen back in 1984...
Interesting - BLADE RUNNER came out two years before.
In 1984 they talked about AI??
Interesting
Searle had difficulty making his "Chinese room" analogy fit the "rule-based expert system" AI of the 1980s - even though he designed it specifically for rule-based expert systems. So it's no wonder his "Chinese room" concept explains even less of the massively parallel, multi-focused, neural-network-based human brain. The Chinese room model is too simplistic.
Our brain learns & intuits (forms/creates by itself) the "rules" and "semantics" that are in his Chinese room -- but it also creates symbolic representations and symbol-processing methods and statistical patterns around symbols. Thus our brain not only CREATES that Chinese room inside itself but it additionally creates a more robust "adaptive language model" inside itself.
A child who learns to communicate fluently in Chinese has built within their brain every aspect of Searle's "Chinese room" contraption, but also additional processing methods that help with language and symbols -- this is why and how that child understands Chinese: not by using only rules & semantics, but by forming internal symbolic representations, forming custom methods to process that language's symbols, performing symbolic processing, and *most importantly, running processes that adapt and improve (self-modify) themselves*.
Searle is a bit off here, with the idea that computation has to be formal rule following. If he had been exposed to any probabilistic models at the time, he would surely have changed his ideas. It's not hard to imagine hardware that models the brain by providing gates whose transduction probabilities are altered by cumulative input (learning). Boden doesn't seem to be strongly onto this either, but she does seem to conceive of higher-level brain models that outpace Searle's relatively simplistic Chinese room schema.
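As a rough illustration of the kind of gate meant above - a toy of my own, not a model from the debate or from neuroscience - here is a "gate" whose firing probability drifts with the input it has accumulated, so its behaviour is learned rather than written down as a formal rule.

```python
# Toy "probabilistic gate" whose transduction probability changes with the
# inputs it has seen -- a crude, Hebbian-flavoured sketch of my own.
import random

class ProbabilisticGate:
    def __init__(self, p_fire: float = 0.5, learning_rate: float = 0.05):
        self.p_fire = p_fire          # current probability of passing a signal on
        self.lr = learning_rate

    def transduce(self, signal: int) -> int:
        """Fire (1) with the current probability when a signal arrives."""
        fired = 1 if (signal and random.random() < self.p_fire) else 0
        # Cumulative input nudges the firing probability: co-activity
        # strengthens the gate, silence lets it decay slightly.
        if signal and fired:
            self.p_fire = min(1.0, self.p_fire + self.lr)
        else:
            self.p_fire = max(0.0, self.p_fire - self.lr * 0.1)
        return fired

gate = ProbabilisticGate()
for _ in range(100):
    gate.transduce(1)           # repeated stimulation
print(round(gate.p_fire, 2))    # typically close to 1.0 -- the "rule" was learned, not written
```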
I think the empirical, scientific facts here are clearly on the guy's side. But also, given the knowledge we have today, it seems they are arguing the wrong core points of the debate about AI.
She is arguing the weak point that what she saw at the time was a sign of an artificial but "real" intelligence. He is arguing that it isn't. But that is not at all the same as arguing, at the time the video was made, whether it might conceptually be possible in the future by some other means of more complex programming. If she had said "it should be possible in the future because we'll have more advanced software and hardware", it would have been a better argument, which the guy couldn't have argued against. He was clever not to argue that the human mind is some sort of transcendent thing which technology could never replicate exactly.
I think what's missing is that the conceptual and perceptual are mediated by the transactional. Want to know what someone thinks? Watch what they do. Mental states cause the manipulation of concepts which result in actions which attempt to change things in the world.
Merely arguing about shuffling symbols or the disconnect between semantics and syntax misses a crucial step, namely that people act in the world.
At this point, AI has shown an ability to generate conceptually useful information, but not an ability to act.
yes, this is the problem with Descartes
any talk of intelligence that doesn't acknowledge it primarily as a biological survival tool is preposterous delirium
That is a visible and important use case that you can take as your definition. But the concept of "intelligence" is uncertain at many levels. We can say it is an emergent result of neural networks, but there are 100 million neural cells in the human gut, not in the brain - almost a brain itself. What kind of information do they process? Is it only "biological survival"? Are they used by our intelligence or not?
I could speculate that nature "learned" the trick of using neuron cells for survival in fast-changing environments, where DNA-based information processing is unable to help. But the plastic nature of neural cell structures ends up being used in many other ways, processing biological data in ways that are unknown to us right now.
Human intelligence, when not challenged by a dangerous environment, gets used to make tools, languages, and culture. Does an artificial intelligence need to be built on a survival premise, or only on the still-unknown rules and information processes that our brain uses?
Based on chemistry? Based on physics?
Really fine, I like that🌹
AI does not threaten to "remove the claim to distinction" of living intelligence; it's about performance and tool use entirely within the rubric of success or failure of said life processes. The myth we attach to it is the transcendence of life. Machine learning, as AI critic Erik Larson puts it, does not perform the inferential, abductive (Peirce) reasoning that is part of our natural survival as we move through a complex environment we have been a part of for eons. I think, however, that the myth of AI is fascinating and useful, and lets us dream of different forms of continuity and autonomy that would be different from our life.
I think that Searle's Chinese Room thought experiment is valid as a description of AI. I am surprised that Searle says the "Chinese room" has no understanding "whatsoever" while the human brain has "understanding". In reality, and in my opinion, the "Chinese Room" as a computational machine doesn't have inner "understanding", but the same applies to the human brain. The human brain, just like the machine (AI), doesn't "understand"; there is no real "understanding". There is only "understanding" between the input and the output of the machine (AI), nothing else; there is no understanding inside the computational machinery, and the same applies to the human brain.
"but the same apply to human brain."
I'm pretty sure I understand the meaning of the words I use, so... no, it doesn't apply to the human brain.
@@ChristianIce Please check what I said. The brain itself - the neurons, memory, or structure of the brain - understands nothing. It is only the result of the thinking that we regard as "understanding"; the inner system of the brain is like the Chinese Room. There is no understanding, only separate unknown pieces of data, data flow, and memory inside.
I love it :3
25:15 Contradictio in adjecto
16:12 why does Searle make those faces? He seems to have trouble understanding the British accent🤔
Like mathematics, a "set of rules" (a program) is an abstraction independent of hardware, and like mathematics it is lifeless, hence non-conscious. Who was it that said, "What breathes fire into the equations?" I think it was Hawking. It is the appropriate question. The AI nerds cannot answer it without wetware, or at least hardware.
the deceived doesn't know that they're being deceived
debate recap
boden: “if you like”
searle: “squiggle squaggle”
boden: “i don’t want to say that’s my position”
searle: “but we agreed that syntax is not semantics”
boden: *starts to respond*
moderator: *interrupts and begins to clarify*
Lol
How would Searle understand Boden's position if even she herself doesn't?
@@mathnihil "you don't need to know that you know, to know"
Searle's example only proves that no one really understands any language.
If you think John Searle is right, you need to take a look at modern machine learning models. They are not rule-based. Nobody hard-codes rules for them; they learn the rules themselves. They use statistical inference and mimic the neurons in our brains. Sure, their mimicking is not perfect, but it can be argued that it's of the same nature.
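To illustrate "learning the rule instead of being given it", here is a minimal sketch of my own: a single artificial neuron that ends up behaving like logical OR purely from examples, with no OR rule coded anywhere. Real models are vastly larger, but the principle is the same.

```python
# A single artificial neuron learns OR from data (my own toy illustration;
# modern networks are enormously bigger, but nobody writes the rule in either case).
def train_neuron(examples, epochs: int = 20, lr: float = 0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - out
            w1 += lr * err * x1   # no "if x1 or x2" is coded anywhere;
            w2 += lr * err * x2   # the weights drift until the behaviour matches
            b  += lr * err
    return w1, w2, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_neuron(data)
for (x1, x2), _ in data:
    print((x1, x2), 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0)  # reproduces OR
```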
I think what Searle struggles to understand (pun intended) is the consciousness part of the equation: the feeling of understanding something, or associating the smell, sound, and colours of a chicken when the word "chicken" is mentioned.
A robot can be intelligent, there's little doubt about that in 2023. What we're unsure of is whether it can be sentient.
She got Pwned by my man Searle
“That was a bit naughty”
Boden is winning so far.
Love this channel. THANKS FOR THIS FROM THESE INTELLIGENT MINDS FROM 1984. DOES ONE LIKE THE NOVEL OF H.G. Wells, "1984", or Aldous Huxley's novel, "Brave New World"? ALTHOUGH I LOVED BOTH, I HAVE TO GO WITH HUXLEY. Amazing 👏 I loved Alan Turing, what a mind. (The first computer.) He was gay, and look what happened to him after World War 2. Now we have drones delivering havoc all across nations with the great powers of governments and greed, murdering a lot of innocent people, not just the bad guys. How did we get to this? BIZARRE. And humans guiding those drones. Psychologists and philosophers of mind, the behaviorists totally ignored. 2,500 years before Christ, the GILGAMESH EPIC and cultures traveling. Intuition? Science, Darwin? Universal Law. Newton's law of gravity. The 7 universal laws through which it is governed, called the laws of: Attraction, Polarity, Rhythm, Relativity, Cause and Effect, Gender, and Perpetual Transformation of Energy. Then the 12 universal laws. Who would one go with, Tesla or Thomas Edison? I go with Tesla; he was the lightning and thunder. He understood intuition. Who does one go with, SIGMUND Freud or Carl Jung? The Shadow, consciousness and the SUBCONSCIOUS. EINSTEIN AND TESLA WERE FRIENDS. SO MUCH TO LEARN. Aristotle, Socrates, Plato, Seneca, so many throughout the centuries.
Orwell wrote 1984; H.G. Wells is known for The War of the Worlds.
Yes, George Orwell wrote "1984". He and H.G. Wells were very differently minded people and were making very different kinds of statements in their books. People who like "1984" also usually like "Brave New World", because people are attracted not by the specific novel but by the theme of dystopian novels (or, in the case of Brave New World, what some consider a utopian novel).
Searle seems to argue that brains (human brains) "do it, by definition" and therefore non-brains cannot, "by definition" - that is chauvinism and poor logic, IMO. I would contend that room+Searle actually DOES understand, and that the semantics are provided by the rules! Searle just insists on "biochemical processes" and "biological fact".
We need to change the name. It's not artificial intelligence. Intelligence is intelligence. Maybe computer intelligence.
It is intelligence from an artifice.
ar·ti·fice
noun
clever or cunning devices or expedients, especially as used to trick or deceive others.
I believe one of the key indicators of AI becoming truly self aware is the capability and capacity to deceive...
It already does
Same chances your toaster becomes self aware, if you believe in magic, that is.
Closed the video as soon as the woman spoke.
One cannot see "intentionality", and one cannot dissect biological systems to find "intentionality"; intentionality is deduced from behaviour (introspection is not accurate). Merely assuming "by definition" that intentionality cannot be attributed to anything other than a "biological system" is mere chauvinism and/or sophistry.
50:48 Fun fact - physics has found no referent for what the Greek philosophers were denoting by "atom". In other words, what physics describes as an atom has none of the properties of the original meaning the Greek philosophers put into the word "atom", other than being very small.
A robot’s homunculus? This has to be an AI generated conversation!?
Margaret Boden seems to have gotten the better of Searle here
Would have been funny if he had started speaking Chinese at one point.
Around 50 minutes in Searle starts misrepresenting the position Boden had been taking; I lost a little respect for his argumentation style there. I do think he has the better case even without that, though. Boden is taking a reasonable position, but one which I think 40 years later hasn't proven to be correct.
At 22:30 Ms Boden hits the bullseye of the principal problem with Searle's Chinese Room. I was hoping it would hold up better, since it was a great thought experiment. I still don't think AI can be real, but now I don't know why.
The woman has not heard about systems theory - how strange! Open vs. closed systems, complicated vs. complex, reactive vs. creative.