So did we notice that ChatGPT can pass the Turing test as originally conceived? Like...easily. So. We're pretty sure ChatGPT isn't "alive" or "thinking". So what's our test now? Are we moving the goalposts?
Hank: "How can I figure out whether my brother is a robot or not?" Me: "Crack his skull open and feast on the goo inside?" Hank: "Without going into his mind or body." Me: :(
God, I'm loving this subject of philosophy! Identity, personhood, the mind, free will... This is way more interesting than arguments for and against religion :P
This episode just gives me a thought on the subject of language education: maybe we are somehow like robots, because we learn vocabulary and grammar without understanding how to use them to express our own opinions. (School education has trained us to become robots with strong AI.)
Thank you for a great video. Three points from my side. - Firstly, Searle has a very good response to the objection that the whole system, and not just the CPU/"man in the room", understands Chinese (the objection known as the Systems Reply). Searle suggests that the person in the room memorize the rule book and symbols, thus internalizing the whole system. That person now goes outside, gets handed a piece of paper with some symbols on it, remembers the rules for those symbols, and then writes a reply in front of the Chinese person. He can do all this yet still have no idea what those symbols mean. Only if someone shows him a hamburger with the symbol for hamburger next to it will he understand what that symbol means. Until then, it's all squiggles and squoggles. - Secondly, it is interesting to note that in Searle's 1980 paper "Minds, Brains and Programs", the original Chinese Room paper, he defines 'Strong AI' in a slightly different way from how it has come to be used since. Searle says that Strong AI is the view that "...the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." and later, in the MIT Encyclopedia of Cognitive Science, he says, "“Strong AI” is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind. The idea of Strong AI is that the implemented program by itself is constitutive of having a mind." Thus Strong AI is not a property that a robot may or may not have, nor is it the idea that computers can think. Since Searle is the guy who coined the term, I believe he has the right to decide its meaning. This distinction is demonstrated by the next point. - Thirdly, he never says that a computer can't think.
In fact, in the MIT encyclopedia, he states, "The Chinese room does not show that “computers can’t think.” On the contrary, something can be a computer and can think. If a computer is any machine capable of carrying out a computation, then all normal human beings are computers and they think. The Chinese room shows that COMPUTATION, as defined by Alan TURING and others as formal symbol manipulation, is not by itself constitutive of thinking." Also, the Turing Test might have been passed recently: www.bbc.com/news/technology-27762088 Thank you!
I find that Star Trek Voyager S7Ep19 "Author, Author" has some really good arguments for when "personhood" can be considered a part of what something is. Great episode! DFTBA
This question is so intriguing, if not a little vague. One important idea that isn't included in the question of 'What constitutes a person / can a robot be a person?' is 'What is a robot?'. What if all the organs except the brain were grown in a lab by scientists and combined to make a human with a computer for a brain? Is that a robot? If not, that makes me wonder if a functioning brain in a jar being fed stimuli is a person.
You ARE a robot. We all are. It just so happens that we are incredibly complex biomachines. No computer can match this level of complexity yet. But it could be just a matter of time, even if it takes so damn long.
I think of this at an atomic level. Both we and the robots are made of atoms, and both are smart. So what's the difference, really? It's only the complexity.
I've been thinking, maybe there's some point where logical systems become too complex and reach a point where they "become conscious." It's just an idea, tho.
One important thing you didn't go over, though, is that the Turing test has been re-written numerous times to increase complexity. This was done to compensate for faster processors, bigger memory reserves, and more complex programming. When the original Turing test was drafted, it merely asked for the program to convince the human after 5 minutes of conversation. But it has been adjusted numerous times to counter diversionary tactics used by chatbots seeking to defeat the test by manipulating its rules.
Good point at the end about the program just following instructions to pass the Turing test. In machine learning, a very simple algorithm called "bag of words" can be taught surprisingly well how to classify movie reviews as either positive or negative. It is not conscious and in fact knows nothing about the English language, yet it is able to determine with 95% or greater accuracy the polarity of a movie review.
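The bag-of-words idea is easy to make concrete. Here's a minimal sketch, with toy made-up reviews and a plain word-count score instead of a trained 95%-accuracy model, so purely illustrative:

```python
from collections import Counter

# Toy training data; a real system would learn from thousands of labeled reviews.
TRAIN = [
    ("a wonderful heartfelt film", "pos"),
    ("brilliant acting and a great story", "pos"),
    ("dull boring waste of time", "neg"),
    ("terrible plot and awful acting", "neg"),
]

def train(examples):
    """Count how often each word occurs under each label (the 'bag of words')."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(text, counts):
    """Score a review by summing the per-label counts of its words."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

counts = train(TRAIN)
print(classify("a great heartfelt story", counts))  # pos
print(classify("boring and awful", counts))         # neg
```

Real systems weight the words (e.g. with Naive Bayes or logistic regression) rather than using raw counts, but even this toy version shows how a program can sort reviews correctly while "understanding" nothing.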
There is a huge difference between mimicking behavior and being conscious that you are doing it. The big leap for AI is not that AI can behave like us. It's that it knows it's doing it.
Searle seems to be assuming that we MUST associate images with understanding. I would agree if that is the case. In the Chinese Room example, the people receiving the outputs were limited to language only. That means we would have to limit understanding to language only, but that doesn't work, because Searle seems to imply there's more to understanding than JUST language. The room with the translator still isn't understanding unless the room also contains a codex or pictures related to the terms.
Detroit: Become Harry Edit: There was literally a Silverchair lyric in the next second after I unpaused THAT WAS MY SENIOR DANCE SONG rofl I thought no one remembered that
An AI will be "born" eventually. If it exhibits more human behavior than real humans, does this no longer qualify as human behavior or does this expand our understanding of what human behavior is?
How are you defining "human behaviour?" Surely it is simply the way humans behave, meaning it is impossible for anything to exhibit more human behaviour than a human.
"... We need to figure out how we are going to treat potential new persons, if we end up creating beings that we decide meet the threshold of personhood." In Orson Scott Card's SPEAKER FOR THE DEAD, there is a quote that addresses this subject: "The difference between [person] and [non-person] is not in the creature judged, but in the creature judging. When we declare an alien species to be [a person], it does not mean that they have passed a threshold of moral maturity. It means that we have." I have already figured out how I am going to treat potential new persons; I am going to treat them how I would like to be treated. As far as Turing's Test (or The Imitation Game, if you will) goes, I am eager to be fooled. One time my sister-in-law yelled into her phone, "Siri, you suck!" and Siri replied, "I'm doing my best". It made me feel so bad for Siri. Objectively, I know that Siri lacks the ability to have hurt feelings; and yet, I vowed to never treat Siri with any less dignity than I would anyone else.
Well, I found the easiest way for me not to freak out about deep existential questions is for Portugal to be featured in a video, because then I'm just amazed that someone remembered we exist.
I liked the inclusion of the challenge to The Chinese Room, but I was a little bit disappointed that it wasn't followed by a discussion of semanticity, even though it was mentioned. The system of The Chinese Room contains a book recording a sufficient number of details about how to construct a conversation, and maybe even how to generate novel responses, that it can fool any human into believing that it is human, and that's the principle under which chatbots operate, in a nutshell. But those chatbots don't verifiably have semantics, or knowledge of the meaning associated with their responses, and neither does the room system. If a perfect chatbot is possible, then what we have is either a philosophical zombie or an emergent strong AI (a strong AI which appeared by accident, incidentally, or perhaps antithetically to its design). Modeling semantics is a hard job in science, partially because we can't pull apart a brain and observe "aha, yes, this is semantics and we can clearly see the algorithm that represents it". A promising idea is statistical correlation between representations of concepts, developed through experience (learning, pretty much). Could you say for certain that a rules-based chatbot does not have semantics, somehow emergently represented, if it can perfectly fool a human? Rather, the problem of The Chinese Room (I think the phrase "real understanding" is used here) is whether we can actually know if a system possesses emergent semantics. In science, we've made a lot of progress in explicitly representing semantics, but it's still an open question whether emergent representation of semantics in an explicitly non-semantic system designed to superficially emulate semantics is even possible, or if there's even such a thing as a perfect chatbot.
One perhaps more rigorous test is to have an explicit model we know is right, and test the robot for roughly equivalent outcomes on various tests of semanticity against it (although that's essentially a generalisation from the basic conceit of the Turing Test). Another method might be to evaluate the robot algorithmically and see if we can generate a higher-order algorithm using it which performs roughly the same tasks as our known algorithm, although it is potentially more limited in scope. I'd obviously disagree that robots can't think. A Turing machine can do anything mathematical, so saying that robots can't think implies that the mind is non-mathematical, which even if magic were real is just not possible, since math is just a symbolic system for representing any sort of rule. Whether a system of synthetic rules for mimicking conversation that is explicitly not semantic can represent semantics emergently, though, is a more interesting question. How we'd measure it is a more interesting question still, although thus far no such system has appeared and it may never. Personally, I don't believe in true philosophical zombies, and I do think that a perfect chatbot would necessarily have real semantics, although a very good chatbot doesn't need to. This is philosophy, but it's neat to consider that we could identify that a chatbot is very good without being perfect. More broadly, such emergent semanticity is the question of whether weak AI can make the leap into strong AI by accident, and I don't really believe in that either, but it's also interesting to imagine how we could divide weak algorithms from strong, and how much of a continuum there is between them. And in here at the very bottom I'd like to include a minor complaint about the distinction between strong and weak AI.
The actual distinction as I understand it is that weak AI is task-specific and doesn't possess any discernible ability to understand, whereas strong AI is very capable of generating meaning and acting on it. From that perspective, a strong AI could be emulating a rat or be an unfathomably great superintelligence just as easily as it could be trying to be human.
bit of an error I just noticed - The Chinese Room can fool any Chinese reader into believing it writes Chinese. There is a human inside of it, so it wouldn't need to be fooling anyone on the humanity count.
You know I keep noticing more errors so please just excuse them where you can tease out what I meant. I'm not a very good writer and phones make editing tough to do.
你会说中文话吗? (ni hui shuo zhong wen hua ma? "Can you speak Chinese?") 对！我会说中国话。(dui! wo hui shuo zhong guo hua. "Yes! I can speak Chinese.") That's the message with the characters, the sounds of each character, and an English gloss.
I really like Crash Course Philosophy; you've addressed the core problems of "The Cambridge Quintet" (which I'm currently reading). Watching your videos makes it easier for me to clarify my ideas and gives me some new ones. Thank you!
This reminds me of that Doctor Who episode from season 11, where the guy who was actually a bomb got his hand cut off and there were just wires instead of blood, and all his memories were implanted.
Constructs: things that are or were built. Humans are just really intelligent constructs (although the "really" part depends on which human). The only reason they are so intelligent is that they were built that way (when they were being built in the womb, or whatever creationist story you believe in). They just have a different design and composition. Therefore a construct's composition and design define its intelligence (human or otherwise). Which means any construct, including an artificial one, can be as intelligent as, or even more intelligent than, a human (to say they can't in all cases is ridiculous), depending on how it was designed and what it was made of. How can you tell if a construct has the same level of intelligence as you do, AKA personhood? By observing and testing that construct's intellectual actions. Sure, some of the construct's actions (when you're testing it) can be misleading, but it's best to err on the side of caution. If you can't tell, because its actions or specific behavior are so similar to those of a sentient being, well, best treat it as one just in case. Because if they're so similar in comparable (intellectual) behavior that you can't tell, then maybe there is no difference.
I think the best test to recognize a robot as a person is if it is non-prescriptively proactive and can demonstrate self-awareness and complex ideas of its own volition, without being prompted. The robot must be able to pursue conversation and its own desires without a prescriptive programming influence. The Chinese Room is a purely reactive case, and makes no mention of what a proactive robot would be like. Observing deliberate proactivity is key, I think.
And aliens can't be people? Sounds like the future of racism... If we are going to go around calling intelligent entities sub-people, then we will run into a lot of trouble.
I've heard the Chinese Room thought experiment used not to prove that Turing-Test-passing AIs are not necessarily strong, but that strong AI is impossible. Which is absurd. The biggest problem with the proper Chinese Room argument is simple: if you define "understanding" in such a way that the Chinese Room does not qualify... how could you tell understanding from false understanding?
My response to the Chinese Room thought experiment is that if you were to program a computer to follow the same algorithm as described in the book you were following, you'd have a computer program which can pass the Turing Test. Hence, it's the algorithm in the book that's the person in that situation. You are just acting as a computer to run it.
What about a "reverse" Turing test? Due to differences in the "programming" that humans receive--culture, language, morality, etiquette, etc.--not to mention differences that were inborn (such as developmental disabilities), it may be difficult for Person A to be certain that radically different Person B is, in fact, a human.
Hey Crash Course! I like your book optical illusions on your "CC Literature" video thumbnails. Just noticed this now and thought it was cool. Very clever!
I recommend the book 'Superintelligence' by Nick Bostrom. It's brilliant and looks at AI in an extremely large range of different ways. Definitely read 'Superintelligence' if you are interested in AI.
Interesting ideas. It made me think of a little twist on the Ship of Theseus: if you take a person and replace all his organs with artificial organs, one at a time, at what point does this person stop being a person and start being a machine?
Making computers think like people is just good fun for sci-fi movies and philosophical questions. The ideal computer or robot is one that always does what we really want. How it works or thinks isn't so important. Is there any big practical benefit to having a computer pass the Turing test?
This episode was basically made for me; I love thinking about AI and its potential personhood ^^ Most of the things he said I've heard before, but the counter-argument to the Chinese Room experiment, consisting of thinking of the entire room as the "brain" and the person performing the instructions as just a part of it, was very interesting. Who is the person in that regard, then? The room? The fusion of the writer of the instructions, the instructions themselves, and the person performing them? Are the Chinese-speaking people providing messages also a part of that personhood, seeing how without them the Chinese Room wouldn't act at all? I love thinking about stuff like this, even though it may be complete nonsense.
I have this story I want to try writing sometime that has an interesting take on this question. Basically, if you made a strong AI and you could upload that AI into a human body, would it be human? A person?
7:42 = potential fault in Chinese room T.E. 7:42 = VALID ... however Hypo: Fault can be communicated better + LCD is always best Potential solution: Eng. Word: Memory Text above in multiple languages. Respond if understood or and would like to continue/explore dialogue thread
Surely one of the primary considerations is the ability to learn, adapt and develop a unique personality in response to life experience? I'd absolutely show the same respect to robots that acted human if they developed in the same way as humans in all ways except birth. I find it inconceivable that such robots are not ALREADY possible, let alone in the future.
The term soul is often misused and is thought to mean a non-corporeal part of living organisms that is separate from the body; what it actually means, however, is life itself, or the life of a living organism, creature, or person. For example, a living animal or person is a soul, or living soul, but a dead animal is a dead soul. A dead soul is not separated; it is just dead.
The Turing Test shouldn't be considered a one-and-done deal, but rather a constant process we go through whenever we consider personhood for someone or something. The Chinese Room might pass a cursory examination, but a conversation spanning hours or days would reveal the limited capacity of the room's responses.
The error with the Chinese Room is that it assumes the machine is algorithmically (not cognitively) programmed with every single possible input and output. Once we have machine learning, won't that be a situation quite different from the code book?
To me, the Chinese room thought experiment just describes a chatbot. You can very easily prove that the chatbot doesn't actually understand what you're asking it, that it's only barfing out pre-determined answers. All you have to do is ask it to be creative. Even a little bit. Don't ask "What's your favorite color," ask "Help me pick a title for a book I'm writing."
There are machines that are able to pass the Turing test just with writing (at least most of the time), like Alan Turing described it, but as soon as the actions go beyond just writing they run into problems. Like if you had asked them to send you their location via WhatsApp. Even if they could write on WhatsApp, that would be almost impossible for them, because their capabilities are limited to writing. That's why we are so far away from strong AI. We have weak AI that is programmed to act as if it were strong.
The real problem with the Chinese Room is that the code book is impossible today. If I made a code book like that and gave it to a machine, it would pass the Turing test as well.
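The code-book point can be made concrete with a toy sketch of a literal lookup-table "room" (the entries are invented for illustration): it answers only the messages its author anticipated, which is why a code book that passes an open-ended Turing test would have to be impossibly large.

```python
# A literal "code book": every anticipated input mapped to a canned reply.
# Entries are made up for illustration.
CODE_BOOK = {
    "hello": "Hi there!",
    "how are you?": "I'm fine, thanks. And you?",
    "what's your favorite color?": "Blue, definitely.",
}

def chinese_room(message):
    """Look the message up in the book; no understanding involved."""
    return CODE_BOOK.get(message.lower().strip(), "I don't follow.")

print(chinese_room("Hello"))                   # Hi there!
print(chinese_room("Help me title my novel"))  # I don't follow.
```

Any question outside the book falls flat, which is also the intuition behind the "just ask it to be creative" test mentioned above.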
Hank warmed my heart when he said that even if John bled motor oil instead of blood he would still be his brother.
Me too
Me too
Same! That was a particularly "human" thing to say. :)
DFTBC: don't forget to be cute.
What if Hank is a robot?
Living in the era of ChatGPT, it is quite alarming to look back at this video.
If a robot is ever considered a person, would it be considered immoral to turn it off or otherwise remove its power source?
It's not in the Bible, so we will never know.
+Michael Hill
bible=the worst source of information ever
Just because numbers aren't physical doesn't make them any less of a reality.
Depends on how he works. If turning him off means losing his personality or otherwise disrupting his being, then it's probably like killing. But if nothing drastic happens, then it would probably count as sleep, a coma, or losing consciousness. Except that you would be unable to wake up without external help.
No, you can turn them back on
A harder test would be, can it fool itself into thinking that it is a person?
Tyrell: If we gift them the past we create a cushion or pillow for their emotions and consequently we can control them better.
Deckard: Memories. You're talking about memories.
It can be programmed to act like it thinks that it's a person but does it actually think it's anything or are we fooling _ourselves_ into thinking that it is?
I was really hoping that would be the twist to Ex Machina --that Domhnall Gleeson's character would turn out to be a robot. Oh, spoiler alert: there is no twist.
I don't know if that's such a good idea. I mean, I'm so stupid sometimes I think I'm not even a person, you know?
+Jeremiah B I guess that there is a point where the AI is too stupid to distinguish differences. So, basically, being able to fool itself either means that it's really smart or really dumb.
Hank should've at least pointed out the distinction between information processing (i.e. intelligence) and conscious experience. It seems pretty obvious to me that personhood vs. non-personhood will come down to whether we think it has conscious experience.
Most scientists do not believe that our computers (based on the von Neumann architecture) could give rise to conscious experience. No matter how generally intelligent Siri becomes, she's still as conscious as a rock. A sentient machine can only be made once we figure out what sort of complex processing of information actually gives rise to conscious experience. Then, we can build the hardware for an artificial consciousness.
Haven't we long abandoned the traditional von Neumann architecture due to the von Neumann bottleneck?
+CrashCourse The response you made to the Chinese Room is the main objection to this argument, and it's known as the “Systems Reply.”
It goes like this; the person in the room doesn’t understand Chinese, but that person is part of a system, and the system as a whole does understand it. We attribute understanding not to the individual man, but to the entire room.
Well, Searle responds by saying: why is it that the person in the room doesn’t understand Chinese? Because the person has no way to attach meaning to the symbols. And in this regard, the room has no resources that the person doesn’t have. So if the person has no way to attach meaning to the symbols, how could the room as a whole possibly have a way to do this? Searle himself suggests an extension to the thought experiment: imagine the person in the room memorises the database and the rule book. This means he doesn’t need the room anymore. He goes out and converses with people face-to-face in Chinese, but he still doesn’t understand Chinese, because all he’s doing is manipulating symbols. Yet in this case he is the entire system.
Now of course an obvious objection to this is that if you can go out and converse with people in Chinese, you must be able to converse in Chinese and thus understand it.
This objection, which the functionalist could make, doesn’t actually address Searle’s point, though. The whole point of the Chinese Room Thought Experiment is that you can’t generate understanding simply by running the right program.
You can’t get semantics merely from the right syntax. Now granted, you surely would understand Chinese if you could converse perfectly with Chinese people, but I think Searle can hold that this understanding arises not merely from manipulating symbols in the right way, but also from all the various things that go on in face-to-face interactions.
Reminds me of the Star Trek episode where Picard has to try to show that Data (the android) is a sentient being with the right to choose.
that's what I was thinking too!
The Measure of a Man, if anyone wants to go watch it.
I was actually thinking about the episode from season 1 of TNG where everyone got infected with the love virus from the original series and Data goes "If you prick me, do I not... leak?"
Me too 😊 #Data!
Isn't that every episode?
The most moral and pragmatic thing would be to treat any AI that you have reason to suspect has reached personhood as such by default.
What if doing so required those whom you already know to be people to suffer?
aperson22222 Example?
darkblood626 There's a Star Trek episode where some of the human crew members are stranded on a space station that's about to blow. The only way to save them is by sending three bots on a suicide mission to stabilize the defective element. The bots refuse their orders (long story) and the crew considers reprogramming them so they're no longer able to do so. Data locks them out of the transporters so the crew can't beam the bots over against their will, even though he knows that doing so greatly reduces his fellow officers' likelihood of survival.
I think the most pragmatic thing to do is to just not create strong AI in the first place. Avoid ourselves a lot of potential problems. Of course it's pretty much a given that if we are given the ability to create strong AI, someone will do it eventually, so I guess I'll just have to agree with your stance.
aperson22222 Forcing the bots to die against their will would not be moral.
I feel that you should have addressed the consciousness question. Is there a subjective experience to being the strong AI? Is that what separates us (as opposed to souls) from a machine that simulates intelligence? Or does it matter for deciding if something is an actual AI? For me, all the most interesting questions about personhood and AI in general surround consciousness.
I know.
It was my understanding that consciousness is still a mystery.
To me, it's whether the AI can actually feel emotions or just simulate them.
+
Since you can't experience another person's consciousness, and we can't determine whether another human being is actively conscious except by how accurately they respond to specific stimuli, it can't really be used to test anything. If a robot is answering questions, it's 'conscious' for all intents and purposes.
look at the mistakes he made in the Crash Course Games episode
no robot would make such silly mistakes
also the fact that in Crash Course World History he constantly missed the place he was talking about when he spun the globe
Unless those mistakes were part of its programming.
Or they were part of his cunning Ai - intended to throw us off from finding out the truth :-P
That's exactly how an advanced, humanity-infiltrating android would act to convince us, the humans (definitely not robots), that they are, in fact, humans.
I dunno, he could be an AI. Do you remember how Microsoft's AI Tay made a few silly mistakes?
I like the approach of Jack Cohen and Ian Stewart in "The collapse of chaos": They suggest that the mind is an emergent property, a process that is created by a certain arrangement of neurons.
It is like the motion of a car, something abstract and not material. If you dissected a car, you would find wheels, an engine, etc., but not a tiny bit of motion. The same applies to our brain/mind.
So, in my opinion, if we can create something similar to neurons (and not only neurons, of course, more like a certain arrangement of them), we could create a mind as well. But it will probably take a few more years until they are on the same level as we already are.
this is now almost our reality
This is probably the most compassionate view of intelligent AI I have ever seen, thank you
So did we notice that ChatGPT can pass the Turing test as originally conceived? Like...easily. So. We're pretty sure ChatGPT isn't "alive" or "thinking". So what's our test now? Are we moving the goalposts?
Hank: "How can I figure out my brother is a robot or not?"
Me: "Crack his skull open and feast on the goo inside?"
Hank: "Without going into his mind or body."
Me: :(
God, I'm loving this subject of philosophy! Identity, personhood, the mind, free will... This is way more interesting than arguments for and against religion :P
You guys are amazing! Seriously! The amount of contribution you guys are doing is mind blowing! Thank You!
This episode just gave me a thought on the subject of language education: maybe we are somehow like robots, because we learn vocabulary and grammar without understanding how to use them to express our own opinions. (School education has trained us to become robots with strong AI.)
Thank you for a great video. Three points from my side.
- Firstly, Searle has a very good response to the objection that the whole system, and not just the CPU/"man in the room", understands Chinese (an objection Searle calls the Systems Reply). Searle suggests that the person in the room memorize the rule book and symbols, thus internalizing the whole system. That person now goes outside, gets handed a piece of paper with some symbols on it, remembers the rules for those symbols, and then writes a reply in front of the Chinese person. He can do all this yet still have no idea what those symbols mean. Only if someone shows him a hamburger with the symbol for hamburger next to it will he understand what that symbol means. Until then, it's all squiggles and squaggles.
- Secondly, it is interesting to note that in Searle's 1980 paper "Minds, Brains and Programs", the original Chinese Room paper, he defines 'Strong AI' in a slightly different way from how it has come to be used since. Searle says that Strong AI is the view that "...the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." and later, in the MIT Encyclopedia of Cognitive Science, he says, "“Strong AI” is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind. The idea of Strong AI is that the implemented program by itself is constitutive of having a mind." Thus Strong AI is not a property that a robot may or may not have, nor is it the idea that computers can think. Since Searle is the guy who coined the term, I believe he has the right to decide its meaning. This distinction is demonstrated by the next point.
- Thirdly, he never says that a computer can't think. In fact, in the MIT encyclopedia, he states, " The Chinese room does not show that “computers can’t think.” On the contrary, something can be a computer and can think. If a computer is any machine capable of carrying out a computation, then all normal human beings are computers and they think. The Chinese room shows that COMPUTATION , as defined by Alan TURING and others as formal symbol manipulation, is not by itself constitutive of thinking."
Also, the Turing Test might have been passed recently: www.bbc.com/news/technology-27762088
Thank you!
I like that you decide to approach the old arguments with relevant new perspectives.
I find that Star Trek Voyager S7Ep19 "Author, Author" has some really good arguments for when "personhood" can be considered a part of what something is.
Great episode!
DFTBA
This question is so intriguing, if not a little vague. One important idea that isn't included in the question of 'What constitutes a person / can a robot be a person?' is 'What is a robot?'. What if all the organs except the brain were grown in a lab by scientists and combined to make a human with a computer for a brain? Is that a robot? If not, that makes me wonder whether a functioning brain in a jar being fed stimuli is a person.
A small but thought-provoking discussion of personhood occurs in Kenneth Oppel's adolescent novel _Half Brother_.
But how do I know that I'M not a robot?!?
... Seriously, brains are pretty much computers, right?
Yes.
You ARE a robot. We all are. It just so happens that we are incredibly complex biomachines. No computer can match this level of complexity yet. But it could be just a matter of time, even if it takes so damn long.
i think of this at an atomic level.
both us and the robots are made of atoms, and both are smart.
so what's the difference, really?
it's only the complexity.
Ben Dover The atoms aren't really relevant here.
I've been thinking, maybe there's some point where logical systems become too complex and reach a point where they "become conscious." It's just an idea, tho.
One important thing you didn't go over, though, is that the Turing test has been rewritten numerous times to increase its complexity. This was done to compensate for faster processors, bigger memory reserves and more complex programming.
When the original Turing test was drafted, it merely asked for the program to convince the human after 5 minutes of conversation. But it has been adjusted numerous times to counter diversionary tactics used by chatbots seeking to defeat the test by manipulating its rules.
No 2001 or terminator references? I'm disappointed!!
well there was an angler fish. that's enough of a reference for me.
you should have listened better ;)
Good point at the end about the program just following instructions to pass the Turing test. In machine learning, a very simple algorithm called "bag of words" can be taught surprisingly well how to classify movie reviews as either positive or negative. It does not have a consciousness and in fact knows nothing about the English language, yet it is able to determine with 95% or greater accuracy the polarity of a movie review.
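Roughly what that looks like in code: here's a minimal bag-of-words-style sketch (the word lists and reviews are made up for illustration; a real classifier would learn its word weights from labeled data rather than use hand-picked sets). The point stands either way: it only counts tokens and understands nothing.

```python
# Toy bag-of-words sentiment "classifier": counts known words, nothing more.
# POSITIVE/NEGATIVE are illustrative hand-picked sets, not learned weights.
POSITIVE = {"great", "wonderful", "masterpiece", "loved", "brilliant"}
NEGATIVE = {"boring", "awful", "terrible", "hated", "mess"}

def classify(review: str) -> str:
    # Lowercase, split on whitespace, strip simple punctuation.
    words = [w.strip(".,!?") for w in review.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

print(classify("A wonderful film, I loved every minute"))  # positive
print(classify("Boring, awful pacing, I hated it"))        # negative
```

It never parses grammar or "knows" what a film is, yet on many reviews a (trained) version of this idea performs remarkably well.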
Can I go back to Fallout 4 to answer this question?!
"Join the Railroad!"
That was the whole point of Fallout 4, basically.
Yeah, that game was what decided it for me. Synths are not people, nor are they sentient or deserving of the same rights as humans.
Brotherhood or Minutemen are the only good option, really.
+Parker Sprague Who really gives a damn about the half-assed plot if the REAL goal of the game is to recycle trash?
There is a huge difference between mimicking behavior and being conscious that you are doing it. The big leap for AI is not that AI can behave like us; it's that it knows it's doing it.
Searle seems to be assuming that we MUST associate images with understanding. I would agree if that is the case.
In the Chinese room example, the people receiving the outputs were limited to language only. That means we would have to limit understanding to language only, but that doesn’t work because Searle seems to imply there’s more to understanding than JUST language.
The room with the translator still isn't understanding unless the room also contains a codex or pictures related to the terms.
When Hank mentions the viola at 4:04 🙌
every mass effect fan: "Does this unit have a soul?"
I absolutely love it when you make those philosophers smile! Who thought of that? That is brilliant!
Detroit: Become Harry
Edit: There was literally a Silverchair lyric in the next second after I unpaused
THAT WAS MY SENIOR DANCE SONG rofl I thought no one remembered that
This helps me understand the episode of SMALL WONDER I just watched.
An AI will be "born" eventually. If it exhibits more human behavior than real humans, does this no longer qualify as human behavior or does this expand our understanding of what human behavior is?
How are you defining "human behaviour?" Surely it is simply the way humans behave, meaning it is impossible for anything to exhibit more human behaviour than a human.
For me, a key deciding point would be if the AI expressed individual wants or preferences outside the original scope of its programming.
"... We need to figure out how we are going to treat potential new persons, if we end up creating beings that we decide meet the threshold of personhood." In Orson Scott Card's SPEAKER FOR THE DEAD, there is a quote that addresses this subject:
"The difference between [person] and [non-person] is not in the creature judged, but in the creature judging. When we declare an alien species to be [a person], it does not mean that they have passed a threshold of moral maturity. It means that we have."
I have already figured out how I am going to treat potential new persons; I am going to treat them how I would like to be treated. As far as Turing's Test (or The Imitation Game, if you will), I am eager to be fooled. One time my sister-in-law yelled into her phone, "Siri, you suck!" and Siri replied, "I'm doing my best". It made me feel so bad for Siri. Objectively, I know that Siri lacks the ability to have hurt feelings; and yet, I vowed to never treat Siri with any less dignity than I would anyone else.
Our future robot masters will remember this.
Please do a Sociology and an Art History crash course!!!! Don't Forget, That'd Be Awesome!!
My 6th grade class does a project on this topic every year. We use "The Adoration of Jenna Fox" and "Frankenstein" as text. Great video!
well, i found the easiest way for me not to freak out about deep existential questions is for Portugal to be featured in a video, because then i'm just amazed that someone remembered we exist
shoutout to all my fellow Portuguese speakers, cheers from Brazil!! amazing video and series
Please let there be an episode on personhood in other species, i.e. animals. Meat-eating is such a fascinating area of ethics
*we are all intelligent machines.* the question is are you the same type of machine or a different type of machine.
Crash courses should have a downloadable longer version where the speaker speaks at normal speed.
Wowowowowowow!!! The Chinese code part really impressed me!
I recommend watching Ex Machina, that movie has so many good points.
The Fallout 4 synths are perfect for this. They only have a few parts, all other organs are created with a machine, but are exactly like human organs.
I liked the inclusion of the challenge to the Chinese Room, but I was a little bit disappointed that it wasn't followed by a discussion of semanticity, even though it was mentioned. The system of the Chinese Room contains a book recording a sufficient number of details about how to construct a conversation, and maybe even how to generate novel responses, that it can fool any human into believing that it is human, and that's the principle under which chatbots operate, in a nutshell. But those chatbots don't verifiably have semantics, or knowledge of the meaning associated with their responses, and neither does the room system. If a perfect chatbot is possible, then what we have is either a philosophical zombie or an emergent strong AI (a strong AI which appeared by accident, incidentally, or perhaps antithetically to its design).
Modeling semantics is a hard job in science, partially because we can't pull apart a brain and observe "aha, yes, this is semantics and we can clearly see the algorithm that represents it". A promising idea is statistical correlation between representations of concepts, developed through experience (learning, pretty much). Could you say for certain that a rules-based chatbot does not have semantics, somehow emergently represented, if it can perfectly fool a human? Rather, the problem of the Chinese Room (I think the phrase "real understanding" is used here) is whether we can actually know if a system possesses emergent semantics. In science, we've made a lot of progress in explicitly representing semantics, but it's still an open question whether emergent representation of semantics in an explicitly non-semantic system designed to superficially emulate semantics is even possible, or if there's even such a thing as a perfect chatbot. One perhaps more rigorous test is to have an explicit model we know is right, and test the robot for roughly equivalent outcomes on various tests of semanticity against it (although that's essentially a generalisation of the basic conceit of the Turing Test). Another method might be to evaluate the robot algorithmically and see if we can generate a higher-order algorithm using it which performs roughly the same tasks as our known algorithm, although it is potentially more limited in scope.
I'd obviously disagree that robots can't think. A Turing machine can do anything mathematical, so saying that robots can't think implies that the mind is non-mathematical, which even if magic were real is just not possible, since math is just a symbolic system for representing any sort of rule. Whether a system of synthetic rules for mimicking conversation that is explicitly not semantic can represent semantics emergently, though, is a more interesting question. How we'd measure it is a more interesting question still, although thus far no such system has appeared and it may never. Personally, I don't believe in true philosophical zombies, and I do think that a perfect chatbot would necessarily have real semantics, although a very good chatbot doesn't need to. This is philosophy, but it's neat to consider that we could identify that a chatbot could be very good without being perfect.
More broadly, such emergent semanticity is the question of whether weak AI can make the leap into strong AI by accident, and I don't really believe in that either, but it's also interesting to imagine how we could divide weak algorithms from strong, and how much of a continuum there is between them.
And in here at the very bottom I'd like to include a minor complaint about the distinction between strong and weak AI. The actual distinction as I understand it is that weak AI is task-specific and doesn't possess any discernable ability to understand, whereas strong AI is very capable of generating meaning and acting on it. From that perspective, a strong AI could be emulating a rat or be an unfathomably great superintelligence just as easily as it could be trying to be human.
bit of an error I just noticed - The Chinese Room can fool any Chinese reader into believing it writes Chinese. There is a human inside of it, so it wouldn't need to be fooling anyone on the humanity count.
You know I keep noticing more errors so please just excuse them where you can tease out what I meant. I'm not a very good writer and phones make editing tough to do.
Any Chinese people who can translate? ^_^
The characters meant "Yes, I know how to speak Chinese."
I'm not Chinese, but it says: "Can you speak Chinese?" "Correct. I can speak Chinese"
i must say the grammar is a bit off, though
你会说中文话吗? (ni hui shuo zhong wen hua ma?)
对!我会说中国话。(dui! wo hui shuo zhong guo hua.)That's the message with characters and the sounds to each character.
The example in chinese is a bit off. It has failed the chinese speaking turing test. :P
I really like Crash Course Philosophy; you've addressed the core problems of "The Cambridge Quintet" (which I'm currently reading). Watching your videos makes it easier for me to clear up my ideas and gives me some new ones. Thank you!
This reminds me of that doctor who episode from season 11 where that guy who was actually a bomb got his hand cut off and there was just wires instead of blood and all his memories were implanted.
isn't that from the victory of the daleks in season 5?
Christians: Harry can't be like us because he has no soul!
Possessed dolls: *exist in Christian culture*
Constructs: things that are and were built. Humans are just really intelligent constructs (although the "really" part depends on which human). The only reason they are so intelligent is because they were built that way (when they were being built in the womb, or whatever creationist story you believe in). They just have a different design and composition. Therefore, a construct's composition and design define its intelligence (human or otherwise). Which means any construct, including an artificial one, can be as intelligent as or even more intelligent than a human (to say they can't in all cases is ridiculous), depending on how it was designed and what it was made of.
How can you tell if a construct has the same level of intelligence as you do, AKA personhood? By observing and testing that construct's intellectual actions. Sure, some of the construct's actions (when you're testing it) can be misleading, but it's best to err on the side of caution. If you can't tell because its actions or specific behavior are so similar to those of a sentient being, well, best treat it as one just in case. Because if they're so similar in comparable (intellectual) behavior that you can't tell, then maybe there is no difference.
83 brotherhood of steel members disliked this video
I learned new things today. Thx.
I think the best test to recognize a robot as a person is if it is non-prescriptively proactive and can demonstrate self-awareness and complex ideas of its own volition, without being prompted. The robot must be able to pursue conversation and its own desires without a prescriptive programming influence. The Chinese Room is a purely reactive case, and makes no mention of what a proactive robot would be like. Observing deliberate proactivity is key, I think.
a robot would never be a person; at best it would be an intelligent being, like an alien
But an alien can be a person, just not from Earth; hence why it's alien to this world, but still a person.
Ketraar it depends on what you think the word "person" means
Tecnovlog
My point exactly. So a robot "could" be a person. ;-)
They did a whole episode on what person means. I think most people would agree an intelligent alien counts.
And aliens can't be people? Sounds like the future of racism... If we are going to go around calling intelligent entities sub-people, then we will run into a lot of trouble.
ex machina does an amazing job exploring this idea. recommend the movie to anyone who thought this was interesting
I've heard the Chinese Room thought experiment used not to prove that Turing-Test-passing AI's are not necessarily strong, but that strong AI is impossible. Which is absurd.
The biggest problem with the proper Chinese Room argument is simple. If you define "understanding" in such a way that the Chinese Room does not qualify...how is it that you could tell understanding from false understanding?
Please do create a CrashCourse on posthumanism and AI and robotics. Please, please, please with silicon & electroceutical sprinkles on top!
This made me think of the Bicentennial man. A great movie with Robin Williams.
this episode made me go CRAZY...!!!
Hopefully John or someone can expand on this discussion with Daneel Olivaw & Giskard
The first sentence I got really interested already. Keep up the good work! :)
My response to the Chinese Room thought experiment is that if you were to program a computer to follow the same algorithm as described in the book you were following, you'd have a computer program which can pass the Turing Test. Hence, it's the algorithm in the book that's the person in that situation. You are just acting as a computer to run it.
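That response can be made concrete: the "book" is just a lookup table, and whoever (or whatever) executes the lookup is interchangeable. Here's a toy sketch, borrowing the two phrases quoted elsewhere in this thread; the fallback reply is invented for illustration.

```python
# The "rule book" as a literal lookup table. The executor of chinese_room
# (a person, a CPU, anything) needs zero understanding of these symbols.
RULE_BOOK = {
    "你会说中文话吗？": "对！我会说中国话。",  # "Can you speak Chinese?" -> "Yes! I can speak Chinese."
}

def chinese_room(message: str) -> str:
    # Pure symbol manipulation: match the input shape, emit the listed output.
    return RULE_BOOK.get(message, "……")  # shrug at anything not in the book

print(chinese_room("你会说中文话吗？"))
```

On this view, any "understanding" lives in the algorithm encoded in the book, not in the component that mechanically runs it.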
What about a "reverse" Turing test? Due to differences in the "programming" that humans receive--culture, language, morality, etiquette, etc.--not to mention differences that were inborn (such as developmental disabilities), it may be difficult for Person A to be certain that radically different Person B is, in fact, a human.
Hey Crash Course! I like your book optical illusions on your "CC Literature" video thumbnails. Just noticed this now and thought it was cool. Very clever!
This is acceptable homework content. Thanks to our lecturer for giving us this as material.
@hankgreen
I have been following you since '08 or so. That TBI messed up my memory, but I do remember how awesome Crash Course always was
FREE WILL! So excited for the next episode!
This was argued in a Star Trek: Next Gen episode, with Data being in court instead of Harry. Great episode, really.
I recommend the book 'Superintelligence' by Nick Bostrom. It's brilliant and looks at AI in an extremely large range of different ways. Definitely read 'Superintelligence' if you are interested in AI.
Interesting ideas. It made me think of a little twist on the Ship of Theseus: if you take a person and replace all his organs with artificial organs, one at a time, at what point does this person stop being a person and start being a machine?
Making computers think like people is just good fun for sci-fi movies and philosophical questions. The ideal computer or robot is one that always does what we really want. How it works or thinks isn't so important. Is there any big practical benefit to having a computer pass the Turing test?
This episode was basically made for me; I love thinking about AI and its potential personhood ^^
Most of the things he said I've heard before, but the counter-argument to the Chinese Room experiment, thinking of the entire room as the "brain" and the person performing the instructions as just a part of it, was very interesting. Who is the person in that regard, then? The room? The fusion of the writer of the instructions, the instructions themselves, and the person performing them? Are the Chinese-speaking people providing messages also a part of that personhood, seeing how without them the Chinese Room wouldn't act at all?
I love thinking about stuff like this, even though it may be complete nonsense.
Now that you've watched this episode of Crash Course, my recommended movies for you: The Imitation Game and Ex Machina.
Ex Machina was an amazing movie. I second that recommendation.
I'm a huge Mega Man fan, and machines are amazing; the deep concepts are neat
I have this story I want to try writing sometime that has an interesting take on this question. Basically, if you made a strong AI and you could upload that AI into human body, would it be human? A person?
7:42 = potential fault in the Chinese Room thought experiment
7:42 = VALID ... however
Hypothesis: the fault could be communicated better + LCD is always best
Potential solution:
English word: Memory
Text above in multiple languages. Respond if understood and/or would like to continue/explore the dialogue thread
that was such a sweet ending omg
That last comment of your love for your brother was so sweet :)
More videos about Artificial Intelligence please! It's interesting.
Cool. Certainly gave me something to think about.
Surely one of the primary considerations is the ability to learn, adapt and develop a unique personality in response to life experience? I'd absolutely show the same respect to robots that acted human if they developed in the same way as humans in all ways except birth. I find it inconceivable that such robots are not ALREADY possible, let alone in the future.
The Turing test has been beaten! Watching this in 2020
The term "soul" is often misused and thought to mean a non-corporeal part of living organisms that is separate from the body. What it actually means, however, is life itself, or the life of a living organism, creature or person. For example, a living animal or person is a soul, or living soul, but a dead animal is a dead soul. A dead soul is not separated; it is just dead.
I just read a book called Being (in all of 3 days) that asks this question SO well...
would love a bibliography, I'm doing an essay on whether there is reason to fear AI or not at uni and would love books to refer to
hank was kind of passionate in this one
The Turing Test shouldn't be considered a one-and-done deal, but rather a constant process we go through whenever we consider personhood for someone or something. The Chinese Room might pass a cursory examination, but a conversation spanning hours or days would reveal the limited capacity of the room's responses.
No mention of Bicentennial Man? I feel emotional connection, shared history, memories and love go a long way to contributing to personhood too.
the error with the Chinese Room is that it assumes the machine is algorithmically (not cognitively) programmed with every single possible input and output. Once we have machine learning, won't that be a situation quite different from the code book?
how many episodes will there be?
i hope it never ends :P
Nice Pulp reference.
Hank! Long shot here, do you have any philosophy books (fictional or not) that you would recommend? I am keen to explore further...
The fact that he made it through an entire episode about robots and AI without referencing Data from Star Trek is remarkable. :)
To me, the Chinese room thought experiment just describes a chatbot. You can very easily prove that the chatbot doesn't actually understand what you're asking it, that it's only barfing out pre-determined answers. All you have to do is ask it to be creative. Even a little bit.
Don't ask "What's your favorite color," ask "Help me pick a title for a book I'm writing."
There are machines that are able to pass the Turing test just through writing (at least most of the time), like Alan Turing described it, but as soon as the actions go beyond just writing, they run into problems. Like if you asked them to send you their location via WhatsApp: even if they could write on WhatsApp, that would be almost impossible for them, because their capabilities are limited to writing. That's why we are so far away from strong AI. We have weak AI that is programmed to act as if it were strong.
The real problem with the Chinese Room is that the code book is impossible today. If I made a code book like that and gave it to a machine, it would pass the Turing test as well.
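A quick back-of-the-envelope calculation backs this up: a literal code book mapping every possible input to a reply explodes combinatorially. The numbers below are illustrative assumptions (a modest character inventory, a short message length), not real linguistics, but the conclusion is robust to the exact figures.

```python
# Rough size of a "complete" Chinese Room code book: one entry per
# possible input string. Assumed numbers, chosen conservatively.
alphabet_size = 3000   # a modest count of commonly used Chinese characters
message_length = 25    # a fairly short sentence

possible_inputs = alphabet_size ** message_length
print(f"{possible_inputs:.3e} possible {message_length}-character inputs")
# roughly 8.5e+86 entries, i.e. more than the ~1e80 atoms
# in the observable universe, for short messages alone
```

So the thought experiment's rule book can only exist as an idealization; any physically realizable version has to compress, generalize, or learn, which is exactly where the interesting questions start.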