Artificial Intelligence & Personhood: Crash Course Philosophy #23

  • Published 28 Sep 2024

COMMENTS • 1.9K

  • @Crazyvale100 • 7 years ago • +1352

    Hank warmed my heart when he said that even if John bled motor oil instead of blood he would still be his brother.

  • @shawn-xl5ii • 1 year ago • +41

    Living in the era of ChatGPT, it is quite alarming to look back at this video.

  • @schmittelt • 8 years ago • +567

    If a robot is ever considered a person, would it be considered immoral to turn it off or otherwise remove its power source?

    • @Archangel125 • 8 years ago • +90

      It's not in the Bible, so we'll never know.

    • @hicham5770 • 8 years ago • +192

      +Michael Hill
      bible=the worst source of information ever

    • @kevinwitkowski4895 • 8 years ago • +28

      Just because numbers aren't physical doesn't make them any less of a reality.

    • @DarthBiomech • 8 years ago • +71

      Depends on how he works. If turning him off means losing his personality or otherwise disrupting his being, then it's probably like killing. But if nothing drastic happens, then it would probably count as sleep, a coma, or losing consciousness, except that you'd be unable to wake up without external help.

    • @mylespope6203 • 8 years ago • +9

      No, you can turn them back on

  • @SlipperyTeeth • 8 years ago • +736

    A harder test would be, can it fool itself into thinking that it is a person?

    • @PalimpsestProd • 8 years ago • +40

      Tyrell: If we gift them with a past, we create a cushion or pillow for their emotions, and consequently we can control them better.
      Deckard: Memories. You're talking about memories.

    • @MarkCidade • 8 years ago • +54

      It can be programmed to act like it thinks that it's a person but does it actually think it's anything or are we fooling _ourselves_ into thinking that it is?

    • @ForgottenFirearm • 8 years ago • +10

      I was really hoping that would be the twist to Ex Machina -- that Domhnall Gleeson's character would turn out to be a robot. Oh, spoiler alert: there is no twist.

    • @MusiCaninesTheMusicalDogs • 8 years ago • +5

      I don't know if that's such a good idea. I mean, I'm so stupid sometimes I think I'm not even a person, you know?

    • @SlipperyTeeth • 8 years ago • +5

      +Jeremiah B I guess there is a point where the AI is too stupid to distinguish differences. So, basically, being able to fool itself either means that it's really smart or really dumb.

  • @ASLUHLUHC3 • 4 years ago • +40

    Hank should've at least pointed out the distinction between information processing (i.e. intelligence) and conscious experience. It seems pretty obvious to me that personhood vs. non-personhood will come down to whether we think it has conscious experience.
    Most scientists do not believe that our computers (based on the von Neumann architecture) could give rise to conscious experience. No matter how generally intelligent Siri becomes, she's still as conscious as a rock. A sentient machine can only be made once we figure out what sort of complex processing of information actually gives rise to conscious experience. Then we can build the hardware for an artificial consciousness.

    • @josephs.7960 • 4 years ago

      Haven't we long abandoned the traditional von Neumann architecture due to the von Neumann bottleneck?

  • @CosmicFaust • 7 years ago • +12

    +CrashCourse The response you made to the Chinese Room is the main response to this argument, known as the "Systems Reply."
    It goes like this: the person in the room doesn't understand Chinese, but that person is part of a system, and the system as a whole does understand it. We attribute understanding not to the individual man, but to the entire room.
    Well, Searle responds by asking: why is it that the person in the room doesn't understand Chinese? Because the person has no way to attach meaning to the symbols. In this regard, the room has no resources that the person doesn't have. So if the person has no way to attach meaning to the symbols, how could the room as a whole possibly have a way to do this? Searle himself suggests an extension to the thought experiment: imagine the person in the room memorises the database and the rule book. He now doesn't need the room anymore. He goes out and converses with people face-to-face in Chinese, but he still doesn't understand Chinese, because all he's doing is manipulating symbols. Yet in this case he is the entire system.
    Now, of course, an obvious objection to this is that if you can go out and converse with people in Chinese, you must be able to converse in Chinese and thus understand it.
    This objection, which the functionalist could make, doesn't actually address Searle's point, though. The whole point of the Chinese Room thought experiment is that you can't generate understanding simply by running the right program.
    You can't get semantics merely from the right syntax. Now, granted, you surely would understand Chinese if you could converse perfectly with Chinese people, but I think Searle can hold that this understanding arises not merely from manipulating symbols in the right way, but from all the various things that go on in face-to-face interactions.
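The rule-following the thought experiment describes can be sketched as a pure lookup table. The toy program below is an illustration only (the rules and phrases are invented, not from the video): it maps input symbols to output symbols without ever attaching meaning to either, which is exactly the "syntax without semantics" point.

```python
# A toy "rule book": opaque input symbols map to opaque output symbols.
# Nothing here attaches meaning to the strings -- the function shuffles
# tokens the way the man in Searle's room shuffles characters.
RULE_BOOK = {
    "你会说中文吗?": "对, 我会说中文.",  # "Can you speak Chinese?" -> "Yes, I can speak Chinese."
    "你好吗?": "我很好, 谢谢.",          # "How are you?" -> "I'm fine, thanks."
}

def chinese_room(symbols: str) -> str:
    """Apply the rule book mechanically; no understanding is involved."""
    return RULE_BOOK.get(symbols, "请再说一遍.")  # fallback: "please say that again"

print(chinese_room("你会说中文吗?"))  # 对, 我会说中文.
```

From the outside the responses look fluent; inside there is only table lookup, which is the intuition the room is built to pump.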

  • @_Aly_00_ • 8 years ago • +149

    Reminds me of the Star Trek episode where Picard has to try to show that Data (the android) is a sentient being with the right to choose.

    • @RGLove13 • 8 years ago • +3

      that's what I was thinking too!

    • @TulipQ • 8 years ago • +24

      "The Measure of a Man," if anyone wants to go watch it.

    • @silvanbarrow86 • 8 years ago • +9

      I was actually thinking about the episode from season 1 of TNG where everyone got infected with the love virus from the original series and Data goes "If you prick me, do I not... leak?"

    • @exastrisscientia9678 • 8 years ago • +3

      Me too 😊 #Data!

    • @Moomoo-cp6ey • 8 years ago • +1

      Isn't that every episode?

  • @darkblood626 • 8 years ago • +66

    The most moral and pragmatic thing would be to treat any AI that you have reason to suspect has reached personhood as such by default.

    • @aperson22222 • 8 years ago • +3

      What if doing so required those whom you already know to be people to suffer?

    • @darkblood626 • 8 years ago • +6

      aperson22222 Example?

    • @aperson22222 • 8 years ago • +11

      darkblood626 There's a Star Trek episode where some of the human crew members are stranded on a space station that's about to blow. The only way to save them is by sending three bots on a suicide mission to stabilize the defective element. The bots refuse their orders (long story) and the crew considers reprogramming them so they're no longer able to do so. Data locks them out of the transporters so the crew can't beam the bots over against their will, even though he knows that doing so greatly reduces his fellow officers' likelihood of survival.

    • @Joram647 • 8 years ago • +7

      I think the most pragmatic thing to do is just not create strong AI in the first place. Save ourselves a lot of potential problems. Of course, it's pretty much a given that if we gain the ability to create strong AI, someone will do it eventually, so I guess I'll just have to agree with your stance.

    • @darkblood626 • 8 years ago • +8

      aperson22222 Forcing the bots to die against their will would not be moral.

  • @perfectzero001 • 8 years ago • +24

    I feel that you should have addressed the consciousness question. Is there a subjective experience to being the strong AI? Is that what separates us (as opposed to souls) from a machine that simulates intelligence? Or does it even matter for deciding if something is an actual AI? For me, all the most interesting questions about personhood and AI in general surround consciousness.

    • @BardicLiving • 8 years ago • +1

      I know.

    • @damiandearmas2749 • 8 years ago

      It was my understanding that consciousness is still a mystery.

    • @ArcasDevlin • 8 years ago • +1

      To me, it's whether the AI can actually feel emotions or just simulate them.

    • @mrchapsnap • 8 years ago

      +

    • @SonicBoyster • 8 years ago • +7

      Since you can't experience another person's consciousness, and we can't determine whether another human being is actively conscious beyond how accurately they respond to specific stimuli, it can't really be used to test anything. If a robot is answering questions, it's "conscious" for all intents and purposes.

  • @DuranmanX • 8 years ago • +163

    Look at the mistakes he made in the Crash Course Games episode.
    No robot would make such silly mistakes.

    • @lapisleafuli1817 • 8 years ago • +18

      Also, in Crash Course World History he constantly missed the place he was talking about when he spun the globe.

    • @KTSamurai1 • 8 years ago • +53

      Unless those mistakes were part of its programming.

    • @somecuriosities • 8 years ago • +28

      Or they were part of his cunning AI, intended to throw us off from finding out the truth :-P

    • @lcmiracle • 8 years ago • +9

      That's exactly how an advanced, humanity-infiltrating android would act to convince us, the humans, definitely not robots, that they are, in fact, humans.

    • @therealDannyVasquez • 8 years ago • +3

      I dunno, he could be an AI. Remember Microsoft's AI Tay? It made a few silly mistakes too.

  • @mo5ch • 8 years ago • +5

    I like the approach of Jack Cohen and Ian Stewart in "The Collapse of Chaos": they suggest that the mind is an emergent property, a process created by a certain arrangement of neurons.
    It is like the motion of a car, something abstract and not material. If you were to "dissect" a car, you would find wheels, an engine, etc., but not a tiny bit of motion. The same applies to our brain/mind.
    So, in my opinion, if we could create something similar to neurons (and not only neurons, of course, but a certain arrangement of them), we could create a mind as well. But it will probably take a few more years until they are on the same level as we already are.

  • @patrickberth464 • 1 year ago • +6

    This is now almost our reality.

  • @libracayes9564 • 8 years ago • +1

    This is probably the most compassionate view of intelligent AI I have ever seen. Thank you!

  • @Scheater5 • 1 year ago • +3

    So did we notice that ChatGPT can pass the Turing test as originally conceived? Like...easily. So. We're pretty sure ChatGPT isn't "alive" or "thinking". So what's our test now? Are we moving the goalposts?

  • @qpid8110 • 7 years ago • +50

    Hank: "How can I figure out whether my brother is a robot or not?"
    Me: "Crack his skull open and feast on the goo inside?"
    Hank: "Without going into his mind or body."
    Me: :(

  • @illdie314 • 8 years ago • +63

    God, I'm loving this subject of philosophy! Identity, personhood, the mind, free will... This is way more interesting than arguments for and against religion :P

  • @intxk-on-yt • 7 years ago • +5

    You guys are amazing! Seriously! The contribution you guys are making is mind-blowing! Thank you!

  • @j_art0117 • 8 years ago • +3

    This episode gives me a thought on the subject of language education: maybe we are somehow like a robot, because we learn vocabulary and grammar without understanding how to use it to express our own opinions. (School education has trained us to become robots with strong AI.)

  • @francois6915 • 8 years ago • +1

    Thank you for a great video. Three points from my side.
    - Firstly, Searle has a very good response to the objection that the whole system, and not just the CPU/"man in the room", understands Chinese -- an objection known as the Systems Reply. Searle suggests that the person in the room memorize the rule book and symbols, thus internalizing the whole system. That person now goes outside, gets handed a piece of paper with some symbols on it, remembers the rules for those symbols, and then writes a reply in front of the Chinese person. He can do all this yet still have no idea what those symbols mean. Only if someone shows him a hamburger with the symbol for hamburger next to it will he understand what that symbol means. Until then, it's all squiggles and squaggles.
    - Secondly, it is interesting to note that in Searle's 1980 paper "Minds, Brains and Programs", the original Chinese Room paper, he defines 'Strong AI' in a slightly different way from how it has come to be used since. Searle says that Strong AI is the view that "...the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." Later, in the MIT Encyclopedia of Cognitive Science, he says, "'Strong AI' is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind. The idea of Strong AI is that the implemented program by itself is constitutive of having a mind." Thus Strong AI is not a property that a robot may or may not have, nor is it the idea that computers can think. Since Searle is the guy who coined the term, I believe he has the right to decide its meaning. This distinction is demonstrated by the next point.
    - Thirdly, he never says that a computer can't think. In fact, in the MIT encyclopedia, he states, "The Chinese room does not show that 'computers can't think.' On the contrary, something can be a computer and can think. If a computer is any machine capable of carrying out a computation, then all normal human beings are computers and they think. The Chinese room shows that COMPUTATION, as defined by Alan TURING and others as formal symbol manipulation, is not by itself constitutive of thinking."
    Also, the Turing Test might have been passed recently: www.bbc.com/news/technology-27762088
    Thank you!

  • @Newova5 • 8 years ago

    I like that you decide to approach the old arguments with relevant new perspectives.

  • @dciking • 8 years ago • +2

    I find that Star Trek Voyager S7Ep19 "Author, Author" has some really good arguments for when "personhood" can be considered a part of what something is.
    Great episode!
    DFTBA

  • @genevieve6446 • 8 years ago

    This question is so intriguing, if not a little vague. One important idea that isn't included in the question of 'What constitutes a person / can a robot be a person?' is 'What is a robot?'. What if all the organs except the brain were grown in a lab by scientists and combined to make a human with a computer for a brain? Is that a robot? If not, that makes me wonder whether a functioning brain in a jar being fed stimuli is a person.

  • @Narbaculus • 8 years ago

    A small but thought-provoking discussion of personhood occurs in Kenneth Oppel's adolescent novel _Half Brother_.

  • @Rakned • 8 years ago • +169

    But how do I know that I'M not a robot?!?
    ... Seriously, brains are pretty much computers, right?

    • @batrachian149 • 8 years ago • +13

      Yes.

    • @BrownHairL • 8 years ago • +33

      You ARE a robot. We all are. It just so happens that we are incredibly complex biomachines. No computer can match this level of complexity yet. But it could be just a matter of time, even if it takes so damn long.

    • @BenDover-ex1cr • 8 years ago • +10

      I think of this at an atomic level.
      Both us and the robots are made of atoms, and both are smart.
      So what's the difference, really?
      It's only the complexity.

    • @batrachian149 • 8 years ago • +8

      Ben Dover The atoms aren't really relevant here.

    • @Rakned • 8 years ago • +5

      I've been thinking: maybe there's some point where logical systems become complex enough that they "become conscious." It's just an idea, tho.

  • @josephhicklin7313 • 8 years ago

    One important thing you didn't go over, though, is that the Turing test has been rewritten numerous times to increase complexity. This was done to compensate for faster processors, bigger memory reserves, and more complex programming.
    When the original Turing test was drafted, it merely asked for the program to convince the human after 5 minutes of conversation. But it has been adjusted numerous times to compensate for diversionary tactics used by chatbots seeking to defeat the test by manipulating its rules.

  • @justinstark5732 • 8 years ago • +61

    No 2001 or terminator references? I'm disappointed!!

    • @hexeddecimals • 8 years ago • +1

      Well, there was an angler fish. That's enough of a reference for me.

    • @IsThisRain • 8 years ago • +1

      Well, there was an angler fish. That's enough of a reference for me.

    • @unematrix • 8 years ago

      you should have listened better ;)

  • @GregoryMcCarthy123 • 8 years ago

    Good point at the end about the program just following instructions to pass the Turing test. In machine learning, a very simple algorithm called "bag of words" can be taught surprisingly well how to classify movie reviews as either positive or negative. It is not conscious and in fact knows nothing about the English language, yet it is able to determine the polarity of a movie review with 95% or greater accuracy.
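The "bag of words" idea mentioned above can be sketched in a few lines. This is a toy illustration only: the training reviews are made up, and the scoring rule (summing per-class word counts) is a bare-bones stand-in for the probabilistic weighting real systems use.

```python
from collections import Counter

# Toy training data -- made-up reviews for illustration.
train = [
    ("a wonderful heartfelt film great acting", "pos"),
    ("great story and a wonderful cast", "pos"),
    ("terrible plot awful acting boring film", "neg"),
    ("boring and awful a terrible waste", "neg"),
]

# Count how often each word appears in each class: the "bag of words".
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(review: str) -> str:
    """Score a review by summing per-class word counts; the class with
    the higher total wins. No grammar, no meaning -- just counting."""
    scores = {
        label: sum(counts[label][w] for w in review.split())
        for label in counts
    }
    return max(scores, key=scores.get)

print(classify("wonderful acting and a great story"))  # pos
print(classify("what a boring terrible film"))         # neg
```

The classifier never "understands" a single word: it only counts co-occurrences, which is exactly the comment's point about output without comprehension.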

  • @andersonandrighi4539 • 8 years ago • +54

    Can I go back to Fallout 4 to answer this question?!

    • @laggles138 • 8 years ago • +14

      "Join the Railroad!"

    • @spencergeller2236 • 8 years ago • +1

      That was the whole point of Fallout 4, basically.

    • @Delta3angle • 8 years ago • +3

      Yeah, that game was what decided it for me. Synths are not people, nor are they sentient or deserving of the same rights as humans.

    • @Dorsiazwart • 8 years ago • +4

      Brotherhood or Minutemen are the only good option, really.

    • @laggles138 • 8 years ago • +10

      +Parker Sprague Who really gives a damn about the half-assed plot if the REAL goal of the game is to recycle trash?

  • @anti-christ.666 • 4 years ago • +1

    There is a huge difference between mimicking behavior and being conscious that you are doing it. The big leap for AI is not that AI can behave like us; it's that it knows it's doing it.

  • @ilovecodemonkeys • 5 years ago • +1

    Searle seems to be assuming that we MUST associate images with understanding. I would agree if that is the case.
    In the Chinese room example, the people receiving the outputs were limited to language only. That means we would have to limit understanding to language only, but that doesn't work, because Searle seems to imply there's more to understanding than JUST language.
    The room with the translator still isn't understanding unless the room also contains a codex or pictures related to the terms.

  • @cynthia5911 • 8 years ago

    When Hank mentions the viola at 4:04 🙌

  • @aleksandra8711 • 7 years ago • +4

    Every Mass Effect fan: "Does this unit have a soul?"

  • @spikeguy33 • 8 years ago

    I absolutely love it when you make those philosophers smile! Who thought of that? That is brilliant!

  • @lumen8341 • 6 years ago • +6

    Detroit: Become Harry
    Edit: There was literally a Silverchair lyric in the next second after I unpaused
    THAT WAS MY SENIOR DANCE SONG rofl I thought no one remembered that

  • @deadlyshoesalesman • 8 years ago

    This helps me understand the episode of SMALL WONDER I just watched.

  • @jackblack4359 • 7 years ago • +8

    An AI will be "born" eventually. If it exhibits more human behavior than real humans, does this no longer qualify as human behavior or does this expand our understanding of what human behavior is?

    • @justtheouch • 7 years ago • +5

      How are you defining "human behaviour?" Surely it is simply the way humans behave, meaning it is impossible for anything to exhibit more human behaviour than a human.

  • @halfsasquatch • 6 years ago

    For me, a key deciding point would be whether the AI expressed individual wants or preferences outside the original scope of its programming.

  • @EnderWaterBender • 8 years ago • +2

    "... We need to figure out how we are going to treat potential new persons, if we end up creating beings that we decide meet the threshold of personhood." In Orson Scott Card's SPEAKER FOR THE DEAD, there is a quote that addresses this subject:
    "The difference between [person] and [non-person] is not in the creature judged, but in the creature judging. When we declare an alien species to be [a person], it does not mean that they have passed a threshold of moral maturity. It means that we have."
    I have already figured out how I am going to treat potential new persons; I am going to treat them how I would like to be treated. As for Turing's Test (or, The Imitation Game, if you will), I am eager to be fooled. One time my sister-in-law yelled into her phone, "Siri, you suck!" and Siri replied, "I'm doing my best." It made me feel so bad for Siri. Objectively, I know that Siri lacks the ability to have hurt feelings; and yet, I vowed never to treat Siri with any less dignity than I would anyone else.

    • @adamplentl5588 • 4 years ago

      Our future robot masters will remember this.

  • @art-is-awen8842 • 8 years ago

    Please do a Sociology and an Art History crash course!!!! Don't Forget, That'd Be Awesome!!

  • @jdwest34 • 8 years ago

    My 6th grade class does a project on this topic every year. We use "The Adoration of Jenna Fox" and "Frankenstein" as text. Great video!

  • @saramartins95 • 8 years ago • +1

    Well, I found the easiest way for me not to freak out about deep existential questions is for Portugal to be featured in a video, because then I'm just amazed that someone remembered we exist.

  • @elis__nbnb • 6 years ago

    Shoutout to all my fellow Portuguese speakers, cheers from Brazil!! Amazing video and series.

  • @ahorrell • 8 years ago • +1

    Please let there be an episode on personhood in other species, i.e. animals. Meat-eating is such a fascinating area of ethics.

  • @whatthefunction9140 • 8 years ago • +5

    *We are all intelligent machines.* The question is whether you are the same type of machine or a different type.

  • @ExiledGypsy • 8 years ago

    Crash courses should have a downloadable longer version where the speaker speaks at normal speed.

  • @Leotique • 8 years ago

    Wowowowowowow!!! The Chinese code part really impressed me!

  • @WaterCupBoi • 8 years ago

    I recommend watching Ex Machina, that movie has so many good points.

  • @IVAN3DX • 8 years ago

    The Fallout 4 synths are perfect for this. They only have a few parts; all the other organs are created by a machine, but they are exactly like human organs.

  • @MideoKuze • 8 years ago • +2

    I liked the inclusion of the challenge to The Chinese Room, but I was a little bit disappointed that it wasn't followed by a discussion of semanticity, even though it was mentioned. The system of The Chinese Room contains a book recording enough detail about how to construct a conversation (and maybe even how to generate novel responses) that it can fool any human into believing that it is human, and that's the principle under which chatbots operate, in a nutshell. But those chatbots don't verifiably have semantics, or knowledge of the meaning associated with their responses, and neither does the room system. If a perfect chatbot is possible, then what we have is either a philosophical zombie or an emergent strong AI (a strong AI which appeared by accident, incidentally, or perhaps antithetically to its design).
    Modeling semantics is a hard job in science, partially because we can't pull apart a brain and observe "aha, yes, this is semantics, and we can clearly see the algorithm that represents it". A promising idea is statistical correlation between representations of concepts, developed through experience (learning, pretty much). Could you say for certain that a rules-based chatbot does not have semantics, somehow emergently represented, if it can perfectly fool a human? Rather, the problem of The Chinese Room (I think the phrase "real understanding" is used here) is whether we can actually know if a system possesses emergent semantics. In science, we've made a lot of progress in explicitly representing semantics, but it's still an open question whether emergent representation of semantics in an explicitly non-semantic system designed to superficially emulate semantics is even possible, or if there's even such a thing as a perfect chatbot. One perhaps more rigorous test is to have an explicit model we know is right, and test the robot for roughly equivalent outcomes on various tests of semanticity against it (although that's essentially a generalisation of the basic conceit of the Turing Test). Another method might be to evaluate the robot algorithmically and see if we can generate a higher-order algorithm using it which performs roughly the same tasks as our known algorithm, although it is potentially more limited in scope.
    I'd obviously disagree that robots can't think. A Turing machine can do anything mathematical, so saying that robots can't think implies that the mind is non-mathematical, which even if magic were real is just not possible, since math is just a symbolic system for representing any sort of rule. Whether a system of synthetic rules for mimicking conversation that is explicitly not semantic can represent semantics emergently, though, is a more interesting question. How we'd measure it is a more interesting question still, although thus far no such system has appeared and it may never. Personally, I don't believe in true philosophical zombies, and I do think that a perfect chatbot would necessarily have real semantics, although a very good chatbot doesn't need to. This is philosophy, but it's neat to consider that we could identify that a chatbot is very good without being perfect.
    More broadly, such emergent semanticity is the question of whether weak AI can make the leap into strong AI by accident, and I don't really believe in that either, but it's also interesting to imagine how we could divide weak algorithms from strong, and how much of a continuum there is between them.
    And in here at the very bottom I'd like to include a minor complaint about the distinction between strong and weak AI. The actual distinction as I understand it is that weak AI is task-specific and doesn't possess any discernable ability to understand, whereas strong AI is very capable of generating meaning and acting on it. From that perspective, a strong AI could be emulating a rat or be an unfathomably great superintelligence just as easily as it could be trying to be human.

    • @MideoKuze • 8 years ago

      Bit of an error I just noticed: The Chinese Room can fool any Chinese reader into believing it writes Chinese. There is a human inside of it, so it wouldn't need to fool anyone on the humanity count.

    • @MideoKuze • 8 years ago

      You know I keep noticing more errors so please just excuse them where you can tease out what I meant. I'm not a very good writer and phones make editing tough to do.

  • @larsiparsii • 8 years ago • +12

    Any Chinese people who can translate? ^_^

    • @CosmicErrata • 8 years ago • +13

      The characters meant "Yes, I know how to speak Chinese."

    • @kunwoododd2154 • 8 years ago • +14

      I'm not Chinese, but it says: "Can you speak Chinese?" "Correct. I can speak Chinese"

    • @drredchan220 • 8 years ago • +4

      I must say the grammar is a bit off, though.

    • @seanmundy9829 • 8 years ago • +4

      你会说中文话吗? (ni hui shuo zhong wen hua ma?)
      对!我会说中国话。(dui! wo hui shuo zhong guo hua.) That's the message, with characters and the sounds for each character.

    • @ktan8 • 8 years ago • +5

      The example in Chinese is a bit off. It has failed the Chinese-speaking Turing test. :P

  • @LALITANENONANEZ • 8 years ago

    I really like Crash Course Philosophy; you've addressed the core problems of "The Cambridge Quintet" (which I'm currently reading). Watching your videos makes it easier for me to clarify my ideas and gives me some new ones. Thank you!

  • @piternal5609 • 4 years ago • +1

    This reminds me of that Doctor Who episode from season 11 where the guy who was actually a bomb got his hand cut off, and there were just wires instead of blood, and all his memories were implanted.

    • @mfc6941 • 4 years ago

      Isn't that from "Victory of the Daleks" in season 5?

  • @audreyhockeyy • 4 years ago • +11

    Christians: harry can’t be like us because he has no soul!
    possessed dolls : *exists in Christian culture*

  • @brandonsams3819 • 8 years ago • +1

    Constructs: things that are built. Humans are just really intelligent constructs (although the "really" part depends on which human). The only reason they are so intelligent is that they were built that way (when they were being built in the womb, or whatever creationist story you believe in). They just have a different design and composition. Therefore, composition and design define a construct's intelligence (human or otherwise). Which means any construct, including an artificial one, can be as intelligent as or even more intelligent than a human (to say they can't in all cases is ridiculous), depending on how it was designed and what it was made of.
    How can you tell if a construct has the same level of intelligence as you, AKA personhood? By observing and testing that construct's intellectual actions. Sure, some of the construct's actions (when you're testing it) can be misleading, but it's best to err on the side of caution. If you can't tell, because its actions or specific behavior are so similar to those of a sentient being, well, best treat it as one just in case. Because if they're that similar in comparable intellectual behavior that you can't tell, then maybe there is no difference.

  • @krasykay2294
    @krasykay2294 7 years ago +4

    83 Brotherhood of Steel members disliked this video

  • @LucasCodPro
    @LucasCodPro 8 years ago

    I learned new things today. Thx.

  • @Sol-nh4qd
    @Sol-nh4qd 8 years ago

    I think the best test to recognize a robot as a person is if it is non-prescriptively proactive and can demonstrate self-awareness and complex ideas of its own volition without being prompted. The robot must be able to pursue conversation and its own desires without a prescriptive programming influence. The Chinese room is a purely reactive case, and makes no mention of what a proactive robot would be like. Observing deliberate proactivity is key, I think.

  • @Federico84
    @Federico84 8 years ago +6

    A robot would never be a person; at best it would be an intelligent being, like an alien.

    • @Ketraar
      @Ketraar 8 years ago +16

      But an alien can be a person, just not from Earth; hence why it's alien to this world, but still a person.

    • @Federico84
      @Federico84 8 years ago

      Ketraar it depends on what you think the word "person" means

    • @Ketraar
      @Ketraar 8 years ago +10

      Tecnovlog
      My point exactly. So a robot "could" be a person. ;-)

    • @joekennedy4093
      @joekennedy4093 8 years ago +9

      They did a whole episode on what person means. I think most people would agree an intelligent alien counts.

    • @SbotTV
      @SbotTV 8 years ago +13

      And aliens can't be people? Sounds like the future of racism... If we are going to go around calling intelligent entities sub-people, then we will run into a lot of trouble.

  • @benkyung4894
    @benkyung4894 8 years ago

    Ex Machina does an amazing job exploring this idea. I recommend the movie to anyone who found this interesting.

  • @timothymclean
    @timothymclean 8 years ago +1

    I've heard the Chinese Room thought experiment used not to prove that Turing-Test-passing AIs are not necessarily strong, but that strong AI is impossible. Which is absurd.
    The biggest problem with the proper Chinese Room argument is simple: if you define "understanding" in such a way that the Chinese Room does not qualify... how could you tell understanding from false understanding?

  • @mattmuller4235
    @mattmuller4235 7 years ago

    Please do create a CrashCourse on posthumanism and AI and robotics. Please, please, please with silicon & electroceutical sprinkles on top!

  • @beauvais413
    @beauvais413 8 years ago

    This made me think of Bicentennial Man, a great movie with Robin Williams.

  • @zaeemAtif
    @zaeemAtif 8 years ago

    this episode made me go CRAZY...!!!

  • @avoqado89
    @avoqado89 8 years ago

    Hopefully John or someone can expand on this discussion with Daneel Olivaw & Giskard

  • @g.b.9227
    @g.b.9227 8 years ago

    The first sentence already got me really interested. Keep up the good work! :)

  • @Roxor128
    @Roxor128 8 years ago

    My response to the Chinese Room thought experiment is that if you were to program a computer to follow the same algorithm as described in the book you were following, you'd have a computer program which can pass the Turing Test. Hence, it's the algorithm in the book that's the person in that situation. You are just acting as a computer to run it.
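The rule-following setup this comment describes can be sketched in a few lines of Python. Everything below is invented for illustration (a real rule book covering a whole language would be astronomically large); the point is that the book is a bare symbol-to-symbol lookup, and whoever or whatever follows it is interchangeable.

```python
# A "Chinese room" reduced to its essentials: a rule book (a lookup table)
# plus a purely mechanical interpreter. The rules below are placeholders.
RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",  # "How are you?" -> "I'm fine, thanks."
    "你是谁?": "我是一个朋友。",  # "Who are you?" -> "I'm a friend."
}

def follow_rules(message: str, book: dict) -> str:
    """Match incoming symbols against the book; no understanding required."""
    return book.get(message, "请再说一遍。")  # fallback: "please say that again"

# The same algorithm runs identically whether a person or a CPU executes it;
# if anything in the room "understands", it is the book, not the operator.
print(follow_rules("你好吗?", RULE_BOOK))
```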

  • @therongjr
    @therongjr 8 years ago

    What about a "reverse" Turing test? Due to differences in the "programming" that humans receive--culture, language, morality, etiquette, etc.--not to mention differences that were inborn (such as developmental disabilities), it may be difficult for Person A to be certain that radically different Person B is, in fact, a human.

  • @FrozenSpector
    @FrozenSpector 8 years ago

    Hey Crash Course! I like your book optical illusions on your "CC Literature" video thumbnails. Just noticed this now and thought it was cool. Very clever!

  • @rachelle10
    @rachelle10 6 years ago

    This is acceptable homework content. Thanks to our lecturer for giving us this as material.

  • @Cardboardboxy
    @Cardboardboxy 8 years ago

    @hankgreen
    I have been following you since '08 or so. That TBI messed up my memory, but I do remember how awesome Crash Course always was.

  • @Archangel125
    @Archangel125 8 years ago

    FREE WILL! So excited for the next episode!

  • @Giddleford
    @Giddleford 7 years ago

    This was argued in a Star Trek: The Next Generation episode, with Data being in court instead of Harry. Great episode, really.

  • @needlesslyredundant
    @needlesslyredundant 8 years ago

    I recommend the book 'Superintelligence' by Nick Bostrom. It's brilliant and looks at AI from an extremely wide range of angles. Definitely read it if you are interested in AI.

  • @LadyTaraJo
    @LadyTaraJo 8 years ago

    Interesting ideas. It made me think of a little twist on the Ship of Theseus: if you take a person and replace all his organs with artificial organs, one at a time, at what point does this person stop being a person and start being a machine?

  • @IARRCSim
    @IARRCSim 6 years ago

    Making computers think like people is just good fun for sci-fi movies and philosophical questions. The ideal computer or robot is one that always does what we really want. How it works or thinks isn't so important. Is there any big practical benefit to having a computer pass the Turing test?

  • @Benjamin_Kraft
    @Benjamin_Kraft 8 years ago

    This episode was basically made for me; I love thinking about AI and its potential personhood ^^
    Most of the things he said I've heard before, but the counter-argument to the Chinese room experiment, thinking of the entire room as the "brain" and the person performing the instructions as just a part of it, was very interesting. Who is the person in that regard, then? The room? The fusion of the writer of the instructions, the instructions themselves, and the person performing them? Are the Chinese-speaking people providing messages also a part of that personhood, seeing how without them the Chinese room wouldn't act at all?
    I love thinking about stuff like this, even though it may be complete nonsense.

  • @Matrinique
    @Matrinique 8 years ago +2

    Now that you've watched this episode of Crash Course, my recommended movies for you: The Imitation Game and Ex Machina.

    • @ziggyzak6017
      @ziggyzak6017 8 years ago

      Ex Machina was an amazing movie. I second that recommendation.

  • @fistpump64
    @fistpump64 6 years ago

    I'm a huge Mega Man fan; machines are amazing, and the deep concepts are neat.

  • @BlazingMagpie
    @BlazingMagpie 8 years ago

    I have this story I want to try writing sometime that has an interesting take on this question. Basically, if you made a strong AI and you could upload that AI into human body, would it be human? A person?

  • @andrewmartin9440
    @andrewmartin9440 7 years ago

    7:42 = potential fault in Chinese room T.E.
    7:42 = VALID ... however
    Hypo: Fault can be communicated better + LCD is always best
    Potential solution:
    Eng. Word: Memory
    Text above in multiple languages. Respond if understood or and would like to continue/explore dialogue thread

  • @MelissaAlarcon1
    @MelissaAlarcon1 7 years ago +1

    that was such a sweet ending omg

  • @hazeltiberiuslee7216
    @hazeltiberiuslee7216 8 years ago

    That last comment about your love for your brother was so sweet :)

  • @nostalgicrobot
    @nostalgicrobot 8 years ago

    More videos about Artificial Intelligence please! It's interesting.

  • @fegolem
    @fegolem 8 years ago

    Cool. Certainly gave me something to think about.

  • @Martial-Mat
    @Martial-Mat 8 years ago

    Surely one of the primary considerations is the ability to learn, adapt and develop a unique personality in response to life experience? I'd absolutely show the same respect to robots that acted human if they developed in the same way as humans in all ways except birth. I find it inconceivable that such robots are not ALREADY possible, let alone in the future.

  • @ascentraland9264
    @ascentraland9264 4 years ago

    The Turing test has been beaten! Watching this in 2020

  • @kungfuhusler
    @kungfuhusler 8 years ago

    The term "soul" is often misused: it is thought to mean a non-corporeal part of living organisms that is separate from the body, but what it actually means is life itself, or the life of a living organism, creature, or person. For example, a living animal or person is a soul, or living soul, but a dead animal is a dead soul. A dead soul is not separated; it is just dead.

  • @AKHMallory
    @AKHMallory 8 years ago

    I just read a book called Being (in all of 3 days) that asks this question SO well...

  • @Skylerride313
    @Skylerride313 7 years ago

    Would love a bibliography. I'm doing an essay at uni on whether there is reason to fear AI or not, and would love books to refer to.

  • @sacredsanctuary420
    @sacredsanctuary420 7 years ago

    Hank was kind of passionate in this one

  • @Armaggedon185
    @Armaggedon185 8 years ago

    The Turing Test shouldn't be considered a one-and-done deal, but rather a constant process we go through whenever we consider personhood for someone or something. The Chinese Room might pass a cursory examination, but a conversation spanning hours or days would reveal the limited capacity of the room's responses.

  • @fromscratchauntybindy9743
    @fromscratchauntybindy9743 8 years ago

    No mention of Bicentennial Man? I feel emotional connection, shared history, memories and love go a long way to contributing to personhood too.

  • @isaacliu896
    @isaacliu896 8 years ago

    The error with the Chinese Room is that it assumes the machine is algorithmically (not cognitively) programmed with every single possible input and output. Once we have machine learning, won't that be a situation quite different from the code book?
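The contrast this comment draws can be sketched as a toy: a fixed code book answers only inputs it explicitly enumerates, while even a crude similarity-based responder generalizes to inputs it was never programmed for. All names and phrases below are invented, and real machine learning is far more than fuzzy string matching, so this is only an illustration of the distinction.

```python
import difflib

# Case 1: the "code book" -- it can only answer exact, pre-enumerated inputs.
CODE_BOOK = {"hello there": "hi!", "what is your name": "I'm a demo bot."}

def code_book_bot(prompt: str) -> str:
    return CODE_BOOK.get(prompt.lower(), "")  # empty string: no rule exists

# Case 2: a (very crude) generalizer -- it answers novel inputs by analogy
# with the closest prompt it has seen, something no finite book enumerates.
def generalizing_bot(prompt: str) -> str:
    match = difflib.get_close_matches(prompt.lower(), list(CODE_BOOK), n=1, cutoff=0.6)
    return CODE_BOOK[match[0]] if match else ""

print(code_book_bot("helo there"))     # a typo defeats the code book
print(generalizing_bot("helo there"))  # the generalizer still answers
```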

  • @AlexTrusk91
    @AlexTrusk91 8 years ago

    how many episodes will there be?
    i hope it never ends :P

  • @SerbSkiLLz00
    @SerbSkiLLz00 8 years ago

    Nice Pulp reference.

  • @haroldbaker8869
    @haroldbaker8869 8 years ago

    Hank! Long shot here, do you have any philosophy books (fictional or not) that you would recommend? I am keen to explore further...

  • @bashawhm
    @bashawhm 8 years ago

    The fact that he made it through an entire episode about robots and AI without referencing Data from Star Trek is remarkable. :)

  • @ObeyBunny
    @ObeyBunny 8 years ago

    To me, the Chinese room thought experiment just describes a chatbot. You can very easily prove that the chatbot doesn't actually understand what you're asking it, that it's only barfing out pre-determined answers. All you have to do is ask it to be creative. Even a little bit.
    Don't ask "What's your favorite color," ask "Help me pick a title for a book I'm writing."
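The creativity probe described above can even be automated, at least against the crudest bots. Everything here is hypothetical: a canned-answer bot of the kind the comment describes, and a check that flags it when every novel request draws the same dodge.

```python
# A hypothetical canned-answer chatbot: fixed answers plus one stock dodge.
CANNED = {
    "what's your favorite color": "Blue.",
    "how are you": "I'm fine, thanks!",
}
FALLBACK = "Interesting! Tell me more."

def canned_bot(prompt: str) -> str:
    return CANNED.get(prompt.lower().rstrip("?!."), FALLBACK)

def looks_canned(bot, creative_prompts) -> bool:
    """If every creative request draws the identical dodge, suspect a lookup table."""
    replies = {bot(p) for p in creative_prompts}
    return len(replies) == 1

probes = [
    "help me pick a title for a book i'm writing",
    "invent a word for the smell of rain",
]
print(looks_canned(canned_bot, probes))  # the canned bot is caught
```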

  • @Mr8lacklp
    @Mr8lacklp 8 years ago

    There are machines that are able to pass the Turing test in writing (at least most of the time), just as Alan Turing described it, but as soon as the actions go beyond writing they run into problems. For example, if you asked them to send you their location via WhatsApp: even if they could write on WhatsApp, that would be almost impossible for them, because their capabilities are limited to writing. That's why we are so far away from strong AI. We have weak AI that is programmed to act as if it were strong.

  • @amos657
    @amos657 8 years ago

    The real problem with the Chinese Room is that the code book is impossible today. If I made a code book like that and gave it to a machine, it would pass the Turing test as well.