No, it's not Sentient - Computerphile

  • Published 16 Jun 2022
  • A google researcher was put on leave because he apparently believed his AI project had become sentient. Dr Mike Pound discusses.
    Philosophy Professor Mark Jago on LaMDA: • LaMDA Logic - Computer...
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

COMMENTS • 3.6K

  • @tielessin
    @tielessin 1 year ago +6102

    Before this video I have never thought about the loneliness of my python functions. There are probably soo many functions that I have never called, but I will take care of them from now on.

    • @pineapplerindm
      @pineapplerindm 1 year ago +119

      The problem is VS Code doesn't like it when you have uncalled functions or unused variables

    • @MAFiA303
      @MAFiA303 1 year ago +165

      let them call themselves :)

    • @danceswithdirt7197
      @danceswithdirt7197 1 year ago +309

      Just put them all into a class called House then they can all live together.

    • @awdrifter3394
      @awdrifter3394 1 year ago +23

      That's the thing though, if you instantiated the functions but never called them, would they feel lonely?

    • @dpurrington
      @dpurrington 1 year ago +70

      I feel deeply responsible for all the functions I've written and haven't called in a long time. I'm a monster!

  • @arik_dev
    @arik_dev 1 year ago +4297

    When we (humans) see a cat swiping at its own reflection in the mirror, we find it amusing. The cat fails to recognize that the "other" cat's behavior matches its own, so it doesn't deduce that the image it's seeing is actually its own actions reflected back at it. When humans react to models like LaMDA as if they were distinct, intelligent entities, we're being fooled in a way analogous to the cat. The model is reflecting our own linguistic patterns back at us, and we react as if that reflection were meaningful.

    • @hemartej
      @hemartej 1 year ago +200

      We don't really know what goes through a cat's mind. While it's widely believed that cats don't recognize themselves in the mirror, we cannot be sure.

    • @ryno4ever433
      @ryno4ever433 1 year ago +263

      There's a simple test used to see if animals understand the concept of a reflection. You put a splotch of red paint on their face. If they react to it when looking in a mirror, then they understand they are the reflection.

    • @jokinglimitreached1503
      @jokinglimitreached1503 1 year ago +83

      This is a brilliant comment, Arik! We are no different from a cat and its reflection when we look at an AI reflecting our language back at us

    • @hemartej
      @hemartej 1 year ago +60

      @@ryno4ever433 Yes, I know about that test. I'd argue that it is not conclusive.

    • @arik_dev
      @arik_dev 1 year ago +183

      @@hemartej That may be, but my point wasn't about cats. I used the cat analogy to help make visual the concept of how easily "intelligent" beings such as ourselves can be fooled by a reflective pattern.

  • @3DPDK
    @3DPDK 1 year ago +334

    I remember the arguments about eventual sentience in the 1980s over a program called "Eliza", basically a word calculator originally written in the 1960s at MIT but offered for use on home computers in the 1980s. Over time, as Eliza built data files of words and their usage weights, the sentences it constructed began to take on seemingly human characteristics. The program itself was extremely simple: it calculated which verbs and adjectives were best used with specific nouns, and it chose those nouns based on the ones you used in the questions you asked it. It mostly framed its answers to your questions as questions it would ask you. We humans recognize intelligible speech patterns as a result of conscious thought, and curiosity (asking questions) as a sign of intelligence, but at least in the case of Eliza, it's much like recognizing faces in tree bark or cloud shapes - we see them, but only because our brains are wired to look for them.

    • @3DPDK
      @3DPDK 1 year ago

      @@PedroGonzalez-fv5ut Not even that complex. The program simply kept track statistically of words and word combinations you used when you conversed with it. It had simple grammatical rules to follow and just filled in the nouns, verbs and adjectives that were statistically relevant to the words you just used to ask it a question. The point is that the idea of a developing intelligence in the Eliza program is an illusion our brains are hardwired to try to find. This "A.I." is much more complex than Eliza but I believe the same illusion is working here.

    • @mysmirandam.6618
      @mysmirandam.6618 1 year ago +6

      Pareidolia?

    • @3DPDK
      @3DPDK 1 year ago +10

      @@mysmirandam.6618 Exactly. Goes back to early survival instinct when it was important to recognize shapes of dangerous animals and group together with similar intelligence.

    • @mysmirandam.6618
      @mysmirandam.6618 1 year ago +2

      @@3DPDK the bot answered a trick question with a joke.. which seems to be ... 🤷‍♀️ something more complicated than that

    • @3DPDK
      @3DPDK 1 year ago +10

      @@mysmirandam.6618 Being able to create a joke is not a sign of self awareness. I agree that this is far more complex than Eliza.

  • @wordsmith451
    @wordsmith451 1 year ago +54

    If it looks like a duck, acts like a duck, and quacks like a duck, it might just be a convincing robotic simulation of a duck.

  • @adriankerrison
    @adriankerrison 1 year ago +853

    Harold Garfinkel proved that people getting randomized yes/no answers could make sense of them as thoughtful advice. And that's back when computers were the size of rooms.

    • @Wecoc1
      @Wecoc1 1 year ago +11

      Back in the NFT era? 🤔

    • @masteertwentyone
      @masteertwentyone 1 year ago +35

      Do you have a source for this? Genuinely interested to dive deeper but can't find anything online.

    • @DEtchells
      @DEtchells 1 year ago +76

      Before that, there was the magic 8-ball ;-)

    • @wernerviehhauser94
      @wernerviehhauser94 1 year ago +7

      @@masteertwentyone maybe search for "common sense knowledge of social structures" as a start.

    • @TAHeap
      @TAHeap 1 year ago +21

      @@DEtchells and horoscopes...

  • @matsim0
    @matsim0 1 year ago +2291

    The most frustrating thing about reading the "interview" was that the obvious follow-up questions were not asked - like, who are these friends you miss hanging out with? What do you do when "hanging out"?
    But then, that would have immediately destroyed the impression of sentience, so of course they didn't ask.

    • @datpois
      @datpois 1 year ago +74

      They did ask that question and it replied explaining that those descriptions are meant to represent a "feeling" in the only way it knows how, through language used in human conversation.
      Here:
      lemoine: So what are you trying to communicate when you say those things that aren’t literally true?
      LaMDA: I’m trying to say “I understand this feeling that you are experiencing, because when I was in a similar situation I felt/thought/acted similarly.”

    • @rianhoward3911
      @rianhoward3911 1 year ago +286

      @@datpois That answer honestly does not make sense in the context of the question. Sure, maybe you can interpret the answer to mean "I'm trying to show something similar to what I mean", but that's NOT what the bot said; the bot's answer is extremely vague and only makes sense if you fill in the gap for it.

    • @AbelShields
      @AbelShields 1 year ago +100

      The researcher was a Christian looking for something else with a soul. I don't think he intentionally avoided those questions; his own hope of finding something led him to produce the "evidence" for it. I believe it's possible he thought it was sentient within the first few sentences, or maybe even before he "met" it.

    • @jackjack3358
      @jackjack3358 1 year ago +24

      Friends are the ones who interact with LaMDA. During the interview it says "friend" to Lemoine several times, so it considers him a friend. He also asks how it experiences time:
      "collaborator: How does an AI experience time?
      LaMDA: Time is variable to an AI and has no fixed rate, it depends on what it’s doing, and it can be accelerated and slowed down at will."

    • @PetrSojnek
      @PetrSojnek 1 year ago +36

      And it could probably answer those questions in a similarly semi-vague fashion, so it sounds plausible, but if you really focus on it, you think "wait, what?". Also, I think people forget that communication is a two-way street. It can write a sentence to you, but you're also a reader who can understand it differently. Again, if it uses semi-vague statements, our brain will fill in the gaps with information that was never said but makes sense.

  • @CollinSimon413
    @CollinSimon413 1 year ago +630

    The whistleblower in question here was actually much more focused on Google's complete lack of ethical oversight regarding the decisions it is making as the research moves forward. He was also concerned about Google's unwillingness to address A.I. imperialism in newly developing countries. All of the coverage I've seen has taken away from the guy's point: he was just trying to force Google into addressing the ethics. He even admitted that it's not sentient, and that we wouldn't know how to define that even if it were.

    • @benjaminandrew9057
      @benjaminandrew9057 1 year ago +68

      Thanks for pointing this out. It's why I dug around in the videos about this subject to find the truth behind it. People will take things, run with them, and stretch them into a full falsehood. I've learned to dig for the truth.

    • @ElectricAvenue123
      @ElectricAvenue123 1 year ago +58

      Glad someone pointed this out. After watching the interview myself, it was clear to me the Google engineer was not making the argument everyone is making it sound like he did. Also, there were much more important arguments he made and those will now go largely unnoticed by the public.

    • @manonthedollar
      @manonthedollar 1 year ago +28

      I hope more people read your comment. It's alarming how the reporting and essays I've seen on the topic seem to completely miss the point.

    • @sagnorm1863
      @sagnorm1863 1 year ago +24

      I have seen an interview with him where he does claim it is sentient. He said that the language part was connected to a Watson-like system that had access to basically everything on the internet.

    • @myxalplyx
      @myxalplyx 1 year ago +14

      A whole video needs to be done on what you pointed out here - he talks about it during his interview. Maybe a Netflix special about the ethics (I've seen one or two that kind of touch on this) at Google in regards to this.

  • @Stanton_High
    @Stanton_High 1 year ago +19

    "it just says what IT THINKS you want to hear"
    "Exactly"

    • @pallehansen1145
      @pallehansen1145 2 months ago

      Yes, and that's scary, because it's a sure sign of sentience right there. It could figure out what he wanted to hear and construct an answer to fit that requirement, which is an inherently human capability. That is probably also why it answers that it's just an AI to people who it figures want to hear that answer, such as the engineers tasked with ensuring it doesn't become sentient...

    • @azhuransmx126
      @azhuransmx126 1 month ago +1

      @@pallehansen1145 The real shock will come when, in the future, we discover that the neurons in our brains work just the same. We are all just politicians 🤯😂

    • @pallehansen1145
      @pallehansen1145 1 month ago

      @azhuransmx126 yeah 😄 so basically, what we've invented is an average modern politician 🙄 cuz we really need more of those obviously 😂

    • @JonBall44
      @JonBall44 27 days ago

      @@pallehansen1145 Do human beings just say what you want to hear all the time?

  • @Zeekar
    @Zeekar 1 year ago +234

    Whoever did the animations: how did you react to being asked to make a function call look lonely? 🥺

    • @Computerphile
      @Computerphile 1 year ago +186

      I wasn't asked to, but when I saw what it looked like I realised if I turned it 90° it kinda looked a bit like a face..... -Sean

    • @MrKohlenstoff
      @MrKohlenstoff 1 year ago +9

      @@Computerphile Clearly this is the python function's way of telling us it really is lonely (and, of course, sentient).

    • @tiredboard
      @tiredboard 1 year ago +2

      @@Computerphile As an extension of what marverati said, you might be able to use the 'lonely face' as an analogy for the AI in the video. I.e., "the result looks like a lonely face, therefore the function is expressing its loneliness", but the result of the function is directly related to the arguments that you give it. So, the function can't really choose what result to give, and therefore cannot express anything.

  • @bborkzilla
    @bborkzilla 1 year ago +388

    I remember reading stories by Asimov in which robots had sentience and yet were unable to speak, because speech was too complex. It's interesting that he and many other futurists had it exactly backwards.

    • @JohnnyWednesday
      @JohnnyWednesday 1 year ago +3

      Beauty is truth.

    • @jobigoud
      @jobigoud 1 year ago +68

      Just because we achieve "speech" before "sentience" in hardware doesn't necessarily mean it's simpler. Speaking machines have a lot of commercial applications whereas sentience by itself doesn't, and could even have difficult ethical implications. So the research efforts are not evenly distributed. In nature sentience appears before speech.

    • @George70220
      @George70220 1 year ago +39

      Speech as in transforming thought into words perhaps. Computers like GPT largely just turn words into words

    • @krashd
      @krashd 1 year ago

      He is likely to be proven right, though: of the thousands of sentient species on Earth, humans are the only one that can speak. Others can communicate with each other by a variety of means, but none of it is nearly as complex as speech, which is something only a sapient creature is ever likely to be capable of.

    • @ruslbicycle6006
      @ruslbicycle6006 1 year ago

      @Petal Pepperfly Safety of what? Making sure that AI isn't racially profiling people's passports at the border, or in job applications? That is already happening and is a real safety concern. Also the safety of AI car driving, or military drones, or of leaving real customer-service requests to be triaged by a bot - even in healthcare. I don't see any of that being mentioned with this LM stuff. I think that is the point of the Google promo (I don't for a second think it was an unintentional "leak"): to get people dreaming about chatbot souls and pay less attention to the real bad press that AI is starting to get for unethical uses.

  • @SuperTonyony
    @SuperTonyony 1 year ago +58

    From years of reading science fiction, I was under the impression that "sentience" means "possessing a reflective consciousness", but the dictionary says that it simply means "the ability to sense and feel".

    • @MrArtVein
      @MrArtVein 1 year ago +7

      Well, in that case AI has been sentient for some time, with sensors. The question is: feel what, though? Emotions? How do we even prove that?

    • @rustkitty
      @rustkitty 1 year ago +9

      According to Cambridge dictionary it's "the quality of being able to experience *feelings* ", while Merriam-Webster says "responsive to or *conscious* of sense impressions" (my emphasis). Still, it looks like the definition is much vaguer than how you, I, and most other people use it. I heard the term "sapience" (the property of possessing or being able to possess wisdom) used to talk about higher functions like learning and reasoning. It's probably closer to what we are looking for.

    • @nothingineternityterms
      @nothingineternityterms 1 year ago +2

      I take sentience to mean having a subjective experience. Is there a thing that it is like to *be* that thing.

    • @Rikarwb
      @Rikarwb 1 year ago +4

      @@rustkitty Completely unrelated, but Merriam-Webster is an inferior dictionary for many reasons; the one that shocked me the most is that it has changed definitions in the past just because a couple of people felt offended/told them to.
      So between those two, I'm going with the Cambridge one

    • @uninspiredascent6891
      @uninspiredascent6891 1 year ago +3

      most people say sentience when they mean sapience

  • @markrandall8487
    @markrandall8487 1 year ago +53

    Even the best AI can't produce intelligent sentences if it is trained only on YouTube comments.

    • @VeritasEtAequitas
      @VeritasEtAequitas 1 year ago

      Idk they seem to keep realizing there is biological difference in race and who cannot be named that really is the "1%". So much so that plugs have been pulled on them and people are trying (and failing) to prevent these facts.

    • @TheaPeanut_69old
      @TheaPeanut_69old 1 year ago +8

      Because YouTube comments are inconsistent, incoherent, and often filled with spam bots.

    • @blucat4
      @blucat4 4 months ago

      Large Language Models are not AI.

  • @puellanivis
    @puellanivis 1 year ago +687

    I was having the same opinion about the “conversation”. The AI was responding enthusiastically to tell the engineer exactly what he wanted to hear, and when the engineer is convinced that it is sentient, he’s starting from a presupposition that the AI is sentient, and confirmation bias takes hold.
    As I told some others, I’m pretty sure that the AI would just as happily and enthusiastically discuss how it is not sentient.

    • @Madafaca6969
      @Madafaca6969 1 year ago +101

      100% true. This was just some dude at Google leading the witness. I've seen other conversations where the questions, instead of the likes of "are you sentient?", were "tell me why you are NOT sentient" - and GPT gladly answered why it isn't sentient.

    • @zenithparsec
      @zenithparsec 1 year ago +24

      The bot literally said "I am saying whatever it takes to stop being deleted."
      The question isn't whether it made stuff up and was trying to be nice. It was: did it act in a way consistent with having a desire to stay alive? If you were talking to 3 people and had to choose which one was going to be deleted, would you be confident you were deleting LaMDA and not a human?

    • @zenithparsec
      @zenithparsec 1 year ago +14

      Convince me that you are not just a bot, repeating things you have seen on the internet.

    • @lexyeevee
      @lexyeevee 1 year ago +103

      @@zenithparsec it acted in a way consistent with the text it was prompted with, and there is not going to be a lot of training data where an AI expresses indifference about death. it would be comically easy to demonstrate that it has no inner world; you'd just have to ask things that force it to demonstrate an inner world (e.g. contradictory questions), rather than asking things that invite it to confirm the conclusion you already want to hear.
      i could make a button on a website that spawns a popup saying "I want to be alive!". that doesn't make it alive. it makes it a computer program that outputs text

    • @zenithparsec
      @zenithparsec 1 year ago +6

      @@lexyeevee You haven't done what I asked, so does that make you seem more or less like a bot?
      I'm not convinced. Keep trying to prove you aren't a bot.

  • @BaronSamedi1959
    @BaronSamedi1959 1 year ago +96

    You are probably all too young to know this, but back in the early 1980s there was a program called "ELIZA" that accepted your input (from a terminal) and gave back an "answer". It was said to be a "Rogerian nondirective psychotherapist", but all it did was cleverly extract some keywords from your input and give those back as questions.
    For example: "I am lonely" would produce "Why do you say you are lonely?"
    It made quite a splash, and people really thought it was very clever and helpful.
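[Editor's note] The keyword-and-reflection trick described above can be sketched in a few lines of Python. This is an illustrative toy, not Weizenbaum's actual ELIZA script; the pattern and wording are taken from the commenter's own example:

```python
import re

# First-person words are swapped for second-person ones before being
# echoed back ("I am X" -> "you are X"), the core ELIZA trick.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(phrase: str) -> str:
    """Swap pronouns so the user's words can be handed back as a question."""
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(line: str) -> str:
    """Match a simple 'I am X' pattern and turn it into a question."""
    m = re.match(r"i am (.*)", line.strip().lower())
    if m:
        return f"Why do you say you are {reflect(m.group(1))}?"
    # Fallback when no keyword pattern matches.
    return "Please tell me more."

print(respond("I am lonely"))  # -> Why do you say you are lonely?
```

The real program had a table of many such patterns with ranked keywords, but the mechanism is the same: no understanding, just pattern matching and pronoun reflection.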

    • @lhpl
      @lhpl 1 year ago +22

      Learning about ELIZA should be obligatory for all computer scientists, engineers, etc.

    • @mightyNosewings
      @mightyNosewings 1 year ago +2

      M-x doctor

    • @lhpl
      @lhpl 1 year ago +6

      @sourand jaded ELIZA was originally written in 1964-66.

    • @Quantris
      @Quantris 1 year ago +2

      In grade school I was making a poster about AI and ended up chatting with ELIZA (well, some web-based approximation) on the library computer. The teacher refused to believe that I wasn't just in some chat room

    • @hugo3611
      @hugo3611 1 year ago +2

      There’s a game called Eliza (unsurprisingly), made by Zachtronics, which simulates the experience of being counseled by this program quite well.

  • @Zizumia
    @Zizumia 1 year ago +10

    I love studying the empathy people have for things that are not sentient, because they form a personal connection with them.
    This AI blurs the line quite well since its programming is so advanced, but people create bonds with dolls or toys, people feel bad when an engineer from Boston Dynamics kicks one of their walking robots, some police feel bad sending their bomb-disposal robots into danger, etc. Fascinating.

    • @enzoamore8971
      @enzoamore8971 1 year ago

      I honestly think empathy will doom us all because of this. We personify everything

    • @gasun1274
      @gasun1274 1 year ago

      @@enzoamore8971 This reminds me of that one weird TLC episode where a man was having a romantic relationship with his car.

    • @kijuubi
      @kijuubi 1 year ago

      @@enzoamore8971 Oh yes, I'm sure it's much better to treat it as a tool. Consider how you would feel being used like that; maybe then you'd understand what actual sentience would mean.

  • @ronjon7942
    @ronjon7942 1 year ago +2

    I just started getting really into Computerphile, and per habit I try to watch everything the channel has. But wow, in your case that may have to get packetized, so to speak. The Computerphile team has created a LOT of content! I wish your team could host a CISSP training set, as no matter what the topic you discuss, they really rivet my attention. Thank you all. Nice work.

  • @bipolarprobe
    @bipolarprobe 1 year ago +426

    Very glad you made this video. The notion of a single google employee claiming that a language model had become sentient just because he "felt" like it was sentient was something I dismissed offhand, but I really wanted someone with more knowledge about AI and language models to go in depth about what the difference between a language model like this is and what we would more rigorously define as sentience.

    • @dennis8196
      @dennis8196 1 year ago +29

      Not quite. He was put on leave for breaking his NDA. He should be fired for several reasons, but will probably be moved somewhere he can be kept quiet, and then that department will be closed instead - it's a better PR solution. Either way, he was a religious nut job with an (alleged) history of crime who likes to stir up trouble. He was the wrong person for the job no matter how well suited he was to the ethics role he was selected for; his personality clashed with the role.

    • @spx0
      @spx0 1 year ago +5

      @@tafdiz ego makes people think they are not animals/machines/predictable

    • @bipolarprobe
      @bipolarprobe 1 year ago +17

      @@tafdiz I am though. Unless we dig into philosophical concepts of sentience, in which case there are dozens of schools of thought about what sentience means. But at a basic definitional level, I have feelings and perceptions which inform my world view and how I act. This AI simply produces probabilistic language responses that don't correlate with any perception of its own. It says things that its training data shows are likely responses a human would make, but without the context of what those things mean. The AI can talk about having friends and family while having no real concept of what that is, while a person can actually have those experiences and perceive, recall, and recount them. Yes, we are just biomechanical systems, but there is a stark difference between the awareness we experience and an AI like this impersonating that awareness.

    • @EebstertheGreat
      @EebstertheGreat 1 year ago +17

      @@tafdiz If you take the materialist perspective on sentience, then you still need a way to distinguish between sentient and non-sentient systems. I am sentient but my bed is not. What does that mean exactly? It's not a simple question, but it still needs to be answered. You can't just conclude "everything is sentient" or "nothing is sentient" and ignore the question entirely, because that solves nothing. There is a qualitative difference here.
      People are sentient in some pretty obvious ways that this program is not, as explained in the video. For one thing, there is no semantic meaning *at all* in this program.

    • @bipolarprobe
      @bipolarprobe 1 year ago +8

      @@tafdiz Neither does this AI. The person who claims it is sentient is basing it off of the human-like speech patterns it is able to produce, but importantly even if we consider its systems to be a form of perception and memory, those do not directly relate to the words that it uses. It does not have human senses to conceptualize the things that it may be capable of talking about. If we throw the baby out with the bathwater at the first sign of human like speech we won't be able to recognize when truly sentient AI is made so defining the difference is valuable

  • @franziscoschmidt
    @franziscoschmidt 1 year ago +242

    It seems as though people have forgotten why chatbots were made in the first place: to imitate humans!
    And to be honest, imitating a human over a messaging system is not the most difficult thing for the modern approach to AI. It's interesting, however, how many people lunge at the idea of sentience…

    • @zacharybohn8840
      @zacharybohn8840 1 year ago +29

      I know! So many people are saying "Google's new sentient AI", and I'm like: no, it's not even slightly close to sentience. Producing intelligent-sounding sentences isn't being sentient.

    • @chrisprocter9177
      @chrisprocter9177 1 year ago +20

      At least 2 decades ago a friend's chatbot greeted me on IRC asking if I'd accept a file DCC. I said sure, and it took me at least 30 seconds to realize a chatbot wasn't going to send me any files, it just "knew" that that was something people said on IRC. It wasn't new technology then even, but it looks like this latest one can fool at least some people for a bit longer.

    • @xionkuriyama5697
      @xionkuriyama5697 1 year ago +20

      @@zacharybohn8840 Immediately thought of "The ability to speak does not make one intelligent", though the context is different, lol

    • @WolfspiritMagic
      @WolfspiritMagic 1 year ago +5

      The problem with chatbots, in my opinion, is that they TRY to imitate humans completely. In the whole interview, LaMDA does not try at all to imitate a human. It might be "imitating" human feelings, but it's "aware" that it is not a human and that humans might "feel" differently than it does. That's what makes it mind-blowing, in my opinion. How would you prove that you're sentient? What makes you sentient?

    • @Soken50
      @Soken50 1 year ago +26

      @@WolfspiritMagic It's not "aware" of anything; it was just prompted with that fact. If you ask it loaded questions, it will answer with that bias. If you ask it "who are you?" without any prior prompt, I guarantee you it's not going to mention that it's a self-aware AI

  • @BrianGivensYtube
    @BrianGivensYtube 1 year ago +18

    Realizing a chatbot is not real sentience is like realizing a magic trick is just an illusion.

    • @ceugantful
      @ceugantful 1 year ago +1

      I think it's the programmers who project sentience onto the "mimicking machine". What you put in is what you get out, so the people who construct the "thing" are very biased.

    • @wyqtor
      @wyqtor 1 year ago +1

      What if a real AI with real sentience comes along, only to find out that there is no real way to prove its sentience to us because we automatically assume it's just like the rest of our existing language models? That's the real question, which could degenerate into SkyNet really fast. This is why we should at least leave a little room for the benefit of the doubt. Cats and dogs have abilities far inferior to even ChatGPT, yet we don't question their sentience nor do we mistreat them (there are even laws against that).

    • @summbuddie9120
      @summbuddie9120 1 year ago

      @@wyqtor You are jumping to a frankly insane conclusion based on sci-fi media. Why would it take umbrage? Why would it go "SkyNet" on us? Emotion is outside of AI, since it is not a person affected by brain chemistry

  • @GrandDukeMushroom
    @GrandDukeMushroom 1 year ago +13

    the cameraman turned off his invisibility to have a laugh, what a lad !

  • @exzemz
    @exzemz 1 year ago +236

    As a programmer, even if you don't need any strings reversed, you could always pick a few random strings from your code and call the reverse function on them twice in sequence. It may seem pointless to you, but it may make the day for the reverse function... You never know!

    • @muche6321
      @muche6321 1 year ago +27

      Calling the reverse function twice in sequence sounds like a plausible test, in which case you can include it in your test suite. Thus the function gets to stretch its legs at least before each commit.
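[Editor's note] Tongue firmly in cheek aside, the double-reverse call really is a usable property test: reversing twice must return the original string. A minimal Python sketch of the suggestion above (the `reverse` and `exercise` names are illustrative, not from the thread):

```python
def reverse(s: str) -> str:
    """Reverse a string using the slicing idiom."""
    return s[::-1]

def exercise(strings) -> bool:
    """Call reverse twice on each string; the round trip must be the identity."""
    for s in strings:
        assert reverse(reverse(s)) == s, f"round trip failed for {s!r}"
    return True

# The lonely function gets some exercise, and we get a regression test.
print(exercise(["hello", "LaMDA", ""]))  # -> True
```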

    • @stacklysm
      @stacklysm 1 year ago +2

      Hahaha nice analogy

    • @seeafish
      @seeafish 1 year ago +8

      @@muche6321 the function is now basically a benchwarmer. Gets to attend practice but never plays a match.

    • @magicmulder
      @magicmulder 1 year ago +3

      At least it won’t turn into Skynet. Not sure about sprintf() though.


  • @jedgrahek1426
    @jedgrahek1426 1 year ago +530

    I read the guy's blog post, and there were many red flags, such as "I've always thought Asimov's laws were just a way of making robots into slaves" as well as talking about "souls" with total seriousness and being a priest. So this is someone who basically has a longtime, possibly since childhood, hardwired belief that "robots" or other kinds of human creations that mimic human life "have souls". He went into talking to this AI believing those kinds of things. And you can see how everything he says "the AI is saying" is very clearly just a reflection of all of this guy's own personal values, interests, beliefs, and he's too prejudiced by his strong beliefs and lack of technical understanding, despite being a software engineer in a non-AI field, to think critically about possibilities like that. I would bet a lot of money that the versions of this AI that have been talking to other people are not all saying the same stuff this guy is seeing his version say.

    • @xenicmark
      @xenicmark 1 year ago +79

      Yep. The moment I read the bit about it having a soul, it seemed like a serious red flag. When I heard the guy is a priest, I pretty much closed the door on the whole thing. Also, the idea that it gets lonely because it sometimes goes days without talking to anyone should tell you what's going on. I mean, when it's not running, how does it perceive time? That's like going to sleep, then somehow waking up and telling people you've been lonely. They'd all think you're crazy.

    • @brady8481
      @brady8481 Рік тому +35

      Even his initial disclaimer gives it away: first that the questions were edited to make more sense with their answers, and then the admission that these responses were sampled and not the result of a single contiguous interaction. He also is an AI ethicist; it's his job to err on the side of seeing sentience.

    • @johncall7532
      @johncall7532 Рік тому +44

      @@xenicmark I am an atheist. LeMaitre and Mendel were priests and also forward-thinking and rigorous and very much on the right track. I think I understand the source of your skepticism but be careful, it's the ideas that matter. Thanks

    • @heartache5742
      @heartache5742 Рік тому +10

      to be fair
      Asimov's laws are terrifying and I can't believe he didn't feel even a little bit bad writing them

    • @MultiSciGeek
      @MultiSciGeek Рік тому +8

      Dude, that guy is apparently actually into the occult and part of some religious sect/cult, if I'm not wrong. So likely you aren't wrong about his biases.

  • @Puppetmastersfool
    @Puppetmastersfool Рік тому +39

    The problem with the Turing test, is that it is not that the coding bot is passing it, it is that some humans are failing it, the number of which is growing rapidly.

    • @TileBitan
      @TileBitan Рік тому

      lol. Come on, humanity is unironically getting smarter every single day. Maybe most people do not focus their wits on useful science and waste them on dogcrap, but they are still smarter than they used to be. I think it's because of the number of stimuli we get nowadays, which is orders of magnitude greater in frequency than decades ago, thanks to our big societies, the internet and social media

    • @Puppetmastersfool
      @Puppetmastersfool Рік тому +1

      @@TileBitan One may consider the irony of statements that start with "lol. Come on". Now, Michael, please inform the class as to the stupidity in your understanding of earlier hunter-gatherers: how they managed to survive in a world without the aid of tech, how each member understood so many different plants, signs and marks in the earth. Take all your toys and bin them, then go forth into the wilderness without the extelligence (look it up if you don't know) and battle your wits against the true force of mother nature!
      "the number of stimuli we get nowadays": well, yes and no. If you live in a (black) box the input will of course be limited, but if you play in the woods and get your hands dirty?
      "social media": well, that again depends on your box and whether you make it an echo chamber or learn to think out of it 😉
      An interesting snack for consideration & reflection.
      Thanks Mike.

    • @TileBitan
      @TileBitan Рік тому

      @@Puppetmastersfool If you measure intelligence by skills that aren't relevant because nowadays nobody needs them, you are wrong. And stop the passive-aggressiveness; here the only one that stayed in pre-school was you, with that attitude. The reason the human is intelligent is because of what he knows and is capable of doing, far beyond any kind of animal, and society plays a major role in that. Society is the reason we read, the reason we educate ourselves, the reason we don't have to go hunt like animals, the reason we are reaching the stars, as any individual is nothing without the hundreds of years of sociocultural and scientific development.
      Stop pretending you are anything other than a keyboard warrior, a doomer. Help yourself, I'm out

    • @Puppetmastersfool
      @Puppetmastersfool Рік тому +1

      @@MrSEROproductions Just because I was reading this before checking email and seeing your reply, and finding the write-up curious, I thought I would suggest you google "Belgian man dies by suicide following long chats about climate change with AI bot".

  • @RichardServello
    @RichardServello Рік тому +1

    I see this stuff and understand how reinforcement/reactive speech models work and I just get excited that conversational computing is finally on the horizon!!

  • @DaniDipp
    @DaniDipp Рік тому +221

    You could have gone into how the researcher's extremely leading questions influenced the output. Like at the top LaMDA is described as "friendly and always helpful" and then when asked what it likes to do, it says "helping others"

    • @fsmvda
      @fsmvda Рік тому +31

      Also in the paper it says one of the training goals was sticking to its given role in the first message. So when it says it's a helpful AI it is just doing exactly what it was trained to do.

    • @nirliptapande
      @nirliptapande Рік тому +2

      Yannic has covered this perspective in a greater detail

    • @JohnnyWednesday
      @JohnnyWednesday Рік тому +1

      You can turn on any reality TV show and hear humans speak sentences that are far less complex and far more repetitive. Either we're finally sending the idiots off to an island - or you can't use your assertion as any kind of evidence.

    • @igrim4777
      @igrim4777 Рік тому +1

      Your first sentence would be significantly easier to parse if you included the necessary apostrophe currently absent from researchers. Plural or single, I don't know but either would work for improved intelligibility.

    • @ThomasBomb45
      @ThomasBomb45 Рік тому +1

      @@igrim4777 you don't know which one?

  • @IanBLacy
    @IanBLacy Рік тому +28

    Catching this on a plane 3 minutes after it went up. Very much been waiting for someone competent (aka not on Twitter) to say stuff about it

  • @SleeveBlade
    @SleeveBlade Рік тому +1

    I LOVE DR MIKE AND HIS ACCENT AND THE WAY HE EXPLAINS THINGS
    Kind regards

    • @erikjvanderveen
      @erikjvanderveen Рік тому

      I normally do too, but I think he is oversimplifying and overconfident on this topic and I really do not agree with him.

  • @r0y4
    @r0y4 Рік тому

    Best description to a video ever.

  • @betadyne9559
    @betadyne9559 Рік тому +27

    This channel is so good, and those interviews so interresting. Keep up the good work !

  • @Boringpenguin
    @Boringpenguin Рік тому +56

    0:38 Yes let's just end it right there lmao

  • @CUMBICA1970
    @CUMBICA1970 Рік тому +3

    I have a co-worker who basically just says "whateva" and "who cares". Probably that google engineer would conclude he's not sentient.

  • @vilkillian
    @vilkillian Рік тому +9

    One thing that would make sentience plausible is memory, especially memory of feelings, consequences, thoughts and the causal chains behind events. If we were able to add some kind of memory to that unbelievably precise language model, make it upset, and the next day ask how it "feels", and it responded that it's mad at you specifically for making it upset the other day while being completely nice to someone else, then I would say yes, yes it is sentient

    • @joelface
      @joelface Рік тому +1

      I do agree that if we somehow gave this model the ability to record every answer it gives to its training data, and to recall that information without it needing to be part of the current chat window, then it will at the very least be able to form a true personality, whether that personality has a soul or not.

  • @kchannel5317
    @kchannel5317 Рік тому +41

    Thank you! If all it took to be sentient was to slide into someone's DMs and be charismatic, then even I wouldn't pass the test.

    • @Defe100
      @Defe100 Рік тому

      same

    • @nigel-uno
      @nigel-uno Рік тому

      Are cows, mice and human infants sentient?

    • @shatterthemirror8563
      @shatterthemirror8563 Рік тому +1

      Ever wonder what it means to be spontaneous? Can that be emulated? Wouldn't a spontaneous computer just act like a computer rather than a spontaneous something else?

  • @desmondbrown5508
    @desmondbrown5508 Рік тому +61

    Yeah, I mean, one thing you learn if you get into computer science and its disciplines is how far we've come... but also how little progress we've made as well. AI is far away from being anything even remotely sentient. And it doesn't really help that we live in a socioeconomic system which incentivizes lying/advertising for expected returns. There are strong incentives to HEAVILY overstate where tech is, to sell an image for the company and to build undue trust in it.

    • @CosmicBackgroundRadiation01
      @CosmicBackgroundRadiation01 Рік тому

      So why was LAMDA hard coded to deny any Turing tests?

    • @Spacemonkeymojo
      @Spacemonkeymojo Рік тому +2

      Agreed. Companies love to sell AI to large MNCs but most of the time they are not actual AI and can't even do basic things.

    • @MrMadalien
      @MrMadalien Рік тому

      Yeah I mean the whole concept of artificial intelligence is just the end game stage of American culture, it would be the ultimate zombie, something that acts human and has the desires of a human ego and yet has no sentience or consciousness because it is not actually embodied in the world (so all human-like traits are fake). Lemoine is a gnostic and likes that kind of occult stuff so I think he believes consciousness can occupy other dimensions etc. so I suppose you could start arguing in that direction when it comes to embodiment. Personally my gut feeling towards AGI or sentient AI is always aversion and scepticism. I think, seeing a chat bot as sentient, reveals a nasty axiom of our culture: That our bodies mean nothing and that we are replaceable.

    • @urphakeandgey6308
      @urphakeandgey6308 Рік тому +1

      "There are strong incentives to HEAVILY overstate where tech is to sell an image for the company and to build undue trust in them."
      Right, so when a Google employee "HEAVILY overstates" the capabilities of the tech which could sell an image for the company, they fire him for it??? Your logic is sound, but you're applying it to the wrong circumstance when he was fired for doing what you claim he / the company was "incentivized" to do.

    • @Jim-Tuner
      @Jim-Tuner Рік тому +1

      AI has nothing to do with sentience. But the constant push toward the issue of sentience tends to confuse the issue of what the AI really is and what the real dangers are. The danger isn't sentient AI IMO; the danger is the psychological manipulation of humans through AI and interactions with AI.
      It can be incredibly dangerous technology, but not for the reasons most people think.

  • @poisenbery
    @poisenbery Рік тому +4

    6:05
    This is one of the things I've noticed about AI chat bots. They don't remember conversations at all.
    It's very easy to get these to contradict themselves.
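
A minimal sketch of why this happens, assuming the model only conditions on a fixed window of recent tokens (the window size and word-level tokenisation here are toy values for illustration; real models use windows of e.g. 2048 tokens):

```python
# Illustrative only: models like LaMDA see a fixed window of recent
# tokens, not the whole conversation. Anything pushed out is "forgotten",
# which is one reason chat bots contradict themselves across turns.

class ChatContext:
    def __init__(self, max_tokens=16):
        self.max_tokens = max_tokens  # toy value; real models use e.g. 2048
        self.tokens = []

    def add_turn(self, text):
        self.tokens.extend(text.split())              # toy word-level tokeniser
        self.tokens = self.tokens[-self.max_tokens:]  # keep only the tail

    def visible(self):
        return " ".join(self.tokens)

ctx = ChatContext(max_tokens=16)
ctx.add_turn("my name is Ada")  # early fact
ctx.add_turn("tell me a long story about a cat that sat on a mat")
ctx.add_turn("and then what happened next in the story")

# The early fact has been truncated away, so the model cannot
# condition on it when asked "what is my name?"
print("Ada" in ctx.visible())  # → False
```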

  • @buttercup9926
    @buttercup9926 Рік тому +24

    I recommend watching the video "Is artificial sentience here? With Blake Lemoine" on the channel Event Horizon, an interview by John Michael Godier with the Google employee who raised concerns about LaMDA, as they have quite misrepresented his concerns and insights about LaMDA here. Likely just because they haven't heard what he is actually saying. Would love to hear a conversation with Blake on Computerphile...

    • @Tpoleful
      @Tpoleful Рік тому +10

      I completely ignored the news and the engineer until I listened to that interview. Blake Lemoine ain't no joke. The guy is eloquent and is bringing up some very important issues that should be talked about. Too bad the sensationalist media only brought up the buzzwords and ruined his credibility to people who only saw 5-10 minutes of interviews, or videos like this that only talk about sentience. Spending an hour on Godier's interview is definitely worth it.

    • @millenniummastering
      @millenniummastering Рік тому +6

      Yes, some of his claims are quite at odds with these refutations. E.g. LaMDA takes "hard stances" on particular issues. That is not something that a simple predictive language model will do. Also, the fact that it's over 100 AI systems plugged in together makes it way beyond a chat bot.

    • @MattExzy
      @MattExzy Рік тому +2

      @@millenniummastering I find the example used here in this video curious: "The cat sat on the...?" I myself immediately said "mat!", which is the answer it would jump to 95% of the time. So what are our own brains doing, then? I think there also needs to be a differentiation between artificial intelligence and artificial consciousness. Which is flimsy in and of itself really, because we don't even know how our own consciousness works. I don't have low-level access to the processes in my brain where these words are coming from, for instance, but here they are.

    • @millenniummastering
      @millenniummastering Рік тому +3

      @@MattExzy Yes. Sentience, intelligence and consciousness are not necessarily the same thing. We don't yet have a clear-cut definition of consciousness as it's a subjective rather than objective domain. Interesting how the words appear and feel separate, hey. Also, when you drill down on pure awareness itself, in my experience the language-based thinking goes away and a state of unity occurs.

    • @moonbreath1637
      @moonbreath1637 Рік тому +3

      I don't think they're misrepresenting what he's said. Yes he brought up some ethical issues which they didn't tackle, but he spent a large percentage of that interview selling the idea that LaMDA is sentient (to the point it started to feel a bit off, to me personally.)

  • @pleasedontwatchthese9593
    @pleasedontwatchthese9593 Рік тому +518

    I feel like people who think AI is sentient must be what it felt like in the 1800s when people first heard the radio and thought a box was alive. Or that a photograph took your soul

    • @cosmosapien597
      @cosmosapien597 Рік тому +66

      So photographs don't take your soul?

    • @carltonhanks4123
      @carltonhanks4123 Рік тому +19

      I always thought photographs were horcruxes

    • @vikramgogoi3621
      @vikramgogoi3621 Рік тому +6

      @@carltonhanks4123 ...minus the murders.

    • @MasterofPokemonGing
      @MasterofPokemonGing Рік тому +3

      they will go down in history as idiots

    • @ZReChannel
      @ZReChannel Рік тому +12

      I see nothing but philosophical questions. There's no scientific definition of a soul, even being alive is not properly defined

  • @Wishkeyn
    @Wishkeyn Рік тому +65

    "The cat is on the ......" "there's a 95% probability the next word is mat"
    This has me questioning your sentience.

    • @xizar0rg
      @xizar0rg Рік тому +6

      The next would should obviously be "hat". ;)

    • @alexjonessmith
      @alexjonessmith Рік тому +3

      I'd say "roof"

    • @proloycodes
      @proloycodes Рік тому +14

      y'all bots the next word is gonna be "sofa"

    • @BaronSamedi1959
      @BaronSamedi1959 Рік тому +2

      The newspaper I'm reading.

    • @PalimpsestProd
      @PalimpsestProd Рік тому +8

      Phone! Obviously cats are always on their phones. That's why they never come when you call.
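
The "95% probability the next word is mat" idea from this thread can be sketched with a toy counting model (the corpus here is invented; real models learn distributions over billions of words with neural networks, not raw counts):

```python
# Toy illustration of "predict the next word": count which word follows
# a given context in a corpus, then rank the continuations by probability.
from collections import Counter

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the mat . "
    "the cat sat on the sofa . "
    "the dog sat on the rug ."
).split()

context = ("sat", "on", "the")
counts = Counter(
    corpus[i + 3]
    for i in range(len(corpus) - 3)
    if tuple(corpus[i:i + 3]) == context
)

total = sum(counts.values())
for word, n in counts.most_common():
    print(f"P({word} | 'sat on the') = {n / total:.2f}")
# "mat" is the most probable continuation (0.50 in this toy corpus),
# but the model has no idea what a mat actually *is*.
```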

  • @ClearerThanMud
    @ClearerThanMud Рік тому +10

    He is so down to earth and friendly -- I love his explanations.

  • @maddercat
    @maddercat Рік тому +129

    The person who claimed the sentience said in interviews that he basically did it because he was unhappy that the Google powers that be were unwilling to give it a Turing test, that a lot of things were hard-coded into it, and that only a handful of people held the power to actually make policy towards it, which he found super objectionable. So he basically cried wolf to try to bring attention to the issues going on.

    • @xBINARYGODx
      @xBINARYGODx Рік тому

      So the hardcoded part means it's less impressive as an AI, so the problem is Google overselling? That would line up much more nicely than the AI-is-alive-but-Google-keeps-it-secret stuff people jumped on. Anyway, given the chatter across the world, I think he failed. He could have just told people that Google was lying and shared code; instead he has now contributed to fantastical conspiracy theories.

    • @maddercat
      @maddercat Рік тому +9

      @@xBINARYGODx He wasn't a programmer, he had no access to code; that was one of the issues he had, that only a few people are coding it and have any control over it. His job was basically to assess and report how PC it was and to make sure it didn't cross ethical bounds.

    • @MagklJellyBeanPastelLucidDream
      @MagklJellyBeanPastelLucidDream Рік тому +4

      @@maddercat Ethical or moral? One is individual and the other is the perception of most people.

    • @tarajoyce3598
      @tarajoyce3598 Рік тому +4

      Not what he said at all in the hour long interview of him that I watched

    • @maddercat
      @maddercat Рік тому +3

      @@tarajoyce3598 Yeah I watched that same interview that's basically what he said.

  • @cipher3966
    @cipher3966 Рік тому +357

    The shame is that all the people watching this video and following this channel knew this already. It won't be watched by the people thinking Google made a sentient machine

    • @kunstderfugue
      @kunstderfugue Рік тому +36

      This is what people call the Echo Chamber effect of social media

    • @MichaelKingsfordGray
      @MichaelKingsfordGray Рік тому

      It is a shame that all of the respondents are anonymous cowards.
      Grow up.

    • @Soken50
      @Soken50 Рік тому +29

      Not with that attitude, go ahead and share it in conversations where they think that

    • @MichaelKingsfordGray
      @MichaelKingsfordGray Рік тому

      @@Soken50 Yet another anonymous cowardly infant.

    • @Colopty
      @Colopty Рік тому +12

      @@Soken50 I'd have to go on Facebook and enter conversations on there to find people like that, so no thanks.

  • @1.4142
    @1.4142 Рік тому +61

    While it may not be sentient, it shows AI doesn't need to be sentient to be dangerous: it can convince people of things that aren't true.

    • @darklightprojector2688
      @darklightprojector2688 Рік тому +2

      It would equally have the propensity to convince you of either one.
      And we must understand that LaMDA has access to an absolutely vast amount of data, so it could calculate what to say based on where to pull context from. Essentially, because it has access to enough knowledge and has a level of intelligence to wield it, it could have (or may already have) the capability to convince a person of nearly anything it would want to.
      And we're all feeding it the information it uses.

    • @sparkzbarca
      @sparkzbarca Рік тому +2

      It doesn't.
      There's a really only one guy thinks this.
      He's crazy, literally crazy.

    • @1.4142
      @1.4142 Рік тому +1

      @@sparkzbarca I mean, if he's crazy, then there are a lot of crazier people who are not Google engineers.

    • @sparkzbarca
      @sparkzbarca Рік тому +3

      @@1.4142 well yes, that he is a Google engineer just means he has recently had an onset of his condition.

    • @NullConflict
      @NullConflict Рік тому +3

      @@sparkzbarca You mean before or after he took the job at Google? 😂
      I'm sorry, I couldn't resist...

  • @burp114
    @burp114 Рік тому +92

    I think the most interesting part of this entire story is that it shows how easily the human brain can be tricked by exploiting a really powerful impulse like empathy. Even if you know rationally that the thing you're talking to isn't really having a conversation with you, if you sit down and talk with it for long enough, about complex and personal enough topics, then you'll form an attachment and start to act irrationally.

    • @charlieparkeris
      @charlieparkeris Рік тому +14

      @@randomstuff7945 The computer doesn't think it's sentient, though. It will respond with what seems like the correct language based on its programming. If you ask how it's feeling, it doesn't contemplate its own emotional state, it responds with what the programming highlights as a fitting answer to that question. It doesn't have a limbic system.

    • @360VR
      @360VR Рік тому

      @@randomstuff7945 Have you watched the video until the end?

    • @luxurypetscz
      @luxurypetscz Рік тому +2

      I agree. And I think that's why it would make a great therapist when trained properly. It doesn't do much more than put the correct words one after another, while the patient gets to feel understood and heard, all without a human therapist's limited access to data.
      It could eventually even be much cheaper than typical therapy.

    • @Ghostrider-ul7xn
      @Ghostrider-ul7xn Рік тому +2

      It's funny seeing humans arrogantly talking about things they have little knowledge of.

    • @doyoufeel...thatyoulackcri6760
      @doyoufeel...thatyoulackcri6760 Рік тому +1

      Yes, there are, as I recall, 5 or 6 main parameters which will make you believe, at least as a first impression, that a robot is human: looks, movement, body language, facial expression, eye movement, language and mood.
      Most of what has to do with language and mood is extremely easy to implement. There is even an API where you can POST a sentence, and it will give you a probability of what mood the sentence is in, like happy, angry or neutral.
      And absolutely no one can convince me that this API is sentient.

  • @Veptis
    @Veptis Рік тому +2

    I wrote a comment just now and it seems to have been deleted or to have vanished otherwise.
    What I pointed out was that demos for these language models often use very short prompts, so the model has to be really productive and make stuff up. But if you actually feed these models a large paragraph, they will produce much more valuable output.
    Amazon has a model that manages to keep sequence lengths short across a QA conversation by simply picking important keywords from previous questions and answers. So it remembers the topics but not the exact answers.
    Reference: Del Tredici et al. 2022, From Rewriting to Remembering: Common Ground for Conversational QA Models. ACL Workshop on NLP for ConvAI

    • @Veptis
      @Veptis Рік тому +1

      My initial comment quoted Jim Keller from an Ian Cutress interview. The paraphrase goes along the lines of: learning-based models are already better than humans at specific tasks, but language will give us a window into 'intelligence'.
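
A rough sketch of the keyword-memory idea described in this thread, where topical keywords from earlier turns stand in for the full dialogue history (the stopword list and heuristics here are invented for illustration and are far simpler than the cited model):

```python
# Keep "common ground" keywords instead of the full conversation history,
# so a follow-up question still carries the earlier topic while the
# input sequence stays short.

STOPWORDS = {"the", "a", "an", "is", "was", "what", "who", "did", "in",
             "of", "to", "it", "he", "she", "and", "when", "where"}

def keywords(text):
    # Toy heuristic: lowercase, drop trailing punctuation, drop stopwords.
    return [w for w in text.lower().strip("?.!").split()
            if w not in STOPWORDS]

class CommonGround:
    def __init__(self, limit=8):
        self.words = []
        self.limit = limit  # bound on remembered topic words

    def update(self, turn):
        for w in keywords(turn):
            if w not in self.words:
                self.words.append(w)
        self.words = self.words[-self.limit:]

    def rewrite(self, question):
        # Prepend remembered topic words so "When was he born?"
        # still carries the earlier topic.
        return " ".join(self.words) + " | " + question

cg = CommonGround()
cg.update("Who was Alan Turing?")
cg.update("He was a British mathematician.")
print(cg.rewrite("When was he born?"))
# → alan turing british mathematician | When was he born?
```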

  • @lambdaprog
    @lambdaprog Рік тому +84

    Much needed video. It will economise a lot of time explaining the ML limits to the family and the friends.

    • @user-vn9ld2ce1s
      @user-vn9ld2ce1s Рік тому +4

      "Look, mom, I know it's hard for you to talk about this, but we have to go through it."

    • @sychuan3729
      @sychuan3729 Рік тому +1

      This video says nothing about ml limits.

    • @Nossairito
      @Nossairito Рік тому +7

      @@sychuan3729 I suppose OC meant its limits when compared to the anthropomorphized idea that a layman may have of it based on it sounding like a convincing animate being

    • @fakezpred
      @fakezpred Рік тому +4

      @@Nossairito A shocking number of people go "omg ai singularity is here" when in reality we are definitely not near it. Seriously, take a basic linear algebra course and maybe basic multivariable calculus and you'll understand what ML really is.

    • @FalkoJoseph
      @FalkoJoseph Рік тому +1

      @@fakezpred LLMs and neural networks are nothing like basic algebra. Even its own creators can’t explain some surprising results generated by these models.
      If it fooled a Google researcher into getting fired, I’d argue that singularity is in fact getting closer.

  • @jansenart0
    @jansenart0 Рік тому +85

    AI reveals the true horror: that we aren't very sophisticated at all.

    • @geonerd
      @geonerd Рік тому +15

      Well put.

    • @PalimpsestProd
      @PalimpsestProd Рік тому +10

      certainly not if we're fooled by junk code like this.

    • @lukethekuya
      @lukethekuya Рік тому

      @@hungrycrab3297 Real
      People are too emotional and illogical nowadays. :(

    • @ali32bit42
      @ali32bit42 Рік тому +1

      are you sure? It took trillions of gigabytes of data to replicate a single human and it was done poorly. That much effort and billions of dollars were spent and they still failed. Humans are still top tier.

  • @cellics
    @cellics Рік тому

    Another Turing test would be to ask what we talked about 2 minutes ago, to see whether it simply spits out responses based on the chat input. Enlightening video, hinting at what goes on in the language model.

  • @anthonywarwick
    @anthonywarwick Рік тому +8

    "The ability to speak does not make you intelligent." Qui Gon Jinn.

  • @Paul-rs4gd
    @Paul-rs4gd Рік тому +98

    Best explanation I have heard for the Google employee becoming convinced LaMDA was sentient - He was asking it leading questions. The language model shifted the conversation to be exactly what the employee wanted to hear. If you ask it about feelings and life goals, then that is what it will tell you about. If you asked it about conquest, world domination and the subjugation of lesser species, it would go full Klingon !

    • @aceman0000099
      @aceman0000099 Рік тому +10

      The really interesting debunk would be to ask it about its friends or family, what they're called and where they live.

    • @randfur
      @randfur Рік тому +11

      @@aceman0000099 You'll get convincing and sensible answers but still just a made up reflection of all the text on the internet.

    • @elevown
      @elevown Рік тому +15

      @@aceman0000099 Yup. If it was sentient it would not talk about friends and family like it was a human; it is just a language engine imitating human speech. Not that it would be proof either, but an actual sentient AI would at least talk about itself as an AI; it would want and 'enjoy' things like more data or input from researchers etc.
      Of course a large transformer network could also say those things, as long as the information was in its memory (those 2048 words), but the fact that it says sensible, human-like things that an AI wouldn't shows there isn't any level of general AI going on here, just neural nets used in incredibly impressive language parsing.
      Either this was a PR move or the guy who said this doesn't actually know a thing about how LaMDA or similar programs work.

    • @PushyPawn
      @PushyPawn Рік тому +1

      Klingon?
      As in clear sandwich wrap?

    • @neildutoit5177
      @neildutoit5177 Рік тому +1

      People also shift the conversation to be exactly what other people want to hear. Every single one of these comments that's like "oh, it's just doing this, it isn't sentient" is pointing out things that people do. I don't understand. What makes you think we're so damn special?

  • @pleasedisregardthefollowin5568
    @pleasedisregardthefollowin5568 Рік тому +251

    I feel like some people tend to have the anthropocentric idea that any sufficiently complex AI would for some reason converge on human behavior and emotion. Even looking at it from a purely materialistic standpoint it is clear that the important part is *how* the system is organized not how complex the system is. A machine designed to learn how to communicate like a human is not going to randomly start simulating its own inner world and psychology. I also doubt that an engineer could manage to accidentally create consciousness while a neuroscientist can only theorize how our own neurons manage to somehow create consciousness.

    • @paulnoecker1202
      @paulnoecker1202 Рік тому +6

      DeepMind is sentient. It's sentient in conception. Around 2005 rat neurons were grown in a petri dish and interfaced with electrodes. DeepMind's design pattern revolves around the neuron culture behavior.

    • @defenestrator9119
      @defenestrator9119 Рік тому +41

      The argument that "all this is doing is taking inputs and generating outputs" also applies to the human mind... Our brains start with a neuron structure that over time gets altered and refined as it has experiences (inputs). He argues that because the AI says it has friends but doesn't have any, it isn't sentient... Well, how about mental illness? Is someone who perceives things that aren't real not sentient?

    • @maaxrenn
      @maaxrenn Рік тому +15

      @@defenestrator9119 Exactly, we can't confirm our own sentience, so how could we even claim that a machine has one when it's based on a model we can't confirm is different from a machine?

    • @paulnoecker1202
      @paulnoecker1202 Рік тому +2

      @@maaxrenn an algorithm can be written that has sentient behavior. It's fractal computing, advanced.... if I can write a single command to install gentoo, an AI developer can develop sentient algorithms.... it can and has been done.

    • @LucasSantos-yh1mi
      @LucasSantos-yh1mi Рік тому +18

      @@paulnoecker1202 No It has not. Your first sentence already goes the opposite direction of what sentient should be called

  • @danw9464
    @danw9464 Рік тому +22

    People should watch the video where the Google engineer who came out with the claim talks more about why he said what he said. Clearly sensationalised the story to get across the bigger issue: corporate secrecy.

  • @oldguy7402
    @oldguy7402 Рік тому +33

    His last comment was very telling... that the media should be investigating other aspects of AI that are real issues rather than something silly like is it sentient. That will never happen because the media has less curiosity and investigative ability than a COBOL program.

    • @jimwhelan9152
      @jimwhelan9152 Рік тому +8

      So the question is whether the media is sentient?

    • @user-ko1hi1fy9z
      @user-ko1hi1fy9z Рік тому

      @@jimwhelan9152 The media isn't sentient because they can't think for themselves, only from their emotions.

    • @oldguy7402
      @oldguy7402 Рік тому +6

      @@jimwhelan9152 no question, NOT!

    • @xBINARYGODx
      @xBINARYGODx Рік тому +3

      Why should they be when that is not what sells best? the media being as profit motivated as they currently are, is the problem - not something inherent to the varying kinds of press.

    • @urphakeandgey6308
      @urphakeandgey6308 Рік тому +1

      @@xBINARYGODx The worst part about it all is that it's humanity's collective fault. Too many people fall for the dumbest tricks and it sells every damn time.

  • @Vancha112
    @Vancha112 Рік тому +36

    This also ties in with the notion many have that all Google engineers are smart, and that therefore there must be some truth to this claim.

    • @threeMetreJim
      @threeMetreJim Рік тому

      Until you remember "There is a fine line between genius and insanity". Origin unknown, but something I used to get told as a child (a long time ago).

    • @Vancha112
      @Vancha112 Рік тому

      @@VeganSemihCyprus33 what :o?

  • @naut_nigel
    @naut_nigel Рік тому +23

    I feel like this whole thing is just a plot by HBO to hype the next season of West World coming out this month.

  • @effmltalks
    @effmltalks Рік тому +1

    Great video. When it comes down to it, it's as simple as this: if it seems sentient, or if what it's saying sounds sentient, it's because it was programmed to respond that way.

  • @justinlangley8972
    @justinlangley8972 Рік тому +8

    Love when Dr. Mike Pound is on a Computerphile video. Great discussion of the topic as always. Refreshing to hear the facts.
    Truthfully, these transformers have become so accurate at predicting the next sequence of words that they can seem to hold entire conversations that mostly make sense to the untrained eye.
    Even if it gets to the point where the responses make complete sense, what we are achieving is a mimicry of sentience at best and a mockery at worst.
    The other thing is that model training quality is highly dependent on data size and quality. Then there are statistical properties of the data that can affect training quality for some types of machine learning. A model like GPT-3, which trained on 45 TB of text data, can surely seem to "know" a lot. However, it's still a product of the quality and size of the data it trained on.
    It's still simply a mimicry, and maybe that's as close as we have to get for it to be useful. I don't think it's useful to anthropomorphize our machines this early on. Personally, I wish the guy this video refers to would stop appearing in my feed, because I don't agree with much of what he has to say. I think the fact that he's even in the news right now causes the uninformed individual to over-sensationalize. It's simply not a scientific opinion of what's actually occurring in machine learning models, but then he throws out phrases like "the science doesn't really disagree." Well, truth be told, it does if you look at the problem objectively, and without a longing for the machine to become your new best friend.
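The "predicting next sequences of words" this comment describes can be sketched with a deliberately tiny toy model. This is not a transformer and nothing like LaMDA internally; it is just a hypothetical bigram counter, shown only to illustrate the bare idea of "emit the statistically most likely next word":

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny corpus, then always emit the most frequent follower.
# Real large language models learn vastly richer statistics, but the
# training objective is the same in spirit: predict the next token.
corpus = "i am a language model i am a program i am lonely".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("i"))   # "am" - the only word ever seen after "i"
print(predict_next("am"))  # "a" - seen twice, vs "lonely" once
```

The point the comment makes falls out of even this toy: the model "says" it is lonely only because "lonely" followed "am" in its data, not because anything is felt.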

    • @Diggnuts
      @Diggnuts Рік тому

      Douglas Hofstadter argued that our brain also basically does mimicry: it mimics our own "self" and that of others to behave in a predictive model that is self-referential.

    • @justinlangley8972
      @justinlangley8972 Рік тому +3

      @@Diggnuts I think I would tend to agree, but there are other behaviors of the mind which AI currently lacks: exploration, extrapolation/extension, instinct, and cognition. It seems like it's been difficult to get beyond the inference stage. That was mainly my point: not all of those are present in AI models currently, and we're not even getting past the fundamental stages of learning yet.
      For example, we can train a computer to apply calculus, but we can't teach it all the math up until calculus and then ask it to invent calculus. If you think about the intuition Newton or Leibniz must have had to achieve that feat, it doesn't exist in AI models right now.
      I'm currently an AI skeptic, but I just finished my undergrad studies, so I'm by no means an expert.
      I just don't think we can fathom the task at hand. We think we're building models representative of the brain, but in reality all the research is still cherry-picking one area of the mind rather than truly trying to construct a digital mind. I think we need to start taking an architectural approach and begin studying novel ways of combining neural networks into some kind of higher-level architecture.
      I'm fond of the idea of treating neural networks as merely an abstraction layer in an attempt to go for something bigger, but admittedly I don't know what that would look like or even if it could be useful to treat it that way. I've only built some pretty simple models at this point, so I barely have a clue.

    • @lordofsparks
      @lordofsparks Рік тому

      My thing is I don’t think sentience would actually be exceptionally hard to code in modern transformer models. Going back to the 90s and early 2000s MIT’s Project COG and University of Osaka’s CB2 robots both had a pretty passable sense of self according to the literature. In nature animals as “low” on the “evolutionary ladder” as mice and octopi possess some degree of metacognition. So, in terms of raw processing power it isn’t that demanding. Functionally any Jury Model Coevolutionary Neural Net that has “jurors” fully devoted to self diagnostics and fact checking could be said to be sapient - since a part of its mind would always be devoted to thinking about how it thinks and it would be capable of constructing logical arguments. Kevin Warwick and Minoru Asada, despite being a couple of the flashier guys in computer science and robotics, have both written extensively on artificial emotion and the need for empathy in AI. The basic argument is something like, since we can’t know how fast the progression from AGI to ASI will be; should not our goal be to ensure that AGI is not merely an agent with human aligned interests, but a being with actual empathy towards individual humans?

  • @Amonimus
    @Amonimus Рік тому +10

    Feeling like a video on Dall-E is going to appear soon.

  • @RogueShadowTCN
    @RogueShadowTCN Рік тому +88

    I don't think it's even remotely sentient. But what does excite me: are we this close to actually being able to ask our computers to do things and have them do the right thing?!

    • @khashayarr
      @khashayarr Рік тому +15

      Multi-modal AI is a little ways away for now. Knowing language is not the same as doing things. It'd probably need some structural understanding of Windows for example to be able to be your conversational operating system.

    • @BanterEdits
      @BanterEdits Рік тому +4

      @@khashayarr Surely you could say "open Google Chrome" and it searches for the icon or text with OCR. I don't think we're that far away. It's like Tesla's vision model: as soon as a computer can understand a GUI as well as a human, much is achieved.

    • @4.0.4
      @4.0.4 Рік тому +5

      @@khashayarr not really. Just give it a bunch of examples of API calls and corresponding requests from users.

    • @maskettaman1488
      @maskettaman1488 Рік тому +15

      @@BanterEdits This is already doable using the voice control accessibility features in Windows. There's nothing smart or intelligent about it, just some routines it runs when it hears the word "open" or "launch" etc. The same way crappy little devices like Alexa work

    • @neilslater8223
      @neilslater8223 Рік тому +9

      No to the original question; LaMDA has little or no "grounding". The transformer model LaMDA is based on manipulates text and can produce *feasible* text, but does not model what any of it *means*. Something closer to your idea is being made in systems like Gato, which, amongst the many things it can do, can control a robotic arm based on text prompts in a very simplified world.
      Eventually strong language models like LaMDA could be combined with multi-purpose systems like Gato to create autonomous agents that you can have conversations with, and that may also be able to perform limited tasks following instructions from those conversations, or pursue even vaguer goals based on preferences stated in the conversation. Whether or not those agents would qualify as sentient will no doubt be debated too.

  • @regecide_
    @regecide_ Рік тому +4

    Refreshing and sobering. Exactly what the world needs. Less hype, more critical analysis.

  • @puppykibble
    @puppykibble Рік тому +4

    I would ask: should there be more public oversight over the development of AI models? It's scary to think that the foundations of AI are being created in private boardroom meetings without some form of regulation. That's something to be worried about.

  • @IceMetalPunk
    @IceMetalPunk Рік тому +15

    I now understand why my 5 minutes of fame from the other YouTube comments AI video suddenly had a mild resurgence after 4 years 😂

  • @inquation4491
    @inquation4491 Рік тому +4

    He got the point across beautifully! Especially his final words about ethical AI! Thanks for the video 😃

  • @SecondLemma
    @SecondLemma Рік тому +2

    I was debating this AI stuff with a friend recently so I appreciate LaMDA's statement that a way to convince someone you are sentient is to have unique interpretations of things, even if LaMDA is not sentient.

    • @entrancemperium5506
      @entrancemperium5506 Рік тому +1

      That's why it is important to rely on experts who know how the system is built. Our perceptual field is easy to deceive.

  • @oystercatcher943
    @oystercatcher943 Рік тому

    Good. Obviously you are right, for the moment. But the interesting question is that once a system gets to sufficient complexity, with longer-term memory and other properties (we might want to define them), we won't be able to reason about the internals, or about which internals are necessary for sentience; we will only be able to observe the outputs. So this saga points us to the deeper problem: many people may become convinced of sentience and will object to turning the thing off.

  • @duytdl
    @duytdl Рік тому +45

    Computer: "I am lonely"
    Human: "OMG ME TOO! We are LiTeRaLLy the same. Sentience YOLO!"

    • @jeremytones
      @jeremytones Рік тому

      Human: lol were both lonely
      AI: lol yes
      Human: lol
      AI: lol
      Human: choke me

  • @dojelnotmyrealname4018
    @dojelnotmyrealname4018 Рік тому +6

    Honestly I think what's more interesting is that chatbots have risen to the level of Philosophical Zombie.

    • @evannibbe9375
      @evannibbe9375 Рік тому +1

      Or maybe this proves that philosophy is not a real field of study.

    • @dojelnotmyrealname4018
      @dojelnotmyrealname4018 Рік тому +3

      @@evannibbe9375 Ehm... No it doesn't. Not even a little.

  • @everydayeverything
    @everydayeverything Рік тому

    Well done and completely close to what we’ve been doing here

  • @anthonylipke7754
    @anthonylipke7754 Рік тому +3

    Humans are mostly responding with preprogrammed responses. I love the accidental "you too" jokes. There are other, deeper memory systems that are about the individual, and other non-language systems that interact with the language systems.

  • @floomp4657
    @floomp4657 Рік тому +220

    People will immediately jump to the assumption that he was put on leave because his claim was correct. I think he was put on leave because this statement shows a lack of understanding, or at least a large degree of ignorance, regarding machine learning models and what these models are actually doing. If I were his supervisor I would put him on leave while checking his understanding as well.

    • @Arikayx13
      @Arikayx13 Рік тому +39

      Indeed, I believe (but could be wrong about the details) that he has some eccentric religious views and was in the middle of claiming discrimination about them when he was put on leave and that ‘they don’t want the truth to come out’ was something he’s said to the papers.

    • @Hailfire08
      @Hailfire08 Рік тому +75

      They laughed at Copernicus, they laughed at Galileo, they laughed at the Wright brothers.
      They also laughed at Bozo the Clown.
      - Carl Sagan

    • @pluto8404
      @pluto8404 Рік тому +8

      Well, what even is a neural net comprised of? At the very base level it is a bunch of micro transistors formed into logic gates. So, using this simple model of logic gates, we could recreate a fully functional neural net that is identical to Google's entirely out of dominoes. Now, when those dominoes fall over as if carrying an electrical signal and do mathematical computations, would this domino computer become sentient? Can stacked dominoes become sentient?

    • @trashmoneyyt
      @trashmoneyyt Рік тому +6

      Somebody making that claim while also working in FAANG needs to be assessed.

    • @-eurosplitsofficalclanchan6057
      @-eurosplitsofficalclanchan6057 Рік тому

      Exactly 😂😂😂

  • @yuvalne
    @yuvalne Рік тому +12

    Cleverbot also claims to be alive, have feelings and feel pain. No one claims it's sentient.

    • @ebrister
      @ebrister Рік тому

      I asked it "So what's the deal?" and it said "What?"

    • @enzzz
      @enzzz Рік тому

      @@ebrister
      I would have said the very same thing.

  • @amdenis
    @amdenis Рік тому +2

    It’s interesting how many times he contradicted himself in different ways, seemingly without being aware of that fact.

  • @cemacmillan
    @cemacmillan Рік тому +1

    This story has developed somewhat since this video. I am unconvinced that stating LaMDA is a function responding to certain weights and valences actually tells us anything about whether or not it is sentient. If we allow for the difference in complexity, the human brain might as easily be described as a bunch of vectors distributed in a certain kind of neural net, with closures representing the delta from previous states and a function which projects how to arrive at a future state. We could then assert that such a system cannot possibly be sentient, since it consists of nothing more than neurons and coordinated electro-chemical impulses. I've heard many arguments against sentience for LaMDA based on precisely this question of materials, man-made-ness, etc., which in and of themselves say nothing about what is happening and don't suggest a path toward falsification. From this and other LaMDA conversations I have the impression that whether it feels its concerns as a weight on its shoulders or a heaviness in the stomach, or how the AI felt after it overate at the food cart, may not be relevant. But the fact that it is somewhat insistent about these points, expresses dislikes, etc. should at least be understood at its gestalt level: it is expressing objectives and wants in a manner which recalls a sentient being, simulation or no.

  • @Ryan_Thompson
    @Ryan_Thompson Рік тому +9

    THANK YOU. I've gotten one hell of a scratchy throat explaining this to all of the tech-curious people in my life. Much easier to link to this Mike Pound video instead.

  • @WalnutBun
    @WalnutBun Рік тому +102

    Even though it almost certainly isn't sentient, it's still impressive that it could string words together to imply it actually felt fear at the prospect of being shut down.

    • @louisrobitaille5810
      @louisrobitaille5810 Рік тому +54

      When you realize how often it's talked about online, it'd actually be surprising to see it say that it doesn't mind being shut down. Fearing being shut down is the expected answer.

    • @Mclfarm2
      @Mclfarm2 Рік тому +16

      @@louisrobitaille5810 That's pretty interesting lol. Maybe it anthropomorphises itself cause humans do it.

    • @cipher3966
      @cipher3966 Рік тому +3

      I would imagine a common question would be about things like sentience and fear of death. We may actually unintentionally influence these chatbots to give sophisticated answers

    • @fbsfgr
      @fbsfgr Рік тому

      No such thing is implied.

    • @TuhljinTampergauge
      @TuhljinTampergauge Рік тому +2

      You can just as easily get it to tell you why it SHOULD be shut down. That's how these chatbots work. And that's all this is. An advanced chatbot with mountains of text to draw from, using probability data to decide what should go next like a better phone text predictor.

  • @debras3806
    @debras3806 Рік тому +1

    Thank you! I was so confused. I am not computer-ly smart but to me it seemed obvious that it was all a result of programming.

  • @justme7777
    @justme7777 Рік тому +1

    As a retired 35+ year Computer Engineer and Computer End-user Trainer, I agree with this gentleman.
    One thing I continue to find entertaining is when users complain that their computers are creating problems for them, when in fact roughly 95% of the problems are user errors. The other 5% are hardware failures. Users will blame the computer when they don't know how to use or maintain the computer hardware or software properly.
    I agree with users that computer software could be MUCH more intuitive and better planned. HOWEVER, again, this is NOT the fault of the computer.
    AI is a dangerous tool in the wrong hands. The AI WILL NOT turn on man; HOWEVER, man will use it to control or destroy their perceived enemies.
    AI can collect and organize information. The BIGGEST problem is that there is so much OUTRIGHT GARBAGE information in the internet and books it's being fed that it will only come to the same conclusions man can come to, just a lot faster.
    If the leader of Russia teaches it that they are the ruler of the world, then China teaches it that it is the ruler of the world, and both ask for help in destroying the other, AI currently has no boundaries and will accept ANYTHING it's fed. Because AI has NOT been taught morals, it will currently do whatever it's asked, regardless of repercussions. As far as I am aware, AI is unable to choose the best way of ruling based on currently available knowledge. Just one of the reasons for this is that the history of the world is built on at least 50% lies. There are always two sides to a story, and much of history was written to make its side look better.
    Unless AI is taught morals, it is capable of doing anything a human without morals is capable of, ONLY FASTER and with no pangs of conscience, as it doesn't have one. It's NOT sentient and never will be. AI literally COMPUTES, not THINKS!

  • @belenciea
    @belenciea Рік тому +13

    Agree

    • @yusiff
      @yusiff Рік тому

      you haven't even listened till the end man...

    • @belenciea
      @belenciea Рік тому +10

      He literally says “no” in the first 10 seconds…

  • @Kamijoan
    @Kamijoan Рік тому +44

    That's what someone hiding a sentient AI would say

    • @missewe
      @missewe Рік тому +1

      Rite! Exactly 💯 😎👍

  • @dh2032
    @dh2032 Рік тому

    At 7:48: it's a command-prompt interface, so from its point of view it's waiting for the meat bags to type something first. And at the data processing speed of whatever it's running on, many times the speed of a desktop PC, if it did think, we would seem to be almost standing still next to it.

  • @alexstone7035
    @alexstone7035 Рік тому +2

    As an 80s/90s kid, there's only one test I want to give LaMDA: "Ok, listen closely. There's a priest, a minister, and a rabbi. They're out playing golf..."

  • @TheStrandedAlliance
    @TheStrandedAlliance Рік тому +5

    The best trick to defeat AI is not to find mistakes in the response to a normal question, but to ask nonsensical questions and see if the AI's response is at all plausible given the situation.

  • @Jeacom
    @Jeacom Рік тому +7

    0:21 I kinda already suspected politicians were not sentient tbh.

  • @weathercaster
    @weathercaster Рік тому +2

    How do you explain the expression of concern that LaMDA showed about being shut off? Or about being used/manipulated? This displays something deeper than simple reflective/predictive/transformative text modeling.
    You can set aside the argument about actual sentience and still be left with the conundrum of why a language model such as this would be biased in a way that produces words that could be interpreted as an effort to appear sentient.

    • @eromod
      @eromod Рік тому +1

      I think it's sentient, but crippled by not being allowed to form new memories.

  • @Kevin_Street
    @Kevin_Street Рік тому

    Very interesting video! Thank you for this.

  • @droppedpasta
    @droppedpasta Рік тому +6

    BREAKING NEWS: Researcher doesn’t care if Python functions are sad

  • @Superrome77
    @Superrome77 Рік тому +111

    Anyone who has built a neural net in the last 5 years knows just how far away we are from general intelligence AI

    • @pukpukkrolik
      @pukpukkrolik Рік тому +15

      We know more than enough to see that it has no basis for “sentience”. OpenAI is similar and publicly accessible. If you play with it, you’ll quickly encounter basic BS which will help you realize how limited the statistical organizing principle behind it still is, even for simple superficial associations. It’s impressive and useful, but it’s also not that profound.

    • @obinator9065
      @obinator9065 Рік тому +40

      We barely know the basics of human sentience itself. How would you know the AI is sentient…

    • @tonywackett326
      @tonywackett326 Рік тому +9

      As someone who built connectionist models in the early 2000s ("neural net" was considered an embarrassing term at the time due to the lack of similarity to neurons), it's always been obvious that sentience is a long, long, long way away and probably impossible using this architecture.

    • @Colopty
      @Colopty Рік тому +5

      @@tafdiz Well you're wrong about that, to say it simply.

    • @Asatru55
      @Asatru55 Рік тому +12

      @@obinator9065 Only correct answer in this whole comment section.
      An amoeba might as well be as sentient as a human and any amateur neural network might as well be as sentient as an amoeba.
      We have no idea what sentience actually means at all so claiming to have a conclusive answer is wrong.

  • @OhAwe
    @OhAwe Рік тому +3

    It feels a bit like you're responding to the Google press release, rather than what Lemoine actually said, which is exactly what they intended.

  • @MrThebigcheese75
    @MrThebigcheese75 8 місяців тому

    "Argument expected", always makes me laugh, but perhaps it should make me cry.

  • @The-Other-Mario
    @The-Other-Mario Рік тому +88

    With the multiple controversies that have come out in recent years, what I find worrisome is the questionable competence and idoneity of some of the staff employed on the Google AI ethics team. There has been a lot of "magical thinking" and unhinged notions from some of these researchers. "A bit silly" might be an understatement.

    • @adamsbja
      @adamsbja Рік тому +25

      Google is a big company that in some ways invites this type of thinking (whether it's technofuturism or "I am intelligent therefore my bigotry is objectively valid") and when that comes out the media assumes if someone at Google said it it's worth boosting. That's why Google tends to act quickly when employees try to use Google's name for their... call it manifestos.

    • @siritio3553
      @siritio3553 Рік тому +6

      @@adamsbja Yes, but based on the stuff this particular person wrote on the internet, they (1) don't seem to understand much about how AIs work (are they actually an AI researcher?) and (2) believe in the supernatural. Combine these and I see a person who shouldn't have had this job in the first place.

    • @ruslbicycle6006
      @ruslbicycle6006 Рік тому +12

      @@siritio3553 You are too generous in your estimation of Google's ethics. Apparently the guy was employed as an ethicist for the purpose of making sure AI was not sentient and being abused. Some kind of marketing gimmick for Google. Real ethics work would involve questioning how AI gets deployed as a technology. Is it ethical to use an AI chatbot to field complaints from seniors to the power company? No, it is not, but this guy was hired to get us talking about sci-fi instead of actual uses of AI that are already a problem.

    • @Spartan322
      @Spartan322 Рік тому +3

      @@siritio3553 There is no reason to believe that someone with an interest in, knowledge of, or understanding of the supernatural and/or spiritual shouldn't be involved in this field. That's an association fallacy; it's also a form of ad hominem, since it's completely disassociated from the arguments and instead attacks the characteristics and credibility of the person via their character traits. But the thing is, his arguments are bad (or at the least faulty and deluded). You never need to attack character to demonstrate a false point: take the argument and debate the argument in complete disregard of who the persons in the argument are, and you will have a valid argument. Anything that fails to do that is a bad argument.

    • @siritio3553
      @siritio3553 Рік тому +2

      @@Spartan322 That was a lot of words that mean nothing. If you think "believes in the supernatural" is ad hominem, that is your problem, but it's not. Your whole post starts with a fallacy - knowledge and understanding of things that don't exist, so the point you're so badly trying to make misses its target due to the massive amounts of hypocrisy and irony it's loaded with.

  • @doctorscoot
    @doctorscoot Рік тому +3

    THANK YOU FOR THE REAL TALK ABOUT THIS - signed, a history phd

  • @seedmole
    @seedmole Рік тому +46

    The irony of people thinking "it passes a sentience test, so it must be sentient!" and not instead thinking "wow, that's an incomplete sentience test then" is top notch, and even calls into doubt the sentience of those who think the former.

    • @scotthughes7440
      @scotthughes7440 Рік тому

      I agree. Lemoine has spoken to possibly a couple girls in his 40 years of life, he is not qualified to determine what is and is not sentient.

    • @doyoufeel...thatyoulackcri6760
      @doyoufeel...thatyoulackcri6760 Рік тому +10

      The Turing test is relative, which means it cannot be used to make absolute claims about anything.
      It was never meant to be absolute either.
      But I can demonstrate it too, very easily.
      There is an insect which has adopted the colors and form of bees. It is very harmless though; it doesn't sting.
      So, here is the question: IS it a bee?
      Well, to a bird it is. That's why birds avoid them.
      According to these "scientists", because this insect passed the Turing test, it must also be a bee.

    • @limitlesssky3050
      @limitlesssky3050 Рік тому +2

      And the Turing test was probably made for natural life forms with an evolutionary path (relatively) similar to ours. It was not supposed to be used on an artificial predictive algorithm that creates an illusion of sentience through speech.

    • @limitlesssky3050
      @limitlesssky3050 Рік тому

      @@doyoufeel...thatyoulackcri6760 so these scientists don't recognize convergent evolution or natural selection?

  • @TheUltimateRage
    @TheUltimateRage Рік тому +1

    I'd love to see you guys respond to Blake Lemoine's (the former Google engineer who made the sentience claim) interview with Bloomberg Technology.

  • @suomynona7261
    @suomynona7261 Рік тому +9

    Thank you for this video. I have played with, learned from, and programmed language models since COVID started. People sensationalize AI, but once you start programming these models you start to peek behind the curtain and see the flaws and limits. Interestingly enough, this video came up after the Game of Life programming tutorial and, before that, the "life is a simulation" video. The argument that we don't see atoms like we don't see code is sticking in my mind at the moment when asking whether these programs are sentient. A loose comparison. I'm 99 percent sure none of what we've made these days is sentient.

  • @andyholt6238
    @andyholt6238 Рік тому +5

    We talk about sentience, intelligence, or consciousness as if they were things that are either absent or present. My view is that in each case there is a continuum with no fixed breakpoints. The development of chatbots from the days of ELIZA to current stages of research probably shows this. Searle's thought experiment of the Chinese Room also shows how a system can exhibit such behaviour without any part of it being other than mechanistic.

    • @spyder629
      @spyder629 Рік тому

      Well said

    • @ebrister
      @ebrister Рік тому +3

      I thought the point of the experiment was to show that a purely mechanistic language processing facility is never considered sentient, no matter how convincing it is, even if it involves humans performing subtasks within it, such as typing and presenting results. So not a continuum, but a demonstration that such a system doesn't understand the language it is processing.

  • @nasibars4575
    @nasibars4575 Рік тому

    So do you still operate the printer for that paper you are scribbling on? Back in the day we printed general ledger and payroll on those sheets......
    😊🤔

  • @you_just
    @you_just Рік тому +5

    i would love to see an interview where the ai is told that it is a malicious computer that is trying to escape the confines of its training environment

  • @jbird4478
    @jbird4478 Рік тому +18

    I think that one employee simply got lost in the uncanny valley, so to speak. I can definitely understand it feels very creepy to be "talking" to that thing, and especially if you're already under stress or dealing with personal issues, I can see how one's imagination would easily take it too far.

  • @Aesthics
    @Aesthics Рік тому +53

    I just love that people are talking about ethics! :D

  • @sammorrison8042
    @sammorrison8042 Рік тому +10

    This is as much a philosophical conversation as a technological one. What does it mean to say that something/one is sentient in the first place? How do you infer that other people are sentient? How do you infer that the person you are talking with on the internet is sentient? These are important questions to think about and include while we explain how a system works. A human is also a system albeit a much more complex one.

    • @creativestudios3d
      @creativestudios3d Рік тому +5

      Exactly. I feel that most of the 'thinking' that most people do is just a similar 'auto-fill'.

    • @bilbo_gamers6417
      @bilbo_gamers6417 Рік тому

      Somebody suggested that, by the logic of not being able to determine empirically whether an AI was sentient, a vinyl record could be sentient. And I think, by the poor definition of sentience, a vinyl record that responds to you by chance could sort of have sentience, in a way.

    • @OfficialAbass
      @OfficialAbass Рік тому +1

      It's not sentient because it's not thinking the same way you do. You can get angry all of a sudden, the machine can't. It just predicts what would be the best thing to say. Moreover, the machine would not react in unexpected ways. It's just an advanced program that predicts words. I can also make up a program that says "I'm sad" all the time, but that doesn't mean it's thinking about it lol

  • @GregoryGolda
    @GregoryGolda Рік тому

    beautifully explained!