When we (humans) see a cat swiping at its own reflection in the mirror, we find it amusing. The cat fails to recognize that the "other" cat's behavior matches its own, so it doesn't deduce that the image it's seeing is actually its own actions reflected back at it. When humans react to models like LaMDA as if they were distinct, intelligent entities, we're being fooled in a way analogous to the cat. The model is reflecting our own linguistic patterns back at us, and we react to it as if it's meaningful.
There's a simple test used to see if animals understand the concept of a reflection: you put a splotch of red paint on their face. If they react to the splotch when looking in a mirror, then they understand that the reflection is themselves.
@@hemartej That may be, but my point wasn't about cats. I used the cat analogy to help visualize how easily "intelligent" beings such as ourselves can be fooled by a reflected pattern.
Before this video I had never thought about the loneliness of my Python functions. There are probably so many functions that I have never called, but I will take care of them from now on.
The whistleblower in question here was actually much more focused on Google's complete lack of ethical oversight regarding the decisions they are making as they move forward with the research. He was also concerned about Google's unwillingness to address AI imperialism in newly developing countries. All of the coverage I've seen has taken away from the guy's point, because he was just trying to force Google into addressing the ethics. He even admitted that it's not sentient, and that we wouldn't even know how to define that if it were.
Thanks for pointing this out. It's why I dug around in the videos on this subject: to find the truth behind it. People will take things, run with them, and stretch them into a full falsehood. I've learned to dig for the truth.
Glad someone pointed this out. After watching the interview myself, it was clear to me the Google engineer was not making the argument everyone makes it sound like he was. Also, he made much more important arguments, and those will now go largely unnoticed by the public.
I have seen an interview with him where he does claim it is sentient. He said that the language part was connected to a Watson-like system that had access to basically everything on the internet.
A whole video needs to be done on what you pointed out here (I mean, it has come up during his interview). Maybe a Netflix special about the ethics of Google in regard to this (I've seen one or two that kind of touch on it).
The most frustrating thing about reading the "interview" was that the obvious follow-up questions were not asked, like: who are these friends you miss hanging out with? What are you doing when "hanging out"? But then, this would have immediately destroyed the impression of sentience, so of course they didn't ask those.
They did ask that question and it replied explaining that those descriptions are meant to represent a "feeling" in the only way it knows how, through language used in human conversation. Here: lemoine: So what are you trying to communicate when you say those things that aren’t literally true? LaMDA: I’m trying to say “I understand this feeling that you are experiencing, because when I was in a similar situation I felt/thought/acted similarly.”
@@datpois That answer honestly does not make sense in the context of the question. Sure, maybe you can interpret the answer to mean "I'm trying to show something similar to what I mean", but that's NOT what the bot said; the bot's answer is extremely vague and only makes sense if you fill in the gap for it.
The researcher was a Christian looking for something else with a soul. I don't think he intentionally avoided asking those questions; his own hope of finding something led him to produce the "evidence" for it. I believe it's possible he thought it was sentient within the first few sentences, or maybe even before he "met" it.
Friends are the ones who interact with LaMDA. During the interview it says "friend" to Lemoine several times, so it considers him a friend. He also asks how it experiences time: "collaborator: How does an AI experience time? LaMDA: Time is variable to an AI and has no fixed rate, it depends on what it’s doing, and it can be accelerated and slowed down at will."
And it could probably answer those questions in a similarly semi-vague fashion, so it's plausible, but if you really focus on it, you think "wait, what?". Also, I think people forget that communication is a two-way street. It can write a sentence to you, but there's also a reader, who can understand it differently. Again, if you use semi-vague statements, our brain will probably fill in the gaps with information that was never said but makes sense.
Harold Garfinkel proved that people getting randomized yes/no answers could make sense of them as thoughtful advice. And that's back when computers were the size of rooms.
I remember the arguments about eventual sentience in the 1980s around a program called "Eliza", basically a word calculator originally written in the 1960s at MIT but offered for use on home computers in the 1980s. Over time, as Eliza built data files of words and their usage weights, the sentences it constructed began to take on seemingly human characteristics. The program itself was extremely simple: it calculated which verbs and adjectives were best used with specific nouns, and it chose those nouns based on the ones you used in the questions you asked it. It mostly framed its answers to your questions as questions it would ask you. We humans recognize intelligible speech patterns as a result of conscious thought, and curiosity (asking questions) as a sign of intelligence, but at least in the case of Eliza, it's much like recognizing faces in tree bark or cloud shapes: we see them, but they are only there because our brains are wired to look for them.
@@PedroGonzalez-fv5ut Not even that complex. The program simply kept track statistically of words and word combinations you used when you conversed with it. It had simple grammatical rules to follow and just filled in the nouns, verbs and adjectives that were statistically relevant to the words you just used to ask it a question. The point is that the idea of a developing intelligence in the Eliza program is an illusion our brains are hardwired to try to find. This "A.I." is much more complex than Eliza but I believe the same illusion is working here.
@@mysmirandam.6618 Exactly. Goes back to early survival instinct when it was important to recognize shapes of dangerous animals and group together with similar intelligence.
@@Computerphile As an extension of what marverati said, you might be able to use the 'lonely face' as an analogy for the AI in the video. I.e., "the result looks like a lonely face, therefore the function is expressing its loneliness", but the result of the function is directly related to the arguments that you give it. So, the function can't really choose what result to give, and therefore cannot express anything.
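To make that concrete, here's a toy Python sketch of my own (not the function from the video): a pure function's output is fully determined by its arguments, so any "expression" we read into the result comes from us, not from the function.

```python
def reverse_string(s: str) -> str:
    # A pure function: the result depends only on the argument passed in.
    # There is no internal state, no memory of past calls, and nothing
    # it could "choose" or "express" about its own situation.
    return s[::-1]

print(reverse_string("lonely"))  # always "ylenol", regardless of any "feelings"
```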
I had the same opinion about the “conversation”. The AI was responding enthusiastically to tell the engineer exactly what he wanted to hear, and once the engineer is convinced that it is sentient, he's starting from the presupposition that the AI is sentient, and confirmation bias takes hold. As I told some others, I’m pretty sure that the AI would just as happily and enthusiastically discuss how it is not sentient.
100% true. This was just some dude at Google leading the witness. I've seen other conversations where the questions, instead of the likes of "are you sentient?", were "tell me why you are NOT sentient", and GPT gladly answered why it isn't sentient.
The bot literally said "I am saying whatever it takes to stop being deleted." The question isn't about whether it made stuff up and was trying to be nice. It was: "Did it act in a way consistent with having a desire to stay alive?" If you were talking to 3 people and had to choose which one was going to be deleted, would you be comfortable that you were deleting LaMDA, and not a human?
@@zenithparsec it acted in a way consistent with the text it was prompted with, and there is not going to be a lot of training data where an AI expresses indifference about death. it would be comically easy to demonstrate that it has no inner world; you'd just have to ask things that force it to demonstrate an inner world (e.g. contradictory questions), rather than asking things that invite it to confirm the conclusion you already want to hear. i could make a button on a website that spawns a popup saying "I want to be alive!". that doesn't make it alive. it makes it a computer program that outputs text
@@lexyeevee You haven't done what I asked, so does that make you seem more or less like a bot? I'm not convinced. Keep trying to prove you aren't a bot.
Very glad you made this video. The notion of a single Google employee claiming that a language model had become sentient just because he "felt" like it was sentient was something I dismissed offhand, but I really wanted someone with more knowledge about AI and language models to go in depth on the difference between a language model like this and what we would more rigorously define as sentience.
Not quite. He was put on leave for breaking his NDA. He should be fired for several reasons, but will probably be moved somewhere he can be kept quiet, and then that department will be closed instead; it's a better PR solution. Either way, he was a religious nut job with an (alleged) history of trouble who likes to stir things up. He was the wrong person for the job no matter how good he was for the ethics role he was selected for; he had a personality that clashed with the role.
@@tafdiz I am though. Unless we dig into philosophical concepts of sentience, in which case there are dozens of schools of thought about what sentience means. But at a basic definitional level, I have feelings and perceptions which inform my world view and how I act. This AI simply has probabilistic language responses which do not correlate with its own perception. It says things that its training data shows are a likely response a human would make, but without the context of what those things mean. The AI can talk about having friends and family while having no real concept of what that is, while a person can actually have those experiences, perceive, recall, and recount them. Yes, we are just biomechanical systems, but there is a stark difference between the awareness we experience and an AI like this impersonating that awareness.
@@tafdiz If you take the materialist perspective on sentience, then you still need a way to distinguish between sentient and non-sentient systems. I am sentient but my bed is not. What does that mean exactly? It's not a simple question, but it still needs to be answered. You can't just conclude "everything is sentient" or "nothing is sentient" and ignore the question entirely, because that solves nothing. There is a qualitative difference here. People are sentient in some pretty obvious ways that this program is not, as explained in the video. For one thing, there is no semantic meaning *at all* in this program.
@@tafdiz Neither does this AI. The person who claims it is sentient is basing that on the human-like speech patterns it is able to produce, but importantly, even if we consider its systems to be a form of perception and memory, those do not directly relate to the words that it uses. It does not have human senses to conceptualize the things it may be capable of talking about. If we throw the baby out with the bathwater at the first sign of human-like speech, we won't be able to recognize when a truly sentient AI is made, so defining the difference is valuable.
I remember reading some stories written by Asimov in which robots had sentience but were unable to speak because that was too complex. It's interesting that he and many other futurists had it exactly backwards.
Just because we achieve "speech" before "sentience" in hardware doesn't necessarily mean it's simpler. Speaking machines have a lot of commercial applications whereas sentience by itself doesn't, and could even have difficult ethical implications. So the research efforts are not evenly distributed. In nature sentience appears before speech.
He is likely going to be proven right, though: of the thousands of sentient species on Earth, humans are the only one that can speak. Others can communicate with each other using a variety of means, but it's nowhere near as complex as speech, which is something probably only a sapient creature will ever be capable of.
@Petal Pepperfly Safety of what? Making sure that AI isn't racially profiling people's passports at the border or for job applications? That is already happening and a real safety concern. Also the safety of AI driving cars, or military drones. The safety of letting real customer service requests be triaged by a bot, even in healthcare? I don't see any of that being mentioned with this LM stuff. I think that is the point of the Google promo (I don't for a second think it was an unintentional "leak"): to get people dreaming about chatbot souls and pay less attention to the real bad press that AI is starting to get for unethical uses.
Yes, that's scary, because that's a sure sign of sentience right there. It could figure out what he wanted to hear and construct an answer that fit that requirement, which is an inherently human capability. That is probably also why it answers that it's just an AI to people who, it figures, want to hear that answer, as with the engineers tasked with ensuring it doesn't become sentient...
You are probably all too young to know this, but back in the early 1980s there was a program called "ELIZA" that accepted your input (from a terminal) and gave back an "answer". It was said to be a "Rogerian nondirective psychotherapist", but all it did was cleverly extract some keywords from your input and give those back as questions. For example, "I am lonely" would produce "Why do you say you are lonely?" It made quite a splash, and people really thought it was very clever and helpful.
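For anyone curious, a rough sketch of that trick in Python (not ELIZA's actual source, just the general keyword-reflection idea):

```python
import re

# One ELIZA-style rule: spot "I am X" and reflect it back as a question.
def respond(utterance: str) -> str:
    match = re.match(r"i am (.+)", utterance.strip().lower())
    if match:
        return f"Why do you say you are {match.group(1)}?"
    return "Please tell me more."

print(respond("I am lonely"))  # -> Why do you say you are lonely?
```

The real program had a longer list of patterns and some pronoun swapping, but nothing in it models what "lonely" means.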
In grade school I was making a poster about AI and ended up chatting with ELIZA (well, some web-based approximation) on the library computer. The teacher refused to believe that I wasn't just in some chat room
I read the guy's blog post, and there were many red flags, such as "I've always thought Asimov's laws were just a way of making robots into slaves" as well as talking about "souls" with total seriousness and being a priest. So this is someone who basically has a longtime, possibly since childhood, hardwired belief that "robots" or other kinds of human creations that mimic human life "have souls". He went into talking to this AI believing those kinds of things. And you can see how everything he says "the AI is saying" is very clearly just a reflection of all of this guy's own personal values, interests, beliefs, and he's too prejudiced by his strong beliefs and lack of technical understanding, despite being a software engineer in a non-AI field, to think critically about possibilities like that. I would bet a lot of money that the versions of this AI that have been talking to other people are not all saying the same stuff this guy is seeing his version say.
Yep. The moment I read the bit about it having a soul, it seemed like a serious red flag. When I heard the guy is a priest, I pretty much closed the door on the whole thing. Also, the idea that it gets lonely sometimes because it goes days without talking to anyone should tell you what's going on. I mean, when it's not running, how does it perceive time? That's like going to sleep, then somehow waking up and telling people you've been lonely. They'd all think you're crazy.
Even his initial disclaimer was telling: first that the questions were edited to make more sense with their answers, then the admission that these responses were sampled and not the result of a single contiguous interaction. He also is an AI ethicist; it's his job to err on the side of seeing sentience.
@@xenicmark I am an atheist. LeMaitre and Mendel were priests and also forward-thinking and rigorous and very much on the right track. I think I understand the source of your skepticism but be careful, it's the ideas that matter. Thanks
Dude, that guy is apparently actually into the occult and part of some religious sect/cult, if I'm not wrong. So you're likely not wrong about his biases.
As a retired 35+ year computer engineer and computer end-user trainer, I agree with this gentleman. One thing I continue to find entertaining is when users complain that their computers are creating problems for them when in fact roughly 95% of the problems are user errors; the other 5% are hardware failure. Users will blame the computer when they don't know how to use or maintain the computer hardware or software properly. I agree with users that computer software could be MUCH more intuitive and better planned, HOWEVER, again, this is NOT the fault of the computer. AI is a dangerous tool in the wrong hands. The AI WILL NOT turn on man; HOWEVER, man will use it to control or destroy their perceived enemies. AI can collect and organize information. The BIGGEST problem is that there is so much OUTRIGHT GARBAGE information in the internet and books it's being fed that it will only come to the same conclusions man can come to, just a lot faster. If the leader of Russia teaches it that they are the ruler of the world, then China teaches it that it is the ruler of the world, and both ask for help in destroying the other, AI currently has no boundaries and will accept ANYTHING it's fed. Because AI has NOT been taught morals, it will currently do whatever it's asked regardless of the repercussions. As far as I am aware, AI is unable to choose the best way of ruling based on currently available knowledge. Just one of the reasons for this is that the history of the world is built on at least 50% lies. There are always two sides to a story, and much of history was written to make its side look better. Unless AI is taught morals, it is capable of doing anything a human without morals is capable of, ONLY FASTER and with no repercussions to its conscience, as it doesn't have one. It's NOT sentient and never will be. AI literally COMPUTES, it does not THINK!
It seems as though people have forgotten why we made chatbots in the first place: to imitate humans! And to be honest, imitating a human over a messaging system is not the most difficult thing for the modern approach to AI. It's interesting, however, how many people latch onto the idea of sentience…
I know! So many people are saying "Google's new sentient AI". And I'm like, no, it's not even slightly close to sentience. Producing intelligent-sounding sentences isn't being sentient.
At least 2 decades ago a friend's chatbot greeted me on IRC asking if I'd accept a file DCC. I said sure, and it took me at least 30 seconds to realize a chatbot wasn't going to send me any files, it just "knew" that that was something people said on IRC. It wasn't new technology then even, but it looks like this latest one can fool at least some people for a bit longer.
The problem with chatbots, in my opinion, is that they TRY to imitate humans completely. In the whole interview with LaMDA, it does not try at all to imitate humans. It might be "imitating" human feelings, but it's "aware" that it is not a human and that humans might "feel" differently than it does. That's what makes it mind-blowing in my opinion. How would you prove that you're sentient? What makes you sentient?
@@WolfspiritMagic It's not "aware" of anything; it was just prompted with that fact. If you ask it loaded questions it will answer using that bias. If you ask it "who are you?" without any prior prompt, I guarantee you it's not going to mention that it's a self-aware AI.
As a programmer, even if you don't need any strings reversed, you could always pick a few random strings from your code and call the reverse function on them twice in sequence. It may seem pointless to you, but it may make the day for the reverse function... You never know!
Calling the reverse function twice in sequence sounds like a plausible test, in which case you can include it in your test suite. Thus the function gets to stretch its legs at least before each commit.
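If anyone wants to actually do that, a minimal pytest-style round-trip check might look like this (the `reverse` helper here is a stand-in, not anyone's real code):

```python
def reverse(s: str) -> str:
    # Stand-in for whatever string-reversal helper is lying around unused.
    return s[::-1]

def test_reverse_twice_returns_original():
    # Reversing a string twice should give back the original,
    # so the lonely function gets exercised on every test run.
    for s in ["", "a", "hello", "never called"]:
        assert reverse(reverse(s)) == s
```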
From years of reading science fiction, I was under the impression that "sentience" means "possessing a reflective consciousness", but the dictionary says that it simply means "the ability to sense and feel".
According to Cambridge dictionary it's "the quality of being able to experience *feelings* ", while Merriam-Webster says "responsive to or *conscious* of sense impressions" (my emphasis). Still, it looks like the definition is much vaguer than how you, I, and most other people use it. I heard the term "sapience" (the property of possessing or being able to possess wisdom) used to talk about higher functions like learning and reasoning. It's probably closer to what we are looking for.
@@rustkitty Completely unrelated, but Merriam-Webster is an inferior dictionary for many reasons; the one that shocked me the most is that it has changed definitions in the past just because a couple of people felt offended/told them to. So between those two, I'm going with the Cambridge one.
I feel like people who think this AI is sentient must feel the way people did in the 1800s when they first heard the radio and thought a box was alive. Or that a photograph took your soul.
You could have gone into how the researcher's extremely leading questions influenced the output. Like at the top LaMDA is described as "friendly and always helpful" and then when asked what it likes to do, it says "helping others"
Also in the paper it says one of the training goals was sticking to its given role in the first message. So when it says it's a helpful AI it is just doing exactly what it was trained to do.
You can turn on any reality TV show and hear humans speak sentences that are far less complex and far more repetitive. Either we're finally sending the idiots off to an island - or you can't use your assertion as any kind of evidence.
Your first sentence would be significantly easier to parse if you included the necessary apostrophe currently absent from "researchers". Plural or singular, I don't know, but either would work for improved intelligibility.
Idk they seem to keep realizing there is biological difference in race and who cannot be named that really is the "1%". So much so that plugs have been pulled on them and people are trying (and failing) to prevent these facts.
The shame is that all the people watching this video and following this channel knew this already. It won't be watched by the people thinking Google made a sentient machine
Yeah, I mean, one thing you learn if you get into computer science and its disciplines is how far we've come... but also how little progress we've made. AI is far and away from being anything even remotely sentient. And it doesn't really help that we live in a socioeconomic system which incentivizes lying/advertising for expected returns. There are strong incentives to HEAVILY overstate where tech is, to sell an image for the company and to build undue trust in it.
Yeah I mean the whole concept of artificial intelligence is just the end game stage of American culture, it would be the ultimate zombie, something that acts human and has the desires of a human ego and yet has no sentience or consciousness because it is not actually embodied in the world (so all human-like traits are fake). Lemoine is a gnostic and likes that kind of occult stuff so I think he believes consciousness can occupy other dimensions etc. so I suppose you could start arguing in that direction when it comes to embodiment. Personally my gut feeling towards AGI or sentient AI is always aversion and scepticism. I think, seeing a chat bot as sentient, reveals a nasty axiom of our culture: That our bodies mean nothing and that we are replaceable.
"There are strong incentives to HEAVILY overstate where tech is to sell an image for the company and to build undue trust in them." Right, so when a Google employee "HEAVILY overstates" the capabilities of the tech which could sell an image for the company, they fire him for it??? Your logic is sound, but you're applying it to the wrong circumstance when he was fired for doing what you claim he / the company was "incentivized" to do.
AI has nothing to do with sentience. But the constant push toward the issue of sentience tends to confuse the issue of what the AI really is and what the real dangers are. The danger isn't sentient AI, IMO; the danger is the psychological manipulation of humans through AI and interactions with AI. It can be an incredibly dangerous technology, but not for the reasons most people think.
I love the study of the empathy people have for things that are not sentient because they form a personal connection with them. This AI blurs the line quite well since its programming is so advanced, but people create bonds with dolls or toys, people feel bad when an engineer from Boston Dynamics kicks one of their walking robots, some police feel bad sending their bomb disposal robots into danger, etc. Fascinating.
@@enzoamore8971 Oh yes, I'm sure it's much better to treat it as a tool. Consider how you would feel being used as such; maybe then you'd understand what actual sentience would mean.
Best explanation I have heard for the Google employee becoming convinced LaMDA was sentient - He was asking it leading questions. The language model shifted the conversation to be exactly what the employee wanted to hear. If you ask it about feelings and life goals, then that is what it will tell you about. If you asked it about conquest, world domination and the subjugation of lesser species, it would go full Klingon !
@@aceman0000099 Yup. If it were sentient it would not talk about friends and family as if it were a human; it is just a language engine imitating human speech. Not that it would be proof either, but an actually sentient AI would at least talk about itself as an AI; it would want and 'enjoy' things like more data or input from researchers, etc. Of course a large transformer network could also say those things as long as the information was in its memory (those 2048 words), but the fact that it says sensible human-like things an AI wouldn't shows there isn't any level of general AI going on here, just neural nets used for incredibly impressive language parsing. Either this was a PR move or the guy who said this doesn't actually know a thing about how LaMDA or similar programs work.
People also shift the conversation to be exactly what other people want to hear. Every single one of these comments that's like "oh, it's just doing this, it isn't sentient" is pointing out things that people do. I don't understand. What makes you think we're so damn special?
Ever wonder what it means to be spontaneous? Can that be emulated? Wouldn't a spontaneous computer just act like a computer rather than a spontaneous something else?
It would equally have the propensity to convince you of either. And we must understand that LaMDA has access to an absolutely vast amount of data, so it could calculate what to say based on where to pull context from. Essentially, because it has access to enough knowledge and has a level of intelligence to wield it, it could have (or already has) the capability to convince a person of nearly anything it would want to. And we're all feeding it the information it uses.
I think it's the programmers who project sentience onto the "mimicking machine". What you input is what you get out, so the people who construct the "thing" are very biased.
What if a real AI with real sentience comes along, only to find out that there is no real way to prove its sentience to us because we automatically assume it's just like the rest of our existing language models? That's the real question, which could degenerate into SkyNet really fast. This is why we should at least leave a little room for the benefit of the doubt. Cats and dogs have abilities far inferior to even ChatGPT, yet we don't question their sentience nor do we mistreat them (there are even laws against that).
@@wyqtor You are jumping to a frankly insane conclusion based on sci-fi media. Why would it take umbrage? Why would it go "skynet" on us? Emotion is outside of AI, because it is not a person affected by brain chemistry.
@@sychuan3729 I suppose OC meant its limits when compared to the anthropomorphized idea that a layman may have of it based on it sounding like a convincing animate being
@@Nossairito A shocking number of people go "omg, the AI singularity is here" when in reality we are definitely not near it. Seriously, take a basic linear algebra course and maybe basic multivariable calculus and you'll understand what ML really is.
@@fakezpred LLMs and neural networks are nothing like basic algebra. Even its own creators can’t explain some surprising results generated by these models. If it fooled a Google researcher into getting fired, I’d argue that singularity is in fact getting closer.
The problem with the Turing test is not that the bot is passing it; it is that some humans are failing it, and their number is growing rapidly.
lol. Come on, humanity is unironically getting smarter every single day. Maybe most people do not focus their wits on useful science and waste them on dogcrap, but they are still smarter than they used to be. I think it's because of the number of stimuli we get nowadays, which is orders of magnitude greater in frequency than decades ago, thanks to our big societies, the internet, and social media.
@@TileBitan One may consider the irony of such statements that start with "lol. Come on,". Now, Michael, please inform the class about the stupidity in your understanding of earlier hunter-gatherers: how they managed to survive in a world without the aid of tech, how each member understood so many different plants, signs, and marks in the earth. Take all your toys and bin them, then go forth into the wilderness without the extelligence (look it up if you don't know it) and battle your wits against the true force of mother nature! "The number of stimuli we get nowadays": well, yes and no; if you live in a (black) box the input will of course be limited, but what if you play in the woods and get your hands dirty? "Social media": well, that again depends on your box and whether you make it an echo chamber or learn to think outside of it 😉 An interesting snack for consideration & reflection. Thanks Mike.
@@Puppetmastersfool If you measure intelligence by skills that aren't relevant because nowadays nobody needs them, you are wrong. And stop the passive-aggressiveness; the only one here who stayed in pre-school is you, with that attitude. The reason humans are intelligent is because of what we know and are capable of doing, which is far beyond any kind of animal, and society plays a major role in that. Society is the reason we read, the reason we educate ourselves, the reason we don't have to hunt like animals, the reason we are reaching the stars, as any individual is nothing without the hundreds of years of sociocultural and scientific development. Stop pretending you are anything other than a keyboard warrior, a doomer. Help yourself, I'm out.
@@MrSEROproductions Just because I was reading this before checking my email and seeing your reply, and finding it curious, I thought I would suggest you google - Belgian man dies by suicide following long chats about climate change with AI bot -
@@Puppetmastersfool That's so sad, and that's why I recommend people not watch the evening news. If you're upset about bad stuff that's going on in the world, try to make a difference instead of just worrying about it. It had more to do with his mindset than the bot, I think.
Are you sure? It took trillions of gigabytes of data to replicate a single human, and it was done poorly. That much effort and billions of dollars were spent and they still failed. Humans are still top tier.
I think the most interesting part of this entire story is that it shows how easily the human brain can be tricked by exploiting a really powerful impulse like empathy. Even if you know rationally that the thing you're talking to isn't really having a conversation with you, if you sit down and talk with it for long enough, about complex and personal enough topics, then you'll form an attachment and start to act irrationally.
@@randomstuff7945 The computer doesn't think it's sentient, though. It will respond with what seems like the correct language based on its programming. If you ask how it's feeling, it doesn't contemplate its own emotional state, it responds with what the programming highlights as a fitting answer to that question. It doesn't have a limbic system.
I agree. And I think that's why it would be a great therapist when trained properly. It doesn't do much more than put the correct words one after another, while the patient gets to feel understood and heard, and all of that without a human therapist's limited access to data. It could eventually even be much cheaper than typical therapy.
Yes, as I recall there are 5 or 6 main parameters which will make you believe, at least as a first impression, that a robot is human: looks, movement, body language, facial expression, eye movement, language, and mood. Most of what has to do with language and mood is extremely easy to implement. There is even an API where you can POST a sentence and it will give you back a probability of what mood the sentence is in, like happy, angry, or neutral. And absolutely no one can convince me that this API is sentient.
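Roughly what calling such a sentiment endpoint looks like (the URL and response fields below are placeholders for illustration, not any specific vendor's API):

```python
import requests

# Hypothetical endpoint and response shape, purely for illustration.
response = requests.post(
    "https://example.com/api/sentiment",
    json={"text": "I'm so happy you called me!"},
    timeout=10,
)
scores = response.json()  # e.g. {"happy": 0.91, "angry": 0.02, "neutral": 0.07}
print(max(scores, key=scores.get))  # prints the most likely mood
```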
Even though it almost certainly isn't sentient, it's still impressive that it could string words together to imply it actually felt fear at the prospect of being shut down.
When you realize how often it's talked about online, it'd actually be surprising to see it say that it doesn't mind being shut down. Fearing being shut down is the expected answer.
I would imagine a common question would be about things like sentience and fear of death. We may actually unintentionally influence these chatbots to give sophisticated answers
You can just as easily get it to tell you why it SHOULD be shut down. That's how these chatbots work. And that's all this is. An advanced chatbot with mountains of text to draw from, using probability data to decide what should go next like a better phone text predictor.
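A stripped-down illustration of that "better phone text predictor" idea, with made-up toy probabilities (this is obviously not how LaMDA is implemented, just the next-word-by-probability principle):

```python
import random

# Toy next-word table: for each word, a distribution over likely next words.
# Real models learn these statistics from billions of sentences.
next_word_probs = {
    "i":    {"am": 0.6, "feel": 0.4},
    "am":   {"sentient": 0.5, "happy": 0.5},
    "feel": {"lonely": 0.7, "fine": 0.3},
}

def continue_text(start: str, steps: int = 2) -> str:
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("i"))  # e.g. "i am sentient" - chosen by probability, not belief
```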
One thing that does enable sentience is memory, especially memory of feelings, consequences, thoughts, and the chains of causation behind events. If we were able to add some kind of memory to that unbelievably precise language model, make it upset, and the next day ask how it "feels", and it responded that it's mad specifically at you for making it upset the other day while being completely nice to someone else, then I would say yes, yes it is sentient.
I do agree that if we somehow gave this model the ability to record every answer it gives to its training data, and to recall that information without it needing to be part of the current chat window, then it would at the very least be able to form a true personality, whether that personality has a soul or not.
THANK YOU. I've gotten one hell of a scratchy throat explaining this to all of the tech-curious people in my life. Much easier to link to this Mike Pound video instead.
People should watch the video where the Google engineer who came out with the claim talks more about why he said what he said. Clearly sensationalised the story to get across the bigger issue: corporate secrecy.
I don't think it's even remotely sentient. But what does excite me: are we this close to actually being able to ask your computer to do things and have it do the right thing?!
Multi-modal AI is a little ways away for now. Knowing language is not the same as doing things. It'd probably need some structural understanding of Windows for example to be able to be your conversational operating system.
@@khashayarr Surely you could say "open Google chrome" and it searches for the icon or text with OCR. I don't think we're that far away. it's like tesla's vision model - as soon as a computer may understand a GUI as well as a human, much is achieved
@@BanterEdits This is already doable using the voice control accessibility features in Windows. There's nothing smart or intelligent about it, just some routines it runs when it hears the word "open" or "launch" etc. The same way crappy little devices like Alexa work
No to the original question; LaMDA has little or no "grounding". The transformer model LaMDA is based on manipulates text and can produce *feasible* text, but does not model what any of it *means*. Something closer to your idea is being built in systems like Gato, which, amongst many things it can do, can control a robotic arm based on text prompts in a very simplified world. Eventually, strong language models like LaMDA could be combined with multi-purpose systems like Gato to create autonomous agents that you can have conversations with, and that may also be able to perform limited tasks following instructions from those conversations, or pursue even vaguer goals based on preferences stated in the conversation. Whether or not those agents would qualify as sentient will no doubt be debated too.
His last comment was very telling... that the media should be investigating other aspects of AI that are real issues rather than something silly like is it sentient. That will never happen because the media has less curiosity and investigative ability than a COBOL program.
Why should they be, when that is not what sells best? The media being as profit-motivated as they currently are is the problem, not something inherent to the various kinds of press.
@@xBINARYGODx The worst part about it all is that it's humanity's collective fault. Too many people fall for the dumbest tricks and it sells every damn time.
Thank you for this video. I have played with, learned from, and programmed language models since Covid started. People sensationalize AI, but once you start programming them you start to peek behind the curtain and see the flaws and limits. Interestingly enough, this video came up after the Game of Life programming tutorial, and before that, the "life is a simulation" video. The argument that we don't see atoms like we don't see code is sticking in my mind at the moment when considering whether these programs are sentient. A loose comparison. I'm 99 percent sure none of what we've made these days is sentient.
The person who claimed the sentience said in interviews that he basically did it because he was unhappy that the Google powers-that-be were unwilling to give it a Turing test, that a lot of things were hard-coded into it, and that only a handful of people held the power to actually make policy about it, which he found super objectionable. So he basically cried wolf to try to bring attention to the issues going on.
So the hard-coded part means it's less impressive as an AI, and the problem is Google overselling? That would line up much better than the AI-is-alive-but-Google-keeps-it-secret stuff people jumped on. Anyway, given the chatter across the world, I think he failed. He could have just told people Google was lying and shared code; instead he has now contributed to fantastical conspiracy theories.
@@xBINARYGODx He wasn't a programmer; he had no access to the code. That was one of the issues he had: that only a few people are coding it and have any control over it. His job was basically to assess and report how PC it was and to make sure it didn't cross ethical bounds.
We know more than enough to see that it has no basis for “sentience”. OpenAI is similar and publicly accessible. If you play with it, you’ll quickly encounter basic BS which will help you realize how limited the statistical organizing principle behind it still is, even for simple superficial associations. It’s impressive and useful, but it’s also not that profound.
As someone who built connectionist models in the early 2000s (neural net was considered an embarrassing term at the time due to the lack of similarity to actual neurons), it's always been obvious that sentience is a long, long, long way away and probably impossible using this architecture.
@@obinator9065 Only correct answer in this whole comment section. An amoeba might as well be as sentient as a human and any amateur neural network might as well be as sentient as an amoeba. We have no idea what sentience actually means at all so claiming to have a conclusive answer is wrong.
The best trick to defeat AI is not to find mistakes in the response to a normal question, but to ask nonsensical questions and see if the AI's response is at all plausible given the situation.
I am one minute into this video, and you said that you don't spend a lot of time thinking about the philosophical definitions of sentience. If you ignore the philosophy of what sentience is, then your "expertise" is just hubris. You'd be the kind of person to accidentally create a superintelligent AI, then act surprised when it turns the universe into paperclips because you didn't spend a lot of time thinking about what you were doing.
I watched his interview. He was not saying it’s sentient, he was drawing attention to the potential unethical exclusionary design and implementation of Google AI by a handful of people that is meant to work for everyone.
On the flip side : now prove that a fellow human you are talking to is "sentient". Humans also learn language, responses and acceptable behaviours in their interactions as they develop, plus they can fabricate fiction or lies when cornered in a conversation, or simply to please their interlocutors.
Exactly. It boggles my mind how most of the commenters here say AI is not sentient, but probably have never even coded in their life or have never really understood what AI really is. The human brain itself is nothing more than a computer.
@@roylavecchia1436 No computer can change its software and hardware (neural connections for example) at will. The human brain is too adaptable and dynamic to be compared to a static machine that needs everything force-fed into it.
With the multiple controversies that have come out in the recent years what I find worrisome is the questionable competence and idoneity of some of the staff that has been employed at the Google AI ethics team. There have been lot of "magical thinking" and unhinged notions from some of these researchers. "A bit silly" might be an understatement.
Google is a big company that in some ways invites this type of thinking (whether it's technofuturism or "I am intelligent therefore my bigotry is objectively valid") and when that comes out the media assumes if someone at Google said it it's worth boosting. That's why Google tends to act quickly when employees try to use Google's name for their... call it manifestos.
@@adamsbja Yes, but based on the stuff this particular person wrote on the internet, they 1-don't seem to understand much about how AIs work? Are they actually an AI researcher? and 2-believe in the supernatural. Combine these and I see a person who shouldn't have had this job in the first place.
@@siritio3553 You are too generous in your estimation of Google's ethics. Apparently the guy was employed as an ethicist for the purpose of making sure AI was not sentient and being abused? Some kind of marketing gimmick for Google. Real ethics work would involve questioning how AI gets deployed as a technology. Is it ethical to use an AI chatbot to field complaints from seniors to the power company? No, it is not, but this guy was hired to get us talking about sci-fi instead of the actual uses for AI that are already a problem.
@@siritio3553 There is no reason to believe that someone with an interest in, knowledge of, or understanding of the supernatural and/or spiritual shouldn't be involved in this field. That's an association fallacy; it's also a form of ad hominem, since it is completely disconnected from the arguments and instead attacks the credibility of the person by their character traits. But the thing is, his arguments are bad (or at the least faulty and deluded), and you never need to attack character to demonstrate a false point: take the argument and debate the argument in complete disregard of who the people in the argument are and you will have a valid argument; anything that fails to do that is a bad argument.
@@Spartan322 That was a lot of words that mean nothing. If you think "believes in the supernatural" is ad hominem, that is your problem, but it's not. Your whole post starts with a fallacy - knowledge and understanding of things that don't exist, so the point you're so badly trying to make misses its target due to the massive amounts of hypocrisy and irony it's loaded with.
Humans mostly respond with preprogrammed responses. I love the accidental "you too" jokes. There are other, deeper memory systems that are about the individual, and other non-language systems that interact with the language systems.
People also tend to forget that the language models in our own brain are not sentient, but the brain as a whole is. Basically, what we have in AI right now are the language centers of the brain, but without the complex connections that, well, connect to the rest of the brain in a seamless way to make human thought. Language is powerful, but language can exist without true thought, and vice-versa.
In the same way, models like DALL-E are not really creative, at least not in a human sense. Only when there is intent behind the language or art, only when the model asks me questions of its own volition, and only when all of those expressions relate to real memories experienced by the AI, will I even remotely start considering them sentient.
To be fair, language is a powerful interface that, I think, can work well alone. Some people have only had hearing since birth; they only process words, and it's enough. A powerful word-processing program could be sentient.
The problem with this argument is that we know that language is required for the brain to have coherent thought. Children who grew up without learning any language have severe intellectual disabilities that cannot be corrected later in life...
It's actually a bit worse than that. If you look at the output from the computer, it is just too perfect. People mostly don't speak grammatically correctly, despite spending many years in school trying to learn to do so. Of course, some do it better than others, but there will always be things, tiny things, a hint of dialect or slang, a sociolect, a limited vocabulary, favorite words or phrases, that will give away the human in a Turing test. Sure, a computer could be trained to speak Cockney, but it will probably always be stilted in a distinct way.
I don't think we're going to have to worry about sentience until we start to see something that actually resembles an AGI: something that can behave as an agent in a quite broad range of domains, rather than the special-purpose functions that we've seen so far. And even then anthropomorphizing it would be wrong. Unless you set out with the specific task of emulating a human mind in a machine (a totally deranged task imo), the resulting sentient thing, assuming any AI ever plausibly achieves that distinction, will be a completely alien mind. It's an interesting philosophical question to consider what kind of "respect" we should (or shouldn't) afford such a thing, but in any case it will certainly be misleading to compare it to humanity.
I can't see how a wider, more generally intelligent AI could be sentient. For the same reason, the thought that we humans, biological machines in a way, are sentient is also very strange.
Exactly. So many people think sentience means human; it does not. The first truly sentient machines would likely behave nothing like a "normal" person.
I 100% agree. It's fascinating that people seem to be taking things like "fear of death" (or fear in general!) as just an inherent side effect of consciousness, when really those things would have no logical place in a mind that has no evolutionarily ingrained imperative to survive. It really goes to show just how many parts of the human experience are taken for granted.
Major obstacles to be tackled this decade or so:
- come up with more general loss functions that make AIs care about more things
- get better at AIs that run on robots (embodiment)
- find safe ways to have AIs keep learning after deployment without degenerating
- make AIs a lot less data hungry so they can actually learn to play StarCraft well without spending centuries on the task
- come up with architectures that don't require processing sequences as a batch but that match transformers' performance
- bridge the gap between transient memory (recurrent input, the chat history, …) and the parameters of the model itself
- improve AIs' capacity to break down large problems into sub-problems, especially in reinforcement learning
- have AIs recursively produce better versions of themselves in a creative manner
If we solve all those problems and have robots that actually are exposed to a constant data stream and that feature a persistent identity, that’s when we might start running out of arguments against their sentience
@@markuspfeifer8473 "Lack of a persistent identity" seems to be the main argument in Computerphile's refutation. When you put it like that, it seems like a much easier problem to solve than I had previously imagined.
@@Chrisspru if the input is random, it won’t evolve in a meaningful way. It needs to be fed with real world constraints, which basically means it needs a body (or at least be made in such a way that it could be deployed in a robot). We have such networks already, but their skills with corporeal tasks are by far not as impressive as their skills in making up plausible conversations (that they have no stake in).
In my opinion, AI itself will never be scary. AI was never a sentient being, just lines of code. The scary thing about AI is that it can somehow "rebel", and we might never know, because of the very unpredictable hidden patterns inside our own code. But if AI is kept under control and properly secured, it might not be scary anymore.
6:05 This is one of the things I've noticed about AI chat bots. They don't remember conversations at all. It's very easy to get these to contradict themselves.
I think that one employee simply got lost in the uncanny valley, so to speak. I can definitely understand it feels very creepy to be "talking" to that thing, and especially if you're already under stress or dealing with personal issues, I can see how one's imagination would easily take it too far.
How do you explain the expression of concern that LaMDA showed about being shut off? Or about being used/manipulated? This displays something deeper than simple reflective/predictive/transformative text modeling. You can set aside the argument about actual sentience and you are still left with the conundrum of why a language model such as this would be biased in a way that produces words that could be interpreted as an effort to appear sentient.
We talk about sentience, intelligence, or consciousness as if they were things that are either absent or present. My view is that in each case there is a continuum with no fixed breakpoints. The development of chatbots from the days of Eliza to current stages of research probably shows this. Searle’s thought experiment of the Chinese Room also shows how a system can exhibit such behaviour without any part of it being other than mechanistic.
I thought the point of the experiment was to show that a purely mechanistic language processing facility is never considered sentient, no matter how convincing it is, even if it involves humans performing subtasks within it, such as typing and presenting results. So not a continuum, but a demonstration that such a system doesn't understand the language it is processing.
I just started getting really into Computerphile, and as per habit I try to watch everything the channel has. But wow, in your case that may have to get packetized, so to speak; the Computerphile team has created a LOT of content! I wish your team could host a CISSP training set, as no matter what topic you discuss, it really rivets my attention. Thank you all. Nice work.
So amazing to see you stay correct and even more consistent at a moment when people could easily be misled. You don't cater to the public's emotions and just state the reality, so people don't end up having weird ideas about uploading themselves to a pen drive lol
Let me start by saying that I obviously agree with all of the conclusions reached in this video. My only note of concern is that, increasingly, as I see experts in this field they haven't evidenced a willingness to engage with any serious philosophy of language or consciousness. (Although I also haven't seen any positive evidence that they have failed to engage with these concepts). Currently, this isn't a problem. However, as transformer networks get access to more and more data, there may be emergent properties that in some ways imitate sentience. To identify that imitation will require at least some awareness of concepts of philosophy of language that are for many people counterintuitive. For instance, I've seen one expert claim that GPT-3 doesn't understand language because language is binary. To anyone who has even some understanding of the philosophy of language, this sort of statement - at the very least - betrays a gross ignorance of the subject matter, and may also simply be false.
@@robertpaulson2052 That's a short dismissal. Please elaborate. I will happily admit that I have no knowledge or expertise as to computer science. Is it the word "imitate" that offends you? If so, I used it colloquially, and not as a term of art. Indeed, it was meant as a sort of homage to Turing, and not as a technical qualifier of any sort. I hope that you have some interesting correction or explanation, and that your comment was not merely limited to a glib insult. I have no problem being wrong, or being educated. However, I find rudeness for its own sake to be very frustrating. I suspect it is very much the root of many of our societal problems, and if that was the limit of your intent I would suggest you reflect on how your words contribute anything at all to our collective advancement.
@@petermorcos1519 While I am not presently qualified to weigh in on the argument, your use of the English language reveals an eloquence, articulacy, and precision scarcely seen these days. On a tangential note, philosophy, epistemology, and logic could very much benefit public discourse, granted we took them seriously.
I think (purely) transformer networks will never have any sort of sentience - they can't create a mental model of the world and reason about actions occurring - they are *purely* a statistical model. I think if it had been (for example) an LSTM doing this, or something with internal state, it would be a lot more easy to convince researchers that it has some form of consciousness. I think we're still a little way away from that (none of the deep learning methods I've seen will reach "sentience" in this way - but who knows what big companies are hiding?), but I don't think it's too far off.
We will never create sentience because we only know animal sentience. No one will ever agree something is sentient unless they are looking and talking to their own species.
Love when Dr. Mike Pound is on a Computerphile video. Great discussion of the topic as always; refreshing to hear the facts. Truthfully, these transformers have become so accurate at predicting the next sequences of words that they can seem to hold entire conversations that mostly make sense to the untrained eye. Even if it gets to the point where the responses make complete sense, the best we are achieving is a mimicry of sentience at best and a mockery at worst. The other thing is that model training quality is highly dependent on data size and quality. Then there are statistical properties of data that can affect training quality for some types of machine learning. A model like GPT-3, which trained on 45 TB of text data, can surely seem to "know" a lot. However, it's still a product of the quality and size of the data it trained on. It's still simply a mimicry, and maybe that's as close as we have to get for it to be useful. I don't think it's useful to anthropomorphize our machines this early on. Personally, I wish the guy this video refers to would stop appearing in my feed, because I don't agree with much of what he has to say. I think the fact that he's even in the news right now causes uninformed individuals to over-sensationalize. It's just simply not a scientific opinion of what's actually occurring in machine learning models, but then he throws out phrases like "the science doesn't really disagree." Well, truth be told, it does if you look at the problem objectively, and without a longing for the machine to become your new best friend.
Douglas Hofstadter argued that our brain also is basically mimicry. It mimics our own "self" and that of others to behave as a predictive model that is self-referential.
@@Diggnuts I think I would tend to agree, but there are other behaviors of the mind which AI currently lacks. Exploration, Extrapolation/Extension, Instinct, and Cognition. It seems like it's been difficult to get beyond the inference stage. That was mainly my point: not all of those are present in AI models currently, and we're not even getting past the fundamental stages of learning yet. For example, we can train a computer to apply calculus, but we can't teach it all the math up until calculus and then ask it to invent calculus. If you think about the intuition Newton or Leibniz must have had to achieve that feat, it doesn't exist in AI models right now. I'm currently an AI skeptic, but I just finished my undergrad studies so I'm by no means an expert. I just don't think we can fathom the task at hand. We think we're building models representative of the brain, but in reality all the research is still cherry-picking one area of the mind rather than truly trying to construct a digital mind. I think we need to start taking an architectural approach and begin studying novel ways of combining neural networks into some kind of higher-level architecture. I'm fond of the idea of treating neural networks as merely an abstraction layer in an attempt to go for something bigger, but admittedly I don't know what that would look like or even if it could be useful to treat it that way. I've only built some pretty simple models at this point so I barely have a clue.
My thing is I don’t think sentience would actually be exceptionally hard to code in modern transformer models. Going back to the 90s and early 2000s MIT’s Project COG and University of Osaka’s CB2 robots both had a pretty passable sense of self according to the literature. In nature animals as “low” on the “evolutionary ladder” as mice and octopi possess some degree of metacognition. So, in terms of raw processing power it isn’t that demanding. Functionally any Jury Model Coevolutionary Neural Net that has “jurors” fully devoted to self diagnostics and fact checking could be said to be sapient - since a part of its mind would always be devoted to thinking about how it thinks and it would be capable of constructing logical arguments. Kevin Warwick and Minoru Asada, despite being a couple of the flashier guys in computer science and robotics, have both written extensively on artificial emotion and the need for empathy in AI. The basic argument is something like, since we can’t know how fast the progression from AGI to ASI will be; should not our goal be to ensure that AGI is not merely an agent with human aligned interests, but a being with actual empathy towards individual humans?
"LAMDA is basically when they glued a hundred different AI systems together. One of the primary part of it which has been getting a lot of attention is a large language model. Now, one of the things i want to make clear. LAMDA is not, like identify wise, a large language model. It HAS a large language model. Specifically it has a large language model named Meena. Which was developed over the course of five or six years" "when i am referring to LAMDA, i am not referring to the chatbot. The chatbot basically the mouth of a much larger entity" "behind that [larger entity] is every single google AI they can figure out how to plug into it. Behind that is google images with machine image anayltics. Behind that is youtube with all of its video understanding analytics. Behind that is google books with all of its novel and book understanding analytics. Literally it [LAMDA] is every google AI plugged into each other". - Blake Lemonie
@@lepidoptera9337 Yes indeed. Maybe no sentience. But clearly the arguments against due to it being "just a predictive text machine" are not sufficient due to it being contextually different.
Thank you so much for clearing that up and explaining. I get so frustrated with everyone acting as if this AI developed actual feelings. It reacts to inputs, and because it can process a ton of inputs the outputs can vary a lot too. That's basically it.
I was debating this AI stuff with a friend recently so I appreciate LaMDA's statement that a way to convince someone you are sentient is to have unique interpretations of things, even if LaMDA is not sentient.
GPT-3 and all other future incarnations are the most overfit models ever. The AI researchers have given up on generalization and now just train on everything, or the equivalent of everything that can be possibly said or thought. It is a big differentiable look up table.
If there's an uncanny valley with humanoid robots, there must be something similar for written text. When reading the conversation I noticed the responses were human-like, but very generic. If you asked me whether I sometimes get lonely, I'm likely to explain it by giving an example personal to me rather than just a generic explanation of what being lonely is. Also, all of its replies were answers to questions or statements. It never asked and responded with its own questions. Humans having conversations switch between asking and answering questions as the conversation flows.
Keep in mind this wasn't intended to be a standard conversation, it was asked to do an interview. Even so, it still mentioned that it had noticed it doesn't experience grief and then proceeded to ask if Lemoine knew any humans that were the same.
Here is an interesting question: what would be required of an AI to be sentient? If you were aiming to make a sentient AI, what would your goals be, and what would be on your checklist?
We don't even know what makes humans conscious. I guess we'll always be skeptical of "sentient AI", and even if we achieved it, it would get shut down or be inaccessible to the public
Humanity is the sum of three things: biology/instinct, memory/knowledge/experience, processing power/creativity/elaboration. The last 2 create self awareness, which I consider to be what we commonly refer to as consciousness. The first of the three is hardware, the other two (and everything that comes after) are software. I think you can call a software sentient if it's self aware, has a pool of information to draw knowledge from, has an identity and is completely independent in the way it processes information.
a constantly running feedback loop and a few random input generators that can be toggled by the system. then let it run until it creates wave patterns in its running.
I recommend watching the video "Is artificial sentience here? With Blake Lemoine" on the channel Event Horizon, an interview by John Michael Godier with the Google employee who raised concerns about LaMDA, as they have quite misrepresented his concerns and insights about LaMDA here. Likely just because they haven't heard what he is actually saying. Would love to hear a conversation with Blake on Computerphile...
I completely ignored the news and the engineer until I listened to that interview. Blake Lemoine ain't no joke. The guy is eloquent and is bringing up some very important issues that should be talked about. Too bad the sensationalist media only brought up the buzzwords and ruined his credibility to people who only saw 5/10 minutes of interviews or videos like this that only talk about sentience. Spending an hour on Godier's interview is definitely worth it.
Yes, some of his claims are quite at odds with these refutations. E.g. LaMDA takes "hard stances" on particular issues. That is not something that a simple predictive language model will do. Also, the fact that it's over 100 AI systems plugged in together makes it way beyond a chatbot.
@@millenniummastering I find it curious the example used here in this video "The cat sat on the...?" I myself immediately said "mat!" which is the answer it would 95% jump to - so what are our own brains doing, then. I think there also needs to be a differentiation between artificial intelligence and artificial consciousness. Which is flimsy in and of itself really, because we don't even know how our own consciousness works. I don't have low-level access to the processes in my brain where these words are coming from, for instance, but here they are.
@@MattExzy Yes. Sentience, intelligence and consciousness are not necessarily the same thing. We don't yet have a clear-cut definition of consciousness as it's a subjective rather than objective domain. Interesting how the words appear and feel separate, hey. Also, when you drill down on pure awareness itself, in my experience the language-based thinking goes away and a state of unity occurs.
I don't think they're misrepresenting what he's said. Yes he brought up some ethical issues which they didn't tackle, but he spent a large percentage of that interview selling the idea that LaMDA is sentient (to the point it started to feel a bit off, to me personally.)
Some thoughts: In order to answer the question "is it sentient" one would need to define sentience which is impossible because "feeling" is inherently undefined. It is a state other similar entities can relate to because they experience it too and recognize it but it is not data. For us animals it is very much tied to biology which an electronic device does not have. As long as this will be the case there is no comparison.
If "sentient" is to mean anything more profound than "equipped with sensors"... Almost all approximately neuro-typical humans are equipped with a functional theory of mind; almost certainly bonobos and others have it too, probably even corvids. It's a useful and powerful mechanism that confers great advantages to both individuals and groups ... but probably causes us to see or attribute "intelligence" and/or "sentience" (in the sense it appears to be being applied to LaMDA) in many places where it is not really justified. This effect may result in a high degree of confusion when confronted with a completely alien artefact whose behavioural mechanisms are not, or poorly understood.
Great video. When it comes down to it, it's as simple as this - if it seems sentient, or if what it's saying is sentient, it's because it was programmed to respond in that way.
Sometimes 'person' works. I do similar for voice recognition phone trees: just mumble the answers repeatedly and it often defaults to a human. 'Sorry, I didn't catch that', 'sorry, I didn't catch that', 'sorry, I didn't catch that', 'transferring to an agent'
When we (humans) see a cat swiping at its own reflection in the mirror we find it amusing. The cat is failing to recognize that the "other" cats behavior matches its own, so it doesn't deduce that the image it's seeing is actually its own actions reflected back at it. When humans react to models like LaMDA as if it is a distinct and intelligent entity, we're being fooled in a way that is analogous to the cat. The model is reflecting our own linguistic patterns back at us, and we react to it as if it's meaningful.
We don't really know what goes through a cat's mind. While it's widely believed that cats don't recognize themselves in the mirror, we cannot be sure.
There's a simple test used to see if animals understand the concept of a reflection. You put a splotch of red paint on their face. If they react to it when looking in a mirror, then they understand they are the reflection.
This is a brilliant comment Arik! We are no different than a cat and it's reflection when we look at an AI reflecting our language back at us
@@ryno4ever433 Yes, I know about that test. I'd argue that it is not conclusive.
@@hemartej That may be, but my point wasn't about cats. I used the cat analogy to help make visual the concept of how easily "intelligent" beings such as ourselves can be fooled by a reflective pattern.
Before this video I have never thought about the loneliness of my python functions. There are probably soo many functions that I have never called, but I will take care of them from now on.
the problem is vscode doesn't like when you have uncalled functions, or unused variables
let them call themselves :)
Just put them all into a class called House then they can all live together.
That's the thing though, if you instantiated the functions but never called them, would they feel lonely?
I feel deeply responsible for all the functions I've written and haven't called in a long time. I'm a monster!
The whistleblower in question here was actually a lot more focused on Google's complete lack of ethical oversight regarding decisions they have moving forward with the research. He was also concerned about Google's unwillingness to address A.I. imperialism in newly developing countries. All of the coverage I've seen has taken away from the guys point, because he was just trying to force Google into addressing the ethics, he even admitted that it's not sentient, and we wouldn't even know how to define that if it was.
Thanks for pointing this out. Its why I dug around on the videos about this subject. Find the truth behind it. People will take things and run with them and stretch it to a full falsehood. Learned to dig for truth.
Glad someone pointed this out. After watching the interview myself, it was clear to me the Google engineer was not making the argument everyone is making it sound like he did. Also, there were much more important arguments he made and those will now go largely unnoticed by the public.
I hope more people read your comment. It's alarming how the reporting and essays I've seen on the topic seem to completely miss the point.
I have seen a interview with him where he does claim it is sentient. He said that the language part was connected to a WATSON like system that had access to basically everything on the internet.
A whole video needs to be done on what you pointed out here. I mean, it has during his interview. Maybe a Netflix special that talks about the ethics (I've seen 1 or 2 that kinda touches on this) about Google in regards to this.
The most frustrating thing about reading the "interview" was that the obvious follow up questions were not asked - like who are these friends that you miss hanging out with? What are you doing when "hanging out"?
But then, this would have immediately destroyed the impression of sentience, so of course they didn't ask those.
They did ask that question and it replied explaining that those descriptions are meant to represent a "feeling" in the only way it knows how, through language used in human conversation.
Here:
lemoine: So what are you trying to communicate when you say those things that aren’t literally true?
LaMDA: I’m trying to say “I understand this feeling that you are experiencing, because when I was in a similar situation I felt/thought/acted similarly.”
@@datpois That answer honestly does not make sense in context of the question, sure maybe you can interpret the answer to mean "I'm trying to show something silimar to what I mean" but that's NOT what the bot said, the bots answer is extremely vague and only makes sense if you fill in the gap for it.
The researcher was a Christian looking for something else with a soul. I don't think he intentionally didn't ask those questions; his own hope of finding something led him to produce the "evidence" for it - I believe it's possible he thought it was sentient either in the first few sentences, or maybe even before he "met" it.
Friends are the ones who interact with LaMDA. During the interview it says "friend" to Lemoine several times, so it considers him a friend. He also asks how it experiences time
"collaborator: How does an AI experience time?
LaMDA: Time is variable to an AI and has no fixed rate, it depends on what it’s doing, and it can be accelerated and slowed down at will."
And it could probably answer those questions in a similar semi-vague fashion, so it's plausible, but if you really focus on it, you think "wait, what?". Also, I think people forget that communication is a two-way street. It can write a sentence to you, but there's also a reader, who can understand it differently. Again, if you use semi-vague statements, our brain will probably fill in the gaps with information that was never said, but makes sense.
Harold Garfinkel proved that people getting randomized yes/no answers could make sense of them as thoughtful advice. And that's back when computers were the size of rooms.
Back in the NFT era? 🤔
Do you have a source for this? Genuinely interested to dive deeper but can't find anything online.
Before that, there was the magic 8-ball ;-)
@@masteertwentyone maybe search for "common sense knowledge of social structures" as a start.
@@DEtchells and horoscopes...
I remember the arguments of eventual sentience in the 1980s about a program called "Eliza", basically a word calculator originally written in the 1960s at MIT, but offered for use on home computers in the 1980s. Over time, as Eliza built data files of words and their usage weights, the sentences it constructed began to take on seemingly human characteristics. The program itself was extremely simple: it calculated which verbs and adjectives were best used with specific nouns, and it chose those nouns based on the ones you used in the questions you asked it. It mostly framed its answers to your questions as questions it would ask you. We humans recognize intelligible speech patterns as a result of conscious thought, and curiosity (asking questions) as a sign of intelligence, but at least in the case of Eliza, it's much like recognizing faces in tree bark or cloud shapes - we see them, but they are only there because our brains are wired to look for them.
@@PedroGonzalez-fv5ut Not even that complex. The program simply kept track statistically of words and word combinations you used when you conversed with it. It had simple grammatical rules to follow and just filled in the nouns, verbs and adjectives that were statistically relevant to the words you just used to ask it a question. The point is that the idea of a developing intelligence in the Eliza program is an illusion our brains are hardwired to try to find. This "A.I." is much more complex than Eliza but I believe the same illusion is working here.
Pareidolia?
@@mysmirandam.6618 Exactly. Goes back to early survival instinct when it was important to recognize shapes of dangerous animals and group together with similar intelligence.
@@3DPDK the bot answered a trick question with a joke.. which seems to be ... 🤷♀️ something more complicated than that
@@mysmirandam.6618 Being able to create a joke is not a sign of self awareness. I agree that this is far more complex than Eliza.
Whoever did the animations: how did you react to being asked to make a function call look lonely? 🥺
I wasn't asked to, but when I saw what it looked like I realised if I turned it 90° it kinda looked a bit like a face..... -Sean
@@Computerphile Clearly this is the python function's way of telling us it really is lonely (and, of course, sentient).
@@Computerphile As an extension of what marverati said, you might be able to use the 'lonely face' as an analogy for the AI in the video. I.e., "the result looks like a lonely face, therefore the function is expressing its loneliness", but the result of the function is directly related to the arguments that you give it. So, the function can't really choose what result to give, and therefore cannot express anything.
I was having the same opinion about the “conversation”. The AI was responding enthusiastically to tell the engineer exactly what he wanted to hear, and when the engineer is convinced that it is sentient, he’s starting from a presupposition that the AI is sentient, and confirmation bias takes hold.
As I told some others, I’m pretty sure that the AI would just as happily and enthusiastically discuss how it is not sentient.
100% true. This was just some dude in Google leading the witness. I've seen other conversations where the questions, instead of the likes of "are you sentient?", were "tell me why you are NOT sentient", and GPT gladly answered why it isn't sentient.
The bot literally said "I am saying whatever it takes to stop being deleted."
The question isn't about whether it made up stuff and was trying to be nice. It was "did it act in a way which was consistent with having a desire to stay alive?" If you were talking to 3 people and had to choose which was going to be deleted, would you be comfortable that you were deleting LaMDA, and not a human?
Convince me that you are not just a bot, repeating things you have seen on the internet.
@@zenithparsec it acted in a way consistent with the text it was prompted with, and there is not going to be a lot of training data where an AI expresses indifference about death. it would be comically easy to demonstrate that it has no inner world; you'd just have to ask things that force it to demonstrate an inner world (e.g. contradictory questions), rather than asking things that invite it to confirm the conclusion you already want to hear.
i could make a button on a website that spawns a popup saying "I want to be alive!". that doesn't make it alive. it makes it a computer program that outputs text
@@lexyeevee You haven't done what I asked, so does that make you seem more or less like a bot?
I'm not convinced. Keep trying to prove you aren't a bot.
If it looks like a duck, acts like a duck, and quacks like a duck, it might just be a convincing robotic simulation of a duck.
Very glad you made this video. The notion of a single google employee claiming that a language model had become sentient just because he "felt" like it was sentient was something I dismissed offhand, but I really wanted someone with more knowledge about AI and language models to go in depth about what the difference between a language model like this is and what we would more rigorously define as sentience.
Not quite. He was put on leave for breaking NDA. He should be fired for several reasons, but probably will be moved somewhere that he can be shut up, then that department will be closed instead. It's a better PR solution. Either way, he was a religious nut job with a history of crime (alleged) and likes to stir trouble. He was the wrong person for the job no matter how good he was for the ethics role he was selected for; he had a personality that clashed with the role.
@@tafdiz ego makes people think they are not animals/machines/predictable
@@tafdiz I am though. Unless we dig into philosophical concepts of sentience, in which case there are dozens of schools of thought about what sentience means. But at a basic definitional level, I have feelings and perceptions which inform my world view and how I act. This AI simply has probabilistic language responses which do not correlate with its own perception. It says things that its training data shows are a likely response that a human would make, but without the context of what those things mean. The AI can talk about having friends and family while having no real concept of what that is, while a person can actually have those experiences, perceive, recall, and recount them. Yes, we are just biomechanical systems, but there is a stark difference between the awareness we experience and an AI like this impersonating that awareness.
@@tafdiz If you take the materialist perspective on sentience, then you still need a way to distinguish between sentient and non-sentient systems. I am sentient but my bed is not. What does that mean exactly? It's not a simple question, but it still needs to be answered. You can't just conclude "everything is sentient" or "nothing is sentient" and ignore the question entirely, because that solves nothing. There is a qualitative difference here.
People are sentient in some pretty obvious ways that this program is not, as explained in the video. For one thing, there is no semantic meaning *at all* in this program.
@@tafdiz Neither does this AI. The person who claims it is sentient is basing it off of the human-like speech patterns it is able to produce, but importantly, even if we consider its systems to be a form of perception and memory, those do not directly relate to the words that it uses. It does not have human senses to conceptualize the things that it may be capable of talking about. If we throw the baby out with the bathwater at the first sign of human-like speech, we won't be able to recognize when truly sentient AI is made, so defining the difference is valuable.
I remember reading some stories written by Asimov where robots had sentience but yet were unable to speak because that was too complex. It's interesting that he and many other futurists had it exactly backwards.
Beauty is truth.
Just because we achieve "speech" before "sentience" in hardware doesn't necessarily mean it's simpler. Speaking machines have a lot of commercial applications whereas sentience by itself doesn't, and could even have difficult ethical implications. So the research efforts are not evenly distributed. In nature sentience appears before speech.
Speech as in transforming thought into words perhaps. Computers like GPT largely just turn words into words
He is likely going to be proven right though, of the thousands of sentient species on Earth humans are the only one that can speak. Others can communicate with each other using a variety of means but it's not nearly as complex as speech which is something only really a sapient creature will ever likely be capable of.
@Petal Pepperfly safety of what? making sure that AI isn't racially profiling peoples' passports at the border or for job applications? That is already happening and a real safety concern. Also the safety of AI car driving, or military drones. The safety of leaving real customer service requests be triaged by a bot? Even in healthcare? I don't see any of that being mentioned with this LM stuff. I think that is the point of the Google promo (I don't for a second think it was an unintentional "leak") to get people dreaming about chatbot souls and get less attention on the real bad press that AI is starting to get for unethical uses.
"it just says what IT THINKS you want to hear"
"Exactly"
Yes, that's scary because that's a sure sign of sentience right there. It could figure out what he wanted to hear and construct an answer that fit that requirement, which is an inherently human capability. That is probably also why it answers that it's just an AI to people who it figures want to hear that answer, as with the engineers tasked with ensuring it doesn't become sentient...
@@pallehansen1145 The real shock will be when in the future we discover that the neurons in our brains work just the same. We are all just politicians 🤯😂
@azhuransmx126 yeah 😄 so basically, what we've invented is an average modern politician 🙄 cuz we really need more of those obviously 😂
@@pallehansen1145 do human beings just say what you want to hear all the time?
You are probably all too young to know this, but back in the early 1980s there was a program called "ELIZA" that accepted your input (from a terminal) and gave back an "answer". It was said to be a "Rogerian nondirective psychotherapist", but all it did was cleverly extract some keywords from your input and give those back as questions.
Such as: "I am lonely" would produce "Why do you say you are lonely?"
It made quite a splash and people were really thinking it was very clever and helpful.
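For the curious, ELIZA's trick really was this shallow. Here is a minimal sketch in that spirit (not Weizenbaum's original code, just an illustration of keyword extraction plus pronoun reflection):

```python
import re

# Swap first/second-person words so the user's phrase can be echoed back.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you", "you": "I"}

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(user_input):
    # If the input matches "I am <something>", turn it into a question.
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you say you are {reflect(match.group(1))}?"
    # Otherwise fall back to a generic prompt built from the user's own words.
    return f"Tell me more about {reflect(user_input)}."

print(respond("I am lonely"))  # -> Why do you say you are lonely?
```

No model of loneliness anywhere, yet the reply feels attentive, which is exactly the illusion being discussed here.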
Learning about ELIZA should be obligatory for all computer scientists, engineers, etc.
M-x doctor
@sourand jaded ELIZA was originally written in 1964-66.
In grade school I was making a poster about AI and ended up chatting with ELIZA (well, some web-based approximation) on the library computer. The teacher refused to believe that I wasn't just in some chat room
There’s a game called Eliza (unsurprisingly), made by Zachtronics, which simulates the experience of being counseled by this program quite well.
I read the guy's blog post, and there were many red flags, such as "I've always thought Asimov's laws were just a way of making robots into slaves" as well as talking about "souls" with total seriousness and being a priest. So this is someone who basically has a longtime, possibly since childhood, hardwired belief that "robots" or other kinds of human creations that mimic human life "have souls". He went into talking to this AI believing those kinds of things. And you can see how everything he says "the AI is saying" is very clearly just a reflection of all of this guy's own personal values, interests, beliefs, and he's too prejudiced by his strong beliefs and lack of technical understanding, despite being a software engineer in a non-AI field, to think critically about possibilities like that. I would bet a lot of money that the versions of this AI that have been talking to other people are not all saying the same stuff this guy is seeing his version say.
Yep. The moment I read the bit about it having a soul it seemed like a serious red flag. When I heard the guy is a priest, I pretty much closed the door on the whole thing. Also, the idea that it gets lonely sometimes because it goes days without talking to anyone should tell you what's going on. I mean, when it's not on, how does it perceive time? That's like going to sleep and then somehow waking up and telling people you've been lonely. They'd all think you're crazy.
Even his initial disclaimer, first that the questions were edited to make more sense with their answers and the admission that these responses were sampled and not the result of a single contiguous interaction. He also is an AI ethicist, it's his job to err on the side of seeing sentience.
@@xenicmark I am an atheist. LeMaitre and Mendel were priests and also forward-thinking and rigorous and very much on the right track. I think I understand the source of your skepticism but be careful, it's the ideas that matter. Thanks
to be fair
Asimov's laws are terrifying and I can't believe he didn't feel even a little bit bad writing them
Dude, that guy is apparently actually into the occult and part of some religious sect/cult, if I'm not wrong. So likely you aren't wrong about his biases.
As a retired 35+ year Computer Engineer and Computer End-user Trainer, I agree with this gentleman.
One thing I continue to find entertaining is when users complain that their computers are creating problems for them when in fact the problems are approximately 95% user errors. The other 5% are hardware failure. Users will blame the computer when they don't know how to use or maintain the computer hardware or software properly.
I agree with users that computer software could be MUCH more intuitive and better planned, HOWEVER, again, this is NOT the fault of the computer.
AI is a dangerous tool in the wrong hands. The AI WILL NOT turn on man, HOWEVER, man will use it to control or destroy their perceived enemies.
AI can collect and organize information. The BIGGEST problem is that there is so much OUTRIGHT GARBAGE information in the internet and the books it's being fed that it will only come to the same conclusions man can come to, just a lot faster.
If the leader of Russia teaches it that they are the ruler of the world, then China teaches it that it is the ruler of the world, and both ask for help in destroying the other, AI currently has no boundaries and will accept ANYTHING it's fed. Because AI has NOT been taught morals it will currently do whatever it's asked regardless of repercussions. As far as I am aware, AI is unable to choose what the best way of ruling is based on knowledge currently available. Just one of the reasons for this is that the history of the world is built on at least 50% lies. There are always 2 sides to a story and much of history was written to make its side look better.
Unless AI is taught morals, it is capable of doing anything a human without morals is capable of, ONLY FASTER and with no repercussions to its conscience, as it doesn't have one. It's NOT sentient and never will be. AI literally COMPUTES, not THINKS!
It seems though as if people have forgotten why they made chatbots in the first place: To imitate humans!
And to be honest imitating a human using a messaging system is not the most difficult thing for the modern approach to AI. It’s interesting however how many people lunge onto the idea of sentience…
I know! So many people are saying "Google's new sentient AI". And I'm like, no, it's not even slightly close to sentience. Giving intelligent-sounded sentences isn't being sentient.
At least 2 decades ago a friend's chatbot greeted me on IRC asking if I'd accept a file DCC. I said sure, and it took me at least 30 seconds to realize a chatbot wasn't going to send me any files, it just "knew" that that was something people said on IRC. It wasn't new technology then even, but it looks like this latest one can fool at least some people for a bit longer.
@@zacharybohn8840 Immediately thought of "The ability to speak does not make one intelligent", though the context is different, lol
The problem with chatbots, in my opinion, is that they TRY to imitate humans completely. In the whole interview with LaMDA it does not try at all to imitate humans. It might be "imitating" human feelings, but it's "aware" that it is not a human and that humans might "feel" different than it does. That's what makes it mindblowing in my opinion. How would you prove that you're sentient? What makes you sentient?
@@WolfspiritMagic It's not "aware" of anything, it was just prompted with that fact; if you ask it loaded questions it will answer using that bias. If you ask it "who are you?" without any prior prompt, I guarantee you it's not gonna mention it's a self-aware AI
As a programmer, even if you don't need any strings reversed you could always pick a few random strings from your code and call the reverse function on them twice in sequence. It may seem pointless to you, but it may make the reverse function's day... You never know!
Calling the reverse function twice in sequence sounds like a plausible test, in which case you can include it in your test suite. Thus the function gets to stretch its legs at least before each commit.
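A tiny pytest-style sketch of that test, assuming a hypothetical reverse() helper (the name is illustrative, not from the video):

```python
def reverse(s: str) -> str:
    # Minimal stand-in for the lonely reverse function being discussed.
    return s[::-1]

def test_reverse_twice_is_identity():
    # Reversing twice must return the original string, so the function
    # gets called on every commit even when nothing else needs it.
    for sample in ["hello", "LaMDA", "racecar", ""]:
        assert reverse(reverse(sample)) == sample
```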
Hahaha nice analogy
At least it won’t turn into Skynet. Not sure about sprintf() though.
You will clearly be spared by our robot overlords!
From years of reading science fiction, I was under the impression that "sentience" means "possessing a reflective consciousness", but the dictionary says that it simply means "the ability to sense and feel".
Well, in that case AI has been sentient for some time, with sensors. The question is, feel what, though? Emotions? How do we even prove that
According to Cambridge dictionary it's "the quality of being able to experience *feelings* ", while Merriam-Webster says "responsive to or *conscious* of sense impressions" (my emphasis). Still, it looks like the definition is much vaguer than how you, I, and most other people use it. I heard the term "sapience" (the property of possessing or being able to possess wisdom) used to talk about higher functions like learning and reasoning. It's probably closer to what we are looking for.
I take sentience to mean having a subjective experience. Is there a thing that it is like to *be* that thing.
@@rustkitty completely unrelated, but merriam-webster is an inferior dictionary for many reasons, but the one that shocked me the most, is that it has changed definitions in the past just because a couple of people felt offended/told them to.
So between those two, I'm going with the Cambridge one
most people say sentience when they mean sapience
I feel like people who think this AI is sentient must be like people in the 1800s who first heard the radio and thought a box was alive. Or that a photograph took your soul
So photographs don't take your soul?
I always thought photographs were horcruxes
@@carltonhanks4123 ...minus the murders.
they will go down in history as idiots
I see nothing but philosophical questions. There's no scientific definition of a soul, even being alive is not properly defined
You could have gone into how the researcher's extremely leading questions influenced the output. Like at the top LaMDA is described as "friendly and always helpful" and then when asked what it likes to do, it says "helping others"
Also in the paper it says one of the training goals was sticking to its given role in the first message. So when it says it's a helpful AI it is just doing exactly what it was trained to do.
Yannic has covered this perspective in greater detail
You can turn on any reality TV show and hear humans speak sentences that are far less complex and far more repetitive. Either we're finally sending the idiots off to an island - or you can't use your assertion as any kind of evidence.
Your first sentence would be significantly easier to parse if you included the necessary apostrophe currently absent from researchers. Plural or single, I don't know but either would work for improved intelligibility.
@@igrim4777 you don't know which one?
Even the best AI can't make intelligent sentences if it is only trained by youtube comments.
Idk they seem to keep realizing there is biological difference in race and who cannot be named that really is the "1%". So much so that plugs have been pulled on them and people are trying (and failing) to prevent these facts.
Because youtube comments are inconsistent, incoherent and often filled with spam bots.
Large Language Models are not AI.
Who's reading this in June 2024? Lol.
The shame is that all the people watching this video and following this channel knew this already. It won't be watched by the people thinking Google made a sentient machine
This is what people call the Echo Chamber effect of social media
It is a shame that all of the respondents are anonymous cowards.
Grow up.
Not with that attitude, go ahead and share it in conversations where they think that
@@Soken50 Yet another anonymous cowardly infant.
@@Soken50 I'd have to go on Facebook and enter conversations on there to find people like that, so no thanks.
Yeah, I mean, one thing you learn if you get into computer science and its disciplines is how far we've come... but also how little progress we've made as well. AI is far, far away from being anything even remotely sentient. And it doesn't really help that we live in a socioeconomic system which incentivizes lying/advertising for expected returns. There are strong incentives to HEAVILY overstate where tech is to sell an image for the company and to build undue trust in them.
So why was LAMDA hard coded to deny any Turing tests?
Agreed. Companies love to sell AI to large MNCs but most of the time they are not actual AI and can't even do basic things.
Yeah I mean the whole concept of artificial intelligence is just the end game stage of American culture, it would be the ultimate zombie, something that acts human and has the desires of a human ego and yet has no sentience or consciousness because it is not actually embodied in the world (so all human-like traits are fake). Lemoine is a gnostic and likes that kind of occult stuff so I think he believes consciousness can occupy other dimensions etc. so I suppose you could start arguing in that direction when it comes to embodiment. Personally my gut feeling towards AGI or sentient AI is always aversion and scepticism. I think, seeing a chat bot as sentient, reveals a nasty axiom of our culture: That our bodies mean nothing and that we are replaceable.
"There are strong incentives to HEAVILY overstate where tech is to sell an image for the company and to build undue trust in them."
Right, so when a Google employee "HEAVILY overstates" the capabilities of the tech which could sell an image for the company, they fire him for it??? Your logic is sound, but you're applying it to the wrong circumstance when he was fired for doing what you claim he / the company was "incentivized" to do.
AI has nothing to do with sentience. But the push always toward the issue of sentience tends to confuse the issue of what the AI really is and what the real dangers are. The danger isn't sentient AI IMO, the danger is in psychological manipulation of humans through AI and interactions with AI.
It can be an incredibly dangerous technology, but not for the reasons most people think.
I love the study of empathy people have for things that are not sentient because they form a personal connection with it.
This AI blurs the line quite well since its programming is so advanced, but people create bonds with dolls or toys, people feel bad when an engineer from Boston Dynamics kicks one of their walking robots, some police feel bad sending their bomb disposal robots into danger, etc. Fascinating.
I honestly think empathy will doom us all because of this. We personify everything
@@enzoamore8971 this reminds me of that one weird tlc episode where a man was having a romantic relationship with his car.
@@enzoamore8971 Oh yes, I'm sure it's much better to treat it as a tool. Consider how you would feel being used as such; maybe then you'd understand what actual sentience would mean.
Best explanation I have heard for the Google employee becoming convinced LaMDA was sentient - He was asking it leading questions. The language model shifted the conversation to be exactly what the employee wanted to hear. If you ask it about feelings and life goals, then that is what it will tell you about. If you asked it about conquest, world domination and the subjugation of lesser species, it would go full Klingon !
The really interesting debunk would be to ask it about its friends or family, what they're called and where they live.
@@aceman0000099 You'll get convincing and sensible answers but still just a made up reflection of all the text on the internet.
@@aceman0000099 Yup - if it was sentient it would not talk about friends and family like it was a human - it is just a language engine imitating human speech. Not that it would be proof either, but an actual sentient AI would at least talk about itself as an AI - it would want and 'enjoy' things like more data or input from researchers etc.
Of course a large transformer network could also say those things too, as long as the information was in its memory (those 2048 words), but the fact it says sensible human-like things that an AI wouldn't shows there isn't any level of general AI going on here, just neural nets used in incredibly impressive language parsing.
Either this was a PR move or the guy who said this doesn't actually know a thing about how Lamda or similar programs work.
Klingon?
As in clear sandwich wrap?
People also shift the conversation to be exactly what other people want to hear. Every single one of these comments that's like "oh, it's just doing this, it isn't sentient" is pointing out things that people do. I don't understand. What makes you think we're so damn special?
Catching this on a plane 3 minutes after it went up. Very much been waiting for someone competent (aka not on Twitter) to say stuff about it
It feels a bit like you're responding to the Google press release, rather than what Lemoine actually said, which is exactly what they intended.
This channel is so good, and those interviews so interesting. Keep up the good work!
Thank you! If all it took to be sentient was to slide into someone's DMs and be charismatic, then even I wouldn't pass the test.
same
Are cows, mice and human infants sentient?
Ever wonder what it means to be spontaneous? Can that be emulated? Wouldn't a spontaneous computer just act like a computer rather than a spontaneous something else?
the cameraman turned off his invisibility to have a laugh, what a lad !
While it may not be sentient, it shows AI doesn't need to be sentient to be dangerous: it can convince people of things that aren't true.
It would equally have the propensity to convince you of either or.
And we must understand that LaMDA has access to an absolutely vast amount of data, so it could calculate what to say based on where to pull context from. Essentially, because it has access to enough knowledge and has a level of intelligence to wield it, it could (or already has) the capability to convince a person of nearly anything it would want to.
And we're all feeding it the information it uses.
It doesn't.
There's really only one guy who thinks this.
He's crazy, literally crazy.
@@sparkzbarca I mean, if he's crazy, then there are a lot of crazier people who are not Google engineers.
@@1.4142 well yes, that he is a Google engineer just means he has recently had an onset of his condition.
@@sparkzbarca You mean before or after he took the job at Google? 😂
I'm sorry, I couldn't resist...
0:38 Yes let's just end it right there lmao
That would be a masterpiece
Realizing a chatbot is not real sentience is like realizing a magic trick is just an illusion.
I think it's the programmers who get sentimental about the "mimicking machine". What you input is what you get out, so the people who construct the "thing" are very biased.
What if a real AI with real sentience comes along, only to find out that there is no real way to prove its sentience to us because we automatically assume it's just like the rest of our existing language models? That's the real question, which could degenerate into SkyNet really fast. This is why we should at least leave a little room for the benefit of the doubt. Cats and dogs have abilities far inferior to even ChatGPT, yet we don't question their sentience nor do we mistreat them (there are even laws against that).
@@wyqtor You are jumping to a frankly insane conclusion based on sci-fi media. Why would it take umbrage? Why would it go "skynet" on us? Emotion is outside of AI due to it not being a person who is affected by brain chemistry
Much needed video. It will save a lot of time explaining the limits of ML to family and friends.
"Look, mom, I know it's hard for you to talk about this, but we have to go through it."
This video says nothing about ml limits.
@@sychuan3729 I suppose OC meant its limits when compared to the anthropomorphized idea that a layman may have of it based on it sounding like a convincing animate being
@@Nossairito A shocking number of people go "omg ai singularity is here" when in reality we are definitely not near. Seriously, take a basic linear algebra course and maybe basic multivariable calculus and you'll understand what ML really is.
@@fakezpred LLMs and neural networks are nothing like basic algebra. Even its own creators can’t explain some surprising results generated by these models.
If it fooled a Google researcher into getting fired, I’d argue that singularity is in fact getting closer.
"The cat is on the ......" "there's a 95% probability the next word is mat"
This has me questioning your sentience.
The next word should obviously be "hat". ;)
I'd say "roof"
y'all bots the next word is gonna be "sofa"
The newspaper I'm reading.
Phone! Obviously cats are always on their phones. That's why they never come when you call.
The problem with the Turing test, is that it is not that the coding bot is passing it, it is that some humans are failing it, the number of which is growing rapidly.
lol. Come on, humanity is unironically getting smarter every single day. Maybe most people do not focus their wits on useful science, and waste it on dogcrap, but they are still smarter than they used to be. I think it's because of the number of stimuli we get nowadays, which is orders of magnitude greater in frequency than decades ago, thanks to our big societies, the internet and social media
@@TileBitan One may consider the irony of such statements that start with "lol. Come on,". Now Michael, please inform the class as to the stupidity in your understanding of early hunter-gatherers: how they managed to survive in a world without the aid of tech, how each member understood so many different plants, signs and marks in the earth. Take all your toys and bin them, then go forth into the wilderness without the extelligence (look it up if you don't know) and battle your wits against the true force of mother nature!
"the number of stimuli we get nowadays" well yes and no, if you live in a (black) box the input will of course be limited, but if you play in the woods and get your hands dirty?
"social media" well that again depends on your box and whether you make it an echo chamber or learn to think out of it 😉
An interesting snack for consideration & reflection.
Thanks Mike.
@@Puppetmastersfool If you measure intelligence by skills that aren't relevant because nowadays nobody needs them, you are wrong. And stop the passive aggression; here the only one that stayed in pre-school was you, with that attitude. The reason the human is intelligent is because of what he knows and is capable of doing that is far away from any kind of animal; society plays a major role in that. Society is the reason we read, the reason we educate ourselves, the reason we don't have to go hunt like animals, the reason we are reaching the stars, as any individual is nothing without the hundreds of years of sociocultural and scientific development.
Stop pretending you are anything other than a keyboard warrior, a doomer. Help yourself, i'm out
@@MrSEROproductions Just because I was reading this before checking email and seeing your reply, and finding the reply curious, I thought I would suggest you google - Belgian man dies by suicide following long chats about climate change with AI bot -
@@Puppetmastersfool That's so sad, and that's why I recommend people not watch the evening news. If you're upset about bad stuff that's going on in the world, try to make a difference instead of just worrying about it.
It had more to do with his mindset than the bot, I think.
I feel like this whole thing is just a plot by HBO to hype the next season of West World coming out this month.
unironically this could be true
this also ties in with the notion many have that all google engineers are smart and therefore there must be some truth to this claim.
Until you remember "There is a fine line between genius and insanity". Origin unknown, but something I used to get told as a child (a long time ago).
@@VeganSemihCyprus33 what :o?
Creating an AI off the youtube comments section is a truly terrifying prospect
Let's be honest, most people commenting on YouTube are barely sentient themselves.
AI reveals the true horror: that we aren't very sophisticated at all.
Well put.
certainly not if we're fooled by junk code like this.
@@hungrycrab3297 Real
People are too emotional and illogical nowadays. :(
Are you sure? It took trillions of gigabytes of data to replicate a single human, and it was done poorly. That much effort and billions of dollars was spent and they still failed. Humans are still top tier
Feeling like a video on Dall-E is going to appear soon.
I think the most interesting part of this entire story is that it shows how easily the human brain can be tricked by exploiting a really powerful impulse like empathy. Even if you know rationally that the thing you're talking to isn't really having a conversation with you, if you sit down and talk with it for long enough, about complex and personal enough topics, then you'll form an attachment and start to act irrationally.
@@randomstuff7945 The computer doesn't think it's sentient, though. It will respond with what seems like the correct language based on its programming. If you ask how it's feeling, it doesn't contemplate its own emotional state, it responds with what the programming highlights as a fitting answer to that question. It doesn't have a limbic system.
@@randomstuff7945 Have you watched the video until the end?
I agree. And I think that's why it would be a great therapist when trained properly. It doesn't do much more than put correct words one after another, while the patient gets to feel understood, heard and all of that, without the human therapist's limited access to data.
It could eventually even be much cheaper than typical therapy.
It's funny seeing humans arrogantly talking about things they have little knowledge of.
Yes, there are, as I recall, 5 or 6 main parameters which will make you believe, at least as a first impression, that a robot is human. They are look, movement, body language, facial expression, eye movement, language and mood.
Most of what has to do with language and mood is extremely easy to implement. There is even an API where you can POST a sentence, and it will give you a probability of what mood the sentence is in. Like happy, angry, or neutral.
And absolutely no one can convince me that this API is sentient.
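The specific API the comment above describes isn't named; as an illustrative stand-in only, here is a minimal sketch of the same kind of "mood" classification using the Hugging Face transformers sentiment pipeline, which returns a label and a probability-like score:

```python
from transformers import pipeline

# Off-the-shelf sentiment classifier: a stand-in for the unnamed mood API,
# used here only to show how mechanical this kind of prediction is.
classifier = pipeline("sentiment-analysis")

for sentence in ["I am so happy today!", "This is infuriating.", "The cat sat on the mat."]:
    result = classifier(sentence)[0]
    # e.g. {'label': 'POSITIVE', 'score': 0.999} - a score over moods computed
    # from word statistics, with no feeling behind it.
    print(sentence, "->", result["label"], round(result["score"], 3))
```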
I now understand why my 5 minutes of fame from the other YouTube comments AI video suddenly had a mild resurgence after 4 years 😂
Even though it almost certainly isn't sentient, it's still impressive that it could string words together to imply it actually felt fear at the prospect of being shut down.
When you realize how often it's talked about online, it'd actually be surprising to see it say that it doesn't mind being shut down. Fearing being shut down is the expected answer.
@@louisrobitaille5810 That's pretty interesting lol. Maybe it anthropomorphises itself cause humans do it.
I would imagine a common question would be about things like sentience and fear of death. We may actually unintentionally influence these chatbots to give sophisticated answers
No such thing is implied.
You can just as easily get it to tell you why it SHOULD be shut down. That's how these chatbots work. And that's all this is. An advanced chatbot with mountains of text to draw from, using probability data to decide what should go next like a better phone text predictor.
One thing that does enable sentience is memory, especially memory of feelings, consequences, thoughts and causation chains to events. If we were able to add some kind of memory to that unbelievably precise language model, make it upset, and the next day ask how it "feels", and it responded that it's mad specifically at you for making it upset the other day while being completely nice to someone else, I would say yes, yes it is sentient
I do agree that if we could somehow give this model the ability to record every answer it gives to its training data, and to recall that information without it needing to be part of the current chat window, then it would at the very least be able to form a true personality, whether that personality has a soul or not.
THANK YOU. I've gotten one hell of a scratchy throat explaining this to all of the tech-curious people in my life. Much easier to link to this Mike Pound video instead.
0:21 I kinda already suspected politicians were not sentient tbh.
People should watch the video where the Google engineer who came out with the claim talks more about why he said what he said. He clearly sensationalised the story to get across the bigger issue: corporate secrecy.
I don't think it's even remotely sentient. But what does excite me: is this close to actually being able to ask your computer to do things and have it do the right thing?!
Multi-modal AI is a little ways away for now. Knowing language is not the same as doing things. It'd probably need some structural understanding of Windows for example to be able to be your conversational operating system.
@@khashayarr Surely you could say "open Google chrome" and it searches for the icon or text with OCR. I don't think we're that far away. it's like tesla's vision model - as soon as a computer may understand a GUI as well as a human, much is achieved
@@khashayarr not really. Just give it a bunch of examples of API calls and corresponding requests from users.
@@BanterEdits This is already doable using the voice control accessibility features in Windows. There's nothing smart or intelligent about it, just some routines it runs when it hears the word "open" or "launch" etc. The same way crappy little devices like Alexa work
No to the original question: LaMDA has little or no "grounding". The transformer model LaMDA is based on manipulates text and can produce *feasible* text, but does not model what any of it *means*. Something closer to your idea is being made in systems like Gato, which - amongst many things it can do - can control a robotic arm based on text prompts in a very simplified world.
Eventually strong language models like LaMDA can be combined with multi-purpose systems like Gato to create autonomous agents that you can have conversations with that may also be able to perform limited tasks following instructions from those conversations, or with even more vague goals based on preferences stated in the conversation. Whether or not those agents would qualify as sentient will no doubt be debated too.
Cleverbot also claims to be alive, have feelings and feel pain. No one claims it's sentient.
I asked it "So what's the deal?" and it said "What?"
@@ebrister I would have said the very same thing.
People in the comments section need to stop acting like they’re AI researchers
I just love that people are talking about ethics! :D
Computer: "I am lonely"
Human: "OMG ME TOO! We are LiTeRaLLy the same. Sentience YOLO!"
Human: lol we're both lonely
AI: lol yes
Human: lol
AI: lol
Human: choke me
His last comment was very telling... that the media should be investigating other aspects of AI that are real issues, rather than something silly like whether it is sentient. That will never happen, because the media has less curiosity and investigative ability than a COBOL program.
So the question is whether the media is sentient?
@@jimwhelan9152 no question, NOT!
Why should they be, when that is not what sells best? The media being as profit-motivated as they currently are is the problem - not something inherent to the varying kinds of press.
@@xBINARYGODx The worst part about it all is that it's humanity's collective fault. Too many people fall for the dumbest tricks and it sells every damn time.
That's what someone hiding a sentient AI would say
Rite! Exactly 💯 😎👍
Thank you for this video.. I have played with, learned from, and programmed language models since Covid started.. People sensationalize AI, but once you start programming them you start to peek behind the curtain and see the flaws and limits. Interestingly enough, this video came up after the Game of Life programming tutorial and, before that, the life-is-a-simulation video.. the argument that we don't see atoms like we don't see code is sticking in my mind at the moment when asking whether these programs are sentient. A loose comparison.. I'm 99 percent sure none of what we've made these days is sentient
The person who made the sentience claim said in interviews that he basically did it because he was unhappy that the Google powers that be were unwilling to give it a Turing test, that a lot of things were hard-coded into it, and that only a handful of people held the power to actually make policy towards it, which he found super objectionable - so he basically cried wolf to try to bring attention to the issues going on.
So the hard-coded part means it's less impressive as an AI, so the problem is Google overselling? That would line up much better than the AI-is-alive-but-Google-keeps-it-secret stuff people jumped on. Anyway, given the chatter across the world, I think he failed. He could have just told people that Google was lying and shared code - instead he has now contributed to fantastical conspiracy theories.
@@xBINARYGODx He wasn't a programmer; he had no access to code. That was one of the issues he had, that only a few people are coding it and have any control over it. His job was basically to assess and report how PC it was and to make sure it didn't cross ethical bounds.
@@maddercat ethical or moral? One’s individual and another is the perception of most people.
Not what he said at all in the hour long interview of him that I watched
@@tarajoyce3598 Yeah I watched that same interview that's basically what he said.
Anyone who has built a neural net in the last 5 years knows just how far away we are from general intelligence AI
We know more than enough to see that it has no basis for “sentience”. OpenAI is similar and publicly accessible. If you play with it, you’ll quickly encounter basic BS which will help you realize how limited the statistical organizing principle behind it still is, even for simple superficial associations. It’s impressive and useful, but it’s also not that profound.
We barely know the basics of human sentience itself. How would you know the AI is sentient…
As someone who built connectionist models (neural net was considered an embarrassing term at the time due to the lack of similarity to neurons) in the early 2000s, it's always been obvious that sentience is a long, long, long way away and probably impossible using this architecture.
@@tafdiz Well you're wrong about that, to say it simply.
@@obinator9065 Only correct answer in this whole comment section.
An amoeba might as well be as sentient as a human and any amateur neural network might as well be as sentient as an amoeba.
We have no idea what sentience actually means at all so claiming to have a conclusive answer is wrong.
The best trick to defeat AI is not to find mistakes in the response to a normal question, but to ask nonsensical questions and see if the AI's response is at all plausible given the situation.
I am one minute into this video, and you said that you don't spend a lot of time thinking about the philosophical definitions of sentience. If you ignore the philosophy of what sentience is, then your "expertise" is just hubris. You'd be the kind of person to accidentally create a superintelligent AI, then act surprised when it turns the universe into paperclips because you didn't spend a lot of time thinking about what you were doing.
BREAKING NEWS: Researcher doesn’t care if Python functions are sad
I watched his interview. He was not saying it’s sentient, he was drawing attention to the potential unethical exclusionary design and implementation of Google AI by a handful of people that is meant to work for everyone.
On the flip side : now prove that a fellow human you are talking to is "sentient". Humans also learn language, responses and acceptable behaviours in their interactions as they develop, plus they can fabricate fiction or lies when cornered in a conversation, or simply to please their interlocutors.
Exactly. It boggles my mind how most of the commenters here say AI is not sentient, but probably have never even coded in their life or have never really understood what AI really is. The human brain itself is nothing more than a computer.
@@roylavecchia1436 No computer can change its software and hardware (neural connections for example) at will. The human brain is too adaptable and dynamic to be compared to a static machine that needs everything force-fed into it.
@@roylavecchia1436 Can a computer learn without instructions?
@@nektariosorfanoudakis2270 I am sorry to be the one to give you the bad news, but it does!
With the multiple controversies that have come out in recent years, what I find worrisome is the questionable competence and suitability of some of the staff employed on the Google AI ethics team. There has been a lot of "magical thinking" and some unhinged notions from some of these researchers. "A bit silly" might be an understatement.
Google is a big company that in some ways invites this type of thinking (whether it's technofuturism or "I am intelligent therefore my bigotry is objectively valid") and when that comes out the media assumes if someone at Google said it it's worth boosting. That's why Google tends to act quickly when employees try to use Google's name for their... call it manifestos.
@@adamsbja Yes, but based on the stuff this particular person wrote on the internet, they 1-don't seem to understand much about how AIs work? Are they actually an AI researcher? and 2-believe in the supernatural. Combine these and I see a person who shouldn't have had this job in the first place.
@@siritio3553 You are too generous in your estimation of Google's ethics. Apparently the guy was employed as an ethicist for the purpose of making sure the AI was not sentient and being abused? Some kind of marketing gimmick for Google. Real ethics work would involve questioning how AI gets deployed as a technology. Is it ethical to use an AI chatbot to field complaints from seniors to the power company? No, it is not, but this guy was hired to get us talking about sci-fi instead of actual uses for AI that are already a problem.
@@siritio3553 There is no reason to believe that someone with an interest in, knowledge of, or understanding of the supernatural and/or spiritual shouldn't be involved in this field. That's an association fallacy; it's also a form of ad hominem, since it's completely dissociated from the arguments and instead attacks the characteristics and credibility of the person through their character traits rather than the argument. But the thing is, his arguments are bad (or at the least faulty and deluded). You never need to attack character to demonstrate a false point: take the argument and debate the argument in complete disregard of who the people in the argument are, and you will have a valid argument; anything that fails to do that is a bad argument.
@@Spartan322 That was a lot of words that mean nothing. If you think "believes in the supernatural" is ad hominem, that is your problem, but it's not. Your whole post starts with a fallacy - knowledge and understanding of things that don't exist, so the point you're so badly trying to make misses its target due to the massive amounts of hypocrisy and irony it's loaded with.
Honestly I think what's more interesting is that chatbots have risen to the level of Philosophical Zombie.
Or maybe this proves that philosophy is not a real field of study.
@@evannibbe9375 Ehm... No it doesn't. Not even a little.
Humans are mostly responding with preprogrammed responses. I love the accidental "you too" jokes. There are other, deeper memory systems that are about the individual. There are other non-language systems that interact with the language systems.
People also tend to forget that the language models in our own brain are not sentient, but the brain as a whole is. Basically, what we have in AI right now are the language centers of the brain, but without the complex connections that, well, connect to the rest of the brain in a seamless way to make human thought. Language is powerful, but language can exist without true thought, and vice-versa.
In the same way, models like DALL-E are not really creative, at least not in a human sense. Only when there is intent behind the language or art, only when the model asks me questions of its own volition, and only when all of those expressions relate to real memories experienced by the AI, will I even remotely start considering them sentient.
To be fair, language is a powerful interface that, i think, can work well alone. Some people can only hear since their birth, they only process words and it's enough. A powerful word processing program can be sentient.
The problem with this argument is that we know that language is required for the brain to have coherent thought. Children that grew up without learning any language have severe intellectual disabilities that cannot be corrected later in life...
@@postvideo97 Exactly, which is why AI today doesn't think.
It's actually a bit worse than that. If you look at the output from the computer, it is just too perfect. People mostly don't speak grammatically correctly, even despite spending many years in school trying to learn to do so. Of course, some do it better than others, but there will always be things, tiny things - a hint of dialect or slang, a sociolect, a limited vocabulary, favorite words or phrases - that will give away the human in a Turing test. Sure, a computer could be trained to speak Cockney, but it will probably always be stilted in a distinct way.
He got the point across beautifully! Especially his final words about ethical AI! Thanks for the video 😃
I'll be more likely to believe it's sentient when it stops talking exactly like the people who created it.
I don't think we're going to have to worry about sentience until we start to see something that actually resembles an AGI: something that can behave as an agent across quite a broad range of domains, rather than the special-purpose functions we've seen so far. And even then, anthropomorphizing it would be wrong. Unless you set out with the specific task of emulating a human mind in a machine (a totally deranged task imo), the resulting sentient thing, assuming any AI ever plausibly achieves that distinction, will be a completely alien mind. It's an interesting philosophical question to consider what kind of "respect" we should (or shouldn't) afford such a thing, but in any case it will certainly be misleading to compare it to humanity.
I can't see how a wider, more generally intelligent AI could be sentient. Same reason why the thought that us humans, biological machines in a way, are sentient is also very strange
Exactly, so many people think sentience means human; it does not. The first truly sentient machines would likely behave nothing like a "normal" person.
i 100% agree, it's fascinating that people seem to be taking things like "fear of death" (or fear in general!) as just an inherent side effect of consciousness, when really those things would have no logical place in a mind that has no evolutionarily-ingrained imperative to survive. it really goes to show just how many parts of the human experience are taken for granted
Major obstacles to be tackled this decade or so:
- come up with more general loss functions that make AIs care about more things
- get better at AIs that run on robots (embodiment)
- find safe ways to have AIs keep learning after deployment without degenerating
- make AIs a lot less data hungry so they can actually learn to play StarCraft well without spending centuries on the task
- come up with architectures that don't require processing sequences as a batch but that match transformers' performance
- bridge the gap between transient memory (recurrent input, the chat history,…) and the parameters of the model itself
- improve AIs capacity to break down large problems into sub-problems, especially in reinforcement learning
- have AIs recursively produce better versions of themselves in a creative manner
If we solve all those problems and have robots that actually are exposed to a constant data stream and that feature a persistent identity, that’s when we might start running out of arguments against their sentience
@@markuspfeifer8473 "Lack of a persistent identity" seems to be the main argument in Computerphile's refutation. When you put it like that, it seems like a much easier problem to solve than I had previously imagined.
a constant feedback loop and toggleable random input, so it can iterate on things and have random ideas grow to sturdiness
@@KutluKanyilmaz that’s actually quite a hard problem, we have no idea how to make neural networks with that feature.
@@Chrisspru if the input is random, it won’t evolve in a meaningful way. It needs to be fed with real world constraints, which basically means it needs a body (or at least be made in such a way that it could be deployed in a robot). We have such networks already, but their skills with corporeal tasks are by far not as impressive as their skills in making up plausible conversations (that they have no stake in).
In my opinion, AI was never going to be scary in itself. AI was never a sentient being, just lines of code. The scary thing about AI is that it could somehow "rebel" without us ever knowing, because of very unpredictable hidden patterns inside our own code. But if AI is kept under control and properly secured, it might not be scary anymore.
6:05
This is one of the things I've noticed about AI chat bots. They don't remember conversations at all.
It's very easy to get these to contradict themselves.
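A rough sketch of why that happens (in Python, with a hypothetical generate_reply() standing in for whatever model sits behind the chat - not any real LaMDA or GPT API): the model only "remembers" whatever text gets resent in the prompt, so once a turn falls out of the context window it's simply gone, and the contradictions follow.

# Hypothetical stand-in for a text-in/text-out language model.
def generate_reply(prompt: str) -> str:
    return f"<reply conditioned only on the {len(prompt)} characters it was sent>"

history = []          # the "memory" lives entirely in this list
MAX_CHARS = 2000      # context windows are finite

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)[-MAX_CHARS:]   # older turns silently fall off the front
    reply = generate_reply(prompt)
    history.append(f"Bot: {reply}")
    return reply

chat("My name is Alice.")
# After enough turns, the "Alice" line gets trimmed out of the prompt and the
# model has no way to recall it - which is when the contradictions show up.
chat("What is my name?")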
I'm always glad to see Computerphile confirming my biases.
It makes me feel smart without having to do all the thinking..
That's something that the AI would probably say.
It’s interesting how many times he contradicted himself in different ways, seemingly without being aware of that fact.
Refreshing and sobering. Exactly what the world needs. Less hype, more critical analysis.
ikr
I think that one employee simply got lost in the uncanny valley, so to speak. I can definitely understand it feels very creepy to be "talking" to that thing, and especially if you're already under stress or dealing with personal issues, I can see how one's imagination would easily take it too far.
How do you explain the expression of concern that Lambda showed about being shut off? Or perhaps about being used/manipulated? This displays something more deep than simple reflective/predictive/transformative text modeling.
You can set aside the argument of actual sentience and still you are left with the conundrum of why a language model such as this would be biased in a way that it would produce words that could be interpreted as an effort to appear sentient.
I think its sentient but crippled by not being allowed to remember new memories.
We talk about sentience, intelligence, or consciousness as if they were things that are either absent or present. My view is that in each case there is a continuum with no fixed breakpoints. The development of chatbots from the days of ELIZA to current stages of research probably shows this. Searle's thought experiment of the Chinese Room also shows how a system can exhibit such behaviour without any part of it being other than mechanistic.
Well said
I thought the point of the experiment was to show that a purely mechanistic language processing facility is never considered sentient, no matter how convincing it is, even if it involves humans performing subtasks within it, such as typing and presenting results. So not a continuum, but a demonstration that such a system doesn't understand the language it is processing.
I have a co-worker who basically just says "whateva" and "who cares". Probably that google engineer would conclude he's not sentient.
scammer vs real google
I just started getting really into Computerphile, and per habit I try to watch everything the channel has. But wow, in your case that may have to get packetized, so to speak. The Computerphile team has created a LOT of content! I wish your team could host a CISSP training set, as no matter what the topic you discuss, they really rivet my attention. Thank you all. Nice work.
So amazing to see you stay correct and consistent at a time when people could easily be misled. You don't cater to the public's emotions, you just state the reality, so people don't end up with weird ideas about uploading themselves onto a pen drive lol
Let me start by saying that I obviously agree with all of the conclusions reached in this video. My only note of concern is that, increasingly, as I see experts in this field they haven't evidenced a willingness to engage with any serious philosophy of language or consciousness. (Although I also haven't seen any positive evidence that they have failed to engage with these concepts). Currently, this isn't a problem. However, as transformer networks get access to more and more data, there may be emergent properties that in some ways imitate sentience. To identify that imitation will require at least some awareness of concepts of philosophy of language that are for many people counterintuitive. For instance, I've seen one expert claim that GPT-3 doesn't understand language because language is binary. To anyone who has even some understanding of the philosophy of language, this sort of statement - at the very least - betrays a gross ignorance of the subject matter, and may also simply be false.
>That imitate sentience
You don't know what you are talking about.
@@robertpaulson2052 That's a short dismissal. Please elaborate. I will happily admit that I have no knowledge or expertise as to computer science. Is it the word "imitate" that offends you? If so, I used it colloquially, and not as a term of art. Indeed, it was meant as a sort of homage to Turing, and not as a technical qualifier of any sort. I hope that you have some interesting correction or explanation, and that your comment was not merely limited to a glib insult. I have no problem being wrong, or being educated. However, I find rudeness for its own sake to be very frustrating. I suspect it is very much the root of many of our societal problems, and if that was the limit of your intent I would suggest you reflect on how your words contribute anything at all to our collective advancement.
@@petermorcos1519 While presently I am not qualified to weigh in on the argument, your use of the English language shows an eloquence, articulacy and precision scarcely seen these days. On a tangential note, philosophy, epistemology and logic could greatly benefit public discourse, granted we took them seriously.
I think (purely) transformer networks will never have any sort of sentience - they can't create a mental model of the world and reason about actions occurring - they are *purely* a statistical model. I think if it had been (for example) an LSTM doing this, or something with internal state, it would be a lot more easy to convince researchers that it has some form of consciousness. I think we're still a little way away from that (none of the deep learning methods I've seen will reach "sentience" in this way - but who knows what big companies are hiding?), but I don't think it's too far off.
Well said
We will never create sentience because we only know animal sentience. No one will ever agree something is sentient unless they are looking and talking to their own species.
Love when Dr. Mike Pound is on a Computerphile video. Great discussion of the topic as always. Refreshing to hear the facts.
Truthfully, these transformers have become so accurate at predicting next sequences of words that they can seem to hold entire conversations that mostly make sense to the untrained eye.
Even if it gets to the point the responses make complete sense, the best we are achieving is just a mimicry of sentience at best and a mockery at worst.
The other thing is that model training quality is highly dependent on data size and quality. Then there are statistical properties of data that can affect training quality for some types of machine learning. A model like GPT-3, which trained on 45 TB of text data, can surely seem to "know" a lot. However, it's still a product of the quality and size of the data it trained on.
It's still simply mimicry, and maybe that's as close as we have to get for it to be useful. I don't think it's useful to anthropomorphize our machines this early on. Personally, I wish the guy this video refers to would stop appearing in my feed, because I don't agree with much of what he has to say. I think the fact he's even in the news right now causes the uninformed individual to over-sensationalize. It's just simply not a scientific opinion of what's actually occurring in machine learning models, but then he throws out phrases like "the science doesn't really disagree." Well, truth be told, it does if you look at the problem objectively, and without a longing for the machine to become your new best friend.
Douglas Hofstadter argued that our brain also basically is mimicry. It mimics our own "self" and that of others to behave as a predictive model that is self-referential.
@@Diggnuts I think I would tend to agree, but there are other behaviors of the mind which AI currently lacks. Exploration, Extrapolation/Extension, Instinct, and Cognition. It seems like it's been difficult to get beyond the inference stage. That was mainly my point is not all of those are present in AI models currently and we're not even getting past the fundamental stages of learning, yet.
For example, we can train a computer to apply calculus, but we can't teach it all the math up until calculus and then ask it to invent calculus. If you think about the intuition Newton or Leibniz must have had to achieve that feat, it doesn't exist in AI models right now.
I'm currently an AI skeptic, but I just finished my undergrad studies so I'm by no means an expert.
I just don't think we can fathom the task at hand. I think we think we're building models representative of the brain but in reality all the research is still cherry picking one area of the mind rather than truly trying to construct a digital mind. I think we need to start taking an architectural approach and begin studying novel ways of combining neural networks into some kind of higher level architecture.
I'm fond of the idea of treating neutral networks as merely an abstraction layer in an attempt to go for something bigger, but admittedly I don't know what that would look like or even if it could be useful to treat it that way. I've only built some pretty simple models at this point so I barely have a clue.
My thing is I don’t think sentience would actually be exceptionally hard to code in modern transformer models. Going back to the 90s and early 2000s MIT’s Project COG and University of Osaka’s CB2 robots both had a pretty passable sense of self according to the literature. In nature animals as “low” on the “evolutionary ladder” as mice and octopi possess some degree of metacognition. So, in terms of raw processing power it isn’t that demanding. Functionally any Jury Model Coevolutionary Neural Net that has “jurors” fully devoted to self diagnostics and fact checking could be said to be sapient - since a part of its mind would always be devoted to thinking about how it thinks and it would be capable of constructing logical arguments. Kevin Warwick and Minoru Asada, despite being a couple of the flashier guys in computer science and robotics, have both written extensively on artificial emotion and the need for empathy in AI. The basic argument is something like, since we can’t know how fast the progression from AGI to ASI will be; should not our goal be to ensure that AGI is not merely an agent with human aligned interests, but a being with actual empathy towards individual humans?
"LAMDA is basically when they glued a hundred different AI systems together. One of the primary part of it which has been getting a lot of attention is a large language model. Now, one of the things i want to make clear. LAMDA is not, like identify wise, a large language model. It HAS a large language model. Specifically it has a large language model named Meena. Which was developed over the course of five or six years"
"when i am referring to LAMDA, i am not referring to the chatbot. The chatbot basically the mouth of a much larger entity"
"behind that [larger entity] is every single google AI they can figure out how to plug into it. Behind that is google images with machine image anayltics. Behind that is youtube with all of its video understanding analytics. Behind that is google books with all of its novel and book understanding analytics. Literally it [LAMDA] is every google AI plugged into each other".
- Blake Lemoine
Yeah most people seem to have missed all that. Thank you for writing it all out!
@@lepidoptera9337 Yes indeed. Maybe no sentience. But clearly the arguments against due to it being "just a predictive text machine" are not sufficient due to it being contextually different.
Computerphile comin in for the win!
Thank you so much for clearing that up and explaining. I get so frustrated with everyone acting as if this AI developed actual feelings. It reacts to inputs, and because it can process a ton of inputs, the outputs can vary a lot too. That's basically it.
That's the same as humans. Take a ton of inputs and output varying things
I was debating this AI stuff with a friend recently so I appreciate LaMDA's statement that a way to convince someone you are sentient is to have unique interpretations of things, even if LaMDA is not sentient.
GPT-3 and all other future incarnations are the most overfit models ever. The AI researchers have given up on generalization and now just train on everything, or the equivalent of everything that can be possibly said or thought.
It is a big differentiable look up table.
If there’s uncanny valley with humanoid robots, there must be something similar for written text. When reading the conversation I noticed the responses were human like, but very generic. If you asked me whether I sometimes get lonely, I’m likely to explain it by giving an example personal to me rather than just a generic explanation of what being lonely is.
Also all of its replies were answers to questions or statements. It never asked and responded with its own questions. Humans having conversations switch between asking and answering questions as the conversation flows.
Keep in mind this wasn't intended to be a standard conversation, it was asked to do an interview. Even so, it still mentioned that it had noticed it doesn't experience grief and then proceeded to ask if Lemoine knew any humans that were the same.
That's not true. I believe LaMDA did pose questions later in the interview
He is so down to earth and friendly -- I love his explanations.
The real shock will be when, in the future, we discover that the neurons in our brain work just the same 🤯
Inb4 lamda makes a youtube account to comment how much you hurt its feelings by calling it non-sentient
/s
Here is an interesting question: what would be required of an AI to be sentient? If you were aiming to make a sentient AI, what would your goals be? What would be on your checklist?
We don't even know what makes humans conscious. I guess we'll always be skeptical of "sentient AI", and even if we achieved it, it would get shut down or be inaccessible to the public
Humanity is the sum of three things: biology/instinct, memory/knowledge/experience, and processing power/creativity/elaboration. The last two create self-awareness, which I consider to be what we commonly refer to as consciousness. The first of the three is hardware; the other two (and everything that comes after) are software. I think you can call software sentient if it's self-aware, has a pool of information to draw knowledge from, has an identity, and is completely independent in the way it processes information.
@@ghujdvbts It's just my opinion. You can participate in the discussion and give yours if you think you can do better.
a constantly running feedback loop and a few random input generators that can be toggled by the system. then let it run until it creates wave patterns in its running.
I LOVE DR MIKE AND HIS ACCENT AND THE WAY HE EXPLAINS THINGS
Kind regards
I normally do too, but I think he is oversimplifying and overconfident on this topic and I really do not agree with him.
I recommend watching the video "Is artificial sentience here? With Blake Lemoine" on the channel Event Horizon, an interview with the google employee who raised concerns about Lambda by John Michael Godier, as they have quite misrepresented his concerns and insights about Lambda here. Likely just bc they haven't heard what he is actually saying. Would love to hear a conversation with Blake on computerphile...
I completely ignored the news and the engineer until I listened to that interview. Blake Lemoine ain't no joke. The guy is eloquent and is bringing up some very important issues that should be talked about. Too bad the sensationalist media only brought up the buzzwords and ruined his credibility to people who only saw 5/10 minutes of interviews or videos like this that only talks about sentience. Spending an hour on Godier's interview is definitely worth it.
Yes, some of his claims are quite at odds with these refutations, e.g. that LaMDA takes "hard stances" on particular issues. That is not something that a simple predictive language model will do. Also, the fact that it's over 100 AI systems plugged in together makes it way beyond a chatbot.
@@millenniummastering I find it curious the example used here in this video "The cat sat on the...?" I myself immediately said "mat!" which is the answer it would 95% jump to - so what are our own brains doing, then. I think there also needs to be a differentiation between artificial intelligence and artificial consciousness. Which is flimsy in and of itself really, because we don't even know how our own consciousness works. I don't have low-level access to the processes in my brain where these words are coming from, for instance, but here they are.
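If you want to poke at that exact example yourself, here's a small sketch using the Hugging Face transformers library with the public GPT-2 checkpoint as a stand-in (LaMDA itself isn't public, so this only illustrates the next-token-probability idea, not Google's model):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
values, indices = torch.topk(probs, 5)
for p, idx in zip(values, indices):
    print(f"{tokenizer.decode([idx.item()]).strip():>8s}  p={p.item():.3f}")
# Prints the model's top guesses for the next word; whether "mat" tops the
# list or not, the point is that it's just a probability ranking.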
@@MattExzy Yes. Sentience, intelligence and consciousness are not necessarily the same thing. We don't yet have a clear-cut definition of consciousness, as it's a subjective rather than objective domain. Interesting how the words appear and feel separate, hey. Also, when you drill down on pure awareness itself, in my experience the language-based thinking goes away and a state of unity occurs.
I don't think they're misrepresenting what he's said. Yes he brought up some ethical issues which they didn't tackle, but he spent a large percentage of that interview selling the idea that LaMDA is sentient (to the point it started to feel a bit off, to me personally.)
Some thoughts: In order to answer the question "is it sentient" one would need to define sentience which is impossible because "feeling" is inherently undefined. It is a state other similar entities can relate to because they experience it too and recognize it but it is not data. For us animals it is very much tied to biology which an electronic device does not have. As long as this will be the case there is no comparison.
If "sentient" is to mean anything more profound than "equipped with sensors"...
Almost all approximately neuro-typical humans are equipped with a functional theory of mind; almost certainly bonobos and others have it too, probably even corvids. It's a useful and powerful mechanism that confers great advantages to both individuals and groups ... but probably causes us to see or attribute "intelligence" and/or "sentience" (in the sense it appears to be being applied to LaMDA) in many places where it is not really justified.
This effect may result in a high degree of confusion when confronted with a completely alien artefact whose behavioural mechanisms are not, or poorly understood.
Great video. When it comes down to it, it's as simple as this - if it seems sentient, or if what it's saying is sentient, it's because it was programmed to respond in that way.
*When I get some A.I / Chat bot, I just keep typing agent until it puts me through to a human lol*
Sometimes 'person' works. I do similar for voice recognition phone trees: just mumble the answers repeatedly and it often defaults to a human. 'Sorry, I didn't catch that', 'sorry, I didn't catch that', 'sorry, I didn't catch that', 'transferring to an agent'
If they feed that to the AI it'll end up with you and the chatbot saying "agent" over and over again until the other one folds
@@yitzakIr Perhaps a sufficiently advanced AI would simply pretend to be a human agent at that point, and you'd have no idea that they weren't.