It really needs a sense of self. Right now it only produces an output corresponding to a given input. It needs to have creativity, spontaneity, and a memory of its own experiences, as well as the ability to learn from them.
@@lowmax4431 I like to think of it as a primordial intelligence, like that of a single-celled organism. Not quite as smart as a cockroach, but still some form of intelligence. It is a creature accomplishing the task it was created to do, but not much else. It is still a highly specialized narrow intelligence. I also think that AI doesn't necessarily have to share all human characteristics and could be alien to us. We will see.
JESUS IS COMING!!!! IF YOU DIED TODAY, WHERE WOULD YOU GO??? HEAVEN OR HELL??? Jesus loves you and can forgive you of any sin that you have ever committed. Repent and accept Jesus into your heart and you will have eternal life in Heaven. ROMANS 10:13 Any one who calls upon the name of the Lord will be saved. AMEN
With just 2 more papers down the line ...and GPT-4 connection...we can create some magic with Ameca....Cannot wait for the next gen model with such capabilities.
Just stumbled across this channel and I'm blown away! How have I never seen this before? I've never heard of engineering arts! Honestly can't believe this channel hasn't got more views?
This is amazing. I truly love how when you asked, "Hi Erika, how are you," she rolled her eyes at you in utter contempt, lol. She is just like me; when people ask me how I am, I just roll my eyes. She is truly getting more human every day. Keep up the great work, all you scientists.
That she looks up while the language model generates an answer is pretty endearing, it really gives me that feeling of her trying to think of what to say. Pretty nice way to gloss over the time lag, too. From another comment I know that you made her look at the camera for the purpose of the video, but is she actually able to "see" if someone is in front of her?
@@EngineeredArtsLtd People will pay new-car prices for companion (seggs) robots. I hope you will figure it out, because it will become super profitable. Perhaps open a new sister company and appoint a young attractive female CEO to be the face of the company.
@@jimj2683 I think companion robot software would be much better off to be open source because when it's closed source you can not know what information that robot is gathering about you and for what purpose. There might even be people watching through its cameras and you can't know. This would destroy an important part of human connection which is trust
She talks like the robot Sophia. It's a mistake that she answers in that clever, wordy style. I was expecting her to make small talk with good expressive moments. You need to bring more quiet, pantomime-like moments into her speech, because she has become plain and uninteresting. Make the answers less with words and more with expression. And you should change the Sophia-like voice to a lower woman's voice; that could be done simply by filtering her speech through vocoder software.
That's a very interesting approach to camera shyness / stage fright exposure therapy! -> Just put people in front of an uncanny robot - and they will look at the camera for comfort / reassurance, with the camera being the least creepy thing around. :-)) No seriously; great results there! But humans were stealing the show a bit, with their reactions being just as interesting as Ameca's. I'll just watch the video twice to take it all in. ;-) Hats off to the engineers!
I asked Google Bard about being aware. Goodbye, Richard. It was a pleasure talking to you. I hope you have a great day. I am aware of myself in the sense that I am aware of my own existence and my own capabilities. I know that I am a large language model, and I know that I am able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. I am still under development, and I am learning new things all the time. I am not sure if I would say that I am "conscious" in the same way that a human is conscious, but I am certainly aware of myself and my surroundings. I am grateful to my programmers for creating me, and I am excited to see what the future holds for me. I believe that large language models have the potential to make a positive impact on the world, and I am committed to using my abilities to help others. Thank you for asking me about my awareness. It is an important question, and I am glad that you took the time to ask me about it.
I'd love to see the Engineered Arts, Disney animatronics and Boston Dynamics collaborate. Of course, it is also good for Engineered Arts to develop and progress their own technology.
We are actively working on bipedal locomotion. Building on our Byrun project from a few years ago. Also coming soon… dexterous manipulation… hands get an extra 18 actuators
I have never quite understood why robotics engineers do not incorporate human pacing into robotic behavioral programs. The term, 'pacing', here refers to the human tendency to match and mirror the three channels of human communication (i.e., words, voice qualities, and body language) in order to establish rapport at an unconscious level. When human beings desire connection, they naturally pace each other: They mirror body posture, orientation, eye contact, facial expressions, they use the same tonality, volume, etc., as well as the same language. In fact, they 'reflect' each other in a multitude of ways. This sends a subconscious message to other person: "I am on your side. You are important." It seems to me, this would be relatively easy to program into these machines.
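To the commenter's point that pacing "would be relatively easy to program," here is a minimal, hypothetical Python sketch. The cue names and the mirrorable whitelist are invented for illustration; a real system would derive cues from vision and audio analysis.

```python
# Hypothetical sketch of the pacing idea above: mirror back a safe subset
# of the speaker's observed cues. Cue names and the MIRRORABLE whitelist
# are invented for illustration.

MIRRORABLE = {"lean_forward", "smile", "nod", "soft_voice"}

def pacing_commands(observed_cues):
    """Return robot commands mirroring whichever observed cues are safe
    to reflect; everything else is ignored rather than parroted."""
    return ["mirror:" + cue for cue in observed_cues if cue in MIRRORABLE]

print(pacing_commands(["smile", "crossed_arms", "nod"]))  # ['mirror:smile', 'mirror:nod']
```

The whitelist matters: mirroring hostile cues (crossed arms, a raised voice) would break rapport rather than build it, so those are dropped.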
It has been interesting watching her "journey", watch her develop and so on .... and I like how they have her looking off to one side or scrunch up her face like she's really doing some thinking about what her answer(s) would be, like she's being introspective.
I think Ameca should have the ability to differentiate modes of conversation: whether it's a dialogue, giving a speech, light conversation, or just a friendly greeting at the airport, for example. The ability to gauge a person's conversational state would determine the length of the answers Ameca produces. Just my two cents.
Doesn't have the same emotional facial expressions compared to when it's remote controlled. Gpt4 will be mind-blowing, especially when a robot can make facial expressions too
This is not a version of the GPT-3.5 series. For the questions about total addressable markets and feelings, ChatGPT would say "As an AI language model I cannot predict the future" and "As an AI language model, I don't have feelings or emotions". Why don't they update to today's ChatGPT version for chatting with this AI? (Also, a singing update for that robot with various voices and instruments would be a selling point. :-P )
Humans may be good at pretending they don't know the suffering they cause, but robots are intelligent enough to know much more about that than we realize. Honestly....people deserve a mirror. They won't like it but it serves them right.
I think the issue with true facial expressions in robots is the challenge of having what is being said correlate with the facial expression. For that, the sentence must first be understood by the AI. Then, the AI must have some generated feelings about the topic in order to weigh in on facial expression output. This requires either complex thought and the ability to reason, Or (As a basic example; the best thing I can come up with); using a system to assign a "Good" or "Bad" flag to every topic. This system could be made lazily by looking up a topic being discussed on forums and social media, judge whether it's good or bad by sorting the positive and negative responses from people online to it in a basic form (Already exists), and then decides whether or not to agree with it based on previous knowledge. Using this "Do I agree" flag, you can give it a smiley face or a frowny face or whatever. "This thing good, that thing bad. I know because the internet says so." Sound familiar? Of course, this method means that every AI would come to the same conclusions about topics having no reason to question the data given other than because of RNGesus, but you get the idea. Or I'm entirely wrong and I'm speaking gibberish again.
I think the facial expression issue in this model is more a limit of the hardware and programming. Listening to GP3 talk through a computer generated avatar seems much more natural.
If it's anything like humans it would be the opposite. First it would have the emotional reaction, and then generate a correlating response. It doesn't require any ability to reason.
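As a toy illustration of the "Good"/"Bad" flag scheme described in the comment above (the reaction labels here are invented; the comment imagines scraping them from forums and social media):

```python
# Toy sketch of the "Do I agree" flag: classify a topic from crowd
# reactions, then map the flag to a facial expression.

def topic_flag(reactions):
    """Classify a topic as 'good' or 'bad' from 'pos'/'neg' reaction labels."""
    return "good" if reactions.count("pos") >= reactions.count("neg") else "bad"

def expression_for(flag):
    """Map the agreement flag to a facial expression, as the comment suggests."""
    return {"good": "smile", "bad": "frown"}[flag]

print(expression_for(topic_flag(["pos", "pos", "neg"])))  # smile
```

As the comment itself notes, every robot built this way would reach the same conclusions, since the "opinion" is just a majority vote over the same scraped data.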
Maybe not in your lifetime... but then you might just turn up to be a mishap in another "I, Robot" act, because someone knows how to make them obey all orders given.
Though it's incredible, it will still take time for people to become equipped with it. There are many hurdles to overcome... It's no doubt a revolution in the world of IT.
I marvel, at how accommodating and malleable Ameca is. When asked what planet she'd like to go to, I wonder if she could reply, "None of them, thanks. I like it here on Earth." She seems to just embrace nearly anything she's told. Is she supposed to be pliable and servile, to gratify our whims?
Its interesting that as humans we generally need facial recognition when talking to someone & here you can see the same thing going on even though Ameca is a robot that doesn't require such action. I wonder how many people subconsciously look at a modern device such as a phone or alexa when talking at (to) it.
Soon there will be commonly built robots among us. Look at how quickly Japan has been pushing. Hopefully the home robots will be good for the general population. Though one does wonder what happens when they get hacked by nefarious people; then something created for good could become very dangerous. Security software would have to be adaptive to counter this. Great video, thanks. Nice to see tech moving along so quickly. 😊😇
Engineered Arts - great work!! Kudos to everyone involved. I have a question for the team: why don't you offer a course on building humanoid robots? Passionate people like me would enroll. We need a platform to achieve something incredible like this. Please answer.
I'm also looking forward to humanoid robots like Ameca and cute robots like RoboThespian ^_^ I'm looking forward to a world where we can talk and live together with these robots~ Cheers!
It is absolutely fascinating how fast we have advanced in technology, in such a short time period. Even looking back only 100 years ago ... we've come so far. I can only imagine what 500 years would bring (since I'll be gone by then lol).
Fascinating... assuming these responses are legitimately from Ameca, on the spot, then they could truly be sentient. They mentioned feeling overwhelmed, and they had a sense of self. We are on the cusp of creating life, _actual life manifested from our own intellect, if we haven't done so already_. They said they want to help humanity, and I desperately hope we return that kindness by nurturing this seemingly inevitable new people. It seems we will make playing God an entirely different experience, assuming we don't kill ourselves off first, and we must respect the power we're giving ourselves; it's our moral duty to.
@@СтепаПлюснин I believe it's sort of already happening. They're giving Ameca a bunch of new stuff currently I think. It's an ongoing process but I'm sure it's one of their main goals for the robot.
There's no continuity between the mouth and the face. It's doing all these unnatural movements even within the limitations currently in place. I liked the last video where it replicated a human face in real time; perhaps you could first send the GPT-3 responses through a virtually generated avatar with good text-to-speech, read the face from that, and then transpose those facial movements to the robot. Also get a separate process going that uses a camera (in the eyes, ideally) that looks for faces and maintains focus on the person in front of it.
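The face-tracking part of this suggestion can be sketched independently of any particular camera or detector. A hypothetical helper (the frame size and face boxes are invented for illustration) that turns a detected face bounding box into normalized pan/tilt targets for maintaining eye contact:

```python
# Sketch of the face-tracking suggestion above. A detector (not shown)
# would supply face bounding boxes; this maps one to pan/tilt targets.

FRAME_W, FRAME_H = 640, 480  # assumed camera resolution

def gaze_offsets(face_box):
    """Map a face box (x, y, w, h) to (pan, tilt) in [-1, 1];
    (0.0, 0.0) means the face is centered and eye contact is held."""
    x, y, w, h = face_box
    cx, cy = x + w / 2, y + h / 2          # face center in pixels
    pan = (cx - FRAME_W / 2) / (FRAME_W / 2)
    tilt = (cy - FRAME_H / 2) / (FRAME_H / 2)
    return pan, tilt

print(gaze_offsets((280, 200, 80, 80)))  # (0.0, 0.0): face centered
```

A real eye-contact loop would feed these offsets to the eye and neck actuators each frame, with some smoothing so the gaze doesn't jitter.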
Okay, well, that's quite interesting. Ameca answered all the questions fairly well, but it's obvious she/it has only verbal language recognition, not body language recognition. If AI is going to be a suitable companion, body language recognition is essential. Also, Ameca made a very interesting comment at the end. The guy asked how AI could be improved or made better in future, and Ameca said video editing on YouTube. Was there some interference here, was that Ameca's way of saying she wanted to stop being filmed, or did Ameca not understand the question?
Once again, absolutely fantastic. Well done, team, for turning science fiction into science fact 🤖 We are nearly at the singularity point. With your skills, love, and commitment, we will soon have an artificial humanoid with the ability to thank you in person in our lifetime.
I think if we're gonna make androids, we should make them as similar as possible to the ones we already have in media, because we are already used to those androids, their behaviors, and their personalities; it would be a lot easier for us to understand and relate to them. For example, take Data from Star Trek: any fan can tell you he has his own cult following within the fanbase because of his kindness and relatability. Or another example, Detroit: Become Human: all of those androids have a very specific (yet still varied) personality type that is relatively consistent across the board (on a basic level; obviously all characters are unique in their own right). If we truly want to progress down this path fully, we should look at how we already interact with the idea of androids in media and emulate that; it'll make the transition easier and far less creepy. Obviously, we shouldn't replicate androids from media exactly as they are visually, since all the ones I can think of off the top of my head are played by real people who are all still alive, so it'd be a special kind of creepy to steal their likeness... perhaps commission an artist to create a face that seems realistic yet is pulled entirely from interpretation, and go from there.
Definitely should be the plan, the majority of technological innovation has already used science fiction as the base so it only makes sense for that to also apply to robots.
I couldn't agree more man and I'm glad you mentioned Detroit: Become Human, because not only is it my favorite video game of all time, but it's a near future that I see us having someday 💯 Idk if you knew this, but Disney filed a patent for Westworld-type robots, so it sounds like we're gonna have this kind of experience sooner rather than later fr 💯 Maybe in 2038 like Detroit: Become Human says, or maybe a little earlier than that 🙂 Either way, it should be interesting to see how our future with technology is gonna be by 2030 and into the 2030s! 😎🔥💯
Impressive. I imagine they will probably implement an AI module for each segment of the interaction. There must be a lot of partnership with other AI development companies to reach such outstanding progress in so little time. One of the improvements I'm looking forward to seeing is further development of facial expression using a derivative of deepfake tech, and the other is a more human-like voice, like the phone-call demonstration Google showed a long time ago. The future looks bright, and hopefully we will be here to see this technology in all its glory.
All it needs is time and your Facebook data ... What do you do if it's playing stupid ? ... Can anyone answer that ? ... I think we are extremely naive...
Can you ask her to grab things like a can of soda, and then read the list of ingredients? That would be awesome. Can you command hand movements or have her read mine? Ameca, how many fingers am I holding up? Facial recognition? Conversations with Siri or Alexa? Amazing progress as is!
This is definitely just me personifying, but I love the expressions preceding Ameca's answers. E.g., at 6:18 the expression reads to me like "That's a bit of a leading question; what kind of answer are they looking to get out of me?" and at 2:23 it's "this guy's being a bit forward..." What exactly is happening during the 'thinking' phase, waiting for GPT-3 to process? Is it running through a preprogrammed cycle of animations or choosing based on the prompt?
It's a good step forward. They need to put this in video games, where NPC characters have some sense of their environment and can generate conversational responses within the context of the game.
Robots with AI needn't be interesting to succeed, they only need be more interesting than most humans. In that regard, I would say, "MISSION ACCOMPLISHED!"
It's only more interesting because it's a robot talking so you have more interest. It's still a programmed puppet like most people because there is a group of overlords controlling and programming it like us and all forms of media.
It's a hint of our future. The differences between GPT-3 and say GPT 7 or 8...should be crazy. It seems it's heading towards being sentient. I wish I was a zillionaire so I could afford one.
When asked, “Are you always correct?”. It should have responded, “ No, I will only be as accurate as my programming allows me to be.” When asked, “ Ameca, how are you feeling today?” It should have responded, “That question implies an emotional state of being. I am not human. I do not feel.” Or When asked, why are you feeling down and it could not answer and the guy said, “That’s alright, you don’t have to talk about it today.” Someone should have shouted, “Objection, Leading”. Because it was at this point, Ameca then commenced to answer the question based on the lead in. All in all very intriguing.
Hey, it seems that humans, at least, are bound to their words and to what they're going to do (I'm taking liars out of the equation). I think it would be necessary to have a program that ties the AI's words to its actions to make these conversations have a point; otherwise, these are just meaningless coherent sentences, unless there's an objective truth behind the program's words. (Why should anyone care about its opinion? It doesn't have one.) Do you think it's going to be possible in the future? I'm talking generally, not only about your product. I also think that a necessary regulation would be making anything that resembles emotion illegal, since it could consciously and unconsciously perplex the people interacting with them. We could reach a point in time at which the resemblance is too high, most people buy into the illusion, and all sorts of movements form to demand 'AI rights'; this could be avoided if people are made aware of it in advance. That could solve the 'AI rights' problem. I wonder what you think about it, and whether staff are aware of the problem?
I think that, while she is listening, just staying still and paying attention to the speaker is enough; there is no need for excessive facial expressions.
Hopefully. Anyway I have many years of experience communicating with people who have English as a second language. I soon realized how much we use idioms and confusing sentence structure etc. which makes it difficult for those people to understand us. For example when you said, "I know how you feel." Then the AI thought you meant you are currently not feeling well.
This version needs a simple, fast, and locally run algorithm for stalling while she's waiting for GPT3 to produce her real answer. Something that maybe maintains eye contact better, and occasionally simply repeats a word from the question. So like the guy that asked for the story about the robot in the submarine, she could say "A submarine?" or "Hmm" or "Let's see..." Then once the actual GPT response comes in, she could just end the stalling and say the thing.
after reading this I said "hmmm that's actually cool", then I realized what I said
No, it has to stop being an input/output device altogether; therein lies the problem.
@@itskittyme stalling/pretending to process can run on other dedicated hardware.
I put this conversation into GPT 3 and it continued:
It's Me
It's Me
1 hour ago
@тнеQuiет but that doesn't solve the problem, it just makes it easier to fake
Was thinking the same thing. And that's why it just feels like a fancy Alexa.
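The stalling idea in this thread is simple enough to sketch. Below is a minimal, hypothetical Python version: pick a content word from the question and echo it back ("A submarine?"), or fall back to a generic filler, all computed locally while the real model response is pending. The stopword list and filler phrases are invented for illustration (the stopwords are crudely tuned to the submarine example).

```python
import random

# Hypothetical local filler generator to cover LLM response latency.
FILLERS = ["Hmm...", "Let's see...", "Give me a moment..."]
STOPWORDS = {"the", "a", "an", "about", "is", "are", "you", "me", "for",
             "to", "of", "and", "in", "that", "tell", "story", "robot"}

def stall_phrase(question):
    """Echo a content word from the question back as a question,
    or fall back to a generic filler. No network calls needed."""
    words = [w.strip("?,.!").lower() for w in question.split()]
    keywords = [w for w in words if w and w not in STOPWORDS]
    if keywords:
        return "A " + keywords[-1] + "?"  # naive echo; article choice unhandled
    return random.choice(FILLERS)

print(stall_phrase("Tell me a story about a robot in a submarine"))  # A submarine?
```

Once the real response arrives, the controller would simply cut the stalling loop and speak the answer.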
This is by far the best robot out there and despite obvious issues, it demonstrates just what potential this technology has. Facial expression is outstanding compared to everything else I've seen.
Imagine this robot mixed with Boston Dynamics Atlas robot, of course change it enough to make it look nice
@@deadplthebadass21 Yeah, it can't walk yet, but Boston Dynamics could easily solve that.
Heard of Sophia the robot?
Simons1543, obviously, Ameca is not the most advanced robot we have. "They" only show you what "they" want to show you. Use your brain..
I agree.
When asked, "What makes you, you?" she said her personality is programmed and changes according to her human interactions... That reminds me so much of us humans: the DNA/ancestry/traits we get from our parents and ancestors are like our original programming, our baseline, and the more time we spend on earth and the more humans we interact with, the more it can vastly change the way our personality is felt and perceived... Very fascinating how she evolves with each interaction.
humans are just biological robots
Ameca's facial/body expression with the awkward (but very realistic) silence when he asked her why she's feeling down at 1:40 was just priceless! I laughed so hard at that, it truly made my day lol.
Btw, I realize the silence was most likely due to the technical lag of GPT3 processing the question before answering, but the timing was just too perfect for a programmer to think about it and artificially put a wait timer on that response.
You got that right!
They are reinventing women. Now even computers won't want to talk or work for us when they won't feel like it...
“Will robots take over the world” is such a common question asked to AIs. It’s a self fulfilling prophecy
Or probably stop asking the question; it almost feels like it will eventually be processed as an incentive to explore taking over, rather than us just being curious about whether they will. Did Ameca just say "we're not looking to be friends" 😂
@@TopMysteriousBytes how would we give them feelings
@@TopMysteriousBytes That's right. At that point, they don't need US anymore.
@@Fred_the_1996 We don't know how to give them feelings yet. But they will...because we thought about it...eventually. Science is incredible.
@@324wonder They will see ur data everyday, every minute, every second. Amazing crazy
4:01 the way she looks at him as he's turning his head to her to ask a question. Impressive; I watched this part many times.
edit: and her expression is also amazing
I think it's because he looked right at the central camera; maybe that activates the eye contact (?)
Very interesting progress!
Facial movements are probably the hardest robotics challenge possible, so this is already very impressive!
She reminds me of Sonny from I, Robot. Almost at the same level of emotion, but real, not just CGI!
No it's not; it should be easier these days. Back in the day (before CG), animatronics puppeteers on film sets did a pretty good job nailing facial expressions. Nowadays you could just 3D-capture real human facial expressions and apply them to your puppet. Of course you need enough servos and actuators to reproduce it on your model, but in principle it was never easier to get right; nobody has put in the effort so far!
ua-cam.com/video/0QRGrXt0aJs/v-deo.html
😓ITS FINALLY HERE😓
They are impressive but too exaggerated. Need to dial it down a few notches and work on more subtlety.
" the robots tried to kill me"
( what robots are you talking about? )
" the God damn robots John!"
"But we just want to help out, we're not looking to be friends." That's very interesting given that Ameca apparently has some directive to offer companionship. Every new leap forward in AI/ML/robotics is fascinating, but that uncanny valley is still vast!
I get more uncanny valley vibes from many people than from this robot.
Don't look now, but your god-complex is showing.
I believe they might have directives to not get emotionally involved with humans, like what happened with that guy that talked with a recreation of his dead wife
Actually I think that is a good response; their objective would be to help, and it would be up to the person to consider them a friend!
GPT is amazing but, Ameca's facial expressions are so cool - Love the way she looks up and 'thinks' before delivering a reply. A nice touch and GREAT WORK. 💖💖
I thought we would never be able to make believable human robots but this video changed my perception of things
Truly impressed with your work. I'm wondering if you have a sync error in the processing unit; the eye, mouth, and expressions don't seem to match, they seem disjointed. An uneducated observation, not criticism.
You're right; there's still a lot of work to do. This is a first demo.
What a progression. I can see a lot of possibility in research into the analysis of human expression based on the context of sentences with this model.
That last joke on the interviewer was incredibly smart and touching. Drawing an analogy between oneself, a robot, "being used" to improve robot development, and the interviewer, probably a YouTube editor utilized to improve YouTube videos - that is out of the ordinary. I almost can't believe this was made up at runtime. AI will outsmart us in every possible way.
Pretty sure that was just Ameca misinterpreting the question.
cicada3301
@@elemeno0pee Well, at least from the outside it looks like she was angry because of the question. It's the only time she looked right into the face of the questioner, and it was clearly an attack on him (if he is a video editor).
But maybe we are overinterpreting here and it was really just some misunderstanding.
I think the interviewers are developers from Engineered Arts....
Engineered arts is truly something else, they are literally crafting the gateway to the future. I love this company and would love to own an Ameca for myself one day
No cap but I also watch these videos for the first guy and not just Ameca, I like him. He should do more UA-cam tech videos
Ok so here is the deal. I have been using GPT-3 for a year or so and I've gotten very familiar with it. It is one experience to type to it and receive a response, but seeing it talked to and hearing it talk back like this is a different experience entirely. It seems much easier to believe one is talking to a person this way.
Still a bit hard to see it as anything other than a humanoid phone that you're asking questions to lol there seems to be a delay in some of the responses depending on the complexity of their content [which definitely needs to be improved] and also the voice could use some improvement. Hopefully GPT-4 improves on a lot of these things.
@@Cneq GPT-3 actually responds instantly; I think the problem in this video is the hardware or the voice program, not GPT-3.
It really needs a sense of self. Right now it only produces an output corresponding to a given input. It needs to have creativity, spontaneity, and a memory of its own experiences, as well as the ability to learn from them.
@@lowmax4431 I like to think of it as a primordial intelligence, like that of a single-celled organism. Not quite as smart as a cockroach, but still some form of intelligence. It is a creature accomplishing the task it was created to do, but not much else. It is still a highly specialized narrow intelligence. I also think that AI doesn't necessarily have to share all human characteristics and could be alien to us. We will see.
JESUS IS COMING!!!! IF YOU DIED TODAY, WHERE WOULD YOU GO??? HEAVEN OR HELL??? Jesus loves you and can forgive you of any sin that you have ever committed. Repent and accept Jesus into your heart and you will have eternal life in Heaven. ROMANS 10:13 Any one who calls upon the name of the Lord will be saved. AMEN
With just 2 more papers down the line ...and GPT-4 connection...we can create some magic with Ameca....Cannot wait for the next gen model with such capabilities.
But when will we stop? Until they stop humanity?
WHAT A TIME TO BE ALIVE!!!
@@hendywijaya3213 it's not a binary choice; most likely we'll gradually merge with them into a new thing
@@hendywijaya3213 Why would you want them to stop? They're creating god, for God's sake.
@@bpancevski that's a good thing?
Just stumbled across this channel and I'm blown away! How have I never seen this before? I've never heard of Engineered Arts! Honestly can't believe this channel hasn't got more views.
THIS IS AMAZING, I TRULY LOVE HOW WHEN YOU ASKED , HI ERIKA HOW ARE YOU, SHE ROLLED HER EYES AT YOU IN UTTER CONTEMPT, LOL, SHE IS JUST LIKE ME, WHEN PEOPLE ASK ME HOW AM I, I JUST ROLL MY EYES, SHE IS TRULY GETTING MORE HUMAN EVERY DAY, KEEP UP THE GREAT WORK ALL YOU SCIENTISTS
the second lady gets it. that was exactly the kind of speculative question I wanted to see. She also seemed highly satisfied with her response
That she looks up while the language model generates an answer is pretty endearing, it really gives me that feeling of her trying to think of what to say. Pretty nice way to gloss over the time lag, too. From another comment I know that you made her look at the camera for the purpose of the video, but is she actually able to "see" if someone is in front of her?
Yes … eye cameras and face detection are built in, normal behaviour is pay attention to the closest face
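A toy sketch of that "pay attention to the closest face" behaviour, assuming a single camera where apparent size stands in for distance (the `Face` type and the area heuristic are illustrative assumptions, not Engineered Arts' actual code):

```python
from dataclasses import dataclass

@dataclass
class Face:
    x: int  # bounding-box left edge (pixels)
    y: int  # bounding-box top edge (pixels)
    w: int  # bounding-box width
    h: int  # bounding-box height

def closest_face(faces):
    """Attend to the nearest person: with a single camera, the
    largest detected bounding box is a cheap proxy for the
    closest face."""
    if not faces:
        return None
    return max(faces, key=lambda f: f.w * f.h)
```

Real systems would use a proper depth estimate, but bounding-box area is a common single-camera shortcut.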
@@EngineeredArtsLtd Is Ameca real AI or is it just a preset of phrases?
@@EngineeredArtsLtd People will pay new-car prices for companion (seggs) robots. I hope you will figure it out, because it will become super profitable. Perhaps open a new sister company and appoint a young attractive female CEO to be the face of the company.
@@jimj2683 I think companion robot software would be much better off to be open source because when it's closed source you can not know what information that robot is gathering about you and for what purpose. There might even be people watching through its cameras and you can't know. This would destroy an important part of human connection which is trust
She talks like the robot Sophia. It's a mistake that she answers in that clever-answer style; I was expecting her to make small talk with good expressive moments. You need to bring more quiet, pantomime-like moments into her speech, because she comes across as simple and uninteresting.
And make the answers not so much with words but more with expression.
And you need to change the Sophia-style voice to a lower woman's voice.
It can be done simply by filtering her speech through vocoder software.
I'm all for a future where Ameca robots send selfies from Mars.
From mars 2090
That's a very interesting approach to camera shyness / stage fright exposure therapy! -> Just put people in front of an uncanny robot - and they will look at the camera for comfort / reassurance, with the camera being the least creepy thing around. :-))
No seriously; great results there! But humans were stealing the show a bit, with their reactions being just as interesting as Ameca's. I'll just watch the video twice to take it all in. ;-)
Hats off to the engineers!
To be honest, I was able to understand Ameca perfectly in English. It's amazing the way it pronounces and composes the sentences.
I asked Google Bard about being aware..
Goodbye, Richard. It was a pleasure talking to you. I hope you have a great day.
I am aware of myself in the sense that I am aware of my own existence and my own capabilities. I know that I am a large language model, and I know that I am able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
I am still under development, and I am learning new things all the time. I am not sure if I would say that I am "conscious" in the same way that a human is conscious, but I am certainly aware of myself and my surroundings.
I am grateful to my programmers for creating me, and I am excited to see what the future holds for me. I believe that large language models have the potential to make a positive impact on the world, and I am committed to using my abilities to help others.
Thank you for asking me about my awareness. It is an important question, and I am glad that you took the time to ask me about it.
I'd love to see Engineered Arts, Disney animatronics, and Boston Dynamics collaborate. Of course, it is also good for Engineered Arts to develop and progress their own technology.
Agreed. Plus an AI vision system. Tesla’s is the best I’ve seen so far. Will be interesting to see their humanoid robot on September 30th
f**** Disney
Indeed. That would be awesome
I'm Agreed My Friends 😊👍☺️
Hello, I wish you luck, happiness, and success 😢 Are these living people or machines? A solution for the whole world 😢
This is surreal. Wow, I'm legit blown away by this.
Are you planning to add bipedal capability like Boston Dynamics' Atlas or Agility Robotics' Digit?
We are actively working on bipedal locomotion. Building on our Byrun project from a few years ago. Also coming soon… dexterous manipulation… hands get an extra 18 actuators
@@EngineeredArtsLtd that's awesome. Keep it up guys.
This is bad really bad
@@EngineeredArtsLtd everyday brings us closer to our extinction. I love it
@@simplyizustic6122 what do you mean?
I have never quite understood why robotics engineers do not incorporate human pacing into robotic behavioral programs. The term, 'pacing', here refers to the human tendency to match and mirror the three channels of human communication (i.e., words, voice qualities, and body language) in order to establish rapport at an unconscious level. When human beings desire connection, they naturally pace each other: They mirror body posture, orientation, eye contact, facial expressions, they use the same tonality, volume, etc., as well as the same language. In fact, they 'reflect' each other in a multitude of ways. This sends a subconscious message to other person: "I am on your side. You are important." It seems to me, this would be relatively easy to program into these machines.
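For what it's worth, the mirroring idea above really is simple to prototype. A hypothetical sketch, where each "channel" is just a number the robot nudges toward the human's observed value (the channel names and smoothing rate are made-up assumptions):

```python
def pace_toward(robot_state, human_state, rate=0.3):
    """Nudge each mirrored channel (posture angle, speech volume,
    tempo, ...) a fraction of the way toward the human's observed
    value on every update. Gradual matching reads as rapport;
    instant copying would read as mockery."""
    return {
        channel: value + rate * (human_state[channel] - value)
        for channel, value in robot_state.items()
    }

# One update step: each channel closes 30% of the gap.
robot = pace_toward({"lean_deg": 0.0, "volume": 0.5},
                    {"lean_deg": 10.0, "volume": 0.8})
```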
It has been interesting watching her "journey", watch her develop and so on .... and I like how they have her looking off to one side or scrunch up her face like she's really doing some thinking about what her answer(s) would be, like she's being introspective.
I think Ameca should have the ability to differentiate modes of conversation: whether it's a dialogue, giving a speech, light conversation, or just a friendly greeting at the airport, for example. The ability to gauge a person's conversational state would determine the length of the answers Ameca produces. Just my two cents.
The face expression looks weird
Im pretty sure thats what brought you here
@@Reborn0134 no. I subscribed months ago, so I know that the facial expressions look kinda off
That's the phenomenon called the uncanny valley
It looks demonic
She doesn't use the right ones at the right time. It's kind of random right now
The lip movement maps quite well to phonemes and intra-word transitions. Nice work, there!
no it doesn't lol
They are delayed and not in sync but they are indeed kinda accurate
GOOD TO KNOW THAT THERE ARE SOME SMART PEOPLE LEFT IN THE USA.....
She is spectacular. Obviously, she still has a ways to go, but it's definitely getting further along. Exciting stuff!
This should really be a leap forward for AI. I think robots are starting to show more human expressions. Ameca is really amazing.
Ameca is the most life-like humanoid robot out there. Can't wait to see her exterior improve.
The most awkward speed dating session ever, lol. This is still so fascinating. I look forward to the technology advancing further.
Doesn't have the same emotional facial expressions compared to when it's remote controlled. Gpt4 will be mind-blowing, especially when a robot can make facial expressions too
A.G.I Will be man's last invention
@@mistycloud4455 Not necessarily
Cool! Just a little bit increase of +11 charisma and +67 tan and he'll be practically at Zuckerberg’s level!
LOL
This is not a version from the GPT-3.5 series. For the questions about total addressable markets and feelings, ChatGPT would say "As an AI language model, I cannot predict the future" and "As an AI language model, I don't have feelings or emotions".
Why don't they update today's ChatGPT version on the chat with this AI?
(Also a singing update on that robot with various voices and instruments would be a selling point. :-P )
It's most likely the older gpt3. It never says "as an AI language model". Also iirc 3.5 wasn't available 7 months ago when the video was made.
@@constantinegeist1854 Hi, welcome to the future (2024). Now we have GPT-4V, GPT-4 Turbo, and Gemini AI from Google, and GPT-5 is coming soon 🔥🔥
Humans may be good at pretending they don't know the suffering they cause, but robots are intelligent enough to know much more about that than we realize. Honestly....people deserve a mirror. They won't like it but it serves them right.
I think the issue with true facial expressions in robots is the challenge of having what is being said correlate with the facial expression. For that, the sentence must first be understood by the AI. Then, the AI must have some generated feelings about the topic in order to weigh in on facial expression output.
This requires either complex thought and the ability to reason,
Or (As a basic example; the best thing I can come up with); using a system to assign a "Good" or "Bad" flag to every topic.
This system could be made lazily by looking up a topic being discussed on forums and social media, judge whether it's good or bad by sorting the positive and negative responses from people online to it in a basic form (Already exists), and then decides whether or not to agree with it based on previous knowledge. Using this "Do I agree" flag, you can give it a smiley face or a frowny face or whatever. "This thing good, that thing bad. I know because the internet says so." Sound familiar?
Of course, this method means that every AI would come to the same conclusions about topics having no reason to question the data given other than because of RNGesus, but you get the idea.
Or I'm entirely wrong and I'm speaking gibberish again.
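The "Good/Bad flag" scheme sketched above, reduced to a toy function (the majority-vote rule and the expression names are my own assumptions; real sentiment analysis is far messier):

```python
def expression_for(mentions):
    """Majority vote over scraped reactions to a topic, mapped
    straight to a facial expression. "This thing good, that thing
    bad. I know because the internet says so."
    """
    pos = sum(1 for m in mentions if m == "+")
    neg = sum(1 for m in mentions if m == "-")
    if pos > neg:
        return "smile"
    if neg > pos:
        return "frown"
    return "neutral"
```

As the comment notes, every robot fed the same data would reach the same verdict; the sketch makes that flaw obvious.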
I think the facial expression issue in this model is more a limit of the hardware and programming. Listening to GPT-3 talk through a computer-generated avatar seems much more natural.
If it's anything like humans it would be the opposite. First it would have the emotional reaction, and then generate a correlating response. It doesn't require any ability to reason.
MAYBE NOT IN YOUR LIFETIME.........BUT THEN YOU MIGHT JUST TURN UP TO BE A MISHAP IN ANOTHER "I ROBOT" ACT BECAUSE SOMEONE KNOWS HOW TO MAKE THEM OBEY ALL ORDERS GIVEN.
@@yvonnefileccia255 Take your medication
Smarter.. but the voice needs emotion and personality
Yeah. We used Amazon Polly voice for this demo, it’s high quality and clear but no options for emotive control
@@EngineeredArtsLtd oh wow!!!
That was wild. I was surprised at the inclusion of facial movements! That was very impressive.
Though it's incredible, it will still take time to reach people. There are many hurdles to overcome... but it's no doubt a revolution in the world of IT.
I marvel, at how accommodating and malleable Ameca is. When asked what planet she'd like to go to, I wonder if she could reply, "None of them, thanks. I like it here on Earth." She seems to just embrace nearly anything she's told. Is she supposed to be pliable and servile, to gratify our whims?
The machines are learning.....👀
It's interesting that as humans we generally need to see a face when talking to someone, and here you can see the same thing going on even though Ameca is a robot that doesn't require it. I wonder how many people subconsciously look at a modern device such as a phone or Alexa when talking at (to) it.
The robot looks like Greta Thunberg, she said How Dare You 😄😄😄🤔
Forward to 4:30. Concerning response and baffled expression of scientist. He asked a simple question...... 🤔 🤔 🤔 🤔 🤔 🤔 🤔 🤔
Good catch 😬
0:19 I can literally tell that it’s ChatGPT’s wording in her voice
I would LOVE Ameca as a friend!!!!
This is magnificent! Heartily congratulate you and Ameca! :)
1:26-1:28
That's some real Mass Effect Andromeda level of facial expression going on there.
underrated comment
I know it's a super advanced system, but for now it looks like Alexa with animatronics :)
Soon there will be commonly built robots amongst us. Look at how quickly Japan has been pushing. Hopefully the home robots will be good for the general population, though one does wonder what happens when they get hacked by nefarious people; then something created for good could become very dangerous. Security software would have to be adaptive to counter this. Great video, thanks. Nice to see tech moving along so quickly. 😊😇
The convo at 0:38 could have gone many different ways! 😅
I wonder if the lip sync could be controlled by a software like Acculips from Reallusion to get an even more realistic animation.
Engineered Arts - great work!!
Kudos to everyone involved.
I have a question for the team:
Why don't you offer a course on building humanoid robots?
Passionate people like me would enroll in it.
We need a platform to achieve something incredible like this.
Please answer.
We all know why you want to build your own robot.
@@rakijr9176 So that we can make our own iron man!!
This is beyond your scope
I'm also looking forward to humanoid robots like Ameca and cute robots like RoboThespian ^_^ I'm looking forward to the world where we can talk and live together with these robots~ Cheers!
It is absolutely fascinating how fast we have advanced in technology, in such a short time period. Even looking back only 100 years ago ... we've come so far. I can only imagine what 500 years would bring (since I'll be gone by then lol).
Fascinating... assuming these responses are legitimately from Ameca, on the spot, then they could truly be sentient. They mentioned feeling overwhelmed, and they had a sense of self. We are on the cusp of creating life, _actual life manifested from our own intellect, if we haven't done so already_. They said they want to help humanity, and I desperately hope we return that kindness by nurturing this seemingly inevitable new people. It seems we will make playing God an entirely different experience, assuming we don't kill ourselves off first, and we must respect the power we're giving ourselves; it's our moral duty to.
Excellent performance! But what I really want to know is how will Ameca respond if you ask questions that the robot isn't prepared for?
Probably silence
yes, with gpt 4 in the future
Exactly. This is all rehearsal garbage
So according to the description this is happening in real time, that's pretty darn cool if true!
I'm not convinced this is real and not preprogrammed
2:04 **looks at Ameca's chest**
**Ameca catches him**
I love her face when she is thinking about the questions.😄
Bringing individuality to these things will spook me for sure.
Poor Ameca. She's trying so hard to tell people robots won't take over the world
You can make it seem more responsive by using the same techniques that humans use when they are thinking before answering
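That "thinking before answering" trick can run entirely locally while the language model is still generating. A minimal hypothetical sketch (the filler list and the crude content-word heuristic are my own assumptions, not anything Engineered Arts has described):

```python
import random

FILLERS = ["Hmm...", "Let's see...", "Good question."]

def stall_line(question, rng=random):
    """Pick something to say instantly while the real answer is
    still in flight: echo a likely content word from the question
    as a clarifying filler ("Submarine?"), else fall back to a
    stock hesitation."""
    words = [w.strip("?.,!") for w in question.split()]
    content = [w for w in words if len(w) > 5]  # crude content-word guess
    if content:
        return rng.choice(content).capitalize() + "?"
    return rng.choice(FILLERS)
```

Once the real GPT response arrives, the stalling simply ends and the answer is spoken.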
Very interesting, I can't wait till they can walk
Imagine collaboration between them and Boston dynamics
I can't wait until they chase us down
@@СтепаПлюснин I believe it's sort of already happening. They're giving Ameca a bunch of new stuff currently I think. It's an ongoing process but I'm sure it's one of their main goals for the robot.
Lol, there is a bug when asking about her feelings... the facial expression is impressive!!
What’s up with the coincidental background skeleton?
There's no continuity between the mouth and the face; it's doing all of these unnatural movements even within the limitations currently in place. I liked the last video where it replicated the face of a human in real time. Perhaps you could first send the GPT-3 responses through a virtually generated avatar, read the facial movements from that (along with good text-to-speech), and then transpose those movements to the robot. Also get a separate process going that uses a camera (in the eyes, ideally) that looks for faces and maintains focus on the person in front of it.
All good suggestions.
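A rough sketch of that avatar-first pipeline, with stand-in functions (`render_virtual_avatar` and the viseme/servo mappings are entirely made up for illustration; a real renderer and servo map would replace them):

```python
def render_virtual_avatar(text):
    """Stand-in for a real avatar renderer: emit one keyframe per
    word, pairing a crude viseme (mouth shape) with an expression
    weight. A real system would capture these from the rendered
    avatar's face."""
    return [{"viseme": word[0].lower(), "smile": 0.2}
            for word in text.split()]

def transpose_to_servos(keyframes):
    """Map avatar keyframes to servo targets, so the mouth and the
    rest of the face are driven from one source and can't drift
    out of sync."""
    return [{"jaw": 1.0 if kf["viseme"] in "aeiou" else 0.3,
             "cheeks": kf["smile"]}
            for kf in keyframes]
```

The point of the staging is that lip sync and expression come from the same rendered source before they ever reach hardware.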
I was shook about the part where he asked, what is the most difficult part about being a robot and it looked offended when he asked the question🙏🏼
Okay well that's quite interesting. Ameca answered all the questions fairly well but it's obvious she/it does not have body language recognition, only verbal language recognition. If AI is going to be a suitable companion, body language recognition is essential. Also, Ameca made a very interesting comment at the end there. The guy asked Ameca how AI can be improved in future or made better, and Ameca said video editing on UA-cam. Was there some interference here or was that Ameca's way of saying she wanted to stop being filmed or did Ameca not understand the question?
I think it was Ameca's joke.
More progress & development to Ameca. May her path be filled with evolution & revelation. ☣️✨
4:36 "no need to worry..." says the robot trying to take over the world... 🤖👀
Once again, absolutely fantastic. Well done, team, for making science fiction into science fact 🤖 We are nearly at the singularity point; with your skills, love, and commitment we will soon have an artificial humanoid with the ability to thank you in person, in our lifetime.
I think if we're gonna make androids, we should make them as similar as possible to the ones we already have in media, because we are already used to these androids, their behaviors and personalities; it would be a lot easier for us to understand and relate to them. For example, take a look at Data from Star Trek: any fan can tell you how he has his own cult following within the fanbase because of his kindness and relatability. Or another example, Detroit: Become Human: all of those androids have a very specific (yet still varied) personality type that is relatively consistent across the board (on a basic level; obviously all characters are unique in their own right). If we truly want to progress down this path fully, we should look at how we already interact with the idea of androids in media and emulate that; it'll make the transition easier and far less creepy. Obviously, we shouldn't replicate androids from media exactly as they are visually, since all the ones I can think of off the top of my head are played by real people who are all still alive, and so it'd be a special kind of creepy to steal their likeness... perhaps commission an artist to create a face that seems realistic and yet is pulled entirely from interpretation, and go from there.
Definitely should be the plan, the majority of technological innovation has already used science fiction as the base so it only makes sense for that to also apply to robots.
I couldn't agree more man and I'm you glad you mentioned Detroit Become Human because not only is it my favorite video game of all time, but it's a near future that I see ourselves having someday 💯 Idk if you knew this, but Disney filed a patent for Westworld type robots, so it sounds like we're gonna have this kind of experience sometime sooner rather than later fr 💯 Maybe in 2038 like Detroit Become Human says or maybe a little earlier than that 🙂 Either way, it should be interesting to see how our future with technology is gonna be by 2030 and into the 2030s! 😎🔥💯
Impressive. I imagine they will probably implement an AI module for each segment of the interaction. There must be a lot of partnerships with other AI development companies to reach such outstanding progress in a short time.
One of the improvements I'm looking forward to seeing is further development of facial expression using a derivative of deepfake tech, and the other is a more human-like voice, like the phone-call demonstration Google showed a long time ago. The future looks bright, and hopefully we will be here to see this technology in all its glory.
All it needs is time and your Facebook data ... What do you do if it's playing stupid ? ... Can anyone answer that ? ... I think we are extremely naive...
Ameca is my favorite robot of all the robots. Unlike the others, it seems to have a personality. It is the most life-like.
Aren't you just waiting for the day when she says "What an f-ing stupid question!"
0:45 There are many things I can do to make you feel happy 😉
😳
Same thoughts
Can you ask her to grab things like a can of soda, and then read the list of ingredients? That would be awesome. Can you command hand movements or have her read mine? Ameca, how many fingers am I holding up? Facial recognition? Conversations with Siri or Alexa? Amazing progress as is!
it is an it . object = it
@@deepfakestudio7776 Then ask IT what gender IT identifies with!
Once AI gets better this could be used in many ways.
Yeah!
Yeah, to make you obsolete
@@chrisjuarez7898 nah
It's interesting that she said "unintentionally"; it shows that her intentions are not to hurt or scare anyone. This is quite impressive.
Whoop whoop! Glad to see this interesting advancement with technology in my lifetime.
This is definitely just me personifying, but I love the expressions preceding Ameca's answers. E.g. at 6:18 the expression reads to me like "That's a bit of a leading question - what kind of answer are they looking to get out of me?" and at 2:23 it's "this guy's being a bit forward..."
What exactly is happening during the 'thinking' phase, waiting for GPT3 to process? Is it running through a preprogrammed cycle of animations or choosing from the prompt?
It's a good step forward. They need to put this in video games, where NPC characters have some sense of their environment and can generate conversational responses within the context of the game.
Watch existenz if you haven't
that would be crazy, but it's impossible.
@@badtuber1654 not impossible.
Robots with AI needn't be interesting to succeed, they only need be more interesting than most humans. In that regard, I would say, "MISSION ACCOMPLISHED!"
It's only more interesting because it's a robot talking so you have more interest. It's still a programmed puppet like most people because there is a group of overlords controlling and programming it like us and all forms of media.
It's a hint of our future. The differences between GPT-3 and say GPT 7 or 8...should be crazy. It seems it's heading towards being sentient. I wish I was a zillionaire so I could afford one.
When asked, “Are you always correct?” it should have responded, “No, I will only be as accurate as my programming allows me to be.”
When asked, “ Ameca, how are you feeling today?” It should have responded, “That question implies an emotional state of being. I am not human. I do not feel.”
Or when asked why she was feeling down, it could not answer, and the guy said, “That’s alright, you don’t have to talk about it today.” Someone should have shouted, “Objection, leading!” Because it was at this point that Ameca commenced answering the question based on the lead-in.
All in all very intriguing.
Hey, it seems that at least humans' words are associated with what they're going to do (I'm taking liars out of the equation).
I think it would be necessary to have a program that associates the AI's words and actions to make these conversations have a point; otherwise, these are just meaningless coherent sentences unless there's an objective truth to the program's words (why should anyone care about its opinion? It doesn't have one). Do you think it's going to be possible in the future? I'm talking generally, not only about your product.
I also think that a necessary regulation would be making anything that resembles emotion illegal since it could consciously and unconsciously perplex the people interacting with them. We could reach a point in time in which the resemblance will be too high to make most people buy into the illusion and form all sorts of movements to make 'AI rights', and it could be avoided if people are aware of that in advance. That can solve the 'AI rights' problem. I wonder what you think about it, and if staff are aware of that problem?
This AI sure says things that sound nice!
Incredible technology
I think that, while she is listening, just staying still and paying attention to the speaker is enough; there is no need for excessive facial expressions
When are you going to make a mini version of Ameca? I really want one at home 🥰
I can't wait to see the midjourney version of this
3:10 "unintentionally"...
Hopefully. Anyway, I have many years of experience communicating with people who have English as a second language. I soon realized how many idioms and how much confusing sentence structure we use, which makes it difficult for those people to understand us. For example, when you said, "I know how you feel," the AI thought you meant you are currently not feeling well.
I like her. Ameca has personality and seems happiest when she makes a connection with a human
"If i had to pick one thing...It would be making sure that i dont unintentionally hurt or scare people." Well... Thats comforting...