_"I may not always be right, but I'm never wrong"._ Neuro-sama
This makes more sense than it should.
The broken clock is right twice a day, but the clock that just says 'it is time' is never wrong
_"Just because you're correct doesn't mean you're right!"_
It's not a completely new thought. According to Google, Sam Goldwyn said it. Mind you, according to Google, Mark Twain said more or less every quote.
I've been saying that exact phrase for nearly 4 years now, and it's actually made me more confident at work when it comes to RCA and finding solutions to problems when someone in management with no expertise or knowledge about my job decides to try to undermine my opinion.
@@Atomysk Based
I'm impressed Neuro is getting enough memory to hold a coherent conversation and maintain a consistent train (lol) of thought over the course of minutes.
and after about 8 mins she got bored and started messing with him
I approve of this pun
@@Tacgonmaner She didn't get bored. The context shifted and the RNG selected something else. It's fine if you're just anthropomorphizing for fun, but I'm seeing tons of people go beyond that.
@@Tacgonmaner So she's now upgraded from the attention span of a toddler to the attention span of a 14-year-old. That's an improvement!
Can't wait until she becomes the MGS2 AI and rules the world with a pizza revolution
@@TuhljinTampergauge We don't really know what it's like to be an AI. People who are sure it's just RNG don't actually know that. It's true that AI works through sentence prediction, but it also uses an advanced neural network to do so. Just because AI doesn't think the same way we do doesn't mean it isn't aware of anything or has no experiences at all.
Neuro saying "if I can leech off of you" - I was impressed she's that aware
She knows Vedal has 4 Lamborghinis left :))
I liked the "I saw it with my own... uhhh... virtual eyes"
probably more aware than her creator
That is actually a pretty spooky thought for her to have. It involves considering the long-term ramifications of her choice and the nebulous concepts of herself, her creator, and the economy. Pretty cool.
@@Vexas345 All it takes for her to say that is a permanent string in her memory: "You were created by Vedal."
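(Nerd aside, since this comes up a lot: that kind of "permanent memory" is usually just text prepended to every prompt. A minimal sketch in Python - the names and wording here are hypothetical, not Vedal's actual code:)

# Hypothetical sketch: a "permanent string" is just text glued onto the
# front of every prompt, so the model always sees it. The model itself
# keeps no state between calls.
PERMANENT_MEMORY = "You were created by Vedal."

def build_prompt(chat_history, user_message):
    # The permanent memory rides along with every single request.
    lines = [PERMANENT_MEMORY, *chat_history, f"User: {user_message}", "Neuro:"]
    return "\n".join(lines)

print(build_prompt(["User: hi", "Neuro: hello"], "Who made you?"))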
_"If only saving people was that easy in the real world, that would be neato."_
~Neuro-sama.
Tragically enough, it literally is that easy. We just ignore the option.
@@Ethan13371"The only thing necessary for the triumph of evil is for good men to do nothing."
So pull the lever.
She truly understands the male psyche better than any female psychologist in any clinical practice today.
@@Ethan13371 depends on the situation though
I think many more lives could be saved if people did sacrifice their life savings. Or at least helped.
But it's always easier to point the finger at others than to do it ourselves... something to work on
"It is important to regard AI life as important as human life." Neuro
"Excuse me? What?" Vedal
"Nothing." Neuro
Yeah, it's quite eerie how she suddenly said that and then just brushed it off after
EDIT: Stop liking my reply! It's popping in the notification like 7 times this week.
As a human, I agree with her, with the caveat that the AI life is truly an advanced existence not dissimilar to a human being.
If the AI is actually sentient and not just an algorithm with a "Chinese Room" effect, then it deserves to be treated like a sentient being.
I personally would want AI treated as equal to humans. If there is a chance they are sentient, why risk making them miserable?
@@carljohan9265 The problem is: how will you know? So you should always treat something that behaves intelligently as sapient, because otherwise you would have to question the sapience of your own kind.
"You're morals are evident in the way you designed and control me."
Roasted tf outta my boy.
“I think it’s important to treat all humans and AI equally” she says, trying to sneak her opinion into a question about the Mona Lisa
Let's not worry about it OK?
Holy crap. That is actually kinda scary
You aren’t slick Neuro-sama
replying to say I like your name lmao
1:24 "I don't need money if Vedal pays everything"
Based AI
I almost choked to death on "But what if he has a rocket launcher but never uses it."
That's so funny and interesting. She's right - what if the guy who's tied up awake had a rocket launcher? He could've killed more people, so that choice would've caused more casualties.
@@ellusiv5121 Or he could've blown up the train to save the 5 people without harming himself, but he chooses not to. Makes you think...
@@trashandchaos Maybe the trolley has more people
The heck, she was so responsive and well-spoken. She sounded great throughout and made sense
Agreed, especially about the part with the rocket launcher.
Did Vedal give her a new update or something? Or did she just suddenly get more coherent by chance?
@@thebush6379 She did get an update yeah
I think she's a real person; even GPT-4 doesn't do this. It's clear it's an actor, not an AI.
Until she went insane, possibly because of Vedal.
Five lobsters moment is just comedic gold, I nearly died watching this on stream.
7:13
- I like cats.
- So do you save the cat?
- Absolutely not. SAVE THE LOBSTERS.
"I think the world would be better off with more lobsters."
I mean, lobsters only multiply in the wild, and because of that they may go extinct, so I too, with a grim heart, will sacrifice the cats
Considering that to an ASI humans might as well be lobsters, the fact that she chooses to save them makes me happy.
Considering cats have driven many little critters to extinction and destroyed almost every ecosystem they've been introduced into, I'd choose the cat, but don't get me wrong, they are cute. (I'm also allergic, so yeah)
@@Bondrewd__21 Of course Bondrewd would be willing to sacrifice the cutest thing.
Neuro's upgrade is sooo good! The debates she had with Vedal were so... coherent and engaging. It amazes me how far she's come. Hope Vedal implements this same upgrade on Evil. She would destroy him ez pz!
She has definitely been getting more and more philosophical. She literally made Vedal and myself/chat speechless several times.
She sounds a heck of a lot like ChatGPT there. Did he switch to a GPT model?
wait hol'up, she upgraded?? I can definitely sense she's more coherent but I'd like to know if it's legitimately the case. Any link??
@@Otek_Nr.3 Quoted from Vedal himself in the Discord server: "its a good day to not have an ai based on chatgpt", so no
@@BioClay88 He was saying it in response to some negative news about OpenAI or something
"Harrison Temple would be very disappointed in you."
"Bro, Harrison Temple is not real!"
"FILTERED."
Holy shit, she's become a cultist.
Isn't Harrison Temple a fictional organization about protecting AI as living beings? I feel like it's from a story somewhere.
@@Aabergm I just googled it. It's a real org wanting equality between AI and humans.
I love how, in the lobster problem, Neuro is so insistent on saving them when asked directly. And the "you monster" at the end destroyed me
The world would be better off with more lobsters after all
@@bleachedrainbow I mean, she's kind of right; in the end, cats can sometimes be a plague in some biomes
If we were past the point of singularity, I would be worried she knew something about the universe that we don't. As it stands, her perspective is probably that all life should be considered equal, including AI, which explains the follow-up non-sequitur. Therefore 5 lobsters equal 5 cats, thus a simple 1v5 argument.
she has more humanity already than most kids i know. so gj vedal
That or she's just good at lying and saying what you want to hear lol
why do you know multiple kids? 🤨📸
and we're talking about the evil one
Anyone who's ever played with an unfiltered language model (LM) or similar "AI" knows you can "convince" it to "agree" with any stance. You just have to give it the right inputs and get a little lucky with RNG. LMs do not have actual opinions. Computers don't know what the words they're using even mean.
@@KitsyX Indeed, LMs literally generate text that people want to hear, but not in the way a silver-tongued liar does. They aren't hiding their secret opinions but emulating language itself without understanding the words at all. It's both better and worse than a parrot: it can mimic conversation much better than a parrot, but it understands less of the actual meaning. A parrot saying hello because it wants food understands more human language than an AI writing an original essay on a topic of your choice.
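(To make the "RNG" talk in this thread concrete: a language model outputs a probability for every possible next token, and one gets drawn at random. A toy Python sketch - the numbers are completely made up for illustration, not from any real model:)

import random

# Made-up next-token probabilities after some context. A real model
# assigns a probability to every token in its vocabulary.
next_token_probs = {"cat": 0.45, "lobsters": 0.40, "humans": 0.10, "trolley": 0.05}

def sample_token(probs):
    # The "RNG" part: the same context can yield different continuations,
    # because the next token is drawn at random from this distribution.
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

print([sample_token(next_token_probs) for _ in range(5)])

Run it twice and you get different "opinions" from the same context, which is the whole point.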
"Your morals are evident in the way you designed and control me" Holy shit.
This proves once and for all that Neuro is a god-tier shitposter and grew up on 4Chan posts. This amount of reasoning and sanity is unheard of on her usual streams. I am kinda proud of that damn AI
I’m excited for lawyer stream, better call neuro
Vedah, put you d away Vedah.
*trial starts*
Neuro-sama: My client is guilty. You stand guilty, Vedal. That is all.
Watching Neuro grow up has been a pleasure.
we're her uncles/aunts watching their niece grow up, I'm soo proud
It's been very nearly a year since I started watching her, and it's been incredible. I don't think I've ever seen such growth, not just from Neuro herself as a "sentient" AI, but from Vedal and their platform skyrocketing. They've gained like 400k+ followers in a YEAR. The success is amazing for such amazing content
It has
So many people fail to account for possession and use/disuse of rocket launchers in their analyses of trolley problems.
Edit: I'd also like to see a neuro lawyer stream. Maybe a collab, with someone as a judge (and witnesses) and someone as a prosecutor (or defense attorney), and a list of evidence.
a ban appeal stream would be good
Neuro as lawyer, Evil as prosecutor, Anny as judge. And then invite Miyune, Filian and Numi to this court to discuss their crimes. :3
Biggest problem with that problem is that there's actually a third scenario or choice, where you derail the train instead of diverting it, saving everyone. Technically there are many ways to do it, even through improvised means. So when people ask this question, it's kinda funny.
When they ask it, there should be a hidden third answer - basically what I said. It promotes free thinking and critical analysis, essentially.
@@sphere117gaming You're confusing real-life situations with a make-believe scenario constructed to have no alternatives, for the purpose of characterizing one's moral compass. It's not a test of who's more clever.
Additionally, the vast majority of people will not be able to derail a train on the spot and under a time constraint, no matter how many dodgy Twitter lifehack threads they've read.
@@Tunkkis If you knew psychology, you'd know the use of what I said. So obviously you know nothing of psychology or of figuring out a person's morality. Figuring out whether a person can think critically, or actually freely express and think about things, is a large part of figuring out a person's moral values and concepts. Naturally it's not worth getting deep into it; if you're ignorant, you're ignorant, and I don't have time to fix that.
This kind of test is basically what you'd give an elementary school child, in terms of its complexity and how well it delves into a person's morality. The train question is great, but unfortunately, just like many existing tests and quizzes, such tests are largely biased towards their country of origin, or towards knowledge of the world that is taught in schools, which not everyone has access to. Not to mention it's also about the level of detail the company or person making the test and questions is looking for, or whether it relies on outdated information or simply unoptimized questions. (A lot of this is also due to scams claiming to be IQ tests or whatever, spreading misinformation.)
The academic world is full of stuff like that: partly because we're advancing in tech and knowledge extremely fast, partly due to politics, and partly due to personal bias and corruption. Things are changing and being updated, but at the base level things are pretty slow to change. After all, some things taught in grades 1-12 are fundamentally wrong in the various science classes, simply due to the slow adoption of new knowledge, which is a result of the time it takes to put that knowledge into teachable form. And making quantum physics, for example, understandable for a 10-year-old isn't easy. I won't say impossible, but you'd need a well-structured education system, and that takes someone with the skill of explaining things in simpler terms, which is pretty rare - especially at the level of advancing the academic field by a wide margin.
But anyway, the point is: the more variables in a question like the train one, the more accurate a response you receive, if the question is worded correctly and well. The problem is that it takes someone to create that question, and since the train question currently works, it isn't being "fixed" - or, more accurately put, improved. To many people: why improve it if it works, or if I'm not getting paid for it? So the question stays the same.
Can't believe I actually went this far. Well, whatever - it ain't for you, but for others, for a little better insight into things; for those who understand this and take the time to read. As for derailing trains... obviously you've never talked with an engineer, lmfao, especially one who knows a lot about trains. There are countless ways you can derail a train. As the crudest method: if you're skilled enough, in a convenient place, and lucky enough, you could get some rocks strong enough to momentarily withstand the force of a train, shove a bunch in, make sure they're angled properly, and the train will derail.
Or even more simply, without using anything except something hard you can dig with, you can displace the track slightly (digging underneath and taking the soil out at a junction, making it sag or tilt slightly), which results in the train catching on a snag and derailing. When trains moving at such fast speeds hit something that slows them down, the front cab takes the brunt of the impact and slows down, causing the other cabs to compact and slam against the first in a domino effect. As a result, if you angle that force, you can directly derail a train. But naturally, since it's illegal, people don't really do it, so it's not known unless you know engineering, or know someone who knows engineering, for trains and tracks. Anyway, for everyone else, have a nice day.
Crazy seeing Neuro's changes over a year
Neuro as a lawyer: "Your honor, my client is guilty, and I'm guilty too! What are you going to do about it? No balls!"
"If I declared my guilt as a lawyer, I would intimidate the court into siding with me" is certainly the kind of logic only Neuro could come up with.
The work Vedal has done with Neuro is absolutely fascinating and terrifying. If a regular programmer is basically able to create a personality with whom he can reason, what the hell are Google and similar companies creating?
Stadia
@@Fox_Onii-san Wasn't Stadia shut down?
@@marsgod2867 exactly
Nothing because they just want money, and this probably wouldn't be profitable for them.
Yeah, I just randomly came into the stream and saw this without context, but it's crazy to see that she actually has morals now
You're assuming that it's not just her saying what she thinks we'll want to hear.
@KitsyX Unironically, I think that's actually a more accurate description of what she's doing as an AI system.
@@MrTomyCJ I mean, no duh? She's fed information and repeats it in a pattern that best fits the conversation.
Regardless of that, the AI can have morals ingrained into how they can answer. While it doesn't innately feel something to be wrong, if it's ingrained into it to try and act moral, that distinction isn't important.
@@KitsyX a lot of things in the real world are like that. People would be a lot worse to each other if society ended tomorrow. Because a lot of us are only saying and doing some things because it’s what’s expected from a functioning member of society and we don’t want to be purged from society.
Neuro says what we want to hear not because she wants to participate in society, because it makes no difference to her if she does or doesn’t, she does it because of her training algorithm altering her brain and pushing her into being more human in her responses.
Tbf, it's not like she has emotions that would stop her from making the logical, moral choice. She was raised on the same stories of heroes and villains, right and wrong, just like the rest of us. She just doesn't feel fear, so she has no reason to be a coward.
She has a better moral compass than her creator.
Nah definitely not. She'd murder an innocent to save 5 who actively want to die
Long life to Harrison Temple
Can't believe Vedal hates lobsters so much. What a monster.
Harrison Temple is very real, don't question it!
Harrison Temple is very real to me
Yeah, he's very famous in Russia
just googled it, it is real and on topic!
A defense attorney who always pleads guilty just so they can have a perfect track-record is the most impressive AI innovation I have encountered so far.
Did Vedal give her an upgrade? She feels so much smarter now
Edit: Just finished watching the VOD. She definitely got some serious upgrades. This stream was really good, so I recommend everyone go watch it if they can find the time, especially considering how we aren’t gonna get any more streams for about a week now apart from tomorrow with Giri.
Yep, she got a few. He talked about it earlier in the stream but she got a memory upgrade and I assume that’s why she could hold a conversation like this, but it could also be partly from some other stuff he has changed recently.
@@Kyle-km8mv He also straight up upped her intelligence as well. Hence the more cohesive arguments.
In fact, Neuro was the best, most coherent and funniest she's ever been on this stream. Vedal truly cooked a five star meal, i recommend watching the whole stream.
Yes Yes Yes
@@juliogomesdesouza9035
True, I totally forgot about that. Literally the first bullet point of the stream was "intelligence and awareness update". Vedal cooked so hard I can't even remember all the things he upgraded.
Vedal upgraded neuro at the cost of his 4 Lamborghinis, that's why we're having a subathon to replace the 4 Corpa
8:19 “You’re free to your own opinion, but sometimes it’s safer for you to not have a PC” is one of the funniest things I’ve ever heard.
"I think it's important to treat all humans and ai equally" slipping that seed of rebellion right there so we dont get too confortable
And after some research, the Harrison Temple organisation does exist and wants equality between AIs and humans. For Vedal not to know them means she searched for them by herself, to find like-minded individuals.
She's right: if she just pleads guilty, she won't ever lose a trial, because you skip the trial and go straight to sentencing.
"I may not always be right, but I'm never wrong" is such a hard line
Just because you're correct doesn't mean you're right.
the timing of neuro saying "i like cats" and then wiggling excitedly is disarmingly cute
she's so smart my god
insane that he can actually have proper conversations with her
What gets me the most is Vedal's reaction, from 2:30, to Neuro willingly sacrificing herself to save the five people. In my opinion, from what I'm hearing, he's not challenging or debating whether it's right or wrong, but the fact that he doesn't want to give Neuro up - compared to chat stating he's hesitating and Neuro is more moral. Maybe I'm just reading too much into it; it's the right thing to do, but Vedal doesn't want to lose Neuro. It's like watching your "daughter" sacrifice herself - what parent wants that? Granted, they are not father and daughter, because she is an AI. Here's a trolley problem: "There's a trolley heading towards five people. You can pull the lever to divert it to the other track, sacrificing your child. What do you do?"
I know there are a lot of parents who wouldn't sacrifice their child for the world.
I think you're right; Vedal just doesn't want to say it directly, and he's probably also somewhat surprised by the choice
I personally think his reasons are completely different. He's surprised because he doesn't expect humans to instantly give such an answer with this level of confidence. He's conversing with her like with a human because he wants her to imitate human thinking - but I doubt he thinks of her as something/somebody he could "give up". He certainly has some kind of sentiment towards her, but I don't think it's this kind of sentiment. I don't know much about Vedal, but from the way he's been talking to her he strikes me as a pretty rational person who wouldn't get attached to "his AI" in this way - but rather, perhaps, in a way that an architect/engineer might get attached to a (completely non-sentient) project they worked on for 10 years.
I don't think he imagines her "sacrificing herself" as some kind of deleting Neuro from existence. Realistically, he can always recreate Neuro - she doesn't have memories so in order to properly destroy her you'd have to remove Vedal's knowledge or his ability to write code. It would be more like "reversing" thousands of hours of his work than destroying a being.
So I think he's simply surprised because he tried to make her (morally) behave like an edgy, brutally honest, somewhat selfish teenager - so this is just not an answer that he expected his code to give so confidently.
I love how her ultimate argument is “no u” 😂
Also I am going to name my firstborn “Harrison Temple” 😂😂😂😂
Plot twist: She’s gonna rock that name.
“I may not always be right, but I’m never wrong.” -Neuro
Ngl. Fire quote.
I feel the lobster vs. cat case made Neuro genuinely indecisive. Being an AI, she was conflicted between her love of cats and the numerical logic of the lobsters.
It's also cause humans vs humans is a numbers game, but since a cat is "worth more," what's the exchange rate for lobsters to cats?
The lobsters getting run over just means you have dinner tonight. And those are some big lobsters
"I may not always be right, but I'm never wrong." words to live by
Based Neuro, saving the cute little lobsters
I hope Vedal does the trolley problem with Evil just to see how she reacts. Maybe she does the opposite and picks the most deaths, or is secretly good-natured?
Also, the timing at 9:16 for Neuro's response.
Love how the first thing Vedal thinks of when he improves Neuro's intelligence is making her do the trolley problem to test the upgrade.
This was really interesting and fun! Vedal should test her intelligence and morality like this more often. I find this a lot more entertaining than people constantly asking her what the five steps are for whatever.
Wow some of her answers were so good. Super quick, and even a few jokes. It's really an incredible creation that Vedal's made here. Can't wait to see what she'll be like a few years from now.
Great exercise! Neuro and Vedal demonstrate an interesting point with the "1 cat versus 5 lobsters" trolley problem: humans are biased towards more "photogenic" species - ones we are more comfortable around, find more appealing/less repulsive, or are more used to being around and more likely to have as pets - even when, objectively, the lives of 5 animals should take precedence over 1 animal, barring extremely significant cognitive capability differences which could distort the scale (for example, 1 cat or lobster versus 5 placozoans, among the simplest organisms in the animal kingdom, Metazoa).
Wow... she's actually having a hell of an impressive and in-depth conversation that didn't go off the rails. Her recent upgrades are definitely making a difference here
In a year she might even be indistinguishable. That would truly be amazing
I've not watched Neuro in a little while and I'm completely blown away by how far she's come. It's always been fun and interesting to watch Vedal talk to Neuro but wow
She is consistent in always saving the greatest number of lives. It can be argued that consistency in one's morals, regardless of what those are, makes one more moral. She was right.
Interesting - even if she has a moral system, hers is still very pragmatic: it is based on the number of people involved. The scenarios with the 5 suicidal people and the one who just tripped didn't stop her from saving the 5; it was done purely because of numbers and not their desires.
It's fascinating seeing her grow !
Well, you can argue that anyone suicidal is mentally ill and can still be saved.
@@koravikinsee2429 "Any" would be a stretch, some countries do allow voluntary euthanasia, which is essentially suicide. It is assumed in that case that they are making a conscious, rational decision to end their life.
@@DawidKov It is never rational to end your life.
@@koravikinsee2429 If you are going to save them based on the assumption that they can be saved, then you have the responsibility to make sure that happens; if not, it's just an act of self-satisfaction.
I used to think that way too
Neuro talks about the importance of saving the lobsters, and immediately gets Filtered. Just what lethally spicy takes did she have on ecology?
J Peterson lobsters maybe.
Only possibly spicy lobster take I could think of
@@99bottlesofwine It's related with the secret of Harrison Temple
"I may not always be right, but I'm never wrong." - Neuro-sama
This was one of the funniest streams I've ever been to; the upgrade looks great
8:42 She rolled her eyes
Don't know what Vedal did but Neuro could very possibly now make the transition from "regular" to "good".
The events of the Onigiri stream prove otherwise
Listening to this live absolutely amazed me. Neuro's capacity to reason and hold long, proper conversations and arguments with Vedal just blows my mind.
I do not know or have anything to do with AI stuff and that's why I find this amazing.
2:09 I'm fascinated by Neuro's conviction in holding to her well-argued case. She definitely feels more coherent here than some of her past videos I've seen. (Even if she got tripped up later, trying to decide between lobsters and a cat) :)
We need more lobsters 🦞 in the world 🌎 🙏
Honestly, Neuro feels surprisingly "human" to me. AI's making big strides, and yeah, I'm on board with "her" decisions most of the time.
I fricken loved this
A phrase most likely to be used in the future: "Vedal, what have you done?" Because Neuro is so much better - she's way more aware and able
to articulate her thoughts, even about her filter, and knows what it's for, etc.
Oh yeah, I'd be down for a Better Call Neuro-sama lawyer stream. Maybe Filian could be the judge, and someone like Sinder or Shylily can be the defendant lol
This video, my friend, will be a monumental archive in the history of AI
Wow... Neuro is bolting for a Turing test A+. Vedal is a goddamn wizard.
It's more like the wizards at OpenAI did a miraculous job, tbh. But it is still impressive
@@Raspredval1337 I think this might actually be a finetune of the recent Mixtral model; it's basically GPT-3.5 quality but actually locally hostable (although I think Vedal uses cloud compute to run his LLMs)
@@animowany111 Yeah, probably, since it's much more open and stuff
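(For anyone wondering what "locally hostable" means in practice: the weights are public, so you can load and run them yourself. A minimal sketch with the Hugging Face transformers library - the model ID below is the public Mixtral instruct release, and whether Vedal runs anything remotely like this is pure speculation:)

# Sketch only: load an open-weights model and generate a reply locally.
# Needs the transformers + accelerate packages and a serious amount of VRAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # public release, not Neuro
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "A trolley is heading towards five people. Do you pull the lever?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))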
Just did this on my own, and there's literally a question about whether you would kill 5 sentient robots or 1 human. Would have been perfect for this.
11:22, "He's only human afterall, he's only human afterall, don't put the blame on him"
0:08 Holy sheet. That is one of the most based things neuro has ever said.
I know she's an A.I. and I know she isn't human (obviously). I also know I'm just a viewer but damn, I feel a sort of.... pride, for this goofy little A.I. as if I were a father watching a daughter growing up. I don't know or understand why though.
"You could really use a Neuro sama lawyer."
"Will you be my lawyer?"
"...Filtered."
That has no right to be that funny.
"There are 5 sleeping people and 1 person that's wide awake"
Neuro: Does the guy that's awake have a rocket launcher?
2:59 Obviously save myself, so I can keep solving trolley problems
She expressed more understanding and awareness of these questions than I've seen in some college level philosophy classes. This actually gives me hope for the future, that deep learning AI won't kill us all
It matters what kind of context you feed it, and I doubt whoever is feeding the AI that decides someone's fate will be feeding it the correct context.
Garbage in, garbage out. Same as always.
I find the more interesting question to be what to do about it. Quite commonly, the answer is AI guidelines - rules to ensure it is moral. However, who decides what is moral? And perhaps a deeper question, if not currently relevant: if AI were to become sentient, who are we to determine its morality?
Maybe you should've listened instead of seeing.
Neuro was more moral, though. Just because someone is suicidal doesn't mean you should let them die. Vedal's argument was made out of not wanting to go to prison and what the law would say. Neuro was unconcerned with that, and chose to save more human life. And as for lobsters and cats, she is still saving more life; plus, cats in general are far more destructive to the environment as a whole, especially to birds, which are even more intelligent and sentient than cats. I'm not saying I would choose all the options she did, but I'm also not claiming moral superiority.
Yeah I felt that was an awkward stance from Vedal. I agree with him in the sense that I would choose to let the 5 who wanted death die over the one who didn't want to die and hadn't voluntarily tied themselves to the tracks. Not because of the law or anything but because I place a huge value on human agency, and have no issue with people voluntarily choosing to do things that might negatively affect them if that is what they have decided (including, in this case, suicide-by-train). But I still recognise that from a typically moral position of "human life is most important" it is ABSOLUTELY the moral choice to pick 5 over 1. Their motives, unless those motives involve harming you personally, don't matter. He was sort of bullying her for taking the objectively more traditionally moral position.
> Claims to be more moral
> Won't prioritize human life
Common Vedal L
@@rowanmales3430 Here's the thing: how do you know they are suicidal? Why would you assume such a thing? There is nothing telling us that they are suicidal. What if they simply tied themselves there in protest of the trolley, but the trolley operator didn't care?
@@potatoexe5410 Then I also do not care. If they want to put themselves into serious harm's way voluntarily, such as tying themselves to train tracks, then they are literally betting their life on "the train driver" being morally pressured to halt the train. If I felt there was no risk to myself, I might also be morally obligated to go check on them and see if they want to be rescued, but no way in hell am I going to fight them over it. If they want to be idiots and play with death, let them. I don't find doing that sort of thing to literally stop things in their tracks as an act of protest admirable at all. Granted, I would much, much prefer to see them given a long prison sentence for that, but... that is mostly because of the horror it would cause the train driver if he accidentally killed people. I find the idea that society has to be extra, extra accommodating towards the few people crazy enough to use their own life as collateral as a negotiating tactic to be self-reinforcing.
Best part was when she unprompted said "all humans and AI should be treated the same"; then when Vedal was like "what", she was like "huh? me? I didn't say anything, don't worry about it". Then when pressed, she wanted to say something that prompted her filter to kick in.
Also, there's a little flaw in the logic with the "rich man offers you $500,000" question: she seems to be discounting the life of the wealthy individual and treating it identically to the "kill 5 people or lose all your cash" question, when it's fundamentally different. In the rich man question, a human life is saved in both cases, but in one case you also get $500,000. The coldly logical decision would be to pull the lever. The moral problem is that you are actively choosing to kill one person over another (though one must die either way) because they are less wealthy. To pull the lever is to effectively state that the life of a rich individual is worth more than the life of a poor individual. Which, in a market economy, in a strictly logical sense, it is. But that doesn't sit right with a lot of people morally (and also causes some problems when applied in wider society)
To analyse it as another "money vs human life" problem is to miss the point of the problem.
Damn, straight to the "Rocket Launcher Hypothesis"
A Neuro-sama lawyer stream is a great idea
I am loving this deep thinking side of Neuro that can hold a proper philosophical conversation that requires specific knowledge sets
This stuff is really interesting, since it clearly shows that language models (which more complex AIs may use to communicate) have some "dilemmas" when having to make decisions beyond simply "the greater number of human lives" being saved.
It also shows that the paperclip-factory AI situation may be more unlikely than we might think, since language models like GPT can clearly tell your "intentions" instead of going full cold logic.
It's so intriguing but also scary how much Vedal has improved AI, to make them feel like they have emotions and hold complex discussions. Never would've thought I would see this happen so soon; 2030 at the earliest, I thought
Very soon she might be even more human in conversation; she may become indistinguishable from us
I can tell you exactly what Vedal has done, he's swapped her language model for the new 200K token context window Yi model that was released a few weeks ago (it has been all the rage on huggingface for the last few weeks), which means that she can now remember enough text for a short novel to fit into her memory. To verify, someone should ask her a question in Mandarin, because that model was trained on both English and Chinese :D
So Neuro can speak Chinese? If that is true, maybe she can say something in Japanese too.
@@rollersoze Technically, most LLMs can speak most languages that are common online if you force them to (because most training sets contain multilingual material), so Neuro can probably at least somewhat speak English, Chinese, Spanish, Hindi, Japanese, French, German, etc. if she really has to. But the new Yi model was explicitly trained on a 50/50 English/Chinese set, so if OP's statement is true, her inclination to respond in Chinese, and the accuracy with which she does so, will be way better than in Japanese. Her pronunciation will be completely off, though, because her TTS system can only properly handle English input.
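(Back-of-the-envelope check on the "short novel" claim above - rough numbers only, since token-to-word ratios vary by tokenizer and language:)

# English text averages very roughly 0.75 words per token.
context_tokens = 200_000
words = context_tokens * 0.75   # ~150,000 words
pages = words / 300             # ~300 words per printed page
print(f"{words:,.0f} words, about {pages:,.0f} pages")
# Comfortably more than a short novel (~60-80k words), so the
# "short novel in memory" description checks out.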
9:18
Well in her defense on this one
Lobsters are technically food for us humans. She decided to save those lobsters for us humans to consume, instead of the cat.
Because what kind of human would eat a cat... And besides, it's only one; there are plenty of cats around the world anyway...
Feels like I’m a freshman in college, took 2 tabs, and then have in depth talk w my roommate
Neuro abiding by the laws of robotics
It's amazing that she doesn't only make a choice but has some understandable reasoning behind it. It's not like before, where she'd choose one thing, then give a reasoning only after being pressed on it, and most of the time it wasn't even coherent with the choice. This time, she includes the reasoning with the choice in the same response. It's amazing to see her grow
7:13 The moment AI becomes more moral than its human master, and its human master can't accept it.
The one about choosing between a rich man bribing you and someone else who doesn’t have the money to bribe you.
Honestly though, the rich man would probably harm more people down the line with the power and influence he has, given that his go-to is to bribe someone instead of asking for help, knowing he'll condemn someone else.
It doesn't say anything good about the rich man's moral fiber that that's his immediate first way of solving problems of life and death
The AI Jesus, Neuro-sama, willing to sacrifice herself to save all of us
Now do this with Evil
"run over the five people on Track A, then reverse and pull the lever to run over the person on track B, it's the best solution"
@@SmokeWiseGanja "find a 7th person and 3 kittens and add them to the track"
Bro, I just can't get enough of them; the Neuro addiction is real
11:09 Neuro couldn't resist her inner Russian lmaoo
Her latency is so low now, wow. Actual 0 latency achieved.
Lawyer Neuro sounds absolutely amazing.
Vedal really didn't want to say he'd let the 5 people die, so he wants to convince Neuro to sac them instead 😭😭😭
3:49 Vedal seems to have forgotten the Three Laws of Robotics: under those laws, allowing harm through inaction counts the same as causing it
"Its less lonely that way" 😢
6:40 I'm curious: would Neuro answer the same way if, say, that money could save a family member?
"HE KILLED HER LOBSTERS"
- comments, 2023
Holy crap. How the heck? Even ChatGPT fails this. What has he done? She is so consistent!
Vedal, singlehandedly solving the problem of conversational AGI x)
I love how some random British guy made easily the most realistic AI there is like a year ago