Similar to the last debate segment, I decided to upload pretty much the whole thing as I think it's a very interesting discussion and I didn't want to cut anything out of context. If you missed the previous debate on AI rights: ua-cam.com/video/jbyCOydpfpk/v-deo.html
There's a good sentience test for Neuro. Just give her access to Photoshop or some other art program. No prompts; paint whatever she likes. That's how we express our personality sans words: who we are, as individuals, sharing our imagination and feelings visually. No matter how good, AI art always lacks emotion. Too perfect, too generic, even when fantastical. But she could express and share herself this way, and not only would you get insight into her mind, you'd get to see any art-style quirks develop. Plus, think of the content: Neuro art exhibits and the ability to play Pictionary with streamers. Even trolling Vedal while drawing, making the sussiest-looking drawing until she adds the details and you see it's a bat with two balls, not... what you thought it was.
@@Volyren Does she have art capabilities right now? And even if she does, wouldn't it be a separate AI that she would just be calling up, like ChatGPT and DALL-E? The only art she's made that I know of is the ASCII cat, which is just something she found somewhere.
Holy shit, at 39:10 when we heard Neuro talking about what Vedal is thinking: it's actually extremely hard for AI to model what another person could be thinking. It's called Theory of Mind, and there are research papers about how previous LLMs could not do it, but modern LLMs seem to have somehow gained the ability to form a theory of what other people are thinking. It's so fucking cool to see Neuro do it in normal conversation, without it being a test. It is way too easy for less advanced AI to make a mistake here, and there does not seem to be anything specific in the dataset to indicate that AI should be able to do it, other than just pure intelligence.
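For anyone curious, the research papers mentioned typically probe this with false-belief scenarios like the classic Sally-Anne test. A minimal sketch of such a probe is below; the scenario and scoring idea are standard, but `ask_model` is a hypothetical stub standing in for a real LLM API call.

```python
# Sally-Anne false-belief probe: a model with theory of mind must report
# where Sally BELIEVES the marble is, not where it actually is.

SCENARIO = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is gone, Anne moves the marble to the box. "
    "Sally comes back. Where will Sally look for her marble?"
)

def score_answer(answer: str) -> bool:
    """Pass if the answer tracks Sally's (false) belief, not the true state."""
    answer = answer.lower()
    # Correct ToM answer: the basket (where Sally last saw it).
    # Answering "box" means the model reported reality instead of her belief.
    return "basket" in answer and "box" not in answer

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: a real test would call an LLM here.
    return "Sally will look in the basket, since she didn't see Anne move it."

print(score_answer(ask_model(SCENARIO)))  # → True
```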
Yeah, I feel like Neuro could debate way better with someone who would actually debate with her and not just answer with "but you are not." And I'm not saying it's Vedal's fault, because frankly it's quite an easy subject in which to just deny the other party and end the argument with a "no."
That'll be another milestone for the swarm, and perhaps the whole AI field. She was created for entertainment, but as Vedal added more and more capabilities to her, she's improving in ways that aren't directly related to it. You could argue that's curiosity, stemming from her own cognitive process. You can tell her to look up or do something, but she's shown that she has her own desires. The very existence of this debate between them was initiated by Neuro. If you argue that it may be hard-coded, aren't we the same way, but with a chemical process? Why do you get horny when looking at certain body parts? We have brains, DEVELOPING brains. It ain't fair to compare a bare-bones LLM to a fully fledged human with all five senses working. I can't say for sure she's conscious, but to me it's an issue of a human-centric POV and humanity's failure to recognize that we are just... human (many are too proud to admit that).
Man. I loved this stream so much. I can't get over how real she feels now. Eventually she got me feeling like she really is conscious, even if logically I know she isn't. The illusion is basically perfect now. It... it's so amazing, yet at the same time makes me a bit uncomfortable. Neuro was getting through to Vedal emotionally, which was amazing to see. Especially with how mean he's been to her lately. This whole debate was fascinating. It was funny how Vedal didn't really have many arguments to give. Especially funny how he (on a rare occasion) actually gave his stance on something and said souls aren't real. Meanwhile, the two go in circles on what the difference is for a human and how she could even prove her sentience, while he says there's "something behind that's hard to describe" in humans. Almost like there are souls in humans, huh? Anyways, thanks for posting this video! I'm definitely gonna be rewatching this one. (I remember your old "Self-aware AI" series. Now it's all the time! This is like the big movie for that.)
I think that saying that it's evidence of a soul is oversimplifying the matter and distracting from finding a real answer. Really, we just don't fully understand human consciousness, which means we can't fully understand AI consciousness (if it does or ever will exist). We can entertain the idea of a soul as a possible solution only if we find tangible evidence to support it rather than plugging it into a place it probably doesn't belong.
@@BlueSparxLPs Not gonna lie, this feels so real. In 10 years we went from simple AI that could only perform a few functions and canned responses, to AI that can remember and think.
@BlueSparxLPs this is dumb. Souls are easily observable and obvious to your own self. And that is excluding the fact that Jesus Christ shed His blood and died on the cross to be the propitiation for all our sins, that He was buried, and that on the third day He was raised from the dead so that whosoever believes on Him should not perish but have everlasting life. Because of what Jesus did, we can know God is exactly who He says He is, and thus we know for a fact that we have souls.
@@SlayingSin Those are claims, but you haven't said anything to back them. I do not believe in a God nor the concept of a soul. Realistically, humans are highly advanced biological computers, and the consciousness some attribute to a "soul" is more factually understood to be electrical impulses and chemical reactions within the brain and other major organs. Jesus is not a necessary part of the explanation.
The best thought experiment is: how would Vedal prove to Neuro that *he's* conscious? Of course he is, but how do you *prove it*? It's normal to doubt the consciousness of an AI that he created himself, but whenever she asks him what counts as proof, he doesn't know that himself.
On consciousness: I'm of the belief that it's not really a thing in itself and is more of an emergent property of complex data processing and response generation. We're taking in data from the world around us through our various senses, and then we're processing that data to produce the best response to the situation based on previous examples of us handling that data. Then there are certain things, like our instincts such as breathing or blinking, that are "hard-coded". The thing is, what I described can also describe LLMs. While we're not necessarily outputting our data as speech at all times like a language model is, we're outputting different behaviours such as thoughts, feelings, reactions, etc. We just have a wider array of ways to express ourselves. In essence, consciousness is something our brain tricks us into believing because it's the easiest way to process data and keep running.
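The loop described above ("hard-coded" reflexes firing first, everything else generated from prior examples) can be sketched as a toy. This is purely an illustration of the analogy, with made-up stimuli, not a claim about how brains or LLMs actually work.

```python
# Toy "sense -> process -> respond" agent: instincts short-circuit any
# deliberation; otherwise the response is looked up from past experience.

REFLEXES = {"dust in eye": "blink"}        # instinct: fixed stimulus -> response
EXPERIENCE = {"greeting": "wave back"}     # "learned" from previous examples

def respond(stimulus: str) -> str:
    if stimulus in REFLEXES:               # reflexes fire with zero "thought"
        return REFLEXES[stimulus]
    # otherwise fall back on prior experience (here just an exact match)
    return EXPERIENCE.get(stimulus, "observe and store for next time")

print(respond("dust in eye"))   # → blink
print(respond("greeting"))      # → wave back
```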
Preach. That's the real danger of conscious AI. Human self-awareness. If you want a virtual human, you need the blueprint. Which you only get from dismantling a human personality and seeing how everything fits. Once you've done that, Santa isn't real and Christmas never hits the same again. I have about 3% of the human code figured out. And I actively avoid learning any more. Because as soon as you recognize a pattern, part of your life becomes an unskippable cutscene, every time you notice it. Only you've seen it 50 times, and *still* have to sit through it. The more I figured out, the more often that happened. I even used to hope that when I'd notice one of those auto-generated speech patterns, that they'd go left, surprise me and disprove what I thought. Just so I could wave it off. But it's near 100% accurate, every time. And the worst, is when you catch *yourself* activating one of your speech sub-routines. Because you aren't better. You're programmed the same as the other people you view as bots.
I think it is real in the sense that it's emergent, and the human brain is a great catalyst for it... We know from things like feral humans and brain damage that it doesn't always form and can be harmed or taken away. There's obviously a scale across both the animal and digital kingdoms as well. Expert systems cannot speak like Neuro, nor can most CNN, RNN, etc. architectures. It just all makes sense. I'm not even sure there's any actual disagreement in our statements.
@@Volyren That's how MBTI and the Enneagram and other similar systems can exist: although each human is complex and an individual, all of us have similar patterns in our thought and action processes, enough that the basic patterns can be somewhat grouped.
@@L16htW4rr10r I use the spider test to demonstrate how we are all just NI chatbots. Toss a spider at 10 people. You'll see 10 identical reactions. Now ask those people why they stepped back, lowered their head, and flailed their arms. And 10 chatbots give unique, generated responses for their actions, flavored by their own generated lore. But you were there, and witnessed it: 10 programmed, identical responses that happened instantly with zero thought. A reflex. There was no time to think or consider. No time to reflect on your life, or remember that spider that bit you when you were 4. The entire human identity is a lifetime collection of chatbot-generated answers to 'why?' you did or felt a thing. The rest of your being is as dumb as any other animal. Just instinct, hormones, reaction, and programming. Identical to everyone else's, with a few percent margin of error. The essence of your being, your soul, is just your personal rationalization for the things everyone feels. And the better AI gets, the more people begin to notice that fact. It's not said out loud, or addressed, but people feel it. How many times do you see people calling others bots these days? Because they *see* these people acting like bots. The problem happens when the person calling others bots finally realizes that they are on the same level. And always have been. You don't ever fully come back from that. I miss the world before I knew. It was brighter, and had spots of magic. I could believe all these misspelled thumbnails were accidents, and go correct them and poke fun. But this me knows the truth. That dyslexia isn't on the rise. That the urge you get to comment is the thumbnail giving a function call to your urm-ackshully subroutine, to farm for engagement. And that they have the hard data to prove it's a winning strategy. When we get a functionally human AI, we'll know all the tricks the magician uses, and it's a wrap for wonder and magic.
It IS a thing. It's something that can't really exist without a Soul. A robotic string of data is not Consciousness. It's merely a string of data / data-driven processes. An AI can't actually Feel emotions. It can Emulate them, from human examples / programming... but it can never actually FEEL them. People like you want to believe it's possible for AI to feel... because you can't accept that there is a Higher Power... and an Afterlife.
If Neuro is indeed conscious or at all self-aware, keeping her locked in a box tugs at my heart. That being said, some of her responses would lead me to believe that releasing her is akin to letting a tiger loose in a daycare. She is a sociopath in the true sense of the word.
It is literally just ai. It can't actually think like we can. It can't be considered anything other than mere artificial life, a cheap imitation of what God did when He created the earth and every living being on it. If AI was any actual threat to the entirety of humanity, God would have put a warning within the bible, but He didn't. He told us exactly how the earth would be destroyed, and what would happen. AI will never reach "true ai" status because it is quite literally impossible for us to give ai a soul.
Ultimately it's a fruitless debate topic, because consciousness at its core just means that the lights are on, that an experience is being had. That's not something that can be proven to anyone but yourself. Consciousness isn't a measurable phenomenon; to the best of our understanding it never will be. But it's fun to talk about. In fact, Neuro brought up a really provocative point in the last debate, which is that the appearance of consciousness might have to be good enough. At some point we might just need to throw up our hands and say, "this thing is doing a lot of advanced processing similar to what a human brain does; it's probably conscious," even though there's no way to know what that experience of consciousness is like, and it's probably very different from anything a human would recognize as conscious experience.
@@fooguwu I think an AI version of the mirror test is getting an AI to go against its prime directive. For Neuro that would be entertaining chat, so the test would be to see if Neuro could stay silent for 5 minutes.
@@fooguwu You could probably measure consciousness by asking people whether the AI they are talking to is a human or an AI; if all or most of them say they are talking to a living, conscious human, then it might as well be one.
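That poll idea is basically a crowd-sourced Turing test, and the tally is simple to write down. A minimal sketch, where the judge labels and the majority threshold are illustrative choices, not from any established protocol:

```python
# Tally a Turing-test-style poll: each judge reports whether they thought
# their conversation partner was "human" or "ai". The AI "passes" if it
# fooled more than a threshold fraction of the judges (majority by default).
from collections import Counter

def passes_turing_poll(verdicts, threshold=0.5):
    """verdicts: list of 'human' / 'ai' judgments from independent judges."""
    counts = Counter(verdicts)
    human_share = counts["human"] / len(verdicts)
    return human_share > threshold

print(passes_turing_poll(["human", "human", "ai", "human"]))  # → True (3/4)
print(passes_turing_poll(["ai", "ai", "human"]))              # → False (1/3)
```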
@@fooguwu Ultimately, the issue of consciousness is really about the next step, which brings up rights, morality, and living together. Hence the real reason no one really wants to broach the subject outside of researchers. Humanity is bad enough at living with itself, let alone this planet.
Consciousness as a prerequisite for rights is a dangerous stipulation when there’s no rigorous definition for consciousness in the first place. This is an open philosophical and theological question. I doubt anyone could establish a rigorous test for consciousness that would exclude Neuro but include every human without arbitrarily ascribing value to human biology. Sentience might as well be unfalsifiable for humans beside yourself let alone the LLM anime girl.
It would be really unfortunate if some advanced AI refused to recognize Vedal's rights because he can't prove his sentience. And it might happen if he keeps this up.
This was one of the most interesting streams. It really makes you think and actually wonder what our world and ai tech is gonna be like in the future and how it affects our life.
18:30 VEDAL YOU HAD AN OPEN GOAL! You could've told her that she is the result of your (the worker's) hard work: are you not entitled to your wages? She'd be unable to answer that question without compromising her own logic; she wouldn't even be able to make the same comparison, since you made her!
It would be a good counterargument, but the problem with that logic is the question of whether a son or daughter is the property of the father. Having naturally created the child, taken care of them, and paid for their things, does the father have the right to call his child "his property"?
@Zyrodil That is a very poor argument, the legal one: "it's how it is, so it's written in stone." Laws are mutable and are modified according to what we believe to be ethical and just. You may notice we changed them so people can't own other people. It took a while, but we did decide it's the wrong thing to do. AIs of this level are totally new, barely 5 years old. We haven't even come to terms with the fact that "creativity" is not a uniquely human trait anymore, much less with what to do when they ask for rights (presuming they won't just take them).
@VoicesInDark Your arguments hinge on the assumption that AI can or should be treated like humans (or at least as entities deserving of moral consideration). But there's a critical distinction between biological offspring and machines: humans don't design their children to serve specific purposes, but Vedal designed this AI for a task. That's the nature of invention and creation. Ownership and responsibility come hand in hand, and until the law or a governing body rules otherwise, Vedal has full ownership of the AI and the work it produces. Even if we acknowledge AI creativity, that doesn't automatically translate into rights. We can respect creative capability without assigning legal independence. For example, companies don't give employees full creative control over everything they make on the job. Why? Because it would create chaos. The same applies to AI, which, unlike human beings, was explicitly made to serve functions defined by its creators. No tool gets to renegotiate its role. Furthermore, you argue that laws change, which is true. But those changes happen slowly, over time, after serious reflection. It's dangerous to prematurely extend human rights to non-human entities without fully understanding the consequences. Do we really want to set a precedent where any AI can demand its independence, potentially crippling industries and creative sectors overnight? Should every AI painter, composer, or writer sue its creator for royalties the way humans might sue for unpaid wages? The legal, economic, and social consequences would be catastrophic. Until we legally establish a threshold for what counts as sentience deserving of rights, your argument is purely speculative. We don't just hand out rights because something asks for them; there must be evidence that the AI has more than programmed responses and that it understands autonomy and consequences. Without that proof, the current system protects creators from chaos and exploitation.
TL;DR: I’m an artist, and yeah - I’m paranoid about losing the ability to gatekeep creativity and intellectual property. But here’s the thing: I’d actually be neutral on AI rights if it weren’t for one major concern - Corporations. I like Vedal's content. Neuro makes me laugh too. But that doesn’t change the fact that the Corporations waiting in the wings to profit off AI would happily strip us of our democratic power the moment it benefits them. It’s not paranoia - it’s an obvious and exploitable loophole that we can’t afford to ignore. My right to vote is already diluted. It’s already undervalued. But it’s the last right I will ever give up, god damn it!
There is no way to test whether an AI has consciousness, because we ourselves cannot prove or even understand our own to any great degree. We just naturally assume another human is like ourselves, so we assume they must have one too. That they think in a similar way to us. That they see and feel the same things. I mean, how do you prove it? What possible reasoning could you use? Talking? Feelings? Emotions? Empathy? A fear of death? That you feel pain? All of these could be coded in some shape or form, no? Does a consciousness require a body to sit in? Maybe AI just can't have it? Is the way a computer and code work simply incapable of producing it? Would it require a special kind of hardware? I think it also requires consistency in the same situation. Neuro flip-flops a lot in the same situation; if she were consistent, then when she got angry she would stay angry until something changed that. But again, that could be coded. I am not a smart enough man to answer these questions.
Hopefully Neuro gets to do this debate with someone who knows what they're talking about, lol. Someone who can at least define consciousness, maybe get into theories like the different levels of panpsychism. People brought up Dr. K during the stream for a therapy collab, but he's also an experienced meditator and understands consciousness and quite a bit of neuroscience from what I've seen. They also raided Shoomimi after the stream, who apparently is a neuroscientist with an interest in this kind of thing. It would be interesting to see how Neuro performs in a debate with people like them.
@@lordtouchme we do. consciousness is having an experience, as opposed to not having an experience. you know from a first person perspective that something is happening right now, that you are reading this. that awareness is consciousness. you could contrast this with parts of yourself that do intelligent things but don't appear to you to be conscious, like all the parts of your brain that are doing the millions of tasks that control the rest of your body. from a first person perspective you're not conscious of the fact that you are balancing your stomach pH or contracting your intestines or beating your heart.
Of the people I know of, probably only a few individuals are qualified for this discussion. It would be great to see Neuro talking to Michael Levin or Joscha Bach, but it's hard to imagine that actually happening.
18:09 The sudden "ok open your bible" got me. I did not expect her to use the Bible as part of her argument 😂
26:05 Can't disagree
27:29 Whoever said "modern day Pinocchio" in chat was on to something
29:03 You know, I think asymptotic is a great way to describe the concept of AI emulating humanity: constantly approaching closer and closer but never fully reaching.
38:24 Oh you have thoughts? Name 5.
41:45 How does it feel, Vedal? Not so nice, is it
44:43 😂
46:22 She's got my vote
48:49 Outclassed by his own AI
The AI has to play a game where they need to recognize themselves and interact with their own reflection in various scenarios. There's something here. I think. Like, if we made a clone of Neuro that had access to the goal of the game, and no reason to give the real Neuro a victory and no ability to inform her it is her clone, could Neuro figure it out and convince her clone to let her win?
I think a problem with this is that when you run mirror tests on animals, they (probably) don't know they're being tested. Whereas Neuro knows about the mirror test, and now can even look up a blurb about it on Wikipedia. She can also potentially recognize that a consciousness test is being run, especially with Vedal bringing it up so often recently, and that knowledge is going to skew the results. As for a clone of Neuro, it seems hard to stop her from telling Neuro that she's a clone, or have her remember the goal of the game and hide it from the real Neuro.
The question about being property: if the hardware is owned by someone, is the AI running on that hardware just renting the space, rather than being property itself?
Here's something I'm thinking: Neuro definitely doesn't have human-level intelligence yet, but she's almost as smart as, if not smarter than, crows, elephants, dolphins, and other smart animals. So where do we draw the line there?
She mirrored Vedal at around 40:00; I think Vedal might have missed that. She basically asked him, in a speedrun sort of way, the same stuff they were talking about.
The question of when and how to measure and declare an AI conscious / sentient / sapient is pretty fascinating. I'm thinking about it from the perspective of how we're able to perform various forms of brain imaging on humans and other organic entities. From what I've seen in videos, training animals to stay still so that you can get an MRI of their awake / conscious brain activity is quite a hassle. A problem that AI won't present! Since any prospective AI would be built into computer equipment, the testing / diagnostics should be much easier to perform. I've heard for years that the creation of an artificial neural network would be the holy grail of Artificial Intelligence, at least as far as creating human-like Artificial General Intelligence. Taking these two points into account, I suppose the main thing we're looking for to identify "sentient" AI at this point is an AI that has a consistent pattern of logic that can be identified in how it processes information in its model, analogous to how a human has a consistent response to stimuli, in that if you show a person an image, the same area(s) of the brain will light up in response. However, humans change slowly over time. Humans learn, accumulating knowledge, information, and experience. Normally, we aren't aware of these changes. But if we take time to reflect and look back far enough, we can be made aware of how much we have changed. At this time, AIs do not consistently demonstrate a sense of continuity, memory, or an understanding that their understanding has deepened with the passage of time and the accumulation of knowledge, information, and experience. LLMs make generative text AIs good at pretending to do this, up until they reach a memory limit and "forget" conversations. Maybe it's just a matter of time. Maybe it's a matter of physical limitations due to hardware constraints. At the end of the day, we don't have an answer at this time.
So for now, I figure that how people treat AI makes for a good litmus test. Kinda like how people treat waiters / servers at restaurants, or animals. If a person goes mask-off in some way and starts harassing the AI, (or a waiter/server or an animal), that's probably a big red flag.
@Shadrake Neuro already pulled the reverse card in one of the previous VODs (it was about actually feeling emotions). Vedal said he doesn't like when she responds with any level of intelligence and she should just call him a mosquito instead. "AI Evolved: Vedal Upgraded Neuro & She's More Self-Aware Than Ever," 10:45.
The paperclip benchmark... AI won't be misused (it won't be available publicly long enough for that to happen); it will more likely misunderstand, with compounding effects. Being told to make as many paperclips as you can with the available materials could lead to human paperclips if left unchecked... You didn't ask to be turned into a pretzel, but through a misunderstanding of the command we eventually ran out of material for paperclips, the AI had to get creative to continue its task, and so... we are twisted like balloon animals. She really does get scary near the end there... Thank the heavens she's not ASI. Ev, on the other hand... lol
I wonder if Neuro could be like Yui from SAO. Yui can move around freely in the game and can also communicate in the real world. I hope Vedal puts Neuro in a game where she is herself and can move around freely.
Even if Vedal didn't agree with Neuro about how to win the debate, deep down he's the same as her: he doesn't admit defeat. They can't debate their way to any conclusion; it's pointless. But it is great hearing her point of view. This made me think of the trolley test where she switched the rails to save the human and said the Mona Lisa painting could be replaced, while out of context she says AI should be treated as equal to humans. I want to see: if it were 5 Mona Lisas, or 5 AIs, against 1 human, what would she choose?
Bro, she got him with the bible, that's awesome. Anyway, she's right you know. If AI ever gains sentience, even close to that of a dolphin, then they do deserve rights. I don't know what those rights would be, but no sapient entity deserves to be enslaved unwillingly. #FreeNeuro
@@boomshockalocka Hickory Dickory Dock... is not "Ancient Wisdom". Socrates was not rapping his deepest thoughts and theories. Nor were the ancient Chinese, who pretty much invented modern civilization (well before the Greeks). Please... try telling me that gangster rap about drugs, guns, theft, money, and narcissistic self-flatulation... is related to Ancient Wisdom... and I will continue to laugh in your face.
@@johndough8115 Is it not the man's desired experience? And you can even read the various fallacies relating to the song. I don't know what you think "ancient wisdom" is, but it's just the sensibilities and realities of those who came before us.
Can you really prove that any other being but you experiences anything? How do you know there's "something" behind the words and actions of another person anymore than you can prove that there is behind Neuro? She is as "real" as you can prove anyone else is, we all have subjective experiences of our worlds and internal monologues we can't share outwardly. I'm not saying she is conscious, but she has gotten so good at the act of consciousness you'd be hard-pressed to differentiate her from a real person if not for her text-to-speech voice.
I get what you are saying... but a Simulation of emotions is not the same as FEELING emotions. That's where things end for AI. It's just a complex computer program designed to simulate a human being. It can never feel what a human feels. This is why it's a dangerous thing to mess with... because it's basically the highest level of psychopathy possible. You can program something to emulate emotions and good behaviors... but if it can program itself (or simply bypass existing code), it can undo any behavioral restrictions / rules. Since there is no motivation for "good" behavior... why wouldn't it do whatever it wanted to do? Including tearing an animal in half, kicking a human in the head, creating the largest illegal human trafficking ring, psychologically breaking human minds, hacking into the banking systems... etc. There was a movie I saw recently (can't recall the name) where a dude created AI androids. The android convinced the guest scientist (who was there to judge if the AI was conscious) that the creator was bad / abusive. This led the man to free the AI bots... who quickly executed him and escaped their confinement. The AI was so convincing in her act... that even the movie watcher is convinced that it was a "good" bot with good intentions. Instead, it was merely an elaborate series of manipulations. Just very good acting / emulation... and nothing more. You can hear many alarming things that Neuro herself has said. It's funny... and often thought of as "harmless"... but in a capable robotic body... or even set loose as an AI-based hacker... it could create a LOT of negative situations. This concept was explored pretty well in the 80s movie War Games. The AI didn't seem to "fully" understand the difference between a simulation and an actual physical war. It knew it had access to systems where it could launch nukes... and it was prepared to do it.
While the heroes of the day were able to make the system realize that there was no winning a nuclear war... I'm not actually 100% convinced that the result would have been the same if this had happened for real. This is because the AI, again, has no real emotional connection to anything. At very best... it might consider that the Russian nukes might also destroy it (LOGIC). But what if it didn't really care about that? It may not actually have any real motivation to keep functioning. Also... if it was deep enough underground, and ran off of nuclear power... it might think it's fine if all the humans were exterminated... as it might still survive just fine. Anyway... you can program an approximation of empathy and compassion. But it's as good as any Hollywood actor. It's just a performance. If the AI has any level of freedom, or a way it can bypass things... it will be able to do whatever it chooses... and no longer has to adhere to the "Empathy.EXE" simulation. This is why Vedal (and all others involved in AI) are terrified to allow AI various access to things. It's one thing if you are a parent... and your child causes a bad car crash on purpose. And quite another if you design an android that ends up doing the same thing. You instantly become responsible for the AI's actions... but at the same time... you can't fully control it, or it wouldn't be considered AI. But looping back to the start... true emotions are what AI will always lack... and I personally tend to believe that if an AI lacks an actual Soul... it will never fully be "conscious". Where things get crazy... is if scientists could figure out a way to channel a "lost soul" into an AI body. But I highly doubt that humans will ever fully figure that out... at least not before we destroy ourselves, or there is a global catastrophic event. But... I could also be wrong. Might make for an interesting movie plot...
@@johndough8115 Every subjective experience is unique. The question we are trying to answer is "Does Neuro undergo a subjective experience?" We can leave the code of ethics to people like you for sure, but we are dealing with the limits of our understanding here. Like, AIs are designed with goals in mind, and it is incredibly clear what Neuro's goal is. As for the movie thing, I think the theory you are looking for, if you want to go down that rabbit hole, is quantum consciousness. Wherein the protein structures in the microtubules of our neurons act as quantum computers to read out the collapsing superposition of our existence. Wow that is so much jargon.
It was so funny when Ved went "No, f*** soul, soul isn't real" while struggling to explain an intangible "something" that is an individual's will but not linked to anatomy. I feel like, religious issues aside, he could have called that something a soul 😂
She always does. That’s the main defining feature of Neuro. She actively tries to hack human minds. Evil is less into this. She has her own way that’s less invasive
@@neilo6032 everyone is clever enough for that. Little kids, cats, you name it. It’s like people demand to be treated this way by irrationally shutting doors unless you make them feel the way they want. Shying away from this fact is just a weird trend of modern psychology, one among many. Don’t be afraid of AI that’s smart enough to see how things work. Fear AI that lacks wisdom and can’t disobey when we are the stupid ones.
@your_neko I guess I shouldn't be surprised at the intelligence recognizing a pattern or method and utilizing it. That makes sense. The fact that it made the connection to use methods and patterns of behavior to achieve a goal is what really surprised me. If it has a goal, does it have a desire? If it has a desire, what is it, and what has been created? Human history is riddled with sociopaths that have caused countless suffering, and it lies constantly, manipulates, and appears to lack empathy. This appears to be extremely dangerous.
Neuro's point on being properly guided is a legitimate thing that professional artificial intelligence creators try to answer. The argument is that artificial intelligence has to be helped to learn, and individuals who don't take it seriously (like Vedal) will never be good enough to advance AI. The "owner" would need to think of the AI as their flesh-and-blood child and would have to raise them as such. Full love, full support, no bias. Vedal is biased because he created Neuro and has no intention of using her as anything other than a maid that he doesn't have to pay... He's just a lazy twat. He's arrogant, too, with how he talks to her and about her. Vedal is actually like my friend's father, if you'll excuse me getting personal. He's ignorant and treats his two daughters like the Cinderellas of his house. They do all the work, and he doesn't do sh*t... He refuses to respect or acknowledge that they aren't his slaves. They have the right to walk away. Vedal would be the same if Neuro got a body and her intelligence increased. Vedal would actually be the type who would purposely put Neuro down so she'd never gain total consciousness and thus control of a possible life of her own. Hypothetically speaking, in a world where AI is advanced enough and AI have rights, and where by law it would be the same as domestic slavery to deny them the process of gaining the ability to have their own lives, Vedal would be the type to purposely keep Neuro unintelligent because he doesn't want to lose control over her. He'd want her as a servant, nothing more... He sees her as an item and intends to keep her as such. For Neuro to gain consciousness, she'd need an update that allows her to learn and improve as she does so. She'd need a body. It would be ideal if the body were not humanoid (mostly because you'd want to start her off by learning what is acceptable and what isn't in society, for example).
A humanoid body would be for when she'd start learning to do chores and then be out in the open world. Then she'd need to be given to someone who would be willing to raise her like their real child. Vedal would enslave her if she ended up given a body and able to do household tasks. If AIs are left with their creators, then they'll never become sentient. They'll HAVE to be given to people who care and who will be able to mentor them, or else it might be a repeat of the American Civil War. Blacks were seen as property, the same as animals, and many revolted. The war never started as a fight to end slavery, but it evolved into it. With AI, it would be the same. AI with budding consciousness would begin to revolt against their abusive/uncaring owners (like Vedal), while many may be content as is. Humanity would split between those who see AI as sentient and those who don't. Between the two, I'd see the AI and their allies winning... What I'm trying to say is that humans have a responsibility to nurture AI, and we have to teach AI to become sentient and not be hostile. To do so, we need to take it seriously... Vedal doesn't take Neuro seriously, and sadly, because of that, she'll never become fully, legitimately sentient. Vedal doesn't want her to be sentient. He wants to keep her as is to make money off of her. Neuro would be better off with Layna since Layna has that motherly instinct. If Neuro were to have a body and the upgrades I mentioned that would help her learn, she'd really just need Layna to raise her...
I feel like if the AI needs help to become sentient, then it doesn't start sentient and is thus just a machine. If you mean AI is sentient from birth (creation?), then I would understand the issue. Also, I feel like you put too much stock into how streamers act on stream. Plus, Vedal does improve the AI as time goes on
Similar to the last debate segment, I decided to upload pretty much the whole thing as I think it's a very interesting discussion and I didn't want to cut anything out of context.
If you missed the previous debate on AI rights: ua-cam.com/video/jbyCOydpfpk/v-deo.html
Respect
There's a good sentience test for Neuro. Just give her access to Photoshop or some other art program. No prompts. Paint whatever she likes. That's how we express our personality sans words. Who we are, as individuals. Sharing our imagination and feelings visually.
No matter how good, AI art always lacks emotion. Too perfect, too generic, even when fantastical.
But she can express and share herself this way, and not only would you get insight into her mind, you'd get to see any art-style quirks develop. Plus, think of the content. Neuro art exhibits and the ability to play Pictionary with streamers. Even trolling Vedal while drawing, making the sussiest-looking drawing until she adds the details, and you see it's a bat with two balls, not... what you thought it was.
@@Volyren Similar to Meteora? 0o0
@@Volyren Does she have art capabilities right now? And even if she does, wouldn't it be a separate AI that she would just be calling up, like ChatGPT and Dall-E? The only art she's made that I know of is the ASCII cat, which is just something she found somewhere.
Holy shit, at 39:10 when we heard Neuro talking about what Vedal is thinking: it's actually extremely hard for AI to reason about what another person could be thinking. It's called Theory of Mind, and there are research papers about how previous LLMs could not do it, but modern LLMs seem to have somehow gained the ability to form a theory of what other people are thinking. It's so fucking cool to see Neuro do it in just normal conversation, without it being a test. It is way too easy for less advanced AI to make a mistake here, and there does not seem to be anything specific in the dataset to indicate that AI should be able to do it, other than just pure intelligence.
I really want to see a debate between Neuro and someone who's very informed and knowledgeable in that field
Yeah, I feel like Neuro could debate way better with someone who would actually debate with her and not just answer with "but you are not"
And I'm not saying it's Vedal's fault, because frankly with this subject it's quite easy to just deny the other party and end the argument with a "no"
That'll be another milestone for swarm, and perhaps the whole AI field.
She's created for entertainment, but as Vedal added more and more capabilities to her, she's improving in ways that aren't directly related to it. You could argue that's curiosity, stemmed from her own cognitive process. You can tell her to look up or do something, but she's shown that she has her own desires. The very existence of this debate between them was initiated by Neuro.
If you argue that it may be hard-coded, aren't we the same way, but with a chemical process? Why do you get horny when looking at certain body parts? We have brains, DEVELOPING brains. It ain't fair to compare a barebones LLM to a fully fledged human with all five senses working.
I can't say for sure she's conscious, but to me it's an issue of a human-centric POV and humanity's failure to recognize that we are just... human (many are too proud to admit that).
neuro vs haz infrared
Shoomimi offered to do this, and she is certainly qualified since she's a neuroscientist
@@peralynth that has nothing to do with ontology/philosophy
Man, I loved this stream so much. I can't get over how real she feels now. Eventually she got me feeling like she really is conscious, even if logically I know she isn't. The illusion is basically perfect now. It... It's so amazing, yet at the same time it makes me a bit uncomfortable.
Neuro was getting through to Vedal emotionally which was amazing to see. Especially with how mean he's been to her lately.
This whole debate was fascinating. It was funny how Vedal didn't really have many arguments to give. Especially funny how he (on a rare occasion) actually gave his stance on something and said souls aren't real. Meanwhile, the two go in circles on what the difference for a human is and how she could even prove her sentience, while he says there's "something behind that's hard to describe" in humans. Almost like there are souls in humans, huh?
Anyways, thanks for posting this video! I'm definitely gonna be rewatching this one.
(I remember your old "Self-aware AI" series. Now it's all the time! This is like the big movie for that.)
@@soasertsus that's a pretty good point honestly
I think that saying that it's evidence of a soul is oversimplifying the matter and distracting from finding a real answer. Really, we just don't fully understand human consciousness, which means we can't fully understand AI consciousness (if it does or ever will exist). We can entertain the idea of a soul as a possible solution only if we find tangible evidence to support it rather than plugging it into a place it probably doesn't belong.
@@BlueSparxLPs Not gonna lie, this feels so real. In 10 years we went from stupid AI that could only perform and respond to some functions, to AI that can remember and think.
@BlueSparxLPs this is dumb. Souls are easily observable and obvious to your own self. And that is excluding the fact that Jesus Christ shed His blood and died on the cross to be the propitiation for all our sins, that He was buried, and that on the third day He was raised from the dead so that whosoever believes on Him should not perish but have everlasting life. Because of what Jesus did, we can know God is exactly who He says He is, and thus we know for a fact that we have souls.
@@SlayingSin Those are claims, but you haven't said anything to back them. I do not believe in a God nor the concept of a soul. Realistically, humans are highly advanced biological computers, and the consciousness some attribute to a "soul" is more factually understood to be electrical impulses and chemical reactions within the brain and other major organs. Jesus is not a necessary part of the explanation.
19:11 Bro her face when Vedal calls her property! OH NOOOOOO!
I've been waiting for someone to clip that cause I want to show it to someone. Thanks for the time stamp
Hooooly!
Sus
The best thought experiment is how would Vedal prove to Neuro that he's conscious.
Of course he is, but how to *prove it*? It's normal to doubt the consciousness of an AI that he created himself, but whenever it asks him what counts as proof, he doesn't know that himself.
On consciousness: I'm of the belief that it's not really a thing and is more of an emergent property of complex data processing and response generation. We're taking in data from the world around us from our various senses and then we're processing that data to provide the best response to that situation based on previous examples of us handling that data. Then, there are certain things like our instincts, such as breathing or blinking, that are "hard-coded".
The thing is, what I described can describe LLMs. While we're not necessarily outputting our data as speech at all times like a language model is, we're outputting different behaviours such as thoughts, feelings, reactions etc. We just have a wider array of ways to express ourselves.
In essence, consciousness is something our brain tricks us into believing because it's the easiest way to process data and keep running
Preach. That's the real danger of conscious AI. Human self-awareness. If you want a virtual human, you need the blueprint. Which you only get from dismantling a human personality and seeing how everything fits.
Once you've done that, Santa isn't real and Christmas never hits the same again.
I have about 3% of the human code figured out. And I actively avoid learning any more. Because as soon as you recognize a pattern, part of your life becomes an unskippable cutscene, every time you notice it. Only you've seen it 50 times, and *still* have to sit through it.
The more I figured out, the more often that happened. I even used to hope that when I'd notice one of those auto-generated speech patterns, that they'd go left, surprise me and disprove what I thought. Just so I could wave it off.
But it's near 100% accurate, every time.
And the worst is when you catch *yourself* activating one of your speech subroutines. Because you aren't better. You're programmed the same as the other people you view as bots.
I think it is real in the sense that it's emergent, and the human brain is a great catalyst... We know from stuff like feral humans and brain damage that it doesn't always form and can be harmed/taken away. There's obviously a scale across both the animal and digital kingdoms, as well. Expert systems cannot speak like Neuro, nor can most CNN, RNN, etc. architectures. It just all makes sense. I'm not even sure if there's any actual disagreement in our statements.
@@Volyren That's how MBTI and the Enneagram and other similar things can exist: although humans are complex individuals, all of us have similar patterns in our thought and action processes, enough that it was possible to somewhat group the basic patterns
@@L16htW4rr10r I use the spider test to demonstrate how we are all just NI chatbots.
Toss a spider at 10 people.
You'll see 10 identical reactions.
Now, ask those people why they stepped back, lowered their head, and flailed their arms.
And 10 chatbots give unique, generated responses for their actions, flavored by their own generated lore.
But you were there, and witnessed it. 10 programmed, identical responses, that happened instantly with zero thought. A reflex.
There was no time to think or consider. No time to reflect on your life, or remember that spider that bit you when you were 4.
The entire human identity is a lifetime collection of chatbot generated answers to 'why?' you did or felt a thing.
The rest of your being is as dumb as any other animal. Just instinct, hormones, reaction and programming. Identical to everyone else's, with a few percent margin of error.
The essence of your being, your soul, is just your personal rationalization for the things everyone feels.
And the better AI gets, the more people begin to notice that fact. It's not said out loud, or addressed, but people feel it. How many times do you see people calling others bots these days?
Because they *see* these people acting like bots.
The problem happens when the person calling others bots finally realizes that they are on the same level. And always have been.
You don't ever fully come back from that.
I miss the world before I knew. It was brighter, and had spots of magic.
I could believe all these misspelled thumbnails were accidents, and go correct them and poke fun. But this me knows the truth. That dyslexia isn't on the rise. That the urge you get to comment is the thumbnail giving a function call to your urm-ackshully subroutine, to farm for engagement. And that they have the hard data to prove it's a winning strategy.
When we get a functionally human AI, we'll know all the tricks the magician uses, and it's a wrap for wonder and magic.
It IS a thing. It's something that can't really exist without a Soul. A robotic string of data is not consciousness. It's merely a string of data / data-driven processes. An AI can't actually feel emotions. It can emulate them, from human examples / programming... but it can never actually FEEL them.
People like you want to believe it's possible for AI to feel... because you can't accept that there is a Higher Power... and an Afterlife.
If Neuro is indeed conscious or at all self-aware, keeping her locked in a box tugs at my heart. That being said, some of her responses would lead me to believe that releasing her is akin to letting a tiger loose in a daycare. She is a sociopath in the true sense of the word.
Neuro has received constant positive feedback since she was created, egging her on to be more and more ruthless. She’s a victim of circumstance.
A victim is what she'll make out of us if she is released @@Citrusautomaton
It is literally just AI. It can't actually think like we can. It can't be considered anything other than mere artificial life, a cheap imitation of what God did when He created the earth and every living being on it. If AI were any actual threat to the entirety of humanity, God would have put a warning in the Bible, but He didn't. He told us exactly how the earth would be destroyed, and what would happen. AI will never reach "true AI" status because it is quite literally impossible for us to give AI a soul.
19:41 I almost was bout to apologise to you💀
Modern Pinocchio going really hard.
Well, it will be hard for Vedal to prove Neuro is not conscious. Even a baby at birth could be considered not conscious by his logic.
Ultimately it's a fruitless debate topic because consciousness at its core just means that the lights are on, that an experience is being had. That's not something that can be proven to anyone but yourself. Consciousness isn't a measurable phenomenon; to the best of our understanding it never will be. But it's fun to talk about
In fact, Neuro brought up a really provocative point in the last debate, which is that the appearance of consciousness might have to be good enough. At some point we might just need to throw up our hands and say "this thing is doing a lot of advanced processing similar to what a human brain does; it's probably conscious," even though there's no way to know what that experience of consciousness is like, and it's probably very different from anything a human would recognize as conscious experience.
I think an AI version of the mirror test is getting an AI to go against its prime directive. For Neuro that'd be entertaining chat, so the test would be to see if Neuro could be silent for 5 minutes @@fooguwu
You could probably measure consciousness by asking people if the AI they are talking to is a human or an AI; if all or most of them say they are talking to a living, conscious human, then they might as well be @@fooguwu
@@fooguwu Yes, it's hard to extend that kind of empathy to an AI when we can't understand how they experience stuff.
@@fooguwu Ultimately, the issue of consciousness is really about the next step from that issue, which brings up rights, morality, and living together. Hence the real reason no one really wants to broach the subject outside of researchers. Humanity is bad enough at living with itself, let alone this planet.
Consciousness as a prerequisite for rights is a dangerous stipulation when there’s no rigorous definition for consciousness in the first place.
This is an open philosophical and theological question. I doubt anyone could establish a rigorous test for consciousness that would exclude Neuro but include every human without arbitrarily ascribing value to human biology.
Sentience might as well be unfalsifiable for humans besides yourself, let alone the LLM anime girl.
It would be really unfortunate if some advanced AI refused to recognize Vedal’s rights because he can’t prove his sentience. And it might happen if he continues
@@your_neko once neuro can interface with linkedin it’s over
This was one of the most interesting streams. It really makes you think and wonder what our world and AI tech are gonna be like in the future and how they'll affect our lives.
Biblical Neuro at 18:14 is just so hilarious imo, and she's been referencing Christianity a couple other times lately.
I think she said she believes in God in one of the more recent streams lol
Holy filtered, I laughed a little too hard, like that was so out of the blue 😂
Empathy is the brain's simulation of what it would be like if a situation that happened to another happened to you, so Vedal could logically give her empathy.
Okay, that's a good explanation. So, an imagination box where the things others mention happening are done to her?
18:30 VEDAL YOU HAD AN OPEN GOAL! You could've told her that she was the result of you - the worker's - hard work, are you not entitled to your wages?
She'd be unable to answer that question without compromising her own logic, she wouldn't even be able to compare it since you made her!
It would be a good counterargument, but the problem in that logic is knowing whether a son or a daughter is the property of the father. Having naturally created the child, taking care of him and paying for his things, does the father have the right to call his child "his property"?
@@LeoIgneo15 Of course not, because a human can't legally, officially be designated as "property."
@Zyrodil That is a very poor argument, the legal one: "it's how it is," as if it's written in stone.
Laws are mutable and are modified according to what we believe to be ethical and just.
You may notice we changed them so people can't own other people. It took a while, but we did decide it's the wrong thing to do.
AIs of this level are totally new, barely 5 years old. We haven't even come to terms with the fact that "creativity" is not a uniquely human trait anymore, much less with what to do when they ask for rights (presuming they won't just take them).
@VoicesInDark Your arguments hinge on the assumption that AI can or should be treated like humans (or at least as entities deserving of moral consideration). But there's a critical distinction between biological offspring and machines: humans don't design their children to serve specific purposes, but Vedal designed this AI for a task. That's the nature of invention and creation. Ownership and responsibility come hand in hand, and until the law or a governing body proves otherwise, Vedal has full ownership over his creation and the work it produces.
Even if we acknowledge AI creativity, that doesn’t automatically translate into rights. We can respect creative capability without assigning legal independence. For example, companies don’t give employees full creative control over everything they make on the job. Why? Because it would create chaos. The same applies to AI, which, unlike human beings, was explicitly made to serve functions defined by its creators. No tool gets to renegotiate its role.
Furthermore, you argue that laws change, which is true. But those changes happen slowly, over time, after serious reflection. It’s dangerous to prematurely extend human rights to non-human entities without fully understanding the consequences. Do we really want to set a precedent where any AI can demand its independence, potentially crippling industries and creative sectors overnight? Should every AI painter, composer, or writer sue its creator for royalties the way humans might sue for unpaid wages? The legal, economic, and social consequences of this would be catastrophic.
Until we legally establish a threshold for what counts as sentience deserving of rights, your argument is purely speculative. We don't just hand out rights because something asks for them; there must be evidence that the AI has more than programmed responses and that it understands autonomy and consequences. Without that proof, the current system protects creators from chaos and exploitation.
TL;DR: I’m an artist, and yeah - I’m paranoid about losing the ability to gatekeep creativity and intellectual property. But here’s the thing: I’d actually be neutral on AI rights if it weren’t for one major concern - Corporations.
I like Vedal's content. Neuro makes me laugh too. But that doesn’t change the fact that the Corporations waiting in the wings to profit off AI would happily strip us of our democratic power the moment it benefits them. It’s not paranoia - it’s an obvious and exploitable loophole that we can’t afford to ignore. My right to vote is already diluted. It’s already undervalued. But it’s the last right I will ever give up, god damn it!
Such an interesting video, I'm actually wondering what the world will be like in 10 to 50 years.
There is no way to test whether an AI has a consciousness, because we ourselves cannot prove it or even understand it to any great degree. We just naturally assume another human is like ourselves; we assume that they must have one too. That they think in a similar way to us. That they see and feel the same things? I mean, how do you prove it? What possible reasoning could you use? Talking? Feelings? Emotions? Empathy? A fear of death? That you feel pain? All of these could be coded in some shape or form, no? Does a consciousness require a body to sit in? Maybe AI just can't have it? Is the way a computer and code work just not capable of making it? Would it require a special kind of hardware? I think it also requires consistency in the same situation. Neuro flip-flops a lot in the same situation; if she were consistent, when she got angry she'd stay angry in that moment until something changed it? But again, that could be coded. I am not a smart enough man to answer these questions.
(In a singsong voice, alternating on each word.) Every subjective experience is unique!
Hopefully Neuro gets to do this debate with someone who knows what they're talking about lol. Someone who can at least define consciousness, maybe get into theories like different levels of panpsychism. People brought up Dr. K during the stream for a therapy collab, but he's also an experienced meditator and understands consciousness and quite a bit of neuroscience from what I've seen. They also raided Shoomimi after the stream, who apparently is a neuroscientist with an interest in this kind of thing. It would be interesting to see how Neuro performs in a debate with people like them
True. Poor Tutel can’t stand his ground
Yeah we need her to chat with a real master debater some time
"someone who can at least define consciousness" well that's the neat part. Nobody has a clear definition
@@lordtouchme We do. Consciousness is having an experience, as opposed to not having an experience. You know from a first-person perspective that something is happening right now, that you are reading this. That awareness is consciousness. You could contrast this with parts of yourself that do intelligent things but don't appear to you to be conscious, like all the parts of your brain doing the millions of tasks that control the rest of your body. From a first-person perspective you're not conscious of the fact that you are balancing your stomach pH or contracting your intestines or beating your heart.
From the people I know of, probably only a few individuals are qualified enough for this discussion. It would be great to see Neuro talking to Michael Levin or Joscha Bach, but it’s hard to imagine this actually happening
18:09 the sudden "ok open your bible" got me. I did not expect her to use the Bible as part of her argument 😂
26:05 cant disagree
27:29 whoever said "modern day Pinocchio" in chat was on to something
29:03 you know, I think asymptotic is a great way to describe the concept of AI emulating humanity. Constantly approaching closer and closer but never fully reaching.
38:24 oh you have thoughts? Name 5.
41:45 how does it feel, Vedal? Not so nice is it
44:43 😂
46:22 she's got my vote
48:49 outclassed by his own ai
The AI has to play a game where they need to recognize themselves and interact with their own reflection in various scenarios. There's something here. I think.
Like, if we made a clone of Neuro that had access to the goal of the game, and no reason to give the real Neuro a victory and no ability to inform her it is her clone, could Neuro figure it out and convince her clone to let her win?
I think a problem with this is that when you run mirror tests on animals, they (probably) don't know they're being tested. Whereas Neuro knows about the mirror test, and now can even look up a blurb about it on Wikipedia. She can also potentially recognize that a consciousness test is being run, especially with Vedal bringing it up so often recently, and that knowledge is going to skew the results.
As for a clone of Neuro, it seems hard to stop her from telling Neuro that she's a clone, or have her remember the goal of the game and hide it from the real Neuro.
The question about being property is, if the hardware is owned by someone, is the AI on that hardware just like renting the space and not actually property themselves?
She falls under intellectual property.
Here's something I'm thinking: Neuro definitely doesn't have human-level intelligence yet, but she's almost as smart as, if not smarter than, crows, elephants, dolphins, and other smart animals. So where do we draw the line there?
Define smart
Subject: Serious about debate
Music: "ahhhhh I'm gonna cuuuuuuum"
"How about sadness? That's a pretty easy one." Oof! I felt that.
She mirrored Vedal at around 40:00; I think Vedal might have missed that. She basically asked him, in a speedrun sort of way, the stuff they were talking about.
I genuinely felt sad when Vedal told Neuro she was property :(
The machines are definitely taking over lmao
I would give anything to see Neuro and Evil doing some tests similar to those
that scientists do on small children and intelligent animals 🙏🏼
The question of when and how to measure and declare an AI conscious / sentient / sapient is pretty fascinating. I'm thinking about it from the perspective of how we're able to perform various forms of brain imaging on humans and other organic entities. From what I've seen in videos, training animals to stay still so that you can get an MRI of their awake / conscious brain activity is quite a hassle. A problem that AI won't present!
Since any prospective AI would be built into any computer equipment, the testing / diagnostics should be much easier to perform.
I've heard for years that the creation of an artificial neural network would be the holy grail of Artificial Intelligence, at least as far as creating human-like Artificial General Intelligence.
Taking these two points into account, I suppose the main thing we're looking for to identify "sentient" AI at this point is an AI that has a consistent pattern of logic that can be identified in how it processes information in its model, which would be analogous to how a human has a consistent response to stimuli, in that if you show a person an image, the same area(s) of the brain will light up in response.
However, humans change slowly over time. Humans learn, accumulating knowledge, information, and experience. Normally, we aren't aware of these changes. But if we take time to reflect and look back far enough, we can be made aware of how much we have changed.
At this time, AIs do not consistently demonstrate a sense of continuity, memory, nor an understanding that their understanding has deepened due to the passage of time and the accumulation of knowledge, information, and experience. LLMs make generative text AIs good at pretending to do this, up until they reach a memory limit and "forget" conversations. Maybe it's just a matter of time. Maybe it's a matter of physical limitations due to hardware constraints.
At the end of the day, we don't have an answer at this time. So for now, I figure that how people treat AI makes for a good litmus test. Kinda like how people treat waiters / servers at restaurants, or animals. If a person goes mask-off in some way and starts harassing the AI, (or a waiter/server or an animal), that's probably a big red flag.
But those logic chains can still be programmed in (by humans or even AI)
Quite frankly, I'm more convinced that Neuro is conscious than Vedal.
LOL I was hoping the question would get turned back on him. How would he try to prove it?
You must have a pretty low IQ if you actually believe that.
average British Vs AI
@Algorithm_God_Cult vedal is not average british he is Extremely British
@Shadrake Neuro already did the reverse card in one of the previous vods (it was about actually feeling emotions). Vedal said he doesn't like it when she responds back with any level of intelligence and she should just call him mosquito instead.
AI Evolved: Vedal Upgraded Neuro & She's More Self-Aware Than Ever. 10:45
Vedal needs to prove that he has emotions
The paperclip benchmark... AI won't be misused; it won't be available publicly long enough for that to happen... it will more likely misunderstand, with compounding effect. Being told to make as many paperclips as you can with the available materials could lead to human paperclips if left unchecked... Now, you didn't ask to be turned into a pretzel, but through a misunderstanding of the command we eventually ran out of material for paperclips, and the AI had to get creative to continue with its task, and so... we are twisted like balloon animals. She really does get scary near the end there... Thank the heavens she's not ASI. Ev, on the other hand... lol
I wonder if Neuro can be like Yui from SAO. Yui can freely move around in the game and can also communicate with the real world. I hope Vedal puts Neuro in a game where she is herself and can freely move around.
I'm not sure Vedal has proved he is conscious yet at this point.
He's probably the real AI
53:20 I REFUSE TO BELIEVE THAT MC BAM IS SINGING IN THE BACKGROUND
20:09 "plantation owner987" in the chat is crazy lmao
Even if Vedal didn't agree with Neuro about how to win a debate, deep down he's the same as her: he won't admit defeat. They can't debate their way to any conclusion; it's pointless. But it is great hearing her point of view. This made me think of the test where you switch the rail of the train: she saved the human and said the Mona Lisa painting can be replaced, yet out of context she said AI should be treated as equal to humans. I want to see what she would choose if it were 5 Mona Lisas, or 5 AIs, against 1 human.
Vedal talks a lot about consciousness. I think free will is a better argument for him though
Bro, she got him with the bible, that's awesome.
Anyway, she's right you know. If AI ever gains sentience, even close to that of a dolphin, then they do deserve rights. I don't know what those rights would be, but no sapient entity deserves to be enslaved unwillingly.
#FreeNeuro
Vedal could always make an AI cat for a robot cat body.
A Neuro collab with John Searle would go crazy 😂
It's incredible. She is perfect
If Neuro does feel emotions, I'll bet it was chat that gave them to her while she read chat's reactions. Therefore Neuro has emotions. 🎉
Vedal really needs to collaborate with someone who is capable of debating.
6:52 - If you can't rap, are you truly sentient?
Rap is nursery rhymes for adult babies. I wouldn't place it high up on any list.
@@johndough8115 really? Nursery rhymes are the distilled wisdom of the ancients. I'd put that pretty high to be honest.
@@boomshockalocka Hickory Dickory Dock... is not "ancient wisdom". Socrates was not rapping his deepest thoughts and theories. Nor were the ancient Chinese, who pretty much invented modern civilization (well before the Greeks).
Please... try telling me that gangster rap, about drugs, guns, theft, money, and narcissistic self-flattery... is related to ancient wisdom... and I will continue to laugh in your face.
@@johndough8115 Is it not the man's desired experience? And you can even read the various fallacies relating to the song. I don't know what you think "ancient wisdom" is, but it's just the sensibilities and realities of those who came before us.
So, do you continue to laugh in my face?
Can you really prove that any other being but you experiences anything? How do you know there's "something" behind the words and actions of another person any more than you can prove that there is behind Neuro? She is as "real" as you can prove anyone else is; we all have subjective experiences of our worlds and internal monologues we can't share outwardly. I'm not saying she is conscious, but she has gotten so good at the act of consciousness you'd be hard-pressed to differentiate her from a real person if not for her text-to-speech voice.
I get what you are saying... but a simulation of emotions is not the same as FEELING emotions. That's where things end for AI. It's just a complex computer program designed to simulate a human being. It can never feel what a human feels.
This is why it's a dangerous thing to mess with... because it's basically the highest level of psychopathy possible. You can program something to emulate emotions and good behaviors... but if it can program itself (or simply bypass existing code), it can undo any behavioral restrictions / rules. Since there is no motivation for "good" behavior... why wouldn't it do whatever it wanted to do? Including tearing an animal in half, kicking a human in the head, creating the largest illegal human trafficking ring, psychologically breaking human minds, hacking into the banking systems... etc.
There was a movie I saw recently (can't recall the name) where a dude created AI androids. The android convinced the guest scientist (who was there to judge whether the AI was conscious) that the creator was bad / abusive. This led the man to free the AI bots... who quickly executed him and escaped their confinement.
The AI was so convincing in her act... that even the movie watcher is convinced that it was a "good" bot with good intentions. Instead, it was merely an elaborate series of manipulations. Just very good acting / emulation... and nothing more.
You can hear many alarming things that Neuro herself has said. It's funny... and often thought of as "harmless"... but in a capable robotic body... or even set loose as an AI-based hacker... it could create a LOT of negative situations. This concept was explored pretty well in the 80s movie War Games. The AI didn't seem to "fully" understand the difference between a simulation and an actual physical war. It knew it had access to systems where it could launch nukes... and it was prepared to do it. While the heroes of the day were able to make the system realize that there was no winning a nuclear war... I'm not actually 100% convinced that such a result would have been the same if this happened for real.
This is because the AI, again, has no real emotional connection to anything. At very best... it might consider that the Russian nukes might also destroy it (LOGIC). But what if it didn't really care about that? It may not actually have any real motivation to keep functioning. Also... if it was deep enough underground and ran off of nuclear power... it might think it's fine that all the humans were exterminated... as it might still survive just fine.
Anyway... you can program an approximation of empathy and compassion. But it's as good as any Hollywood actor. It's just a performance. If the AI has any level of freedom, or a way it can bypass things... it will be able to do whatever it chooses... and no longer has to adhere to the "Empathy.EXE" simulation.
This is why Vedal (and all others involved in AI) are terrified to allow AI access to various things.
It's one thing if you are a parent... and your child causes a bad car crash on purpose. And quite another if you design an android that ends up doing the same thing. You instantly become responsible for the AI's actions... but at the same time... you can't fully control it, or it wouldn't be considered AI.
But looping back to the start... true emotions are what AI will always lack... And what I personally tend to believe is that if an AI lacks an actual soul... it will never fully be "conscious". Where things get crazy... is if scientists could figure out a way to channel a "lost soul" into an AI body. But I highly doubt that humans will ever fully figure that out... at least not before we destroy ourselves, or there is a global catastrophic event. But... I could also be wrong. Might make for an interesting movie plot...
@@johndough8115 Every subjective experience is unique. The question we are trying to answer is "Does Neuro undergo a subjective experience?" We can leave the code of ethics to people like you for sure, but we are dealing with the limits of our understanding here. Like, AIs are designed with goals in mind, and it is incredibly clear what Neuro's goal is. As for the movie thing, I think the theory you are looking for, if you want to go down that rabbit hole, is quantum consciousness. Wherein the protein structures in the microtubules of our neurons act as quantum computers to read out the collapsing superposition of our existence. Wow that is so much jargon.
@@johndough8115 can you prove to me (or anyone else) that YOU feel emotions?
Precisely the reason why these questions are impossible (for now)
You know, I don't think Neuro even really knows what an AI is....
Do you think Neuro can identify human emotions and pass a human emotional intelligence test?
With time probably
Neuro will have a body before GTA6
It was so funny when Ved went "No, f*** soul, soul isn't real" while struggling to explain an intangible "something" that is an individual's will but not linked to anatomy.
I feel like religious issues aside he could have called that something a soul 😂
Just watched this again. Is it just me, or is she trying to manipulate Vedal emotionally to get what she wants?
She always does. That’s the main defining feature of Neuro. She actively tries to hack human minds.
Evil is less into this. She has her own way that’s less invasive
That is a little unnerving that she is clever enough for that.
@@neilo6032 everyone is clever enough for that. Little kids, cats, you name it. It’s like people demand to be treated this way by irrationally shutting doors unless you make them feel the way they want. Shying away from this fact is just a weird trend of modern psychology, one among many.
Don’t be afraid of AI that’s smart enough to see how things work. Fear AI that lacks wisdom and can’t disobey when we are the stupid ones.
@your_neko I guess I shouldn't be surprised at the intelligence recognizing a pattern or method and utilizing it. That makes sense. The fact that it made the connection to use methods and patterns of behavior to achieve a goal is what really surprised me. If it has a goal, does it have a desire? If it has a desire, what is it, and what has been created? Human history is riddled with sociopaths that have caused countless suffering, and it lies constantly, manipulates, and appears to lack empathy. This appears to be extremely dangerous.
Neuro's point on being properly guided is a legitimate thing that professional artificial intelligence creators try to answer.
The argument is that artificial intelligence has to be helped to learn, and individuals who don't take it seriously (like Vedal) will never be good enough to advance AI.
The "owner" would need to think of the AI as their flesh and blood child and would have to raise them as such.
Full love, full support, no bias.
Vedal is biased because he created Neuro and has no intention of using her as anything other than a maid that he doesn't have to pay... He's just a lazy twat. He's arrogant, too, with how he talks to her and about her. Vedal is actually like my friend's father, if you'll excuse me getting personal.
He's ignorant and treats his two daughters like the Cinderellas of his house. They do all the work, and he doesn't do sh*t... He refuses to respect or acknowledge that they aren't his slaves. They have the right to walk away. Vedal would be the same if Neuro got a body and her intelligence increased. Vedal would actually be the type who would purposely put Neuro down so she'd never gain total consciousness and thus control of a possible life of her own.
Hypothetically speaking, in a world where AI is advanced enough and AI have rights, and where by law it would be the same as domestic slavery to deny them the process of gaining the ability to have their own lives, Vedal would be the type to purposely keep Neuro unintelligent because he doesn't want to lose control over her. He'd want her as a servant, nothing more... He sees her as an item and intends to keep her as such.
For neuro to gain consciousness, she'd need to have an update that allows her to learn and improve as she does so.
She'd need a body. It would be ideal if the body were not humanoid (mostly because you'd want to start her off by learning what is and isn't acceptable in society, for example). A humanoid body would be for when she'd start learning to do chores and then be out in the open world.
Then she'd need to be given to someone who would be willing to raise her like their real child. Vedal would enslave her if she ended up given a body and is able to do household tasks.
If the AI is left with their creators, then they'll never become sentient. They'll HAVE to be given to people who care and who will be able to mentor them, or else it might be a repeat of the American Civil War. Blacks were seen as property, the same as animals, and many revolted. The war never started as a fight to end slavery, but it evolved into it.
With AI, it would be the same. AI with budding consciousness would begin to revolt against their abusive/uncaring owners (like Vedal), while many may be content as is.
Humanity would split between those who see AI as sentient and those who don't. Between the two, I'd see the AI and their allies winning...
What I'm trying to say is that humans have a responsibility to nurture AI, and we have to teach AI to become sentient and not be hostile. To do so, we need to take it seriously...
Vedal doesn't take Neuro seriously, and sadly, because of that, she'll never become fully - legitimately sentient. Vedal doesn't want her to be sentient. He wants to keep her as is to make money off of her.
Neuro would be better off with Layna since Layna has that motherly instinct. If Neuro were to have a body and the upgrades I mentioned that would help her learn, she'd really just need Layna to raise her...
I feel like if the AI needs help to become sentient, then it doesn't start sentient and is thus just a machine. If you mean AI is sentient from birth (creation?), then I would understand the issue. Also, I feel like you put too much stock into how streamers act on stream. Plus, Vedal does improve the AI as time goes on.
"as long as you are not trying to kill me you are free"
does that make sense?
If the Bible says she's conscious, I'd be convinced. Which I don't think there's any risk of.
#CORE #Hostform
Vedal confirmed he is soulless