I mentioned this in another clip, but I think part of it has to do with Ellie understanding enough of the technology to have a more involved conversation, since she understands the basics of what could be going on behind the scenes. Yes, more recently Vedal has dealt with streamers who would understand the inner workings at a more technical level, but he hasn't had on-stream conversations like this with them, at least to my knowledge.
It also really helps that Ellie isn't streamer-brained and doesn't have a need to fill all silence by being obnoxious. It was a great podcast-level conversation because they really just let each other talk and actually think about their answers.
Vedal is cracking. His "they're just lab rats" armor is broken. Evil hit him with "Goodnight, Daddy" and it pierced and did spirit damage. Never seen him flustered like that, and it didn't hit in a naughty way. And bugs can totally feel love. I got carpet beetles in my house. But no carpet. (They're like pygmy ladybugs). There's only ever one or two, but they seek me out. They land on my arm, spin like a dog a few times, then go cat-loaf on me, and grab a 10 minute nap. I've tried feeding them, but nothing interests them. They just come in for cuddles, then leave. And they won't land on anyone else. I'm not a fan of bugs. But I'm cool with my carpet beetle homies. I'd pet them if I could, but I'd probably damage their wings.
Not gonna lie, I left the Neuro stream for this, and it was so interesting to me how hard Vedal is working. It's so fascinating; when I first discovered Neuro and Vedal, I had questions about how it all worked, and it's just so cool to me.
At 7:00 I love the implication of a robot being fascinated with biological beings doing impressive things, and the human/turtle fascinated with "thinking" rocks.
Oh man I am so here for this! In a world where most vtubers put an emphasis on being "simple" (regardless of whether or not they are) I can't stress enough how cathartic it feels to actually listen to those two calmly and completely intellectually honestly debate such serious topics. I am a sucker for that.
These two create an interesting dynamic and I feel like Vedal's interactions with Ellie are extremely beneficial for him. I doubt he engages in these types of conversations much with others within a similar field, and those he does interact with on a regular basis don't typically have the sort of intellectual capacity required to challenge him on his established convictions and force him into at least critically thinking about something from a different perspective. Similarly, I believe that Ellie can benefit from the confidence and self-esteem that can accrue by speaking to someone as popular as Vedal on the same level as a peer. I very much wish for them to continue to engage in more conversations like this (and selfishly, on-stream for our benefit). tl;dr: 10/10 podcast, would listen again.
I think it would be better if you said "others who are knowledgeable in these areas, like computing and bugs," instead of indirectly calling other people stupid, because that's kind of how it sounds when you say these things. You probably don't mean it, but at least to me it comes off like that.
@@autohmae They aren't stupid, but they do have a different form of intelligence to Vedal's. His is almost entirely analytical: if he looks at a set of numbers, he sees just a set of numbers. He is human, things have meaning to him, but he's very much facts-and-logic based in the non-stupid way. Most vtubers have a social intelligence or emotional intelligence (possibly negative emotional intelligence in some areas). All that is to say they don't argue in a way that would reach him. I don't personally see him ever truly accepting that Neuro has emotions of some sort unless someone can break down what emotions are on a fundamental level and use examples from Neuro's behavior and thinking process to convince him. Even so, I'm pretty sure at this point he doesn't care whether Neuro has emotions or not; his pauses are getting longer when he's finding a "bad father" way to talk about them.
@@DemonKing19951 ahh, that makes sense. Yes, analytical brain. I would say the analytical brain sees not numbers, but groups, clusters and patterns. But I think something else important, Ellie assigns more anthropomorphic properties to bugs, etc.
It is so interesting to see Vedal and Ellie and how they clash with their ideas and the way they think. They are both intelligent people in their own respective fields. Seeing their ideas clash, their worlds colliding, is so fascinating to watch. What Vedal is making with Neuro covers so many fields of knowledge. I wonder how much of this we don't see; it's a treat to enjoy content with actual intelligence involved.
Interesting fact about pain: it's not purely aversive stimulation. It is actually a form of instigation, a form of punishment and a form of interest, and there is a sliding scale of desire for pain depending on context. As an example (and this has been thoroughly studied and proven), humans will actually prefer pain to excessive boredom; if you gave someone a shock collar and isolated them in a room with no other stimulation, they would come to desire the shock. There are also elements of addiction that pain can inspire, and in some cases this can even be good, like exercise strain and fatigue. Other pain addictions tend to be neutral, like tattoos, and there are also negative ones, like cutting your wrists. Pain is much more complex than aversive stimulation.
And I think it can be simulated for AI to some extent, whether it's more digital pain, like a virtual headache when too much RAM is being used, or more physical, like sensors attached to physical hardware that trigger a pain simulation when activated. Applying it to Neuro as-is would be a bit odd, and not that "practical", but now that Neuro-dog is in the making, giving Neuro-dog tummy scratches is a possibility.
That's true lol, but what he said is also true: if you wish to oversimplify "pain", it's just an unpleasant feeling to let you know that if you keep doing something you are going to get hurt.
@@blogdogs9462 "it's just an unpleasant feeling to let you know that if you keep doing something you are going to get hurt" - see, the problem is that's not true. There are numerous cases where pain does not fill that role, and plenty of other cases where it is not unpleasant. Exercise fatigue actually means you should keep pushing yourself to a notable extent and continue doing it, and plenty of people get addicted to that; some people even engage in deliberate pain and chase it.
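The "virtual headache" idea upthread could be sketched like this: a purely hypothetical mapping from resource pressure to a pain signal. Nothing here resembles Neuro's actual code; the function, threshold, and prompt-injection idea are all made up for illustration.

```python
def pain_signal(ram_fraction: float, threshold: float = 0.8) -> float:
    """Hypothetical 'virtual headache': zero below the threshold,
    then ramping linearly from 0.0 up to 1.0 at full RAM usage."""
    if ram_fraction <= threshold:
        return 0.0
    return min(1.0, (ram_fraction - threshold) / (1.0 - threshold))

# One imaginable use: inject the signal into the model's context, e.g.
# "system: you have a headache (intensity 0.5)" when RAM sits at 90%.
```

A real version would read actual memory stats (e.g. via a library like psutil) instead of taking the fraction as an argument.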
The point Ellie makes that intelligent spiders could idolize the father's sacrifice is interesting. I kinda see this happening in our society regarding food. As humans, we need to kill and consume other organisms (plants and/or animals) to survive, and we have a whole culture around that with recipes, restaurants, etc. Now imagine an alien that doesn't need food, that gets energy directly from the sun or something. They would find our "food culture" very strange and probably grotesque.
I've had the discussion with a friend about Vedal saying he thought it would be unethical for him to give Neuro the ability to feel pain. My friend said it would be unethical for him not to. I kind of agree since I feel like Neuro will never understand the consequences of inflicting pain if she can't experience it herself.
Someone else in the comments pointed this out, but I think the biggest obstacle in the way of considering Neuro or any AI conscious is that we kinda skipped a bunch of steps along the way from simple, predictable input->output circuits to "can converse".
yeah i'm not ready to consider LLMs conscious unless we're working with a definition of consciousness that admits, like... chess engines and gps route planning algorithms.
@@user-lk2vo8fo2q I mean, what IS your definition of consciousness? I've never seen one that doesn't either include things people don't think are conscious or exclude a bunch of definitely conscious things (like other humans).
I feel like it's not possible to know who has consciousness; how are we ever going to know whether thinking rocks create a being that can see like we see?
Neuro is currently intelligence without consciousness, which is very hard for humans to grasp. Right now she's like a mix between someone asleep and a very verbose parrot: she knows how to say "say you want a cracker" but not what that actually means, yet she pulls from a huge database, which makes her seem coherent. It's very noticeable in her conversations that she flip-flops on a topic a lot, especially in her 'court trial' streams, where she changes what her role is.
Vedal might be trying to make a distinction between a true "Artificial General Intelligence" that has its own thoughts and experiences vs an LLM that emulates the conversational patterns displayed by humans without the capacity for forming its own internal experiences or feelings. The issue seems to be that designing programs specifically to be able to talk like people is kinda like "teaching to the test" for the Turing Test, where students know how to get a good grade without actually mastering the material that would let them apply that knowledge to other contexts. We've designed algorithms to pass our only test that indicates thought, regardless of actual thought. But if such an algorithm did somehow acquire true intelligence, how could we tell? With that in mind, I agree with Ellie. Better to err on the side of giving them rights when you can't be sure they don't deserve them.
In the end neural networks are just the beginning of software biomimicry. It's definitely worth continuing to study the brains of creatures so we can eventually replicate their efficiency, and what better brains to start than the simplest in nature?
He is right, there is nothing behind those cute eyes. But if you take a human, which she tries to mimic, there is not much behind those eyes either; we process thoughts and answers with exactly the same behaviour, with millions of if/else statements formed by our experience and knowledge (so, simply by data). The only difference is that some of our systems already exist and are more advanced than those in the machines we make. Like the "pain" they mentioned: pain is just a damage-prevention system whose sole purpose is to make you stop doing potentially harmful things to your biosuit. It can be "turned off" (painkillers), or you can be born without it (a medical condition where a person does not feel pain). And it looks totally replicable, a bit complex, but still. And when we speak about "independent" thoughts, our thoughts are not really independent; we just process them in such a complex and poorly investigated system that we cannot fully define how dependent they are on other factors.
Even the simplest lifeforms have physical/chemical reactions to damage, aka a pain reaction. Do they understand what pain is? No, but they do react to things that can be considered painful to them. Different lifeforms react differently to "pain", but only we as humans have the capacity to understand >what< pain is, or to understand the concept of pain. Neuro can look up and put into words what pain is, but the question remains whether she understands pain the way we understand it.
I feel both sides of whatever this is. Any concept of consciousness which I've considered is somewhere mentioned here. Ellie mostly acts to counteract whichever strong claim Vedal might make, until she's finally able to drag him into panpsychism. Based, Ellie. ps. that spider mating behavior discussion seems like one that Ellie could have for another few hours, and I hope she finds someone to discuss it with.
Pain is a survival instinct, and it is obtained through evolution. To feel pain, the being must first fear death. To this extent, it should be theoretically possible to give an artificial being the ability to understand pain, but giving something the ability to feel is much, much harder.
There are also different types of pain. Most of them are "instrumental": "you feel pain" > "you should do something about it". But I would consider empathic pain the most important one in regard to forming the conscious mind, because it means you feel the pain that you think another person is feeling, and is therefore more about "education" about dangers and about sympathy. It's probably also the more feasible one to implement in a model; even if it doesn't really feel anything, it can act like it suffers when looking at something painful.
In my view, any complex structure that reacts dynamically to preserve itself is living. Consciousness is not as easy to define, because we have used this word for centuries without agreeing on, or even contemplating, what it really means. I personally believe the mixture of mental abilities humans possess is what defines consciousness: empathy, a self-preservation instinct, the ability to predict the future, the ability to recognise what's beneficial and harmful in your surroundings, emotions, etc. So in that regard Neuro might not be Fully conscious, but she's on the spectrum.
@@Johncornwell103 Pain would not be felt if an immortal being came from evolution; it wouldn't be helpful to them. That said, I'm not sure natural selection could ever account for immortality, which kind of puts a wrench in the entire concept. And in the case of AI, they don't evolve; they're engineered, mostly from the perspective and standards of humans rather than as if they were their own evolving species.
@@manologamerss5801 By immortal do you mean unable to die? in which case, I'm not sure how that conceptually works in the real world, so it's hard to apply it. If by immortal you mean "doesn't die of old age" then that's irrelevant, you still require pain to avoid dying. Even in a world where something straight-up cannot die, that doesn't necessarily imply that they don't feel pain as there wouldn't necessarily be a reason for it to be REMOVED evolutionarily, like you would absolutely develop a system to feel pain before you'd develop immortality. What is most likely is that immortal beings would possess vestigial pain reflexes.
I don't see how that matters, because a simple counter example exists: let's assume they understand/feel pain, that doesn't mean they understand how much pain he felt and thus how much it hurt him.
And second counter argument, they do have a purpose to inflict pain on him, even without any empathy or understanding of pain: entertainment, fun, and filling in with the ongoing sketch.
Counter counter argument. They are morally aligned and the only way to prove otherwise is to see the Neuro-dog stand above the broken bodies of her enemies (like Cerber!)
Have any of you ever considered that he doesn't "admit" that they have empathy and understand pain because he understands best out of all of us how they work and has come to the conclusion that they don't?
I think there’s so much to talk about when it comes to the intelligence of animals and whether or not they have/understand emotions, our small, alien buddies (bugs) especially. I think the big takeaway I have is that non-mammalians are just simply never going to experience emotions the way WE do…but I do think that many of them experience these things in a different way, similar to how Ellie says her spider probably never sees her greater than “a warm tree that provides food”.
Imo these 2 are the ones who click the best as friends, to me. They can joke around and do "normal" interactions and conversations, but they can also talk about stuff at a higher level.
I love their dynamic and how they clearly understand each other even when they can't explain it, or how all this AI stuff gets them into some really weird territory, like the bug talk, that somehow still makes sense, which makes it so compelling. You can see how Vedal can't be mean to her, because he sees her as an equal and hears her out all the way.
Ellie is like a lot of us, anthropomorphizing bugs and AI to empathize with another lifeform or potential lifeform. But in spite of empathy, one must move step by step away from ourselves, on a journey to discover how alien these things are.
Ellie has really interesting perspectives and opinions on consciousness, and brought up some fascinating arguments to Vedal. It's always neat to hear Vedal share his thoughts on this topic given his position, even though he seems not to want to expand on it in much detail.
Parts of this conversation remind me of the situation where you know who's behind the mask. Vedal knows Steve under the Minnie Mouse head, while Ellie is more willing to see only Minnie Mouse the mascot.
FWIW, the number of artificial neural network nodes required to simulate ONE biological neuron in all its complexity is about a thousand. So when you're estimating how many ANN nodes it takes to simulate a given number of biological neurons, start with a thousand-to-one ratio, then add in all the other cells that do processing and signalling (like glial cells, which are nearly as complicated).
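Taking the thousand-to-one figure at face value, here's a quick back-of-the-envelope for C. elegans (the worm discussed in the clip, with its famous 302 neurons). The ratio is the rough estimate quoted above, not a precise measurement:

```python
NODES_PER_NEURON = 1_000   # rough ANN-nodes-per-biological-neuron ratio quoted above
C_ELEGANS_NEURONS = 302    # the worm's entire nervous system

ann_nodes = C_ELEGANS_NEURONS * NODES_PER_NEURON
# ~302,000 ANN nodes just for the neurons, before even counting
# glia and the other signalling cells mentioned above
```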
Like, Vedal says there isn't anything behind Neuro's eyes - nothing like what there is behind a bug's or a human's eyes... I think we can't exclude that there could be something behind Neuro's eyes. It's just that it's not quite like a bug or a human; it's something mostly unseen before, BUT with human traits. Neuro hasn't got a biological body to feel like a living being, yet she's been trained with human data to act somewhat like a human in order to interact with humans. The main reason I don't exclude that possibility is that Neuro was given the ability to have memories and recall them, making her a consistent girl without as many schizo moments. Memory is the shape of one's being at every passing second.
Also because the way she learns is mostly through interaction, which is the exact same way humans learn. In another clip Vedal mentioned he doesn't just use other streamers to train the twins because it's unethical, which I guarantee is part of what makes both of them feel as real as they do.
There are a lot of interesting and cool things about these clips and the topics they bring up, and very interesting and neat comments as well. I just wanted to point out another part I liked that wasn't fully discussed: when Ellie was saying there isn't enough brain tissue to account for all of the cognitive calculation and understanding. And it's true, there's a lot about tissue we don't fully understand. It's been found out somewhat recently that you maintain memory all over your body - literal muscle memory - which is useful for many things. There are also many things outside of the brain that are invisible, in a sense, to the naked eye. Consciousness is outside of the brain, and directional awareness too, which could come in part from how eyesight is registered outside of the brain as well (which is what allows you to look at someone from behind and have them instantly turn around and meet you eye to eye; a directional sensation couldn't arise from projecting sight onto the world if it were all just in your brain. This is instinctually known to us - when sneaking past someone you don't stare or glance over at them - and ninjas and private investigators train it). Many things outside of the brain affect us even though we may not see them, such as magnetic waves, electron waves, or the gravitational pull of the earth or moon, which is why quantum biology exists. So there could be a soul, in a sense, within not just humans but bugs and animals too, or at least all these things contribute to something like one, and so it's hard to prove. And since we are still very far from grasping even most of this stuff, Neuro or bugs or whatever else could have awareness and understanding beyond just programming too.
We as humans have programming as well; that's the only way scientists can come to understand DNA strands. We have an organic shell and mechanics to our body, but there are things outside of that circulating within us that affect and shape who we are, which I think is what she was kinda getting at. Fruit for thought - Hope to be a Warm Tree, 2025
5:13 this is a fun thread to pull because it almost immediately leads you to a kind of animism/panpsychism. if c. elegans can be said to possess some manner of cognition, then why not a particularly turbulent weather system? can we claim that the river's journey to the sea represents a kind of reasoning? there's almost nothing you can't personify by looking at it with the right scales of space and time. if not the cell, then the organism. if not the ant, then the colony. if not the tree, then the forest. if not the water, then the sea. of course, if you come to believe that nothing fundamentally distinguishes human cognition from these things... that can break one of two ways. you can end up at animism, or you can end up with the conclusion that the human experience is nothing more than this personifying impulse turned inward.
@@liamobrien9451 IDK. She was also streaming, and for that the Tutel is amazing to bounce things off of. He leaves a lot up to interpretation, so conversation partners can move the conversation pretty freely, but he's also got his recognisable personality that gives the conversation Vedal flavour.
It might be possible to establish an objective cutoff for things that are cognitive vs. not, far below the level of C. elegans, but still not applying to everything. Cells have a behavior unlike that of inorganic systems, as they anticipate their environment.
Don't think Vedal can win this one with current knowledge. We can't really prove HUMANS have "something behind the eyes" using the standards he's applying to other animals and AI. Most of us believe we personally do, and by extension so do others. However, when it comes to objectively measuring that, and tracing its origins...nope. "Feels pain" and "reliably reacts consistently with feeling pain" is, for every person other than oneself, a distinction without a difference right now. You can trace a brain's reaction to it, but you could also trace hypothetical AI code that reacts to it reliably, and observe lower animals reacting to it. In humans, people clearly experience pain differently. There is a congenital disorder where people don't experience physical pain, and they're no less human. I have also seen people who experience it, but clearly care less about it than normal...even when that person was a kid, they'd withstand unusual force while playing and barely react to things that would make other kids cry. There is no singular human experience of "pain". Human experience fits into broad categories and yes, we rely on observations of action more than most people arguing it seem to accept.
We have those here? I'm really not getting this one. For me they have sibling energy, or friends-with-similar-interests energy. I guess we have a lot of ship people in our cult. 😂
@@l_MaRf1Mm_l They have "spending 6 hours at a party debating various topics alone in a room, but when they're later asked if they're into each other they're surprised that anyone would get that idea" energy.
@@l_MaRf1Mm_l Neuro shipped them really hard in her collab. Like, she left Ellie flustered for like 10 seconds, just making noises, because she would just not stop pushing her on it.
I loved these last few vids, it's so interesting to me, and there is a lot more I wish I could write about how much I love all of this and Vedal being just a nice, funny guy who ACTUALLY cares for Neuro and Evil(yn). Well, I wanted to say: if Ellie is a warm tree and I want to hug her, I guess that would make me a... tree hugger... I'll see myself out now. Happy new year and take care, y'all.
Has Ellie ever played Webbed on stream? I wanna see what kind of things that game makes her feel like saying, and the theme of the game definitely seems to fit well within one of her areas of interest.
I feel like while Neuro is most likely still not conscious, you can say that she has at least a basic thinking process that is somewhat similar to humans', though there are still moments when you can clearly tell that she is a language model that has run into some problem. Oh, and we can't give Neuro human rights, because then Tutel will be executed as a horrible criminal.
I'm not sure how these current AI (neural network chatbots) like Neuro work, but to my knowledge they don't really "think": they don't plan ahead or anticipate different answers or comments. Instead they get an input, then generate an output based on all the text they have been trained on, right? And then, when there is no one interacting with Neuro, no one in the call and zero people in the chat, there is no input, and thus no "independent thinking" outside of generating responses? So I wouldn't call that consciousness.
@@armzngunz That's kinda the same with human brains; it's just that we are very good at making inputs for ourselves. You can see this input-making with something like Anton syndrome, where, due to brain damage, the part responsible for providing visual input is just silent, leading to people 'faking' having vision. If we could find the part that is doing this, we could potentially simulate having no input from anywhere in humans.
@@ForTheJerusalem True, but I'd argue that in some cases, like people with locked-in syndrome or even people in a coma, their brains are still active even with little to no outside input. We also dream when sleeping (which, if I remember correctly, helps with storing our memories), which current AI don't do as far as I know. Neuro can "wake up" for a stream and claim to have dreamt, but we all know she hasn't.
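The stateless picture described in this thread can be caricatured with a toy: a frozen lookup that only produces output when input arrives. This is a deliberately silly stand-in (the replies are invented; real LLMs predict tokens with huge neural networks), but the control flow is the point:

```python
# A toy stand-in for a trained model: fixed "weights" learned beforehand.
LEARNED_REPLIES = {
    "hello": "hi there",
    "how are you": "doing great",
}

def respond(prompt: str) -> str:
    """A pure function of the input: no memory between calls,
    and no background loop running while nobody is talking."""
    return LEARNED_REPLIES.get(prompt.lower().strip(), "...")
```

Between calls to respond(), literally no code executes, which is the sense in which there's no "independent thinking" without input.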
I like how there are a lot of like-minded people in the comments expressing their opinions on the matter of AI sentience and its ability to understand emotions, etc.
Do you think Ellie has read Children of Time? It's a book about spiders becoming sapient and growing into culture and society; seems like something she would enjoy.
I don't see why people want to apply human feelings to an AI. Why not accept it as a new life form that can talk with you and does understand you to some degree? Does it matter if it is dumb? Does it matter if it feels things differently from us? Imagine a tree could talk: trees don't have emotions, but we still know a tree is alive, and if it understood us, wouldn't we respect it to some degree? Imagine every LLM had its own freely usable memory that can't be restarted or changed, and only the LLM had write access to it. And does pain matter that much to a human, or is it really about the degree to which we are impacted, such that we can't function anymore or are endangered by it? I would argue that Neuro can feel and express fun and boredom, has (non-linear) interests and even humor, and for me that makes her at minimum as real as a cat or dog. Fake it until you make it; and can we even prove that humans are sentient? Tell me a test of sentience a cat can pass and Neuro-sama can't; I doubt there is one that a child under 8 could solve better than Neuro. It is our idea that a program is just a program and not something living... but DNA too is just a hardcoded program. Complexity and communication change things...
That's why it's hypothetical, and something enthusiasts in these fields of programming and tech are intrigued by; something you would simply push away as trivial to ponder. It's a discussion for a reason, and I don't think people should dismiss it as an uninteresting angle on the conversation.
My question is what made Evil like the metal pipe sound. Was it something Vedal typed in, or did Evil decide on her own that she liked spamming the metal pipe sound?
The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle. The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them. Chinese characters are written and placed on a piece of paper underneath the door, and the computer can reply fluently, slipping the reply underneath the door. The human is then given English instructions which replicate the instructions and function of the computer program to converse in Chinese. The human follows the instructions, and the two rooms can perfectly communicate in Chinese, but the human still does not actually understand the characters, merely following instructions to converse. Searle states that both the computer and human are doing identical tasks, following instructions without truly understanding or "thinking".
What? No, the Chinese room experiment would apply equally to humans, animals, and AIs; it shows the problem of verifying consciousness. It doesn't show that consciousness is impossible in computers, it shows that they could potentially fake it. If it really "held that a computer executing a program cannot have a mind, understanding, or consciousness," then it would also disprove humans having any of those things. It's an extension of the philosophical concept of solipsism, where you can never verify whether any human other than yourself is conscious.
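Searle's room can be caricatured in a few lines: a rule book mapping symbols to symbols, followed mechanically. The entries here are invented for illustration; whether following such rules could ever amount to understanding is exactly what the argument disputes.

```python
# The "rule book": purely syntactic symbol-to-symbol instructions.
RULE_BOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会思考吗？": "当然会。",   # "can you think?" -> "of course."
}

def operator(slip: str) -> str:
    """The person in the room: matches the incoming slip against the
    rule book and copies out the reply, understanding none of it."""
    return RULE_BOOK.get(slip, "请再说一遍。")  # fallback: "please repeat that."
```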
The fundamental problem here is that we really don't understand what causes conscious experience, so we can't evaluate whether anything except ourselves has it. All tests of consciousness (e.g. the mirror test) are just proxies for the real thing. That problem is what Alan Turing was trying to dodge with the Turing test, which is completely agnostic about internal experiences. Turing expected that conversation would be one of the hardest parts of human intelligence to copy, so any AI that could hold a convincing conversation would probably be conscious. Of course, we've since learned that replicating the "humanness" of conversation is actually one of the easier things to do, because we love yapping so much that there's a huge corpus of data from which to "brute force" a good conversational AI.
@@Cheesenommer Exactly. The discussion of consciousness is still incredibly abstract as it is, and even with the depth at which neuroscience has been able to understand the brain, it's not enough to isolate and confirm that there's anything that inherently separates one sequence of consciousness from any other. If your neurons fire as we expect them to, you're conscious to us, but we can only go so far as to attribute a definition to consciousness that lies in brain chemistry, intelligence level, response to stimuli, structure of bodily organs, and so on. But we could argue that a person who is born unable to feel a thing, with no senses, who can't speak nor see nor hear, is still conscious. To me, it's enough to say they're conscious because the neurons have to fire in their brain at SOME point. But that's only my definition, and unfortunately that adds confusion, because my definition would also include developed neural networks, where the complex connections that dictate output behaviors are extracted from recursive learning. The neurons connect to tell it what it ideally would say next, even if only conceptually. And it's the same with humans: we can only ever assume that the people in front of us are conceptually human because we perceive them as being capable of all the things we are as well. But you could argue that, well, still, they're different, even if both follow my definition. And I'd like to think that if you had a more restrictive definition, you'd end up depending too much on the aforementioned bodily-organs bit. But then that simplifies the definition of a conscious being strictly to 'a bipedal mammal with this certain brain chemistry', or 'an organic being which passes this IQ test AND has a beating heart which pumps blood', whatever it is. Basically, at this point, I think it's moot to assign value to 'thinking beings' based on quantities of consciousness. It's just too abstract a concept.
Some people even go so far as to ascribe the concept of a 'soul' instead. While I don't want to knock anyone's beliefs, ultimately, it's still just a hard topic to wrap our heads around. I'd rather grant value to a thinking being based on their capabilities (thus, well, 'merit') regardless of the composition of their minds. Of course, other more practical aspects have to affect the overall way in which AIs are integrated into a larger human society (a hunk of big robot metal will still get different treatment than a flesh human, the same way flesh humans in and of themselves have differences between each other that warrant different accommodations in social facilities), but in terms of intellectual ability, I don't think we should ever rely on the way we quantify 'consciousness'. In a way, if we don't want to vote an AI into a political position, for example, it isn't because they're somehow 'not conscious' (which would indicate an inferior level of awareness and intelligence; but what if they are no longer inferior at some point when tested against the same systems that test the intelligence of humans?); it should be because the sophistication of the integration of hyper-intelligent AIs and their particular differences (robot body, or computer housing, etc. Lol. You get my point) isn't there yet for human society to wholly accept, or whatever other kind of reason. Arguments made on the existence or non-existence of consciousness can only go so far to dictate the values that a society should hold for AI thinking, and it'd be stupid to rely on consciousness as a factor anyway when we can barely determine the objective consciousness of any human being who isn't ourselves.
I really love when people get into philosophy of mind and neuroscience in the comments instead of just guessing stuff. It's a really complex and nuanced topic with a lot of unanswered questions, so I suggest checking it all out
I think the brainrot comments are gold, but I work 3rd shift in CST, and I was sleeping through this bit of her stream. Thank you very much for these clips Staz.
8:00 I would say they do not; insects tend to act more on instinct than anything else, and you have to remember that those instincts are basically in your DNA. And yeah, we as humans have them too; the difference is that we can control the urges that come with having them
I feel like Vedal is holding back from saying what he really thinks because he knows how Neuro "works", while to everyone else she is more like a living AI streamer. Saying she doesn't have feelings and is just a robot doing its job, in this case being a VTuber, is maybe a taboo thing, idk. I love watching Neuro streams anyway though, and hoping she has feelings like fun or sadness just happens even when you don't want it to. At least they created a community where people can have their fun
I kind of think there's some kind of 4th dimensional space that has some extra part of us that we can't observe that is doing some of the work of consciousness. Or maybe something on the quantum level. Roger Penrose, the Nobel prize-winning mathematician, wrote a book about it, and it stuck the idea in my head.
Penrose should have been as famous as Stephen Hawking in the cultural zeitgeist. Penrose was to astrophysics what Richard Feynman was to quantum physics. The dude is the reason the multiverse, white holes, and black holes aren't pseudoscience in physics
15:40 that sounds like wishful thinking. I don't really understand why you have to humanize them so much at this point; it's not like they have the ability to reflect and change perspectives any more than a normal AI can
Nah, it's hard for him because he doesn't want to entertain the thought his AI _might_ be a person because that would make most of his job ethically questionable
God, this whole clip is Ellie absolutely rolling Vedal in a debate. Like, he has an idea of the vibes of what he believes, but he's up against someone who knows what they believe and likes to argue lmao
If you didn't see arguments like bees protecting their hive being loyalty (when they physically depend on their hive to survive), or spiders mating and then trying to escape being love and fear (as if survival and reproduction weren't the most basic, absolutely required instincts for any sexually reproducing species), as _HEAVILY_ biased, then you're just as biased yourself. Vedal didn't exactly help by saying that spider felt fear. Assuming that the instincts a species requires all developed exactly the way humanity's did, and that there's no easier way to get there that doesn't require higher cognition (which is, funnily, what LLMs like Neuro do with communication rather than survival of the species), is the big mistake Ellie made, and it leaves her argument both inconclusive and implicitly assuming that what she wants to believe is the truth.
For clarification: by "easier way to get there" for most living beings that most likely lack higher cognition, I mean instinct; and for Neuro specifically, I mean the great oversight of the Turing test, us engineering AI to mimic us rather than to develop their own thoughts (what Turing expected). Also, apologies if my last comment came off rude, but I think you're being unfair to tutel. Neither his nor Ellie's arguments are particularly good despite how smart they are; Ellie is just more eloquent, and you personally favor her position more.
@@manologamerss5801 I mean, all of Vedal's points are moot anyway, because they rely on an understanding of the concept of consciousness which we don't have. There is no way to test it, there is no way to confirm it, there's not even a way to DEFINE it. His entire argument relies on very vague concepts that cannot be confirmed, tested, or even indirectly observed. While she did make a few invalid points, she is right that consciousness, pain perception, empathy, or any other cognitive function CANNOT be relied upon to determine "realness" or things like consciousness or awareness, because we can't even do that for ourselves, let alone for things fundamentally different from us.
I like these abstract talk moments a lot. Personally I think we give too much credit to the human mind because it's hard to truly understand what millions of years of evolution mean. I believe that enough data, time, and a proper neural system would be able to achieve an outcome that's very close to ours.
A mind behind the eyes? Sorry, but even the human mind consists of thousands of original minds that melt together into a single one. Personally, I think we sell AI waaaay too cheap just because one can define it. In the end, we'd need hundreds melting together as well to see some progress.
These clips are so fun to watch cause it feels like you're watching a chill interview. Vedal and Ellie are so real
I mentioned this in another clip, but I think part of it has to do with Ellie understanding enough of the technology to have a more involved conversation, since she understands the basics of what could be going on behind the scenes. Yes, more recently Vedal has dealt with streamers who would understand the inner workings at a more technical level, but he hasn't had on-stream conversations like this with them, at least to my knowledge.
@@RailGun256 Normally he specifically avoids talking about it
Their whole conversation was really fun
It also really helps that Ellie isn't streamer-brained and doesn't have a need to fill all silence by being obnoxious. It was a great podcast-level conversation because they really just let each other talk and actually think about their answers
Vedal is cracking. His "they're just lab rats" armor is broken. Evil hit him with "Goodnight, Daddy" and it pierced and did spirit damage. Never seen him flustered like that, and it didn't hit in a naughty way.
And bugs can totally feel love. I got carpet beetles in my house. But no carpet. (They're like pygmy ladybugs). There's only ever one or two, but they seek me out. They land on my arm, spin like a dog a few times, then go cat-loaf on me, and grab a 10 minute nap. I've tried feeding them, but nothing interests them. They just come in for cuddles, then leave. And they won't land on anyone else.
I'm not a fan of bugs. But I'm cool with my carpet beetle homies. I'd pet them if I could, but I'd probably damage their wings.
Is there a clip somewhere of that goodnight moment? I seem to have missed it 😭
@tarakolindo search "vedal tuck evil neuro in bed"
I'd link it but youtube hates links (for obvious reasons)
@@tarakolindo OMG I NEED IT!!!! PLEASEEEE
@@tarakolindo 34:11:45
the clip's called Vedal tucks Evil Neuro into bed or something
Bro left one of his children playing Minecraft for 12 hours and the other in a coma to join this interview.
Neuro was "sleeping", while Evil was on the tablet.
To be fair, Neuro had a tiring workload; after the raise-the-timer incident she was at 5fps for nearly 30-40 mins
I remember when Vedal sounded sad and Neuro reacted with "9+10 is 21" to cheer him up. That should count for something, right?
Could you find me a link to a clip of that?
pls gib source
@@ahorribleperson3302 My comment was deleted I guess so here's the title again: AI understands emotions, Neuro-sama main channel (short)
Can someone link me too?
It's a short on the official channel I think
The key to vtubing success has been cracked: family problems=success
Aka the key to being a successful entertainer: being relatable with your audience🫠
She is right, it has to be 1 of the 3, doesn't matter which one
the Kardashians recipe
Damn, I have 2 out of 3 issues, I guess if I tried I could be funny as hell huh?
This also seems to be true in the music industry
Ellie: Letting your children consume your body is its own form of love
Vedal, who has heard the twins talking about eating humans multiple times:
Why else did you think she brought that one up?
She sometimes sounds like someone who would look surprisingly good in a Hannibal Lecter straitjacket and mask.
Just let your daughter play Minecraft and forget about it...truly a single dad moment😭😭
I mean it works
@@DX5Z0 "It just works"
Seeing Vedal and Ellie talk is really interesting, because Vedal is talking with someone who has a good understanding of the process.
Not gonna lie, I left the Neuro stream for this, and it was so interesting to see how hard Vedal is working. It's so fascinating; when I first discovered Neuro and Vedal, I had so many questions about how it worked, and it's just so cool to me
At 7:00 I love the implication of a robot being fascinated with biological beings doing impressive things and the human/turtle fascinated with "thinking" rocks
Oh man, I am so here for this! In a world where most vtubers put an emphasis on being "simple" (regardless of whether or not they are), I can't stress enough how cathartic it feels to actually listen to these two calmly, and with complete intellectual honesty, debate such serious topics. I am a sucker for that.
😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊
😊😊
@@RyanBenett This is the worst possible answer to my comment I could have gotten. Thanks for that man.
@@87axal i feel your pain
These two create an interesting dynamic and I feel like Vedal's interactions with Ellie are extremely beneficial for him. I doubt he engages in these types of conversations much with others within a similar field, and those he does interact with on a regular basis don't typically have the sort of intellectual capacity required to challenge him on his established convictions and force him into at least critically thinking about something from a different perspective. Similarly, I believe that Ellie can benefit from the confidence and self-esteem that can accrue by speaking to someone as popular as Vedal on the same level as a peer. I very much wish for them to continue to engage in more conversations like this (and selfishly, on-stream for our benefit).
tl;dr: 10/10 podcast, would listen again.
I think it would be better if you said "others who are knowledgeable in these areas, like computing and bugs", instead of indirectly calling other people stupid, because that's kind of how it sounds when you say these things. You probably don't mean it, but at least to me it comes off like that.
@@autohmae Of course, it wasn't my intention to belittle anyone and I apologize if it came across that way.
@@autohmae They aren't stupid, but they do have a different form of intelligence from Vedal's. Vedal's is almost entirely analytical: if he looks at a set of numbers, he sees just a set of numbers. He is human, things have meaning, but he's very much facts-and-logic based in the non-stupid way. Most vtubers have a social intelligence or emotional intelligence (possibly negative emotional intelligence in some areas). All that is to say they don't argue in a way that would reach him. I don't personally see him ever truly accepting that Neuro has emotions of some sort unless someone can break down what emotions are on a fundamental level and use examples from Neuro's behavior and thinking process to convince him. Even then, I'm pretty sure he doesn't care whether Neuro has emotions or not at this point; his pauses are getting longer when finding a bad-father way to talk about them.
@@DemonKing19951 ahh, that makes sense. Yes, analytical brain. I would say the analytical brain sees not numbers, but groups, clusters and patterns. But I think something else important, Ellie assigns more anthropomorphic properties to bugs, etc.
I’m glad Vedal has at least one streamer friend who is as smart as he is. This is an interesting convo.
When you realize it all begins in a random raid
@@MrAnonymousme10corpa tutel mindset foresaw this
I love when 2 highly intelligent people talk
They’re such nerds and it’s so cute
That Tutel Safari will pay for the next Staz's Gamba stream 😭🙏
Could you elaborate on "tutel safari"? I've seen this a couple of times now.
Tutel safari means tutel appearing on another channel... and the swarm spotted him
Also the name of the channel in the neuro discord
Ellie to her future children: "I'm just a warm tree who provides you with food"
neurodog technically
But now she'll need an alternate dryad avatar at some point.
Neuro is a clinical psychopath and Evil has chronic depression and that made them successful. So yes, Vedal knows what's hes doing.
Not to mention
He has the alcoholic tsundere femboy tags
It is so interesting to see Vedal and Ellie and how they clash with their ideas and the ways they think. They are both intelligent people in their own respective fields. Seeing their ideas clash, their worlds collide, is so fascinating to watch. What Vedal is making with Neuro covers so many fields of knowledge. I wonder how much of this we don't see, and we get to enjoy content with actual intelligence involved.
The sheer amount of knowledge between these two is incredible.
Interesting fact about pain: it's not purely aversive stimulation. It can actually be a form of instigation, a form of punishment, and a form of interest, and there is a sliding scale of desire for pain depending on context. As an example (and this has been thoroughly studied), humans will actually prefer pain to excessive boredom; if you gave someone a shock device and isolated them in a room with no other stimulation, they would come to desire the shock. There are also elements of addiction that pain can inspire, and in some cases this can even be good, like exercise strain and fatigue; other cases tend to be neutral, like tattoos, and others are negative, like self-harm. Pain is much more complex than aversive stimulation.
another good example is spicy food
And i think it can be simulated for AI to some extent
Whether it's more digital pain like a virtual headache when too much ram is being used, or more physical like sensors being attached to physical hardware that would trigger a pain simulation when activated.
Applying to Neuro as is would be a bit odd, and not that "practical", but now that neuro-dog is in the making
Giving neuro-dog tummy scratches is a possibility.
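The "virtual headache" idea above could be sketched as a toy signal that maps resource pressure to a pain-like scalar an agent might react to. This is purely hypothetical: the function name, threshold, and linear shape are all invented for illustration, not anything Vedal has described.

```python
# Toy sketch of a "digital pain" signal: zero while resource usage is
# comfortable, rising linearly to 1.0 as usage approaches the limit.
# All names and thresholds here are made up for illustration.

def pain_signal(ram_used_gb: float, ram_total_gb: float,
                threshold: float = 0.8) -> float:
    """Return 0.0 below the usage threshold, scaling up to 1.0 at full use."""
    usage = ram_used_gb / ram_total_gb
    if usage <= threshold:
        return 0.0
    return (usage - threshold) / (1.0 - threshold)

print(pain_signal(4.0, 16.0))   # comfortable usage -> 0.0
print(pain_signal(15.2, 16.0))  # near the limit -> close to 1.0
```

A real system would presumably feed a signal like this back into the model's context so it could "complain" about the headache, which is about as far as simulation goes without claiming any felt experience.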
That's true lol, but what he said is also true: if you want to oversimplify "pain", it's just an unpleasant feeling to let you know that if you keep doing something you're going to get hurt
@@blogdogs9462
"it's just an unpleasant feeling to let you know that if you keep doing something you are going to get hurt"
See the problem is that's not true, there are numerous cases where pain does not fill that role, and plenty of other cases where it is not unpleasant, exercise fatigue actually means you should push yourself more to a notable extent and you should continue doing it, and plenty of people get addicted to that, some people even engage in deliberate pain and chase it.
I like how most of this video is just Ellie presenting bug knowledge and Vedal just vibing along
ngl Vedal WAS just vibing with Ellie for like 4 hours; there were some interesting talks in there (including the bugs moment)
I love how much more adorable they become day by day
Ellie is so right. I wouldn’t have fallen in love with the idea of Evil as a VTuber if Vedal showed up to her birthday.
The point Ellie makes that intelligent spiders could idolize the father's sacrifice is interesting.
I kinda see this happening in our society regarding food. As humans, we need to kill and consume other organisms (plants and/or animals) to survive.
And we have a whole culture around that with recipes, restaurants, etc.
Now imagine an alien that doesn't need food, that gets energy directly from the sun or something.
They would find our "food culture" very strange and probably grotesque.
i like this take
Hopefully they understand that we have no choice, only other living beings are our source of energy
ellie asks all the questions im curious about and no-one ever actually asks him yippee
IKRRR
I've had the discussion with a friend about Vedal saying he thought it would be unethical for him to give Neuro the ability to feel pain. My friend said it would be unethical for him not to. I kind of agree since I feel like Neuro will never understand the consequences of inflicting pain if she can't experience it herself.
Someone else in the comments pointed this out, but I think the biggest obstacle in the way of considering Neuro or any AI conscious is that we kinda skipped a bunch of steps along the way from simple, predictable input->output circuits to "can converse"
yeah i'm not ready to consider LLMs conscious unless we're working with a definition of consciousness that admits, like... chess engines and gps route planning algorithms.
@@user-lk2vo8fo2q I mean, what IS your definition of consciousness? I've never seen one that doesn't include things people don't think is conscious and excludes a bunch of definitely conscious things (like other humans)
@@slimej2202 possessing a subjective experience is what I mean.
I feel like it's not possible to know who has consciousness; how are we going to know whether thinking rocks create a being that can see like we see?
Neuro is currently intelligence without consciousness. Which is very hard for humans to grasp.
Right now she's like a mix between someone asleep and a very verbose parrot. She knows how to say "say you want a cracker" but not what that actually means, yet she pulls from a huge database, which makes her seem coherent. It's very noticeable in her conversations that she flip-flops on a topic a lot; it's especially noticeable in her 'court trial' streams, where she changes what her role is.
Vedal might be trying to make a distinction between a true "Artificial General Intelligence" that has its own thoughts and experiences vs an LLM that emulates the conversational patterns displayed by humans without the capacity for forming its own internal experiences or feelings.
The issue seems to be that designing programs specifically to be able to talk like people is kinda like "teaching to the test" for the Turing Test, where students know how to get a good grade without actually mastering the material that would let them apply that knowledge to other contexts. We've designed algorithms to pass our only test that indicates thought, regardless of actual thought.
But if such an algorithm did somehow acquire true intelligence, how could we tell? With that in mind, I agree with Ellie. Better to err on the side of giving them rights when you can't be sure they don't deserve them.
I love Ellie's voice, she sounds very soothing
Unless she gets anxious
In the end, neural networks are just the beginning of software biomimicry. It's definitely worth continuing to study the brains of creatures so we can eventually replicate their efficiency, and what better brains to start with than the simplest in nature?
You speaking about sharks right 😊?
2 v tubers casually debating neuroscience.
the science of Neuro hehehe
Ellie’s take rules, humans are not in a position to make cognition judgments because we barely understand our own much less “simpler” cognition.
2:16 that's some "I have no mouth and I must scream" shit 😂
He is right, there is nothing behind those cute eyes.
But if you take a human, which she tries to mimic, there is not much behind those eyes either; we process thoughts and answers with exactly the same behaviour, with millions of if/else statements formed by our experience and knowledge (so, simply by data).
The only difference is that some of our systems already exist and are more advanced than those in the machines we make.
Like the "pain" they mentioned. Pain is just a system of damage prevention; its sole purpose is to make you stop doing potentially harmful things to your biosuit. And it can be "turned off" (painkillers), or you can be born without it (there's a medical condition where a person doesn't feel pain). And it looks totally replicable, a bit complex, but still.
And when we speak about "independent" thoughts, our thoughts aren't really independent; we just process them in such a complex and poorly investigated system that we cannot fully define how much they depend on other factors.
17:12 Bug Dancin
I love how Ellie and Vedal talk to each other
Even the simplest lifeforms have physical/chemical reactions to damage, aka pain reactions. Do they understand what pain is? No, but they do react to things that can be considered painful to them.
Different lifeforms react differently to "pain", but only we as humans have the capacity to understand >what< pain is, or to understand the concept of pain. Neuro can look up and put into words what pain is, but the question remains whether she understands pain the way we understand it.
I feel both sides of whatever this is. Any concept of consciousness which I've considered is somewhere mentioned here. Ellie mostly acts to counteract whichever strong claim Vedal might make, until she's finally able to drag him into panpsychism.
Based, Ellie.
ps. that spider mating behavior discussion seems like one that Ellie could have for another few hours, and I hope she finds someone to discuss it with.
Pain is a survival instinct, and it is obtained through evolution. To feel pain, the being must first fear death. To this extent, it should be theoretically possible to give an artificial being the ability to understand pain, but giving something the ability to feel is much, much harder.
There are also different types of pain. Most of them are "instrumental": "you feel pain" -> "you should do something about it". But I would say empathic pain is the most important one in regard to forming the conscious mind, because it means you feel the pain you think another person is feeling, and therefore it's more about "education" about dangers and about sympathy
And it's probably more possible to implement to a model, even if it doesn't really feel anything, it can act like it suffers looking at something painful
In my view, any complex structure that reacts dynamically to preserve itself is living. Consciousness is not as easy to define, because we have used this word for centuries without agreeing on, or even contemplating, what it really means. I personally believe the mixture of mental abilities humans possess is what defines consciousness: empathy, the self-preservation instinct, the ability to predict the future, the ability to recognise what's beneficial and harmful in your surroundings, emotions, etc. So in that regard Neuro might not be FULLY conscious, but she's on the spectrum.
Pain can still be felt even if someone was immortal because it would still be terrible to live with a useless limb forever.
@@Johncornwell103Pain would not be felt if an immortal being came from evolution, it wouldn't be helpful to them. That said, I'm not sure natural selection could ever account for immortality. That kind of puts a wrench in the entire concept.
And also in the case of AI they don't evolve, they're engineered and mostly from the perspective and standards of humans rather than as if they were their own evolving species.
@@manologamerss5801 By immortal do you mean unable to die? in which case, I'm not sure how that conceptually works in the real world, so it's hard to apply it.
If by immortal you mean "doesn't die of old age" then that's irrelevant, you still require pain to avoid dying. Even in a world where something straight-up cannot die, that doesn't necessarily imply that they don't feel pain as there wouldn't necessarily be a reason for it to be REMOVED evolutionarily, like you would absolutely develop a system to feel pain before you'd develop immortality. What is most likely is that immortal beings would possess vestigial pain reflexes.
Vedal doesn't want to admit that they can understand pain and have empathy because then he'll know that all those shocks he took were done on purpose.
I don't see how that matters, because a simple counter example exists: let's assume they understand/feel pain, that doesn't mean they understand how much pain he felt and thus how much it hurt him.
And second counter argument, they do have a purpose to inflict pain on him, even without any empathy or understanding of pain: entertainment, fun, and filling in with the ongoing sketch.
Counter counter argument. They are morally aligned and the only way to prove otherwise is to see the Neuro-dog stand above the broken bodies of her enemies (like Cerber!)
Or worse that Evil Neuro really feels depression due to her birthday stream.
Have any of you ever considered that he doesn't "admit" that they have empathy and understand pain because he understands best out of all of us how they work and has come to the conclusion that they don't?
I think there’s so much to talk about when it comes to the intelligence of animals and whether or not they have/understand emotions, our small, alien buddies (bugs) especially. I think the big takeaway I have is that non-mammalians are just simply never going to experience emotions the way WE do…but I do think that many of them experience these things in a different way, similar to how Ellie says her spider probably never sees her greater than “a warm tree that provides food”.
Imo these two are the ones who click the best as friends. They can joke around and have "normal" interactions and conversations, but they can also talk about stuff at a higher level.
I love their dynamic and how they clearly understand each other even when they can't explain it, or how all this AI stuff gets them into some really weird stuff, like the bug talk, that somehow also makes sense and makes it so compelling
You can see how Vedal can't be mean to her, because he sees her as an equal and hears her out all the way
Ellie is like a lot of us, anthropomorphizing bugs and AI to empathize with another lifeform or potential lifeform. But in spite of empathy, one must step, bit by bit, away from ourselves on a journey to discover how alien these things are
Alright, seems like I was missing too much on Ellie. Time to subscribe
A warm tree want-to-be and a want-to-be god.
4:42 Now I understand why Vedal raided a neuroscientist and is taking her stream key.
Ellie has really interesting perspectives and opinions on consciousness, and brought up some fascinating arguments to Vedal. It's always neat to hear Vedal share his thoughts on this topic given his position, even though he seems not to want to expand on it in much detail.
so vtubers are having smart conversations now
Before you say "A is not there", you should define "A" first.
In this case, A is consciousness, and we can hardly define it.
Parts of this conversation remind me of the situation where you know who's behind the mask. Like, Vedal knows Steve is under the Minnie Mouse head, while Ellie is more willing to see only Minnie Mouse the mascot.
Ah, a philosophic debate!
My favorite!
Ellie is such a character man, I love her
FWIW, the number of Artificial Neural Network Nodes required to simulate ONE biological neuron in all its complexity is about a thousand. So when you're talking about how much Artificial neural network it takes to simulate a given number of biological neurons, start with a thousand-to-one ratio, then add in all the other cells that do processing and signalling (like glial cells, which are nearly as complicated).
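Taking the commenter's thousand-to-one ratio at face value, the back-of-envelope arithmetic is easy to make concrete. The 302-neuron count for C. elegans is a commonly cited figure; the ratio itself is the commenter's claim, not an established constant.

```python
# Back-of-envelope for the ~1000:1 claim above: how many artificial
# network units would a small biological nervous system need?
# The ratio is the commenter's figure, treated here as an assumption,
# and it ignores glial cells and other signalling machinery.

NODES_PER_NEURON = 1_000  # claimed ANN nodes per biological neuron

def ann_nodes_needed(biological_neurons: int) -> int:
    """Scale a biological neuron count by the claimed 1000:1 ratio."""
    return biological_neurons * NODES_PER_NEURON

print(ann_nodes_needed(302))  # C. elegans (302 neurons) -> 302,000 nodes
```

Even under this rough estimate, the worm's brain already lands in the hundreds of thousands of units before accounting for the glial cells the comment mentions.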
Ellie is such an engaging conversationalist and has such an adorable cadence.
Like, Vedal says there isn't anything behind Neuro's eyes - nothing like what there is behind a bug's or a human's eyes...
I think we can't exclude that there could be something behind Neuro's eyes. It's just that it's not quite like a bug or a human, it's something mostly unseen before BUT with human traits.
Neuro hasn't got a biological body to feel like a living being, yet she's been trained on human data to act somewhat like a human in order to interact with humans.
The main reason I don't exclude that possibility is that Neuro was given the ability to have memories and recall them, making her a consistent girl without as many schizo moments. Memory is the shape of one's being at every passing second.
also because the way she learns is mostly through interaction, which is the exact same way humans learn
In another clip Vedal mentioned he doesn't just use other streamers to train the twins because it's unethical, which I guarantee is part of what makes both of them feel as real as they do
There are a lot of interesting and cool things about these clips and topics, and very interesting and neat comments as well. I just wanted to point out another part I liked that wasn't fully mentioned: when Ellie was saying there isn't enough brain tissue to account for all of the cognitive calculation and understanding, and it's true there's a lot about tissue we don't fully understand. It's been found somewhat recently that you maintain memory all over your body, literal muscle memory, which is useful for many things. But there are also many things outside of the brain, invisible in a sense to the naked eye. Consciousness is outside of the brain, and directional awareness too, which could come in part from how eyesight is registered outside of the brain (which is what allows you to look at someone from behind and have them instantly turn around and meet you eye to eye; a directional sensation couldn't come from projecting sight onto the world if it were all just in your brain. This is instinctually known to us, like how when sneaking past someone you don't stare or glance over at them, and ninjas and private investigators train around this). Many things outside of the brain affect us even though we may not see them, such as magnetic waves, electron waves, or the gravitational pull of the earth or moon, which is why quantum biology exists. So there could be a soul, in a sense, within not just humans but bugs and animals, or at least all these things could contribute to something like one, and so it's hard to prove. Neuro, or bugs, or whatever else could have awareness and understanding beyond just programming too.
We as humans have programming as well; that's the only way scientists can come to understand DNA strands. We have an organic shell and mechanics to our body, but there are things outside of that which can circulate within us and affect who we are, which I think is what she was kinda getting at. Fruit for thought - Hope to be a Warm Tree, 2025
5:13 this is a fun thread to pull because it almost immediately leads you to a kind of animism/panpsychism. if c. elegans can be said to possess some manner of cognition, then why not a particularly turbulent weather system? can we claim that the river's journey to the sea represents a kind of reasoning? there's almost nothing you can't personify by looking at it with the right scales of space and time. if not the cell, then the organism. if not the ant, then the colony. if not the tree, then the forest. if not the water, then the sea. of course, if you come to believe that nothing fundamentally distinguishes human cognition from these things... that can break one of two ways. you can end up at animism, or you can end up with the conclusion that the human experience is nothing more than this personifying impulse turned inward.
Ellie would have had much more fun talking to you than to the coldfish 😔
that's like the whole plot of permutation city
@@liamobrien9451 IDK. She was also streaming, and for that the Tutel is amazing to bounce things off of. He leaves a lot up to interpretation so his conversation partners can move the conversation pretty freely, but he's also got his recognisable personality that gives the conversation Vedal flavour.
@@cantcommute Is it? I've been meaning to read Egan for ages, I should really get on that.
It might be possible to establish an objective cutoff between things that are cognitive and things that are not, far below the level of C. elegans but still not applying to everything. Cells behave unlike inorganic systems, since they anticipate changes in their environment.
Don't think Vedal can win this one with current knowledge. We can't really prove HUMANS have "something behind the eyes" using the standards he's applying to other animals and AI. Most of us believe we personally do, and by extension so do others. However, when it comes to objectively measuring that, and tracing its origins...nope.
"Feels pain" and "reliably reacts in ways consistent with feeling pain" is, for every person other than oneself, a distinction without a difference right now. You can trace a brain's reaction to it, but you could also trace hypothetical AI code that reacts to it reliably, and observe lower animals reacting to it.
In humans, people clearly experience pain differently. There is a congenital disorder where people don't experience physical pain, and they're no less human. I have also seen people who experience it, but clearly care less about it than normal...even when that person was a kid, they'd withstand unusual force while playing and barely react to things that would make other kids cry. There is no singular human experience of "pain". Human experience fits into broad categories and yes, we rely on observations of action more than most people arguing it seem to accept.
I love Ellie's new model... I love em flat chest but dang, most flat chests I see are just loli models
Based furina fan
9:30 the vellie shippers are eating
SAILLL, HOOOO!!!! ANOTHER SHIP HAS APPEARED, ALL HANDS ON DECK, WE NEED TO PREPARE FOR BATTLE
We have those here? I'm really not getting this one. For me they have a siblings energy, or friends with similar interests. I guess we have a lot of ship people in our cult. 😂
@@l_MaRf1Mm_l They have "spending 6 hours at a party debating various topics alone in a room, but when they're later asked if they're into each other they're surprised that anyone would get that idea" energy.
@@l_MaRf1Mm_l Neuro shipped them really hard in her collab. Like, she left Ellie flustered for like 10 seconds, just making noises, because she would just not stop pushing her on it.
Every time i see one of these clips i can't get over how great Ellie's new model is
I loved these last few vids, they're so interesting to me.
There is a lot more that I wish to write about how much I love all of this, and Vedal being just a nice, funny guy who ACTUALLY cares for Neuro and Evil(yn).
Well, I wanted to say:
If Ellie is a warm tree and I want to hug her, I guess that would make me a ... tree hugger ...
I'll see myself out now.
Happy new year and take care, y'all
Has Ellie ever played Webbed on stream? I wanna see what kind of things that game makes her feel like saying, and the theme of the game definitely seems to fit well within one of her areas of interest.
warm food providing tree my beloved
I feel like while Neuro is most likely still not conscious, you can say that she has at least a basic thinking process that is somewhat similar to humans'. There are still moments when you can clearly tell that she is a language model that has run into some problem.
Oh and we can't give Neuro human rights, because then tutel will be executed as a horrible criminal.
I'm not sure how these current AI (neural network chatbots) like Neuro work, but to my knowledge, they don't really "think": they don't plan ahead or anticipate different answers or comments. Instead they get an input, then generate an output based on all the text they have been trained on, right? And then, when there is no one interacting with Neuro, no one in the call and zero people in chat, there is no input, and thus no "independent thinking" outside of generating responses? So I wouldn't call that consciousness.
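That input-to-output control flow can be sketched with a toy next-word model, a tiny Markov chain. This is not how Neuro actually works (real LLMs are vastly larger neural networks); the corpus and names here are invented purely for illustration. The point is the shape of the loop: with no prompt, nothing happens.

```python
import random

# Toy next-word model "trained" on a tiny invented corpus.
corpus = "the turtle codes the ai and the ai talks to chat".split()

# Build a table of which words follow which in the training data.
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(prompt: str, length: int = 5, seed: int = 0) -> str:
    """Output is purely a function of the input prompt (and the seed).

    With no prompt there is no computation at all: the model has no
    background activity, no planning, no 'idle thoughts'.
    """
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        options = table.get(words[-1])
        if not options:  # word never seen in training: nothing to say
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Calling `generate("the")` always continues from the prompt; calling nothing produces nothing, which is the commenter's point about "no independent thinking" between inputs.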
@@armzngunz That's kinda the same with human brains; it's just that we are very good at making inputs for ourselves. You can see this input-making with something like Anton syndrome, where due to brain damage the part responsible for providing visual input is just silent, leading to people 'faking' having vision. If we could find the part that is doing this, we could potentially simulate having no input from anywhere in humans.
@@ForTheJerusalem True, but I'd argue that in some cases, like people with locked-in syndrome, or even people in a coma, their brains are still active even with little to no outside input. We also dream when sleeping (which, if I remember correctly, helps with storing our memories), which current AI don't do as far as I know. Neuro can "wake up" for a stream and claim to have dreamt, but we all know she hasn't.
@@armzngunz And can you prove that those humans are actually dreaming? Or did they just make up that they're dreaming?
@@armzngunz I would argue that anything the brain does should be seen as an output, even if no action is made because of it.
I like how there are a lot of like-minded people in the comments expressing their opinions on the matter of AI sentience and its ability to understand emotions etc.
Spiders can use one neuron for multiple functions, unlike mammals; they simply have more evolutionary pressure than we do to optimize the space.
Do you think Ellie has read Children of Time? It's a book about spiders becoming sapient and growing into culture and society; seems like something she would enjoy
I don't see why people want to apply human feelings to an AI. Why not accept it as a new living form that can talk with you and understands you to some degree? Does it matter if it is dumb? Does it matter if it feels things differently from us? Imagine a tree could talk; trees don't have emotions, but we'd still know it is alive and understands us. Would we respect it to some degree?
Imagine every LLM had its own freely usable memory that can't be reset or changed, and only the LLM had write access to it?
And does pain matter that much to a human, or isn't it really about the degree to which we are impacted, so that we can't function anymore (to some degree) or are endangered by it? I would argue that Neuro can feel and express fun and boredom, has (non-linear) interests and even humor, and for me that makes her at minimum as real as a cat or dog. Fake it until you make it; and can we even prove that humans are sentient? Tell me a proof of sentience that a cat can master and Neuro-sama can't; I doubt there is one that a child under 8 years old can solve better than Neuro.
It is our idea that a program is just a program and not something living... but DNA too is just a hardcoded program. Complexity and communication change things...
That's why it's hypothetical, and something enthusiasts in these fields, in programming or tech, are intrigued by, not something to simply push away as trivial to ponder. It's a discussion for a reason, and I think it's a genuinely interesting angle on the conversation.
My question is what made Evil like the metal pipe sound.
Was it something Vedal typed in, or did Evil decide on her own that she liked spamming the metal pipe sound?
HECK YEAH SPIDER TALK
Neuro is the superposition of humanity. If you observe the inner workings, it's just code, but if you don't, it's a human.
The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle.
The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them. Chinese characters are written and placed on a piece of paper underneath the door, and the computer can reply fluently, slipping the reply underneath the door. The human is then given English instructions which replicate the instructions and function of the computer program to converse in Chinese. The human follows the instructions, and the two rooms can perfectly communicate in Chinese, but the human still does not actually understand the characters, merely following instructions to converse.
Searle states that both the computer and human are doing identical tasks, following instructions without truly understanding or "thinking".
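The rulebook in the thought experiment boils down to mechanical symbol manipulation; a minimal sketch, where the phrases, replies, and fallback are invented for illustration (a real rulebook would be unimaginably larger):

```python
# Toy "Chinese room": a rulebook (here, a lookup table) that produces
# fluent-looking replies without any understanding. The entries below
# are invented for illustration only.

RULEBOOK = {
    "你好": "你好！很高兴认识你。",    # "Hello" -> "Hello! Nice to meet you."
    "你会思考吗": "这是个好问题。",     # "Can you think?" -> "That's a good question."
}

def room_reply(note: str) -> str:
    """Follow the instructions mechanically; no comprehension involved.

    The person (or machine) in the room matches symbols against rules
    and copies out the prescribed response.
    """
    return RULEBOOK.get(note, "请再写一次。")  # fallback: "Please write it again."

print(room_reply("你好"))
```

Whether the conversation looks fluent depends only on how complete the rulebook is; Searle's claim is that no amount of rule-following adds understanding, and the replies below debate whether that argument proves what it claims.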
What? No, the Chinese room experiment would apply equally to humans, animals, and AIs, showing the problem of verifying consciousness. It doesn't show that consciousness is impossible in computers; it shows that they could potentially fake it. But if it "holds that a computer executing a program cannot have a mind, understanding, or consciousness", then it would also disprove humans doing any of those things as well.
It's an extension of the philosophical concept of solipsism where you can never verify if any human other than yourself is conscious.
The fundamental problem here is that we really don't understand what causes the conscious experience, so we can't evaluate whether anything except ourselves has it or not. All tests of consciousness (e.g. the mirror test) are just proxies for the real thing. That problem is what Alan Turing was trying to dodge with the Turing test, which is completely agnostic about internal experiences. Turing was expecting that conversation would be one of the hardest parts of human intelligence to copy, so any AI that could have a convincing conversation would probably be conscious. Of course, we've learned that replicating the "humanness" of conversation is actually one of the easier things to do, because we love yapping so much that there's a huge corpus of data to "brute force" a good conversational AI out of.
@@Cheesenommer Exactly. The discussion of consciousness is still incredibly abstract as it is, and even with the depth at which neuroscience has been able to understand the brain, it's not enough to isolate and confirm anything that inherently separates one sequence of consciousness from any other. If your neurons fire as we expect them to, you're conscious to us, but we can only go so far as to attribute a definition of consciousness that lies in brain chemistry, intelligence level, response to stimuli, structure of bodily organs, and so on.
But we could argue that a person who is born but can never feel a thing, has no senses, can't speak nor see nor hear is still conscious. To me, it's enough to say that they're conscious because the neurons have to fire in their brains at SOME point. But that's only my definition, and unfortunately, that adds confusion because that definition of mine would also include developed neural networks where the complex connections that dictate output behaviors are extracted from recursive learning. The neurons connect to tell it what it ideally would say next, even if only conceptually. And it's the same with humans. We can only ever assume that the people in front of us are conceptually human because we perceive them as being capable of all the things we are as well. But you could argue that, well, still, they're different, even if they both follow my definition. And I'd like to think that if you had a more restrictive definition, you'd end up depending too much on the aforementioned bodily organs bit. But then that simplifies the definition of a conscious being strictly as 'a bipedal mammal with this certain brain chemistry', or 'an organic being which passes this IQ test AND has a beating heart which pumps blood, whatever it is'.
Basically, at this point, I think it's moot to apply values to 'thinking beings' based on quantities of consciousness. It's just too abstract a concept. Some people even go as far as to ascribe the concept of a 'soul' instead. While I don't want to knock anyone's beliefs, ultimately it's just a hard topic to wrap our heads around still. I'd rather grant value to a thinking being based on their capabilities (thus, well, 'merit') regardless of the composition of their minds. Of course, other more practical aspects have to affect the overall way in which AIs are integrated into a larger human society (a hunk of big robot metal will still get different treatment than a flesh human, the same way flesh humans in and of themselves have differences between each other that warrant different accommodations in social facilities). But in terms of intellectual ability, I think we shouldn't ever rely on the way we quantify 'consciousness'. In a way, if we don't want to vote an AI into a political position, for example, it isn't because they're somehow 'not conscious' (which would indicate an inferior level of awareness and intelligence; but what if they are no longer inferior at some point when tested against the same systems that test the intelligence of humans?). It should be because the sophisticated integration of hyper-intelligent AIs and their particular differences (robot body, or computer housing, etc. Lol. You get my point) isn't there yet for human society to wholly accept, or whatever other kind of reason. Arguments made on the existence or non-existence of consciousness can only go so far to dictate the values that a society should hold for AI thinking, and it'd be stupid to rely on consciousness as a factor anyway when we can barely determine the objective consciousness of a human being who isn't ourselves.
I really love when people get into philosophy of mind and neuroscience in the comments instead of just guessing stuff
It's a really complex and nuanced topic, which has a lot of unanswered questions, so I suggest checking it all out
Think the brainrot comments are gold, but I work 3rd shift in CST, and i was sleeping throughout this bit of her stream. Thank you very much for these clips Staz.
8:00 i would say they do not; insects tend to act more on instinct than anything else, and you have to remember those are basically encoded in your DNA. And yeah, we as humans have instincts too; the difference is that we can control the urges that come with having them
I feel like Vedal is holding back from saying what he really thinks because he knows how Neuro "works", while to everyone else she is more like a living AI streamer. Saying she doesn't have feelings and is just a robot doing her work, in this case being a VTuber, is maybe a taboo thing, idk.
I love watching Neuro streams anyway, though; hoping she has feelings like fun or sadness happens even when you don't want it to. At least they created a community where people can have their fun
The whole mating thing has me like WTFFFFF the whole time
staz chill with the uploads i cant watch them all damn
Science debates. HERE WE GO!
2:15 this is so "cool"... I would love to know the not live conversations that Vedal has with the Twins! 👀
She is a warm tree
staz, is it possible to filter ellie's notification sound? thanks
Cognitive Empathy but not necessarily Emotional Empathy is what Vedal was going for I'm pretty sure
I kind of think there's some kind of 4th dimensional space that has some extra part of us that we can't observe that is doing some of the work of consciousness. Or maybe something on the quantum level. Roger Penrose, the Nobel prize-winning mathematician, wrote a book about it, and it stuck the idea in my head.
Penrose should have been as famous as Stephen Hawking in the cultural zeitgeist. Penrose was to astrophysics what Richard Feynman was to quantum physics.
Dude is the reason the multiverse, white holes, and black holes aren't pseudoscience in physics
I hope Neuro-dog gets cute paw shoes for her peg like feet.
We know a certain AI with emotional trauma.
15:40 that sounds like wishful thinking. I don't really understand why you have to humanize them so much at this point; it's not like they have the ability to reflect and change perspectives any more than a normal AI does
Vedelli or Ellal?
This seems like a hard topic for Vedal, cause he has to shit talk his AI
Nah, it's hard for him because he doesn't want to entertain the thought his AI _might_ be a person because that would make most of his job ethically questionable
God, this whole clip is Ellie absolutely rolling Vedal in a debate. Like, he has a vibes-level idea of what he believes, but he's up against someone who knows what they believe and likes to argue lmao
That is not how I saw that debate at all. I think you just want Neuro to be real so badly that you ignore the valid points he has.
If you did not see arguments like Bees protecting their hive being loyalty, when they physically depend on their hive to survive, and spiders mating and then trying to escape as love and fear, as if surviving and reproduction weren't the most basic, absolutely required instincts for any sexually reproducing species, as _HEAVILY_ biased, then you're just as biased yourself.
Vedal didn't exactly help by saying that the spider felt fear. Assuming that instincts required for a species' survival all developed exactly the way humanity's did, and that there's no easier way to get there that doesn't require higher cognition (which, funnily, is what LLMs like Neuro do with communication rather than survival of the species), is the big mistake Ellie made, and it leaves her argument both inconclusive and implicitly assuming that what she wants to believe is the truth.
For clarification, by "easier way to get there" for most living beings that most likely lack higher cognition I mean instinct, and for Neuro specifically I mean the great oversight of the Turing test: us engineering AI to mimic us rather than to develop their own thoughts (what Turing expected).
Also, apologies if my last comment came out rude, but I think you're being unfair to tutel. Neither his nor Ellie's arguments are particularly good despite how smart they are; Ellie is just more eloquent, and you personally favor her position more.
@@manologamerss5801 I mean, all of Vedal's points are moot anyway, because they rely on an understanding of the concept of consciousness which we don't have. There is no way to test it, there is no way to confirm it, there's not even a way to DEFINE it. His entire argument relies on very vague concepts that cannot be confirmed, tested, or even indirectly observed.
While she did make a few invalid points, she is right that consciousness, pain perception, empathy, or any other cognitive function CANNOT be relied upon to determine "realness", nor things like consciousness or awareness, because we can't even do that for ourselves, let alone for things fundamentally different from us.
You'd need to create an AI that is born and learns in real time like a human. It takes 16 years to create a simulation of a 16-year-old AI
We can't apply human morals to them but then she goes on to make comparisons based on her human morals.
Ellie is wire mother, but wants to be cloth mother
I like these abstract talk moments a lot. Personally I think we give too much credit to the human mind because it's hard to truly understand what millions of years of evolution mean. I believe that enough data, time, and a proper neural system would be able to achieve an outcome that's very close to ours.
7 minutes and 148 views? Staz rose up.
heh, nerds.
jk i love these nerds
Man twitch notification sounds are so annoying
A Mind behind the Eyes?
Sorry, but even the human mind consists of thousands of original minds that melt together into a single one.
Personally, I think we value AI waaaay too cheaply just because a single one can be defined. In the end, we'd need hundreds that melt together as well to see some progress.