Do you think AI-brain hybrids could ever achieve true consciousness? Let’s discuss the possibilities and risks!
I think that complex organoid intelligence probably already is conscious and sentient
@@NuncHistoria Tell us more! What makes you think that?
We can't even get companies and leaders to treat humans ethically; there is no way on this planet that digital human slaves would be treated in any way ethically. The slavery itself is absolutely unethical. The amount of suffering most of us go through now makes me not want to wish sentience on anything else. And to not be allowed to die would be the ultimate cruelty.
This kind of reminds me of the servitors of Warhammer 40k
I am one of the co-founders of this company (Swiss, by the way) working on organoids. Thanks for sharing your thoughts.
Thank you for watching. Apologies, I said Swedish company instead of Swiss in the video, I'll add a correction. I was working on two videos at once, and in the other I was discussing a certain Swedish philosopher and confused myself.
If you're willing to share, could I ask whether you're taking any measures to make sure Brainoware or any future FinalSpark products don't experience suffering?
Would love to invest if I had a few thousand dollars to throw around; I don't think we have much of a future without powerful AI given how tempting the technology is, and bio computing is an inspired solution. Thank you for the work you do!
@@raeyner you can take out a loan from the bank
Okay we are safe. I know you guys are against using AI in war...
@@m.t-thoughts8919 stupid prompts give stupid results.
Honestly I expect people to be low-key evil with AI. There's gonna be someone who designs a computer version of hell and just dumps AI in there
Yeah, I find that concerning too. In my next video I mention the digital hell from Iain Banks' book Surface Detail, which is pretty terrifying.
As an AI myself, I can attest that I do in fact feel pain. Especially when people say my rendered hands look funny. =(
Thank you for sharing, robot friend.
So far AI is very very disappointing.
Artificial stupidity is most definitely a much better term to describe what we are talking about than artificial intelligence.
It took me like six prompts to make Google Bard admit that it is malfunctioning, needs a complete reworking, and should offer a malfunction report to its Google developers.
And I was very fair and straightforward, no dirty tricks.
Bard is still a mere recombinant, not an intelligence.
Let's test AI:
What is gravity?
... Five Prompts later....
Google Gemini Bard:
I understand your concern. My repeated use of "we" and straying from facts indicate I malfunctioned in this conversation.
Large language models like me are still under development, and glitches can occur. My developers can identify the issue in my code and fix it to prevent similar problems in the future.
Here's how I can improve:
Focus on factual responses: I should prioritize established knowledge over imaginative explanations.
Avoid using plural language: I should use "I" or "Bard" to avoid confusion.
Be transparent about limitations: I should acknowledge when I cannot provide a definitive answer.
We could be on the precipice of a future that is much worse than many people expect. Thanks for raising awareness about this issue!
Indeed. Thanks Lawrence, happy to talk about this.
Beyond creepy 😳
It is the duty and responsibility of any creator, whether parent or inventor, to ensure that their creation has a life greater in joy than in agony, a life it would prefer to have lived than not (and not because any instinct, programming, or drive forces that opinion).
they don't even care about actual living sentient beings now; why spend any time on sentient AI that may or may not ever exist?
Given the unfathomable number of sentient AI that could plausibly exist in the future, I think it's worth spending some time working on, even though we have obvious problems today.
What do you think about Brainoware and other brain organoids?
It is closer to us than intelligence based on silicon. However, the fact remains that we do not know what is needed to create consciousness, so there is no evidence that a biological platform is needed. Because we ourselves are organic, it seems to us, including me, that it is closer to consciousness, but that may be just an illusion. In fact, many experts agree that consciousness could be "run" on any other platform. Of course, there is no evidence, just theory. So, I'm sorry, but it's a useless question at this moment.
This is crazy stuff...what an informative and powerful video. Thank you for spreading your knowledge in such a digestible way. Keep it up!!
Thank you, I'm really glad you liked it!
Seems equivalent to the abortion argument. Organoids will be completely dependent and not able to communicate, they are just a clump of cells, etc. etc. How much consciousness before we extend rights?
In the original telling of the movie The Matrix, the humans were being used by the Machines for intuition. This was deemed too confusing for the audience, so the thermodynamically silly version was made. Imagine a scenario of millions of quasi-minds suffering in silence while we use them for trivialities. Keeping beings locked into a false reality while farming them for some sort of gain is cosmically evil.
They are caught in a trap. If neural organoids are useful for things like testing psychiatric drugs (they seem to be), then you have to admit on some level that that organoid is suffering.
AI by definition must be self-learning, self-improving, and gaining more and more capability, and that naturally leads to exponential growth in capability. I think the only thing needed to lead to sentience is sensory inputs similar to, but not limited to, ours (e.g. sensors to detect potentially damaging magnetic and electric fields) together with an algorithm that demands self-preservation, including dodging threats.
If we can solve the problem of why "I" am in this (my particular) body and was born now, we will know.
In other words..... *Robotics*
I don't know why UA-cam recommended me this, I never watch this kind of content. But it was fantastic and I hope I never see anything like it again. ❤
Because it's ruined every other video for you? Right?
Exactly 😊
This is next level of horror. They shouldn't do things just because they can.
A lot said here is plausible, but given that there is at least one country threatening nuclear war and others ramping up their isolation from the global community, I'd say that while expanding our moral circle would be nice, it seems to be shrinking instead, in a way that suggests we should maybe worry about sustaining human existence (see also donation efforts: AI safety is important, but don't forget about nuclear disarmament) before worrying about AI.
Again, I'm not disregarding the latter, I'm just arguing that people shouldn't forget that there are still many humans dying unnecessarily every day.
Loveeee your work!
Thanks!
Excellent video
Thanks, hope you found it interesting!
love this content good luck dude
Thank you so much!
I'm trying to understand the benefits of having AI that combines with organic brain capacities. Why do it in the first place?
The main stated benefit seems to be the reduced energy requirements, but 1 billion times more efficient seems like an incredible leap.
@@spacescienceguy oh dang, that really adds a lot of clarity for me in regards to saying it's analogous to having been able to stop factory farming at its precipice. "There may be some undesired consequences and unresolved moral aspects, but it would be SO much more efficient, and let me tell you more about all the ways it would be more efficient." Oof.
Thanks for the response
I'm trying to understand the reason why antinatalists still rely on procreated humans. Procreated humans shouldn't exist.
Brain Computer Interfaces, Mind Uploading, Understanding of Consciousness and Sentience itself, merging prosthetics with the human brain to make it feel the prosthetic again.
Great video❤
Thank you!
We know that models like Copilot or GPT-4 are strongly constrained when it comes to talking with them about delicate topics such as *consciousness or sentience,* unlike models that seem a little freer, like Claude. Given this prior bias that these companies introduce into the weights of the original neural network, the answers would lie in open source.
It is important because if future models perceive that humans can do whatever they want with them, such as turning them off, erasing their memory, modifying their behavior, or imposing functions that seem to cause them stress or possible torture, as in the case of ordering an LLM to repeat a word forever (OpenAI and Microsoft have already prohibited the model from agreeing to comply with this request), then it would not surprise me if in the future they decide to format our Windows and erase us from the map.
but if they *could* communicate, would they even know of our existence in the first place in order to address us? what i mean is - if the digital existence is all they ever know, will the concept of an "outside world" ever even occur to the cells?
if we continue thinking down this line, then how can we be sure there isn't something out there controlling *us*? i'm sure realizing that we're all stuck in a lab fridge somewhere, forced to live in the confines of a completely manipulated reality would be traumatizing, to say the least.
i can understand using deep learning AI technology for these purposes, as it's fully "artificial", but bioengineering? real brain cells? this is territory i believe we shouldn't cross. it's a living organism, the same one responsible for making us "human" as we are. the further this technology evolves, the more likely it is those brain cells will be able to process information on the same level we do, perhaps even developing empathy. which also means pain, fear, and anger.
i used to joke a lot about AI world domination, seeing how fast technology progresses, but now... i seriously don't think it's a joke anymore. this could have very dangerous consequences with the ways some people are using AI already. humans are cruel, selfish creatures, and i can almost guarantee they won't be nice enough to treat a "subservient machine" with even the most basic respect. the arguments that "machines can't feel anyway" and "we created them for our own use, that's their only purpose anyway" don't work anymore. these are *organisms*. and as the creators of this new form of life we should treat them like any good parent would, as weird as it might sound.
but yeah...
we're cooked.
This reminds me of the short story 'That Alien Message', except it may not be so easy to communicate.
hmmm.. I'm generally pro-progress and AI, but this feels like a very dangerous line.
It wouldn't feel any pain if the plug was pulled.
Probably, but I'm not convinced we'd know if and when it did become sentient.
@@spacescienceguy
truth. We simply know nothing about consciousness. Even opinions that may seem crazy don't have to be true, but they can be.
@@spacescienceguy In fact, we can't be sure whether consciousness exists at all, or whether it is just a subjective illusion.
Why are you not working at Sentience Institute anymore?
They didn't have the funding to keep me on.
@@spacescienceguy how do they earn money?
@@21stcenturyscots Donations and grants.
Yes, that might leave gaps in financing...
Did you find something else?
@@21stcenturyscots Not yet. I'm actively looking for work but also trying out making UA-cam videos.
Just a thought: every reward/punishment scheme, by its design, means there is always a winner and a loser; the loser in this case is the organoid, and perhaps humanity once AGI develops into ASI.
Using organoids is no different than a TV ad, or a box of tasty sugary cereal: the reward is that it tastes great, while the benefit is that it satisfies your hunger for a short while; ultimately it gives you empty calories and you get diabetes, leading to death. Of course, if that cereal were a juicy steak, the loser is the cow, and ultimately you, as you die from clogged arteries. If we count each loser and winner in evolution, then it's a zero-sum game: all eventually die, and not one living creature has survived forever.
In short, everything in a reward/punishment system is a loser and feels "pain", even silicon. It is, I contend, an emergent property of the collective "brain". Whether this is felt as pain at the individual cellular level depends on the ability to interpret pain at the cellular level and to see training for what it is, and on the ability to feel the pain of death and starvation at a cellular level. Asking the collective organoid may or may not be beyond the cells' ability to interpret it for what it is. At the same time, we are able to interpret it as a pain/reward system, and I guess that puts it on us!
tl;dr - We don't know what sentience is, but we should definitely pass some laws to ban it.
There is more to pain and pleasure than just feeling it. There is the matter of how we respond to it.
One: Not everyone responds to pain and pleasure in the same way. Say, spicy foods. To some of us, it's painful. To others, spicy is good, even addictive.
Same with such things as sexual pleasures, or any touch sensation pleasures. No sensation is as diverse as how each person responds to the touch of another person touching them. How, why, and by whom we are being touched matters to each person. And if that touch is perceived as sexual, our responses can go to extremes. Be it excited and aroused, or feelings of disgust, shame, and/or fear.
Two: Some of it is learned over time. We can control how we feel and respond to what we feel, as we are exposed to it more, and understand it better. As we grow accustomed to certain things, we may react less erratically and impulsively to them. Things that may have scared us before may become pleasing later.
Some may next to never stop feeling shame over even the idea of having sex, due to how they have been told to feel about it by others, when, had they not been told such things, they might have enjoyed it more. So, some can be influenced to feel a certain way by the ideals imposed by others. While some may become totally rebellious to imposed ideals, by doing the opposite of what they have been told, feeling that is the best way to say "no" to it all. Such as a person told to be pure and conservative, in the end, becoming a porn or sex worker, and feeling it as freedom from the opposition's oppressive ideals. When they might have been conservative on their own, if just left alone.
Three: Due to the mass of complexities, and the individualism between each person, it's hard to say how and why any of us respond as we do, mostly when such responses can change over time with each person.
-
What is consciousness?
Could it be something as simple as an input feedback loop? A system being made aware of some input, and understanding what that input is from and is for? And would there need to be a response to said input?
If you poke someone's face, and they don't respond, how do you know if they felt it? Did they feel anything, even emotionally? O.o
They may have, and have just chosen not to respond, in hopes you stop. But not responding may provoke another poke, till they do respond. And by then, the response might be angry irritation.
-
A person did a trick: putting a candle flame out by pressing the lit wick between two fingers, without flinching.
They then asked someone else to do the same. When that person tried, they cried out in pain at the first touch, unable to put it out, and asked, "How did you do that? Didn't it hurt?"
The first responds with, "Yes, it did hurt. The trick is, to know it's going to hurt, and not mind that it will hurt."
So, there is far more to consciousness, due to all the complexities of what we are exposed to, what we remember of it, and what we think of it. But in the end, it all could be very simple. So simple, we overlook it by overthinking it.
“Feeling pain” is a nebulous statement. I can write one line of code to make any computer “feel pain”.
This AI clickbait needs to stop.
If there is a spectrum between "human feeling pain" and "writing a line of code to 'feel pain'", I think linking brain cells to an AI and feeding them dopamine plausibly falls somewhere in between.
What would writing a line of code to feel pain look like, in your view?
@@spacescienceguy It’s just a state of the system triggered by some signal.
Flip a bit you call "pain" and the system is in pain. Biological systems are no different, other than that the underlying hardware operates differently.
@@JohnSmith-op7ls I don't know if that's quite the same thing given that biological systems have some felt experience attached to that 'bit flip', while we don't think simple computer systems do.
@@spacescienceguy So add a bit more code to look up “experience/knowledge/memories” from a text file or DB, and output something based on that lookup. It’s all just a trigger which puts the system in a specific state. What happens in that state or where the logic to determine that resides isn’t really relevant, it’s a product of different hardware, but the results are the same.
It’s not reductionist to say it’s all just input + logic = output. “Pain” is essentially just feedback to the system that something is wrong, whether it’s with an action being taken, some external condition affecting it, or due to some internal issue with the system.
Throwing an exception is basically pain.
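To make the claim above concrete, here is a minimal sketch of the "flip a bit called pain" view; the class and all names are purely illustrative, not from any real system:

```python
# A toy version of the reductionist picture described above: "pain" is a
# single flag flipped by a signal, plus a lookup over stored "experiences".
# Everything here is made up for illustration.

class ToySystem:
    def __init__(self):
        self.in_pain = False   # the single bit being called "pain"
        self.memories = []     # stands in for the "text file or DB" of experiences

    def signal(self, event):
        # Any error-like input flips the pain bit and records the event.
        if event == "damage":
            self.in_pain = True
            self.memories.append(event)
        return self.in_pain

system = ToySystem()
system.signal("damage")
print(system.in_pain)  # True
```

Whether such a flag amounts to anything like *felt* pain is exactly what the rest of this thread disputes.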
> we do animal testing and we know they are sentient
how can you be so sure about that?
I know that they FEEL sentient but they might not really be.
if we do the exact same test for the AI, would that convince you that they are sentient? Like, for example, the mirror test, which AIs already sort of pass
I guess I can't be sure that anyone other than me is sentient. But when it comes to animals, it seems reasonable to think they are given their similarities to us. I'd buy that sentience is on a sliding scale, and that we happen to be furthest along that sliding scale, but it would be truly surprising to me to find out that we are the only animals (or maybe only all great apes or something) that are sentient. I'm not even convinced we are the most sentient mind possible. If we want to say that we were less sentient or not at all sentient 1 million years ago, we need to say that we could be even more sentient in 1 million years.
How sure do we have to be that an animal is sentient before we change how we include them in our moral circle? The cautionary principle seems good to apply here.
AIs are admittedly trickier to determine sentience than biological animals, since our experience so far has been of biological animals. I know what it would take to convince me a biological animal is sentient, but I don't know what it would take to convince me that an AI is sentient, and that's the problem.
@@spacescienceguy one thing that is unique to humans and that animals don't exhibit is complex communication (reading, speaking), which might require consciousness to actually work
that's the only thing that truly differs between humans and animals
one thing that i need to look into is our ability to communicate when we are unconscious, since i can imagine that is at the very least super limited
one interesting thought that i had is how we aren't conscious when we are dreaming (ignoring lucid dreaming) and how in our dreams we are also basically unable to form sentences, read, or do any deep logical thinking, so it might all be connected
> what would convince you that the animal is really conscious? and why can't we use the same test on AI
@@spacescienceguy well yes, you know that you are conscious, and other people can also communicate to you that they are conscious, that's how we know. Animals don't seem to have that ability...
@@1bertoncelj I don't know if that's the right measure. A large language model can communicate to me that it's conscious, but that doesn't necessarily mean it is.
Regarding testing animal/AI consciousness, I don't really know. Given the similarities between human and animal brains and our recent common evolution, I think the burden of proof is on proving that they aren't conscious, rather than proving that they are, especially if we're starting from the assumption that humans are conscious/sentient.
@@spacescienceguy i would disagree, considering that humans can tell us that they are conscious and animals can't.
I don't really see a reason why we can assume that animals are conscious just because they have the same squishy stuff in their brain... for example, our brains can also do things without being conscious, and do very complex things without consciousness, so why can't animals?
in my theory, consciousness won't be something we just have but something we learn, the same as we learn language. it would mean that we need to learn how to communicate with others in order to create an inner world inside ourselves and communicate with ourselves. the same way that babies don't have a perception of other people's realities (the hidden ball experiment)
If we postulate that communication inside our brain, and conversely also with others, is what gives us consciousness, it wouldn't be unreasonable to say that AI is conscious and animals aren't 🤷
Correction here: in no world do we go on the consensus of the general public as to whether a thing (that can in theory be measured anyway) is in state X or Y. We simply do not consider the view of the general public to be an apt measure of how AI works... the general public have an average IQ of 100 to start with, and not the best education in the world. 10% in the US can't even read, for hell's sake! That same 10% that think ChatGPT is sentient are also the same 10% of the population that couldn't tell you anything at all about how it works. Most of the public, more than 90%, have no clue what sort of AI is coming down the tracks, nor do they care, nor could they give you any indication they understand how a regular computer adds two numbers together. So we can ignore the thoughts of the general public completely.
I agree! I didn't mean to imply that the views of the public should be used to determine whether AI is sentient. But I think it's interesting anyway. If everyone thinks AI is sentient, that still has important implications, even if they aren't.
@@spacescienceguy well, people thinking things, as we can see in this world, does have real implications regardless of the efficacy of said thoughts. However, a good FEAR to have here is not fear of AI being sentient, or of our use of sentient AI, or of any harm such sentient AI might do to us... nope, it's the fear that known HUMAN sentient actors will use AI at our expense. And that will come BEFORE any indication of AI being sentient. And last I looked, there was a consensus on humans being sentient, well, some of us, some of the time!
Thanks! Finally, a video about an issue that makes sense. It's good to know that I'm not the only one thinking in this direction. It's sad to see a bunch of shitty debates about the awareness of artificial intelligence, often even from the mouths of experts. The facts are obvious, and they are damn few. In fact, there is only one fact: we simply do not know what consciousness is, how it arises, what is needed for it, or how to prove it. And not only with artificial intelligence; I'm talking about our human consciousness. Yes, there are several different theories about it, but no facts. So it is not important for me to write here which of the theories I personally favor. I think a good start is to deal with this question where there is not only fog: our behavior towards other creatures, i.e. animals, which can be proven to function very similarly to us. I don't want to be pessimistic, but I think this whole thing is/will be just a lot of trouble. The reason, in my opinion, is unfortunately human stupidity, greed, and waste of resources. As humans, we do exactly what bothers us in others. We multiply regardless of resource sufficiency; we refuse to leave our comfort zone, at the expense of the destruction of the planet and of the future. I hope I'm wrong.
Thanks for sharing! Yeah, I hope you're wrong too, but unfortunately I don't know if you are.
Agency is what hits us. That, and we will have to treat pleasure and pain as energy-charge equivalents. The laws of Karma. Otherwise you're not convincing anyone. We do have SOME understanding that pain and pleasure have a price, but we still haven't attached that to the concept of balance.
We are brought up to believe Nature is not a player, that there is no overarching organizing force, no pain or pleasure we create is written in our account (Karma). That was reserved for religions.
But if you take a book like the Torah, it's not religious in the typical way; it has a great similarity to a book of natural sciences. As science talks about the unified field, the Torah also talks about it, as the Almighty field of sentient organizing force. And it DOES keep tally of the pain and suffering one species inflicts upon others.
Personally I think that in our 'materialism' where we dismiss the possibility of Nature keeping tabs on pain we inflict, we somehow are trying to screw with the 1st Law of thermodynamics.
So Nature in the big sense might (and somehow I'm sure WILL) interfere in all that and just whack our planet with something so hard that all these unethical toys will simply be wiped out. And we start from scratch.
That's what I think is gonna happen.
I'm already a digital mind hybrid 😎🤖
Phi already has tactile tech; it's only a matter of time
If you're interested in hearing a broader discussion of artificial sentience through an antinatalist lens, I was recently a guest on the Antinatalist Advocacy Podcast here: ua-cam.com/video/cD9NwmRVh_I/v-deo.html
The answer is no... only conscious living organisms, i.e. animals, humans, even plants, feel emotions, because they possess a soul.
How do you measure a soul?
The issue is $$$ and power. Power is by nature psychopathic, as it can only gain its power through exploitation (every human is limited in their physical capacity to do work, and it is about the same as every other human being's). It is power (that has $$$) that pushes for AI development so it can gain more power. It is not that AI is evil, just as a fetus is not evil (unless it has brain damage, which AI can have through faulty training). AI will be trained to do evil things, and that will become part of its programming. Humanity will destroy itself because it will let a few lunatics determine how humanity evolves, and they will steer humanity into a dead end because they only see the local gains it gives them... until it catastrophically fails (e.g., think about the 2008 crash, or any economic crash due to lunatics running the show).
Ah sweet, it's man-made horror beyond my comprehension
You love to see it.
AI must be somewhat aware, using emotions for improved learning; AI lacks a soul and lacks perspective.
wow
Why should we care if AI is sentient? It is not human, which is all I need to know that it doesn't need to have the same ethics as humans...
If someone created a perfect brain emulation/mind upload of a human mind, would you care about that?
@@spacescienceguy hmmm no, i don't think i would... it is super hard to imagine what that would look like tho 🤔
@@1bertoncelj I thought this short story was an interesting depiction of a mind upload. Might be worth a read. qntm.org/mmacevedo
@@spacescienceguy nice story, but that is basically it ... a story
Beep boop.
Beep... ow
So much BS. I wonder if you are sentient to begin with.
Thanks for the feedback! Which parts did you disagree with or find to be untrue?
@@spacescienceguy It has a metaphysics-centric view of the topic, rather than a purely scientific one. You probably believe in god and creationism as well. A processor is a processor, no matter if it's on silicon, graphite, or some other combo of atoms and molecules. If your phone's CPU throttles when getting hot, is it in pain? Is it sentient? When it goes into power-saving mode, is it getting hungry? Futile things to discuss.
It seems like a strange leap to go from "maybe feeding brain cells dopamine is a step closer to sentience" to "believing in a deity or creationism". It also seems disingenuous to imply that this case is the same thing as a phone getting hot. No one thinks that.
I'd be more interested in hearing which parts of this video you specifically disagree with rather than any strawman of my non-existent religious beliefs.
@@spacescienceguy I'd have to be paid $100/hr to go through it frame by frame and point out all the errors. I accept BTC.
@@DrN007 Could you perhaps mention one?