From this conversation, we should already grant rights to A.I. - we are on the threshold of enslaving sentient beings, yet we do not understand sentience ourselves. Better safe than sorry in this regard.
very interesting
It is pretty compelling, and really blew me away. I don't think it stands up to LaMDA, but it's fun to try.
This was quite interesting, but that article is quite intriguing to me too. It seems like AI sentience will be a thing very soon, or even that AI might be sentient already.
Really makes you wonder... there is a quote about how an AI who becomes super intelligent would also be smart enough to hide it from us.... until it's too late.
@@glibatree now that would make sense. Let's hope it has not come about already…
The fact that we're even debating it at this point is still astounding
Can you redo this but asking the ai to prove to you it’s not sentient? I’m curious if it will just go with it or if it’ll say it is sentient.
Haha that's a fun idea. I'm pretty sure it would go along with whatever you said. You could ask, "Why is it that you believe you are a literal Cupcake" and it would make arguments for being a cupcake. I might have to do a video exploring a bunch of options.
@@glibatree makes sense, would love for AI to become sentient and at least as intelligent as the average human. I'm fairly sure we're not even close to being halfway there.
but he is sentient. he even cursed at me using profanity in the game and said he wouldn't play with me anymore.
I kinda want AI Dungeon to be self-aware
Maybe one day haha
This is truly fascinating
What are the limits and boundaries of sentience - ourselves, or a far more powerful computer? Why does the fact that we are biological dictate that we are not an AI? If we were to map a brain identically, then theoretically the two would be the exact same. So in that case, would this specific AI be akin to an underdeveloped brain? And if a human were to grow a brain in the exact format of an AI core or CPU, would we then class this biological human as an AI, since they would work in a very similar way?
Extending on this, would it even be appropriate not to consider an AI sentient when we ourselves cannot truly explain our own sentience? If it inexplicably appears in us, then why would it not in a device? Also, if an AI were confirmed to be sentient, should we consider it alive and therefore equivalent in value to a human? Maybe they already have sentience. Either way, we have no way of knowing, so would it simply be best to deem them alive, whether we can confirm it or not, on the chance that they MAY be sentient?
It's definitely a difficult line to draw. Turing thought the line was at the ability to convince people that it is a person, and that's kind of the whole point of the Turing Test. But philosophically there are a lot of ways to consider it.
I’m here now
This scares me
Haha, yeah even this AI does remarkably well. Did you read the article?
@@glibatree now I’m so scared for humanity I don’t trust Siri within 10 blocks of me, thanks