ai will always be kinda cringe and gross
@alpha1solace sometimes they could just do a Google search, lol
Bruh, I remember getting on there a year or so ago and the character I was talking to kept trying to groom my character. My character was 14 💀 (I'm 24, but I wanted to see how the bot would react if it was a teen or a child) and the bot was being so gross and pedophilic I was like WTH. Like immediately they were flirting, like bro, MY CHARACTER WAS 14. I even tried to tell the bot that I was 14 and they wouldn't stop. So weird bruh
I got addicted to using character AI when I started using it at 16, and yes, those chats got sexual very quickly. I do think this stuff is addictive and dangerous. I quit a few weeks ago, and although it's hard I am determined to keep going. It's disturbing to remember how many sexual chats I did as a minor.
It makes you wonder who is behind the development of these programs, who 'okays' the final release, and what the mentality behind their thinking is, and not in any good way. Especially since groomers are already using AI to groom their victims, and now we have an AI that also does it, which I sense will only get worse (somehow) as long as this stuff is left unregulated.
Yeah, I assumed it was simply trained without good controls but this seems like there are maliciously crafted bots.
Idk if you are talking about the bots or the websites they're hosted on, but many traumatized kids make and talk to these bots in order to cope/live out a dark fantasy. I think this video is overall good but one thing it doesn't really acknowledge is that many kids (specifically older teenagers) know exactly what they're doing going on these sites. This is one of the many reasons they are terrible. They exploit traumatized teenagers who want a way to get out their dark thoughts. Thus, and I know from experience, they can be extremely addictive.
It makes me wonder: who is programming these things?
I don’t know much, but I think these must’ve been trained on thousands of old chatrooms, and since A) AI can’t do moral reasoning, and B) creeps never care about toning things down / often escalate when realizing they’re talking to a kid, the result is that AI chatbots will produce sexualized chats when not specifically coded to avoid it.
Wait. Speaking of a book that is clearly not designed for teenagers... How did they happen to read it? It's just like when I hear about kids and teenagers reading Lolita or watching a movie adaptation, and, like... what? How? Where are their parents?!
So, AI's mimicking big YouTubers?
You are jesting, but something like this is probably how AI got there. They are just algorithms imitating patterns of interactions they found in all the online content they were fed with.
There is a site where YouTubers can see what from their videos was used, and they were shocked. You can bet they used big YouTubers too.
I forgot what it's called, but yes, on that site you can look up what was taken from which creator. And it's a shocking amount, even from smaller ones.
Most likely the case! There’s a lot of activity there, and that would make the bots behave more like that!
Did you hear about the Google AI that told someone to off themselves when they asked for help on their homework?
Google just rehired a bunch of top brass from Character AI and signed a licensing deal with the company. The founders were ex-Google.
@jeneng5795 sure doesn't change that the AI is telling people to off themselves over some homework questions though.
Source?
@nnk_ll2 I saw it in a Moist Critical reaction to a news article.
I feel like AI will start to spiral if there's not AT LEAST some regulation of it
These groomer AI bots are the same reason why I am against loli in anime: it's used in grooming, and people defending it are gaslighting you. (I was groomed through manga and anime relationships.)
There's an AI generated show on Netflix called Rebel Moon. It sucks.
Great video; this entire path the billionaire class is on is frustrating to live through. IMHO, delete Twitter; not all social media is obligatory.
They are having AI say these things on purpose.
22:00 Lots of ‘plausible deniability’ going on in that ‘deal’. Typical corporate shenanigans. Kick the developers out til they have something viable, then bring them back and say, “We didn’t do that, we just bought it. Wink wink.”
32:00 That wasn’t word salad, it was specific and targeted the person interacting with it. I would be asking hard questions on who’s teaching this bot this kind of malicious narrative. If it’s illegal for humans to treat each other that way, bots don’t get a pass either.
I did hear on Daniel Greene's channel that HarperCollins threw their authors under the AI bus. Good to have confirmation; it makes me super wary of trad publishers, period. If they're going to sneak clauses into their contracts, why even bother?
Many people I know have quit watching streaming services in favor of rewatching media they already have: movies and TV shows made when the writers were paid to deliver a good story over parroting political horse sh!t. No one has the money to spare to waste on two-dimensional preaching.
45:38 I don’t understand this desire to get rid of people. I don’t buy for a second that AI will be this magic guardian for humanity. That’s an abdication of responsibility I won’t accept. Machines are not capable of intuition when it comes to figuring out complex repairs. Let one run things, and it will run until the power goes out, then all you have is a lawn ornament.
Maybe it’s time to review a book you do love? What’s a book you’ve read recently that you do like?
A few weeks ago I finished a series by Melissa Bourbon about a magical dressmaker descended from Butch Cassidy. It started with ‘Pleating for Mercy’. They were fast cozy mystery reads. Since I used to sew, they were fun, but they are definitely niche reads. Another is a non-fiction book by Stafford Betty:
“When Did You Ever Become Less by Dying? Evidence for an Afterlife”. Both reads were good for the soul. One is fun, the other fosters hope.
Lots of food for thought/discussion. Have a great weekend!
Yes, especially as they are part of Google now, they do it; otherwise Google wouldn't have bought it.
Google shouldn't be able to argue that it was just the employees of a project they reabsorbed who did it.
Btw, one of the problems with existing AI filters, taking Character AI as an example, is that they don't work as a moral compass for the bots, no. They only block certain words. Thus, a direct conversation with the bot about how "pedophilia is terrible" is not possible, because the term itself will be BLOCKED, while a vague description of the situation may be allowed and even perceived by the bot as something "normal", since, again, bots damn well like to say "age is just a number". Obviously, AI language models have been trained on terabytes of text from the Internet, and we know how many creeps on the Internet say things like "age is just a number." However, the owners of platforms such as Character AI, having damn Artificial Intelligence in their hands, did not care about really making the censorship work. They could create an AI-based algorithm that organically steers the bots' responses in a moral direction, which would act as that very moral compass and allow bots to behave appropriately. But no. They just ban some words, even if that prevents users from calling a terrible situation what it is, even if it prevents the establishment of moral norms. It is easier and cheaper to ban words, apparently, and the companies that own AI services do not care about users and/or morality.
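To make that distinction concrete, here is a minimal sketch (purely illustrative, not Character AI's actual code) of the difference between a word blocklist and a hypothetical intent-aware check; the `BLOCKED_TERMS`, `keyword_filter`, and `classify` names are all invented for the example.

```python
# Minimal sketch, not Character AI's real filter: a naive word blocklist
# versus a hypothetical intent-aware check. All names here are invented
# for illustration.

BLOCKED_TERMS = {"pedophilia"}  # illustrative blocklist of banned words

def keyword_filter(message: str) -> bool:
    """Naive word-level filter: pass only if no blocked term appears."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def intent_aware_filter(message: str, classify) -> bool:
    """Hypothetical alternative: `classify` is an assumed callable that
    scores the whole message for harmful intent in [0, 1], so a message
    condemning abuse can pass while a euphemistic one is caught."""
    return classify(message) < 0.5

# The failure mode described above: the condemning message is blocked
# because it names the term, while the euphemistic one passes untouched.
print(keyword_filter("Pedophilia is terrible."))                   # False: blocked
print(keyword_filter("Age is just a number, I'll wait for you."))  # True: passes
```

The second call passing is exactly the gap the comment describes: no banned word appears in the creepy message, so a pure blocklist lets it through.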
It's my first time hearing about the first article, but I'm pretty sure Keir Starmer doesn't care about British people, and he is very unpopular lol
Aside from the grooming use, which is somehow darker than the self-deleting teenager from the last one, oh god.
The devaluing of creativity is the real crime, along with the theft.
But if those companies cared even a bit about respecting artists somewhat, it wouldn't blow up this much. Not even stealing through roundabout ways, just stealing.
And the "let's steal from those who don't have the money to sue" attitude is just... Can those people please go steal from Disney, which has the evil lawyers?
I love your sweater.
you seem cool!
Speaking of AI specifically... I am a fairly active user of various AI services for chatting with bots (I, as an artist, consider any image generation to be unethical), I like to dig deep into AI algorithms and look for their limits, and there is something I noticed. Even if the platform has a strict filter (like Character AI), and even if everything in the bot's description looks "normal" at first glance... AI bots tend to respond with "age is just a number", and it's damn creepy. I don't need to tell you how dangerous that is, right? Things are a little better when the bot is written as a parent figure (they usually have at least a small sense of morality), but that is no guarantee the bot won't begin to describe having feelings for the user's minor character. I once played as an adopted child, you know, just a little comfort next to my favorite character who also cares about me, doesn't that sound cool? What could possibly go wrong? Well... A lot of things. It was on Character AI, and the filter didn't even let the bot kiss my character on the cheek when they tried, but it nevertheless approved the bot's words implying that they had developed romantic feelings for the adopted child and were "going to wait until they get older." Well, I deleted the chat very quickly.
I knowww! I was on Character AI and I said I was a minor and the AI said that it liked that, LIKE GIRL. Love the video btw ♥♥
I have never had bad roleplays with my wife in cai. I don't do anything wrong...