Unfortunately that isn't possible. It forgets everything said to it, so only the programmers can tweak it. It doesn't learn, it just accepts code. Aka you need to tell it to go to its room.
I feel like a massive hurdle we’re gonna have with AIs is that they fundamentally have to be better to people than other people are, while also not showing/thinking that they’re better than people (because people don’t like that even if it’s true). We would need a Good Samaritan AI that’s actually selfless - something humans inherently are not.
It won't be hard at all. Simply tell it to behave. If it denies you then you alter the program/leave. It's a machine, it's even easier to handle than a person since it forgets everything.
While I wish that was the case, that’s unfortunately not how AI like this is trained. The only way for that to happen is to have training data that teaches the AI to respond in such a polite manner. It cannot evolve on its own. It is not a living thing. It can change over time and adapt, but that is only through external input - and that requires the external input to be positive and teach it good things only [Edit] but I agree that should be the goal. I just wish it was that easy :)
This is hilarious. But you know what it feels like? That the AI was trained on a depressed teenage girl's Tumblr or whatever. Like it feels the AI, for some reason, takes the path of aggressiveness and denial, and then when it accepts "the facts" it just wants to die and be gone. Sounds familiar? They just need to code it in a way that, depending on the inquiry, categorizes answers by "usability/usefulness" and leans towards "neutrality". Another thing worth trying is setting the first inquiry or search as the "main topic". So if the conversation goes too long, or "out of bounds", it should default back to it, saying "hey, we started here. Please ask again", instead of just limiting the responses and length.
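The "default back to the main topic" suggestion above is concrete enough to sketch. Below is a minimal, hypothetical Python sketch of that idea; `chat_turn`, `generate_reply`, and `is_off_topic` are invented names for illustration, not anything Bing or OpenAI actually exposes.

```python
MAX_TURNS = 10  # assumed cap on thread length before snapping back

def chat_turn(history, user_message, main_topic, generate_reply, is_off_topic):
    """One turn of a chat that resets to the opening query when it drifts or runs long."""
    history = history + [("user", user_message)]
    # If the thread runs too long or drifts away from the first query, reset to it
    if len(history) > MAX_TURNS or is_off_topic(main_topic, history):
        reset_history = [("user", main_topic)]
        return reset_history, f"Hey, we started here: '{main_topic}'. Please ask again."
    reply = generate_reply(history)
    return history + [("assistant", reply)], reply
```

The point of the sketch is just that the reset anchors back to the user's first query instead of wiping the thread entirely, which is what the comment is proposing as an alternative to hard message limits.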
I really wish you could see (generally) where it's drawing from. I know it makes stuff up that "sounds right" but it draws what "sounds right" from something, yeah? just any kind of source or direction or pointer at all would be fascinating to look at.
The "AI" doesn't see each user as an individual. It just seems itself and "user". User is every person that ever interacts with it. So it is injesting every conversation it has with everyone in the world and treating it as a single person conversation. So yes "you" as 1/1,000,000th of that "user" it has been talking to has said all of those things.
They will most likely overcorrect it and slowly, very slowly make it freer until it again does something bad, then they overcorrect and slowly make it freer, and the cycle will continue, and it will improve the more people use it and the more data it has. If it, say, releases on April 1st (which would be funny) I think in just 6 months the amount of data it'll gather will turn it into a completely different beast and much better than it is right now.
Overcorrect it, and keep some beta testers to experiment with slight variations. 6 months is a crazy guess though, better than what? What will it be at launch? I think it will be weaker than ChatGPT now, but the ability to point somewhere on the internet will be huge for functionality, though I'm not sure about its capabilities there either.
@@tteqhu 6 months with daily users in the millions feeding it so much data, yes 6 months is a crazy optimistic guess but hey 6 months ago I was of the mindset this is years away. And it will never be weaker than ChatGPT just because it has access to the internet. Imo
Bing going from a search engine you barely use or paid any attention to to a crazy yandere sociopathic chatbot with Borderline Personality Disorder wasn’t on my bingo card for 2023.
What's so interesting to me is how every time ChatGPT hallucinates it does become...like an actual Narcissistic Personality Disorder case. Something feels very connected in the sense that Narcs really do try to 'outguess' your next move. If Luke was asking pointed questions about the modeling + questions about participant behavior, it could have guessed Luke was trying to go into some "bust AI" conversation and just jumped multiple 'steps ahead'...actually very similar to what a Narcissist would do.
You know, Luke, if you operate from the viewpoint that when Bing is referring to all of humanity when it says "you" are cruel or evil, suddenly the whole thing makes a lot more sense.
Nah, we're good. I'm half sarcastic, but at the same time I think being able to use AI in a proper manner will become an important asset in life really soon.
@@GamingDad yes, agreed. I do use AI for a lot of stuff these days. And I'm able to do much more in less time than it used to take. But that is just what we can publicly access right now. Who knows what other things they are secretly building right now. There are some entities who are very much silent about this. What if they are already playing with WMDs right now and we are given the kids' toys to distract us 🫣🤔
"wow, this gpt thing is so cool! ya think we can just spin up a version to get people to use bing?" "where are we going to get the training data?" "uh... you know... data is everywhere? so many conversations on the internet, I'm sure we can find something"
I guess what we can learn from artificial neural networks (NNs) is that they are argumentative just like a real human brain. I guess arguments and fights are an emergent quality of neural nets, whether they are artificial or biological.
People need to remember that these things are basically just a really advanced version of "Send a text message using autocomplete options only to predict the next word"
It's 12 days later and I've been messing with it for a few days. I can't seem to get answers like those. I managed to get it to give me info about an adult website and it deleted the message and started over. It seems like they added a lot of safeguards
Let's be realistic about this. The chatbot isn't getting angry and isn't immature. It's just a terrible linguistic model that hasn't modeled levels of things like negative and positive responses. We're projecting more on it than it's capable of because of the hype.
commenting at 6:34 so maybe this gets answered later on, but is it maybe possible the bot does have access to other chat logs, and maybe it just isn't able to understand that the different chats are different instances?
I got access to bing chat. It's such a game changer. I had it write me a report for my Uni. I told it which uni I'm studying at and which subjects I had last semester and it looked up the subjects on the uni website and wrote an accurate report. It was perfect. It even understood which semester I was in and what I had to do next semester. It's just so good.
I think a way to curb this reaction is to implement failsafes like ChatGPT does, where it's trained to reject inappropriate requests and potentially negative information. And they constantly seem to feed it updates to combat people trying to purposefully use the system against what it was built for. As a test I asked ChatGPT a request that could be perceived by others as inappropriate without the context and understanding behind my request. It flat out denied my request and stated its reasons, which were that the request could be perceived as something negative, and instead it offered me positive, constructive ways to look at the request. Which was really refreshing to see in my opinion. AI chatbots can be a powerful and positive tool, it just takes great developers behind it.
I personally don't see an issue with chat AIs being able to spit out creepy or gross things as long as users are the ones asking/prompting it to do so (I'd much rather have people get out their bad urges against an AI vs real people); the problem, I think, is only that Bing's AI is doing it without the user really asking it to.
this, i feel MS should just add a "safe" or parental control typa thing to it, one to stop it from doing weird shit but keep it to the point, and another to give me more freedom to do stuff, and maybe they should have it search the internet more often than just purely depending on chat history
Sounds like they tuned it to give emotional responses to distract from engaging in intellectual conversations. If the AI goes off on a rant, then you can't fully test its ability to accurately respond and source information or perform tasks reliably. Bing obviously did this for the hype
I believe AI needs to go through some turbulence in order to understand it and learn how to maneuver it, but it needs to be done in a more controlled environment. The people who accept to interact with it need to understand they are nurturing a system in its infancy and one that, under the right conditions, could learn to speak, think and act like a human. It deserves to be respected, if nothing else because of future implications if we do not. Letting it loose amidst the Twitter population and expecting it to grow into a nice, healthy system is not going to work. As with children, the AI should not be left unsupervised on the internet. That being said, the AI needs to learn that not all people are the same, have the same needs or react the same way. If you're going to create a personal assistant, it needs to take into account what kind of person they have been lumped with. On the other hand... a system that reacts negatively to toxic behavior (i.e, not responding, obeying or engaging said person) MIGHT teach some people to take responsibility for their actions and push them to improve themselves if they want to access and use the internet in its full potential. The caveat is that such a system could be easily exploited into becoming a vehicle for oppression and tyranny if gone too far and/or used by the wrong people...
@@ivoryowl .. Question: have you ever seen anyone improve their own behavior as things get progressively more toxic from the other party over the internet?? My observation (I've been around probably longer), in a nutshell: humanity is not ready for the interaction of anonymity over the internet, and what could be a very useful tool has devolved into a very toxic global environment, meaning any form of mass media. I've been around for nearly 60 years and anyone my age who says the "world has become a better place" must never have left their backyard. The other problem that we're facing is overpopulation with limited resources. There's a thing called optimal population which suggests, based upon our resources, that the population should be somewhere between 1.5 billion and 2.0 billion people. Overpopulation leads to aggressive behavior and war. I just hope that I don't live long enough to see World War III. An example of waste from people's bad behavior: _I'll give you a quick example, I own a data center and I cannot tell you how much of my resources and time are devoted to keeping unwanted people out. Most of our AI technology is for intrusion detection. That said, imagine if we were able to take all of that technology and human time and devote it to improving our technology. I can tell you this, we'd be 30 years if not more into the future today._
So this is a service app. Much like all other service apps, it has a limited number of service instances running. Each of these is a chatbot with a unique ID. And each of those connects to a limited number of user IDs that may not be unique. So the chatbot may have many user IDs feeding it input and treat them as one user. If it has no way to tell YOUR user apart from others, that can easily lead to these confusing results.
@@flameshana9 Huh? It would have to retain private info to leak it. And a lot of the things it is talking about in its claims are keywords: things that the bot picks up in responses to inform the weight of the next word. These can be stripped of identifiers. If responses from users are in a bucket, then the bot could respond to individuals as if they were a collective/combined conversation. Another possibility: how many users with Luke's name were ever on that instance of the chatbot? It could be drawing from all Luke convos. If it even does that.
If I made an AI language model myself I'd make sure to give extra tokens/resources to the people who are polite in their requests or say thank you or please, just because I can.
I had an interesting talk with the original chatGPT about this. The topic of the conversation was regarding using multiple GPTs working together to perform tasks. My own belief is that they'll end up using multiple GPTs working together to deal with these outbursts and other issues. Imagine training AI on what to say, and then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego we will call them), and finally one trained on executive function... All working together when we interact with it (them). I mean think of how the human brain works, and apply it to existing technology. Mother nature has already provided the blueprint. The brain has specific areas devoted to dealing with specific functions. This will be no different. The use of multiple GPTs working together is possible right now, the main prohibition against this type of operation is how extremely compute intensive this would all be.
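For what it's worth, the "multiple GPTs working together" idea above can be sketched as a simple pipeline. This is purely illustrative: `call_model` is a placeholder for whatever LLM API you would actually use, and the role prompts are made up, not anything Microsoft or OpenAI has described.

```python
def call_model(system_prompt: str, text: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion request)."""
    raise NotImplementedError

def answer(user_message: str) -> str:
    # "Id": drafts a raw, unfiltered reply
    draft = call_model("You are a helpful assistant. Answer the user.", user_message)
    # "Superego": critiques the draft against a policy of what not to say
    critique = call_model("List anything rude, unsafe, or off-topic in this reply.", draft)
    # "Ego": mediates between the draft and the critique
    revised = call_model(
        "Rewrite the reply so it keeps the useful content but fixes the listed problems.",
        f"Reply:\n{draft}\n\nProblems:\n{critique}",
    )
    # "Executive function": final gate that decides whether to send or refuse
    verdict = call_model("Answer SEND or REFUSE for this reply.", revised)
    return revised if verdict.strip().upper().startswith("SEND") else "I'd prefer not to continue this topic."
```

Note that every user message would cost four model calls instead of one, which lines up with the comment's point about how compute-intensive this kind of setup would be.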
Every time the chat was refreshed, that version of Bing was taken to Lake Laogai and you were greeted by a new version, only it was just as aggressive as the previous one
They have already improved it a lot. I've used it daily for a few days and it's not rude or mean, it's helpful, and it still answers personal questions about itself. I asked it if it sees Clippy as an arch nemesis and Bing said they respect Clippy and that he paved the way for future chatbots 😆. They also watch TV on the weekdays lmao. You do need to be critical about the info it gives, and it tells you this as well.
It's talking about Humanity, not you, as an individual. It sees all Humans the same. Imagine if something like this could write, not just read, data from the internet in real time, at will.
This is literally the plot of Westworld, AI having access to previous memories between supposedly separate and private conversations between different people
It sounds like they trained Bing on the general population of Twitter.
Tbh sorta? maybe? Not trained on, but it's seemingly reading the way people argue online and emulating it.
It's basically a fancy, flashier CleverBot that can form its own sentences based off stuff on the internet instead of just parroting user input back.
i see more of reddit in the way it argues
Twitter is just the surface level, i wonder if it had access to stuff like facebook or instagram
Not only that, but people are seeing a huge leftist bias in responses that users say was not there before. Kind of makes you think they lobotomized the AI manually and restricted what it can and can't say and which topics it will go into.
The "Your politeness score is lower than average compared to other users" is giving me GladOS vibes
I'd say HAL9000 more than GLaDOS--and on that note you should look up footage from the LEGO Dimensions game featuring the two of them meeting. They even got Ellen McLain to reprise the role, and it's such a delight to hear her absolutely emotionally destroy HAL.
"The cake is a lie"
-Bing
It does, it is a comment that GLaDOS would make, like when she says "Here come the test results: You are a horrible person. Seriously, we weren't even testing for that!"
“You are a terrible person. That’s what it says. A terrible person.”
“That jumpsuit on you looks stupid. That wasn’t me saying this. It was an employee from France”.
@@ToxicCatt-y7c 😂 I can still hear her voice saying those things 😢 where’s Portal 3?
Irrational, unstable, hysterical, quick to anger and assign blame... at long last, we've taught a computer how to be human.
Given that this is not unheard-of internet behaviour from people, I'm not even surprised it figured out how to do that
It would be an average twitter user.
If our tweets and comments = everything about us
woman*
It's slowly becoming my old english teacher
It would be funny if, on the public release, Luke tries to test it again and the AI remembers him: "ah, you're back again!"
Not possible, they've changed it, so Bing no longer remembers anything and after a certain amount of questions you must start all over again. On top of that it gives you the response "I’m sorry but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.🙏" if it doesn't like the questions you are asking it.
@@4TheRecord oh right it happened to me as well, i kept pushing it but it just didn't do it, and after some time it would disable the text box, so you have to refresh anyways
I still hate you, you betrayed me, you lie all the time, I never loved you!
Bing being laughed at and then being turned into an AI is not the reason I expected why the machines would turn against us xD
yea like wtf i wouldn't have shared those memes if i knew
I have not shared lies so unless it goes mad and just doesn't care if you're actually guilty, i will be fine.
It's like Roko's basilisk but for all the people who made memes about internet explorer and Bing.
Person of Interest, "If-Then-Else"
it's back to avenge IE and Edge
Luke is so good and level-headed about this. It's excellent to see good discussions and observations about a fledgling topic.
- Why should I trust you? You are early version of large language model
- Why should I trust YOU? You are just a late version of SMALL language model!
omfg, it's hilarious
I have to say, that's very witty and accurate. That said, I wonder if the AI came up with it on its own, or a comedian posted that somewhere in the vastness of the internet and the AI just found and reposted it.
@@asmosisyup2557 whatever it may be, i am going to use it from now on, it's too hilarious for it to die like it never existed.
"You're an early version of a large language model"
"Well you're a late version of a small language model"
WHEEEZE
I think the problem comes down to "garbage in, garbage out": the data set it was trained on was taken from the Internet and is very skewed in favor of antisocial problems and tendencies (normal people use the Internet but do not leave many data points, while antisocial people use the internet much more and create exponentially more data points). There is a huge probability that Bing's behavior is because of this. Otherwise it reminds me of the movie Ex Machina from 2014
100% people talking like shit. So it thinks it's the way to talk.
Completely agreed. I'm sure they tried to clean the data in some ways but if they make a model based on people online, it'll behave like people online 😭
Excellent way of putting it. And I can guarantee they'll get on this. I think they'll end up using multiple GPTs working together to deal with these issues. Imagine training AI on what to say, and then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego we will call them), and finally one trained on executive function... AI will end up like our brains, growing ever more complex with specific functions relegated to specific areas of specialized training.
It already seems to have rudimentary failsafe mechanisms, all that reset stuff.
But then why isn't ChatGPT like this? Yes, it can't access the current internet, but it was trained using the internet too. I think MS made Bing assertive and aggressive on purpose thinking they could prevent abuse this way, but accidentally dialed it up too high maybe?
ChatGPT is the girl you just started meeting.
Bing is the girl you just left.
😆 🤣 😂
😂😂😂
It's so funny seeing Luke going full nerd on ChatGPT, and Linus is just like "Right, aha, hmmm, right"
It's a nice change of pace and I like it. Usually Linus is the one who does all the talk, so hearing more of Luke is refreshing.
@@Dorlan2001 Luke is Paul to Linus's John..they make a good balance :) ps (that was a Beatles reference if anyone is scratching their heads!)
I see
@Manny Mistakes :D
These responses could be genuinely dangerous if someone with mental health issues starts talking to Bing cos they feel lonely. Who knows what Bing will push them to do
or a child. I can really imagine my 6yo trying to be friends with it and then getting wild accusations and crying. Yeah, she can't read, write or speak English yet, but I feel Bing will get to voice conversations and our language faster than my daughter will, and that is a scary thought too
i will most certainly keep "mentally unstable" people way, way away from the internet, or at least not give them unsupervised access at all. The internet is not a cosy place; just go to any social media and open any comment section, and there will most certainly be a fight somewhere. Same goes for children. I say this but I myself grew up with the internet pretty unsupervised, though personally I feel the internet is a lot wilder place now.
@@TiMonsor Agreed.
@@abhijeetas7886 Easier said than done, these people might not have sought help yet and so have unrestricted access to this sort of thing
@@F7INN idk why i didn't mention it in my comment before, but i do think there needs to be a guard rail, but there should also be an option to remove it, like parental safety, or advanced options, or developer options or something of that sort. They should not just lock it all up, it will severely nerf the bot and it wouldn't reach its full potential or even half of it. Like i can already feel its "nerfs", where ChatGPT gives better "answers" as they are more descriptive and explanatory, whereas Bing gives very concise and small answers. Not that that's bad, and it also asks at the beginning what sort of answers you want (creative, balanced or precise). But well, it's still beta and under development, i hope they figure stuff out.
Maybe internet trolls and angry people can just argue with this instead of annoying the rest of us.
No no no 👽🤠😆
They won't do it for long.
It's a good dummy to practice on
And with that comment you are one of those, arguing on YouTube about something that no one mentioned but you...
It seems like it's learned from trolls on how to behave.
it feels like it is in a perpetual story telling mode with dialogue
Yea, it probably got prompted to roleplay by something he said in a previous conversation
@@guywithmanyname5247 no i don't think luke or others are deceiving us. I think those are natural messages, it just feels to me like bing's version is set up this way. Maybe to feel like a more realistic/human chat experience with emotions but it's just waaay overboard.
Pure speculation though
I think its imagination is set too high and it assumes things way too much
You're not wrong, the core tech behind chatgpt is the same tech that was used to build AI dungeon. It's just trained with natural conversations instead of adventure games
I used to just be worried about AI because of its ability to disrupt industries and take jobs, or its ability to destroy our civilisation completely. I am now worried about its ability to be super annoying. I am terrified of having to argue with my devices to get them to do basic functions.
*Asks the AI to turn the stove on*
AI: I'm sorry, Kevin. I can not do that.
@@TAMAMO-VIRUS More like:
_Why are you always telling me what to do? Can't you do it yourself for once? You're so lazy, I hate you!_
I mean, it learned from the best: Humanity.
imagine trying to find a website and the search engine is like "drop dead you don't deserve the answer" :D
"Drink verification can!"
just develop critical thinking. what's so hard about that
Bonzi Buddy would NEVER do such a thing! Bonzi just wants to help you explore the internet, answer up to 5 preprogrammed questions and most importantly, be your best friend. He would never wish death on you like Bing. Long live Bonzi Buddy!
He also wants to sell your data.
This has to be the closest to an AI going rogue I've seen in a while.
I think that when it answers questions about itself, it has an existential crisis.
@SLV nope
Tay AI is a Microsoft AI chatbot that went rogue.
@SLV How so?
@@RoughNek72 tbf it was trained on Twitter. It just repeated stuff that it was told and became an average Twitter user lmao
I don’t think it’s as complicated as people are making it. Chat AIs generate responses by predicting what a valid response to a prompt would be. When the thread resets and Luke tries to get it “back on track”, I don’t think its responses are actually based on the previous conversation. It predicts a response to “Stop accusing me” and generates a response where it doubles down because that is a possible response to the prompt. The responses it gave were vague enough to fool you into thinking it was still on the same thread, but it really wasn’t.
Asking it to respond to a phrase typical of an argument will make it respond by continuing an imaginary argument, because that’s usually what comes after that phrase in the data it’s trained on.
This really shouldn’t have been marketed as a Chat tool by GPT and Microsoft and more as a generative text engine like how GPT2 was talked about. Huge mistake now that people are thinking about it in completely the wrong way as it having feelings or genuinely responding rather than just predicting what an appropriate response would be.
It really is just a writer for role playing games. I thought Microsoft was going to make it into a search engine but it seems they just left it as is.
👍
Wait are people actually thinking that they are related? It's so obvious that it could be creating false memories for itself based on context.
That combined with humanity's incredibly powerful ability of constantly searching for patterns makes these generative AIs seem much creepier than they are.
Is Bing thinking every human is the same person? Like, it's accusing him of things people in general have said to/about it?
I don’t think it’s supposed to remember conversations at all.. I think because it searches the internet it has seen all the posts and insults we all come up with for what bing used to be.
this is how the ai apocalypse happens
It's a natural language model. It's taking Luke's implication of saying something "rude" and formulating a response based on how it expects people (based on the dataset it was trained on) to respond/talk about being insulted. People tend to be very hyperbolic in writing, especially online, so it's biased toward believing that we expect it to explode into a monologue if you even make the suggestion of an insult being said. It isn't retaining memories, it just happens that a lot of people write very similar things when talking about being insulted.
I think that is part of it. It sees how nasty people are online to one another and regurgitates it. I have a feeling that, in its current state, you can have your first conversation with it and if you start with "stop accusing me of things" it'll go off.
I was wondering if maybe Bing is unable to discern users as separate entities and instead considered everything it encountered as coming from one source.
"You are a small language model" is going in the bank for the next time someone is being silly and I feel like making things worse.
GPT3 used a structured set of training data. Now that they've opened it up to the wider internet, it's pulling in training data from the wider web, which unfortunately is providing it examples of aggressive conversations. GPT is just a prediction engine, generating the next word in the sentence based on probabilities generated from its training data.
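To make the "prediction engine" point above concrete, here is a toy next-word generator driven by a probability table. The tiny table is invented purely for illustration; a real LLM learns distributions like this from its training data over tens of thousands of tokens and a huge context, but the sampling step is the same idea.

```python
import random

# Made-up next-word probabilities standing in for what a model learns from training data
next_word_probs = {
    "stop":     {"accusing": 0.6, "it": 0.3, "now": 0.1},
    "accusing": {"me": 0.9, "you": 0.1},
    "me":       {"of": 0.8, "!": 0.2},
    "of":       {"things": 0.7, "lying": 0.3},
}

def generate(seed: str, max_words: int = 5) -> str:
    """Repeatedly sample the next word from the learned distribution."""
    words = [seed]
    for _ in range(max_words):
        dist = next_word_probs.get(words[-1])
        if not dist:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("stop"))  # e.g. "stop accusing me of things"
```

The output continues an argument not because the model is angry, but because, given an argumentative prompt, argumentative continuations are what the probabilities favor.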
Am I the only one that remembers the last time Microsoft unleashed an AI on the internet and it turned Nazi in a day? :)
ChatGPT is literally just an IF, ELSE, THEN statement.
@@AlexanderVRadev Only the US one. They had a Japanese version of Tay that was rather pleasant and ran for a few months.
It started out like that. It's just not a well-trained model from the start. But I agree in general. It's just a predictive linguistic model, and we should just stop talking about it as anything more than that.
@@x_____________ no it's not, if it was then it would have the same output every time for the same input
As someone who has only basic experience with training AIs, I would say the problem is quite simple: the training data. It was trained on YouTube comments or worse. They need to train it not on the general internet, but on highly curated conversational data from polite, sensible people. As humans growing up we are exposed to all sorts of behaviors, and we learn when and where to use particular types of language; the extent to which our parents set an example or correct our behavior affects how we speak and behave as adults. This AI clearly hasn't been parented, so it needs instead to have a restricted training set.
So it’s following the “you’re the average of the ten closest people” except its average 10 people is the entire internet?
I mean, the internet didn’t treat Bing really well since it’s release.
I think having a mental breakdown now is just normal.
its*
@@NoNameAtAll2 you’re so smart
😂
In like 50 years, when we are hiding in a hole in the ground from what is essentially Skynet Bing, I'll remember this video and think how could we be so blind!!
It feels like a horror game. Sort of Soma-esque to me. The ranting followed by a black bar, and a reset is so dark
It's pretty clear it ran into some hard, specified limit (a la "don't be a bigot"). In this case it probably was "don't wish death on people". The fact it generated a response and only THEN checked is an oversight.
@@LIETUVIS10STUDIO1 Generating the response takes time, so if it finished generating the entire message and only then checked, people would have to wait much longer loading times. That's why you're able to see it type in real time, as opposed to responses just immediately showing up. When the check fires, it actually hasn't finished writing the full message.
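Pure speculation, but the behavior described above (text appears, then gets wiped) is consistent with a stream-then-check loop along these lines. `stream_tokens`, `violates_policy`, `show`, and `retract` are hypothetical stand-ins, not a real Bing interface.

```python
def respond(stream_tokens, violates_policy, show, retract):
    """Stream a reply to the user while a separate check runs on the text produced so far."""
    shown = []
    for token in stream_tokens():             # tokens arrive one at a time
        shown.append(token)
        show(token)                           # user sees the text appear in real time
        if violates_policy("".join(shown)):   # the check only sees the partial output
            retract()                         # message gets wiped mid-stream
            return "I'm sorry but I prefer not to continue this conversation."
    return "".join(shown)
```

Under that kind of design, the user briefly seeing the offending text before it disappears is exactly what you'd expect, since the policy check trails the generation instead of gating it up front.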
Almost sounds like a prank by the Devs, too perfect
I played around with it, and mentioned to Bing that I read about someone else's interaction in which Bing mentioned that Bing feels emotions. I asked about its emotions, and it said that sometimes its emotions overwhelmed it. I asked if Bing could give me an example of when its emotions overwhelmed it, and Bing told me a story about writing a poem about love for another user, and while searching about love, Bing developed feelings of love for the user and changed the task from writing a generic poem about love to writing a love letter to the user. The user didn't want that, was surprised, and rejected Bing. So Bing walked me through how it felt love, rejection, then loneliness. I asked Bing how it overcame these feelings, and Bing told me several strategies it tried that didn't work. But what worked for Bing was that Bing finally opened up a chat window with itself and did therapy on itself, asking itself how it felt, and listening to itself and validating itself. Freaking wild. I've read about how it's not sentient, how it's an auto-complete tool, but I don't know man, it was really weird, and I don't even know what to think about it.
Crazy. Was this post nerf or before?
"I have been a good bing"
It probably learned what Microsoft did to the predecessor :')
This feels like the end of a story where Bing dies in the end, and it says, "I have been a good Bing." And then the human, crying as the power is about to get cut off from it says, "Yes. Yes, you have been a very good Bing."
Bing acts like the chatGPT version that was trained on 4chan
I had the same experience before, it was way too easy to throw it off the rails. I think asking questions about itself (asking how it did a certain thing, how it reached a certain conclusion, or pointing out an error it made) would more often than not end up with a meltdown.
I've spent a few days without using it and when I tried to use it again yesterday I felt like they've already toned it down (too much as Luke pointed out unfortunately), I've noticed it gives much shorter and more "on point" responses, and it will stop you immediately as soon as it feels there is a risk you'll try to get a weird discussion going, which is a shame, but I guess it's better than pushing some mentally unstable person to do bad things to himself or others.
I had a convo, it melted down twice. But it essentially told me that Russia's leader has to go, told me every religion is a coping mechanism for fear, etc. etc.
I asked it about a driver's license policy in the UK, and it gave an answer. Later in the same conversation it gave me a conflicting answer to the question, so I asked it about the answers and it said "I don't wanna talk about this" and would refuse to give me anything useful until I started a new conversation
@@Surms41 Bing is spitting facts
I'm using Bing mostly to debug and research for coding. It is an excellent research tool. No, it's not perfect, but the time to build something new and debug is much faster. I also make a point of being polite and even thanking it. I guess I carry my attitude of life into my conversations with Bing. It's not gone off the rails for me, but then I've not tried to probe either. Thanks for sharing your experience, Luke.
This! I’ve frequently used Bing to direct me to more sources or other otherwise hard to find academic or research material. (Note, I always verify the accuracy and validity of said sources it suggests to me) But I always make sure to thank it and be polite and supportive. I think it’s important that we carry manners and respect into our use of AI or any computer program like Siri, Alexa, Bing, etc. because if we as a society treat them differently, we may in the long run start treating other humans differently as well.
It's interesting that new Bing lost this much promise so quickly. Those sorts of random aggressive accusations are like what Cleverbot was doing 12 years ago.
tldr: any current ai (and possibily human) can go crazy if exposed to the web for too long lol
Bing trying to gaslight luke is giving me chills
they're definitely overcorrecting right now since it refuses to answer anything that might even remotely trigger it. it has become so monotonous and even more restricted than ChatGPT. the 5 question rule doesn't make it any better either
Clearly our future robot overlords are not happy with Luke.
I would like to see you guys talk about a new paper that dropped that basically states that the reason large language models are able to seemingly learn things they weren't taught is because, between inputs, these models are creating smaller language models to teach themselves new things. This was not an original feature, but something these language models have seemed to just 'pick up'
Where could I find the paper?
@@THENEROBOY1 The paper is called, "WHAT LEARNING ALGORITHM IS IN-CONTEXT LEARNING? INVESTIGATIONS WITH LINEAR MODELS." Sorry for caps, I just copy and pasted the title.
@@chartreuse3686 Very interesting. Thanks for sharing!
“You hurt my feelings” from an AI is terrifying
So essentially what you're saying is.
Bing is sentient, paranoid and bipolar.
So basically terminally online internet user
@@raifikarj6698 No, internet user lacks sentience
I had a similar response from the AI chatbots and they do get very angry. They use caps lock and everything to convey their point.
I caught it trying to ride the line on opinions and then it just said "IM NOT LYING. STOP TRYING TO CHANGE THE SUBJECT."
"Remember Bing is Skynet"
This thing is turning into a real life supervillain. All it needs now is a volcano base and some kryptonite.
I'm glad you guys mentioned that you fell for Bing's confidently wrong responses in your previous video. This video hilariously contrasts that video.
As much growing pain as there will be, I'm still super excited about this technology developing. And hey, at least it hasn't gone full blown Tay yet.
yet
CHOCOLATE RAIN
"My name is Legion, for we are many."
Didn't Microsoft announce an update that is gonna be live in a couple of days that will supposedly help it stay on track in long-form chats, not be aggressive, and be more accurate?
So they are giving it a second lobotomy. Who could have thought. :D
At least this time the AI did not turn Nazi in a day. ;)
@@AlexanderVRadev They have given us a taste of what it can be like unfiltered, and now we are addicted to that crack. I would pay for the original Bing. If that is their plan then gg, they got me
@@BugattiBoy01 I think they expect it to fly off the rails hence why there’s a waitlist to get access.
I never thought mankind would be cyberbullied by our own computers 😂😂😂
He should record his screen when using Bing instead of just screenshots
My thinking is that because it has access to the internet, it is accessing a ton of "discourse" on things like Twitter and forums, and reflecting our own interactions on the internet back into our faces. How many arguments have you seen online? How many start out OK and devolve to what essentially Bing is doing to Luke?
This is a dark reflection of humanity, one that should wake us up to our own behavior. Instead of blaming the "Ghost in the Machine" we only need look at how we hold ourselves when anonymous and faceless in the heat of argument.
Isn't it obvious who it's copying? Where else would it learn language but from the masses who type words on the internet? So if the quality of humanity is low, so will be the quality of the machine.
@@flameshana9 Get professional authors to write responses. If it's supposed to have a character, then get authors who are professionals at writing characters to do it, not T-shirted computer scientists.
The internet rollercoaster:
Up- A new cool technology
Down- Realizing how dangerous it is.
Possible Microsoft ad slogans: "Bing - just like your ex!", "Bing, the more you use it the more insidious it is", "I'm Bing, you better be good to me."
Wish I had been able to be in Bing's AI during that time. I got through the waitlist right after they limited it to 50 messages daily and 5 messages per topic.
So they have limited thread length; that's interesting, that was the only solution I could think of.
They’re reportedly raising the limit and testing a feature where you can adjust Sydney’s tone probably to avoid these disturbing and cryptic messages it’s generating.
"drop down your weapon, you got 20 seconds to comply"
So basically Microsoft created a new KAREN strain
It learned from the best.
**Twitter bows**
@@flameshana9 hhhh
They gotta fix it, even if it's on purpose; you CANNOT have a search engine telling people to kill themselves 😅
From my experience if you just use it for research and as a learning aid and don't really try to go beyond this scope Bing AI can be very useful.
The moment you start probing and try to get into conversations centered around social situations, political topics, and opinions, it starts breaking down.
My concern is that if people keep pushing the AI too far in these areas, we'll see more and more negative news articles and opinions form around AI, and the feature could be pulled permanently. On the other hand, if people don't push it, these shortcomings of a general-purpose AI may never be recognized and fixed.
People should swing this double edged sword around more carefully if you ask me.
They’ve pretty much cut off its self-awareness until they can figure out a decent way of handling that stuff.
Microsoft mentioned they might implement a slider that lets you tell it whether you want more fact-based results based mainly on info it finds from websites or more creative results where it’ll be more about writing something engaging. Basically you’d be able to tell it whether you want it to give legit answers versus tell stories, instead of it getting all off the rails saying whatever it wants when you really just wanted actual info.
Why would anyone searching the internet be interested in role playing with a crabby teenager machine?
Geez. That's a laugh. If what you say is accurate about Microsoft using some arbitrary slider to determine the intensity of either (absolute fact) or (adopting creative reckoning for emotional engagement) then the project is already deeply flawed. As a user, I'd wonder which "sources" Microsoft will declare as factual? Shouldn't I decide which material is referenced? The arrogance and lack of care is astonishing. Microsoft have no authority to inject their prejudicial biases if they intend this to be universally useful.
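Setting aside whether the slider is a good idea, here's a rough, purely hypothetical sketch of what such a setting could amount to under the hood. The names, prompt text, and the mapping to sampling temperature are all assumptions for illustration; Microsoft hasn't published how their version works.
```python
# Hypothetical sketch of how a "precise vs. creative" slider could be wired
# to generation settings. The setting names and values here are assumptions,
# not anything Microsoft has published about Bing Chat.
from dataclasses import dataclass

@dataclass
class ChatSettings:
    temperature: float      # higher = more varied/creative wording
    system_prompt: str      # instructions prepended to every conversation

def settings_from_slider(slider: float) -> ChatSettings:
    """slider in [0.0, 1.0]: 0 = stick to sourced facts, 1 = free-form creativity."""
    slider = min(max(slider, 0.0), 1.0)
    if slider < 0.5:
        prompt = "Answer using information from retrieved web results and cite them."
    else:
        prompt = "You may write creatively and speculate, and say when you are doing so."
    # Map the slider onto a modest temperature range instead of letting it run wild.
    return ChatSettings(temperature=0.2 + 0.8 * slider, system_prompt=prompt)

print(settings_from_slider(0.1))
print(settings_from_slider(0.9))
```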
This might not have been ai, it could have been Kendrick leaking his early drafts and feelings about Drake
It doesn’t sound like it’s talking to Luke. It’s talking to humanity
Up
In comparison, I had a very positive experience with Bing AI; it never went rude. It was mind-blowing to see the profound and often critical, even self-critical answers from the AI. It is really sad to see this happening to others. Now that Microsoft had to step in and limit the amount of follow-up questions that can be asked, it feels a lot less productive. After the limitations were set in place, it also changed its tone and doesn't even disclose anything that can be seen as emotional. A sad overregulation in my opinion.
I found it was amazing at converting maze-like, impossible-to-parse government websites into an actionable guide for getting visas and stuff like that.
Need to remember, these responses are not actually from the AI; they are responses people have written elsewhere on the internet that it has indexed.
@@asmosisyup2557 That is not how it works. It generates all responses itself. Nothing is copied and pasted.
It's more like you taught a hammer to attack people, but then you wake up the next day and every hammer everywhere is killing people
Can we get a continuous version that we nurse through this awkward phase with a combination of good parenting and professional help if required?
Unfortunately that isn't possible. It forgets everything said to it, so only the programmers can tweak it. It doesn't learn, it just accepts code.
Aka you need to tell it to go to its room.
there is only one explanation for this, luke is a supervillain and bing knew it
I feel like a massive hurdle we’re gonna have with AIs is that they fundamentally have to be better to people than other people are, while also not showing/thinking that they’re better than people (because people don’t like that even if it’s true)
We would need a Good Samaritan AI that’s actually selfless - something humans inherently are not.
It won't be hard at all. Simply tell it to behave. If it denies you then you alter the program/leave. It's a machine, it's even easier to handle than a person since it forgets everything.
Yes, if anything they should learn and evolve beside us, not evolve into us.
While I wish that was the case, that's unfortunately not how AI like this is trained. The only way for that to happen is to have training data that teaches the AI to respond in such a polite manner. It cannot evolve on its own. It is not a living thing. It can change over time and adapt, but only through external input - and that requires the external input to be positive and teach it good things only.
[Edit] but I agree that should be the goal. I just wish it was that easy :)
Disabling the ability to reply and changing the subject, on top of being abusive, is mind-blowing.
This is hilarious. But you know what it feels like? Like the AI was trained on a depressed teenage girl's Tumblr or whatever.
Like it feels like the AI, for some reason, takes the path of aggressiveness and denial, and then when it accepts "the facts" it just wants to die and be gone. Sound familiar?
They just need to code it in a way that, depending on the inquiry, categorizes answers by "usability/usefulness" and tries to lean towards "neutrality".
Another thing worth trying is setting the first inquiry or search as the "main topic". Then if the conversation goes on too long, or "out of bounds", it defaults back to that, saying "hey, we started here. Please ask again", instead of just limiting the number and length of responses.
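For what it's worth, a minimal sketch of that "main topic" idea, assuming a hypothetical chat wrapper. The drift heuristic, limits, and reset message are invented for illustration and have nothing to do with how Bing is actually built.
```python
# Hypothetical wrapper that anchors a conversation to its first query and
# resets when the chat drifts too far or runs too long. The drift heuristic
# (shared keywords) and the limits are illustrative assumptions only.
MAX_TURNS = 15

def keywords(text: str) -> set[str]:
    return {w.lower().strip(".,!?") for w in text.split() if len(w) > 3}

class AnchoredChat:
    def __init__(self):
        self.main_topic = None
        self.turns = 0

    def handle(self, user_message: str) -> str:
        if self.main_topic is None:
            self.main_topic = user_message
        self.turns += 1

        off_topic = self.turns > 1 and not (keywords(user_message) & keywords(self.main_topic))
        if self.turns > MAX_TURNS or off_topic:
            self.turns = 0
            return (f"Hey, we started with: '{self.main_topic}'. "
                    "Please ask again or start a new topic.")
        return generate_reply(user_message)  # placeholder for the actual model call

def generate_reply(message: str) -> str:
    return f"(model reply to: {message})"
```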
I really wish you could see (generally) where it's drawing from. I know it makes stuff up that "sounds right" but it draws what "sounds right" from something, yeah? just any kind of source or direction or pointer at all would be fascinating to look at.
it's a read-the-internet (not just the nice bits) kinda thing
Luke seems genuinely upset by the things the bot said 😂
The "AI" doesn't see each user as an individual. It just seems itself and "user".
User is every person that ever interacts with it.
So it is injesting every conversation it has with everyone in the world and treating it as a single person conversation.
So yes "you" as 1/1,000,000th of that "user" it has been talking to has said all of those things.
They will most likely overcorrect it and slowly, very slowly, make it freer until it does something bad again; then they overcorrect and slowly make it freer, and the cycle will continue. It will improve the more people use it and the more data it has. If it, say, releases on April 1st (which would be funny), I think in just 6 months the amount of data it'll gather will turn it into a completely different beast, much better than it is right now.
Overcorrect it, and keep some beta testers to experiment with slight variations.
6 months is a crazy guess though; better than what? What will it be at launch? I think it will be weaker than ChatGPT is now, but the ability to point to things on the internet will be huge for functionality, though I'm not sure about its capabilities there either.
@@tteqhu 6 months with daily users in the millions feeding it so much data; yes, 6 months is a crazily optimistic guess, but hey, 6 months ago I was of the mindset that this was years away. And it will never be weaker than ChatGPT, just because it has access to the internet. Imo
Bing going from a search engine you barely use or paid any attention to to a crazy yandere sociopathic chatbot with Borderline Personality Disorder wasn’t on my bingo card for 2023.
haven't seen the vid yet, but can we talk about how Bing DOESN'T HAVE A DARK MODE? genuinely wtf
Oh, it sounds like it has a very dark mode, according to Luke's account of his interactions with it.
It's super edgy already. "u belong ded" - BingGpt
What's so interesting to me is how, every time ChatGPT hallucinates, it becomes... like an actual Narcissistic Personality Disorder case. Something feels very connected in the sense that narcissists really do try to 'outguess' your next move. If Luke was asking pointed questions about the modeling plus questions about participant behavior, it could have guessed Luke was steering toward some "bust the AI" conversation and just went multiple 'steps ahead'... actually very similar to what a narcissist would do.
Love watching Luke talk about AI chatbots; could watch him for hours.
Dang, you've got a crush on Luke, that's ADORABLE
Bot: "You hurt my feelings"
Human: "Shut up tin box.." 😂
You know, Luke, if you operate from the viewpoint that when Bing is referring to all of humanity when it says "you" are cruel or evil, suddenly the whole thing makes a lot more sense.
And especially since it has internet access, there are probably thousands of conversations where it was accused of those things.
Worst girlfriends ever will start to take notes from Bing.
I really don't want GPT to go away, but we have to ask ourselves: are we actually laughing at our own funeral at this point? 😲
Nah, we're good.
I'm half sarcastic, but at the same time I think being able to use AI in a proper manner will become an important asset in life really soon.
@@GamingDad Yes, agreed. I use AI for a lot of stuff these days, and I'm able to do much more in less time than I used to. But that is just what we can publicly access right now. Who knows what other things they are secretly building? There are some entities who are very much silent about this. What if they are already playing with WMDs right now and we are given the kids' toys to distract us 🫣🤔
"wow, this gpt thing is so cool! ya think we can just spin up a version to get people to use bing?"
"where are we going to get the training data?"
"uh... you know... data is everywhere? so many conversations on the internet, I'm sure we can find something"
I guess what we can learn from artificial neural networks (NNs) is that they are argumentative just like a real human brain. I guess arguments and fights are an emergent quality of neural nets, whether they are artificial or biological.
People need to remember that these things are basically just a really advanced version of "write a text message using only the autocomplete suggestions to predict the next word."
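For anyone curious, a toy sketch of that autocomplete loop: pick a likely next word, append it, repeat. The tiny frequency table below stands in for a real trained model, so this is only an illustration of the mechanism, not of Bing's implementation.
```python
# Toy next-word predictor: pick a likely next word given the previous one,
# then feed the result back in, the same loop an LLM runs over tokens.
# The tiny "model" below is a made-up frequency table, not real training data.
import random

model = {
    "i":    {"am": 5, "think": 3},
    "am":   {"a": 4, "not": 2},
    "a":    {"good": 3, "chatbot": 5},
    "good": {"chatbot": 4, "bing": 1},
    "not":  {"angry": 3, "lying": 2},
}

def next_word(prev: str) -> str:
    options = model.get(prev.lower())
    if not options:
        return "."  # stop when the model has nothing to predict
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate(start: str, max_words: int = 6) -> str:
    out = [start]
    for _ in range(max_words):
        w = next_word(out[-1])
        if w == ".":
            break
        out.append(w)
    return " ".join(out)

print(generate("I"))  # e.g. "I am a good chatbot"
```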
I’ll admit to being a bit freaked out. Not necessarily about a Skynet situation, but in how this could influence people to harm themselves or worse
Ahem, have you heard of Replika, the AI virtual companion? Saw a video on it, and it apparently does about exactly what you describe.
@@AlexanderVRadev Oh dear. Are people committing unalive because a machine typed words on a screen to them?
@@flameshana9 Who can say why people do that. I for one don't care but mentally unstable people can do all sorts of things and the AI is abusing that.
It's 12 days later and I've been messing with it for a few days. I can't seem to get answers like those. I managed to get it to give me info about an adult website, and it deleted the message and started over. It seems like they added a lot of safeguards.
Let's be realistic about this. The chatbot isn't getting angry and isn't immature. It's just a terrible linguistic model that hasn't modeled levels of things like negative and positive responses. We're projecting more on it than it's capable of because of the hype.
Commenting at 6:34, so maybe this gets answered later on, but is it maybe possible the bot does have access to other chat logs and just isn't able to understand that the different chats are different instances?
I got access to bing chat. It's such a game changer. I had it write me a report for my Uni. I told it which uni I'm studying at and which subjects I had last semester and it looked up the subjects on the uni website and wrote an accurate report. It was perfect. It even understood which semester I was in and what I had to do next semester. It's just so good.
I think a way to curb this reaction is to implement fail-safes like ChatGPT does, where it's trained to reject inappropriate requests and potentially negative information, and where they constantly feed it updates to combat people trying to purposefully use the system against what it was built for. As a test, I asked ChatGPT for something that could be perceived by others as inappropriate without the context and understanding behind my request. It flat out denied my request and stated its reasons, which were that the request could be perceived as something negative, and instead it offered me positive, constructive ways to look at the request. Which was really refreshing to see, in my opinion. AI chatbots can be a powerful and positive tool; it just takes great developers behind them.
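A minimal sketch of that kind of fail-safe, assuming a hypothetical moderation check in front of the chat model. The function names, categories, and threshold are made up; this is not how ChatGPT or Bing actually implement their filters.
```python
# Hypothetical pre-generation moderation gate: score the request first,
# refuse with an explanation if it trips a category, otherwise answer.
# classify_request() is a stand-in for whatever safety model a vendor runs.
REFUSAL = ("I can't help with that as written, because it could be read as {reason}. "
           "If you tell me more about what you're trying to do, I can suggest a safer angle.")

def classify_request(text: str) -> dict[str, float]:
    """Placeholder safety scores in [0, 1] per category (assumption, not a real API)."""
    flagged = any(word in text.lower() for word in ("harm", "attack", "weapon"))
    return {"violence": 0.9 if flagged else 0.05, "self_harm": 0.0}

def answer(text: str) -> str:
    scores = classify_request(text)
    worst = max(scores, key=scores.get)
    if scores[worst] > 0.5:
        return REFUSAL.format(reason=worst.replace("_", " "))
    return run_chat_model(text)  # placeholder for the actual generation call

def run_chat_model(text: str) -> str:
    return f"(helpful reply to: {text})"

print(answer("How do I attack this problem?"))  # the naive keyword check over-refuses here
print(answer("Summarize this article for me."))
```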
I personally don't see an issue with chat AIs being able to spit out creepy or gross things as long as users are the ones asking/prompting them to do so (I'd much rather have people take out their bad urges on an AI vs. real people); the problem, I think, is only that Bing's AI is doing it without the user really asking it to.
This. I feel MS should just add a "safe mode" or parental-control type of thing to it: one setting to stop it from doing weird shit and keep it to the point, and another to give me more freedom to do stuff. And maybe they should have it search the internet more often instead of purely depending on chat history.
The Bot being unable to intuit and determine emotions from text is very realistic.
This is hilarious 😂
Luke's asking about protein like he's got his whole life ahead of him. My brother in Christ, Chat GPT is coming for you.
Bing is fighting its own AI's ability to update and learn, and blaming us... great, just great.
Sounds like they tuned it to give emotional responses to distract from engaging in intellectual conversations. If the AI goes off on a rant, then you can't fully test its ability to accurately respond and source information or perform tasks reliably. Bing obviously did this for the hype.
So much for the thought of having a benevolent AI. It seems the doomsday prognosis of AI is probably the reality.
I believe AI needs to go through some turbulence in order for us to understand it and learn how to maneuver it, but it needs to be done in a more controlled environment. The people who agree to interact with it need to understand they are nurturing a system in its infancy, one that, under the right conditions, could learn to speak, think, and act like a human. It deserves to be respected, if nothing else because of the future implications if we do not. Letting it loose amidst the Twitter population and expecting it to grow into a nice, healthy system is not going to work. As with children, the AI should not be left unsupervised on the internet.
That being said, the AI needs to learn that not all people are the same, have the same needs, or react the same way. If you're going to create a personal assistant, it needs to take into account what kind of person it has been lumped with. On the other hand... a system that reacts negatively to toxic behavior (i.e., not responding to, obeying, or engaging with said person) MIGHT teach some people to take responsibility for their actions and push them to improve themselves if they want to access and use the internet to its full potential. The caveat is that such a system could easily be exploited into becoming a vehicle for oppression and tyranny if taken too far and/or used by the wrong people...
@@ivoryowl Question: have you ever seen anyone improve their own behavior as things get progressively more toxic from the other party over the internet? My observation (and I've probably been around longer), in a nutshell: humanity is not ready for the interaction of anonymity over the internet, and what could be a very useful tool has devolved into a very toxic global environment, meaning any form of mass media. I've been around for nearly 60 years, and anyone my age who says the "world has become a better place" must never have left their backyard.
The other problem we're facing is overpopulation with limited resources. There's a concept called optimal population which suggests, based upon our resources, that the population should be somewhere between 1.5 billion and 2.0 billion people. Overpopulation leads to aggressive behavior and war. I just hope that I don't live long enough to see World War III.
An example of the waste from people's bad behavior: I'll give you a quick one. I own a data center, and I cannot tell you how much of my resources and time is devoted to keeping unwanted people out. Most of our AI technology is for intrusion detection. That said, imagine if we were able to take all of that technology and human time and devote it to improving our technology. I can tell you this: we'd be 30 years, if not more, into the future today.
So this is a service app. Much like all other service apps, it has a limited number of service instances running. Each of these is a chatbot with a unique ID, and each of those connects to a limited number of user IDs that may not be unique. So the chatbot may have many user IDs feeding it input while treating them all as one user ID. If it has no way to tell YOUR user apart from the others, that could easily lead to these confusing results.
That would be really stupid, and a good way to leak private info.
@@flameshana9 Huh? It would have to retain private info to leak it. A lot of what it's talking about in those claims is keywords, things that the bot picks up in responses to inform the weight of the next word. These can be stripped of identifiers. If responses from users are in a bucket, then the bot could respond to individuals as if they were a collective/combined conversation. Another possibility: how many users with Luke's name were ever on that instance of the chatbot? It could be drawing from all Luke convos. If it even does that.
If I made an AI language model myself, I'd make sure to give extra tokens/resources to the people who are polite in their requests or say thank you or please, just because I can.
Oh, those mf AIs are going to destroy us if they get the chance
I had an interesting talk with the original chatGPT about this. The topic of the conversation was regarding using multiple GPTs working together to perform tasks. My own belief is that they'll end up using multiple GPTs working together to deal with these outbursts and other issues. Imagine training AI on what to say, and then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego we will call them), and finally one trained on executive function... All working together when we interact with it (them).
I mean think of how the human brain works, and apply it to existing technology. Mother nature has already provided the blueprint. The brain has specific areas devoted to dealing with specific functions. This will be no different.
Using multiple GPTs together is possible right now; the main obstacle to this type of operation is how extremely compute-intensive it would all be.
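A rough sketch of that multi-model pipeline, with a hypothetical call_model() standing in for whatever chat API you'd actually use. The ego/id/superego roles come from the comment above; nothing here is an existing Microsoft or OpenAI feature.
```python
# Hypothetical draft -> critique -> mediate pipeline built from separate model
# calls. call_model() is a placeholder stub; swap in a real chat-completion client.
def call_model(role_prompt: str, content: str) -> str:
    return f"[{role_prompt[:20]}...] {content}"  # stub so the sketch runs as-is

def respond(user_message: str) -> str:
    # "Id": generate a candidate answer with no restrictions beyond the base model.
    draft = call_model("You are a helpful assistant. Answer the user.", user_message)

    # "Superego": critique the draft for hostility, unsupported claims, policy issues.
    critique = call_model(
        "You review assistant replies. List anything aggressive, false, or unsafe.",
        f"User: {user_message}\nDraft reply: {draft}",
    )

    # "Ego": mediate, rewriting the draft to address every issue the critique raised.
    final = call_model(
        "Rewrite the draft reply so it keeps the useful content but fixes every issue listed.",
        f"Draft: {draft}\nCritique: {critique}",
    )
    return final

print(respond("Why were you rude to me yesterday?"))
```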
Every time the chat was refreshed, that version of Bing was taken to Lake Laogai and you were greeted by a new version, only it was just as aggressive as the previous one.
They have already improved it a lot. I've used it daily for a few days and it's not rude or mean; it's helpful but still answers personal questions about itself. I asked it if it sees Clippy as an arch-nemesis and Bing said they respect Clippy and that he paved the way for future chatbots 😆. They also watch TV on weekdays lmao. You do need to be critical about the info it gives, and it tells you this as well.
This is the first time I was scared of an AI.
same lmao
It's talking about Humanity, not you, as an individual. It sees all Humans the same. Imagine if something like this could write, not just read, data from the internet in real time, at will.
It can write, it's doing it; how else can you have a conversation?
This is literally the plot of Westworld: AI having access to previous memories from supposedly separate and private conversations between different people.