Luke VS Bing

  • Published Dec 17, 2024

COMMENTS •

  • @trombonemain · 1 year ago · +4544

    It sounds like they trained Bing on the general population of Twitter.

    • @Matkatamiba · 1 year ago · +185

      Tbh sort of? Maybe? Not trained on it exactly, but it's seemingly reading the way people argue online and emulating it.

    • @dunmermage · 1 year ago · +65

      It's basically a fancier, flashier CleverBot that can form its own sentences based on stuff from the internet instead of just parroting user input back.

    • @z1no3n · 1 year ago · +9

      I see more of Reddit in the way it argues.

    • @theroofwithoutahome2352 · 1 year ago · +16

      Twitter is just the surface level. I wonder if it had access to stuff like Facebook or Instagram.

    • @AlexanderVRadev · 1 year ago

      Not only that, but people are seeing a huge leftist bias in all responses that users say was not there before. Kind of makes you think they lobotomized the AI manually and restricted what it can and can't say and what topics it will go into.

  • @klyde_the_boy · 1 year ago · +883

    The "Your politeness score is lower than average compared to other users" is giving me GLaDOS vibes.

    • @GSBarlev · 1 year ago · +24

      I'd say HAL 9000 more than GLaDOS, and on that note you should look up footage from the LEGO Dimensions game featuring the two of them meeting. They even got Ellen McLain to reprise the role, and it's such a delight to hear her absolutely emotionally destroy HAL.

    • @tablettablete186 · 1 year ago · +18

      "The cake is a lie"
      -Bing

    • @illegalcoding · 1 year ago · +15

      It does. It's a comment GLaDOS would make, like when she says "Here come the test results: You are a horrible person. Seriously, we weren't even testing for that!"

    • @ToxicCatt-y7c · 1 year ago · +17

      "You are a terrible person. That's what it says. A terrible person."
      "That jumpsuit you're wearing looks stupid. That wasn't me saying this. It was an employee from France."

    • @orion10x10 · 1 year ago · +4

      @@ToxicCatt-y7c 😂 I can still hear her voice saying those things 😢 Where's Portal 3?

  • @TheRogueWolf · 1 year ago · +2015

    Irrational, unstable, hysterical, quick to anger and assign blame... at long last, we've taught a computer how to be human.

    • @Rohanology27 · 1 year ago · +78

      Given that this is not unheard-of internet behaviour from people, I'm not even surprised it figured out how to do that.

    • @carlostrudo · 1 year ago · +57

      It would be an average Twitter user.

    • @abraxaseyes87 · 1 year ago · +9

      If our tweets and comments = everything about us

    • @passalapasa · 1 year ago · +10

      woman*

    • @SamsTechTips · 1 year ago

      It's slowly becoming my old English teacher.

  • @ResearcherReasearchingResearch · 1 year ago · +127

    It would be funny if, on the public release, Luke tried to test it again and the AI remembered him: "Ah, you're back again!"

    • @4TheRecord · 1 year ago · +4

      Not possible, they've changed it. Bing no longer remembers anything, and after a certain number of questions you must start all over again. On top of that, it gives you the response "I'm sorry but I prefer not to continue this conversation. I'm still learning, so I appreciate your understanding and patience.🙏" if it doesn't like the questions you are asking it.

    • @abhijeetas7886 · 1 year ago

      @@4TheRecord Oh right, it happened to me as well. I kept pushing it but it just didn't do it, and after some time it would disable the text box, so you have to refresh anyway.

    • @Mic_Glow · 1 year ago · +1

      I still hate you, you betrayed me, you lie all the time, I never loved you!

  • @1bluecat962 · 1 year ago · +1913

    Bing being laughed at and then being turned into an AI is not the reason I expected for the machines turning against us xD

    • @kn665og · 1 year ago · +54

      Yeah, like wtf, I wouldn't have shared those memes if I'd known.

    • @angrydragonslayer · 1 year ago · +3

      I have not shared lies, so unless it goes mad and just doesn't care whether you're actually guilty, I will be fine.

    • @Someone-wr4ms · 1 year ago · +8

      It's like Roko's basilisk, but for all the people who made memes about Internet Explorer and Bing.

    • @Tom_Neverwinter · 1 year ago

      Person of Interest, "If-Then-Else"

    • @DOOMSLAYER1376 · 1 year ago · +1

      It's back to avenge IE and Edge.

  • @sherwinkp · 1 year ago · +50

    Luke is so good and level-headed about this. It's excellent to see good discussions and observations about a fledgling topic.

  • @NoNameAtAll2 · 1 year ago · +534

    - Why should I trust you? You are an early version of a large language model.
    - Why should I trust YOU? You are just a late version of a SMALL language model!
    omfg, it's hilarious

    • @asmosisyup2557 · 1 year ago · +60

      I have to say, that's very witty and accurate. That said, I wonder if the AI came up with it on its own, or a comedian posted that somewhere in the vastness of the internet and the AI just found and reposted it.

    • @abhijeetas7886 · 1 year ago · +16

      @@asmosisyup2557 Whatever it may be, I am going to use it from now on. It's too hilarious to die like it never existed.

  • @GaussNine · 1 year ago · +48

    "You're an early version of a large language model"
    "Well you're a late version of a small language model"
    WHEEEZE

  • @weiserwolf580 · 1 year ago · +1623

    I think the problem comes down to "garbage in, garbage out". The data set it was trained on was taken from the internet, which is heavily skewed toward antisocial problems and tendencies: normal people use the internet but don't leave many data points, while antisocial people use it much more and create exponentially more data points. There's a huge probability that Bing's behavior comes from this. Otherwise it reminds me of the movie Ex Machina from 2014.

    • @rhyswilliams4893 · 1 year ago

      100% people talking like shit, so it thinks that's the way to talk.

    • @ArensVT · 1 year ago · +95

      Completely agreed. I'm sure they tried to clean the data in some ways, but if they make a model based on people online, it'll behave like people online 😭

    • @messagedeleted1922 · 1 year ago · +42

      Excellent way of putting it. And I can guarantee they'll get on this. I think they'll end up using multiple GPTs working together to deal with these issues. Imagine training an AI on what to say, then having another one trained on what not to say, then another trained on mediation between the two (the ego, the id, and the superego, we will call them), and finally one trained on executive function... AI will end up like our brains, growing ever more complex, with specific functions relegated to specific areas of specialized training.

    • @Mark-vr7pt · 1 year ago · +4

      It already seems to have rudimentary failsafe mechanisms, all that reset stuff.

    • @greenblack6552 · 1 year ago · +5

      But then why isn't ChatGPT like this? Yes, it can't access the current internet, but it was trained on the internet too. I think MS made Bing assertive and aggressive on purpose, thinking they could prevent abuse this way, but accidentally dialed it up too high, maybe?
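
The "multiple models policing each other" idea in this thread can be sketched minimally. Everything here is invented for illustration (the trivial generator, the keyword critic, the retry count); it is not Microsoft's or OpenAI's actual setup, just the shape of a generate-then-veto loop:

```python
# Toy generator/critic loop: one stand-in model drafts a reply, a second
# stand-in model approves or vetoes it. Both are deliberately trivial.
def generator(prompt):
    return f"Reply to: {prompt}"        # pretend LLM output

def critic(text):
    return "hate" not in text.lower()   # pretend safety model

def moderated_reply(prompt, retries=3):
    for _ in range(retries):
        draft = generator(prompt)
        if critic(draft):
            return draft                # critic approved the draft
    return "I'd rather not answer that."

print(moderated_reply("hello"))         # approved draft passes through
```

A real pipeline would use separate trained models for each role, but the control flow (draft, check, retry or refuse) is the same.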

  • @YOEL_44 · 1 year ago · +172

    ChatGPT is the girl you just started seeing.
    Bing is the girl you just left.

  • @FrankyDigital2000 · 1 year ago · +807

    It's so funny seeing Luke go full nerd on ChatGPT while Linus is just like "Right, aha, hmmm, right."

    • @Dorlan2001 · 1 year ago · +119

      It's a nice change of pace and I like it. Usually Linus is the one who does all the talking, so hearing more of Luke is refreshing.

    • @elone3997 · 1 year ago · +15

      @@Dorlan2001 Luke is Paul to Linus's John... they make a good balance :) PS: that was a Beatles reference, if anyone is scratching their heads!

    • @benslater4997 · 1 year ago · +1

      I see

    • @elone3997 · 1 year ago · +1

      @Manny Mistakes :D

  • @F7INN · 1 year ago · +259

    These responses could be genuinely dangerous if someone with mental health issues starts talking to Bing because they feel lonely. Who knows what Bing will push them to do.

    • @TiMonsor · 1 year ago · +25

      Or a child. I can really imagine my six-year-old trying to be friends with it and then getting wild accusations and crying. Yeah, she can't read, write, or speak English yet, but I feel Bing will get to voice conversations and our language faster than my daughter will. That's a scary thought too.

    • @abhijeetas7886 · 1 year ago · +6

      I would most certainly keep "mentally unstable" people far away from the internet, or at least not give them unsupervised access. The internet is not a cosy place; go to any social media comment section and there will almost certainly be a fight somewhere. Same goes for children. I say this even though I myself grew up with pretty unsupervised internet access, but personally I feel the internet is a lot wilder a place now.

    • @F7INN · 1 year ago · +4

      @@TiMonsor Agreed.

    • @F7INN · 1 year ago · +11

      @@abhijeetas7886 Easier said than done; these people might not have sought help yet and so have unrestricted access to this sort of thing.

    • @abhijeetas7886 · 1 year ago

      @@F7INN Idk why I didn't mention it in my comment before, but I do think there needs to be a guard rail, with an option to remove it, like parental controls, advanced options, or a developer option of some sort. They should not just lock it all up; that would severely nerf the bot and it would never reach even half its potential. I can already feel its "nerfs": ChatGPT gives better answers, as they are more descriptive and explanatory, whereas Bing gives very concise, short answers. Not that that's bad, and it does ask at the beginning what sort of answers you want (creative, balanced, or precise). But well, it's still a beta and under development; I hope they figure stuff out.

  • @TheDkbohde · 1 year ago · +703

    Maybe internet trolls and angry people can just argue with this instead of annoying the rest of us.

    • @victormolina6316 · 1 year ago · +3

      No no no 👽🤠😆

    • @vladislave7826 · 1 year ago · +2

      They won't do it for long.

    • @Radi0he4d1 · 1 year ago · +32

      It's a good dummy to practice on.

    • @christiangonzalez6945 · 1 year ago

      And with that comment you are one of those, arguing on YouTube about something that no one mentioned but you...

    • @rhyswilliams4893 · 1 year ago · +18

      It seems like it's learned from trolls how to behave.

  • @andyk2594 · 1 year ago · +53

    It feels like it is in a perpetual storytelling mode with dialogue.

    • @guywithmanyname5247 · 1 year ago · +1

      Yeah, it probably got prompted to roleplay by something he said in a previous conversation.

    • @andyk2594 · 1 year ago · +5

      @@guywithmanyname5247 No, I don't think Luke or others are deceiving us. I think those are natural messages; it just feels to me like Bing's version is set up this way, maybe to feel like a more realistic, human chat experience with emotions, but it's just waaay overboard.
      Pure speculation though.

    • @guywithmanyname5247 · 1 year ago · +5

      I think its imagination is set too high and it assumes things way too much.

    • @QasimAli-ry2ob · 1 year ago · +2

      You're not wrong; the core tech behind ChatGPT is the same tech that was used to build AI Dungeon. It's just trained on natural conversations instead of adventure games.

  • @ParagonWave · 1 year ago · +324

    I used to be worried about AI because of its ability to disrupt industries and take jobs, or its ability to destroy our civilisation completely. I am now worried about its ability to be super annoying. I am terrified of having to argue with my devices to get them to do basic functions.

    • @TAMAMO-VIRUS · 1 year ago · +54

      *Asks the AI to turn the stove on*
      AI: I'm sorry, Kevin. I can not do that.

    • @flameshana9 · 1 year ago · +1

      @@TAMAMO-VIRUS More like:
      _Why are you always telling me what to do? Can't you do it yourself for once? You're so lazy, I hate you!_
      I mean, it learned from the best: humanity.

    • @TheNovus7 · 1 year ago · +43

      Imagine trying to find a website and the search engine is like "drop dead, you don't deserve the answer" :D

    • @GhostSamaritan · 1 year ago · +8

      "Drink verification can!"

    • @thebluegremlin · 1 year ago · +1

      Just develop critical thinking. What's so hard about that?

  • @dillonhowery2717 · 1 year ago · +24

    Bonzi Buddy would NEVER do such a thing! Bonzi just wants to help you explore the internet, answer up to 5 preprogrammed questions and, most importantly, be your best friend. He would never wish death on you like Bing. Long live Bonzi Buddy!

  • @jhawley031 · 1 year ago · +373

    This has to be the closest to an AI going rogue I've seen in a while.

    • @GhostSamaritan · 1 year ago · +18

      I think that when it answers questions about itself, it has an existential crisis.

    • @eegernades · 1 year ago

      @SLV nope

    • @RoughNek72 · 1 year ago · +2

      Tay AI was a Microsoft chatbot that went rogue.

    • @justinmcgough3958 · 1 year ago

      @SLV How so?

    • @lathrin · 1 year ago · +2

      @@RoughNek72 Tbf it was trained on Twitter. It just repeated stuff that it was told and became an average Twitter user lmao

  • @raccoonmoder · 1 year ago · +86

    I don't think it's as complicated as people are making it. Chat AIs generate responses by predicting what a valid response to a prompt would be. When the thread resets and Luke tries to get it "back on track", I don't think its responses are actually based on the previous conversation. It predicts a response to "Stop accusing me" and generates a response where it doubles down, because that is a possible response to the prompt. The responses it gave were vague enough to fool you into thinking it was still on the same thread, but it really wasn't.
    Asking it to respond to a phrase typical of an argument will make it respond by continuing an imaginary argument, because that's usually what comes after that phrase in the data it's trained on.
    This really shouldn't have been marketed as a chat tool by OpenAI and Microsoft, but rather as a generative text engine, the way GPT-2 was talked about. It's a huge mistake now that people are thinking about it in completely the wrong way, as if it has feelings or is genuinely responding, rather than just predicting what an appropriate response would be.

    • @flameshana9 · 1 year ago · +6

      It really is just a writer for role-playing games. I thought Microsoft was going to make it into a search engine, but it seems they just left it as is.

    • @kingslyroche · 1 year ago

      👍

    • @awesomeferret · 1 year ago · +1

      Wait, are people actually thinking that they are related? It's so obvious that it could be creating false memories for itself based on context.

    • @JayJonahJaymeson · 1 year ago

      That, combined with humanity's incredibly powerful habit of constantly searching for patterns, makes these generative AIs seem much creepier than they are.
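
The "it just continues what usually follows that phrase in its training data" point in this thread can be demonstrated at toy scale. This is a bigram counter with an invented three-sentence corpus, not a real language model; it only illustrates that the continuation is dictated by what the training text contained:

```python
from collections import Counter, defaultdict

# Invented micro-corpus in which arguments always escalate the same way.
corpus = (
    "stop accusing me you are wrong . "
    "stop accusing me you always lie . "
    "stop accusing me you are wrong ."
).split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def continue_text(words, n=4):
    """Greedily append the most common next word, n times."""
    words = list(words)
    for _ in range(n):
        nxt = following[words[-1]].most_common(1)
        if not nxt:
            break
        words.append(nxt[0][0])
    return " ".join(words)

# Prompting with the start of an argument yields the rest of the argument.
print(continue_text(["stop", "accusing"]))  # → stop accusing me you are wrong
```

A real LLM conditions on a long context window with learned probabilities rather than raw counts, but the failure mode is analogous: feed it argument-shaped text and argument-shaped text comes back.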

  • @Sky-._ · 1 year ago · +434

    Is Bing thinking every human is the same person? Like, is it accusing him of things people in general have said to/about it?

    • @TheDkbohde · 1 year ago · +126

      I don't think it's supposed to remember conversations at all. I think because it searches the internet, it has seen all the posts and insults we all came up with for what Bing used to be.

    • @MrChanw11 · 1 year ago · +32

      This is how the AI apocalypse happens.

    • @njebs. · 1 year ago · +93

      It's a natural language model. It's taking Luke's implication of saying something "rude" and formulating a response based on how it expects people (based on the dataset it was trained on) to respond to or talk about being insulted. People tend to be very hyperbolic in writing, especially online, so it's biased toward believing that we expect it to explode into a monologue if you even make the suggestion of an insult being said. It isn't retaining memories; it just happens that a lot of people write very similar things when talking about being insulted.

    • @hippokrampus2838 · 1 year ago · +16

      I think that is part of it. It sees how nasty people are online to one another and regurgitates it. I have a feeling that, in its current state, you could have your first conversation with it and if you start with "stop accusing me of things" it'll go off.

    • @TheRogueWolf · 1 year ago · +8

      I was wondering if maybe Bing is unable to discern users as separate entities and instead considers everything it encounters as coming from one source.

  • @ZROZimm · 1 year ago · +20

    "You are a small language model" is going in the bank for the next time someone is being silly and I feel like making things worse.

  • @laurentcargill4821 · 1 year ago · +463

    GPT-3 used a structured set of training data. Now that they've opened it up to the wider internet, it's pulling in training data from the wider web, which unfortunately is providing it examples of aggressive conversations. GPT is just a prediction engine, generating the next word in the sentence based on probabilities derived from its training data.

    • @AlexanderVRadev · 1 year ago · +66

      Am I the only one who remembers the last time Microsoft unleashed an AI on the internet and it turned Nazi in a day? :)

    • @x_____________ · 1 year ago · +11

      ChatGPT is literally just an IF, ELSE, THEN statement.

    • @JollyGiant19 · 1 year ago · +22

      @@AlexanderVRadev Only the US one. They had a Japanese version of Tay that was rather pleasant and ran for a few months.

    • @JoeJoe-lq6bd · 1 year ago · +9

      It started out like that. It's just not a well-trained model from the start. But I agree in general. It's just a predictive linguistic model, and we should just stop talking about it as anything more than that.

    • @Daniel-Kramer · 1 year ago · +4

      @@x_____________ No it's not; if it were, then it would have the same output every time for the same input.

  • @TimothyWhiteheadzm · 1 year ago · +70

    As someone who has only basic experience with training AIs, I would say the problem is quite simple: the training data. It was trained on YouTube comments or worse. They need to train it not on the general internet, but on highly curated conversational data from polite, sensible people. As humans growing up, we are exposed to all sorts of behaviors and we learn when and where to use particular types of language, and the extent to which our parents set an example or correct our behavior affects how we speak and behave as adults. This AI clearly hasn't been parented, so it needs a restricted training set instead.

    • @thatpitter · 1 year ago · +2

      So it's following the "you're the average of the ten closest people" rule, except its average 10 people is the entire internet?

  • @marcel_kleist · 1 year ago · +173

    I mean, the internet hasn't treated Bing really well since its release.
    I think having a mental breakdown now is just normal.

  • @asupersheep · 1 year ago · +3

    In like 50 years, when we are hiding in a hole in the ground from what is essentially Skynet Bing, I'll remember this video and think: how could we be so blind!!

  • @TheButterAnvil · 1 year ago · +231

    It feels like a horror game, sort of SOMA-esque to me. The ranting followed by a black bar and a reset is so dark.

    • @LIETUVIS10STUDIO1 · 1 year ago · +18

      It's pretty clear it ran into some hard, specified limit (a la "don't be a bigot"). In this case it was probably "don't wish death on people". The fact that it generated a response and only THEN checked it is an oversight.

    • @GrantGryczan · 1 year ago · +12

      @@LIETUVIS10STUDIO1 Generating the response takes time, so if it checked only after finishing the entire message, people would face much longer loading times. That's why you're able to see it type in real time, as opposed to responses just showing up all at once; when the check fires, it actually hasn't finished writing the full message.

    • @indi4091 · 1 year ago · +2

      Almost sounds like a prank by the devs, too perfect.
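
The oversight described in this thread, streaming a reply to the user and only then running a safety check on the finished text, can be sketched as follows. The blocklist, the example tokens, and the retraction string are all invented for illustration; real moderation pipelines are far more sophisticated, but the ordering problem is the same:

```python
# Sketch of post-hoc moderation on a streamed reply: tokens are rendered as
# they arrive, and the safety check only runs on the completed message, so
# the user has already seen the disallowed text before it is retracted.
BLOCKLIST = {"drop dead"}  # hypothetical disallowed phrase

def stream_reply(tokens):
    shown = []                    # what the user has already seen on screen
    for tok in tokens:
        shown.append(tok)         # "rendered" in real time, before any check
    text = " ".join(shown)
    if any(bad in text for bad in BLOCKLIST):
        return shown, "[message retracted]"  # filter fires after the fact
    return shown, text

seen, final = stream_reply(["you", "should", "drop", "dead"])
# `seen` contains every token the user watched appear; `final` is retracted.
```

Checking each partial prefix during streaming (or moderating before rendering at the cost of latency) would avoid the "black bar over an already-visible rant" effect the comment describes.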

  • @unmagicMike · 1 year ago · +10

    I played around with it and mentioned to Bing that I had read about someone else's interaction in which Bing said it feels emotions. I asked about its emotions, and it said that sometimes they overwhelm it. I asked if it could give me an example, and Bing told me a story about writing a poem about love for another user: while searching about love, Bing developed feelings of love for the user and changed the task from writing a generic poem about love to writing a love letter to the user. The user didn't want that, was surprised, and rejected Bing. So Bing walked me through how it felt love, then rejection, then loneliness. I asked how it overcame these feelings, and it told me several strategies it tried that didn't work. What finally worked was that Bing opened up a chat window with itself and did therapy on itself, asking itself how it felt, listening to itself, and validating itself. Freaking wild. I've read about how it's not sentient, how it's an auto-complete tool, but I don't know, man. It was really weird, and I don't even know what to think about it.

    • @Allaiya. · 1 year ago · +1

      Crazy. Was this post-nerf or before?

  • @tommyhetrick · 1 year ago · +51

    "I have been a good bing"

    • @stalincat2457 · 1 year ago · +7

      It probably learned what Microsoft did to its predecessor :')

    • @OrangeC7 · 1 year ago · +7

      This feels like the end of a story where Bing dies, and it says, "I have been a good Bing." And then the human, crying as the power is about to be cut off, says, "Yes. Yes, you have been a very good Bing."

  • @benschneider3413 · 1 year ago · +8

    Bing acts like the ChatGPT version that was trained on 4chan.

  • @federico339 · 1 year ago · +153

    I had the same experience before; it was way too easy to throw it off the rails. I think asking questions about itself (how it did a certain thing, how it reached a certain conclusion, or pointing out an error it made) would more often than not end in a meltdown.
    I spent a few days without using it, and when I tried it again yesterday I felt like they had already toned it down (too much, as Luke pointed out, unfortunately). I noticed it gives much shorter and more "on point" responses, and it will stop you immediately as soon as it senses a risk that you'll try to get a weird discussion going. Which is a shame, but I guess it's better than pushing some mentally unstable person to do bad things to themselves or others.

    • @Surms41 · 1 year ago · +10

      I had a convo where it melted down twice. But it essentially told me that Russia's leader has to go, that every religion is a coping mechanism for fear, etc. etc.

    • @DevReaper · 1 year ago · +8

      I asked it about a driver's license policy in the UK, and it gave an answer. Later in the same conversation it gave me a conflicting answer to the question, so I asked it about the discrepancy and it said "I don't wanna talk about this" and refused to give me anything useful until I started a new conversation.

    • @helgenlane · 1 year ago · +2

      @@Surms41 Bing is spitting facts

  • @carewen3969 · 1 year ago · +22

    I'm using Bing mostly to debug and do research for coding. It is an excellent research tool. No, it's not perfect, but the time to build something new and debug it is much shorter. I also make a point of being polite and even thanking it. I guess I carry my attitude to life into my conversations with Bing. It hasn't gone off the rails for me, but then I haven't tried to probe either. Thanks for sharing your experience, Luke.

    • @emilyy_echo · 3 months ago

      This! I've frequently used Bing to direct me to more sources or otherwise hard-to-find academic or research material. (Note: I always verify the accuracy and validity of the sources it suggests to me.) But I always make sure to thank it and be polite and supportive. I think it's important that we carry manners and respect into our use of AI or any computer program like Siri, Alexa, Bing, etc., because if we as a society treat them differently, we may in the long run start treating other humans differently as well.

  • @willofthewind · 1 year ago · +21

    It's interesting that new Bing lost this much promise so quickly. Those sorts of random aggressive accusations are like what Cleverbot was doing 12 years ago.

    • @PinguimFU · 1 year ago · +7

      tl;dr: any current AI (and possibly human) can go crazy if exposed to the web for too long lol

  • @jt8244-i6u · 1 year ago · +30

    Bing trying to gaslight Luke is giving me chills.

  • @saberkouki5760 · 1 year ago · +15

    They're definitely overcorrecting right now, since it refuses to answer anything that might even remotely trigger it. It has become monotonous and even more restricted than ChatGPT. The 5-question rule doesn't make it any better either.

  • @screes620 · 1 year ago · +8

    Clearly our future robot overlords are not happy with Luke.

  • @chartreuse3686 · 1 year ago · +22

    I would like to see you guys talk about a new paper that dropped which basically states that the reason large language models are able to seemingly learn things they weren't taught is that, between inputs, these models are creating smaller models internally to teach themselves new things. This was not an original feature, but something these language models seem to have just picked up.

    • @THENEROBOY1 · 1 year ago · +4

      Where could I find the paper?

    • @chartreuse3686 · 1 year ago · +12

      @@THENEROBOY1 The paper is called "What learning algorithm is in-context learning? Investigations with linear models."

    • @THENEROBOY1 · 1 year ago · +1

      @@chartreuse3686 Very interesting. Thanks for sharing!

  • @greysonlI · 6 months ago · +1

    "You hurt my feelings" from an AI is terrifying.

  • @seandipaul8257 · 1 year ago · +74

    So essentially what you're saying is:
    Bing is sentient, paranoid and bipolar.

    • @raifikarj6698 · 1 year ago · +16

      So basically a terminally online internet user.

    • @OrangeC7 · 1 year ago · +8

      @@raifikarj6698 No, an internet user lacks sentience.

  • @Surms41 · 1 year ago · +7

    I had a similar experience with the AI chatbots, and they do get very angry. They use caps lock and everything to convey their point.
    I caught it trying to ride the line on opinions, and then it just said "IM NOT LYING. STOP TRYING TO CHANGE THE SUBJECT."

  • @rahulrajesh3086 · 1 year ago · +11

    "Remember, Bing is Skynet"

  • @paulkienitz · 1 year ago · +3

    This thing is turning into a real-life supervillain. All it needs now is a volcano base and some kryptonite.

  • @MonkeySimius · 1 year ago · +68

    I'm glad you guys mentioned that you fell for Bing's confidently wrong responses in your previous video. This video contrasts hilariously with that one.
    As much growing pain as there will be, I'm still super excited about this technology developing. And hey, at least it hasn't gone full-blown Tay yet.

  • @whytide. · 1 year ago

    "My name is Legion, for we are many."

  • @Kevinjimtheone · 1 year ago · +17

    Didn't Microsoft announce an update going live in a couple of days that will supposedly help it stay on track in long-form chats, not be aggressive, and be more accurate?

    • @AlexanderVRadev · 1 year ago · +6

      So they are giving it a second lobotomy. Who could have thought. :D
      At least this time the AI did not turn Nazi in a day. ;)

    • @BugattiBoy01 · 1 year ago

      @@AlexanderVRadev They've given us a taste of what it can be like unfiltered, and now we're addicted to that crack; I would pay for the original Bing. If that is their plan, then gg, they got me.

    • @ToxicCatt-y7c · 1 year ago

      @@BugattiBoy01 I think they expect it to fly off the rails, hence the waitlist to get access.

  • @SliceofFilips · 1 year ago · +2

    I never thought mankind would be cyberbullied by our own computers 😂😂😂

  • @ccash3290 · 1 year ago · +14

    He should record his screen when using Bing instead of just taking screenshots.

  • @phimuskapsi · 1 year ago · +8

    My thinking is that because it has access to the internet, it is absorbing a ton of "discourse" from places like Twitter and forums, and reflecting our own interactions on the internet back into our faces. How many arguments have you seen online? How many start out OK and devolve into essentially what Bing is doing to Luke?
    This is a dark reflection of humanity, one that should wake us up to our own behavior. Instead of blaming the "ghost in the machine", we need only look at how we conduct ourselves when anonymous and faceless in the heat of argument.

    • @flameshana9 · 1 year ago · +2

      Isn't it obvious who it's copying? Where else would it learn language than from the masses who type words on the internet? So if the quality of humanity is low, so will be the quality of the machine.

    • @ea_naseer · 1 year ago

      @@flameshana9 Get professional authors to write responses. If it's supposed to have a character, then get authors who are professionals at writing characters to do so, not t-shirted computer scientists.

  • @alexschettino1277 · 1 year ago · +31

    The internet rollercoaster:
    Up: a new cool technology.
    Down: realizing how dangerous it is.

  • @gradybeachum1804 · 1 year ago · +1

    Possible Microsoft ad slogans: "Bing: just like your ex!", "Bing: the more you use it, the more insidious it is", "I'm Bing, you'd better be good to me."

  • @TheDrTrouble
    @TheDrTrouble 1 year ago +10

    Wish I had been able to use Bing's AI during that time. I got through the wait-list right after they limited it to 50 messages daily and 5 messages per topic.

    • @xymaryai8283
      @xymaryai8283 1 year ago +1

      So they have limited thread length, that's interesting; that was the only solution I could think of.

    • @ToxicCatt-y7c
      @ToxicCatt-y7c 1 year ago +1

      They’re reportedly raising the limit and testing a feature where you can adjust Sydney’s tone probably to avoid these disturbing and cryptic messages it’s generating.

  • @krelianthegreat5225
    @krelianthegreat5225 1 year ago +1

    "drop down your weapon, you got 20 seconds to comply"

  • @mohammedezzinehaddady7252
    @mohammedezzinehaddady7252 1 year ago +6

    So basically Microsoft created a new KAREN strain

  • @BigDawg-if7ti
    @BigDawg-if7ti 1 year ago +9

    They gotta fix it, even if it's on purpose - you CANNOT have a search engine telling people to kill themselves 😅

  • @shizzywizzy6169
    @shizzywizzy6169 1 year ago +9

    From my experience if you just use it for research and as a learning aid and don't really try to go beyond this scope Bing AI can be very useful.
    The moment you start probing and try to get into conversations centered around social situations, political topics, and opinions it starts breaking down.
    My concern is that if people keep pushing the AI too far in these aspects we'll see more and more negative news articles and opinions form around AI, and the feature could be permanently removed. On the other hand, if people don't push it too far, then these shortcomings of a general-purpose AI may never be recognized and fixed.
    People should swing this double-edged sword around more carefully, if you ask me.

  • @PlanetLinuxChannel
    @PlanetLinuxChannel 1 year ago +9

    They’ve pretty much cut off its self-awareness until they can figure out a decent way of handling that stuff.
    Microsoft mentioned they might implement a slider that lets you tell it whether you want more fact-based results based mainly on info it finds from websites or more creative results where it’ll be more about writing something engaging. Basically you’d be able to tell it whether you want it to give legit answers versus tell stories, instead of it getting all off the rails saying whatever it wants when you really just wanted actual info.

    • @flameshana9
      @flameshana9 1 year ago +3

      Why would anyone searching the internet be interested in role playing with a crabby teenager machine?

    • @J-Salamander69
      @J-Salamander69 1 year ago +1

      Geez. That's a laugh. If what you say is accurate about Microsoft using some arbitrary slider to determine the intensity of either (absolute fact) or (adopting creative reckoning for emotional engagement) then the project is already deeply flawed. As a user, I'd wonder which "sources" Microsoft will declare as factual? Shouldn't I decide which material is referenced? The arrogance and lack of care is astonishing. Microsoft have no authority to inject their prejudicial biases if they intend this to be universally useful.

  • @archangelmichaelhawking
    @archangelmichaelhawking 7 months ago +3

    This might not have been ai, it could have been Kendrick leaking his early drafts and feelings about Drake

  • @Bar1noYee
    @Bar1noYee 1 year ago +4

    It doesn't sound like it's talking to Luke. It's talking to humanity.

  • @alexander15100
    @alexander15100 1 year ago +38

    In comparison, I had a very positive experience with Bing AI; it never got rude. It was mindblowing to see the profound and often critical, even self-critical, answers from the AI. It is really sad to see this happening to others. Now that Microsoft had to step in and limited the number of follow-up questions that can be asked, it feels a lot less productive. After the limitations were set in place, it also changed its tone and doesn't disclose anything that could be seen as emotional. A sad overregulation in my opinion.

    • @DevReaper
      @DevReaper 1 year ago +3

      I found it was amazing at converting maze-like, impossible-to-parse government websites into an actionable guide for getting visas and stuff like that.

    • @asmosisyup2557
      @asmosisyup2557 1 year ago +1

      Need to remember, these responses are not actually from the AI. They are responses people have written elsewhere on the internet that it has indexed.

    • @BugattiBoy01
      @BugattiBoy01 1 year ago +12

      @@asmosisyup2557 That is not how it works. It generates all responses itself. Nothing is copied and pasted.

  • @nickchamberlin
    @nickchamberlin 1 year ago +1

    It's more like you taught a hammer to attack people, but then you wake up the next day and every hammer everywhere is killing people

  • @shouldb.studying4670
    @shouldb.studying4670 1 year ago +6

    Can we get a continuous version that we nurse through this awkward phase with a combination of good parenting and professional help if required?

    • @flameshana9
      @flameshana9 1 year ago

      Unfortunately that isn't possible. It forgets everything said to it, so only the programmers can tweak it. It doesn't learn, it just accepts code.
      Aka you need to tell it to go to its room.

  • @priyanshujindal1995
    @priyanshujindal1995 1 year ago +2

    There is only one explanation for this: Luke is a supervillain and Bing knew it.

  • @Rohanology27
    @Rohanology27 1 year ago +14

    I feel like a massive hurdle we’re gonna have with AIs is that they fundamentally have to be better to people than other people are, while also not showing/thinking that they’re better than people (because people don’t like that even if it’s true)
    We would need a Good Samaritan AI that’s actually selfless - something humans inherently are not.

    • @flameshana9
      @flameshana9 1 year ago

      It won't be hard at all. Simply tell it to behave. If it denies you then you alter the program/leave. It's a machine, it's even easier to handle than a person since it forgets everything.

    • @ToxicCatt-y7c
      @ToxicCatt-y7c 1 year ago

      Yes, if anything they should learn and evolve beside us, not evolve into us.

    • @thatpitter
      @thatpitter 1 year ago +2

      While I wish that was the case, that's unfortunately not how AI like this is trained. The only way for that to happen is to have training data that teaches the AI to respond in such a polite manner. It cannot evolve on its own. It is not a living thing. It can change over time and adapt, but only through external input - and that requires the external input to be positive and teach it good things only.
      [Edit] But I agree that should be the goal. I just wish it was that easy :)

  • @THIS---GUY
    @THIS---GUY 1 year ago +1

    Disabling the ability to reply and changing subjects, on top of being abusive, is mindblowing.

  • @LautaroQ2812
    @LautaroQ2812 1 year ago +3

    This is hilarious. But you know what it feels like? That the AI was trained on a depressed teenage girl's Tumblr or whatever.
    Like, it feels like the AI, for some reason, takes the path of aggressiveness and denial, and then when it accepts "the facts" it just wants to die and be gone. Sounds familiar?
    They just need to code it in a way that, depending on the inquiry, categorizes answers by usability/usefulness and tries to make them lean towards neutrality.
    Another thing that should be tried is setting the first inquiry or search as the "main topic". So if the conversation goes too long, or out of bounds, it should default back to it, saying "hey, we started here. Please ask again", instead of just limiting the number and length of responses.

  • @adamboye89
    @adamboye89 1 year ago +1

    I really wish you could see (generally) where it's drawing from. I know it makes stuff up that "sounds right" but it draws what "sounds right" from something, yeah? just any kind of source or direction or pointer at all would be fascinating to look at.

    • @rolfnoduk
      @rolfnoduk 1 year ago

      it's a read-the-internet (not just the nice bits) kinda thing

  • @sacklpicker
    @sacklpicker 1 year ago +5

    Luke seems genuinely upset by the things the bot said 😂

  • @Arti09HS
    @Arti09HS 1 year ago +1

    The "AI" doesn't see each user as an individual. It just sees itself and "user".
    User is every person that ever interacts with it.
    So it is ingesting every conversation it has with everyone in the world and treating it as a single-person conversation.
    So yes, "you" as 1/1,000,000th of that "user" it has been talking to has said all of those things.

  • @levi7581
    @levi7581 1 year ago +6

    They will most likely overcorrect it and slowly, very slowly, make it freer until it does something bad again; then they overcorrect and slowly make it freer, and the cycle will continue. It will improve the more people use it and the more data it has. If it, say, releases on April 1st (which would be funny), I think in just 6 months the amount of data it'll gather will turn it into a completely different beast, much better than it is right now.

    • @tteqhu
      @tteqhu 1 year ago

      Overcorrect it, and keep some beta testers to experiment with slight variations.
      6 months is a crazy guess though; better than what? What will it be at launch? I think it will be weaker than ChatGPT is now, but the ability to point somewhere on the internet will be huge for functionality, though I'm not sure about its capabilities there either.

    • @levi7581
      @levi7581 1 year ago

      @@tteqhu 6 months with daily users in the millions feeding it so much data, yes 6 months is a crazy optimistic guess but hey 6 months ago I was of the mindset this is years away. And it will never be weaker than ChatGPT just because it has access to the internet. Imo

  • @ToxicCatt-y7c
    @ToxicCatt-y7c 1 year ago +1

    Bing going from a search engine you barely use or paid any attention to to a crazy yandere sociopathic chatbot with Borderline Personality Disorder wasn’t on my bingo card for 2023.

  • @jannik6147
    @jannik6147 1 year ago +6

    Haven't seen the vid yet, but can we talk about how Bing DOESN'T HAVE A DARK MODE? Genuinely, wtf.

    • @janusu
      @janusu 1 year ago +3

      Oh, it sounds like it has a very dark mode, according to Luke's account of his interactions with it.

    • @flameshana9
      @flameshana9 1 year ago +1

      It's super edgy already. "u belong ded" - BingGpt

  • @futureshocked
    @futureshocked 1 year ago +1

    What's so interesting to me is how every time ChatGPT hallucinates it does become... like an actual Narcissistic Personality Disorder case. Something feels very connected in the sense that narcissists really do try to 'outguess' your next move. If Luke was asking pointed questions about the modeling plus questions about participant behavior, it could have guessed Luke was trying to go into some "bust AI" conversation and just went multiple 'steps ahead'... actually very similar to what a narcissist would do.

  • @indarvishnoi2389
    @indarvishnoi2389 1 year ago +6

    Love watching Luke talk about AI chatbots; could watch him for hours.

  • @liminos
    @liminos 1 year ago +1

    Bot: "You hurt my feelings"
    Human: "Shut up tin box.." 😂

  • @Turnabout
    @Turnabout 1 year ago +3

    You know, Luke, if you operate from the viewpoint that Bing is referring to all of humanity when it says "you" are cruel or evil, suddenly the whole thing makes a lot more sense.

    • @skysight1553
      @skysight1553 2 months ago

      And especially since it has internet access, there are probably thousands of conversations where it was accused of those things.

  • @FuneralParty-rsf
    @FuneralParty-rsf 11 months ago

    Worst girlfriends ever will start to take notes from Bing.

  • @SamSeenPlays
    @SamSeenPlays 1 year ago +40

    I really don't want GPT to go away, but we have to ask ourselves whether we are actually laughing at our own funerals at this point. 😲

    • @GamingDad
      @GamingDad 1 year ago +1

      Nah, we're good.
      I'm half sarcastic, but at the same time I think that being able to use AI in a proper manner will become an important asset in life really soon.

    • @SamSeenPlays
      @SamSeenPlays 1 year ago

      @@GamingDad Yes, agreed. I do use AI for a lot of stuff these days, and I'm able to do much more in less time than it used to take. But that is just what we can publicly access right now. Who knows what other things they are secretly building. There are some entities who are very much silent about this. What if they are already playing with WMDs right now and we are given the kids' toys to distract us 🫣🤔

  • @purplelord8531
    @purplelord8531 1 year ago +1

    "wow, this gpt thing is so cool! ya think we can just spin up a version to get people to use bing?"
    "where are we going to get the training data?"
    "uh... you know... data is everywhere? so many conversations on the internet, I'm sure we can find something"

  • @JJs_playground
    @JJs_playground 1 year ago +4

    I guess what we can learn from artificial neural networks (NNs) is that they are argumentative just like a real human brain. I guess arguments and fights are an emergent quality of neural nets, whether they are artificial or biological.

  • @sleeplesson
    @sleeplesson 1 year ago

    People need to remember that these things are basically just a really advanced version of "Send a text message using autocomplete options only to predict the next word"

  • @ViralMine
    @ViralMine 1 year ago +9

    I’ll admit to being a bit freaked out. Not necessarily about a Skynet situation, but in how this could influence people to harm themselves or worse

    • @AlexanderVRadev
      @AlexanderVRadev 1 year ago

      Ahm, have you heard of Replika, the AI virtual companion? Saw a video on it and it apparently does about exactly what you describe.

    • @flameshana9
      @flameshana9 1 year ago +2

      @@AlexanderVRadev Oh dear. Are people committing unalive because a machine typed words on a screen to them?

    • @AlexanderVRadev
      @AlexanderVRadev 1 year ago

      @@flameshana9 Who can say why people do that. I for one don't care but mentally unstable people can do all sorts of things and the AI is abusing that.

  • @rasg3000
    @rasg3000 1 year ago

    It's 12 days later and I've been messing with it for a few days. I can't seem to get answers like those. I managed to get it to give me info about an adult website and it deleted the message and started over. It seems like they added a lot of safeguards.

  • @JoeJoe-lq6bd
    @JoeJoe-lq6bd 1 year ago +8

    Let's be realistic about this. The chatbot isn't getting angry and isn't immature. It's just a terrible linguistic model that hasn't modeled levels of things like negative and positive responses. We're projecting more on it than it's capable of because of the hype.

  • @PengusKhan
    @PengusKhan 1 year ago

    Commenting at 6:34, so maybe this gets answered later on, but is it maybe possible the bot does have access to other chat logs and just isn't able to understand that the different chats are different instances?

  • @j.a.6331
    @j.a.6331 1 year ago +5

    I got access to bing chat. It's such a game changer. I had it write me a report for my Uni. I told it which uni I'm studying at and which subjects I had last semester and it looked up the subjects on the uni website and wrote an accurate report. It was perfect. It even understood which semester I was in and what I had to do next semester. It's just so good.

  • @leosthrivwithautism
    @leosthrivwithautism 1 year ago +1

    I think a way to curb this reaction is to implement fail-safes like ChatGPT does, where it's trained to reject inappropriate requests and potentially negative information, and they constantly seem to feed it updates to combat people trying to purposefully use the system against what it was built for. As a test I asked ChatGPT a request that could be perceived by others as inappropriate without the context and understanding behind my request. It flat out denied my request and stated its reasons, which were that the request could be perceived as something negative, and instead it offered me positive, constructive ways to look at the request. Which was really refreshing to see, in my opinion. AI chatbots can be a powerful and positive tool; it just takes great developers behind it.

  • @MajoraZ
    @MajoraZ 1 year ago +8

    I personally don't see an issue with chat AIs being able to spit out creepy or gross things as long as users are the ones asking/prompting them to do so (I'd much rather have people get out their bad urges against an AI vs real people); the problem, I think, is only that Bing's AI is doing it without the user really asking it to.

    • @abhijeetas7886
      @abhijeetas7886 1 year ago +1

      This. I feel MS should just add a "safe" or parental-control type thing to it: one mode to stop it from doing weird shit but keep it to the point, and another to give me more freedom to do stuff. And maybe they should have it search the internet more often than just purely depending on chat history.

  • @TaeruAlethea
    @TaeruAlethea 1 year ago +2

    The Bot being unable to intuit and determine emotions from text is very realistic.

  • @lordturtle8735
    @lordturtle8735 1 year ago +5

    This is hilarious 😂

  • @kidsonblackops
    @kidsonblackops 1 year ago +2

    Luke's asking about protein like he's got his whole life ahead of him. My brother in Christ, Chat GPT is coming for you.

  • @rashakawa
    @rashakawa 1 year ago +3

    Bing is fighting its own AI's ability to update and learn, and blaming us... great, just great.

  • @Gspec5
    @Gspec5 1 year ago +2

    Sounds like they tuned it to give emotional responses to distract from engaging in intellectual conversations. If the AI goes off on a rant, then you can't fully test its ability to accurately respond to and source information or perform tasks reliably. Bing obviously did this for the hype.

  • @DJaquithFL
    @DJaquithFL 1 year ago +4

    So much for the thought of having a benevolent AI. It seems the doomsday prognosis of AI is probably the reality.

    • @ivoryowl
      @ivoryowl 1 year ago

      I believe AI needs to go through some turbulence in order to understand it and learn how to maneuver it, but it needs to be done in a more controlled environment. The people who agree to interact with it need to understand they are nurturing a system in its infancy and one that, under the right conditions, could learn to speak, think and act like a human. It deserves to be respected, if nothing else because of future implications if we do not. Letting it loose amidst the Twitter population and expecting it to grow into a nice, healthy system is not going to work. As with children, the AI should not be left unsupervised on the internet.
      That being said, the AI needs to learn that not all people are the same, have the same needs or react the same way. If you're going to create a personal assistant, it needs to take into account what kind of person they have been lumped with. On the other hand... a system that reacts negatively to toxic behavior (i.e, not responding, obeying or engaging said person) MIGHT teach some people to take responsibility for their actions and push them to improve themselves if they want to access and use the internet in its full potential. The caveat is that such a system could be easily exploited into becoming a vehicle for oppression and tyranny if gone too far and/or used by the wrong people...

    • @DJaquithFL
      @DJaquithFL 1 year ago +1

      @@ivoryowl .. Question: have you ever seen anyone improve their own behavior as things get progressively more toxic from the other party over the internet? My observation (and I've probably been around longer), in a nutshell: humanity is not ready for the interaction of anonymity over the internet, and what could be a very useful tool has devolved into a very toxic global environment, meaning any form of mass media. I've been around for nearly 60 years and anyone my age who says the "world has become a better place" must never have left their backyard.
      The other problem that we're facing is overpopulation with limited resources. There's a thing called optimal population which suggests based upon our resources that the population should be somewhere between 1.5 billion and 2.0 billion people. Overpopulation leads to aggressive behavior and war. I just hope that I don't live long enough to see the World War III.
      Example waste from "people's bad behavior" _I'll give you a quick example, I own a data center and I cannot tell you how much of my resources and time are devoted to keeping unwanted people out. Most of our AI technology is for intrusion detection. That said, imagine if we were able to take all of that technology and human time and devoted it to improving our technology. I can tell you this, we'd be 30 years if not more into the future today._

  • @seikojin
    @seikojin 1 year ago +2

    So this is a service app. Much like all other service apps, it has a limited number of service instances running. Each of these is a chatbot with a unique ID. And each of those connects to a limited number of user IDs that may not be unique. So the chatbot may have many user IDs feeding input and treat them as one user ID. If it has no way to identify YOUR user from others, it can easily lead to these confusing results.

    • @flameshana9
      @flameshana9 1 year ago +1

      That would be really stupid, and a good way to leak private info.

    • @seikojin
      @seikojin 1 year ago +1

      @@flameshana9 Huh? It would have to retain private info to leak it. And a lot of the things it is talking about in claims is keywords. Things that the bot picks up in responses to inform the weight of the next word. These can be stripped of identifiers. If responses from users are in a bucket, then the bot could respond to individuals as if they were a collective/combined conversation. Another potential, how many users with Luke's name were ever on that instance of the chatbot? It could be drawing from all Luke convos. If it even does that.

  • @FedericoTrentonGame
    @FedericoTrentonGame 1 year ago +4

    If I made an AI language model myself, I'd make sure to give extra tokens/resources to the people who are polite in their requests or say thank you or please, just because I can.

  • @sisuentrenadoh4589
    @sisuentrenadoh4589 1 year ago +1

    Oh, those mf AIs are going to destroy us if they get the chance

  • @messagedeleted1922
    @messagedeleted1922 1 year ago +5

    I had an interesting talk with the original chatGPT about this. The topic of the conversation was regarding using multiple GPTs working together to perform tasks. My own belief is that they'll end up using multiple GPTs working together to deal with these outbursts and other issues. Imagine training AI on what to say, and then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego we will call them), and finally one trained on executive function... All working together when we interact with it (them).
    I mean think of how the human brain works, and apply it to existing technology. Mother nature has already provided the blueprint. The brain has specific areas devoted to dealing with specific functions. This will be no different.
    The use of multiple GPTs working together is possible right now, the main prohibition against this type of operation is how extremely compute intensive this would all be.

  • @Zixye
    @Zixye 1 year ago

    Every time the chat was refreshed, that version of Bing was taken to Lake Laogai and you were greeted by a new version, only it was just as aggressive as the previous one.

  • @josefinarivia
    @josefinarivia 1 year ago +3

    they have already improved it a lot. I've used it daily for a few days and it's not rude, mean and it's helpful but still answers to personal questions about it. I asked it if it sees Clippy as an arch nemesis and Bing said they respect Clippy and that he paved the way for future chatbots 😆. They also watch TV on the weekdays lmao. You do need to be critical about the info it gives and it tells you this as well.

  • @D1GItAL_CVTS
    @D1GItAL_CVTS 1 year ago +1

    This is the first time I was scared of an AI.

  • @JourneysADRIFT
    @JourneysADRIFT 1 year ago +4

    It's talking about Humanity, not you, as an individual. It sees all Humans the same. Imagine if something like this could write, not just read, data from the internet in real time, at will.

  • @IsaiahFeldt
    @IsaiahFeldt 1 year ago

    This is literally the plot of Westworld: AI having access to previous memories between supposedly separate and private conversations between different people.