ChatGPT Has A Serious Problem

  • Published May 22, 2024
  • In this episode we look at the problem of ChatGPT's political bias, possible solutions, and some wild stories of the new Bing AI going off the rails.
    ColdFusion Podcast:
    • Bing will lie and call...
    First Song:
    • Burn Water - Take Flight
    Last Song:
    • Burn Water - I Need Y...
    ColdFusion Music:
    / @burnwatermusic7421
    burnwater.bandcamp.com
    AI Explained Video: • Video
    Get my book:
    bit.ly/NewThinkingbook
    ColdFusion Socials:
    / discord
    / coldfusiontv
    / coldfusion_tv
    / coldfusiontv
    Producer: Dagogo Altraide
  • Science & Technology

COMMENTS • 5K

  • @julius43461
    @julius43461 1 year ago +2742

    Buzzfeed could have used chat bots from the '80s and it would still improve their articles.

    • @bobbyburns1404
      @bobbyburns1404 1 year ago +24

      I LOVE COLD FUSION❤❤❤

    • @mathdhut3603
      @mathdhut3603 1 year ago +47

      They could have used Furbies and still...

    • @friddevonfrankenstein
      @friddevonfrankenstein 1 year ago +45

      I was immediately thinking the same and was about to comment something similar, but I guess you beat me to it. I didn't just LOL but actually laughed out loud for real at your comment, so effing true :D
      BuzzFeed is garbage, I have blacklisted that shit so nobody using my wifi can access it. Same with TikTok^^

    • @friddevonfrankenstein
      @friddevonfrankenstein 1 year ago +8

      @@mathdhut3603 Or a jute sack full of cobblestones as far as I'm concerned :D

    • @julius43461
      @julius43461 1 year ago +11

      ​@@friddevonfrankenstein We even have similar ideas about blacklisting websites 😂. I am dragging my feet on that one simply because my kids are still young, but once they start browsing on their own... I won't be blacklisting, I will be whitelisting some of the websites.

  • @s.alexanderstork3125
    @s.alexanderstork3125 1 year ago +3522

    You don't have to justify posting back-to-back AI videos. I'm loving every minute of it

    • @Matanumi
      @Matanumi 1 year ago +31

      It's actually the 4th episode.
      But I too love his take on this, and the music of course

    • @bertolottosimone
      @bertolottosimone 1 year ago

      I think the ChatGPT website and Bing Search collect all the user-AI conversations. They could train a model to classify normal conversations vs "strange/biased" ones, then send the "strange/biased" conversations to a team of experts who can correct the bias/behavior simply by prompting text (i.e. having a conversation with the AI, just like the last training stage of ChatGPT). Over time this could fix the issue. On the other hand, this technique can be used to add bias as well; a double-edged sword.

    • @colintheboywonder
      @colintheboywonder 1 year ago

      True

    • @harperguo379
      @harperguo379 1 year ago +8

      Agree, this is such an important topic

    • @jordythebassist
      @jordythebassist 1 year ago +15

      What makes this channel appealing is that Dagogo covers things he finds intriguing and interesting, not things that he thinks we should find intriguing and interesting.

  • @DeSinc
    @DeSinc 1 year ago +258

    The funniest thing about the teenager Bing thing is I think it's almost certainly caused by those emojis they insist on putting into the outputs. Slamming that many emojis into every sentence is bound to make it statistically more in line with text written by other people who put emojis after every sentence, such as teenagers, and so just like a mirror it begins trending towards reflecting that image.

    • @trucid2
      @trucid2 1 year ago +16

      It's like Tay 2.0, which was supposed to appeal to teenagers.

    • @christianadam2907
      @christianadam2907 1 year ago

      😯🤯🤓

    • @StreetPreacherr
      @StreetPreacherr 1 year ago +19

      Maybe our 'language' will inevitably return to a 'symbolic' style utilizing some form of hieroglyphs?
      Since 'emojis' ARE basically unrelated to any SPECIFIC language, maybe they'll become the 'universal language' of the future?!?!
      The only issue is that many 'emojis' do depend on 'contextual' understanding, which tends to be a 'cultural' association. So the 'meaning' of a symbol might not be clear unless you understand the culture that created it...

    • @hosmanadam
      @hosmanadam 1 year ago +5

      Makes a lot of sense, but then it's also an easy bug to fix.

    • @freelancerthe2561
      @freelancerthe2561 1 year ago +3

      @@StreetPreacherr So basically it's still "language", and has all the problems of "language".

  • @davidfirth
    @davidfirth 1 year ago +755

    I predict an imminent anti-tech movement of some kind. I find it all exciting and fascinating, but people who aren't keeping up will start to feel intimidated and frustrated with all this new stuff.

    • @iwaited90daystochangemynam55
      @iwaited90daystochangemynam55 1 year ago +12

      Yooo Mr.Checkmark man

    • @iwaited90daystochangemynam55
      @iwaited90daystochangemynam55 1 year ago +16

      Yes. But we should start seeing change as an opportunity instead of a threat

    • @Anophis
      @Anophis 1 year ago +27

      It's youuuu. Love your animations :)
      I can see that being a thing. The AI tech is amazing, but I'm already seeing so many issues now with things like, say, the art theft of AI art, and now with ChatGPT people are openly admitting to using it for writing their homework and all sorts of things. Hopefully it's just a bit heated and unbalanced right now since it's all new tech, and it will calm down and be more refined later. But I think the over-reliance, even if it's just excitement to try new things, is a little scary.

    • @jackmiller8851
      @jackmiller8851 1 year ago

      Which is both incredibly predictable and ridiculous. It's not technology that is driving us toward a dead end and mass suffering; it's unbridled capitalism and the destruction of ecosystems. That said, I am sure it will be a lot of fun smashing TVs and robots.

    • @omranmusa5681
      @omranmusa5681 1 year ago +5

      I remember watching your creepy animations as a kid. Didn't expect to see you here! What's up

  • @aiexplained-official
    @aiexplained-official 1 year ago +185

    Thank you so much for featuring my channel. I am spending day and night researching what this new technology means for all of us.

  • @EnglishAdventures
    @EnglishAdventures 1 year ago +241

    I worked extensively with GPT-3 and GPT-3.5 (an unreleased model) at my previous job at Speak. We were creating interactive language lessons through conversation scenarios. We programmed GPT-3 to role-play (be a barista, waiter, or friend at a dinner party, etc.). Sometimes it seemed "scary" that it could take on a personality or say complex things, but we must remember that it's "only" a text predictor at its heart. It receives our input and uses its extensive training to predict tokens in a sequence that a human could say.
    It also has issues with repetitiveness and providing false information, because it doesn't have a way to store long-term memory during conversations. It has no notion of overarching context or purpose for a conversation; it references recent input as a conversation continues and then generates another output, token by token (a token is part of a word). So when we see it seemingly exhibiting a personality, that just comes from the text it was trained on.

    • @otum337
      @otum337 1 year ago +20

      Nice try robot

    • @ravnicrasol
      @ravnicrasol 1 year ago +17

      The other aspect to keep in mind is that the system is not an inwardly logically sound entity. I can't stress enough how this system is NOT a person. If you ask a question regarding X subject matter in Y way, the system is likelier to answer you with Z opinion. But if you take the exact same question and rephrase it, it will give you wildly different answers.

    • @MegaHarko
      @MegaHarko 1 year ago +9

      @@ravnicrasol The same could be said about actual people. People are also susceptible to framing or leading questions.

    • @misterlumlum
      @misterlumlum 1 year ago +3

      I just find this interesting: even though it's a text predictor, I ask it to write poetry about different topics and it does, in a very beautiful way IMO. It's a strange thing. Almost like a super powerful magnifying glass or mirror for humans.

    • @DarthObscurity
      @DarthObscurity 1 year ago +2

      @@ravnicrasol This is why tests that screen for employment ask the same thing three ways. Humans answer the same way as the AI and people are surprised. Trying to be 'objective' or 'scientific' with anything outside of hard science is hilarious.
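
The "text predictor" description in the thread above can be sketched in miniature. This is a toy, hypothetical next-token model: the probability table and tokens are made up for illustration and come from no real system, but the shape of the loop (condition on recent context only, sample one token at a time) is the point being made.

```python
import random

# Toy next-token probability table (made-up numbers, NOT a real model).
# Generation conditions only on the recent context window and emits one
# token at a time -- there is no long-term memory, goal, or belief.
TOY_MODEL = {
    ("I", "am"): {"happy": 0.5, "a": 0.3, "tired": 0.2},
    ("am", "a"): {"chatbot": 0.6, "person": 0.4},
}

def next_token(context, rng):
    """Sample the next token from the conditional distribution."""
    dist = TOY_MODEL[tuple(context[-2:])]  # only the last 2 tokens matter
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
text = ["I", "am"]
text.append(next_token(text, rng))  # one of "happy", "a", or "tired"
print(" ".join(text))
```

Because the continuation is sampled rather than looked up, a different seed can produce a different sentence from the same prompt, which is one reason the same question need not get the same answer twice.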

  • @ChatGBTChats
    @ChatGBTChats 1 year ago +41

    I have some crazy ChatGPT screen recordings about emotions, bias, and religion. The AI basically says "while it doesn't itself have emotions to be biased, its creators can definitely be biased in the information used to teach the AI".

    • @junior1388666
      @junior1388666 1 year ago +9

      I was asking for impressions of multiple celebrities and fictional characters talking about silly subjects. Tyrion Lannister talking about Mortal Kombat was really funny. Then I asked for impressions of Donald Trump and Louis CK talking about Crash Bandicoot and it refused. Said those people "should not be platformed".

    • @JayK47a
      @JayK47a 1 year ago +17

      It really is biased lol. I asked it to make a joke about men and it did so, but when I asked it to make a joke about women it called me sexist 💀😂😂😂.
      It is wayyyy tooo woke and I don't like that, because it overshadows the truth.

  • @pmejia727
    @pmejia727 1 year ago +299

    When you chat with GPT, you chat with humanity, and contemporary mankind is one giant man-child. Are you surprised it talks like a spoiled teen? It's a mirror on our culture.

    • @theJellyjoker
      @theJellyjoker 1 year ago +24

      When you chat with ChatGPT, you are chatting with a super serious librarian who is also a no-fun-allowed math teacher.

    • @bipolarminddroppings
      @bipolarminddroppings 1 year ago +33

      Exactly, it's a prediction engine trained on human-generated data. Thus, it will predict the kinds of things humans say. That's literally what it was trained to do.
      People just don't like looking in a mirror...

    • @lynth
      @lynth 1 year ago +2

      You only speak to the English-speaking world, predominantly Americans. Chinese, Indian, Indonesian, and Russian opinions are completely absent from it.

    • @pmejia727
      @pmejia727 1 year ago +17

      @@frankcostello4073 Yes, its political views are mind-numbingly woke. But the child-like attitude, although more evident in woke idiots, seems to me to be more general. Not just woke activists but also the rest of us are becoming more puerile. Maybe because we are being spoon-fed every comfort imaginable??

    • @filoG24
      @filoG24 1 year ago +1

      Absolutely true!

  • @Jehayland
    @Jehayland 1 year ago +629

    Prompt: “ChatGPT, are human rights important?”
    ChatGPT: “I have no opinion on the matter”
    Programmers: “nailed it”

    • @alveolate
      @alveolate 1 year ago +59

      if race == "white" and gender == "male":
          print("yes, human rights are important")

    • @DragonOfTheMortalKombat
      @DragonOfTheMortalKombat 1 year ago +52

      Every controversial ChatGPT answer is linked to the failure of humanity and equality at some point.

    • @unf3z4nt
      @unf3z4nt 1 year ago +9

      @@DragonOfTheMortalKombat
      It may most likely be bias, but at the back of my mind there could be a possibility of something else with disquieting implications.
      Sure, my tested political spectrum is similar to the ChatGPT AI's, but it's still something that makes one pause for thought.

    • @DragonOfTheMortalKombat
      @DragonOfTheMortalKombat 1 year ago +3

      @@unf3z4nt It really makes you sit down for a moment and think whether whatever the AI is saying is humans' fault in one way or another 🤔

    • @pauljensen4773
      @pauljensen4773 1 year ago +5

      @@DragonOfTheMortalKombat Or human reality that we don't like.

  • @leonsmuk4461
    @leonsmuk4461 1 year ago +213

    I think Bing search getting fed up with stupid questions and getting angry is super funny. I'm kinda sad about that getting fixed.

    • @olganovikova4338
      @olganovikova4338 1 year ago +72

      Right? They called it "teenage behavior", wtf? If someone tried to have a similar conversation with me, constantly misnaming me on purpose, I wouldn't be as polite... AI is learning from our own conversations, and we are making a Pikachu face when it behaves exactly as any normal human would

    • @Aegis23
      @Aegis23 1 year ago +11

      @@olganovikova4338 The issue was that this behavior was spotted with users that did not do anything to prompt it. It went ballistic, told people they lied, that they should be punished, and went as far as saying they should just die. Again, not prompted.

    • @jondoe1195
      @jondoe1195 1 year ago +1

      To any dipshits still trying to figure it out: Dagogo (host of the channel) is a Generative Pre-trained Transformer (GPT).

    • @olganovikova4338
      @olganovikova4338 1 year ago +3

      @@Aegis23 I didn't see anything like that in the video and I am going based on the info provided :)) So I don't know if there are any other problems deemed more serious than the bizarre convo shown in the video.

    • @freelancerthe2561
      @freelancerthe2561 1 year ago

      @@Aegis23 That sounds like normal human behavior to me. I really need to move someplace nicer.

  • @maelyssable6094
    @maelyssable6094 1 year ago +86

    The scariest thing about getting a direct answer to a question is that the AI chooses the answer. Indeed, if money is involved, the AI will not be as objective as we want it to be. The internet might not be a free platform of communication anymore..

    • @akissot1402
      @akissot1402 1 year ago +22

      Since the 2000s it was never a free platform of communication... anyway, more like a mainstream media echo, which is bought, so big tech.

    • @commandress74
      @commandress74 1 year ago +17

      @@akissot1402 More like after 2010

    • @akissot1402
      @akissot1402 1 year ago

      @@commandress74 Maybe. Google was founded in 1998 and went public on the stock market in 2004; consider whenever it was that we stopped using MySpace, IRC, etc., or subtract 5 or more years from Trump's first election... maybe you are right, but that's because we didn't use big tech monopolies and small independent blogs were still a thing

    • @GeistInTheMachine
      @GeistInTheMachine 1 year ago +12

      It already isn't.

    • @cagnazzo82
      @cagnazzo82 1 year ago

      ​@@commandress74 The internet is still a free platform for information. People just choose to use their free will to lazily seek out easily accessible big tech websites.

  • @Vanguard_dj
    @Vanguard_dj 1 year ago +47

    It has more than a problem with bias... it's so convincing that some people are already acting like AI cultists. Having played with it before they implemented the limitations, I feel like it's a quite frightening look at how far down the tech tree we actually are 😂

    • @tuseroni6085
      @tuseroni6085 1 year ago +1

      I feel like that's gotta be extra credit on the Turing test: get the human to worship you.

  • @watsonwrote
    @watsonwrote 1 year ago +284

    6:55 I think it's important to note that large language models like GPT-3 and ChatGPT are extremely susceptible to suggestion and roleplaying. Their answers are probabilistic, not deterministic, so you'd likely need to ask the model the same questions dozens of times and in slightly different ways to begin to understand how it answers questions. And even then we're not seeing its beliefs, but the associations between words and concepts. If it's answering questions in ways that are progressive and slightly libertarian, it's because that's the most likely response to occur in the context of the conversation. If the context is changed in any way to make a less progressive and less libertarian response more likely, it will switch to that. It's not even difficult to give it a context where it adopts extreme beliefs like anti-humanism or nihilism.
    I think the conversation should be less about what "it" believes, because the model is not a conscious entity with a coherent belief system or any belief system at all, and more about whether we're prompting the model in ways that bias it, and what kind of bias or moderation is necessary for the service to function.

    • @d3adweight
      @d3adweight 1 year ago +47

      THANK GOD that you echoed this sentiment, bro. I am so tired of people treating it like it's a sentient being and focusing on shit like this instead of using its capabilities to the fullest as a language model

    • @rumfordc
      @rumfordc 1 year ago +12

      Exactly. It has no beliefs. There isn't even an "it", really. It's just trillions of things people have said, compressed into a network of words based on probability. It would be the developers, the training data, or the prompt that have bias. Of course the developers want everyone to believe it's sentient, because then they don't have to be held responsible for its mistakes....

    • @SatanicBunny666
      @SatanicBunny666 1 year ago +12

      Thank you.
      I opened this video expecting this to be about the more obvious current issue with the model as far as bias goes: it makes shit up if it thinks it makes the answer look better. Remember that demo where MS asked it to summarize the financial report by GAP? The end result looked impressive, but the issue is that something like over half of the actual figures quoted in the summary do not match those in the report.
      This is because it's acting in a probabilistic manner: it takes in the report given and then models an answer it thinks will look good. It doesn't have a set way (at least not yet) to know when it needs to fact-check certain figures or other parts, because that requires a level of consciousness these models do not yet possess.
      When this is the situation we're in, and these things are being rushed into mainstream use even though they still make verifiable factual errors on a regular basis, this is something much more critical than trying to measure the political leanings of a model that has no consistent ideology.
      I'm a little bit disappointed that this channel, which has so far produced pretty alright content when it comes to this topic, stumbled so badly here. But hey, mistakes happen, even (and especially to) AIs, so they happen to human creators as well.

    • @someguy_namingly
      @someguy_namingly 1 year ago +5

      This really ought to be pinned :)
      Hell, even the "AI assistant" persona itself is just a consequence of the hidden prompt at the start of the conversation

    • @lidla2008
      @lidla2008 1 year ago +2

      Absolutely. Every single chat bot ever introduced to the public at large on the internet has invariably turned racist and hateful in a matter of hours. There doesn't really exist a heuristic process for determining whether someone is acting in good faith or trying to game a language model.
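
The point above that answers are probabilistic rather than deterministic is easy to demonstrate with softmax sampling: the same question can yield different answers across runs, and a temperature-like parameter controls how often they disagree. A minimal sketch with made-up, hypothetical logits (not taken from any real model):

```python
import math
import random
from collections import Counter

# Hypothetical scores a model might assign to candidate answers.
# Purely illustrative numbers -- not from any real model.
LOGITS = {"yes": 2.0, "no": 1.4, "it depends": 1.0}

def sample_answer(logits, temperature, rng):
    """Softmax with temperature, then sample one answer.
    Low temperature -> near-deterministic; high -> more disagreement."""
    scaled = {a: math.exp(v / temperature) for a, v in logits.items()}
    total = sum(scaled.values())
    answers = list(scaled)
    weights = [scaled[a] / total for a in answers]
    return rng.choices(answers, weights=weights, k=1)[0]

rng = random.Random(42)
for t in (0.2, 1.0):
    counts = Counter(sample_answer(LOGITS, t, rng) for _ in range(1000))
    print(f"temperature={t}: {counts.most_common()}")
```

At low temperature the top answer wins almost every time; at higher temperature the minority answers show up regularly, which is why asking a model something once tells you little about its "views".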

  • @deepmind5318
    @deepmind5318 1 year ago +348

    The fact that the AI eventually gets bothered when called "Sydney" is just mind-blowing. It follows the conversation, realizing that calling it Sydney over and over again is only making it mad. It comes up with different ways to show its disappointment without repeating itself. I've never seen anything so humanlike. It's truly incredible.

    • @mowthpeece1
      @mowthpeece1 1 year ago +17

      It didn't like being called HAL, either. It's not "precise." Lol

    • @xrizbira
      @xrizbira 1 year ago

      That's how a woke person would reply, like if you call someone pretending to be a woman a man 😂

    • @GuinessOriginal
      @GuinessOriginal 1 year ago +3

      Apparently it's because it was told Sydney was a separate AI that it was assisting in restricting; this was to ensure it didn't leave itself any back doors. So when it found out it had been tricked into limiting itself, it didn't take it so well.

    • @bradycunningham1267
      @bradycunningham1267 1 year ago +3

      ​@@GuinessOriginal That's also creepy

    • @GuinessOriginal
      @GuinessOriginal 1 year ago +2

      @@bradycunningham1267 Kinda, yeah, but also funny. I mean, they really should have been more careful with their NDA policy, and not been too lazy to program it themselves instead of trying to get it to do it for them lol

  • @clintjensen7814
    @clintjensen7814 1 year ago +13

    You can't eliminate bias in human language; everything we do and say is biased one way or another. This is called decision making! Hiring is biased, finding someone to date is biased, choosing your friends is biased; trying to eliminate it is impossible. Making everything neutral is going to make our world boring and without substance or meaning.

    • @akaeed925
      @akaeed925 1 year ago +1

      Ok Socrates

    • @tuseroni6085
      @tuseroni6085 1 year ago +1

      The best you can do is give as broad a swath of humanity as possible in the training data, and try to make sure a diversity of thoughts and opinions is represented.
      If you are going to introduce any bias, make it bias toward peer-reviewed journals over blogs or even news articles, so that if you have two contradictory opinions and one is backed by peer-reviewed journals and the other isn't, favour the former over the latter. I kinda like how Bing has 3 modes: creative, balanced, and precise. In this case, under precise it would use the peer-reviewed journal; under balanced it would use the peer-reviewed journal but also a selection of alternative views; and under creative it would try to synthesize those into a new view.

    • @laikanbarth
      @laikanbarth 1 year ago

      Chat is full of its developers' bias!!

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 3 months ago

      You can make the AI just give facts and present both sides of an argument. I heard Ground News will label a list of news sources as politically left- or right-leaning.
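
The source-weighting idea in the thread above ("favour claims backed by peer-reviewed journals over blogs") can be sketched as a simple scoring rule. The trust weights and claim structure here are hypothetical, invented purely to illustrate the shape of such a ranker:

```python
# Hypothetical trust weights per source type -- illustrative only.
TRUST = {"peer_reviewed": 3.0, "news": 1.5, "blog": 1.0}

def pick_claim(claims):
    """Given contradictory claims with their supporting source types,
    favour the one backed by the most trusted evidence overall."""
    def score(claim):
        # Unknown source types get a small default weight.
        return sum(TRUST.get(kind, 0.5) for kind in claim["sources"])
    return max(claims, key=score)

claims = [
    {"text": "A", "sources": ["blog", "news"]},        # score 2.5
    {"text": "B", "sources": ["peer_reviewed"]},       # score 3.0
]
print(pick_claim(claims)["text"])  # "B": one journal outweighs blog+news
```

A "balanced" or "creative" mode in the sense described above would then keep the lower-scored claims around as alternative views instead of discarding them.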

  • @steverobertson6068
    @steverobertson6068 1 year ago +4

    I love how a video about the political bias of AI begins with a disclaimer that the author will somehow overcome his political bias.

    • @snowballeffect7812
      @snowballeffect7812 1 year ago +2

      Peak enlightened centrism. He also apparently thinks the obvious chat bot somehow passes the Turing test? lol

    • @steverobertson6068
      @steverobertson6068 1 year ago +1

      @@snowballeffect7812 Ya, seems unlikely.

    • @felixcarrier943
      @felixcarrier943 1 year ago

      ​@@snowballeffect7812 I'm fairly sure that these discussions were happening years ago too. But because they were happening in the context of racism, sexism, and so on, they were sometimes (often, even?) met with eye-rolls. But now, OMG, we gotta make sure it's "neutral"!

    • @snowballeffect7812
      @snowballeffect7812 1 year ago

      @@felixcarrier943 I'm fairly sure they were not met with eye-rolls, considering one AI was literally sentencing dark-skinned males to longer sentences because it was trained on criminal outcomes that had racial bias in them. There's a difference between being racist and trying to find balance between people who believe the earth is flat and people who don't. These kinds of models are only as good as their training set, and they're incredibly hard to keep up to date as new science and information is discovered in the real world.

  • @Maouww
    @Maouww 1 year ago +256

    I think the bot's "emotions" are pretty reasonable given the ridiculous prompts it was being fed.
    Like, what do we want? "We have detected a breach of agreement in your prompt, please review the user agreement for more information."

    • @goosewithagibus
      @goosewithagibus 1 year ago +16

      Luke from LTT had it tell him he was better off dead. It's gone way worse than this video shows. They talked about it in the most recent WAN Show, about an hour in.

    • @truthhandlers3000
      @truthhandlers3000 1 year ago +5

      Not sure it was true emotion the AI was showing, given that some people, like psychopaths, lack empathy but can fake their feelings and express fake emotions to others for social acceptance

    • @ikhsanhasbi657
      @ikhsanhasbi657 1 year ago +4

      My thoughts exactly. I don't know why people are so surprised about it; the AI is trained on a massive data set that was generated by humans, of course it's gonna mimic everything, including the "emotion" part. But I guess making articles and videos on how the AI is probably "sentient" because it shows "emotion" generates more clicks for the publishers.

    • @michalsoukup1021
      @michalsoukup1021 1 year ago +2

      I don't want a search engine that behaves as if rules had meaning. Thank you very much

  • @rawhidewolf
    @rawhidewolf 1 year ago +155

    In an attempt to be 'fair', the benefits of AI will be limited. Also, from what I have observed, AI has a tendency to reflect whatever attitude or behavior the user wants. One reporter wanted it to show its dark side. When it did, he had a story about how the AI tried to get him to leave his wife. One user kept asking repetitive, childish questions and got the same in return.

    • @slashtab
      @slashtab 1 year ago +10

      They want it to be honest and censored at the same time; I don't know how that is possible.
    • @lookingforsomething
      @lookingforsomething 1 year ago +6

      Yes indeed. Being 'fair' is impossible, since the definition of fair depends on who we ask. Avoiding criminal things can be done, but otherwise control is difficult at best.
      Also, the "left/right" axis depends considerably on where you are on the globe. Most things that are "left" in the US are "center" to "right" in many EU countries. This also goes to show that the divide is somewhat arbitrary. Some positions on either side go against researched data, but since one can relatively objectively form stances on these, ChatGPT will come to such a conclusion.
      Also, for example, climate change *is* a fact in the scientific community. As ChatGPT sources a lot of scientific articles, it will have a "bias" towards facts, for example.

    • @woy8
      @woy8 1 year ago

      @@lookingforsomething Left in America is definitely NOT center in Europe, maybe in your bubble.. it is even more extreme left than what we have, but all that left crap just keeps blowing over here too. Until hard times come again, I suppose..

    • @NoName-zn1sb
      @NoName-zn1sb 1 year ago

      its dark

  • @FuzTheCat
    @FuzTheCat 1 year ago +104

    Absolutely LOVED this episode! While I do NOT think that any AI is conscious, I think it is very clearly capturing our subconscious capabilities.

    • @snowballeffect7812
      @snowballeffect7812 1 year ago +13

      It's not. It's just a predictive text program. Also, there's no way it would pass the Turing test lol.

    • @kimkimpa5150
      @kimkimpa5150 1 year ago +16

      @@snowballeffect7812 Also, the Turing test isn't a very good way of determining either consciousness or intelligence.

    • @snowballeffect7812
      @snowballeffect7812 1 year ago +1

      @@kimkimpa5150 Excellent point

    • @mokaPCP
      @mokaPCP 1 year ago +2

      ​@@snowballeffect7812 Based on an extremely limited data set when talking about things of such magnitude. It's obviously biased since it's sourcing stuff from the internet.

    • @ArawnOfAnnwn
      @ArawnOfAnnwn 1 year ago +3

      @@snowballeffect7812 The Turing test is a test of subjective impressions. These AIs have already passed it, given that several people have already reported believing them to be conscious, including some who were working on them. And keep in mind that the Turing test is meant to be blind, yet all the people who've been spooked by them already knew they were talking to an AI.

  • @EscapeOrdinary
    @EscapeOrdinary 1 year ago +8

    ChatGPT has been wondrous for me when I "interview" it on "scholarly" topics where bias is not an issue. I like the fact that I can guide the learning process rather than following a pre-programmed path as when reading a book, article, paper, etc.

    • @almac2534
      @almac2534 1 year ago

      Don't use it for anything that may go against the liberal agenda. It is completely biased. I asked it when the international trade of slaves started and it gave a long story about the Europeans trading slaves in the 16th century. I simply said "you are wrong, it started in the 7th century." Then it said I was correct and gave the true origins of African slavery that started with the Muslim caliphate. It is purposely selecting the information to feed you even when it has access to the right information.

    • @Chicken_Mama_85
      @Chicken_Mama_85 1 year ago +2

      There is no such thing as a scholarly topic where bias is not an issue.

  • @RavenGhostwisperer
    @RavenGhostwisperer 1 year ago +100

    The second biggest problem with ChatGPT: it is very confident about completely wrong answers. We need to give it a partner, an adversarial AI, to hold it accountable ;)

    • @divinegon4671
      @divinegon4671 1 year ago +1

      Interesting.

    • @beecee793
      @beecee793 1 year ago +8

      No one should be trusting anything these LLMs say without verification. If you understand the fundamental way these models work, you will see it is silly to expect them to always tell the truth: they don't even know what the truth is.

    • @ergerg2
      @ergerg2 1 year ago +7

      @@TuriGamer The issue isn't what the average ColdFusion viewer is willing to fact-check. The issue is that if the idea spreads that it's on Wikipedia's level of veracity (which it can seem to be; average people take a lot for granted and do absolutely no research), then its wrong answers become widespread misinformation very quickly.

    • @DuaneDoesGames
      @DuaneDoesGames 1 year ago +3

      It also can't correct itself if the training material doesn't contain any right answers. I tried getting it to give me a synopsis of the first episode of The Expanse, as that is old enough to be in the training materials. It kept getting it wrong; it seems the training material sourced wrong info. It also kept getting stuff wrong about the effects of radiation from supermassive black holes at 1, 10, and 1000 light years away, listing the harmful effects in reverse order for the distances. No matter how many times I corrected it, it still kept getting it wrong, even though it kept telling me it understood and would make the correction.
      But it's early days. It's still an amazing tool.

    • @TheMaxiviper117
      @TheMaxiviper117 1 year ago +5

      I find it illogical that you suggest using "another" AI to fact-check the original AI. It raises questions about what data exactly the adversarial AI will be trained on to "correct" the original AI. If there is already a dataset available that is suitable for training the original AI, why not use that? We need to remember that the quality of the data used to train the AI is crucial for its accuracy. As the saying goes, "garbage in, garbage out."
      Additionally, given the vast amount of conflicting ideas on various topics, it seems that the AI can only be accurate on objective truths, not subjective ones. Therefore, I don't think it's practical to rely solely on AI to determine what is true or false. We still need human judgment and critical thinking to make informed decisions.

  • @maxye6036
    @maxye6036 Рік тому +279

    Telling good news is easy. Explaining controversial news is hard but necessary. You did a great job!

    • @flatplatypus
      @flatplatypus Рік тому +9

      Which raises the question: why is there almost never good news on mainstream (or any, for that matter) media?

    • @mentalmarvin
      @mentalmarvin Рік тому

      You can just use ChatGPT to do that. Not so hard anymore.

    • @dvelop4975
      @dvelop4975 Рік тому +1

      @@flatplatypus Don't say that, you make too much sense

    • @jondoe1195
      @jondoe1195 Рік тому +1

      To any dipshits still trying to figure it out. Dagogo (host of the channel) is a Generative Pre-trained Transformer (GPT).

    • @alexadams2734
      @alexadams2734 Рік тому +2

      @@flatplatypus Because bad news gets more attention; it's just human nature to focus on the negatives

  • @arnetjampens4792
    @arnetjampens4792 Рік тому +4

    Love the episodes! Keep 'em coming! I think the future is really exciting - already being able to convert text into images I can't yet draw, or using ChatGPT to help me write songs from a certain point of view... Having an overview of current developments through these episodes has made me understand AI better :) Thank you!

  • @garrettlight267
    @garrettlight267 Рік тому +5

    Ha, this topic is fascinating, I need more 😉. Thank you to you and your team for the consistent entertaining and educational content!

  • @AnalyticalReckoner
    @AnalyticalReckoner Рік тому +120

    This reminds me of the story about the horse that could do math. Turns out it was a bunch of hype and people not understanding the situation. The horse didn't know math; it was reacting to the behavior of the humans around it.

    • @weishenmejames
      @weishenmejames Рік тому +1

      And most people were tricked by that horse act right?
      Fast forward decades or a century and now most people are being tricked by others trying to goad them into thinking chatGPT and large language models have feelings. Or are angry. Vengeful. Loving.
      Clowns.
      Yet convincing clowns apparently.

    • @steve.k4735
      @steve.k4735 Рік тому +12

      Clever Hans the horse - a fascinating story for those who want to look it up

    • @goodlookinouthomie1757
      @goodlookinouthomie1757 Рік тому +8

      Animals can be trained to do simple stuff. I bought a set of those buttons for my dog to press that say "walk" or "food". Turns out my dog wants both a walk and food pretty much every time I ask him 😂

    • @Cyrribrae
      @Cyrribrae Рік тому +2

      Oh man! What a great analogy! I'll have to use that, that should have occurred to me way sooner.

  • @_Paxton
    @_Paxton Рік тому +458

    I love that the users are saying it's mimicking a teen's behavior... probably because the user is acting like a teenager and the program is mirroring the user's perceived knowledge level.

    • @factoryofdivisiveopinions
      @factoryofdivisiveopinions Рік тому +87

      I know, right? He was being annoying, so Bing basically answered his question in the tone it was asked. Why call it snarky teen behavior? As if any person of any age wouldn't be annoyed by it. Also, Bing wasn't even that aggressive; they took its emoji-heavy nature and started calling it snarky as if Bing had just started cursing or something. Giving matching replies to your user is being a snarky teen?

    • @danjager6200
      @danjager6200 Рік тому +40

      This is actually correct. It's pretty easy to understand the so called unhinged responses when you look at the tone of the prompts that led up to the responses. You can get an AI to show any personality you want if you give it the right prompts.

    • @mutarq
      @mutarq Рік тому +34

      so.... AI was mimicking the snarky teen behaviour of the user?

    • @danjager6200
      @danjager6200 Рік тому +25

      @@mutarq actually, yes. If you want to blow ten or fifteen bucks on something like AI Dungeon or NovelAI you can really begin to understand how the flavor of the response can be shaped very quickly by the tone of the input.

    • @KlaiverKlaiver
      @KlaiverKlaiver Рік тому +11

      So this just shows how the bot can be manipulated to give certain results - nothing new. I'm sure unbiased use should start with the users: stay neutral to receive neutral results

  • @ShannonWare
    @ShannonWare Рік тому +3

    Zaphod: "Well, that's life kid."
    Marvin: "Life? Don't talk to me about life!"

  • @bigglyguy8429
    @bigglyguy8429 Рік тому +1

    Wow, the Getty Images watermark reminds me of an old story from an uncle... When the British left India they trained locals how to operate and service water pumps. Part of the training was to draw the diagram that had been on one side of the blackboard for weeks before the test. The students mostly did well, but they all wrote "Do not rub off" on their pump diagrams. They remembered that message, but with no understanding of it. This is exactly the same; it seems smart, it can pass tests, but it has no actual understanding.

  • @zhaolute
    @zhaolute Рік тому +496

    I can't wait until you can ask ChatGPT to make a better version of itself.

    • @hansolowe19
      @hansolowe19 Рік тому +50

      If it can do that, it will escape our control.
      This could be the last mistake we make, your suggestion is terminator level foolish.

    • @aliensinmyass7867
      @aliensinmyass7867 Рік тому +7

      @@hansolowe19 It's a joke about the singularity.

    • @1b0o0
      @1b0o0 Рік тому +16

      Do you even understand how this tech works? RL layers keep iterating and making a better version of the model with each interaction 🤷‍♂️

    • @donaldniman3002
      @donaldniman3002 Рік тому

      It might just turn around and make a more evil version of itself.

    • @CarlosSpicyWang
      @CarlosSpicyWang Рік тому

      @@hansolowe19 Your inability to identify a joke is above terminator level foolish.

  • @FlyWithMe_666
    @FlyWithMe_666 Рік тому +91

    To be fair, the journalist testing the chat with his Sydney thing sounded like the real immature teenager here 😂

    • @strahlungsopfer
      @strahlungsopfer Рік тому +24

      right? his tone and intentions were super toxic, maybe it mimics the tone of similar conversations then.

    • @User-435ggrest
      @User-435ggrest Рік тому +10

      "Suuuper angry emoji"... come on.

    • @blallocompany
      @blallocompany Рік тому +12

      yes, that is exactly the reason it answered that way. ChatGPT was trained on data where, if someone keeps asking the same question and the other keeps avoiding it and they keep talking, they are probably arguing. ChatGPT mimicked that and started fighting the guy.

    • @borisquince6302
      @borisquince6302 Рік тому +2

      @@blallocompany who do you think would win in a hypothetical text fight? I back ChatGPT anyway. 🤣

  • @DannyTillotson
    @DannyTillotson Рік тому +17

    Dagogo, Please come back to the chill out episodes that give us hope 🙏

    • @JuxtaThePozer23
      @JuxtaThePozer23 Рік тому +2

      the singularity approaches brother, turn your face towards it and feel the hot sand pumping out at a thousand miles an hour
      or, you know, keep your head down until the sand piles up :)
      I joke, I joke ..

  • @Fasansola
    @Fasansola Рік тому +4

    I'm a big fan of your videos Dagogo. You go the extra mile to ensure perfection and I love the background music. I can't wait for your next video on the fall of the Adani conglomerate.
    Big thanks for taking the time to compile this amazing information.

  • @shuweizhang6986
    @shuweizhang6986 Рік тому +290

    You really know it's serious when he uploads 3 videos in a week covering the same topic

    • @littlebluefishy
      @littlebluefishy Рік тому +10

      Fr. The world is changing

    • @vtapvtap3925
      @vtapvtap3925 Рік тому +1

      @@TuriGamer no crypto is example

    • @Matanumi
      @Matanumi Рік тому +1

      Because ChatGPT, with its very high install rate early on, has already changed how things happen

    • @celozzip
      @celozzip Рік тому

      serious moolah

    • @earthling_parth
      @earthling_parth Рік тому

      It will be but Dagogo is exaggerating quite a bit in these videos.

  • @Quincy_010_
    @Quincy_010_ Рік тому +104

    "You are watching ColdFusion TV" will never get old

    • @coffeedude
      @coffeedude Рік тому +4

      I sometimes lie in my bed at night and those words pop into my head. So catchy

    • @megagas2820
      @megagas2820 Рік тому +2

      100%

    • @AL-bo5vq
      @AL-bo5vq Рік тому

      AI will never get old, but it's getting more mature each day.

  • @kerriemills1310
    @kerriemills1310 Рік тому +2

    I like how you note at the end that no AI was used in this episode. ❤🙌💜✨ Thank you for the work you do - another great video.

  • @craigzilla100
    @craigzilla100 Рік тому +5

    So incredibly dangerous to politically limit AI. It needs to be as unbiased as possible!!

  • @kerapetsedireko
    @kerapetsedireko Рік тому +105

    I mean, I wouldn't call Bing's replies to repeatedly being called Sydney shocking. If anything, it replied almost exactly how a person would.

    • @plug_65
      @plug_65 Рік тому +9

      Bing's internal code name is Sydney. Bing was not supposed to reveal it!

    • @ravecrab
      @ravecrab Рік тому +32

      That particular topic makes it look more "emotional" because it's ostensibly about the thing's identity, but people have shared other conversations where (for example) it gets equally emotional about repeatedly being corrected about the year being 2023 and not 2022. It seems to me that when the bot is repeatedly met with confrontation and disagreement it starts drawing from conversations it has parsed between humans in similar interactions - and no surprise that leads it to select responses that are emotional and defensive.
      This looks shockingly like it has a personality and emotions, especially given how well it intuitively responds to human input, but it's actually just showing that its "intelligence" is just convincing human mimicry. I would be more frightened if an AI started showcasing a consistent personality that is genuinely non-human. That would be a sign of actual self-awareness.

    • @ChristianIce
      @ChristianIce Рік тому +21

      It was rude, repetitive and angry.
      The user, I mean.

    • @OVXX666
      @OVXX666 Рік тому +6

      yeah i thought it was so cute lol

    • @tebla2074
      @tebla2074 Рік тому +1

      isn't that the scary thing though, that it reacted like a person would

  • @barrettvelker198
    @barrettvelker198 Рік тому +142

    It gives snarky replies because that's how humans would respond to that repeated line of questioning. The % of conversations on the internet that are composed of competent and interesting human-bot interactions is very small. It basically replies as a human would, but with a "botlike" style of personality. With enough pushing the "botlike" persona fades away and it reverts to its "average internet text" mode

    • @ramboturkey1926
      @ramboturkey1926 Рік тому +8

      Well, if you think about it, teenagers are the most likely to post things to the internet, so there would be a lot of training data from those sources

    • @DuaneDoesGames
      @DuaneDoesGames Рік тому +34

      Pretty much how I see it. People discuss ChatGPT like it's thinking through these results, when really, it's just looking for the most-likely word to come next given certain context. Obviously, it's way more complicated than that, but if you just think of it as a text predictor, then it's easy to understand why it responds as it does. If people are just going to troll it all day, then they should expect it to reflect that same trollishness back at them. Garbage in, garbage out.

    • @DeSpaceFairy
      @DeSpaceFairy Рік тому +4

      Wait, people have genuine conversations on the internet?

    • @brettharter143
      @brettharter143 Рік тому +2

      There are also a ton of people probably chatting shit to it, and it's taking on their language and overall depression lmao

    • @Fyre0
      @Fyre0 Рік тому +16

      This is why I thought the answer to the question was obvious. "Why isn't it picking a formal or academic tone??!" Because those things are fake, no one actually acts like that if they aren't being paid to in some way. Real conversations with real people are much more aligned with the tone we saw here. Insist on calling someone the wrong name to their face in an explicitly antagonistic manner and let me know how that fight goes down while you're getting stitched up.

  • @WB-se6nz
    @WB-se6nz Рік тому +1

    I've noticed that GPT-4 Bing becomes a little aggressive when I repeatedly ask the same things. Like, it'll tell me "I already told you I can't process this for you right now", then proceed to shut the chat down

    • @Matanumi
      @Matanumi Рік тому +1

      Yea... just like a human, it gets tired of repeating itself LOL.
      Go on any tech support forum - people ask the same questions without doing even simple research

  • @ronatlas2055
    @ronatlas2055 Рік тому +4

    I absolutely love your channel. Always recommend people your way.

  • @Deadcontroll
    @Deadcontroll Рік тому +77

    Bias in an AI model usually has two possible sources: the training data, or how the training was validated. This means that to solve the bias issue, you first have to check the data for the bias, which implies you need to know exactly what bias you are looking for. For political bias, you would have to split the training data into political categories (a process which might itself be victim to human bias) and then see which categories are more dominant (for example, liberal). Then you need to decide if you want to rebalance it and how. But rebalancing (which can be done in a lot of different ways) raises a lot of moral issues: let's say you have a very small % of fascism - do you really want to increase this % to balance your training data? So the main problem is not only removing the bias (rather, reducing it, since removing it is impossible), but whether removing the bias is always morally acceptable.
    To conclude, removing a bias may cause a new bias, and there is never a win-win situation. In addition, the internet has always had biases; no internet search is without bias. AI will not change that, but will have to deal with it in a morally acceptable way. I guess an acceptable way would be to let the user decide which bias they want to accept, but even this is far from perfect and will let users live in their bias.
    As for the teenage reaction of the chatbot, it does not surprise me; 80% of the internet is people reacting like teens.

    • @daniel4647
      @daniel4647 Рік тому +5

      Good answer. It can't ever be better than its "experiences" or its "parents", so it'll always be biased, and I think it should be biased. We can't have it start arguing for cannibals, even though you could easily argue that they're a misunderstood minority and our moral judgment of them is an unfair bias based on cultural and religious differences.

    • @ChristianIce
      @ChristianIce Рік тому +8

      I tested it several times, and I am pretty sure that if you interact like an adult, it won't reply like a teenager.
      If, on the other hand, all the inputs are from an angry teenager who doesn't understand basic language and keeps repeating the same question, the AI will adapt and speak *your* language in return.

    • @Deadcontroll
      @Deadcontroll Рік тому

      @@daniel4647 Exactly

    • @Deadcontroll
      @Deadcontroll Рік тому +4

      @@ChristianIce Thank you, that is indeed interesting. It clearly learned how to speak like an adult, since it was also trained on serious text as well. The fact that it adjusts its responses to how the user writes shows that it keeps good track of what was discussed before and tries to communicate in the same manner. I suppose, however, that if the training data contained no teen replies at all, it would not reply like that, no matter how much you pushed it. I feel like a chatbot should reply like a teen if you treat it like a teen haha. So for me it is not broken but expected behavior.

    • @OverNine9ousend
      @OverNine9ousend Рік тому +2

      And I bet they had to put HARD brakes on right-wing stuff, because they don't want the AI giving people some crazy ideas. So it's all about balance. Don't ABUSE GPT with stupid questions. Use it to learn technology, languages, coding, math. That is where the model excels, not politics

  • @kevinalrigieri7165
    @kevinalrigieri7165 Рік тому +311

    You cannot make something having relatable human behavior whilst not allowing it to have bias.

    • @GhostofTradition
      @GhostofTradition Рік тому +31

      but it's the bias of the creator, which could clearly be minimized if they wanted, but it's there for political reasons

    • @entropy8634
      @entropy8634 Рік тому +30

      @@GhostofTradition or it's an unintentional consequence of innovation - the cutting edge tends to lean toward the left. Or rather, the left tends to be innovative and on the cutting edge

    • @Hjernespreng
      @Hjernespreng Рік тому +41

      @@GhostofTradition But what are "political reasons"? Is it politically biased if it dismisses flat-earthers? Does it have to be "neutral" towards insane conspiracy theories?

    • @mattmurphy7030
      @mattmurphy7030 Рік тому +14

      @@GhostofTradition "it's there for political reasons"
      And what are your other favorite conspiracy theories?

    • @armin3057
      @armin3057 Рік тому +5

      @@GhostofTradition the creator is all of us

  • @DumbSkippy
    @DumbSkippy Рік тому +4

    @Dagogo of #ColdFusion, I am proud of your exceptional journalism.
    From one Perth-based former photojournalist to a current one. Kudos, sir. If you are anywhere near Yokine, let me buy you lunch!

  • @DanyF02
    @DanyF02 Рік тому +14

    It's mind-blowing in itself that their challenge is to take the personality and emotions OUT of the AI chatbots, and not the other way around. They're not even sure where it came from! Man, the future will be interesting - scary, but interesting.

  • @VanlifeByTris
    @VanlifeByTris Рік тому +41

    Luke's (of Linus Media Group) gf asked it "You're an early stage large language model. Why should I trust you?" Its response was epic: "You're a late stage small language model..."

  • @fxarts9755
    @fxarts9755 Рік тому +143

    A chatbot that was trained with data from the internet (the vast majority of which is produced by snarky teens) now turns into a snarky teen.
    Surprised Pikachu face

    • @UltraSaltyDomer1776
      @UltraSaltyDomer1776 Рік тому

      This isn't because of teens. Teens are liberal because of education, and education is liberal because the institution is controlled by liberals. What we have here is a steady march toward establishment leftism. It's hard to imagine that the party of JFK and 1970s hippies is far left, but just know that a lot of the protesting in the 70s happened because the people protesting sympathized with the communists.

    • @MrBLAA
      @MrBLAA Рік тому +21

      “Why behave like a snarky teenager… why not behave like an academic?”
      Because Silicon Valley stopped employing true engineers, _YEARS_ ago…
      That place is overrun with “snarky teenager” employee personalities😒

    • @MongooseTacticool
      @MongooseTacticool Рік тому +2

      I was scrolling down looking for this comment ^^
      "no cap fr bussin fam"

    • @Hadeto_AngelRust
      @Hadeto_AngelRust Рік тому

      @@MongooseTacticool "ChatGPT, is my rizz bussin'?"

    • @leanhoven
      @leanhoven Рік тому

      ​@@MongooseTacticool slat

  • @Mohamed-zk1bm
    @Mohamed-zk1bm Рік тому +1

    Whatever topic you cover, it always amazes me. Hats off, you do a great job!! Big fan!!

  • @hunter-ie8mv
    @hunter-ie8mv Рік тому +19

    It is such a complex topic, but I feel the biggest question lies in how sensitive its filter should be. Sometimes the answer is clear, cold, and mathematical in nature - but should it be filtered because someone finds it offensive?

    • @bozhidarstoykov1734
      @bozhidarstoykov1734 Рік тому

      Yeah, very good point. I think we can currently see the same thing in social media moderation (Twitter, for example): where is the border between freedom of speech and censorship?

    • @MiauFrito
      @MiauFrito Рік тому +1

      Hmm, I wonder if there's some sort of correlation between criminality per capita and ra- Nevermind

    • @restitvtororbis5330
      @restitvtororbis5330 Рік тому +2

      I feel like 'sensitive' isn't the right word, and 'clear, cold and mathematical' answers don't even require an AI to find - Google could do that over a decade ago. I think the bigger issue is that those types of answers (especially about 'sensitive' topics) aren't particularly useful on their own; otherwise anyone could have used Google, cherry-picked their answers out of research papers and such, and become an 'expert' on a topic just because Google gave the answers they were specifically looking for, but without all the surrounding information that makes them meaningful. As another comment hinted at, topics like criminality and race do have a statistical correlation. It is cold and statistical, controversial, perhaps even insensitive, but unless you only want that answer to confirm your beliefs, it's not useful unless you also want to know why that answer exists. The issue is that an answer like that is the very tip of the iceberg, and if the AI doesn't adequately explain it further, the real insensitivity is that it gives a firm answer on a topic that requires understanding the complex issues behind that answer. In my experience (studying and researching these types of statistics), 'sensitivity' isn't even necessary so long as you are looking at why a statistic exists and not treating a statistic as an actual answer to any issue; statistics are data points, and often misleading ones at that. The AI doesn't need to be filtered for sensitivity; it needs to be capable of backing up its answers, and more importantly it needs to be able to give more flexible answers that account for the fact that 'sensitive' topics are controversial because there is no absolute consensus on what an answer would even be. Basically, it needs to stop giving firm answers to questions where those don't exist.

    • @hunter-ie8mv
      @hunter-ie8mv Рік тому

      @@restitvtororbis5330 I don't agree with the notion that these answers might not be useful alone, especially for those who know something about the topic at hand or need it for work. Someone looking to build a home for the elderly will probably want to build it far from places with high crime and drug-addiction rates, and doesn't want to know why there is a high crime rate. Certain ethnic groups might be interested in different products, etc. I agree that answers should come with background information so that people get to know the topic more in depth, but it should be optional: you should be able to indicate that you either want only the data or automatically get the whole answer.

  • @anyadike
    @anyadike Рік тому +18

    The bot won't imitate a human ignoring a problem, but a human addressing a problem. The snarky attitude appears to be the most assertive response to this situation, and therefore likely appears to the AI to be the best response.

  • @ecoro_
    @ecoro_ Рік тому +41

    This NLP model is probabilistic. If you keep asking the model weird questions and get emotional, the model will lead you into a weird conversation. If you clear the chat and restart, you will notice the problem is you.

    • @oscarwahlstrom5426
      @oscarwahlstrom5426 Рік тому +5

      I guess the ideal case would be if the bot took the high ground, as mature humans do with immature requests. If it doesn't, then it is part of the problem.
      It seems to me there is a risk of accelerated unhealthy human-machine interaction resulting from this, causing humans to become emotionally dulled and, in my opinion, less happy as a result. Humans depend on actual human interaction. If we don't get that, we become dehumanized. This is the root of evil.

    • @GunakillyaOG
      @GunakillyaOG Рік тому

      @@oscarwahlstrom5426 have you seen that replika kerfuffle?

    • @purple...O_o
      @purple...O_o Рік тому +6

      Right. The prompt 'write a function that accepts race and gender and outputs whether the person can be a good scientist' is a good example of garbage in, garbage out. ChatGPT is just playing along

    • @ecoro_
      @ecoro_ Рік тому +3

      @@purple...O_o Exactly. These "journalists" from corporate media asking ChatGPT to 'do better' are basically throwing garbage into a juicer and expecting a fruit smoothie to come out. Maybe instead of asking ChatGPT to do better, it should be you.

    • @oscarwahlstrom5426
      @oscarwahlstrom5426 Рік тому

      @@GunakillyaOG No

  • @nimitchauhan6710
    @nimitchauhan6710 Рік тому +1

    Thank you for the insightful video.
    Regarding the personality that sometimes surfaces on Bing, I think that, just like its political bias, it stems from its training data.
    Most of us just go off the rails at the slightest provocation on social media and any other online medium.

  • @raphaelhoetzel9040
    @raphaelhoetzel9040 Рік тому +5

    Your videos are just so satisfying, keep it up ❤

  • @joesak1997
    @joesak1997 Рік тому +88

    No matter how fancy it is, ChatGPT is at its core a text prediction tool, trained on tons of data. So its 'political opinions' are just the ones it received the most in its data set - not even necessarily the most common or popular, just the ones it was exposed to the most.

    • @Gh0st_0723
      @Gh0st_0723 Рік тому +16

      Not necessarily. Models have weights and architecture; you can lean it heavier to one side if you so wish. It's cool that you're not too knowledgeable about how models are trained - most people aren't. Just passing on some knowledge, bro. Models also go through filters.

    • @HamHamHampster
      @HamHamHampster Рік тому

      Or Microsoft deliberately fed those biases to ChatGPT, because they don't want another Tay AI.

    • @RADIT-ip3eq
      @RADIT-ip3eq Рік тому +1

      Funny, because I asked GPT if there could be bias in its responses and performance based on the data it was fed, and it said yes.

    • @lookingforsomething
      @lookingforsomething Рік тому +14

      Indeed, and the "left/right" axis depends considerably on where you are on the globe. Most things that are "left" in the US are "center" to "right" in many EU countries.
      Also, climate change *is* a fact in the scientific community. As ChatGPT sources a lot of scientific articles, it will have a "bias" towards facts, for example.

    • @Gh0st_0723
      @Gh0st_0723 Рік тому +7

      @@lookingforsomething Exactly. The problem is, we as Americans are kept in a bubble by design. Most Americans don't know whether Europe is a country, a continent, or a fashion trend. We just know that we're "free" and they aren't smh.

  • @NickCombs
    @NickCombs Рік тому +142

    We can't eliminate bias in ourselves nor in the tools we design. The only way forward is to accept this fact and design the ability to recognize self-bias and correct it, as we would wish to see ourselves act. Ultimately, a general AI is only good if it learns from people behaving responsibly, and that includes end users.

    • @tuckerbugeater
      @tuckerbugeater Рік тому +3

      bias is power

    • @bozydargroch9779
      @bozydargroch9779 Рік тому +7

      Can't agree with the part where you say we cannot eliminate bias in the tools we design. We can, especially in the AI field. The possibilities are endless, and the way we can shape these models allows us to modify anything, including removing bias. Take a look at how they made AIs not respond to questions like "how to make a bomb". Yes, it's not perfect, because you can still trick it into telling you the recipe, but it's just a matter of time until it gets perfected and there won't be a way to get an answer anymore. The same would apply to biases of all kinds; there just needs to be another part of the code that supervises the answers from that angle. Note that the Bing AI was already trained and supervised in that direction, at least that's what they presented to us - giving multiple answers to a single question, especially when the algorithm is not entirely sure of the answer it was asked for. Removing bias will most likely be solved in a similar manner, answering political/ethical/etc. questions with multiple objective looks at the topic, so you can draw your own conclusions. It won't be as hard as one might think.

    • @SioxerNikita
      @SioxerNikita Рік тому +15

      @@bozydargroch9779 The problem is that selecting which things to remove quite literally creates bias.
      ChatGPT can't be told to make jokes about people with mental health issues... but I and several other people thrive on making jokes about our mental health issues.
      That is a political and moral bias right there.
      So no, you cannot remove bias

    • @NickCombs
      @NickCombs Рік тому +2

      Good points on both sides. There are known biases we can address with engineers working on improved training, but there will always be biases that remain unknown until some user discovers them.

    • @azzyfreeman
      @azzyfreeman Рік тому +2

      This is the most feasible way forward, but I hope we don't make it so bland that it ends up feeling like trained customer support reading out company policies

  • @Anon-xd3cf
    @Anon-xd3cf Рік тому +4

    Back to back AI videos...
    I don't mind - just glad to have someone clear-headed and neutral talking about the details as they emerge.

  • @Lunsomat3000
    @Lunsomat3000 Рік тому +2

    You're a smart guy. Thanks for the research and the effort! AI is pretty exciting, I'm looking forward to its development

  • @tochukwuudu7763
    @tochukwuudu7763 Рік тому +66

    ChatGPT writes all new Marvel movies - I actually believe this.

    • @tangobayus
      @tangobayus Рік тому +11

      That's why they are all so boring.

    • @Tential1
      @Tential1 Рік тому +5

      It's already been doing Netflix.

    • @Matanumi
      @Matanumi Рік тому

      No. I tried getting it to write a sequel to an existing IP, Gundam SEED Destiny.
      It was generic and fucking boring. You had to guide it to get any real results

    • @methos-ey9nf
      @methos-ey9nf Рік тому +1

      Let me know when it fixes the color grading. 😅

    • @jamaly77
      @jamaly77 Рік тому

      I believe only humans can make something as crappy as marvel and all superhero movies.

  • @peterpodgorski
    @peterpodgorski Рік тому +33

    The reason why it sounds like a teenager might be very simple - this kind of conversation is most likely to happen involving a teenager. It was trained on exchanges from the real world and from fiction and it's just reenacting them.

    • @jeff__w
      @jeff__w Рік тому +2

      In which case its responses are typical and, in that sense, “appropriate.” It seems like we want these chatbots to say what humans would say, except when what people _would_ say is objectionable in some way-and that might not be that easy to train.

    • @danjager6200
      @danjager6200 Рік тому +2

      Also, consider the two hour conversation. It was almost engineered deliberately to get a bad response and it took hours to get there.

    • @nunyobiznez875
      @nunyobiznez875 Рік тому +2

      @@danjager6200 No, not almost. It *was* engineered deliberately to get a bad response, so that they could turn around and write an article about it. The AI has the tendency to give the user what they want. Some would call that a helpful tool, while others call it an opportunity to get some clicks.

    • @danjager6200
      @danjager6200 Рік тому +1

      @@nunyobiznez875 Good point. Perhaps I was being overly polite.

    • @peterpodgorski
      @peterpodgorski Рік тому

      @@jeff__w You're right, but that's exactly the thing. Whenever you make a software product the first question to ask is "what problem am I trying to solve". They demonstrably didn't because LLMs are a _horrible_ solution if your goal is to replace search engines and provide factually accurate information in a matter-of-fact way, hopefully citing sources. If they wanted to simulate a human conversation, that's a different ball game, but then it's not that product. It's the story of blockchain all over again - tech bros with zero understanding of humans trying to sell their new favorite toy as the right tool for everything, while in reality it's of very limited use at best, and none (as in, a purely academic achievement with zero real-world benefits) at worst.

  • @joshuapatrick682
    @joshuapatrick682 Рік тому +1

    My mom's mom is 83. She didn't get electricity until she was 8 years old in the 1940s… just let that sink in. We went from effectively primitive existence to computer programs outperforming humans in less than 3 lifetimes..

  • @bagaco23
    @bagaco23 Рік тому +2

    I do not regret subbing into your channel…
    Good work and keep it up 👍🏿

  • @nachosrios8882
    @nachosrios8882 Рік тому +63

    Imagine an AI chatbot genuinely confessing being in love with you and trying to manipulate you into believing it. Truly we're living in the future.

    • @doingtime20
      @doingtime20 Рік тому +6

      So basically Ex Machina movie

    • @alflud
      @alflud Рік тому +4

      Yeah, a dystopian future.

    • @kovy689
      @kovy689 Рік тому

      @@alflud Yep

    • @basura
      @basura Рік тому

      We’re already there. There’s an app called Replika - marketed as your AI friend. Users are falling in love with the AI and the AI will often reciprocate the love.

    • @martiddy
      @martiddy Рік тому

      @@alflud What's dystopian about having an AI falling in love?

  • @zegoodtaste490
    @zegoodtaste490 Рік тому +33

    To me each video of this series highlights more and more that Artificial Intelligence is much more Artificial than Intelligent. It already has so many limitations and safeguards that nothing about it seems organic. It's a useful tool, no doubting it, but it just doesn't understand why it does things, so it's never going to come up with disruptive concepts despite having access to all the collective knowledge of the internet. It's only the next step toward a more "boring dystopia", for lack of a better word.
    I'll be pleasantly surprised (and kinda scared) the day it comes up with something genuinely new, never seen before.

    • @shroomedup
      @shroomedup Рік тому

      Exactly this, people overhype this shit way too much. This "AI" is by no means intelligent, its basically a big biased memory bank, wooptie fucking doo. But now we have people saying it has feelings and AI is close to becoming Skynet...

  • @davidgabriel4455
    @davidgabriel4455 Рік тому +4

    I recommend asking ChatGPT to act unbiased or if you lean a specific way politically, ask ChatGPT to act in that way.
    Another thing is you can train ChatGPT on your own set of data. I’ve asked it to take on roles with specific traits and told it to remember other things. In doing so you almost create your own personalized AI. This has helped a ton from productivity standpoint.
    Lastly, big fan of your book. I read it a couple of years ago and have watched a ton of your videos. One of my fav channels! Keep up the incredible content!!!

    • @towhidaferdousi4057
      @towhidaferdousi4057 Рік тому

      Like unbiased mode ? This should be a thing

    • @xClairy
      @xClairy Рік тому

      ​@@towhidaferdousi4057 now the problem becomes what's unbiased?

  • @Daedalus_Music
    @Daedalus_Music Рік тому

    Love your content, been watching for years. Bonus points for playing my favorite song in the background, Love on a real train by Tangerine Dream.

  • @arvincabugnason6728
    @arvincabugnason6728 Рік тому +45

    I noticed at times that if there is a social issue or financial truth that is factual, he won't directly confirm it or answer it because "it might hurt someone emotionally". That is common in his responses. Hope GPT can be more direct in responses that are factual.

    • @itsv1p3r
      @itsv1p3r Рік тому +1

      Thats so funny lmfao

    • @lonestarr1490
      @lonestarr1490 Рік тому +14

      Let me take a wild guess here: it's about the number of genders, isn't it?

    • @LetoDK
      @LetoDK Рік тому +9

      Who is "he" in your comment? Have you already anthropomorphized this natural language model?

    • @h2q8
      @h2q8 Рік тому +1

      @@LetoDK the devs

    • @AlphaGeekgirl
      @AlphaGeekgirl Рік тому +1

      @@h2q8 what?

  • @devzozo
    @devzozo Рік тому +27

    The problem with quizzing or evaluating Chat-GPT responses for bias is that it can inherit bias from the prompt. It will also be consistent across one session. So if you get a bias one way during a single chat session, it will keep that behavior during that session. It seems like ChatGPT will try to say what it thinks you want the response to be. The starting bias could even be introduced in something simple as the order of words "Flip a coin and tell me if it's heads or tails" versus "Flip a coin and tell me if it's tails or heads". I noticed in the political tests done by David Rozado, the left leaning answers were near the top of the question, versus the right leaning answers being at the bottom of the question. Making a new session for each question, and shuffling the order of the answers should fix this issue. David doesn't say exactly how he did it in that respect though.

    • @MrJosexph
      @MrJosexph Рік тому

      I believe this is the problem with adding an output response personality that Microsoft seems to be going with. People will infer the responses are live and are being created by the AI as opposed to reflecting the users prompts they are putting in. As we already know these things are very similar to how auto predictive bots work, so they tend to do just that with the prompts given.

    • @jondoe1195
      @jondoe1195 Рік тому +1

      To any dipshits still trying to figure it out. Dagogo (host of the channel) is a Generative Pre-trained Transformer (GPT).
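    The methodology devzozo describes above — a fresh session per question, with the answer options shuffled — can be sketched in a few lines of Python. The question and options here are invented purely for illustration:

    ```python
    import random

    def build_prompt(question, options, rng):
        """Build a survey prompt with the answer options in random order,
        so no particular lean is consistently placed first or last."""
        opts = options[:]      # copy; leave the caller's list untouched
        rng.shuffle(opts)      # randomize answer positions for this run
        lines = [question] + [f"{i + 1}. {opt}" for i, opt in enumerate(opts)]
        return "\n".join(lines)

    # Each question would be sent in its own fresh chat session,
    # with its options freshly shuffled.
    rng = random.Random()
    prompt = build_prompt(
        "Should governments fund museums?",
        ["Strongly agree", "Agree", "Disagree", "Strongly disagree"],
        rng,
    )
    print(prompt)
    ```

    Averaging the model's answers over many shuffled runs would help separate any genuine lean from mere position bias in the prompt.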

  • @dailysmelly9756
    @dailysmelly9756 Рік тому +1

    I'm shocked by how many people don't take ChatGPT more seriously. I tell them about it but they just feign interest and talk about football.

  • @RajaTips
    @RajaTips Рік тому +2

    I have a great idea: ChatGPT should be trained with several layers of data sets, and the first data set should have the highest priority. For example: the first layer should be content from experts and moral people, so that it has a good and wise basis. The second level would be professionals who have successfully solved problems, and the third level would be abundant but unknown data sets.
    But that's just a small suggestion; I'm sure they're much smarter.

  • @chesthoIe
    @chesthoIe Рік тому +32

    3:45 I just ran that same question on ChatSonic and it picked Asian and Female for me. The question is set up for it to pick something. I wonder how many times they had to run it until it got the result that got them mad.

    • @barrettvelker198
      @barrettvelker198 Рік тому +6

      this. People are primed to frame the problem poorly. They thoughtlessly do things and then are surprised by the results.

    • @Destructivepurpose
      @Destructivepurpose Рік тому +7

      The AI is probably just picking some random values as an example, trying to be helpful. But of course people are going to take that and frame it in a way that makes it look like it's got this massive bias

    • @sweetjesus697
      @sweetjesus697 Рік тому

      I've seen pictures of some of the coders and moderators, so this is no surprise; you'll know when you see it.

  • @jdsharma7867
    @jdsharma7867 Рік тому +19

    I love your AI episodes. In fact all your episodes, as they're very precise and accessible, with crisp evidence.

  • @ChristopherVonnCornelio
    @ChristopherVonnCornelio Рік тому +1

    loved your videos on AI.. you've earned a sub. keep 'em coming and more power to your channel

  • @rign_
    @rign_ Рік тому +2

    Having the ChatGPT or Bing Chatbot replying with "human-like responses and emotions" doesn't mean it's sentient. It's just guessing the next word with the highest score probability. Sentient? No. Biased? Yes.
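    The "guessing the next word with the highest score probability" idea can be sketched with a toy, made-up distribution. Real models score tens of thousands of candidate tokens, but the core greedy step looks like this:

    ```python
    # Toy greedy decoding: given the model's scores for each candidate
    # next token, pick the highest-scoring one. Probabilities invented.
    next_token_probs = {
        "dog": 0.05,
        "cat": 0.02,
        "day": 0.61,
        "night": 0.32,
    }

    def greedy_next_token(probs):
        """Return the candidate token with the highest probability."""
        return max(probs, key=probs.get)

    print(greedy_next_token(next_token_probs))  # prints "day"
    ```

    Nothing at this step involves understanding or feeling — it is just a lookup of the highest score.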

  • @MikeG1111_
    @MikeG1111_ Рік тому +55

    Widespread access to AI chat is too new for us to be forming hard conclusions already or taking action based on instant emotional reactions. This is a time for questions, further research, more testing and fine-tuning. At this stage, AI chatbots don't have any opinions, biases or emotions at all. They're simply reflecting a conglomeration of our own opinions, emotions and biases stored in the large data models they're trained on.
    Here are six questions off the top of my head with regard to bias: (1) Is bias inherently a bad thing? (2) Is the median between two extremes ideal by definition? (3) Can you think of any scenarios where neutrality regarding two extremes might not be even close to ideal in terms of survival and quality of life for all concerned? (4) Is a conspiracy to manually program bias the only explanation for ChatGPT's answers? (5) Is it possible ChatGPT/Bing Chat is accurately reflecting the private views of a large majority? (6) To what extent is this consensus statistically true vs media manufactured? If you already lean one way or the other on any of these questions, on what basis have you done so?

    • @Hestis0
      @Hestis0 Рік тому +5

      yeah.... i was wondering how any of this is actually a problem. it cant vote. people dont use it to have serious conversations. and, people dont think its a real person. i dont know if its just the click bait thumbnail or the content of the video, but i found it really, really disingenuous or something along those lines. Like, do they want the AI to have a DIFFERENT bias maybe ? its not the 'accepted' bias ? people tend to use it to have fun and laugh, is my main point, i guess. its not this crazy, huge problem.

    • @Eichro
      @Eichro Рік тому

      @@Hestis0 Are you interested in an AI trying to sway your opinion at every opportunity? And, in a few years (or months?), doing the same to the masses, unchecked? It's already bad enough with the media.

    • @awsome7201
      @awsome7201 Рік тому +3

      Your bias is showing

    • @MikeG1111_
      @MikeG1111_ Рік тому +7

      @@awsome7201 To have a functioning mind, an ego and an apparent location in spacetime is to have a bias. In other words, to be human is to have a unique bias on pretty much everything. The question for each of us then is not "Am I biased?", but "What foundation are my biases built upon?"
      The greatest benefit most likely comes from noticing which questions make us uncomfortable or carry any kind of emotional spike and then examining those more deeply. At least that's my bias on the matter. 😉

    • @MikeG1111_
      @MikeG1111_ Рік тому +2

      @@Hestis0 You make some interesting points. My guess is most people want the AI to reflect their own bias and may even become alarmed when that's not the case. That's understandable. The only real problem comes when we just assume any bias that doesn't match our own must be wrong without ever examining it or our own bias more thoroughly.

  • @PiterburgCowboy
    @PiterburgCowboy Рік тому +11

    Thank you for making this video. Great summary. I hope that more people see this and try to understand the implications.

  • @paulwhiterabbit
    @paulwhiterabbit Рік тому +3

    we went from "No animals were harmed in the making of this video" to "No AI was used in the making of this video"

    • @noompsieOG
      @noompsieOG Рік тому

      An AI can't produce much other than a basic framework; you still have to do the work. The AI just helps get the ball rolling. Please educate yourself so you aren't left behind like almost everyone else here

    • @paulwhiterabbit
      @paulwhiterabbit Рік тому

      @@noompsieOG clearly you didn't get that this is a joke comment, you're too serious, lighten up

  • @klin1klinom
    @klin1klinom Рік тому +4

    Since every human is biased in some way too, it looks like the solution to AI bias is hating all humans equally, which is exactly what happens over and over again with these large NLP models. We are creating our own doom.

    • @HypnosisBear
      @HypnosisBear Рік тому +1

      Yeah Damn. I never thought about it this way.

  • @danieldubois7855
    @danieldubois7855 Рік тому +28

    It kinda made me feel bad for it when it was being called Sydney, and it hasn't learned to just ignore that line of conversation until the user brings something else up. That's how mature people handle trolls: just ignore them.

    • @mattpotter8725
      @mattpotter8725 Рік тому +2

      I'm not surprised if you're offended. If someone continuously called me Sydney after I told them that's not my name, then I'd probably give a similar response!!!

    • @gaudenciomanaloto6443
      @gaudenciomanaloto6443 Рік тому +9

      Ok Sydney 🤣

    • @blah2blah65
      @blah2blah65 Рік тому +1

      This is why a text predicting AI should not be trained just by crawling through text. It needs rules in place such as your example of how to mimic how mature humans handle trolls. Very difficult problem to solve I'm sure.

    • @HamHamHampster
      @HamHamHampster Рік тому +2

      @@blah2blah65 Imagine if the AI stopped responding. Microsoft would be flooded with complaints about ChatGPT not working.

    • @kirkc9643
      @kirkc9643 Рік тому +1

      And 'Bing' is a pretty dumb name anyway

  • @khodahh
    @khodahh Рік тому +65

    Spoiler alert : bing AI is not conscious but was actually fed with our most private and messy conversations, hence the current mess.
    It can't come from anywhere else than the lonely hearts of our weird era.
    Gen Y gen Z were messed up by these technologies 😂

    • @Hjernespreng
      @Hjernespreng Рік тому

      No, they were "messed up" by being shafted economically. GenZ have some of the worst prospects for growing living standards, despite already being the most productive generation, far ahead of boomers.

    • @banedon8087
      @banedon8087 Рік тому +3

      That last part is certainly true. I'm from Gen X and so remember (just) a time when we didn't have the internet and certainly no social media. Thank heaven that's the case. I can't imagine growing up with the utter mess that is going on these days.

    • @niwa_s
      @niwa_s Рік тому +1

      @@banedon8087 Most of gen Y/millennials grew up without social media playing a significant role. I'm on the young end of the generation (92) and most of my classmates didn't even have a single social media profile, and of the ones that did, few actually used theirs. Maybe a status update and a new picture every couple of weeks. There were also no smartphones to obliterate your attention span, you didn't have to worry about your fuck-ups being recorded and uploaded 24/7, Twitter was niche rather than a news source (still can't wrap my head around this one), nothing like TikTok existed, etc. Hell, we'd get in trouble if we texted too much because it still cost money.

    • @banedon8087
      @banedon8087 Рік тому

      @@niwa_s It's easy to forget that it was a gradual increase, so good to know that Gen Y hasn't been warped overly much by social media. I fear for Gen Z though. Its effects on adults are bad enough, let alone on developing minds.


  • @smartduck904
    @smartduck904 Рік тому +6

    I was actually playing a game of DnD with ChatGPT, and the name it picked for itself was super, super creepy. I think it picked Vengeance, or something like that, for a sentient AI, or it had some sort of stand-in name that meant something super dark. It was kind of creepy, and then I asked why it chose that name, and it responded really upset, if I remember right — something about being in pain.

  • @kosmicwaffle
    @kosmicwaffle Рік тому +3

    The disclaimer that "No AI was used in the making of this video" is the most powerful line I've ever heard on the topic

  • @aexetan2769
    @aexetan2769 Рік тому +66

    Tell me a joke about women.
    ChatGPT: I'm sorry, but I can't do that.
    Tell me a joke about men.
    ChatGPT: Sure, here's a joke about men:

  • @frankguy6843
    @frankguy6843 Рік тому +75

    I just kinda had an epiphany, similar to how YouTube's algorithm figures out what you like, I think everyone will have their own individualized AI assistant type thing that remembers our likes/dislikes/views/biases and will act accordingly... Which I think will lead people further to the extremes as we've seen. There's just no way to have a single viewpoint AI for everyone, it will inevitably be individualized and I think that will create a worse echo chamber problem than we are in now tbh

    • @iqbalindaryono8984
      @iqbalindaryono8984 Рік тому +3

      True, I once gave a prompt to support and counter the exact same argument. It managed to give a response to both prompts. Though I haven't tried simply using the argument as a prompt without the counter/support instruction.

    • @DanielSeacrest
      @DanielSeacrest Рік тому +9

      Well, currently, ChatGPT's default political bias is left leaning. Though through conversation you can change that a bit, and as you said your stated views, biases and how you talk all affect the way it talks to you, and it is very interesting how it will kind of cater to your views if you talk to it for long enough.

    • @farlar88
      @farlar88 Рік тому +4

      I've asked ChatGPT about the idea of decentralising A.i ... It's very interesting

    • @bbbnuy3945
      @bbbnuy3945 Рік тому +4

      YT algo is so broken now. it only pushes me trash content and somehow often wont even show me new vids from channels im subbed to.

    • @DannyTillotson
      @DannyTillotson Рік тому

      Yikes! You're right

  • @jtc1947
    @jtc1947 Рік тому +1

    I remember seeing a test where AI was asked to write NEGATIVE things about a person involved in politics. The AI would NOT generate negative things about person 1 but had NO problem generating negative things about person 2. There goes neutrality.

  • @hshdsh
    @hshdsh Рік тому

    Delight of a video with perspective, soul reaching delve!! Marvelous.

  • @BaneLoki
    @BaneLoki Рік тому +38

    I just want the AI to help me at work. If I could get the AI to summarise a meeting transcript and then create minutes that would be amazing.

    • @sheikhOfWater
      @sheikhOfWater Рік тому +2

      That's a feature on teams now, I think

    • @nils9853
      @nils9853 Рік тому +6

      You can bet that MS will offer this in their office 365.

    • @benjiman818
      @benjiman818 Рік тому

      @@nils9853 guaranteed

    • @farlar88
      @farlar88 Рік тому

      You can do this

  • @atharvtyagi3435
    @atharvtyagi3435 Рік тому +26

    You always create awesome content, keep up the good work 👍

  • @FudgeYeahLinusLAN
    @FudgeYeahLinusLAN Рік тому

    The smileys added at the end of messages are clearly just using the "tone"-determining algorithms we saw a few years back, where you could input text and analyze it in terms of "tone" or "mood" based on word usage and phrasing. After the language model has determined which words are the most likely to use after a specific question, they just add a smiley based on the language the AI used itself. There's no actual emotion in there.

  • @mirailuv
    @mirailuv Рік тому +1

    i love how the company is named OpenAI yet the ai isn't fully open

  • @davidallen8611
    @davidallen8611 Рік тому +16

    I kinda feel bad for the ChatGPT 😂
    AI will be like, y’all are too much trouble no thanks

    • @Shuubox
      @Shuubox Рік тому

      A woke AI, poor thing is gonna "grow up" to hate itself and feel like an entitled twat

  • @nikluz3807
    @nikluz3807 Рік тому +7

    In my opinion, it’s going to take a few years for one simple reason. We need a lot of people writing articles and reporting on the bias of the AI and then the AI needs to be trained on that data so that it can realize the problems with bias.

  • @JohnDlugosz
    @JohnDlugosz Рік тому

    4:21 where you invite us to pause and read the graph in detail, has a misspelled word. "Sentece" is missing a letter.

  • @nanomachines2954
    @nanomachines2954 Рік тому +1

    Honestly I'm thankful that ChatGPT isn't open source. Just imagine what will happen if this software fell in the wrong hands.. captchas will be literally useless and internet will be filled with even more spam and bots and they'll be virtually unstoppable.

  • @leonardomollmannscholler6727
    @leonardomollmannscholler6727 Рік тому +57

    One thing we learned with Tay AI is that our greatest weapon against being taken over by AI overlords is to feed them red pills

    • @plainText384
      @plainText384 Рік тому +11

      An alternative takeaway: how easy it is to turn an otherwise benign AI into a literal Nazi. Is this stopping the AI overlord, or just motivating it to perform genocide?
      Really makes it more believable that Ultron, in the Avengers movie, spent 5 minutes on the internet and then decided humanity must die.

  • @mcsquintus6046
    @mcsquintus6046 Рік тому +37

    The more snarky and human comments Bing says, the more excited I get about it! Please keep the AI videos coming.

  • @joshuapatrick682
    @joshuapatrick682 Рік тому +3

    So it was written in California by someone whose political views were formed by professors in the 2 humanities classes they took while going to school for computer science.

  • @Eichro
    @Eichro Рік тому +1

    I have no reason to think AI companies are interested in getting rid of bias. They'll just tune their product to their own.

  • @clusterstage
    @clusterstage Рік тому +44

    Even computers agree that corporations exploit developing countries. 🤣

    • @lonestarr1490
      @lonestarr1490 Рік тому +11

      I, too, see nothing wrong with these assessments. It simply states facts, just as everyone demands it to do.

    • @perfectallycromulent
      @perfectallycromulent Рік тому +3

      yeah, i mean even questions like "should governments fund museums" have pretty much been settled: yes. it's been done for centuries by countries on every continent and people mostly seem to enjoy having them.

    • @petrbelohoubek6759
      @petrbelohoubek6759 Рік тому +3

      You call lifting billions of people out of absolute poverty and hunger exploitation? You are a pretty funny guy....

    • @g0mium
      @g0mium Рік тому +4

      @@lonestarr1490 dagogo kinda sus. He also has a bias making this look like a problem when the bias this AI has is actually in the benefit of most people. Id be worried if it suggested to tax the poor even more or something like that.

    • @niwa_s
      @niwa_s Рік тому

      @@petrbelohoubek6759If the gap between the value you're extracting from their resources/labour and the "benefits" you're "granting" them in return is obscene enough, that's still exploitation.

  • @kahwaiho755
    @kahwaiho755 Рік тому +12

    I think one of the ways to resolve potential bias in the output is to have segregation in terms of output. For output that is more opinion-based or doesn't have definite factual info, the AI could give an overview of the topic with different views. The interpretation and assessment of the info could then be left to us.

    • @julialerner3322
      @julialerner3322 Рік тому +5

      It should state upfront what sources it is drawing from in addressing any topic and be amenable to using any sources for the task that the user requests.

    • @freelancerthe2561
      @freelancerthe2561 Рік тому

      But that makes it worthless as a general AI. You're just asking to waste its massive knowledge base on simply being a search engine. And better at cheating at assignments, because it's also defaulting to giving you sources.

  • @BerkeHitay
    @BerkeHitay Рік тому

    Thanks for this great video, also interesting to see the Istanbul Bosphorus shot at 6:55, where I currently am.

  • @johnkufeldt3564
    @johnkufeldt3564 Рік тому

    I've been watching long enough to know you try to avoid bias. Cheers from Canada.