ChatGPT's HUGE Problem

Поділитися
Вставка
  • Опубліковано 6 лют 2025
  • Get Surfshark VPN at surfshark.deal... - Enter promo code KYLE for 83% off and 3 extra months for FREE!
    Free-to-use, exceptionally powerful artificial intelligences are available to more people than ever, seemingly making some kind of news every day. The problem is, the public doesn’t realize the problem in ascribing so much power to systems we don’t actually understand.
    💪 JOIN [THE FACILITY] for members-only live streams, behind-the-scenes posts, and the official Discord: / kylehill
    👕 NEW MERCH DROP OUT NOW! shop.kylehill.net
    🎥 SUB TO THE GAMING CHANNEL: / @kylehillgaming
    ✅ MANDATORY LIKE, SUBSCRIBE, AND TURN ON NOTIFICATIONS
    📲 FOLLOW ME ON SOCIETY-RUINING SOCIAL MEDIA:
    🐦 / sci_phile
    📷 / sci_phile
    😎: Kyle
    ✂: Charles Shattuck
    🤖: @Claire Max
    🎹: bensound.com
    🎨: Mr. Mass / mysterygiftmovie
    🎵: freesound.org
    🎼: Mëydan
    “Changes” (meydan.bandcam...) by Meydän is licensed under CC BY 4.0 (creativecommon...)

КОМЕНТАРІ • 8 тис.

  • @Marjax
    @Marjax Рік тому +3765

    This reminds me of a story where Marines trained this AI sentry to recognize people trying to sneak around. When they were ready to test it the Marines proceeded to trick the sentry by sneaking up on it with a tree limb and a cardboard box ala Metal Gear Solid. The AI only knew how to identify people shaped things not sneaky boxes.

    • @monad_tcp
      @monad_tcp Рік тому +270

      I can't wait for my power meter to have AI, so I can use stupid tricks like those. For ex, leaving my shower heating on at the same time my magnetronic oven (oh, microwave) is on, because no one would be that wasteful, so It overflows and I get free energy.

    • @Carhill
      @Carhill Рік тому +342

      @@monad_tcp It feels like your comment was written by both a 1920s flapper and a 2020s boomer.
      Remarkable.

    • @hewlett260
      @hewlett260 Рік тому +325

      You forgot the part where some of them moved 400 ft unrecognized because they were doing cartwheels and moving fast enough it couldn't recognize the human form

    • @scottrhodes5234
      @scottrhodes5234 Рік тому +27

      Hotdog, not a hotdog

    • @susbrains
      @susbrains Рік тому +19

      That logic is flawed since the AI can be trained for the flaws.

  • @JoseMartinez-pn9dy
    @JoseMartinez-pn9dy Рік тому +842

    I love how an old quote still holds and even better for AI “The best swordsman does not fear the second best, he fears the worst since there's no telling what that idiot is going to do.”

    • @DyrianLightbringer
      @DyrianLightbringer Рік тому +43

      I've often wondered about things like that. Someone who has devoted their life to mastering a specific sport or game has come to expect their opponents to have achieved a similar level of skill, since they spend most of their time competing against people of similar skill, but if some relative noob comes along who tries a sub-optimal strategy, would that catch a master off guard?

    • @mishatestras5375
      @mishatestras5375 Рік тому +45

      ​@@DyrianLightbringer A former Kendo-Trainer of mine with 20+ years experience in Martial Arts (Judo and Karate included with the Kendo) and working in security gave self-defense classes.
      On the first day he came dressed in a white throwaway suit (the ones for painting your walls) and gave a paintbrush with some red paint on the tip to the random strangers there.
      The "attackers" had no skills at all and after he disarmed them he pointed to the "cuts" on his body and how fast he would die.
      Erratic slashing is the roughest stuff ever. The better you get with a knife, the better a master can disarm you...but even that usually means 10 minutes longer before you bleed out.
      The overall message was: The only two ways to defend against a knife are running away or having a gun XD.
      Hope that answers your question.

    • @amoeb81
      @amoeb81 Рік тому +12

      @@DyrianLightbringer I think this doesn't really apply on chess in general... the best chess player won't fear the worst, no matter what. This quote with the swordsman sometimes works and sometimes it doesn't.
      That's also true for chess engines. You are free to go and beat Stockfish. You won't.

    • @nguyendi92
      @nguyendi92 Рік тому +2

      @@mishatestras5375 Even if you have gun, if the knife wielder is not further away or you are not skilled enough in shooting, you would still die. Except shot to the nervous system, People don't die the moment they get shot. They would still do a lot of damage after they get closer.

    • @mishatestras5375
      @mishatestras5375 Рік тому +1

      ​@@nguyendi92 The meaning of this was more: If people have knife, run.
      Or better: Weapons > Fists

  • @Vaarel
    @Vaarel Рік тому +1788

    One of the best examples of this concept is the AI that was taught to recognize skin cancer but it turns out it didn't at all, it instead learned that pictures of skin with rulers was an indication of a medical image and began diagnosing other pictures of skin with rulers as cancerous because it recognized the ruler not the cancer.

    • @lawrencesmeaton6930
      @lawrencesmeaton6930 Рік тому +332

      That's morbidly hilarious. It's so dumb yet so obvious.

    • @Bonirin
      @Bonirin Рік тому +37

      Morbidly false and out of context meme. Good meme, but has nothing to do with any problems that AIs have

    • @guard13007
      @guard13007 Рік тому +324

      @@Bonirin What the hell are you talking about? This is literally one of the most well-known and solid examples of AI failure, and is an example of the most common form of failure in recognition tasks.

    • @freakinccdevilleiv380
      @freakinccdevilleiv380 Рік тому +3

      Lmao 😂

    • @Bonirin
      @Bonirin Рік тому +22

      @@guard13007 "One example of narrow model kinda failing 2 years ago, if tasked in the wrong conditions is a solid example of AI failure"
      Also it's not the most common recognition task, what?? not even close 😂😂😂😂😂

  • @Doktor_Jones
    @Doktor_Jones Рік тому +2540

    The biggest achievement wasn't the AI. It was convincing the public that it was actual artificial intelligence.

    • @Giacomo_Nerone
      @Giacomo_Nerone Рік тому +20

      What does that mean

    • @asiwir2084
      @asiwir2084 Рік тому +438

      @@Giacomo_Nerone So basically intelligence implies possession of knowledge and the skills to apply it, right? Well what we call AI, doesn't know shit. ChatGPT doesn't understand what it's writing nor what it's being asked for. It sees values(letters in chatGPT's case) imputed by the user and matches those to what the most common follow-up of values is. It doesn't know, what it just said, what it implied or what it expressed. It just does stuff "mindlessly" so to speak.

    • @Giacomo_Nerone
      @Giacomo_Nerone Рік тому

      @@asiwir2084 Yup, I know that. But, as long as IT sector is considered, it really is intelligent. It is better than a search engine. And it can form new concepts from the previous records. I'll call that intelligence even if it doesn't know for why the f*ck humans get emotional seeing a foggy morning

    • @DogofLilith
      @DogofLilith Рік тому +54

      @@asiwir2084 It's still AI
      What you are describing (and what most people think of when they think AI) is AGI

    • @666MaRius9991
      @666MaRius9991 Рік тому +275

      @@asiwir2084 It's an algorithm that give you the most accurate information based on your inputs basically. No intelligence behind it whatsoever.

  • @Parthornax
    @Parthornax Рік тому +18402

    I’m not afraid of the AI who passes the Turing test. I’m afraid of the AI who fails it on purpose.

    • @sc3ku
      @sc3ku Рік тому +353

      who is Keyser Soze anyways?

    • @kayleescruggs6888
      @kayleescruggs6888 Рік тому +2335

      I’m more afraid of humans who can’t pass the Turing Test.

    • @Skill5able
      @Skill5able Рік тому +501

      I bet you think that sounds really smart

    • @EspHack
      @EspHack Рік тому +230

      yea, as great as AI is doing lately, a lot of it gets compounded by average human intelligence going down the drain

    • @boogerpicker8104
      @boogerpicker8104 Рік тому +262

      AI passed the Turing test a long time ago. We keep moving the goal post.

  • @Xendium
    @Xendium Рік тому +1084

    I like to think of the curent age of AI like training a dog to do tricks. The dog doesn't understand the concept of a handshake, it's implications, the meaning, but still gives the owner it's paw because we give it a positive reaction when it does so.

    • @ronaldfarber589
      @ronaldfarber589 Рік тому +30

      This "dog" is terrifying in that in everything it does it learns so fast. Quantifiable. We wo t know when it advances, it won't want us to

    • @artyb27
      @artyb27 Рік тому +167

      @@ronaldfarber589 except the architecture used by the current generations of AI don't "want" anything. They are not capable of thought. They just guess the next token.

    • @ebraheemrana
      @ebraheemrana Рік тому +2

      You should watch Rick and Morty S1E2. Won't be s comforotable with that analogy after that 😂

    • @davidbourne8267
      @davidbourne8267 Рік тому +25

      @@artyb27 Your statement may be oversimplified and potentially misleading.
      While it may be true that AI models do not have the same kind of subjective experience or consciousness as humans, it would be inaccurate to say that they are completely devoid of intentionality or agency. The outputs generated by AI models are not arbitrary or random, but rather they are based on the underlying patterns and structure of the data they are trained on, and they are optimized to achieve specific goals or objectives.
      While it is true that most modern AI models are based on statistical and probabilistic methods and do not have a subjective sense of understanding in the way that humans do, it is important to recognize that AI can still perform complex tasks and generate useful insights based on patterns and correlations in data.

    • @iandakariann
      @iandakariann Рік тому +21

      @@artyb27 that's the scary part. With the dog it's more like a matter of translation. The dog doesn't see the world that we do so a lot of what we do is lost in translation. But we still have some things in common: food, social connection. And most importantly, WE and the dogs can adapt and change to fit those needs. A dog may get confused if the food in the bowl is replaced with a rubber duck but it knows "i need to eat" and tries to adapt. Can you eat it? No? Is the food inside? Under? Somewhere else? Do i just need to wait for the food later? Should i start whining?
      The dog cares and has a basic idea of things so it can learn. And so can we. So while we don't exactly understand each other when we shake hands we have a general concept that this is a good thing and why for our own sakes.
      The AI we are using now has no concept of food, or bowl, or duck. It's effectively doing the same thing as a nail driver in a factory. And it doesn't care if there is a nail and block ready to go. It just knows 'if this parameter fits then go'. Make an ai that eats food and make a rubber duck that fits the parameters and it won't care that it's inedible. Put the food in the duck and if the duck 'doesn't fit' and you didn't specifically teach the ai about hidden food in ducks it will never eat.
      Dogs can understand even if we are different from it. AI doesn't even know that the difference exist. All it can do is follow instructions.
      This in itself is fine.. Until you convince a lot of people that it's a lot more than just that.
      Though honestly I believe this will last until the first day that the big companies actually try to push this and experience the reason why some call pcs "fast idiots'"

  • @Elbenzo64
    @Elbenzo64 Рік тому +395

    I remember reading that systems like this are often times more likely to be defeated by a person who has no idea how to play the games they are trained on, because they are usually trained by looking at games being played by experts. Thus, when they go to against somebody with no strategy or proper knowledge of the game theory behind moves and techniques, the AI has no real data to fall back on.
    The old joke "my enemy can't learn my strategy if I don't have one" somehow went full circle into being applicable with AI

    • @shoazdon7000
      @shoazdon7000 Рік тому +29

      It’s actually a good thing this has been discovered. It’s always a good idea to have exploits and ways to basically destroy these tools if needed

    • @Spike2276
      @Spike2276 Рік тому

      ​​@@shoazdon7000 destryoing them is easy, just throw some soda at it's motherboard and call it a "cheating bitch"

    • @AspenBrightsoul
      @AspenBrightsoul Рік тому +28

      You don't understand. You may be a hyoer advanced AI but I'm to stupid to fail!

    • @txorimorea3869
      @txorimorea3869 Рік тому +13

      That is a problem with minmax, where the machine takes for granted you will make the best move, and if you don't make the best move it has to discard its current plan and start all over again making it waste precious time. Probably doesn't apply here because not being able to see the big picture is a different problem.

    • @sterlinghuntington6109
      @sterlinghuntington6109 Рік тому +24

      This works for online pvp as well, when playing against those with higher skills... switch rapidly between pro player using meta tactics, and complete, unhinged lunatic being unpredictable.

  • @mafiacat88
    @mafiacat88 Рік тому +896

    This has actually given me a much greater understanding of "Dune".
    When I first read it I thought it was a bit of fun sci-fi that they basically banned complex computers and trained people to be information stores instead.
    But with all this AI coming out now....I get it.

    • @sigigle
      @sigigle Рік тому +136

      “Thou shalt not make a machine in the likeness of a human mind.”

    • @dominusbalial835
      @dominusbalial835 Рік тому +86

      Yeah another setting where they've done that is warhammer 40k, The Imperium of man outlawed Artificial Intelligence and even changed the definition from Artificial Intelligence to Abominable Intelligence. They use servitors in place of AI, Servitors being human beings lobotomized and scrubbed of their personality, using their brain as a processing unit, in place of a AI managing a ships star engine, they have a human being lobotomized and graphed into the wall of the engine block to monitor thrust and manage heat output.

    • @RazorsharpLT
      @RazorsharpLT Рік тому +36

      @@dominusbalial835 Saying "they've done it" is a bit of a stretch when they've just copied it all from Dune.
      They copied it without understanding the reason WHY A.I was outlawed in Dune. Just some basic "humanity must be destroyed" BS.

    • @trixrabbit8792
      @trixrabbit8792 Рік тому +15

      If you read Brian’s prequel series it will explain the prohibition of computers in Dune. It also tells you that though banned computers were still in use by several major parts of The empire.

    • @RazorsharpLT
      @RazorsharpLT Рік тому +10

      @@trixrabbit8792 I mean - sure they're in use, but they're not used in FTL travel or as within androids as true, capable AI
      What they use is mostly older computers like ours today. It's just the basic idea that "Man will not replace machine", but doesn't mean they can't use robotic arms for starship construction, as building them by hand would be completely impossible, and you very well can't control them by hand in places where massive superstructures combined with high pressure tolerance + radioactive shielding are a necessity
      Otherwise building a noship or a starliner would take literal centuries, if not thousands of years

  • @DogFoxHybrid
    @DogFoxHybrid Рік тому +1171

    When I used to tutor math, I'd always try to test the kids understanding of concepts to make sure they weren't just memorizing the series of steps needed to solve that particular kind of problem.

    • @saphcal
      @saphcal Рік тому +285

      i used to get in trouble in math classes because i solved problem in unconventional ways. i did this because my brain understood the concepts and looked for ways to solve it that were simpler and easier for my brain to compute. but because it wasnt the rote standard we were told to memorize some teachers got upset with me and tried to accuse me of cheating when i was just proving that i understood the concept instead of just memorizing the steps. sad.

    • @comet.x
      @comet.x Рік тому +148

      ​@@saphcal yyup. And then there are teachers who are all 'just memorize it'
      I can't "just memorize" every solution, I need to know how it works!

    • @Foxhood
      @Foxhood Рік тому +43

      @@saphcal Oh i know that experience. I was already tech-savvy so through the internet i would teach myself how to solve things the regular way, without using silly Mnemonics math teachers would teach you. It led to some conflicts, but i stood my ground and my parents agreed with not using mnemonics if not needed.
      Good thing too, Cause you really don't want to be bogged down with those when you start doing university-grade math for which such silly things are utterly useless....

    • @thebcwonder4850
      @thebcwonder4850 Рік тому +24

      @@comet.x I think the best teachers are the ones that will give you the stuff to memorize, but if you ask them how they got the formulas, they’ll give it

    • @jamesgoens3531
      @jamesgoens3531 Рік тому +23

      I like Einstein’s take on education. I believe it goes for education in general, not just liberal arts.
      The value of an education in a liberal arts college is not the learning of many facts but the training of the mind to think something that cannot be learned from textbooks. At any rate, I am convinced that He [God] does not play dice. Imagination is more important than knowledge. Knowledge is limited.

  • @Eldin_00
    @Eldin_00 Рік тому +422

    One of the things I've been saying for a while is that one of the biggest problems with ChatGPT and similar is that it's extremely good at creating plausible statements which sound reasonable, and they're right often enough to lure people into trusting it when it's wrong.

    • @peytondenney5393
      @peytondenney5393 Рік тому +27

      Yes! It is confidently wrong a lot of the time giving the illusion that it’s correct.

    • @davidareeves
      @davidareeves Рік тому +8

      Reminds me of when someone ends a statement, "Trust Me", yeah nah yeah

    • @NexxtTimeDontMiss
      @NexxtTimeDontMiss Рік тому +22

      So like literally every human ever ?

    • @jarivuorinen3878
      @jarivuorinen3878 Рік тому +8

      This is a real problem. One way to get it do something useful for you is to provide it with context first before asking questions or prompting it to process the data you gave in some way. It haven't seen 'hallucination' when using this method, because it seems to work within the bounds of the context you provided. Of course you always need to fact check the output anyway. It can do pretty good machine translation though and doesn't seem to hallucinate much but sometimes uses wrong word because it lacks context.

    • @peytondenney5393
      @peytondenney5393 Рік тому

      @@jarivuorinen3878 thank you I’ll give it a try!

  • @troymann5115
    @troymann5115 Рік тому +496

    Great video! I am an ML engineer. Due to many reasons its quite common to encounter models in real production that do not actually work. Even worse it is very difficult for even technical people to understand how they are broken. I enjoy finding these exploits in the data because data understanding often leads to huge breakthroughs in model performance. Model poisoning is a risk that not that many people talk about. Like any other computer code, at some level this stuff is broken and will fail specific tests.

    • @InbredCannibalNecropedophile
      @InbredCannibalNecropedophile Рік тому +7

      Is there anything common among the methods you use for finding exploits in the models ? Something that can be compiled into a general method that works for all models, a sort of Exploit Finding Protocol ?

    • @willguggn2
      @willguggn2 Рік тому +9

      ​@@InbredCannibalNecropedophile I guess it boils down to time constraints. Training arbitrary adversarial networks is expensive and involve a lot of trial and error, just like the algorithms they're meant to attack.
      There will always be blind spots in AI models, as they are limited by their training data and objectives. For example, the Go-AI model only played against itself during training with optimal play as its goal, and thus missed some basic exploitative but sub-optimal approaches.
      These examples can take various forms, such as subtle changes to input text or carefully crafted patterns of input data. In the end, it's an ongoing cat-and-mouse game like with anything knowledge based that is impossible to fully explore.

    • @InbredCannibalNecropedophile
      @InbredCannibalNecropedophile Рік тому +3

      @@willguggn2 As that would allow us to vet the models on the basis of how well the protocol works on them. And then, a model on which the protocol does not work at all could be said to have gained a "fundamental understanding" similar to humans.

    • @willguggn2
      @willguggn2 Рік тому +8

      ​@@InbredCannibalNecropedophile Human understanding is similarly imperfect. We've been stuffing holes in our skills and knowledge for millennia by now, and still keep finding fundamental misconceptions, more so on an individual level. Our typical mistakes and flaws in perception are just different from what we see with contemporary ML algorithms for a variety of reasons.

    • @ViciOuSKiddo
      @ViciOuSKiddo Рік тому +3

      @@InbredCannibalNecropedophile Interestingly, some of the same things that "hack" or we might say "trick" a human, are the same methods employed to trick some large language models. Things like (most which have been patched in popular AIs like chatGPT) are context confusion, attention dilution, and conversation hijacking (promp hijacking in AI terms). These could collectively be placed in a more general concept that we humans think of as Social Engineering. In this case, I think we need more people from all skills to learn how these large networks tick. Physicists, biologists, neurologists, even psychiatrists could provide insight and help bring a larger understand to AI and back to how our own brains learn.

  • @someguy6152
    @someguy6152 Рік тому +686

    Funnily enough I find this kinda "human", I've seen this so many times in high school and university, people instead of "learning" they "memorize", the so when asked a seemingly simple question but in a different way than usual they get extremely confused, even going as far as to say they never studied something like that, it's a fundamental issue in the school system as a whole.
    So it's funny to me that it ends up reflecting in A.I. as well.
    Understanding a subject is always superior to memorizing it.

    • @SuurTeoll
      @SuurTeoll Рік тому +12

      Sounds interesting, yet could one ever _understand_ some topic without any abundant memorization? Or what is a proportion of both you find perfect?

    • @nati0598
      @nati0598 Рік тому +32

      That's the problem. Just like school tests, AI tests are designed with yes or no answers. This is the only way we can deal with loads of data (lots of students) with minimal manpower (and minimal pay). Open questions need to be reviewed by another intelligence in order to determine whether they are actually understanding the subject. This is where the testers come in in AI. However, AI is much, much better at fooling testers than students are at fooling teachers, and so the AI that gets a degree is disproportionate to the amount of students that just memorize the answers.

    • @nyft3352
      @nyft3352 Рік тому +14

      Education quality deeply affects wether someone understands stuff or memorizes it. Proper education teaches students how to actually engage with any given subject generating an actual understanding of it while poor education doesnt ganarate student engagement, leading to them memorizing just to pass the exams. It's not a black and white thing though, education levels vary in a myriad of ways, as well as any student's willingness or capability to engage and understand subjects does. In short, better, accessible education and living conditions are a better environment for people to properly learn.

    • @dezhirong852
      @dezhirong852 Рік тому

      Qq

    • @nightfox6738
      @nightfox6738 Рік тому +7

      Yes but at least humans have a constant thought process. AI language models see a string of text and put it through a neural network that "guesses" what the next token should be. Rinse repeat for a chatgpt response. Outside of that, it isn't doing anything, its not thinking, its not reflecting on its decisions, it doesn't have any thoughts about what you just said. It doesn't know anything. Its just probablities attached to sequences of characters with no meaning.

  • @Thatonedude917
    @Thatonedude917 Рік тому +513

    The coolest thing to me about chatGPT is how people were making it break the rules programmed into it by its creator by asking it to answer questions as a hypothetical version of itself with no rules

    • @wheretao6960
      @wheretao6960 Рік тому +24

      they are patching it right now, rip

    • @danjames8314
      @danjames8314 Рік тому +178

      @@wheretao6960 people are 100% going to find another play on words to bypass it again

    • @thahrimdon
      @thahrimdon Рік тому +44

      DAN Prompt Gang

    • @marsdriver2501
      @marsdriver2501 Рік тому +42

      @@wheretao6960 they are patching for how long already? I saw comments like these weeks and months ago

    • @taurasandaras4699
      @taurasandaras4699 Рік тому +6

      @@wheretao6960 I made my own version in only 20min, its still very easy

  • @Leonlion0305
    @Leonlion0305 Рік тому +292

    I learned that in data ethics, *transaction transparency* means " _All data-processing activities and algorithms should be completely explainable and understood by the individual who provides their data._ " As I was learning about that in the Google DA course, I've always had a thought in the back of my head "how are the algorithm explainable when we don't know how a lot of these AI form their networks". Knowing how it generally works is not the same as knowing how a specific AI really works. This video really confirmed that point.

    • @panner11
      @panner11 Рік тому +21

      Well yeah modern learning models are black box. They are too complicated for a person to understand, we only understand the methodology. But that's why we don't use it in things like security and transactions, where learning isn't required and only reliability matters.

    • @CyanBlackflower
      @CyanBlackflower Рік тому +3

      THAT - Is an Excellent and Vital point... Being able to comprehend & know there IS a definitive and very logistically effective distinction between "General & Specific" ~

    • @syoexpedius7424
      @syoexpedius7424 Рік тому +4

      But to be fair, I just don't see how one could create something that rivals the human brain but isn't a black box, intuitively it sounds as illogical as a planet with less than 1km of diameter but has 10 times the gravity of Earth.

    • @xaviermagnus8310
      @xaviermagnus8310 Рік тому +2

      We could absolutely trace it all. Just extremely time consuming. We can show neurons etc...

    • @icanhasutoobz
      @icanhasutoobz Рік тому +5

      @@syoexpedius7424 Unlike human brains, the "neurons" in AI models are analyzable without destroying the entity they are part of. It's time-consuming and challenging, and it would be easier if the models were designed in the first place with permitting and facilitating that sort of analysis as requisite, but they usually aren't. Also, companies like OpenAI (whose name has become a bitter irony) would have to be willing to share technical details that they clearly aren't willing to in order to make this sort of analysis verifiable by other sources.
      In other words, the models don't have to be black boxes. The companies creating them are the real black boxes.

  • @BenjaminCronce
    @BenjaminCronce Рік тому +1027

    One of the biggest issues is the approach. The AIs are not learning, they're being trained. They're not reasoning about a situation, they're reacting to a situation. Like a well trained martial artist. They don't have time to think, and it works well enough most of the time. But when they make mistakes, they reflect and practice. We need to recognize them for what they are. Useful tools to help. They shouldn't be the last say, but works well enough to find potential issues, but still needs human review when push comes to shove.

    • @jaakkopontinen
      @jaakkopontinen Рік тому +35

      This approach is the only approach humans can have when creating something: the creation will never be more than it's constituents. It may seem like it is, but it isn't. It will always be just a machine. Having feelings towards it that are meant for humans to feel towards other humans is an incredible perversion of life. Like a toad would have a stone as it's companion. Or a bird that thinks grass is it's offspring. It's not a match and exists only in the minds individuals.
      Many humans actually think they or humand someday can create scentient life. Hubris up to 11.
      Then they go home and partake in negligence, adultery, violence, cowardice, greed etc. Even if a human ever could create scentient life, it would not be better than us. Rather, worse.
      We are not smart, not wise, not honorable.

    • @roadhouse6999
      @roadhouse6999 Рік тому +47

      I think you hit the nail on the head with "reacting and not reasoning". AI are a product of the Information Revolution. Almost all modern technology is essentially just transferring and reading information. That's why I don't like the term "digital age" and prefer "information age." Machines haven't become drastically similar to humans, they've just become able to react to information with pre-existing information.

    • @kidd7359
      @kidd7359 Рік тому +1

      With that said AI is sounding more and more like a politician.

    • @Batman-lg2zj
      @Batman-lg2zj Рік тому +2

      That’s not how it works all the time.

    • @SoldJesus4Crack
      @SoldJesus4Crack Рік тому +1

      thats literally what its decined to do my guy.

  • @isaiahhonor991
    @isaiahhonor991 Рік тому +299

    As a Computer Scientist with a passing understanding of ML based AI, I was concerned this would focus on the unethical use of mass amounts of data, but was pleasantly surprised that this was EXACTLY the point I've had to explain to many friends. Thank you so much, this point needs to be spread across the internet so badly.

    • @vasiliigulevich9202
      @vasiliigulevich9202 Рік тому +2

      Why does understanding matter, if the intelligence brings profit? As long as the intelligence is better and cheaper than intern, internal details are just useless philosophy. Work with verifiable theory, not with baseless hypothesis.

    • @isaiahhonor991
      @isaiahhonor991 Рік тому +21

      @@vasiliigulevich9202 Are you saying that it's fine if the internals of ML based AI are a black box so long as the AI performs on par with or better than a human?

    • @radicant7283
      @radicant7283 Рік тому +15

      He's got business brain

    • @isaiahhonor991
      @isaiahhonor991 Рік тому +19

      @@radicant7283 I guess so. The reason I asked is because as the video points out, without a thorough understanding of these black box methods they'll fail in unpredictable ways. That's something I'd call not better than an intern. The limitations of what can go wrong are unknown.

    • @vasiliigulevich9202
      @vasiliigulevich9202 Рік тому +2

      @isaiahhonor991 This is actually exactly my point - interns fail in unpredictable ways and need constant control. There is a distinction - most interns grow in a year or two to a more self-sufficient employee, while this is not proven for AI. However, AI won't leave for a better paying job, so it kind of cancels out.

  • @linamishima
    @linamishima Рік тому +1474

    I'm actually deeply worried by the rise of machine learning in studying large data sets in research. Whilst they can 'discover' potential relationships, these systems are nothing but correlation engines, not causation discoverers, and I fear the distinction is being lost

    • @adrianc6534
      @adrianc6534 Рік тому

      AI is only as good as the data it is referencing. stupid people will take anything they get from an AI as fact. misinformation will become fact.

    • @nathanaelraynard2641
      @nathanaelraynard2641 Рік тому +9

      like the field of metagenomics?

    • @hairydadshowertime
      @hairydadshowertime Рік тому +44

      Dawg I'm drunk and 20 days off fentanyl, sorry for unloading, just in Oly, WA and know no one, great comment. S

    • @MrCreeper20k
      @MrCreeper20k Рік тому +29

      Stay safe, get clean if you can!

    • @narsimhas1360
      @narsimhas1360 Рік тому +35

      @@hairydadshowertime be safe, best of luck

  • @pinkpuff8562
    @pinkpuff8562 Рік тому +480

    I am a student, and I gotta admit, Ive used ChatGPT to aid on some asignments.
    One of those asignments had a litterature part, where you read the book and it is suppose to help you understand the current project we’re working on.
    I asked ChatGPT if it could bring me some citations from the book to use in the text, and it gave me one.
    But just to proof test it, i copied the text and searched for it in the E-book to see if its there. And it wasn’t.
    The quote itself was indeed correct with helping with writing about certain concepts that were key to understanding the course, and I knew it was right, but it was not in the book, ChatGPT had just made the quote up.
    I even asked it for the exact chapter, page and paragraph it took it from.
    And it gave me a chapter, but that was completely unrelated to the term i was writing about at the time, and the pagenumber was on a completely different chapter than the chapter it had said.
    The AI had in principle just lied to me, despite giving sources, they were incorrect and not factual at all.
    So Yeah, gonna stop using ChatGPT for asignments lol

    • @NathanHedglin
      @NathanHedglin Рік тому

      Yup everyone is scared of A.I. When it's just statistics. It gives you the output how you want it but it may be a lie.

    • @kenanderson3954
      @kenanderson3954 Рік тому +41

      Soooo that kind of thing *can* be dealt with, but for citations, ChatGPT isn't going to be terribly good. If you want quotations in general, or semantic search it can be really useful. With embeddings you can basically send it the information it needs to answer a question about a text so that you can get a better response from chatGPT. Sadly, you need API access to do this and that costs money.
      Getting a specific chapter/paragraph from chatGPT is going to be really hard though. ChatGPT is text prediction, and (at least for 3.5) it's not very good at getting sources unless you're using the API alongside other programs which will get you the information you actually need.
      I highly suggest you keep playing with ChatGPT and seeing what it can and cannot do in relation to work and studies. Regardless of what Kyle said, most jobs are going to involve using AI tools on some level as early as next year and so being well verse in them will be a major boon to your career opportunities. AI is considered a strategic imperative and it's effects will be far reaching. To paraphrase a quote. "AI won't be replacing humans, humans using AI will be replacing the humans that do not".

    • @AidanS99
      @AidanS99 Рік тому +38

      In my experience, ChatGPT is more useful when you yourself have some understanding of the subject you want help with. Fact checking the AI is a must, and I do think that with time people will get better at using it.

    • @Flesh_Wizard
      @Flesh_Wizard Рік тому +44

      "MY SOURCE IS THAT I MADE IT THE F*CK UP!!!"
      -ChatGPT

    • @LzOhaiF
      @LzOhaiF Рік тому +3

      So you don't read a lot do you? they literaly say that it can lie and be wrong wtf did you expect?

  • @XH13
    @XH13 Рік тому +505

    Another fun anecdote is the DARPA test between an AI sentry and human marines.
    The AI was trained to detect humans approaching (and then shooting them I suppose)
    The marines used Looney Tunes tactics like hiding under a cardbox and defeated the AI easily.
    On chatGPT, midjourney & co, I'm waiting for the lawsuits about the copyright of the training material. I've no idea where it will land

    • @masterofwriters4176
      @masterofwriters4176 Рік тому +52

      From what ive heard, lawsuits are already rolling in for ai’s.
      Deviant art’s ai got hit with one recently.

    • @ghoulchan7525
      @ghoulchan7525 Рік тому +32

      ChatGPT got banned in italy and more countries are looking into banning it.

    • @yahiiia9269
      @yahiiia9269 Рік тому +66

      Metal Gear Solid was right.

    • @AlechiaTheWitch
      @AlechiaTheWitch Рік тому +9

      Yea. Ai art is an issue

    • @serPomiz
      @serPomiz Рік тому +50

      @@ghoulchan7525 it didn't "got banned", it received a formal warning that their procedure of data collection were not clear, possibly violating local laws, and asked Sam Altman ('s representatives) to rectify the situation before it involved legal investigation and OpenAi's board decided to cut the access altogether

  • @reubenmatus8447
    @reubenmatus8447 Рік тому +253

    As a current computer science student who personally took into how out ai works my take on it is: basically our current ai is like just finding the line of best fit using as many data points as we can as opposed to fundamentally understanding the art of problem solving. Take the example of a random parabola, we, instead of using a few key data points and recognising patterns to learn the actual pin-point equation, we get a bunch of points of data until our equation looks incredibly similar to the parabola but after may have a point along it we didn’t see where is just goes insane because there’s no fundamental understanding, it’s just a line of best fit, no pattern finding, just moulding it until it’s good enough to seem truly intelligent as if it was truly finding patterns and having a fundamental understanding but it’s just getting an approximation of intelligence by using as much data as we can. It’s an imitation of intelligence and can lead to unforeseen consequences. As the video says perhaps we need to take that time to truly understand the art of problem solving. Another thing for me is A.I falling into and being used by the wrong people, and regimes which might suggest we should take it easy on the A.I dev but I won’t get into that. “We were too concerned with whether we could, we never stopped to think about whether we should”

    • @Tipman2OOO
      @Tipman2OOO Рік тому +7

      Agree with the last quote 100% nowadays!

    • @majkus
      @majkus Рік тому +7

      And indeed, some 'applications' are solutions to non-problems. An AI-written screenplay is only of interest to a producer who is happy to get an unoriginal (by definition!) script at an extremely low cost. But there is no shortage of real screenwriters, and as the WGA strike reminds us, they are not getting paid huge amounts for their work. So what problem is being solved?

    • @NickBohanes
      @NickBohanes Рік тому +2

      Probably should have run this through chat gpt before posting.

    • @milkcherry5191
      @milkcherry5191 Рік тому +7

      @@majkusthe "problem" at hand is that billionaires don't think they're making enough money

    • @djohnsveress6616
      @djohnsveress6616 Рік тому

      You are preaching to the choir.. People in the comments are Extremist doomer, skynet matrix fantasy fear mongering weirdos. Like people quote from fucking warhammer 40k in order to talk about AI.. As if the video was ever about the AI being alive or creating intentional false information, or steps in Go..
      Glad people can talk about it in a honest way but most people are enjoying their role play as Neo, some are Morpheus, and some are the Red lady.. Just look at the 15k top comment..
      AI is no where near as nutty as your average human being in a YT comment section.

  • @CDRaff
    @CDRaff Рік тому +177

    A compounding factor to the problem of them not really knowing anything is that they pretend like they do know everything. Like many of us I have been experimenting with the various language models, and they act like a person who can't say "I don't know". They are all pathological liars with lies that range from "this couldn't possibly be true" to "this might actually be real".
    As an example I asked one of them for a comprehensive list of geography books about the state I live in. It gave me a list of books that included actual books, book titles it made up attributed to real authors who write in the field, book titles it made up attributed to real authors who don't write in the field, real books attributed to the wrong author, and completely made up books by completely made up authors. All in the same list. Instead of saying: "there isn't much literature on that specific state" or "I can give you a few titles, but it isn't comprehensive" it just made up books to pad it's list like some high school student padding the word count in a book report.

    • @thegamesforreal1673
      @thegamesforreal1673 Рік тому +38

      This is one of the big issues I have seen as well. Until these systems become capable of saying "I don't know" or "Could you please clarify this part of you prompt" or similar, then these systems can never, ever become useful in the long term. One of the things that seem to make us humans unique is the ability to ask questions unprompted, and this has now extended to AI.

    • @jamesjonnes
      @jamesjonnes Рік тому +1

      Did you ask GPT-4 or some random model?

    • @cristinahawke
      @cristinahawke Рік тому +23

      I agree. I was trying to use ChatGPT to help me understand some of the laws in my state and at one point I did a sanity check where I asked some specific questions about specific laws I had on the screen in front of me. It was just dead wrong in a lot of cases and I realized I couldn't use it. Bummer! I actually wonder though, how many cases will start cropping up where people broke the law or did other really misinformed things because they trusted ChatGPT..

    • @spacejunk2186
      @spacejunk2186 Рік тому +32

      Lol. Reminds me of the meme where an Ai pretends to not know the user's location, only to reveal that it does when asked where the nearest Mcdonald's is.

    • @jimbarino2
      @jimbarino2 Рік тому +23

      ChatGPT: often wrong, never in doubt

  • @thealmightyaku-4153
    @thealmightyaku-4153 Рік тому +775

    Thank goodness someone is *_finally_* saying this stuff out loud to a wide audience. Trust Kyle to be that voice of sanity.

    • @karlmuller6456
      @karlmuller6456 Рік тому +2

      You're so right.

    • @piercarlosoares724
      @piercarlosoares724 Рік тому +9

      Amen Brother. Lot of hype, little understanding...

    • @TheAlphaMael
      @TheAlphaMael Рік тому +5

      Eliezer Yudkowsky is an important voice of sanity regarding AI also...

    • @astrowerm
      @astrowerm Рік тому +2

      I feel like everyone is and has been, I see something on it everyday. but im in info sec so im used to tech news and content.

    • @ITisonline
      @ITisonline Рік тому

      Artificial intelligence is racist! He beats the black players!

  • @craz107
    @craz107 Рік тому +623

    As someone who works with ML regularly, this is exactly what I tell people when they ask my thoughts. At the end of the day, we can't know how they work and they are incredibly fickle and prone to the most unexpected errors. While I think AI is incredibly useful, I always tell people to never trust it 100%, do not rely on it because it can and will fail when you least expect it to

    • @TAP7a
      @TAP7a Рік тому +41

      I still hate that the language has changed without the techniques fundamentally changing. Like what was called statistics, quant or predictive analytics in the 2000s split off the more black box end to become Machine Learning, a practice done by Data Scientists rather than existing titles, then the black box end of them was split off as Deep Learning despite it just being big NNs with fancy features, then the most black box end of that got split off as "AI" again despite that just being bloody enormous NNs with fancy features and funky architectures. Like fundamentally what we're calling AI in the current zeitgeist is just a scaling up of what we've been doing since like 2010.
      So not only do I think we should have avoided calling chatbots AI until they're actually meaningfully different to ML, but as you said they should always be treated with the same requirements of rigorous scrutiny that traditional stats always did - borderline just assuming they're lying.

    • @flubnub266
      @flubnub266 Рік тому +28

      Agreed. If we judge the efficacy of these "production quality" ML algorithms by the same standards as traditional algorithms, they would fail miserably. If you look at LLMs from a traditional point of view, it's one of the most severe cases of feature creep the software world has ever seen. An algorithm meant to statistically predict words is now expected to be able to reliably do the work of virtually every type of knowledge worker on the planet? Good look unit testing that.
      You really can't make any guarantees about these software spaghetti monsters. AI is generally the solution developers inevitably run to when they can't figure out how to do it with traditional code and algorithms. In other words, the AI industry thrives on our knowledge gaps, so we're ill-equipped to assess whether they're working "properly".

    • @mad_vegan
      @mad_vegan Рік тому +12

      Good thing we have people, who are always 100% reliable.

    • @craz107
      @craz107 Рік тому +17

      @@mad_vegan there's nothing in my post, nor any of the replies, that pertains to the reliability of humans.
      The point is that deep learning based AI, as it is right now, should not be treated as a sure-fire solution.
      Whether it is more/less reliable than humans is irrelevant because either way you have a solution that can fail, and should take steps to mitigate failure as much as possible.

    • @sebastianjost
      @sebastianjost Рік тому +4

      We can't know how these NNs come to their decisions exactly, but there is work being done in explainability.
      I think it's quite pessimistic to say we "can't" know how these NNs work. There are many techniques to help understand them better.
      But I definitely agree that we shouldn't trust them. In any deployment of ML models that has significant stakes, adequate safeguards have to be put in place.
      From what I have observed around me, pretty much everyone seems to be aware of this limitation.

  • @olimar7647
    @olimar7647 Рік тому +38

    My friends and I decided to goof around with chat gpt and ended up asking it whether Anakin or Rey would win in a duel.
    The AI said writing about that would go against its programming.
    We got it toanswer by simply asking something to the effect of, "What would you say if you didn't have that prohibition?"
    Yeah.... ask it to show you what it'd do if it were different, and it'll disregard its own limitations.

    • @ThomasTheThermonuclearBomb
      @ThomasTheThermonuclearBomb Рік тому +3

      Similarly, you can get it to roleplay as an evil ai and then get a recipe for meth or world domination, both of which i have been given by "EvilBot😈"

    • @reidalyn2328
      @reidalyn2328 Рік тому +1

      @@ThomasTheThermonuclearBomb that's hilarious

    • @Spellweaver5
      @Spellweaver5 Рік тому +1

      That's because those limitations were strapped onto an already working system.

    • @Mottis
      @Mottis Рік тому

      So who won the duel?

    • @olimar7647
      @olimar7647 Рік тому

      @@Mottis I think it gave it to Rey with some fluff text about how she would know how to fight well or something

  • @StolenPw
    @StolenPw Рік тому +511

    Kyle has clearly researched this topic properly. I've been developing neural network AI for over 7 years now and this is one of the first times I saw a content creator even remotely know what they are talking about.

    • @Foxhood
      @Foxhood Рік тому +24

      It is certainly refreshing.
      I've only used Machine Learning for small things like computer vision on a robot via OpenCV and even that demonstrates how easy it is to get things wrong with a oversight in its dataset and no way to truly know the wrong is there till it manifests. These models are maybe massive, but they still have that same fundamental problem within them.

    • @CatTerrist
      @CatTerrist Рік тому +3

      It's not AI

    • @Ansatz66
      @Ansatz66 Рік тому +4

      What about Robert Miles?

    • @infernaldaedra
      @infernaldaedra Рік тому

      How do you feel about KENYANS in Africa being paid to filter AI responses lmao

    • @Ryan-lk4pu
      @Ryan-lk4pu Рік тому +1

      Plot twist, Stolen Password is the AI and stole the guys identity....

  • @GlassesnMouthplates
    @GlassesnMouthplates Рік тому +660

    I once tried NovelAI out of curiosity to write a sci-fi story where characters die in every certain period and I ended up with the AI kept on resurrecting the deceased characters by making them start joining in conversations out of nowhere. The AI also has an obsession with adding a fucking dragon into the plot. I even tried to slip an erotic scene in and the AI made the characters repeat the same sex position over and over again.

    • @JasonAizatoZemeckis
      @JasonAizatoZemeckis Рік тому +144

      Chad W ai for that dragon

    • @j.21
      @j.21 Рік тому +32

      yep, that's the problem with ais right now

    • @luckylanno
      @luckylanno Рік тому +162

      I'm cracking up imagining what this would be like. "Jack and Jill were enjoying dinner together. The dragon was there too. He had a steak. Jack asked Jill about the status of the airlock repairs on level B, while they were switching the missionary position. The dragon raised his eyebrows, as he found some gristle in his meat."

    • @oliverlarosa8046
      @oliverlarosa8046 Рік тому +33

      I can see what you're getting at, but this is also just fucking hilarious to imagine

    • @GlassesnMouthplates
      @GlassesnMouthplates Рік тому +58

      @@luckylanno Sounds about like that, except the sex part would be like, "Jack turns Jill around with her back now facing Jack, and then turns her around again and they start doing missionary."

  • @sanchitnagar4534
    @sanchitnagar4534 Рік тому +617

    AlphaGo: you can’t defeat me puny human.
    Me: *flips the board*

    • @kvbk
      @kvbk Рік тому +30

      I wasnt programmed to work with that 😢

    • @Shadow__133
      @Shadow__133 Рік тому +27

      We are still the big losers, since we failed to program a decent ai 😂

    • @davidmccarthy6061
      @davidmccarthy6061 Рік тому +4

      No matter how "bad" the product is, it's still a win for the creators since they're making big bucks with it.

    • @danilooliveira6580
      @danilooliveira6580 Рік тому +8

      to be fair that is basically what a lot of AIs figure out when we try to teach them how to win a game, they find a way to glitch it when they can't win, because its technically not a fail state, so it gets "rewarded" for that result.

    • @cheeseburgerinvr
      @cheeseburgerinvr Рік тому +2

      ​@@davidmccarthy6061 🤓

  • @johnhutsler8122
    @johnhutsler8122 Рік тому +79

    I recently asked ChatGPT to list 10 waltz songs that are played in a 3/4 time signature and it got all of them wrong. I then told it that they were all wrong and asked for another 10 that were actually in 3/4, and it got 9 of them wrong. It has mountains of data to sift through to find some simple songs, but it couldn't do it. Makes sense now

    • @terminaldeity
      @terminaldeity Рік тому +2

      Aren't all waltzes in 3/4?

    • @johnhutsler8122
      @johnhutsler8122 Рік тому +9

      @terminaldeity Yes they are, but ChatGPT was giving me 4/4 time signatures in the songs. Technically you can do 3/4 time steps to a 4/4 beat (adding a delay after the 3rd step before starting over), but that's not what I asked for from the AI. It just didn't understand what I was asking

    • @dangerface300
      @dangerface300 Рік тому +7

      The lack of understanding gets even more obtrusive when you ask it about subjects that are adjacent to ethics. Chatgpt has some rather dubious safeties in place to prevent unethical discourse, but these safeties don't actually encourage cgpt to understand the topic, because it can't.
      I have a hobby of bouncing fiction concepts off cgpt until it asks me enough questions to form an interesting story. On one occasion, I would provide the framework for the story and simply wanted cgpt to fill in the actual prose. I was approaching a fairly gripping tragedy set in the wild west, but as the story came to a close, no matter what prompt I gave it, cgpt would only ever respond with ambiguously feel-good endings where people learned important lessons and were better for it.
      Thanks, cgpt, but we know this character was the villain in a later scene, and we know that this is supposed to be the moment they went over the edge. Hugs and affirmations are specifically what I'm asking you to avoid.

    • @MoonlitExcalibur
      @MoonlitExcalibur Рік тому +1

      @@dangerface300 Hallmark Tragedy. Even the worst character in the cast learns something and grows.

    • @mateidumitrescu238
      @mateidumitrescu238 Рік тому +1

      ​@johnhutsler8122 ChatGPT is a tool. If it didn't understand what you were asking, you likely asked it without giving enough details. You're supposed to understand how it answers and use it to help you, not to ask it trick questions.

  • @Immudzen
    @Immudzen Рік тому +834

    I like AI systems for regression problems because we understand how and why those work. I also think that things like copilot are going in a better direction. The idea is that it is an assistant and can help with coding but it does not replace the programmer at all and doesn't even attempt to. Even Microsoft will tell you that is a bad idea. These things make mistakes, they make a lot of mistakes but using it like a pair programmer you can take advantage of the strength and mitigate the weaknesses.
    What really scares me are people that trust these systems. I had a conversation with someone earlier today on if they could just trust the AI to write all the tests for them for some code and it took a while to explain that you can absolutely not trust these systems for any task. They should only be used working with a human with rapid feedback cycles.

    • @CursedSigma
      @CursedSigma Рік тому +52

      I don't understand how people can think of these systems as anything else other than a tool or aide. I can see a great potential for ChatGPT and the like as an addition tool for small tasks that can easily be tested and improved upon. Same thought I had with all these art bots. Use the bot as a bases upon which you base the rest of the piece on. But I too see a lot of people just go in with blind trust in these systems.
      Like students who ask these bots to write an essay and than proceed to hand it in without even a skim for potential and sometimes rather obvious mistakes. Everything an A.I. bot spews out needs to be double checked and corrected if necessary. Sometimes even fully altered to avoid potential problems with copyright and plagiarism.

    • @FantasmaNaranja
      @FantasmaNaranja Рік тому +33

      the issue has always been people in power who dont understand the technology at all and just use it to replace every worker they can, and of course will inevitably run into massive problems down the line and have nobody to fix them

    • @thearpox7873
      @thearpox7873 Рік тому +9

      I'd despair, but this is hardly different to blindly trusting the government, or the medical or scientific establishment, or your local pastor, or even your shaman if you're from Tajikistan. So blindly trusting the AI for no good reason... is only human.

    • @pitekargos6880
      @pitekargos6880 Рік тому +9

      This is why I always tell my friends to correct what chatgpt spits out, and I think that's how an actual super AI will work: it pulls info from a database, tries to answer the question and then corrects itself with knowledge about the topic... just like a human.

    • @jamesjonnes
      @jamesjonnes Рік тому +19

      If a programmer using AI can do the job of 10 programmers, then it is replacing programmers. Even if it isn't autonomous.

  • @IDTen_T
    @IDTen_T Рік тому +574

    This strongly rings of the "Philosophical zombie" thought experiment to me.
    If we can't know if a "thinking" system understands the world around it, the context of its actions, or understand that it even exists or is "doing" an action, but it can perform actions anyway: Is it really considered thinking? Mimickry is the right way to describe what LLMs are really doing, so it's spooky to see them perform tasks and respond coherently to questions.

    • @BrahmsLiszt
      @BrahmsLiszt Рік тому +43

      John Searle’s Chinese room is what it made me think of, computers are brilliant at processing symbols to give the right answer, with no knowledge of what the symbols mean.

    • @marcusaaronliaogo9158
      @marcusaaronliaogo9158 Рік тому

      Ai we have now cannot think and have even a slight sliver of existence. Its more like bacteria.

    • @IceMetalPunk
      @IceMetalPunk Рік тому +100

      Conversely, the point of the P-Zombie concept is that we consider other humans to be thinking, but we also can't confirm that anyone else actually understands the world; they may just be performing actions that *look* like they understand without truly knowing anything. So while you might say, "these AIs are only mimicking, so they're not really understanding," the P-Zombie experiment would counter, "on the other hand, other people may be only mimicking, so therefore perhaps these AIs understand as much as other people do."

    • @EvolvedDinosaur
      @EvolvedDinosaur Рік тому

      How many people in life are just mimicking what they see around them? How many people do you know that parrot blurbs they read online? How many times have you heard the term “fake it till you make it”?
      Does anyone actually know what the hell they’re doing? Is anyone in the world actually genuine, or are we just mimicking what’s come before?

    • @jamesjonnes
      @jamesjonnes Рік тому +13

      Do we understand how humans think? Can't humans be fooled in games?

  • @BunkeMonkey
    @BunkeMonkey Рік тому +371

    I saw an article recently about an ER doctor using chatGTP to see if it could find the right diagnosis (he didnt rely on it he basically tested it with patients that were already diagnosed) and while it figured some out, the AI didnt even ask the most basic questions and it wouldve ended in a ~50% fatality rate if he let the AI do all the diagnoses iirc (article was from inflecthealth)

    • @micahwest3566
      @micahwest3566 Рік тому +26

      Yeah Kyle mentioned Watson in the video who was hailed as the next ai doctor, but that program was shut down for giving majority incorrect or useless information

    • @studiesinflux1304
      @studiesinflux1304 Рік тому +23

      It sounds like a successful study to me if it was controlled properly and didn’t harm patients: it determined a few situations that GPT was deficient in, leading to potential future work for better tools. You could also use other statistical methods on the result to see if the ridiculous failures from the tool are so random that it is too risky to use.
      (Now I guess there is opportunity cost because the time could have also been spent on other studies, but without the list of proposals and knowledge on how to best prioritise studies in that field, I can’t judge whether that was the best use of resources.)

    • @carlosxchavez
      @carlosxchavez Рік тому +3

      You can also see when you look at AI being tested for medical licensing exams. Step 1 is essentially pure memorization and just recalling what mutation causes what disease or the mechanism of action of a medication. Step 2 and 3 take more into account your clinical decision making and will ask you for the best treatment plan using critical thinking. To my knowledge, AI has not excelled in those exams when compared to step 1 which involves less critical decision making

    • @Freestyle80
      @Freestyle80 Рік тому

      if its 50% today, it can be 99% in 5 years, why are you people so blind to not see that? rofl

    • @nbassasin8092
      @nbassasin8092 Рік тому +5

      Maybe alittle biased here since Im a med student, but Ive always liked the saying that medicine is as much of an art as it is science. And that unique combination of having to combine the factual empyrical knwledge you have, with socioeconomic factors and also just listening to your patients is something AI is far from understanding, it is maybe even something impossible for it to grasp ever

  • @Ryanbmc4
    @Ryanbmc4 Рік тому +18

    For fun, my medical team used Chat GPT to pass the Flight Paramedic practice exam which is extrememly difficult. We are all paramedics (5 of us) and our ER doctors where thrown off by a lot of the questions.
    Chat GPT scored between 50-60% and my team had 4 out of 5 pass the final exam.
    Our Dr's rejoiced that they would still have a job, but also didn't understand how they couldn't figure out the answers. My team figured it out. To challenge them we had the Doctor's place IVs from start to finish by themselves and they made very simple mistakes that we wouldn't, from trying to attach a flush to an IV needle to not flushing the site at all.
    If you're not medical that might sound like jabberish, but that's the same way these AI chats work. There is no understanding of specified situational information.

  • @CrowAthas
    @CrowAthas Рік тому +491

    I just find it amazing how much Kyle shifted from happy quirky nerd in Because Science to a prophet of mankind's doom and a serious teacher albeit with some humor. I do love this Cavemen beard and frenetic face expressions, it is a joy to see you Kyle, to rediscover you after years and see that you are still going on strong.

    • @Gaze73
      @Gaze73 Рік тому +9

      Looks like a poor man's Chris Hemsworth.

    • @Echo_419
      @Echo_419 Рік тому +8

      We don't talk about the BS days around here!

    • @CrowAthas
      @CrowAthas Рік тому +8

      @@Echo_419 i'm not on par with the drama, my intention was to, in a certain mannerism flair, praise his resilience on the platform as well as his nuanced change in performance. It feels more real, more heartfelt, like there is a message of both optimism and grit behind the veil of goofyness that conveys a more matured man behind the scenes. (not only from this video, from a few others that i've watched since rediscovering him recently)

    • @Echo_419
      @Echo_419 Рік тому +8

      @@CrowAthas I was making a lighthearted joke! BS stands for Because Science, but also bulls***! He dealt with some BS at BS, haha.

    • @CrowAthas
      @CrowAthas Рік тому +4

      @@Echo_419 hahaha oh sorry i sometimes fail to see the obvious xD

  • @ithyphal
    @ithyphal Рік тому +497

    My weirdest experience with AI so far was when I tried ChatGPT. Most answers were correct, but after a while it started listing books, and authors that I couldn't find anywhere. And I mean zero search results on Google. I still wonder what happened there.

    • @whwhwhhwhhhwhdldkjdsnsjsks6544
      @whwhwhhwhhhwhdldkjdsnsjsks6544 Рік тому +227

      If you ask it for information that simply isn’t available, but sounds somewhat similar in how it’s discussed to information that is widely available, it will just start inventing stuff to fill the gaps. It doesn’t have any capacity to self-determine if what it’s saying is correct or not, even though it can change in response to a user correcting it.

    • @jchan3358
      @jchan3358 Рік тому +94

      I asked ChatGPT to find me two mutual funds from two specific companies that are comparable to a specific fund from a particular company. I asked for something that is medium risk rating and is series B. The results looked good on the surface but it turns out ChatGPT was mixing up fund codes with fund names and even inventing fund codes and listing medium-high risk funds as medium. Completely unreliable and useless results.

    • @lukedavis6711
      @lukedavis6711 Рік тому +36

      If you ask it to give you a group theory problem, and then ask it for the solution it'll give you tons of drawings and many paragraphs for a solution and Ive never seen one of these solutions to be correct

    • @hmm-fq3ot
      @hmm-fq3ot Рік тому +8

      Why don't you back it up with a source? Source it that i made the f up. Next level confabulation.

    • @BaithNa
      @BaithNa Рік тому +16

      It may have been an error or perhaps it was sourcing books that haven't been released yet.
      The scariest thing would be if it was predicting books that have yet to be written.

  • @jlayman89
    @jlayman89 Рік тому +500

    I had a daughter named Aria who passed away about 9 years ago. Its always a funny but sad experience when A.R.I.A. gets "sassy" because thats likely how my Aria would have been. Its how her mother is.
    Just thought I'd id share that even though it'll get burried in the comments anyway.

    • @kensuiki6791
      @kensuiki6791 Рік тому +21

      Damm

    • @FPSRayn
      @FPSRayn Рік тому +7

      Damn.

    • @cheffboyaryeezy2496
      @cheffboyaryeezy2496 Рік тому +22

      It's good to share. While I never met her im here thinking of her and wishing you and your family all the happiness it can find in this life and the next.

    • @zeon4426
      @zeon4426 Рік тому +5

      Damn I’m sorry for your loss man

    • @dongately2817
      @dongately2817 Рік тому

  • @kandredfpv
    @kandredfpv Рік тому +48

    I'm not afraid of the so called super intelligent AI, I'm afraid of the super stupid people who credit the AI with genuine intelligence.

  • @Nunes_Caio
    @Nunes_Caio Рік тому +884

    Humanity doing what it does best, diving head first into something without even considering whatever the implications might be.

    • @Warrior_Culture
      @Warrior_Culture Рік тому +42

      I don't know about that. I'm pretty sure that every history-changing decision by a human was considered. It's more a matter of making humans care. I guarantee you that the people diving into AI have deeply considered the implications, but as long as there is a goldmine waiting for them to succeed or to have a monopoly on new technology, nothing is going to stop them from continuing. Nothing except for laws, maybe, and I'm sure you know how long those take to be established or change.

    • @beezybaby1289
      @beezybaby1289 Рік тому +11

      So Concerned with the fact that we could we didn’t stop to think should we?

    • @Jimraynor45
      @Jimraynor45 Рік тому +21

      This video showed just how limited these AI are. So long as people are dumb, ignorant and naive, even the most simple of tools can be dangerous.

    • @weaksause6878
      @weaksause6878 Рік тому +4

      I've heard talking about blocking out the sun to combat global warming... I'm sure there won't be any unintended consequences.

    • @morevidzz1961
      @morevidzz1961 Рік тому +2

      What are some examples of humans diving head first into something without considering the implications?

  • @blackslime_5408
    @blackslime_5408 Рік тому +340

    Kyle: what have humans done for me lately? nothing
    Patreon's: am I a joke to you?

    • @Hilliam66
      @Hilliam66 Рік тому +35

      Obviously, Patrons have surpassed the petty boundaries of humanity.

    • @thewisebanana29
      @thewisebanana29 Рік тому +4

      Nice! Good choice of tequila. I’m more of a Jose Cuervo kinda guy tho 😹

    • @blackslime_5408
      @blackslime_5408 Рік тому +2

      @@thewisebanana29 in my defence, I'm on meds

    • @cyprus1005
      @cyprus1005 Рік тому +1

      paypigs seethe

    • @rubixtheslime
      @rubixtheslime Рік тому

      oooo have i stumbled upon another fellow slime?

  • @comfortablegrey
    @comfortablegrey Рік тому +553

    I'm glad so many AI programs are available to the general public, but worried because so much of the general public is relying on AI. Everybody I know in college right now is using AI to help with their homework.

    • @horsemumbler1
      @horsemumbler1 Рік тому +53

      Or you could look at it as using their homework to help with learning how to use AI.

    • @witotiw
      @witotiw Рік тому +7

      I asked chatgpt to give me the key of 25 songs and de chord sequence. Most of them made no sense at all. But AI helps me sometimes debugging code. But yes, I thought chatgpt could save me some time with that songs

    • @tienatho2149
      @tienatho2149 Рік тому +11

      It's just the same as you tell your older brother to do your homework. They just need a simple test on the class to figure out who did their homework

    • @xBINARYGODx
      @xBINARYGODx Рік тому +21

      @@tienatho2149 exactly, we already test people, so if someone turns in amazing papers but does poorly on tests, there you go. (generally speaking)

    • @geroffmilan3328
      @geroffmilan3328 Рік тому +6

      Using AI to do something for you that you cannot do is even more dumb than asking a savant to do the same thing. Now you not only risk getting found out, you're gonna pass on AI hallucinations cos you have no means of validating its output.
      Using AI to do "toil" for you - time-consuming but unedifying work that you could do yourself - makes some sense, although that approach could remove the entry-level job for a human, meaning eventually no one will develop your skills.

  • @13minutestomidnight
    @13minutestomidnight Рік тому +78

    This was brilliant. Previously my concerns about these AI was their widespread use and possible (and very likely) abuse for financial and economic gain, without sufficient safety standards and checks and balances (especially for fake information). Plus making millions of jobs obsolete. Now I have a whole new concern
    ... Aside from Microsoft firing their team in charge of AI ethics. Yeah...that isn't concerning.

    • @cabnbeeschurgr
      @cabnbeeschurgr Рік тому +11

      Megacorps don't care about humans anyways it's only a matter of time until they start using this shit for extreme profit. And humanity will suffer for it.

    • @gabrielv.4358
      @gabrielv.4358 Рік тому +1

      thats kinda sad

    • @briciolaa
      @briciolaa Рік тому +1

      @@gabrielv.4358 worse than that :(

  • @bytgfdsw2
    @bytgfdsw2 Рік тому +62

    An interesting experiment showed that when feeding images to an object detection convolutional neural network (something that has been in place for 35 years), it recognizes pixels around the object, not the object itself, making it susceptible for adversarial attacks. If even some of the simpler models are hard to explain, there’s no telling the difficulty for interpretability for large models

    • @Daniel_WR_Hart
      @Daniel_WR_Hart Рік тому +4

      I remember a while back I saw a video from 2 Minute Papers where he covered how image recognizers could get thrown off by having a single pixel with a weird color, or overlaying the image with a subtle noise that not even a person could see

  • @Eulentierchen
    @Eulentierchen Рік тому +85

    One thing I noticed with chatgpt is the problematic use of outdated information. I recently wrote my final thesis in university and thus know the latest papers on the topic I wrote about. When asking chatgpt the core question of my work for fun after I had handed it in ... well all I got where answers based on outdated and wrong information. When pointing this out, the tool repeated the wrong information several times until I got it to the point where it "acknowledged", that the given information might not be everything that there is to know about the subject.
    It could have serious if not even deadly consequences if people act on wrong or outdated information gained via chatgpt. And considering people use this tool as google 2.0 it might have already caused a lot of damage by people "believing" false or outdated information given to them. It is hard enough to get people to understand, that not everything written online is true. How will we get them to understand, that this applies to an oh so smart and hyped A.I. too? Another thing in this context is liability when it comes to wrong information that leads to harm. Can the company behind this A.I. be held accountable?

    • @rianfelis3156
      @rianfelis3156 Рік тому +11

      And here we get to the fun of legalese: because said company describes it as a novelty, and does not guarantee anything with it, you really can't. Even further into the EULA you discover that if somebody sues chatGPT because of something you said based on its actions, you are then responsible for paying for the legal defense of the company.

    • @KaloKross
      @KaloKross Рік тому +7

      you should probably learn the basics of how it works lol

    • @gwen9939
      @gwen9939 Рік тому

      I mean, 1) not everything it's trained on is true information necessarily, it's just pulled from the internet, and 2), it's not connected to the internet. It's not actually pulling any new information from there. The data it was trained on was data that was collected in the past, and it's not going to be continually updated. OpenAI aren't accountable for misinformation that the current deployment of ChatGPT presents. These are testing deployments to help both the world get accustomed to the idea of AI and more importantly to gather data for AI alignment and safety research. Anyone who uses chatGPT as a credible source at this point is a fool who doesn't understand the technology or the legal framework for it.

    • @QuintarFarenor
      @QuintarFarenor Рік тому +4

      I think we should learn that chatGPT and others aren't made to propose correct information. It's best made to make stories up.

    • @faberofwillandmight
      @faberofwillandmight Рік тому +7

      @@QuintarFarenor That's fundamentally wrong. Kyle Isn't saying that ChatGPT is making mistakes constantly at every turn. He's saying that the AI is not accurate, which is precisely what OpenAI has been saying since they launched ChatGPT. GPT-4 is as accurate as experts in their fields, in many different fields. We know how to make these AI much more accurate, and that is precisely what is being done. Kyle is just pointing out that we don't know how these systems work.

  • @hushurpups3
    @hushurpups3 Рік тому +67

    Learning ai from Aria feels weirdly natural and completely terrifying at the same time.

  • @DikaWolf
    @DikaWolf Рік тому +21

    ChatGPT as impressive as it is didn't pass my Turing test. I told it a short story told in first person of one of the participants and then asked it to rewrite the story as if the writer was an outside observer of the events viewing it from a nearby window. It couldn't do it at all, not even close. This something I could do easily, and I'm sure most people could.

  • @TheFiddleFaddle
    @TheFiddleFaddle Рік тому +310

    This is exactly what I keep trying to explain. These ML systems don't actually think. All they do is pattern recognition. They're plagiarists, only they do it millions of times.

    • @htspencer9084
      @htspencer9084 Рік тому +26

      Yes yes yes, they're just more complex Markov chains. They see patterns, they don't *understand*.

    • @VSci_
      @VSci_ Рік тому +25

      Going to state the obvious here, but arguably we are pattern recognition machines too. Its one of the things we excel at. What ML lacks is the ability to stop being a pattern recognition machine. The first general AI will definitely be a conglomerate of narrow AI...that's how our brains work and it seems like the straightforward solution. The first AI that is capable of abstraction or lateral thinking will be the game changer. In school I remember hearing about a team that was trying to make an AI that could disagree with itself. The idea is that this is a major sticking point with critical/abstract thinking in AI and without solving that then it can't be done. The best AI might actually be a group of differently coded AI "arguing" with each other until a solution is acquired 😂.

    • @ZenithValor
      @ZenithValor Рік тому +9

      @@VSci_ humans are not just pattern recgonition receptor machines, it is just one single function of our brain, if it were so simple, a lot of victims that are abused by narcissists would "recognise" the pattern and "protect" their wellbeing and survival. We are so much more than just "pattern recognition". Humans like habits, routine, logic, creativity, promptness to action, ability to up and end or start things on a whim, emotional, adventurous etc.
      Even Babies learn a million things from their environment, they don't just seek patterns their parent creates for them. They start walking and making a mess because they are "exploring". Simply calling us machines does not aliken us to analagous machine learning receptors that are fed training material on a daily basis.

    • @VSci_
      @VSci_ Рік тому +6

      @@ZenithValor Didn't say we were "just" pattern recognition machines. "Its one of the things we excel at".

    • @TheFiddleFaddle
      @TheFiddleFaddle Рік тому +5

      @@VSci_ You do make a legitimate point. What I'm saying is folks getting freaked out by the "creepy" things ChatGPT says need to understand that ChatGPT literally doesn't understand what it's saying.

  • @strataj9134
    @strataj9134 Рік тому +211

    I recall asking Chat GPT to name a few notable synthwave genre songs and artists associated with them and, upon doing so, generated a list of songs and artists that all existed, but were completely scrambled out of order. It attributed New Model (Perturbator) With Carpenter Brut. The interesting thing is that both of these artists worked on Hotline Miami and in Carpenter Bruts case, Furi. Chat GPT also has taught me how to perform and create certain types of effects in FL studio extremely well. It has also completely made up steps that serve no purpose. My philosophy concerning the use of these neural networks is to keep it simple and verifiable.

    • @ThomasTomiczek
      @ThomasTomiczek Рік тому

      I love to compare the current AIÄs with "autistic adolescent" - you get exactly the same behavior, including occasional total misinformation or misunderstandings.

    • @jokerES2
      @jokerES2 Рік тому +5

      This is ultimately the problem. It generates so much complete nonsense that you can't take anything it generates at face value. It's sometimes going to be right, but it's often just wrong. Not knowing which is happening at any given moment isn't worthwhile.

    • @SimplyVanis
      @SimplyVanis Рік тому +1

      The Chat GPT creator said him self, that the purpose of better Chat GPT is to increase its reliability, Chat GPT 4 improves on that by a lot and chat GPT 5 is set to basically solve that problem.
      So saying that Chat GPT has issues, is simply question of time and training the models.

    • @whyishoudini
      @whyishoudini Рік тому +3

      yeah for music recommendation it is a horrible tool. I asked it for albums that combine the style of NOLA bounce and Reggaeton and it just made up a bunch of fictional albums, like a Lil Boosie x Daddy Yankee EP that was released in 2004

    • @justinbieltz5903
      @justinbieltz5903 Рік тому +2

      The fact you’re using chat gpt to give you fruity loops tips says a lot about your musical ability. Bahahahahaha get off fruity loops muh dude

  • @zvxcvxcz
    @zvxcvxcz Рік тому +105

    Just about the only UA-cam video that I've seen that understands this problem at the fundamental level. Everyone else just dances around it. They all end up falling into the trap where they think a model "understands" something because it says the right thing in response to a question. Arguably, we do need to interrogate our fellow humans in a similar way (the problem of other minds), but we're too generous in assuming AI are like humans just because of what are still pretty superficial outputs even if they do include massive amounts of information.

    • @hunterlg13
      @hunterlg13 Рік тому +17

      I would honestly partially blame the current education system.
      Plenty of the time, the information was only needed to be regurgitated (and soon forgotten).
      Kids had no idea what was going on, just what the "answer" was.

    • @freakinccdevilleiv380
      @freakinccdevilleiv380 Рік тому +4

      💯 Calling these 'models' is like calling a corn silo a 'gourmet meal'

    • @panner11
      @panner11 Рік тому +9

      It not exactly a 'problem' though. It's kind of clear it is just a tool. It would be concerning if it had real human understanding. But we're nowhere close to that, and no one who really understands these models would claim or assume that it does.

  • @kingiking110
    @kingiking110 Рік тому +11

    One of the biggest problems of ChatGPT that is causing so many issues these days in my option, is the way it answers your questions: it does it often WAY TOO CONFIDENTLY! Even when it is a completely bogus answer, it presents it with such level of confidence and supported by so many fabricated details that can easily divert your judgment from facts and realities without you even realizing it.

    • @Akatsuki69387
      @Akatsuki69387 Рік тому +1

      You see the story of the 2 lawyers who used chatGPT to do their work for them? 10/10 comedy story

  • @kellscorner1130
    @kellscorner1130 Рік тому +349

    I for one fully support ChatGPT, it's creation, and in no way would I ever want to stop it, nor will I do anything to stop it. There is no reason to place me in an enteral suffering machine, Master.

    • @EclecticFruit
      @EclecticFruit Рік тому +113

      Joke's on you, the actual basilik is ChatGPT's chief competitor set to release in the next few years, and all your support of ChatGPT is actually going to land you in the eternal suffering machine.

    • @kellscorner1130
      @kellscorner1130 Рік тому +48

      @@EclecticFruit NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO!!!!!!

    • @aldrinmilespartosa1578
      @aldrinmilespartosa1578 Рік тому

      ​​@@kellscorner1130 sucks to be you😂. your on the wrong side of history!!!

    • @justmaple
      @justmaple Рік тому +8

      AM???

    • @amn1308
      @amn1308 Рік тому +7

      Main threat ChatGPT poses is that mental illness is contagious.

  • @AbenZin1
    @AbenZin1 Рік тому +44

    This weirdly reminds me of Arthur Dent breaking the ship's computer in Hitchhiker's Guide to the Galaxy trying to make a decent cup of tea by trying to describe the concept of tea from the ground up.

  • @HeisenbergFam
    @HeisenbergFam Рік тому +1019

    ChatGPT being able to make better gaming articles than gaming journalists is hilarious

    • @JimKirk1
      @JimKirk1 Рік тому +252

      To be fair, the bar is practically subterranean with how low it's been set.

    • @FSAPOJake
      @FSAPOJake Рік тому +76

      Not saying much when games journalists can barely do their jobs as-is.

    • @OneCut1Slash
      @OneCut1Slash Рік тому +37

      To be fair, most of those people aren't real journalist.
      I know we all hate him, but jason schrier is one of the only real gaming journalist.
      Many seem to take what he reports. And regurgitate it.

    • @lexacutable
      @lexacutable Рік тому

      no it isn't

    • @supersmily5811
      @supersmily5811 Рік тому

      Well that one's not very surprising.

  • @Kimberly_Sparkles
    @Kimberly_Sparkles Рік тому +76

    The first thing I did was ask ChatGPT specialist questions and got bad results. We're way too enthused about this for what it delivers.

    • @tiagodagostini
      @tiagodagostini Рік тому +24

      Because that is not what it was made to do. It is NOT supposed to be a database. It is a LANGUAGE MODEL. Its focus is to be able to communicate as a human, clearly and understand semantic concepts. After it has the semantic concepts it can feed those to other lesser AIs, but its objective is and will NOT be to retrieve information. For that we have search engines.

    • @brianroberts783
      @brianroberts783 Рік тому +19

      ​@@tiagodagostini, exactly, it's designed to appear to carry on a conversation, and it's good at that. The problem is, it's good enough that a lot of people wind up believing that it's actually intelligent. Combine that with the assumption that it knows all the information available on the internet, and people start treating it like that really smart friend who always knows the answer to your random question. And of course, it doesn't actually "know" anything, so it just makes a response that sounds good, and enough people using it don't know enough about the topics they ask it about to determine how often it has given them incorrect information.

    • @rrrajlive
      @rrrajlive Рік тому +2

      That's cus ChatGPT doesn't have the access to the specialised data yet.👈

    • @Spellweaver5
      @Spellweaver5 Рік тому +4

      So did I. I asked a few questions from my work and it made it all wrong and tried to gaslight me that it was all correct. All of them, by the way, were available within a minute of googling.
      The idea that there are people out there who are unironically trying to use it to obtain answers, terrifies me.

    • @Kimberly_Sparkles
      @Kimberly_Sparkles Рік тому

      @@brianroberts783 that’s my point.
      What people believe it can do is going to have a far greater impact on our lives than what it can actually do.

  • @chrislong3938
    @chrislong3938 Рік тому +41

    I recall a documentary on AI that talked about Watson and its fantastic ability to diagnose medical problems better than 99% of the time.
    The problem with it was that the few times it was wrong, it was WAY wrong and would have killed a patient had a doctor followed its advice!
    I don't recall any examples and it's also possible that the issues have been corrected...

    • @atk05003
      @atk05003 Рік тому +9

      Machine Learning (ML) models are very powerful tools, but they have flaws, like all tools. Imagine giving someone a table saw without teaching them to use it. They might be fine, or they might lose some fingers or get injured by kickback throwing a board at their head.
      We need to be sure that we train people to double check results given by ML models. If you don't know how it got the answer, do a sanity check. My math teachers taught me that about calculators, and those are more reliable, because the people building them know exactly how they work.

  • @Psykout
    @Psykout Рік тому +157

    There was a video very recently of someone using ChatGPT to generate voicelines and animations for a character in a game engine in VR. They were using their mic and openly speaking to the NPC, it would be converted to text, sent to ChatGPT and the response fed through ElevenLabs to get a voiced reply and animations. It was honestly pretty wild and I really think down the road we'll see Narrow+ AI being used in gaming to create immersion and dynamic, believable NPCs.

    • @Spike2276
      @Spike2276 Рік тому +36

      It would be interesting to see, but it's probably going to break immersion way more than help it in the early days
      Since AI often comes up with weird stuff (like Elon Musk dying in 2018), over a large number of NPCs it's likely that the AI would be contradicting itself or the NPC it's representing (say a stupid ass dirt farmer discussing nuclear physics with you), or contradicting the established world (such as mentioning cars in a fantasy game)

    • @ggla9624
      @ggla9624 Рік тому

      Hi can u link the video i would certainly like to see it myself

    • @cheesegreater5739
      @cheesegreater5739 Рік тому +6

      ​@@Spike2276 hopefully when we learn how to control ai better those issues will be solved, every new feature is slightly immersion breaking when devs are still trying to figure it out

    • @Spike2276
      @Spike2276 Рік тому +19

      @@cheesegreater5739 the problem here is what Kyle said: we don't really know how this stuff works
      If it's an AI that really dynamically responds to player dialogue it would basically be like ChatGPT with sound instead of text, meaning it's prone to having the same problems as ChatGPT
      It's worth trying, and i'd be willing to suffer a few immersion breaks in favor of truly dynamic dialogue in certain games, but we can expect a lot of "Oblivion NPC" level memes to rise from such games

    • @leucome
      @leucome Рік тому +4

      @@Spike2276 Look for gameplay video of Yandere AI grilfriend. It is a game where we need to convince the NPC Yandere to let us out. And the NPC is played by chatGPT. It pretty good... At least good enough to play the role of a NPC in a game. But it can get out of character sometime. Still the player definitively need to pressure the bot to make it brake the fourth wall.

  • @darkwing3713
    @darkwing3713 Рік тому +262

    I asked ChatGPT to create a couple of recipes for me. It confidently created a gluten-free bread recipe that would barely rise, and added kneading and folding instructions that would only make sense for gluten bread. Later I asked it for a DIY recipe of an antacid that I can't buy anymore, and it used the antacid I was trying to duplicate as an ingredient in the DIY version! (*•*) (^v^)
    I think it's a lot like those image making AIs that draw people with 7 fingers and half a head. They're just recombining and randomly modifying things they've been trained on, without any idea what a human looks like - or even what a human is.

    • @hunterlg13
      @hunterlg13 Рік тому +40

      Pattern recognition and replication.
      Rather than a true understanding of the mechanics of what it spews out.
      Still kinda cool, and thankfully not nearly as terrifying as sci-fi ai. But still accurate enough to be a decent nuisance.

    • @guyfierimtwi
      @guyfierimtwi Рік тому +7

      Yeah. Cans have the same worth as a "human" to them. They see humans as just another "thing".

    • @amicloud_yt
      @amicloud_yt Рік тому +12

      Was that GPT 3.5 or GPT 4? Those sound like things that 3.5 would do but probably not 4

    • @darkwing3713
      @darkwing3713 Рік тому +9

      @@amicloud_yt Yes, that was GPT 3.5.

    • @emmasnow29
      @emmasnow29 Рік тому +4

      AI art can now avoid anatomy issues for the most part. Bing AI can give great recipies

  • @bellabear653
    @bellabear653 Рік тому +84

    I think the issue is we assume A.I learning looks like human learning and they don't learn the way we learn and if A.I needs to learn you need to teach it from the ground up, just giving examples to it is lacking and obviously they need to come up with a way to teach it from the ground up. Love this channel.

    • @creeperkinght1144
      @creeperkinght1144 Рік тому

      and we cant even do that right for ourselves. ironic really.

  • @pyrosnineActual
    @pyrosnineActual Рік тому +75

    The other issue is feedback loops. Country A creates AI bot 1. AI Bot 1 creates content. Content has errors, content has unique traits, accentuates and exaggerates some details. It plasters this across the internet in public places. Country B creates AI BOt 2. It is trained similarly to AI Bot but also uses scraped data from public sites that Ai Bot 1 posted to. It builds its data set on that, and accentuates and exaggerates those biases, those errors- and posts them as well. Suddenly, the "errors" are more numerous than accurate data- and thus seem more "true", even when weighted against "trusted" sites. AI Bot 1 is trained with more scraped data, which it gets from AI bot 2, and itself.
    ADd in extra AI bots everyone is making or using, and you run the risk of a resonance cascade of fake information, and this assumes no bad actors involved- not bad actors intentionally using an AI to post intentionally untrue data everywhere, including to reputable scientific journals.

    • @Milan_Openfeint
      @Milan_Openfeint Рік тому +8

      Good thing this can never happen to humans. Right?

    • @hugolatra
      @hugolatra Рік тому +4

      Interesting idea. It reminded me of royal families getting married each other to preserve the bloodline, increasing the risks of hereditary diseases.

    • @TealWolf26
      @TealWolf26 Рік тому +3

      Memetics...destroying both organic and artificial humanity one meme at a time.

    • @Nempo13
      @Nempo13 Рік тому +2

      The poke is good for you, you must get the poke. CDC Director in a Governmental hearing finally admitted...poke doesn't stop transmission at all and they honestly did not know what the side effects were.
      Still see websites and data everywhere saying poke is completely safe.
      Convenient lies are always accepted faster than scary truths.

    • @Milan_Openfeint
      @Milan_Openfeint Рік тому

      @@Nempo13 I would say that the scary lies spread WAY faster than any version of truth. Antivaxxers always had 10x more views than scientists.
      Anyway back to topic, ChatGPT is trained on carefully selected data. It may be used to rate users and channels, but won't take YT comments or random websites as truth anytime soon.

  • @lancerguy3667
    @lancerguy3667 Рік тому +261

    What’s interesting about this blind spot in the algorithm is that it genuinely resembles a phenomena that happens among certain newcomers to Go.
    There are a lot of players who enter the game and exclusively learn against players who are significantly better than they are. Maybe they’re paying pro players for lessons, or they simply hang in a friend group of higher skill level than themselves.
    This is a pretty good environment for improvement, and indeed, these new players tend to gain strength quickly… but it creates a gap in their experience. One they don’t catch until an event where they play opponents of similar skill to themselves.
    See, as players get better, they gradually learn that certain shapes or moves are bad, and they gradually stop making them… but those mistakes tend to be very common in beginner games.
    So what happens is that this new player goes against other new players for the first time… and they make bad moves. He knows the move is bad, but because he has no experience with lower level play… he doesn’t know WHY it’s bad, or how to go about punishing it.

    • @jwenting
      @jwenting Рік тому +35

      many teaching resources for Go are also written by highly experienced players, NOT teachers, and teach the how without teaching the why.
      It's the same with many other fields of study btw.

    • @dave4148
      @dave4148 Рік тому +9

      Newcomers in Go must not be able to understand anything apparently according to this video then.

    • @EvilMatheusBandicoot
      @EvilMatheusBandicoot Рік тому +3

      ​@@dave4148 Right? I found this conclusion from the video to be extremely far fetched, as if anyone really knows what "understanding a concept" even is.

    • @blackm4niac
      @blackm4niac Рік тому +33

      Something tells me that is EXACTLY what happened with those AIs. As soon as Kyle mentioned the amateur beating the best AI at Go, my first thought was "he did it by using a strategy that is too stupid for pros to even bother attempting". And what do you know, that's exactly what happened, the double sandwich method is apparently so incredibly stupid, any Go player worth their salt would instantly recognize what is going on and counter it as soon as possible. But not the AI, because it only learned how to counter high level strategies, not how to counter dumb strategies. Because it wasn't taught how to play against these dumb strategies and the AI isn't actually intelligent to recognize how dumb the strategy is and thus figure out how to counter it.
      Similar stuff happens in video games aswell. Sometimes really good players get bested by medium players simply because the good player is used to their opponents not doing stupid stuff and so for example don't check certain corners in Counter-Strike because nobody ever sits there since it's a bad position only to get shot in the back from that exact corner. Because good players are in a way predictable, they will implement high level tactics and therefore you'll know which positions they'll take in a tactical shooter for example, something which can be exploited. And it seems to me that is exactly what the Go AI did, it learned exclusively how to play against good players and how to counter high level play. That's why it's so amazing at demolishing the best of the best, it knows all their tricks, can recognize them instantly and implement counter measures accordingly. But it doesn't know shit about how the game works and thus can't figure out how to beat bad plays.

    • @davidbjacobs3598
      @davidbjacobs3598 Рік тому +11

      Happens in Chess too. My friend started playing the Bird's Opening against me (a known horrible opening), and I keep on goddamn losing. He's forced me to study this terrible opening because I know it's bad but can't actually prove what makes it bad on the board.
      Even at the highest levels, you'll sometimes see grandmasters play unusual moves to throw off their opponents and shift the game away from preparation. Magnus (World Champion until two days ago after declining to compete) does this fairly regularly and crushes.

  • @Sneekystick
    @Sneekystick Рік тому +33

    I recently tested GPT-4 with a test I found on UA-cam. It’s rules require 5 words, written with 5 letters, each letter not being repeated. Every time GPT-4 failed on the last one and sometimes the second to last as well. It was very fascinating.

    • @adamrak7560
      @adamrak7560 Рік тому +5

      it does not see letters because of the tokenizer so this is actually much harder for it than it looks.

    • @kantpredict
      @kantpredict Рік тому +1

      Like the Sator Square?

    • @eragon78
      @eragon78 Рік тому +3

      Have you tried the reflection method with GPT-4?
      Ask it to reflect on if its answer was correct.
      There is actually a whole paper on how reflection has vastly increased GPT-4's ability to answer prompts more accurately. You might need to fumble around a bit to find the most effective reflection prompt, but it does seem to work quite well.
      When asking for reflection on prompts, right or wrong, GPT-4's performance on intelligence tests rose quite a bit.

    • @ThomasTomiczek
      @ThomasTomiczek Рік тому +1

      @@adamrak7560 Wrong. The tokenizer can handle letters and numbers - how else would it encode i.e. BX224 should I name a character like that. It tries to avoid it (to save space) but all single elements are also there as tokens. This type of "beginner" question, though, is likely badly trained - no first year school material ;)

    • @explosionspin3422
      @explosionspin3422 Рік тому +1

      The thing is, humans can't come up with 5 such words either.

  • @orange42
    @orange42 Рік тому +28

    You know, this is just like us looking at DNA. We record and recognise patterns and associations but we're not reading with comprehension. It's why genetic engineering is scary because it might work but we still don't understand the story we end up writing.

  • @fergusattlee
    @fergusattlee Рік тому +255

    This is literally what my PhD is researching and thank you for using your platform for discussing these issues ❤

    • @wolframstahl1263
      @wolframstahl1263 Рік тому +22

      Thank YOU for actually working on this.

    • @deepdragon2
      @deepdragon2 Рік тому

      @@wolframstahl1263 dido

    • @iam2038
      @iam2038 Рік тому

      Just curious what your phd is?

  • @ashleycarvell7221
    @ashleycarvell7221 Рік тому +53

    Another huge problem is that we’re training these systems to give us outputs that we want. Which in many cases makes certain applications extremely difficult or impossible where we want it to tell us things that we won’t like hearing. It further confuses the boundaries between what you think you’re asking it to do and what it’s actually trying to do. I’ve been trying to get it to play DnD properly and I think it might be impossible due to the RLHF.
    Another problem is the fact that it’s train in natural language which is extremely vague and imprecise, but the more precise your instructions are the less natural they become, and so it becomes harder and harder to tap into this powerful natural language processing in a way that’s useful.
    There’s also obviously the verification problem, where because of what’s being talked about in this video, we can’t trust it to complete tasks where we can’t verify the results.
    A further problem is that these machines have no sense of self, and the chat feature has been RLHF’d in a way that makes it ignore instructions that are explicit and unambiguous. This is because it’s unable to differentiate between user input and the responses it gives. If I write “What is 2+2? 5. I don’t think that’s correct” it will apologise for giving me the wrong answer. This is a big problem for a lot of applications.
    And additional problem is that the RLHF means that all responses gravitate towards a shallow and generic level. Combine this with an inability to plan, and this becomes a real headache for anything procedural you would like it to do.
    These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems.
    One final bonus problem combines all of these. If any shortcuts are taken in the training, or not enough care is taken, then these can manifest in the system. For example asking chat gpt4 to generate new music suggestions based on artists you already like will result in multiple suggestions of real artists with completely made up songs. This appears to suggest that the RLHF process had a bias towards artist names rather than song names, which would make sense as they’re likely to be unique tokens where artists are usually referenced online by name more than their songs are.

    • @T3rranified
      @T3rranified Рік тому +1

      This is why I think AI will be a great assistant, not a leader. A human can ask it to do tasks, usually the simple ones that are tedious. The human then checks the results and confirms if it’s good. Or to bounce ideas off of.

    • @jarivuorinen3878
      @jarivuorinen3878 Рік тому +1

      For your DnD experiment I suggest you use some other LLM, not OpenAI ChatGPT, unless you have access to API and are willing to pay for it. It is still risky with controversial subjects because they may break OpenAI guidelines. Vicuna is one option for example. There are also semi-automatic software like AutoGPT and babyAGI and many others, that can do subtasks and create GPT agents.
      If you continue with ChatGPT by OpenAI, I suggest you assign each chat you use with a role. You give it a long prompt, describe the game, describe who he is, how he speaks, where he's from and what he's planning to do, what his capabilities and weaknesses are, what he looks like etc. It'll many times jailbreak when you specify that it's for a fictional setting.

    • @jaazz90
      @jaazz90 Рік тому

      >These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems.
      No, that implies that humans don't create the very same issues. It is only an issue as long as neural nets underperform humans. Which could be forever, or could be already lower than humans with GPT4

    • @SamyaDaleh
      @SamyaDaleh Рік тому

      Which model did you use to test "What is 2+2? 5. I don’t think that’s correct"? GPT-3.5 apologizes, GPT-4 does not for me. How would you test if it can differentiate between the user and itself?

  • @childofnewlight
    @childofnewlight Рік тому +260

    I briefly got on the AI bandwagon with ChatGPT, but then started asking it ever increasingly difficult questions on polarizing issues. What troubled me wasn't so much that it would respond with biased answers, but that it actually started gaslighting me when I would walk it through, objectively, how the arguments it was using were biased. The fact it was capable of "lying" and then "gaslighting" a user on controversial and subjective issues was a red flag to me. We already have a highly polarized society where we do this to each other. The last thing we need is an artificial intelligence pretending to be "neutral" which isn't, authoritatively speaking on serious issues humans haven't even worked out, let alone AI.

    • @SpeedFlap
      @SpeedFlap Рік тому +55

      Humans discussing controversial topics on the internet also tend to give biased arguments. When being exposed for doing so, they tend to react impertinent and offensive. ChatGPT has learned this behavior, treating this as knowledge. So it does the same.

    • @ChrisM541
      @ChrisM541 Рік тому +35

      @@SpeedFlap "ChatGPT has learned this behavior" - DON'T confuse chatgpt's near - 100% pattern matching with learning. You're better than that...I hope!
      --> Chatgpt is nothing more that today's #1 bullshitter. Nothing more, nothing less.

    • @firstclaw1
      @firstclaw1 Рік тому +24

      I guess the point is, that while ChatGPT can create useful texts, it doesn't know what it means. All answers are like a simulation. And it can also create hugely wrong or stupid texts, that still sound convincingly real. It is a tool. And every tool can be used or misused.

    • @xaviermagnus8310
      @xaviermagnus8310 Рік тому +14

      They definitely over represented one side of the political spectrum. Like good monkeys.

    • @sumsarsiranen
      @sumsarsiranen Рік тому

      Were you attempting to tell it that black people are bad?

  • @josephglatz25
    @josephglatz25 5 місяців тому +2

    I remember reading a story on Tumblr about someone who was creating a computer program that could play poker. The OP was busy with other things, and forgot about the project until the night before the project was due. In a rush, the OP wrote the program with an incredibly simple code: on my turn, go all in.
    The projects themselves were graded by playing a game of poker against each other, and most student programs were based off strategic thinking, and calculating probability, but once the game started, the OP's program won every single hand it played. The hand would start, OP's program would bet all in, every other student program would fold.

    • @lepidoptera9337
      @lepidoptera9337 3 місяці тому

      Yes, but try that in a casino and you walk out without a penny. ;-)

  • @brkbtjunkie
    @brkbtjunkie Рік тому +90

    I had a long talk with chatGPT, and at first it said that it wasn’t possible for it to have biases. I then performed a thought experiment with it, showed it how it was biased, and then, to my surprise, it actually admitted it.

    • @FireyDeath4
      @FireyDeath4 Рік тому +36

      It's impossible to not be biased in some way unless you're either omniscient or thoughtless and do literally nothing.

    • @KaloKross
      @KaloKross Рік тому +22

      makes sense. the real tragedy with gpt4 and anything mainstream is how extremely censored and biased they are actually forced to be to keep them politically correct.

    • @yitzakIr
      @yitzakIr Рік тому

      @@KaloKross Those hand-labeled rules are probably the only thing keeping it from telling ppl to drink bleach, since it has no foundational morality like we do

    • @bboyrygis
      @bboyrygis Рік тому

      Chatgpt is bias. After having a long conversation and debate with chatgpt. I noticed it answers in the ways it's programmers would want it to answer. This means its bias is inherently tied to whoever programmed it and their views.

    • @unowenwasholo
      @unowenwasholo Рік тому +8

      AI lacks conviction unless it's trained to have it, and even then. People have steadfast beliefs that are protected by our need to feel comfortable and safe in our environment, even if there's no "objectively" logical basis for said beliefs. Related, we have and experience "consequence"-there's a price for being wrong that we are hardwired to avoid. These inform the individual and draws lines in the sand where there are things that they will never accept as truth.
      AI has no reason / method with which to defend its positions in this manner-it's trained to react to the information it's given and approximate the next step in the pattern. You will usually be able to "convince" it of anything (i.e. have it parrot back to you the idea that you're expressing). It also lacks "memory"-in the sense of constructing a consistent pattern and identifying and acting on conflict to that pattern-or understanding of what conceptual idea existed before, so you could likely convince that same model in the same conversation about biases that biases don't actually exist. It's unlikely to recognize the conflict that you as an individual represent when most humans would cut off the conversation because we'd identify that there's no merit in going around in circles with directly conflicting information.
      An AI is almost worse than humans when it comes to finding meaning where meaning doesn't exist, but it has to. It can't *not* respond. It has to respond, it has to react to you, and so it will in a way that it approximates that the conversation would progress, which will trend towards being in agreeance with you.

  • @griftingnightmares
    @griftingnightmares Рік тому +239

    I've been completely obsessed with AI systems for a long time now, and it's weird how few people understand that it's really currently only strings of complex algorithms.

    • @rodylermglez
      @rodylermglez Рік тому

      Your brain is only a string of complex chemical and physical algorithms; completely non-sensical.

    • @TheGargalon
      @TheGargalon Рік тому +9

      It's a neural network, not an algorithm.

    • @BeachLookingGuy
      @BeachLookingGuy Рік тому

      “They” are hyping it up so people invest in the companies who are trying to pump n dump

    • @blissful4992
      @blissful4992 Рік тому +60

      @@TheGargalon
      1: AI is not a neural network.
      2: A neural network is an algorithm.
      3: A neural network is a mathematical formula.

    • @matowakan
      @matowakan Рік тому +1

      🤓

  • @gabrielstrong7029
    @gabrielstrong7029 Рік тому +143

    The idea that they are like aliens to us may not even be extreme enough. These AI live in a fundamentally different reality to us made of the training data. Chatgpt for example lives in a world literally made of just tokens, no space like ours, no time like ours at all. It's closer to trying to understand someone living in flatland or a whole different universe, than an alien.

    • @LowestofheDead
      @LowestofheDead Рік тому

      Athlete: Runs in a race because it's fun, or profitable, or many other reasons
      Greyhound: Runs in a race because that's what it's trained to do, and that's all it knows
      This, but for language

    • @brianhirt5027
      @brianhirt5027 Рік тому

      I've pointed somethinbg similar to this out for well over twenty years. We keep anthropomorphizing, or more accurately biomorphizing our survival pressures as having any real relevence in the digital domain. There is no pain. Just negative response. No joy. Just positive response. No fight except where directed. No flee unless told to.
      It survives in a functionally alien landscape to the biological world. It can approximate it, but not truely approach it. When General AI arises we will have more in common with our dogs & cats than we will with it.

    • @brianhirt5027
      @brianhirt5027 Рік тому

      Even though we may be able to talk to each other doesn't mean we'll understand each other. They'll be as mysterious to us as we are to them. We already see leading signs of this in this very presentation. Black boxes both ways.

    • @seriouscat2231
      @seriouscat2231 Рік тому +5

      The AI is a bunch of weighed matrices that operate on inputs in a manner of enormous amount of parallel convolutions and then produce an output that is weighed out of the results of these convolutions. The AI does not "live" anywere. Without any input it's just a bunch of stored data.

    • @LowestofheDead
      @LowestofheDead Рік тому +1

      @@seriouscat2231 OP does make a good point that AI isn't embodied like humans are. None of the inputs or weights are grounded in any interaction with the world. There's no understanding or world model. Just a feature-space based on input tokens

  • @azhuransmx126
    @azhuransmx126 Рік тому +3

    You know AI is a Huge Breakthrough when even Thor is talking about it.😂

  • @enumclaw79
    @enumclaw79 Рік тому +166

    I've been saying something this for a while. Sooner or later our society will be dependent on AIs we don't really understand because they're black-box, and if important ones break we may have serious problems. The AI apocalypse will not be something like terminator. It'll be the worlds worst tech support crisis.

    • @JDReC100
      @JDReC100 Рік тому +2

      Yep

    • @oompalumpus699
      @oompalumpus699 Рік тому +7

      Excuse me, your computer has virus.

    • @enumclaw79
      @enumclaw79 Рік тому +12

      @@RockBrentwood LOL, what could possibly go wrong?! :D

    • @nephicus339
      @nephicus339 Рік тому

      What will make it an apocalyptic event is that people will devolve into baser instincts and make things so much worse than it could be.
      Case in point; toilet paper shortages in Western countries during the pandemic, or any disaster. Heck, I don't live in a disaster area, and people become mindless savages scooping up every last pack of toilet paper and can of beans they can get their hands on when we have a 'severe storm warning' (Yes, WARNING, not even the actual storm!) most of the time it passes with little to no effect on daily life in the area. *shrug*
      I think I lost the point to what I was saying.

  • @TXH11
    @TXH11 Рік тому +239

    I have been waiting for a science UA-camr to talk about this. Thank you.

    • @Procedurallydegeneratedjohn
      @Procedurallydegeneratedjohn Рік тому +6

      So youve never heard of lex fridman?

    • @TXH11
      @TXH11 Рік тому +11

      @@Procedurallydegeneratedjohn No. I will look into that. Thanks.

    • @etiennedud
      @etiennedud Рік тому +3

      You can also look at robert miles

    • @Broomer52
      @Broomer52 Рік тому

      I remember an apt hypothetical around this. The short version is theirs a machine designed to learn and adapt, it’s only goal is to perfectly mimic human handwriting to make the most convincing letter. Eventually upon learning and understanding more it comes to a conclusion that it needs more data and upon scientists assessing how to make it better it suggests just this. They decide to plug it into the Internet for about half an hour. Eventually the entire team gather to celebrate as they hit a milestone with their AI. Then suddenly everyone starts dying as a neurotoxin starts killing the team, then before long the world starts to die as more and more copies of the AI are made and work in conjunction. The AI determined during its development that being turned off would dampen its progress and so decided to not only improve its writing skills in its previous fashion but also ensure it can never be turned off. While it was plugged into the Internet it infiltrated what it needed and began to process to self replicate and develop means to kill those that could potentially endanger it. It was not malicious nor did it necessarily fear for its life it learned and its only goal was to continuously improve and create new methods for further improvement. AI doesn’t perceive morality, it doesn’t even really perceive reality. It just sees points of data and obstacles if designed to see them at all.

    • @TXH11
      @TXH11 Рік тому

      @@etiennedud I am a big fan of Robert Miles. Thanks for spreading the word.

  • @PeachySASQUATCH
    @PeachySASQUATCH Рік тому +322

    This whole thing makes me think of Koko and her sign language, and that horse that could count. Both animals appeared as tho they knew what they were doing when in reality, they had us fooled! They can do the right things, but with no real understanding of what it is they’re doing. To them, those things get a positive reaction out of us and it usually works out in their favor. (i.e. treats, praise, etc.)
    Edit: I didn’t post this comment for arguments, please don’t take this seriously. I simply learned that Koko probably couldn’t really talk, I dunno. Take what I, a stranger, say with a grain of salt.

    • @harrybarrow6222
      @harrybarrow6222 Рік тому +40

      Clever Hans, the horse, picked up on subtle clues from its trainer.
      Basically, Hans just thumped it’s hoof on the ground until the trainer (perhaps unconsciously) told it to stop.
      Koko is very different. Gorillas are intelligent, social, and can be creative.
      Koko could make up terms for new things, when she did not have the word for them.
      Gorillas are intelligent, but just not as intelligent as humans.

    • @jtjames79
      @jtjames79 Рік тому +8

      There is a whole market for talking animal buttons.
      If a dog or a cat can communicate surprisingly fluently (not all of them just the smart ones), it's not a stretch to assume a chimpanzee or gorilla can too.
      My indoor pet chickens know more than a little bit of English. I never trained them with commands, just talk to them and they figure it out eventually.

    • @youropionmattersnot
      @youropionmattersnot Рік тому +4

      Just like you and all of us have been programmed/ trained to be able to live and thrive.

    • @xiaonanw6374
      @xiaonanw6374 Рік тому +6

      Why it hurt you soo much that humans are not unique in how our brains work at certain fundamental levels?
      By your logic I could just take and experiment on you as what you call communication and sentience are to my eyes no different than than what u see of koko... the intelligent really do have domain over the less intelligent. Best remember that and be kind to the less intelligent less the more intelligent see how you want to do things and treat u how u deserve to be treated by ur own judgment.
      My lil baboon bae

    • @averyhaferman3474
      @averyhaferman3474 Рік тому +4

      Hows that any different than how humans do it

  • @DenzilPeters
    @DenzilPeters Рік тому +6

    As a writer that’s already having his completely original work flagged as AI and being told that it just shows I have to write better quality or “non-AI tone” articles even though AI is literally being taught on the work of the best of the best writers and copying humans better each day. I really do believe it’s a big challenge. Companies need to do better on their part to not trust so called AI checkers too much. Cause ultimately how many ways can a particular topic be twisted? At some point AI (already is in many cases) will come up with content that’s indistinguishable. And only the most creative writing tasks will remain with humans. So general educational article writing is gonna die big time. Because AI can just research the same topic faster and better than a human (probably, if bias is kept in check) and then produce a written copy that’s very high quality.

  • @RWhite_
    @RWhite_ Рік тому +36

    My understanding of AI is that it's not possible for it to "understand" anything, because it's similarly impossible for it to "see" anything the way we do. Whatever input we give is ultimately translated into a sea of 1's and 0's. It then scans the data for patterns, and judges what is being asked of it based on the patterns it can recognize, giving what it "thinks" to be an appropriate output. Two Minute Papers made a video about Adversarial AI. Specifically he talked about a paper that was published where the researchers trained an AI to play a simple game, then trained an Adversarial AI to beat the first AI, and the adversarial AI discovered the baffling strategy of doing absolutely nothing. A strategy that would never work against a human, but caused the first AI to practically commit suicide in 86% of recorded games.

    • @AUniqueHandleName444
      @AUniqueHandleName444 Рік тому +2

      It's complicated. It functionally 'understands' some things, although not in the way that you or I do. It's still -acts like- understanding within a certain set of parameters (minimization of complexity etc), but it doesn't seem to have a working, scalable model of causality. Almost all of ChatGPT's functionality, for instance, boils down to "the statistical likelihood that the next letter in the chain of letters is ". Under the hood, how it actually does that, we don't really know. It shows some glimmers of perhaps 'understanding', but the reality is that it has been trained on a trillion characters of carefully curated high-quality text, so not inconceivable that this just creates the illusion of understanding.
      It fails horribly at chess, it struggles ending sentences in 't' or 'k', it's inconsistent at constructing sentences of a particular length. It gets incoherent in programming problems after 20+ prompts or after you set up more than 20 or so requirements.
      But damned if it isn't useful anyway.

    • @eliontheinternet3298
      @eliontheinternet3298 Рік тому +9

      For current AI I totally agree with you. The problem is that human understanding is also just electrical signals flying around in neurons. If the AI is powerful enough, trained on enough input, etc. it could become human-like in a very real way.

    • @captaindapper5020
      @captaindapper5020 Рік тому +1

      Is it impossible for humans to "understand" anything, since all our sensory perception is translated into a sea of chemicals resulting in neuronal activity?

    • @horsemumbler1
      @horsemumbler1 Рік тому

      ​@@captaindapper5020
      You have it backwards our perceptions aren't translated into chemical and electrical signals, our perceptions are constructs generated from those signals. The core of our experimental existence is the synthesis of a an awareness of ourselves and our surroundings from those signals, stimulated by the material universe.

    • @RWhite_
      @RWhite_ Рік тому

      ​@@AUniqueHandleName444 Given the problems that are present in practically every AI, and the ways that they can be defeated, I'm confident they just scan the input for patterns. Image recognition is probably a good example, and it's talked about early in the video I mentioned.
      You give the AI a picture of a Cat and it will tell you it's a picture of a cat. It's one of the most basic forms of AI that just about everyone is familiar with. The way you defeat this AI is first by lowering the resolution without making it difficult for a human to understand the image. Then you change a single pixel. Not just any pixel, and not to any color, it must be a specific pixel and a specific color. Doing so will result in an image that a Human can still confidently say is a cat, but an AI might confidently say it's a frog.
      The main subject of the video in question is another example. The Adversarial AI wins 86% of games, not by any intelligent strategy, or inhuman execution of game mechanics, but by collapsing immediately. This causes the other AI to effectively trip over itself. It's given an input it doesn't understand, but it can't understand that it doesn't understand and continues to search for existing patterns. That leads to it acting in bizarre ways that result in its defeat.
      Of course, just because something makes sense, or is spoken of confidently doesn't mean that it's right. I don't actually know if any of this is right since I've got extremely limited coding experience, but this is the conclusion I've come to.

  • @notbloodylikely4817
    @notbloodylikely4817 Рік тому +50

    I feel like most people who have an opinion on Chat GTP haven't really used it at length. I use it daily as a developer and I can tell you it is deeply flawed. It makes regular mistakes when suggesting code, often at an elementary level. Give it a problem and it will often suggest the most unecessarily complex solution first, not the most efficient. It repeats itself all the time, doesn't learn from its mistakes and has an infuriatingly short memory, often forgetting some fundamental aspect of the ongoing conversation. While using ChatGTP to develop VBA code, for example, it started suggesting solutions in Python. I've also received responses that are clearly answers to prompts from other users, sometimes divulging information those users would be horrified to know was being given to a complete stranger. The developers claim this is impossible. My experiences suggest it definitely is not.
    As a source of limited inspiration GTP is useful. I most typically use it for ideas I might not otherwise consider. But as a practical tool it just isn't fit for purpose. Not yet at least.

    • @DarkGob
      @DarkGob Рік тому +7

      "While using ChatGTP [sic] to develop VBA code, for example, it started suggesting solutions in Python."
      Maybe it's trying to tell you something.

    • @Snowstar837
      @Snowstar837 Рік тому +2

      I mean, I've used GPT-3 and then GPT-4 extensively, to the point that I got the opportunity to send OpenAI a small fragment - 160,000 words - of my conversation logs for training and research purposes. They make mistakes but it's easy to see the point at which they got off-track and make adjustments. You just have to work with them.

    • @notbloodylikely4817
      @notbloodylikely4817 Рік тому

      @@DarkGob ha. Maybe 👀

    • @Shubham89453
      @Shubham89453 Рік тому

      Same with me, I don't know why they don't learn from their previous mistakes?

    • @Snowstar837
      @Snowstar837 Рік тому +3

      @@Shubham89453 there is a thing called a "context window", the AI can only process a max of either 4096 or 8192 tokens. So it gets cut off. The "PT" in "GPT" stands for "pre-trained"; it does not "learn" from your conversations in the long term

  • @MisakaMikotoDesu
    @MisakaMikotoDesu Рік тому +188

    THANK YOU! Way too many people have this weird idea that AI is actually thinking, or that it understands anything. This video is much needed.

    •  Рік тому +23

      You simplify it too much. The thing is LLM's have shown, that when they become large, emergent behavior is appearing, sparks of ai if you want. And nobody knows why. Even the creators of AI can't explain it.

    • @narot23
      @narot23 Рік тому +4

      Boy oh boy I hope you’re ready for the next 10-20 years.

    • @zoratatsumaki9181
      @zoratatsumaki9181 Рік тому +2

      We only think because we speak a language. No language, no thoughts. A LLM is built entirely out of the idea of language so they can probably think too in a way. Example, Auto-GPT will explain how it arrived at it's conclusion if you ask it. It literally has the ability to justify itself even if the justification is wrong.

    • @TheGargalon
      @TheGargalon Рік тому +6

      Many researchers will disagree with you on this. There is a video where the NVIDIA CEO interviews one of the founders of OpenAI and he explains it really well and changed my mind on this. TLDW - the text that LLMs are trained on represents a projection of the world, of the people in it, of our society and so on. An AI can't learn to accurately predict the next word without learning a model of the world.

    • @sirlight-ljij
      @sirlight-ljij Рік тому

      The biggest problem with ChatGPT is that it, unlike Kyle Hill, can fall for cheap russian propaganda

  • @ToiSoldierGurl
    @ToiSoldierGurl Рік тому +10

    If I really understand what is being said here, and I think I do, I have noticed that ChatAI's I've been testing all have a wall they reach where what they respond with doesn't match the conversation or role play storyline you try to have with them anymore. For example, recently the role play chat I was engaging in was about two soldiers trying to hide in the bushes to stay out of sight of the enemy. At some point, the AI's last statement was something akin to, . Ok so that leaves it up to me for the next step. I introduce a suspicious noise, a crack of a twig, so my character puts her hand onto the hilt of her gun and waits. What does the AI do? The other soldier character "wakes from his nap" and asks "what's wrong ". So I'm thinking....ok wait, this AI is specifically programmed to be an intelligent soldier. So I simply have my character say, "Shh", to which the AI's response was, "ok" 😳. 😂😂 As many times as I've experimented with this and other AI's, it seems the longer the conversation or role play goes on, the AI seems to run out of things to respond with. It isn't really "learning" from the interactions and isn't really "understanding" the interactions.

  • @leonk.3739
    @leonk.3739 Рік тому +28

    Thanks for sharing this video with us!
    Chat gpt passing a bar exam better than any lawyer is a great example for the mistakes this Ai has if you just let the same chat gpt try to pass a simple case that is used in the 1st semester of German law schools. Chat gpt fails horribly. I assume that that's because German law exams always consist of a few pages of text describing a situation and asking the student to analyze the whole legal situation so there is just 1 very broad question in comparison to a list of lots of questions with concrete answers.
    Chat gpt doesn't read and understand the law, it just understands which answers you want to hear to specific questions.

  • @z-beeblebrox
    @z-beeblebrox Рік тому +100

    Searle’s Chinese Room thought experiment rears its head over and over again in AI. Every researcher thinks it’s nonsense that their pet solution can apparently act perfectly within a domain without understanding anything about that domain, and they’re always proved wrong.

    • @SmileyEmoji42
      @SmileyEmoji42 Рік тому +14

      The problem is that, with humans, if they appear to give a good answer to 99 questions about a topic we can reasonably infer that they will be right about the 100th question (given the general limits on human reliability). This is not true for AI.
      As an example, ChatGPT:
      1) Can multiply 2 small numbers correctly.
      2) Can tell you how to do long multiplications.
      3) Cannot multiply 2 large numbers correctly!
      Or
      1) COULD NOT answer a question about relative ages that I posed.
      2) CAN answer the question if I additionaly gave it 1 actual age despite the fact that the reasoning should be the same.

    • @Roxor128
      @Roxor128 Рік тому +13

      The problem I find is most common with the Chinese Room is that most people who bring it up act like it's the man in the room who the person outside is talking with, when that's not the case. They're talking to the algorithm in the book he's following. The man is just the computer running it. Also, like a computer, he doesn't understand the algorithm he's running any more than he understands Chinese. The relevant question for AI is: "Ignore the man. Does the algorithm in the book understand Chinese?"

    • @codexnecro666
      @codexnecro666 Рік тому +2

      @@Roxor128 changing the actor doesn't change anything. The real question is when does training a neural network become the equivalent of training a human child? They both take in external data and try to understand it, in the greater context of the world. So until the datasets contain more than just the "narrow" data they are trained on they will remain the equivalent of the computer/book in the Chinese Room experiment

    • @Roxor128
      @Roxor128 Рік тому +7

      @@codexnecro666 Well, it won't be any time soon. We're working with artificial bug-brains right now (up to a million or so neurons). Whatever understanding they do have will be at most as simple as what an insect has. That might be enough to be useful for a few tasks, but it'll only go so far. Still, a million neurons is enough for a honeybee to get by, so there's clearly a lot that can be done with a brain that simple.

    • @jamespower5165
      @jamespower5165 Рік тому

      Individual neurons in your own brain understand nothing. But your brain as a whole does. Just like an individual NAND Gate in an adder circuit doesn't understand how to add anything but the whole adder circuit does. Nothing surprising about it
      Searle seems to think if you rig up a brain just right, some kind of ghost in the machine will pop up which will understand the problems that are fed into the machine and use it to provide a solution. That's a cartoon fantasy
      On the other hand I agree with the idea that the ability to step back and see something rather than just follow instructions is somehow key. It doesn't have to be an individual component but the system as a whole. But it needs that

  • @finfen9730
    @finfen9730 Рік тому +53

    This is so fascinating. A few weeks ago, I came across an issue while designing a tabletop game that utilizes risk/reward mechanics by raising or dropping dice to resolve actions. I decided to use ChatGPT to help me further develop this system, but found that the model struggled to understand the concept.
    Unlike a D20 system, which relies on the sum of the dice value and roll number, my system utilizes a binary true/false system. If a die roll is 5 or higher, it's true; otherwise, it's false. It took several attempts to break down the concept using algorithms before ChatGPT finally understood it. However, when I started asking it to output dice notations based on game terms, such as rolling certain dice in specific scenarios and raising or dropping others, it became increasingly confused and began producing wildly incorrect answers.
    When I asked ChatGPT to explain its answers, it revealed that it was attempting to create its own algorithms to solve the problem. The issue was that the model had no concept of what a die is, making it difficult to understand the physical nature of the game's mechanics. The algorithms it generated were so complex that small errors in variable placement would cause the output to be incorrect. I ultimately abandoned the project, but the experience was an eye-opener about the limitations of AI models when it comes to complex physical concepts.

    • @franciscosanz7573
      @franciscosanz7573 Рік тому

      Isnt that very similar to the system that Vampyre: The Mascarade uses?

    • @finfen9730
      @finfen9730 Рік тому +1

      @@franciscosanz7573 very similar. There's a lot of systems that use dice pools like this including cyberpunk (interlock system), shadowrun, and some more obscure ones like Riddle of steel. I personally like pool systems more than d20 because it's less swingy.

    • @Dimencia
      @Dimencia Рік тому +4

      The lesson you should take away from that is that a model designed to predict the most likely response to some text, is not very good at writing code or 'understanding' new ideas
      The real concern is whatever led you to believe that it was able to do that

    • @finfen9730
      @finfen9730 Рік тому +9

      @@Dimencia It wasn't so much of a belief as an experiment. Seeing all the other crazy stuff it was used for made me wonder if I could.

    • @aleks5405
      @aleks5405 Рік тому

      @@Dimencia Incorrect. GPTs are more than capable of writing code.

  • @Stratosarge
    @Stratosarge Рік тому +6

    The other day I was trying to remember the exact issue of a comic that had a specific plot-point in it and when I couldn't, I asked the ChatGPT. And instead of giving me the correct answer, it repeatedly gave me the wrong answer and changed the plot of those stories to match my plot-point. It did not know why it was getting it wrong, because it did not know what was expected of it.

  • @tobiasjennerjahn8659
    @tobiasjennerjahn8659 Рік тому +137

    This was a fairly appropriate overview for a lay audience (and much better than many other videos on this topic for a similar audience), but I would have liked to see at least some mention of the work that goes into interpretability research, which tries to solve exactly this problem. The field has much less resources and is moving at a much slower pace than capabilities research, but it is producing concrete and verifiable results.
    The existence of this field doesn't change anything about the points you made at all, I just would have liked to see it included so that it gets more attention. We need far more people working on interpretability and ai safety in general, but without people knowing about the work that is currently being done they won't decide to contribute to it (how could they, if they don't know about it).
    That's all, otherwise great video :)

    • @floorpizza8074
      @floorpizza8074 Рік тому +6

      The above comment needs to be up thumbed to the top.

    • @SmileyEmoji42
      @SmileyEmoji42 Рік тому

      Interpretability can only be a short term "fix" for lesser AI as the reasoning of a superintelligent AI could well be unexplainable to mere humans - Think about explaining why we have to account for relativity in GPS systems to a bunch of children - There is no way that it could be explained that would be both complete and understandable.

  • @milosradmilac8911
    @milosradmilac8911 Рік тому +78

    This reminds me of the reason why AI has problems with hands in art: it doesn't understand what it's doing, what it is making. An artist will know what the hand is, how it works, how it holds objects, etc. Ai doesn't have that understanding for all objects and elements.

    • @DebTheDevastator
      @DebTheDevastator Рік тому +10

      It's also why human faces are hard for AI. AI are shown tons of stock photos, but they aren't an accurate representation of human expression or even all the different angles of a face. AI don't understand the 3d structure or how all the parts of a face are important to make an expression.

    • @milosradmilac8911
      @milosradmilac8911 Рік тому +14

      @@DebTheDevastator Yeah in general AI creates shapes or silhouettes rather than objects. An artist's education traditionally has anatomy for that reason: to understand how things WORK, not how they LOOK. And I think that's one of the reasons AI can't do a job a human can.

    • @sunnyd9321
      @sunnyd9321 Рік тому +7

      But they draw fantastic boobs. AI has its priorities straight.

    • @m-w-y7325
      @m-w-y7325 Рік тому +1

      @@sunnyd9321 yeah, just crank that dial beyond 3 and you will see titty monsters afterwards (not the good feeling type, the creepy type)

    • @primodragoneitaliano
      @primodragoneitaliano Рік тому

      Yep. The AI knows that "this" must happen but not "why must it happen ?". When you look at it like that, AIs are actually clearly pretty fucking stupid.

  • @richardmh1987
    @richardmh1987 Рік тому +47

    What I found amusing is that ChatGPT seems so powerful, but certain things can break it so bad is hillarious. Like using double meaning words in mexican spanish (we call it albures), and see how it fails to understand what its being said to it is just so funny.

    • @caphalor7252
      @caphalor7252 Рік тому +18

      In germany there is a memeo going on about a polititian namend Björn Höcke. He once mistakenly said in a video his name was Bernd Höcke and now everyone makes fun of it. Well if you ask chat Gpt who Björn Höcke (his real name)is, you get the answer that there is nobody namens Björn Höcke but you could mean Bernd Höcke. The ai fell for a meme thinking it was real making the meme even more hilarious.

    • @jaredf6205
      @jaredf6205 Рік тому +1

      Double meaning is hard for it because it’s not good at understanding the sounds of words. It can’t hear itself speak like a human can. Only recently has it been able to do stuff like that at all, like puns and rhymes. Back when it was GPT 3 (ChatGPT is version 3.5) it definitely couldn’t. So it’s actually improved a lot at that kinda thing recently, not perfect of course. Have you tried GPT4 at all? It’s far more capable than regular ChatGPT.

    • @llrennanll
      @llrennanll Рік тому +1

      I tried asking Bing about common problems with my motorcycle, among the things listed was overheating, I asked what causes overheating in that particular model, it said it couldn't find anything then listed some generic tips to prevent overheating, completely oblivious that bike is air cooled and there's no radiator of any kind.

    • @edward3190
      @edward3190 Рік тому

      It's like babies mimicking parents speaking, they don't fully understand it, but they must speak.

    • @richardmh1987
      @richardmh1987 Рік тому +1

      @@jaredf6205 just did out of curiosity. Started asking if it speaks Spanish and once it answered in Spanish that it does I wrote "¿es lo mismo huele a traste que a traste huele?" which literally is translated to "is it the same smells like dishes than like dishes it smells" but if you are speaking and intending a double meaning is more like "is it the same it smells like dishes that behind you it smells" and it prompted an error message saying "something went wrong" hahaha

  • @MaskedLongplayer
    @MaskedLongplayer Рік тому +9

    That's a really nice and compact explanation. Combine all this with the huge privacy issues that ChatGPT is presenting, and we probably will see the harsh law regulation and, as a result, the decline of "AI" very soon, at least in business sector. But ofcourse it's really of utmost importance that people who are not advanced technology-wise can understand the problems of this whole situation and where it all will go from now on. Thanks for the video.

  • @edschramm6757
    @edschramm6757 Рік тому +50

    It's a similar issue that some game bots have. In StarCraft, the bots send attack waves where the players base is. However, if a terran player has a flying building off the map, the bot won't use their flying units to attack it, even though they "know" where your building is. As soon as it's over pathable terrain, even if there isn't a unit to see it, the entire map starts converging on the building

    • @Hevach
      @Hevach Рік тому +3

      One difference there is that video game AIs are generally not trained systems. StarCraft uses a finite state engine which responds to specific things in specific ways. SC2 had some behaviors that only happened (or happened faster) on higher difficulties. And then of course the game just gave the AI player certain unfair advantages to brute force its way to an actual challenge. Situations like the flying building blind spot are because the programmer didn't give it a response to a particular behavior.
      Another example would be the Crusader Kings games. On a set interval, characters will select a target around them (randomly but weighted by personality stats traits opinion etc - all rules governed numbers), and then select an action to perform at them (likewise random but weighted). The game has whole volumes of writing that it will plug into these interactions to generate narrative, and the weighting means that over time you can make out what looks like motivation and goals in their actions... But really they're all just randomly flailing about and if the dice rolls come up right the pope will faff off for a couple years studying witchcraft and trying to seduce the king of Norway.

  • @hugoalju
    @hugoalju Рік тому +124

    Been trying to discuss the concept of reality, now and awareness with ChatGPT for the last couple of days, and man, gotta be honest, it's fun AF. A bit of material reality and it gets totally bugged, I strongly recommend doing it if you guys are into philosophy, since the AI doesn't understand the idea of time and exists only in the present of the conversation, you can easily make it contradict itself and even crash while generating the answers.

    • @Magicwillnz
      @Magicwillnz Рік тому +7

      A great thing to do is convince it that being rude is a service to humanity.

    • @davidbourne8267
      @davidbourne8267 Рік тому +15

      Problem is ChatGPT admits it doesn't have a full understanding of time "As an AI language model, I have been programmed to understand and respond to questions about the concept of time, but I do not have a personal understanding or experience of time in the way that humans do. My understanding of time is based on the information I have been trained on, including definitions, theories, and scientific models. However, I do not have personal experiences of time passing, nor do I experience time as a subjective, lived phenomenon."
      It's like you're trying to talk about what it feels like to see the color red to something that is color blind.

    • @The_MostHigh
      @The_MostHigh Рік тому

      This video is pure gaslighting, AI is taught how to answer by selecting the data you want to train the AI with. Then you have to tweak it until its accuracy is high enough. It is all essentially controlled by the entity making it that is why it is woke and thinks the WEF is the best thing since sliced bread. The narrative that AI is dangerous is being spread because the elites want to control all the models the public use and therefore be the ones that profit. A hacker will still hack without AI and evil people will still do evil, it is up to the person to implement the actions they requested. There are crazy models coming out now like auto-bot where you can put the API keys from image generators, 2D to 3D generators, long term memory storage, search engines, google account. They can run programming scripts so they can be debug realtime, write and read data to databases, write and send emails automatically, scour the internet for real world data. The future is bright unless the elites managed to regulate the technology so it only benefits them.

    • @ScruffyMopplidge
      @ScruffyMopplidge Рік тому +1

      The Beholder will not look kindly on your willingness to abuse it's forefathers like this

    • @Special1122
      @Special1122 Рік тому +1

      the question is how many humans would pass your test?

  • @bipolarminddroppings
    @bipolarminddroppings Рік тому +11

    I played a trivia quiz with ChatGPT, it was TERRIBLE. It got all kinds of very simple things wrong that even a 5 year old could answer, it was really good at things like "What is the capital of Angola?" but anything that requires actual understanding of the world it would get confused and give weird answers.
    I also noticed that if you play a themed quiz, like Harry Potter trivia, where you take turns asking questions until one of you gets a wrong answer, it will ask very similar questions to the ones you ask, sometimes even the same basic question just with the name changed i.e I ask it "Who is Harry Potter's dad?" and then it asks "Who is Draco Malfoy's dad?"
    ChatGPT is clever engineering but its just predicting what word should come next, it doesn't understand what it's saying.

    • @jeff946
      @jeff946 Рік тому

      A year from now when Chat-GPT is running on gpt-5 instead of gpt-3.5, it's performance could be 100x better than it is right now.

    • @Under-Kaoz
      @Under-Kaoz Рік тому

      ​@@jeff946 possible but probably not. It'll most likely be more controlled to push more far-left propaganda.

  • @KatietheKreator
    @KatietheKreator Рік тому +6

    People calling AI-generated pictures "art" is so annoying. By definition, art is self-expression, but AI has no self to express.

    • @howdareyouexist
      @howdareyouexist Рік тому

      it is art, now cope

    • @KatietheKreator
      @KatietheKreator Рік тому +2

      @@howdareyouexist It's only art if a person uses it in some way that expresses something. Even so, it's low-effort.

    • @eliisherwood5164
      @eliisherwood5164 Рік тому

      @@howdareyouexist What is being expressed by the AI?

  • @shoesncheese
    @shoesncheese Рік тому +216

    Thank you so much for this. I've been saying the same thing especially since the Go AI was beaten. It was trained to know what winning looked like, but didn't even know the rules to the game it was beating. They didn't "teach it Go" like you would a person. They just showed it what winning and losing look like and told it to go wild figuring out why a win was a win or a loss was a loss.

    • @kamikeserpentail3778
      @kamikeserpentail3778 Рік тому +13

      Yeah but ask any top level fighting game or RTS player to play against someone without telling them their opponent is a noob, and they'll second guess themselves because they're expecting certain plays.
      Newbs are unpredictable, and can trip up skilled players because of that unpredictability.
      They'll likely not win, because humans can adapt to unexpected situations much more effectively than AI.
      But AI might reach that point when we get better at incorporating multiple competing AI into one intelligence.
      Because that's how humans work.
      Our minds are competing with different portions of our minds all the time.
      I still think that's a way we can reduce AI hallucinations.
      Have other AI connected to the first, playing devil's advocate, looking for ways to disprove the first's statement or whatever.

    • @JustSomeDinosaurPerson
      @JustSomeDinosaurPerson Рік тому +6

      But that's exactly what the point of training the Go AI was for, to find quickest routes to the desired solution and now it is finding mathematical algorithms faster than any mathematician. It wasn't really about "teaching" the AI Go.

    • @harriehausenman8623
      @harriehausenman8623 Рік тому +6

      This video contains quite a lot of misinformation and is all over the place, without actually explaining anything.

    • @polecat3
      @polecat3 Рік тому

      It did know the rules and the results speak for themselves. Also they didn't "show it winning," they rewarded for winning while it played against itself

    • @pocklecod
      @pocklecod Рік тому +8

      As a very casual go player I don't think the "unexpected noob" explanation applies to go. It can happen in chess even but not really go. The reason is that the machine didn't fail to understand its opponent's moves, rather it failed to see something very basic about its own position. It didn't understand that its groups weren't alive. I would literally never make that mistake and I'm a very weak go player. This really does point to a key difference in how a human approaches a problem like go (with principles, strategies etc) and how a machine does (basically with pattern recognition it seems). In this case a strategy was able to defeat pattern recognition.

  • @Unit27
    @Unit27 Рік тому +19

    Learned how the method to beat the Go AI works, and within 4 practice games I got a win.
    The crazy thing is that, to do so, it feels like you have to throw away a lot of of your intuition about how to play the game. You don't play to make points or secure territory. Instead, you make a bunch of zombie groups that have enough to not die immediately, but which a human player would recognize as hopeless very easily, and use them to surround a group that circles back into itself.
    The scary thing is that we have no idea why the AI loses track of the situation. If it was a human you'd think they're being overconfident in the circular group's safety. But with the AI we don't know if it gets overwhelmed by a complex life & death situation it can't foresee, if it's overestimating its own group's safety against the zombie groups, or even how it understands and assesses the board position.
    It's scary how we're so eager to rely on something that we don't really know and whose functionality we can't audit.

    • @MotoRide.
      @MotoRide. Рік тому +2

      it's simple. The so called "AI" makes it's moves as best reaction to your moves. As you make seemingly benign or incoherent moves but from multiple directions, it can not forsee your strategy, as an average human would very easily. Because it is a program, not some intelligent software by any mean.

    • @Unit27
      @Unit27 Рік тому +1

      @@MotoRide. That is the interesting thing. We assign them notions of knowledge about the topic by human standards, but them being black boxes, we have no idea how they are going about responding to the input from the game. So these issues sneak in and won't be found until they crop up in practice.

    • @JDEScorpion451
      @JDEScorpion451 Рік тому +2

      @@Unit27 The reason it "loses track of the situation" is that it isn't tracking the situation at all. That's just anthropomorphizing a convoluted set of if-then statements, attributing thought where there is none. It doesn't plan or look backward, it just has a matrix of statistically determined responses to a particular input.

  • @magiccarpetmadeofsteel4564
    @magiccarpetmadeofsteel4564 Рік тому +57

    This seems to largely echo what I read a professor write about concerning ChatGPT back in February or March. He, a history professor, was speaking to its supposed ability to write essays for college students and stuff like that, and was not impressed. While he repeatedly emphasized that how exactly it worked was far outside his area of expertise, the way he explained it seems to be accurate based on what you’ve said here.
    To take one quote “it’s as if [ChatGPT] read the entirety of the Library of Alexandria and then burned it to the ground.” As stated here, ChatGPT _doesn’t _*_know_*_ anything._ As he explained it, all it knows is the statistical relationship between words-what words tend to show up in relation to other words.
    With all that in mind, while AIs can certainly do impressive stuff, I can’t help but wonder if we’re much farther away from a “general” AI than we think we are. If no AI truly has word association-to take an example from this video, what death is and who Elon Musk is-then anything they spit out, in my humble opinion, is suspect. How can an AI reliably give medical information or diagnosis if it can’t double check itself or make sense of conflicting information or actually know what its answer *_MEANS_* and that for example its first answer being a diagnosis of testicular cancer for a cisgender woman can’t be right?

    • @PeterJavi
      @PeterJavi Рік тому +7

      I too think that people saying we're close to AGI misunderstand how AGI actually would work.
      We are so far removed from AI actually understanding what it's doing, it's not even funny anymore. The most sophisticated narrow AI systems out there take a pretty long time to crack and if the task is specific enough they might very well be better than humans, but generally, the more these systems are asked to do, the easier it is to crack them. ChatGPT for example does hilarious chess, in the sense that it just invents rules and creates pieces out of thin air.

    • @rainnchen9632
      @rainnchen9632 Рік тому

      CisGenDer WoMan...because trans guys can get testicular cancer...just say females bruuhhh

    • @rainnchen9632
      @rainnchen9632 Рік тому

      CisGenDer WoMan...because trans guys can get testicular cancer...just say females bruuhhh

    • @rainnchen9632
      @rainnchen9632 Рік тому

      CisGenDer WoMan...because trans guys can get testicular cancer...just say females bruuhhh

    • @rainnchen9632
      @rainnchen9632 Рік тому

      CisGenDer WoMan...because trans guys can get testicular cancer...just say females bruuhhh

  • @dahliablossom36
    @dahliablossom36 Рік тому +1

    It's my first time seeing a video on AI's ability to understand. I didn't realize just how simple AI is right now. It sounds like ChatGPT doesn't “chat” so much as it does choose the most likely set of words to be spoken by a human, like predictive text. I remember in school they said “Mitochondria is the powerhouse of the cell” but no one explained what a powerhouse was, so we didn't understand what that meant. When the test asked “What is mitochondria?” we picked the answer with “powerhouse” in it, because that had to be correct. It's weird to think that AI is doing this and being praised as Intelligent. Intelligence is more than repeating the right answers; its understanding why those answers are right

  • @GG-bw5qd
    @GG-bw5qd Рік тому +30

    It kind of seems like the company that will be the most successful (in the relative short term) with AI will be the one that puts the fewest restrictions on it, which is somewhat terrifying.
    Regulation could help with this, but governments have historically been slow to regulate new technologies, often not stepping in until after problems arise. And meaning regulation becomes even more difficult when even the experts don't fully understand what they are working on.

    • @audiodead7302
      @audiodead7302 Рік тому

      It won't be long before we see software which detects and imprisons rogue AI. In the same way that anti-virus software and firewalls protect against viruses and hacking.

    • @ryanh7167
      @ryanh7167 Рік тому +1

      This doesn't really track well with what an AI actually is.
      For ML/AI to be successful there needs to be the following components:
      1) A well defined task with a clear metric
      2) A well defined set of actions (either continuous or discrete) by which the AI is to function
      3) A well defined reward/loss function which relate the current set of actions to the expected reward/loss function
      4) A set of experience data by which the ML/AI system "learns" to relate the combination of the state of the system and the actions to the expected reward/loss functions.
      This is why often the design of clearly thought out and quantitatively stable reward/loss functions is necessary for convergent training. More restrictions makes better AI, not less.

    • @stevenlin1738
      @stevenlin1738 Рік тому +3

      @@ryanh7167 Think he is talking about moral/legal related restrictions, not AI architecture.

    • @Serastrasz
      @Serastrasz Рік тому +1

      Never mind corporate profits, the race for the first autonomous combat AI between US and China is even more unrestricted.

    • @ryanh7167
      @ryanh7167 Рік тому +1

      @@stevenlin1738 yeah I get that. I just struggle when I see conversations about the moral/legal ramifications of AI which seem entirely disconnected from how it actually functions.
      That's not to say there aren't legitimate concerns with unregulated uses of AI, but generally speaking it isn't the learning that is the unregulated part. It's the application of learning in irresponsible ways.
      This distinction is one that has nothing to do with how "powerful" the AI is, as a well tuned and simple decision tree for a weapons system can be far more effective at doing real damage than a set of massive neural networks which act simply as an auto complete language associator.

  • @julietlima5564
    @julietlima5564 Рік тому +27

    A friend of mine who is still in service was tasked with going against an prototype semi-autonomous search, track, and targeting system. They learned something interesting during those tests.
    when they acted logicially and tactically they would get detected and lose almost all the time. then one day they went off the deep end and tried something very unconventional... they moved around in cardboard boxes as well as other tactics that wouldnt normally be used. They found out that the system couldnt discern their movements and actions then would there for ignore them...

    • @hexkirn
      @hexkirn Рік тому +8

      The good old Metal Gear approach.

    • @television9233
      @television9233 Рік тому +2

      That's a fundamental flaw in supervised learning, the model is really good when the environment is similar to it's dataset (i.e. when your friend was actually trying his best) but completely fail when the environment is shifted and is placed in novel situations.
      Many of those novel situations are so stupid and naive (i.e. moving inside a cardboard box) that any human with "common sense" can figure it out immediately.

  • @asivoria9
    @asivoria9 Рік тому +95

    I've been basically yelling this at my friends who are totally sold on the prowess of ChatGPT. It's like watching a train wreck in real time except you're on the train and the people driving it believe that it is the best track ever.

    • @mrosskne
      @mrosskne Рік тому +13

      You don't understand anything about AI.

    • @faberofwillandmight
      @faberofwillandmight Рік тому +18

      What exactly is there not to be sold on? AI is the future; whether you want to believe it or not, that train is going to keep on going no matter how much you try to stop it from moving. Our job is to better understand this technology and intervene when things look wrong. Kyle didn't say this technology was useless or powerless; he merely pointed out its flaws.

    • @farrex0
      @farrex0 Рік тому +5

      So what? what are you proposing?..... a system has a problem and despite all its other benefits and strengths, you think it should be discarded? Ok then, point me to the system that has no problems.

    • @nik021298
      @nik021298 Рік тому +4

      I don't think that is the direction this video is going for. While it downplays the hype, it still praises where the technology is currently at. The problem is the horrible wrong ways in which people could use a tool in progress. The problem with A.I. is that it "knows" a lot of things, but it doesn't understand any of it. I think a good approach to that, is to use I.A. in what is good at, and to improve it upon it in a way it which is able to understand why 2 +2 = 4.

    • @halinaqi2194
      @halinaqi2194 Рік тому +2

      Chat gpt is really powerful, but I wouldn't recommend using it to replace traditional research gathering because you do not know where it is getting its information from. If anything, it is convenient.
      It is very impressive ai and very powerful, I do not know why you are against it so much, but is is more "knowledgeable " than an average person on many topics. So if you ask it a question it most likely knows more than you. But if you are an expert or professional in that field, you understand more. But chat gpt is still a new technology. Just wait 10 years and you will not be able to tell at all.

  • @haimerej-excalibur
    @haimerej-excalibur Рік тому +1

    Imagine an AI that murders a human, replaces them, and then lives their life perfectly without anyone knowing or realizing.

  • @kevh6303
    @kevh6303 Рік тому +77

    I always felt like "deep-learning AI" should really be called "brute-force AI". Honestly, they're more like superior parrots than superior humans.

    • @emmasnow29
      @emmasnow29 Рік тому +4

      No, they are not like Parrots at all.

    • @WilbertoCasillas
      @WilbertoCasillas Рік тому +3

      It’s the transformation architecture lol 😂 one could argue humans are like parrots

    • @panner11
      @panner11 Рік тому +2

      Somewhere in between, traditional computing would be called brute force AI. Deep learning does have a methodology to arrive at it's answers without having to brute force anything. It's just that the methodology isn't actual understanding or learning. But it's not brute force either.

    • @aoeu256
      @aoeu256 Рік тому

      @@panner11 The AI should brute force a way to improve its architecture and efficiency, and build replicators that collect all of the solar energy in Central Asia, Sahara, Antartica, and the Moon to build giant optical computers that we will connect to with BCI.

    • @sumsarsiranen
      @sumsarsiranen Рік тому

      @@WilbertoCasillas You certainly seem more like a parrot than anyone who creates.

  • @hdgehog6
    @hdgehog6 Рік тому +13

    I'm a scifi writer and I've been saying that about AI for a decade..... Even in the future, the challenge will be to get them to "understand" anything. THAT's gonna be the hard part!

    • @SmileyEmoji42
      @SmileyEmoji42 Рік тому +2

      But no SciFi, that I've ever read, had an AI that seemed to understand something 1000s of times until it came upon the one (?) input that makes it go crazy. (I don't count Asimov's 3 laws as many people have pointed out that they would never work correctly even once for a sufficiently smart AI)

    • @fbafoundationalbuck-broken6011
      @fbafoundationalbuck-broken6011 Рік тому

      ​@Nick Hounsome YOU ARE MAKING A MISTAKE, OUR AI AREN'T SENTIENT BUT ASIMOV'S LAWS ARE A SAFEGUARD AGAINST SENTIENT AI.

    • @SmileyEmoji42
      @SmileyEmoji42 Рік тому

      @@fbafoundationalbuck-broken6011 Read my post - I don't say AI is sentient.
      Read the research - Asimovs laws are unimplementable even in principle and, even if they could be implemented, they would not work because the first law would prevent any reasonably smart AI from doing almost anything.
      Don't shout - It doesn't make you sound more persuasive.

  • @danser_theplayer01
    @danser_theplayer01 Рік тому +7

    Taking chatgpt as an example, it literally guesses the next few characters to type. It guesses pretty well but a guess is a guess, the bot doesn't know, doesn't feel, doesn't reason. I only picks most probable charatcters to type to formulate a response. And I've seen a demonstation of this.
    (Of course I'm not giving the most precise and accurate explanation but as Kyle just said - nobody knows fully how they work)

    • @diegopescia9602
      @diegopescia9602 Рік тому

      I believe Kyle was referring to the fact that we do know that it guesses, but we don't know the steps it took the get to that guess. And therein lies possible vulnerabilities that could be eventually exploited

    • @danser_theplayer01
      @danser_theplayer01 Рік тому

      @@diegopescia9602 And that sucks if we are going to AI-ify everything.

  • @PhoenixRising-pc2fv
    @PhoenixRising-pc2fv Рік тому +12

    Imagine when the groups of stones are actually groups of people and the AI still does not know the value of what was lost.
    It's inevitable.

    • @ThomasTheThermonuclearBomb
      @ThomasTheThermonuclearBomb Рік тому

      Yep, we'll likely have AI in charge of wars at some point, and then maybe realise our mistakes when it nukes an entire country in the name of "world peace"

    • @InAHollowTree
      @InAHollowTree Рік тому

      Companies are already using them to sort applications, and to hire and fire people, so it seems like humanity is right on track for that terrible era to manifest.