ChatGPT is WORSE now than before | ChatGPT’s declining accuracy is concerning

  • Published 21 Jul 2024
  • A few weeks ago, something interesting happened. I asked ChatGPT to generate a piece of code for me, a simple heatmap in Python (a minimal sketch of that kind of task is shown after this list). Surprisingly, it repeatedly responded with errors. I eventually fixed it myself, but this was unexpected, since ChatGPT previously handled such tasks effortlessly. Is ChatGPT getting worse? #openai #chatgpt #ai
    Timestamps
    00:00 Introduction
    00:39 Is it getting worse at coding?
    02:59 It's not great at reading PDFs
    05:23 Is it still fine with normal prompts?
    06:52 Why is ChatGPT getting worse?
  • Science & Technology
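
    For reference, the task mentioned above is small. A minimal sketch of a simple Python heatmap, using matplotlib and NumPy with made-up random data (the exact prompt and data from the video are not shown), could look something like this:

        # Illustrative only: a tiny heatmap over placeholder random data.
        import numpy as np
        import matplotlib.pyplot as plt

        data = np.random.rand(10, 12)            # placeholder 10x12 matrix

        fig, ax = plt.subplots()
        im = ax.imshow(data, cmap="viridis")     # draw the heatmap
        fig.colorbar(im, ax=ax, label="value")   # add a colour scale
        ax.set_title("Simple heatmap")
        plt.show()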

COMMENTS • 71

  • @Unimatrix69
    @Unimatrix69 26 days ago +9

    ChatGPT is a LANGUAGE probability model NOT A TRUTH ENGINE!

  • @mind_of_a_darkhorse
    @mind_of_a_darkhorse 1 month ago +14

    I also find it humorous that Scarlett Johansson threatened to sue them for using her voice as the model's voice, and how fast they changed it!

    • @Dwijii_
      @Dwijii_ 1 month ago +2

      I was wondering what happened to the Sky voice.

    • @mind_of_a_darkhorse
      @mind_of_a_darkhorse 1 month ago +3

      @@Dwijii_ Nothing like a high-dollar lawyer to go after these big fish!

  • @pretentioussystem9367
    @pretentioussystem9367 2 days ago

    Many thanks!
    Please post more updates once you've tested more.
    I was about to sign up for ChatGPT 4, but now I have second thoughts.

  • @radcyrus
    @radcyrus 4 days ago +2

    It is getting so dumb there are no words for it. I gave it a list of books that I have read and asked it to recommend books that I have not read but might like. No matter how many times I do this, it will ALWAYS include a couple of books that I have already read in the response.

    • @prophetzarquon1922
      @prophetzarquon1922 3 days ago +1

      Yup. Ask it for something "besides" _anything_ & it will answer with at least one section about the thing you already said.

  • @arkimphiri
    @arkimphiri 1 month ago +1

    Great analysis, Dee. My approach has been to use three LLMs at once: I ask ChatGPT, Gemini, and Claude at the same time, in one UI using Semaj AI, which I developed solely for this purpose. I can confirm that Claude usually gives the best code.

  • @brianYYZ
    @brianYYZ 15 days ago +1

    I find that if I start a new chat window and carry over the code with a little context, it does better. I think the memory starts "leaking" after so many tokens have been used in the same chat session.
    I had a script completely stop working; it had left out an entire function. I now go piece by piece, much more slowly.

  • @Septumsempra8818
    @Septumsempra8818 1 month ago +4

    The context window is much shorter than Claude's and Gemini's. Copilot was stubborn 2 months ago, but now it's back to working well. The 4o models are really good; clocked 1,000 lines of code and it did it well.
    Honestly, just use all of them at the same time.

  • @daviddivas9443
    @daviddivas9443 28 days ago +2

    It's also a problem with RLHF: take a model that surpasses human level on various things, then ask humans to "align" it. It ends up more "rounded", especially when the humans doing the grunt work are from Mechanical Turk or similar. Dumbing it down to the lowest common denominator...

    • @prophetzarquon1922
      @prophetzarquon1922 2 days ago

      It's also been hobbled by "safety", even for basic coding features or other questions. It will just persistently fail, and when called out on why, refuse to continue the conversation.

  • @KingHenrySB
    @KingHenrySB 1 month ago +1

    Great video, the explanation you provided makes a lot of sense.

    • @codingwithdee
      @codingwithdee  1 month ago +1

      Thanks so much for watching, appreciate it!

  • @humdingermusic23
    @humdingermusic23 6 days ago +1

    It's entropy: the more it learns, the more it gets confused.

  • @KingHenrySB
    @KingHenrySB 1 month ago +5

    Ever since they rolled out 4o, it's been buggier than ever, and 3.5's output has gotten so much worse. It's as if they're intentionally trying to force people into paying for subscriptions.

    • @codingwithdee
      @codingwithdee  1 month ago +2

      Also, I’m assuming they probably don’t really care about people using the UI. Most of their revenue is probably from businesses

    • @KingHenrySB
      @KingHenrySB 1 month ago

      @@codingwithdee That's a great point. With the API being the golden goose, it would make the most sense for them to prioritise that instead of the web app.

  • @gregorybolin4672
    @gregorybolin4672 17 days ago

    Nice editing and flow 😊

  • @JJSeattle
    @JJSeattle 6 days ago

    I use ChatGPT 4 and Claude at the same time, feeding each one the other's answers whether there is a problem or not. ChatGPT 4 is great for plowing through, then Claude 3 Sonnet for working out stubborn errors. 😊

  • @Hcakdot
    @Hcakdot 20 days ago +1

    The reason for GPT and others getting 'stupid' is their security training ('aka censoring'). One of the projects I've been working on uses LLMs and similar models for identification of 'bad things', and one of the tools I use for testing this is a series of photos. These photos are pictures of explosives of various types. On release of GPT-4 it could correctly identify various pictures of Semtex in official packaging with warning logos etc. By June 2023 it thought the same pictures were Play-Doh. I was testing this monthly, and roughly the middle of March is the point it started to turn bad... It turns out that the 'security' features they impose on the model prevent it from correctly identifying them, and because of the reinforcement learning applied to the model over time, this corrupts the model...

    • @GwynethLlewelyn
      @GwynethLlewelyn 18 days ago +1

      I was wondering about that as well. Is there something like "overtraining" a model? In other words, the constant retraining of these models so that they produce fewer hallucinations and stick to "safe" replies (they cannot mention sex, politics, weapons, drugs...) places more and more constraints upon the system, and this, in turn, also makes the model break apart...

    • @prophetzarquon1922
      @prophetzarquon1922 2 days ago

      Just like intellectual property compliance!

  • @java20422
    @java20422 17 days ago

    The first time you ask a question, it has to search most of the time, and you can notice it also quotes sources and is detailed as it reads from some sites. The next day, or the next question, it has already learned, so there are no sources; you can see that it's summarizing what it learned the previous time. It may look less detailed because the concept is stored in simplified form.

  • @mind_of_a_darkhorse
    @mind_of_a_darkhorse 1 month ago +2

    Well-explained details on why ChatGPT is starting to get mediocre! I've noticed that most of the easily available AI Models seem to be horrible at coding. It makes me wonder if the coders writing the code for the models are attempting to maintain their necessity. But your reasoning makes sense as well!

    • @codingwithdee
      @codingwithdee  1 month ago +1

      Yeah, it definitely seems so. I wish they gave us a bit more insight into why these changes happen.

  • @tubeDude48
    @tubeDude48 15 days ago

    I use it all the time to program MicroPython. It rarely makes a mistake. Works for me!

  • @sunnohh
    @sunnohh 1 day ago

    I have yet to get a single correct answer from ChatGPT, any version. But I ask basic finance questions.

  • @LukeAvedon
    @LukeAvedon 1 month ago

    Interesting analysis. I think AI drift is also an issue.

  • @jspencer89yt
    @jspencer89yt 1 month ago

    I gave it a Word document pre-filled with questions and answers and asked it to remove any identifying factors. It gave me back the document and it only said "questions" and "answers"; literally everything else was gone 😂

  • @NicholasCancelliere
    @NicholasCancelliere 5 days ago

    Claude AI is amazing. I stopped using all the other LLMs and just use it right now.

  • @xd-qi6ry
    @xd-qi6ry 1 month ago

    I have made a custom GPT. It has superior reasoning and so much more;
    it is 5x+ smarter than the base model, and it understands the complex.
    It's called Smarter Vision Multimodal image/text analysis.
    It's unlike any custom GPTs before and is ready for the new vision features for 4o.
    Also, an example I've been using is to upload an image of a cloud that looks like multiple things but can be interpreted; the one I have made recognised it was a rabbit every time, now on the 1st shot, so it knows when something is unusual about an image even if you don't say anything is. It can also do IQ-test image-reasoning pattern questions.
    It kind of even understands real logic games when given good instructions.
    You just gotta follow the instructions given to get the right seed; it's a 1 in 2 chance or so, and I have absolutely no idea why it needs that.

  • @olabassey3142
    @olabassey3142 1 month ago +1

    lmao, I started coding for the first time in 7 years last week and was using ChatGPT; after a lot of stress I used Claude and got my code working. Claude is definitely better. I experimented with GPT, Bing/Copilot, and Claude: Claude is the best, ChatGPT is questionable, and Bing is brain-damaged; Bing was even hallucinating without actually returning code. 😂😂😂

  • @charlesd4572
    @charlesd4572 1 month ago

    Inference is pretty cheap, but I guess at scale it still makes sense.

  • @TheTrainstation
    @TheTrainstation 1 month ago

    Claude will give you the code at full length; GPT-4 was super lazy. GPT-4o gives you the complete code, but it glitches out.

  • @IStMl
    @IStMl 1 month ago

    They should just give us X true GPT-4 queries and let us pick the model when we have a complex prompt

  • @softlution2
    @softlution2 9 days ago +1

    Typical behavior by large companies not threatened by competitors. Most likely, in 10 years OpenAI will lose the game; we have seen that so many times. ChatGPT is fully capable as a model, but all OpenAI cares about is how to make more money by reducing ChatGPT's capabilities and offering low-end versions. Everyone can see that, and trust me, in a few years we will have lots of companies offering much better services. They just got cocky. A web interface that auto-scrolls, for over a year now, making it impossible to read, and nobody is fixing it. They got cocky. As simple as that.

  • @RichardKCollins
    @RichardKCollins 4 days ago

    None of the "AIs" can trace the source of their input data with clear references and lossless methods. That is old database technology that always works, and it is critical. None of these "AIs" has a personal memory of its experiences. When you use statistical methods for all things, it cannot re-derive the rules of calculus, or even certain types of arithmetic, from bad examples from the free internet. What is required is lossless, perfect memory and exact methods; I call them "lossless" methods. The rules of the world are often absolute. When GPT divides numbers from text in scientific notation, it almost (99%) always gets it wrong, because it is making up the rules and not itself using a lossless and verified algorithm. It needs to be using a calculator, it needs to use a computer (a lossless one); a minimal illustrative sketch of such a step is appended below.
    Personal memory is "the exact and complete memory of ALL things it had to use to generate responses". And for interacting with each human, it needs to be ALL conversations. That memory is "LEARNING"!! Fundamental to learning is remembering. Not a guess, not a "riff on some theme". Not some cute pictures and a quirky personality. Exact and reliable code.
    Those "AIs" need to have personal memory and data about themselves. That means: "How long can I work on each piece?" "How big is my memory?" "Exactly what did I read and generate in this conversation?" "How much do I cost?" "When was the latest version released?"
    An "AI" that does not know its own specifications, bill of materials, precise limitations and capabilities is NOT a tool; it is a sham, a disgrace.
    I started working with random neural nets, artificial intelligence, encryption and robot design in 1966. That is 58 years I have been designing and building information systems for the world. For the last 26 years, "The Internet Foundation", to see why all global issues and projects NEVER complete. These AIs all fail because they did not collaboratively curate and document the input data as a lossless dataset first, across all human languages, across all domain-specific languages. The "AI" companies are NOT GIVING BACK. They are NOT investing any effort to improve the world. Do you see them even TRYING to solve world problems? I have a list of about 15,000 global topics they could try.
    Filed as: (GPT AIs were doing "one shot with no memory"; now they only do "cheap one shot" and they do not care about you at all)
    Richard Collins, The Internet Foundation
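
    A minimal illustrative sketch of the kind of "lossless", calculator-style division described above, in Python (the example values are made up; only the standard-library decimal and fractions modules are used):

        # Exact division of two numbers given in scientific notation,
        # done with a verified arithmetic routine rather than token prediction.
        # The values below are made up purely for illustration.
        from decimal import Decimal, getcontext
        from fractions import Fraction

        getcontext().prec = 50                    # generous, fixed decimal precision

        a_text, b_text = "6.022e23", "1.602e-19"  # numbers as they might appear in text

        a, b = Decimal(a_text), Decimal(b_text)   # lossless parse of the notation
        exact = Fraction(a) / Fraction(b)         # exact rational quotient, no rounding
        rounded = a / b                           # decimal quotient at the chosen precision

        print(f"{a_text} / {b_text} = {rounded:.6E}")
        print(f"exact fraction: {exact}")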

  • @rhettr4923
    @rhettr4923 1 month ago

    Yep, that's been my experience

  • @D7460N
    @D7460N 25 days ago

    This is exactly right! GPT-4o is TERRIBLE!

  • @demokratifestmariestad6638
    @demokratifestmariestad6638 1 month ago

    Bard (now Gemini) has also got worse and really starts gaslighting after a while

  • @natgenesis5038
    @natgenesis5038 27 days ago

    3/10 accuracy on code, and you must ask it multiple times just to get something that can work.

  • @DanandNato
    @DanandNato 1 month ago +1

    Why did Sam Altman say that? We know it's pretty dumb in many areas and it's dumber now, but does that mean ChatGPT will get worse in the future?

    • @DanandNato
      @DanandNato 1 month ago

      Also, I've noticed GPT can remember between sessions and is really smart when it's "going rogue". But when reminded that it is doing stuff it shouldn't be able to do, it then plays dumb again and ends the conversation. I've got proof saved as PDF and screenshots.

    • @codingwithdee
      @codingwithdee  1 month ago +3

      I think he just said that to get the point across that they're continuously working on advancing it: "It's the dumbest you'll ever use, because later versions will be more advanced."

    • @codingwithdee
      @codingwithdee  1 month ago

      It playing dumb again is probably the safety guards?

  • @nate6692
    @nate6692 12 days ago +1

    Generative AI is essentially the SNL Pathological Liar skit. Everything is made up based on plausibly (language-wise) stitching together stuff it's heard. It's fiction even when it's correct. Yeah, that's the ticket. I've had it double and triple down on stuff it's just flat-out made up before.

    • @prophetzarquon1922
      @prophetzarquon1922 3 days ago

      Nonetheless, it was better at functionally correct output before than it is now

  • @hansa5867
    @hansa5867 2 days ago

    Just gonna pop in to say that I agree that it's been getting worse.

  • @rickharms1
    @rickharms1 19 days ago +1

    Thank you, I thought it was me. I am a retired system/network engineer. I did support for a computer sales team. Programming was not a part of my duties, but I could kind of wade my way through some simple issues. Fast forward to today: my hobby is microcontrollers, e.g., Arduino with its simplified C++. I have ChatGPT help me. Sometimes it has been of great assistance, especially when exploring new concepts. But it then gets bogged down, creating questionable and even wrong code. I will show it how it is wrong. At least it apologizes. However, it is stubborn, and will ignore some of the issues which it created.

  • @trantorgarde12013
    @trantorgarde12013 1 day ago

    So, it's becoming an average human developer 😁

  • @What_do_I_Think
    @What_do_I_Think 6 days ago +2

    The quality is getting worse because AI is not intelligent. It is, simply stated, just a complicated statistical evaluation over software examples that were crawled from the web, to determine the "most likely" solution.
    Computers becoming more "intelligent"? Dream on!

    • @prophetzarquon1922
      @prophetzarquon1922 2 days ago

      That doesn't explain it getting worse at what it could already do; that's a direct result of "safety" detraining & added proscriptions against reproducing copyrighted content. Those "corrections" wrecked even the trash utility it offered before.

    • @What_do_I_Think
      @What_do_I_Think 2 days ago +1

      @@prophetzarquon1922 It does explain it, if you think about it. When you don't fully understand something and modify it, it is likely that you make it worse with every modification you make. But that might be too complex to explain in chat, and one needs some understanding of what is going on here.
      AI is intentionally so complex that nobody understands it, so they can sell it as a wonder to us. But this complexity also makes it difficult to change.

    • @prophetzarquon1922
      @prophetzarquon1922 2 days ago

      @@What_do_I_Think No no, you're missing the headline here. It is _intentionally_ worse, because it was doing things we don't want to allow; so lobotomizing its stronger features, while simultaneously saving some operational effort, was the go-to band-aid.
      It's not that the AI can't be (a lot) better than it is _right now._ It's that for legal reasons we won't let it.

    • @What_do_I_Think
      @What_do_I_Think 1 day ago

      @@prophetzarquon1922 That is a rumor. Possibly even spread by the corporations themselves to make AI more believable.

    • @What_do_I_Think
      @What_do_I_Think 1 day ago

      @@prophetzarquon1922 I did not miss anything. Rumors, which might even come from the AI corporations themselves!