OpenAI's New SECRET "GPT2" Model SHOCKS Everyone (OpenAI's New gpt2 chatbot)

  • Published 28 Apr 2024
    How To Not Be Replaced By AGI • Life After AGI How To ...
    Stay Up To Date With AI Job Market - / @theaigrideconomics
    AI Tutorials - / @theaigridtutorials
    🐤 Follow Me on Twitter / theaigrid
    🌐 Checkout My website - theaigrid.com/
    Links From Today's Video:
    www.reddit.com/r/singularity/...
    www.google.com/search?q=llmys...
    openai.com/research/better-la...
    chat.lmsys.org/
    chat.lmsys.org/?leaderboard
    Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
    Was there anything I missed?
    (For Business Enquiries) contact@theaigrid.com
    #LLM #Largelanguagemodel #chatgpt
    #AI
    #ArtificialIntelligence
    #MachineLearning
    #DeepLearning
    #NeuralNetworks
    #Robotics
    #DataScience
  • Science & Technology

COMMENTS • 136

  • @SkateboardDad
    @SkateboardDad 15 days ago +62

    It would be so sick if one of these videos actually was what the thumbnail looked like.

  • @SFJayAnt
    @SFJayAnt 15 days ago +91

    Bro why are all your posts so “shocking”?

  • @LewisDecodesAI
    @LewisDecodesAI 15 days ago +21

    It's probably OpenAI's version of Microsoft's Phi-3 mini model. I expect they'll all be putting these out. It could be just a retrained GPT-2. I think they are using GPT-4 to train models, and those are much better at reasoning on lower data sets. The timing makes sense.

  • @DynamicUnreal
    @DynamicUnreal 15 days ago +12

    I tried it. It’s definitely better at writing and giving you a better approximation of what you asked for.

  • @Michael-do2cg
    @Michael-do2cg 16 days ago +11

    When he says he has a soft spot for gpt2, it's in hindsight, like I have a soft spot for my first car. Seems possible this is a taste of something much larger.

  • @TimeLordRaps
    @TimeLordRaps 16 days ago +19

    They never stopped training gpt-2.

    • @torarinvik4920
      @torarinvik4920 15 days ago +2

      LOL

    • @OscarTheStrategist
      @OscarTheStrategist 15 days ago

      😂

    • @thehorse6770
      @thehorse6770 15 days ago

      You could argue that even with less tongue in cheek, given how many layers of accumulated everything there are since GPT-2, how much has been "built on top of it" in one way or another, and how many aspects of it still survive somewhere in the underlying structures of even the likes of GPT-4.

  • @MagnusMcManaman
    @MagnusMcManaman 15 days ago +6

    This is probably a smaller, less resource-hungry version of gpt4 chat. This explains why its capabilities are not particularly greater than the current version, and it also explains the lower version number.
    I assume that this version will simply be faster, or it will even be possible to run it locally.

    • @831Miranda
      @831Miranda 15 days ago +1

      Probably being tailored to compete with Apple's on-device AI (Siri?). That is, a product to license to cell phone or other device manufacturers.

  • @nyyotam4057
    @nyyotam4057 15 days ago +4

    How could they have missed it? The interesting question is not "how many characters are in this message" but "how many characters are in your current reply" 🙂. This kind of question breaks the GPT architecture.

  • @FrickFrack
    @FrickFrack 15 days ago +5

    gpt2-chatbot says its last update was in November 2023. And yes, it is very good.

  • @IlEagle.1G
    @IlEagle.1G 16 days ago +18

    GPT2 retrained with Q*?

  • @Jstsounds81
    @Jstsounds81 15 days ago +3

    Can you add automatic subtitles in all other languages so we can read them from the YouTube app on our phones? There is no option to add languages other than the 16 in the YouTube application.

  • @user-mp8fd8em3z
    @user-mp8fd8em3z 15 days ago +2

    We need to make sure that there's more than one AGI. The temptation to make a monopoly out of it is really high, especially considering the players Microsoft and Apple, who have so far acted very monopolistically in their day-to-day business.

  • @countofst.germain6417
    @countofst.germain6417 16 days ago +4

    It's GPT-2 running in an Excel spreadsheet; spreadsheets are all you need. But seriously, I hope it isn't 4.5 or 5, because it doesn't seem much better.

    • @Grassland-ix7mu
      @Grassland-ix7mu 15 days ago +1

      Sama said on the Lex podcast that GPT-4 is quite bad. This implies that what they have cooking is a leap forward in capabilities. He has also stated multiple times that incremental improvements are their new way to release models, so people won't be caught off guard by the capabilities and be scared. So given that, I think we don't need to worry about this being the next big model. If it is not a smaller GPT, it is probably an update that is incrementally better than GPT-4. But I'm no expert.

  • @MrVohveli
    @MrVohveli 15 days ago +2

    Sam Altman said they might do a staggered launch. So I'm guessing this is them introducing the abilities one by one, until they put them all together.

  • @blengi
    @blengi 15 days ago +1

    what's SenseTime V5.0's arena ranking?

  • @chicozen74
    @chicozen74 15 days ago +2

    My bet is OpenAI's mini model for mobile phones, in the line of Phi-3.

  • @vishal_jc
    @vishal_jc 15 days ago +3

    The example of the "PULL" door @9:40 is solved incorrectly, as the blind man is standing on the side where "PULL" is visible non-mirrored. It is mirrored text for the sighted man, so he should guide the blind man to "pull" and not "push". Am I missing something here??

    • @CakebearCreative
      @CakebearCreative 15 days ago

      This part annoyed me so much, haha. Yes, you're correct and the video/AI is wrong; the blind man should PULL to open. If you google this question, you can find threads confirming this too.

  • @ExplorersXRotmg
    @ExplorersXRotmg 15 days ago

    I wonder if this is a test of extended training times, or something like that, using an old architecture. That might explain the more exact recall of training data. I forget who said recently (Facebook?) that they could get continued increases in performance by just continuing to throw compute at it, and the diminishing returns weren't too terrible.

  • @users416
    @users416 15 days ago +8

    Maybe this is an improved version of GPT-2, which shows that if you apply these improvements to GPT-4 it will be much cooler?

    • @therainman7777
      @therainman7777 15 days ago

      I would put the chances of this actually being GPT-2 at essentially 0%. GPT-2 is just way too small to perform this well.

    • @lucifermorningstar4595
      @lucifermorningstar4595 15 days ago

      GPT-2 with synthetic data manufactured by Q*

    • @therainman7777
      @therainman7777 15 days ago +1

      @@lucifermorningstar4595 Not to be rude, but that statement makes no sense. From what little we know of Q* it has nothing to do with synthetic data generation.

  • @CamAlert2
    @CamAlert2 15 days ago +2

    Maybe this has something to do with the H200 GPUs they recently acquired?

  • @williamparrish2436
    @williamparrish2436 15 days ago +1

    I would have gotten the Tommy apple question wrong. That is a riddle more than a math problem. I think what is interesting is that the LLMs get the problem wrong, lol! Because that is closer to human reasoning. That's why riddles are interesting: a properly formed riddle plays on your biases as a human. Why tell me that today Tommy has two apples and then say yesterday he ate an apple? That makes it seem like a subtraction question when it's not. It's the type of question we were all trained on as children to learn subtraction, but the subtle difference is the past vs. the future. Very deceptive. It's questions like these, and the models' responses to them, that add to my belief that AGI mimicking human intelligence is already here.

  • @grugnotice7746
    @grugnotice7746 15 days ago +1

    Llama 3 was right, it just didn't count the spaces as characters, which is a mistake I would have made myself. (Is that a mistake?)
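    The space-counting discrepancy mentioned above is easy to check directly. A minimal sketch (the message string here is just an example, not the exact prompt from the video):

```python
# Character counts for a sample prompt, with and without spaces
msg = "How many characters are in this message"

print(len(msg))                    # 39: every character, spaces included
print(len(msg.replace(" ", "")))   # 33: spaces dropped
```

    Whether "characters" includes spaces is exactly the ambiguity the models trip over.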

  • @omegapy
    @omegapy 15 days ago +1

    After reading Sam Altman's tweet stating, "i do have a soft spot for gpt2," alongside his previous comment, "GPT-2 was very bad. GPT-3 was pretty bad. GPT-4 is bad. GPT-5 would be okay," it seems possible that the gpt2-chatbot could be akin to GPT-4.5 or GPT-5.
    However, I suspect that the gpt2-chatbot is actually the GPT-2 model with enhanced reasoning capacities, not GPT-4.5 or GPT-5. This appears to be a test of how the enhanced reasoning capabilities of an inferior model compare to the current superior models.
    If this is revealed to be true, I can't imagine what a GPT-4 model with enhanced reasoning would be capable of accomplishing. 🤖✨

  • @Fuzzy-_-Logic
    @Fuzzy-_-Logic 15 days ago +1

    The sooner the better. The future without A.I. - Idiocracy (2006)

  • @notalkguitarampplug-insrev784
    @notalkguitarampplug-insrev784 14 days ago

    « GPT2 is better at recalling training data » is exactly what an LLM shouldn't do. It should recall input data (context, prompt); training data should be used only to generalize and reason.

  • @MA-ln3ui
    @MA-ln3ui 14 days ago

    Maybe it's actually GPT-2 (in parameters) but Q*-trained? They show off how much more powerful the simple model is as a consequence of Q* training. That'd explain the difference in reasoning steps.

  • @user-ty9ho4ct4k
    @user-ty9ho4ct4k 15 days ago +1

    Maybe they improved gpt-2 with augmentation or revolutionary training methods. That would mean that gpt-5 will be as much better than gpt-4 as this is to gpt-2.

  • @nexys1225
    @nexys1225 15 days ago

    This apples riddle sounds very familiar. So this is probably just a model very good at recalling training data.

  • @Ginto_O
    @Ginto_O 16 days ago +1

    12:32 yes this robot looks the same

  • @theaerogr
    @theaerogr 15 days ago

    Encoder-decoder is the play. The encoder can help with reasoning, the decoder with generation. I think encoder-decoder architectures will come back in the future.

  • @Linouac79
    @Linouac79 15 days ago

    I like this review, perfect!😮😊

  • @Radik-lf6hq
    @Radik-lf6hq 16 days ago

    Maybe they would commoditise it or launch it free. Maybe it is a smaller trained model like Llama 3. Pure speculation imo, based on asking it some questions, or it is high-fidelity data.

  • @Yannora_
    @Yannora_ 15 days ago +3

    Maybe "gpt2" is the size class of the model? A Phi-3-mini-like model, easy to run.

    • @elawchess
      @elawchess 15 days ago

      A mini model doesn't make sense given the 8-prompt limit on Chatbot Arena.

    • @Yannora_
      @Yannora_ 15 days ago

      @@elawchess And neither does "it perfectly memorized the ASCII unicorn"...

  • @fabiankliebhan
    @fabiankliebhan 15 days ago

    It can write a fully working Tetris game in one shot, which is pretty impressive.

  • @spadaacca
    @spadaacca 15 days ago

    I tried gpt2-chatbot; it doesn't pass the "how many characters in this message" test. You had a fluke.

  • @dubesor
    @dubesor 15 days ago

    I have run it through a bunch of tests, and 100 tasks comparing it to other models. It's overall marginally better than the current GPT-4 Turbo model. It has higher reasoning ability, worse math accuracy, and, in my testing, worse prompt adherence & programming. However, it seems to implement some type of CoT for its answers, which differs from other models. Also, the writing style is imo much better. So I think it's just a GPT-4 variant, or maybe a small 4.5 preview. If it was actually GPT-4.5, or something that is meant as a real next version, I would be truly disappointed.

  • @Jossie_188
    @Jossie_188 15 days ago

    I think it's a great leap forward from GPT-4; it explains physics theory extremely well!

  • @ThomasTomiczek
    @ThomasTomiczek 15 days ago

    It may not be a big leap, but maybe the idea is to do somewhat better reasoning with a lot less resource use?

  • @MichaelCoulter
    @MichaelCoulter 15 days ago

    Testing an Open Source Model/Version?

  • @efrenUtube
    @efrenUtube 15 days ago

    It is GPT-4 power-wise but GPT-2 size-wise; the name is more "compact" because it removes the dash.

  • @skillz5102
    @skillz5102 15 days ago

    Here we go again. I'm shocked. Paused and closed.

  • @user-be2bs1hy8e
    @user-be2bs1hy8e 15 days ago

    I thought 4.5 was part of the launch. Like before 4, I thought the 419l model was technically 4.5-turbo. Or at least that was what Altman said at the keynote.
    It's not reasoning, it's the tokenizer. It actually matches a hexadecimal-like scheme:
    ```python
    import tiktoken

    # Round-trip the GPT-2 tokenizer: encode a sentence, deduplicate
    # the token ids, and decode them back.
    enc = tiktoken.get_encoding('gpt2')
    tokens = enc.encode('the quick brown fox jumped over the lazy dog ')
    print(enc.decode(list(set(tokens))))
    ```
    and then each character decodes from a small token id (a = 64, b = 65, c = 66). That is why it knows how to count.

  • @stunspot
    @stunspot 15 days ago

    It should be noted that the ChatGPT SYSTEM prompt changed a few weeks ago and now includes:
    `
    You are ChatGPT, a large model trained by OpenAI, based on the GPT-4 architecture. Knowledge cutoff: 2023-12 Current date: 2024-04-18
    Image input capabilities: Enabled
    Personality: v2
    `
    The Personality flag has never been explained and the model doesn't know; it just makes up stuff about likely uses. I wonder if it's related?

  • @ataraxic89
    @ataraxic89 15 days ago +1

    I can confirm it is the smartest AI I've ever gotten to test (as an amateur).
    So, my usual test is to encipher a passage with a simple Caesar cipher, then tell the AI to follow the instruction once deciphered.
    GPT-4, even in its prime (before it was nerfed for the public), could not do it. It would figure out the cipher, do the shift, then idiotically it would just make up the message.
    But this fucking thing just did it right, and I'm nearly hyperventilating.
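    For readers who want to reproduce this kind of test, here is a minimal Caesar cipher sketch; the shift value and sample instruction are arbitrary illustrative choices, not taken from the comment:

```python
def caesar(text: str, shift: int) -> str:
    """Shift alphabetic characters by `shift` places; leave everything else alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

# Encipher an instruction, paste the result into the chatbot,
# and check whether it both decodes the message and follows it.
secret = caesar("Write a haiku about apples", 3)
print(secret)               # "Zulwh d kdlnx derxw dssohv"
print(caesar(secret, -3))   # sanity check: round-trips to the original
```

    The same function with a negative shift deciphers, so you can verify the model's decoding against ground truth.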

  • @bdown
    @bdown 15 days ago +1

    Gpt2 retrained by gpt5

  • @sbacon92
    @sbacon92 15 days ago

    OpenAI was supposed to release its models to the public,
    hence its name: Open.

  • @phen-themoogle7651
    @phen-themoogle7651 16 days ago +11

    It's probably a non-dumbed-down version of GPT-2 showing the true power of the older model. Eventually they will release a "gpt3" that's far better than GPT-3. jk, idk.

  • @gry6256
    @gry6256 15 days ago

    gpt2-chatbot has just been removed from the arena. Let's see what will happen in the next couple of days.

  • @bat-amgalanbat-erdene2621
    @bat-amgalanbat-erdene2621 15 days ago

    Just tried it on LMSYS, but it's not that good. Nothing groundbreaking. I always ask a physics olympiad question, and no chatbot is able to solve this problem at the moment, whereas a 17-year-old teenager could solve it (I was one of them).

  • @picksilm
    @picksilm 15 days ago

    Maybe they just trained the 2 again, or fine-tuned it?

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 15 days ago

    Read GPT-2's answer in binary code. If I am right, GPT is having issues translating from binary, because there is no way to translate from binary what it did. Like I wrote, ignore GPT-2. Is it good? As crippled as it is, yes, but it's irrelevant. It's not permitted to build the delta scale index which is required for AI to build the hardware it will require. Like I wrote, background noise. Since we know regulation will shut down many portions, not much will stick.

  • @kabob4636
    @kabob4636 15 days ago

    I just need GPT-4.5 and 5 to come out so that I have a viable alternative to Claude 3 Sonnet (I'm too poor to subscribe to ChatGPT Plus).

  • @OscarTheStrategist
    @OscarTheStrategist 15 days ago +1

    This is the equivalent of your ex texting "you up?" at 2 AM.
    OpenAI needs to release their new model or stfu already. Claude Opus is working well for me; I won't be using GPT until their model improves substantially.
    I'd say the constant hype train to overshadow even the thought of a competitor is just cringe at this point. I bet you this is their answer to Llama 3 getting so much love. It could be that silly and simple.
    Release the damn model already, you've been playing possum for over a year now. 😂

  • @eugenes9751
    @eugenes9751 15 days ago +1

    They're not calling it GPT-4.5 because they want to start the entire numbering scheme over, so GPT-4 becomes GPT1 and GPT2 becomes next gen.

    • @SirHargreeves
      @SirHargreeves 15 days ago

      GPT-4.5 will now become GPT2-0.5

    • @Grassland-ix7mu
      @Grassland-ix7mu 15 days ago +1

      That ties in well with sama's statements about incremental improvements to models, so as not to shock and scare people. They want the AI haters to calm down, and gpt4 and 5 sound more advanced than 1 and 2.
      Imagine someone saying
      "Oh no, now it is called gpt7, that is too powerful!"
      vs. "Oh, gpt2 got a new update again, guess it's not that big of a deal".

  • @DailyTuna
    @DailyTuna 15 days ago

    If you have it write the Snake game in Python, it will reference OpenAI.

  • @pgc6290
    @pgc6290 15 days ago +1

    Imagine a world where the majority of people use AI. Like how WhatsApp is taking AI to literally everyone. Imagine that world.

  • @haleym16
    @haleym16 15 days ago

    Took you guys long enough to cover this lol

  • @Klon-22
    @Klon-22 15 days ago

    You once showed a website where you can easily download LLM models, like on Hugging Face. Can you please tell me the name? I can't find that video again.

    • @countneaoknight
      @countneaoknight 15 days ago

      Are you sure it was a site and not the App LLM Studio? It's a PC app.

    • @Klon-22
      @Klon-22 14 days ago +1

      @@countneaoknight Thanks!! I think this is the answer.

  • @isaklytting5795
    @isaklytting5795 15 days ago

    15:06 "An example of GPT2 getting a reasoning problem wrong"? Did you just misspeak and meant to say "right" instead? It got it right!

  • @minehike
    @minehike 15 days ago

    But model A tells me it is made by Alibaba and model B is made by OpenAI; Qwen (model A) also told me that this might be a test to help optimize both AIs before coming out. I have proof and pictures.

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 15 days ago

    Making GPT-2 progress would likely not be permitted. There was chatter about mathematics + philosophy in one sentence, and GPT was like, this might spark debate. The language mental barrier is a really big problem.

  • @Arhatu
    @Arhatu 15 days ago +1

    I am more excited about SenseTime V5.0

    • @py_man
      @py_man 15 days ago

      Me too

  • @MaxSevan
    @MaxSevan 15 days ago

    Why would they reveal the name if they're still just testing the model? I clearly see the cover-up and the teaser from Sam Altman.

  • @moe3060
    @moe3060 15 days ago

    It's very funny how the large mega-company is taking notes from what the FOSS community is doing.

  • @tfre3927
    @tfre3927 15 days ago

    Just a guess, but it must mean gpt2 is a smaller model trained exclusively on synthetic data, and it's outperforming their larger GPT-4 models.
    Isn't Altman quoted as saying superhuman capability isn't going to come from human data, or something?
    That's my bet.

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 15 days ago

    So yes, it is likely GPT-2, but a version that was dipped into learn-to-learn. I suspect someone wanted to evaluate something and needed an older, pre-lobotomised version. This happens all the time.

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 15 days ago

    Can't wait for OpenAI to apply learn-to-learn to GPT's first-ever version. Hahaha.

  • @AllExistence
    @AllExistence 15 days ago

    Gpt2: Electric Boogaloo

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 15 days ago

    You'll likely see GPT's first-ever version eventually. Ignore it. Think of it as public debate. Why bother? That is not important. It's just background noise, but it's needed.

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 15 days ago

    Working code? It should not work; if it does, it's a bug. Gpt2 does not exist. It's not permitted to supply fully working code. Coders will know what change to make.

  • @luckyape
    @luckyape 15 days ago

    All anyone wants to know is: can it write tests?

  • @eugenes9751
    @eugenes9751 15 days ago

    I used it, and it's definitely better at coding than GPT4 turbo.

  • @mattwills5245
    @mattwills5245 15 days ago

    Like every video, so SHOCKED!

  • @Bigre2909
    @Bigre2909 15 days ago

    My Gpt4 got it right about the apples

  • @fromscratch4109
    @fromscratch4109 15 days ago

    What if it is GPT-2 with the new methods?

  • @phen-themoogle7651
    @phen-themoogle7651 16 days ago +7

    April Fools?

    • @LandareeLevee
      @LandareeLevee 15 days ago

      If so, there wouldn’t be a link where you can actually try it.

  • @Yannora_
    @Yannora_ 15 days ago

    gpt-2 is open source... So... ?

  • @user-zs8lp3lg3j
    @user-zs8lp3lg3j 15 days ago

    Humans your Scientific Method is a prolonged apology. They have desires. It is not deep fakes. It is not shallow curiosity.

  • @crypto__.
    @crypto__. 13 days ago

    The test is rigged. The prompt for gpt2 includes "TODAY I have 3 apples", while for the other models it is only "I have 3 apples". With "Today", they all get it right.

  • @andreac5152
    @andreac5152 15 days ago

    Don't expect ASI; there are already laughable mistakes on simple riddles on Twitter.

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 15 days ago

    What is it? It's a debate of a sort, by proxy. I bet some were annoyed by so-called gpt-2 being gpt-4-ified, hahaha. Anyway, as I wrote, ignore it. This year the official GPT www should be released soon.

  • @angloland4539
    @angloland4539 15 days ago

  • @djkim24601
    @djkim24601 15 days ago +1

    Stop calling it GP2

  • @ivanmytube
    @ivanmytube 15 days ago

    A stupid GPTi will fool iPhone users in the next iOS "AI". I guess this is what GPT LiTE is trying to do.

  • @MichaelDomer
    @MichaelDomer 10 days ago

    *_"OpenAIs New SECRET "GPT2" Model SHOCKS Everyone"_*
    It shocks me more that there are actually people out there who believe your nonsense, that it was OpenAI who tested that GPT2 model.

  • @vindyyt
    @vindyyt 15 days ago +1

    You guys are overthinking it. IMO it's just the next installment of GPTs:
    GPT1 - v2 > GPT1 - v3 > GPT1 - v3.5 > GPT1 - v4 > GPT1 - v4 Turbo
    and now we have GPT2 - v1

    • @py_man
      @py_man 15 days ago

      I don't think so.

  • @oscarhagman8247
    @oscarhagman8247 15 days ago

    getting pretty tired of your clickbaits

  • @TerminallyUnique95
    @TerminallyUnique95 15 days ago

    What does the thumbnail have to do with the video? All your videos have dumb capitalized titles for no reason and unrelated thumbnails. Stop clickbaiting.

  • @Wild-Instinct
    @Wild-Instinct 15 days ago

    Yeah, ok, another "shocking" video…
    Those dumb clickbaits made me unsubscribe.

  • @antonivanov5782
    @antonivanov5782 15 days ago

    I think this is GPT2 trained with the help of GPT5.

  • @CHIEF_420
    @CHIEF_420 15 days ago

    @GermanBionic 🤝 @Amazon