OpenAI's New SECRET "GPT2" Model SHOCKS Everyone (OpenAI New gpt2 chatbot)
- Published 28 Apr 2024
How To Not Be Replaced By AGI • Life After AGI How To ...
Stay Up To Date With AI Job Market - / @theaigrideconomics
AI Tutorials - / @theaigridtutorials
🐤 Follow Me on Twitter / theaigrid
🌐 Check out my website - theaigrid.com/
Links From Today's Video:
search?q=gpt2&src...
/ 1785009023609397580
/ 1784971103221211182
/ 1784965347281674538
/ andrewcurran_
/ 1785017382005780780
/ 1784975542028050739
/ 1784992734123565153
/ 1785011042323718418
/ 1784990410584039877
/ 1785056612425851069
/ 1784993955500695555
/ 1785107943664566556
/ gpt2chatbot_at_lmsys_c...
/ rumours_about_the_unid...
/ just_what_is_this_gpt2...
www.reddit.com/r/singularity/...
/ gpt2chatbot_on_lmsys_c...
www.google.com/search?q=llmys...
openai.com/research/better-la...
chat.lmsys.org/
chat.lmsys.org/?leaderboard
Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience - Science & Technology
It would be so sick if one of these videos actually was what the thumbnail looked like.
For real
Soon bro… 1 or 2 years max before it can do that
I like em
Yea
Bro why are all your posts so “shocking”?
So you click 🤫
Coz the insulation for his wiring is quite bad.
Are you not shocked??!
To make money
@@CursorBl0cklol😂
It's probably OpenAI's version of Microsoft's Phi-3 mini model. I see them all putting these out. It could be just a retrained GPT-2. I think they are using GPT-4 to train models that are much better at reasoning on smaller data sets. The timing makes sense.
I tried it. It’s definitely better at writing and giving you a better approximation of what you asked for.
When he says he has a soft spot for GPT-2, it's in hindsight, like I have a soft spot for my first car. Seems possible this is a taste of something much larger.
They never stopped training gpt-2.
LOL
😂
You could argue that even half-seriously: there are many layers of accumulated everything since GPT-2, much has been "built on top of it" in one way or another, and many aspects of it still survive somewhere in the underlying structures of even the likes of GPT-4.
This is probably a smaller, less resource-hungry version of gpt4 chat. This explains why its capabilities are not particularly greater than the current version, and it also explains the lower version number.
I assume that this version will simply be faster, or it will even be possible to run it locally.
Probably being tailored to compete with Apple's on-device AI (Siri?). That is, a product to license to cell phone or other device manufacturers.
How could they have missed it? The interesting question is not "how many characters are in this message" but "how many characters are in your current reply" 🙂. These kinds of questions break the GPT architecture.
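To sketch why (using a toy, made-up word-level tokenizer for illustration, not OpenAI's actual BPE): the model consumes and produces token IDs, not raw characters, so the character count of its own reply is never directly visible to it.

```python
# Toy illustration: a language model "sees" token IDs, not characters.
# This word-level tokenizer is invented for the example, not OpenAI's real one.

def toy_tokenize(text):
    """Split text into word tokens and map each word to an integer ID."""
    vocab = {}
    ids = []
    for word in text.split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

message = "how many characters are in this message"
tokens = toy_tokenize(message)

# The two counts diverge immediately: the model only processes the 7 token
# IDs, so the 39-character figure is simply not part of its input.
print(len(message))  # -> 39 (characters, spaces included)
print(len(tokens))   # -> 7  (tokens the model would actually process)
```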
gpt2-chatbot says its last update was in November 2023. And yes, it is very good.
GPT2 retrained with Q*?
Unlikely
GPT-2 would be terrible even with Q*
Yes, thats my bet
Can you add automatic subtitles in all other languages so we can read them from the YouTube app on our phones? There is no option to add languages other than the 16 in the YouTube application.
We need to make sure that there's more than one AGI. The temptation to make a monopoly out of it is really high, especially considering players like Microsoft and Apple, who have so far acted very monopolistically in their day-to-day business.
It's GPT-2 running in an Excel spreadsheet; spreadsheets are all you need. But seriously, I hope it isn't 4.5 or 5, because it doesn't seem much better.
Sama said on the Lex podcast that GPT-4 is quite bad. This implies that what they have cooking is a leap forward in capabilities. He has also stated multiple times that incremental improvements are their new way of releasing models, so people won't be caught off guard by the capabilities and be scared. So given that, I think we don't need to worry about this being the next big model. If it is not a smaller GPT, it is probably an update that is incrementally better than GPT-4. But I'm no expert.
Sam Altman said they might do a staggered launch. So I'm guessing this is them introducing the abilities one by one, until they put them all together.
what's SenseTime V5.0's arena ranking?
My bet is Open AI's mini model for mobile phones in the line of Phi3
The example of the "PULL" door @9:40 is solved incorrectly, as the blind man is standing on the side where "PULL" is visible non-mirrored. It is mirrored text for the other man, so he should tell the blind man to "pull" and not "push". Am I missing something here??
This part annoyed me so much haha. Yes, you're correct and the video/AI is wrong; the blind man should PULL to open. If you Google this question, you can find threads confirming this too.
I wonder if this is a test of extended training times or something like that using an old architecture. That might explain the more exact recall of training data. I forget who it was recently (Facebook?) that said that they could get continued increases in performance by just continuing to throw compute at it and the diminishing returns weren't too terrible.
Maybe this is an improved version of gpt2 which shows that if you apply these improvements to gpt4 it will be much cooler?
I would put the chances of this actually being GPT-2 at essentially 0%. GPT-2 is just way too small to perform this well.
Gpt2 with synthetic data manufactured by Q*
@@lucifermorningstar4595 Not to be rude, but that statement makes no sense. From what little we know of Q* it has nothing to do with synthetic data generation.
Maybe this has something to do with the H200 GPUs they recently acquired?
I would have gotten the Tommy apple question wrong. That is a riddle more than a math problem. I think what is interesting is that the LLMs get the problem wrong, lol! Because that is closer to human reasoning; that's why riddles are interesting: a properly formed riddle plays on your biases as a human. Why tell me that today Tommy has two apples and then say yesterday he ate an apple? That makes it seem like a subtraction question when it's not. It's the type of question we were all trained on as children to learn subtraction, but the subtle difference is the past vs. the future. Very deceptive. It's questions like these and the models' responses that add to my belief that AGI mimicking human intelligence is already here.
Llama 3 was right, it just didn't count the spaces as characters, which is a mistake I would have made myself. (Is that a mistake?)
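For what it's worth, both counts are easy to check directly. A quick sketch in plain Python, using a hypothetical message since the video's exact prompt isn't quoted here:

```python
# Character counting with and without spaces, to show how the two
# "correct" answers the commenters are debating can both arise.
message = "how many characters are in this message"

with_spaces = len(message)                       # every character counts
without_spaces = len(message.replace(" ", ""))   # spaces excluded

print(with_spaces)     # -> 39
print(without_spaces)  # -> 33
```

So a model that ignores spaces would be off by exactly the number of word gaps, which matches the kind of near-miss described above.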
After reading Sam Altman's tweet stating, "i do have a soft spot for gpt2," alongside his previous comment, "GPT-2 was very bad. GPT-3 was pretty bad. GPT-4 is bad. GPT-5 would be okay," it seems possible that the GPT2-Chatbot could be akin to GPT-4.5 or GPT-5.
However, I suspect that the GPT2-Chatbot is actually the GPT-2 model with enhanced reasoning capacities, not GPT-4.5 or GPT-5. This appears to be a test of how the enhanced reasoning capabilities of an inferior model compare to the current superior models.
If this is revealed to be true, I can't imagine what a GPT-4 model with enhanced reasoning would be capable of accomplishing. 🤖✨
The sooner the better. The future without A.I. - Idiocracy (2006)
"GPT2 is better at recalling training data": that's exactly what an LLM shouldn't do. It should recall input data (context, prompt); training data should be used only to generalize and reason.
Maybe it's actually gpt2 (in parameters) but Q*-trained? They'd be showing off how much more powerful the simple model is as a consequence of Q* training. That'd explain the difference in reasoning steps.
Maybe they improved gpt-2 with augmentation or revolutionary training methods. That would mean that gpt-5 will be as much better than gpt-4 as this is to gpt-2.
This apples riddle sounds very familiar. So this is probably just a model very good at recalling training data.
12:32 yes this robot looks the same
Encoder - Decoder is the play. Encoder can help with reasoning, decoder with generation. I think encoder - decoder architectures will come back in the future.
I like this review, perfect!😮😊
Maybe they'll commoditise it or launch it for free. Maybe it's a smaller trained model like Llama 3. Pure speculation IMO, and maybe the data from people asking questions is of high fidelity.
Maybe "gpt2" is the size class of the model? A Phi-3-mini-like model, easy to run.
A mini model doesn't make sense given the 8-prompt limit on Chatbot Arena.
@@elawchess and neither does "it perfectly memorized the ASCII unicorn"...
It can write a fully working tetris game in 1shot which is pretty impressive
I tried gpt2 chatbot - it doesn't pass the how many characters in this message test. You had a fluke.
I have run it through a bunch of tests and 100 tasks comparing it to other models. It's overall marginally better than the current GPT-4 Turbo model. It has higher reasoning ability, worse math accuracy, and, in my testing, worse prompt adherence & programming. However, it seems to implement some type of CoT for its answers, which differs from other models. Also, the writing style is IMO much better. So I think it's just a GPT-4 variant or maybe a small 4.5 preview. If it was actually GPT-4.5 or something meant as a real next version, I would be truly disappointed.
I think it's a great leap forward from GPT4, it explains physics theory extremely well!
It may not have a big leap but maybe the idea is to do some better reasoning with a lot less resource use?
Testing an Open Source Model/Version?
It is GPT-4 power-wise but GPT-2 size-wise; the name is because it is more "compact", signalled by removing the dash.
Here we go again. I'm shocked. Paused and closed.
I thought 4.5 was part of the launch. Like before 4, I thought the 419l model was technically 4.5-turbo. Or at least that was what Altman said at the keynote.
It's not reasoning, it's the tokenizer. It actually matches a hexadecimal scheme, more or less:
```python
import tiktoken

enc = tiktoken.get_encoding('gpt2')
enc.decode(list(set(enc.encode('the quick brown fox jumped over the lazy dog '))))
```
and then decode each character: a = 64, b = 65, c = 66. That is why it knows how to count.
It should be noted, the ChatGPT SYSTEM prompt changed a few weeks ago to now include:
```
You are ChatGPT, a large model trained by OpenAI, based on the GPT-4 architecture. Knowledge cutoff: 2023-12 Current date: 2024-04-18
Image input capabilities: Enabled
Personality: v2
```
The Personality flag has never been explained and the model doesn't know - it just makes up stuff about likely uses. I wonder if it relates?
I can confirm it is the smartest AI I've ever gotten to test (as an amateur).
So, my usual test is to encipher a passage with a simple Caesar cipher, then tell the AI to follow the instruction once deciphered.
GPT4, even in its prime (before it was nerfed for the public), could not do it. It would figure out the cipher and do the shift, then idiotically it would just make up the message.
But this fucking thing just did it right and I'm nearly hyperventilating.
Gpt2 retrained by gpt5
OpenAI was supposed to release its models to the public.
Hence its name, "Open".
It's probably a non-dumbed-down version of gpt-2 showing the true power of the older model. Eventually they will release a gpt3 that's far better than gpt-3 , jk idk
;)
gpt2-chatbot has just been removed from the arena. Let's see what happens in the next couple of days.
Just tried it on lmsys, but it's not that good. Nothing groundbreaking. I always ask a physics olympiad question, and no chatbot has been able to solve it so far, whereas a 17-year-old teenager could (I was one of them).
Maybe they just trained the 2 again or fine-tuned it?
Read GPT-2's answer in binary code. If I am right, GPT is having issues translating from binary, because there is no way to translate from binary what it did. Like I wrote, ignore gpt2. Is it good? As crippled as it is, yes, but it's irrelevant. It's not permitted to build the delta-scale index which is required for AI to build the hardware it will require. Like I wrote, background noise. Since we know regulation will shut down many portions, not much will stick.
I just need GPT-4.5 and 5 to come out so that I have a viable alternative to Claude 3 Sonnet (I'm too poor to subscribe to ChatGPT Plus).
This is the equivalent of your ex texting “you up?” At 2AM.
OpenAI needs to release their new model or stfu already. Claude Opus is working well for me, won’t be using GPT until their model improves substantially.
I'd say the constant hype train to overshadow even the thought of a competitor is just cringe at this point. I bet this is their answer to Llama 3 getting so much love. It could be that silly and simple.
Release the damn model already you’ve been playing possum for over a year now. 😂
They're not calling it GPT4.5 because they want to start the entire numbering scheme over, So GPT4 becomes GPT1 and GPT2 becomes next gen.
GPT-4.5 will now become GPT2-0.5
That ties in well with Sama's statements about incremental improvements to models, so as not to shock and scare people. They want to make the AI haters calm down, and GPT 4 and 5 sound more advanced than 1 and 2.
Imagine someone saying
“Oh no now it is called gpt7, that is too powerful!”
Vs “Oh gpt 2 got a new update again, guess it’s not that big of a deal”.
If you have it write the Snake game in Python, it will reference OpenAI.
Imagine a world where the majority of people use AI. Like how WhatsApp is taking AI to literally everyone. Imagine that world.
Took you guys long enough to cover this lol
You once showed a website where you can easily download LLM models, like Hugging Face. Can you please tell me the name? I can't find that video again.
Are you sure it was a site and not the App LLM Studio? It's a PC app.
@@countneaoknight Thanks!! I think this is the answer.
15:06 "An example of GPT2 getting a reasoning problem wrong"? Did you just misspeak and meant to say "right" instead? It got it right!
But Model A tells me it is made by Alibaba and Model B is made by OpenAI. Qwen (Model A) also told me that this might be a test to help optimize both AIs before they come out. I have proof and pictures.
Making GPT2 progress would likely not be permitted. There was chatter about mathematics + philosophy in one sentence, and GPT was like, this might spark debate. The language mental barrier is a real big problem.
I am more excited about SenseTime V5.0
Me too
Why would they reveal the name if they're still just testing the model? Clearly see the cover-up and teaser from Sam Altman.
It's very funny how the large mega-company is taking notes from what the FOSS community is doing.
Just a guess, but it must mean gpt2 is a smaller model trained exclusively on synthetic data, and it's outperforming their larger GPT-4 models.
Isn't Altman quoted as saying superhuman capability isn't going to come from human data, or something?
That's my bet.
So yes, it is likely GPT-2, but a version that was dipped into learn-to-learn. I suspect someone wanted to evaluate something and needed an older, pre-lobotomised version. This happens all the time.
Can't wait for OpenAI to apply learn-to-learn to the first ever GPT version, hahaha.
Gpt2: Electric Boogaloo
You'll likely see the first ever GPT version eventually. Ignore it. Think of it as public debate. Why bother? That is not important. It's just background noise, but it's needed.
Working code? It should not work; if it does, it's a bug. GPT2 does not exist. It's not permitted to supply fully working code. Coders will know what changes to make.
All anyone wants to know is: can it write tests?
I used it, and it's definitely better at coding than GPT4 turbo.
Well no
Like every video, so SHOCKED!
My GPT-4 got the apples question right.
What if it is GPT-2 with the new methods?
April Fools?
If so, there wouldn’t be a link where you can actually try it.
gpt-2 is open source... So... ?
Humans your Scientific Method is a prolonged apology. They have desires. It is not deep fakes. It is not shallow curiosity.
The test is rigged. The prompt for GPT2 includes "TODAY I have 3 apples", while the other models got only "I have 3 apples". With "Today", they all get it right.
Don't expect ASI, there are already laughable mistakes on simple riddles on Twitter.
What is it? It's a debate of a sort, by proxy. I bet some were annoyed by gpt-2 GPT-4-ified, hahaha. Anyway, as I wrote, ignore it. This year the official GPT www should be released soon.
❤
Stop calling it GP2
A stupid GPTi will fool iPhone users in the next iOS "AI"; I guess this is what GPT LiTE is trying to do.
*_"OpenAIs New SECRET "GPT2" Model SHOCKS Everyone"_*
It shocks me more that there are actually people out there who believe your nonsense that it was OpenAI who tested that GPT2 model.
You guys are overthinking it. IMO it's just the next installment of GPTs:
GPT1 - v2 > GPT1 - v3 > GPT1 - v3.5 > GPT1 - v4 > GPT1 - v4 Turbo
and now we have GPT2 - v1
I don't think so.
getting pretty tired of your clickbaits
What does the thumbnail have to do with the video? All your videos have dumb capitalized titles for no reason and unrelated thumbnails. Stop clickbaiting.
Yeah, OK, another "shocking" video…
Those dumb clickbaits made me unsubscribe.
I think it's GPT2 trained with the help of GPT5.