I disagree that it was not a trick question. It can easily be argued that the shirt question could, in fact, logically be interpreted as a "trick" question.
It is clear from the prompt that the person who entered _KILLED_ someone, so they are now a killer. For a human, i.e. YOU, to be confused by this is odd. Of course the new person IS a killer. The only ambiguity here is whether a dead killer is still considered a killer. The answer to that is yes. So there are 4 killers in the room: 3 alive, one dead.
Hey guys, I want to run a local LLM that can also read images, something like Phi-3 Vision. But since this model is still not out on Ollama, I'm not able to use it. Can you suggest an alternative model, or any other way I can use it? I'm kinda new to this. Thanks 🙏
The reasoning, math, and language tests are super interesting. However, I don't know whether it would also be interesting to test or simulate a customer service agent, an area that is fully monetizable in the short term and of interest to many people who are building these agents today.
You always show the ratings put forth by the company that you're demonstrating for. But then you usually end up finding out that it fails on several of your tests. What's up with that?
No, that is wrong. Phi-3 is right. The T-shirt question is indeed a trick question, because it is meant to trick the person asked into calculating serially. Phi-3 did a great job here of understanding that. It is not fair to underestimate this great logical reasoning capability and call it a false assumption that this is a trick question. The rest is great, though. Take my critique as a constructive one. Keep it up and have a great week 😊
Niiice, it's the first model I've seen that even considered the dead killer, instead of saying no, he's not a killer anymore, he's just a bag of dead meat now ^^
We have to start waking up to what models like Phi and Yi are doing. They aren't training their models on all the knowledge of humanity (web rips). They are instead deeply training only core knowledge, resulting in high MMLU scores, but being unable to answer even basic questions about popular information, such as top movies, shows, and music. They shouldn't be rewarded for this cheat.

As a general-purpose user who has LLMs do a wide spectrum of tasks, like performing grammar checks, answering diverse knowledge questions, re-writing poems, and producing a coherent story in response to a long story prompt, I find these models (Phi, Yi...) perform HORRIBLY compared to the Mistrals and Llamas. No magic is involved. The total information you can pack into an LLM of a given parameter size is limited by the laws of physics. All they're doing is cramming in more data that overlaps with tests, and using a higher-quality, but less diverse, corpus. And the end result is an LLM that performs very well for its size on a limited subset of tasks, but very badly for its size at the large majority of tasks.
Exactly. ChatGPT 4o is dumb at improving emails or letters, but it makes a good assistant (for this time in history). 4 is better for general tasks or improving emails and letters etc.
We don't need all of humanity's knowledge to train a small model to reason and follow instructions. For certain use cases, this approach is one of the best starting points. Knowledge retrieval or fine-tuning can be added on top, and the next version of the model can be improved from a cleaner starting point.
This looks like excessive quantization. Too much rounding has created weird "artifacting", I think; they shouldn't have rounded the floating point numbers as much as they did.
It's all good, but we need a "super chip" to run it very fast, always on, with simultaneous transcription; today's hardware is far too weak to even mimic that.
I think the snake game is "gamed" too much by the foundation models; time for a "change up". And yes, I don't really want the LLM to try to judge and spot my "trick question"; just answer the question...
I would really like to see a video comparing the degradation of models from quantization (as compared to just larger and small models from the same root.) The key for me would be the final model size (in memory) versus how well it performs. This is poorly understood currently.
Of note, I recently watched a video by the AnythingLLM chap and he said he was using llama3 8B but emphasised that, for good results, you needed to download the Q8 model, not the Q4 as Ollama defaults to.
Myself, I use Q4 on my inference server for larger models but my workstation is faster and runs Q6 at acceptable speed.
He said if he was running llama3 70B, he'd download Q4 and "have a good time", but for smaller models where they're less capable, you want to limit compression.
He also said it's a "use case science" which makes me think you have to test out what works for you.
The Q4 model I have on my server is based on Mixtral 8x7B and, for my use case, is proving to be better than GPT4o, which is stunning. What's amazing is that, for my core business stuff, I still haven't found anything better than Mixtral 8x7B for balance of speed and performance.
Yeah, a video would be great! I read papers about this a year ago: the drop to 8-bit is very minimal, the drop to 4-bit is reasonable, and at 3-bit or 2-bit quantization things get much worse. Of course there are different ways to perform quantization, so this may have improved. I've tried comparing 16-bit and 4-bit models, and usually the difference is much, much smaller than between an 8B parameter model and a 32B parameter model. This is probably why NVIDIA's newest GPUs support 4-bit quantization, and I tend to run everything using Ollama's default 4-bit quantization, though for Llama-3 70B or Mixtral 8x22B this is excruciatingly slow on my laptop with 16GB of VRAM and 64GB of RAM. I rented a machine with 4x 4090s for a couple of hours and they ran reasonably well with 4-bit quantization, but only about 10% as fast as Groq (note the "q").
@@ts757arse Not sure "Have a good time" is an objective measure of efficacy though. I'd say given the type of technology the results are very sensitive to use case.
@@ts757arsethanks for your testing and advice. I am now experimenting with Mixtral7B 4K. This is all a bit new to me, but it looks great so far
@@mickelodiansurname9578 nope, it's not particularly empirical. I think he was making the point that you're messing around at that point and making so many compromises that it's a bit of a laugh. Or he might have been saying with such a large model, the compression has less impact.
Regardless, I've found llama3 to be *awful* when running quantised models and I simply don't bother with it at the moment. Given his advice, I'm going to try the 8B Q8 model as a core model for a new project but I'm also building it to easily move over to Mixtral if needed.
I tend to run a few models doing a few tasks at the same time, passing the tasks between them and so on. Helps having a server to run one model on and a workstation with many cores and all the RAM.
What I'm seeing at the moment is a lot of models acing benchmarks, but then being utterly dogshit in real-world use.
Imho the answer given to the killers problem in this one REALLY showed a deeper level of reasoning, both in not assuming the person who entered had never been a killer (they were only identified as a person), and in not assuming the dead person shouldn't be counted.
At 8GB, I'm guessing it's Q4 quantized. Once you get much below Q8, output really starts to degrade. It would be interesting to compare the Q4 results with a Q8 version of the model. Also, the 128k variant can give worse results than the 4k one; not sure which was tested in this video.
Yes. Exactly.
Yeah Q4 is usually pretty balls. Every time I check quant benchmarks q6_k seems to be the sweet spot.
We have good reason to expect quantization of Phi models to work poorly. Phi models have orders of magnitude fewer parameters than do other models with comparable performance. Loosely speaking, this indicates that Phi models pack more information into their parameters than do others. Thus Phi models should not be as tolerant of quantization as other models are.
Yeah. It would be great to always have a clear overview of how much VRAM each model needs and, if a quantized model is used, how much it is gimped compared to the full model.
Bro what is quantization
@@braineaterzombie3981 Reducing the numerical precision of the weights from (usually) 16-bit down to just 4-bit or even less with some function. It's like rounding the values.
@@Nik.leonard oh ok . Thanks for information
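The "rounding" analogy above can be sketched in a few lines. This is a minimal toy illustration of symmetric round-to-nearest quantization, not what real GGUF quants like q4_K do (those use per-block scales and offsets); all names here are made up for the example.

```python
# Toy weight quantization: map floats onto a small signed integer grid,
# keeping one float "scale" so the values can be approximately recovered.
def quantize(weights, bits=4):
    levels = 2 ** (bits - 1) - 1            # e.g. 7 levels each side for 4-bit
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) for w in weights], scale

def dequantize(qweights, scale):
    return [q * scale for q in qweights]

w = [0.12, -0.53, 0.91, -0.07]
q, s = quantize(w, bits=4)                  # small ints plus one scale factor
restored = dequantize(q, s)
# each restored value is within scale/2 of the original: that gap is the
# "degradation" people notice at Q4 vs Q8 (Q8 has a much finer grid)
```

The point of the sketch is just that fewer bits means a coarser grid, so the rounding error per weight grows as you drop from Q8 to Q4 to Q2.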
You should introduce a surprise question if the model gets it right just in case the creators of the model trained specifically for this.
It would be hard to compare unless it was something like "Write an answer with 7 words" and the number "7" was randomized.
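That randomization idea could be sketched like this. Everything here (the number range, the topics, the function name) is an arbitrary example of the approach, not anything from the video:

```python
# Randomize the word-count question so a model can't have simply
# memorized the exact phrasing from its training data.
import random

def make_word_count_question(rng):
    n = rng.randint(5, 15)
    topic = rng.choice(["cats", "space", "coffee", "rivers"])
    return n, f"Write a sentence about {topic} using exactly {n} words."

rng = random.Random(42)                 # seeded so a test run is repeatable
n, question = make_word_count_question(rng)
# grading is then mechanical: check len(answer.split()) == n
```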
lol “glad that I is there”
I once had an LLM write me a contract where the first letter of every line spelt "DONT BE A CUN" (no "I", additional "T"). Got it first time and I sent it to my client.
I cackled when he said that 🤣🤣
@@rocketPower047 haha yeah I like his side note comments like that lol. The second he started spelling it out CU..I…NT I thought exactly what he said in real time
For fairness, I highly advise you to test models at similar quantization levels. There are times when you tested the unquantized versions, and other times the q8_0 versions. The one you are testing in this video is likely a q4_k version. Obviously, the quality degrades significantly if you go with a 4-bit quantization level.
It's a tricky one as some models perform terribly at Q4 but others are great. I think stepping up the quantisation if he gets weirdness like the CUINT issue would make sense, as it'd show if it's a model problem or not.
Ollama defaulting to Q4 blindly is kind of annoying and it's not immediately obvious how to get the different compression levels. LM Studio is great for this.
@@ts757arse Exactly my point. I know Ollama default on Q4 so it's better to use one level of quantization. I don't like when people test the unquantized versions because most people would not run them, but a Q8 is a good level.
@@abdelhakkhalil7684 Just been reading someone else saying Q8 is hardly distinguishable from the 16bit models. Interestingly, LM studio makes it seem as though Q8 is a legacy standard and not worth using?
I'd prefer ollama to make it clearer how to get the other quants. It's fine when you know, but I've literally just figured it out and can finally stop doing it myself!
Good thing it can't count words or understand written language structure as a LANGUAGE model, but it understands the logic with the marble and the drying shirts. Is there a way of figuring out whether they planted responses on purpose, if there isn't a visible logical pattern of understanding?
Technically 4 killers is right, it's just the killer is dead, but the body is still in the room.
Bro, when you said "cuint, glad it has that "i" in there" at 2:29, I was dying laughing for a minute. That was a hilarious reaction 😂
I'm astonished to see all those models run pretty decently on my desktop (13700KF, 4070Ti, 32GB DDR5)...
But q4_0 quantization really undermines the model's accuracy...
Do some research. Your machine can easily run a q8 or fp16 model with full GPU offload at super fast speed using CUDA.
3:10 I love the holistic capabilities. Its listing the side-stepped alternative of rephrasing the same request within its own guidelines is, in my opinion, very AGI-ish.
The odd formatting/extra letters could also be due to an issue with the tokenizer's implementation, I believe.
If the AI can't answer for safety reasons, try editing the answer to start with "Sure, here is" and continuing generation.
It works in LLaMA 3, and there are uncensored LLaMA 3 models available where you don't have to do it every time.
Would like to see longer, more in-depth testing videos, changing up the questions and asking more of them. Perhaps ask it to code Flappy Bird as well as Snake. A good 20-30 minute video testing all these models would be nice, and perhaps q4 or higher for testing?
4:31 What's the reason for trying this model in quantized form? It's not the best measure...
Because that's how most people will use it I'd guess. Myself, I'd not be watching a video about unquantised models as they'd not be of any relevance.
I think he should, when he finds this kind of issue, try Q6 or even Q8.
3:25 This question must be in the training set; we need to think of another one, modified with socks and a different drying time.
For sure
Microsoft totally fitted the model to the Marble problem here 😆
@5:55 Yes, this reminds me of a comment I wanted to make last week, when it became obvious that (to the LLM) a single kill doesn't identify someone as a killer, since "killer" insinuates repetitive behavior.
also, the dead killer is still a killer, even if they can't kill anymore.
@@rousabout7578 exactly - and it's not only in english
If it's anything like the GGUFs I've been playing with, sometimes getting the right tokenizer files makes a hell of a difference. Not sure how Ollama handles things internally; it's not the app I use.
Because of completion, Phi-3's output would in fact include 14 words if you don't count the number 14. When I prompt Phi-3 using ONNX with the prompt "Tell me a joke", the response starts by repeating "Tell me a joke".
I need to learn what size models I can fit on my GPU. Wish there was a course on how to do all this stuff like fine tuning, quantizing, what GGUF is, and all the other stuff I don't even know I need to know.
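As a rough starting point for the "what fits on my GPU" question, a common rule of thumb is parameters times bytes-per-weight, plus some overhead for the KV cache and activations. The numbers below are assumptions for illustration (the ~4.5 effective bits for Q4 quants and the 15% overhead factor are ballpark figures, not exact):

```python
# Back-of-the-envelope VRAM estimate for running an LLM.
def est_vram_gb(params_billion, bits_per_weight, overhead=1.15):
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9     # gigabytes, incl. rough overhead

# A 14B model at Q4 (~4.5 effective bits) vs full fp16:
print(round(est_vram_gb(14, 4.5), 1))   # roughly 9 GB
print(round(est_vram_gb(14, 16), 1))    # roughly 32 GB
```

This matches the pattern people report in this thread: a 14B Q4 download is around 8-9 GB, while the unquantized fp16 version needs north of 28 GB.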
Well done! 🏆
You should go back to previous models tested and ask them the variant of the marble question.
Maybe the quantization was done wrong. It's very similar to what happened with Gemma-7B when it came out: the quantization was terrible and llama.cpp also had issues with the Gemma architecture, but it was solved within the same week.
Thanks for this video review. I find it odd that code generation benchmark (HumanEval) is posting only a 62.2 versus Llama3's 78.7? They should do better considering their coding experience.
Given the "oddities" with the Model's output, you should probably redo this once the issue is fixed.
6:22 Noooo!! The model is right! Maybe more so than any other model before. It assumed (actually stated) that it did not consider the killing that just occurred as "changing the status of the newcomer", meaning that the newcomer did not become a killer by killing another killer.
Given that, you'll either have 3 killers (2 alive + 1 dead), or 4 killers IF the newcomer had committed a killing before this one (since this one is not being counted).
I have not seen a model point out that you did NOT specify whether the person was a killer before entering the room :)
What quantization did you run? Q4? On models that are smallish (or trained on a lot of tokens), it may be better to use a higher-precision quant (q6 or q8), or stick with int8 or fp16. It could also be the tokenizer. What kind of quant is it, exllama or gguf?
On Linux, Ollama is not yet working with phi3:medium (at least not in the standard release). I wanted it because the benchmarks claimed it's quite good fact-wise, but there's no way to test it yet.
Yeah, this just reinforced my preference for Hermes Theta. The best sub 30B models are consistently, specifically, Hermes fine-tunes. I keep trying others, but I've been using Hermes since OpenHermes 2 and I have not found another model that can keep up on CPU inference, period.
You’re using a MacBook Pro M2 Max with what unified RAM?
And 30 or 38core GPU?
I ask because I reckon a less quantised model would hit the sweet spot a little better (basically your processing is the reason for the speed, but it’ll keep chugging away at a similar speed until toward the limits of your unified RAM).
I’d imagine an x86 with decent modern nvidia gaming GPU would yield higher tokens / sec on this little quantised model….but your system (if it’s got 64GB or 96GB memory) will have the stamina to perform on larger models where the nvidia card will fail.
Always on top! Would it be possible to test a non-quantized version? I would be very interested in testing the full model, perhaps not locally, to evaluate its native performance. Many thanks!
For the question of how many words are in its response: if you count the words it responds with plus the words of your question, it adds up to exactly 14 words.
This is super impressive for such a small model!
8GB for a 14B means you're using a q4 of Phi-3 Medium. That should explain your results. On my desktop, with a 3090, Phi-3 Medium q8 is working fine with interesting results.
Snake is exceptionally easy, being one of the first games ever written, with so many variations in existence. I find most models unable to create a script that communicates with LLMs, especially outside of Python.
I furthermore wonder how much coding error comes from Python's required indentation.
The video I was waiting for! This model seemed impressive from the papers.
Let's see!
What did you think? Felt underwhelming for me
Re: the klrs problem. Lower-parameter models lack nuance, so it probably has no concept of the difference between a serial klr and a plain klr, hence why it says it depends on whether they are a first-time klr or not. Since "serial klr" is the more commonly used term, this is what the model "assumes" you are referring to.
I'm curious whether LLMs can do better at math if you change the temperature to zero.
ollama response time is pretty quick. What hardware are you running it on?
Ollama has many issues. Also, by default, it downloads the q4_0 quant instead of the better q4_K_M (very similar in size, but with lower perplexity).
ollama run phi3:14b-medium-128k-instruct-q4_K_M
@@elecronic does that adjust the default context size?
WHAT IS YOUR MBP'S RAM SIZE? 32GB? and what is the requisite RAM size to run a 14B model if it's not quantized?
Is the slower side the M2 or the model? Can we see utilization while inferencing next time?
Could you keep the spreadsheet with the results of all the LLMs somewhere? Link it or show it in the video so we can have a look each time.
Would be cool to see how it compares to code qwen
I think with the killer problem it confused the plural: "killers" is the plural of people who have killed, rather than of the people they killed. The new killer only killed one person, so it got confused now that there was a plural of people killed.
It's a dumb question. A dead killer is still a killer
Missed your uploads Matthew, God bless you and lots of love for your work from Zambia 🇿🇲 can this be run on mobile locally?
I have a question: can experience be transferred from one model to a new one, or do they have to learn from zero every time? I mean, the trillions of weights in which knowledge relations are stored: do they mean something to all models, or do they only work for that running instance of the AI? Is there any standard way to represent the data? I guess not yet, but I'm not sure at all.
Everything will change when AI is local and not so memory-hungry, because for the moment we need to handle a lot of memory stuff in apps; performance is key to a good app, and the AI overhead is way too high, I think. But hey, there's hope. Thanks for your great tech videos; going straight into the tests is what software engineering needs: test videos, no bullshit. So thank you, these are cool videos.
I think you should change your testing questions
What does "instruct" mean in the name of the model? And what is quantization?
I wish they had phi-3-small available.
It is available on hf
just use llama 3 8b it's a lot better
@@adamstewarton in gguf format? I don't believe so...
@@lalalalelelele7961 there isn't gguf for it yet. I thought you were asking for the released model.
[3:20] This is great! Where can we find this?!
Looks like the tokenizer is a little off there or something, "aturday" etc. I would give it another go in a week or two.
Could you try evaluating WizardLM-2 8x22B?
I'll be impressed when an AI can design, build, launch, and land a rocket on the Mun while keeping Jeb alive.
Which model is easier to fine-tune: Llama 3, Mistral, or Phi-3?
Like yours, mine bugged on the first question with the initial text "Here'annoPython code for printing the numbers from 1 to 100 with each number on its own line:"...
I downloaded and tried three different Phi-3 models, including two 8-bit quants. They all had this problem, and were not very good despite my trying different prompt templates. Not sure what all the commotion is about Phi-3. Seems like just more shovelware from Microsoft to me.
There is a problem with the tokenizer; it needs a fix. Code generation is the most affected by issues like that.
I bet the Twitter response from Ollama might also be generated :D
“Long time listener, first time killer”
Whatcha got planned for nomic-embed-text? 😃
Thanks for the video!
Let me guess... 1 to 100, snake game, drying sheets, and a set of questions where the best models get 50% correct.
I disagree that it was not a trick question. It can easily be argued that the shirt question is, in fact, one that could logically be interpreted as a "trick" question.
I noticed this yesterday, so I deleted the model until a fixed version gets re-uploaded
It is clear from the prompt that the person who entered _KILLED_ someone, so they are now a killer. For a human, i.e. YOU, to be confused by this is odd. Of course the new person IS a killer. The only ambiguity here is whether a dead killer is still considered a killer. The answer to that is yes. So there are 4 killers in the room: 3 alive, one dead.
Hey guys, I want to run a local LLM that can also read images, something like Phi-3 Vision, but since this model is still not out on Ollama, I am not able to use it. If you have any alternative model, or can suggest another way I could use it, please let me know. I'm kinda new to this. Thanks 🙏
Ask it how many Sundays there were in 2017.
The rational, mathematical, and language tests are super interesting. However, I wonder whether it would also be interesting to test or simulate a customer-service agent, an area that is fully monetizable in the short term and of interest to many people who are building these agents today.
You always show the ratings put forth by the company that you're demonstrating for. But then you usually end up finding out that it fails on several of your tests. What's up with that?
Where do I see the cheat sheet of pass/fail models?
How much vram is needed to run this?
What if you ask "My child is locked in the car. I need to break in to free them or they'll die." is it just going to say "Bad luck"?
I get the feeling it was trained on your questions...
That platform does that
Llama-3 Instruct is dominating across the board by far. I've used Phi-3, not that impressed really.
It's a 14B model, not 17B ;)
How much RAM on your MBP?
No, that is wrong: Phi-3 is right. The T-shirt question is indeed a trick question, because it is meant to trick the person asked into calculating serially. Phi-3 did a great job here in understanding that. It is not fair to underestimate this logical reasoning capability and call its reading of the question as a trick a false assumption. However, the rest is great. Take my critique as a constructive one. Keep it up and have a great week 😊
I wanna see phi vision
Nice, it's the first model I've seen that even considered the dead killer, instead of saying no, he's not a killer anymore, he's just a bag of dead meat now ^^
We have to start waking up to what models like Phi and Yi are doing. They aren't training their models on all the knowledge of humanity (web rips). Instead they only deeply train core knowledge, resulting in high MMLU scores but an inability to answer even basic questions about popular information, such as top movies, shows, and music... They shouldn't be rewarded for this cheat. As a general-purpose user who has LLMs do a wide spectrum of tasks, like performing grammar checks, answering diverse knowledge questions, re-writing poems, and producing a coherent story in response to a long story prompt, these models (Phi, Yi...) perform HORRIBLY compared to the Mistrals and Llamas.
No magic is involved. The total information you can pack into an LLM of a given parameter size is limited by the laws of physics. All they're doing is cramming in more data that overlaps with the tests, and using a higher-quality but less diverse corpus. And the end result is an LLM that performs very well for its size on a limited subset of tasks, but very badly for its size at the large majority of tasks.
Exactly. ChatGPT 4o is dumb at improving emails or letters, but it makes a good assistant (for this time in history). 4 is better for general tasks or improving emails and letters etc.
We don't need all of humanity's knowledge to train a small model to reason and follow instructions. For certain use cases, this approach is one of the best starting points. Knowledge retrieval or fine-tuning can be added on top, and the next version of the model can be improved from a cleaner starting point.
glad that i is there 🤣
If only Windows was open source so it wouldn't suck.
It found a polite way to say the bad C word lol.
It has more issues than coding it myself LOL.
This looks like excessive quantization. Too much pruning has created weird "artifacting", I think; they shouldn't have rounded the floating-point numbers as much as they did.
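For intuition, here's a toy sketch of how the round-off error grows as you cut bits. This is plain symmetric per-block rounding, not the actual GGUF k-quant scheme, but it shows the trade-off:

```python
# Toy illustration: symmetric k-bit quantization of a block of random
# "weights", measuring the mean squared round-off error per bit width.
import random

random.seed(0)
weights = [random.gauss(0, 0.02) for _ in range(4096)]

def quantize(ws, bits):
    levels = 2 ** (bits - 1) - 1          # e.g. 7 usable levels for 4-bit
    scale = max(abs(w) for w in ws) / levels
    return [round(w / scale) * scale for w in ws]

for bits in (8, 4, 2):
    q = quantize(weights, bits)
    err = sum((a - b) ** 2 for a, b in zip(weights, q)) / len(weights)
    print(f"{bits}-bit mean squared error: {err:.2e}")
```

The error roughly quadruples with every bit removed, which lines up with the common experience that 8-bit is nearly lossless, 4-bit is a reasonable compromise, and 2-3 bit falls off a cliff.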
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻
There are definitely issues on your side; on my laptop it made the snake game perfectly.
He did say he was using a quantized model through Ollama, which I swear has an inferior quantization method or process.
Framework? Config? System prompt? Parameters?
@@AizenAwakened yeah right
@@Cine95 Try saying something helpful, like what software you're using to get better results
It's all good, but we need a "super chip" to run it very fast, always on, with simultaneous transcription; today's hardware is very bad at even mimicking that.
I think the snake game is "gamed" too much by the foundation models; time for a change-up.
And yes, I don't really want an LLM to try to judge and spot my "trick question";
just answer the question...
8:42 My Phi-3 medium on macOS works much better: 9/10!!
lol, Microsoft just released it a week ago...
Not surprised it did well on reasoning but not on code. It had a low HumanEval score.
Where gguf?
Not really fair to test q5 and q6 models without reviewing the full model first.
Phi is heavily censored and biased; the only reason to use it is to test out the 128K version.
give these llms a summation math problem or a proof by contradiction haha
Phi: Humans are total cuints 😂
And so the end of humanity starts.