I have been using DeepSeek releases for over 9 months. The results have been great the whole time and keep getting better. I run all the Qwen-based DeepSeek R1 models locally on my Linux PC and they are all great. The 1.5B model works fantastically when you use the q16 variant. It is a real killer. Inference is not very fast since I run all models (from 1.5B up to 32B) on my Ryzen 5 8600G CPU WITHOUT a dedicated GPU. The CPU uses up to 40 GB of my 64 GB of RAM for the 32B model. With good prompting the results are fantastic and save me hours of work every day. The dynamic memory allocation of the 8600G is great and lets me run powerful LLMs on a small budget. My PC cost me $900.
Wait, you're able to run a 32B model on just your CPU? I have an RTX 4060 Ti with 16 GB of VRAM and I'm scared to download a 32B model 😅
@@Aurelnpounengong The Ryzen 5 8600G has a GPU on the processor and can use system memory as VRAM, just much more slowly (40 GB out of the 64 GB of system memory). He provided the details so you can research the parts you don't understand.
Really? All of this cost you $900? 64 GB of RAM?
@@rhadiem Ahhh I see, I did not know it used system memory as VRAM. I also have 64 GB of DDR4 memory. Do you think I'll be able to run a 32B model with my graphics card by offloading some of it to system memory?
@@Aurelnpounengong It will run, just slowly. I can run a 32B model on my 4090, but anything larger has to swap in and out of memory, which is painful.
I've also had luck getting the model to reflect by: reversing the calculation (math), writing the documentation while it codes, and writing a tutorial while it codes.
this is one of the best videos I have seen in some time Chris!
Awesome, so glad you’ve seen similar results
It would be awesome if you did a tutorial on fine tuning a reasoning model with tool calling abilities
That is a really good shout, I will do that
Yes that would be awesome !
Yes! I want to train a model to use Z3 when doing logical reasoning. It's a very powerful solver.
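For anyone curious, here is a minimal sketch of how a Z3 check could act as a verifier for a model's claim (the constraints are made up for illustration, not from the video), using the z3-solver Python package:

```python
# Minimal sketch: using Z3 as an external verifier for a model's claim.
# The constraint set here is a made-up example.
from z3 import Int, Solver, sat

x, y = Int("x"), Int("y")

solver = Solver()
# Suppose the model claims "there are integers x, y with x + y == 10 and x - y == 4".
solver.add(x + y == 10, x - y == 4)

if solver.check() == sat:
    print("claim verified:", solver.model())   # e.g. x = 7, y = 3
else:
    print("claim could not be satisfied")
```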
Thanks for answering all the basic questions I had. Great teaching style, even for the non-programmer.
Glad it was useful, I had a lot of fun making this video
I may no longer be at IBM, but I was curious to hear your thoughts on DeepSeek. Very insightful video, thanks!
Excellent ! Bravo! I am spending hours analyzing how DS1-R 32B works with my 4090. I am getting amazing results everyday...
Hey Chris, Great video. Really enjoy the way you teach. Keep up the good work. Can't wait for your next video on RLHF.
Keep doing helpful videos Chris 😊
Always, glad it was useful, I was particularly happy with this one
Great video. Looking forward to RL video.
Coming soon!
Brilliant work! Yes, I do remember you mentioning that o1 was MCTS and R1 was not. I agreed with you that R1 surely was not; it will be exciting to see if o1 or o3 used similar techniques or used MCTS!
I’m 100 percent convinced that o1 is using search (specifically MCTS) at inference time, and I’m 100% convinced that R1 will do the same in a future release when they figure it out. But the results they’ve gotten without it are pretty incredible.
@chrishayuk It just blows my mind every time I think about it still! That one can converge through search or learning at these endpoints so long as one is bootstrapped with some notion of correctness! Your demo was incredible work. thanks again.
Thank you, yeah, I came up with the concept of getting the compiler to do the calc and the AI to do the explanation a while back; I think I did a video on this in June 2024. So it seemed a natural fit when I saw the long chain-of-thought cold start piece from DeepSeek. Felt like a good merge. I was also blown away by how good the results were.
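As a minimal sketch of that idea for a simple arithmetic case: Python does the exact calculation, the chain-of-thought text is templated around the verified result, and each sample is written out as a JSONL record. The field names here are illustrative, not the exact ones used in the video.

```python
# Sketch: let the "compiler" (Python) do the arithmetic and template the
# explanation around the verified answer. Field names are illustrative.
import json
import random

def make_cold_start_sample():
    a, b = random.randint(10, 99), random.randint(10, 99)
    answer = a * b  # the exact calculation is done by code, not the LLM

    question = f"What is {a} x {b}?"
    chain_of_thought = (
        f"First, break {a} x {b} into ({a} x {b // 10 * 10}) + ({a} x {b % 10}). "
        f"That gives {a * (b // 10 * 10)} + {a * (b % 10)} = {answer}."
    )
    return {
        "prompt": question,
        "response": f"<think>{chain_of_thought}</think>\nThe answer is {answer}.",
    }

with open("cold_start.jsonl", "w") as f:
    for _ in range(100):
        f.write(json.dumps(make_cold_start_sample()) + "\n")
```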
Thank you for a great presentation, especially for your explanation and examples of the "cold start" part. The "Incentivizing" paper and the technical report are heavy going, especially the reinforcement learning algorithm. When will you have a video out explaining the RL algorithm?
Thank you Chris. I am hoping I will be able to replicate this on my old Windows laptop. I want to be able to train a base model from scratch like you did here.
In your newly trained Qwen model, what is the verifier step doing, since there is no math compiler in Qwen?
I’m not verifying yet, I’ll do that in the RL stage in the next video. I’m just generating long and accurate chains of thought for cold-starting the training.
Hello Chris Hay! This is crazy, you made this amazing tutorial. That's mind-blowing. While OpenAI is closed, the open-source community is actually building it openly for the community. Although companies like DeepSeek provide validation and inspiration, the community is doing its own discovery. You are very inspiring as well. Thanks again for a wonderful video.
Thank you, I appreciate it, I was pretty pleased with this one, glad it’s useful
@@chrishayuk We might not need MoE now, as we only need cold start data for different tasks:
1. function calling
2. coding
3. summarization
4. role play
5. NLQ and others
We can do this on Colab since it's 1.5B; it's going to be crazy.
It’s cool right
One cool addition: I use the TwinMind AI on-screen assistant to explain exactly what you're doing as I watch the video (it reads the transcript, I'm guessing). Anyway, it makes understanding the topic far easier.
oooh, that sounds pretty sweet
Good work. One tiny suggestion: maybe try using word wrap for long lines, for better readability when watching the video.
Very much appreciate your videos. Thank you. I noticed your training data jsonl format is different than your validation and test jsonl format. Could you please explain?
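For anyone wondering what those shapes can look like: a minimal sketch, assuming mlx-lm style JSONL where the training split uses a chat/messages layout and the validation/test splits use a prompt/completion layout. The exact keys are an assumption, not confirmed from the video.

```python
# Sketch of two common JSONL layouts for fine-tuning data. The keys are
# assumptions, not necessarily the exact ones used in the video.
import json

# Chat-style record, often used for a training split
train_record = {
    "messages": [
        {"role": "user", "content": "What is 12 x 13?"},
        {"role": "assistant", "content": "<think>12 x 13 = 156</think>\n156"},
    ]
}

# Prompt/completion-style record, sometimes used for validation and test splits
valid_record = {
    "prompt": "What is 12 x 13?",
    "completion": "<think>12 x 13 = 156</think>\n156",
}

with open("train.jsonl", "w") as f:
    f.write(json.dumps(train_record) + "\n")
with open("valid.jsonl", "w") as f:
    f.write(json.dumps(valid_record) + "\n")
```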
Can a reasoning model figure out that it doesn't know something, and ask for inputs? Or could it be trained to ask?
That’s an awesome idea
Thanks for the info! I followed your instructions and it’s training the model, but it’s pretty slow on my M1 Mac. Is there similar software for Linux so I can cold-start train the model on a VPS?
Given that the intention is not so much to train new knowledge, but to synthesize chain-of-thought capabilities into existing models, how well would it work if we used R1 to generate a bunch of non-math question/thinking/answer input-output pairs as the cold start seed?
That’s pretty much what happens with the RL stage... but I also think you can use verifiers to do this well.
@@chrishayuk Thanks! I was playing around with Granite 3.1 MoE 3B, found it to be insanely fast even on CPU only. I'd be really curious to see how much "intelligence" we can extract from smaller MoE models like that by synthesizing chain of thought. I'll have to find some time to play around and see what could be extracted. I'm thinking a semi-capable thinking model, with MCP (thanks to your MCP-CLI project), that requires no GPU will be a very powerful local assistant!
Fantastic, well done
Thank you! Cheers!
I'm sure you're aware of the Qwen-maths models, but using these reasoning techniques it would be interesting to see if a small (Qwen2.5-1.5B) model could be trained to reason about geometry or integration in the same way a mathematician would: simply apply the rules they know to see what fits.
I think the only limitation with this is the size of the context. I put DeepSeek-R1-7B (Q4) on my phone and it was good but limited. I increased the context to 8192 and wow, it solved things o1 struggled with and failed on.
Are you saying there is a math compiler in DeepSeek R1? It's open source, so that can be checked.
They said in the paper they use a math verifier
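As a rough illustration of what a math verifier can look like (a minimal sketch, not DeepSeek's actual implementation): extract the model's final answer and compare it against the exact result computed in code.

```python
# Minimal sketch of a math verifier: compare the model's final answer against
# an exact result computed in code. Not DeepSeek's actual implementation.
import re

def verify_answer(model_output: str, expected: float, tol: float = 1e-6) -> bool:
    # Treat the last number in the model's output as its final answer.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    if not numbers:
        return False
    return abs(float(numbers[-1]) - expected) < tol

print(verify_answer("The answer is 156.", expected=12 * 13))  # True
print(verify_answer("I think it's 150.", expected=12 * 13))   # False
```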
Can you actually fine tune DeepSeek R1? I see you used Qwen-2.5
Hi Chris, it's pretty cool, thanks for sharing.
Can we try to generate the cold start data from DeepSeek-R1-Zero just like the paper and train a LoRA? What do you think of that?
Yes, I plan to do a pure version with RL, so will do that when I have that ready (which should be very soon)
@@chrishayuk That would be great! I would like to contribute by researching, writing scripts, or generating data if possible.
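One way that data collection could look (a sketch under the assumption that an R1-Zero-style model is served behind an OpenAI-compatible endpoint, e.g. a local server; the URL and model name are placeholders):

```python
# Sketch: collect reasoning traces from a locally served R1-Zero-style model
# through an OpenAI-compatible endpoint. URL and model name are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

questions = ["What is 17 x 24?", "Is 221 a prime number?"]

with open("r1_zero_traces.jsonl", "w") as f:
    for q in questions:
        resp = client.chat.completions.create(
            model="deepseek-r1-zero",  # placeholder model name
            messages=[{"role": "user", "content": q}],
            temperature=0.6,
        )
        record = {"prompt": q, "response": resp.choices[0].message.content}
        f.write(json.dumps(record) + "\n")
```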
Awesome video 👏🏼👏🏼👏🏼
14:52 isn't the answer it gives here incorrect?
So what you are saying is that R1 will not perform well on non-logical and non-maths queries, where they can't use a verifier? Like what if I want to use R1 in a healthcare domain?
Nope, because verifiers work for that also, which I’m gonna show in an upcoming video
How about a video on creating a jsonl to finetune a model to write computer code
Yeah, I plan to do a new one on that using verifiers.
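In the meantime, here's a rough sketch of what a code verifier could look like: execute a generated Python snippet in a subprocess and compare its stdout to the expected output. Purely illustrative, not from the video, and a real setup would sandbox the execution.

```python
# Sketch of a simple code verifier: run a generated Python snippet and check
# its output. Purely illustrative; real setups would sandbox the execution.
import subprocess
import sys

def verify_code(code: str, expected_stdout: str, timeout: int = 5) -> bool:
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0 and result.stdout.strip() == expected_stdout.strip()

print(verify_code("print(sum(range(10)))", "45"))  # True
```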
Is NVDA going to die?
I think a new grand theft auto game is coming out, they’ll be fine
A video on a local agentic IDE, please.
excellent!
WOW Superb
Have a look at the Open R1 repo from Hugging Face, as they work with the community to replicate DeepSeek R1 datasets etc.
This resembles "first principles": don't teach me how to reason, I will find it myself!
Exactly
real open ai
N. Ireland / N. American accents are wild.
Agreed, love those accents. Mine is Scottish though
@@chrishayuk haha :)
@@chrishayuk You look like a musician that got into AI 😂. Like, I can see you on a synthesizer in a music video.
Hahaha, I'm terrible at music... but I think there are a lot of synergies. I like using lots of tools and techniques and meshing them together.
Wait a minute... you used a how-many-billion-parameter LLM to solve what a card-sized Casio calculator could solve in the 80s?
One is hardware, one is ML. ML can do things hardware can't: generalize.
Obviously this is a toy example. The purpose is to explain how to generate accurate synthetic Chain of Thought data to use during the training process, which is quite valuable. Even better, he walks through it end to end within the context of DeepSeek's COLDSTART methodology.
*_Who do you think will win the AI race: China or the US? Please reply._*
I don’t believe there will be a winner… I believe the game is an infinite game, and players will join and drop off. There are no winners….
@ Don't you think it will be like the space race?
@@HiteshKrishanKumar To what finish line? AI is already here and people use it every day.
Unfortunately, I think it's a military race, and we'll never know for sure until it's too late.
For the general public, open-source models will win; this video pretty much shows it already.
Unlike the space and nuclear arms races, where spies were the only way to get the latest technological advances, DS has OPEN SOURCED everything they did to produce this model. Imagine how much faster the space/nuclear arms race would have been in that case! Open source has been one of the biggest, if not the biggest, accelerators for AI advancement in my opinion, especially within the last ~2 years.
How do you do this on Windows? I guess PEFT from Hugging Face. Cool.
bitsandbytes (bnb) releases many small models for Ollama on Windows/Linux, and yeah, PEFT adapters.
I am pretty impressed with Mac MLX, but I can't imagine not being on Linux with direct access to my 4090!
I’ll do a regular PyTorch video for the next one
@@chrishayuk nice
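Until that video lands, here's a minimal sketch of the PyTorch route on Linux/Windows with a GPU, assuming Hugging Face transformers, peft and bitsandbytes. The model name and LoRA settings are illustrative, not the video's exact setup.

```python
# Sketch: LoRA fine-tuning setup with transformers + peft + bitsandbytes on a GPU.
# Model name and hyperparameters are illustrative, not the video's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2.5-1.5B-Instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train on the cold-start JSONL with transformers' Trainer or TRL's SFTTrainer.
```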
I see there are now R1 reasoning datasets on Hugging Face e.g. ServiceNow-AI/R1-Distill-SFT
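Those can be pulled straight into a script with the datasets library; a quick sketch (the exact config and split may need to be specified, so check the dataset card):

```python
# Sketch: loading an R1 reasoning dataset from the Hugging Face Hub.
# A config name and/or split may be required; check the dataset card.
from datasets import load_dataset

ds = load_dataset("ServiceNow-AI/R1-Distill-SFT")
print(ds)
```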