Glad that SOMEONE is using Chatbox AI.
Very informative, keep up the good work!
Excellent video very clear instructions 👏👏👍👍
well explained no BS, thanks xie xie
Fantastic, thank you!!
awesome guide, thanks for sharing!
Very nice video. Thank you.
thank you sir
Brother, if I have more than 1,000 PDF textbooks in English, Arabic, and Chinese locally on my PC, can we use a locally installed DeepSeek R1 to ask questions about them, either in any language or only in specific allowed languages? Can DeepSeek find answers to my questions from across all the books? I am talking about running it locally.
Yes, it is possible. I will try to create another tutorial for this.
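A minimal sketch of what that tutorial could involve (retrieval-augmented Q&A). Everything here is an assumption, not something confirmed in the thread: it presumes the `ollama` Python package, a running Ollama server, and an embedding model such as `nomic-embed-text`; extracting text chunks from the PDFs with a PDF library is left out.

```python
# Sketch of multilingual PDF Q&A against a local model (RAG).
# Assumptions: `pip install ollama`, a local Ollama server, the
# `nomic-embed-text` embedding model, and a DeepSeek R1 chat model.
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def answer_from_chunks(question, chunks, model="deepseek-r1:14b", top_k=3):
    """Embed the question, pick the most similar text chunks, and ask
    the local model to answer using only those chunks as context."""
    import ollama  # lazy import: needs the ollama package and a local server
    q_vec = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
    scored = []
    for chunk in chunks:
        c_vec = ollama.embeddings(model="nomic-embed-text", prompt=chunk)["embedding"]
        scored.append((cosine_sim(q_vec, c_vec), chunk))
    context = "\n\n".join(c for _, c in sorted(scored, reverse=True)[:top_k])
    reply = ollama.chat(model=model, messages=[
        {"role": "user",
         "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"},
    ])
    return reply["message"]["content"]
```

Because embeddings are language-agnostic to a degree, this is also why asking in one language about books in another can work at all, though quality varies by embedding model.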
@giswqs Thanks bro 🫵, it's my dream. Can you also cover the case where the PDF books are in one language but the chat is in another? Is that still possible? Please try, bro.
That's crazy!!! Can't wait @@giswqs
Awesome video!
Followed every step and it works, just a bit slow… my laptop is the issue here 😂 Is there a way we can add something akin to OpenAI's Operator to this?
If I understand it correctly: if you set up DeepSeek R1 locally via Ollama, you can then give the locally run model web-search capability by using Chatbox AI?
Please confirm this: with LM Studio there is no internet-search capability. Is there such functionality with Chatbox AI and Ollama?
How do I know which model is good enough for coding?
How do I exclude the thinking process from the context for DeepSeek-R1? It consumes too much time.
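DeepSeek-R1 wraps its reasoning in `<think>…</think>` tags, so one common workaround (a sketch, not a built-in Ollama option) is to strip that block from each reply before appending it to the conversation history, which keeps the context short:

```python
import re

def strip_thinking(text: str) -> str:
    """Remove DeepSeek-R1's <think>...</think> reasoning block so only
    the final answer is kept in the chat history sent back to the model."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL).strip()

raw = "<think>Let me reason step by step...</think>The answer is 42."
print(strip_thinking(raw))  # -> The answer is 42.
```

Note this only shrinks the stored history; the model still spends time generating the thinking tokens on each turn.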
Bro, it's so crazy that this model dropped and is so cheap... How will NVDA make money LMAO
It wasn't really that cheap. Based on articles I read, he already had the GPUs on hand when the project started. They were initially purchased for his hedge fund and were repurposed for this project. He also brought over his brightest minds from the hedge fund, who already knew technically what needed to be done. They weren't starting from scratch.
@@xJohnNguyen There's a difference between training a model and running it.
Since it's local, does it mean that it's impossible for anything discussed in this chat to be uploaded to any server? Or should I feel more secure if I block internet access with a firewall?
It is secure. Nothing is uploaded to the Internet unless you expose the server to the Internet.
Thanks for sharing; I wouldn't have known this existed otherwise. But how are you planning to use it for geospatial purposes in a way you wouldn't already do with VS Code Copilot or ChatGPT? I think I'm missing something here. Thanks.
Many have said that it rivals the latest OpenAI model, o1. DeepSeek is open source and free to install and use on your computer without sharing your data.
Can the local version go on the internet to find results?
Not at the moment
@@giswqs Can you train the locally installed DeepSeek?
Great!
Can it be installed on an SSD? My storage is not big enough.
Wow, two RTX 6000 Ada cards!
How do I give local R1 internet access?
Does R1 change the RAG knowledge-graph game?
How can I use the GPU? Because it is only using my CPU.
Did you install the Nvidia driver and CUDA?
Bro, can someone help? When I try pulling and downloading any of the versions, it just doesn't download; it fluctuates between 9% and 10% and never moves.
Probably an Internet problem on your end.
@giswqs 😂 Not trying to be disrespectful or rude, but obviously that's the first thing I'd try to rule out if I were having issues with downloads.
Can you try pulling a different model to see if the problem persists?
@giswqs I actually tried every single model minus the 671b model. Right now I'm using LM Studio to download, and it seems to be downloading, although I would have preferred just using my computer's cmd, not an app.
Take a look at this:
github.com/ollama/ollama/issues/2850#issuecomment-1974732982
"Turn off the windows firewall or make a rule exception. That worked for me."
I tried to install LM Studio but got a virus warning that it includes malware, so I aborted the installation. Has anyone else had this?
What operating system are you using? It worked fine on my Linux machine. I did not get any warning at the time.
@giswqs It seemed to be a false-positive heuristic detection. I use Windows 11. I installed it in the meantime; it works fine.
How's the 70-billion model compared to the 14b?
70b >>>> 14b
substantially better
It's way better, bro. Slower, obviously, but far, far better in terms of reasoning and creativity.
The 70-billion model is crashing. Is it normal for a system with 32 GB of RAM, an i5-13600K, and a 4070 Super to crash?
The 70b model is over 40 GB. You will need a GPU with more than 40 GB to run it. I run it on a 48 GB GPU, and it works fine.
@giswqs Oh, I see. So that's why, when I run the 32-billion one, which is 19 GB, it doesn't use the GPU but instead relies on RAM and CPU, and runs so slowly? I understand it can use at most the 12 GB of the GPU.
The things that matter most for AI inference are RAM and VRAM.
32 GB of RAM and 12 GB of VRAM can run 14b models. Also, if you use a quantized version of any model, it will be faster and smaller.
For example, fp16 is the most accurate at any parameter count, but the size is larger, so the speed is lower, and it might crash if you don't have enough RAM and VRAM.
You could try the q8 versions, or q5/q6 if available.
q4 accuracy is much worse than q5, so choose wisely.
Not all models have all quantized versions, but most have q4 and q8.
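The rule of thumb behind those numbers can be sketched as simple arithmetic: weight file size is roughly parameter count times bits per weight divided by 8. Real files add some overhead (tokenizer, KV cache at runtime), so treat these as lower bounds when checking your RAM/VRAM:

```python
# Approximate weight sizes for common quantization levels.
BITS = {"fp16": 16, "q8": 8, "q6": 6, "q5": 5, "q4": 4}

def approx_size_gb(params_billions: float, quant: str) -> float:
    """Lower-bound weight size in decimal GB: params * bits / 8."""
    return params_billions * 1e9 * BITS[quant] / 8 / 1e9

for quant in ("fp16", "q8", "q4"):
    print(f"14b @ {quant}: ~{approx_size_gb(14, quant):.0f} GB")
# 14b @ q4 is roughly 7 GB, which is why it fits in 12 GB of VRAM,
# while 70b @ q4 (~35 GB) does not.
```

This matches the thread above: a ~19 GB 32b model spills out of a 12 GB GPU and falls back to CPU and system RAM, which is why it runs slowly.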
@anshulsingh8326 Thank you very much for your detailed explanation. I will do more tests.
Chatbox AI is trash. Every answer is a hallucination
Chatbox AI is just a wrapper. It calls local AI models
@ Allegedly it is running locally on my Mac using DeepSeek R1 + Ollama.
I will review your entire video again to identify any error I may have caused.
llama.cpp error: 'error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen''