How to Run Deepseek R1 Locally Using Ollama, Chatbox AI, and LM Studio

  • Published Feb 3, 2025

COMMENTS • 64

  • @existentialbaby · 5 days ago +4

    Glad that SOMEONE is using Chatbox AI.

  • @Orycta · 1 day ago

    Very informative, keep up the good work!

  • @mindovermatter82 · 7 days ago +1

    Excellent video, very clear instructions 👏👏👍👍

  • @Indygoflow1 · 7 days ago +3

    Well explained, no BS, thanks! Xie xie.

  • @13sprintuser · 6 days ago

    Fantastic, thank you!!

  • @Active-Vision · 9 days ago

    awesome guide, thanks for sharing!

  • @NguyễnGiaoHòa-ViệnTàichínhbềnv

    Very nice video. Thank you.

  • @duniasainsasrofi · 8 days ago

    Thank you, sir.

  • @I---student-of-knowledge---I · 9 days ago +8

    Brother, if I have more than 1,000 PDF textbooks in English, Arabic, and Chinese stored locally on my PC, can I use a locally installed DeepSeek R1 to ask questions about them, in any language or only in specifically supported languages? Can DeepSeek find answers to my question from across all the books? I am talking about doing it locally.

    • @giswqs · 9 days ago +21

      Yes, it is possible. I will try to create another tutorial for this.

    • @I---student-of-knowledge---I · 9 days ago +1

      @giswqs Thanks, bro 🫵, it's my dream. Can you also cover the case where the PDF books are in one language but the chat is in another? Is that still possible? Please try, bro.

    • @Wkwkwkw-e3w · 5 days ago

      That's crazy!!! Can't wait @giswqs
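For anyone curious what the PDF-question setup discussed in this thread could look like, here is a minimal, hypothetical Python sketch of the retrieve-then-ask pattern: rank text chunks by relevance, then paste the best ones into a prompt for the local model. The function names are invented for illustration, and the toy keyword-overlap score stands in for what a real setup would do (extract text from the PDFs and use embedding-based retrieval with a vector store).

```python
# Toy retrieve-then-ask sketch (hypothetical helpers, not a real library API).
# Real pipelines would split PDF text into chunks and rank them by
# embedding similarity; keyword overlap is used here only to keep the
# sketch self-contained.

def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank text chunks by how many question words they share."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Paste the best-matching chunks into a prompt for the local model."""
    context = "\n---\n".join(top_chunks(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The resulting prompt would then be sent to the locally running model,
# e.g. via Ollama's REST endpoint (POST http://localhost:11434/api/generate).
```

A multilingual embedding model is what would make the cross-language case (English questions over Arabic or Chinese books) feasible.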

  • @geizigeratze · 9 days ago

    Awesome video!

  • @TheGladiatorMkali · 6 days ago

    Followed every step and it works, just a bit slow… my laptop is the issue here 😂 Is there a way we can add something akin to OpenAI's Operator to this?

  • @MindvisionMediaHD · 6 days ago +1

    If I understand correctly: if you set up DeepSeek R1 locally via Ollama, can you then give the locally run model web-search capability by using Chatbox AI?

    • @chosen-m · 3 days ago

      Please confirm this. With LM Studio there is no internet-search capability; is there such functionality with Chatbox AI and Ollama?

  • @hasansafi3341 · 5 days ago

    How do I know which model is good enough for coding?

  • @daghinost · 5 days ago

    How can I exclude the thinking process from the context for DeepSeek-R1? It consumes too much time.
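DeepSeek-R1 wraps its reasoning in `<think>…</think>` tags, so one common workaround is to strip that block from each reply before appending it to the conversation history. The model still "thinks" on the current turn, but old reasoning no longer bloats the context (and re-processing time) on later turns. A minimal sketch, with a function name of my own:

```python
import re

# Matches DeepSeek-R1's reasoning block, including trailing whitespace.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(reply: str) -> str:
    """Remove the <think>...</think> block so only the final answer
    is kept in the chat history sent back to the model."""
    return THINK_RE.sub("", reply).strip()

raw = "<think>\nThe user wants a greeting...\n</think>\n\nHello! How can I help?"
print(strip_thinking(raw))  # -> Hello! How can I help?
```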

  • @mrktmker · 8 days ago +10

    Bro, it's so crazy that this model dropped and is so cheap... How will NVDA make money? LMAO

    • @xJohnNguyen · 8 days ago

      It wasn't really that cheap. Based on articles I read, he already had the GPUs on-hand when the project started. They were initially purchased for his hedge fund and were re-purposed for this project. He also brought over his brightest minds from the hedge fund who already technically knew what needed to be done. They weren't starting from scratch.

    • @mrktmker · 7 days ago +2

      @xJohnNguyen There's a difference between training a model and running it.

  • @nvadito7579 · 7 days ago

    Since it's local, does it mean that it's impossible for anything discussed in this chat to be uploaded to any server? Or should I feel more secure if I block internet access with a firewall?

    • @giswqs · 7 days ago +2

      It is secure. Nothing is uploaded to the Internet unless you expose the server to the Internet.

  • @BMarques803 · 8 days ago +1

    Thanks for sharing; I wouldn't have known this existed otherwise. But how are you planning to use it for geospatial purposes that you couldn't already cover with VS Code Copilot or ChatGPT? I think I'm missing something here. Thanks.

    • @rne1223 · 7 days ago

      Many have said that it rivals the latest OpenAI model, o1. DeepSeek is open source and free to install and use on your computer without sharing your data.

  • @dYbqYpbYd · 5 days ago

    Can the local version go on the internet to find results?

    • @giswqs · 5 days ago +1

      Not at the moment

    • @christopherkim1744 · 4 days ago

      @giswqs Can you train the locally installed DeepSeek?

  • @elisaurysuarez5927 · 9 days ago

    Great!

  • @sardar4235 · 7 days ago

    Can it be installed on an SSD? My storage is not big enough.

  • @X2cao · 8 days ago +1

    Wow, two RTX 6000 Ada!

  • @eutimioedvin8058 · 6 days ago

    How do I give the local R1 internet access?

  • @JonathanBerte · 8 days ago

    Does R1 change the RAG knowledge-graph game?

  • @raulmolina4372 · 5 days ago

    How can I use the GPU? It is only using my CPU.

    • @giswqs · 5 days ago

      Did you install the Nvidia driver and CUDA?

  • @BRIGS21 · 4 days ago

    Bro, can someone help? When I try pulling and downloading any of the versions, it just doesn't download; it fluctuates between 9% and 10% and never moves.

    • @giswqs · 4 days ago

      Probably a problem with your Internet connection.

    • @BRIGS21 · 4 days ago

      @giswqs 😂 Not trying to be disrespectful or rude, but obviously that's the first thing I'd try to rule out if I were having issues with downloads.

    • @giswqs · 4 days ago

      Can you try pulling a different model to see if the problem persists?

    • @BRIGS21 · 4 days ago

      @giswqs I actually tried every single model except the 671b one. Right now I'm using LM Studio to download, and it seems to be working, although I would have preferred just using my computer's cmd rather than an app.

    • @giswqs · 4 days ago

      Take a look at this:
      github.com/ollama/ollama/issues/2850#issuecomment-1974732982
      "Turn off the Windows firewall or make a rule exception. That worked for me."

  • @schniggedagges74 · 5 days ago

    I tried to install LM Studio but got a virus warning saying it includes malware, so I aborted the installation. Has anyone else had this?

    • @giswqs · 4 days ago

      What operating system are you using? It worked fine on my Linux machine; I did not get any warning at the time.

    • @schniggedagges74 · 3 days ago

      @giswqs It seemed to be a false-positive heuristic detection. I use Windows 11. I have since installed it; it works fine.

  • @X2cao · 8 days ago

    How's the 70-billion model compared to the 14b?

    • @Kmrabhinav569 · 7 days ago

      70b >>>> 14b

    • @existentialbaby · 5 days ago

      Substantially better.

    • @BossKing1243 · 5 days ago +1

      It's way better, bro. Slower, obviously, but far, far better in terms of reasoning and creativity.

  • @edu40914 · 7 days ago +1

    The 70-billion model is crashing. Is it normal for a system with 32 GB of RAM, an i5-13600K, and a 4070 Super to crash?

    • @giswqs · 7 days ago +4

      The 70b model is over 40 GB. You will need a GPU with more than 40 GB of memory to run it. I run it on a 48 GB GPU, and it works fine.

    • @edu40914 · 7 days ago

      @giswqs Oh, I see. Is that why, when I run the 32-billion one, which is 19 GB, it doesn't use the GPU but instead relies on RAM and CPU and runs so slowly? I understand it can use at most the 12 GB of the GPU.

    • @anshulsingh8326 · 2 days ago +2

      What matters most for AI inference is RAM and VRAM.
      32 GB of RAM and 12 GB of VRAM can run 14b models. Also, if you use a quantized version of any model, it will be faster and smaller.
      fp16 is the most accurate at any parameter count, but its size is larger, so it is slower and might crash if you don't have enough RAM and VRAM.
      You could try the q8 version, or q5/q6 if available.
      q4 accuracy is much worse than q5, so choose wisely.
      Not all models come in every quantized version, but most have q4 and q8.

    • @edu40914 · 1 day ago +1

      @anshulsingh8326 Thank you very much for your detailed explanation; I will run more tests.
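The RAM/VRAM advice in this thread can be reduced to rough arithmetic: a model's memory footprint is approximately its parameter count times the bits stored per weight. This back-of-the-envelope sketch (my own helper, not part of Ollama; real model files add some overhead, so treat the numbers as lower bounds) shows why a 70b model overwhelms a 12 GB GPU while a 14b q4 model fits comfortably:

```python
def approx_model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough model footprint: parameters x bits per weight, in decimal GB.
    Real GGUF files add overhead (embeddings, metadata), so this is a
    lower bound when checking against available RAM/VRAM."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Compare a few variants mentioned in the thread.
for name, params, bits in [("14b q4", 14, 4), ("14b q8", 14, 8),
                           ("70b q4", 70, 4), ("70b fp16", 70, 16)]:
    print(f"{name}: ~{approx_model_size_gb(params, bits):.0f} GB")
```

So a 14b model at q4 lands around 7 GB (fits in 12 GB of VRAM), while a 70b model needs tens of GB even at q4, which matches the 40+ GB figure quoted above.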

  • @kobenbawest · 4 days ago

    Chatbox AI is trash. Every answer is a hallucination.

    • @giswqs · 4 days ago

      Chatbox AI is just a wrapper; it calls local AI models.

    • @kobenbawest · 4 days ago

      @giswqs Allegedly it is running locally on my Mac using DeepSeek R1 + Ollama.

    • @kobenbawest · 4 days ago

      I will review your entire video again to identify any error I may have made.

  • @wowLaLaLa · 7 days ago +1

    llama.cpp error: 'error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen''