Run Any Hugging Face Model with Ollama in Just Minutes!

  • Published 9 Nov 2024

COMMENTS • 43

  • @kennygoespostal
    @kennygoespostal 6 months ago +4

    Love your videos. It's weird how you do everything in Windows without WSL. It could be a selling point for your videos, maybe add a Windows tag somewhere? Keep at it!

    • @DigitalMirrorComputing
      @DigitalMirrorComputing  6 months ago +1

      That's a good idea, mate, I hadn't thought about it! I don't really like WSL, to be honest; if I need to do something that specifically requires Linux, I just connect to a Linux VM running on another machine. Thanks for the feedback and the support, mate! :)

  • @AbhijayK
    @AbhijayK 2 months ago

    you deserve way more subscribers my guy! thank you!

  • @ramsb
    @ramsb 17 days ago +1

    For you this makes sense, but for non-programmers/prompt users this is a nightmare. I don't understand WHY I am doing each step, and that is really frustrating. This was a huge struggle and I did not get it done.

  • @GeekDad74x
    @GeekDad74x 1 month ago

    Does this work with *any* Hugging Face model, or only GGUF? You can import GGUF files without the first half of your instructions: just download the GGUF model, make the Modelfile, and use ollama create. Not sure why the Anaconda and Python installs were required?
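The GGUF-only route this comment describes needs nothing more than a one-line, plain-text Modelfile next to the downloaded weights; a minimal sketch (the .gguf filename is a placeholder, borrowed from the dolphin model mentioned elsewhere in this thread):

```
FROM ./dolphin-2.9-llama3-8b.Q4_K_M.gguf
```

followed by `ollama create dolphin-llama3 -f Modelfile` and then `ollama run dolphin-llama3`.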

  • @parthwagh3607
    @parthwagh3607 4 months ago

    Thank you so much for the information. But could you please tell us how to do this for AWQ? They have multiple files in a single folder. Even when I provided only the path to the folder where the safetensors files are, I got an error. Also, consider that there may be more than one safetensors file for a single model. And one request: how to do this without using Conda?

  • @IdentityV-Araby
    @IdentityV-Araby 19 days ago

    When I use "notepad Modelfile" it creates a Modelfile.txt instead of just Modelfile. How can I fix that?

  • @mahaltech
    @mahaltech 2 months ago

    How can I push a model from Hugging Face to the Ollama website?

  • @DihelsonMendonca
    @DihelsonMendonca 3 months ago +1

    💥 Wow, it's very complex. I wish there was a tool to automatically convert GGUF models to Ollama, or Ollama could use GGUF directly without all this rocket 🚀 science, man! 😮😮

    • @DigitalMirrorComputing
      @DigitalMirrorComputing  3 months ago +1

      ...and maybe there is! I just don't know of one, hehe :) If you find one, please let me know and I will make a video about it! :) Thanks for watching, mate!

  • @pfbeast
    @pfbeast 26 days ago

    thank you

  • @nono-lq1oh
    @nono-lq1oh 10 hours ago

    What do I do if it says
    "ollama: The term 'ollama' is not recognized as the name of a cmdlet, function, script file, or operable program."

    • @DigitalMirrorComputing
      @DigitalMirrorComputing  8 hours ago

      @@nono-lq1oh Make sure it's on your operating system's PATH. On Windows, add the folder containing ollama.exe to the environment variables. On Linux, add the ollama bin folder to PATH in .bashrc. On Mac I have no idea lol
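On Linux, the PATH fix described in this reply looks roughly like the sketch below; the install folder is a hypothetical example, so substitute wherever the ollama binary actually lives on your machine:

```shell
# Hypothetical location of the ollama binary -- adjust to your install.
OLLAMA_DIR="$HOME/ollama/bin"

# Add it to PATH for the current shell; append this same line to ~/.bashrc
# to make the change permanent.
export PATH="$PATH:$OLLAMA_DIR"

# Check whether the shell can now resolve the command.
if command -v ollama >/dev/null 2>&1; then
    echo "ollama found"
else
    echo "ollama still not on PATH - check OLLAMA_DIR"
fi
```

On Windows the equivalent is adding the folder containing ollama.exe under System Properties → Environment Variables, then reopening the terminal.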

  • @dhaneshdutta
    @dhaneshdutta 5 months ago +1

    Can you make the same but for Linux? A bit confused by some of the steps.

    • @siddhubhai2508
      @siddhubhai2508 3 months ago +1

      Write down those commands, then go to Claude/ChatGPT (or best of all, DeepSeek Coder V2) and ask: "This command is used in Windows cmd; please tell me how to use it in Linux." Simple!

  • @HimanshuGhadigaonkar
    @HimanshuGhadigaonkar 6 months ago

    Thank you so much for this, it worked like a charm. I think we have to test with models that are not in GGUF format.

  • @wintrover
    @wintrover 1 month ago

    that was very helpful. thank you.

  • @Hennessyjenkins2
    @Hennessyjenkins2 4 months ago

    Please talk about copyrights: is there any potential infringement if one were to create social media content with HF models?

    • @DigitalMirrorComputing
      @DigitalMirrorComputing  4 months ago +2

      That is down to the model! Make sure you check the disclaimers carefully for the models you choose to use! :)

  • @Moraes.S
    @Moraes.S 2 months ago

    Thanks Felipe, it worked here. But in the final step I had to add a .txt to the Modelfile for it to work.
    If I put just Modelfile like you did, it gave this error:
    Error: open C:\Users\Daniel\Modelfile: The system cannot find the file specified.
    When I did it with txt:
    C:\Users\Daniel>ollama create bartowski_gemma-9b -f .\Modelfile.txt
    transferring model data 100%
    Top. Working like a charm.

  • @MG3-l3g
    @MG3-l3g 4 months ago

    Heya, great video. I followed it perfectly until I tried to run 'ollama create' and got 'The term 'ollama' is not recognized as the name ... etc'. I definitely pip-installed Ollama according to the steps here. How do I fix this error?

    • @parthwagh3607
      @parthwagh3607 4 months ago

      Maybe ollama is not in the environment variables. You have to find where ollama is stored and open cmd at that location.

  • @WreckTangledTV
    @WreckTangledTV 2 months ago

    Been searching for hours for a video, you are #1 thank u so much!

  • @siferCEO
    @siferCEO 6 months ago

    Uh oh what does this mean?
    Error: Models based on 'LlamaForCausalLM' are not yet supported.
    More importantly, how does one identify whether a model is this "variation"?

    • @amrut1872
      @amrut1872 5 months ago +2

      'LlamaForCausalLM' is one of the many architectures out there for LLMs. To identify the architecture of a particular model, look inside its config.json file, which can be found in the 'Files and versions' tab for the model on Hugging Face.
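The architecture check this reply describes takes a couple of lines; here the relevant config.json fields are inlined as a sample string, whereas a real check would read the file downloaded from the model's 'Files and versions' tab:

```python
import json

# Sample of the relevant fields from a model's config.json on Hugging Face.
sample = '{"architectures": ["LlamaForCausalLM"], "model_type": "llama"}'

config = json.loads(sample)
print(config["architectures"])  # -> ['LlamaForCausalLM']
```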

  • @NyxesRealms
    @NyxesRealms 6 months ago

    I followed the instructions up until the Modelfile, but when I run ollama create it can't find the specified file.

    • @DigitalMirrorComputing
      @DigitalMirrorComputing  6 months ago

      Make sure you are in the same directory as the Modelfile! Or use -f followed by the Modelfile path!

    • @NyxesRealms
      @NyxesRealms 6 months ago +2

      @@DigitalMirrorComputing I appreciate it, but I already solved it. It was actually saved as a .txt file, so I did some digging and made sure to remove the extension. If you ever update a video like this, maybe include the steps for that, because you kind of breezed over it. I also ran into another issue: the file path in the Modelfile had to be changed, because \ was being taken as an escape, so I switched to forward slashes and it was finally able to create the model. :) Thank you for your quick reply though!
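Both fixes reported in this reply can be scripted; a sketch using Unix commands (on Windows cmd the rename would be `ren Modelfile.txt Modelfile`, and the model path is a placeholder):

```shell
# Notepad silently appends .txt when saving; either create the file without
# an extension directly...
printf 'FROM ./model.gguf\n' > Modelfile

# ...or strip the extension from an already-saved copy.
printf 'FROM ./model.gguf\n' > Modelfile.txt
mv -f Modelfile.txt Modelfile

# The FROM path should use forward slashes, even on Windows, so backslashes
# are not read as escape sequences.
cat Modelfile
```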

    • @popularcontrol
      @popularcontrol 6 months ago

      @@NyxesRealms Where did you find the Modelfile?

    • @NyxesRealms
      @NyxesRealms 6 months ago

      @@popularcontrol c:/users/myname

  • @EMO_Amarok
    @EMO_Amarok 1 month ago

    Can someone help me with the command to change the download location of the model in Anaconda, please?

  • @OfficeArcade
    @OfficeArcade 6 months ago

    Another great video!

  • @DH-zt9tw
    @DH-zt9tw 3 months ago

    Didn't work on a Mac.

  • @luisEnrique-lj4fq
    @luisEnrique-lj4fq 2 months ago

    thanks, thanks, thanks

  • @cybercdh
    @cybercdh 6 months ago

    Love it.

  • @JG-gf8hs
    @JG-gf8hs 6 months ago

    Very good step-by-step of the process, thank you!
    However, in my case I get this error at the stage of creating the file:
    ollama create dolphin-2.9-llama3-8b -f .\Modelfile
    The error is the following:
    C:\Windows\system32>ollama create dolphin-2.9-llama3-8b -f .\Modelfile
    transferring model data
    panic: regexp: Compile(`(?im)^(from)\s+C:\Users\joseg\.cache\huggingface\hub\models--QuantFactory--dolphin-2.9-llama3-8b-GGUF\snapshots\525446eaa510585c590352c0a044c19be032a250\dolphin-2.9-llama3-8b.Q4_K_M.gguf\s*$`): error parsing regexp: invalid escape sequence: `\U`
    Any idea what might be causing this? Any useful information on resolving this impasse would be welcome 🙂

    • @DigitalMirrorComputing
      @DigitalMirrorComputing  6 months ago

      Try deleting the file at that location and downloading it again. Or the model itself wasn't properly saved as GGUF.
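The `invalid escape sequence: \U` in the panic above suggests the backslashes in the FROM path are being parsed as escape sequences; rewriting the same path from the error message with forward slashes (the fix another commenter in this thread reported) would give a Modelfile like:

```
FROM C:/Users/joseg/.cache/huggingface/hub/models--QuantFactory--dolphin-2.9-llama3-8b-GGUF/snapshots/525446eaa510585c590352c0a044c19be032a250/dolphin-2.9-llama3-8b.Q4_K_M.gguf
```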

  • @thevinn
    @thevinn 2 months ago +1

    Why would you create a video instead of a set of written instructions?

    • @DigitalMirrorComputing
      @DigitalMirrorComputing  2 months ago +4

      @@thevinn why would you watch the video instead of reading a set of instructions?