Love your videos. It's weird how you do everything in Windows without WSL. It could be a selling point for your videos, maybe add a Windows tag somewhere? Keep at it!
That's a good idea mate, I didn't think about it! To be honest I don't really like WSL; if I need to do something that specifically requires Linux, I just connect to a Linux VM running on another machine. Thanks for the feedback and the support, mate! :)
you deserve way more subscribers my guy! thank you!
For you this makes sense, but for non-programmers/prompt users it's a nightmare. I don't understand WHY I'm doing each step, and that is really frustrating. This was a huge struggle and I did not get it done.
Does this work with *any* Hugging Face model, or only GGUF? You can import GGUF files without the first half of your instructions: just download the GGUF model, make the Modelfile, and use ollama create. Not sure why the Anaconda and Python installs were required?
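For reference, a minimal GGUF-only import along those lines might look like this; the file names and paths below are placeholders, not taken from the video:

    # Modelfile -- a one-line Modelfile pointing at the downloaded GGUF
    FROM C:/models/my-model.Q4_K_M.gguf

Then, from the same directory:

    ollama create my-model -f Modelfile
    ollama run my-model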
Thank you so much for the information. But could you please tell us how to do this for AWQ models? They have multiple files in a single folder, and even when I provide the path to the folder where the safetensors files are, I get an error. We also have to consider that there may be more than one safetensors file for a single model. And one request: how to do this without using Conda?
when I use "notedpad modefile" its creating a modefile.txt insted of just modefile how can I fix that
How can I push a model from Hugging Face to the Ollama website?
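Roughly, once the model has been created locally you can push it to ollama.com. This sketch assumes an ollama.com account with your machine's public key added; "myuser" and "my-model" are placeholders:

    :: name the model under your ollama.com namespace, then push it
    ollama create myuser/my-model -f Modelfile
    ollama push myuser/my-model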
💥 Wow, it's very complex. I wish there were a tool to automatically convert GGUF models to Ollama, or that Ollama could use GGUF directly without all this rocket 🚀 science, man! 😮😮
...and maybe there is! I just don't know of one hehe :) If you find one, please let me know and I'll make a video about it! :) Thanks for watching mate!
thank you
What do I do if it says:
"ollama: The term 'ollama' is not recognized as the name of a cmdlet, function, script file, or operable program."
@@nono-lq1oh Make sure it's in the PATH of your operating system. On Windows, add the folder containing ollama.exe to the environment variables. On Linux it should go in .bashrc: add the Ollama bin folder to PATH. On Mac I have no idea lol
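A sketch of that fix; the Windows path below is the typical default install location, so check where Ollama actually lives on your machine:

    :: Windows (cmd) -- append the Ollama folder to the user PATH
    :: (note: setx truncates very long values; open a new terminal afterwards)
    setx PATH "%PATH%;%LOCALAPPDATA%\Programs\Ollama"

    # Linux (bash) -- add the folder containing the ollama binary to PATH
    echo 'export PATH="$PATH:/path/to/ollama/bin"' >> ~/.bashrc
    source ~/.bashrc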
Can you make the same video but for Linux? I'm a bit confused by some of the steps.
Write down those commands, then go to Claude/ChatGPT (or best of all, DeepSeek Coder V2) and ask: "this command is used in Windows cmd, please tell me how to use it in Linux". Simple!
Thank you so much for this, it worked like a charm. I think we have to test with models that are not in GGUF format.
that was very helpful. thank you.
Please talk about copyright: is there any potential infringement if one were to create social media content with HF models?
That is down to the model! Make sure you check the disclaimers carefully for the models you choose to use! :)
Thanks Felipe, it worked here. But in the final step I had to add a .txt to the Modelfile for it to work.
If I used just Modelfile like you did, I got this error:
Error: open C:\Users\Daniel\Modelfile: The system cannot find the file specified.
When I did it with .txt:
C:\Users\Daniel>ollama create bartowski_gemma-9b -f .\Modelfile.txt
transferring model data 100%
Great. Working like a charm.
@@Moraes.S Thanks mate! Glad it worked!
Heya, great video. I followed it perfectly until I tried to run 'ollama create' and got 'The term 'ollama' is not recognized as the name ...' etc. I definitely pip-installed Ollama according to the steps here. How do I fix this error?
Maybe ollama is not in your environment variables. You have to find where ollama is stored and open cmd at that location.
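In cmd that looks something like this; the path is the usual Windows default install location, so yours may differ:

    :: go to the folder that contains ollama.exe and run it from there
    cd /d %LOCALAPPDATA%\Programs\Ollama
    ollama.exe --version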
Been searching for hours for a video, you are #1 thank u so much!
Uh oh, what does this mean?
Error: Models based on 'LlamaForCausalLM' are not yet supported.
More importantly, how does one identify whether a model is this "variation"?
'LlamaForCausalLM' is one of the many architectures out there for LLMs. To identify the architecture of a particular model, look inside its config.json file, which can be found in the 'Files and versions' tab for the model on Hugging Face.
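For example, the relevant fragment of a Llama-based model's config.json looks like this:

    {
      "architectures": [
        "LlamaForCausalLM"
      ],
      "model_type": "llama"
    }

If the name listed under "architectures" isn't one the converter supports, the create step fails with an error like the one above.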
I followed the instructions up until the Modelfile, and when I run ollama create it can't find the specified file.
Make sure you are in the same directory as the Modelfile! Or use -f followed by the Modelfile path!
@@DigitalMirrorComputing I appreciate it, but I already solved it. The Modelfile was actually saved as a .txt file, so I did some digging and made sure to remove the extension. If you ever update a video like this, maybe you can include the steps for that, because you kind of breezed over it. Additionally, I ran into another issue where the file path in the Modelfile had to be changed: backslashes were being taken as escapes, so I switched to forward slashes and it was finally able to create the model. :) Thank you for your quick reply though!
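To illustrate both fixes described above (the path is just an example, not from the video):

    # Modelfile (saved with no .txt extension)
    # forward slashes avoid backslash-escape problems on Windows
    FROM C:/Users/myname/models/my-model.Q4_K_M.gguf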
@@NyxesRealms Where did you find the Modelfile?
@@popularcontrol c:/users/myname
Can someone help me with the command to change the download location of the model in Anaconda, please?
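If it helps, the download locations are usually controlled by environment variables rather than an Anaconda command. On Windows cmd, something like this (both directories are placeholders):

    :: where huggingface_hub caches downloaded model files
    setx HF_HOME "D:\hf-cache"
    :: where Ollama stores its models
    setx OLLAMA_MODELS "D:\ollama-models"
    :: open a new terminal afterwards for these to take effect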
Another great video!
thanks dude!! :D
didn't work on a mac
thanks, thanks, thanks
Love it.
Muito bom este "passo-a-passo" do processo, obrigado!
No entanto no meu caso tenho este erro quando estou na fase de criar o file :
ollama create dolphin-2.9-llama3-8b -f .\Modelfile
O erro é o seguinte :
C:\Windows\system32>ollama create dolphin-2.9-llama3-8b -f .\Modelfile
transferring model data
panic: regexp: Compile(`(?im)^(from)\s+C:\Users\joseg\.cache\huggingface\hub\models--QuantFactory--dolphin-2.9-llama3-8b-GGUF\snapshots\525446eaa510585c590352c0a044c19be032a250\dolphin-2.9-llama3-8b.Q4_K_M.gguf\s*$`): error parsing regexp: invalid escape sequence: `\U`
Fazes alguma ideia do que possa ser a causa ? Qualquer tipo de informaçao util na resoluçao deste impasse sera bem vinda 🙂
Try deleting the file in that location and downloading it again. Or perhaps the model itself wasn't saved properly as GGUF.
Why would you create a video instead of a set of written instructions?
@@thevinn why would you watch the video instead of reading a set of instructions?