Really this is the most awaited thing everyone wanted including me 😢
Sure, enjoy
That's great. Just yesterday I was wondering when I would get the chance to run Ministral models via Ollama.
Thank you for presenting this.
I got this error, I think because there is a gated-access form to fill out for Ministral and some other models.
Error: pull model manifest: Get "Authentication%20required?nonce=uJTLPEnU0-Br15UXm5zbPg&scope=&service=&ts=1729173119": unsupported protocol scheme ""
Any idea how to get around it, please?
Glad I could be of assistance!
Amazing bro ! Thank you so much for this update and effort you are the best ❤
You are so welcome!
I've been waiting for this forever!
Yes, it's cool.
Thanks for the heads up!
Enjoy!
Thanks, Fahd. Good 👍
You are welcome!
This is awesome
Indeed
Thank you. My question is: how do I run a multi-part model in Ollama — either one already downloaded that I want to add with a Modelfile, or one downloaded from Hugging Face?
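One possible approach — a sketch only, assuming llama.cpp-style shard filenames (hypothetical names, not confirmed for every Ollama version) — is a Modelfile whose FROM points at the first shard of the downloaded parts:

```shell
# Sketch: assumes shards named model-00001-of-00002.gguf etc. (hypothetical
# filenames) sitting in the current directory after download.
cat > Modelfile <<'EOF'
FROM ./model-00001-of-00002.gguf
EOF

# Then register it under a local name and run it (requires ollama installed):
#   ollama create mymodel -f Modelfile
#   ollama run mymodel
cat Modelfile
```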
Can you only run GGUF models?
yes
OK, nice. How about running models from Hugging Face in Ollama locally when they are NOT in GGUF format?
YESS!!!
cheers
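For anyone searching later, the direct-run syntax for GGUF repos looks like this — the repo name below is only an example, not one from this thread:

```shell
# Any public GGUF repo on Hugging Face can be referenced as hf.co/<user>/<repo>;
# the repo name here is just an example.
MODEL="hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF"

# Pull and run directly (Ollama converts the GGUF into its blob store):
#   ollama run "$MODEL"
# Optionally pin a specific quantization with a tag:
#   ollama run "$MODEL:Q4_K_M"
echo "$MODEL"
```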
Thanks for the good content.
That's a very useful feature, but as I understand it, Ollama downloads the GGUF file and then converts it into the blobs it uses to run the model. These blobs are stored by default in the .ollama folder; on Windows, for example:
C:\Users\{user_name}\.ollama\models\blobs
But the question is: where does Ollama store the downloaded GGUF file, and most importantly, does it keep it or delete it after downloading?
If Ollama keeps the downloaded GGUF file, then this feature is a truly wonderful one.
It does keep the GGUF file, though it renames it. Linux: /usr/share/ollama/.ollama/models
macOS: ~/.ollama/models
Windows: C:\Users\%username%\.ollama\models
@fahdmirza Thanks for the info. If it keeps the GGUF file, then it can be used in other LLM inference engines — for example in LM Studio or vLLM 🌹🌹🌹
@HassanAllaham Truly revolutionary. The models can even be used in VS Code too.
I use Phi-3.5 Mini as a substitute for Cursor.
This is really great news.
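To see exactly which blob backs a pulled model, `ollama show` can print the Modelfile that Ollama generated; its FROM line holds the sha256-renamed file (the model name below is just an example):

```shell
# Print the FROM line of the generated Modelfile for a pulled model
# (model name is an example; requires ollama to be installed):
#   ollama show llama3.2 --modelfile | grep '^FROM'

# Default blob location on Linux, per the paths listed above:
BLOB_DIR="/usr/share/ollama/.ollama/models/blobs"
echo "$BLOB_DIR"
```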
Nice ❤, but what about the template and stop tokens? 🤔
Those features remain part of Ollama.
@fahdmirza I mean, with this method does Ollama import them automatically from the repo?
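Even if they are not picked up automatically, the template and stops can always be set by hand in a Modelfile. A sketch — the template and stop token below are Llama-style placeholders, purely illustrative, and the GGUF path is hypothetical:

```
FROM ./model.gguf
TEMPLATE """<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
PARAMETER stop "<|eot_id|>"
```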