How to Run Any GGUF AI Model with Ollama Locally
- Published 22 Oct 2024
- This video is a step-by-step tutorial to install and run any LLM in GGUF format with Ollama locally.
🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahd...
🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
bit.ly/fahd-mirza
Coupon code: FahdMirza
▶ Become a Patron 🔥 - / fahdmirza
#gguf #ollama
PLEASE FOLLOW ME:
▶ LinkedIn: / fahdmirza
▶ YouTube: / @fahdmirza
▶ Blog: www.fahdmirza.com
RELATED VIDEOS:
▶ Resource ollama.com/
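The workflow the video walks through can be sketched in a few shell commands. This is a minimal sketch, not the video's exact commands: the model filename and the model name `my-gguf-model` are placeholders, and the download step is only indicated, not filled in.

```shell
# 1. Install Ollama (official install script from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# 2. Download a GGUF model file (e.g. from Hugging Face).
#    The filename below is a placeholder for whatever model you fetched.

# 3. Create a Modelfile that points at the local GGUF file
echo 'FROM ./model.Q4_K_M.gguf' > Modelfile

# 4. Register the model with Ollama, then run it interactively
ollama create my-gguf-model -f Modelfile
ollama run my-gguf-model
```

The `FROM` line in the Modelfile is what tells Ollama to build the model from a local GGUF file instead of pulling one from the Ollama registry.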
All rights reserved © 2021 Fahd Mirza
I don't like working in the console. That said, this is the only video or description of any I have come across that managed to get me to understand how to actually do this so thank you. So many others seem to skip steps because they assume people are familiar with them already.
Works in ollama now and some other apps I have that attach to it; doesn't show up in Open WebUI for some reason, but I don't really need custom models for how I use that app at the moment. So again, thanks for the way you described this.
Hey Fahd... Thanks for sharing the knowledge. Hugs from Brazil.
Thanks, you made my day.
Great, will you share one for the safetensors type?
Thanks for your tutorial. I follow only you and I learn a lot from you.
Thanks my friend, you just made my day. cheers
How can I fix "Error: this model is not supported by your version of Ollama. You may need to upgrade" when I have already upgraded to the latest version?
Update your Ollama.
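As the reply suggests, this error usually means the installed Ollama is older than the model format requires. A quick sketch of checking and upgrading (on Linux, the official install script also performs upgrades; this assumes a script-based install rather than a package manager):

```shell
# Check which version is currently installed
ollama -v

# Re-run the official install script to upgrade to the latest release
curl -fsSL https://ollama.com/install.sh | sh
```

If the error persists after upgrading, make sure no older Ollama binary earlier in your PATH is shadowing the new one, and restart the Ollama server process so the new version is the one actually serving requests.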
I didn't even know 48GB VRAM Nvidia existed for personal use.
sure
You can rent 48 GB VRAM or even higher GPUs on websites like RunPod.