Thanks for leaving all the errors in and correcting them. Excellent.
I was using Ubuntu Desktop running Mixtral on Ollama so I could make API calls from my FastAPI app in VS Code, but realized I should separate them out and go headless for Ollama. I didn't realize that CORS was preventing outside calls from my dev machine, and this video helped once I found the GitHub page as well. Thanks for sharing.
Glad to hear you sorted it!
@IanWootten Thank you, I ended up storing the environment variable in the .conf file to bind the IP address, so it handles this process automatically.
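For anyone reading later, that .conf approach is roughly a systemd override (assuming the ollama.service unit the official install script creates): run sudo systemctl edit ollama.service, add
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
to the override file, then sudo systemctl daemon-reload and sudo systemctl restart ollama so it picks up the new bind address.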
This is amazing news! I'm limited to 16GB RAM on my Macs, but not so on my Linux machines!
Just what I was looking for, thanks Ian!
No problem!
Mistral 7B is running really sweet on my old Asus (16GB RAM) laptop.
Runs really fast on my MBP too, just started playing with it yesterday.
@@IanWootten sweet
Are you running it without a GPU? I have an old laptop with 16GB as well and a beefy CPU, but I'm not sure if it'll be able to run somewhat smoothly with just the specs I have.
Thank you, indeed. It worked.
You are a real hero.
RunPod is very affordable too. From 17c per hour for an NVIDIA 3080.
Yeah, I wanted to do a comparison of all the new services appearing.
Great stuff, thanks a lot!
I can't run it with service ollama start, it says the following:
$ sudo service ollama start
ollama: unrecognized service
Can we use Ollama to serve in production? If not, what is your suggestion?
How does this scale for multiple users sending multiple requests at a time? Do you need to use a load balancer / reverse proxy? I don't think Ollama supports batch inference yet.
You would have to launch and run the application multiple times; the easiest way is to just use something like Docker. Otherwise, I believe there's the "screen" command. If I remember correctly, on Linux this will let you run applications in the CLI across multiple virtual "screens", or rather sessions, and you'd then want to make sure whatever port each one uses is different from the others. Also take note that the RAM each instance uses is its own, while CPU can be shared. Sharing RAM might be possible with some tricks, but it's unlikely.
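As a rough sketch (assuming you just want two independent servers on one box, each bound to its own port via OLLAMA_HOST):
OLLAMA_HOST=127.0.0.1:11434 ollama serve   # first instance on the default port
OLLAMA_HOST=127.0.0.1:11435 ollama serve   # second instance, different port
You'd then put a reverse proxy or load balancer in front to spread requests between them.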
@jakestevens3694 What about pulling from a custom endpoint where I have my own hosted models? I want to run this on an air-gapped network that doesn't have any access to the internet, so if I could point it to an on-prem server I have, that would be awesome.
Hello. I'm developing an on-premises application that consumes Ollama via its API. However, after a few minutes, the Ollama server stops automatically. I would like to know if there is any way to keep it running until I stop it.
Thank you very much.
0:08 How did you get to your pronunciation of Linux?
10:53 How could one correct the error occurring here?
This was a really helpful video Ian!
But I am facing one issue: after running ollama serve, the server shuts down when I close the terminal. Please tell me if there is a way to prevent this.
Thanks!
How do you connect to the server via a Python client or FastAPI for integration with projects/notebooks?
If you simply want to make a request to an API from Python, there are plenty of options. You can use a module from Python's standard library like urllib, or a popular library like requests.
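For example, a minimal sketch using requests against Ollama's REST API (assuming the server is on the default localhost:11434 and you've already pulled llama2):
import requests

# Ask the local Ollama server for a single, non-streamed completion
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
)
resp.raise_for_status()
print(resp.json()["response"])
The same call works from a FastAPI route or a notebook cell, since it's just HTTP.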
For a 70B model, what server would I need to rent? The docs say at least 64GB of RAM... but there are no minimum specs for the NVIDIA card in the docs. Who has experience with this?
Anyone got this running on anything lower than 8GB of RAM on DigitalOcean? I tried locally on my own computer with a huge prompt and a 3B model, and it only used around 1GB of RAM maximum.
Yeah, it depends on the model itself. Ollama often lists the memory requirements on the model page, e.g. ollama.ai/library/llama2
Which version of Ubuntu did you choose? It seems to be missing from the video.
Hello Ian, it's a great video. I have a query and would be very thankful if you could help me; I have been stuck for 3 days. I am trying to host Ollama on my server, but I am very new to Linux and don't understand what I am doing wrong. I am using nginx as a proxy in front of Ollama, and even after configuring the nginx file I still get an access denied error. I can show you the config if you want, please respond.
I got an error while executing the curl command: Failure writing output to destination
Weird. Perhaps try running it from a directory you are certain you have write access to.
What is the cost of web hosting Ollama or these types of LLM models?
In this case, it'll be the price of the virtual machine you choose to install it on, so it depends on the provider.
How come the model runs in 8GB of RAM? The docs themselves say it needs at least 16GB for Llama 2.
No idea - I was going on experience using ollama rather than the model itself.
How do I change the IP and port for Ollama?
Use the env var OLLAMA_HOST. e.g. OLLAMA_HOST=127.0.0.1:8001 ollama serve
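If you want to check it's listening on the new address, something like this should list your local models:
curl http://127.0.0.1:8001/api/tags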
Thanks
What are the benefits of running it like this vs in Docker?
Running anything within a container adds a layer of overhead, so the app will generally run a little slower than it does natively.
Do you think it is safe to install on your own laptop instead of a cloud server?
Yes. Ollama has desktop versions too, and it doesn't send anything externally when you query it if you go that route. I have another video where I do this on my Mac.
Right now there is an issue with Ollama where, if you create a model, it spams you with "do not have permission to open Modelfile".
It's super odd, because even if you give full read and execution rights to every user or run the command with sudo, it still fails.
The only viable workaround is to run it in /tmp.
This is an issue with the current user not having access to the ollama group. There's a recommended solution posted here (though sounds like it might not be completely resolved): github.com/jmorganca/ollama/issues/613#issuecomment-1756293841
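(If it helps, the fix suggested there is roughly sudo usermod -a -G ollama $(whoami), then logging out and back in so the new group membership takes effect.)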
@IanWootten Saw that.
Even after running sudo usermod -a -G ollama $(whoami)
It still won't work.
The idea to run it in /tmp came from that thread haha.
Outside of this issue, the rest of the project is pretty cool imo.
Local LLMs with reinforcement learning, WiFi and direct brain integration will be the future.
Even if the killer is dead on the floor, the killer is still there and would still be a killer 😅 so the correct answer would be 3.
A killed killer
Thank you 🦙
No problem!