It's also a cool way to run a RAG setup locally - nice going.
Excellent video! I loved how you explained the process of deploying Ollama on Kubernetes. Thanks for sharing your knowledge!
I use Docker Compose, but I was curious about k8s
Looking forward to the next video!
You can find the video here: ua-cam.com/video/Yomo2DnL9NA/v-deo.html
@mathisve thanks!
Would really like a video on how to add a GPU to this, both locally and in the cloud.
Stay tuned for that video! I'm working on it as we speak, should be out later this week!
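For anyone who wants to experiment before the video drops: on a cluster where the NVIDIA device plugin is already running, the core change is a GPU resource limit on the Ollama container. A minimal sketch (container snippet only, not the full manifest from the video; image tag illustrative):

```yaml
# Minimal sketch: schedule Ollama onto a GPU node.
# Assumes the node has NVIDIA drivers and the NVIDIA device plugin DaemonSet.
containers:
  - name: ollama
    image: ollama/ollama:latest
    resources:
      limits:
        nvidia.com/gpu: 1  # request one whole GPU for the pod
```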
Helpful tutorial
Thank you
Would using a k8s job to make the pull API call suffice?
Yes and no! On paper, if you only had one pod, this could work. But the pull API call needs to be made every time a new Ollama pod is scheduled (unless you're using a PVC mounted to the pod to store the model). As far as I'm aware, it's not possible to trigger a Kubernetes Job at the creation of a new pod without using an operator.
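One pattern that sidesteps the Job-per-pod problem (not from the video; the model name is illustrative) is a postStart lifecycle hook on the Ollama container itself, so the pull runs on every new pod:

```yaml
# Sketch only: Deployment snippet, not the exact manifest from the video.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          ports:
            - containerPort: 11434
          lifecycle:
            postStart:
              exec:
                # postStart runs concurrently with the entrypoint, so
                # retry until the Ollama server is up and the pull succeeds.
                command:
                  - /bin/sh
                  - -c
                  - until ollama pull llama3; do sleep 2; done
```

The retry loop matters because postStart fires alongside the container's entrypoint, so the Ollama server may not be listening yet on the first attempt.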
Can you please advise how to run Ollama in a k3d cluster in a WSL2 / Windows 11 / Docker Desktop environment? The issue I'm not able to solve is making the GPU visible in a node.
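Not an authoritative answer, but roughly: the stock k3s node image doesn't ship the NVIDIA container runtime, so the k3d docs' CUDA guide has you build a custom k3s image and pass the GPU through Docker. A hedged sketch (image names are placeholders; assumes Docker Desktop's WSL2 GPU integration already works):

```sh
# 1. Sanity check: Docker Desktop can see the GPU from WSL2.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# 2. Create the k3d cluster from a CUDA-enabled k3s image (built per
#    the k3d CUDA guide) and pass all GPUs into the node container.
k3d cluster create gpu-cluster --gpus=all --image=<your-cuda-enabled-k3s-image>

# 3. Deploy the NVIDIA device plugin so pods can request nvidia.com/gpu
#    (see the k8s-device-plugin README for the current manifest URL).
kubectl apply -f nvidia-device-plugin.yml
```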
Meanwhile, I can't even deploy a simple Laravel app on k8s 🤒
Do you need help with deploying Laravel on Kubernetes?