Hello, the video was nicely and clearly explained, step by step. I would like to see the same Ollama setup but on a serverless architecture. Could you please post a video on the Ollama serverless setup?
Thanks for sharing. Appreciated. Can you elaborate more…
@@ScaleUpSaaS I mean setting up Ollama on serverless technology on AWS, using Lambda or other services. Or maybe on Google Cloud Functions for serverless.
We don’t know if it’s possible. But we will check and let you know 🫡
We tried to find a solution for you. Unfortunately, we haven't found one yet. We will let you know if something comes up...
@@ScaleUpSaaS, Thank You
I recently came across your video on installing and running Llama3 (or any LLM) using Ollama on AWS Linux. I was wondering if it's possible to interact with the deployed model programmatically by calling it as an API in code. Could you provide insights or a brief guide on how to achieve this?
Thank you for the great content!
Yes, you can. You can call it as an API.
All you need to do is implement a Python FastAPI service, and once you are getting requests to FastAPI you can make an inner request to your local Ollama.
So, do you want us to make a video about it?
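A minimal sketch of that FastAPI-to-Ollama relay, assuming Ollama is listening on its default local port (11434) and the llama3 model has already been pulled; the /ask endpoint, request shape, and field names below are illustrative assumptions, not something shown in the video:

```python
# Hypothetical sketch: a FastAPI endpoint that forwards requests to a local Ollama instance.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import requests

app = FastAPI()

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

class Prompt(BaseModel):
    prompt: str

@app.post("/ask")
def ask(body: Prompt):
    # Relay the incoming prompt to Ollama and return the generated text.
    try:
        resp = requests.post(
            OLLAMA_URL,
            json={"model": "llama3", "prompt": body.prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
    except requests.RequestException as exc:
        raise HTTPException(status_code=502, detail=str(exc))
    # With "stream": False, Ollama returns the full answer under the "response" key.
    return {"answer": resp.json().get("response")}
```

You would run it with something like `uvicorn main:app --host 0.0.0.0 --port 8000` and POST a JSON body such as {"prompt": "Hello"} to /ask.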
@@ScaleUpSaaS Can you please make a video or explain how we can do it? I have written the FastAPI code but can't figure out how to call the local Ollama API.
@@ScaleUpSaaS Yes please. If there were a video, that would be helpful.
Sure. We will be happy to share that with you.
@@ScaleUpSaaS Thank you
Can I link the WebUI with my domain, so people can access the web UI through the domain, and of course with SSL? I'd be really thankful if you explained it or made a video on it.
I will make a video about this very soon.
I followed the entire tutorial, but when I type 'llama3' in 'Select a model', the 'Pull "llama3" from Ollama' option is not appearing.
Please try the tutorial again from scratch. We have tried it many times with users, and it worked each time.
Is this free to run on AWS? If not, can you comment on the AWS cost incurred to run this application?
Thanks for sharing. Ollama, Llama3, or any other LLM that you can pull is free to use. But the server will cost you money on AWS, because we are not using a free-tier instance type.
How do I run it privately? Could someone looking for those endpoints find them on the clear web?
You can run it on your computer using Docker, as we showed in the tutorial. Otherwise, do what we did in the video and restrict access to the server to your IP only (configure the security group).
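As a rough illustration of that security-group restriction, here is a hedged boto3 sketch; the region, security group ID, WebUI port (3000), and public IP are placeholders you would replace with your own values:

```python
# Hypothetical sketch: allow inbound access to the WebUI port from a single IP only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder: your instance's security group
MY_IP = "203.0.113.7"                        # placeholder: your current public IP

ec2.authorize_security_group_ingress(
    GroupId=SECURITY_GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3000,   # assumed Open WebUI port from the tutorial
        "ToPort": 3000,
        "IpRanges": [{"CidrIp": f"{MY_IP}/32", "Description": "my IP only"}],
    }],
)
```

The same restriction can also be applied by hand in the EC2 console, under the security group's inbound rules, as done in the video.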
@@ScaleUpSaaS But what if your WiFi IP is not static and keeps changing, and you want to access the LLM from any device and any network, but still keep it safe and accessible only to you?
@wagmi614 In that case you can use an Elastic IP address. In this video you can see how we set up an Elastic IP address in AWS:
Full Node.js Deployment to AWS - FREE SSL, NGINX | Node js HTTPS Server
ua-cam.com/video/yhiuV6cqkNs/v-deo.html
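For reference, a hedged boto3 sketch of what allocating and attaching an Elastic IP could look like, so the server keeps a fixed public address; the region and instance ID are placeholders, and the linked video may do this through the AWS console instead:

```python
# Hypothetical sketch: allocate an Elastic IP and attach it to an EC2 instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption
INSTANCE_ID = "i-0123456789abcdef0"                   # placeholder: your instance ID

allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId=INSTANCE_ID,
    AllocationId=allocation["AllocationId"],
)
print("Server is now reachable at", allocation["PublicIp"])
```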
Watch this. Full Node.js Deployment to AWS - FREE SSL, NGINX | Node js HTTPS Server
ua-cam.com/video/yhiuV6cqkNs/v-deo.html
@@ScaleUpSaaS Wait, I don't get it. How does an Elastic IP on AWS help when it's my IP that's changing, and I want input to be accepted from any IP?