Hello, thank you for your video.
Could you please let me know if I can use it on my laptop, which only has an NVIDIA GeForce MX330 and 16GB RAM?
Yes, you can use it, but don't go for bigger models; go for 8B-parameter models or smaller.
@@techCodio Thank you very much.
And could you please make a demo video of the Llama model using the API?
@@quanbuiinh604 Using the Groq API or any other platform?
@@quanbuiinh604 Recently I started a RAG course on this channel using free models. In the advanced RAG part I will use the Groq API for the Llama model.
Can you suggest a configuration? I understood that an 8GB GPU and 16GB RAM are sufficient; what about the best CPU and motherboard to use?
I have no idea about the motherboard, but a Core i5 is enough along with the configuration mentioned above (16GB RAM and an 8GB GPU).
Hello bro, how do I make a mental-health counseling chatbot? Which Llama source is easiest?
Ollama is easy, or else you can get a Llama API key through Amazon Bedrock.
1. For Ollama you need a high-end GPU and CPU.
2. Try to create an account on AWS and get the Llama 3.1 key; it's not expensive. You can get 1000 input tokens for $0.0004. The approximate budget is $10 for your project. Best option.
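A minimal sketch of the Bedrock route described above. The model id and the request-body fields (`prompt`, `max_gen_len`, `temperature`) are assumptions based on Bedrock's Meta Llama format; check the AWS console for the current model id and pricing before relying on them.

```python
import json

MODEL_ID = "meta.llama3-1-8b-instruct-v1:0"   # assumed Bedrock model id
PRICE_PER_1K_INPUT_TOKENS = 0.0004            # USD, the rate quoted above

def build_llama_request(prompt: str, max_gen_len: int = 256,
                        temperature: float = 0.5) -> str:
    """Build the JSON body Bedrock expects for Meta Llama models (assumed schema)."""
    return json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    })

def estimate_input_cost(input_tokens: int) -> float:
    """Rough input-side cost estimate at the quoted per-1K-token rate."""
    return input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

# The actual call needs AWS credentials and boto3, e.g.:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(modelId=MODEL_ID, body=build_llama_request("Hello"))
# print(json.loads(response["body"].read())["generation"])

print(estimate_input_cost(1000))  # cost in USD for 1000 input tokens
```

At that rate, a $10 budget covers on the order of 25 million input tokens, which is why Bedrock is a cheap option for a small project.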
How can I see this chatbot on a public-IP website? That way my friend could use my Ollama. What do I have to change?
It's a private IP by default; yes, you can use Ollama for that.
@@techCodio Can you do a tutorial that explains how to install Ollama on Linux and then how to share the website with this Ollama chatbot publicly?
@@jefnize2444 Steps: 1. curl -fsSL https://ollama.com/install.sh | sh (the official Linux installer; there is no standard apt package), 2. ollama pull llama3.1, 3. ollama run llama3.1
I think they want to be able to use this as a chat bot on their website. How do you go about doing that?
@@Danny_Bananie Downloading on Linux? It just requires a couple of commands; I used Ollama on an EC2 Linux server with the above-mentioned commands. For a local laptop we need to download Ollama.
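The EC2 setup described above can be sketched as a short script. This assumes the official install script from ollama.com and Ollama's `OLLAMA_HOST` variable; you would also need to open port 11434 in the EC2 security group before other machines can reach it.

```shell
# Install Ollama (official Linux method):
curl -fsSL https://ollama.com/install.sh | sh

# Pull the model:
ollama pull llama3.1

# By default the API listens only on localhost. To make it reachable from
# other machines, bind to all interfaces before serving:
OLLAMA_HOST=0.0.0.0 ollama serve
```

Exposing the raw Ollama port publicly is risky (anyone can call it); for a friend-facing chatbot it is safer to expose only the Streamlit app and keep Ollama private behind it.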
How can we view this app on a mobile phone when it is deployed locally on our own device?
When you deploy the Streamlit application you can see the deployment link; then you can access that link from anywhere.
@@techCodio When I deploy, Streamlit throws this error: Error invoking LLM: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
Could you please help me solve this error?
We cannot deploy a big LLM application on Streamlit; it's meant for lightweight application development.
In order to deploy the app you need to host it on AWS or Azure services, and that is a big process.
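The "Connection refused" error above happens because the deployed app looks for Ollama on its own localhost, where nothing is listening. A common workaround is to make the Ollama address configurable, so the same code runs locally and against a remote server. This is a sketch; the `OLLAMA_BASE_URL` env-var name is my choice, and `/api/generate` with `model`/`prompt`/`stream` is Ollama's REST endpoint.

```python
import os

# Read the Ollama address from the environment, falling back to localhost
# for local development (env-var name is an assumption, not a standard).
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

def build_generate_request(model: str, prompt: str) -> tuple[str, dict]:
    """URL and JSON payload for Ollama's /api/generate endpoint."""
    url = f"{OLLAMA_BASE_URL}/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, payload

# With the `requests` library installed, the actual call would look like:
# import requests
# url, payload = build_generate_request("llama3.1", "Hello")
# print(requests.post(url, json=payload, timeout=120).json()["response"])

url, payload = build_generate_request("llama3.1", "Hello")
print(url)  # defaults to http://localhost:11434/api/generate unless overridden
```

On Streamlit Community Cloud this still requires an Ollama server reachable from the internet (e.g. the EC2 setup above); the env var only tells the app where to find it.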
I'm getting: ModuleNotFoundError: No module named 'llama_index'
pip install llama-index