Great video karthik
Thank you 🙂
Thanks for sharing! 🇧🇷
Thanks for watching!
Nice tutorial. May I know what computer configuration is required to run this on a local machine? Is the above tutorial possible with 8 GB RAM?
Thanks for the feedback. Appreciate it.
In this tutorial, I used the Groq API to access Llama-3, so you should be able to run it even with 2 GB of RAM, since the LLM calls go to Groq's servers. You'll need to create an API key on Groq, though.
But if you want to use local LLMs, like the ones available through Ollama, you'll need around 8 GB of RAM. More than 8 GB is better, but a 7-8 B parameter model works with 8 GB as well. An i3 processor or above is good to have.
That said, I should mention that results were not that good with Llama-3 accessed through Ollama. I have tried it; maybe it's because it's a quantized version.
I would suggest using the Groq API.
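For anyone following along, here is a minimal sketch of what calling Llama-3 through the Groq API can look like in Python. This assumes the `groq` package (`pip install groq`), a `GROQ_API_KEY` environment variable, and a placeholder model id — check Groq's current model list and docs before relying on either:

```python
# Hedged sketch: calling Llama-3 via the Groq API.
# Assumptions (not from the video): the `groq` Python package,
# a GROQ_API_KEY env var, and the "llama3-8b-8192" model id.
import os

model = "llama3-8b-8192"  # assumed model id; verify against Groq's model list
messages = [{"role": "user", "content": "Summarize RAG in one sentence."}]

if os.environ.get("GROQ_API_KEY"):
    from groq import Groq

    client = Groq()  # picks up GROQ_API_KEY from the environment
    resp = client.chat.completions.create(model=model, messages=messages)
    print(resp.choices[0].message.content)
else:
    print("Set GROQ_API_KEY to run the actual call.")
```

The request shape (a `model` id plus a list of chat `messages`) is the same pattern Ollama's local API uses, so swapping between the two mostly means changing the client and model name.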
@CoderzColumn Thank you. Can you please explain the difference between RAG and fine-tuning, and in which situations we should use each?