Generate Pydantic & JSON Objects from Text using LlaMa-3 | LlamaIndex | Groq API

  • Published 15 Sep 2024

COMMENTS • 7

  • @deekshitht786 · A month ago

    Great video, Karthik

  • @jailsonsouto9122 · A month ago

    Thanks for sharing! 🇧🇷

  • @karthikb.s.k.4486 · A month ago

    Nice tutorial. May I know what computer configuration is required to run this on a local machine? Is the above tutorial possible with 8 GB RAM?

    • @CoderzColumn · A month ago

      Thanks for the feedback. Appreciate it.
      In this tutorial, I used the Groq API to access LlaMa-3, so you should be able to run it even with 2 GB of RAM, because the LLM calls go to the Groq API. You will need to create an API key on Groq, though.
      But if you want to use local LLMs, like the ones available from Ollama, then you'll need around 8 GB of RAM. More than 8 GB is better, but a 7-8 B parameter model works with 8 GB as well. It's good to have an i3 processor or above.
      I should mention that results were not that good with Llama-3 accessed through Ollama. I have tried it. Maybe that's because it's a quantized version.
      I would suggest using the Groq API.
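      The Pydantic side of the workflow discussed above can be sketched without any API key or local model: define a schema and validate the JSON text that the LLM (LlaMa-3 via the Groq API in the video) would return. The `Person` model and the sample JSON string below are illustrative assumptions, not taken from the video:

      ```python
      from pydantic import BaseModel

      # Hypothetical schema describing the structured output we want from the LLM
      class Person(BaseModel):
          name: str
          age: int

      # Stand-in for the raw JSON text an LLM call would return
      llm_output = '{"name": "Alice", "age": 30}'

      # Pydantic v2 parses and validates the JSON against the schema in one step
      person = Person.model_validate_json(llm_output)
      print(person.name, person.age)  # → Alice 30
      ```

      With LlamaIndex, the same schema would typically be passed to a structured-output helper so the framework prompts the model and parses the response into the Pydantic object for you; the validation step shown here is what catches malformed LLM output.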

    • @karthikb.s.k.4486 · A month ago

      @CoderzColumn Thank you. RAG vs. fine-tuning: can you please explain the difference and in which situations we should use each?