Build Your Own Chatbot with Langchain, Ollama & LLAMA 3.2 | Local LLM Tutorial

  • Published Dec 29, 2024

COMMENTS • 9

  • @ssvipl64 · 10 days ago · +1

    Thanks for this tutorial. It was nicely explained and came with simple, functional code.

  • @rshekarLan · 2 months ago · +3

    Very well explained. Please make some videos on RAG using in-house data. Thank you!

  • @yinkafad · 2 months ago

    Excellent delivery, thank you. Please explore a demo for RAG, i.e., chatting with my own data.
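
Both of the RAG requests above come down to the same pattern: embed your own documents locally, retrieve the most relevant chunks, and stuff them into the prompt for the local model. A rough sketch under stated assumptions (the langchain-ollama, langchain-community and faiss-cpu packages installed, an Ollama server running with the llama3.2 and nomic-embed-text models pulled; the sample documents and question are placeholders, not material from the video):

```python
# Rough RAG sketch (not from the video): index your own text locally, then ask Llama 3.2.
from langchain_ollama import ChatOllama, OllamaEmbeddings
from langchain_community.vectorstores import FAISS

# Placeholder "in-house" documents -- replace with your own file contents / chunks.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm, Monday to Friday.",
]

# Embed and index the documents locally (assumes `ollama pull nomic-embed-text` was run).
embeddings = OllamaEmbeddings(model="nomic-embed-text")
store = FAISS.from_texts(docs, embedding=embeddings)
retriever = store.as_retriever(search_kwargs={"k": 2})

llm = ChatOllama(model="llama3.2")

question = "When can I return a product?"
context = "\n".join(d.page_content for d in retriever.invoke(question))

# Put the retrieved context into the prompt and let the local model answer.
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```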

  • @gazzalifahim · 2 months ago

    Hey, I was looking for thorough guidance exactly like this! Thanks a million. Could you also share your system config, please?

  • @aaron-ri · 1 month ago

    What if I want to use a custom model built with an Ollama Modelfile?
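
On the custom-Modelfile question: a model built with ollama create shows up like any other local model, so in principle only the model name in the video's code needs to change. A small sketch with a hypothetical model name (my-custom-llama) and Modelfile:

```python
# Sketch for using a custom Ollama model (built from a Modelfile) with LangChain.
# Build it on the command line first; this Modelfile is a hypothetical example:
#
#   # Modelfile
#   FROM llama3.2
#   PARAMETER temperature 0.3
#   SYSTEM "You answer as a concise customer-support assistant."
#
#   ollama create my-custom-llama -f Modelfile
#
# Then point ChatOllama at the new model name instead of "llama3.2".
from langchain_ollama import ChatOllama

llm = ChatOllama(model="my-custom-llama")  # hypothetical name from `ollama create`
print(llm.invoke("Hello, who are you?").content)
```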

  • @mootykins7 · 2 months ago

    What are the hardware requirements to run LLaMA 3.2 locally? I assume it needs a GPU with at least 6 GB of VRAM?

    • @KGPTalkie · 2 months ago

      4 GB is plenty; 6 GB would be a good choice.

  • @Mr_Sniper5 · 2 months ago

    Hi, can you suggest a real-time project?

  • @sahilchalke1469 · 2 months ago

    How do I host this?
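
On hosting: one common option (a sketch, not the author's setup) is to wrap the ChatOllama call in a small FastAPI app and run it with uvicorn, while Ollama keeps serving the model on the same machine; the endpoint path and file name here are assumptions:

```python
# Hosting sketch (one option, not the author's setup): expose the chatbot over HTTP.
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_ollama import ChatOllama

app = FastAPI()
llm = ChatOllama(model="llama3.2")  # assumes `ollama serve` is running alongside the app

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest):
    # invoke() returns an AIMessage; .content is the plain-text reply
    return {"reply": llm.invoke(req.message).content}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
# (assumes this file is saved as main.py)
```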