🛠️ Build Your Own Chatbot Using Llama 3.1 8B | Ollama & Streamlit 🚀

  • Published Jan 16, 2025

COMMENTS • 25

  • @PraveenM-f2t
    @PraveenM-f2t 4 months ago +1

    Hello bro, how do I make a mental health counseling chatbot? Which Llama source is easiest?

    • @techCodio
      @techCodio  4 months ago +1

      Ollama is easy, or else you can get a Llama API key through Amazon Bedrock, bro.
      1. For Ollama you need a high-end GPU and CPU.
      2. Try creating an account on AWS and getting the Llama 3.1 key; it's not expensive. You get 1,000 input tokens for $0.0004. The approximate budget for your project is $10. Best option.
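      As a sanity check on that estimate (the $0.0004 per 1,000 input tokens rate is taken from the comment above; actual Bedrock pricing varies by region, model, and token type):

      ```python
      # Rough budget check using the rate quoted above (an assumption here;
      # real AWS Bedrock pricing varies by region, model, and token type).
      rate_per_1k_input = 0.0004  # dollars per 1,000 input tokens
      budget = 10.0               # dollars

      tokens_covered = budget / rate_per_1k_input * 1000
      print(f"${budget:.0f} covers about {tokens_covered:,.0f} input tokens")
      ```

      So a $10 budget covers roughly 25 million input tokens at that rate, which is plenty for a student project.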

  • @jayasreevaradarajan9382
    @jayasreevaradarajan9382 2 months ago +1

    How can we view this app on a mobile phone when it is deployed locally on our own device?

    • @techCodio
      @techCodio  2 months ago +1

      When you deploy the Streamlit application you can see the deployment link, and then you can access that link from anywhere.

    • @jayasreevaradarajan9382
      @jayasreevaradarajan9382 2 months ago +1

      @@techCodio When I deploy, Streamlit throws this error: Error invoking LLM: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

    • @jayasreevaradarajan9382
      @jayasreevaradarajan9382 2 months ago +1

      Could you please help me to solve this error?

    • @techCodio
      @techCodio  2 months ago

      We cannot deploy a big LLM application on Streamlit; it's meant for lightweight application development.

    • @techCodio
      @techCodio  2 months ago

      In order to deploy the app you need to host it on AWS or Azure services, and that is a big process.
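      The "Connection refused" error earlier in this thread is exactly this problem: the deployed Streamlit app calls http://localhost:11434, but Ollama only runs on the original machine, not on Streamlit's servers. A minimal sketch of reading the server address from an environment variable instead, so the same code can point at a self-hosted Ollama (e.g. on an EC2 instance); the variable name `OLLAMA_HOST` and the fallback shown here are assumptions about this app's config, not a Streamlit setting:

      ```python
      import os

      # Default to localhost for local development; in a deployed environment,
      # set OLLAMA_HOST to a server that actually runs Ollama,
      # e.g. http://<ec2-ip>:11434 (placeholder for your server's address).
      OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
      GENERATE_URL = f"{OLLAMA_HOST}/api/generate"  # Ollama's generate endpoint

      print(GENERATE_URL)
      ```

      Note the remote machine must run the Ollama server and expose port 11434 to the app; opening that port publicly without authentication is a security risk.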

  • @sp.kannan
    @sp.kannan 3 months ago +1

    Can you suggest a configuration? I understand an 8GB GPU and 16GB RAM are sufficient; what CPU and motherboard are best to use?

    • @techCodio
      @techCodio  3 months ago +1

      I have no idea about the motherboard, but a Core i5 is enough along with the above-mentioned configuration of 16GB RAM and an 8GB GPU.

  • @jefnize2444
    @jefnize2444 4 months ago +2

    How can I expose this chatbot on a public IP website, so that my friend can use my Ollama? What do I have to change?

    • @techCodio
      @techCodio  4 months ago +3

      It's a private IP; yeah, you can use Ollama.

    • @jefnize2444
      @jefnize2444 4 months ago +2

      @@techCodio Can you do a tutorial that explains how to install Ollama on Linux, and then how to share the website with this Ollama chatbot publicly?

    • @techCodio
      @techCodio  4 months ago +1

      @@jefnize2444 Steps: 1. curl -fsSL https://ollama.com/install.sh | sh (Ollama isn't in the default apt repositories), 2. ollama pull llama3.1, 3. ollama run llama3.1

    • @Danny_Bananie
      @Danny_Bananie 4 months ago +2

      I think they want to be able to use this as a chatbot on their website. How do you go about doing that?

    • @techCodio
      @techCodio  4 months ago +1

      @@Danny_Bananie Downloading on Linux? It just requires two commands; I used Ollama on an EC2 Linux server with the above-mentioned commands. For a local laptop we need to download Ollama.

  • @quanbuiinh604
    @quanbuiinh604 2 months ago +1

    Hello, thank you for your video.
    Could you please let me know if I can use it on my laptop, which only has an NVIDIA GeForce MX330 and 16GB RAM?

    • @techCodio
      @techCodio  2 months ago +2

      Yes, you can use it, but don't go for bigger models; go for 8B-parameter models or smaller.

    • @quanbuiinh604
      @quanbuiinh604 2 months ago +1

      @@techCodio Thank you very much.
      And could you please make a demo video of the Llama model using the API?

    • @techCodio
      @techCodio  2 months ago

      @@quanbuiinh604 Using the Groq API or any other platform?

    • @techCodio
      @techCodio  2 months ago

      @@quanbuiinh604 I recently started a RAG course on this channel using free models. In the advanced RAG part I will use the Groq API for the Llama model.

  • @divakarv7727
    @divakarv7727 4 months ago

    I'm getting: ModuleNotFoundError: No module named 'llama_index'
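
    That error means the `llama_index` package isn't installed in the Python environment the script runs from; installing it usually fixes it (the PyPI package name is `llama-index`, with a hyphen). A common pitfall when several Pythons are installed is installing into the wrong interpreter; a small check, sketched here as a generic diagnostic rather than anything from the video:

    ```python
    import importlib.util
    import sys

    # Which interpreter is running? Install into this one specifically:
    #     <path printed below> -m pip install llama-index
    print(sys.executable)

    # Detect the missing module without crashing, unlike a bare import.
    if importlib.util.find_spec("llama_index") is None:
        print("llama_index is not installed in this environment")
    ```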