Replace the OpenAI API with local models: Ollama + LiteLLM, text-generation-webui, Google Colab

  • Published 24 Nov 2024

COMMENTS • 11

  • @MelroyvandenBerg • 10 months ago

    I wish I could easily use Ollama together with the power of the text-generation-webui project.

  • @paritaistudio206 • 10 months ago

    Thanks for the content! I enjoyed an OpenAI / ChatGPT video for the first time in months, simple and informative, it's really appreciated. I've been searching for a way to replace the OpenAI API token with a model running locally and point my apps at it, with zero luck. All I want is to run a chatbot on WordPress with a local LLM using Ollama. Good thing dreaming is still free :)

    • @LearnCodeWithJV • 10 months ago

      Nice, the good news is the open models keep getting more performant!
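
For anyone attempting what @paritaistudio206 describes: Ollama exposes an OpenAI-compatible endpoint, so most apps that accept an OpenAI API key and base URL can usually be pointed at a local model with a one-line change. A minimal sketch using the official Python client, where the model name and prompt are placeholders rather than details from the video:

```python
# Minimal sketch: point the openai client at a local Ollama server.
# Assumes `ollama serve` is running and a model has been pulled,
# e.g. `ollama pull llama3`. The model name below is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # Ollama ignores the key, but the client requires a non-empty one
)

response = client.chat.completions.create(
    model="llama3",  # whichever model you have pulled locally
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.choices[0].message.content)
```

A WordPress chatbot plugin would need to expose its API base URL as a setting for the same trick to work; that part varies by plugin.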

  • @forestpeoplemushrooms5267 • 1 year ago • +2

    Thanks for the video! Got the model working in the Colab webui, but the Cloudflare link is broken somehow. Thanks for all your work to make this video. Cheers

  • @goonie79 • 1 year ago

    Great tutorial. Is there a method to run Ngrok and LiteLLM on the Google Colab side and then connect to it from a local machine? This would make Open Interpreter powerful for any machine.

    • @LearnCodeWithJV • 1 year ago • +1

      Probably. I know Google Colab doesn't like running servers and prioritises interactive notebooks, so I would guess it's a bit of a moving target. I've seen a bunch of Stable Diffusion notebooks they've disabled, but the Ngrok + LiteLLM combo sounds simple enough that it would be worth a try.
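
For readers who want to try it, here is a rough, untested sketch of that combo in a Colab cell. It assumes `litellm` and `pyngrok` are pip-installed, an Ollama server with a pulled model is already running in the notebook, and you have an ngrok auth token; the model name and port are placeholders:

```python
# Rough sketch of the Ngrok + LiteLLM combo on Colab (untested there).
import subprocess
from pyngrok import ngrok

# Start LiteLLM's OpenAI-compatible proxy in front of a local Ollama model.
# "ollama/llama3" and port 8000 are placeholders, not values from the video.
proxy = subprocess.Popen(["litellm", "--model", "ollama/llama3", "--port", "8000"])

ngrok.set_auth_token("YOUR_NGROK_TOKEN")  # ngrok requires an auth token
tunnel = ngrok.connect(8000)              # public HTTPS tunnel to the proxy
print(f"Point your OpenAI-compatible client at: {tunnel.public_url}")
```

Open Interpreter, or any other OpenAI-compatible client, could then use the printed tunnel URL as its API base.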

  • @mohsinsheikh2480 • 1 year ago • +1

    Thanks, great!
    What's the configuration of your system?

    • @LearnCodeWithJV • 1 year ago

      An NVIDIA RTX 3060 with 12 GB of VRAM is the main driver of how well the local models perform. For the rest, I just run Windows Subsystem for Linux on an i9 at 3.3 GHz with 64 GB of memory.

  • @rich.fortune • 1 year ago

    Thanks for sharing, JV! Can you share your local setup too (in the vids or in a linked video) so people like me can get a realistic measure of what we can achieve at home? (You touched on it a bit in this video, but any additional info would be :chefskiss:)

    • @LearnCodeWithJV • 1 year ago

      I get a few questions every now and then when I run local stuff, but the setup isn't that interesting, and I don't know if I'd have much more to say than listing some specs.
      An NVIDIA RTX 3060 with 12 GB of VRAM is the main driver of how well the local models perform. For the rest, I just run Windows Subsystem for Linux on an i9 at 3.3 GHz with 64 GB of memory.