I wish I could easily use Ollama together with the power of the text-generation-webui project.
Thanks for the content! I enjoyed an OpenAI / ChatGPT video for the first time in months; simple and informative, it's really appreciated. I've been searching for a way to replace the OpenAI API token with a model running locally and point my apps at it, with zero luck. All I want is to run a chatbot on WordPress with a local LLM using Ollama. Good thing dreaming is still free :)
Nice, the good news is that the open models keep getting more performant!
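For what it's worth, Ollama exposes an OpenAI-compatible endpoint, so the swap can be as simple as changing the base URL in whatever client your app uses. A minimal sketch in Python (assuming Ollama is running locally and you've pulled a model; the model name here is a placeholder):

```python
# Minimal sketch: point the OpenAI client at a local Ollama instance.
# Assumes Ollama is running and a model has been pulled, e.g. `ollama pull llama3`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # the client requires a key, but Ollama ignores it
)

response = client.chat.completions.create(
    model="llama3",  # placeholder; use whatever model you pulled
    messages=[{"role": "user", "content": "Hello from my WordPress chatbot!"}],
)
print(response.choices[0].message.content)
```

A WordPress chatbot plugin that lets you override the API base URL should, in principle, be able to hit that same endpoint instead of OpenAI.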
Thanks for the video! Got the model working in the Colab webui, but the Cloudflare link is broken somehow. Thanks for all your work making this video. Cheers
Glad it was useful
Great tutorial. Is there a way to run Ngrok and LiteLLM on the Google Colab side and then connect to it from a local machine? That would make Open Interpreter powerful on any machine.
Probably. I know Google Colab doesn't like running servers and prioritises interactive notebooks, so I'd guess it's a bit of a moving target. I've seen a bunch of Stable Diffusion notebooks they've disabled, but the Ngrok + LiteLLM combo sounds simple enough that it would be worth a try.
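If anyone wants to experiment, the Colab side might look roughly like this; a sketch, not something I've tested. It assumes pyngrok is installed, you have an ngrok auth token, and a LiteLLM proxy is already listening locally. The token and the port 8000 are placeholders:

```python
# Rough Colab-side sketch: expose a locally-running LiteLLM proxy via an
# ngrok tunnel so a machine outside Colab can reach it.
# Assumes `pip install pyngrok` and a proxy already listening on :8000.
from pyngrok import ngrok

ngrok.set_auth_token("YOUR_NGROK_TOKEN")  # placeholder; get one from ngrok.com
tunnel = ngrok.connect(8000, "http")      # open an HTTP tunnel to the proxy port
print(tunnel)  # the public forwarding URL appears here
```

On the local machine you'd then point Open Interpreter at the printed public URL as its API base, since it supports custom endpoints.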
Thanks, great!
What's the configuration of your system?
An NVIDIA RTX 3060 with 12 GB is the main driver of how well the local models perform. For the rest I just run Windows Subsystem for Linux on an i9 at 3.3 GHz with 64 GB of memory.
Thanks for sharing, JV! Can you share your local setup too (in the vids or in a linked video) so people like me can get a realistic measure of what we can achieve at home? You touched on it a bit in this video, but any additional info would be :chefskiss:
I get a few questions every now and then when I run local stuff, but the setup isn't that interesting and I don't know if I'd have much more to say beyond listing some specs. An NVIDIA RTX 3060 with 12 GB is the main driver of how well the local models perform. For the rest I just run Windows Subsystem for Linux on an i9 at 3.3 GHz with 64 GB of memory.