Easy Open-WebUI + LM Studio Tutorial: Free & Local ChatGPT Alternative
- Published 26 Jun 2024
- In this video, we'll walk you through the simple steps to install Open WebUI, set up a dedicated environment, and connect it to LM Studio for a fully functional, locally run AI assistant. You'll learn how to:
  - Install Open WebUI and its dependencies
  - Set up a dedicated environment with Miniconda
  - Connect to LM Studio for open-source language models
  - Use your local AI assistant for text-to-speech, image generation, document Q&A, and more!

  With over 29,000 stars on GitHub, the open-source community loves Open WebUI. Join the movement and take the first step towards a more private and customizable AI experience.
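The setup steps listed above can be sketched as the following shell session (a minimal sketch, not the video's exact commands; the environment name "openwebui" and Python 3.11 are assumptions, so check the Open WebUI docs for current requirements):

```shell
# Create an isolated environment with Miniconda (the name is arbitrary)
conda create -n openwebui python=3.11 -y
conda activate openwebui

# Install Open WebUI and its dependencies from PyPI
pip install open-webui

# Start the server; the UI is then served locally (default port 8080)
open-webui serve
```

With LM Studio's local server running alongside, you then point Open WebUI's OpenAI-compatible connection at it from the admin settings, as shown in the video.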
Let us know in the comments if you'd like to see more tutorials on what you can do with Open WebUI. Don't forget to like, subscribe, and hit that notification bell for more exciting AI adventures!
Local Lab Twitter - / thelocallab_
buymeacoffee.com/thelocallab - Science & Technology
🔴 How To Run Your Llama 3.1 Models With Open WebUI Web Search Locally 👉 ua-cam.com/video/eBLvRV73xzU/v-deo.html
Thanks so much for your easy-to-understand tutorials 🙏
Thank you for the video. I finally figured it out: put the correct URLs in the correct places to connect Ollama, and then I managed to download the Llama 3.1 model.
So that solution only helps with working with documents? This can also be done directly in LM Studio...
It would be interesting to understand how to supplement a conversation with a model using Internet search; this is really important.
Can we install Open WebUI on Android, with Termux?
Hey, I have tried this. In the API key field, "none" does not work; the API key is "lm-studio".
That's a bit strange, but if it works, it works. Normally, if you're using a local OpenAI-compatible API, you don't need an API key at all, and "none" usually works for me, as demonstrated in the video.
I have the same problem.
Just here to say it worked for me: if your curl URL is "localhost:1234/v1", paste that in the first box and type "none" in the second. I did this on PC, so I hit Enter on both fields just to make sure, then saved, then watched as it gave me the verified notification.
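The URL handling described in this thread can be sketched as a tiny shell helper (a sketch under stated assumptions: LM Studio's local server listens on port 1234 by default and exposes an OpenAI-compatible API under /v1; `normalize_base_url` is a hypothetical helper for illustration, not part of either tool):

```shell
# Hypothetical helper: normalize whatever gets pasted into Open WebUI's
# OpenAI API base-URL field into the full http://host:port/v1 form.
normalize_base_url() {
  local url="$1"
  case "$url" in
    http://*|https://*) ;;     # already has a scheme, leave it alone
    *) url="http://$url" ;;    # assume a bare host:port was pasted
  esac
  case "$url" in
    */v1) ;;                   # already points at the v1 API root
    *) url="${url%/}/v1" ;;    # append the OpenAI-style /v1 suffix
  esac
  printf '%s\n' "$url"
}

normalize_base_url "localhost:1234/v1"   # → http://localhost:1234/v1
normalize_base_url "localhost:1234"      # → http://localhost:1234/v1
```

Either form resolves to the same endpoint; the key field next to it can then be "none" or "lm-studio" depending on your setup, as the comments above note.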
How do I run it a second time?
Activate the conda environment and run "open-webui serve". That's it.
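On later sessions that boils down to two commands (assuming the conda environment was named "openwebui" during setup; the name is an assumption, so use whatever you created):

```shell
conda activate openwebui
open-webui serve   # the UI comes back up locally (default port 8080)
```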
I don't get it. Why do we need Open WebUI if the prompt chat can be done directly in LM Studio?
Personally, I was looking for this exact video because I wanted to be able to get LLMs at whatever quality (/quant) and size I want, know whether my machine can run them or not, and download them. Besides, I have Ollama connected to it, so what's wrong with having two quality sources to pull from? I can test a model in LM Studio, and if I like it, I move it to Open WebUI and tweak it to what I need it to be. One of the many reasons, lol.
@@JohnSmith-iv5zy It may be a good fit for the open-source models, but for the OpenAI API it's meaningless. You can add the API key directly instead of getting a key via LM Studio.