How to build a Local AI like ChatGPT using Deepseek R1 and Open WebUI (Windows 11 Version)
- Published Feb 7, 2025
Deepseek: www.deepseek.com
Ollama: ollama.com
Open WebUI: openwebui.com
Command lines for Windows:
1. Download and install Ollama: ollama.com/dow...
2. Run an Ollama model: ollama run deepseek-r1:1.5b
3. Pull an Ollama model: ollama pull deepseek-r1:7b
4. Install uv: powershell -ExecutionPolicy ByPass -c "irm astral.sh/uv/i... | iex"
5. Install Open WebUI: $env:DATA_DIR="C:\open-webui\data"; uvx --python 3.11 open-webui@latest serve
6. Open the web UI in your browser: localhost:8080
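To verify that the local model responds once Ollama is running, you can query Ollama's REST API directly from PowerShell. This is a minimal sketch, assuming the default Ollama port (11434) and the 1.5b model pulled in step 2:
# Query the local model through Ollama's REST API (default port 11434)
$body = @{ model = "deepseek-r1:1.5b"; prompt = "Hello"; stream = $false } | ConvertTo-Json
(Invoke-RestMethod -Uri "http://localhost:11434/api/generate" -Method Post -ContentType "application/json" -Body $body).response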
For more Deepseek tutorials, check out this playlist: • Deepseek and Open WebU...
For macOS/Linux users, please check out this tutorial: ua-cam.com/video/Ajm057ChUqw/v-deo.html
Thanks for your video, full of useful knowledge. How do I enable web browsing for a local installation of DeepSeek R1? I am not worried about my local DeepSeek R1 installation accessing the internet. I asked what today's date is and it answered "The date today is October 29, 2023." It says it does not have access to web browsing. I am running it via ollama and chatbox.
I was watching your geospatial analysis tutorials for my thesis. Now I have graduated and am working as a Data Analyst, and I'm back here again.
Welcome back!
I also face the localhost problem. Thanks for the tutorial, professor.
THANK YOUUUUUUU YOU’RE THE BOSS
Thx a lot for the video, works great!
but when I upload local files while chatting with the model, I get an error like "HTTP Error 403: Forbidden". The file is already uploaded in the data folder, but the ollama model just can't read it : (
Great video. Should GPU TDP be considered when selecting models?
Is it possible to download the models outside CMD? Or maybe in parts?
Great video tutorial thank you sir
Amazing tutorial, thank you! One thing I'm missing: after I've installed everything and I restart my computer, what is the fastest way to open up the deepseek model with Open WebUI again? Do I need to go to PowerShell every time, initiate the model and Open WebUI from there, and then manually go to the localhost location? Or is there a faster way, like a bookmark on the localhost?
Wait, isn't Open WebUI under construction?
See this new tutorial on how to autostart webui: ua-cam.com/video/jTddxpxCAJw/v-deo.html
@@giswqs Great! Thank you!
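For reference, one way to avoid retyping the command at every reboot (not necessarily the method shown in the linked video) is to save it as a script and drop a shortcut to it in the Windows Startup folder; the filename start-webui.ps1 is just an example:
# start-webui.ps1 -- save this file, then place a shortcut to it in the
# Startup folder (Win+R, type shell:startup) so it launches at login
$env:DATA_DIR = "C:\open-webui\data"
uvx --python 3.11 open-webui@latest serve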
All these Ollama models are downloaded and installed on drive C. Is there any way to store them on another drive?
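For anyone with the same question: Ollama reads the OLLAMA_MODELS environment variable to decide where models are stored, so pointing it at another drive before pulling models should work. A sketch, with D:\ollama\models as an example target path:
# Persist the model directory on another drive (new PowerShell sessions pick this up)
setx OLLAMA_MODELS "D:\ollama\models"
# Restart Ollama afterwards so it uses the new location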
Can we train the AI? If so, how? Thanks
I have run it successfully, thank you.
Thank you!
Thanks for the video. However, if I try to use the web UI without an internet connection, I get an error. How can I solve it?
I just uploaded a new tutorial on how to use open-webui and deepseek without Internet connection. Check it out: ua-cam.com/video/wdHRud-e9vY/v-deo.html
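One commonly reported cause (an assumption here, and possibly a different approach than the linked tutorial) is that Open WebUI tries to fetch an embedding model from Hugging Face at startup; setting HF_HUB_OFFLINE suppresses that download:
# Assumption: the offline error comes from the startup embedding-model download;
# HF_HUB_OFFLINE=1 tells it not to contact Hugging Face
$env:HF_HUB_OFFLINE = "1"
$env:DATA_DIR = "C:\open-webui\data"; uvx --python 3.11 open-webui@latest serve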
Well explained
Many thanks for your contribution... DeepSeek Janus Pro just came out... Is it possible to make a video guiding us through installing and testing it locally?
Will look into it
@@giswqs Appreciated, and happy new year!
Awesome ....
How do I open it next time? Do I need to run the command again and again on Windows?
See this new tutorial on how to autostart webui: ua-cam.com/video/jTddxpxCAJw/v-deo.html
If I close PowerShell and want to reopen it again, do I have to wait this long again to run the model?
The package installation is one-time only. It should take a few seconds to reopen the web UI.
See this new tutorial on how to autostart webui: ua-cam.com/video/jTddxpxCAJw/v-deo.html
Do I need to change a setting in Firefox? Is it because of the proxy setting? My setting right now is USE SYSTEM PROXY SETTINGS. I also tried Chrome, same problem. Should I change to no proxy? I already set up an admin account when I first entered the web UI. Many thanks
It runs locally. No need to use a proxy.
@@giswqs I keep getting: Firefox can't establish a connection to the server at localhost:8080. I had luck only once, when I first installed it and set up an admin account; after closing the website, the problem began when I tried to reopen it. Any idea what I should do? Do I need to open a terminal and run ollama deepseek?
@@giswqs This is the error when using Chrome: This site can't be reached
localhost refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Can you please make a video on how we can use DeepSeek locally as a NotebookLM for our private files and folders?
See this new tutorial: How to set up an API endpoint for local Deepseek R1 and use it with Python
ua-cam.com/video/u4KHemHvmPc/v-deo.html
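For those who want a quick start before watching: Ollama also exposes an OpenAI-compatible endpoint at localhost:11434/v1, so a chat request can be issued straight from PowerShell. A minimal sketch (the model name assumes the 1.5b variant from this tutorial):
# Call the local model through Ollama's OpenAI-compatible chat endpoint
$body = @{
    model    = "deepseek-r1:1.5b"
    messages = @(@{ role = "user"; content = "Summarize this file for me." })
} | ConvertTo-Json -Depth 5
(Invoke-RestMethod -Uri "http://localhost:11434/v1/chat/completions" -Method Post -ContentType "application/json" -Body $body).choices[0].message.content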
Can we use a GeForce RTX 3070 Ti 8GB for this?
Yes, you should be able to use the 1.5b and 7b models
Is there an easy way to uninstall everything again? I want to remove Open WebUI from a Windows 11 PC so that nothing is left on the PC.
See the instructions here to remove uv from your computer: docs.astral.sh/uv/getting-started/installation/#uninstallation
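As a sketch, the cleanup amounts to removing the uv caches and binaries plus the Open WebUI data directory; the commands below follow the uv uninstallation docs and the DATA_DIR path used in this tutorial, so double-check the paths before deleting:
# Remove uv-managed data, then uv itself (per the uv uninstallation docs)
uv cache clean
Remove-Item -Recurse -Force "$(uv python dir)"
Remove-Item -Recurse -Force "$(uv tool dir)"
Remove-Item "$HOME\.local\bin\uv.exe", "$HOME\.local\bin\uvx.exe"
# Remove the Open WebUI data directory used in this tutorial
Remove-Item -Recurse -Force "C:\open-webui\data"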
Hi, I have everything set up as per your instructions, but I noticed the response time is very slow, even for a simple hello. Do you know why that is? I have an AMD Ryzen 5950X CPU and an AMD 6800 XT GPU. For the 8b model it took 17 seconds to respond, and the 32b model is still not responding after more than a minute.
Open the Performance tab in Task Manager and check whether the GPU is being utilized when running the model.
@@giswqs Shared GPU memory goes up to 8 GB, dedicated less than that, and memory usage is 31/32 GB, almost 100%.
@@giswqs I have downloaded llama3.2, and it's using 8 GB of dedicated GPU memory and is faster than deepseek. I wonder why deepseek is using RAM instead of the GPU.
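A quick way to see whether a model is actually on the GPU (not from the video, but a standard Ollama command) is ollama ps, which reports the CPU/GPU split while a model is loaded:
# While the model is loaded, check where it is running
ollama ps
# The PROCESSOR column shows e.g. "100% GPU" or a CPU/GPU split;
# a model larger than dedicated VRAM spills into system RAM and runs much slower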
Thanks sir
Each time I start Open WebUI after closing it, it's asking me to go through the admin creation again. Am I doing something wrong? Should I not launch it using "uvx --python 3.11 open-webui@latest serve"?
You need to use the following command. The user info is saved in the DATA_DIR.
$env:DATA_DIR="C:\open-webui\data"; uvx --python 3.11 open-webui@latest serve
@@giswqs Thank you! And thank you for this video!
My pleasure
web access to deepseek
The only model equivalent to ChatGPT is the 671B, so if you go for a smaller version you are not running a local AI like ChatGPT; the smaller models are far less performant.
Nothing is preventing you from using the 671b model.
@@lahtin3n The 404 GB of required memory are.
You are right, but that Qwen2.5-Coder at 3B sure is useful.
@@himaya-relaxsleep1410 Not really, to be honest; the other models are not good. Deepseek R1 claims to provide access to an open-source ChatGPT-like model, which this title plays on, but even the 32B model falls far short of this promise and only the 671B will do. Otherwise, we already had plenty of models to use locally.
I don't really care which equals which; I'm looking for useful tools I can run locally. As I said, a 3B model that is fast and useful and can run on a GTX 1660 is useful for me. Besides, what do you expect? To imitate OpenAI's power on a cheap PC? Those beasts run on data centers, not some budget gaming PC.