Use Your Self-Hosted LLM Anywhere with Ollama Web UI

  • Published Oct 2, 2024

COMMENTS • 231

  • @kameit00
    @kameit00 7 months ago +88

    Just in case you missed it, your auth token was visible since its position on your screen changed. You may want to regenerate it. Thanks for posting your videos!

    • @decoder-sh
      @decoder-sh  7 months ago +68

      Good eye! That was one of the things I told myself I wouldn’t do as I started this process, and of course it’s the first thing I did 😰 But don’t worry, I regenerated it before publishing the video as part of good hygiene.
      Stay tuned to see what other PII I leak 😅

    • @proterotype
      @proterotype 7 months ago +4

      Good man @kameit00

    • @kornflakesss
      @kornflakesss 7 months ago +3

      Such a good person

    • @JarppaGuru
      @JarppaGuru 6 months ago

      If we can generate it, it won't matter? LOL. There is nothing good about keys; they're only for tracking

    • @burncloud-com
      @burncloud-com 18 days ago

      @@decoder-sh Great share, insightful as always

  • @steftrando
    @steftrando 7 months ago +45

    See, these are the types of YouTube tech videos I like to watch. This guy is clearly a very knowledgeable senior dev, and he puts more priority on the tech than on a fancy YouTuber influencer setup.

    • @decoder-sh
      @decoder-sh  7 months ago +2

      Thank you for watching, and for the kind words! Don't expect to see me on a sound stage anytime soon 😂

    • @SanctuaryGardenLiving
      @SanctuaryGardenLiving 7 months ago +4

      Like how YouTube used to be.
      Less loud intro music.
      Less advertising.
      Fewer sponsor segments.
      Fewer clickbait titles.
      Less hiding the actual valued info among wasted time.
      Makes sense though... If you're explicit about what your video is about, the people interested will watch; but if you make it a mystery, then hopefully anyone who thinks it might be helpful will click, and then they have to parse through... Fack, "don't recommend (Shii) channel!!"

    • @A.eye_101
      @A.eye_101 24 days ago

      @@decoder-sh Curious why you didn't show the full video of the ngrok setup? You skipped the warning part. Why?

  • @imadsaddik
    @imadsaddik 7 months ago +25

    Oh man, I don't know how and why YouTube recommended your video, but I am very happy that it did. I enjoyed this video a lot

    • @decoder-sh
      @decoder-sh  7 months ago

      Happy to hear it, welcome to my channel!

    • @xXWillyxWonkaXx
      @xXWillyxWonkaXx 7 months ago

      I second that. Straight to the point, very sharp with the info, thank you bro

    • @decoder-sh
      @decoder-sh  6 months ago

      @@ts757arse I'm thrilled to hear that! I'd love to hear more about your business if you're willing to share

    • @decoder-sh
      @decoder-sh  6 months ago

      @@ts757arse Looks like it got nuked :( I need to set up a Google org email with my domain so I can talk to viewers 1:1

    • @decoder-sh
      @decoder-sh  6 months ago

      @@ts757arse Ah so it's like pen testing simulation and planning? Very cool, that's a necessary service. Self-hosting an uncensored model seems like the perfect use case.
      Nuke test still fails, but I finally set up "david at decoder dot sh"!

  • @bndy0
    @bndy0 7 months ago +4

    Ollama WebUI has been renamed to Open WebUI, video tutorial on how to update would be helpful!

    • @decoder-sh
      @decoder-sh  7 months ago +1

      Looks like it's the same codebase, but I could possibly go over the migration? It appears to be just a couple of commands: github.com/open-webui/open-webui?tab=readme-ov-file#moving-from-ollama-webui-to-open-webui
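
      Roughly what that migration looks like, sketched from the linked README (exact steps and volume names may differ; check the README before running):

        # remove the old container but keep its data volume
        docker rm -f ollama-webui
        docker pull ghcr.io/open-webui/open-webui:main
        # copy the old volume's data into a new one
        docker volume create open-webui
        docker run --rm -v ollama-webui:/from -v open-webui:/to alpine ash -c "cd /from && cp -av . /to"
        # start the renamed project against the migrated volume
        docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main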

  • @SashaBaych
    @SashaBaych 7 months ago +8

    You are really good at explaining things! Thank you so much. No useless hype, just plain useful hands-on information that is completely understandable.

    • @decoder-sh
      @decoder-sh  7 months ago +1

      Thank you so much for watching and leaving a comment! I’ll continue to do my best to make straightforward and easy to understand videos in the future 🫡

  • @matthewnohrden7209
    @matthewnohrden7209 2 months ago +1

    THIS IS SO COOL. I've been looking for a way to do this for a couple of months now

  • @TheColdharbour
    @TheColdharbour 7 months ago +4

    Really enjoyed this video too! Complete success, really well paced and carefully explained! Looking forward to the next one (open source LLMs) - thanks for the great tutorials! :)

  • @Bearistotle_
    @Bearistotle_ 8 months ago +4

    Amazing tutorial, all the steps are broken down and explained very well

  • @keylanoslokj1806
    @keylanoslokj1806 7 months ago +2

    Great info. What kind of beast workstation/server do you need to set up, though, to run your own GPT?

    • @decoder-sh
      @decoder-sh  7 months ago +2

      Depends what your needs are! If you just want to use a small model for simple tasks, any GPU from the last 5(?) years should be fine, or a beefy CPU. I’m using an M1 MacBook Pro, though I’ve also got requests for Linux demos and would be happy to show you how models run on a 2080 Ti

  • @Fordtruck4sale
    @Fordtruck4sale 7 months ago +1

    How does this handle multiple users wanting to load multiple different models at the same time? FIFO?

  • @iamwhoiam7057
    @iamwhoiam7057 4 days ago

    I am just a beginner in all things AI, and voila, I implemented this video successfully! So proud of that, and my AI is running on my phone. I feel so empowered now. Thanks for a great video.

  • @matthewarchibald5118
    @matthewarchibald5118 5 months ago +1

    Would it be possible to use Tailscale instead of ngrok?

    • @decoder-sh
      @decoder-sh  5 months ago +1

      If you're just using it for yourself, or with other people that you trust to share a VPN with, then Tailscale definitely works! In that case your UI address will either be localhost or whatever your Tailscale DNS name is. I use Tailscale myself for networking my devices
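
      A minimal sketch of that setup, assuming Open WebUI is already serving on port 3000 of the host:

        # on the machine running the web UI, join your tailnet
        sudo tailscale up
        # note its tailnet address (or use its MagicDNS name)
        tailscale ip -4
        # then from your phone, with the Tailscale app connected, browse to
        #   http://<that-100.x.y.z-address>:3000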

  • @Enkumnu
    @Enkumnu 4 months ago

    Very interesting! However, can we configure Ollama on a specific port? The default is localhost, but how do we use a server with a specific IP address (e.g., 192.168.0.10)?

  • @eric.o
    @eric.o 8 months ago +2

    Excellent video, super easy to follow

  • @Wade_NZ
    @Wade_NZ 5 months ago

    My AV (Bitdefender) goes nuts and won't allow the ngrok agent to remain installed on my PC :(

  • @Mr.Morgan.
    @Mr.Morgan. 2 months ago

    Thank you for the video! I have one issue: when I try to chat with my model through my phone, using OpenWebUI and ngrok, the model generates an answer endlessly after one or maybe two completed answers. But if I do this from another PC, all works as it should. Does anyone know the solution for this?

  • @JarppaGuru
    @JarppaGuru 6 months ago

    Yes, now we can use AI to answer what it was trained on. This is: question, answer this. We already had Jarvis with voice, LOL. Now we're back to text, LOL

  • @kevinfox9535
    @kevinfox9535 5 months ago

    I used the web UI to run Mistral but it's very slow. I have a 3050 with 6 GB VRAM and 16 GB RAM. However, I can run the Ollama Mistral model fine from the command prompt.

  • @robwin0072
    @robwin0072 1 month ago

    Will one lose the privacy of the LLM when using ngrok? In other words, will the prompts and responses be exposed to external servers, the Internet?

  • @NoHack_Know_How
    @NoHack_Know_How 2 months ago

    Hello, how can I run Ollama for my internal network? I don't really need outside access yet; can you explain or point me in that direction, please.

  • @manassingh5351
    @manassingh5351 3 months ago

    Great video! I have a question: after getting a link via ngrok, is the whole AI model still running locally, or is the data going to any other server? That is my main concern. Thanks again

  • @ISK_VAGR
    @ISK_VAGR 1 month ago

    Man. That is amazing. It took me 10 min to set this up and I am not a coder. Thanks. That is bonkers. I've got 3 immediate questions: Is it safe in terms of the information that one loads there? Can one customize the logos and the appearance? Can one use it for personal and commercial purposes?

  • @annbjer
    @annbjer 7 months ago +2

    Really cool stuff, thanks for keeping it clear and to the point. It’s awesome that experimenting with local and custom models is becoming more accessible. I’m definitely planning to give it a try and hope to design my own custom interfaces someday. Just subbed and looking forward to learning more!

    • @decoder-sh
      @decoder-sh  7 months ago

      I look forward to seeing what you create! I have some really fun videos planned, thanks for the sub :)

  • @skatemore33
    @skatemore33 2 months ago

    Hey man great tutorial. For some reason on my phone, I can't access my chat history. I can only start a new chat each time. Do you know how to fix this?

  • @VipulAnand751
    @VipulAnand751 2 months ago +1

    thanks man

  • @anand83r
    @anand83r 7 months ago +3

    Very useful, simple to understand, and very focused on the subject 👌. It's hard to find Americans like this who deliver the message without sugarcoating or too much filler content. Good job 👌. People, this person is worth supporting 👏

    • @decoder-sh
      @decoder-sh  7 months ago

      Thank you for your support!

  • @SODKGB
    @SODKGB 7 months ago +1

    I would like to make changes to the provided interface, for example hide/remove the left menu bar, change colors, change fonts, or add some graphics. Any pointers in the right direction would be great. Thinking I might need to download the web UI and edit the source before starting Docker and Ollama?

    • @decoder-sh
      @decoder-sh  7 months ago +2

      The UI already allows you to show/hide the left menu (there's a tiny button that's hard to see, but it's there). Beyond that, yes you'd need to download their repo and manually edit their frontend code. Let me know how it turns out!
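
      A rough sketch of that workflow, assuming the repo's standard Docker build (their README documents the exact steps):

        git clone https://github.com/open-webui/open-webui.git
        cd open-webui
        # edit the frontend source (styles, components) under src/, then build your own image
        docker build -t custom-webui .
        docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name custom-webui custom-webui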

    • @SODKGB
      @SODKGB 7 months ago

      @decoder-sh It's been a lot of hacking. At least Ollama for Windows in combination with Docker is fast and easy. There's potential to use Python to send and receive content from the local server and modify it to accept variables via GET or POST.

  • @JenuelDevTutors
    @JenuelDevTutors 4 months ago

    Hi! I want to deploy this on my own server, how do I do that?

  • @johnmyers3233
    @johnmyers3233 5 months ago

    The file downloaded seems to be coming up as some malicious software

  • @hypergraphic
    @hypergraphic 6 months ago +1

    Great walk-through, although I think I will just install it on a VPS instead.

    • @decoder-sh
      @decoder-sh  6 months ago

      A VPS also works! Which would you use?

  • @luiferreira8437
    @luiferreira8437 8 months ago +2

    Thanks for the video. I would like to know if it is possible to do this with a RAG system built on Ollama and also add a diffusion model (like Stable Diffusion) to generate images

    • @decoder-sh
      @decoder-sh  8 months ago +2

      This is my first time hearing someone talk about combining RAG with image generation - what kind of use case do you have in mind?

    • @luiferreira8437
      @luiferreira8437 7 months ago +2

      @@decoder-sh The idea that I have is to improve model accuracy on a certain topic, while having the option to generate images if needed. Some use case would be like writing a book, keeping consistent character descriptions and images.
      I actually didn't have both in mind simultaneously, but it could be interesting

    • @decoder-sh
      @decoder-sh  7 months ago +2

      That seems a bit more like a knowledge graph where you update connections or attributes of entities as the model parses more text. I'll be covering some RAG topics in the near future and would like to eventually get to knowledge graphs and their use with LLMs

  • @nachesdios1470
    @nachesdios1470 6 months ago

    This is really cool, but for anyone who wants to try this out, be careful when exposing services on the internet:
    - Check for updates regularly
    - Try to break the app yourself first before exposing it
    - I would highly recommend monitoring activity closely

  • @thegamechanger3793
    @thegamechanger3793 5 months ago +1

    Do you need a good CPU/RAM to run this? Just trying to see whether installing Docker/the LLM/ngrok requires a high-end system?

    • @decoder-sh
      @decoder-sh  5 months ago

      It depends on the model you want to run. Docker & ngrok don't require many resources at all, and I've seen people run (heavily quantized) 7B models on a Raspberry Pi. I'm using an M1 MacBook, but it's overkill for smaller models.

    • @peterparker5161
      @peterparker5161 4 months ago

      You can run Phi-3 mini quantized on an entry-level laptop with 8 GB RAM. If you have 4 GB VRAM, the response will be very quick.

  • @sitedev
    @sitedev 7 months ago +1

    This is nuts. Imagine if you could (you probably can) connect this with a RAG system running on the local machine which contains a business's entire knowledge base and then deploy it to your entire sales/support team.

    • @decoder-sh
      @decoder-sh  7 months ago +3

      You totally can! Maybe as a browser extension that integrates with gmail? I'm planning a series on RAG now, and may eventually discuss productionizing and use cases as well. Stay tuned 📺

    • @sitedev
      @sitedev 7 months ago +1

      @@decoder-sh Cool. I saw another video yesterday discussing using very small LLMs fine-tuned for specific function calling - I can imagine this would also be a neat method of extending the local AI to perform other tasks too (replying to requests via email etc). Have you experimented with local LLMs and function calling?

  • @yashkaul802
    @yashkaul802 4 months ago

    Please make a video on deploying this on Hugging Face Spaces or AWS ECS. Great video!

  • @BogdanTestsSoftware
    @BogdanTestsSoftware 7 months ago

    What hardware do I need to run this container? GPU?
    Ah, found it: "WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode."

  • @mernik5599
    @mernik5599 5 months ago

    Please! How can I add function calling to this Ollama-served web UI? And is it possible to add internet access, so that if I ask for today's news highlights it can give a summary of news from today?

    • @decoder-sh
      @decoder-sh  5 months ago

      I'm not sure if open-webui supports function calling from their UI, unfortunately

  • @JacobLehman-ov4eu
    @JacobLehman-ov4eu 4 months ago

    Thanks, very helpful and simple. I'm very new to all of this (and coding) but it really fascinates me. I would love to be able to set up an LLM with RAG and use it in the web UI so that my coworkers could test projects. I will get there, and your content is very helpful!

  • @bhagavanprasad
    @bhagavanprasad 4 months ago

    Question: The Docker image is running, but the web UI is not listing any models that are installed on my PC.
    How do I fix it?

    • @riseupallday
      @riseupallday 4 months ago

      Download any model of your choice using ollama run name_of_model

  • @soyhenryxyz
    @soyhenryxyz 7 months ago +1

    For cloud hosting of the Ollama web UI, which services do you suggest?
    Additionally, are there any services you recommend for API use, to avoid installing and storing large models?
    Appreciate any insight here, and great video!

    • @simonbrennan7283
      @simonbrennan7283 7 months ago

      Most people considering self hosting would be doing so because of privacy and security concerns, which I think is the target audience for this video. Cloud hosting totally defeats the purpose.

    • @decoder-sh
      @decoder-sh  7 months ago

      I don't have any recommended services at the moment, but I would like to research and create a video reviewing a few of the major providers in the near future. Ditto for API providers; I've been mostly focused on self-hosting at the moment. Some that come to mind are OpenAI (obviously), Mistral (mistral.ai/product/), and one that was just announced is Groq (wow.groq.com/)

  • @skylarksparrow932
    @skylarksparrow932 2 months ago

    Things worked for me on my localhost, up to the stage where it asked me to select a model. No model showed up. I have Phi installed. Can someone help?

    • @Kalsriv
      @Kalsriv 2 months ago

      Try changing the port from 3000 to 4000 when creating the container. Worked for me

  • @leandrogoethals6599
    @leandrogoethals6599 7 months ago +1

    Oh thanks man, I was tired of going through RDP with port forwarding to where it ran locally ;)

    • @decoder-sh
      @decoder-sh  7 months ago

      I actually do something a little similar - I use tailscale as a VPN into my home network, then I can easily access whatever services are running. Ngrok is great for a one-off, but I use the VPN daily since I don't need to share it with anyone else.

    • @leandrogoethals6599
      @leandrogoethals6599 7 months ago

      @@decoder-sh But don't you lose the ability to use the foreign network when connecting, when not using virtual adapters?
      Which is a pain on phones

  • @adamtechnology3204
    @adamtechnology3204 7 months ago

    How can I see the hardware requirements for each model? Even Phi doesn't give me a response back after minutes of waiting; I have a really old laptop XD

  • @rajkumar3433
    @rajkumar3433 6 months ago

    What would the deployment command be on an Azure Linux machine?

  • @itsban
    @itsban 2 months ago

    If you are into the Apple ecosystem, you can use Enchanted as the UI. It is native and available on Mac.

    • @decoder-sh
      @decoder-sh  2 months ago

      I'll check that out, thanks for the tip

  • @danteinferno8983
    @danteinferno8983 6 months ago

    Hi,
    can we have a local AI model installed on our Linux VPS and then use it via API to integrate it into our WordPress website or something like that?

  • @YorkyPoo_UAV
    @YorkyPoo_UAV 6 months ago

    At first I thought it was great, but since I turned a VPN on and then off, I can't get models to load on the remote page. Also, every time I start an instance a new code is generated, so I can't keep using the same URL.

  • @kashifrit
    @kashifrit 4 months ago

    ngrok keeps changing the link every time it gets started up?

    • @decoder-sh
      @decoder-sh  4 months ago

      Yes, each session's link will be unique. It may be possible to have consistent links if you pay for their service
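
      For what it's worth, ngrok also supports reserved domains (at the time of writing they include one on the free plan, claimed in their dashboard); a sketch with a placeholder name:

        ngrok http 3000 --domain=your-reserved-name.ngrok-free.app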

  • @khalidkifayat
    @khalidkifayat 7 months ago

    Great one. A few questions here:
    1. Can you throw some light on input/output token consumption to/from the LLM?
    2. How can we give this app to a client as a service provider?
    Thank you

  • @dannish2000
    @dannish2000 3 months ago

    Are the commands the same if I am using Ubuntu Linux under WSL?

    • @decoder-sh
      @decoder-sh  3 months ago +1

      Linux and Mac are both Unix-like, so I imagine they would be the same

  • @VimalMeena7
    @VimalMeena7 7 months ago

    Everything is working fine locally, but when I run it on the internet using ngrok it shows "Ollama WebUI Backend Required", although my backend is running ... on the local system I am getting responses to my queries. Please help, I am not able to resolve it.

  • @dhmkkk
    @dhmkkk 6 months ago +1

    What a great tutorial, please keep on making more content!

    • @decoder-sh
      @decoder-sh  6 months ago

      Thanks for watching, I certainly will!

  • @NevsTechBits
    @NevsTechBits 3 months ago

    Great info! Commenting to show support! Keep going my guy!

    • @decoder-sh
      @decoder-sh  3 months ago

      Thanks for your support, I’m looking forward to making more!

  • @RamseyLEL
    @RamseyLEL 8 months ago +1

    Solid, detailed, and thorough video tutorial

  • @scott701230
    @scott701230 7 months ago +1

    Awesomeness! Thank you for the Tutorial!

    • @decoder-sh
      @decoder-sh  7 months ago

      My pleasure, thanks for watching!

  • @hmdz150
    @hmdz150 7 months ago

    This is amazing, does the Ollama web UI work with PDF files too?

    • @decoder-sh
      @decoder-sh  7 months ago

      It does have document uploading abilities, but I haven’t looked at their code to see how that actually works under the hood. I believe it does do some naive parsing and embedding generation. Try uploading a document and asking a question about it!

  • @Shivam-bi5uo
    @Shivam-bi5uo 7 months ago

    Can you help me? If I want to host a fine-tuned LLM, how can I do so?

  • @acan.official
    @acan.official 17 days ago

    Where do I put the first code? I'm a complete beginner

    • @acan.official
      @acan.official 16 days ago

      Found it. But why does the web address change every time? Can I make it fixed or customize it or something?

  • @JT-tg9uo
    @JT-tg9uo 7 months ago +1

    Everything works, but I can't select a model. I can access it from my phone, etc., but cannot select a model.

    • @decoder-sh
      @decoder-sh  7 months ago

      It may be that you don't have any models installed yet? I didn't actually call that out in the video, so that's my bad! In the web UI go to Settings > Models, and then type in any of the model names you see here: ollama.ai/library ("phi" is an easy one to start with). Let me know if that was the issue! Thanks for watching.
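
      Equivalently from the terminal, a minimal sketch (the web UI lists whatever models the Ollama server already has):

        ollama pull phi   # download the model
        ollama list       # confirm it's installed, then refresh the web UI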

    • @JT-tg9uo
      @JT-tg9uo 7 months ago

      Thank you, sir, I'll give it a whirl

    • @JT-tg9uo
      @JT-tg9uo 7 months ago

      Yeah, it says an Ollama/WebUI server connection error when trying to pull phi or any other. But other than that it works from my phone, etc.

    • @JT-tg9uo
      @JT-tg9uo 7 months ago +1

      Ollama works fine from the terminal with phi, etc. Maybe Docker is not configured right. I've never used Docker before.

  • @spencerfunk6697
    @spencerfunk6697 5 months ago

    Integration with Open Interpreter would be cool

  • @razorree
    @razorree 6 months ago

    another 'ollama' tutorial....

    • @decoder-sh
      @decoder-sh  6 months ago

      Guywhowatchesollamatutorialssayswhat

  • @arquitectoqasdetautomatiza5373
    @arquitectoqasdetautomatiza5373 6 months ago

    You're the absolute best, bro, please keep uploading videos

  • @paoloavogadro7329
    @paoloavogadro7329 7 months ago +1

    Very well done, quick, clean, and to the point.

    • @decoder-sh
      @decoder-sh  7 months ago

      I'm glad you think so, thanks for watching!

  • @shanesteven4578
    @shanesteven4578 7 months ago

    Would love to see what you could do with something like the ‘Arduino GIGA R1 WiFi’ with a screen, and other small devices such as the ESP32 running Meshtastic: LLMs accessible on such devices, with the LLMs limited to specific subjects such as emergency services, medical, logistics, finance, administration, sales & marketing, radio communications, agriculture, math, etc.

    • @decoder-sh
      @decoder-sh  7 months ago

      As long as it has a screen and an internet connection, you can use this method to interact with your LLMs on the device!

  • @iseverynametakenwtf1
    @iseverynametakenwtf1 7 months ago

    This is cool. Might see if I can get LM Studio to work. Why not host your own server too?

  • @SanctuaryGardenLiving
    @SanctuaryGardenLiving 7 months ago

    So excited to find your channel... looking forward to more videos. I'm a total noob so feel a bit like I'm floating out in space.

  • @OgeIloanusi
    @OgeIloanusi 29 days ago

    Thank You!!

  • @samarbid13
    @samarbid13 5 months ago

    Ngrok is considered a security risk because it is closed-source, leaving users uncertain about how their data is being handled.

    • @decoder-sh
      @decoder-sh  5 months ago +1

      Fair enough! One could also just use a VPN of their choice (including self-hosted Wireguard) to connect their phone to the host device, and reach the webui on localhost

  • @Soniboy84
    @Soniboy84 7 months ago

    You forgot to mention that you need a chunky computer at home running those models, potentially costing $1000s

    • @decoder-sh
      @decoder-sh  7 months ago +1

      It doesn't hurt! But even small models like Phi are pretty functional and don't have very high hardware requirements. Plus, if you're a gamer then you've already got a chunky GPU, and LLMs give you one more thing you can use it for 👨‍🔬

  • @ArtificialChange
    @ArtificialChange 6 months ago

    My Ollama won't install models and I don't know where to put them; there's no folder called models

    • @decoder-sh
      @decoder-sh  6 months ago

      Once you have ollama installed, it should manage the model files for you (you shouldn't need to put them anywhere yourself). If `ollama pull [some-model]` isn't working for you, you may need to re-install ollama

    • @ArtificialChange
      @ArtificialChange 6 months ago

      @@decoder-sh I will give it another try. I want to know where to put my own models

  • @PhenomRom
    @PhenomRom 7 months ago

    Why didn't you put the commands in the description?

    • @decoder-sh
      @decoder-sh  7 months ago

      YouTube doesn't support code blocks in the description, so I spent the day writing code to generate a static site for each video, so I can post the code there. Enjoy!
      decoder.sh/videos/use-your-self_hosted-llm-anywhere-with-ollama-web-ui

    • @PhenomRom
      @PhenomRom 7 months ago

      Oh wow. Thank you @@decoder-sh

    • @decoder-sh
      @decoder-sh  7 months ago

      @@PhenomRom My pleasure! Might do a video about how to make my website too 😂
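
      For anyone skimming, the gist of the two commands the video runs is roughly the following (the site above has the exact, current versions):

        # start Ollama Web UI in Docker, talking to Ollama on the host
        docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
        # expose the UI to the internet
        ngrok http 3000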

  • @ollimacp
    @ollimacp 7 months ago

    Splendid tutorial. Thanks a lot :) You got a like and a sub from me!
    And if I write a custom model (MemGPT + CrewAI) and want to use the WebUI, would it be better to try to get the model into an Ollama Modelfile, or just expose the model via an API which mimics the standard (OpenAI)?

    • @decoder-sh
      @decoder-sh  7 months ago

      Thanks for watching! It looks like MemGPT isn't a model as much as a library that uses models (via OpenAI and their own endpoint) to act as agents. So a Modelfile wouldn't work, but it does look like they have some instructions for connecting to a UI (Oobabooga in this case: memgpt.readme.io/docs/local_llm). Best of luck, let us know how it goes!

  • @ArtificialChange
    @ArtificialChange 6 months ago

    Remember, your Docker may look different

  • @shobhitagnihotri416
    @shobhitagnihotri416 7 months ago

    I am not able to understand the Docker part, maybe some glitch on my MacBook. Is there any way we can do it without the use of Docker?

    • @decoder-sh
      @decoder-sh  7 months ago

      It will be a bit messier, but they do provide instructions for non-Docker installation. Docker Desktop should just be a .dmg you open to install: github.com/ollama-webui/ollama-webui?tab=readme-ov-file#how-to-install-without-docker

  • @adamtechnology3204
    @adamtechnology3204 7 months ago

    This was really beneficial, thank you a lot!

  • @gold-junge91
    @gold-junge91 7 months ago

    On my root server it's not working; it looks like the Docker container has no access to Ollama, and the troubleshooting section doesn't help

    • @decoder-sh
      @decoder-sh  7 months ago

      Do you have any logs that you could share? Is Ollama running? What URL is listed when you go into the web UI settings and look at the "Ollama API URL"?

  • @albertlan
    @albertlan 7 months ago

    Anyone know how to access Ollama via its API like you would with ChatGPT? I got the web UI working; would love to be able to code on my laptop and utilize the remote PC's GPU

    • @decoder-sh
      @decoder-sh  7 months ago +1

      I find that the easiest way to use services on another machine is just to SSH into it. So if you have Ollama serving its API on your beefy machine on port 11434, then from your local machine you’d run ssh -L 11434:localhost:11434 beefy-user@beefy-local-ip-address. This assumes you have sshd running on your other machine, but it’s not hard to set up
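
      A minimal sketch of that flow end to end (placeholder names from the comment above; Ollama's HTTP API listens on 11434 by default):

        # terminal 1: forward local port 11434 to Ollama on the remote machine
        ssh -L 11434:localhost:11434 beefy-user@beefy-local-ip-address

        # terminal 2: the API now answers on localhost
        curl http://localhost:11434/api/generate -d '{
          "model": "phi",
          "prompt": "Why is the sky blue?",
          "stream": false
        }'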

    • @albertlan
      @albertlan 7 months ago

      @@decoder-sh How did you know my user name lol. I finally got it working through nginx, but the speed was too slow to be useful, unfortunately

  • @michamohe
    @michamohe 7 months ago

    I'm on a Windows 11 machine, is there anything I would do differently with that in mind?

    • @decoder-sh
      @decoder-sh  7 months ago

      Ollama is working on Windows support now! x.com/ollama/status/1757560242320408723
      For now, you can still run Ollama on Ubuntu in Windows via WSL.
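
      A sketch of the WSL route, assuming Ubuntu as the distro (the install-script URL is Ollama's standard Linux installer):

        # from an admin PowerShell, then reboot if prompted
        wsl --install -d Ubuntu
        # inside the Ubuntu shell:
        curl -fsSL https://ollama.com/install.sh | sh
        ollama run phi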

  • @Candyapplebone
    @Candyapplebone 6 months ago

    Interesting. You really didn’t have to code that much to actually get it all up and running.

    • @decoder-sh
      @decoder-sh  6 months ago

      Yes indeed! There will be more coding in future videos, but in the beginning I’d like to show what’s possible without much coding experience

  • @jayadky5983
    @jayadky5983 8 months ago

    Hey, good work mate! I wanted to know if we could self-host our Ollama API through ngrok just as we hosted the WebUI? I am using a server to run Ollama and I have to SSH in every time to use it. So, can we instead forward the Ollama localhost API through ngrok and then use it on my machine?

    • @decoder-sh
      @decoder-sh  8 months ago +2

      Yeah you could definitely do that! Let me know how it works out for you :)
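
      A sketch of that, tunneling the Ollama port (11434) instead of the UI port; the host-header rewrite may be needed so Ollama accepts the forwarded requests:

        ngrok http 11434 --host-header="localhost:11434"
        # then point any Ollama client at the printed https URL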

  • @WolfeByteLabs
    @WolfeByteLabs 4 months ago

    Thanks so much for this video man. Awesome entry point to local + private LLMs

    • @decoder-sh
      @decoder-sh  4 months ago

      My pleasure, thanks for watching!

  • @gabrielkasonde367
    @gabrielkasonde367 7 months ago

    Please add the commands to the description, thank you.

  • @alizaka1467
    @alizaka1467 7 months ago

    Can we use GPT models with this? Thanks. Great video as always

    • @decoder-sh
      @decoder-sh  7 months ago +1

      Do you mean OpenAI? Yes you can add your OpenAI API key to the webui in Settings. Sorry for not showing that!

  • @PublikSchool
    @PublikSchool 3 months ago

    Great video! It was the most seamless of any video I've watched

    • @decoder-sh
      @decoder-sh  3 months ago

      Thank you for watching!

  • @ANIMATION_YT520
    @ANIMATION_YT520 7 months ago

    Bro, how do you connect it to the internet for free using a domain host?

    • @decoder-sh
      @decoder-sh  7 months ago

      Do you mean how would you use a custom domain with ngrok for free? I'm not sure if that's possible, that's probably something they'd make you pay for.

  • @bhagavanprasad
    @bhagavanprasad 4 months ago

    Excellent. Thank you

  • @Rambo51TV
    @Rambo51TV 7 months ago

    Can you show how to use it offline with personal information?

    • @decoder-sh
      @decoder-sh  6 months ago +1

      I will have videos about this coming soon!

  • @chrisumali9841
    @chrisumali9841 7 months ago

    Thanks for the demo and info, have a great day

  • @UnchartedWorlds
    @UnchartedWorlds 8 months ago

    Thank you, keep it up! Sub made

  • @anthony.boyington
    @anthony.boyington 7 months ago

    Very good video and easy to follow.

  • @photize
    @photize 7 months ago

    Great video, but macOS? What happened to the majority vote? You lost me there, not even a mention for the non-lemming Nvidia crew!

    • @decoder-sh
      @decoder-sh  7 months ago

      Fair enough, I’d be happy to do some videos for Linux as well! Thanks for watching

    • @photize
      @photize 7 months ago

      @@decoder-sh I'm presuming the majority to be Windows; it amazes me how many AI guys have CrApple when in the real world many are using gaming machines for investing time in AI. (Not me, I'm just an Apple hater)

  • @baheth3elmy16
    @baheth3elmy16 7 months ago

    I really liked your video, I subscribed of course. I don't think Ollama adds much with the current abundant services available for mobile.

    • @decoder-sh
      @decoder-sh  7 months ago

      Thanks for watching and subscribing! What are your current favorite LLM apps?

    • @baheth3elmy16
      @baheth3elmy16 7 months ago

      @@decoder-sh I use Oobabooga, sometimes on its own and sometimes with SillyTavern as a front end, and Faraday, for local LLMs

  • @collinsk8754
    @collinsk8754 7 months ago +1

    Excellent tutorial 👏👏!

    • @decoder-sh
      @decoder-sh  7 months ago

      I’m glad you enjoyed it!

  • @rgm4646
    @rgm4646 4 months ago

    This works great! Thanks!!

  • @ronaldokun
    @ronaldokun 6 months ago

    Thank you for the exceptional tutorial!

      @decoder-sh  6 months ago
      @decoder-sh  6 місяців тому

      My pleasure, thanks for subscribing!

  • @aolowude
    @aolowude 6 months ago

    Worked like a charm. Great walkthrough!

  • @fedorp4713
    @fedorp4713 7 months ago

    Wow, hosting an app on a free hostname from your home, it's just like 2002.

    • @decoder-sh
      @decoder-sh  7 months ago

      Next I'll show you how to use an LLM to create your very own ringtone

    • @fedorp4713
      @fedorp4713 7 months ago

      @@decoder-sh How will that work with my pager?

    • @decoder-sh
      @decoder-sh  7 months ago +1

      @@fedorp4713 I’ve seen people make music with HDDs, I’m sure we can quantize some Beach Boys to play on DTMF

    • @fedorp4713
      @fedorp4713 7 months ago

      @@decoder-sh Love it! Subbed, can't wait for the boomer pager LLM series.

  • @garbagechannel6514
    @garbagechannel6514 7 months ago

    Isn't the electric bill higher than just paying for ChatGPT?

    • @decoder-sh
      @decoder-sh  7 months ago +5

      Depends on the price of electricity where you are, and how much you use it! But running LLMs locally has other benefits as well: no need for an internet connection, no vendor lock-in, no concern about sending your data to Meta or OpenAI, the ability to use different models for different jobs, plus some people just like to own their whole stack.
      It would be interesting to figure out the electricity utilization per token for an average GPU though…
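
      As a rough back-of-the-envelope with assumed numbers: a GPU drawing 300 W while generating 30 tokens/s uses 300 / 30 = 10 J per token, which is about 2.8 Wh per 1,000 tokens; at $0.15/kWh that works out to roughly $0.0004 per 1,000 tokens, before counting idle power.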

    • @garbagechannel6514
      @garbagechannel6514 7 months ago

      @@decoder-sh True enough; the concept is appealing, but that's what holds me back at the moment. I was also looking at on-demand cloud servers, but it seems like it would get either very expensive or very slow if you let an instance spin up for every query. Most effective does seem to be anything with shared resources, like ChatGPT

  • @optalgin2371
    @optalgin2371 5 months ago

    Do you have to use 3000:8080?

    • @decoder-sh
      @decoder-sh  5 months ago +1

      No, you can change the docker config to use whatever host ports you want
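
      A minimal sketch: only the host side of the mapping changes; the container side stays 8080.

        # serve the UI on host port 4000 instead of 3000
        docker run -d -p 4000:8080 -v ollama-webui:/app/backend/data --name ollama-webui ghcr.io/ollama-webui/ollama-webui:main
        # then browse to http://localhost:4000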

    • @optalgin2371
      @optalgin2371 5 months ago

      @@decoder-sh What if I want to use the Ollama server on my Windows machine and connect OpenWebUI on a different Mac machine? I've seen there's a command for using Ollama on a different host, but whenever I use that command with 3000:8080 the UI page opens and I can register and change things, but it doesn't connect; however, when I use the network-flag fix it doesn't even load the web UI page.

    • @optalgin2371
      @optalgin2371 5 months ago

      @@decoder-sh Is there a way to use this method to connect two machines?

  • @Mehrdadkh87
    @Mehrdadkh87 7 months ago

    Should we be connected to the internet?

    • @decoder-sh
      @decoder-sh  7 months ago

      Yes you'll need to be connected to the internet