
Is Open Webui The Ultimate Ollama Frontend Choice?

  • Published Aug 17, 2024
  • On 04/25/2024 I did a livestream where I made this video...and here is the final product. It’s a look at one of the most used frontends for Ollama. It's not perfect, but there is a lot to like.
    Someone clarified something that I missed... It seems that you can specify the model to use in the prompt using the @ sign. This is great. They should highlight that in the docs and make it a bit more discoverable.
    Here is the chart I mentioned. www.technovang...
    You can find the code for every video I make at github.com/tec.... Then find the folder name that starts with the date this video was published and a title that makes sense for what the video covers.
    Be sure to sign up to my monthly newsletter at technovangelis...
    I have a Patreon at / technovangelist
    You can find the Technovangelist discord at: / discord
    The Ollama discord is at / discord
    (they have a pretty url because they are paying at least $100 per month for Discord. You help get more viewers to this channel and I can afford that too.)

    00:00 Introduction
    02:47 Getting Started with Open WebUI
    04:01 Let's setup Open WebUI
    04:51 How Often is Open WebUI updated
    05:16 The Actual Install Process
    06:52 The Parts of the UI
    07:17 Setting the Settings
    09:59 Connect to Multiple Models
    11:19 Working with Prompts
    13:04 Talking to a Website
    13:36 Talking to Documents
    14:54 What do you think?

COMMENTS • 291

  • @IdPreferNot1
    @IdPreferNot1 3 months ago +37

    Wow... this is the kind of detailed, helpful and to the point app review we should see more of from people. Thanks!

  • @jayd8935
    @jayd8935 3 months ago +26

    Have my subscription Matt. I like your highly clear and structured way of speaking.

  • @MyAmazingUsername
    @MyAmazingUsername 3 months ago +5

    Thanks for teaching me how to get started. The only downside of Ollama is that it's unable to integrate with HuggingFace, but it is able to import the raw GGUF files or whatever they are called, by manually filling out a Modelfile. It's amazing.
    I basically fill out FROM, TEMPLATE, PARAMETER context size and PARAMETER stop words. Then import it. The result is perfect.
    I even imported inside a Docker environment. Just place the image folder inside the mounted volume path. Then use "bash" inside the container and then you can do the import.
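
The Modelfile workflow this commenter describes can be sketched as follows; the GGUF filename, template, and parameter values are illustrative placeholders, not taken from the video.

```shell
# Hypothetical Modelfile covering the fields the commenter lists:
# FROM, TEMPLATE, PARAMETER context size, and PARAMETER stop words.
cat > Modelfile <<'EOF'
FROM ./my-model.Q4_K_M.gguf
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT: """
PARAMETER num_ctx 4096
PARAMETER stop "USER:"
EOF

# Register the model with Ollama (run inside the container when using Docker):
# ollama create my-model -f Modelfile
```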

  • @barneymattox
    @barneymattox 3 months ago +7

    I really appreciated this video. I've only been using this tool for about a week and was really excited to get answers to all of the confounding and non-working features I kept running into...only to find out that they're actually confounding or non-working. 😂

  • @matthewbond375
    @matthewbond375 3 months ago +11

    When you set additional hosts in the "Connections" settings, they will act as redundancy assuming you have the same models installed on each host. So if I serve to multiple users, all using the same model at the same time, it will queue up requests to the current unoccupied host, in sequence. I've tested it locally with 3 separate hosts, and it works quite well. BTW thank you for the great video!

    • @liamburgess3385
      @liamburgess3385 3 months ago

      I was wondering whether a model like Llama3:70b or Llava could run on one PC with a lot of hardware resources, while on a separate PC you could run a light model like Phi 3. Then... I could turn off the powerful PC at night/weekends to save power and the chat model could default to Phi3? Maybe this is what it could be used for.
      After all, when the powerful PC is off, Open WebUI wouldn't know things like Llava even exist... Maybe Open WebUI would need a restart to notice things had changed? What are your thoughts?

  • @joeburkeson8946
    @joeburkeson8946 3 months ago +5

    Good review, I have been using open-webui for a while and learned a bunch of new stuff, thanks. It appears to get better all the time which should continue especially after you've uncovered areas for improvement. BTW, I like the new chat archive feature.

  • @JoeBrigAI
    @JoeBrigAI 3 months ago +29

    The new required login doesn't go to any remote site, it stays on the local computer. This way multiple users can store chat history and settings. I agree that it should be optional, but at least it's local.

    • @technovangelist
      @technovangelist  3 months ago +5

      Correct. It’s for access to openwebui. But it’s intended as a feature for hosting it on another system online.

    • @superfliping
      @superfliping 3 months ago

      Great video, enjoy your content, very helpful. I have a question about agents; how can I contact you privately?

    • @technovangelist
      @technovangelist  3 months ago +1

      I am on the ollama discord. Or you can find me on twitter. Same name as this channel

  • @0xJarry
    @0xJarry 3 months ago +7

    Thanks for this Matt, very easy to work with this tool!

  • @lakergreat1
    @lakergreat1 10 days ago

    Would love to see a detailed follow-up review. Love the level of detail in this one

  • @bigpickles
    @bigpickles 3 months ago +2

    Love your videos, mate. Even if we are on opposite sides of the fence re. Dark mode! Cheers.

  • @OliNorwell
    @OliNorwell 3 months ago +3

    There’s a tiny button after the response that gives you data on tokens per second etc, I love that about this particular UI, easy to compare speeds

    • @technovangelist
      @technovangelist  3 months ago +1

      Yes that is nice. It’s pretty interesting to see how much they have been able to replicate from the cli

  • @tdorisabc123
    @tdorisabc123 3 months ago +1

    The user login default worked well for me - at a company that can’t use cloud based LLMs for security reasons, the default workflow allows you to immediately install this tool and share it with regular users (who don’t know what a command line is). But I agree maybe there ought to be a “dev” switch that turns it off.
    Really great video, looking forward to more.

  • @jimlynch9390
    @jimlynch9390 3 months ago +2

    I use open webui some but also use the command line. I'm not familiar enough with advanced usage of either, though. I appreciated this video and am looking forward to learning more. At this point, I'm just a sponge. Thanks!

  • @wilhelm8735
    @wilhelm8735 3 months ago +6

    I deployed open webui on my kubernetes cluster and I am pretty happy with it. It makes it easy to test some LLMs and compare their output. I wish one could add langchain code and select that as a model in the dropdown. Then it would be easy to integrate your own RAG/agent pipeline.
    Thank you for your videos! Your content is awesome!

    • @W1ldTangent
      @W1ldTangent 3 months ago +2

      Awesome to hear that you've successfully deployed Open WebUI on your Kubernetes cluster, are enjoying using it to test and compare LLM outputs! We appreciate your feedback and enthusiasm for the project.
      We love to hear from our users about their ideas and suggestions. In fact, we've had similar requests to yours for a while and we absolutely plan to address them. While we haven't had the bandwidth to implement these features yet, we're excited to know that there's continued interest in this direction.
      If you or anyone else in the community is interested in contributing to Open WebUI, we'd be happy to see pull requests for these features! Even if it's not directly related to Langchain integration, any PRs or answers to questions in our community can help free up time for our developers to focus on bigger features.
      Thanks again for your kind words and for being part of the Open WebUI community!

    • @AlexandreBarbosaIT
      @AlexandreBarbosaIT 3 months ago +1

      Hey! Kubernetes for this is a great idea, especially because of the time it takes for Ollama to switch models before giving back the response. Would you share your kubectl command?

    • @technovangelist
      @technovangelist  3 months ago +2

      thanks @W1ldTangent for that reply. I look forward to seeing open webui progress over each of the releases. It's amazing to see how far it has come.

  • @DihelsonMendonca
    @DihelsonMendonca 27 days ago +1

    I love Open WebUI. I can download a GGUF model from Hugging Face and convert it directly into Ollama format in minutes using the GUI. And TTS is fantastic, hands free, I can talk and listen. I even installed new voices. And I can web search, RAG, many features indeed ! ❤❤❤

    • @bigglyguy8429
      @bigglyguy8429 23 days ago

      "convert directly into Ollama format in minutes using the GUI" Why is Ollama so dumb it needs to convert anything? So many other apps just work with normal GGUF files, without any messing around.

    • @technovangelist
      @technovangelist  23 days ago

      If the model weights are gguf there is no conversion. Ollama just uses straight gguf files. I assume he means converting from the source safetensors files. Oh wait. He did say convert gguf. He doesn’t know what he is talking about. In order to use a model anywhere you typically want a system prompt and template. With most other tools you have to figure that out yourself and with ollama you get it all. But it just uses gguf as is.

    • @bigglyguy8429
      @bigglyguy8429 23 days ago

      @@technovangelist Nope, I edited the environment variable to point Ollama at my 95GB folder full of GGUF that Backyard, ChatGPT, LMStudio and others can use, Ollama just stares blankly and declares "No models found"

    • @technovangelist
      @technovangelist  23 days ago

      You need to add them to ollama. There is no conversion. But you do need to tell Ollama about them.
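
As a sketch of that "add them to Ollama" step: no conversion happens, a Modelfile simply points at the GGUF already on disk (the path and model name below are hypothetical).

```shell
# Point a Modelfile at an existing GGUF file -- nothing is converted.
printf 'FROM /models/some-model.Q4_K_M.gguf\n' > Modelfile
# ollama create some-model -f Modelfile   # makes Ollama aware of it
```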

  • @anotherhuman7344
    @anotherhuman7344 3 months ago +1

    hi Matt, amazing content. Thank you for sharing your thoughts with us and chatting with me during your stream.

  • @aamir122a
    @aamir122a 3 months ago +3

    What would make a great addition to this would be a RAG backend to load bulk documents. The way to do this would be to simply mount an external volume to the docker image, then have a file watcher load up any new documents added to the external directory. All documents would be available to all users of Web-UI for RAG use.

    • @sergejzr
      @sergejzr 2 months ago

      Or having a RAG from a webcrawl where the user just puts in the starting URL and domain

    • @thexedryk
      @thexedryk 1 month ago

      You can upload all of the files that you want to load into the DOCS_DIR directory and hit scan, it will load any new files it finds. Not 100% automated, but more reliable than using the (+) button.
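
One way to keep that docs directory populated is a bind mount, sketched below; the container path assumes the default DOCS_DIR location and should be checked against your Open WebUI version before relying on it.

```shell
# Hedged sketch: mount a host folder over the container's docs directory,
# then use Documents -> Scan in the UI to index any new files it finds.
docker run -d -p 3000:8080 \
  -v /srv/rag-docs:/app/backend/data/docs \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```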

  • @grokowarrior
    @grokowarrior 3 months ago +6

    I use Open WebUI every day and I love it! I love how it formats results nicely and stores the conversations for easy reference. The login page works with my password manager, so it's not that inconvenient, and I feel better that my conversations are kept private this way, because privacy is such a huge motivation for running a private AI after all.

    • @technovangelist
      @technovangelist  3 months ago +4

      But the login provides no privacy on your local machine. Maybe if it were hosted on an external server.

    • @ts757arse
      @ts757arse 3 months ago +4

      The login for me has use. I host this on a server. I have the admin account with all the trial models and so on and the user account which only has access to the one or two models that just work. As a result, when my wife wants to use it or when I want to just get stuff done, the user accounts are great. When I am fiddling and don't care there are loads of available models and duplicates with slightly different names, then admin it is.
      I do host it online but I do not treat the login page as any degree of security, just as a way of segregating functionality.

  • @alx8439
    @alx8439 3 months ago

    Love your review, sir. I came up with exactly the same notes and frustrations as you did 😂

  • @docbrian2573
    @docbrian2573 16 days ago

    Very clear presentation; thank you!

  • @alx8439
    @alx8439 3 months ago +1

    User management is actually a good thing if you want to share your LLM with other ppl without giving them the ability to mess with your stuff.

  • @K600K300
    @K600K300 3 months ago +1

    I usually use AnythingLLM, but after your explanation of Open WebUI I will try it

  • @bazwa6a
    @bazwa6a 1 month ago

    your content is very high quality... thanks Matt

  • @AlexandreBarbosaIT
    @AlexandreBarbosaIT 3 months ago

    Loving your videos and surely will give Open WebUI a try. Keep up the amazing work with this content.

  • @artur50
    @artur50 3 months ago

    Great ideas (as usual 😂), cannot wait for the whole series…

  • @alexmac2724
    @alexmac2724 3 months ago

    Good stuff, and one of the more useful things I have watched in a while debating UIs

  • @juanjesusligero391
    @juanjesusligero391 3 months ago +1

    Hey, this is great, Matt! :D Having you try and review all the Ollama frontends will be super useful! I'm really looking forward to the rest of the series! :D
    I currently use Open WebUI as well as the Ollama CLI, and I completely agree with the pros and cons you outlined.
    By the way, could you tell me where to find the comparative chart you mentioned in the video? I couldn’t find it on your website, but I'm really interested in having a look at it :)

  • @Treewun2
    @Treewun2 3 months ago

    I’ve been looking at this and other tools and the one thing I find elusive is the ability to fine tune a model with desired prompt/inference examples to help fast track the usefulness of a newly downloaded model. Including this in your reviews would be amazing if possible.

  • @desireco
    @desireco 3 months ago

    Thanks for the detailed description. Opens up a lot of possibilities. What I am missing on the command line is history, which this provides... I also discovered the Enchanted desktop client for Mac which does this as well, so this is easier to install.

  • @andrewzhao7769
    @andrewzhao7769 11 days ago

    Thank you very much, this is super helpful

  • @tntg5
    @tntg5 3 months ago +2

    It would be great if you could make a video about deploying the model into the cloud and using its endpoints, to see how API-friendly it is

  • @HyperUpscale
    @HyperUpscale 3 months ago

    BTW awesome tutorial - I was using the old version and I didn't know the prompts, web scraper, document and voice features were available. Thank you for sharing.
    I hope they will fix the Whisper TTS soon, as the generic Windows TTS is so annoying... sounds like we are in the year 2000 :)

  • @ibbobud
    @ibbobud 3 months ago +1

    Thanks for the info. Love the videos

  • @ukaszkonieczny8909
    @ukaszkonieczny8909 12 days ago

    Hi : ) Thanks for the nice review. Like your voice : ) Btw. there is no need to create an account. During container creation it is possible to add the environment variable -e WEBUI_AUTH=False (it is in the docs)
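
Spelled out, the commenter's flag looks like this at container creation; the port mapping and volume name are illustrative defaults, not from the video.

```shell
# WEBUI_AUTH=False disables the login/account requirement (documented env var).
docker run -d -p 3000:8080 \
  -e WEBUI_AUTH=False \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```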

    • @technovangelist
      @technovangelist  12 days ago

      I think they made a number of changes after this video

  • @PP_Mclappins
    @PP_Mclappins 3 months ago +1

    I wouldn't go as far as to say that this is even close to a 1-to-1 match for the command line tool.
    I use both regularly; it's nice to be able to easily set up a DB for file storage and to have a smooth and extremely easy way to integrate your files into your chats. It's also pretty nice to have an easy interface for model building rather than needing to build out your models in text files and then create them from that.
    It's also pretty nice to be able to provide feedback to your models in a concise way using the thumbs up and down feature. It's especially noticeable if you test with local files and repeatedly give the model poor praise when it answers correctly and vice versa; the model reflects the mixed judgment and starts to act foolish.
    Additionally, it makes it very easy to serve models to friends, family, and in a work environment.

  • @dr.mikeybee
    @dr.mikeybee 3 months ago +2

    It would be nice if this were an assistant with a wake word. If there were a page to add actions, that would be terrific.

  • @OliNorwell
    @OliNorwell 3 months ago +2

    User login is a pain if you’re on your own but if you have a family using it then it’s good to have your conversations stored per user. Many users will run ollama and a UI like this on a powerful machine in a cupboard for example then access it via phones or laptops from the couch. Then if you have 3 or 4 of you then user management becomes almost a necessity. I agree though it should be optional.

    • @technovangelist
      @technovangelist  3 months ago +2

      Optional is the key word

    • @technovangelist
      @technovangelist  3 months ago +2

      Plus even with a password, your kids can see all your conversations by looking at the unsecured information that's stored as plaintext.

  • @conneyk
    @conneyk 3 months ago +1

    Thanks for the video. I use this tool regularly but I did not cover all the features you mentioned - time to explore them 😃
    The document chat feels a little less configurable than I would like, e.g. specific text splitters, working with external vector stores, but maybe some of these features will be added sometime. I am very impressed by the release frequency of these guys!

  • @AdrienSales
      @AdrienSales 22 days ago

      Hi Matt, as always, your demos just make things so clear: would you plan a demo about "custom tools" integrations?... and add a custom one?

  • @PestOnYT
    @PestOnYT 3 months ago

    Fully agree. It's an all-new technology for many of us and some terms used aren't that obvious. So, good help text and tool-tips are as important as the feature itself. Having a good UI is great. But, as far as I understood this video, it is not *just* a UI. Built-in RAG, vector database etc. mean there's more "stuff" than just the UI itself. It is needed of course, but it is rather a full-blown frontend application than just a UI. Things grow over time. ;-)

  • @Maisonier
    @Maisonier 3 months ago +3

    I was using LM Studio with Anything LLM... After seeing this video I think it's time to change...

  • @truehighs7845
    @truehighs7845 3 months ago

    The CLI is less handy because it doesn't record the chat, and editing a prompt in the CLI is tough, but in general I agree with you. Especially the custom models would benefit from additional functionality, like using the unsloth framework for fine-tuning, then saving, benchmarking and loading custom models; it is very compatible with ollama's local philosophy.

  • @K600K300
    @K600K300 3 months ago

    your videos are always so informative, thank you 🥰

    • @K600K300
      @K600K300 3 months ago

      my main language is Arabic and I am weak in English, but I understand you without using any subtitles

  • @lunarrevel8754
    @lunarrevel8754 1 month ago

    Great video thanks!

  • @jesusjim
    @jesusjim 3 months ago

    I use it all the time, as opposed to faster services, for privacy reasons. I serve it from home and have it sitting behind a reverse proxy server so I'm able to reach it from a FQDN. It suits me well :D

  • @jephrennaicker4584
    @jephrennaicker4584 3 months ago

    Very captivating explanation. As a suggestion, there are others I would like you to review, like LM Studio and AnythingLLM. Thanks 🙏

  • @blee6782
    @blee6782 1 month ago

    I installed it and it seems useful. There's a call feature now, but I haven't gotten it to work yet. Might be a killer feature when I'm using my own llm from a mobile device. I use wireguard to connect to my home instance when I'm away.

    • @technovangelist
      @technovangelist  1 month ago +1

      interesting. I have been meaning to create an update. and just started updating my table of integrations....still a lot to do : www.technovangelist.com/notes/annotated%20list%20of%20ollama%20web%20and%20desktop%20integrations/

  • @wardehaj
    @wardehaj 3 months ago

    Once again a great video with a super easy explanation, thank you so much!

  • @poldiderbus3330
    @poldiderbus3330 3 months ago

    I envision a chat app with a tree-like structure, including a main trunk and collapsible branches for topics created by users. You could invite and select bots, similar to users, that are defined with Flowise with various capabilities defined by their flows. A response from a chatbot would be triggered by selecting a bot or activating a checkbox when sending a message. Additionally, I'd like to have a Telegram client interface in a specific topic/branch and include STT/TTS functionalities. 🙂 (All I need is a seed investor and an engineer to do all the work. 😋)

  • @pin65371
    @pin65371 3 months ago

    I like this tool. For those of us who maybe aren't as comfortable with code, it makes things easier. If the whole point of these open source models is to open LLMs to as many people as possible, then these tools are needed. If the developers see this, I'll throw one idea out there: start people off by having them pick a model, even if it's a small model, and have a help system that can run off of that. I actually don't get why more people aren't doing that already. You just need structured documentation so that even the really small models can work with it. If someone doesn't understand something, it would be really simple to just have a question mark button they can click to chat about it. They could even go as far as having feature requests or bug reports use a similar system. On the developer side, they can take in that data and use a larger model to do more processing on it to find common themes, which would make it easy to prioritize everything.

  • @israel8746
    @israel8746 3 months ago +1

    Wow, that's a lot of models. Which ones are your favorites and what do you use them for?

  • @LabiaLicker
    @LabiaLicker 1 month ago

    Looks excellent for self-hosting an LLM for friends and family to use, instead of continuing to feed user prompts to the beast (openai).

  • @jaycee62
    @jaycee62 3 months ago

    thank u...cool presentation style my man..#thumbs👍🏾

  • @bic4
    @bic4 29 days ago

    open webui came a long way in the last 2 months

  • @cken27
    @cken27 3 months ago +2

    For locally hosting an Ollama UI, AnythingLLM is better for the RAG use case, but Open WebUI offers a UX closer to the ChatGPT interface.

    • @TheGoodMorty
      @TheGoodMorty 3 months ago

      Does AnythingLLM have an API endpoint for prompting with RAG functionality?

  • @TokyoNeko8
    @TokyoNeko8 3 months ago

    I love open webui. It is for sure the best. It even has some support for images if you don’t want to go to say the native automatic1111 webui

  • @moe3060
    @moe3060 3 months ago +1

    I like that color scheme though

  • @cruachanx
    @cruachanx 2 months ago

    FWIW, I think the ModelFiles section is the most powerful part of Open WebUI.

  • @craigrichards5472
    @craigrichards5472 3 months ago

    Thanks Matt, what about using the spew hint capabilities. Could you go through that?

  • @StephenRayner
    @StephenRayner 3 months ago

    -d is actually detached
    'docker run --help'
    -d, --detach Run container in background and print container ID

  • @robwin0072
    @robwin0072 1 month ago

    I Liked and Subscribed.
    Hello Matt, I was one of the first group of STS Space Shuttle programmers 40 years ago, while still in my early days of college. It's great to see how programmers from long ago and today's brains use the same synapse pathways.
    I have been with my hidratespark pro 32oz for three weeks now - love it. I plan to buy a small one (16oz?) to fit into my vehicle.
    2. Which do you recommend, anaconda or Docker?
    3. And what are we to do with the Modelfiles section?
    4. What controls compare to Openai’s Custom Instructions in open web UI?
    5. The ‘/’ features appear pretty helpful - I have to rewatch your explanation.
    6. where to find user manual instructions for all of the open web UI features and how-tos.
    Thank you for the video.
    Happy Hidration.

    • @technovangelist
      @technovangelist  1 month ago +1

      Anaconda or docker??? Those have two very different roles and purposes. But I tend to avoid anaconda or conda or any of those package environments for Python. Just bloated. I don’t understand the question for 3 and 4

    • @technovangelist
      @technovangelist  1 month ago +1

      I don't think there are any docs

    • @robwin0072
      @robwin0072 1 month ago

      @@technovangelist Matt, #3, I am asking what the purpose/function of the Modelfiles area is.
      #4, OpenAI's ChatGPT has a [Custom Instructions] feature in Settings (I think that's where it's located); it allows the user to predefine things the user wants ChatGPT to use in its responses without having to put them in every prompt.

    • @robwin0072
      @robwin0072 1 month ago

      @@technovangelist 🥲🥲🥲
      Thank you for replying.

    • @technovangelist
      @technovangelist  1 month ago +1

      ok, number 3: wish I knew. It's useless; they should remove it since it doesn't add anything. For 4, I am not sure

  • @JoeBurnett
    @JoeBurnett 3 months ago

    Great video and information. Thank you!

  • @robwin0072
    @robwin0072 3 days ago

    Good day,
    Matt, hopefully, this is my last question about the Private GPT installation. My laptop has arrived.
    I have installed an M.2 2T primary drive and a secondary 2T SSD.
    Q: After installing Ollama, Docker, and WebUI, can the models be stored (directed) to the secondary SSD to preserve space on the primary M.2 system SSD?
    If so, when do I pick where to store the models during their installation?

  • @StephenRayner
    @StephenRayner 3 months ago +1

    Yes it is

  • @AltMarc
    @AltMarc 3 months ago +1

    PrivateGPT as UI?
    It's thanks to PrivateGPT that I learned about Ollama, it works pretty well on my Jetson Xavier AGX 32GB, not a simple task due to ARM64+CUDA.

  • @michaelbubnov3306
    @michaelbubnov3306 3 months ago

    Is there a way to pull models from Hugging Face into Open WebUI?
    I don't want generic models; they are often too censored to answer questions.

  • @DihelsonMendonca
    @DihelsonMendonca 1 month ago

    I can't use light mode anymore. I got a terrible illness in my eyes, in the retinas, because of which I can barely read anything on a white bg. It simply hurts my eyes. I can still see white over dark, but I need lots of contrast. 😮

  • @DrakeStardragon
    @DrakeStardragon 3 months ago

    You leave my precious dark mode alone, you.. you meanie! 🙃 I use openwebui and I agree: why do we have to sign in, and what is that modelfiles area for? I have not tried other addons yet, tho, but I am about to, which is part of why I watched this video. So, keep going through that addons list! Excellent video!

    • @technovangelist
      @technovangelist  3 months ago +1

      I’ll move on to the other user interfaces. My goal is to see if there is one that improves on the built in cli

  • @Techonsapevole
    @Techonsapevole 3 months ago

    Open WebUI is fantastic, but I agree some features need refinement

  • @tacorevenge87
    @tacorevenge87 3 months ago

    Great content. Thank you

  • @userou-ig1ze
    @userou-ig1ze 3 months ago

    The only things missing for me when it comes to the web UI are:
    A) doing a sequential websearch (i.e. google stuff, if a condition is unsatisfied, google more, integrate into chromadb)
    B) digesting my pdf data folder (e.g. a list of pdf publications) and storing it in a DB. This could also be done in the cli

    • @technovangelist
      @technovangelist  3 months ago

      Those things can’t be done in the cli as is but this webui doesn’t really do those things all that well either.

  • @mernik5599
    @mernik5599 3 months ago

    Just hope we get to see function calling in it soon!

  • @SimoneScanzoni
    @SimoneScanzoni 3 months ago

    Last time I tried RAG from a big PDF in Open WebUI it did a terrible job, while Cheshire Cat did a good job with the same PDF. I tried also BionicGPT, bigAGI and chatd, and Cheshire Cat was the clear winner in RAG. Besides that, its plugin system offers many functionalities, and its ability to delete specific memories is something I haven't seen anywhere else. I think it deserves a try; it seems like a joke but it's not

  • @greypsyche5255
    @greypsyche5255 3 months ago

    Thing is, any GUI would be better than the command line, because you can use arrows to go back or forth, edit, select, etc. You cannot do that using Ollama in the terminal.

    • @technovangelist
      @technovangelist  3 months ago

      Those are things you can do in the ollama cli.

    • @greypsyche5255
      @greypsyche5255 3 months ago

      ​@@technovangelist well I can't. My own cli app has readline, which allows me to do that, but the official ollama cli does not. When I hit the left arrow, for example, I get ^[[D

  • @utuberay007
    @utuberay007 1 month ago

    How do I connect Azure OpenAI embeddings?

  • @user-uv3nv2bc6v
    @user-uv3nv2bc6v 2 months ago

    Hi Matt, thanks for your detailed video.
    Do you recommend another WebUI tool?

  • @freeideas
    @freeideas 3 months ago +1

    The most shocking new information I got from this video is the idea that dark mode hurts someone's eyes! I thought everyone was saying, "I wish light mode would die because it hurts my eyes". By the way, a 100% white screen flashing at me in the middle of the night... literally causes my eyes physical pain.

    • @technovangelist
      @technovangelist  3 months ago +1

      The only ones shocked by that are the ones who don’t hear all the folks complaining about dark mode.

  • @JNET_Reloaded
    @JNET_Reloaded 3 months ago

    I think it would be better to have the 1st ollama web ui account be admin, and better user management and generation of local api keys would also be awesome, so security is there from step 1 in case it's ever in production in the future!

    • @technovangelist
      @technovangelist  3 months ago

      Yes, the user management is a bit lackluster, not really providing much security and really only offering a little speed bump. So make it optional, and then for folks that want the security, offer it in a real way.

  • @PMProut
    @PMProut 2 months ago

    I got addicted to ollama last year and got to play around with openwebui when it was still called ollama webui
    The name change messed up my docker installs, not gonna lie
    But then, we decided to try it as a corporate AI companion, though as it was a testing phase, we didn't scale our cloud very high, so it was pretty slow
    On my machine though, I wanted to try and use every bit of functionality, which led me to install and learn ComfyUI, and while the image generation options from openwebui are limited whatever backend you use, it's still usable

    • @technovangelist
      @technovangelist  2 months ago

      Interesting. I haven't really played with ComfyUI.

    • @louisfeges2913
      @louisfeges2913 2 months ago

      I appreciate that you introduced me to ollama and are sharing your experiences and frustrations with the deployment of its various features. Your videos are a combination of the joy and frustration that are part of every software development cycle, and it feels great that I am not alone in feeling this. Thank you 😊

  • @RomPereira
    @RomPereira 25 days ago

    @technovangelist thank you for your video. You mentioned a chart, do you mind sharing it?

    • @technovangelist
      @technovangelist  25 days ago

      I did? What chart? I can review later but easier if you can give any info. Thanks

    • @RomPereira
      @RomPereira 25 days ago

      @@technovangelist 😅 Yes you did... from 00:26 on... thank you

    • @technovangelist
      @technovangelist  25 days ago +2

      Thanks so much for pointing it out. www.technovangelist.com/notes/annotated%20list%20of%20ollama%20web%20and%20desktop%20integrations/

  • @ihaveacutenose
    @ihaveacutenose 3 months ago

    How good is the RAG capability in Open WebUI?

  • @GeorgeCBaez
    @GeorgeCBaez 3 months ago

    I am wondering if there is a way to apply white-label styling to the UI. Can you recommend customizations for those who want to demo LLM-centered ideas using Ollama UI? Perhaps an alternative front end with similar features?

  • @DiegoCrescenti70
    @DiegoCrescenti70 1 month ago

    Thanks for the video. A question about the Ollama/Open WebUI/Docker combo. I have this configuration and all is OK. I have a goal to reach: I want to specialize a pre-trained LLM by training it on a large base of data about coding in a proprietary language that isn't popular; only about two hundred programmers use it.
    My questions are:
    - Which generic, lightweight LLM can I use?
    - I use some Python scripts to train an LLM (in my case, testing with Phi3:Mini), and I hit a problem: when I try to load the model, Python says it can't find the model path, usually ~/.ollama/models/…
    - I notice the LLM files are stored as SHA256 blobs! Perhaps that's the problem!
    Can you help me do this training?
    Can you give me some documentation or tutorials?
    Thanks in advance. Have a good day.
    Sorry for my bad English; I'm an Italian developer.

  • @CrazyTechy
    @CrazyTechy 17 days ago

    Matt, I just installed Ollama and started using llama3.1 via the Windows cmd prompt. Now I need to install the WebUI, and I followed you until you mentioned Docker; you lost me after that. I need the procedure for installing the WebUI, and I assume I need Docker. You go through lots of details that seem important, but you don't follow through for me. What I'm saying is I need more direct instructions for getting the WebUI to work without using the cmd prompt window. Thanks for your informative series. Howard from Detroit.

    • @technovangelist
      @technovangelist  16 days ago

      Yup. A lot of AI tooling assumes knowledge of Docker; it's a pretty low-level requirement for so much of tech these days. I will be doing an update of this in the next few weeks and will try to include more of the steps required. And I know Detroit pretty well. My wife is from around there, and we got married in Pinckney, closer to Ann Arbor.
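
For readers in the same spot: a minimal sketch of the install, assuming Docker Desktop is already installed and Ollama is already running natively on the host (as it is here). This is the single-container form documented in the Open WebUI README at the time of writing; the `-p 3000:8080` mapping means the UI ends up at http://localhost:3000.

```
# Pull and run Open WebUI. Ollama stays on the host at its default
# port 11434, reachable from the container via host.docker.internal.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

# Then open http://localhost:3000 and create the first (admin) account.
```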

    • @CrazyTechy
      @CrazyTechy 14 days ago

      @@technovangelist Thanks for replying. I bet you are very busy with this AI stuff. I am having a great time learning, and I am very excited about it. I graduated from Wayne State University in Detroit, then worked for Chrysler Defense. I ended up publishing some books on BASIC programming with Howard W. Sams, then went out on my own. It didn't work out as well as I'd thought, but I became a stay-at-home dad for our two kids. My older daughter is a programmer at GM, and the younger one works for Ohio State. My wife went to U of Michigan; we go to Ann Arbor a few times a year for their theater. I think AI is the next tech leap in computers and there's nothing to fear. I am retired, doing some silly videos on YouTube and working on a UFO blog with a friend from Colorado. My first computer was the Motorola M6800D1 kit using a teletype.

  • @AdmV0rl0n
    @AdmV0rl0n 3 months ago

    I've tried setting up several different RAGs. In most cases, the rosy docs don't capture the snagging issues I have run into. I can't help but feel we're in an early-days state, and that in a few months RAGs will evolve. Right now, I'm kind of backing away from investing further time, as they only work partially, and in document, image, and sound handling there is... work to do :/

  • @BikinManga
    @BikinManga 3 months ago

    How do I get the WebUI to connect directly over the LAN (Ollama on a PC, WebUI on a NAS, access from an iPad)? I tried tunneling, but text refresh is slow.

  • @pschakravarthi
    @pschakravarthi 3 months ago

    Thanks for the detailed video. I am trying to create a chat with voice, something like Amazon Alexa. Can you please create a video around it?

  • @user-yk8li2fh1c
    @user-yk8li2fh1c 3 months ago

    My use case involves querying email archives, so it is crucial that the documents are not sent to external servers. I used sentence-transformers/all-MiniLM-L6-v2 as the embedding model, and I believe the documents I added are not sent to outside servers. I found GPT-4o much better than the other models in chat. My question is: will my email archives be exposed to external servers, or is only the question in my chat sent to OpenAI?

    • @technovangelist
      @technovangelist  3 months ago

      You embed them to add them to a vector database so that you can find the most appropriate email to ask a question against. Then that email gets sent in plaintext, along with the question, to whichever model you are using. If that's going to be OpenAI, you are sending the emails there. There is no way around this.
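
To make that flow concrete, here is a minimal sketch of the pipeline being described. The bag-of-words "embedding" is a toy stand-in for a real model such as all-MiniLM-L6-v2, and the emails are invented; the point is that the retrieved email travels in plaintext inside the final prompt.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model (e.g. all-MiniLM-L6-v2):
    # a bag-of-words count vector is enough to show the retrieval flow.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, emails):
    # The "vector database" step: pick the most relevant stored email.
    q = embed(question)
    return max(emails, key=lambda e: cosine(q, embed(e)))

def build_prompt(question, email):
    # The retrieved email is sent in plaintext alongside the question;
    # this whole string is what reaches the model provider.
    return f"Context email:\n{email}\n\nQuestion: {question}"

emails = [
    "Invoice for the March order is attached.",
    "The quarterly budget meeting moves to Friday.",
]
top = retrieve("When is the budget meeting?", emails)
prompt = build_prompt("When is the budget meeting?", top)
```

Swapping `embed` for a real sentence-transformer and the list for a vector store changes the quality of retrieval, not the privacy picture: whatever `build_prompt` produces goes to the model host.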

    • @user-yk8li2fh1c
      @user-yk8li2fh1c 3 months ago

      @@technovangelist Thanks, Matt, for the clarification.

  • @loremipsumamet2477
    @loremipsumamet2477 3 months ago

    Yes, I used it with Docker, but why is port 8080 still in use for the UI after I stop the containers?

  • @code_poseidon
    @code_poseidon 3 months ago

    Just had a thought - does the Ollama server already include a login management system? It'd be great if it could handle user credentials similar to Git, allowing specific access based on rules. For example, certain users could access specific models, or there might be usage restrictions. This would make it so much easier to deploy Ollama as an offline LLM service for small businesses. Not sure if this feature exists already, but if not, it could be a cool addition. By the way, awesome project! Really helpful for deploying LLMs locally. 🚀👊

    • @technovangelist
      @technovangelist  3 months ago +2

      No it doesn’t. It’s designed to be the best way to run models locally on your own hardware. Some folks are hosting solutions using ollama but they need to come up with that authentication and authorization system on their own. There are lots of tools for that depending on the specific needs of the project. In fact there are many large companies only focused on that part and none of them can provide all the options some folks want.
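
As a sketch of what "come up with that authentication on their own" can look like: the check below is the kind of thing you would put in a small reverse proxy sitting in front of Ollama's default http://localhost:11434 endpoint. The key strings and user names are made up for illustration.

```python
import hmac

# Hypothetical credential store. In practice this would live in a
# database or secrets manager, not in source code.
API_KEYS = {"sk-alice-123": "alice", "sk-bob-456": "bob"}

def authorized_user(headers):
    """Return the user for a valid 'Authorization: Bearer <key>' header, else None."""
    value = headers.get("Authorization", "")
    if not value.startswith("Bearer "):
        return None
    presented = value[len("Bearer "):]
    for key, user in API_KEYS.items():
        # Constant-time comparison to avoid leaking key bytes via timing.
        if hmac.compare_digest(key, presented):
            return user
    return None
```

A proxy would call `authorized_user()` on each incoming request and forward to Ollama only on a match; per-user model restrictions or quotas would then hang off the returned user.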

  • @TheUselessgeneration
    @TheUselessgeneration 3 months ago

    Funnily enough, I specifically downloaded this UI because it supported AUTOMATIC1111.

  • @dusk2dawn2
    @dusk2dawn2 3 months ago

    Highly appreciated!

  • @kannansingaravelu
    @kannansingaravelu 2 months ago

    No models are listed by default after the WebUI Docker installation. Models installed locally are not shown unless one installs all those models again in the Docker instance. Is there an easy way to install Ollama and pull the models without repeating what was already done locally on the system? Am I missing something?

    • @technovangelist
      @technovangelist  2 months ago +1

      From the comment, it sounds like you installed Ollama in Docker in addition to on the host. You need only one.
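
One quick way to check which instance the UI is actually talking to is to query Ollama's documented /api/tags endpoint yourself. A small sketch, assuming the default port 11434; the sample payload mirrors the endpoint's response shape.

```python
import json
from urllib.request import urlopen  # for the live call sketched in comments

def model_names(tags_json):
    """Extract names from Ollama's /api/tags payload: {"models": [{"name": ...}, ...]}."""
    return [m["name"] for m in tags_json.get("models", [])]

# Against a live server (point this at the same host/port Open WebUI uses):
#   with urlopen("http://localhost:11434/api/tags") as resp:
#       print(model_names(json.load(resp)))
# If this list differs from what the UI shows, the UI is pointed at a
# different Ollama instance than the one you pulled models into.

sample = {"models": [{"name": "llama3:latest"}, {"name": "phi3:mini"}]}
names = model_names(sample)
```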

    • @kannansingaravelu
      @kannansingaravelu 2 months ago

      @@technovangelist For some reason it works on my Windows PC with a GPU but not on my Intel Mac. The UI / dashboard menu is also different on the Mac. Are there any additional settings needed for the Mac?

    • @technovangelist
      @technovangelist  2 months ago

      Nothing needed to get it to work on an Apple Silicon Mac, but it won't use the GPU on an Intel Mac. There is no way to enable that.

  • @mwarnas
    @mwarnas 3 months ago

    So it's not just me who struggles with some of these options. The OpenAI API key is not properly saved between restarts, which drove me nuts.

  • @loco-herzog7047
    @loco-herzog7047 13 days ago

    I can't get my downloaded Ollama models to connect; they don't show up in the WebUI.

  • @HyperUpscale
    @HyperUpscale 3 months ago

    You don't have to review any other webui - this is the best one :) Only the login is annoying.

    • @technovangelist
      @technovangelist  3 months ago +1

      I'm hoping there is one that does a good job with everything. This is nice, but far from perfect.

    • @technovangelist
      @technovangelist  3 months ago +1

      And the goal isn't to look only at web UIs but rather at all clients in general. This one doesn't offer that much over the CLI, but I am sure there is one that blows everything away.

  • @gokudomatic
    @gokudomatic 3 months ago

    My big hope for Open WebUI was the capability to call the Google Search API, but I didn't find it here.

  • @sfl1986
    @sfl1986 3 months ago

    How can you customize the output length so that it can write longer responses?

  • @MilesBellas
    @MilesBellas 1 month ago

    What is the best way to integrate with ComfyUI and Stable Diffusion?

    • @technovangelist
      @technovangelist  1 month ago +1

      That's something I have no idea on. Until anything comes close to Midjourney, I haven't bothered.

  • @MamunSrizon
    @MamunSrizon 3 months ago

    You can actually use @ to interact with a different model.
    And I also find the Modelfile an interesting way to override a model's default configuration.
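
For reference, the Modelfile override mentioned here looks roughly like this; the base model and parameter values are just illustrative, and `FROM`, `PARAMETER`, and `SYSTEM` are the documented Modelfile directives.

```
# Build with: ollama create my-llama3 -f ./Modelfile
FROM llama3
PARAMETER temperature 0.2
PARAMETER num_ctx 8192
SYSTEM You are a terse assistant that answers in a single paragraph.
```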

    • @technovangelist
      @technovangelist  3 months ago +2

      If it’s not documented it doesn’t exist

    • @technovangelist
      @technovangelist  3 months ago +1

      I see the docs have been updated to include this. That's great. It's not everything I was mentioning, but it's a good part of it. The Modelfile is the key part of Ollama that makes it amazing, but I didn't see any improvement on the basics in Open WebUI.