Getting Started with Ollama and Web UI

  • Published Dec 25, 2024

COMMENTS • 80

  • @hfislwpa · 4 months ago · +10

    2 videos in 1 day? Woah! Thanks

    • @aonangthailandboattours4757 · 1 month ago

      Indeed, and since the second will reduce views of the first, uploading two wasn't for self-benefit.

  • @RamkumarL-o9z · 4 months ago · +3

    Interesting tutorial with Web UI and Ollama, Thanks!!!

  • @abrahammonteza · 1 month ago · +1

    Excellent explanation!!!!! Simple and straight to the point, as we say here in my country.

  • @khalildureidy · 4 months ago · +34

    Big thanks from Palestine

  • @AleksandarT10 · 4 months ago · +2

    Great one Dan! Keep us updated on the AI stuff!

  • @KennylexLuckless · 5 days ago

    In the beginning you asked why use a local LLM; I think you forgot "offline connectivity". I sometimes take my laptop to places where I have no WiFi, or don't think the WiFi is secure, but I still want to use an LLM to analyze text and scripts.

  • @SayemHasnat-e4h · 4 months ago · +6

    How can I connect my local Ollama (Llama 3) to the Web UI? My Web UI couldn't find the locally running instance.

    • @MURD3R3D · 3 months ago · +1

      same problem

    • @MURD3R3D · 3 months ago

      From the home page of your Web UI (localhost:3000 in your browser), click on your account name in the lower left, then click Settings, then "Models". There you can pull llama3.1 by typing it in the "pull" box and clicking the download button. When it completes, close the Web UI and reopen it; then I had the option to select 3.1 8B from the models list.
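
      [Editor's note: a minimal CLI alternative, assuming Ollama is installed on the host; a model pulled this way should show up in the Web UI's model list after a refresh.]

        # Sketch: pull the model with the Ollama CLI instead of the Web UI settings page.
        ollama pull llama3.1    # downloads the default 8B tag
        ollama list             # verify the model appears before refreshing the Web UI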

    • @SayemHasnat-e4h · 3 months ago

      @MURD3R3D I found that this happens due to Docker networking (see the note after this thread).

    • @manojkl1323 · 3 months ago · +2

      I faced a similar problem. Restarting the system, then starting Ollama, Docker Desktop, and the container solved the issue for me.
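
      [Editor's note: a sketch of the common Docker networking fix mentioned above, assuming Open WebUI runs in a container and Ollama runs on the host. The flags mirror the project's documented run command; the volume and container names are the usual defaults.]

        # Sketch: run Open WebUI so the container can reach Ollama on the host machine.
        docker run -d -p 3000:8080 \
          --add-host=host.docker.internal:host-gateway \
          -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
          -v open-webui:/app/backend/data \
          --name open-webui \
          ghcr.io/open-webui/open-webui:main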

  • @JREGANZOCLIPS · 3 hours ago

    Hello! Which software was used to make this video?
    Thanks in advance.

  • @chameleon_bp · 4 months ago · +2

    Dan, what are the specs of your local machine?

  • @quanbuiinh604 · 1 month ago · +1

    Hello, thank you for your video.
    Could you please let me know if I can use Llama 3.1 on my laptop, which only has an NVIDIA GeForce MX330?

    • @dsmith004 · 5 days ago

      I am running llama3.1 on my Alien R17 without issue.

  • @mochammadrevaldi1790 · 3 months ago · +1

    In Ollama, is there an admin dashboard for tuning the model, sir?

  • @borntobomb · 4 months ago · +4

    Note for 405B:
    We are releasing multiple versions of the 405B model to accommodate its large size and facilitate multiple deployment options:
    • MP16 (Model Parallel 16) is the full version of BF16 weights. These weights can only be served on multiple nodes using pipelined parallel inference; at minimum, it would need 2 nodes of 8 GPUs to serve.
    • MP8 (Model Parallel 8) is also the full version of BF16 weights, but can be served on a single node with 8 GPUs by using dynamic FP8 (Floating Point 8) quantization. We are providing reference code for it. You can download these weights and experiment with quantization techniques beyond what we are providing.
    • FP8 (Floating Point 8) is a quantized version of the weights. These weights can be served on a single node with 8 GPUs by using static FP8 quantization. We have provided reference code for it as well.
    The 405B model requires significant storage and computational resources, occupying approximately 750 GB of disk space and necessitating two nodes on MP16 for inference.

    • @AlexSmile-y2x · 4 months ago

      And what about 70B? How could it be served? Could any of the Llama 3.1 models run on a simple 16-core laptop with an integrated GPU and 32 GB of RAM?

    • @isaac10231 · 3 months ago

      When you say "we", do you work for Meta?

    • @borntobomb · 2 months ago

      @isaac10231 I'm reprinting from the release notes.
      Understand?

  • @je2587 · 4 months ago · +1

    Love your terminal; which tools do you use to customize it?

  • @kasirbarati3336 · 16 days ago

    Loved this 🤩😍

  • @bause6182 · 4 months ago · +6

    Ollama should integrate a feature like Artifacts that allows you to test your HTML/CSS code in a mini webview.

    • @aonangthailandboattours4757 · 1 month ago · +1

      You should integrate a monthly $1,000 payment into my bank account... that's a good idea too. I'm afraid LLMs just handle input and output; it's other applications, software, and hardware that do things like that, e.g. a browser to display CSS. The Web UI and LLMs use Markdown, not HTML, so they cannot do things like YouTube embeds. Besides, F12 in most browsers will give you that anyway.

  • @Fayaz-Rehman · 19 days ago

    Five Stars ***** - Thanks for sharing.

  • @termino2184 · 3 months ago · +2

    Does Open WebUI support creating an API endpoint for AI models, or is it just a chat UI? Does it expose the models as a RESTful API?

    • @transmeta01 · 10 days ago

      No, but Ollama does. See the docs.
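
      [Editor's note: a minimal sketch of Ollama's REST API, which listens on port 11434 by default; it assumes llama3.1 has already been pulled.]

        # Sketch: request a completion from a local model over Ollama's HTTP API.
        curl http://localhost:11434/api/generate -d '{
          "model": "llama3.1",
          "prompt": "Why is the sky blue?",
          "stream": false
        }'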

  • @expire5050 · 4 months ago · +1

    Finally set up Open WebUI thanks to you. I'd approached it, seen "Docker", and left it on my to-do list for weeks/months. I'm running Gemma 2 2B on my GTX 1060 with 6 GB of VRAM. Any suggestions for good models at my size?

  • @MrI8igmac · 2 months ago · +1

    I have spent all morning trying to get up and running. I can get Ollama running, and also Open WebUI on port 3000, but there are no models in the Web UI.

    • @DanVega · 2 months ago

      If you've got Ollama installed, you need to install a model. What happens if you run ollama list?

    • @mikeyz8775 · 2 months ago

      @DanVega
      deepseek-coder-v2:latest 63fb193b3a9b 8.9 GB 2 hours ago
      llama3.1:latest 42182419e950 4.7 GB 6 hours ago

    • @mikeyz8775 · 2 months ago

      This is my desktop.

    • @MrI8igmac · 2 months ago

      @DanVega
      ollama list shows deepseek-coder-v2 (id: 63fb) and llama3.1:latest (id: 4218).

  • @vinnythepooh13 · 14 days ago

    Any idea why the port isn't popping up for me in Docker? I tried the generic address numbers in Chrome and couldn't find the Web UI.

  • @Marssonde1 · 1 month ago

    Despite my model being listed by ollama list, it unfortunately doesn't show up in the Web UI as an option. Not sure what to do, since I am not skilled in such things.

  • @CortezLabs · 5 days ago

    Thank you

  • @stoicguac9030 · 4 months ago · +1

    Is WebUI a replacement for aider?

  • @carlp4954 · 2 months ago

    Do you mind telling us what your MacBook specs are?

  • @Peter-x29 · 1 month ago

    How did you connect to the API?!

  • @DrMacabre · 4 months ago

    Hello, any idea how to set keep_alive when running the Windows exe?
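
    [Editor's note: a sketch of two documented options, assuming a recent Ollama build: set the OLLAMA_KEEP_ALIVE environment variable before Ollama starts, or pass keep_alive on each API request.]

      rem Sketch (Windows): persist the variable, then restart Ollama so it takes effect.
      setx OLLAMA_KEEP_ALIVE "10m"

      rem Or set it per request through the REST API:
      curl http://localhost:11434/api/generate -d "{\"model\": \"llama3.1\", \"prompt\": \"hi\", \"keep_alive\": \"10m\"}"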

  • @abubakkarsiddique13 · 2 months ago

    Hey, it's nice. Can you list all the specs of your machine for running an 8B/9B model?

  • @meshuggah24 · 2 months ago

    Is it normal for Docker to take up 15 GB of RAM on your machine?

  • @jaroslavsedlacek7077 · 4 months ago

    Is there an integration for Open WebUI + Spring AI?

  • @abiolambamalu7061 · 3 months ago

    Thanks so much for this; I'd been struggling with it for so long.
    I usually have this problem where it's really slow, and if I try to reference a document like you did, it just keeps loading and never responds. I did everything you did, except that I use the Phi model instead of Llama 3.1. Could this be the reason?
    Thanks in advance 😊

  • @vikas-jz3tv · 4 months ago

    How can we tune a model with custom data?

  • @elhadjibrahimabalde1234 · 4 months ago

    Hello. After installing Open WebUI, I am unable to find Ollama under 'Select a model'. Is this due to a specific configuration? For information, my system is running Ubuntu 24.04.

  • @Enki-AI · 3 months ago

    Hey Dan, can you help me out? I have an issue I can't figure out: I used to host the Ollama Web UI locally and online on a server, but I'm not sure why it's not working anymore.

  • @rockylau4267 · 3 months ago

    Thanks Dan, good video!
    It runs so smoothly. Sorry, I am a new subscriber.
    I want to know what your computer hardware is, for my reference. Many thanks!!

  • @vrynstudios · 4 months ago

    A perfect tutorial.

  • @landsman737 · 15 days ago

    Very nice

  • @kelthekonqrr · 3 months ago

    Is it possible to build it out to monetize?

  • @CleoCat75 · 21 days ago

    I installed this under WSL on Windows 11 and it's really slow. Is it because it's under WSL and not native on my Windows box?! I have a 3080 Ti GPU and an i9 processor, and yours is MUCH faster than mine.

  • @lwjunior2 · 4 months ago

    This is great. Thank you

  • @billblair3155 · 3 months ago

    Good stuff Big Dawg!

  • @AliHelmi-GET · 4 months ago

    Thank you! I tried it, but it is very slow running on a laptop with 16 GB of RAM!

  • @vactum0 · 4 months ago

    My Ollama running the same model is dead slow, on an 11th-gen i5 laptop with 26 GB of RAM and no GPU.
    Is it because there's no dedicated GPU?

  • @MAKU011111 · 16 days ago

    Where do I get spring-boot-reference.pdf?

  • @abhinaysingh1420 · 3 months ago

    This is really helpful.

  • @zo7lef · 4 months ago

    Would you make a video on how to integrate Llama 3 into a WordPress website, making a chatbot or copilot?

  • @trapez_yt · 4 months ago

    Hey, could you make a video on how to edit the login page? I want to style it to my liking.

    • @aonangthailandboattours4757 · 1 month ago

      Ask your LLM to restyle it for you... same as when you want to know the time: you don't ask your friend, you look at your phone.

  • @quarteratom · 1 month ago

    Which program stores the local user data, Ollama or the Web UI? Data like improvements to the model and chat history. How do multiple users work, and which program handles that? Can different users access other users' data? Does one user "improving" the model affect other users' conversations? How can you completely reset the whole environment?
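
    [Editor's note: a sketch of where state lives and how to reset it, assuming the standard Docker setup: Open WebUI keeps user accounts and chat history in its own Docker volume, Ollama stores downloaded models separately, and chatting does not modify model weights. The volume, container, and model names below are common defaults.]

      # Sketch: wipe Open WebUI state (chats, user accounts) and remove a pulled model.
      docker rm -f open-webui        # stop and remove the Web UI container
      docker volume rm open-webui    # delete its data volume (chat history, accounts)
      ollama rm llama3.1             # delete a downloaded model from Ollama's store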

  • @NikolaiMhishi · 4 months ago

    Bro you the G

  • @betterwithmaul · 2 months ago

    Finally my GPU has another task besides gaming.

  • @selub1058 · 4 days ago

    You skipped the configuration of the Web UI. That's unfair. 😢 Excellent video, but without this important piece it will not work. 👎

  • @chucky_genz · 1 month ago · +1

    You talk too much.

  • @shuangg · 4 months ago · +1

    6 months behind everyone else.