How To Host AI Locally: Ollama and Open WebUI

  • Published 21 Nov 2024

COMMENTS • 112

  • @Warframeplayer-s4l
    @Warframeplayer-s4l 21 days ago +3

    This right here can replace the internet

  • @tikishark1
    @tikishark1 20 days ago +2

    I’ve done this with locally based AI image processing software like comfyui, can’t wait to now try chat as well. I really appreciate the background and tutorial.

  • @Saintel
    @Saintel 21 days ago +7

    I've been waiting for your take on AI and I was not disappointed. Great work as always. Currently using Msty locally.

  • @GregBressler
    @GregBressler 19 days ago +5

    Great video! Kinda glad this was recommended by Google...wonder how Google knew I was interested.

  • @keithwhite2986
    @keithwhite2986 16 days ago

    Comprehensive overview, thanks. A few months back I saw a YouTube video by Data Slayer running one of these models on an 8GB Raspberry Pi 5 with impressive results.

  • @markldevine
    @markldevine 21 days ago +3

    Best content for Nov 1, 2024. You get my vote. Great job!

  • @theaugur1373
    @theaugur1373 19 days ago +2

    If you have an Apple Silicon Mac, LM Studio is another option that also has a CLI tool (lms). It supports MLX (optimized for Macs) and GGUF model formats.

    • @noanyobiseniss7462
      @noanyobiseniss7462 2 days ago +2

      You didn't get a luv, must not be on the approved list.

  • @raphaelcazenave-leveque7991
    @raphaelcazenave-leveque7991 20 days ago +2

    Cool Geeky video.
    Thanks Naomi.

  • @ronm6585
    @ronm6585 18 days ago +1

    Thank you Naomi.

  • @AnythingGodamnit
    @AnythingGodamnit 18 days ago +1

    Great video. I must admit I gave up on local LLMs about 6 months ago, due to it being *so slow* to run them, even on my beefy desktop machine. I don't have an NVidia GPU, though. I was hoping to see more info on using GPUs in this video, but it's probably a bit advanced/out of scope. I've been meaning to go back and try again and this video has further encouraged me to do so. Thanks!

  • @johnlegend1360
    @johnlegend1360 21 days ago +1

    Omg I forgot about this! Glad you made another video for it!

  • @ninjanape
    @ninjanape 21 days ago +4

    YES! This is literally on my to-do list for next week, and now I have a great source of info to start from. You're the best Naomi :D

  • @johnlegend1360
    @johnlegend1360 21 days ago +3

    Love it! Using this I won't need to agree to terms that allow someone to own MY data. It is mine.

  • @natemarx4999
    @natemarx4999 21 days ago +7

    Naomi makes light blue look extra elegant.

  • @karlamellow1655
    @karlamellow1655 21 days ago +2

    I love this channel. Clear and easy instructions, even for people who aren't amazing with computers. I now have a private AI!

  • @terrorbilly1
    @terrorbilly1 20 days ago +1

    I honestly can't appreciate this enough!

  • @raphaelcazenave-leveque7991
    @raphaelcazenave-leveque7991 12 days ago +4

    I ran some experiments with different Ollama models, comparing hard-drive/SSD usage in gigabytes, RAM usage, and response time in seconds, across three LLaMA model sizes (in billions of parameters).
    CONCLUSION: I suggest sticking with the smallest model (7B) for users with an average laptop.
    If you have more than 16GB of RAM, you can use the 33B model, but it takes a while (especially on my old CPU) to get an answer.
    Here is the raw data from my experiment:
    [ { "Model": "LLaMA-7B", "Time_Calculation_seconds": "5-15", "RAM_GB": "10-12", "Disk_Size_GB": 13 }, { "Model": "LLaMA-33B", "Time_Calculation_seconds": "15-30", "RAM_GB": "30-40", "Disk_Size_GB": 65 }, { "Model": "LLaMA-70B", "Time_Calculation_seconds": ">30", "RAM_GB": "70+", "Disk_Size_GB": 130 } ]
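    The figures quoted above (one user's informal measurements) can be sanity-checked with a short script. Using just the Model and Disk_Size_GB fields, dividing disk size by parameter count lands all three models near 2 GB per billion parameters, which is what you'd expect for unquantized 16-bit weights:

    ```python
    import json

    # Subset of the figures quoted in the comment above.
    data = json.loads("""
    [ { "Model": "LLaMA-7B",  "Disk_Size_GB": 13 },
      { "Model": "LLaMA-33B", "Disk_Size_GB": 65 },
      { "Model": "LLaMA-70B", "Disk_Size_GB": 130 } ]
    """)

    for entry in data:
        # Parameter count in billions, parsed from the model name: "LLaMA-7B" -> 7.0
        params_b = float(entry["Model"].split("-")[1].rstrip("B"))
        ratio = entry["Disk_Size_GB"] / params_b
        print(f'{entry["Model"]}: ~{ratio:.1f} GB on disk per billion parameters')
    ```

    The consistency suggests these were full-precision downloads; quantized versions (as discussed further down) come in at roughly a quarter of these sizes.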

  • @Zane-e8b
    @Zane-e8b 20 days ago +2

    This is HUGE. Thank you!

  • @BogDrakonov
    @BogDrakonov 21 days ago +2

    I’d love to see a tutorial on running your own StableDiffusion as well

  • @waynedickens-i3r
    @waynedickens-i3r 22 hours ago

    holy crap finally found you again

  • @jwb6583
    @jwb6583 20 days ago +1

    Very nice! Have a nice weekend!

  • @lufeacbo8
    @lufeacbo8 21 days ago +4

    Great video. Please do another about local LLMs on Android. I don't know if it's possible at this time.

    • @berylliumdotaep
      @berylliumdotaep 21 days ago +3

      there are people doing it but it's kind of complex

    • @LivingLinux
      @LivingLinux 21 days ago +4

      You can install a Linux terminal with Termux, and from there you can install Ollama. But a few things don't make it easy: I tried with Debian and the directory was not added to the PATH, and I needed a second session, since the Ollama server has to run separately. But it can be done.
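      For anyone trying the Termux route described above, the rough sequence looks like this. This is a sketch, not tested on every device; the proot-distro step and the manual PATH fix are exactly the rough edges the commenter mentions:

      ```shell
      # In Termux (install it from F-Droid), set up a Debian environment:
      pkg update && pkg install -y proot-distro
      proot-distro install debian
      proot-distro login debian

      # Inside Debian, install Ollama with its official install script:
      apt update && apt install -y curl
      curl -fsSL https://ollama.com/install.sh | sh

      # If the binary isn't found afterwards, add its directory to PATH manually:
      export PATH="$PATH:/usr/local/bin"

      # Ollama needs the server running before the client can talk to it;
      # use a second Termux session (or background the server as below):
      ollama serve &
      ollama run llama3.2
      ```

      Expect small models only: phone RAM and thermals limit what will run at usable speed.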

  • @lendaquino979
    @lendaquino979 20 days ago +1

    wow, you smashed it

  • @marcus_cole_2
    @marcus_cole_2 21 days ago +1

    You should talk about the several AIs worth considering for LLM art and more, especially those using the Pinocchio installer engine for PC. Installing on a secondary drive, like D:, is recommended.
    For Android, the 'Local AI Offline Chatbot' app is available as a fully offline option, with no internet required. Although this AI is limited to a G rating, it offers a solid offline experience for those seeking a private, all-ages model running locally.

  • @RandyV2max
    @RandyV2max 18 days ago +1

    great video!

  • @TazzSmk
    @TazzSmk 18 days ago +1

    very well made tutorial! :)

  • @ABC-je8yq
    @ABC-je8yq 14 days ago +1

    This is helpful. Thank you. Subscribed ! : )

  • @samo-zaposao8757
    @samo-zaposao8757 17 days ago +1

    So helpful, thanks

  • @waynedickens-i3r
    @waynedickens-i3r 22 hours ago

    love the work

  • @itchylol742
    @itchylol742 20 days ago +2

    1:03 "a tutorial made by The Hated One" is such a hard line out of context

  • @asan1050
    @asan1050 20 days ago +3

    Naomi Brockwell Thanks for posting this video

  • @wolfgangk4952
    @wolfgangk4952 21 days ago

    Thanks for the video. I missed any mention of the hardware needed for the Docker LXCs, or the backbone where the libraries for all this stuff get installed. Also relevant in this context: the special NPUs (rated in TOPS) that are necessary for the ops.

  • @RarelyCorrect
    @RarelyCorrect 21 days ago +4

    I was hoping to hear more about hardware requirements. What kind of GPU do we need for 40B models?

    • @PyCoder82
      @PyCoder82 21 days ago

      It can run on a CPU too, it's just slower. RAM depends on the model: small ones are fine with e.g. 2-5GB, and really big ones need 40GB+.

    • @PracticalPcGuide
      @PracticalPcGuide 20 days ago +2

      40B at Q4_0 is going to be around 20GB, so a 3090 or 4090 is needed, my friend!

    • @ScriptureFirst
      @ScriptureFirst 18 days ago

      Quantization is shrinking all the models quite nicely. It's kind of the only way to go with light consumer-scale models.
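      The sizing math behind this thread can be sketched with a rule of thumb: Q4-class quantization stores roughly 4.5 bits per weight once the per-block scale metadata is counted (an approximation, not an exact format spec), so file size is easy to estimate:

      ```python
      def quantized_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
          """Back-of-envelope file size for a quantized model.

          Q4_0-style files store roughly 4.5 bits per weight once per-block
          scale factors are included; treat the result as an estimate.
          """
          # params * bits-per-weight, converted from bits to gigabytes
          return params_billion * 1e9 * bits_per_weight / 8 / 1e9

      # A 40B model at ~4.5 bits/weight lands near the ~20GB quoted above.
      print(f"{quantized_size_gb(40):.1f} GB")  # 22.5 GB
      ```

      That is why a 24GB card (3090/4090) comes up for 40B models: the whole quantized model fits in VRAM. With less VRAM, Ollama can split layers between GPU and CPU, at a cost in speed.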

  • @AliceHernandez-u1q
    @AliceHernandez-u1q 15 days ago

    I have just found you and love you. I am pretty ignorant of this and a bit overwhelmed. I started with what I thought was the first step and opened Firefox. Here is my question: in Firefox, I get the message to search using Google. Am I doing something wrong in the Firefox setup? I have disabled all tracking, cookies, and permissions in my Google account. Thank you. I promise I will get better; you just have a lot of info. Thank you very much.

  • @MrSnacko
    @MrSnacko 20 days ago +2

    How do you protect your computer from viruses when you download things from the command prompt?

    • @ScriptureFirst
      @ScriptureFirst 18 days ago

      Depends on your threat model, but a good start is sticking with the big-name sources that have tens of thousands of downloads.

  • @108u9
    @108u9 21 days ago +1

    Hi NBTV, curious: how would users know that no telemetry-style connection happens in the background once the device goes online with these offline options? Couldn't prompts, inputs, and end-user info still be collected that way?

    • @thegreyfuzz
      @thegreyfuzz 21 days ago +1

      I thought the same thing. I've set up multiple instances of different AI chat and image-generation tools on separate machines in a sandboxed VLAN that was blocked from internet access, with firewall logs monitored at the gateway. So far none of them have tried to call home, or off-network, for anything prompt/query related. For queries that needed very new information, not available when the model was built, it did not call out; it instead reported that it didn't have the info to answer. Additionally (I didn't see it covered in this video), there are uncensored models available that you can use; other public models/engines tend to censor very useful info. Quick example: in a SHTF moment, how to make useful compounds, for example KNO3CS.

    • @NaomiBrockwellTV
      @NaomiBrockwellTV  21 days ago +1

      I run Little Snitch, so I see every connection being requested

  • @mijgreco3914
    @mijgreco3914 14 days ago +1

    You should do another Ron Paul interview...been a while.

  • @macbitz
    @macbitz 21 days ago

    Fascinating stuff, and I can see why a local model is good for privacy. However, if I've registered with an AI system (say ChatGPT) with a masked email address, I always use a VPN and I never use personally identifiable information in chats, what is my privacy risk?

    • @NaomiBrockwellTV
      @NaomiBrockwellTV  21 days ago +2

      1) ChatGPT requires an active cell number (not VoIP)
      2) Everything you do through your account is tied into a single profile and kept forever. I wouldn't bet against that information eventually being tied to you.
      3) If you don't reveal sensitive information in chat, I think using ChatGPT is fine. It's just about understanding what information is collected, and making smart usage decisions accordingly.

  • @ZavakShadow
    @ZavakShadow 17 days ago +1

    "Timeless Naomi"

  • @istuff4137
    @istuff4137 18 days ago

    2:35 the most statistically probable responses ;)

  • @alieskandari6863
    @alieskandari6863 21 days ago

    Can we train the models and add parameters to them (forking our own model)?
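    On the forking question above: full training or fine-tuning needs a separate toolchain (frameworks, GPUs, datasets), but Ollama does support lightweight "forking" via a Modelfile, deriving a new model from an existing one with your own parameters and system prompt. A minimal sketch (the base model name and values here are just examples):

    ```
    # Modelfile: derive a custom variant from a base model you've already pulled
    FROM llama3.2
    PARAMETER temperature 0.3
    SYSTEM "You answer briefly and say when you are unsure."
    ```

    Register it with `ollama create my-assistant -f Modelfile` and run it like any other model. Actually changing the weights still requires external fine-tuning, after which the resulting model file can be imported with the same FROM mechanism.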

  • @Jerrybeam101
    @Jerrybeam101 7 days ago +1

    what do you think about gpt4all

  • @skistenl6566
    @skistenl6566 16 days ago

    My concern is that I use LLMs to solve certain issues. Won't using the arguably "not as smart" smaller local models, compared to the ones hosted by those third parties online, result in worse solutions?

  • @Justmylukc
    @Justmylukc 20 days ago +8

    YouTube should be sued for all the lies in the ads.

    • @psxtuneservice
      @psxtuneservice 12 days ago

      What ads? Inform yourself... but Googling for the solutions will most probably not work; use a different search engine

    • @noanyobiseniss7462
      @noanyobiseniss7462 2 days ago +2

      Utube shill didn't luv your comment, you should feel special.

  • @لاتشبهنفسكبإنسانوأفعالكأفعالحي

    Thanks Naomi, but is there a way to jailbreak AI or ChatGPT for privacy, or what if I ask the AI itself to encrypt our chat?

  • @Hindsight101
    @Hindsight101 21 days ago

    How do you update Open Web UI? Thank you
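    Open WebUI in Docker updates the way most containers do: pull the newer image, then recreate the container. Chat data survives because it lives in the named volume, not in the container itself. The container name, volume, and flags below assume the standard install command from the Open WebUI docs; adjust them if your setup differs:

    ```shell
    # Fetch the latest image
    docker pull ghcr.io/open-webui/open-webui:main

    # Replace the running container; the open-webui volume keeps your data
    docker stop open-webui && docker rm open-webui
    docker run -d -p 3000:8080 \
      -v open-webui:/app/backend/data \
      --add-host=host.docker.internal:host-gateway \
      --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main
    ```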

  • @Dj-Z-S
    @Dj-Z-S 20 days ago

    Is the Grass DePin Project safe?

  • @StephenKensler-j3f
    @StephenKensler-j3f 14 days ago

  • @jozsefizsak
    @jozsefizsak 20 days ago +1

    Always a pleasure. (Forgive my lack of originality!) 😊

  • @Un_Pour_Tous
    @Un_Pour_Tous 19 days ago

    Msty works well for the Windows user.

  • @prempalsharma4559
    @prempalsharma4559 5 days ago

    Please review the Invizible Pro and Rethink DNS Android apps

  • @aXimo
    @aXimo 21 days ago

    7:50 ahem, what is that strange website you showed here? I have never ever seen it before.

  • @krishnanmanushresth3400
    @krishnanmanushresth3400 9 days ago

    Can I query my local files with this?

  • @suzihazlove4979
    @suzihazlove4979 12 days ago

    PLEASE, do a video on how to keep my house property safe.

  • @dotaallstars8081
    @dotaallstars8081 21 days ago +1

    Honestly, seeing The Hated One using Nord or the Pirate Bay undermined his credibility

  • @bravohomie
    @bravohomie 21 days ago +2

    I'd like to make an honorable mention of GPT4All: a turnkey UI with CPU-based models for more compatibility

  • @tiagoreis5390
    @tiagoreis5390 18 days ago

    The only problem is that all these models use a lot of resources (some even the GPU)... I think that until we can run a decent AI model locally without needing a modern CPU and lots of RAM, this won't be feasible... I tried already and it was too slow, or the "lower" models are too bad... :( Let's see in the future :)

  • @SirWolf2018
    @SirWolf2018 20 days ago +1

    I don't really like Docker; it's still not user-friendly enough. It's not like a regular application that you just install: you still need the command line.

  • @latitamontoya
    @latitamontoya 4 days ago

    My question to anyone who wants to answer: even though I installed this llama, will my internet provider be able to see my data? It is confusing to me, as I am 1000 years old. Help please

    • @NaomiBrockwellTV
      @NaomiBrockwellTV  3 days ago

      If you are self-hosting a model, you don't need internet access to use it. It's all stored locally on your machine.

  • @ricochet287
    @ricochet287 20 days ago +1

    Naomi, be aware that the Ollama LLMs are not really open source. Meta, like others (Google recently corrected this), is abusing the Open Source definition standard (OSI): these LLMs are trained on private data, and none of the sources are explained or given to users.

  • @FCTouhou
    @FCTouhou 10 days ago +1

    Llama by Meta is not open source. It's wordplay to make you think that, but if you look at the legal terms it isn't actually open source

  • @ElGroggy
    @ElGroggy 6 days ago

    Excellent video. I should have seen it sooner, my bad.
    Is it possible to download an AI model and install it in Ollama or Open WebUI?
    I want to know if it's possible to get one when your home network is already cut off from the internet.
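    On the air-gapped question above, one approach (a sketch, assuming you have access to a second machine that does have internet) is to pull the model there and carry the files over, since Ollama keeps its model store in a plain directory:

    ```shell
    # On a connected machine: download the model once
    ollama pull llama3.2

    # Copy the model store to the offline machine (USB drive, LAN, etc.).
    # Paths assume Ollama's default location on Linux/macOS (~/.ollama/models).
    rsync -a ~/.ollama/models/ user@offline-host:~/.ollama/models/

    # On the offline machine: restart Ollama, then confirm the model is visible
    ollama list
    ```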

  • @Un_Pour_Tous
    @Un_Pour_Tous 19 days ago

    The Pirate Bay was the shyt in its day.

  • @GGrev
    @GGrev 21 days ago +1

    I love you Naomi!

  • @MyHandleIsAplaceholder
    @MyHandleIsAplaceholder 1 day ago

    Inb4 rugpull

  • @Johnny-c6p
    @Johnny-c6p 13 days ago

    I still think you're really pretty. I like LLMs, but yeah, I feel really concerned about them. I'm careful what I put on there, though: only academic, consumer-level stuff. A local LLM is out of reach of my hardware because I can't afford to run it well enough. I only like Claude and GPT so far; I dislike Gemini and Meta's Llama. And "open source" is subjective when Meta does it: read the fine print.

  • @Kebekwoodcraft7375
    @Kebekwoodcraft7375 21 days ago

    Bonjour gorgeous 😊
    I read a study about the crackling voice; it's only happening in the USA, as far as I remember 😊

  • @KomradeMikhail
    @KomradeMikhail 20 days ago +3

    How about instructions on removing AI slop from our lives, instead of adding more to it ?

  • @elvendragonhammer5433
    @elvendragonhammer5433 20 days ago

    Would have been nice if you had linked the actual LLM video; I couldn't find a single video anywhere in his playlist (The Hated One) about anything AI, much less LLMs. Also, 99% of his content looks like it's more likely to get you put on a gov list first. You also didn't even mention TechXplainator, which seems to be where most of the video content is actually from (like the Docker clips), although all 8 of his videos I clicked on have a female AI voiceover, so I don't know where the male voice came from.

    • @NaomiBrockwellTV
      @NaomiBrockwellTV  20 days ago

      What actual LLM video? This is a project we put together in collaboration, this IS the video :)

  • @Un_Pour_Tous
    @Un_Pour_Tous 19 days ago

    People don't care about their data until they are caught in a crime.

  • @Ivan-hu7jf
    @Ivan-hu7jf 7 days ago

    Hey Naomi, is Malwarebytes any good for virus protection on my phone? Please let me know.

  • @scotttovey
    @scotttovey 20 days ago +1

    The only problem I see with your tutorial is that, like most tutorials that tell people to use Docker, it has no instructions on how to install Docker.
    Oh sure, I could do a web search like I have many times before, but they all tell me to run a command in my prompt that my computer says does not exist.
    My biggest frustration with techies is that they make generalized assumptions that are only true if the given computer happens to have the latest, greatest, most advanced everything, including a monitor set to the highest resolution possible, which I cannot see and therefore must set mine to 1280 x 768.
    Other than that, good video.
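    For the Docker-install gap this comment points out: on most Linux distributions, Docker's own convenience script covers the missing step. This is a sketch of the documented route; on Windows or macOS you would install Docker Desktop from docker.com instead:

    ```shell
    # Download and run Docker's official convenience install script
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    # Let your user run docker without sudo (log out and back in afterwards)
    sudo usermod -aG docker "$USER"

    # Verify the install
    docker --version
    docker run --rm hello-world
    ```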

  • @PSM5787
    @PSM5787 5 days ago

    You're being shadow banned; I barely see your videos, and I like them

  • @rogernevez5187
    @rogernevez5187 21 days ago

    [Title] How To Host AI Locally ?
    OLLAMA ???

  • @damianentropy
    @damianentropy 1 day ago +1

    Bad tutorial. Almost nothing worked as shown; I had to look it all up and figure it out on my own.

  • @noanyobiseniss7462
    @noanyobiseniss7462 2 days ago +1

    Was going to watch but I auto block channels with soy face intro screens.

  • @krismorgahan8589
    @krismorgahan8589 20 days ago

    Why are you so focused on running it on a laptop?

  • @Bzl-ak
    @Bzl-ak 20 days ago

    Thank christ I have a brain and don't need these tools.

  • @nicku8642
    @nicku8642 21 days ago

    I don't trust it

  • @pezz779
    @pezz779 20 days ago +1

    Thanks mate, respect from Oz 🇦🇺🦘