host ALL your AI locally

  • Published May 20, 2024
  • Ready to get a job in IT? Start studying RIGHT NOW with ITPro: go.acilearning.com/networkchuck (30% off FOREVER) *affiliate link
    Discover how to set up your own powerful, private AI server with NetworkChuck. This step-by-step tutorial covers installing Ollama, deploying a feature-rich web UI, and integrating Stable Diffusion for image generation. Learn to customize AI models, manage user access, and even add AI capabilities to your note-taking app. Whether you're a tech enthusiast or looking to enhance your workflow, this video provides the knowledge to harness the power of AI on your local machine. Join NetworkChuck on this exciting journey into the world of private AI servers.
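
    For quick reference, the core commands the tutorial walks through look roughly like this (a sketch only; the exact, current versions are in the guide linked below):

      curl -fsSL https://ollama.com/install.sh | sh            # Step 1: install Ollama (Linux)
      ollama run llama3                                        # pull a model and chat with it
      docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data --name open-webui \
        --restart always ghcr.io/open-webui/open-webui:main    # Step 2: Open WebUI
      pyenv install 3.10.6 && pyenv global 3.10.6              # Step 3 prerequisite: Python 3.10
      git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
      cd stable-diffusion-webui && ./webui.sh --listen --api   # Stable Diffusion (Automatic1111)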
    📓📓Guide and Commands: ntck.co/ep_401
    ⌨️⌨️My new keyboard: Keychron Q6 Max: geni.us/0SGY
    🖥️🖥️My Computer Build🖥️🖥️
    ---------------------------------------------------
    ➡️Lian Li Case: geni.us/B9dtwB7
    ➡️Motherboard - ASUS X670E-CREATOR PROART WIFI: geni.us/SLonv
    ➡️CPU - AMD Ryzen 9 7950X3D Raphael AM5 4.2GHz 16-Core: geni.us/UZOZ5
    ➡️Power Supply - Corsair AX1600i 1600 Watt 80 Plus Titanium: geni.us/O1toG
    ➡️CPU AIO - Lian Li Galahad II LCD-SL Infinity 360mm Water Cooling Kit: geni.us/uBgF
    ➡️Storage - Samsung 990 PRO 2TB: geni.us/hQ5c
    ➡️RAM - G.Skill Trident Z5 Neo RGB 64GB (2 x 32GB): geni.us/D2sUN
    ➡️GPU - MSI GeForce RTX 4090 SUPRIM LIQUID X 24G Hybrid Cooling 24GB: geni.us/G5BZ
    🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy
    **Sponsored by ITProTv from ACI Learning
    00:00 - Intro: Building an AI Server for My Daughters
    01:01 - Meet Terry: My Powerful AI Server Build
    02:02 - Installing PopOS on the AI Server
    02:38 - What You Need to Build Your Own Local AI Server
    02:58 - Step 1: Installing Ollama AI Foundation
    04:24 - Sponsor: ITPro Training from ACI Learning
    05:20 - Testing Ollama Installation and Adding Models
    06:07 - Interacting with Llama2 AI Model Locally
    07:45 - Comparing AI Performance on Different Hardware
    08:18 - Step 2: Setting Up Open WebUI
    09:31 - Logging into Open WebUI for the First Time
    10:06 - Admin Panel: User Management and Permissions
    11:07 - Customizing AI Models with Prompts and Restrictions
    13:03 - Step 3: Installing Stable Diffusion with Automatic1111
    15:52 - Prerequisites and Python Version Management with pyenv
    17:21 - Installing Automatic1111 Web UI
    18:20 - Testing Stable Diffusion Image Generation Locally
    19:23 - Integrating Stable Diffusion into Open Web UI
    21:17 - Bonus 1: Using Documents for Context in Open Web UI
    21:59 - Bonus 2: Integrating Local AI with Obsidian Note-Taking
    23:37 - Wrap-up: The Power and Potential of Local AI
    SUPPORT NETWORKCHUCK
    ---------------------------------------------------
    ➡️NetworkChuck membership: ntck.co/Premium
    ☕☕ COFFEE and MERCH: ntck.co/coffee
    Check out my new channel: ntck.co/ncclips
    🆘🆘NEED HELP?? Join the Discord Server: / discord
    STUDY WITH ME on Twitch: bit.ly/nc_twitch
    READY TO LEARN??
    ---------------------------------------------------
    -Learn Python: bit.ly/3rzZjzz
    -Get your CCNA: bit.ly/nc-ccna
    FOLLOW ME EVERYWHERE
    ---------------------------------------------------
    Instagram: / networkchuck
    Twitter: / networkchuck
    Facebook: / networkchuck
    Join the Discord server: bit.ly/nc-discord
    AFFILIATES & REFERRALS
    ---------------------------------------------------
    (GEAR I USE...STUFF I RECOMMEND)
    My network gear: geni.us/L6wyIUj
    Amazon Affiliate Store: www.amazon.com/shop/networkchuck
    Buy a Raspberry Pi: geni.us/aBeqAL
    Do you want to know how I draw on the screen?? Go to ntck.co/EpicPen and use code NetworkChuck to get 20% off!!
    fast and reliable unifi in the cloud: hostifi.com/?via=chuck
    1. Set up a private AI server
    2. Install Llama for local AI
    3. Deploy an AI web UI
    4. Integrate stable diffusion AI
    5. Generate AI images locally
    6. Customize AI models
    7. Manage user access for AI server
    8. Add AI to note-taking apps
    9. NetworkChuck AI server tutorial
    10. Run AI on your own hardware
    11. Local machine learning server
    12. Llama AI installation guide
    13. Open source AI server setup
    14. Self-hosted AI solutions
    15. Enhance workflow with private AI
    16. Harness AI power locally
    17. AI-powered note-taking
    18. Privacy-focused AI tools
    19. DIY AI server project
    20. Cutting-edge AI technologies
    21. Democratizing AI access
    22. Empowering users with local AI
    23. Offline AI capabilities
    24. Future of private AI computing
    #aiserver #ollama #llama3
  • Science & Technology

COMMENTS • 1.7K

  • @NetworkChuck
    @NetworkChuck  18 днів тому +52

    Ready to get a job in IT? Start studying RIGHT NOW with ITPro: go.acilearning.com/networkchuck (30% off FOREVER) *affiliate link

    • @MARO_MR
      @MARO_MR 18 днів тому

      first reply

    • @mshark111
      @mshark111 18 днів тому

      @@MARO_MR Second reply

    • @MARO_MR
      @MARO_MR 18 днів тому +1

      @@mshark111 third reply

    • @xozx1715
      @xozx1715 18 днів тому +1

      I use chat with rtx. Do you advise me to change to this?

    • @mshark111
      @mshark111 18 днів тому

      @@MARO_MR LOL

  • @Zvxers7
    @Zvxers7 17 днів тому +369

    Man really gave his kids 2x rtx 4090s for school, he did the "mom i need this [overkill computer] for school"

    • @brandonwiederhold2573
      @brandonwiederhold2573 17 днів тому +23

      Its only a $6K build lol

    • @Zvxers7
      @Zvxers7 17 днів тому

      @@brandonwiederhold2573 only $6000 for school...

    • @notaras1985
      @notaras1985 17 днів тому +62

      ​@@brandonwiederhold2573ONLY 6000? You can adopt me any day

    • @Outsider_07
      @Outsider_07 16 днів тому +1

      @@notaras1985 exactly

    • @fp1715
      @fp1715 15 днів тому +3

      ​@@notaras1985just do a video for vmware

  • @christianmbaba8671
    @christianmbaba8671 17 днів тому +55

    That moment when you realize port 11434 looks like the word llama

  • @JeremyFeldmesser
    @JeremyFeldmesser 17 днів тому +177

    I'm 62 years old and a computer techy, I'm no super genius though and I'm really happy to have been able to run a local AI on my PC. Private AI is the way to go for sure. I signed up for your free academy for now, there's enough in there to keep me learning/busy for a while yet! :)

    • @nahrafe
      @nahrafe 16 днів тому +14

      Good job pops

    • @projectptube
      @projectptube 8 днів тому +9

      now if we can just get some models that have no wokeness/leftist insanity.

    • @gaiustacitus4242
      @gaiustacitus4242 8 днів тому

      ​@@projectptube I would be happy with an AI that could actually write fairly entry level code instead of churning out garbage code that:
      1) won't compile, and efforts to have the AI integrated into the development environment correct the issues make it worse with each iteration
      2) doesn't actually meet requirements (regardless of how many iterations are made to fine-tune the output, by which YOU are training the AI)
      3) is poorly structured (leading to maintainability problems)
      4) lacks proper error handling (leading to problems with stability and data integrity)
      5) fails to follow any type of consistent naming convention (code quality/maintainability issues)
      6) randomly includes variables whose type is determined by their first assignment
      7) creates classes where local data types do not correspond to the columns defined in database tables:
      7.a) string data types do not enforce the defined length limits
      7.b) numeric variables are of inconsistent types
      7.c) the data access layer doesn't handle null values, always storing 0 for numeric data types or zero-length strings for (n)varchar fields
      8) thrashes database connections (a problem that connection pooling implemented in the client stack doesn't reliably solve)
      9) introduces security vulnerabilities.
      I could go on, but why bother? The current state of AI for software development is to have companies and sole developers pay to use it while the AI is trained on the well-written (or at least better-written) source code the developers end up producing. A packet sniffer will show that not only the corrected AI-generated code is being shared, but also proprietary code that has not been authorized for such use.

    • @legendaryphoenix8607
      @legendaryphoenix8607 5 днів тому +1

      ​​​@@projectptube exactly cough... Gemini... cough. But what do you have in mind when you said that? I am interested to know

  • @OgBrog
    @OgBrog 14 днів тому +8

    Alright, now integrate it into Home Assistant with text-to-speech and speech-to-text so you can have your own Alexa that controls your home automation.

  • @guitarguy911
    @guitarguy911 18 днів тому +201

    Ollama troubleshooting: if you can’t run Ollama on the first try, open a new terminal and type “ollama serve”
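
    A minimal sketch of that recovery flow, assuming the standard Linux install (on macOS and Windows the Ollama app keeps the server running for you):

      sudo systemctl status ollama     # the Linux installer registers a systemd service
      sudo systemctl restart ollama    # restart it if it's installed but not running
      # ...or run the server by hand in one terminal:
      ollama serve
      # ...then talk to it from a second terminal as usual:
      ollama run llama3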

    • @ezradevs
      @ezradevs 17 днів тому +9

      On my Mac, I had to keep an "ollama serve" window open; running the ollama commands in a new terminal window would then work.

    • @Jalan-Api
      @Jalan-Api 17 днів тому

      @@ezradevs you do not have to do that to work...

    • @nuggetbugget9305
      @nuggetbugget9305 17 днів тому +5

      @@Jalan-Api I had to use the ollama serve command on my computer for it to work on WSL, but the Windows preview works without using the ollama serve command.

    • @itachi_shrestha
      @itachi_shrestha 17 днів тому +2

      Try ollama run llama3

    • @Jalan-Api
      @Jalan-Api 17 днів тому

      @@nuggetbugget9305 No no, I meant like you do not need the terminal open in background running "ollama serve" on Mac

  • @alexclark6777
    @alexclark6777 12 днів тому +23

    This video was an absolute gem, thank you so much. I've been struggling with setting up local AI and the majority of videos I've watched have resulted in me having to try and learn concepts while also deciphering a very heavy accent from the narrator, which made it so much harder for me to focus. This was clear, to the point, and covered everything I wanted. Thank you!

    • @JG27Korny
      @JG27Korny День тому

      Just use LM Studio. You will get just that, plus recommendations of models and information on whether they can run on your machine. Also, the models get downloaded automatically from Hugging Face.

  • @chinmaykapoor962
    @chinmaykapoor962 16 днів тому +18

    Man!!! My boss showed me the last local AI video of yours, introducing me to your channel. Now I feel any video you’re making on similar topics I need to see them! Make more videos on this, exploring what all we can do, in workplaces. This is so interesting and cool! Thanks man!

  • @mrbabyhugh
    @mrbabyhugh День тому

    4:25 how the video has FLOWED thus far and your SMOOTH TRANSITION in to your sponsor is EXCELLENT. I haven't watched a lot of your videos, but the few I have watched have been fantastic work!

  • @Marustic
    @Marustic 16 днів тому +10

    I only watched like 4 minutes of your video and I wanted to try asap. Not only did I get it up and running in like an hour but I also configured it to be accessed anywhere in the world I want. Thank you for sparking this fun little piece of technology I can utilize in my own home. This is actually much more useful than I thought because I can have my mother utilize this in her everyday life since I’m all grown up now and out of the house.

    • @maxhaberstroh2504
      @maxhaberstroh2504 16 днів тому +3

      Can you point me in a direction for making it accessible from other PCs on a local network?

    • @user-de9db8fj1e
      @user-de9db8fj1e 16 днів тому

      @@maxhaberstroh2504 Tailscale is probably your easiest solution
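
      If you'd rather stay on the LAN without Tailscale, a rough sketch, assuming the Docker-based Open WebUI install from the video and a server at 192.168.1.50 (example address):

        # Make Ollama listen on every interface instead of just localhost:
        sudo systemctl edit ollama          # add the two lines below, then save
        #   [Service]
        #   Environment="OLLAMA_HOST=0.0.0.0"
        sudo systemctl restart ollama
        # Open WebUI published with -p 3000:8080 is already reachable from other
        # machines, e.g. http://192.168.1.50:3000, if the host firewall allows it:
        sudo ufw allow 3000/tcp             # only if ufw is enabled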

  • @Hack_O_Lantern
    @Hack_O_Lantern 17 днів тому +5

    Another fantastic video! And your on screen graphics are some of the best on YouTube.

  • @Bdantioch
    @Bdantioch 15 днів тому +44

    Easy mode: 1. Microcenter's RTX 3090 Ti x2 (24GB VRAM x2) OR get the Tesla K80s (cheaper). 2. A mobo that supports either x16 x2 or x8 x2. 3. Get at least 64GB system RAM (GGUF models run on CPU/RAM/GPU combined). 4. An 850-1,000 watt power supply. Congrats. You have a computer that almost rivals a system with an RTX A6000 ($5,000) card.

    • @sil778
      @sil778 14 днів тому +1

      Thx Man..

    • @user-ze5nf6gv5h
      @user-ze5nf6gv5h 14 днів тому +7

      I'm building a cheap home server for cloud gaming, for 4 VMs: Dell T7810 (200 euro), 2x Xeon E5-2697 v3 (50 euro), 64GB ECC 2400MHz in quad channel (70 euro), Nvidia Tesla P100 16GB (160 euro), plus an added Tesla M40 12GB and a second 1000W PSU. I hope Llama will use 2 different GPUs. Now the server will be for cloud gaming and AI, so cool :)

    • @randallrulo2109
      @randallrulo2109 12 днів тому +3

      Tesla K80... dude, you're a lifesaver...
      I feel seriously dumb for not having found this a year ago...

    • @ToucheFarming
      @ToucheFarming 8 днів тому +1

      @@randallrulo2109 Something you need to know about the K80s: they don't take a normal PCIe power cable; they use an 8-pin CPU plug. You can get an adapter to convert two PCIe 8-pins to one 8-pin CPU connector.

    • @VioFax
      @VioFax День тому +1

      @@ToucheFarming It's also a pain to get working on some workstations like Dell or HP without ReBAR.
      I'd skip the Teslas, TBH. I've been fooling with 2 P40s for 2 months. Really not worth the trouble they caused me. It's a good option if you have no money but plenty of time on your hands and really want to be a masochist trying to keep them cool enough, etc...
      I ended up getting the 3090s and am much happier. Yeah, I lose ECC, but whoop-de-do; I'd rather not be waiting on replies from the model, or running with compression that's already messing with accuracy. 2x 3090s just end up making more sense for the time/money ratio.
      I did get the Teslas to work on a Dell 5820: you have to use nvflash to change the card's vBIOS to graphics mode instead of compute. You lose a lot of performance doing it this way, though; it cuts it in half. But it will work. That was a week of research to figure out.
      I gave up on the Teslas and the Dell after finally pulling this off and having to get a Windows machine to change the vBIOS anyway, and just got 2 3090s in a cheap gaming board. Works so, so much better.
      Looking back, I wish I had not wasted my time. I hope I save someone else some time by sharing my experience with the Tesla cards.

  • @iant720
    @iant720 13 днів тому +2

    This will greatly help my daughter in the future as we plan to homeschool especially since private GPT can be loaded with local sources like PDF's of books. Very hyped for this content!

  • @VincentWillcox
    @VincentWillcox 18 днів тому +13

    Thank you for making it simple! I've followed several tutorials for getting these running locally and they all have their own plus points. Yours, with its Stable Diffusion addition, is a nice added touch!

  • @markddddddd
    @markddddddd 18 днів тому +65

    I am using Ollama on my 13-year-old MacBook Pro and it's running pretty fine. Thanks a lot. Keep up the great work. Thanks for the videos!! :)

    • @Grandwigg
      @Grandwigg 18 днів тому +3

      That is about how old my desktop is. Maybe i have a chance after all.

    • @UmeshJoshi333
      @UmeshJoshi333 18 днів тому

      Good idea ;)

    • @Shadow_Banned_Conservative
      @Shadow_Banned_Conservative 17 днів тому +2

      I want to play with this as well. I wound up with a Best Buy open-box i5-12400, 32GB of RAM, and an open-box Nvidia 4060 OC 8GB. So I'm in for about $600 altogether. I wanted to start as cheap as I could and be power efficient at the same time, at least to start with. Hopefully I'll start playing with it in the next couple of weeks.
      One thing I'm curious about though. I wonder how secure these are. Are they really secure, or is it one of those "not too many of them today so nobody is bothering to hack them, yet" situations?

    • @kulligo3192
      @kulligo3192 17 днів тому

      @@Shadow_Banned_Conservative Self-hosted LLMs are completely local; there isn't really anything to hack.

    • @ronilevarez901
      @ronilevarez901 17 днів тому

      The magic is that the GPU is more powerful than the average 13yo GPU. In my 15yo pc nothing can run.

  • @cks980929
    @cks980929 16 днів тому +1

    +1 on my bucket list. Sir, it's mind-blowing! Combining it with vector embeddings, local chat history and smart home stuff will be awesome too

  • @MichaelTanOfficialChannel
    @MichaelTanOfficialChannel 13 днів тому +1

    If you add multiple models for a chat, both models will respond. You will see “< 2 / 2 >” (assuming you added 2 models) beneath the displayed response for you to navigate between the response of each model. The name of the model’s response is displayed above the response. 11:28

  • @jonjayb
    @jonjayb 18 днів тому +87

    Maaaaaan i did this last week on my own, i just had to wait for the master to come along and do it better haha

    • @jonathonvargas8724
      @jonathonvargas8724 17 днів тому

      That’s awesome bro!

    • @eropoke
      @eropoke 17 днів тому +1

      Me too!

    • @murlock666
      @murlock666 17 днів тому +5

      if you did this alone. be proud of that. don't lessen your achievement. there's enough people out there that will do it as it is. don't help them by doing to yourself.

    • @jonjayb
      @jonjayb 17 днів тому +3

      It all turned out okay. This video helped with Stable Diffusion. Also had some jankyness with WSL networking to work around.

    • @RashadPrince
      @RashadPrince 14 днів тому +1

      Same 😁

  • @DiannaGold
    @DiannaGold 18 днів тому +14

    I love this ... I was wondering what PC project I wanted to do next. NOW I KNOW!

  • @DanielNeedles
    @DanielNeedles 8 днів тому +3

    One caveat: with Windows WSL, access from the outside is not possible without a lot of hoop-jumping. Though "--network=host" will sync up Docker on Ubuntu in WSL2, there is a whole lot more hoop-jumping required to get WSL2 to talk to your local network, as there is no "bridging" option like there is with VMware or VirtualBox.
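
    One workaround people use (a sketch, not from the video) is to proxy the port from Windows into the WSL2 VM. Run these in an elevated Windows prompt; the WSL address is an example and changes between reboots:

      wsl hostname -I      # note the WSL2 IP, e.g. 172.22.145.10
      netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=3000 connectaddress=172.22.145.10 connectport=3000
      netsh advfirewall firewall add rule name="Open WebUI 3000" dir=in action=allow protocol=TCP localport=3000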

  • @laserguidedbrick
    @laserguidedbrick 12 днів тому

    Thanks, I will be setting this one up for sure - As always, your enthusiasm is awesome..

  • @peacemaker9807
    @peacemaker9807 18 днів тому +10

    I was literally thinking of doing exactly this recently, great timing. Thanks.!

  • @briantcosta
    @briantcosta 18 днів тому +10

    This is some next level content, man!! All love from Brazil

  • @jasonp3484
    @jasonp3484 17 днів тому

    Money!!! On the first shot too. Epic my friend! Thank you Sir, for keeping it simple.

  • @saharattosakoon9731
    @saharattosakoon9731 16 днів тому

    "And speaking of...my updates are done." Dude your ad perfectly aligned with my updates as well!

  • @markverstappen1365
    @markverstappen1365 17 днів тому +48

    I love these plain simple straight on explanation videos.
    A suggestion or addition to this would be:
    - how to add or restrict the knowledge base.
    For example:
    - corporate data, pdf's, tables, pictures, statistics etc and how to purely add this info as knowledge.
    - Ask the AI questions and so that it only searches the corporate data and doesn't get blurred with other data.
    - let the AI do analysis on the data and pull conclusions on it.
    This would be a perfect addition.

    • @tonymburu7804
      @tonymburu7804 17 днів тому

      No one does it better, NC is awesome. Simple and very intuitive videos.

    • @jesuiscool7
      @jesuiscool7 17 днів тому +4

      "- how to add or restrict the knowledge base."
      Well, he shows exactly that by showing you the system prompt he gives. You can kinda do whatever you want there, like banning words etc.
      Looking into Ollama, you can also train your model on specific data which can help for your your specific uses cases. There is a lot of documentation/videos on that topic on YT if you want.
      But that's more relevant of AI training than "easy and fast setup" which was the scope of this video.
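
      The CLI-side equivalent of that system-prompt trick is an Ollama Modelfile; a minimal sketch (the model name and prompt are just examples):

        # Write a two-line Modelfile, then build and run a restricted model from it:
        printf 'FROM llama3\nSYSTEM "You are a study helper. Explain concepts step by step, but refuse to write essays or complete assignments."\n' > Modelfile
        ollama create study-helper -f Modelfile
        ollama run study-helper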

    • @matthewarchibald5118
      @matthewarchibald5118 14 днів тому

      check out his last local AI video and his mentions of "Private GPT"

    • @kiranwebros8714
      @kiranwebros8714 14 днів тому

      Instead of chatting with models, there should be agents with specific skills. Why is nobody creating something like that?

    • @randallrulo2109
      @randallrulo2109 12 днів тому

      @@kiranwebros8714 This is what I thought Modelfiles were supposed to be, but it doesn't really look like it...

  • @AdrianC2006Uk
    @AdrianC2006Uk 15 днів тому +4

    Great video. I've just gone through all of this myself. Looks like they have also added a few more features (LiteLLM, Whisper). Local AI is where it's at!!! Privacy first!!! Hoping they add MemGPT and CrewAI/AutoGen to it.

  • @BTraxler
    @BTraxler 12 днів тому

    Thank you so much! I'm getting ready to setup an AI demo and been wracking my brain on what to do, this is perfect!

  • @GordonRaboud
    @GordonRaboud 14 днів тому

    Wow! Very impressed by your skills. (Decades ago I was an SCO Sys Admin and watching you handle the console takes me back. Kind of nostalgic.) 🙂

    • @RhettGibson
      @RhettGibson 13 днів тому +1

      Years ago, I was the SCO admin. I ended up virtualizing the server. It had an ancient application that we needed. I can’t believe I got it to work.

  • @KipIngram
    @KipIngram 14 днів тому +7

    Chuck, THIS has got to be the most significant video I've seen in ages. Thank you for sharing this information. I LOVE the idea that we can now have this power under our own control. I will definitely have to do this when I can gather up enough money to build my own Terry (if I'm going to do it I want to do it right).

  • @alpine7840
    @alpine7840 17 днів тому +3

    This is sweet! Just did this on my spare system and it was faster than I thought it would be.
    i9-10900 with 64GB and an SFF Quadro RTX A2000 12GB.
    Thank you Chuck

    • @Brax1982
      @Brax1982 15 днів тому

      What was faster? These cheap models he is showing? Or got anything better to run?

    • @CafeComClicks
      @CafeComClicks 14 днів тому +1

      lol, wish I had a spare system like that! That's a beast.

  • @FoxyTyger
    @FoxyTyger 15 днів тому

    Thank you TONS! This is exactly what I needed information-wise to get started!!

  • @professorfontanez
    @professorfontanez 17 днів тому

    OUTSTANDING! And that rig is SICK!

  • @theChistu
    @theChistu 18 днів тому +5

    Awesome timing; been tinkering around with Ollama and various models and pondering what my next steps should/could be. Looks like I've got a weekend project!

    • @Sunbeamss
      @Sunbeamss 18 днів тому

      That just became my new weekend project too haha

  • @pr0nGren
    @pr0nGren 17 днів тому +3

    Love your videos! Really enjoy watching them *ofc with a cup of warm coffee!* Your videos are so much better laid out and paced than other guide videos.
    That being said, 2 pointers regarding this video.
    1st: You would have made Terry (from Nine-Nine?) a lot better for AI using an NVIDIA RTX A6000, which costs about the same as your dual-GPU combo did, since AI (unless configured with accelerate) isn't very good with dual GPUs. They can't help each other with the same task. They can, however, split up the tasks if you configure it, but overall an A6000 would have been better. (A quick way to check how both cards are actually being used is sketched below.)
    2nd: Coming from a hardcore Ubuntu lover.. you gotta check out NixOS. It's crazy! Being able to config the entire OS from 1 file and have different versions of dependencies installed at the same time. It's crazy.
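
    A quick, hedged way to check the dual-GPU point above for yourself: watch per-card utilization while a model is generating.

      watch -n 1 nvidia-smi    # GPU-Util and memory per card, refreshed every second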

  • @doa2758
    @doa2758 12 днів тому

    For a "nerdy" video . . . enjoyable to watch, and I appreciate your demonstrations, narrative and humor. Good on you.

  • @sharmaraju
    @sharmaraju 3 дні тому

    Great tutorial content on generative AI. I followed your tutorial on M1 macOS; with some hiccups, it finally worked! Cheers!

  • @justicepayne464
    @justicepayne464 18 днів тому +8

    Thank you for always posting helpful tips. From another IT guy: if you're looking to get into IT or up your game, a lot of us out here reference NetworkChuck's videos for our careers even after we are in the industry. Keep up the magnificent work!

  • @SpragginsDesigns
    @SpragginsDesigns 17 днів тому +5

    Dude, your videos are so good. I never miss a video from you. I'm working on a project analyzing sports data with local AI for work, so it's been very interesting going outside the realm of the simple UIs from OpenAI/Anthropic, etc.

    • @BWane-wd7zz
      @BWane-wd7zz 2 дні тому +1

      Hmm... May be a huge vegas hit

  • @laser_turret
    @laser_turret 14 днів тому

    i heard your new keyboard immediately, glad you've finally gone custom hotswappable :) very marbly sounding build you got

  • @sudarsha-hewa
    @sudarsha-hewa 12 днів тому

    Thank you NetworkChuck! Your videos are both informative and entertaining to watch!

  • @AICentralSA
    @AICentralSA 12 днів тому +2

    This is probably the best video I've seen this year. Please do more AI related content, you're awesome. Already got my CCNA and Azure certs while watching you.

  • @xanZion1
    @xanZion1 14 днів тому

    LOVING the keyboard sounds, adds a bit of entertaining feedback when doing the compy things.

  • @jeremyfontenot496
    @jeremyfontenot496 16 днів тому +1

    Just did it! First time I have ever used Linux! Works great! I’ve got llama3 and llava as models. I might bring in llama2 uncensored to see how it goes! This is too cool!

  • @techlitindia
    @techlitindia 17 днів тому +10

    Over the past year, I've incorporated all your tips into my daily routine, and they've definitely helped me feel more energetic and productive! For a while, I even felt unstoppable! But lately, I've been feeling down and sluggish again.
    It seems like forcing yourself to do all these things every day can feel like a chore, especially if they're not natural habits. It takes time and effort to build new routines, and it can be frustrating to miss a day. This made me realize that feeling good is more about your mind than your body. When you have a positive outlook, you naturally have more energy.
    The key is to find activities that make YOU happy. There's no one-size-fits-all solution! You might find great advice from others, but ultimately, you need to discover what works for you.
    I'm not trying to be negative, just to remind everyone that feeling good and bad is a normal part of life. You might have some fantastic days, but there will also be times when you feel down. Don't let that stop you from doing the important things, even if it's just for a few minutes each day. Consistency is key! Successful people might not always feel amazing, but they make time for what matters most in their lives.

  • @Adopted_Gaming
    @Adopted_Gaming 18 днів тому +7

    Would be great if you could make a video on setting up a local AI language model to be trained on documents that get permanently saved in its memory. Seems like there is potential for that using webAI? I want to use this program to be able to reference a part number and have it give me information on the product or manual for that specific part number in my company.

    • @hillishudson32
      @hillishudson32 17 днів тому +6

      Check out RAG (retrieval-augmented generation). Essentially, use an embedding model to store docs in a vector database, which is queried when you send a prompt so relevant chunks can be pulled into the context window. Lots of videos on RAG out there.
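
      Ollama can also serve the embedding half of a RAG pipeline locally; a tiny sketch (nomic-embed-text is one common choice of embedding model, not a requirement):

        ollama pull nomic-embed-text
        # Returns a vector you would store in a vector database (Chroma, Qdrant, etc.):
        curl http://localhost:11434/api/embeddings \
          -d '{"model": "nomic-embed-text", "prompt": "Example chunk from a product manual"}'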

  • @marsthunder
    @marsthunder 16 днів тому +2

    Last week, I literally installed Ollama3 but with Python only. Jumping through many hoops to get CUDA recognized; the version had to match PyTorch. Anyway, it all works through the CLI, but I love this WebUI install. I'm already playing with local TTS and stoked about Stable Diffusion. Thanks for the timely video!

    • @sil778
      @sil778 14 днів тому

      Ollama3 🤣

  • @Piratacapitan
    @Piratacapitan 11 днів тому

    I followed your tutorial and was able to install the Ollama library and Open WebUI on my computer. I have an RTX 2060 and it runs very well!
    Amazing video!

  • @markoyos5841
    @markoyos5841 18 днів тому +7

    Ohoho this is fire! 🔥

  • @MichelBertrand
    @MichelBertrand 17 днів тому +3

    I've had it running - slowly - on a Raspberry Pi 5. Love the implementation on WSL in Windows 11, **BUT** we definitely need a complete guide for those of us who are running an AMD GPU in Windows.
    Not everyone has $10K lying around to build a server with TWO $3200 CAD Nvidia cards, Chuck...

    • @antonyaustin1388
      @antonyaustin1388 10 днів тому

      The updated version of Ollama detects AMD graphics cards.

    • @MichelBertrand
      @MichelBertrand 10 днів тому

      @@antonyaustin1388 I found that on the ollama website - unfortunately it looks like the cutoff is 6800XT, right above my 6750XT. Oh well.

    • @BrandonHurt
      @BrandonHurt 10 днів тому

      I have it running via Docker using an old Radeon VII and a Ryzen 9 with 12 cores / 24 threads and 32GB RAM, and it runs decently fast on Gentoo. I downloaded Auto1111 the way he showed and it's not any slower than what he shows.

    • @MichelBertrand
      @MichelBertrand 9 днів тому

      @@BrandonHurt does it actually use your GPU? If so I'd be interested to see what your docker config is exactly. It runs ok on just my CPU (13700k), but would be faster using the GPU from what I can tell.
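
      For reference, Ollama's container images come in GPU flavors; a sketch of the documented invocations (whether a given Radeon is on ROCm's support list is a separate question):

        # NVIDIA (needs the NVIDIA Container Toolkit on the host):
        docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
        # AMD via ROCm (note the :rocm tag and the device passthrough):
        docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm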

  • @bystander85
    @bystander85 16 днів тому +1

    Awesome! I was looking for a good starting point and this is a great overview! I'd love to see a follow up video about the Text-to-Speech and Speech-to-Text models that can be integrated with a local AI system.

  • @maxbd2618
    @maxbd2618 2 дні тому

    Thanks for making this video, I'm so excited to start doing this!

  • @MyWatermelonz
    @MyWatermelonz 18 днів тому +7

    Good tutorial! Never used Ollama or Open WebUI. Just gonna say though, for anyone trying: loading SD and an LLM at the same time might be a little rough. You'll probably want 32GB of RAM and 16GB of VRAM; you might be able to get away with 8GB of VRAM.
    Ollama uses llama.cpp as the backend, which runs LLMs in the quantized GGUF format (allowing a model to be split across GPU/CPU, VRAM/RAM).
    SD can also load across the GPU/CPU. When you do this for both, the output gets really slow. So you'll want the specs above as a minimum for both. Also, these are 7B LLMs, which are easier; Terry could run the bigger quantized models, which are a lot closer to GPT-4 performance.
    For everyone else, I'd recommend 7B/8B Llama 2/3, Mistral 7B, and Microsoft Phi-3 (GGUF).
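
    Rough Ollama-library equivalents of the models mentioned above (tags are examples; check the sizes against your VRAM):

      ollama pull llama3:8b
      ollama pull mistral:7b
      ollama pull phi3:mini       # Microsoft's small Phi-3 model
      ollama list                 # shows what's downloaded and how big each model is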

    • @cakekomo
      @cakekomo 15 днів тому

      Do you know a model that can do basic math? All the ones I've tried, including the ones you listed, can't answer the simple question "is 24 divisible by 3" correctly. Some will say no, then explain that it can be (often in inexplicable ways); others say yes, then no through the explanation, again with odd explanations. (I haven't tried Microsoft Phi-3 yet, though.)

    • @MyWatermelonz
      @MyWatermelonz 15 днів тому +1

      @@cakekomo Local models? No. Even Opus, GPT-4, and Gemini 1.5 still suck at math. There are some math-finetuned local models that do slightly better, but I wouldn't trust any model for a calculation.
      Your best bet is GPT-4 using Python data analysis. If you're looking for math concepts to be explained to you, most LLMs are pretty good since they're trained on math textbooks. Just stick to a calculator or Wolfram for the actual computation.

    • @cakekomo
      @cakekomo 13 днів тому

      ⁠@@MyWatermelonzdefinitely sticking to pencil and paper or calculators for the actual math, but when somebody is learning and needs a math concept explained to them having AI give a correct answer would be expected.
      When one is learning the divisibility rules and asks an AI if 832,605 is divisible by 3, it attempts to explain the rule and then shows its attempt at the answer. One would expect that answer to be correct.
      As I mentioned before, several of the models I tried flubbed it one way or the other. So I simplified the question down from 832,605 during my troubleshooting, and they still flubbed it. One even stated that since "8 + 5 = 3", 24 is divisible by 3 (wtf 😂)!
      Thanks for the suggestion!

  • @Napert
    @Napert 18 днів тому +7

    Good luck running anything larger than 8B parameters on just the CPU (and even that might be too big for most people) and expecting more than 2 tokens per second.
    A relatively recent 8GB GPU is highly recommended to run up to 8B models at over 50 tokens per second.

    • @touma-san91
      @touma-san91 18 днів тому +2

      And not just that.. You need to get to something like 100-400B models to be comparable to the bigger AI services.. Those small LLM models are good for things like roleplay and such but when it comes to factual information and productive tasks, they tend to be quite poor.

    • @CappellaKeys
      @CappellaKeys 15 днів тому

      @@touma-san91 First time I've seen someone mention the comparison to the larger ones. Never knew nor thought of that. I might be doing all this work for nothing lol

    • @aaroncarroll4158
      @aaroncarroll4158 15 днів тому +1

      I run Llama 3 70B on CPU only, an i7-13700K and 64GB DDR5. Is it fast, fast? No, but it runs fine.
      I can also run it on my 2021 M1 Mac Pro with 64GB of RAM. Runs fine there as well.

    • @touma-san91
      @touma-san91 15 днів тому +2

      @@CappellaKeys If you have a lot of RAM (minimum is something like 64 gigs for 70B models) and a good CPU and a good GPU with a decent chunk of VRAM, you can run these things using GGUF, but it will probably take a few minutes to get a response out of the larger models. And you really should use GGUF, because that way you can split the load between the CPU and GPU so it runs a tiny bit faster than fully running on CPU.
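
      On newer Ollama builds you can see exactly how a loaded model ended up split between CPU and GPU; a quick sketch:

        ollama run llama3:70b "say hi" > /dev/null &
        ollama ps    # the PROCESSOR column shows the split, e.g. "38%/62% CPU/GPU"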

    • @touma-san91
      @touma-san91 15 днів тому

      @@aaroncarroll4158 I'm curious, how fast is it for you? Like how long does it take to generate a whole message?

  • @crusher70
    @crusher70 11 днів тому

    Love the videos you’re making. I’ve been in the IT industry from the late 80’s and still learning everyday. I look forward to your videos as it’s usually something very interesting and has a purpose. I’m already rebuilding an oldish Razer Laptop. Might not have the horsepower but a good test before investing.

  • @viper33802
    @viper33802 16 днів тому

    I thought I was doing OK running LLMs on my M3 MBP; Terry is on another level.

  • @jimarasthegod
    @jimarasthegod 16 днів тому +7

    Cheaper alternatives that can be combined with other Nvidia GPUs, solely for running AI, are used Nvidia Tesla P40s (24GB of VRAM), currently about ~200 bucks each on the used market. Otherwise go AMD 6800 or newer/better (16GB+ of VRAM), which are also supported out of the box.

    • @Brax1982
      @Brax1982 15 днів тому +3

      Are you kidding? These go for 7k new. I can see that there are a lot of these offers for used ones, but did you ever confirm that it is legit? Looks like very obvious fraud. Or are you trying to run a scam, yourself?

    • @VioFax
      @VioFax День тому

      Those p40's are a pain in the butt though...i'd stay away from them unless you can't do something better.

    • @VioFax
      @VioFax День тому

      @@Brax1982 I have 2 and they work (bought used for $175 each), but they aren't that great and were a pain to get working and keep cool enough... Get a 3090 instead.

    • @Brax1982
      @Brax1982 22 години тому

      @@VioFax Thanks, I was not considering it, because how could they be that much cheaper than list price? Are you sure you got the real ones? I would seriously doubt that...even if "something" works. I guess this is one of those things where you have to be a master engineer to get it to work and that's why it's so cheap...

  • @fchris82
    @fchris82 15 днів тому +12

    How much energy is eaten by Terry per month? Do you have any data about this? Real question, I am interested in it.

    • @abitw210
      @abitw210 5 днів тому

      totally not worth it over regular subscriptions from OpenAi

    • @fchris82
      @fchris82 5 днів тому +1

      @@abitw210 I think you haven't watched the video, or you just didn't understand what it is for. He could give his daughter a "self-prompted" AI with limitations. Can you do the same with OpenAI? And many companies won't share private, sensitive business documents with a third-party AI. I can imagine it is not for you, but that doesn't mean it is not worth it for anybody.

  • @ManagedImpulsivity
    @ManagedImpulsivity День тому

    The "and its ready!" filled me with false hope but was worth the wait when it downloaded :) Guess I've become impatient, yet 20 years ago I would have watched a 1MB file download with anticipation for hours!

  • @aldenhoot9967
    @aldenhoot9967 15 днів тому

    @networkchuck, great intro! Would love to see someone going over customizing or training their own model. This would be a great fit for a private hosting platform like yours.

  • @Lampe2020
    @Lampe2020 18 днів тому +30

    3:15 Oh no, a curl piped into a shell… Aargh!

    • @_modiX
      @_modiX 18 днів тому +4

      Unjustified panic mode. If you install anything from the internet there is always a risk, no matter the install method. The beauty of an installer script is that you can just read it and make sure it's not doing anything nasty.

    • @Lampe2020
      @Lampe2020 17 днів тому +11

      @@_modiX
      The problem with curl|sh is that a failed download can still get executed. So if the script e.g. had some "rm -rf /tmp/someapp" and the download happened to cut off after "rm -rf /", then you can't do anything about it. Or a failed download may leave a partially downloaded script that breaks and leaves you with a broken configuration.
      So rather just download the script, quickly check that it didn't fail (maybe even check the download hash) and _then_ execute it in a separate step.

    • @BruceNJeffAreMyFlies
      @BruceNJeffAreMyFlies 17 днів тому +2

      Could you describe how to do it your recommended way?
      I.E. copy the prompt, but remove " | sh" from the end, and - after SUCCESSFUL download - enter "sh ollama run" ?

    • @nikolai00115
      @nikolai00115 16 днів тому +1

      @@BruceNJeffAreMyFlies Redirect curl into a file, check the file, and then run it.

    • @BruceNJeffAreMyFlies
      @BruceNJeffAreMyFlies 16 днів тому

      @@nikolai00115 Eh, sorry bro. If someone knows how to 'redirect curl into a file, and then run it', they probably already know the answer to my question.
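
      Concretely, for the install command from the video, the cautious version looks something like this:

        curl -fsSL https://ollama.com/install.sh -o install.sh   # download only
        less install.sh                                          # read it before trusting it
        sh install.sh                                            # run it once you're satisfied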

  • @kalsiscorpion
    @kalsiscorpion 17 днів тому +4

    Can we run all this in Proxmox?

  • @michaelwheeler5637
    @michaelwheeler5637 14 днів тому +1

    Great walkthrough. I have been using Ollama for a while; I love the fact that you have shown how to install Stable Diffusion. I never expected that all this would be possible in my lifetime. 75 and still learning. I can't justify building a Terry, so a Jetson Orin will have to do.

  • @HiroyukiMori
    @HiroyukiMori 15 днів тому

    Thank you @NetworkChuck! This will be a project for the future. Amazing.

  • @gravy7861_
    @gravy7861_ 18 днів тому +12

    Terry seems nice

    • @tdrg_
      @tdrg_ 18 днів тому

      He has a great personality

    • @FATEH-se9kr
      @FATEH-se9kr 18 днів тому

      I met him in my dream

    • @birdboygee9660
      @birdboygee9660 15 днів тому

      Have you met Deborah? She is nice too

  • @corymc92
    @corymc92 11 днів тому +5

    Everything about your videos is perfect. You are easily in the top 10 greatest content creators of all time.
    Perfect video structure
    Perfect sense of humor
    Awesome video structure, amazing editing.
    You're fast, simple, and concise.
    Amazing dude. 👊

  • @panchomarquez
    @panchomarquez 9 днів тому +1

    OMG 1 minute into the video and already loving this content! Keep it going!

  • @IvanDinkov
    @IvanDinkov 16 днів тому

    You are super hyped and surprised about everything; I can't imagine keeping that hype while knowing the technical details in such a situation. :D

  • @BarrelOfLube-cl2qq
    @BarrelOfLube-cl2qq 14 днів тому +3

    PS: please support the open source projects you use; the devs put in a lot of effort creating and maintaining them for free, making them accessible for everyone. No pressure though, enjoy free AI for everyone

  • @WoodyWilliams
    @WoodyWilliams 15 днів тому

    Nice home run, NetworkChuck! It's about time.

  • @rossburrow2813
    @rossburrow2813 15 днів тому

    I have been thinking about making a private AI assistant to help me recall information, set reminders, and basically do a lot of work for me.
    Maybe a mobile app and browser plugin...etc.
    I just don't know where to start, but this... This is amazing

  • @Zelanskyel
    @Zelanskyel 10 днів тому +86

    I will do 0 push-ups for each like.

  • @ltnlabs
    @ltnlabs 8 днів тому

    This is awesome. Well done Chuck and team.

  • @outofahat9363
    @outofahat9363 17 днів тому +1

    Another cool use of local AI integration with other apps I'm using is with VS Code, to run your own free local copilot.
    Just download extensions like Llama Coder (text completion) and Continue (sidebar chatbot), choose the model, point them at your Ollama server, and you're good to go.
    The funny thing is you can see your GPU spiking up when you write code, from the code completion suggestions. Welcome to the future! It's 2024 and now we need a graphics card to write Python scripts xD
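
    Whichever extension you pick, it ultimately just talks to Ollama's HTTP API; a quick sanity check that the endpoint is reachable from your dev machine (the model name is an example):

      curl http://localhost:11434/api/generate \
        -d '{"model": "llama3", "prompt": "Write a Python hello world", "stream": false}'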

  • @cocboss8977
    @cocboss8977 16 днів тому

    Just love the things you do! Keep it up!

  • @danc5519
    @danc5519 11 днів тому

    Thanks for sharing this! You've inspired me to start my local LLM chatbot for my fam too.

  • @MarkPanado
    @MarkPanado 13 днів тому

    Awesome, Chuck! Fellow Obsidian enjoyer here.

  • @zeal514
    @zeal514 12 днів тому

    Yea this is the way to go. More and more homelab stuff is getting easier and easier. Being able to take control of your own data, and run tools like this on your own server is really going to be required.

  • @L0wPressure
    @L0wPressure 15 днів тому

    That's amazing. I already use Ollama just as a CLI, but using it inside Obsidian is even better, especially since I save some of the answers in Obsidian anyway.

  • @mikenorfleet2235
    @mikenorfleet2235 15 днів тому +2

    People want it local at home, so you gotta spend, spend, spend. I also went crazy and built a Terry of my own. My view: having a family AI that you control is very important in the coming future. Also, learning the guardrails that you can put on them is very important, as well as having some uncensored models for when they grow up and are ready for real life. Knowledge is power, and these AI models are collections of the very biggest data sets humanity has ever collected. By the way, you helped create these data sets even if you were unaware; thanks, big data.

  • @normanlove222
    @normanlove222 13 днів тому +1

    This was the best video ive seen in a long time. wow
    Subbed.

  • @chieftron
    @chieftron 16 днів тому

    CrewAI is pretty fun to mess around with. Building agents and having them use tools is very awesome.

  • @niv8880
    @niv8880 17 днів тому

    Fantastic presentation - thank you very much for this. Sent it to a few people - especially for their kids

  • @slimerone
    @slimerone 3 дні тому

    this was intense and dense but made so much sense. thank you kind sir

  • @hussein.younes
    @hussein.younes 11 годин тому

    That's great value, just knowing about the things you discussed in the video alone is worth watching.

  • @handlegeoff
    @handlegeoff 17 днів тому

    Hey thanks for all of your contributions!
    I’ve been putting a few notes down as I do my usual things and I’ll be happy to put some projects together.
    I went to school for industrial electrical stuff and production tech years ago but kept all of my notes and endured many an injury while preparing to go back to school or restart an apprenticeship-so there’s lots of ideas for optimizing task sets and relative workflow!
    I’ve been looking at blender and UE5 a bunch over the last few years also, mostly navigating tools & effects and I expect to have something to be able to BLOW YOUR HUMAN MIND with
    Not mind blowing yea, but assistance!
    ASSISTANCE!
    ¡ASSISTANCE RUULES!

  • @unclemike2008
    @unclemike2008 14 днів тому

    Seriously, after 6 months of watching only AI tutorial videos, this is the first time Google recommended this epic beard with an AI wizard behind it. I have so much work ahead of me and clearly not enough beard in front of me.

  • @peoplelikesbymonowproducti9813
    @peoplelikesbymonowproducti9813 14 днів тому

    Bro, this is so amazing. Thanks for all you're doing.

  • @luminographix
    @luminographix День тому

    Wish I had teachers like @NetworkChuck growing up.. you are simply awesome!

    • @luminographix
      @luminographix День тому

      And your 20 min video takes me 48 hours to complete step by step; I wonder how long it takes you to produce. Cheers though!

  • @MarkoTManninen
    @MarkoTManninen 16 днів тому

    That's some serious quality stuff. Brilliant entertaining concept also. I did not know open webui is such a powerhouse app, with all batteries and candies included.

  • @AlanKlughammer
    @AlanKlughammer 16 днів тому

    This was a good starting point. I was able to get it running in a VM on an old server. I don't have a GPU in it, but with 20 cores and a bunch of RAM, it is reasonably fast. To be honest, since it is local, it is probably faster than ChatGPT.

  • @4430salton
    @4430salton 11 днів тому

    I was able to get this running on a Dell T5500 running Ubuntu 24.04, with 65GB RAM, 2 CPUs at 2.13GHz and a GeForce GTX 1660 Ti. As you can expect, it responds slowly, BUT it is working! Thank you for the info video on this topic! This is really amazing to have!! Next step: build a new PC. It won't be as robust as yours, but it will be better than the T5500, which was an old computer I had. Thank you Chuck!!

  • @sean_vikoren
    @sean_vikoren 2 дні тому

    thanks for the great video - got it all going, how fun!
    i took this opportunity to also make the jump from windows
    so it took me a little longer
    pop os is great

  • @MMEDHAT1910
    @MMEDHAT1910 17 днів тому

    Your videos are super fun and inspirational❤ and yess please make a video on obsidian it’s my favourite note taking app too

  • @gerardobrien
    @gerardobrien 6 днів тому

    Nicee! I may have found a new solution for my old server :) Great video! Thanks

  • @tanishksharma2899
    @tanishksharma2899 17 днів тому

    I tried to do it on my own a few weeks back but got stuck in many places. This video is amazing and helped me get through most of those stuck places. Thanks! I think I'll also be checking out Obsidian now that Ollama can be integrated into it. I keep my notes organized in Joplin, but the number of notes is too many now. An AI assistant to help me out would be the best!

  • @sunday20
    @sunday20 16 днів тому +1

    Two 4090's. You really care and love your kids. You are a good father.

  • @timothybaldinger261
    @timothybaldinger261 16 днів тому

    Thank you so much, this is the app I have been looking for. Thank you again.