Uncensored self-hosted LLM | PowerEdge R630 with Nvidia Tesla P4

  • Published 29 Nov 2024

COMMENTS • 69

  • @halo64654
    @halo64654 4 months ago +17

    For anyone trying this on old enterprise hardware on top of VMs: tread carefully with the HPE Gen 7 through Gen 8. There's a BIOS bug that won't allow you to do PCI passthrough, and you won't be able to do anything PCI related. (A quick way to check whether the host even exposes an IOMMU is sketched after this thread.)
    Also, underrated channel.

    • @JzJad
      @JzJad 3 months ago +1

      I'm guessing this is on specific BIOS versions; I've done PCI passthrough on some Gen 8s and luckily didn't have any issues.

    • @halo64654
      @halo64654 3 months ago +2

      @@JzJad Mine is a G7. I'm personally on the most recent BIOS version. I've pretty much given up trying to make it work.

    • @JzJad
      @JzJad 3 months ago

      @@halo64654 I had done it with VMware and Proxmox; I do remember Proxmox being a bit more of a pain and having issues in some slots, but I never realized it was an HP BIOS issue. RIP.

    • @nokel2
      @nokel2 1 month ago

      Dang, just got a P4 and have an HPE G8... welp, worst case scenario is that I can get a better server in the future, I guess... or sell the card if I really have to...

    • @halo64654
      @halo64654 1 month ago

      @@nokel2 I've heard Gen 8 has better results with workarounds, as those tend to be more favored by the community. I have a Gen 7.
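
  Before committing to a passthrough build on older hardware like the machines in this thread, it's worth checking whether the firmware actually exposes an IOMMU to the kernel at all. A minimal sketch, assuming a Linux host, where /sys/kernel/iommu_groups is only populated when VT-d/AMD-Vi is enabled and working:

      import os

      GROUPS = "/sys/kernel/iommu_groups"

      # Empty (or missing) means the BIOS/kernel combo is not exposing an
      # IOMMU, and PCI passthrough will fail regardless of hypervisor.
      if not os.path.isdir(GROUPS) or not os.listdir(GROUPS):
          print("No IOMMU groups exposed. Enable VT-d/AMD-Vi in the BIOS and")
          print("boot with intel_iommu=on (or amd_iommu=on), then re-check.")
      else:
          for group in sorted(os.listdir(GROUPS), key=int):
              devices = os.listdir(os.path.join(GROUPS, group, "devices"))
              print(f"IOMMU group {group}: {', '.join(devices)}")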

  • @videowatcher495
    @videowatcher495 26 days ago

    The one minute mark threw me for a loop... Then I just laughed really hard. Thanks.

  • @JoeCooperTech
    @JoeCooperTech 4 months ago +2

    Brilliant work. Really well done, Connor. New subscriber here.

  • @LeeZhiWei8219
    @LeeZhiWei8219 3 months ago

    Interesting! A tour of the homelab maybe? Subscribed!

  • @taktarak3869
    @taktarak3869 4 months ago

    Thank you. I've been thinking of starting my own home lab for my final year project, but wasn't able to find a source on where I should start :) cheers mate

    • @Connorsapps
      @Connorsapps  4 months ago

      I’d love to hear more about it. So do you have any particular hardware in mind?

    • @taktarak3869
      @taktarak3869 4 months ago

      @@Connorsapps There are a few IBMs around my local area. I can probably start with them. The last time I tried a Supermicro it didn't like some GPUs.
      I have plenty of GPUs laying around too, mostly Quadro or Tesla cards. Recently got a batch of AMD Vega GPUs (like the 56 and 64) from a retired mining rig too. Since Ollama is getting support for them, I believe it's worth a try.

  • @mopeygoff
    @mopeygoff 2 months ago

    Good video. I run a similar setup on an R720, but I'm using an RTX 2000 Ada Gen (16GB). No external power needed, and it uses a blower-style fan, so no real need for an "external" cooler solution, but they run about $500-$600 on eBay. I got mine for $550. I'm on the hunt for another one. It's basically an NVIDIA 3060 with a couple hundred more tensor cores and more VRAM. So not too shabby. I'm using a Proxmox container for the AI gen stuff. My model is a fine-tuned version of Dolphin-Mistral 2.6 Experimental with a pretty chonky context window.

  • @trolledepicpeeterstyle1678
    @trolledepicpeeterstyle1678 4 months ago +1

    I like this video, keep this up!

    • @noth606
      @noth606 4 months ago

      You know there's a button to save you the time of expressing this as a comment, right? As a bonus it tells YT that you like it too, so it can be prioritized higher in searches and stuff 😉

  • @alivialee
    @alivialee 4 months ago +1

    love the Emperor's New Groove reference haha

  • @vulcan4d
    @vulcan4d 3 months ago +5

    A good test would be to show how many tokens/sec you got instead of duration. (A quick way to measure this is sketched after this thread.)

    • @guytech7310
      @guytech7310 1 month ago

      Answer: less than 1 token per second. The P4 just doesn't have enough go to make it a usable solution.
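
  For anyone who wants that tokens/sec number themselves: a minimal sketch against Ollama's REST API on its default port. The model name and prompt are just examples; eval_count and eval_duration are the fields Ollama reports in the final non-streamed response:

      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={"model": "llama3.1:8b", "prompt": "Why is the sky blue?", "stream": False},
          timeout=600,
      ).json()

      tokens = resp["eval_count"]            # tokens generated
      seconds = resp["eval_duration"] / 1e9  # reported in nanoseconds
      print(f"{tokens} tokens in {seconds:.1f}s = {tokens / seconds:.2f} tokens/sec")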

  • @jco997
    @jco997 1 month ago

    Interesting setup on an Intel Xeon E5-2640. I'm trying the same with my AMD Ryzen 5600GT, but still haven't decided if I should get the M40 with 24 GB of RAM, or the "newer" Tesla P4.

    • @Connorsapps
      @Connorsapps  1 month ago

      @@jco997 the M40 is quite a bit longer, and I would have gotten the P40 or M40 if I could. What server do you have?

    • @jco997
      @jco997 1 month ago

      @@Connorsapps My comment was deleted for posting a link, but it's a custom-built AMD Ryzen 5600GT. Yours, I think, must be a Xeon E5-2640 v4, considering you have a core count of 20.

    • @jco997
      @jco997 1 month ago

      I normally use CPU benchmarks from PassMark, since it gives me a ballpark figure of how much performance I could expect from any CPU model.

  • @TheCreaperHead
    @TheCreaperHead 4 months ago +1

    This was a well-made video. Is this channel going to be about home lab or server stuff going forward? I'm working on my own home lab running Llama 3 through Ollama with my 3090 FE (ik it's overkill lol) and I love seeing ppl make their own stuff. Also, do you know how to make 2 GPUs work with Ollama? I added in a 3060 Ti FE and it isn't being used at all.

    • @Connorsapps
      @Connorsapps  4 months ago

      Programming and tech is my biggest hobby, so next time I have a bigger project I'll probably make a video.
      Depending on the models you're using, GPU memory seems to be the real bottleneck.
      As for getting 2 GPUs to work with Ollama, I wouldn't think this would be supported. Here's a GitHub issue about it: github.com/ollama/ollama/issues/2672 (a sketch of pointing Ollama at specific GPUs follows this thread).

    • @mopeygoff
      @mopeygoff 2 months ago

      I have not been able to split a model across multiple gpus, but Ollama has loaded a second model to a second GPU, or offloaded a part of a model to the CPU. I have an RTX 2000 Ada Gen (16gb) and an old NVIDIA 1650. With the context window, my main LLM is about 12.5GB or so. That goes onto the Ada Gen. When I send something to the 4gb llava/vision model it dumps most of it onto the 1650, with a small chunk going to CPU. It is significantly slower than the main model but not annoyingly so (and hey, I only use it occasionally).
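
  On the two-GPU question above: Ollama documents the CUDA_VISIBLE_DEVICES environment variable for choosing which cards the server can see, so a first diagnostic step is launching the server with both GPUs explicitly exposed. A minimal sketch; the device indices are assumptions for a two-card box, and whether one model actually spans both cards depends on model size and Ollama version:

      import os
      import subprocess

      # Expose GPUs 0 and 1 to the Ollama server process.
      env = dict(os.environ, CUDA_VISIBLE_DEVICES="0,1")
      subprocess.run(["ollama", "serve"], env=env)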

  • @cifers8928
    @cifers8928 4 months ago +2

    If you can fit the entire model into your GPU, you should use exl2 (ExLlamaV2) for free performance gains with no perplexity loss.

  • @bennett1723
    @bennett1723 4 months ago +1

    Great video

  • @technotic_us
    @technotic_us 3 months ago

    I have a PowerEdge R730 8LFF running Unraid. I found this video with a very vague search (tesla llm for voice assistant self hosted), but I was looking at the Tesla P4 for all the same reasons: 75W max.
    I don't want my R730 going into R737-MAX mode (with the door plug removed in flight, so you get the full turbine sound in the cabin, if you want that "riding on a wing and a prayer" vibe, like you're literally strapped to the wing during flight). I considered the P40, but I'm in California; the electricity cost difference could be a week's worth of groceries in the Midwest, or lunch and dinner here...
    Thankfully there's one on eBay for only a couple dollars more than China, and I can have it in 3 days. But it's good to see someone else with basically the same use case. Also running Jellyfin, and wanted acceleration for that too.
    Anyway, glad you did this. Your vid made me confident in the $100 for a low-budget accelerator.
    Btw, what's your CPU/RAM config? I'm on 2x E5-2680 v4 (14 cores each, 28c/56t total) and 128GB DDR4-2400 ECC.
    Everything I want to accelerate is in containers, so I should be good. Thanks again 👌

    • @Connorsapps
      @Connorsapps  3 months ago

      In the Midwest, food cost is actually pretty dang close to everywhere else, but you're definitely right on the electricity.
      I made this video due to the lack of content on this sorta thing, so I'm very glad it was worth the time.
      CPUs: 2x Intel Xeon E5-2640 v3 (32 threads total) @ 3.40GHz
      Memory: 6x 16GB DDR4, ~95GB total

  • @FroggyTWrite
    @FroggyTWrite 4 months ago

    The R630xd and R730xd have room for a decent-sized GPU and PCIe power connectors you can use with adapters.

    • @Connorsapps
      @Connorsapps  4 months ago

      I was actually looking into buying one of those models, but I couldn't justify another heat-generating behemoth in my basement.

  • @loupitou06fl
    @loupitou06fl 3 months ago

    Great video. I got my hands on a couple of Supermicro 1U servers and tried the first part (CPU only) of your video. Is there any other GPU that would fit in that slot?

    • @Connorsapps
      @Connorsapps  3 months ago

      The GeForce GT 730 will, as seen here: ua-cam.com/video/5kueBAgigj4/v-deo.htmlsi=Bl1zuecYDxfYJNgQ&t=188, but you've gotta cut a hole for airflow. You're super limited if you don't have an external power supply, so I'd consider buying a used gaming PC and using it as a server.

  • @shreyasbhat
    @shreyasbhat 4 months ago +15

    The title says Tesla P40, but you are using Tesla P4. I'm not sure if the title is wrong or if I got it wrong. Aren't they different GPUs?

  • @DB-dg9lh
    @DB-dg9lh 3 months ago

    Ya might want to try blurring that recipe again. I can read it pretty easily.

    • @Connorsapps
      @Connorsapps  3 months ago

      Oops. I've added some extra blur now, thanks.

  • @roykale9141
    @roykale9141 4 months ago

    Ok, this was funny and educational.

  • @AprilMayRain
    @AprilMayRain 4 months ago

    Have an R720 with a GTX 750 Ti and need more uses for it!
    Do you think the 2GB of VRAM would make any difference for Ollama?

    • @Connorsapps
      @Connorsapps  4 months ago

      100% for the smallish models. It's definitely worth trying out a few to see. I'd first try ollama.com/library/gemma:2b, then maybe ollama.com/library/llama3.1:8b to see what happens.
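
  A minimal sketch of trying both of those suggestions back to back through Ollama's REST API (endpoint and prompt are assumptions; on a 2GB card the 2b model is the realistic candidate, and the 8b one will likely spill over to CPU):

      import requests

      BASE = "http://localhost:11434"  # Ollama's default endpoint

      for model in ("gemma:2b", "llama3.1:8b"):
          # Pull is a no-op if the model is already present locally.
          requests.post(f"{BASE}/api/pull", json={"model": model, "stream": False}, timeout=3600)
          r = requests.post(
              f"{BASE}/api/generate",
              json={"model": model, "prompt": "Describe a homelab in one sentence.", "stream": False},
              timeout=600,
          ).json()
          print(f"--- {model} ---\n{r['response']}")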

  • @Benderhino
    @Benderhino 3 months ago

    I love how sarcastically he was talking about piracy

    • @guytech7310
      @guytech7310 1 month ago

      Too bad he couldn't find a public domain video about pirates for his video.

  • @UnfiItered
    @UnfiItered 18 days ago

    Just bought myself an R630 (E5-2690 v4, 128GB) to self-host a gaming server and other things. Is the T4 really the best we can do without modifications? Ugh, if so, I'm so mad I didn't go with the xd version so I could get a better GPU for inference and transcoding.

  • @lspecian
    @lspecian 11 days ago

    Drivers and OS? I couldn’t get that from the video

    • @Connorsapps
      @Connorsapps  11 days ago +1

      Good point, I'll update the description. I run Ubuntu 24.04. I'm using github.com/NVIDIA/k8s-device-plugin for working with NVIDIA GPUs in a Kubernetes cluster. That page also links guides on getting OS-specific drivers installed.
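
  For context on what the device plugin buys you: once it's running, the card shows up as a schedulable nvidia.com/gpu resource that pods request like CPU or memory. A minimal smoke-test sketch using the official Kubernetes Python client (image tag and namespace are assumptions):

      from kubernetes import client, config

      config.load_kube_config()

      pod = client.V1Pod(
          metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
          spec=client.V1PodSpec(
              restart_policy="Never",
              containers=[
                  client.V1Container(
                      name="nvidia-smi",
                      image="nvidia/cuda:12.4.1-base-ubuntu22.04",
                      command=["nvidia-smi"],
                      # The device plugin advertises GPUs as this extended resource.
                      resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
                  )
              ],
          ),
      )
      client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)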

  • @Flight1530
    @Flight1530 4 months ago +1

    I just found this channel. I hope you do many more LLM videos with your servers.

    • @jaykoerner
      @jaykoerner 4 months ago

      19:20 - you know you can still read that blurred text, right? At least I can.

  • @k01db100d
    @k01db100d 2 months ago

    llama.cpp works fine on CPU; it's slower than on GPU but still usable.
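
  A minimal CPU-only sketch via the llama-cpp-python bindings (model path, quant, and thread count are assumptions; n_gpu_layers defaults to 0, i.e. pure CPU):

      from llama_cpp import Llama

      # Any GGUF model on disk works; smaller quants keep RAM usage sane.
      llm = Llama(model_path="./models/llama-3.1-8b.Q4_K_M.gguf", n_ctx=2048, n_threads=8)
      out = llm("Q: Is CPU-only inference usable for a homelab? A:", max_tokens=128)
      print(out["choices"][0]["text"])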

  • @HaydonRyan
    @HaydonRyan 3 months ago

    What CPU or CPUs do you have? I'm looking at a GPU for my R7515 for Ollama.

    • @Connorsapps
      @Connorsapps  3 months ago +1

      @@HaydonRyan 2x Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz, 8 cores each

  • @MrButuz
    @MrButuz 4 months ago

    Good interesting video.

  • @thedevhow
    @thedevhow 1 day ago

    Have you looked at Tesla T4?

    • @Connorsapps
      @Connorsapps  1 day ago

      @@thedevhow the price is mainly what scared me off for now. I'd need a better use for my server's GPU than what I'm currently doing.

  • @TheSmileCollector
    @TheSmileCollector 3 months ago

    Could you fit two Tesla P4s? Also, what OS are you using on your machine?

    • @Connorsapps
      @Connorsapps  3 months ago +1

      @@TheSmileCollector it could fit another one, but I'd have to remove its iDRAC module. Ubuntu Server.

    • @Connorsapps
      @Connorsapps  3 months ago

      What OS do you usually use?

    • @TheSmileCollector
      @TheSmileCollector 2 months ago +1

      @@Connorsapps Sorry for the late reply! Just got proxmox on mine at the moment. Still in the learning stages of servers.

  • @lundylizard
    @lundylizard 4 months ago

    Nice video :)

  • @Flight1530
    @Flight1530 3 months ago

    So when are the other GPUs coming in?

    • @Connorsapps
      @Connorsapps  3 months ago

      @@Flight1530 I just got a 4GB NVIDIA GeForce RTX 3060 for a normal PC, but maybe I could get some massive used ones for heating my house once the AI hype cycle is over.

    • @Flight1530
      @Flight1530 3 months ago

      @@Connorsapps lol

  • @internet155
    @internet155 4 months ago

    pull the lever, Kronk

  • @SamTheEnglishTeacher
    @SamTheEnglishTeacher 3 months ago

    Did the instructions it gave you actually work though? If so, I expect a lot more output from your channel, although it may become nonsensical over time.

    • @Connorsapps
      @Connorsapps  3 months ago

      I've already started using TempleOS

    • @SamTheEnglishTeacher
      @SamTheEnglishTeacher 3 months ago

      @@Connorsapps based. After all, what are LLMs but a scaled-up version of Terry's Oracle application?

    • @Connorsapps
      @Connorsapps  3 months ago

      @@SamTheEnglishTeacher hahaha I forgot about that