For anyone trying this on old enterprise hardware on top of VMs: tread carefully with HPE Gen 7 through Gen 8. There's a BIOS bug that won't allow PCI passthrough, so you won't be able to do anything PCI-related.
Also, underrated channel.
I'm guessing this is on specific BIOS versions; I've done PCI passthrough on some Gen 8s and luckily didn't have any issues.
@@JzJad Mine is a G7. I'm personally on the most recent BIOS version. I've pretty much given up trying to make it work.
@@halo64654 I had done it with VMware and Proxmox once. I do remember Proxmox being a bit more of a pain and having issues in some slots, but I never realized it was an HP BIOS issue, rip.
Dang, just got a P4 and have an HPE G8... welp, worst case scenario is that I can get a better server in the future, I guess... Or sell the card if I really have to...
@@nokel2 I've heard Gen 8 has better results with workarounds, as those tend to be more favored by the community. I have a Gen 7.
The one minute mark threw me for a loop... Then I just laughed really hard. Thanks.
Brilliant work. Really well done, Connor. New subscriber here.
Interesting! A tour of the homelab maybe? Subscribed!
Thank you. I've been thinking of starting my own home lab for my final year project but wasn't able to find a good source on where to start :) cheers mate
I’d love to hear more about it. So do you have any particular hardware in mind?
@@Connorsapps There are a few IBMs around near me. I can probably start with them. The last time I tried a Supermicro it didn't like some GPUs.
I have plenty of GPUs lying around too, mostly Quadro or Tesla cards. I recently got a batch of AMD Vega GPUs (like the 56 and 64) from a retired mining rig too. Since Ollama is getting support for them, I believe it's worth a try.
Good video. I run a similar setup on an R720, but I'm using an RTX 2000 Ada Generation (16GB). No external power needed, and it uses a blower-style fan so there's no real need for an "external" cooler solution, but they run about $500-$600 on eBay. I got mine for $550 and I'm on the hunt for another one. It's basically an NVIDIA 3060 with a couple hundred more tensor cores and more VRAM, so not too shabby. I'm using a Proxmox container for the AI gen stuff. My model is a fine-tuned version of Dolphin-Mistral 2.6 Experimental with a pretty chonky context window.
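For anyone wondering how to get a bigger context window out of Ollama, here's a rough sketch using a Modelfile override; the base model tag and num_ctx value are just examples, not necessarily what this commenter used:

```python
# Rough sketch: create a variant model with a larger context window via a Modelfile,
# then run it later with `ollama run dolphin-bigctx`.
import pathlib
import subprocess

modelfile = """\
FROM dolphin-mistral:latest
PARAMETER num_ctx 16384
"""

pathlib.Path("Modelfile").write_text(modelfile)
subprocess.run(["ollama", "create", "dolphin-bigctx", "-f", "Modelfile"], check=True)
```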
I like this video, keep this up!
You know there's a button that saves you the time of expressing this as a comment, right? As a bonus it tells YT that you like it too, so it can be prioritized higher in searches and stuff 😉
Love the Emperor's New Groove reference haha
A good test would be to show how many tokens/sec you got instead of duration.
Answer: less than 1 token per second. The P4 just doesn't have enough go to make it a usable solution.
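If you want to measure tokens/sec yourself, a minimal sketch against Ollama's HTTP API (assuming it's listening on the default localhost:11434 and the model is already pulled):

```python
# Rough sketch: compute tokens/sec from Ollama's /api/generate response fields.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.1:8b",
          "prompt": "Explain PCIe passthrough in one paragraph.",
          "stream": False},
    timeout=600,
).json()

tokens = resp["eval_count"]            # number of generated tokens
seconds = resp["eval_duration"] / 1e9  # eval_duration is reported in nanoseconds
print(f"{tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.2f} tokens/sec")
```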
Interesting setup on an Intel Xeon E5-2640. I'm trying the same with my AMD Ryzen 5600GT, but I still haven't decided if I should get the M40 with 24 GB of VRAM or the "newer" Tesla P4.
@@jco997 The M40 is quite a bit longer, and I would have gotten the P40 or M40 if I could. What server do you have?
@@Connorsapps My comment was deleted for posting a link, but it's a custom build with an AMD Ryzen 5600GT. Yours, I think, must be a Xeon E5-2640 v4, considering you have a core count of 20.
I normally use CPU benchmarks from PassMark, since they give me a ballpark figure for how much performance I can expect from any CPU model.
This was a well-made video. Is this channel going to be about home lab or server stuff going forward? I'm working on my own home lab running Llama 3 on Ollama with my 3090 FE (I know it's overkill lol) and I love seeing people make their own stuff. Also, do you know how to make 2 GPUs work with Ollama? I added a 3060 Ti FE and it isn't being used at all.
Programming and tech is my biggest hobby, so next time I have a bigger project I'll probably make a video.
Depending on the models you're using, GPU memory seems to be the real bottleneck.
As for getting 2 GPUs to work with Ollama, I wouldn't think this would be supported. Here's a GitHub issue about it: github.com/ollama/ollama/issues/2672
I have not been able to split a model across multiple GPUs, but Ollama has loaded a second model onto a second GPU, or offloaded part of a model to the CPU. I have an RTX 2000 Ada Generation (16GB) and an old NVIDIA 1650. With the context window, my main LLM is about 12.5GB or so; that goes onto the Ada Gen. When I send something to the 4GB llava/vision model, it dumps most of it onto the 1650, with a small chunk going to the CPU. It is significantly slower than the main model but not annoyingly so (and hey, I only use it occasionally).
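A quick way to sanity-check where a model actually landed is to watch per-GPU memory while loading it. A rough sketch that just wraps nvidia-smi (assumes the NVIDIA drivers and nvidia-smi are installed):

```python
# Rough sketch: print per-GPU memory use so you can see which card a model loaded onto.
import subprocess

def gpu_memory():
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split(", ") for line in out.strip().splitlines()]

for idx, name, used, total in gpu_memory():
    print(f"GPU {idx} ({name}): {used} MiB / {total} MiB in use")
```

Recent Ollama versions also have an "ollama ps" command that shows whether a loaded model is running on GPU, CPU, or split between them.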
If you can fit the entire model into your GPU, you should use EXL2 for free performance gains with no perplexity loss.
Great video
I have a PER730 8LFF running Unraid. I found this video with a very vague search (tesla llm for voice assistant self hosted), but I was looking at the Tesla P4 for all the same reasons: 75W max.
I don't want my R730 going into R737-MAX mode (with the door plug removed in flight, so you get the full turbine sound in the cabin, if you want that "riding on a wing and a prayer" vibe, like you're literally strapped to and riding on the wing during flight). I considered the P40, but I'm in California; the electricity cost difference could be a week's worth of groceries in the Midwest, or lunch and dinner here...
Thankfully there's one on eBay for only a couple dollars more than from China, and I can have it in 3 days. It's good to see someone else with basically the same use case. I'm also running Jellyfin and wanted acceleration for that too.
Anyway, glad you did this. Your vid made me confident in the $100 for a low-budget accelerator.
Btw, what is your CPU/RAM config? I'm on 2x E5-2680 v4 (14c x 2, 28c/56t) and 128GB 2400 DDR4 ECC.
Everything I want to accelerate is in containers, so I should be good. Thanks again 👌
In the Midwest, food cost is actually pretty dang close to everywhere else, but you're definitely right on the electricity.
I made this video due to the lack of content on this sort of thing, so I'm very glad it was worth the time.
CPUs: 2x Intel Xeon E5-2640 v3 (32 threads) @ 3.400GHz
Memory: 6x 16GB DDR4, ~95GB total
The R630xd and R730xd have room for a decent-sized GPU, plus PCIe power connectors you can use with adapters.
I was actually looking into buying one of those models but I couldn’t justify another heat generating behemoth in my basement
Great video. I got my hands on a couple of Supermicro 1U servers and tried the first part (CPU only) of your video. Is there any other GPU that would fit in that slot?
The GeForce GT 730 will, as seen here: ua-cam.com/video/5kueBAgigj4/v-deo.htmlsi=Bl1zuecYDxfYJNgQ&t=188, but you've got to cut a hole for airflow. You're super limited if you don't have an external power supply, so I'd consider buying a used gaming PC and using it as a server.
The title says Tesla P40, but you are using a Tesla P4. I'm not sure if the title is wrong or if I got it wrong. Aren't they different GPUs?
Oops
Ya might want to try blurring that recipe again. I can read it pretty easily.
Oops. I added some extra blur now, thanks.
OK, this was funny and educational.
Have an R720 with a GTX 750 Ti and need more uses for it!
Do you think the 2GB of VRAM would make any difference for Ollama?
100% for the smallish models. It's definitely worth trying out a few to see. I'd first try ollama.com/library/gemma:2b then maybe ollama.com/library/llama3.1:8b to see what happens.
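If you want to try both quickly, here's a rough sketch that just drives the ollama CLI (assumes the daemon is running and you have disk space for the pulls):

```python
# Rough sketch: pull and compare the two suggested models side by side via the ollama CLI.
import subprocess

prompt = "Summarize what a homelab is in two sentences."
for model in ["gemma:2b", "llama3.1:8b"]:
    subprocess.run(["ollama", "pull", model], check=True)
    out = subprocess.run(["ollama", "run", model, prompt],
                         capture_output=True, text=True, check=True).stdout
    print(f"--- {model} ---\n{out}")
```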
I love how sarcastically he was talking about piracy
Too bad he couldn't find a public domain video about pirates for his video.
Just bought myself an R630 (E5-2690 v4, 128GB) to self-host a gaming server and other things. Is the T4 really the best we can do without modifications? Ugh, if so, I'm so mad I didn't go with the xd version so I could get a better GPU for inference and transcoding.
Drivers and OS? I couldn’t get that from the video
Good point, I'll update the description. I run Ubuntu 24.04. I'm using github.com/NVIDIA/k8s-device-plugin for working with NVIDIA GPUs in a Kubernetes cluster. That page also points to guides on installing OS-specific drivers.
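For context, once the device plugin is running, a pod just asks for the nvidia.com/gpu resource. A rough sketch with the Kubernetes Python client; the pod name, image, and namespace here are placeholders:

```python
# Rough sketch: schedule a GPU-backed pod once the NVIDIA device plugin is installed.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ollama-gpu"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="ollama",
            image="ollama/ollama:latest",
            # The device plugin advertises GPUs as an extended resource.
            resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```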
I just found this channel; I hope you do many more LLM projects with your servers.
19:20 You know you can still read that blurred text, right? At least I can.
llama.cpp works fine on CPU; it's slower than on a GPU but still usable.
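For reference, a minimal CPU-only sketch with the llama-cpp-python bindings; the GGUF path is a placeholder, and the thread count should match your cores:

```python
# Rough sketch: CPU-only inference with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3.1-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,
    n_gpu_layers=0,  # 0 keeps every layer on the CPU
    n_threads=8,     # set this to your physical core count
)

out = llm("Explain GPU passthrough in one paragraph.", max_tokens=200)
print(out["choices"][0]["text"])
```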
What CPU or CPUs do you have? I'm looking at a GPU for my R7515 for Ollama.
@@HaydonRyan 2x Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz, 8 cores each
Good interesting video.
Have you looked at the Tesla T4?
@@thedevhow The price is mainly what scared me off for now. I'd need a better use for my server's GPU than what I'm currently doing.
Could you fit two Tesla P4s? Also, what OS are you using on your machine?
@@TheSmileCollector It could fit another one, but I'd have to remove its iDRAC module. Ubuntu Server.
What OS do you usually use?
@@Connorsapps Sorry for the late reply! Just got Proxmox on mine at the moment. Still in the learning stages of servers.
Nice video :)
So when are the other GPUs coming in?
@@Flight1530 I just got a 4GB NVIDIA GeForce RTX 3060 for a normal PC, but maybe I could get some massive used ones for heating my house once the AI hype cycle is over.
@@Connorsapps lol
pull the lever kronk
Did the instructions it gave you actually work though? If so, I expect a lot more output from your channel, although it may become nonsensical over time.
I've already started using TempleOS
@@Connorsapps based. After all what are LLMs but a scaled up version of Terry's Oracle application
@@SamTheEnglishTeacher hhahaha i forgot about that