I have it as a 3-machine setup currently: an old mini PC running pfSense, a Ryzen 5 5500 with 64GB running TrueNAS, and a 13500 with 64GB running everything else (7 Minecraft servers behind a proxy, a bunch of websites, Jellyfin, Traefik, and a VPN). I would say having the router as a separate machine sounds way better for both security purposes and ease of setting things up.
I put a 5950X into a build for all my stuff recently. I got:
- 5950X
- 128GB DDR4 3200
- 6x 4TB in RAIDZ2
- 2x 1TB Samsung SSDs striped
- 2x 500GB mirrored boot drives
- 5700 XT
- 10 gig networking
Sometimes this works for GPU passthrough: start the Proxmox host with the GPU unoccupied, unplugging every monitor until Proxmox has finished booting. Then plug the monitors back in and start the Windows VM. Helped me too.
@@serikk If it looks like a lost cause or passthrough is glitching, this should be the fix. I have to use this method. This way Proxmox never locks the GPU, so it never has to give it up to the VM.
I've got a similar setup at home: Ryzen 2600, 16GB of RAM, Proxmox host with TrueNAS, Win10 and pfSense clients. RAM compatibility/stability is a problem; I've got two identical kits of G.Skill 16GB 3200MHz RAM, one kit works fine, the other has to run at 2400MHz to be stable, and both kits test fine under Windows. pfSense is not running yet as the reliability is kinda poor (probably due to all the tinkering I've done for GPU passthrough). TrueNAS works great with a single 4TB array (I'd need more RAM for more storage). The Win10 client is for media ingestion and printing/scanning, and most of my passthrough hardware goes to this one. I'll be setting up another Win10 client to RDP into from the garage (that PC is an AMD Fusion APU; it doesn't run well enough for anything other than checking emails and music from YouTube, and only one thing at a time), but I'll need to sort out vGPU passthrough first as CPU decoding of media is a bit slow at times. I'll also get around to setting up a Steam cache and a Minecraft server. All in all it was a fun project to set up.
An idea for the GPU passthrough issue: the Windows VM being UEFI or legacy BIOS could make a difference. I think some GPUs won't work with passthrough unless the VM is UEFI (and some require legacy). I'm not an expert, but that is something I've had issues with as well!
Cool to find this in my feed at random. I'm planning to go this route. GPU passthrough does seem like a moving target. There are many methods out there and none have worked for me so far (same issue: Linux won't give it up). Also, this may be a silly question, but during the times it does work, when you access the VM remotely, are you able to enjoy the high framerates (for instance, if I have a Windows VM for gaming on my server, can I play remotely on a thin client or a low-end laptop)? Anyway, I feel like it's a good solution and would likely use less power than the combined physical devices. Plus, once you get it all set up, you shouldn't have to mess with the host much, so I'm not really concerned about knocking all my systems offline regularly.
I've been running everything on an i5 12400 (previously i7 9700) Proxmox host for years. Although it's really efficient, sometimes I wish I could separate things into their own appliances.
I have a Ryzen 3950X and an X570 motherboard sitting on the shelf which I tried to use for a server, but the problem was the power it used at idle: 90-100 watts doing nothing. I've now replaced it with a B550 board and a Ryzen 5900G, which idles at 20 watts and sits at 55 watts running all my tasks. I could cut another 15 watts off that by removing my RTX 2070 Super and using the internal iGPU for transcoding, but the GPU is passed to a Windows 10 system for gaming etc.
I saw you were running pfsense as well as pihole. I've been using pfblockerng in pfsense for a while and like it a lot, could you tell me how pihole compares?
May I ask why you're using so many VMs? In my opinion this is a real waste of resources. You could at least use more LXC containers to do the same if Docker is not possible with Proxmox (sorry, I'm not really familiar with Proxmox because I moved away from VMs and run most things in Docker and LXC).
For the HBA "problem" I would suggest using NVMe drives for Proxmox and its storage, and passing through the onboard SATA controller! Works pretty well for me across many installations and chipsets!
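If anyone wants to try that, it's only a couple of commands on the Proxmox host; a rough sketch, where the PCI address and VM ID are placeholders for your own setup:
# find the onboard SATA controller's PCI address (example output: 0000:02:00.1)
lspci -nn | grep -i sata
# make sure it sits in its own IOMMU group (or only with devices you can give up)
find /sys/kernel/iommu_groups/ -type l
# hand the whole controller to VM 100 (e.g. a TrueNAS/OMV guest), then restart that VM
qm set 100 --hostpci0 0000:02:00.1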
For IOMMU issues with the chipset's PCIe slot, have you tried adding "pcie_acs_override=downstream" to the kernel boot command line to see if it helps with isolation?
I'm setting up a MikroTik RouterOS VM right now in Proxmox. I just want it to be a backup in case my Pi 4 OpenWrt router goes down for any reason. I already made a copy of the SD card for the Pi, so if it just gets corrupted I can swap it, but if the whole thing dies it's nice to have something to switch to.
Idling at 118 watts sounds a bit too high! I have a similar setup (Windows VM with a 3090 passed through, i5 13400 CPU and DDR5 RAM), a NAS with one 14TB HDD, and two Linux containers for Plex and qBittorrent. It idles at 60 watts. I noticed that the idle power is higher when the Windows VM is off; it seems like the GPU consumes much less power when it has drivers loaded. Also, I'm currently building a second replica but with a weaker GPU (GT 710), and it idles at 30W.
I don't know if this is the case for Proxmox, but... shouldn't you bind the NVIDIA GPU to the vfio-pci driver rather than leaving it on the default nouveau driver?
Not your fault on the GPU passthrough, it's just NVIDIA corporate shit that doesn't play well with VMs, forcing everyone to buy dedicated hardware with a 500% markup over the materials it's built with, rather than just using consumer/off-the-shelf hardware.
I wish we had an M.2 NVMe GPU (not those stupid adapters). Apparently ASRock made one, but I can't find it. That thing would be really helpful for KVM stuff on an iGPU-less processor like this.
Not necessarily. I was able to boot without any display adapter and still access the proxmox web UI. And I tried running it with two GPUs just to make sure.
@@HardwareHaven I have a GT 710 as primary and don't have any spare PCIe slot for a 2nd GPU. Can the primary GPU be used for hardware transcoding or passthrough while also being the primary GPU? Keep doing these vids, great content 👍
It can be used for hardware transcoding on a service or container running in the host OS, but can’t be passed through to a VM while also running on the host
That's why I stopped using Proxmox... because GPU passthrough was always hit or miss and behaved weirdly at times. ESXi worked much better for GPU passthrough.
What are the possible scenarios if I use non-ECC RAM with TrueNAS (no containers, just a pure file & media server)? Will I experience data corruption, knowing the ZFS cache uses RAM?
CORRECTIONS:
- The memory is DDR4 3200 not 2400 (Thanks Ian!)
- I completely forgot to mention the 2TB Samsung SSD for the boot drive. This was also purchased for my new editing PC build.
All the best for LTX
Meanwhile I'm here suffering, trying to get the lowest timings with 16-13-13-12-28 at 3800MHz
@@BBWahoo Which chip? Two sticks or four?
@@LucasSousaRosa Oh dude, I already figured it out, but yeah, dual rank.
Now you just need to build a hot spare. Then add another for 3-way quorum for storage. And an identical offsite backup machine. Then we're back to square one ;)
That’s always how it goes isn’t it?
Hi Jeff!
It's a trap!
@@RaidOwl But just think of all the content and fun hardware you get to set up... :D
Been trying to get my sister or brother to host a backup NAS/home server so I have a private off-site backup. Then they can back up their data on my NAS and vice versa. Plus, I can configure some smart-home functionality and surveillance into their system. For funsies.
You know a tech channel is good, reliable, and helpful when the host talks about the problems he encountered himself. Love your work.
I try haha
@@HardwareHaven TRY HARDER 😂😅
Honestly no, failure is just good content. If you don't believe me, watch any LTT video - nothing about them is reliable or honest anymore.
It's pretty incredible what you can run on even modest, modern x86 hardware. For several months, I ran everything on a Pentium Gold G6400 with 32GB of ram. Host OS was Unraid, virtualized Untangle for router/firewall (gigabit fiber), virtualized Windows with a GPU passed through for a "Console PC" in my Homelab for management/admin... was running plex, pihole, vpn, etc, etc. All of it, no performance issues. So why did I move away from it? Maintenance... Of course I wanted to still tinker, but with a full household of very connected people... it really constrained my maintenance window to before 8am and after 11pm. Not worth it.
Haha exactly. That’s why all of my household things currently run on two separate machines, one of which I basically haven’t touched in like 6 months.
That seems like a lot for only 4 total CPU threads, did you ever run into headroom issues?
@@massgrave8x Not that I noticed. Routing a 1gbit internet connection doesn't take a lot of x86 horsepower and I would say the more important performance variable was RAM in this case - having 32GB gave me plenty of headroom for my workloads.
That's what I love about your channel: most people can't have an entire datacenter at home, and that's when setups like this really save the day!
I think people greatly overestimate how much power they need to run stuff in their homelab. I am also EXTREMELY guilty of this lol. Great stuff!
People were running early web servers with 10k visitors a day on 486 machines with 16MB of RAM back in the '90s!
Very true haha
That's how you end up with hot tub water "cooling"
I've got 1184TB of storage and 92TB of enterprise SSDs in my homelab, so ... YUP, guilty
@@BenjaminArntzen O.o
You pretty much demonstrated a conclusion I've been struggling with for a while... there are some things that CAN be done on a virtual machine, but probably shouldn't be. The two glaring examples I picked out were the router and the NAS. I've been trying for a couple of months now to put as much on my NAS as possible, but found that all too often taking down the NAS to get a container working would cause problems with the NAS itself. I finally removed everything from it but Jellyfin, Syncthing and Nextcloud... and it might get an instance of pihole. Now it sits quietly drawing about 21 watts, storing my stuff and always available when needed. For a router I wanted something with more capability than the typical consumer device, but it needs to be always available so my family can do what they do while I'm playing with stuff. So I chose to buy a Netgate device.
I think it comes down to risk/availability tolerance. If it needs to be (nearly) always available, it shouldn't really be virtualized... or it should be virtualized in a cluster. For a home lab, where a cluster isn't really feasible for most people, you just have to pick your battles. Whichever way you go, thanks for sharing the struggle!
I feel you on that GPU passthrough. It has been the bane of my existence on Proxmox. I've been working on it for the past 8 hours for the GPU on my laptop Intel chip. I worked my way through 3-4 different errors - used SEVERAL different guides that all gave different instructions - and finally got stuck.
I had this same issue. From what I understand, since it's the primary GPU, it gets grabbed by the BIOS and GRUB and never really gets properly released. Even though Proxmox is mostly web-managed, it still takes hold of it (probably Debian does) and just screws you over. I can only get it to work with the nice GPU in the secondary PCIe slot.
@@choahjinhuay My motherboard and CPU don't have integrated graphics, so the system uses my GPU as primary at boot. I unregister the ZOTAC GTX 1050 Ti from Proxmox to get it ready for passthrough, but it wasn't that simple. My GPU is pretty locked down, so I had to dump the GPU ROM file and modify it to work with Proxmox, then append the custom ROM so the GPU loads from the VM conf file. Took me days to figure it out! Phew.
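For anyone attempting the same thing, the ROM-dump approach described above roughly looks like this from the Proxmox shell; the PCI address, ROM filename and VM ID are placeholders, and whether the ROM actually needs patching depends on the card:
# read the vBIOS straight off the card (use your GPU's address from lspci)
cd /sys/bus/pci/devices/0000:0a:00.0
echo 1 > rom
cat rom > /usr/share/kvm/gtx1050ti.rom
echo 0 > rom
# then point the VM at it in /etc/pve/qemu-server/<vmid>.conf:
# hostpci0: 0000:0a:00.0,pcie=1,x-vga=1,romfile=gtx1050ti.rom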
Yeah, similar issues. Not sure what the silver bullet ended up being, but I ended up blacklisting the GPU in the host, moving it off of slot 1, and switching to q35. Now it works OK (RTX 2060 12GB). Somehow, the hacked vGPU-on-consumer-GPU thing Craft Computing does was easier.
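For reference, the blacklisting part is usually just a couple of modprobe files on the host; a sketch where the vendor:device IDs are made-up examples you'd replace with the output of lspci -nn for your card and its HDMI audio function:
# keep host drivers off the card
echo "blacklist nouveau" >  /etc/modprobe.d/blacklist-gpu.conf
echo "blacklist nvidia"  >> /etc/modprobe.d/blacklist-gpu.conf
# claim the GPU (and its audio function) with vfio-pci instead (example IDs only)
echo "options vfio-pci ids=10de:1f08,10de:10f9" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all
reboot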
@@ccoder4953 IMO, the problem is there's no iGPU.
The motherboard needs a GPU to start, and once the card is in use, blacklisting is pretty much impossible. I would give it a try with two different GPUs from different brands: throw in an AMD/NVIDIA combo and blacklist the NVIDIA card for passthrough. That worked in my DIY box with an AMD/NVIDIA combo.
There are some motherboards that will boot without any GPU, though, so that could be an option too, or getting a more server-oriented motherboard.
Further, there is another problem with the NVIDIA commercial series: NVIDIA doesn't allow (and actively fights) virtualising them.
Just FYI, I could never get GPU passthru (in my case iGPU passthru) working on UEFI VMs. Only BIOS/i440fx VMs.
This video covers so many things that I want to get back to doing, but I've just been distracted with life and other things. Thank you for showing us your struggles. It helps the rest of us know that we're not alone when we meet hardware challenges. And I do plan on getting back to trying to virtualize all of my appliances on less hardware -- it just takes time to overcome the issues while trying to keep everything else in life running too. So, yeah, I feel your struggles and time constraints.
I am quite confident this kind of setup can be an amazing workstation if you only focus on the storage and computing aspects while keeping the networking on a separate low-power machine.
The most power-efficient machine is one that is spun up only on demand, so for my personal needs, the NAS wouldn't have to run 24/7. What I would need instead is a means to power it on from afar, e.g. by hooking it up to a Pi-KVM (or hoping WoL works properly; there's a quick sketch after this comment). Then it's a matter of getting a ZFS-root setup to work, throwing in 2 NVMe drives and a couple of HDDs, and you're off to the races.
You should still have one dedicated NAS for cold-storage backups, as well as one large-capacity external drive you can just unplug and throw into a safe or a remote location, though.
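In case it saves someone a search, the WoL part is usually just two commands once it's enabled in the BIOS; the interface name and MAC address below are placeholders:
# on the server: keep Wake-on-LAN armed for magic packets ("g")
ethtool -s enp4s0 wol g
# from any other box on the LAN: send the magic packet to the server's MAC
wakeonlan AA:BB:CC:DD:EE:FF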
I like the idea of having a hyper-converged home lab. I use LXD containers to run all my services (router, OTA/satellite TV, DHCP, DNS, SMB, IoT automation, VMs for a Kubernetes cluster, among other services) on a single node (Ryzen 3700X, X470D4U motherboard). I didn't want to maintain multiple physical servers and took a minimalist approach; heck, I even replaced my GPON modem/ONT with an SFP module connected to my MikroTik switch on its own VLAN. It's been running rock solid for the past year, and the entire process has been very educational. I can try new configurations on my containerized router, and if things go south I just roll back to a working snapshot. Plus, I can spin up as many virtual routers with as many virtual interfaces as needed to try out new stuff or learn how routing protocols work. I will probably be adding another node or two for high availability.
It seems I'm always on the hunt to bring power draw down, or to think about services in a different way. I'm with you: I'd not put the firewall/router on the same machine you're trying other things with, unless it's one that's just part of the lab. Overall a neat concept, and I understand the frustration of just trying to get a video done.
Regarding your IOMMU issue with the HBA:
For okay-ish IOMMU groups you would need X570 rather than B550 or A520.
The X570 IO hub on the PCH is basically what AMD used on Zen 2 CPUs anyway, so it has more granular IOMMU grouping. X570 as a chipset was developed by AMD themselves based on Zen 2 CPU silicon; its IO hub mimics the IO hub that EPYC uses, as EPYC by itself doesn't have a PCH.
B550 and A520, and any 400- and 300-series chipsets for that matter, were subcontracted to ASMedia, and ASMedia did not do the best job with their IOMMU groupings.
I am running quite a similar box in my lab (though it's currently shelved for the summer as it's not needed at the moment) on ESXi 8 with an R9 3900X on an X570 board.
Yeah I imagined so. That once again comes down to buying the mobo for editing rather than a goofy home server haha. Appreciate the input!
Very fine tips and tricks 👍 in the comments section on IOMMU groups and GPU passthrough.
Thank you.
Oh boy! Where has all this come from? I remember messing around with FreeBSD on used DDR2 systems and tinkering with the Raspberry Pi, but that was 10 years ago! So much has happened since I left.
About the passthrough issues you had at 14:50, I think I had the same issue last year when passing through my GPU. In my case (also B550), enabling Above 4G Decoding in the BIOS would *silently* also enable Resizable BAR support. Manually disabling that, while obviously keeping 4G Decoding on, fixed my issue.
Hmm.. I’m pretty sure I checked both (they are right next to one another), but maybe I didn’t notice it.
Might it be a problem with the CPU? I mean, not to get into an Intel vs AMD war, but... I've always had concerns about these kinds of little issues where nobody knows the real cause.
Resizable BAR support, manually disabling it... helpful, thank you 👍
For the IOMMU group issue, you can add an ACS override line in the GRUB file and force them into different IOMMU groups. I did the same thing and it worked.
I attempted something similar. AMD IOMMU is always a pain, but this kernel command line may let the devices connected to the chipset be split:
(GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nomodeset") If you ever plan an AMD Proxmox server, try using a "G"-series processor; it will make GPU passthrough so much easier.
Yeah an APU would definitely be valuable haha
@@HardwareHaven I can confirm the same, I never had issues splitting IOMMU groups on the chipset both on AMD and Intel platforms. Of course, if the problem is in the IOMMU implementation on motherboard there's nothing much that can be done about it.
ACS override 👍
Thank you
Hey, Congrats on 100K subscribers !!!
Thanks!
As someone who went from a mini-PC-based homelab to a monolithic single server on Proxmox and back to mini PCs again, I think separate devices for separate functions is the way to go for me. I also cut my power consumption by more than half by going back to multiple mini PCs.
I am thinking about separating my services onto mini PCs, but the biggest showstopper atm is the missing IPMI... there's PiKVM etc. They say they're "cheap" KVM solutions, but I find them quite expensive, and if you're going to have multiple mini PCs, you'd need a PiKVM per PC...
@@KisameSempai I think you can run PiKVM on the Pi Zero 2 W. The only pain is getting one at the moment.
My thinking: start with your information and system needs, and also look at local power prices (HUGE in the UK, Australia, etc.). A lot of stuff can be handled by ultra-low-power solutions such as a Raspberry Pi with attached USB 2.5" storage (SSD or spinners).
I think virtualising your outward-facing firewall/router is too dangerous - the risk of a zero-day virtualisation-layer breach is always there, and incoming packets have to be handled by Proxmox before pfSense gets them, so it's an extra layer of vulnerability of course. You can run a dedicated physical hardware solution for as little as 5-10W of power nowadays, and 20W is common (though you need 64-bit-capable x86 to run pfSense, last time I looked).
Personally I consider this type of setup useful for a not-always-on lab solution. I've run ESXi for 15 years or so and used to do what you're doing here (I lacked the Linux skills, and initially the ultra-low-powered SBC hardware, to do it). Today I use it as a fire-up-when-I-need-the-lab setup: I use Wake-on-LAN packets to wake the system up, and to shut it down I shell into the ESXi host and execute a script; it's really effective, and the host runs headless in a cupboard on an 11-year-old Xeon.
Also, for those with archive servers: split things up a bit. I use two small drives for always-available stuff, and the rest is on two storage tanks which are powered down unless I'm archiving or retrieving. I can do backups that way too - a WoL packet wakes the server, MQTT messages notify of boot completion, and the backup or archive proceeds, after which it's powered down (I use an MQTT subscription script on the backup server which puts it into standby or shuts it down).
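A wake/backup/shutdown cycle like that can be a very short script on the backup machine; this is only a sketch, assuming the archive server publishes an MQTT status message once booted and listens on a command topic (the hostnames, topics, MAC and paths are all placeholders):
#!/bin/sh
wakeonlan AA:BB:CC:DD:EE:FF                          # wake the archive server
mosquitto_sub -h mqtt.lan -t archive/status -C 1     # block until it announces it's up
rsync -a --delete /srv/data/ archive.lan:/tank/backups/data/
mosquitto_pub -h mqtt.lan -t archive/cmd -m shutdown # tell it to power back down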
Another thing I do is I have an old half-missing laptop I power up for chomping on things (e.g. compression tasks) in the background - I can script its wake, compression task and even shutdown. Cost me nothing and is only on when I am actually using it. It’s surprising what you can do with older kit that has a lot of grunt but chews power if you use this technique - sure it might chew X watts at the wall but if it’s on for an hour a week it’s not that big a deal.
Thank you 👍
Zero day risk is a helpful callout on virtualising a router.
Kindest regards, neighbours and friends.
Oof! He's running HDDs on a smooth hard flat desktop surface. The vibrations could harm the drives. At least run them on some kind of insulating material to dampen the vibrations. Like, a mouse pad.
If you have a managed switch that supports trunking/LAGG, I would suggest setting up the 4 ports on the NIC as LAGG ports and running the VLANs and network through that. Since you have two onboard 2.5GbE NICs, you can run one as WAN in pfSense and the other for something else. I know it's just a temporary setup, but it's totally doable. I run LAGG on my pfSense system, TrueNAS, Proxmox, and Proxmox Backup Server. It just gives you more lanes when tx/rx-ing data to multiple devices that demand high throughput, like the servers mentioned.
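On the Proxmox side, a LAGG like that is a short /etc/network/interfaces stanza; a sketch with assumed NIC names, addresses and 802.3ad (LACP) mode, where the switch needs a matching LAG configured on those ports:
# /etc/network/interfaces (example interface names and addresses)
auto bond0
iface bond0 inet manual
        bond-slaves enp5s0 enp6s0 enp7s0 enp8s0
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094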
I can definitely relate to the passthrough issue. Around a year ago, I was starting my journey into Linux (virtualizing Windows through OVMF). The biggest issue I had was PCIe passthrough of my 2nd GPU, which I realized was in the same IOMMU group as something else unless I put it in the bottom PCIe slot, which was scraping against my PSU.
My solution: upgraded my CPU and motherboard to ones with an identifiable block diagram 😂
My Proxmox box is running on an old HP Compaq Elite 8300 SFF where I upgraded the CPU to an i7-3770 and the RAM to 32GB... then slapped one of those IcyDock 4x 2.5" cages into a 5.25" bay with a cheapo Chinese SATA card, passed it through, and I'm running TrueNAS with RAIDZ1; the OS and VMs run from a 1TB SSD. I don't do massive stuff with it, just a few Debian VMs, OMV, a Windows 11 Lite for my security cameras, and TrueNAS... it runs well... but I would LOVE to have a 32-thread CPU... the power draw would hammer my UPS though.
Cool video!
Keep em coming!!!!
Love your content, it scratches that itch I have to play with home lab stuff without having to spend hours of time fixing the things I break
It may sound counter-intuitive, but I still prefer to have multiple PCs and dedicate one to each function (one for 3D graphics, one for the NAS, one for the firewall, etc.).
For power consumption I added a couple of photovoltaic panels; at night I turn off the PCs that aren't needed, and in the morning everything restarts automatically with wake-up from BIOS.
This is just my point of view, I really like your videos and I am subscribed almost from the beginning of your channel 🙂
Cool! I have been planning a setup almost like this for my Haswell Xeon server but haven't prepared for it yet. I also have the IOCrest 4-port 2.5G Realtek PCIe x4 card. Thanks for the effort; yours is a great, powerful machine.
I've set up a similar thing in the past few weeks in my free time... took my old X5680 Supermicro and replaced it with an i7-7700-based machine, and added 2.5G and an extra two-port Intel gigabit NIC... it has five JBOD disks as well... so I have Ubuntu (no hypervisor) running Plex and a local SMB share, and then inside of that two Ubuntu VMs, one running WireGuard only and one running Nextcloud... I was going to pass hardware through to the VMs, etc., but I ended up doing the easy thing and just teamed my 4 NICs together and bonded the two VMs to that bond within VirtualBox (nope, you don't need QEMU/KVM if you don't want it!... but yeah, no passing through, so that's easy for me)......
I LOVE having WireGuard, I'll tell you that... this server combined with my Belkin AX3200 running OpenWrt is good for me... not enterprise, but who cares, it works so well I can't imagine Cisco could do better (for such a small setup, that is).
Good content, but you took the easy way out! The issue with all the chipset devices in one IOMMU group can be overcome. Unraid can do it easily through a GUI, so I'm sure Proxmox can do it too. I would actually have liked to see that.
Great examples. I would rather have seen some large file transfers, network speed tests, and pretty performance graphs. Something like Netdata would be fascinating. Impossible to tell if it's actually usable vs. just functional otherwise.
Ty
Instead of OMV or TrueNAS or a dedicated OS for sharing, I just set up a container with Cockpit installed to manage the shares and permissions, and let the host OS do the heavy lifting. It seems to be working out well so far; I've been running it in my "production" environment for a few months now.
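For anyone curious, on a Debian-based container that's roughly two commands; the web UI then answers on port 9090, and share management specifically usually comes from an add-on such as 45Drives' cockpit-file-sharing (an assumption about the exact setup here):
apt update && apt install -y cockpit
systemctl enable --now cockpit.socket
# then browse to https://<container-ip>:9090 and log in with a local user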
What's Cockpit? Could you explain how it works? Thank you.
congrats on 100k man your vids are great godspeed bro
Regarding your difficulty with passthrough: there are a lot of changes between kernel versions that break PCIe passthrough, particularly GPU passthrough, and it really gets into the weeds, especially around framebuffers. Try staying on kernel 5 for now and not upgrading to kernel 6.
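If you go that route on Proxmox, you can pin the known-good kernel so updates don't boot you into a newer one; a sketch, where the version string is a placeholder for whatever "proxmox-boot-tool kernel list" reports on your host:
proxmox-boot-tool kernel list                 # see which kernels are installed
proxmox-boot-tool kernel pin 5.15.108-1-pve   # pin the working 5.x kernel (use your exact version)
reboot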
As a PC/tech nerd who is also into networking, I eventually just wanna put a server inside a case and have clusters of them, or at least multiples of them 😅
I know this probably got covered somewhere, but I am just paralysed by choice: where would you recommend for someone like me (software tester who did some networking in the past) to start with a home lab?
What are the things I should get and what should I avoid (pitfalls, traps, things that can be virtualised in a Docker container and be done with, or falling into the trap of getting a 24-port router because it might come in handy later, etc.)?
3950X is still a monster of a cpu.
My goal for a while now has been to do something similar without running a router/firewall on it, running Unraid or TrueNAS Scale instead of Proxmox. I have a spare X470 board, and I want to use a 3900 or 5900 OEM CPU with a 65W TDP, plus a P2000 and a GTX 1070 for Plex/Jellyfin and a Windows VM respectively.
That, or get a DDR4 Z690 board and use an i5 13500 without the P2000 and rely on Quick Sync instead.
A 6c/12t VM for remote cloud gaming, with the remaining threads running a NAS and the containers I use.
Awesome video! The GPU passthrough issue makes me glad to be running Unraid, since I have a very similar setup (minus the router setup) and haven't had an issue passing my 3050 to a few Docker containers, one of them being Plex (:
Keep the videos coming though they're always a great time.
Bro is it possible to change your background music? No hate, love your channel, but it might be time for a new track tbh…
I can see you are running both TrueNAS and OpenMediaVault; could you explain their best use case/purpose scenarios?
You could have used ACS patches to separate all of your PCI devices into their own IOMMU groups, so you could have used the HBA. I'm pretty sure that Proxmox's kernel already has the patches applied, so you would just have to add
pcie_acs_override=downstream,multifunction
to your GRUB command line, then update GRUB and reboot.
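Concretely, that's one line in /etc/default/grub on the host, a regenerate, and a reboot; a sketch (the "quiet" default and the verification loop are just illustrative):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"

update-grub
reboot

# afterwards, check how devices ended up grouped
for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=$(basename "$(dirname "$(dirname "$d")")")
        printf 'group %s: ' "$g"; lspci -nns "$(basename "$d")"
done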
Yes, I have something similar: 5950X, 128GB ECC @ 3200MHz, 2x 16TB storage, RTX 3090, GT 730, etc., with TrueNAS Scale as the OS, running Pi-hole, Nextcloud, a Valheim server, and a gaming/editing Windows VM (w/ GPU passthrough) which I use remotely through Sunshine and Moonlight (you need a dummy HDMI plug, which maxes out at 1440p 120Hz; to get 4K 120Hz I had to get a DP 1.4 to HDMI 2.1 adapter).
Yo dawg, I heard you like backups with your backups 😆 I'm the same way. I got my '3' in a bank vault
Couple of things:
1) Your mileage will DEFINITELY vary.
I recently went through a massive consolidation project where I consolidated 4 of my NAS servers down to a single system.
Before the consolidation, I was using about 1242 W of power. Now, with my new server (which houses 36 3.5" drives, runs dual Xeon E5-2697A v4 CPUs (16-core/32-thread each) and 256 GB of RAM, and has somewhere around 228 TB of total raw hard drive capacity, I think), typical power consumption is around 585 W. And whilst that's twice what this system draws at load, I'm also running what I think is up to 13 VMs now, plus 2 containers, all simultaneously.
2) I have absolutely ZERO problems passing through the RTX A2000 6 GB.
Remote desktop sucks, but I can still play Anno 1800 through it.
(And I have been able to get hardware transcoding with Plex work on that VM as well.)
3) Also as a result of my consolidation, I was able to take out my 48-port gigabit switch and now I just run a Netgear GS-116 16-port GbE switch, which consumes like 7 W of power (vs the ~40 W that the 48 port used to take.)
4) On top of all of that, instead of getting more networking gear (e.g. 10 GbE NICs and a 10 GbE switch, which aren't cheap), I just run virtio-fs so that the VMs can talk directly to the Proxmox host.
I set up SMB, NFS, and an iSCSI target on it, got a Steam LAN cache going, have dedup turned on in ZFS on the iSCSI target, so that can help reduce the actual disk space used by the Steam games.
I have a Minecraft server running in an LXC container (TurnKey Linux gameserver is awesome for that), and when I use my mini PC (my current desktop system) to manage files on the server via Windows Explorer, moving them around on the server, I can get transfers of over 500 MB/s (~4 Gbps, without needing a switch).
(Of the NAS units I consolidated, two were matching 8-bay servers, another was an 8-bay, and one was a 12-bay. Hence the need for a 36-bay, 4U rackmount server.)
The system runs well unless I have a LOT of disk I/O; then the system load average shoots up, and that can sometimes be an issue for some of the VMs.
Virtio-FS is awesome.
The VirtIO paravirtualized NIC in Windows 7, when you install the virtio drivers for Windows 7, will show up as a 100 Gbps NIC. (NICE!)
(But in Windows 10+, it shows up as a 10 Gbps NIC, which is still nice.)
Neither xcp-ng nor TrueNAS Scale was able to run virtio-fs; both want you to route everything through a virtual switch instead.
By running virtio-fs, I skip the entire network stack (the guest just mounts the share by its tag; there's a quick example at the end of this comment).
Virtio-fs works in Ubuntu, and CentOS, but doesn't work in SLES12SP4. (Although I haven't tried updating the kernel for that yet. But out of the box, it doesn't work. Actually, technically neither did CentOS, until you update the kernel.)
So yeah, you can definitely do everything you want to, on a single system, as long as you have enough RAM for the VMs, and enough space on your hard drive(s).
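For reference, the guest side of a virtio-fs share is just a mount by tag once the host exposes it; a sketch assuming a share exported with the tag "tank" (tag and mountpoint are placeholders):
mkdir -p /mnt/tank
mount -t virtiofs tank /mnt/tank
# or make it permanent:
echo 'tank /mnt/tank virtiofs defaults 0 0' >> /etc/fstab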
Congratulations on 100K subscribers!
Thank you!
For the GPU passthrough problem, have you tried (maybe you already did) putting a dummy plug on the GPU output connector to simulate a connected monitor? Sometimes the problem is just that the GPU doesn't see a connected monitor.
I've been trying to do a single-PC home lab for 2 years, and GPU passthrough was always the problem. I have a 5950X, an X570 board, a 6750 XT and a SAS controller. To make the GPU work I put it in the first slot and the SAS controller in the third slot (on the chipset), and now the passthrough works.
Interesting concept but yeah it's not really practical... I tried consolidating my OPNsense instance into a VM on my TrueNAS Scale server, and while it worked, it wasn't stable. Several times a week (sometimes a day) I would have to reboot either the VM, or the entire TrueNAS Scale system itself, and that got annoying since I run Plex, Sonarr, most recently Prowlarr, QBitTorrent and Filebot and each reboot would of course disrupt those services. I reverted my routing duties to a spare miniPC that coincidentally has a dual NIC miniITX motherboard that I forgot about, so it was a perfect fit!
With all that said, the whole reason I wanted to consolidate everything into one machine was that my previous multi-LAN miniPC solution was janky... it was a $250 miniPC from QTOM that I got off of Amazon, and it would overheat without constant cooling. To mitigate that I had some USB-powered fans running atop the miniPC case, but that got dusty fast, and I decided to retire the entire miniPC. It's got decent hardware, so for short-term testing I might install Debian Linux and CasaOS atop it (I did try CasaOS atop Raspberry Pi OS, but it was too slow and unreliable for my liking) and play around with it some more. I actually have a Ryzen 9 3900X in my main everyday system and I've been wanting to upgrade (mainly for bragging rights lol) to something faster, but at this point I might just max out the AM4 spec and go with a 5950X (a more modern version of the 3950X) once they get cheaper... Once/if I ever do that, the 3900X would be inherited by my aforementioned TrueNAS Scale server to replace its Ryzen 7 3700X (8C/16T)... That'd be quite the upgrade... That'd leave the 3700X available for other projects too... Thanks for this video. It was as entertaining as it was informative! See you around!
Meh. I recently did a full rebuild of my home setup, and what I ended up doing was using a 2018 i7 Mac mini with 64 GB of memory and a 1 TB NVMe drive. Deployed a k8s lab with all my crap. Plenty of power, tiny form factor, and low power usage, especially when idle. It runs everything without missing a beat.
Storage redundancy is solved by using 2 external NVMe drives. It has 4 USB-C ports, two of which are on a separate bus, so you get a lot of juice. I do daily backups to an encrypted cloud storage box, which is cheaper than running my own backup server in terms of power consumption and price.
I am actually considering getting a few of these, or maybe M1 or M2 models, and building a custom server rack. There is a reason GitHub is using them for their new runners.
I won't swear to this, but it sounds like that video card is engaging the system bus in some clever (?) way that precludes the host OS from being able to release it. I swear I remember reading about that back when I was rebuilding my own box and studying problems with PCI passthrough. It's just an issue peculiar to certain hardware and the way some graphics cards leverage the system bus. I know that sounds kinda vague, but at least don't feel like you're leaving some secret rock uncovered 😁. Really enjoyed this video!
Yeah I heard something from someone else that was similar. Too late now haha 😂
Some very fine tips in the comments section on GPU passthrough and IOMMU groups how-tos 👍😊
With regards to IOMMU groups, you could apply the ACS patch to the kernel and the devices will essentially land in different groups, allowing normal passthrough without issues. I had this problem on my Proxmox server and fixed it that way.
ACS patch . . . helpful idea 👍
Kindest regards, neighbours and friends.
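For anyone who wants to try the ACS override route mentioned above, a rough sketch of what it looks like on a Proxmox host that boots via GRUB (as far as I know the stock Proxmox kernel already carries the ACS override patch; note the override weakens isolation between the devices it splits, so it's a trade-off):
    # /etc/default/grub - append the override to the kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction"
    # apply and reboot
    update-grub
Hosts that boot with systemd-boot (e.g. ZFS-on-root installs) put the same options in /etc/kernel/cmdline and run proxmox-boot-tool refresh instead.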
Did you mention the CPU idle-state power consumption?
Did you try enabling Above 4G Decoding in the BIOS? I've heard a lot of people had issues without it when they want to pass through a GPU.
Yeah sadly that didn’t work either.. 😞
Above 4G Decoding in BIOS . . . helpful tip 👍
Congrats for 100k !!!! 🥳
If you were on AM5 with its iGPU, you probably would have been able to pass through that GPU as you expected.
I have a 3-machine setup currently: an old mini PC running pfSense, a Ryzen 5 5500 with 64 GB running TrueNAS, and a 13500 with 64 GB running everything else (7 Minecraft servers behind a proxy, a bunch of websites, Jellyfin, Traefik, and a VPN). I would say having them as separate machines sounds way better for both security purposes and ease of setting things up.
I put a 5950x into a build for all my stuff recently
I got a 5950X
128 GB DDR4 3200
6×4 TB RAIDZ2
2×1 TB Samsung SSDs striped
2×500 GB mirrored boot drives
5700 XT, with 10 gig networking
Sometimes this works for GPU passthrough: start the Proxmox host with the GPU unoccupied, i.e. unplug every monitor before booting and leave them off until Proxmox is done starting. Then plug the monitors in and start the Windows VM. Helped me too.
Will try this, thanks
@@serikk If it looks like a lost cause or passthrough is glitching, this should be the fix. I have to use this method. This way Proxmox never grabs the GPU in the first place, so it doesn't have to release it to the VM.
Never heard of this tip. Will keep this in my notes 👍
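A quick way to check whether the host has actually let go of the card before starting the VM - a sketch, where the PCI address 01:00 is just an example (find yours with plain lspci):
    lspci -nnk -s 01:00
    # look at the "Kernel driver in use:" line:
    #   vfio-pci                     -> free to hand to the VM
    #   nouveau / nvidia / amdgpu    -> the host still holds the card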
Have you done a video on your security cameras setup?
I've got a similar setup at home: Ryzen 2600, 16 GB of RAM, Proxmox host with TrueNAS, Win10, and pfSense guests.
RAM compatibility/stability is a problem: I've got two identical kits of G.Skill 16 GB 3200 MHz RAM; one kit works fine, the other has to run at 2400 MHz to be stable, yet both kits test fine under Windows.
pfSense is not running yet, as the reliability is kinda poor (probably due to all the tinkering I've done for GPU passthrough).
TrueNAS works great with a single 4 TB array (I'd need more RAM for more storage).
The Win10 guest is for media ingestion and printing/scanning - most of my passthrough hardware goes to this one.
I'll be setting up another Win10 guest to RDP into from the garage (that PC is an AMD Fusion APU; it doesn't run well enough for anything other than checking email and music from YouTube, and only one thing at a time), but I'll need to sort out vGPU passthrough first, as CPU decoding of media is a bit slow at times.
I'll also get around to setting up a Steam cache and a Minecraft server.
All in all it was a fun project to set up.
GPU passthrough issue idea: whether the Windows VM is UEFI or legacy BIOS could make a difference. I think some GPUs won't work in passthrough unless the VM is UEFI (and some require legacy). I'm not an expert, but that is something I've had issues with as well!
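For reference, switching a Proxmox VM to the UEFI route is only a few lines in its config file. A sketch, where VM ID 100, the storage name, and PCI address 01:00 are placeholders (and some cards also need the ROM-file trick mentioned earlier in the thread):
    # /etc/pve/qemu-server/100.conf (relevant lines only)
    bios: ovmf
    machine: q35
    efidisk0: local-lvm:vm-100-disk-1,size=4M
    hostpci0: 01:00,pcie=1,x-vga=1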
I was just browsing ebay for these ryzens... That power draw tho... Dear lord... I may still grab it anyway 😂
I would keep the NAS separate. Remote editing would be separate. The rest can be run on that box with Proxmox.
Damn, you are my kind of hardware guy. But I don't have that many NASes. Just two Synologies and a Proxmox server.
Cool to find this in my feed at random. I'm planning to go this route. GPU passthrough does seem like a moving target; there are many methods out there and none have worked for me so far (same issue - Linux won't give it up). Also, this may be a silly question, but during the times it does work, when you access the VM remotely, are you able to enjoy the high framerates (for instance, if I have a Windows VM for gaming on my server, can I play remotely on a thin client or low-end laptop)?
Anyway, I feel like it's a good solution and would likely use less power than the combined physical devices. Plus, once you get it all set up, you shouldn't have to mess with the host much, so I'm not really concerned about knocking all my systems offline regularly.
Why are you so nervous about power consumption? In my country, 1 MWh costs me about $44.
Thanks for the help. I made my first dedicated Minecraft server with the Minecraft server tutorial video. ❤
Proxmox pass-through on AMD can be challenging.
Hey man, what case is that you're holding up and calling your NAS at the beginning? Are those all 5.25" bays open to the front?
I've been running everything on an i5 12400 (previously an i7 9700) Proxmox host for years. Although it's really efficient, sometimes I wish I could separate things into their own appliances.
Hey, what are those black silicon things at 4:50 that you are using to prop up the card?
Meanwhile my entire homelab is running off of an HP t620 thin client. At least it doesn't draw much power, 14 W tops 😅
I have a Ryzen 3950X and an X570 motherboard sitting on the shelf which I tried to use for a server, but the problem was the power it uses at idle: 90-100 watts while totally idle. I've now replaced it with a B550 board and a Ryzen 5900G, which idles at 20 watts total and draws 55 watts running all my tasks. I could cut another 15 watts off that by removing my RTX 2700 Super and using the internal iGPU for transcoding, but that GPU is passed through to a Windows 10 system for gaming etc.
I saw you were running pfsense as well as pihole. I've been using pfblockerng in pfsense for a while and like it a lot, could you tell me how pihole compares?
May I ask why you're using so many VMs? In my opinion this is a real waste of resources.
You could at least use more LXC containers to do the same if Docker isn't possible with Proxmox (sorry, I'm not really familiar with Proxmox because I moved away from VMs entirely and run most things in Docker and LXC).
Did you ever consider setting up ACS patching for that board?
For the HBA "problem" I would suggest using NVMe drives for Proxmox and storage and passing through the onboard SATA controller! Works pretty well for me on many installations and chipsets!
That was my thought as well, but the SATA controller is in the same IOMMU group as a lot of other stuff, which would've caused issues.
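For anyone wanting to check that on their own board, this is the usual snippet for dumping IOMMU groups on the host (run it on the Proxmox host after IOMMU is enabled; anything sharing a group with the device you want to pass through comes along for the ride):
    #!/bin/bash
    # list every PCI device, grouped by IOMMU group
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo -n "  "
            lspci -nns "${d##*/}"
        done
    done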
Do you suggest a 5600X for a low-power PC server?
I'm about to do something like this. Mainly a game server, but cloud storage will also be in the one machine. I'm just running everything on a Ryzen 7 3700X.
Please make sure you are protected from CVE-2019-7630, which affects all Gigabyte motherboards. Something about the APP Center driver being the problem.
For IOMMU issues with the chipset's PCIe slot, have you tried adding "pcie_acs_override=downstream" to the kernel boot commandline to see if it helps with isolation?
I did and don't believe it helped. However I could've missed something. Too late now unfortunately haha. Appreciate it though!
I have also had troubles with GPU passthrough in windows... I know how you feel.
AMD R9 3950X, I miss you... (R.I.P. 2019-2022)
I wish I could have such deep knowledge about technology like you. I mostly didn't understand what you did, but I love home automation too.
I'm setting up a MikroTik RouterOS VM right now in Proxmox. I just want it to be a backup in case my Pi 4 OpenWrt router goes down for any reason. I already made a copy of the Pi's SD card, so if it just gets corrupted I can swap it, but if the whole thing goes down it's nice to have something to switch to.
Idling at 118 watts sounds a bit too high! I have a similar setup (Windows VM with a 3090 passed through, i5 13400 CPU and DDR5 RAM), a NAS with one 14 TB HDD, and two Linux containers for Plex and qBittorrent. It idles at 60 watts. I noticed that the idle power is higher when the Windows VM is off; it seems like the GPU consumes much less power when it has drivers loaded.
Also, I'm currently building a second replica but with a weaker GPU (GT 710), and it idles at 30 W.
I don't know if this is the case for Proxmox, but... shouldn't you configure the NVIDIA GPU to use the OVMF driver rather than the default nouveau driver?
My understanding is that you don’t want any driver in use because you don’t want the hypervisor OS to use the GPU. I could be way off though haha
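That matches what most guides do. There isn't really an "OVMF driver" on the host side (OVMF is the VM's UEFI firmware); the usual host-side recipe is to keep the normal GPU drivers away from the card and bind it to vfio-pci by its vendor:device IDs. A sketch, where the 10de:xxxx IDs are placeholders you'd replace with the output of lspci -nn for your card and its audio function:
    # /etc/modprobe.d/blacklist.conf - keep host GPU drivers off the card
    blacklist nouveau
    blacklist nvidia

    # /etc/modprobe.d/vfio.conf - claim the GPU (and its HDMI audio function) for vfio-pci
    # replace the placeholder IDs with the real ones from: lspci -nn
    options vfio-pci ids=10de:xxxx,10de:yyyy

    # rebuild the initramfs and reboot
    update-initramfs -u -k all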
Not your fault on the GPU passthrough. It's just some NVIDIA corporate shit that doesn't work well with VMs, forcing everyone to buy dedicated hardware that has a 500% markup over the materials it's built with, rather than just using consumer/off-the-shelf hardware.
How's the disk performance and long-term stability when the drives are passed through as virtual disks?
I wish we had an M.2 NVMe GPU (not those stupid adapters); apparently ASRock made one but I can't find it. That thing would be really helpful for KVM stuff with an iGPU-less processor like this.
Do you have plans to make a video about a PXE server - maybe the easiest way to set one up?
Don't you need a 2nd GPU for passthrough to work, since the 3950X doesn't have an iGPU?
Not necessarily. I was able to boot without any display adapter and still access the proxmox web UI. And I tried running it with two GPUs just to make sure.
@@HardwareHaven I have a GT 710 as the primary GPU and don't have any spare PCIe slot for a 2nd GPU. Can the primary GPU be used for hardware transcoding or passthrough while also serving as the primary GPU?
Keep doing these vids, great content 👍
It can be used for hardware transcoding on a service or container running in the host OS, but can’t be passed through to a VM while also running on the host
@@HardwareHaven Please make a video on this topic.
Proxmox with just 1 GPU: transcoding, full utilization.
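If it helps, sharing a GPU with an LXC container for transcoding (rather than passing it through to a VM) is usually just a matter of exposing the render devices to the container. A rough sketch for a privileged container using an Intel/AMD GPU's /dev/dri nodes; container ID 101 is a placeholder, and an NVIDIA card like a GT 710 additionally needs the /dev/nvidia* device nodes plus a matching driver installed inside the container:
    # /etc/pve/lxc/101.conf (privileged container)
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
Unprivileged containers also need the render/video group IDs mapped so the container user can actually open those devices.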
Hi, you should make a video about the Lenovo ThinkStation P520 workstation (8th-gen Xeon W-2135) as a Proxmox server.
GG 100K subs
Thanks!
I had a similar error with a Tesla card; the solution was to activate "PCI Express 64-Bit BAR Support" in the BIOS.
Putting this tip into my notes 👍
Thank you.
Got the same CPU and I'm trying to do the exact same thing; I just don't know what would be the best motherboard to combo it with for this project.
Q: "My ENTIRE Home-Lab On A SINGLE CPU???"
A: "Yes it's possible, but wouldn't be funny...."
That's why I stopped using Proxmox... because GPU passthrough was always hit and miss and behaved weirdly at times. ESXi worked much better for GPU passthrough.
What are the possible scenarios if I use non-ECC memory with TrueNAS (no containers, just a pure file & media server)? Will I experience data corruption, knowing the ZFS cache uses RAM?
Is the system undervolted? Maybe try undervolting the CPU to get way better efficiency.