My ENTIRE Home-Lab On A SINGLE CPU???


COMMENTS • 243

  • @HardwareHaven
    @HardwareHaven  1 year ago +141

    CORRECTIONS:
    - The memory is DDR4-3200, not 2400 (thanks, Ian!)
    - I completely forgot to mention the 2TB Samsung SSD used as the boot drive. It was also purchased for my new editing PC build.

    • @jumpmaster5279
      @jumpmaster5279 1 year ago +2

      All the best for LTX

    • @BBWahoo
      @BBWahoo 1 year ago +1

      Meanwhile I'm here suffering, trying to get the lowest timings at 16-13-13-12-28 3800MHz

    • @LucasSousaRosa
      @LucasSousaRosa 1 year ago

      @@BBWahoo Which chip? 2 sticks or 4?

    • @BBWahoo
      @BBWahoo 1 year ago +1

      @@LucasSousaRosa
      Oh dude I already figured it out, but yeah dual rank

  • @JeffGeerling
    @JeffGeerling 1 year ago +618

    Now you just need to build a hot spare. Then add another for 3-way quorum for storage. And an identical offsite backup machine. Then we're back to square one ;)

    • @HardwareHaven
      @HardwareHaven  1 year ago +97

      That's always how it goes, isn't it?

    • @ArifKamaruzaman
      @ArifKamaruzaman 1 year ago +9

      Hi Jeff!

    • @RaidOwl
      @RaidOwl 1 year ago +43

      It's a trap!

    • @boneappletee6416
      @boneappletee6416 1 year ago +2

      @@RaidOwl But just think of all the content and fun hardware you get to set up... :D

    • @Yuriel1981
      @Yuriel1981 1 year ago +29

      Been trying to get my sister or brother to host a backup NAS/home server so I have a private off-site backup. Then they can back up their data on my NAS and vice versa. Plus, I can configure some smart functionality into their system, and surveillance. For funsies.

  • @jumpmaster5279
    @jumpmaster5279 1 year ago +145

    You know a tech channel is good, reliable, and helpful when the host talks about the problems he encountered himself. Love your work

    • @HardwareHaven
      @HardwareHaven  1 year ago +11

      I try haha

    • @brianhecimovich4488
      @brianhecimovich4488 1 year ago +1

      @@HardwareHaven TRY HARDER 😂😅

    • @Max24871
      @Max24871 1 month ago

      Honestly no, failure is just good content. If you don't believe me, watch any LTT video - nothing about them is reliable or honest anymore

  • @jimkirk5081
    @jimkirk5081 1 year ago +35

    It's pretty incredible what you can run on even modest, modern x86 hardware. For several months, I ran everything on a Pentium Gold G6400 with 32GB of RAM. The host OS was Unraid, with virtualized Untangle for router/firewall (gigabit fiber) and virtualized Windows with a GPU passed through as a "console PC" in my homelab for management/admin... it was running Plex, Pi-hole, VPN, etc, etc. All of it, no performance issues. So why did I move away from it? Maintenance... Of course I still wanted to tinker, but with a full household of very connected people, it really constrained my maintenance window to before 8am and after 11pm. Not worth it.

    • @HardwareHaven
      @HardwareHaven  1 year ago +9

      Haha exactly. That’s why all of my household things currently run on two separate machines, one of which I basically haven’t touched in like 6 months.

    • @massgrave8x
      @massgrave8x 1 year ago +1

      That seems like a lot for only 4 total CPU threads, did you ever run into headroom issues?

    • @jimkirk5081
      @jimkirk5081 1 year ago +4

      @@massgrave8x Not that I noticed. Routing a 1Gbit internet connection doesn't take a lot of x86 horsepower, and I would say the more important performance variable was RAM in this case - having 32GB gave me plenty of headroom for my workloads.

  • @thedevtoss
    @thedevtoss 12 days ago

    That's what I love about your channel: most people can't have an entire datacenter at home, and that's when setups like this really save the day!

  • @RaidOwl
    @RaidOwl 1 year ago +131

    I think people greatly overestimate how much power they need to run stuff in their homelab. I am also EXTREMELY guilty of this lol. Great stuff!

    • @pt9009
      @pt9009 1 year ago +24

      People were running early web servers with 10k visitors a day on 486 machines with 16MB of RAM back in the 90s!

    • @HardwareHaven
      @HardwareHaven  1 year ago +13

      Very true haha

    • @ajpenninga
      @ajpenninga 1 year ago +4

      That's how you end up with hot tub water "cooling"

    • @BenjaminArntzen
      @BenjaminArntzen 1 year ago +7

      I've got 1184TB of storage and 92TB of enterprise SSDs in my homelab, so ... YUP, guilty

    • @DarkFrostX5
      @DarkFrostX5 1 year ago +4

      @@BenjaminArntzen
      O.o

  • @cameronfrye5514
    @cameronfrye5514 1 year ago +9

    You pretty much demonstrated a conclusion I've been struggling with for a while... there are some things that CAN be done on a virtual machine, but probably shouldn't be. The two glaring examples I picked out were the router and the NAS. I've been trying for a couple of months now to put as much on my NAS as possible, but found that all too often taking down the NAS to get a container working would cause problems with the NAS itself. I finally removed everything from it but Jellyfin, Syncthing and Nextcloud... and it might get an instance of pihole. Now it sits quietly drawing about 21 watts, storing my stuff and always available when needed. For a router I wanted something with more capability than the typical consumer device, but it needs to be always available so my family can do what they do while I'm playing with stuff. So I chose to buy a Netgate device.
    I think it comes down to risk/availability tolerance. If it needs to be (nearly) always available, it shouldn't really be virtualized... or should be virtualized in a cluster. For a home lab, where a cluster isn't really feasible for most people, you just have to pick your battles. Whichever way you go, thanks for sharing the struggle!

  • @ZachariahWiedeman
    @ZachariahWiedeman 1 year ago +25

    I feel you on that GPU passthrough. It has been the bane of my existence on Proxmox. I've been working on it for the past 8 hours for the iGPU on my laptop's Intel chip. I worked my way through 3-4 different errors - used SEVERAL different guides that all gave different instructions - and finally got stuck.

    • @choahjinhuay
      @choahjinhuay 1 year ago +3

      I had this same issue. From what I understand, since it's the primary GPU, it gets grabbed by the BIOS and GRUB and never really gets properly released. Even though Proxmox is mostly web-based, it still takes hold of it (probably the Debian underneath) and just screws you over. I can only get it to work with the nice GPU in the secondary PCIe slot.

    • @aki_tomato_
      @aki_tomato_ 1 year ago +2

      @@choahjinhuay My motherboard and CPU don't have integrated graphics, so it uses my GPU as primary upon boot. I unregistered the ZOTAC GTX 1050 Ti GPU from Proxmox to get it ready for passthrough. But it wasn't that simple. My GPU is pretty locked down, and I had to dump the GPU ROM file and modify it to work with Proxmox, then append the custom ROM to load the GPU from the VM conf file. Took me days to figure it out! Phew.

    • @ccoder4953
      @ccoder4953 1 year ago +1

      Yeah, similar issues. Not sure what the silver bullet ended up being, but I ended up blacklisting the GPU on the host, moving it off of slot 1, and switching to q35. Now it works OK (RTX 2060 12GB). Somehow, the hacked virtual-GPU-on-consumer-GPU thing Craft Computing does was easier.

    • @Steve25g
      @Steve25g 1 year ago

      @@ccoder4953 IMO, the problem is that there is no iGPU.
      The motherboard needs a GPU to start, and it starts using resources; once the GPU is in use, blacklisting is pretty much impossible. I would give it a try with two GPUs from different brands. Throw in an AMD/NVIDIA combo and blacklist the NVIDIA for passthrough. That worked in my DIY box with an AMD/NVIDIA combo.
      There are some motherboards that will boot without any GPU though - that could be an option too, or getting a more server-oriented motherboard.
      Further, there is another problem with NVIDIA's consumer series: NVIDIA doesn't allow (and actively fights) virtualizing them.

    • @dustojnikhummer
      @dustojnikhummer 8 months ago +1

      Just FYI, I could never get GPU passthru (in my case iGPU passthru) working on UEFI VMs. Only BIOS/i440fx VMs.
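
      (For anyone hitting the same wall, a minimal sketch of forcing a Proxmox VM onto SeaBIOS/i440fx - the VM ID 100 and GPU address 0000:01:00 are placeholders; adjust both for your system.)

        # i440fx is Proxmox's default machine type, so only the firmware needs switching
        qm set 100 --bios seabios
        # note: the pcie=1 flag on hostpci entries only works with q35, so omit it here
        qm set 100 --hostpci0 0000:01:00,x-vga=1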

  • @PoeLemic
    @PoeLemic 1 year ago +6

    This video covers so many things that I want to get back to doing, but I've just been distracted with Life and other things. Thank you for showing us your struggles. It helps rest of us know that we're not alone when we meet up with hardware challenges. And, I do plan more getting back to trying to virtualize all of my appliances on less hardware -- just takes time to overcome the issues, while trying to keep everything else in life running too. So, yeah, I feel your struggles and time-constraints.

  • @HoshPak
    @HoshPak 1 year ago +5

    I am quite confident this kind of setup can be an amazing workstation if you only focus on the storage and computing aspects while keeping the networking on a separate low-power machine.
    The most power-efficient machine is one that is spun up only on demand, so for my personal needs, the NAS wouldn't have to run 24/7. What I would need here instead is the means to power it on from afar, e.g. by hooking it up to a Pi-KVM (or hoping for WoL to work properly). Then it's a matter of getting a ZFS-root setup to work, throwing in 2 NVMe drives and a couple of HDDs, and you're off to the races.
    You should still have one dedicated NAS for cold-storage backups, as well as one large-capacity external drive you can just unplug and throw into a safe or remote location, though.

  • @akachomba
    @akachomba 1 year ago +1

    I like the idea of having a hyper-converged home lab. I use LXD containers to run all my services (router, OTA/satellite TV, DHCP, DNS, SMB, IoT automation, VMs for a Kubernetes cluster, among other services) on a single node (Ryzen 3700X, X470D4U motherboard). I didn't want to maintain multiple physical servers and took a minimalist approach; heck, I even replaced my GPON modem/ONT with an SFP module connected to my MikroTik switch on its own VLAN. It's been running rock solid for the past year, and the entire process has been very educational. I can try new configurations on my containerized router, and if things go south I just roll back to a working snapshot; plus I can spin up as many virtual routers with as many virtual interfaces as needed to try out new stuff or learn how routing protocols work. I will probably be adding another node or 2 for high availability

  • @ntgm20
    @ntgm20 1 year ago +7

    It seems I'm always on the hunt to bring power draw down, or to think about services in a different way. I'm with you - I'd not put the firewall/router on the same machine you are trying other things with, unless it is one that is just part of the lab. Overall a neat concept, and I understand the frustration of just trying to get a video done.

  • @itmkoeln
    @itmkoeln 1 year ago +10

    Regarding your IOMMU issue with the HBA:
    For okay-ish IOMMU groups you would need an X570 rather than a B550 or A520.
    The X570 I/O hub on the PCH is basically what AMD used on Zen 2 CPUs anyway, so it has more dedicated IOMMU grouping. X570 as a chipset was developed by AMD themselves, based on Zen 2 silicon; its I/O hub mimics the one EPYC uses, since EPYC by itself doesn't have a PCH.
    B550 and A520 - and any 400- and 300-series chipsets for that matter - were subcontracted to ASMedia, and ASMedia did not do the best job with their IOMMU groupings.
    I am running quite a similar box in my lab (though currently shelved for summer, as it's not needed at the moment) on ESXi 8 with an R9 3900X on an X570 board

    • @HardwareHaven
      @HardwareHaven  1 year ago +4

      Yeah I imagined so. That once again comes down to buying the mobo for editing rather than a goofy home server haha. Appreciate the input!

    • @chromerims
      @chromerims 1 year ago

      Very fine tips and tricks 👍 in the comments section on IOMMU groups and GPU passthrough.
      Thank you.

  • @noleftturnunstoned
    @noleftturnunstoned 1 month ago +1

    Oh boy! Where has all this come from? I remember messing around with FreeBSD on used DDR2 systems and tinkering with the Raspberry Pi, but that was 10 years ago! So much has happened since I left.

  • @mttkl
    @mttkl 1 year ago +14

    About the passthrough issues you had at 14:50: I think I had the same issue last year when passing my GPU through. In my case (also B550), enabling Above 4G Decoding in the BIOS would *silently* also enable Resizable BAR support. Manually disabling that, while obviously keeping 4G Decoding on, fixed my issue.
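
    (A quick way to verify what the firmware actually did - a sketch assuming the GPU sits at 01:00.0; take the real address from lspci.)

      # shows the Resizable BAR capability and current BAR sizes if ReBAR is active
      lspci -vvs 01:00.0 | grep -i resizable
      # failed BAR allocations during passthrough usually surface in the kernel log
      dmesg | grep -iE 'BAR [0-9]'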

    • @HardwareHaven
      @HardwareHaven  1 year ago +6

      Hmm.. I’m pretty sure I checked both (they are right next to one another), but maybe I didn’t notice it.

    • @Maisonier
      @Maisonier 1 year ago +2

      Might it be a problem with the CPU? I mean, not to get into an Intel vs AMD war, but... I've always had concerns about these kinds of little issues where nobody knows the real problem.

    • @chromerims
      @chromerims 1 year ago +2

      Re-sizable BAR support, manually disabling it . . . helpful, thank you 👍

  • @jaskaransandhu779
    @jaskaransandhu779 1 year ago +1

    For the IOMMU group issue, you can add an ACS override line to the GRUB file and force them into different IOMMU groups. I did the same thing and it worked.
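
    (A sketch of what that looks like on a GRUB-booted Proxmox host - note the override weakens isolation between the devices it splits, so use it knowingly.)

      # /etc/default/grub -- append the override to the existing line:
      GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction"
      # apply and reboot:
      update-grub && reboot
      # afterwards, confirm the groups actually split:
      for d in /sys/kernel/iommu_groups/*/devices/*; do
          g=$(basename "$(dirname "$(dirname "$d")")")
          echo "group $g: $(lspci -nns "${d##*/}")"
      done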

  • @elmestguzman3038
    @elmestguzman3038 1 year ago +10

    I attempted something similar. AMD IOMMU is always a pain, but this command may let the devices connected to the chipset be split:
    (GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nomodeset") If you ever plan an AMD Proxmox server, try using a "G" series processor - it will make the GPU passthrough so much easier.

    • @HardwareHaven
      @HardwareHaven  1 year ago +3

      Yeah an APU would definitely be valuable haha

    • @mttkl
      @mttkl 1 year ago +2

      @@HardwareHaven I can confirm the same - I never had issues splitting IOMMU groups on the chipset, on both AMD and Intel platforms. Of course, if the problem is in the IOMMU implementation on the motherboard, there's not much that can be done about it.

    • @chromerims
      @chromerims 1 year ago

      ACS override 👍
      Thank you

  • @awesomearizona-dino
    @awesomearizona-dino 1 year ago +5

    Hey, Congrats on 100K subscribers !!!

  • @DATApush3r
    @DATApush3r 1 year ago +2

    As someone that went from a mini-PC-based homelab to a monolithic single server on Proxmox and back to mini PCs again, I think separate devices for separate functions is the way to go for me. I also cut my power consumption by over half by going back to multiple mini PCs.

    • @KisameSempai
      @KisameSempai 1 year ago +1

      I am thinking about separating my services onto mini PCs, but the biggest showstopper at the moment is the missing IPMI... there's PiKVM etc. - they say they are "cheap" KVM solutions, but I find them quite expensive, and if you are going to have multiple mini PCs, you would need a PiKVM per PC...

    • @rmo9808
      @rmo9808 1 year ago

      @@KisameSempai I think you can run PiKVM on the Pi Zero 2 W. The only pain is getting them at the moment

  • @davocc2405
    @davocc2405 1 year ago +3

    My thinking: start with your information and system needs - also look at local power prices (HUGE in the UK, Australia, etc.). A lot of stuff can be handled by ultra-low-power solutions such as a Raspberry Pi with attached USB 2.5" storage (SSD or spinners).
    I think virtualising your outward-facing firewall/router is too dangerous - the risk of a zero-day virtualisation-layer breach is always there; incoming packets have to be handled by Proxmox before pfSense gets them, so it's an extra layer of vulnerability of course. You can run a physical dedicated hardware solution on as little as 5-10W of power nowadays, and 20W is common (though you need 64-bit-capable x86 to run pfSense, last time I looked).
    Personally I consider this type of setup useful for a not-always-on lab solution. I've run ESXi for 15 years or so and used to do what you're doing here (I initially lacked the Linux skills and the ultra-low-powered SBC hardware to do it otherwise). Today I use it as a fire-up-when-I-need-the-lab setup: I use Wake-on-LAN to wake the system up and a script which I shell into the ESXi host and execute to shut it down; it's really effective, and the host runs headless in a cupboard on an 11-year-old Xeon.
    Also, for those with archive servers - split things up a bit. I use two small drives for always-available stuff, and the rest are on two storage tanks which are powered down unless archiving or retrieving. I can do backups that way too - a WoL packet wakes the server, MQTT messages notify of boot completion, and the backup or archive proceeds, after which it's powered down (I use an MQTT subscription script on the backup server which puts it into standby or shutdown).
    Another thing I do is keep an old half-missing laptop I power up for chomping on things (e.g. compression tasks) in the background - I can script its wake, compression task, and even shutdown. It cost me nothing and is only on when I am actually using it. It's surprising what you can do with older kit that has a lot of grunt but chews power if you use this technique - sure, it might chew X watts at the wall, but if it's on for an hour a week it's not that big a deal.
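
    (For reference, a sketch of that wake-backup-shutdown loop - the MAC address, broker, topics, and hostnames below are all placeholders.)

      # wake the archive box, wait for its MQTT "ready" message, sync, then power it down
      wakeonlan aa:bb:cc:dd:ee:ff
      mosquitto_sub -h broker.lan -t homelab/archive/status -C 1   # blocks until one message
      rsync -a /data/archive/ archive.lan:/tank/archive/
      mosquitto_pub -h broker.lan -t homelab/archive/cmd -m shutdown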

    • @chromerims
      @chromerims 1 year ago

      Thank you 👍
      Zero day risk is a helpful callout on virtualising a router.
      Kindest regards, neighbours and friends.

  • @TheChadXperience909
    @TheChadXperience909 1 year ago +3

    Oof! He's running HDDs on a smooth, hard, flat desktop surface. The vibrations could harm the drives. At least run them on some kind of insulating material to dampen the vibrations - like a mouse pad.

  • @NightHawkATL
    @NightHawkATL 1 year ago +1

    If you have a managed switch that supports trunking/LAGG, I would suggest setting up the 4 ports on the NIC as LAGG ports and running the VLANs and network through that. Since you have two 2.5GbE onboard NICs, you can run one as WAN in pfSense and the other for something else. I know it is just a temporary setup, but it is totally doable. I run LAGG on my pfSense system, TrueNAS, Proxmox, and Proxmox Backup Server. It just gives you the flexibility of more lanes when tx/rx-ing data to multiple devices that demand high throughput, like the servers mentioned.
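
    (A sketch of an 802.3ad bond plus VLAN-aware bridge in Proxmox's /etc/network/interfaces - the NIC names and addresses are placeholders, and the switch ports must be configured as a matching LACP LAG.)

      auto bond0
      iface bond0 inet manual
          bond-slaves enp5s0f0 enp5s0f1 enp5s0f2 enp5s0f3
          bond-mode 802.3ad
          bond-miimon 100
          bond-xmit-hash-policy layer2+3

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes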

  • @John117alon1
    @John117alon1 1 year ago +9

    I can definitely relate to the passthrough issue. Around a year ago, I was starting my journey into Linux (virtualization of Windows through OVMF). The biggest issue I had was PCIe passthrough of my 2nd GPU, which I realized was in the same IOMMU group as something else unless I put it in the bottom PCIe slot, where it was scraping against my PSU.
    My solution: upgraded my CPU and motherboard to one with an identifiable block diagram 😂

  • @haydenc2742
    @haydenc2742 1 year ago +1

    My Proxmox box is running on an old HP Compaq Elite 8300 SFF where I upgraded the CPU to an i7-3770 and the RAM to 32GB... then slapped in one of those IcyDock 4x 2.5"-into-a-5.25"-bay adapters and a cheapo Chinese SATA card, did a passthrough, and am running TrueNAS with RAIDZ1; the OS and VMs run from a 1TB SSD. I don't do massive stuff with it - just a few Debian VMs, OMV, a Windows 11 Lite for my security cameras, and TrueNAS... it runs well... but I would LOVE to have a 32-thread CPU. The power draw would hammer my UPS though
    Cool video!
    Keep em coming!!!!

  • @BinaryBroadcast
    @BinaryBroadcast 1 year ago

    Love your content, it scratches that itch I have to play with home lab stuff without having to spend hours of time fixing the things I break

  • @beholder2033
    @beholder2033 1 year ago +1

    It may sound counter-intuitive, but I still prefer to have multiple PCs and dedicate one to each function (one for 3D graphics, one for NAS, one for firewall, etc).
    For power consumption I added a couple of photovoltaic panels; at night I turn off the PCs that are not needed, and in the morning everything restarts automatically with BIOS wakeup.
    This is just my point of view. I really like your videos, and I've been subscribed almost from the beginning of your channel 🙂

  • @fteoOpty64
    @fteoOpty64 1 year ago

    Cool! I have been planning a setup almost like this for my Haswell Xeon server but haven't prepared for it yet. I also have the IOCrest 4-port 2.5G Realtek PCIe x4 card. Thanks for the effort - yours is a great, powerful machine.

  • @OsX86H3AvY
    @OsX86H3AvY 1 year ago +1

    I've set up a similar thing in the past few weeks in my free time... took my old X5680 Supermicro and replaced it with an i7-7700-based build and added 2.5G plus an extra two-port Intel gigabit NIC... it has five JBOD disks as well... so I have Ubuntu (no hypervisor) running Plex and a local SMB share, and then inside of that two Ubuntu VMs, one running WireGuard only and one running Nextcloud... I was going to do passthrough to the VMs, etc., but I ended up doing the easy thing and just teaming my 4 NICs together and bonding the two VMs to that bond within VirtualBox (nope, you don't need QEMU/KVM if you don't want it!... but yeah, no passing through, so that's easy for me)......
    I LOVE having WireGuard, I'll tell you that... this server combined with my Belkin AX3200 on OpenWrt is good for me... not enterprise, but who cares - it works so well I can't imagine Cisco could do better (for such a small setup, that is)

  • @ronsafranic5177
    @ronsafranic5177 1 year ago +2

    Good content, but you took the easy way out! The issue with all the chipset devices in one IOMMU group can be overcome. Unraid can do it easily through a GUI, so I'm sure Proxmox can do it. I would actually have liked to see that.

  • @FloridaMan02
    @FloridaMan02 1 year ago +2

    Great examples. I would rather have seen some large file transfers, network speed tests, and pretty performance graphs. Something like Netdata would be fascinating. It's impossible to tell if it's actually usable vs just functional otherwise.
    Ty

  • @Trains-With-Shane
    @Trains-With-Shane 1 year ago

    Instead of OMV or TrueNAS or a dedicated OS for sharing, I just set up a container with Cockpit installed to manage the shares and permissions, and let the host OS do the lifting. Seems to be working out well so far - I've been running it in my "production" environment for a few months now.

    • @redemption5294
      @redemption5294 1 year ago

      What's Cockpit? Could you explain how it works? Thank you
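
      (For context: Cockpit is a web-based admin panel for Linux servers - storage, shares, services, even a terminal. A minimal install sketch, assuming Debian/Ubuntu:)

        sudo apt install cockpit
        # then browse to https://<server-ip>:9090 and log in with a local user account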

  • @ariii003
    @ariii003 1 year ago

    Congrats on 100k man, your vids are great. Godspeed bro

  • @elliotthanford1328
    @elliotthanford1328 1 year ago +2

    Regarding your difficulty with passthrough: there are a lot of changes between kernel versions that break PCIe/GPU passthrough, and it really gets into the weeds, especially around framebuffers. Try staying on kernel 5 for now and not upgrading to kernel 6.
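
    (On recent Proxmox releases, pinning a kernel is a one-liner via proxmox-boot-tool - the version string below is only an example; pick one from the list output.)

      proxmox-boot-tool kernel list
      proxmox-boot-tool kernel pin 5.15.108-1-pve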

  • @Nathan15038
    @Nathan15038 4 months ago

    As a PC/tech nerd who is also into networking, I eventually just wanna put a server inside a case and have clusters of them, or multiple of them 😅

  • @StaK_1980
    @StaK_1980 1 year ago +1

    I know this probably got covered somewhere, but I am just paralysed by choice: where would you recommend someone like me (a software tester who did some networking in the past) start with a home lab?
    What are the things I should get, and what should I avoid? (Pitfalls, traps, things that can be virtualised in a Docker container and be done with, someone falling into the trap of getting a 24-port router because it might come in handy later, etc.)

  • @christopherjames9843
    @christopherjames9843 1 year ago +2

    The 3950X is still a monster of a CPU.

  • @7MBoosted
    @7MBoosted 1 year ago +1

    My goal for a while now has been to do something similar without running a router/firewall on it, running Unraid or TrueNAS Scale instead of Proxmox. I have a spare X470 board, and I want to use a 3900 or 5900 OEM CPU with a 65W TDP, plus a P2000 and a GTX 1070 for Plex/Jellyfin and a Windows VM, respectively.
    That, or get a DDR4 Z690 board and use an i5-13500 without the P2000 and use Quick Sync instead.
    A 6C/12T VM for remote cloud gaming, with the remaining threads running a NAS and the containers I use.

  • @hurriedmilk
    @hurriedmilk 1 year ago

    Awesome video. The GPU passthrough issue makes me glad to be running Unraid, since I have a very similar setup (minus the router part) and have not had an issue passing my 3050 to a few Docker containers, one of them being Plex (:
    Keep the videos coming though, they're always a great time.

  • @TheN4UM4N
    @TheN4UM4N 1 year ago +3

    Bro is it possible to change your background music? No hate, love your channel, but it might be time for a new track tbh…

  • @PutraAsnawa
    @PutraAsnawa 1 year ago +1

    I can see you are running both TrueNAS and openmediavault - could you explain their best use cases?

  • @Decommissioned
    @Decommissioned 8 months ago

    You could have used the ACS patches to separate all of your PCI devices into their own IOMMU groups, so you could have used the HBA. I'm pretty sure that Proxmox's kernel has the patches already applied, so you would just have to add
    pcie_acs_override=downstream,multifunction
    to your GRUB config, then update GRUB and reboot.

  • @user-dz3ph7dl4m
    @user-dz3ph7dl4m 1 year ago

    Yes, I have something similar: 5950X, 128GB ECC @ 3200MHz, 2x16TB storage, RTX 3090, GT 730, etc., with TrueNAS Scale as the OS, running Pi-hole, Nextcloud, a Valheim server, and a gaming/editing Windows VM (with GPU passthrough) which is used remotely through Sunshine and Moonlight (you need a dummy HDMI plug, which maxes out at 1440p 120Hz; to get 4K 120Hz, I had to get a DP 1.4-to-HDMI 2.1 adapter).

  • @SB-qm5wg
    @SB-qm5wg 1 year ago

    Yo dawg, I heard you like backups with your backups 😆 I'm the same way. I got my '3' in a bank vault

  • @ewenchan1239
    @ewenchan1239 1 year ago +2

    Couple of things:
    1) Your mileage will DEFINITELY vary.
    I recently went through a massive consolidation project where I consolidated 4 of my NAS servers down to a single system.
    Before the consolidation, I was using about 1242 W of power. Now, with my new server (which houses 36 3.5" drives, runs dual Xeon E5-2697A v4 (16-core/32-thread) CPUs and 256 GB of RAM, and has, I think, somewhere around 228 TB of total installed raw hard drive capacity), typical power consumption is around 585 W. And whilst that's twice what this system draws at load, I'm also running, I think, up to 13 VMs now, plus 2 containers, all simultaneously.
    2) I have absolutely ZERO problems passing through the RTX A2000 6 GB.
    Remote desktop sucks, but I can still play Anno 1800 through it.
    (And I have been able to get hardware transcoding with Plex work on that VM as well.)
    3) Also as a result of my consolidation, I was able to take out my 48-port gigabit switch and now I just run a Netgear GS-116 16-port GbE switch, which consumes like 7 W of power (vs the ~40 W that the 48 port used to take.)
    4) On top of all of that, instead of getting more networking gear (e.g. 10 GbE NICs and a 10 GbE switch, which aren't cheap), I just run virtio-fs so that the VMs can talk directly to the Proxmox host.
    I set up SMB, NFS, and an iSCSI target on it, got a Steam LAN cache going, have dedup turned on in ZFS on the iSCSI target, so that can help reduce the actual disk space used by the Steam games.
    I have a Minecraft server running in an LXC container (the TurnKey Linux gameserver is awesome for that), and if I am using my mini PC (my current desktop system) to manage some of the files on the server via Windows Explorer, moving them around on the server, I can get transfers of over 500 MB/s (~4 Gbps, without needing a switch).
    (Two of the NAS units that I consolidated from were both 8-bay NAS servers, one was 8 bay, and one was 12-bay. Hence the need for a 36-bay, 4U rackmount server.)
    The system runs well unless I have a LOT of disk I/O; then the system load average shoots up, and that can sometimes be an issue for some of the VMs.
    Virtio-FS is awesome.
    The VirtIO paravirtualized NIC in Windows 7, when you install the virtio drivers for Windows 7, will show up as a 100 Gbps NIC. (NICE!)
    (But in Windows 10+, it shows up as a 10 Gbps NIC, which is still nice.)
    Neither XCP-ng nor TrueNAS Scale was able to run virtio-fs; both want you to route everything through a virtual switch instead.
    By running virtio-fs, I skip the entire network stack.
    Virtio-fs works in Ubuntu, and CentOS, but doesn't work in SLES12SP4. (Although I haven't tried updating the kernel for that yet. But out of the box, it doesn't work. Actually, technically neither did CentOS, until you update the kernel.)
    So yeah, you can definitely do everything you want to, on a single system, as long as you have enough RAM for the VMs, and enough space on your hard drive(s).
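
    (For the curious, the guest side of virtio-fs is just a mount - a sketch assuming the host exports a share tagged "media"; the host-side setup varies by Proxmox version.)

      # one-off mount inside the Linux guest:
      mount -t virtiofs media /mnt/media
      # or persistently, in /etc/fstab:
      media  /mnt/media  virtiofs  defaults  0  0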

  • @AlfieLikesComputers
    @AlfieLikesComputers 1 year ago

    Congratulations on 100K subscribers!

  • @zippi777
    @zippi777 1 year ago +4

    For the GPU passthrough problem, have you tried (maybe you already did) putting a dummy plug on the GPU's output connector to simulate a connected monitor? Sometimes the problem is just that the GPU doesn't detect any connected monitor.

  • @mv_dev
    @mv_dev 1 year ago

    For 2 years I've been trying to do a single-PC home lab, and GPU passthrough was always the problem. I have a 5950X, an X570 board, a 6750 XT, and a SAS controller. To make the GPU work I put it in the first slot and the SAS controller in the third slot on the chipset, and now the passthrough works

  • @ShiggitayMediaProductions
    @ShiggitayMediaProductions 1 year ago

    Interesting concept, but yeah, it's not really practical... I tried consolidating my OPNsense instance into a VM on my TrueNAS Scale server, and while it worked, it wasn't stable. Several times a week (sometimes a day) I would have to reboot either the VM or the entire TrueNAS Scale system itself, and that got annoying since I run Plex, Sonarr, most recently Prowlarr, qBittorrent, and FileBot, and each reboot would of course disrupt those services. I reverted my routing duties to a spare mini PC that coincidentally has a dual-NIC mini-ITX motherboard that I had forgotten about, so it was a perfect fit!
    With all that said, the whole reason I wanted to consolidate everything into one machine was that my previous multi-LAN mini-PC solution was janky... it was a $250 mini PC from Qotom that I got off of Amazon, and it would overheat without constant cooling... To mitigate that I got some USB-powered fans sitting atop the mini PC case, but that got dusty fast, and I decided to retire the entire mini PC. It's got decent hardware, so for short-term testing I might install Debian Linux and CasaOS atop it (I did try CasaOS atop Raspberry Pi OS, but it was too slow and unreliable for my liking) and play around with it some more. I actually have a Ryzen 9 3900X in my main everyday system, and I've been wanting to upgrade (mainly for bragging rights lol) to something faster, but at this point I might just max out the AM4 spec and go with a 5950X (a more modern version of the 3950X) once they get cheaper... Once/if I ever do that, the 3900X would be inherited by my aforementioned TrueNAS Scale server to replace its Ryzen 7 3700X (8C/16T)... That'd be quite the upgrade... That'd leave the 3700X available for other projects too... Thanks for this video. It was as entertaining as it was informative! See you around!

  • @Radoslav_Stefanov
    @Radoslav_Stefanov 10 months ago

    Meh. I recently did a full rebuild of my home setup, and what I ended up doing was using a 2018 i7 Mac mini with 64GB of memory and a 1TB NVMe drive. Deployed a k8s lab with all my crap. Plenty of power, tiny form factor, and low power usage, especially when idle. It runs everything without missing a beat.
    Storage redundancy is solved by using 2 external NVMe drives. It has 4 USB-C ports, two of which are on a separate bus, so you get a lot of juice. I do daily backups to an encrypted cloud storage box, which is cheaper than running my own backup server in terms of power consumption/price.
    I am actually considering getting a few of these, or maybe M1s or M2s, and making a custom server rack. There is a reason GitHub is using them for their new runners.

  • @DIYDaveOK
    @DIYDaveOK 1 year ago

    I won't swear to this, but it sounds like that video card is engaging the system bus in some clever (?) way that precludes the host OS from being able to release it. I swear I remember reading about that back when I was rebuilding my own box and studying problems with PCI passthrough. It's just an issue peculiar to certain hardware and the way some graphics cards leverage the system bus. I know that sounds kinda vague, but at least don't feel like you're leaving some secret rock uncovered 😁. Really enjoyed this video!

    • @HardwareHaven
      @HardwareHaven  1 year ago +1

      Yeah I heard something from someone else that was similar. Too late now haha 😂

    • @chromerims
      @chromerims 1 year ago

      Some very fine tips in the comments section on GPU passthrough and IOMMU groups how-tos 👍😊

  • @Nunoflashy
    @Nunoflashy 1 year ago

    With regard to IOMMU groups, you could apply the ACS patch to the kernel and the devices will essentially be in different groups for normal passthrough without issues. I had this problem on my Proxmox server and fixed it that way.

    • @chromerims
      @chromerims 1 year ago

      ACS patch . . . helpful idea 👍
      Kindest regards, neighbours and friends.

  • @yuan.pingchen3056
    @yuan.pingchen3056 1 year ago +1

    Did you mention the CPU's idle-state power consumption anywhere?

  • @JohnSmith-yz7uh
    @JohnSmith-yz7uh 1 year ago +2

    Did you try Above 4G Decoding in the BIOS? I've heard a lot of people had issues passing through a GPU without it

    • @HardwareHaven
      @HardwareHaven  1 year ago +1

      Yeah sadly that didn’t work either.. 😞

    • @chromerims
      @chromerims 1 year ago

      Above 4G Decoding in the BIOS . . . helpful tip 👍

  • @catcam
    @catcam 1 year ago

    Congrats for 100k !!!! 🥳

  • @upgrade1373
    @upgrade1373 1 year ago +1

    If you were on AM5, with its iGPU you probably would have been able to pass through that GPU as you expected

  • @Bixmy
    @Bixmy 1 year ago

    I have it as a 3-machine setup currently: an old mini PC running pfSense, a Ryzen 5 5500 with 64GB running TrueNAS, and a 13500 with 64GB running everything else (7 Minecraft servers behind a proxy, a bunch of websites, Jellyfin, Traefik, and a VPN). I would say having it as separate machines sounds way better, both for security purposes and for ease of setting things up.

  • @mikeo3382
    @mikeo3382 1 year ago

    I put a 5950X into a build for all my stuff recently. I got:
    - a 5950X
    - 128GB DDR4-3200
    - 6x4TB RAID-Z2
    - a 2x1TB Samsung SSD stripe
    - 2x500GB mirrored boot drives
    - a 5700 XT, with 10-gig networking

  • @xoniq-vr
    @xoniq-vr 1 year ago

    Sometimes this works for GPU passthrough: start the Proxmox host with the GPU unoccupied - remove every monitor before starting, until Proxmox is done booting. Then plug in the monitors and start the Windows VM. Helped me too.

    • @serikk
      @serikk 1 year ago

      Will try this, thanks

    • @xoniq-vr
      @xoniq-vr 1 year ago +1

      @@serikk If it looks like a lost cause or passthrough is glitching, this should be the fix. I have to use this method. This way Proxmox doesn't lock the GPU, and so doesn't have to give it back to the VM.

    • @chromerims
      @chromerims 1 year ago

      Never heard of this tip. Will keep this in my notes 👍
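
      (Related trick: the GPU can also be detached from the host and handed to vfio-pci at runtime via sysfs - the address below is a placeholder, and the audio function, usually .1, needs the same treatment.)

        echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
        echo vfio-pci > /sys/bus/pci/devices/0000:01:00.0/driver_override
        echo 0000:01:00.0 > /sys/bus/pci/drivers_probe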

  • @UltimateArts13
    @UltimateArts13 1 year ago

    Have you done a video on your security camera setup?

  • @ravenof1985
    @ravenof1985 1 year ago

    I've got a similar setup at home: Ryzen 2600, 16GB of RAM, a Proxmox host with TrueNAS, Win10, and pfSense guests.
    RAM compatibility/stability is a problem. I've got two identical kits of G.Skill 16GB 3200MHz RAM; one kit works fine, the other has to run at 2400MHz to be stable, though both kits test fine under Windows.
    pfSense is not running yet, as the reliability is kinda poor (probably due to all the tinkering I've done for GPU passthrough).
    TrueNAS works great with a single 4TB array (I'd need more RAM for more storage).
    The Win10 guest is for media ingestion and printing/scanning - most of my passthrough hardware goes to this one.
    I'll be setting up another Win10 guest to RDP into from the garage (that PC is an AMD Fusion APU; it doesn't run well enough for anything other than checking emails and music from YouTube - and only one thing at a time - but I'll need to sort out vGPU passthrough first, as CPU decoding of media is a bit slow at times).
    I'll also get around to setting up a Steam cache and a Minecraft server.
    All in all it was a fun project to set up.

  • @ChunkyKong32
    @ChunkyKong32 1 year ago

    An idea for the GPU passthrough issue: the Windows VM being UEFI or legacy could make a difference. I think some GPUs won't work in passthrough unless the VM is UEFI (and some require legacy). I'm not an expert, but that is something I've had issues with as well!

  • @DrDipsh1t
    @DrDipsh1t 9 months ago

    I was just browsing eBay for these Ryzens... That power draw tho... Dear lord... I may still grab one anyway 😂

  • @Phil-D83
    @Phil-D83 1 year ago

    I would keep the NAS separate. Remote editing would be separate. The rest can run on that box with Proxmox

  • @fteoOpty64
    @fteoOpty64 1 year ago

    Damn, you are my kind of hardware guy. But I don't have that many NASes - just two Synology units and a Proxmox server.

  • @ruthlessadmin
    @ruthlessadmin 1 year ago

    Cool to find this in my feed at random. I'm planning to go this route. GPU passthrough does seem like a moving target; there are many methods out there, and none have worked for me so far (same issue - Linux won't give it up). Also, this may be a silly question, but during the times it does work, when you access the VM remotely, are you able to enjoy the high framerates? (For instance, if I have a Windows VM for gaming on my server, can I play remotely on a thin client or low-end laptop?)
    Anyway, I feel like it's a good solution and would likely draw less power than the combined physical devices. Plus, once you get it all set up, you shouldn't have to mess with the host much, so I'm not really concerned about knocking all my systems offline regularly.

  • @iprizzzrak
    @iprizzzrak 1 year ago +1

    Why are you so nervous about power consumption? In my country, 1 MWh costs me about $44.

  • @peterbicskei4935
    @peterbicskei4935 1 year ago

    Thanks for the help. I made my first dedicated Minecraft server with the Minecraft server tutorial video. ❤

  • @sfsfsdfsdification
    @sfsfsdfsdification 1 year ago

    Proxmox passthrough on AMD can be challenging.

  • @davidrodgers5534
    @davidrodgers5534 1 year ago

    Hey man, what case is that you're holding up and calling your NAS at the beginning? Are those all 5.25" bays open to the front?

  • @kian8382
    @kian8382 1 year ago

    I've been running everything on an i5-12400 (previously i7-9700) Proxmox host for years. Although it's really efficient, sometimes I wish I could separate things into their own appliances.

  • @irukhan07
    @irukhan07 1 year ago

    Hey, what are those black silicone things at 4:50 that you are using to prop up the card?

  • @szwiecifon
    @szwiecifon 1 year ago +1

    Meanwhile my entire homelab is running off of an HP t620 thin client. At least it doesn't draw much power - 14W tops 😅

  • @peteradshead2383
    @peteradshead2383 8 months ago

    I have a Ryzen 3950X and an X570 motherboard sitting on the shelf which I tried to use for a server, but the problem was the power it used just at idle: 90-100 watts when totally idle. I've now replaced it with a B550 and a Ryzen 5900G that totally idles at 20 watts, and draws 55 watts running all my tasks. I could cut another 15 watts off that by removing my RTX 2070 Super and using the internal iGPU for transcoding, but the GPU is passed to a Windows 10 system for gaming etc.

  • @joshhardin666
    @joshhardin666 1 year ago

    I saw you were running pfSense as well as Pi-hole. I've been using pfBlockerNG in pfSense for a while and like it a lot - could you tell me how Pi-hole compares?

  • @ich777
    @ich777 1 year ago

    May I ask why you're using so many VMs? In my opinion this is a real waste of resources.
    You could at least use more LXC containers to do the same, if Docker is not possible with Proxmox (sorry, I'm not really familiar with Proxmox, because I moved away from full VMs and run most things in Docker and LXC)

  • @trektn
    @trektn 1 year ago

    Did you ever consider setting up ACS patching for that board?

  • @gustratcat
    @gustratcat 1 year ago

    For the HBA "problem", I would suggest putting in NVMe drives for Proxmox and storage, and passing through the onboard SATA controller!!!! Works pretty well for me across many installations and chipsets!!!

    • @HardwareHaven
      @HardwareHaven  1 year ago

      That was my thought as well, but the SATA controller is in the same IOMMU group as a lot of other stuff, which would've caused issues.

  • @aleksdeveloper698
    @aleksdeveloper698 11 months ago

    Would you suggest a 5600X for a low-powered PC server?

  • @LaBAdochnicht
    @LaBAdochnicht 1 year ago

    I'm about to do something like this - mainly game servers, but cloud storage will also be on the one machine. I'm just running everything on a Ryzen 7 3700X

  • @tanmaypanadi1414
    @tanmaypanadi1414 1 year ago

    Please make sure you are protected from CVE-2019-7630, affecting all Gigabyte motherboards - something about the APP Center driver being the problem

  • @vng
    @vng 1 year ago

    For the IOMMU issues with the chipset's PCIe slot, have you tried adding "pcie_acs_override=downstream" to the kernel boot command line to see if it helps with isolation?

    • @HardwareHaven
      @HardwareHaven  1 year ago

      I did and don't believe it helped. However I could've missed something. Too late now unfortunately haha. Appreciate it though!

  • @soniclab-cnc
    @soniclab-cnc 1 year ago

    I have also had trouble with GPU passthrough into Windows... I know how you feel.

  • @Dragonheng
    @Dragonheng 10 months ago

    AMD R9 3950X, I miss you... (R.I.P. 2019-2022)

  • @RADIUM108
    @RADIUM108 1 year ago +1

    I wish I had such deep knowledge about technology like you. I mostly didn't understand what you did, but I love home automation too

  • @dominick253
    @dominick253 11 months ago

    I'm setting up a MikroTik RouterOS VM right now in Proxmox. I just want it to be a backup in case my Pi 4 OpenWrt router goes down for any reason. I already made a copy of the SD card for the Pi, so if it just gets corrupted I can swap it, but if the whole thing goes down it's nice to have something to switch to.

  • @Mehdital89
    @Mehdital89 1 year ago

    Idling at 118 watts sounds a bit too high! I have a similar setup (a Windows VM with a 3090 passed through, an i5-13400 CPU, and DDR5 RAM), a NAS with one 14TB HDD, and two Linux containers for Plex and qBittorrent. It idles at 60 watts. I noticed that the idle power is higher when the Windows VM is off - it seems the GPU consumes much less power when it has drivers loaded.
    I'm also currently building a second replica, but with a weaker GPU (GT 710), and it is idling at 30W

  • @TrisTanster
    @TrisTanster 1 year ago

    I don't know if this is the case for Proxmox, but... shouldn't you configure the NVIDIA GPU to use OVMF rather than the default nouveau driver?

    • @HardwareHaven
      @HardwareHaven  1 year ago +2

      My understanding is that you don’t want any driver in use because you don’t want the hypervisor OS to use the GPU. I could be way off though haha
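
      (That matches the usual advice: keep host drivers off the card entirely and hand it to vfio-pci early. A sketch - the vendor:device IDs are placeholders; get the real ones from lspci -nn.)

        # /etc/modprobe.d/vfio.conf
        blacklist nouveau
        blacklist nvidia
        options vfio-pci ids=10de:1f08,10de:10f9
        # then: update-initramfs -u && reboot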

  • @totallymage6507
    @totallymage6507 1 year ago

    Not your fault on the GPU passthrough - it's just some NVIDIA corporate shit that doesn't work well with VMs, forcing everyone to buy dedicated hardware that has a 500% markup over the materials it's built with, rather than just using consumer/off-the-shelf hardware.

  • @kylesther
    @kylesther 1 year ago

    How's the disk performance and long-term stability when they're passed through as virtual disks?

  • @EmilePolka
    @EmilePolka 1 year ago

    I wish we had an M.2 NVMe GPU (not those stupid adapters). Apparently ASRock made one, but I can't find it. That thing would be really helpful for KVM stuff with an iGPU-less processor like this.

  • @matid8453
    @matid8453 1 year ago

    Do you have plans to make a video about a PXE server - maybe the easiest way to set one up?

  • @samiul16
    @samiul16 1 year ago +1

    Don't you need a 2nd GPU for passthrough to work, since the 3950X doesn't have an iGPU?

    • @HardwareHaven
      @HardwareHaven  1 year ago

      Not necessarily. I was able to boot without any display adapter and still access the Proxmox web UI. And I tried running it with two GPUs just to make sure.

    • @samiul16
      @samiul16 1 year ago

      @@HardwareHaven I have a GT 710 as primary and don't have any spare PCIe slot for a 2nd GPU. Can the primary GPU be used for hardware transcoding, or passed through, while also serving as the primary GPU?
      Keep doing these vids, great content 👍

    • @HardwareHaven
      @HardwareHaven  1 year ago +1

      It can be used for hardware transcoding on a service or container running in the host OS, but can’t be passed through to a VM while also running on the host

    • @samiul16
      @samiul16 1 year ago +2

      @@HardwareHaven Please make a video on this topic:
      Proxmox with just 1 GPU - transcoding, full utilization

  • @aldo9897
    @aldo9897 11 months ago

    Hi, you should make a video about the Lenovo ThinkStation P520 workstation (Xeon W-2135) as a Proxmox server

  • @thk6634
    @thk6634 1 year ago +1

    GG 100K subs

  • @xabson18
    @xabson18 1 year ago +2

    I had a similar error with a Tesla card; the solution was to enable "PCI Express 64-Bit BAR Support" in the BIOS

    • @chromerims
      @chromerims 1 year ago

      Putting this tip into my notes 👍
      Thank you.

  • @InsaiyanTech
    @InsaiyanTech 1 year ago

    I've got the same CPU and I'm trying to do the exact same thing - I just don't know what the best motherboard to pair it with would be for this project

  • @HeitorCardoso-v7x
    @HeitorCardoso-v7x 10 months ago

    Q: "My ENTIRE Home-Lab On A SINGLE CPU???"
    A: "Yes, it's possible, but it wouldn't be fun...."

  • @KisameSempai
    @KisameSempai 1 year ago

    That's why I stopped using Proxmox... GPU passthrough was always hit-and-miss and behaved weirdly at times. ESXi worked much better for GPU passthrough

  • @lumpiataoge9536
    @lumpiataoge9536 1 year ago

    What are the possible failure scenarios if I use non-ECC RAM with TrueNAS (no containers, just a pure file and media server)? Will I experience data corruption, given that the ZFS cache uses RAM?

  • @alex590263
    @alex590263 1 year ago

    Is the system undervolted? Maybe try undervolting the CPU to get way better efficiency