I think you missed an important point on this system. This is what I call a “nickel and dime” server. If you can’t come up with $800 at one time you can still build this server $100–150 at a time and end up with the same result while continuing to use it in the meantime. I think this kind of build is more relevant in today’s economy than a pure budget build.
Home servers are rarely purchased in their final form. Even servers that might cost $1000 are going to get reused, modified, recombined, or just gutted. I get what you are saying, but only full-on enterprises actually buy a server, use it, then dump it and replace it. In that sense, EVERY home server is going to be a "nickel and dime" one.
@@Prophes0r Meanwhile, me and my mini PC as a home server xd
My home server is a discarded machine. If you are in the business at all you run across these all the time. I have this exact same model in my office that I am using just for Kali. My home server is a mini. Plenty enough to do CTFs and Pihole and all those other little things one uses at home. I know people who have taken discarded business servers home and then been shocked by the rise in their electricity cost. You can grow weed indoors for the electric cost of a real server or two. A mini works well and has very low power usage.
Today's economy of record GDP growth, record jobs numbers, record stock market figures, and incredibly low unemployment? That... economy?
@@tim3172 Hrmm... How about... Record income inequality? Record number of people living under the poverty line in "developed" countries? Housing inequality so bad that several countries (including the USA) have declared it officially an "emergency"? I don't know how YOU judge whether an economy is 'working', but I judge it based on whether or not it achieves its stated purpose. An economy is SUPPOSED to "...manage the availability of resources so that those who need them have access..." Our global economy is broken, likely beyond repair. It doesn't do its job. It is a failure. The fact that you can point at "...some numbers go up..." as a sign of success is part of the problem. "Number go up" is not a sign of success. It is often a sign of failure.
I'm really happy with my setup. Two e-waste laptops, plus a Raspberry Pi 5 running as a QDevice, for Proxmox. Each Proxmox node backs up to a backup server on the other node. And although I don't have anything like HA, I do have redundancy in many of my services: either there's a hot backup or I can just click on another one on the other node and be back up and running. Dual Piholes, dual Tailscale VPNs. Even a backup for my router if it goes down.
Hell yeah brother
I'm rocking a Dell PE-T110ii with 4x8TB drives, 16GB DDR3, a Tesla P4, and a Xeon E3-1260L. I bought this a year ago for $50, completely intact, and have been happy with it so far. This is my NAS, Plex, MC, Cloud, file-sharing server, and I love it. Who needs screaming fast & expensive hardware when the only person I do this for is myself? I did the old Opti 9020 USFF with USB drives+enclosures, upgrading to more RAM, an i7-4785T, and an additional SSD in a DVD adapter, because it was CHEAP and yet capable. I'm not spending ~$800, but I'm not doing the same level of services you are. But this kind of approach, where you utilize old tech to suit your needs, is really the best way, imo, and also more fun to see what you can reliably squeeze out of it while still having the system do it without breaking a sweat.
55W vs 850W TDP LOL
Love the P4. It's a beast of a card in that form factor.
A Tesla P4 for $50 by itself would be crazy, you had to have bought that from a friend.
Your starting machine plus the 2 4TB drives is my current home lab, almost exactly, just in a different box. I love it. It's effectively free for me since it's an old PC of mine.
Quick, someone give this guy a medal
Dude, this took me back, way back, to 1991. I had a 386sx. I used it to host a BBS as well as use it for my Turbo C coding and compile environment for college. I had 4 meg of RAM and 40 Meg of HDD. (Not a typo.) I don't think you're as old as me, but something tells me, you can relate to those specs, if only from having learned about them in school. Great video. I spent my career as a back end guy, mostly BI and Data Warehouse systems. I'll admit, I should have a home server, but after 27 years of living the "dream" I fell back into my old hobby of Flight Sims, X-Plane and MSFS, and picked up writing fiction to satisfy that old creativity itch I used to scratch with coding. Great video, I'm a subscriber now.
Dang, nice. Yeah in 1991 I was being birthed so I don't really remember much from that year lol
@@RaidOwl and I thought I looked old. ROTFL. Trust me, it's the amazing hairstyle and gray beard. I'm with you. I'm pretty sure there's an unwritten law about hardware/backend people and beards with lack of hair, including the women.
😮 This is almost exactly what I was starting with in 1991. I had a Radio Shack “Tandy 1000 386SX/33” (yes, 33 MHz!!) with 2 MB of RAM soldered in, which I upgraded to its max of 10 MB (2 MB + a pair of 4 MB) and a 40 MB HDD (upgraded to 540 MB), and added a 33.6 Kbps dial-up modem card. I learned a ton from this, which got me into system building and eventually IT. After 20 years of being a database/datacenter admin -> mgr -> integrator -> director -> architect for a medium-sized retailer across 3 US states, I have great memories and an appreciation for that old PC and the journey it sent me on. Cheers!
Still amazed in retrospect by how fast mainstream HDDs went from tens of megabytes, to tens of gigabytes, to hundreds of gigabytes going from 1990 to 2005, only to hang around the 250–500GB range all the way until 2015 or 2016. Heck, a lot of people are still picking drives in that 250–500GB range just because they don't NEED more space.
Similar consolidation that I just did was getting 5 $50 mini-PCs, a $10 TP-Link switch, and a Synology NAS. The reason for this is that some people just give away a blank Synology if it doesn't have storage, but if you already have hard drives then it works well. I have two 6TB hard drives I've had for a while that I put in the NAS for long-term storage and to back up my Proxmox nodes. The 4 mini-PCs have 250GB SSDs each that you can get for free from Micro Center during their promotion. So in total I have about $300 in a small "server". All plugged in to the same switch, they can be VLAN'd off, and I 3D printed a mount for them all with the switch so it's just one singular box with a few power cables coming out. But if one node completely fails me, like it lights itself on fire and I can't fix it, I still have 4 other nodes to pick up the pace until I get that node replaced with any other mini-PC. Works well for me!
Mini-PCs are very good options. I have 5 with 64GB RAM, 16 threads and 1TB NVMe each... everything that my small business needs for less than 100W
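For anyone wanting to copy the multi-node part of this, joining the mini-PCs into one Proxmox cluster is only a couple of commands - a minimal sketch, where the cluster name and the first node's IP (192.168.1.10) are made up:

```
# on the first mini-PC
pvecm create homelab

# on each additional mini-PC, pointing at the first node
pvecm add 192.168.1.10

# sanity check: quorum and member list
pvecm status
```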
One thing if you're looking at adding a GPU to your NAS: Sparkle has a single-slot LP Arc A310 for $99. It's got almost the gaming performance of the RX 6400, but it has phenomenal encoding for H.264, H.265, and even AV1. A lot of people buy Arc GPUs just for the AV1 transcoding, and this is the cheapest way to get into that, plus it fits into any PC. Even if you're not gaming on your network-in-a-box, it's great for passing through to Plex and having better transcoding than an older QuickSync GPU.
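If you want to sanity-check the Arc card's AV1 encoder before pointing Plex at it, something like this works on Linux - a rough sketch, assuming a recent ffmpeg (6.0+) built with VAAPI, the Intel media driver installed, and the card at /dev/dri/renderD128:

```
# hardware decode + AV1 hardware encode, audio passed through untouched
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
  -i input.mkv -c:v av1_vaapi -b:v 4M -c:a copy output_av1.mkv
```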
Did something very similar with an HP EliteDesk 800 G4... i7-8700, 64GB RAM, and a couple of network cards (2.5G). Installed Proxmox and then the possibilities were endless at that point. It is the center of my home automation and NAS backups, plus an extra Windows 11 VM I use for testing. The i7-8700 is 6 cores / 12 threads. These are awesome for homelabs!! I think the G4 with an i7-8700 gives the most bang for the buck.
Thank you for the idea!!
Kudos for calling out the lack of backup options on a single-server homelab. Newbies beware. How do you restore your VMs when your NAS is virtualized on a server which is down? Would love a video on backup options for this scenario tbh.
I'm gonna use a Raspberry Pi with an external drive running Proxmox Backup Server. Defeats the "single PC" setup but it's not too disruptive.
Single PC is a single point of failure for everything. Also a VM for a home server is asking for trouble, too many moving parts with questionable benefit.
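One low-budget answer to the single-box problem is backing the VMs up to a disk that isn't part of the virtualized NAS - a minimal sketch, assuming a USB drive already added to Proxmox as a directory storage named 'usbbackup' (the name is made up):

```
# one-off snapshot-mode backup of VM 100 to the USB storage
vzdump 100 --storage usbbackup --mode snapshot --compress zstd

# or nightly for all guests via /etc/crontab:
# 0 3 * * * root vzdump --all --storage usbbackup --mode snapshot --compress zstd --quiet 1
```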
As an idea (on the cheap): grab the cheapest N100 mini that has 2.5G networking, SATA and NVMe, and use an M.2-to-PCIe adapter with an HBA breaking out to 8x SATA. Power the drives externally, and I'm curious whether you could have a dang nice NAS for less than $300 minus drives. I'm thinking of trying a backup TrueNAS in my home lab with this hardware... 'cause my other 5 servers want another friend?... and because I can.
I have a couple small used PCs, one running Proxmox VE and the other Proxmox Backup Server. I'm working on an off-site clone, but mostly just for my file storage. Most of the stuff I run is mostly for fun, not critical, and if it all fails I wouldn't mind rebuilding it all and doing something different. I just don't want to lose my photos and stuff.
I'm running my homelab on a Dell 3060 that I got for free, i3-6100T and 16GB of RAM. Upgraded it with a 1TB Crucial MX500 for Proxmox and added an 8TB hard drive for file storage, and it's hosting Jellyfin, AdGuard, NAS, Home Assistant, Tailscale, the entire *arr suite for Linux ISOs. Not bad for a system that was destined for the trash.
I managed to not understand any of this server stuff but stayed hyped up
Nice to see a video about running a home lab on a smaller scale. It sucks that the PC can't support a 7th gen CPU. I'm running a similar setup but with an EliteDesk G3 SFF and it does support both 6th and 7th gen. I picked up a barebones unit for $28 CAD and installed a 7th gen CPU and 16GB RAM that I already had. It also has onboard NVMe so I bought a new NVMe drive for it. I already had 2 x 5TB drives that I installed and it all works great with TrueNAS Scale.
Yeah I ran a 7700k in my main rig for so long. Woulda been a cool throwback to slap one in here haha.
I have pretty much the same setup. HP G3 midi tower but with an i3-7100T. It works great and literally sips power :)
I think it can run 7th gen, but only non-K chips. I think the only upside for this box vs the G3 is that it can run an E3 chip with ECC memory.
@@MC-dd8gj If I had one laying around I'd throw it in there and see
How hot do the hard drives get in that system? Airflow seems limited based on what I saw in the video.
A lot of the Docker container setups offer a hidden efficiency option - always opt for the Alpine-tagged version vs Debian or other; the disk footprint is about 50% smaller. It would be interesting to do a follow-up to this taking some of the networking advice that others have stated, and also trying some chaos testing on the machine - that would be awesome. Also, another idea for a new home server would be a local LLM server, connected into Home Assistant. Unfortunately that would be pretty hard to make performant and inexpensive, but I would love to see you try!
Yeah the few web servers I've spun up use the Alpine image
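To see what the Alpine tip actually saves, pull both tags of the same image and compare - the numbers below are rough and vary per image:

```
docker pull nginx:latest   # Debian-based, roughly 190 MB
docker pull nginx:alpine   # roughly 50 MB
docker images nginx        # compare the SIZE column
```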
HP Z840 workstation. $300. Built like a tank. Amazing cooling. 16 DDR4 ECC slots; 2 PCIe x16, 2 x4, 1 x1. Add a 2nd CPU and you get another x16, another x8, and the x4 becomes an x8. 4 hot-swap 3.5” drive bays, SAS or SATA. 2 x 5.25” bays, 6 SATA and 8 SAS onboard, 825 or 1125W PSU. Oh, and it's super quiet. If you find and install the 3D vapor CPU coolers, custom engineered for this box, you can run it full power with 2x CPUs, and there is no difference in noise level at idle or maxed out.
The main issue here will be the electricity cost to run that...
That is a cool setup. I have been running a pfSense VM for 3 years, no issues. You could have saved a PCIe slot by using one 2.5G port for the router WAN and the other as the main link out of Proxmox, then using the vNIC for the LAN port in pfSense and TrueNAS. The virtual NICs in Proxmox are 10G.
They do have 4x NIC cards as well...
I love these little boxes. They're waaaay more than suitable as a home server and a few SBs have caught on to that idea a year ago when I was trying this out on hardware that rides the line. Turns out I don't need much and I really mean that. Athlon 64, 2GB DDR2, ~20TB storage, 10GbE SFP..✓💯 Works out great. Something like a multicore chip with onboard graphics would be like a rocket. Anything made within the past 3-6 years would definitely be put to better use as a workstation but you get the idea. There's a few lines to be drawn for this and that. It's best to define them and then build based around that.
Well done video sir. This kind of info is great for people who are kinda serious about getting into homelab. Not too cheap, but keeping it realistic. I started 10 years ago with 2 HP D530s. Gotta start somewhere!
I'm running my entire home server collection on an ODroid H3+ with two 6 TB hard drives in RAID-0. This is just running a desktop-less install of Debian as a host for Docker. And as you pointed out, all the stuff in Docker doesn't take up that much in terms of resources. The H3+ sports a Pentium Silver N6005 and 16 GB of RAM. Hey - it works for me!
That's a relatively recent architecture despite being a "Celeron-esque" chip -- really good pick for this kind of work imo.
I wonder how the disks are connected? In a usual PC you can put them inside.
@@slavic_commonwealth in his case, probably SATA, it has two ports on the board
I'm running Jellyfin and Docker containers on a Pi 4 4GB and a Samsung 870 SSD
Had no parity or backups for my data in my workstation, so I finally decided to invest in building my own home server. Just ordered a Dell with an i7-8700, GTX 1080, and 64GB RAM for $370 for the base of my first home server and NAS. Can’t wait to get tinkering. Excited to see how the 1080 with unlocked drivers compares to my 2080 Super (with Game Ready drivers) on my Windows machine for Plex transcoding. Decided to spend the extra premium on 8th gen over 7th gen for better performance when hosting game servers. Starting off with 6 used 8TB HDDs before I slowly swap them out for larger 18TB NAS drives as things progress. Snagged a Classico Dark Rock case and an HBA card for some extra SATA ports. Assuming there are no issues with the HBA card and the stock PSU, I am quite happy with the hardware I was able to snag for under $900. Now I just need to save up for a working UPS. The last one I ordered new was defective and wouldn’t charge the battery. I got to keep the defective unit, so there is a chance only the UPS or the battery is defective. Debating if I should just risk it and spend the $90 on a used UPS and chuck in my battery that may not be defective, or if I should just snag a second unit for troubleshooting purposes at full price and just use it on another system.
An alternative that I found with my personal system: pass the integrated NIC to pfSense and then create 2 extra virtual NICs that are VXNet. The first one should be the client network, associated with a physical port. The second is a service network for the VMs to communicate with jumbo frames. This helps all comms stay within the box. The service net isn't required, but you can get some speed boosts and lower CPU usage with jumbo frames.
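A minimal sketch of that internal-only service network in Proxmox terms - a bridge with no physical uplink plus jumbo frames, in /etc/network/interfaces (the bridge name and address here are made up):

```
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none    # no physical port: traffic never leaves the box
        bridge-stp off
        bridge-fd 0
        mtu 9000             # jumbo frames for VM-to-VM traffic
```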
I downgraded everything to a UniFi Dream Machine, a UniFi 24-port PoE switch, and a Synology 5-bay NAS. The simplicity makes life good these days, and massive power savings compared to all my old rack equipment.
I really liked this concept. I just built an HTPC based on the N100 that is also a NAS / Docker server for my applications, and I can even play retro games on it using RetroArch - just using Debian as a base system, no virtualization at all. And I'm very satisfied with the results so far.
I'm still a fan of using retired server gear like E5 Xeons and Supermicro boards. My "new" main server (E5-2680V2/X9SRL-F) is idling at around 60W and is literally whisper quiet. This stuff is old, but you get more IOMMU groups for passing devices through to VMs, and DDR3 registered ECC RAM is dirt cheap. The board even supports PCIe bifurcation, so I can use multiple NVMe drives quite easily. For smaller setups (router/firewall/Home Assistant/backup server) I prefer the Intel N100 series. Low cost, very low power consumption and more than enough computing power for these tasks using Proxmox and TrueNAS Scale.
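If you're evaluating a board for passthrough, you can list its IOMMU groups before committing anything to a VM - the usual snippet, which works on any Linux host with IOMMU enabled in firmware and kernel:

```
#!/bin/bash
# print every IOMMU group and the PCI devices inside it
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done
```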
I agree about preferring to have a separate firewall, but I also want to virtualize it to try out different platforms over time. My go-to has been a single device that I only run firewall software on.
I did that over 22 years ago :) Then 14 years ago I split it back out into a few dedicated systems, storage+ha virt, which is a bit more convenient than doing it all on your workstation.
Love this video. I went from a 24U rack to a 15U and just ordered a 9U, which is bigger than I need, but I wanted some shelf space - really only need 6U but yeah lol. It seems as the years go by I keep looking for ways to downsize my gear to keep things minimal and clean while maintaining all of the functionality I am used to. It's cathartic to purge gear I don't need and get things to just the core of what I want!
With some bridging setup you wouldn't need to connect the pfSense LAN port to your LAN switch separately. Make the 2nd (LAN) port of the router VM a virtual NIC, and bridge it to the 2.5G port that's already connected to your LAN. A huge bonus for efficiency: now the server itself doesn't have to go out and back in via an external switch for all its Internet access.
Ehhhh, while that makes sense for convenience and performance's sake, it's not the most secure solution on older hardware. Then again, everything about this box is a performance issue so... 🤷
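Security caveats aside, in Proxmox terms the bridged-LAN idea is one command once vmbr0 is the bridge that owns the 2.5G LAN port - a sketch, where the VM ID 101 is made up:

```
# give the pfSense VM a virtio NIC on the existing LAN bridge
# (WAN stays on its own dedicated / passed-through port)
qm set 101 --net1 virtio,bridge=vmbr0
```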
I really like that you picked Skylake for this machine. The difference between Skylake and previous generations was HUGE, and imo for used gear it's smack in the middle of best bang for your buck.
Yea, but it would be preferable to have at least 8th or 9th gen for the better QuickSync.
@@rudysal1429 great point tbh, for anyone building a machine that will be used for transcoding/streaming that's critical stuff.
This is a lot like my setup, though I didn't repurpose an old PC. I have a custom-built Ryzen 5600G w/ 32GB RAM and a 2.5G Intel NIC PCIe card. Proxmox running on the host on a 1TB NVMe. 3-drive RAIDZ1 pool directly on the Proxmox host. I also put Docker on the Proxmox host rather than in a VM. I'm not using Portainer. A Samba Docker container is the NAS rather than TrueNAS. My TP-Link router is my firewall/router/wifi, but I do have an LXC container running dnsmasq for DHCP/DNS. I don't have all the nice GUI interfaces you have (TrueNAS, Portainer, pfSense, Pihole), but I prefer the CLI most of the time, so it works for me.
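For anyone wondering what "a Samba container as the NAS" looks like, here's a minimal sketch using one popular community image - the image name, paths and credentials are all examples, so check the image's own docs:

```
docker run -d --name samba --network host \
  -v /tank/share:/share \
  dperson/samba \
  -u "myuser;mypassword" \
  -s "share;/share;yes;no;no;myuser"   # name;path;browse;read-only;guest;users
```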
So happy with my Anker chargers, power banks and cables; they've lasted me 7 years so far and I'm happy with their products for charging needs :)
Eh, I had a dual-tower build for a while using spinners and lots of juice, simplified it down to a few mini PCs, and now the cluster has grown again. At least with this modular design I can shut down the power-hungry nodes and extra stuff when I'm not playing in the lab and spin them all up when I want dedicated resources for each VM/LXC.
You don’t need it, you WANT IT!!
Preach!
story of my life really
I want a Dyson sphere to see how far we can go with gaming. Home server!? Nah, I want my own home Matrix.
Do you also want the humanoid CPU for your Matrix?
100%. I'm looking at breaking up my Frankenstein's monster of a Linux desktop/server situation into a server(/s) and I don't _need_ to make them rack servers, but I want to build a rack and work with real enterprise hardware for experience and fun.
You don’t need 3 ports used up for WAN, then LAN to the switch and back to Proxmox; you can use only 2 physical ports and 1 virtual NIC that bridges pfSense and Proxmox.
I'm going all the way - the plan is, if all goes well, to do a Thermaltake Core WP 200 chassis computer system for a rackmount PC with two machines and enough drives. It will eventually be connected to at least 2 monitors. Already have at least 4 other cases too which can be used to make a good overall home lab consisting of at least 5 movable sections. I keep wondering where to keep the large chassis. There is a dilemma - if it is kept upstairs, there is a staircase and that might not be a good idea, so better keep it near the entrance to the home in front of a large window with access to lots of air coming from the outside. If all goes well, buildup should start in November provided I can maintain my income in October. Thanks for the video! Best wishes to you from Iceland.
I grabbed an old Dell R330 the other day for under 200 bucks. Found that with the backplane installed I could add a dual 10Gb Intel NIC and an NVMe-to-PCIe adapter. So I have 2 disk drives at 8TB apiece, a 128GB OS drive with a 128GB backup for that, a 1TB SSD for faster storage, and a 256GB NVMe drive, which really was just a test to see if it would work. And it does. So with the exception of a video card I can run everything there and it doesn’t run too hot. It’s a TrueNAS Scale server though, hosting all my 24/7 services, so I wanted something a little more efficient than my previous monstrosity. I have a game server on the side though that is a 10th gen i5 which I built for under 400, so I think it’d be possible to upgrade even your cheap server here pretty easily if you’d like newer hardware.
Would like to see you build a low-wattage NAS that holds a lot of storage. I've been asking so many groups - coming from an R510 with 60TB, I'd like to build a low-wattage server just for storage. No YouTube video I've found shows a full breakdown of the lowest-wattage setup, really, except for one, and the CPU/motherboard are the hardest things to get.
The biggest problems with utilizing older hardware I've found in my fairly recent explorations of homelab stuff - for my needs - have been the lack of RAM or RAM slots, and power consumption.
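On the power side, older boxes often idle noticeably lower once the kernel's power-management knobs are actually applied - worth trying before writing a machine off (needs the powertop package):

```
# apply all of powertop's suggested power-management tunables
sudo powertop --auto-tune

# then watch C-states and per-device power estimates interactively
sudo powertop
```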
Power usage is the main issue for me. Because yes, you can do good things for a fair price... But when you have to run that 24h/day... then it starts to cost you a lot...
@@LtSich Yeah, I can have hundreds of GBs of storage in mirrored volumes, but if it costs me more than 2€ a month per 100GB, I could have just paid Google, and I do think there are cheaper options, too. 1 TB on Storj is $12 a month.
This is exactly what I was looking for on a project I'm looking to start up, thanks for this!
I’m so glad you posted this because I really need to consolidate. Thank you.
I agree with the physical vs virtualized firewall conundrum; I would rather have the dedicated firewall. I was running OPNsense on an N100 firewall appliance and swapped over to an RK3588 running OpenWrt. There are some things that can be done more easily under OpenWrt than in OPNsense, like policy-based routing. In the end, it is smaller, faster, and more power efficient with the RK3588. I may very well virtualize it at some point, just to try it; but in the end, I'll leave the RK3588 sitting off to the side powered off as a backup, or just run it and keep the virtualized one as backup.
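For reference, the policy-based routing mentioned here is a small config stanza on OpenWrt with the pbr package installed - a sketch, where the policy name and subnet are invented:

```
# /etc/config/pbr - send the guest subnet out a VPN interface
config policy
        option name 'guests_via_vpn'
        option src_addr '192.168.2.0/24'
        option interface 'wg0'
```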
Damn, that transition at the start was so clean!!! Who's your editor?
i’m always excited to see your new videos, they’re fantastic!
Great video. I had a $150 HP Z2 Mini G3 (i7-6700, 16GB RAM, 256GB NVMe, 1TB 5.2K HDD) that has been my server. I recently went to eBay and bought some old server-grade stuff: 96GB of ECC RAM, 9 10K SAS 600GB HDDs, a Quadro K4200, an HP DL360 G9 with dual Xeon E5-2620s, a Cisco 3560 PoE, and a C897VA-K9, all for about $500. It will be interesting to compare it once it arrives to your $800 build.
homelab youtuber finally realizes he doesn’t need the amount of compute he has for the 10 containers and VMs he really uses
I’d wager most people know they don’t “need” the amount of hardware they have
I loved the humor in this! I snagged the 9th gen big brother of the base system for $100 last week and have been trying to figure out some uses for it. This is very helpful.
These 6-7th gen systems can be modded to accept 9th gen (and 9th gen refresh) CPUs. You can pick up a 6c/12t 9.5th gen engineering sample CPU for $50-$60. It will have much better QuickSync. A better memory controller. It will boost higher. It will use much less power. Until these ES CPUs start getting scarce and start to go up in price they are a REALLY cost effective boost to these older systems.
@@rpm10k. It is a BIOS modification. It involves changing a few flags, removing unneeded (old) CPU microcode, and adding microcode for the CPUs you want. There are programs like CoffeeTime that can do the modifications. Search around and you should find some tutorials. It certainly isn't as simple as plugging in the new CPU and loading stuff off a USB drive, but it is pretty easy. I wouldn't attempt it on your ONLY system, since things can go wrong and you might need to use another machine to recover it. But once it's working it keeps working.
I bought a $55 thin client and upgraded it slightly to an NVMe and 12GB RAM. Runs Docker and a bunch of services :) you definitely don't need a big bunch of money, depending on what you're doing.
Kinda doing the same: about to replace my Dell R210 II server with an SFF PC to be my firewall. Trying to decommission my R710 that is my hypervisor (ESXi), also running a virtual TrueNAS, and see if I can move those VMs to my R510 that is my NAS running Unraid. The one thing I would miss moving to a regular PC over my servers is iDRAC or iLO. I have never once hooked a keyboard/mouse/monitor to them; I always do everything through remote management.
A month ago I switched from a Fujitsu Esprimo desktop with a 4th gen i5 to an HP EliteDesk 800 G3 with a 7th gen i5, mainly due to power consumption. On the Esprimo I had WS2019 with Hyper-V, which wasn't the most optimal solution, although it had served me since late 2019. On the HP, on the other hand, I'm running 'just' Proxmox, a virtualized TrueNAS Core, Docker with a few containers, and two Pihole instances. I successfully migrated the Windows VM from Hyper-V to a Proxmox VM. Overall it's nice and smooth now, and power consumption vs computing power is OK for me.
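The Hyper-V-to-Proxmox move is mostly one disk-import command once an empty VM exists on the Proxmox side - a sketch, where the VM ID, path and storage name are examples:

```
# import the copied-over Hyper-V disk into Proxmox VM 100
qm importdisk 100 /mnt/transfer/windows.vhdx local-lvm
# then attach the imported disk under the VM's Hardware tab and boot
```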
Now here goes the usual warning to never store all your eggs in a single basket... My dream homelab would be 3-4 identical passive systems without any hard drives at all (I've got a dedicated Synology DS1821+ for storage) and with at least two NVMe drives for the host OS and some local storage, and all of that at a reasonable cost to boot. Or, even better, some kind of modular blade-like system so that I could add more nodes in the future should I ever need them. Hmm, that actually sounds like an interesting project.
Isn't having your whole storage on a single NAS "storing all your eggs in a single basket"? With that config I may look at something like Ceph for things like Proxmox and then, yes, use the NAS as a pure storage solution.
@@JoaquinVacas No, because the NAS not only has redundancy in the form of parity drives, but also backs itself up to a cloud. Besides, I would rather trust Synology devices backed by a 5-year warranty with an expedited replacement program to deliver than some random old hardware which was used who knows how and in what conditions. My previous Synology NAS worked for 12 years before I upgraded it, with zero issues, and it still technically works now - I replaced it because I wanted to upgrade, not because I had any problems with it.
@@asmi06 Cloud backup is always a plus. One thing I'll try to achieve is DFS under TrueNAS SCALE, as I have two different locations where I can have NASes replicating between themselves. (1000km from each other, my parents' house 😆)
Running your specific entire homelab on an old PC like that is a bit of a stretch, but it would be more than enough for someone starting out and learning. And if you get a few more of those desktops, you can easily match your setup. I have 5 of those desktops and run:
- pfSense firewall
- 2 ESXi hosts (each running about 6 VMs)
- SAN
- Backup server
- Plex server (running on my old gaming system)
Total cost: about $800
Total power draw at idle: 200 watts
I have learned TONS from this setup
I mean yeah that’s kinda the point. The video is exaggerated but the premise is that you can do a lot with minimal hardware and customize it to your needs.
@RaidOwl 100% agree. I started out on an old HP EliteBook laptop running just Plex. If you want to learn you must start somewhere, and there's no need to spend thousands. Now I'm running a massive network at home, the complexity of which rivals that of a medium-sized business, so I may have gotten carried away.
200W idle is like $450 a year in electricity, yikes. There is zero reason to run a separate Plex server for like $10 a month plus a Plex subscription; at that point you might as well pay for a streaming service or two. I felt bad about my 60W average but now I feel better.
@kaminekoch.7465 Sorry, should have clarified that I'm in Florida, so that comes to about $170 per year in electricity costs. But I have since consolidated my systems to VMs running on one ESXi box (including the Plex server) and idle usage is now around 90 watts.
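For anyone checking these numbers, idle-draw cost is simple arithmetic - the per-kWh rate is the only assumption:

```
0.200 kW x 24 h x 365 d  ≈ 1752 kWh/year
1752 kWh x $0.25/kWh     ≈ $438/year   (the ~$450 estimate)
1752 kWh x $0.10/kWh     ≈ $175/year   (roughly the Florida figure)
0.090 kW -> ~788 kWh/year -> ~$79/year at $0.10/kWh
```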
My Dell OptiPlex 5070 with an i5-9500 was $160 on eBay and eats everything I throw at it. I was actually hoping it would start underperforming so I could upgrade it, but it's just a BEAST.
I got the Nvidia RTX A2000 12GB for my server when looking for a LP GPU. Does set ya back a lot more than your choice! I paid $450 for mine. But it has a lot of promise for running AI tasks and anything else I throw at it. Def not an alternative at the price point but this was my favorite after looking at all the options.
Recently upgraded my Unraid box (previously running a Phenom II) to a Xeon E3-1226 v3, a Supermicro X10SLL-F, and 16GB of ECC RAM for less than 100 bucks. eBay is useful sometimes.
In the end, having that running 24/7 might be a better idea than a rack full of servers, firewalls and switches. And you can still run servers with lab stuff.
I used to run my firewall in proxmox, but I had to mess with the computer one too many times, and I really did not want to lose internet for no reason again. I bought a thin client and put pfsense onto that, and I haven't even had to look at it even once after I set it up.
13:50 - yes, having a single point of failure in virtualised pfSense is a reason to be stressed. That's why I have a cluster of two Proxmox machines: one "production" and second "spare" - with an option to auto migrate critical workloads (VMs, LXCs) to the spare PC. Most of the time the "spare PC" is just shut down, booted only in case of (rare) issues with the Prod. But as you said - it has a lot of consequences if your pfSense is down. Cheers!
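In Proxmox terms the failover to the spare box is one command per guest (or automatic with HA groups) - a sketch, where the guest IDs and node name are made up:

```
# live-migrate VM 100 to the spare node
qm migrate 100 spare --online

# containers use pct and restart rather than live-migrate
pct migrate 210 spare --restart
```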
7 out of 10? That's the nicest thing I've heard all day. Seriously though, I just spent $700 on an Epyc 7302, MB, cooler, and 128GB of RAM as an upgrade to my 7700k system that has been my server for a couple of years now. I just have to put on my big boy pants now and actually swap in the hardware.
My Proxmox server is an HP Elitedesk 800 G2 I picked up for just under $20 from a university auction. I had to throw in an SSD and a couple sticks of RAM but it's still got less than a hundred bucks in it. I'm currently building a NAS based on an HP Z440 motherboard, just need a CPU cooler and some RAM and storage now. Splurged and replaced the hex core E5-1650 v3 with an 18 core E5-2699 v3. Maybe I should make this one the Proxmox server, lol.
Just in the process of setting up an Intel NUC with Proxmox, pfSense and some virtual machines to play with some stuff. It was originally going to be AHV, as I use that for work, but unfortunately that requires a minimum of 3 physical SSDs. I was using ESXi, but it was more of a pain, though it was working! First thing was to get Plex up and running, as it didn't run well on my NAS! One of the reasons for the NUC was very low power usage, even when running maxed out.
You inspired me to do almost the same thing with a Dell T30. So far I've spent 200 USD including the server. The only thing I'll probably change is the processor, and only if needed, because I won't game on it but maybe stream. The E3-1225 v5 only has 4 cores / 4 threads; I'll look for a better Xeon.
For $800 I got an old Supermicro 4U chassis with an X9-vintage motherboard and CPUs, upgraded CPUs (to get 20c/40t at about 3.5GHz boost clock), 64GB of RAM, Noctua fans to replace the stock fans, Noctua CPU coolers, and an SSD for the boot drive. I wish I had held out for a couple years on that and gotten a used Threadripper system. It's just slow enough to be annoying on single-threaded workloads. I also got a GPU later on, but that's beyond the $800 limit.
I have the same model; I will upgrade it soon. Power consumption is the point here: with that money you can buy a second-hand Dell PowerEdge server, but the electricity bill will damage your pocket!
First thing I'd do with that before installing anything is make those x1 slots open-ended. That gives you more flexibility, and you could even add more cards which will just have lower performance because they have access to fewer lanes.
I have 3 devices for it. A J3160 box with four 1Gbit LAN adapters for pfSense. An Xpenology custom DIY NAS with a J3160 ITX board and 4x HDD. And finally a NUC13 i7 box with 64GB and Proxmox for virtualization, Docker, etc. I'm not on the team that has everything together on one device. The Internet has to work even while I'm updating the Proxmox server.
I'm needing to upgrade my gaming PC anyway, so I want and need to set up my old gaming PC as a home server and a gaming server. But I want to learn and research a lot of this free stuff because, you know, money - I like to save as much as possible haha
I'm getting a ton of value out of an old R610. They're cheap on eBay. Mine has dual Intel Xeon E5620s @ 2.40GHz, which give me 16 threads. Upgraded the HBA to an H200 flashed into IT mode to let the OS (Ubuntu Server 22.04) manage the array of 6 1TB 2.5" HDDs. The system boots off a 256GB SanDisk SSD. I plan on upgrading the CPUs and then the RAM to at least 64GB, though it maxes out at 384GB.
Rather than giving the pfSense/OPNsense VM 2 dedicated NICs, you could do with just a single physical NIC for the WAN link and have its LAN link be a virtual one on Proxmox's vswitch, along with all the other VMs on Proxmox's bridge port. Sure, it's all on a shared 1Gb port to anything external, but internally between VMs it's much faster. Also, later on, you could install a 2.5/10/etc Gb NIC and make it the bridge port for the vswitch. Saves a PCIe slot, particularly if your home network is only 1Gb.
Finally shut down the last of my old home lab hardware and am now running solely on the newer, much more power-efficient systems. The new lab is a 5-node Proxmox cluster with all-SSD storage (NVMe for host installation and SATA for VM data) configured in a Ceph cluster, with 10Gb networking spread across 4 ports on each node. The reason for needing 5 nodes was to ensure I had enough RAM available, as each node can only accept a maximum of 64GB of RAM whereas I had 192GB RAM in each HP server. So far the estimated amperage draw has dropped from 12A to bouncing between 4-5A. I should still be able to increase the efficiency by replacing some of the last older equipment still running.
Kept the same:
x1 Dell Optiplex 7070 SFF
x1 Dell Optiplex 7040 SFF
x1 Cisco Catalyst 2960X (plan to replace with a newer, more efficient Ubiquiti switch)
x1 HP ProCurve 2810-24G (plan to replace with a newer, more efficient Ubiquiti switch)
x1 Ubiquiti US-8-60W
x2 APC Smart-UPS 2200 (SMT2200R2X658)
x1 RPi 3B+
x1 ControlByWeb X410 WebRelay (RTDs for server room [house mud room] temperature monitoring)
x1 Synology DS418
OLD lab:
x3 HP ProLiant DL380p Gen8 servers
x1 Dell PowerEdge R720
x1 NetApp DS4246 (12/24 drives loaded)
x1 NetApp DS2246 (24/24 drives loaded)
x1 Dell R710
x1 Delta Networks ET-DT7024 (flashed with Dell PowerConnect 8024F firmware)
x1 HP ProCurve 2810-24G
NEW lab:
x5 HP EliteDesk 800 G3 SFF
x1 Ubiquiti USW-EnterpriseXG-24
I've been holding onto my single-slot low-profile GTX 1650 just in case I ever wanted to do a home lab and virtualize my gaming PC, but I did see a low-profile 30-series graphics card as well.
Interesting - I have one of these on a shelf that I stopped using because of power consumption. It was streaming video non-stop 24/7 using x264 because I didn't have a half-height GPU with NVENC, and I'm pretty sure the chip mine has doesn't have Quick Sync. Maybe it's a generation older, but I'll have to check the socket and see if there are Quick Sync chips available - thanks for that! I replaced it with a MeLE Celeron-based passively cooled mini PC which seems to use no power at all and streams really well off the iGPU, just as an unnecessary extra note.
My homelab is free old computers I got from work. I had an OptiPlex 7050 with a bunch of SSDs shoved in there as my TrueNAS, and I had Proxmox on one Lenovo Tiny M720q. Then I found a free OptiPlex 5000 from work... yeah, really new: 6 cores with HT, and I gathered 64 gigs of RAM... So now my homelab has gone from 2 computers down to one. The uptime and ARC cache kind of never get as good when I restart, but win some lose some. My computer + Dream Machine Pro + one access point + Pi 3B + modem idle around 52 watts, so it's not even that expensive to run. It was closer to 65 watts when I was on 2 computers.
@@RaidOwl We bought a company - they were a Dell shop, we are Lenovo - and they sent all of their computers up after we replaced them and they were getting recycled. Just pull the SSD out and boom, free computer. I like to up-cycle things. Granted, I've done this a few times and I have some tiny PCs to offload now!
When I first saw this video thumbnail I thought you were going to replace your home server with the new Minisforum MS-01 (since a lot of those review videos have been dropping recently - Hardware Haven etc). I think you probably could have done a lot of it, except the 3.5" HDDs. It already comes with 2x SFP and 2x 2.5Gb, plus an extra PCIe slot and a newer processor etc. Maybe in the future! Good video though!
I still think I got a banger of a deal: 192GB of ECC RAM and a 16-core Xeon with a 500W PSU and case for $550. Threw in a $70 10G copper NIC and $350 of HDDs. It has so much room to grow into.
@@RaidOwl After doing some research, it seems HP never updated the HP 600 G2 BIOS microcode to support 7th gen CPUs even though it is a Q150 chipset. I have H110 and B150 chipset motherboards that support 7th gen CPUs. Thanks for the reply - saved me from making a mistake buying one of these for a media PC.
For better or worse, my home ‘server’ is a Windows HTPC with Hyper-V VMs for OPNsense and Debian. It has one 2.5GbE NIC and 5 drives in a fanless Streacom case. AMD 5600G with 32GB of ECC RAM. I want to add a PCIe bifurcation card to add more NVMes and another NIC, someday. For now it’s been amazing!
AM4 hardware + a Ryzen 5700G (you don't need a GPU with that) - this is somehow the sweet spot. Maybe in some months AM5 + 8700G + DDR5 will be the same sweet spot.
If only the ProDesk/EliteDesk could run ECC memory I would totally hop on one - such nice cases across the lineup. I sell the "mini" to people quite often, from the G3 to now, and they are all happy. So for me it's a Haswell Xeon with 64GB ECC on an ASRock board :)
There are SO MANY used electronics out there. You could have gotten a NIC with more ports for the same price, and you could have gotten a low-profile GTX 1050 for that price as well. I rarely buy new anymore, except for solid-state memory.
That's what Wendell (Level1Techs) calls the "forbidden router" setup. Good times. I run a similar setup on my XCP-ng server. Love the content, appreciate you!
I think you missed an important point on this system. This is what I call a “nickel and dime” server. If you can’t come up with $800 bucks at one time you can still build this server 100-150 bucks at a time and end up with the same result while continuing to use it in the meantime. I think this kind of build is more relevant in today’s economy than a pure budget build.
Home servers are rarely purchased in their final form.
Even servers that might cost $1000 are going to get reused, modified, recombined, or just gutted.
I get what you are saying, but only full on enterprises actually buy a server, use it, then dump it and replace it.
In that sense, EVERY home server is going to be a "nickel and Dime" one.
@@Prophes0rmeanwhile,
me and my minipc as homeserver xd :
My home server is a discarded machine. If you are in the business at all you run across these all the time. I have this exact same model in my office that I am using just for Kali. My home server is a mini. Plenty enough to do CTFs and pihole and all those other little things one uses at home. I know people who have taken discarded business servers home then shocked by the rise in their electricity cost. You can grow weed indoors for the electric cost of a real server or two. A mini works well and very low power usage.
Today's economy of record GDP growth, record jobs numbers, record stock market figures, and incredibly-low unemployment?
That... economy?
@@tim3172 Hrmm... How about...
Record income inequality?
Record number of people living under the poverty line in "developed" countries?
Housing inequality so bad that several countries (including the USA) have declared it officially an "emergency"?
I don't know how YOU judge whether an economy is 'working', but I judge it based on whether or not it achieves it's stated purpose. An economy is SUPPOSED to "...manage the availability of resources so that those who need them have access..."
Our global economy is broken, likely beyond repair. It doesn't do 'it's job'. It is a failure.
The fact that you can point at "...some numbers go up..." as a sign of success is part of the problem.
"Number go up" is not the sign of success. It is often a sign of failure.
I'm really happy with my setup. 2 E-Waste laptops and raspberry pi-5 run in a q device for proxmox. Each proxmox node backs up to backup server on the other node. And although I don't have anything like ha I do have redundancy in many of my services whether there's a hot backup or I could just click on another one on the other node and be back up and running. Dual pinholes dual tailscale vpns. Even a backup for my router if it goes down.
Hell yeah brother
I'm rocking a Dell PE-T110ii with 4x8TB drives, 16GB DDR3, a TESLA P4 and a Xeon E3-1260L.
I bought this a year ago for $50, completely intact, and have been happy with it so far.
This is my NAS, Plex, MC, Cloud, file-sharing server, and I love it.
Who needs screaming fast & expensive hardware when the only person I do this for is myself.
I did the old opti 9020 USFF with usb drives+enclosures, upgrading to more ram, an i7-4785T, and an additional ssd in a dvd adapter, because it was CHEAP and yet Capable.
Im not spending ~$800, but I'm not doing the same level of services you are. But this kind of approach, where you utilize old tech to suit your needs, is really the best way, imo, and also more fun to see what you can reliably squeeze out of it while still having the system do it without breaking a sweat.
55W vs 850W TDP LOL
Love the P4. It's a beast of a card in that form factor.
A Tesla P4 for $50 by itself would be crazy, you had to have bought that from a friend.
Your starting machine plus the 2 4tb drives is my current home lab, almost exactly, just in a different box. I love it. It effectively free for me since it's an old PC of mine.
Quick someone give this guy a medal
Dude, this took me back, way back, to 1991. I had a 386sx. I used it to host a BBS as well as use it for my Turbo C coding and compile environment for college. I had 4 meg of RAM and 40 Meg of HDD. (Not a typo.) I don't think you're as old as me, but something tells me, you can relate to those specs, if only from having learned about them in school. Great video. I spent my career as a back end guy, mostly BI and Data Warehouse systems. I'll admit, I should have a home server, but after 27 years of living the "dream" I fell back into my old hobby of Flight Sims, X-Plane and MSFS, and picked up writing fiction to satisfy that old creativity itch I used to scratch with coding. Great video, I'm a subscriber now.
Dang, nice. Yeah in 1991 I was being birthed so I don't really remember much from that year lol
@@RaidOwl and I thought I looked old. ROTFL. Trust me, it's the amazing hairstyle and gray beard. I'm with you. I'm pretty sure there's an unwritten law about hardware/backend people and beards with lack of hair. including the women.
😮This is almost exactly what I was starting with in 1991. I had a Radio Shack “Tandy 1000 386SX/33” (yes, 33 MHz!!) with 2 MB of RAM soldered in which I upgraded to its max of 10 MB (2 MB + pair of 4 MB) and 40 MB HDD (upgraded to 540 MB) and added a 33.6 Kbps dial up modem card. I learned a ton from this which got me into system building and eventually IT. After 20 years of being a database/datacenter admin -> mgr -> integrator -> director -> architect for a medium sized retailer across 3 US states, I have a great memories and appreciation for that old PC and the journey it sent me on. Cheers!
Still amazed in retrospect by how fast mainstream HDDs went from tens of megabytes, to tens of gigabytes, to 100s of gigabytes going from 1990 to 2005, only to hang around the 250-500gb range all the way until 2015 or 2016. Heck, a lot of people are still picking drives in that 250-500gb range just because they don't NEED more space.
Similar consolidation that I just did was getting 5 $50 mini-PCs, a $10 TP-Link switch, and a Synology NAS. The reason for this is because some people just give away a blank Synology if it doesn't have storage, but if you already have hard drives then it works well. I have two 6TB hard drives I've had for a while that I put in the NAS for long-term storage and back up my ProxMox nodes. The 4 mini-PCs have 250GB SSDs each that you can get for free from microcenter during their promotion. So all total I have about $300 in a small "server".
All plugged in to the same switch, they can be VLAN'd off and I 3D printed a mount for them all with the switch so it's just one singular box with a few power cables coming out. But, if one node completely fails me, like it lights itself on fire and I can't fix it, I still have 4 other nodes to pick up the pace until I get that node replaced with any other mini-pc.
Works well for me!
Mini-PCs are very good options. I have 5 with 64gb RAM, 16 threads and 1TB nvme each... everything that my small business needs for less than 100w
One thing if you're looking at adding a GPU to your NAS: Sparkle has a single-slot LP ARC A310 for $99. It's got almost the gaming performance of the RX6400, but it has phenomenal encoding for h264, h265, and even AV1. A lot of people buy ARC GPUs just for the AV1 transcoding, and this is the cheapest way to get into that, plus it fits into any PC. Even if you're not gaming on your network-in-a-box, it's great for passing through to PLEX and having better transcoding than an older QuickSync GPU.
Did something very similar with a HP 800 G4 EliteDesk...i7-8700, 64GB RAM, and a couple of network cards (2.5G). Installed Proxmox and then the possibilities were endless at that point. It is the center of my home automation and NAS backups, and an extra Windows 11 VM I use for testing. The i7-8700 is 6 core 12 thread. These are awesome for homelabs !! I think the G4 with a i7-8700 gives the most bang for the buck.
Thank you for the idea!!
Kudos for calling out the lack of backup options on a single server homelab. Newbies beware. How do you restore your VMs when your NAS is virtualized on a server which is down.
Would love a video on backup options for this scenario tbh.
As an idea (on the cheap) grab the cheapest n100 mini that has 2.5 networking, SATA and nvme and use a m.2 to pci converter with HBA out to x8 SATA. Power the drives externally and curious if you could have a dang nice NAS for less than $300 minus drives. I'm thinking of trying a backup TrueNAS in my home lab with this hardware ... Cause my other 5 servers want another friend?...and because I can.
I have a couple small used PCs, one using Proxmox server, and the other using Proxmox backup. I’m working on an off site clone, but mostly just for my file shortage. Most of the stuff I run is mostly for fun, not critical, and if it all fails I wouldn’t mind rebuilding it all and do something different. I just don’t want to lose my photos and stuff.
Im gonna use a raspberry pi with an external drive running proxmox backup server. Defeats the "single PC" setup but it's not too disruptive
Single PC is single point of failure of everything. Also VM for home server is asking for trouble, too much moving parts with questionable benefit
I'm running my homelab on a Dell 3060 that I got for free, i3 6100t and 16gb of ram. Upgraded it with a 1tb crucial mx500 for proxmox and added an 8tb hard drive for file storage and it's hosting Jellyfin, adguard, NAS, home assistant, tailscale, entire ARR suite for linux ISOs. Not bad for a system that was destined for the trash.
i managed to not understand any of this server stuff but stayed hyped up
Nice to see a video about running a home lab on a smaller scale. It sucks that the PC can't support a 7th gen CPU. I'm running a similar setup but with an EliteDesk G3 SFF and it does support both 6th and 7th gen. I picked up a barebones unit for $28 CAD and installed a 7th gen CPU and 16GB RAM that I already had. It also has onboard NVMe so I bought a new NVMe drive for it. I already had 2 x 5TB drives that I installed and it all works great with TrueNAS Scale.
Yeah I ran a 7700k in my main rig for so long. Woulda been a cool throwback to slap one in here haha.
O have pretty much same setup. HP G3 MIDI tower but with i3 7100T. IT worka great and literally sips power :)
I think it can run 7th gen, but only non-K chip. I think the only upside for this box vs G3 is that it can run E-3 chip with ECC memory.
@@MC-dd8gj If I had one laying around I'd throw it in there and see
How hot do the hard drives get in that system? Airflow seems limited based on what I saw in the video.
A lot of the docker container setups offer a hidden efficiency option - always opt for the alpine tagged version vs Debian or other. disk footprint is about 50%. It would be interesting to do a follow up to this taking some of the networking advice that others have stated, and also trying some chaos testing on the machine - that would be awesome.
Also another idea for a new home server would be a local LLM server, connected into homeassistant. Unfortunately that would be pretty hard to make performant and inexpensive but would love to see you try!
Yeah the few web servers I’ve spun up use the alpine imagine
HP Z840 workstation. $300. Built like a tank. Amazing cooling.. 16 DDR4 ecc slots, 2 PCI x16, 2 x4, 1 x1. Add 2nd cpu, and you get another x16, and another x8, and the x4 becomes a x8. 4 hot swap 3.5” drive bays sas or sata. 2 x 5.25” bays, 6 sata and 8 sas onboard, 825 or 1125W PSU. Oh, and it’s super quiet. If you find and install the 3D vapor CPU coolers, custom engineered for this box, you can run it full power 2x cpu, and there is no difference in noise level at idle or maxxed out.
The main issue here will be the electricity cost to run that...
That is a cool set up. I have been running a PfSense vm for 3 years no issues. You could have saved a PCI slot by using in 2.5G for the router WAN and the other as the main link out of PMOX. Used the VNIC for the LAN port in PfSense and TrueNAS. The Virtual NICs in PMOX are 10G.
They do have 4X NIC cards as well...
I love these little boxes. They're waaaay more than suitable as a home server and a few SBs have caught on to that idea a year ago when I was trying this out on hardware that rides the line. Turns out I don't need much and I really mean that.
Athlon 64, 2GB DDR2, ~20TB storage, 10GbE SFP..✓💯 Works out great.
Something like a multicore chip with onboard graphics would be like a rocket. Anything made within the past 3-6 years would definitely be put to better use as a workstation but you get the idea. There's a few lines to be drawn for this and that. It's best to define them and then build based around that.
Well done video sir. This kind of info is great for people who are kinda serious about getting into homelab. Not too cheap, but keeping it realistic. I started 10 years ago with 2 HP D530s. Gotta start somewhere!
I'm running my entire home server collection on an ODroid H3+ with two 6 TB hard drives in Raid-0. This is just running a desktop-less install of Debian as a host for Docker. And as you pointed out, all the stuff in Docker doesn't take up that much in terms of resources. The H3+ sports a Pentium Silver N6005, and 16 GB of Ram. Hey - it works for me!
that's a relatively recent architecture despite being a "Celeron-esque" chip -- really good pick for this kind of work imo.
I wonder how the disks are connected? In usual PC you can put them inside
in his case, probably SATA, it has two ports on the board@@slavic_commonwealth
I'm running jellyfin and docker containers on pi4 4gb and Samsung 870 ssd
Had no parity or backups for my data in my workstation so I finally decided to invest in building my own home server. Just ordered a Dell with an i7-8700, GTX 1080, and 64GB ram for $370 for the base of my first home server and NAS. Can’t wait to get tinkering. Excited to see how the 1080 with unlocked drivers compares to my 2080 super (with game ready drivers) on my windows machine for plex transcoding. Decided to spend the extra premium on 8th gen over 7th gen for better performance when hosting game servers. Starting off with 6 used 8tb HDDs before I slowly swap them out for larger 18TB NAS drives as things progress. Snagged a Classico dark rock case and an HBA card for some extra SATA ports. Assuming there are no issues with the HBA card and the stock PSU I am quite happy with the hardware I was able to snag for under $900.
Now I just need to save up for a working UPS. Last one I ordered new was defective and wouldn’t charge the battery. I got to keep the defective unit so there is a chance only the UPS or the battery is defective. Debating if I should just risk it and spend the $90 on a used UPS and chuck in my battery that may not be defective or if I should just snag a second unit for troubleshooting purposes at full price and just use it on another system.
An alternative that I found with my personal system: pass the integrated net card to pfsense and then create 2 extra virtual net cards that are VXNet. The first one should be client network and associated with a physical port. The second is a service network for the VMs to communicate with jumbo frames.
This helps all comms stay within the box. The service net isn't required but you can get some speed boosts and lower CPU usage with jumbo frames.
I downgraded everything to a Unifi Dream Machine, Unifi 24 port POE switch, and a Synology 5 Bay NAS.. the simplicity makes life good these days.. and massive power savings compared to all my old rack equipment
I really liked this concept, I just built a HTPC based on the N100 that is also a NAS / Docker server for my applications, and I can even play retro games on it using RetroArch, just using Debian as a base system, no virtualization at all. And I'm very satisfied with the results so far.
I'm still a fan of using retired server gear like E5 Xeons and Supermicro Boards. My "new" main server (E5-2680V2/X9SRL-F) is idling at around 60W and is literally whisper quiet. This stuff is old but you get more IOMMU-Groups for passing devices through to VMs and DDR3 registered ECC RAM is dirt cheap. The board even supports PCIE-bifurcation so i can use multiple NVME drives quite easily. For smaller setups (router/firewall/homeassistant/backupserver) i prefer the intel N100 series. Low cost, very low power consumption and more than enough computing power for these tasks using ProxMox and TrueNAS Scale.
I agree about preferring to have a separate firewall, but I also want to virtualize it to try out different platforms over time. My go-to has been a single device that I only run firewall software on.
I did that over 22 years ago :)
Then 14 years ago I split it back out into a few dedicated systems, storage+ha virt, which is a bit more convenient than doing it all on your workstation.
Love this video. I went from a 24U rack to a 15U and just ordered a 9U which is bigger than I need but I wanted some shelf space, really only need 6U but yeah lol. It seems as the years go by I keep looking for ways to downsize my gear to keep things minimal and clean while maintaining all of the functionality I am used to. Its cathartic to purge gear i dont need and get things to just the core of what I want!
With some bridging setup it would not be needed to connect pfSense LAN port to your LAN switch separately. Make the 2nd (LAN) port of the router VM a virtual NIC, and bridge it to the 2.5G port that's already connected into your LAN. A huge bonus for efficiency, now the server itself doesn't have to go out and back in via an external switch for all its Internet access.
Ehhhh, while that makes sense for convenience and performance's sake, it's not the most secure solution on older hardware. Then again, everything about this box is a peformance issue so... 🤷
I really like that you picked Skylake for this machine. the difference between Skylake and previous generations was HUGE and imo for used gear it's smack in the middle of best bang for your buck.
Yea, but it would be preferable to have at least 8th gen or 9th for the better quicksync.
@@rudysal1429 great point tbh, for anyone building a machine that will be used for transcoding/streaming that's critical stuff.
This is a lot like my setup, though I didn't repurpose an old PC. I have a custom built ryzen 5600G w/ 32G RAM and a 2.5G Intel NIC pciE card. Proxmox running on the host on a 1 TB nvme. 3 drive raidz1 pool directly on the proxmox host. I also put docker on the proxmox host rather than a VM. I'm not using portainer. Samba docker container is the NAS rather than TrueNAS. My TP-Link router is my firewall/router/wifi, but I do have an lxc container running dnsmasq for DHCP/DNS. I don't have all the nice GUI interfaces you have (TrueNAS, portainer, pfSense, pihole), but I prefer the CLI most of the time, so it works for me.
So happy with my Anker chargers, powerbanks and cables, they lasted me 7 years so far and I'm happy with their products for charging needs :)
eh i had a dual tower build for awhile using spinners and lots of juice, simplified it down to a few mini pc's and now the cluster has grown again. At least with this modular design I can shut down the power hungry nodes and extra stuff when i'm not playing in the lab and spin them all up when i want dedicated resources for each VM/LXC.
You don’t need it, you WANT IT!!
Preach!
story of my life really
I want a dyson sphere to see how far we can go with gaming. Home server!? Nah, I want my own home matrix.
Do you also want the humanoid cpu for your matrix?
100%. I'm looking at breaking up my frankenstein's monster of a Linux desktop/server situation into a server(/s) and I don't _need_ to make them rack servers, but I want to build a rack and work with real enterprise hardware for experience and fun.
You don’t need 3 ports used up for wan, lan to switch then back to proxmox, you can use only 2 physical ports and 1 virtual that bridges pfsense and proxmox
I´m going all the way - the plan is, if all goes well, to do a Thermaltake Core WP 200 chassis computer system for a rackmount PC with with two machines and many enough drives. It will eventually be connected to at least 2 monitors. Already have at least 4 other cases too which can be used to make a good overall home lab consisting of at least 5 movable sections. I keep wondering where to keep the large chassis. There is a dilemma - if it is kept upstairs, there is a staircase and that might not be a good idea, so better keep it near the entrance to the home in front of a large window with access to lots of air coming from the outside. If all goes well, buildup should start in november provided I can maintain my income in october. Thanks for the video! Best wishes to you from Iceland.
I grabbed an old Dell R330 the other day for under 200 bucks. Found that with the backplane installed I could add a dual-port 10Gb Intel NIC and an NVMe-to-PCIe adapter. So I have 2 disk drives at 8TB apiece, a 128GB OS drive with a 128GB backup for that, a 1TB SSD for faster storage, and a 256GB NVMe drive, which really was just a test to see if it would work. And it does. So with the exception of a video card I can run everything there, and it doesn't run too hot. It's a TrueNAS Scale server though, hosting all my 24/7 services, so I wanted something a little more efficient than my previous monstrosity. I have a game server on the side though that is a 10th-gen i5 which I built for under 400, so I think it'd be possible to upgrade even your cheap server here pretty easily if you'd like newer hardware.
Would like to see you build a low-wattage NAS that holds a lot of storage. I've been asking so many groups; I'm coming from an R510 with 60TB and I'd like to build a low-wattage server just for storage. No YouTube video I found shows a full breakdown of the lowest-wattage setup, except for one, and the CPU/motherboard are the hardest things to get.
The biggest problems with utilizing older hardware I've found in my fairly recent explorations of homelab stuff - for my needs - have been the lack of RAM or RAM slots, and power consumption.
Power usage is the main issue for me.
Because yes, you can do good things for a fair price...
But when you have to run that 24h/day... then it starts to cost you a lot...
Yeah, I can have hundreds of GBs of storage in mirrored volumes, but if it costs me more than 2€ a month per 100G, I could have just paid Google, and I do think there are cheaper options, too. 1 TB on Storj is $12 a month. @@LtSich
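For anyone wanting to sanity-check these running costs, the 24/7 arithmetic is simple; the wattage and electricity rate below are assumed examples, not anyone's actual numbers:

    # watts -> €/month: 60 W * 24 h * 30 d / 1000 = 43.2 kWh/month
    echo "scale=2; 60 * 24 * 30 / 1000 * 0.30" | bc   # ~12.96 €/month at €0.30/kWh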
This is exactly what I was looking for on a project I'm looking to start up, thanks for this!
I’m so glad you posted this because I really need to consolidate. Thank you.
I agree with the physical vs. virtualized firewall conundrum; I'd rather have the dedicated firewall. I was running OPNsense on an N100 firewall appliance, then swapped over to an RK3588 running OpenWrt. There are some things that can be done more easily under OpenWrt than in OPNsense, like policy-based routing. In the end, it is smaller, faster, and more power efficient with the RK3588.
I may very well virtualize it at some point, just to try it; but in the end, I'll leave the RK3588 sitting off to the side powered off as a backup, or just run it and keep the virtualized one as the backup.
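Under the hood, policy-based routing on any Linux box (OpenWrt's pbr package included) boils down to extra routing tables and rules. A generic iproute2 sketch with made-up addresses:

    # Send everything from 192.168.2.0/24 out a second uplink
    ip rule add from 192.168.2.0/24 table 100
    ip route add default via 10.0.0.1 dev eth1 table 100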
Damn, that transition at the start was so clean!!! Who's your editor?
I'm always excited to see your new videos, they're fantastic!
Great video. I had a $150 HP Z2 Mini G3 (i7-6700, 16GB RAM, 256GB NVMe, 1TB 5.2K HDD) that has been my server. I recently went to eBay and bought some old server-grade stuff: 96GB of ECC RAM, nine 10K SAS 600GB HDDs, a Quadro K4200, an HP DL360 G9 with dual Xeon E5-2620s, a Cisco 3560 PoE, and a C897VA-K9, all for about $500. It will be interesting to compare it to your $800 build once it arrives.
homelab youtuber finally realizes he doesn’t need the amount of compute he has for the 10 containers and VMs he really uses
I’d wager most people know they don’t “need” the amount of hardware they have
I loved the humor in this! I snagged the 9th-gen big brother of the base system for $100 last week and have been trying to figure out some uses for it. This is very helpful.
These 6-7th gen systems can be modded to accept 9th gen (and 9th gen refresh) CPUs.
You can pick up a 6c/12t 9.5th gen engineering sample CPU for $50-$60.
It will have much better QuickSync and a better memory controller.
It will boost higher and use much less power.
Until these ES CPUs start getting scarce and start to go up in price they are a REALLY cost effective boost to these older systems.
How...?
@@rpm10k. It is a BIOS modification.
It involves changing a few flags, removing unneeded (old) CPU microcode, and adding microcode for the CPUs you want.
There are programs like CoffeeTime that can do the modifications.
Search around and you should find some tutorials.
It certainly isn't as simple as plugging in the new CPU and loading stuff off a USB drive, but it is pretty easy.
I wouldn't attempt it on your ONLY system, since things can go wrong and you might need to use another machine to recover it. But once it's working it keeps working.
Hardware Haven will be proud of you 😊
*anxiously awaiting Colten's response*
@@RaidOwl same here
This was fun to watch, thanks. It was great as a quick overall refresher on hardware and setup.
At least the sponsor is an excellent-quality brand that actually delivers what is promised, as opposed to others.
I bought a $55 thin client, and upgraded it slightly to an NVME and 12gb ram. Runs docker and a bunch of services :) you definitely don't need a big bunch of money, depending on what you're doing.
Agreed
Bout to check this out buddy happy new year 🎉
Your sense of humour is gold!
You have a pretty intense room resonance at 380hz. Cool video as always.
i didn’t notice until you pointed it out and now i can’t unhear it lmao
Kinda doing the same; about to replace my Dell R210 II server with an SFF PC to be my firewall.
Trying to decommission my R710 that is my hypervisor (ESXi), also running a virtual TrueNAS, and see if I can move those VMs to my R510 that is my NAS running Unraid.
The one thing I would miss moving to a regular PC over my servers is iDRAC or iLO. I have never once hooked a keyboard/mouse/monitor up to them; I always do everything through remote management.
I just switched a month ago from a Fujitsu Esprimo desktop with a 4th-gen i5 to an HP EliteDesk 800 G3 with a 7th-gen i5, mainly due to power consumption. On the Esprimo I had WS2019 with Hyper-V, which wasn't the most optimal solution, although it had served me since late 2019. On the HP, on the other hand, I'm running 'just' Proxmox, a virtualized TrueNAS Core, Docker with a few containers, and two Pi-hole instances. I successfully migrated my Windows VM from Hyper-V to a Proxmox VM. Overall it's nice and smooth now, and power consumption vs. computing power is OK for me.
Now here goes the usual warning to never store all your eggs in a single basket... My dream homelab would be 3-4 identical passive systems without any hard drives at all (I've got a dedicated Synology DS1821+ for storage), each with at least two NVMe drives for the host OS and some local storage, and all of that at a reasonable cost to boot. Or, even better, some kind of modular blade-like system so that I could add more nodes in the future should I ever need them. Hmm, that actually sounds like an interesting project.
Having your whole storage on a single NAS - isn't that "storing all your eggs in a single basket"?
With that config I may look at something like Ceph for things like Proxmox and then, yes, use the NAS as a pure storage solution.
@@JoaquinVacas No, because the NAS not only has redundancy in the form of parity drives, but also backs itself up to the cloud. Besides, I'd rather trust Synology devices backed by a 5-year warranty with an expedited replacement program than some random old hardware that was used who knows how and in what conditions. My previous Synology NAS worked for 12 years with zero issues before I upgraded it, and it still technically works now - I replaced it because I wanted to upgrade, not because I had any problems with it.
Yeah I have a local backup server as well as a remote backup server, so 3 copies of my data. RAID is not a backup.
@@asmi06 Cloud backup is always a plus.
One thing I'll try to achieve is DFS under TrueNAS SCALE, as I have two different locations where I can have NASes replicating to each other (1000km from each other, at my parents' house 😆).
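TrueNAS replication tasks between two sites are, underneath, ZFS send/receive over SSH. A minimal sketch with hypothetical pool and host names:

    zfs snapshot tank/data@nightly-2024-01-02
    zfs send -i tank/data@nightly-2024-01-01 tank/data@nightly-2024-01-02 | \
      ssh parents-nas zfs recv -F backup/data   # incremental since the previous snapshot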
Running your specific entire homelab on an old PC like that is a bit of a stretch, but it would be more than enough for someone starting out and learning.
But grab a few more of those desktops and you can easily match your setup.
I have 5 of those desktops and run:
pfSense firewall
2 ESXi hosts (each running about 6 vms)
SAN
Backup server
Plex server (running on my old gaming system)
Total cost: about $800
Total power draw at idle: 200 watts
I have learned TONS from this setup
I mean yeah that’s kinda the point. The video is exaggerated but the premise is that you can do a lot with minimal hardware and customize it to your needs.
@RaidOwl 100% agree. I started out on an old HP EliteBook laptop running just Plex. If you want to learn you must start somewhere, and there's no need to spend thousands. I'm now running a massive network at home, the complexity of which rivals that of a medium-sized business, so I may have gotten carried away.
200W idle is like $450 a year in electricity, yikes. There is zero reason to run a separate Plex server for like $10 a month plus a Plex subscription; at that point you might as well pay for a streaming service or two. I felt bad about my 60W average but now I feel better.
@kaminekoch.7465 Sorry, should have clarified that I'm in Florida, so that comes to about $170 per year in electricity costs.
But I have since consolidated my systems to VMs running on one ESXi box (including the Plex server), and idle usage is now around 90 watts.
My Dell OptiPlex 5070 (i5-9500) was $160 on eBay and eats everything I throw at it. I was actually hoping it would start underperforming so I could upgrade it, but it's just a BEAST.
I got the Nvidia RTX A2000 12GB for my server when looking for a LP GPU. It does set ya back a lot more than your choice! I paid $450 for mine, but it has a lot of promise for running AI tasks and anything else I throw at it. Definitely not an alternative at this price point, but it was my favorite after looking at all the options.
Recently upgraded my Unraid box (previously running a Phenom II) to a Xeon E3-1226 v3, a Supermicro X10SLL-F, and 16GB of ECC RAM for less than 100 bucks. eBay is useful sometimes.
Electricity goes up by $0.001
Us homelabbers:
SELL SELL SELL
You had me at, "I have two whole-ass videos".
Once you're in the game, you can't leave. Everyone comes back. I've downsized and upsized half a dozen times. :D
In the end, having that running 24/7 might be a better idea than a rack full of servers, firewalls, and switches. And you can still run servers with lab stuff.
I used to run my firewall in Proxmox, but I had to mess with the computer one too many times, and I really did not want to lose internet for no reason again. I bought a thin client and put pfSense on it, and I haven't had to look at it even once since I set it up.
13:50 - Yes, having a single point of failure in a virtualised pfSense is a reason to be stressed. That's why I have a cluster of two Proxmox machines: one "production" and a second "spare" - with an option to auto-migrate critical workloads (VMs, LXCs) to the spare PC. Most of the time the spare PC is just shut down, booted only in case of (rare) issues with prod. But as you said - it has a lot of consequences if your pfSense is down. Cheers!
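Proxmox's HA manager can encode that prod-first/spare-second preference directly. A rough sketch (node names and VM ID are placeholders), with the caveat that automatic failover on a two-node cluster also needs a third quorum vote, e.g. a QDevice:

    ha-manager groupadd prodfirst --nodes "prod:2,spare:1"   # higher number = preferred node
    ha-manager add vm:100 --group prodfirst --state started  # e.g. the pfSense VM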
7 out of 10? That's the nicest thing I've heard all day.
Seriously though, I just spent $700 on an Epyc 7302, MB, cooler, and 128GB of RAM as an upgrade to my 7700k system that has been my server for a couple of years now. I just have to put on my big boy pants now and actually swap in the hardware.
This is awesome! Can you please make a detailed video covering the software side for us noobs 😅
Maybe the Home Lab Tour software edition video I linked down in the description will suffice? Unless you mean specifically for this setup.
My Proxmox server is an HP Elitedesk 800 G2 I picked up for just under $20 from a university auction. I had to throw in an SSD and a couple sticks of RAM but it's still got less than a hundred bucks in it.
I'm currently building a NAS based on an HP Z440 motherboard, just need a CPU cooler and some RAM and storage now. Splurged and replaced the hex core E5-1650 v3 with an 18 core E5-2699 v3. Maybe I should make this one the Proxmox server, lol.
Just in the process of setting up an Intel NUC with Proxmox, pfSense, and some virtual machines to play around with. It was originally going to be AHV, as I use that for work, but unfortunately that requires a minimum of 3 physical SSDs. I was using ESXi, but it was more of a pain, though it was working! First thing was to get Plex up and running, as it didn't run well on my NAS! One of the reasons for the NUC was very low power usage, even when running maxed out.
Price 100 ducks love it 🦆
quack
I spent $600 in 2020 on a Supermicro X10SRL-F + E5-2630L v4 + 64GB of ECC Registered memory off of Ebay. Great server for four years.
ONLY a 7/10? 🤨
First time watcher, this was a nice video :) Loved the humor, and calm voice explaining things. Thanks for the content
Leave Proxmox behind, use Docker containers only. Switch from pfSense to OpenWrt with simple firewall rules.
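Those "simple firewall rules" on OpenWrt live in /etc/config/firewall. One illustrative rule (the name and port here are examples, not a recommendation):

    config rule
        option name 'Allow-SSH-from-LAN'
        option src 'lan'
        option proto 'tcp'
        option dest_port '22'
        option target 'ACCEPT'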
You inspired me to do almost the same thing with a Dell T30; so far I've spent 200 USD including the server. The only thing I'll probably change is the processor, and only if necessary, because I won't game on it, but I might stream.
The E3-1225 v5 only has 4 cores / 4 threads, so I'll look for a better Xeon.
For $800 I got an old Supermicro 4U chassis with an X9-vintage motherboard and CPUs, upgraded CPUs (to get 20c/40t at about 3.5GHz boost clock), 64GB of RAM, Noctua fans to replace the stock fans, Noctua CPU coolers, and an SSD for the boot drive. I wish I had held out for a couple of years and gotten a used Threadripper system. It's just slow enough to be annoying on single-threaded workloads. I also got a GPU later on, but that's beyond the $800 limit.
I have the same model and will upgrade it soon. Power consumption is the point here: with that money you can buy a second-hand Dell PowerEdge server, but the electricity bill will damage your pocket!
I would love more of a walkthrough/breakdown of the LXC and Plex install.
First thing I'd do with that before installing anything is make those x1 slots open-ended. That gives you more flexibility, and you could even add more cards, which will just have lower performance because they have access to fewer lanes.
And it gives me an excuse to use my Dremel
I have 3 devices for it: a J3160 box with four 1Gbit LAN adapters for pfSense; an Xpenology custom DIY NAS with a J3160 ITX board and 4x HDD; and finally a NUC13 i7 box with 64GB and Proxmox for virtualization, Docker, etc. I'm not on the team that has everything together on one device. The Internet has to work even while I'm updating the Proxmox server.
I'm needing to upgrade my gaming PC anyway, so I want and need to set up my old gaming PC as a home server and a game server. But I want to learn and research a lot of this free stuff because, you know, money. I like to save as much as possible haha
I'm getting a ton of value out of an old R610. They're cheap on eBay. Mine has dual Intel Xeon E5620 CPUs @ 2.40GHz, which give me 16 threads. Upgraded the HBA to an H200 flashed into IT mode to let the OS (Ubuntu Server 22.04) manage the array of six 1TB 2.5" HDDs. The system boots off a 256GB SanDisk SSD. I plan on upgrading the CPUs and then the RAM to at least 64GB, though it maxes out at 384GB.
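With the H200 in IT mode the OS sees raw disks, so "managing the array" can be plain ZFS. A sketch assuming a raidz2 layout (the commenter doesn't say which layout they actually use, and the device IDs are placeholders):

    sudo apt install zfsutils-linux
    sudo zpool create tank raidz2 \
      /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
      /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6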
Rather than giving the pfSense/OPNsense VM 2 dedicated NICs, you could do with just a single physical NIC for the WAN link and have its LAN link be a virtual one on Proxmox's vswitch, along with all the other VMs on Proxmox's bridge port. Sure, it's all on a shared 1Gb port to anything external, but internally between VMs it's much faster. Also, later on, you could install a 2.5/10/etc Gb NIC and make it the bridge port for the vswitch. Saves a PCIe slot, particularly if your home network is only 1Gb.
I've done this before with mini PCs... serves as a standalone mobile lab, or a really cool mobile streaming setup.
Finally shut down the last of my old home lab hardware and am now running solely on the newer, much more power-efficient systems. The new lab is a 5-node Proxmox cluster with all-SSD storage (NVMe for the host installation and SATA for VM data) configured in a Ceph cluster, with 10Gb networking spread across 4 ports on each node (a rough pveceph sketch follows the hardware lists below). The reason for needing 5 nodes was to ensure I had enough RAM available, as each node can only accept a maximum of 64GB of RAM, whereas I had 192GB of RAM in each HP server.
So far the estimated amperage draw has dropped from 12A to bouncing between 4-5A. I should still be able to increase the efficiency by replacing some of the last older equipment still running.
Kept the Same:
x1 Dell Optiplex 7070 SFF
x1 Dell Optiplex 7040 SFF
x1 Cisco Catalyst 2960X (plan to replace with newer, more efficient, Ubiquiti switch)
x1 HP ProCurve 2810-24G (plan to replace with newer, more efficient, Ubiquiti switch)
x1 Ubiquiti US-8-60W
x2 APC Smart-UPS 2200 (SMT2200R2X658)
x1 RPi 3B+
x1 ControlByWeb X410 WebRelay (RTDs for server room [house mud room] temperature monitoring)
x1 Synology DS418
OLD Lab:
x3 HP Proliant DL380p Gen8 servers
x1 Dell PowerEdge R720
x1 NetApp DS4246 (12/24 drives loaded)
x1 NetApp DS2246 (24/24 drives loaded)
x1 Dell R710
x1 Delta Networks ET-DT7024 (Flashed with Dell PowerConnect 8024F firmware)
x1 HP ProCurve 2810-24G
NEW Lab:
x5 HP EliteDesk 800 G3 SFF
x1 Ubiquiti USW-EnterpriseXG-24
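For anyone curious how the Ceph side of a cluster like this gets bootstrapped, Proxmox ships a native wrapper. A bare-bones pveceph sketch (the 10G subnet is an assumed example, not this commenter's config):

    pveceph install                        # run on every node
    pveceph init --network 10.10.10.0/24   # once, pointing Ceph at the 10G segment
    pveceph mon create                     # on the first three nodes
    pveceph osd create /dev/sdb            # per data SSD, per node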
I've been holding onto my single-slot, low-profile 1650 just in case I ever want to do a home lab and virtualize my gaming PC, but I did see a low-profile 30-series graphics card as well.
Very simple home lab, love it. I would like to build one but I have no idea what specs are needed. I would like to use my gaming PC parts.
Interesting - I have one of these on a shelf that I stopped using because of power consumption. It was streaming video non-stop 24/7 using x264 because I didn't have a half-height GPU with NVENC, and I'm pretty sure the chip mine has doesn't have Quick Sync. Maybe it's a generation older, but I'll have to check the socket and see if there are Quick Sync chips available - thanks for that! I replaced it with a MeLe Celeron-based passively cooled mini PC which seems to use no power at all and streams really well off the iGPU, just as an unnecessary extra note.
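A quick way to check for Quick Sync before hunting down a different chip: see whether VA-API reports H.264 encode support, then try a hardware encode (the file names are placeholders):

    vainfo   # look for an H264 profile listing VAEntrypointEncSlice
    ffmpeg -i input.mp4 -c:v h264_qsv -preset fast output.mp4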
My homelab is free old computers I got from work. I had an OptiPlex 7050 with a bunch of SSDs shoved in there as my TrueNAS, and I had Proxmox on one Lenovo Tiny M720q.
Then I found a free OptiPlex 5000 from work... yeah, really new: 6 cores with HT, and I gathered 64 gigs of RAM... So now my homelab has gone from 2 computers down to one.
The uptime and ARC cache kind of never get as good when I restart, but win some, lose some.
My computer + Dream Machine Pro + one access point + Pi 3B + modem idle around 52 watts, so it's not even that expensive to run. It was closer to 65 watts when I was on 2 computers.
Anytime I try to get something "free" from work I end up talking to HR
@@RaidOwl We bought a company - they were a Dell shop, we are Lenovo - they sent all of their computers up after we replaced them, and they were getting recycled. Just pull the SSD out and boom, free computer. I like to up-cycle things. Granted, I've done this a few times and I have some tiny PCs to offload now!
When I first saw this video thumbnail I thought you were going to replace your home server with the new Minisforum MS-01 (since a lot of those review videos have been dropping recently, Hardware Haven etc). I think it probably could have done a lot of it, except the 3.5" HDDs. It already comes with 2x SFP and 2x 2.5Gb, plus an extra PCIe slot and a newer processor etc. Maybe in the future! Good video though!
I have 2 HP workstations and plan to do something similar this year.
I still think I got a banger of a deal: 192GB of ECC RAM and a 16-core Xeon with a 500W PSU and case for 550. Threw in a $70 10G copper NIC and $350 of HDDs. It has so much room to grow into.
Why not get an i7-7700 instead of the i7-6700? It has a better iGPU for transcoding and uses the same LGA1151 socket.
Same socket but this chipset won't run 7th gen
@@RaidOwl After doing some research, it seems HP never updated the HP 600 G2 BIOS microcode to support 7th-gen CPUs even though it is a Q150 chipset. I have H110 and B150 chipset motherboards that support 7th-gen CPUs. Thanks for the reply; saved me from making a mistake buying one of these for a media PC.
For better or worse, my home 'server' is a Windows HTPC with Hyper-V VMs for OPNsense and Debian. It has one 2.5GbE NIC and 5 drives in a fanless Streacom case. AMD 5600G with 32GB of ECC RAM. I want to add a PCIe bifurcation card to add more NVMe drives and another NIC, someday. For now it's been amazing!
I love the cool transition
AM4 hardware + Ryzen 5700G (you don't need a GPU with that) - this is somehow the sweet spot. Maybe in some months AM5 + 8700G + DDR5 will be the same sweet spot.
Yes, I went that route too, Unraid build.
If only the ProDesk/EliteDesk could run ECC memory I would totally hop on one; such nice cases across the lineup. I sell the 'mini' to people quite often, from G3 to now, and they are all happy. So for me it's a Haswell Xeon with 64GB ECC on an ASRock board :)
There are SO MANY used electronics out there. You could have gotten a NIC with more ports for the same price, and you could have gotten a low-profile GTX 1050 for that price as well. I rarely buy new anymore except for solid-state memory.
A Tesla P4 might be a better graphics card if you can figure out the cooling and setup. Craft Computing has a video on the setup.
Yep. That's what I've done with one of my Proxmox servers. The cooler is available on eBay.
That's what Wendell (Level1Techs) calls the "forbidden router" setup. Good times. I run a similar setup on my XCP-ng server. Love the content, appreciate you!