Just grabbed an EPYC H12 mobo + 7302 CPU + 128GB RAM combo for about $1000. With a 7302 and 128GB of memory I can finally consolidate a couple of old crappy machines into one. Luckily I was able to find a Fractal R5 to house everything in (needed 8 drive bays for TrueNAS).
I just did a similar move: an H12 and a 7313.
Can’t wait for it to get here… binge watching EPYC videos until then.
@@ImTheKaiser I'm kind of surprised how few there are to be honest. Congrats on the grab!
It's amazing the value on offer with that platform. That is a lot of machine for $1k.
@@morosis82 A year later the price has dropped to around $750. Pretty insane value considering.
I got an EPYC 8024P, a Gigabyte ME03-CE1 board, and 16GB of DDR5 ECC RDIMM RAM for 700€, then added more RAM, 4 U.2 NVMe drives, some SATA drives, and an M.2 drive... and more NICs. It's amazing and idles at around 77W.
Hey man you encouraged me to build a server, thank you for the detailed explanation.
Hell yeah! 💪🏼
Just ordered an AMD EPYC 7551P CPU, a Supermicro H11SSL-NC motherboard with eight 16GB 2133 DDR4 RAM modules, and an active 2U heatsink. It will be replacing my current dual Intel Xeon E5-2695 v2 build in my Supermicro 4U 36-bay chassis. I plan to use it with Unraid and the LSI IT-mode PCIe RAID controllers I already have in the current build.
Man, the opening line of this video really hit home with me lol
Upgrading my Unraid server again. Did one of those eBay combos: a 7402, a Supermicro H12SSL-i, and 128GB of ECC RAM to top it off. Going in my rack-mounted Rosewill 4U chassis with a Noctua NH-U9, a 10-gig Mellanox SFP card, a 6750 XT, and a 7900 GRE for gaming VM Moonlight/Sunshine fun times. Crapload of storage going with it. Thanks to your channel and others I have a full Ubiquiti UniFi Pro Max networking setup to connect it all to. This will be my 4th Unraid hardware migration in 5 years. Can’t wait. Have it all in my 10900K rig in a Meshify 2 right now but hopefully won’t have to wait long for shipping from China 🇨🇳
Literally about to do the exact same thing 👊🏻
I went the workstation route: a 3970X/3090 in a Fractal Meshify 2 XL using an Asus Zenith II Extreme Alpha mobo. The intended use is VMs, testing stuff, and learning: Proxmox, pfSense (prior to buying a hardware appliance, if I go that route), and so forth. I'd like to learn how to deploy Docker images and whatnot.
I went with a Netgate sg-1100 for my pfSense setup and it’s been great so far. For my Docker stuff I’m using a VM running Portainer which makes everything pretty easy.
Built basically the same PC as Linus did for Mark Rober too. Love it. Barely any overhead on it with two people using it relatively regularly so far. Though I do want to add NVMe expansion cards for SLOG and metadata cache vdevs (sketch just below).
I also went a step further and got 12Gbps SAS EXOS X20 drives connected to a Broadcom/LSI HBA. Overkill indeed, for now...
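If those NVMe cards do go in, adding ZFS log and metadata special vdevs is a one-liner each. A minimal sketch, assuming a pool named tank and hypothetical NVMe device names:

  zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
  zpool add tank special mirror /dev/nvme2n1 /dev/nvme3n1

Mirroring the special vdev matters: unlike a log device, losing it loses the whole pool.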
My NAS/server I just built uses an i3-10100 on the Gigabyte B560M AORUS PRO AX mATX (4 DIMM slots, 6 SATA ports, 2.5G LAN, Thunderbolt 4 header), a 64GB kit (2x 32GB 3600 CL20), an Optane P1600X 118GB for SLOG, and an ADATA ISS333 512GB OS drive.
I went with the Silverstone Sugo SG02B-F case with 2x 5.25" bays (replaced the silver strip with walnut veneer).
The 3.5" Icy Dock TurboSwap MB171SP-1 went nicely with the dual 2.5" ToughArmor EX MB492SKL-B in the Sugo optical bays.
Noctua fans and CPU cooler + an EVGA 600W GQ power supply. It would be nice to throw a single-slot AV1-encoding-capable GPU in there.
I plan to use four 2.5" 4TB or 8TB SSDs in RAIDZ1 on TrueNAS SCALE. I am currently learning Docker on my current Windows 11 NUC home server thanks to your videos. The NUC just has a 2TB 660p + a 4TB external HDD, hence why I am building a NAS.
Most of my BR/DVD and music collection is currently stored on my main PC on a 6TB HDD that is not redundant, hence the need for a NAS. The media server software I use is JRiver Media Center (server and player); JRiver has versions for Windows, 64-bit macOS, and even Linux, and it would be cool to see it in a container like Plex. It has a free trial but does cost money, so I can see why it is not for everyone; I highly rate JRiver as a media center and server. Pro tip with Windows: map your network drives so they appear as local drives and add them to your media library that way.
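A minimal sketch of that drive-mapping tip (server name, share, and drive letter are placeholders):

  net use M: \\YourNAS\Media /persistent:yes

The share then shows up as drive M:, which media library pickers treat like a local disk, and /persistent:yes re-maps it at every login.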
Very nice. I was in the same boat trying to talk myself into a pleb Xeon build. Went EPYC and couldn’t be happier. The 7551P is sub-$300 now.
The main benefit for me is reduced power consumption over the dual-CPU build I am replacing.
All those PCIe Gen4 lanes too ;)
Hi, can you mention the motherboard used?
What is the actual power usage of this machine? You mentioned that a dual Xeon build would be cheaper and less power efficient, but you never specified what this server pulls at the wall.
Great video! For some workloads a setup that would normally be considered overkill isn’t even enough. In my case it’s running SnapRAID against 28 encrypted data drives. Dual 2690 v3s and 128GB are easily maxed out during a sync. Looking forward to Milan being more readily available. 😁
Damn 28 drives?? What kind of data/transfers you working with?
They’re split between 2x 24-bay DAS units, each with a dedicated, dual-linked SAS2 HBA card. Parity drives are on SAS3 in the host. The goal was “no bottlenecks” and it turns out the current bottleneck is the software! Cloudflare’s dm-crypt flags in newer kernels are a big help, but it still makes the UPS sweat doing a sync.
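For anyone chasing the same dm-crypt overhead: those queue-bypass flags can be toggled per device with cryptsetup. A minimal sketch, assuming LUKS2, cryptsetup 2.3.4+, kernel 5.9+, and a hypothetical mapping named data1:

  cryptsetup refresh data1 --persistent --perf-no_read_workqueue --perf-no_write_workqueue

The --persistent flag stores the options in the LUKS2 header so they survive reopening.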
Great vid! I guess I'm a big nerd too! EPYC 7551 here, 8x 32GB DIMMs, 48TB storage, Fractal case. The base is Proxmox.
I just pulled the trigger on an EPYC 7F52 w/ a Gigabyte MZ32-AR0 and 128GB of RAM.
Cost me around 700€ on eBay.
Reason is that my current Xeon E3-1225v3 is getting quite long in the tooth (not surprising after 8 years) and also is severely limited in PCIe lanes and max memory.
I just set up a similar build today, same mobo but with an EPYC 7282 instead. Found a good deal on the board and RAM.
Awesome! Now gotta decide how to use 128 PCIe lanes lol
@@RaidOwl Yeah, I'm using 2 slots already: an extra 10Gbit card and an extra HBA card. I put my setup in a Fractal Define 7 and it works nicely, but it's a bit tight at the bottom with the headers for the power and reset buttons.
Two years on and those EPYC CPUs are now selling for around US$90. The dual EPYC CPU motherboards are still expensive, though they're getting there: around US$900 for the Gigabyte MZ72-HB0 with PCIe 4.0.
I don't have a home server, but a backup server. Since I'm the only user, I keep all my stuff on the desktop. My backup server consists of the remains of an almost 20-year-old HP d530 SFF. It has a Pentium 4 HT (1C2T; 3.0GHz); 1GB DDR (400MHz); 4 HDDs totaling 1.21TB (2x IDE, 3.5", 250+320GB and 2x SATA-1, 2.5", 320+320GB); 1Gbps Ethernet. It is now inside a Compaq Evo tower with a Windows 98SE activation sticker. It runs FreeBSD 13.1 on OpenZFS 2.1.4 (32-bit). All storage and all transfers are lz4 compressed. The system is limited to 200Mbps due to a 95% load on 1 CPU thread :(
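That lz4-everywhere setup is a single ZFS property. A minimal sketch, assuming a pool named backup:

  zfs set compression=lz4 backup

New writes are compressed from then on; existing data stays as written until rewritten.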
I'm going through the exact same decision process now, as I have 2x Synology NASes and an HP MicroServer Gen8 that I want to combine into a single new machine. I've considered something like a Dell R730 with 8x 3.5" drives, but again power consumption is a consideration with dual Xeons, so I'm tending towards a single EPYC.
I am thinking about doing a similar setup with the exact same motherboard and CPU. The plan is to use ESXi and have one of the VMs be a gaming VM with an RTX 3070 passed through. Can you tell me how your experience has been with gaming on your setup and what type of games you have tried?
It's certainly convenient having a gaming setup on my server and after getting everything configured it's been great. It can be a little tedious to get set up but as long as you follow the guide, you should be fine.
As for games, I mainly play stuff that is more casual and not as graphically intense. I have played some CoD Warzone on there and got an average of 100-115 fps on 1440p ultra settings. I am fully aware that the lower clock speeds of EPYC is holding it back a little but I am very happy with the performance.
This motherboard has been awesome so far, doing everything I need and more. The only hiccup was that I needed to get a custom BIOS from ASRock to be able to set custom fan curves... but that was easy enough. Hope everything goes smoothly with your setup, thanks for watching!
@@RaidOwl thanks for the fast reply. I am really enjoying your channel. Now I am really excited to get my build up and running.
I know this feeling, I got almost the same setup (and yes, I also don't really need that much power):
CPU: AMD EPYC 7402P
Motherboard: ASRock RomeD8-2T
Cooler: Noctua NH-U14S
Boot Drive: 2x 250GB Samsung EVO (still had them laying around, will switch to NVMe later on)
SSDs: 2x 1TB Samsung 860Pro
HDDs: 6x 12TB Seagate Exos X16
HBAs: 1x LSI 16i
Case: Fractal Design define 7 XL
oOooOoo I like that. Are you digging that RomeD8-2T? I'm loving it, with the exception that I had to get a custom IPMI update from ASRock to do any fan control lol.
@@RaidOwl Yeah, I'm liking it,
and I still need to get the custom IPMI update (totally forgot about that one, thanks for reminding me).
Also forgot to add what other PCIe devices I have atm:
GPUs: P2200, GTX 1070
@@bulzaiguard Fan speed control has since been relocated to the IPMI web UI under the BMC controller. Please update the BMC firmware to 1.11.0: www.dropbox.com/s/hwapsx8pm0qzb0h/ROMED8-2T_L1.11.00.ima?dl=0 . Once the BMC firmware is updated, please log back in to the IPMI web UI, then go to Settings; you will see Fan Control or Fan Setting available on that page.
@@RaidOwl Thx this will make my life so much easier
My server is more modest... R9 3950X, 32GB, Win 10 Pro, ~60TB over many disks, Chia farming, Plex DVR, gaming. I have the R9 undervolted so it sips power. I wish I had more cores, but I don't want to pay the electric bill. My mobo will support more RAM, so I will likely add another 32GB in the near future. Also looking at expansion solutions to increase my disk drive count... probably some sort of DAS solution. I am using Process Lasso to keep everything tidy and running smoothly... love it!!
Dude, I want to do the same exact thing by the end of the year.
I need a 5-HDD NAS, a gaming PC, a Plex transcoding VM, some Docker containers, and a little room for temporary/test VMs, but heat and noise are very important factors as the server will be in my entertainment room.
Could you recommend some (maybe cheaper) gear?
For a "budget" system I'd actually recommend an Intel i7 10700. It's 8 cores/16 threads with a boost to 4.8 GHz at only a 65W TDP for $288. It also has integrated graphics so you don't run into the issues I used to have with my Ryzen system and trying to pass through a GPU. You only get about 20 PCIe gen 3.0 lanes, but thats fine. I'd allocate 4 cores to a Windows VM with a GPU passed through for gaming then you have 4 cores and 8 threads to play with for a NAS/plex/other VMs. Its obviously not the best setup but I think that would be optimal for your needs.
I'd recommend a BeQuiet case for sure. They have solid airflow and are designed to well...be quiet.
Something like this: pcpartpicker.com/list/9HTnRT
I know its not cheap but thats all brand new parts at retail (no GPU or storage).
I hope this helps!
Edit: You could also go with some older Xeon chips and dual socket boards. This would net you more cores and pcie lanes...but you're gonna have pretty mediocre single core performance and stuck on ddr3 RAM. Check out Craft Computing if you want some ideas, he has a few builds like that.
@@RaidOwl Thank you very much for the help!, I'll have a look.
What PSU did you use for that combo? I am thinking of getting the same mobo as you but putting an EPYC 7551P in the rig. It will run some VMs, Plex (no hardware decoding, so no need for a GPU), Sonarr, Radarr, Portainer (Docker), Guacamole, and anything else I find entertaining. What PSU should I use? Also, can you elaborate on the Ethereum mining? I have some spare GPUs (1070 and 1650) and was wondering what I can use them for. Not interested in setting up a gaming rig.
I'm using an 850W Seasonic 80+ Platinum. As for Ethereum mining, I use Nanominer. You can download it on GitHub (github.com/nanopool/nanominer/releases). All you have to do is modify the config file with your email, miner name, and ETH wallet address. I would suggest using MSI Afterburner to lower the power draw of your GPUs to about 50%, lower the clock speed, and boost the memory to give you optimal performance. Then all you have to do is run nanominer.exe and it'll start mining.
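If it helps, a minimal sketch of what that config edit looks like (section and key names from memory of the nanominer docs, so double-check the bundled example; the values are placeholders):

  [Ethash]
  wallet = 0xYourEthWalletAddress
  rigName = homelab-rig
  email = you@example.com

Nanominer picks this config up from the same folder as the executable when it starts.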
@raidowl What power supply did you end up going with? I'm looking into an 850-watt one for pretty much the same build.
Getting a 3995X... I loved the Proxmox GPU passthrough, would try it...
128 threads, would it cut it (: not needs but wants....
that is a BEAST of a processor!
Excuse me, I want to ask: to build a dual EPYC system, what PSU should we use that is compatible with dual EPYC motherboards like the Gigabyte or Supermicro ones?
Just flew by the mining comment without throwing out a hash rate? Actually, with a single 2080... are you sure that makes sense? I'd think you'd want to take advantage of the CPU and other non-GPU parts of that monster. Is Chia still a thing?
My home server setup is also my work lab. I run a couple HP ProLiant Gen 8 rack mount servers running Windows Hyper-V (because I love being able to use PowerShell to manage the VMs). I want to convert my media server from Windows Server over to TrueNAS but I need to find enough temporary storage to move all my music and movies off first.
Yeah the struggle of needing a ton of storage space to move your current storage is the worst. So far I am liking TrueNAS much more than just hosting my storage directly from Proxmox.
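On the PowerShell point above, that workflow is hard to give up. A minimal sketch of day-to-day Hyper-V management (the VM name is hypothetical):

  Get-VM | Where-Object State -eq 'Running'
  Checkpoint-VM -Name 'MediaServer' -SnapshotName 'pre-migration'
  Start-VM -Name 'MediaServer'

The first line lists running VMs, the second takes a checkpoint before any risky change, and the third boots the VM.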
This is useful for my own build, thanks for making the video. What power supply did you use?
A Seasonic 850W 80+ Platinum
Overkill? Depends on your use. What if I want to do serious engineering work? Computational chemistry? Antenna design using AI? RF circuit design? I'm using old Xeon servers with 28 cores each for these.
@Raid Owl would you go with Threadripper Pro today instead? How are the noise level and CPU/GPU temps of that build on air? It's crazy that a 13900K or 7950X out now, a year+ later, gets like double the Cinebench R23 multi-core score.
I’d honestly still stick with EPYC. The power efficiency and 128 PCIe gen4 lanes are fantastic. With a Noctua cooler it's extremely quiet; the loudest part is the hard drives.
I love the channel and hope you prosper well!
I'm building the dual-socket variant with the 16-core 7302. Curious which model Noctua you went with, the NH-U14S DX-4189 or NH-U14S DX-4677? Did you also need to request the:
> Optional mounting-kit required (free of charge). Please contact
I'm wondering if the one you're using will fit in a 4U Rosewill rack. I did see the NH-D9 DX-*-4U variant is 14mm less in height.
OCuLink is great! Hook up 4 SATA drives for each, just like mini-SAS. That's what I use for an mITX ASRock build.
Running Proxmox on a Dell T420. Still have some more upgrades I want to get in, but they're kind of pricey (storage, memory). Mine is mostly for Plex, but I would like to get it running better and ditch the old SSDs I have at the moment.
Storage gets expensive very quickly. I think the best bang-for-buck storage is to snag some 3TB Seagate Constellation SAS drives off eBay then pair that with an HBA card similar to mine.
Seems like those Dell T420 are pretty popular. I’ve seen quite a few people using them as their home servers.
@@RaidOwl Yeah, I did not plan my storage upgrade that well, so it is going to cost extra for me. My homelab needs are pretty low, but if you don't mind the tower form factor (I actually really like the look of the T420), these units are a great do-it-all in a box. The end goal is to have 24 drives in it. Currently at 16 (2x HBA cards). I still have room to expand this server and I think it will serve my slowly growing needs for a long time to come.
Heck yeah man, sounds like a good plan!
Is there an option to turn off IPMI remote management so I can use it locally? I don't want anyone remoting into my server...
Love your videos! I bought the same motherboard, will 4 pin PWM fans work with the 7 pin connectors on that board?
Yes it will
I'm about to upgrade my dual 2680s to either dual 2687W v2s or dual 2697 v2s. My box already has 128 gigs of DDR3-1600 ECC.
I'm not sure which way to go. With the 2687W v2s I get a 700 MHz jump in clock speed, with no reduction in core count, at the cost of a higher TDP.
With the 2697 v2s I get a 50% increase in core count, with no reduction in clock speed, and no increase in TDP.
What I'm struggling with is whether faster clock speeds or a higher core count would give me the bigger performance boost.
If it helps, this box is my daily driver at home, and my media server when I am away.
It runs Win10 LTSC and I use Backblaze for cloud backup. I would love your input.
If this is your daily driver when you’re home I’d prioritize IPC/Core Speed over core count, unless you plan on spinning up plenty of VMs.
The 2687 would probably give you a noticeable increase in your current workload with the faster cores, but it’ll just come down to whether you wanna handle the higher TDP.
My gut says go with the 2687 v2s but nobody would question if you decided to go with the 2697 v2s. Hope that helped!
@@RaidOwl Thanks for the reply. I also forgot to mention that even though this is a desktop machine, it's built in a 4U rack chassis. I didn't want my computer(s) on my desk and I didn't want anything on the floor either. Do you think my current Noctua NH-U12 120mm coolers can handle 150-watt TDP chips in a 4U chassis?
@@rdsii64 that may be cutting it close in terms of cooling, but I don't think you'd have any real issues. Check out this guide Noctua has: noctua.at/en/support/compatibility-lists/cpu
@@RaidOwl I ended up going with the E5-2697 v2s. I got a matched pair for $250 shipped; the best price I could find for the E5-2687W v2 was over $350 for a pair. They were delivered today.
My home server runs a 3700X with 16GB of RAM, soon to be 32GB. Currently I run TrueNAS Core. At the moment it has one 6TB WD Red drive and one 4TB WD Red drive, plus a 500GB NVMe SSD for jails and VMs. Hoping to swap out the 6TB drive for two 8TB drives in RAID 1.
Wow, someone who's totally in my ballpark when it comes to the whys. I'm running a 1700 for an Unraid server and I've run into the issue with PCIe lanes as well: one GPU for dedicated graphics, a 2nd GPU for a VM, a third GPU for Shinobi... oh wait, I have no more lanes! So I started looking at Threadripper and EPYC too. Threadripper is much more money compared to an EPYC but has higher clock speeds. And the reason I'm not considering the older Xeons with DDR3 is because those platforms don't seem to support NVMe drives properly.
One question for you though: since I'm hosting game servers, clock speed is important too. Do you have anything that requires clock speed over cores, and have you considered Threadripper for that reason? If so, what was your reasoning?
I am hosting a windows VM with a GPU pass through for gaming. I am pretty happy with the performance. The EPYC line obviously isn’t designed for gaming but this isn’t really my main gaming rig so my expectations were tempered. I’m totally fine with the trade off considering the price and power consumption advantages of EPYC.
@@RaidOwl Alright, fair enough. Gaming is still fairly doable then? Since it requires decent CPU speeds too. Game servers require similar specs, mainly IPC of course. But games like Factorio have a rather cycle-heavy main thread, even on a dedicated server.
Yeah, exactly. The IPC of the Rome lineup of EPYC is good enough to make up for the “low” clock speeds. Obviously Threadripper is going to perform better in gaming and a lot of encoding tasks, but EPYC has its place and it’s been great for me.
Keep me updated on what you decide to go with!
@@RaidOwl Thanks for the clarification. Will try! quite an investment :P
It seems like I'll be pairing a 7282 with an ASRock EPYCD8-2T. Got a good deal on the CPU (still gotta go through with it, but didn't want to hold back) and I like the dual 10Gb LAN on the mobo, just in case I need it. Still deciding if I'm going for RDIMMs or LRDIMMs, if I can find any.
I got myself dual Xeon E5-2696 v3s and 64GB of RAM (DDR4, octa-channel) for virtualization on Windows Server 2019 using Hyper-V.
I am still a noob, but I've been taking sysadmin/network engineering courses for 3 months now. I am currently using this server to try out multiple configurations and setups: deploying a DHCP server, a DNS resolver, an authoritative DNS server (if I spelled that right?) in both Windows and Linux environments, configuring pfSense, and also learning about Active Directory... I currently have 13 virtual machines running with way more resources than what we get to set up at school using VMware Workstation on their machines with old i5-4690 vPro CPUs and 16GB of RAM ^^
Just curious…what’s your day job? You seem like you have a lot of time on your hands lol. Great vid
I work in IT as application support/development. It’s all remote so I can use my time efficiently.
I just built a server using a 10850K salvaged from my old gaming PC. 10 cores/20 threads. I already need more cores/threads. I might copy this build's CPU/mobo/cooler combo. I've already got 3x 16TB drives in RAIDZ. Planning to add more when the drives go on sale again.
Please stop saying utilize. Utilize and use do not mean the same thing.
It's all about memory bandwidth for me because I want to do FDTD electromagnetic simulations. I'm looking at EPYC Genoa 4 node boxes with 8 CPUs.
Some would criticize the Noctua coolers because of their orientation: they're blowing hot air towards your RAM modules and towards your GPUs, and not towards the exhaust. But well, yeah... what's the alternative?
I mean, the temps are well under control so I'm not super worried about it. Gotta go custom loop, right???
Great video! Did you have issues with PCIe devices? I have this same board with a 7401. The BIOS doesn't recognize any of my hard drives, the M.2 drive, or the 4-port Ethernet card I have connected.
I have not. Have you tried different PCI slots?
Currently building a server using a Silverstone mATX case. I want to go the route of testing Proxmox. Bought the Mastering Proxmox book to help guide me along while following a lot of your videos on Proxmox deployment. I'm also going to follow the commercial route. Any recommendations on either the Ryzen 9 5950X or Ryzen 7 5700G?
5950X for raw power. 5700G for onboard graphics (makes GPU passthrough much easier if you're into that). Both are great chips, but I'd go 5700G unless you need the horsepower of the 5950X.
@@RaidOwl thanks for the recommendations!
The count killed me. Literally. I'm dead.
RIP 🙏🏼
Yes, re your dissection of the lane situation you faced with your former 8-core AM4 CPU... that is what the matter boils down to.
But... it bears noting a lot has changed since PCIe 4.0 combined with X570 chipset mobos. They take desktops a big step toward Threadripper capabilities lane-wise.
Firstly, the same job can be done with half the lanes, so e.g. an 8-lane PCIe 4.0 GPU can equal the performance of a 16-lane PCIe 3.0 GPU, resulting in 8 free lanes for NVMe, for example.
Secondly, the X570 chipset's 4 PCIe 4.0 lanes of bandwidth double the other AM4 chipsets' shared bandwidth pool, to 8GB/s. This allows more and better I/O ports.
As I said, just sayin', but modern AM4 isn't as grim as you portray. You can still make a natty server/workstation with a mid-range X570 mobo and a PCIe 4.0 GPU.
You are definitely correct that PCIe gen 4 is going to help in that area. We are still only getting 20-ish lanes, but more bandwidth. Hopefully manufacturers follow suit and give us some useful PCIe gen 4 x1 or x4 cards. Fingers crossed.
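To put numbers on the halved-lanes point (128b/130b encoding, rounded):

  PCIe 3.0: 8 GT/s x 128/130 ≈ 0.985 GB/s per lane, so x16 ≈ 15.8 GB/s
  PCIe 4.0: 16 GT/s x 128/130 ≈ 1.969 GB/s per lane, so x8 ≈ 15.8 GB/s

So a gen4 x8 link really does match a gen3 x16 link in raw bandwidth.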
Running an i9-9900 with 120TB of storage with double parity on Unraid. I would have gone with a 12-core Ryzen, but I wanted the hardware transcoding for Plex and the GPU market is still bonkers.
Yeah I don’t blame you.
What was the specification of RAM you have installed? Thanks
8x 16GB of Samsung DDR4-2400 ECC (M393A2K40BB1-CRC)
@@RaidOwl any advantage to going for 3200MHz, or not really with a server?
@@benheaven8069 You won't see any noticeable difference unless you're running a lot of RAM intensive tasks.
I'm running a 7282 with 128GB RAM, eight 8TB drives, and a few 4TB drives in my TrueNAS server. I'm working on setting up a dual 7282 rig for transcoding and encoding GoPro footage and Plex movie files, rather than using my 5950X desktop. Also have a dual EPYC 7742 server mining Raptoreum.
I just bought a 7742 for mining XRP or Raptoreum; can I use just 1 stick of RAM, or do I need to populate each channel with a stick?
You can use just 1
@Raid Owl awesome. Thanks. Eventually I want to succeed in building my own cloud, but for now, might as well stack coins to pay for the rest of it
Did you manage to update the BIOS to 3.20 and the BMC to 1.19?
Why RAID 10 and not RAID 5?
Better performance
Will this be good enough to run Minecraft?
Yes
Excellent!!!!
Lol "hobby" he says. Me who now looks down on everyone who doesn't have a home server: "This isn't a hobby it's for life!!" 😆
Proxmox on an old 8-thread i7 OptiPlex with 24GB RAM, running two MC servers, Pi-hole, Docker (Portainer). I do want more threads so I can also run pfSense!
Why did you use the AMD EPYC Rome 7302 instead of the 7302P? What is the reason? MSRP is $978.00 for the 7302 and $825.00 for the 7302P; the only difference is support for dual-CPU configurations on the non-"P" model. And dual-CPU ability is useless for your configuration.
Yeah everything you said is valid, but I found a good deal on eBay for a non “P” model.
When’s the software changes set up video coming?
Just finishing up a video about how I back up my files; then my next one should be about the new software/programs I'm running.
Just sexy! I know you wouldn't recommend it but I'm also building a similar system. I'm going for dual processor mobo though -- either the Supermicro H12DSI-NT6 or the Asrock ROME2D16-2T. I would appreciate any input
Hey if you got the cash and the use case then go for it haha. I’ve been pretty happy with this Asrock board.
Really wondering which PSU you put in though.
I used a Seasonic 850W 80+ titanium
I would use RAIDZ2 for your four drives. Just as (in)efficient as striped mirrors (RAID 10) but with better redundancy. Slightly slower than the mirrors, though.
Yep I just changed this a few weeks ago
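The tradeoff, worked through for four 8TB drives:

  RAIDZ2: usable ≈ 16TB, survives ANY two drive failures
  Striped mirrors: usable ≈ 16TB, survives two failures only if they land in different mirror pairs

Same capacity either way; RAIDZ2 buys safety, mirrors buy IOPS and faster resilvers.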
Why is the CPU fan mounted in the opposite direction?
It was just preference. It is in a pull configuration so it is still flowing air to the top to be exhausted.
Can you bifurcate each slot to x4x4x4x4? Want to play with nvme )
Yep
@@RaidOwl thanks. This is crazy
Hi, just found your channel. I was wondering about the CPU: I have found many on eBay, but most are Dell- or HP-locked (at least the ones with good prices, below ~US$1000), so I wanted to ask 2 questions...
1. If you bought a Dell- or HP-locked one, does it still work in your ASRock motherboard?
2. If you bought a non-locked one, how much did you pay for it? I ask so I can wait until they drop in price, because I found a couple but at double the price of the locked ones (around US$2000).
I actually bought an HP one. It came in an HP box and everything. As far as I know HP doesn’t lock their EPYC cpus down but I could be wrong.
The CPUs are not locked; the sellers are just mentioning the system they came from or were meant for. They work in all supported motherboards.
If you can do it, you should also
Which HBAs did you use?
Some LSI 8i cards. I recently upgraded to a single 16i
Today, EPYC CPU prices are going down, especially the 7302P or 7551P, and you can find a good board for like $200. EPYC will be the next Xeon.
Yessir, I hope so
Did you manage to connect SSDs/HDDs in Unraid via a 1-to-6 SATA octopus connector?
Can you please do a walkthrough, build, review, or suggestions on building a home server for storage/Plex, no gaming? Budget $1500-$2000, future-proofed for drive expansion up to 20 drives (12TB each), RAID 10, non-rack, using a Fractal Design Define 7 for example.
Laughs in 202TB
I just picked up an EPYC 7402P (24-core) and a Supermicro H12SSL-i. Honestly, it's my poor choice in RAM. My current setup is an R9 5950X and 128GB of RAM, but ZFS is a memory hog. I can't run Plex through an NFS share; it wants to gobble down 65GB of RAM while my Proxmox host is also consuming 65GB of RAM. I bought 2 sticks of 128GB 3200MHz Nemix RDIMMs... Yeah, that was a ton of money, and fully populating my slots will probably take a year or so. I probably should have bought 64GB sticks instead. Oh well 🤷‍♂️. I currently have eight 8TB HDDs in ZFS RAID 10. I will upgrade to 20TB drives eventually. My Plex library is growing. 32TB may not be enough. Yeah... I got problems.
Love the channel, love the content. My only question is: how's that Ethereum workin' out for ya? lol. Bad joke, I know. Why people invested AT ALL in crypto is mind-boggling. My boss lost pretty much his whole investment (which wasn't much). I tried to tell him, but no one listens to me.
I plan to build my first Threadripper/EPYC server one day, so all this info is a huge help! Keep it up!
Lol yeah I mined Eth for shits n giggles. I've never been heavily involved with crypto cuz I think 99% of it is dumb AF, but mining it for some extra cash was cool.
@@RaidOwl As long as you made a profit vs the cost of mining it lol.
Another question: Have you used TrueNAS to run VMs? If so, how's that whole experience?
@@TheJason13 Core sucks for virtualization but Scale is better.
@@RaidOwl have you had any issues with TrueNAS and ZFS (RAID)? I read or heard somewhere that EPYC doesn't support RAID (not sure if ZFS is the same thing).
@@TheJason13 No issues at all, and EPYC will have no issues with any kind of RAID so don't worry.
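Worth spelling out for anyone with the same worry: ZFS is software RAID, so it doesn't depend on CPU or chipset RAID support at all. A minimal sketch of creating a RAIDZ2 pool, with hypothetical device names:

  zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

The HBA just needs to expose the raw disks (IT mode); ZFS handles parity itself on any x86 CPU, EPYC included.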
There is also the HPE AMD EPYC 7302, which has the same specs as the EPYC 7302 at more than half the price. Anybody know more about that?
Would be interesting to see a video about ETH mining!
"Don't want to think about space" ... Ends up with 20ish TB usable 🤣🤣🤣
Laughs in 300+TB.
301 TB upgrade video incoming