I really, really would like to see the video and/or forum post on the media ingestion and media server. This is probably the project on my list that has the most direct and tangible benefit (so many things on the to-make list...)
I need to get me some of those Antec cases. One for my optical backup system: a DVD or Blu-ray burner per bay, and my scripts mean it can back up critical data every second day for quite a few days before I need to cycle discs. I also need to get one for a better quiet hotswap storage server.
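For anyone wondering what such a script might look like, a minimal sketch, assuming growisofs and eject are installed and the burners enumerate as /dev/sr*; the paths and rotation scheme are made up for illustration:

```python
#!/usr/bin/env python3
# Rotating optical-backup sketch: burn the critical dir to whichever
# burner is "up" today, then eject the tray. Run every second day
# from cron. growisofs writes the session; a blank disc is assumed.
import datetime
import subprocess

BURNERS = ["/dev/sr0", "/dev/sr1", "/dev/sr2"]  # one per 5.25" bay
SOURCE = "/data/critical"                       # what gets backed up

def burn(device: str, source: str) -> None:
    # -Z starts a new session; -r -J add Rock Ridge + Joliet naming.
    subprocess.run(["growisofs", "-Z", device, "-r", "-J", source],
                   check=True)
    subprocess.run(["eject", device], check=True)

if __name__ == "__main__":
    # Rotate by day-of-year so each disc is only touched once per
    # len(BURNERS) runs before it needs cycling.
    day = datetime.date.today().timetuple().tm_yday
    burn(BURNERS[day % len(BURNERS)], SOURCE)
```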
Hi! I want to build a NAS for myself, but I'm not sure what software to use. I want 4 things from it:
1.) 3 HDDs for parity
2.) a RAID that can be expanded with more disks later
3.) AES encryption of the data on the RAID
4.) an M.2 R/W cache
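Not the only answer, but OpenZFS can tick all four of those boxes; a hedged sketch follows, with placeholder device names. Caveats: raidz expansion needs OpenZFS 2.3+, swap raidz1 for raidz3 if "3 HDDs for parity" means triple parity, and ZFS has no true write-back cache (L2ARC is a read cache; a SLOG only accelerates sync writes):

```python
#!/usr/bin/env python3
# OpenZFS sketch for the four requirements (placeholder devices).
import subprocess

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# (1)+(3): raidz1 = one disk of parity; native encryption with AES.
#          Prompts for a passphrase at creation time.
sh("zpool", "create",
   "-O", "encryption=aes-256-gcm", "-O", "keyformat=passphrase",
   "tank", "raidz1", "/dev/sda", "/dev/sdb", "/dev/sdc")

# (2): grow the raidz vdev one disk at a time (OpenZFS 2.3+ only).
sh("zpool", "attach", "tank", "raidz1-0", "/dev/sdd")

# (4): an M.2 device as L2ARC read cache (no write-back cache in ZFS).
sh("zpool", "add", "tank", "cache", "/dev/nvme0n1")
```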
Yeah, those ASRock AM4 server boards are very nice, but they're murderously expensive on eBay. I could get like 6 normal mATX boards for the same processor at the price of that one.
MSI were doing an X470 board with 3 full-length slots and 3 x1 slots for £85. I built my server on that. Still not enough full-length slots, though; I pruned the video card to fit in an x1 slot.
The Antec 900 is like all of those older cases with many 5.25" bays, something lacking from PC cases these days, along with 3.5" drive bays. There should be at least 4x 3.5" bays. And that motherboard is >$300.
I wouldn't recommend the 4- or 5-bay 3.5" Icy Dock part; I would stick with only 3. I really wanted to stick as many drives as possible in my server, but heat quickly becomes a problem. It's still a problem with 3 drives (I have two 3x 3.5" bays and one 6x 2.5" bay; heat is a problem in both). Now, it should be noted that I've stuffed my very large server in a closet. I added a fan vent into the attic, but it gets toasty in there anyway; perhaps the 5-bay in an open room with ideal ventilation would do just fine.
I have this exact case, the Antec 600, mine without the water cooling ports. Excellent system; it was my gaming system for 12 years. Now it's an Xpenology backup target. The top fan stopped recently and I'm still trying to figure out how to replace it.
I have that EXACT same motherboard in an Apevia X-QPACK2 case, with an Icy Dock 16x 2.5" hot swap SSD bay, a SATA3/SAS3 IT-mode JBOD RAID card, and a pair of Optane drives for cache
Those motherboards, and almost everything like them from ASRock Rack, have been out of stock everywhere for ages. When do you think there will be more available in the usual places, i.e. Newegg?
I still need 5.25" bays. I have a large physical media collection (CD / DVD / Blu-ray), and with a large portion of that being live concerts in FLAC, I still need optical drives as I'm never getting rid of the original media.
Have started this recently, but with a Sharkoon T9 (gave my Antec 900 to a friend to do it for himself). Preface: I know nothing about servers or networking, first time.
- Xeon E5-2698 v3 16-core
- MSI X99A SLI Plus
- 8x 8GB Corsair LPX
- Intel X550-T2 10Gbit NIC
- GT 710 GPU
- 3x Icy Dock (4 drives per 3 bays) adapters
- a spare 16GB Optane I might use for a cache of some sort
Still need to get a ~16-port RAID card/HBA (£400) and 12x 4TB HDDs... oh, they're like another £1000-£1400 alone. Probably going TrueNAS. While saving for this, my 2TB MX500 SSD died, taking 12 years of photos and spreadsheets with it. Now relying on a single Barracuda Compute 4TB (SMR) which had a backup from 6 months ago... yeah. Fun.
My server, which I've been running for 2.5 years, is a Threadripper 1920X in the X399M Taichi that I got used as a combo for $400. Started off trying to make a dense mATX build in the Nanoxia Deep Silence 4, which had 6x 3.5" drive bays and a dual 5.25" to 3x 3.5" hot swap dock to give me space for 9 drives. However, the fan on the back of the dock conflicted with the 24-pin power and I needed low-profile SATA cables to fit. It worked OK, but was a pain, and I only had 6 drives to begin with so I didn't get to use the dock. I got a good deal on a used Fractal Define R6 so I swapped over to that, and found a good deal on a Noctua NH-U14S which wouldn't fit in the mATX case. SFF is supposedly fun for some people, but for me it was too much of a hassle. The Define R5 is technically a better data hoarder case, because it still has 2x 5.25" bays for a wider array of docks.
Just curious what solution you use to keep 6 DVD readers ingesting DVDs & Blu-rays continuously with auto-eject and Plex-friendly output. I've been gradually crawling through my 20+ years of DVDs with MakeMKV, and it works, but it's slow, and the files come out with funny names that make no sense to Plex unless you manually rename everything afterward.
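The renaming part is scriptable if you rip each disc into a folder already named the Plex way ("Title (Year)"); a rough sketch with hypothetical paths, treating the largest MKV as the main feature:

```python
#!/usr/bin/env python3
# Post-rip rename sketch: MakeMKV drops title_t00.mkv etc.; the largest
# file is usually the main feature, the rest are extras. Assumes each
# disc was ripped into a folder already named "Title (Year)" and that
# rips and library live on the same filesystem (else use shutil.move).
from pathlib import Path

RIPS = Path("/srv/rips")       # hypothetical ingest directory
LIBRARY = Path("/srv/movies")  # Plex movie library root

for disc_dir in (d for d in RIPS.iterdir() if d.is_dir()):
    titles = sorted(disc_dir.glob("*.mkv"), key=lambda p: p.stat().st_size)
    if not titles:
        continue
    dest = LIBRARY / disc_dir.name
    dest.mkdir(parents=True, exist_ok=True)
    # Main feature becomes "Title (Year)/Title (Year).mkv" for Plex.
    titles[-1].rename(dest / f"{disc_dir.name}.mkv")
    for extra in titles[:-1]:          # everything else goes to extras/
        (dest / "extras").mkdir(exist_ok=True)
        extra.rename(dest / "extras" / extra.name)
```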
I have had that X470 server board with IPMI for a year. I thought it was bad at first: the reset button on the backplate was stuck, and the integrated firmware won't init if so. Given that it controls all the system fans, this can make for a bad time. I had to make a recovery USB to reflash the integrated firmware to fix it. Long live DOS!
I'm trying to turn an old Haswell gaming PC into a long-term storage server. The machine is upgraded: it has a Xeon E3-1280 v3 CPU, 32GB of DDR3, and a GTX 980 (I will be swapping it out for a Vega 64), and I'm looking at dropping in four 10TB drives and an old SSD, and putting Kubuntu on it with ZFS for storage management. My problem is that I also want to run 10G LAN, and with a GPU I don't have the PCIe lanes for a 10G NIC; I have a single PCIe 3.0 slot, and the additional PCIe x4 slot is PCIe 2.0. Now I'm debating if I should go another route, but I really dislike tossing functional hardware for no reason.
I have not had much luck with the Icy Dock 2.5" docks; the mechanism to eject the drive breaks extremely easily. My "server" is an old OptiPlex with an 8400T. No speed demon, but it gets the job done!
With that board, would you be able to pass the SATA ports to a different VM? Can anyone point me in the direction of a hot plug 2.5"/3.5" dock, maybe this Icy Dock? I can't figure out where to find this information...
Hey Wendell, what do you think about a Dell Z400 I can get for ~$100, with 2x GB ECC RAM, 4x 1TB drives (SATA, 7.2K rpm), a Xeon W3520 as CPU, and there's even an Nvidia Quadro 2000 in it. (I would go for TrueNAS Core.)
Got myself an upgraded HP MicroServer Gen8 with TrueNAS on it. Downgraded from a full-size 10-core Xeon on a C602 motherboard. I prefer the small size over the full horsepower.
You know, with the odd designs I've seen for PCs over the years and the front side being basically wide open, it makes me wonder how much thought people _really_ put into their computers long-term. I would love a PC case with a modular front panel so that I could have full-frontal rad support, _or_ sacrifice some height and go with a *thicker* 120x240 rad plus 5.25" bays either top or bottom, however I want it, _expressly_ for use with such adapters as shown here.
If you're going to connect all those drives with SATA cables, you're going to have a bloody mess (a cabling nightmare) going on inside that case. Unless those drive cages support SAS connectivity, that thing will not be fun at all. I have a 4-port 2.5" drive cage that slides into a 5.25" slot in my server case, and the SATA connections with power going to that thing are a mess. Unfortunately, I don't see a whole lot of those drive cages supporting SAS for some reason. Back in the day, all server drive cages were connected via SCSI; it was an ugly ribbon cable, but it was just one cable (with however many power connectors too, but...). Connecting 5 cables (4 data and one power) to the back of these SATA cages is too much. The scary thing is that they make 8-port 2.5" SATA drive cages with 8 data and 2 power connectors in the back. I can't even imagine the mess that would be.
Oh my, I had that Antec Nine Hundred chassis when the Intel Q6600 launched in the olden times!
Also, Wendell rambling is the best of times!
I started this playlist thinking that I was interested in setting up a home server / NAS. Halfway through it, I looked at my Antec 900 breeding dust bunnies in the closet and said, huh, I wonder if that would work, and started looking up 5.25" bay slot adapters and planning on tossing a dirt cheap AM4 board into it.
Absolutely killed me to see literally the same case sitting on the desk here at the end. Definitely think I'm going to go with that case, if only because it has a lot of room to grow -- probably just start with one mechanical drive bay filled with a couple drives, but with 9x 5.25" bays it could grow as ridiculous as I would let it.
Ah yes, the Antec Nine Hundred, legendary! One of the best cases, way ahead of its time. Looks great (especially compared to "gamer" cases of its day): slick black, mesh front, insane cooling options, a useful case-top tray including IO, and amazing front configuration options.
I built my first PC in this case with an Asus P5Q and a Core 2 Quad Q9400, 8GB RAM with the legendary 212 Evo and of course neon tube lighting. xD
It was still going strong into 2020, even the neon tubes! I then made the BIOS mod to run an X5460 Xeon and it's still being used in my workshop.
I never thought of it as a good option for a server build, but it really does tick all the right boxes, with those adapter options! Thanks Wendell
How is this comment 10 days old?
@@applicablerobot They're probably a patron on Patreon.
@@joshluvhalo oh thanks. Somehow I didn't know l1t has a patreon
You are doing god's work, the first reasonably notable public figure to do so, but I fear it may be too little, too late. I've ranted where I could, for what little it's worth, that 5.25" bays lining the front of a case should have been the standard ever since the original Cosmos.
It's not just the incredible flexibility of Icy Dock bays, either. It's a great standardized way to mount anything: thick radiators, reservoirs, fan controllers, exterior displays, bloody cup holders and car cigarette lighters!
These days everything needs a custom solution. Gotta break out the drill. I also can't trust anyone that doesn't have a 4K Blu-ray drive with hacked firmware to rip media.
This video is the reason why I decided to go with the Enthoo Pro (1st gen) case for my build. I wish it were as easy to buy that ASRock MoBo for a reasonable price here as it is in the US; it's an "unobtainium item" where I live.
Thank you, Wendell. You are a beacon of light for us newbies :)
I wonder if anyone else would be interested in seeing Wendell, or Ryan, programming, or scripting, the auto-ingest parts of a NAS build? Ideally framed as a tutorial series?
It's those little quality-of-life bits that I think would help people get more into self-building home storage solutions, because nobody wants to go through their twenty-year optical collection and do all the ingests manually, clicking buttons every single time.
There might be legal issues when showing what “to do” with media that has some sort of copy protection like DVDs and Blu-rays.
You don't, need, all, those, commas.
AFAIK there are MakeMKV etc. containers that allow passing in a DVD/Blu-ray drive, with some autosensing logic in the apps themselves to dump the contents to MKV (or disc authoring containers to rip to ISO), which then needs to be moved to a location that acts as an ingest directory for a HandBrake container to transcode to a target profile.

A few years back I found a few containers that worked with USB optical drives and used them prior to my last server crash. Each container watched its USB device for a disc load to automatically trigger an ISO dump to a workdir; the workdir was a watched ingress dir for a HandBrake container with a predefined transcode profile, and the output was then moved by the server's incron settings to the correct location on the media server. The work files were configured to be removed on success in each container (so the "ripping" one extracted the disc and the HandBrake one removed its source).

That's how I ingested my box set collection into my NAS. Now I'm only adding new items as new buys arrive, so I just use the HandBrake container with the exposed drive selected as source, omitting the ripping step, and then tweak finer media container/format details (changing stream selection or adjusting metadata) with either scripted mkvmerge or an ffmpeg run initiated by incron...
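For anyone who wants the gist of the second half of that pipeline without containers, a minimal sketch follows; the paths and preset name are placeholders, and it naively polls instead of using incron/inotify as described above:

```python
#!/usr/bin/env python3
# Stand-in for the HandBrake-container half of the pipeline: poll a
# watch dir for finished rips, transcode, drop the result in the media
# dir, delete the source on success. A real setup would use incron or
# inotify (as above) and guard against files still being written.
import subprocess
import time
from pathlib import Path

WATCH = Path("/srv/ingest")   # rips land here
OUT = Path("/srv/media")      # media server library
PRESET = "Fast 1080p30"       # stock HandBrake preset name

while True:
    for src in WATCH.glob("*.mkv"):
        result = subprocess.run(
            ["HandBrakeCLI", "--preset", PRESET,
             "-i", str(src), "-o", str(OUT / src.name)])
        if result.returncode == 0:
            src.unlink()      # remove the work file only on success
    time.sleep(60)
```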
I've been using my old gaming rig with an i7 4790k for the past 4 years as a home server and just recently treated myself to a significantly more powerful Epyc Rome server
I still game on my 4770k 😂
Just out of curiosity, what are you using all this compute for? My home server is running on an i3-4130, I've been thinking of upgrading to some sort of quad-core Xeon but I can't justify it when my avg load on i3 hovers around 0.2-0.3
@@DrathVader Home automation, Plex server, backup for work, gaming VMs, media encoding, a web server and a couple of Docker containers. At the end of the day, all I wanted was enough headroom to expand :D
@@jstnjx Yeah, the price on the 32-cores just fell off a cliff. I bought a dual 32-core Epyc + 256GB RAM and a mobo for under $2000. People always ask why, but when you consider it replaced at least half a dozen devices and put them all into one nice small unit, it's totally worth it.
My Unraid box is an i7-3820... and I game on the virtual PC set up in Unraid... ahahahahahaha
About time you got the X470D4U, Wendell! It's a good board for the price. It was a bit quirky on release, and 5000-series can be questionable at times depending on what you're doing, but mine's been solid since May 2019; it has probably had about 20-30 restarts in that time. It's soon to be upgraded to a 5950X and 128GB RAM for hosting game servers more betterer than its current setup.
One really important thing to note about this board: the CPU socket isn't up to AMD spec, it's too close to the RAM, so you'll need to be damn careful when picking a cooler, especially if you max out the RAM slots. The newer boards fix this, but just a heads up to everyone. Noctua has a supported cooler list for this board.
Really bugs me that no case manufacturer is seemingly allowed to make cases with 10+ 5.25" bays anymore (figuratively speaking, of course). There are some cases available, but they only exist because they just haven't sold out yet.
It's part of the reason Icy Dock and hotswap bays in general can be so spendy; they're getting to be niche items. I see a death spiral for the 5.25" bay and its related accessories.
I don't think they will disappear; 5.25" is just a standard form factor that has been used for many different things over time. What becomes obsolete is what you put in there each year, not the form factor, because it perfectly matches the spare space in PC cases. Icy Dock just launched an 8x NVMe 5.25" enclosure that has a lot of potential, and even a 12-SSD unit, not to mention a 2x NVMe unit for the 5.25" DVD slot...
But there are many other potential uses. To mention the weirdest ones, I have seen a 5.25" drawer (with keylock), a 5.25" subwoofer, and even a 5.25" coffee holder that keeps your cup warm (it gets hot air by redirecting internal fan exhaust outwards...). Another nice trick is to add an extra power supply in there when your original one is not enough. And of course you can put traditional mechanical hard disks in there (boring, but nothing else will give you a better cost per terabyte, by far).
Other uses are extra fans, card readers, professional audio gear with balanced connectors, video patch panels, a USB hub, a battery charger, RGB lighting, retractable cables, AC power plugs for monitors or other devices, KVMs, a small UPS for safe auto-power-off on blackouts, small network switches, a Raspberry Pi... The 5.25" bay really helps a PC be a PC, the only modern machine designed to be upgraded as you want, when you want, how you want. Your way.
Even when left empty the PC soul still lives in there, and I hope it will never die.
If you're into rackmount, Rosewill has a 3U case that technically has 12 5.25" slots. It comes populated with its own HDD tray holder, but you can easily swap that out for anything else.
@@DoozyBytes that's exactly what I did and it works great. $150 case plus another $150 of adaptors/fans (the stock fans are junk) and you have a super nice 3U rack case.
The two 5.25" slots (albeit poorly supported weight-wise) have been the major selling point of Define 7 XL for me.
Yeap, some people with modern systems still need this stuff.
@@nismo4x4n I agree about the fans, but the case design is also important for rack mounting. PC stock cooling can be cheap while still powerful because it can intake air from any direction and blow it out in any other; turbulence doesn't matter too much and the case typically has the room to itself. Meanwhile, rackmount requires front intake only and back exhaust only (otherwise it will breathe hot air coming from the equipment below and blow it, even hotter, onto the equipment above, also messing with air convection).
Big love for Icy Dock, their gadgets are often exactly what I need for odd jobs
got a couple of Icy Dock 5.25" single bay adapters 5 years ago, each holds both a 3.5" and a 2.5" SATA drive, individual power and eject buttons...; great stuff!
I just ordered a single 2.5” bay from icy dock. I’m excited to play with it. I’m as excited as you are!
I love this type of homelab / enterprise-ish hardware, brings much needed features to the masses
Yes, they do some amazing hardware at a great price!
I wanted to add some extra Icy Dock 5.25" adapters similar to what you are using, so I bought a multi-bay DVD duplicator case without the drives on eBay and then connected it to my system using an external 6G SAS adapter card. Works like a champ.
It's funny you mention the Antec Nine Hundred. The PC I built in 2011 (Core i5-760, Asus P7P55D-E PRO, 8GB DDR3-1600) is in an Antec Nine Hundred Two. I did the exact thing you mentioned in this video: I put in 3 Icy Dock 4x 3.5" hot swap bays and upgraded my NAS to it. I've got 12 3TB HDDs in RAID 6 and I'm super happy. My only regret was not getting the Antec Twelve Hundred full tower so that I could have fit 4 of these bays in it, but back in 2010 I had no idea I'd still have that case 10 years later haha.
A word of caution on the ASRock rack X470 motherboard with onboard 10G Ethernet.
It really was built for server airflow. I am running a parsec gaming VM as well as proxmox/TrueNAS on this and ran into system instability due to insufficient cooling of the 10G chip. Pointed a fan at it, fixed.
Also ran into instability using two USB ports with external HDDs (for backup)
Got some Icy Dock bays a decade or so ago (Windows Home Server days) and they've been transplanted a few times now (currently in rackmount cases, which still do 5 1/4" bays).
The three drive and four drive versions had slots a decade ago, but bending tabs is fairly easy.
I have that Asrock Rack mobo running my truenas with a nice HBA card with an LSI chipset, and an nvidia T600 for encoding, with a Ryzen 2600 and 64 gigs of ram.
Runs flawlessly.
I've got two of these boards as well, running a 5900X and a 3900X. Both with 64GB ECC RAM (2x 32GB sticks), running Fedora Server.
Same. I have the same board, my first-gen Ryzen 1800X, an LSI HBA, and 64 gigs of dirt-slow RAM running Proxmox. I'm using the 6x Icy Dock for VM drives, and down the road the HBA will feed 8x 3.5" drives for my NAS VM. An Antec P100S holds everything pretty well.
That Antec 900 is still a fantastic case IMO. I know it probably wouldn't sell well, but I wish Antec would make an updated version that kept its original charm while also implementing certain modern design features like cable management.
I just finished doing something similar, except with the OLD OLD HAF 932 case. The wheels still work and it fits my 80TB array without issue lol. Threw an X570 board in there with a 3600X and 64GB of RAM. Runs TrueNAS and I love it!
How much noise?
@@soulfinderz The drives are certainly a bit noisier than I had initially anticipated! Not sure what to attribute that to other than load. It's sitting next to a 5-GPU mining rig that uses 5x Noctua NF-F12 fans as intake, so I rarely notice the NAS being noisy. It's using a Noctua NH-D15 for the CPU cooler; even under full load I couldn't get it past 66 degrees Celsius.
I run my Home Server and Backup Server on some Icy Dock and Rosewill hotswap cages. Both using the Zalman MS800 which has 10 5.25 bays. Works perfectly.
I've got several old business OptiPlexes with i5-6500s running Proxmox and a couple of VMs; it's been great for starting things so far
I use a Rosewill Thor V2 with two 4-bay hot swap docks, while the case holds 6 additional internal 3.5" drives. I put SilverStone 2.5" trays in the internal bays for a total of 12 2.5" drives. The case houses three 200mm fans for plenty of airflow. Makes a nice Proxmox system.
Icy Dock, Startech, and Silverstone are all companies that make just about the perfect product for your use case. Amazing companies.
Built my "basic" file server around a 5600G, 16GBs 3200, 2x4TB & 2x6TB running UNRAID. Has been rock solid. Housed in a reused dell Vostro matx case.
I just bought the 4-bay Icy Dock with no eject buttons for my main desktop, so I can easily swap backup drives, and drives for trying different OSes without virtualization or affecting my main OS install. Very nice choice, highly recommended!
Huge fan of Icy Dock.. my personal PC has a Blu-ray burner and two SATA drives (each SATA drive bay has its own power switch) in one 5.25" bay.. awesome for cloning drives and saving data long term.
Been using Icy Dock cages since 2000. Now I have two 6x 2.5" bay cages installed for a total of 12 hot-swappable SSDs running Microsoft Storage Spaces.
Those cases are incredible dust vacuums.. but the storage possibilities are insane
Started out with this case, a Q6600 and a GTX 275 a "while" back... Many drops of blood have been spilled in this case over the years. It was a true flashback to see it in a YT video in 2022; admittedly it's still standing... if a bit skewed on the floor beside me still 😂
Built many many customer systems in the 900 back in the day. Great case.
Hey Wendell, I'd recommend you check out the SilverStone TJ08-E; it looks like it would be pretty much perfect for what you want (4-5x 3.5" without bay adaptors, depending on how you configure it, and 2x 5.25" to play around with).
Silverstone DS380 has been my ultimate case for my home server, I needed a simple yet long lasting hotswap case that is compact, runs like a beast.
14:29 thank you editor! That made me chuckle.
I've been using that ASRock Rack motherboard in production for a while now; it's been excellent.
I remember building a few "privateering" servers for, ahem, known individuals, with 3x 5.25" to 5x 3.5" tool-less hotswaps from iStarUSA. Three of those for Antec 900s and four for an Antec 1200 made for 15 to 20 HDDs. Nobody wanted to spend the money on the SilverStone Temjin. The main "privatizing" rig was an Antec 900 with 6 DVD drives (slowly updated to Blu-ray) and one of the iStarUSA hotswaps with 5 HDDs in JBOD. A 16i HBA was used for most of the SATA connections. The White Album reference was mentioned a few times during the Blu-ray replacement era.
When I did my last home server build using decommissioned parts from eBay, the case (w/ accessories for it) was the single biggest expense (not the 128GB of registered ECC memory, and certainly not the cheap Supermicro board or the pair of Xeon E5-2650 V2's I got for it). I've been happy with the Define 7 XL, and the cheap server I built from used stuff. I even bought cheap Chinese knockoffs of the Intel X520-DA1, which are still providing me with a 10 Gbps connection between my PC & server. Some gambles seem worth taking, and when you build a server for yourself (or your family) you can judge that as you see fit.
LGA 2011 & 2011 v3 are still my favorite platforms for building "let's just have some fun with these" servers. You can get an obscene amount of cores for very little money, support for lots and lots of cheap ECC RAM, it's super-easy to work with and bulletproof reliable, and OK they're maybe not as fast as the latest Sunny Cove cores, but if you're just playing around with a ton of VMs or containers or whatever, you'll have more than enough MHz to keep everyone happy.
@@MatthewHill The ability to install a lot of _cheap_ ECC memory was a key factor. Otherwise, I would've probably gone the AM4 route w/ unbuffered DDR4 ECC. I had wondered how well SMT would work with Proxmox, but overall it was a big plus to double the number of logical processors. When assigning processors to VMs one just needs to be aware of how they map to actual cores (w/ lscpu). Of course I _want_ an EPYC server, but this server is getting it done, and it's just as reliable as my previous 2P Opteron 6128 system. Good Lord, the single-thread performance was _bad_ on that system, though! Even when it was brand new, it was slow.
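If it helps anyone doing the same, a quick sketch of checking that mapping programmatically instead of reading the lscpu table by hand (only assumes lscpu is present):

```python
#!/usr/bin/env python3
# Group logical CPUs by physical core via `lscpu -p` before pinning
# vCPUs, so SMT siblings don't end up split across different VMs.
import subprocess
from collections import defaultdict

out = subprocess.run(["lscpu", "-p=CPU,CORE"],
                     capture_output=True, text=True, check=True).stdout

cores = defaultdict(list)
for line in out.splitlines():
    if line.startswith("#"):   # lscpu -p prefixes a comment header
        continue
    cpu, core = map(int, line.split(","))
    cores[core].append(cpu)

for core, cpus in sorted(cores.items()):
    print(f"core {core}: logical CPUs {cpus}")
```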
@@vonkruel Oh yeah those Opterons were garbage. You can get very cheap 2p and even 4p servers now but it's not even worth it.
Epyc, though, is a nice platform to aspire to. 7001 is starting to be pretty affordable and you can even get some 7002 CPUs for not too much.
I'm absolutely a data hoarder, using my previous gaming PC as a server, and it's not got any redundancy or expansion left (not cool, not quiet, and not pretty anymore).
Been meaning to get TrueNAS going so everything doesn't disappear if one of my drives decides it's been on long enough...
I was waiting for an explanation of that black magic at the end for more SATA connections lol
In the end a redundant file server for Documents and irreplaceable media should be my priority instead of just loading up 70TB+ into a redundant truenas configuration.
Keep up the rambling!
SATA-wise, when I was going reaaaaallly cheap with a Banana Pi based NAS, which only has one SATA port, I used a SATA port multiplier turning that one SATA port into 5. Performance aside, this worked wonders; just don't expect the full bandwidth when using all drives in parallel. But as this is/was a single-user system, not really intended for more than one user at a time, that wasn't an issue. My new one is actually meant to enable installing Windows/Linux over BOOTP/PXE network installs, so here I do need the additional bandwidth. At this time my network is limited to 1Gbps and I have a fully managed switch, so to minimize performance issues I bonded/teamed 3 gigabit NICs on the NAS, enabling 3 client connections to it at full gigabit performance.
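For reference, the bonding part boils down to something like this iproute2 sequence (shown wrapped in Python; interface names and the address are placeholders, the bonding kernel module must be available, and 802.3ad/LACP needs a matching port-channel config on the managed switch). Note LACP hashes per flow, so a single client still tops out at 1 Gbps; it's three clients that each get a full link:

```python
#!/usr/bin/env python3
# iproute2 bonding sketch (run as root). Interface names and address
# are placeholders; 802.3ad needs a matching LACP group on the switch.
import subprocess

SLAVES = ["eth0", "eth1", "eth2"]

def ip(*args: str) -> None:
    subprocess.run(["ip", *args], check=True)

ip("link", "add", "bond0", "type", "bond", "mode", "802.3ad")
for nic in SLAVES:
    ip("link", "set", nic, "down")          # slaves must be down to enslave
    ip("link", "set", nic, "master", "bond0")
ip("addr", "add", "192.168.1.10/24", "dev", "bond0")
ip("link", "set", "bond0", "up")
```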
For hard drives a SATA splitter is probably not much of a bottleneck. The drives spend more time seeking than transferring data anyway. Two IDE drives on one cable was not a problem and SATA is a bit faster.
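Back-of-envelope numbers bear that out: SATA III is 6 Gbps on the wire, roughly 550 MB/s usable after encoding overhead, while a modern 3.5" HDD sustains maybe 150-250 MB/s sequential on its outer tracks. So two or three drives streaming flat out can saturate one multiplied port, but a seek-heavy NAS workload rarely gets close, and anything served over gigabit Ethernet is capped near 110 MB/s anyway.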
Antec 300 is an awesome NAS case. 6x 3.5", spaced out and well-cooled with dual 120mm intake in the front, plus another 3x 5.25 for a cage.
Your reviews are very informative.
Right now I am considering an external SAS enclosure as a storage expansion possibility, though DIY-ing one of those will be trickier, especially finding a proper enclosure solution...
I love all of Icy Dock's stuff.
A lot of them make me want to make a project just to use some of them.
those old Antec cases were legit.👍
That Antec is like so many multi-CD/DVD multi-copy cases that were churned out back in the day. Wish modern cases still had tons of drive bays.
About 12 years ago I was building computers for people and started to use those cube Lian Li PC-V3xx series cases to be "premium", blah blah. Last month I got back one of the very first ones I sent out into the wild (after being continuously in use for 12 years, mind you) because the power supply finally failed. I had forgotten how sweet and almost modular those cases were, so I asked the guy if I could just buy the computer from him.
Now I have my new server chassis. Still debating whether to go with that ASRock Rack X470 motherboard or a low-power Alder Lake. But I'll be tossing in one or two of those 5.25" Icy Dock things for four, or maybe eight, SSDs.
BTW how is Alder Lake behaving with “questionable” PCIe adapters when using PCIe Gen4? Is it less fragile than Zen 2/3 platforms since Intel is already on Gen5 here? Similar to PCIe Gen3 being pretty robust now on AMD’s side…
Does PCIe Advanced Error Reporting work properly on Alder Lake systems?
(Waiting for the release of (more) socket 1700 motherboards with ECC support before I might try one out for the very first time)
Having been down the potato home server road for many years and dealt with all the limitations (i.e. limited PCIe lanes/SATA ports) in consumer class hardware, I've gone the other route into used enterprise server class gear like Supermicro.
I was looking at something like that board when I was building my home server, maybe that exact board. But at the time it was like $350, more than I wanted to spend.
Went with a cheap MSI B450 motherboard instead and it's working out well after a BIOS update allowing it to boot without a GPU.
I also got one of those 16-bay 2.5" Icy Dock enclosures connected over SAS to an HBA. The idea was to use a bunch of laptop drives for storage, but then the whole SMR and ZFS problems came to light and all the laptop drives on the market were SMR. I ended up getting a bunch of SSDs instead. It's pretty nice, but more costly than I'd planned on, and in hindsight I wish I'd gotten one of the 5-bay 3.5" enclosures instead.
For the case I used a Coolermaster HAF XB Evo. It's bigger than I thought it was going to be, but it works well. I was very tempted by some of the Silverstone cases, but I was already venturing into new territory for me and didn't want to add first time SFF builder problems on top of everything else.
I got a couple of the M.2 SSD to 2.5" SATA enclosures; they are really awesome and pretty inexpensive. Let's face it, most motherboards and notebooks have too few M.2 slots, but these enclosures make it pretty easy to clone your old SSD to a bigger one and then just swap the sticks.
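The clone step itself is a single block copy; a hedged sketch below (device names are examples only; verify with lsblk first, since dd has no undo):

```python
#!/usr/bin/env python3
# One-shot clone of the old SSD (/dev/sda) onto the new one in the
# enclosure (/dev/sdb). Device names are examples: check lsblk first.
# Grow the last partition/filesystem afterwards to use the new space.
import subprocess

subprocess.run(
    ["dd", "if=/dev/sda", "of=/dev/sdb",
     "bs=4M", "status=progress", "conv=fsync"],
    check=True)
```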
Love the video, thanks Wendell!
Dang I've been looking to upgrade my old xeon server with that x470d4u motherboard but your coverage might cause it to go up in price ahhh lol. Love your hard work and content and hoping to learn a good amount.
Great vid. A 5900X in green mode (BIOS setting) is a 5900, If anyone is tossing up between the two…
If you wanted to build the ultimate mITX or mATX server, I'd say the best could definitely be based off ASRock Rack's ROMED4ID-2T. It has an SP3 socket, so you can pick whatever you need or can afford from the huge collection of brand new or e-waste-sourced Epycs, from low core counts to nigh on the highest tiers; and once the next-gen Epycs go on sale, Epyc Romes might get heavy discounts, letting you snatch some higher-quality silicon. The 4x DDR4 (L)RDIMM slots give huge memory potential, scaling much like the Epyc selection itself.

You are limited to a single PCIe slot, but you also get 4 Slimline PCIe4 x8 connectors, plus 2 more Slimline PCIe4 x8 that can be split into 16x SATA3 total, so you can have far more front-loaded devices (in count or performance): e.g. 2x 5x 3.5" + 4x 2.5" + 2 slim ODDs (say MB155SP-B/MB975SP-B plus MB604SPO-B), or just 16x 2.5" (e.g. MB516SP-B) for SATA devices, plus 4x 2x PCIe4 x4 for a total of 8 M.2/U.2 high-speed drives (e.g. 2x MB720M2K-B/MB699VP-B), using roughly 6 to 10 5.25" bays' worth of front panel space for quality-of-life extensibility. The board also has 2x 10GbE, so you might get away without "upgrading" to a dedicated 10GbE+ PCIe card at all, freeing the PCIe4 x16 slot for a GP-GPU, a mining GPU, or something like the Xilinx Alveo line-up or a similar transcode-offload card for better/faster media processing.

One potential issue to take care of, as with all server boards: the VRMs need airflow, either from a Noctua Industrial CO 40mm fan pushing air across their heatsink or something bigger. Heck, if you plan on water cooling as the main solution, you might as well replace the VRM heatsink with a block in the same loop, or build some contraption to incorporate it. The same could go for the M.2 drive and of course the GPU/expansion card, depending on the requirements and potential of the system.

Of course you could use the M.2 slot for the OS, or play with M.2 adapters (to U.2, or controllers exposing multiple USB or SATA/mSATA ports for, e.g., a RAID 1/10/5 of OS drives). And if you were really creative, you might skip an off-the-shelf case entirely and make a custom one for the components and front-panel options you choose. I bet the LTT team or NFC or a similar channel would gladly help with such a custom project ;)
Ha! I've got 3 of those cases, Antec 900 I believe it's called. Nice video Wendell 👍
I'd love a video on hotswappability: what it depends on mobo-wise, HBA-wise, enclosure-wise, form factor/interface-wise, and OS/software-wise. For example, the doorless 2.5" Icy Dock thingy presupposes hotswap, but I've heard of people eventually burning out their ports by hotswapping SATA drives.
Right on time, I've just started building my first homelab with a 1L thin client as a Proxmox server. I've been contemplating building a DIY NAS to replace my Netgear 6-drive turnkey solution. I have settled on the big brother of the ASRock board you picked, as I want to go with the Ryzen 7 Pro 5750GE. That little monster of a CPU will help save energy and reduce noise, with enough reserves to virtualize some services as soon as TrueNAS Scale goes stable. Looking forward to what you decide to do, as I have not settled on form factor/case. Still contemplating a compact NAS case, a low-noise solution (R5), and a low-depth rackmount case.
My potato-class PC converted to a NAS is an AMD FX-8320, installed in a 4U industrial rackmount enclosure.
The only thing I'm regretting there is the maximum number of hard disks I can put in it: basically 10 internal 3.5" bays plus 2x 5.25" and 1x 3.5" external bays (in the 5.25" bays I've installed a 4x 2.5" SATA disk bay and a 1x 3.5" + 1x 2.5" bay adapter).
At the moment I have 2 SAS controllers in it, one supporting 4 drives and the other 8 drives, which combined with the onboard 8x SATA still leaves me wanting more drive bays, booohoooo.
In the 3.5" bay I put a multi-card reader with CF card support, you know, 'cause having the OS on a CF card can be handy.
That's the motherboard I picked up to use as a new server/VM host. One issue I have with it is the proximity of the DIMM slots to the CPU socket: not a lot of heatsinks will fit without blocking the first slot or leaning up against that first DIMM.
The other issue is that (as far as I can tell) it won't use the integrated graphics on the Athlon 3000G, Ryzen 3 3200G or Ryzen 5 5600G. You're stuck with the ASPEED graphics which are slow and barely handle a Windows GUI, or a discrete GPU. Very curious which cooler you'll wind up with.
But it does support PCIe bifurcation of the x16 slot, which will let me use an Asus Hyper M.2 x16 Card V2; that should be fun to play with.
Oddly enough, I'm also setting up a media ingest station with some DVD, CD & BR drives, though not with that board. I've a perfectly good Antec Solo that will work nicely for it, as it has two of the three optical drives in it already, and has good enough airflow to cool them and a few SSDs.
I'd like to get into 10Gb ethernet, but I don't want a super loud rackmount switch, or to spend more than the server costs on it.
The dearth of inexpensive 10Gb switches that use RJ45 rather than SFP+ is a real problem.
Absolutely awesome vid with great information, thanks for making vids like this!!!
I see the future of my Ryzen 9 3950X.
It was definitely a good investment and replacement for the i7-970.
Cooler Master had the Stryker series, which had a lot of 5.25" bays and can maybe still be found. Icy Dock doing god's work here.
Currently building out an IBM x3550 M4 with 5TB Seagate drives. I used an Icy Dock laptop CD drive to 2.5" drive adapter, and I'm contemplating a low-profile dual NVMe card as well.
Usually, when I upgrade my main system (a gaming PC/development workstation), I keep the old CPU/mobo/RAM and put it in my server case. I used to have an old Silverstone case with an Icy Dock 4-bay 3.5" hotswap cage that fit in 3 of the case's 5.25" bays. The Silverstone was so old, though, that the airflow was bad and the HDDs were running hot, so I ditched the Icy Dock and the old case for a new Fractal Design Meshify 2, which can be configured internally with up to 9x 3.5" HDDs and has excellent airflow... Yes, I lost the hotswap capability, but I gained a lot in HDD and CPU operating temperatures.
I really, really would like to see the video and/or forum post on the media ingestion and media server. This is probably the project on my list with the most direct and tangible benefit (so many things on the to-make list...).
I need to get me some of those Antec cases.
One for my optical backup system: with a DVD or Blu-ray burner per bay, my scripts mean it can back up critical data every second day for quite a few days before I need to cycle discs.
I also need one for a better quiet hotswap storage server.
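If anyone wants a starting point for backup scripts like those, here's a minimal sketch, assuming growisofs and eject are installed and the burners enumerate as /dev/sr0, /dev/sr1, and so on. The paths, the alternate-day cadence, and the round-robin over drives are my placeholders, not the commenter's actual scripts.

# Sketch only: device list, source path, and cadence are placeholders.
import datetime
import subprocess

BURNERS = ["/dev/sr0", "/dev/sr1"]   # one entry per burner in the bays
CRITICAL = "/srv/critical"           # the data worth burning to disc

def burn(device: str, source: str) -> None:
    """Write a dated Rock Ridge/Joliet session straight to the disc."""
    label = datetime.date.today().isoformat()
    # growisofs passes -R -J -V through to mkisofs and burns the result.
    subprocess.run(
        ["growisofs", "-Z", device, "-R", "-J", "-V", label, source],
        check=True,
    )
    subprocess.run(["eject", device], check=True)

if __name__ == "__main__":
    # Run daily from cron; the ordinal check keeps it to every second day,
    # and the round-robin spreads wear across the burners.
    day = datetime.date.today().toordinal()
    if day % 2 == 0:
        burn(BURNERS[(day // 2) % len(BURNERS)], CRITICAL)

The dated volume label also makes it obvious which disc in the stack is oldest when it's time to cycle.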
Hi!
I want to build a NAS for myself, but I'm not sure what software to use.
I want 4 things from it:
1.) 3 HDDs' worth of parity
2.) The RAID must be expandable with more disks later.
3.) The data on the RAID must be AES-encrypted.
4.) An M.2 read/write cache.
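ZFS would tick all four of those boxes, at least on paper. A minimal sketch, assuming OpenZFS on Linux; the pool name and device paths are placeholders, and two caveats are flagged in the comments: in-place raidz expansion needs OpenZFS 2.3+, and ZFS has no general-purpose write cache (L2ARC only caches reads; a SLOG only absorbs sync writes).

# Sketch only: pool name "tank" and all device paths are placeholders.
import subprocess

def run(*cmd: str) -> None:
    """Run one command, raising if it exits non-zero."""
    subprocess.run(cmd, check=True)

# 1) Triple parity: a raidz3 vdev survives any three disk failures.
run("zpool", "create", "tank", "raidz3",
    "/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde")

# 3) Native AES encryption on the dataset holding the data
#    (prompts for the passphrase on creation and on every key load).
run("zfs", "create",
    "-o", "encryption=aes-256-gcm",
    "-o", "keyformat=passphrase",
    "tank/data")

# 4) The M.2 device as an L2ARC read cache. ZFS has no generic write
#    cache; a SLOG ("zpool add tank log ...") only helps sync writes.
run("zpool", "add", "tank", "cache", "/dev/nvme0n1")

# 2) Growing the pool: OpenZFS 2.3+ can widen an existing raidz vdev
#    one disk at a time; older versions have to add a whole new vdev.
run("zpool", "attach", "tank", "raidz3-0", "/dev/sdf")

If the expansion story matters more than checksumming, unRAID or SnapRAID-style setups are the usual alternatives people weigh against ZFS here.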
I remember on the X470D4U I ran into problems with many coolers and the 4th RAM slot... ended up with a 240mm CLC in my box... which barely fit...
Yeah, those ASRock AM4 server boards are very nice, but they are murderously expensive on eBay. I can get like 6 normal mATX boards for the same processor at the price of that one.
MSI were doing an X470 board with 3 full-length slots and 3 x1 slots for £85. I built my server on that. Still not enough full-length slots, though. I cut down the video card to fit in an x1 slot.
I'm doing this type of build as we speak and have a small dilemma: get a consumer-grade AM4 mobo and just use a PiKVM for IPMI?
This is what pisses me off so much about current cases - the limited 5.25" bays.
The Antec 900 is like all of those older cases with many 5.25" bays, something lacking from PC cases these days, along with 3.5" drive bays. There should be at least 4x 3.5" bays. And that motherboard is >$300.
I wouldn't recommend the 4- or 5-bay 3.5" Icy Dock part; I would stick with only 3. I really wanted to cram as many drives as possible into my server, but heat quickly becomes a problem, and it's still a problem with 3 drives (I have two 3x 3.5" cages and one 6x 2.5" cage, and heat is a problem in both).
Now, it should be noted that I've stuffed my very large server into a closet. I added a fan vent into the attic, but it gets toasty in there anyway; perhaps the 5-bay in an open room with ideal ventilation would do just fine.
I have this exact case, the Antec 600, mine being the version without the water cooling ports. Excellent system; it was my gaming rig for 12 years. Now it's an Xpenology backup target. The top fan stopped recently, and I'm still trying to figure out how to replace it.
I have that EXACT same motherboard in an Apevia X-QPACK2 case, with an Icy Dock 16x 2.5" hot-swap SSD enclosure, a SATA3/SAS3 JBOD/IT-mode RAID card, and a pair of Optane drives for cache.
Dang, I was actually about to do this. Now everything's gonna be bought out
Those motherboards and almost everything like them from ASRock Rack have been out of stock everywhere for ages. When do you think there will be more available for purchase in the usual places, i.e. Newegg?
I still need 5.25" bays. I have a large physical media collection, CD/DVD/Blu-ray, with a large portion of that being live concerts in FLAC, so I still need optical drives, as I'm never getting rid of the original media.
FYI, the Athlon 3050E is still Zen 1 based, though it is apparently supported by Windows 11.
Spotifly.
The real takeaway from this video
Have started this recently, but with a Sharkoon T9 (gave my Antec 900 to a friend to do it for himself). Preface: I know nothing about servers or networking; first time.
Xeon E5-2698 v3 16-core, MSI X99A SLI Plus, 8x 8GB Corsair LPX, Intel X550-T2 10Gbit NIC, GT 710 GPU, 3x Icy Dock (4 drives per 3 bays) adapters. Also have a spare Optane 16GB I might use for a cache of some sort. Still need to get:
~16-port RAID card/HBA = £400
12x 4TB HDDs... oh, they're like another £1000-£1400 alone.
Probably going TrueNAS.
While saving for this, my 2TB MX500 SSD died, taking 12 years of photos and spreadsheets with it. Now relying on a single Barracuda Compute 4TB (SMR) which had a backup from 6 months ago... yeah, fun.
How about old Chieftec cases? They were awesome back in the day. Or the Cooler Master Stacker cases.
I found a Stacker 830 for 40 bucks recently; it's amazing, and the best part is it's full aluminium, so it's light as a feather.
Running an Antec Twelve Hundred with Icy Dock bays at home, lovely combo for sure. Gets hard to get enough SATA ports though.
I have a case with only two 5.25" bays. Is it possible to stuff three 3.5" HDDs in there? I couldn't find an adapter for that.
My server, which I've been running for 2.5 years, is a Threadripper 1920X on the X399M Taichi that I got used as a combo for $400. Started off trying to make a dense mATX build in the Nanoxia Deep Silence 4, which had 6x 3.5" drive bays plus a dual 5.25" to 3x 3.5" hot-swap dock to give me space for 9 drives. However, the fan on the back of the dock conflicted with the 24-pin power connector, and I needed low-profile SATA cables to fit. It worked OK, but was a pain, and I only had 6 drives to begin with, so I never got to use the dock. I got a good deal on a used Fractal Define R6, so I swapped over to that, and then found a good deal on a Noctua NH-U14S, which wouldn't have fit in the mATX case. SFF is supposedly fun for some people, but for me it was too much of a hassle. The Define R5 is technically the better data hoarder case, because it still has 2x 5.25" bays for a wider array of docks.
Just curious what solution you use to keep 6 DVD readers ingesting DVDs and Blu-rays continuously with auto-eject and Plex-friendly output. I've been gradually crawling through my 20+ years of DVDs with MakeMKV, and it works, but it's slow, and the files come out with funny names that make no sense to Plex unless you manually rename everything afterward.
Wendell knows how to use Linux and write scripts and programs
@@mrlithium69 I've been known to do some scripting too, "back in the day."
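Not claiming this is Wendell's pipeline, but the manual-rename pain is scriptable. A minimal sketch, assuming makemkvcon and eject are on the PATH; the disc:0 source, the library and scratch paths, and the "largest file is the main feature" rule are all my assumptions.

# Sketch only: paths, drive IDs, and the renaming rule are placeholders.
import pathlib
import shutil
import subprocess
import tempfile

PLEX_MOVIES = pathlib.Path("/srv/plex/Movies")   # placeholder library path
SCRATCH = "/srv/scratch"                          # needs room for a full rip

def rip_and_file(title: str, year: int,
                 mm_source: str = "disc:0", device: str = "/dev/sr0") -> None:
    """Rip every title on the disc, then file the main feature Plex-style."""
    with tempfile.TemporaryDirectory(dir=SCRATCH) as work:
        # MakeMKV emits names like title_t00.mkv that Plex can't match.
        subprocess.run(["makemkvcon", "mkv", mm_source, "all", work],
                       check=True)
        dest = PLEX_MOVIES / f"{title} ({year})"
        dest.mkdir(parents=True, exist_ok=True)
        # Assume the largest file is the main feature; give it the
        # "Title (Year).mkv" name Plex expects, keep the rest as extras.
        files = sorted(pathlib.Path(work).glob("*.mkv"),
                       key=lambda p: p.stat().st_size, reverse=True)
        for i, f in enumerate(files):
            name = f"{title} ({year}).mkv" if i == 0 else f.name
            shutil.move(str(f), str(dest / name))
    subprocess.run(["eject", device], check=True)

# Example: rip_and_file("The Matrix", 1999)

For six drives you'd run one of these per disc:N and add a loop that polls for a newly loaded disc; Plex then picks up "Title (Year)/Title (Year).mkv" with no manual renaming.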
Icydock has gotten lots of my money
This is the stuff I've been trying to find too! Why are there no build-your-own-NAS cases?
I've had that X470 server board with IPMI for a year. I thought it was bad at first: the reset button on the backplate was stuck, and the integrated firmware won't init if it is. Given that it controls all the system fans, this can make for a bad time. I had to make a recovery USB to reflash the integrated firmware to fix it. Long live DOS!
I've always loved IcyDock products. It's a pity they have no presence in Argentina
I'm trying to turn an old Haswell gaming PC into a long-term storage server. The machine is upgraded: it has a Xeon E3-1280 v3 CPU, 32GB of DDR3, and a GTX 980 (which I will be swapping out for a Vega 64), and I'm looking at dropping in four 10TB drives and an old SSD, and putting Kubuntu on it with ZFS for storage management.
My problem is that I also want to run 10G LAN, and with a GPU installed I don't have the PCIe lanes for a 10G NIC; I have a single PCIe 3.0 slot, and the additional x4 slot is only PCIe 2.0.
Now I'm debating if I should go another route, but I really dislike tossing functional hardware for no reason.
Did you plan to fully use the GPU in your system, or just for hardware media encoding?
I have not had much luck with the Icy Dock 2.5" docks; the mechanism to eject the drive breaks extremely easily. My "server" is an old OptiPlex with an 8400T. No speed demon, but it gets the job done!
Had a few issues with the molex myself
Do you guys have a video on setting up a render server, by chance?
With that board, would you be able to pass the SATA ports through to different VMs? And can anyone point me in the direction of a hot-plug 2.5"/3.5" dock, maybe this Icy Dock? I can't figure out where to find this information…
Hey Wendell, what do you think about a Dell Z400 I can get for about $100, with 2x GB ECC RAM and 4x 1TB SATA 7.2K RPM drives? The CPU is a Xeon W3520, and there's even an Nvidia Quadro 2000 in it. (I would go for TrueNAS Core.)
Got myself an upgraded HP MicroServer Gen8 with TrueNAS on it. Downgraded from a full-size 10-core Xeon on a C602 motherboard. I prefer the small size over the full horsepower.
Lmao I am dying. "Or only subscribe for a very short period of time... Uhhh... Maybe do some transcoding..."
Given USB is faster than a DVD drive needs, I'd move the DVD ingestion out to USB and use the extra slots for more drives.
I use that motherboard with the SilverStone CS381B case.
You know, with the odd designs I've seen for PCs over the years, and the front side being basically wide open, it makes me wonder how much thought people _really_ put into their computers long-term. I would love a PC case with a modular front panel so that I could have full-frontal rad support, _or_ sacrifice some height and go with a *thicker* 120x240 rad with 5.25" bays either top or bottom, however I want it, _expressly_ for use with such adapters as shown here.
If you're going to connect all those drives with SATA cables, you're going to have a bloody mess, a cabling nightmare, going on inside that case. Unless those drive cages support SAS connectivity, that thing will not be fun at all. I have a 4-port 2.5" drive cage that slides into a 5.25" slot in my server case, and the SATA connections with power going to that thing are a mess. Unfortunately, I don't see a whole lot of those drive cages supporting SAS for some reason. Back in the day, all server drive cages were connected via SCSI. It was an ugly ribbon cable, but it was just one cable (with however many power connectors too, but...). Connecting 5 cables (4 data and one power) to the back of these SATA cages is too much. The scary thing is that they make 8-port 2.5" SATA drive cages with 8 data and 2 power connectors in the back. I can't even imagine the mess that would be.