I know how much those NVMe Icy Docks and the tri-mode controllers cost. If you are putting those in a system, the annual power cost is no object. ITX with a soldered CPU is ridiculous here: you do not want to starve the system of the compute or memory needed to make use of the storage, otherwise it might just as well be a direct-attached storage drive for the real compute. I would go with a proper workstation motherboard, if not a server board, from the likes of ASUS, with 7 PCIe slots for your NICs, controllers, and graphics cards for decoding and acceleration. Pair it with a newest-gen 16+ core CPU and 64GB+ of memory. With this much storage, pairing it with value components would be the real waste of money. I'm also not sure you should build this without a real-world use case, since it is beyond what some might consider a home lab configuration.
How is everyone getting past the ECC subject? Just avoiding Intel altogether? I'm building out an ASUS W680 + i5-13500. I know it might be overkill, but I'm curious to hear what others are using as a base for a 4K ripping media server.
I really can't recommend the CS380; it is the worst design I've ever encountered. First, the drive fans blow directly into a solid side panel, and the cage itself is almost solid, which results in almost a 20C drive temperature variance. Also, if you have warmer components (as I did with a Supermicro server board) and you try to mount fans to the side panel, they overlap the HDD cage ones, so you can only run one or the other. The result is horrible airflow, and now I have had to RMA that very expensive motherboard. I went back to my old mATX motherboard and CPU, and idle temps were 58C; I then decided to stick with mATX but swapped to the CS382, and it was literally night and day: idle temps are now 33C with the exact same hardware. 20C is the difference custom watercooling makes; that's how bad the airflow is.
Okay, and you don't specify what the temperatures were, just the amount of variance. Can you explain, in simple terms, how a CPU idling *more* within spec is better than a CPU idling *well within spec*? Certainly, you don't think cooler = WiLl LaSt LoNgEr, right?
@@tim3172 Temps went from approx 58C running TrueNAS SCALE (essentially idle), and throwing multiple CPUHOT codes within IPMI, to now running 36C in the CS382; that is on my ASRock mATX board. My X11SPI-TF is the RMA, but as it is ATX it obviously couldn't go in the CS382; that board was getting to 70C on the PCH. If they moved the fans back 3 inches they would blow straight onto the heatsink and not foul the HDD fans!! But as mentioned, you can't fit standard fans without removing the drive ones. Also the HDD temps were all over the place, some were in the high 20s C while some were 45C, not great for 24/7, and excess heat does kill components quicker. The X11SPI-TF was bought as new (although he may have lied on eBay) and died less than 6 months later. By all means buy it if you like it, or if you have it keep using it; that's just my experience. It's the worst-designed case I've come across in 20 years of building PCs.
The answer is to never use a case: attach everything to a flat board with some French cleats and hang your system on a wall in your basement, somewhere out of the way. Come back once a week with a can of air to kick the spiders out. Wasting cash on cases and lights is crazy!
True, but ITX boards are smaller, so there's more cooling space; they often arrive with mobile SoC processors pre-attached and in and of themselves have smaller cooling needs. Sure, an ATX would rock, but so would he scale. Just trying to maximize space a little, though you are totally right: mATX would open the floodgates for H/W!
@@nascompares None of those things are relevant to the purpose of putting the largest amount of storage possible in a desktop computer... the title of this video. "so would he scale" Do you mean... so would the scale? ATX is a standard desktop form factor... like... what? You chose an ATX and MATX case as examples and then go on to explain that the... motherboards designed to fit them... are too big for the cases (?).
mATX? Disgusting! I have a habit of reusing my desktop's components for servers when I upgrade and I always go for ATX mobos for all the extra PCIe-slots they got. I'd never pay for an mATX.
NAS is Network Attached Storage, a function a server can perform. SAN is Storage Area Network, which is a network of NAS devices (i.e. servers performing the NAS function.) There is no arbitrary, gate-keeping, upper limit to the storage size or complexity of a NAS device until it interacts with other NAS devices to function, at which point it becomes a SAN. "A Storage Area Network (SAN) is a network of storage devices..." Stop. Think. (don't) Post.
I love your videos; I've been watching for a couple of years and I just like what you do and how you do it. Thanks for being entertaining and providing great information - (Synology user here).
Thanks for the kind words bud, massively appreciated. Have a fantastic week!
I believe this might be the best video about DIY NAS. It just so happens I started following you recently because I was looking for a review of the SilverStone case.
Thank you!
I built mine last October with the CS382 case and it is really nice. Quite heavy and well built.
I built mine primarily as a Plex server, but I also plan on SMB shares and file backups.
I tried out TrueNAS, but it involved too much manual setup and configuration for me as it was my first time trying to use it.
I ended up going with Windows 11 Pro and using Storage Spaces.
I think it's time to make an eBay shop to sell off and make space on your desk ;) Great to see a British YouTuber, keep up the good work!
If you want a great NAS/Server case you can look out for a used Cooler Master Stacker STC 01. They're usually cheap since they are 20 years old already and have none of the modern features, but 12 5.25" bays, redundant PSU option and they can fit up to 12"x13" motherboards. The mobo panel is also great with so many options for different mounting spots.
Yep, that and the node 804 still kick arse.
I'm starting to regret giving mine away for free a few years ago...
@@Destructificial I regret selling my Cooler Master TC-01; it had 11 5.25" bays, 12 if you remove the front panel. That case was lovely and modular.
@@nascompares The Thermaltake Armor+ is a case with a similar style and might be a better solution as well.
Back in the day I had three of these cases with three modules holding four 3.5" drives and a 120mm fan in each. Worked great until I had to start swapping out hard drives. Now I really love hot-swap bays. So: very inexpensive cases and modules. It is a pity I can't get good, affordable 2U and 4U chassis that can be made quieter. My Dell R740XD is an awesome server, but I want something a bit more power efficient, as my servers tend to spend most of their time idling with drives spun down. That SilverStone CS382 is my current favorite compromise between cost, storage space, and the ability to get it quiet enough for home use.
Thanks!
If max storage is the goal, I don't see the point in choosing SSDs over more hard drives. You could get a QNAP DAS box and more hard drives to expand the storage far beyond what the money going toward the SSDs would buy. That SilverStone case is sick btw. I didn't know about it. I wish my server case had an optical bay.
Such a good video. Thanks!
Thank you for sharing!
Wow, so many NAS options. I've needed to get one for a long time; they are just so expensive.
To be fair, some of the DIY offerings have become pretty affordable (comparatively, of course). Plus, a lot more smaller brands in the PC and pre-made component industries have stepped in now. Surprisingly, there are good options in the market for sub-$99, tbh.
The seagulls get me everytime 🥹
I think you'll find that it is ME that is being GOT by seagulls... and mark my words, I shall have my day, and THEY shall rue me.... OH HOW THEY WILL RUE!
What a video! Loads of interesting options I didn't know existed, thank you!
A consideration you didn't mention is remote management. Believe me it's a HUGE deal being able to remotely KVM into a server when you need to mess in the BIOS or something, as otherwise you have to unplug the machine and tote it over to a mouse, keyboard, and monitor, which is a massive pain. So people may want to consider paying for a server platform that has it. There are also external alternatives like BliKVM and PiKVM.
You are 1000000% correct and I will try and add this to the recordings for part 2. I did a bunch of TrueNAS and UnRAID stuff, dancing around performance etc, but I did not directly call out the (frankly) MASSIVE BALL ACHE that troubleshooting on a system like this is going to be without streamlined remote management. Cheers for keeping me on the straight and narrow, P!
Loving this new series of DIY NAS videos!
Rackmount cases can be a good option; some (like Supermicro) use standard-type motherboards, opening up a whole host of options. There are a lot of kite flyers on eBay trying to charge £800 for a Supermicro CSE-836 (3U, 12-bay) case that they've removed all the caddies, power supplies and backplane from, but deals do show up. I got a CSE-836 from a place that was retiring it from their datacenter; it came with a Supermicro X9 motherboard/Xeon E3-1240 v2 CPU/RAM, all 12 caddies, both PSUs (these are hot-swappable), the SAS/SATA backplane, a Supermicro SAS PCIe card, and two WD SATA drives in StarTech hot-swap bays that mount over unused PCIe slots (I think someone forgot to remove these before selling it...) for £52 delivered.
I think an angle you haven't looked at is used enterprise computers with Skylake Xeons. They are modern enough to have decent idle wattage (30W for just the CPU), have an insane amount of PCIe expansion, and have 5.25" bays for extra hot-swap functionality. I've bought a P520, but the Dell T5820 is another possible option. The only issue may be the fact that the T5820 only has 4 hot-swap bays built in while the P520 has 4 HDD bays with no hot-swap (with an option for 3 hot-swap 3.5" bays plus 2x 5.25" bays), but other than that the price-to-performance/value proposition of these Xeon workstations outperforms the options you've listed here.
The icydock stuff is insanely priced!
If it were not such a niche I would import from China and compete with them, but it is such a small niche that one Icy Dock customer literally subsidizes ten other would-be customers.
Excellent video. I don't know how many videos I've seen talk about Icy Dock while skipping the essentials of the computer's connectivity. I want a case with a bunch of 5.25" bays.
I'm really staring down using U.2-to-NVMe adapters in hot-swap bays, and SAS instead of SATA. If I want a graphics card, I start thinking about EPYC SP3.
If we are talking about an NVMe NAS, I'll never go for $1k adapters; it's too much for me. So my choice is a mATX AM4/AM5 mobo with one bifurcation adapter for a very fast 4-drive pool (+1 on board, so RAIDZ1 is achievable) and one switch card in an x4 slot (Sabrent PC-P3X4) for a slower pool. Or some ATX EPYC/Threadripper board (or even some Xeon v3/v4) with plenty of x16 slots and bifurcation support. All these variants will be cheaper than any $1k adapter and will support ECC out of the box.
Yeah, I was wondering: if you are paying $900 for the adapter and all that stuff, why not use the money on a powerful Threadripper or something with PCIe slots in it already? Maybe they use more power, but I don't know how much those expansion cards use. I'm just using my old PC, a 3700X system, basically with an x16 slot for the 3070, x4 for the SFP+, 2x NVMe, and 4 HDDs in ZFS, but it would be nice to get something with more PCIe slots.
I think if you direct-attach an ASUSTOR Flashstor to your DIY build it will be a cheaper option.
@@akegca 2 devices? It's too much.
@@axescar Yes, I agree; however, it will be much more cost-effective to accommodate NVMe. Also, achieving PCIe 4.0 specifically is hard and the SFF cables are rare and still a work in progress. Getting a solution like the Flashstor Gen 2 that already has this part done will be more efficient and reliable; the only concern is that it will not play very nicely over USB4 as an external device, as you mentioned.
@@akegca How is it that the "Flashstor All-M.2 SSD NAS" could be considered a DIY setup?
This will be an interesting build, thanks.
A big thank you for a very good channel!
Thank you for this video; it is almost spot on for what I'm looking to build. I have looked at the SilverStone cases that you show in the video and they are very interesting to me. The problem, though, is where I can buy them here in Thailand where I live. The cost of shipping from the US is crazy, so a local retailer is best, but the ones I contacted are no longer importing these cases. So what I'm asking is for you to reach out to your viewers to see if anyone knows where to buy those cases here in Thailand or in countries near Thailand. I am not very good at speaking Thai, so English-speaking contacts are my preference.
Also look at the Siena (EPYC 8004) boards. Those can be a perfect NAS board: 16 or 32 SATA channels, either onboard dual 10G or dual SFP28, 6 RAM channels, all ECC, and enough PCIe lanes for cards or SSDs.
4:16 I thought I was going crazy. I knew I heard a notification somewhere :D
awesome video
I've been tempted by the Radxa ROCK 5 ITX, and some sort of M.2 adapter to a crazy number of SATA disks.
The best thing about it is the 11W power draw. The worst is the lack of SATA or physical PCIe slots.
I have a video coming very soon called "You are using your m.2 slot wrong" that discusses something about this. It's a little "tongue in cheek" as a video, but I think you'll like it. Arriving next week.
@@nascompares Excellent. I've been looking at getting some hardcore bifurcation done; I want more slots, damn it!
Seriously though, that Radxa board with a couple of PCIe x1 slots would be ideal.
There's a lovely/interesting little 2x 10G-to-M.2 adapter that I am puzzled by and would love to share. Next video man. Cheers for watching.
All very interesting, but the elephant in the room is how to power all of the additional SATA / SSD drives that you are attaching. It would be good to see a review of PSUs with lots of SATA power connectors.❤
All big-name PSUs come with enough power to support all those drives; most of them come with enough cables to support 12 SATA drives.
Good video, but why mess around with ITX when you have a huuuge chassis?
btw, did you know your postings and discussions about HexOS were covered on the WAN Show?
Luke seems to really appreciate what you're doing.
Moving up in the world Robbie.
I remember you highlighting that I was mentioned on WAN the Friday before last (thanks for highlighting that, P!) and it was cool. Did I come up again!?! Hopefully for good reasons... I know @jon is having to defend/discuss HexOS and the web management. What are your thoughts on that too?
Another option, similar to the OWC 8M2, but much cheaper, is the OWC U2 Shuttle. It has an onboard PCIe switch to turn any x4 of PCIe lanes into four M.2 NVMe slots. If you use the right motherboard, you can fit up to 7 of these on a consumer Intel board, or 8 on an AMD board (e.g. the Asus Prime Z790-P WIFI for Intel and the ASRock X670E Steel Legend for AMD) for 28 or 32 M.2 drives… and still have enough PCIe lanes left over for a 10GbE NIC!
Bloody fair point that! I also considered a couple of other bits I saw on MacSales, but they miss out on SATA storage capacity etc. for archival.
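To put rough numbers on the shuttle idea above, here's a quick sketch. The slot counts and "four M.2 per x4 shuttle" figure come from the comment; the per-lane bandwidth and any given board's actual slot wiring are assumptions to verify against the manual.

```python
# Back-of-the-envelope math for the U.2 Shuttle idea (a sketch, not a parts list).

def shuttle_capacity(x4_slots_available: int, m2_per_shuttle: int = 4) -> int:
    """Each shuttle occupies one x4 (electrical) slot and exposes m2_per_shuttle M.2 bays."""
    return x4_slots_available * m2_per_shuttle

print(shuttle_capacity(7))   # Intel board example from the comment -> 28 M.2 drives
print(shuttle_capacity(8))   # AMD board example from the comment   -> 32 M.2 drives

# Rough bandwidth ceiling per shuttle: the four drives share one PCIe 4.0 x4 uplink.
per_lane_gb_s = 2.0          # approx. usable GB/s per PCIe 4.0 lane (assumption)
print(4 * per_lane_gb_s)     # ~8 GB/s shared by the 4 drives behind one shuttle
```

The trade-off is the usual one for switch-based carriers: drive count scales nicely, but every group of four drives shares a single x4 uplink.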
If you can get your hands on something like a Thermaltake ARMOR+ you have a great case, suitable for MANY MANY HDDs.
Note you do need those brackets that either convert 3x 5.25" bays to 5 HDDs or 1x 5.25" bay to 6-8 2.5" HDDs.
Or if you go the NVMe route you can have EVEN more SSDs.
A desktop case with 80 NVMe SSDs would be hilarious.
If you don't care about hot-swap, why not the Meshify 2 or 2XL in storage config?
Why no mention of a SAS HBA? They can be had for around $100 and allow 8 SATA drives, which would struggle to saturate any modern PCIe slot you put the card in (assuming you are using HDDs and not SSDs, but this would go well with the 5.25" adapter you showed too).
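A rough check of that saturation claim, with ballpark figures (drive throughput and per-lane bandwidth are assumptions, not measurements):

```python
# Can 8 spinning drives saturate a typical HBA slot? Quick arithmetic sketch.

hdd_seq_mb_s = 270                     # optimistic sequential MB/s for a modern 3.5" HDD
drives = 8
aggregate = hdd_seq_mb_s * drives      # ~2160 MB/s best case, all drives streaming at once

pcie3_lane_mb_s = 985                  # usable MB/s per PCIe 3.0 lane (approx.)
hba_x8_slot_mb_s = pcie3_lane_mb_s * 8 # typical HBA in an x8 slot -> ~7880 MB/s

print(f"8 HDDs aggregate:  ~{aggregate} MB/s")
print(f"PCIe 3.0 x8 slot:  ~{hba_x8_slot_mb_s} MB/s")
print(f"Headroom factor:   ~{hba_x8_slot_mb_s / aggregate:.1f}x")
```

So even a last-generation HBA has several times more slot bandwidth than eight hard drives can deliver; the picture only changes once SATA/NVMe SSDs enter the mix.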
Interesting stuff, nice one. I do like those cases you showed; I'm still using old Antec 900/1200s and a historic Thermaltake Armor (10x 5.25" bays, all filled with trayless doodads).
I keep wondering if the Seagulls are silent partners of NASCompares 🙂
I have an efficient option to fully utilize the PCIe x16 slot:
get a breakout board, like the ASUS Hyper M.2.
There you have easy expansion; you can use, for example:
2x NVMe to 5x SATA
M.2 to 10GbE
an x4 riser cable for a GPU
the onboard NVMe slots for cache drives
There you have it: you can have something like 2+4 NVMe drives using all the lanes. It's a cheaper way, and probably more affordable if you don't want to buy expensive cards and waste 8 lanes.
*ponders an idea to install an 8x M.2-to-6x-SATA PCB in each slot of the OWC Accelsior PCIe card*
*Breathes deeply*
WHAT HAVE I DONE? WHAT HAVE I CREATED?
@@nascompares I wonder if that works; it has its own controller, right? Would be cool but less efficient, I guess. What I mention is simply a breakout board with no controller, splitting x16 into x4/x4/x4/x4, but we will see if the OWC works right :p
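For readers following this thread, here is the no-switch layout being described, laid out as a sketch. The device choices mirror the earlier comment; whether a given board supports x4/x4/x4/x4 bifurcation on its x16 slot is an assumption you have to confirm in the BIOS/manual.

```python
# Sketch of the breakout-board idea: the motherboard bifurcates the x16 slot into
# four independent x4 links, and each link gets its own device (no PCIe switch involved).

bifurcated_x16 = {
    "x4 #1": "M.2 carrier -> NVMe SSD (or an M.2-to-SATA adapter)",
    "x4 #2": "M.2 carrier -> NVMe SSD",
    "x4 #3": "M.2-to-10GbE adapter",
    "x4 #4": "riser cable -> GPU (running at x4)",
}

for link, device in bifurcated_x16.items():
    print(link, "->", device)

# Contrast: a card like the OWC one mentioned above carries its own PCIe switch, so it
# works even without bifurcation support, at the cost of the switch and a shared uplink.
```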
Does that nvme adapter card support RAID? Or just JBOD?
What I'm missing from this roundup is power. It's all good and well that you have eight HDDs and eight SSDs, but how are you supposed to power all of them? Are there even PSUs with 16 SATA connectors?
Power splitters
The Icydocks only require two SATA power inputs.
There are splitters with capacitors such as Silverstone SST-CP06-E4-USA.
They take one SATA power input, buffer it with a capacitor (to lessen the strain on the PSU during the energy-intensive HDD startup process), and split it into 4 outputs.
So, you'd only actually need 4 SATA power inputs from your PSU to run an 8+8 configuration.
But yeah, there are PSUs with up to 16 native SATA power connectors, like the CORSAIR RM1000x and RM1200x, MSI MEG Ai1300P, SilverStone PS-ST1500-TI, and CORSAIR HX1000. (I'm sure many others as well.)
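A quick power-budget sketch for the 8 HDD + 8 SSD configuration being discussed; the per-drive figures are typical datasheet ballparks, not measurements of any specific model:

```python
# Why the spin-up surge, not the steady-state load, is the thing to plan for.

hdd_spinup_12v_amps = 2.0   # 3.5" HDD peak 12V draw during spin-up (assumption)
hdd_active_watts   = 8.0    # 3.5" HDD active power (assumption)
ssd_active_watts   = 3.0    # 2.5" SATA SSD active power (assumption)

hdds, ssds = 8, 8

spinup_12v_amps = hdds * hdd_spinup_12v_amps                        # ~16 A on 12V if all start at once
steady_watts = hdds * hdd_active_watts + ssds * ssd_active_watts    # ~88 W for the drives alone

print(f"Worst-case simultaneous spin-up: ~{spinup_12v_amps:.0f} A on the 12V rail")
print(f"Steady-state drive power:        ~{steady_watts:.0f} W")
```

That surge is exactly what the capacitor-buffered splitters (or an HBA with staggered spin-up) are there to smooth out; once spinning, the drives are a modest load for any decent PSU.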
Can it run 24/7 without issue?
Please help me understand: can I reuse an existing drive bay of a Synology DS920+ (which has a storage pool and a hard drive with a basic volume created on it) by inserting a new drive in that slot, without losing the data on the previous drive? And if I later reinsert the older drive, will it be recognised and mounted normally with its storage pool? When I inserted a new drive it beeped continuously with an error message. I know that if I delete the storage pool the beeping will stop and it will let me format the new drive in that slot, but then my old drive with its data won't be recognised when I reinsert it.
Please can someone help me out? I'm a bit confused. Which motherboard was the one with ECC support he was talking about in the video? He showed two AMD boards, the 7840HS and the 7940HS, but which of them supports ECC?
I've heard that both variants support ECC but never figured out which RAM kit to buy specifically.
I recently found very affordable SAS drives for purchase. But absolutely no channel addresses installing them in a home setup with a controller. Why?
They are probably loud and consume a lot of power.
Thank you so much for this video, very informative and helpful information summarizing the NVMe requirements. However, I was actually thinking that getting an ASUSTOR Flashstor 12 NVMe (Gen 1 or Gen 2) and direct-attaching it via Thunderbolt to the DIY NAS would be much more cost-efficient than the HBA card and the Icy Dock NVMe enclosure. Hope you can share your thoughts on this?
The G1 won't DAS in the way you describe, but the G2 over USB4 might. Even then, that would be a little less stable than a single enclosure using internal connections, and likely cost more. That said, it would definitely be easier to spread the SATA and NVMe costs by giving them their own enclosures and reducing the need for an HBA card. Good stuff man, always great to approach it in different ways.
@@nascompares 👍 agreed
AVOID the Silverstone CS380 Case if your looking for an always on or medium/heavy use.
Had one, still have it but boxed it up. Temps for the drives are horrible, even did the cardboard mod to help airflow and yet, HDDs were running in the high 50's/low 60's.
Moved to a 4U 24 bay case and I'm in the low 40's.
I didn't think this was an issue till WD told me the 2 drives I had die were because of eccessive heat. 3rd bay from the top and last bay at the bottom, to be specific. Those 2 were idle at 55-64C.
The new case looks to have the same flaw; not even 3000RPM Noctua fans could help, and I even tried adding 2 to the MOBO side (industrial double-stick tape). So be warned, this will kill spinning drives if your planning on Plex or anything HDD intensive.
"if your looking"
You didn't explain what drives you're (you are) using so we can figure out if they're within spec.
Did you know that larger drives run at a hotter temperature on average? I know, right? It's shocking.
68C is considered in spec for the 18-22TB series of NAS drives, for example.
WD told you they died from "eccessive" (RIP English) heat... sure, they did.
You added 2... also... the motherboard?
If my planning on Plex what?
Less being an ammosexual and more learning how to write words, please.
@@tim3172 65C is MAX, not 68C. However, that is meant to be a momentary temp bump and not sustained 24/7.
With the enclosures WD sells, it was explained to me on the death of the second drive, this is fine as the drive will spin down and "sleep". So a few minutes at max temp, followed by a low-power state, is far different from always on / always running at the max operating temp.
Now, as far as size… they are all the same physical size, but if you meant drive capacity, yes, capacity does increase heat production, as do drives designed to run at higher RPMs. I felt this was irrelevant in this context as we are talking NAS builds, where drive capacity is typically desired. Perhaps you run 120GB drives?
As for "MOBO side", yes. Anyone looking at the older case (CS380) would see 2 fans are preinstalled on the side of the drives nearest the removable cover. There are none installed, or even provisions for fans, on the other side of the drives (the side of the case the MOBO mounts to).
Sounds like your concern with grammar is, perhaps, because you're not intelligent enough to fill in blanks that are not fully, completely and 100% accurately given to you. Perhaps, as a gun guy (glad you noticed), I don't need someone to tell me with perfect grammar which end I shoulder and which end the bullets come out; I have the ability to fill in the blanks and to understand things not spoon-fed to me.
Perhaps such training will help with your condition.
At this point, it would be interesting to consider having a terminal server with direct access to such storage, without the need for expensive external networking :D
I mean, "ancient" 1Gbit is enough to watch videos, play music, etc... anything higher than that means some heavier workload where any network can easily become a bottleneck. A single "old" PCIe 3.0 NVMe SSD can already do 25Gbit of throughput, but SMB itself can't transfer files at such speeds on its own...
True, but we have to at least acknowledge that there will also be a % of users looking at a data behemoth like this who want those drives to stretch their collective RAID'd muscles to the maximum.
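Some worked numbers behind that exchange; link efficiency and the NVMe figure are rough assumptions, and SMB overhead usually eats a further chunk in practice:

```python
# How long a single large file takes over different links vs. local flash.

file_gb = 50                                   # example: a 50 GB Blu-ray remux
links_gbps = {"1GbE": 1, "10GbE": 10, "25GbE": 25, "local PCIe 3.0 x4 NVMe": 25}

for name, gbps in links_gbps.items():
    usable_mb_s = gbps * 1000 / 8 * 0.9        # assume ~90% of line rate as payload
    minutes = file_gb * 1000 / usable_mb_s / 60
    print(f"{name:>24}: ~{usable_mb_s:5.0f} MB/s, {file_gb} GB in ~{minutes:.1f} min")
```

On 1GbE that file is roughly a seven-minute wait; at 10GbE and above the bottleneck starts shifting back to the drives, the filesystem, and SMB itself.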
I'm thinking of building my first DIY NAS, on as limited a budget as possible 🙈😂 It must be able to be placed in my 19" rack :) Referrals and suggestions welcome 😊
Tbh, it's the rack bit that will kill it in terms of affordability. SilverStone and a few specialist rack case vendors on AliExpress might be a good base, but it's still pretty hefty. Most of them are Mini-ITX though, due to the drive caddies taking up 1/3 to 1/2 the depth. Watch my vid on AliExpress NAS cases; there are a few decent ones mentioned, if I remember right.
@@nascompares Thanks :)
I am super new at hardware, but why is there no talk about HBA cards in this build? Is it because we are trying to keep it all inside the desktop case? Also, what about SAS shelves (which I am told are compatible with SATA drives)? Maybe I am so confused that these questions don't make sense. For example, I don't know what this means, but this dude seems to know what is up:
You can use a single HBA and PCIe slot to run hundreds of disks if you so wanted. I run a basic two-port 9207-8i to host all 25 of my disks. The SAS interface has the wonderful advantage of being able to be expanded with expander backplanes.
But what if I have a little more money and I want to just go ahead and get the full-size motherboard? Are there any good recommendations if I am looking to build a dual NVMe SSD, 8x SATA drive, unlocked Blu-ray drive Plex server to host all of my ripped DVDs and Blu-rays?
I built pretty much what you are looking for:
ATX motherboard in an Antec P101 Silent case.
7x spinning rust + 1 SATA SSD
UHD Blu-ray drive
4x NVMe
Dual 10 GbE SFP+ add-in NIC + dual 10 GbE onboard NICs (wish I'd gone for the SFP+ version instead of RJ45)
2 GPUs
Room to expand in the future...
@@Hansen999 - which CPU and GPU choices did you make? How much RAM? Which NAS software did you go with?
@@hfw3 AMD Epyc 7302, 128 GB Reg ECC Ram @3200, RTX 3060Ti and a GTX 1660 Super
@@hfw3 If I was building it today I would probably go for Epyc 74F3 or 75F3 as they clock a bit higher. They were just too expensive when I was getting the components.
@@Hansen999 - I only have $2,250 for my budget and the 74F3 alone is $3,000.
Waiting for Jonsbo N5 to build my NAS.
Same here.....but when😅
@@269jeroenify I really don't know. Hopefully sometime this year.
Darn, I was hoping for SAS3 hot-swap bays.
Anddddd it makes this a very expensive hobby!!! Get ready to be kicked out of the house. 😊
Can you please rename this video to "Building a NAS in a case that doesn't have enough space for the drives", "Building a NAS with huge internal bandwidth for no reason", or something similar?
The video title of "MAXIMUM STORAGE Desktop" doesn't fit this video. Maximum storage means maximizing, well.. the storage available to the user in a desktop form factor.
I completely fail to see how the Icydock 8 x 2.5" is anything other than a blatant rip-off. There is no logic in the device. The power is a simple distribution of 2 SATA power inputs and the SATA data is simply a passthrough. $300 / £268 for a passthrough and 2 fans.
Did you just waste over 7 minutes explaining why no rational person should choose an ITX board for this?
You can, but....
You recommend a FULL ATX case and a Micro ATX case. USE THE LARGEST BOARD YOUR CASE SUPPORTS. THAT'S THE POINT OF THE LARGER CASE.
"You can use USB4 to take advantage of 10Gbe adapters..." yes, because USB is *FAMOUSLY* (un)reliable for networking.
Alternatively, you could... IDK, buy a 10Gbe PCIE card for 1/4 to 1/5 the price that will be much more reliable?
THREE HUNDRED DOLLARS for a 10Gbe Thunderbolt adapter? Again... why?
"Some decent, overpriced, underperforming, limited, expensive, overpriced, costing too much money, no sane person would buy this, ITX options out there for people who don't understand how anything works. You should buy a MicroATX board at 1/2 the cost with twice to thrice of the expansion for the purposes of using it in non-ITX case."
FTFY.
And more wasted money explaining how to overcome the limits of the single slots on ITX.
Are people watching this interested in a $980-2200 card?
The LRES2224PF-2SFP+ shown doesn't seem to be commonly available. Variants for it are $110-240*. It's based on a Mellanox ConnectX-3 which is... *checks notes* TWENTY DOLLARS ON EBAY or $35 for a pair, shipped. No sane person would do this.
13:48 "We're trying to maximize storage." No, you're not. You just aren't.
The LRNV9F48 is a $980 card (AGAIN).
"This has been about value and bang for the buck." RRRRRRRRRRRIIIIIIIIIIIIIGGGGGHHHTTTT.
CS380B, + ATX motherboard that supports ECC + Ryzen 7600 + appropriate amount of ECC RAM.
Add a 10GbE SFP+ Mellanox ConnectX-2 or 3, or something like an Intel X550 for RJ45.
Add an HBA (something like an LSI 9300-16i) and populate it with 22TB hard drives.
Populate the 1-4 M.2 slots with the largest SSD(s) you can afford.
Consider a 3x 5.25" to 5x 3.5" adapter (i.e. SST-FS305-12G or ISTAR BPN-DE350HD) and add 5 more hard drives.
All 13(!) hard drives can run off the LSI card (286TB) and you can use the other 3 ports to run, idk, 2.5" SATA SSDs up to 8TB each... plus the motherboard's 4-8 SATA ports can also be used for more SATA SSDs, either in the overpriced Icydock or another cage such as the still-Icydock ExpressCage MB038SP-B for 1/3rd the price.
The bizarre focus on NVME and SSDs when trying to maximize......... storage... the thing that hard drives still exist for... is very odd.
Assuming you foolishly follow this video, you'd get 8x 8TB in the NVMe enclosure + 8x SATA SSDs(?) for a maximum of 128TB, at 8x $820 (a generic "Inland" 8TB NVMe, the cheapest NVMe SSD) and 8x $620 (Samsung QVO), for just $11,520 in drives. What a bargain.
(13 22TB drives is still $4620... 40% of the price for 2.25x the storage... with many open SATA ports to power up to 11 Samsung QVO 8TB drives.)
*LRES2203PF-2SFP is $110 on Newegg, others seem to only be sold in bulk and require minimums of 10.
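For what it's worth, the cost/capacity comparison in the comment above is just this arithmetic, using the commenter's own prices at the time of writing (not current quotes):

```python
# Raw capacity and cost-per-TB for the two approaches argued about above.

builds = {
    "8x 8TB NVMe + 8x 8TB SATA SSD": {"drives": 16, "tb_each": 8,  "cost": 8 * 820 + 8 * 620},  # $11,520
    "13x 22TB HDD":                  {"drives": 13, "tb_each": 22, "cost": 4620},               # commenter's figure
}

for name, b in builds.items():
    capacity_tb = b["drives"] * b["tb_each"]
    print(f"{name}: {capacity_tb} TB raw for ${b['cost']}  (~${b['cost'] / capacity_tb:.0f}/TB)")
```

That works out to roughly $90/TB for the all-flash route versus about $16/TB for the hard-drive route, before RAID overhead on either side.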
I wanna buy a motherboard, but I only want the 'I Hate Seagulls' option ... and I want that printed on the PCB.
*carves it into PCB with a nail* yep...that's next year's merch sorted
Is it possible to build a NAS that also has LTO drives ?
No. It's impossible, despite zero reasons to the contrary.
I have been debating whether a NAS or a server would be the best course. So far the server is winning; my investigation tells me a server can better handle my needs going forward. For converting a regular mid-tower like the Meshify medium into a server, is there a bay expansion I can use? Also, how much storage do I need for photos and a Plex library?
NAS is a specific function that a server performs.
"Should I buy a car or a driving?"
"How much storage do I need?"
Are you for real?
Your bottleneck on any of those motherboard options is lane count. You need 4 lanes per NVMe drive unless you are prepared to slow them down (in which case, why use NVMe instead of SATA?), and even using available lanes for SATA will only yield about 1.5 SATA channels per PCIe lane on Gen 3 (6 SATA channels from 4 lanes), or theoretically double that on PCIe Gen 4 (if you can find suitable PCIe 4.0 to SATA III interface cards; they all seem to be PCIe 3.0) without any bottleneck. A 10GbE NIC will have a bottleneck if it isn't given 2 lanes per port on PCIe 3.0, and even on PCIe 4.0 it will need a lane per port. But bear in mind that if you need 2 lanes you have to add the card to an x4 or greater slot, and a few lanes are used internally in the system.
So any discussion of storage servers which doesn't cover PCIe lane availability, generation, and use is a waste of time and potentially money. You can't add throughput to a system without a lot of free lanes of the highest generation available.
I have 44 lanes and find them a limitation, although that is because NVMe drives eat 4 lanes apiece. Fortunately, I have 8 SATA ports on my motherboard, and if I were prepared to go slower by moving more from NVMe onto SATA it would be less limiting. The 4 lanes used by one NVMe drive could instead provide 6 SATA ports or two 10GbE ports.
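To make the lane arithmetic concrete, here's a small, hypothetical budget calculator in Python. The per-device lane costs are the rule-of-thumb figures from the comment above (4 lanes per full-speed NVMe, roughly 6 SATA ports per x4 of PCIe 3.0, 2 lanes per 10GbE port on gen 3), so treat the output as an estimate rather than a spec:

```python
import math

# Rule-of-thumb lane costs on PCIe 3.0, as discussed above (illustrative only).
LANES_PER_NVME = 4          # a full-speed NVMe drive
LANES_PER_10GBE_PORT = 2    # 10GbE port on PCIe 3.0 (roughly 1 lane on PCIe 4.0)
SATA_PORTS_PER_X4_CARD = 6  # typical PCIe 3.0 x4 SATA controller/HBA

def lanes_left(total_lanes, nvme_drives, ten_gbe_ports, extra_sata_ports):
    used = nvme_drives * LANES_PER_NVME
    used += ten_gbe_ports * LANES_PER_10GBE_PORT
    used += math.ceil(extra_sata_ports / SATA_PORTS_PER_X4_CARD) * 4  # one x4 card per 6 ports
    return total_lanes - used

# Example: 44 CPU lanes, 4 NVMe drives, dual 10GbE, 12 SATA ports beyond the board's own.
print(lanes_left(44, nvme_drives=4, ten_gbe_ports=2, extra_sata_ports=12))  # -> 16 lanes spare
```

Swap in your own lane count and drive mix; the point is simply that NVMe eats the budget about four times faster than anything else.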
So why not just a Define 7 (XL) instead of all those workarounds?
Yeah it seems that going ITX is causing completely unnecessary difficulties.
How important is ECC memory for the average Joe? Will the difference be noticeable or is it necessary for the use case?
When used in a NAS or a server it's definitely something you need to consider. Sure, bit flips don't occur very often, but when one does and the fault is stored on the disk as the write cache is flushed, you loose something and don't even know it until it's too late. These bit flips might not happen very often, but they are probably a lot more common than you think. Most people only have experience with computers that don't have ECC memory, and they just see that the machines don't crash all that often. On most users' machines the part of memory containing executable code is pretty small compared to all the graphics data, sound and other parts that make up programs, and especially games. A single bit flipped in most of that data is never going to be particularly noticeable. At most it might be a pixel that isn't the right color, a blip in the sound that is never heard again, or something like that. The real problems occur when a bit flip lands in executable code that actually gets processed, or when it happens in the write cache for the storage system. In the first case it can result in a program crash, and in the latter it can corrupt a file as it is saved.
With ECC memory the computer will detect and correct single-bit errors, which should catch most single bit flip problems. The OS will also create an entry in the system log saying that something was fixed. This is normal for machines with ECC memory - at least if it only happens intermittently, and by that I mean no more than a couple of times a year. When it gets more common a memory chip might be going marginal, and if the system stops logging the reports with a note that they are so frequent it takes too much time to handle them, then it's time to really check the computer, because something is seriously wrong.
There are even more robust schemes than common ECC. They use the same method but are more advanced. ECC will detect and correct a single-bit error, and it can detect a multi-bit error; however, it can't correct a multi-bit error. If that happens most machines will just halt execution - at least that's what the machines I worked with did. Multi-bit errors are very rare, though, and a spontaneous flip causing multiple bit errors in a single read is very unlikely.
OK, this became long and probably went over things you already knew. But there are more people reading this, and it's important not to forget that a lot of people have never worked with ECC or know just what it is and how it works. I'm going to avoid the "how it works" part as this is already long, and that is something most people don't really need to know.
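For anyone who does want a taste of the "how it works" part: below is a toy Python sketch of a Hamming(7,4) code, the simplest form of the single-error-correcting idea that ECC DIMMs build on. Real modules use much wider SECDED codes (64+ data bits plus an extra parity bit), so this is purely illustrative:

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits. Any single flipped bit
# in the 7-bit codeword can be located and corrected - the same principle, scaled up,
# behind the "corrected a single-bit error" entries in an ECC machine's system log.

def encode(d):  # d = list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):  # c = 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck each parity group
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # non-zero = 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1         # flip it back
    return c, syndrome

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a stray bit flip in "memory"
fixed, pos = correct(word)
print(f"flip detected at position {pos}, recovered OK: {fixed == encode([1, 0, 1, 1])}")
```

Plain Hamming(7,4) only corrects one bit; the extra parity bit in real ECC lets a two-bit error at least be detected so the machine can halt instead of silently corrupting data, which matches the multi-bit behavior described above.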
If you plan to use any filesystem that heavily relies on RAM, you will REALLY REALLY REALLY want to use ECC.
ZFS, BtrFS, etc.
The difference won't be noticeable until an uncorrected fault (or series of faults) obliterates your files.
@@blahorgaslisk7763 You... loose... something?
You really should tighten that down.
I know how much those NVMe Icy Docks and the tri-mode controllers cost. If you are putting those in a system, the cost of how much power is used throughout the year is no object. ITX with a soldered CPU is ridiculous. You do not want to starve the system of enough compute or memory to make use of the storage; otherwise it might just as well be a direct attached storage drive for the real compute. I would go with a proper workstation motherboard, if not a server board, from the likes of ASUS, with 7 PCIe slots for your NICs, controllers, and graphics cards for decoding and acceleration. Pair it with a newest-gen 16+ core CPU and 64GB+ of memory. With this much storage, pairing it with value components would be the real waste of money. I'm also not sure this machine doesn't need a real-world use case, since it is beyond what some might consider a home lab configuration.
Too bad the cs382 isn't rack mountable.
How is everyone getting past the ECC subject? Just avoiding Intel altogether? I'm building out an ASUS W680 + i5-13500. I know it might be overkill, but I'm curious to hear what others are using as a base for a 4K-ripping media server.
Damn I have absolutely no idea whats going on. 😢
Can you do a rack mount build? 1U preferably, please.
It would be nice if a 10GbE and HBA combo PCIe card existed.
I think LSI and Adaptec (showing my age a bit here) did this over SFP. Will check, but it might have just been a 1x 1Gb OOB port.
@@nascompares 🙌 thanks for looking into it
Hmm, 8x 24TB HDDs, 8x 8TB 2.5" SSDs, 8x 8TB NVMe drives and dual 10GbE... would be hella expensive but oh so sweeet.
This is Robbie from the future. Currently like 75% through pt2 of this video....DEAR GOD THE FAN NOISE
@@nascompares rofl... it's a SERVER not a teddybear to snuggle up next to while napping :D
THIS BABY COULD MAKE METALLICA BANG A BROOM ON THE CEILING AND ASK TO KEEP IT DOWN!
@@nascompares better fans? :D
Dual 10GbE is too slow for this.
Icydock as always is wayyyyyyyyy too expensive - all their stuff has ridiculous margins.
I really can't recommend the CS380, it is the worst design I've ever encountered. First, the drive fans blow directly into a solid side panel, and the cage itself is almost solid, which results in almost 20C of drive temp variance. Also, if you have warmer components (as I did with a Supermicro server board) and you try to mount fans to the side panel, they overlap the HDD cage ones, so you can only run one or the other. The result is horrible airflow, and now I've had to RMA that very expensive MB. I went back to my old mATX MB and CPU idle temps were 58C. I then decided to stick with mATX but swapped to the CS382, and it was literally night and day - idle temps are now 33C, exact same hardware. 20C is the difference custom watercooling makes, that's how bad the airflow is.
Okay, and you don't specify what the temperatures were, just the amount of variance.
Can you explain, in simple terms, how a CPU idling *more* within spec is better than a CPU idling *well within spec*?
Certainly, you don't think cooler = WiLl LaSt LoNgEr, right?
@@tim3172 Temps went from approx 58C running TrueNAS Scale (essentially idle) and throwing multiple CPU-hot codes within IPMI, to now running 36C in the CS382 - that is on my ASRock mATX board. My X11SPI-TF is the RMA; as it is ATX I obviously couldn't put it in the CS382, and it was getting to 70C on the PCH. If they moved the fans back 3 inches they would blow straight onto the heatsink and not foul the HDD fans!! But as mentioned, you can't fit standard fans without removing the drive ones. Also the HDD temps were all over the place, some in the high 20s C while some were 45C - not great for 24/7, and excess heat does kill components quicker. The X11SPI-TF was bought as new (although he may have lied on eBay) and it died less than 6 months later. By all means buy it if you like it, or if you have it keep using it, that's just my experience - it's the worst-designed case I've come across in 20 years of building PCs.
The answer is to never use a case: attach everything to a flat board with some French cleats and hang your system on a wall in your basement, somewhere out of the way. Come back once a week with a can of air to kick the spiders out. Wasting cash on cases and lights is crazy!
FAIR WARNING... if you build large arrays with consumer-grade SSDs, you will get SSD controller burnout, aka dead SSDs, usually without warning.
I mean, the heaviness of write and rewrite plays a pretty big part too, but I see your point.
Source: it happened once to this person using one (1) specific controller.
Those ITX motherboards are so NOT the optimal choice for an ATX storage server it’s not even funny.
True, but ITX boards are smaller, so more cooling space; they often arrive with SoC mobile processors pre-attached and in and of themselves have smaller cooling needs. Sure, an ATX would rock, but so would he scale. Just trying to maximize space a little, though you are totally right, MATX would open the floodgates for H/W!
@@nascompares None of those things are relevant to the purpose of putting the largest amount of storage possible in a desktop computer... the title of this video.
"so would he scale" Do you mean... so would the scale? ATX is a standard desktop form factor... like... what?
You chose an ATX and MATX case as examples and then go on to explain that the... motherboards designed to fit them... are too big for the cases (?).
mATX? Disgusting! I have a habit of reusing my desktop's components for servers when I upgrade, and I always go for ATX mobos for all the extra PCIe slots they've got. I'd never pay for an mATX board.
Really, NAS should be removed from the title. This thing is a borderline SAN, never mind storage server. This is way past NAS territory.
NAS is Network Attached Storage, a function a server can perform.
SAN is Storage Area Network, which is a network of NAS devices (i.e. servers performing the NAS function.)
There is no arbitrary, gate-keeping, upper limit to the storage size or complexity of a NAS device until it interacts with other NAS devices to function, at which point it becomes a SAN.
"A Storage Area Network (SAN) is a network of storage devices..."
Stop. Think. (don't) Post.
The CS382 is too small for a full ATX motherboard!
The CS382 is too small as... the Corsair 1100D is too big?
Are we doing analogies now?