"It was pretty expensive and a lot of work" for a budget home server. Really like the video. I have 50TB in my home server I wonder what's below budget. Keen to see further upgrades to this server
You can buy recertified drives for an even lower budget. I record RAW video files that fill up my available space quickly so now I'm looking at 200TB or so file server. If you need to max out the storage per drive then there's finally available the Seagate Exos X24 24TB drive.
The 3.3v pin isn't to prevent shucking. Those externals actually use that pin to hard reboot the drive if it freezes for some reason. A lot of NAS systems also support that pin to do the same thing. It has a purpose... it just isn't a widely used option in desktop systems, so they don't usually support that pin.
9:18 WD doesn't do that to discourage people to shuck the drives. It's an actual enterprise feature, when you have multiple servers with 100s of drive each, you don't want them to spin up all of them once, with that pin number 3, you can schedule drives to spin up in order, so that you don't have massive power spike trying to turn on your servers by all the drives trying to spinning up at once.
@@raidone7413 many nas and enterprise hardware boot drives(and other devices with high load on the PSU/s ) one at a time or by pairs of them ,to avoid exactly the same ,power spikes are a dangerous and real problem due to power supplies features like overcurrent protection and overload protection. imagine the power spike if you boot 25 drives @15k RPM at the same time... . On the other side ,if they really would like to avoid shucking ,belive me there are easier ,cheaper and better options like firmware checks or propietary connectors instead of standart sata port on drives. in fact i shucked 4*8TB wd drives and i did nothing about the 3.3 pin ,cause my ASUSTOR nas use the drives without any mod :)
@@raidone7413 Most of the bigger WD external drives are just using their enterprise internal drives. They used to not even replace the labels and were reds before switching to the white label like 3 or 4 years ago. Also the 3.3v pin feature just isn't compatible with most consumer brand PSUs, that's the reason they don't spin up.
@@raidone7413 This is the same reason why corps like google are making their own way to boot up their linux machines. cuz it take ages for each to spin up and bring to life.
As an enterprise storage admin for a global media company, when it comes to determining how many parity drives you want, you also want to take into account rebuild times and the potential to have another failure while you are rebuilding. The larger the drive, the longer the rebuild time.
Yes, given 200MB/s-240MB/s read/write average like HC530(14TB), read/write all content on single drive will take about 1 day in full capacity. For HOME NAS, due to the number of drives, the rebuild process can be quite long. EC based solution like minio is much better than RAID based solution but I didn't see much NAS package provide that option.
I do like the option to have dedicated parity drives (unraid, snapraid) which saves power and makes sure that you can restore individual disks if you had too many failures. As long as you don't need the extra speed of classical distributed parity array. Good option for home server use.
@@kyu3474 Lawrence Systems has a good video on zfs caches, boils down to this: cache in ram has a order of magnitude lower latency than other options. You should start thinking about ssd/optane cache only if you get low hit rate for you ram cache. And as always it all depends on your use case, if you pull random video files and you watch them once cache won't help you much. If you run vm's or docker containers I would first max out ram and only when hit ratio drops would add l2arc. BTW Matt: did you pick TrueNas Core over Scale for a reason?
10GBit is to expensive, there are like $20-$25 2.5GBit network cards on PCI-e x1/USB3 which work with CAT. 5e cables. I don't think that any modern HDD will exceed ~300MiB/s even in RAID/ZFS.
You can get rid of the front I/O errors by grounding the sense/ID pins on the motherboard headers. I'd suggest soldering small wires to the backside of the board under the header if you actually want to use the ports for your case's front I/O. If not, then just use some jumper wires to attach to the pins directly on the header. For the USB 2.0 header, short pin 10 to either pin 7 or 8. For the USB 3.0 header, short pin 10 to any of the following: 4, 7, 13, 17. For the Audio header. short pins 2 and 4. For the 1394 Firewire, HP used a proprietary header, so you'll need to either plug in an HP front I/O module and stuff it somewhere in the case, or search the forums to see if anyone reverse engineered the pinout so you can know which pins to short. As for the fans well, just add a couple case fans and use those headers.
You're a life saver - I've been looking for this information desperately Edit: Some more research yielded this video, which is also useful - ua-cam.com/video/c8G97FlI2QA/v-deo.html
Hp proprietary IO is the wrench in this build. Also, a thermal sensor is present on the HP Z workstations that need to remain plugged into the mobo for transplant viability.
Interestingly enough, Western Digital is currently (April 2022) pushing out a lot of both Red Pro and Gold 16TB drives for $299.00... they must be getting ready to release a newer, larger capacity drive and need the warehouse space.
@@quazar912 Yes they have a different design for drives that is incapable of being used in a raid array or Nas. They literally got caught using inferior drives without telling customers about it. After that they “promised” that they would keep the Pro line as the only line with raid and Nas compatibility.
@@tjhana I think it’s only the pro they kept the quality of. Look up “SMR drives”. And read people’s experiences and horror stories. They have a very slow transfer speed once you get to a certain amount of data. It’s like they will seem fine but then take 14 hours transferring data that should have taken 4 hours. They can’t be raided or used in a NAS as a result. Only the Pro is the original quality.
This video was worth liking just for the PC-Cleaning-Jutsu. But on a more serious note, it's nice to see people doing practical builds that a wider audience can make themselves. Great work.
building a NAS as I watch this; amazingly in the exact same case also bought used local for the same reasons as you. I love the component selection, like you said you're way more about it than the big boi yters. Was able to snag eight 4TB Seagate 5900RPM Skyhawk drives for $35 per, paired with an X299 MSI Raider board ($120 used local), and a $200 ebay i9-7900X cooled by a $40 ThermalRight FC140 that performs neck to neck with an NHD-15. Nowhere near your 84TB but your hoarding habit is worse than mine. Subbed, hype for more!
The power disable pin is part of the sata spec. The general intent is by using supporting controllers to remotely hard reset or shutdown drives in the same way pulling the power would. Drives intended for retail do not have the feature as they are directly connected to the power supply, thus the pin would be always active and the drive wouldn't turn on. Drive manufactures make both drives with and without power disable, and distribute to the appropriate channels. When these portable drives are made whatever is available is used. This is why you can end up with an assortment of drive types, and in your case some with and without power disable. Don't know how they choose the drives, maybe what ever if coming off the line, or the one there is excess of, or even the ones that are slightly under preforming, but the ones with power disable are likely better as they where not originally destined for the retail market. Shucking already voids the warranty, if they cared to stop the practice they would just use a custom firmware that would only work with the adapter or even go further and pair the two as that would defeat flashing a normal firmware to the drive.
The real problem isn't shucked WD drives powering off, it's damn PSU manufactures STILL putting a 3.3v rail on their SATA cables. I have no idea why almost all PSUs still do it, when it has been depreciated. First thing I do when I get a new PSU (after testing) is remove the wire from all the included SATA cables. I hate getting screwed by a cable with 3.3v when I don't realize right away the damn cable is just disabling the drive. :D
@@PeterBrockie I don't know, but isn't it there for compatibility reasons? I would guess that some old hardware still requires this 3.3V rail. Just because modern devices (including drives) step down the voltage onboard doesn't mean it's not part of the spec or it's not necessary for some devices.
@@comedyclub333 Molex connectors use 12v + 5v. When SATA came around I think the idea was to add 3.3v since a lot of ICs ran 3.3v instead of 5v at the time, and they could just directly run off that rail. But what ended up happening is essentially zero devices used the 3.3v rail because ICs got lower and lower in voltage (modern CPUs are under a volt) and it was easier to just use a local power regulator to change the 5v into 1.2v or whatever. CPUs use 12v input and your VRM lowers it to the ~0-2v (depending on hardware). They changed the spec into making it a shutdown pin simply because it was never actually used in drives. I have no idea why PSUs still have it other than they are often based on older designs (I don't think even motherboards use 3.3v these days).
@@PeterBrockie 95-99% of ICs are 3.3v nowadays. CPUs & GPUs are forced to use lower voltage because there are billions - trillions transistors inside that all add up to hundreds of amps current consumption. The higher the voltage, the more power draw and thus heat, less efficient they are. It is only more expensive to produce lower voltage logic ICs because it requires much smaller physical transistor sizes and expensive fabs.
Dude! You have inspired me!! I have watched this video before but didn't realize the motherboard. I have had a 420 system for 10 years now. I upgraded the CPU to e5-2660(?) and up to 24gigs. After transferring it to another case, I did have to get the adapter cable for the power. I have found that if you jump certain pins on the front USB, firewire, etc., those errors go away on startup. If you want the fans not to ramp up, pins 11, 12, and 5 are a thermal sensor. I found mine in the original case and ripped it out, soooooooo much quite! Thanks again for your hard work.
Count me in for requesting the upgrades. I went with the same drives on Black Friday, could only afford three since I was still Christmas shopping. I went with the Molex to SATA solution since my NAS is at its core an archiving file server. It won’t be on for extended periods. I also believe that these drives have different firmware from the Reds, and chose to just using Basic as opposed to RAID. I have no idea how these would stand up to rebuilding a pool.
I was never a fan of shucking externals to rig up a server. I just buy used sas drives and LSI controllers from ebay.. works well. This last go around I bought new HGST/WD sas drives, but used an intel i7 11700 and B series motherboard.. added 10gb nics to all my servers and torrent box, main pc, etc. Couldn't be happier with it. Fast and stable. The 11700 I bought for transcoding without a gpu, and the IGP is more than enough.
Matt, you forgot about other content creators that get help from sponsors and get a fully build server with ALL the drives that will populate the server. And they also get a lot of MB since tey are sponsered and in some cases they don't even monetize these pices of hardware. So in all they get a ton of freebee they have it easy to build what ever they want. Unlike you or I, we need to buy these parts on a small budget. And It's not easy, I know that!
I also have a HP z420 and you need to tell the viewers about how to swap it properly. The system won't boot up without a video card at all. Also you need to short out some pins in the USB 2.0 header, USB 3.0 header and the FireWire header. CPU fan header 5th pin needs to be jumped to the 1st pin. Also fans have to be connected to the memory and rear exhaust headers. It will only boot with all these conditions satisfied. USB 3.0 header was especially hard and i had to solder the 2 pins under the motherboard.
3.3v pin is power disable feature from WD and it is not to prevent shuck. It is hard to just mask the 3rd pin as it is very small. You can actually mask all the 3 pins from left not just the 3rd one, this is much better to do. Standard 3M electric tap should do the work.
TrueNAS uses the ZFS file system, which is by design hungry for RAM! In my job I built a professional NAS with ZFS, and with ZFS the rule is: throughput problem -> add RAM; speed issue -> add even more RAM! The classic ratio is 1GB of RAM for 1TB of net data in the NAS, so in your case 64GB is a minimum, and the OS needs RAM too; not to mention the caches on very high-performance professional SSDs. ZFS uses read and write caches all the time (if they are not on dedicated disks, it uses the pool), and when they are installed on high-performance SSDs, ZFS will "abuse" the higher speed of the SSDs to boost its performance.
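For anyone sizing RAM for a pool like this, here is a minimal sketch of the 1GB-of-RAM-per-1TB rule of thumb mentioned above; the 8GB OS/base allowance and the example pool sizes are illustrative assumptions, not requirements.

```python
# Back-of-envelope ZFS RAM sizing using the common "1 GB RAM per 1 TB of pool" rule of thumb.
def suggested_ram_gb(pool_tb: float, base_os_gb: int = 8) -> int:
    # base_os_gb is an assumed allowance for the OS and services; tune to taste.
    return base_os_gb + round(pool_tb)  # ~1 GB of ARC headroom per TB of data

for pool_tb in (42, 84, 168):
    print(f"{pool_tb} TB pool -> roughly {suggested_ram_gb(pool_tb)} GB RAM")
# 84 TB pool -> roughly 92 GB RAM, which is why 96-128 GB is a comfortable target
```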
Couple of money-saving suggestions I'd make if you're considering building your own NAS:
Older server CPUs generally idle at much higher wattage. If you intend 24/7 operation, seriously consider a modern i-series or a much newer Xeon or Xeon-D instead. You might save money long-term. An older Xeon-E might double your energy bill over newer chips.
Don't consider ECC a requirement. It really depends on how much you care about what you're storing. We're talking Act-of-God levels of paranoia about your data to consider ECC a must. My advice - if you're asking yourself whether you need ECC, you probably don't.
There are free Linuxes that are not much more complicated than the paid-for NAS options. Consider Fedora Server, for example.
People rightly talk up ZFS, but mdraid is an incredibly stable, mature and flexible system.
"Older server CPU's generally idle at much higher wattage. If you intend 24/7 operation, seriously consider a modern i-series or much newer Xeon or Xeon-D instead. You might save money long-term. An older Xeon-E might double your energy bill over newer chips." Whilst TECHNICALLY true, the reality is that at the end of the day, the increased cost of the newer processors vs. the money that you'd save on idle power, in many cases, just isnt' worth it. As stated, he can literally buy an Intel Xeon E5-2660 (V1) 8 core CPU for < $10. Depending on how long you're idling for and the system configuration, a DUAL Xeon E5-2690 (V1, 8-core, 2.9 GHz base clock, max turbo 3.6 GHz, max all core turbo 3.3 GHz) will idle at 112 W. Assuming that you split the power consumption directly in half, the idle would be at around 56 W. (Source: www.anandtech.com/show/8423/intel-xeon-e5-version-3-up-to-18-haswell-ep-cores-/18). I couldn't find the idle power consumption data for the E5-2660 (V1) readily. My HP Z420 which has the E5-2690, but also a GTX 660 in it, and 128 GB of DDR3-1600 ECC Reg RAM (8x 16 GB) idles at around 80 W, but I am sure that a decent portion of that is just the RAM and the GTX 660. So, let's say I assume that the idle power is somewhere around 50 W. 50 W * 24 hours = 1.2 kWh * 365 days/year = 438 kWh/year * $0.10 USD/kWh = $43.80/year. If you halved the idle power consumption, you'd be saving $21.90/year. And then now, depending on which processor you want to replace that with (in terms of a newer, more modern architecture, between the motherboard, the CPU, and the RAM) how many years would it take before you'd break even with the increased costs? So it really depends on what you are trying to replace it with. I mention this because I've ran the analysis multiple times over the years to modernise my hardware and it always come back to this very same question. "Don't consider ECC a requirement." That depends on the motherboard and/or the CPU. Some motherboard-CPU combinations actually REQUIRE at least unbuffered ECC just to POST. I forget what the HP Z420 motherboard requires because I've always ran it with ECC Registered RAM because I don't want there to be any stability problems on account of the system expecting there to be ECC, but then the memory modules not having it. "There are free Linuxes that are not much more complicated than the paid-for NAS options. Consider Fedora server for example." TrueNAS is free. As for pre-built boxes, that depends. There are some things that are easier to do on my QNAP NAS systems that, whilst you can do it on Linuxes and/or TrueNAS, it takes more time and effort vs. clicking a button (e.g. setting up the NAS to be a Time Machine backup and/or installing applications where I can backup the photos and videos from iPhones). This isn't to say that it can't be done. But that also depends on the answer to the question "what is your time worth?". For me, whilst I can probably deploy those solutions on Linux, it would take me a lot more time than what it would be worth it for me to just buy a pre-built NAS system for that (minus the hard drives). Like another thing that my CentOS install HATES right now is the fact that the SMB ACL is in direct conflict with the NFS ACL. (i.e. 
my other CentOS compute nodes and access the NFS exports over RDMA without any permissions errors, but when I tried to make the same mount points SMB shares, my Windows clients perpetually complain about it not having write permissions despite the fact that everything has been set up correctly (or there is nothing that jumps out to me as being wrong, based on the various Linux Samba deployment guides that are available for CentOS). Conversely, I have both NFS and SMB shares on my QNAP NAS units, and I don't have this same permissions problem. So....*shrug*....who knows. And I don't really have the time to pour more time into trying to fix it, so the "dumb" answer is that if I need to transfer the data, I will send it to my QNAP NAS systems from Windows and then have my CentOS compute nodes and cluster headnode pull it down from the same QNAP NAS system over NFS. It's a roundabout way of getting the data transferred, but the few seconds or few minutes more that it to go around is less than the time it would take for me to try and actually figure this out properly. (And this is even with SMB guest enabled on the SMB share.) "People rightly talk-up ZFS, but mdraid is an incredibly stable, mature and flexible system." Yes and no. mdraid, to be best of my knowledge, does not detect nor correct for silent bit rot.
@@ewenchan1239 I can't say I disagree with anything you say. Note I was careful to qualify everything I said with "might" and "consider". I have no prescriptive notions of what others should be doing. Was just riffing on the theme of budget. Couple of points: You should try plugging in the domestic kWh average of - for example - Germany (~$0.34-0.35) into your sums and see how things quickly change from a price point of view. I recently replaced an ancient Athlon-based home server that's been on duty for 12+ years with Alder-lake. I want 24/7 operation but it sees most of its use during the day, cores will spend most of their time in C10, with whole package using about 2W. It will save approximately $80/year on January energy prices - no doubt even more after recent events. With a bit of tweaking managed to get the whole system to use < 30W idle. If it proves as resilient as the old Athlon system, it'll be worth the extra upfront cost.
@@0w784g "You should try plugging in the domestic kWh average of - for example - Germany (~$0.34-0.35) into your sums and see how things quickly change from a price point of view." I understand your point here, but I would counter with the price delta between a modern CPU vs. an older one might be relatively proportional to the changes in the rate such that my general point, the "slope" of the curves, relative to each other, may change a little bit, but the overarching jist of my point still remains the same. At the end of the day, it's still going to boil down to the same question: how many years of savings in electrical costs will it take to break even against the incremental cost of a newer, more modern CPU/motherboard/RAM/hardware? And the answer to that question depends a LOT on what you pick as the old hardware vs. the new hardware in the sense that if you are only migrating from one year or one generation behind, then the relative advantages of a decrease in the (idle) power consumption may not be as significant if you were to compare it to say, migrating from hardware that's 5+ years old/5 generations older. But again, that kind of a migration has an incremental cost associated with the more modern hardware and I've ran this calculation many times over (it is one of the reasons why I haven't moved over to an AMD EPYC nor the AMD Threadripper nor the Threadripper Pro platform yet) because my old quad node, dual socket/node, 8-core/16-thread per socket system (64 cores, 128 threads in total, with 512 GB of RAM) pulls 1.62 kW. And whilst the AMD EPYC 64-core/128-thread processor would be more efficient being at only 280 W TDP, it's also ~$5000 at launch, excluding the cost of the motherboard, RAM, chassis, etc., and let's assume, for a moment that the AMD EPYC 7713P total peak system power draw is 500 W. This means that my old machine @ 1.62 kW, pulls 38.88 kWh/day vs. a new system at @0.5 kW, would pull 12 kWh/day. For a year, it would be 14191.2 kWh vs. 4380 kWh. At $0.10/kWh, it would be $1419.12 /year in electricity costs vs. $438/year, or a net difference of $981.12/year in savings. Therefore; the TARR for the CPU alone is 5.096217 years. And you can make it agnostic to currency by plugging in your own values into the calculator to calculate what the TARR is for your specific scenario that you want to analyse. 5 years from the time that you built and upgraded the system, there would always be something newer and better and more efficient, and you're running this calculation again and again and again. The "slope" of the TARR may change depending on region, but again, my general jist remains the same. "I recently replaced an ancient Athlon-based home server that's been on duty for 12+ years with Alder-lake." I just RMA'd my Core i9-12900K back to Intel because it failed to run memtest86 for longer than 27 seconds and got a full refund for it after about 3 months in service. " I want 24/7 operation but it sees most of its use during the day, cores will spend most of their time in C10, with whole package using about 2W. It will save approximately $80/year on January energy prices - no doubt even more after recent events. With a bit of tweaking managed to get the whole system to use < 30W idle. If it proves as resilient as the old Athlon system, it'll be worth the extra upfront cost." But again, how much did you spend, upfront, to save said $80/year in electricity costs? 
How many years will it take before you break even when you divide the incremental cost by the annual savings in electricity costs? I'm not saying that you don't save money. What I am saying is: "is the TARR worth it?" (And an Athlon-based server - wow, that's impressive!!! I have systems that are that old, but none of them are currently in use/deployed right now. I keep mulling over the idea of re-deploying my old AMD Opteron 2210 HE server because the Tyan S2915WANRF board has 8 SAS 3 Gbps and 6 SATA 1.5 Gbps ports built in, but then again they are only SAS 3 Gbps and SATA 1.5 Gbps, which, for a "dumb" file server, would be plenty; those processors also have ZERO support for virtualisation, so its usefulness would be limited, and back when I had that system running it would easily suck down 510 W of power, so it's not super efficient either. So believe me, I understand your point. But that's also why I have two QNAP NAS units using ARM processors, one using a Celeron J, and a TrueNAS server running dual Xeons - and I've been running this same calculation to see if I can update that server to something newer, mostly so that I can run more virtual machines off of it, which increases the overall system utilisation.) I WISH the newer processors had more PCIe lanes, because then they could be lower-powered but still super capable for home server duties. The last such generation that I have, sporting the Intel Core i7-4930K with 40 PCIe 3.0 lanes, is great because I can put a 100 Gbps InfiniBand network card in there, a GPU, and also a SAS 12 Gbps HW RAID HBA, and it isn't super power hungry. A lot of the processors newer than that have fewer PCIe lanes.
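To make the break-even reasoning above easy to rerun with your own numbers, here is a minimal sketch; the 1.62 kW / 0.5 kW / $5000 / $0.10-per-kWh inputs are the figures assumed in the comment, not measured values.

```python
# Rough break-even ("TARR") estimate: years of electricity savings needed to pay off
# the upfront cost of newer hardware. All inputs are assumptions you should replace.
def breakeven_years(old_kw, new_kw, upgrade_cost_usd, usd_per_kwh=0.10, hours_per_day=24.0):
    annual_kwh_saved = (old_kw - new_kw) * hours_per_day * 365
    annual_usd_saved = annual_kwh_saved * usd_per_kwh
    return upgrade_cost_usd / annual_usd_saved if annual_usd_saved > 0 else float("inf")

# Example from the comment: 1.62 kW old cluster vs. an assumed 0.5 kW EPYC build,
# counting only the ~$5000 CPU, at $0.10/kWh.
print(f"{breakeven_years(1.62, 0.5, 5000):.1f} years")                    # ~5.1
# The same comparison at German-style prices (~$0.35/kWh) breaks even much sooner:
print(f"{breakeven_years(1.62, 0.5, 5000, usd_per_kwh=0.35):.1f} years")  # ~1.5
```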
I've been looking for a server video like this and haven't been able to find any good ones. This is exactly what I was looking for. Thank you for the content. It was a super good video!
HP workstation boards like the Z-400 and Z-420, etc. are notorious for hanging on boot if you don't have all the hardware connected that was originally in the workstation. On the fan headers, you can jumper one of the pins to ground to fool it into thinking the proper fan is connected. On the front USB, and front 1394 error, it's easiest to keep the front USB/1394 cluster and cables, and connect it and just stuff it in the case somewhere. I used a Z-400 board in a nice build with a Lian Li all aluminum full tower case, and ran into these very same issues. With the Z-400, I found that the m/b tray from the original workstation is the best way to mount the board into an alternate case, as it already has the I/O shield integrated into the tray, and the mounts for the CPU cooler, and it will make all the ports and slots line up where they should be, in a generic case.😉 Hope this helps.
I just built a NAS myself. I used a MicroServer Gen8 for it as I could get it relatively cheap and I wanted remote management and a compact design. I used the SCALE version of TrueNAS as it runs on top of Linux and you can use Docker containers and VMs there. And if I find another server of the same model for cheap, I could even connect them together. Also, I can recommend using a Ventoy stick if you have a bigger stick and want to install different operating systems from it, as it gives you a menu where you can select one of the ISOs you put on the drive beforehand. There is a partition where you can simply copy the ISOs onto and delete them later. No need to burn them to the stick and wait for that process to finish.
In case anyone is wondering: you can't sell the WD Elements enclosures as empty hard drive enclosures because they only accommodate the original drive that was inside. This is because the rubber grommets are unique to the original HDD and won't exactly fit any other, preventing you from successfully fitting a 3rd-party drive in the enclosure. The controller board itself will work with other drives, at least it did in my case (8TB WD Elements). But it's not really too useful if you can't fit non-original drives into the enclosure.
Great video Matt! I definitely recommend Truenas. ZFS Pools are mainly what I was after. It’s been rock solid on my 12 bay Supermicro server. Currently have 8 WD 8TB reds in raid z3.
@@SupremeRuleroftheWorld Could you elaborate on that, please? Why would TrueNas require "unlimited funds to buy boxes of new drives every time"? Every time what?
Quite a good video, and the first one I saw that addresses the active-low pin on salvaged USB hard drives. I've built a NAS myself a few months back with a "Node 804" case which supports 8 hanging HDDs in a small form factor (though I had to print an additional fan holder to go between the HDD cages because the disks got too warm). In the end I installed 9 HDDs for storage and one M.2 for the OS. Since our electricity is the most expensive in the world (0.30-0.35 €/kWh -> ~0.35 USD/kWh) I can't let it run 24/7, so I configured it with WOL (Wake-on-LAN). If I need the storage, I simply send it a "magic packet", and when I'm done I execute a small SSH script which logs into the machine and shuts it down. A bit of work in the beginning, but easy to use once it's all configured.
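For anyone who wants to replicate that wake-on-demand setup, here is a minimal sketch of a Wake-on-LAN "magic packet" sender; the MAC address and broadcast address are placeholders, and WOL still has to be enabled in the NAS firmware/NIC settings.

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
    """Send a standard WOL magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_hex = mac.replace(":", "").replace("-", "")
    payload = bytes.fromhex("FF" * 6 + mac_hex * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

# Placeholder MAC/broadcast; substitute your NAS NIC's MAC and your subnet's broadcast address.
wake_on_lan("AA:BB:CC:DD:EE:FF")
```

Shutdown can then be the short SSH script the comment describes, e.g. `ssh root@nas 'shutdown -p now'` on FreeBSD-based TrueNAS CORE or `poweroff` on Linux.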
My "server" is the same HP Z420 workstation. It is a very good choice. I have 80GB of ECC RAM since it has an Xeon processor. I also have a SAS controller to handle my NAS drives.
That is a great and silent case. I happen to have one too and wanted to mention that there is a dust filter on the bottom as well. Didn't see you pulling it out during the clean-up, so you might want to check how that looks. If it is as dusty as your front panel filter was, I bet your power supply won't get much airflow ;) You can pull the filter out from the front side. Just grab the "bottom" of the case (below the door) with your fingers and slide the filter module toward yourself.
TrueNAS uses ZFS as the file system for the data drives. It is great overall for combating bitrot. TrueNAS also likes using ECC RAM to help combat bitrot as well. Look it up. It is some interesting reading.
For the RAM, if you're using TrueNAS and ZFS, I think the guideline is 1 GB of RAM per TB of raw storage. So, an 84 TB server should have at least 84 GB of RAM...or, more practically, 96 to 128 GB.
That's the guideline most people use. I saw some guidance from an iXsystems employee (they develop TrueNAS) where he said to just max out your RAM if you can. For something like this, or most home use cases, the SSD cache drives aren't going to be that beneficial. With that said, he'll be okay with 32 or 64GB; he just won't have the greatest performance.
Was looking at buying an off the shelf storage server, but wanted to see if a DIY project was within my scope. Thanks to your insight and excellent presentation skills, looks like I will go down the DIY path and save some major $$$. Thank you so much!
Yes, off-the-shelf NAS servers are overpriced and low-spec. The worst part is that they are limited and expensive when it comes to extending drive capacity. A 4-bay expansion unit can sell for $500 or more, which can be almost 60% of the cost of the hard drives themselves. This is ridiculous. Rack-mount chassis are too loud to be used as a HOME NAS. TBH, any NAS vendor should provide an option to add a new drive bay for less than $20. That fits the nature of a home NAS (incremental/annual spending without much planning).
Damn, Matt ... You are freaking hilarious ... 6:18 ... I loved that brief intermission of humor, creativity, and ingenuity ... Glad I subbed. Haven't watched much lately, but I'll come back & keep up.
I have the same chassis, bought some years back. You can add up to two extra disk modules in there with room for three drives each, taking some space for PCIe, obviously, but that may be worth it. I'm using an LSI SAS controller for my drives, in addition to the onboard SATA ports. All drives are SATA, though, but SAS controllers support SATA as well as SAS. PS: You can move those rubbery gaskets around if you slide them aside and pop them out to fit all screws. At least, that has worked with all the drives I've tested so far.
Great video. I have a PC that I built around 2012 in a big Cooler Master tower case with an i7 CPU that was very fast; I just stopped using it about a month ago. It was using 10 drives for 40TB. Now I am thinking of repurposing it as a NAS setup like yours.
Also when you rip apart an external drive do not throw away the pieces from it because you can use those as a hard drive recovery tool. They are easy to break so it's nice to have multiple ones laying around. Instead of buying a new toaster half the time I just use these. No I'm not talking about bread for half of you that don't understand what a toaster is.
FYI, shucking isn't always the cheapest option. For me, there are 20 TB or 22 TB drives around for ~15 CHF per TB, which is about the same as your drives' cost/TB. So with four of those you could do a RAID 10 and be golden as well.
You're thinking that two parity drives gives a very low probability of data loss, but that's a mistake when all the disks are large and new, as in your case. The reason is that when disks are a lot alike (same make, model and age) they tend to fail around the same time. When the first drive fails, of course you replace it with a new one to rebuild the array, and that adds stress on all of the other disks... but those disks are not small, 14 TB takes a long time to churn, and since the other disks are also very near failure (as demonstrated by the one that has failed already), you often get a second failure... and now you're caught with your pants down if yet another failure occurs. That's the drama that many people experience and fail to understand... "but I used 2 parity disks!" they say.
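As a rough illustration of why the correlation matters, here is a toy sketch that treats drive failures during the rebuild window as independent events with an assumed annualized failure rate; under that (optimistic) assumption the two-parity risk really does look negligible, and the comment's point is that same-batch, same-age drives violate exactly that assumption.

```python
# Toy model: probability that more drives die during a rebuild, assuming INDEPENDENT
# failures at an assumed AFR. Real same-batch drives fail in a correlated way, so this
# understates the risk -- which is the argument being made above.
from math import comb

def p_at_least_k_more_failures(n_remaining, k, afr=0.015, rebuild_days=1.5):
    p = afr * rebuild_days / 365  # chance a given surviving drive fails during the rebuild window
    return sum(comb(n_remaining, i) * p**i * (1 - p)**(n_remaining - i)
               for i in range(k, n_remaining + 1))

# Example: 8-wide RAIDZ2, one drive already dead, assumed 1.5% AFR, 1.5-day resilver.
print(f"P(1+ more failures): {p_at_least_k_more_failures(7, 1):.4%}")   # ~0.04%
print(f"P(2+ more failures): {p_at_least_k_more_failures(7, 2):.6%}")   # vanishingly small *if* independent
```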
Would love to see a video about setting up things like Nextcloud on the server, as well as a game server, and how you set up your modem to allow traffic from outside while keeping your data safe!
The fact this guy refused multiple 18 tb drives because he wanted to teach other people how to choose the correct parts for a cheap server earns my respect. Good job 👍
I love this case. I was looking for something that holds a lot of 3.5" drives and bought one for my editing machine. The price is kinda overkill for that old case, but there are fewer and fewer options for ones that hold more drives.
Great stuff, planning my own NAS soon and I'll consider that motherboard you chose (although I don't like the idea of using an adapter for the main board power, but it's probably fine). I would have gone with a better power supply personally, something running 24/7 storing a lot of data you want something real reliable. I'm not an expert in PSU's, but there's a PSU tier list on the cultists network that's quite well done and I'd stick to anything Tier A on there (Corsair RMX PSU's are a great option). Look forward to future updates on this server! (would love to see you show off a dual purpose, like running a game server alongside it as you mentioned) Oh and if you want a great SSD(s) at a great price keep an eye out for Best Buy Geek Squad refurbished Samsung 870 EVO's, the 500GB model sometimes goes on sale for $40 and 1TB model for $70. Despite the "refurbished" name they almost always have very little usage, highly recommended
I built a 6-drive FreeNAS server about 5 years ago using a base HP server that I got from Tiger for $200. The bad thing was it needed DDR4 ECC RAM and that was not cheap back then. FreeNAS is happy using 16GB flash drives for the OS, so I installed one inside on the motherboard USB connector and the other in an external slot so my OS is redundant. I did not have video files to back up, so I used six 3TB drives in a ZFS array and was configuring it 10 minutes after hitting the power button. That particular server uses a very proprietary display port that I could not get working on my monitors, so I threw in an old low-end video card for initial setup; after that everything was done via SSH over my network. The server interface over SSH is very easy to use on FreeNAS and it looks very similar to the TrueNAS screen. Back then it cost me about $950 including the hard drives, but it's been online since then with no hiccups. I have lost power a few times so I usually have to power cycle after that, but it always comes right back up like nothing happened. Maybe someday I'll need more capacity, but for now it's just fine for me.
I love how open source stuff is kinda timeless... 5 yrs with no issues is amazing. Lotta times I'll youtube a command and not realize till later the video is from 8 yrs ago but still relevant today.
@@calvinpryor This was a loss leader HP Proliant M10small tower server I got at tiger for $200 back then. The most expensive thing was the 16GB of DDR4 ECC ram. The 6ea 3tb red drives are all 5400 ,( don't need speed here and low speed drives are more reliable. I've updated Freenas a few times and have 2 or 3 power failures a year but the system always comes up clean with no errors in the logs except for the sudden power loss. This system has been as reliable as anyone could ask for
@@dell177 Did that version of TrueNAS support third-party apps (if any)? That's really the deciding factor on DIY/TrueNAS vs a Synology-type NAS... if I can get the same applications and functionality with TrueNAS, why would I spend the extra $ on their hardware? 🤷
@@calvinpryor I believe it does, but I've never looked into it; all I wanted was a safe place on my network for my files. I can connect to it from OS X, Linux, and Windows.
So the 3.3V pin issue isn't isolated to WD drives, and it's not there to discourage shucking them from external enclosures; it's literally a power-saving feature. When the drive is not in use, it has logic built in that will spin down the drive and put it in a minimal power state. This is why it uses the 3.3V rail vs. the 5V and 12V. This comes on out-of-the-box drives as well. The reason for using the Molex adapter is that it lacks the 3.3V lead entirely, so it bypasses this. Much safer than putting tape in with power...
Why does no non-European tech UA-camr care about energy efficiency? That thing easily pulls 150 watts idle... If you go with an ITX board + a new Gold-rated PSU, idle draw will be sub-10W without drives.
Pay attention people, this is called talking out the side of your neck. Who the fuck cares how much power it would pull idle with no drives in it? We build NASes to put drives in them. The fuck you talking about, my guy? Just saying random shit in hopes of likes. You know my car does infinite miles per hour when it's not cranked up. Smh
Well, if you want a PC to look at the internet and use Excel, that is a great idea, haha. If you want a computer that does literally anything else, haha bro come on.
I built a NAS a few years ago and used Proxmox because I couldn't get TrueNAS to host VMs properly. I used desktop hardware and it's been working well for quite a while (5-6 years, I think). But now I want to build a new one with more space and maybe server hardware. This has some good tips. Craft Computing is a good source of information on this type of build.
Great video, but I might suggest, if you have the budget, buying a more efficient power supply, as the NAS will probably run 24/7 and the power cost, at least in Europe, isn't worth it.
Went with a Synology DS1621+ after my DIY server decided the backplane should die out of nowhere. Upgraded the Syno with 64GB of RAM and stuffed in 6x 18TB Seagate Exos drives and of course 2x 2TB 970 Pro NVMe SSDs for caching. Far better solution, and I was blown away by the speed at which Synology answered my questions about the PCIe slot: within 45 seconds I had my answer and a contact email if I ever needed help with anything else.
I always wonder how much it costs to do an 8-bay, but for £1000 (or £300-600 used) you can get a Synology 8-bay NAS, most of its power use is just the drives, and it supports hot-plug. Most QNAPs have a video port and a normal BIOS; pull the USB DOM and replace it with, say, a 32/64GB DOM and install TrueNAS onto it (or buy a QNAP with the Hero OS that uses ZFS, but don't open it to the internet).
@@leexgx I turned a QNAP TS-469 Pro into an Unraid backup box. I desoldered the 512MB DOM, and in doing so it even saw the extra RAM I installed. 4x 18TB drives as an offline backup.
I built about the same thing, but I spent a lot more $$$ because I bought everything brand new. Bought 8x 6TB hard drives. Had a spare Ryzen 3800X and a spare 700W power supply, so I had to buy all of the other things: motherboard, memory, HBA, boot drives, and a cheap Nvidia 710. Housed the entire thing in an R5. The only thing different is that I built it on Proxmox, ran TrueNAS as a VM, and passed the hard drives through directly. All in all it ran me about $1500, which was about the same price as a Synology with the same drives. So I think I got a better deal: a better CPU and a system to build out VMs. I also got a better upgrade path if I ever want a better CPU.
The warranty doesn't mean s*** on an external drive when you're going to rip it out of that case anyway 😂 That's how I brought my laptop up to 4TB on a budget; I have a dual hard drive slot laptop. PS I like your build, and I have half the components to actually build this myself right now. I've never built or set up a NAS system, but I used to use one all the time, and now my home network has gotten so big that I need centralized storage for multiple computers to access the same software. I'm finally setting up my first NAS at home 😊
This looks like a good, reasonable high-core-count*, higher-memory option. If you can find the v2 version of this HP board you can go up to a 12-core CPU; the v1 board is limited to 8 cores. *compared with my current i7-6700K storage server. The extra cores and higher memory support will be useful to host VMs as well as storage, though going up to an LGA 2011-v3 Xeon is probably better for VM performance, but obviously higher cost.
I built out an extra Dell R720 with 192GB of RAM and 2 EMC JBODs with 30 4TB SAS6 drives, 120TB raw. It's a freaking 8U and works very well with a 10Gb SFP+ link to my Cisco switch. I have TrueNAS Scale on it as well. I use it to host iSCSI LUNs for VMware.
Hi Matt: Thanks for the video. I am looking to archive a lot of 4K video footage and am just now exploring storage options. So I really appreciate the info you presented in the video. Thanks again.
Yeah yeah, you should most def go for the 10Gbit option, interesting indeed. Prepare mentally to alter your pool setup to be able to handle writes at that speed, though.
This is great content Matt. I'm currently in the process of replicating your setup (more or less), but the core computing will be the same. I'm struggling to find mobo/cpu combos, so I guess I'll have to just buy an entire Z420 workstation (will save me on the PSU, since I can use its own). For the case, I also struggled - snagged a Corsair Obsidian 750D - only 6x 3.5 bays, but it's got 3 more 5 inch bays which I can adapt. I read up on the 12 and 14 TBs, and the consensus among shuckers is they run quite hot - Would you be able to share your experience on this topic please? Thanks for your work!
@@CharlesMacro So uh, it's been live for some time now - it's my main NAS, actually. I added a Quadro P400 card for Plex transcoding. I transplanted the mobo/RAM and CPU (E5-2640) from an actual Z420 workstation, which I bought second hand, into another case. You have to mind the transplanting, as it's finicky - you can only move over the mobo/CPU/RAM, as the PSU is non-standard and will not fit an ATX case - ua-cam.com/video/c8G97FlI2QA/v-deo.html He's using it with TrueNAS for storage mainly - I'm using it with Unraid (which is an additional cost) but somewhat easier to deal with. The supported CPUs for this mobo are quite old, and while they can deal with quite a lot, the energy consumption is a factor - I jumped feet first into this before the energy crisis; not sure I would do the same now.
Awesome cleaning ninjutsu. Kakashi would be proud. I would definitely be interested in seeing more from the home NAS story. I'm setting one up myself just for peace of mind for memories, games, etc. Great stuff, brudda.
At 3:10, the chip model... You said the chip model was an E5-1620, but you forgot to mention which one it was: there is the original, the V2, the V3, and the V4. The V1 and V2 take a different socket (LGA 2011) than the V3 and V4 (LGA 2011-3). The V1 chip used the Sandy Bridge microarchitecture (32nm, 2012 release), V2 used Ivy Bridge (22nm, 2013 release), V3 used Haswell (22nm, 2014 release), and V4 used Broadwell (14nm, 2016 release). Each of the four chips has a different set of clocks as well... This is all according to the app "CPU-L"...
You will save some time buying a 10G router. Direct card-to-card is a different use case; google this prior to doing it, as you need an IP from somewhere... Excellent video quality.
The use of old workstation boards is interesting, as there are so many to be had on eBay. It looks as though the Dell workstations are more proprietary, but it's good to see that the HP boards might follow some standards - I'll take a look at some of those.
Those fractal cases have two SSD trays on the back of the motherboard plate. I have one I'm in the process of turning into a Proxmox VE + NAS. If your trays are missing I think you can still get them on ebay.
Quick question: I’d love to pursue a build like this, however after having a hard drive and external hard drive fail on me in the past, I became super paranoid about trusting any hard drive or SSD. Even though these are each 14tb and you have multiple copies, how can you ensure these will last you and retain all the media you plan to put on them? Great video!
@@colekter5940 wow that’s absolutely amazing! I’m going to definitely look into setting this up, as I’m a filmmaker and if I’m going to continue being one then I need to invest in this system haha. Thanks for the information :)
Remember, a case itself doesn't have any moving parts, so being old doesn't mean it becomes less capable. Not sure why that comment needed making, but yes, the Fractal Design R series are excellent cases for holding plenty of HDDs. It was my choice as well. I personally did not have a problem paying even $100 for one, as finding a GOOD new case with a lot of HDD bays is rarer and rarer these days. Well worth buying these excellent used cases.
Awesome build! The only "problem" I see here is the PSU, for a server application that will likely be running 24/7, the extra $40 or so is WELL worth it for an 80+ Gold rather than Bronze.
Nice video, it was really helpful and easy to follow. Looking forward to upgrades to the box; would you possibly consider using TrueNAS Scale to fully utilize the upgraded machine? Thanks
I'm building a new gaming PC this Black Friday and I'll be using my old gaming PC for my first NAS server build. It has an i7 4790K, a 1050 Ti, a 750W PSU, 16GB of DDR3 RAM, a 1TB M.2 drive, and a few old crappy 500GB-1TB hard drives. I'm a little wary about using those drives in a RAID configuration, so I'm gonna keep an eye out for a good deal on NAS drives. I might just start off with 1 or 2 and then build up from there. I can't wait to set up a Plex server that my family can access remotely.
I love how 84TB is considered to be a "budget server"
I mean I'm sure it's within someone's budget but maybe not mine or yours lol. Either way it was an informative video. 👍
Budget❌ flexing✅
It's definitely budget considering what you'd be paying by going for a Synology or QNAP system with the same amount of bays and capacity.
"It was pretty expensive and a lot of work" for a budget home server. Really like the video. I have 50TB in my home server I wonder what's below budget. Keen to see further upgrades to this server
You can buy recertified drives for an even lower budget. I record RAW video files that fill up my available space quickly, so now I'm looking at a 200TB or so file server. If you need to max out the storage per drive, the Seagate Exos X24 24TB drive is finally available.
my man earned his like on this one for the full hazmat suit 😂
Done like 😂
Video didn't even load yet but I'll leave a like too in that case
ua-cam.com/video/cpt6rR7seRM/v-deo.html finally its here
1 like for the Naruto soundtrack and video editing 🤣
ua-cam.com/users/shortsJm-SUR6vPkQ?feature=share
The 3.3v pin isn't to prevent shucking. Those externals actually use that pin to hard reboot the drive if it freezes for some reason. A lot of NAS systems also support that pin to do the same thing. It has a purpose... it just isn't a widely used option in desktop systems, so they don't usually support that pin.
9:18 WD doesn't do that to discourage people to shuck the drives.
It's an actual enterprise feature: when you have multiple servers with hundreds of drives each, you don't want all of them spinning up at once. With that pin number 3, you can schedule drives to spin up in order, so that you don't get a massive power spike from all the drives trying to spin up at the same time when you turn on your servers.
How do you know that?
@@raidone7413 Many NAS and enterprise systems boot drives (and other devices with a high load on the PSU(s)) one at a time or in pairs, to avoid exactly that. Power spikes are a real and dangerous problem because of power supply features like overcurrent protection and overload protection; imagine the power spike if you boot 25 drives at 15k RPM at the same time... On the other hand, if they really wanted to prevent shucking, believe me, there are easier, cheaper and better options, like firmware checks or proprietary connectors instead of a standard SATA port on the drives. In fact, I shucked 4x 8TB WD drives and did nothing about the 3.3V pin, because my ASUSTOR NAS uses the drives without any mod :)
@@raidone7413 Most of the bigger WD external drives just use their enterprise internal drives. They used to not even replace the labels, and they were Reds before switching to the white label 3 or 4 years ago. Also, the 3.3V pin feature just isn't compatible with most consumer-brand PSUs; that's the reason they don't spin up.
@@raidone7413 This is the same reason why corps like Google make their own way of booting up their Linux machines: it takes ages for each one to spin up and come to life.
As an enterprise storage admin for a global media company, when it comes to determining how many parity drives you want, you also want to take into account rebuild times and the potential to have another failure while you are rebuilding. The larger the drive, the longer the rebuild time.
Yes, given a 200MB/s-240MB/s average read/write speed like the HC530 (14TB), reading or writing all the content on a single drive takes about a day at full capacity. For a home NAS, given the number of drives, the rebuild process can be quite long. An EC (erasure coding) based solution like MinIO handles this better than a RAID-based solution, but I haven't seen many NAS packages provide that option.
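A quick sanity check of that "about a day" figure; the rates are the assumed sequential numbers from the comment, and real resilvers are usually slower because the pool keeps serving I/O and the drive slows down on its inner tracks.

```python
# Lower bound for reading/writing one full drive: capacity divided by sustained rate.
def full_pass_hours(capacity_tb: float, mb_per_s: float) -> float:
    return capacity_tb * 1e6 / mb_per_s / 3600  # TB -> MB, then seconds -> hours

for rate in (240, 200, 120):
    print(f"14 TB at {rate} MB/s sustained: ~{full_pass_hours(14, rate):.0f} h")
# ~16-19 h best case at 200-240 MB/s, i.e. close to a full day once real-world overhead is added
```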
In addition, a RAID rebuild puts more stress on the drives. Well said, though.
Yes, really long rebuild time + you have to have a spare drive to swap, and not wait on it being shipped
With that in mind, what would you suggest? Fewer parity drives, or smaller drives? Both?
I do like the option to have dedicated parity drives (Unraid, SnapRAID), which saves power and makes sure that you can still restore individual disks if you have too many failures. As long as you don't need the extra speed of a classical distributed-parity array, it's a good option for home server use.
*Matt* Yes, 10 gbps networking and SSD cache video in the future please. Thank you for producing this piece.
I don't think there is a way to use a SSD with ZFS / TrueNAS ...
@@Felix-ve9hs there is. When you create a pool you can select a cache drive. I'm using 4TB hard drives in my NAS and a 500GB M.2 NVMe SSD as cache.
And by future he means next week. And by next week I mean now, as this video is already weeks old.
@@kyu3474 Lawrence Systems has a good video on ZFS caches; it boils down to this: cache in RAM has an order of magnitude lower latency than the other options. You should start thinking about an SSD/Optane cache only if you get a low hit rate from your RAM cache.
And as always, it all depends on your use case; if you pull random video files and watch them once, a cache won't help you much.
If you run VMs or Docker containers, I would first max out RAM and only add L2ARC when the hit ratio drops (a quick way to check it is sketched below).
BTW Matt: did you pick TrueNAS Core over Scale for a reason?
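If you want to follow that "watch the ARC hit ratio before buying an L2ARC device" advice, here is a minimal sketch; it assumes a FreeBSD-based TrueNAS CORE box where the ARC counters are exposed as `kstat.zfs.misc.arcstats` sysctls (exact names can vary by release).

```python
import subprocess

def arc_counter(name: str) -> int:
    # Assumes FreeBSD/TrueNAS CORE sysctl names; adjust the prefix if your platform differs.
    out = subprocess.run(["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

hits, misses = arc_counter("hits"), arc_counter("misses")
total = hits + misses
print(f"ARC hit ratio since boot: {hits / total:.1%}" if total else "no ARC activity yet")
# Rule of thumb from the comment above: only consider an L2ARC SSD once this number
# stays low after RAM is already maxed out.
```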
10GBit is too expensive; there are $20-$25 2.5GBit network cards on PCIe x1/USB3 which work with Cat 5e cables.
I don't think that any modern HDD will exceed ~300MiB/s even in RAID/ZFS.
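For context on that comparison, a back-of-envelope link-rate calculation (raw line rate only; protocol overhead takes a further slice, and a multi-disk stripe or RAIDZ pool can exceed a single drive's throughput):

```python
# Convert nominal Ethernet link speeds to MiB/s for comparison with HDD throughput.
def link_mib_per_s(gbit: float) -> float:
    return gbit * 1000**3 / 8 / 2**20  # Gbit/s -> bytes/s -> MiB/s

for g in (1, 2.5, 10):
    print(f"{g} GbE ~= {link_mib_per_s(g):.0f} MiB/s raw")
# 1 GbE ~= 119, 2.5 GbE ~= 298, 10 GbE ~= 1192 MiB/s; a single fast HDD sits around
# 200-250 MiB/s sequential, so 2.5 GbE roughly matches one drive.
```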
You can get rid of the front I/O errors by grounding the sense/ID pins on the motherboard headers. I'd suggest soldering small wires to the backside of the board under the header if you actually want to use the ports for your case's front I/O. If not, then just use some jumper wires to attach to the pins directly on the header.
For the USB 2.0 header, short pin 10 to either pin 7 or 8.
For the USB 3.0 header, short pin 10 to any of the following: 4, 7, 13, 17.
For the Audio header, short pins 2 and 4.
For the 1394 Firewire, HP used a proprietary header, so you'll need to either plug in an HP front I/O module and stuff it somewhere in the case, or search the forums to see if anyone reverse engineered the pinout so you can know which pins to short.
As for the fans well, just add a couple case fans and use those headers.
You're a life saver - I've been looking for this information desperately
Edit: Some more research yielded this video, which is also useful - ua-cam.com/video/c8G97FlI2QA/v-deo.html
HP proprietary I/O is the wrench in this build. Also, there is a thermal sensor on the HP Z workstations that needs to remain plugged into the mobo for transplant viability.
Interestingly enough, Western Digital is currently (April 2022) pushing out a lot of both Red Pro and Gold 16TB drives for $299.00... they must be getting ready to release a newer, larger capacity drive and need the warehouse space.
Just don’t get anything but red pro. Anything below that is junk now
@@ghost-user559 not really
@@quazar912 Yes, they have a different design for drives that is incapable of being used in a RAID array or NAS. They literally got caught using inferior drives without telling customers about it. After that they "promised" that they would keep the Pro line as the only line with RAID and NAS compatibility.
@@ghost-user559 How about WD Red Plus ?
@@tjhana I think it's only the Pro they kept the quality of. Look up "SMR drives" and read people's experiences and horror stories. They have a very slow transfer speed once you get past a certain amount of data. They will seem fine but then take 14 hours transferring data that should have taken 4 hours. They can't be RAIDed or used in a NAS as a result. Only the Pro is the original quality.
This video was worth liking just for the PC-Cleaning-Jutsu. But on a more serious note, it's nice to see people doing practical builds that a wider audience can make themselves. Great work.
Building a NAS as I watch this; amazingly, in the exact same case, also bought used locally for the same reasons as you. I love the component selection; like you said, you're way more about it than the big boi YTers. Was able to snag eight 4TB Seagate 5900RPM SkyHawk drives for $35 per, paired with an X299 MSI Raider board ($120 used local), and a $200 eBay i9-7900X cooled by a $40 Thermalright FC140 that performs neck and neck with an NH-D15. Nowhere near your 84TB, but your hoarding habit is worse than mine. Subbed, hyped for more!
The power disable pin is part of the SATA spec. The general intent is that, with supporting controllers, you can remotely hard-reset or shut down drives the same way pulling the power would.
Drives intended for retail do not have the feature, as they are directly connected to the power supply; the pin would always be active and the drive wouldn't turn on.
Drive manufacturers make drives both with and without power disable, and distribute them to the appropriate channels.
When these portable drives are made, whatever is available is used. This is why you can end up with an assortment of drive types, and in your case some with and some without power disable.
Don't know how they choose the drives; maybe whatever is coming off the line, or the ones there is an excess of, or even the ones that are slightly underperforming, but the ones with power disable are likely better, as they were not originally destined for the retail market.
Shucking already voids the warranty; if they cared to stop the practice they would just use a custom firmware that only works with the adapter, or go even further and pair the two, as that would defeat flashing a normal firmware to the drive.
Ah, so does the USB adapter in the WD shell set that pin to ground? Or just not connect it at all?
The real problem isn't shucked WD drives powering off, it's damn PSU manufacturers STILL putting a 3.3v rail on their SATA cables. I have no idea why almost all PSUs still do it, when it has been deprecated.
First thing I do when I get a new PSU (after testing) is remove the wire from all the included SATA cables. I hate getting screwed by a cable with 3.3v when I don't realize right away the damn cable is just disabling the drive. :D
@@PeterBrockie I don't know, but isn't it there for compatibility reasons? I would guess that some old hardware still requires this 3.3V rail. Just because modern devices (including drives) step down the voltage onboard doesn't mean it's not part of the spec or it's not necessary for some devices.
@@comedyclub333 Molex connectors use 12v + 5v. When SATA came around I think the idea was to add 3.3v since a lot of ICs ran 3.3v instead of 5v at the time, and they could just directly run off that rail. But what ended up happening is essentially zero devices used the 3.3v rail because ICs got lower and lower in voltage (modern CPUs are under a volt) and it was easier to just use a local power regulator to change the 5v into 1.2v or whatever. CPUs use 12v input and your VRM lowers it to the ~0-2v (depending on hardware).
They changed the spec to make it a shutdown pin simply because it was never actually used by drives. I have no idea why PSUs still have it, other than that they are often based on older designs (I don't think even motherboards use 3.3v these days).
@@PeterBrockie 95-99% of ICs are 3.3v nowadays.
CPUs & GPUs are forced to use lower voltages because there are billions to trillions of transistors inside that all add up to hundreds of amps of current consumption.
The higher the voltage, the more power they draw and thus the more heat, and the less efficient they are.
It is only more expensive to produce lower voltage logic ICs because it requires much smaller physical transistor sizes and expensive fabs.
Okay the " PC Cleaning Jutsu" part was completely unexpected and absolutely phenomenal.
I applaud your integrity - and your technical prowess; thank you!
Dude! You have inspired me!! I have watched this video before but didn't recognize the motherboard. I have had a Z420 system for 10 years now. I upgraded the CPU to an E5-2660(?) and went up to 24 gigs. After transferring it to another case, I did have to get the adapter cable for the power. I have found that if you jump certain pins on the front USB, FireWire, etc., those errors go away on startup. If you don't want the fans to ramp up, pins 11, 12, and 5 are a thermal sensor. I found mine in the original case and ripped it out, soooooooo much quieter! Thanks again for your hard work.
Count me in for requesting the upgrades. I went with the same drives on Black Friday; I could only afford three since I was still Christmas shopping. I went with the Molex-to-SATA solution since my NAS is at its core an archiving file server, so it won't be on for extended periods. I also believe that these drives have different firmware from the Reds, and I chose to just use Basic as opposed to RAID. I have no idea how these would stand up to rebuilding a pool.
I was never a fan of shucking externals to rig up a server. I just buy used sas drives and LSI controllers from ebay.. works well. This last go around I bought new HGST/WD sas drives, but used an intel i7 11700 and B series motherboard.. added 10gb nics to all my servers and torrent box, main pc, etc. Couldn't be happier with it. Fast and stable. The 11700 I bought for transcoding without a gpu, and the IGP is more than enough.
I'm so glad there's another Matt out there who made their server IP end in .69 as well :D
Matt, you forgot about other content creators that get help from sponsors and get a fully built server with ALL the drives that will populate it. They also get a lot of motherboards since they are sponsored, and in some cases they don't even monetize these pieces of hardware. So all in all they get a ton of freebies, and it's easy for them to build whatever they want. Unlike you or I, who need to buy these parts on a small budget. And it's not easy, I know that!
I watched your older video, and I'm very happy to see this update.. I can't wait to see you making more home server videos
I also have an HP Z420 and you need to tell the viewers how to swap it properly. The system won't boot up without a video card at all. Also, you need to short out some pins in the USB 2.0 header, USB 3.0 header and the FireWire header. The CPU fan header's 5th pin needs to be jumped to the 1st pin. Also, fans have to be connected to the memory and rear exhaust headers. It will only boot with all these conditions satisfied. The USB 3.0 header was especially hard and I had to solder the 2 pins under the motherboard.
The 3.3v pin is a power disable feature from WD and it is not there to prevent shucking. It is hard to mask just the 3rd pin as it is very small. You can actually mask all 3 pins from the left, not just the 3rd one; this is much easier to do. Standard 3M electrical tape should do the job.
TrueNAS uses the ZFS file system.
This file system is, by design, hungry for RAM!
In my job I built a professional NAS with ZFS.
With ZFS the rule is:
Throughput problem -> add RAM.
Speed issue -> add even more RAM!!
The classic ratio is 1GB of RAM per 1TB of net data in the NAS!
In your case the 64GB is a minimum; the OS needs RAM too, not to mention the caches on very high performance professional SSDs.
ZFS uses read and write caches all the time (if they are not on dedicated disks, it uses the pool). Installed on high performance SSDs, ZFS will "abuse" the higher speed of the SSDs to boost its performance.
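For anyone sizing RAM for a build like this, here is a minimal sketch of that rule of thumb. The 1 GB-per-TB ratio and the OS overhead figure are rule-of-thumb assumptions, not hard requirements; the ARC will simply use whatever memory it is given.

```python
# Ballpark ZFS RAM sizing using the common "1 GB RAM per 1 TB of pool data" guideline.
# Both the ratio and the OS overhead below are assumptions, not hard limits.

def suggested_ram_gb(pool_tb: float, gb_per_tb: float = 1.0, os_overhead_gb: float = 8.0) -> float:
    """Return a rough RAM target in GB for a pool of `pool_tb` terabytes."""
    return pool_tb * gb_per_tb + os_overhead_gb

for tb in (42, 84):
    print(f"{tb} TB pool -> aim for roughly {suggested_ram_gb(tb):.0f} GB of RAM")
```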
Couple of money-saving suggestions I'd make if you're considering building your own NAS:
Older server CPU's generally idle at much higher wattage. If you intend 24/7 operation, seriously consider a modern i-series or much newer Xeon or Xeon-D instead. You might save money long-term. An older Xeon-E might double your energy bill over newer chips.
Don't consider ECC a requirement. It really depends on how much you care about what you're storing. We're talking Act of God levels of paranoia about your data to consider ECC a must. My advice - if you're asking yourself whether you need ECC, you probably don't.
There are free Linuxes that are not much more complicated than the paid-for NAS options. Consider Fedora server for example.
People rightly talk-up ZFS, but mdraid is an incredibly stable, mature and flexible system.
"Older server CPU's generally idle at much higher wattage. If you intend 24/7 operation, seriously consider a modern i-series or much newer Xeon or Xeon-D instead. You might save money long-term. An older Xeon-E might double your energy bill over newer chips."
Whilst TECHNICALLY true, the reality is that at the end of the day, the increased cost of the newer processors vs. the money that you'd save on idle power, in many cases, just isn't worth it.
As stated, he can literally buy an Intel Xeon E5-2660 (V1) 8 core CPU for < $10.
Depending on how long you're idling for and the system configuration, a DUAL Xeon E5-2690 (V1, 8-core, 2.9 GHz base clock, max turbo 3.6 GHz, max all core turbo 3.3 GHz) will idle at 112 W. Assuming that you split the power consumption directly in half, the idle would be at around 56 W. (Source: www.anandtech.com/show/8423/intel-xeon-e5-version-3-up-to-18-haswell-ep-cores-/18). I couldn't find the idle power consumption data for the E5-2660 (V1) readily.
My HP Z420 which has the E5-2690, but also a GTX 660 in it, and 128 GB of DDR3-1600 ECC Reg RAM (8x 16 GB) idles at around 80 W, but I am sure that a decent portion of that is just the RAM and the GTX 660.
So, let's say I assume that the idle power is somewhere around 50 W. 50 W * 24 hours = 1.2 kWh * 365 days/year = 438 kWh/year * $0.10 USD/kWh = $43.80/year.
If you halved the idle power consumption, you'd be saving $21.90/year.
And then now, depending on which processor you want to replace that with (in terms of a newer, more modern architecture, between the motherboard, the CPU, and the RAM) how many years would it take before you'd break even with the increased costs?
So it really depends on what you are trying to replace it with.
I mention this because I've run the analysis multiple times over the years to modernise my hardware, and it always comes back to this very same question.
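For anyone who wants to redo that math with their own numbers, here is a minimal sketch of the same calculation; the 50 W idle figure, the halved replacement draw, and the $0.10/kWh rate are the assumptions from the comment above, not measured values.

```python
# Annual electricity cost of an always-on box at a given idle draw.
# 50 W idle and $0.10/kWh mirror the assumptions above; plug in your own numbers.

def annual_idle_cost(idle_watts: float, price_per_kwh: float) -> float:
    kwh_per_year = idle_watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

old_cost = annual_idle_cost(50, 0.10)   # ~$43.80/year
new_cost = annual_idle_cost(25, 0.10)   # hypothetical halved idle draw
print(f"old: ${old_cost:.2f}/yr  new: ${new_cost:.2f}/yr  savings: ${old_cost - new_cost:.2f}/yr")
```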
"Don't consider ECC a requirement."
That depends on the motherboard and/or the CPU.
Some motherboard-CPU combinations actually REQUIRE at least unbuffered ECC just to POST. I forget what the HP Z420 motherboard requires because I've always run it with ECC Registered RAM, because I don't want there to be any stability problems on account of the system expecting ECC but the memory modules not having it.
"There are free Linuxes that are not much more complicated than the paid-for NAS options. Consider Fedora server for example."
TrueNAS is free.
As for pre-built boxes, that depends. There are some things that are easier to do on my QNAP NAS systems that, whilst you can do it on Linuxes and/or TrueNAS, it takes more time and effort vs. clicking a button (e.g. setting up the NAS to be a Time Machine backup and/or installing applications where I can backup the photos and videos from iPhones). This isn't to say that it can't be done. But that also depends on the answer to the question "what is your time worth?". For me, whilst I can probably deploy those solutions on Linux, it would take me a lot more time than what it would be worth it for me to just buy a pre-built NAS system for that (minus the hard drives).
Another thing that my CentOS install HATES right now is the fact that the SMB ACL is in direct conflict with the NFS ACL (i.e. my other CentOS compute nodes can access the NFS exports over RDMA without any permissions errors, but when I tried to make the same mount points SMB shares, my Windows clients perpetually complain about not having write permissions, despite the fact that everything has been set up correctly, or at least nothing jumps out to me as being wrong based on the various Linux Samba deployment guides that are available for CentOS). Conversely, I have both NFS and SMB shares on my QNAP NAS units, and I don't have this same permissions problem. So....*shrug*....who knows.
And I don't really have the time to pour more time into trying to fix it, so the "dumb" answer is that if I need to transfer the data, I send it to my QNAP NAS systems from Windows and then have my CentOS compute nodes and cluster headnode pull it down from the same QNAP NAS system over NFS. It's a roundabout way of getting the data transferred, but the few extra seconds or minutes that it takes to go around is less than the time it would take for me to actually figure this out properly. (And this is even with SMB guest access enabled on the share.)
"People rightly talk-up ZFS, but mdraid is an incredibly stable, mature and flexible system."
Yes and no.
mdraid, to the best of my knowledge, does not detect nor correct silent bit rot.
@@ewenchan1239 I can't say I disagree with anything you say. Note I was careful to qualify everything I said with "might" and "consider". I have no prescriptive notions of what others should be doing. Was just riffing on the theme of budget. Couple of points:
You should try plugging in the domestic kWh average of - for example - Germany (~$0.34-0.35) into your sums and see how things quickly change from a price point of view.
I recently replaced an ancient Athlon-based home server that's been on duty for 12+ years with Alder-lake. I want 24/7 operation but it sees most of its use during the day, cores will spend most of their time in C10, with whole package using about 2W. It will save approximately $80/year on January energy prices - no doubt even more after recent events. With a bit of tweaking managed to get the whole system to use < 30W idle. If it proves as resilient as the old Athlon system, it'll be worth the extra upfront cost.
@@0w784g
"You should try plugging in the domestic kWh average of - for example - Germany (~$0.34-0.35) into your sums and see how things quickly change from a price point of view."
I understand your point here, but I would counter that the price delta between a modern CPU and an older one might be relatively proportional to the changes in the rate, such that the "slope" of the curves relative to each other may change a little bit, but the overarching gist of my point still remains the same.
At the end of the day, it's still going to boil down to the same question: how many years of savings in electrical costs will it take to break even against the incremental cost of a newer, more modern CPU/motherboard/RAM/hardware?
And the answer to that question depends a LOT on what you pick as the old hardware vs. the new hardware, in the sense that if you are only migrating from one year or one generation behind, then the relative advantages of a decrease in the (idle) power consumption may not be as significant as if you were to compare it to, say, migrating from hardware that's 5+ years old/5 generations older. But again, that kind of a migration has an incremental cost associated with the more modern hardware, and I've run this calculation many times over (it is one of the reasons why I haven't moved over to an AMD EPYC nor the AMD Threadripper nor the Threadripper Pro platform yet) because my old quad node, dual socket/node, 8-core/16-thread per socket system (64 cores, 128 threads in total, with 512 GB of RAM) pulls 1.62 kW. And whilst the AMD EPYC 64-core/128-thread processor would be more efficient at only 280 W TDP, it's also ~$5000 at launch, excluding the cost of the motherboard, RAM, chassis, etc., and let's assume, for a moment, that the AMD EPYC 7713P total peak system power draw is 500 W.
This means that my old machine @ 1.62 kW pulls 38.88 kWh/day vs. a new system @ 0.5 kW, which would pull 12 kWh/day. For a year, it would be 14191.2 kWh vs. 4380 kWh. At $0.10/kWh, it would be $1419.12/year in electricity costs vs. $438/year, or a net difference of $981.12/year in savings. Therefore, the TARR for the CPU alone is 5.096217 years.
And you can make it agnostic to currency by plugging your own values into the calculation to work out the TARR for the specific scenario you want to analyse.
5 years from the time that you built and upgraded the system, there would always be something newer and better and more efficient, and you're running this calculation again and again and again.
The "slope" of the TARR may change depending on region, but again, my general jist remains the same.
"I recently replaced an ancient Athlon-based home server that's been on duty for 12+ years with Alder-lake."
I just RMA'd my Core i9-12900K back to Intel because it failed to run memtest86 for longer than 27 seconds and got a full refund for it after about 3 months in service.
" I want 24/7 operation but it sees most of its use during the day, cores will spend most of their time in C10, with whole package using about 2W. It will save approximately $80/year on January energy prices - no doubt even more after recent events. With a bit of tweaking managed to get the whole system to use < 30W idle. If it proves as resilient as the old Athlon system, it'll be worth the extra upfront cost."
But again, how much did you spend, upfront, to save said $80/year in electricity costs?
How many years will it take before you break even when you divide the incremental costs by the annual savings in electricity costs?
I'm not saying that you don't save money.
What I am saying is "is the TARR worth it?"
(And an Athlon-based server --- wow....that's impressive!!! I have systems that are that old, but none of those are currently in use/deployed right now. I keep mulling with the idea of re-deploying my old AMD Opteron 2210HE server because the Tyan S2915WANRF board has 8 SAS 3 Gbps and 6 SATA 1.5 Gbps ports built-in, but then also comes the fact that they are only SAS 3 Gbps and SATA 1.5 Gbps, which, for a "dumb" file server, it'll be plenty, but those processors also have ZERO support for virtualisation, so, its usefulness at that point will also be limited, and back when I had that system running, it would easily suck down 510 W of power, so it's not super efficient either. So believe me -- I understand your point. But that's also why I have two QNAP NAS units that are using ARM processors, one that's using a Celeron J, and a TrueNAS server that I have that is running dual Xeons (which I've been running this same calculation to see if I can update that server to something newer, mostly so that I can run more virtual machines off of it, which increases the overall system utilisation.))
I WISH that newer processors actually had more PCIe lanes, because then they could be lower powered but also super powerful for home server duties.
The last generation that I have, sporting the Intel Core i7-4930K with its 40 PCIe 3.0 lanes, is great because I can put a 100 Gbps InfiniBand network card in there, a GPU, and also a SAS 12 Gbps HW RAID HBA, and it isn't super power hungry. A lot of the processors newer than that all have fewer PCIe lanes.
@@ewenchan1239 Sorry but what does TARR mean?
@@3nertia
Time adjusted rate of return.
The cleaning jutsu... laughed my butt off. You have a new subscriber now......
I've been looking for a server video like this and haven't been able to find any good ones. This is exactly what I was looking for. Thank you for the content. It was a super good video!
HP workstation boards like the Z-400 and Z-420, etc. are notorious for hanging on boot if you don't have all the hardware connected that was originally in the workstation. On the fan headers, you can jumper one of the pins to ground to fool it into thinking the proper fan is connected. On the front USB, and front 1394 error, it's easiest to keep the front USB/1394 cluster and cables, and connect it and just stuff it in the case somewhere. I used a Z-400 board in a nice build with a Lian Li all aluminum full tower case, and ran into these very same issues. With the Z-400, I found that the m/b tray from the original workstation is the best way to mount the board into an alternate case, as it already has the I/O shield integrated into the tray, and the mounts for the CPU cooler, and it will make all the ports and slots line up where they should be, in a generic case.😉 Hope this helps.
6:19 not what I was here for, but it is the best aspect of the entire video
As for now, it's the best home NAS build: cheap with a lot of features. Thank you.
I just built a NAS myself. I used a Microserver Gen8 for it as I could get it relatively cheap and I wanted remote management and a compact design. I used the SCALE-version of TrueNAS as it runs on top of Linux and you can use Dockers and VMs there. And if I find another server of the same model for cheap, I could even connect them together.
Also I can recommend using a Ventoy stick if you have a bigger stick and want to install different operating systems from it as you have a menu there where you can select one of the ISOs you put on the drive before. There is a partition where you can simply copy the ISOs onto and delete them later. No need for burning them to the stick and wait for that process to finish.
The first time your network adapter dies on you, you'll be really glad you have that video card for on-system troubleshooting.
In case anyone is wondering: you can't sell the WD Elements enclosures as empty hard drive enclosures because they only accommodate the original drive that was inside. This is because the rubber grommets are unique to the original HDD and won't exactly fit any other, preventing you from fitting a 3rd-party drive in the enclosure. The controller board itself will, however, work with other drives; at least it did in my case (8TB WD Elements). But it's not really too useful if you can't fit non-original drives into the enclosure.
Thumbs up for covering the 3rd pin with tape. Really useful information.
Great video Matt! I definitely recommend Truenas. ZFS Pools are mainly what I was after. It’s been rock solid on my 12 bay Supermicro server. Currently have 8 WD 8TB reds in raid z3.
Nice! sounds like a great setup.
TrueNAS is only useful if you have unlimited funds to buy boxes of new drives every time. For normal people Unraid is vastly superior.
@@SupremeRuleroftheWorld Could you elaborate on that, please? Why would TrueNas require "unlimited funds to buy boxes of new drives every time"? Every time what?
@@3nertia I dare you to put in 1 new drive to expand storage
@@SupremeRuleroftheWorld WHY!?
Quite a good video, and the first one I saw that addresses the active-low pin on salvaged USB hard drives.
I've built a NAS myself a few months back with a "Node 804" case, which supports 8 hanging HDDs in a small form factor (though I had to print an additional fan holder for in between the HDD cages because the disks got too warm).
In the end I installed 9 HDDs for storage and one M.2 for the OS.
Since our electricity is the most expensive in the world (0.30-0.35 €/kWh -> ~0.35 USD/kWh) I can't let it run 24/7, so I configured it with WOL (Wake on LAN).
If I need the storage, I simply ping it with a "magic packet", and when I'm done I execute a special SSH script which logs into the machine and shuts it down. A bit of work in the beginning, but easy to use once it's all configured.
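For reference, sending that magic packet needs no extra tooling; here is a minimal Python sketch (the MAC address is a placeholder for the NAS NIC's actual address):

```python
# Send a Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by the target
# NIC's MAC address repeated 16 times, broadcast over UDP (port 9 by convention).
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake_on_lan("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the NAS's NIC
```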
My "server" is the same HP Z420 workstation. It is a very good choice. I have 80GB of ECC RAM since it has an Xeon processor. I also have a SAS controller to handle my NAS drives.
That is a great and silent case. I happen to have one too and wanted to mention that there is a dust filter on the bottom too. I didn't see you pulling it out during the clean-up, so you might want to check how that looks. If it is as dusty as your front panel filter was, I bet your power supply won't get much airflow ;) You can pull the filter out from the front side. Just grab the "bottom" of the case (below the door) with your fingers and slide the filter module toward yourself.
I have the same case and love it!
@@zobertson which case is this? where can I find it?
@@garycarr8467 I don't think they make it anymore. He referenced what it was in the video at one point
@@garycarr8467 fractal design define r4
TrueNAS uses ZFS as the file system for the data drives. It is great overall for combating bitrot. TrueNAS also likes using ECC RAM to help combat bitrot as well. Look it up. It is some interesting reading.
I only subscribe to a new channel like once every 2 years. You earned a sub for the pc cleaning jutsu.
You paid $40 for 32GB, I once paid about the same for a 1MB stick mid 90's
0:30 stuffing the SSD's in the HDD drive bay is honestly so big brain
Love your video dude. I have been working with servers for 20+ years; they are so much fun. You gained a subscriber.
For the RAM, if you're using TrueNAS and ZFS, I think the guideline is 1 GB of RAM per TB of raw storage. So, an 84 TB server should have at least 84 GB of RAM...or, more practically, 96 to 128 GB.
That's the guideline most people use. I saw some guidance from an IXSystems employee (they develop TrueNas) where he said just max out your RAM if you can. For something like this, or most home use-cases the SSD cache drives aren't going to be that beneficial. With that said, he'll be okay with 32 or 64GB, he just won't have the greatest performance.
hmmm so how much storage would 256GB of RAM give you then? :O
@Bewefau You can't do that math yourself?
Was looking at buying an off the shelf storage server, but wanted to see if a DIY project was within my scope. Thanks to your insight and excellent presentation skills, looks like I will go down the DIY path and save some major $$$. Thank you so much!
Yes, NAS servers are overpriced and low-spec. The worst part is that they are limited and expensive to extend in drive capacity. A 4-bay extension can sell for $500 or more, which can be almost 60% of the cost of the hard drives. This is ridiculous. Rack mount chassis are too loud to be used as a HOME NAS. TBH, any NAS vendor should provide an option to extend with a new drive for less than $20. That fits the nature of a home NAS (incremental/annual spending without much planning).
Just so you know...the only reason I subscribed is because of your "PC Cleaning Jutsu" bit 🤣
I'd really like to see you add game servers like Minecraft, etc. that you mentioned. Great video!
Damn, Matt ... You are freaking hilarious ... 6:18 ... I loved that brief intermission of humor, creativity, and ingenuity ... Glad I subbed. Haven't watched much lately, but I'll come back & keep up.
I have the same chassis, bought some years back. You can add up to two extra disk modules in there with room for three drives each, taking some space for PCIe, obviously, but that may be worth it. I'm using an LSI SAS controller for my drives, in addition to the onboard SATA ports. All drives are SATA, though, but SAS controllers support SATA as well as SAS.
PS: You can move those rubbery gaskets around if you slide them aside and pop them out to fit all screws. At least, that has worked with all the drives I've tested so far.
What's the chassis model number?
To find out whether you can boot UEFI, just look at whether efivars and/or /boot/efi is mounted on your current Linux system.
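A quick way to script that check, assuming you're on Linux: the kernel only exposes /sys/firmware/efi when the running system was booted via UEFI, so its presence is a reasonable proxy.

```python
# Detect whether the currently running Linux system booted via UEFI:
# /sys/firmware/efi only exists on UEFI boots.
import os

print("booted via UEFI" if os.path.isdir("/sys/firmware/efi") else "booted via legacy BIOS/CSM")
```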
This route suits me more than a blade type server. Thanks for this and I will be interested to see any updates for this as well.
This seems like a scam. What is the stuff to win?
Could you measure the idle and on-load power consumption of this build from wall?
Great video. I have a PC that I build around 2012 in a big Cooler Master Tower case with i7 CPU that was very fast and just stopped using about a month ago. It was using 10 drives for 40TB. Now I am thinking of repurpose that as a NAS setup like yours.
Ur videos are real gold man.
So much detail ....
I was actually looking at this same case for my future NAS build.
Love the way everything is setup in these things.
Awesome video man!🏆
Name of the case? I couldn’t understand what he was calling the case
@@noahemmitt Fractal Design Define R5 I believe
@@noahemmitt Fractal Design Define R4 (Turning on CC (Closed Caption) is great!)
Better man than most, declining four 18TB drives.
Also when you rip apart an external drive do not throw away the pieces from it because you can use those as a hard drive recovery tool. They are easy to break so it's nice to have multiple ones laying around. Instead of buying a new toaster half the time I just use these. No I'm not talking about bread for half of you that don't understand what a toaster is.
FYI, shucking isn't always the cheapest option. For me, there are 20 TB or 22 TB drives around for ~15 CHF per TB, which is about the same as your drives' cost/TB. So with four of those you could do a RAID 10 and be golden as well.
You're thinking that 2 parity drives gives a very low probability of data loss, but that's a mistake when all the disks are large and new, as in your case.
The reason is that when disks are a lot alike (same make, model and age) they tend to fail around the same time.
When the first drive fails, of course you replace it with a new one to rebuild the array, and doing so adds stress on all of the other disks... but those disks are not small (14 TB takes a long time to churn), and since the other disks are also very near failure (as demonstrated by the one that has already failed), you often get a second failure... and now you're caught with your pants down if another failure occurs.
That's the drama that many people experience and fail to understand... "but I used 2 parity disks!" they say.
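To put a rough number on that rebuild window, here is a minimal sketch; the ~200 MB/s sustained throughput is an assumption (a best case), and real rebuilds compete with normal I/O and usually take longer.

```python
# Rough lower bound on how long a rebuild keeps the array degraded:
# drive capacity divided by sustained rebuild throughput.

def rebuild_hours(capacity_tb: float, throughput_mb_per_s: float) -> float:
    return capacity_tb * 1e12 / (throughput_mb_per_s * 1e6) / 3600

print(f"~{rebuild_hours(14, 200):.0f} hours")  # ~19 h best case for a 14 TB drive
```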
Would love to see a video about setting up things like Nextcloud on the server, as well as a game server, and how you set up your modem to allow traffic from outside while keeping your data safe!
The fact this guy refused multiple 18 tb drives because he wanted to teach other people how to choose the correct parts for a cheap server earns my respect. Good job 👍
I love this case.
I was looking for something that holds a lot of 3.5" drives, and bought one for my editing machine.
The price is kinda overkill for that old case, but there are fewer and fewer options for ones that hold more drives.
Fractal meshify 2 is great for storage and is a more current model.
Antec P101
Great stuff, planning my own NAS soon and I'll consider that motherboard you chose (although I don't like the idea of using an adapter for the main board power, it's probably fine). I would have gone with a better power supply personally; for something running 24/7 and storing a lot of data you want something really reliable. I'm not an expert in PSUs, but there's a PSU tier list on the Cultists Network that's quite well done and I'd stick to anything Tier A on there (Corsair RMx PSUs are a great option). Looking forward to future updates on this server! (Would love to see you show off a dual purpose, like running a game server alongside it as you mentioned.)
Oh and if you want a great SSD(s) at a great price keep an eye out for Best Buy Geek Squad refurbished Samsung 870 EVO's, the 500GB model sometimes goes on sale for $40 and 1TB model for $70. Despite the "refurbished" name they almost always have very little usage, highly recommended
I built a 6-drive FreeNAS server about 5 years ago using a base HP server that I got from Tiger for $200. The bad thing was it needed DDR4 ECC RAM and that was not cheap back then. FreeNAS is happy using 16GB flash drives for the OS, so I installed one inside on the MB USB connector and the other in an external slot so my OS is redundant. I did not have video files to back up, so I used 6x 3TB drives in a ZFS array and was configuring it 10 minutes after hitting the power button. That particular server uses a very proprietary display port that I could not get working with my monitors, so I threw in an old low-end video card for initial setup; after that everything was done via SSH over my network. The server interface over SSH is very easy to use on FreeNAS and it looks very similar to the TrueNAS screen.
Back then it cost me about $950 for this including the hard drives, but it's been online since then with no hiccups. I have lost power a few times so I usually have to power cycle after that, but it always comes right back up like nothing happened. Maybe someday I'll need more capacity, but for now it's just fine for me.
I love how open source stuff is kinda timeless... 5 yrs with no issues is amazing. Lotta times I'll YouTube a command and not realize till later that the video is from 8 yrs ago but still relevant today.
@@calvinpryor This was a loss-leader HP ProLiant M10 small tower server I got at Tiger for $200 back then. The most expensive thing was the 16GB of DDR4 ECC RAM. The 6x 3TB Red drives are all 5400 RPM (don't need speed here, and low-speed drives are more reliable).
I've updated FreeNAS a few times and have 2 or 3 power failures a year, but the system always comes up clean with no errors in the logs except for the sudden power loss. This system has been as reliable as anyone could ask for.
@@dell177 Did that version of TrueNAS support third-party apps (if any)? That's really the deciding factor on DIY/TrueNAS vs a Synology-type NAS... if I can get the same applications and functionality with TrueNAS, why would I spend the extra $ on their hardware? 🤷
@@calvinpryor I believe it does but I've never looked into it; all I wanted was a safe place on my network for my files. I can link to it from OS X, Linux, and Windows.
So the 3.3v pin issue isn't isolated to WD drives, and it's not there to discourage shucking them from external drive carriers; it's literally a power-saver feature. When the drive is not in use, it has logic built in that will spin down the drive and put it in a minimal power state. This is why it needs the 3.3v rail in addition to the 5v and 12v. This comes on out-of-the-box drives too. The reason for using the Molex adapter is the lack of the 3.3v lead, so it bypasses this. Much safer than putting tape in with power...
Stiff brush that comes with keyboard cleaning kit is perfect for quickly cleaning PC & AC air filters
Why does no non-European tech YouTuber care about energy efficiency? That thing easily pulls 150 watts idle... If you go with an ITX board + a new Gold-rated PSU, idle draw will be sub-10W without drives.
Cuz the US has cheap electricity in most of the country, comparable to third-world rates or even less.
You'll never save that money by buying more expensive parts where I live.
Pay attention people, this is called talking out the side of your neck. Who the fuck cares how much power it would pull idle with no drives in it? We make NASes to put drives in them. The fuck you talking about, my guy? Just saying random shit in hopes of likes. You know my car does infinite miles per hour when it's not cranked up. Smh
He doesn't know what he is talking about.
Well, if you want a PC to look at the internet and use Excel, that is a great idea, haha. If you want a computer that does literally anything else... haha bro, come on.
Pretty cool, I was thinking about turning my old gaming system into a NAS; it's been sitting in my closet.
I built a NAS a few years ago and used Proxmox because I couldn't get TrueNAS to work hosting VMs properly. I used desktop hardware and it's been working well for quite a while (5-6 years, I think). But now I want to build a new one with more space and maybe server hardware. This has some good tips. Craft Computing is a good source of information on this type of build.
Jeff from Craft Computing would be Proud Matt !!
6:18 this made me subscribe to the channel... and also the great information in this video 😂
I seriously need to get one of those. The SATA ports on several of my PCs are completely maxed out and I've got around 50 TB of redundancy.
Great video, but I might suggest, if you have the budget, buying a more efficient power supply; as the NAS will probably run 24/7, the power cost, at least in Europe, isn't worth it.
Went with a Synology DS1621+ after my DIY server decided the backplane should die out of nowhere.
Upgraded the Synology with 64GB of RAM and stuffed in 6x 18TB Seagate Exos and of course 2x 2TB 970 Pro NVMe SSDs for caching.
Far better solution, and I was blown away by the speed at which Synology answered my questions about the PCIe slot: within 45 seconds I had my answer and a contact email if I ever needed help with anything else.
I always wonder how much it costs to do an 8-bay, but for £1000 (or £300-600 used) you can get a Synology 8-bay NAS where most of the power use is just the drives, and it supports hot-plug.
Most QNAPs have a video port and a normal BIOS; pull the USB DOM and replace it with, say, a 32/64GB DOM and install TrueNAS onto it (or buy a QNAP with the hero OS that uses ZFS, but don't open it to the internet).
@@leexgx I turned a QNAP TS-469 Pro into an Unraid backup; I desoldered the 512MB DOM. Doing so, it even saw the extra RAM I installed. 4x 18TB drives as an offline backup.
I built about the same thing, but I spent a lot more $$$ because I bought everything brand new. Bought 8x 6G hard drives. Had a spare Ryzen 3800X and a spare 700W power supply, so I had to buy all of the other things: motherboard, memory, HBA, root hard drives, and a cheap Nvidia 710. Housed the entire thing in an R5. The only thing different is that I built it out in Proxmox and VM'd the TrueNAS build and directly attached the hard drives. All in all it ran me about $1500, which was about the same price as a Synology with the same drives. So I think I got a better deal: a better CPU and a system to build out VMs. I also got a better upgrade path if I wanted a better CPU.
The warranty doesn't mean s*** on an external drive when you're going to rip it out of that case anyway 😂 That's how I brought my laptop up to 4TB on a budget; I have a dual hard drive slot laptop. PS: I like your build, and I have half the components to actually build this myself right now. I've never built or set up a NAS system, but I used to use one all the time, and now my home network has gotten so big that I need a centralized network for multiple computers to access the same software. I'm finally setting up my first NAS at home 😊
This looks like a good, reasonable high-core-count* higher-memory option. If you can find the v2 version of this HP board you can go up to a 12-core CPU; the v1 board is limited to 8 cores.
*compared with my current i7-6700K storage server. The extra cores and higher memory support will be useful for hosting VMs as well as storage, though going up to an LGA2011-v3 Xeon is probably better for VM performance, but obviously higher cost.
Looks great. What’s the electricity consumption?
Thanks for the nostalgia trip back to Naruto, it's been a while since I last saw that series.
I built out an extra Dell R720 with 192GB of RAM and 2 EMC JBODs with 30 4TB SAS6 drives, 120TB raw. It's a freaking 8U and works very well with a 10Gb SFP+ to my Cisco switch. I have TrueNAS Scale on it as well. I use it to host iSCSI LUNs for VMware.
Hi Matt: Thanks for the video. I am looking to archive a lot of 4K video footage and am just now exploring storage options. So I really appreciate the info you presented in the video. Thanks again.
First time came to this channel ✌🏻
• watch from Melaka, Malaysia 🇲🇾
Yeah yeah, you should most def go for the 10Gbit option, interesting indeed. Prepare mentally to alter your pool setup to be able to handle writes at that speed though.
I have the exact same case in my basement for close to 10 years, with my i7 4770 in there. I think I know what my next project is now.
This is great content Matt. I'm currently in the process of replicating your setup (more or less), but the core computing will be the same. I'm struggling to find mobo/CPU combos, so I guess I'll have to just buy an entire Z420 workstation (which will save me on the PSU, since I can use its own). For the case I also struggled; I snagged a Corsair Obsidian 750D, only 6x 3.5" bays, but it's got 3 more 5.25" bays which I can adapt.
I read up on the 12 and 14 TBs, and the consensus among shuckers is they run quite hot - Would you be able to share your experience on this topic please?
Thanks for your work!
Hey I'm about to do the same in replicating his build. How did your build work out?
@@CharlesMacro So uh, it's been live for some time now; it's my main NAS actually. I added a Quadro P400 card for Plex transcoding.
I transplanted the mobo/RAM and CPU (E5-2640) from an actual Z420 workstation, which I bought second hand, into another case. You have to mind the transplanting, as it's finicky: you can only move over the mobo/CPU/RAM, as the PSU is non-standard and will not fit an ATX case - ua-cam.com/video/c8G97FlI2QA/v-deo.html
He's using it with TrueNAS for storage mainly; I'm using it with Unraid (which is an additional cost) but somewhat easier to deal with.
The supported CPUs for this mobo are quite old, and while they can deal with quite a lot, the energy consumption is a factor. I jumped feet first into this before the energy crisis; not sure I would do the same now.
Awesome cleaning ninjutsu. Kakashi would be proud. I would definitely be interested in seeing more from the home NAS story. I'm setting one up myself just for peace of mind for memories, games, etc. Great stuff, brudda.
For future reference, using an air compressor to blow out cases and filters is the best option and works amazingly.
At 3:10
The chip model....
You said the chip model was an E5-1620, but you forgot to mention which one it was... there is the original, the V2, the V3, and the V4.
The V1 and V2 take a different socket (LGA2011) than V3 and V4 (LGA2011-3)
The V1 chip used the Sandy Bridge (32nm, 2012 release) microarchitecture. V2 used Ivy Bridge (22nm, 2013 release). V3 used Haswell (22nm, 2014 release). V4 used Broadwell (14nm, 2016 release).
Each of the four chips has a different set of clocks as well....
This is all according to the app "CPU-L"....
You will save some time buying a 10g router. Direct card to card is a different use case, google this prior to doing it, you need an ip from somewhere….
Excellent video quality.
The use of old workstation boards is interesting as there are so many to be had on eBay. It looks as though the Dell workstations are more proprietary, but it's good to see that the HP boards might follow some standards; I'll take a look at some of those.
Those fractal cases have two SSD trays on the back of the motherboard plate. I have one I'm in the process of turning into a Proxmox VE + NAS. If your trays are missing I think you can still get them on ebay.
TrueNAS is great! Been using it and FreeNAS for many years now.
Quick question: I'd love to pursue a build like this, however after having a hard drive and an external hard drive fail on me in the past, I became super paranoid about trusting any hard drive or SSD. Even though these are each 14TB and you have multiple copies, how can you ensure these will last you and retain all the media you plan to put on them? Great video!
@@colekter5940 wow that’s absolutely amazing! I’m going to definitely look into setting this up, as I’m a filmmaker and if I’m going to continue being one then I need to invest in this system haha. Thanks for the information :)
really like this budget server build, that case has a huge number of HDD space :o
Remember, a case itself doesn't have any moving parts... so being old doesn't mean it becomes less capable.
Not sure why that comment was needed, but... yes, the Fractal Design R series are excellent cases for holding plenty of HDDs. It was my choice as well. I personally did not have a problem paying even $100 for one, as finding a GOOD new case with a lot of HDD bays is getting rarer and rarer these days. Well worth buying these excellent used cases.
Awesome build! The only "problem" I see here is the PSU, for a server application that will likely be running 24/7, the extra $40 or so is WELL worth it for an 80+ Gold rather than Bronze.
Thank you. Another Good video. Straight to the point with sum LoLz 🙏
Nice video, was really helpful and easy to follow. Looking forward to upgrades to the box and possibly would you be considering using truenas scale to fully utilize upgraded machine? Thanks
I'm still using an old HP Z420 as my main rig. You'll want the Xeon 2667 v2. It's the sweet spot of core count and core speed.
You got my like for the Kagebunshin! :D
That was really cool and informative, thanks for sharing!
I'm building a new gaming PC this Black Friday and I'll be using my old gaming PC for my first NAS server build. It has an i7 4790K, a 1050 Ti, a 750W PSU, 16GB of DDR3 RAM, a 1TB M.2 drive, and a few old crappy 500GB-1TB hard drives. I'm a little wary about using those drives in a RAID configuration, so I'm gonna keep an eye out for a good deal on NAS drives. I might just start off with 1 or 2 and then build up from there. I can't wait to set up a Plex server that my family can have access to remotely.