A couple of errors and omissions:
Firstly, I said I used RAID 6, but it's actually RAID-Z2 - the concept is exactly the same, but I should have made that clear.
Secondly, a few viewers have asked about wattage. I will try to measure this exactly and update this comment, but the Xeon CPU in the T20 should idle at around 20W and hit 85W or so when maxed out (which is unlikely to happen in this use case).
I did a similar build using an HP Z420: Xeon E5-1620 v2, RAM upgraded from 8 to 64GB DDR3 ECC, 4 shucked 10TB WD drives, and an LSI SAS card. The Z420 has a bunch of PCIe slots, so you don't need a separate 10Gbit switch - build one yourself! I installed a couple of Mellanox ConnectX-3 dual-port cards and a 4-port Intel 1Gbit card, then set up a bridge and a DHCP daemon. Most computers connect via cheap direct attach copper (DAC) cables, so there's no need to spend on optical transceivers. The RAM and all the cards are dirt cheap on eBay.
Very nice! Thanks for sharing 👍🏻
Is that not power hungry?
Doing the same thing using HP ML310e Gen8 v2 and HP MicroServer Gen8 machines, both equipped with HP P222 Smart Array RAID cards. Maybe not the best solution for ZFS, but I've experienced no issues at all and they perform well. TrueNAS Scale runs flawlessly on those machines. The benefit of these machines is that they have built-in SD card slots to use as an OS drive. Maybe a bit slower at boot, but they run fine once up and running.
So I agree there's no need to buy new hardware if you can find machines like these for a good price...
I fell into the Synology trap a few years ago and was stuck with it until recently. I'm kicking myself for not DIYing one in the first place. I'll never buy off-the-shelf slow rubbish again. Also, with the Synology, if you use IP cams you'll be forking out for a licence PER camera! Crazy, I know.
Did the same with an equivalent from Lenovo (P310) and it ran for years 24/7 without breaking a sweat. Still on TrueNAS Core, but now on an enterprise rack server with lots of PCIe lanes, NVMe storage pools and tons of RAM for caching.
But will it idle below 10W?
Came here to say that. Power consumption needs to be factored in.
I’ve built a similar setup because I had a T20 lying around as well. I use Unraid for it. I have a SATA card but no 10Gbit networking. Four 8TB drives plus one SSD.
At idle, with the disks not spinning, it's about 20-25W. It could be lower, but buying new will always be more expensive - more so because mine doesn't run 24/7.
It really doesn't matter if you already have gear on hand. You will never save money buying new stuff, even if it's the most power efficient. By the time you start saving money on the power efficiency, the tech will be outdated and replaced.
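That break-even claim is easy to sanity-check with arithmetic. Here's a rough sketch - the hardware price, wattages and electricity tariff below are illustrative assumptions, not measurements of anyone's actual setup:

```python
# Years until a new, efficient build repays its purchase price through
# energy savings over old hardware you already own. All numbers are assumptions.

def breakeven_years(new_cost, old_watts, new_watts,
                    price_per_kwh=0.30, hours_per_day=24):
    kwh_saved_per_year = (old_watts - new_watts) / 1000 * hours_per_day * 365
    return new_cost / (kwh_saved_per_year * price_per_kwh)

# e.g. a 400-unit N100 build saving 30W over a free Xeon tower, running 24/7
print(f"Break-even after ~{breakeven_years(400, 50, 20):.1f} years")
```

At five-plus years to break even, the hardware may well be obsolete first, which is the commenter's point - and if the box doesn't run 24/7, the payback period stretches further still.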
@@โต้ง-ล3ต I was not into buying new stuff either, but I have, for example, a Mac Mini lying around.
When you factor in the power that hard drives use, does the rest of the hardware matter?
Have a bunch of these coming off EOL as small-biz servers. I was going to dump them, but now I'm hosting them at our office as backup destinations for clients.
Great video. I am looking to build a NAS for home use and your solution seems to be very good for my needs. Thanks.
Another nice workstation is the Dell Precision 5640. It comes with the W-1290P CPU (10 cores, 20 threads), 2x M.2 slots on board, and a generic PSU.
I got one for my brother as a gaming PC.
CPU speeds: 3.7GHz base, 4.7GHz all-core boost and 5.3GHz single-core boost. My system ran at 5.8GHz all-core lol, after I redid the thermal paste. I bought it from a server reseller and paid them for a 32GB RAM and 1TB M.2 NVMe upgrade. Paid $647 CAD.
I appreciate your video! I enjoyed it!
You spoke about the data transfer at the end. I also enjoyed watching Lawrence Systems' videos on "TrueNAS is a COW" and "asynchronous vs synchronous writes" - depending on what you have set, they can help you understand the transfer speeds and RAM usage.
On a low budget, I'd definitely consider having only two huge-capacity drives in a mirror setup. The likelihood of failure and the rebuild speed should be better than RAID-Z2 with 4 drives. Speeds will most likely be lower, but with enough RAM or proper caching it would still provide nice throughput for most use cases. Saving the cost of expanding past the lack of onboard SATA ports, as well as the total operational power draw, might be worth it, and price/capacity is on par too. Thoughts?
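The trade-off is easy to see side by side. A quick sketch of the two layouts at equal usable capacity - the drive sizes are illustrative assumptions, and parity/metadata overheads are ignored:

```python
# Usable capacity and fault tolerance: 2-drive mirror vs 4-drive RAID-Z2.
# Sizes in TB are just examples; real ZFS usable space is slightly lower.

def mirror(drives_tb):
    # 2-way mirror: usable space equals one drive; survives any 1 failure
    return {"usable_tb": min(drives_tb), "tolerates": 1, "bays": len(drives_tb)}

def raidz2(drives_tb):
    # RAID-Z2: usable space is (n - 2) x smallest drive; survives any 2 failures
    n = len(drives_tb)
    return {"usable_tb": (n - 2) * min(drives_tb), "tolerates": 2, "bays": n}

print(mirror([16, 16]))      # 2x 16TB
print(raidz2([8, 8, 8, 8]))  # 4x 8TB
```

Same 16TB usable either way, but the mirror uses half the bays, SATA ports and drive power while tolerating one failure rather than two - which is exactly the trade the comment describes.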
It would be best if you got a T5820 from Dell. It is only 230 USD and has 4 hot-swappable HDD trays ready to go. The optical drive bay can be converted to take 2 extra internal 2.5" drives, making it a 7-drive solution.
I enjoyed your video. While I don't have need for a NAS I do like upgrading old computers, and my recent project was a SFF Dell OptiPlex 7010.
I have an old Xeon X5460 desktop with integrated graphics and a pretty good number of SATA ports and PCI slots. It has two downsides as a NAS/home server: power consumption and limited RAM. 8GB is, in theory, plenty for a NAS, but not if you start running containers on it alongside OpenZFS.
I should note that, underclocked, the system uses a total of 60-120 watts depending on load, so not TERRIBLE, but it really doesn't make me want to run it 24/7.
I could get a pretty ideal upgrade - an N100 + 32GB of RAM - for 300, but I am wondering if it's worth the cost.
He fell for the Cat7 meme
At least they are braided cables. If you have cats like mine who eat power cords, the braided material stands up well lol
I like PC cases with optical drive bays. I typically choose those with 5.25" bays, but a slim drive is better than nothing. I understand that PC nerds need space for water-cooling fans, but I'm pretty sure there's always spare space for a slim drive. Every case should have one. If you spend $1000 on a whole system, an extra $20 for an optical drive is totally reasonable, even if you use it once a year or less.
Anyway, I'm doing something like this with a ThinkCentre with a Core i3-3220. It only cost me $20 for the whole system. But I don't know how much electricity it sucks, whether it is good for the environment as a whole, or whether the cost is worth it for what I do, which is basic local file sharing, automated downloads and server management practice.
The trouble with the T20 case is the lack of airflow/cooling for the hard drives and the 10Gb NIC. Both get toasty under sustained use, and I leave mine on 24/7. Also, I like TrueNAS Core but wish Docker Compose was easier to implement - I much prefer Unraid for this reason - but the storage config is definitely better in TrueNAS for a noob like me.
Nonsense. The T20/T30/T40 cases are engineered for adequate airflow. Check the drive temperatures and they are fine.
@@BandanazX I have, and they're not. One shitty exhaust fan to cope with 4 Seagate Enterprise drives, a Xeon CPU, a Dell PERC card (in IT mode) and a Solarflare 10Gb SFP+ NIC does not equal adequate airflow. I need to mount an intake fan somewhere and find something to power it from.
@@Gill_Bates On my T20, the drives are at 42°C, which is just fine.
I did something similar with an old Lenovo P310. Awesome vid!!
At 8:40 you mention using SATA connectors. Many have reported injection-moulded SATA connectors as fire hazards, due to the cables very slowly eroding the plastic and coming into contact. I don't think it is a good idea to tell people to buy possible fire hazards.
The risk you refer to is in drawing more power than the connector can safely handle, which can happen if you were to split a single power connector multiple times, or if you use SATA to power things it wasn’t designed for. Some people are trying to convert SATA power to 8-pin PCIe for GPUs, which would certainly be a fire risk. SATA connectors are not designed for that kind of power draw. The cable will get hot, potentially melting the insulation and presenting a risk of short. Normally, you’d expect the PSU to pop before that happens, but it’s certainly not an advisable approach. That’s not what I’m doing though.
What I’m doing is running two drives from one connector. Running two HDDs or SSDs from one SATA power connector is very unlikely to cause issues.
According to my calculations and taking into account the power ratings of the drives, the specs of the PSU, and the design specifications for SATA connectors, this is within design limits and doesn’t pose a risk.
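For anyone who wants to check that arithmetic, here's a back-of-envelope sketch. The rail limit follows the SATA power connector spec (three pins per voltage rail, 1.5A per pin); the drive currents are typical datasheet-style assumptions, not measurements of my particular drives:

```python
# Two HDDs on one SATA power connector: is either rail overloaded?

RAIL_LIMIT_A = 3 * 1.5        # 4.5A per voltage rail per the SATA spec

hdd_12v_spinup_a = 2.0        # assumed worst case: spin-up surge on the 12V rail
hdd_5v_active_a = 0.7         # assumed electronics load on the 5V rail
drives = 2

load_12v = drives * hdd_12v_spinup_a
load_5v = drives * hdd_5v_active_a
for rail, load in [("12V", load_12v), ("5V", load_5v)]:
    status = "OK" if load <= RAIL_LIMIT_A else "OVER"
    print(f"{rail} rail: {load:.1f}A of {RAIL_LIMIT_A:.1f}A -> {status}")
```

Both rails stay inside the limit with two drives even at simultaneous spin-up; it's the many-way splitters (or SATA-to-PCIe adapters) where the margin disappears.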
What is your wattage?
With 4 disks, you'd be better off with 2 mirrors for more IOPS and the same protection.
RAID 10 definitely offers improved performance, but I'm not sure it offers exactly the same protection. As I understand it, RAID 6 can sustain two simultaneous drive failures, and it doesn't matter which drives fail. With RAID 10, you can only sustain two failures if the failed drives are not in the same mirrored pair. Either is a great solution for a four-drive NAS though. 👍🏻
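A tiny sketch makes the difference concrete - this just enumerates every possible two-drive failure in a four-drive array, assuming the usual stripe-of-mirrors layout for RAID 10:

```python
from itertools import combinations

drives = ["A", "B", "C", "D"]
mirror_pairs = [("A", "B"), ("C", "D")]   # RAID 10: stripe across two mirrors

def raid10_survives(failed):
    # Data is lost only if some mirrored pair loses both of its members
    return all(not set(pair) <= failed for pair in mirror_pairs)

def raid6_survives(failed):
    # RAID 6 (or RAID-Z2) tolerates any two simultaneous failures
    return len(failed) <= 2

results = [(f, raid10_survives(set(f)), raid6_survives(set(f)))
           for f in combinations(drives, 2)]
for failed, r10, r6 in results:
    print(failed, "| RAID10:", "ok" if r10 else "LOST", "| RAID6: ok")
```

Of the six possible two-drive failures, RAID 10 survives four (it loses data on A+B and on C+D), while RAID 6 survives all six.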
Great video, thank you 😀
I think the power utilisation was worth delving into more.
A NAS is a balance between size, capabilities and power draw.
That Dell will be using a fair bit of power and is of course huge compared to a typical 4 drive NAS.
Yep, fair point. I only boot mine up when I need to access files, so I’m cheating a bit! Will try and remember to include more power detail on future vids 👍🏻
What is the power consumption at idle, and what is it when the disks are working?
A lot. But I don't think he cares about consumption. And that Xeon doesn't help at all.
I haven’t measured it. I believe this CPU is about 20w idle and up to 89w when maxed out (which it never is). Less powerful NAS CPUs may be more efficient at idle, but they get worked harder. Plus, if you want to run VMs or do anything beyond basic file storage, a more powerful CPU is desirable. Drives will be under 6w idle and about 9w when reading. SATA SSDs are probably more like 2w idle and 8w active.
I tend to switch the unit on whilst I’m working and then shut it down when finished. I don’t need 24/7 access to video files. I do occasionally spin up VMs for web development, so this is a flexible setup for me.
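To put those figures in context, here's a rough running-cost sketch. The overhead figure for board/fans/PSU losses and the electricity tariff are assumptions I've made purely for illustration:

```python
# Rough annual electricity cost from the wattage estimates above.

cpu_idle_w = 20
drive_idle_w = 6
num_drives = 4
overhead_w = 25          # assumption: motherboard, RAM, fans, NIC, PSU losses
tariff = 0.30            # assumption: cost per kWh in your local currency

idle_w = cpu_idle_w + num_drives * drive_idle_w + overhead_w   # ~69W

def annual_cost(watts, hours_per_day):
    return watts / 1000 * hours_per_day * 365 * tariff

print(f"Idle draw ~{idle_w}W")
print(f"Running 24/7:      ~{annual_cost(idle_w, 24):.0f}/year")
print(f"8h, weekdays only: ~{annual_cost(idle_w, 8) * 5 / 7:.0f}/year")
```

Switching the box off outside working hours cuts the bill to roughly a quarter, which is why the on-demand usage pattern described above keeps an older Xeon viable.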
It has one great feature: cooking hard drives. I have one in use and I hate it because of this.
Would have been better with RAID 10 than RAID 6: same availability but better write and read speeds.
Not the same fault tolerance or flexibility to expand the array though.
RAID 10 is garbage. If you lose a certain 2 drives, you lose your data. With RAID-Z2, any 2 drives can fail. There is more computational overhead, but that's negligible on any 64-bit CPU.
Awesome project, as always. Thumbs up and greetings from Bavaria 😊.
P.S. Missing Erin 😅 bonded with network cables 😂
I picked mine up for $15 USD local to me.
Why did you omit the choice of SATA 2.5" SSDs when first discussing the setup?
At the time of the build, there were no 8TB SATA 2.5” SSDs. I needed the capacity.
Can you mount a standard mATX motherboard in these? Or is it fully proprietary?
Mount yes, but some rewiring will need to be done.
Or be lucky and find a Dell PowerEdge T410 (6-core CPU, 32GB RAM) for $50 (Australian) on Marketplace. I have the option to put in another CPU and 32GB more RAM, but after I filled the 6 drive bays with 6TB SAS drives, I'll leave it... for the time being.
P.S. I also took out the CD drive and put in a 250GB SSD for the OS.
Very nice 👌🏻
Thanks 😊
Great video! I just turned an old Dell laptop into a Linux server. It has a USB 3.0 port, and I was initially thinking I'd just connect a DAS full of my drives to that and call it a day, but what other type of connection could I make to have the drives run better, if that's even possible? The laptop has an optical drive I could remove, which would free up a port; the spec documentation says that one is SATA 1.5Gbps. I see some people have removed their wifi hardware and replaced it with an M.2 E-key 2.5Gbps ethernet adapter. Running sudo lspci -vv showed the width for the wireless port is 1, which I believe means it has the capability to be swapped. I was also thinking maybe I could use the SATA 6Gbps port currently used by my internal HDD as the connection to the DAS somehow?
Out of those 4 connection options, how could I best utilize this machine to be the best NAS?
Always challenging with a laptop. USB 3.0 should give you a 5Gbit connection to a DAS. You could get something like a Terramaster D4-300, giving you 4 drive bays which will be exposed to the host machine as individual drives. The D4-300 is USB 3.1 (10Gbit), but even running at the slower 3.0 speed, there should be enough bandwidth for four spinning disks. With a 2.5Gbit ethernet connection, it would give reasonable speeds for a single user.
The cost of buying the DAS enclosure is not far off the cost of the build I did in this video though, so I'd consider that carefully. Laptops aren't ideal for this sort of application.
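A quick bandwidth budget shows why the slower USB speed doesn't matter here - the per-disk throughput and overhead factors are rough assumptions, not benchmarks:

```python
# Which link limits a 4-disk USB DAS? Decimal MB/s, rough overhead factors.

MBS_PER_GBIT = 1000 / 8

disk_mbs = 180                                     # one 3.5" HDD, sequential
links = {
    "4x HDDs combined":  4 * disk_mbs,
    "USB 3.0 (5 Gbit)":  5 * MBS_PER_GBIT * 0.8,   # ~80% after protocol overhead
    "2.5 Gbit ethernet": 2.5 * MBS_PER_GBIT * 0.9, # ~90% after overhead
}
for name, mbs in links.items():
    print(f"{name}: ~{mbs:.0f} MB/s")
print("Bottleneck:", min(links, key=links.get))
```

The network, not the USB bus, is the narrowest link, so running the enclosure at USB 3.0 rather than 3.1 speeds costs nothing in practice for a single user.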
@@ConstantGeekery thanks for the detailed reply, you got a new sub! That unit was one I was looking at, since others have also used it for a similar build. What do you think I should do with that 1.5Gbps SATA port that used to be for the optical drive? I put in a new SSD for the main drive, so I could just put the old drive in that spot, but maybe there is something else I'm not thinking of...
@@tdelgado1138 thanks for the sub 😁 - the SATA port should have enough bandwidth for a spinning 2.5" disk. You could use a SATA SSD; though it won't work at full speed, that may not be so important in the overall scheme of things.
Why the need for a switch? Can't this just plug straight into the wifi router where my computers are connected anyway?
You can do that. Just be aware that the switch built in to most home wifi routers will probably be 1 gigabit/sec. I specifically wanted 10 gigabit/sec.
I clicked to hear about TrueNAS and then got to the end… and you don't cover it! Please do a TrueNAS video!
Power draw?
You mentioned that you already had the drives that you used. Did TrueNAS have a way to help you migrate that data to your new setup? For example, was it able to detect your old RAID setup and just keep going? I am looking to migrate from a Synology NAS that has two drives, and I want to find some way to avoid the long process of transferring all of that data to a temporary location, and I'd have to buy new drives or use cloud storage for that, both of which cost more money than I want to spend since I'm already buying new hardware.
I should have made clear that I wiped the drives (after backing up). There are some circumstances where an existing software raid might be picked up - I’ve had that with macOS when moving drives to external direct attached storage, but I can’t say for sure with TrueNAS.
How did you manage to run RAID 6 with TrueNAS? I thought you had to use ZFS.
RAID-Z2 is ZFS's equivalent of RAID 6.
I didn't hear him mention RAID-Z2, just RAID 6. I was hoping there was a way around using ZFS.
I'm sorry, I should have explained that.
Hi guys, second comment. Cheers from your Swiss fan.
No comments yet? Okay. I'll bite. 👏🏽👏🏽👏🏽
Attached 😉 0:43
10Gb cards are less than $50.
$200 on thumbnail... 😒
You can build the same system for that or less. (Minus storage of course, but NAS appliances are not usually sold with storage).
Your gestures in the video are annoying and too big. Maybe you don’t need my true feelings and thoughts, I'm sorry.