One of the few YouTubers I watch who talks fast enough that I don't need to use 1.25x-1.5x speed while watching 😅😅 Great build and a very insightful walkthrough of the parts selection!!
Thank you 😂🙏
RAIDZ1 on 12 drives?! I'd either do a single RAIDZ2 pool or split it into two striped 6-drive RAIDZ1 vdevs.
The important thing to realise is that when a drive fails (and it will), the pool has zero redundancy: any additional failure and you lose everything. And guess what, when you replace the failed drive it does a big rebuild (many hours), putting the existing drives under significant load.
Killer build! I hadn't seen that server chassis before. Great value for the money there.
Thank you! 😉
Awesome, brother. It's awesome you do GIF. Love the support, and this makes me want to support your channel. I use TrueNAS Scale and love it. Smart people make great videos. The Digital Life is new to me, but he is very educated on the TrueNAS Scale ZFS system. LOVE IT. KEEP THEM COMING, BROTHER.
For reference, consider the ASRock Rack MBs for home server use. They use standard desktop chipsets, but include some handy server features. For instance, the X570D4U-2L2T includes multiple 1Gb & 10Gb NICs, IPMI, SATA DOM, etc. crammed onto a MicroATX MB.
Wow interesting, thank you! I'll take a look at these boards
Do you have any cheaper or older boards that are similar that you could suggest?
@@PingPongOblong Unfortunately, I do not know of any. The ASRock Rack boards are really rather unique and as such they are $$.
You can tune the ZFS (ARC) memory usage. The default is 50% of memory.
Lawrence Systems has a video on that.
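To make that concrete, a minimal sketch for a generic Linux/OpenZFS box; the 8 GiB value is just an example, and TrueNAS may reapply its own defaults after a reboot, so treat this as illustrative:

# Show the current ARC size cap in bytes (0 means the built-in default of ~50% of RAM)
cat /sys/module/zfs/parameters/zfs_arc_max
# Cap the ARC at 8 GiB for the running system
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
# Persist it on a plain Linux install (TrueNAS manages module options itself)
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf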
Great setup, however I would not use RAIDZ1 in a 12-disk pool; the risk of losing more than one disk is too high for me. I use RAIDZ3 for my 12-disk pools... just my 2 cents. Thanks for the vid...
Yeah, that's a valid point. Well, at least you can say in a few years "told you so", when I need to restore it from Cloud ;)
@@christianlempa With your internet connection, how long would a complete restore of the pool take?
RAIDZ3 seems a bit of overkill to me. At work we run a self-built TrueNAS server for backing up Xen VMs, with around 270 TB net capacity in a RAIDZ2 setup. This allows two HDDs to fail while the array still functions. Not using fast enterprise-level SSDs as a read/write cache for the pool seems like a no-no to me, though. ZFS in the end is not THAT memory-intensive unless you do deduplication. A fast CPU plus 8 GB of RAM will be fine to serve a ZFS pool.
@@HolgerBeetz what cpu and mb do you use?
@@PingPongOblong Xeon Gold 5222 CPUs Dual CPUs and some standard Supermicro MoBo which comes with the Storage Package from their Vendor
You're brave, running raidz1 with 44TB and 12 disks!? I run raidz2 with my 8 drive 20TB TrueNAS.
I'm a maniac 🤣
I also built my TrueNAS Scale box, finally, a week ago. TBH: if I didn't have an old desktop to refurbish, I wouldn't go for your consumer PC build. A refurbished or used Supermicro board usually has a proper CPU and an IPMI port, maybe even a 10G NIC, much better ECC support, is more reliable for a 24/7 job, and needs much less power. Your CPU alone is listed at 65W; a mini-ITX board with a Xeon (8C/16T) is listed at 45W, for example. A used HP ProLiant MicroServer Gen8 is highly moddable and also offers more value on the used market thanks to iLO.
So in the end, I personally wouldn't recommend these parts, but everyone prioritizes different things, and it's nice that more people are getting into TrueNAS Scale in general.
From a performance and reliability standpoint it's better to use multiple vdevs, in your case probably 2x RAIDZ1. Still, from my point of view it's better to use vdevs with multiple parity (Z2+); otherwise, with some bad luck, you can hit unrecoverable read errors while resilvering if one drive dies.
Still good content, thank you :)
Thanks mate, I guess I will change my pool to raid-z2, might be the better decision.
I would always use ECC Memory when storing important files.
But if there isn't important data at stake, I still like to use ZFS, even without ECC.
Yeah I 100% agree on that!
Check out Lawrence's video on the (no) need for ECC when using ZFS.
One downside to using a desktop motherboard is the lack of a management interface.
If something happens and the system is hard to get to, troubleshooting can be a pain.
There are fairly reasonably priced boards from Supermicro or ASRock Rack.
I built a NAS many years ago with a 24-bay Norco case; however, I upgraded to a used Dell T630 server for around 1000 Euro, so much cheaper than a DIY build and with much, much higher quality parts: a 12G SAS3 backplane and an included controller. I upgraded to dual 14-core CPUs for 70 Euro, and it can also fit so much more RAM (128G, and I might add another 128G). Best of all, this server is so, so quiet; I had it in my apartment loungeroom. You can get other Dell servers cheaper but with fewer bays (mine is 18-bay), and I used a cheap Sun F80 WarpDrive for the Proxmox datastore. It was fun building my 1st server, similar to yours. Enjoyed your vid, Christian. Maybe you could do a vid on buying a cheap Dell server, maybe a T320 or T620 or the like, and building it into a TrueNAS server, for the people who don't know much about building hardware? Also, used SAS drives are much, much cheaper for these servers too.
A 12-disk pool with RAIDZ1!!!!! A newbie demonstrating a very, very risky setup to other newbies… good luck!!
thanks, good luck to you too *g*
High Q-U-A-L-I-T-Y content as always!! Bravo!
Glad you enjoyed it!
Very informative and interesting video. The only thing I don't think is so great is the hard disk choice, in terms of price (€) per TB including shipping; 14TB drives are the sweet spot at the moment. In addition, they come with a 5-year warranty, are helium-filled and faster, and with only 5 hard drives running 24/7 the power consumption is lower.
I found this was one of the best price per TB values. Sure, I could save a little bit, but I wanted to see how this big pool of HDDs performs ;)
I have two TrueNAS systems in my home network, and I use ZFS replication between them for the first copy. I also have a sync on the same data to a cloud provider for a second copy and off-site storage. The local replication duplicates the snapshots I have, and the off-site provides disaster recovery.
Sounds like a great setup!
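For anyone wanting to replicate this pattern by hand, a minimal sketch of snapshot-based ZFS replication with made-up pool, dataset, and host names (TrueNAS can also configure this through its Replication Tasks UI):

# Take a snapshot on the primary NAS
zfs snapshot tank/data@weekly-01
# Initial full send to the second box
zfs send tank/data@weekly-01 | ssh backup-nas zfs receive backuppool/data
# Later, send only the changes since the last common snapshot
zfs snapshot tank/data@weekly-02
zfs send -i tank/data@weekly-01 tank/data@weekly-02 | ssh backup-nas zfs receive backuppool/data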
8:11 That's because (at least in my country) ECC memory is not as readily available as regular memory, and it tends to be a lot more expensive, as much as twice or three times the cost of the regular kind.
No, I don't believe it's just the availability and price. It's because some IT guys just like to argue about everything...
Great video as always. I work in Enterprise Infrastructure and we have seen multiple drives fail before at nearly the same time and the added strain of a typical rebuild on the other drives increases the likelihood of another drive failing. As such, I would recommend at least ZFS RAIDZ2.
Thanks mate!
You really should not use RAIDZ1 on that amount of raw storage. If one drive fails and you swap it to rebuild the volume, you put a lot of pressure on your remaining drives, and for a long time (because of the huge drive capacity). There is a real chance that another drive fails during the rebuild process, and since you are only using RAIDZ1, all your data will be lost.
Only use RAIDZ1 for small deployments (4 drives or fewer with low capacity) and have good backups.
I would suggest at least RAIDZ2. And as always: RAID is not a backup.
Still a good video; I have a similar setup myself, although I run TrueNAS virtualized on Proxmox with PCIe passthrough of the HBA.
Hello 👋, can you tell me why I see "connection lost" on TrueNAS when I use iSCSI with Proxmox? iSCSI works perfectly, but TrueNAS shows me "connection lost" for IP 192...... Please 🙏 I have searched for a solution but haven't solved it.
In the US there are NORCO chassis with the same interior, but all of them are from China. A Supermicro motherboard will cost the same amount of money, but those are Intel-only. For Ryzen CPUs there are ASRock Rack server boards available. The main reasons to buy a server board are ECC memory and IPMI.
I have a 24-bay Norco case but the airflow is terrible... I bought a used Dell T630 instead; all up, much cheaper than a DIY build like my 1st server.
@@valleyboy3613 Airflow mostly depends on what fans you use. Stock Dell fans are very loud, and in a Norco case you can use whatever 80 or 120 mm fans you want.
(You maybe should have mentioned the reason why half your RAM is already in use by the cache. That's normal; it automatically takes up to 50% of any amount of RAM.)
Thanks for sharing!
great content! I just upgraded my Synology to 48TB
Oh nice! Same capacity :D
Unbuffered ECC is the best/fastest, but you can double the accessible RAM amount if you use buffered ECC. Buffered ECC is a bit slower, though. If you have enough RAM slots, you will be OK with unbuffered...
Good video! Thanks. Now, one reason to go with an older Supermicro 4U system is that you'll get the motherboard, CPU, SAS controller and RAM all for less than the cost of buying all-new components. What you built is great, but for the same amount of money you could get the same results. Also, the 4TB HGST Ultrastar 7K4000 might be a cheaper option, but you can't go wrong with WDs.
Thank you! Also sounds like a great set up
Was 48TB considered massive 2 years ago? Could've sworn I was pushing half a petabyte 2 years ago.
LOL. I just set up a thing like that at work today with 42 TB :D And the new TrueNAS Scale is really cool!
Hehe nice ;)
Even the co-founder/current developer of ZFS doesn't require/encourage people to use ECC, so I don't see a necessity for it. There's also a Hacker News thread on this topic. Nonetheless, I enjoyed your recent videos. The Proxmox Packer one was really awesome; I combined it with a GitLab pipeline and now it spits out fresh new images once a week.
Sigh, why don't you stop arguing about ECC? It's recommended by iXsystems in the official docs, and by any IT professional. Btw, thanks for the positive feedback, but you need to understand that when you make a video like this, you can't skip over ECC.
@@christianlempa It wasn't meant to be rude. I thought it was worth mentioning it, since most of the concerns about ECC are regarding ZFS. Have a nice day anyway.
@@christianlempa Coming from a guy using RAIDZ1 on a 12 disk array production machine :)
@@deckardstp yeah don't worry, I just went over this discussion too many times, it's all good 😉
I needed a network storage array, so I started down this road. I tried to find an affordable solution with ECC, but was told not to use Ryzen because it does not support hardware transcoding for a Plex server, so I would need a secondary video card, and I really did not want that. I am looking to build a 150TB array. Older Xeons did not have Quick Sync, and I was unable to find any Intel Atom processors available, so I ended up going with an i3 and non-ECC memory.
Great video, already looking forward to the 2nd video. I'm planning to install TrueNAS scale on my QNAP...
Coming soon! Thank you ;)
Hi Christian,
do the HDD status LEDs in your Inter-Tech enclosure work out of the box with TrueNAS Scale? (Does it need any additional wiring, or is this a native SAS feature?)
In the next days I would like to send out my shopping list, with Inter-Tech or Supermicro CSE-216 cases...
Thanks for the time you put into your YouTube channel, it is a great starting point day by day for my me-time... 🤟
Cool build. I built a couple of Xpenology HP MicroServer NASes. I also added a quad-port Intel NIC and a USB-C card in each.
Thank you! Sounds like a really cool set up!
RAIDZ1 on 12 disks isn't recommended, and you're limited to one vdev's speed and IOPS, even with RAIDZ2. In your setup it is better to make two 6-disk RAIDZ1 vdevs and combine them into a stripe. That is supported in ZFS.
There's also some SMB tuning... see Linus Tech Tips on Samba tuning.
What should I tune here?
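Hedged answer to that question: the tweaks usually shown in those videos are plain smb.conf settings, which TrueNAS exposes as SMB "Auxiliary Parameters". An illustrative example only; measure before and after, since modern Samba defaults are often already fine:

# Let SMB use several NICs/queues in parallel
server multi channel support = yes
# Larger socket buffers, commonly suggested for 10G links
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072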
Thanks a ton for the great content. I have found your videos quite helpful as I find my way around this “new world” of self-hosting / home lab setup.
In a Proxmox + TrueNAS or OMV setup, what is the best approach for the ZFS storage pool? Is it best to set up the zpool in Proxmox for use by the NAS software, or is it better to set up the zpool from within the NAS software?
20:20 When you connect two devices directly, do you use a cross-over cable? I have a TrueNAS Mini X (diskless) on order and already have drives, plus an unmanaged switch that will connect the iXsystems machine and an Apple TV, which is also wired to an Eero 6 mesh network. My hope is to be able to watch media from the NAS on the TV even if the internet is down. Eero seems to need cloud/internet :(
I basically just did exactly what you did! Great minds!
Sounds great! Thank you ;)
Great content, I'm planning to build my own little TrueNAS box due to a QNAP failure. The section where you talked about networking configuration and speed tests piqued my interest. Have you covered any of this setup previously in your other videos, or do you know a good place to read up on this?
Not yet, storage is not actually my main interest tbh, but I make new videos when I start a new storage project
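A practical starting point before reading further: test the raw network path with iperf3, so disk speed doesn't muddy the numbers. The IP below is a placeholder:

iperf3 -s                  # run on the NAS
iperf3 -c 192.168.1.10     # run on the client: throughput client -> NAS
iperf3 -c 192.168.1.10 -R  # reverse direction, NAS -> client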
Hi, please make a video on how to install a web server with Apache, PHP, and MariaDB or MySQL on TrueNAS Scale.
Another disadvantage of the Ryzen 7 5750G is that it's on PCI-e 3.0 instead of 4.0, and I believe it also has fewer PCI-e lanes.
Ryzen 7 PRO 5750G supports ECC memory
Didn't work for me
I have this CPU running, but with the current PVE kernel, ECC is not reported correctly. It should be fixed around kernel 5.17.
I tested the ECC function with the same mainboard in Win10.
@@peterfeurstein6085 Yeah, I guess it's down to the Linux kernel; I had no chance of getting it working. If you see the same on PVE, hmm. Glad that I replaced it.
Cool build but how does the power supply breathe? The case doesn't seem to have any ventilation holes for it
Inter-Tech, I have some cases of this brand here.
They are really nice and well priced.
You can find them in the Netherlands (I ordered them through Amazon Germany).
Did you also order the rails for the case?
Interesting, yeah I also ordered the rails from them.
Can you provide more information on the fan controller you touched on in the video? I've followed your build spec to the letter and the fan controller is not something listed on your kit page. Thanks
If you run into issues with that Intel 10G NIC, pick up a Chelsio.
Great video! Can you please do a tutorial on TrueNAS Scale ACL permissions?
Thanks mate! Well maybe at some point, i'll put that on the backlog
Hi, I'm from Indonesia. Nice content.
You are always creating high-quality content, and before watching this video I had already installed TrueNAS Scale on my IBM System x3100 M4.
I'd be interested in another TrueNAS Scale video.
Thanks, Christian.
Thank you! Of course i'll do a second video about Kubernetes ;)
I also came to complain about your RaidZ1 choice.. Otherwise I approve 😂
Thanks! :D Yeah it's a valid point, I need to admit ;)
Thanks Christian. Keep it up the good work.
Thank you! Of course, I'll do ;)
AFAIK "Logic Case" is selling the same cases as Inter-Tech if the latter aren't available in some countries
Interesting! Thanks!
I bet the ECC memory fight is because ECC memory costs more. Great video, thanks.
I think so too :/ thank you bro!
Interesting. Cheap configuration, big result. Don't stop :o)
Thanks, will do!
Fascinating! I'm not sure if I missed this in the video, but why wouldn't you go for the maximum available capacity per drive, say 18 or 20TB, to optimize costs and maximize the capacity per drive slot? Or was your main point to have as many drives as possible for the enhanced transfer speed?
I quite recently tried to put all my data onto one giant HDD for archival purposes. rsync kept failing on verification. It turned out in the end that one of the (non-ECC) memory modules in the system was failing. Without verification I would not have known, and I would have ended up with broken data. ECC all the way if you need reliable storage.
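The verification step is worth spelling out: rsync only compares size and modification time by default, so a full content check needs the checksum flag. Paths here are placeholders:

# Initial copy
rsync -av /source/archive/ /mnt/bigdisk/archive/
# Verify: re-compare by checksum; any file listed differs in content
rsync -avc --dry-run /source/archive/ /mnt/bigdisk/archive/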
How much power does it consume on average? How fast is your Internet speed? Thanks
+1 for this question. I'm also very curious about the power consumption
How do you use the 48TB of storage space? Do you have a data replication server somewhere else? I don't like data replication services; I chose a power-switched USB hub with large-capacity SATA HDDs attached, to periodically back up the most important files.
I've been seeing some fairly cheap 24-bay Supermicro combos (case, CPU, mobo and RAM) on eBay and have been thinking about picking one up. This is a nice setup though, and that is a nice case; I hadn't heard of that brand before.
Great video. Nice build. Like you said software is also important. I'm wondering if you use any software to manage your personal photo/video libraries. If you do, what are they ?
For personal stuff I use Google Drive
Truenas comes with support for packages.
Nextcloud is one that I recommend!
Great video! Are you planning on making a video on different vdev setups?
Thanks! Not yet, it's not my primary expertise.
12 drives with RAIDZ1? Can't tell if it's brave or not smart... but... you do you man.
Let's call it Brave, okay? 😅
Extremely risky….
Thanks for the video. Unfortunately, I can't use TrueNAS since it doesn't have a Delete permission in its ACLs, which is needed in our case, so I'm stuck with Windows or Xpenology, since Synology has this option in the advanced permissions section.
In your file transfer test you say it doesn't reach 10 gigabit, but the transfer rate shows ~700 megabytes per second, which is 5600 megabits or 5.6 gigabits per second, so even the first transfer is extremely fast.
Absolutely, it's already fast without the cache.
I just built my first TrueNAS Core system (debating starting over and installing TrueNAS Scale instead) and have ~31TB of drives in mine, but have only set up a pool with 4x 3TB WD RED drives so far, for media storage and streaming. For movie streaming and uploading to the server, would you recommend I upgrade from my dual 1Gig onboard NICs, as I see you went with a 10Gig setup? I'm wondering if 10Gig would be overkill for my usage, but if it isn't, I'm curious what the cheapest compatible setup might be for a 10Gig connection (I would need a 10Gig NIC, a 10Gig or multi-gig switch, and either CAT6A copper or SFP+ transceivers, which seems like quite the investment), since it doesn't seem like there's a way to team/bond my two 1Gb onboard NICs (only aggregate them, assuming I have a switch that supports aggregation as well). Or should I just consider investing in multi-gig (a 2.5Gb NIC and switch)? I do plan on creating other pools for backups, and possibly a pool for running VMs down the road, if that makes any difference.
Please do not use RAIDZ1 on a pool with that many disks; there is a good chance you will have 2 disks fail within a few days of each other, especially if you bought them at the same time.
If you lose a 2nd disk before the first has resilvered, bye-bye data.
Can you run Plex GPU transcoding using TrueNAS Scale? It's never simple to run Plex in containers and access the GPU. Also, can you install the latest version of Plex? Usually the version provided is pretty old.
Do you have a local backup server? How do you back up something of this scale as a person who isn't a company?
In ZFS the total pool IOPS are equal to one disk's IOPS * the number of vdevs, and since you have a single vdev, the IOPS are equal to those of a single WD Red Plus 4TB. So your config is pretty bad from a performance point of view too, on top of being quite unreliable. The only thing that's not so bad is the streaming performance (as long as you are using a large block size).
With 12 disks, my choices would probably be: for reliability, 2x (Z2 with 6 disks); for performance, 3x (Z1 with 4 disks). You can also avoid parity arrays entirely and get even more IOPS, but the usable space decreases drastically.
P.S. Unless things have changed from the past (I have not played with SCALE yet), that 1TB NVMe drive is completely wasted as a boot drive. Ideally the OS should sit on a small SATA SSD (best a couple in a mirror) and that NVMe would be better used as a level 2 ARC cache.
Thanks, great feedback. I will probably change my config to something else once I have the chance; I still haven't decided what exactly, but 2x Z2 would mean I'm losing 4 disks, which is 30%. Seems like a lot to me. I will probably go with a single Z2 or even Z3 for the 12x4TB as a big data pool for backups and video files, and add a second vdev with 4x SSDs. What do you think of that idea?
@@christianlempa The recommended number of disks per vdev is between 3 and 9, and more than 12 is not recommended, so a single Z2 with 12 disks is pretty much an explicitly unrecommended configuration. I know that losing all that space sucks, but this is the price if you want to do things the right way. Since you are not storing mission-critical data (right? 😛) you can configure the pool with two 6-disk Z1 vdevs and a very frequent backup 🙂.
The question would be why it's not recommended apart from being not so performant. Anyway I might give the 2x Z2 idea a shot.
@@christianlempa For many reasons. The most obvious is rebuilding time (it could take a week or more, and during the process you could lose more disks because they are under high stress), but also space efficiency (due to parity and padding, the complexity increases) and other joyful reasons (like further performance degradation) that you can discover by deep-diving into the technical documentation if you want 🙂
Of course, once you are aware of all the risks and limitations, if they are still within your "margins of acceptance" you are free to configure your pool as you wish; I just wanted to make you aware 😉
@@dariopetrusic4215 No worries mate, I appreciate useful feedback! That makes totally sense to me. I guess I'll go with 2x z2 then.
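To make the options in this thread concrete, a hedged sketch of the layouts at the command line, with placeholder pool and device names (on TrueNAS you would normally create this through the UI instead):

# Two striped 6-disk RAIDZ2 vdevs (the "2x Z2" plan)
zpool create tank raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl
# Optionally repurpose the NVMe drive as L2ARC, as suggested above
zpool add tank cache nvme0n1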
I understand that you are running Proxmox and TrueNAS Scale on different servers. How do you add the ZFS pool from TrueNAS Scale to Proxmox?
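One common answer, sketched with made-up names and addresses: share a dataset from TrueNAS over NFS and register it as storage on the Proxmox side:

# On the Proxmox node: mount an NFS export from the TrueNAS box as VM storage
pvesm add nfs truenas-nfs --server 192.168.1.10 --export /mnt/tank/vmstore --content images,backup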
How do I buy a case there? I can't find a way to register on the Inter-Tech website itself.
About the storage controller: I never really understood those.
At the moment, I'm thinking about upgrading my existing Fujitsu Primergy TX1320 M3 server from the standard 4 to 8 connectable disks.
In the official data sheet of the server, some controllers are listed for upgrading the SAS connections; however, I don't really understand:
do I have to use a specific one from Fujitsu, or is any RAID controller with the same connections usable?
I would be happy if someone could explain to me what is important when choosing the right RAID controller. THX
Nice build and very nice video. Thank you very much. I was also considering getting the same case. However, I was a bit worried about the airflow for an ATX power supply with a fan on the bottom since there seems to be little to no space to actually suck air in. In contrast to the 4U-4424 which does not have this metal separator between the mainboard and the PSU. Is there enough space to get some air into the PSU or did you solve this in some other way?
I haven't seen any issue with temperature, neither in the case or the components themselves, nor at the power supply. And it was really hot this summer.
@@christianlempa Thank you for the quick response. Seems like I'll be trying my luck with this case then. 🙂
What do you think about DDR5 memory? There is no real ECC with it, just on-die ECC, which works differently.
If you're after performance, a pool should have no more than 8 disks; that's usually the sweet spot.
Nice video. I've set up TrueNAS Scale on my old QNAP, which I use to back up my Synology. I can't afford to go 10Gb, but I've got some 2.5Gb USB NICs that work well with both systems, so I get a bit of a speed bump.
Very nice! Didn't know you can even install that on a QNAP, but sure.. why not :D
@@christianlempa Not all QNAPs, but those with a USB DOM and HDMI. Due to all the QNAP security concerns, I opted to move away from their OS. TrueNAS works great on it.
QNAP in general is garbage. The software sucks, and it has been hacked several times.
@@AM93000 Exactly, that's why I put TrueNas on it.
Are you crazy? :o RAIDZ1 on a 12-drive array... you must hate your data!!! It is very common for a 2nd (or third, or more) drive to fail during a rebuild, which guarantees data loss, and depending on when and how badly it fails, you could very well lose ALL of your data... On a 12-drive RAID array I wouldn't ever consider using anything less than RAIDZ3. In reality though, I would use two 6-drive RAIDZ2 arrays.
we can argue about raidz2, kinda makes sense, but raidz3? come on...
Hey Christian, do you have any idea what something like this costs to run per day? How many kWh? I'm thinking of doing something like this myself, but with the price of electricity these days it's putting me off at the moment. Great content as always!
Thanks! The power consumption is mostly ~120-130W
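For anyone doing the cost math on that figure: a steady 125 W is 0.125 kW × 24 h = 3 kWh per day, so at an assumed 0.40 EUR/kWh that's roughly 1.20 EUR per day (the electricity price is only an illustrative figure; plug in your own tariff).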
Thanks for the video!.. Did you need to flash your HBA card to IT mode?
Hey Michael, no, the AsRock board exposes this setting in the BIOS, so you can switch from RAID to HBA mode easily.
Even though I've already got a running TrueNAS Scale build (from an old gaming PC) with a similar amount of storage.
Cool!
Good video, but why did you not go the easy route? I have a Dell R420 with 196GB RAM, dual 10c/20t CPUs, and 4x 12TB NAS drives, altogether at a cost of just under $1200.
The power supplies and fans produce too much noise, and the server rack is right beside my YT studio. So I needed to find a silent case with hardware that is also efficient.
Because with no ECC, at least for me... I had big problems with lag, and I want it to run 24/7 no matter what.
Hello YouTube, here are the steps to get apt working on TrueNAS if you don't have the right execute permissions.
First, connect to the shell of your server, either over SSH or directly on the machine.
Then type `chmod +x /usr/bin/apt*` to give the apt binaries execute permission again. Now you need to update your repositories with sudo apt update (or apt update), then do apt-get upgrade. Then, if you want, and I highly recommend this, add the official Ubuntu repos to the sources.list. You can do this by typing:
nano /etc/apt/sources.list
Now you can edit where you want to download Debian-based applications from, and this will give you endless possibilities for how you want to use your server. You can even install and configure BeEF, which is a tool meant for ethical hacking, by doing:
sudo apt install -y beef-xss
Note: you can only install beef-xss if you added the official Kali repos to your sources.list.
If you get an error while doing sudo apt update, allow it through ufw and accept the unknown repos so you can use the Kali repo too.
Thanks for reading this long comment, and I hope it helped you out.
Nice build. I'm confused, did you finally use a dGPU for the Ryzen 5 3600 or not ?
Thanks mate! I needed to plug in a graphics card to do the installation; once I had installed the system, the AsRock board supports booting without a GPU by default. So I could easily replace the graphics card with the network card, and everything still worked.
@@christianlempa Ok. Thanks
At 20:53, you actually mean 10Gbit. Or 1GB. 700MB/s is a lot more than one gigabit per second :)
Yep thats true ;)
the goat
Pretty sure this WD is SMR. It will cause problems with ZFS.
Actually the Ryzen Pro APUs support ECC.
ASRock boards are some of the best for home labs
Excellent content as usual! From the video it seems like an Adaptec ASR-71605; can you please confirm/share the exact model of the RAID controller? Thank you in advance.
Thanks! You should find the exact model on the kit page.
You said you'd link the zfs video in the description, but it isn't there.
What? Let me fix that! Thanks for the heads up :)
Is it better to have many small HDDs or a few larger-capacity ones?
For example, 6x14TB in RAIDZ1 vs 11x8TB in RAIDZ2, both giving approx 70TB of usable space.
It's better not to use as many hard drives as I did. At some point it becomes slow and unusable. Split the pools into multiple smaller ones.
Thanks for the great video! I am thinking about using the same Motherboard and CPU Combo. But I was wondering... You did not mention any GPU. The AMD Ryzen 5 3600 has no integrated graphics. Didn't you use any cheap GPU? Thanks for the advice!
You're welcome! I used a cheap GPU to install the system, but afterwards you can remove it as the AsRock MB supports boot without a GPU by default. Pretty nice 😀
@@christianlempa Many thanks for getting back to me on that question! I will buy a cheap GPU then. Thank you for the great content! Greetings!
Can all the data also be uploaded automatically to Google Drive, so that if a hard disk is damaged, we still have a backup of all data in the cloud?
I think you can do that
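TrueNAS's built-in Cloud Sync tasks cover exactly this (they wrap rclone under the hood); for a hand-rolled equivalent, a sketch with a made-up remote name and path:

# One-way push of a dataset to a previously configured Google Drive remote
rclone sync /mnt/tank/important gdrive:nas-backup --progress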
Nice video Christian. As far as I know, TrueNAS, at least the versions built on BSD, is not so much about CPU but much more about available RAM. This is why I skipped it at the time and went for a Debian server with btrfs; it's less memory-consuming and still has snapshot functionality. My primary goal, however, was to build energy-friendly, so I went with a J3455-type Intel processor onboard, 2x 1Gb LAN, and 16GB RAM. Over the network wirelessly on Samba this is of course not very good in transfer speeds. On NFS directly to my main PC I'm getting really close to 1Gb/s on bigger files, which is OK for me. I chose a RAID10 solution to maintain write speed at the expense of disk space, which was OK for my use case. Of course your use case is completely different, and so is the idea behind it, but with current energy prices it wouldn't hurt to look into more energy-efficient alternatives rather than just high specs. I'm running it with several Docker containers and 8 users without breaking a sweat. The whole thing idles at around 15 watts and draws a maximum of around 28 watts; not bad for a 24/7 running system. Even running Plex with 2 users works OK, without an additional video card. So far it has been rock-solid stable, and the board I have natively supports headless usage.
I think I would have put the hard drives in every other row, because of the temperatures. That way they don't sit right on top of each other and have a bit more room to breathe.
Scale is not up to snuff (yet) as far as performance goes, compared to TrueNAS 12/13...
How do you back up this machine?
Snapshots and Cloud Backup
Nice server !
Do you have any shared storage solution for a Proxmox cluster ?
In my homelab I have NFS shares but the NAS becomes a Single Point Of Failure 😕
Maybe the scaling system of TrueNAS would help 🤔
Thanks! No I'm just running two machines, the TrueNAS and Proxmox.
Great video... but you set up 12 drives in a single vdev, RAIDZ1? I'd seriously reconsider that decision, unless none of the data going to it is all that important; if a single drive fails, and then another fails during the intense 4-hour rebuild (as can happen)... well, you'd lose everything. And no one wants that! (Use RAIDZ2, minimum!)
Fair point, I probably will.
Earned a sub.
Thanks ;)
Thanks for the great video. One thing I do not understand is why the APU 5750G didn't work with the ECC RAM; AMD says all APUs in the PRO line support ECC RAM?
Good question, I was confused by that too. I think it has something to do with the kernel drivers, which don't support this CPU; other than that, I don't know why it's not working.
@@christianlempa ... that was a good hint. I found a blog post which describes exactly that problem. It seems that the EDAC driver doesn't support the AMD APU, but there is a chance to get that fixed. I'll have a closer look at the blog. Unfortunately I can't link it here :( If you search for "ecc-on-amd-cezanne" you'll find it. Cheers, Ralph
@@ralphmeinel9990 I think I also found this post; the problem is that all modifications to the kernel will be overwritten and are unsupported by the TrueNAS firmware. It's a bit unfortunate :(
@@christianlempa I did a little research and found out that the EDAC driver supports the AMD APU 5750G/5650G in kernel version 5.18.x. TrueNAS Scale ships with a 5.10 kernel, so we have to wait until there is an update.
But the current release does support the "older" APUs. As I am currently building a NAS, I gave it a try and can confirm that the AMD APU 4650G works with ECC memory :)
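For anyone repeating this experiment, a quick way to check whether the EDAC driver actually picked up the memory controller on a Linux box (the sysfs layout can vary by kernel, so treat this as a sketch):

dmesg | grep -i edac                          # EDAC driver messages from boot
ls /sys/devices/system/edac/mc/               # a registered controller shows up as mc0, mc1, ...
cat /sys/devices/system/edac/mc/mc0/ce_count  # corrected (single-bit) errors since boot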