Unbuffered ECC is the best/fastest, but you can roughly double the installable RAM amount if you use buffered (registered) ECC. Buffered ECC is a bit slower, though. If you have enough RAM slots, then you will be OK with unbuffered...
Fascinating! I'm not sure if I missed this in the video, but why wouldn't you go for the maximum available capacity per drive, say 18 or 20TB, to optimize costs and maximize the capacity per drive slot? Or was your main point to have as many drives as possible for enhanced transfer speed?
How do you use the 48TB of storage space? Do you have a data replication server at a different location? I don't like data-replication services; I chose a power-switched USB hub and attach large-capacity SATA HDDs to periodically back up the most important files.
Hi, I'm from Indonesia. Nice content, you always create high-quality content. Before I watched this video, I had already installed TrueNAS Scale on my IBM System x3100 M4. I'd be interested in another TrueNAS Scale video. Thanks, Christian.
Thanks a ton for the great content. I have found your videos quite helpful as I find my way around this “new world” of self-hosting / home lab setup. In a Proxmox + TrueNAS or OMV setup, what is the best approach for the ZFS storage pool? Is it best to set up the zpool in Proxmox for use by the NAS software, or is it better to set up the zpool from within the NAS software?
Even the co-founder and current developers of ZFS don't require/encourage people to use ECC, so I don't see a necessity to do so. There's also a Hacker News thread on this topic. Nonetheless, I enjoyed your recent videos. The Proxmox Packer one was really awesome; I combined it with a GitLab pipeline and now it produces fresh new images once a week.
Sigh, why don't you stop arguing about ECC? It's recommended by iXsystems in the official docs, and by any IT professional. Btw, thanks for the positive feedback, but you need to understand that when you make a video like this, you can't skip over ECC.
@@christianlempa It wasn't meant to be rude. I thought it was worth mentioning it, since most of the concerns about ECC are regarding ZFS. Have a nice day anyway.
(You maybe should have mentioned the reason why half your RAM is already in use by the cache. That's normal; the ZFS ARC automatically takes up to 50% of any amount of RAM.)
About the storage controller: I never really understood those. At the moment, I'm thinking about upgrading my existing Fujitsu Primergy TX1320 M3 server from the standard 4 connectable disks to 8. The official data sheet of the server lists some controllers for upgrading the SAS connections; however, I don't really understand: do I have to use a specific one from Fujitsu, or is any RAID controller with equal connections usable? I would be happy if someone could explain to me what is important when choosing the right RAID controller. THX
Can you provide more information on the fan controller you touched on in the video? I've followed your build spec to the letter and the fan controller is not something listed on your kit page. Thanks
You really should not use RAIDZ1 on that amount of raw storage. In case one drive fails and you swap it to rebuild the volume, you put a lot of stress on your remaining drives, and that for a long time (because of the huge drive capacity). There is a real possibility that another drive fails during the rebuild process, and since you are only using RAIDZ1, all your data will be lost. Only use RAIDZ1 for small deployments (4 drives or fewer with low capacity) and have good backups. I would suggest at least RAIDZ2. And as always, RAID is not a backup. Still a good video; I have a similar setup myself, although I virtualized TrueNAS inside Proxmox with PCIe passthrough of the HBA.
I quite recently tried to put all my data onto one giant HDD for archive purposes. rsync kept failing on verification. It turned out one of the (non-ECC) memory modules in the system was failing. Without verification I would not have known and would have ended up with broken data. ECC all the way if you need reliable storage.
I've been seeing some fairly cheap 24 bay supermicro combos (case CPU mobo and ram) on ebay and have been thinking about picking one up, this is a nice setup though and that is a nice case. Hadn't heard of that brand before.
RAIDZ1 on 12 disks isn't recommended, and you're limited to one vdev's speed and IOPS, even with RAIDZ2. In your setup it is better to make two 6-disk RAIDZ1 vdevs and combine them into a stripe. That is supported in ZFS.
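A rough sketch of that layout (the device names sda..sdl and the pool name tank are hypothetical; the create command is commented out so nothing is destroyed by accident):

```shell
# Two 6-disk RAIDZ1 vdevs in one pool; ZFS stripes writes across all
# top-level vdevs, so this roughly doubles IOPS vs. a single 12-disk vdev.
# zpool create tank \
#   raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
#   raidz1 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
# Usable capacity: 2 vdevs x 5 data disks x 4 TB (before ZFS overhead):
echo "$((2 * 5 * 4)) TB usable"
```

The trade-off versus a single RAIDZ2 vdev: same number of parity disks overall, more IOPS, but each 6-disk group can only survive one failure.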
I needed a network storage array, so I started down this road. I tried to find an affordable solution with ECC, but was told not to use Ryzen because it does not support hardware transcoding for a media server, and I really did not want to need a secondary video card. I am looking to build a 150TB array. Older Xeons did not have Quick Sync, and I was unable to find any Intel Atom processors available, so I ended up going with an i3 and non-ECC memory.
Nice video. I've set up TrueNAS Scale on my old QNAP, which I use to back up my Synology. I can't afford to go 10Gb, but I've got some 2.5Gb USB NICs that work well with both systems, so I get a bit of a speed bump.
@@christianlempa Not all QNAPs, but those with a USB DOM and HDMI. Due to all the QNAP security concerns, I opted to move away from their OS. TrueNAS works great on it.
May I ask a question about TrueNAS Scale? Here it is: my SSH service is enabled with default settings. It used to be fine and stable, but recently, after I upgraded to the latest 22.02.0.1 version, I constantly lose the SSH connection to my TrueNAS Scale. A glance at the console shows it frequently prints: strixnas kernel: audit: audit_backlog=65 > audit_backlog_limit=64 May 1 21:26:07 strixnas kernel: audit: type=1400 audit(1651411567.181:21541034): apparmor="DENIED" operation="ptrace" profile="docker-default" pid=17459 comm="apps.plugin" requested_mask="read" denied_mask="read" peer="unconfined"
I have this CPU running, but with the current PVE kernel, ECC is not reported correctly. It should be fixed around kernel 5.17. I tested the ECC function with the same mainboard in Win10.
@@peterfeurstein6085 Yeah I guess it is based on Linux Kernel, I had no chance to get it working. If you have the same on PVE, hmm. Glad that I replaced it.
Great content! I'm planning to build my own little TrueNAS box due to a QNAP failure. The section where you talked about networking configuration and speed tests piqued my interest. Have you covered any of this setup previously in your other videos, or do you know a good place to read up on this?
Inter-Tech, I have some cases of this brand here. They are really nice and well priced; you can find them in the Netherlands (I ordered them through Amazon Germany). Did you also order the rails for the case?
Thanks for this great video, as I plan to update my entire home network (switch and server) beginning next year. Not sure you will answer, but I still have a MAJOR QUESTION: should I install TrueNAS Scale as my main operating system and use a Plex container in TrueNAS, or should I install Proxmox as the main OS and run TrueNAS, Plex and CM in Proxmox? As you're using both, you maybe have a point of view on this question. For the present time, I just plan to have one new server (similar to this one), but I will maybe add one more server later. Using Proxmox as the main OS could be useful to move a VM from one server to the second one.
Thank you! It really depends on your needs. If you want simplicity and just need Plex and storage, TrueNAS Scale as your main OS with Plex in a container is a solid choice. But if you're looking for flexibility and might expand later, going with Proxmox as your main OS lets you run multiple VMs, including TrueNAS and Plex. It’s great for moving things around between servers too!
Can you run Plex GPU transcoding using TrueNAS Scale? It's never simple to have Plex in containers and access the GPU. Also, can you install the latest version of Plex? The version provided is usually pretty old.
Thank you so much for sharing this video, it's very helpful. Can you tell me how I can see all my hard drives and space in my TrueNAS interface? I have about 10 TB across five drives, but I'm not seeing the full amount of disk space.
Is it better to have many small HDDs or a few larger-capacity ones? For example, 6x14TB on RAIDZ1 vs 11x8TB on RAIDZ2, both systems giving approx. 70TB of usable space.
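The usable-space math behind that comparison (RAIDZ1 loses one disk per vdev to parity, RAIDZ2 loses two; ZFS overhead and padding are ignored here):

```shell
# RAIDZ1: (N - 1) data disks; RAIDZ2: (N - 2) data disks.
echo "6x14TB RAIDZ1: $(( (6 - 1) * 14 )) TB usable, survives 1 disk failure"
echo "11x8TB RAIDZ2: $(( (11 - 2) * 8 )) TB usable, survives 2 disk failures"
```

For roughly the same usable space, the 11-drive RAIDZ2 layout trades higher power draw and more bays for the ability to survive a second failure during a resilver.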
Can all the data also be uploaded automatically to Google Drive? So that if there is damage to a hard disk, we still have a backup of all data in the cloud.
Nice build and very nice video. Thank you very much. I was also considering getting the same case. However, I was a bit worried about the airflow for an ATX power supply with a fan on the bottom since there seems to be little to no space to actually suck air in. In contrast to the 4U-4424 which does not have this metal separator between the mainboard and the PSU. Is there enough space to get some air into the PSU or did you solve this in some other way?
Great video. Nice build. Like you said software is also important. I'm wondering if you use any software to manage your personal photo/video libraries. If you do, what are they ?
Excellent content as usual! From the video, it seems like an Adaptec ASR 71605 - Can you please confirm/share the exact model of the RAID controller? Thank you in advance.
How did you configure the Adaptec asr-71605 so it detects the hard drives? I bought the same card and passed it through to the TrueNAS Scale VM. I can detect it using lspci but none of the drives are detected when I want to create my pool. Thanks.
Thanks for the video. Unfortunately, I can't use TrueNAS since it doesn't have a Delete permission in its ACLs, which is needed in our case, so I'm stuck with Windows or XPEnology, since Synology has this option in the advanced permissions section.
I just built my first TrueNAS Core system (debating starting over and installing TrueNAS Scale instead) and have ~31TB of drives in mine, but have only set up a pool with 4x 3TB WD Red drives so far for media storage and streaming. For movie streaming and uploading to the server, would you recommend I upgrade from my dual 1Gig onboard NICs, as I see you went with a 10Gig setup? I'm wondering if 10Gig would be overkill for my usage, but if it isn't, I'm curious what the cheapest compatible setup might be for a 10Gig connection. I'd need a 10Gig NIC, a 10Gig or multi-Gig switch, and either Cat6A copper or SFP+ transceivers (seems like quite the investment), since there doesn't seem to be a way to team/bond my two 1Gb onboard NICs (only aggregate them, assuming I have a switch that supports aggregation as well). Or should I just consider investing in multi-Gig (a 2.5Gb NIC and switch)? I do plan on creating other pools for backups, and possibly a pool for running VMs down the road, if that makes any difference.
So what was the model number of the 16-port SAS card? My server is on X470 and has a similar limitation on PCIe lanes. Would be helpful, many thanks.
Do you have any updates to this a year later? I'm considering building something like this for fun and for my Plex/Jellyfin server. Any recommendations for a chassis i can get in the US?
Not yet, I'm still trying to figure out what to do with my NAS project. But I'm working on some pretty heavy refresh as this project was just too power hungry for me :/
@@christianlempa Ok. The suspense is killing me. I'm eagerly awaiting the refresh; as soon as you post, I'll start buying my parts. Finding a good chassis has been hard, as Inter-Tech is German and I'm in Los Angeles.
BTW, I just bought a Sysracks 42U rack and am running my old Intel MacBook Pro as a server with some Docker containers and Home Assistant. Unfortunately, even though it has 64 GB of RAM and a 6 TB HD, laptops don't make good servers.
In your file transfer test you say it doesn't reach 10 Gigabit, but the transfer rate shows ~700 megabytes per second, which is 5600 megabits, or 5.6 gigabits per second; even the first transfer is extremely fast.
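The conversion can be checked quickly (bytes vs. bits is the usual trip-up: multiply MB/s by 8 for megabits, divide by 1000 for gigabits):

```shell
# ~700 MB/s observed in the transfer dialog:
awk 'BEGIN { printf "%.1f Gbit/s\n", 700 * 8 / 1000 }'
```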
Thanks for the great video. One thing I do not understand is why the APU 5750G didn't work with the ECC RAM. AMD says all APUs in the PRO line support ECC RAM?
Good question, I was confused by that, too. I think that has something to do with the kernel drivers, which don't support this CPU, other than that I don't know why it's not working.
@@christianlempa ...that was a good hint. I found a blog post which describes exactly that problem. It seems that the EDAC driver doesn't support the AMD APU, but there is a way to get that fixed. I'll have a closer look at the blog. Unfortunately I can't link it here :( If you search for "ecc-on-amd-cezanne" you'll find it. Cheers, Ralph
@@ralphmeinel9990 I think I also found this post, the problem is that all modifications to the kernel will be overwritten and unsupported by truenas firmware. It's a bit unfortunate :(
@@christianlempa I did a little research and found out that the EDAC driver supports the AMD APU 5750G/5650G in kernel version 5.18.x. TrueNAS Scale ships with a 5.10 kernel, so we have to wait until there is an update. But the current release does support the "older" APUs. As I am currently building a NAS, I gave it a try and can confirm that the AMD APU 4650G works with the ECC memory :)
In ZFS the total pool IOPS equal one disk's IOPS times the number of vdevs, and since you have a single vdev, the IOPS are equal to a single WD Red Plus 4TB. So your config is pretty bad from a performance point of view too, besides being quite unreliable. The only thing that's not so bad is the streaming performance (as long as you are using a large block size). With 12 disks, my choices would probably be: reliability: 2x (Z2 w/ 6 disks); performance: 3x (Z1 w/ 4 disks). You can also avoid parity arrays and get even more IOPS, but the usable space decreases drastically. P.S. Unless things have changed (I have not played with SCALE yet), that 1TB NVMe drive is completely wasted as a boot drive. Ideally the OS should stay on a small SATA SSD (best a couple in mirror), and that NVMe would be better used as level 2 ARC cache.
Thanks, great feedback. I probably will change my config to something else once I have the chance. I still haven't decided what exactly, but 2x Z2 would mean I'm losing 4 disks, which is 30%. Seems like a lot to me. I'll probably go with a single Z2 or even Z3 for the 12x4TB as a big data pool for backups and video files, and add a second vdev with 4x SSDs. What do you think of that idea?
@@christianlempa The recommended number of disks per vdev is between 3 and 9, and more than 12 is not recommended, so a single Z2 with 12 disks is pretty much an explicitly unrecommended configuration. I know that losing all that space sucks, but this is the price if you want to do things the right way. Since you are not storing mission-critical data (right? 😛) you can configure the pool with two 6-disk Z1 vdevs and a very frequent backup 🙂.
@@christianlempa For many reasons. The most obvious is rebuild time (it could take a week or more, and during the process you could lose more disks because they are under high stress), but also space efficiency (due to parity and padding increased complexity) and other joyful reasons (like further performance degradation) that you can discover by deep-diving into the technical documentation if you want 🙂 Of course, once you are aware of all the risks and limitations, if they are still within your "margins of acceptance" you are free to configure your pool as you wish; I just wanted to make you aware 😉
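The vdev IOPS rule from this thread, sketched with a ballpark figure (150 random IOPS per NAS HDD is an assumption for illustration, not a measured value):

```shell
# Pool random IOPS scale with the number of top-level vdevs,
# not the number of disks: ~1 disk's IOPS per vdev.
DISK_IOPS=150  # rough figure for a 5400rpm NAS drive (assumption)
for VDEVS in 1 2 3; do
  echo "$VDEVS vdev(s): ~$((DISK_IOPS * VDEVS)) random IOPS"
done
```

This is why 12 disks as a single RAIDZ vdev perform like one disk for random I/O, while 3x 4-disk Z1 roughly triples it.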
Please do not use RAIDZ1 on a pool with that many disks. There is a good chance you will have 2 disks fail within a few days of each other, especially if you bought them at the same time. If you lose a 2nd disk before the pool has resilvered, bye-bye data.
Nice server ! Do you have any shared storage solution for a Proxmox cluster ? In my homelab I have NFS shares but the NAS becomes a Single Point Of Failure 😕 Maybe the scaling system of TrueNAS would help 🤔
Thanks for the great video! I am thinking about using the same Motherboard and CPU Combo. But I was wondering... You did not mention any GPU. The AMD Ryzen 5 3600 has no integrated graphics. Didn't you use any cheap GPU? Thanks for the advice!
You're welcome! I used a cheap GPU to install the system, but afterwards you can remove it as the AsRock MB supports boot without a GPU by default. Pretty nice 😀
I think I would have put the hard drives in every other row, for temperature reasons. That way they don't sit right on top of each other and have a bit more room to breathe.
Hey Christian, do you have any idea what something like this costs to run per day? How many kWh? I'm thinking of doing something like this myself, but with electricity prices these days it's putting me off at the moment. Great content as always!
Killer build! I hadn't seen that server chassis before. Great value for the money there.
Thank you! 😉
Awesome brother. It's awesome you do GIF. Love the support and this makes me want to support your channel. I use truenas scale and love it. Smart People make great videos. The Digital life is a new to me but he is very educated on the Truenas Scale ZFS system. LOVE IT. KEEP THEM COMING BROTHER.
one of the few youtubers i watch that talks fast enough that I dont need to use 1.25x-1.5x speed while watching 😅😅 great build and very insightful walkthrough of the parts selection!!
Thank you 😂🙏
RAIDZ1 on 12 drives?! I'd either do a single RAIDZ2 pool or split it into two combined 6-drive RAIDZ1 vdevs.
the important thing to realise, is when a drive fails (and it will) the pool has Zero redundancy, any additional failure and you loose everything. And guess what when you replace the failed drive it does a big rebuild (many hours) putting the existing drives under significant load.
You can tune the ZFS (ARC) memory usage. The default is 50% of memory.
Lawrence Systems has a video on that.
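A minimal sketch of that tuning on TrueNAS SCALE / Linux OpenZFS (the 16 GiB cap is just an example; the inspection and the write to the module parameter are commented out because they need a live ZFS system and root):

```shell
# Inspect current ARC size and cap:
# grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats
# Compute a 16 GiB cap in bytes for zfs_arc_max:
ARC_MAX=$((16 * 1024 * 1024 * 1024))
echo "$ARC_MAX"
# Apply at runtime (resets on reboot unless persisted, e.g. via an init script):
# echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
```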
For reference, consider the ASRock Rack MBs for home server use. They use standard desktop chipsets, but include some handy server features. For instance, the X570D4U-2L2T includes multiple 1Gb & 10Gb NICs, IPMI, SATA DOM, etc. crammed onto a MicroATX MB.
Wow interesting, thank you! I'll take a look at these boards
Do you have any cheaper or older boards that are similar that you could suggest?
@@PingPongOblong Unfortunately, I do not know of any. The ASRock Rack boards are really rather unique and as such they are $$.
Great setup; however, I would not use RAIDZ1 in a 12-disk pool. The risk of losing more than one disk is too high for me; I use RAIDZ3 for my 12-disk pools... just my 2 cents. Thanks for the vid.....
Yeah, that's a valid point. Well, at least you can say in a few years "told you so", when I need to restore it from Cloud ;)
@@christianlempa with your internet connection how long does it take for a complete restore of the pool?
RAIDZ3 seems a bit of an overkill to me. At work we run a self-built TrueNAS server for backing up Xen VMs, with around 270 TB net capacity in a RAIDZ2 setup. This allows two HDDs to fail while the array still functions. Not using fast enterprise-level SSDs as a read/write cache for the pool would be a no-no for me, though. ZFS in the end is not THAT memory-intensive unless you do deduplication; a fast CPU plus 8 GB of RAM will be fine to serve a ZFS pool.
@@HolgerBeetz what cpu and mb do you use?
@@PingPongOblong Dual Xeon Gold 5222 CPUs and a standard Supermicro mobo, which came with the storage package from the vendor.
High Q-U-A-L-I-T-Y content as always!! Bravo!
Glad you enjoyed it!
You're brave, running raidz1 with 44TB and 12 disks!? I run raidz2 with my 8 drive 20TB TrueNAS.
I'm a maniac 🤣
8:11 That's because (at least in my country) ECC memory is not readily available compared to regular memory, and it tends to be a lot more expensive, as much as twice or three times the cost of regular memory.
No, I don't believe it's just the availability and price. It's just because some IT guys just like to argue about everything...
I also built my TrueNAS Scale finally 1 week ago. TBH: if I didn't have an old desktop to refurbish, I wouldn't go for your consumer PC build. A refurbished or used Supermicro board usually has a proper CPU and IPMI port, and maybe even a 10G NIC. Much better support for ECC, more reliable for a 24/7 job, and it needs much less power. Your CPU alone is listed at 65W; an m-ITX board with a Xeon (8C/16T) is listed at 45W, for example. A used HP ProLiant MicroServer Gen8 is highly moddable and also offers more value with iLO on the used market.
So in the end, I personally wouldn't recommend these parts, but everyone prioritizes different things, and it's nice that more people are getting into TrueNAS Scale in general.
20:20 When you connect two devices directly, do you use a cross-over cable? I have a TrueNAS Mini X (Diskless) on order and already have drives, plus an unmanaged switch that will connect the iXsystems machine and an Apple TV, which is also wired to an Eero 6 mesh network. My hope is to be able to watch media from the NAS on the TV even if the internet is down. Eero seems to need cloud/internet :(
Great video, already looking forward to the 2nd video. I'm planning to install TrueNAS scale on my QNAP...
Coming soon! Thank you ;)
I would always use ECC Memory when storing important files.
But if there isn't important data at stake, I still like to use ZFS, even without ECC.
Yeah I 100% agree on that!
Check out Lawrence's video on the (no) need for ECC when using ZFS.
I have two TrueNAS systems in my home network, and I use ZFS replication between them for the first copy. I also have a sync on the same data to a cloud provider for a second copy and off-site storage. The local replication duplicates the snapshots I have, and the off-site provides disaster recovery.
Sounds like a great setup!
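A minimal sketch of such a replication step using plain zfs send/receive (the dataset tank/data, pool backup, and host truenas2 are made-up names; the actual ZFS commands are commented out since they need two live pools):

```shell
# Date-stamped snapshot name, as commonly used by periodic snapshot tasks:
SNAP="tank/data@auto-$(date +%Y-%m-%d)"
echo "$SNAP"
# Take the snapshot, then replicate it to the second TrueNAS box:
# zfs snapshot "$SNAP"
# zfs send "$SNAP" | ssh truenas2 zfs receive -F backup/data
```

In practice the TrueNAS replication tasks wrap exactly this snapshot/send/receive flow, including incremental sends of only the changed blocks.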
Great video as always. I work in Enterprise Infrastructure and we have seen multiple drives fail before at nearly the same time and the added strain of a typical rebuild on the other drives increases the likelihood of another drive failing. As such, I would recommend at least ZFS RAIDZ2.
Thanks mate!
Thanks Christian. Keep it up the good work.
Thank you! Of course, I'll do ;)
From a performance and reliability standpoint, it's better to use multiple vdevs; in your case probably 2x RAIDZ1. Still, from my point of view it's better to use vdevs with multiple parity (Z2+); otherwise, with some bad luck, you can hit unrecoverable read errors while resilvering if one drive dies.
Still good content, thank you :)
Thanks mate, I guess I will change my pool to raid-z2, might be the better decision.
I built a NAS many years ago with a 24-bay Norco case; however, I upgraded to a used Dell T630 server, which was around 1000 Euro, so much cheaper than a DIY build, with much, much higher quality parts: a 12G SAS3 backplane and included controller. I upgraded to dual 14-core CPUs for 70 Euro, and it can also fit so much more RAM (128G, and I might add another 128G). Best of all, this server is so, so quiet; I had it in my apartment lounge room. You can get other Dell servers cheaper but with fewer bays (mine is 18-bay), and I used a cheap Sun F80 WarpDrive for the Proxmox datastore. Great vid, and I remember it was fun building my 1st server, similar to yours. Enjoyed your vid, Christian. Maybe you could do a vid on buying a cheap Dell server (a T320 or T620 or the like) and building it into a TrueNAS server, for the people who don't know much about building hardware? Used SAS drives are also much, much cheaper for these servers.
LOL. I just set up a thing like that at work today with 42 TB :D And the new TrueNAS Scale is really cool!
Hehe nice ;)
Hello 👋, can you tell me why I see "connection lost" on TrueNAS when I use iSCSI with Proxmox? iSCSI works perfectly, but TrueNAS shows me "connection lost, IP 192......". Please 🙏, I've searched for a solution but haven't solved it.
One downside to using a desktop motherboard is the lack of a management interface.
If something happens and the system is hard to get to, troubleshooting can be a pain.
There are fairly reasonably priced boards from Supermicro or ASRock Rack.
Interesting. Cheap configuration, big result. Don't stop :o)
Thanks, will do!
great content! I just upgraded my Synology to 48TB
Oh nice! Same capacity :D
Very informative and interesting video. The only thing I don't think is so great is the hard disk choice, in terms of price (€) per TB including shipping; 14TB drives are the sweet spot at the moment. In addition, you get a 5-year warranty, helium-filled and faster drives, and the power consumption of only 5 hard drives running 24/7.
I found this was one of the best price per TB values. Sure, I could save a little bit, but I wanted to see how this big pool of HDDs performs ;)
Cool build. I built a couple Xponology HP Microserver NAS'. I also added a quad port Intel NIC and USB-C card in each.
Thank you! Sounds like a really cool set up!
Great video! Can you please do a tutorial on TrueNAS Scale ACL permissions?
Thanks mate! Well, maybe at some point, I'll put that on the backlog.
I basically just did exactly what you did! Great minds!
Sounds great! Thank you ;)
In the US there are Norco chassis with the same interior, but all of them are from China. A Supermicro motherboard will cost the same amount of money, but they are Intel-only; for Ryzen CPUs there are ASRock Rack server boards available. The main reasons to buy a server board are ECC memory and IPMI.
I have a 24-bay Norco case, but the airflow is terrible... I bought a used Dell T630 instead, all up much cheaper than a DIY build like my first server.
@@valleyboy3613 Airflow mostly depends on what fans you use. Stock Dell fans are very loud, and in the Norco case you can use whatever fans you want, 80 or 120 mm.
Hi Christian,
do the HDD status LEDs in your Inter-Tech enclosure work out of the box with TrueNAS Scale? (Does that need any additional wiring, or is it a native SAS feature?)
In the next days I would like to send out my shopping list, with either the Inter-Tech or Supermicro CSE-216 case...
Thanks for the time you spend on your YouTube channel, it's a great starting point day by day for my me time... 🤟
Good video! Thanks. Now, one reason to go with an older Supermicro 4U system is that you'll get the MB, CPU, SAS controller and RAM all for under the cost of buying all-new components. What you built is great, but for a NAS you could get the same results with the same amount of money. Also, you might find the 4TB HGST Ultrastar 7K4000 is a cheaper option, but you can't go wrong with WDs.
Thank you! Also sounds like a great set up
Unbuffered ECC is the best/fastest, but you can double the usable RAM amount if you use registered (buffered) ECC. Buffered ECC is a bit slower, though. If you have enough RAM slots, you'll be OK with unbuffered...
Was 48TB considered massive 2 years ago? Could've sworn I was pushing half a petabyte 2 years ago.
Fascinating! I'm not sure if I missed this in the video, but why wouldn't you go for the maximum available capacity per drive, say 18 or 20TB, to optimize costs and maximize the capacity per drive slot? Or was your main point to have as many drives as possible for the enhanced transfer speed?
There's also some SMB tuning... see Linus Tech Tips on Samba tuning.
What should I tune here?
How do you use the 48TB of storage space? Do you have a data replication server at a different place? I don't like data-replication services; I chose a power-switched USB hub with large-capacity SATA HDDs attached to periodically back up the most important files.
A 12-disk pool with RAIDZ1!!!!! A newbie demonstrating a very, very risky setup to other newbies... good luck!!
thanks, good luck to you too *g*
Hi, I'm from Indonesia. Nice content.
You always create high-quality content, and before watching this video I had already installed TrueNAS Scale on my IBM System x3100 M4.
I'd be interested in another TrueNAS Scale video.
Thanks, Christian.
Thank you! Of course I'll do a second video about Kubernetes ;)
Thanks a ton for the great content. I have found your videos quite helpful as I find my way around this “new world” of self-hosting / home lab setup.
In a Proxmox + TrueNAS or OMV setup, what is best approach for ZFS Storage Pool. Is it best to setup the zpool in Proxmox for use by the NAS software or is it better to setup the zpool from within the NAS software?
Even the co-founder/current developer of ZFS doesn't require/encourage people to use ECC, so I don't see a necessity to do so. There's also a Hacker News thread on this topic. Nonetheless, I enjoyed your recent videos. The Proxmox Packer one was really awesome. I combined it with a GitLab pipeline and now it throws out fresh new images once a week.
Sigh, why don't you stop arguing about ECC? It's recommended by iXsystems in the official docs, and by any IT professional. Btw, thanks for the positive feedback, but you need to understand that when you make a video like this, you can't skip over ECC.
@@christianlempa It wasn't meant to be rude. I thought it was worth mentioning it, since most of the concerns about ECC are regarding ZFS. Have a nice day anyway.
@@christianlempa Coming from a guy using RAIDZ1 on a 12 disk array production machine :)
@@deckardstp yeah don't worry, I just went over this discussion too many times, it's all good 😉
(You maybe should have mentioned the reason why half your RAM is already in use by the cache. That's normal; the ZFS ARC automatically takes up to 50% of any amount of RAM by default.)
Thanks for sharing!
About storage controllers: I never really understood those.
At the moment, I'm thinking about upgrading my existing Fujitsu Primergy TX1320 M3 server from the standard 4 to 8 connectable disks.
In the official data sheet of the server, some controllers are listed for upgrading the SAS connections; however, I don't really understand:
do I have to use a specific one from Fujitsu, or is any RAID controller with equivalent connections usable?
I would be happy if someone could explain to me what is important when choosing the right RAID controller. THX
Can you provide more information on the fan controller you touched on in the video? I've followed your build spec to the letter and the fan controller is not something listed on your kit page. Thanks
You really should not use RaidZ1 on that amount of raw storage. In case one drive fails and you swap it to rebuild the volume, you put a lot of pressure on your remaining drives, and that for a long time (because of the huge drive capacity). There is a real possibility that another drive fails during the rebuild process, and since you are only using RaidZ1, all your data would be lost.
Only use RaidZ1 for small deployments (4 drives or fewer with low capacity) and keep good backups.
I would suggest at least RaidZ2. And as always, RAID is not a backup.
Still a good video. I have a similar setup myself, although I have TrueNAS virtualized in Proxmox with PCIe passthrough of the HBA.
Hi, please make a video on how to install a web server with Apache, PHP, and MariaDB or MySQL on TrueNAS Scale.
I quite recently tried to put all my data onto one giant HDD for archive purposes. rsync kept failing verification. It turned out one of the (non-ECC) memory modules in the system was failing. Without verification I would not have known, and I would have ended up with broken data. ECC all the way if you need reliable storage.
Earned a sub.
Thanks ;)
I've been seeing some fairly cheap 24 bay supermicro combos (case CPU mobo and ram) on ebay and have been thinking about picking one up, this is a nice setup though and that is a nice case. Hadn't heard of that brand before.
RAIDZ1 on 12 disks isn't recommended, and you're limited to a single vdev's speed and IOPS, even with RAIDZ2. In your setup it would be better to make two 6-disk RAIDZ1 vdevs and combine them into a stripe. That's supported in ZFS.
AFAIK "Logic Case" is selling the same cases as Inter-Tech if the latter aren't available in some countries
Interesting! Thanks!
If you run into issues with that Intel 10G Nic pick up a Chelsio
I needed a network storage array, so I started down this road. I tried to find an affordable solution with ECC, but was told not to use Ryzen because it doesn't support hardware transcoding for a Plex server, which would have meant a dedicated video card, and I really didn't want that. I'm looking to build a 150TB array. Older Xeons didn't have Quick Sync, and I was unable to find any Intel Atom processors available, so I ended up going with an i3 and non-ECC memory.
Great video! Are you planning on making a video on different vdev setups?
Thanks! Not yet, it's not my primary expertise.
How much power does it consume on average? How fast is your Internet speed? Thanks
+1 for this question. I'm also very curious about the power consumption
Do you have a local backup server? How do you back up something of this scale as a person that isn't a company?
Nice video. I've set up TrueNAS Scale on my old QNAP, which I use to back up my Synology. I can't afford to go 10Gb, but I've got some 2.5Gb USB NICs that work well with both systems, so I get a bit of a speed bump.
Very nice! Didn't know you can even install that on a QNAP, but sure.. why not :D
@@christianlempa Not all QNAPs, but those with a USB DOM and HDMI. Due to all the QNAP security concerns, I opted to move away from their OS. TrueNAS works great on it.
QNAP in general is garbage. The software sucks and it has been hacked several times.
@@AM93000 Exactly, that's why I put TrueNas on it.
Cool build but how does the power supply breathe? The case doesn't seem to have any ventilation holes for it
may i ask a question about truenas-scale?
Here it is: my SSH service is enabled with default settings. It used to be fine and stable, but recently, after I upgraded to the latest 22.02.0.1 version, I constantly lose the SSH connection to my TrueNAS Scale. A glance at the console shows it frequently printing:
strixnas kernel: audit: audit_backlog=65 > audit_backlog_limit=64
May 1 21:26:07 strixnas kernel: audit: type=1400 audit(1651411567.181:21541034): apparmor="DENIED" operation="ptrace" profile="docker-default" pid=17459 comm="apps.plugin" requested_mask="read" denied_mask="read" peer="unconfined"
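For reference, that first message just means the kernel's audit queue is overflowing. A possible workaround (assuming the `auditctl` tool from the audit package is available; note that TrueNAS may reset manual changes on upgrade, and 8192 is an arbitrary example value) is to raise the backlog limit:

```shell
# Show the current audit status, including the backlog_limit value
auditctl -s

# Raise the backlog limit for the running kernel (example value)
auditctl -b 8192

# To persist across reboots, the same limit can also be passed on the
# kernel command line as: audit_backlog_limit=8192
```

This only silences the backlog warnings; the AppArmor "DENIED" lines are a separate issue with the netdata apps.plugin profile.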
Another disadvantage of the Ryzen 7 5750G is that it's on PCI-e 3.0 instead of 4.0, and I believe it also has fewer PCI-e lanes.
Ryzen 7 PRO 5750G supports ECC memory
Didn't work for me
I have this cpu running, but with the current pve kernel ecc is not reporting correctly. Should be fixed with about 5.17.
I tested the ecc function with the same Mainboard in Win10.
@@peterfeurstein6085 Yeah I guess it is based on Linux Kernel, I had no chance to get it working. If you have the same on PVE, hmm. Glad that I replaced it.
I bet the ECC memory fight is because ECC memory costs more. Great video, thanks!
I think so too :/ thank you bro!
Great content, I'm planning to build my own little TrueNAS box due to a QNAP failure. The section where you talked about networking configuration and speed tests piqued my interest. Have you covered any of this setup previously in your other videos, or do you know a good place to read up on it?
Not yet, storage is not actually my main interest tbh, but I make new videos when I start a new storage project
Inter-Tech: I have some cases of this brand here.
They are really nice and well priced.
You can find them in the Netherlands (I ordered them through Amazon Germany).
Did you also order the rails for the case?
Interesting, yeah I also ordered the rails from them.
Thanks for this great video, as I plan to update my entire home network (switch and server) beginning next year.
Not sure you will answer, but I still have a MAJOR QUESTION: should I install TrueNAS Scale as my main operating system and use a Plex container in TrueNAS, or should I install Proxmox as the main OS and run TrueNAS, Plex and CM in Proxmox?
As you're using both, you may have a point of view on this question. For the present time, I just plan to have one new server (similar to this one), but I may add one more server later.
Using Proxmox as the main OS could be useful to move a VM from one server to the second one.
Thank you! It really depends on your needs. If you want simplicity and just need Plex and storage, TrueNAS Scale as your main OS with Plex in a container is a solid choice. But if you're looking for flexibility and might expand later, going with Proxmox as your main OS lets you run multiple VMs, including TrueNAS and Plex. It’s great for moving things around between servers too!
How do I buy a case there? I can't find a "register" option anywhere on the Inter-Tech website itself.
What do you think about DDR5 memory? There is no real ECC with it, just "on-die" ECC, which works differently.
I understand that you are running Proxmox and TrueNAS Scale on different servers. How do you add the ZFS pool from TrueNAS Scale to Proxmox?
At 20:53, you actually mean 10 Gbit, or 1 GB/s. 700 MB/s is a lot more than one gigabit per second :)
Yep, that's true ;)
Can you run Plex GPU transcoding using TrueNAS Scale? It's never simple to have Plex in containers and access the GPU. Also, can you install the latest version of Plex? Usually the version provided is pretty old.
Thank you so much for sharing this video, it's very helpful. Can you tell me how I can see all my hard drives and space in my TrueNAS interface? I have about 10TB across five drives, but I'm not seeing the full amount of disk space.
Thank you, glad you enjoyed it! I don't know what could be the cause of your issue, maybe check out the truenas forums or our discord?
Is it better to have many small HDDs or a few larger-capacity ones?
For example, 6x14TB in RAIDZ1 vs 11x8TB in RAIDZ2, both giving approx. 70TB of usable space.
It's better not to use as many hard drives as I did. At some point it becomes slow and unusable. Split the pools into multiple smaller ones.
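For reference, the rough usable-space math behind the question above can be sketched like this (ignoring padding, metadata overhead, and the usual advice to keep pools below ~80% full):

```python
def raidz_usable_tb(disks: int, disk_tb: int, parity: int) -> int:
    """Approximate usable capacity of a single RAIDZ vdev: parity disks are lost."""
    return (disks - parity) * disk_tb

print(raidz_usable_tb(6, 14, 1))   # 6x14TB RAIDZ1 → 70
print(raidz_usable_tb(11, 8, 2))   # 11x8TB RAIDZ2 → 72
```

So the two layouts really do land within a couple of TB of each other; the trade-off is mostly resilver stress per drive and how many simultaneous failures the pool survives.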
Can all the data also be uploaded automatically to Google Drive? So that if the hard disk is damaged, we still have a backup of all the data in the cloud.
I think you can do that
Nice build and very nice video. Thank you very much. I was also considering getting the same case. However, I was a bit worried about the airflow for an ATX power supply with a fan on the bottom since there seems to be little to no space to actually suck air in. In contrast to the 4U-4424 which does not have this metal separator between the mainboard and the PSU. Is there enough space to get some air into the PSU or did you solve this in some other way?
I haven't seen any issue with temperature, not in the case or components itself, nor at the power supply. And it was really hot this summer.
@@christianlempa Thank you for the quick response. Seems like I'll be trying my luck with this case then. 🙂
Great video. Nice build. Like you said software is also important. I'm wondering if you use any software to manage your personal photo/video libraries. If you do, what are they ?
For personal stuff I use Google Drive
TrueNAS comes with support for packages.
Nextcloud is one that I recommend!
How do you mount TrueNAS ZFS with Proxmox? Please explain.
Using NFS
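For reference, a minimal sketch of attaching a TrueNAS NFS share as Proxmox storage from the Proxmox shell. The storage name, IP address, and export path below are made-up examples, and the share must first be created and permitted in the TrueNAS UI:

```shell
# List the NFS exports offered by the NAS (IP is a placeholder)
pvesm scan nfs 192.168.1.10

# Add an export as a Proxmox storage backend named "truenas-nfs"
pvesm add nfs truenas-nfs \
    --server 192.168.1.10 \
    --export /mnt/pool1/proxmox \
    --content images,backup
```

The same can be done in the Proxmox web UI under Datacenter > Storage > Add > NFS.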
Excellent content as usual! From the video, it seems like an Adaptec ASR 71605 - Can you please confirm/share the exact model of the RAID controller? Thank you in advance.
Thanks! You should find the exact model on the kit page.
Great Video
Thanks mate!
How did you configure the Adaptec asr-71605 so it detects the hard drives? I bought the same card and passed it through to the TrueNAS Scale VM. I can detect it using lspci but none of the drives are detected when I want to create my pool. Thanks.
You need to set the controller in HBA mode, check the settings in your controller via the BIOS
@@christianlempa Thank you! Worked like a charm.
Thanks for the video. Unfortunately, I can't use TrueNAS since it doesn't have a "Delete" permission in its ACLs, which is needed in our case. So I'm stuck with Windows or XPEnology, since Synology has this option in the advanced permissions section.
8:16 This is probably because ECC memory tends to be more expensive.
Hm that could be one reason, yeah.
Even though I've already got a running TrueNAS Scale build (from an old gaming PC) with a similar amount of storage.
Cool!
I just built my first TrueNAS Core system (debating starting over and installing TrueNAS Scale instead) and have ~31TB of drives in mine, but so far have only set up a pool with 4x 3TB WD Red drives for media storage and streaming. For movie streaming and uploading to the server, would you recommend I upgrade from my dual 1Gb onboard NICs, as I see you went with a 10Gb setup? I'm wondering if 10Gb would be overkill for my usage, but if it isn't, I'm curious what the cheapest compatible setup might be for a 10Gb connection (I'd need a 10Gb NIC, a 10Gb or multi-gig switch, and either Cat6a copper or SFP+ transceivers, which seems like quite the investment), since it doesn't seem like there's a way to team/bond my two 1Gb onboard NICs (only aggregate them, assuming I have a switch that supports aggregation as well). Or should I just consider investing in a multi-gig (2.5Gb) NIC and switch? I do plan on creating other pools for backups, and possibly a pool for running VMs down the road, if that makes any difference.
So what was the model number of the 16-port SAS card? My server is on X470 and it has a similar limitation on PCIe lanes. Would be helpful, many thanks.
You find it on my kit page
Chris, which company is the case from?
how do you backup this machine?
Snapshots and Cloud Backup
Do you have any updates to this a year later? I'm considering building something like this for fun and for my Plex/Jellyfin server. Any recommendations for a chassis I can get in the US?
Not yet, I'm still trying to figure out what to do with my NAS project. But I'm working on some pretty heavy refresh as this project was just too power hungry for me :/
@@christianlempa Ok. The suspense is killing me. I'm eagerly awaiting the refresh. As soon as you post, I'll start buying my parts. Finding a good chassis has been hard, as Inter-Tech is German and I'm in Los Angeles.
BTW I just bought a Sysracks 42U rack and am running my old Intel MacBook Pro as a server with some Docker containers and Home Assistant. Unfortunately, even though it has 64GB of RAM and a 6TB HD, laptops don't make good servers.
In your file transfer test you say it doesn't reach 10 gigabit, but the transfer rate shown is ~700 megabytes per second, which is 5600 megabits or 5.6 gigabits per second, so even the first transfer is extremely fast.
Absolutely, it's already fast without the cache.
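The unit conversion at play in this exchange, for anyone double-checking: SMB transfer speeds are shown in megabytes per second, while network links are rated in bits per second.

```python
def mbytes_to_gbit(mb_per_s: float) -> float:
    """Convert a transfer rate in MB/s to Gbit/s."""
    return mb_per_s * 8 / 1000  # 8 bits per byte, 1000 Mbit per Gbit

print(mbytes_to_gbit(700))  # → 5.6
```

So ~700 MB/s is a bit over half of what a 10Gbit link (~1.25 GB/s at line rate) can carry.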
Thanks for the video!.. Did you need to flash your HBA card to IT mode?
Hey Michael, no the AsRock board could configure this setting in the BIOS. Then you can switch from RAID to HBA mode easily.
Thanks for the great video. One thing I don't understand is why the APU 5750G didn't work with the ECC RAM. AMD says all APUs in the PRO line support ECC RAM?
Good question, I was confused by that, too. I think that has something to do with the kernel drivers, which don't support this CPU, other than that I don't know why it's not working.
@@christianlempa ... that was a good hint. I found a blog that describes exactly this problem. It seems the EDAC driver doesn't support the AMD APU, but there is a way to get that fixed. I'll have a closer look at the blog. Unfortunately I can't link it here :( If you search for "ecc-on-amd-cezanne" you'll find it. Cheers, Ralph
@@ralphmeinel9990 I think I also found this post, the problem is that all modifications to the kernel will be overwritten and unsupported by truenas firmware. It's a bit unfortunate :(
@@christianlempa I did a little research and found out that the EDAC driver supports the AMD APU 5750G/5650G in kernel version 5.18.x. TrueNAS Scale ships with a 5.10 kernel, so we have to wait for an update.
But in the current release there is support for the "older" APUs. As I am currently building a NAS, I gave it a try and can confirm that the AMD APU 4650G works with the ECC memory :)
12 drives with RAIDZ1? Can't tell if it's brave or not smart... but... you do you man.
Let's call it Brave, okay? 😅
Extremely risky….
I also came to complain about your RaidZ1 choice.. Otherwise I approve 😂
Thanks! :D Yeah it's a valid point, I need to admit ;)
ASRock boards are some of the best for home labs
In ZFS, the total pool IOPS equal one disk's IOPS * the number of vdevs, and since you have a single vdev, the IOPS equal those of a single WD Red Plus 4TB. So your config is pretty bad from a performance point of view as well, besides being quite unreliable. The only thing that's not so bad is the streaming performance (as long as you are using a large block size).
With 12 disks, my choices would probably be: for reliability, 2x (Z2 w/ 6 disks); for performance, 3x (Z1 w/ 4 disks). You can also avoid parity arrays and get even more IOPS, but the usable space decreases drastically.
P.S. Unless things have changed from the past (I have not played with SCALE yet), that 1TB NVMe drive is completely wasted as a boot drive. Ideally the OS should stay on a small SATA SSD (best a couple in mirror), and that NVMe would be better used as an L2ARC cache.
Thanks, great feedback. I'll probably change my config to something else once I have the chance. I still haven't decided what exactly, but 2x Z2 would mean I'm losing 4 disks, which is 30%. Seems like a lot to me. I'll probably go with a single Z2 or even Z3 for the 12x4TB as a big data pool for backups and video files, and add a second vdev with 4x SSDs. What do you think of that idea?
@@christianlempa The recommended number of disks per vdev is between 3 and 9, and more than 12 is not recommended, so a single Z2 with 12 disks is pretty much an explicitly unrecommended configuration. I know that losing all that space sucks, but this is the price if you want to do things the right way. Since you are not storing mission-critical data (right? 😛), you can configure the pool with two 6-disk Z1 vdevs and a very frequent backup 🙂.
The question would be why it's not recommended apart from being not so performant. Anyway I might give the 2x Z2 idea a shot.
@@christianlempa For many reasons. The most obvious is rebuild time (it could take a week or more, and during the process you could lose more disks because they are under high stress), but also space efficiency (due to parity and padding complexity) and other joyful reasons (like further performance degradation) that you can discover deep-diving into the technical documentation if you want 🙂
Of course, once you are aware of all the risks and limitations, if they are still within your "margins of acceptance" you are free to configure your pool as you wish, I just wanted to make you aware 😉
@@dariopetrusic4215 No worries mate, I appreciate useful feedback! That makes totally sense to me. I guess I'll go with 2x z2 then.
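To make the trade-offs in this thread concrete, here is a small sketch comparing the layouts discussed, assuming roughly 100 random IOPS per HDD (a ballpark figure, not a measurement) and 4TB disks as in the video:

```python
def pool_stats(vdevs: int, disks_per_vdev: int, parity: int,
               disk_tb: int = 4, disk_iops: int = 100):
    """Rough usable TB and random IOPS for a pool of identical RAIDZ vdevs."""
    usable = vdevs * (disks_per_vdev - parity) * disk_tb
    iops = vdevs * disk_iops  # random IOPS scale with vdev count, not disk count
    return usable, iops

layouts = {"1x RAIDZ1 (12 disks)": (1, 12, 1),
           "2x RAIDZ2 (6 disks)": (2, 6, 2),
           "3x RAIDZ1 (4 disks)": (3, 4, 1)}
for name, cfg in layouts.items():
    usable, iops = pool_stats(*cfg)
    print(f"{name}: ~{usable} TB usable, ~{iops} IOPS")
```

The numbers show the trade clearly: the single 12-disk RAIDZ1 has the most usable space but the least redundancy and the lowest IOPS, while 3x RAIDZ1 triples the random IOPS at the cost of space.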
You said you'd link the zfs video in the description, but it isn't there.
What? Let me fix that! Thanks for the heads up :)
Please do not use RAIDZ1 on a pool with that many disks. There is a good chance you will have 2 disks fail within a few days of each other, especially if you bought them at the same time.
If you lose a 2nd disk before the first has resilvered, bye-bye data.
Nice Video! I tried ECC with Ryzen as well but didn't get it to boot...
You need to have a CPU and MB that supports ECC, maybe that was the problem?
Actually the Ryzen Pro APUs support ECC.
Nice server !
Do you have any shared storage solution for a Proxmox cluster ?
In my homelab I have NFS shares, but the NAS becomes a single point of failure 😕
Maybe the scaling system of TrueNAS would help 🤔
Thanks! No I'm just running two machines, the TrueNAS and Proxmox.
Thanks for the great video! I am thinking about using the same Motherboard and CPU Combo. But I was wondering... You did not mention any GPU. The AMD Ryzen 5 3600 has no integrated graphics. Didn't you use any cheap GPU? Thanks for the advice!
You're welcome! I used a cheap GPU to install the system, but afterwards you can remove it as the AsRock MB supports boot without a GPU by default. Pretty nice 😀
@@christianlempa Many thanks for getting back to me on that question! I will buy a cheap GPU then. Thank you for the great content! Greetings!
I think I would have put the hard drives in every other row, for temperature reasons. That way they don't sit right on top of each other and have a bit more room to breathe.
Hey Christian, do you have any idea what something like this costs to run per day? How many kWh? I'm thinking of doing something like this myself, but with electricity prices these days it's putting me off at the moment. Great content as always!
Thanks! The power consumption is mostly ~120-130W