I'm just looking into setting up a storage server. Not sold on Unraid atm (the extra cost), and ZFS on TrueNAS seems way more mature tbh. Curious about the comparison video and which one is gonna "win".
@Redostrike To expand your array you effectively have to double the drive count, since ZFS doesn't support expanding a vdev. Say I have a vdev with 8 drives; to add more storage I need another 8 drives to create a new vdev to add to the overall pool. With Unraid you can just add single drives at a time, therefore costing less.
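To put rough numbers on that expansion-cost difference, here's a quick sketch. The 8TB drive size and 8-wide RAIDZ2 layout are illustrative assumptions, not anything stated in the thread (and note that newer OpenZFS versions are adding RAIDZ expansion; the comment above describes the classic behavior):

```python
# Minimum purchase needed to grow storage by one increment, comparing a
# classic ZFS pool (add a whole new vdev) with an Unraid-style array
# (add a single data drive). All numbers are hypothetical.
DRIVE_TB = 8  # assumed drive size

# ZFS pool built from 8-wide RAIDZ2 vdevs: growing means adding another
# matching 8-wide vdev, i.e. buying 8 drives at once.
zfs_min_drives_to_expand = 8
zfs_added_usable_tb = (8 - 2) * DRIVE_TB  # RAIDZ2 spends 2 drives on parity

# Unraid array: parity is fixed, so one extra data drive adds its full size.
unraid_min_drives_to_expand = 1
unraid_added_usable_tb = 1 * DRIVE_TB

print(zfs_min_drives_to_expand, zfs_added_usable_tb)        # 8 48
print(unraid_min_drives_to_expand, unraid_added_usable_tb)  # 1 8
```

So under these assumptions the up-front cost of one expansion step is eight drives versus one, which is the whole point of the comment above.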
For me, N100 boards are a no-go. With only 9 PCIe 3.0 lanes, any motherboard based on this chip is starved for PCIe lanes and has to make sacrifices. For example, the PCIe x16 slot on this mobo is only x2 when it comes to data transfer, and the M.2 slot is only PCIe x2. If you plan any add-in cards, you basically have to choose what to cut. So for a simple NAS or home media server, OK, but for anything more advanced... no thank you. As for the Unraid license on the USB stick: use a good SD card and SD card reader. Unraid will register the serial number of the card reader, and you'll be free to swap in a new card if the old one gets damaged. The network card is PCIe gen 2, so 1GB/s (since it uses an older type of packet encoding with 20% overhead). You won't get the full 10Gbit on it in that slot, let alone on the 2-port card you also talk about in your video. To get full 10Gbps speeds on 2 ports, you would have to buy a PCIe gen 3 card.
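The encoding-overhead point above can be sanity-checked with some back-of-envelope math. This is just a sketch of the line-rate arithmetic; the dual-port figure assumes 2x 10GbE and ignores any protocol overhead beyond the PCIe line encoding:

```python
# Usable PCIe bandwidth per lane: raw transfer rate times encoding efficiency.
def lane_mb_per_s(gt_per_s, payload_bits, total_bits):
    usable_gbit_per_s = gt_per_s * payload_bits / total_bits
    return usable_gbit_per_s * 1000 / 8  # convert Gbit/s to MB/s

gen2 = lane_mb_per_s(5, 8, 10)     # 8b/10b encoding: 20% overhead
gen3 = lane_mb_per_s(8, 128, 130)  # 128b/130b encoding: ~1.5% overhead

print(round(gen2), round(gen3))  # 500 985

# A dual-port 10GbE card needs ~2500 MB/s of bus bandwidth; on gen 2 that
# takes more than 4 lanes, on gen 3 about 3 lanes -- hence the advice to
# get a PCIe gen 3 card for full dual-port speed.
```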
It totally depends on the size of the build; for smaller servers the N100 is totally fine. I guess my build pushes it to its limits :D Maybe an upgrade at some point might be worth considering.
@@christianlempa I agree, for a simple NAS it's fine. For a more advanced server, where your NAS is just one of the VMs - not so much :) But everything has its own use case, and it's hard to beat the N100 in power efficiency. I'm getting an N100 system myself to replace my main router/firewall - perfectly capable for that use case.
Community Apps are the way of truth and light. Unraid is well worth the license fee in exchange for the reliability and convenience. Devs gotta eat too.
This is my view too. With the time you save using it, it's hard to go back to Portainer or anything else now. It's just too easy, and that comes from someone pretty well versed with Docker.
Love it. I want to rebuild my Unraid server for the same reason. But I do have three applications that need to run perfectly: Plex, Nextcloud, and Frigate. I guess there is enough power in this CPU? And I need a method to get my old Unraid pool to ZFS without buying 5 new disks somehow…
You should have gotten one of those CWWK boards, as they have SO-DIMM RAM, and some N100 boards can do DDR5 - mine does and it's fast. Also TrueNAS Scale works awesome! I think I should start a channel!
Would recommend an SFX power supply, preferably modular, for a 2U. Get rid of that cable mess. Those ketchup & mustard cables look to be impeding the airflow a bit. :-)
Oh man, the stuff you're cramming in here again haha. I'm curious how long it takes until the M.2-to-8x-SATA adapter gives you problems; I had 2 of them in use, and unfortunately they're real junk ;)
@@christianlempa Double-check everything and run extensive burn-in tests. The JMB585 has had a few firmware updates, and some of them break S.M.A.R.T. or high-speed modes. It can also destroy drives in a RAID, causing long rebuilds :(
Unraid is really a damn good system. BUT the array is also damn slow, at least in write speed. You have to accept this, and then it's a nice server. But you can work around it with pools...
Please don't use USB 3.x sticks. Most of them produce more heat than a USB 2.0 stick. The lower speed doesn't matter: it's only needed for the boot process, which doesn't take long. The system itself runs entirely in your RAM.
I was super worried when I saw those Kingston drives going in in the video; those would definitely explode as cache drives. The Crucial drives are much better!
I'm curious why you chose to build with consumer parts in a server case rather than going for a used enterprise server. Seems like that could potentially offer better value/performance. Any particular reasons you went this route? Btw, really enjoyed watching the build process!
Power usage is one reason, but also the noise, as the "server room" is right beside the studio without a great door that would absorb the noise :D Also I just like to tinker a bit from time to time.
It would be nice if you could cover how to connect one Postgres container to many applications like Nextcloud and others. Another point that gives me a headache is restarting the server after setting it to sleep with the S3 sleep plugin. I tried a lot with rtcwake in the BIOS, with scripts and commands in the plugin before sleeping, but it does not start after going to sleep overnight. Maybe you could also cover whether it's good or bad to spin drives down or turn the server off / put it to sleep. For me it's about efficiency, and it saves me about 30 watts.
It depends on whether the disks are in idle or not, with all disks spun down it's around 35W, which is still a lot in my opinion. But that might be related to the power supply and PCIe cards.
No ECC RAM on a ZFS NAS is a lost opportunity. I recently experienced bad RAM and corrupted data on my desktop (non-ECC). It didn't matter there, but I wouldn't want such an issue on my NAS.
The N100 doesn't support ECC, so that feature would change much more than just the RAM modules. It's always worth running memtest86 overnight to check everything. Once I got four bits flipped at around the 34GB address, and the same issue (in the same place) with swapped modules. It turned out to be caused by high temperature, and a simple small fan solved it.
I'm still testing and improving my current build, as the power consumption is a bit too high in my opinion. I'll keep you updated once I've sorted out the problems.
So the question now is: what do you prefer? An Unraid server, or a Proxmox + TrueNAS + Portainer server? I've been running an Unraid server for 3 years and I'm pretty happy... I now want my OPNsense virtualized to save some energy costs (I'm from Germany too :) ). But I always had issues with a virtual OPNsense as router in Unraid: Unraid needs a fixed IP, and when you reboot your device or try to connect while the router VM is offline, you need to enter a static IP on your client or you can't reach your Unraid server because your router is down, and so on :D
Honestly, I haven't fully decided yet :) I'm not committed to TrueNAS or Unraid either - I like both, and what I'm gonna use in the future might also change from time to time.
I have pfSense running in a VM on one of my Unraid servers. Just make sure you have the array set to start on reboot/startup and your OPNsense VM set to auto-start as well.
Hey Christian, really cool stuff, but I think a Proxmox VM with two hard drives attached would be quite enough for the cluster at home, and I would host a NAS separately. Not only because it saves time and work, but more importantly: it saves resources, which is better for the environment. I know that topic matters to you, so I'd like to encourage you to make something like this a bit smaller and save on hardware :)
@Christian Unraid, with its XFS storage, has one caveat that people should at least know before diving in. In Unraid with ZFS, let's say you want to create a VM with 1TB storage. Your server has 8x 800G drives. In traditional scenarios, that would be 6.4TB raw, minus 1 drive for parity, to use as you wish. While the usable capacity is 5.6TB in both cases (Unraid vs. others), in Unraid the largest single file you can create cannot be larger than the largest drive in your storage array, assuming that file will be stored on the XFS array. This is minor to most, but it is an important distinction. Think of Unraid as a pool of devices, not a storage pool. (edit for typos)
That's not an Unraid system limitation. That's a limitation of using an "Unraid Array" specifically - as of version 7 you can use ZFS Pools without a traditional array. Raidz included.
@@espressomatic First, only XFS can be used in the main array function. ZFS can be used in the pools, but ZFS functionality was something that was added recently (~ 12 months? unsure) My initial reply was referring to the limitation of the XFS array, not the Unraid OS. If I am incorrect, please let me know. I welcome better understanding.
You can use ZFS (and other) filesystems in the array, but as of version 6.xx you still have to have a parity drive. Spaceinvaderone has a video on using a ZFS drive in the array. If I could insert a photo, I could show you my setup. ZFS in the array is great if you are using snapshots.
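For anyone skimming, the capacity example from the comment that started this thread works out like this (a sketch using the same hypothetical 8x 800G layout):

```python
# Unraid XFS array: files aren't striped across disks, so a single file
# must fit on one data disk, even though the pooled capacity is larger.
drives_gb = [800] * 8
parity_drives = 1

usable_gb = sum(drives_gb) - parity_drives * max(drives_gb)
largest_single_file_gb = max(drives_gb)  # bounded by the biggest data drive

print(usable_gb, largest_single_file_gb)  # 5600 800
```

So you can fill 5.6TB in total, but no single file on the XFS array can exceed 800G, which is exactly the caveat described above.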
It's currently a Ryze K633GCO, which I got cheap :D It's unfortunately not Mac compatible, but you can easily re-configure the keys with Karabiner, so it works great. I also added some custom keycaps to see how that works out. However, I'm not fully satisfied, to be honest... still looking for a better one, or maybe better keycaps, which are hard to find for German ISO :/
Thank you for your answer. I have the Magic Keyboard for my Mac Mini, but I'm looking for a mechanical one with good compatibility. The Logitech MX Mechanical Mini for Mac seems not bad but expensive at €160, so I'm looking for cheaper alternatives with a French AZERTY layout, which are also not very easy to find.
Does this M.2 -> SFF adapter work as an HBA? I mean, the electrical circuits are solely for connecting the drives to the slot, right (no RAID function)? Is this also true for the ASMedia ASM1166 chip on the other PCIe card you used first?
I've not had any problems with software RAID and these cards. The main thing you want with HBAs is a controller card that doesn't have any internal RAID functions, which these adapters also don't have, so they should be fine, from my understanding.
It's around 35W, which is still a bit high in my opinion, but I'll be testing other combinations of power supplies and PCIe expansion cards in the future. I'll keep you updated :)
Wasn't the adapter card x4 rather than x1? I have the same problem and ordered an x1-to-x16 adapter, but only for a USB card, since x1 is too slow for everything else. Also, USB 2.0 rather than USB 3.1 is recommended for the boot stick, because otherwise it gets too hot and gives up the ghost much sooner.
The PCIe slot is open-ended, i.e. you can plug a larger card into it, but it will then run slower than it could. Thanks for the tip about the USB stick!
Hm... the motherboard choice isn't optimal for my taste. I'd prefer to buy some $80 new (or $80 used but better) mITX board and a $40 used AMD Zen 2 processor with embedded graphics. It would have an x16 slot which I'd split 8+8 (almost all AMD motherboards support 8+8 bifurcation, and maybe I'd be lucky to find one for a good price with 4+4+4+4 and/or ECC RAM) for 10Gb and 4+ SATA. As a result: more and faster RAM, a higher CPU score, 1-2 M.2 slots, for the same price they ask for an N100. And a 20W TDP isn't a benefit that could change my mind. PS: Oh, with that case supporting mATX, things would be much simpler. No bifurcation, Y-splitter, or riser cables would be needed, the motherboard would be cheaper, and I'd expect to find 2-3 M.2 slots, maybe 6x SATA, and less searching for ECC support.
No. 1 is actually an advantage. I don't know why people think it's a problem to boot from USB. It's a server, so you won't use more than 2 USB ports anyway, and more importantly you don't waste an HDD/SSD connector or NVMe slot on 2 GB of OS system files that will run just fine from your 128GB+ of RAM, and probably faster than from an HDD/SSD.
For this build, booting from USB is indeed an advantage. It depends on the OS and how well it's built to work with USB boot; as Unraid is built for that, I don't see a reason why this should be problematic.
It depends on whether the disks are in idle or not, with all disks spun down it's around 35W, which is still a lot in my opinion. But that might be related to the power supply and PCIe cards.
@@christianlempa with the drives?!? Whoa. I was torn between the N100 and the i3-14100 I literally bought and built my system last Thursday lol I built a home server about a year ago, Epyc 7313p, and my regret is I feel as if I over built it. That thing idles 150watts and almost 200 when doing a file transfer. Hahaha Which isn’t bad I guess, just power in California is almost the price in Germany. So I use that as my “mother vault”, keep it off 90% of the time, and am going to use this new Unraid system I built for all my docker containers, VMs, etc. What’s interesting is I found during boot, system will spike to 90 watts lol, then go down to 28 once all containers and VMs are started.
@@BartAfterDark I meant not an N100 mobo in general. But for what he is using this server for (low-power storage), it is perfect. Thanks to the HW decoding, he can also use it as a Plex server.
I think they could make a slightly more expensive version and spend those 9 PCIe lanes well, instead of an x2 and an x1 slot plus 2 SATA ports. A quick network port, a stronger SATA controller, and multiple M.2 slots are all missing here 😢
Although it's power efficient, it's worth noting that spinning down the disks can give them a shorter life than if they were otherwise spinning 24/7. Not because of spinning them down, but spinning them back up from 0 RPM each time it needs to be accessed, which wears out the motor quicker than staying at a constant speed if they weren't spun down. You should also be able to do this in TrueNAS but I don't think it's recommended for this reason in particular. While probably valid, I also don't understand the argument regarding the ease of expanding your pool with Unraid as opposed to ZFS. While it's true that you'd have to destroy the entire vdev you wanna expand, it should already be planned from the get go in my opinion. For example, I'm planning a 8x18TB build soon, which I'll most likely configure in a RAID-Z2, whenever I run out of space on that vdev, it's just a matter of creating another vdev and you have essentially expanded your storage (although not the vdev itself).
I'm not as worried about the spin-down now as I was when I first started using Unraid, since I know people who have had drives running in Unraid for over 6 years with that feature turned on. But yeah, comments like this made me think twice initially. Not anymore.
@@NathanFernandes The point of my comment wasn't to dissuade anyone from using it, but just to inform. You should never decide based on random comments anyway, just gather the info and make your own informed decision. I don't think I would be worried about it myself, but given the option between that and spinning them 24/7 with ZFS integrity checks in place, I'd pick the latter. That said, just because they haven't failed in 6 years for some people, doesn't mean that spinning them down isn't harming the disks and potentially giving them a shorter life, but it may be a worthwhile tradeoff for them. Just like it doesn't mean much the other way around, there are cases of 24/7 operation where the disks fail within days or months, and others where they last 10 or 20 years, it's all relative.
8x18TB means that you have money. Unraid is for people who are short on $$, as they can add HDDs one by one to expand with no penalties. And with more vdevs there is a higher chance of the whole system crashing, because one vdev dying takes the pool down. And you have to add at least 2 more HDDs at a time (I think a vdev can't be a single HDD, but maybe I'm wrong), so it doubles the price of adding storage. As for spinning down HDDs, I see the use case of Unraid as media storage for infrequently accessed files. Perfect for a home media/backup server. SAS 3TB drives use 16W of power, so my server of 12 drives uses more power spinning the drives than doing anything else (power consumption: 150W with HDDs asleep / 250W idle / 350W peak on a Dell R510).
I am interested in how to utilize a Mac Mini M# as a server to replace my current old Intel NUC. I am mainly running MicroK8s (Ubuntu Server) as my homelab platform for Home Assistant and other pet projects, but other generic options are fine too. I would be interested to see more on this PoC (video) 🙂
"The nice UNRAID" which changed by chance their license model to the default "business modell" of yearly licenses... 19 February 2024: "Upcoming Changes to Unraid OS Pricing" > Starter and Unleashed licenses will include one year of software updates with purchase. > After a year, customers will be able to pay an optional extension fee, making them eligible for another year of updates.
@@christianlempa You already had to pay in the old license model too ^^... And the "good product" is, in my opinion, not really good. * The old Basic license (and I think also the new smallest one) is very hard to set up with a redundant cache and RAID5/6-compatible modes because of the HDD limit, and is therefore, from an IT perspective, more of a "you don't want to use it" case, unless you want to save power. * The USB stick problem you also ran into has to do with the requirement of USB 2.0 (NOT 3.x) sticks with a MAX of 16 GB, and several vendors are rejected/ignored because Unraid can't read their UUID or doesn't accept them. * If your USB stick breaks, you can request and pray that your license gets transferred to a new one; otherwise you have to pay again. * The "good stuff" is only available when you install the non-commercial community repository (while the base licensed functionality is... "very basic") and has nothing to do with Unraid itself. But there are many YouTubers/influencers - even German channels - who "like" this mass of functionality (which is community-driven, not Unraid-driven)... so for everyone it's a "happy using till the crash is coming". 😁
@@Reiner030 I think you need to update yourself a bit. Yes, the new license model is a bit of a downgrade, but if you can live without new features being added to the core system, there is no need to pay; you will still receive security updates. USB 3.x is supported, and the recovery process is much easier now. You have the option of manually installing Docker applications if you wish; it's just a lot more complex than through Community Applications.
@@Hansen999 Yes... "tons of COMMUNITY APPS"... so NOT ones from UNRAID itself, which gets the "credit"/payment for them... And my license is just 1 year old and barely used because of this mess of setup/information for default usage. Many influencers can't even consider different systems to compare against. Technical evangelists at their best... AMEN
Both a TrueNAS and UnRaid user here, and I think I can make the case for UnRaid. To be clear, both are dope, and I think both serve a great set of use cases. UnRaid, in my opinion, takes the cake for general ease-of-use and setup, as well as general experimentation and light virtualization (meaning, "not Proxmox" levels). TL;DR: I think UnRaid deserves to exist, and not every NAS has to be enterprise-grade, as there are some very real usability tradeoffs that come with going that route. For starters, you could argue that UnRaid is the inherently less power-consuming option (at a real performance cost) if you opt not to rely on ZFS for your filesystem. Since it doesn't stripe files across all the drives by default, you'd only really need to spin up the ones that are accessed most often. This is use-case dependent, of course, but for the home user, I think it's a win. You could also argue that UnRaid can contribute to longer-lasting drives, again, if you opt out of ZFS, since without true striping you're not wearing all drives evenly. Also, there is the ease-of-use aspect for newer homelabbers. In my experience, it was just much quicker and easier getting VMs and Docker containers up and running in UnRaid, though Christian isn't making much use of them here. Pooling and expanding drives also tends to be easier. There is also the ubiquity of storage mediums you can use: adding more disks of varying brand/size is rather freeing. For a home user, this can be very nice, since they can repurpose SSDs/HDDs from old machines. TrueNAS is VERY finicky with hardware, especially the drives you want to use as caches, as a point of reference. @FuzzyKaos mentioned that the bucket-style storage can be confusing, and I don't disagree, but I will say that you can restrict certain files/shares to certain drives if you'd like. It does offer a lot of control and a lot of niceties that TrueNAS doesn't. That said, it has its definite cons.
Raw performance, namely, goes to TrueNAS (dat read caching tho!). And while UnRaid does support ZFS now, it's still rather early days, and I wouldn't recommend using UnRaid exclusively for ZFS yet. There is also the matter of cost. $250 for a lifetime license is exactly...*checks notes*...$250 more than TrueNAS lol. But TrueNAS opening their OS to normies while catering exclusively to enterprise customers is what creates their ability to monetize the way they do. UnRaid just wasn't built for that niche, and I think the price is reasonable enough to warrant the work the devs do.
The Unraid array implementation does seem really nice. I personally wouldn't ever pay for something that only boots from a USB drive which is the single entity that has the activation/license attached to it, but if it booted from a real disk I would think about it. Yeah, I know many license dongles exist and they work using USB drives (even Microsoft did/does hand out USB drives as license dongles), bla bla bla, but come on... give me a licensing server implementation that I can set up, or something that I can make backups of. USB? No, not with my money at least.
@@FuzzyKaos You can disable files flowing to other drives if you don't like it. Also, you can select how deep the split should go; for example, moves in the root folder can be split only at level 1. So you could have 4 drives of 500 GB and 1.5TB of movies: the movies will automatically be split so each drive has a movies folder with movie-name subfolders in it. It would not split movie1's files across drives 2 and 3, for example.
If you add up all the hardware, case, and license costs, you could almost buy a Synology DS1821+, which offers the option to expand with 2x 5-bay expansion units and can also take an Intel X520/710 network card. This would avoid the lousy ASM1166 controller, which is honestly not sufficient for a homelab (I've played around with it). Furthermore, it has an ECC memory option and also provides options to run Docker containers or virtual machines. I would be interested in how much power consumption you end up with. The ASM1166 is not doing a good job on that point, and neither is the X520. Both can be improved, but it takes a lot of extra work. Sorry, but I'm not impressed.
Well, some others have specifically recommended an ASM1166 (which is not what I used in this video - mine may have an even worse controller). However, it still works for me, and I enjoyed the tinkering. Even though I'm not an expert in this field, I like to build my own stuff, no matter the cost :D
@@christianlempa I know it was popular on the Unraid board. I hope it works for you and you won't face any data loss - that's the most important point. I know what you mean; I also love to tinker with my own stuff, but when it comes to my personal data, I've now moved on to the mentioned NAS. :)
This motherboard is not the best choice for a NAS. You should have looked for one that at least has an Intel network controller supporting 2.5 Gigabit speeds, so you don't have to waste a slot.
@@nominevacans7602 It is better to look for something on AMD with ECC memory support (unofficial, of course, and it will be 2 channels) and a pair of Intel 2.5 Gigabit Ethernet controllers on board (you can set up LAGG). It will most likely be a board from a second-tier Chinese brand, but they perform rather well. If you need transcoding, you can use a cheap Intel graphics card)
There's no need for ECC in my setup, nor for 2.5GbE, since I added a 10Gbit card. I don't trust any of these no-name Chinese motherboard brands, so I think for this particular build the N100 ASRock is a good choice.
I've had this case for 3 years, and just recently sold it. It's super great, good airflow and I took off the release tabs and painted them black. Gave it a better aesthetic
Do you remember how the sound profile was? Thinking about moving to a rack, but I'm a bit nervous about the ambient noise in my space!
@@gavination_domination Not the same guy, but I have 3 of these Silverstone cases in my "homelab" and they're pretty good sound-wise. (I do have them in a rack with glass and DIY soundproofing.) I did end up replacing the fans with some Noctua units, which were a tight fit but worked great.
@@noblebullshark gotcha, thanks for the reply. I was hesitant about getting an enclosed rack, mostly because of heat reasons as my drives get quite warm as it is, but my current cases are NAS desktop-style Silverstones. They are...not awesome for managing heat lol. The Noctua tip is a must for sure.
If you want to avoid the hassle of registering a new USB stick every time one breaks: I remember somebody attaching it through an external USB hub. The identifier is then linked to the hub, so you can have backup USB drives ready to go.
What I read is that people use USB SD card readers for this exact purpose.
I've been using the same USB stick for almost 5 years, no problem.
You can also use a USB DOM.
Well this is very useful. I want to upgrade my mini-tower TrueNAS install and have exactly 2U free in my rack. In fact I was looking at this exact case yesterday. Good tip about the power supply venting.
Awesome! Thanks for the feedback :)
I switched from Synology to Unraid and I'm very happy. I can customize what hardware to use, and it's cheaper to upgrade than replacing the whole appliance.
Cool!
Been using Unraid for over a year now and really like it. I don't use the main array though, so I just have a dummy USB in it for now so the array starts. In version 7.0 you won't need a disk in the array to start it. I just use all SSDs in a cache pool. I actually chose not to use ZFS yet and might switch to it in the future; instead I'm using a BTRFS pool. It works well for now.
I recently did the same thing! 😁👍 This unRAID is my second one I've built. It's just a compute box. Then I repurposed the old/1st unRAID box as a truenas scale box for storage for the whole homelab.
Nice! I'm interested in the new version and how it brings new features to Unraid
Thanks for the demo and info. This is awesome. I've been running Unraid for years; it's awesome. Have a great day!
Awesome, thank you!
Considering your previous dismissal of Unraid OS, this is a nice, fair review.
Thx :D
thanks christian i’m looking forward to the comparison video. cheers mate!
Cool, stay tuned! :)
Welcome to the unRAID family. Been using it since 5.0 days. With the upcoming 7.0 release it will have ZFS fully implemented.
Thank you :D
Can't wait for the follow-up videos!
Great video Christian, a follow up with some more tinkering in Unraid would be very welcome.
Stay tuned! :)
Great videos Christian, got my own small homelab but nothing fancy - just a rack with some UniFi networking, a media server, and a few Pis for small services 👍 Looking into setting up Dashy atm.
Nice!
8:50 port multipliers like the JMB575 are never recommended. The JMB585 itself is fine, though it does not support ASPM and will force your entire system to run at a higher power state than it had to.
A better choice would be an ASM1166-based card and updating the firmware so it supports ASPM as well. Those give you 6 ports per chip, which is less than the 8 you get with JMB585+JMB575 but at least there's no port multiplier in the mix.
Yup, and when he adds more than the 5 HDDs the JMB585 supports, it could cause havoc in the system.
Thanks for sharing! I'm honestly no expert in this field, but I will do some tests with other expansion cards at some point.
@@christianlempa I also noticed that this card has a port multiplier: the JMB585 is basically 5x SATA to 2x PCIe 3.0, so the first SAS connector gets 4 SATA ports and the last SATA port is wired to a second SATA switch chip.
SATA3 is capable of about 550 MB/s, and that's the bottleneck for all four ports behind the multiplier. This can be OK for HDDs, which usually do about 160 MB/s, so four of them add up to roughly one SATA3 link's worth of speed.
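That back-of-the-envelope bandwidth math can be sketched in a few lines (a rough Python check using the comment's approximate figures; real throughput depends on the drives and workload):

```python
# Rough check of the comment's figures; these are the quoted
# approximations, not measured values.
SATA3_MB_S = 550   # usable bandwidth of one SATA3 link
HDD_MB_S = 160     # typical sequential speed of one HDD
PORTS = 4          # SATA ports behind the JMB575 port multiplier

demand = HDD_MB_S * PORTS        # what 4 busy HDDs would want in total
per_drive = SATA3_MB_S / PORTS   # fair share when all 4 stream at once

print(f"aggregate demand: {demand} MB/s over a {SATA3_MB_S} MB/s uplink")
print(f"per-drive share under full load: {per_drive:.0f} MB/s")
```

So four spinning drives saturating the shared link land close to a single SATA3 link's speed, which matches the comment's conclusion that it's tolerable for HDDs but a bottleneck for anything faster.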
I was interested in this particular card because it could support SMBus for controlling the LEDs in the backplane. Can you please check that out?
@@dmckrk I’ve ordered 3 new cards, one with the ASM1166, another one with the ASM1064, and a PCIe 3.0 for the x1 slot. I’m gonna do some testing on these cards and see how the performance is when all drives are utilized. Will take some time though
@@christianlempa If you are still talking about M.2, the ASM1064 is just one PCIe 3.0 lane, a rather bad choice. Remember to update the firmware on the ASM1166 :) Which board revision do you have on the card?
I plan to rebuild my entire homelab with a new server, and as there are a lot of possibilities, I have a stupid question.
Which of the following choices is best:
- Install Proxmox first, then install TrueNAS or Unraid and finally the other VMs / containers on Proxmox (you'd be able to use a Proxmox cluster and migrate VMs from one Proxmox node to another)
- Install Unraid first and run the other VMs / containers inside Unraid
- Install Proxmox on a first server and TrueNAS or Unraid on a second server
(more expensive)
As you're using both, I will wait for another video about the TrueNAS / Unraid battle ;-)
Had Unraid and thank god I moved to a Synology 1821. Never looked back, and I'm thankful for that. Good luck with it, you will need it!
lol thanks... I guess? :D
I can't wait for the videos installing applications in unraid......!!!!!
I'm sorry, it probably takes some time :D
Appealing advertisement! Nicely targeted..!
Thanks for the feedback! :)
Finally an Unraid video from you. :)
Cool video, in any case.
Thank you! :D
@@christianlempa if you're really going to play around with Unraid in the near future, also have a look at the forum, people there are always happy to help. :)
Unraid is the beast
I'm just looking into setting up a storage server. Not sold on Unraid atm (extra cost), and also ZFS on TrueNAS seems way more mature tbh. Curious about the comparison video and which one is gonna "win".
if you're worried about the cost, ZFS costs way more in the long run due to how you need to add more storage.
@@ImFascinated Sorry can you elaborate? How is the cost gonna be more? (no BS i'm serious i wanna learn)
@Redostrike To expand your pool you effectively have to double the drive count, because ZFS doesn't support expanding a vdev. Say I have a vdev with 8 drives: to add more storage you need another 8 drives to create a new vdev and add it to the overall pool. With Unraid you can just add single drives one at a time, so it costs less.
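The cost difference described above can be illustrated with a quick sketch (Python; the drive price is a made-up example number, and this reflects ZFS pools without raidz expansion, as in the comment):

```python
# Hypothetical numbers to illustrate the minimum expansion step of each
# approach; only the 8-drive vdev width comes from the comment above.
DRIVE_PRICE = 200   # assumed cost per drive (example value)
VDEV_WIDTH = 8      # drives per ZFS vdev, as in the example

# ZFS (without raidz expansion): the smallest growth step is a whole new vdev.
zfs_step_cost = VDEV_WIDTH * DRIVE_PRICE

# Unraid array: the smallest growth step is a single data drive.
unraid_step_cost = 1 * DRIVE_PRICE

print(f"ZFS minimum expansion step:    {VDEV_WIDTH} drives, ${zfs_step_cost}")
print(f"Unraid minimum expansion step: 1 drive, ${unraid_step_cost}")
```

The gap is just the vdev width: whatever a drive costs, ZFS's smallest step in this setup is 8x the Unraid one.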
Yes! Unraid is so Good!
It is!
For me N100 boards are no go. With only 9 PCI-E 3.0 lanes, any motherboard based on this is just starved for PCI-E lanes and has to make sacrifices. For example PCI-E x16 slot on this mobo is only x2 when it comes to data transfer. m.2 slot is only PCI-E x2. If you plan any add-in cards, then you must basically choose what you want to cut on them. So for a simple NAS or home media server - ok, but for anything more advanced... no thank you.
As for the UNRAID license on the USB - use good SD card and SD card reader. UNRAID will register the serial number of the card reader and you will be free to swap to a new card if old one gets damaged.
The network card is PCIe gen 2, so ~1 GB/s (since it uses the older type of packet encoding with a 20% overhead). You won't get the full 10Gbit/s out of it in a single slot, let alone out of the 2-port card you also mention in your video. To get full 10Gbps speeds on 2 ports, you would have to buy a PCIe gen 3 card.
It totally depends on the size of the build; for smaller servers the N100 is totally fine. I guess my build pushes it to its limits :D Maybe an upgrade might be considerable at some point.
@@christianlempa I agree, for a simple NAS it's fine. For more advanced server, where your NAS is just one of the VMs - not so much :) But everything has its own use case and it's hard to beat the N100 in the power efficiency. I myself am getting a N100 system to replace my main router/firewall - perfectly capable for this use case.
Community Apps are the way of truth and light.
Unraid is Well worth the license fee in exchange for the reliability and convenience. Devs gotta eat too.
This is my view too.
With the time you save using it, it's hard to go back to Portainer or anything else now.
It's just too easy, and that comes from someone pretty well versed with Docker.
Absolutely!
Finally.
UnRAID is just awesome!
This is the way!!
Haha nice :D Yeah, that came out recently. For my new tutorial I'd like to look at version 7
Unraid rocks my socks!
if only it were faster and not so expensive... 250 EUR!
@@naitcalo2141 you spend more on the hardware though
@@naitcalo2141 It is as fast as your cache! But yes, the new pricing that came this year sucks. Good thing I bought my license earlier.
@@naitcalo2141 well lifetime license...
I love the unRAID GUI but was disappointed when they changed the license model. 🫤
Love unraid for my use case. Media storage
Nice!
Love it. I want to rebuild my unraid server for the same reason. But I do have three applications that need to run perfectly. Plex, Nextcloud and frigate. I guess there is enough power on this cpu? And I need a method to get my old unraid pool to zfs without buying 5 new disks somehow…
Awesome! I can't say about Plex, or Frigate, but for Nextcloud it should work.
Maybe I missed it in the video, have you measured the power usage of the device?
Yes, but I'm planning to make an updated video about more testing, etc.
Nice server case. Great potential if ATX MBOs would be supported.
Absolutely :D But it's amazing even though only mATX fit in.
You should have got one of those CWWK boards, as they have SO-DIMM RAM and some N100s can do DDR5, which I have and it works fast. Also TrueNAS Scale works awesome! I think I should start a channel!
I don't trust these boards :D
Would recommend an SFX power supply, preferably modular, for a 2U. Get rid of that cable mess. Those ketchup & mustard cables look to be impeding the airflow a bit. :-)
Haha yeah that's true :D
Good video sir !! Awesome !
Thanks 👍
Hello Christian. I would really like a truenas vs unraid video. go for it!
Stay tuned! :)
Unraid thumbs up Christian
Thx :)
Oh man, the amount of stuff you're cramming in here again haha. I'm curious how long it takes until the M.2-to-8x-SATA adapter gives you trouble; I had 2 in use and unfortunately they're honestly just junk ;)
Which chip is on your 8-port adapter?
@@1xXNimrodXx1 It was some ASMedia, no idea anymore. I've since swapped the things out for a proper HBA card.
@@harry19832601 Ah ok, let's see when mine burns out. It's also an ASMedia and has been running for a year.
Well, so far it hasn't given me any problems, but I'll gladly let you know if it does ;P
@@christianlempa Double-check everything and run extensive burn-in tests. The JMB585 had a few firmware updates, and some of them break S.M.A.R.T. or the high-speed modes. It can also destroy drives in a RAID, causing long rebuilds :(
Unraid is really a damn good system. BUT the array is also damn slow, at least in write speed. You have to accept this, and then it's a nice server. But you can work around it with pools...
Congratulations for the video, what is the consumption of this server when the disks are spin down?
Unraid? For me an absolutely no-go.
Please don't use USB 3 sticks. Most of them produce more heat than a USB 2 stick. The lower speed doesn't matter: booting doesn't take long, and the stick is only needed for the boot process. The system itself runs entirely from your RAM.
I was super worried when I saw those Kingston drives going in during the video; those would definitely explode as cache drives. The Crucial drives are much better!
I agree with the Kingston drives, I think they cheaped out on DRAM cache for those models.
Yeah, I switched those ones quickly :D
It would have been cheaper to put the storage card in the x16 slot and buy an M.2-to-PCIe-slot converter for the network card. But your solution is cleaner.
would be an interesting option as well
“Welcome to the dark side!”
He shouted as he witnessed Christian unleashing his hardware.
I'm curious why you chose to build with consumer parts in a server case rather than going for a used enterprise server. Seems like that could potentially offer better value/performance. Any particular reasons you went this route? Btw, really enjoyed watching the build process!
Power Usage is one, but also the noise, as the "server room" is right besides the studio without a great door that would absorb the noise :D Also I just like to tinker a bit from time to time.
@@ceeap3680 around 180W (but i think you can get the server + 1 year of the electricity bill for the same price)
The only reason I have avoided Unraid is speed concerns. Do you find the reads/writes acceptable for most people (even with a cache drive)?
The speed is as expected, but I haven't done any extensive testing, I'm planning to make an update video about this at some point
It would be nice if you could cover how to connect one Postgres container to many applications like Nextcloud and others. Another point that gives me a headache is restarting the server after putting it to sleep with the S3 sleep plugin. I tried a lot with rtcwake in the BIOS, with scripts and commands in the plugin before sleeping, but it does not start after going to sleep overnight. Maybe you could also cover whether it's good or bad to spin drives down or turn the server off / put it to sleep. For me it's about efficiency, and it saves me about 30 watts.
🤔 oh i thought the elephant in the room was the piano back there. 🧐🤣
Haha! You noticed that one :D
@@christianlempa 😁👍
Did i miss the power consumption?
It depends on whether the disks are in idle or not, with all disks spun down it's around 35W, which is still a lot in my opinion. But that might be related to the power supply and PCIe cards.
@@christianlempa Just check powertop across everything to find out that the M.2 card fails at this, maybe the 10G card too. :/
No ECC RAM on a ZFS NAS is a lost opportunity. I recently experienced bad ram and corrupted data on my desktop (non-ECC). It didn't matter there, but I wouldn't have wanted such an issue on my NAS.
Hm, I'm not convinced by the need of ECC ram, also I'm not using ZFS.
The N100 doesn't support ECC mode, so such a feature would change much more than just the RAM modules.
It's always worth running memtest86 overnight to check everything. Once I got four bits flipped at about the 34 GB address, and the same issue (in the same place) with swapped modules. It turned out to be caused by high temperature, and a simple small fan solved it.
@@christianlempa Fair enough Christian....I know it's an endless debate on ZFS - but I didn't realise you weren't using ZFS :)
What is your power consumption for this build? With and without the HDD standby.
I'm still testing and improving my current build, as the power consumption is a bit too high in my opinion. I will keep you updated when I sorted out the problems
Can you make a video on networking in your homelab , like what firewall / router and switch you are using like pfsense or ubiquity.
I did 2 videos, one for my XG firewall and another one for my Sophos Switch, maybe that's helping :)
@@christianlempa Yea I found them, hadn't seen them before. I'll take a look, thanks ! I'm using pfsense by the way.
So the question now is: what do you prefer, an Unraid server, or a Proxmox + TrueNAS + Portainer server? I've run an Unraid server for 3 years and I'm pretty happy... I now want my OPNsense virtualized to save some energy costs (I'm from Germany too :) ). But I always had issues with a virtual OPNsense as the router in Unraid: Unraid needs a fixed IP, and when you reboot the device or try to connect while the router VM is offline, you need to enter a static IP on your client or you can't reach your Unraid server because your router is offline, and so on :D
Honestly, I haven't fully decided, yet :) I'm not committed to TrueNAS or Unraid either, I like both, but what I'm gonna use in the future might also change from time to time.
@@christianlempa I understand :) I also have the old Unraid license... 50 euros for lifetime... that makes the decision harder
I have my pfSense running in a VM on one of my Unraid servers.
Just make sure you have the array set to start on reboot/startup and your OPNsense VM set to auto-start as well.
Hey Christian,
really cool stuff, but I think a Proxmox VM with two hard drives attached would be quite enough for the cluster at home, and I would host a NAS separately.
Not only because it saves time and work, but much more importantly: it saves resources, which protects the environment. I know that topic matters to you, so I'd like to encourage you again to make something like this a bit smaller and save on hardware :)
Thanks ;) I always build my devices with that thought in the back of my mind, which is why it's also dimensioned quite well for my purposes.
Have you considered running SynologyOS with ARC Loader on this storage server?
No, seems like most people either use TrueNAS or Unraid, that's what I was focusing on
I didn't know that 10gtek had their own store.
If silverstone made a 24 bay version of this it would be perfect
Oh they have bigger cases that are great as well!
My Supermicro X9 motherboard can't boot from NVMe, but with a USB stick running Clover you can.
Nice video, but did I miss the speed copy part or is that for a future video?
That's gonna be a topic for a future video
Thanks mate!!
@Christian Unraid, with its XFS storage, has one caveat that people should at least know before diving in. Compare Unraid with ZFS: let's say you want to create a VM with 1TB of storage and your server has 8x 800G drives. In traditional scenarios, that would be 6.4TB raw (5.6TB usable after one drive for parity) to use as you wish. While the usable capacity is 5.6TB in both cases (Unraid vs. others), in Unraid the largest single file you can create cannot be larger than the largest drive in your storage array, assuming that file will be stored on the XFS array. This is minor to most, but it is an important distinction. Think of Unraid as a pool of devices, not a storage pool. (edited for typos)
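The capacity figures in that comment can be sanity-checked quickly (a small Python sketch of the 8 x 800 GB example above):

```python
# The 8 x 800 GB example from the comment, with one drive's worth of parity.
drive_sizes_gb = [800] * 8

raw_gb = sum(drive_sizes_gb)              # total raw capacity
usable_gb = raw_gb - max(drive_sizes_gb)  # minus one parity drive
largest_file_gb = max(drive_sizes_gb)     # cap on an Unraid XFS array

print(f"raw: {raw_gb} GB, usable: {usable_gb} GB")
print(f"largest single file on the XFS array: up to ~{largest_file_gb} GB")
```

The usable capacity matches in both setups; the difference is only that the Unraid XFS array caps any single file at the size of its largest drive.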
That's not an Unraid system limitation. That's a limitation of using an "Unraid Array" specifically - as of version 7 you can use ZFS Pools without a traditional array. Raidz included.
@@espressomatic First, only XFS can be used for the main array function. ZFS can be used in the pools, but ZFS functionality was only added recently (~12 months ago? unsure). My initial reply was referring to the limitation of the XFS array, not the Unraid OS.
If I am incorrect, please let me know. I welcome better understanding.
Thanks for sharing! This is indeed something I didn't consider, yet. However, none of my files is larger than 4TB :D
You can use ZFS (and other) filesystems in the array but as of version 6.xx you still have to have a parity drive
Spaceinvaderone has a video on ZFS drive in the array.
If I could insert a photo I could show you my setup. ZFS in the array is great if you are using snapshots
Hi Christian, a question please, what is the keyboard model you are using and is it a mac layout? Thank you
It's currently a Ryze K633GCO, which I got cheap :D Unfortunately it's not Mac compatible, but you can easily re-configure the keys with Karabiner, so it works great. I also added some custom keycaps to see how that works. However, I'm not fully satisfied, to be honest... still looking for a better one, or maybe better keycaps, which are hard to find for German ISO :/
Thank you for your answer. I have the Magic Keyboard for my Mac mini, but I'm looking for a mechanical one with good compatibility. The Logitech MX Mechanical Mini for Mac seems not bad but expensive at €160, so I'm looking for cheaper alternatives with a French AZERTY layout, which is also not very simple to find.
Does this M.2-to-SFF adapter work as an HBA? I mean, the electrical circuits solely connect the drives to the slot, right (no RAID function)?
Is this also true for the ASMedia ASM1166 chip on the other PCIe card you used first?
I've not had any problems with software RAID and these cards. The main thing you want to avoid with HBAs is a controller card that has internal RAID functions, which those adapters don't have, so they should be fine, from my understanding.
what's the power draw for this build?
It's around 35W, which is still a bit high in my opinion, but I'm testing other combinations of power supplies and PCIe expansion cards in the future, I'll keep you updated :)
Aahhh Zhe German is excited for new hardware, this must be good.
lol :D
Wasn't the adapter card x4 rather than x1? I have the same problem and ordered an x1-to-x16 adapter (running at x1), though only for a USB card, since x1 is too slow for everything else.
Besides, USB 2.0 rather than USB 3.1 is recommended for the boot stick, since it otherwise gets too hot and gives up the ghost much sooner.
The PCIe slot is open-ended, i.e. you can plug in a larger card, but it will then run slower than it could. Thanks for the tip about the USB stick.
Hm... the motherboard choice isn't optimal for my taste. I'd prefer to purchase some $80 new or $80 used (but better) mITX board and a $40 used AMD Zen 2 processor with embedded graphics. It would have an x16 slot which I'd split 8+8 (almost all AMD motherboards support 8+8 bifurcation, and maybe I'd be lucky enough to find one for a good price with 4+4+4+4 and/or ECC RAM) for 10Gb and 4+ SATA. As a result: more and faster RAM, a higher CPU score, 1-2 M.2 slots, for the same price they ask for the N100. And a 20W TDP isn't a benefit that could change my mind.
PS Oh, with this case supporting mATX, things would be much simpler. No bifurcation, Y-splitter, or riser cables would be needed, the motherboard would be cheaper, and I'd expect to find 2-3 M.2 slots, maybe 6x SATA, and less searching for ECC support.
How much did you buy the case for ?
Somewhere around 400 euros
Unraid is OK. The 2 issues seem to be 1. USB drive for the OS, and 2. they increased the price by $20.
No. 1 is actually an advantage. I don't know why people think it's a problem to boot from USB. It's a server, so you won't use more than 2 USB ports anyway, and more importantly you don't waste an HDD/SSD connector or NVMe slot on 2 GB of OS system files that will run just fine from your 128GB+ of RAM, probably faster than from an HDD/SSD.
For this build, booting from USB is indeed an advantage. It depends on the OS and how well it's built to work with USB boot. As Unraid is built for that, I wouldn't see a reason why this should be problematic.
How much Watt power does your server use in this configuration?
It depends on whether the disks are in idle or not, with all disks spun down it's around 35W, which is still a lot in my opinion. But that might be related to the power supply and PCIe cards.
@@christianlempa Oh, then I'll stay with my DL380 Gen9 that has many SSDs, 512GB ECC, and draws 99W running some VMs...
Whats the total system power draw?
Just built i3-14100 with 4 SSD and unraid. Avg system idle is 28 watts.
It's somewhere around 35W, which I believe comes from the PCIe Cards and maybe the power supply that might not be the most efficient one.
@@christianlempa with the drives?!? Whoa.
I was torn between the N100 and the i3-14100
I literally bought and built my system last Thursday lol
I built a home server about a year ago, an Epyc 7313P, and my regret is I feel as if I overbuilt it. That thing idles at 150 watts and hits almost 200 when doing a file transfer. Hahaha
Which isn’t bad I guess, just power in California is almost the price in Germany.
So I use that as my “mother vault”, keep it off 90% of the time, and am going to use this new Unraid system I built for all my docker containers, VMs, etc.
What’s interesting is I found during boot, system will spike to 90 watts lol, then go down to 28 once all containers and VMs are started.
overall power consumption??
It depends on whether the disks are in idle or not, with all disks spun down it's around 35W, which is still a lot in my opinion. But that might be related to the power supply and PCIe cards.
Lots of Unraid videos are popping up. I used to use it before the TrueNAS Scale / ZFS hype… why all of a sudden Unraid again?
Yes, it's obvious. Looks like they started a new round of paying YouTubers for advertising.
Looks like their new subscription model is not running well. XD
Would have been nice, if the cpu and motherboard could support more ram.
You can always just get a different motherboard. For his needs it was perfect.
@@bluesquadron593 Don't think there's any N100 boards that support more ram.
@@BartAfterDark I meant a non-N100 mobo. But for what he is using this server for (low-power storage), it is perfect. Thanks to the HW decoding, he could also make this a Plex server.
I think they could make a slightly more expensive version and spend those 9 PCIe lanes well, instead of an x2 and an x1 slot plus 2 SATA ports.
A faster network port, a stronger SATA controller, and multiple M.2 slots are all missing here 😢
Although it's power efficient, it's worth noting that spinning down the disks can give them a shorter life than if they were otherwise spinning 24/7. Not because of spinning them down, but spinning them back up from 0 RPM each time it needs to be accessed, which wears out the motor quicker than staying at a constant speed if they weren't spun down. You should also be able to do this in TrueNAS but I don't think it's recommended for this reason in particular.
While probably valid, I also don't understand the argument regarding the ease of expanding your pool with Unraid as opposed to ZFS. While it's true that you'd have to destroy the entire vdev you wanna expand, it should already be planned from the get go in my opinion. For example, I'm planning a 8x18TB build soon, which I'll most likely configure in a RAID-Z2, whenever I run out of space on that vdev, it's just a matter of creating another vdev and you have essentially expanded your storage (although not the vdev itself).
Not too worried about the spin-down now, as I was when I first started using Unraid, since I know people who have had drives running in Unraid for over 6 years with that feature turned on. But yeah, comments like this made me think twice initially. Not anymore.
@@NathanFernandes The point of my comment wasn't to dissuade anyone from using it, but just to inform. You should never decide based on random comments anyway, just gather the info and make your own informed decision.
I don't think I would be worried about it myself, but given the option between that and spinning them 24/7 with ZFS integrity checks in place, I'd pick the latter.
That said, just because they haven't failed in 6 years for some people, doesn't mean that spinning them down isn't harming the disks and potentially giving them a shorter life, but it may be a worthwhile tradeoff for them.
Just like it doesn't mean much the other way around, there are cases of 24/7 operation where the disks fail within days or months, and others where they last 10 or 20 years, it's all relative.
8x18TB means that you have money. Unraid is for people who are short on $$, as they can add HDDs one by one to expand with no penalties. And with more vdevs there's a higher chance of the whole system crashing, because one vdev's death takes the pool down. And you have to add at least 2 more HDDs at a time (I think a vdev can't be a single HDD, but maybe I'm wrong), so it doubles the price of adding storage.
As for spinning down HDDs, I see the use case of Unraid as media storage for infrequently accessed files. Perfect for a home media / backup server. My SAS 3TB drives use 16W of power each, so my server of 12 drives uses more power to spin the drives than to do its other tasks (power consumption: 150W with HDDs asleep / 250W idle / 350W peak, on a Dell R510).
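Those figures are easy to sanity-check (Python, using only the numbers quoted in the comment above):

```python
# Numbers quoted in the comment: 12 SAS drives at ~16 W each, and a
# whole-server draw of ~150 W with the drives asleep (Dell R510).
DRIVES = 12
WATTS_PER_DRIVE = 16
SLEEPING_SERVER_W = 150

drive_power_w = DRIVES * WATTS_PER_DRIVE  # power for the disks alone

print(f"disks alone: {drive_power_w} W")
print(f"rest of the server (drives asleep): ~{SLEEPING_SERVER_W} W")
print(f"disks exceed the base load: {drive_power_w > SLEEPING_SERVER_W}")
```

With those figures, the spinning disks alone draw more than everything else in the chassis combined, which is the commenter's point about spin-down being worthwhile for rarely accessed media.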
I am interested in how to utilize a Mac Mini M# as a server to replace my current old Intel NUC. I am mainly running MicroK8s (Ubuntu Server) as my homelab platform for Home Assistant and other pet projects. But other generic options are fine too. I would be interested to see more on this PoC (video) 🙂
Cool! Keep in mind Mac Mini with Apple Silicon is ARM, so you will be limited on apps similar to Raspberry Pi.
"The nice UNRAID", which just happened to change their license model to the standard business model of yearly licenses...
19 February 2024: "Upcoming Changes to Unraid OS Pricing"
> Starter and Unleashed licenses will include one year of software updates with purchase.
> After a year, customers will be able to pay an optional extension fee, making them eligible for another year of updates.
What's the problem with paying for a good product?
@@christianlempa You already had to pay in the old license model, too ^^...
And the "good product" is, in my opinion, not really good:
* The old Basic license, and I think also the new smallest license, is very hard to set up with a redundant cache and some RAID 5/6-compatible modes because of the HDD limit, and is therefore, from an IT perspective, more of a "you don't want to use it" case, unless you want to save power.
* The USB stick problem you also ran into has to do with the requirement of USB 2.0 (NOT 3.x) sticks with a max of 16 GB in size, and several vendors are not allowed / are ignored because Unraid can't read their UUID or doesn't accept them.
* If your USB stick breaks, you can request (and pray) that your license is transferred to a new one; otherwise you have to pay again.
* The "good stuff" is only available when you install the non-commercial community repository (while the basic licensed functionality is... "very basic") and has nothing to do with Unraid itself.
But there are many YouTubers/influencers, even German channels, who "like" this mass of functionality (which is community-driven, not Unraid-driven)... so, for everyone, a "happy using until the crash comes". 😁
@@Reiner030 Think you need to update yourself a bit.
Yes, the new license model is a bit of a downgrade, but if you can live without new features added to the core system, there is no need to pay. You will still receive security updates.
USB 3.x is supported, and the recovery process is much easier now.
You have the option of manually installing Docker applications if you wish; it's just a lot more complex than through Community Applications.
@@Hansen999 Yes... "tons of COMMUNITY APPS"... so NOT ones from Unraid itself, which still gets the "credit" / payment for them...
And my license is just 1 year old and barely used because of this mess of setup / information for default usage; many influencers can't even think about different systems to compare with.
Technical evangelists at their best... AMEN
oh no you joined the dark side😅
:D
Why are you paying for a less feature-rich solution that isn't as mature as TrueNAS, which is free?
Yeah, I don't get Unraid with its bucket-style storage; it's confusing. Your files are stored across several drives using a water-level system.
Both a TrueNAS and UnRaid user here, and I think I can make the case for UnRaid. To be clear, both are dope, and I think both serve a great set of use cases. UnRaid, in my opinion, takes the cake for general ease-of-use and setup, as well as general experimentation and light virtualization (meaning, "not Proxmox" levels).
TL;DR: I think UnRaid deserves to exist, and not every NAS has to be enterprise-grade, as there are some very real usability tradeoffs that come with going that route.
For starters, you could argue that unRaid is the inherently less power-consuming option (at a real performance cost) if you opt not to rely on ZFS for your filesystem. Since it doesn't stripe files across all the drives by default, you'd only really need to spin up the ones that are accessed most often. This is use-case dependent, of course, but for the home user, I think it's a win. You could also argue that unRaid can contribute to longer lasting drives, again, if you opt out of ZFS, since without true striping, you're not wearing drives evenly.
Also, there is the ease-of-use aspect for newer homelabbers. In my experience, it was just much quicker and easier getting VM's and Docker containers up and running in unRaid, though Christian isn't making much use of them here. Pooling and expanding drives also tends to be easier as well.
There is also the ubiquity of storage mediums you can use. Adding more disks of varying brand/size is rather freeing. For a home user, this can be very nice, since they can repurpose SSDs/HDDs in old machines. TrueNAS is VERY finicky with hardware, especially the drives you want to use as caches, as a point of reference. @FuzzyKaos mentioned that the bucket style storage can be confusing, and I don't disagree, but I will say that you can restrict certain files/shares to certain drives if you'd like. It does offer a lot of control and a lot of niceties that TrueNAS doesn't.
That said, it has its definite cons. Raw performance, namely, goes to TrueNAS (dat read caching tho!). And while UnRaid does support ZFS now, it's still rather early days, and I wouldn't recommend using UnRaid exclusively for ZFS yet. There is also the matter of cost. $250 for a lifetime license is exactly...*checks notes*...$250 more than TrueNAS is lol. But, seeing as TrueNAS does open their OS for normies and cater exclusively to Enterprise customers creates the ability to monetize the way they do. UnRaid just wasn't built for that niche, and I think it's a reasonable enough price to warrant the work the devs do.
The Unraid Array implementation does seem really nice. I personally wouldn't ever pay for something that only boots from a USB drive which is the single entity that has the activation/license attached to it, but if it did boot from a real disk I would think about it. Yeah I know many license dongles exist and they work using USB drives (even Microsoft did/does hand out USB drive as license dongles) bla bla bla, but come on... give me a licensing server implementation that I can setup or something that I can make backups of. USB? No, not with my money at least..
@@cheebadigga4092 Why do you want to waste a whole HDD/SSD on 2 GB of OS files that can be loaded into RAM?
@@FuzzyKaos You can disable files flowing to other drives if you don't like it. Also, you can choose how deep the split happens; for example, moves in the root folder can be split only at level 1. So you could have 4 drives of 500 GB and 1.5 TB of movies. The movies will automatically be split so that each drive has a Movies folder with subfolders named after the movies, but it would not split movie1's files across drives 2 and 3, for example.
why tho
Why not?
^^ this :D
If you take all the hardware, case, and license costs, you could almost buy a Synology DS1821+, which offers the possibility to expand with 2x 5-bay expansion units and can also use an Intel X520/710 network card. This would avoid the lousy ASM1166 controller, which is honestly not sufficient for a homelab (I played around with it). Furthermore, it has an ECC memory option and also provides options to run Docker containers or virtual machines. I would be interested in how much power consumption you end up with. The ASM1166 is not doing a good job on that point, and neither is the X520. Both can be improved, but it takes a lot of extra work. Sorry, but I'm not impressed.
Well, some others have especially recommended an ASM1166 (which I have not used in this video, maybe some even worse controller). However, it still works for me and I enjoyed the tinkering. Even though I'm not an expert in this field, I still like to build my own stuff, no matter the cost :D
@@christianlempa I know it was popular on the Unraid board. I hope it works for you and you won't face any data loss. That's the most important point. I know what you mean: I also love to tinker with my own stuff, but when it comes to my personal data, I've now moved on to the mentioned NAS. :)
such a waste to build systems without ecc
PCI-E x8 LAN card in a PCI-E x2 socket? Very poorly designed server
Why?
This motherboard is not the best choice for a NAS. You should have looked for one that at least has an Intel network controller supporting 2.5 Gigabit speeds, so you don't have to waste a slot.
Do you have any recommendations for motherboard?
Why? He can populate all the bays.
@@nominevacans7602 It is better to look for something on AMD with ECC memory support (unofficial, of course, and it will be 2 channels) and a pair of Intel 2.5 Gigabit Ethernet controllers on board (you can set up a LAGG). It will most likely be a board from a second-tier Chinese brand, but they perform rather well. If you need transcoding, you can use a cheap Intel graphics card.
There's no need for ECC in my setup, nor for 2.5GbE, since I added a 10Gbit card. I don't trust any of these no-name Chinese motherboard brands, so I think for this particular build the N100 ASRock is a good choice.
TrueNAS scale... Better.. Free..