When I was researching parts for my server, the hardest part was choosing a RAID card. Figuring out what RAID level to go with at an acceptable price was the biggest challenge, and then I found ZFS and realized I didn't need what I had been looking at after all.
Awesome video! This is the most unbiased explanation I've come across and will watch this often as part of my reference material. Was extremely concerned that my hardware raid implementations were not good enough and were obsolete. Really glad to have found this channel.
Glad it was helpful!
@@ArtofServer..If good deeds and trying to help people out has a 'payoff', I hope you get yours, handsomely! I don't know if you believe in that sort of thing. But, from the few video/tutorials of yours that I've seen, you earned it. You've saved me from a lot of uncertainty & indecision. 🇺🇸 👍☕ It looks like others have commented the same sorts of feelings.
I really appreciate your channel. If I ever forget which card I need for a particular purpose, I can always rewatch one of your vids. Thank you.
Thank you for watching! Glad this channel has been useful to you!
Great video. Learned a lot. Especially liked the "people" component of choosing between RAID and HBA! Brilliant.
Thanks! Hope this was helpful! :-)
Thanks for reminding me of those old golden days at the start of my career with LSI.
You are very welcome 😁
Many thanks from Erbil , awesome illustrations!
Thank you! Wow, I didn't know I had viewers from Kurdistan!
@@ArtofServer Many thanks 🙏
You hit the nail on the head.
Thanks! :-)
You explained artfully. Fantastic!
Thank you kindly!
Awesome video and great channel. Been in this business for many years and always can learn something.
Thank you! :-) What did you learn from this video that was new to you?
@@ArtofServer I mainly only did Windows Server and hardware RAID. Only recently started looking at TrueNAS solutions and now understand more about HBAs and how they function vs my traditional method.
I love watching your videos. I learn so much from them.
Thanks! Glad you are benefiting from my content! Thanks for watching!
Very cool comparison. Thanks a lot!
My pleasure!
So much information provided. Thanks
Glad it was helpful! thanks for watching!
Unbelievably informative thank you.
Glad it was helpful!
Very informative and nicely explained. Good job.
Glad it was helpful!
Great video! Thanks for taking the time to create it and help out !
Glad it was helpful!
Thanks for your time and effort. (22:19 lol)
Thanks for watching!
While I’m not new to HBA/RAID cards, this video still filled in some holes in my knowledge. Some of the card specific details were helpful too. I didn’t know about the heartbeat light for example.
Hi Tom! Good to hear from you! Thanks for watching!
I know everything in the video.
After watching the video lol.
Is there a way to disable the light?
Another great video. Thank you so much. Will be nice a video talking about the difference between raid, 0 1 10 5 6, etc and your experience with them with use cases.
Request noted. Thanks for watching!
Is it true that if a RAID card fails you need to replace it with the exact same model to keep your data? If so, that seems like a pretty significant downside compared with a software solution.
Loved the video, great presentation. Clear, concise and easy to follow.
That's a great question, and no, it is not true. I'll be making a video on that exact topic soon.
You are being 'served'. 🙂 Check this here: ua-cam.com/video/6EVjztB7z24/v-deo.html
Raid is not backup! If you really care about the data you should back it up.
@@LubomirGeorgiev You're not wrong, but replacing a single PCIe device and quickly resuming normal operation is always going to be hugely preferable to replacing the device AND restoring an entire system from a backup.
@@DannleChannel I agree. It really depends on the raid controller used. I have been able to import a RAID array from an older Dell PERC to a newer one without issue. In that particular scenario, the server just died and we bought a new one and moved the HDDs to the new server.
They have some documentation about this exact topic. I can only speak about PERC since we only use Dell servers. No idea how other vendors manage this.
Excellent video. Thanks for your hard work.
Thanks for your kind words! :-)
So good! Thank you so much for this ❤
Glad you like it! Thanks for watching!
thanks a lot for your videos, I really appreciate them. I still don't see the difference between the IT-Mode on my PERC 730 in the BIOS and a flashed RAID Controller to an IT-Mode. Is there any?
The main difference between a MegaRAID controller running in "HBA mode" vs a true HBA firmware is the driver's behavior. For most use cases, the difference does not matter, so you can often get away with doing that. However, "HBA mode" does not change the PCI ID, which is used to determine the driver to attach to the device. So "HBA mode" still uses the MegaRAID driver, not the HBA driver in the OS. In some OSes, the different drivers may support different features. For example, I've seen low level access to the drives behave differently in some cases.
Got any tips on how to set up a recovery test? E.g. how to intentionally recreate common 'solvable' scenarios on some practice drives with test data?
At the very least, simulate a dead drive by pulling a drive out from a raidz or mirror pool. Then insert a different drive and practice restoring redundancy. As for data, just load up a random file set with sizes similar to your use case. Run a data scrub to check the integrity of the data after the resilver.
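For anyone who wants to script that drill, here is a minimal sketch that wraps the OpenZFS CLI from Python. The pool name `tank` and the device names are placeholders for illustration; adjust them to your own hardware, and only run this against a throwaway practice pool.

```python
# Minimal sketch of a ZFS failure/recovery drill, assuming OpenZFS is installed
# and a throwaway practice pool exists. Pool/device names below are placeholders.
import subprocess

POOL = "tank"              # hypothetical practice pool
FAILED_DEV = "sdb"         # drive you "failed" (e.g. physically pulled)
REPLACEMENT_DEV = "sdf"    # spare drive you inserted

def zpool(*args):
    """Run a zpool subcommand and echo the command being run."""
    cmd = ["zpool", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Confirm the pool noticed the missing/failed member.
zpool("status", POOL)

# 2. Replace the failed member with the new drive; this starts a resilver.
zpool("replace", POOL, FAILED_DEV, REPLACEMENT_DEV)

# 3. Once the resilver finishes, scrub to verify checksums across the pool.
zpool("scrub", POOL)
zpool("status", "-v", POOL)
```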
Outstanding lesson. Subscribed
Thank you!
Thanks for sharing and making this video, great job!
Thank you too!
Great explanation!
I absolutely agree with your closing "most important" point. It's tempting to go for "the best" whether that's the fastest RAID or the fastest car. When things go wrong, what are you ready to handle?
I'm off to cuddle with my retired Drobo.
Lol
Hello, Great video thanks for making it.
I must have missed something though, as I don't know if the card will work on Windows 11 to add more SATA drives, since all the ports on my mobo are taken.
Also, what HBA SAS card is best for a large number of drives (not RAID)?
Thanks
Thanks for watching. I think your question is answered by a different video. Check out this video: ua-cam.com/video/hTbKzQZk21w/v-deo.html
Hope that helps you out! :-)
@@ArtofServer Hello, Thanks for the fast reply and help.
When you were comparing hardware RAID vs ZFS, you left out a very important part about data integrity. ZFS uses checksums to guard against bit rot. Hardware RAID has a feature called "patrol read" which performs background scanning of disk blocks, but most documentation is very vague about the exact method. Does it use checksums like ZFS, or does it simply read data blocks without checking their integrity and if the drive returns read error, only then it attempts some sort of recovery?
That's a good point to compare too. However, I think it's a bit more complicated. HW RAID uses both patrol reads and consistency checks to ensure data integrity, and has parity or redundancy to attempt recovery. ZFS is similar, but uses checksums on each record block and does it all in RAM. It's not trivial to say which is superior. For example, if you're not using ECC RAM, ZFS can really mess up checksums in RAM without knowing it. So at some point, ZFS depends on hardware error detection/correction. HW RAID basically just does all of that within its hardware. Either way, it's a good point to mention, but I think it warrants a more in-depth discussion.
@@ArtofServer At least for my RAID card (LSI 9261-8i) and RAID0 virtual drive, patrol read only checks for "media errors" which does not involve any checksum verification. In fact, patrol reads on SSDs are disabled by default. Without ZFS, the hardware RAID will never detect bit rot, unless the disk firmware detects it and returns read error. The consistency check is even less effective.
Does ZFS support SGPIO signaling of the LEDs on the backplane? Does HW RAID? I feel that's important to know. Otherwise, how will you know if a drive member failed?
No, SGPIO and backplane LED control is not a widely adopted standard. Each vendor seems to choose their own way of doing things. If the hardware vendors can't decide on a common method, I don't expect the software layers above to handle it. You can however, write your own code to hook into ZFS and manipulate the LEDs (if your controller and backplane are capable) as desired for your specific setup.
@@ArtofServer I bought a 9260-4i and my LEDs are working now. RAID is configured in the management utility. If a member fails, the red light comes on. So much easier than writing code.
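For those who do want to roll their own LED control as suggested above, here is a very rough sketch of the idea, assuming a SES-capable backplane/expander that sg_ses (from sg3_utils) can talk to. The enclosure device path and the disk-to-slot map are placeholders; on real hardware you would more likely wire this into ZED (the ZFS event daemon) rather than polling.

```python
# Rough sketch: poll pool health and light a fault LED on a SES enclosure slot.
# Assumes OpenZFS and sg3_utils are installed; /dev/sg4 and the slot map are
# placeholders for whatever your enclosure actually exposes.
import subprocess
import time

ENCLOSURE = "/dev/sg4"                 # hypothetical SES enclosure device
SLOT_OF_DISK = {"sdb": 1, "sdc": 2}    # hypothetical disk -> enclosure slot map

def degraded_disks(pool="tank"):
    """Return disk names that zpool status reports as not ONLINE."""
    out = subprocess.run(["zpool", "status", pool],
                         capture_output=True, text=True, check=True).stdout
    bad = []
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] in SLOT_OF_DISK and parts[1] != "ONLINE":
            bad.append(parts[0])
    return bad

def set_fault_led(slot, on=True):
    """Use sg_ses to set or clear the fault LED for an enclosure slot."""
    action = "--set=fault" if on else "--clear=fault"
    subprocess.run(["sg_ses", f"--index={slot}", action, ENCLOSURE], check=True)

while True:
    for disk in degraded_disks():
        set_fault_led(SLOT_OF_DISK[disk], on=True)
    time.sleep(60)
```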
Great video, keep up the good work.
Thanks, will do!
Excellent ... Well done ...
Glad you liked it!
Will ZFS arrays typically support 10Gb Ethernet transfers or 25Gb links?
that highly depends on how you configure your ZFS pool, what type of transfers you are doing (sequential vs random, large blocks vs small blocks, etc.) and the performance characteristics of your storage devices.
Thanks for your videos, they are very informative.
Since I am new in this area please excuse my question.
In this video you have clearly compared two different hardware controllers (RAID vs HBA).
As I know, we can use some RAID controllers in HBA mode. In this case, what is the difference from a normal HBA controller? If nothing, why do we need HBA controllers? We can always get RAID controllers and, if required, switch the mode. Thanks in advance for your replies :)
Thanks for your question. It's not a simple answer because there are RAID controllers, and then there are RAID controllers. :-) The type of RAID controller I'm comparing in this video is a true hardware RAID controller with cache and is designed to work only as a RAID controller. There are also other RAID controllers that run similar firmware, or the "Integrated RAID" (aka IR mode) firmware, and these do not have cache and are not really meant to be used as RAID controllers. They are sort of the "poor person" RAID controllers and most of these types of controllers can also run the IT mode firmware and be converted to HBA. *Some* of the true hardware RAID controllers with cache, can also be converted to IT mode firmware, but that was not their original intended design. And then there are also some RAID controllers that have a "HBA mode" in their RAID firmware, which behaves very much like IT mode firmware, but also has some differences. So, there's a lot of if this, if that, in between a pure hardware RAID controller and a pure HBA controller.
Thanks for your reply. It's very interesting, and as I understand it, there are a wide variety of dependencies depending on the manufacturer.
On the other side, there are HBA ports for network connectivity. Why those are also called HBAs is interesting too :)). Anyway, thanks for your answer and I'll continue to follow your videos.
Regards from Germany.
Thanks! Learned a lot, appreciated
Glad it was helpful!
Got those 5 machines home last night. Looks like 4 have the PERC 6/i and one has the HBA. I'm stoked. Trying to decide whether to go SATA or SAS for drives right now. Big thanks to you; the setups are moving right along and I'm learning a lot.
Congrats and have fun with your new machines! :-)
I like SAS drives for labs because the pulled enterprise SAS drives are so cheap. As far as buying *new* SAS drives to store everyday data that I'll keep for years, I'm not so sure.
I ended up finding and going with some used Dell 600GB SAS HDDs with caddies for 25 on Amazon. Best deal I found for filling up the R610s. Can't wait till they get here; then I can start playing around with these RAID controllers.
@@ArtofServer Thanks! Fun indeed. The caddies came in the other day, so I got Proxmox up and running. Had a fun moment last night when I changed the vmbr0 IP and locked myself out of the GUI. >.< lol As a Linux noob I stumbled my way through the command line but got it working again. Currently working on figuring out how to update the Lifecycle Controllers, which are way out of date. I've been having some boot loop issues, maybe brownout related, but figured I'd get those things updated either way.
Great ppt man thanks
Thanks man! :-)
Great video! It would be helpful to know more about this from a perspective of use cases other than RAID. I only need a SAS card to interface with a tape drive, and a few old SAS drives I have picked up in bulk lots from auctions. I don't use RAID for redundancy because I don't need always-on, my workflow is based on on- and offsite backups. I still don't really know whether one of these cards would be better than the other for my use case (although I do understand more about them in general, thank you).
If you need help understanding how the various HBAs available compare to each other, then this video might help you: ua-cam.com/video/hTbKzQZk21w/v-deo.html
what are you doing for on and off-site backups?
Yes! I would like a HWRAID deep dive video : )
Got it. Thanks for the feedback!
Hardware RAID used to be necessary for large data arrays and speed. The major advantage was offloading parity to the controller. It was simple and, with a hot spare, sort of set-it-and-forget-it. ZFS and Ceph are much more CPU intensive, but we aren't working with single-core CPUs anymore. Nowadays I think GUI-driven configuration and management like Proxmox offers is the better solution.
Thanks for sharing your thoughts!
What great content. Thank you, dude, for sharing this useful information with us.
I've always worked with hardware RAID, but I want to test HBA + ZFS.
Glad you enjoyed it. Thanks for watching!
The last point is very very true... Which is also why I have been so slow at migrating my mdadm software RAID to ZFS.
But as I have started using a backup for the backup strategy, I have the other half of it on ZFS and have interacted with it, and operated Linux root on ZFS, all to get familiar with it.
Still, for Windows environments, ZFS isn't a good option yet, so there hardware RAID still reigns supreme.
That's a very sound approach. Thanks for watching and sharing your thoughts!
ZFS pools on TrueNAS storage servers are accessible via SMB shares and still give you all the protection against data corruption or loss (if you provide for that in your storage setup).
This has actually been a lot better in my experience over many years vs. NTFS and FAT systems with Windows.
ZFS runs natively on FreeBSD (TrueNAS CORE) and has been rock solid. TrueNAS SCALE is progressing very fast if you like Linux (which I do, but I prefer TrueNAS CORE since it has been rock solid for years and I already know it... I need to try out TrueNAS SCALE though).
Hey, nice vid... Just a point though about the single most important point to me about RAID systems... Unless something has changed and I've missed it, if you want to expand your RAID array, you have to stick with the same capacity disk (or waste capacity if you get a larger one). If your storage grows over time, having the ability to use dissimilar HDDs allows you to expand as capacity goes up and price comes down. Your older, smaller disks naturally get retired as they fail/get replaced. I changed to HBA/Unraid a decade or so ago and I've never regretted it for a second. It may be an Unraid thing, but I tried multiple RAID cards in JBOD mode and never had any success. When I think of the money I dropped on fancy RAID cards, it makes me weep... HBA cards are so cheap secondhand these days, it's almost a no-brainer. Oh, and whatever direction you go in, if you value your data you NEED a backup solution! Personally, I've gone the LTO tape way...
The hardware RAID controllers that I've used are limited to the least common size of drive as you describe.
Software RAID is not, though. Basically, ReadyNAS and Synology (maybe QNAP too) partition your drives to make several RAID sets using common sizes.
For example, suppose you have 6 drives: (3) 6TB, (1) 3TB, (1) 2TB, and (1) 1TB.
All 6 drives get a 1TB partition = 6*1TB, then double parity ("RAID 6") = 4TB usable
5 drives get another 1TB partition for 5*1TB double parity = another 3TB usable
4 drives get another 1TB = 4*1= 2TB usable
3 drives still have 3TB free each so that's 3*3TB = +3TB usable
So, that mixed set with double parity gives you 4+3+2+3= 12TB data.
Replace the 1TB drive with a 3TB drive. Now you add a partition on the new drive to the second TB set and the third TB set, so your new total is 4+4+3+3= 14TB data.
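The layered-partition idea above can be captured in a few lines of code. The sketch below only illustrates the greedy "peel off the smallest remaining layer" scheme described in the comment (it is not any vendor's actual algorithm), and it reproduces the 12TB and 14TB figures.

```python
# Illustrative model of an SHR-style layered layout with double parity.
# Not any vendor's real algorithm; it just reproduces the arithmetic above.
def usable_capacity(drive_sizes_tb, parity=2):
    remaining = sorted(drive_sizes_tb, reverse=True)
    usable = 0
    # Keep peeling layers while enough drives remain to hold data + parity.
    while sum(1 for r in remaining if r > 0) > parity:
        live = [r for r in remaining if r > 0]
        layer = min(live)                       # size of this layer's partitions
        usable += (len(live) - parity) * layer  # data columns * layer size
        remaining = [r - layer if r > 0 else 0 for r in remaining]
    return usable

print(usable_capacity([6, 6, 6, 3, 2, 1]))  # -> 12 (TB), as in the example
print(usable_capacity([6, 6, 6, 3, 3, 2]))  # -> 14 (TB) after the 1TB -> 3TB swap
```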
Incidentally, Drobo and Compellent take a different approach. They divide every drive into small zones, say 1GB size. Then they maintain a bitmap of all the zones in the set, and distribute data across the bitmap. There's another layer where they manage the actual drives and track status etc. The result is that the total storage pool is more flexible and if it understands the filesystem, it manages data integrity and parity at the data level rather than blindly calculating every sector on every disk.
There are drawbacks, mainly cost, but I think it's a really neat approach.
Thanks for sharing. I agree, RAID redundancy is no substitute for backups. Have that on my list of future video topics. It's good you found a solution that works for you. Thanks for watching!
Thanks bro for this video; it is very detailed and very true. Old technology vs new. In my case I use both; I'm always learning from practice.
Glad it was helpful!
very sound advice to favor RAID for unsophisticated staff. the rebuild in RAID is basically automatic; just pop in a new drive and a day later all is back to normal.
Yes, I believe a tool is only as good as the skills of the person using it. That said, RAID rebuilds are costly and put a heavy burden on the entire array. ZFS rebuilds (resilvers) are data aware and only rebuild what is needed, so they can minimize the load on the system during the process.
You ROCK!!! Thanks.
Thanks! Happy holidays 🎊
I had a PERC 6/i RAID controller, and I bought a PERC H700 card because the 6/i doesn't handle drives above 2TB. However, out of the 9 drives that I had, the H700 could only make a virtual drive for 1 drive.
You recommend RAID, but with my experience so far, I am gonna try HBA out because I can't get anything to work with a RAID controller.
I don't actually recommend one or the other. I recommend what best suits the need and the person responsible is most comfortable using when there's a bad situation.
Not sure why your H700 only allowed you to create VD with just 1 drive. Could be that some of the other drives had some issues? If you still have the H700, make sure it has the latest firmware update.
Can Seagate Exos SATA drives connect to an LSI SAS controller?
Most SATA-3 drives can connect to LSI SAS-2 or SAS-3 controllers just fine. However, there are some drives with known firmware bugs or other issues. Seagate drives seem to have the most such issues. If the Seagate drives you have don't have such problems, it should work just fine; otherwise it's a guess in the dark.
Hi, great video... I'm new to this subject and I'm glad I came across your channel.
I was about to buy some cards blindly, and now this video makes me reconsider my setup.
I have a question though: how does an HBA compare to SATA PCIe expansion cards? Everybody says they are rubbish and that I have to go for enterprise-grade cards, but they don't say why?!
I'm planning to use it for Windows Storage Spaces (SSD pool drives).
Glad it was helpful! With SATA controllers, you need to find good ones because there are a lot of poorly designed SATA controllers where there may be a lot of SATA ports, but they didn't design it with enough PCIe bandwidth to support that many SATA ports. Or, alternatively, the SATA controller chip cannot handle the I/O at load. There are probably some good ones out there, but it's very easy to find SAS controllers that are well designed.
@@ArtofServer thank you.
I have a T7910 with an internal SAS3008 RAID card running free ESXi. I'm not using the RAID features, i.e. no RAID levels configured. In that case, are there any advantages/disadvantages to flashing the RAID card firmware to IT mode? Rgds.
That's a good question. The answer depends on your software. Some OSes or applications don't have the code to access the SMART data behind a RAID card driver like the megasas driver and expect an HBA driver like mpt3sas. In that case, you would be better off flashing that SAS3008 to IT mode. But if your OS is fully capable of performing its regular duties even when using the megasas driver, then you are fine as-is. I talk a little bit about such issues in this video: ua-cam.com/video/BgOcCCAzHiY/v-deo.html
Tip: use RAID 0 for all disks individually, and then use ZFS so that it saves parity for those RAID 0 disks on a different disk: a parity backup that can be used to recover from a disk failure on a RAID 0. The config can be daunting but is doable.
you mean like in this video: ua-cam.com/video/S_YN1vluLws/v-deo.html
Thank you for this fine video/tutorial! 🇺🇸 👍☕
Glad it was useful!
What about performance? Comparing RAID 6 with ZFS raidz2 with the same number of disks.
And is it right to say that if you need to expand the vdev you need to create a new one?
With RAID I know you can expand with more disks and get more performance by using more disks in the same VD.
Yes, I've been thinking about a performance comparison as well! However, I haven't decided what type comparison would make the most sense? Something like ZFS is heavily influenced by the speed of CPU and RAM since it depends on it. So, the results can be very different depending on the choice of CPU and RAM. Also, what would be a fair choice for RAID controller to compare with? The SAS2108 mentioned in this video? Or, a newer SAS-3 RAID? But then, do we test SAS-3 SSD array vs SAS-3 SSD zpool? In that case, do we use ZFS default settings or special tuning for SSD? So many variables... it's not a simple comparison.
ZFS currently is not able to expand a vdev for raidz type. You can add to mirrors for more replication. If you need more space in a ZFS pool, you can increase by adding more vdev to the pool, but it does not automatically rebalance the old data across the vdevs.
Modern RAID controllers can expand RAID5/6 volumes, but it is incredibly slow.
Thanks for watching!
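To make the "grow by adding a vdev" point above concrete, here is a tiny sketch, with the pool and device names as placeholders. Note, as mentioned, that existing data is not rebalanced onto the new vdev automatically.

```python
# Sketch: growing a pool by adding a second raidz2 vdev (names are placeholders).
# Existing data is NOT rebalanced onto the new vdev automatically.
import subprocess

subprocess.run(["zpool", "add", "tank", "raidz2",
                "sdg", "sdh", "sdi", "sdj", "sdk", "sdl"], check=True)
```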
The most high-end way I've seen is usually a symmetrical mirror of SSDs, but if on a budget, a mirror of two striped arrays (a 3-disk stripe and a 2-disk stripe) goes as fast as a stripe but is fault tolerant. Ensure no "failing" Western Digitals, as this messes with Linux a lot. Then one can be a fast, dangerous stripe (nested 2 levels of ZFS). When a stripe inevitably dies, it does so quickly, and there are no rebuilds thanks to using a mirror instead of RAID/parity. Less efficient in space, faster recovery time. Use RAM for the read cache and SSD/NVMe for the intent log to top it off.
@ArtofServer Sir, Thank you very much for this video! :)
Hope it helps! :-)
@@ArtofServer Yes it does. Oh man, I wish you were closer, at least in some country in Europe :D
Is it possible to ask you some questions in private, like over email or something?
A raid card is like an electric screwdriver. An HBA is like a screw bit. ZFS or software RAID is like a drill.
If you have ZFS (a drill), you want an HBA (a bit). If you don't, you may want a RAID card (an electric screwdriver)
That's not a bad analogy!
Could anyone weigh in on the 2TB limit? My end goal is 24 x 20TB 3.5" SATA drives. Right now I have a Dell PowerEdge R730 which only has 8 bays, all of which are full. I'm looking for a new chassis, and of course an HBA, that can support 24 drives with 20TB capacities.
See this video ua-cam.com/video/u55vIGMzzKw/v-deo.html
So JBOD = HBA, and we don't need to flash RAID cards that support JBOD to IT mode?
From a hardware perspective, that's true, but you'll need to enable JBOD mode in your RAID controller and configure JBOD drives. However, from a software perspective, some are more particular. For example, watch this video: ua-cam.com/video/BgOcCCAzHiY/v-deo.html
We seek your kind permission to reproduce this content in our local language. Your content is great and deserves to reach people who are not that conversant in English.
If you can translate to your native language and send me the subtitles, I can add them to my videos so people can read the subtitles in their native language.
@@ArtofServer Where am I supposed to send the subtitles?
My contact info is on the "about" tab on my channel page.
I've seen these 10-port SATA PCIe cards. Would these work as a ZFS HBA card? I literally have no clue.
If you're talking about those 10 SATA port cards with a PCIe x1 or x2 connector coming out of China, well, so long as your OS has the driver, they can sort of "work". However, you can imagine the I/O from 10 HDDs being squeezed into that PCIe x1 or x2 connector isn't going to be a pretty sight.
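As a rough back-of-the-envelope illustration of why that squeeze hurts (assuming a PCIe 2.0 x1 link with roughly 500 MB/s usable, and modern HDDs that can each stream around 200 MB/s; both figures are approximations, not measurements):

```python
# Back-of-the-envelope math for 10 HDDs behind a PCIe 2.0 x1 SATA card.
# The 500 MB/s and 200 MB/s figures are rough assumptions, not measurements.
link_bandwidth_mb_s = 500          # ~PCIe 2.0 x1 after protocol overhead
drives = 10
per_drive_sequential_mb_s = 200    # typical modern 3.5" HDD, outer tracks

demand = drives * per_drive_sequential_mb_s           # 2000 MB/s wanted
per_drive_when_saturated = link_bandwidth_mb_s / drives
print(f"aggregate demand: {demand} MB/s vs link: {link_bandwidth_mb_s} MB/s")
print(f"each drive gets ~{per_drive_when_saturated:.0f} MB/s when all are busy")
```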
Your channel is the shit. Thank you.
I hope that's a compliment? not sure... ? :-/
Drive Bender for windows and for the win :)
If you haven't already, you should make a playlist of your videos on drive bender. Thanks for watching! :-)
How can we have dual HBA controllers for redundancy ?
You can, but you need to have a dual port backplane, and either SAS drives or SATA drives with SAS interposers.
@@ArtofServer do you have any video on how to do this ?
If you have only RAID cards and not HBA ones you can always make RAID cards act as HBA ones by setting up JBOD mode or RAID0 on all individual disks and then make a ZFS pool on them. It's not as fun as having HBA card but still.
yes, I've demonstrated that in the past in this video: ua-cam.com/video/S_YN1vluLws/v-deo.html
thanks bro
Glad this was helpful! Thanks for watching!
Man feels like I need a degree
nah, it's not that complicated. :-)
@@ArtofServer lol I’ve got a few buddies in storage and I tell you lol I know nothing lol just enough to get things running
Nice vid. Thank you.
Thanks for watching!
informative video
Glad this helped you! Thanks for watching!
dont forget there are FC HBAs!
Haha true. For the home server audience, though, fibre channel might not be relevant.
If they weren't so loud and hot, I'd love a FC SAN at home, maybe Compellent with a few drive shelves...yum!
Great info and well delivered. But.... way toooooo many commercials.
Sorry about that. YT has been aggressive with pushing ads lately, especially on my more popular videos. :-(
ZFS still can't even add a device to a vdev; this is still the only major issue with ZFS IMO.
That is true, but it's work in progress.
@@ArtofServer Honestly, with what it does above and beyond other file systems, it's something I'm willing to deal with; it still drives me a little nuts though.
I understand your point, BUT ZFS on top of RAID, or just plain RAID, is not nearly as robust or secure as ZFS. Here is why...

First, if the RAID card fails, you lose the RAID array structure. It is not that likely, but I have seen it happen. 2nd, the drives are tied specifically to that RAID card and that machine. Many techs don't realize this, but you can remove ZFS pooled drives from one machine, RANDOMLY place them in a new server, into ANY physical position, and as long as that server can read the drives, you can import the pool and you are back in business. The drives are individually marked as to their ZFS membership. 3rd, RAID cards are not (easily) upgradeable. You are limited by the processor and RAM installed on that RAID card. ZFS RAID relies on the server's CPU and RAM and can be tweaked, so technically it can be MUCH faster than a RAID card. 4th, ZFS RAID is much more robust. It is DIFFICULT to destroy a ZFS pool; if you put a ZFS pool member in a different machine and try to add it to an existing or new pool, it will warn you AND even tell you the pool name that the drive previously belonged to.

Additionally, ZFS drives don't have to be the same brand, type, form factor, or size, i.e. you can mix and match SAS, SATA, 2.5", 3.5", SCSI, etc. Not always a good idea for performance, but in a pinch you can stick in a SATA drive to replace a failed SAS drive until you get a SAS replacement. I LITERALLY did this just today. ALSO, ZFS pools can be more easily expanded.

On the other hand, with hardware RAID it is really easy to "mess up" and lose your RAID. PLUS you can't always manipulate the RAID from within the OS; it requires a reboot into the RAID BIOS to do anything. WITH ZFS, the OS is aware of it and can control it. You NEVER have to STOP your machine to manipulate your ZFS pools. This is CRITICAL for our VE platforms where we are running dozens of mission-critical VMs and containers.

So IMO, you are MUCH safer using ZFS pools than hardware RAID. Yes, there is a learning curve, BUT if you can get a low-level tech to swap a drive out physically, and they aren't savvy enough to rebuild the pool, you can always get a tech to remote in and do it... like I am doing right now! The only thing I would stay away from with ZFS is boot drives. Either do a hardware RAID or NVMe boot, or a small separate ZFS mirror for your boot, but your primary storage, which does all the work, should be a separate ZFS pool. I have learned this from MANY years of experiences, good and bad.
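The pool-portability point (#2 above) boils down to an export on the old host and an import on the new one. A minimal sketch, assuming OpenZFS on both machines and a pool named `tank` (the name is a placeholder); the two steps run on different machines and are shown together only for illustration:

```python
# Sketch: move a ZFS pool between machines (pool name is a placeholder).
import subprocess

# Step 1, on the old machine (if it is still alive): cleanly export the pool.
subprocess.run(["zpool", "export", "tank"], check=True)

# Step 2, on the new machine, after cabling the drives in any order/slots:
subprocess.run(["zpool", "import", "tank"], check=True)
# If the pool was never exported (old host died), a forced import may be needed:
# subprocess.run(["zpool", "import", "-f", "tank"], check=True)
```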
I'm not sure what you are referring to? I never talked about ZFS on top of RAID?
Also, I think you'll find this video interesting: ua-cam.com/video/6EVjztB7z24/v-deo.html
Easy to understand video on UA-cam...
Glad it was helpful! Thanks for watching!
Art of Server : the next generation 😂
LOL
It's not that simple... HBA is a more demanding solution in terms of CPU power. You can easily get around 50% CPU utilization on an HBA-based array.
So you need one computer for the HBA array only. It's good when you build your own NAS. But RAID controllers are good when you do some server stuff with some redundancy. So in my opinion... HBA => NAS, RAID => server operating systems.
I don't understand your logic. A "NAS" is a "server operating system" for a specific purpose. So, by your logic, both "HBA" and "RAID" controllers are suitable for "NAS", which is a server operating system. ;-P
you saved me 1 hr on Google, thanks
glad to hear it! thanks for watching!
I will choose RAID. The ZFS recovery process is totally a disaster.
Sounds like you've had some painful experiences?
All I wanted to do was set up a little plex server, but here I am, balls deep in some advanced server shit which only confuses me more, lol
Long story short: a RAID card does HW RAID and an HBA does SW RAID... All looks fine as long as you don't use VMware, because ESXi doesn't support SW RAID at all...
A RAID controller is a "single point of failure". If the RAID controller fails, your data will be lost. You need to have redundant RAID controllers.
That would require a more complicated ring topology SAS setup. I don't think that applies in this discussion.
Teeeeechnically, there is no such thing as "hardware RAID" since all RAID is software. RAID cards just have dedicated hardware for running the proprietary software that calculates the RAID.
By using proprietary software, "hardware RAID" can create vendor lock-in, with that specific vendor, or even the exact card, being required to even access the data. Something like ZFS doesn't have this.
Stay tuned for a video showing how the vendor lock-in is a myth.
smash thumb down button twice, lol
;-P
Hi, I am deploying ZFS storage and I don't know whether to choose OmniOS or TrueNAS... high availability is the top priority, because I haven't found an optimal HA solution for ZFS. I want an HA controller; can you give me a suggestion? Thanks.
Saying "HA is top priority" sounds nice, but you have to be mindful and think about specifically what type of situation you are protecting against. Going "HA" is not a generic term to just have a resilient server or service. Any "HA" related components are only counter measures against certain types of failures and risks, but will not cover all risks. So, any "HA" discussion should always start with "what type of risks do I want to protect against and why? And what type of risks will I consider acceptable and why?" If you're not sure, you could start by tabulating a list of risks, then assign some sort of value scale on the impact of the risk, and also assign a scale on the likelihood of occurrence of the risk event. Then start sorting the list by those criteria and see what are the most high risk events that you want to protect against. Then you can start thinking about what type of "HA" you need to protect against that. This is at least, how I advise clients when they have interest in HA.
I want data protection; this is the top priority. Please guide me on how to implement this. Thank you and good health.