Dude I love BTRFS, they're my favorite boy band right now
As a Storage Architect for a living, I liked what you did here! One thing I didn't hear mentioned... The IronWolf Pro drives have a chipset that talks natively to Synology NAS chassis to prevent harmonics across spinning drives, and they provide a far more detailed drive health data set. Also, I found when I put in 2x 1TB M.2 drives for cache (be sure to use NAS-rated M.2 drives; standard M.2 drives for laptops and the like aren't built for that type of data load and stress) that the vast majority of the storage I accessed frequently was stored on the solid state drives, and I maintained incredible performance consistently. Hope this helps. I assume you're now using SHR?
Jeff, I'm so glad that you and your Dad get to work together. He's a funny fellow. I have never gone back to Seagate drives. They let me down years ago in my home lab, and at work one after another, whereas my experience with Western Digital has been totally the opposite. Of course, if Seagate wants to send me 4 hard drives I'll be happy to test them out for a couple of years. Just sayin'!
I was indifferent between the two, but the WD CMR/SMR scandal really put me off them
Agreed. Seagate is crap.
Western Digital and Seagate drives have let me down in the past, but WD has let me down less
I had a Seagate arrive two weeks ago DOA and am still waiting for a replacement. 😒 ... second bad experience with Seagate.
Almost grabbed a WD drive for my upgrade but was sad to see all the WD reds in my capacity and price range were SMR drives.
We like to call it Butterfuss. ;)
Either way, if anyone has questions, comments, praise and/or criticism, feel free to reply!
I was thinking about using the WD Red Plus 12TB due to its lower noise rating, but thought about using the Exos or standard IronWolf NAS disks of the same size. The large-capacity Exos and IronWolf were both up to 32-34 dB on their spec sheets, and then I saw the higher write speed of 270 MB/s and a top operating noise of 29 dB on the new IronWolf Pro (the older models were as loud as the Exos) and it sold me on it. Then it was just about gaming out the price per TB, which made the 16TB the best value by size.
Wow! A father/son radio station! How cool is that?!?!
I ran BTRFS with 4 drives in raid1 mode for metadata and data until 2 years ago. After a power outage the FS was not mountable anymore. I recovered everything from that BTRFS array successfully but couldn't trust the setup anymore. So now I'm running ZFS with two 2-drive RAID1 groups, which has about the same features and is essentially considered stable, especially with ECC RAM.
I loved the greater flexibility of BTRFS (shrink filesystem, reduce/rearrange drives, ...) initially, but it is no use if I can't trust the system. And don't get me started on backup :)
Good enterprise or NAS drives are def worth the price if you care about your data, not to mention they usually last longer, so the cost amortization is likely fairly close to cheapo drives over 3 to 5 years. I've had great luck with the WD Ultrastar (formerly Hitachi/HGST) drives in my little home Ceph cluster. When I first built it, I had some WD Blue drives in there... probably one of the worst options just short of WD Greens. lol. Besides being pretty slow, they also aggressively tried to go into power save mode and had a very high load cycle count. Had two of them die along with a couple non-pro (5400rpm) WD Reds. Massive speed improvement with good 7200 rpm NAS/enterprise drives, and have had no failures yet.
I'm happy with the Toshiba drives I've got in my workstation. I was happy when the recent hard drive reliability studies showed Toshiba to be quite reliable, though I don't know if it went down to specific models (I've got 2x4TB X300 Pros). Really nice and affordable supplement to my NVMe drives.
Not an ASUSTOR customer; I run Netgear with BTRFS. Been using it in RAID 6 for years with no problems. My ZFS on TrueNAS Core works well too, but I have less time on it, and it's on pretty weak hardware.
@Geerling Engineering: what's that white "wall-e look-a-like" gizmo on the bottom of the screen at 01:11?
Ha, we'll need to do a 'Dad's desk tour' soon, and he can talk all about the random doodads he's accumulated over the years!
That would be an old USB 2.0 hub!
Was going to ask if you saw his video. I thought he was liking zfs though. I'll have to watch that again.
Yeah; Wendell mentioned he likes some aspects of Btrfs, and some from ZFS, but didn't give a very strong opinion of one over the other (though it seems he is mostly sticking with ZFS these days).
Nice video. Am running a DS920 with 42TB of Seagate drives. Just ordered another IronWolf Pro 16TB for $299 as a backup. Curious as to why the Pro was cheaper than the standard IronWolf! Love your setup!
I'll never understand pricing-sometimes the Pro lineup is cheaper, sometimes not. Seems to just be an inventory thing.
@@JeffGeerling You ever have a hard drive that groans every now and then? I have one in a DS920 and can't tell which it is. Guess I'll just wait till it fails :)
Nice upgrade, but the question in your title is not really answered, I think. Is the difference between the drives really worth it?
In this case, I think the radio station would've been okay with the slower drives, especially considering they're going to be stuck at a gigabit on the network for some time anyways.
The major advantage would be the 5 year disk-replacement warranty and longer rated lifespan per disk, which is nice for a place that can't afford to spend the time checking up on their gear as much as a more IT-dedicated place.
But in the end, I think we'd be happy with non-Pro drives. 8TB usable storage would've still been enough for the station for at least a few years-this upgrade will make it so they probably don't have to worry about storage *ever* unless they get into video.
For me, was it worth it? Oh yeah! But for a facility like ours the slower drives would be a good choice. I personally would go for the better/faster drives to get the better specs and call it "future-proofing". Same for selecting the drives: if your facility is good at replacing systems like this on a 4-year cycle, make your best guess on size and get the speed that will still feel good after possible network upgrades, etc. over the 4 years it will be in service. I'm looking at connecting two computers directly to the back of that unit. (One might be mine :)
@@GeerlingEngineering Now that they have much larger storage, what about their backups? Are they big enough to hold all of that now and in the future? I smell another video coming up. lol
@@Darkk6969 Indeed you're right! We hope to cover backups in the future. Right now it's not as formal as the setup Jeff has for his homelab-it's mostly a couple hard drives that are swapped out offsite.
btrfs RAID 10 is perfectly fine in my experience; it's RAID 5/6 that you'll have problems with.
I still make the mistake of referring to HD/SSD sizes in gigs, too. Not from being old, for me (though I'm definitely not young anymore), but because I'm a home user and still getting used to counting in TB.
Real conversation between me and a friend.
"So yeah, I upgraded to a 1 gig NVMe drive, moved the OS and programs onto it from the 1 gig SATA SSD, which is joining my 2 gig hard disk for media and games."
"Gig?"
"You know what I mean."
Just think, in 10 or 20 years we'll start talking about Petabytes when our kids want to download a game and they need more than a Petabyte of storage for it :D
@@GeerlingEngineering that game better look more realistic than reality
@@GeerlingEngineering Yeah, we haven't half come a long way since my first computer from the '80s. It could store about 130KB on each side of a C60 tape. C90s did work, but they had a tendency to stretch and become unreadable.
Floppies were available, of course, but the price (the drive would've cost more than the rest of the computer put together) meant they were very much enterprise grade.
@@WhenDoesTheVideoActuallyStart Heh no it'll just be because some silly developer forgot to compress all the TIFFs and so there's a petabyte of assets that should really be like 1 GB.
@@JeffGeerling Deliberately done so you upgrade, newer software has to use more resources to keep everyone in a job.
YES BTRFS OMG, I'm so happy I cried. Ya, so just FYI: using BTRFS for raid5/6 is actually fine for data as long as you use a higher RAID level for metadata, like raid10 or raid1c3. Just never use btrfs raid5/6 for *metadata*.
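For anyone curious, that data/metadata split is set at filesystem creation time. A minimal sketch of what that might look like (the device names are hypothetical, and mkfs.btrfs wipes whatever is on them):

```python
import subprocess

# Hypothetical device list; the raid1c3 metadata profile needs at least 3 devices.
devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

# Data as raid5 (striping + parity), metadata mirrored three ways (raid1c3):
# the combination the comment above recommends, keeping metadata off raid5/6.
subprocess.run(["mkfs.btrfs", "-d", "raid5", "-m", "raid1c3", *devices], check=True)
```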
When do you think there will be 10TB SSDs and above?
Test the noise. It's important in many environments, especially with the 2-4 bay NAS units popular with home users that the non-Pro drives are designed for; that's the downside. You can sometimes pick up the even higher-end Exos enterprise drives cheaper than the IronWolf Pro.
I use the 10TB Pro version; it's very, very good and fast too
why not freenas/truenas?
RAID 10 is the RAID type. Btrfs is a file system, like NTFS or ext4
ZFS FTW. 😀 I built a TrueNAS Scale box last fall, using some junk 250G drives I just had in a pile in the corner. I decided it was worth keeping around, so I found some 4TB WD Red (CMR, not SMR) for $68 each and upgraded it. I just went through the usual ZFS "remove, replace, resilver" process and it only took maybe 5 minutes each to resilver the drives (I only had a total of ~200G on the thing, so it was a pretty trivial rebuild).
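The "remove, replace, resilver" routine boils down to a couple of zpool commands; a rough sketch (the pool and disk names here are made up):

```python
import subprocess
import time

pool = "tank"            # hypothetical pool name
old, new = "sda", "sdb"  # hypothetical old/new disk identifiers

# Swap the old member for the new one. ZFS resilvers only allocated blocks,
# which is why a pool holding ~200G rebuilds in minutes regardless of drive size.
subprocess.run(["zpool", "replace", pool, old, new], check=True)

# Poll status until the resilver completes.
while "resilver in progress" in subprocess.run(
        ["zpool", "status", pool], capture_output=True, text=True).stdout:
    time.sleep(30)
print("resilver done")
```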
Running TrueNAS Core with four 4TB WD Reds. When one of my 3TB WD Reds failed I decided to go ahead and replace all four 3TB Red drives with 4TB Red drives, since they were over 7 years old and had run 24 hours non-stop. Figured it would be a matter of time before the rest of the 3TB drives started to fail. It took me a couple of days to swap the drives out without powering the NAS server down. I made a complete backup before I started the swaps in case resilvering should fail. No issues during the swaps or after.
I'm sure you've written it out somewhere, but I'd love to hear your opinion on it; Why do you use a Macbook over a Windows or Linux device? My main guesses are the combination of wanting a Unix-like environment and not wanting to deal with the Linux maintenance headache, but I don't know if those are major factors and/or if I'm missing anything else.
A little bit of that, but mostly for the hardware aspect-I've tried over and over to find a good replacement for the MacBook Air that runs Linux but is as portable and lasts as long on battery... and there just isn't anything on the market right now.
The Dell XPS line came closest for me but was still lacking in a couple areas (most notably the trackpad). Linux still has a ways to go for gesture support too, but it's gotten better.
@@GeerlingEngineering Thanks for answering!
He has too much money. Apple is a great way to fix that problem.
With networking that slow, the drives probably would have been able to keep up even configured as RAID5, giving an additional 50% available space.
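Where the "additional 50%" comes from, for a 4-bay array (drive size here is the 16TB from the video):

```python
n_drives, size_tb = 4, 16
raid10_usable = (n_drives // 2) * size_tb  # 32 TB: half the bays hold mirror copies
raid5_usable = (n_drives - 1) * size_tb    # 48 TB: one drive's worth of parity
print(raid5_usable / raid10_usable)        # 1.5, i.e. 50% more usable space
```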
I wonder how much longer till the physical size of the drives has to increase?
Well, I guess SSDs will soon store a few petabytes per square inch, and well... DNA can store 215 petabytes per gram.
Are they transferring files to this NAS over a 1 GbE network connection, and thereby ultimately limited to ~110 MB/sec transfer speeds anyway? :)
Yes, in some of the tests. After the first gigabit test, we also tested with two connections to the NAS and got more bandwidth in aggregate.
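For reference, that ~110 MB/s figure is just gigabit line rate minus protocol overhead; a rough Python sketch (the 10% overhead allowance is an assumption, real SMB overhead varies):

```python
line_rate = 1_000_000_000 / 8 / 1_000_000  # 1 GbE raw: 125 MB/s
overhead = 0.10                            # rough allowance for TCP/IP + SMB framing
print(f"~{line_rate * (1 - overhead):.0f} MB/s usable")  # ~112 MB/s
```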
I know it is a limitation of the ASUSTOR, but I would 100% recommend ZFS.
I have been using it for years, and it is way ahead of BTRFS; however, I do know that some like the licensing on BTRFS better.
My servers have survived many different things while on ZFS. Also, as a COPY-ON-WRITE ONLY file system, ZFS provides power loss protection in ways that many OS file systems and other server RAIDs cannot.
are these ironwolf drives great for gaming?
They'd be okay, but I'd stick with something more general purpose and not NAS-specific for an HDD, or buy as big an SSD as you can get for a little more speed.
@@JeffGeerling What about the WD_Black Performance Internal Hard Drive - 7200 RPM for gaming? I'm not too sure on this one.
@@Nathan-ji7qh Yeah, those are probably one of the better options for a gaming PC for extra storage.
RAID with BTRFS? Good luck with that. I would have gone with regular RAID 6
Well, I used to support Netgear NAS devices (up to enterprise class) and they did use BTRFS, which did have a lot of nice features even then. It also had a LOT of headaches, forcing manual maintenance on the FS in a lot of situations. That said, that was 4 or 5 years ago, and BTRFS was more or less new on the block at that stage and has since been improved a lot, though I'd need new intense usage examples to form an opinion on the FS at this time. Back then, for my own NAS, I was sticking to ReiserFS because I had no need for the additional features, and performance-wise it was much faster, especially when you had a LOT of small files on the system. It had, and has, one of the best if not the best mixed file size performance I have seen so far.
Honestly, I'm waiting for what comes out of ReiserFS 5, as from the initial specs it sounds like what I'm looking for nowadays, compared to ZFS, BTRFS, or some MDRAID solution with a "normal" FS on top of it.
We test our products intensively before bringing them to market. For us, Btrfs is stable and we do recommend using it.
@@ASUSTOR_YT No doubts about that, the same degree of testing happened on the products I used to support, but as I said, back then BTRFS was a much younger system which has since had considerable improvements.
That said, trust me when I say that it won't matter how good and deep your testing is: final users will always end up, one way or another, creating unexpected scenarios which were not possible to account for. This is where you will have a chance to shine, depending on how the system copes with these cases and how your user support reacts and helps. I have gone "above and beyond" in support roles (within constraints, of course), which typically resulted in going over the time I was supposed to spend there, but the results spoke for themselves.

For BTRFS specifically, I found myself advising customers to schedule regular maintenance via CRON in some use cases, as the normal way BTRFS does this in the background wouldn't suffice due to the usage patterns involved. These cases would feature some of the following (and in one case all of them): a high number of file and directory structure updates, and complex, multilevel file versioning with branching. This by itself wouldn't normally be an issue, though when these things happened on a relatively short time scale it resulted in the normal BTRFS maintenance schedule not being enough to keep the system responsive and stable. Of course, the fact that these systems were also using BTRFS for RAID didn't help with the CPU load. But I have to admit, at the time this was working way better than I expected for such a recent filesystem and solution. I'm betting this is working way better nowadays. Should actually test this on my DIY NAS :)
*By the way, ASUSTOR, as a long-time ASUS user (and I have to admit that I am a fan), do you need to give me a job working for you in Ireland? Just make sure the job is more technical than bureaucratic! I have extensive experience with networking, hardware, and software on both the client and server side (my experience starts with MS-DOS 3.3 and runs up to our lovely Windows 11; also Novell NetWare 3.x and up, Linux from Red Hat 4.5 and recently more Debian-based, OS/2 2.1 -> 4.5, Windows NT4, 2K, XP, Win Server 2012/2016). And I honestly love my hardware, especially server-grade hardware: HBA/RAID controllers, SAS storage solutions, etc. I love this stuff and basically keep up to date just because I love to; I don't need to, I want to hehe*
Thank you :)
Wouldn't Exos drives be just as good for less money? I have over 580 TiB, so obviously price is important to me, but the warranty on the Exos seems the same as IronWolf, and the IronWolf is much more expensive. What's the difference again? I know, I know... I just need to look it up.
**IronWolf Pro
Ironwolf drives have extra software to help make it easy to monitor a drive's health. They're specifically for NAS devices.
Hmmm… More than SMART can tell you, or just more easily understood? Anyway, for my application it doesn’t make sense, but I can see why that could be useful.
@@glenallan6279 More than SMART. It has better monitoring than SMART. Measures temps, disk writes, shock, and more. Keeps tabs on its history. It's easy to use and understand. Monitors more potential failure points.
Temps and disk writes are a part of SMART already, and third-party software can monitor that. Shock issues are only an issue if you have bad mounting or move the drives a lot, which you should already not be doing. Everything I've read about IronWolf Pro vs Exos is that Exos is the superior drive. I don't know anyone in my field who would choose an IW over an Exos. Specs are better on Exos, infinitely scalable, etc. The bells & whistles on the IW Pro seem to largely be fluff and marketing to make people feel more secure. But you know, if you don't have the best backup strategy and need data recovery and aren't scaling, maybe that failed drive can be recovered? I don't know... I just don't see the value add being enough to justify the added cost.
And really, it's going to be application-specific anyway, and for me that added cost makes no sense. But I do understand peace of mind, even if it's not really a guarantee at all. Perception is important. And maybe having that extra guarantee helps with insurance claims?
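For anyone who wants to see what plain SMART already exposes, smartmontools can dump the attribute table; a minimal sketch (the device path is an assumption for your system):

```python
import subprocess

# smartctl -A prints the vendor SMART attribute table: temperature typically
# shows up as attribute 194, and on drives that report them, total LBAs
# written/read as 241/242 - much of what vendor "health" tools surface.
subprocess.run(["smartctl", "-A", "/dev/sda"], check=True)
```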
Are you 100% sure that btrfs isn't using mdadm to create the RAID, which it does on Synology? The only reason for using btrfs would be the snapshots; otherwise ext4 is just as good.
From a lot of what I have heard, BTRFS is fantastic as a single-disk FS, but it can fail in some edge (and not-so-edge) cases for multi-disk configurations.
That was some of the chatter that made me do a double-take when we were setting things up. Since the radio station doesn't have a storage engineer on staff and I'd be on-call for any weird problems, I decided to stick to the tried and true, simple RAID 10. Backups will be the main safety factor here, in either case, but I'm not as familiar yet with Btrfs to recommend it for a hands-off environment.
Depends on how the RAID is managed. Btrfs' native RAID manager is not ready for prime time yet; it's really unstable. But if you manage the RAID with mdadm and then use BTRFS only as a file system, it works fine. In fact, that's what Synology and ASUSTOR do.
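If you can SSH into such a NAS, there's a quick way to confirm that layering; a minimal sketch, assuming a Linux box with md arrays assembled:

```python
# If /proc/mdstat lists active md arrays, the RAID layer is mdadm, with Btrfs
# sitting on top as a plain file system - the arrangement described above.
with open("/proc/mdstat") as f:
    print(f.read())
```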
I thought btrfs RAID1/10 worked reliably and it was just 5/6 which has issues.
@@eDoc2020 All raid is reliable until it's not ;)
At this point, RAID 5/6 or 1/10 are all very reliable on most modern systems (no excuse for not having a backup though). And sometimes Btrfs concerns are overblown. There are a lot of people who are a bit too much in the 'my favorite option is better than yours' crowd-zillions of servers are running all of mdadm RAID, Btrfs, and ZFS nowadays, and the only people who have major disasters are the ones who don't have a good/tested DR plan.
That's probably 10x more important (and more often neglected) than the choice of Btrfs, ZFS, or plain ol' software or hardware RAID.
Modern NAS-spec drives do not have the endurance their counterparts featured in the past. You had better use datacentre-spec drives, which are usually in the same price range.
The IronWolf Pro line is built for anything short of monstrous enterprise storage servers. You can jump up to Exos, but that's another leap that is probably not necessary for smaller storage servers unless there are specific features you need in that line.
EXOs drives are cheaper. Maybe there was a time when they weren't, but they have been for quite some time. I don't see the value in the IW drive line unless they actually cost less.
@@glenallan6279 EXOs drives are also noisy which would be distracting in a radio station.
Personally, I believe it was a good idea to ditch btrfs. I've been following btrfs progress since around the time it was included in the kernel, some 13 years ago, and back then it really looked promising. All it needed was to stabilise a bit. Well, it's been 13 years and it works, somehow, but it's still haunted by bugs of different sorts. The RAID[56] they talked about 10 years ago as being almost finished is still almost done. It's a bit like waiting for fusion power - it's always 30 years away. It seems bcachefs will outrun it, and ceph already has, as has zfs. I guess the reason ASUSTOR can't support ZFS is legal: ZFS isn't GPL, but CDDL, so it can't be distributed in binary format with the Linux kernel (which is GPL). As for that nice ASUSTOR NAS, I'd recommend getting a cheap, second-hand 1U or 2U server with room for enough drives, including perhaps an SSD or two for caching, and just running plain Linux with ZFS or whatever filesystem you'd like on top.
PS: I got a couple of new drives some months back, 16TB Seagate Exos X16 with helium atmosphere. They are very silent, the fastest 3.5" rotating drives I have seen so far, and AFAICS cheaper than the ones you're using right now. I guess the ones you got will do, though, since the bottleneck is, after all, on the network. Also, with RAID10, rebuilding won't take as long as with RAID5 or RAID6. With drives this size, RAID[56] isn't of much use, since it'll take weeks to rebuild when (not if!) a drive fails.
roy
They could, they'd just need to negotiate with Oracle.
@@zxcvb_bvcxz Oracle isn't relevant here - they dropped the project and it's open source now.
"Pro nas" drives mainly have higher RPM then basic NAS drives. And that can be good (better seek times, transfer speed) but for most people is a negative (higher noise, vibrations).
Most consumer drives are limited to how many you can have in a chassis. WD does not recommend more then 6 or 8 in the same case, as they dont like the vibrations - read speeds are affected but more critical the lifespan of the drives.
Real enterprise or "pro pro" drives dont have those limitations but they also work with other requirements. Noise, heat is not relevant mostly.
You forgot the Exos series, which is enterprise class with a much higher MTBF than these "IronWolf Pro" drives, and is also a lot cheaper
I'll actually be testing out some Exos drives in an upcoming video on my main channel. Exos and Ironwolf Pro seem pretty similar on many fronts, though.
@@JeffGeerling Oh nice, then I will wait until that video :) Looking forward to it...
Yeah, but they are a lot cheaper... like an 18TB IronWolf Pro costs 400-500€ and an 18TB Exos like 300€. And they are rated for 270 MB/s sustained and have a 2.5 million hour MTBF. And they are enterprise class, so they can handle a lot more drives per rack; the IronWolf Pros are only rated for, I think, 16 drives per rack or device, and the Exos for a lot more per rack
Yes! The Exos drives are the shit. I'm running 53 of them, 28 of which are 14TB and have been running for 2 years with no failures. Much better price, no fluff.
If you’re using drives larger than 4TB, you should only be using RAID6.
Why?
If you lose a drive in RAID5 and have to replace it with a hot or cold spare, the length of time to rebuild the array is so long that you risk losing another drive - thus all the data - since a rebuild is pretty stressful to the array. If one drive failed, then there’s a likelihood another will also fail since they’re the same age.
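A back-of-envelope sketch of that rebuild window (the 200 MB/s sustained figure is an assumption; arrays under load rebuild much slower):

```python
capacity_bytes = 16e12                      # one 16 TB member
sustained_rate = 200e6                      # assumed average rebuild throughput, bytes/s
rebuild_hours = capacity_bytes / sustained_rate / 3600
print(f"~{rebuild_hours:.0f} h best case")  # ~22 h; days are common on busy arrays
```

That whole window is spent with zero redundancy on RAID5, which is the argument above for RAID6's second parity drive.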
I'm thinking of buying a new laptop, any clue how to make sure you can fit one of these into it?
Actually, it's 15 times more storage, otherwise it'd be 34 TB.
"Why do they used to put stickers on those?!" 🗨😏💮
SATA over USB, be it 3 or any other, is a poor idea. Use a real SATA controller for proper performance tests (eSATA will do the job as well)
The performance is not just measured in bandwidth, and USB->SATA will definitely impact other performance metrics substantially.
For sure; in this case I only wanted to test sequential performance since that will be the primary factor in the radio station's use case. For that, the overhead introduced in the USB to SATA conversion is almost nothing.
For random IO (especially small files), it does make a very significant difference.
Man, you can't do this over USB; too slow a bus
Buy EXOS drives. They are cheaper and better. I picked up (2x) 8TB for $281.
Stop shaking these bloody hard drives!!! :)
Resyncing, or as it's better known, resilvering
You’re making me nervous by waving those expensive drives around…
1 million hours of service - how do they get that number? A year has 8760 h, and 1 million hours is more than 100 years! And you're extremely lucky if your drive lasts 5 years
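For what it's worth, MTBF is a fleet statistic rather than a single-drive lifespan; dividing hours-per-year by the MTBF gives the expected annualized failure rate:

```python
mtbf_hours = 1_000_000
hours_per_year = 8760
afr = hours_per_year / mtbf_hours  # expected failures per drive-year
print(f"AFR ≈ {afr:.2%}")          # ≈ 0.88%: run 1,000 drives, expect ~9 failures a year
```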
The improved throughput is likely from the 16tb drives being newer and having higher areal density. The "Pro" line gives you a longer warranty. And based on your disclaimer with other reviews, we're to infer that since these products are provided to you for free, your opinion can't be trusted as unbiased?
We mentioned up front that the video contains sponsorship/paid promotion (though there was no money involved-only the drives and NAS), and I mentioned in the beginning of the video that both were provided to us.
Neither ASUSTOR nor Seagate were given any input into the making of either of our videos on the equipment, but yes, there is no possible way to be completely bias-free when equipment is provided for the purposes of testing and review-anyone who says otherwise is crazy!
We're quite up-front about it, which should be helpful to anyone who is coming to this video wanting to learn more about the products.
We actually have an interest in Jeff speaking his mind. Jeff, with his Linux kernel knowledge, is best suited to tear our NAS apart and rip us apart if we were doing shady things. We want to hear what Jeff thinks. Many of the things Jeff and commenters say to us are taken to heart, and I try to give every opinion an audience. We love criticism! If I wanted someone to parrot selling points like a puppet, I wouldn't have sent it to Jeff Geerling; I wouldn't send it to someone who can Ansible his life if I wanted a puppet.
We want to do better. And this means I have to have lots of difficult conversations with management in this commitment. Your (and everyone's) opinions help immensely. -Marco
@@ASUSTOR_YT In that case, why don't you guys support zfs?
@@kchiem ZFS was actually one of our original goals upon founding the company in 2011. At the time, we had major difficulties with getting it to work with our apps, as well as licensing issues. Eventually we settled on Btrfs because it has many of the same features, like copy-on-write and snapshots. Even Btrfs was a pain to implement, but less so. Back then, the hardware couldn't even handle it. Today's hardware is likely going to be a better experience. We have to test our NASes extensively and the management here is conservative. There is a reason QNAP's ZFS NASes require a different OS and a different team. We have to make sure we aren't pushing something that would trash people's data. Since then OpenZFS has become a thing and is far more polished, so... I can ask to revisit the idea.
@@ASUSTOR_YT ZFS was production ready before your company was founded. You guys chose wrong going with BTRFS. I've been running ZFS for 14-15 years. OpenZFS has been a thing since 2013.
So I was a little disappointed by this comparison. I recently built a home server with 4 Seagate IronWolf 8TB drives. These 8TB drives actually run at 7200rpm... in fact, on paper, the specs for these seemed almost exactly the same as the Pro variant. So I opted for cheaper and got the non-Pro.
I was hoping this video would show me a direct apples-to-apples comparison. But your original 4TB non-Pros were 5400rpm (I think they bumped up the speed in the higher-capacity non-Pros). So there was pretty much no reason to even bother comparing them, other than for show.
I'm running the same drives, so this is the comparison I was hoping for too.
BTRFS is imo a shitshow. At no point in time was ZFS as unstable and prone to data loss as BTRFS now is. It works for Meta/Facebook, but because they develop it for their needs. ZFS is the better choice here, if it is at all available. Also, why did you go with RAID10? Wouldn't RAID6 be the better option here, as any 2 drives can fail, instead of one drive in each RAID 1 pair? What if 2 disks in the same mirror fail? All the data would be gone! The CPU in that thing has more than enough power to support RAID6 at 2.5 Gbit/s. Probably even more, and at that point the disks would hit their limit anyway.
BTW, BTRFS works totally fine as RAID1, probably as RAID10, but has problems with RAID5/6. Otherwise it's pretty stable now, even though ZFS is way better imo. Might be biased here tho.
EDIT: Might it be possible to install TrueNAS on that thing, like Wendell did with the QNAP device he had? Just an idea.
I would also pick zfs over btrfs. But your points about btrfs... It's fine now. What you're referring to was 5 years ago. By now raid5 and 6 are fine on btrfs
@@thibaultmol As far as the wiki tells me, RAID 5/6 is still in an unstable state. Not only is the write hole still there (though normal RAID has the same problem), but it's also kinda broken and not really tested in other ways. I never used btrfs because of exactly the reasons I wrote about above. It's these horror stories I heard, read, and that are even stated in the wiki which brought me to zfs. RAID 1 is totally fine, and RAID 5/6 is fine if you have the metadata on a RAID 1 array, but then I would waste 2 disks for metadata + another 2 disks for redundancy of my normal data. I don't see the point, really. Btrfs has so much potential, but it's just a shitshow of Meta's needs, imo.
You may want to read my thoughts on raid[56] with that large drives above.
Toshiba Enterprise Capacity: much better deal and reliability, and a 5-year warranty
👍👍👍👊👊
In the future, use HGST drives. Much more reliable.
🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣
No.
@@Digikidthevoiceofreason Yes. What makes you think otherwise?
Seagate is complete garbage and pure shit. If you want DEPENDABLE hard drives that are QUALITY then Western Digital is your only choice.
The reality is more complex than that; just looking at data from a publicly available report (Backblaze), there's no vendor you can single out as the worst in all classes. Seagate doesn't do well in some models, but does great in others. And it's really hard to have quantitative data to judge the quality of an individual drive until after a company like Backblaze releases data for hundreds or thousands of units over at least a year of use.
With any drive, you should plan on failure and have a good disaster recovery plan that accounts for any drives failing at any time.
I was using WD drives and I moved over to Seagate IronWolf. TBH, both brands have about the same amount of failures for me (running zfs).
I look at the warranty on drives now to gauge what I should expect out of a drive.
A very Catholic radio station
A NAS is network I/O-bound, not disk I/O-bound. Minor (and major) improvements in disk I/O speed are meaningless, because one is always limited by the network. Reliability is the only important factor.
This particular NAS has a 10 GbE expansion card available, and if using that card most operations would be severely constrained by disk speed, especially if using parity.
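Rough numbers on why 10 GbE flips the bottleneck (the per-drive figure is the 270 MB/s from the spec sheets discussed above, and the two-stripe sequential-read assumption is optimistic):

```python
link_rate = 10e9 / 8 / 1e6     # 10 GbE: 1250 MB/s
per_drive = 270                # MB/s, outer-track sequential from the spec sheet
array_rate = 2 * per_drive     # 4-drive RAID 10 streams from two mirror pairs
print(array_rate < link_rate)  # True: the disks, not the network, now set the ceiling
```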
BTRFS nope nope nope
RAID 10 sucks!
[Citation needed]