I just got back the replacement drive from the HDD reseller. The replacement is a slightly different model, in this case a HUH728080ALE601, and has 37,119 hours on it. The RMA process took about a week from shipping off my old drive to getting the replacement. Overall the RMA process was easy and fast, but this might not be the case for all resellers.
Excellent analysis. There is not enough modern review material on HDDs these days. Thank you for covering this important topic.
"My analysis in this video are going to be focused around kind of a home server"
**gestures towards server rack**
🤣
So glad this channel was recommended to me! You lay out all the information and test scenarios clearly without too much 'waffle', and blast the resulting data out faster than my web browser could buffer it. 😵 I hope to see your Wizard status grow.. maybe get a Wizard hat and grow a mystical beard. Keep up the great work!!
So much knowledge, proper benchmarks, info, and useful tips in one channel! Thanks for sharing years of experience in the tech field with us!
I wasn't really into the whole NAS thing a year ago and got 4 SMR Seagate 8TBs. NEVER AGAIN. It's absolutely criminal. There should be a warning on them saying "only to be used as end storage".
SMR drives are disgusting, they should've never existed!
Differentiating SMR: the "S" stands for Shit. Never want it.
SMR drives for archival purposes are great
@GiJoe94 Well, really only write-once operations. Nothing above that. If you need to read from them more than once per year it's serious pain. I can only attest from a pool of 5 drives, but getting around 20 MB/s read speed for some picture or video files really fucks up your timetable if you need 10 TB.
Great video. Thank you! I'm using SMR drives to support offline USB backups and I would say they are pretty reasonable for that purpose.
Yea that’s a good use case, just needing to store data for moderately long periods. I still have had issues with slow incremental backups but with light home use this is unlikely to be an issue.
TY for making this video. Labels don't mean what they used to. Shopping these days requires crowd-info to know what's good or not.
I can feel the honesty of his review 👍🏼
A very informative video for us storage nerds. Good work.
Glad you liked it! I'll try to do more videos like this in the future.
Thanks Brandon, good info, we don't normally get tests like this. I am currently testing my new "synologee" build with very old 1 TB drives.
Great tests! I bought a big bulk box of 8TB HGST used drives and so far (knock on wood) they've been great in my TrueNAS (raidz2) & BlueIris (Windows mirror) homelab servers.
Great video as always. I think you have a possible future in more serious reviews/tests. The changing camera angles and cuts to charts, graphs, and meters provide good pacing and video "punctuation" and keep the viewer's interest. Definitely on the right track here!
I have 16 x HGST 12TB helium drives in my home Proxmox cluster. These can be found from eBay sellers with 20-40K hours on them for ~$90-120 each. I've had mine for a couple years with no failures and pretty consistent performance. They work great in a Ceph cluster. My advice is to go for the biggest CMR drives you can get at a reasonable price/TB. Bigger is always better... The energy cost per unit of storage gets better, and performance tends to be better for a given amount of used storage, with more of the data sitting towards the outside of the platters.
Excellent observations! I'm seeing some of those now, especially on the Barracuda SMR: the writes are painfully slow (25 MB/s at the most), so 2 TB backup files are a 22-hour ordeal. Thank goodness I opted for the CMR NAS version; getting 125 MB/s reads and higher write performance for backups seems acceptable in comparison.
I ended up with some SMR external HDDs as a small business server, and when doing some backup tasks I saw the huge performance hits like you saw.
I am very annoyed with how HDD manufacturers tried to hide DM-SMR drives among CMR drives and not tell customers about it.
Great video! If you decide to do a part 2, I would love to see how some surveillance drives perform.
Good vid with some useful info. I've had one of the WD Green 1TB drives (and multiple others) since it came out and it still works today. I did disable all that unnecessary load/unload cycling with some command based utility way back then, so maybe that's why it isn't dead yet. In my experience I've had better luck with slower drives lasting longer too. All of this is anecdotal of course.
Loving your content. This video was interesting particularly the part on helium drives and power consumption. I do have a request though. When you test hardware can you check how the hardware works in a system using Active-state Power Management? Sometimes hardware with low power consumption numbers interfere with C-states so a supposedly "more power efficient" device can actually cause the system to use more power. LSI HBA's are a great example of this; I have yet to find one that can let my server drop to C-6 like it does when I am just using the onboard SATA ports.
Great video! I did happen to go with IronWolf and I guess I'm glad I did!
Having had both Seagate and WD fail in the past, I was shocked when I disassembled them and saw how different the internal components were. In general, it looked like the WD drives had beefier components (magnets, etc.) vs. the Seagate. Given this, and that I've had more Seagate failures, I only buy HGST (WD) enterprise drives (mostly helium) now (SAS, not SATA). And ya, just say NO to SMR unless you are archiving data (write once).
They both have varying lines of varying quality. But you do you I guess.
My experience with Seagate is less good - 100% failure rate! And WD is only marginally better. I've had positive experience with 12 and 16TB HGST, but since HGST was taken over by WD and they introduced SMR disks without informing buyers, WD/HGST is also completely out of the discussion.
In recent years I have only bought Fujitsu helium-filled enterprise HDDs and so far have had no problems.
Definitely interesting. Thanks!
I've been using six WD Red Plus drives (CMR drives from 2015) without any failures, albeit with relatively light use. When I upgrade, which may have to be soon, I'll buy WD Red Plus again. My 2015 drives are 3TB; the new ones will be 12TB.
was literally thinking about this. thanks duder
So I'm new to all this, as of the past month or so, and I went with 4 of the 10TB HGST drives going around. Bought from GoHardDrive. They claim a 5-year warranty, but as long as they replace drives that go bad within a year I'm happy.
I ran short and long SMART tests on all of them and each passed. Then I wrote about 2% of my total pool volume and one fell off and died, clicking on boot. Sending that off tomorrow, and I'm almost scared of my pool now.
Changing out my PCIe SATA card for an LSI HBA card tomorrow; once that's in I'm gonna give my two vdevs a lot to write and see if any others fall off.
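For reference, a minimal way to run those short and long SMART self-tests on Linux, assuming smartmontools is installed (/dev/sdX is a placeholder for each drive):

    # queue a short self-test (takes a couple of minutes)
    sudo smartctl -t short /dev/sdX
    # queue an extended self-test (can take many hours on a 10TB drive)
    sudo smartctl -t long /dev/sdX
    # afterwards, review the self-test log and health attributes
    sudo smartctl -a /dev/sdX

Note that a passing self-test is no guarantee; as this comment shows, a real write workload can still knock out a marginal drive.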
great info, thanks for the video !
Awesome data! Not enough people are doing this work. Re: SMR - the hyper-dense, host-managed, highest-platter-count OEM drives that cloud providers get directly from manufacturers offer the absolute cheapest $/TB/Watt in the world. These are not available to any retail customers and are the only SMR use case that makes sense. When individual SMR platters fail platter testing during manufacturing, they are put into these lower-capacity consumer SMR drives in order to extract some value. The problem is that manufacturers are not pricing these drives according to their abysmal performance as they should. Great video! :)
"Does building a Proxmox server that boots from a 6x18TB dual-actuator RZ2 array make sense?" :) It certainly doesn't work for single-actuator drives, as the performance falls dramatically short due to the limited random-write workload capacity. Perhaps dual-actuator boosts performance to where it's as if you are getting similar performance to a 2x(6-Drive RZ2) array? 🤷♂ These are the hard-hitting questions nobody can answer :)
I wish I could see the prices the big buyers are getting, but I'm working with the small-customer market.
I think a dual-actuator HDD array would be cool, and I was almost tempted to do that setup, but I had a few disconnects on my dual-actuator drive so I was kinda scared away from it, especially since all the dual-actuator drives on eBay are of unknown origin.
@@ElectronicsWizardry there's plenty of not-extortionately-priced DA drives at the usual "recertified" drive vendors with seller-fulfilled warranties 🤷♂
I've used HGSTs for the last 10 years. All used data-center enterprise drives off eBay. Prior to that it was Hitachi. Never had an issue with them. Since then I look at the people who buy Seagate with a constant question, "Why?", in my head. I usually never put my thoughts about it on the Internet, as it always sounds like a debate about the best religion in the world. Some dudes have had better luck with Barracudas for years, and I'm happy for them.
SMR drives are fine for single-drive usage in a desktop PC when the drive gets enough idle time to shuffle its data around. A COW filesystem like btrfs helps their performance as well, since COW reduces the random IO a little. But for anything server- or NAS-related I'd stay away from them, too.
I'm probably a bit more pessimistic than some about SMR drives, as I've been burned waiting much longer than I originally planned for an SMR drive to finish a task. But for light desktop tasks SMR drives are generally fine and won't show their big performance hits.
I've been using eBay seller refurbished HGST drives, oh geez....for like 20 years now.
Over the last 20 years, I think I've had a total of roughly 8-10 dead drives, with at least four of them dying because the system was sitting in front of a heater vent; the hot air coming from that vent during winter is what caused their premature deaths.
I think that one of my refurbished HGST 6 TB drives that I bought 8 years ago (in 2016) is finally starting to die, so I am going to need to replace that soon.
Still trying to decide on whether I am going to be just replacing that 6 TB drive with a HGST 6 TB cold spare, or whether I am going to pull the rip cord and actually start upgrading the capacity as well (either moving to HGST 10 TB drives (like much of the rest of my array) or whether I am going to start increasing the capacity overall (to 16 TB drives).
For me, the $/GB is my main concern.
Individual drive performance is less of a consideration because they're all on 8-drive-wide raidz2 vdevs, with one pool having a single 8-wide vdev whilst my other, "bulk storage" pool, has 3 8-wide raidz2 vdevs (24 drives in total).
Once you're dealing with 32 drives in total (plus the 4 drives in a HW RAID6 array for the Proxmox OS itself, so a grand total of 36 drives), and it's all going through an 8-lane 12 Gbps SAS RAID HBA, even at 200 MB/s (1.6 Gbps) per drive, I am still nowhere close to the theoretical bandwidth limit of the HBA/SAS 12 Gbps interface.
And as you've pointed out, the fuller the drive, the slower it'll end up being, and they all converge onto roughly the same sequential and random read/write speeds when the drives are > 50% full anyway.
So, since performance is fairly uniform at that end of the platters, that's why it gets factored out from all makes/models and the only differentiator becomes $/GB.
(And I've already exhausted the write endurance of 7 SSDs in 8 years, so unless it's like Optane assisted HDDs, I don't use HDDs that have some kind of SSD caching on it, because I will burn through the write endurance limit of the SSDs. That's what happens when your system has 64 GB of RAM and you use the SSDs as the swap drive. The repetitive random I/O kills the SSDs VERY quickly, even if it doesn't hit the TBW limit.)
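A rough sanity check of that bandwidth headroom claim, using the comment's own numbers plus one assumption (SAS-3 runs 12 Gbps per lane with 8b/10b encoding, so roughly 1200 MB/s of payload per lane):

    # 8 lanes at ~1200 MB/s each vs. 36 drives at ~200 MB/s each
    echo "HBA ceiling: $((8 * 1200)) MB/s"   # 9600 MB/s
    echo "36 drives:   $((36 * 200)) MB/s"   # 7200 MB/s

So the aggregate is closer to the ceiling than it might look, though in practice all 36 drives rarely stream at full speed simultaneously.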
Very informative thank you
The only use for an SMR drive IMHO is one-time data storage (a data dump): you put the data on and never write to that drive again. I only have a few, and they were from the days Seagate called them Archive Drives and they were cheap(ish). I have never had much luck with Seagate drives since they acquired Samsung, other than 15k SAS drives, so I generally stick to WD (Red / Black / Gold / shucked). Thankfully I never got an SMR drive before the scandal of the SMR WD Reds; they were all CMR when I checked. I did get caught with some SMR Toshiba laptop drives that I RAIDed in a 24/7 PC, and they lasted 2 months past the 1-year warranty. I do have some HGST He drives from before WD acquired them that are still going strong in a workstation. The other drives I use are Toshiba NAS drives, which seem to be louder/hotter than the Reds in my NAS but are still working.
Thanks for that - really interesting video.
I moved from 8TB Iron Wolf drives to Seagate 10TB Helium in my NAS. I'm building a backup TrueNAS server for the 8TB Seagate drives. Once the backup is completed, I'll only turn on the TrueNAS server weekly for incremental backups which should make the 8TB drives last longer.
Should've added a Toshiba MG08 to the mix. Toshiba enterprise drives are really underrated. Though, they really shine in the 16TB+ range.
I've got a pair of 12TB Barracuda Pro drives in my NAS. The Barracuda Pro is CMR (according to the website), unlike the standard Barracuda, and dramatically quieter than the Exos drives I have.
Barracuda Pro being better than Barracuda is like saying Faulty condom Pro is better than faulty condom.
You're fucked either way.
@@incandescentwithrage 😁The best comment ever! LOL Thanks!
@@incandescentwithrage Barracuda Pro is that legendary Seagate that lasts for decades, much better than any WD can ever give.
While Seagate Barracuda (SMR) is just trash
Exos random reads are so loud for some reason.
sequential reads (movies) are okayish.
really makes you minimize the usage down to actual work.
however after SMR anything CMR is a wonder.
@@Avrelivs_Gold Barracuda Pro is not legendary. When was the last time you saw one offered for server or SAN use?
Never.
HP, Dell EMC etc use rebadged Seagate Exos or WD Ultrastar.
You can compare the datasheets to see why. Using a quieter drive is fine until it loses your data.
Was going to buy an IronWolf.. Needed a backup drive for my laptop... But ended up with an external 8TB WD Black 7200 RPM for $189.00... Works great and is just for backup once a week... Can also use it to offload my Xbox...
This is a good example of the difficulty with making a video like this; pricing changes a lot over time. That WD Black seems like a good pick for you at that price, but it's hard to recommend based off sales. It's kinda odd/annoying to me that WD doesn't seem to have updated the Black line recently with larger models like they have with Red Plus/Gold/Purple.
@@ElectronicsWizardry I was gonna buy an enterprise drive but with the enclosure it did not make sense. It's cold storage, so with the price and capacity it made sense to go with an HDD.. and personally I think my data is safer on an HDD in cold storage rather than an SSD…
I am curious if the same models of HDD are used in the WD Black externals and the internal HDDs. I haven't seen many reports of people shucking the Black external HDDs, but if it's like other higher-end external HDDs I've seen, they typically include a higher tier of HDD than the cheaper Easystore or other desktop HDD lines.
@@ElectronicsWizardry I called WD… it’s a CMR but that’s all I know..
Personally, I'd get mostly used drives and have a couple new drives per failure zone. In my NAS now I have 4 sets of 5-drive RAIDZ2 vdevs, so I'd probably have 1-2 new drives per vdev and fill the rest with used drives.
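As a sketch, that layout would look something like this in ZFS (pool name and device names are hypothetical placeholders):

    zpool create tank \
      raidz2 sda sdb sdc sdd sde \
      raidz2 sdf sdg sdh sdi sdj \
      raidz2 sdk sdl sdm sdn sdo \
      raidz2 sdp sdq sdr sds sdt

Mixing drive ages within each vdev, as described above, reduces the chance of several drives from one batch failing in the same vdev at once.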
I wonder what happens to helium filled drives after some years and some of the gas has leaked? Can you still read data from them? That would be very good. I'm ok with them not being able to do writes as long as the data is safe and readable.
Got four 8TB IronWolfs in 2016 and four 4TB IronWolfs, still running in my server, although I got some errors lately and need to change one soon.
A note: under drive attributes the 8TB drives have "Hardware ECC recovered", which isn't available on the 4TB drives. That's a good thing for protecting against bitrot if you haven't got ECC RAM.
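You can check whether a given drive reports that attribute with smartmontools; on Seagate drives it typically appears as attribute ID 195 (device path is a placeholder):

    sudo smartctl -A /dev/sdX | grep -i ecc
    # e.g.: 195 Hardware_ECC_Recovered ... (no output if the drive doesn't report it)

Note this counter reflects the drive's internal error correction; it isn't a substitute for ECC RAM or filesystem-level checksums.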
I'm a bit confused when I hear the SMR rants, because I'll copy files around the size of CD through BD discs (500 megs to 40 gigs), then delete and rewrite with generally only about 30 gigs free on a terabyte drive, and I have never seen these horror-story slowdowns. Doing a direct copy from one drive to the 2.5" SMR drive (Seagate in this case), which is TRIM enabled, I can push an entire 10 gigs at a time and see 60-120 MB/s, which I think is pretty standard for a 2.5" drive even if it were CMR. So it seems like the use case works for me. I back my stuff up to another hard drive regularly (with a third, more frequent copy for critical stuff) rather than using RAID for redundancy. I think for most external USB hard drives used for storage, copy/paste-style simple backups, and non-RAID storage, SMR is probably fine as long as it is not one of the older drives without TRIM and it's used with a TRIM-enabled OS. The performance I see from my specific drive suggests that if the SMR zone is clear (TRIM-designated as empty), the drive is bypassing the persistent cache and just dumping writes straight into the empty zones. I might have a pretty uncommon large-file use case though, with video and audio editing of very large files.
Agreed that most workflows for external drives don't run into the SMR drive speed issues, and if you're working with smallish files like you listed here you won't see the slowdowns, especially if the drive is given sufficient idle time.
I've personally been burned by SMR drives in the past on tasks like rolling back incremental backups and RAID rebuilds, where the CMR buffer fills up and the bad performance shows. This is compounded by how HDD manufacturers have put SMR in drive lineups without advertising them as SMR, even in some NAS-rated drives. I think the cheaper refurb drives also make me less likely to get a new SMR drive, as they're often cheaper without the potential issues of SMR.
Thanks for the tests.
Can you share the "SMR test" script?
I think the test is in the GitHub link under drive tests, then the drive name, then smrtest, then test.sh. The test I was using was random write with large blocks. From memory this command should work: fio --name=smrtest --filename=/dev/sdX --ioengine=libaio --rw=randwrite --bs=1M. Then I used the bandwidth log to record the data over time and graphed it with a Python script.
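A reconstructed sketch of that test, with the bandwidth log enabled and a gnuplot one-liner standing in for the Python script (the device path and the plotting approach are assumptions, and the write is destructive to everything on the disk):

    # random 1M writes across the whole device, logging bandwidth over time
    sudo fio --name=smrtest --filename=/dev/sdX --ioengine=libaio \
        --rw=randwrite --bs=1M --direct=1 --write_bw_log=smrtest
    # fio writes smrtest_bw.1.log as CSV: time(ms), bandwidth(KiB/s), ...
    gnuplot -persist -e "set datafile separator ','; plot 'smrtest_bw.1.log' using 1:2 with lines"

On a DM-SMR drive the plotted line typically starts high while the CMR cache absorbs writes, then collapses once the cache fills.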
Would be nice if they at least listed just the basic stats and CLEARLY wrote CMR/SMR... cache size, spin speed, etc.
thanks for the video
Great comparison. I still can't decide if it's worth saving 50% by going for second-hand drives without a warranty. I might risk it and start buying something like WD Golds, since they have a 5-year factory warranty anyway, as long as they have a low start/stop count and less than 700 days of power-on time.
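If you do gamble on used or recertified drives, those counters are easy to read out before the return window closes, e.g. with smartmontools (placeholder device path):

    sudo smartctl -A /dev/sdX | grep -E 'Power_On_Hours|Start_Stop_Count'
    # the 700-day threshold mentioned above is 700 * 24 = 16800 power-on hours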
So what's the recommendation then?
SMR was designed for long-term archival storage: write once and rarely change it again. It's not a thing a consumer wants or needs, for all the downsides stated.
Isn't a 5400 RPM drive better in terms of power efficiency?
Generally speaking yes. But speeds also take a hit.
both of your posts are addressed in the video lmfao
Great video!
I really wish you settled on one or two drives and said point blank "get these" - I watched the video and still don't know what the best choice is.
I feel like I couldn't have made an easy recommendation due to the different workloads and requirements that users have, in addition to changing pricing and models in the future. If I had to make a simple recommendation, I'd get a NAS- or server-rated CMR drive of the capacity you want. While this may be overkill and more expensive than needed, it will mean you get a drive that should be good for almost all workloads.
Why are the low TB/Year green and the high TB/year red?
You've got it backwards
Oops. I can't change that now, but thanks for bringing it up.
Buffalo Bill went and cleaned his whole act up.
I hear the non-CMR drives from WD are crap. I will add as a PSA, SSD drives can lose data in 1-2 years if they are unplugged. So if you want long-term backup, get an HDD.
5400 rpm limits speed quite a bit as well...
I've just used used SAS drives off eBay for like 10 years. None have died yet, and I end up replacing them because they're too small.
I really need to Make My Own NAS out of an older PC, Or Upgrade My Junk Lenovo.
Either way I need NEW Proper HDDs, Not 4 Randoms in 4 & 2 Gig working in RAID. I Do NOT believe I have them Striped, as it wouldn't let me since they weren't matching Drives.
I can't Afford to Buy them, but I'm Researching them all the time. I'm worried about getting a Refurb, that still would Most likely give me Crazy Anxiety LOL
wow, it's terabyte drives, not gigabyte drives, sir
Wow! It looks like we are already in a situation where SSDs are on par with HDDs for NAS use cost-wise. Taking into account the cost of electricity over a time frame of 5 years, SSDs have a lower TCO (total cost of ownership) for a home NAS that is plugged in 24x7 and mostly idle.
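A toy version of that TCO math, where every number is an assumption for illustration rather than anything from the video (4TB drives running 24/7 idle at $0.30/kWh over 5 years):

    # 5-year cost = purchase price + watts/1000 * hours * $/kWh
    hdd=$(echo "80  + 8 * 24*365*5 / 1000 * 0.30" | bc -l)   # ~$80 HDD idling at ~8W
    ssd=$(echo "200 + 1 * 24*365*5 / 1000 * 0.30" | bc -l)   # ~$200 SSD idling at ~1W
    echo "HDD: \$${hdd}  SSD: \$${ssd}"                      # ~185 vs ~213

With these particular assumptions the HDD still edges it out; higher electricity prices, smaller capacities, or aggressive spin-down shift the math, which is why this thread disagrees.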
I moved to SSDs for my home usage, though I do occasional backups to HDDs, and my data amounts are really small compared to many users - I think 2-3 TB including system backups. The biggest thing for me is that I don't need to wait for drives to power up.
I think I looked into the math for this, and at the capacities I was interested in (about 50TB), SSDs still didn't make sense price-wise. Consumer-grade drives' value goes way down past 4 TB currently, so I would either need lots of bays or server-grade drives, which often need SAS/NVMe, making it harder to get compatible systems (a 24-bay SAS server might make sense if you can get the drives fairly cheap). It will be interesting to see how these prices change over time.
@@ElectronicsWizardry "I think I looked into the math for this, and at the capacities I was interested in (about 50TB) SSDs" - my datahoarding case is not as severe as yours :-). Nevertheless, recently I've asked about the use case for a small consumer all-flash NAS. After this video, my conclusion is that all-flash is competitive cost-wise.
Yea, for smaller NAS sizes SSDs can make a lot of sense, with the lower power, silent operation, and much higher speeds. I've seen some lower-end consumer-grade all-flash NAS units, but I haven't found one I've fallen in love with.
There's also an argument for tiered storage. Use SSDs for the hot regularly used files, with HDDs that spin up when you want the archived cold storage - old films, photos, backups etc.
SMR drives should not be considered real HDDs
it should be illegal to bamboozle people like that
just today I easily used my 15-year-old (CMR) HDD as if it were new,
while a 3-year-old Seagate SMR was giving me 0 MB/s and disconnects with data loss...
Don't use a Barracuda in a NAS; it's not designed for it.
The CMR Pro version is fine for anything.
First of all, I would suggest you go bald once and for all. You would look much more attractive, I bet.
Secondly, I strongly disagree about Ironwolf drives. I thought the same as you do for a time thanks to the "Seagate NAS" drives. They were acceptable to good. When they renamed the line to Ironwolf I thought they were the same. Nope.
As also a professional user of WD RE4 and WD Gold - talking about several tens of drives - I can tell you the difference in cost is totally justified.
What happened with IronWolf: randomly reported uncorrected errors that magically solved themselves, or, after "repairs" through Seagate tools, some turned into reallocated sectors. Now, some of the drives continued to work for years in that state - I put them on less important jobs - but this is unacceptable and also cumbersome to handle. Quality is too erratic in IronWolf drives.
IronWolf Pro I don't know, as I've never bought them. Exos are way better, if only because they are now cheaper for bigger drives. But they draw a lot more power.
Best choice? HGST NAS or enterprise variants, without a doubt. WD Gold if you don't care about power; other good choices are WD Red Plus - lowest power consumption - and the Toshiba NAS series for smaller setups. The Toshiba MG series is very good too.
At least this is my experience.
Thanks for that insight about IronWolf drives. I have probably only worked with 10 or so myself and never ran into issues, but that is a very small sample size. It's really hard to test for all these possible issues with large samples across many different use cases.
The price difference for the 8TB gold vs Ironwolf seems much bigger than on other sizes and with Exos drives. I think going with the enterprise/server rated drives makes a lot of sense if the price is close or cheaper on the enterprise grade drives.
Do you know when WD stopped using the HGST brand? I have had generally good luck with them, but since the HGST brand seems to be gone now, all the HGST drives for sale seem to have a good amount of hours on them.
@@ElectronicsWizardry Well, they did not stop making HGST drives, they just renamed them, at least with regard to the Ultrastar line. And it looks like some of HGST's know-how spilled over to the Pro and Gold lines, BUT that is not my assessment, just some feedback here and there that might be delusional.
Depending on the size of the arrays you are building, however, my personal recommendation would be to go WD Red Plus or Pro for smaller arrays, or WD Red Pro or Gold. Alternatively, as I said, Toshiba MG drives are pretty competitive. The only reason I never recommend Exos drives is their power consumption and, therefore, heat dissipation. If you live in Alaska or have air conditioning in your rack and don't care about consumption, Seagate Exos seems to be the most cost-effective solution.
HGST I don't think so
WD differs from model to model
just like Seagate
there's just no fail-proof solution like many want to believe