I would absolutely love to see a version of this test with WD Red Pros vs their shucked cousins (WD Elements, Easystore, My Book)
HGST (Hitachi Global Storage Technologies) and Toshiba (Kioxia) are way more reliable.
Good idea!
I was about to say the same. Both WD Red Plus and Pro would be most welcome. These are the only drives that we purchase anymore for NAS.
We have seen far too many failed Seagate drives; we no longer even consider them.
Interesting that you chose a test that is low impact for spinning disks. I wonder what would happen if you did random writes all over the storage medium and how much of a difference that would make.
Out of the ~70 disks I've bought since 2019, 6 have failed so far. All of the bad disks failed after approximately 3 years of use (a mix of Seagate and WD), and all were spinning rust. Interested to see what your sample sizes will yield.
Love that you guys are doing this. Been wondering about the same thing. Same with shucked drives: where in the reliability hierarchy do they fall?
Unfortunately, no consumer SSDs like the Crucial MX500/BX500 were tested.
That would have been really interesting
But they only have a fraction of the write endurance of an enterprise SSD (from Micron).
(Crucial MX500 2TB = 300TBW vs Micron 5400 MAX 3.84TB = 23,000+TBW)
Hi Brett
Nice to meet you at Cephalocon. Are you going to share the Prometheus and Grafana setup?
I think having graphs for our Ceph cluster, with its mix of NVMe SSDs and HDDs, could be interesting for following their degradation and deciding on replacement.
Previously I used the built-in Red Hat AI tooling, but given the direction they are going these days, and the fact that it can't run on newer Debian platforms with Python 3.9 (until they merge my PR), our insight into drive health is extremely limited.
Keep up the great work.
Best regards
Daniel
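(Until a full dashboard gets shared, here is a minimal sketch of what the alerting side could look like, assuming node_exporter's textfile collector is running the community smartmon script; the metric name `smartmon_device_smart_healthy` is that script's convention, not anything from the video, so treat all names here as assumptions:)

```yaml
# Hypothetical Prometheus alert rule. Assumes node_exporter's textfile
# collector runs the community smartmon.sh script, which exports
# smartmon_device_smart_healthy as 1 (healthy) / 0 (failing).
groups:
  - name: drive-health
    rules:
      - alert: DriveSmartUnhealthy
        expr: smartmon_device_smart_healthy == 0
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "SMART flags {{ $labels.disk }} on {{ $labels.instance }} as unhealthy"
```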
Cool stuff, looking forward to seeing the results. 👍
Why is the big wall mounted 'fan cover' backwards?
Sequential writes are nothing! The spinning itself (even when the drives are idle) is almost the same stress as when they work (and the failure reason is usually not the bearing anyway). For the moving I/O head to fail: five random writes where the head sweeps from the inner to the outer edge five times involve (almost) the same movement as writing the drive full five times, which can't stress it enough. A nonstop random-write test would be a better stress test and would produce results faster.
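(For what it's worth, the nonstop random-write torture run suggested above can be expressed as a small fio job file; this is only a sketch, and `/dev/sdX` is a placeholder for whichever drive is being sacrificed:)

```ini
; Hypothetical fio job file: full-span 4k random writes.
; WARNING: this destroys all data on the target device.
; /dev/sdX is a placeholder for the drive being sacrificed.
[randwrite-torture]
filename=/dev/sdX
; random writes across the whole device, 4k blocks to maximize head travel
rw=randwrite
bs=4k
; bypass the page cache so the drive really does the work
direct=1
ioengine=libaio
iodepth=16
; keep going even after covering the full span; one day per invocation
time_based=1
runtime=86400
```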
I'd also like to see these kinds of tests repeated for various file systems: ZFS versus ext3 versus ext4 versus Btrfs versus... others?
And then I'd love to see larger scale comparisons across the entire array with various different filesystems.
Hi, so can we use enterprise drives in a normal PC? I want a big drive, but I was told enterprise drives won't work right with normal PC power supplies because enterprise drives use the SATA 3.2 power system? Also, what's the best 3.5" 7200 RPM 16-18TB drive at the moment?
I always thought the 550TB workload was combined read and write on HDDs, as that is when the read/write head is working and also creating vibrations.
A more mixed load (more random R/W for the HDDs) would make the tests more interesting, since sequential writes alone don't stress the spinning disks that much, though I get that it would take a lot more time. Filling an HDD doesn't play to its weaknesses the way it does for an SSD. You could fill the drives and then start heavy random reads on all of them; that will affect the HDDs a hell of a lot more than the SSDs. We want to hear the heads scream!!!! Joking aside, maybe use two systems to test: one for SSDs only and one for HDDs. Fire up the SSDs with fill data and the HDDs with random ops. I'm guessing the heads will give up the ghost before the SSD cells reach their max writes.
We need NVMe in the lineup. And where is the update?
I'm not convinced by the explanation given for the slowing write performance of the spinning drives as the test progresses. Although the surface speed the heads see slows as the test progresses, the data density increases at the same time, meaning the rate at which the sectors pass under the heads remains the same throughout. I think it more likely that as the test progresses the head has to move back and forth further and further to update the master file table, assuming an NTFS format on the drives.
I love this video already and I'm only 2 minutes in. I've used a mix of WD, Seagate, and Toshiba HDDs in my life, and all are still alive today, some being 20+ years old and some 8-10 years old. So my prediction is that if any fail, it will be the SSDs, since I have no NAND storage in my own NAS. But I also don't know how the high-end Seagate drives fare.
I have an old server with 4 1TB WD Blue drives. Those drives have been running for 7 years now and SMART reports no issues. But I must say one thing: those drives spin down after 20 minutes unused, are actually used about 2 hours a day, and spin up about 12 times over the day, while the server itself has been running 24h a day for the last 7 years with no downtime.
Hello,
Please post a chart with detailed information on all the SSDs and HDDs you are testing (or plan to test in the future), plus your latest test results!
Thanks.
What about the HGST drives? They rate well with Backblaze, but I'd be interested to see your tests.
This year I lost my 1TB WD HDD (black) in my desktop after 10 power-on years :(
That speed difference is the reason I have 2 partitions on my 2TB disk: the first with a ZFS data pool for virtual machines, and the last with a ZFS data pool for photos, family videos, music, documents, etc.
Doug and Brett - thank you very much for the content !
Go Seagate Exos !!!
Looking at the base Storinator AV15: it costs 4,400, and all the hardware inside (CPU, RAM, motherboard, controller) costs 1,500, so you are charging 3,000 for a case and assembly. What is the deal?
And the Storinator S45: 8,000 for the case and assembly.
There is a huge subsonic drone in this video. Did you guys have an AC unit running or something? I was just watching the new Mission Impossible movie, so my 15-inch sub is turned up, and my pictures were vibrating on the wall; I thought Micron was drilling down the street again. But I turned my theater system down and the vibration and drone were gone; turn it up and there it is again.
I've had worse luck with SSDs than hard drives, usually a sudden death where the SSD is nowhere near its write endurance. I had a 3-year-old SanDisk fail recently; it just suddenly went read-only with about 90% endurance still left. (At least the data could be pulled off that one; before that, I had an SSD in a different computer suddenly not being recognized even by the BIOS.) Hard drive failures that I've experienced tend to show up as slower and slower speeds before the drive dies, giving me a prewarning to get a new one.
Really fun, this. Thanks!
A couple of things. It's not angular velocity. Angular velocity is radians per second, and it is constant wherever you are on the disc. What you are describing is closer to the circumferential speed.
The number of sectors per track changes so that they are more evenly distributed over the circumferential distance (see Wikipedia on drive sectors for a full explanation). Drive manufacturers had this pretty much down by the time we moved past GB-sized drives.
You also don't say whether the workload is random or sequential R/W. A disk has to search through more blocks to find info in a random read, so as the disc fills, it has to move further between seeks... completely unrelated to the speed of the sectors. Obviously none of that applies to SSDs.
I also don't think you clearly stated the rotational speed (angular velocity) of the drives; obviously a 15,000 RPM enterprise drive will do everything faster than a 5,400 RPM commercial drive.
I watched this video on an HP Z400 with more than 50,000 hours on it. The 320GB WD HDD is stock.
The data must be written from the inside outwards if the speed decreases over time.
It's the other way around, actually. It writes the outside first (more surface area), which means performance slows as it gets closer to the inside, where it has to switch tracks more often.
Unfortunately, the S.M.A.R.T. data from drives misses something like 95% of all drive errors and is a horrible predictor of hardware problems. You have to go to the vendor proprietary information to get the real insights.
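(To the point above: even if SMART is a weak overall predictor, the raw attributes are still worth trending, since a climbing reallocated-sector count is one of the few signals that Backblaze-style studies do correlate with failure. A minimal sketch of pulling a few of those attributes out of `smartctl -A` style text; the sample output embedded here is fabricated for illustration, and real field layouts vary by drive:)

```python
def parse_smart_attributes(smartctl_output: str) -> dict:
    """Extract raw values of selected SMART attributes from
    `smartctl -A` style text output. Returns {attr_name: raw_value}."""
    wanted = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
              "Offline_Uncorrectable"}
    values = {}
    for line in smartctl_output.splitlines():
        parts = line.split()
        # Attribute rows look like:
        # ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[1] in wanted:
            values[parts[1]] = int(parts[9])
    return values

# Fabricated sample output, for illustration only:
sample = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
"""
print(parse_smart_attributes(sample))
```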
"Aboowt midiar, eh?" :P
So.. which one will die first? Not answered.
Yeah, I was disappointed, but they indicated they will provide update videos as the testing progresses.
@wallacebrf Promise we will be providing updates!
For me it's always the SSD. I have IDE drives that are still fine and working DAILY. A limited-write drive is bad for storage; just use it for speed... so OS, apps, and games. Do not store data, especially irreplaceable personal photos and videos, on limited-write drives like SSDs.