I scored a load of Sky boxes a couple of years ago and took the 1TB HDDs out of them, and they suit my needs perfectly. One thing that I don't think was mentioned is that the electricity usage of HDDs is a lot less than NVMes, at least as I understand it. I have a 1TB NVMe as my OS drive but 2x 1TB HDDs for my backups. Great video BTW
@@Richo5566 well, that's exactly what I used to think, but then I saw a report that an HDD (used for backups mostly) uses a lot less electricity, because SSDs draw power all of the time
I thought about buying 24TB hard drives, but I found that 8TB SSDs (2.5" SATA drives) offer the best bang for the buck. They're lightweight, come in pretty cheap, and offer higher speeds than magnetic drives. I have too many AI models, and they add up in disk space.
For OS partitions SSDs, for data storage HDDs. For my private file-sharing server I recently configured tiered storage, meaning a smaller 30 GB SSD partition acts as a cache for the larger 1 TB HDD main storage. Works great so far.
I have been using an NVMe for the OS and general software and still use HDDs for games and bulk file storage, ever since my previous FX-8350 build (now I have a 5900X). I currently have a 1TB NVMe, a 3TB HDD and a 12TB HDD, with plans to get a 2 or 4TB NVMe and then another large HDD to replace the 3TB one.
I have had nothing but problems with Seagate. It was a while ago, but since then I haven't dared gamble my files on my NAS just to check whether they've ever improved.
I miss my UW SCSI with 10,000 RPM drives…. I have always liked the Seagate drives. Currently I use a 2TB NVMe with my MacBook Pro b/c it is much more portable and does not require external power. On my desktop Mac, I have stuck to external HDDs b/c you simply cannot beat the bang for the buck. And the data is recoverable in case of failure. The internal Mac storage is NVMe, which I use only for the OS and application storage. My photography and iTunes library are kept on a Drobo RAID. Time Machine is on its own external drive. I never buy a drive slower than 7200 RPM, and I've noticed that faster drives are difficult to find these days. Thanks for the info on the differences in the Seagate line-up.
Not to mention you are using 4x PCIe lanes for an NVMe, while an HDD uses the DMI link through the PCH. With a 24-lane CPU, you need 16 PCIe lanes for your 4090, so you have only two NVMe drives' worth of lanes left. Large data transfers on NVMe also create a tonne of heat; I have seen some of the chips literally sweat goo from the NVMe 😳 I still love my NAS HDDs as my backups. I just use a 2x 6TB mirror for my needs.
I will upgrade from a WD Blue 500GB and I am planning to buy either a Seagate Exos or an IronWolf Pro. Which of the two do you recommend for long-term gaming? It's not brand new, just freshly pulled.
Not touching a Seagate with a ten-foot pole. Every single drive that has ever failed on me has been a Seagate. Got several WD/HGST and Toshiba drives that have been running well for over a decade now.
Yep, I still prefer the old-tech HDD in my devices, where the r/w endurance of spinning platters over 5 years far exceeds SSD/NVMe drives. I've only had 2-3 drive failures in over 3 decades of use. There's a use case for SSD/NVMe over HDD, but HDD is still relevant technology even today. Cheers.
While I still use hard drives for long-term data storage and backups, the problem I have with them is that their capacity keeps increasing and they are pretty big nowadays - up to 24TB - but speed remains pretty much the same unless you do RAID. Speed just can't catch up with the increases in capacity. So working with files on an HDD gets increasingly slow as time goes by, since file sizes also tend to grow now that we have fast multi-core CPUs and fast RAM to create and process them. It becomes daunting to even use them for backup storage due to the long wait to write all the terabytes of data onto them, not to mention the poor performance with lots of small files. Something needs to be done to get their speed a bit more in line with their capacity, and I hope they are working on this aspect.
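To put a rough number on that wait, here's a quick back-of-the-envelope sketch; the ~250 MB/s sustained rate is an assumed average for a large modern HDD, not a figure from the video:

```python
# Rough estimate of how long it takes to fill a large HDD end to end.
# The sustained transfer rate is an assumption (~250 MB/s average);
# real drives are faster on outer tracks and slower on inner ones.

capacity_tb = 24                 # drive capacity in TB (decimal, as marketed)
sustained_mb_per_s = 250         # assumed average sequential speed in MB/s

capacity_mb = capacity_tb * 1_000_000
hours_to_fill = capacity_mb / sustained_mb_per_s / 3600
print(f"Filling {capacity_tb} TB at {sustained_mb_per_s} MB/s takes about {hours_to_fill:.1f} hours")
# -> roughly 26.7 hours of uninterrupted sequential writing
```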
In an HDD there is a single actuator that reads/writes multiple platters, both the front and back of each platter, so perhaps they should improve on this by having multiple actuators reading/writing multiple surfaces in parallel.
Data recovery too... You can go to specialist data recovery technicians who can recover data from a non-working HDD, while SSDs are usually a 99.9% complete loss if they break.
I have tried many of the backup software packages such as Acronis etc. They never work as advertised and recovery is usually very slow. My method, which has worked well for me and is much faster, is to clone two SSDs so they exactly mirror each other. When I get a glitch I just reboot, go into the BIOS, move one of the other two system drives up to first boot, and I am done. Then I just clone the corrupted drive again. I refresh the clones at least every month, or sooner if I have made many changes. This method is accurate, fast and dependable. I use all M.2 NVMe or SATA SSDs. This method is foolproof and has never failed me.
There are other differences between the Exos and IronWolf series drives too, like heat. I recently replaced a failed IronWolf drive with an Exos drive; it runs several degrees warmer than the other IronWolf drives in the NAS. The drives are about 7 years old and have about 4 years of use on them. MTBF is largely hype in my opinion; 1 million hours MTBF equates to about 114 years.
You don't need a high TBW rating on a drive that's used for archiving... because a lot of the data on it will be written just once. Therefore (from that perspective) an NVMe drive is *better* suited to archiving than an HDD -- except for "price".
@@phaikyouser9499 No... the accepted storage lifetime of a modern NVMe drive left unpowered at an ambient temperature of 25 degrees C is 10 years or so.
That is what you think. Read a bit on how data is organized across a RAID and what SSDs and NVMes are doing in the background. On the disk itself, data is continuously relocated. Add RAID 1 into the mix and there is a lot more writing going on than you realize. Use RAID 5 and then keep an eye on the actual write counters. You would be surprised how consumer-grade NVMes wear out in a NAS system... You could reason: I'll switch it off. But then: what is the point, in that case, of having such an expensive flash-only NAS?
Good video. Exactly my experience with NVMes. The problem with modern NVMe SSDs (and SATA SSDs) is just that they are bad at very large (>150 GB) file backups. Write speeds drop off badly after the cache is exhausted. I came back to HDDs, in a NAS with 10G Ethernet, and I heavily use btrfs RAID 0. You get steady write speeds of over 300 MB/s. I haven't tested with more than 2 disks yet. And yes, RAID is NOT backup! Interestingly, my old SATA SSDs are very good at long-term write speeds, due to SLC...
Personally, I tend to use a tiered storage system in my NAS. I've got big HDDs running for archival purposes and backups, standard SSDs for VM storage as well as warmer data which doesn't need very fast access but shouldn't languish on HDDs, and I've got a small-ish NVMe RAID running for hot storage. However, due to the prices of SSDs in general, it's relatively expensive. I started out with just four 6TB HDDs, which I recycled into my current NAS, bringing my usable HDD storage space to 48TB (a 36TB + 12TB config). The most important data is then also backed up to the cloud (encrypted, of course). Since that's a fraction of my overall space, I'm pretty comfortable about my data. Next stop is tape storage :D
Great video for both mediums of storage. I tend to use both: NVMe for high-speed game access & HDD for music. How long can data remain intact on an NVMe/SSD unconnected to a power source, compared to an HDD?
What would I do? For massive long-term storage where I'd only ever 'read' the files, go optical storage. Then do a smaller mechanical RAID for general storage that doesn't add up to too much. Imagine the time it will take to upload/download 70TB+ from a massive RAID NAS/DAS: several days if the drives are mechanical. It does come down to use case. A nice USB4 NVMe RAID is nice to work off of without too many worries or bottlenecks. Big project done? Burn to optical.
SSD drives require that they be supplied with power at least once every 8 months (maybe mine were cheaply made) or else the data stored on them can degrade; that is from personal experience after not using them for 9 months at a time. But with my external HDDs, after two years in storage I experienced no loss of data at all when I connected them to my computer.
IronWolf has never disappointed me. I'm using a Synology 1821+, 8 bays in RAID 6, plus 2 NVMe drives in RAID 0 for random IOPS, bonded 10GbE LAN. The 7200 RPM IronWolf Pro is really for long-term storage with huge capacities, and the speed is really not bad. It's shared across multiple PCs, notebooks, phones etc., and it's easy to replace a drive and rebuild to prevent data loss. The main reason is that it's really cheap per TB. For NVMe, just installing 2 sticks per PC is really enough. Now I want to upgrade to 128TB, which is also easy: upgrade the drives *one by one* each time I need to.
Are "hybrid" HDDs still around? I have an older Seagate "hybrid" HDD that was nearly as fast as the early SSDs of that time. At that point, a 2TB "hybrid" drive was like $80.00 while a 2TB SSD was almost triple that. Plus, those drives are now almost 6 years old and have shown no signs of failure. They don't even have "head noise" yet.
@@MMOPC78 yeah, Seagate still make a few. It's called the Seagate FireCuda SSHD, in a 2TB capacity with 8GB of SSD used for frequently accessed data such as the boot files and the apps and games you use most. In my testing, Windows booted much quicker than from a standard HDD, but some apps had to be opened a few times before the drive put them in the SSD storage. They were also introduced at a time when large SSDs were expensive, and having these provided a way of getting quicker boot times while still having lots of storage space. This was handy in devices with room for only one drive, such as laptops. Things are different now and large SSDs are much more reasonably priced, so the need for these has diminished.
Unfortunately, the newer energy-assisted drives that heat the media to temporarily make a small spot easier to write DO have a limit on how much you can write, just like an NVMe drive.
I have a NAS with 4x 18TB Exos drives, but I turn it on only to archive my stuff, as it's like having an espresso machine perpetually brewing :')
Could have summarized this in a shorter video, probably.
HDD: Long-term storage / RAID setups / immediate access to data not always critical; redundancies are less expensive.
SSD: IOPS - faster access to data, not just load speed.
NVMe: IOPS + load speeds, faster access to data and the ability to work with it faster. Example: editing a video? Scrubbing speeds skyrocket on an NVMe.
But I'm sure everyone has an idea of this at this point. The only reason to use an HDD is if you're offloading data for convenient access and backup, since it's less expensive to keep with redundancies in place. In the end this all depends on intended use and individual needs.
So far, SSD/NVMe seems like a way safer option, as they fail extremely rarely compared to hard drives (at least in my experience). But yes, for a NAS, HDD is still the best choice, even though I'd trust another brand... But that's another story :)
So I buy external HDDs to store movies and music on, but rarely do they live past a couple of years. They don't move, just sit on my desk until I need them, and then I plug them in to write or read. So I'm seriously considering an SSD. Storage until recently has been around 4TB for movies and TV shows and 2TB for music. I'm just sick of shelling out for HDDs every couple of years. I've had Seagate and WD.
I paid 1400 dollars for an external 20GB drive. :) MTBF can be tested in different ways, yielding hugely different results. If you had constant writing 24 hours a day for 8 years, or in some cases 100+ years based on the million-plus-hour ratings, these drives would assuredly die. Taking 300,000 drives, running them all for an hour and saying each drive should last 300,000 hours is a completely flawed system. Seagate has moved to the annualized failure rate instead of MTBF for this reason.
Well, I had 4x 20TB X20 Exos in my DS923+, running without RAID, but I have 2 NVMe 250GB Samsung SSDs for cache in RAID 0, and I think that's enough. I did think about doing RAID 0 or 1 for the HDDs, but even with my 10G ISP speed you will not see any difference in your mobile phone's photos app, or Plex. 20TB was the best price-to-performance HDD a year ago, but now we have 24TB versions. I'm waiting for Black Friday to buy 24TB drives; I've already sold 2 of the 4 HDDs, put 1 HDD in a UniFi Pro Max, and have 1 left for my photos app and a couple of movies, with Plex waiting for the new storage :)
I have 10 computers in my home office, with the highest being a Xeon W7, 512GB, RTX3090. I have 3x16TB HDDs in my ASUSTOR NAS RAID-5. All of my systems have a 1TB/2TB HDD as a backup clone drive to their 1TB/2TB NVMe OS Drive in case of OS drive failure just reboot. In my main computer for the data storage drives are 4x8TB HDDs (Data/Backup). And my HTPC has 2x6TB HDDs for movies (Data/Backup). I use NVMe or SSD only for OS Drives and for fast Temporary Work drives. In the past 30 years I have had two HDDs fail, and three SSDs and one NVMe. I don't write that much data to any of them. I have Robocopy scripts on all of my systems to mirror the Data HDD drive to the Backup HDD drive.
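For anyone curious what such a mirror job can look like, here's a minimal sketch that shells out to Robocopy's /MIR mode; the drive letters, folders and log path are placeholders, not the commenter's actual script:

```python
# Minimal sketch of a Robocopy mirror job (Windows only).
# Paths and log location are placeholders; adjust to your own drives.
import subprocess

SOURCE = r"D:\Data"        # data HDD (placeholder)
DEST = r"E:\Data"          # backup HDD (placeholder)
LOG = r"C:\Logs\mirror.log"

# /MIR mirrors the tree (including deletions); /R and /W limit retry churn.
cmd = ["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5", f"/LOG:{LOG}"]
result = subprocess.run(cmd)

# Robocopy exit codes below 8 indicate success (possibly with files copied or extras removed).
if result.returncode >= 8:
    raise RuntimeError(f"Robocopy reported errors (exit code {result.returncode})")
```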
I don't use NAS HDDs. Actually, the 2 in my NAS are just normal drives from Seagate and WD, and if I buy new ones I will always buy from both brands because it's easier to identify which one is failing. For me, my very first NVMe SSD wasn't great at all. It was fast indeed, but it failed within a year, and after I claimed the warranty the replacement failed within a year as well. My experience only improved when I bought WD Green and Black.
RAID 1 is mirroring.. no data is "split" and there is no speed increase for writes or reads. He said it twice. What he was explaining was actually RAID 10.
I guess I'll check around when Black Friday comes to replace the 5TB Toshiba drive in my Alienware Aurora R16. The Toshiba currently serves as a Steam games drive. I'm now wanting to consolidate Steam, Ubisoft and Epic games onto one drive. In addition to the Toshiba, I have 2 attached drives with more games. I want to get rid of the attached drives in the near future.
A continuously run HDD will die very soon. But I still vote for HDDs for long-term storage. Nowadays I use HDDs with SSD/NVMe/Optane as caching disks on ZFS file systems. However, an HDD which is run normally will last quite some time. I don't fear its failure like some others do.
I always use HDDs for backup and long-term storage!
Until you realise that SSDs will outlive HDDs. I could fling an SSD and an HDD across the room and the SSD will more likely survive.
@@Riyozsu yes of course, but I don't throw my HDDs. For travel etc I use an SSD. But I've always kept a copy on an HDD in a room without a mad thrower! :D
I could store stuff on an HDD and not power it on for 5 to 10 years and all my data will still remain safely on it; the same can't be said for flash media, aka SSDs.
Right, let's be honest. This is more a speed thing here. If you're talking about the most secure storage: tape. Yes, tape, that '80s marvel, is far and away the most secure storage solution.
Proven over decades. CDs and DVDs were meant to be the future! Tape has outlasted them all.
@@Riyozsu There's no reason to think an SSD will last longer than a hard drive. There's no reason to believe the data won't self-erase when stored unpowered for a while. They're certainly more durable in operation, but nobody really knows long-term.
I've used HDDs since the '90s; all my backups are stored on HDDs from the '90s, and I also have backups of those HDDs on newer drives. My oldest hard drive is from 1994 and the newest is from 2015, and they all still work.
I don't care about NVMe or SSDs; all my main computers run in RAID 0.
I only get rid of HDDs when the SD card of the same size drops below £10.
I have RAID 0 too, but the NVMes are MUCH faster than that!
Got 2 NVMes and 4 HDDs on my main machine.
@@ianemery2925 This statement, while it makes sense on its face, is such a staggering technological flex that it makes me proud to live when we do.
I have changed my HDD like every 2-6 years because it keeps failing. I use it to store my games and keep it in my PC. Any idea of the cause, or habits I should adopt to keep them safe?
@@MARProduction24434 What type of flooring is the PC stood on? HDDs don't like the vibration caused by bouncy wooden floorboards or panels; if it is on such a floor, try and move it somewhere that minimises the vibration, fit rubber anti-vibration mounts to the HDD, or perhaps stand the PC on a couple of carpet tiles (assuming it isn't bottom-cooled).
Shouting and screaming can also cause damage (seriously), so try and avoid that.
Finally, fit extra RAM (16-32GB) and turn off the Windows "Page File", or set it to a static 1GB size; it is notorious for cache thrashing, which prematurely wears out both mechanical and solid state drives and slows down your PC.
Every single Seagate drive I have ever owned, going back to Win95, died within 5 years; and one that failed after 6 months, Seagate failed to replace under warranty, despite my getting an authorised RMA for it.
Seagate will never get my business again.
As for size: I just bought a 6-bay NVMe NAS that is only slightly larger than 2 HDDs. It is perfect for home use, storing films and music to pipe around the house, with no worries about bouncy wooden floorboards causing head crashes.
Yeah they are crap
I have a bunch of the Exos drives, most well over 5 years old. They have been fantastic.
Me too. I don't think I've had an HDD last as long as 4 years. It just $ucking ups and dies.
Oh you are so right!
I have both WD and Seagate drives that are 9 years old. The Seagate is showing some errors in its SMART readings while the WD is as good as new. The Backblaze data checks out.
Quick correction before watching the rest of the video: HDDs *do* have a TBW rating. Or rather a DWPD, except it's annualized. Granted, it's rarely (if ever) mentioned for low-end or midrange consumer drives, but it's in the spec sheet of Exos and IronWolf Pro drives for sure (basically server and enterprise class models). If memory serves, older models are rated for 300TB/y, newer models get 550TB/y, and it's generally the same values for Seagate and HGST/WD drives. So if you want to stick to specs, you actually _can't_ write to them 24/7/365 @250MB/s because you're well past the annual workload they're rated for (and you couldn't do it anyway because that speed is _not_ the average speed when you fill the entire drive, which is lower).
That being said, it's true HDDs don't have an actual, absolute TBW limit, contrary to NVMe drives. But then again, the TBW ratings of NVMe drives, aside from being generally so high that it's virtually infinite for all intents and purposes, rarely is an actual, hard limit (i.e. the drive will function just fine way longer than its rated TBW).
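For anyone who wants to check that "can't write 24/7/365 @250MB/s" point, the arithmetic is quick; both figures below are the ones quoted in the comment above, not official spec-sheet values:

```python
# How much data would a drive move if it ran at 250 MB/s around the clock,
# and how does that compare with a 550 TB/year workload rating?
# Both numbers are taken from the comment above, not from a datasheet.

speed_mb_per_s = 250
seconds_per_year = 365 * 24 * 3600

tb_per_year = speed_mb_per_s * seconds_per_year / 1_000_000
print(f"24/7 at {speed_mb_per_s} MB/s = {tb_per_year:,.0f} TB/year")    # ~7,884 TB/year
print(f"That is about {tb_per_year / 550:.0f}x a 550 TB/year workload rating")
```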
@@yumeN0dengon Well, about that: it proves hard drives to be much less endurable than NVMe SSDs.
Even the best hard drives offer like 3PB of total workload (that's both read and write). And hard drives tend to produce more errors over time. Even if it lasts 4x that in the real world (which it likely doesn't), it's still pathetically low compared to cheap QLC SSDs.
A 16TB (even smaller than the hard drives, so it's at a disadvantage) Solidigm NVMe SSD for around $100 per 1TB (that's quite cheap for huge drives) has a TBW of at least 15PB in a worst-case hammering scenario (you wouldn't buy QLC for that) and 30PB in a more realistic, sequential workload. So even smaller QLC drives outlive the best hard drives by a factor of 5-10 in their TBW spec.
The best SSDs will do 60PBW, that’s 20-40x of what the best NAS HDDs are rated for.
@@yumeN0dengon btw. The WD Red Pro is pretty bad in particular. It's rated for 300TB per year at 20TB capacity. That's 15 full reads/writes a year.
When using in a RAID, it’s recommended to do scheduled scrubbing to ensure integrity. Recommendations are often between every 2 to 6 weeks. When doing a scrub, often the entire disk is read (depends on your setup, but at least all your data is read at least once). When scrubbing every 4 weeks (which is a usual amount), you’ll have 13 full drive reads per year just for scrubbing which is 260TB of the 300TB spec.
So, realistically speaking, you can only do a couple of full drive reads/writes a year without exceeding the specification.
Large hard drives are pretty much useless outside of cold backup storage.
Reads and writes both lead to wear on the mechanical components and manufacturers failed to scale workload capacity with disk capacity.
In SSDs it’s the other way around, more capacity means more endurance.
We wrote over 14 PB to a mirrored pair of 4TB Seagate drives in a server over a few years. Try that with an SSD!
@@lukas_ls By that logic my 18TB Exos would have died a long time ago as it has read almost 17PB of data and written 243TB in 2 years.
@@chuckthetekkie what about your error rate? Did it go up?
That's just an anecdote and doesn't tell the whole story.
Alright that intro got me in tears over here. That was beautiful
Seagate? No thanks. I’ve encountered too many Seagate HDD failures and expended too many hours recovering from them over the decades to allow me to ever overcome the negative vibe (even if some of their newer drives have improved in reliability).
That's odd. I've used Seagate drives for over 20 years now and not one of them has failed me. Granted, my sample size is relatively small; I've bought 5 drives from Seagate.
@@Adesterr you're just lucky.
Back With The Intros 😂🔥 HDD for long-term backups 💯
Correction: RAID 1 is only for mirroring and redundancy. It is RAID 0 that allows fast writing and reading, because it distributes the data and writes to both disks at the same time.
Yes. RAID 0 effectively doubles the throughput, so if the HDDs can read/write at 300MB/s then this will jump to 600MB/s, a number which surpasses a (non-RAID) SATA SSD.
@@Richo5566 maybe only in theory.
I'm using 2 HDDs in RAID 0 (striping) and don't get any significant speed improvement.
But yes, it writes simultaneously to both disks in RAID 0.
@@deepaknanda1113 I still have RAID 0 on HDDs here, and yes, you get a significant speed improvement, but not double, and for sure not even close to a SATA SSD. In the real world, the RAID 0 will give you around 250MB/s, still far from an SSD.
The only advantage of an HDD is space. If you're using a drive for storage, not writing to it much but using it a lot for reads, then the NVMe will last for decades, whereas the HDD won't last even half that long. Even if the NVMe's write endurance gets used up, you can still read from it. I have had no NVMes go bad, but some SATA SSDs have gone bad, especially from certain manufacturers, and many more HDDs have gone bad.
In the early years of SATA I had 3 drives from various manufacturers up and die... I own PATA drives that still function and those live in old DOS/Win98 PCs...
@@EbonKim given the life expectancy, reliability, integrity and capacity (SSDs come in higher capacities nowadays), SSDs are often more cost-effective in the long run. Speed and efficiency aren't even a factor in that case.
Do the math and find out how close they come in terms of long-term cost.
And cost. With my current 120TB of storage along with 100TB of backup, using pure NVMe drives would be near the price of buying a car.
Hard drives are super useful for long-term data archives, i.e. the kind where you put your backup in a fireproof safe. They do not need power to maintain data integrity (well, for decades anyway) whereas flash storage does. But if you're that serious about backups, you would use tape.
So for the non-savvy people: that means if I wanted to access stuff on the NVMe, like looking at photos or watching a film saved on there, it would last a long time, but if I was transferring photos and/or movies from my computer's hard drive to the NVMe then it would die much quicker than a hard drive? "Read" meaning looking at and accessing data, and "write" meaning transferring data onto the drive; have I understood that correctly?
Best intro on this entire channel. F***ing hilarious. But I do need to correct you on one point: HDD's do have a rated workload limit, check the spec sheet.
For both the Ironwolf Pro 24TB and Exos X24 24TB it's 550TB/year. Considering they come with a 5 year warranty, the expected lifetime is 2750TB. And this covers both read and write operations on HDD's unlike SSD's where only write operations are counted. If you really hammer these HDD's with 24TB a day like you mention in your video, from a reliability point they will be considered end of life after just 4 months. Many do last way beyond that as it's not the storage medium that degrades like with SSD's. But the mechanical moving parts like the actuator arm and heads do have a limited life span.
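A quick sanity check of those numbers, using the 550TB/year rating, 5-year warranty and 24TB/day scenario quoted above:

```python
# Sanity check of the lifetime-workload figures quoted in the comment above.
workload_tb_per_year = 550     # rated annual workload (as quoted)
warranty_years = 5
daily_write_tb = 24            # the "hammer it with 24 TB a day" scenario

lifetime_workload_tb = workload_tb_per_year * warranty_years
days_to_reach_rating = lifetime_workload_tb / daily_write_tb

print(f"Rated lifetime workload: {lifetime_workload_tb} TB")            # 2750 TB
print(f"At {daily_write_tb} TB/day that is used up in {days_to_reach_rating:.0f} days "
      f"(~{days_to_reach_rating / 30:.1f} months)")                     # ~115 days, ~3.8 months
```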
He's very wrong, but you're wrong too. MTBF is 2.5 million hours = 285 years (?!) and 285*550=157000TB
@@claudej8805 MTBF is a theoretical metric; individual components have far lower MTBFs, and it drops exponentially with temperature.
@@kleanthisgroutides7100 individual components have far less? You mean the HDD is not the sum of its parts? I don't believe that.
Theoretical and in the best conditions, yes, I agree. But to me, the MTBF of the whole HDD means all of its parts have an MTBF of at least that.
But still, an exponential drop from 157,000TB can still be a lot, and I care about the temperature of my HDDs.
@@claudej8805 ICs have far lower MTBFs, especially at high temps. It's still a theoretical metric and not really a real-world number when something is running in the field. Reliability is a bell curve over the long term; you can't lump it into MTBF. You're more likely to have an IC fail, taking the whole disk with it.
@@claudej8805 also, M.2 was never designed for high reliability… the form factor simply exists because it caught on and was cheap. If you need reliability then you must use U.2/U.3 or EDSFF.
A key point to bear in mind: Exos drives are supplied in bulk to OEMs and SIs and can't be warranted directly with Seagate; you must rely on the supplier. IronWolf Pro are classed as consumer drives and are warranted for 5 years directly by Seagate. I have both, but have just bought 5 new 16TB IronWolf Pros for a new Synology DS1522+ in RAID 6. I've found both types extremely reliable.
Seagate are trash
A RAID mirror (RAID 1) does not double the speed. It duplicates the data as a backup. If anything it's slower, since it needs to write the same data to 2 drives simultaneously. A striped RAID (RAID 0) would increase the speed and data bandwidth, but if either drive fails you lose all the data. If you need both performance and data reliability, you may want to go with either RAID 5 or 10.
Exactly. I was scratching my head when he said that in the video.
RAID 1 technically increases read IOPS if the RAID controller does load balancing?
He clearly said RAID 1 will be slower for writes, but apparently not for reads.
@@astralboy ...and? Go on...repeat the rest that pertains to our conversation.
It can improve read speeds.
Theoretically double.
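To make the trade-offs being debated in this thread concrete, here's a simplified capacity and fault-tolerance model for the common RAID levels; it deliberately ignores controller behaviour, caching and rebuild times:

```python
# Simplified model of usable capacity and fault tolerance for common RAID levels.
# Real-world speed also depends on the controller, caching and workload.

def raid_summary(level: str, drives: int, drive_tb: float):
    if level == "RAID0":        # striping: fast, no redundancy
        usable, tolerated = drives * drive_tb, 0
    elif level == "RAID1":      # mirroring: capacity of a single drive
        usable, tolerated = drive_tb, drives - 1
    elif level == "RAID5":      # striping with single parity
        usable, tolerated = (drives - 1) * drive_tb, 1
    elif level == "RAID10":     # striped 2-way mirrors (even drive count)
        usable, tolerated = drives / 2 * drive_tb, 1   # 1 guaranteed; more if failures hit different mirrors
    else:
        raise ValueError(f"unknown level {level}")
    return usable, tolerated

for level in ("RAID0", "RAID1", "RAID5", "RAID10"):
    usable, tolerated = raid_summary(level, drives=4, drive_tb=20)
    print(f"{level}: {usable:.0f} TB usable, survives {tolerated} drive failure(s)")
```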
There's only 1 main reason to get an HDD over an SSD: larger capacity and/or cheaper.
Constant writing also
no reason, just RAID 0 some good HDDs and you have better performance for way less money
@IdiawesKara RAID 0 is dead. Solid state replaced the need for RAID 0.
on the contrary, the only reason I would want an SSD over an HDD is just speed; HDDs are leagues more reliable than an SSD
@Wolfrich666 yeah, ideally have an SSD and an HDD. SSD for OS and games, platter HDD for data backup, videos, movies, photos, game backups etc.
Until you start hearing that HDD clicking sound. That's when you know it's having a near-death experience.
HDDs are still so needed and useful; not all data access has to be at NVMe speed. The best setup is a balance. I run 2 large HDDs paired with NVMe and SSD, allocating data to the drive speed it needs. For example, all my games are loaded on my SSDs and some on the NVMe, while I have vast archives of data on the HDDs. It just works out. I will always have HDDs in my PC.
HDDs with such high capacity take a very long time (days) to rebuild RAIDs, and their speed also drops a lot (up to 10x) once the drives become filled and the data fragmented.
Defragmentation is a thing, and it's up to you not to completely fill the drives. It's basic knowledge that you should leave 15% to 20% of the space empty.
@@andreabriganti5113 indeed, but it's not a thing with SSDs, so buying a 24TB HDD will give you "only" about 17TB of "usable" space if I follow your 20% rule (which is also what Synology and others recommend).
@@TazzSmk It's actually a thing for SSDs as well, and on SSDs it's not about performance but about the survival of the drive.
It's called over-provisioning, meaning you only allocate ~90-93% of your SSD to partitions.
This ensures the TRIM operation and the handling of discarded and suboptimally distributed data does not create excess writes. Without this, you will need to be lucky if a heavily filled SSD survives 2-3 years, because an SSD is in the end nothing but a RAID 0 of 8 or 16 chips with a hidden background reshuffling process (driven by TRIM and garbage collection) that runs more often the fuller the disk is, which is quite bad given that SSDs are considerably more write-limited than HDDs.
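If you want to follow that advice, the arithmetic is straightforward; the 7-10% reserve below is just the range suggested in the comment above, not a manufacturer recommendation:

```python
# How much space to leave unpartitioned for manual over-provisioning.
# The 7-10% reserve matches the ~90-93% allocation suggested above; vendors differ.

def overprovision(ssd_capacity_gb: float, reserve_fraction: float = 0.07):
    reserve_gb = ssd_capacity_gb * reserve_fraction
    usable_gb = ssd_capacity_gb - reserve_gb
    return usable_gb, reserve_gb

usable, reserve = overprovision(2000, reserve_fraction=0.10)
print(f"On a 2 TB SSD, partition ~{usable:.0f} GB and leave ~{reserve:.0f} GB unallocated")
```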
Nice ad. The cost per TB of these enormous drives is actually not great; you do better in the 8TB - 12 TB range. Besides, I will never need a 24TB HDD. My own archive machine, built several years ago, has the redundancy of six 3TB HDDs, really small by 2024 standards. They're all WD Reds (CMR, not SMR). I will have to buy bigger ones soon, probably another batch of WD Reds, but this time in the 8TB capacity.
Very interesting video for reals!! I use 4x 4TB Seagate IronWolf drives in my NAS, but in my computer I have a combination of an Intel Optane SSD 900P for Windows, a Samsung 990 Pro 1TB for applications like VMware Workstation Pro, and a Samsung 980 Pro for games, and things are very fast!! Great video!!
This is my favorite tech opening of all time! Hilarious, give your writers a raise!
For the storage server at my client's (a video production house) we run 30x 20TB Exos in a 3 x 10-disk RAIDZ2 pool with 1TB of RAM and various SSDs for caching; it runs like a charm.
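For readers unfamiliar with RAIDZ2, here's the rough usable-capacity math for a layout like that (before ZFS metadata overhead and the usual TB-vs-TiB shrinkage):

```python
# Rough usable capacity for a pool of 3 RAIDZ2 vdevs of 10 x 20 TB drives each.
# Ignores ZFS metadata overhead and TB-vs-TiB conversion, so treat it as an upper bound.

vdevs = 3
drives_per_vdev = 10
drive_tb = 20
parity_per_vdev = 2            # RAIDZ2 = two parity drives' worth of space per vdev

raw_tb = vdevs * drives_per_vdev * drive_tb
usable_tb = vdevs * (drives_per_vdev - parity_per_vdev) * drive_tb

print(f"Raw: {raw_tb} TB, usable (pre-overhead): {usable_tb} TB")   # 600 TB raw, ~480 TB usable
print("Each vdev can lose up to 2 drives without data loss")
```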
I have two PCIe-to-M.2 adapters with 1TB each. Your needs may differ. I use the drives as 1) Mint 22 boot drive, 2) home, 3) Debian 12 boot drive. I also have a 5TB HDD for TV shows and a 10TB external for complete backups of e-books, DVDs, music CDs and several other backups.
Good video! I'm old-school, it's been a long time since I built a computer but the time is now. There've been a lot of changes since my last build in 2012 (which is still going strong but getting long in the tooth. 😁) New MBs have M.2 slots, not so many HDD storage locations; I'm thinking of maybe 2 NVMEs for boot & games, then 2 HDDs for archive & general use. Looking forward to my new build, and my new case arrives Thursday, so it's ON BABY!
I have been a computer tech since 1994 and I agree 100%.
As per usual, both have specific purposes, with a few caveats. For me, 8TB WD hard drives are the backbone of my high-res modelling 'collection': after I finish a big project, everything goes over to long-term storage and archiving on HDDs. The catch is that, while they are tons cheaper for bulk storage, to keep them on the usable side for years and years you have to spin them up once in a while. The older they get, the easier it is for them to freeze in place when not used.
NVMe has relatively limited storage, but it comes at you like a flood vs an ice cube melting vis-a-vis the HDD. Even if you're careful about the data you call for read cycles, they definitely have a limited speed lifecycle: the more times you rewrite them, the more drastically the NVMe's performance drops over its life. They are better suited to writing data once and keeping it for quick reads, versus using them like most people do, where data is constantly written, deleted and the space reused. After a year or two, their speed drops to half or less of 'new'/advertised. SATA SSDs naturally fall somewhere in between, while having none of the pros of either other than being overall faster than hard disks. Getting the half-TB variety or so for dedicated OS use is a cheap way to go, however.
Be honest you made this video just to show off those 24TB drives. 😂🤣😂🤣 Nah, just kidding. Keep up your great work mate! You are a very reliable source when it comes to hardware. 🙏🏻🙏🏻
I use both Mac and Windows for different projects. Can you recommend a direct connect thunderbolt or USB-C hardware raid case that can hold at least 4 drives?
Please note: MTBF is not a promise or general expectation of effective life before a failure. MTBF is a calculation based only on the components' ratings and the stress environment where the module is installed. It is not uncommon to have some kind of failure at 10-20% of the MTBF calculation with previous-generation HDDs carrying MTBFs of 150k to 250k hours. That is why there are so many internal monitoring attributes under SMART embedded in the HDD firmware; these would not be necessary if the MTBF were really a dependable value. Also, I think each system should be powered through a UPS. Nevertheless, for very large storage, this was an interesting video.
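One way to make MTBF figures less abstract is to convert them into an approximate annualized failure rate; this is the standard exponential approximation, and the 2.5M-hour rating below is only an example figure, not a claim about any particular drive:

```python
# Convert an MTBF rating into an approximate annualized failure rate (AFR).
# Uses the usual exponential-failure approximation; the 2.5M-hour figure is
# just an example rating, not a guarantee for any specific drive.
import math

def afr_from_mtbf(mtbf_hours: float, powered_hours_per_year: float = 8766) -> float:
    return 1 - math.exp(-powered_hours_per_year / mtbf_hours)

mtbf = 2_500_000
print(f"MTBF {mtbf:,} h -> AFR ~{afr_from_mtbf(mtbf) * 100:.2f}% per drive per year")
# ~0.35%: in a 1000-drive fleet you would still expect a few failures every year,
# which is why SMART monitoring and redundancy matter regardless of MTBF.
```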
This is perfect, these are the exact 2 drives I have been looking at for archive drives!
I use HDDs in a RAID for storage. For those who really are concerned with data protection (and almost no one is):
Take the projects that you are actively working on and copy them to an internal NVMe drive, separate from your OS drive.
Conduct your work... then copy the project back to the RAID.
This does a number of things. First, it allows you to take advantage of the high speeds for editing. Second, it allows you to edit without worrying about damaging the original.
Lastly, it lets you wear out a drive unconnected to the operation of your computer (much easier to replace than the system drive).
Thanks for what you do and bring to us. The $1k question:
Hard drives for their longevity and durability.
I only get Seagate IronWolf Pros anyhow.
But tempted to experiment with the newer 26TB REDs from WD.
Thoughts on making a video on that yet?
If you stick to WD, go with the enterprise HGST Ultrastar ones (also owned by WD). They are reliable and less expensive than the Red Pro ones. With the Red Pro ones you pay a premium for lower power consumption and noise reduction, as they are supposed to be used in a machine running in the same place as you are. The enterprise-level disks are more durable but use a bit more power and rattle more 😁. But then again: if you build a storage system for a NAS, you typically don't want it in the same place where a lot of people are, to avoid people bumping into it for example. So noise doesn't matter. The only thing to think about is the power consumption.
Power consumption is important for me as well.
I'm waiting for the price of NVMe USB4/TB4 DAS enclosures and 4TB-8TB NVMe SSDs to drop in price sometime in the near decade to where reasonable prices can allow a prosumer/consumer load up 75TB DAS units with drives meeting low power consumption and blazing speed specs without breaking the bank.
...But the HDD mafia will not allow that anytime soon to happen, so it will be a while, in my experience, before something like this becomes a reality and becomes affordable for middle-class prosumers/regular consumers.
Finally a coherent video, thank you very much for this wonderful video, HD is indeed very useful, especially for home users who store masses of files that are lost over time or even forgotten.
Actually, I have experience with both NVMes and HDDs. The helium-filled HDD I'm using now holds a constant transfer rate of around 250 MB/s without any saturation slowdowns. Yes, it's much slower for small-file seeks and similar operations, but for large file transfers an HDD is way better than many NVMe drives, which lose their speed dramatically once saturated. IMHO.
Both. In an UNRAID NAS, use the SSDs or NVMes in a RAID 0 cache pool to move files temporarily onto the NAS. The NAS then moves those files onto the hard drives, which sit in a fault-tolerant RAID setup. This hybrid setup makes it a breeze to get files onto the NAS very quickly while still keeping the advantage of fault tolerance.
Reason 2 is just wrong. All HDDs have a so-called workload rating that tells you how much data can be written to or read from the drive.
A WD Red Pro 20TB is only rated for 300TB per year over 5 years. That's worse than most TBW ratings.
True, I scratched my head at that one too.
For Model Number: WD201KFGX WD Red Pro NAS Hard Drive - 20TB, it's rated for 550 TB/year workloads and up to 2.5M hours MTBF.
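For a sense of scale, here's the back-of-the-envelope arithmetic on what a 550 TB/year rating allows as a sustained rate; the 250 MB/s used below is just an illustrative sequential speed, not a spec for this particular model:

```python
# Back-of-the-envelope: what does a 550 TB/year workload rating mean as a sustained rate?
SECONDS_PER_YEAR = 365 * 24 * 3600          # ~31.5 million seconds
workload_tb_per_year = 550

sustained_mb_s = workload_tb_per_year * 1_000_000 / SECONDS_PER_YEAR
print(f"{workload_tb_per_year} TB/year ~= {sustained_mb_s:.1f} MB/s sustained")  # ~17.4 MB/s

# Writing flat out at an illustrative 250 MB/s around the clock would be roughly:
full_speed_tb_per_year = 250 * SECONDS_PER_YEAR / 1_000_000
print(f"250 MB/s non-stop ~= {full_speed_tb_per_year:.0f} TB/year")              # ~7900 TB/year
```

So the rating works out to under 20 MB/s averaged around the clock, and writing flat out would blow past the annual workload within about a month.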
@@lukas_ls you're missing a key point: HDDs are used in RAID, so the workload limit isn't an issue, as it's expected that drives will die; hence the RAID.
@@kleanthisgroutides7100 that doesn't make them better than SSDs
Hard drives do not have a defined workload from a technical perspective. The term was invented to deny warranty claims and artificially differentiate drives based on the market segment.
In terms of price vs. capacity, and also for long-term storage, especially without power, there is still nothing better than a good HDD.
And the latest NT001 and NT002 series IronWolf Pros are awesome: pretty quiet, very fast for HDDs, and at up to 24TB a solid statement, not to mention the data rescue support for 3 years.
I'm running two of them, a 20TB NT001 and a slightly older (3-4 years) 12TB model, and I can't complain.
Planning a NAS/home server, I will surely go with some NT001/2 models.
The IronWolf Pro hard drives do have a workload rating of 550 TB/year; the Exos drives don't publish that number.
Exos are currently rated the same. It's in the product manual (see spec summary tables). I would've sworn it was also in the data sheets, but it's not.
I’ve got a 16TB exos. It’s noisy and crunchy sounding but I know it’s durable!
So which is better for gaming purposes, IronWolf Pro or Exos?
Using a quartet of 2.5" Seagate 5TB HDDs in a RAID as a Time Machine backup archive for my home network. Gives me about 14TB of Mac back ups overall which is perfect for my needs. The NAS on my Win10/11 systems is a bit bigger.
LOL! The text to speech at the end was far more entertaining than helpful!
I use spinning disc HD's for storage and I'm old enough to remember when that's all we had and there was no such thing as a 1TB drive in a home PC. My home server and both NAS' are spinning disc. My home NAS running UNRAID has a M.2, a SSD and 10 HDD's. I went through the WD Red kerfuffle a couple years ago but I knew there was going to be a way around it.
When I finish editing on my rig it's exported to two different storage sources off of my editing rig.
I use disk drives for my NAS and NVR. NVME drives are used in my PCs for gaming and speed. Imagine the slow boot speed with a disk drive!
I only need them for gaming, where loading textures makes a big difference, plus faster booting and app loading. But I use HDDs for my movie servers, and they work great for that and for backup.
I scored a load of Sky boxes a couple of years ago and took the 1TB HDDs out of them, and they suit my needs perfectly. One thing I don't think was mentioned is the electricity usage of HDDs being a lot less than NVMes, at least as I understand it. I have a 1TB NVMe for my OS, but I have 2x 1TB HDDs for my backups. Great video BTW.
I think you have that mixed up. The SSD will use less power than HDD due to no moving parts.
@@Richo5566 well, that's exactly what I used to think, but then I saw a report that an HDD (used mostly for backups) uses a lot less electricity, because SSDs draw power all the time.
I thought about buying 24TB hard drives, but I found that 8TB SSDs (2.5", SATA drives) offer the best bang for the buck. They're lightweight and offer higher speeds than magnetic drives and come in pretty cheap. I have too many AI models, which adds up to disk space.
For OS partitions SSDs, for data storage HDDs. For my private file-sharing server I recently configured tiered storage, meaning a smaller 30 GB SSD partition acting as a cache for a larger 1 TB HDD main store. Works great so far; roughly the setup sketched below.
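If anyone wants to reproduce this on Linux, lvmcache is one common way to do it; a rough sketch under the assumption that the 1 TB HDD logical volume already lives in a volume group, with all device, VG and LV names made up:

```python
# Rough sketch of SSD-cached HDD storage on Linux using lvmcache (dm-cache).
# All device names, VG/LV names and sizes here are hypothetical examples.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

VG = "vg0"                # volume group already containing the 1 TB HDD-backed LV "data"
SSD_PART = "/dev/sdb1"    # ~30 GB partition on the SSD to use as cache

run(["pvcreate", SSD_PART])                       # make the SSD partition an LVM PV
run(["vgextend", VG, SSD_PART])                   # add it to the existing volume group
run(["lvcreate", "--type", "cache-pool",          # carve a cache pool out of the SSD
     "-L", "28G", "-n", "cache0", VG, SSD_PART])
run(["lvconvert", "--type", "cache",              # attach the pool to the HDD-backed LV
     "--cachepool", f"{VG}/cache0", f"{VG}/data"])
```

bcache or ZFS's L2ARC get you to a similar place; lvmcache just happens to bolt onto an existing LVM layout.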
I have been using an NVMe for the OS and general software and still use HDDs for games and bulk file storage since my previous FX-8350 build (now I have a 5900X). I currently have a 1TB NVMe, a 3TB HDD and a 12TB HDD, with plans to get a 2 or 4 TB NVMe and then another large HDD to upgrade the 3TB one.
I have had nothing but problems, with Seagate.
It was a while ago, but since then I haven't dared to gamble my files on my NAS just to check if they've ever improved.
I miss my UW SCSI with 10,000 RPM drives….
I have always liked the Seagate drives. Currently I use a 2TB NVMe with my MacBook Pro b/c it is much more portable and does not require external power.
On my desktop Mac, I have stuck to external HDD b/c you simply cannot beat the bang for the buck. And the data is recoverable in case of failure. The internal Mac storage is NVMe, which I use only for the OS and application storage. My photography and iTunes is kept on a Drobo RAID. Time machine is on its own external drive.
I never buy a drive that can't do at least 7,200 RPM, and I've noticed that faster drives are difficult to find these days.
Thanks for the info on the differences in the Seagate line up
Not to mention you're using 4 PCIe lanes for an NVMe drive, while an HDD goes over the DMI link through the PCH.
With a 24-lane CPU you need 16 PCIe lanes for your 4090, which leaves you only about 2 NVMe drives' worth of lanes.
Large data transfers on NVMe also create a tonne of heat; I have seen some of the chips literally sweat goo from the NVMe 😳
I still love my NAS HDD as my back ups.
I just use 2x 6tb mirror for my needs.
I don't plan on buying a NAS, should I still get NAS drives for my regular pc?
I'm upgrading from a 500GB WD Blue and planning to buy either a Seagate Exos or an IronWolf Pro. Which of the two do you recommend for long-term gaming? They're not brand new, just freshly pulled.
Not touching a Seagate with a ten-foot pole. Every single drive that has ever failed on me has been a Seagate. I've got several WD/HGST and Toshiba drives that have been running well for over a decade now.
Yep, I still prefer old-tech HDDs in my devices, where the r/w endurance of spinning platters over 5 years far exceeds that of SSD/NVMe drives. I've only had 2-3 drive failures in over 3 decades of use.
There's the use case for ssd/nvme over hdd, but HDD is still relevant technology even today. Cheers.
While I still use hard drives for long-term data storage and backups, the problem I have with them is that capacity keeps growing with time (up to 24TB nowadays) while speed remains pretty much the same unless you use RAID. Speed just can't catch up with the increases in capacity. So working with files on an HDD becomes increasingly slow, since file sizes also tend to grow now that we have fast multi-core CPUs and fast RAM to create and process them. It becomes daunting even to use them for backup storage because of the long wait to write all those terabytes, not to mention the poor performance with lots of small files. Something needs to be done to bring their speed more in line with their capacity, and I hope they are working on this.
In an HDD there is one actuator reading/writing multiple discs, front and back of each disc, so perhaps they should improve on this by having multiple actuators reading/writing multiple surfaces in parallel.
Use an NVMe caching layer in front of them and put the HDDs in either RAID 1 or RAID 5.
Data recovery too... You can go to specialist data recovery technicians who can recover data from a non-working HDD, while with SSDs the data is 99.9% of the time completely lost if the drive breaks.
For reliability:
1- WD Ultrastar DC HC-5/6
2- Toshiba Enterprise capacity MG series
3- Seagate
Seagate no thanks. The others sure.
I have tried many backup programs such as Acronis etc. They never work as advertised, and recovery is usually very slow. My method, which has worked well for me and is much faster, is to keep two cloned SSDs exactly mirroring each other. When I get a glitch I just reboot, go into the BIOS, move one of the other two system drives up to first boot, and I'm done. Then I just re-clone the corrupted drive. I refresh the clones at least every month, or sooner if I have made many changes. This method is accurate, fast and dependable. I use all M.2 NVMe or SATA SSDs. It is foolproof and has never failed me.
Still waiting for the zen 5 productivity benchmarks.
There are other differences between the Exos and IronWolf series drives too, like heat; I recently replaced a failed IronWolf drive with an Exos drive, and it runs several degrees warmer than the other IronWolf drives in the NAS. The drives are about 7 years old and have about 4 years of use on them.
MTBF is largely hype in my opinion; 1 million hours MTBF equates to about 114 years.
Had I had $1,000, the choice would be 1000/54 ≈ 18 Kingston A400 960GB SATA 3 2.5" drives, the poor people's treasure.
Keep up making the good videos.
That intro was hilarious!!
Thanks for this retro-feeling video of HDD vs SSD.
Well, not so long ago I bought 2x 4TB HP 7.2k RPM SAS Premium drives for £40 each on eBay for my slowly growing edit rig, so I am very happy!
You don't need a high TBW rating on a drive that's used for archiving, because a lot of the data on it will be written just once; from that perspective an NVMe drive is *better* suited to archiving than an HDD, except for the price.
An NVMe drive is never suited to archiving, because if it gets no power for 6+ months, all the data will be lost.
@@phaikyouser9499 No... the accepted storage lifetime of a modern NVMe drive left unpowered at an ambient temperature of 25 degrees C is 10 years or so.
That is what you think. Read a bit on how data is organized across a RAID and what SSDs and NVMes are doing in the background. On the disk itself, data is continuously relocated. Add RAID 1 to the mix and there is a lot more writing going on than you realize. Use RAID 5 and then keep an eye on the actual write counters; you would be surprised how quickly consumer-grade NVMes wear out in a NAS.
You could reason: I'll just switch it off. But then what is the point of having such an expensive flash-only NAS?
Good video. Exactly my experience with NVMes. The problem with modern NVMe SSDs (and SATA SSDs) is simply that they are bad at very large (>150 GB) file backups; write speeds drop badly once the cache is exhausted. I came back to HDDs, in a NAS with 10G Ethernet, and I heavily use btrfs RAID 0. You get steady write speeds of over 300 MB/s. I haven't tested with more than 2 disks yet. And yes, RAID is NOT backup! Interestingly, my old SATA SSDs are very good at sustained write speeds, thanks to SLC...
Personally, I tend to use a tiered storage system in my NAS. I've got big HDDs running for archival purposes and backups.
Standard SSDs for VM storage as well as warmer data that doesn't need very fast access but shouldn't languish on HDDs, and I've got a small-ish NVMe RAID running for hot storage. However, due to the prices of SSDs in general, it's relatively expensive. I started out with just four 6TB HDDs, which I recycled into my current NAS, bringing my usable HDD storage space to 48TB (a 36TB + 12TB config).
Most important data is then also backed up to the cloud (encrypted of course). Since that's a fraction of my overall space, I'm pretty comfortable about my data.
Next stop is tape storage :D
You forgot: for better archival and heavy use, avoid any drive that uses shingled (SMR) recording and stick to CMR drives.
Great video for both mediums of storage, I tend to use both nvme for high speed game access & HDD for music. How long can data remain intact on an NVME/SSD unconnected to a power source compared to a HDD?
What would I do? For massive long-term storage where I'd only ever read the files, go optical. Then use a smaller mechanical RAID for general storage that doesn't add up to too much. Imagine the time it will take to upload/download 70TB+ from a massive RAID NAS/DAS: several days if the drives are mechanical. It does come down to use case. A nice USB4 NVMe RAID is nice to work off of without too many worries or bottlenecks. Big project done? Burn it to optical.
SSDs require being supplied with power at least once every 8 months or so (maybe mine were cheaply made), or else the data stored on them can degrade; that's from personal experience after not using them for 9 months at a time. But with my external HDDs, after two years in storage I experienced no loss of data at all when I connected them to my computer.
Ironwolf never disappointed me
Using a Synology 1821+ with 8 bays in RAID 6, plus 2 NVMe drives in RAID 0 for random IOPS, and bonded 10 GbE LAN. 7200 RPM IronWolf Pros are really good for long-term storage with huge capacities, the speed is really not bad, and the storage is shared across multiple PCs, notebooks, phones etc.
It's also easy to replace a drive and rebuild the data to prevent loss, and the main reason is that it's really cheap per TB.
For NVMe, just installing 2 sticks per PC is really enough.
Now if I want to upgrade to 128 TB it's also easy; I can upgrade the drives *one by one*, each time I need to.
I use SSD for my OS install and modern games, and use an HDD for my older games , because I like having all my games available on my system.
Are "hybrid" HDDs still around? I have an older Seagate "hybrid" HDD that was just as faster as earlier SSDs at that time. At that point in time: a 2TB "hybrid" drive was like $80.00 while a 2TB SSD was almost triple that. Plus, those drives are now almost 6 years old and have shown no signs of failure. They don't 3ven have "head noise" yet.
@@MMOPC78 yeah Seagate still make a few. It’s called the Seagate Firecuda SSHD in a 2TB capacity with an 8GB SSD used for frequently accessed data such as the boot files and the apps and games you use most.
In my testing, Windows booted much quicker than a standard HDD but some apps had to be opened a few times before the drive put them in the SSD storage.
They were also introduced at a time when large SSDs were expensive and having these provided a way of getting quicker boot times but still having lots of space of storage. This was handy in devices with space for only one drive such as laptops.
Things are different now and large SSDs are much more reasonably priced so the need for these diminished.
Unfortunately, the newer energy-assisted drives, which heat the media locally to make the spot easier to write, DO have a limit on how much you can write, just like an NVMe.
I have a NAS with 4x 18TB Exos drives, but I turn it on only to archive my stuff, as otherwise it's like having an espresso machine perpetually brewing :')
Sadly, modern gaming laptops don't have a SATA interface anymore.
Could have summarized this in a shorter video, probably.
HDD: Long term storage/RAID setups/immediate access to data not always critical, redundancies less expensive.
SSD: IOPS - Faster access to data, not just load speed
NVMe: IOPS + load speeds, faster access to data and the ability to work with it faster. Example: editing a video? Scrubbing speeds skyrocket on an NVMe. But I'm sure everyone has an idea of this at this point.
The only reason to use an HDD is if you're offloading data for convenient access and backup, since it's less expensive to keep with redundancies in place.
In the end this is all depending on intended use and individual needs.
You forgot to mention how easy it is to recover files from a HDD but with SSDs it's very difficult and sometimes nearly impossible
Is WD blue a good brand for HDD?
Already been planning to go with HDD backups for the home network.
I use an NVMe for apps and a HDD for my mass storage because the cost per gigabyte on a HDD can't be beat
I just purchased 5 Seagate IronWolf Pro 16TB drives for a TrueNAS setup, costing $1060 CDN.
So far, SSDs/NVMes seem like a way safer option, as they fail extremely rarely compared to hard drives (at least in my experience).
But yes, for a NAS, HDD is still the best choice, even tho I'd trust another brand... But, that's another story :)
I would only use Nvme M.2 SSD for OS and Gaming drive.
Download and backup is for HDDs
The reason is Seagate paid well to sponsor the video.
So I buy external HDDs to store movies and music on, but rarely do they live past a couple of years. They don't move, they just sit on my desk until I need them and then get plugged in to write or read. So I'm seriously considering an SSD. Until recently my storage needs have been around 4TB for movies and TV shows and 2TB for music. I'm just sick of shelling out for HDDs every couple of years. I've had Seagate and WD.
I use SSD/Nvme for boot media, hdds for data storage and archives
I paid 1400 dollars for an external 20GB drive. :)
MTBF can be tested in different ways, yielding hugely different results. If you actually ran constant writes 24 hours a day for 8 years, or in some cases the 100+ years implied by the million-plus-hour ratings, these drives would assuredly die.
Taking 300,000 drives, running them all for an hour and saying each drive should last 300,000 hours is a completely flawed system.
Seagate has moved to an annualized failure rate instead of MTBF for this reason.
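For anyone who wants the numbers behind that, the standard conversion from MTBF to an annualized failure rate is a quick calculation (illustrative only; it assumes the usual constant-failure-rate model, not any specific drive):

```python
# Convert an MTBF rating into the roughly equivalent annualized failure rate (AFR).
# This is the standard exponential-failure approximation, not a lifetime promise.
import math

HOURS_PER_YEAR = 8766  # average year including leap days

def afr_from_mtbf(mtbf_hours: float) -> float:
    """Probability that a drive fails within one year of continuous operation."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

for mtbf in (1_000_000, 2_500_000):
    print(f"MTBF {mtbf:>9,} h  ->  ~{mtbf / HOURS_PER_YEAR:5.0f} 'years'  "
          f"->  AFR ~{afr_from_mtbf(mtbf) * 100:.2f}%")
# MTBF 1,000,000 h -> ~114 'years' -> AFR ~0.87%
# MTBF 2,500,000 h -> ~285 'years' -> AFR ~0.35%
```

In other words, a 1M-hour MTBF never meant your drive lasts 114 years; it meant roughly 1 in 115 drives in a large fleet is expected to fail per year of continuous running.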
HDDs become more useful when they are in RAID 1, 5 or 6. Exos are my choice too. Rock solid for me.
Well, I had 4x 20TB Exos X20 drives in my DS923+, running without RAID, but I have 2x 250GB Samsung NVMe SSDs for cache in RAID 0, which I think is enough. I did think about doing RAID 0 or 1 for the HDDs, but even with my 10G ISP speed you won't see any difference in your phone's photos app or Plex. 20TB was the best price-to-performance HDD a year ago, but now we have 24TB versions, so I'm waiting for Black Friday to buy 24TB drives. I already sold 2 of the 4 HDDs, put 1 HDD in a UniFi Pro Max, and kept 1 for my photos app and a couple of movies, with Plex waiting for the new storage :)
@@Cemilaws did you lose much $ on the 20TB drives you sold?
@@Richo5566 I bought them for 325 euros each, shipping included, and sold them for 285€ per HDD on the second-hand market.
@@Cemilaws not too much loss then!
I have 10 computers in my home office, the highest being a Xeon W7, 512GB RAM, RTX 3090. I have 3x 16TB HDDs in my ASUSTOR NAS in RAID 5. All of my systems have a 1TB/2TB HDD as a backup clone of their 1TB/2TB NVMe OS drive, so in case of OS drive failure I just reboot. My main computer's data storage is 4x 8TB HDDs (data/backup), and my HTPC has 2x 6TB HDDs for movies (data/backup). I use NVMe or SSD only for OS drives and fast temporary work drives. In the past 30 years I have had two HDDs fail, versus three SSDs and one NVMe. I don't write that much data to any of them. I have Robocopy scripts on all of my systems to mirror the data HDD to the backup HDD.
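For anyone curious what such a mirror script can look like, here's a small Python stand-in for it; the drive letters, folders and log path are made-up examples, and the Robocopy switches are the usual mirroring ones:

```python
# Minimal Python stand-in for the kind of Robocopy mirror script described above.
# Drive letters, folders and log path are hypothetical; adjust to your own layout.
import subprocess
import sys

PAIRS = [
    (r"D:\Data", r"E:\Backup\Data"),   # data HDD -> backup HDD
]

for src, dst in PAIRS:
    # /MIR mirrors the tree (including deletions), /R and /W keep retries short,
    # /NP suppresses per-file progress, /LOG+ appends to a log file.
    result = subprocess.run(
        ["robocopy", src, dst, "/MIR", "/R:2", "/W:5", "/NP", r"/LOG+:E:\robocopy.log"]
    )
    # Robocopy exit codes below 8 mean "copied / nothing to do"; 8 or higher indicates errors.
    if result.returncode >= 8:
        sys.exit(f"Robocopy reported errors mirroring {src} -> {dst}")
```

Run on a schedule, this keeps the backup HDD an exact mirror of the data HDD, deletions included, which is why a separate offline or offsite copy of anything irreplaceable is still a good idea.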
Get a NAS with both and use the NVMe drives for cache.
I don't use NAS HDDs. Actually, the 2 drives in my NAS are just normal ones, one from Seagate and one from WD, and whenever I buy new ones I'll always buy from both brands because it makes it easier to identify which one is failing.
For me, my very first NVMe SSD wasn't great at all. It was fast indeed, but it failed within a year, and after I claimed the warranty the replacement failed within a year too. My experience only improved when I bought WD Green and Black.
I got 4x 16TB HDDs from Toshiba 2 years ago. Now, 24TB?
It is actually only 3 times more, because the hard drive is 6 times bigger?
RAID 1 is mirroring: no data is "split" and there is no speed increase for writes or reads. He said it twice. What he was describing was actually RAID 10.
I guess I'll check around when Black Friday comes to replace the 5TB Toshiba drive in my Alienware Aurora R16; the Toshiba currently serves as a Steam games drive. I now want to consolidate Steam, Ubisoft and Epic games onto one drive. In addition to the Toshiba, I have 2 attached drives with more games, and I want to get rid of the attached drives in the near future.
A continuously running HDD will die fairly soon, but I still vote for HDDs for long-term storage. Nowadays I use HDDs with SSD/NVMe/Optane devices as caching disks on ZFS file systems, roughly as sketched below.
An HDD that is run normally, however, will last quite some time. I don't fear its failure like some others do.
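For reference, bolting those cache devices onto an existing ZFS pool is a couple of commands; a rough sketch with a hypothetical pool name and device paths (the `cache` vdev is the L2ARC read cache, the `log` vdev is a SLOG for synchronous writes):

```python
# Rough sketch of adding SSD/NVMe caching devices to an existing ZFS pool.
# Pool name and device paths are hypothetical examples.
import subprocess

POOL = "tank"

# NVMe partition as L2ARC (read cache); losing it never loses data.
subprocess.run(["zpool", "add", POOL, "cache", "/dev/nvme0n1p1"], check=True)

# Small, fast (e.g. Optane) device as SLOG for synchronous writes.
subprocess.run(["zpool", "add", POOL, "log", "/dev/nvme1n1p1"], check=True)

# Check the resulting layout.
subprocess.run(["zpool", "status", POOL], check=True)
```

Losing an L2ARC device never costs data; a lost SLOG only risks the most recent synchronous writes, which is why people often mirror it.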
HDDs still work perfectly fine for strategy games, and really most non-FPS games.
Spinning rust is just spinning rust! Tell your sponsors!
Very helpful. Thanks.