And the HDD companies hit back on speed against SATA SSDs. This is really cool tech and I'm excited to see if these will end up becoming affordable for consumers eventually
Looking at the prices, it's ~13%, ~$30 more compared to the normal Exos. It also consumes ~2 watts more. If you need performance and space is an issue, it's good. If space is an issue but performance isn't, then the normal Exos is fine. If performance is an issue but space isn't, then get a few 10TB or 8TB drives instead.
@@GraveUypo this is fair but these will be very nice for large storage systems where the really critical operations aren't random, re-silvering and the like. Both the storage density and the speeds are extremely welcome and I'm hoping these drop a little in price by the time I'm ready to re-disk my JBOD
SSDs are so cheap and have so many upsides that these are basically a novelty. The main benefit of SSDs, other than speed, is their longevity. Even when they can't write anymore, you at least can still read the data on them. That's a big deal for data storage. Now of course one could argue that you can get more capacity for cheaper than SSDs, but if that capacity doesn't come with speed and low latency, then even in the business space it's pretty useless these days. This is a product that would have been amazing 20 years ago, but is just meh today. It's just too little too late. And the latest NVMe drives are ultra compact; they're putting 4TB of storage at 7GB/s and 1 million IOPS in the size of a small thumb drive, at pretty affordable prices.
It's funny how people keep saying that hard drives are dead, but they keep finding ways to keep them relevant, even outside of capacity. I'd love this for the working data drive on my server. I'll be really interested to see the random read/write performance as well.
@@Kanakarhu they still make some sense, I think. I've heard the explanation that they're very cheap for their capacity, and the seek times don't matter much in the backup and restore scenarios they're often used in
@@Kanakarhu Even in server related groups, I've been laughed at for saying tape is still quite relevant and is still widely used in the enterprise space as backup. Those people are simply ignorant. They're enthusiasts that think they're super hard core, but don't really know the market when it comes down to it.
@@iaial0 They're incredibly cheap for their capacity. You can get a 20TB HDD for less than the cheapest 8TB SSD. And yeah, they're great for cold storage and backups, but HDDs are still more versatile than that
@@TheGameBench SSDs are great for cold storage. And especially for catastrophic loss backups, because you can get back up and running much quicker. It can mean the difference between being out of business for hours or days, which more than makes up for the extra cost of SSDs. Things like tape also need specialized storage facilities. For SSD storage all you'd really need is to power them up once every year to ensure data retention, and if you really want to ensure data retention, to have a mirror backup that copies over between two drives to ensure the write charge is fresh. And this all can be automated inside of a suitcase-sized container that can be stored almost anywhere. Companies only stick with tape backup because they're already invested in it, and because of business insurance purposes. Basically, big tape has the system locked down, ensuring that what insurance covers will be in jeopardy if businesses use anything else.
SIR!!! I have always marvelled at the achievements of electro-mechanical engineers in the field of hard drive technology. To me, these achievements rank among mankind's greatest so far. And these chaps don't get much applause. Just when you think the hard drive is a thing of the past, it rises like a phoenix once again. "The Hard Drive is Dead! Long Live the Hard Drive!"
Damn... And here I am, having just installed 2x 8TB QVOs, started transferring ~3.5TB of video files, and quickly watched transfer speeds plummet to ~150MB/s once the cache ran out.
I had 4 of those in RAID 0 (before SSD prices plummeted), which brought the speeds up to meaningful levels, also because you could write 4 times the amount of data at once till the cache ran out. But without striping several of those, it's indeed very 2010-feeling.
Heh, there was a time, about 25 years ago, when 15 MB/s would have been considered crazy fast and you needed a RAID array to achieve anywhere close to 50 MB/s and here we are looking down at many times more than that speed. How quickly technology marches on and how rapidly we humans adapt and get used to nicer things..
I've been waiting on these for years and now they are available and I don't need them. Looking forward to seeing what the difference between the SAS and SATA version is.
@@mrmotofy I have spares of the same model in production, and even a bunch of used Coolspin drives that I intended to use for a backup server if I absolutely had to.
I've got a couple of questions. Are these dual-head drives quieter than single-head drives during read/write operations, since there's less mass moving and slowing during a seek? Or louder, due to 2 separate seeks being able to be performed at the same time? What's the power requirement difference between single and dual? In the enterprise world, we don't really care. But with SATA versions due, I can see that the "average" SOHO user and homelabber who may take note might care a lot more, as kWh=$$$$ these days. Overall, really cool. However, in the coming years they might have a tough market to compete in should flash prices keep dropping at their current rate.
I have a pair of the 2x18 drives in my workstation, and seeks are definitely quieter than the Exos X18 drives they replaced. Like other helium drives, there is little to no rotational noise.
I would expect slightly higher average power consumption due to controller complexity and the need to move and rotate the platters and heads at the same speeds while using smaller actuators, but almost certainly lower power consumption than two separate drives.
It's great that SuperMicro has these for brand new with manufacturer warranty and everything for a reasonable price. Server Part Deals has had refurbished 2x14 SAS and 2x18 SATA models in stock off and on for $150 and $220 ea. respectively. It's probably worth buying these new for data you care about... or use the savings to buy a few spares! Two or three of these can keep a 10GbE link saturated so they are perfect for a home NAS without a ton of drive bays. Hopefully this will be the last time I buy spinning rust for my personal data hoard. Thanks for the video Wendell!
Oh I thought someone had finally made the idea I had probably 20 years ago of multiple heads per platter. This doesn't seem much cleverer than jamming two drives into one box, especially with added complexity of having to figure out which platters are on which drives. An interesting idea for sure, but feels like it needs more development time.
I'm surprised Wendell and Linus haven't done this already. I know Linus has needed to call Wendell on some of his networking and servers; it's time to help the legend with his own petabytes project.
@@MinorLG I'll be posting on the forums. Finally got things coming together, and 8x18 isn't enough. Plus I made the mistake of making a JBOD instead of a RAID 5 with 2 hot spares. I have over 45TB of data, so this server will back up the main one, then I'll wipe it and recreate it as RAID 5 with 2 drives of fault tolerance.
As someone who worked on an exabyte-scale Hadoop cluster: it literally takes thousands of machines in a data center. An exabyte is 1 million TB, or about 55 thousand of those new 18TB drives 😲
The 9MB Seagate is awesome! Those old tech times were honestly way cooler than nowadays. Note: I bought my first PC with a 40MB Seagate drive in an 80286 12MHz PC. But I liked those times A LOT!😊
You had me at "eleventy" I thought I was the only one who said that. Love these drives and I can't wait to get all those "gigglebytes" of read/write speed.
it amazes me that the air current (well, helium to be precise) from one head moving doesn't disturb the positioning of the other head. Or, how do they know that certain seek patterns can't create an oscillating gas current that could amplify and cause errors or even a head crash.
The two actuators sit on the same actuator axis, so the upper actuator has the heads for platters 1-5 and the lower actuator for 6-10, say. So interference should be very low to begin with. This also means that these might be scaled to three or four actuators, because the footprint doesn't really increase.
This is an improvement in storage density. Nothing else. All other advantages are already available thru software trickery and multiple separate HDDs working in tandem.
'91 or '92 I bought two HP 2GB drives the same size. They took forever to spin up and would vibrate the table. They had a chime for when they were done spinning up and ready; sounded like an airplane fasten-seatbelt chime. I used an old sturdy XT case and ran them externally using IDE cables out the back of the main system.
Not in the 3.5" form factor, there's not enough Z height for all of the magnets and actuators. If they brought back the 5.25" drives, they could scale the capacity way up, but that'd present a different set of challenges.
Also look at the arc of the head movement. You have a density limit there as well and they cannot impact the laminar airflow that the heads ride upon. I'd imagine more heads already made that smooth air buffer choppier. I wonder if anyone has revisited Bubble Memory. It actually shipped in the early 1980's - terribly slow access times but the density was stunning. I think the great granddaddy of the modern clam shell laptop, The Grid Systems Computer, shipped with bubble.
If you want 4~6 actuators, go to the market today, buy 4~6 HDDs, slap them together in a raid config, and you have your "4~6 actuator" HDD ready in your hands today itself 😂
@@GGigabiteM 4 would be possible if they could get mirrored sets of heads. Have 2 stacks of 2 actuators: one stack doing the tops of the platters, and one stack doing the bottoms. You would likely lose a platter due to needing enough height for the arms to overlap. With a helium drive it shouldn't be too terrible of a loss.
Other than the number of vdevs involved, are there any differences between doing the two raidzs vs first raid0-ing the drive halves and putting them into one raidz? One benefit of doing it with the two raidzs is that in the event of half a failure, you would not risk the second vdev at the same time, but that assumes extra slots. I'd be interested in seeing any performance differences: you'd avoid doing two expensive striping calculations, but you'd have to do a whole lot more "splitting".
Looking forward to the video on the SATA versions. Fingers crossed when it comes that they're decently priced down in Australia, because that could be great for some things in planning soon
HP had dual-actuator drives about 20 years ago, but they didn't catch on. Maybe they were too complex to control/integrate into RAID systems at the time? Too expensive? Too locked into a proprietary ecosystem? I dunno, but I've often wondered if the idea would crop up again. The thing that occurred to me at the time was, "that would really speed up file-copying on the same disk: not having to seek back and forth between the source file and the destination". At the time I was thinking about tasks like defragging, where lots of data has to be moved from one part of the disk to somewhere more appropriate. =:o}
Interesting to see what is on their website: "MACH.2 is the world's first multi-actuator hard drive technology, containing two independent actuators that transfer data concurrently" Based on your comment I wonder if they are wrong with that statement...
OK, found something: Conner Peripherals (which became part of Seagate) in 1991 announced dual-actuator drives. Chinook had 2 sets of heads on each platter. This has one head per platter, and is essentially stacking 2 HDDs in one case.
@@autohmae Interesting... I've just tried some Googling, and can't find any record of the earlier HP dual-actuator drives. Did they never actually make it to market? =:oo I learned about HP's design from a big show-off display that was in the lobby of one of their sites, where a friend of mine was working at the time. The images showed an elongated drive housing, with an actuator assembly at each end, accessing the disk from opposite sides, and the text talked about how innovative this was (of course!) and how much it could speed things up. Was I taken in by some CGI concept-art for an un-realised product, I wonder? (If anyone could do photo-realistic graphics at the time, HP certainly could! =:o} ) Looking at the images Seagate are sharing for how the Mach.2 works, they've got the two actuators stacked one on top of the other, with each one only able to address half the platters - hence the drive showing up as two separate devices. The HP design could access all platters equally, which would surely be preferable? But of course you then have the extra length of the housing to fit the extra actuator, which maybe made the product just too big to fit in existing machines. Certainly a 3.5inch drive would end up being as long as some of the early 5.25" optical drives, and have to be put in a 5.25" bay if there wasn't plenty of clearance at the back of the 3.5" bays... And that's just thinking about cheap consumer cases. Servers, with their rows of snugly-fitting exchangeable drive sleds, would be a whole different ball game (i.e. a whole new design required). But back then, SSDs were just a glint in their daddy's eye, so *maybe* it would have been worth it for some customers to invest in a whole new HP server, built to take the longer drives (and with the necessary "extra clever" controllers)...? (I think the pictures I saw were of a 5.25" design, BTW, but I could easily be misremembering.) I think we need an HP employee in this thread, stat! =:o}
@@autohmae [BRAIN SPARK] the name Chinook rings a bell... [SCRATCHES HEAD] ... But why would they have had details of a Conner product proudly displayed at an HP site? =:oo Now I'm totally baffled!
Always wondered why we never added more optical sensors for disc drives to read more per revolution (or why we never continued to improve that working prototype 500GB Blu-ray from the 2000s). Glad to see improvements yet again, but it does feel like we're abandoning formats that still have potential.
@@shanent5793 Huh TIL, that idea for splitting the beams is pretty neat and it might do wonders with modern advancements and components, I would also think there'd be issues on read speeds with normal discs as that kind of laser set up would need discs to burn data/programs in a specific sector layout designed for said lasers to read more data. Also the only Kenwood I ever saw was my dads Record player (Kenwood KD-2055) so that was an unexpected brand name to see for a computer part haha.
Hey Wendell, exciting stuff, could you let us know about the power consumption? Also, I'm curious: if one were to mix the 16TB drives with paired 8TB SSDs, would that configuration improve performance at all?
Normal Seagate drives are fine and have been for years. It's their cheap crap you have to look out for. Usually you only find those in OEM stuff though. WD didn't earn this reputation as much because they didn't really do cheap OEM crap (or if they did they refused to even be associated with it.)
@@CreativityNull Fact check: WD is a massive OEM supplier for Dell, Lenovo, HP, and others. They simply manufacture their bottom tier to a quality above Seagate's mid-tier so they don't get the bad reputation.
@@tim3172 and more expensive in more expensive models (at least for the spinning drives when they used them awhile ago in these devices out of the factory.) As far as I know though, that's mostly for SSDs currently, not for spinning drives except rarely in the past, which is where Seagate has the bad reputation. Seagate got the reputation over a decade ago and it's not even relevant anymore since the low end stuff isn't using the Seagate spinning rust and is instead using eMMC.
So these show up as two independent 9TB drives each? Does definitely take a bit of thinking to optimize the performance and reliability, but really cool technology!
Very cool. I still remember the excitement when I got to play with my first RAID card and 2TB SAS drives for the first time. As good as SSDs are, there will always be a place for mechanical drives.
Very nice! And yes, it makes perfect sense to do it this way. I am curious how this will work with SATA. As far as I know, you can't have multiple devices on a single SATA link, like you can with SAS and an expander chip. Will the drive have two SATA connectors perhaps?
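From what I've seen of Seagate's material on the SATA models, there's no second connector: the drive presents as a single SATA device whose LBA range is split down the middle, one half per actuator, and the host gets the parallelism by partitioning at that midpoint. A rough sketch of that addressing model (the midpoint split is the documented idea; the capacity figure below is just illustrative):

```python
def actuator_for_lba(lba: int, total_lbas: int) -> int:
    """Return which actuator (0 or 1) services a given LBA, assuming
    the lower half of the LBA range maps to actuator 0 and the upper
    half to actuator 1, as Seagate describes for the SATA models."""
    if not 0 <= lba < total_lbas:
        raise ValueError("LBA out of range")
    return 0 if lba < total_lbas // 2 else 1

# Illustrative capacity: ~18 TB of 512-byte logical sectors.
TOTAL = 18_000_000_000_000 // 512

# Partitioning at the midpoint keeps each partition on its own actuator:
print(actuator_for_lba(0, TOTAL))           # first LBA -> actuator 0
print(actuator_for_lba(TOTAL // 2, TOTAL))  # first LBA of upper half -> actuator 1
```

So instead of the two SAS LUNs, on SATA you'd cut two partitions at the halfway LBA and treat those as the two "sub-drives".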
Very interested in this. I saw back in Feb press release for ultrastar dc hs760, but haven't seen much news on them since. Wonder how these would work in Unraid.
Oh, I was thinking they'd be on opposite corners to each other, so I was wondering how they'd done it with the rust still at one end of the chassis. Only at the end, when explicitly shown, did it click that they're overlapping each other. Did not expect that.
I wonder what this does for reliability. How often would you lose one head and not the other? In consumer land, 18TB is too much to lose to one disk, but 18TB is also too much to back up.
I sometimes think Wendell is the only guy worth talking to if one needs to discuss technical things in IT. These days it's way too much marketing blabla... Everyone is trying to hide the obvious red flags and design fails... They will happily sell you a consultant: a guy half your age with less experience who will point out that the problem you describe is rather normal behavior and can only be solved by throwing more money at the problem...
I'm glad I caught this video on mechanical HDs, and not SSDs or M.2; they simply don't have the capacity. I really appreciate your presentations. I clearly do not understand HOW these HDs actually function, but I have lost enough data over the years to bad PCBs. I have a valid question (I think): perhaps you could explain why the manufacturers can't seem to make any sort of uniform replaceable PCB for their HDs? How many petabytes of data are lost every year because "donor" drives can't be found, or because the technical skills of a solderer are lost on a microscopic resistor or capacitor? I'm sure you could supply a short informational vid?
I think if they REALLY tried, we could get an independent actuator per platter per actuator stack. 12x2 for IOPS potential: 24x random IOPS. Still nowhere near SSDs, but WAY better.
I've been curious to know the price of these since they were announced a long while back, but I can see now they're quite pricey (as would be expected). It's cool technology, but it still can't replace SSDs in terms of IOPS and random performance.
I wish they'd put 4 actuators on the same platter; it would give insane amounts of read/write speed and insane seek times, with the heads having to accelerate and decelerate less.
I was hoping it was going to be 2 full sets of read/write heads, so that you could perform 2 parallel IO operations on the same disk and double the random IO performance. I was really curious how they fit a whole second arm mechanism in there, but no, they punted on it and just split the one actuator in half :( What I don't understand is how this is a speedup: it's the same number of read/write heads traveling over the same surface area at the same speed. Like, the same number of bits per second are passing underneath each head regardless of whether they're moving in sync or not.
Would raid0-ing the 2 halves of the drive to make them behave like 1 fast drive then combining them into raidz2 be better or how would it be different than 2 vdevs of raidz1-ed half-drives?
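One way to reason about the two layouts is to enumerate failure sets in a toy model. Assumptions (mine, for illustration): 4 dual-actuator drives, each half exposed as its own device; "two raidzs" means one raidz1 vdev of all the upper halves and one of all the lower halves; "raid0 then raidz" means each drive's two halves are striped into one unit, with raidz2 across the 4 units. This ignores performance entirely and only counts which failures lose the pool:

```python
from itertools import combinations

DRIVES = range(4)

def ok_two_raidz1(failed_halves):
    # Layout A: one raidz1 vdev of the four 'A' halves, another of the
    # four 'B' halves. Each vdev tolerates exactly one lost member.
    return all(sum((d, h) in failed_halves for d in DRIVES) <= 1 for h in "AB")

def ok_raidz2_over_stripes(failed_halves):
    # Layout B: stripe each drive's two halves into one unit, then raidz2
    # across the four units. Losing either half kills the whole unit;
    # the raidz2 tolerates two dead units.
    dead_units = sum(any((d, h) in failed_halves for h in "AB") for d in DRIVES)
    return dead_units <= 2

def survives_all_whole_drive_failures(check, n):
    # Whole-drive failures take both halves at once (e.g. a pulled drive).
    return all(check({(d, h) for d in drives for h in "AB"})
               for drives in combinations(DRIVES, n))

print(survives_all_whole_drive_failures(ok_two_raidz1, 1))          # True
print(survives_all_whole_drive_failures(ok_two_raidz1, 2))          # False
print(survives_all_whole_drive_failures(ok_raidz2_over_stripes, 2)) # True
```

In this toy model the raidz2-over-stripes layout survives strictly more failure sets (any two whole drives, versus only one for the paired raidz1s, since a whole-drive loss takes a member out of both vdevs at once). The catch is that ZFS has no native raid0-then-raidz nesting, so layout B needs something like mdadm underneath; on the other hand, the two halves share a spindle, motor, and PCB, so treating them as one failure domain arguably matches reality better anyway.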
Why do hard drive manufacturers not just make the actual write head have strips of side-by-side read/write heads that could write and read parallel on-disk lines? You could build them with extra heads to account for the fact that a head slightly further in towards the center, or out towards the edge, covers a different distance, either by being slightly bigger or by having more read/write heads. 5 parallel needle ends = ~5x faster. As you increase density you could end up multiplying your read/write performance, since you'd be able to add an extra read/write head whenever the density really increased.
If I've understood your suggestion correctly, the reason why they don't do that is because of the math/geometry of circles and lines. The tracks/lines that are on the disk platter form an imaginary circle, whose center is the center of the platter, and whose perimeter is the track itself. The further away from the center of the platter a track is, the greater the area of the imaginary circle, and the larger its perimeter will be. The read/write head is essentially a single point. It can be moved anywhere along the platter in any fashion, and read from any given track. When you have a stationary single point intercepting a rotating circle, that point will always stay the same distance away from the center of the circle, and for a hard drive that means it will always stay on a track. Introducing more read/write points changes the geometry of the read/write mechanism from a simple single point to some other shape, either a straight line or an arc of some kind. If we use a straight line of 5 points, for example, then only the original point can be guaranteed to be on the right track. The four extra points will be further out from the center of the platter, and so they could cross over into tracks that are further away from the center too. If we used 5 points placed in an arc formation to mimic the circular platter, they would only be able to reliably read from tracks forming a circle whose perimeter shared the same arc as the read/write layout.
The only way to make that idea work is if you found a way to actuate the read/write points themselves, instead of just actuating the entire head. There's an even bigger problem though. Let's say you've magically created a read/write assembly that is able to move the read/write points on the head dynamically as the head moves up and down the platter, and the result of that breakthrough was that you could read 5 times as much data sequentially through the head... The only way that breakthrough would increase read speed would be if you also increased the rotational speed of the platter by 5 times its original speed. That would mean having hard drives in your computer or server rack with an RPM of 50,000 or more! That would make a lot of noise, consume a lot more energy, cause heat problems, wear out the platters faster, and possibly damage the PC or server, or cause it to fall over. And the hard drives would need to be able to keep up with that speed without any errors, which is hard to make happen. You would have all those problems, and still you wouldn't be anywhere near as fast as a modern SSD, which can be bought pretty cheap these days. Basically, the engineering behind HDDs is already so fine-tuned at this point that more progress on HDD technology is becoming exponentially more expensive, the rewards are shrinking, and the SSD has already achieved much better results.
@@firstnamelastname-oy7es I don't think this would be an issue. Think about this: over time, track density has increased by many orders of magnitude. Originally, drives like the IBM 350 had a density of 2000 bits per square inch. Drives nowadays have densities of 1 terabit per square inch or greater. Thus we have MANY more tracks in a modern drive than in a drive back in the day. It would simply be a matter of making a multi-point read head in a strip, with the furthest tip of the strip having multiple points (or instead of a strip, you'd have say 3-5 read heads, or more, all in a line), and you could write to multiple heads at a time. Essentially, you could format tracks differently, whereby tracks were thicker than they used to be, but files and data would get broken up into 3-5 pieces or more. Or simply have 4 or more arms with read head strips writing to the drive at the same time (say at cardinal points on the drive). Perhaps you could have the read arm be a straight bar of metal passing over the entire drive, fixed in place to the chassis of the drive even, with multiple actuators along the arm.
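The track-pitch arithmetic makes the alignment problem in this thread concrete. With ballpark figures I'm assuming here (~500 kTPI track density, so a pitch of roughly 50 nm, and read points placed a plausible ~10 µm apart), neighboring points on a fixed strip would land nearly 200 tracks apart, and because a rotary arm's skew angle changes as it sweeps, the radial spacing between the points drifts by many track widths across the stroke:

```python
import math

# Assumed ballpark figures, not vendor specs:
TPI = 500_000                  # tracks per inch on a modern platter
track_pitch = 25.4e-3 / TPI    # ~50.8 nm per track, in meters
point_spacing = 10e-6          # 10 um between adjacent read points

print(f"{point_spacing / track_pitch:.0f} tracks between adjacent points")

# A rotary actuator's skew angle changes across the stroke, so the
# radial component of a fixed spacing (and hence its offset measured
# in tracks) is not constant:
offsets = [point_spacing * math.cos(math.radians(a)) / track_pitch
           for a in (0, 10, 20)]
print(f"offset drifts by ~{max(offsets) - min(offsets):.0f} tracks "
      f"over a 20-degree skew change")
```

A drift of roughly a dozen tracks means a fixed multi-point strip can't stay registered on concentric tracks; each extra point would need its own fine positioning stage, which is a big part of why splitting the actuator stack was the cheaper win.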
Would be nice if SATA SSDs came in the 3.5" form factor. I don't think I'll ever be able to afford to make the 288 TB in drives I have now solid state, ever.
@@timramich The reason they don't is that market share wouldn't be high enough. 2.5" can be used in laptops and desktops (with an adapter plate). Heck, we don't even need the full 4" of length on 2.5" SSDs. If you open them up, the PCB inside is only 1" in length at times.
It's pretty simple to go fast, it just requires a quicksort combination as such; 9 - 8 + 7 - 6 + 5 - 4 + 3 - 2 + 1 (and '0' is a stop bit in general, its a universal marker, because if a bit is ever zero, are plans to go fast are brutally foiled) So, lets start easy, a seven segent combination in quickort, this is actally a jacobian determinant matrix operation in chapter 9 from Calculus 4; 111 110 101 100 011 010 001 (and 000 means stop) it;s very effective because the floating point and integer operations are rated at 80 gigaflops per core.
Is it just me or does it seems like there have been so many advances in HDDs lately? I had thought with time we would all just go SSD based for everything as a "simpler" solution, but I find myself still buying HDDs all the time because for large media storage, or just storage that is not speed critical, they are king.
Are these even available to regular consumers? Been keeping an eye out for some, since I can get a full Epyc system for just under $2000; figure it's time to build a good server now. Invest in proper great HDDs.
There's an easy solution: bring back 5.25" bay HDDs; plenty of room for platters. Or an entirely SSD-based one; LTT showed off a prototype for a 3.5" bay, so no reason why it couldn't work there too. Plus it'd be a good incentive to bring back popularity in cases with 5.25" bays (barely any new ones these days). It'd be much more reason than we have now, and we could still have the benefit of an optical drive or 2 (DVD-RW & Blu-ray) without having to resort to an older case or clutter on the desk. Wouldn't mind a new version of SATA, or a combined SAS & SATA controller on boards, to bring that back up to reasonable levels in a PC.
For something on the drive itself, single actuator with dual surface striping ought to be feasible, buffer 2 tracks in a cylinder and read / write them simultaneously, much more feasible for drive logic to handle.
Not only Europe uses the metric system (part of the SI system); the entire American continent (except the USA) has used it for a long time too. In fact, almost the entire globe!
I've seen one or two of these pop up and... Yeah literally one of these would be better than my 3 cheap SSD pool at dumping backups. Now restoring them might be another matter but all else being equal I'd rather get the backup written quicker than restored quicker as a home user.
I'd be curious to see how these drives perform in a Ceph array. Even if one half of the drive dies, it could help a datacenter limp along, or allow higher levels of redundancy in the same hardware footprint.
I think RAID60 might be the best way to use these, this covers the 2 statistical HDA failures among the entire array. You get the performance of striping and still only 2 HDA overhead.
That "30 year old" Seagate holds 9 GB, are you sure about that? I have a similar model (size wise) that just holds 40 MB, and it was a top of the line HDD back in the day.
Now all we need is some kind of redundancy awareness in other, less-technical disk pooling products (e.g. Unraid, Storage Spaces, etc) so that they know not to store two copies on the same disk, and this would actually be a hit for small video producers. ...oh who am I kidding this will NEVER be supported properly lol
To be more clear: “Let’s wait for Gen2 of products that actually enter the mass market (maybe even consumer levels of volume) and then look at it after a few years again”. I’ve personally been burnt a few too many times.
The first drives of this type from Seagate entered the server market in 2017, so this is like gen3 or gen4. They have been successful enough in the server market that WD finally got off their collective backside and launched their first dual-actuator drive, the Ultrastar DC HS760, in January of 2023.
Not sure those drives make a lot of sense at current pricing; I see them listed for $650 for the 16TB model. So they use twice the power and cost twice as much as single-actuator drives of the same size. You can buy twice the drives and get double the storage space with regular drives at the same speed.
Whilst the concept is cool, I think they should have done the "RAID" internally. Seriously... How much more complicated can it be to set up a software raid via the drive's own firmware, vs physically building two separate drives into the same enclosure? Even better, make it a "Hybrid" SSHD by adding a NAND cache to facilitate the internal raid transfers more evenly. I'd just call that lazy..
That's great so now it will run like absolute trash because the onboard controller is not anywhere near the power of a RAID controller? Why not have the raid controller in the raid card (or the software raid in the OS) actually do the raid instead
@@marcogenovesi8570 Why would it run like trash? HD controllers by their very nature are built to manage the transfer of data at high speeds, and we are talking about a continuous data flow of ~500MBs if utilising a NAND "SSHD" type system. Its exactly the same thing, except now except Y splitting to two heads.. SATA drives do when sending data to different NAND cells on the fly and normal HDs do it when sending data to different sectors on the fly), its the same work load..
~300MB/s up to ~500MB/s is a great jump, but what interests me more is the possibility of lower -seek times- latency (thanks for pointing out that seek times on both sub-drives will be the same, but it's like splitting the queue in half)
lower latency for the WIN!
I'm pretty sure seek time will be just the same.
@@jackwhite3820 If I understood everything correctly, it will be the same*
*: the same as a similarly configured RAID. Which should be faster than one drive.
@@jackwhite3820 **Possibility**: if you use a file-level RAID instead of a block-level RAID, it could be thought of as cutting the queue in half.
Seek times are relatively the same on each disk subsystem, however the net effect is twice as many IOPS/TB in the same disk slot vs a conventional drive.
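The "splitting the queue in half" intuition from the thread above can be sketched with a toy model. This is purely illustrative (the 8 ms average seek and the request count are made-up numbers, not measurements of these drives): per-request latency is unchanged, but two actuators each drain half the queue in parallel, so IOPS roughly doubles.

```python
def total_service_time(num_requests, actuators, seek_ms=8.0):
    """Toy model: every request costs one average seek, and each
    actuator independently serves its own share of the queue."""
    per_actuator = num_requests / actuators
    return per_actuator * seek_ms  # actuators work in parallel

single = total_service_time(1000, 1)  # one actuator serves all 1000 requests
dual = total_service_time(1000, 2)    # two actuators, 500 requests each
print(single / dual)  # -> 2.0: same seek time per request, ~2x the IOPS
```

Same disk slot, same seek mechanics, but the queue drains twice as fast, which is exactly the "twice as many IOPS/TB" point.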
Hey Wendell,
In 1997, on a Canadian Navy frigate, I serviced an 80MB hard drive that was the size of a small kitchen stove. The PLATTER section was removable so you could calibrate the read/write heads. There were two of them and they ran the ENTIRE command and control. Radar, weapons, etc. Yes, 80 MegaBytes was handling the operating system that controlled missiles and guns. Heh. Shortly after that they got replaced with an off-the-shelf RAID solution, AFAIK. Oh, and we had a communications device the size of a fridge that had a readout that used HEATED elements because it preceded LEDs. In 1997! And... if you really want a laugh: I was standing there watching the Signals Operators do a black-out drill and noticed they were shouting out commands like "restart crypto", so afterwards I asked what they were doing. It turns out the BATTERY had died, so every time they did these power blackout drills they had to restart the cryptography device. I said "would you like me to put batteries in it so you don't need to do that?" They said "It has batteries?" The problem had been happening for so long they'd put the normally unnecessary procedure into the MANUALS. (And all the clocks were wrong, and had been for a couple of years. I went over to the next ship over, borrowed the manual that we no longer had, then fixed it.)
Bonus points for the Floppotron reference. :)
Wendel is such a good thing to have on youtube, knowledge, professionalism, presentation, all straight A's
And the HDD companies hit back on speed against SATA SSDs. This is really cool tech and I'm excited to see if these will end up becoming affordable for consumers eventually.
Looking at the prices, it's ~13%, ~$30 more compared to the normal Exos. It also consumes ~2 watts more. If you need performance and space is an issue, it's good.
If space is an issue but performance isn't, then the normal Exos is fine.
If performance is an issue but space isn't, then get a few 10TB or 8TB drives instead.
Still much, much worse random access speeds, which is the main reason SSDs feel fast.
@@GraveUypo this is fair but these will be very nice for large storage systems where the really critical operations aren't random, re-silvering and the like.
Both the storage density and the speeds are extremely welcome and I'm hoping these drop a little in price by the time I'm ready to re-disk my JBOD
SSDs are so cheap and have so many upsides that these are basically a novelty. The main benefit of SSDs, other than speed, is their longevity. Even when they can't write anymore, you at least can still read the data on them. That's a big deal for data storage. Now of course one could argue that you can get more capacity for cheaper than SSDs, but if that capacity doesn't come with speed and low latency, then even in the business space it's pretty useless these days. This is a product that would have been amazing 20 years ago, but is just meh today. It's just too little, too late. And the latest NVMe drives are ultra compact; these things are putting 4TB of storage at 7GB/s and 1 million IOPS in the size of a small thumb drive, at pretty affordable prices.
@@peoplez129 HDDs make sense above 4TB
It's funny how people keep saying that hard drives are dead, but they keep finding ways to keep them relevant, even outside of capacity. I'd love this for the working data drive on my server. I'll be really interested to see the random read/write performance as well.
It's funny how people keep saying that tapes are dead, yet IBM's storage division still makes quite good money out of them every year...
@@Kanakarhuthey still make some sense, I think.
I've heard the explanation that they're very cheap for their capacity, and the seek times don't matter much in the backup and restore scenarios they're often used for.
@@Kanakarhu Even in server related groups, I've been laughed at for saying tape is still quite relevant and is still widely used in the enterprise space as backup. Those people are simply ignorant. They're enthusiasts that think they're super hard core, but don't really know the market when it comes down to it.
@@iaial0 They're incredibly cheap for their capacity. You can get a 20TB HDD for less than the cheapest 8TB SSD. And yeah, they're great for cold storage and backups, but HDDs are still more versatile than that
@@TheGameBench SSD's are great for cold storage. And especially for catastrophic loss backups, because you can get back up and running much quicker. It can mean the difference between being out of business for hours or days, which more than makes up for the extra cost of SSD's. Things like tape also need specialized storage facilities. For SSD storage all you'd really need is to power them up once every year to ensure data retention, and if you really want to ensure data retention, to have a mirror backup that copies over between two drives to ensure the write charge is fresh. And this all can be automated inside of a suitcase sized container that can be stored almost anywhere. Companies only stick with tape backup because they're already invested in it, and because of business insurance purposes. Basically, big tape has the system locked down, ensuring that what insurance covers will be at jeopardy if businesses use anything else.
SIR!!! I have always marvelled at the achievements of electro-mechanical engineers in the field of hard drive technology. To me, these achievements rank among mankind's greatest so far. And these chaps don't get much applause.
Just When you think the Hard Drive was a thing of the past It Rises Like a Phoenix Once Again.
"The Hard Drive is Dead! Long Live the Hard Drive!"
until we get mainstream multilayer crystal or DNA storage HDDs should continue to have a place.
Damn... And here I am, just installed x2 8TB QVOs, start xfering over ~3.5TBs of video files, and quickly watch xfer speeds plummet to ~150MB/s once the cache ran out.
I had 4 of those in RAID 0 (before SSD prices plummeted), which brought the speeds up to meaningful levels, also because you could write 4 times the amount of data at once till the cache ran out. But without striping several of them it's indeed very 2010-feeling.
Heh, there was a time, about 25 years ago, when 15 MB/s would have been considered crazy fast and you needed a RAID array to achieve anywhere close to 50 MB/s and here we are looking down at many times more than that speed.
How quickly technology marches on and how rapidly we humans adapt and get used to nicer things..
Your enthusiasm and tech geekiness are second to none. Well done Wendell!
I've been waiting on these for years and now they are available and I don't need them. Looking forward to seeing what the difference between the SAS and SATA version is.
It would be a shame if your drives started failing at an alarming rate LOL
@@mrmotofy I have spares of the same model in production, and even a bunch of used Coolspin drives that I intended to use for a backup server if I absolutely had to.
I've got a couple of questions -
Are these dual heads quieter than single-head drives during read/write operations, as there's less mass moving and slowing during a seek? Or louder, due to 2 separate seeks being able to be performed at the same time?
What's the power requirement difference between single and dual? In the enterprise world we don't really care, but with SATA versions due, I can see that the "average" SOHO user and homelabber who may take note might care a lot more, as kWh = $$$$ these days.
Overall, really cool. However, in the coming years they might have a tough market to compete in, should flash prices keep dropping at their current rate.
I have a pair of the 2x18 drives in my workstation, and seeks are definitely quieter than the Exos X18 drives they replaced. Like other helium drives, there is little to no rotational noise.
I would expect slightly higher average power consumption due to controller complexity and the need to move and rotate the platters and heads at the same speeds while using smaller actuators, but almost certainly lower power consumption than two separate drives.
It's great that SuperMicro has these for brand new with manufacturer warranty and everything for a reasonable price. Server Part Deals has had refurbished 2x14 SAS and 2x18 SATA models in stock off and on for $150 and $220 ea. respectively. It's probably worth buying these new for data you care about... or use the savings to buy a few spares!
Two or three of these can keep a 10GbE link saturated so they are perfect for a home NAS without a ton of drive bays. Hopefully this will be the last time I buy spinning rust for my personal data hoard. Thanks for the video Wendell!
Oh yeah, EXOS are amazing drives. Discovered them by accident. Have 2, love them.
These aren't the "standard" EXOS drives though. This is a special line of them within the EXOS drive series.
Doing this with drives is such a good idea, double the speed and the only downside is a little bit extra to config, brilliant
Oh I thought someone had finally made the idea I had probably 20 years ago of multiple heads per platter. This doesn't seem much cleverer than jamming two drives into one box, especially with added complexity of having to figure out which platters are on which drives. An interesting idea for sure, but feels like it needs more development time.
We need to get Wendell to exabyte amounts of storage. Make it happen.
Why stop there? Geopbytes or bust
I'm surprised Wendell and Linus haven't done this already. I know Linus has needed to call Wendell on some of his networking and servers; it's time to help the legend with his own petabytes project.
@@shadowarez1337 I mean, poor ole me is over 10TB used. That's without having vast video archives, and with most of my games uninstalled.
@@MinorLG I'll be posting on the forums. Finally got things coming together, and 8x18 isn't enough. Plus I made the mistake of making a JBOD instead of a RAID 5 with 2 hot spares. I have over 45TB of data, so this server will back up the main one, then I'll wipe it and recreate with RAID 5, with 2 drives for fault tolerance.
As someone who worked on an exabyte-scale Hadoop cluster: it literally takes thousands of machines in a data center. An exabyte is 1 million TB, or about 55 thousand of those new 18TB drives 😲
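That drive-count figure checks out as a quick back-of-envelope (decimal units, raw capacity only, before any replication or parity a real cluster would add):

```python
import math

EXABYTE_TB = 1_000_000  # 1 EB = one million TB in decimal units
DRIVE_TB = 18           # the new 18TB dual-actuator drives
drives = math.ceil(EXABYTE_TB / DRIVE_TB)
print(drives)  # -> 55556 drives minimum, raw
```

With Hadoop's default 3x replication you would need roughly three times that, which is why it takes thousands of machines.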
It's about time the humble HDD got some long-overdue updates.
FINALLY someone talking about exos drives
Now hoping to see 20-drive RAID 10 CrystalDiskMark results
The 9MB Seagate is awesome!
Those old-tech times were honestly way cooler than nowadays.
Note: I bought my first PC with a 40MB Seagate drive in an 80286 12MHz PC. But I liked those times A LOT!😊
how much did that beast set you back? LOL
Probably wouldn’t get you much change out of $5K back then, I reckon
You had me at "eleventy" I thought I was the only one who said that. Love these drives and I can't wait to get all those "gigglebytes" of read/write speed.
I had that WD Raptor X drive and sold it to a collector some years ago... I'm regretting it now. It was awesome.
It amazes me that the air current (well, helium to be precise) from one head moving doesn't disturb the positioning of the other head. Or, how do they know that certain seek patterns can't create an oscillating gas current that could amplify and cause errors or even a head crash?
No, you were correct at 'air'. Drives are NOT hermetically sealed and all that is in them is air.
@@quantos8061 No, these drives are helium-sealed.
@@quantos8061 It literally says "helium" in their specs.
The two actuators sit on the same actuator axis, so the upper actuator has the heads for platters 1-5 and the lower actuator for 6-10, say. So interference should be very low to begin with. This also means that these might be scaled to three or four actuators, because the footprint doesn't really increase.
@@deneb_tm No, they are NOT hermetically sealed. Take one up in a plane, if it's hermetically sealed the drive will bulge.
This is an improvement in storage density. Nothing else.
All other advantages are already available thru software trickery and multiple separate HDDs working in tandem.
'91 or '92 I bought two HP 2GB drives the same size. They took forever to spin up and would vibrate the table. There was a chime when they were done spinning up and ready; sounded like an airplane fasten-seatbelt chime. I used an old sturdy XT case and ran them externally using IDE cables out the back of the main system.
Curious if you could scale this up to 4-6 actuators; that would prolong the life of disks like these by a lot.
Not in the 3.5" form factor, there's not enough Z height for all of the magnets and actuators.
If they brought back the 5.25" drives, they could scale the capacity way up, but that'd present a different set of challenges.
Also look at the arc of the head movement. You have a density limit there as well and they cannot impact the laminar airflow that the heads ride upon. I'd imagine more heads already made that smooth air buffer choppier. I wonder if anyone has revisited Bubble Memory. It actually shipped in the early 1980's - terribly slow access times but the density was stunning. I think the great granddaddy of the modern clam shell laptop, The Grid Systems Computer, shipped with bubble.
If you want 4~6 actuators, go to the market today, buy 4~6 HDDs, slap them together in a raid config, and you have your "4~6 actuator" HDD ready in your hands today itself 😂
@@GGigabiteM 4 would be possible if they could get mirrored sets of heads. Have 2 stacks of 2 actuators: one stack doing the tops of the platters, and one stack doing the bottoms. You would likely lose a platter due to needing enough height for the arms to overlap. With a helium drive it shouldn't be too terrible of a loss.
With the NetApp 6Gbps IOMs having reached EOS a few years ago, those 2246/4246 disk enclosures should become pretty available on the used market.
Yes, and they are awesome. Have 2 FAS2246 at home (filled with 24 1.2TB SAS SSDs each) and a 9.4PB NetApp MetroCluster at work.
Other than the number of vdevs involved, are there any differences between doing the two raidzs vs first raid0-ing each drive's halves and putting them into one raidz?
One benefit of doing it with the two raidzs is that in the event of half a failure, you would not risk the second vdev at the same time, but that assumes extra slots. I'd be interested in seeing any performance differences; you'd avoid doing two expensive striping calculations, but you'd have to do a whole lot more "splitting".
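For what it's worth, the raw usable capacity works out the same either way; the real difference is failure behavior (one dead drive degrades both vdevs in the split layout, but only one member of the one big vdev in the striped layout). A back-of-envelope sketch, assuming 12 of the 18TB drives and full-width raidz1 vdevs; these widths are an assumption for illustration, not the exact pool layout from the video:

```python
def usable_tb(n_drives, tb_each, layout):
    half = tb_each / 2  # each dual-actuator drive shows up as two halves
    if layout == "two_raidz1":
        # one raidz1 over all the "top" halves, another over all the "bottom" halves
        return 2 * (n_drives - 1) * half
    if layout == "stripe_then_raidz1":
        # raid0 each drive's two halves, then one raidz1 over the n fast "drives"
        return (n_drives - 1) * tb_each
    raise ValueError(layout)

print(usable_tb(12, 18.0, "two_raidz1"))          # 198.0 TB
print(usable_tb(12, 18.0, "stripe_then_raidz1"))  # 198.0 TB -- identical
```

So the choice comes down to failure domains and striping overhead, not space.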
Looking forward to the video on the SATA versions. Fingers crossed that when it comes they're decently priced down in Australia, because that could be great for some things I'm planning soon.
Great for us Ceph users!
HP had dual-actuator drives about 20 years ago, but they didn't catch on. Maybe they were too complex to control/integrate into RAID systems at the time? Too expensive? Too locked into a proprietary ecosystem? I dunno, but I've often wondered if the idea would crop up again.
The thing that occurred to me at the time was, "that would really speed up file-copying on the same disk: not having to seek back and forth between the source file and the destination". At the time I was thinking about tasks like defragging, where lots of data has to be moved from one part of the disk to somewhere more appropriate. =:o}
Interesting to see what is on their website: "MACH.2 is the world's first multi-actuator hard drive technology, containing two independent actuators that transfer data concurrently"
Based on your comment, I wonder if they are wrong with that statement...
OK, found something: Conner Peripherals (which became part of Seagate) in 1991 announced dual-actuator drives.
Chinook had 2 sets of heads on each platter. This has one head per platter, and is essentially stacking 2 HDDs in one case.
@@autohmae Interesting... I've just tried some Googling, and can't find any record of the earlier HP dual-actuator drives. Did they never actually make it to market? =:oo
I learned about HP's design from a big show-off display that was in the lobby of one of their sites, where a friend of mine was working at the time. The images showed an elongated drive housing, with an actuator assembly at each end, accessing the disk from opposite sides, and the text talked about how innovative this was (of course!) and how much it could speed things up.
Was I taken in by some CGI concept-art for an un-realised product, I wonder? (If anyone could do photo-realistic graphics at the time, HP certainly could! =:o} )
Looking at the images Seagate are sharing for how the Mach.2 works, they've got the two actuators stacked one on top of the other, with each one only able to address half the platters - hence the drive showing up as two separate devices. The HP design could access all platters equally, which would surely be preferable? But of course you then have the extra length of the housing to fit the extra actuator, which maybe made the product just too big to fit in existing machines. Certainly a 3.5inch drive would end up being as long as some of the early 5.25" optical drives, and have to be put in a 5.25" bay if there wasn't plenty of clearance at the back of the 3.5" bays... And that's just thinking about cheap consumer cases. Servers, with their rows of snugly-fitting exchangeable drive sleds, would be a whole different ball game (i.e. a whole new design required). But back then, SSDs were just a glint in their daddy's eye, so *maybe* it would have been worth it for some customers to invest in a whole new HP server, built to take the longer drives (and with the necessary "extra clever" controllers)...?
(I think the pictures I saw were of a 5.25" design, BTW, but I could easily be misremembering.)
I think we need an HP employee in this thread, stat! =:o}
@@autohmae [BRAIN SPARK] the name Chinook rings a bell... [SCRATCHES HEAD] ... But why would they have had details of a Conner product proudly displayed at an HP site? =:oo
Now I'm totally baffled!
@@therealpbristow Compaq used to have a tight relationship with them and was even an investor... and HP bought Compaq... so maybe that's the connection?
Always wondered why we never added more optical sensors to disc drives to read more per revolution (or why we never continued to improve that working prototype 500GB Blu-ray from the 2000s). Glad to see improvements yet again, but it does feel like we're abandoning formats that still have potential.
MiniDisc does exactly that: magnetic write, optical read.
Already happened, the Kenwood CLV 72x CDROM with seven sensors was 1990s tech
@@shanent5793 Huh, TIL. That idea for splitting the beams is pretty neat, and it might do wonders with modern advancements and components. I would also think there'd be issues with read speeds on normal discs, as that kind of laser setup would need discs to burn data/programs in a specific sector layout designed for said lasers to read more data. Also, the only Kenwood I ever saw was my dad's record player (Kenwood KD-2055), so that was an unexpected brand name to see on a computer part haha.
For Ceph with host level failure domains, this is a drop in replacement, since a single drive failing is still 2 OSDs on a single node.
Hey Wendell, exciting stuff. Could you let us know about the power consumption? Also, I'm curious: if one were to mix the 16TB drives with paired 8TB SSDs, would that configuration improve performance at all?
Spec sheet says it's double the power and double the price.
Only pairing that makes sense is using the SSDs as a cache layer for an HDD array.
Yay!! I've been hoping you'd cover these!!
This is super promising and could even be the reason I finally give Seagate a try
Normal Seagate are fine and have been for years. It's their cheap crap you have to look out for. Usually you only find those in OEM stuff though. WD didn't earn this reputation as much because they didn't really do cheap OEM crap (or if they did they refused to even be associated with it.)
Exos in general (not just the dual head ones) have always been fine
@@CreativityNull Fact check: WD is a massive OEM supplier for Dell, Lenovo, HP, and others.
They simply manufacture their bottom tier to a quality above Seagate's mid-tier so they don't get the bad reputation.
@@tim3172 and more expensive in more expensive models (at least for the spinning drives when they used them awhile ago in these devices out of the factory.) As far as I know though, that's mostly for SSDs currently, not for spinning drives except rarely in the past, which is where Seagate has the bad reputation. Seagate got the reputation over a decade ago and it's not even relevant anymore since the low end stuff isn't using the Seagate spinning rust and is instead using eMMC.
So these show up as two independent 9TB drives each? Does definitely take a bit of thinking to optimize the performance and reliability, but really cool technology!
Thank you Wendell, love your work as always.
Very cool. I still remember the excitement when I got to play with my first RAID card and 2TB SAS drives for the first time. As good as SSDs are, there will always be a place for mechanical drives.
Amazing throughput for spinning media.
Very nice! And yes, it makes perfect sense to do it this way.
I am curious how this will work with SATA. As far as I know, you can't have multiple devices on a single SATA link, like you can with SAS and an expander chip. Will the drive have two SATA connectors perhaps?
It's one big LBA range, split right down the middle. There's a helper script to help you set up partitions on the Level1 forums.
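A sketch of what that split looks like for the SATA model, which presents one LBA range with the first half on one actuator and the second half on the other. The sector count below is a made-up example (the real one comes from something like `blockdev --getsz /dev/sdX`), and the boundaries are illustrative, not the actual forum script:

```python
# Hypothetical sector count -- read the real one from the drive itself.
TOTAL_SECTORS = 35_156_656_128
MID = TOTAL_SECTORS // 2  # first half of the LBA range -> actuator 0

# Partition boundaries you'd hand to a tool like sgdisk or parted:
part_actuator0 = (2048, MID - 1)            # aligned start, actuator 0's half
part_actuator1 = (MID, TOTAL_SECTORS - 1)   # actuator 1's half
print(part_actuator0, part_actuator1)
```

With one partition per actuator you can then treat the drive like two devices, the same way the SAS version shows up as two LUNs.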
Mechanical drives are like the Undertaker, every time you think he's dead he comes back to life with a new trick.
Very interested in this. I saw the press release back in Feb for the Ultrastar DC HS760, but haven't seen much news on them since. Wonder how these would work in Unraid.
Oh, I was thinking they'd be on opposite corners to each other, so was wondering how they'd done it with the rust still at one end of the chassis.
Only at the end, when explicitly shown, did it click that they're overlapping each other. Did not expect that.
there is not enough space to do that
@@marcogenovesi8570 Hey, when brain's gotta fart, brain's gonna fart. Physics be damned!
I wonder what this does for reliability. How often would you lose one head and not the other? In consumer land, 18TB is too much to lose to one disk, but 18TB is also too much to back up.
New Video!! Yay
Also could be a good first device in a surveillance feed archiving workflow. 18TB also happens to be the capacity of an LTO-9 tape.
Eleventy billion. Well done sir.
Wendell, your content is fantastic, keep it up!
Oooh, this might be very useful indeed! Gonna have to do reading on these
What I would love to see is how draid arrays in zfs handle these as parity is distributed differently.
I sometimes think Wendell is the only guy worth talking to if one needs to discuss technical things inside IT tech.
These days it's way too much marketing blabla... Everyone is trying to hide the obvious red flags and design fails... They will happily sell you a consultant: a guy half your age with less experience, who will point out that the problem you describe is rather normal behavior and can only be solved by throwing more money at the problem...
I have 2 of those Seagate drives, and 1 still works if you give it a quick flick of the wrist!
I don't understand everything yet, but I keep watching until I do. And reading elsewhere, of course.
It's been too long since I've seen a video from you guys. Glad I got the notification.
What's the max IOPS? Latency? Bit error rate? Sequential speed means nothing at 500MB/s if the competition is doing 15GB/s anyway.
I'm glad I caught this video on mechanical HDDs, and not SSDs or M.2; they simply don't have the capacity. I really appreciate your presentations. I clearly do not understand HOW these HDs actually function, but I have lost enough data over the years to bad PCBs. I have a valid question (I think): perhaps you could explain why the manufacturers can't seem to make any sort of uniform, replaceable PCB for their HDs? How many petabytes of data are lost every year because "donor" drives can't be found, or because the soldering skills needed for a microscopic resistor or capacitor are lost? I'm sure you could supply a short informational vid?
I think if they REALLY tried, we could get an independent actuator per platter per actuator stack. 12*2 for IOPS potential; 24x random IOPS. Still nowhere near SSDs, but WAY better.
that Dell monitor is FILTHY. Love it
Good enough for a media storage array. You don't need PCIe speeds for watching a 4K movie, other than for the convenience of backups.
EXOS are INSANE Drives!
I've been curious about the price of these since they were announced a long while back, but I can see now they're quite pricey (as would be expected). It's cool technology, but it still can't replace SSDs in terms of IOPS and random performance.
I wish they'd put 4 actuators on the same platter; it would give insane amounts of read/write speed and insane seek times, with the heads having to accelerate and decelerate less.
@@fss1704 I'm sure they could in the 5.25" form factor.
I was hoping it was gonna be 2 full sets of read/write heads, so that you could perform 2 parallel IO operations on the same disk and double the random IO performance. I was big curious how they fit a whole second arm mechanism in there, but no, they punted on it and just split the one actuator in half :(
What I don't understand is how this is a speedup; it's the same number of read/write heads traveling over the same surface area at the same speed. Like, the same number of bits per second are passing underneath each head regardless of whether they're moving in sync or not.
Would raid0-ing the 2 halves of the drive to make them behave like 1 fast drive, then combining those into raidz2, be better? How would it differ from 2 vdevs of raidz1-ed half-drives?
Why do hard drive manufacturers not just make the write head have strips of side-by-side read/write heads that could write and read parallel on-disk tracks? You could build them with extra heads to account for the fact that a head slightly further in towards the center, or out towards the edge, would see that distance change; either by being slightly bigger, or by having more read/write heads. 5 parallel needle-ends = ~5x faster. As you increase density you could end up multiplying your read/write performance, since you'd be able to add an extra read/write head whenever the density really increased.
If I've understood your suggestion correctly, the reason they don't do that is the maths/geometry of circles and lines.
The tracks/lines that are on the disk platter form an imaginary circle, whose center is the center of the platter, and whose perimeter is the track itself.
The further away from the center of the platter a track is, the greater the area of the imaginary circle, the larger the perimeter that imaginary circle will be.
The read/ write head is essentially a single point. It can be moved anywhere along the platter in any fashion, and read from any given track.
When you have a stationary single point intercepting a rotating circle, that point will always stay the same distance away from the center of the circle, and for a hard drive that means it will always stay on a track.
Introducing more read/write points changes the geometry of the read/write mechanism from being a simple single point to some other shape, either a straight line or an arc of some kind.
If we use a straight line of 5 points for example, then only the original point can be guaranteed to be on the right track. The four extra points will be further out from the center of the platter, and so they could cross over into the tracks that are further away from the center too.
If we decided to use 5 points that were placed in an arc formation to mimic the circular platter, they would only be able to reliably read from tracks that form a circle whose perimeter shared the same arc as the read/write layout.
The only way to make that idea work, is if you found a way to actuate the read/write points themselves, instead of just actuating the entire head.
There's an even bigger problem though. Let's say you've magically created a read/write assembly that is able to move the read/write points on the head dynamically as the head moves across the platter, and the result of that breakthrough was you could read 5 times as much data sequentially through the head...
The only way that breakthrough would increase read speed would be if you also increased the rotational speed of the platter to 5 times its original speed. That would mean having hard drives in your computer or server rack with an RPM of 50,000 or more!
That would make a lot of noise, consume a lot more energy, cause heat problems, wear out the platters faster, possibly damage the PC or server, or cause it to fall over. And the hard drives would need to be able to keep up with that speed without any errors, which is hard to make happen.
You would have all those problems, and still, you wouldn't be anywhere near as fast as a modern SSD which can be bought for pretty cheap these days.
Basically, the engineering behind HDDs is already so fine-tuned at this point, further progress on HDD technology is becoming exponentially more expensive, the rewards are shrinking, and SSDs have already achieved much better results.
@@firstnamelastname-oy7es I don't think this would be an issue. Think about this: over time, track density has obviously increased by many orders of magnitude. Originally, drives like the IBM 350 had a density of 2000 bits per square inch; drives nowadays have densities of 1 terabit per square inch or greater.
Thus we have MANY more tracks in a modern drive than in a drive back in the day. It would simply be a matter of making a multipoint read head in a strip, with the furthest tip of the strip having multiple points (or instead of a strip, you'd have say 3-5 read heads, or more, all in a line), and you could write to multiple heads at a time. Essentially, you could format tracks differently, whereby tracks were thicker than they used to be, but files and data would get broken up into 3-5 pieces or more. Or simply have 4 or more arms with read-head strips writing to the drive at the same time (say at cardinal points on the drive). Perhaps you could even have the read arm be a straight bar of metal passing over the entire drive, fixed in place to the chassis of the drive, with multiple actuators along the arm.
Imagine if we had 5.25" HDDs today.
Based on today's bit density we could get 40TB per drive.
Would be nice if SATA SSDs came in the 3.5" form factor. I don't think I'll ever be able to afford to make the 288TB in drives I have now solid state, ever.
@@timramich The reason they don't is the market share wouldn't be high enough.
2.5" can be used in laptops and desktops (with an adapter plate).
Heck, we don't even need the full 4" of length on 2.5" SSDs.
If you open them up, the PCB inside is only 1" in length at times.
I ordered the X24 version from Amazon. Does it have more power and speed?
Thank you tech 1
It's pretty simple to go fast; it just requires a quicksort combination as such: 9 - 8 + 7 - 6 + 5 - 4 + 3 - 2 + 1 (and '0' is a stop bit in general; it's a universal marker, because if a bit is ever zero, our plans to go fast are brutally foiled). So, let's start easy with a seven-segment combination in quicksort; this is actually a Jacobian determinant matrix operation from chapter 9 of Calculus 4: 111 110 101 100 011 010 001 (and 000 means stop). It's very effective because the floating point and integer operations are rated at 80 gigaflops per core.
Is it just me or does it seem like there have been so many advances in HDDs lately? I had thought that with time we would all just go SSD-based for everything as a "simpler" solution, but I find myself still buying HDDs all the time, because for large media storage, or just storage that isn't speed critical, they are king.
Of course. All those crashed UFOs were good for something.
They have to innovate to keep up with SSDs.
Are these even available to regular consumers? I've been keeping an eye out for some, since I can get a full Epyc system for just under $2000. Figure it's time to build a good server now and invest in proper, great HDDs.
Oh boy. I sure hope we get SATA 4.
I guess we also need RAID-Z4 and RAID-Z6 now.
It's a modern Quantum Chinook!!!
There's an easy solution: bring back 5.25" bay HDDs, with plenty of room for platters, or an entirely SSD-based one; LTT showed off a prototype for a 3.5" bay, so no reason it couldn't work there too.
Plus it'd be good incentive to bring back cases with 5.25" bays (barely any new ones these days). It'd give us much more reason than we have now, and we could still have the benefit of an optical drive or two (DVD-RW & Blu-ray) without having to resort to an older case or clutter on the desk.
Wouldn't mind a new version of SATA, or a combined SAS & SATA controller on boards, to bring that back up to reasonable levels in a PC.
Would love to have data on RAIDZ vdev build times with these drives.
Well done Seagate.
For something on the drive itself, single-actuator dual-surface striping ought to be feasible: buffer two tracks in a cylinder and read/write them simultaneously. That would be much easier for the drive logic to handle.
Europe isn't the only place that uses the metric system (part of the SI system); the entire American continent (except the USA) has used it for a long time. In fact, almost the entire globe has!
I was looking to equip my new storage with these, as I remembered this video, but I couldn't find them to make a purchase.
I think you could also configure your 12 disks (24 logical halves) as an 8×3 arrangement: three vdevs of eight each.
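A minimal sketch of that layout, assuming each dual-actuator drive shows up to the host as two independent LUNs (the round-robin policy and all names here are illustrative, not anything the vendor ships): assigning LUNs round-robin across three vdevs guarantees the two halves of one physical drive never share a vdev.

```python
# Sketch: spread 24 LUNs (12 dual-actuator drives x 2 halves) across
# three 8-wide vdevs so no vdev holds both halves of one physical drive.

def plan_vdevs(num_drives=12, num_vdevs=3):
    # Each physical drive exposes two LUNs, labeled (drive index, half).
    luns = [(d, h) for d in range(num_drives) for h in (0, 1)]
    layout = {v: [] for v in range(num_vdevs)}
    # Round-robin: drive d's halves sit at positions 2d and 2d+1, and
    # consecutive positions map to different vdevs when num_vdevs > 1.
    for i, lun in enumerate(luns):
        layout[i % num_vdevs].append(lun)
    return layout

layout = plan_vdevs()
assert all(len(members) == 8 for members in layout.values())
for members in layout.values():
    drives = [d for d, _ in members]
    # No physical drive appears twice within a single vdev.
    assert len(drives) == len(set(drives))
print("three 8-wide vdevs; halves always split across vdevs")
```

The point of splitting the halves is that a whole-drive failure then costs each affected vdev only one member, instead of two in the same vdev.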
A mechanical drive capable of saturating a SATA III bus? It's about time!
Eh, I would have thought these would be firmware-managed dual actuators. I guess the SATA ones are?
I've seen one or two of these pop up and... Yeah literally one of these would be better than my 3 cheap SSD pool at dumping backups.
Now restoring them might be another matter but all else being equal I'd rather get the backup written quicker than restored quicker as a home user.
Restoring backups is also mostly sequential, so these would be fine.
They need to bring back the Bigfoot-sized drives.
Wendell: "Easy peazy!"
Me: *brain smoking* "Bzzzzt!" *sounds and smells of crackling bacon* "ERROR 420!"
My drives(in Ralph Wiggum's voice): "I'm degraded!"
I'd be curious to see how these drives perform in a Ceph array. Even if one half of a drive dies, it could help a datacenter limp along, or allow higher levels of redundancy in the same hardware footprint.
I think RAID60 might be the best way to use these: it covers the two statistically expected HDA failures across the entire array. You get the performance of striping and still only two HDAs of overhead per RAID6 group.
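As a rough capacity sketch of that RAID60 idea, assuming 12 dual-actuator drives of a hypothetical 14 TB each (so 24 LUNs of 7 TB) arranged as two 12-wide RAID6 groups striped together; all sizes here are illustrative:

```python
# Rough RAID60 capacity math for dual-actuator drives.
# Assumed example: 12 physical drives of 14 TB, each exposing two 7 TB LUNs.

def raid60_capacity(total_luns=24, groups=2, parity_per_group=2, lun_tb=7):
    per_group = total_luns // groups          # 12 LUNs per RAID6 group
    data_luns = (per_group - parity_per_group) * groups
    usable_tb = data_luns * lun_tb            # capacity left for data
    parity_tb = parity_per_group * groups * lun_tb  # capacity spent on parity
    return usable_tb, parity_tb

usable, parity = raid60_capacity()
print(usable, parity)  # 140 TB usable, 28 TB (two whole drives) of parity
```

One caveat worth noting: each RAID6 group tolerates two failed LUNs, so if both halves of one physical drive landed in the same group, a single drive death would consume that group's entire failure budget; splitting halves across the two groups avoids that.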
So... essentially, now it's become possible to RAID-0 a single physical drive and lose all our data, all in a single occupied SATA-3 link!
I wonder if each read head could be controlled independently.
That "30 year old" Seagate holds 9 GB, are you sure about that? I have a similar model (size wise) that just holds 40 MB, and it was a top of the line HDD back in the day.
Now all we need is some kind of redundancy awareness in other, less-technical disk pooling products (e.g. Unraid, Storage Spaces, etc) so that they know not to store two copies on the same disk, and this would actually be a hit for small video producers.
...oh who am I kidding this will NEVER be supported properly lol
I noticed the two heads are on separate platters.
Hmm wouldn't these theoretically have twice the failure rate because of twice the moving parts?
Let’s wait for Gen2 of this technology…
These are already the third generation of dual actuator drives from Seagate!
To be more clear: “Let’s wait for Gen2 of products that actually enter the mass market (maybe even consumer levels of volume) and then look at it after a few years again”.
I’ve personally been burnt a few too many times.
The first drives of this type from Seagate entered the server market in 2017, so this is like gen3 or gen4.
They have been successful enough in the server market that WD finally got off their collective backside and launched their first dual-actuator drive, the Ultrastar DC HS760, in January 2023.
Not sure those drives make a lot of sense at current pricing; I see them listed at $650 for the 16TB model.
So they use twice the power and cost twice as much as single-actuator drives of the same size.
You could buy twice the drives and get double the storage space with regular drives at the same speed.
Whilst the concept is cool, I think they should have done the "RAID" internally. Seriously, how much more complicated can it be to set up a software RAID via the drive's own firmware, versus physically building two separate drives into the same enclosure? Even better, make it a "hybrid" SSHD by adding a NAND cache to smooth out the internal RAID transfers. I'd just call that lazy.
That's great, so now it will run like absolute trash because the onboard controller is nowhere near the power of a RAID controller? Why not have the RAID controller on the RAID card (or software RAID in the OS) actually do the RAID instead?
@@marcogenovesi8570 Why would it run like trash? HDD controllers are by their very nature built to manage data transfer at high speed, and we're talking about a continuous data flow of ~500MB/s if utilising a NAND "SSHD" type system. It's exactly the same thing, except now Y-splitting to two heads (SATA SSDs do it when sending data to different NAND cells on the fly, and normal HDDs do it when sending data to different sectors on the fly). It's the same workload.
Awesome post!
However, I cannot find any source for recertified/refurbished MACH.2 drives.
Because it's new; refurbished drives are old drives.
Wendell & ZFS is like, "Say the line, Bart!"
Wonder what the capacity of a modern 5.25" HDD would be :)
This seems like such a natural progression of the tech so why wasn't it done ages ago?
So... can we get a MAMR version? Boost that puppy up to 24TB+ per drive.
Mo heads, mo better