M.2 is hard to cool in a server? Gee, I guess the fans just make all that noise and move no air. I can see them being hard to cool in a laptop. I don't get how those thermal nightmares of proprietariness ever got so popular.
@asdrubale bisanzio It doesn't "just work." The drives and the lanes are too fast for CPUs. That's why they're coming out with storage processors: silicon built for storage. Level1Techs went through this in detail, but with no details as to how to actually make it work.
I'm a bit surprised such a high percentage of server SSD sales are M.2. Is that all the boot drives in systems that don't need fast storage, where you just chuck a small M.2 SSD in along with a bunch of hard drives?
Microsoft is big on M.2 as primary storage. They're not slow at all. The thing about M.2 is they're small and still very fast, so you can put a bunch of them in a single box and get a ton of performance. E1.S is targeted at the same use case but without all the problems of M.2.
@@tommihommi1 Yep, SSD boot drives for the win. SATA DOMs not so much; they're a much better fit for hypervisors which do not write anything - we run ESXi on them. For a regular OS, SATA or M.2 drives are the best: much faster and much more reliable than HDDs. Some vendors, like Kingston, even have enterprise drives built specifically for boot devices.
@@ServeTheHomeVideo I know, I'm just wondering what happened to NF1 and in what ways it didn't satisfy the industry. Didn't follow these things from inside ...
Admittedly, that's top power. You would need this much only at peak IO load, which is unlikely to last for long and is also unlikely to hit all the drives at once (a 40-wide stripe??? I recommend against it).
@@bronekkozicki6356 I don't think that you necessarily need a 40-wide stripe to be able to max out the drives at full load 100% of the time. It depends on what you are doing with it. For example, if that is the scratch directory/head node for an HPC cluster, and you have hundreds of users running thousands of jobs against it, I can DEFINITELY imagine hitting the array hard enough to be very close to the full load/full power limit. Besides, even if that WEREN'T the case, you still need to size the power, and therefore the cooling, to support that peak, so that you don't get what effectively becomes "rolling blackouts" at the hardware level (be it the CPU, RAM, networking, or storage subsystems) because the PSUs weren't sized adequately for the peak load.
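For anyone who wants to put rough numbers on that sizing argument, here is a minimal Python sketch; the per-drive and per-system wattages are assumptions for illustration, not figures from any spec:

```python
# Back-of-the-envelope power/cooling sizing for a hypothetical 40-bay EDSFF server.
# All wattages below are illustrative assumptions, not spec values.

DRIVES = 40
WATTS_PER_DRIVE_PEAK = 25      # assumed peak for a single E1.S-class device
WATTS_PER_DRIVE_IDLE = 5       # assumed idle draw
OTHER_SYSTEM_WATTS = 900       # assumed CPUs, RAM, NICs, fans at load

peak_storage = DRIVES * WATTS_PER_DRIVE_PEAK
idle_storage = DRIVES * WATTS_PER_DRIVE_IDLE
total_peak = peak_storage + OTHER_SYSTEM_WATTS

# PSUs and cooling have to be sized for the peak, even if it is rarely reached.
print(f"Storage peak: {peak_storage} W, storage idle: {idle_storage} W")
print(f"System peak to size PSUs/cooling for: {total_peak} W")
# 1 W is roughly 3.412 BTU/h, the usual unit for datacenter cooling loads.
print(f"Cooling load at peak: {total_peak * 3.412:.0f} BTU/h")
```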
The hot-swappable variety of devices sounds super awesome though. I hope homelabbers get to play with this stuff someday without it being all super proprietary. How far in the future do you think the connections on all of these will go optical/photonic?
The lack of standardization on the latch will set the adoption of these drives back by years. Yes, it was a huge mistake not to specify how they would be latched. Once again the computer industry creates a nexus of chaos.
Patrick, do you have another channel where you talk about making brisket? I'm currently wishing you were based in Oxford, UK rather than the Valley, because those were good-looking hunks of meat. Oh well, I guess I'll see if I can find good-looking hunks of meat locally...
U.3 will rule! Sorry, but Patrick is wrong on the SSD future. OEM servers, like those for Facebook, Google, and Microsoft, may go for these new EDSFF form factors. But the big 3 in retail servers - HPE, Dell, Lenovo - are not expected to switch to E3.S anytime soon. Most likely they will go the U.3 route, which supports NVMe, SATA, and SAS. And yes, people are still using HDDs in servers, so for the manufacturers compatibility with all of them would be the number 1 priority. Has anyone heard of upcoming HPE, Dell, or Lenovo servers with the E3 form factor?
The Fortune 50 company I worked for cried that they had limited funds when it came time for reviews and raises! Of course. But they were buying/leasing rows and rows of flash storage arrays from IBM and EMC, and I know those were not cheap. They used to tell me: see those 10 storage arrays, they are all being replaced with this one new array. That and the new mainframes - they sure acted like money was no object when it came to new hardware in the data center. I'm sure a lot of it was leased, but just saying. That's not to say they were not buying servers, because they sure were; they had an order for 6,000 at one point. Oh well, I'm retired and that's all I'm concerned with. My point is that new mass storage is always getting cheaper per GB or TB (whatever), runs cooler, takes up less room, and changes its form factor in a shorter amount of time than ever before.
Seems to me that money and profit are the drivers here: obsolete the previous standards and force us to replace everything we currently have and use without issues...
It's solving a purely engineering problem, because we ARE having issues. M.2 is not hot-swappable, is low capacity and low power, and cannot be cooled properly. U.2 takes up too much space, restricts airflow to the rest of the server, and uses an old connector that cannot be scaled further. The EDSFF family solves all of these problems and takes things much further with E3 and the ability to connect all kinds of other devices.
The core issue I see here is that each OEM might use different mechanical properties for NVMe drives, which effectively makes them vendor-locked. In current and previous standards this is not the case. However, PCIe Gen5 must come as quickly as possible to solve many issues. We have 640 TB of all-NVMe PCIe Gen4 as part of our Ceph storage. These drives have no issue being installed in any hardware since they are the 2.5" form factor.
@@pstoianov EDSFF drives are also unified. OEMs can only change the latch mechanism to fit their chassis, but how that latch attaches to the drive is written in the specification, akin to how we currently have screw holes for sleds. The dimensions of each form factor are also specified.
Also, not only are those PCIe cards hard to service and unpopular, they were horribly expensive back just before all of these SSDs were out on the market. Now look: you can buy drives cheaply. Just beware of knockoffs, fakes, and counterfeits. They do exist now and they are horrible. They could only be worse if they came crammed full of malware!! Though you will still get less than the capacity on the label, it sadly doesn't take much to store malware, so it can be hiding anywhere. This malware is not just annoying; it might steal credentials and login cookies, potentially causing a lot of financial misery. If you ever suspect you may have run anything like that, unknowingly or otherwise, keep an eye on ALL of your accounts!
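On the counterfeit point, the usual trick is to spot-check whether the claimed capacity actually holds data. Below is a rough, destructive Python sketch of that idea; the device path, sizes, and sample count are placeholders, and a real test would use O_DIRECT or drop caches so reads don't come back from RAM:

```python
# DESTRUCTIVE sketch of a fake-capacity spot check: write a recognizable pattern
# at offsets spread across the drive's claimed size, then read it back. Fake-capacity
# devices typically wrap around or drop writes past their real flash, so late offsets
# fail verification. Illustrative only; it erases data at the tested offsets, and
# buffered IO means readback may come from the page cache (use O_DIRECT in practice).
import os, hashlib

DEV = "/dev/sdX"                 # hypothetical block device - set deliberately
CLAIMED_BYTES = 2 * 1024**4      # e.g. a "2 TB" label
BLOCK = 1024 * 1024              # 1 MiB test block
SAMPLES = 16                     # offsets spread across the claimed capacity

def pattern(offset: int) -> bytes:
    # Derive a unique, repeatable block for each offset.
    seed = hashlib.sha256(str(offset).encode()).digest()
    return (seed * (BLOCK // len(seed) + 1))[:BLOCK]

offsets = [i * (CLAIMED_BYTES - BLOCK) // (SAMPLES - 1) for i in range(SAMPLES)]

fd = os.open(DEV, os.O_RDWR)
try:
    for off in offsets:                          # write phase
        os.pwrite(fd, pattern(off), off)
    os.fsync(fd)
    for off in offsets:                          # verify phase
        ok = os.pread(fd, BLOCK, off) == pattern(off)
        print(f"offset {off:>16,}: {'OK' if ok else 'MISMATCH - capacity suspect'}")
finally:
    os.close(fd)
```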
Can confirm the unequal lengths of pads on the connectors are necessary for hotplug. You find the same arrangement on USB, etc. Bad things can happen when you attach, e.g., data before ground; for example, large amounts of current might unexpectedly flow to ground through the ESD protection diodes of the data lines.
+1 Ground is the first connection you make and the last one you break when unplugging. Same thing with power connectors designed to be plugged/unplugged under load.
But can you just arrange the pins on the female connector to be at different lengths?
For example, since all pin lengths are the same on the NVMe SSD, can you put a connector with different pin lengths on the server itself, to achieve the same effect?
Data center: I want this technology now.
Home users: How much is this going to cost me? My motherboard already comes with two M.2 NVMe slots for free. I don't need anything else.
Manufacturers: Is there a licensing fee associated with this new interface? Because M.2 doesn't cost us a penny.
This really had me excited until I heard that the latching mechanism is left unspecified by the standard.
That's really dumb; the whole point of a standard is to prevent OEMs from doing stupid and/or non-interoperable things and then charging printer-ink prices for their locked-down variant.
Even with a standard they will find ways.
Intel SFP transceivers are keyed, for example.
I'd bet that the OEMs fought tooth and nail to not have a standard latch for exactly that reason. Dell will offer a 24 slot 1U, but if we order it with only 2 drives the other 22 slots will be covered with useless snap-in plastic blanks. Want to add more storage? Got to buy it from Dell for their proprietary latch.
When retiring servers, I strip out: CPUs, RAM, and any sleds/caddies (and sometimes PSUs). Experience has taught me that, in the long run, the sleds end up being the most precious of those parts.
@@JonMartinYXD don't buy from Dell then. Other vendors will sell you servers full of drive trays even if not a single one is populated. So this "problem" is not really a problem.
The latch is part of the chassis. Who cares if it's proprietary? They come with the chassis. The important part is that the drive itself is a standard form factor and interchangeable even if the latches are different.
@@JonMartinYXD this is standard practice for caddies, and it has never really been a problem.
I'm calling it now. Someone will eventually put gamer-style RGB LEDs on the latch mechanism.
So long as we get STH Blue as an option.
This will make the devices go EVEN faster. Imagine that DPU with RGB: now instead of that 75 watt limit it can be extended to 80 watts :p
@@johnmijo plot twist: it uses so much power that there is actually less voltage haha.
The E1 style comes with LEDs for health and activity. We're halfway there.
LEDs are used in switches, blades, etc. even today
I really appreciate the transparency about the fact that the drives were donated/loaned to you guys. I certainly wouldn't consider it sponsored content, but I do like that you are upfront and err on the side of caution. One more reason to feel confident trusting STH (don't really *need* more, but it doesn't hurt).
Well, I also had the option to use an Intel U.2 drive and an old Pliant SAS drive, but I used Kioxia/Toshiba because they were able to get the EDSFF drives. I also told Kioxia I was OK if they put labels on the EDSFF drives. This took them a month and a half, but I had been trying to get a collection like this for two months, so I wanted to give them exposure for it. Felt like the right move to mark it as sponsored.
@@ServeTheHomeVideo You may reconsider having a lift & a vehicle you can service yourself once your E/V starts needing attention. 💰💰💰💰
7:10 the "Right sequence" as I understand it for hotswap is "ground connects first", a design you'll also see on grounded wall plugs.
Which, conveniently, also means ground disconnects last. So your arm is never the easiest route for that erratic current to take.
That and power to the logic so the data / status lines are stable.
Thanks for keeping us up to date. If a server will accommodate 44 x4 PCIe drives, that is a total of 176 PCIe lanes just for NVMe; I wonder what kind of CPUs it will take not to use multiplexers. And finally Patrick was granted his wish: toolless, screwless drives :)
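A rough sketch of that lane math, with the platform lane counts as assumptions rather than datasheet values:

```python
# Quick lane-budget arithmetic for the scenario above; the platform IO-lane
# figures are illustrative assumptions, not quotes from any datasheet.
drives = 44
lanes_per_drive = 4
needed = drives * lanes_per_drive          # 176 lanes just for the NVMe bays

for platform, available in [("128 IO-lane platform", 128),
                            ("160 IO-lane platform", 160)]:
    gap = needed - available
    verdict = f"short by {gap} lanes" if gap > 0 else "fits"
    print(f"{platform}: need {needed}, have {available} -> {verdict} "
          f"(before NICs and boot drives)")
# Either the drives run at x2 instead of x4, or PCIe switches/multiplexers make
# up the difference - exactly the trade-off the comment is pointing at.
```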
I'm so excited for when this starts hitting the second hand market and I can snatch some up! Oh man!
Wait..... does that mean cheaper u.2 and u.3 SSDs? sign me up!
The question is: how cheap? Especially if capacity is prioritized over performance.
Take those E1.L NVMe drives, for example; how much would a single one of those be expected to go for?
@asdrubale bisanzio well, QLC are not that fast. Like those Intel D5 E1.L - 30TB and 7800 write IOPS.
@asdrubale bisanzio You are thinking like a consumer, not a server admin. Servers in demanding environments can always use more.
21:00. That is the thing I hated most about the HP G8 servers. They needed the special drive sleds with the microchips. If you used a third-party sled or just stuffed the drive in there (because those sleds were stupidly expensive), the iLO would keep the fans running at 100%.
It's so nice to see the heatsinks actually cover both sides of the PCB unlike pretty much all the consumer M.2 drives that just can't cool half the NAND flash because the form factor doesn't allow for it.
I thought it was the controllers that needed the cooling, not so much the memory itself. In fact, I could be wrong, but I recall seeing somewhere that they had an "ideal" temperature and cooling them too much was actually suboptimal.
Vendor-specific latches would serve the same function as vendor-specific carriers. Most storage vendors use custom drive firmware. Some vendors won't even read a drive with the wrong firmware. Having worked tech support for storage back when we used spinning rust I can tell you there are some weird behaviors when you start dropping random drives into a carrier.
I'm very disappointed that in the future, Patrick will no longer have chances to get excited about toolless drive trays in videos :(
Idk think that will continue for better or worse
Patrick is right: the latches aren't about vanity, they're all about greed.
theoretically it could allow more imaginative layouts or configurations of devices by changing the latching mechanism, but that probably won't play out in reality.
Right. MY PATENT means you PAY for MY carrier.
Still seems like too many form factors. Have to imagine all this long, short, single and double height stuff will normalize to one or two choices.
With the way the current market is fragmented, i wouldn't hold my breath on that one.
The Amiga 600 in 1992 came with a 2.5" drive. I think they offered versions with 0, 20, and 40 MByte of storage, and they sold pretty well. To be honest I had never heard of 2.5" before, and I was even running several computers using 5.25" drives. I even have a single 8.0" disk drive with five floppies somewhere, meant for the Altair systems.
I'm still waiting for this to take over.
Big moves lately.
@@ServeTheHomeVideo I'm still licking my wounds on NF1! I will believe new storage standards 10 years after I hear about them!
This video is now a year old. Any indication yet if this will win the format war with Samsung's NGSFF (also known as NF1 or confusingly M.3)?
The world is EDSFF from now on in new SSD form factors.
Thanks to this transition you can get enterprise U.2 SSDs now for pennies on the dollar. Back in the olden times we would short stroke spinning rust drives to boost performance in RAID arrays.
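For readers who never did it: short-stroking just means partitioning only the outer portion of the LBA range, so the heads stay in the fastest zone and never seek across the rest. A tiny sketch with made-up numbers:

```python
# Short-stroking sketch: only the first X% of LBAs (the fast outer zone of the
# platters) gets partitioned, trading capacity for lower seek times and higher
# sustained throughput. The drive size and fraction here are illustrative.
disk_tb = 4.0          # nominal drive size in TB
fraction = 0.25        # use only the outer 25% of the LBA range

usable_tb = disk_tb * fraction
print(f"Partition the first {usable_tb:.1f} TB of each {disk_tb:.0f} TB drive; "
      f"the remaining {disk_tb - usable_tb:.1f} TB stays unallocated.")
# e.g. with parted you would end the partition at '25%' instead of '100%', so the
# heads never have to travel across the sacrificed inner zone.
```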
I see this leading to big increases in PCIe lane provision on server CPUs - using the full capacity of single-wide E3 drives means you need to dig into the extra unofficial PCIe lanes on a dual EPYC server, and that's before you account for the network connectivity or a boot drive or two.
@asdrubale bisanzio Normally a 2-socket EPYC has 128 lanes for system IO, with 128 reserved for inter-socket comms (so 64 each way), but it's possible for OEMs to configure them to use only 64 lanes between the sockets (32 lanes each way) to free up some more lanes for IO.
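A rough sketch of that trade-off using the figures commonly cited for Rome/Milan-class parts (treat them as assumptions; the exact link-count options vary by platform):

```python
# Rough IO-lane math for a 2-socket EPYC-class box, using commonly cited figures
# as assumptions: 128 SerDes lanes per socket, with each xGMI inter-socket link
# consuming 16 of them on each socket.
LANES_PER_SOCKET = 128
LANES_PER_XGMI_LINK = 16

def io_lanes(xgmi_links: int, sockets: int = 2) -> int:
    # Lanes left for PCIe IO after the inter-socket links are carved out.
    return sockets * (LANES_PER_SOCKET - xgmi_links * LANES_PER_XGMI_LINK)

for links in (4, 3):
    print(f"{links} xGMI links: {io_lanes(links)} PCIe lanes left for IO")
# 4 links -> 128 IO lanes (the usual default); 3 links -> 160 IO lanes, which is
# where the "extra" lanes for dense E3 storage come from.
```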
The big reasons I'm imagining for not having a single standardized latching mechanism are (a) so a vendor can fit a latch containing an actual keyed lock for scenarios where that kind of physical security measure is relevant, and (b) to allow robotic handling of the drives for applications similar to existing large tape archives.
One reason for the custom latch per manufacturer is clearly greed. Now you can't buy your storage directly from Seagate/WD/Kioxia but have to pay the Dell/HP/NetApp tax which they no longer get without the caddies/trays.
There is a good chance drives in the future will get locked in other ways in the name of hardware security.
It's another move to stop ordinary people from DIYing things. These fake shortages were the last straw that killed the PC market. Now if they make people buy drives from Dell, to work only in Dell servers, they can just make the drive cost 10x as much, and people won't be able to go get something like a Supermicro and build a system. They will be unaffordable right up to the day they are obsolete. And then, even if they were affordable, no one wants a drive that's been used that much.
What tax? The latches come with the chassis, it looks like. So you can buy any drive you want, screw on the latch, and use it.
@@ServeTheHomeVideo Can consumer mobos boot from U.2 & U.3 SSDs on an SFF adapter or PCIe adapter?
I know they are not some exotic thing that we can never afford. In fact they are priced very similarly to other drives per gigabyte. These actually seem to be very similar to M.2 but not the same.
They haven't walked away from the form factor because of something like industrial inertia. The 2.5" form factor is ubiquitous across the industry's fabrication processes, current computer chassis (including desktop cases), and even contract obligations. Things like that take a long time to die off. It's like how, even though the west coast is on fire year-round, the move to something green on a macro scale takes decades. Or how the music CD is still being produced even though nearly nobody we know in these tech circles uses CDs anymore.
22:35 Interesting you mention this. I remember taking apart a netbook probably 4 years ago and finding a daughterboard - but not just any daughterboard: this had a dual-core (or was it single-core) hyperthreaded processor and 1-2 GB of RAM. I looked it up and it was actually used in a "supercomputer", I forget by whom. It would be pretty cool to be able to do this in a hot-swap fashion. Maybe a new type of blade server, with "Xeon U" processors that are just laptop U- or Y-class processors on a compute stick with an M.2 drive and some SODIMMs.
Yes, they slotted dozens of these things into a 2U chassis, and with I think 40 Atom D525 processors they (probably) had the most core-dense platform out there - well, besides a Larrabee/Knights Ferry system.
I could be wrong about the core count; it could have been 10 of these Atom boards and I'm just conflating the thread count with the processor count.
If I recall correctly, they were Dell Inspiron Mini 910/1010/1210 boards.
The reason they went with customizable latches looks to be flexibility. If we look at the proposed and actual chassis designs, there's quite a variety of them. 1U is obvious, but we also see 2U with two rows of E1.S drives. Not to mention all servers have very different face plates, especially OCP ones. There's probably no way they could make a unified latch mechanism that would fit all chassis designs. Or even if they did force some latch design on everybody, it would quickly lead to them constantly changing the spec to meet various use cases.
Also, there seems to be a move to make datacenter operations automated through robotics. They could design special latches specifically suited for robots and wouldn't need to change the specs.
There are certain hyper-scale clients that are also asking for a unified latch design, so customer requirements are not the primary reason there isn't one. Even the two-row E1.S systems could use the same latch as 1U, and indeed already do (we covered the Supermicro EDSFF BigTwin as an example).
@@ServeTheHomeVideo Aren't those guys usually all in OCP? I'm sure they can design a unified OCP latch for themselves and let everyone else do their own thing.
Great video and review. Thanks!
As a former admin, I hated talking to the CEO and CFO about the new systems being faster and smaller but not cheaper. It was a losing battle of cost versus benefit. I think that might change if the chip shortage catches up to where it was just a few years ago. In short, I'm glad I'm not an admin anymore.
"New" is never cheaper. If phrased correctly, one might be able to frame it as "cheaper" per some unit, but the check the PHB's have to write for it isn't getting any smaller. I was very shocked my penny grubbing former employer dropped nearly have a mil (over two years) for new, almost top of the line blade systems and storage (SAS, but, ok.) I'd spent over a decade having to buy other people's "junk" on ebay, one at a time to stay below the must-have-CEO-signature line.
Two years in, they started regretting going down this path as they're now committed to buying more of these blades vs. a handful of 1U/2U servers from wherever. And because the blades are uber-dense, all one can do with them is add them to vcenter and run whatever can be virtualized, or dedicate an entire $100k blade to openstack that nothing will use. For a development lab, numerous smaller servers provides significantly more flexibility.
@@jfbeam I managed to convince them that, because our production was just-in-time, it meant efficiency gains: the production team would spend less time waiting for parts. They also multi-skilled the production team so that parts could be pushed through production faster. A gain of 1 minute in the office means a week on the floor.
Somehow I overlooked this video. Glad I caught it. Good information for this non IT Guy. Good tease with the Brisket......lol.
At Seagate, I first saw our 2.5" drives about 1993. My first comment was "you can put it in your shirt pocket". The drives actually set off a bit of a bug-a-boo: because they were popular in notebook computers, we started to see a lot of failures due to "head slap", a high tech term for dropping the notebook. The resulting G force could cause the head to "slap" the face of the disk and pull off magnetic particles into the interior of the drive. This would contaminate it, and shortly destroy the drive. It was mostly solved by paranoid software. The drive controller would pull the head back into the landing zone at every opportunity.
Only if your shirt has a pocket! :-)
True... uneven pins are for hotplug, like USB, where the power pins are extended ahead of the data pins.
"I know engineers they love to change things." Dr Leonard McCoy
At 18:57, do I see a light blue outline of an "old style" NVMe connector? Because if the connectors were lined up like this, it might be possible to create a hybrid chassis/caddy which would work with *both* existing NVMe devices and new E3 & E1 devices. Assuming appropriate spacing of the disks, of course.
The E1.L's I deal with are referred to as 'rulers'. I feel it was a wasted opportunity to call them a drive stick.
15:15 You vs. the drive she told you not to worry about
I was wondering what had happened with the ruler format. Cool to see the evolution.
It is still around, it just appears as though E1.S is going to be the more popular format. The market share numbers were in capacity (PB) so if E1.L is designed for higher capacity per drive, and is still much smaller in total capacity, that means a much smaller set of systems will use those drives.
Ruler drives were also ridiculously expensive and had low availability. I've been looking at the E1 format for quite a while now.
Why not create exactly one form factor, like E1, and make it cage-less?
If you want E3, you take 2 E1 slots (in height) and use 2 connectors, to get more power, more connections, OR cross-connection.
If you want double or triple cooling, you take 2 or 3 slots (in width) and simply don't use the excess connectors.
Now you have fewer form factors and all forms are compatible with all servers, plus more possibilities for power, cooling and, most important, connections. PCIe already supports this.
Okay, I'm trying hard to justify the custom latches here, but what if the custom latches are for the tiniest of tiny status displays that show drive numbers, error codes, mount status, power status and such? Then you'd need a custom latch for this, and in the future you'd need a connector on the board that reaches into the latch. That's some "good" reason for the weird latches.
But yeah, not standardising the latches is really stupid.
Honestly, I would have liked to see them dedicate 2-3 pins of the main plug to status signals, and have the standard specify that they go to visible x-colour LEDs located on the front of the drive with no additional processing through the drive controller, i.e. controlled by the backplane.
No extra connector needed, and your drive status is the same regardless of whose server it's in.
The other thing is that drives usually have serial numbers on the opposite end from the connector (visible at 9:04 for example), which would be perfectly visible with the drive installed... if it wasn't for the drive sled completely blocking visibility of it on every manufacturer that I've seen.
If we're really going to connect front to back with some wires, it'd probably be better to connect it over a 4-lane SPI connection and just drive it with a $0.50 8-bit microcontroller in the latch itself. Then the OS could decide what the drive should do with the latch: displaying XYZ or whatever, reporting on drive heat, showing RAID rebuild progress for that disk, drive hours, SMART information, or using all 40 drive displays as one big LED screen for generic server status info.
Small screens cost like... $20, but some simple LEDs for less than a dollar could also be used when trying to save a buck.
These plugs aren't mission critical, so these things should be allowed to fail without consequences.
In that context I'd understand it completely!
But this... this... nahh.
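To make the idea concrete, here is a host-side Python sketch of the kind of tiny status frame the OS could push to such a latch controller; the field layout, state codes, and function names are all made up for illustration, not from any spec:

```python
# Host-side sketch of the small status message the comment imagines the OS pushing
# to a microcontroller in each latch (field layout and codes are invented here).
import struct

# <BBBH : drive slot, state code, rebuild %, temperature in tenths of a degree C
STATUS_FMT = "<BBBH"
STATES = {0: "OK", 1: "REBUILDING", 2: "FAILED", 3: "LOCATE"}

def encode_status(slot: int, state: int, rebuild_pct: int, temp_c: float) -> bytes:
    """What the OS/BMC side would clock out to the latch (e.g. over SPI)."""
    return struct.pack(STATUS_FMT, slot, state, rebuild_pct, int(temp_c * 10))

def latch_firmware_decode(frame: bytes) -> str:
    """What the latch-side microcontroller would render on its LEDs/display."""
    slot, state, pct, temp = struct.unpack(STATUS_FMT, frame)
    return f"slot {slot}: {STATES.get(state, '?')} {pct}% {temp / 10:.1f}C"

frame = encode_status(slot=17, state=1, rebuild_pct=42, temp_c=48.5)
print(len(frame), "bytes on the wire ->", latch_firmware_decode(frame))
```

A few bytes per drive like this would be trivial to carry, which is part of why the non-critical nature of the latch link makes the idea plausible.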
@@someonesomewhere1240 they already did. EDSFF spec requires all drives to have their own status LEDs
Mad BBQ skills mate!
Excited to see what this is going to mean for VDI and Hyperconverged architectures.
I welcome this new server SSD-stuff!! Hope the 2nd-hand market is flooded with those U2/U3 drives!!
Before I watch this, I'm hoping that any new form factor will be 'self sled' with no install time and a push-push style of hot swap: no trays to screw in. Just unbox, slide it in, and push to lock. Then push to unlock and eject (push all the way in). Maybe a physical lock on the server that prevents push-push, but one lock that can lock/unlock the entire server.
Edit: you could probably do push-push with M.2, but that would be risky. There is no retention method other than friction, and these often have exposed contacts; I could see them touching the chassis and grounding in a spot not expected to have a trace, or working themselves loose if you had a single-sided M.2 flopping around in a slot meant for a two-sided one.
My complaint about moving away from SATA drives is capacity. I'll give up some speed for a 16TB drive. SAS drives (all that I've found) are small capacity, and the capacity is "odd" sized. With M.2 and SSDs, the more you use them the faster they crap out. I have mechanical drives (PATA) that STILL WORK 20 years later. As a home server user, the last thing I want is "new stuff" that won't last as long, is super expensive, gives up capacity, and/or is proprietary.
These new form factors allow for bigger drives than SATA. You can already buy ruler-format drives bigger than 3.5" HDDs - 30TB. As for reliability, SSDs long ago surpassed mechanical drives in that regard.
This is brilliant! Although I DO AGREE about those silly latches, there HAS to be a practical reason for not standardizing them.
Why can't servers just have EVERYTHING serviceable from the front? I guess having stuff in the back does add more things you can put in, but pulling an entire server out of the rack because a NIC or accelerator fails must suck. Yeah, they most likely don't fail as often as drives, but there has got to be a better way. (There ALWAYS is :D)
I really hope this will find its way back into desktops. The reason I kinda miss the SATA days is because it *always* was hotpluggable, regardless of motherboard, chassis, controller cards, drives or any other factors.
@10:02 At 15,000 rpm, the head of a 3.5 inch hard disk would have to read and write data reliably over platter edges moving at about ¼ of the speed of sound (a rough check follows below). I believe this is far more problematic than making the disk spin or its power consumption...
@14:40 The best combustion engines waste up to 65% of their power generating heat... I don't see any Tesla where the batteries are cooled by airflow... I'd like to believe we will stop using a thermal insulator to cool down servers and plug them straight into the A/C, or dip them in oil (some already do).
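A quick sanity check of that ¼-of-Mach figure, assuming a typical ~95 mm platter (the diameter and air conditions are assumptions):

```python
# Rough check of the 15,000 rpm claim above, using an assumed ~95 mm platter.
import math

rpm = 15_000
platter_diameter_m = 0.095          # typical 3.5" drive platter, an assumption
speed_of_sound = 343.0              # m/s in air at ~20 C

edge_velocity = math.pi * platter_diameter_m * (rpm / 60)   # circumference * rev/s
print(f"Edge velocity: {edge_velocity:.0f} m/s "
      f"(~{edge_velocity / speed_of_sound:.2f} of the speed of sound)")
# Comes out to roughly 75 m/s, i.e. about a fifth to a quarter of Mach 1.
```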
Liquid cooling video ETA in 2 weeks or so. Just filmed most of it.
You mention that next gen servers in 2022 will have these, but do you expect Power10 servers to feature this form factor?
Power10 is really like end of 2021 but rolling-out more in 2022, right? No reason IBM cannot use this.
Informative as ever. One thing I am wondering about: going through the evolution of drives, it's when you get to the E3s that the drive connector becomes a protruding bit of PCB. Is there some good reasoning for this?
Probably because at higher speeds the parasitic inductance of connectors becomes more and more of an issue. You can buy more expensive connectors to get around that, but there's little point to that when gold fingers are both better and cheaper. The only downside of them is that they do not survive many insertion cycles, which isn't a problem here.
I really doubt it, but I wonder if something like this will be coming at a smaller scale for the consumer market, because I'd really have an interest in these technologies and could also put them to good use in my upcoming projects.
I hope that E1.S will get adopted by the consumer market. Hot swap SSDs instead of M.2.
@@ServeTheHomeVideo mm
21:00 It's the "we made a new standard!" "Oh, now we have x+1 standards"
NVMe can be hot swapped, just not in a normal use scenario: when used with a rather rare adapter in an ExpressCard slot, which sadly is limited to just PCIe 2.0 speed and a single lane. Neat for old laptops that don't have USB 3.0 or better, but not applicable to anything modern. The worst part is the cost of high-capacity 2230 and 2242 drives beyond 512 GB being rather excessive.
What about home users, the ones that want to be on the cutting edge and need a lot of NAND storage? For me, the answer at the moment was U.2 NVMe, but now with the E1/E3 changes the form factor doesn't seem like something I would want to run in the future. I do think it seems fine for servers (as long as they manage to deal with latching), but what about end users for whom M.2 NVMe is not big enough?
does it look really similar to the flash modules of the EMC^2 DSSD?
Why not change the PCIe card form factor for enterprise altogether? Instead of the old bent-plate slot load on the back, just use E3, or maybe a doubled-up E1 (more power draw), for any NIC or GPU.
With the PSUs, storage, all this hot-swappable from the rack, just make as many components capable of that as possible. The only things left needing chassis intrusion would be the CPU(s), RAM, fans, and the motherboard itself.
So are isolinear chips coming out in a few years? The new form factors are starting to look like the computer storage devices from ST:TNG in 1987. These new form factors make high-speed hot swap easier to deal with than some of my servers with M.2 PCIe adapters.
This looks like way too many “standards”. How can there be half a dozen new standards in a single generational change?
U.3 was exciting because it made things simpler. Now there will be half a dozen different drive standards with possibly a dozen latching combinations. This just seems painful!
U.3 still had trays so that is not better on the latching side. Also, U.3 was always just a stop-gap standard. EDSFF drives were being deployed at scale before U.3. With EDSFF you can take the same E1.S PCB and in theory, use the drive in any E1/ E3 bay by just having the correct housing. I am not sure where folks thought U.3 was going to be the future but there have been some other comments to that effect as well. The EDSFF transition was decided before U.3 drives were out. Perhaps that is a good reason to have this video/ article since there is confusion.
These are mostly electrically interchangeable, just different thicknesses, like the different 2.5 inch drive thicknesses we had with HDDs. That's just a requirement to satisfy the different performance and density needs in the market.
The OEMs have to be able to charge you $50 apiece for the proprietary latch, and to justify the marked-up prices for OEM-branded drives with the latch preinstalled.
Thanks !
If you want to use them in your PC, you can with an adapter card very similar to the one used for M.2 NVMe drives. Note it is not the same card, just a similar one; they may both use a PCIe interface, but the slot is different, so they do not interchange with M.2. I got a 4 TB Intel P4511 and I am waiting for the adapter card to arrive so I can use it. I like that this one has TLC flash instead of QLC; the fewer bits per cell, the better. I would love to buy SLC if it weren't wickedly expensive! The TBW on this one is 3 or 4 petabytes. Of course that is just a rating; I may get a bit more out of it, I don't know. I am not going to be writing to it all the time. If anything, it will just last longer than a 4 TB QLC drive (especially one of those consumer models). The thing is that enterprise drives are not always insanely expensive compared to the consumer market. Mine was close to $200.
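To put a TBW rating like that into context, here is a small hedged endurance calculator; the 5-year warranty window is my assumption, and the actual terms are in the drive's datasheet:

```python
def dwpd(tbw_tb: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    """Drive Writes Per Day = rated TBW / (capacity * days in the warranty window)."""
    return tbw_tb / (capacity_tb * warranty_years * 365)

for tbw_tb in (3000, 4000):  # "3 or 4 petabytes", expressed in TB
    print(f"{tbw_tb / 1000:.0f} PB TBW on a 4 TB drive ~ {dwpd(tbw_tb, 4):.2f} DWPD over 5 years")
```

Either rating works out to roughly half a drive write per day for five years, which is plenty for a hobbyist workload.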
All this talk about 1988 form factors for drives and stuff, makes me wonder... why are we still using 19-inch rackmount form factor 100 years later? 😛
There was a lot of great information on this presentation, but I think it could have been done in 15 minutes instead of 25. First video I've watched from this channel so I don't know if that is normal.
Pretty old video, the new ones published in the last year or so are much faster paced
The modularity of this combination with CXL is awesome!
2U chassis with 40 slots where you can mix accelerators, gpus, memory or storage as needed for your usecase.
It also removes need for traditional PCIe cards, allowing rear slots to also use this form factor. So here is your DPUs, Infiniband, or whatever high bw interconnect for supercomputer clustering/HPC.
Exciting times!!
Hit the nail on the head about the latching mechanism. "Proprietary" is not the answer after doing such a good job of standardizing a more logical form factor and interconnect (and yes, you were correct about the differing lengths of the male (REEEE!) connector's traces/pads).
It still lets a lot of "proprietary" creep in when it comes to the carrier/connector spacing and the PCIe management hubs/splitters, though, especially as Intel still relies on those much of the time because of the lack of direct lanes on the cheaper (!?!) CPU models.
Just out of interest for us Linus fans.....how much does the E1.S 25mm cost? 😏
What about desktops? Will they fit my all-in-one?
It should be interesting to see. On a side note, the used market will hopefully be flooded with cheaper U.2 drives. :)
Although not within the scope of this video, I look forward to every data center dumping their old chassis and other hardware on the used market and watching the prices take a tumble. The differences between 'must have' and 'want-to-have' in my home lab will be greatly diminished.
Looks at 2.5" drive...
Double the width of an M.2 drive, place it in a 2.5" holder...
Click the 2.5" holder into the backplane....
I don't understand why this is so hard, or why the bubblegum stick cannot increase in width.
If you cannot use them in home machines, there will be no secondary market for them the way there is for ECC server memory. I look forward to the liquidation of 4 TB SM883 drives.
I don't know why there needs to be a new form factor. U.3 was supposed to bridge a gap.
There is a reason we did not cover U.3 heavily on STH.
Yeah, I'd rather actually have SATA than NVMe. Nothing but dedicated storage processors can utilize it. They're too fast for CPUs. After all these years I have found zero answers on trying to make mine not slower than a hard disk. Whatever is posted out there on forums or bug-report-type threads assumes people are top-tier sysadmins or programmers. I can't interpret the information. I'm just a damn hobbyist who wants fast storage. SATA is easier for that.
M.2 is hard to cool in a server? Gee, I guess the fans just make all that noise and move no air. I can see them being hard to cool in a laptop. I don't get how those thermal nightmares of proprietariness ever got so popular.
@asdrubale bisanzio It doesn't "just work." The drives and the lanes are too fast for CPUs. That's why they're coming out with storage processors: silicon built for storage. Level1Techs went through this in detail, but with no details on how to actually make it work.
@asdrubale bisanzio What, these new drives aren't going to need cooling? 🤣 It's still PCIe.
ServeTheHome, can you give an opinion of the ASUS ASMB10-iKVM?
I'm a bit surprised such a high percentage of server SSD sales are M.2. Is that all the boot drives in systems that don't need fast storage, where you just chuck a small M.2 SSD in along with a bunch of hard drives?
Microsoft is big on M.2 as primary storage. They're not slow at all. The thing about M.2 is they're small and still very fast, so you can put a bunch of them in a single box and get a ton of performance. E1.S is targeted at the same use case but without all the problems of M.2.
@asdrubale bisanzio a SSD boot drive is much more reliable than a HDD. Most people use toolless m.2 as boot drives instead of sataDOMs these days, no?
@@tommihommi1 Yep, SSD boot drives for the win. SATADOMs not so much; they're a much better fit for hypervisors that barely write anything - we run ESXi on them. For a regular OS, SATA or M.2 drives are best: much faster and much more reliable than an HDD. Some vendors, like Kingston, even have enterprise drives built specifically for boot devices.
Whatever happened to EDSFF E2?
The small ones just make me crave chocolate bars.
The idea is really interesting; however, the nomenclature sucks and so does the unsecured module connector.
The E3 adapter should support 2 E1 modules
I wonder if we will see E1 drives come into laptops and desktops.
Unlikely in any volume.
My home PC has 1 NVMe drive and 4 SSDs. It boots to a fully usable desktop in under 2 seconds.
Any word on Samsung's attempt at M.3 aka NF1 ?
EDSFF is the way forward for servers.
@@ServeTheHomeVideo I know, I'm just wondering what happened to NF1 and in what ways it didn't satisfy the industry. Didn't follow these things from inside ...
That's CRAZY!
40 drives @ 70 W a piece = 2.8 kW JUST for the drives alone.
Doesn't include GPU accelerators, DPUs, CPUs, or RAM.
That's just nuts!
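Spelling that math out (the 70 W per drive is from the comment just above; the "typical" figure is an assumed placeholder purely for illustration):

```python
# Back-of-the-envelope drive power budget; real chassis budgets come from the
# vendor's power calculator, not from this sketch.
drive_slots = 40
drive_peak_w = 70      # per-slot peak figure quoted in the comment above
drive_typical_w = 25   # assumed typical active power, well below peak

peak_drive_kw = drive_slots * drive_peak_w / 1000
typical_drive_kw = drive_slots * drive_typical_w / 1000

print(f"Drive peak:    {peak_drive_kw:.1f} kW")   # 2.8 kW
print(f"Drive typical: {typical_drive_kw:.1f} kW")
# PSUs and cooling still have to be sized for the peak case, even when the
# typical case is only a fraction of it.
```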
Admittedly, that's top power. You would need this much only at peak IO load, which is unlikely to last for long and is also unlikely to impact all the drives (40-wide stripe??? I recommend against)
@@bronekkozicki6356
I don't think that you necessarily need a 40-wide stripe to be able to max out the drive at full load 100% of the time.
I think that it depends on what you are doing with it.
For example, if that is the scratch directory/headnode for a HPC cluster, and you have hundreds of users running thousands of jobs against it, I can DEFINITELY imagine you hitting the array hard where you will be very close to hitting the full load/full power limit.
Besides, even if that WEREN'T the case, you still need to size the power, and therefore the cooling, to support that, so that you don't get what are effectively "rolling blackouts" at the hardware level (be it the CPU, RAM, networking, or storage subsystems) because the PSUs weren't sized adequately to take the peak load.
Not only that, but just imagine how much processing power is needed to push that many IOPS.
Watching Wendell try to push a million IOPS was crazy.
"That may not be true. But that is what somebody told me." 😂
Storage speed vs. network throughput having a flat out brawl.
I mean if the standards are better.
The hot-swappable variety of devices sounds super awesome though. I hope homelabbers get to play with this stuff someday without it being all super proprietary. How far in the future do you think the connections on all of these will be optical/photonic?
What does 1 'OU' mean? I know what the 'U' means. Thanks.
Wish they had this for consumer platforms; M.2 is so space-constrained. I have a hot-swap bay in a PCIe slot, and I just have to refresh in Disk Management.
It's a little late, but I'd like to request a video on making that brisket.
The closest we have come was Of BBQ and Virtualization: ua-cam.com/video/dgov7184za0/v-deo.html
The lack of standardization on the latch will retard the adoption of these drives by many years. Yes it was a huge mistake not to specify how it would be latched. Once again the computer industry creates a nexus of chaos.
OEMs have to add value.
@@ServeTheHomeVideo Missing the sarcasm quotes: "add value"
Patrick, do you have another channel where you talk about making brisket? I'm currently wishing you were based in Oxford, UK, not the Valley, as those were good-looking hunks of meat. Oh well, I guess I'll see if I can find good-looking hunks of meat locally...
... just that tiny skosh closer to isolinear chips.
U.3 will rule !
Sorry, but Patrick is wrong on the SSD future. OEM servers, like those for Facebook, Google, and Microsoft, may go for this new EDSFF form factor. But the big 3 retail server vendors - HPE, Dell, Lenovo - are not expected to switch to E3.S anytime soon. Most likely they will go the U.3 route, which supports NVMe, SATA, and SAS.
And yes, people are still using HDDs in servers, so for those manufacturers, compatibility with all of them would be the number 1 priority.
Has anyone heard of upcoming HPE, Dell, or Lenovo servers with the E3 form factor?
Wouldn't it be more accurate to compare a combustion car to a Tesla that keeps the same form factor?
Why do we still call them "drives"?
Just askin'
Old habits :-)
Subscribed.
Why is he so buff? I remember him being a lot smaller.
The Fortune 50 company I worked for cried that they had limited funds when it came time for reviews and raises! Of course. But they were buying/leasing rows and rows of flash storage arrays, IBM and EMC, and I know they were not cheap. They used to tell me: see those 10 storage arrays? They are all being replaced with this one new array. That and the new mainframes - they sure acted like money was no object when it came to new hardware in the data center. I'm sure a lot of it was leased, but just saying. That's not to say they were not buying servers, because they sure were; they had an order for 6,000 at one point. Oh well, I'm retired and that's all I'm concerned with. My point is that new mass storage is always getting cheaper per GB or TB, runs cooler, takes up less room, and changes its form factor in a shorter amount of time than ever before.
Seems to me that money and more profit are the drivers here: obsolete the previous standards and force us to replace everything we currently have and use without issues...
The bigger challenge is PCIe Gen5 and CXL. Next gen servers change so much. They needed to make a change for the electrical Gen5 interface too
It's solving a purely engineering problem, because we ARE having issues. M.2 is not hot-swappable, is low capacity and low power, and cannot be cooled properly. U.2 takes too much space, restricts airflow to the rest of the server, and uses an old connector that cannot scale further. The EDSFF family solves all of these problems and takes it even further with E3 and the ability to connect all kinds of other devices.
The core issue I see here is that each OEM might use different mechanical properties for NVMe, which will make these some sort of vendor-locked drives. With current and previous standards this is not the case.
However, PCIe gen5 must come as quickly as possible to solve many issues.
We do have 640 TB, all NVMe PCIe Gen4, as part of our Ceph storage. These drives have no issue being installed in any hardware, as they are the 2.5" form factor.
@@pstoianov EDSFF drives are also unified. OEMs can only change the latch mechanism to fit their chassis, but how that latch attaches to the drive is written in the specification, akin to how we currently have screw holes for sleds. The dimensions of each form factor are also specified.
One day, hard drives will plug-in like ram chips.
Even sooner, RAM will plug in like SSDs. Samsung has already announced the CXL Memory Expander for 2022 that uses EDSFF E3 as discussed in this video.
Also, not only are those PCIe cards hard to service and unpopular, they were horribly expensive back just before all of these SSDs were out on the market. Now look: you can buy drives cheaply. Just beware of knockoffs, fakes, and counterfeits. They do exist now and they are horrible. They could only be worse if they came crammed full of malware!! Even though you will get less than the capacity on the label, it doesn't take much space to store malware, sadly, so it can be hiding anywhere. That malware is not just annoying; it might steal credentials and login cookies, potentially causing a lot of financial misery. If you ever suspect you may have run anything like that, unknowingly or otherwise, keep an eye on ALL of your accounts!
Why aren't you talking about speed? Is the new form factor anywhere near as fast as nvme m.2's? If not... they're useless!
These are designed to be PCIe Gen4, Gen5, and Gen6 with higher TDP. Higher TDP = more speed. So these are faster than M.2.
@@ServeTheHomeVideo awesome! Thanks. Can't wait...
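To put some rough numbers on the Gen4/Gen5/Gen6 point above, here is an approximate per-generation ceiling for an x4 drive (my own estimate; Gen6's PAM4/FLIT overhead is simplified to "about double Gen5"):

```python
# Approximate one-direction throughput for an x4 link. Gen4/Gen5 use 128b/130b
# encoding; Gen6 is roughed in as ~2x Gen5, ignoring FEC/FLIT overhead details.
def x4_throughput_gb_s(gt_per_s: float, enc_eff: float = 128 / 130) -> float:
    return gt_per_s * 4 * enc_eff / 8

print(f"Gen4 x4: ~{x4_throughput_gb_s(16):.1f} GB/s")   # ~7.9 GB/s
print(f"Gen5 x4: ~{x4_throughput_gb_s(32):.1f} GB/s")   # ~15.8 GB/s
print(f"Gen6 x4: ~{2 * x4_throughput_gb_s(32):.1f} GB/s (approx.)")
```

Actual drives land below these link ceilings, but the extra thermal headroom is what lets them get closer to them than M.2 can.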
M.2 is dying? Yay, I can't wait. I hated that crappy non-standard and I'm glad it's going to be one of those short lived forgotten strange formats.
Giant servers that, these days, cost a lot of... come on dude.... when were servers cheap? :))))
PCI Express is designed to be hot swappable!