U.2 for the desktop needs to make a comeback. Also, my P44 Pro is crying about being compared to these.
yes! or adopt the E3.S standard if it has better electrical characteristics
@@Stef3m Agreed. On the bright side, 'import' passive adapters are becoming more available.
I mean… U.3 is a thing, unfortunately it is NOT backwards compatible with U.2. Though I haven’t seen much out there utilizing the standard.
@@Fenix1861 U.3 is interesting, but is it electrically better than U.2? With current and future speeds that should be the priority, and U.3 seems to have different ones.
U.3? you can do this with PCIE cards on desktop. M.2 to U.3/U.2 adapters are also a thing:
www.amazon.com/s?k=U.3.+PCIE&crid=34868U1FMJPID&sprefix=u.3.+pcie%2Caps%2C157&ref=nb_sb_noss_2
I discovered Solidigm thanks to this channel, have a P44 Pro in my PC and a P41 Plus in my dad's. Couldn't be happier. Thanks!
I'm already spending too much on hardware as it is.. And Level1Techs is giving me all the excuses I need to justify it 😀
You need high-speed multi workload self encrypting drives! It’s not a choice!
Those Danny DeVitos won't generate themselves! (yet)
I've learned any money spent on SSDs/RAM/10G networking was worth it. Money spent on extra compute... was usually a waste. Even mid-tier consumer CPUs are fast enough to run almost anything in a homelab, and they still spend most of their life idle.
My thoughts exactly
Just get a wife and have kids. Your hardware spending will magically dry up.
SSD speeds now match typical DDR3 RAM Bandwidth. Even the very low-end of DDR4's bandwidth. Wild.
Can't wait to see stress tests coupled with Microsoft DirectStorage.
Get ready for games with 1TB of textures.
Too bad they still have 1000x worse latency
@@Frozoken That's fine, it's easy to anticipate which textures you'll need when rendering a scene, which is the bulk of data being swapped around. Same for video editing I think (I mean the time scale on which you're scrubbing through a video is like a million times slower than what you're talking about anyway).
The really giant RAM consuming desktop workloads are stuff like data analytics and it's really so much easier to do that using cloud tools like Databricks and Snowflake anyway for reasons beyond desktop system performance.
@Proton_Decay Yeah no, it's not easy to anticipate, which is why my 1.5GB/s Optane drive measurably beats out my 7GB/s NAND drive in loading times nearly every single time (the worst cases being ties).
Windows only goes backwards in all aspects of performance too, so I really don't see it being optimised to take proper advantage of sequential speeds in even the next decade, tbh.
Letting GPUs read data directly from disks also means you can't use any featureful file systems. It would kill ZFS and BTRFS for personal machines. I don't know about you, but I could never go back to ext4, and I especially would not downgrade to NTFS.
Most people don't have high-end SSDs, so games won't support it.
A red light on a bad drive? Yes please!
A simple multicolor led for diagnostic health. Green for good. Amber for okay. And red for paperweight mode.
That controller is a freaking monster. Solidigm picked the Atomos Prime, which is a 16-channel controller at 1600 MT/s with support for up to 128 dies (8 chip enables per channel), based on a multi-core ARM Cortex-R8.
Wendell is just too cool...
nah but hello and welcome
Totally! My background is in Network Engineering, computer science. Maybe I'll get my NetworkNickOfficial YT channel going and contact Wendell and Team for a collaboration video.
He really is! 🤓
@@Level1TechsYT account manager is a hater lol
3:00 - Quoting Wendell as saying the connector will carry us into PCIe Gen 6 without a doubt
🤣
Would be nice to get an overview of these U.2, U.3, E1.S and E3.S connectors and cables. It seems this connector is the future, but what kind of cable/connector fits on it, etc.? I have a Genoa system; for now I use M.2 NVMe, but at some point I want to switch.
U.2 connectors need to be added to consumer motherboards and not just server gear.
Amen to that. I would LOVE to have a U.2 connector. I'm tired of accidentally snapping M.2 drives. It gets very old VERY quickly.
As much as I want the same, after talking to some people who understand the engineering side of things I kind of understand why something other than M.2 hasn't trickled down any further than HEDT boards. I've been seriously thinking about building myself a TR system to actually have the stuff I want in my main system and stop hamstringing myself with consumer hardware.
my old Z170 mobo had them and there were no drives I could find for it at the time LOL
They are on Threadripper. MCIO is awesome on that platform.
AMEN
Would love to see a Solidigm refresh for their m.2 drives. Love my P44 Pros, but would love to see some of this E2.S quality drop down to the consumer/workstation level, with an 8TB drive, even with a corresponding drop in endurance.
I bought a P44 Pro a few weeks ago and I'm happy with it. I love the software optimisation.
And here's me happy that Gen 4 stuff is starting to get 'cheap' enough for me to play with at home.
7GB/Sec is fairly solid. Can't imagine needing 12 or more... That's a lotta data processing.
@@justinpatterson5291 I've been using Epyc Rome + gen 3 at home mostly because gen 4 NVMe SSDs are still relatively expensive for my needs. That said, my homelab setup is pedestrian compared to a lot of folks; I'm only at 10GbE with a couple of servers (Proxmox host + NAS).
Looking to upgrade to 25/40GbE though (because reasons), and to fully take advantage of that I'd need to upgrade storage as well. I'm not in a rush though, so I'm keeping an eye out for good deals on gen 4 drives and 25/40GbE switches.
@@justinpatterson5291 I just put a 4TB gen-5 MP700 Pro SE drive in my PC this morning for the hell of it, then noticed how hot it ran even with the large chunk of a passive heatsink I strapped on it. Set link power saving mode to most aggressive in Windows power management and geez loueez, the temp dropped from 52C at idle down to 40! lol My older system drive (WD Black SN770 1TB) also dropped from 53C down to 42...
Didn't expect that, honestly. :D Of course, the heatsink is passive so it's going to be hot, and all the chassis fans are either off or spinning super slow for noise reasons. Not that 52C is a problematic idle temp anyhow, it's just interesting to see the effect of how aggressively the link sleep mode kicks in, that's all. Now, I don't know how hot this thing will get if I load it up with work, but as I'm just a regular ole desktop user/gamer, that's not going to be a very big problem.
Kind of miss my old Optane drive in my old PC. It was cool - but not to the touch though! lol Too bad Intel killed that division off. :(
Wendell always has the coolest toys... and I'm down like four flats on a Cadillac.
I've pulled out the wrong drive in a datacenter before while on the phone to my supervisor. It was not fun.
A diagnostic led could help in that sitch... Red for dead. Green for good. Amber for somewhere in the middle.
Btw 2:04 you circled the opposite things
I hope they'll help with the loading speed of MS Flight Sim 2020. 🤣 RAID 0 with gen 4 NVME drives doesn't help much, but they sure are fast!
2:14 visuals I think are flipped
yup
Damn it Wendell I was so excited to move to the new Nvme age! Don’t tell me I have to bust out the cables again, I can’t man!!
@2:11 when talking about the models and the respective sizes, Wendell says 1030 but the video circles the 1010 drives and then the 1030 drive sizes get circled when talking about the 1010 model
I think my personal need for drive speed is more than satisfied, they should start working on making it dirt cheap
I'm in the same boat. I already have NVMe drives in every system in my house; I just want to replace all of my spinning storage drives with SSDs. A few years back you could get a 2TB Crucial drive for less than $100. I bought only one, not a super fast drive by any means, but I wish I'd bought 5 more of them.
Solidigm is by far the best SSD I have ever owned. Really reliable and very fast.
I think you flipped the highlights of the disk sizes @02:20 for the 1010 and 1030.
They need to make a 20TB drive with long-term cold storage endurance. I want to replace my 20TB Exos drives eventually.
I'm a software developer and I'd like to have a fraction of a fraction Wendell's intelligence regarding hardware.
Ditto
The FIPS 140-3 level 2 compliance makes the export of the media encryption keys used in TCG OPAL quite tricky... I don't think an export function would be certifiable.
I still don't understand why we don't have 3.5" SSDs with massive storage already. Any chance we can get a video on why they don't exist? Is it impossible for some reason or just not cost effective?
I believe there were variants; that was the whole purpose of U.2, to make cooling easier for NVMe as well as enabling a larger housing for higher capacity.
They actually do exist; there have been a few with absolutely monstrous capacities. It's difficult to make a controller to handle all that NAND, and then there's the cost and complexity of creating a multi-layer PCB as well. I'm not sure there's a significant enough push for density beyond what we already have to justify the cost of developing those products compared to the significant demand for faster and more performant storage. So they'll probably remain niche.
There used to be. Google had some custom 3.5in-sized SSDs back in the day when PCIe-based SSDs were very new and you needed 3 PCBs to contain the massive FPGA and the huge amount of NAND chips needed to get to 2 or 4TB.
It's a similar story today: today's controllers aren't meant to address the huge number of NAND chips necessary to fill a 3.5in drive. You'd have to be doing something very custom or use an FPGA, which is higher cost at the end of the day.
There is the ExaDrive from Nimbus, which has 100TB in 3.5". However, that thing apparently costs $40k, so it's not really cost competitive.
Can't wait for the "E series" connectors to fully take over, even on consumer stuff.
Didn't mention Optane once. Why?
That was so, like, yesterday man. Throw that thing away man that's an antique!
This -- is it better than optane? I see some saying the latency is slower.
@@asknight Wendell mentions it on every ssd video except for this one
it's dead Jim. too soon. TOO SOON.
@@Level1Techs in our hearts it is alive
Some features are also nice for a Ceph Cluster.
So… can we build a 6 second cold to boot system with these like you did with the Optane gen 2 drives?
So awesome, your storage videos are generally the most entertaining to me. Does anyone else feel as if the future of hardware is storage? I don't see many bottlenecks for the average gamer (1080p) outside of latency and throughput...
Wendell, what kind of table is that? I'd like to get one.
3:41, What do you mean? you don't individually have them written down in sequence.
Solidigm, Team Group and also Kioxia are becoming the fastest drives in enterprise and will possibly dominate in consumer markets.
I'd like to see more comparisons of these to P4800X and P5800X Optane drives, as at least the PCIe 3.0 ones are now cheap enough that they'd be worth getting for a RAID setup.
What is the current best practice for NVMe RAID 1, 5, 6 in enterprise servers? Doesn't the RAID process bottleneck throughput?
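(Not from the video, but for a rough feel of why parity RAID can bottleneck small writes on NVMe, here is a minimal back-of-the-envelope sketch. The per-drive IOPS and the classic write-penalty factors are illustrative assumptions; real md/controller behavior with full-stripe writes and caching will differ.)

```python
# Rough estimate of parity-RAID random-write overhead on NVMe.
# Classic write penalties: RAID 1 = 2 device I/Os per host write,
# RAID 5 = 4 (read data, read parity, write data, write parity), RAID 6 = 6.

def effective_write_iops(per_drive_iops: float, drives: int, penalty: int) -> float:
    """Aggregate random-write IOPS the array can absorb from the host."""
    return per_drive_iops * drives / penalty

per_drive = 200_000  # assumed 4K random-write IOPS of a single NVMe SSD
for name, penalty in [("RAID 1", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(f"{name}: ~{effective_write_iops(per_drive, 6, penalty):,.0f} host write IOPS across 6 drives")
```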
the image quality is amazing
Did I miss it? How much do these cost?
The most important workload I have is a huge sequential sustained write. Do these drives have an SLC cache that fills up? I need to see a full drive write speed chart to know if the drive is any good for me. Can you do this please? (For every drive you review)
Second that. Reliable data on sustained write performance is sadly very rare to find.
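(In the absence of a full-drive write chart from reviewers, a crude way to see a write cache filling up is to stream writes to the drive and log throughput per chunk. A minimal sketch, assuming a Linux-ish setup: the path and sizes are placeholders, and because it uses plain buffered I/O with periodic fsync the numbers are approximate rather than lab-grade.)

```python
import os, time

TARGET = "/mnt/testdrive/write_test.bin"  # placeholder path on the drive under test
CHUNK = 64 * 1024 * 1024                  # 64 MiB per write
REPORT = 4 * 1024**3                      # report throughput every 4 GiB
TOTAL = 200 * 1024**3                     # total amount to write (adjust to drive size)

buf = os.urandom(CHUNK)                   # incompressible data
written, t_last = 0, time.time()
with open(TARGET, "wb") as f:
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
        if written % REPORT == 0:
            f.flush()
            os.fsync(f.fileno())          # force data to the device before timing
            now = time.time()
            print(f"{written / 1024**3:6.0f} GiB  {REPORT / (now - t_last) / 1e6:8.0f} MB/s")
            t_last = now
```

If the drive has a large SLC-style cache, the per-interval MB/s typically drops sharply partway through a run like this.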
Wendell, thank you for explaining, for those of us not really deep into tech, how different workloads affect storage differently! Not many people talk about it!
What would be a good motherboard to support 6-8 NVMe drives for a SOHO server? I am working with video editing; I want my data on a server and not on my workstation so that I can access it from 2-3 different locations seamlessly through 25G fiber. It seems like most consumer boards don't have two x16 slots so that I can add two 4-way NVMe cards. Do I need a Threadripper/Xeon motherboard?
With all these improvements in controllers I'm starting to wonder if we will ever see hardware ZFS controllers and what it would take to get there. The embedded stuff is shrouded in mystery.
One of these on a pcie card for use in a standard PC would be neat.
So, where in the U.S. can you actually buy these at retail? (I don't want to have to buy from a business seller, where you have to contact them for a quote, etc. etc.)
so when's HP going to buy solidigm? :P
I'm curious how this stacks up against Intel's aging p5800x
it'd get obliterated
The chassis in our SAN at work tells us exactly which drive in our storage pool is bad... along with a visual indicator on the drive itself... what?
When can we have these crazy fast speeds on M.2?
Are these faster than optane 4k random?
As a P5800X user, yes. What I don't understand is the latency though, that's garbage latency on the Solidigm
12:13 the highlights in the video are wrong
I would love to be excited about them! But... as long as they have an interface I can't use and cost like 5x or more than what an HDD would cost, it's kinda impossible.
I just like technology, I am not an expert. Regarding server endurance, that is, how much use/time the drive will have, which one is better? HDD or NVMe SSD?
I couldn't find on the website whether there is a DRAM buffer or not in this class of devices.
Since 1GiB is selected, isn't this a DRAM test rather than an SSD test?
Yes, all of them have DRAM (+PLP!). And no, there is no need for bigger test file sizes, as they don't use any pSLC.
Can it finally find a specific pdf I got 5 years ago through so many reinstalls and drive cloning under 1 minute?
Solidigm drives are really neat
I hate that in all the benchmarks they are quoting 4k random speed in IOPS, and at HUGE queue depths.
I want to know 4k random performance in MB/s at QD=1. Can anyone provide that?
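(Until reviewers publish that number, you can measure it yourself with fio and convert the QD1 IOPS figure, since at a fixed 4K block size IOPS and MB/s are just a unit change apart. A minimal sketch driving fio from Python; it assumes fio is installed, the filename and size are placeholders, and the JSON field names follow recent fio versions.)

```python
import json
import subprocess

# 4K random read at queue depth 1, direct I/O, 30-second run, JSON output.
cmd = [
    "fio", "--name=qd1_randread", "--filename=/mnt/testdrive/fio_test.bin",
    "--size=4G", "--rw=randread", "--bs=4k", "--iodepth=1", "--numjobs=1",
    "--direct=1", "--time_based", "--runtime=30", "--ioengine=libaio",
    "--output-format=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
read = json.loads(out)["jobs"][0]["read"]

iops = read["iops"]            # 4K random read IOPS at QD1
mb_s = iops * 4096 / 1e6       # each I/O moves 4 KiB, so MB/s is just arithmetic
print(f"QD1 4K random read: {iops:,.0f} IOPS  ~ {mb_s:.1f} MB/s")
```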
I have 8 drives in my hot swap home server and I still don't know which drive has failed. I don't know why more data centre engineers aren't alcoholics
Is this a Xpoint killer?
If you only have 3 drives out of 1000 bad, there is no reason to touch the 3 drives until the entire enclosure gets replaced. In the spinning rust realm, you needed 10% down to make a good case to even go in and pull out bad drives. The real risk is humans opening the front door of the data center.
But Wendell, We Must Quote you on PCIe 6 support...
You are the Janitor SAGE that will Save Us All, ha :p
Where is Samsung's consumer one though, the new Pro model?
Maybe increase the test data size from 1 GB to 100 GB with a drive delivering 14 GB/s; otherwise nicely done!
He likes the 5 second tests, lol
At last, SSD reliability stepping up.
Does anyone know why Mini Cool Edge has such a seemingly silly name (is there a bigger Cool Edge connector? Is it just named after board edge connectors like normal PCIe and M.2 style connectors? What makes it cool?)? It bothers me more than it has any right to whenever I see it mentioned lol
I am guessing that hotswap is not supported?
I just want large-capacity SSDs. They're forcing us to waste our limited PCIe lanes on storage (going from 8 SATA ports to 2 M.2 on modern boards), so speed is a minor concern; I just want way more capacity.
Me, being a regular consumer: "Yes, very cool, very cool. 10 years until I get this?"
But can it run Crysis?
U.2?
Hope the people who made it so the Clear Linux distro can run on WRX boards are keeping their jobs at Intel next year. ❤️
What are the random 4K Q1T1 read/write speeds on these?
It's NAND-flash still, so expect about 70 MB/s max....
@@Wlad1 Not all NAND flash chips are limited to 70 MB/s FYI, and that's a pretty low max considering it's 2024; it doesn't beat the aged 980 Pro.
@@Pandemonium088 FYI the aged 980 Pro was/is pretty good and does about 60-65 MB/s. Many Phison E18 + Micron 176L NAND SSDs can only do ~55-60 MB/s. The newer 990 Pro and the newest Gen5 SSDs (Micron 238L NAND) go up to 66-69 MB/s, so none of them even reached 70. So yes, 70 MB/s is really good, considering it's 2024 ;-)
mm I see no rgb watercooled copper fins on it, must not be that great
I just want cheap, large SSD storage. Why is 8TB the biggest on the consumer side? Damn monopolies. We should in theory have 30-40TB drives cheaper than the 22TB spinning rust out there, since there is less material and size used...
Well. I guess I need this now.
I definitely do not.
Can’t wait for 3.0 and 4.0 to drop in price.
And there i am trying to figure out how to justify throwing this into my unraid array..
nice work thanks
GOD. I WANT THESE
Looking For Group
How do we know that the drives can actually survive the endurance rating? 5 years ago I bought 2x Corsair MP510 with a 1600 TBW rating as cache drives for my NAS. Now the drive software shows that only 28% lifespan is left with only 321TB written and 40TB read.
I'm not complaining about the drives, I'm just finding it odd that the rating and real-life endurance are so different.
Without being familiar with Corsair's control software I couldn't comment on your specific situation, but true wear is determined by the number of erase cycles rather than the amount written (writes are just an easier-to-measure proxy for that). Different workloads, different drives and different firmware configurations will all result in different amounts of write amplification, which will throw off real-world results, since 321TB of lifetime writes in terms of data pushed over the bus to the controller can be very different from 321TB worth of write/erase cycles. All of that is going to be a lot worse in consumer-grade drives (particularly in a NAS), because workloads are more variable, specs assume average consumer use which is very different from NAS usage in a high data-turnover setup, the flash itself is lower grade, and the controller is more likely to have bugs that accelerate wear (see the infamous Samsung incident not too long ago where their 980 Pro and 990 Pro drives were just burning themselves out). Enterprise drives have much more predictable use cases, more thoroughly vetted firmware and controllers, better underlying flash and more predictable write amplification, so the ratings will be more accurate (not to mention a lot more conservative to begin with).
By the way, what on earth are you using your NAS for that your cache drives have had 8 times as much data written to them as has been read from them?
@@bosstowndynamics5488 wow, that was a much more detailed answer than I could have hoped for.
I use unRaid so I guess the software in unRaid could be inaccurate. I have 5 VMs going and about 35 Docker containers, but some of the containers have previously crashed and written way too many TB of data in logfiles and so on, so that's some of the usage.
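(For anyone wanting to sanity-check the write-amplification point above on their own drives, the arithmetic is simple once you can read host writes and NAND/physical writes from SMART. Attribute names vary by vendor, and every number below is an illustrative placeholder rather than data from this thread.)

```python
def waf(nand_writes_tb: float, host_writes_tb: float) -> float:
    """Write amplification factor: physical NAND writes / host writes."""
    return nand_writes_tb / host_writes_tb

def usable_host_tbw(rated_tbw: float, assumed_waf: float, observed_waf: float) -> float:
    """Scale the rated TBW from the WAF the vendor assumed to the WAF you observe."""
    return rated_tbw * assumed_waf / observed_waf

observed = waf(nand_writes_tb=350.0, host_writes_tb=100.0)  # placeholder SMART readings
print(f"observed WAF ~ {observed:.1f}")
print(f"usable endurance ~ {usable_host_tbw(1600, assumed_waf=1.0, observed_waf=observed):.0f} TB of host writes")
```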
Hopefully they get more affordable in future 😅
Need to make my homelab even faster so I can not use it like I do now
I didn't get it... is it better than Optane or not?
Optane isn't produced anymore. When the last ones (made in 2022) have been sold, then they are gone.
It seems like the Solidigm drives have more space and faster overall transfer speed. But random IOPS and endurance are better on Optane.
@@LisaSamaritan I still want an Optane as a system disk and I'll probably buy one :)
@@LisaSamaritan In another video Wendell commented that NVMe might finally be catching up to Optane on latency and that Solidigm was working on some stuff to that effect; presumably he was either referring to a specific workload they've optimised or a different product that is still in development.
@@bosstowndynamics5488 Going by what he showed in this video, the fastest Optane drive that I can find is still 3 times faster at random IOPS. So sure, some day. But not now.
@@LisaSamaritan IOPS may not be in favor of Optane actually, but latency definitely is.
but can it run crysis?
I stumbled onto this and it took me WAY too long to realize that this is NOT enthusiast level stuff.
I will buy this with my left kidney
If you have about 3-4k in expendable cash, go for it...
PLEASE provide RND 4k QD1 in MB/s! (not high queue depths, not microseconds and not IOPS!)
Isn't QD1 just going to be limited by the CPU/OS and not a good test of how much the drive can sustain? (like benchmarking CPUs at 4K resolution?)
And also, can't you find out the MB/s by dividing IOPS by 256? (256 4K blocks in a MB)
@@Winnetou17 There may be an impact of CPU/OS, but it is also very drive dependent. An Optane drive from 6 years ago will have 3x-4x greater low queue depth random 4k read performance than even the fastest current drives ON THE SAME SYSTEM.
And no, you can't just divide by the queue depth. That is not terribly meaningful.
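(To untangle the two things being discussed here: converting IOPS to MB/s at a fixed block size is pure unit arithmetic, but estimating QD1 performance from a high-queue-depth spec is not, because per-I/O latency rather than parallelism dominates at QD1. A tiny sketch of the unit conversion, with made-up numbers.)

```python
def iops_to_mib_s(iops: float, block_bytes: int = 4096) -> float:
    """Pure unit conversion: bytes moved per second, expressed in MiB/s."""
    return iops * block_bytes / (1024 * 1024)

# 20,000 IOPS at 4 KiB is about 78 MiB/s (for 4K blocks that's just IOPS / 256)...
print(f"{iops_to_mib_s(20_000):.0f} MiB/s")
# ...but you can't derive a drive's QD1 IOPS by dividing its QD32 spec by 32;
# QD1 is governed by per-I/O latency, which the high-queue-depth spec doesn't capture.
```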
Please don't make the same mistake you did with LTT again.
Sol-ly-dig-gum
I want one.
"... to power your AI/ML data pipeline..."
Is there any new product that is *not* AI these days?
Framework's laptop with Meteor Lake is refreshingly devoid of any AI mentions. Even though the chip has a (pretty weak) NPU in it.
Are these SSSDs? #Helldivers2
I'm excited by the innovation from the professional side of things, but it drives me crazy that the cost of SATA SSDs hasn't come down as the newer, faster connectors have saturated the market. I just want to build an all-SSD NAS with 8x 8TB SATA drives. Why? I have no idea, but I want it. Just make it affordable for us plebs.
I've heard that NVMe is actually slightly cheaper to implement these days, so any newly manufactured SATA drives are relegated to being cheap upgrades for legacy systems that can't justify the cost of 2-8TB of NAND. I do feel your pain though, as another user of a lot of spinning rust (there are ways to connect a ton of NVMe to your system if you don't care so much about performance, for what it's worth; they're just all at least one of expensive, power hungry, or a bit less reliable than SATA/direct NVMe).
I bought two second hand (but basically brand new) Intel P4510 8TB for about 800 bucks in total for my homelab server. That was about 1.5 years ago. Contrary to consumer SSDs they can even handle the write amplification from filesystems like ZFS. Nowadays they have become quite a bit more costly though.
@@g0r3ify I've looked into U.2 drives a couple times. It's the consistently high price that's been the drag for me. I was hoping that with continued development of NVMe, older drives would fall in price and I could homelab on a budget with "newer" equipment. Spinning rust for the win for dollar per TB. I use NVMe as a cache, but still.
My 4x HDD setup is faster than those next-gen SSDs... when it comes to losing its data, that happens in under a millisecond.
I wanna enjoy these videos, but I'm too much of a pleb to be even in the same room with one of these fancy drives that get featured on this channel.
Nice
I see the use for drives that encrypt the data. It's only that I also see the pain it has to be trying to do a lot of maintenance when the drives are encrypted. Sure, it stops them from being easy to image or copy, but it also makes it hard to do the same when you need to...
I can't remember the times I had customers killing their machines and asking me to save their data. Let's just say that I think there might have been one or two who would have bothered to keep track of the keys used for the drive encryption. As is probably obvious, I have never touched an encrypted drive, so I really don't know how I would have accessed them. This makes me feel ancient.
Intel gives a damn though ........ 🤪🤪