Determining which motherboards support PCIe bifurcation is a huge pain. Manufacturers need to step up their documentation game and list bifurcation support, or the lack thereof.
At least ASUS has a long list on their website of what supports x4/x4/x4/x4 bifurcation / Hyper M.2.
Even if the lack of support on some mobos doesn't make sense. And even if I'm a bit done with ASUS for the time being.
From the Manual:
"For MB204MP-B, if the installed M.2 NVMe SSD requires power exceeding 15W, you will need to connect the 6-pin power connector from the power supply to the 6-pin power input to ensure sufficient power supply."
Good Lord! External power for four M.2 drives - that's crazy!
Yeah. M.2 Gen5 NVMe is...a mess.
It's kinda like putting a big block into a Honda Civic for street use.
Does it make big numbers? Yeah... sure?
But what are you giving up to get there?
And what is the real world experience of using it like?
I'd much rather have a 3-5w Gen4 drive on an x2 link.
It's ONLY 4GB/s per drive, but that's more than enough for anything an M.2 should be installed in.
If people are spending $40k on a monster EPYC workstation because they have some specialized need, just use a U.2/U.3 drive and be done with it.
@Prophes0r There are issues with some Gen5 servers not having enough ports.
And speed actually matters in places like high-frequency trading, where each IOP can mean the difference between $0.00002 and $0.00006.
@@robertbslee4209 Well high frequency trading should be illegal, so that argument isn't going to work on me. It is several layers deep into the loopholes/abuses of a system. But that is an entirely different argument.
We are talking about workstations/desktops here. So my argument that we don't need x4 lane links for NVMe storage is still solid
@@Prophes0r It's not, and hundreds of billions are made each year with the strategy
Remember when ATX motherboards had 6 pcie expansion slots?
I remember having 7 of them.
Insert "Pepperidge Farm Remembers" right here. Now to solve having enough PCIE lanes after the GPU and 10+ GB Ethernet had their fill.
Routing PCIe Gen 2/3, and even Gen 4, lanes is relatively easy and cheap compared to routing PCIe Gen 5 lanes...
But what do consumers even plug into their computer anymore? Pretty much just a GPU and nothing else. Maybe a network card or a capture card, but for >99% of people... just a GPU. High-end ATX boards these days have 4, 5, 6 or even more NVMe slots onboard. If you need more than that... the consumer CPUs don't even have the PCIe lanes to support it without multiplexing anyway.
Oh yes, I have basically my old desktop as a server. X99 with lots of bells and whistles. And I have a decommissioned workstation that has IIRC 7 slots.
yeh, miss those days.
It's uncanny Wendell. Just about every time I am contemplating a new piece of tech you post a video on it. If there was a Nobel Prize for Computing I would nominate you for it bro! Cheers and keep up the outstanding work.
I was just about to post almost the exact same thing. To the point that I saved the Amazon listing on my phone to make sure I saved $20 with a price match at Micro Center.
The Nobel "Peace-CI-e" prize 😏
Bro, looking at the current motherboards we are going to need M.2 to PCI-E cards, not the other way around.
Those already exist; it's great for adding an external GPU to a laptop. I plan on trying to use one for GPU passthrough to a virtual machine, since my desktop ran out of physical PCIe slots but not M.2 slots.
@@Fractal_32 Also good for adding 10GbE when you have spare M.2 slots, I/O Crest makes one based on the AQC chipset, Aliexpress has ones with Intel controllers, kinda wild.
Yes! I love ICY DOCK products! I've owned some flexiDOCKs in the past for my old laptops and desktops at home, and they're amazing.
My 2012 MacBook Pro has 16TB via 2x Samsung 8TB 870 QVO 2.5” SATA SSD drives. Sure, they’re not as fast but for $1200AUD on special in 2023, I’m not complaining 🤷♂️
you said 16TB but also 4TB drives and also you're holding 5 of them so already at the 15 second mark there are mysteriousnesslynessess mysteriously mysterieing.
It'd be amazing if one of these carriers ever included a PCIe switch so you could plug the 4 drives in without having to worry about bifurcation
They exist for previous PCIe gens. Expensive though
They do exist, but they only work at PCIe 3.0 x4 at the moment. Now, gaming market hype aside, I'm sure the most anyone needs right now is PCIe 3.0/3.1, with higher-end users getting into PCIe 4.0 for their storage. Going from 3.0 to 4.0 in my experience hasn't really changed anything; drives are still just as snappy as 3.0. Sure, games load a few seconds faster, but it's not the end of the world. It's for sure more about storage than speed; whether you get an EC-P3X4 from Sabrent or a PA20 from Glotrends, normally you're just adding another M.2 for storage.
This channel should have 2 million subs.
Been looking to replace my spinning rust in my RAID’d T320… this 4TB SSD setup I think will do the trick, thanks!
From the title I was expecting this to be about expanding a single M.2 port into multiple PCIe/M.2 ports, which would be quite exciting to me as an option - lots of portable devices have PCIe available on an M.2 connector, and it'd be nice to be able to do more with that, even if it's only going to be on broken/EOL machines or for embedded use. But this thing is interesting too, even somewhat tempting for my primary computer.
This seems like the hardware we commissioned for our quant servers from icydock without water cooling
I run 4x 2TB with the similar PCIe Gen 4 board from ASUS on a Threadripper to deal with large point clouds. I just have them set up in the Windows utility. Fast, big local storage is really important for point cloud workloads.
I've got an icy dock 5.25 dock for 6 x 2.5 drives and it has been awesome paired with a mobo that has 6 SATA connections.
Still a bit salty about what is, in my opinion, false advertising by Icy Dock claiming the original ToughArmor MB699VP-B (V1) can do PCIe Gen4, but I'm happy that PCIe adapters are becoming more and more mainstream for retail end customers.
Oh hey, funny seeing you here. Hello from the pcie bs thread lol.
And yeah, icy dock is just... I can't like em. Far too expensive for so little, and a bad rep for support.
My problem with it was finding a compatible cable to hook it up to a SlimSAS host. Thankfully the retailer I bought it from finally stocked the V3 and allowed me to return the V1, albeit with a restocking fee.
I'm struggling to justify the price for this piece of hardware that doesn't actually have ANY active components on it. OK, so it's got fancy hot-swap 'bays', but that's just mechanical. Routing PCIe 5.0 lanes from the slot to a card really isn't that hard... The trace lengths are tiny and nicely split into x4/x4/x4/x4 from the slot.
icy dock is sooooooooooooooooo expensive.
Asus Hyper M.2 gen 5 4x4 - $80
Icy Dock M.2 gen 5 - $340
Besides the hot swap, how is this worth 4 times as much?
This thing is not even hot-swap! You basically spend $260 to not have to get your screwdriver from the drawer and save 2 minutes of time once every 5 years when you actually replace a drive
The Asus cards are super nice too. No reason to go anywhere else for an M.2 PCIe card.
Exactly my thoughts.
Looked nice, but I’ll wait until the prices reach sane levels before I upgrade one of my cards.
Quite happy with the space and energy reduction after moving from 4TB HDD to M.2.
Oh man am I excited to see what Direct IO is going to do to NVMe storage in OpenZFS 3.2
Thank you, Wendellman! 👍🏼
Thanks for this.
One scenario I would be interested in seeing is how Adobe products like Lightroom work, with the catalogue on one RAIDed drive, photos on another RAIDed drive, and maybe temp on another drive?
I use a couple of Icy Dock adapters myself for my server, as nobody else had what I wanted that didn't cost an arm and a leg. I have a 5.25" adapter for eight 2.5" x 9mm SATA drives and a 5.25" adapter for four 2.5" x 15mm SATA drives.
How much is storage latency (and perhaps even the throughput?) a bottleneck for compiling around 300-400GB+ of software like Unreal Engine e.g. main dev branch?
How much of the "CPU 100% utilisation" is actually the CPU waiting for data to arrive? (yes when it says 100% they often lie).
Personally I'm looking at the next TR Pro and am thinking of having the entire UE tree in a RAM drive (yes, mad scientist kind of madness), just to see if latency has a significant effect on compiling, or not.
I limit my main dev updates and compilations to once every 1-2 months atm; faster compilation would enable me to recompile weekly, just because I then could.
Anyway, here's a suggestion for you to test if it would interest you as well :)
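(A quick way to answer the "is it actually waiting on storage" question is to sample CPU busy time vs. iowait while the build runs. A minimal sketch, assuming Linux and the psutil package; the make invocation is just a placeholder for your actual UE build command.)

```python
import subprocess, threading, psutil

samples = []

def sample_cpu(stop):
    # On Linux, psutil exposes an "iowait" field: time the CPU sat idle waiting
    # for outstanding disk I/O. High iowait during a build means storage is the
    # bottleneck; near-zero iowait means you're compute-bound.
    while not stop.is_set():
        t = psutil.cpu_times_percent(interval=1)
        samples.append((t.user + t.system, getattr(t, "iowait", 0.0)))

stop = threading.Event()
threading.Thread(target=sample_cpu, args=(stop,), daemon=True).start()

# Placeholder build command -- swap in your actual UE build invocation.
subprocess.run(["make", "-j32"], check=False)

stop.set()
busy = sum(b for b, _ in samples) / max(len(samples), 1)
iowait = sum(w for _, w in samples) / max(len(samples), 1)
print(f"avg CPU busy: {busy:.1f}%  avg iowait: {iowait:.1f}%")
```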
Could you try this on the Minisforum Ryzen motherboard as a flash NAS option? I bet it would make a nice homelab combination. Also, Minisforum added a second revision of the motherboard that is even cheaper at the expense of graphics performance, but that would be even better from a NAS application point of view.
I grabbed a few open-box 4TB T700s, and this product is right up my alley. The price though is very high for what you get.
That hoodie looks so comfortable
So I want a truly hot-swap solution for M.2 or U.3 SSDs, aimed at low cost.
I don't see the market replacing M.2 with something like EDSFF E1.S or another alternative any time soon, and the power requirements to support an SSD are becoming increasingly complicated for M.2.
Dreaming about one of these for my genomics work 🤤
I cannot recommend this. In my experience, consumer M.2 drives, especially the Gen5 ones, rot like hell. They may be cool when they are fresh and you fill them with data, or run small in-SLC-cache benchmarks like this, but the dip in sustained write performance when you run out of cache is huge, and they also do not tend to age well. Try loading them full of data and then try reading the same data half a year later. That's where they suffer, and it has bitten me in the ass so many times that I pretty much gave up; these days I'm installing old second-hand U.2 drives. Not as high peaks, but the sustained performance and reliability are worth it.
I have a smaller version of these. Great stuff!
Wendell, can you take a look at some of HighPoint Storage Technologies' newer PCIe 5.0 NVMe AIC/RAID cards? They've also got some PCIe "switch" cards that break out PCIe to external enclosures, including a half-petabyte solution they just released.
I quite want a multi-M.2 expansion card, but so few motherboards support x4/x4/x4/x4 bifurcation, and getting a motherboard just for that is expensive. It can also mean your top slot gets its speed halved, which might not be a problem for some video cards, but for others it might.
I just want a card that can hold 4x M.2 drives and get reasonable speeds. They don't all have to reach their full speeds; I won't need or use that. I just want the ability to add more M.2s and have them all work at reasonable speed on their own. So if they can each go up to 7GB/s, but not at the same time, just 7GB/s in total, that's fine by me.
If it can handle 4 drives but each gets its max speed cut down to 1/4 of what it could reach, that would suck. So if the card could handle some of that itself with a controller, that would be great; probably expensive, but that's what I want. You could then also stick it in basically any PC, so no problem taking it with you to your next build.
Just a 4x M.2 card at PCIe 4.0 x16, or even x4 total speed, but not needing bifurcation: that is what I want. My current drives can go 7GB/s, which I usually don't even reach, but that's fast enough for me. So 7GB/s total speed for the card, while being able to stick in 4x M.2 drives, is fine by me.
It would also be acceptable to split Gen5 into x1/x1/x1/x1 bifurcation.
Just one lane of Gen5 is already ~4GB/s, and in an array that can go faster still; reaching 7GB/s is simple with Gen5 speeds.
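(For reference, a rough check of that per-lane figure: Gen5 signals at 32 GT/s per lane with 128b/130b encoding, so the raw ceiling is just under 4 GB/s per lane before protocol overhead. A quick calculation:)

```python
# Back-of-the-envelope PCIe per-lane bandwidth (raw link rate, before
# packet/protocol overhead, which shaves off a further chunk in practice).
def lane_gbps(gts, enc_num, enc_den):
    return gts * enc_num / enc_den / 8  # GT/s -> GB/s per lane

gen4 = lane_gbps(16, 128, 130)   # ~1.97 GB/s per lane
gen5 = lane_gbps(32, 128, 130)   # ~3.94 GB/s per lane
print(f"Gen4 x1: {gen4:.2f} GB/s, Gen5 x1: {gen5:.2f} GB/s")
print(f"Gen5 x1/x1/x1/x1 aggregate: {4 * gen5:.2f} GB/s")
```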
@@kitame6991 If the card itself could do that, that would be great; most motherboards don't support bifurcation, let alone for several slots.
Oh yeah, I am using that 13850HX-ES on an ASRock Z690 Phantom Gaming-ITX/TB4. I have the recommended Asus Z690-I; however, it is currently occupied as my main rig with a 13900K, and I didn't want to disassemble it. Anyway, with a 240 AIO: a 26K+ multithreaded score and about 1851 single, with very little tuning.
Now the rub, BIOS doesn't have FIVR, ughh; although I can get to most changes with Throttlestop and the appropriate selections in UEFI. Anyway, I updated to latest UEFI; bam! XMP doesn't work out of the box anymore. Fiddling to come. Now the big rub, upon adding the IG driver, my Displayport monitor went into some kinda check. This happened during the install and upon Boot-Up, not the OS. What?! Changing cables fixed it. Now mind you, this cable worked on all systems with identical monitors until this driver update. How could a driver update change firmware or whatever? I am at a loss at what could be happening.
I'd like to have a small portable system, Midori V2.1, that can act as a low-powered do-everything rig and later be relegated to an all-NVMe NAS using OpenMediaVault and some plugins and customizations. Since I have a 12600K lying around, I think I may purchase another Asus Z690-I to try disabling E-cores for MATLAB AVX-512 simulations and get access to more options. It's simply amazing how snappy this 28-thread CPU is! Without super tuning, again 27K at 70C!
Would love to see a RAID performance comparison of this card vs. motherboard-mounted drives, which I assume would basically be a chipset-controlled RAID vs CPU-controlled RAID comparison?
Well presented and informative as usual. (On a production note, kudos to a creator who knows how to make use of two angles while presenting to camera. You face your viewer at all times. Banish the cutting to "camera 2" with a "side-of-face" shot (as if you were being interviewed one second and facing your viewer the next). This "fashion" has crept into everything from commercials to cooking shows on PBS.)
Is there something similar with a RAID controller on it? We sell a lot of systems with the Supermicro AOC-SLG4-2H8M2, but that can only hold two M.2 SSDs and it has a SATA/SAS controller, which limits the throughput like crazy! We basically only use it because it is a simple option to do a two-drive RAID 1 to throw an OS on, but it would be nice to find something similarly easy, but faster!
Hardware RAID is considered a legacy system and was only ever suitable for large-scale operations that could keep an in-house stock of identical RAID controllers (as in exact-same-version identical).
@@mytech6779 That doesn't fit with what our customers are buying. We are selling about as many systems equipped with HBAs as we are with RAID controllers. I'd estimate that about 1/3 of our customers still use hardware RAID instead of software RAID. And many of them aren't large scale operations. And you only need an in-house stock of identical RAID controllers if they break constantly. And that doesn't happen either, at least with our customers.
@@titaniummechanism3214 I'm not telling you you're wrong, you have real world experience and I don't, but there's actually a L1T video on this channel called something like "hardware raid is dead" that goes into more detail on the topic
@@xXx_Regulus_xXx Yeah, I watched that. Wendell mainly seems to have a problem with hardware RAID solutions not verifying the integrity of the data that they return to you. He is probably right about that. My knowledge doesn't go very far on this topic. I only know that with our Broadcom controllers, you initialize a RAID before putting data on it. How much that helps with preventing faulty data from being returned, I don't know.
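(The integrity point is easy to demonstrate in software: a classic controller doesn't re-verify what it hands back on reads, which is exactly what checksumming filesystems like ZFS/Btrfs add per block. A toy file-level sketch of the same idea; the /data path and manifest name are just placeholders:)

```python
import hashlib, json, pathlib

# Toy end-to-end integrity check: record a checksum at write time and verify it
# at read time -- conceptually what ZFS/Btrfs do per block, and what a classic
# hardware RAID controller does not do when it returns data.
def checksum(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Build a manifest of file -> hash for everything under a placeholder directory.
manifest = {str(p): checksum(p) for p in pathlib.Path("/data").rglob("*") if p.is_file()}
pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))

# Later: re-hash and compare to detect silent corruption the controller missed.
stored = json.loads(pathlib.Path("manifest.json").read_text())
bad = [p for p, h in stored.items() if checksum(pathlib.Path(p)) != h]
print("corrupted files:", bad or "none")
```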
I want ICY Dock to create the ultimate slimline SATA media caddy. Imagine: having the eject button on the laptop's ODD fascia be _useful_ in this application. You push a button, the device is ejected, and once ejected it kicks the tray open for removal and insertion of other media. For gaming laptops with ODDs it would be _peak_, considering some games need a drive unto themselves just to store them _reasonably_.
Are you looking for MB411SPO?
@@waterbb7248 Seems the right fit, but _if only_ they made it so the tray slides out, is retained, and doesn't require the media to be fastened into place before installing it.
Really, I'm looking for something _exactly like that_ except, it would utilise the eject lever on the fascia to remove the tray by pressing a button behind it, as typical for an ODD when ejecting a disc.
I'm curious as I'm not much of a laptop gamer, is this something you want as a desktop replacement where it mostly stays in one place, or is it something you travel with for LAN parties frequently?
@@xXx_Regulus_xXx That oughtn't necessarily matter, and the use case _also_ oughtn't really matter. For the context of gaming, the point of such a setup for a user implementing this would be to treat the optical bay as a cartridge slot for games which take up a lot of space in on-board media otherwise.
While what was shared would fulfill this role nicely, there's no way to make it look like it's part of the laptop without compromising its functionality. If it were a sliding tray integrated into the unit, it would be theoretically possible to snap the fascia of the ODD tray from the laptop tray _and_ have the eject button be useful for swapping in storage media to run different games.
@tekwendell External PCIe slot expanders would be a good topic to cover.
Finally released!
I've made an XQ69 case clone, slightly thiccened; a PCIe splitter (for 2×8 lanes) is coming and should be here in 4-5 days.
The idea is to see if the Ultra 3 arrives soon enough (I still need to check whether bifurcation is a thing only on Intel's highest-end chipsets or if it is as normal there as on AMD), or else use the 7700X to make a system that has 2× 4-lane drives. Most mini-ITX is really limited, but bifurcation can really help.
Are you about a size 14?
I'm looking into buying an Icy Dock myself, well, the 2-bay PCIe one.
I have my board fully populated with 2TB drives. Reason being, I got the M.2 drives when they were at rock-bottom prices 2 years ago for $65 a drive!!!!! T-Force CARDEA A440 PRO. Crazy cheap!
My board, a Gigabyte Aorus Master X570S, has 4 Gen4 M.2 slots: 3 running at x4 speed and one running at x1.
It has 3 PCIe x16 slots, but if you populate all of them they all run at x8.
I'd really appreciate it if you covered the switching variants of these cards which don't require bifurcation. They seem to me to be just a better option.
How do non-workstation platforms handle PCIe bifurcation?
The CPU has 24 lanes, the GPU could take up to 16, then there’s 8 lanes left over for any number of M.2 drives.
Curious to know.
Can't cut them up in a way that makes sense if you want 4 drives
If you are using this on a non-workstation platform, you ain't using a GPU, you use this instead of the GPU
@@guiorgy Yeah, I thought that was totally clear?!
PCIe bifurcation can divide the PCIe lanes available on a slot. So a full x16 PCIe slot can support up to 4 x M.2 with 4 PCIe lanes each. That's if the motherboard supports bifurcation at all, which isn't a certain thing when looking at desktop motherboards.
So bifurcation splits the available PCIe lanes; it doesn't make more lanes available.
Now technically there's no need for an M.2 slot to have 4 lanes. They can have two or even just one lane, but that means the M.2 drives would be slower. I've never seen one with three lanes, so I'm not certain whether that's a possible design.
There are some cards that have a PLX chip. These take the available PCIe lanes and switch them among a number of PCIe devices. They can have devices using more PCIe lanes than are available in the slot, but the total throughput can never be higher than what the slot can handle. I was just looking at the LRNV9541-4IR, a card from LR-LINK. It has a Marvell 88NR2241B0 PLX chip that allows it to handle four M.2 drives on an x8 PCIe slot without bifurcation. The negatives are that the card is pretty expensive compared to one that uses bifurcation, and I haven't seen them state the number of lanes for each M.2 slot. If it's two lanes, then a card requiring bifurcation will be both cheaper and about the same speed. If it's four lanes, then this has the possibility of being faster when I/O hits only one or two of the drives at a time.
Anyway the advantage of a card like this is that the motherboard doesn't need to support bifurcation and that's often a big plus.
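(A quick way to put numbers on the bifurcation-vs-switch trade-off: with bifurcation each drive keeps a dedicated x4 link, while behind a switch all drives share the uplink. Rough raw-link figures, ignoring protocol overhead:)

```python
# Per-lane raw rates (GB/s), 128b/130b encoding, before protocol overhead.
LANE = {"gen3": 8 * 128 / 130 / 8, "gen4": 16 * 128 / 130 / 8, "gen5": 32 * 128 / 130 / 8}

def bifurcated(gen, drives=4, lanes_per_drive=4):
    # Each drive gets its own dedicated link; per-drive bandwidth is fixed.
    per_drive = LANE[gen] * lanes_per_drive
    return per_drive, per_drive * drives

def switched(gen, uplink_lanes, drives=4):
    # Drives share whatever the uplink provides: one busy drive can use all of
    # it, but several busy drives split it between them.
    uplink = LANE[gen] * uplink_lanes
    return uplink / drives, uplink

print("bifurcated gen4 x4/x4/x4/x4:", bifurcated("gen4"))   # ~7.9 GB/s each, ~31.5 GB/s total
print("switched   gen4 x8 uplink:  ", switched("gen4", 8))  # ~3.9 GB/s each when all busy, ~15.8 GB/s total
```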
They don't, unless your manual specifically says otherwise, but you can buy four-drive NVMe expansion cards with built-in switches to handle the limited bandwidth. Speed will be limited, but this can be offset with a RAID array across them.
How does it compare to something like the Asus Hyper M.2 card? That's what I'm running on my Gen 2 Threadripper platform right now and it does what I need it to do, but I'm seriously considering building a new X870E platform since we're getting 32 lanes out. I want higher clocks for single-threaded performance in creative apps (mostly photo editing; sounds stupid, but you try waiting an hour to crunch 50GB of photos from a session, it really slows things down).
I'm looking at PCIe switches (HBA), but this is very cool.
Well, I have had nothing but trouble with the 990 Pro on an M.2 PCIe adapter in my servers.
After 4 months in use as the data storage for a database, they start disconnecting from the system, and I have to cold power cycle the system to get them back.
Had it happen on both AMD servers I used them in.
Updating the firmware did nothing, and drive temps never went above 38°C.
Previously I used those types of adapters in other, older Intel systems for years without issue,
so I am a bit skeptical about the newer M.2 drives working reliably 24/7, even in fancy cooled adapters.
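(When drives drop off like that, it can be worth pulling the controller's own logs before blaming the adapter. A minimal diagnostic sketch, assuming Linux with nvme-cli installed and a /dev/nvme0 device; adjust the device name for your system:)

```python
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    try:
        out = subprocess.run(cmd, capture_output=True, text=True)
        print(out.stdout or out.stderr)
    except FileNotFoundError:
        print(f"  ({cmd[0]} not installed)")

# SMART/health log: media errors, unsafe shutdowns, thermal throttle events.
run(["nvme", "smart-log", "/dev/nvme0"])
# Controller error-log entries, if any were recorded before the drop-offs.
run(["nvme", "error-log", "/dev/nvme0"])
# Kernel side: look for PCIe link resets / AER messages around the failure time.
run(["dmesg", "--level=err,warn"])
```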
I would love to run ZFS natively on those babies :D
I can't get my H10 Optane M.2 to work in my AMD system, either on a single-slot card or in the second M.2 slot. It would be kinda neat if it did.
What about Synology's offerings? If it will bifurcate x8/x8... With an adapter for an ITX board, we get two x16 slots physically but two x8 electrically. That way, one can hold 4 NVMe drives and the other a 10Gb NIC plus two additional NVMe drives. They are 3.0; however, 3.0 drives run relatively cooler. They have PLX chips to handle switching. Edit: they are about $180, and a bit less for the other, versus $339 for this one.
I am waiting to finish the video to see if this fits my use. The Dragon Canyon i9 may still have life as an NVMe NAS along with its built-in 10Gb ethernet.
Please, if possible, reply about Synology's offerings, pros and cons.
I don't need the speed and I'm lane-poor, so a PCIe gen-X x8 slot feeding 4x NVMe at x4 gen X-1 would be great for me. An x4 slot feeding gen X-2 would also be suitable for me.
That card is super cool, thanks for sharing.
How about merging multiple humongous point clouds? Are there any server MOBOs that can span RAID across multiple PCI-E slots full of these ICY DOCKS? Do you see any benefits to CXL eliminating the intermediate memory buffer bottleneck?
Kinda cool.
But you make going to MicroCenter sound like a *bad* thing.
Desktop? Kinda overkill but for a small cluster... it would make sense for edge computing. Where you would want to even have a good Nvidia video card for GPU tasks.
You would also want to have a 100GbE network between your small cluster nodes if not higher. What you end up with is a data fabric on the edge.
If you don't want to cluster... with the card you have RAID... so while you may lose some redundancy in terms of system failure, it would still be a good edge compute node where you can do some preprocessing and filtering before you send data back on-prem to your main DC.
Would be curious to see testing of AI training: loading GiB-sized files into VRAM, processing, et al.
So this card requires bifurcation (like many do) but am I wrong that they could simply put a PCIe switch chip on the board and then it would work with any slot? Is it _that_ expensive to just include the chip, especially for a card that's in the hundreds of dollars price class? They used to put such chips on dual-GPU graphics cards without seemingly blowing the budget too badly.
Yeah, it's expensive. Also not sure how many Gen5 switches are even out there. I wish Intel made the chipsets they build usable as a pure PCIe-to-M.2 switch when put in a PCIe slot (technically doable); instead we get this stuff, which is just a bunch of wires routed on a PCB for a high price and high profit margin. It's just good for Level1Techs to go gaga about. At the moment it's better to be on Gen4 with simpler multi-drive cards and no extra power requirement. This is getting hilarious, with peak power draws for 16TB getting near or above hard drives. Level1Techs never does long-term stability tests either -- and I have suffered from the Chinese board computers he sings praises about -- YMMV if you follow his recommendations.
So you're saying... this is the preferred storage for a 9950X with a 4090 for pure gaming?
Will the performance from four p1600x drives be killed by the overhead?
I had nothing but trouble with an Icy Dock SATA bay. The computer would just freeze with drives connected to them; yes, them, I bought a second one just to validate that it was indeed the Icy Dock causing the computer to freeze, and it was, so I have a hard time trusting this company.
Do they make this with a Thunderbolt interface, too? The poor Mac users are chronically starved for local storage…
Is Sapphire Rapids capable of CXL? Could this help speed things up further?
So is this a good card to use as your boot raid for your server?
So would this be a good card to use as your L2ARC on TrueNAS?
So, to clarify, did the T705 M.2 drives work fine with VROC? I was looking at them (on sale) but couldn't find anybody doing VROC to confirm Intel's RAID solution would like them.
So what you're saying is we should turn workstations into gaming PCs, get this add-in card and a bunch of M.2 drives, and then proceed to install our entire Steam libraries? Nice.
When used on workstations, I think Linux is much better for this product, because it can share volumes at several gigabytes per second (over NFS), while Windows is limited to a few hundred MB/s.
What are you talking about?
A properly configured SMB setup will be MUCH faster than NFS.
SMB multichannel and SMB Direct are fantastic.
You aren't going to get any faster without a purpose built SAN using exotic tech like NVMeo(F/E/I/etc)
@Prophes0r Are you talking about Windows Server? I started my post with "when used on workstations..." precisely because of the limitations of Windows Pro for Workstations. To me, the nice feature of the Icy Dock & M.2 NVMe solutions is that you can get a lot of performance in a workstation without the need for server-class hardware and server licenses.
@@javiej SMB Direct (Multichannel and RDMA) is available on Windows Pro.
My motherboard's PCIe bifurcation menu only shows X8X8 option, no X4X4X4X4 sadly
It's the Asus B650E-I
What about plugging 4x Hailo-8 devices. Now that would be interesting.
Try a giant LLM like Llama 3.1 405B run straight off of NVMe. Make a huge swap space and give a system with 16 or 32GB of RAM and a budget CPU with integrated graphics a torture test!
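(That sort of test can be approximated without a full inference stack: memory-map a file far larger than RAM and stream through it the way mmap-based weight loaders do, then watch what the drive and page cache deliver. A rough sketch; the weights.bin path is hypothetical and numpy is assumed:)

```python
import time
import numpy as np

# Hypothetical path to a huge weights file (hundreds of GB), far larger than RAM.
path = "/mnt/nvme/weights.bin"

# Map the file read-only; nothing is loaded until pages are actually touched,
# so the OS streams data from NVMe on demand, roughly like mmap-based loaders.
weights = np.memmap(path, dtype=np.float16, mode="r")

chunk = 64 * 1024 * 1024  # elements per pass (~128 MB of fp16)
t0 = time.time()
total = 0.0
for start in range(0, len(weights), chunk):
    # Touching the slice forces pages in; the sum keeps the read from being optimized away.
    total += float(weights[start:start + chunk].sum())
elapsed = time.time() - t0
print(f"streamed {weights.nbytes / 1e9:.1f} GB in {elapsed:.1f} s "
      f"({weights.nbytes / elapsed / 1e9:.2f} GB/s), checksum {total:.3e}")
```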
In gaming, I can tell that Minecraft would benefit from high IO and read speeds. In cases where players are loading new chunks with chests, trying to open a chest would often result in a server freeze for up to a minute. The server would then somewhat predict what should have happened afterwards, resulting in player deaths that otherwise could have been avoided. This is usually avoided by preloading most of the world or running the server in a ramdisk, both of which have serious flaws.
Also, loading up big modpacks, or upgrading the modded server, would take a huge amount of time depending on the number and type of mods, up to several hours.
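(The chest-opening freeze is essentially a small-random-read latency problem, so the number to compare drives on is 4K random read latency rather than sequential throughput. A rough sketch; the region-file path is a placeholder, and the OS page cache will flatter the numbers unless the file is cold:)

```python
import os, random, time

path = "/srv/minecraft/world/region/r.0.0.mca"  # placeholder region file
size = os.path.getsize(path)
fd = os.open(path, os.O_RDONLY)

lat = []
for _ in range(2000):
    # Random 4 KiB reads approximate chunk/chest lookups scattered across the file.
    off = random.randrange(0, max(1, size - 4096))
    t0 = time.perf_counter()
    os.pread(fd, 4096, off)
    lat.append(time.perf_counter() - t0)
os.close(fd)

lat.sort()
p50, p99 = lat[len(lat) // 2], lat[int(len(lat) * 0.99)]
print(f"4K random read latency: p50 {p50 * 1e6:.0f} us, p99 {p99 * 1e6:.0f} us")
```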
Well, yeah, I have a test. You could end the debate on bcachefs vs btrfs vs mdadm vs LVM w/ RAID 0 vs 1 vs 10 vs 6... and why doesn't mdadm support 5E?
Looking for something similar for my cheap-o ASRock A520M-HDV board using an AMD 5700G APU (so no GPU will hog my PCIe bus).
The problem: I don't think it can do bifurcation, because it's PCIe 3.0…
7:20 Not using the Shizuku Edition, smh.
So are there cards that work on boards that don't support bifurcation?
Thread the Mini-Ripper would look and feel like a 5" x 5" x 5" NextCube, pull power from the atmosphere, read your mind, and mine Bitcoin Bigly.
8 channel memory boards like X299?
So the PCIe Bus Error Log is clean with PCIe Gen5 and PCIe AER enabled?
Icy Dock makes some nice stuff really .. and it used to be cheap too :)
Is it possible to use these on an i9 machine, or do I need bifurcation?
Bifurcation has always been really problematic for me; even on boards from Supermicro it is not as reliable as a card that comes with a PLX switch chip - I guess this feature is not a high priority for mainboard vendors to do a lot of testing on.
It's backwards compatible with Gen 4 right?
Wendell, if I have an i9-12900K,
will I be able to use ICY DOCK's Gen 5 M.2?
mostly no
I wanna see how ZFS does on it.
I'd rather mobo makers put in more PCIe slots than M.2; if I need more M.2 I can buy an adapter.
With my Xeon 6132 (LGA3647), 6x 16GB DDR4-2400, and 4x 2TB Samsung 970 EVO Plus, I was able to get 11-12GB/s with 55-60% CPU load. That's with a ZFS stripe (8TB total) on the latest TrueNAS SCALE 24.10.2... so... 3 times less with a platform from 2018, that's a shame :D
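(For anyone wanting to sanity-check a number like that, a plain large-block sequential read over the stripe is enough as a first pass; the file path is a placeholder, and the file needs to be much bigger than RAM so ARC/page cache doesn't inflate the result:)

```python
import time

path = "/mnt/tank/bigfile.bin"  # placeholder test file on the ZFS stripe
bs = 1 << 20                    # 1 MiB reads, large enough to keep the drives busy

total = 0
t0 = time.time()
with open(path, "rb", buffering=0) as f:
    while True:
        chunk = f.read(bs)
        if not chunk:
            break
        total += len(chunk)
elapsed = time.time() - t0
print(f"read {total / 1e9:.1f} GB in {elapsed:.1f} s -> {total / elapsed / 1e9:.2f} GB/s")
```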
Still waiting for PCIe to actually support hotswap reliably outside of SOME special servers (not even all servers lol)
Linux RAID does not scale, no matter the level. The first step is to disable bitmaps. Luckily, patches that fix RAID 5 (and hopefully other levels) are in the works.
ICY dock is what I got in my desktop
They are good
I'm quite sure you could get 4x 4TB U.3 drives for the same price, so why would you do this???
Bring back the SATA SSD hot-swap bays, but for M.2... edit: they sort of did.
What's the wallpaper? :)
year of the linux workstation
2 SSDs + 1 parity SSD + a 25GbE NIC would be the ideal server card
You could build that in a system where the platform supports x4/x4/x4/x4 PCIe Bifurcation in a x16 slot and a few adapters.
@@abavariannormiepleb9470 25/40/100GbE cards are so overpriced that it's cheaper to get mobos with them built in, like the ASRock Rack AM5 board
9:00 2025 year of the Linux desktop confirmed
I want to see this in action with 400GBE.
Something related to Unreal Engine maybe? Rendering should take advantage of this much power...
Nice 😮
Funny how modern consumer motherboards only have a single 16x slot and it's used for the GPU. The rest of the slots are basically cosmetic.
Only 2 generations ago there were 5 functioning slots, and more SATA ports, so we could actually use hardware like this to add NVMe (M.2) storage... now there's no option, and NVMe is forced down our throats at the expense of any other connectivity (sound card, NIC, capture card, HBA, SATA itself, etc.). I wanted to upgrade from X570 to X870, but I literally can't install my current hardware on any board, and only about 4 X670 boards even support it. I'm going to have to somehow get rid of even more hardware when I already downscaled from X99 to X570...
Server world must be nice to live in.
Damn, $340 w/o switch chip is pricey! Looks like a real nice bit of kit for those with bifurcation support tho!
You can get an Asus card that does the same thing and is Gen5 for $70. That ICY Dock card is ridiculous. If M.2 was hotswap it would maybe make sense.
@@dirkmanderin Considering it's Icy Dock I'm surprised it's not more like $500 lol
On the other hand they do specifically market it as "Users can swiftly open the side panel of their computer case and slide the quick-release knob to the left to extract the M.2 SSD tray with minimal effort. "
And it's substantially smaller than the Asus card (partially because the Asus card supports 110mm drives).
Icy Dock knows their niche
_Seventeen nines_ feels like a Freudian "Selip", iykwIm. Iykyk.
Aside from a bunch of cheap 12V fans, that's literally identical to the $35 4x PCIe to NVME adapters.
None of those have trays for quick swapping. If you want to change a drive, you have to take out the whole card and unscrew it.
@@marcogenovesi8570 True, but less than 1/10th of the price… I guess it depends on your need for hot swap, but that's still one heck of a markup for 4x $5 connectors on DigiKey.