IcyDock, apply directly to the workstation!
This is so perfect. Can we get the ShamWow guy and Icy Dock together? It shouldn't cost more than two packs of cigs and a ramen pack to hire him these days.
The 8x M.2 NVMe box really looks like an interesting way to handle flash storage for virtual machines, since it's relatively easy to access the devices while having high capacity.
High-density ZFS pool, and you can populate it with the lower-cost, more-available M.2 drives instead of sourcing U.2 drives. But, ugh, the hot-swap bay is I think > $400, and the HighPoint 8-port NVMe controller is > $750, so not even counting cables, that's ~$1200 before you even start buying the drives themselves.
@dondumitru7093 yeah, not exactly something for every homelab ever.
I was happy to find one of these on eBay for around $350, but I gave up when I went down the rabbit hole for the controller needed. A shame it doesn't come with one, or just have a full 8 SATA ports on the back.
Really worried about the heat, tho.
I recently bought their 3x 5.25" -> 4x 3.5" HDD cage. I'm very surprised by how high quality it is: almost all-metal construction, really nice controls, easy-to-use design, and zero vibrations.
I bought a MB840M2P-B and an extra MB840TP-B tray for switching between Windows and Linux.
So many "standards", so many connector formats, so many headaches !
I loved the x8 NVMe cage until I realized there would be 8 cables behind it. I'm not sure what to think about that.
5:40 That Noctua fan casually sandwiched between the Radeon Pro GPU and what I presume to be a PCIe NVMe RAID card is awesomely ghetto for such a high-end system.
I love Icy Dock! I have quite a few bay adapters that I use. Great company producing great products!
I love ICY DOCK, they make a lot of pretty good stuff. I unintentionally have ended up with quite a few of their products at work and at home 😅
I had no idea about the longer GND pins and hot-swap design. Thanks! Learned something today!
Icy Dock is really cool. I stumbled upon their 5.25" quad U.2 enclosure a couple of years ago, and there were many long nights spent trying to get U.2 drive hot swap working on an ASUS X299 workstation motherboard with that HighPoint U.2 carrier card. The project was eventually abandoned, but that was a fun time. Now I have the 5.25" 8x SATA enclosure in my NAS. Very cool company.
Yeah, even their SAS/SATA enclosures; I use them in towers all the time. Super useful.
I've been using their SATA 4x2.5" bay for quite a while, as well as the 6x and 8x versions. They have a bunch of really cool stuff too.
One of the smartest people I know in IT convinced me that software RAID was the way to go at home. His massive home automation array could boot in seconds. When he had a hardware array everything took forever to reboot.
Software RAID in Linux has come so far recently, especially with the new multi-core processors having the power to run it properly. I'd love to see a video just on Wendell's take on software RAID in Linux. I'm sure I'd learn a lot.
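For anyone curious what the CPU is actually doing in that case, here is a minimal Python sketch of the XOR parity math behind RAID 5; the chunk contents are invented for illustration, and real Linux md RAID does this in-kernel with optimized SIMD routines, but the arithmetic is the same:

```python
# Toy RAID 5 parity math: parity is the XOR of the data chunks, so any
# single lost chunk can be rebuilt from the parity plus the survivors.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three equal-sized data chunks standing in for three member drives.
chunks = [b"chunk-0!", b"chunk-1!", b"chunk-2!"]
parity = xor_blocks(chunks)

# "Lose" drive 1 and rebuild its chunk from parity + surviving chunks.
rebuilt = xor_blocks([chunks[0], chunks[2], parity])
assert rebuilt == chunks[1]
print("rebuilt:", rebuilt)
```

It's cheap enough per byte that a modern core barely notices, which is a big part of why software RAID has caught up.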
Great video. It's great to have a computer tech channel that isn't gaming focused.
I found Icy Dock back when they had USB 2.0/Firewire external SATA enclosures and it's been really cool seeing how their products have evolved since then.
3:15 Wow... you're like the one YouTuber who understands how cables work.
The idiots screeching about how AMD "caused" the problems with PCIe 4.0 graphics cards/motherboards connected through PCIe 3.0 x16 riser cables were hilarious.
Completely agree, but it would have been really nice if they had a BIOS version for their ITX boards that defaults to PCIe 3. In some chassis updating the BIOS is a real PITA.
Wendell's breakdown on how Radeon cards handled poor cables was world changing. Since making the suggested changes, I have not had a black screen on my PC with a Radeon 5700. This channel is a god-send
That intro jam...
I love these Icy Dock devices. They've had some of the best gear for adapting drives. I also love the case being made for software RAID over hardware these days. 10M IOPS per core is really making hardware RAID controllers just storage with extra steps.
ICY DOCK has been around for ages in Europe. I used to have multiple of their drive bays in my tower (back when IDE was still a thing). For me they always worked great, but I only used them in a prosumer kind of way.
Ah, now this is the kind of tech that's going to get us going places. We have the processors, we have the storage, but the middleman between the two needs work. Will definitely give these guys a look.
Can you please do a dedicated video on the 8x M.2 enclosure? That thing looks amazing, and I would love to know more.
Not a PCIe device, but I recently created a Linux "recovery/maintenance" boot drive with a USB 3.1 Gen 2 (10 Gb/s) NVMe external enclosure.
Much, much better than using Live OSes on a USB flash drive. Way, way faster, the data is persistent and yet still very portable. I can now run my customized Linux on any machine with a USB 3.1 port. Not sure a USB 2 port could provide enough power.
I'm using a 1TB WD Blue SN550 NVMe. Not the fastest device around, but I figured the USB interface would be the limiting factor anyway, so why go faster. And the SN550 uses less power than some of the faster NVMe devices.
It works great. As a matter of fact, I'm using it right now. It is fast enough on an AMD 3600X to be a daily driver. USB (and NVMe) have come a long way.
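If you want to confirm an enclosure like that actually negotiated its 10 Gb/s link on Linux, sysfs exposes the speed per USB device; a rough sketch (the paths are standard sysfs, the output formatting is just illustrative):

```python
# Print the negotiated link speed of each USB device on a Linux system.
# sysfs reports the speed in Mb/s: 480 = USB 2.0, 5000 = 5 Gb/s,
# 10000 = the 10 Gb/s (USB 3.1 Gen 2) link the enclosure should get.
from pathlib import Path

for dev in sorted(Path("/sys/bus/usb/devices").iterdir()):
    speed = dev / "speed"
    if speed.exists():
        print(dev.name, speed.read_text().strip(), "Mb/s")
```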
I love Icy Dock! I had one of the 2.5" SSD four-bay enclosures and liked it because it only took one power plug to run all four (along with the four SATA cables). I just laid my SSDs in the trays and slid them in. They worked fine until one day I turned the rig sideways to do something, and a couple of the SSDs came off the sled enough that I could not open the doors to take them out. They still worked, but one time I was taking it down again to play around and decided I'd take the enclosure out. I was able to turn it upside down and knock on it a bit, and the SSDs dropped into place so I could open the doors. I added the enclosed screws (8 ea.) to the bottoms of the sleds to secure the drives, and all has been fine since.
Oh, I hooked a 120mm fan to the bottom of the enclosure blowing filtered air up from below, and the SSDs stay super cool.
I've bought some Icy Dock products before, and while the caddies are good quality, the fans they use on them have not been. I purchased 6 caddies that each had 2 fans, and at least one fan broke in every single caddy within 6 months. On some models 2 broke. I requested replacements, and they were nice enough to send me just the fans so I could replace them myself.
Several of the fans came with broken blades, like literally blades missing that weren't even inside the packaging. Of the fans that did work, they failed within 24 hours to 3 months. So again, I liked the caddies (good mechanical engineering on their part), but the fans are atrocious. I ended up replacing the fans with Noctuas, which wasn't a walk in the park due to Icy Dock using custom connectors on the ones I purchased, but splicing did work.
And the only thing their 4x 3.5" cage does is slow-cook its victims (insufficient cooling). It does not deliver enough power to spin all of them up.
Do these docks use standard-sized 40-80mm fans, or are they proprietary?
@shadowarez1337 The ones I bought were 40mm in size but thinner than normal. I think it depends on the caddy.
Just recently used one of these, 2.5" to 3.5", for a 1TB SSD in my PlayStation 2 with the SATA converter. Icy Dock makes great stuff!
EDSFF is available in a U.2-like form factor too (basically a 2U version), so maybe that will get more popular over time (STH has made an in-depth video about it).
I bought a SATA controller that fits into the NVMe slot on my motherboard. It turns that one drive port into five SATA ports. It's slick. I got it off of Amazon and it works pretty well, I think.
Thanks so much! You read my mind. I just spent the last couple of days trying to find ways to get PCIe 4.0 out of the motherboard to something like a hot-swap bay for M.2. Very pricey, but you have given me some more options to look at.
I have Icy Dock's 6x 2.5" in a 5.25" bay. It has been great.
Had no idea that staggered pins are needed for hot-swapping. Or that some pins were different lengths. OR that some of these devices were hot-swappable!
I LOVE TECH! :D
Love Icy Dock products. Quality components built to last.
Icy dock is sick, you can glue together whatever you want wherever you want. I always catch myself just browsing their website and dreaming up exotic setups
I thoroughly love the entire premise of that 5.25" RAID capable M.2 drive bay setup. It just reminds me a lot of those 2.5" enclosures in that form factor that I was always fond of. I'm really tempted to get one once I have a bit more cash flow and migrate my storage to some more reliable flash in RAID. It would require a new case, though. Also, can you imagine modding an old HP Pavilion case for more air flow, then shoving one of those into the optical drive bays? Talk about some seriously overkill sleeper PC fodder.
Edit: I remember my first encounter with a system that actually gave me the option of static vs. variable device ID enumeration was an ancient IBM Power5 box running VIOS. I'm sure it wasn't anything special about that particular machine, just that it was the only one I actually noticed that this was a feature of. And boy is it useful to have static device ID enumeration when you're fighting with virtual network ports being mapped onto actual ports in a SEA config passed through by VIOS. Otherwise, the network adapter IDs can change when you're rebooting or removing and adding network ports, and that can just make it a real pain to figure out what's going on.
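On Linux, the equivalent of that stable enumeration lives in the udev-maintained persistent device links; a small sketch (standard paths, nothing exotic) that shows which stable, hardware-derived name points at which kernel-enumerated device on this boot:

```python
# Map stable names (/dev/disk/by-id, derived from model/serial) to the
# kernel's enumeration-order names (/dev/sdX, /dev/nvmeXnY), which can
# change between boots or when devices are added and removed.
from pathlib import Path

for link in sorted(Path("/dev/disk/by-id").iterdir()):
    print(f"{link.name} -> {link.resolve()}")
```

Referencing the by-id (or by-uuid) names in fstab and scripts sidesteps exactly the kind of port-shuffling pain described above.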
Wendell, how do you time these videos? I swear to god you're better than Google and Amazon at knowing what I want.
Long time no see. Praise Wendell.
For the contest: I work for my school's student technology support service, but we've been having issues with our M.2 backup station and adapters, so if I won I'd use the removable rack to update the backup station so we can do more data backups.
I have their 8x SATA 5.25" bay in my compact home server, full of 2TB HDDs (the plan is to go full SSD when I can afford it). I ended up swapping the fans for some low-end Delta fans to control heat a little better, as spinning rust gets a little too toasty during parity checks and such in such a dense package. Icy Dock is definitely a "you get what you pay for" brand. The price tag stings for a home user, but by golly it does exactly what you need.
If only PC cases had 5.25" bays anymore.
As I understand it, you might need a redriver for that cable length on the U.2 thingy.
I like your blog. I enjoy your presentations, almost all of them. 🙂
8:59 “aBav.” is just the abbreviation of my full user name in the forum - thought I was hitting a character limit during registration 🙃
I like these products. I'd never use them, but I like them. Stuff like this always makes me wonder, what can you plug into a modern PC that's a standard form factor, and make it do something non standard?
I have the 16-bay 2.5" to 5.25" in a steel mATX case with a handle. I use this server as my on-site redundancy (in an outbuilding that doesn't have a server rack on the lowest level, like in my house, where any running water from a busted pipe would pool).
It's portable because my off-site backup only has 4G through CALYX, and I only live-sync the really important stuff. All of my photos, videos, and TV recordings I sync once every few weeks.
I have no idea about server-grade hardware, but knowing Icy Dock from more consumer-oriented products, I agree that their stuff is good.
It looks weird at first because it barely comes with anything but it works well.
Icy Dock is awesome; I use them in all my computers. I have a Blu-ray recorder and two SATA drives in one 5.25" bay for backing up data, videos, and photos. I even use an Icy Dock on a retro machine running a Win98/XP dual boot, with Win10 on a different SATA drive; I just turn on the drive I want to boot using the on/off switches on the face of the Icy Dock, and I hooked a SATA-to-IDE adapter to the back of the SATA bay so I can swap the drives out. Plus I use an Icy Dock to back up/clone my OS drives.
11:15 Their product listing claims the CP065 only supports 2280 M.2s.
It appears he has the ToughArmor MB873MP-B, which is functionally the same.
I have actually been eyeing that exact M.2 to PCIe card to add to my current workstation.
I currently have a MB014SP-B and a MB171SP-B mounted in the available 5.25" bays, and that adapter card is the last "thing" I need to be able to hot swap any modern(ish) drive for imaging/recovery/experimental/etc. purposes.
I'm not sure how feasible it is, but it would be really nice if there was something that combined 2.5 SATA and M.2. Essentially the MB014SP-B, but instead of 4x 2.5 SATA drives, it was 2x 2.5 SATA and 2x M.2.
Excellent Vid! Question: The background picture you have on the monitors behind you, please can you share the download link for that exact picture?
Love my compact ASRock EPYC3451D4I2-2T; HATE the OCuLink. I had to get the x8 SATA from Australia.
I am planning a media server build and one of these would be a nice way to go.
I think Icy Dock may make their units hot-swap capable by elongating the grounding pins on the connector side of the bay. Maybe take a look? The FAQ section of the product page lists hot-swap capable controllers. They seem to use tri-mode RAID/HBA adapters, capable of accepting SATA/SAS/PCIe drives. I looked into this for hot-swap data erasure. PR'ed a pair of tool-less MB720M2K-B docks and an Icy Dock-recommended HBA controller. When we get the hardware in, I will be interested to see the performance and capabilities of the hardware.
Nice to see a review of them; they've been around for years.
I had that issue with a PCIe 4.0 GPU and an X570 motherboard... the riser cable was a 3.0.
I agree with droknron, Icy Dock uses cheap fans. The M.2 caddy is a pain in the rump to load your stick into. I have to disagree with you about PCIe 3.0 for the standalone adapter, though: I'm getting solid PCIe 4.0 speeds on my 980 Pro in the slot furthest from the CPU on my ASUS motherboard.
What we really need is a PCIe card with multiple M.2 slots (4?) that only requires a PCIe x4 slot to run them all. Basically a card with a PCIe switch. Not necessarily RAID.
I would use the PCIe to M.2 NVMe adapter in my HP desktop PC as a boot drive. It has no M.2 slots.
I realize I'll need to point to it via a GRUB or syslinux bootloader, but that's no problem.
Thanks for the PCIe solutions education. :-)
I went down that rabbit hole once and tried to make Clover work... didn't happen.
All this stuff is great 👍🏾👍🏾
Nothing much for m.2 nvme drives though 🧐🤨🧐
Icy Dock has come a long way, it seems.
How would the ASUS Hyper M.2 card compare to the Icy Dock for a software RAID solution?
Can you test Areca's new NVMe RAID controllers?
Wendell, if you show b-roll, can you perhaps actually show what you are talking about? E.g. the U.2 connectors on the WRX80. Or these slim ones. I often find myself looking for what you are talking about and often it is not even shown because the b-roll is rather unrelated. Otherwise very interesting content, thank you!
Would love to see an ITX NAS running a bunch of NVMe storage in ZFS or similar.
Very useful content, thanks! I have a suggestion. I am a visual learner. Much of the valuable content here was spoken, and not provided in a graphic/table/text. Thus, I will have to rewatch (possibly many times) to remember details. Please consider adding the key information graphically
So... would a 2x U.2 board be "better" somehow than a half-full 4x M.2 board on an x8 link in an X570 system?
I have a couple of Icy Dock 2.5" to 3.5" drive adapters for use with my 3.5" hot-swap bays.
If there is one thing I hate about NVMe, it's taking the drive out so I can test something without risking data loss.
Having an NVMe on a sled would be great; too bad the only slot I could use for that would be PCIe 2.0.
Even having it on one of those cheap eBay cards is better than dealing with that tiny screw on a motherboard. I'd prefer to use those over onboard M.2 slots and just have more PCIe slots on board, as unplugging a card is not such a PITA.
Color me stupid, but I feel we need one or two choices that do bifurcation onboard, for 2x or 4x M.2s. I don't like hammering my OS M.2 with workloads; I'd rather leave that to dedicated M.2s.
The Aplicata Quad M.2 NVMe SSD PCIe x8 Adapter has an x8 bifurcation chip to run 4x M.2s at PCIe Gen3 x16 speeds.
I'd love to see something like that fully utilize the x4 Gen4 lanes in X570/Z590-type mobos and on, with 4x Gen3 M.2s. This way users don't have to sacrifice PCIe lanes to their GPU or OS M.2.
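If you do run drives behind bifurcation or a switch card, it's worth checking what each device actually negotiated; a rough Linux sysfs sketch (the attributes are standard, though not every device exposes them):

```python
# List negotiated PCIe link width and speed per device, e.g. to confirm
# each M.2 drive behind a bifurcated x16 slot got its expected x4 link.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # some devices don't expose link attributes
    print(f"{dev.name}: x{width} @ {speed}")
```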
Can you please share the model of the first card, the one with two drives on PCIe x8?
In this episode of "Wendell Rambles", We make a mess on a desk with Icy Dock stuff and all manner of flash drives :p
Hey Wendell! Great video, thanks! I would love to get your advice on the current best-value second-hand U.2 drives. I've got dual M.2 drives in my server via a PCIe riser for my Proxmox box, and they are wearing out; it was a stopgap solution. I'd like to install two 1TB drives in a mirror, and I've only got a few PCIe slots available.
Where does one acquire those SFF-8643 to U.2 cables in the PCIe 4.0 flavor? I have the same WRX80 Sage motherboard and have been looking to use the U.2 ports, but I can't find any cables in that form factor that support PCIe 4.0.
I wonder if we could organize a Drop-like event for decent flash storage; it's a pain to get decent SSDs (Micron/IronWolf) for NAS/servers as a home user :/
I don't have the slightest idea what you're talking about, but this tech is interesting.
I would love an external M.2 2230 enclosure with USB-C ... Nice and fast USB Stick ...
Great info! As per the ASUS WRX80 manual, page 1-13, it states the U.2 ports are PCIe 3.0. How did you manage to get them to work as PCIe 4.0? Or is it a mistake in the manual? Thanks in advance.
8x M.2 NVMe - Hmm, that's really cool!
8x M.2 NVMe 22110 supported - F**K YEAH GIMME
What's the name and model of that PCIE 4.0 U.2 add-in card at 3:46? I'd really like to get one of those but I am having trouble finding it.
As much as I'd like a free M.2 to PCIe adapter, I have no current need for something like that, but it would be interesting to use as a dedicated Linux boot drive, because I chose poorly and built with an ITX motherboard in an ATX case, so I have a single M.2 slot...
Almost time to upgrade my portable server, though.
Re: hot plug for M.2, couldn't the socket be designed to make connection with the ground pins first, even if the connection traces on the male end are all the same length? Sure, that would probably cost more than it being properly designed, but to my extremely naive I-Am-Not-An-Electrical-Engineer view, it seems like they could make it work.
How can you connect the MB873MP-B's eight SFF-8612 connectors to a RAID controller such as the Broadcom 9560-16i adapter card? I cannot figure out which cables one needs to connect all 8 drives to a single adapter card. The Broadcom 9560-16i manual says it supports up to 32 NVMe drives, so in theory it should be possible.
Not using pro NVMe stuff, but over the years (10+) I've used plenty of Icy Dock gear: drive adapters, external storage/RAID boxes, including the unique 10-HDD bay, which I'm waiting to come back in stock since I need a second one. Icy Dock build quality isn't "great", but I've never had a functional problem with them. Yes, the fans are always crap, but that's better than a RAID enclosure that writes badly and corrupts your drives... I had that from competitors and lost 24TB of data :/
When I first read the title of this video, I thought the content was going to include something other than storage. Since the means of connection is PCIe, why are there not other expansion options, such as ExpressCard slots or a 5.25" bay with a regular PCIe x4 slot in it? This would provide useful extra expansion capability and would go a long way toward mitigating the installation of a GPU that prohibits the use of several adjacent PCIe slots.
Apparently I'm "fabulously ancient". I'll take it.
I wish hard cards were still a thing. I wonder what it would take to rebuild a lot of the retro parts, would modern chip shortages affect the old stuff as bad?
The build quality by Icy Dock is also way higher than cheap chinesium plastic, but all of this comes with a cost, as usual.
I posted on your forums about an M.2 PCIe sled that you could switch drives on and off with. Thoughts? Even faster and safer than a hot-swappable M.2 bay. I'm doing this with SATA now, but there are many cables; not so with M.2. The PCB could make it so simple and easy. A dream. Also very ransomware-proof.
Level1Techs, how do I convert all my PCIe x16 and basic M.2 connections into a hot-swap NVMe (or future CXL) cage in the front bays, the kind available to customers instead of kept to OEMs? Would I need to convert all PCIe connections to U.2 or U.3? Desperate to know, so that I can build a fast NAS/server that way.
Thinking the HighPoint SSD7204 is the best bet for M.2 NVMe: four NVMe drives in an x8 slot. PCIe 3, but 🎉 RAID-able 🤨🤨
Isn't PCIe generally hot-swap capable?
What's the problem with the Asus Board?
would love to see some of these tossed into a regular pc
I'd like to upgrade (replace, not add a second drive) the HDD in my mid-2011 27" 3.4GHz iMac. What are my inexpensive yet reliable options? Thanks.
Hi, which tower cases are those displayed in the video? Thanks.
Love videos like this.
6:39 is why you are here
If I had it, I would put it into my home server and use it as the cache drive for my AI projects. Right now I am working on an AI that can predict and create a full "song" from a 3-second snip of an existing work.
Nice video clip, keep it up, thank you for sharing it :)
Love this kind of video
I am shocked that motherboards not supporting hotswap on all storage still exist.
I need links to those cables....
can you please post the link for the pcie gen 4 cables, thanks
This guy is seriously a genius Lol
Hey everyone! If I buy an Intel P5801X in E1.S, what is a good adapter I can use? I want to run an E1.S to PCIe x4 Gen4 adapter. I see a few options available, but I'm not sure if they are all equal. I'm wanting to run the P5801X as my daily OS drive. Thanks!
I'm totally down, but none of this stuff works on consumer systems. I have several 5950X machines that are otherwise better than Threadripper, with similar core counts for significantly less money, but for some reason we only get 24 PCIe lanes!? What the crap is that about!? Maxed out, you get two x8 devices and one x4 device, and except in super niche circumstances you don't even get 10G networking. So we're stuck with super expensive PLX chip solutions, or bust. Very frustrating.