@@ProjectSmithTech It looks like, sadly, my B550 AORUS PRO (AC) does not mention bifurcation in the manual. I guess I can do my M.2 boot drive, primary GPU, 1 straight Optane stick, and 3 cards adapting x4 to B-key, which will occupy all my slots and give me four 16-gig sticks to fool with using PrimoCache for Windows or bcache for Linux.
@@ProjectSmithTech Looking into it further, it seems the 16GB B+M keyed sticks only use two PCIe lanes each? If that's the case, someone with board support for it could do eight x2 lane splits, it would seem.
I got this as recommended and indeed it's very informative, thank you. I'd like to ask: I'm building a home server/NAS on an ITX platform and I'd like to know if there are any "cheap" PCIe cards that combine at least a 2.5GbE NIC, NVMe and SATA? I've found one that costs almost as much as the entire build (around $700), and then a bunch from QNAP and Synology, but those only have NVMe. Thanks
I'm sorry, you might be on your own with that one. Wendell from Level1Techs just did a video questioning the usefulness of those cards' existence. He didn't give much information. It sounds like you're almost putting a full NUC inside your server. Seems like a very interesting project nonetheless.
I have a video mostly edited; the delay is that I don't have a motherboard that bifurcates 4x4x4x4. I think I'll have to put that aside. The cards here are a thinking-outside-the-box solution. I have a pair of PCIe cards that work really well - high bandwidth, working out of the box. I'll be building a new server based on them.
why bother with slots for weird things intended for laptops, just solder the flash chips directly to full-height, full-length cards. oh, and just make all of it do parallel DMA instead of all this weird serial nonsense and pretending to the OS to be some SATA drive of a past long gone
lol … the photo has been retouched.. 😂 … and in the Chinese way … i.e. suuuper poorly. so the conclusion is: you're totally useless and know nothing about the cards you're flashing .. ordinary click-bait then?
For the first card, under the heatsink there's a PCIe switch chip that takes in four PCIe 3.0 lanes and creates six PCIe 2.0 lanes (max 500 MB/s each). Then you get six ASM1064 chips connected at PCIe 2.0 even though they could do PCIe 3.0, because the switch chip can't. So each group of four ports is capped below 500 MB/s, since there's also PCIe overhead. For the second card, bifurcation support is required, and not all motherboards support it.
This. One of the ASM1064s is also under that heatsink, and the manufacturer also makes multiple variants on the same PCB which omit varying numbers of the 1064s and their connected ports.
I'd be interested in a follow-up on this, and possibly identification of the switch (to confirm the suspected PCIe 2.0 limitation). Despite being SATA-only rather than SAS, this could actually be a decent option for an unRAID NAS with surplus drives.
@@michaeldcullen the follow-up video, recently uploaded, doesn't contain much info - so no, he did not get into identifying the PLX chip - all he said was "plugged it in - and no problems under Windows", and he showed some benchmarks
as for the card: i found two listings - one on Amazon and one on AliExpress - and when you factor in the additional import taxes you pay on top when ordering from AliExpress vs getting it from Amazon, it comes out rather even
the card uses an ASM1812 PCIe 2.0 x4 as the PLX-style switch
there are "cheaper" versions which don't use a PLX at all but rather SATA port multipliers - the catch with these is: if a port downstream has issues, the entire upstream port blocks
I have a PCIe 3.0 x1 to 20x SATA card - it uses only one ASM1064 with 4 JMB575 port multipliers - it does get the job done, but with the aforementioned flaw
if you're really looking for a useful card, invest in a proper SAS controller like an LSI one - it's way more reliable
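For a back-of-envelope feel for the numbers in this thread, here's a rough sketch assuming the topology described above (ASM1812 PCIe 2.0 x4 upstream, six ASM1064 controllers each on an x1 downstream link, four SATA ports per controller); the overhead factor is a guess, not a measurement:

```python
# Back-of-envelope bandwidth for the 24-port card discussed above.
# Assumed topology (from the thread): ASM1812 PCIe 2.0 x4 upstream switch,
# six ASM1064 controllers each on a PCIe 2.0 x1 downstream link,
# four SATA ports per controller. The overhead factor is a rough guess.

PCIE2_LANE_MBPS = 500        # PCIe 2.0: 5 GT/s with 8b/10b -> ~500 MB/s/lane
OVERHEAD = 0.80              # assume ~20% lost to PCIe packet overhead

card_total = 4 * PCIE2_LANE_MBPS * OVERHEAD   # whole card, x4 upstream
per_group = 1 * PCIE2_LANE_MBPS * OVERHEAD    # each ASM1064, x1 link
per_port = per_group / 4                      # all 4 ports busy at once

print(f"card total: ~{card_total:.0f} MB/s")
print(f"per 4-port group: ~{per_group:.0f} MB/s")
print(f"per port, all busy: ~{per_port:.0f} MB/s")
```

So even with optimistic overhead, four drives hammering one group of ports only get ~100 MB/s each - fine for spinning rust, hopeless for SATA SSDs in parallel.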
Those M.2 to 5x SATA breakout adapters might come in pretty nifty nowadays even beyond just NAS builds, seeing as even solidly expensive midrange desktop motherboards feature more and more M.2 ports and fewer and fewer SATA ports.
They are nifty indeed. You can plug them into Thunderbolt NVMe adapters and use them with laptops, NUCs, etc. I have a couple different ones (an 88se9230 and a JMS585) and was surprised to find that they even work with my Apple M1 Air. That was just a try for fun, though. I can't comment on long term reliability.
@@beauslim oh damn, I never thought they'd work externally - that's potentially amazing for building a dedicated external drive box out of an old PC case.
@@curvingfyre6810 That was my first thought, too. The m.2 cards are kind of flimsy, though. If you aren't trying to build something really small (ie for 2.5" drives) you might be better off using an m.2 to PCIe-slot adapter in the TB to NVMe device and a normal PCIe controller.
@Michaels Carport It's gonna be a pretty long time before M.2 is competitive for serious storage. We may even see a new SATA revision in that time. Honestly, I'm expecting M.2 and SATA to remain entirely separate storage use cases/form factors. I expect something other than M.2 to come along and replace SATA. M.2 is specialized for boot drives - small storage fixed to the board for reliability, portability, or space. I fully expect the separate drive form factor to continue, perhaps through internal USB-C, since that's so absurdly fast these days and won't be quite as resource-demanding as direct PCIe for mass storage, plus it takes care of power.
@@curvingfyre6810 while not a typical solution for a home pc, pcie m.2 cards are readily available allowing you to use four to eight m.2 drives in one pcie slot. They are rarely used due to bandwidth limitations when going through the chipset as consumer grade CPUs usually have only 1 x16 slot directly connected to the cpu. The cards are fairly common with servers or threadripper platform that have dozens of lanes to the slots.
Really interesting stuff - never thought I'd ever see an expansion card with 24 SATA ports 🤯
Retail consumer mainboards usually bifurcate x8/x8 or x8/x4/x4. With a 4x M.2 riser you get 3 working M.2 slots (installed in the 1st, 3rd, and 4th positions on the adapter)... and that's only the beginning. M.2 to SATA adapters with 5 SATA ports usually use the JMB585 (if I remember correctly) chipset, and that chipset is really old. It wouldn't surprise me if it's x1 or x2 PCIe gen 2.
M.2 to SATA adapters with 6 SATA ports usually use the ASM1166 (much more modern) chipset, which has an x2 PCIe gen 3 link (16 Gb/s for one M.2 adapter).
Sooo, on a normal mainboard you get 3x 6 SATA ports. Every cluster of 6 SATA ports shares a bandwidth of 16 Gb/s (approx. 1.6 GB/s). That is sufficient for six (new) mechanical hard drives or 3 SSDs.
Act accordingly.
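A rough sketch of that shared-bandwidth math, assuming the ASM1166's PCIe 3.0 x2 uplink (the per-lane figure already includes 128b/130b encoding, so the real ceiling is nearer 2 GB/s than 1.6):

```python
# Shared-bandwidth math for one 6-port ASM1166 cluster (PCIe 3.0 x2 assumed).
PCIE3_LANE_MBPS = 985              # PCIe 3.0: 8 GT/s, 128b/130b encoding
cluster = 2 * PCIE3_LANE_MBPS      # ~1970 MB/s shared by all 6 SATA ports

for drives, kind in [(6, "new HDDs"), (3, "SATA SSDs")]:
    print(f"{drives} {kind} active: ~{cluster / drives:.0f} MB/s per drive")
```

With all six HDDs busy each still gets ~330 MB/s of headroom, more than a spinning drive can deliver, which is why the cluster is "sufficient" as stated above.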
That first card is likely very slow per port, and intended for Chia mining. Those cards use a main SATA chip whose inputs are routed to port multiplier chips to give you all those SATA connectors.
I'll have to test this. I'm thinking 4 Samsung SATA 870 EVO SSDs should give me a good indication. If I can get close to a 2 GB/s transfer rate, I'd say it's good enough.
On this card they did it another way: under the heatsink is a PCIe switch chip that is connected to the slot using an x4 link and has six x1 links on the other side connecting to the six SATA host controllers.
@@SwitchingPower Interesting. You'd have to be careful about selecting ports when building arrays, but that actually doesn't sound terrible.
You claim that the traces end, and that it looks like the card is wired for only x4 PCIe.
This is incorrect. The traces do not end. If you look closer, you'll see those traces have holes at the end. These are what's known as vias - connections running down to lower layers of the PCB. So it is very likely that the card is actually wired for x16, not x4. The traces just go elsewhere in the board...
@@SwitchingPower As said in my other comment, those traces do not end. They end in vias - the other PCIe lanes go down to another layer of the board - so it is very possible that it is running x16 and not x4.
If you could get 1 GB/s at any of the ports at any time, it would still be a good deal for enabling that much storage in one PC. Even if it only allows access to one port at a time, it's still probably faster with SSDs than that many HDDs in RAID 0 - unless the setup ends up being slow at switching ports.
Just found your channel. Interesting content, and today you sir have gained a subscriber. Keep up the good work!
There are so many of these cards that have been tempting buys to upgrade my +2 controller.
You forgot to mention that the PCIe x16 to 4x M.2 cards require a full x16 slot AND the BIOS needs to support PCIe bifurcation. If not, they will only detect the device in the first M.2 slot.
Thank you for posting this video 🎉. You have a new subscriber. Keep up the great 👍 work..
Thank you very much.
Very good and informative video. Please keep up the good work.
For the second card you would need bifurcation support on the CPU and the motherboard!
Correct, only the first card doesn't need bifurcation.
Just discovered your channel. Subscribed. Can't wait to see what you'll put out next! Tech channels are awesome.
Wow interesting build! I’m curious how well it renders & the blender 3d test is another benchmark you can easily get. Subscribed
Ordered the second one on Amazon, can’t wait to test this xD
Thank you very much. I need to ask: if we turn off the computer directly from the power button, or if the electricity suddenly cuts off in the house, can such PCIe SATA expansion cards lead to burning out or damaging the HDDs connected to them?
What about the speed - how fast are those SATA boards when drives are plugged into all the SATA ports? And how do they perform compared to a cheap server host bus adapter from eBay? I think the performance will not be very good against genuine server components at the same price. + check for random read/write errors :D
You definitely have a point. The only system I've got with enough sata drives is my TrueNAS. I'll have to try it on that.
Currently run an LSI HBA and HP port expander with 31 drives on unRAID. No issues, and all-in it cost well under $150 for both cards and the cables needed. The expander probably isn't the way to go if you are going ZFS, but for unRAID it works just fine, as it is rare that more than a few drives are being used at once. For $50, though, the 8 drives you get with a SAS2 HBA are plenty fast.
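For comparison, a rough sketch of the ceiling on that HBA-plus-expander approach. The 4-lane SAS2 uplink and PCIe 2.0 x8 host link are assumptions about a typical LSI SAS2-era card, not measured figures:

```python
# Ceiling math for an LSI SAS2 HBA + expander setup with 31 drives.
# Assumptions: expander uplinked over one 4-lane SAS2 wide port,
# HBA on a PCIe 2.0 x8 host link (typical for that generation of card).
SAS2_LANE_MBPS = 600              # 6 Gb/s with 8b/10b -> 600 MB/s per lane
uplink = 4 * SAS2_LANE_MBPS       # 2400 MB/s through the wide port
host = 8 * 500                    # PCIe 2.0 x8 -> ~4000 MB/s raw

drives = 31
bottleneck = min(uplink, host)
print(f"all {drives} drives at once: ~{bottleneck / drives:.0f} MB/s each")
# unRAID rarely touches more than a few drives at a time, so in practice
# each active drive usually gets its full link speed.
```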
What I want is a PCIe card that is a RAM disk, one you can plug DDR3, 4, or 5 sticks into. With a battery backup on the card, of course.
They used to make RAM disks on expansion cards, but no more. At least I haven't been able to find one... anywhere.
Right now I use system RAM and a free util called ImDisk.
@@choppergirl interesting
@@ProjectSmithTech When you want the fastest random access drive possible... like for your current video editing project files... you want a RAM disk. Think a Gen 5 NVMe is fast? Ha!
@@choppergirl yeap, that's what came to mind instantly. I'll definitely look into it.
@choppergirl I might have to go AMD epyc for the PCIe lanes. I'll do some research
@@ProjectSmithTech Once you see the prices of Epyc, you'll change your mind. I bought the most powerful Ryzen chip I could, a 7950X, and all the other parts sniped on eBay. Needed the cores for video rendering, not the 3D V-Cache.
I never scrub, so all the video editors touting Intel 13th & 14th gen... it didn't seem like Windows 10 and my $40 old Sony Movie Studio video editing software would be able to take advantage of such an L-shaped performance-and-efficiency-core processor. Just give me equal cores all across the board; the Windows task scheduler has enough trouble spreading the load as it is...
Now as Intel CPUs are cratering, all those DaVinci / Adobe Premiere video editors with their Intel 13th/14th gen chips are regretting their purchases :-/
I like to say, like the Millennium Falcon, my computer Snow White - which I built while blind - ain't the fastest or the most expensive, but she's got it where it counts... like a Shelby Cobra replicar... :-)
That's great if you happen to have 20 hard disks very near the card. Not impossible - I guess you could rig up a server somewhere.
Why not pick up an LSI SAS HBA? You can get those with an actual 24 ports. Compared to the cost of the drives it is quite affordable.
Plus you can get cheap sas expanders if you need more than 24.
I think these cards are neat, but I really don't understand who is buying them. An LSI SAS2 HBA can be had for about $50 and a 28-port expander card goes for about another $50. Add in some cables and whatnot and you're running 32 ports for well under $150. Currently running this setup in my unRAID box with 31 drives of various sorts.
I just checked some of those cards out. Clearly I didn't do my homework - thanks for the information.
For those doing an ITX NAS, they may need to reserve the PCIe slot for other uses - a GPU or a 10GbE NIC. In that case, using an M.2 HBA might be an option.
@@eustache_dauger I mean sure… and I know some people seem to like the absurd idea of using tiny cases, restricting air flow, and spending far more money than they need to, but I don’t see anyone actually trying to build a reliable storage server actively choosing to use a board that costs more money and has less capabilities where the board size isn’t going to matter in a case that has room for a bunch of drives.
24-port PCIe SATA card: how about 24 DVD or Blu-ray drives? You would have to use them all at once, and the speed should not be an issue.
Interesting. So a Blu-ray's read rate when watching a movie is about 54 Mbit/s. At full speed a drive can read about 400 Mbit/s.
If you watch my second video in this series, depending on where you plug in the SATA port you get about 300 to 350 MB/s per port with six plugged in at the same time. So it could work.
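A quick sanity check on those numbers, assuming ~54 Mbit/s per playback stream (the BD-ROM maximum video bitrate):

```python
# Total throughput if 24 Blu-ray drives each stream a movie at once.
# Assumes ~54 Mbit/s per stream (the BD-ROM maximum video bitrate).
drives = 24
stream_mbit = 54
total_mb_s = drives * stream_mbit / 8      # Mbit/s -> MB/s
print(f"{drives} simultaneous playbacks: ~{total_mb_s:.0f} MB/s total")
```

That's ~162 MB/s across the whole card - less than a single port's measured throughput - so playback bandwidth really isn't the concern here.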
Question: How many PCIe lanes does an M.2 need? I have seen 1x M.2 to PCIe adapters that vary between PCIe x1 and PCIe x4. Can an M.2 card run at full/nearly full speed on a PCIe x1 interface, or are there not enough traces?
If NVMe drives comply with the PCIe standard (which they do - NVMe is a protocol defined to run over PCIe), then the drives would run fine on 1 or 2 lanes, just at 1/4 and 1/2 of the x4 speed respectively.
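A rough sketch of how the sequential ceiling scales with link width (per-lane figures already include line-code overhead; real drives fall somewhat short of these):

```python
# Approximate sequential ceilings per link width for an NVMe drive
# (per-lane figures already include line-code overhead).
per_lane = {"PCIe 3.0": 985, "PCIe 4.0": 1970}     # MB/s per lane

for gen, mbps in per_lane.items():
    for width in (1, 2, 4):
        print(f"{gen} x{width}: ~{mbps * width / 1000:.2f} GB/s max")
```

So a Gen 3 drive on an x1 adapter tops out near 1 GB/s sequential - well below full speed, but random I/O, where lane count matters less, barely suffers.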
@@TunsaMcHaggis Thanks for your help!
Find me a RAM drive expansion card... it's a PCIe card you put sticks of RAM on and use like a super-fast RAM drive.
They used to exist back in the day of the floppy; they don't any more... because, well... NVMe drives.
I use ImDisk instead... but it eats up system RAM, and doesn't survive system crashes.
@airwarorg It'd have to be backed up by a battery if you're going to reset your system. I haven't seen one of those PCIe cards since the DDR2 days. Unfortunately I can't help you here. The best I can suggest is a Gen 5 M.2 drive by Crucial, or persistent memory made by Intel - all of these options are very expensive, and that's if you can find a motherboard that supports the technology. Also consider the Intel Optane DC P4800X for a low-latency experience - again, very expensive.
The parallel performance is lower than the speed of one single drive (at least with the ASMedia chip).
First time I've ever felt motion sickness watching a YouTube video. I guess that's something... ShaKy CloSeUpS!
I actually get that a lot
People have complained about my other videos too. I do have a tripod - I may need to use it more.
Thanks for the feedback.
I bought a 4x M.2 SSD expansion card (like the one shown here) but I only get ONE M.2 SSD working at a time. I have 3 SSDs; they all work individually, but when they are on the card only one shows up. Any ideas?
You need to have a motherboard that supports bifurcation. 4x4x4x4.
@ProjectSmithTech Thanks, but how do I know? There is no information whatsoever. Which SSDs worked? I could buy a couple of the same if necessary.
@@ProjectSmithTech the motherboard is an Asus x570 PRO.
@balancedaustralia You might be very lucky with your motherboard. Go into your BIOS (hit the Delete key as soon as you turn on your computer), go to Advanced (the Advanced tab displayed at the top of your screen), select Advanced Onboard Device Configuration, and for the PCIe slot you have inserted the M.2 card into, select 4x4x4x4. Let me know how it goes. Here's a link: forum.level1techs.com/t/asus-pro-ws-x570-ace-can-now-bifurcate-the-third-pcie-slot/194220
Ok i like those m.2 expansions and m.2 5x sata expanders
I saw the first card and immediately let out a "what the fuck" at that amount of sata ports
Can't you get a stack of cheap 100GB drives and test the 24-port thing out? I imagine there would be views in that.
Are those supported by TrueNAS though? I don't think they will be compatible with FreeBSD at least... maybe TrueNAS Scale? I got an LSI for 10 euros and it supports 16 drives.
An LSI that supports 16 drives - what was the name of it? That's what I'm looking for atm.
I've found some motherboards to be very picky about SATA expander cards, especially newer motherboards.
A card that works fine in one can be problematic in another. Not sure exactly why.
It's a pain, I've found in my experience that X99 motherboards are the most tolerant.
@@ProjectSmithTech I think it's some of the chipsets that are the issue. But that's really just a theory at this point.
@@ProjectSmithTech That's funny - I have an X99 mobo and love it, and my GPU just started to get wonky. Editing setup with a 5K monitor.
1:14 Except when (regularly, as you should) parity checking... Or rebuilding. Then you wish you bought something better. Though if you already lack CPU power, it might not be a (single) issue.
Very interesting... I've seen a few products like these too, and I've wondered... Do the M.2 4-way bifurcation boards work for non-storage hardware? Or is there something about how they're configured that makes them exclusively suited to storage?
I've been wondering about using one of them to connect a bunch of GPUs for an AI cluster, but I can't get a straight answer as to whether or not they'll work.
Wow, that sounds like an exciting project. If SSDs/HDDs work, then there's no reason why, on a hardware level, your motherboard shouldn't detect the GPUs.
I could be wrong.
@@ProjectSmithTech That's exactly what I thought. But I've found that when bifurcating and screwing around with PCIe connectivity, it's very easy to get a no-boot condition for no apparent reason.
Maybe the moon is in the wrong place. Maybe your motherboard BIOS doesn't do bifurcation. Maybe you talked smack about its mother. Maybe there's no Above 4G Decoding. Maybe it just doesn't feel like it today.
@Derranged Gadgeteer this video received far more interest than I anticipated. I'll have to test it out on the X99 boards I have. I have two M.2 drives that I can spare.
@@ProjectSmithTech fantastic! I'll be keeping an eye out for it!
I'm having trouble finding bifurcation support except in LGA 2011 boards, X79 or X99. The cheapest way to get GPUs in a cluster is a 5 slot x16 (x8 electrical) mining board (with processor included) for around $35. The BIOS is limited to 2 cores, I assume to keep the TDP low. I was going to try and push it with a 2643v2 on 4 cores 😆 after unlocking the BIOS setting if possible.
I am going to try a $10-$15 "6/8 GPU mining rack" for a case, one of the 500mm wide ones.
For RAM you can add any single stick of DDR3, but I found a 32GB 1866MT/s PC3-14900L for about $15.
I will be using it to try to bifurcate onto M.2 drives for cache on a TrueNAS box. It has a single SATA port and an mSATA, but I have an mSATA-to-SATA breakout.
The quad-M.2 x16 cards are annoying because they are cheaper than the dual-M.2 boards ($20 vs $35). But the 5-slot mining motherboard would only be using 2 of them.
The transition at 6:39 is way too good LMAO.
Great video!
Thank you.
If I were to get 4 Optane B+M-keyed sticks for real cheap, would this be compatible with them?
You'll need a motherboard that supports both 4x4x4x4 bifurcation and Intel Optane.
@@ProjectSmithTech It looks like, sadly, my B550 AORUS PRO (AC) does not mention bifurcation in the manual. I guess I can do my M.2 boot drive, primary GPU, 1 straight Optane stick, and 3 cards for adapting x4 to B-key, which will occupy all my slots and give me four 16GB sticks to fool with, using PrimoCache for Windows or bcache for Linux.
@@ProjectSmithTech Looking into it further, it seems that the 16GB B+M-keyed sticks only use two PCIe lanes each? If that is the case, someone with the support for it could do eight 2x-lane splits, it would seem.
@BloodAsp Unlikely, due to the physical real estate needed by the drives and the four-lane PCIe slot. I highly doubt bifurcation will divide that way.
The main problem with that is the power supply!
I got this as a recommendation and indeed it's very informative, thank you. I'd like to ask: I'm building a home server/NAS on an ITX platform, and I'd like to know if there are any "cheap" PCIe cards that combine at least a 2.5GbE NIC, NVMe, and SATA? I've found one that costs almost as much as the entire build (around $700), and then a bunch from QNAP and Synology, but those only have NVMe. Thanks.
I'm sorry, you might be on your own with that one. Wendell from Level1Techs just did a video questioning the usefulness of those cards' existence, but he didn't give much information. It sounds like you're almost putting a full NUC inside your server. Seems like a very interesting project nonetheless.
PCIe x4 is good enough for mechanical drives. I think?
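A back-of-the-envelope check supports this; a quick sketch in Python, assuming roughly 260 MB/s sustained throughput for one mechanical drive (the per-lane figures are approximate and account for 8b/10b and 128b/130b link encoding overhead):

```python
# Approximate usable PCIe bandwidth per lane in MB/s, after link encoding
# overhead (8b/10b for gens 1-2, 128b/130b for gens 3-4).
PER_LANE_MB_S = {1: 250, 2: 500, 3: 985, 4: 1969}

HDD_MB_S = 260  # assumed sustained transfer rate of one 7200 rpm HDD
LANES = 4

for gen, per_lane in PER_LANE_MB_S.items():
    total = per_lane * LANES
    # How many spinners could run flat-out before the link saturates
    print(f"PCIe {gen}.0 x{LANES}: ~{total} MB/s, ~{total // HDD_MB_S} HDDs at full speed")
```

Even PCIe 2.0 x4 (~2000 MB/s usable) covers a handful of spinners at full tilt; on the 24-port card the pinch point is the switch's uplink, not the drives themselves.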
I've been looking for an nvme expansion card for my nas, this helps
Will Intel Rapid Storage Technology support these devices?
The M.2 expansion cards depend on the motherboard; you will need BIOS bifurcation support. As for the SATA card, I'm tipping not.
Why not use something like HWinfo to find out how many lanes the cards use, and at what PCIe gen?
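HWinfo is Windows-only; on Linux, `sudo lspci -vv` reports each device's negotiated link under `LnkSta` (and its maximum capability under `LnkCap`). A minimal parser sketch in Python, assuming the usual lspci output format (the sample line below is hypothetical):

```python
import re

def parse_lnksta(line: str):
    """Pull the negotiated speed (GT/s) and lane width out of an lspci LnkSta line."""
    m = re.search(r"Speed ([\d.]+)GT/s.*Width x(\d+)", line)
    return (float(m.group(1)), int(m.group(2))) if m else None

# Example line, in the shape `sudo lspci -vv` prints for a PCIe 2.0 x2 link:
sample = "LnkSta: Speed 5GT/s (downgraded), Width x2 (downgraded)"
print(parse_lnksta(sample))  # -> (5.0, 2), i.e. gen 2 speed on two lanes
```

As a rough key: 2.5 GT/s is gen 1, 5 GT/s gen 2, 8 GT/s gen 3, 16 GT/s gen 4; a "(downgraded)" flag means the device negotiated below what `LnkCap` says it can do.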
I'll revisit this later with some SSDs and M.2 drives.
Welcome to Taobao, my friend 😂
Good Video. Keep it up!
How did they perform 8 months later?
I have a video mostly edited; the delay is that I don't have a motherboard that bifurcates 4x4x4x4. I think I'll have to put that aside. The cards here are a thinking-outside-the-box solution. I have a pair of PCIe cards that work really well: high bandwidth, working out of the box. I'll be building a new server based on them.
Guys, is there a motherboard product that supports 2 FM2+ CPUs??
Or just get a cheap i24 HBA (IT mode) card from eBay; lower power and more bandwidth.
@@omid4861 I'm going to be honest, this is the first time I've even considered the product. I think you're onto something.
Can you do a performance test of this NVMe card (with 4x NVMe)?
I wish I had the M.2 drives to spare, but unfortunately they're spread out over my other systems.
@@ProjectSmithTech ok
Interesting content. Subbed.
Does anyone have experience with packet-switch cards for 2 or 4 NVMe drives? To add 2 more drives on an APU with x8 lanes without bifurcation support.
Any updates on these cards?
It's going to take a little while.
If you use one of them, keep in mind it will slow down your GPU.
Not with a Xeon. They have 40 PCIe lanes.
They spelled SUS wrong on the SATA card.
Run DOOM?
I am like number 282
Why bother with slots for weird things intended for laptops? Just solder the flash chips directly to full-height, full-length cards. Oh, and just make all of it do parallel DMA instead of all this weird serial nonsense and "pretending to be some SATA drive of a past long gone" to the OS.
:O
Lol... the photo has been retouched 😂, and very poorly at that. So the conclusion is: you're totally useless and know nothing about the cards you're flashing. Ordinary clickbait then?
You're talking too much; useless talk.