Thanks for another great video and honest review!
Thanks!
I did a very, very quick search. SSDs with PCIe 3.0, which would be completely sufficient here, are now quite rare and are no cheaper than PCIe 4.0 drives of the same capacity.
Regarding the confusion between JBOD and RAID 0: in simple terms, JBOD concatenates the drives and writes data sequentially (it fills the first disk, and once that's full it starts writing to the second one, and so on), while RAID 0 breaks files into chunks and spreads one file over all drives. I imagine it takes the chunk index modulo the number of drives to pick which drive gets each chunk - roughly as in the sketch below.
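A minimal sketch of that difference, with made-up chunk size and drive counts; real implementations differ in the details, this only illustrates the fill-then-move-on vs. round-robin idea:

```python
# Illustrative only: how a chunk index maps to a drive under JBOD-style
# linear concatenation vs. RAID 0 striping. All numbers are made up.

CHUNKS_PER_DRIVE = 1000   # pretend each drive holds 1000 chunks
DRIVES = 4                # number of member drives

def jbod_target(chunk_index: int) -> int:
    # Linear/JBOD: fill drive 0 completely, then drive 1, and so on.
    return chunk_index // CHUNKS_PER_DRIVE

def raid0_target(chunk_index: int) -> int:
    # RAID 0: round-robin striping, i.e. the "modulus" idea above.
    return chunk_index % DRIVES

for i in (0, 1, 2, 3, 4, 2500):
    print(f"chunk {i}: JBOD -> drive {jbod_target(i)}, RAID 0 -> drive {raid0_target(i)}")
```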
This thing should be priced at $350 - $450 max.
It's like $550 on AliExpress.
6:13 JBOD doesn't use striping, whereas RAID 0 does.
I'm running my F8 with 48GB of RAM and on Unraid. Works great as a little media server.
What RAM are you using?
@@edwin-janssen Corsair Vengeance CMSX48GX5M1A4800C40
The WORM mode probably uses samba's vfs_worm module.
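If it really is that module, the share configuration would look roughly like this. This is a guess at a plausible setup, not TerraMaster's actual config; the share name, path, and grace period are made up:

```ini
[worm-share]
    ; hypothetical share name and path, for illustration only
    path = /srv/worm
    read only = no
    ; stack Samba's WORM VFS module on this share
    vfs objects = worm
    ; files stay writable for this many seconds after creation,
    ; then become read-only (write once, read many)
    worm:grace_period = 300
```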
What about a cheap PCIe card with an NVMe controller, with TrueNAS in Proxmox doing controller passthrough on an existing server? Isn't that smarter?
Good video as always! Are you sure that md raid5 can lose data with 1 out of say 3 drives failing?
Yes, that's the point of RAID 5: the equivalent of "1 of n" drives holds redundant (parity) information, and that parity block rotates through the drives so no single drive is bottlenecked writing parity. RAID 6 is the same, but with "2 of n" - see the sketch below.
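A toy sketch of both ideas: byte-wise XOR parity, and the parity position rotating from stripe to stripe. The exact rotation order differs between implementations (this is not md's actual layout code), and the chunk contents are arbitrary:

```python
# Toy illustration of RAID 5: one parity chunk per stripe, rotating across drives.
from functools import reduce

DRIVES = 4  # per stripe: DRIVES - 1 data chunks + 1 parity chunk

def parity_drive(stripe: int) -> int:
    # Rotate the parity position so no single drive absorbs every parity write.
    return (DRIVES - 1 - stripe) % DRIVES

def xor_parity(chunks: list) -> bytes:
    # Parity is the byte-wise XOR of the other chunks in the stripe.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

data = [b"\x01\x02", b"\x0f\x00", b"\xff\xaa"]   # three data chunks of one stripe
parity = xor_parity(data)

# Losing any single chunk is recoverable by XOR-ing the survivors:
rebuilt = xor_parity([data[1], data[2], parity])
assert rebuilt == data[0]

for s in range(4):
    print(f"stripe {s}: parity lands on drive {parity_drive(s)}")
```

RAID 6 adds a second, differently computed redundancy chunk per stripe, which is why it survives two simultaneous failures.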
I bought one - put Proxmox on it and set up the drives as Ceph storage. Works nicely for that purpose. I don't think NVMe is all that great for NAS purposes, as you're pretty much flushing its potential down the network drain; even at 10GbE you can saturate the link easily here. Important to note that some of those drives sit behind a PCIe switch, which really was the only design choice they could make given that today's chipsets have jack-all for I/O lanes. Even severely throttled on PCIe lanes, you can still stuff the network pipe completely, so that's not really a problem. But since you're paying multiples for M.2 storage vs. spinning rust, that's where I question the economics of these - I don't think the price is out of line, just that the application is questionable. And honestly, most home networks aren't at 10GbE yet, and those of us who do have that kind of bandwidth aren't going to be satisfied with just one port. And really, my DIY NAS keeps up with this fine using traditional HDDs fronted with an NVMe cache.
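A quick back-of-envelope on that network-vs-storage point, using assumed round numbers (roughly 0.985 GB/s per Gen3 lane, one effective lane per drive behind the switch):

```python
# Rough back-of-envelope with assumed numbers, not measurements.

GBPS_PER_GEN3_LANE = 0.985   # approx. usable GB/s per PCIe 3.0 lane
drives = 8
lanes_per_drive = 1          # assumption: heavily throttled behind the PCIe switch

storage_gbs = drives * lanes_per_drive * GBPS_PER_GEN3_LANE   # ~7.9 GB/s aggregate

for name, gbit in {"2.5GbE": 2.5, "10GbE": 10, "25GbE": 25, "40GbE": 40}.items():
    link_gbs = gbit / 8      # rough GB/s, ignoring protocol overhead
    bound = "network-bound" if link_gbs < storage_gbs else "storage-bound"
    print(f"{name}: link ~{link_gbs:.2f} GB/s vs storage ~{storage_gbs:.1f} GB/s -> {bound}")
```

Even with every drive squeezed to a single Gen3 lane, a 10GbE link (~1.25 GB/s raw) is the clear ceiling.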
In 5 years you will have 10 U.2 NVMe drives totaling a petabyte of storage and you will use it all - for now the DIY all-flash NAS sweet spot is probably 4-6 1TB NVMe drives and 4-6 spinning rust plus fast networking - this was still good content and hints at what is possible - expect the all-NVMe NAS market to grow - no slowdowns
That terramaster logo is AWFULLY similar looking to the Cooler Master logo
We need a CHEAP EDSFF caddy with a PCIe connection directly to the PC... something like ICY DOCK but cheaper.
$800 😂
why is it so darn expensive?
All of these little NAS companies making NVMe NASes get it wrong over and over again. Very frustrating, honestly. You have a minimum of like 10 GB/s available with very commodity NVMe drives, likely 2-3x that with nice drives, and they try to give you basically zero networking. They should have 25/40GbE networking, minimum.
It's not really a ton more expensive to do so.
Any suggestion how to build a 25/40GbE + 8-bay NVMe storage node from just 9 usable PCIe 3.0 lanes?
Closest I can think of is an AM5 build, but size and power consumption would skyrocket.
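For a rough sense of why that's hard, a hedged lane-budget sketch with assumed numbers (x4 Gen3 set aside for the NIC, the rest shared by the drives through a switch):

```python
# Hedged lane-budget sketch, not a build recommendation; all splits are assumptions.

GBPS_PER_GEN3_LANE = 0.985            # approx. usable GB/s per PCIe 3.0 lane

total_lanes = 9                       # usable Gen3 lanes, per the question above
nic_lanes = 4                         # assumption: 25/40GbE NICs usually want x4-x8
storage_lanes = total_lanes - nic_lanes

nic_ceiling = nic_lanes * GBPS_PER_GEN3_LANE          # ~3.9 GB/s
storage_ceiling = storage_lanes * GBPS_PER_GEN3_LANE  # ~4.9 GB/s for all 8 drives

print(f"NIC ceiling:     ~{nic_ceiling:.1f} GB/s on x{nic_lanes} Gen3")
print(f"Storage ceiling: ~{storage_ceiling:.1f} GB/s shared by 8 drives behind a switch")
print(f"25GbE (~3.1 GB/s) fits; 40GbE (~5 GB/s) already exceeds an x{nic_lanes} Gen3 NIC")
```

Either way, the NIC and the drive pool end up roughly matched around 4-5 GB/s, which is about as balanced as 9 lanes can get.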
Do you have any idea how expensive new 25/40GbE... everything... is?
Sure, eBay listings exist for used ex-datacenter parts, like a $19 ConnectX-3... but new, that card costs around $522.
You're not adding a $500 card to a $600-$800 NAS, and no serious manufacturer is going to be salvaging chips.
Never mind the price of switches, new or used. (Don't forget that nearly all 25+ Gb switches require a paid OS subscription to even operate.)
It's the same reason you can get a smartphone for $90 brand new with 8 cores, 8GB RAM, and 256GB of storage, while an Nvidia Shield costs $150 for 4 cores/2GB/16GB or $200 for 4 cores/3GB/16GB.
25G is very expensive. I wouldn't expect more than 2x10G from a prosumer device, even if the drives can outrun it. 40G is also a dead-end standard so there's no reason to build a new device with it.
Realistically, having enough compute power to make use of that NVMe bandwidth locally would be a more cost-effective proposition. Something like an i5/i7 for 8 drives, not an N305.
@@frankwong9486 The low lane counts are exactly why I keep wanting to see higher-capacity, "slow", cheap NVMe drives. Gen3 or Gen4 x2, but ~12-16+TB, and for ~$200-$250.
@apalrdsadventures 40 gets you 10x4; sure, it's a dead end, but it's cheaper than QSFP+.
The point I'm making is that it's severely network bound, not storage bound, which is backwards for a NAS.
I have a TM unit and I'm happy with it, but this looks like junk for the price.
A bit late to the party but better now than never!
Wide angle shot is out of focus.
Bob seems to be in charge of camera operations.