The storage features are certainly an issue. As the author of mergerfs, I've been in talks with them since they used mergerfs for the "merge storage" feature in CasaOS, but I think they are a bit overwhelmed with everything and spread a bit thin. Their wrapper around mergerfs, for instance, is barely usable. The UX and UI, however, are slowly coming together, and they have been receptive to feedback. We'll see.
- If you're thinking of the N100, I really recommend you stick with a smaller, more off-the-shelf solution like a basic Synology for a better experience. If you fill this thing up with drives you will want to be running ZFS / RAID, and this thing might not be able to keep up.
- Most traditional NAS devices you can buy in a sleek little package tend to be quite expensive. This definitely gives you flexibility and a lot of networking, and the i5 version can probably run some homelab stuff on it no problem (such as Plex) while also serving as a pretty fast NAS with all the PCIe and M.2 slots.
- As a most likely relatively niche and low-volume product, you're not getting as much value for money. If you build something yourself you can blow this thing out of the water, but that's not what this product is about.
Some lights-out remote management on the Pro version would be really nice
Yay ZimaCube. I’m a backer and excited for this.
Thanks for the video, I think I'll stick with Synology. But I believe Zima can become a good competitor, which will stimulate the market and move NAS technology forward.
Unfortunately no ECC support. Otherwise the Pro model has really nice specs. Do you know what GPU size fits in this case? They sell it with the RTX A2000, but is that upgradeable or already the maximum size?
Thank you for this great review.
@christianlempa did you try a virtualized TrueNAS?
Have to say that the video does a good job showing the pros and cons of the ZimaCube. Unfortunately, when looking at it from a distance and taking out the emotion, there is nothing left but a simple small board with an Intel laptop CPU on it and some memory, in a nice case. Looking at the price and power usage, both are very much on the high side. The cheapest version is $499 for the N100 one, and this does not include the quad M.2 adapter shown in the video. If I compare this to, for example, a Topton N100 PC with 16GB DDR4, 4x 2.5Gbit NICs and a 256GB SSD, those go for $230 (barebones it is $164). Not entirely the same of course, but just an indication that the price for this solution is more than double. From people using those systems I've seen reports of idle power usage in the 10-15W range, so power usage for this system is also fairly high.
Jonsbo N2 or N3. Put in your own hardware spec, install TrueNAS Scale, and be satisfied 😊
Unrelated but can we get a tutorial on the desktop background you’re using?? I recognize cmatrix but how did you get it to continuously run?
I'm unsure about the N100 processor version, especially at this price point. The price for the 10-core i5 version seems a little high on the Kickstarter as well, especially when considering the price points of the ZimaBoard and ZimaBlade servers, but I'm definitely interested all the same. For now, though, I think my trusty Gen8 ProLiant server with 12 drive bays will continue to do its job even if it costs much more in power. It's hard to argue with a secondhand server with 10x the processing power and memory capacity, and twice the disk capacity of one of these, at a third of the cost...
Same with a ProLiant DL380 Gen9
Cost is too high, much like 45Drives' "homelab" offering. They're targeting a market of IT people with a lot of disposable income and think we're dumb and will pony up. I can build a better offering for cheaper.
Appreciate you not just glossing over your critiques of the product, especially considering their involvement. Good on ya!
I am surprised that they didn't take advantage of the ZimaBoard to the point where they made something like this as a docking station, where you could upgrade the ZimaBoard later on and maintain the 'Cube' format... if I'm making sense.
Seen this unit "reviewed" and reviewed by several YouTubers now. My takeaway:
The soldered case fans running at full speed and the artificial single-channel memory make this a no-go for me. Sticking with my Haswell box longer. Curious about the i5 version.
Check out the Pro version, it should have more memory slots and shouldn’t have the case fans
Even at MSRP this isn't bad if you want something 'plug and play', but as someone who has been building my own NAS for a while now, I could roll my own and get anywhere from 8 to 16 disks and 4+ NVMe. It wouldn't be N100/i5-1235U levels of low power, and to match the feature set it would probably cost a bit more, but a 65W TDP CPU (before disks) is acceptable for me for now. If I were looking at a 4-disk Synology or something along those lines I'd probably think about one of these, but I think an N300 option would have been a nice addition to the lineup between the base and the Pro.
A busy N100 will get hot, so the fan is a good idea, but it's kind of puzzling that the N100 only comes with 8GB of memory; 16-32GB seems a no-brainer.
The power consumption coverage was a bit of a wasted opportunity. Can you run some more tests - does it support ASPM? Type in the command and show what it reports. Then also run powertop. I expect some of the extra chips they are using are keeping the CPU from reaching the deeper C-states.
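(For anyone wanting to try this themselves: a minimal Linux sketch for checking the ASPM policy and C-state residency could look like the following; the sysfs paths are the standard ones, and it assumes lspci is installed.)

```python
#!/usr/bin/env python3
"""Rough check of PCIe ASPM policy and CPU C-state residency on Linux.
Assumes the standard sysfs layout and that lspci is available; run powertop separately."""
import glob
import subprocess
from pathlib import Path

# Current ASPM policy, e.g. "[default] performance powersave powersupersave"
policy = Path("/sys/module/pcie_aspm/parameters/policy")
if policy.exists():
    print("ASPM policy:", policy.read_text().strip())

# Per-device ASPM status as reported by lspci (root gives fuller detail)
lspci = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
for line in lspci.splitlines():
    if "ASPM" in line:
        print(line.strip())

# C-state residency for CPU0: deeper states (C6/C8/C10) should accumulate time at idle
for state in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
    name = Path(state, "name").read_text().strip()
    usec = int(Path(state, "time").read_text())
    print(f"{name}: {usec / 1e6:.1f} s residency")
```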
I'm waiting for the final release because I assume it will have some differences
At that price an N300 would have made more sense. The N100 is only good for pfSense and similar.
To me the ZimaCube classic is nonsense hardware.
Given some of the decisions made with the board layout, connectors (or lack thereof), CPU choices, and price point, I think I'd rather just have the chassis, the backplane with a standard cable, the drive sleds, and a pico PSU or whatever equivalent option is available using that 220W brick, and then just spec my own board, CPU, and RAM.
Do these have Intel NICs?
Did you see the many questions about the PCIe lanes available for the M.2 NVMe and PCIe slots on the board? NVMe performance is unknown. They use PCIe switch chips on the motherboard. Neither CPU has enough PCIe lanes for all the M.2 and PCIe slots plus networking.
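(A rough lane-budget illustration of that concern; the 9-lane figure for the N100 is Intel's spec, while the per-slot widths below are assumptions about the ZimaCube layout, not confirmed numbers.)

```python
# Back-of-the-envelope PCIe lane budget for the N100 board.
# Slot widths are assumptions; the switch chip has to share the CPU's lanes.
cpu_lanes = 9  # Intel N100 provides 9 PCIe 3.0 lanes in total

wanted = {
    "4x M.2 NVMe (x4 each)": 4 * 4,
    "PCIe expansion slot (x4, assumed)": 4,
    "2x 2.5GbE NIC (x1 each)": 2,
    "SATA controller for the 6 bays (x1, assumed)": 1,
}

total = sum(wanted.values())
print(f"Lanes the slots could use: {total}, lanes the CPU offers: {cpu_lanes}")
print(f"Shortfall: {total - cpu_lanes} -> a PCIe switch must multiplex them, "
      "so simultaneous NVMe throughput is capped by the uplink width")
```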
When I saw the first product with the "Zima" logo, my first thought was: why this name?
EXPLANATION:
In Polish, the word "Zima" simply means - winter
They need to make this support 8 SATA 3.5" drives, as six is a stupid number. It's more than you should do RAID 5 on, and yet it's not the sweet spot for RAID 6. The quad NVMe is a nice touch, but most of us just need dual NVMe on the motherboard so that we can RAID 1 the operating system (NVMe caching is nice if you are video editing, I guess). If they brought the Pro out with 8 SATA 3.5" bays instead of 6, I have literally four spots where I would use these to replace aging hardware. As it stands I can't buy it; it doesn't support enough drives.
I don't know much about RAID configurations. Why is a six-bay not good for RAID 5 or 6, and does that still matter if you put in only 4 drives? I would like to add 4 drives and two caching SSDs at first, and if I need more capacity I want to be able to add 2 more drives later.
@@Tri-Technology It depends how big the drives you are putting in are. Right now the sweet spot is 18-20TB drives, so if you put six of them in there you are looking at a 12-40 hour resilver, which is a long time for 90-100TB to be avoiding complete loss by depending on 5 other drives you bought at the same time, possibly from the same lot, with the same amount of wear on them as the one drive that just failed. In general, 20-25% parity is the sweet spot for likelihood of recovery vs cost, and you want 2 drives (or maybe more) to be able to fail as you go larger, because the probability of failure of your drives is not independent, it's correlated (hopefully not so highly that you lose all your data).
At six slots you can't hit the price/protection sweet spot for RAID 5 or RAID 6; you will be underprotected or overprotected. Plus, adding those two extra bays would only marginally increase the cost of this thing, as it appears the controller can support 2 more SATA drives (one of the pre-release reviewers showed the motherboard and it had 2 more SATA ports right on it).
Your caching situation actually makes sense for this setup, but I have to ask: why not just use way faster NVMe drives? This thing apparently supports 4 of them in addition to the two on the motherboard for the operating system.
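(As a rough sanity check on the resilver and parity numbers quoted above; the rebuild speeds are guesses for a sequential rebuild, not measurements from this hardware.)

```python
# Rough resilver-time estimate for one failed drive in a parity array.
# Assumed average rebuild throughputs; real arrays vary with load and fragmentation.
drive_tb = 20
drive_bytes = drive_tb * 10**12          # drives are sold in decimal terabytes

for speed_mb_s in (150, 200, 250):       # assumed average rebuild speed
    hours = drive_bytes / (speed_mb_s * 10**6) / 3600
    print(f"{drive_tb} TB at {speed_mb_s} MB/s -> ~{hours:.0f} h resilver")

# Parity overhead: 6-bay RAID 5 sits under the 20-25% sweet spot, 6-bay RAID 6 over it,
# while an 8-bay RAID 6 lands right on it - the commenter's argument for 8 bays.
for data, parity in ((5, 1), (4, 2), (6, 2)):
    pct = parity / (data + parity) * 100
    print(f"{data} data + {parity} parity -> {pct:.0f}% parity overhead")
```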
@@barfnelson5967 I understand now why 6 drives are suboptimal. I have now bought 4x 16TB drives and want to start with one as a parity drive. I tried to make sure they are not from the same lot, as I always ordered only 2 at a time, from different suppliers, with 2 months in between. I think I won't need more than 48 TB of storage for the next 7-10 years, so I probably won't need to upgrade. The most critical data I will additionally save on a second NAS that is not in my house, so RAID 5 should be enough in my case, or am I wrong?
For caching, booting and VMs I want to use 1 TB NVMe drives (not SSDs as I said in the first reply).
@@Tri-Technology I would think it's probably fine if you are using 4 or 5 drives. Keep in mind it's not a backup solution, so you should really set up some sort of backup, ideally to a different location, of the stuff that you really can't lose (i.e. maybe don't back up those backups of your DVD movie collection, but you almost certainly want a backup of your family photos and documents). When I said complete loss above, I really meant having to copy everything from the remote backup location again and rebuild the archive of the stuff I am not paying to back up.
@@barfnelson5967 Of course I will set up a backup solution for the most important data. Thank you for your advice!
I have an N100 motherboard (Topton) and I installed 32GB DDR5 (and it has 4x 2.5Gbps), so I don't know what these guys are presenting or what they are installing in their N100.
Oh, and an N100 without a fan can easily get to 90C, so it's good they include the fan
Interesting! I assumed the N100 would work fine with passive cooling since every vendor is doing that, but apparently, IceWhale is wise to put a fan in this thing :D
@@christianlempa It depends on usage: for OPNsense you don't need a fan, but when I put Proxmox on it, it boiled completely
@@zyghom Install Zenarmor (the NGFW add-on) on OPNsense and its CPU will surely sweat. ;-)
Is that a mini PC or just a motherboard? It's a huge pain searching for Topton stuff on AliExpress, but if it's just a motherboard do you have a model number?
@@nadtz mini pc: "12th Gen Intel N100 Firewall Computer N5105 Soft Router 4x 2.5G i226 i225 LAN NVMe Industrial Fanless Mini PC pfSense PVE ESXi"
Hi Christian, I have to admit the thing is really expensive for what's built in and what it can do :/ But thanks for the detailed review
Without ECC memory it is pretty much useless for small businesses.
This is not a review. I understand that they paid for an ad, but this is a Kickstarter project, unclear when and how it will ship, if ever, and definitely too expensive.
In short: big meh.
The reputation of the company makes a huge difference. Also, review boxes are available for influencers and others. So I would say there is only a small risk of the product not shipping. You can contest other elements of the product, but not the ones you mentioned.
Agree, and the fact that they could not give him a proper prototype, only a mixed one, tells you a lot.
Free advertising for Zimacube. Lol.
The huge "Advertisement" in the upper right corner didn't do it for you?
no ecc
If it were 1RU, 10-slot, with 25G and silent, it'd be interesting
Do NOT give any money to kickstarter campaigns. I can't comment on this one, but I, and a lot of other backers have been ripped off in the last 12 months. In my case to the tune of nearly 1K. Kickstarter refuse to do anything about the scam artists that now populate their platform. Be warned.
Don’t listen to this bullshit…
he’s actually right, crowdfunding has become a true scam business/platform as well unfortunately
Too pricey! EVEN with kickstarter prices
2.5 Gb/s? Let me know when it has 10Gb/s or greater.
With port aggregation you can get 10 Gbit/s
@@Tri-Technology Not to a single host you cannot.
@@Spoonuk666 So what does that mean? Only the reading speed can be 10 Gbit/s, but writing to the NAS is limited to 2.5 Gbit/s?
@@Tri-Technology If you have a single host (machine), you won't be able to achieve aggregated throughput of 10Gbps by bonding 4x 2.5Gbps links, as the data to a single host flows at the max speed of one of those interfaces, i.e. 2.5Gbps. If you have four hosts with 2.5Gbps cards each, then each host will also get a maximum of 2.5Gbps, but together all four will saturate 10Gbps as far as the sending device is concerned. Hope that makes sense.
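(A toy sketch of why that happens: link aggregation hashes each host pair onto a single member link, roughly like a layer-2 transmit hash policy; the MAC addresses below are made up.)

```python
# Toy illustration of why LACP-style aggregation doesn't speed up a single host:
# each flow is hashed onto exactly one member link, so one client sticks to one 2.5G port.
LINKS = ["eth0", "eth1", "eth2", "eth3"]  # 4x 2.5 Gbps members of the bond

def pick_link(src_mac: str, dst_mac: str) -> str:
    """Simplified stand-in for a layer-2 transmit hash (XOR of the last MAC octets)."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return LINKS[(src ^ dst) % len(LINKS)]

nas = "aa:bb:cc:dd:ee:00"
clients = [f"aa:bb:cc:dd:ee:{i:02x}" for i in range(1, 5)]

for client in clients:
    # Every frame between the same pair of hosts lands on the same link,
    # so a single client tops out at 2.5 Gbps; four clients can spread across all links.
    print(client, "->", pick_link(nas, client))
```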