I was really hoping to see performance reviews of some form. ZFS likes RAM for cache, so your point about the 12GB max on the i3 could have some interesting impacts for long writes/random reads.
Unless you need the (modest) improvements of a more powerful CPU, I just can't see the value proposition of this over an Asustor Flashstor unit. Add a memory upgrade to 16GB (which has to be part of the base price of this unit!), use 4TB or 8TB M.2s in a 6-bay Flashstor, and you have a very economical storage device that can handle small to medium server tasks via native (limited) or Docker support. Yes, the Asustor unit has slower network speeds - but how fast and how numerous are your workstations? Yes, the Asustor unit has limited channel speed to the memory - but how saturated will it be at a given time? And yes, the Asustor has a funky design - but if you add the optional heat sinks and give it reasonable ventilation, it will be the most quiet, responsive home or SMB server you can imagine. And with careful shopping, you can get a 6x4TB RAID5 unit for not much more than the price of the bare base chassis of the QNAP. There are definitely some advantages to the QNAP, but for a much larger segment of the home/SMB market, the Flashstor is the smarter choice.
Can we install M.2 SSDs with heatsinks inside the drive tray? If so, what would be the max allowed height of such a drive? (I guess the Samsung 990 Pro with heatsink has a ~9.5mm z-height.)
@ServeTheHomeVideo I couldn't transfer faster than 600 MB/s in multiple tests. It didn't perform great for me; my normal servers peg out transfers at 1 GB/s consistently. It's still a cool device, but it coulda been better imo. What kind of transfer speeds do you see? The latency for seeking is also kind of high; my intended use was hosting a MongoDB, so that may have contributed to my disappointment. It did do slightly better with the Thunderbolt connection. I think it may become a media server now, or I might send it back to Amazon, I've just had it for a few days. I paid almost 1500 for it, I wish I had found it for the 1100.
This is the kind of NAS I would love. Sadly they failed at the execution, and should have built an expandable model instead (like with modules you click underneath/on top, with room for 5-10+ more SSDs).
I bought "MTFDKBZ3T8TFR-1BC1ZABYY Data Center 7450 PRO" from the Qnap Compatibility list of drives but can't seem to fit it in the casing either in adapter or without it just doesn't align with the screws. Do you know how i can install these.
Lost my interest at $1100
Thanks for saving me some time.
$1199 at 9:00
Lost my interest at „QNAP“. 😬
@@LordSteinchen a man of culture I see
Have you looked at the prices on synology?
I like how it has the aesthetics of a simple object that will blend in on my desk under a pile of clutter until it overheats.
Pricing is a joke: 1600 € for the i3 and 3000 € for the i5
Have QNAP been taken over by Apple ?
They have to pay many influencers who will tell us that this overpriced stuff is amazing.
For that money I can -buy- build a machine that routes 25G+
Just to be clear, we purchased this unit on Amazon and QNAP did not sponsor this.
@@ardziu0 this price is normal for turnkey NAS stuff, it's always a ripoff
@@marcogenovesi8570 Well you get software support, vendor support...
DIY will be cheaper, but DIY won't come in this form factor, nor have this level of support if something goes wrong. Something people always fail to factor in is having actual support for your product.
With a lot of things like this, you aren't paying for the hardware. You're paying for a supported platform where, if something breaks, you will have someone to call and help fix it. That's huge for people who don't have the expertise. That's why my company buys Dell, and doesn't just build the servers themselves.
It’s not hard to deduce the target audience: supports enterprise drives with high endurance, has a relatively competent CPU for a NAS, Thunderbolt, and networking = video content producer who dumps and edits over Thunderbolt, transcodes with the CPU (doesn’t need tons of RAM), and uploads or accesses from another machine over the network.
Sure enough: “Designed for film sets, small studios, small-scale video production teams and SOHO users” and “Enjoy the smoothest experience ever in real-time video editing, large file transfer, video transcoding, and backup. The TBS-h574TX, as the bridge between pre-production and post-production”
I find its form factor attractive. But given the price and constraints, I would rather build a proper 4U......
I somewhat see the other case for it. I mean, think of all the people paying crazy amounts for Mac storage. Having something portable-ish is also very useful.
@@ServeTheHomeVideo Well Mac users almost always overpay for anything. So yeah, maybe that is the turn QNAP wants to take. But QNAP as a brand is not that established among Mac audiences (unlike LaCie or Promise). They need to at least get themselves into the Apple Store first.
I see it as the NAS for the fiber, always-on, backed by the cloud group.
@@ServeTheHomeVideo I am looking forward to U.2 TrueNAS SCALE machine video(s).
4U would be a lot more space and power. A 1U seems a more comparable form factor.
It's a good direction, but for 5 disks I would rather use socket AM4/AM5 boards that can do 4x4 bifurcation out of an x16 slot. With 2x M.2 onboard I can build the same or an even better setup, at full speed and without this annoying RAM limitation. With AM5 it can even be full-speed PCIe 5.0 drives. If I get cooling problems I can always limit the TDP in the BIOS to 35W (or any number I like).
True, but remember this is also designed for the folks running around editing video, photos, and so forth on MacBook Pros who want to connect a TB4 cable and dump footage or edit on the NAS on the road. AM5 would be larger footprint-wise, and a lot of these people will pay 2x for carbon fiber tripods to save 0.5lbs, or pay crazy amounts to Apple for MacBook storage.
@@ServeTheHomeVideo I understand that and know some of those folks :). I'm saying that this product is interesting as an NVMe NAS with a powerful CPU, but not for me. I'd like to see a barebone like the ASRock X300 SFF, where I can configure it as I like and change the configuration in the future.
I would love to see a machine like this running AM5. Run most of the 24 PCIe Gen 5 lanes to a chipset that splits them out into twice as many Gen 4 lanes for the backplane, and you could end up with 40 lanes at PCIe Gen 4 speeds, which could drive up to 10 Gen 4 drives at full speed.
@@ServeTheHomeVideo A 4TB Samsung T7 or Samsung T9 is an alternative.
@@shadowtheimpure That would be fun to see, but you need an interesting splitter/converter that can turn 2x PCIe 5.0 into 4x PCIe 4.0, many times over. 4x4 you can do natively through bifurcation, but the suggested setup would cost many hundreds of dollars extra. Maybe next-gen CPUs will do 8x2 or even 16x1 bifurcation and we can buy some PCIe 5.0 x1 NVMe drives, so we can achieve 16 drives on a single x16 slot. But for now it's only my dreams :)
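A quick sanity check on the lane math in this exchange (a sketch in Python; the lane counts are the ones quoted in the comments above, not taken from any board's datasheet):

```python
# Back-of-the-envelope for the AM5 idea above: feed most of the CPU's 24 Gen5
# lanes into a hypothetical Gen5->Gen4 switch that doubles the lane count.
GBPS_PER_LANE = {3: 8, 4: 16, 5: 32}   # approx. raw Gbit/s per lane per PCIe gen

cpu_lanes = 20                          # "most of the 24", keeping 4 for NIC etc.
backplane_lanes = cpu_lanes * 2         # Gen5 bandwidth re-expressed as Gen4 lanes
drives = backplane_lanes // 4           # x4 per NVMe drive
drive_bw = backplane_lanes * GBPS_PER_LANE[4]

print(f"{backplane_lanes} Gen4 lanes -> {drives} drives at x4 full speed")
print(f"~{drive_bw} Gbit/s of drive bandwidth vs. a 10 Gbit/s NAS port")
# 40 lanes / 10 drives, matching the figure above -- and ~64x a 10GbE link,
# which is why Thunderbolt/DAS or faster NICs matter more than drive count here.
```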
Was really excited to see this over the Asustor… then saw the 5-drive limitation, the 12/16GB RAM limitations (non-upgradable), and the associated prices. Allowing this device to leave my dreams and drift into the forgotten realm of could-have-been-great hardware.
Not sure why it's so expensive. Technically, the only thing changing is the drive connections. It's not like it is coming with the drives at that price. Guess we'll need Synology and others to come out with something similar to drive down the price.
At least have 6 drives for RAIDZ2. C'mon!
Now with 2x price as well 😅😅
Yes.
Why isn't a locking barrel plug for power a standard yet?
Great point. We mentioned that in the main site review too.
Excellent presentation. Thank you for an honest review. For that money, I would personally take a Synology DS1823xs+.
I think this is only for those that want all-flash, or all flash plus USB drive enclosures for extra HDD space.
@@ServeTheHomeVideo You can do "all flash" with the DS1823xs+ if you want. Built-in 10Gb networking and a slot for a 25Gb card. It can fully saturate the 10Gb and get pretty close to saturating a 25Gb.
I love the swappability and flexibility of this system. When these hit the second-hand market, I'll have to snag one.
Very much looking forward to the Topton clone in a few months
In this price range, you can get a Xeon D-based 1U micro-server from Supermicro. It will let you connect at least 5 drives with 4 lanes each, plus give QAT acceleration, ECC memory, and potentially even a 25GbE port, in a just slightly bigger box.
Very fair. I love Xeon D (we were the first to review each Xeon D generation since 2015.) We actually have a really cool Snow Ridge server that I am not sure if we will have time to do a video on, but we will have a STH main site review in the next few weeks. Built-in 25GbE and QAT with a 16 core CPU!
@@ServeTheHomeVideo Can't wait for that one! Server-grade Atoms and Xeon D seem to be underrepresented :D
I built myself a 5-drive NVMe server using my old ASRock X370 Taichi board (running headless) with a Ryzen 5 5600X, 32 gigs of Samsung B-die memory, and an Asus Hyper M.2 x16 card with 4x4 bifurcation. I also went with 2x 40-gig Mellanox ConnectX-3 Pro dual-port QSFP+ cards and a 5-meter AOC cable that I picked up used on eBay for about $80, because I didn't want the drives to be bottlenecked by 10-gig speeds. I was surprised how cheap the used 40-gig networking equipment is if you don't need a switch. Very happy with this setup and it's blazing fast.
Or use a Dell box with NVMe adapters. Love this stuff, but man they make 'em outta diamonds.
What is the model number?
How big is that? Is it portable to put into a backpack so you can do video editing on the road?
@@ServeTheHomeVideo This comment is out of touch. Who needs several tens of TB for video editing on the road? How many weeks are you "on the road", and why wouldn't you just use a 4-8TB NVMe in a USB-C enclosure?
1. The form factor? This is at least portable.
2. Will you be able to plug your laptop in via the thunderbolt port and it "work"?
3. Will you have vendor support if things go wrong? (hint, no, you won't)
You could probably make something comparable. But what is your time worth, and what is your data worth if you screw something up? This isn't really marketed towards the "DIY" crowd. This is a turnkey solution for your home business, or (very small) business.
@@rdvansloten If anything you’re (we’re) out of touch. Normies like simple turnkey solutions that are compact, and they’ll pay quite a bit for them. There are also a lot of people living out of tiny studio apartments, RVs, vans, and the like, for whom space is at an ultra premium.
As someone in the Apple ecosystem, I would have given this a shot if it took widely available U.2 drives. The pricing is crazy, but with U.2 it may have been easier to swallow. I currently run 2 U.2 drives with OWC PCIe boxes and have been looking for something like this but with U.2 — but everything that supports U.2 in the NAS space is either overkill, expensive, or both.
We need more NAS devices like this utilizing SSD/M.2 to drive down the price.
I just want 16x-32x EDSFF bays... a low-power CPU... and a high-capability NIC (or even a DPU)... in a form factor that doesn't start at ~$10-15K like the OEMs are still charging for JBOD-esque NVMe storage. I don't need a super-powerful CPU... but I would like something that can manage to get 40-100Gbit/s out over the network for a "poor man's storage over fabric" setup. I'd even take like... just a DPU + EDSFF + NVMe switch in a box?
Yes. That is about perfect.
Then you need a powerful CPU because only those come with enough PCIe lanes to feed it.
Also if you need 100Gbit you're far into enterprise and that was never "poor man's" domain.
You realize that 40-100Gbit/s needs a massive CPU... not just for your 16-32 EDSFF bays, but also just to push all that data through... get realistic man.
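For what it's worth, the software half of the "poor man's storage over fabric" idea already ships in stock Linux: the kernel's nvmet target exports an NVMe namespace over TCP with nothing but configfs writes. A minimal sketch — the NQN, block device, and IP address below are placeholders, and it assumes the nvmet/nvmet-tcp modules are loaded:

```python
# Minimal NVMe/TCP target via the Linux kernel nvmet configfs interface.
# Run as root with the target modules loaded: modprobe nvmet nvmet-tcp
# NQN, block device, and address are placeholders -- adjust for your setup.
from pathlib import Path

CFG = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2024-01.example:flash-jbof"   # hypothetical subsystem name

subsys = CFG / "subsystems" / NQN
subsys.mkdir(parents=True)
(subsys / "attr_allow_any_host").write_text("1")  # no host ACL; trusted LAN only

ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True)
(ns / "device_path").write_text("/dev/nvme0n1")   # the drive to export
(ns / "enable").write_text("1")

port = CFG / "ports" / "1"
port.mkdir(parents=True)
(port / "addr_trtype").write_text("tcp")
(port / "addr_adrfam").write_text("ipv4")
(port / "addr_traddr").write_text("192.168.1.50") # target-side IP, placeholder
(port / "addr_trsvcid").write_text("4420")        # standard NVMe/TCP port

# Expose the subsystem on the port; an initiator then connects with:
#   nvme connect -t tcp -a 192.168.1.50 -s 4420 -n nqn.2024-01.example:flash-jbof
(port / "subsystems" / NQN).symlink_to(subsys)
```

The hard part, as the reply says, is not the software: it's the PCIe lanes and CPU cycles to actually move 40-100Gbit/s.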
Made my home server with an i5-7500 (10-bit / HEVC encoding and transcoding), a 4 TB NAS HDD, and a 2 TB M.2 SSD for just $300. Running Ubuntu Server and CasaOS. Works like a charm.
Sweet!
I agree, I'mma wait for U.2
Lost me when they didn't include an SFP+ port for 10Gb, then the price cemented it....
Apple Mac systems use 10Gbase-T. Threadripper Pro systems have 10Gbase-T onboard. I can see the logic.
normies (aka the main target for this device) don't use SFP+
Completely agree, it missed SFP+. Finding a decent 10Gb switch that is fully passively cooled is almost impossible. These copper ports sometimes run over 90°C, which is just crazy. Also no way to use a low-power DAC.
@@kwinsch7423 A workaround would be to get a media converter box that converts 10Gbase-T to SFP+, and then you can plug in a DAC cable or optical transceiver.
Why? How is this a dealbreaker? Explain it to me please as if I'm 5. Thanks, honestly. Considering buying it.
For that price, at the very least I would rather it have double the drive slots, even if that meant going to Gen3 x1. I mean, ~950MB/s is a lot of per-drive bandwidth in a RAID array. It would be a lot more cost-effective per TB for roughly the same throughput.
Umm, no go at this density. If it were 12 bays like the Asustor then… maybe… but this is crazy. Also, why do people poop all over the Asustor for “only 1 lane per NVMe” when it only has one 10Gbps port and can, yes, fill it no problem? If it had a 40 or 100-gig port and couldn't fill it, then okay, maybe you have a point — but then you should be using an enterprise NAS.
Exactly. We discussed that in the video. We actually use the 12-bay Asustor every day.
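Rough numbers behind that point (a sketch; the per-lane figure is the ~950MB/s quoted a few comments up):

```python
# Why x1 Gen3 per drive doesn't matter behind a single 10GbE port.
GEN3_X1_MBPS = 950            # usable MB/s of one PCIe 3.0 lane (figure from above)
TEN_GBE_MBPS = 10_000 / 8     # 10GbE line rate = 1250 MB/s before overhead

drives = 12                   # e.g. the 12-bay Flashstor discussed here
array_mbps = drives * GEN3_X1_MBPS
print(f"array ~{array_mbps} MB/s vs. network ~{TEN_GBE_MBPS:.0f} MB/s")
# ~11400 MB/s of aggregate drive bandwidth behind a ~1250 MB/s port:
# the NIC saturates long before the x1 links do.
```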
While I can see the use case for both, I cannot justify either when there are better options out there. Hopefully they take the criticism on board and fix the issues for version 2. People can grab a Jonsbo N3 and a mini-PC, then connect the mini SATA/SAS backplane through an M.2 breakout adapter.
This was highly informative, just as I had been pondering a Synology 9xx+ NAS for weeks. I need to spend more time and re-budget the whole thing from scratch.
At 1200 bucks, I'm just not interested. I'm not someone that quibbles about price, because for whatever you want in life, there is a cost associated with it. I do quibble about value, though, and I just don't see the value here. 1200 for the NAS and another 700-900 bucks for decent M.2 drives. For a cost of 2k or more if you go with larger M.2 drives, you could get much more NAS by sticking with a traditional HDD NAS. This thing is cool, but its price needs to come down before I will be able to see its value. If it were 1200 bucks with 2 or 3 M.2 drives included, then I could see it.
Connect the 2.5GbE directly to your main rig and the 10GbE to your home network (especially if you have a lot of users connected to the home network).
By the time you add the SSDs, it is the price of a high-end gaming PC?
All I need is capacity. Why aren't 8, 12, 16-bay NVMe USB-C enclosures a thing? I have dozens of 512GB NVMe drives from old PCs at work. A cheap multi-NVMe USB enclosure would be ideal.
See our recent Broadcom VMware piece. When PLX was purchased, PCIe switch prices went up by 3x and stopped their use in consumer and most prosumer gear.
@@ServeTheHomeVideo Microchip produces them as well.
Finally a NAS that can natively match what a power user's network can handle. Especially for those with above-gigabit upload/download.
I love low-power SSD NASes, and with the price of SSDs still getting lower, I recently swapped my 4U out into a 2U ultra-short-depth chassis using 4x4 bifurcation on an AM4 ASRock Rack board and an old R5 3600 I had kicking around. Superbly powerful, small, cool, quiet, fast, and sips power. Unraid on it. Love it. But this QNAP thing is like 3-4x the cost of what I spent just doing it myself, and I can expand mine much more easily.
why does it cost $2.5k?
With drives maybe? We bought ours on Amazon for $1199.
I know he talked about saturating the 10G port, but can anyone tell me if he was able to fully saturate the 40G thunderbolt port?
So say you use 4x 8TB NVMe drives + this enclosure: you're looking at £3500. Compare that to a 4-bay 3.5" unit (16TB each) + 2x 1TB NVMe for read/write cache = £1300 for double the storage. With the latter, you still have 1TB of NVMe cache (in RAID1) for metadata and recent files, so in most cases you still get blazing performance — for double the storage at 2.7x less cost.
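Spelling out the math in that comparison (a sketch using the prices quoted in the comment, which will drift with the market):

```python
# Cost per terabyte, using the figures from the comment above (GBP).
flash_cost, flash_tb = 3500, 4 * 8    # enclosure + 4x 8TB NVMe
hdd_cost, hdd_tb = 1300, 4 * 16       # 4-bay HDD unit + 2x 1TB NVMe cache

print(f"all-flash: £{flash_cost / flash_tb:.0f}/TB for {flash_tb}TB")
print(f"HDD+cache: £{hdd_cost / hdd_tb:.0f}/TB for {hdd_tb}TB")
# ~£109/TB vs ~£20/TB raw, double the capacity on the HDD side, and the
# 3500/1300 ratio is the ~2.7x total-cost gap the comment cites.
```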
You have to use an external PSU with NVME to PCI-E adapters to add four RAID controllers which each connect to 8 hard drives. Then the jank will be complete.
This competes well with the only other thing I can find, the OWC Thunder8. I do not understand why you would want it, and would love to see a consumer platform with more than 24 lanes. I could see Jonsbo making a Mini-ITX case with room for 8x E1.S drives at some point. So I'd need a lower-power CPU with about 48 lanes.
Just think of a wedding photographer/videographer, corporate video person, and so forth who needs a mobile NAS and is in the Apple ecosystem for notebooks.
@@ServeTheHomeVideo Why not just a few 8TB SSDs? Or an SFF PC and ingest to a pair of 20TB U.2 drives?
@@Cynyr Because it doesn't exist with 10Gb, TB, and a controller to run U.2 drives, along with the correct spacing to house 15mm drives. And especially not in a small form factor. Regular 8TB SSDs are QVO and extremely slow — even slower than regular hard drives in some cases — and not suited for any form of RAID. Even if you could manage it, 8TB and U.2 drives would exceed the price of this with regular NVMe drives. There's a reason people buy thin-and-light laptops instead of thick and bulky computers.
I am glad to see the growing support for 10Gbps copper.
Copper 10GbE is dead and useless: much too expensive and energy-wasting, with only a few niche uses.
SFP+ switches are much more reliable and affordable, and QSFP/QSFP28 at even 40/100Gbit is available and affordable. With NVMe SSDs, QSFP+ is the minimum you'd want.
Mainboards could be much cheaper if they'd get rid of these niche 10GbE ports and just use dual SFP+/(Q)SFP28, which is used in every server room — which is why those prices have already come down.
@@deineroehre Yeah, but it is hardly an end-user-friendly solution. Almost nobody has optics running through their house; almost nobody has network cables, period (most people are on Wi-Fi). As a normal consumer I would prefer a NAS with Wi-Fi 7 and copper.
Me too. I can see the value of running a mini-PC with 10GbE to a 10/2.5/2.5/2.5/2.5 GbE switch for backups, imaging, or Wi-Fi in a homelab or SMB. Not everyone can afford SFP+ or QSFP+; enterprise elitists will never understand working with a very limited budget.
@@ericneo2 It has nothing to do with elitists; it is just normal everyday use, even in homes. SFP+ is actually much cheaper than copper: I couldn't afford 10GbE over copper for all the PCs here despite having Cat7 cables into every room.
Apart from that, end users had telephone wires before network cables (in the US most homes still have just fancy patch cables instead of proper Cat7 network cables in the walls), so the switch to fibre is basically no problem.
Additionally, the bandwidth is only needed between the NAS, servers, and the central switch, so something like a MikroTik CSS326-24G-2S+RM — or even a CRS310-8G+-2S+ with 8x 2.5GbE for clients and 2x SFP+ for server/NAS — is sufficient and cheap.
Or if you want to be rather future-proof and don't want to put new cables in the wall, there is even the CRS326-4C+20G+2Q, but at around 1000€ this is not really suited for the home; for that price you can just put fibre optic cables in the walls and be absolutely future-proof for the same money.
If someone is playing around with Wi-Fi, there is no need for 10GbE over copper either, so there is no point in sticking to expensive legacy technology like 10GbE.
Great video. Interesting product (despite questionable execution), albeit out of my budget, and it seems high for what it is. Can't wait for EDSFF drives to be more common. Kinda hoping M.2 is replaced by something else, as it seems to struggle with higher-speed PCIe.
Wish somebody would make a case like this that can take a standard motherboard (miniITX or microATX) and can load any OS.
One thing I must have missed - what is the PCIe layout for these drives? Shared or separate lanes, or...? On just a high-level look, I would have considered this if it had an option for more drives at the same price. Let's be clear - there are a couple of other companies making very competitive, comparable NASes (the Asustor Flashstor comes to mind).
It's frustrating to see products hobbled in pursuit of market segmentation. I have an X79 board with a Xeon CPU running 40G networking, 2x PCIe x16 + 1x PCIe x8, all at PCIe 3.0.
X79 seems to be the last consumer chipset with a decent spread of PCIe slots and lanes. Did I mention it also has 8 RAM slots and proper ECC?
I keep looking, but I can't find anything similar that's more recent.
That might be correct, because once server platforms went to PCIe Gen4, now Gen5 (e.g. Xeon W and Threadripper (Pro)), it is really hard to drive a signal to far-away PCIe slots on a motherboard. In servers, you usually use a cable to go that far. Next year with PCIe Gen6 it will get even worse for signaling.
The E1.S and related drives can be up to 8 PCIe lanes. The P4511 is a PCIe Gen 3.1 drive with 4 lanes; if you look at the pins you can see it, and it is on Intel's website. It seems to me that if they made a NAS that could support all 4 lanes for each drive, and 8 for future ones, you would have a horribly expensive NAS — but one fast enough to benefit from some sort of exotic connection like fiber. Maybe fiber is not that exotic, but it is not in every house either!
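Rough math on why a full x4-per-bay design would need such a link (a sketch; per-lane numbers are nominal PCIe figures):

```python
# Five bays at full PCIe Gen3 x4 vs. common network links (nominal numbers).
gen3_lane_gbps = 8 * 0.985            # 8 GT/s with 128b/130b encoding overhead
drive_gbps = 4 * gen3_lane_gbps       # x4 per drive, ~31.5 Gbit/s
array_gbps = 5 * drive_gbps           # ~158 Gbit/s aggregate

for link, gbps in [("10GbE", 10), ("25GbE", 25), ("40G TB", 40), ("100GbE", 100)]:
    print(f"{link:>7}: array is ~{array_gbps / gbps:.1f}x the link speed")
# Even 100GbE is ~1.6x oversubscribed -- hence the "exotic connection" remark.
```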
Bought one, and was planning on using the Thunderbolt port for importing data from Thunderbolt drives. However, Thunderbolt drives don't seem to work in the QNAP system. 😢
For the final price tag of this unit and the NVMe drives, the only way I could potentially embrace this is if they scale the platform and come up with an option to add extra drive bays (for much cheaper than the main unit) using the Thunderbolt connection as the uplink. That would be helpful. As is, as a standalone product, it's a solid pass.
Does it have an ECC RAM option? I think not. I think you need this + Linux + the ZFS filesystem to assure your data safety. Overall, my recommendation is to get an old tower server or 2U rack server with many drive slots and >128GB of ECC RAM. That's what I did.
Sadly, manufacturers always try to save a few bucks by avoiding ECC RAM. I'd never buy a server without it.
I bet ZFS on Linux with RAIDZ over 5 mechanical drives + >128GB RAM could saturate the 10Gb/s LAN? If so, why would you need NVMe drives?
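The sequential arithmetic mostly backs that bet up (a sketch; the per-disk figure is typical for large modern 3.5-inch drives and only applies to streaming I/O):

```python
# Can RAIDZ over mechanical drives saturate 10Gb/s Ethernet? Sequential case only.
HDD_SEQ_MBPS = 250           # typical large modern 3.5-inch HDD, outer tracks
data_disks = 4               # 5-disk RAIDZ1 = 4 data + 1 parity
TEN_GBE_MBPS = 10_000 / 8    # 1250 MB/s line rate

print(f"~{data_disks * HDD_SEQ_MBPS} MB/s streaming vs ~{TEN_GBE_MBPS:.0f} MB/s network")
# ~1000 MB/s: near line rate for big sequential files, but random reads collapse
# to a few MB/s per spindle -- that's where the RAM (ARC) and NVMe earn their keep.
```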
There's a bit of a spec discrepancy on the Amazon page. It says Thunderbolt 4, but then it says 20Gbps. Is it perhaps Thunderbolt 3?
Thunderbolt 3 is also 40Gbps. There wasn't a speed upgrade between the versions, only changes in required features and some details.
@@szaszm_ Thunderbolt 3 controllers were 40 Gbps, but the ports were only required to support 20 Gbps. Most laptops with 2 Thunderbolt 3 ports next to each other only ran each at 20Gbps. A lot of mini-PCs do the same, even when the ports aren't next to each other. Thunderbolt 4 requires all ports to support the full speed. Thunderbolt 3 also had a lot more protocol overhead, so 40Gbps was more like 32. Thunderbolt 4 can operate in pure data mode, using the full 40Gbps if it only has one device on the controller.
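Put in MB/s, those figures look like this (a sketch; the ~32Gbps usable number is the one cited in the comment above):

```python
# Effective Thunderbolt throughput, per the figures discussed above.
def mb_per_s(gbps: float) -> float:
    return gbps * 1000 / 8   # Gbit/s -> MB/s

print(f"TB3 after overhead: ~{mb_per_s(32):.0f} MB/s (40Gbps link, ~32 usable)")
print(f"TB4 pure data mode: ~{mb_per_s(40):.0f} MB/s (one device on the controller)")
# ~4000-5000 MB/s best case -- a single fast Gen4 NVMe drive can already fill it.
```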
@@rightwingsafetysquad9872 Interesting, I didn't know the full details.
Apart from the ludicrous price: are the thermals with the new drive format and the M.2 one good under load and at idle? Maybe it's still coming - haven't finished watching, but thought I'd ask.
For external M.2 storage via Thunderbolt, I have a HighPoint SSD7505 4x M.2 NVMe card in an OWC Mercury Helios 3S Thunderbolt PCIe enclosure, and, while having one drive fewer, it will outperform this product in almost every way, for a very similar total price.
Very cool. Does that operate as a NAS as well and handle all of the RAID and network attach? If so, perhaps I can ask the team to look into this.
@@ServeTheHomeVideo It's obviously not the same as a full NAS like this, no. But I think perhaps a chunk of potential buyers may not need that.
My previous main system was bought in '11 with 16GB of RAM... This is a no-go from the start, especially at that price.
Only good thing is the form factor. Price is horrific.
You can put together a brilliant Supermicro motherboard + Xeon + ECC RAM build in a normal case for vastly less, and it's more expandable and vastly more powerful.
I really want something in this form factor to be an HCI/PVE/K8s host, but there are no good hardware choices. Price aside, this could easily be the homelab box if there were more and faster RAM (this only has DDR4). Also, an internal power supply like the Mac mini's would be nice.
Was there any problem hot-plugging the Thunderbolt connection? I tried to engineer a low-cost file-sharing setup using Thunderbolt between my Mac and Linux desktop, but I had to restart my PC every time I wanted to connect it to my Mac. After some digging: Ethernet over Thunderbolt is recognized as a PCIe card by Linux, and thus no hot-plugging.
At least NAS-to-Mac and NAS-to-Windows-PC it worked.
I'm liking the new power meter. It's much easier to read.
I understand that QNAP is targeting SMBs or video production houses with less tech-savvy people using it, because for the same amount of money I can grab an AM4 system with a B550 board with bifurcation turned on, a Ryzen 5 5600/G, 32GB of RAM, an X710/Mellanox ConnectX-4 network adapter, an Optane boot drive, and either a PCIe NVMe bifurcation riser or an EDSFF one, and still have plenty of money left (much more if I get most of it used).
This may have fancy software and extra connectivity, but for the price of the i3 I could buy two of the six-bay Asustor units. Admittedly this is more powerful, but I could just buy one 6 or 12-bay Asustor for less than half the price, then put the savings into an SFF box with many more cores and memory for running my containers and VMs.
Having reviewed both in the same video, I would get a 12 drive Asustor over 2x 6 drive. Still, this is a slightly different market with Thunderbolt
About time this stuff started happening. Exciting to see. But still very expensive, and it doesn't support a huge amount of data at a cost-efficient price yet... e.g., 30TB+.
I still want one. Haha
Hello Sir. I have a Mac Studio (M2 Max); can I use it just as a DAS with my Mac? Is it possible with no Ethernet fuss? And can I put in NVMe drives with their heatsinks? Is there space?
Yes. It will work as Thunderbolt DAS but I would use a 1GbE connection for the NAS at least to let it update and such
@@ServeTheHomeVideo OK Very Good Thanks
I guess I would have liked a PCIe (x4) slot so the network ports could be changed out/customized. Or at least an SFP+/RJ45 combo port like they have on some of their switches. And of course more/upgradeable RAM.
Also, for all the people complaining about price, do remember the CPU is twice as fast as the Asustor's. And QNAP actually has a cheaper M.2 NASbook already on the market as well.
My two problems with the product:
QNAP Ecosystems are always a problem, especially when a manufacturer has control over certain things.
And the price — $1000+ for something with only limited upgrade options — is a bit much.
You could now argue that you get a good software basis without needing a lot of knowledge (plug and play), or even 10Gbit/s network speed, and perhaps the EDSFF E1.S standard with carrier boards for M.2.
Except for the 10Gbit/s network card and EDSFF E1.S, you could probably get everything cheaper and, above all, easier to expand.
A DIY build would be more difficult to set up the first time, but over time you have a lot more options.
Make one with 10 bays and I'll buy it. I can only go up to 2TB in each bay at the present moment.
With that, let's get to our key lessons learned. 😃
I would still most likely choose the Minisforum MS-01 over this. 🤔
Sure, but then you need to fit more SSDs.
I tried searching for a Lincstation N1 review on your channel but didn't see it. Considering it's only ~$400, you can get 4TB M.2 NVMe SSDs for about $230 nowadays, and it comes with Unraid, I went with that for my personal use. All in all I have an 8TB NAS with 2 parity drives (4x 4TB drives) for just under $1400. I know I didn't need two parity drives but... I'm paranoid when it comes to data loss. Also, still waiting to populate the 2x 2.5" SSD slots (2.5" being so much slower but the same price as M.2 NVMe drives of similar size irks me). Part of me thinks I should've gone full-blown 1U server, but the other part of me knows that if I did that I'd probably wind up spending a lot more.
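For anyone comparing, the arithmetic behind that build (a sketch with the prices quoted in the comment):

```python
# Cost and usable capacity of the Unraid build described above.
nas_price, drive_price = 400, 230
drives, drive_tb, parity = 4, 4, 2            # dual parity, as described

usable_tb = (drives - parity) * drive_tb      # parity drives hold no user data
total = nas_price + drives * drive_price
print(f"~${total} for {usable_tb}TB usable ({drives}x{drive_tb}TB, {parity} parity)")
# ~$1320 before tax/extras -- consistent with the "just under $1400" all-in figure.
```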
I like the drive bays, but they are also its most significant limitation. If it only accepted M.2 drives directly on the motherboard, you could get 12 in there by using 6 double-stacked sockets.
If it had socketed RAM, then those wanting decent ZFS caching could upgrade, or select larger when ordering. The i3 is fine; no need for the i5.
Looks great! but I'd only be willing to buy at roughly half that price..
Fair enough!
Not a bad idea.
The price probably is going to be OMG.
But the idea of hooking up a lot of M.2 drives is appealing. It would make for quite a "scratch disk" device.
The only limitation probably would be its network connection.
This is the perfect NAS to run virtual machines and apps, Docker, etc. For storage, prefer the other QNAP models.
Very fair
Why do the 2 network jacks look different? Is the plug for 10GbE different from all previous versions of RJ45? Is the twisted-pair arrangement in the cable also different? Does it have to be Cat6 or higher?
"Guys, I don't know what the heck is going on here. Usually Intel SSDs are really good at following standards."
That's a very Solidigm observation.
Which SSD and what capacity should I get? 🎉 Should the SSD be Crucial? Should I upgrade to the $1600 system? I have a whole-house battery system. Should I just get the QNAP TBS-h574TX?
I don't know why, but seeing your fingers grabbing the shiny, sensitive connection pads triggers me extremely! 😁
I think it is interesting how you knocked this device for not allowing much expandability, but then knocked the Asustor NAS for being slow. It all comes down to the PCIe lanes. The processors used in these things only have something like 24 PCIe lanes, and after you break some out for the 10Gb networking, that really limits the speed and number of drives you can put in. Yes, a single lane of PCIe Gen 3 is a little slow, but honestly, by the time you stripe your data across multiple drives you are going to be more limited by that 10Gb port than by the drives. This one seems to get around that by including Thunderbolt, and hence allows for the faster drives by letting you use it as a DAS. It really comes down to what you want to do with it. I will say that for my part, the Asustor seems to be the better deal for me. I don't want a DAS, so anything I do is going to be network-limited anyway. The expandability of the Asustor is going to be a much better fit as a NAS.
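To make that lane budget concrete (a sketch; the allocation below is illustrative, not QNAP's or Asustor's actual board layout):

```python
# Illustrative lane budget for a ~24-lane NAS SoC (not a real board layout).
total_lanes = 24
overhead = {"10GbE NIC": 2, "Thunderbolt": 4, "boot/USB/misc": 4}  # assumed splits
bay_lanes = total_lanes - sum(overhead.values())

print(f"lanes left for bays: {bay_lanes}")
print(f"  option A: 5 bays at x{bay_lanes // 5} (fewer, faster -- the QNAP way)")
print(f"  option B: {bay_lanes} bays at x1 (more, slower -- the Asustor way)")
# Same budget, two designs; behind one 10Gb port, either one fills the network.
```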
The issue for me is that this is a TB device, so filling up 40Gbps, not 10, is my consideration. At this price, it needs a performant PCIe switch, x4 to each drive, and upgradeable RAM. I, too, would have liked an SFP+ option. I wanted to like this and then order it, but that's definitely not going to happen with this iteration.
The i5 version is $1,500 🤨 At that price I would expect more drive bays or better throughput. The Asustor device is a better fit for the market niche this appears to be going for. I get the appeal of cheap M.2 storage, but for now SSD units are better deals for the money.
Agreed.
I'd much rather get a RAID enclosure like the OWC 4M2 for about a third of the price and use it with my own server. If you have a tower case, you can just get PCIe cards that mount several NVMe SSDs.
That is totally right. We have reviewed cards, but with those there is no hot swap.
@@ServeTheHomeVideo You're right, but how often does one really hot-swap drives? SSDs are far more reliable than HDDs.
This thing is pretty cool, and it looks nice, but I'm sticking with HDDs. Most people don't need the speed of SSDs on a NAS; I think most people want a NAS for the capacity. You also have to factor in the price: SSDs are still a lot more expensive for the same amount of storage.
Wish they made something similar with four U.2 NVMe bays. I have a few 8TB U.2 NVMe drives lying around but nothing to use them in, and I definitely can't afford the TS-h1290FX. Enterprise U.2 NVMe drives are getting cheap now, but there's nothing to use them in other than loud, large rack servers.
Right off the bat, I find the form factor just slightly less than ideal. If it were a little bit shorter, you could mount it in a rack with a suitable adapter/shelf, like one from RackMountIt. But this looks like it's more than 1U thick, which ruins that.
Do you have a link to some of these drives? I could not follow which ones they were.
Liked it until the price; lol. ✌️
Thanks for the review!
You and me both!
Thanks for the review. Q: can I install custom NAS software on this?
Why does stuff like this not ship with an SFP+ port? I’d rather use fibre for everything because of the lower switch temperatures and longer distances.
Usually in this space 10Gbase-T is used because buildings tend to have CAT6 or something like that in the walls. Having a second copper port means it is easy for folks to plug in using existing cabling/wiring. I totally agree I would prefer SFP+, but if you think of people who have high-end Threadripper (Pro), Xeon W, or Mac Mini/Mac Studio/Mac Pro workstations, those all have 10Gbase-T as standard.
I can see it sitting on top of my Mac Studio, connected over Thunderbolt. Then my PC can use the 10 gigabit connection.
That is exactly the use case
@@ServeTheHomeVideo Just ordered one right after seeing your video. Great review!!!
I noped right out at the price; I'll keep my Flashstor 12. The TB port would have been nice, but the limited drive bays, limited upgradability, and price are a HUGE no.
I agree, especially if you do not need TB
Oh its QNAP; so does it come with a free installation of DEADBOLT, to help secure our files?
Interesting box, though I would have been more enthusiastic if it were slightly narrower and taller, with 2x4 slots and bigger (and quieter) fans. And socketed memory...
At the current price it's too niche... I really like QNAP; I've had (and still have) several NAS boxes from 2 to 8 drives, and I have nothing but high praise for their support: When the "automagic" drive adoption logic totally botched my attempt at moving a RAID set between two units, QNAP support rescued me and my data, despite none of my systems being under warranty.
Edit: I also see that Intel's recommended price for the i3-1320PE is just north of $300 in 1,000-unit quantities, so it's not a cheap CPU... that probably goes some way to explaining the pricing.
I was really hoping to see performance numbers of some form. ZFS likes RAM for cache, so your point about the 12GB max on the i3 could have some interesting impacts for long writes/random reads.
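A rough illustration of why 12GB is tight for ZFS caching. The overhead figure and the ARC fraction are rule-of-thumb assumptions (OpenZFS on Linux has traditionally capped the ARC near half of RAM by default), not QNAP-published numbers:

```python
# Plausible ZFS ARC headroom on a fixed 12GB system (assumed figures).
total_ram_gb = 12
system_overhead_gb = 4    # assumed footprint for the OS and NAS services
arc_fraction = 0.5        # common default ARC cap on Linux: ~half of RAM

arc_gb = min(total_ram_gb * arc_fraction, total_ram_gb - system_overhead_gb)
print(f"Plausible ARC size: ~{arc_gb:.0f} GB")
# ~6 GB of read cache in front of tens of TB of flash is fine for sequential
# video work, but small for random-read-heavy workloads.
```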
Unless you need the (modest) improvements of a more powerful CPU, I just can't see the value proposition of this over an Asusflash unit. Add a memory upgrade to 16GB (which should be part of the base price of this unit!), use 4TB or 8TB M.2s in a 6-bay Asusflash, and you have a very economical storage device that can handle small to medium server tasks via native (limited) or Docker support. Yes, the Asus unit has slower network speeds, but how fast and how numerous are your workstations? Yes, the Asus unit has limited channel speed to the memory, but how saturated will it be at a given time? And yes, the Asus has a funky design, but if you add the optional heat sinks and give it reasonable ventilation, it will be the quietest, most responsive home or SMB server you can imagine. And with careful shopping, you can get a 6x4TB RAID5 unit for not much more than the price of the bare base chassis of the QNAP. There are definitely some advantages to the QNAP, but for a much larger segment of the home/SMB market, the Asusflash is the smarter choice.
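For reference, the usable capacity of that suggested 6x4TB RAID5 config, as a tiny sketch (standard RAID5 arithmetic, compared against a hypothetical 5-bay box at the same drive size):

```python
# RAID5 usable capacity: one drive's worth of space goes to parity.
def raid5_usable_tb(drives: int, drive_tb: int) -> int:
    return (drives - 1) * drive_tb

print(f"6x4TB RAID5: {raid5_usable_tb(6, 4)} TB usable")  # 20 TB
print(f"5x4TB RAID5: {raid5_usable_tb(5, 4)} TB usable")  # 16 TB, for comparison
```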
I actually like the idea of having 5 NVMe drives on a PCIe card instead of having them only in one chassis.
Or you could get the TBS-464 if you need a smaller form factor and only need 2.5GbE and 4 NVMe drives, and don't need to hot-swap them. It's cheaper too!
So the Sabrent 8TB works...? All 5 of them?
That's quite expensive for those specs...
Could be more interesting with 6, 8, or 12 bays,
and 16GB of RAM, or at least replaceable RAM...
Totally agree. U.2 as well
Could be good as an on-set device for DITs backing up footage.
Can we install M.2 SSDs with heatsinks inside the drive tray? If so, what would the max allowed height of such a drive be? (I guess the Samsung 990 Pro with heatsink has a ~9.5mm Z-height.)
12GB of memory is completely inadequate! Memory should be upgradeable!!
Agreed
It's really interesting. Thanks!
No ECC RAM?!
Oh, five whole disks for $1,100, with a CPU slow enough to prevent max-speed transfers.
Not sure what you mean. This is more than enough CPU for 10GbE
@ServeTheHomeVideo I couldn't transfer faster than 600MB/s in multiple tests; it didn't perform great for me. My normal servers peg out transfers at 1GB/s consistently. It's still a cool device, but it could have been better, IMO.
What kind of transfer speeds do you see?
The latency for seeking is also kind of high; my intended use was hosting a MongoDB instance, so that may have contributed to my disappointment.
It did do slightly better with the Thunderbolt connection. I think it may become a media server now, or I might send it back to Amazon; I've only had it for a few days. I paid almost $1,500 for it; I wish I had found it for the $1,100.
Can it saturate a Thunderbolt network connection? That is 40Gbps, right?
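Worth noting why "40Gbps" rarely means 40Gbps of actual storage or network traffic. A rough sketch using nominal Thunderbolt figures; the efficiency factor is an assumption, not a measured value:

```python
# Nominal Thunderbolt 4 arithmetic: link rate vs. realistic data ceiling.
tb4_link_gbps = 40
tb4_pcie_tunnel_gbps = 32      # PCIe data over TB3/TB4 is capped around 32Gbps
protocol_efficiency = 0.9      # assumed framing/IP-over-Thunderbolt overhead

ceiling_gbs = tb4_pcie_tunnel_gbps / 8 * protocol_efficiency
print(f"Practical ceiling: ~{ceiling_gbs:.1f} GB/s")   # ~3.6 GB/s best case
# So roughly 3 GB/s is the realistic target; Thunderbolt networking in
# practice often lands well below even that.
```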
This is the kind of NAS I would love. Sadly, they failed at the execution; they should have built an expandable model instead (like modules you click underneath/on top, with room for 5-10+ more SSDs).
E1.S... wake me up when they make an E1.L version. But I'm interested.
I bought the "MTFDKBZ3T8TFR-1BC1ZABYY Data Center 7450 PRO" from the QNAP compatibility list of drives, but I can't seem to fit it in the casing, either in the adapter or without it; it just doesn't align with the screws. Do you know how I can install these?