Woah that Pocket NAS is crazy small. Would love to see ASUSTOR make a tiny Arm NAS with something like the Qualcomm 8cx Gen 3/4 or MediaTek's WoA chip once that's finally out
PCIe x16 to quad M.2 adapters are like $30, so that Ampere option is interesting. Or if you can find any motherboard that supports bifurcation, you could make a full-speed RAID array that isn't bottlenecked by bus or CPU. I've been looking at an Epyc board that would support tons of PCIe lanes - specifically the Asrock ROMED8T. You could put 26 full-speed Gen 4 SSDs on that. Video editing is just barely too needy to run nicely on hard drives, and I don't know of any solid caching solutions. Instead what's making sense to me is a RAID SSD "hot" pool to store an active video editing library, which then gets snapshots backed up to a "cold" hard disk pool.
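That hot/cold split maps nicely onto ZFS snapshot replication; a minimal sketch, assuming hypothetical pool names "hot" and "cold":

# snapshot the active editing library on the SSD pool
zfs snapshot hot/video@nightly-2023-08-01
# replicate it to the HDD pool (later runs can use 'zfs send -i' for incrementals)
zfs send hot/video@nightly-2023-08-01 | zfs recv cold/video-backup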
Synology still has solid devices, but I do feel like they've allowed other entrants in the consumer NAS space to eat their lunch, yes. They've been content sitting atop their NAS throne lately.
Unlike many ARM vendors, though, Rockchip does do upstreaming of their kernel support so it will work in the future with just a vanilla upstream kernel. (Also those SSDs are a lot cheaper than the Samsung I tend to use. Are they any good?)
Okay, Open Bios is a killer feature. I wrote off pre-built NAS boxes for that exact reason, but, uggh, i may just consider this one then, because thats HUGE
The Pocket NAS looks quite promising! I'm interested in how it interfaces to the Rock 5 B - I assume through the M.2 slot underneath the Rock 5 B, but it also looks like it has an interface through the GPIO pins, or are those just for power?
You can choose what kind of networking file system you use; I used SMB (Samba), which has great compatibility with Windows, macOS, and Linux, but you could also use something like NFS, which is mostly compatible, but can have its own quirks. You set up users on the NAS for login, and on your computers, you browse the network to find the NAS, then log in, and 'mount' a share from the NAS, just like you'd plug in a USB flash drive or an external hard drive.
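On Linux, the mounting step looks something like this - a sketch, with the NAS hostname, share name, and user all made up:

sudo apt install cifs-utils                      # SMB client utilities (Debian/Ubuntu)
sudo mkdir -p /mnt/nas
sudo mount -t cifs //mynas/share /mnt/nas -o username=me,uid=$(id -u)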
3:52 I have noticed this with high end NVMe drives that use QLC NAND. I have a Sabrent Rocket Q4 2TB Gen4 NVMe. When copying ~300GB of photos from a day at the airshow at Scott AFB down by St. Louis, MO, the Rocket Q4 got about 70GB into the transfer from my SD card, then slowed to just 40MB/s. I tried restarting the transfer several times and just ended up sending it to a 16TB HDD instead, which ran at nearly the full 300MB/s my SD card supported for the entire transfer, only sometimes slowing to 250MB/s but generally staying around 280-290MB/s.
The 45MB/s likely means that the SLC caching ran out on the parity drive (or all 3 drives) and the drive had to slow to the maximum supported speed of QLC (assuming these are QLC-based, which at 1TB for $40 suggests this is the case).
These cheap SSDs (both NVMe and SATA) are only usable up to about 20~30% capacity. Beyond that point, no matter how hard you try, writing becomes a joke (pricier drives can sustain writes of e.g. 300MB/s for SATA; for NVMe, >800MB/s); meanwhile, those under $30/TB ALWAYS drop to 30MB/s or so after 20~30% capacity. These manufacturers definitely know what they are doing - if a user can notice their SSDs become too slow to write after such light use, these manufacturers must have done extensive tests to make sure this is how their products on the cheap end should perform, so that their high-end products can sell.
What tool do you use on Linux to monitor or measure real-time transfer rate while downloading a file over the network? What tool are you using at 3:27?
If you're able to either view it in the datasheet or visually see where the PCIe lanes go, perhaps you can try that 3-drive RAID 0 test again but make sure all 3 drives are behind separate PCIe switches. Even just 1x PCIe Gen 3 lane per drive should get you ~1GB/s to that PCIe endpoint. In fact, it may be just as interesting to simply try a single drive and see if you still hit that 600MB/s.
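A quick way to test a single drive directly is fio; a sketch (hypothetical device path - and careful, writing to the raw device destroys its data):

# sequential 1MiB writes straight to one SSD, bypassing the page cache
sudo fio --name=seqwrite --filename=/dev/nvme0n1 --rw=write --bs=1M \
  --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based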
3:49 Free blocks in flash storage must be initialised (erased) before writing. That usually happens in the background while the drive is idle. When the drive runs out of already-initialised free blocks, it has to initialise just before writing, which takes much more time. I suppose that's what you observed. The fstrim command initialises unused (already deleted) blocks. If you set the discard mount option, the initialisation happens automatically when files are deleted and blocks are freed.
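Concretely, that looks like this (mount point hypothetical):

sudo fstrim -v /mnt/pool    # trim free blocks now; -v reports how much was trimmed
# or enable automatic discard in /etc/fstab:
# /dev/nvme0n1p1  /mnt/pool  ext4  defaults,discard  0  2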
Considering how much RAM you installed, I'm surprised TrueNAS and ZFS saw so much of a performance hit even when striped. I bet it has something to do with those PCIe switches. I wonder if reads would be faster as a merged JBOD instead.
@1:40 Oh man, that could probably use some heatsinks between the M.2 drives. I'm imagining even something as simple as two copper plates with some metal spacers between them and thermal adhesive on both sides. Then have a squirrel cage fan or something blow air from the side, which would force air to go through those copper plates, cooling them.
Looking at these, I noticed they had TeamGroup NVMe storage. TeamGroup are cheap, but they are not anywhere close to high performance. That said, they should last a while even in a NAS. Spinning drives still have their place if a lot of read/write actions are going on. Spinning drives, despite being slower, will still saturate a 2.5Gbps connection and will tend to last longer. If your NAS is primarily reading data with little writing, then the NVMe could be an option.
They didn't come with TeamGroup, I just bought those because they were cheap enough for me to afford for this video, but also decent enough they could perform well in aggregate.
On the lower TrueNAS read speeds, are you maybe running into decryption performance issues similar to what Tom at Lawrence Systems has reported with low-end CPUs?
I specifically avoided enabling encryption on this volume, but from what I've heard it could be checksumming going on during reads that kills some of the performance. Definitely CPU-bound!
There are solutions to the slow write performance in openmediavault: simply add some arguments in the extra options box under SMB/CIFS. I noticed a substantial improvement in write speeds over a gigabit connection on my Xeon E3-powered NAS.
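For reference, the kind of smb.conf parameters people usually paste into that box look like this - treat them as a starting point to benchmark, not gospel, since results vary a lot by workload and Samba version:

# OMV: Services > SMB/CIFS > Advanced settings > Extra options
use sendfile = yes
aio read size = 16384
aio write size = 16384
socket options = TCP_NODELAY IPTOS_LOWDELAY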
The bottleneck with writes is probably those NVMe drives, or failing to use TRIM, or something else drive-related (maybe OS tuning, e.g. RAID settings, filesystem, or Samba). Because it makes no sense for it to be CPU- or lane-related. The CPU doesn't calculate anything with RAID 0, so you would expect to see more speed compared to RAID 5, but you didn't see any. And it ain't lanes, because a single PCIe 3.0 lane is 985 MB/s and you have 8 of them (7,880 MB/s in total), more than enough to get higher speeds than 680-ish MB/s writes.
I came to say something similar about the PCIe lanes. It also makes no logical sense that a system that can read from the drives at 1.2GB/s using the same PCIe lanes would only be able to write at half the speed... using the same PCIe lanes.
@@ASUSTOR_YT But why are writes so different from reads? Googling Samba read and write performance on other CPU-limited devices (e.g. a Raspberry Pi) shows slightly lower numbers for writes than reads, e.g. 80-90%, but not 50%.
Hey Jeff! Thank you for another super interesting video, I can really see how relaxed you are on camera now. Love to see the progress!! Quick question: I don't usually buy "merch" on YT, but I love the baby "chaos engineer" - will you make it available in kids' sizes? Something for 4-5 year olds? Keep up the good work! I always smile when I see you've posted another video 😁
All I'm waiting for is 32TB of NVMe to be at least somewhat affordable. I currently have a NAS with 4 8TB spinning disks. I'd like to get a second unit going so I can switch to solid state for the primary and keep the spinning disks as a backup.
@@tomspettigue8791 no, but it's easy to confuse. Pine64 makes RockPro64 board with RK3399 - same SoC as in Rock Pi4. Rock Pi4 and Rock5 are made by Radxa.
At that low density, you wouldn't need the NAS in the first place if you weren't using a Macintosh. Two Crucial P3 4TB drives are $400, and most modern machines support some kind of PCIe bifurcation, so one $15 adapter later, you have local redundancy at a full 2GB/s.
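The "local redundancy" part is a few commands with mdadm; a sketch with hypothetical device names:

# mirror the two NVMe drives, then format and mount the array
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/mirror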
Around 7:00 you test write performance of RAID 1 vs RAID 5, but don't RAID 1 and RAID 5 have similar write performance? Wouldn't RAID 0 vs either RAID 1 or RAID 5 be a better measure?
I tested RAID 5 and RAID 0, then on the ASUSTOR I also tested RAID 10 separately (which would have similar performance penalties but a little less sometimes than RAID 5).
But is TeamGroup high enough quality for a NAS? Like, the 2TB M.2 has a 5-year or 5TB-written warranty. So you can only write the entire drive a little over twice before the warranty has expired? Wouldn't the drives be out of warranty in just a few months in a ZFS pool due to periodic scrubs?
I still think using a Mac mini M1 with 10-gig Ethernet will probably be the best way to go for a high-powered NAS. A few Thunderbolt 2 PCIe adapters would be required to install all of the storage needed, along with a Thunderbolt 2 gigabit adapter. You can also add on a storage array of your choice, and you still have two USB 3.0 ports.
4:46 Those Mini560s are very cheap and make exactly the noise you said the fan PWM is making. For future use I would recommend the Mini560 Pro - it costs basically the same and makes no noise.
Also big props for not locking down the BIOS and providing a convenient video port
Thank you!
Seriously. I’ve never been more tempted by a consumer solution
@@taylormanning2709 We appreciate your consideration! I'll do my best to fight hard for the consumer.
My LG G6 has a media server option in settings, with 2TB of SD storage, and the phone is cheap - mine was $18.42, with 4GB RAM and a Snapdragon 821. All you need is to buy the SD cards; a 190MB/s 1TB SD card for the G6 is $94 without the phone.
@@beatyoubeachyt8303 LG G6 had user-replaceable batteries too. If you need to replace batteries these days, you need a torque wrench with an iFixit kit.
Hey Jeff! Thanks so much for taking a look at our first ever All-Flash NVMe NAS! We have made numerous improvements to our design since the last time we sent products to you, and we'd love to share all the ways we keep Red Shirts out of our NAS and enthusiasts and tinkerers inside! With our recent endorsement of third-party operating systems (though without technical support), we're sure that using our NAS is nothing short of a NASTastic experience, and we want to keep listening! If you, dear commenter or YouTuber, want to send me a message, feel free to do so! I love praise, comments, questions, and even criticism! Hit me up and thanks again!
Thank you for (officially!) allowing alternate OSes on your NASes! Now... when ZFS in ADM? ;)
asustor my beloved
This might be a stretch, but are there any plans to sell NAS enclosures without hardware built in, so the user can choose? Love the direction ASUSTOR is heading in, with allowing other OSes. Maybe there could one day be official TrueNAS and Unraid support?
@@JeffGeerling I'm doing my best! I still have to really sell these ideas to the more conservative and risk-averse elements in the office too. But your backing helps me get the point across!
This is how you build a good reputation. Not locking down your hardware, listening to feedback, and engaging constructively with your users.
I really appreciate that you just go straight into it with no intro
Gotta respect my viewer's time!
One additional thing I'd call out when comparing HDD vs SSD: how much data you can store in a given physical space. It's a little insane to me the absolute minimal footprint that a flash based system can occupy, and for people who live in places where physical space is at a premium, that's a very real consideration.
Something I didn't even consider!
The people who live in small places wouldn't be able to afford SSD prices. The only consideration is that a mechanical drive is more prone to failure than an SSD; however, an SSD chip could fry no problem as well
@@s.i.m.c.a Not everyone with money lives in big places...
@@s.i.m.c.a I'm kind of making an assumption here, but I think he's referring to people that live in places like cities (where even a 39m² apartment costs 60% of your salary).
@@s.i.m.c.a Making some pretty big assumptions there. Not everyone chooses to waste money on more space than necessary. Why is there a tiny house movement anyway?
Amazing product from ASUSTOR! An open BIOS is crazy, I love being able to use my own software
Thank you!
An open BIOS shouldn't be "crazy", it should be expected / the norm for hardware that you buy - if the BIOS is locked, you don't really "own" the device. It's very sad that we're already at a place where an unlocked BIOS is "crazy" when that was the NORM for decades. Since when do you buy a PC that has a locked-down BIOS / bootloader??
I love watching these things; the $1300 is definitely outside of my price range, but that is a fantastic little NAS
And costs will likely just go down over time, nice to have something to look forward to :)
Wait a year and see.
Wow great video comparing the nitty gritty details on these 2 NAS solutions!
So interesting and informative - Thanks for this Jeff!
I also really like how the purchased NAS solution is basically open hardware and not locked down and you can install anything you want on it - that's the way it should be.
The pocket NAS is actually of GREAT use to me. This can be a travel NAS for my photography. It's easy to set up, and I could put my data on it without blasting it onto my PC, and have it WAY safer. The Flashstore could be a cool thing for me at home as intermediate storage for hot projects, too. I could edit them there and, after I'm finished, archive them on a slower NAS.
Especially due to it not being locked down. This is a really great factor for people like me who have some DIY NAS and consider some pre-builts like this one so that they can be managed with the same OS.
Another thoroughly researched and excellently presented video by Jeff "The Man" Geerling.
Boom! Ampere Altra at 128 lanes of PCIe Gen4! Nailed it with this one. Something else that would be much better is the memory bandwidth, which matters as well on all-flash NAS units.
Now I need to start campaigning for ASUSTOR to build their next tiny NAS with an AmpereOne 192-core CPU with PCIe 5.0...
The pocket nas is almost EXACTLY what I've been wanting for a few years. I'm a traveler who requires a lot of offline video storage.
Please go to the link to Rick's site and indicate what features you'd be looking for specifically. I can't wait to see the final version he comes out with... I've seen renders of a much more reliable prototype based on the Rock 5 model B, but there's still time to let him know if there's some other feature you'd be missing!
@@hundredfireify I have been using an M.2 enclosure for a couple of years, but I'd like to be able to access it wirelessly sometimes, like with my phone. I have used wireless storage devices before (I had a Seagate wireless drive and a Western Digital wireless drive), but the current solutions don't support the flexibility I'm looking for yet. I want an AIO portable NAS with media output on it for a TV. I'm asking for a lot, but if it's not this device, I was looking into buying a LattePanda Sigma, which offers a lot of what I am looking for: speed, flexibility ("fully hackable"), portability, etc.
@@youdontneedmyrealname If you set up a Samba share on your laptop, you could wirelessly share your enclosed drive to your phone
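A minimal share definition for that - a sketch, assuming the drive is mounted at a hypothetical /media/enclosure; append it to /etc/samba/smb.conf, set a password with 'sudo smbpasswd -a me', and restart smbd:

[enclosure]
   path = /media/enclosure
   read only = no
   valid users = me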
The pocket NAS would be perfect for me as a trucker - great to store some games on for my laptop. Might even be able to make a Ceph storage cluster, that would be something
I do wonder if it'd run Ceph. I tried setting up Rook Ceph on a MicroK8s cluster running on an i5-6500T and 32GB RAM and ran out of CPU. Maybe I did something wrong, but certainly interesting.
Couple external drives would be much simpler. Use 1 and keep a 2nd synced occasionally as a backup. Much cheaper
Yea but why
Data hoarding is the only reason I'd ever look at HDDs going forward. Thanks to the oversupply of flash memory, it is a great time to set up flash-only storage. Not to forget how much easier they are to move around without risking data loss.
Why do you need to move your NAS around?
When you actually get 50+ PCIe lanes for your drives with something like an AMD EPYC, you can run into another problem for an NVMe-only NAS: internal bandwidth of the CPU. When Linus Tech Tips filled up an AMD EPYC with 24 SSDs, he hit a major stability bug in the CPU, because all that NVMe traffic ate up the entire internal bus bandwidth of the EPYC processor and started knocking CPU cores offline!
That is one downside to NVMe, just like throwing hundreds of physical CPU cores in a system, all that NVMe can make things get wonky!
The small SBC as NAS devices interest me for home clustering experimentation. Hiding a bunch of these around the house for distributed compute and storage would be neat... running your own little home cloud, the house is the server.
I took an old dual cassette player gutted it and put an sbc in it with an 8TB drive. Set it in a detached garage that's hardwired. Now have a backup copy in a different building. Next step is getting a copy offsite.
As an IT pro, that shirt is insanely accurate to my life
At my org we're already talking full solid state with U.3 drives for servers moving forward. The elephant in the room is we don't expect to still be buying spinning rust in 10 years, but we have a tendency to keep equipment in production for 6+ years. You might think "that's at least one more refresh" but sometimes you move at the speed of committee approval.
Yes, saw this NAS a few weeks ago; I was that impressed I bought the 6-bay version with 6x TeamGroup 2TB drives. Got it yesterday, can't stop playing with it 😀
It's a neat unit!
Thank you for your support!
The N5105 can easily run with 32GB of RAM - that should help TrueNAS. And the slow-down on write speeds is due to reaching the end of the cache. Most cheaper flash drives use QLC memory as the most cost-effective option, with some cache (DRAM or SLC). Once it fills, the drive becomes dreadfully slow. Would be interesting to see the influence of that on the ZFS pool performance.
Awesome! I'm stoked to see NVMe storage prices really dropping. I picked up some 1TB WD SN850X drives for $55/ea on Prime Day and a 4TB version for $220. There have been some crazy deals on 'slower' drives, especially PCIe Gen 3 models.
We just need GPUs to finally reach some level of sanity again but that's about as likely right now as Samsung stopping their quest to be a crappier version of Apple.
I've liked my TeamGroup SSDs as they are cheap, usually reliable, and have a solid warranty which I have used. What I don't like about them is most of their current lineup is DRAMless, but for my uses (homelabbing with RAID or ZFS, with spinning rust as my main NAS array) they work well.
DRAMless on NVMe is less bad than on SATA, as per the NVMe spec the SSD can get up to 64MB of system memory (RAM) to use (the Host Memory Buffer feature)
Luckily I found the ones I used here, which still have DRAM; I linked to the model on Amazon and can confirm it seems they do have DRAM cache.
Are these SSDs ok to use in a NAS? I guess with all the videos I see from other creators I thought I’d have to shell out more money for NAS specific drives. Is DRAM just what I have to look for?
@@jacobdavis6615 there is no such thing as "nas specific drives" either SSD or HDD, it's mostly a marketing gimmick that WD started in an effort to squeeze more money out of people.
DRAMless SSD drives are usually cheaper and have lower performance, but just as with hard drives, that's not terribly important for a NAS where you are bottlenecked to 120MB/s by a gigabit connection (or 1-ish GB/s by a 10 Gbit connection)
What you want to look at is the write endurance value
@@jacobdavis6615 I've found them to be fine, but I'm mainly using them as either a read cache for HDDs or in an array that's at least mirrored. Would drives with DRAM be faster and maybe last longer? Probably, but capacity is more important than speed for me.
I should also add that I have killed a couple. I now tend to skip the 128GB ones as the price of the larger capacities has come down and ones with more capacity in theory are more reliable.
Awesome video Jeff! Love your thoroughness in your reviews. What I'd one day like to see from consumer NASes is enclosures for DIYers to use. I can build a NAS in a PC case, but it doesn't have enough drive mounts. I can build one in an old server, but it's not power efficient, and empty server chassis with drive bays are super expensive
Yeah, there are very few cases you can buy that are great for NAS use cases. It'd be pretty cool if ASUSTOR used a particular spec for their main boards so you could pop in a mini ITX replacement or something. Would make it so you could buy a used consumer NAS, rip out the guts, and put in your own!
Man, it's crazy seeing those prices on SSDs. I remember paying $140 for my 1TB drive a few years ago
Yep. In a few years SSDs will be the only thing you can get. For now, for super large drives, spinning rust is the way to go.
I remember paying $400 for a 20 MB hard drive, just a "few" years ago.
@@KameraShy I'm with you, I remember this same conversation and progression, but with GB instead of TB. On mechanical disks.
Hah, I just bought two 2TB NVME drives at $200 each a few months before the prices dropped this year.
Yes, it's crazy...
Hey Jeff, great content!
The ASUSTOR read speed drop with TrueNAS was from ZFS's checksum verification on read. The N5105 is just a bit slow at that task.
Good to know! That does make sense, that ZFS would be adding some processing that holds it back a little.
@@JeffGeerling Standard RAID uses some version of CRC32 by default, which has had hardware acceleration for a while now. Btrfs also defaults to CRC32, though you can use a different option. ZFS uses Fletcher4 by default and SHA256 if you enable deduplication.
@@JeffGeerling apparently ZFS isn’t great at handling flash storage. EXT4 and F2FS are reported as having higher performance for arrayed flash storage like these. With a faster CPU and more PCIe lanes, a more optimized filesystem might also give you closer to spec performance out of those M.2 drives
Thank you for sharing. The ideal specifications for my DIY NAS would include support for an ARM CPU, 10G Ethernet, and NVMe SSD. The prototype shown in the video is already very close to that.
This Asustor is my unicorn. I've been waiting for a small form factor flash NAS with 2.5G so I can downsize my media services from a rack mount to smaller with much less power consumption. Got mine on order and plan to pair it up with a Beelink w/2.5g also for all docker services and VMs. Thanks for sharing this - made my week.
Thank you for your support!
The origami thing could even run on my power bank (even if I don't need it to), which caps at 10.5 watts, probably all day long under good load. What are the chances? 😂 Great video Jeff, please keep going and stay healthy.
I don't see why that wouldn't work? A newer version I am working on should make that a reality.
FYI the Intel N5105 will run 32GB (2x 16GB only) of RAM - I've got that installed in my QNAP TS-464 NAS, and plenty of people confirm it on Reddit. Now, I know Intel does specify a max of 16GB, and it does state that on its website; however, pre-late October 2022, when I was researching buying a new NAS, Intel's website did say max 32GB of RAM, which is why I went hunting on Reddit, coz reviewers were saying 16 but I saw Intel say 32. I think it might have been a "your mileage may vary" scenario, because even though Intel pre-October said 32GB, QNAP always stated a max of 16GB, so I think Intel were initially hedging their bets, and QNAP were being conservative to ensure they could 100% support customers. I've had 32GB running for 5 months now without issues. But I agree the weakness of the N5105 is its PCIe lanes; QNAP only offer PCIe 3 x1 speeds to split up what they are trying to do with 4 SATA drives, 2 onboard NVMe slots, and an add-in PCIe slot for 10GbE or 10GbE + 2 NVMe cards. I came from a J1900, so even if I wish for a little more, the N5105 is a pretty capable CPU. I would say look out for Intel's Alder Lake N100, N200 and N350 CPUs - even faster and more power efficient. I've got an N100 in my new pfSense firewall mini PC.
I like the idea of the small SoC NAS, once we get a little more power I might deploy one at my mum's for a media server, they watch a lot of legally obtained films.
Sweet hack! And so awesome how Asus sent you praise.
Hi there! We're not ASUS, we are ASUSTOR. We were spun off from ASUS but we are run fully independently as a separate company.
At 9:50, you implied that unlike ZFS, Btrfs doesn't support snapshots and synchronisation. However, Btrfs does support snapshots and commands "btrfs send" and "btrfs receive" can send and receive snapshots between two hosts over a network, similar to ZFS commands "zfs send" and "zfs receive".
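For anyone who hasn't used it, the Btrfs version looks like this (a sketch; paths and host are made up, and the snapshot must be read-only to be sendable):

# create a read-only snapshot, then replicate it to another machine over SSH
btrfs subvolume snapshot -r /data /data/.snap-2023-08-01
btrfs send /data/.snap-2023-08-01 | ssh backup-host btrfs receive /backup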
he likes ZFS and he also likes to delete comments that don't agree with his ideology ;)
Btrfs does, I wasn't careful with my wording there, as ADM does support Btrfs (and I used it on the NAS we deployed at my Dad's radio station). But some of the Btrfs features are not as easy to use through ADM as they would be on plain Linux, and that was more what I was comparing (ADM vs TrueNAS in particular) here.
I like ZFS but I ain't a ZFS zealot.
And I never delete any comment on any video, except for anything with commercial spam (e.g. "Telegram me you won a prize") or explicit content.
@@patryk4815 I have had arguments with him several times. No comment was deleted, even if he didn't agree. Now David Murray (The 8-Bit Guy), on the other hand... he does. Maybe you are confused.
@@JeffGeerling I use both ZFS (mostly in TrueNAS Core, but I've also used it on Linux) and Btrfs, and both work well, but I tend to prefer the ZFS snapshot model and naming syntax. Btrfs treats snapshots as directories in the same file system, so it's easier to misplace them, whereas ZFS records snapshots in a separate namespace that you can list easily with the command "zfs list -t snapshot". However, on Linux, I tend to use Btrfs more often because it is available in the kernel and requires less memory than ZFS. Though I've used ZFS for almost a decade, I've yet to learn how to control the amount of memory that the various ZFS caches consume. I guess it's never been a priority since I mostly run it on my TrueNAS machine.
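For what it's worth, the big knob on Linux is the ARC size cap, which can be set at runtime or persistently; a sketch capping it at 4 GiB:

# runtime (value in bytes)
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
# persistent, in /etc/modprobe.d/zfs.conf:
# options zfs zfs_arc_max=4294967296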
I know it wouldn't be very fast, but I'd love to see an actual pocket NAS that used a Pi Zero W so I could power it with batteries and push/pull files to/from it while it's in my pocket.
I never get bored of Jeff's videos; it's been a while since I saw red Jeff though
I have created a NAS which gives the best of both price and performance. I used bcache. Storage pool: 3x 10TB HDD and 3x 500GB NVMe, plus 1x 120GB SSD with Debian 11 for the OS. Each 10TB HDD is coupled with a 500GB NVMe drive in "writeback" mode. Once the 3 bcache devices were created, I put Btrfs over them. The best part is that bcache gives me read and write caching together, so in a nutshell I am getting almost NVMe performance on my SATA HDDs. I am also enjoying Btrfs snapshots. To manage storage I am using Cockpit. This serves my purpose.
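For anyone wanting to reproduce that layout, the bcache side looks roughly like this - a sketch with hypothetical device names, repeated per HDD/NVMe pair:

sudo make-bcache -B /dev/sda          # register the HDD as a backing device
sudo make-bcache -C /dev/nvme0n1      # register the NVMe as a cache device
# attach the cache set (UUID from 'bcache-super-show /dev/nvme0n1') to the backing device
echo <cset-uuid> | sudo tee /sys/block/bcache0/bcache/attach
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
# then build Btrfs across the three cached devices
sudo mkfs.btrfs /dev/bcache0 /dev/bcache1 /dev/bcache2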
$1300 is a pretty sizeable price premium for the size and low power usage. I've been looking at making a 4U box with 10GbE and a Ryzen 7 5700G for both NAS and Docker, and it's looking to be about $1300 for two 4TB drives, with a $75 expansion card to add four more. Sure, that's six bays instead of twelve, but it's also about 6x the CPU performance, a lot more PCIe lanes, a decent GPU for transcoding, and a dedicated NVMe slot for the OS. I even threw in 2x16GB of RAM, and I think it can take 4x32GB if I really wanted to.
I use a Raspberry Pi 4, Sata 2TB SSD, OMV, and no raid. That works well for me because I just have a few movies and TV shows on it, and all of those files are copied from DVD or Blu Ray discs I own, so I'm not worried about data recovery. I do get about 100 MB/s, which is good enough for streaming.
Yeah, honestly a gigabit is enough for a lot of use cases, even 4K and more than one user, as long as the needs aren't too taxing.
good week sir !!!
I'd like to see them go one step further; a travel router with NAS capabilities using flash storage. For travel, or even for home use, one or two NVMe slots should provide plenty of storage. Great for travel or off grid use.
You might be able to use more than 16GB of memory on the N5105. Well, I think it depends on the motherboard. I installed 2 sticks of 16GB on my Topton N5105 router just this morning and it works just fine. I also have an N5095 board with 12 SATA ports. I installed 32GB of RAM on that and it works too!
Yeah, some people mentioned 32 GB works here. I know 16 does because that's the spec, and 64 doesn't because ServeTheHome tested that and it broke. So 32 might be the goldilocks if you want a lot of RAM.
@@JeffGeerling About the slow perf you saw on TrueNAS: I noticed you're using SCALE, did you try Core? I've had bad performance experiences with SCALE and good ones with Core on limited hardware (especially old CPUs and lower-end NICs supported by Core). May be worth a try...
I would definitely consider the Pocket NAS for portable storage, especially off-grid.
If the pocket NAS fan was squealing that badly, it's likely also damaged from shipping. Ball bearing fans are the highest-quality, longest-lasting industrial fans, but they're also really sensitive to shipping damage - I learned this the hard way.
It's a big reason why the PC building community considers them worse for noise than sleeve bearing fans, I suspect.
Ah, could be. Shipping seems to have taken its toll on this poor device :(
I thought the fan was damaged the first time I fired it up. Surprisingly, that noise is the PWM interacting with the fan...
I built my NAS using my old Ryzen 1700X with 8X 2TB Crucial MX500 SATA SSDs under Windows Server 2016 and Windows Storage Spaces. The processor, motherboard, and memory were leftovers from an upgrade, so it essentially cost me nothing, and the drives are now running under $100 each. (The 10Gb network cost a bit more.) Running with a mirror config and 8TB of usable space, I get about 800MB/s transfer rates, nearly saturating my 10Gb link.
(angry neckbeard noises for your choice of using WinServer2016 and Windows Storage Spaces)
Using what you have is always the cheapest option! (Though Windows Server 2016 is an interesting choice, it's more rare to see that used for a storage-only server).
@@JeffGeerling It's what I knew how to do. (Plus getting the license key from VIP-SCDKey.) It's not the best, but I tried other methods and couldn't get them right. Either they were too confusing to set up or I couldn't actually log into the share after getting it done to put my data in. It was too annoying, so after 2 months of dealing with it, I went with WSS. (The REAL WSS, not the dynamic partitions.) I also happen to have iSCSI targets on that drive set for my three Hyper-V hosts (self training lab) to back up to using the built in Windows Server Backup. Works great.
That's why I like Asus routers. Easy to flash open-wrt.
I got the 6-drive version to replace my Unraid server to reduce power usage, noise, and physical space. Gonna be setting it up in its (probably) permanent location today and ensuring I copied everything over, but so far I'm pretty satisfied. Sure, it's not as customizable as the Unraid server, but I wasn't really using everything Unraid had to offer anyways.
Thank you for your support!
That "write speed cliff" which you fell off is there for al NAND based flash storage- sometimes better and sometimes worse. But it is always there. Basically when you write you are really writing to pre-cleared blocks of flash. The pre-clearing is a LOT slower than writing to an already cleared block. The pre-clearing happens in background using hidden blocks in your NAND flash device. If you do constant writes you eventually run out of pre-cleared blocks, then you drop down to the speed of clear-a-block-then-write. If you leave the storage alone for 10 minutes then you'll get another burst of high write performance then a drop back down to the slower write perf. All NAND based storage devices suffer this problem eventually, if your writes exceed the pre-clearing rate of the device. Enterprise drives normally just allocate more Flash storage to hidden blocks which are used for wither faster write performance, or to replace the inevitable failed blocks. For some more details, read "Over-Provisioning NAND-Based
Intel SSDs for Better Endurance" which also talks about performance.
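The DIY version of that extra over-provisioning on a consumer drive is simply leaving part of it unpartitioned, so the controller has more spare area to pre-clear into; a sketch on a hypothetical, freshly secure-erased drive:

# use only ~85% of the drive and leave the rest untouched
sudo parted /dev/nvme0n1 mklabel gpt
sudo parted /dev/nvme0n1 mkpart primary 0% 85%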
Hey Jeff, at 8:30 you should disconnect the 4-pin DC power at PJ1 from your Supermicro X10SDV board. It's not recommended because the board alternatively supports two power sources.
You can find this information in the PDF manual on page 26 (1-18):
Note 1: The X10SDV series motherboard alternatively supports 4-pin 12V DC input power at PJ1 for embedded applications. The 12V DC input is limited to 18A by design. It provides up to 216W power input to the motherboard. Please keep onboard power use within the power limits specified above. Over-current DC power use may cause damage to the motherboard!
Note 2: Do not use the 4-pin DC power at PJ1 when the 24-pin ATX Power at JPW1 is connected to the power supply. Do not plug in both PJ1 and JPW1 at the same time.
My guess for the "subpar" ZFS performance is a mix of it still making checksums for data, how it is distributing data to the vdevs (which report back that they are done and have committed the data), and the PLX switching that is going on adding latency, maybe? It may also have to do with updating the metadata; it would be a neat experiment to use 2 of the SSDs as a metadata offload for the rest to see if that brings you closer to generic RAID.
Wonder if you could put an optane mirror set in there…
@@peterbronez1188 I don't see why not, but if you are thinking of using the Optane as a ZIL then it is mostly moot for SMB as SMB is async IO unless you set the ZFS dataset to sync=always.
I agree with the checksum idea. I suspect if you turned them off you'd see that saturation occur pretty easily. Though I wouldn't recommend that as a long term solution; it's part of the point of ZFS.
The cost of quality NVMe SSDs has dropped by half in the last 14 months. Maybe others prefer the rock-bottom pricing of spinning media, but the premium for NVMe SSDs isn't so premium anymore. Only capacity keeps spinning media in my NAS; if I could buy consumer-level 16TB SSDs, I probably would.
Sadly you have to cherry-pick them; not all are the same. The best NAND flash is always the corporate/server stuff.
And if only 8TB NVMe drives weren't $1000 each...
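For a rough sense of that premium, here's the $/TB math using two prices quoted in this thread (the $1000/8TB figure above and the $40/1TB budget drives mentioned further down):

```python
# Cost-per-TB comparison using prices quoted elsewhere in this comment thread.
drives = {"budget 1TB NVMe": (40, 1), "8TB NVMe": (1000, 8)}
for name, (usd, tb) in drives.items():
    print(f"{name}: ${usd / tb:.0f}/TB")
# budget 1TB NVMe: $40/TB
# 8TB NVMe: $125/TB
```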
OMV is actually pretty neat; I use it on a NAS too.
Thanks so much for this, Jeff
wow, those NASes look great ... the pocket NAS with its 10W consumption would be great to put in my RV and use as "offsite" storage 😉
That's about the perfect use case for such a little board!
good idea!
Loading up the Pocket NAS or Flashstor with 4TB drives is where NVMe density wins. I was considering loading up a Flashstor with 12x 4TB drives.
It's amazing how quickly 2 and 4 TB NVMe drives have fallen in price. I'm okay with just a few TB of usable space so I'm doing RAID 10 with 10 drives right now, plus 2 spares. But I could upgrade over time and double or quadruple the capacity, once 2/4/8 TB drives hit whatever price point I'm comfortable with.
ASUS's consumer electronics people seem to be very pro-consumer, a breath of fresh air while Apple, Samsung, Microsoft, John Deere, and Intel try to end personal ownership.
It was DNS 😂😂😂😂😂😂😂 That shirt is top tier, brother.
9:51 snapshots and data integrity are also features of btrfs
This was a VERY good dive into this subject ... and you actually got better results than I got with an AMD EPYC with 128 PCIe lanes ... in Dell's R7415 (where Dell only provides 32 PCIe lanes across all 24 NVMe slots). I'm looking at getting an R7525 ... but it's sad just how much you have to spend for U.2 access to some PCIe lanes, just because of the games manufacturers play.
Yeah, I really wish U.2 were more available in the consumer space. To even get adapters for it can be a bit pricey. Would love consumer 2.5" drives to be around as drop-in replacements where SATA drives were used.
For someone with a large movie and TV library, that's not a lot of space. I have 3 NAS drives totaling 20 TB of space (plus a duplicate of each for backups) to house my collection of TV shows and movies ripped from DVDs and BluRays to watch on various TVs around the house. I have a few more TV series to rip to disk, then I'll be looking to add a couple more drives.
1:48 into the video and the first issue that I see with the small Rock 5 NAS unit is heat dissipation.
The other problem with the diy one is maintenance on failing drives. You have to take them all out to get to the ones at the bottom of the stack.
SSD costs are coming down, but at the same time hard drives just keep packing more and more storage per drive, with WD now listing 26TB drives and 20/22TB drives being fairly common at this point.
Those high end drives are a little exotic and risky to deploy for a desktop scenario where you might only have 4-6 of them, but they do bring the cost per TB way down! I hope to see NVMe prices continue to fall. It's been pretty dramatic the past 3 years.
Woah, that Pocket NAS is crazy small. Would love to see ASUSTOR make a tiny Arm NAS, with something like the Qualcomm 8cx Gen 3/4 or MediaTek's WoA chip once that's finally out.
PCIe x16 to quad M.2 adapters are like $30, so that Ampere option is interesting. Or if you can find any motherboard that supports bifurcation, you could make a full-speed RAID array that isn't bottlenecked by bus or CPU. I've been looking at an EPYC board that supports tons of PCIe lanes, specifically the ASRock ROMED8-2T. You could put 26 full-speed Gen 4 SSDs on that. Video editing is just barely too needy to run nicely on hard drives, and I don't know of any solid caching solutions. Instead, what's making sense to me is a RAID of SSDs as a "hot" pool to store the active video editing library, which then gets snapshots backed up to a "cold" hard disk pool.
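Back-of-envelope math for that setup (theoretical PCIe ceilings; real drives and protocol overhead land lower):

```python
# Theoretical PCIe bandwidth for a bifurcated x16 -> quad-M.2 card.
def lane_gb_s(gen: int) -> float:
    gt_s = {3: 8, 4: 16}[gen]       # gigatransfers/s per lane
    return gt_s * (128 / 130) / 8   # 128b/130b encoding, 8 bits per byte

drive = 4 * lane_gb_s(4)            # one Gen4 x4 NVMe SSD
card = 4 * drive                    # four SSDs on one bifurcated x16 slot
print(f"Gen4 x4 SSD: {drive:.1f} GB/s, quad-M.2 card: {card:.1f} GB/s")
# Gen4 x4 SSD: 7.9 GB/s, quad-M.2 card: 31.5 GB/s
```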
Asustor is coming to eat Synology's lunch, finally some competition
Synology still has solid devices, but I do feel like they've allowed other entrants in the consumer NAS space to eat their lunch, yes. They've been content sitting atop their NAS throne lately.
Hello again! Your comments on the Linus Flashstor video were hilarious!
Unlike many ARM vendors, though, Rockchip does upstream their kernel support, so it will work in the future with just a vanilla upstream kernel.
(Also those SSDs are a lot cheaper than the Samsung I tend to use. Are they any good?)
so far so good, but I've only been running them for a couple weeks now. I'll definitely update on my blog if I find any issues!
You're a rockstar Jeff ✌️
Okay, Open Bios is a killer feature.
I wrote off pre-built NAS boxes for that exact reason, but, ugh, I may just have to consider this one then, because that's HUGE.
The Pocket NAS looks quite promising! I'm interested in how it interfaces to the Rock 5 B - I assume through the M.2 slot underneath the Rock 5 B, but it also looks like it has an interface through the GPIO pins, or are those just for power?
GPIO for power, the SATA all goes through a custom set of plugs that goes into a standard 6-port M.2 SATA adapter card.
Hi Michael!
Looking at rebuilding my NAS at the moment.
Have a spare RAID card, so I'm tempted to use a RockPro64, as it has a PCIe slot built in.
Good video Jeff!
Wow, you edit for Network Chuck? Thanks Jeff, you rock!
Hey Jeff,
I want to know how you use the TrueNAS. How do you upload files to the server?
And how do you even access it through your computer?
Love your videos
You can choose what kind of networking file system you use; I used SMB (Samba), which has great compatibility with Windows, macOS, and Linux, but you could also use something like NFS, which is mostly compatible, but can have its own quirks.
You set up users on the NAS for login, and on your computers, you browse the network to find the NAS, then log in, and 'mount' a share from the NAS, just like you'd plug in a USB flash drive or an external hard drive.
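To make the 'mount' step concrete, a minimal Linux client sketch (the hostname, share name, user, and mount point are all placeholders):

```
# assumption: a Debian/Ubuntu client; options per mount.cifs(8)
sudo apt install cifs-utils
sudo mkdir -p /mnt/nas
sudo mount -t cifs //nas.local/share /mnt/nas -o username=myuser,uid=$(id -u)
```

On macOS it's Finder's "Connect to Server", and on Windows it's mapping a network drive; same idea, different UI.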
Thanks
6:01 oh my, the rolling shutter turned the device into jello 😂
3:52 I have noticed this with high end NVMes that use QLC NAND.
I have a Sabrent Rocket Q4 2TB gen4 NVMe
When copying ~300GB of photos from a day at the airshow at Scott AFB down by St. Louis, MO, the Rocket Q4 got about 70GB into the transfer from my SD card, then slowed to just 40MB/s. I tried restarting the transfer several times and just ended up sending it to a 16TB HDD, which ran at nearly the full 300MB/s my SD card supported for the entire transfer, only sometimes slowing to 250MB/s but generally staying around 280-290MB/s.
The 45MB/s likely means the SLC caching ran out on the parity drive (or all 3 drives) and the drive had to slow to the maximum supported speed of the QLC (assuming these are QLC-based, which, at 1TB for $40, suggests that's the case).
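Working out the arithmetic on that transfer (the 300MB/s pre-cliff speed is an assumption based on the SD card's limit; the other numbers are as reported above):

```python
# Timing the ~300GB transfer described above, using the commenter's own numbers.
total_gb = 300
fast_gb, fast_mb_s = 70, 300   # SLC-cache portion, at roughly SD-card speed (assumed)
slow_mb_s = 40                 # post-cache QLC write speed reported above
hdd_mb_s = 285                 # the HDD's typical observed speed

ssd_s = fast_gb * 1000 / fast_mb_s + (total_gb - fast_gb) * 1000 / slow_mb_s
hdd_s = total_gb * 1000 / hdd_mb_s
print(f"QLC SSD: ~{ssd_s / 60:.0f} min, HDD: ~{hdd_s / 60:.0f} min")
# QLC SSD: ~100 min, HDD: ~18 min
```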
These cheap SSDs (both NVMe and SATA) are only usable up to about 20-30% capacity. Beyond that point, no matter how hard you try, writing becomes a joke (a decent SATA drive should sustain ~300MB/s, and an NVMe drive >800MB/s); meanwhile, the ones under $30/TB ALWAYS drop to 30MB/s or so after 20-30% capacity.
These manufacturers definitely know what they are doing: if a user can notice their SSDs becoming too slow to write after such light use, the manufacturers must have done extensive tests to make sure this is how their cheap-end products should perform, so that their high-end products can sell.
What tool do you use on Linux to monitor or measure the real-time transfer rate while downloading a file over the network? What tool are you using at 3:27?
iftop, nice little tool!
@@JeffGeerling Perfect. Thx!
Isn't PCIe gen3 ~1GB/s/lane? Hearing that 8 lanes are a limitation sounds silly; even 8x PCIe gen2 lanes would be plenty, no?
If you’re able to either view it in the datasheet or visually see where the PCIe lanes go, perhaps you can try that 3-drive RAID0 test again but make sure all 3 drives are behind separate PCIe switches.
Even just 1x PCIe gen3 lane per drive should get you ~1GB/s to that PCIe endpoint. In fact, it may be just as interesting to simply try a single drive and see if you still hit that 600MB/s.
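Worked out in Python, the lane math checks out; raw Gen3 lane bandwidth sits far above the observed ~600MB/s:

```python
# PCIe Gen3 effective bandwidth: 8 GT/s per lane with 128b/130b encoding.
lane_mb_s = 8e9 * (128 / 130) / 8 / 1e6
print(f"per lane: {lane_mb_s:.0f} MB/s")             # ~985 MB/s
print(f"x8 total: {8 * lane_mb_s / 1000:.1f} GB/s")  # ~7.9 GB/s
```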
🫨🫨🫨
3:49 Free blocks in flash storage must be initialised (erased) before writing. That usually happens in the background while the drive is idle. When the drive runs out of already-initialised free blocks, it has to initialise them just before writing, which takes much more time. I suppose that's what you observed. The fstrim command initialises unused (already deleted) blocks; if you set the discard mount option, the initialisation happens automatically as files are deleted and blocks are freed.
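The two approaches described there, using standard util-linux tooling (the fstab line is illustrative; UUID and mount point are placeholders):

```
sudo fstrim --all --verbose   # one-shot: trim every mounted filesystem that supports it
# or continuous TRIM via the discard mount option in /etc/fstab:
# UUID=xxxx  /mnt/pool  ext4  defaults,discard  0  2
```

Many distros already ship a weekly fstrim.timer, which is usually a better default than continuous discard.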
Pocket NAS with a case and battery, what a dream. And with an Ethernet port? Yay, I would buy it.
Considering how much RAM you installed, I'm surprised TrueNAS and ZFS saw so much of a performance hit even when striped. I bet it has something to do with those PCIe switches. I wonder if reads would be faster as a merged JBOD instead.
My thinking is that with the limited number of PCIe lanes, the commands are getting queued rather than being performed in parallel.
I use an 8GB ZimaBoard running TrueNAS CORE with HDDs and an M.2 SSD cache, and it works great!
@1:40 Oh man, that could probably use some heatsinks between the M.2 drives. I'm imagining even something as simple as two copper plates with some metal spacers between them and thermal adhesive on both sides. Then have a squirrel-cage fan or something blow air from the side, which would force air through those copper plates, cooling them.
Looking at these, I noticed they had TeamGroup NVMe storage. TeamGroup are cheap, but they are not anywhere close to high performance. That said, they should last a while even in a NAS.
Spinning drives still have their place if a lot of read/write actions are going on. Spinning drives, despite being slower, will still saturate a 2.5Gbps connection and will tend to last longer. If your NAS is primarily reading data with little writing, then the NVMe could be an option.
They didn't come with TeamGroup, I just bought those because they were cheap enough for me to afford for this video, but also decent enough they could perform well in aggregate.
On the lower TrueNAS read speeds: are you maybe running into decryption performance issues similar to what Tom at Lawrence Systems has reported with low-end CPUs?
I specifically avoided enabling encryption on this volume, but from what I've heard it could be checksumming going on during reads that kills some of the performance. Definitely CPU-bound!
@@JeffGeerling Hopefully they will make one with an i3-N305 soon. That would be killer. Still only 9 PCIe lanes, but much higher performance.
You can fix the slow write performance in openmediavault by simply adding some arguments in the extra options box under SMB/CIFS. I noticed a substantial improvement in write speeds over a gigabit connection on my Xeon E3-powered NAS.
What kind of arguments and where could one find info on it?
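For illustration only, these are the kind of options people usually mean; all are documented in smb.conf(5), but they're not necessarily the exact set the commenter above used, so benchmark before keeping any of them:

```
use sendfile = yes
server multi channel support = yes
aio read size = 1
aio write size = 1
```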
The bottleneck with writes is probably those NVMe drives, or failing to use TRIM, or something else drive-related (maybe OS tuning, e.g. RAID settings, the filesystem, or Samba), because it makes no sense for it to be CPU- or lane-related. The CPU doesn't calculate anything with RAID 0, so you would expect to see more speed compared to RAID 5, but you didn't see any difference. And it isn't lanes, because a single PCIe 3.0 lane is 985 MB/s and you have 8 of them (7,880 MB/s in total), more than enough to get higher speeds than ~680 MB/s writes.
It's the CPU. SMB is single-threaded and heavily dependent on clock speed.
I came to say something similar about the PCIe lanes. It also makes no logical sense that a system that can read from the drives at 1.2GB/s using the same PCIe lanes would only be able to write at half that speed... using the same PCIe lanes.
@@ASUSTOR_YT But why are writes so different from reads? Googling Samba read and write performance on other CPU-limited devices (e.g. the Raspberry Pi) shows slightly lower numbers for writes than reads, e.g. 80-90%, but not 50%.
@@yeahright3348 It's a combination of a few factors, but it's mostly due to Intel Turbo Boost Technology settling in at around 2.4 GHz when writing.
Hey Jeff! Thank you for another super interesting video. I can really see how relaxed you are on camera now. Love to see the progress!!
Quick question: I don't usually buy "merch" on YT, but I love the baby "chaos engineer" design. Will you make it available in kids' sizes? Something for 4-5 year olds?
Keep up the good work! I always smile when I see you've posted another video 😁
I should! I'll see if Teespring offers kids' sizes like that; I think they do.
@@JeffGeerling Awesome, thx 👍🏼👍🏼
All I'm waiting for is 32TB of NVMe to be at least somewhat affordable. I currently have a NAS with 4 8TB spinning disks. I'd like to get a second unit going so I can switch to solid state for the primary and keep the spinning disks as a backup.
Personally, I would use FreeNAS for the Pocket NAS. Much better solution overall.
I use a lot of these consumer NASes for backups for my clients. I normally use Synology and a cloud provider.
What is the chip for splitting out the SATA ports on the Pocket NAS? Looking at what is available on the market, is it an ASM1166?
Oooh, another attempt at Rock5! Nice! Might dust off mine, the software support seems to have improved.
That's a Pine64 board, isn't it?
@@tomspettigue8791 No, but it's easy to confuse. Pine64 makes the RockPro64 board with the RK3399, the same SoC as in the Rock Pi 4. The Rock Pi 4 and Rock 5 are made by Radxa.
12:00 You'd also consume way more power than the small NAS, because in the end the performance/power budgets are different.
Did you count the PCIe lanes used for that add-on card? You need 2 for 10GbE, then 2 per NVMe drive (or more), but the card is only a 4-lane card.
At that low density, you wouldn't need the NAS in the first place if you weren't using a Macintosh. Two Crucial P3 4TB drives are $400, and most modern machines support some kind of PCIe bifurcation, so one $15 adapter later, you have local redundancy at a full 2GB/s.
Around 7:00 you test write performance of RAID 1 vs RAID 5, but don't RAID 1 and RAID 5 have similar write performance? Wouldn't RAID 0 vs either RAID 1 or RAID 5 be a better measure?
I tested RAID 5 and RAID 0, then on the ASUSTOR I also tested RAID 10 separately (which has similar write penalties, though sometimes a little less than RAID 5).
But is TEAMGROUP high enough quality for a NAS? The 2 TB M.2 has a 5-year or 5 TB-written warranty, so you can only write the entire drive a little over twice before the warranty expires? Wouldn't the drives be out of warranty in just a few months in a ZFS pool, due to periodic scrubs?
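The warranty math from that comment, worked through using the figures as quoted:

```python
# Full-drive writes allowed under the quoted warranty terms.
capacity_tb, tbw = 2, 5    # 2 TB drive, 5 TB-written warranty (as quoted above)
print(tbw / capacity_tb)   # 2.5 full-drive writes, i.e. "a little over twice"
```

One mitigating fact: a ZFS scrub is read-dominated (it verifies checksums and only writes to repair damage), so scrubs alone shouldn't eat much of a TBW budget.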
Do you have jumbo frames set up on your network? If not, that will help speed up large file transfers.
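The payload-efficiency gain from jumbo frames is actually modest; here's the math, assuming 40 bytes of TCP/IPv4 headers and 38 bytes of Ethernet framing per packet. The bigger win is fewer packets per second, and thus less CPU per byte:

```python
# Payload efficiency of standard (1500) vs jumbo (9000) MTU for TCP over Ethernet.
def efficiency(mtu: int) -> float:
    tcp_ip_headers, ethernet_overhead = 40, 38
    return (mtu - tcp_ip_headers) / (mtu + ethernet_overhead)

print(f"MTU 1500: {efficiency(1500):.1%}")  # ~94.9%
print(f"MTU 9000: {efficiency(9000):.1%}")  # ~99.1%
```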
I still think using a Mac mini M1 with 10-gig Ethernet will probably be the best way to go for a high-powered NAS.
A few Thunderbolt PCIe adapters would be required to install all of the storage needed, along with a Thunderbolt gigabit adapter. You can also add on a storage array of your choice, and you still have two USB 3.0 ports.
Nice, these flash-based NASes are exactly the kind of data storage device I want. Hard-drive NASes are just too noisy when running; the sound during reads and writes especially stresses me out. Although the per-GB price of SSDs is still slightly higher than hard drives, with the technical progress of Chinese manufacturers the trend of further SSD price drops is already clear. If someone eventually ships a product that is light enough, quiet enough, and has sufficient I/O and processor performance, that's when I'll buy.
4:46 Those Mini560 converters are very cheap and make exactly the noise you said the fan PWM was making. For future use I would recommend the Mini560 PRO; it costs basically the same and makes no noise.
When can I order this Pocket NAS?