Such an underrated channel
My NAS has NVMe drives as the tip-of-the-spear storage (fast, critical storage for low-latency operations like VM/container OS images, data caching, database storage), SATA SSDs for less critical storage (often-used data and warm archives), and rust drives for long-term cold archive (straight up, "never touch these" type files and 1st-tier backups). NVMe and SSD disks are on 2 x 10G NICs and the rust drives are on a 2.5G NIC, all running on top of TrueNAS, on top of a 48-core Epyc CPU and 128GB of RAM. Used the Epyc for the large RAM capacity, CPU cores for small low-priority VMs and containers, many PCIe lanes, and the fast exotic storage buses.
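For anyone curious how a tiered layout like that might map onto ZFS, here's a rough sketch with hypothetical device names and pool names - on TrueNAS you'd normally build these pools through the UI, so treat this as an illustration only:

```bash
# Each tier is its own pool so it can have its own redundancy level and properties
zpool create fast mirror nvme0n1 nvme1n1                 # NVMe: VM/CT images, databases, cache
zpool create warm mirror sda sdb                         # SATA SSD: often-used data, warm archive
zpool create cold raidz2 sdc sdd sde sdf sdg sdh         # spinning rust: cold archive, 1st-tier backups

# Datasets can then be tuned per workload (example values only)
zfs create -o recordsize=16K fast/databases
zfs create -o recordsize=1M  cold/archive
```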
$1000, no disks included, and max individual disk size is 4TB.... you've gotta be kidding me.
the idea is cool but it is prohibitively impractical
This is theft
I doubt you need 16TB of NVME storage...
@@user-ic6xf why? the whole point of a NAS is providing storage
@@user-ic6xf If you have 16TB you will fill up 16TB. Ask me how I know.
This is good for a Ceph cluster: if you have 3 of these, all three can communicate with each other over a Thunderbolt network, and then you can expose Ceph to users via the regular 10G link. It would be a pretty efficient setup if you don't have huge bandwidth requirements (and you probably don't in a home environment)
at ~$1K for each unit, uhm, I don't know, seems it's not worth the hassle.
@@giornikitop5373 ohhh, 100% not worth it, especially now when we are spoiled for choice when it comes to mini PCs in general. Also, $1000 gets you 6 fully decked-out Dell R630s, but those do require you to have cheap electricity
I've thought about doing something like that, but if I'm going through all that work I'd want ECC memory, and that limits the options quite a lot - usually to the point where you don't have USB4 or Thunderbolt for proper host-to-host communication.
I would much rather have 3 Minisforum MS01s
@@seethruhead7119 Those things have no match currently
There's actually a way to test a Thunderbolt drive if all you have is a MacBook. You could place your MacBook into "target" mode, so it will act like a Thunderbolt-connected storage drive. Instructions are on the web.
Apple includes that as a backup data recovery method, in case you can't boot your Mac.
Looking like an interesting little travel NAS.
Would have been nice to see some benchmarks tho.
And a tip for the next install: Don't cover the NAND with thermal pads, just the controller.
NAND actually likes being a bit toasty.
I would love a video about your distaste for Docker (and possible alternatives. Do you just run every app bare-metal?)
Btw - docker networking is less of a disaster if you use MacVLAN network type. It lets the container look like a VM to the network, complete with its own MAC address and IP.
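For reference, a minimal sketch of the macvlan approach described here; the parent interface, subnet, and container image are assumptions:

```bash
# Create a macvlan network bound to the host's LAN interface (assumed eth0)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan

# Run a container with its own MAC and IP on the LAN
# (Docker still assigns the IP from the subnet you handed it, as the reply below notes)
docker run -d --network lan --ip 192.168.1.50 --name web nginx
```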
Generally I use LXC containers for apps. I also include all of the support for an app (TLS proxy, database, ...) in the same container, much like a pod might run.
LXC containers have good networking support - each container is on the local network, directly, and can negotiate DHCP or SLAAC as required. There's no orchestration required to 'give' a subnet to Docker to allocate to containers, since the container is responsible for its own address. This also means the container is mobile across hosts without the orchestrator carrying its IP separately. Even with MACVLAN, Docker still statically assigns IPs to containers instead of using the network provided mechanisms.
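For comparison, the LXC side of that on a Proxmox host might look like this; the container ID and bridge name are assumptions:

```bash
# Attach the container directly to the LAN bridge and let it get its own
# addresses from the network (DHCP for IPv4, SLAAC for IPv6)
pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp,ip6=auto

# Equivalent line in /etc/pve/lxc/101.conf (MAC shown as a placeholder):
# net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:xx:xx:xx,ip=dhcp,ip6=auto
```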
@@apalrdsadventures oh man, the fact that docker is unable to just expose a network interface and get dhcp from the lan is so incredibly infuriating. You know they can do it, but they refuse to because "you should use Docker for networking".
Please, use "ip -c a", so it will color and highlight the addresses for your viewers. It just makes it easier to read.
Thanks for the tip. I'll set that as an alias for myself now for personal use.
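For anyone else wanting that as their default, the alias is just one line (assuming bash):

```bash
# ~/.bashrc - colorize ip output by default
alias ip='ip -c'
```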
@@shambles3833 Bonus points! ⭐
Damn... TIL! Thanks :))
Why are you not a huge Docker fan? Maybe it's because I'm also a developer, but as far as I'm concerned Docker is a godsend for simplifying the distribution of all the multi-technology stacks used in modern software and web development. Its networking isn't that complicated once you understand that it's trying to enforce a security model of explicitly defined access only. I've certainly never had an issue with it, although I guess that comes with being a developer who works not only on how my containers communicate with the outside world, but also on how different images and containers interact with each other, which probably helps me navigate all of that quite a bit. It's definitely not something that's going to be easily used or fully understood through a GUI most of the time; it's absolutely much easier to just hack away at the config files manually. It's a beautifully elegant solution that simplifies development and deployment and makes iterating on versions of software much easier to manage. So idk... I love it, because I hack away at and mod lots of existing open source tools, and make a lot of my own, for things like this.
Does Docker support IPv6 yet? Has it stopped trying to take over the host networking? Is it still inserting its firewall rules first in the chain?
Docker is bloat and a private company. But its idea is good.
Also good luck finding the external ip
I can understand why people use Docker, but when you're used to the incredibly small footprint and great flexibility of LXCs, Docker feels really heavy and rigid.
Like, I can put any Linux app / service / whatever in an LXC with almost no overhead or configuration, the networking takes care of itself - it's just so easy. If I want to add another service to the same LXC (not common, but it comes up sometimes), it's the native Linux installation process, no special configuration. If the app or service is updated, I can apply the update with apt, I don't need to wait for some third party to update their container.
Obviously this only works if you're on a Linux distro that supports LXCs, like Proxmox, and only using Linux apps / services. But if you're mainly doing that, adding Docker feels like bloat.
In general, my feelings on Docker networking:
- Docker by default completely mangles iptables, which is very irritating if I am already managing iptables and it breaks everything else I am doing
- Docker networks are private within the host, so I can't use them in a clustered environment by default
- Docker greatly prefers NAT, even with IPv6 (and it doesn't do IPv6 by default), so there's always a mess of which IPs are internal and external to Docker in the private space
- Docker with macvlan / ipvlan demands that I give it a subnet for it to assign IPs to containers, which means each host needs its own range to avoid overlaps, and containers aren't portable across hosts due to the IPs being different, unless some orchestrator updates the IPs in DNS (Kubernetes does this much better)
- All of these problems have been completely solved by network engineers, and Docker completely ignored their solutions to do their own thing
- LXC containers can easily get their address from the network using normal methods (SLAAC / DHCP) and use the network as normal systems do, so their address is bound to the MAC address and the container can be moved around the network and retain its IP and use 'normal' access control methods
- Not necessarily a Docker-specific problem, but Docker containers absolutely love to host http(s) on their own special secret port number for no reason
As to Docker as a development tool:
- Docker (the company) has been advertising Docker as the only form of containerization, and while containerized workloads are absolutely amazing, it was largely a solved problem in the linux/unix world already (Solaris Zones, BSD Jails, Linux LXC) before Docker came and brought their own bad ideas
- At least working in Linux development (and outside of node.js), packaging apps is also easily solved with native packages (basically choose either RHEL or Debian and package either RPMs or DEBs). Some development environments make this even easier (e.g. golang builds single-file binaries - quick sketch below).
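A one-line illustration of that last point (the module path is hypothetical):

```bash
# Produces a single static binary you can drop onto any Linux host or wrap in a .deb/.rpm
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o myapp ./cmd/myapp
```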
Great coverage of the device! tbh, the Minisforum MS01 seems like a better value option? Especially the 12600H version, which has more power for an NVMe NAS. The dual Intel X710 10G SFP+ connectors should work better than the Aquantia NIC, and with an adapter card in the x16 slot you can add a 4th NVMe SSD. Sure, you would give up a 5th slot for the boot SSD in comparison to the Ugreen, but this is well worth it imo!
It's a damn shame the AMD version doesn't have 10G ports... But the board they also sell with 7945x looks like a good option as well, if you want more power.
Personally, a NAS with only one 7mm U.2 slot doesn't cut it for me on the MS-01... I want at LEAST four 15mm 2.5in slots for 15.36TB-plus per caddy, 30.72TB preferred.
a cluster of MS-01s is an absolute BEAST in the homelab especially with the 2.5G NICs available for IPMI use.
@0xKruzr I already went 100GbE... sure, the MS01 is good for most at home, but 10Gb is cheaper than 2.5Gb... and still not enough storage for a NAS, nor redundancy, was my main point. 2.5Gb is not needed for IPMI. Great to use it for Corosync though.
great video, thank you!!
Only 4 drive bays?
At this price, just get a PCIe expansion card with 4 drive bays, it will only be $50
My thought exactly, not worth that price point at all.
yeh, and get a full-sized ATX mainboard and a full-sized tower and get a CPU which doesn't cost an arm and a leg and is efficient... etc....
If you don't get the benefits of it, just don't comment... 🤦♂🤦♂🤷♂
"It's like unboxing an Apple product... if Apple made a NAS"
Funny thing is that Apple made NASes(-ish) in the past (the Time Machine boxes). They even made WAPs.
not just that, they were *the first* to include WiFi built-in and made some of the first WAPs
Yup, the AirPort units were stable and very fast! The Express was an awesome device too!
you are clueless.... if Apple made a NAS they wouldn't let you install whatever software you want under warranty....
clueless ppl will always talk bs
@@sirdurex5581 True, now you know why they don't make them any more.
it does have dual USB4, but other than that this is a clear case where you should really just build your own serious NAS for half the price and double the performance - going all SSD/NVMe is a great way to go, but having a spinning rust RAID is always nice as well - prices for these will drop a bunch
Why don't they do 4 times gen4x2 or 4 times gen3x4 instead?
Limitations from Intel on how far you can bifurcate lanes from the CPU (the gen 4 lanes).
@@apalrdsadventures Could have put in a PLX chip, and at $1000 they could have fit it in the budget.
The cheapest PCIe gen 4 switch chip I can find has a list price of $205. It's not as simple as that.
@@apalrdsadventures It's still a mobile chip, and they wouldn't be buying in the same quantity or from a supplier they don't already do tons of business with.
If they wanted more PCIe gen 4 split options they would have to go with an AMD CPU, essentially.
For that price tag, how about something like the Minisforum MS01?
I hear about more and more people making 3-node clusters out of mini-pcs with Thunderbolt as the backend. I'm hoping we can see some optimizations to make the driver a bit more performant. I'd like to build my own sometime in the near future.
For the money I'd get the QNAP TBS-h574TX-i3-12G-US instead. 5 bays, and it can take E1.S, which opens up a world of options for SSDs even with that unit's PCIe limitations. It is nice that the RAM is upgradeable in this unit though. What we really need is a NAS maker to use something like an Epyc Siena or ARM chip in one of these - fairly low power but gobs of PCIe lanes; none of these consumer chips have enough PCIe lanes to really do an all-NVMe NAS justice. Of course that would cost a lot more though.
Can you re-upload a version of this video with your thoughts coming first, instead of at the end?
If I remember correctly, you can't put 8-terabyte SSDs in it? Max capacity is four 4TB drives?
Yeah, answered my own question from other comments.
When will it be available in EU? Please provide link.
Yea, I need more storage than that. I also don't need storage that fast. Mechanical disks work perfectly for my use.
Any way you could compare this to something like the Asustor Flashstor 12 Pro FS6712X? $200 USD less for triple the bays but no Thunderbolt. Curious what your thoughts are on that.
I haven't tested the Asustor (maybe I should email them), but they are using a considerably worse CPU (Intel N4505) which has PCIe gen 3 x 8 total, so they are splitting up those lanes to 1x10Gbe and 12xNVMe using PCIe switches.
All of that is probably fine for serving files, but adding applications / VMs on top it will not be happy, and the IO bandwidth is not there to support 12 drives (not that 10Gbe file serving actually needs the bandwidth of 12 drives anyway).
@@apalrdsadventures Hey thanks for the reply. I knew it was a worse CPU but I guess I didn't realize how much worse. I'm pretty sure the FS6712X is designed with something like a LANCache in mind so that makes sense. Sad to see but it happens I guess.
Looks pretty nice... wish I could get one for review. Also would be nice if they used an SFP+ port instead of the plain 10Gbps copper port.
They're using the Aquantia AQC family NIC, which is a single-chip MAC + PHY solution for multi-gig.
@@apalrdsadventures I'll have to read up... is it power efficient?
I've been using an Aquantia/Marvell NIC in my desktop for a few years now and it seems to be doing fine. It's not actively cooled, but it's in a desktop with fans for other components.
I believe it was one of the first 10Gbe cards to do multi-gig (2.5/5/10).
I really want to love these SSD NAS units... but I think I'll stick with my own 'built' system consisting of a mini PC with 1 NVMe and 1 SATA SSD and 64GB of RAM. Sure, there's no RAID running on it, but, as bandwidth permits, it stays in sync with a spinning disk server back at my main home office... I get my portability, speed and a powerhouse of a mini PC on top of it all that runs several virtual machines.
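A sketch of that kind of "sync when bandwidth permits" job (the host, paths, and rate cap are all made up for illustration):

```bash
# Nightly cron job: push the mini PC's data to the spinning-disk server at home,
# capped at roughly 5 MB/s so it doesn't saturate the uplink
rsync -aH --delete --bwlimit=5000 /srv/data/ backup@home-server:/tank/minipc/
```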
Good Stuff! Do you have videos on Kea DHCP with Opnsense?
Not yet for Kea.
I can imagine what you can back up on 4TB. Considering how expensive SSDs are, there is no market for this stuff.
Ah, power issues. Does it support a UPS, and if so, which ones? You can't afford data corruption due to power cuts. I bought the TerraMaster F2-424 and am about to put TrueNAS on it. I'm going to try putting a low-profile right-angle USB A cable in the onboard port to a slim internal NVMe case, then boot from an external USB drive to install TrueNAS. I've upgraded to 32GB to run a few containers, but as it's an N95 I'm not going to be able to run many on it. I mainly want it for my video library, and backups from Veeam Community Edition via iSCSI LUNs.
I'm not an expert, but it seems like there should have been enough lanes for PCIe 4 x4 to the 4 main drives, given the lack of a dedicated GPU and only having 2 Thunderbolt ports.
Even connecting two drives to the chipset should allow both to have PCIe 3 x4 (note, the whole chipset is limited to about the same bandwidth as PCIe 3 x4, so this would only help when just one of the two chipset-linked drive slots is in use).
All told, I would rather have all 4 drives at PCIe 4 x2 if x4 wasn't doable. It won't matter much for saturating the network, but it could help the system rebuild faster when a drive is replaced. Could even use the saved lanes for a 2nd NIC.
I agree Gen 4 x 2 would have been nice. Ultimately though this limitation is due to Intel. The U-series only supports 2x gen 4 x 4 PCIe ports, neither of which can bifurcate. All other PCIe must go through the PCH.
Intel Ark is certainly not clear on this; it lists the 1235U as supporting 'up to' 20 PCIe lanes (*including the chipset), while the chip itself only supports 8.
AMD is a bit different in that the CPU does not use a PCH so all of the PCIe comes off the CPU directly and it's less confusing. AMD also generally allows significantly more bifurcation down to x1 or x2 on more ports.
Try running with `iperf` rather than `iperf3`.
I have found that `iperf3` yields slower results than `iperf`. Not really sure why.
iperf2 and 3 work very differently (they aren't even related codebases).
But, iperf2 was probably testing using UDP instead of TCP
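To make the comparison apples-to-apples, something like this forces both tools onto the same protocol (the hostname is a placeholder, and each needs its matching server running: `iperf -s` / `iperf3 -s`):

```bash
# TCP, 4 parallel streams
iperf  -c nas.lan -P 4          # iperf2
iperf3 -c nas.lan -P 4          # iperf3

# UDP at a stated target rate
iperf  -c nas.lan -u -b 10G     # iperf2
iperf3 -c nas.lan -u -b 10G     # iperf3
```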
@@apalrdsadventures Admittedly I don't know if `iperf` (without the number) is iperf2 or not.
I don't remember what communication protocol it was using (I don't think that it says when you run it, not unless you add/explicitly ask it via more command line flags).
All I remember was that `iperf3` produced results that were slower than `iperf` on the same hardware/software setup (with `iperf` vs. `iperf3` being the only variable left in the equation).
iperf is iperf2 usually, but iperf 1 and 2 are related.
They are both basically unrelated to iperf3 (other than it copying their name)
@@apalrdsadventures Ahhh... okay. Gotcha. Thanks.
I still find it sad that there is nobody making a simple NAS appliance that does proper high-available NFS. For now I'm scratching by using a debian VM on top of proxmox with replicated ZFS local storage, but that's sadly not actual HA - one VM outage easily kills heavily written sqlite dbs on it.
Check out Kubernetes high-availability cluster nodes and the use of NFS with persistent volumes and claims - it works pretty well for replication; even if one node dies, the others are still there (and of course you can add more nodes, in your case the Proxmox VMs).
With microk8s it's just a few commands on Debian, which you can automate to fully recover your cluster, or even use Terraform to have a phoenix-rebirth-from-the-ashes infrastructure-as-code setup - pretty useful if you plan to migrate all the infrastructure to a cloud provider, for example :)
Maybe you could also look at Rook/Ceph, which is perhaps less barebones and more featured than NFS?
Also a warning: SQLite isn't made for replication; for that, use MySQL or MariaDB as the "network" equivalent of SQLite (which also lacks features and performance).
In Kubernetes you can deploy a MySQL cluster with the MySQL Operator helm charts, for example, and have reader and writer replicas maintained by the HA across your Kubernetes nodes (your Proxmox VMs in your case)
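A bare-bones sketch of the NFS persistent volume / claim idea mentioned above (the server address, export path, and sizes are placeholders):

```bash
# Static NFS-backed PersistentVolume plus a claim that binds to it
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 192.168.1.10
    path: /tank/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data-claim
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""        # bind to the static PV, skip dynamic provisioning
  resources:
    requests:
      storage: 100Gi
EOF
```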
@@galactic_dust42 I actually use my NFS servers to provide storage to my Talos Linux based Kubernetes cluster. Ceph I actually tried, and decided that its maintenance effort and failure modes are too much for my homelab, since I do not want to dedicate that much of my time budget to it.
If you want to take a look, my GitHub has a repo showing the entire Phoenix-Ashes resources (Klaernie/k8s-internal) and allows me to recover from total cluster failure easily, only excluding persistent storage.
SQLite is sadly the only option for Uptime Kuma, hence I'm probably switching to what I know best (from maintaining a large install at work): Nagios Core.
What is the power consumption of your spinning rust? 🤔
Are you able to run 8TB NVMes in the 4 slots on the UGREEN? 🤔
So if it's running 74 watts at full tilt off AC power, it would really only consume about 60 watts if it were fed from a DC power source.
If it's 20 volts, why are they using a barrel plug and not USB-C? 🤔
If you are testing a lot of devices like this it'd be nice if you got 2 sticks of 48GB RAM to test compatibility
12:30 "SCSI's more performant than VirtIO" it is? I was under the impression VirtIO was more performant because because it was "native" to virtualization as opposed to having to fake a bunch of SCSI protocol stuff for the sake of standards compliance.
sorry - virtio SCSI is more performant than virtio BLOCK. I assumed that they were using virtio-scsi, although that may not be the case.
Generally SCSI support in operating systems is fantastic, since basically all modern disk protocols are SCSI-based. Virtio SCSI tunnels SCSI commands across virtio, instead of tunneling a purely virtual command set.
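In Proxmox terms, the two options look something like this (the VM ID and volume names are placeholders):

```bash
# virtio-scsi: disk shows up as /dev/sdX in the guest, full SCSI command set
qm set 100 --scsihw virtio-scsi-single --scsi0 local-zfs:vm-100-disk-0,discard=on

# virtio-blk: disk shows up as /dev/vdX, simpler purely-virtual block protocol
qm set 100 --virtio0 local-zfs:vm-100-disk-1
```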
@@apalrdsadventures thanks! I have always picked VirtIO block in Proxmox, assuming "SCSI" was going to necessarily be less well-implemented and therefore less performant.
It should have at least 8 NVMe disks, preferably 16, even if at 1x PCIe 4.0 each. BTW, how cool would it be to have USB4 network switches, especially considering USB4 v2 (80Gb/s) is a thing.
The camera has problems with focusing at times
You can load OMV or truenas on this?
Yes, SCALE only for TrueNAS
@@apalrdsadventures might be worth looking into then. Thanks
I have no use for HDDs even in a NAS, because of their far slower transfer speeds.
Question is, should I buy a NAS that has both 2.5-inch SATA SSD bays & NVMe slots, or keep searching/waiting for an all-NVMe NAS that isn't wildly overpriced?
I have no clue. I only have about 1TB of data, so I can buy very fairly priced 1TB/2TB NVMe SSDs.
What should I do?
At only 1T I would go all NVMe. You don't even need parity coded RAID setups, simple mirrors are easy to work with.
@apalrdsadventures can you please explain to me the basics of what simple mirrors are? I would rather hear your response than try to understand it on Google. I'll of course Google/UA-cam it to learn more if I need to.
@@apalrdsadventures Also, how many 1-2TB NVMe slots for my 1TB+ of data do you recommend I use to ensure there's enough redundancy to not ever lose any important data?
With one disk, you have one copy of the data, 100% of the space is available for data and there is no redundancy. Performance is equal to the disk.
With two disks, you have the same data on both disks (a mirror), so 50% of the space is available for data and 50% is for redundancy, but the redundancy is trivially simple (just another copy of the data). Read performance is the sum of all disks and write performance is the slowest disk.
With 3+ disks you could continue to mirror, which gets you very high levels of redundancy and potentially extremely fast read speed scaling. Capacity is equal to the size of one disk, so a 3-wide mirror is 33% and a 4-wide is 25%.
With an even number of disks you can also make pairs of mirrors (RAID10), where you pair the drives into mirrored pairs and then sum the capacity of those pairs together.
RAID5/RAID6 are forms of parity RAID which distribute parity information along with the data, which can be used to compute the original data from less than all of the chunks. So, for a 4-disk RAID5, you have 1 drive capacity worth of parity and 3 drives of capacity usable for data (75% efficiency) and can lose any single drive and rebuild. Reads require reading at least 3 disks, and writes require writing all 4 disks. Sequential and large file performance is still very good, but tiny accesses can amplify if they are less than the block size.
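In ZFS terms, the layouts described above look roughly like this (device names are placeholders, and these are alternatives for the same four disks, not one command sequence):

```bash
zpool create tank mirror nvme0n1 nvme1n1                            # 2-way mirror, 50% usable
zpool create tank mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1     # striped mirrors (RAID10-like), 50% usable
zpool create tank raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1            # RAID5-like, ~75% usable, survives 1 failure
zpool create tank raidz2 nvme0n1 nvme1n1 nvme2n1 nvme3n1            # RAID6-like, ~50% usable, survives 2 failures
```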
How could I get Proxmox to see those NVMes as Ceph OSDs?
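Nobody answered this in the thread, but roughly, the Proxmox CLI flow looks something like the following (device and network are placeholders; the web UI can do the same steps):

```bash
pveceph install                       # install the Ceph packages on each node
pveceph init --network 10.0.0.0/24    # once per cluster, pick the Ceph network
pveceph mon create                    # run on at least 3 nodes for quorum
pveceph osd create /dev/nvme0n1       # one OSD per NVMe drive, per node
```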
Now I see that it has a boot drive. This is a full blown server.
"I get plenty of people in the comment section saying oh that's so expensive you could do used Enterprise for cheaper"
bro, you could build a much higher performing and more flexible system using consumer parts for the price of this. the $999 price tag is silly and no one should buy this...the early bird pricing of 50ish% off was fine, but the full retail price is absurd. No thanks!
Disappointing that despite NVMe being the standard for coming up on a decade, we're still stuck with expensive boxes at at-most x2 speeds. You'd think something designed specifically to be a NAS would be designed with proper PCIe links.
Biggest issue is networking; even 10GbE is at most 1.25GB/s, so even a PCIe 3 x2 (2GB/s) link won't be fully utilized
The Intel U series 12th gen chips have only 2 x PCIe gen 4 x 4 interfaces. They can't be bifurcated to go below x4 per device.
Blame Intel for these limitations on soldered chips. None of their soldered chips support more than 3 PCIe devices - the H-series add a third interface at gen 4 x 8, but it cannot bifurcate.
AMD is not nearly as limiting on their soldered chip families.
@@apalrdsadventures yeah, I see the technical reason, but again, if you're designing an NVMe NAS, why would you not then go with an AMD chip to produce at least a consistent product?
Yes please
Man, $1,000 for this little device. The cost is a bit too high for me. Why does this device have such a high price tag?
Who buys this garbage for $1000? I spent $800 on my home server: it has an Epyc 7F52, Gigabyte MZ32-AR0, 256GB DDR4 3200, RTX 3060 12GB, RTX A400, 3 Asus gen4 RAID cards, and an EVGA 1600W PSU, all in a 4U server case. That price is without the storage; then I have twelve Samsung 990 2TB NVMes, 6 Western Digital Gold 12TB HDDs, and 2 Intel Optane 64GB. I'm running TrueNAS SCALE and Plex. But my point is I built the server for around $800 and, with all of my storage, I still have the capacity to grow, and I'm always idling at a couple percent. If any of my parts die I can always replace or upgrade. But why would somebody pay $1000 for something that, if it breaks, is completely bricked and non-upgradeable, and you only have a couple slots to install storage?
The 7F52 retails for $3100; it's nowhere near this class of system
@@apalrdsadventures Well obviously I'm not talking about new. Everything besides the storage was used; I only paid $450 for the motherboard and CPU.
@@apalrdsadventures I just Googled it and I saw one that just sold for under 300 bucks: AMD EPYC 7F52 16-Core 3.50GHZ 256MB Cache Server Processor CPU 100-000000140
US $295.00 on eBay. I totally agree with this guy, used server hardware is the best way to go. 128 PCIe gen4 lanes, DDR4 3200, 16 cores at 3.5GHz, 256MB cache. If you gave me a $1000 budget I could shop around and build something that's 20 times more powerful and has 50 times the expandability. This Ugreen NVMe NAS is a joke.
answer: Yes.
protect what you love.....put it "safely" on this chinese nas :D love your videos though man :)
Great video, but completely unrealistic from a price perspective. 4TB drives plus the unit is $2k. Not a homelabber setup, especially with 3 of these as was suggested.
What's the point of only 4x M.2?
4x 4TB in a RAID configuration gives you tremendous speeds, plenty of storage and reliability. The real question is: what's the point of your comment?
Ugreen! Not good. I bought their RAID box, but it suddenly crashed without recovery and I bloody lost all my data. That one had an integrated high-end controller, but I can't believe the consequence was like that. Their CS had no explanation for me when I asked them many times, so I put it in the garbage after only half a year of lifetime. So I would think twice before buying any product from Ugreen again
i want a big ass rack mounted server but my wife would not let me :/
All NVMe? What network connection do you have at home?
He basically tells you at the beginning, and the 10GbE he's using is a big part of the video. C'mon dude.
@@KapitanMokraFaja all NVMe with 4 NVMes is like 500Gbit/s
@@elalemanpaisa you asked a question whose answer has already been provided in the video. Again, c'mon dude.
If you wanted to complain about something else, maybe you should've worded it differently.
I stopped using consumer drives over a decade ago for critical infrastructure, ESPECIALLY for NVMe drives. My U.2 drives are hot; they need air cooling, but not much, even though they pull up to 20W-30W (8x PM1733 U.3 15.36TB) depending on firmware, which is impossible to find for Samsung enterprise drives. Next time I will go with Intel, as you can find the firmware. SR-IOV for NVMe, yes please. Different firmware for different load types? Yes please. Slow down... fuck no.
8x 15T U.3 drives for home storage?! Deep pockets! ;) What do you use to run that many U.3 drives?
way too expensive
that power supply is gonna fail - too small to dissipate 140W - be careful of fire.
the power supply doesn't dissipate 140W, 140W leaves the power supply electrically. The power supply probably dissipates around 5W.
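For reference, the back-of-envelope math behind that estimate, assuming the full 140 W output and a few plausible efficiencies:

```latex
P_{\mathrm{diss}} = P_{\mathrm{out}}\left(\tfrac{1}{\eta} - 1\right):\qquad
\eta = 0.90 \Rightarrow 15.6\,\mathrm{W},\quad
\eta = 0.93 \Rightarrow 10.5\,\mathrm{W},\quad
\eta = 0.965 \Rightarrow 5.1\,\mathrm{W}
```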
@@apalrdsadventures 5W would mean it's 97% efficient - ain't no way. Anyway, that's not what I meant. I had one of these style chargers and the thing started crackling in the wall and I smelled the burnt-capacitor smell. Unplugged my laptop. I shook the PSU and then you could hear bits rattling around inside.