You can get your firewall in HA mode (I think): Create a VLAN and plug your fiber into an Switch that uses that created VLAN as an untagged port. Tag that VLAN to every port where your ms-01s are plugged in and you should be able to use a VM with an ethernet Port configured in Proxmox with that VLAN. Mark it in the Cluster as HA and it should work.
you dont even need vlans, just connect all minis to a switch + isp and whenever one mini fails, opnsense will get restarted and connect with either pppoe or based on MAC dhcp, good thing is, even if using virtIO networking, you set MAC inside opnsense without having to set it physically on the port
Just a small word of warning with the ms-01. I had nothing but problems with proxmox using the 2.5G ports with proxmox/opnsense. They would just not run at full speed at least with opnsense. I ended up buying a managed 2.5G mikrotik switch and used the 10G ports from the ms-01 and connected my modem to a 2.5G port and that solved all my problems.
This is exactly what I would do with a handful of these machines. It's cool to see someone actually do it, especially with the config being a little atypical.
Hello, really good idea the 40 gb USBC for the internal Proxmox ring, what is the cable type you use for that ? In other video can you focus on the proxmox config ?
I've done this, but I installed a 4 port sfp+ adapter in each so the entire cluster is using DAC. One 10gbe ring network for the proxmox cluster and another for ceph. I don't trust networking over USB.
Great video looking forward to the next one. I keep changing my mind between using an itx motherboard with a 13500t or higher and these machines. Keep them coming & thanks for all the hard work on these videos.
@@Jims-Garage Absolutely, these machines are great for the size. More expandable is also going to mean bigger. I currently do similar to your setup but using micro pc's (10th gen cpu) but always limited by the network which they only have one with these machines you get four plus expansion. I think these machines are just in between the dell & HP micro pc's and a full size pc. My only other concern would be life span of the units and replacement options for when they do fail, I suspect in 5 years if one fails you won't find them around and your back to hashing it out again :) Fair play for taking the jump not sure I would, well not until I watch your videos anyway :)
I'm also a bit torn here, I'm trying to get as close to a "do it all" server for VMs, router/firewall, NAS, all in a somewhat low-power and small package (sure I could just go with a cheap Epyc chip + mobo from China on ebay, but that's a bit much). The Ar900i ITX from minisforum is also appealing, it's got four m.2 slots and a 16x PCIe slot, after doing some digging I've found someone on reddit had success with a M.2 to 16x riser cable on this board, and was able to use an LSI HBA on this adapted slot to add hard drives to it (keeping the onboard 16x slot free), he also made a 3d printed bracket to fit it in a Jonsbro N3 case on the opposite side of the board from the onboard 16x slot to hold this LSI card. I also found another video confirming the Synology E10M20-T1 works on this board, this adds a 10g base-t port and two additional m.2 slots. So as long as I'm fine without a dedicated GPU, this would be quite the beast of a machine that has everything else in an ITX package. I'm leaning back to the MS-01 though because of costs, having a bunch of HDD storage would be nice, but I don't really need it, and PCIe 3.0 U.2 drives are pretty affordable for a good chunk of reliable storage. I'm thinking of going two Intel Optane 1600x 128gb m.2 in a stripe, not a lot of capacity but enough for VM OS volumes for what I'm doing, and the limitation of the 3rd slot at PCI 3.0 x2 doesn't hurt this drive. Then in the future if I really want to add HDDs, I could hook up a USB 3.2 jbod enclosure. And now I still have the 16x free to add a better GPU or whatever else.
@@coinholio470 it's always a difficult decision but both are a good choice. Pick one and be happy IMO. As for m.2 to PCIe, I've been using one for a year in my main PC. I record all of my videos with an elgato card that is attached to my m.2 via PCIe adapter. So far it's been perfect.
@@coinholio470 If you need storage you could always hook up some network attached storage instead or usb (more stable) I think the key thing with these systems is power usage vs network interfaces, you just can't find anything else in the same size, power usage with the same nics (I have tried). My only concern is how long they will last, I expect if they fail it will be an entire new system and this is were the price may seem not so good. I'm still on the fence with purchasing and fair play to Jim for taking the plunge.
Great video Jim! Currently in my homelab I have DL360 Gen10 and DL380 Gen10 (currently for sale) which I want to replace with three MS-01. Your video convinced me that this is a very good idea.
@@Jims-Garage Thank you for your answer! I'll think about it more, I'm currently testing my DIY server Specs below: CPU: AMD Ryzen 9 5900X, MOBO: Gigabyte MC12-LE0, RAM: 4x 32GB DDR4 ECC UDIMM, 2 port 10Gbps SFP+ NIC, 2x 256GB SSD in the mirror on Proxmox and 4x 1TB NVME in the adapter connected to PCIE x16. This mobo supports bifurcation. The server is in the IPC 2U-2404L housing So far I'm very satisfied. Now I'm wondering whether to build two more such servers or buy three MS-01.
@@oSAendI’m curious about the power draw on your DIY server as I like the BOM you came up with. How does it compare to the MS-01? I particularly like that it has ECC RAM and you can bifurcate the x16 slot to get the most flexibility / performance out of it. As you say it’s really food for thought 🤔
from a power draw perspective alone, this is going to be a huge upgrade, especially with the i5 version. and you don't even sacrifice management, you still have IPMI etc. IMO this box is a perfect homelabber system.
I've recently setup a medium availability Opnsense on my pve cluster using the proxmox HA rather than Opnsense HA as I only have a single public IP. The key for me was using the cluster resource mapping for the hardware pass thru and having the cable modem connected to all the node Wan ports using a separate switch (in my case I used locked down vlan ports on my main switch), set the Mac address in the wan interface and then as long as opnsense is only running once (as it should with HA) then only one wan connection is active. You could use a small SFP+ switch like a mikrotik for sharing your wan to all nodes. Works great, single IP, not true HA but in an unplanned failure, downtime is only a few minutes and for planned stuff, I can migrate easily.
I spent a couple of days trying to iGPU passrhough the Xe to a linux VM and had no luck, so I'll be very interested to see how you go. For now, I've installed the desktop OS directly on the MS-01 and using an Incus VM to run Proxmox. I'll also be running Proxmox Backup Server in an Incus LXC container and passing through a 4TB NVMe to it.
@@Jims-Garage Great. Do you mean you currently have full iGPU Xe passthrough working with a Ubuntu VM on an MS-01? I track a few forums and I have not seen anyone post anything positive yet. Which guide do you mean?
Thanks for showing the topology! This gave me some ideas. MS-01 showing up on Monday. I was about to mansplain adding a switch between ISP and firewall but looks like others beat me to it. I don't have enough confidence in not screwing myself to not keep the firewall HA on totally separate hardware and get at least a /29.
I’ve been eyeing these for a similar 3 node cluster setup. Great idea with using thunderbolt for network, I haven’t thought about that! I agree that the chassis limits the options for the add-in cards, and it feels like a bit of a waste to just discard the chassis.
For the single point of failure / non-HA firewall issue, you could bring in the WAN via a VLAN on the last SFP+ port on your Unifi switch, then you can migrate the firewall VM across the cluster without any hardware dependency and minimal downtime.
For my fiber internet, I terminate to the switch and layer 2 vlan it to proxmox over the VLANs port. Then, the firewall can migrate to any host. Also have the option of HA firewall. But with firewall on ceph clustered storage, the hardware concern is less of a concern.
Hey, for your redundant firewall issue, you could plug your fibre "modem" into the switch on an isolated vlan, and trunk that vlan to all three nodes. Your opnsense VM can then float between all the nodes and because you've just trunked it over a VLAN you'll still get the DHCP public address on opnsense. I think you could also set up two opnsense VMs running in HA like this, if one went down it would take on the IP of your primary VM.
@AshleyTayles ha, that's okay. I ultimately end up with 1 VM using HA inside Proxmox. These all share the internet via a cheap switch. I demo the fail over as well.
Just bit the bullet on my second ms-01. Been using my first one as my gaming rig after a failure of my other PC, with an A2000 (custom heatsink) in it it does the job great. Now picking one up to replace/augment my aging i5 nuc home lab, with something that has a bit of oomph.
Hi Jim , It is very exciting homelab project but i suggest putting links every products you use to get the highest bandwidth available like the brand name of thunderbolt cables you are using etc. Cause I noticed that so many people on the net could get enough speeds over thunderbolt bridges . Awesome work btw. Congratulations.
Got my MS-01 last week to replace my 10 year old proxmox box, pretty nice box! Put an NVME in the pcie slot to mirror the gen 4. Could not think of anything else to put in there haha
Cool idea with the Thunderbolt ring. Curious how the Ceph cluster is performing? I also have a cluster of 3 and considered a Ceph cluster. If I understand it correctly, a replicated storage array would hit the SSDs much harder than if the VMs were just running on their individual node.
I'll hopefully be able to feedback on performance in the coming weeks. It will definitely hit the drives harder but I estimate around 5 years for my nvmes. I'm happy with that.
Hello and thank you for your great videos! I have two MS-01s myself with the i5 processor and 96GB Crucial RAM each. My setup is not really running smoothly. Could you perhaps tell me which components (how much RAM from which manufacturer) you have installed and whether the NVMe with Proxmox on it runs as ZFS?
Hey, thanks for the comment. I have 64GB of Corsair Vengeance (believe it's 5200 Mhz - nothing special). I'm running the Proxmox OS on non-ZFS (ext4). Ordinarily I would advocate the use of ZFS, but in my case I have 3 devices and wanted to keep costs down. I figure if a node fails I have 2 to take over and can replace the disk.
Could you not bring the fiber into a small fiber switch first the break out three ips from it? Not a dumb switch obviously, but I can think of a few devices which would work. Even a low power firewall if you want that functionality. You could handle the connections on the nodes in a number of ways.
@@XtianApi no, I can only obtain 1 IP. I have since placed this into a small switch with each port going to wan on a virtual firewall. This is configured for HA (see on later video)
@@Jims-Garage cool. I have since finished the video and saw what you mean. That's cool. I got interrupted at work, lol. I'm trying to find a decent affordable swith that can do multi chassis link aggregation. Don't want to spend enterprise money. Lots of switches do link aggregation but only the higher end ones seem to be able to do multiple instances of Link aggregation. Random side thing, I know. Anyway, thank you!
Great videos! I've been working on assembling a similar MS-01 cluster and really appreciate that you've got the thunderbolt networking in particular already worked out for these devices. One question - wrt the 980 Pro, what speeds have you seen on the primary NVME slot? I'm only seeing a peak of 3500MB/s disk reads on the 990 Pro's (vs 7000MB/s). The asterisk on the drive's description says you need to go into Samsungs windows drive utility and enable something called 'Turbo Write' - which appears to be a volatile write burst buffer. Curious if you've done that or had any advice for a newb on whether that's a good thing to enable on a linux proxmox server.
@@Jims-Garage I'm def on the right slot - I've actually got 990 pro's in both of the first 2 slots and they both report the same ~3500MB/s speed. The 3rd slot reports 1300MB/s, which is expected. I'm a bit newb-ish still, but here's the command I'm running (in Proxmox 8.2 shell): "hdparm -Tt /dev/nvme0n1". It reports: "Timing buffered disk reads: 10156 MB in 3.00 seconds = 3385.11 MB/sec". I'm going to grab Samsungs 'Magician' tool for windows and see if it offers any clues.
unsure where my earlier comment went, but it had an external link so maybe it got cleansed! had recurring lock up issues with the MS-01 with a k8s workload! after doing all the usual things: bios update, microcode updates etc things are looking more steady. would love to get this working stable without having to do something like disable the efficiency cores. have you had any lock up with container workloads? great video!
Nice set up Jim! I'm looking at getting these MS-01 as well. I've setup a thunderbolt ring on some old 2013 macs using syncos guide and it is great to have that speed on such an old machine. About your Ceph setup.. don't you want to run the VMS on your Ceph drives for High Availability so that if a MS-01 node fails it will switch over very quickly? Or do you have another plan for that?
Yea, that's the plan. However, I do iGPU Passthrough which means they won't fail over. Others will benefit from it though, and the data will be secure.
Hi Jim, great video! About the coral TPU, I'm using one together with iGPU passthrough in frigate. The TPU is used for the frigate detection mechanism, the GPU for hardware offloading the ffmpeg streams. So frigate benefits using both. Frigate is running HA mode. If a node fail it will boot on another node just file. Live migration is not an option, but the system is back in about 3 minutes, which is totally fine with me. I use the resource mapping feature in proxmox for this.
@@Jims-Garage I have the single indeed, because my slot has only one pci-e lane connected, as most wifi slots have. Some have no pci-e connected at all... Only CNVI / CNViov2. If your slot has only one lane connected, at least one TPU should show up... You could try it in an other slot with a M-key to A/E key adapter?
Hello @@exteraNL . Do you use an adaptor for Coral? Or your mobo support it? I found a mobo Asrock B660M-HDV which has "1 x M.2 Socket (Key E), supports type 2230 WiFi/BT PCIe WiFi module". Do you know it support Coral? Thanks!
I have the MS-01 and SR-IOV with the Intel iGPU works fine. I have the 12th gen variant and it works out great. I loaded it up with 2x48 GB sticks and this is my low power Proxmox node. I have successfully passed through the iGPU to multiple VM's at the same time (max. 7) and GPU decoding/encoding also works. Just know that when it comes to LXC containers there are some caveats from what I understand. Might be better to just run a VM with e.g. Docker or Podman on it and then spin up a Handbrake or Jellyfin container or something if that is what you're going for.
Hi Jim. Did you look at ipvs(adm) and a virtual IP address? Maybe three lines off of a switch connected to the modem, so that each of the nodes can assume the role of the uplink with a virtual IP address just in case the "current active node" goes offline`? As long as the HW switch between modem and cluster doesn't fail, your should be good to go.
I'm not sure what your exact issue was with the internet and fiber that caused you to remove HA, but if I understand correctly, I would suggest one thing. You could plug your fiber into an unmanaged switch with a fiber SFP optical transceiver, and then you could plug two connections, using RJ45, from the unmanaged switch into each MS-01.
About not having a failover for your firewall, could you not just run your ISP fiber connection into a cheap unmanaged SFP+ switch, and then run a fiber to each MS-01 and have Opensense on each device for failover that way? EDIT: Nevermind. I just started watching your next video and you covered this. Thanks for the heads up on that WAS-110 SFP+ module! Ive been wanting to bypass Google Fiber's ONT box and go straight into my MS-01. I hate having fiber to the house and then having to run ethernet to my otherwise fiber backend. This way I could stay 100% fiber.
@@Jims-Garage Yup! Just found your channel and videos. I'm running pfsense currently in a VM in unraid. I bought an Intel QAT to pass though to it, but soon discovered that it is blacklisted in the vfio drivers and I'm having troubles bypassing it. Let alone the free version of pfsense will not allow the use of the QAT. So my solution is just to migrate to a bare metal install of Opensense to run the card.
@@Jims-Garage for homelabs this is the kind of thing where DDR5 for consumers comes in handy. For the lab I'm overall not too worried about ECC. I'd never go without it in an enterprise environment though. Sometimes I miss the hardware raid controllers with their own redundant power. Then I remember how annoying it was when they died.
Thanks, it kind of is. My fibre goes into a switch and converts it to 2.5Gbe rj45. It's because all the IO is stupidly upside on the MS-01 and my was-110 doesn't fit due its heatsink.
Excellent video Jim, I too now have 2Gb internet from a company called BRSK. I also just installed a 42U rack and some aircon in a 3m x 1m room I have that comes off my office. This thunderbolt networking looks really cool. I don't think I can justify another spend on these things though once I have seen this now I will find resistance hard. I already have a flawless 3 node cluster of ryzen 5 3600s though with 128Gb RAM each. Not using ceph though, I have a Flashstor12 over 10Gb and another biggish Truenas box.
@@Jims-Garage I'm also trying to go low power now so am messing with a lot of arm stuff, have you seen the turing pi2 and the RK1s? I have 4 of those with 32Gb RAM each and did something funky with the M.2 slots. Instead of being conventional I got some m2 to 10Gb adapters and stuck them in so each of the RK1 has a 10Gb going to the flashstor too. Working well, just got proxmox running on them al running in a 2U rackmount.
@@DigisDen those look cool! I like the idea of them, I just find arm lacking in compatibility for many of the things I need. Hopefully that'll change in the near future.
I have bought because of you three MS-01 😂 but I am thinking to buy a fourth one. Because I am looking for a nice HBA to replace my power hungry 8x disk setup (ZFS RAID 10). But on UA-cam there is also a dude that got an expansion card where you can actually put 6(!) M.2 SSD in the MS-01! So how are you doing your storage for your homelab? And why Longhorn and not CEPH?
@@erwin757 I use Ceph for VMs and longhorn for Kubernetes data. I don't recommend the MS-01 as a NAS, I would build something with plenty of HDD storage.
Wow. Amazing video. Can you please explain vlan setup and also software define network over the 3 proxmox node? Maybe a squid proxy to make vm'n in sdn connect to internet
I am curious to see if you have any problems with the 2.5 GbE ports. I had the I226-V (the one without Intel AMT capability) for the fiber connection and an ONT that can handle 2.5 Gbps. Ever so often, that connection stalls and can only be recovered by a NIC reset (either ifconfig down/up or cable disconnect). This is a problem that has been discussed on pfSense forums as well and is actually worse than with previous generation I225 chips where this never happened to me. It seems to happen only at 2.5 Gbps, not at 1 Gbps.
@@Jims-Garage Kernel 6.8 indicates you are using OpnSense under Proxmox. Is it true that you use the NIC in bridged / virtualized mode or do you have it passed through to the VM? My reason for asking is that I found a "reset on TX hang" part in the igc driver - but that is present only in Linux, not in FreeBSD.
How are the P-E cores handling the virtualization? We tried running virtulizers on 12Gen when it came out and it was a horror . ESXi , Proxmox and Red Hat KVM didnt know what to do with the e-cores. The only workaround was to just pastru the P cores which makes buying this thing pointless.
I'm on kernel 6.8 which I believe has an improved scheduler. So far I haven't had to manually intervene and everything seems fine. I will dig into core utilisation once I'm a little more settled.
All great stuff, nice seeing you doing more clustering~ Not certain how much experience you have with ceph with SSD OSDs but just keep in mind you want enterprise SSDs/NVMe(s) for longevity and reliability; plenty of fast consumer SSDs but not many of them handle the demands of ceph -- if you see worse than expected speeds keep that in mind.
Thanks, appreciate the warning. Some experience but lots to learn. Was going to try with 1TB nvme with 550TBW - see how that works out. I've some similar with longhorn for a few years and imagine it's similar wear and tear.
@@Jims-Garage The problem is not really TBW (for good consumer drives) but rather the poor write speeds over time/and both with heat. Also, you really should use a drive with PLP power loss protection. Consumer drives really are just a poor choice for Ceph.
UA-cam ate a reply I had in here, but it was along the lines of considering an M.2 NVMe carrier board (with a switch chip as the MS-01 doesn't appear to have bifurcation support, unless that has changed recently) -- won't be able to recommend the Supermicro AOC-SLG3-2M2 because of this but there are other options out there if you need more NVMe storage. Do keep in mind that the switch chip on these carrier boards will be the limiting factor so look for something beefy that carries all the needed lanes for each NVMe SSD. Finally, backing up what Justin has added -- you really want to seek out NVMe SSDs with PLP -- these are usually designated as M.2 22110 and are all mostly enterprise SSDs as they need that additional room for capacitors~
k3s storage is something I struggle with. I have used longhorn, NFS and openebs. Then moved to ISCSI, and everything seems to be really stable, but that means I'm relaying on a single instance of truenas. It would be cool if you could make a video comparing HA storage solutions for k3s and their performance
Longhorn can be extremely buggy. I have lost databases because of it. When everything works, it works beautifully, but every few months, something fails to start. The biggest issue, in my opinion, is the io speed. If you have a demanding work load or a large db, everything comes to a crawl. This is with nvme storage backing across 4 nodes
@@hansaya yes, I've witnessed many of the things you mentioned. Thankfully I don't have anything that is too heavy on IO but if that changes I might need a different solution. I'm keen to go ceph for a single solution but need to figure it all out first.
@@Jims-Garage One thing saved my bacon many times is using a reliable backup solution. Learned that from my longhorn saga :(. I use Velero and I highly recommend it. Very easy to use as well
Suggestion , buy a unifi aggrefator switch. Link aggregate the rwo 10gbits interfaces at 20gbits and your firewall Will be abble to migrate and have high avaibility. This bucause if router fails everthing fails on the cluster , i Will bot like my firewall and internet to BE a point of failure 🙂 cheears from Portugal 🤗
Hi! Jim, I have exact same setup like you but I have ordered dual ports 25gbe pci card for cluster. I am curious if I use TB4 port on MS-01, Do I need just cable or need TB network device to use them as cluster network? How much speed you get on TB4 networking?
Will this SSD fit in the U2 slot of the Minisforum MS-01 and function without any issues? It's a Western Digital Ultrastar DC SN650, U.3, 15MM, 15,360GB, 2.5-inch, PCIe.
@@Jims-Garage it works but for the 15 mm thickness i need to buy 3d printed case extension cost 25 euro including shipping. but then I cancelled the 3d print order and opted for samsung u.2 7mm 15.3tb mlc pcie 4 gen
I'm looking forward to how you set up this cluster! I grabbed some MS-01s too and I'm excited to see how you set up thing so I can also improve my setup! I'm really curious how you got the internal proxmox ring buffer set up. Right now I have it all connected to a switch since I've never done something like that before and I'm not quite sure how to do it correctly.
don't be hung up on lack of ha fw - do this part with sep boxes, also run 20g bonded and then make the management network with 2.5 - simpler, faster and easier
I have it set up exactly the same way. I only used Ipv6 for the internal routing from a Video i found online. Works great. No i just realiesed I'm using the 2,5gbe for the internal ring and i got the 10gbe bonded to the switch
Using Thunderbolt 4 for a ring network is honestly genius. I do have to ask though... Since you use Thunderbolt 4, shouldn't the theoretical throughput be around 40 Gbit/s? You mentioned it's around 2.6 GB/s, so it's around 20 Gbit/s. Where's the other half? Is it just a limitation of Proxmox? Do these Minis Forum PCs have cheaper Thunderbolt chips built in and it can't use the full capability? Or does Thunderbolt not allow Full Duplex communication?
It's because the networking stack within TB4 caps out at ~20Gb. Data transfer should reach ~40GB. The TB4 used is fully certified in the MS-01 for full speed.
Nice, I got 4 of those and did something very similar except I went 2 TB wdlback 850x's for ceph and 1tb patriot p300's in the next slot for the proxmox drive. That ring drive was a disaster for me on 4 units (I had a 4 unit setup with the beelink ser7's previously)
@@Jims-Garage yep. I put the opnsense on dedicated hardware though. virtualizing the router sketches me out. I did it with a 12th gen intel but if i were to do it now I would probably investigate the aliexpress lower power stuff with 10 gig and then do two of them in high availability just for the router.
I have one with two WD Red N700's in RAID Z1 mirror and 96 GB RAM. Do you mind if I ask you about thermals and noise? On mine, it idles at about 42C and if the CPU gets up to 5% usage, it goes up to around 52C and the fan gets annoyingly loud. This does not seem normal to me. I could probably tune the fan in the BIOS but it doesn't seem like 5% should raise temps that much.
@@praetorxyn sounds like mine. Have to remember it's a laptop with a small heatsink, powerful CPU, small fan, and aggressive curve. I'm going to take the covers off mine and strap noctua fans to them. Ugly but I don't look at them.
@@Jims-Garage Ah. Damn. I was hoping my situation was abnormal and that repasting the CPU or something might improve things. I have not done much with my MS-01 yet, as I have been trying to sort out remote KVM. Remote KVM works flawlessly in MeshCommander, but it's not maintained anymore and MeshCentral has a much nicer interface while being more secure etc., as it uses a CIRA tunnel for AMT. I can get AMT connected in MeshCentral but remote KVM doesn't work for some reason. Haven't looked at it in a few weeks, but I wanted to get remote management sorted out before I really started setting up Docker containers on it etc., as I didn't want to have a bunch of stuff I was depending on running on it when I might have to physically move it to make a BIOS change.
@@Jims-Garage MeshCentral is still being maintained as far as I'm aware. The setup is just more involved. I will say I first tried the Docker version and didn't have much luck. I have had better luck running the actual NodeJS version as a systemd unit, but still haven't managed to get remote KVM working. Mesh Central can give you comman ds to install user agents on machines etc., and that all works fine out of the box. It's the AMT stuff that's kind of tedious to setup, particularly since I'm using swag as my reverse proxy. Currently m y stuff is running on a Synology DS918+ and the version of Docker on there is downright ancient, to the point I can't even have one Compose file include another one. I'm planning to switch to Traefik at some point and thought I might have better luck making it work with that, but I haven't gotten it done yet, as I was kind of wanting to get Traefik setup on the MS-01, but I started tinkering with the remote management first. So it's something of a chicken / egg problem.
Lets Talk opnsense, I have this exact machine, and I cant get more then 500mb down of my 1G... plugged directly into modem I get all the speed. Ive tried bridges, q35, i440, passthrou, bonding... I just cant get it... can you point me in the direction you went or issues you had?
It is something I have weighed up but it's very difficult to counter. I do obvious things like blocking egress traffic to china and Russia etc, but that's not a robust control.
I was looking to get a MinisForum MS-01 and see if I can fit a MBA card and use it for storage , but can't find a external 4/6 bay unit which just has a PSU where you can plug the all the sata plugs into it , they are all USB or thunderbolt . With 20 line CPUs it's hard to work out how do you get a GPU , sata and 10GBT network .
Why did you choose Proxmox vs Ubuntu MicroCloud (LXD+ceph+OVN)? I started with Ubuntu KVM ages ago, VMWare/vSphere, OpenCloud, etc... And so far Ubuntu LXD is so easy. Anyway, what reasons won you over for Proxmox?
@@Jims-Garage Thank you for the reply... and forgive me; if prefer VMs LXD will do that too! running both VM and containers was the draw for me. And LXD has a webUI but as soon as you do clustering or something more advanced LXD starts getting a little cumbersome for sure. Proxmox scared me away with pricing which is a big hill to climb for a little guy like me running a modest home lab.
Wow that setup looks very cool. I'm curious to know more about the WAN connection. Are you using your ISP ONT or removed it in favor of the SFP+ port? How does that part work in your network? Thanks for your great videos, learning a lot!
My ISP doesn't have modem mode currently. I'm having to bypass their router with a was-110. I plug it into a switch and convert it to rj45 to save an SFP port on the ms-01. It could go straight into the ms-01 though (if the heat sink wasn't so large!)
Looks great Jim. Are you going to be integrating the Ugreen NVMe NAS into this as well? I'm going two nodes with the Ugreen NVMe NAS. Very interested in the Thunderbolt internal network connectivity. Looking forward to the next session!
@@Jims-Garage Well the good news is I can just use the 10GbE connection for now as i'm not limited there. Curious if you planed on using another OS on the Ugreen NAS. I will probably go TrueNAS Scale. I have to see if TrueNAS Scale as a CSI driver.
I have been thinking about building a new homelab server for the past 4 months now 😅 and I really want to replicate your setup but my main concern is storage. I want to add HDs and there are 2 options either DAS over USB which is really flimsy especially with power saving or maybe using a SAS card. Maybe I should just build a NAS but then wouldn't it be better to build an AM5 build with Supermicro H13SAE-MF? Too many choices 😅😅😅. Can’t wait for your next update!
A good question, just because I collect Black Legion and I was reading the book at the time 🤓 I do like the fact that Abaddon isn't even a primarch but is strong enough to resist the will of the chaos gods. Some books make him seem weak and petulant but others like the black legion series make him far more interesting.
Are you going to stick with K3s or are you also considering to move over to Talos? Awesome video by the way I’m following the exact same path by using the ms-01s
Right now I'm focusing on migration and stability, as such migrating existing k3s. I will likely go RKE2, but am definitely going to look into Talos. Want to limit the number of unknown variables at present.
You can sync opnsense config between them and configure it as an active-passive cluster, so eventually just manually switch to one?! But I think this can also be automated. Also, I prefer k0s to k3s, as it seems more stable to me and easier to deploy/upgrade with k0sctl tool
My $.02 on boot drive. I would highly recommend an Optane drive, even if just a small one... I get away with 16gb but I wouldn't recommend anything smaller than 32gb unless you want to be running scripts to clear up space after every upgrade. Much higher write resiliency and random IO. I was using them for a while, since LTT gave me the idea, but then I saw level 1 do a video on Optane as well. GL with the new lab! Congrats on the fiber!
I am from Brazil ! Simply Fantastic Home Lab. I was curious how you did the "Internal Only Ring" network settings. To perform the Celph Cluster Sync. Could you please include this topic in your next video?! Thanks.
Also, you may already know that vPro requires native vlan only but has a cool feature you use called serial over LAN for lan based console access. I combined this with a secondary path for guest vlans. hence the comment that vpro requires native vlan. Worth noting I am not using the minisforums hardware.
11:55 create a CARP HA setup so they switchover automatically but TBH i'd rather use a descrete firewall box/appliance to keep that part separated from the rest
As far as router redundancy. It’s gonna cost a bit of coin but you could do two UniFi UDM Pros running one in shadow mode. That would free up your first MS-01 so it could be cycled without a network outage
If you use something like Proxmox, it can handle it as well (when 1 node goes down, it'll just move the VM given you use something like Ceph or another way of "shared storage"). The issue would still be that his WAN goes out but the UDM Pro has that exact same issue (unless you have say, redundant WAN links).
Take the fiber in on the Switch. Otherwise you just sacrefice your redundancy by having opnsense just havint it on one node. Virtualize the entire network
Here's my solution to have HA firewall. It is not perfect but it works for me. I have a 3 node NUC setup and to have a high availability firewall I created a vmbr to 1 dedicated lan port on each node (each time the same port). As the modem has 4 ports, I directly connect this port of each node to the modem. This vmbr is set as wan on my pfSense instance, nothing else on linked to it. I was not able to set the modem in bridge mode (fiber is too complex for me), instead on the modem I set the IP of pfSense as dmz. All traffic hitting my modem is in fact redirected to my pfSense, with my internet IP facing directly facing pfSense. With HA, if the node where my pfSense goes down it automatically starts back on another node. Thanks to ceph it goes quite fast. I'm now considering to have 2 OPNsense instances running instead of 1 pfSense to have no downtime.
Is there any issue with using a 4TB 990 Pro in any of the SSD slots? I understand that the site list 2TB as the max capacity, but supposedly that's because most 4TB SSDs are double-sided and can't physically fit in the slots.
I'm running the 4TB 990 in the primary slot, reusing the low profile heatsink from the 'stock' Kingston drive. I've got a 2TB 990 in the second slot, using some carefully x-acto'd thermal tape to contact the fan assembly. So far everything's been running pretty cool. The 4TBs are single sided, so I think you could probably run them in all 3 slots, but only actually heat-sinked on the first.
I like the warhammer 40k names .. I am replacing my edge servers to a similar build but I am trying to explore hosting local 70b llms and tie it to home assistant.. I want a different back haul though... this is soooo freakn fun my fiance she thinks I am crazy 😂.. maybe who knows.. just local llm won't be on miniform .,. I m gonna try 96gigs hosting inference on cpu .. maybe.. before building 4 gpu server
@@Jims-Garage16 core epyc 256gb ram. Running esxi vsphere with truenas (raid z1) with the five 8tb nvme drives. Also have a vm running ollama with cpi only for now. I have some work related vms and a ci/cd program running. Slowly building up. I have been thinking of migrating off synology for docker vms as I’m maxing the cpu.
I recently came across the MINISFORUM BD790i with the AMD Ryzen 9 7945HX. It supports up to 64GB DDR5 RAM, has two PCIe 5.0 M.2 slots for fast storage, and includes a PCIe 5.0 x16 slot for high-end GPUs like the NVIDIA RTX 4090. Plus, it comes with a robust cooling system and plenty of connectivity options including 2.5GbE LAN, HDMI 2.1, and USB-C. It seems like a solid choice for a high-performance setup in a small form factor.
@@Jims-Garage I have four BD790is clustered like this with a Miktrotik 10G switch (used the PCIe slot for a 10G card) plus two 2TB NVMes per machine. runs like a top. and 4 nodes makes Ceph extra happy, lol.
Thanks!
Thanks, Henry. That's very generous.
You can get your firewall in HA mode (I think):
Create a VLAN and plug your fiber into an Switch that uses that created VLAN as an untagged port. Tag that VLAN to every port where your ms-01s are plugged in and you should be able to use a VM with an ethernet Port configured in Proxmox with that VLAN. Mark it in the Cluster as HA and it should work.
Thanks, I'll retest. Might be complicated by using a WAS-110
I could confirm it works. I use a vlan for my isp to fw
Mini Forums should have a AMD edition on the way for next month
@@ulrikboesen that's good to know
you dont even need vlans, just connect all minis to a switch + isp and whenever one mini fails, opnsense will get restarted and connect with either pppoe or based on MAC dhcp, good thing is, even if using virtIO networking, you set MAC inside opnsense without having to set it physically on the port
Thanks for the demo and info, as always another awesome video Jim, Have a great day
Thanks, you too!
Internet speeds are like storage and RAM. It's always good to have headroom. I envy you and your precious symmetrical connection! :D
@@cheebadigga4092 I've done my time... 80/20 was years of pain
Just a small word of warning with the ms-01. I had nothing but problems with proxmox using the 2.5G ports with proxmox/opnsense. They would just not run at full speed at least with opnsense. I ended up buying a managed 2.5G mikrotik switch and used the 10G ports from the ms-01 and connected my modem to a 2.5G port and that solved all my problems.
Good to know. I haven't seen that yet, I'm getting 2.5Gb on speed test.
This is exactly what I would do with a handful of these machines. It's cool to see someone actually do it, especially with the config being a little atypical.
Thanks, will be covering the config in the next video (once I've figured it all out 😂)
Hello, really good idea the 40 gb USBC for the internal Proxmox ring, what is the cable type you use for that ? In other video can you focus on the proxmox config ?
It's a cable matters thunderbolt 4 certified cable. Yes, I'll cover configuration in the next video.
I've done this, but I installed a 4 port sfp+ adapter in each so the entire cluster is using DAC. One 10gbe ring network for the proxmox cluster and another for ceph. I don't trust networking over USB.
Great video looking forward to the next one. I keep changing my mind between using an itx motherboard with a 13500t or higher and these machines. Keep them coming & thanks for all the hard work on these videos.
All depends what you're after. This is a great all in one, but the 13500t likely is more expandable and easier to work with.
@@Jims-Garage Absolutely, these machines are great for the size. More expandable is also going to mean bigger. I currently do similar to your setup but using micro pc's (10th gen cpu) but always limited by the network which they only have one with these machines you get four plus expansion. I think these machines are just in between the dell & HP micro pc's and a full size pc. My only other concern would be life span of the units and replacement options for when they do fail, I suspect in 5 years if one fails you won't find them around and your back to hashing it out again :) Fair play for taking the jump not sure I would, well not until I watch your videos anyway :)
I'm also a bit torn here, I'm trying to get as close to a "do it all" server for VMs, router/firewall, NAS, all in a somewhat low-power and small package (sure I could just go with a cheap Epyc chip + mobo from China on ebay, but that's a bit much).
The Ar900i ITX from minisforum is also appealing, it's got four m.2 slots and a 16x PCIe slot, after doing some digging I've found someone on reddit had success with a M.2 to 16x riser cable on this board, and was able to use an LSI HBA on this adapted slot to add hard drives to it (keeping the onboard 16x slot free), he also made a 3d printed bracket to fit it in a Jonsbro N3 case on the opposite side of the board from the onboard 16x slot to hold this LSI card. I also found another video confirming the Synology E10M20-T1 works on this board, this adds a 10g base-t port and two additional m.2 slots. So as long as I'm fine without a dedicated GPU, this would be quite the beast of a machine that has everything else in an ITX package.
I'm leaning back to the MS-01 though because of costs, having a bunch of HDD storage would be nice, but I don't really need it, and PCIe 3.0 U.2 drives are pretty affordable for a good chunk of reliable storage. I'm thinking of going two Intel Optane 1600x 128gb m.2 in a stripe, not a lot of capacity but enough for VM OS volumes for what I'm doing, and the limitation of the 3rd slot at PCI 3.0 x2 doesn't hurt this drive. Then in the future if I really want to add HDDs, I could hook up a USB 3.2 jbod enclosure. And now I still have the 16x free to add a better GPU or whatever else.
@@coinholio470 it's always a difficult decision but both are a good choice. Pick one and be happy IMO.
As for m.2 to PCIe, I've been using one for a year in my main PC. I record all of my videos with an elgato card that is attached to my m.2 via PCIe adapter. So far it's been perfect.
@@coinholio470 If you need storage you could always hook up some network attached storage instead or usb (more stable) I think the key thing with these systems is power usage vs network interfaces, you just can't find anything else in the same size, power usage with the same nics (I have tried). My only concern is how long they will last, I expect if they fail it will be an entire new system and this is were the price may seem not so good. I'm still on the fence with purchasing and fair play to Jim for taking the plunge.
Great video Jim!
Currently in my homelab I have DL360 Gen10 and DL380 Gen10 (currently for sale) which I want to replace with three MS-01. Your video convinced me that this is a very good idea.
It is working well for me, my only concern is longevity of the device but time will tell. It's a great little machine.
@@Jims-Garage Thank you for your answer!
I'll think about it more, I'm currently testing my DIY server
Specs below:
CPU: AMD Ryzen 9 5900X, MOBO: Gigabyte MC12-LE0, RAM: 4x 32GB DDR4 ECC UDIMM, 2 port 10Gbps SFP+ NIC, 2x 256GB SSD in the mirror on Proxmox and 4x 1TB NVME in the adapter connected to PCIE x16. This mobo supports bifurcation. The server is in the IPC 2U-2404L housing
So far I'm very satisfied.
Now I'm wondering whether to build two more such servers or buy three MS-01.
@@oSAendI’m curious about the power draw on your DIY server as I like the BOM you came up with. How does it compare to the MS-01? I particularly like that it has ECC RAM and you can bifurcate the x16 slot to get the most flexibility / performance out of it.
As you say it’s really food for thought 🤔
from a power draw perspective alone, this is going to be a huge upgrade, especially with the i5 version. and you don't even sacrifice management, you still have IPMI etc. IMO this box is a perfect homelabber system.
I've recently setup a medium availability Opnsense on my pve cluster using the proxmox HA rather than Opnsense HA as I only have a single public IP.
The key for me was using the cluster resource mapping for the hardware pass thru and having the cable modem connected to all the node Wan ports using a separate switch (in my case I used locked down vlan ports on my main switch), set the Mac address in the wan interface and then as long as opnsense is only running once (as it should with HA) then only one wan connection is active. You could use a small SFP+ switch like a mikrotik for sharing your wan to all nodes.
Works great, single IP, not true HA but in an unplanned failure, downtime is only a few minutes and for planned stuff, I can migrate easily.
Thanks, that's what I'm thinking of doing as well. Hopefully cover it in the future.
I spent a couple of days trying to iGPU passrhough the Xe to a linux VM and had no luck, so I'll be very interested to see how you go. For now, I've installed the desktop OS directly on the MS-01 and using an Incus VM to run Proxmox. I'll also be running Proxmox Backup Server in an Incus LXC container and passing through a 4TB NVMe to it.
I have it working without issue (no sr-iov though). Follow my GPU passthrough guide, those instructions still worked for me on 24.04 server.
@@Jims-Garage Great. Do you mean you currently have full iGPU Xe passthrough working with a Ubuntu VM on an MS-01? I track a few forums and I have not seen anyone post anything positive yet. Which guide do you mean?
Thanks for showing the topology! This gave me some ideas. MS-01 showing up on Monday. I was about to mansplain adding a switch between ISP and firewall but looks like others beat me to it. I don't have enough confidence in not screwing myself to not keep the firewall HA on totally separate hardware and get at least a /29.
I’ve been eyeing these for a similar 3 node cluster setup. Great idea with using thunderbolt for network, I haven’t thought about that! I agree that the chassis limits the options for the add-in cards, and it feels like a bit of a waste to just discard the chassis.
Would be great if you could buy the board and fit your own cooling
@@Jims-Garage yes totally. Wouldn’t mind trying to fit 3 in a rack shelf or custom chassis either, that would be an interesting project.
For the single point of failure / non-HA firewall issue, you could bring in the WAN via a VLAN on the last SFP+ port on your Unifi switch, then you can migrate the firewall VM across the cluster without any hardware dependency and minimal downtime.
A good idea but sadly those ports are only 1Gb
@@Jims-Garage SFP instead of SFP+?
My internet is 2Gb not 1Gb, I'd lose half of my speed.
For my fiber internet, I terminate to the switch and layer 2 vlan it to proxmox over the VLANs port. Then, the firewall can migrate to any host. Also have the option of HA firewall. But with firewall on ceph clustered storage, the hardware concern is less of a concern.
Thanks, I used to do something similar. Little trickier with hardware passthrough and the was-110 ont SFP stick but I'm going to keep experimenting.
@Jims-Garage I believe you can power off a guest and migrate. The same hardware pass through then applies I believe providing you have configured.
Hey, for your redundant firewall issue, you could plug your fibre "modem" into the switch on an isolated vlan, and trunk that vlan to all three nodes. Your opnsense VM can then float between all the nodes and because you've just trunked it over a VLAN you'll still get the DHCP public address on opnsense. I think you could also set up two opnsense VMs running in HA like this, if one went down it would take on the IP of your primary VM.
@@AshleyTayles thank you. That's ultimately the setup I've settled on as shown in a later video.
@ fair play, working through your MS-01 videos as I’m thinking of doing a similar thing. Guess I should have finished the series before commenting 😝
@AshleyTayles ha, that's okay. I ultimately end up with 1 VM using HA inside Proxmox. These all share the internet via a cheap switch. I demo the fail over as well.
Just bit the bullet on my second ms-01. Been using my first one as my gaming rig after a failure of my other PC, with an A2000 (custom heatsink) in it it does the job great.
Now picking one up to replace/augment my aging i5 nuc home lab, with something that has a bit of oomph.
Awesome. I'll check out that heatsink!
Very nice setup! Well thought out.
You know, doing videos like this really isn't healthy for my wallet. You should know better!
It's fine, you only need 1 kidney and lung 😉
Yeah just bought the thunderbolt cables… expensive…. 😢
Hi Jim , It is very exciting homelab project but i suggest putting links every products you use to get the highest bandwidth available like the brand name of thunderbolt cables you are using etc. Cause I noticed that so many people on the net could get enough speeds over thunderbolt bridges . Awesome work btw. Congratulations.
Thanks, I'll add it
Got my MS-01 last week to replace my 10 year old proxmox box, pretty nice box! Put an NVME in the pcie slot to mirror the gen 4. Could not think of anything else to put in there haha
Good tip! I was thinking the same!
Cool idea with the Thunderbolt ring. Curious how the Ceph cluster is performing? I also have a cluster of 3 and considered a Ceph cluster. If I understand it correctly, a replicated storage array would hit the SSDs much harder than if the VMs were just running on their individual node.
I'll hopefully be able to feedback on performance in the coming weeks. It will definitely hit the drives harder but I estimate around 5 years for my nvmes. I'm happy with that.
@@Jims-Garage it was 3 or so years for me when I used QLC drives so that is probably just about right.
Excited for all the new topics you have lined up. Good luck with all of it.
Mate take a look at an older Sophos appliance, you can load OPNSence on them and it solves your firewall issue for not much more monies
Thanks, it's running well on the MS-01 atm
Hello and thank you for your great videos! I have two MS-01s myself with the i5 processor and 96GB Crucial RAM each. My setup is not really running smoothly. Could you perhaps tell me which components (how much RAM from which manufacturer) you have installed and whether the NVMe with Proxmox on it runs as ZFS?
Hey, thanks for the comment. I have 64GB of Corsair Vengeance (believe it's 5200 Mhz - nothing special). I'm running the Proxmox OS on non-ZFS (ext4). Ordinarily I would advocate the use of ZFS, but in my case I have 3 devices and wanted to keep costs down. I figure if a node fails I have 2 to take over and can replace the disk.
Could you not bring the fiber into a small fiber switch first the break out three ips from it?
Not a dumb switch obviously, but I can think of a few devices which would work. Even a low power firewall if you want that functionality.
You could handle the connections on the nodes in a number of ways.
@@XtianApi no, I can only obtain 1 IP. I have since placed this into a small switch with each port going to wan on a virtual firewall. This is configured for HA (see on later video)
@@Jims-Garage cool. I have since finished the video and saw what you mean. That's cool. I got interrupted at work, lol.
I'm trying to find a decent affordable swith that can do multi chassis link aggregation. Don't want to spend enterprise money. Lots of switches do link aggregation but only the higher end ones seem to be able to do multiple instances of Link aggregation.
Random side thing, I know.
Anyway, thank you!
Great videos! I've been working on assembling a similar MS-01 cluster and really appreciate that you've got the thunderbolt networking in particular already worked out for these devices.
One question - wrt the 980 Pro, what speeds have you seen on the primary NVME slot? I'm only seeing a peak of 3500MB/s disk reads on the 990 Pro's (vs 7000MB/s). The asterisk on the drive's description says you need to go into Samsungs windows drive utility and enable something called 'Turbo Write' - which appears to be a volatile write burst buffer. Curious if you've done that or had any advice for a newb on whether that's a good thing to enable on a linux proxmox server.
I haven't done a test, but I will check later. Make sure you're in the 4x4 slot, 3x4 will max at 3500.
@@Jims-Garage I'm def on the right slot - I've actually got 990 pro's in both of the first 2 slots and they both report the same ~3500MB/s speed. The 3rd slot reports 1300MB/s, which is expected. I'm a bit newb-ish still, but here's the command I'm running (in Proxmox 8.2 shell): "hdparm -Tt /dev/nvme0n1". It reports: "Timing buffered disk reads: 10156 MB in 3.00 seconds = 3385.11 MB/sec". I'm going to grab Samsungs 'Magician' tool for windows and see if it offers any clues.
unsure where my earlier comment went, but it had an external link so maybe it got cleansed!
had recurring lock up issues with the MS-01 with a k8s workload! after doing all the usual things: bios update, microcode updates etc things are looking more steady. would love to get this working stable without having to do something like disable the efficiency cores.
have you had any lock up with container workloads? great video!
All 3 machines have been running k3s for a few days, I haven't had a single issue. I wonder if you have a bad unit?
@@Jims-Garage that was my first thought, but I saw a lot of posts with people having the same issue! no crashes in 1 week tho, im hoping im good
Nice set up Jim! I'm looking at getting these MS-01 as well. I've setup a thunderbolt ring on some old 2013 macs using syncos guide and it is great to have that speed on such an old machine. About your Ceph setup.. don't you want to run the VMS on your Ceph drives for High Availability so that if a MS-01 node fails it will switch over very quickly? Or do you have another plan for that?
Yea, that's the plan. However, I do iGPU Passthrough which means they won't fail over. Others will benefit from it though, and the data will be secure.
Hi Jim, great video!
About the coral TPU, I'm using one together with iGPU passthrough in frigate. The TPU is used for the frigate detection mechanism, the GPU for hardware offloading the ffmpeg streams. So frigate benefits using both. Frigate is running HA mode. If a node fail it will boot on another node just file. Live migration is not an option, but the system is back in about 3 minutes, which is totally fine with me. I use the resource mapping feature in proxmox for this.
Thanks for the info! Sadly I have a dual chip tpu and I cannot get it to show using lspci. Do you have single?
@@Jims-Garage I have the single indeed, because my slot has only one pci-e lane connected, as most wifi slots have. Some have no pci-e connected at all... Only CNVI / CNViov2. If your slot has only one lane connected, at least one TPU should show up... You could try it in an other slot with a M-key to A/E key adapter?
Hello @@exteraNL . Do you use an adaptor for Coral? Or your mobo support it? I found a mobo Asrock B660M-HDV which has "1 x M.2 Socket (Key E), supports type 2230 WiFi/BT PCIe WiFi module". Do you know it support Coral? Thanks!
I have the MS-01 and SR-IOV with the Intel iGPU works fine. I have the 12th gen variant and it works out great. I loaded it up with 2x48 GB sticks and this is my low power Proxmox node. I have successfully passed through the iGPU to multiple VM's at the same time (max. 7) and GPU decoding/encoding also works. Just know that when it comes to LXC containers there are some caveats from what I understand. Might be better to just run a VM with e.g. Docker or Podman on it and then spin up a Handbrake or Jellyfin container or something if that is what you're going for.
That's great to hear. Do you have a link to a guide you used in case of any caveats, or is it the standard approach?
@@Jims-Garage There is some caveats atm : you must pin kernel to last 6.5.X version because 6.8 not compatible with the i915 dkms
@@gtarrare thanks, I just read up on that. I'm already running 6.8...
@@Jims-Garage HW accelation working in a plex LXC and Jellyfin in VM (Ubuntu 22)
@@Jims-Garage I tried to left a comment yesterday but YT removed it... I pointed that out as well. Don't run kernels newer then 6.6
Hi Jim. Did you look at ipvs(adm) and a virtual IP address? Maybe three lines off of a switch connected to the modem, so that each of the nodes can assume the role of the uplink with a virtual IP address just in case the "current active node" goes offline`? As long as the HW switch between modem and cluster doesn't fail, your should be good to go.
I'm not sure what your exact issue was with the internet and fiber that caused you to remove HA, but if I understand correctly, I would suggest one thing. You could plug your fiber into an unmanaged switch with a fiber SFP optical transceiver, and then you could plug two connections, using RJ45, from the unmanaged switch into each MS-01.
Jim - did you try using the dual edge TPU in the wifi spot? I couldn't get lspci to recognize them when I tested.
I did, and in the x8. Wasn't recognised, only single works.
Dream setup right here
Thanks, still have a lot to learn.
About not having a failover for your firewall, could you not just run your ISP fiber connection into a cheap unmanaged SFP+ switch, and then run a fiber to each MS-01 and have Opensense on each device for failover that way?
EDIT: Nevermind. I just started watching your next video and you covered this.
Thanks for the heads up on that WAS-110 SFP+ module! Ive been wanting to bypass Google Fiber's ONT box and go straight into my MS-01. I hate having fiber to the house and then having to run ethernet to my otherwise fiber backend. This way I could stay 100% fiber.
@@spotopolis that's exactly what I ended up doing on a later video in the series.
@@Jims-Garage Yup! Just found your channel and videos. I'm running pfsense currently in a VM in unraid. I bought an Intel QAT to pass though to it, but soon discovered that it is blacklisted in the vfio drivers and I'm having troubles bypassing it. Let alone the free version of pfsense will not allow the use of the QAT. So my solution is just to migrate to a bare metal install of Opensense to run the card.
The only thing I wonder about these is the number of PCIE Lanes if you were to attach a disk shelf or similar to it.
You should be able to fit at least 8 drives to an HBA and probably 16. However, without full ECC I won't be doing it.
@@Jims-Garage for homelabs this is the kind of thing where DDR5 for consumers comes in handy.
For the lab I'm overall not too worried about ECC. I'd never go without it in an enterprise environment though.
Sometimes I miss the hardware raid controllers with their own redundant power. Then I remember how annoying it was when they died.
I was very interested in this MS1 machine but the e-cores hindered my push lol.
Latest kernel tends to just handle it, you can also pin cores if you need. Have to remember it's basically a laptop in SFF.
This is a wicked video :) although you called the 2.5gbe fiber :). Why not get a 8 port Mikrotik SFP+ switch ? then link them all together via SFP+ ?
Thanks, it kind of is. My fibre goes into a switch and converts it to 2.5Gbe rj45. It's because all the IO is stupidly upside on the MS-01 and my was-110 doesn't fit due its heatsink.
Excellent video Jim, I too now have 2Gb internet from a company called BRSK. I also just installed a 42U rack and some aircon in a 3m x 1m room I have that comes off my office. This thunderbolt networking looks really cool. I don't think I can justify another spend on these things though once I have seen this now I will find resistance hard. I already have a flawless 3 node cluster of ryzen 5 3600s though with 128Gb RAM each. Not using ceph though, I have a Flashstor12 over 10Gb and another biggish Truenas box.
Nice, that sounds like a great setup
@@Jims-Garage I'm also trying to go low power now so am messing with a lot of arm stuff, have you seen the turing pi2 and the RK1s? I have 4 of those with 32Gb RAM each and did something funky with the M.2 slots. Instead of being conventional I got some m2 to 10Gb adapters and stuck them in so each of the RK1 has a 10Gb going to the flashstor too. Working well, just got proxmox running on them al running in a 2U rackmount.
@@DigisDen those look cool! I like the idea of them, I just find arm lacking in compatibility for many of the things I need. Hopefully that'll change in the near future.
I have bought because of you three MS-01 😂 but I am thinking to buy a fourth one. Because I am looking for a nice HBA to replace my power hungry 8x disk setup (ZFS RAID 10). But on UA-cam there is also a dude that got an expansion card where you can actually put 6(!) M.2 SSD in the MS-01! So how are you doing your storage for your homelab? And why Longhorn and not CEPH?
@@erwin757 I use Ceph for VMs and longhorn for Kubernetes data. I don't recommend the MS-01 as a NAS, I would build something with plenty of HDD storage.
I'm curious how much you've leaned into the Proxmox SDN features with this new deployment.
Nothing yet but potentially. I don't really see a need for it at the moment.
@@Jims-Garage Likewise, although the baked in IPAM could be interesting.
@@kc9nyy would love to learn how to use that myself, personally. (also 73 DE KC2KOA!)
Wow. Amazing video. Can you please explain vlan setup and also software define network over the 3 proxmox node? Maybe a squid proxy to make vm'n in sdn connect to internet
Thanks. I'll discuss some of my vLANs in the next one. There's no sdn, the thunderbolt 4 is used as a network adapter
I am curious to see if you have any problems with the 2.5 GbE ports. I had the I226-V (the one without Intel AMT capability) for the fiber connection and an ONT that can handle 2.5 Gbps. Ever so often, that connection stalls and can only be recovered by a NIC reset (either ifconfig down/up or cable disconnect). This is a problem that has been discussed on pfSense forums as well and is actually worse than with previous generation I225 chips where this never happened to me. It seems to happen only at 2.5 Gbps, not at 1 Gbps.
I've had it running for a week without an issue but that on a single unit running latest kernel 6.8
@@Jims-Garage So you have the OpnSense running as a VM - I use it plain vanilla.
@@Jims-Garage Kernel 6.8 indicates you are using OpnSense under Proxmox. Is it true that you use the NIC in bridged / virtualized mode or do you have it passed through to the VM? My reason for asking is that I found a "reset on TX hang" part in the igc driver - but that is present only in Linux, not in FreeBSD.
How are the P-E cores handling the virtualization?
We tried running virtulizers on 12Gen when it came out and it was a horror . ESXi , Proxmox and Red Hat KVM didnt know what to do with the e-cores. The only workaround was to just pastru the P cores which makes buying this thing pointless.
I'm on kernel 6.8 which I believe has an improved scheduler. So far I haven't had to manually intervene and everything seems fine. I will dig into core utilisation once I'm a little more settled.
All great stuff, nice seeing you doing more clustering~
Not certain how much experience you have with ceph with SSD OSDs but just keep in mind you want enterprise SSDs/NVMe(s) for longevity and reliability; plenty of fast consumer SSDs but not many of them handle the demands of ceph -- if you see worse than expected speeds keep that in mind.
Thanks, appreciate the warning. Some experience but lots to learn. Was going to try with 1TB nvme with 550TBW - see how that works out. I've some similar with longhorn for a few years and imagine it's similar wear and tear.
@@Jims-Garage The problem is not really TBW (for good consumer drives) but rather the poor write speeds over time/and both with heat. Also, you really should use a drive with PLP power loss protection. Consumer drives really are just a poor choice for Ceph.
UA-cam ate a reply I had in here, but it was along the lines of considering an M.2 NVMe carrier board (with a switch chip as the MS-01 doesn't appear to have bifurcation support, unless that has changed recently) -- won't be able to recommend the Supermicro AOC-SLG3-2M2 because of this but there are other options out there if you need more NVMe storage.
Do keep in mind that the switch chip on these carrier boards will be the limiting factor so look for something beefy that carries all the needed lanes for each NVMe SSD.
Finally, backing up what Justin has added -- you really want to seek out NVMe SSDs with PLP -- these are usually designated as M.2 22110 and are all mostly enterprise SSDs as they need that additional room for capacitors~
@@avluis86 thank, I'll check those drives out (I had these because UGreen sent me them).
You absolutely need SSDs with PLP or your CEPH cluster will be painfully slow.
k3s storage is something I struggle with. I have used longhorn, NFS and openebs. Then moved to ISCSI, and everything seems to be really stable, but that means I'm relaying on a single instance of truenas. It would be cool if you could make a video comparing HA storage solutions for k3s and their performance
Will look into it. I'm currently using longhorn but looking into ceph.
Longhorn can be extremely buggy. I have lost databases because of it. When everything works, it works beautifully, but every few months, something fails to start. The biggest issue, in my opinion, is the io speed. If you have a demanding work load or a large db, everything comes to a crawl. This is with nvme storage backing across 4 nodes
@@hansaya yes, I've witnessed many of the things you mentioned. Thankfully I don't have anything that is too heavy on IO but if that changes I might need a different solution. I'm keen to go ceph for a single solution but need to figure it all out first.
@@Jims-Garage One thing saved my bacon many times is using a reliable backup solution. Learned that from my longhorn saga :(. I use Velero and I highly recommend it. Very easy to use as well
@@hansaya thanks, will take a look. Currently using PBS, rclone and Google drive
Hi Jim, looking into a similar setup. How are you getting the thunderbolt 4 ports as a nic in proxmox?
Check this out gist.github.com/scyto/67fdc9a517faefa68f730f82d7fa3570
Waitin for the next video. Great Video.
Thanks 👍
Suggestion , buy a unifi aggrefator switch. Link aggregate the rwo 10gbits interfaces at 20gbits and your firewall Will be abble to migrate and have high avaibility. This bucause if router fails everthing fails on the cluster , i Will bot like my firewall and internet to BE a point of failure 🙂 cheears from Portugal 🤗
Thank you. I actually ordered one earlier this afternoon!
@@Jims-Garagewell done :)
Hi! Jim,
I have exact same setup like you but I have ordered dual ports 25gbe pci card for cluster. I am curious if I use TB4 port on MS-01, Do I need just cable or need TB network device to use them as cluster network? How much speed you get on TB4 networking?
Hi, as shown in the video and description links. Cable matters thunderbolt 4 cable, just a cable. Speed is a little over 25Gb (around 2.6GB/s)
@@Jims-Garage Were you able to fix the IPv4 issue with TB networking? I have similar issue.
Will this SSD fit in the U2 slot of the Minisforum MS-01 and function without any issues? It's a Western Digital Ultrastar DC SN650, U.3, 15MM, 15,360GB, 2.5-inch, PCIe.
@@Renee.Dominique1 I'm not sure. It says U.3 in the title. I don't have a lot of experience with those drives
@@Jims-Garage it works but for the 15 mm thickness i need to buy 3d printed case extension cost 25 euro including shipping. but then I cancelled the 3d print order and opted for samsung u.2 7mm 15.3tb mlc pcie 4 gen
@@Jims-Garage VMs on Samsung SSD PM9A3 OEM Enterprise 2.5" 7mm U.2 PCIe NVMe 15.36 TB (compatible with PCIe 4.0 x4 slot)
Samsung PM9A3 OEM Enterprise/DataCenter M.2 22110 NVMe 3840 GB (compatible with PCIe 3.0 x4 slot)
Samsung PM9A3 OEM Enterprise/DataCenter M.2 22110 NVMe 3840 GB (compatible with PCIe 3.0 x2 slot)
Proxmox on 2230 A+E nvme utilizing the wifi port or extension cable that converts A+E to B key (Proxmox on the slowest slot)
I'm looking forward to how you set up this cluster! I grabbed some MS-01s too and I'm excited to see how you set up thing so I can also improve my setup! I'm really curious how you got the internal proxmox ring buffer set up. Right now I have it all connected to a switch since I've never done something like that before and I'm not quite sure how to do it correctly.
Coming soon! I'll link to the documentation I used and show the process and some testing. It's really impressive.
don't be hung up on lack of ha fw - do this part with sep boxes, also run 20g bonded and then make the management network with 2.5 - simpler, faster and easier
I have it set up exactly the same way. I only used Ipv6 for the internal routing from a Video i found online. Works great.
No i just realiesed I'm using the 2,5gbe for the internal ring and i got the 10gbe bonded to the switch
Using Thunderbolt 4 for a ring network is honestly genius. I do have to ask though... Since you use Thunderbolt 4, shouldn't the theoretical throughput be around 40 Gbit/s? You mentioned it's around 2.6 GB/s, so it's around 20 Gbit/s. Where's the other half?
Is it just a limitation of Proxmox? Do these Minis Forum PCs have cheaper Thunderbolt chips built in and it can't use the full capability? Or does Thunderbolt not allow Full Duplex communication?
It's because the networking stack within TB4 caps out at ~20Gb. Data transfer should reach ~40GB. The TB4 used is fully certified in the MS-01 for full speed.
Was the power draw of 130w the combined total of all 3, or was that per ms-01 for a total of 390w for the whole cluster?
The whole cluster (all 3 combined).
@@Jims-Garage thanks! Thats about 20w lower than my current single hypervisor.
Nice, I got 4 of those and did something very similar except I went 2 TB wdlback 850x's for ceph and 1tb patriot p300's in the next slot for the proxmox drive. That ring drive was a disaster for me on 4 units (I had a 4 unit setup with the beelink ser7's previously)
4! Very nice 👍 sounds like you have a similar setup.
@@Jims-Garage yep. I put the opnsense on dedicated hardware though. virtualizing the router sketches me out. I did it with a 12th gen intel but if i were to do it now I would probably investigate the aliexpress lower power stuff with 10 gig and then do two of them in high availability just for the router.
I have one with two WD Red N700's in RAID Z1 mirror and 96 GB RAM. Do you mind if I ask you about thermals and noise? On mine, it idles at about 42C and if the CPU gets up to 5% usage, it goes up to around 52C and the fan gets annoyingly loud. This does not seem normal to me. I could probably tune the fan in the BIOS but it doesn't seem like 5% should raise temps that much.
@@praetorxyn sounds like mine. Have to remember it's a laptop with a small heatsink, powerful CPU, small fan, and aggressive curve. I'm going to take the covers off mine and strap noctua fans to them. Ugly but I don't look at them.
@@Jims-Garage Ah. Damn. I was hoping my situation was abnormal and that repasting the CPU or something might improve things.
I have not done much with my MS-01 yet, as I have been trying to sort out remote KVM. Remote KVM works flawlessly in MeshCommander, but it's not maintained anymore and MeshCentral has a much nicer interface while being more secure etc., as it uses a CIRA tunnel for AMT. I can get AMT connected in MeshCentral but remote KVM doesn't work for some reason. Haven't looked at it in a few weeks, but I wanted to get remote management sorted out before I really started setting up Docker containers on it etc., as I didn't want to have a bunch of stuff I was depending on running on it when I might have to physically move it to make a BIOS change.
@@praetorxyn I haven't bothered with it for now, hoping the project might be resurrected. Let me know how you get on.
@@Jims-Garage MeshCentral is still being maintained as far as I'm aware. The setup is just more involved.
I will say I first tried the Docker version and didn't have much luck. I have had better luck running the actual NodeJS version as a systemd unit, but still haven't managed to get remote KVM working.
Mesh Central can give you comman ds to install user agents on machines etc., and that all works fine out of the box. It's the AMT stuff that's kind of tedious to setup, particularly since I'm using swag as my reverse proxy.
Currently m y stuff is running on a Synology DS918+ and the version of Docker on there is downright ancient, to the point I can't even have one Compose file include another one. I'm planning to switch to Traefik at some point and thought I might have better luck making it work with that, but I haven't gotten it done yet, as I was kind of wanting to get Traefik setup on the MS-01, but I started tinkering with the remote management first. So it's something of a chicken / egg problem.
Great stuff I’d love one of these devices
So far, so good!
@@Jims-Garage can I ask where you bought them please? Was it amazon or minisforum direct?
@@kevinhughes9801 2 from minis forum website, the third from Amazon UK (be sure to click the discount check box for £150 off)
@@Jims-Garage great thank you
Lets Talk opnsense, I have this exact machine, and I cant get more then 500mb down of my 1G... plugged directly into modem I get all the speed. Ive tried bridges, q35, i440, passthrou, bonding... I just cant get it... can you point me in the direction you went or issues you had?
Try disabling ASPM on the NIC in the BIOS.
Do you have any concerns regarding the security of these chinese made devices and if so what have you done to try to stay secure?
It is something I have weighed up but it's very difficult to counter. I do obvious things like blocking egress traffic to china and Russia etc, but that's not a robust control.
I wish I had 2G fiber Internet.. I mea 1G would make me happy as hell. I am on 150/50 on coaxial cable :) anyway looking forward that CEPH setup..
Hopefully soon? I was on 80/20 for years...
Not sure of your specific setup, but could you put a 10Gb switch on the WAN side and plug all the ms-01's into that for making nonsense work for HA?
Opnsense not nonsense.
Bingo, check my latest OPNSense HA video - that's exactly what I did :)
I was looking to get a MinisForum MS-01 and see if I can fit a MBA card and use it for storage , but can't find a external 4/6 bay unit which just has a PSU where you can plug the all the sata plugs into it , they are all USB or thunderbolt .
With 20 line CPUs it's hard to work out how do you get a GPU , sata and 10GBT network .
Why did you choose Proxmox vs Ubuntu MicroCloud (LXD+ceph+OVN)? I started with Ubuntu KVM ages ago, VMWare/vSphere, OpenCloud, etc... And so far Ubuntu LXD is so easy. Anyway, what reasons won you over for Proxmox?
I like the simple UI, easy to use features, stability and generally prefer VMs over LXD as I find them easier to manage.
@@Jims-Garage Thank you for the reply... and forgive me; if prefer VMs LXD will do that too! running both VM and containers was the draw for me. And LXD has a webUI but as soon as you do clustering or something more advanced LXD starts getting a little cumbersome for sure. Proxmox scared me away with pricing which is a big hill to climb for a little guy like me running a modest home lab.
I really like that sign on your back right! I really need to learn vLAN My apologies, thank you for the informative video.
Haha, thanks. I'll be covering my vLANs soon
Wow that setup looks very cool. I'm curious to know more about the WAN connection. Are you using your ISP ONT or removed it in favor of the SFP+ port? How does that part work in your network? Thanks for your great videos, learning a lot!
My ISP doesn't have modem mode currently. I'm having to bypass their router with a was-110. I plug it into a switch and convert it to rj45 to save an SFP port on the ms-01. It could go straight into the ms-01 though (if the heat sink wasn't so large!)
@@Jims-Garage cool, didn't know about the was-110. i will investigate. thanks
Have you had any issues running Proxmox. Are they any special setup to keep it stable.
So far no issues at all. I simply upgraded to the latest version with kernel 6.8. I'll report back any problems I face.
Looks great Jim. Are you going to be integrating the Ugreen NVMe NAS into this as well? I'm going two nodes with the Ugreen NVMe NAS. Very interested in the Thunderbolt internal network connectivity. Looking forward to the next session!
That was the plan... Unfortunately the thunberbolt 4 wasn't stable on the UGreen NAS, you cannot use both ports simultaneously from my experience.
@@Jims-Garage Well the good news is I can just use the 10GbE connection for now as i'm not limited there. Curious if you planed on using another OS on the Ugreen NAS. I will probably go TrueNAS Scale. I have to see if TrueNAS Scale as a CSI driver.
thanks very interested. I have one of the minis forums running unraid connected to my qnap via iscsi.
Nice, sounds like a cool setup
I’ve tested and it does support up to 96GB of RAM using 2 SODIMM Crucial 48GB sticks
Yes, I read that before I purchased it. Sadly I'm not that rich, 64GB will do me for the foreseeable future.
Can these three sit on a rack side by side? Or would it be too wide for a 1u setup on a shelf?
Yes, if you had a shelf it would just fit a standard 19" (about 21")
@@Jims-Garage might be time to retire my 3 Dell r620s
@@JohnWeland they've done their duties!
I have been thinking about building a new homelab server for the past 4 months now 😅 and I really want to replicate your setup but my main concern is storage. I want to add HDs and there are 2 options either DAS over USB which is really flimsy especially with power saving or maybe using a SAS card. Maybe I should just build a NAS but then wouldn't it be better to build an AM5 build with Supermicro H13SAE-MF? Too many choices 😅😅😅. Can’t wait for your next update!
Thanks. If it helps I have a dedicated NAS outside of this cluster. I don't intend to change that.
Why the Despoiler and not the OG Warmaster for 3 Primarchs though? 😂
A good question, just because I collect Black Legion and I was reading the book at the time 🤓 I do like the fact that Abaddon isn't even a primarch but is strong enough to resist the will of the chaos gods. Some books make him seem weak and petulant but others like the black legion series make him far more interesting.
can't seem to find my comments after adding links to the resources I used... are links going to cause it to get deleted?
Yes, unfortunately links are not permitted. It's a UA-cam anti-spam thing.
Are you going to stick with K3s or are you also considering to move over to Talos? Awesome video by the way I’m following the exact same path by using the ms-01s
Right now I'm focusing on migration and stability, as such migrating existing k3s. I will likely go RKE2, but am definitely going to look into Talos. Want to limit the number of unknown variables at present.
waouh...jalous...ok enjoy! thx for the video :)
You can sync opnsense config between them and configure it as an active-passive cluster, so eventually just manually switch to one?! But I think this can also be automated. Also, I prefer k0s to k3s, as it seems more stable to me and easier to deploy/upgrade with k0sctl tool
Thanks, I'm reading up on it now.
My $.02 on boot drive. I would highly recommend an Optane drive, even if just a small one... I get away with 16gb but I wouldn't recommend anything smaller than 32gb unless you want to be running scripts to clear up space after every upgrade. Much higher write resiliency and random IO. I was using them for a while, since LTT gave me the idea, but then I saw level 1 do a video on Optane as well. GL with the new lab! Congrats on the fiber!
Thanks, good idea 💡
I am from Brazil ! Simply Fantastic Home Lab. I was curious how you did the "Internal Only Ring" network settings. To perform the Celph Cluster Sync. Could you please include this topic in your next video?! Thanks.
Also, you may already know that vPro requires native vlan only but has a cool feature you use called serial over LAN for lan based console access. I combined this with a secondary path for guest vlans. hence the comment that vpro requires native vlan. Worth noting I am not using the minisforums hardware.
Thanks, John. That's good to know, I haven't yet dabbled with the vPro feature yet. This will be useful
11:55 create a CARP HA setup so they switchover automatically but TBH i'd rather use a descrete firewall box/appliance to keep that part separated from the rest
Doing something similar but starting with 1-2 nodes (main one being ms-01) 4gbit fiber and kubevirt instead of proxmox. Following!
did vmedia over provision that to 2.2Gb/s
The speed test in the video is the best I've done so far, maybe?
Vmedia usually over provision
How much does that cost per month on the internet side of things?
It's £90 / month
@@Jims-Garage WOW that's incredible especially for the UK!
As far as router redundancy. It’s gonna cost a bit of coin but you could do two UniFi UDM Pros running one in shadow mode.
That would free up your first MS-01 so it could be cycled without a network outage
If you use something like Proxmox, it can handle it as well (when 1 node goes down, it'll just move the VM given you use something like Ceph or another way of "shared storage").
The issue would still be that his WAN goes out but the UDM Pro has that exact same issue (unless you have say, redundant WAN links).
There is ms-01 x6 m.2 adapter
Interesting. How does that work? It doesn't support bifurcation on the PCIe slot AFAIK.
Take the fiber in on the Switch. Otherwise you just sacrefice your redundancy by having opnsense just havint it on one node. Virtualize the entire network
Great video! Earned a sub!
Thanks for the sub!
Here's my solution to have HA firewall. It is not perfect but it works for me.
I have a 3 node NUC setup and to have a high availability firewall I created a vmbr to 1 dedicated lan port on each node (each time the same port). As the modem has 4 ports, I directly connect this port of each node to the modem. This vmbr is set as wan on my pfSense instance, nothing else on linked to it. I was not able to set the modem in bridge mode (fiber is too complex for me), instead on the modem I set the IP of pfSense as dmz. All traffic hitting my modem is in fact redirected to my pfSense, with my internet IP facing directly facing pfSense.
With HA, if the node where my pfSense goes down it automatically starts back on another node. Thanks to ceph it goes quite fast. I'm now considering to have 2 OPNsense instances running instead of 1 pfSense to have no downtime.
Thanks, this is similar to how I used to have it. However, using the WAS-110 SFP stick has complicated matters. I'm keen to keep trying though.
Subscribed, bell is on! Cheers!
Awesome, thank you!
Nice James😊
Thanks - took a while this one!
@@Jims-Garage I was not expecting you to upgrade everything on your network!
Is there any issue with using a 4TB 990 Pro in any of the SSD slots? I understand that the site list 2TB as the max capacity, but supposedly that's because most 4TB SSDs are double-sided and can't physically fit in the slots.
I don't know as I haven't tested. I suspect they might do without a heatsink
I'm running the 4TB 990 in the primary slot, reusing the low profile heatsink from the 'stock' Kingston drive. I've got a 2TB 990 in the second slot, using some carefully x-acto'd thermal tape to contact the fan assembly. So far everything's been running pretty cool. The 4TBs are single sided, so I think you could probably run them in all 3 slots, but only actually heat-sinked on the first.
I like the warhammer 40k names .. I am replacing my edge servers to a similar build but I am trying to explore hosting local 70b llms and tie it to home assistant.. I want a different back haul though... this is soooo freakn fun my fiance she thinks I am crazy 😂.. maybe who knows.. just local llm won't be on miniform .,. I m gonna try 96gigs hosting inference on cpu .. maybe.. before building 4 gpu server
@@rimonmikhael that does sound fun! Hey, we're all on the spectrum somewhere 😂
My single r7515 server with 5 8gb nvme is idling at 130w. :)
@@HaydonRyan nice 😂 that's a beefy server though. What do you have running on it?
That idle draw, story of my life...
@@Jims-Garage16 core epyc 256gb ram. Running esxi vsphere with truenas (raid z1) with the five 8tb nvme drives. Also have a vm running ollama with cpi only for now. I have some work related vms and a ci/cd program running. Slowly building up. I have been thinking of migrating off synology for docker vms as I’m maxing the cpu.
I recently came across the MINISFORUM BD790i with the AMD Ryzen 9 7945HX. It supports up to 64GB DDR5 RAM, has two PCIe 5.0 M.2 slots for fast storage, and includes a PCIe 5.0 x16 slot for high-end GPUs like the NVIDIA RTX 4090. Plus, it comes with a robust cooling system and plenty of connectivity options including 2.5GbE LAN, HDMI 2.1, and USB-C. It seems like a solid choice for a high-performance setup in a small form factor.
I agree. Put a quad NIC on it and it's a great option. I did consider it for a while.
@@Jims-Garage I have four BD790is clustered like this with a Miktrotik 10G switch (used the PCIe slot for a 10G card) plus two 2TB NVMes per machine. runs like a top. and 4 nodes makes Ceph extra happy, lol.
@@0xKruzr damn, that's some serious horsepower!
980pro....till you run out of cache. This is why I stopped with consumer storage....
In an ideal world I'd use enterprise kit but for now they're fine. Most workloads I have are small and bursts
Tell us how you create 26 gigabytes Ethernet over the USB!
Will do, but you can skip ahead and read here: gist.github.com/scyto/67fdc9a517faefa68f730f82d7fa3570