Proxmox Cluster Network Overview - Link Aggregation to the Rescue! Minis Forum MS-01

  • Published 22 Nov 2024

COMMENTS • 118

  • @cheebadigga4092
    @cheebadigga4092 21 days ago +2

    I always explain link aggregation and such with a highway-lane analogy: 1 NIC with a certain bandwidth equals 1 highway lane with a certain speed limit, and all cars on that lane (electrons if you will) drive close to that speed limit. Another structurally equivalent/identical NIC added to the system as a LAGG equals an additional highway lane with exactly the same speed limit as the first lane, and the cars on that lane drive at exactly the same speed as on the first lane. So, speed is the same for both lanes, yet you still have more data (cars) that can pass through, effectively somewhat doubling the bandwidth. 2 NICs on their own = 1 highway with 1 lane for each NIC. LAGG = 1 highway with 2 lanes for both NICs at the same time, so to speak.

    • @Jims-Garage
      @Jims-Garage  21 days ago +1

      @@cheebadigga4092 great analogy, I might have to steal that one

    • @cheebadigga4092
      @cheebadigga4092 21 days ago

      @@Jims-Garage haha thanks!
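In Linux terms (Proxmox uses ifupdown-style network config), the two highway lanes correspond to an LACP bond. A minimal /etc/network/interfaces sketch; the interface names enp2s0f0np0/enp2s0f1np1 are placeholders, not taken from the video, and the switch must support LACP:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp2s0f0np0 enp2s0f1np1  # the two SFP+ "lanes"
    bond-mode 802.3ad                    # LACP, negotiated with the switch
    bond-miimon 100                      # check link state every 100 ms
    bond-xmit-hash-policy layer3+4       # spread flows across lanes by IP/port
```

As the analogy says, each individual flow still stays in one lane; the hash policy only spreads different flows across the members.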

  • @joakimsilverdrake
    @joakimsilverdrake 6 months ago +23

    This is probably how I would do it: connect your WAN SFP to the little switch. Connect one 2.5Gbit port from each MS-01 to that switch. Connect both SFP+ ports from each MS-01 as a LAGG to the aggregation switch. Configure the necessary VLANs on those LAGGs. Connect 2 SFP+ ports from the aggregation switch to the 48-port switch. Connect your NAS to 1 (or 2 for a LAGG) SFP+ port(s) on the 48-port switch.
    Add one (or more) vNIC with the internal bridge (and appropriate VLANs) and one vNIC with the external bridge to OPNsense. This way your OPNsense can be made HA within Proxmox and can be migrated freely between nodes.

    • @kgottsman
      @kgottsman 6 months ago +4

      I was going to say the same thing... No reason to run dedicated physical NICs on the WAN interface of your firewall. Plus, this will allow the VMs to float easier between Proxmox nodes. I think the only potential configuration issue with this is if the ISP gets cranky about seeing the HA/cluster firewall.

    • @Jaabaa_Prime
      @Jaabaa_Prime 6 months ago +5

      This is 100% the way to go. Don't forget LAGG isn't just about speed, it is also about HA. There really is no need to have dedicated "opnSense" cables when you can do it with VLANs.

    • @headlibrarian1996
      @headlibrarian1996 5 months ago

      @@kgottsman Wouldn’t the 2.5G ports attached to the mini switch be considered dedicated physical NICs for the WAN? I probably misunderstand.
      Even with the cable modem in bridge mode isn’t only one of the 3 nodes actually transmitting on its 2.5G port into the mini switch at any one time? How can the ISP tell there’s a cluster?

    • @headlibrarian1996
      @headlibrarian1996 5 months ago

      I think I see what you’re trying to do here. Any tutorials I can read on how to configure things as you say, and why?
      Also, with all your ports being taken up by LAGGs where are your workstation or media devices going to hook up so they have a 10G link to the NAS and each other?

    • @joakimsilverdrake
      @joakimsilverdrake 5 months ago

      @@headlibrarian1996 the LAGGs will carry all internal traffic. If you have several VLANs the LAGGs can be configured as trunks.
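As a rough sketch of the trunked-LAGG design described above, each node's /etc/network/interfaces might look like this (interface names and the VLAN range are placeholders, not from the video):

```
auto bond0
iface bond0 inet manual
    bond-slaves enp2s0f0np0 enp2s0f1np1  # both SFP+ ports to the agg switch
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes    # the bridge carries tagged VLANs (a trunk)
    bridge-vids 2-4094       # VLANs permitted on the trunk
```

vNICs for OPNsense (or any VM) then attach to vmbr0 with the appropriate VLAN tag.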

  • @Shrp91
    @Shrp91 6 months ago +6

    Really looking forward to the config video!

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Thanks, coming soon. Promise!

  • @shephusted2714
    @shephusted2714 6 months ago +3

    Put 56G ConnectX cards in all nodes - no switch needed with dual-port cards - separate out OPNsense and use discrete boxes with HA for redundancy. You can run 40GbE with Cat8 on 56G cards - you will like 4x the bandwidth on cluster nodes - improved speed, lower latency. The dual-port cards on eBay are like 50 bucks. Using this approach you can use 10G for the management network - use dual NAS and put in a faster NIC plus SSD cache - all these upgrades are affordable and performant. Overall this upgrade is better than the original, but you could still do better for not much money and unleash the true power of the cluster by eliminating bottlenecks. Good content - explore other network filesystems like OCFS2/NFS/SSHFS/Gluster and do some benchmarks comparing them to Ceph.

  • @chrisumali9841
    @chrisumali9841 6 months ago +1

    Thanks for the demo and info, have a great day

  • @andyhello23
    @andyhello23 6 months ago +1

    Glad you're still doing these videos

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Yes, I'll be recording adding the Agg switch and setting up the cluster soon

  • @magnificoas388
    @magnificoas388 6 months ago +1

    this is becoming a paramount config to follow :)

  • @brianoconnell-df7kz
    @brianoconnell-df7kz 6 months ago +4

    For OPNsense on two or more MS-01s, you might consider running VRRP via keepalived, presenting a single virtual IP that is active on only one of the OPNsense instances at a time. I currently use that in my lab for my home DNS failover, and also for my default gateway in my route-failover playground.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Thanks for the suggestion, I'll look into that. Haven't used VRRP previously.
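For readers who, like Jim, haven't used VRRP: keepalived elects one holder of a shared virtual IP and moves it to the backup on failure. A minimal sketch; the interface name, virtual_router_id and addresses below are made up for illustration:

```
# /etc/keepalived/keepalived.conf on the primary node
vrrp_instance GATEWAY {
    state MASTER              # use BACKUP on the standby node
    interface eth0            # placeholder NIC name
    virtual_router_id 51      # must match on both nodes
    priority 150              # standby gets a lower value, e.g. 100
    advert_int 1              # advertise every second
    virtual_ipaddress {
        192.168.1.1/24        # the floating IP clients use as gateway/DNS
    }
}
```

Clients only ever point at the floating IP, so a failover is invisible to them.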

  • @aliihsandonmezer6667
    @aliihsandonmezer6667 6 months ago +1

    Thanks for sharing your brilliant ideas with us. As you can see from the numbers, you are a shining star among homelab content creators. Keep up the good work.

  • @kurtbrown7504
    @kurtbrown7504 6 months ago

    I know others have said it, but you can pipe your internet into the switch on a VLAN. This then allows you to put that ISP connection anywhere in your network. Then you can set up your router VM in HA and it just spins up on another node. This is how I do it in my own home lab. We also do this at work with hundreds of ISP connections.

  • @johnscabintech
    @johnscabintech 6 months ago

    Thank you for the update. I was thinking the MS-01 had more than enough networking, then I watched your video :) Just goes to show not every home lab is the same. I have the 9-port Sodola version of that switch running without issues for the past 6 months, but I have just purchased a new switch to simplify my home lab (24-port 2.5GbE with 4 SFP+ ports, managed).

    • @MrakCZ
      @MrakCZ 6 months ago

      Hello, which switch? SG3428X-M2?

    • @johnscabintech
      @johnscabintech 6 months ago

      @@MrakCZ Yes correct

    • @MrakCZ
      @MrakCZ 6 months ago

      @@johnscabintech Would you tell me your opinion on the noise of the fan when it arrives? I'm really worried about the noise because it will be in the living room, and some of the other TP-Link switches with fans are very loud. Thank you.

    • @johnscabintech
      @johnscabintech 6 months ago +1

      No problem I will send you an update on Saturday as that's when I'm setting it up

    • @johnscabintech
      @johnscabintech 6 months ago

      @MrakCZ If you are worried about the fan, they do a version which is fanless (24-port 2.5GbE with 8 SFP+ ports at 10GbE). Ali sells them and sometimes they're on eBay. Might be a good option.

  • @johnpaulsen1849
    @johnpaulsen1849 6 months ago +1

    I accomplished this by passing my ISP into a separate VLAN that I define on the vNIC, and a second one for my LAN traffic.
    My connection goes into my UniFi switch directly. This allows me to live-migrate the pfSense VM with no interruptions.

  • @johnwalshaw
    @johnwalshaw 6 months ago

    Another great video. I think it's worth testing a standard bridge. My Palo is virtual on Proxmox and is used for all layer 3 (and layer 2 filtering). I can push multiple Gbps through the Palo no problem. Proxmox is not limiting this, and that's 10th-gen Intel with a 2x10Gbps LAG to the Proxmox bridge. Then you can do power-on migration between hosts if you want. Your virtual firewall will talk layer 2 to the MAC of the SFP ONT via the dedicated switch. There is no such thing as a network loop in Proxmox, so no worries there.

  • @Thierry.g38
    @Thierry.g38 a month ago +1

    Great video.
    I have a stupid question: why use 3 MS-01s? For high availability and redundancy?

    • @Jims-Garage
      @Jims-Garage  a month ago

      @@Thierry.g38 yes, they're in a Proxmox cluster with Kubernetes spread across them

  • @jccl1706
    @jccl1706 6 months ago +2

    Nice video as always. If possible, can you explain how to create that Thunderbolt mesh between those MS-01s? I'd be happy to know how you did it. Thank you.

  • @davidgulbransen6801
    @davidgulbransen6801 6 months ago

    Part of the benefit of aggregation is added redundancy (not just throughput). I have definitely added aggregation where I don’t need the added bandwidth, but do want the link redundancy.

  • @FlaxTheSeedOne
    @FlaxTheSeedOne 6 months ago +7

    Why not have the SFP of the ISP in the aggregation switch? Tag its VLAN to all the nodes.
    Have 2 OPNsense VMs on any of the 3 nodes and give each a tagged interface in that VLAN. That way both VMs can see the SFP and you are HA, with the option of any node being able to fail and the VM that failed migrating over while the other one still exists.
    The extra switch is not needed and the extra cabling is also obsolete. This slims down everything massively.
    It also simplifies setup as you can push the same config/setup to all nodes.
    Edit: also no extra NIC needed, as you can use the existing 2 ports in LACP to the agg switch.
    I would also suggest getting maybe 2 switches from the likes of Mikrotik to allow for MLAG, which allows LACP across switches for a higher redundancy state and more ports.

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      Thanks, good suggestion. I'm going to see how that works out and will report back in a later video.

    • @philippbeckonert1678
      @philippbeckonert1678 a month ago

      This is definitely a great idea. I like getting rid of the little switch just for translating SFP+ to RJ45, and then also getting rid of the extra NIC on one node, which again saves money and reduces complexity. Lastly, use the same config on all 3 nodes with the VLANs configured correctly. That's just a very bright idea and I think the best solution to this "problem".

    • @philippbeckonert1678
      @philippbeckonert1678 a month ago

      It also reminds me that I'm planning a similar setup at home, but was never sure if it's secure to put the ISP WAN directly on the switch, since I'm not too familiar with VLANs as of yet.
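On Proxmox, the "WAN VLAN on the trunk" idea discussed above comes down to giving the OPNsense VM a vNIC tagged with the ISP's VLAN on a VLAN-aware bridge. A sketch, where the VM ID (100) and VLAN tag (100) are placeholders:

```
# add a WAN vNIC to the OPNsense VM, tagged with the ISP VLAN on vmbr0
qm set 100 --net1 virtio,bridge=vmbr0,tag=100,firewall=0
```

Because the WAN now arrives over the trunk, the VM can migrate to any node without any re-cabling.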

  • @shootinputin6332
    @shootinputin6332 5 months ago +1

    One of the reasons I haven't got into mini PCs is how cheap it is to grab some Mellanox 10Gb SFP+ NICs and chuck them in spare PCIe slots (with great-value Mikrotik SFP+ switches).

  • @brandoncherry8025
    @brandoncherry8025 6 months ago

    How about moving the ISP connection onto one of the switches and then putting it in a VLAN? You could then pass the VLAN through the SFP+ links to the cluster. Then you don't need the extra NIC for OPNSense's WAN. Only complication is that you cannot use unmanaged switches anymore if you get in a pinch.

  • @TerenceKearns
    @TerenceKearns 6 months ago +1

    Looks cool. Would be pretty handy for running a Peertube instance. You can have runners (ffmpeg transcoding farm agents) using all that GPU grunt.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Yes, that would be a cool setup!

  • @mwsmith824
    @mwsmith824 6 months ago +1

    How are the thermals holding up on the MS-01's? I heard there were overheating issues and have been holding off getting one.

  • @vartanshakhoian9606
    @vartanshakhoian9606 2 months ago +1

    Hi, what is the model of those 3 compute modules you are using to create HA ?

    • @Jims-Garage
      @Jims-Garage  2 months ago

      @@vartanshakhoian9606 these are the MinisForum MS-01

  • @MrJakecornford
    @MrJakecornford 6 months ago +1

    Please tell us more about the WAS-110. I would really like to know how you can replace your ONT! I thought they were all serialised and had to do a handshake with the head end to get service.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      I still have a lot to learn regarding how it functions, but my understanding is it masquerades as your ISP router. You need to enter your SN/MAC into the WAS-110. On this type of fibre, that is the only authentication.

    • @MrJakecornford
      @MrJakecornford 6 months ago

      @@Jims-Garage I'm very interested in this. I also have the MS-01 with a spare SFP+; it would be great to plug my FTTP drop cable straight into it.
      Have you got any details on how much it cost and where you got it from? I'm also located in the UK.

  • @nitraz9113
    @nitraz9113 6 months ago +2

    Out of curiosity... why don't you just buy UDM-Pro/SE (or even 2, because now they have full HA support)? You already have some equipment from Ubiquiti. And, you will not have to host the controller separately, more features, integrity, etc. Your entire setup will be simplified.

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      I had one a few years ago and sold it. I found the intervlan performance to be terrible, the ghost devices were concerning, and general firewall stats were fabricated. That might have changed now, but I'm not willing to shell out £600 to find out. If unifi are kind enough to send me one I'll certainly test it out (virtual firewalls do have a lot of advantages though).

  • @simo47768
    @simo47768 6 months ago +1

    Hi
    For a home lab starter, aggregation already starts to get too advanced. Why not do it at the end as an improvement?

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      Check some of my other videos for a simpler set-up, albeit a lot of homelabs are complex (kind of the point). You can replicate most of the aggregation with a single link instead, you don't have to use it.

  • @rcobsesssed
    @rcobsesssed a month ago +1

    I see so many negative comments about the MS01, have you had any of the problems reported with RAM and would you still recommend the MS01 today?

    • @Jims-Garage
      @Jims-Garage  a month ago +1

      Interesting, I only see positive comments. I haven't had a single issue since purchasing them. All 3 running just as well as the day I bought them, they have been on the entire time.

    • @rcobsesssed
      @rcobsesssed a month ago

      @@Jims-Garage Thanks for the quick reply. It’s probably that most people only complain… looking mostly on STH forum and Reddit. I think I’m going to order a pair for Proxmox. I appreciate your efforts!

  • @jonathan.sullivan
    @jonathan.sullivan 6 months ago +1

    LAG - think of it as adding more lanes to a highway. The speed limit is the same but you can handle more traffic in the additional lanes.

  • @RobertFabiano
    @RobertFabiano 6 months ago

    Any thoughts about a transparent firewall / firewall-on-a-stick? Then you can trunk all your routed and non-routed VLANs on one LAG.

  • @oildiggerlwd
    @oildiggerlwd 6 months ago

    Could keep the lagg between switches for failover parity

  • @TheMongolPrime
    @TheMongolPrime 6 months ago +1

    Looks good. However the SODOLA's reviews mentioned that they tend to crap out after a few weeks or months. Hopefully you won the lottery and don't have that happen. If you do, I suggest a quad-port x710 card. Also like others have mentioned, check out VRRP and HA (CARP) Mode. I'd love to see you tackle that project.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Thanks, I'll be reading that later tonight. It is a cheap switch and I don't expect years of performance, but fingers crossed!

  • @mrkdosmil2879
    @mrkdosmil2879 6 months ago +1

    Not that it will make any difference performance-wise, but why not just use the X710 NIC for the LAGGs and use the onboard one for the cluster? I think that would look more OCD-friendly lol.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Haha, whatever you do don't look at my homelab tour, you'll have nightmares! Spaghetti junction!

  • @urzaaaaa
    @urzaaaaa 6 months ago +1

    What is the name of the small 2.5/10G switch please?

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      Hi it's a SODOLA 6 Port 2.5G Easy Web Managed Switch

  • @levifig
    @levifig 6 months ago

    So sad that Ubiquiti didn’t do a Gen 2 of the US-XG-16. I have a Gen 1 and it would totally fit your setup…

  • @parsons151185
    @parsons151185 6 months ago +1

    Does the MS-01 onboard X710 not support SR-IOV? As then you could pass the virtual function (interface) through to the VMs instead of the physical interface/function.
    I do just that on my Asrock X570D4U-2L2T and GoWin R86S-N.
    If possible, it would negate the need for the extra X710

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Yes, it does I believe and is something I mentioned in a previous video. It could be helpful in resolving some design choices but also means I cannot have failover (I think)

    • @parsons151185
      @parsons151185 6 months ago +1

      After posting this, I paused to consider its impact on a LAG... I don't think it would be possible. The agg switch wouldn't be able to establish multiple LAG sessions.
      I utilise it for Junos vMX, but the vFP is the only device utilising a VF on the X550. LAG worked after altering a bunch of options (trust, promisc, multicast and spoofchk). I'd expect my agg switch would be a little confused if some other VM started wanting to bring up a LAG too, as the vMX already brings up a LAG to a pair of OS10 devices in a VLT configuration.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      @@parsons151185 Guessing you'd need LAGGs per VLAN. I haven't looked into that.

  • @_Jonny_
    @_Jonny_ 6 months ago +1

    So are you bypassing the Virgin Hub with your own ONT SFP?

    • @ThePJN91
      @ThePJN91 6 months ago

      This was going to be my question... very interested to see how you achieved this if you have time?

  • @darrenoleary5952
    @darrenoleary5952 6 months ago +1

    I'll assume that you have already set this up, but the LAGG diagram is incorrect for the USW-48 down to the USW-AGG and the first MS-01 up to the USW-AGG.
    You will need to use consecutive ports on the USW-AGG, ie 1 & 2 or 2 & 3, or 3 & 4, etc instead of using the ports as per your diagram.

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      Thanks, I wasn't aware of that as the switch hasn't arrived yet. That will save me a lot of headaches!

    • @darrenoleary5952
      @darrenoleary5952 6 months ago

      @@Jims-Garage I believe switches of other brands allow aggregation of side-by-side ports, ie 1 & 3, 2 & 4, etc, but in the UniFi system, the ports must be consecutive.
      If you wanted to use ports 1 & 3, it would automatically include port 2.

    • @davidgulbransen6801
      @davidgulbransen6801 6 months ago +3

      @@Jims-Garage another heads up: the order in which you create the LAG in UniFi matters. If you do the wrong switch first, you'll break the link and potentially lock yourself out of UniFi until you directly connect to the other switch to access UniFi again and complete the config. Ask me how I know :P
      UniFi's docs on creating aggregated links cover this, but I wanted to call it out as something to be on the lookout for.

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      @@davidgulbransen6801 thanks, that's really good to know as well

  • @jasonperry6046
    @jasonperry6046 6 months ago +2

    Please don't skimp on details. You can put out as many videos as you want on this, and I will watch them all. Then I will probably bookmark all of them and go back and watch them again.
    I would even watch a video of you going through the comments and giving your thoughts on people's opinions.

    • @MrakCZ
      @MrakCZ 6 months ago +1

      +1

  • @sku2007
    @sku2007 6 months ago +1

    Isn't a LAGG of 2x10Gbit two connections of 10G each, rather than 20G per device? Meaning a second device can use the other 10G if there is more than one connection and the connection speed is high enough (either the same LAGG, or a higher speed if a different switch is used)?

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Maximum single speed is that of the slowest single connection. However, you can have two at the same time at full speed.
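The per-connection behaviour Jim describes follows from how LACP picks a link: each flow is hashed onto exactly one bond member, so one flow never exceeds 10G, but two flows can land on different members. A toy illustration of a layer3+4-style hash (not the exact algorithm any real NIC or switch uses):

```python
# Toy model of LACP layer3+4 hashing: a flow's addresses/ports pick one member.
# One flow always uses the same link; many flows can spread across both links.
import hashlib

LINKS = ["sfp0", "sfp1"]  # two 10G bond members

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = hashlib.sha256(key).digest()  # stand-in for the hardware hash
    return LINKS[digest[0] % len(LINKS)]

# The same connection always hashes to the same link...
a = pick_link("10.0.0.2", "10.0.0.9", 40001, 445)
assert a == pick_link("10.0.0.2", "10.0.0.9", 40001, 445)

# ...but many connections (different source ports) spread over the members.
chosen = {pick_link("10.0.0.2", "10.0.0.9", p, 445) for p in range(40000, 40064)}
print(sorted(chosen))  # expect both members to appear
```

This is why a single file copy tops out at one link's speed while two parallel copies can use both.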

  • @sku2007
    @sku2007 6 months ago +1

    I don't understand why OPNsense needs a LAGG. Storage CAN be routed, but that puts unnecessary load on the FW. Better to put storage on the needed VLAN with only the necessary ports open; in the end it's the same as going through the FW anyway (necessary ports open).

  • @ryanmalone2681
    @ryanmalone2681 6 months ago +3

    Where's the new cardigan? I was promised a plethora of cardigans! ;-)

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      It was tough breaking tradition...

    • @darylnd
      @darylnd 6 months ago

      Will this help? ua-cam.com/video/iYuldgIOelY/v-deo.htmlsi=BbGzOcqpx0A9sEuA

  • @ewenchan1239
    @ewenchan1239 6 months ago +1

    Do you really think that you're going to have 2x 10 GbE worth of traffic going from the rest of your network to your OPNsense, especially given that the outbound WAN is only on a 2.5 GbE port to a 2 Gbps fibre internet link?
    I am trying to think of the rationale for where you would need that 2x 10 GbE SFP+ link for OPNSense because if you're not going to have that much traffic going through your OPNSense, then you don't need the x710 and then you can free up the SFP+ port on the first MS-01 node where you can plug that into your USW-Agg switch.
    And then I would probably move the NAS over onto the other switch and use a single 10 GbE link between the two (as you mentioned in the video) since the NAS isn't going to be that fast anyways.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      It is more academic than strictly required, but it will be useful for Longhorn replication before I move to Ceph. There is also the edge case of resilience, but it's more likely the switch would fail entirely as opposed to a single port.

    • @ewenchan1239
      @ewenchan1239 6 months ago

      @@Jims-Garage
      Three things:
      1) If it is for the purposes of learning, you can deploy LACP really on any pair of links such that in all three cases, you can LACP all of the 10 GbE SFP+ links so that all three nodes will have it.
      The benefit of doing this is that you will have a server with LACP that can serve the data, and clients that have the potential to consume the data at those rates, if it is intended as a learning tool and an academic exercise.
      2) I had to google what "Longhorn replication" was.
      2a) Ceph is awesome!
      I HIGHLY recommend that you watch the video from apalrd's adventures on how to install the Ceph manager dashboard as that makes making other types of replication rules for Ceph (within the context of Proxmox) SIGNIFICANTLY easier.
      I'm running Ceph 17.2, and with the ceph-mgr-dashboard installed, I was able to set up the erasure coded replication rule (whereas Proxmox, via its GUI for Ceph, ONLY allows you to create replicated rule for Ceph RBD and/or CephFS).
      And I'm running this on three mini PC nodes which only have the Intel N95 processor and a 512 GB NVMe SSD in each, so overall not a lot of space, but it's enough to learn some of the basic concepts of Ceph, and it has been absolutely awesome as the VM/CT storage, where live migrations complete in a matter of seconds (because it's shared storage, so no data needs to physically move).
      3) Agreed. The chances of the links themselves failing is quite low.
      But if you are using LACP to try and get more total bandwidth rather than using it for HA, then the failover aspect of LACP won't really be applicable.

  • @sharkovios
    @sharkovios 6 months ago +2

    I have a question. Why not put the 2G fibre on a VLAN and use it as WAN on OPNsense, so that OPNsense could fail over to another node and you wouldn't lose internet when node #1 is down?

    • @Jims-Garage
      @Jims-Garage  6 months ago +2

      I will be testing that, it's how I used to have it with Sophos XG. The WAS-110 could have issues with that setup though (and it requires hardware passthrough, not a vmbr).

  • @hansaya
    @hansaya 6 months ago +2

    Instead of lagging the LAN ports, why don't you install two instances of OPNsense in HA (CARP) mode? Most ISPs let you request two or more public IPs, and you do not need static public IPs to have an active/backup system. The other benefit of this is that you can restart Proxmox without taking down the internet. I have a similar setup with one dedicated low-power pfSense box that only takes over when my primary pfSense in Proxmox is down. This stops everyone in the house from complaining about not having internet :D. This is really important if you do decide to have Proxmox in a cluster: the cluster won't work without quorum, and you'll get yourself in trouble if you can only bring one node up.

    • @hansaya
      @hansaya 6 months ago

      I realized others have suggested the same. I would consider this at the planning stage so it's a lot easier to implement.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Thanks for the comment, this is exactly how I used to have it with Sophos XG. I'm trialing now to see if it's possible using the WAS-110 stick.

    • @AIC69420
      @AIC69420 6 months ago

      This is Virgin Media; unless you use Business, you won't get more than one public IP.

  • @XtianApi
    @XtianApi 2 months ago +1

    You can double the transfer speed to 20 gig for a single client if Proxmox and the switch support the full dynamic 802.3ad algorithm.
    But if it's just balance-rr or the various TLB/ALB modes, then no. This is a constant source of pain for people on the forums asking why they aren't getting the speed doubling, or why it only happens in one direction.

    • @Jims-Garage
      @Jims-Garage  2 months ago +1

      @@XtianApi thanks, that's good to know. I'll check that out.

    • @XtianApi
      @XtianApi 2 months ago

      @@Jims-Garage Cool! I dig the channel. One of the easiest ways to see all these modes is if you have a QNAP NAS and turn on link aggregation: you have to select the aggregation mode. Even if you just google an image of it, it'll show the dropdown of all of them with a pretty good explanation of each.
      Keep doing what you are doing!

  • @xgod978
    @xgod978 6 months ago +1

    sr-iov soon? 👀

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Yes, I will be investigating it (have used it previously)

  • @barfnelson5967
    @barfnelson5967 6 months ago +1

    If you actually need 20 gig back to OPNsense when you only have a 2 gig pipe out, you probably designed your network and/or VM interfaces wrong. You should go back through your VMs/NAS and assign interfaces on the VLANs that have massive throughput, so that the traffic stays on the switches and never has to come back up to your OPNsense router and back down again.
    Assuming that Thunderbolt backhaul network is working and not shitting the bed like it was on my 4 SER7s, you should connect one 2.5 gig port from each MS-01 to the internet switch, one 10 gig to the distribution switch reserved for OPNsense, one 10 gig to the distribution switch for VMs, then backhaul Ceph and cluster traffic over Thunderbolt. Then you can have OPNsense high availability fail over to any of the 3 other boxes no problem. Go into your NAS and create virtual interfaces on the higher-traffic VLANs so that traffic never hits the OPNsense router.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Thanks, that's something to consider. Right now the dual 10Gb LAGG will be useful for Longhorn replication, but I plan to move to Ceph in the near future on the USB4 ports.

    • @barfnelson5967
      @barfnelson5967 6 months ago +1

      @@Jims-Garage That raises more questions for me then, since you have one LAG just for OPNsense on the machine that is running OPNsense, and single 10 gigs coming off for Proxmox. The only reason data should flow back up from the switches, across the LAG and through OPNsense is if the source and sink of the data are on different VLANs, which they shouldn't be if you designed the VLAN setup right. Even then, nothing else is lagged except the 48-port 1 gig switch, so unless you have 20+ nodes on that maxing out their connections to 20 other nodes across different VLANs that require going back through OPNsense, I don't see where the traffic is going.
      @joakimsilverdrake had a better idea than my original idea above. LAG all 3 of your MS-01s to the distribution switch, connect all the MS-01s to the 2.5Gb switch, and create the OPNsense and VM VLANs on those LAGs. It's too bad you don't have 2 more 10 gig ports on the distribution switch, then you could lag everything. Doing the MS-01s as all LAGGs means you can only send one 10 gig to the 48-port switch and one 10 gig to the NAS.

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      @@barfnelson5967 thanks I'll take a look at this

    • @headlibrarian1996
      @headlibrarian1996 5 months ago +1

      Since the USW-Agg switch is a dumb switch that can’t do L3 routing how can it keep VLAN traffic restricted? It’s basically a router on a stick setup as is. The Pro version of the USW-Agg switch with a bunch of additional and probably unnecessary SFP+ ports can do L3 but it’s triple the price. The 48 port PoE switch can also do L3.

    • @Jims-Garage
      @Jims-Garage  5 months ago

      @@headlibrarian1996 It doesn't; my OPNsense router does. VLANs are L2.