vSphere With Tanzu DDS - Ep 1 - Network Prerequisites and Routing

  • Published 28 Jul 2024
  • The first in a series of videos that walk you through the whole process of deploying and using vSphere with Tanzu. See below for chapter links.
    This particular video focuses on the lab setup we're going to use for the series, the networking prerequisites and setting up routing using Linux gateways. If you already have a lab set up with workload management enabled, skip ahead to video 4.
    Target audience: vSphere Admins
    Chapters:
    0:00 vSphere With Tanzu Deep Dive Series Ep 1
    3:31 My vSphere Environment
    15:29 Creating a Client VM for Network Validation
    24:54 Configuring a Gateway VM for Network Routing
    29:02 Testing the "Frontend" Gateway
    31:26 Configuring and Testing the "Workload" Gateway
    Other Links:
    vSphere Documentation: docs.vmware.com/en/VMware-vSp...
    GitHub link to scripts: github.com/corrieb/vspherewit...
    William Lam's Blog: www.virtuallyghetto.com/
    Cormac Hogan's Blog: cormachogan.com/
    Quick Start Guide: core.vmware.com/resource/vsph...
  • Science & Technology

COMMENTS • 11

  • @VMwarevSphere
    @VMwarevSphere  3 years ago +3

    Update: New setup from William Lam allows you to install a minimal setup that gets you quickly to video 4: www.virtuallyghetto.com/2020/11/complete-vsphere-with-tanzu-homelab-with-just-32gb-of-memory.html

  • @GregZuro
    @GregZuro 3 years ago +1

    Thanks so much for this. *Really* helpful.

  • @MrEzandy
    @MrEzandy 3 years ago +1

    The network config is challenging on a home network. I'm 2 days deep making progress but still having problems.

    • @VMwarevSphere
      @VMwarevSphere  3 years ago

      Yeah, to some extent this shows why SDN was invented. Happy to help if you can describe the issue. Otherwise, William Lam recently posted a single-node environment that simplifies the networking requirements: www.virtuallyghetto.com/2020/11/complete-vsphere-with-tanzu-homelab-with-just-32gb-of-memory.html

    • @MrEzandy
      @MrEzandy 3 years ago +1

      @@VMwarevSphere Thanks Mate. I'll take a look at William's latest via your link. The problem I'm running into is that the HAProxy VIPs are not resolving to the control plane VM IPs. I can hit the management interface on each control plane VM IP on port 443 from a VM on the combined workload/frontend network. The HAProxy addresses are pingable, but just hang on an https request. I inspected the /etc/haproxy/haproxy.cfg file and it looks like it's configured to resolve the VIPs to the workload interface on the Supervisor, not to the management interface. The workload interface doesn't respond to https requests either. Tried different ways of configuring the network for 2 days straight and keep getting the same result.

    • @VMwarevSphere
      @VMwarevSphere  3 years ago

      @@MrEzandy Ok. Glad you found the haproxy.cfg file as this should certainly help in diagnosing. The HAProxy VM IPs will be pingable regardless of whether HAProxy is even running. It's Linux routing that's responding to those pings, not HAProxy, so that can be a red herring. The fact that you have some backends defined in HAProxy means that vSphere is able to reach your management endpoint, which is good.
      I'm not sure whether the control plane nodes you're talking about are the Supervisor nodes or a Tanzu Kubernetes Cluster node. HAProxy will load-balance traffic to both and always to the workload network. Regardless, you should be able to get a response by curling the ip:port backends defined in haproxy.cfg from a shell in the HAProxy appliance. If you can't, you may have a routing problem, although if you're using a single network for workload/frontend we should be talking about IPs on the same network. Don't trust ping because TKC nodes are configured to drop ICMP (don't ask me why).
      Also check you didn't accidentally configure your Load Balanced IP range in HAProxy to conflict with your gateway or any other essential IP.
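
      A minimal sketch of the curl check suggested above, run from a shell inside the HAProxy appliance. The config path matches what the earlier comment found; the awk field layout (backend address as the third field of a `server` line) is an assumption based on a stock haproxy.cfg, so adjust if your config differs:

```shell
#!/bin/sh
# Probe every backend ip:port defined in haproxy.cfg directly,
# bypassing the load-balanced VIPs.
CFG=/etc/haproxy/haproxy.cfg

# A typical backend line looks like:
#   server control-plane-1 192.168.10.21:6443 check
# so the address is the third whitespace-separated field.
awk '/^[[:space:]]*server[[:space:]]/ {print $3}' "$CFG" |
while read -r backend; do
  # -k: control plane certs are self-signed; --max-time avoids the
  # indefinite hang described above (pingable but HTTPS never answers).
  if curl -k -s -o /dev/null --max-time 5 "https://$backend"; then
    echo "OK   $backend"
  else
    echo "FAIL $backend"
  fi
done
```

      A `FAIL` here, with ping untrustworthy because TKC nodes drop ICMP, points at routing between the HAProxy workload interface and the backend network rather than at HAProxy itself.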

    • @gregorkastelic
      @gregorkastelic 3 years ago +1

      @@MrEzandy I'm seeing the same behavior. haproxy.cfg is configured correctly, but the guest TKG cluster deploys only the control plane VM and then stalls. Did you manage to get it running?

    • @gregorkastelic
      @gregorkastelic 3 years ago

      @@VMwarevSphere I don't know if I fully understand the routing requirements for those VLANs (FE, WL, MGMT): which should route to which, and which should not? Can you tell us some more about this?
      Thanks for confirming the ICMP-dropping settings on TKC nodes (I wouldn't tell you why either ;-))
      Regards