Install A Highly Available Kubernetes Cluster | Making Kubernetes HA with Kubeadm

  • Published 17 Nov 2024

COMMENTS • 15

  • @DoZZaSR
    @DoZZaSR 1 year ago +1

    Excellent video.

  • @emilmihailpop6162
    @emilmihailpop6162 4 months ago

    Hi! Thank you for a very good video.
    I want to ask: should the kube-api-server and apiserver-advertise-address IPs
    be different, or can they be the same?

    • @Drewbernetes
      @Drewbernetes  4 months ago

      Hi, thanks very much.
      So the "kube-api-server" in this video is just a DNS alias that resolves to kube-vip's virtual IP address. This is like having a real domain pointing to a load balancer. It also means your certificates are generated using that domain name instead of an IP, allowing you to change the underlying IP without having to regenerate certificates.
      For example, say you owned the domain my-kube-cluster.example.com; you could point that to a load balancer with an IP of 1.2.3.4, and this would then route traffic through to (in this case) the three nodes that run as control planes, which may have 192.168.0.201-203 as their (local) IPs.
      The advertise-address is the IP address that is used by the other member nodes of the cluster. So in my case, it's the local IP address of the 1st control plane node. So yes, in theory you could use the same IP address for both fields, but it's not recommended in an HA setup, as all traffic will hit one node before being directed to the correct location.
      If you were setting up a single control plane node it would be fine to do this, but to be honest I'd still set up kube-vip and use a DNS record (or a hosts-file adjustment, as I do in this video) as it would allow me to:
      A. add more control plane nodes at a later date
      B. change the IP address without having to regenerate all of the cluster certificates
      I hope that helps and clarifies this!
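      The setup described above can be sketched as follows. This is a hedged example, not the exact commands from the video: the VIP 192.168.0.200 is an assumption, and the alias and node IPs are taken from the reply's illustration.

      ```shell
      # On every node: map the API-server alias to the kube-vip virtual IP.
      # (192.168.0.200 is an assumed VIP for illustration only.)
      echo "192.168.0.200 kube-api-server" | sudo tee -a /etc/hosts

      # On the first control plane (local IP 192.168.0.201): initialise against
      # the alias, while advertising this node's own IP to the other members.
      sudo kubeadm init \
        --control-plane-endpoint "kube-api-server:6443" \
        --apiserver-advertise-address 192.168.0.201 \
        --upload-certs
      ```

      Because the certificates are issued for the kube-api-server name rather than an IP, the VIP behind it can be changed later without regenerating them.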

  • @anilpatel-ds3nx
    @anilpatel-ds3nx 2 months ago

    Hey Drew, thanks, great video. I've been trying to get an HA setup working for a few days now. After a few goes I ran into problems with Kubernetes versions 1.29, 1.30 and 1.31: the control plane doesn't initialise.
    With version 1.28 it just works every time. I'd appreciate any feedback, or if anyone else has any thoughts. Thanks

    • @Drewbernetes
      @Drewbernetes  2 months ago

      Hi!
      The process should generally be the same no matter which version you're using; however, the config may change slightly.
      It could be something as simple as a feature gate no longer being supported, or having been moved into GA.
      What's happening when you start the process? Where is it failing?
      Depending on where in the process it fails for you, you should be able to check things like the kube-apiserver logs and the kubelet logs. These are your two main sources of errors.
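      As a sketch, on a kubeadm-built control plane those two log sources can be pulled like this (the container ID is a placeholder you'd substitute from the crictl listing):

      ```shell
      # The kubelet runs as a systemd service:
      journalctl -u kubelet --no-pager --since "10 min ago"

      # kube-apiserver runs as a static pod under the container runtime;
      # list its container, then fetch its logs:
      sudo crictl ps -a | grep kube-apiserver
      sudo crictl logs <container-id>   # substitute the ID from the line above
      ```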

    • @anilpatel-ds3nx
      @anilpatel-ds3nx 2 months ago

      @@Drewbernetes Thank you for the quick feedback. It doesn't seem to add the second IP address for kube-vip on the NIC, but I will dig into the logs more as you suggested. Oh, by the way, great channel; I've gone through most of your videos :)

    • @Drewbernetes
      @Drewbernetes  2 months ago

      @@anilpatel-ds3nx Thanks very much! Yeah, have a check through those logs, and check the logs for kube-vip too. I've updated my Kubernetes installation to 1.31.0 today to check things over and everything is working as I would expect, so it does definitely work 🙂. However, that's an upgrade from 1.30.4 where it was already installed, not a fresh installation. That being said, if it wasn't going to work, it wouldn't work on the upgraded cluster either 🙂

    • @anilpatel-ds3nx
      @anilpatel-ds3nx 2 months ago

      @@Drewbernetes I tried to post a workaround link here but it seems my messages don't get posted, so trying again.
      command pre-kubeadm:
      sed -i 's#path: /etc/kubernetes/admin.conf#path: /etc/kubernetes/super-admin.conf#' \
      /etc/kubernetes/manifests/kube-vip.yaml
      command post-kubeadm (Edit note: this causes a pod restart and may cause flaky behavior):
      sed -i 's#path: /etc/kubernetes/super-admin.conf#path: /etc/kubernetes/admin.conf#' \
      /etc/kubernetes/manifests/kube-vip.yaml

    • @Drewbernetes
      @Drewbernetes  23 days ago

      Sorry, the filter is grabbing things and I never get notified 🤦‍♂️. I'll have to have a look into this as I've not experienced it myself as of yet. I did see a version of kube-vip that seemed to restart fairly often, but since an upgrade it seems stable again, so it could well be the same thing you're seeing. I'll see if I can track this one down 😉

  • @Aminech1920
    @Aminech1920 8 months ago +1

    kubeadm init fails with IP and DNS: no route to host

    • @Drewbernetes
      @Drewbernetes  8 months ago

      Hi! Sorry to hear that.
      Are you able to supply any more information on this? It should be working if you've followed along with the tutorial. Can you copy and paste the kubeadm init command you're running? Also confirm that the configuration file you're supplying is valid; it could be a typo that is causing this.
      That being said, some things you can check from the node are:
      ping google.com
      dig google.com
      nslookup google.com
      If any of those fail then you have an issue with the node itself, in which case you'll need to resolve those first before continuing.

    • @Aminech1920
      @Aminech1920 8 months ago

      @@Drewbernetes This is the command I am running: kubeadm init --control-plane-endpoint vip-k8s-master --apiserver-advertise-address 192.168.1.16
      I set the record in /etc/hosts.
      When I do nc -v 192.168.1.40 6443 I get:
      nc: connect to 192.168.1.40 port 6443 (tcp) failed: No route to host
      Port 6443 is allowed in ufw. I am using Ubuntu 22.04.2.

    • @Drewbernetes
      @Drewbernetes  7 months ago

      Hi, sorry, this comment ended up in "held for review" for some reason.
      So you have "192.168.1.40 vip-k8s-master" in /etc/hosts?
      If so, then as long as you've correctly configured the kube-vip steps, this should work. I would recommend running "crictl ps" and reviewing the logs for the containers that were successfully created. Kube-vip creates an additional IP on the interface you've supplied to it, so as long as that's configured and the container is running, it should do that.
      Also check the interface itself to ensure the IP has been added. "ip a" will list all the interfaces and the addresses associated with them.
      Hopefully that will help you get to the bottom of why this isn't working for you.
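      A quick sketch of those checks run together (the VIP 192.168.1.40 comes from the comment above; everything else is illustrative):

      ```shell
      # The kube-vip static pod should be up and running:
      sudo crictl ps | grep kube-vip

      # The virtual IP should appear as a secondary address on the NIC
      # that kube-vip was configured with:
      ip -br addr show
      ip addr show | grep 192.168.1.40
      ```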

    • @filipforst9048
      @filipforst9048 2 months ago

      Same issue here