EKS Cluster Auto Scaling (Kubernetes Autoscaler | EKS Cluster Autoscaler | EKS Autoscale Nodes)

  • Published 18 Jun 2024
  • 🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
    👉 [UPDATED] AWS EKS Kubernetes Tutorial [NEW]: • AWS EKS Kubernetes Tut...
    ▬▬▬▬▬ Experience & Location 💼 ▬▬▬▬▬
    ► I’m a Senior Software Engineer at Juniper Networks (12+ years of experience)
    ► Located in San Francisco Bay Area, CA (US citizen)
    ▬▬▬▬▬▬ Connect with me 👋 ▬▬▬▬▬▬
    ► LinkedIn: / anton-putra
    ► Twitter/X: / antonvputra
    ► GitHub: github.com/antonputra
    ► Email: me@antonputra.com
    ▬▬▬▬▬▬ Related videos 👨‍🏫 ▬▬▬▬▬▬
    👉 [Playlist] Kubernetes Tutorials: • Kubernetes Tutorials
    👉 [Playlist] Terraform Tutorials: • Terraform Tutorials fo...
    👉 [Playlist] Network Tutorials: • Network Tutorials
    👉 [Playlist] Apache Kafka Tutorials: • Apache Kafka Tutorials
    👉 [Playlist] Performance Benchmarks: • Performance Benchmarks
    👉 [Playlist] Database Tutorials: • Database Tutorials
    🔴UPDATED🔴: How to Create EKS Cluster Using Terraform - • How to Create EKS Clus...
    =========
    ⏱️TIMESTAMPS⏱️
    0:00 Intro
    0:16 Create EKS cluster with eksctl
    4:13 Create IAM OIDC provider
    5:18 Create IAM policy for autoscaler
    6:05 Create IAM role for autoscaler
    7:25 Deploy Kubernetes cluster autoscaler
    12:50 Create Nginx deployment to test autoscaler
    =========
    Source Code
    🖥️ - GitHub: github.com/antonputra/tutoria...
    #EKS #Kubernetes #AWS
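
    A rough sketch of the eksctl setup the timestamps above walk through (0:16 and 4:13). The cluster name, region, instance type, and sizes are illustrative and may differ from the exact values used in the video and repo:

    # cluster.yaml -- minimal eksctl config with a node group the autoscaler can grow
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo
      region: us-east-1
    managedNodeGroups:
      - name: general
        instanceType: t3.large
        minSize: 1
        maxSize: 10
        desiredCapacity: 1
        labels:
          role: general

    # create the cluster, then the IAM OIDC provider used later for the autoscaler's IAM role
    eksctl create cluster -f cluster.yaml
    eksctl utils associate-iam-oidc-provider --cluster demo --approve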

COMMENTS • 93

  • @AntonPutra
    @AntonPutra  9 months ago

    🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
    👉 [UPDATED] AWS EKS Kubernetes Tutorial [NEW]: ua-cam.com/play/PLiMWaCMwGJXnKY6XmeifEpjIfkWRo9v2l.html&si=wc6LIC5V2tD-Tzwl

  • @geraldmusaga6969
    @geraldmusaga6969 2 years ago +17

    you were born for this. the calmness in your voice made the knowledge transfer so easy and smooth. thanks very much

    • @AntonPutra
      @AntonPutra  2 years ago

      Thank you so much Gerald! :)

  • @anshulangaria72
    @anshulangaria72 10 months ago +3

    This is a gem of a channel.

  • @vijaysarde385
    @vijaysarde385 1 year ago +1

    I am glad that I found your channel, and I am learning wonderful things that help make my day-to-day tasks super convenient. Appreciate it! :)

  • @sandv2902
    @sandv2902 3 years ago +1

    The best explanation I have ever seen... Thanks for putting in the effort to share the knowledge.

  • @wchopite
    @wchopite 2 years ago +1

    Excellent video bro, thanks for sharing this kind of information

  • @palanisamy-dl9qe
    @palanisamy-dl9qe 3 years ago +2

    Hi Buddy
    Thank you so much for the video, it's very useful, and it's good to be able to replicate the same thing in our real-world project.

  • @diegomacielp
    @diegomacielp 2 years ago +1

    Working, congrats!!!

  • @mijhaelcastrociviero2848
    @mijhaelcastrociviero2848 2 years ago +1

    Thanks! Very useful!

  • @phucbker
    @phucbker 2 years ago

    Thank you for sharing!

  • @shulyakav
    @shulyakav 2 years ago +1

    Very useful.
    Thank you.

  • @AntonPutra
    @AntonPutra  2 years ago +5

    🔴UPDATED🔴: How to Create EKS Cluster Using Terraform - ua-cam.com/video/MZyrxzb7yAU/v-deo.html

  • @DineshKumar-bo5vx
    @DineshKumar-bo5vx 2 years ago +1

    Superb explanation

  • @AntonPutra
    @AntonPutra  1 year ago +1

    👉 How to Manage Secrets in Terraform - ua-cam.com/video/3N0tGKwvBdA/v-deo.html
    👉 Terraform Tips & Tricks - ua-cam.com/video/7S94oUTy2z4/v-deo.html
    👉 ArgoCD Tutorial - ua-cam.com/video/zGndgdGa1Tc/v-deo.html

  • @lindelaninema8850
    @lindelaninema8850 3 years ago +1

    Thanks man.

  • @ErlindoSilva-cn8mv
    @ErlindoSilva-cn8mv 1 month ago

    Very good video. Tku.

    • @AntonPutra
      @AntonPutra  1 month ago +1

      thanks, working on a new one that uses EKS Pod Identities

  • @AntonPutra
    @AntonPutra  3 years ago +1

    Kubernetes Horizontal Pod Autoscaler - ua-cam.com/video/pI_pMsuazqw/v-deo.html

  • @zongzaili9701
    @zongzaili9701 2 years ago +1

    Excellent video

  • @JackReacher1
    @JackReacher1 1 year ago

    Anton, next video idea: setting up App Mesh in EKS (Fargate) using a Helm chart with Terraform.
    It has been some time since we visited EKS + service mesh (App Mesh).

  • @fabianvivanco6555
    @fabianvivanco6555 2 years ago

    Excellent video! I have a query: I want to change the instance type (enlarge its size) of my only node. Do you recommend doing it in the nodegroup.yaml (instanceType:) and redeploying it? Are the changes I had on my previous node lost?

    • @AntonPutra
      @AntonPutra  2 years ago

      The cluster autoscaler can only scale your cluster horizontally. If you just need to update the instance type once, you can update the instance group and perform a rolling update.
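
      A hedged sketch of that kind of rolling replacement with eksctl (cluster, node group names, and sizes are illustrative):

      # create a new node group with the bigger instance type
      eksctl create nodegroup --cluster demo --name general-v2 \
        --node-type t3.xlarge --nodes 1 --nodes-min 1 --nodes-max 10

      # delete the old group; eksctl drains its nodes so pods reschedule onto the new group
      eksctl delete nodegroup --cluster demo --name general --drain=true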

  • @khoale8688
    @khoale8688 2 years ago

    Great video. If one of the nodes goes into NotReady status, does the autoscaler remove it and create a new node automatically?

    • @AntonPutra
      @AntonPutra  2 years ago

      If the autoscaler drained the node, then yes.

  • @manishc4531
    @manishc4531 2 years ago +1

    Thanks for the video. On which parameters (CPU, memory, ...) does the ASG scale? Can we specify targets such as CPU > 70% for scale-up and CPU < 20% for scale-down? If yes, where should we specify them - in YAML or in the ASG (in the console)?

    • @AntonPutra
      @AntonPutra  2 years ago +2

      This particular lesson focuses on autoscaling of k8s worker nodes. If you want to scale pods up or down, you can use the Horizontal Pod Autoscaler. The Horizontal Pod Autoscaler can increase the number of pods based on memory, CPU, or custom metrics, and then, if they won't fit into the cluster, the Kubernetes autoscaler will create more k8s workers.
      Kubernetes Horizontal Pod Autoscaler (CPU Utilization | Based on Memory | Autoscaling | HPA | EKS) - ua-cam.com/video/pI_pMsuazqw/v-deo.html
      Horizontal Pod Autoscaler CUSTOM METRICS & PROMETHEUS: (Kubernetes | EKS | Autoscaling | HPA | K8s) - ua-cam.com/video/iodq-4srXA8/v-deo.html
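
      A minimal HPA sketch for the pod-level half of that picture (the Deployment name "nginx" and the 70% target are assumptions, not values from the lesson):

      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: nginx
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: nginx
        minReplicas: 2
        maxReplicas: 20
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 70   # HPA adds pods above 70% average CPU; the cluster autoscaler adds nodes if they no longer fit

      This assumes metrics-server is installed in the cluster, which the HPA needs for CPU/memory metrics.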

    • @manishc4531
      @manishc4531 2 years ago +1

      @@AntonPutra Thanks Anton. I wanted to know on which parameters the autoscaler scales the number of nodes. To put it another way, when does the autoscaler/ASG kick in and scale the nodes out or in?

    • @georgecimpoies
      @georgecimpoies 2 years ago

      @@AntonPutra I am hesitant to set up the autoscaler in a production environment for this exact reason. I want to have a buffer of 50% memory available at all times (because HA is very important and I want to be able to handle surges/memory leaks). I don't want the autoscaler to figure out it can save 1-2 nodes by squeezing all the other nodes to 80% capacity and risk running into resource starvation. How can I tell the autoscaler not to scale down in this case? Just by setting --nodes-min in eksctl?

    • @AntonPutra
      @AntonPutra  2 years ago

      @@georgecimpoies I would suggest going over the autoscaler configuration options and seeing if one of them suits your needs.
      github.com/antonputra/tutorials/blob/main/lessons/070/k8s/0-cluster-autoscaler.yaml#L158-L162
      github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#common-notes-and-gotchas
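
      For reference, a hedged sketch of the cluster-autoscaler knobs that usually cover this "keep a buffer / don't pack nodes too tightly" concern (values are illustrative, not a recommendation):

      # extra flags on the cluster-autoscaler container command
      - --scale-down-utilization-threshold=0.5   # only consider a node for removal below 50% requested utilization
      - --scale-down-unneeded-time=10m           # node must stay underutilized this long before removal
      - --scale-down-delay-after-add=10m         # cool-down period after a scale-up

      # or opt individual nodes out of scale-down entirely:
      kubectl annotate node <node-name> cluster-autoscaler.kubernetes.io/scale-down-disabled=true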

  • @georgecimpoies
    @georgecimpoies 2 years ago

    Hi Anton! When setting up the cluster autoscaler for multiple EKS clusters, is it possible to reuse the same IAM role, or do I need to create individual roles for each cluster? (since each role has a trust relationship with a unique OIDC)

    • @AntonPutra
      @AntonPutra  2 years ago +1

      Yes, you must create separate IAM roles if you want to use OIDC. Or you can go with the old approach without the OpenID Connect provider and attach the role directly to the nodes. That way you can share it between multiple EKS clusters.

    • @georgecimpoies
      @georgecimpoies 2 years ago

      @@AntonPutra thank you!

  • @steavenpat9122
    @steavenpat9122 1 year ago

    Great tutorial. When I scaled down the pods, it did not scale down the nodes. Can you explain how I can scale down the nodes when the pods are scaled down?

    • @AntonPutra
      @AntonPutra  1 year ago

      It just takes a lot of time, but you can see the logic in the controller log.

  • @TheNeoamerica
    @TheNeoamerica 1 year ago

    Hi Anton, good video. Is it necessary to have OpenID Connect, or can we use our root or admin user for the same?

    • @AntonPutra
      @AntonPutra  1 year ago +1

      not necessary, you can create an IAM policy and attach it to the IAM role that you're using for the k8s workers

    • @TheNeoamerica
      @TheNeoamerica 1 year ago

      @@AntonPutra ok, thanks. That means I would need an IAM role, attach the policy to it, and attach the role to the k8s worker nodes... yes?

    • @AntonPutra
      @AntonPutra  1 year ago +1

      @@TheNeoamerica yes

    • @TheNeoamerica
      @TheNeoamerica 1 year ago

      @@AntonPutra thanks, much appreciate your input here...
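
      A hedged sketch of that node-role approach (the role name is a placeholder, and the policy is the commonly used cluster-autoscaler permission set, which may differ slightly from the one created in the lesson):

      # cluster-autoscaler-policy.json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "autoscaling:DescribeAutoScalingGroups",
              "autoscaling:DescribeAutoScalingInstances",
              "autoscaling:DescribeLaunchConfigurations",
              "autoscaling:DescribeTags",
              "autoscaling:SetDesiredCapacity",
              "autoscaling:TerminateInstanceInAutoScalingGroup",
              "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*"
          }
        ]
      }

      # attach it to the IAM role the worker nodes already use
      aws iam put-role-policy --role-name <node-group-role> \
        --policy-name ClusterAutoscaler \
        --policy-document file://cluster-autoscaler-policy.json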

  • @NathercialoveS2
    @NathercialoveS2 1 year ago

    Hey Anton, when I create the deployment to test I get "pod didn't trigger scale-up: 1 node(s) didn't match Pod's node affinity/selector". Do you know why?

    • @AntonPutra
      @AntonPutra  1 year ago

      if you have a nodeSelector or affinity, make sure they match your node labels
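
      A quick hedged way to compare the two sides (the deployment name "nginx" is an assumption):

      # labels actually present on the nodes
      kubectl get nodes --show-labels

      # what the pod template is asking for
      kubectl get deployment nginx -o jsonpath='{.spec.template.spec.nodeSelector}'
      kubectl get deployment nginx -o jsonpath='{.spec.template.spec.affinity}'

      # the Events section explains exactly which selector/affinity term failed
      kubectl describe pod <pending-pod-name>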

  • @radzhivapasov4929
    @radzhivapasov4929 2 years ago +1

    hi Anton, thank you for the video. I tried to create the EKS autoscaler, I followed all the documentation, and at the end I created a test deployment with 50 replicas and it didn't scale up. What could be the issue? And do you have a video on how to create the autoscaler with Terraform? I'm trying to automate it.

    • @AntonPutra
      @AntonPutra  2 years ago

      The best way to find out is to check the cluster-autoscaler logs. Are there any issues? Maybe permission denied?
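
      A hedged sketch of how to pull those logs (assuming the autoscaler runs as a deployment named "cluster-autoscaler" in kube-system, as in the standard manifest):

      kubectl -n kube-system logs deployment/cluster-autoscaler --tail=200
      # or follow them while you create the test deployment
      kubectl -n kube-system logs -f deployment/cluster-autoscaler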

    • @radzhivapasov4929
      @radzhivapasov4929 2 years ago

      Is it possible to get in touch with you over Zoom?

    • @AntonPutra
      @AntonPutra  2 years ago

      @@radzhivapasov4929 yes, of course, either on Friday or over the weekend. me@antonputra - zoom

    • @radzhivapasov4929
      @radzhivapasov4929 2 years ago

      @@AntonPutra Anton, yes, that would be great. Please let me know when it's convenient for you, I'm available 24/7

    • @AntonPutra
      @AntonPutra  2 years ago

      @@radzhivapasov4929 replied in the email

  • @himakar9999
    @himakar9999 8 months ago

    I have one question: I need to install the Cluster Autoscaler in an EKS cluster, but it itself needs at least one node to operate. So how can you ensure that you have at least one node running when setting up the EKS cluster?
    Is it possible to bring up the first node using AWS Fargate and install the cluster autoscaler on top of that node?

    • @AntonPutra
      @AntonPutra  8 months ago

      1. The autoscaling config has 3 parameters: desired size, min size, and max size. You need to set min size to 1.
      2. You probably can, but you still need to run some system pods such as DNS, so why not use a regular node for that?
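
      For illustration, those three parameters can be set (or changed later) with eksctl; the names and numbers below are placeholders:

      eksctl scale nodegroup --cluster <cluster-name> --name <nodegroup-name> \
        --nodes 2 --nodes-min 1 --nodes-max 10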

  • @KhushbooKumari-qw7nn
    @KhushbooKumari-qw7nn 2 years ago

    Is it possible to configure the Cluster Autoscaler by attaching an instance profile to the EKS worker node group, if I don't want to use OIDC?

    • @AntonPutra
      @AntonPutra  2 years ago

      Sure, just create a role with the necessary permissions and attach it to the nodes

    • @KhushbooKumari-qw7nn
      @KhushbooKumari-qw7nn 2 years ago

      @@AntonPutra But this will not work with kube2iam; I already tried it and it gave an error. Anyway, thank you for the information.

  • @sreeramn435
    @sreeramn435 2 years ago

    Hi Anton, I have followed the steps exactly as above, but my cluster auto scaler pod runs for a few seconds and goes into CrashLoopBackOff. Any idea?

    • @AntonPutra
      @AntonPutra  2 years ago

      Hi Sreeram, CrashLoopBackOff means your app (process) exits with a status code not equal to 0. To check the last log before the crash, run "kubectl logs -p <pod-name>"

    • @sreeramn435
      @sreeramn435 2 years ago +1

      @@AntonPutra Resolved the issue by increasing the CPU and memory limits for the autoscaler pod. Thanks.

    • @AntonPutra
      @AntonPutra  2 years ago

      @@sreeramn435 thanks for letting me know.

  • @vijaykumary5427
    @vijaykumary5427 2 years ago

    What about memory... does this work for a memory crunch as well?

    • @AntonPutra
      @AntonPutra  2 years ago

      well, if you have the horizontal pod autoscaler deployed in the cluster, which can increase the number of pods based on memory usage, then the EKS autoscaler will increase the number of nodes to fit all the containers.

    • @vijaykumary5427
      @vijaykumary5427 2 years ago

      @@AntonPutra makes sense...

  • @okopyl
    @okopyl 1 year ago

    Could you please explain how to configure the auto-scaling policy?
    It’s so confusing.
    Getting more questions from these tutorials than answers…

    • @AntonPutra
      @AntonPutra  1 year ago

      1. Create EKS with a node group where max != min (example - github.com/antonputra/tutorials/blob/main/lessons/166/terraform/2-eks.tf#L15-L16)
      2a. Deploy either the Autoscaler - github.com/antonputra/tutorials/blob/d09887fa44a415ca1c5a0b80f0eef090dfb692f7/lessons/160/infrastructure-modules/kubernetes-addons/1-cluster-autoscaler.tf#L69-L93
      2b. Or Karpenter - github.com/antonputra/tutorials/blob/d09887fa44a415ca1c5a0b80f0eef090dfb692f7/lessons/156/terraform/10-karpenter.tf#L22-L29

  • @sreeharinambiarkt7765
    @sreeharinambiarkt7765 2 years ago

    Can we scale to 0 pods?

    • @AntonPutra
      @AntonPutra  2 years ago +1

      I don't think it's possible at the moment - github.com/aws/containers-roadmap/issues/724

  • @nataliarodideal4904
    @nataliarodideal4904 1 year ago

    Hello Anton,
    I have an error from the cluster-autoscaler pod: Failed to create AWS Manager: cannot auto-discover ASGs: WebIdentityErr: failed to retrieve credentials.
    Failed to create ASG cache: cannot auto-discover ASGs: WebIdentityErr: was unable to retrieve credentials.
    thanks

    • @AntonPutra
      @AntonPutra  1 year ago

      It seems to me you misconfigured IAM. Do you use the OpenID Connect provider, or do you attach an IAM role/instance profile to the node?

    • @nataliarodideal4904
      @nataliarodideal4904 1 year ago

      @@AntonPutra OpenID
      Thanks for your comment

    • @nataliarodideal4904
      @nataliarodideal4904 1 year ago

      @@AntonPutra I'm still dealing with this problem "cannot autodiscover ASGs: WebIdentityErr:failed to retrieve credentials"
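
      A hedged checklist for that WebIdentityErr with the OIDC/IRSA setup (the service account, role, and cluster names below are the conventional ones and may differ in your setup):

      # 1) the autoscaler's service account must carry the IAM role annotation
      kubectl -n kube-system get sa cluster-autoscaler \
        -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'

      # 2) the cluster's OIDC issuer must be registered as an IAM identity provider
      aws eks describe-cluster --name <cluster-name> --query "cluster.identity.oidc.issuer" --output text
      aws iam list-open-id-connect-providers

      # 3) the role's trust policy must reference that provider, with the sub condition
      #    "system:serviceaccount:kube-system:cluster-autoscaler"
      aws iam get-role --role-name <autoscaler-role-name> --query "Role.AssumeRolePolicyDocument"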

    • @nataliarodideal4904
      @nataliarodideal4904 1 year ago

      Also, the company's EKS cluster was created manually, resource by resource, and now they want to migrate to Terraform for the infrastructure config. Please suggest something on how to do this, or maybe you already have a video about it. Thanks

  • @ajaykharat8832
    @ajaykharat8832 5 months ago

    Repository link please. The one in the description is not pointing to any repo.

    • @AntonPutra
      @AntonPutra  5 months ago +1

      It works for me - github.com/antonputra/tutorials/tree/main/lessons/070

    • @ajaykharat8832
      @ajaykharat8832 5 months ago

      Thanks, @@AntonPutra. It helped.

  • @palanisamy-dl9qe
    @palanisamy-dl9qe 3 years ago

    I deployed the nginx managed-deployment YAML file, but my pod is not coming up. It seems to be the node affinity value (managed-nodes), which I changed myself based on my node group name, but the pod is still not coming up.
    Warning FailedScheduling 18s (x2 over 18s) default-scheduler 0/2 nodes are available: 2 node(s) didn't match Pod's node affinity.
    Normal NotTriggerScaleUp 8s cluster-autoscaler pod didn't trigger scale-up: 1 node(s) didn't match Pod's node affinity

    • @AntonPutra
      @AntonPutra  3 years ago

      Please check whether your node labels match the node affinity.
      kubectl get nodes --show-labels

    • @palanisamy-dl9qe
      @palanisamy-dl9qe 3 years ago

      @@AntonPutra I have two nodes with labels:
      kubectl label node ip app=backend-heavy
      kubectl label node ip app=backend-light
      I updated the node affinity to value=backend-light and then deployed the nginx pod, but it's still giving the same error

    • @AntonPutra
      @AntonPutra  3 years ago

      @@palanisamy-dl9qe well, it's hard to debug based on this information. If you can share a GitHub repo with your code, it may be easier to help you.
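
      For anyone hitting the same error, a hedged sketch of a test deployment whose nodeAffinity matches a node label (the key "app" and value "backend-light" follow the labels in the comment above; adjust them to your own node group labels):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: app            # must be a label key that exists on the nodes
                          operator: In
                          values:
                            - backend-light   # must match the node label value exactly
            containers:
              - name: nginx
                image: nginx:1.25
                resources:
                  requests:
                    cpu: 100m
                    memory: 128Mi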

  • @cggirl7481
    @cggirl7481 2 years ago

    @Anton Putra
    Could you please do a video on setting a CloudWatch alarm for when a new node is scaled up?

    • @AntonPutra
      @AntonPutra  2 years ago

      I guess it's possible, but I won't recommend setting alarms for scaling up/down the cluster. You can just have a Grafana dashboard showing the number of nodes.

    • @cggirl7481
      @cggirl7481 2 years ago

      @@AntonPutra We already use Grafana. But could you please give me an idea of how we can set this up using AWS CloudWatch, as I don't see any metrics that can be used for this purpose..
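
      One hedged option (not from the video): the node group's Auto Scaling group can publish group metrics to CloudWatch, and an alarm can watch the in-service instance count. The ASG name, threshold, and SNS topic below are placeholders:

      # enable ASG group metrics (published to the AWS/AutoScaling namespace)
      aws autoscaling enable-metrics-collection \
        --auto-scaling-group-name <asg-name> \
        --granularity "1Minute" \
        --metrics GroupInServiceInstances

      # alarm when the node count rises above a threshold
      aws cloudwatch put-metric-alarm \
        --alarm-name eks-nodes-scaled-up \
        --namespace AWS/AutoScaling \
        --metric-name GroupInServiceInstances \
        --dimensions Name=AutoScalingGroupName,Value=<asg-name> \
        --statistic Maximum --period 60 --evaluation-periods 1 \
        --threshold 3 --comparison-operator GreaterThanThreshold \
        --alarm-actions <sns-topic-arn>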