How Autoscaling Works In Kubernetes (And Beyond)? Kubernetes Tutorial

  • Published on 18 Jun 2024
  • What is autoscaling? How to scale applications and Kubernetes clusters? What is the difference between vertical and horizontal scaling? Should we combine Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler (VPA), and Cluster Autoscaler (CA) in Kubernetes?
    #Kubernetes #Scaling #Cloud
    Consider joining the channel: / devopstoolkit
    ▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
    ➡ Gist with the commands: gist.github.com/a1f5a629263ec...
    🎬 How To Auto-Scale Kubernetes Clusters With Karpenter: • How To Auto-Scale Kube...
    🎬 GKE Autopilot - Fully Managed Kubernetes Service From Google: • GKE Autopilot - Fully ...
    🎬 Serverless Computing With Knative And Containers As A Service (CaaS): • Serverless Computing W...
    🎬 Containers As A Service (CaaS) Flavor Of Serverless With Azure Container Apps: • Containers As A Servic...
    🎬 Amazon Lambda Containers - How to Package AWS Functions as Container Images: • Amazon Lambda Containe...
    🎬 Google Cloud Run (GCR) vs Azure Container Instances (ACI) vs AWS ECS with Fargate: • Google Cloud Run (GCR)...
    ▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
    If you are interested in sponsoring this channel, please use calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
    ▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
    ➡ Twitter: / vfarcic
    ➡ LinkedIn: / viktorfarcic
    ▬▬▬▬▬▬ 🚀 Courses, books, and podcasts 🚀 ▬▬▬▬▬▬
    📚 Books and courses: www.devopstoolkitseries.com
    🎤 Podcast: www.devopsparadox.com/
    💬 Live streams: / devopsparadox
    ▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
    00:00 Autoscaling Introduction
    02:54 Scaling Applications Vertically
    13:09 Scaling Applications Horizontally
    20:43 Scaling Nodes Horizontally
    26:41 Scaling Nodes Vertically
    29:13 Kubernetes Autoscaling Tips
  • Science & Technology

COMMENTS • 68

  • @DevOpsToolkit
    @DevOpsToolkit  2 years ago +1

    How do you scale your apps and #Kubernetes clusters?

  • @akhil-ph
    @akhil-ph 2 years ago +3

    Thank you for this awesome video 👍, we all would like to see a video of HPA combined with Prometheus.

  • @kemibrianolimba682
    @kemibrianolimba682 8 months ago +1

    Brilliant...That was a great explanation. Keep up the great work

  • @adiavanth369
    @adiavanth369 2 years ago +2

    Very nice presentation as always. Looking forward to learning about HPA using custom metrics from Prometheus.

  • @viniciosantos
    @viniciosantos 2 years ago +3

    Great video as usual! This channel is very underrated

  • @javisartdesign
    @javisartdesign 2 years ago

    Many thanks! I had never even heard of VerticalPodAutoscaler being used! There are many ways to describe scaling for applications; I also like the Scale Cube, which looks at it more from the point of view of how microservices can be scaled.

  • @iposipos9342
    @iposipos9342 13 days ago +1

    Thanks for your videos. Yes, I would like to know how to scale Pods with HPA based on metrics in Prometheus. Thank you very much.

    • @DevOpsToolkit
      @DevOpsToolkit  13 days ago

      I'm planning to release a video that explores different types of scaling on July 8.

  • @romankrut7038
    @romankrut7038 1 year ago +1

    Hey, I want to leave my feedback. Your videos are very useful and the explanations are very good. Keep going, man!

  • @CrashTheGooner
    @CrashTheGooner 1 year ago +1

    Master ❤️

  • @hamstar7031
    @hamstar7031 2 years ago +5

    Great video on teaching and as a refresher for me on HPA and VPA!
    I would like to learn and understand how to utilize metrics from Prometheus as another means for the autoscaling use-case.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +4

      It's coming... :)

    • @DrorNir
      @DrorNir 1 year ago +1

      @@DevOpsToolkit can't wait! I need it for a project like right now

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago

      @@DrorNir If everything goes as planned, that one should go live the third Monday from now.

    • @hiteshsmit
      @hiteshsmit 1 month ago +1

      Is the video made/available yet - the one on using Prometheus for custom metric monitoring and using it for HPA?

  • @mateuszszczecinski8241
    @mateuszszczecinski8241 2 years ago +1

    Thank you.

  • @Levyy1988
    @Levyy1988 2 years ago +8

    Great video as always!
    I think that it would also be useful to introduce the KEDA autoscaler along with Prometheus-based HPA.
    I am using KEDA and it is working great (in my case with RabbitMQ) since I can scale from zero Pods, which is a huge cost saving.
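    For context, a minimal sketch of what such a scale-to-zero setup can look like with KEDA's ScaledObject (the deployment name, queue name, thresholds, and the TriggerAuthentication reference below are illustrative placeholders; exact trigger fields depend on the KEDA version):

      apiVersion: keda.sh/v1alpha1
      kind: ScaledObject
      metadata:
        name: queue-consumer              # hypothetical name
      spec:
        scaleTargetRef:
          name: queue-consumer            # the Deployment KEDA should scale
        minReplicaCount: 0                # allows scaling all the way down to zero Pods
        maxReplicaCount: 10
        triggers:
          - type: rabbitmq
            metadata:
              queueName: orders           # illustrative queue
              mode: QueueLength           # scale based on the number of messages waiting
              value: "20"                 # target messages per replica
            authenticationRef:
              name: rabbitmq-auth         # TriggerAuthentication holding the RabbitMQ connection details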

    • @arns9006
      @arns9006 2 years ago

      We do Keda + Karpenter .. Magic

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      Yeah! KEDA is awesome.

    • @johnw.8782
      @johnw.8782 2 years ago +1

      Can I ask if you're using KEDA with GKE? I've had issues with intermittent metrics server availability. I love KEDA and want to use it, but it's def a blocker.

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      @@johnw.8782 I haven't used it in GKE just yet. So far, most of my experience with KEDA is on other providers.

  • @ioannisgko
    @ioannisgko 2 years ago +1

    Thank you for the video!!! Question: how do we horizontally autoscale databases in Kubernetes? What are the challenges and what would be the proper way to overcome them? (Maybe an idea for a future video)

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      Adding it to the TODO list for a future video... :)
      Until then... If designed well, a DB should come with an operator that takes care of common operations, including scaling, so all you really have to do is change the number of replicas (unless you enable autoscaling, which is still not a common option).
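      As a rough illustration only (the API group, kind, and fields below are hypothetical, since the real ones depend on the operator you choose), scaling such a database usually comes down to changing a single field and letting the operator handle the rest:

        apiVersion: databases.example.com/v1   # hypothetical operator API group
        kind: Database                         # hypothetical custom resource managed by the operator
        metadata:
          name: my-db
        spec:
          replicas: 3                          # change this value; the operator adds or removes members safely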

  • @singhnarendra
    @singhnarendra 2 years ago +1

    Hey Viktor.. this video is very helpful. Please make a video on HPA with Prometheus monitoring solution.

  • @sahilbhawke605
    @sahilbhawke605 2 years ago +1

    Hey, you're doing a great job; I'm always waiting for your videos and for the notification bell to buzz ❤️ Just a question about HPA with respect to memory: do we have any information for reference? That would be helpful. Also, can we use both CPU and memory simultaneously in our HPA manifest?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      Don't use VPA together with HPA. They are not aware of each other and might take conflicting actions.
      If you're wondering how to deduce how much memory to assign to a Deployment managed by HPA, explore Prometheus. It should give you the info about memory utilization or anything else.
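      To illustrate the other part of the question: with the autoscaling/v2 API, a single HPA manifest can combine CPU and memory targets (the names and thresholds below are only examples), and the controller scales on whichever metric produces the higher replica count:

        apiVersion: autoscaling/v2
        kind: HorizontalPodAutoscaler
        metadata:
          name: my-app                         # example name
        spec:
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: my-app
          minReplicas: 2
          maxReplicas: 10
          metrics:
            - type: Resource
              resource:
                name: cpu
                target:
                  type: Utilization
                  averageUtilization: 80       # percent of the requested CPU
            - type: Resource
              resource:
                name: memory
                target:
                  type: Utilization
                  averageUtilization: 80       # percent of the requested memory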

    • @sahilbhawke605
      @sahilbhawke605 2 years ago +1

      @@DevOpsToolkit Sure, thanks for the information 💯 Can you please come up with a more detailed video on cluster autoscaling in a GKE cluster and how it works - e.g. PodDisruptionBudget and the safe-to-evict annotation, and how to use them the correct way? That would be a great help 💯

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      @@sahilbhawke605 Adding it to my TODO list... :)

    • @sahilbhawke605
      @sahilbhawke605 2 years ago +1

      @@DevOpsToolkit Sure, I'll be eagerly waiting ;)... Thanks for being such a great support by sharing your valuable 💯 knowledge with us through your videos. Always waiting for your next video #devops 💯

  • @allengooch7
    @allengooch7 2 years ago +1

    Good stuff. I believe the units for describing CPU limits should be called millicores instead of milliseconds, however.
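    For reference, a small example of how those units show up in a container spec (the values are arbitrary): 500m means 500 millicores, i.e. half a CPU core, not a time unit:

      resources:
        requests:
          cpu: 250m          # 250 millicores = a quarter of a core
          memory: 256Mi
        limits:
          cpu: 500m          # 500 millicores = half a core
          memory: 512Mi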

    • @arns9006
      @arns9006 2 years ago +1

      whatever you say, based on your avatar, you're right

  • @jarodmoser5588
    @jarodmoser5588 2 years ago +1

    Great video! Would it be possible to run the VPA in recommendation mode while relying upon the HPA to ensure scaling of Pods? Could that combination be used to fine-tune the autoscaling policies?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      It could, but I would not rely on that. VPA recommendations might easily be incorrect due to HPA activities. I recommend using Prometheus instead.
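      For anyone who still wants the recommendations (with the caveat above in mind), a rough sketch of a VPA running in recommendation-only mode, assuming the autoscaling.k8s.io/v1 API and an example Deployment name:

        apiVersion: autoscaling.k8s.io/v1
        kind: VerticalPodAutoscaler
        metadata:
          name: my-app
        spec:
          targetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: my-app
          updatePolicy:
            updateMode: "Off"    # only produce recommendations; never evict or resize Pods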

  • @acartag7
    @acartag7 1 year ago +1

    I started using Jsonnet and it has been a pain to use, with a steep learning curve. A few months later we moved to ytt as it was easier to manage, but now we are going with Kustomize for all new projects.
    Jsonnet is really powerful, but when you bring someone new onto the team and show them Jsonnet, they can easily feel overwhelmed.

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago

      That's my main issue with Jsonnet. It's too easy to over-complicate it and confuse everyone.

  • @Martin-sr8yb
    @Martin-sr8yb 2 years ago +1

    I would like to see a future video talking about autoscaling metrics like the ones you mentioned in the video (Prometheus, Kibana).

  • @salborough2
    @salborough2 4 months ago +1

    Hi Viktor, thanks for a great video :) Just a question from my side - do you know how GitOps (i.e. with Argo CD) handles autoscaling? I assume the replica count in the Deployment YAML will no longer conform to the declared YAML in an autoscaling setup.

    • @DevOpsToolkit
      @DevOpsToolkit  4 months ago

      Yeah. You should remove hard-coded replicas or nodes when using scalers. That's not directly related to GitOps. Argo CD and similar tools only sync manifests into clusters. If you do specify both replicas and a scaler, the former will be overwritten by the latter.
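      As a hedged sketch of what that can look like in practice (all names below are illustrative): either omit replicas from the Deployment entirely and let the HPA own it, or tell Argo CD to ignore drift on that one field:

        # Option 1: Deployment without a hard-coded replicas field (the HPA manages it)
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: my-app
        spec:
          # replicas: intentionally omitted
          selector:
            matchLabels:
              app: my-app
          template:
            metadata:
              labels:
                app: my-app
            spec:
              containers:
                - name: my-app
                  image: ghcr.io/example/my-app:1.0.0   # placeholder image

        # Option 2: excerpt from an Argo CD Application that ignores replicas changes made by a scaler
        spec:
          ignoreDifferences:
            - group: apps
              kind: Deployment
              jsonPointers:
                - /spec/replicas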

    • @salborough2
      @salborough2 4 months ago +1

      @@DevOpsToolkit Thanks so much, Viktor - ahh, OK, gotcha, I didn't realise I could leave out the replica count in the Deployment manifest - thanks :) I'm going to look into this more. I'm also going to check out your videos on Argo Events and Argo Rollouts to see how to deal with progressing a release through different environments while still using GitOps.

  • @unixbashscript9586
    @unixbashscript9586 2 years ago +1

    Hi Viktor, thanks for this! I'd also really appreciate a video on how to do HPA based on metrics from Prometheus.
    Edit: I also have a question about Karpenter. Does it scale both horizontally and vertically?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +2

      Great! Adding it to the TODO list... :)

    • @Levyy1988
      @Levyy1988 2 years ago +1

      Karpenter scales horizontally, but it has the advantage that it will add a node sized to handle all of your Pods that are in a pending state, instead of just adding a node to one of your autoscaling groups, which can be too big for your current needs.

    • @unixbashscript9586
      @unixbashscript9586 2 years ago

      @@Levyy1988 hey, thanks

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago

      @@Levyy1988 Exactly. That's why I said in the video that vertical scaling of nodes is typically combined with horizontal (new node, new size).
      Karpenter is a much better option than the "original" Cluster Autoscaler used in EKS. It provides functionality similar to GKE Autopilot.
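      For a rough idea only (Karpenter's API has changed between releases, so treat the group and field names below as approximate and check the current docs), a NodePool describes constraints rather than fixed instance sizes, which is what lets Karpenter pick a node that fits the pending Pods:

        apiVersion: karpenter.sh/v1beta1       # assumption: v1beta1; newer releases use karpenter.sh/v1
        kind: NodePool
        metadata:
          name: default
        spec:
          template:
            spec:
              requirements:
                - key: karpenter.sh/capacity-type
                  operator: In
                  values: ["on-demand", "spot"]
              nodeClassRef:
                name: default                  # references a cloud-specific node class (e.g. EC2NodeClass)
          limits:
            cpu: "100"                         # cap on the total CPU the pool may provision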

  • @PhilLee1969
    @PhilLee1969 4 months ago +1

    Great video - as a complete beginner to Kubernetes, it's helped me understand what I want to do with a particular project that I'm working on. I currently have a long-term process that runs under Python but runs in a single thread. Up until now I've scaled vertically by moving to more powerful machines, but also horizontally by running additional copies of the process on different processor cores and then dividing the clients up geographically. If I've understood correctly, with Kubernetes it looks like I could run one copy but get it to spread across multiple cores or even multiple servers as required, whilst to my clients it just looks like one machine? Do I need to do anything to my process to ready it for deployment on Kubernetes, or is it just a case of setting the resource limits and scaling parameters?

    • @DevOpsToolkit
      @DevOpsToolkit  4 months ago

      Assuming that it is a stateless application, all you have to do is define an HPA that will scale it for you or, if scaling is not frequent, manually set the number of replicas in the Deployment.

    • @PhilLee1969
      @PhilLee1969 4 months ago +1

      It's stateless (I think) as nothing is left once the application exits other than some log files. I'm definitely going to have to put together a cluster and have a go. Thanks again !

  • @VinothKumar-ej2jc
    @VinothKumar-ej2jc 2 years ago +1

    When scale-in/scale-down happens, how does k8s make sure there is no traffic being served by those Pods? Is there a chance that users experience an interruption due to Pods being scaled in?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      When Kubernetes decides to kill a Pod, among other things it does the following.
      1. Stop all new incoming traffic from going to that Pod
      2. Send SIGTERM signal to the process inside the containers in that Pod
      3. Wait until the processes respond with OK to SIGTERM or it times out (timeout is configurable).
      4. Destroy the Pod
      Assuming that SIGTERM handling is implemented in the app, all existing requests will be processed before the Pod is shut down. SIGTERM itself is not specific to Kubernetes but a mechanism that applies to any Linux process (it might work on Windows as well, but I'm not familiar with it enough to confirm that). That means that if an app implements "best practices" that are independent of Kubernetes, there should be no issues when shutting down Pods.
      As a side note, the same process is used when upgrading the app (spin up new Pods and shut down the old ones) so you need to think about those things even if you never scale down.
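      To make the "timeout is configurable" part concrete, a minimal sketch (the name and image are placeholders; 30 seconds is Kubernetes' default, shown here explicitly):

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: my-app
        spec:
          selector:
            matchLabels:
              app: my-app
          template:
            metadata:
              labels:
                app: my-app
            spec:
              terminationGracePeriodSeconds: 30         # how long Kubernetes waits after SIGTERM before sending SIGKILL
              containers:
                - name: my-app
                  image: ghcr.io/example/my-app:1.0.0   # placeholder image
                  lifecycle:
                    preStop:
                      exec:
                        command: ["sleep", "5"]         # optional pause so endpoints are removed before shutdown begins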

  • @VinothKumar-ej2jc
    @VinothKumar-ej2jc 2 years ago +1

    May I know why you have deployment.yaml and ingress.yaml in the overlay directory even though you don't have any changes/patches to them? You could keep them in the base directory itself, right?

    • @VinothKumar-ej2jc
      @VinothKumar-ej2jc 2 years ago

      Also, how is a ReplicaSet different from HPA?

    • @DevOpsToolkit
      @DevOpsToolkit  2 years ago +1

      You're right. I should have placed those inside the base directory. I copied those files from another demo and failed to adapt them for this one.
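      For readers wondering what the intended layout looks like, a hedged sketch (the file names mirror the ones mentioned above; the overlay-specific resource is illustrative): shared manifests live in the base, and each overlay only references the base plus whatever it adds or patches:

        # base/kustomization.yaml
        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        resources:
          - deployment.yaml
          - ingress.yaml

        # overlays/production/kustomization.yaml
        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        resources:
          - ../../base     # reuse everything from the base
          - hpa.yaml       # overlay-specific resource, e.g. the HPA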

  • @bules12
    @bules12 3 months ago +1

    Gist is not well documented in the description! Can you fix it please?

    • @DevOpsToolkit
      @DevOpsToolkit  3 months ago +1

      Sorry for that, and thanks for letting me know. It should be fixed now.

    • @bules12
      @bules12 3 months ago +1

      @@DevOpsToolkit thanks for the quick response, ur the best!

  • @swapnilshingote8773
    @swapnilshingote8773 2 years ago

    First to comment...yooo

  • @snehotoshbanerjee1938
    @snehotoshbanerjee1938 2 months ago +1

    Does Kubernetes support scaling to zero?

    • @DevOpsToolkit
      @DevOpsToolkit  2 months ago

      It does but that is rarely what you want. There's almost always something you need to run.

    • @snehotoshbanerjee1938
      @snehotoshbanerjee1938 2 months ago +1

      @@DevOpsToolkit The question is about running an LLM app, which is costly to run 24/7.

    • @DevOpsToolkit
      @DevOpsToolkit  2 months ago

      If that is the only thing you're running in that cluster, the answer is yes. You can scale down the worker nodes. However, the control plane nodes will have to keep running.
      Actually, now that I think of it, why don't you just create a cluster when you need it and destroy it when you don't?

  • @owenzmortgage8273
    @owenzmortgage8273 1 year ago +1

    Demo it, don't just talk about it; everybody can google 100 answers on this topic. Show people what you did in an enterprise environment, what you did in the real world. Don't just read the white paper.

    • @DevOpsToolkit
      @DevOpsToolkit  1 year ago +1

      Have you seen any other videos on this channel? Almost all of them include demos, with a small percentage being about how something works (like this one). If anything, I might need to do fewer demos.