Testing in Production! Progressive Delivery with Canary Deployments Explained!

  • Published 22 Nov 2024

COMMENTS • 25

  • @shivamkumarz 10 days ago +4

    Quality content. Please make more videos like this 🔥

  • @IvanRizzante 10 days ago +3

    Thanks for another great video 🎉 I didn't know they finally updated Argo rollouts to point to resources instead of replacing the deployment manifest. One good reason to evaluate it again
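    The feature this comment refers to is, as far as I can tell, the `workloadRef` field, which lets a Rollout point at an existing Deployment instead of duplicating its pod template. A minimal sketch (the `my-app` names are hypothetical):

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    metadata:
      name: my-app
    spec:
      replicas: 3
      # Reference the existing Deployment instead of inlining the pod template
      workloadRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      strategy:
        canary:
          steps:
            - setWeight: 20
            - pause: {duration: 5m}
            - setWeight: 50
            - pause: {duration: 5m}
    ```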

  • @Fjglira 9 days ago +1

    Are you at KubeCon NA? I think I saw you 😊. Your videos are really cool and help a lot in understanding a lot of concepts

    • @DevOpsToolkit 9 days ago

      Oh yeah. I'm at every KubeCon. Stop by my talks.

  • @po6577 10 days ago +1

    Damn this is out of this world. Great content~

  • @liamdebell7148 11 days ago +1

    Great video!

  • @softwareengineer5764 11 days ago +2

    Viktor always leaves the audience speechless. No match in DevOps teaching. Thanks a lot. 😍😍

  • @romann1295 12 hours ago +1

    Thanks for the video! Please advise: we want to set up canary deployments for applications that are published via Nginx Ingress, but which also interact with each other via Kubernetes Services.
    Is it good practice to use both Argo Rollouts and Istio in conjunction with the Ingress controller in this case?
    In theory, Istio could replace the Nginx controller here, but we are not ready to abandon Nginx yet.

    • @DevOpsToolkit 10 hours ago

      Is it okay if I put this question on the list that I'll cover in the upcoming AMA live session next week?

    • @romann1295 10 hours ago

      @DevOpsToolkit yep, that would be cool :)

  • @ShamilSattarov 11 days ago +1

    Great video!
    I'm trying to implement almost the same configuration on GKE, just instead of Istio I'm going to use Google Service Mesh.
    I was wondering how to organize progressive delivery for multiple deployments from a "monorepo"? Should I roll back all applications if one of them fails, or must I move forward? And what do we do with the codebase in case of a rollback? Revert to the last working version or just apply patches?
    So many questions, to be honest. I'll leave them for your future videos. Thanks again!

    • @DevOpsToolkit 11 days ago +2

      I tend to treat each app separately. If you have multiple apps that need to run together at specific releases, that is, from my perspective, a single app (call it a distributed monolith). In that case, it should be treated as one, and I would probably wrap them all up into a single Helm chart or, even better, a single Custom resource.

  • @dane2565 11 days ago +1

    How would you recommend progressive delivery of a service that has a database migration?

    • @DevOpsToolkit 11 days ago +1

      A change of a schema needs to be backwards compatible. If it isn't, it's impossible to make a no-downtime deployment no matter which process is used, since the old release needs to be down before the new release is up.
      If schema changes are backwards compatible, it is relatively easy. We just need to make changes to the DB before we deploy the new release of the app. To be more precise, the deployment of the new release can start at any time if checking the DB schema version is part of the health check.
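      One hedged way to wire that schema check into Kubernetes: assuming the app exposes a hypothetical `/readyz` endpoint that returns a non-200 status until it sees the DB schema version it requires, the Pod would not receive traffic before the migration is applied:

      ```yaml
      # Sketch only: /readyz is a hypothetical endpoint that verifies
      # the DB schema version before reporting the Pod as ready.
      readinessProbe:
        httpGet:
          path: /readyz
          port: 8080
        periodSeconds: 10
        failureThreshold: 3
      ```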

  • @mirceanton 11 days ago +1

    I'd love to see a showcase of Flagger too!
    I've seen a lot of guides and tutorials involving Argo lately, but I think seeing some for Flux would also be interesting.
    Maybe a comparison between Argo Rollouts and Flagger?

  • @MrEsL13 4 days ago +1

    How do you do a rollout if most of your services are using Kafka and are not getting traffic through ingress?

    • @DevOpsToolkit 4 days ago

      You can base your queries on any metrics. The easiest way to start is to take a look at your current observability. Which metrics are you using today to deduce whether something is working correctly in production? Whatever the answer is to that question is likely a good starting point for progressive delivery.
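      In Argo Rollouts terms, that usually means an AnalysisTemplate querying your existing metrics backend. A sketch assuming Prometheus at a typical in-cluster address and hypothetical `myapp_*` metric names:

      ```yaml
      apiVersion: argoproj.io/v1alpha1
      kind: AnalysisTemplate
      metadata:
        name: kafka-consumer-health   # hypothetical name
      spec:
        metrics:
          - name: consumer-error-rate
            interval: 1m
            # Abort the rollout if more than 5% of processed messages fail
            successCondition: result[0] < 0.05
            provider:
              prometheus:
                address: http://prometheus.monitoring:9090
                query: |
                  sum(rate(myapp_messages_failed_total[5m]))
                  /
                  sum(rate(myapp_messages_processed_total[5m]))
      ```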

  • @iposipos9342 11 days ago +1

    thank you as always for your insightful videos❤. I have a question: can I also use progressive delivery to manage upgrades of 3rd-party open source apps that I use? For example, using progressive delivery to manage upgrades for ingress controllers, cert-manager, Grafana, Prometheus, etc.?

    • @DevOpsToolkit 11 days ago

      You certainly can. To begin with, if that something is based on a Kubernetes Deployment (most are), any update executes a rolling update by default. That's already progressive delivery. If you want something more complicated, like, for example, Canary Deployments, you can do that as well. You just need to tell Argo Rollouts (if that's what you're using) which resources to manage.
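      For reference, the rolling-update behavior mentioned here is what a stock Deployment does out of the box; these values are the Kubernetes defaults:

      ```yaml
      # Default rolling-update strategy on a plain Kubernetes Deployment
      spec:
        strategy:
          type: RollingUpdate
          rollingUpdate:
            maxSurge: 25%        # extra Pods allowed above desired count
            maxUnavailable: 25%  # Pods allowed to be down during the update
      ```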

    • @iposipos9342 11 days ago +1

      @DevOpsToolkit thank you 😊
      I was thinking about your last example where you used observability for canary promotion.
      If I want to upgrade my Prometheus to a newer version using a canary deployment,
      how will I set up canary promotion using observability in this case? Will I have to use another canary promotion method instead of observability?

    • @DevOpsToolkit 11 days ago

      It all depends on the architecture of the app. Prometheus, for example, cannot have more than one replica. As such, it cannot participate in any progressive delivery (at least not without a lot of work involved).
      Now, if I ignore what I wrote above (and imagine that Prometheus can run with multiple replicas), you will need to figure out what information tells you whether to move forward or backward. It should not differ from the analysis of any other app (e.g., error rate, latency, etc.).

  • @VironPapadopoulos 10 days ago +1

    You can split traffic with the Kubernetes Gateway API instead of Ingress, without a service mesh!!!
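    For context, the Gateway API expresses a traffic split as weighted `backendRefs` on an HTTPRoute. A sketch with hypothetical names (`my-gateway`, `my-app-stable`, `my-app-canary`):

    ```yaml
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: my-app
    spec:
      parentRefs:
        - name: my-gateway
      rules:
        - backendRefs:
            # Send 90% of requests to the stable Service, 10% to the canary
            - name: my-app-stable
              port: 8080
              weight: 90
            - name: my-app-canary
              port: 8080
              weight: 10
    ```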

    • @DevOpsToolkit 10 days ago

      That's true, but only if the app is accessible from outside the cluster. You likely have apps that are accessible only to other apps through Kubernetes Services, VirtualServices, etc.