Quality content. Please make more videos like this 🔥
Thanks for another great video 🎉 I didn't know they finally updated Argo Rollouts to point to resources instead of replacing the Deployment manifest. One good reason to evaluate it again
Are you at KubeCon NA? I think I saw you 😊. Your videos are really cool and help a lot with understanding the concepts
Oh yeah. I'm at every KubeCon. Stop by my talks.
Damn this is out of this world. Great content~
Great video!
Viktor always leaves the audience speechless. No match in DevOps teaching. Thanks a lot. 😍😍
Thanks for the video! Please advise: we want to set up canary deployments for applications that are published via Nginx Ingress but that also interact with each other via Kubernetes Services.
Is it good practice to use both Argo Rollouts and Istio in conjunction with the Ingress controller in this case?
In theory, Istio could replace the Nginx controller here, but we are not ready to abandon Nginx yet
Is it okay if I add this question to the list that I'll cover in the upcoming AMA live session next week?
@DevOpsToolkit yep, that would be cool :)
Great video!
I'm trying to implement almost the same configuration on GKE, just instead of Istio I'm going to use Google Service Mesh.
I was wondering how to organize progressive delivery for multiple deployments from a "monorepo"? Should I roll back all applications if one of them fails, or must I move forward? And what do we do with the codebase in case of a rollback? Revert to the last working version or just apply patches?
So many questions, to be honest. I'll leave them for your future videos. Thanks again!
I tend to treat each app separately. If you have multiple apps that need to run together at specific releases, that is, from my perspective, a single app (call it a distributed monolith). In that case, it should be treated as one, and I would probably wrap them all up into a single Helm chart or, even better, a single Custom Resource.
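If you go the Helm route, an umbrella chart along these lines is roughly what I have in mind (just a sketch, with made-up names and versions):

```yaml
# Chart.yaml of a hypothetical umbrella chart that ships all related apps as one release
apiVersion: v2
name: distributed-monolith      # made-up name for the group of apps
version: 1.2.3                  # one version for the whole group
dependencies:
  - name: orders                # made-up subcharts; each one is one of the apps
    version: 4.0.1
    repository: https://charts.example.com
  - name: payments
    version: 2.7.0
    repository: https://charts.example.com
```

That way a rollback or a canary applies to the whole group at once instead of to each app individually.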
How would you recommend progressive delivery of a service that has a database migration?
A schema change needs to be backwards compatible. If it isn't, it's impossible to do a no-downtime deployment no matter which process is used, since the old release needs to be down before the new release is up.
If schema changes are backwards compatible, it is relatively easy. We just need to make the changes to the DB before we deploy the new release of the app. To be more precise, the deployment of the new release can start at any time if checking the DB schema version is part of the health check.
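As a rough sketch of that last point, assuming the app exposes a /healthz endpoint that keeps failing until it finds the DB schema version it expects (names and image are made up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                               # hypothetical app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: ghcr.io/example/my-app:2.0.0  # the new release
        ports:
        - containerPort: 8080
        readinessProbe:                      # new Pods receive no traffic until /healthz
          httpGet:                           # confirms the expected schema version
            path: /healthz
            port: 8080
          periodSeconds: 10
          failureThreshold: 3
```

With that in place, the order of operations stops mattering much: Pods of the new release simply stay out of rotation until the DB migration is done.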
I'd love to see a showcase of Flagger too!
I've seen a lot of guides and tutorials involving Argo lately, but I think seeing some for Flux would also be interesting.
Maybe a comparison between Argo Rollouts and Flagger?
Adding it to my to-do list...
How do you do a rollout if most of your services are using Kafka and are not getting traffic through an Ingress?
You can base your queries on any metrics. The easiest way to start is to take a look at your current observability. Which metrics are you using today to deduce whether something is working correctly in production? Whatever the answer is to that question is likely a good starting point for progressive delivery.
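To make it concrete, here's a rough sketch of an Argo Rollouts AnalysisTemplate driven by a Prometheus success-rate query (the metric name, service label, and Prometheus address are assumptions, not something from the video):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
  - name: service-name                        # passed in from the Rollout
  metrics:
  - name: success-rate
    interval: 1m
    successCondition: result[0] >= 0.95       # keep promoting only while errors stay low
    failureLimit: 3                            # roll back after three failed measurements
    provider:
      prometheus:
        address: http://prometheus.monitoring:9090   # assumed in-cluster address
        query: |
          sum(rate(http_requests_total{service="{{args.service-name}}",code!~"5.."}[5m]))
          /
          sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))
```

Swap the query for whatever already tells you that production is healthy (Kafka consumer lag, queue depth, latency, and so on).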
Thank you as always for your insightful videos ❤. I have a question: can I also use progressive delivery to manage upgrades of 3rd-party open source apps that I use? For example, using progressive delivery to manage upgrades for ingress controllers, cert-manager, Grafana, Prometheus, etc.?
You certainly can. To begin with, if that something is based on a Kubernetes Deployment (most are), any update executes a rolling update by default. That's already progressive delivery. If you want something more complicated, like, for example, canary deployments, you can do that as well. You just need to tell Argo Rollouts (if that's what you're using) which resources to manage.
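For example, a Rollout that references an existing Deployment instead of replacing it could look roughly like this (names and steps are made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: third-party-app                       # hypothetical third-party app
spec:
  replicas: 3
  workloadRef:                                # point at the existing Deployment
    apiVersion: apps/v1                       # instead of duplicating its manifest
    kind: Deployment
    name: third-party-app
  strategy:
    canary:
      steps:
      - setWeight: 20                         # shift 20% of the Pods to the new version
      - pause: {duration: 5m}
      - setWeight: 50
      - pause: {duration: 5m}
```

The Rollout reads the Pod template from the referenced Deployment, so the manifest itself stays managed by whatever installed it (for example, the upstream Helm chart).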
@DevOpsToolkit thank you 😊
I was thinking about your last example where you used observability for canary promotion.
If I want to upgrade my Prometheus to a newer version using a canary deployment, how will I set up the canary promotion using observability in that case? Will I have to use another canary promotion method instead of observability?
It all depends on the architecture of the app. Prometheus, for example, cannot have more than one replica. As such, it cannot participate in any progressive delivery (at least not without a lot of work involved).
Now, if I ignore what I wrote above (and imagine that Prometheus can run with multiple replicas), you will need to figure out what information tells you whether to move forward or backward. It should not differ from the analysis of any other app (e.g., error rate, latency, etc.).
You can split traffic with the Kubernetes Gateway API instead of Ingress, without a service mesh!!!
That's true but only if the app is accessible from outside the cluster. You likely have apps that are accessible only by other apps through Kubernetes Services, VirtualServices, etc.
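For reference, the kind of weighted split mentioned above would look roughly like this with the Gateway API (the Gateway and Service names are made up):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
  - name: my-gateway                # hypothetical Gateway handling external traffic
  rules:
  - backendRefs:
    - name: my-app-stable           # 80% of requests keep hitting the stable Service
      port: 80
      weight: 80
    - name: my-app-canary           # 20% go to the canary Service
      port: 80
      weight: 20
```

For the internal Service-to-Service traffic, something still needs to sit between the apps, which is where the mesh comes in.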