What do you think about combining Argo CD, Crossplane, and KubeVela? Can we use those to simplify infra and apps management?
Pretty neat! I was working on something very similar, but with Flux instead of ArgoCD and throwing DARP into the mix. There is a nice vid from the last KubeCon EU talking about those concepts too (ua-cam.com/video/Ajc2ngGv0m8/v-deo.html&ab_channel=CNCF%5BCloudNativeComputingFoundation%5D). In the end, I think the most important part here is not the tools, but the correct mindset.
The dynamics between OPS (or, as I like to call it, the Platform Team) and DEVs should be exactly as you described, and it also echoes findings from the latest State of DevOps Report. One step further, I would like to see more standardization efforts similar to OAM, but for infrastructure components. Maybe Crossplane will lead the charge here? Time will tell.
Also, looking forward to your video about creating virtual kubernetes clusters with "pre-baked" addons and config :)
The goal of Crossplane is to become a standard like OAM. There is a strong chance of that since quite a few are already on board. AWS, IBM, Azure, and Google are contributing (in some capacity or another). But, as you said, time will tell. Standards mostly depend on adoption.
@active ccsuser
You can easily create some sort of a portal (e.g., Backstage) or use an existing solution (any Kubernetes dashboard) to have a button that scales an instance directly. Or, even better, you can have a button that pushes a change of the manifest to Git and let Argo CD or Flux do the work instead of interacting directly with the cluster.
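As a rough sketch of what I mean (the repo URL, path, and namespaces below are made up), an Argo CD Application with automated sync is all it takes for whatever that button pushes to Git to end up in the cluster:

```yaml
# Minimal sketch, assuming a hypothetical repo with plain manifests in it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-a-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/team-a-manifests.git  # hypothetical repo
    targetRevision: HEAD
    path: production
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a
  syncPolicy:
    automated:
      prune: true     # remove resources that were deleted from Git
      selfHeal: true  # revert manual changes made directly in the cluster
```

With automated sync in place, no one needs to touch the cluster directly; pushing to the repo is the whole workflow.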
As for being "up and running in less than a minute"... If you're running your apps as containers, it'll take close to no time to start a container. The rest depends on the time your app needs to initialize.
> developers have no clue about how the networking with calico works within the cluster
There is no strong need for devs to know how Calico works. As long as it's there, it'll do the job until things go wrong. When that happens, whoever tries to debug the issue will need to understand Calico. That being said, the more people know, the better.
> nor how to secure snapshots of the databases of the instances - does your kube-combi-kit / kube-devops-suite make any sense it such a situation?
Snapshots should be created automatically (e.g., once an hour, once a day, etc.), so there is no need for anyone to secure anything beyond setting up the cron job. Normally, if it's snapshots of DBs, you would go with whatever tooling the DB vendor provides. For everything else, Velero is probably the best choice.
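A minimal sketch of what "automatically" could look like with Velero (the schedule, namespace, and retention below are only example values):

```yaml
# Hypothetical Velero Schedule that backs up the "db" Namespace every hour
# and keeps each backup for a week.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: db-hourly
  namespace: velero
spec:
  schedule: "0 * * * *"      # standard cron syntax; runs at the top of every hour
  template:
    includedNamespaces:
      - db                   # hypothetical namespace where the DB instances run
    ttl: 168h0m0s            # retention period
```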
> PS: are you done now? time to relax and focus on the promotion, collecting feedback, and optimize this and that ...
I'll relax when I retire. Until then, it's full throttle :)
@active ccsuser by "correct mindset" I meant following the best practices outlined in the DevOps survey results and other industry standards and adapting them to your needs. Not the best choice of words on my part. I hope it makes more sense now.
As a developer, I think it is very complicated. It would've been great if we could use one tool instead of combining many. It also depends on the needs of the developers and the app's architecture...
Thank you Viktor, we appreciate you so much! ❤
Very clear video!
We are trying to create something like that in my company. We also want to add Backstage as a frontend view for developers to make the tooling even easier to use.
I'll keep following you for more tips ;)
Your charisma is off the charts! You're my guru, Viktor🙏
"happy couples don't make demands from eachother" **takes a short break to contemplate past serious relationships
Awesome!!! Looking forward to testing this out!!!
Fantastic vid! A colleague and I were actually talking about creating a repo similar to this. Looks like you beat us to it 😂
FANTASTIC!
Thank you Victor, looks very promising!
Kind Request: Openshift Hive vs Crossplane ?
Adding it to my TODO list... :)
Viktor come on you almost had the... A-Team ^_^
I really like where you took those tools. Nice and clean.
I realized that only once I started recording. I would have to rewrite the commands and the manifests and test it all. For a minute, I thought to do just that but then my "lazy gene" kicked in and told me not to delay everything for a day.
Next time...
@@DevOpsToolkit That "lazy gene" is what makes you good at all this so embrace it ;-)
That's so cool Viktor :) Thx you
Excellent as ever, thanks Viktor!
The feeling is we are closing in on the ultimate goal for any devops pundit - to automate ourselves out of our jobs. ;-)
But there is one pillar out of three which is kind of missing, or is mentioned only as a prerequisite in the recent demos - [infrastructure, platform, application] - but we need all 3 for completeness. It has been bothering me for some time that the means to provide the foundation level - the infrastructure, or kube cluster management - does not have the same wealth of tools as the other two. But there is the 'Cluster API' and the corresponding 'clusterctl' tool. Viktor, could you please add a review of that - in the gitops paradigm, of course - to your TODO list? Or maybe another alternative?
Great suggestion. Adding it to my TODO list...
Cool stuff, Viktor. Thanks, I really appreciate your work! I am also using ArgoCD in my new project for deploying workloads.
But currently I am not sure which is the better approach for a mid-size/big company:
A) let every dev team create its own k8s cluster and end up with maybe hundreds of clusters with many unused (but paid for) resources. As ops, you still have to manage those clusters in the end.
B) have the ops team create one "big" cluster with isolated workloads and manage it as a PaaS solution
Both approaches have their pros and cons. Approach B seems, to me as ops, more manageable (networking, storage, security, monitoring, logging, backup/recovery, etc.) and cheaper in the end by consolidating more different workloads into one cluster
The answer to that question (partly) depends on the size of the operations. If option A would result in hundreds of clusters, option B is unlikely to be a single cluster. Assuming that hundreds of clusters (option A) is the result of having hundreds of teams, I'm assuming that you have hundreds or even thousands of servers. That is unlikely to fit into a single cluster and, if it does, might be a bit too much. That being said, I'm not suggesting to go to the other extreme and create hundreds of tiny clusters. Isolation of workloads is a potentially good solution, especially with tools like vCluster.
This is amazing
Great video
Hi Viktor
thank you for your amazing videos.
I have a request: comparison between kaniko, buildah and buildkit using argo workflows and tekton. What do you think?
That's a good suggestion. I need to see whether to go directly to the comparison, or first review Buildah and Buildkit so that they get the same coverage as Kaniko (ua-cam.com/video/EgwVQN6GNJg/v-deo.html), and then compare them.
@@DevOpsToolkit
maybe doing a whole series: 1. Kaniko, 2. Buildah, 3. BuildKit, 4. comparison, 5. all of them applied using Argo Workflows and Tekton
Something like that sounds great, with the exception of Kaniko since I already have a video about it.
awesome
Hey Viktor. As always, thank you so much for such great content.
Question. I'm having a lot of trouble trying to have ArgoCD deploy KubeVela applications. It recognizes the application, shows up as Synced in Argo, but the underlying resources never get deployed like they are here in the video. Do you know why this might be?
I'd need to check the gist and see whether something changed in KubeVela. It might take a while though since I stopped using KubeVela after Crossplane released functions that effectively remove the need for it. Hence, I'm not up-to-date with KubeVela.
@@DevOpsToolkit ok. I haven't gotten that deep into crossplane yet but it's good to hear crossplane tackles the same problem since KubeVela documentation and support is so bad. I'll be looking into cp functions then. Thank you
Hi Viktor, thank you for a great video!
In such a setup, how should we define the infra dependencies of an application?
for example:
- My backend application requires MySQL & Redis services
How should my application "know" how to address those services? Is it via env variables or other tools like Consul?
Thanks again!
Crossplane can create a Secret with whichever info you need and your application can use that Secret. You can find an example in ua-cam.com/video/lZpKF1B80dU/v-deo.html.
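Here's a rough sketch of that pattern. The `SQLClaim` kind, the API group, and the image are hypothetical; they depend on the composition you define:

```yaml
# Hypothetical claim; the composition behind it provisions the database and
# writes the connection details (endpoint, username, password) into a Secret.
apiVersion: devopstoolkitseries.com/v1alpha1   # made-up API group
kind: SQLClaim
metadata:
  name: my-db
spec:
  compositionSelector:
    matchLabels:
      provider: aws
  writeConnectionSecretToRef:
    name: my-db-conn
---
# The application picks up that Secret as environment variables.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: ghcr.io/my-org/backend:latest   # hypothetical image
          envFrom:
            - secretRef:
                name: my-db-conn                 # injects the connection details
```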
@@DevOpsToolkit Hi Viktor, thank you for your reply!
I find it a bit complex to maintain both Crossplane & OAM.
Can we somehow leverage KubeVela/OAM to control both infra & applications/components/traits?
For example, I have a service that requires MySQL as one of its dependencies. When deployed to production, I would like it to use a managed service (e.g. AWS RDS) while, on the other hand, for a testing environment, I would prefer using something ephemeral/cheaper such as running an instance in my k8s cluster without PV, H/A, etc.
@@eldadyamin You can do that. You can use KubeVela to construct any k8s resources, including Crossplane. But you can also use Crossplane alone. You can use Crossplane compositions to generate application resources. Lately, I've been leaning more towards the latter option and using Crossplane alone. There are a few upcoming features that will elevate that experience to a new level.
I started a series of videos on that subject in the Upbound channel. The first one is ua-cam.com/video/lZpKF1B80dU/v-deo.html and a few additional ones will follow shortly.
@@DevOpsToolkit Thanks again! Looking forward.
Is there a way to still rely on the eksctl definitions in Crossplane? It would be a bit challenging to recreate all the eksctl features I need via compositions that I would have to create myself
There isn't anything similar to eksctl. Crossplane composites and eksctl are, in a way, similar in that both group AWS resources and simplify the experience. The difference is that composites are created by you or someone in the community. For example, I will soon add the composite I created for this video to the catalog. There are likely similar composites created by others.
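To give a feel for the level of abstraction (everything below is hypothetical and depends on the XRD/Composition someone defines), a claim based on such a composite might look like this:

```yaml
# Hypothetical cluster claim; the composition behind it would create the EKS
# control plane, node groups, IAM roles, and so on, roughly the ground eksctl covers.
apiVersion: devopstoolkitseries.com/v1alpha1   # made-up API group
kind: ClusterClaim
metadata:
  name: team-a
spec:
  compositionSelector:
    matchLabels:
      provider: aws
      cluster: eks
  parameters:
    nodeSize: medium
    minNodeCount: 3
```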
Amazing that 25-30 .yml lines can replace hundreds of lines.
What I struggle to understand is why I need Crossplane at all when I can just use an add-on in KubeVela for a component to create cloud resources.
To begin with, you need CRDs and controllers that represent and manage cloud resources. You need Crossplane for that (or some other similar tool). What you might do is replace Crossplane Compositions with KubeVela.
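For reference, this is roughly what such a CRD-backed cloud resource looks like with a Crossplane AWS provider (the exact apiVersion depends on the provider and its version, and the bucket name is made up):

```yaml
# A single managed resource; once applied, the provider's controller creates
# and continuously reconciles the actual S3 bucket.
apiVersion: s3.aws.upbound.io/v1beta1   # varies with the provider you install
kind: Bucket
metadata:
  name: my-example-bucket                # hypothetical name
spec:
  forProvider:
    region: us-east-1
  providerConfigRef:
    name: default                        # references AWS credentials configured separately
```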
@@DevOpsToolkit yeah that was my point. If you look at all the KubeVela add-ons, many are for Terraform; they install a controller, so essentially they do everything you mentioned here and it feels a lot simpler to use than what you described here. Do they work well or better than Crossplane? No idea. Haven't tried
@@crikxouba Terraform does not work well in Kubernetes (no matter whether it's used with KubeVela or something else). It does not declare individual Cloud resources as k8s resources so it cannot be queried through Kube API, the state is stored in a remote drive instead of etcd, it treats projects as single entities instead of individual resources thus making it difficult (if not impossible) to establish relations, etc. Simply put, Terraform is great, but not designed to run in Kubernetes natively. As such, wrapping Terraform into KubeVela does not produce even remotely similar results as wrapping Kubernetes (e.g., Deployment, Service, Ingress) resources. Crossplane has something similar with the Terraform provider but that is recommended only as a temporary solution that serves as an intermediary step towards using Kubernetes as a control plane.
Can't find the link for the repo. Could you help me with that? I loved the approach btw, congrats!
You'll find "Gist with the commands" in the description of the video. It contains all the commands I executed plus those I used to set up everything before I started recording. The references to the Git repos are in there.
if devs can't manage their own infrastructure, you don't have a DevOps setup
Thanks a lot Victor! This is awesome!
Do you know if Argo/Git can handle thousands of infra folders? Otherwise, when it becomes bigger, it will be hard to maintain.
Do you have any replacements in mind? I thought about storing in S3, but then it makes it a bit more complex for the developers to use compared to the simplicity of pushing to Git. What do you think?
Git should not have any issue holding thousands of manifests and, even if it does, there is no alternative to it (at least today). S3 is not a replacement for a code repository.
As for Argo CD... It can scale if needed and you can also split it into multiple instances. You do not have to have only one central server.
What would be an approach to completely bootstrap a cluster? Meaning I manually install Argo CD, Crossplane, and a cluster composite in kind. Then once the cluster is up, apply an Argo CD project *in the new cluster* that will look up and sync all my definitions, including everything I manually applied. I have been looking for this approach in your videos but it seems you always have this local “control” cluster where the top-level configs are applied
It all depends on the provider. Applying the same definitions in the second cluster works when a provider (e.g. GCP) lets you name all the resources and you use the external-name annotation in Crossplane. Some, however, tend to assign names randomly (e.g. VPC in AWS) and, in that case, applying the same manifests would not work and you would need to restore the etcd backup from the first local cluster.
Most of the companies I've worked with tend to use Crossplane at a large scale where the initial management cluster was created without it. The argument from those organizations is that one cluster not being managed by Crossplane is not important and a small price to pay.
Finally, there is always the option to use hosted Crossplane (Upbound Cloud).
After I wrote my previous message I realized that I might have misunderstood the question. Are you asking how to bootstrap other clusters to be configured with Argo CD right away? If that's the question, then I have good news. That will be the subject of an upcoming webinar I will do with an ex-colleague that is also very involved with Argo.
@@DevOpsToolkit I didn't explain it that well: I want one cluster only. I want everything managed via GitOps with Argo CD. I want the cluster to also manage external resources via Crossplane. And finally, I want the Crossplane manifests to also manage that single cluster.
@@barefeg I'll make a video on that scenario soon. It'll be either on this channel or in ua-cam.com/channels/m_v2HL0pdqtShHD-ZDDTaA.html.
Until then, and if you try moving it yourself, remember `external-name`. That is the most important attribute in Crossplane (for scenarios like that one) since it establishes the relation between Crossplane's internal state and the actual name of the resource.
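Roughly, it looks like this (the resource kind, region, and IDs are only examples):

```yaml
# The crossplane.io/external-name annotation ties a managed resource to an
# already existing cloud resource instead of creating a new one.
apiVersion: ec2.aws.upbound.io/v1beta1   # depends on the provider you use
kind: VPC
metadata:
  name: my-vpc
  annotations:
    crossplane.io/external-name: vpc-0abc123de456f789   # hypothetical existing VPC ID
spec:
  forProvider:
    region: us-east-1
    cidrBlock: 10.0.0.0/16
```

With that annotation in place, Crossplane adopts the existing resource rather than trying to create a new one.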
@@DevOpsToolkit thanks! Unfortunately I’m on AWS so maybe it will not be possible to transfer the resources without a backup restore
Nice one as always Viktor! I'm interested in how this could be used in conjunction with keptn quality gates and some chaos engineering products but this looks sweet. In regards to vclusters vs this approach, apart from the costs, do you think this is always the better option?
I don't think this approach collides with Keptn or vcluster. Those are solving different problems. Argo CD is about syncing k8s resources with whatever is defined in Git. Crossplane is about managing infra, and both Crossplane Composites and KubeVela are good at simplifying potentially complex manifests.
Keptn is a different way to run pipelines and vcluster is about creating virtual clusters. You can easily combine vcluster with KubeVela or Crossplane.
You can use cloud foundry or nomad too ;)
Both are great tools in isolation but if you include the ecosystem around them, Kubernetes is way ahead of them.
I do agree that Nomad is much easier than k8s, just as Docker Swarm was. However, I am more concerned about the ecosystem for both CF and Nomad. Almost all innovation goes into k8s. Most of the vendors and OSS projects are being developed as k8s first and others later and only if we're lucky. It's enough to take a look at the CNCF landscape and that is only a fraction of the projects being developed primarily for k8s.
So yes, there is value in both CF and Nomad but I am not sure that they will be able to catch up with k8s. It's not about the quality any more but about the ecosystem. That's where k8s is years ahead of them and the difference is likely going to increase over time.
P.S. A long time ago, I placed bets on Docker Swarm for similar reasons to those you're stating. That was a failed investment. I do not want to repeat that error again. I feel that k8s won the "war" and the next change will come in the form of something completely new and not from CF or Nomad.
P.P.S. There is nothing wrong with continuing to use either of the two, but I would not make investments into them.
What is the way to verify Crossplane manifest files? Does Datree integrate with Crossplane out of the box?
That's the subject of the next week's video :)
As a side note, Datree is no more. The development on it stopped.
@@DevOpsToolkit Any other way I can certify Crossplane manifests?
@UtkarshMishra-it4oc If the question is how to test them, the answer is coming next week on Monday. On the other hand, if the question is how to validate them through policies and confirm that people specify only what they should, I would choose Kyverno.
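A minimal sketch of what I mean by validating through policies with Kyverno (the claim kind and the allowed values are hypothetical; they would come from your own XRDs):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-cluster-claims
spec:
  validationFailureAction: Enforce   # reject non-compliant claims instead of only reporting them
  rules:
    - name: allowed-node-sizes
      match:
        any:
          - resources:
              kinds:
                - ClusterClaim       # hypothetical claim kind defined by your XRD
      validate:
        message: "nodeSize must be small, medium, or large"
        pattern:
          spec:
            parameters:
              nodeSize: "small | medium | large"
```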
Thanks, looking forward to the upcoming video.
Great video, like this approach a lot. But I think argocd is a bit heavy and not very modular. I think flux2 is much more lightweight and is ideal for this scenario.
I like both and have a hard time choosing one over the other. I'll do my best to create a variation of the video with Flux or, at least, provide a Gist with modified commands and repo.
Thanks, Viktor!
I have an unpopular opinion - there's no need to create abstractions on top of abstractions to simplify something. What you're proposing in the video is the old segregation between Dev & Ops, quite the opposite of what DevOps culture tried to resolve. Instead of relying on the pure and well-defined k8s API (or cloud API), you're proposing to "simplify" this and wrap it into a new, more complex abstraction, which only the Ops team is now in charge of.
Mastering k8s or cloud API is not rocket science. Any dev team should do it and embrace the "you build it, you run it" approach. Don't try to simplify the already simplified good abstractions.
I think it depends on the way the teams are organized. I do believe that DevOps is about creating self-sufficient teams that can manage everything, from reqs, to apps, to production, and everything in between. However, that does not necessarily exclude having specialized teams that create services for those teams. You can, in a way, think of AWS as being a specialized team that creates services we consume. Similarly, you can think of SREs as being similar teams, but working internally within a company. A good example would be, for example, the team in Spotify that created Backstage which, later on, was adopted by all other teams in Spotify.
From my perspective, being self-sufficient does not mean that we are not using services created by others (be it outside or inside the company), but that we do not request from others to do our jobs. Saying "we do not deploy our apps, others do that" is bad. Saying "we are using a service or a helper tool created by others" is potentially good. It reduces duplication and it prevents reinventing the wheel, as long as that is not mandatory.
I do agree that "mastering k8s or cloud API is not rocket science", but it's also not trivial. Kubernetes API is easy, but (core) Kubernetes resources are not meant to be used directly. For example, there is no way to define an application in Kubernetes. Instead, without resorting to the projects in the ecosystem, we need to define a bunch of resources (e.g., Deployment, Service, VirtualService, Gateway, HPA, VPA, etc.). Those are more like building blocks than app definitions. Knative, for example, is much closer to being a way to define an application than those I mentioned before. AWS is in a similar situation. You cannot define a cluster. Instead, you need to define a bunch of resources, and the way you do that will differ from one organization to another. Azure and, especially, GCP are in a much better place from that perspective than AWS.
Finally, I am not advocating not to use k8s API. Quite the contrary. I'm advocating the usage of k8s API as a way to manage everything. Cloud APIs, on the other hand, are a mess. Kelsey Hightower had quite a few conversations in which he explained it much better than me.
@@DevOpsToolkit Thank you for the reply!
Yes, as a developer you need to deal with Deployments, Services, Ingresses, storage, etc. Yet, I doubt this could be abstracted away and locked by a central team which doesn't fully understand the application requirements. This leads to the same old silos but with modern tools.
I agree about adopting tools and services delivered by other teams (we live in the microservices world today). But I disagree that wrapping basic building blocks in some abstract APIs is a good thing. You're locking teams out of exploring and learning the native k8s API, teaching them to use your custom API instead.
The paradox: simplification and poor abstractions lead to more complex systems.
I completely agree that being locked by a central team is a very bad idea. I see composites as a way to collaborate but, as you said, that can be abused and provide yet another way for one team to lock another.
This conversation reminds me of talks I had with teams that used Jenkins shared libraries. At the time, not necessarily today, I saw them as a great way to create shared knowledge and provide reusable libraries that can be used by everyone in a company. However, they were often used by shared services teams to say "everyone else must use shared libraries we created".
What I'm trying to say is that creating reusable code is great, but it should not be mandatory to use. Forcing others to use something you created is bad. Providing an option for others to benefit from your work, if they choose to do so, is good.
Also, composites, application models, and similar tools do not necessarily have to be created by central teams. Any team can, for example, convert their app to an OAM definition. Others might, or might not choose to use it.
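As an illustration, a team-owned OAM definition can be as small as the sketch below (the image and trait values are made up):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: backend
spec:
  components:
    - name: backend
      type: webservice                           # built-in KubeVela component type
      properties:
        image: ghcr.io/my-org/backend:latest     # hypothetical image
        ports:
          - port: 8080
            expose: true
      traits:
        - type: scaler                           # built-in trait; sets the replica count
          properties:
            replicas: 2
```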
@@DevOpsToolkit Excellent point! This also reminds me of some drawbacks of applying DRY in microservices. You get a tightly coupled system with dependently deployable services.
As for cloud API mess, totally agree there. I hope Crossplane approach will be adopted as a standard by clouds.
Great video Viktor! Would love to see you dig more into the dev pipeline too and hear your opinions on tools like Telepresence, Okteto/DevSpace, or Google's ko.
I already did a video about Okteto (might revisit it with the new stuff they've been adding). Adding Telepresence and DevSpace to the TODO list... :)
DevSpace was just published at ua-cam.com/video/nQly_CEjJc4/v-deo.html
Telepresence should follow :)