Thanks for the video!
It'd be nice to merge all those projects into one so everyone uses all that with ArgoCD out of the box.
1) In my opinion the "introduction" part is too long. Consider making it more concise.
2) Usually you look... better. Hopefully you're not having any health issues?
1) Great feedback. I'll do my best to make shorter introductions in the upcoming videos. 2) No health issues. I guess it's just a side-effect of getting older or, more likely, of often not sleeping enough due to the sheer amount of work I have on my plate.
Great video as always, your videos have saved me a lot of R&D time!! QQ: I am trying to re-tag a dev image to trigger deployment to the qa environment. Does re-tagging (docker pull, tag, and push, or ecr put-image) the same image digest trigger the image updater to push the updated image version to Git based on allow-tags?
It's not that Argo CD depends on GitOps but, rather, that Argo CD is a GitOps implementation. As for k8s manifests... There is no single GitOps implementation I'm aware of that is not designed to synchronize k8s manifests. That makes sense since much of GitOps functionality is native to k8s (e.g., drift-detection, reconciliation, etc.) and it would be hard to implement elsewhere.
@@DevOpsToolkit yes and no, technically if you removed the GitOps aspect from Argo CD it'd still work fine with Application Set, Rollout, Workflow, etc. Instead of synchronizing K8s manifests from a Git repository, you'd just apply them using infrastructure as code instead.
@@DevOpsToolkit Argo CD is unfortunately useless when using Terraform or Pulumi due to the GitOps requirement of synchronizing Kubernetes YAML manifests
@@uziboozy4540 That would be something similar to what we're doing with pipelines (e.g., Jenkins, GitHub Actions, etc.) when, at some point, we execute `kubectl apply`, `helm install`, or whatever the command is to deploy an app. If that's what you're looking for, there is Argo Workflows. The major difference between "traditional" deployments and GitOps is in continuous drift-detection and reconciliation and, among other things, that's what Argo CD does and what would be hard (or impossible) to do without k8s. It "guarantees" that the desired state is the same as the actual state all the time, not only when a change is pushed to Git or someone executes a command. On top of that, there are benefits of the pull model (IaC normally uses the push model), one of them being security, since there's no need to open ports towards the destination servers/clusters. Finally, you can use IaC tools like Terraform with Argo CD. There are quite a few projects that run it inside k8s clusters as operators.
It's all in the Gist and the link to the gist is in the description of the video. Specifically, the folder is gist.github.com/vfarcic/acf300f415b6fc9f699222bfe5b9e14f#file-124-argo-cd-image-updater-sh-L22
Thanks a lot for making these videos! Several questions... if someone perhaps has time to answer:
1. How will kustomize pick up this newly added version file? I have checked your repo and searched both kustomize and the argocd auto updater. (I have not added it to my argocd installation yet.) Update, test result: Kustomize will ignore the file, and I assume argo-cd will instead, by filename convention, go in and pick up the file, then modify the manifests in-memory before applying them to the cluster? That doesn't sound so good... an "argocd kustomize special image addon" :D. That means I can't clone the repo and reproduce the manifests. I mean, I want git to be the source of truth using kustomize, not argocd's modified view of git :D. argocd should just be automation, not manifest rewriting.
2. What will happen for multi-image applications? Suppose I have 3 deployments in my application, each with 3 different images (or 1 deployment with 3 different containers). To me it sounds like the image updater can only handle one container image per application? I am probably wrong, but I can't find any good doc on this, or on how it figures out which ones to update.
3. The image auto updater doesn't support webhooks as far as I can tell :(. They have it as a future feature on their github page. Slow/polling mode is back, with poll limits also being an issue :D (explained in the auto updater doc). Any way to make it faster? I feel like this isn't much better than pipelines. Both feel a little... well... shitty :D. That said, I am new to this, so it might just be my own lack of knowledge here. But anyway, I think the overall design of the image auto updater is a bit bad - I think the image updater annotations/labels are placed on the wrong part of the manifests... and not being able to reproduce the manifests/adding some internal argo manifest modification step is also quite bad...
It would imho be good if argo image updater instead integrated with, for example, the kustomize image tag transformer, let's say, in kustomization.yaml:

images:
- name: someImageName
  newTag: v1.2.3
1. Argo CD periodically pulls Git repos and, if there are commits since the last sync, it compares the outcome of the manifests against the actual state. Since those image updater "special" files are processed as part of generating the "final" version that is compared, changes to them result in a new desired state. Now, the source of the desired state (the actual state is the source of truth) is almost always dependent on some tools. That could be Helm, Kustomize, Argo CD, or any other. The only way not to be like that would be to use "pure" k8s manifests and not Kustomize, Helm, or other templating engines. That being said, if you prefer to have a dependency only on Kustomize and not on Argo CD where the desired state is concerned, the solution is relatively simple: extend the pipelines that are building images to clone the repo with the manifests, run a simple `sed` command, and push it back. I prefer doing that over using the image updater, even though for different reasons.
2. That should not be a problem. The file created by the image updater is very similar to how Kustomize edits image tags. It references an existing image and applies a new tag to it, so it can work with any number of images.
3. The way to make it faster is to change the tag in the manifests through pipelines, probably just after the steps that build images, and not use the Image Updater at all. One advantage over pipelines is that it is based on events, allowing us greater reusability and decoupling of pipelines. However, if you do prefer having one big pipeline, I would not recommend using the Image Updater; it takes only a few lines of scripting to do the same inside the pipelines. Personally, my main problem with the Image Updater is that it does not have the flexibility to control what will be updated and where. The current way of always placing the file in the repo URL defined in the Argo CD Application manifest often does not work for me.
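For illustration, the `sed` approach described above could look roughly like this as a pipeline step (the repo URL, file name, image name, and `NEW_TAG` variable are all assumptions, not taken from the video):

```shell
# Hypothetical pipeline step: bump the image tag in a separate manifests repo.
# NEW_TAG is assumed to be set by the preceding image-build step.
NEW_TAG=1.2.4
git clone https://github.com/acme/app-manifests.git
cd app-manifests
# Replace whatever tag is currently referenced with the freshly built one.
sed -i "s|image: ghcr.io/acme/my-app:.*|image: ghcr.io/acme/my-app:${NEW_TAG}|g" deployment.yaml
git add deployment.yaml
git commit -m "Bump my-app to ${NEW_TAG}"
git push
```

Argo CD then picks up the pushed commit on its next sync, the same way it would pick up the Image Updater's write-back.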
@@DevOpsToolkit Thanks for taking the time to write such a long reply and explain the quirks you have found :). I have also come to the conclusion that, as of right now, the image updater is not for me, but it would definitely be nice to revisit it in the future if it gets more mature. It is built on a good principle, but I just feel like the implementation is not quite there yet. But I am not very excited about having one huge pipeline either :D. Ideally, I don't really want my application repo to be aware of where it runs and what environments it should update images in. Instead, I am sort of thinking of applying the argo pattern of observing a repo, but from another repo. That would take a while to implement, so for now I am probably going to use just pipelines when we introduce argocd where I work, and then gradually refine the setup. However, in the future, I am thinking of doing it a bit like argo itself does, but a bit more generalized. Basically a git repo/pipeline observer pattern (maybe this is foolish, but it sounds like fun to me!): 1. The application code repo runs its build pipeline and publishes an image (unaware of any other pipelines or argo). 2. A webhook notifies the argo repo that a pipeline in the application repo has been executed - and thus the argo repo starts running its own pipeline, pulling changes from the repo it is observing (= the application repo) and updating all affected images as specified in the argo repo only (decoupling). 3. A webhook is then automatically sent to argocd from the argo repo when the argo repo pipeline finishes updating images :) (standard stuff)
@@MrTheGurra It would be interesting to discuss this in more detail, but writing messages might not be the most convenient way to do that. How about joining the monthly chat between channel members? If that sounds good, please DM me your email and I'll add you to the invite (it's at the beginning of every month). You can find me on Twitter (@vfarcic) or Linkedin (www.linkedin.com/in/viktorfarcic/).
How do you update your manifests after building new releases?
IMPORTANT: For reasons I do not comprehend (and Google support could not figure out), UA-cam tends to delete comments that contain links. Please exclude links to ensure that your comments are not removed.
What do you think about Renovate?
@@MrPureJam It's a good one. Not directly related to the subject, but a good one nevertheless. Adding it to my TODO list for one of the upcoming videos... :)
After a long delay... ua-cam.com/video/l0YH557eIiE/v-deo.html
Thanks!
Thanks a ton, Anthony.
Mulțumim! (Thank you!)
Thanks a ton, Bogdan.
@@DevOpsToolkit My pleasure, Viktor !
Awesome vid! Looking into ArgoCD for work, and this is super timely. As always great content!👍
Thanks
That's very generous. Thanks a ton Toel.
Great content! So glad I found this channel.
Depending on the CI Pipeline you could specify triggers on file changes in the GIT. For example: only build a new Docker Image when files have changed except the argocd manifest version file. This could prevent the loop.
That is indeed true, but messy. Still, without a better option, that's what needs to be done.
Great way to make your flow simple and GitOps. I am using this Addon approach in my current deployments. Love the way ArgoCD works.
great video quality! Thanks!
Fantastic video. We might be using it since the looping disadvantage doesn't apply to us as we store our argocd manifests elsewhere. Thanks so much.
Just one question: how do we configure other registries here? We are using ECR.
argocd-image-updater.readthedocs.io/en/stable/configuration/registries/ can help
Excellent tutorial on the Argo CD image updater. Gratitude to you, my friend. Much appreciated!!
Great vid, and also quite funny. Thanks.
Very cool! I was just starting to play with this one!
Great video, thank you... please could you do a video about connecting argocd image updater to AWS Elastic container registry
adding it to my to-do list...
@@DevOpsToolkit Thank you!!
@@DevOpsToolkit I also look forward to it
Great video. 👍
Inspiring !
You can use something like Jenkins' "ignore committers" option to prevent the build loop.
True. If you choose to use pipelines for deployments (instead of gitops), most of the solutions allow some sort of filtering.
Thank you for awesome content!
I have a question: there is no guide in the official documentation on how to link a private Google Artifact Registry. Can I have some info about it?
I haven't used it with Google registry but I'm guessing it's the same as with others. You create a secret with credentials and put the reference into the config.
Thanks for this video, it's super awesome! A small one from my side:
how would you suggest organising the image updater in case I have one argocd cluster and several environments (argocd projects) where I deploy my apps: dev, test, stage, prod?
I have an app of apps for each env, but I want to have more control over what goes to production. Thanks and, again, good job!
That's one of the main issues of the Image Updater. It updates base manifests instead of updating Argo CD Application manifest. As a result, you need to split base manifests as well and, in my case, that's not the solution I want to use. I opened github.com/argoproj-labs/argocd-image-updater/issues/282 but, so far, it did not get much attention.
@@DevOpsToolkit you are right, thanks for your answer.
Hi, you said that it supports Helm, Kustomize, or native k8s YAML.
But it looks like they don't support native YAMLs.
There is an open issue regarding this topic - argocd-image-updater/issues/222 :)
Thanks, great video!
My bad... I tend to wrap k8s manifests into Kustomize (without overlays) and completely forgot that does not count as "pure" k8s YAML :(
GitOps power!
What do you think of this strategy?
1. The CI pipeline builds a new Docker image in the app repo and pushes it to the CR.
2. The CI (we are still in the app build CI pipeline) knows the image tag, so it pulls the deployment repo, uses kustomize to update the image tag in the deployment, and pushes the deployment repo back.
3. ArgoCD watches the deployment repo and notices the tag change.
4. ArgoCD does the deployment.
Here we use a single CI pipeline to build the Docker image, push it to the CR, and update the deployment repo (which is a separate one), and let argo do the rest.
That is almost exactly the same as what I do. The only difference is that I tend to update the Argo CD Application manifest instead (unless I'm using Kustomize for my manifests).
What's missing is what happens after Argo CD synchronizes the states. For example, you might want to run some other tests after a new release has been synced by Argo CD. In those cases, you might want to add something like Argo Events to notify some process to execute additional tasks.
@@DevOpsToolkit yeah, in my use cases this seems fine
How does the image updater get access to the artifact registry?
I am getting the error below but am unable to find which service account is being passed, and how (even though the tag and image are present):
level=error msg=Could not get tags from registry denied: Permission denied on resource (or it may not exist)
If anyone can help.
argocd-image-updater.readthedocs.io/en/stable/basics/authentication/ might help.
Hello, I am using ArgoCD for my deployments. Currently I am deploying applications to the default and SIT projects. ArgoCD shows applications in all projects; is there a way to select applications based on projects in the UI?
Projects is one of the filters.
The argo image updater is not able to pull ECR images; it gives an error about the auth token. How can I use an AWS role so that the argocd image updater uses it to pull images from ECR? Please share the setup steps.
I believe that you need to create a script that outputs creds and use that to authenticate to ECR with roles.
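As a sketch of what such a script might look like (the region and script name are assumptions; external credential scripts are expected to print credentials as `<username>:<password>`, which for ECR is `AWS:<token>`):

```shell
#!/bin/sh
# Hypothetical ecr-login.sh: print ECR credentials in the user:password
# form expected by the Image Updater's external ("ext:") credential scripts.
# Requires the aws CLI and valid AWS credentials (e.g., via an IAM role).
echo "AWS:$(aws ecr get-login-password --region us-east-1)"
```

Since ECR tokens expire after 12 hours, the script needs to be re-run periodically rather than its output stored in a static secret.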
@@DevOpsToolkit Actually, I am not able to complete this step. I have installed Argo CD Image Updater without using Helm, and I created the secret, but it only works for 12 hours. I want to create the token dynamically. It would be very helpful if you could provide a link to a blog or guide where I can learn how to install Argo CD Image Updater in such a way that the token is automatically created whenever the pod requires it.
@RajeshKumar-zw3zz I haven't written a tutorial like that. I will put it on my to-do list but I cannot promise when I'll get to it.
@@DevOpsToolkit Thanks waiting for your video on this topic.
What if you needed to roll back to a previous version in production? Delete the v1.2.3 version (for example) from the container registry?
I tend to always roll forward. Ideally, that would be through a quick fix. But, if that's not an option, I would just tag the older release as the new one. For example, if the current release is 1.2.3 and you want to roll back to 1.2.2, you can create a new container tag 1.2.4 (e.g., `docker image tag my-image:1.2.2 my-image:1.2.4 && docker image push my-image:1.2.4`). I believe that's better than deleting images.
You can also (temporarily) change the annotation in the manifest to the specific tag you want to roll back to and, later on, change it back to a range of tags when you want to continue moving forward.
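As a hedged sketch (the `app` alias and the version numbers are assumptions), pinning and later unpinning via the `allow-tags` annotation might look like this:

```yaml
metadata:
  annotations:
    # Temporarily pin the Image Updater to the known-good release:
    argocd-image-updater.argoproj.io/app.allow-tags: regexp:^1\.2\.2$
    # Later, restore a range to resume automatic updates, e.g.:
    # argocd-image-updater.argoproj.io/app.allow-tags: regexp:^1\.2\.[0-9]+$
```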
@@DevOpsToolkit I agree. Better to roll forward I guess
Amazing content; however, I am getting this error:
level=warning msg="skipping app 'myapp' of type 'Directory' because it's not of supported source type..
Where does that error come from?
@@DevOpsToolkit It's coming from the argo-image-updater container. After a little bit of research, it's expecting me to deploy my app using Helm charts instead of plain Git manifests. (You can specify this when you create the argocd application.) I am not sure, but I think argocd-image-updater only accepts Helm charts or Kustomize for the deployment, so plain Git manifests are not an option, for now.
@@sonergezgen4046 Your manifests should be defined as Helm charts or Kustomize but stored in Git. So it's not Git or Helm/Kustomize but Git and Helm/Kustomize.
@@DevOpsToolkit yes, that's it. I just need to do a bit of research on how I can convert all of my manifests into Helm charts. Thank you for your answers and, again, for the quality content!!
If your manifests are in YAML, it might be easier and better to use Kustomize. All you need is a kustomization.yaml file.
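For example, a minimal `kustomization.yaml` that only wraps existing plain YAML manifests (the file names are assumptions) could be:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
```

No overlays or patches are needed; the file simply lists the manifests so tools like the Image Updater can treat the directory as a Kustomize source.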
Very good!
Can the argocd image updater work with an AWS ECR repository?
It should work with (almost) any container image registry, including ECR. Since ECR is not baked in, you just need to add it to the `argocd-image-updater-cm` ConfigMap. You can find more info in argocd-image-updater.readthedocs.io/en/latest/configuration/registries/.
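As an illustration (the account ID, region, and script path are assumptions; double-check against the registries documentation), the registries entry could look roughly like this:

```yaml
registries:
- name: ECR
  api_url: https://123456789012.dkr.ecr.us-east-1.amazonaws.com
  prefix: 123456789012.dkr.ecr.us-east-1.amazonaws.com
  ping: yes
  insecure: no
  # An external script that prints "AWS:<token>"; ECR tokens expire,
  # hence the credsexpire setting to force periodic re-authentication.
  credentials: ext:/scripts/ecr-login.sh
  credsexpire: 10h
```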
Hey Victor, would it be possible to do a video on Argocd autopilot?
It definitely is. Adding it to my TODO list... :)
As a side note, I was involved with the project at the very start but left to pursue other opportunities in the meantime. It'll be fun for me to see how far it has advanced since then.
The loop you're talking about supposes that your docker build repo is the same as your CD repo. Who does that?
It means that the manifests of the application are in the same repo as the rest of that application (code, tests, build scripts, etc.). That does not mean that the Argo CD Application manifest is in that same repo. Most of the time it's not.
@@DevOpsToolkit If I understood your presentation, the image-updater will update the application manifest according to its annotations. So, in order to have an infinite loop, you need to have the Dockerfile and the app manifest in the same repo. Apart from that, great video, as usual. Thank you! Maybe you could make a video on mono-repo vs multi-repo for micro-service applications: where to put code, CI manifests, and CD manifests for each micro-service.
Hey Victor, could you please tell me how to add the config needed to pull the image from the Azure registry and add the secret needed for the Azure Git repo?
I have not (yet) used it with ACR so I do not have an example at hand. Still, it should be the same as for any other container registry so you should be able to adapt any of the examples from argocd-image-updater.readthedocs.io/en/stable/configuration/registries/.
@@DevOpsToolkit yes, I have followed this doc already, but the image updater still can't detect the app, and I can't see any logs from the image updater pod.
@@galalelatfawy I can try it out with ACR myself but I can't promise when. Probably next weekend...
Until then, can you try the option of using a k8s secret? If you already have ACR configured in, let's say, Docker, you should be able to store the contents of its config as a secret and use it to authenticate to ACR.
@@DevOpsToolkit Hi, I figured it out. I had to create the secret in the default namespace instead of argocd. First I tried to create the secret in argocd and, in the deployment file, point to that secret using a pull secret under the image section, but it didn't work. The only way it worked was to create the secret in the default namespace, noting that I have argocd deployed in the argocd ns and the app deployed in the dev ns.
Gitlab has an option to avoid their internal CI running upon a push; I think it was `-o ci.skip`. However, I'm not sure if it would work with anything other than Gitlab's CI system... and it also doesn't feel right.
There are other pipeline tools that allow you to filter out the cases when builds should not run. Nevertheless, I do not like that approach. It easily gets messy and I prefer using it only as a last resort.
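As a hypothetical example in GitHub Actions (the path pattern is an assumption), a build workflow can ignore commits that only touch the Image Updater's write-back files:

```yaml
on:
  push:
    paths-ignore:
    # Files written back by Argo CD Image Updater (one per Application)
    - ".argocd-source-*.yaml"
```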
@@DevOpsToolkit the two repository approach definitely works best in many cases.
I do wonder how big the benefit of argo-image-updater is when a CI system is in place. I can see that it removes a few lines of code but it comes at the cost of an additional tool to remember and maintain.
A good case is the integration of external images that I don't build via CI. It does feel similar to fluxcd's helm controller. Does argo-image-updater support helm or plans to go in that direction?
@@JanChristophEbersbach In general, I think that we (the industry) will be moving towards "event-based pipelines". In that scenario, the Image Updater is a good addition. While it's true that, when looking at a single app, it does not help much, when applied to all the apps it could be an important change. Even if it replaces only a few lines in a pipeline, calculated over hundreds of apps it might be a significant reduction in what we need to maintain.
It does already support Helm (together with Kustomize).
Can you show example with helm ?
It would be, more or less, the same with Helm. You'd use a Helm chart instead of the Kustomize manifests I used but, from the Argo CD Image Updater perspective, it would be the same. It would still add a file with the tag to wherever the chart is (as long as it's in a Git repo).
I'm not sure that would warrant the whole video since it's the same process. If you'd like, I can do a quick demo in one of the upcoming "Ask Me Anything" sessions.
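To give a rough idea in the meantime, with Helm the updater sets Helm parameters instead of Kustomize images. The annotation names below are from the Image Updater docs as I recall them; the image and values keys are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  annotations:
    argocd-image-updater.argoproj.io/image-list: app=ghcr.io/example/app
    # Tell the updater which Helm values hold the image name and tag.
    argocd-image-updater.argoproj.io/app.helm.image-name: image.repository
    argocd-image-updater.argoproj.io/app.helm.image-tag: image.tag
```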
@@DevOpsToolkit thanks for the quick response 👍
Plz consider this for next session 🙏🏼
keel is another option?
Yes it is :)
I already have it on my TODO list and it's very close to the top so there will be a video soon :)
Hey Victor, if the image is updated automatically, how can a rollback happen? Imagine the new image has a problem and you want to roll back to the previous version, but the image updater continuously uses the latest image.
In that situation, you'd need to update the Image Updater annotations so that it (temporarily) uses a specific version/tag instead of the latest within a given range.
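A sketch of what that pinning could look like (image name, tag, and app name are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  annotations:
    argocd-image-updater.argoproj.io/image-list: app=ghcr.io/example/app
    argocd-image-updater.argoproj.io/app.update-strategy: semver
    # Temporarily restrict allowed tags to the last known-good version;
    # revert this once the faulty release is fixed.
    argocd-image-updater.argoproj.io/app.allow-tags: regexp:^v1\.2\.3$
```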
It's weird that it doesn't support ECR via IRSA out of the box. Instead, it looks like it actually integrates with aws-cli so it can use token-based creds.
It is weird, but not too weird. It's probably that no one willing to contribute to the project is using ECR.
Except that it actually integrates with aws-cli
As Image Updater replaces the need to promote the new image in the pipeline, how do you synchronize the pipeline's build/promote/test stages if the promotion is not in the pipeline?
via an event from the push of Image Updater ?
The pipeline ultimately pushes an image to a registry, and Image Updater is monitoring the registry. The push is the event.
@@DevOpsToolkit So the "pipeline" is build only. You need another "pipeline" to launch tests, triggered by Image Updater's write-back to Git.
For the tests you execute after deployments, the trigger would be based on Kubernetes events confirming that the app is up and running, rather than on Git.
Thanks for the video!
It'd be nice to merge all those projects into one so everyone uses all that with ArgoCD out of the box.
1) In my opinion the "introduction" part is too long. Consider making it more concise.
2) Usually you look ..better. Hopefully you're not having any health issues?
1) Great feedback. I'll do my best to make shorter introductions in the upcoming videos.
2) No health issues. I guess it's just a side-effect of getting older or, more likely, of often not sleeping enough due to the sheer amount of work I have on my plate.
Great video as always, your videos have saved me a lot of R&D time!!
QQ: I am trying to re-tag a dev image to trigger deployment to the qa environment. Does re-tagging (docker pull, tag, and push, or ecr put-image) the same image digest trigger image-updater to push the updated image version to Git based on allow-tags?
Retagging should work :)
It's annoying that Argo CD depends on GitOps using K8s YAML manifests...
It's not that Argo CD depends on GitOps but, rather, that Argo CD is a GitOps implementation. As for k8s manifests... There is no single GitOps implementation I'm aware of that is not designed to synchronize k8s manifests. That makes sense since much of GitOps functionality is native to k8s (e.g., drift-detection, reconciliation, etc.) and it would be hard to implement elsewhere.
@@DevOpsToolkit yes and no, technically if you removed the GitOps aspect from Argo CD it'd still work fine with Application Set, Rollout, Workflow, etc.
Instead of synchronizing K8s manifests from a Git repository, you'd just apply them using infrastructure as code instead.
@@DevOpsToolkit Argo CD is unfortunately useless when using Terraform or Pulumi due to the GitOps requirement of synchronizing Kubernetes YAML manifests
@@uziboozy4540 That would be something similar to what we're doing with pipelines (e.g., Jenkins, GitHub Actions, etc.) when, at some point, we execute `kubectl apply`, `helm install`, or whatever the command is to deploy an app. If that's what you're looking for, there is Argo Workflows.
The major difference between "traditional" deployments and GitOps is in continuous drift-detection and reconciliation. Among other things, that's what Argo CD does, and that would be hard (or impossible) to do outside of k8s. It "guarantees" that the desired state is the same as the actual state all the time, and not only when a change is pushed to Git or someone executes some command.
On top of that, there are benefits of the pull model (IaC normally uses the push model). One of those being security since there's no need to open ports towards the destination servers/clusters.
Finally, you can use IaC tools like Terraform with Argo CD. There are quite a few projects that run it inside k8s clusters as operators.
Have you tried one of terraform k8s operators?
@DevOpsToolkit
in which folder/namespace do we need to do these steps
git add .
git commit -m "msg"
git push
It's all in the Gist and the link to the gist is in the description of the video. Specifically, the folder is gist.github.com/vfarcic/acf300f415b6fc9f699222bfe5b9e14f#file-124-argo-cd-image-updater-sh-L22
Thanks a lot for making these videos!
Several questions... if someone perhaps has time to answer:
1. How will Kustomize pick up this newly added version file? I have checked your repo and searched both the Kustomize and Argo CD auto updater docs (I have not added it to my Argo CD installation yet). Update, test result: Kustomize will ignore the file, and I assume Argo CD will instead pick it up by filename convention, then modify the manifests in-memory before applying them to the cluster? That doesn't sound great.. an "argocd kustomize special image addon" :D. It means I can't clone the repo and reproduce the manifests. I want Git to be the source of truth using Kustomize, not Argo CD's modified view of Git :D. Argo CD should just be automation, not a manifest rewriter.
2. What will happen for multi-image applications? Suppose I have 3 deployments in my application, each with 3 different images (or 1 deployment with 3 different containers). To me it sounds like the Image Updater can only handle one container image per application? I am probably wrong, but I can't find any good docs on this, or on how it figures out which ones to update.
3. The image auto updater doesn't support webhooks as far as I can tell :(. They have it as a future feature on their GitHub page. Slow/polling mode is back, with poll limits also being an issue :D (explained in the auto updater docs). Any way to make it faster?
I feel like this isn't much better than pipelines. Both feel a little... well.. shitty :D. That said, I am new to this, so it might just be my own lack of knowledge here.
But anyway, I think the overall design of the image auto updater is a bit off - I think the Image Updater annotations/labels are placed on the wrong part of the manifests... and not being able to reproduce the manifests, plus adding some internal Argo manifest modification step, is also quite bad..
It would IMHO be good if the Argo Image Updater instead integrated with, for example, the Kustomize image tag transformer, e.g.:
kustomization.yaml:
images:
- name: someImageName
  newTag: v1.2.3
1. Argo CD periodically pulls Git repos and, if there are commits since the last sync, it compares the outcome of the manifests against the actual state. Since those Image Updater "special" files are processed as part of generating the "final" version that is compared, changes to them result in a new desired state.
Now, the source of the desired state (the actual state is the source of truth) is almost always dependent on some tools. That could be Helm, Kustomize, Argo CD, or any other. The only way not to be like that would be to use "pure" k8s manifests and not Kustomize, Helm, or other templating engines.
That being said, if you prefer to have a dependency only on Kustomize and not on Argo CD when the desired state is concerned, the solution is relatively simple. Extend the pipelines that are building images to clone the repo with the manifests, run a simple `sed` command, and push it back. I prefer doing that over using image updater, even though for different reasons.
2. That should not be a problem. The file created with image updater is very similar to how Kustomize edits image tags. It references an existing image and applies a new tag to it so it can work with any number of images.
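For reference, with the Git write-back method the file it creates looks something like the following, as far as I've seen (app name and images are illustrative). It can list any number of images:

```yaml
# .argocd-source-my-app.yaml (created and maintained by Image Updater)
kustomize:
  images:
  - ghcr.io/example/app:v1.2.4
  - ghcr.io/example/worker:v2.0.1
```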
3. The way to make it faster is to do the change of a tag in a manifests through pipelines, probably just after the steps that build images, and not use the Image Updater at all.
One advantage over pipelines is that it is based on events allowing us greater reusability and decoupling of pipelines. However, if you do prefer having one big pipeline, I would not recommend using the Image Updater. It takes only a few lines of scripting to do the same inside the pipelines.
Personally, my main problem with the Image Updater is that it lacks the flexibility to define what will be updated and where. The current behavior of always placing the file in the repo URL defined in the Argo CD Application manifest often does not work for me.
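The pipeline-based alternative I mentioned can be a few lines of shell (image name and file path are illustrative):

```shell
# Hypothetical pipeline step executed right after the image build: update
# the tag in the manifests. The file is created locally here to keep the
# example self-contained; a real pipeline would clone the manifests repo,
# run the same `sed`, then commit and push.
TAG="v1.2.4"
cat > deployment.yaml <<'EOF'
spec:
  containers:
  - name: app
    image: ghcr.io/example/app:v1.2.3
EOF
# Replace whatever tag follows the image name with the new one.
sed -i "s|\(image: ghcr.io/example/app:\).*|\1${TAG}|" deployment.yaml
grep 'image:' deployment.yaml
```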
@@DevOpsToolkit Thanks for taking the time and writing such a long reply, and explaining the quirks you have found :).
I have also come to the conclusion that as of right now, image updater is not for me, but it would definitely be nice to revisit in the future, if it gets more mature. It is built on a good principle, but I just feel like the implementation is not there quite yet.
But I am not very excited about having one huge pipeline either :D. Ideally, I don't really want my application repo to be aware of where it runs and which environments it should update images in. Instead, I am sort of thinking of applying the Argo pattern of observing a repo, but from another repo. That would take a while to implement, though, so for now I am probably going to use just pipelines when we introduce Argo CD where I work, and then gradually refine the setup.
However, in the future, I am thinking of doing it a bit like argo itself does it, but a bit more generalized. Basically a git repo/pipeline observer pattern (maybe this is foolish but it sounds like fun to me!)
1. The application code repo runs its build pipeline and publishes an image (unaware of any other pipelines or Argo).
2. A webhook notifies the Argo repo that a pipeline in the application repo has been executed, so the Argo repo starts running its own pipeline, pulling changes from the repo it is observing (the application repo) and updating all affected images as specified in the Argo repo only (decoupling).
3. A webhook is then automatically sent to Argo CD from the Argo repo when its pipeline finishes updating images :) (standard stuff)
@@MrTheGurra It would be interesting to discuss this in more detail, but writing messages might not be the most convenient way to do that. How about joining the monthly chat between channel members? If that sounds good, please DM me your email and I'll add you to the invite (it's at the beginning of every month). You can find me on Twitter (@vfarcic) or Linkedin (www.linkedin.com/in/viktorfarcic/).
18:45 Testing comment deletion.
That seemed to be working (at least this time). The comment was not removed.
A pipeline should never update a repo. If Argo requires that, I'm sad it exists.
Do you mean that updates of tags in manifests, releases, and similar things should always be manual or that we should not keep those things in a repo?
could not make it work with ECR :(
I haven't used it with ECR (yet). I can go through it and get back to you but I'm not sure I'll be able to do that before the next weekend.
@@DevOpsToolkit Thanks for your reply,
I will be very glad to see it work!
i love your content, happy to see you grow
Thanks!
Thanks a ton.
@@DevOpsToolkit Thanks for sharing amazing content :)
Thanks!
Thanks a ton.
@@DevOpsToolkit no problem, great videos