What do you think of Buildpacks? Is Dockerfile sufficiently hard to write to justify Buildpacks?
I don't think it's about the complexity of writing a Dockerfile. Some of the key functionality that tools like Buildpacks give us is the rebase feature, which allows us to rebuild specific layers, such as a patched base image, without needing to rebuild the entire image. They also add an SBOM to every image, and even to every layer, allowing for a better understanding of exactly what is in the image. And they enforce best practices, for example by not allowing images to run as root.

Kpack specifically also integrates directly with cosign to auto-sign all images built by kpack. It can also point at a branch and rebuild new tags automatically for every git commit, and then with tools like the Flux image reflector you can run CD with the new image. Or use tools like Cartographer, which would pick up the new image and can then run tests, scan the image, etc., and then deploy it. In kpack, by simply changing the ClusterBuilder/Builder to point to a newer patched buildpack or base image, you can rebase as many images as you want in a matter of seconds. We saw this with Log4j, for example: the time to patch all images was almost instantaneous, versus the Dockerfile approach, which was a very tedious and long process.

The "42 years ago" timestamp is also a very key feature: buildpacks zero out the creation date when building, which makes builds reproducible. If you build the exact same image using pack on your laptop and then with kpack, it will end up with the same SHA, which means we have reproducible images. Kpack will also auto-generate new builds when it sees a new commit or an updated buildpack or base image, which is pretty awesome.
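For anyone who wants to try the rebase and SBOM features described above, here is a minimal sketch using the pack CLI; the image name is a placeholder, and the sbom subcommand needs a reasonably recent pack version:

```sh
# Rebase: swap the run-image layers of an already-built image for a patched
# base without rebuilding the application layers (seconds, not minutes).
pack rebase my-registry/my-app:latest

# Download the SBOM that the buildpacks attached to the image, to inspect
# exactly what each layer contains.
pack sbom download my-registry/my-app:latest --output-dir ./sbom
```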
This answer is so dense with technical info that you deserve an award
Clearly there's a lot more to it than the complexity of writing a Dockerfile, thank you for this enlightening comment. I do think writing good Dockerfiles can be a challenge as well, and I'd much rather lean on peer review and the community for this art.
In case you never used containers before Docker Inc. came along: Buildpacks actually date from that time. Nice to see the k8s world catching up with some good concepts from Heroku and CloudFoundry :)
I tend to think that anything magic should be avoided, and I know many others think the same (and many others don't ^^)
Magic can help a lot, but it can also become a nightmare, since you don't know exactly what happens behind the scenes. For instance, how do you better optimize the image size? (I can have a
One thing, whether it's good or bad is up to each of us, is that when using a monorepo you can run into trouble, since the detection of which buildpack to use is bound to certain patterns. Other than that, buildpacks are a really awesome way to provide a fundamental building block.
With buildpacks you save a lot on image storage and I/O costs. They do layering and layer reuse very well. Say you have 100 Java apps. Java apps need a Java VM, third-party libraries, resource files, and the application class files. With a Dockerfile, the general approach is to build the JAR/WAR file (usually tens or hundreds of MB) and use some openjdk image (another tens or hundreds of MB) as the base to build the container image. You make one small change in one source file and BAM! Your 100 MB JAR/WAR blob has changed. Every time someone pushes a commit, you use about 100 MB of additional storage in your container registry!
With buildpacks, separate layers are created for the base image, third-party libraries, resource files, and class files. A commit usually affects only the class-file or resource layers, which are quite small. The remaining layers can simply be reused from other images, so each commit only costs a few KB to a few MB.
Also, if you want to change the OS or JVM layer for all apps, buildpacks can simply repoint that layer, so the operation can be really fast. And on the client side, thanks to this layer reuse, image deltas are much smaller.
The same can be done (and should be done) with a Dockerfile as well. Each instruction in a Dockerfile is a separate layer, and container image builders (e.g., Docker, Kaniko, etc.) rebuild only the layers from the first one that has changed. That makes ordering in a Dockerfile very important. Still, Buildpacks make that same result much easier to accomplish (there's almost nothing to do). The bulk of the work is in making your own builders, creating a clearer separation between ops and dev types of tasks (ops make builders, devs use them).
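To illustrate the ordering point, here is a sketch of a hypothetical Dockerfile for a Java app that puts rarely-changing layers before the frequently-changing one; the base image, paths, and main class are illustrative only:

```sh
cat > Dockerfile <<'EOF'
FROM eclipse-temurin:17-jre

# Third-party jars change rarely; copying them first means this large
# layer is served from cache on most builds.
COPY target/dependency/ /app/lib/

# Application classes change on every commit; keeping them last means
# only this small layer is rebuilt and pushed.
COPY target/classes/ /app/classes/

ENTRYPOINT ["java", "-cp", "/app/classes:/app/lib/*", "com.example.Main"]
EOF
docker build -t my-registry/my-app:latest .
```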
This makes a lot of sense in my case, where we're utilizing buildpacks in other products, like those provided by Pivotal, which containerize code but don't provide an image that could be deployed to other platforms. It would be interesting to use these tools to create a buildpack-based image through the pipeline rather than having the platform create it for us on deployment. The end result should be the same, but it would give us a deployable image, making migration to, and availability on, other platforms (Kubernetes, for example) that much easier. Using those images would also allow developers to run and build containers locally, making the local environment more comparable to production; a big win IMO.
Great video ! Thanks !
Your reviews are very helpful, thank you!
Another excellent video. Thank you, Víctor.
Spring has been using buildpacks for quite a while now; lots of devs are not even aware of that when they use the plugin... Every time I see a Dockerfile in a project, I ask "why?" when the lifecycle of the image can be tied to the lifecycle of the project itself, coupling it even more to the project (which is good from a CI/CD perspective, imo)
We could definitely use this in my company: we are expected to harden images, our application teams almost all use the same stack, and not many of our developers know how to write Dockerfiles. Creating a kind of common builder could help us.
I guess everything depends on how complex it is to create our own builder.
Creating builders is relatively easy and straightforward. You might need to write a bit of Ruby though.
@@DevOpsToolkit Thank you for the info !
One aspect that should be mentioned is that you can provide a branch or tag as the spec.git.revision, and kpack will poll your repository and rebuild on updates. No CI/CD pipeline needed.
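As a minimal sketch of that setup, assuming a hypothetical repository, registry, and a ClusterBuilder named default (field names per kpack's v1alpha2 API; check the docs for your version):

```sh
kubectl apply -f - <<'EOF'
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: my-registry/my-app          # where built images get pushed
  serviceAccountName: kpack-sa     # holds registry/git credentials
  builder:
    kind: ClusterBuilder
    name: default
  source:
    git:
      url: https://github.com/my-org/my-app
      revision: main               # a branch; kpack polls it and rebuilds on new commits
EOF
```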
That's true. However, there are often steps before and after building images (e.g., testing, linting, etc.). If kpack pulls repos and builds images, it is hard to coordinate that with the steps executed in pipelines.
Nevertheless, I should have mentioned that in the video.
@@DevOpsToolkit I would enjoy watching your take on Keptn for those aspects :)
@@amaline1 Keptn is on my TODO list but I cannot yet say when its turn will come. The list tends to grow much faster than I can handle.
Thanks for the video. Where can I get the base image from Red Hat, and is it available as open source so anyone can download it and then make a customized one based on their needs?
Here it goes... github.com/paketo-buildpacks
@@DevOpsToolkit thanks for your support
Hello, I am so impressed with your many videos, thank you.
So... can you recommend which MacBook is best for me?
I want to install Kubernetes, set up CI/CD, and other things.
I'm considering a MacBook Pro with the M1 Max or the M1 Pro.
Thank you 😀😀
I'm using an M1 MacBook Pro and it's great. I haven't tried the M1 Max/Pro so I cannot compare them. What I can say is that any M chip is amazing.
I don't think the term docker container is going away anytime soon. In my previous position, the project guys used to call our CentOS VMs "Unix machines". That company hadn't owned any Unix system in the last two decades.
I do agree that the term docker image is here to stay and that I am almost certainly not going to change that. Still, I find a certain level of satisfaction in chasing unrealistic objectives.
@@DevOpsToolkit That makes two of us. 👍
BTW, I really enjoy your channel and the DevOps Paradox podcast with Darin Pope. 👌
Hi buddy, as per your instructions I installed the pack CLI and downloaded the sample application source code from the Paketo repository to build a Node.js app, but when selecting a Paketo builder as the default I'm getting an error that it is not able to fetch the image from Docker.
Do we need to preload the base builder image before building the application code? If yes, where can I pull it from?
There is no need to preload images. Buildpacks will pull those that are needed when needed.
I'm not sure why it cannot pull the images in your case. Might there be some firewall or proxy that is blocking it?
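For reference, a typical invocation looks something like this sketch; the Paketo builder tag and image name are assumptions and may differ between versions:

```sh
# Pick Paketo as the default builder (pack pulls it on demand; no preloading).
pack config default-builder paketobuildpacks/builder:base

# Build the app; the buildpacks detect Node.js and fetch whatever images they need.
pack build my-registry/my-node-app --path .
```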
@@DevOpsToolkit Please share your LinkedIn ID.
Hey, can you show me any resource on how I can use a private git repo in kpack?
Something like github.com/pivotal/kpack/blob/main/docs/secrets.md?
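The gist of that doc is a basic-auth Secret annotated with the git host and attached to the service account kpack builds with. A rough sketch with placeholder names (annotation key per kpack's docs):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  annotations:
    kpack.io/git: https://github.com   # the git host this credential applies to
type: kubernetes.io/basic-auth
stringData:
  username: my-git-user                # placeholders; GitHub expects a token as the password
  password: my-git-token
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kpack-sa                       # reference this from the Image's serviceAccountName
secrets:
- name: git-credentials
EOF
```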
Is the buildah approach (bash scripts to build containers) already dead?
Buildah is OK. Personally, I believe that the best one right now is Rancher Desktop. It has everything I need (an image builder and a local Kubernetes cluster).
Why do you only use sed for editing? Why not something like yq (the YAML alternative to jq)?
I do use yq. However, in videos I try to make the fewest possible assumptions and go with the tools that people likely already have on their laptops. My hope is that this makes them more inclusive and also keeps the focus on the subject at hand.
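For context, the two approaches look like this on a hypothetical manifest; the yq syntax is for yq v4 (the Go implementation), and GNU sed drops the '' after -i:

```sh
# sed: plain text substitution, preinstalled almost everywhere (macOS form shown).
sed -i '' 's|image: my-app:.*|image: my-app:1.2.3|' deployment.yaml

# yq: structure-aware edit of the same field, but it has to be installed first.
yq -i '.spec.template.spec.containers[0].image = "my-app:1.2.3"' deployment.yaml
```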
@@DevOpsToolkit Makes sense! And
Love your videos man!
Could you please try to make a video on Falco?
It's already on my TODO list. I'm not sure when it'll come though.