🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
👉 [UPDATED] AWS EKS Kubernetes Tutorial [NEW]: ua-cam.com/play/PLiMWaCMwGJXnKY6XmeifEpjIfkWRo9v2l.html&si=wc6LIC5V2tD-Tzwl
I just sent you an email
Efficient, fast enough, practical and super clear
Really amazing video, well thought out and direct to the point
I thank you very much
thank you, Yasser!
Thank you for this outstanding video (your tutorials are truly exceptional). I have a small request: could you please provide an update to this one? The Helm chart featured in the video for deploying Karpenter hasn't been updated in about 2 years.
I like this method of deploying EKS clusters with Karpenter.
Take a look at this one, I have a section for Karpenter
Excellent tutorial. Very Quick and informative. Thanks for making this
Thanks Pratap!
Nice video.
Another cool approach is to not use any (self- or EKS-managed) node groups. After creating the EKS cluster, you deploy CoreDNS and Karpenter on Fargate; that should be enough to bootstrap EC2 worker nodes.
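For anyone curious what that looks like in Terraform, here is a minimal sketch of a Fargate profile covering the karpenter namespace (resource and role names are illustrative, and the referenced cluster, role, and subnets are assumed to exist elsewhere in your config):

```hcl
# Sketch: run Karpenter's pods on Fargate so no node group is needed
# to bootstrap the cluster. A similar profile would cover kube-system
# for CoreDNS.
resource "aws_eks_fargate_profile" "karpenter" {
  cluster_name           = aws_eks_cluster.main.name
  fargate_profile_name   = "karpenter"
  pod_execution_role_arn = aws_iam_role.fargate.arn
  subnet_ids             = aws_subnet.private[*].id

  selector {
    namespace = "karpenter"
  }
}
```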
Great tutorial Anton, thanks! I would suggest to call subnets like private-az1, public-az1, etc. I used a different region and got a little bit confused with the subnet names. Nonetheless, great work, thanks again.
Thanks! Sure, most of the time in the real world, you include AZ as a suffix.
Hi Anton, first of all, this is a great tutorial and it boosted my clarity on Karpenter. I have two questions.
1. Where are all these pods actually running? Locally, or in AWS?
2. What gets our k8s deployment from YAML and kubectl to AWS?
Thank you.
Thanks!
1. All pods are running in AWS (EKS), including the Karpenter controller pod.
2. When you run kubectl apply -f with your YAML file, kubectl uploads that YAML config to the remote Kubernetes cluster running in AWS. Based on that config, Kubernetes pulls the container images and schedules the pods onto EC2 instances.
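A quick sketch of that flow (the manifest name is illustrative; these commands need a configured kubeconfig pointing at the EKS cluster):

```shell
# The manifest lives locally, but it is applied to whatever cluster
# your current kubeconfig context points at (EKS in this case).
kubectl config current-context        # verify you are talking to the EKS cluster
kubectl apply -f deployment.yaml      # uploads the YAML to the remote API server
kubectl get pods -o wide              # shows which EC2 node each pod landed on
```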
Many thanks for the tutorial you are a life saver.
Super 🎉
thank you!
Get Full-Length High-Quality DevOps Tutorials for Free - Subscribe Now! - ua-cam.com/users/AntonPutra
Why are we creating node groups if we use Karpenter?
you still need to run the karpenter controller somewhere, right?
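To make that concrete, a common pattern is to pin the controller to the small managed node group through the chart's values so it never runs on a node it manages itself. A minimal sketch (the node group name is hypothetical; check the values supported by your chart version):

```yaml
# values.yaml fragment for the Karpenter Helm chart:
# schedule the controller onto the managed node group only.
nodeSelector:
  eks.amazonaws.com/nodegroup: karpenter   # hypothetical node group name
```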
@@AntonPutra Yeah !! Thanks Man
@Anton Putra 7:12 Does Karpenter create a separate node from the managed nodes? If so, looking at the deployment YAML, I don't understand how it knows which node to pick.
Karpenter creates standalone EC2 instances and adds them to the Kubernetes node pool. The Cluster Autoscaler, on the other hand, uses AWS Auto Scaling groups.
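For illustration, here is a minimal sketch of the newer NodePool resource (the video uses the older Provisioner API; names and values here are assumptions). Karpenter launches instances matching these requirements directly, with no Auto Scaling group involved:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        name: default            # points at an EC2NodeClass
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["t3.medium", "t3.large"]
  limits:
    cpu: "32"                    # cap on total provisioned capacity
```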
thx
It didn't work for me. The nodes don't scale and the pods stay in the Pending state. Help!
Well, EKS evolves, and they may have introduced some breaking changes, or you might have misconfigured something. It's hard to say. Start by looking for error messages in the Karpenter controller.
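A few commands that usually surface the problem (namespace and resource names are the common defaults; adjust to your install):

```shell
# Controller logs are the first place scaling failures show up.
kubectl -n karpenter logs deployment/karpenter --tail=100

# Cluster events often explain why a pod stayed Pending.
kubectl -n karpenter get events --sort-by=.lastTimestamp

# Check the scheduling requirements of a stuck pod.
kubectl describe pod <pending-pod-name>
```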
super nice
Thank you!!
👉 How to Manage Secrets in Terraform - ua-cam.com/video/3N0tGKwvBdA/v-deo.html
👉 Terraform Tips & Tricks - ua-cam.com/video/7S94oUTy2z4/v-deo.html
👉 ArgoCD Tutorial - ua-cam.com/video/zGndgdGa1Tc/v-deo.html
Thanks for the video! I am running this in a pipeline. Is there a better way to apply the provisioner.yaml file right after installing Karpenter, without manually logging into the cluster from the command line? I want to manage everything through the pipeline without any manual steps. Thanks
Sure, you can use helm terraform provider for example, or kubectl provider
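A rough sketch of that combination (resource names are illustrative, and provider configuration is omitted; the kubectl provider here is the third-party gavinbunney/kubectl provider):

```hcl
# Install Karpenter with the helm provider, then apply the provisioner
# manifest with the kubectl provider, all from the same pipeline.
resource "helm_release" "karpenter" {
  name             = "karpenter"
  repository       = "oci://public.ecr.aws/karpenter"
  chart            = "karpenter"
  namespace        = "karpenter"
  create_namespace = true
}

resource "kubectl_manifest" "provisioner" {
  yaml_body  = file("${path.module}/provisioner.yaml")
  depends_on = [helm_release.karpenter]
}
```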
I may have found out how to get my Graviton nodes in EKS, similar to Fargate provisioning? We'll see.
try this - github.com/antonputra/tutorials/blob/main/lessons/150/terraform/7-nodes.tf#L48-L49
It's very useful 😊
What will happen if on same cluster I have cluster autoscaler installed? Which component will handle the node scaling if both of them are deployed?
you'll get a race condition =) don't do it
Thanks for your effort! At 7:18, how can we configure it without specifying resources? I actually want to scale up while the pod is running.
You don't have to (but you should). If newly created pods are stuck in the Pending state, Karpenter provisions additional capacity to fit those pods.
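With explicit requests, Karpenter can size new nodes precisely instead of only reacting to Pending pods. A minimal sketch (all names and values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo                 # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
        - name: app
          image: nginx
          resources:
            requests:         # Karpenter sums these to pick instance sizes
              cpu: "500m"
              memory: 512Mi
```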
How do I tag EC2 instances launched by Karpenter?
take a look at EC2NodeClass custom resource - github.com/aws/karpenter-provider-aws/blob/main/examples/v1beta1/general-purpose.yaml#L34C7-L34C19
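For reference, the EC2NodeClass spec has a tags field that Karpenter applies to every instance it launches. A minimal sketch (the role name and discovery tag value are assumptions for your cluster):

```yaml
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-demo"       # hypothetical node IAM role
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: demo   # hypothetical cluster name
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: demo
  tags:                                # applied to every launched instance
    team: platform
    environment: dev
```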
🔴UPDATED🔴 How to create EKS Cluster using Terraform MODULES (AWS Load Balancer Controller + Autoscaler + IRSA) - ua-cam.com/video/kRKmcYC71J4/v-deo.html
Nice tutorial, I will try to implement it.
Thanks!
Thanks!
Thank you Ricardo!
So fast
On purpose; you can find the code and commands in the description.