Love it!
Thanks!
Any suggestions for the next subject?
@@DevOpsToolkit not at this time, you're doing a great job :)
Another wonderful video from Viktor. Thanks a lot!
Just what I needed
This channel deserves more resp
This is one of the best eksctl videos for beginners. Undoubtedly, eksctl is the best choice for AWS EKS. It would be very useful if you did a video on the AWS EKS Cluster Autoscaler and its five strategies. Cheers, and thanks for a wonderful video.
Glad it was helpful!
Great conclusion Viktor.
Thanks
Thanks, this was super helpful. Especially the nodegroup modification and all the warts that come with that.
Glad it was helpful!
@@DevOpsToolkit It was! And you know when a video is really good when you come back to watch it again :D
Entertaining & educational, the way I like to learn.
If you could do a video on AWS EKS networking deep-diving on the virtual ethernet interfaces and namespaces, that would be good! :)
Thanks again.
Adding it to my to-do list...
Hopefully AWS can improve EKS! The other cloud providers offer much simpler implementations of Kubernetes. Great video, thanks.
Great video! I love the explanations and comparisons.
We use Terraform for all our infra, including EKS. The main problem for us is that we're using EKS with Windows pods too. Updating AMIs for our Windows worker nodes can be problematic, mainly because, as of today (AFAIK), EKS doesn't support managed Windows worker nodes. So we have to rely on ASGs and launch templates. Windows worker nodes take a while to come online (25 minutes from launch until they appear in the EKS cluster, including domain joins), and at least another 10 minutes to download the main containers from MS.
We have to use a script to watch the nodes coming online, then drain the old nodes one at a time, wait for the pods to land on the new Windows node, and terminate the old one.
It's problematic and very error-prone. I'd love to do it in a cleaner, less hacky way.
That's not much different from what EKS does with Linux nodes. Unfortunately, the EKS upgrade process is deeply flawed. If you add Windows nodes to the mix, it becomes even worse.
Thank you for the video. I learned a lot about AWS and EKS.
I had been using GKE for two years at my previous job, and at my new job I was asked to evaluate EKS for the project.
EKS is lame; there is too much complexity to operate it. I will suggest just migrating to GCP and GKE.
Good job!
An AKS cluster configured with autoscaling can be upgraded with Terraform by changing the version only. The version has to be set on both the cluster and the node group. It spins up a new node before terminating an old one.
The upgrade process on AKS and, especially, GKE is so much better than on EKS, partly because EKS depends on ASGs.
this.. is incredible
Very entertaining and good content!
eksctl uses CloudFormation behind the scenes, so I was a little surprised you didn't mention that. You can run into issues when deleting, where you have to dig through the CloudFormation stacks. I am a fan of Terraform and of creating modules for the infra resources.
I thought I mentioned that it creates CloudFormation stacks. If I didn't, I'll blame it on forgetfulness caused by my old age.
@@DevOpsToolkit Hahaha
He mentions it around 15:45 in the video.
Awesome video! great!
Glad you liked it!
very informative, great video !
How is eksctl any better than a Terraform module that provides reasonable defaults and abstracts away the complexity of all the resources that need to be created for EKS cluster?
It's not better, it's different. Think of it as an opinionated way to create EKS clusters, similar to a Terraform module written by someone else. It saves you from creating a module yourself but, if you use a module written by others, then it's similar.
It's certainly not better than Terraform; it's just easier to start with.
The main advantage of Terraform is that, after creating the cluster, it can install any services on that cluster, be it K8s resources or Helm charts. Provisioning a cluster by itself is pretty useless; you would then need another tool to install the software. The open source Terraform EKS module makes it pretty easy too.
Are you installing services using the EKS module (I didn't know that's possible) or through the Terraform K8s and Helm providers? The latter two are very bad, and I prefer running `helm upgrade` or `kustomize build`. Now, those commands can indeed be executed from Terraform, but I don't think that's a good idea, unless you're doing it only once for the initial installation when the cluster is created for the first time.
Ideally, when you create a cluster, you would create a Git repo and install Argo CD or Flux. You do NOT need anything else. From there on, one of those would be monitoring that repo (or multiple repos) and syncing Git manifests.
@@DevOpsToolkit Yep, using Helm provider to deploy Helm charts and K8s Provider to deploy K8s resources with Terraform. What is "very bad" about them?
@@simonkp3202 If you're referring to the initial deployment of apps when creating clusters with Terraform, invoking kubectl and helm through Terraform is great since you have a single process that does everything (create a cluster, deploy the initial set of apps). However, later on, when you want only to upgrade the apps, Terraform does not provide any tangible benefit compared to directly executing `kubectl apply` or `helm upgrade`.
I don't like it simply because Terraform is an unnecessary overhead (wrapper) that does not help unless deployments are tied to infra work, and those two have very different frequencies to be used together.
Going back to my words "very bad": sorry for that; I was being too harsh. It's not bad, but it is an unnecessary overhead/wrapper that does not add much (if any) tangible benefit.
@@DevOpsToolkit Not invoking kubectl and helm commands through Terraform, but using the native K8s and Helm Terraform providers. The benefit is that everything is version controlled (chart version, chart values, etc.) and applied through Terraform. Example - github.com/eskapaid/infrastructure/blob/develop/services/metrics-server.tf
Set up EKS on console my first time, then moved to eksctl. Never looked back.
eksctl is, without doubt, the best and easiest way to create and manage EKS clusters. If EKS and the related resources are all we need, eksctl rules. The main doubt is whether it is worth using if we need to manage AWS resources that are not related to EKS. In those cases, having everything in Terraform, instead of everything but EKS, might be a better choice. Then again, we might introduce Crossplane for "everything else", but that is a completely different subject.
@@DevOpsToolkit Yes, I ended up with both eksctl and Terraform sections in my configuration. But maybe with AWS Controllers for Kubernetes (ACK) we could, at some point, get rid of Terraform.
Is there any way to specify the endpoint_public_access & endpoint_private_access values with eksctl?
You can do that through the config by adding something like:
```
vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
```
Even back then when AWS/eksctl didn't support rolling upgrades, you could still:
- create the new nodegroup
- cordon all nodes in the old nodegroups
- rollout restart all your deployments with kubectl
- delete the old nodegroups
Otherwise, you're introducing downtime the moment you delete your nodegroups (if they are still running your pods, which I assume they are). That is not desirable in a production cluster, but is perfectly fine in a demo cluster like the one the author used here.
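For anyone following along, the steps above might look something like this in practice. The cluster, nodegroup, and namespace names are placeholders, and the node label used for cordoning is my assumption about how eksctl labels its nodes, so verify it against your cluster first:

```shell
# Sketch of the manual rolling upgrade described above.
# "my-cluster", "old-ng", "new-ng", and "my-namespace" are placeholders.

# 1. Create the new nodegroup (it comes up with the newer AMI/version).
eksctl create nodegroup --cluster my-cluster --name new-ng

# 2. Cordon all nodes in the old nodegroup so no new pods land there.
#    (Assumes eksctl's "alpha.eksctl.io/nodegroup-name" node label.)
kubectl cordon -l alpha.eksctl.io/nodegroup-name=old-ng

# 3. Restart the deployments so their pods reschedule onto the new nodes.
kubectl rollout restart deployment -n my-namespace

# 4. Once everything runs on the new nodes, delete the old nodegroup
#    (eksctl drains its nodes before removing them).
eksctl delete nodegroup --cluster my-cluster --name old-ng
```

The commands require a live cluster, so treat this as an outline of the procedure rather than a script to run as-is.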
You're right that, back then, we could upgrade EKS clusters without downtime. However, the whole usage of ASGs for EKS was, I believe, a wrong decision that left EKS behind other hosted K8s offerings. That is changing now that EKS relies less and less on ASGs.
@@DevOpsToolkit completely agree!
Thanks! Would love to see a video on fluxcd vs argocd. Which one is better or recommended?
Added to the TODO list :)
Thank you for this great content (as usual !!)
My pleasure!
Hi Viktor, quick question. I am new to K8s, and I would like to switch some of the applications already running on Docker to EKS. One application will have to be public and the other two should remain private. I understand that it is best practice to put the nodes in a private subnet and then keep the public subnet to set up a load balancer and ingress. I am not sure about this, though. Can you confirm this is the best way to go about it? Thanks a lot!
Yes. You should expose the public app through an Ingress which, in turn, will create an ELB that will be the entry point.
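As a rough sketch of that setup (the app name, port, and host are placeholders, and it assumes an ingress controller such as the AWS Load Balancer Controller is installed in the cluster):

```shell
# Hypothetical Ingress exposing the public app through a load balancer.
# "public-app", port 80, and the host are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
  - host: public-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: public-app
            port:
              number: 80
EOF
```

The private apps would simply not get an Ingress; they stay reachable only through their ClusterIP Services inside the cluster.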
While upgrading nodes to a new version, will the pods in the older nodegroup automatically migrate to the newer nodegroup?
Yes, it will. The moment a pod becomes unavailable, Kubernetes reschedules it, no matter whether that is caused by upgrades, node failures, or anything else.
I have an EKS cluster deployed using a CloudFormation template, and now I want to control it completely using eksctl. How can this be achieved?
I don't think you can use eksctl to take over control of an existing cluster.
So, could I pack this eksctl config file inside a Helm chart?
That config is for creating and managing Kubernetes clusters and all the associated resources, while Helm charts are for deploying apps inside those clusters. In other words, you use eksctl to create the cluster, and only then can you use Helm charts on it.
Can you do a video on EKS with FSx lustre.
Adding it to my TODO list... :)
@@DevOpsToolkit Thank you very much. I greatly appreciate your content as it focuses on best practices and real-world applications. I just enrolled in your Udemy course as well. Bless.
7:56 EKS 1.19 is available as of 2/16/2021
Is Terraform's Kubernetes provider any easier?
The Terraform Kubernetes provider is about creating and managing Kubernetes resources on top of an existing cluster. The Terraform equivalent of eksctl would be the Terraform AWS modules, including EKS.
Now, if you compare eksctl with the Terraform EKS module, you will find that the latter is only a fraction of what you need. It only creates a control plane, and you still need to define node groups, IAM, networking, etc.
Thanks for the great explanation and video. I am facing a lot of trouble while trying to create a fully private EKS cluster, with worker nodes in private subnets, using eksctl (a YAML file).
1. I have a YAML file.
2. While running `eksctl create cluster`, I run into a lot of different errors (mostly being unable to connect to the VPC endpoints), so I had to manually add entries to the security groups of the VPC endpoints to allow traffic from the server where I am running the eksctl commands.
Can't this be automated in eksctl itself?
eksctl is meant to be an easy way to create EKS clusters. It is simple because it is opinionated. If you need more flexibility (less opinionated), Terraform, Crossplane, or Pulumi are a better choice.
Hello Viktor, just a question. When we upgrade the cluster or, in my case, when I need to change the size of the EC2 instances (from smaller to bigger), is there anything I need to do in K8s before we start with these steps? And if we upgrade K8s (from 1.17 to 1.18), or just the node group, can users still access the applications running on the cluster?
If the app is scaled, then yes. If there is a single replica, there will be a short downtime.
@@DevOpsToolkit It is scaled. The people who configured the K8s cluster are not with us anymore, and K8s is a pretty new thing for me. Anyway, they configured t3.medium instances, and we now have around 40 EC2 instances and 100 API replicas, 4 frontend replicas, and a few more services with 4-5 replicas each. The idea is to reduce the number of EC2 instances by using bigger ones. But we are on 1.17, and I think that is EOL in AWS :(.
Thank you for your reply. :) All the best :)
@@zeenmc There's one important thing you should be aware of: you should always upgrade to the next minor version. So, if you'd like to get to 1.22, you should first upgrade to 1.18, then to 1.19, and so on. There's also a lot of testing you should do, and you should know that breaking changes were introduced to some of the resources. For example, the Ingress you used in 1.17 will need to be rewritten for 1.22, or whichever newer version you're going for.
The gist of it is: you're using a very old K8s version, and you might need to invest quite a lot to get to one of the newer versions.
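A rough sketch of those one-minor-version hops with eksctl (the cluster and nodegroup names are placeholders; verify the exact flags against your eksctl version before running anything):

```shell
# Upgrade one minor version at a time: control plane first, then nodegroups.
# "my-cluster" and "my-nodes" are placeholder names.
eksctl upgrade cluster --name my-cluster --version 1.18 --approve
eksctl upgrade nodegroup --cluster my-cluster --name my-nodes --kubernetes-version 1.18

# Test the workloads, fix any deprecated manifests, then repeat the hop:
eksctl upgrade cluster --name my-cluster --version 1.19 --approve
eksctl upgrade nodegroup --cluster my-cluster --name my-nodes --kubernetes-version 1.19
```

The testing between hops is the expensive part, not the commands themselves.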
@@DevOpsToolkit I understand everything. The application is working fine; I would be happy just to change the size of the EC2 instances.
@@zeenmc Do NOT postpone upgrading v1.17 k8s cluster. The longer you wait, the more it will hurt.
Nice explanation of how to use eksctl, thanks. But I have a couple of doubts: can I use an existing VPC and existing subnets? And if that's possible, what happens when I destroy the cluster?
You can find info on how to use an existing VPC at eksctl.io/usage/vpc-networking/#use-existing-vpc-other-custom-configuration
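For reference, a minimal sketch of such a config (all IDs, names, and zones are placeholders; check the linked page for the authoritative schema). As far as I know, eksctl only deletes resources it created through its own CloudFormation stacks, so a pre-existing VPC and subnets should survive cluster deletion:

```shell
# Hypothetical eksctl config reusing an existing VPC and subnets.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
vpc:
  id: vpc-0123456789abcdef0
  subnets:
    private:
      us-east-1a: { id: subnet-0123456789abcdef0 }
      us-east-1b: { id: subnet-0123456789abcdef1 }
EOF

eksctl create cluster -f cluster.yaml
```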
I have a working example if you still need it.
@@Micahtmusic Wow! That would be great :)
19:51 One-click full cluster upgrade is being worked on: github.com/aws/containers-roadmap/issues/600
I did not notice until now that Google held your comments for approval (probably because of the links). They are visible now. Sorry for the delay. It wasn't intentional.
Bro, this was good but mad confusing from your communication. It doesn't need all the hoopla.
Don't click click click 😂
Clicking (in the context of changing the state of something) should be made illegal, unless that is the "merge" button in PRs :)
PTC - ‘Permit to click’ process and paperwork
As always, great video. What do you think about this teks project? Have you tried it? I just discovered it... It does not use CloudFormation.
github.com/particuleio/teks
Congratulations for all your work.
Unfortunately, I haven't tried it yet. It does look interesting so let me add it to my TODO list... :) I'll comment once I get my hands dirty with it.