🔴 To support my channel, I'd like to offer Mentorship/On-the-Job Support/Consulting (me@antonputra.com)
Absolutely brilliant! All the great content together in a single video. Thanks a lot, bro!
Thanks a lot!! All the content on the channel is a gold mine. You have explained the concepts very well. Thanks, and keep it up 😊👏👏
Thank you! :)
Very good video, which includes not only useful information but also some pretty important recommendations!
Thank you!
Your diagrams are very easy to understand. Thank you Anton
my pleasure!
Thank you Anton, this is very helpful and your diagrams and explanations are really clear and to the point. Keep going 👌🏼
❤️
Thanks, great delivery and no fluff
Thanks!
Anton, thank you so much!
As always, excellent delivery and a perfectly chosen scope for the topic.
And a question about scaling a stateful application, particularly a database: is it possible to create a master-master configuration with several endpoints in a Service to avoid downtime during pod recreation? And do you see potential problems with automatically reattaching the PV, or any concerns about possible inconsistency?
Thank you! Well, it's possible, but I would highly recommend creating your own Operator instead of relying on StatefulSets. Based on my experience, databases and messaging systems work just fine in Kubernetes in light environments with low load. However, when you start moving all of it into production, you will discover many nuances, and recovering a database in Kubernetes in production is very challenging. Think twice, especially if you will be responsible for running it and not simply developing and handing it over to an ops team. :)
Excellent, highly informative content. Well done as per usual, Anton! If at all possible, I'd like to request a more advanced solution (e.g., deploying VPA alongside JVM-based applications, deploying a multidimensional pod autoscaling solution in a production-ready environment, etc.).
Thank you! I'll see what I can do.
I have used KEDA in GCP with Pub/Sub and it works quite well.
Cool, did you use Workload Identity, or did you have to use a credentials JSON file?
@@AntonPutra I had to use the credentials json file for the service account, the GKE cluster wasn’t provisioned with workload identity unfortunately.
@@LeandroSantos-bt1lg I see. I hate using JSON credentials. I wonder why they had to use JSON instead of a key and secret as in AWS.
Awesome content. Easy to understand. Thank you
thank you!
I need clarification regarding the Prometheus setup. If I already have Prometheus deployed on my cluster using the kube-prometheus-stack, will that suffice? I'm asking because I'm seeing a separate Prometheus Adapter and Operator.
You did great!!! Thanks you so much
Thank you!!
Where is the documentation for using Karpenter with non-AWS clouds? I have only ever seen documentation for AWS.
Here is for azure - github.com/Azure/karpenter
@@AntonPutra thank you
Is it possible to do automatic tracing with Jaeger + Istio, showing request and response communication between services on the Jaeger dashboard? Would you have an example?
Thanks for sharing this quality content!
My pleasure!
Good video. Your diagrams were top notch.
thanks!
Hi Anton, thank you very much for the valuable information, especially the VPA use case. Let's say I have node exporter deployed as a DaemonSet pod. What about the tagging? If a new worker node is added by Karpenter, will Prometheus detect the new worker node and monitor it via its service discovery mechanism, or not?
Yes, Prometheus does not care whether you use the Cluster Autoscaler or Karpenter. Whenever a new Kubernetes node is created, assuming there are no taints that the node exporter cannot tolerate, that node will be automatically added to the Prometheus target list.
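For anyone curious how that discovery is wired up, node-level targets are typically picked up with `kubernetes_sd_configs` in the Prometheus scrape config (the Prometheus Operator generates something equivalent from a ServiceMonitor/PodMonitor). A minimal sketch, assuming node exporter listens on its default port 9100 — the job name is illustrative:

```yaml
# Minimal Prometheus scrape config that discovers every node in the cluster.
# When the Cluster Autoscaler or Karpenter adds a node, the `node` role
# picks it up automatically - no manual target changes needed.
scrape_configs:
  - job_name: node-exporter        # illustrative job name
    kubernetes_sd_configs:
      - role: node                 # one target per Kubernetes node
    relabel_configs:
      # The node role advertises the Kubelet address (port 10250);
      # rewrite it to node exporter's default port 9100.
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
```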
You're gold to the K8s community!!!
thanks :)
Hats off, next-level explanations! More power to you :)
master stroke, nice work!
thanks :)
Could you please let me know which tool you used to create the video? The tool seems helpful in organizing the explanation. Thanks!
sure, it's adobe suite
Thanks@@AntonPutra
🍿 Benchmarks: ua-cam.com/play/PLiMWaCMwGJXmcDLvMQeORJ-j_jayKaLVn.html&si=p-UOaVM_6_SFx52H
Does EKS allow us to create a node group with different EC2 instance types? As far as I know it doesn't, because the instances are part of an ASG with a launch configuration defined. Please correct me if I am wrong.
If they are part of an ASG, how do you update the EC2 instance type without causing downtime?
If you use kOPS to create Kubernetes (K8s) in AWS, you can definitely use mixed instance family types. I've been using it for spot instances so that, in case one instance type is not available, another one is selected automatically. I'm not sure about EKS; you may need to create launch templates. Here is the official documentation for mixed Auto Scaling groups (ASGs): docs.aws.amazon.com/autoscaling/ec2/userguide/create-mixed-instances-group-manual-instance-type-selection.html.
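For what it's worth, EKS managed node groups can be given a list of instance types, which is especially useful with spot capacity. A hedged eksctl sketch — cluster name, region, and the specific instance types are just examples:

```yaml
# eksctl config sketch: a managed node group with several instance types,
# so the ASG can fall back to another type when one is unavailable.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster          # example name
  region: us-east-1           # example region
managedNodeGroups:
  - name: mixed-spot
    instanceTypes: ["t3.large", "t3a.large", "m5.large"]
    spot: true                # draw from spot capacity pools
    minSize: 1
    maxSize: 10
```

Replacing the instance types later means creating a new node group and draining the old one, which avoids downtime as long as the workloads have enough replicas to move.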
Thanks Anton for this content, it is awesome. Can you do a video comparing Apache Kafka and Amazon Kinesis, please, with Grafana visualization?
Thanks, sure at some point in the future.
subscribed, keep up the good work
thanks! will do!
Karpenter is interesting, but it seems it currently best supports AWS, which makes sense considering they made it.
Yes, they claim they support multiple clouds, but in reality, no one is using it in production on other clouds besides AWS.
Where are we supposed to get the Github repository from?
github.com/antonputra/tutorials/tree/main/lessons/181
Well done!
Thanks :)
Answered every question
thanks!
📝 Summary of Key Points:
📌 Autoscaling in Kubernetes allows adjusting resources based on application usage, saving on compute and avoiding wasted resources.
🧐 Different types of autoscaling controllers exist in Kubernetes, including built-in and separately installed ones.
🧐 Autoscaling is easier to implement for stateless applications compared to distributed databases.
🧐 The Horizontal Pod Autoscaler (HPA) is a common approach for autoscaling in Kubernetes, adjusting the desired scale based on metrics like CPU and memory usage.
🧐 Custom metrics can be used for autoscaling, such as latency, traffic, errors, and saturation, by deploying a Prometheus operator and instance.
🧐 For stateful applications, vertical scaling is the only option, and the Vertical Pod Autoscaler (VPA) recommends resource requests and limits.
🧐 Kubernetes allows autoscaling of the cluster itself with the Cluster Autoscaler, adjusting the node group size based on pending pods.
🧐 The KEDA project helps with autoscaling based on the number of messages in a queue or topic, supporting various messaging systems.
💡 Additional Insights and Observations:
💬 "Autoscaling is important because it allows you to adjust the resources allocated to your applications based on their usage, saving on compute and avoiding wasted resources."
📊 The Horizontal Pod Autoscaler (HPA) is a commonly used approach for autoscaling in Kubernetes.
🌐 The KEDA project supports various messaging systems like Apache Kafka, RabbitMQ, and MySQL for autoscaling based on message queues or topics.
📣 Concluding Remarks:
Autoscaling in Kubernetes is a powerful tool that efficiently manages resources and handles varying workloads. It can be implemented using built-in controllers like the HPA and VPA, as well as additional tools like Prometheus and KEDA. By adjusting resources based on application usage, autoscaling optimizes resource allocation and improves overall efficiency.
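To make the HPA point from the summary concrete, here is a minimal manifest scaling a Deployment on average CPU utilization — the Deployment name, replica bounds, and threshold are examples, not from the video:

```yaml
# Minimal HorizontalPodAutoscaler: scale a Deployment between 2 and 10
# replicas, targeting 70% average CPU utilization across the pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                 # example name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Custom metrics (latency, queue depth, etc.) use the same resource with `type: Pods` or `type: External` metrics, typically backed by the Prometheus Adapter mentioned above.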
Made with Talkbud
06:43
CarPanter
🫡
Thanks Teacher ❤!!
Thanks :)
Karpenter is not usable on GKE
I know, but they claim it is usable.
Thanks :)
Feeling sleepy, don't read like a news anchor.