🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
I have been working with Kubernetes for the last 4 years. That is a very clean explanation. Nice video!
I would request you to create a specific video on request-based HPA.
Thank you! Will do
I remember some hard days when I started in DevOps: I tainted some nodes for ML, then got a "not enough nodes" error. I'd fix it, and then a lead dev would manually change things from the Kubernetes dashboard. Once I adopted ArgoCD with selfHeal, and then RBAC so he couldn't make changes, the cluster became stable.
It's sometimes hard in large teams to prevent manual changes, but as you said, everyone should be using GitOps as much as possible.
Basically: strictly reserving nodes => taints plus affinity. Otherwise, just use affinity or a node selector.
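A minimal sketch of the reservation pattern. The node name, label, and image below are placeholders: it assumes a node `ml-node-01` that has been labeled `workload=ml` and tainted with the same key.

```yaml
# Reserve the node first (illustrative names):
#   kubectl label nodes ml-node-01 workload=ml
#   kubectl taint nodes ml-node-01 workload=ml:NoSchedule
# Then only pods that BOTH tolerate the taint and select the label land there:
apiVersion: v1
kind: Pod
metadata:
  name: ml-training
spec:
  nodeSelector:
    workload: ml            # keeps this pod on the reserved node
  tolerations:
    - key: "workload"
      operator: "Equal"
      value: "ml"
      effect: "NoSchedule"  # lets this pod past the taint
  containers:
    - name: trainer
      image: ml-trainer:latest   # placeholder image
```

The taint keeps everyone else off the node; the selector/affinity keeps the ML pods on it. You need both for a strict reservation.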
These k8s videos are awesome, keep them coming. It's so easy to understand through animation. It must take a lot of effort to make these; very grateful to you for these lectures.
Thank you! Yes, it takes some time :)
🎉
Which tool did you use for that content? It's amazing 🎉🎉
Extremely useful topic, thanks for great content again, Anton!
Thanks!
What would be a condition when two pods with similar configuration should always be scheduled on two different nodes?
@@PiyushGhosh-l4k if you phrase your question properly, Anton, I, or GPT could answer it :)
Hi, Anton, again, really interesting and REALLY well presented. For someone like me, who's only dealt with Minikube locally (so that means only one Node), it is something new, but, nevertheless, great knowledge to have for the future when I will deploy in production. As always, thank you!
Thank you! Sometimes, it's helpful to run multiple nodes even with Minikube. This could be useful, for example, to test how an app behaves if a node goes down (for example, if you want to run it on spot instances).
minikube start --nodes 2
@@AntonPutra Much thanks for the reply, Anton! Wasn't even aware of the --nodes flag for minikube, will be sure to try it out! And can only agree with you, really useful to test behavior of when a node goes down and even architect for when you will be deploying in production on multiple nodes.
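If anyone else wants to try it, a sketch of simulating a node failure with Minikube (exact node names may differ by Minikube version; check `minikube node list` first):

```shell
# Start a two-node cluster
minikube start --nodes 2

# See the node names (typically minikube and minikube-m02)
minikube node list
kubectl get nodes

# Simulate an outage by stopping the second node
# (pass the node name as shown by "minikube node list")
minikube node stop m02

# Watch pods from that node get rescheduled after the eviction timeout
kubectl get pods -A -o wide --watch
```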
Very good explanation
Thank you Anton
Thank you!
Very good explanation.
Thank you Anton
Thanks!
Hey Anton, can you make a video on this: when we deploy pods to a GKE cluster, where do the container logs (/var/log/) get stored, and what happens in the backend if we don't mount them with a persistent volume?
In GKE, EKS, and even AKS, you just need to update your logger to write to stdout or stderr. In GCP, you'll get your logs in Stackdriver (now Cloud Logging); you don't need to mount anything.
@@AntonPutra So it will not write anything to my GKE disk?
@@nishitkumar7650 No, just stdout.
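To make that concrete: on GKE the logging agent tails container stdout/stderr, and when a line is valid JSON it maps well-known keys such as `severity` and `message` onto the Cloud Logging entry. A minimal sketch of a structured-logging helper (any field names beyond `severity`/`message` are illustrative):

```python
import json
import sys


def log(severity, message, **fields):
    """Emit one JSON log line to stdout.

    On GKE the logging agent picks up container stdout and, for JSON
    lines, maps well-known keys like "severity" and "message" to the
    corresponding Cloud Logging fields; extra keys become jsonPayload.
    """
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout, flush=True)


log("INFO", "request handled", path="/healthz", status=200)
```

No volume mounts, no log files on the pod's filesystem: the app just writes lines to stdout and the platform does the rest.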
Thanks @AntonPutra for the detailed video. We are currently facing an issue related to pod scheduling in EKS, and this video provided some insights. We have a monitoring DaemonSet that normally takes ~1 to 1.5 minutes to spin up and become ready. But some of my application pods become ready on the same node before the DaemonSet pod is ready, because of their faster startup time. Our monitoring tool will not inject the agent if a pod is ready before the DaemonSet. Could you please suggest which is the better approach, a taint or pod affinity? If there is anything else helpful besides affinity and taints, please suggest that as well.
Thanks in advance...
Can you run your monitoring tool as a sidecar instead of a DaemonSet?
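Besides the sidecar option, one common pattern is an initContainer that blocks the app until the node-local agent answers. This is only a sketch: it assumes the DaemonSet agent exposes a port on the host (here 8126, a made-up value) via `hostPort`, and the image names are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
    - name: wait-for-agent
      image: busybox:1.36
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # the node's IP, where the agent listens
      command:
        - sh
        - -c
        - until nc -z "$HOST_IP" 8126; do echo waiting for agent; sleep 2; done
  containers:
    - name: app
      image: my-app:latest   # placeholder
```

Because initContainers must finish before the app containers start, the app can never become ready before the agent on its node is reachable, regardless of startup-time differences.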
Great video! Can I ask what software you use for the diagrams and animations?
Thanks, adobe stack
@@AntonPutra Ah brilliant - I already have the Adobe all app suite. I probably haven't used more than half the apps though! 😂 Is the animation done with Adobe Animate?
The diagram at 0:50 is confusing: the pod requests 2 CPU and 4Gi of memory, so why is node-01, which has 6 CPUs and 16Gi (more than the pod's request), considered to have not enough memory? Same question for node-03: it has 8 CPUs, so why is it considered to have not enough CPU?
Well, that's the whole point of Kubernetes: to abstract away a data center. In the case of the cloud, we typically use large instance types to reduce wasted resources since we also need to run monitoring and logging agents on each node. So, we use large instances and schedule multiple pods on a single virtual machine.
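One detail worth adding here: the scheduler's filter step compares the pod's requests against each node's allocatable capacity minus what is already requested by pods running on that node, which is why a node with plenty of raw capacity can still be rejected. A toy sketch with made-up numbers (not the ones from the video):

```python
def fits(pod_request, node_allocatable, node_requested):
    """Filter step: a node passes only if the remaining resources
    (allocatable minus already-requested) cover the pod's requests."""
    for resource, amount in pod_request.items():
        free = node_allocatable[resource] - node_requested.get(resource, 0)
        if free < amount:
            return False
    return True


pod = {"cpu": 2, "memory_gi": 4}

# A node with 6 CPU / 16Gi total, where existing pods already request
# 13Gi, has only 3Gi free: rejected for memory despite the large total.
print(fits(pod, {"cpu": 6, "memory_gi": 16}, {"cpu": 2, "memory_gi": 13}))  # False

# An emptier node passes.
print(fits(pod, {"cpu": 8, "memory_gi": 32}, {"cpu": 1, "memory_gi": 4}))   # True
```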
Hey Anton, are you working with Azure, or planning any Azure videos in the future? Thanks for what you're doing, as always.
Yes, soon
Hey there! Nice video! Really nice explanation. Can I ask which tool you used to make those diagram animations?
Sure, Adobe.
Thank you so much. Can you please share Kubernetes scenario-based questions?
Welcome, you mean interview questions?
@@AntonPutra yes Anton
Please share the entire Pod Life cycle
@@soumyamishra8734 Got it, will do
Why don't you make a course on deploying pods on GKE with best practices: how to manage container logs, how best to monitor pods in GKE, and advanced concepts like deploying microservices on GKE?
Thanks, I'll think about it.
very informative video.
Thank you
Hello, in the CKA exam a node is given an assigned label and you are asked to deploy on that node. I solved this question with nodeAffinity instead of nodeSelector; I wonder if this could be considered a wrong method for a successful score.
Affinity came as an improvement on NodeSelector. It's just more flexible but does exactly the same thing, so I think you're good.
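For illustration, these two pod-spec fragments schedule identically (assuming the exam label was something like `disktype=ssd`, a placeholder here):

```yaml
# Option 1: nodeSelector
spec:
  nodeSelector:
    disktype: ssd
---
# Option 2: the equivalent required nodeAffinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
```

Affinity only adds expressiveness on top (operators like `In`, `NotIn`, `Exists`, and the `preferred...` soft variant); for a single exact-match label the two are interchangeable.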
Which tools do you use for video editing for your channel
Adobe suite
What would be a condition where two pods with similar configurations should always be scheduled on two different nodes?
Use podAntiAffinity - kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#more-practical-use-cases
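A sketch of that: with required podAntiAffinity keyed on the hostname topology, two replicas of the same app can never land on the same node (names and image below are placeholders).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web          # repel pods carrying this same label
              topologyKey: kubernetes.io/hostname   # "different node" granularity
      containers:
        - name: web
          image: nginx:1.27
```

Note that with `required...` anti-affinity, a third replica stays Pending if only two nodes exist; use the `preferred...` variant if spreading is desirable but not mandatory.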
perfect explanation.
thank you!
Nicely explained 👏
Thank you!
First 😎
First minute: how is the score resolved?
You can read more about scheduler here - kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/
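As a rough intuition, a heavily simplified version of the classic "least allocated" priority: each feasible node is scored per resource by the fraction left free after placing the pod (0-100), then the per-resource scores are averaged; the highest-scoring node wins. This is a toy sketch with made-up numbers, not the real scheduler's full plugin pipeline:

```python
def least_allocated_score(pod_request, allocatable, requested):
    """Score a node 0-100 per resource by the fraction that would remain
    free after placing the pod, then average across resources.
    Higher score = more preferred (emptier) node."""
    scores = []
    for res, cap in allocatable.items():
        used = requested.get(res, 0) + pod_request.get(res, 0)
        free = max(cap - used, 0)
        scores.append(100 * free / cap)
    return sum(scores) / len(scores)


pod = {"cpu": 2, "memory_gi": 4}

# A large, mostly empty node scores high...
print(least_allocated_score(pod, {"cpu": 8, "memory_gi": 32}, {"cpu": 1, "memory_gi": 4}))  # 68.75

# ...while a small, busier node scores low.
print(least_allocated_score(pod, {"cpu": 4, "memory_gi": 8}, {"cpu": 1, "memory_gi": 2}))   # 25.0
```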
Great Videos
Thank you Sid!
Thanks 🙏 a lot , great job 👏
my pleasure!
Bro, what would be a condition where two pods with similar configurations should always be scheduled on two different nodes?
I didn't watch the video yet, but kudos to someone who dares to bend it.
Thanks:)