Although Kubernetes is "unopinionated", it does need to provide some guidelines or best practices; otherwise it is just a bag of tools whose value depends on the user's skill set. As of 1.23+, Kubernetes is still too difficult for most users, and that is telling.
How many subnets are actually required to form a k8s cluster? There are node IPs, cluster IPs, service IPs, and pod IPs. I've seen that a single subnet works at a minimum, but for a production deployment, how many subnets are really required (regardless of the prefix)?
It does vary by many factors. In many cases people (developers) are not in full control of their operating environment, and things like overlay networks give them some flexibility there (e.g. pod network packets are routed over the node's physical network). The cluster/service IPs (same thing) are thought of as networks in most installations, but they are not really networks in themselves. They are typically just rules in iptables that influence things before routing decisions are made. That is partially because iptables was/is a gold standard for managing packets, and using IPs keeps things down at L3 (i.e. routing); otherwise we would have to deal with more specific things like HTTP headers (at L7). Not really an answer, just some thoughts on why it looks like three networks are the norm, when in theory and practice we can do other things (see the sketch after this thread).
@ronaldpetty343 Thanks Ronald, you have a good point. The perspective might be different for a developer who just leverages container pods for application development compared to a system admin who manages the cluster itself. As a cluster owner, they definitely want control of the cluster, including the attached network. I mean, users have the right to customize the cluster per their requirements. In that specific case a user would have to subscribe to a dedicated cluster; I don't think a shared cluster would fit them.
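Following up on the thread above: since the "three networks" are really three non-overlapping address ranges (the pod and service ranges live only in routes and iptables NAT rules on each node, not on the wire), a quick sanity check at planning time can catch collisions early. Here is a minimal sketch using Python's stdlib `ipaddress` module; the node subnet is an arbitrary illustrative RFC 1918 range, and the pod/service CIDRs stand in for common kubeadm/Flannel defaults, so substitute your own values:

```python
import ipaddress
from itertools import combinations

# Hypothetical example ranges; substitute your own. The pod and service
# CIDRs are common kubeadm/Flannel defaults; the node subnet is just an
# illustrative RFC 1918 range.
cidrs = {
    "node":    ipaddress.ip_network("192.168.10.0/24"),  # physical/VM subnet
    "pod":     ipaddress.ip_network("10.244.0.0/16"),    # --pod-network-cidr
    "service": ipaddress.ip_network("10.96.0.0/12"),     # --service-cidr
}

# Any overlap between these ranges makes routing/NAT decisions ambiguous,
# so fail loudly before the cluster is built.
for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
    if net_a.overlaps(net_b):
        raise SystemExit(f"{name_a} ({net_a}) overlaps {name_b} ({net_b})")

print("No overlaps; the three ranges can coexist on one cluster.")
```

This also suggests why a single physical subnet is enough for a minimal cluster: only the node range has to exist in the real network, while the pod and service ranges are virtual, yet production setups still plan three distinct ranges to keep them unambiguous.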
11:53 start
Thank you Dr. House for this talk
Can't speak after watching this. Such an amazing experience I had!
This was a fantastic, comprehensive talk. Filled some of the gaps in my understanding for sure.
Thanks!
Brilliant presentation, thanks @Randy
Such an incredibly insightful video!!
Expecting a summary.
This is what I was looking for. I haven't gotten far into the video, but hope this has info about metallb.🤞
Yes metallb is in this!
Kubernetes eats its own dog food.
Or Kubernetes eats its own shit.