Are you planning to replace Ingresses and/or service mesh specs with Gateway API?
Yes, I am :) I’ll just wait till it’s faster lol
That's correct. It's still too early to adopt it, but not too early to experiment and prepare.
Yes! Started to use contour instead of nginx and will switch to gateway soon!
Does this give me more visibility into services, like in Kiali?
No. This is all about standardization of the spec. As a result, we should be able to specify what we need independently of the implementations.
Writing you from 2024 - my Customer wants me to deploy Istio, I think Gateway is what I need. Your channel is a treasure!
Just wanted to note that I thoroughly enjoy your content and know how much effort goes into making each video. Thank you for what you do.
Thanks a ton for saying that. It helps me to keep going to know that my work is useful to others.
Thanks!
Thanks a ton
People, don't be lazy, hit the 'like' button. With 15K views I can't believe there are so few likes... Viktor, I have seen many videos on your channel; they are very educational and add a lot of value. Thank you for all the effort you put into them!
Regarding Gateway API, I see a lot of fast development from all the providers, and I hope to see it become mainstream soon.
Thanks for being on the cutting edge and sharing all these topics with nice examples!
Great video, love to see the word "silly" in almost all examples!
These are my favorite type of videos! Keeping me ahead of the game. Thank you!!
Great video, Viktor. I find your videos very informative and easy to follow. I wanted to know about the Kubernetes Gateway API, and the first thing I did was search for it on your channel :) Thank you!
IMO it's the most exciting k8s feature of the last few years. I'm tired of my traffic controller of choice not being supported by this tool, or only partially supported by that other tool. Too bad I'm mostly using EKS, so it probably won't be mature enough for the next 2 years 😅
Judging by the rate of advancement, I think we'll see stable implementations much sooner than that.
@@DevOpsToolkit Jun said EKS, not any distro that you like Lol
True. I overlooked that part of the message.
My guess is that EKS will not support it out of the box for a long while (if ever). But, using one from one of the vendors should do.
I'm not sure you made the case for service mesh replacement. SMI (service mesh interface) already exists for east-west traffic (with mTLS). But there is a lot of overlap. See GAMMA Initiative.
SMI is indeed an attempt to standardize, but it never got widely adopted (beyond Linkerd and a few other projects). My hope is that Gateway API will succeed where SMI failed (adoption). So far, judging by the list of those working on its implementations, it is looking very positive.
@@DevOpsToolkit SMI, which resulted from a collaboration of Microsoft, Buoyant, HashiCorp, Solo, Kinvolk, and Weaveworks, has a different spin on the impact of their spec-first effort and its relationship with GA. See the GAMMA Initiative. Any replacement of service mesh appears to be much farther out than a replacement for ingress. I'm no expert on this, but I can't find support in the online docs for the idea that GA is eating service meshes. It's not clear to me how GA will ever do east-west mTLS.
@@joebowbeer You're right. Officially, Gateway API does not mention service meshes. That's only my interpretation of where it's going. The way I see it, service meshes emerged as a way to remedy deficiencies of Ingress and Service specs, which is the same objective Gateway API has. As a result, we can see it containing some of the features currently available in Ingresses and others available in service meshes (e.g., weights).
All that being said, it's in the early stages and it's hard to predict where it will end up in, let's say, a year. The spec will continue evolving, but that's the least of my concerns. What matters is for it to get wide adoption, and we seem to be on a good track with that.
As for SMI... I love it. That was (and still is) one of my main arguments in favor of Linkerd over Istio. I was convinced that it would become a standard adopted by (almost) everyone, but now I'm not so sure. To be clear, I want it to become that. We'll see.
So, I can't say for certain that Gateway API will "eat" service mesh, nor that SMI will become the standard everyone will use. What I can say is that we desperately need a spec that encompasses all networking-related "stuff" in Kubernetes. Right now, it's fragmented between Ingresses, service meshes, etc. That's probably my wishful thinking, though.
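The "weights" mentioned above are a good example of a service mesh feature absorbed into the spec. Here is a minimal sketch of a weighted HTTPRoute; all names are hypothetical, and the exact `apiVersion` depends on the Gateway API release you install:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app                  # hypothetical route name
  namespace: production
spec:
  parentRefs:
    - name: my-gateway          # a Gateway defined elsewhere
  hostnames:
    - app.example.com
  rules:
    - backendRefs:
        - name: my-app-stable   # ~90% of requests
          port: 8080
          weight: 90
        - name: my-app-canary   # ~10% of requests
          port: 8080
          weight: 10
```

The same manifest should work with any conformant implementation, which is exactly the kind of portability Ingress annotations never offered.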
@@DevOpsToolkit I was about to write the same comment as the OP. As I see it, service meshes existed before Kubernetes in the form of projects like Netflix Zuul + Spring Eureka. They provided service discovery, load balancing/shedding, resilience, rate limiting, circuit breaking, etc. Since Kubernetes provides many of these features out of the box, IMO, service meshes should have just stopped existing, at least inside Kubernetes. But perhaps the project maintainers didn't want to be unemployed, so they retrofitted them for Kubernetes. mTLS, retry logic, and observability are some of the value adds that a service mesh provides, but IMO the key features it provided in non-K8s environments were service discovery, load balancing/shedding, and resilience. I had done a small PoC on Zuul before getting into K8s. It was really nice to see what Zuul could do. But after finding K8s, Zuul feels pointless.
When service meshes were running outside K8s, there was no cluster boundary, so mTLS made sense. I am not a fan of in-cluster communication over mTLS. I mean, where do you start trusting the traffic? If your intra-cluster traffic is not safe, you have bigger problems. I feel it's more of a carried-over feature.
I agree. Internal mTLS is not as important as the one for external facing apps so it's not the end of the world if one does not have it. On the other hand, it's almost free (negligible maintenance work) so there is also no good reason not to use it, outside of potential performance penalty.
Hmmm... I've been waiting about 12 minutes and still no external IP assigned... any ideas why?
What do you get when you describe it? Events should point you towards the issue.
@@DevOpsToolkit Thanks for the quick response, man! Wow, lol! Yeah, I'll check that out and post back shortly! Thanks!
That's very exciting! That'd probably also mean that you could have middleware, such as an OAuth/OIDC gateway, in the middle with whatever provider you use.
True. That'll depend on implementations of the API.
I'll keep an eye on this; I'll definitely be doing some tests on my cluster with it.
I agree that service meshes are really complicated. My team doesn't want to implement any service mesh due to the overhead, as we only need some simple features. I will probably suggest this when it's a bit more mature.
Great video!
BTW, check out Cilium. They have a very low-overhead service mesh based on eBPF ;)
Gateway API is all about the spec, not the implementation. Cilium is already working on adopting it.
What about the other features of a service mesh, like the control plane and observability? Any sidecar proxying in the Gateway?
That would be the domain of a specific implementation. Gateway API is mostly focused on trying to provide a universal API without diving into implementations.
@@DevOpsToolkit It seems to me that Gateway API is only modeling/generalising a subset of the SM features, namely the ingress traffic, but regardless of provider implementations it doesn't do anything with pod-to-pod networking (that's where sidecars become relevant and cilium might have promising future insights) or egress networking. That won't make SM go out of scope.
You're right. It does not make service meshes go away, or even Ingress itself. There's still a long way for it to go, and we are yet to see what it will look like in its final form.
Thank you, very informative. It seems to me this will also replace load balancers, so I'll definitely wait for MetalLB to support it before I touch it. The name is a bit unfortunate since it's so generic that it'll be hard to Google.
I guess some misuse of annotations will again creep in between implementations, in order to be more competitive on the market by providing more features than the spec offers. BTW, looking forward to seeing you at Belgrade's conference this autumn :)
An update to this video would be interesting.
How are API authorizations handled with Gateway API? Can we integrate with an IAM solution that provides OAuth/JWT-based authorization?
You can (or soon should be able to) do everything you normally do with Ingresses or service meshes. The only difference is that now there is a standard spec instead of each project coming up with its own CRDs (specs).
How do I connect Kubernetes with GCP API Gateway?
Are you referring to the Kubernetes Gateway API (the one in this video) or GCP API Gateway? If it's the latter... I'm not sure... I haven't tried to connect it with Kubernetes and I'm not even sure that's possible.
Interesting times! Do you think that this will have any effect on the number of implementations available and their features? Are there any implementations you think will do well or badly out of this new standard?
It's too soon to say which implementation will prevail. For now, the focus is on having a standard spec on top of which we will see different implementations. The important part is to have a standard spec instead of having every project/vendor inventing its own.
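To make the spec-versus-implementation split concrete, here is a rough sketch of how a GatewayClass pins a concrete implementation while everything beneath it stays portable (the controller name and other identifiers are hypothetical):

```yaml
# The GatewayClass is the only place that names a concrete
# implementation (the controller). Gateways and routes below it
# are written against the standard spec.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller   # hypothetical
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: production
spec:
  gatewayClassName: example-class   # swap the class, keep the rest
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```

Switching vendors should, in principle, mean changing the GatewayClass rather than rewriting every route.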
Hi Viktor. Is there an advantage of using Istio with Istio Gateway/Gateway API over standard Nginx Ingress Controller with Istio?
There is no significant difference without going into nitty-gritty details that are not important for the majority. The main one would be that with Istio Gateway, you don't need a separate Ingress but, on the other hand, with Istio Gateway you are, in a way, forced to meshify all apps that are exposed to the outside world and that is often not what many of us want.
Now, that was NGINX vs. Istio Gateway. Gateway API is a new standard (new API) that will replace Ingress API and most Ingress solutions are now offering the option to define stuff both using Ingress and Gateway API spec.
Thanks! That clarifies things @@DevOpsToolkit
Seems interesting, but from what I understand it's not doing mTLS, which is really a service mesh feature?
It does :)
@@DevOpsToolkit Hooooo, OK, so service meshes start to be useless then ;) Can you make a video on how to do mTLS with Gateway API?
Service mesh is not becoming useless. It stays. What changes is how we define it. Instead of using implementation specific spec, now we are getting a standard spec that does not depend on an implementation. The manifests should be the same no matter which implementation you choose.
If we implement Gateway API, do we still need to use network policies?
It's not mandatory to use policies so it's up to you.
@@DevOpsToolkit In my opinion, I will use Gateway API for traffic coming from outside the cluster, but for pod-to-pod communication I will use network policies. CMIIW.
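That split (Gateway API for north-south traffic, NetworkPolicy for east-west) can be sketched with a standard NetworkPolicy; the labels and port here are hypothetical:

```yaml
# Allow only frontend pods to reach backend pods on port 8080;
# once a pod is selected by an Ingress-type policy, all other
# in-cluster traffic to it is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```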
Could you recommend a good learning material for K8S for Spring Boot developers?
Much appreciated.
I do not focus on any specific language when talking about Kubernetes since it's not directly related to any. Learn Kubernetes independently of Spring Boot and you should have no trouble running your apps packaged as container images.
Do you have any plan on covering Traefik Mesh?
It's on my TODO list, but I cannot say when I'll get to it. For every video I do, 3 items are added to the list so I gave up on planning and focus only on the next one.
@@DevOpsToolkit I imagine!
Great explanation! I always thought that API Gateway and Gateway API are the same thing.
QUESTION: unrelated to Kubernetes, how about turning a regular REST API into a Gateway API? Again, unrelated to Kubernetes, but rather: instead of having an API Gateway routing calls to other APIs, just turning an existing API into an API Gateway and making it both an API Gateway and an API.
Disclaimer: to me, this makes no sense, but if the requirement is like that and there's no choice, are there any pro points for such a design, or is the best thing to run away from the place lol?
I'm not sure I understood what you meant by "regular REST API into a Gateway API".
Gateway API forwards requests entering your cluster to destination services (among many other things), so if a request for a REST API enters the cluster, it'll be forwarded to the app, which responds back to the client. Now, since I'm not sure I understood your question, I'm not sure whether I'm answering it or complicating it even more.
If you can clarify it a bit more I'd be more than happy to give a better answer.
@@DevOpsToolkit Say I have a Spring Boot REST API, CarAPI, that does CRUD on a database. Then someone comes up with a requirement and says "hey, we want this existing CarAPI to also get us the list of planets in our solar system."
To do so, it will call a couple of other downstream APIs, and for that reason, we want to make it also an API Gateway (say Zuul or Spring Cloud Gateway).
Hope that explains it. Again, I see zero pros to this approach, or why anyone would ever want to do this instead of adding an API Gateway in front of CarAPI, but that's the requirement.
@@dinobulja I'm guessing something like that (aggregation) should be done by a third party and not by the service you developed. If that's the case, then you would need something else. Gateway API (and Ingress before it) does not have the goal to do that. That would be an "API Gateway" of some kind (and yes, the names are confusing). I'm not sure that's a good idea, though. Such solutions were common in the past and are now mostly gone. More often than not, they tend to fail to reduce complexity (quite the opposite).
@@DevOpsToolkit That's exactly my point, and I had a feeling that this is some old, obsolete way of doing things; I just don't know how to change that way of thinking.
Can't I already do all of this in Traefik? (Though it's a bit funky.)
There is nothing new introduced in Gateway API. That is not the goal. The objective is to standardize the networking API and let providers (including Traefik) come up with implementations of that API. Most of those providers have already developed all those features, and quite a few others, but you, as a user, have to use their API. Soon, the implementations will stay more or less the same, but the API will be common.
It would be nice to have that for pod-to-pod communication (east-west). Linkerd does not come with a gateway, so this could be a nice combination.
Do you have a video on service mesh? I'm searching for a good video related to service meshes.
Something like ua-cam.com/video/cjhb7_uwzDk/v-deo.html ?
@@DevOpsToolkit thanks
Can someone share a link to a course on Linkerd with ingress, please?
Can you share that link, which covers Linkerd completely, in detail?
@vamsikukati8495 I did not do it after all. Not many asked for it, so I moved on to other subjects.
@@DevOpsToolkit Thanks. Can you guide or suggest in one or two lines: I have machine learning models that I am deploying in AKS, and I'm planning to use the Linkerd service mesh with the NGINX ingress controller. What is the best practice to expose this ingress with Linkerd in place?
@vamsikukati8495 Linkerd does not have its own ingress (like Istio does), so you can use NGINX with it in the same way you would use it without Linkerd. Incoming traffic goes from an external LB to NGINX; NGINX redirects it, based on the host, to a Service; and Linkerd manages traffic between services (between apps).
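A rough sketch of the NGINX side of that setup (host names and ports are hypothetical; the `service-upstream` annotation is what the Linkerd docs suggest so NGINX targets the Service, and thus the proxy, rather than pod IPs directly):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # host-based routing, as described
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app       # Linkerd's proxy handles the rest
                port:
                  number: 8080
```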
Istio also supports gateway api. Gateway API is still in beta.
Made that comment before I saw your last few min. 😜.
Going to be a worthwhile upgrade.
I don't understand: what is the real difference between this and Emissary-ingress?
The difference is in the spec. We are finally getting a standard that is independent of implementations.
@@DevOpsToolkit That's really nice and impressive, but there's a long way from here to production, and to all the features we get from the other tools in the same niche...
We're not there yet, but we are getting there very fast.
I don't think this has anything to do with features. It's more about each tool in the past having to use their own CRDs because the current k8s resources (such as Ingress) are too limiting. This way, there will be a standard set of (core) resources, that may be enough to replace any custom CRDs for Ingress Controllers and (possibly) Service Meshes.
Does Gateway API need any database or storage when Terraform code is written for it?
You should not need terraform. The point is that we are getting a standard spec that does not depend on any implementation.
OK... thank you.
We need to do a SPIFFE video....
It's coming...
Rate Limit by REDIS ?
Redirection Rules?
Config Snippet?
That's all part of the spec. Now it's up to specific implementations to do the last mile.
Also retries, circuit breaking, etc., which some service meshes implement.
That is or will be part of the spec. Also, it's made in a way that can be extended by providers with any additional info they might need.
Eventually, we will succeed in reinventing OpenShift.
"An emotional attachment to Ingress"... You mean the single worst part of k8s, which must have asked storage to hold its beer for that badge of honor.
Long live Gateway API, I guess?
Long live gateway API.
I have zero attachment to ingress.
Less than zero attachment to service mesh.
Please don't call it "kube cattle" 😂😂 It's either "kube control" or "kube C-T-L".
Note taken. Will do starting from the next video. Thanks for the tip.
gg game 🤠
no mTLS no party
mTLS does not depend on a spec (besides, maybe, a boolean field), so that's all up to the implementations of Gateway API.
first comment!
Thanks!
Thanks a ton