Very well presented. I'm very glad you put actual product names in the diagrams, so that it's clear which are complementary and which are competing with each other. It's still somewhat confusing how one deploys all these in an automated manner... their configurations seem to be somewhat interrelated.
Thanks for the detailed video Kai. I do get the service mesh/proxy idea for communication between microservices or with the Kafka platform, but microservices can also use Kafka events to communicate with each other. In that case, do we still need these proxies?
The proxy is "just" a sidecar. It adds service-independent logic like security or filtering. Hence, Kafka is still the communication layer. The proxy enables configuring separate logic across different microservices (even if they use very different technologies). If you do some research on the sidecar design pattern (and how e.g. Kubernetes uses it), it becomes clear how the service mesh proxy (like Envoy/Istio) is complementary (and optional) to Kafka. But the proxy does not do the communication between the microservices. That's Kafka (or in other cases gRPC or REST/HTTP).
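The division of labor described here (cross-cutting policy in the proxy, business communication over Kafka) can be sketched in a toy example. Everything below is illustrative; a real sidecar like Envoy works at the network level, not as an in-process function, and none of these names come from a real API:

```python
# Toy sketch of the sidecar pattern: service-independent policy (here, a
# topic allow-list) lives in the proxy, while the application only holds
# business logic. All names are made up, not a real Envoy/Istio API.

def sidecar_policy(message: dict, allowed_topics: set) -> bool:
    """Cross-cutting check the proxy applies before forwarding traffic."""
    return message.get("topic") in allowed_topics

def handle(message: dict) -> str:
    """The microservice's own business logic, unaware of the policy."""
    return f"processed {message['payload']}"

msg = {"topic": "orders", "payload": "order-42"}
if sidecar_policy(msg, allowed_topics={"orders"}):
    print(handle(msg))  # prints "processed order-42"; Kafka still carries the message
```

The point of the split: changing the security or filtering policy touches only the sidecar, never the service code, across all microservices regardless of their implementation language.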
@Kai thank you for the detailed presentation! I was wondering what are the alternatives to achieve Kafka ACL authorization with Istio mTLS, except the commercial solution provided by BanzaiCloud? Are there any alternatives on this or workarounds? It seems to me a limitation of the Istio-Kafka solution.
Honestly, I did not see much evolution or demand for this service mesh stuff since I recorded the video. I don't have a good answer but I think custom workarounds are the best option today. BanzaiCloud was acquired by Cisco in the meantime. I don't think they focus on Kafka + Istio anymore from a product perspective (but I might be wrong).
This is a great presentation and I have been searching for this for quite some time. You kind of read viewers' minds about the questions they might have and addressed those in your presentation. One question though: I am not very clear on how application traceability is really going to work with this if I use it for event-driven microservices. Would Istio support this out of the box?
Istio supports application tracing indirectly. You can use sidecars to send the tracing/monitoring metrics to the related tool. Service mesh tools integrate with tools such as New Relic, Zipkin, Jaeger, Prometheus, Datadog, etc., where you actually implement the tracing and monitoring.
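To make the indirect tracing concrete: in the event-driven case, the trace context typically travels in the Kafka message headers, which is what lets a backend like Jaeger or Zipkin stitch producer and consumer spans into one trace. A minimal sketch of that propagation, with hypothetical helper names (a real setup would use a tracing client such as OpenTelemetry instead of these hand-rolled functions):

```python
import uuid

def inject_trace_context(headers: dict) -> dict:
    """Producer side: put a trace id into the message headers (new one if absent)."""
    headers = dict(headers)
    headers.setdefault("trace-id", uuid.uuid4().hex)
    return headers

def extract_trace_context(headers: dict) -> str:
    """Consumer side: read the trace id and continue the trace the producer started."""
    return headers["trace-id"]

produced = inject_trace_context({"content-type": "application/json"})
# The consumer sees the same trace id, so both spans join one trace.
print(extract_trace_context(produced) == produced["trace-id"])  # prints True
```

The sidecar's role is then only to export the resulting spans/metrics to whichever backend you configured; the application (or its tracing library) still has to propagate the context through the broker.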
Event-driven microservices are location-transparent. They do not need to know the address/location of other microservices, since they don't call each other the way synchronous microservices do, so there is no need for traditional service discovery. My naïve question is: why do we need to leverage the service discovery offered by a service mesh?
A valid point. I think the term "service discovery" is used a little bit differently in the service mesh context. Even if you don't need to know the address/location, you still need to find services (e.g. via metadata or function descriptions) to use them (no matter what URL they have). For instance, in the event streaming world around Kafka, many projects buy or implement data governance solutions for a data catalog (+ data lineage, data quality, etc.). These requirements do not go away with service mesh principles.
Thanks Kai for your prompt response and acknowledgement. I am still not convinced why a service (consumer, producer, or stream function in a streaming platform) needs to know the name or address of another. They don't: their interface is a message or event conforming to a schema, and each service's or function's responsibility ends with posting an event or message, nothing more, nothing less. It is as simple as "I don't call you and you don't call me; just check your mailbox, read, process, and drop a message or event in the mailbox. Either the postman will deliver it or I will pick it up."
@@RideLikeAChamp I don't disagree with what you say! Also, to clarify, a streaming application does not need to know the other service, but only the Kafka topic (and message schema) to consume the message. Otherwise, it would be the same tight coupling as with web services. Hence, streams governance includes the holistic view (across different applications and topics), but also provides help for a single application (e.g. using a data catalog to find a Kafka topic including its metadata).
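This topic-plus-schema coupling can be shown in a small sketch: the consumer needs only the topic name and the expected event schema, never the producer's address. The schema and helper below are invented for illustration; the commented-out wiring assumes a Kafka client library such as confluent-kafka:

```python
import json

# Hypothetical event schema for illustration: field name -> expected type.
ORDER_SCHEMA = {"order_id": str, "amount": float}

def validate_event(raw: bytes, schema: dict) -> dict:
    """Deserialize an event and check that it conforms to the expected schema."""
    event = json.loads(raw)
    for field, expected_type in schema.items():
        if not isinstance(event.get(field), expected_type):
            raise ValueError(f"field {field!r} does not match the schema")
    return event

# Consumer wiring (assumed client library, shown only as comments):
# consumer.subscribe(["orders"])                     # the topic name is the only coupling
# msg = consumer.poll(1.0)
# order = validate_event(msg.value(), ORDER_SCHEMA)

print(validate_event(b'{"order_id": "A1", "amount": 9.99}', ORDER_SCHEMA))
```

Swapping the producer for a completely different implementation changes nothing here, as long as the topic and schema stay stable; that stability is exactly what a schema registry or data catalog governs.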
The reason I posted this question to experts is that I recently designed a broker- and platform-agnostic event-driven microservices architecture for an application ecosystem (it works against all popular messaging/streaming middleware, with no code change required to switch between brokers). I did not feel the need for a service mesh, as I did not need to discover services, and the framework I leveraged offered observability integration out of the box with a little configuration. I am wondering: do I still need a service mesh? Maybe for message-header-based mutual TLS at L4, and to apply a unified network policy across services, again at the L4 protocol layer?
Well, I think the basic rule of thumb is to only use service mesh technology if you really need it. As you say, security or network config might be easier using it, but if you/your end users are happy to use the platform without it, I would not add it. That's the 10,000-foot answer, of course. For a deep tech answer, I guess one of my colleagues would have to take a deeper look :-)
What are the best Kafka configurations and setup for a 40 MB/s load?
In general, Kafka configurations depend on the workloads, message size, latency requirements, network, etc. But for 40 MB/s, the default setup with a three-broker cluster is sufficient and no fine-tuning is required.
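As a back-of-the-envelope check on why 40 MB/s is comfortable for a default three-broker cluster: the per-partition throughput used below is an assumed ballpark figure for illustration, not a Kafka guarantee, and you should replace it with your own measurements.

```python
import math

def partitions_needed(target_mb_per_s: float, per_partition_mb_per_s: float = 10.0) -> int:
    """Minimum partitions to sustain the target rate (assumed ~10 MB/s per partition)."""
    return math.ceil(target_mb_per_s / per_partition_mb_per_s)

def total_write_mb_per_s(target_mb_per_s: float, replication_factor: int = 3) -> float:
    """Total bytes/s the cluster writes, including replica traffic."""
    return target_mb_per_s * replication_factor

print(partitions_needed(40))       # 4 partitions at the assumed per-partition rate
print(total_write_mb_per_s(40))    # 120 MB/s of writes spread across three brokers
```

Even with replication factor 3, each broker handles roughly 40 MB/s of writes, which commodity disks and networks absorb easily, hence the "defaults are fine" answer.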
That was exactly what I was looking for, thank you very much.
Thank you. Exactly what I've been trying to find!
It has been almost a year. I wonder if the needed L7 PRs mentioned at 57:50 have been resolved. Any idea?
Yes, the L7 support for Kafka has been merged into Envoy already, though it's still an early version with a limited feature set.