Glad to see new lessons are more applicative level. Thank you for your job! )
Glad you are finding them useful!
How about `GraphQL`? I understand it's primarily suited for client-to-server interaction rather than inter-service communication in a microservices architecture.
However, I'd love to hear different perspectives on this!
Sure, GraphQL has an interesting story when it comes to stamp coupling and bandwidth. I'll add a lesson to this in the near future!
Very nice topic and well explained. Thanks M. Richards
Glad you are finding them useful!
Always a great class! 🎉
Thanks!
Excellent lesson. This is a gem. Thanks for sharing your knowledge generously with us.
A question: does gRPC have lower latency than req-res messaging? Would you mind elaborating on this a bit, please?
Oh yes, much less latency. The persistent HTTP/2 connection combined with protocol buffers gives gRPC very low latency. HOWEVER, it is a blocking wait upon sending, so OVERALL latency-wise, request/reply messaging might give you better results if you have lots of processing to do while waiting for the reply in the reply-queue.
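The trade-off described above can be sketched in code. This is a minimal in-process simulation, not real gRPC or a real broker: two `queue.Queue` objects stand in for the request and reply queues, and a background thread plays the server. The point is that with request/reply messaging the client can overlap useful work with the wait, which a blocking gRPC call cannot.

```python
import queue
import threading
import time

# Two in-process queues stand in for the broker's request and reply queues.
request_q: "queue.Queue[str]" = queue.Queue()
reply_q: "queue.Queue[str]" = queue.Queue()

def server() -> None:
    # Simulate a slow payment service: 0.2 s of processing per request.
    msg = request_q.get()
    time.sleep(0.2)
    reply_q.put(f"reply-to-{msg}")

threading.Thread(target=server, daemon=True).start()

# Request/reply messaging: send, then keep doing work until the reply arrives.
request_q.put("charge-order-42")
work_done = 0
while True:
    try:
        reply = reply_q.get_nowait()
        break
    except queue.Empty:
        work_done += 1          # useful work overlapped with the wait
        time.sleep(0.01)

print(reply)             # reply-to-charge-order-42
print(work_done > 0)     # True: work happened while the server was processing
```

With a blocking gRPC call, the client would sit idle for those 0.2 seconds; here it completed `work_done` units of work in the meantime, which is the "overall latency" point above.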
One more question: in your book Fundamentals of Software Architecture, you explained request-response messaging and wrote that it halts all operations until it receives the response. But now you recommend this approach for a fire-and-forget scenario. Can you please elaborate on the discrepancy?
TL;DR: I had similar thoughts, so I wrote down some notes. I think it depends on the client implementation:
I believe (just from the video) that the limitation comes down to the implementation. In the video, he labeled it as request-reply, and the diagram showed two message queues.
In a "halts all operations" scenario, the client (wishlist service) would be implemented to make a request by putting a message on the queue for the server (payment service), then "wait" for the reply by checking the second queue. I think this case would be used when the client cannot do anything else until it receives a response, like when authenticating, or when creating a new record where the server generates an id that's used in the next steps. In this implementation, the second queue may be dynamic, created only for the specific request.
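The "halts all operations" variant described above can be sketched like this. It is a toy in-process model (the service names and payloads are illustrative, and `queue.Queue` stands in for a broker): the client creates its own per-request reply queue, passes it along with the request, and then blocks on it until the server's id comes back.

```python
import queue
import threading
import uuid

# Shared request queue for the payment service.
payment_requests: "queue.Queue[tuple]" = queue.Queue()

def payment_service() -> None:
    # The server replies on whichever queue the request names
    # (the dynamic, per-request reply queue).
    payload, reply_q = payment_requests.get()
    reply_q.put({"id": str(uuid.uuid4()), "for": payload})

threading.Thread(target=payment_service, daemon=True).start()

# Wishlist service: create a reply queue just for this request, send, block.
my_reply_q: "queue.Queue[dict]" = queue.Queue()
payment_requests.put(("create-payment-record", my_reply_q))
reply = my_reply_q.get()   # halts here until the server responds

print(reply["for"])        # create-payment-record
```

The `my_reply_q.get()` call is where all operations halt: the client genuinely cannot proceed without the server-generated id.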
In the "fire-and-forget" scenario, the client would be implemented to make a request by putting a message on the queue for the server, then just returning execution back to the caller. This case can be used when the client does not need the response for any next steps, like updating a record. In this implementation, the second queue may be a general queue for all updates that the client is already watching.
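By contrast, the fire-and-forget variant is much simpler. A minimal sketch, again with an in-process `queue.Queue` standing in for the broker and an illustrative `send_update` helper: the client enqueues the message and returns immediately, with no reply queue involved.

```python
import queue

# Single outbound queue; no reply queue exists in this scenario.
updates_q: "queue.Queue[dict]" = queue.Queue()

def send_update(record_id: int, fields: dict) -> None:
    # Fire-and-forget: enqueue and return immediately to the caller.
    updates_q.put({"id": record_id, "fields": fields})

send_update(7, {"status": "shipped"})
print(updates_q.qsize())   # 1 -- message queued, caller never waited
```

Note this matches the reply below: fire-and-forget is really just regular one-way messaging, not request/reply.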
Yeah, that was a bit misleading. for fire-and-forget you would just use REGULAR messaging, NOT request/reply. Sorry about that!
@@markrichards5014 Mark, you are legendary and I have learned a lot from you. Even if you occasionally make a mistake, it's ok 😉