Why HTTP-based Microservices Are a BAD Idea | Explained with an Example

  • Published 21 Aug 2024
  • When we discussed monolithic apps, we noted that this kind of communication takes place between processes.
    In essence, synchronous communication means using the HTTP or gRPC protocol to return a response synchronously: the client submits a request and waits for the service to respond, so the client code must block its thread until the server's response arrives. In other words, the client calls the server and its operations are blocked while it waits.
    Asynchronous communication basically means the client sends a request but does not wait for a response from the service. The main point here is that the client should not block a thread while waiting for a response.
    To support the channel : paypal.me/hell...
    Facebook: / krish.dinesh.official
    Instagram : / krish.dinesh.official
    twitter: / krishantha (@Krishantha)
    #krishdinesh #microservices #messaging
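The blocking vs. non-blocking contrast described above can be sketched in a minimal, self-contained simulation (the sleeping function stands in for an HTTP/gRPC call, and a plain in-process queue with a worker thread stands in for a message broker such as Kafka; all names here are illustrative):

```python
import queue
import threading
import time

# --- Synchronous (HTTP/gRPC) style: the caller blocks until the service replies.
def sync_call(payload):
    time.sleep(0.1)                      # stands in for network + processing time
    return f"processed {payload}"

# --- Asynchronous (messaging) style: the caller enqueues a message and moves on;
#     a worker thread stands in for a broker consumer such as a Kafka listener.
requests = queue.Queue()
results = []

def worker():
    while True:
        payload = requests.get()
        if payload is None:              # sentinel: stop the worker
            break
        results.append(f"processed {payload}")
        requests.task_done()

threading.Thread(target=worker, daemon=True).start()

t0 = time.monotonic()
sync_result = sync_call("order-1")       # caller thread is blocked for ~0.1 s
sync_elapsed = time.monotonic() - t0

t0 = time.monotonic()
requests.put("order-2")                  # returns almost immediately
async_elapsed = time.monotonic() - t0

requests.join()                          # the demo waits here; a real client would not
```

The point of the sketch: the synchronous caller's elapsed time includes the full service latency, while the asynchronous caller only pays the cost of enqueuing the message.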

COMMENTS • 30

  • @prashanthb6521
    @prashanthb6521 A year ago

    Nice explanation of the problem, Mr. Dinesh. Thanks.
    One suggestion: I strongly urge you to wear a collar mic so that your voice is heard clearly.

  • @MohamedRizan
    @MohamedRizan 2 years ago

    Hi Krish Dinesh, Your videos are incredibly clear and easy to follow. Please make more videos; they are quite helpful. Have a blessed future.

  • @shanilmiranda4913
    @shanilmiranda4913 2 years ago

    Thank you ayye. This is just interesting and valuable, and you are helping the SE community massively. Thank you for this content.

  • @sushaindilishan1237
    @sushaindilishan1237 2 years ago

    Great explanation! Helping out the developer community with every video. Priceless!!

  • @arunausitha242
    @arunausitha242 A year ago

    Thanks for the great explanation, Krish. But at 13:44, that is not how it happens.

  • @dattasaimogudampalli
    @dattasaimogudampalli 2 years ago

    Thanks for the wonderful video. Waiting for the implementation of this Kafka-based service you explained.

    • @krish
      @krish  2 years ago +1

      Very soon

  • @upulabeysinghe1488
    @upulabeysinghe1488 2 years ago

    Good explanation for the dev community, highly appreciated. For a next video, share a small solution with a GitHub link... well done, Dinesh.

  • @darshanbc
    @darshanbc 2 years ago

    As always, a simple explanation ❤️ Hope you are doing well in these difficult times in Lanka. ❤️ from India

    • @krish
      @krish  2 years ago

      thank you

  • @udarasan8216
    @udarasan8216 2 years ago

    Mmm, this video made my day better 💪

  • @Hariprasadization
    @Hariprasadization 2 years ago +1

    Well explained. Thanks!

  • @mzuhry4263
    @mzuhry4263 A year ago

    I wish my lecturer had taught me like this.

  • @shivamgupta5476
    @shivamgupta5476 2 years ago

    Great Explanation

  • @muratasarslan2359
    @muratasarslan2359 2 years ago

    I like the way you explain these topics, thank you :-)

  • @akilasupun
    @akilasupun 2 years ago +1

    Really interesting topic. I have a question: doesn't this put a high load on Kafka, since each service has to do its communication through Kafka?

    • @krish
      @krish  2 years ago +2

      You can scale Kafka to many replicas, and it's a very high-throughput streaming service. I have done a video with a demo showing how you can load-balance on the service side using consumer groups. Watch that.

  • @buddhikalakshan831
    @buddhikalakshan831 2 years ago

    Great explanation 👍

  • @namilaperera5649
    @namilaperera5649 2 years ago

    Great explanation! I am wondering how we should handle the interaction with the UI in this case. Should we open a WebSocket connection as soon as the user submits the request, and inform them of the result through it once all the async work is done? Will that scale well, since we would have to keep many WebSocket connections open?

    • @sghosh4223
      @sghosh4223 2 years ago +1

      It is not a good idea to keep the user waiting. After all, if some other service is down, the verification process will be delayed, and you have no idea how long it will take to get confirmation from everyone. So even with a WebSocket, what will you tell the user? The user will simply get impatient and leave the site. So use an async user workflow here as well.
      Tell the user something like: "Thank you for choosing us. We are processing your request. Once approved, we will notify you. You can also track the progress using the ticket number T12345."
      This way you do not have to maintain many long-lived connections or perform lots of AJAX polls. The user can leave the site knowing that an email will arrive once the work is done. Once the background checks are done, the system will send an email to the user; if the user is impatient, they can check manually the next day.
      This video also highlights why there should be a monitoring and alerting system watching the services. If you are checking with other systems that are not under your control, you need monitoring to alert you when an application gets stuck for a long time. If the request is still in the queue, there is no response in the response queue, the other system is beyond your control, and that organization's tech team is taking a long time, you have 3 options:
      #1 Cancel the whole thing and send an email to the user (but then you lose a customer).
      #2 Wait and hope that the problem will be resolved soon (and in the meanwhile your customer support has a busy day receiving angry phone calls).
      #3 Flag the request as "provisionally approved" so that the approval logic does not wait for the reply in the message queue and proceeds. But then you have to log this, and later, when you get a health-check OK from the other service, you check the log and call that service for verification; if it comes back OK, all is well, but if not, you need to decide again.
      Aren't distributed systems fun?
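The ticket-based workflow this comment describes can be sketched as a minimal in-process simulation (the ticket format, the `submit_request` helper, and the in-memory queue standing in for the broker and the e-mail system are all illustrative):

```python
import itertools
import queue
import threading

tickets = {}                     # ticket id -> current status, queryable any time
verifications = queue.Queue()    # stands in for the message broker
notifications = []               # stands in for outgoing e-mails
ticket_ids = itertools.count(1)

def submit_request(user):
    """Accept the request and answer immediately with a trackable ticket number."""
    ticket = f"T{next(ticket_ids):05d}"
    tickets[ticket] = "processing"
    verifications.put((ticket, user))
    return ticket                # the user can leave; no socket stays open

def verification_worker():
    """Background consumer: runs the slow checks, then notifies by 'e-mail'."""
    while True:
        item = verifications.get()
        if item is None:         # sentinel: stop the worker
            break
        ticket, user = item
        tickets[ticket] = "approved"     # pretend every background check passed
        notifications.append(f"email to {user}: {ticket} approved")
        verifications.task_done()

threading.Thread(target=verification_worker, daemon=True).start()

ticket = submit_request("alice")  # returns at once, e.g. "T00001"
verifications.join()              # only the demo waits; the real user never does
```

The design point: the user-facing call returns as soon as the ticket exists, so no long-lived connection is held while the slow checks run in the background.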

    • @krish
      @krish  2 years ago +1

      Thank you for taking the time to write a clear, detailed explanation.

    • @MrMikomi
      @MrMikomi A year ago

      @@sghosh4223 You say it is not a good idea to keep the user waiting, and then you say to give the user a message telling them they will get an email when the thing is done? Seriously? That's a great way to lose a customer.

  • @ashokp4608
    @ashokp4608 2 years ago

    Hi Krish,
    Great explanation, thank you for this video.
    We have a use case: one of our services reads a Kafka topic and inserts the data into MariaDB. Due to a connection issue, the data could not be saved to MariaDB, so in this case we go back to the upstream service and ask them to publish the data again.
    My question is:
    Is there any option in Kafka to store data for a certain duration, like 24 or 48 hours, so that we can consume from Kafka in the failure case instead of asking the upstream API to publish again?

    • @krish
      @krish  2 years ago

      What you do is COMPLETELY WRONG 🤨 (I mean, you shouldn't do it that way). I did a similar use case with example demo code; check my channel. It was published last week, I think. If you still have questions, comment here.
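For reference on the retention question above: Kafka does keep messages for a configurable period regardless of whether they have been consumed, so a consumer can re-read them after a failure instead of asking the producer to republish. A sketch of the relevant settings (the topic name `orders` and the 48-hour value are only illustrative):

```properties
# Broker-wide default retention; Kafka ships with 168 hours (7 days).
log.retention.hours=168

# Per-topic override, e.g. keep the "orders" topic for 48 hours (in ms):
# kafka-configs.sh --bootstrap-server localhost:9092 \
#   --entity-type topics --entity-name orders \
#   --alter --add-config retention.ms=172800000
```

Within the retention window, a consumer can simply seek back to an earlier offset and replay the messages it failed to process.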

  • @ratankr39
    @ratankr39 A year ago

    Thanks for the detailed session; I like leveraging a messaging system to handle changes reliably. Although:
    1. You are making Kafka a single point of failure for your system. Any thoughts on overcoming that?
    2. How would you handle dependent work? Say billing needs to wait on user creation in accounts, but for some reason accounts has not completed its job yet. Now billing will think that this user does not exist and discard the request. If you have that covered, how would you handle deadlock scenarios and bloating the Kafka system with messages? How do you debug and come out of this?

  • @niroshansandeepa8797
    @niroshansandeepa8797 2 years ago

    You mentioned that there is no thread blocking, but services are listening to Kafka. If a service is listening to Kafka and, let's say, there is a pending request, then the request has not completed yet, so the thread is alive and not going to time out (still listening to Kafka). What happens if more requests come in faster than responses are produced? Won't that lead to the circuit breakers not opening? I'm confused now 🙄

    • @krish
      @krish  2 years ago +1

      No, no… messaging does not work on a request-response cycle like HTTP. I have done 3 videos about Kafka that explain how it works, with demos. Watch those 3 and you will get the point.

    • @niroshansandeepa8797
      @niroshansandeepa8797 2 years ago

      @@krish Sure, I'll watch them. Thank you.

  • @nipundesilva9306
    @nipundesilva9306 A year ago

    Can we use RabbitMQ in a similar manner instead?

  • @vishwamithra3390
    @vishwamithra3390 2 years ago +1

    Laughs in OpenStack