Razorpay Engg here. One thing that was critical for us was ensuring we consume all the events published by the clients. There are two important things we implemented: an Outbox pattern on the publishing side, and second, the API layer doesn't write to the DB, since that can become another bottleneck. You can read about the Outbox pattern in another blog we have written; it is a critical component of scaling a microservices architecture. If you're interested, we can come talk about this on your channel too.
I would love to host you. Although I have never had a guest on my channel, it would be fun to do a deep dive (so long as Razorpay permits) on the design.
Let me know once you are comfortable. You can reach out via LinkedIn or Twitter
twitter.com/arpit_bhayani
www.linkedin.com/in/arpitbhayani/
@@AsliEngineering we can do it officially. Someone will reach out
Does the outbox pattern suit this video's notification system? I understand it suits communicating transactional domain events across microservices, but I can't see how it fits here.
@@vyshnavramesh9305 for us, webhook delivery to the merchant's system is a critical part of the payment flow. We need to guarantee at least once delivery. Hence, we have to ensure that the payment system's events reach the notification platform. From there the notification platform ensures at least once delivery. You can't believe how many messages get lost over the network at this scale.
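The guarantee described above hinges on the outbox write sharing a transaction with the business write. A minimal sketch in Python with SQLite (the table names and event shape are illustrative, not Razorpay's actual schema):

```python
import json
import sqlite3

# The payment row and its outbox event are committed in ONE transaction,
# so a crash can never persist the payment while losing the event.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id TEXT PRIMARY KEY, amount INTEGER)")
conn.execute("""CREATE TABLE outbox (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    payload TEXT,
    published INTEGER DEFAULT 0)""")

def capture_payment(pid, amount):
    with conn:  # one atomic transaction covers both inserts
        conn.execute("INSERT INTO payments VALUES (?, ?)", (pid, amount))
        conn.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (json.dumps({"event": "payment.captured", "payment_id": pid}),),
        )

def relay_pending(publish):
    # A relay polls unpublished rows, hands them to the broker, then marks
    # them; re-running after a crash re-sends rather than loses events.
    rows = conn.execute(
        "SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for rowid, payload in rows:
        publish(json.loads(payload))  # stand-in for the broker publish call
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (rowid,))
    conn.commit()
    return len(rows)

capture_payment("pay_1", 500)
events = []
relay_pending(events.append)
```

Because the relay marks rows only after publishing, a crash between publish and update causes a re-send, not a loss: exactly the at-least-once behaviour described above.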
Explained in such layman's language… never imagined I could understand such a complex architecture in a span of 15-17 min. This content is too good to be free. Kudos to you :)
Stop giving ideas for monetization man😂
@@StingSting844 yeah ;-)
We can have SNS in place of the limiter and integrate it with SQS. For ordering, we can use SNS FIFO and SQS FIFO. Since SNS and SQS are fully managed services, we can somewhat avoid the rate limiter concept. We can apply SNS filtering rules to push the events to the respective SQS queues. Along with this, we can have an individual DLQ per SQS queue so that the worker (AWS Lambda) can check the DLQ and process the messages. This will help reduce the latency and the cron job work.
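The SNS filtering idea above can be modelled in a few lines. This is a simplified stand-in for real SNS filter policies (which also support prefix, anything-but, and numeric operators); the attribute names and policies are hypothetical:

```python
# Simplified model of SNS subscription filter matching: each SQS
# subscription only receives events whose message attributes match
# its filter policy (every policy key must match one allowed value).
def matches_filter(policy, attributes):
    return all(attributes.get(key) in allowed for key, allowed in policy.items())

# Hypothetical per-queue policies: route by a "channel" attribute.
email_policy = {"channel": ["email"]}
sms_policy = {"channel": ["sms", "whatsapp"]}

event_attrs = {"channel": "sms", "priority": "p0"}
routed_to_sms = matches_filter(sms_policy, event_attrs)
routed_to_email = matches_filter(email_policy, event_attrs)
```

With one policy per subscription, SNS fans a single published event out to only the queues whose workers should handle it.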
Good content! A couple of suggestions: 1) Justifying the choice of Kinesis over SQS (again) for async writes would help, since cost is an important criterion in the design. 2) Reusability of the "task prioritization" module across new requests and scheduling failed ones for retries - does it make sense to move it to a separate microservice/API?
Hey Arpit, great video!
One question though:
At about 7:20, you discuss how read load would increase during peaks. But I don't see how the solution that has been implemented would address this issue.
The solution would address write loads since we are making asynchronous writes to the DB. But read loads would still be high, since worker / executors might need info from DB to process the event.
Please correct me if I am missing something 🙂
Edit:
Is it a secondary effect? By reducing write load using async behaviour, are we freeing up more IOPS bandwidth for reads?
Yes. It is a secondary effect. You free up IOPS to do reads while async workers are doing staggered writes.
What a content man !! Hats off
#AsliEngineering is happening here .. no need to go anywhere .. Kudos to your content Man! Thanks!
Great stuff. Thanks for teaching these things in simple terms 🎉
I feel the database at the end of the flow can be removed if the information can be reconstructed from the source DBs. Dead-letter queues can be the best option; the scheduler can act on the DLQs.
Thanks Arpit 🙏
Excellent Session
Hats off to you man, please don't stop putting up videos like these. You and Gaurav Sen are legends; nowhere on YouTube did I find content similar to you guys 🙂🙂 Keep bringing more system design videos.
Man hands down this is just so awesome. And you have nailed it!! Thanks a ton
Instead of using Kinesis, a SQL DB, and a scheduler, can we introduce retry SQS queues which would be picked up by the workers?
Quality content! The rate limiter to mellow down the spikes from a single user was a good learning.
Thanks for putting out these videos. Love the passion with which you explain things.
Very well explained, please keep posting such videos ❤
I think adding a message queue reduces the IOPS spikes on the DB from the executors, but how did this reduce the overall latency of the system? Is it because the executors had a lot of retries during spikes (>1000 TPS), which caused the delay and added lag that the queue now fixes? But with a queue you introduce an additional process too, right?
Great video and thanks for making it! A quick question on the DB choice: is it required to pick SQL for the DB?
I have a question. Why have MySQL in the new solution? Can't Kinesis directly plug into the scheduler? (Or is it that the scheduler is persisting the jobs so that if the server is restarted, it can still reschedule lost events?)
Awesome explanation, definitely purchasing your course
Thank you. Looking forward to it ✨
How will the read load be mitigated by async calls? If data reaches the DB slowly/asynchronously, won't the systems dependent on it again be slowed down? What's the workaround here? DB scaling? Sharding?
Yeah, was wondering the same. I think the solution doesn't use async calls for reading, it uses it only for writing. But that said, I still don't get how read performance would improve during peak.
Only reasonable explanation that comes to mind is:
Maybe since write load is reducing due to async behaviour, DB might have more IOPS bandwidth for reads.
I'm not sure if that explanation is valid though 😅
Thank you 👏
Thanks for the awesome video. I think sending the mail and recording it in the DB is a sort of distributed transaction (there are patterns like the outbox pattern which can solve this problem), and hence it might play a role in the scaling strategy of the system.
Do you have any resources for the outbox pattern?
I see we could replace SQS with an event bus like Kafka itself; that way it can be used for persistence also. I don't see the necessity of storing it in a message queue and again in an event bus. Thoughts?
Yes, even SQS persists. For this use case, Kafka/SQS would have given similar performance, but Kafka would be costlier.
at times, I find it hard to keep up with the videos :) can't even fathom how you manage to read, try and share so much cool engineering stuff outside work. 👏
I am realizing this and hence starting next week I am chopping freq to 2 per week :)
@@AsliEngineering much appreciated :) pls do not reduce the frequency any further though. also please do a little bit on how to be productive outside work . "A day in the life of a normal curious software engineer" pun intended.
@@adianimesh hahaha :) a lot of people have asked for this but it is very hard for me to record such a video.
I just don't want to put out a narcissistic video :D I stay away from anything that has the potential to distract me :)
If that video by any chance gets big traction, I will be tempted to take that route, and hence I typically avoid it.
I hope you understand. But yes, the short one-line answer to this is
PASSION. I am extremely passionate about the field and have a huge bias for action.
@@AsliEngineering u are an inspiration for me :) thanks
Why not just do the acknowledgement after a success response from the notification, so you don't have to worry about writing to the database? If it fails, it can be queued again, since a database is very hard to scale but queueing services like Kafka are highly distributed and scalable.
I think it's for audit purposes, but that definitely doesn't need to be synchronous. Only the failed ones can be synchronous.
But won't the Reads be impacted when we are using Kinesis (asynchronous writes)?
We do not need consistent reads here.
@@AsliEngineering Yeah, realized later... but why were they using synchronous writes in the first place? 😅
@@shantanutripathi no one thinks about optimization on Day 0. It is all about shipping and getting things done
Great explanation
Why write to MySQL? Why couldn't the scheduler read from Kinesis to retry?
Really loved it.
I see some concerns/doubts with this design -
1. SQS queues ensure at-least-once delivery, not exactly-once, right? Hence they must ensure that their notification system handles duplication, else a customer will get a shock if he receives 2 debit notifications for 1 transaction.
2. If their workers are Lambdas and a huge number of Lambdas are triggered by a huge number of messages in SQS, and these Lambdas are doing anything else like calling some service or reading from a DB, I am sure it will throttle that service. How do they handle that, or is it not the case? Because once the Lambdas spin up, there is no way for one to know how many others are actively calling a downstream, so some control is needed on the event-source side.
3. Since this entire process is asynchronous, is their API also asynchronous? If so, just curious how they make their public APIs asynchronous - is it pub/sub based, polling based, or a webhook kind of thing? After what time does the client retry if the process fails, or do they ensure 100% delivery?
4. How is this scheduler designed? Is it a cron job that goes over the DB once an hour to check for failures? If so, it's introducing a lag in retrying. Why can't they use a NoSQL DB like AWS DynamoDB and utilise DynamoDB stream events, which will immediately trigger a Lambda on a failure and send the message for retry? Converting to a trigger-based solution can get rid of the latency.
5. Why a SQL DB for just maintaining event status? Why not a NoSQL DB like DynamoDB? Is the MySQL serverless, or are they handling the maintenance part themselves, which increases the on-call load?
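On concern 1, the standard answer is an idempotent consumer: dedupe on a stable event id before performing the side effect. A sketch (the in-memory set stands in for a TTL'd store such as Redis; the event shape is hypothetical):

```python
# With at-least-once delivery, SQS may redeliver the same message.
# Deduping on a stable event id before acting means a redelivery can
# never produce a second debit notification.
processed_ids = set()  # in production: a shared store with a TTL

def handle(event, send):
    if event["event_id"] in processed_ids:
        return False      # duplicate delivery: drop silently
    send(event)           # the actual side effect (push/SMS/email)
    processed_ids.add(event["event_id"])
    return True

delivered = []
first = handle({"event_id": "evt_1", "text": "INR 500 debited"}, delivered.append)
second = handle({"event_id": "evt_1", "text": "INR 500 debited"}, delivered.append)  # redelivery
```

Marking the id only after the send keeps the system at-least-once: a crash between send and mark re-sends, but never silently drops.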
We can use Bloom filters to avoid re-sending notifications, but this means we can miss out on some genuine notifications; still, it's memory efficient.
Generally, payment systems are asynchronous. Even Razorpay would receive webhooks async from their integrated banks, and then they send webhooks to clients.
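A toy version of that Bloom-filter idea (the size and hash count are illustrative; a false positive here means "probably already sent", so a genuine notification gets skipped, which is exactly the trade-off mentioned above):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: no false negatives, a tunable false-positive
    rate, and fixed memory regardless of how many items are added."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("notif:pay_1")  # record that this notification was sent
```

A membership check like `"notif:pay_1" in bf` is then a cheap "skip if probably sent" guard before dispatching.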
Awesome explanation, thanks!
Recently I connected to a VPN on my mobile. When I randomly opened the ICICI and Hotstar (UAE) applications, they detected it and showed an alert that I am using a VPN. I am still surprised how they get to know that I am using a VPN.
Why can't we just use LSM trees for this write-heavy system here?
I think EventBridge would be a preferred choice instead of SQS, because it can read the request body and send messages to the respective consumers based on the service. Plus you can modify the request body, and failed messages are archived and can be replayed free of cost. Of course you would trade off on throughput, as EventBridge doesn't have the agility of SQS, but you already dampened that in this architecture by using a rate limiter.
USP of this channel is Short, meaningful content - no hour-long videos.
Thanks Vanshika 🙌
I know there are prioritized queues used here; is it that until P0 is consumed, P1 would not be consumed? What if P1 consumption is in progress and the consumer is blocked for some time - how does the system make sure that P0 gets picked up at that point?
They are all consumed in parallel. Just the number of consumers would vary.
@@AsliEngineering but then Kinesis is still a queue, right? How does the system make sure that the important ones go before anything else?
@@raj_kundalia different priorities are different topics.
@@AsliEngineering makes sense, thank you for replying. Big fan and learning every day from you :)
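That "different topics, different consumer counts" answer can be sketched as a simple weighted split. The helper and the weights are hypothetical; the point is that each priority drains in parallel, so a slow P1 consumer can never block P0:

```python
# Each priority is its own topic/stream; "priority" just means P0 gets
# more consumers than P1. Nothing waits behind anything else.
def allocate_consumers(total, weights):
    """Split a consumer fleet across priority topics proportionally to
    weight, guaranteeing every topic at least one consumer."""
    total_weight = sum(weights.values())
    return {
        topic: max(1, total * weight // total_weight)
        for topic, weight in weights.items()
    }

fleet = allocate_consumers(10, {"p0": 6, "p1": 3, "p2": 1})
```

With 10 consumers and 6:3:1 weights, P0 gets 6 consumers, P1 gets 3, P2 gets 1, and all three topics drain concurrently.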
Bhaiya what are workers that you mentioned here?
The worker will pick up (consume) the events, and the executor will write the data/message to the relational DB and obviously push the notification as well.
What about the latency you’re introducing in the system due to kinesis?
Why is that a problem?
Hi, can you please explain how to solve the SES rate-limiting issue at scale? SES has a rate limit for sending emails, like 14 emails/sec, and I have a scenario at a startup where I need to send marketing emails to 50k users exactly on Sunday morning.
Talk to Razorpay. It is artificial rate limiting.
How can I send 50k emails on a Sunday morning using SES (SES has rate limiting)? How can I achieve it?
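At 14 emails/sec, 50k emails drain in roughly 50000/14 ≈ 60 minutes, so one option is simply to start a paced sender early enough on Sunday morning. A sketch (the `send` callback is a stand-in for the actual SES call; a real sender would also request an SES quota increase and handle per-email retries):

```python
import time

# The SES cap mentioned above; pace sends instead of bursting past it.
RATE = 14  # emails per second

def paced_send(emails, send, rate=RATE, sleep=time.sleep):
    """Send emails in rate-sized batches, sleeping one second between
    batches so the provider's per-second cap is never exceeded."""
    sent = 0
    for i, email in enumerate(emails):
        send(email)  # stand-in for the SES send call
        sent += 1
        if (i + 1) % rate == 0:
            sleep(1)  # wait out the rest of this one-second window
    return sent
```

In practice the drain can run from a queue worker, so the Sunday-morning job just enqueues 50k messages and the pacer does the rest.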
I'm assuming there will be data loss if SQS is unavailable. Something like a CDC pipeline may mitigate this issue, even though CDC is typically used only for data integration. Thoughts?
could have used CDC but with extra filters and edge case handling.
Keeping systems simple is important in real world.
@asliengineering - aren't there consumers on Kinesis which are actually writing to the DB? How is Kinesis able to write to the DB directly? Are there going to be any Lambda function triggers?
There are consumers consuming and writing to db.
@@AsliEngineering - Got it. Are the consumers going to be services on physical servers, or are they cron job programs running every night?
Video starts @2:38
What tool do you use for drawing architecture? I need it to present in an interview.
GoodNotes
Excellent Topic!
Slightly off-topic question! Which tool do you use to record/edit videos! Is it Loom?
OBS
Why MySQL?
It's a very common architecture
How can a single consumer consume events from 3 SQS queues?
Multi-threading.
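One way a single consumer process drains several queues is one polling thread per queue, all feeding a shared handler. A sketch with `queue.Queue` standing in for SQS (a real consumer would long-poll with `ReceiveMessage` and delete messages after handling):

```python
import queue
import threading

def drain(q, handle, stop):
    # Keep polling until a stop is requested AND the queue is empty,
    # so in-flight messages are never abandoned on shutdown.
    while not stop.is_set() or not q.empty():
        try:
            handle(q.get(timeout=0.1))
        except queue.Empty:
            pass

# Three queues, each pre-loaded with one message (stand-ins for SQS).
queues = [queue.Queue() for _ in range(3)]
for i, q in enumerate(queues):
    q.put(f"msg-from-queue-{i}")

received = []
lock = threading.Lock()
stop = threading.Event()

def handle(msg):
    with lock:  # the shared handler must be thread-safe
        received.append(msg)

threads = [threading.Thread(target=drain, args=(q, handle, stop)) for q in queues]
for t in threads:
    t.start()
stop.set()
for t in threads:
    t.join()
```

The same shape works with real SQS clients: each thread long-polls its own queue, and the handler does the actual processing.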