How Grab configured their data layer to handle multi-million database transactions a day!

  • Published 13 Jan 2025

COMMENTS • 90

  • @BiranchiPadhi-id1le
    @BiranchiPadhi-id1le 1 year ago +3

    Falling in love with your content, Arpit. Breaking complex topics into smaller chunks and explaining them like you're explaining to a beginner is your strength. Keep bringing such content!

  • @nuralikhoja8773
    @nuralikhoja8773 2 years ago +5

    Feeling proud of myself after watching this. I implemented a similar thing a few days back on DynamoDB; I found the DynamoDB docs helpful where they suggested using a global secondary index for filtering.

  • @safwanmansuri3203
    @safwanmansuri3203 2 years ago +3

    Seriously, your videos are so informative for a software engineer; they are really a gold mine for us. Please continue making such amazing videos.

  • @_SoundByte_
    @_SoundByte_ 1 year ago

    Amazing Arpit. Easy and powerful!
    Thank you!

  • @anupamayadav3800
    @anupamayadav3800 1 year ago +1

    Fantastic design by Grab.. really loved it.. and most importantly thank you for presenting it in such a simplified way. Love your content

  • @architshukla8076
    @architshukla8076 2 years ago +1

    Thanks Arpit for explaining such a brilliant architecture!!

  • @karanhotwani5179
    @karanhotwani5179 2 years ago +1

    Liking this video halfway through. Just a brilliant explanation. Thanks

  • @kuldeepsharma7499
    @kuldeepsharma7499 1 year ago

    Simple and great explanation! Thanks

  • @ravi77003
    @ravi77003 7 months ago

    Implementing GSI and updating the data in OLAP was amazing.

  • @ADITYAKUMARking
    @ADITYAKUMARking 2 years ago +1

    Brilliant architecture. Thanks for explaining.

  • @Bluesky-rn1mc
    @Bluesky-rn1mc 2 years ago +1

    Amazing analysis... really enjoyed it 😃

  • @Hercules159
    @Hercules159 2 years ago

    Awesome from Grab. Now I can use the same concept in an interview and say this will work.

  • @rushikesh_chaudhari
    @rushikesh_chaudhari 1 year ago

    Great explanation Arpit💯

  • @loveleshsharma5663
    @loveleshsharma5663 7 months ago

    Awesome explanation of the concept.

  • @NithinKumarKv
    @NithinKumarKv 1 year ago +3

    One question, Arpit: since the GSI is eventually consistent, would we get a consistent view of ongoing orders at any point in time?

  • @aashishgoyal1436
    @aashishgoyal1436 2 years ago +1

    Seems exciting

  • @hardikmenger4275
    @hardikmenger4275 2 years ago +1

    DynamoDB is eventually consistent and not strongly consistent, right? If they needed strong consistency they would want to shift to Postgres or something, right?

  • @sounishnath513
    @sounishnath513 2 years ago +1

    Thank you so much for this Banger on 1st, 2023.❤

  • @unfiltered_with_yogi
    @unfiltered_with_yogi 2 months ago

    Where is the Grab doc??

  • @628sonu
    @628sonu 2 years ago +2

    A small doubt:
    user_id_gsi is stored with user_id when placing an order. What if there are 2 orders from the same user (maybe from different devices, or even the same one)? Won't the GSI have 2 duplicate entries even though the orders are different?

    • @imdsk28
      @imdsk28 2 years ago +4

      We can have multiple orders for the same index value, and it's not a primary key… it's just there to index the data. When you fetch the ongoing orders of that user you need to fetch both orders, and with this index it can fetch those 2 quickly.

    • @syedaqib2912
      @syedaqib2912 2 years ago

      Order IDs are always unique for each user.
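
      (For illustration, a minimal sketch of the kind of GSI lookup discussed in this thread, using boto3; the table name, index name, and attribute names are assumptions, not Grab's actual schema:)

          from boto3.dynamodb.conditions import Key
          import boto3

          orders = boto3.resource("dynamodb").Table("orders")  # hypothetical table name

          def get_ongoing_orders(user_id: str):
              # Query the assumed user_id GSI; several ongoing orders for the same
              # user simply come back as multiple items sharing the same key value.
              resp = orders.query(
                  IndexName="user_id_gsi",  # hypothetical index name from the comment above
                  KeyConditionExpression=Key("user_id").eq(user_id),
              )
              return resp["Items"]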

  • @blameItleaveit
    @blameItleaveit 6 months ago

    No practical application demo?

  • @akashgoswami5046
    @akashgoswami5046 4 months ago

    Great 👍

  • @SwikarP
    @SwikarP 1 year ago

    Great video. What if the DLQ on AWS is down?

  • @architbhatiacodes
    @architbhatiacodes 2 years ago +1

    AFAIK, if we are not able to process an SQS message, it goes to the DLQ. If SQS is down, the DLQ will also be down and we will not be able to publish the message there.
    Great video btw!

    • @architbhatiacodes
      @architbhatiacodes 2 years ago +3

      Read the blog and they have mentioned the same: "When the producer fails, we will store the message in an Amazon Simple Queue Service (SQS) and retry. If the retry also fails, it will be moved to the SQS dead letter queue (DLQ), to be consumed at a later time."
      So I think we will not be able to do anything if both Kafka and SQS are down (it might be a very rare event though).
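
      (A rough sketch of that producer-side fallback, assuming kafka-python and boto3; the topic, queue URL, and payload shape are made up for illustration and this is not the actual Grab implementation:)

          import json
          import boto3
          from kafka import KafkaProducer  # kafka-python

          producer = KafkaProducer(bootstrap_servers="localhost:9092")
          sqs = boto3.client("sqs")
          FALLBACK_QUEUE_URL = "https://sqs.ap-southeast-1.amazonaws.com/000000000000/order-events-fallback"  # hypothetical

          def publish_order_event(event: dict) -> None:
              payload = json.dumps(event).encode()
              try:
                  # Primary path: publish the order event to the Kafka stream.
                  producer.send("order-events", payload).get(timeout=5)
              except Exception:
                  # Producer path failed: park the event in SQS so a retrier can replay it.
                  # If that retry also fails, SQS's redrive policy moves it to the DLQ.
                  sqs.send_message(QueueUrl=FALLBACK_QUEUE_URL, MessageBody=json.dumps(event))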

    • @DesuTechHub
      @DesuTechHub 3 months ago

      @@architbhatiacodes So we reduced the impact of a missing data state but didn't bring it to zero.
      How would you handle messages that further change the state while the previous state is still unapplied because it is sitting in the DLQ?

  • @Arunkumar-eb5ce
    @Arunkumar-eb5ce 1 year ago +1

    Amazing content.
    Quick question: updating the table based on timestamps will not be reliable, right? In a distributed system we cannot rely on system clocks for ordering messages.

    • @dhruvagarwal7854
      @dhruvagarwal7854 1 year ago

      Can you give an example where this scenario could occur? Unable to understand this.

  • @sagarnikam8001
    @sagarnikam8001 2 years ago +3

    One question: if they are using SQS with a DLQ (100% SLA guaranteed by AWS) in data ingestion, what could be the reason for using Kafka in the first place? Why can't they just use SQS (with a DLQ) only?

    • @Bluesky-rn1mc
      @Bluesky-rn1mc 2 years ago

      Could be due to cost. Kafka is open source.

    • @imdsk28
      @imdsk28 2 years ago +1

      I think Kafka maintains order compared to a queue, and only when Kafka is down will we be using SQS… SQS doesn't maintain order; that's why we have two edge cases to handle: upserts and updates with fresher timestamps.

    • @AsliEngineering
      @AsliEngineering  2 years ago +5

      SQS maintains order but Kafka provides higher write and read throughput.

    • @imdsk28
      @imdsk28 2 years ago +1

      @@AsliEngineering thanks for correcting… the architecture explanation is great…

    • @SaketAnandPage
      @SaketAnandPage 8 months ago

      There are standard and FIFO queues in SQS. Standard queues have higher throughput.
      Also, the DLQ is not there as a fallback for the primary queue; it is for the consumer: if a message fails to be consumed successfully X number of times, it is moved to the DLQ.
      I think the explanation of what happens if SQS is down is not correct.
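
      (To make the redrive-policy point concrete, a minimal boto3 sketch with made-up queue names: the DLQ is wired to the source queue via maxReceiveCount, so messages land there only after a consumer has failed to process them that many times, not when the primary queue itself is unavailable:)

          import json
          import boto3

          sqs = boto3.client("sqs")

          # Hypothetical queue names, for illustration only.
          dlq = sqs.create_queue(QueueName="orders-dlq")
          dlq_arn = sqs.get_queue_attributes(
              QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
          )["Attributes"]["QueueArn"]

          # Messages that fail to be consumed 5 times are moved to the DLQ by SQS itself.
          sqs.create_queue(
              QueueName="orders-queue",
              Attributes={
                  "RedrivePolicy": json.dumps(
                      {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
                  )
              },
          )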

  • @amanchoudhury1995
    @amanchoudhury1995 6 months ago

    Where will the query go if the user wants to see their last 10 or n orders? The analytics DB, since they have removed the entries from the GSI on DDB?

  • @GaneshSrivatsavaGottipati
    @GaneshSrivatsavaGottipati 8 months ago

    Nice explanation! But how does the orders service write to the database and to Kafka? Is it async for both, or sync?

  • @astar4025
    @astar4025 4 months ago

    Hi Arpit, just wanted to let you know that I didn't find the Grab blog link in your description. I might have missed it; could you please send the link again if possible?

    • @AsliEngineering
      @AsliEngineering  4 months ago

      You can find the blog right at the bottom of the description and in the i-card at the top right of the video.

  • @mayankhacker1
    @mayankhacker1 4 months ago

    Why do they use both Kafka and SQS if both serve the same purpose? They could use SQS with a dead letter queue only.

    • @AsliEngineering
      @AsliEngineering  4 months ago

      For Grab, SQS is just a fallback option in case Kafka goes down. Given that it charges on demand, you won't pay anything if Kafka never goes down.
      Also, SQS and Kafka don't serve the "exact" same purpose. SQS, or any message queue, offers better elasticity than a message stream like Kafka: you can scale consumers up and down easily, while concurrency in Kafka is limited by the number of partitions.

  • @GauravSharma-pj3dj
    @GauravSharma-pj3dj 2 years ago

    Please make a video on what kind of database to use in what situation. That would be very helpful, thanks!

    • @AsliEngineering
      @AsliEngineering  2 years ago

      I cover that in my course, hence cannot put out a video on it. I hope you understand the conflict, but thanks for suggesting.

  • @ekusimmu
    @ekusimmu 11 months ago

    Why both SQS and Kafka? Why not just SQS with a DLQ for high availability?

    • @AsliEngineering
      @AsliEngineering  11 months ago

      Because the use case was for a message stream and not a message queue, hence Kafka was preferred.
      SQS is just a fallback to ensure no loss of events.

    • @prerakchoksi2379
      @prerakchoksi2379 5 months ago

      @@AsliEngineering I didn't quite get this part. I can see both Kafka and SQS working the same way; why would someone need to use both Kafka and SQS, and why not just SQS (with a DLQ)?

    • @AsliEngineering
      @AsliEngineering  5 months ago

      @@prerakchoksi2379 SQS plus a DLQ would also work, but it would limit you to processing each message/event only once.
      Kafka can have multiple consumer groups, allowing you to do multiple things with the same message/event.
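
      (A small sketch of the consumer-group point using kafka-python; the topic and group names are made up. Because the two consumers belong to different groups, each group receives its own copy of every event:)

          from kafka import KafkaConsumer  # kafka-python

          # Two independent consumer groups reading the same topic: each gets every
          # event, so one can update the analytics store while the other archives
          # events for audit.
          analytics_consumer = KafkaConsumer(
              "order-events", group_id="analytics-writer", bootstrap_servers="localhost:9092"
          )
          audit_consumer = KafkaConsumer(
              "order-events", group_id="audit-archiver", bootstrap_servers="localhost:9092"
          )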

  • @nguyennguyenpham2289
    @nguyennguyenpham2289 1 year ago

    Thanks very much for the video. This is really helpful in understanding how Grab can handle both large and spiky request volumes during rush hour. I just wonder: in our company's case, we also need to use historical data to validate customer promotions based on their order history. For example, one promotion is only applicable to first-time customers (simplest case). In that case, do we need to use the analytical data to calculate this?

  • @prerakchoksi2379
    @prerakchoksi2379 5 months ago

    If AWS guarantees that the DLQ will be available even if SQS isn't, why do we need Kafka? Can't we directly use SQS for the insertions?

    • @AsliEngineering
      @AsliEngineering  5 months ago

      Extensibility of the use case. You can consume Kafka messages and reprocess them to drive something else, or if nothing else, just archive them for audit.

  • @shantanutripathi
    @shantanutripathi 1 year ago

    How would they know, just by looking at the earlier timestamp, that it's not the latest one? (1. A newer update hasn't come yet. 2. Would they query the transactional DB just for that?)

    • @AsliEngineering
      @AsliEngineering  1 year ago

      No. You just discard the updates you receive with an older timestamp. No need to query anything.
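
      (A minimal sketch of that rule, with hypothetical field names: keep only the newest state per order and silently drop anything older, with no extra lookup required:)

          latest: dict[str, dict] = {}  # order_id -> last applied event (in-memory stand-in for the store)

          def apply_event(event: dict) -> bool:
              current = latest.get(event["order_id"])
              if current and current["updated_at"] >= event["updated_at"]:
                  return False  # stale or duplicate update: discard it
              latest[event["order_id"]] = event
              return True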

  • @DesuTechHub
    @DesuTechHub 3 months ago

    My QA team checks the orders page (which uses an eventually consistent analytics query) immediately after placing an order and calls it a bug 😂
    I've observed this with the AWS payments page too. I was confused myself, as the payments page takes 60 seconds to show the payment record.

  • @krgowtham
    @krgowtham 2 years ago +1

    Thanks for the informative video, Arpit. I have a doubt about handling out-of-order messages at the end of your video. While deciding that update #1 should be processed before update #2 using timestamps, how does the consumer know which message is older than the other, since consumer #1 may have update #1 and consumer #2 may have update #2? I would think of versioning the data and making sure the data we update is the next available version of the one present in the analytical database. Is this approach correct?

    • @srinish1993
      @srinish1993 2 years ago

      I got a similar question in a recent interview.
      Question:
      Consider an order management system with multiple instances of the same order service responsible for handling order updates (on a MySQL DB). Now three updates arrive in a sequence at timestamps t1 < t2 < t3, at three different order service instances.
      How do we ensure the updates u1, u2 and u3 are applied in that same sequential order?
      Any thoughts?

    • @javeedbasha6088
      @javeedbasha6088 1 year ago +1

      @@srinish1993 Maybe we can send the timestamp along with the data, so that when the consumer processes it, it can build an SQL query to update the data WHERE updated_at < data.timestamp AND id = data.order_id.

    • @prajwalsingh7712
      @prajwalsingh7712 1 year ago

      @@javeedbasha6088 Yeah, correct. It's usually good practice to send an event_timestamp field in the Kafka messages to decide the order of messages.

    • @javeedbasha6088
      @javeedbasha6088 1 year ago +2

      After some reading, I discovered that updating data based on timestamps is not a reliable method, since there are inconsistencies in system clocks.
      A more effective approach is to use end-to-end partitioning. In this method, all messages related to a specific partition key are written to the same Kafka partition. As long as the producer sends these messages to Kafka in order, Kafka will maintain their order within the partition, although ordering is not maintained across different partitions. That partition is then consumed by only a single consumer instance, ensuring that related events are processed by the same consumer.
      For example, suppose we have two messages: t1 (create profile 'A') and t2 (update profile 'A').
      The same consumer will receive and process t1 and t2 sequentially. This approach ensures that order is maintained in an event-driven architecture, and it can also handle concurrency.
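
      (A short kafka-python sketch of that idea, with made-up names: keying every event by the entity id sends all of its updates to the same partition, where Kafka preserves their order:)

          import json
          from kafka import KafkaProducer  # kafka-python

          producer = KafkaProducer(
              bootstrap_servers="localhost:9092",
              key_serializer=str.encode,
              value_serializer=lambda v: json.dumps(v).encode(),
          )

          # All events with the same key land on the same partition, so a single
          # consumer in the group sees t1 (create) before t2 (update) for profile 'A'.
          producer.send("profile-events", key="A", value={"type": "create", "ts": "t1"})
          producer.send("profile-events", key="A", value={"type": "update", "ts": "t2"})
          producer.flush()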

    • @ankuragarwal9712
      @ankuragarwal9712 1 year ago

      @@javeedbasha6088 What about comparing a version on the order document?
      Because if we keep a version we can avoid the need for Kafka; simple SQS would work.

  • @ankuragarwal9712
    @ankuragarwal9712 1 year ago

    What if we use CDC (Dynamo Streams) to achieve the same?

    • @ankuragarwal9712
      @ankuragarwal9712 1 year ago

      Instead of the order service pushing the data to Kafka?

  • @HellGuyRj
    @HellGuyRj 1 year ago

    Yo. What if instead of timestamp comparison we just used versioning + counters?

    • @AsliEngineering
      @AsliEngineering  1 year ago

      Maintaining versions / vector clocks is a pain and in most cases overkill. Timestamps work well for the majority of workloads.

  • @rahulprasad2318
    @rahulprasad2318 2 years ago

    The analytics DB can be write-only, right?

  • @gauravraj2604
    @gauravraj2604 1 year ago

    Hi Arpit, one quick question.
    So when we do an upsert:
    1. Is the operation type (insert vs. update) always decided based on whether the primary key already exists in the database?
    2. When we are doing an upsert, do we always need to provide all mandatory parameters in the SQL query?

    • @dhruvagarwal7854
      @dhruvagarwal7854 1 year ago

      1. You can define the conflict field/fields in your query or ORM layer. It doesn't need to be the primary key.
      2. While doing an upsert, always provide ALL the parameters, even those that are not mandatory but have a non-null value, because every field will be overwritten.

    • @gauravraj2604
      @gauravraj2604 1 year ago

      @@dhruvagarwal7854 Hi Dhruv,
      Thank you for clarifying.
      So if I understood correctly, the upsert can be decided based on any field, but that field should be unique in nature. Is this correct?
      On the 2nd point I am clear now. It makes sense to provide all parameters, as every parameter is going to be overwritten and we don't want any parameter to be lost.
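
      (A sketch of such an upsert in PostgreSQL syntax, with hypothetical table and column names: the ON CONFLICT target must carry a unique constraint, and every column listed in the SET clause is overwritten, which is why all parameters are passed:)

          UPSERT_SQL = """
          INSERT INTO orders (order_id, status, amount, updated_at)
          VALUES (%s, %s, %s, %s)
          ON CONFLICT (order_id)            -- order_id must be unique (e.g. the primary key)
          DO UPDATE SET status     = EXCLUDED.status,
                        amount     = EXCLUDED.amount,
                        updated_at = EXCLUDED.updated_at
          """

          def upsert_order(cursor, order: dict) -> None:
              # Pass every column, including optional ones, since each is overwritten.
              cursor.execute(UPSERT_SQL, (order["order_id"], order["status"],
                                          order["amount"], order["updated_at"]))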

  • @yuvarajyuvi9691
    @yuvarajyuvi9691 2 years ago +1

    Just a small doubt.
    As transactional queries are handled synchronously, won't there be issues handling a huge number (millions) of synchronous writes to the DB during peak traffic hours? There is a possibility that the DB servers could choke while handling them, right? BTW, I love watching your videos, great content!

    • @imdsk28
      @imdsk28 2 years ago

      I believe this will be handled by DynamoDB itself by creating partitions internally when the keys are hot.

    • @AsliEngineering
      @AsliEngineering  2 years ago +1

      Yes, they would slow down. Hence consumers will slow their consumption.

    • @yuvarajyuvi9691
      @yuvarajyuvi9691 2 years ago

      @@AsliEngineering For transactional queries we will be hitting the DB server directly, right, since they need to be synchronous? You said there won't be any messaging queue involved in such cases. Correct me if I am wrong.

    • @yuvarajyuvi9691
      @yuvarajyuvi9691 2 years ago

      @@imdsk28 Is it true for RDS as well?? I don't think so, but not sure.

    • @imdsk28
      @imdsk28 2 years ago

      @@yuvarajyuvi9691 Not completely sure… need to deep dive.

  • @yashswarnkar1702
    @yashswarnkar1702 2 years ago

    Thanks for the content, Arpit. Have a small doubt: how are we handling the huge spikes? Do DynamoDB's hot-key partitioning + the lean GSI do the job? I am assuming the peak will last for some time, since order delivery isn't done within just a 10-minute window, so at peak time even the index would start piling up.
    Would you say using DynamoDB is the cost-effective solution here? [I am assuming the team wanted a cloud-native solution, and cost effectiveness involved calculating the maintenance cost of an in-house solution.]

    • @AsliEngineering
      @AsliEngineering  2 years ago

      DDB is cost effective as well as great at balancing the load.
      It can easily handle a huge load, given how it limits the queries to a near key-value use case.
      Also, indexing will not be hefty because the index is lean.

  • @strotpaliya
    @strotpaliya 4 months ago

    Hi Arpit, first of all thanks for the awesome video.
    I am wondering how they are able to support strong consistency over the GSI. They classify the "get ongoing orders by passenger id" query pattern as transactional, but they are leveraging the GSI to support it, and the GSI is eventually consistent. When we are talking about a high-TPS system serving millions of transactions per day, the error-rate scenarios become very high in number.

    • @AsliEngineering
      @AsliEngineering  4 months ago

      It was okay for them if this data was eventually consistent. Grab only needed strong consistency (if any) on the main PK, which is orders, and not while getting all orders for a pax. So it was totally fine for them to relax the constraint here.
      Also, practically, the replication lag on a GSI is not too high. Assume a lag of 5 seconds (relatively high; AWS says it would be between 0.5 and 2.5, that too across geographies). Still, after the user has created an order, the user is very likely to remain on the order page to live-track it, or to minimize the app. How many of them would even go to the orders page to see all orders?
      Even if they do, it will take them at least 2 seconds to tap that icon and reach the screen. That's enough of a time gap for the updates to catch up. So even though GSIs are eventually consistent, it is totally okay UX.

    • @strotpaliya
      @strotpaliya 4 months ago

      @@AsliEngineering Yeah, that's what I was thinking too: their use case might not require strong consistency. But their article got me confused where they mentioned otherwise. It makes sense if it is purely driven by the UX perspective.

  • @blunderfoxbeta
    @blunderfoxbeta 1 year ago

    One pain point of DynamoDB is handling pagination during filter and search. It skips records that don't match the query criteria, and we have to run a recursive loop to meet the page limit.
    Ex: you run a query over 1000 records with a page limit of 10, filtering on status for inactive users, and there are only 2 inactive records, the first in the first row and the second at the 1000th row. Now you have to run 100 query iterations in order to get just 2 records.
    As per my understanding this is the biggest disaster in DynamoDB.
    Does anyone have any solutions here?
    Hi Arpit, have you come across this limitation in DynamoDB?
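
    (For reference, a rough boto3 sketch of the loop being described, with made-up table and attribute names: the filter is applied after each page is read, so the scan keeps advancing with LastEvaluatedKey until enough matching items are collected or the table is exhausted:)

        import boto3
        from boto3.dynamodb.conditions import Attr

        table = boto3.resource("dynamodb").Table("users")  # hypothetical table

        def inactive_users(page_size: int = 10):
            items = []
            kwargs = {"FilterExpression": Attr("status").eq("in-active"), "Limit": page_size}
            while len(items) < page_size:
                resp = table.scan(**kwargs)
                items.extend(resp["Items"])
                if "LastEvaluatedKey" not in resp:
                    break  # reached the end of the table without filling the page
                kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]
            return items[:page_size]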

    • @AsliEngineering
      @AsliEngineering  1 year ago

      Yes. DDB is not meant for such queries, hence it should not be used for such cases, unless you can manipulate the indexes (LSI and GSI).

  • @SaketAnandPage
    @SaketAnandPage 8 months ago

    There are standard and FIFO queues in SQS. Standard queues have higher throughput.
    Also, the DLQ is not there as a fallback for the primary queue; it is for the consumer: if a message fails to be consumed successfully X number of times, it is moved to the DLQ.
    I think the explanation of what happens if SQS is down is not correct.

    • @swaroopas5207
      @swaroopas5207 7 months ago

      Yes, you're correct: if the consumer is unable to consume even after retries, the message moves to the DLQ.

  • @abhishekvishwakarma9045
    @abhishekvishwakarma9045 2 years ago

    Generally I spend my weekends learning something new, and your content has helped me a lot. Thanks Arpit sir 🫡 🔥 I was totally amazed by the transactional DB part 😎