29: Amazon Payment Gateway | Systems Design Interview Questions With Ex-Google SWE

  • Published 3 Aug 2024
  • That's right, my payments DB is strongly consistent so I'm never gonna lose track of who bought those foot pics
  • Science & Technology

COMMENTS • 52

  • @yuanyizhang8228
    @yuanyizhang8228 1 month ago +1

    Thanks Jordan for your awesome video! Hope I can see a video on the 'design a meeting scheduler' topic one day~~

  • @tejasvenky5538
    @tejasvenky5538 1 month ago +1

    This is literally what I needed today. Cramming this playlist, hopefully an offer pulls up. I never comment on posts, but you are the goat, broski. If I get this offer I will send you some OnlyFans money.

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago

      Haha please take the only fans money and donate it to charity

  • @nikhilm9494
    @nikhilm9494 1 month ago +1

    One more banger system design video!

  • @JLJConglomeration
    @JLJConglomeration 29 days ago +2

    The cache design that you mention with the doubly linked list and hashmap is basically the implementation of an LRU cache.
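
    For reference, here is a minimal Python sketch of that hashmap + doubly linked list structure (class and method names are illustrative, not from the video): the map gives O(1) lookup and the list gives O(1) eviction of the least recently used entry.

        class Node:
            def __init__(self, key, value):
                self.key, self.value = key, value
                self.prev = self.next = None

        class LRUCache:
            def __init__(self, capacity):
                self.capacity = capacity
                self.map = {}                      # key -> Node
                self.head = Node(None, None)       # sentinel: most recently used end
                self.tail = Node(None, None)       # sentinel: least recently used end
                self.head.next, self.tail.prev = self.tail, self.head

            def _unlink(self, node):
                node.prev.next, node.next.prev = node.next, node.prev

            def _push_front(self, node):
                node.next, node.prev = self.head.next, self.head
                self.head.next.prev = node
                self.head.next = node

            def get(self, key):
                if key not in self.map:
                    return None
                node = self.map[key]
                self._unlink(node)                 # move to front on access
                self._push_front(node)
                return node.value

            def put(self, key, value):
                if key in self.map:
                    self._unlink(self.map[key])
                node = Node(key, value)
                self.map[key] = node
                self._push_front(node)
                if len(self.map) > self.capacity:  # evict from the least recently used end
                    lru = self.tail.prev
                    self._unlink(lru)
                    del self.map[lru.key]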

  • @aforty1
    @aforty1 15 days ago +1

    Hey Jordan! Thanks as always for these awesome videos! I was wondering if you could do a quick video about how to actually structure and talk through these in an interview. Is there a structure to it that you use (or have seen), such as laying out requirements first, then covering some considerations, then diving into the design? Or is it really not one-size-fits-all? Either way, it would be interesting to see what tips you might have around this. Keep it up, you're incredibly helpful!

    • @jordanhasnolife5163
      @jordanhasnolife5163  15 days ago +1

      Here ya go m8
      ua-cam.com/video/IY2EPjShgc4/v-deo.htmlsi=Xw4uwvd4iDBbpp_w
      You can always just ask your interviewer too: "Hey, is it ok if I start with x?"

    • @aforty1
      @aforty1 14 days ago

      @@jordanhasnolife5163 oh shit I missed that, thanks! 🙏🏼

  • @jporritt
    @jporritt 1 month ago +1

    The other video I’d like to see: a distributed system for generating unique IDs, akin to Twitter Snowflake. Also with the functional requirement of allowing people to bring along their own IDs.

    • @jordanhasnolife5163
      @jordanhasnolife5163  29 days ago

      This does feel somewhat similar to what we do in the payment gateway video: shard the key range and allow users to bring their own.

  • @kokoromarudi7717
    @kokoromarudi7717 1 month ago +3

    First! Happy Saturday!

  • @easward
    @easward 25 days ago +2

    Please make a video on designing AWS CloudWatch.

  • @tomtran6936
    @tomtran6936 1 month ago +1

    why are you so smart, my love Jason?

  • @rationallearner
    @rationallearner 1 month ago +1

    Thanks for the video Jordan. How is polling going to work? Is there a cron job or a scheduler running every x minutes in Flink? Moreover, maintaining the doubly linked list in order of creation timestamp would be O(log n), right, since each event reaching Flink could be out of order?

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago

      Yeah, something like that, or you could just say: on a new event, if we haven't polled in x amount of time, poll again (rough sketch below).
      I would think that events reaching Flink should be coming in order on a timestamp per partition, so we could always just do a Flink node per partition to maintain that invariant.
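
      A rough sketch of that "poll again if we haven't polled in a while" idea (the interval, the dict-based pending store, and the poll_stripe callable are assumptions for illustration, not from the video):

          import time

          POLL_INTERVAL_SECONDS = 300      # assumed threshold between polls
          pending = {}                     # idempotency_key -> event, insertion-ordered (oldest first)
          last_poll_time = 0.0

          def on_payment_event(event, poll_stripe):
              """Handle one CDC event reaching the Flink operator (sketch)."""
              global last_poll_time
              pending[event["idempotency_key"]] = event

              now = time.time()
              if now - last_poll_time >= POLL_INTERVAL_SECONDS:
                  last_poll_time = now
                  for key in list(pending):             # oldest pending transactions first
                      if poll_stripe(key) in ("succeeded", "failed"):
                          del pending[key]              # resolved, stop tracking it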

  • @htm332
    @htm332 19 days ago +1

    The assertion behind the need for a derived Pending Transactions cache - that reads will hurt write throughput due to row locking - is not necessarily true if you're using a DB with MVCC (like Spanner or Cockroach). So I question the need for the separate cache.

    • @jordanhasnolife5163
      @jordanhasnolife5163  19 days ago

      The other reason is that you then need to run a query on disk to figure out all of the pending transactions. I'd rather just have them all precached, but agreed that if you're using snapshot isolation, locking is unnecessary for such a read.

    • @htm332
      @htm332 19 days ago

      @@jordanhasnolife5163 You could add a local covering index to speed up the query and ensure consistency, which would slow down writes a bit, but per the original requirements that's not a problem. In any case, pending transactions would be a great topic to deep dive on in a real interview and discuss tradeoffs, so thanks for calling it out explicitly in your video.

  • @thestarbahety
    @thestarbahety 1 month ago +2

    Can you make a video on designing Spotify?
    Functional Requirements:
    - Ability to play any song, with super low latency while playing.
    - Users can create playlists and share playlists.
    - Follow other playlists and artists; get notified of any song updates by an artist or in a playlist.

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago

      At least off the cuff I'll say
      1) CDNs and precaching when going down a playlist
      2) Use a database
      3) This feels like Twitter
      Do you think there are any other unique pieces to it?

    • @thestarbahety
      @thestarbahety 1 month ago

      @@jordanhasnolife5163 How about live streaming & podcasts? I believe that this is similar to Zoom, but I would like to know if there are any optimizations which can be done?

  • @chawlagarima
    @chawlagarima 24 days ago +1

    Thanks a lot for another amazing video... I have a question: how does the payment reach the seller?

    • @jordanhasnolife5163
      @jordanhasnolife5163  23 days ago

      Well, I guess that's a detail for Tipalti, but Amazon probably makes batch payments to them every month via an ACH or wire transfer.

  • @NBetweenStations
    @NBetweenStations 1 month ago +1

    Thanks for the great video! Question about webhooks. So the payment system is listening for webhook callbacks, and the polling mechanism is only triggered when a pending payment hasn’t received a callback in a specified amount of time? Is that the idea?

  • @DivyanshRana265
    @DivyanshRana265 8 days ago +1

    I have a dumb question. Why would row read locks on the pending payments slow down the write throughput of the table, given that the writes' idempotency keys are different from the pending ones?

    • @jordanhasnolife5163
      @jordanhasnolife5163  7 days ago

      Great point, there probably wouldn't be many conflicts IRL, but the reads themselves would be quite expensive and would take resources away from the DB

  • @shuozhang236
    @shuozhang236 1 month ago +1

    Thanks Jordan, I have been watching your system design videos each week. I have two quick questions regarding your design:
    1. Could we use the Snowflake algorithm to generate an id as the idempotence key?
    2. Was the Flink processing part of the payment service code? If so, for the fault tolerance case where the payment service was down, how is that going to affect the Flink processing?
    Thanks

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago

      1) Not familiar with this method, feel free to send me a link to what it is
      2) Not sure what you mean by this question. Flink is just getting data from our payment DB and occasionally polling Stripe to see the status of it; it is independent of any synchronous operation to do with the payment service.

    • @shuozhang236
      @shuozhang236 1 month ago

      @@jordanhasnolife5163 thanks for replying
      1) en.wikipedia.org/wiki/Snowflake_ID
      2) Let me rephrase my question a little bit: is the application code that generates the idempotence key and saves the payment one microservice and the Flink processing another microservice, or are they all clustered as one payment service? If both processes are treated as one service and the payment service goes down, the Flink processing will also halt, right?
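
      For reference, a minimal sketch of the Snowflake ID layout from the link above: a 41-bit millisecond timestamp, a 10-bit machine id, and a 12-bit per-millisecond sequence packed into one 64-bit integer (the class name is illustrative, and the epoch constant is Twitter's commonly cited custom epoch):

          import time

          # Snowflake-style layout: 41-bit ms timestamp | 10-bit machine id | 12-bit sequence
          EPOCH_MS = 1288834974657          # Twitter's commonly cited custom epoch

          class SnowflakeGenerator:
              def __init__(self, machine_id):
                  self.machine_id = machine_id & 0x3FF   # 10 bits
                  self.sequence = 0
                  self.last_ms = -1

              def next_id(self):
                  now_ms = int(time.time() * 1000)
                  if now_ms == self.last_ms:
                      # more than 4096 ids in one ms would really need to wait for the
                      # next millisecond; this sketch just wraps the sequence
                      self.sequence = (self.sequence + 1) & 0xFFF
                  else:
                      self.sequence = 0
                      self.last_ms = now_ms
                  return ((now_ms - EPOCH_MS) << 22) | (self.machine_id << 12) | self.sequence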

  • @siddharthsingh7281
    @siddharthsingh7281 1 month ago +2

    Can you make a separate video elaborating on the Change Data Capture part? Like log-based, trigger-based...

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago

      I don't really know how much there is to elaborate on here beyond what I've discussed in my concepts videos. I'd take a look at something like Debezium.

  • @uday3patel
    @uday3patel 28 days ago +1

    17:18 - Should pending payments that are `not recognized` by Stripe at poll time really just be deleted from the payments table? This case might require special processing, since at this point the payment has a local DB status of `pending` but Stripe has not recognized this payment. What would be a possible solution for this inconsistency?

    • @jordanhasnolife5163
      @jordanhasnolife5163  28 days ago

      I don't really think there is any solution; network requests to Stripe can always fail. Do we want to delete the event? Maybe not, but then we may find ourselves doing a lot of polling after a while.

  • @abhishekmiet
    @abhishekmiet 25 days ago +1

    What happens if the Flink cache fails? I think we will somehow have to redrive the CDC stream to repopulate the new cache instance?

    • @jordanhasnolife5163
      @jordanhasnolife5163  25 days ago +1

      Please see the Flink concepts video. State is periodically checkpointed to S3.

  • @jporritt
    @jporritt 26 days ago +1

    Could you suggest a database that would match the consistency requirements? Or are we rolling our own?

    • @jporritt
      @jporritt 26 days ago +1

      I see Cassandra can be configured into a strong consistency mode?

    • @jordanhasnolife5163
      @jordanhasnolife5163  26 days ago

      I think Cassandra's "strong consistency" is probably quorum consistency. I'd look into Spanner, CockroachDB, or YugabyteDB, as it seems they lean towards using distributed consensus within a replication group.

  • @Kevin-jt4oz
    @Kevin-jt4oz 1 month ago +1

    Can you do a privacy/visibility controls system design?

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago

      Perhaps, how do you see this one being a challenge after we put everything in a strongly consistent table?

  • @jporritt
    @jporritt 1 month ago +1

    In a future video, could you do an RSS newsfeed aggregator? Maybe throw keyword search in there.

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago

      Oh man I'll have to look into this one, you may be aging yourself by asking for an RSS feed and I may be aging myself by saying I've never used one lol

  • @huguesbouvier3821
    @huguesbouvier3821 1 month ago +1

    Thanks for the video! Why not just use ZooKeeper to give us a monotonically increasing u64 for the idempotency key? This way we are guaranteed not to have any conflicts; also, a u64 should be enough until the end of time.

    • @lalasmith2137
      @lalasmith2137 1 month ago +1

      Hey friend, can you please explain what a u64 is? Is it like a UUID?

    • @huguesbouvier3821
      @huguesbouvier3821 1 month ago +2

      @@lalasmith2137 Haha sorry, an unsigned 64-bit integer

    • @lalasmith2137
      @lalasmith2137 1 month ago +1

      @@huguesbouvier3821 thank you for clarifying that, helped me understand your answer :)

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago +2

      1) Using a monotonically increasing sequence number implies that all writes must go through the same choke point (meaning you can't shard ZooKeeper, which is potentially fine if we really don't care about performance). A sketch of the sharded key-range alternative is below.
      2) We basically do this anyway, as our payments DB is itself using a consensus algorithm, making it effectively the same as ZooKeeper.
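
      A rough sketch of that sharded key-range alternative: each node reserves a block of ids with one coordinated write and then hands ids out locally (the block size and the reserve_block callable are assumptions for illustration). The resulting ids are unique but only ordered within a node, which is usually fine for idempotency keys.

          BLOCK_SIZE = 10_000

          class BlockAllocator:
              def __init__(self, reserve_block):
                  # reserve_block(n) performs one strongly consistent increment
                  # (e.g. in the payments DB or ZooKeeper) and returns the start
                  # of an unused block of n ids
                  self.reserve_block = reserve_block
                  self.next_id = 0
                  self.block_end = 0

              def next(self):
                  if self.next_id >= self.block_end:          # current block exhausted
                      start = self.reserve_block(BLOCK_SIZE)  # one coordinated call per block
                      self.next_id, self.block_end = start, start + BLOCK_SIZE
                  value = self.next_id
                  self.next_id += 1
                  return value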