Hi Narendra. I listened to your system design videos and practised. Now I got the job I was trying to get. Thank you so much!
I loved this design! Thanks a lot! I'd like to add one point not covered in the capacity estimation: based on the input, users can add up to 10 MB per paste, at 100k pastes per day, which works out to roughly 1000 GB per day.
I think it's worth mentioning that we can apply compression (manually, or let the DB do it) to the text; that way we can save up to ~60% of the initial storage estimate, reducing costs quite a lot.
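A quick back-of-the-envelope sketch of the comment above (the ~60% savings figure is the commenter's assumption; real ratios depend heavily on the content):

```python
import zlib

pastes_per_day = 100_000
max_paste_bytes = 10 * 1024 * 1024  # 10 MB upper bound per paste

raw_daily_gb = pastes_per_day * max_paste_bytes / 1024**3  # worst case ~977 GB/day
compressed_daily_gb = raw_daily_gb * (1 - 0.60)            # assuming ~60% savings

print(f"raw: {raw_daily_gb:.0f} GB/day, compressed: {compressed_daily_gb:.0f} GB/day")

# Sanity check: repetitive text compresses very well with zlib
sample = ("GET /paste/abc123 HTTP/1.1 returned 200 in 12ms\n" * 2000).encode()
ratio = len(zlib.compress(sample)) / len(sample)
print(f"zlib ratio on repetitive text: {ratio:.2%}")
```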
If anyone is wondering how 64 comes up at 20:40: [A-Z, a-z, 0-9] sums to [26+26+10] = 62, and special characters like '+' and '/' bring it up to 64 — the Base64 encoding alphabet.
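A minimal sketch of turning a numeric ID into a short key using that 62-character alphabet (Base64 just adds '+' and '/'):

```python
# The 62 characters from the comment above: 26 upper + 26 lower + 10 digits
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"

def encode_base62(n: int) -> str:
    """Convert a non-negative integer ID into a short base-62 key."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)   # peel off one base-62 digit at a time
        out.append(ALPHABET[r])
    return "".join(reversed(out))

print(encode_base62(123456789))
```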
A couple of things: 1. Using serverless will not give you a predictable SLA. 2. The cleanup service needs to delete the entry from the cache as well; otherwise, an expired paste will still be accessible. 3. Rather than a DKGS, you can use a UUID (it takes double the size, 128 bits), but then you won't need Redis or the DKGS.
If you generate the UUID inside the write-paste lambdas or containers, there is a chance of duplication since there are multiple instances. I don't think that will work well.
@@kartikvaidyanathan1237 A collision is when the same UUID is generated more than one time and is assigned to different objects. Even though it is possible, the 128-bit value is extremely unlikely to be repeated by any other UUID. The possibility is close enough to zero, for all practical purposes, that it is negligible.
@@rabindrapatra7151 yes, but what is stopping multiple lambdas from generating the same UUID? I understand if the ID gen service is centralised and the UUID comes from there, but my understanding is that each lambda generates a UUID internally.
"Cleanup service needs to delete the entry from the cache also, else an expired paste will still be accessible" -> Can't we add entries to the cache with a TTL? That should solve the issue automatically.
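The TTL idea above, sketched as a toy in-memory cache (real stores do this for you — e.g. Redis's `SETEX`/`EXPIRE` or Memcached's per-key expiry):

```python
import time

class TTLCache:
    """Toy cache: entries carry an expiry timestamp and are lazily evicted on read."""
    def __init__(self):
        self._store = {}          # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: behave as if the paste was deleted
            return None
        return value
```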
@@TheCosmique11 According to Wikipedia, the number of random version-4 UUIDs that need to be generated in order to have a 50% probability of at least one collision is 2.71 quintillion.
en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates:~:text=22%5D%5B23%5D-,Collisions,-%5Bedit%5D
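That 2.71 quintillion figure falls out of the birthday-bound approximation, since a version-4 UUID has 122 random bits (6 of the 128 are fixed by the format):

```python
import math

random_bits = 122  # random bits in a version-4 UUID

# Birthday bound: n ~ sqrt(2 * N * ln 2) gives a 50% chance of at least one collision
n_for_50_percent = math.sqrt(2 * (2 ** random_bits) * math.log(2))
print(f"{n_for_50_percent:.2e}")  # on the order of 2.7e18, i.e. ~2.7 quintillion
```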
Hands down the best channel I've found for system design. Kudos bro!
Hi Naren, this is one of the greatest system design overviews for this particular problem. A good approach in both LLD and HLD, well justified, with each trade-off of the implementation explained. Thank you!
very underrated design video that covers most use cases, thx a lot
Outstanding, clear and very deep solution for this problem. Really enjoyed it. Very well presented.
Really cool idea to show the preview of 100KB while fetching data from S3.
Whenever I get a job I'm definetly joining your channel
Very good approach. Thank you. Wish me luck in my systems design interview today!
This guy is truly a genius. Keep doing good work
Hi Narendra - Thanks for making these system design videos. It's useful for all engineers, whether they are interviewing or not. Please make detailed videos on the following topics: a) System design for a heatmap — let's say a heatmap of Uber drivers. b) Zookeeper functionality. c) DevOps best practices, or a DevOps series.
Happy happieee birthday champ 😉
Your dedication, posting a video even on your birthday, is just amazing.
Looking forward to learn more from you 😊
Thanks :)
Hi Narain. Great video. However, there seems to be a mistake at 3:26 in the estimation part. 100k/(24*3600) = ~1.2, not 1.5k writes/sec.
Feels stupid, why did I make that mistake😁
@@TechDummiesNarendraL You are doing thousands of smart things. No prob!!!
Your comment confused me further. You said "not 1.5k writes/sec" but in video he said "150 writes/sec". Did you mean it should be 1.5 writes/sec and not 150 writes/sec?
Also to note, the "per hour" and "per second" figures aren't really used later in the design.
@@IC-kf4mz Yes, it should be roughly 1.2 writes every second.
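For anyone following along, the arithmetic in this thread:

```python
writes_per_day = 100_000
seconds_per_day = 24 * 3600          # 86,400

writes_per_sec = writes_per_day / seconds_per_day
print(f"{writes_per_sec:.2f} writes/sec")  # ~1.16, i.e. roughly 1-2 writes per second
```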
Please make a video on github design! :)
yes make a video on this.
yes Tech Dummies please make a video on this.
I would love to watch this!
Thanks Narendra for the great videos! I have a question regarding the DKGS. Isn't it overhead? I mean, the key generation formula (timestamp + node id + counter) already seems enough to cover the uniqueness requirement. And even if there could be a collision — is calling the DKGS really better than calling the DB directly? Thanks!
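A minimal sketch of that timestamp + node id + counter scheme (field widths borrowed from Twitter Snowflake; this is a toy — a production version also has to handle clock skew and counter overflow):

```python
import threading
import time

class SnowflakeSketch:
    """64-bit IDs laid out as: 41-bit ms timestamp | 10-bit node id | 12-bit counter."""
    def __init__(self, node_id: int):
        assert 0 <= node_id < 1024, "node id must fit in 10 bits"
        self.node_id = node_id
        self.last_ms = -1
        self.counter = 0
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now_ms = int(time.time() * 1000)
            if now_ms == self.last_ms:
                self.counter += 1   # up to 4096 IDs per ms per node (overflow unhandled here)
            else:
                self.last_ms = now_ms
                self.counter = 0
            return (now_ms << 22) | (self.node_id << 12) | self.counter
```

As long as node ids are unique per instance, no two instances can mint the same ID — which is the commenter's point about not needing a separate service.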
Let's see how many legends will watch this incredible video to the end🧡
sir, you are doing a great job, please don't stop making such videos
just want you know your videos are great, appreciate your efforts!
Hey,
thanks for the tutorial. This channel is really awesome.
I have a couple of doubts:
1. When you mention the database comparison (~13:00), why did you choose partitioning in an RDBMS? I think partitioning would be easier in the NoSQL world, as some databases provide a built-in solution for it, and an RDBMS is often easier to scale vertically than horizontally. Isn't it better to pick NoSQL if we want to scale later?
2. And if we use partitioning the way you mentioned — when one DB is full, we move to the next — shouldn't we be using consistent hashing instead? Range-based partitioning has multiple DB nodes running in parallel, each handling different data.
Please let me know your thoughts.
Thanks! Your videos are always top quality
Thank you, Narendra, for your exceptional content! I learned a lot from you.
Please try to include a food delivery system (real-time tracking, separate apps for each type of user (buyer/delivery/restaurant), sync between the apps, sending device info when the app is web-based/SDK-based, etc.) and an e-commerce site (best practices for a Big Billion Day sale with low latency (how to optimize locking), order management services where millions of customers want to view orders in real time, order tracking, cancelling orders, and similar design challenges in such systems).
Yeah, please make a video on Swiggy or Zomato's system design.
lol - in other words "please design my business idea"
Excellent explanation Narendra, Wish you a very happy birthday dear !!!
Thank you so much 🙂
Hi, I think the paste table needs a 'usr_id' field so you can find out who created the paste.
Your videos keep getting better
This is good. Thank you :)
Happy birthday 🎂 dear bhaiya 😘
Wishing you a very happy and prosperous birthday full of love, joy and happiness... 🌷🌺🏵️🌸💐
It's belated, but yeah... 🥰
🌸🌿 Stay blessed, safe and protected always ❣️🌿🌸
Great Job👍
Waiting for a token-based authentication system design.
Your video is great!!
One small doubt though:
how do you explain a 10-character key being used for 100 GB?
Ideally, total storage for 10-character keys would be (62^10) * 10 bytes [assuming a character takes 1 byte], which is a lot more than 100 GB.
If we assume we are using a 5-character key, we get (62^5) * 5 bytes, which is approx 4.3 GB.
Hey Naren, where are you these days? I really miss your system design videos. Please come back.
One small correction: there should be a userId field in the paste table (13:48).
Hi, some really important points for decision-making. Thanks.
Thanks! I think the cleanup service also has to delete the record from Memcache if it's present in the cache.
Thanks Narendra! I learned lots of great ideas from you! One question: there can be a race condition where two write APIs get the same key from the DKGS. How can we prevent such a race condition?
Thanks Narendra for the great videos. I see you corrected the traffic estimation. Shouldn't the storage estimation change too?
Bro, can you make a video on the Zerodha Kite system design?
Thanks for the video. What about adding a CDN between the end user and the read path? It would cache and offload a large portion of the read requests from ever hitting the API; only a small percentage of pages would need to hit the origin. You could introduce a queue for expiring cached pages upon edit or deletion.
Do you still make videos in this channel?
Hi Naren, great content as always. I noticed you haven't posted since last year. Are you going to put out more great videos as you used to, or are you taking a break?
Simply awesome!!😎😎
Very nicely explained thank you.
Hi Naren, your channel is full of resources — thanks for sharing. Could you make a video on designing a dashboard for efficient data-center monitoring using a data model?
Make a video on TradingView's system design...
Hey Narendra, what should the initial approach to any system design be? E.g., should we split the requirements into functional and non-functional?
Also, which videos should we go through first to get from the fundamentals to a full-scale design?
Any recommendations on materials to understand the basic modules of system design?
Hi Narendra, can you please discuss the feature of sharing a paste with other users, and how to handle it at scale?
1. Is it important to discuss the schema in detail in a system design interview?
2. If serverless works here, can we use serverless in the URL shortener too?
3. Why do we use Memcached and not Redis — just cost, or performance as well?
4. Is Zookeeper a DKGS?
Hi, very nice video. Why are you saying that memcached is faster than redis for this usage pattern? Are there some benchmarks or white papers? Or what are you basing this assumption on?
I love your videos. You taught me a lot.
I hope you make a video on e-learning system design.
Naren, you stopped making videos?? Oh, you might be busy at a new job, but... we want you back.
The paste schema also needs a user_id, and the user schema needs a list of all pastes created, to fulfill the functional requirements.
This is super good, bro... Thanks for your great work! Could you do a video about calculating those commonly seen volumes, e.g., how many bytes one video takes, etc.? Thx :D
Can you please make a video about Notion? I'm working on my final year project at university, measuring the impact of various software on the planet - your videos are so helpful! Thank you
The traffic estimates for writes is incorrect. 100K/(24*3600) = 1.1 pastes/sec . Am I missing something here?
Hi, good explanation, but why is there no linking attribute between the User and Paste tables?
Shouldn't there be a CreatedBy field in the Paste table referring to the User table?
100K/24/3600 should be approx. 1.2, not 150
Hi Narain, good video and explanation.
A question on S3 blob retrieval: do you share the blob link with the client so the client downloads the content itself? In that case, how does authentication to S3 happen?
Or does the web server behind the gateway retrieve the content from S3 with authentication and then send the downloaded data to the client?
Awesome video. The distributed key generation service could use a bloom filter instead of Redis.
Why? A bloom filter is not deterministic; it's a probabilistic data structure — it gives true negatives but can give false positives.
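To illustrate the reply above, a toy bloom filter: `add` never causes a false negative, but `might_contain` can return true for a key that was never added:

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: no false negatives, occasional false positives."""
    def __init__(self, size_bits: int = 1 << 16, num_hashes: int = 4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions by salting a hash of the item
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: str) -> bool:
        # True means "maybe present"; False is a guaranteed miss
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))
```

For key generation, a false positive only means a free key gets skipped — which is acceptable when the key space is large, as the thread notes.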
Please, describe socket connection in a distributed system/multi-server system
12:30 — isn't 65 KB the max row size in DBs like MySQL? How are we going to store 100 KB of data in the content field then?
Can't we use the range-based approach with Zookeeper, as you explained in the URL shortener design video, to generate the keys?
For unique keys, can't we use the same Zookeeper process as in the URL shortening design?
I like the video, thanks for sharing. But I think the math is wrong in the capacity estimation: 100K/24/3600 is not 150 writes per second — it's ~1.15. Not sure if anyone else caught that; the video is 4 years old and I didn't scan all the comments.
Hi Narendra, maybe I missed something, but why can't we store everything in S3 and use a geo-specific CDN around it to serve the data very fast? In that case we wouldn't need the actual data in a DB.
good video sir
Hi Narain, if we use Twitter Snowflake to generate unique IDs and derive the hash value from the IDs using base-62 conversion, then there is no chance of collisions. So I think we don't need the KGS: the write service can handle key generation, and there's no need to check the DB for key existence since keys are always unique. What do you think?
Can you please upload the basics of systems design
Why have you used serverless for this problem? It will cost more than using a full server.
Thanks for the explanation!
In the async clean-up step, should we also clear the value in the mem cache? For long texts, we'd end up with a preview (from the cache) but when it tries to fetch the full text from the DB it'll error out?
Mostly we will cache the data with an expiry time, like 2 or 7 days, so it will expire automatically.
I'm wondering why the blob storage read path doesn't have a cache?
I have a suggestion/question: could we not generate a GUID instead of using the DKGS?
For the key generation service, can we use a bloom filter to reduce the Redis memory size?
Yes
@@TechDummiesNarendraL Hmm... a bloom filter may return true when the key isn't actually there, wouldn't it? I suppose since we have quite a few IDs to spare in this case, that could do. :)
How is "the user can decide on the pastebin link" solved? It seems the user could keep trying for a unique ID of length 10 (as constrained) and keep hitting already-used ones, going back and forth until they find an unused key. Does the snowflake generator, or just a normal key generator, help with this?
Why were both redis and memcached used? Wouldn't either solve the job?
@29:00 — if we know the algorithm to generate keys (timestamp + node_id + counter), then why can't the write service generate them on its own? It would take almost negligible time.
Shouldn't the cleanup service also clean up the used keys in Redis?
How much would it cost to add English subtitles to your videos, or even Portuguese ones? It's an awesome subject that should be available for more people to enjoy.
His English is very understandable, and in a real interview they won't show you subtitles.
It's not about real interview.
It's about making his content available to everybody.
@@AModernCTO the video already has CC.
Great explanation!!
Redis vs Memcache, when to use what?
Why are we using SQL for storing data here?
Sir, make a video on the system design of a charting website like TradingView...
Thank you for great content :)
Hi, why no new videos for long time?
Happy Birthday! 🎉🎂
Great video, thanks for your effort. One doubt though: how do we allow a custom string provided by the user? I understand we can store a mapping between the key and the custom string for each user, but how do we do that for an anonymous user? Wouldn't we be overwriting the paste, or exposing the name of the paste, in that case?
Thank you so much for uploading great content. Can you please explain what happens to the local counter when the node recovers from a failure? Will it be reset to its initial value?
Why couldn't Zookeeper be used to generate unique IDs here too, just like in the URL shortener video, instead of the KGS?
I did not understand the 64^10 part.
I understand that we have a 64-bit (8-byte) unique ID that can be used in our URL, right? How are we generating a 10-character URL with 64 possibilities for each character?
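The two 64s are different things: 64 bits is the width of one ID, while 64^10 counts the key *space* — 64 possible characters in each of 10 positions:

```python
alphabet_size = 64   # Base64 characters
key_length = 10

keyspace = alphabet_size ** key_length
print(f"{keyspace:.3e} possible keys")  # 64**10 == 2**60, about 1.15e18 keys
```

Conveniently, 64^10 = 2^60, so a 10-character Base64 key can represent any 60-bit value — which is why a ~64-bit ID maps neatly onto a short key.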
Hey bro,
if you have time, can you make a video on Brave Browser's blockchain-based system design/architecture? Thanks.
Does Memcached support replication? I guess not, and it isn't great for distributed systems. I'd like to know more about why Memcached is a better choice here than Redis.
For keys can't we use UUID V4?
Do we really need to keep track of used token ranges? Threads in the key generation service will always increment and return anyway, so the question of handing out used keys will never arise.
When a key expires, should we clear Memcache as well? Or will it be cleared automatically once the DB record is cleared? Thanks.
Aren't the initial back-of-the-envelope calculations wrong?
Hi, I have a question on key generation: if there's no user-provided/customised key, why not just generate a UUID and store it in the DB? If the key is provided by the user, then we check whether the key exists in the database.
A UUID is 128 bits, double the 64 bits needed for each record. The storage concern probably makes 64-bit the better choice?
But with multiple write-paste services, even a UUID can end up duplicated. A centralized KGS is more efficient.
Put a queue/message broker between the DKGS and consumers
Hi sir, I have one question. I'm new to all this — I just passed 12th — so it might be a dumb question for you, but please reply.
If a pastebin entry expires, how do we delete it from the database? Do we scan the database daily for expired text and remove it, or is there another technique? Which one is best/most optimized?
And if a bin expires after 10 minutes, 1 hour, etc., how do we implement that?
Hi Naren, thank you so much for the amazing work you're doing teaching system design. Over the last few minutes I just kept wondering: shouldn't we also flush the paste from the cache? What happens if my highly popular paste is accessed so often that it never gets evicted from the cache, even though it's long dead in the DB?
Yup, you're right! Also, most caching services have a built-in expiry feature, so those services can work independently.
Why can't we use Zookeeper for key generation just like we used for url shortener?
You can; the reason I didn't use it is to show different ways of doing the same thing.