I feel enlightened watching your videos.
Thanks!! Glad to hear that :)
true.. this channel is a treasure chest 👍👍
I watched 4 different Tiny URL System Design Video. This one is by far the best
I read about many different ways this can be designed, but this design looks the best of all as it is highly scalable, very efficient, and something that can easily work on a global scale. Thanks Sandeep for so much hard work at your end to come up with such wonderful designs. I am watching your entire playlist.
Hey, do you want to study system design together? I have a lot of experience, but I feel most of the online resources are incorrect or incomplete. We can build a solid understanding of the common designs in the next 10-15 days by brainstorming everything together.
@@aman1893_ I would be happy to learn from you😀
A diagram with Kafka for Analytics would be a cherry on the cake. Overall great job!
Very informative, and the way you show the simple design first and then discard it, explaining the problems and how we could tackle them, is really great.
There is another approach which pre-calculates the short URLs and uses them when requested. This way, there will be no range losses when servers go down. Your way is also very good, thank you!
Yeah, I think if the Token Service layer is removed and the URL shortener service simply does what the token service is doing, that will be your case, and it seems fine to me.
One of the best videos I found on YouTube for URL shortening.
"in technical terms it's called a collision, but for us it's a problem" made me laugh. thanks for the great content
Just a thought - instead of using the token service, we can generate unique tokens per service within the service itself.
Steps could be as follows (see the sketch after this list):
1. When a service node starts, it registers itself with the DB and gets itself an ID, and its sequence will start with 1
2. Now, a particular node can generate a token based on its ID + today's date + sequence
3. When a particular node goes down, a new node will spin up and perform step 1
4. This will avoid the complexity of calling the token service entirely
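A minimal sketch of this scheme, purely illustrative - the registration step is assumed to have happened already, and the digit widths are arbitrary choices, not from the video:

```python
import threading
from datetime import datetime, timezone

class NodeTokenGenerator:
    """Per-node tokens built from node ID + today's date + a local sequence.

    Assumes step 1 (registering with a DB/ZooKeeper to obtain a fresh
    node_id) has already happened; a replacement node must register for a
    new ID (step 3), otherwise same-day sequences would repeat."""

    def __init__(self, node_id: int):
        self.node_id = node_id          # unique ID obtained at startup (step 1)
        self.sequence = 0               # per-node sequence, starts fresh each boot
        self._lock = threading.Lock()   # increments must be atomic across threads

    def next_token(self) -> int:
        with self._lock:
            self.sequence += 1
            seq = self.sequence
        today = datetime.now(timezone.utc).strftime("%Y%m%d")
        # Fixed-width packing of ID + date + sequence (step 2); the widths
        # (3/8/6 digits) are arbitrary choices for this sketch.
        return int(f"{self.node_id:03d}{today}{seq:06d}")

gen = NodeTokenGenerator(node_id=42)
print(gen.next_token())  # e.g. 4220250101000001 for node 42 on 2025-01-01
```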
You are talking about Zookeeper here.
@@alpitanand20 use anything (db/zk) that is a singleton service.
MD5 Hash of the user IP Address + Time Stamp encoded to base 62 would also be valid. Both these approaches save us from the complexity of managing another set of services and their connections to a DB. We can more easily scale horizontally.
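A rough sketch of this hashing idea - note the usual catch: truncating the hash means collisions are possible and would still need to be checked for:

```python
import hashlib
import time

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base62(n: int) -> str:
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out)) or "0"

def short_code(ip: str, length: int = 7) -> str:
    # MD5 of IP + current timestamp, interpreted as a 128-bit integer,
    # base62-encoded, then truncated to the short-URL length.
    digest = hashlib.md5(f"{ip}:{time.time_ns()}".encode()).hexdigest()
    return to_base62(int(digest, 16))[:length]

print(short_code("203.0.113.7"))  # 7-char code; truncation means collisions are possible
```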
I had this video in my watch recommendations but kept skipping it, thinking: shortening a URL, how hard could it be? But going through this video I started to realize nothing is easy at scale. Your way of explaining is so awesome that I understood both the depth of the problem and its solutions at the same time. Please keep it up.
It would be great if you built up slowly to the tech/tool picked for the design rather than directly putting in Cassandra or Redis or Kafka. There is a possibility that the interviewer is really good at those tools and will start digging deep into them as soon as we name a stack, which can surely get us into trouble sometimes.
Which makes sense
As an alternative, you can read about the tool discussed in the video after watching it. That keeps the videos shorter and packed with more relevant content.
Isn't the token service a single point of failure? Even if we use multiple token services, how will we synchronise all of them? Please answer 🙏
@@isachinq i think the token service is being load balanced to avoid a single point of failure. Having multiple token services will not impact the design as it is only used to get the next range from the same MySQL cluster.
@@ANILKHANDEI how will you synchronise all the MySQL nodes in the MySQL cluster? By definition, horizontal scaling will bring down the consistency.
Such calm and composed thoughts/teaching on each and every topic.
Hats off bhai... one of the best tech YouTubers!!!
Finally someone who has a good solution. Thnx.
best video out of all available resources for a url shortening service
Even after 1 year this material is gold !
Bhai thanks so much. I got your course on Udemy as well. There is no better channel than this on System Design.
wow.. you are a world class system architect
This is the first video I'm watching on your channel and I just loved the explanation. I'm sure it's gonna help me with my interview. The very first thing I did after watching the video was subscribe. Thanks a lot for the detailed information.
Great video, as always very helpful. If you could 1) add custom key support - user can specify their own tiny url 2) talk more about what other ways there could be, like md5(main url) -> base62_encode, and their drawbacks etc 3) add a diagram on the analytics part, that would really be helpful. Do you also feel a cache can be added in front of Cassandra to serve hot urls?
Thanks, keep helping.
Amazing video with possibly the best explanation so far on this use case. I had watched several videos on TinyURL but none could explain in the first 5 mins so lucidly the need for 7 characters based shortened URL. Good job
Isn't the token service a single point of failure? Even if we use multiple token services, how will we synchronise all of them? Please answer 🙏
@@isachinq Sandeep already answered your question in the video, and it's a common approach to scaling MySQL - the token service (by default) will not be overloaded with a massive amount of requests, but if somehow it is, then the solution would be to utilise MySQL horizontal scaling / sharding between multiple servers / instances.
@@mirrorps but horizontal scaling won't ensure consistency between different MySQL nodes, so it may assign the same range of values.
@@isachinq there are some hashing algos to distribute the data between the nodes, so the ranges may be based on similar hashing algorithms
great energy, honest intention, a beautiful human being. thank you
You're highly articulate, I love that.
Great video. Thanks for putting all the effort and explaining different choices and corresponding trade offs. 👍
Thanks!
One of the best I have seen so far on this topic. Keep making videos on system design. I just subscribed and am tuned in for every upcoming video now. 👍🎉💐
Excellent explanation !!! Even though there are a bunch of system design videos out there, your videos stand apart from them by discussing various situations and pitfalls of using a certain tool/ database.
Just one quick suggestion from my side regarding upcoming videos - can you please create a video that explains capacity estimation of a database? For example, how much space will a users table with, let's say, 6 attributes and almost 100 million records take in Postgres or MongoDB or any other database? This is also commonly asked in interviews nowadays and, given your breadth of experience, I think you would be able to create awesome content in this space as well.
Once again Thanks for videos :)
Thanks Chaitanya!
We'll put this in some smaller video that comes out in the future.
Just that it's a time-taking thing to go over the calculations, so we skipped it in all the videos till now :)
Love it!! Thanks for sharing multiple options of implementing a solution. Keep posting more videos
Such an amazing effort, man! Thank you so so much. 🙏❤
Awesome explanation! Please keep posting more videos.
One issue is that sequentially generated shortcodes could be a security threat, as they would be predictable. We should either append a random number at the start or the end before doing the base62 conversion.
Thanks, amazing explanation of TinyURL system design !
Keep posting..love your vids...very simple and understandable content...
Glad you like them :)
Really cool content. Analytics/observability is generally missed; thanks for talking about it. Providing an HTML page link in the description for the content would definitely help for revision.
Great Job Sandeep!!! I have seen all your system design videos. Waiting to see a video on cloud system design.
Excellent presentation skills!! Thank you
Awesome video! Lots of insights. As a piece of feedback, I would change microphones. Thank you.
You explained it so well. Nice video.
Thanks a lot 😊
@codeKarle how do you handle the case when the token service goes down? Do we wait for data to persist in the SQL DB before the token service returns the ranges to the short URL service? What if the SQL DB goes down? If we keep a replica of the DB, do we want the data to be synced before we return a range to the user?
Very good content, thanks for the video.
What tables/rows do you keep in Cassandra, and what information do they contain?
All the videos you have made are awesome and really easy to understand. It would help if you could make more videos on the technologies used in system design - a deeper dive into and comparison of the different tech would be helpful.
Are you going to maintain the mappings from counter range to its availability in an extra datastore, or a table so that your token service could handle them?
Can we use instance ID and UTC time to generate tokens instead of having a separate token generation service and maintaining token ranges per instance?
Let's say one URL U1 is requested by two different users for the first time, and both requests R1 and R2 arrive at the same time but are sent to different nodes. Since all the server nodes are given different ranges of tokens, the same URL will use two tokens. Can that also decrease the number of tokens we have?
Custom shortlink support is also something most users want; plus it's not good to have incremental shortlinks... better to have shortlinks of a randomized nature. Share your thoughts.
Custom Shortlink is a good feature, that can be implemented easily I believe.
You are right about the incremental shortlinks, that's a trade-off you need to choose.
Random is good, but you'll end up with high collisions and higher latencies. If the NFRs are okay with somewhat higher latency / lower throughput, then a randomized solution would be a much better choice since it's hard to predict the next short URL.
@@codeKarle another possibility is to add a salt to the incremented number used by the shortener service; that will maintain uniqueness while still being hard to guess.
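One concrete way to get an unpredictable-but-still-unique code out of an incrementing counter is a bijective scramble, e.g. multiplying by a constant coprime to the keyspace size. A sketch of that idea (the multiplier below is an arbitrary example):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
KEYSPACE = 62 ** 7            # all 7-char base62 codes
MULTIPLIER = 1_580_030_173    # odd and not divisible by 31, so coprime with 62**7

def scramble(counter: int) -> int:
    # Multiplication by a unit modulo KEYSPACE is a bijection on
    # [0, KEYSPACE): distinct counters still yield distinct codes,
    # but consecutive counters no longer yield adjacent codes.
    return (counter * MULTIPLIER) % KEYSPACE

def to_base62_fixed(n: int, width: int = 7) -> str:
    out = []
    for _ in range(width):
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

print(to_base62_fixed(scramble(1)))  # hard to guess
print(to_base62_fixed(scramble(2)))  # not adjacent to the previous code
```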
How do you scale the DB for token service across multiple DCs and regions without repeating token ranges? Do the token ranges need to be seeded manually per DC/region to prevent the same range from being reused in more than one DC/region? Does this solution constrain us to using a DB technology like Spanner so there is one consistent data set that spans geographies and is replicated in near real time?
I think there is still a possibility of duplicates because we are using a substring of length 6 or 7 of the base62 encoding of the numbers, which can collide. for example base62(0000001) is 107Zzj5ex0 and base62(0000002) is 107Zzj5ex0 as you can see the prefixes are the same.
Hey, how is base62(0000001) = 107Zzj5ex0?
@@sagardafle Doesn't matter. The point is that there is always a possibility of collision if we take a substring. I can't find an explanation anywhere of how to resolve it.
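For what it's worth: a full (untruncated) base62 encoding of distinct integers can never collide, since it's just a change of base, like decimal to hex; collisions only appear once you truncate to a substring. A minimal encoder to sanity-check examples like the ones above:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base62(n: int) -> str:
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out)) or "0"

print(to_base62(1))        # '1' -- small tokens give short codes; nothing to truncate
print(to_base62(2))        # '2'
print(to_base62(62 ** 6))  # '1000000' -- the first token with a 7-char encoding
```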
How is Redis a single point of failure? Most cloud providers support HA for Redis clusters.
Amazing explanation! Can you please provide pdf or image format of the architecture like you provided for other videos? It really helps to see everything all together in 1 place to digest everything. Thank you.
Sure, we'll get it done in a few days :)
Was looking for the same. Thank you
There you go: www.codekarle.com/system-design/TinyUrl-system-design.html
You'll find the architecture & summary here :)
Thanks, I have seen many solutions talking about event-based decoupled systems. However, I have never encountered a robust way of making sure consistency and integrity are not affected by failures during the async processing of those events. What are the various techniques to ensure decoupled systems suffer no loss?
Does this approach assume that we do not care about idempotency? In this model, if the long-to-short URL service receives multiple requests for the same long URL, it will assign different short URLs to those requests.
Crystal clear explanation.......
A small doubt here: let's say we pick n as 6 for the short URL chars and we use base 64 instead of 62. If you start the range from 1000 to 9999, the base64 encoding will contain 6 chars, but as soon as you move to 10000 the chars become 7. Doesn't it diverge from the initial design to keep the short URL at 6 chars only? Also, we are only using 9000 URLs in this range. If we follow this route, we might have to go to a very high range to convert to base64 and create a short URL (which will not be short anymore).
6^64 can contain 6.3340287e+49 unique numbers.
Thanks, very insightful
After encoding the Base10 token to Base62/Base64 for readability, doesn't taking out the first few characters (e.g. 7 chars) and using them as the short URLs increase the collision probability?
Basically, from all the token ranges provided to us by the token service, we get unique tokens. Cool. Then we convert those Base10 tokens to Base64. Cool.
Isn't it possible for the first 7 characters of two completely different Base64-encoded tokens to be the same, thus resulting in a collision?
How do you handle this? A similar situation would also arise even if you take the "Encoding URLs" approach.
I would rather choose the last 6-7 chars. What do you think? Less probability of a collision.
@@Crosion1546 it could still have collisions, and we can't rule that out at the scale of billions of numbers.
I am having trouble understanding this: let's say there are two datacenters, and each datacenter has its own token service and DB. How do you make sure that the token services in two different data centers don't end up assigning the same range to the ShortToLongUrl service? I am assuming the DB contains the range, the token service simply gets the range from the DB, and it's an atomic transaction. But how do you manage ranges in the DB across data centers? Would you have another service for doing that?
The main idea of two DCs for the token service is redundancy. Say you have the Token Service in DC1 and DC2; the master of the DB can be in DC1 with a slave in DC2. If the master goes down, or DC1 is not available, then the slave in DC2 can become the master; for all other transactions, the service in DC2 makes a cross-DC call to the database in DC1, thus making sure the range is always unique. Latency is not a concern here because it's a once-in-a-few-hours kind of API call to assign tokens.
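A sketch of how the token service might claim a range atomically against that single master (the `token_ranges` table and its column are made up for illustration; assumes the `mysql-connector-python` driver, since the video uses MySQL here):

```python
import mysql.connector  # assumption: the token DB is MySQL, per the video

RANGE_SIZE = 100_000

def claim_range(conn) -> tuple[int, int]:
    """Atomically claim the next token range from the single master DB."""
    cur = conn.cursor()
    conn.start_transaction()
    # The row lock from FOR UPDATE means two concurrent token-service
    # calls -- even from different DCs -- can never read the same value.
    cur.execute("SELECT next_start FROM token_ranges FOR UPDATE")
    (start,) = cur.fetchone()
    cur.execute("UPDATE token_ranges SET next_start = %s",
                (start + RANGE_SIZE,))
    conn.commit()
    return start, start + RANGE_SIZE - 1
```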
please create more system design videos, love your work
How can we make sure that we don't "shorten" URLs which have already been "shortened"? Probably with some cache... otherwise we would need to query Cassandra to find out, but that is not a cheap operation, is it?
If I run multiple instances of the token service, could they also send the same range to multiple services...?
Nice explanation!!
One question - whenever a user asks for a short URL, do we check in the DB whether there is an existing short URL for the same long URL? If so, will that not again slow down the application?
What if the token service goes down? I was unable to see your view on this, or on why it won't go down. Thanks in advance for letting me know.
Nice explanation!! But what if, after a server has been assigned a range of numbers, let's say 1-1000, the server handles parallel requests (which it should) - can it assign the same short URL to 2 different long URLs? In that case, should the server consume the range atomically?
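On the atomicity question: yes, within one server the counter over the assigned range has to be advanced atomically, otherwise two parallel requests could read the same number. A minimal sketch:

```python
import threading

class RangeCounter:
    """Hands out each number in an assigned range exactly once, even
    when many request-handling threads call next_token() in parallel."""

    def __init__(self, start: int, end: int):
        self._next = start
        self._end = end
        self._lock = threading.Lock()

    def next_token(self) -> int:
        with self._lock:  # the lock is what prevents duplicate short URLs
            if self._next > self._end:
                raise RuntimeError("range exhausted; ask the token service for a new one")
            token = self._next
            self._next += 1
            return token

counter = RangeCounter(1, 1000)
print(counter.next_token())  # 1 -- no two threads can ever receive the same value
```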
can't we use Redis for the last part you discussed at 23:00
That was great fun, bhai. Thank you so much. ❤
good explanation.
I have a doubt here.. You said that if we have multiple Redis instances then it will be tricky, but then you added more Redis, so I kind of got lost there, or maybe I didn't get what you said... Can you please elaborate?
Shouldn't we be using a cache for the redirects, since it is pretty static data?
Very informative videos
Hi, I have a question regarding collisions - in this case, is ending up with two identical short URLs avoided because it uses tokens, right?
I have heard in many videos that checking in the DB whether the URL exists is not efficient. I do not understand that - you are designing a system with a 1:200 write-to-read ratio; how much of an overhead is it to check the database whether the URL exists?
One thing I did not understand... how are we using the counter/token? Are we appending it to the long URL and passing that to the base62 function to generate a new short URL?
Why are two different Cassandra DBs used - one for long-to-short URL requests and the other for short-to-long? How will the second DB get that information, via replication?
How can you request a number from a Redis cluster? Isn't that just an in-memory database? Would you need to program some kind of logic into the Redis cluster?
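On this question: nothing needs to be programmed into Redis itself; its built-in INCR command is already an atomic counter. A sketch with the `redis-py` client (the host and key name are placeholders):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def next_token() -> int:
    # INCR executes atomically inside the Redis server, so concurrent
    # callers on different app servers always receive distinct numbers.
    return r.incr("url_shortener:counter")
```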
Thank you for the great content !
It looks like the Token Service is a single point of failure as well. And if we create multiple instances of the Token Service, how do they ensure that each instance provides a unique URL range and no 2 instances provide overlapping ranges? If the Token Services are supposed to communicate with each other before deciding the range for an incoming request, this would again add overhead and slow down the process. Can someone please share their thoughts on this?
I don't understand - why not Redis?
Following your concern about a single point of failure: Redis is designed for horizontal scalability and can be deployed in a clustered configuration for high availability and fault tolerance.
Can you explain further?
For base62 hashing, is it ensured that for different hash keys we will get different values?
1) why can’t the long url be hashed to create a tiny url
2) can the instance handle the volume of writes ?
Very informative. Thank you!
Great Video, would appreciate if you can do
1. Elevator System
2. Discount System at SuperMart
What if the same URL is asked to be shortened again? This design will keep creating a new short URL every time. Not sure if that's good or bad. I see TinyURL uses the same short URL, so it's definitely not creating a new one.
If the token value is 100M, base62 gives only a 5-char output. I think that in order to generate a 7-character base62 string, the token values should start from 62^6. How do we keep the token ranges between 62^6 and 62^7?
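Quick arithmetic behind this observation (not from the video):

```python
print(62 ** 4)  # 14776336      -- 100M (1e8) is above this...
print(62 ** 5)  # 916132832     -- ...but below this, so base62(1e8) has 5 chars
print(62 ** 6)  # 56800235584   -- smallest token whose base62 form has 7 chars
print(62 ** 7)  # 3521614606208 -- first 8-char token; [62**6, 62**7) holds
                #                  roughly 3.46 trillion exactly-7-char codes
```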
Great work. Can you also upload a video for a Dropbox/Google Drive-like service? In the case of Dropbox, most videos drop the ball at: there will be a Notification service, it will communicate with clients asynchronously using queues, and every client will have its own queue. They don't talk about what happens if there are billions of clients - do we expect to have a billion queues? Is that scalable? Do focus on this as well if you make the video.
Sure, we'll try to make that in a few weeks
The token service can also be a single point of failure. If we use multiple instances, then we will again have the same issue of duplication.
1. Why are we using Cassandra DB when we already know that we get a lot of queries? Why not prefer Mongo over Cassandra?
It's also possible to use distributed RNG (another topic for SD interview) or just hash long URLs with extra steps for hash collisions.
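A sketch of the "hash with extra steps for collisions" idea: rehash with an incrementing salt until the truncated code is unused (the `db` dict here stands in for whatever store holds the mappings):

```python
import hashlib

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base62(n: int) -> str:
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out)) or "0"

def shorten(long_url: str, db: dict) -> str:
    salt = 0
    while True:
        digest = hashlib.md5(f"{long_url}:{salt}".encode()).hexdigest()
        code = to_base62(int(digest, 16))[:7]
        # In a real system this check-and-insert must itself be atomic
        # (e.g. a conditional write); a plain dict stands in for the DB.
        if code not in db or db[code] == long_url:
            db[code] = long_url
            return code
        salt += 1  # truncated-code collision with a different URL: rehash

store: dict[str, str] = {}
print(shorten("https://example.com/some/very/long/path", store))
```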
Great video! I have just one question. For the short url service to be able to keep track of the current range, does it need to be stateful? And how / where would you store that info?
This is amazing! But quick question, isn't the token service also a single point of failure?
Great job on your side. A big thank you from my end. Can you please answer the query on handling duplicate requests? The same URL requested 3 times generates 3 tinyURLs - how do we handle that in this design?
Ok, I just verified in Bitly and found that they generate a different short URL every time the same long URL is passed. So this is apparently not a concern. Thanks anyway.
What if the token service gives out the same range? How does the token service work in a distributed environment without generating the same range twice?
Token service is a single service sitting on a transactional DB, so on multiple calls it would not repeat the same sequence. Just that the service is hosted in multiple data centers to make it redundant/fault tolerant/disaster recovery, but the Database would have one master that would power all the queries to fetch the next sequence.
Nice 👍
How will you build an optimized short-to-long URL lookup at such a scale?
But at the 16th minute - the token service becomes a single point of failure.. even if you say you fetch from the DB, that also becomes a single point of failure, doesn't it?
The token service is also a single point of failure. Can we scale that as well? If yes, then how can we manage the tokens?
Better than paid courses
Can't we simply hash the 61-character set with a new GUID? A GUID will always be unique. I think then we don't need Redis or any distributed cache service to generate a unique number. Just a thought..
You could have given some more thought to the short URL to long URL flow. Fetching data from Cassandra for each and every request could be very time consuming, and latency needs may not be met. Maybe we can use caching in that flow to reduce the fetch latency.
Excellent. Thank you!
I love your videos buddy...
Isn't the token service a single point of failure?