*Timestamps*
0:00 Intro (Kevin Modzelewski from Dropbox Server Team)
1:28 Agenda
2:10 1. *What* *is* *this* *talk?*
3:22 1.1 Why is this interesting? (summary: how to do it with little resources)
4:11 2. *Background* *(What* *is* *Dropbox)*
5:59 2.1 Challenge 1: Write volume (nearly equal to read volume, orders of magnitude above industry average)
7:25 2.2 Challenge 2: ACID (en.wikipedia.org/wiki/ACID)
10:11 3. *Examples* *(how* *have* *we* *evolved?)*
10:30 3.1 Example 1: High-level architecture
30:08 3.2 Example 1 questions
44:30 3.3 Example 2: Database for metadata
52:42 3.4 Example 2 questions
56:55 4. *Wrap* *up*
1:00:45 5. Final questions
More info in replies
*Example* *1* *Questions* *pt.* *2*
11. "Costs of running on Amazon compared to DIY?"
(ans: [DIY] costs more)
12. "How many operations people do you have on your side?"
(ans: 7 including network guy)
13. "is your customer base world wide? [because Amazon Cloud is located in Virginia]"
(ans: everything file related is in Virginia, everything metadata related is in San Jose, majority of customer is international, 65%)
14. "How many cloud based data do you store per user in S3?"
(ans: Amazon takes care of replication, we just upload it once)
15. "S3 went down recently, did you get hammered?"
(ans: Amazon is pretty competent over there, but it is interesting to see things on our side)
16. "Do you know how much S3 is used up by you?"
(ans: some guy in audience knew, but camera is on)
17. "Evolution of instrumentation?"
(ans: at the beginning it's easy; right now the server is pretty regular and you build up a good intuition of what's going wrong. We went for a long time without data visualization, but we have all that now, which is better.)
18. "What metrics do you watch?"
(ans: we watch all the servers' load, requests/sec, the breakdown of time spent within a single request, bandwidth as measured by users, etc.)
19. "What do you do for security?"
(ans: I can't talk too much about specific things that have happened, but we take security very seriously)
*Example* *2* *Summary:*
how it started out:
- metadata is stored as a log of all the edits (server file journal)
- fields: *id* | *filename* | *casepath* | *latest* | *ns_id* (namespace id)
- primary key: id (meaning things are appended in id order)
changes:
- getting rid of "casepath" (probably has to do with case sensitivity things, but that has moved elsewhere)
- to get file edit history, needed to add "prev_rev" (previous revision)
- Primary key changed to ( *ns_id* | *latest* | *id* )
- changed varchar(260) to varchar(255) (255 is optimal because only 1 byte is needed to represent the string length)
- removed 'latest' from the compound primary key (optimizes writes, but reads become more expensive); see the schema sketch below
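A rough sketch of the schema evolution summarized above, assuming MySQL-style column types. The column names come from the notes in this comment; the exact types and the sqlite3 harness are illustrative assumptions, not Dropbox's actual DDL:

```python
# Hypothetical sketch of the server file journal evolution (illustration only).
import sqlite3

conn = sqlite3.connect(":memory:")

# Original shape: an append-only log keyed by a monotonically increasing id.
conn.execute("""
    CREATE TABLE file_journal_v1 (
        id       INTEGER PRIMARY KEY,   -- rows appended in id order
        filename VARCHAR(260),
        casepath VARCHAR(260),
        latest   INTEGER,               -- marks the newest revision of a path
        ns_id    INTEGER                -- namespace id
    )
""")

# Later shape: casepath dropped, prev_rev added for edit history, filename
# shortened to 255 so the length fits in one byte, and the key widened so a
# namespace's rows cluster together. 'latest' was part of the key for a while,
# then removed again to make writes cheaper at the cost of some reads.
conn.execute("""
    CREATE TABLE file_journal_v2 (
        ns_id    INTEGER,
        id       INTEGER,
        filename VARCHAR(255),
        latest   INTEGER,
        prev_rev INTEGER,               -- previous revision, enables history walks
        PRIMARY KEY (ns_id, id)
    )
""")
print("schemas created")
```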
*Example* *2* *Questions:*
1. "When you really want to delete something, do you delete old data or does [the log] just grow?"
(ans: in normal cases it just grows, I personally don't know [any cases where this is not true])
2. "Did you have to change the size of id at some point?"
(ans: ids are per namespace, we haven't had an issue; they went from unique to not unique at one point, [when the primary key became a compound key])
3. "How did you measure whether these changes make a difference?"
(ans: it's extremely hard to test, because it's hard to generate realistic workload)
4. "You do A/B testing for new builds?"
(ans: yeah, we are also increasing our ability to do operational changes incrementally)
*Final* *Questions:*
1. "What are the next big challenges [as a company]?"
(ans: we always want to get bigger and appeal to more people)
2. "Are you discouraging people to use this as a backup service?"
(ans: I don't know how many people do but I'm happy that people found productive ways to use Dropbox)
3. "How are you [solving the Mega Upload problem]?" (en.wikipedia.org/wiki/Megaupload#2012_indictments_by_the_United_States)
(ans: we do explicitly prohibit this stuff, and we do take it very seriously)
4. "People paying for this?"
(ans by guy in audience: you can get a small account for free, but if you want to REALLY use it you have to pay)
5. "I think [Dropbox] has 2 great advantages, one is you don't get privacy issues because you're not selling things to advertisers, the other is the spammers aren't going to pay for it when they can find a free one somewhere else"
(ans: [paid nature] of our service doesn't stop them from trying, but we do try to detect abuse, etc)
6. "[difference in user experience in different locations]"
(ans: we don't have a whole lot of metrics divided by geography, but client behavior is fairly tolerant of latency; the requirements on back-end architecture will be less latency-tolerant over time.)
7. "What are the main competitors, how do you think about them?"
(ans: Box.net... as a whole we are just trying to build the best service we can, and not get distracted)
This is indeed a legendary talk; hopefully my notes will help people digest it.
This is a legendary talk.
Good insight into the large-scale storage underneath the Dropbox architecture. Thanks!!!
Great talk about architecture and the challenges. Learnt a lot from this lecture.
Eager to know what they continued to talk about after the camera was turned off!
definition of a pragmatic programmer.
Loved this video. Great speaker and a fantastic insight into a fairly simple approach to distributed systems and its evolution within Dropbox across the years. There was just one small question that concerned me. We are told Dropbox splits a file into blocks of 4 MB in size, which are hashed, and if the hash already exists in storage they avoid storing a second copy and instead create a mapping to the block already in storage. This is a fairly standard approach to de-duplication. My concern is that, at the scale of files Dropbox is handling, the possibility for several of these chunks to collide increases. So I am secretly hoping that, in addition to checking that the hash of the block matches an existing one, the actual contents are compared byte by byte.
I think Dropbox doesn't have to worry about collisions for two reasons:
1. They probably have metadata about the owner and filename of these files associated with the hashes. In this way, in order for two 4mb chunk hashes to collide, it would have to be under the same owner, or even within the same file, which would be highly infeasible with a solid hashing algorithm.
2. With a sufficient hashing algorithm, it's still pretty infeasible that two 4mb chunks anywhere within Dropbox collide. The infeasibility of this possibility makes it far outweigh checking every file byte by byte, as those comparisons would be prohibitively slow.
sha 256 is practically collision resistant
@@Lifelightning I think the exact reverse of your point number 1.
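For anyone curious, here is a minimal sketch of the block-level dedup being discussed in this thread: fixed 4 MB chunks, hashed, with only unseen hashes stored, plus the optional byte-by-byte verification the top comment is hoping for. The function names and the in-memory store are made up for illustration; this is not Dropbox's actual code.

```python
# Illustrative hash-based block dedup (an assumption-laden sketch, not Dropbox's real code).
import hashlib
from typing import Dict, List

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks, as described in the talk

def store_file(data: bytes, store: Dict[str, bytes], verify: bool = True) -> List[str]:
    """Return the list of block hashes that reconstruct `data`."""
    blocklist = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest in store:
            # Paranoid byte-by-byte check guards against a hash collision;
            # with SHA-256 this should effectively never trigger.
            if verify and store[digest] != block:
                raise RuntimeError("hash collision detected")
        else:
            store[digest] = block  # first time this content is seen: store it
        blocklist.append(digest)
    return blocklist

# Two files sharing a 4 MB block store that block only once.
store: Dict[str, bytes] = {}
store_file(b"x" * BLOCK_SIZE + b"tail-a", store)
store_file(b"x" * BLOCK_SIZE + b"tail-b", store)
print(len(store))  # 3 stored blocks, not 4: the shared block is deduplicated
```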
Really enjoyed the guessing part ! Gold talk!
It's worth noting that Dropbox is now also using AWS for storing their metadata, or a big part of it, using DynamoDB and AWS S3. And if I am not mistaken, they are no longer using S3 for file storage. So it's the other way around now.
Wait, where are they storing the files? In DynamoDB?
@@grandhirahul No, in their own data centers; only metadata on AWS. At least that's my latest info.
Amazing talk 🔥
gavin belson @30:34
Really nice presentation.
AND HERE I AM WATCHING IT IN 2021
2012 but it looks far older! Can't Stanford afford an HD camera?
Looks like my daddy's home video.
I think they did it on purpose. But the content quality is what matters.
@@chang8106 Why would you deliberately make your video look bad? lol
loved the technical insight
"if you don't use dropbox, welcome to silicon valley...you will soon"
This.. this is the guy I'm scared of
Nice talk !
It feels like when HN people sits inside a classroom and bombard you with question from all around and still you need to stand through answering as much as you can.
Informative talk.. enjoyed the guessing game!
This was awesome. Thank you dropbox.
the prof. got some swag as seen at 1:03 :p
Is there a doc version of this video, or similar, e.g. slides, webpages?
Awesome!
Kinda confused about why block servers would make an RPC to the LB. Any idea? What's that call gonna do? 22:40
From what I understood, it helps keep the database query logic in one place (the metaserver).
Good talk. Nothing extraordinary, but it teaches you that great things do not have to be complex. Pretty neat and simple architecture.
If the block server, which is in North Virginia, calls the load balancer, wouldn't that cause latency? Because it is similar to calling the database, which is in Texas, as described in the lecture.
varun mankal Yes, but latency is not a problem for Dropbox; it's asynchronous.
True, but isn't latency what he mentions as the reason they moved the block server off direct MySQL calls? Can you clarify this please?
Agreed. I don't know how latency was avoided just by putting an LB in front.
No. Load balancer latency is negligible. It's not more than what an extra switch or router along the network path would cause.
Very good presentation
@44, why does the block server talk to the metadata server?
great video!!!!
How does the deduplication get done assuming each client's data is encrypted under its own key?
In that case, I think deduplication will find that there is no other copy of the client's data in server storage, so it will store the client's encrypted copy as well.
The underlying data is not relevant, because a set of bits encrypted with a key will always give the same result, and if you change the key it will give a different result which will be equally valid. So you are deduplicating the encrypted data, not the uploaded file.
That one notification server can handle 1M connections is impressive. But that one load balancer cannot handle multiple requests does not sound good.
Also, not sure what the namespace (ns_id) is used for?
I think it's more that Python's GIL allows only a single thread to run at a given time, so no matter how many requests you send to the server via the load balancer, Python can execute only one at a time. He also said that if you send more, performance drops to about 60%, a 40% drop, so they throttle the LB to send one request at a time so that Python can run at its optimal performance.
TLDR; it's Python's GIL that's the culprit here, not the LB.
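A quick sketch of the GIL behaviour described above (standard CPython behaviour, nothing specific to Dropbox's setup): CPU-bound threads do not run in parallel, so the threaded run is roughly as slow as the sequential one.

```python
# Demonstrates that CPU-bound Python threads don't run in parallel under the GIL.
import threading
import time

def spin(n: int = 10_000_000) -> None:
    # Pure-Python busy loop; holds the GIL while it runs.
    while n:
        n -= 1

start = time.time()
spin(); spin()
sequential = time.time() - start

start = time.time()
t1 = threading.Thread(target=spin)
t2 = threading.Thread(target=spin)
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.time() - start

# On CPython the threaded run is not ~2x faster; it's about the same (often a
# bit slower) than running sequentially, because only one thread executes
# Python bytecode at a time.
print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```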
Has much changed in the 10 years since this? Kubernetes obviously has entered the scene
Where does the notserver get its data from? How do you maintain the storage for the notserver?
Any idea how Dropbox stores the blocklist in the SFJ, given that MySQL doesn't support a list data type?
What does each server type do? Notserver, Metaserver, Blockserver?
The notification server pings the clients every time there is a change, the metaserver keeps track of metadata in the database, and the blockserver handles upload and download of the data.
I am interested in system design, but I am wondering if it is worth watching in 2021?
Am I the only one who thinks this guy is talking in the same style as Elon Musk?
I wish industry people like this were allowed to give lectures in India too.
Indian engineering schools (except IIT) have the worst professors
Good talk, the content is a little old.
Who downvotes these videos?
Scale out with time..
lol why does this video look like it was shot in the 60s
You can trade money for time.
Video looks too old.