"We can't delete user data, we aren't gitlab"
This video is a goldmine
gitlab*
I literally choked on my breakfast.
sounds like I missed something, can I have some keywords to look up?
Haha i saw that golden statement
And yet they did actually delete user data, by unilaterally deciding to roll back the east coast servers and reintroduce the deleted changes manually later.
The right course of action was to simply let the replication system merge the vast majority of projects that had no conflicting data at all, then reach out to the very few project admins that needed manual reconciliation.
Instead of debating and letting the situation fester for hours, this would have been resolved in a couple of minutes for almost all clients, with no impact other than the downtime. And still less than a full day for the few that needed extra work.
Deciding to alter user data without proper informed consent is a huge ethical no-no.
These problems always occur during routine maintenance. That's why I don't do any maintenance whatsoever and my systems have never experienced downtime (although I've never checked)
can't have a problem if you don't see a problem
This is the way
Even Chernobyl was routine maintenance.
That leaves your system full of security exploits, since security issues never get patched either. You'll also face a huge problem if you're ever forced to update from versions that are far too old.
Out of sight out of mind
It's bold to assume that
a) 50% of Github users are active on any given day
b) Their time is worth an average of $50/hr
c) Not syncing with remote for one day would affect the average user
That's what i was thinking lol
yeah, one of the great things about git is that it is trivial to set up a new remote, and no problem at all to code for weeks without an internet connection. I'd say GitHub could be up only ~20% of the time without that having a strong (financial) impact on most of the projects hosted there. Would piss off lots of devs, tho.
I'm no IT guy, but 40 mins of lost data is a better sacrifice than hours of degraded service. Couldn't they just freeze the west DB, see what was different, transfer it, and boom, everything solved?
I push maybe 3 times a week... but I'm basically using GitHub as a backup for some personal projects. So long as my computer survives I can handle not pushing for a few days
don’t you just hate when your andromeda integration service fails causing all writes made after the American civil war to be lost
As a former bitbucket employee I can confirm we have disaster recovery plans for a lunar data center outage
Now what?
Last I checked it was a disaster plan, there was no recovery...
I'd assume you would use IPFS.
@@DaveParr Those have a lot of latency tho, don't they?
As a time traveller from the future I can confirm the recovery plans are insufficient and the situation becomes irrecoverable
Interplanetary failovers are a struggle, not gonna lie.
IPFS is (was?) a project designed with interplanetary, high-latency connections in mind, using Merkle DAG data structures for, well, unstructured object data.
It got adopted by the crypto crowd because memes and idk where it's going
@@__dm__ I work with IT solutions and I swear I've seen IPFS support in the industry before, just can't remember where
@@philip3963 *Cloudflare* says they have support for it.
PHP Devs: YOU THINK SOO??????
bruh they had the option to roll back 40 mins of writes on the promoted db and sync both DBs. They pretty much fucked themselves in the ass tbh
The assumption that 50% of total github users are active is too optimistic
Yea, I'm guessing 2% max
It's good to grossly overestimate potential issues
As someone who hasn't pushed in weeks, that hurts, but is too true.
@@Backtrack3332 That's still a lot, though!
Yeah, those assumptions seem very off to me. I feel like less than 50% of GitHub users are active daily, between abandoned accounts and people who rarely use it. On top of that, a significant percentage of users will be students or personal projects that don't really have a monetary impact. Also, most users likely didn't lose anywhere near 2 hours, especially because the website wasn't fully down for anywhere close to those 24 hours. I'm sure it didn't work great during that time, but it was usable. If it happened to me, I would likely test for 5 minutes, check with colleagues and just work locally, testing every hour or so. Some people may have been affected more, but 2 hours of lost productivity seems way too high to me. With that in mind, the estimate would likely be a few orders of magnitude lower.
Honestly I'm impressed that Bitbucket was able to lower the Earth-Mars latency down to 60 milliseconds.
they must've found a cheap way to build those einstein rosen bridges ey?
@@Fenhum something akin to hyperpulse relays from BattleTech
@@mikicerise6250 Ansible is instantaneous, no matter the distance. It even allows you to communicate both upstream and downstream of your current dimensional position.
Faster than light bitbucket
Those wormhole generators give you cancer you know
This is one of those things that in hindsight, it is so easy to see how they set themselves up for failure. But I bet you a lot of brilliant people looked at this and still did not see the issue until it (inevitably) blew up. It do be like that sometimes...
I know at least one org that can't have that kind of failure. Their standard operating procedure is to actually force the primary switch on a regular basis. Every 2 or 3 months they power off all primary servers and check that all secondaries have promoted and are now fully operating as primaries with no data loss. Then they restart the old primaries, which become the new secondaries. It covers all possible kinds of primary failure. This is also used for the upgrade procedure. Whenever you need to upgrade a server, you upgrade the secondary first, do some offline tests, then promote it to primary, and keep the old primary/new secondary ready with the old version for a few days in case a rollback is needed. And finally update it.
The first time I saw that choice of having the failover procedure be an integral part of normal operations, I thought it was genius. When you have an incident, you don't need to panic and look up exceptional procedures you are not familiar with. You just change the schedule of the regular routine. And if needed, you can do forensics on the system you just took offline while users keep working, unaffected by the incident.
@@christianbarnay2499 Good idea.
Of course, it is also expensive AF. Robustness always costs short term efficiency.
it really do be
@@christianbarnay2499 everyone can have this kind of failure, it's just a matter of how extreme things get. It isn't in normal situations that you get pressured as an engineer, it's when shit is on fire and suddenly all your plans, which relied on something you assumed would keep working because of its robustness, force your hand to pull a rabbit out of your arse.
@@christianbarnay2499 But don't you run into the very same issue that GitHub had? It is great that promoting the secondaries to become primaries works nicely, but what about synchronising the data? GitHub did the same thing that you are describing, which is, in case of a primary failure, quickly promote somebody else to be the primary. But the real issue was the fact that, because of the disconnection, DB A had received data that was not synced with DB B when the connection was lost.
Your scheme promotes DB B to become primary when DB A fails, but how do you synchronise the data that DB A has been receiving later after DB B has been updated and the 'timelines' are out of sync?
Is it just me, or does it seem like you would have the very same problem that GitHub had? Their issue was not simply promotion to a new primary, but everything else
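For what it's worth, the reason a scheduled switch avoids GitHub's problem is ordering: writes are frozen on the old primary and replication is allowed to drain before anything is promoted, so there is never a window where two primaries accept data. A minimal sketch of such a drill, assuming a MySQL-style pair and the mysql-connector-python package; the host names, credentials and the Seconds_Behind_Master check are illustrative placeholders, not anything from the video or GitHub's tooling:

import time
import mysql.connector  # pip install mysql-connector-python

OLD_PRIMARY = {"host": "db-east.example.internal", "user": "admin", "password": "REPLACE_ME"}
NEW_PRIMARY = {"host": "db-west.example.internal", "user": "admin", "password": "REPLACE_ME"}

def planned_switchover(timeout_s=300):
    old = mysql.connector.connect(**OLD_PRIMARY)
    new = mysql.connector.connect(**NEW_PRIMARY)
    try:
        # 1. Freeze writes on the current primary so no divergence can appear.
        old.cursor().execute("SET GLOBAL read_only = ON")
        # 2. Wait until the replica has applied everything it was sent.
        #    (Real tooling would compare GTID positions instead of this counter.)
        cur = new.cursor(dictionary=True)
        deadline = time.time() + timeout_s
        while True:
            cur.execute("SHOW SLAVE STATUS")
            status = cur.fetchone()
            if status and status["Seconds_Behind_Master"] == 0:
                break
            if time.time() > deadline:
                raise TimeoutError("replica never caught up, abort the drill")
            time.sleep(1)
        # 3. Only now open the new primary for writes and repoint applications.
        new.cursor().execute("SET GLOBAL read_only = OFF")
    finally:
        old.close()
        new.close()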
What a goldmine of a channel. I'm here with you all, witnessing the birth of a great channel
11:55 i'd say getting 60ms of latency over a 10 light-minute distance is still pretty good
I worked at a website that handles millions of write transactions per day across like 7 global data centers. We were starting to think of a way to drop into a “read only” mode in the event something like this happened. Then we wouldn’t need to paw through the mess of uncommitted transactions…
that actually sounds good
@@yuhyi0122 sure it's good... if this is the rare website where it even makes sense to be read-only
when you say millions of transactions per day, is there something difficult about these? I mean, even if you do 100 million per day, that's on the order of 1k transactions per second, that's reasonable, yes?
@@GeorgeTsiros the difficult part, if you watched the video, is reconciling conflicting changes
Considering the scope of the GitHub disaster, it seems to me that recovery within 30 hours is very impressive. I've had to engineer recoveries from much smaller disasters and every one of them took me at least 48 hours, if I remember correctly.
yee, i think this was very well handled
dude almost sounds like fireship
When the east coast database recovered and started accepting writes again from applications, they dodged the very common bullet of those apps pushing work at the database as fast as they can and overwhelming it, causing a second wave of outage. In this case, it looks like the controls over the work rate (whether implicit in the nature and scale of the apps, or an explicit mechanism) were sufficient to prevent that.
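For anyone curious, the "explicit mechanism" in question is usually just a rate limiter or backoff in the write path, so recovering apps drain their backlog at a bounded pace instead of stampeding the freshly restored database. A toy token-bucket sketch, assuming a hypothetical write_to_database call that is not from GitHub's stack:

import time

class TokenBucket:
    """Client-side rate limiter so recovering apps drain their backlog
    at a bounded pace instead of stampeding the freshly restored database."""
    def __init__(self, rate_per_s, burst):
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def acquire(self):
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

bucket = TokenBucket(rate_per_s=50, burst=10)
# for job in backlog:
#     bucket.acquire()
#     write_to_database(job)   # hypothetical call, paced at ~50 writes/second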
One of the greatest "history" channels on YouTube, love the content.
Absolutely
Internet Historian: 👀
The ending was hilarious. Great video overall.
imagine being github and being unable to... MERGE two databases
It's GitHub
Didn't they delete their whole code like twice?
@@littleloner1159 might be thinking of gitlab
Yeah, but you'd expect them to learn at some point. They have their whole library of users that could help too....
@@casev799 typical YT reply, everything is easy in their eyes yet they accomplish nothing
git push --force -----FORCE ----------FOOOOOOOOORCEEEEEPLEEEEEAAAAASSSSEEEEEE
I love how in the last 30 sec, Kevin was not only able to explain how an interplanetary network would work but also how a random command would blow everything up in exactly 30 sec 😆
Well, it could have been worse. The automated lunar relay launch could have been misconfigured such that it did not alert US STRATCOM, and therefore appeared to be a ballistic missile launch against a domestic target, which immediately would lead to global thermonuclear war due to improper database failover configuration.
I swear to god if all of humanity gets wiped out over a stupid accident and not because of a grand painstaking political catastrophe I'ma be real disappointed in hell.
@@MrLastlived That was close to happening multiple times over the course of history. It's a miracle we haven't already done that.
The explosion at the end threw me into tears lol
I love these videos, I work in IT but for a much smaller national company, really interesting to learn some lessons from, plus the editing and storytelling makes it very entertaining.
This video is full of explosions and memes but in a tempered manner and it hits all the nerves in my brain. I need more videos like this.
Thank you! This was perfect. I love this. And the amount of explosions is tasteful and not overdone
This was definitely not a failure. I've seen other videos where "they did everything wrong that they could". In this case, given the circumstances, they did exactly what they had to do. Except for those few discussing prioritizing uptime over data consistency, which is a no-no. It's good that the right engineers prevailed. A laggy service is just so much better than a collapse or a massive inconsistency nightmare that will plague customers all over for weeks. I get that they're paid for uptime and fluidity of the service, but in a case that is equivalent to a survival situation, you have to prioritize. Worrying about a "laggy service" on the east coast is then equivalent to complaining about the lack of ice cream in an apocalypse scenario.
In fact, I see this as a huge win! How many times have quick fixes, made without much thought and aimed at treating the superficial symptoms (which are merely an extension of the real underlying problem) as fast as possible, led to a full-scale disaster? For once, there were finally people thinking critically before doing something! Treating the core of the problem.
The worry here was that they had to spend the time coming up with the plan to respond.
Whilst I realise you can't plan for every contingency, cross-hub failure like this should have already been considered and planned for. From the video this doesn't appear to have been the case.
Guess they were lucky the initial fault didn't last more than 43 seconds.
Nobody was arguing for inconsistency. The argument was getting back up fast vs losing 40 minutes of changes
@xpusostomos losing 40 minutes of changes is i think the inconsistency in question
@@leaffinite2001 that's not a data inconsistency
@@xpusostomos why dont you define the term then, get us on even ground
That interplanetary loop was good
These graphics make me laugh. 1, 2, 4, 5, red among us guy, purple among us guy, pizza, 8 ball 😂. Also the Ace Attorney part was great.
This content is incredible! Really has me thinking about some of my architecture and how to think about planning infrastructure going forward, keep up the awesome work!
This is why you always should practice regional failovers of your cloud architecture and make doing so mandatory company events (or even random events).
My company practices that once a year I believe. I had a senior colleague take part in it.
Bro backing up data to Mars sounds so unbelievably awesome and impractical at the same time, I love it
I felt their pain.
What a fantastic job on the recovery and post mortem.
these videos are so underrated
the (visual) humor keeps getting better and better
I love these incident analysis videos. Please keep making more!
The editing is on point. Very nice video.
I love these videos, but as a DevOps Engineer I get anxious if I watch too many in a short period of time :)
Github is designed at its core to allow for loss of connectivity anywhere in the network. In this event they completely failed at handling the exact type of issue their system was designed to overcome seamlessly.
As mentioned in the video this should have resulted in a 43s downtime for the vast majority of clients. And only a handful of clients having to reconcile data by hand between the west and east coast centers.
The major problem is they clearly never tested the primary database loss scenario. They would have identified that they needed to replicate not only the database but the entire infrastructure to the west coast so it could still work during an east coast downtime. Or deactivate cross-country failover.
The second problem is they one-sidedly decided they had to reconcile all user data by themselves. Client data belongs to clients. You should never alter client data without full information and consent. Deciding to manually rollback and backup east coast commits was altering client data and a big no-no.
The right course of action should be:
1. Inform clients that there is a potential discrepancy between servers and you are building a list of affected projects,
2. Let the system reconcile projects that have no issue at all (no commits during the downtime or only west coast commits that can be pushed to the east with a fast-forward) and inform those clients that everything is fine for them and the system is back to normal operations,
3. Tell clients that need manual reconciliation that you propose the following plan: keep the branch with the most recent commit as is, and rename the conflicting branch as _ so they will have both accessible in the same repo and can reconcile their data as it suits them. And ask them to reply with their approval of the plan or a proposal for an alternate plan before some reasonable deadline. And give them contact info if they need help and/or advice.
That way instead of going all in manipulating all clients data, they would only need a small taskforce ready to help those that actually need it.
*Git* is designed to allow for loss of connectivity. Git*hub* was designed by the kind of crazies who jump on open source bandwagons.
One question about your point 3. How to reconcile two divergent branches?
@@samuellourenco1050 There are tons of ways to do it.
Simplest is git merge with manual resolution of conflicts.
Most tedious is creating a new branch at the diverging point and cherry-picking from each side, then destroying both incomplete branches and renaming the new branch to the original name.
The right strategy is up to each client depending on the state of their data and their own standards for repo cleanliness.
Some will want to remove all traces of the incident. Others will consider it's part of the project life and should stay visible in the history.
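Roughly what the cherry-pick route could look like, sketched in Python with git driven through subprocess; the branch names main and main-west are invented for illustration, and any conflicting pick simply aborts the script for manual resolution:

import subprocess

def git(*args, repo="."):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=repo, check=True,
                          capture_output=True, text=True).stdout.strip()

# Rebuild a clean branch from the point where the two histories diverged,
# replaying each side's commits on top of it.
base = git("merge-base", "main", "main-west")
git("checkout", "-b", "main-reconciled", base)
for side in ("main", "main-west"):
    for commit in git("rev-list", "--reverse", f"{base}..{side}").splitlines():
        git("cherry-pick", commit)   # a conflict raises here; resolve by hand, then re-run
# Drop the two incomplete branches and give the rebuilt one the original name.
git("branch", "-D", "main", "main-west")
git("branch", "-m", "main-reconciled", "main")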
Where did you get the notion that they altered client data? My understanding from watching this video is that they rolled back to a consistent state, then restored the two lots of data that ended up split over the two data centres. The result being all data restored.
I'm not certain what users with data spread across both the east coast and west coast servers experienced. But your post reads to me as "I watched a 12-minute summary and now I think I know better than the staff that worked with the product every day".
@@JohnSmith-fz1ih In a history tool like Git, client data is not limited to the content of the latest commit. Client data is the entire tree with all branches, commit dates, comments and commit order.
Dealing with conflicting data is an important decision. And the way you want the data to appear and be accessible after the resolution is a decision for the project owner. Each project owner will have a different approach to how they want to deal with such a situation. And Git allows for all those approaches. The GitHub team making a single universal decision for all projects bars project owners from making their own decision on the matter.
What I say doesn't come from just watching a 12-minute video. It comes from using Git on a daily basis, including a few occasions on which I migrated entire projects from old tech repos like CVS or SVN to Git.
And on some of those occasions I had to retrieve commits that were split over several repos and reconcile them using dates and comments. With the help of some low-level Git commands I could easily automate that process.
That's why I am fully confident that Git has all the tools needed to allow the GitHub team to automatically rename conflicting branches, regroup everything in the master repo, replicate to all mirrors, and then let project owners do the merge the way they want, instead of forcing a single decision on everyone.
The main benefits of Git over all other versioning systems are its high resilience to conflicts and the possibility for project admins to do absolutely everything with their repo on any PC and push the result to the central repo. This incident was the perfect occasion to highlight those features and display complete transparency by rapidly giving control of the two branches of their repos to project owners.
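To make the "automatically rename conflicting branches" idea concrete, here is a rough sketch under some assumptions: the rolled-back east-coast snapshot is reachable as a remote called east-backup, and the -east-20181021 suffix is invented; none of this is what GitHub actually ran:

import subprocess

def git(*args, repo="."):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=repo, check=True,
                          capture_output=True, text=True).stdout.strip()

# Park every branch from the rolled-back east-coast snapshot under a renamed
# ref, so nothing gets overwritten and each project owner can merge (or
# ignore) the diverged history themselves.
git("fetch", "east-backup")
refs = git("for-each-ref", "--format=%(refname:short)", "refs/remotes/east-backup")
for ref in refs.splitlines():
    branch = ref.split("/", 1)[1]               # "east-backup/main" -> "main"
    if branch == "HEAD":
        continue
    git("branch", f"{branch}-east-20181021", ref)
git("push", "origin", "--all")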
This video is waaaay longer than 43 seconds
I like how you turned this technical issue into an enjoyable story. Great storytelling skill.
I love your videos so much. They're so informative, interesting, well-made and even funny. Keep it up!
"For instance, how am I gonna stop some big mean Mother-Hubber from tearin' me a structurally superfluous data center?"
always nice to see a new video from you
Love the Ace Attourney bit, keep up the good work ❤
"We're not GitLab" had me in stitches
Lessons learned: design your topology so that any single node could fail, and have an (automated) plan for what to do when it does fail.
Second: why did they have a single-primary + multiple-replicas architecture? That seems like an obvious single point of failure.
If I designed this, any node could accept new data (act as primary), and only after the data had provably been replicated to a few (not all) random other nodes would it tell the user: yes, I have received your data. Then it would continue replicating to the remaining replicas in the background. This strategy seems a bit slower, but it seems to scale better and is more resistant to failures. But I know nothing about designing services to run across multiple data centers.
I still have the idea in my head I heard in the 90's: the Internet is a mesh/web of many computers and many interconnected networks. If one fails, others can take its role. But this is not the case. A single BGP or DNS misconfiguration can bring down a big portion of it.
Probably because creating and maintaining such a system is VASTLY expensive and time-consuming and may have been deemed too infeasible until they got so big that such an infrastructure was necessary.
It's often very difficult to design a system that allows any node to accept a change, when those changes are based on data that may be out of sync across nodes. The classic example is a bank balance - if you don't see right away that someone has already withdrawn the last $100 from a $100 account, then you will also be allowed to withdraw that $100. When all the nodes communicate and reconcile the changes, you have -$100. It can be done, but it takes careful design of the structure of your data, the kind of changes that can be applied, and the workflow of the applications to deal with unexpected results when they occur. Imagine if your ATM receipt said, "Your current bank balance is probably $0, but we'll get back to you to return the money if you're overdrawn." Come to think of it, this is exactly how checks work
@@ccthomas In the bank scenario I would imagine it would behave like this: Any node could accept your request to transfer the $100. The node would respond very quickly with something like: "yes, I successfully accepted your request to transfer $100". Quickly, because it is only one node out of many, thus not overloaded. This means the request was accepted, not that the transaction was completed. Then, some time later, after the node has communicated with enough other nodes to verify you actually have at least $100, will actually do or refuse to do the transaction, "eventual consistency".
This seems no better than a single node for writes. But no, it is still better, because your transfer request is independent of other people's requests and thus could be done "in parallel" with them. Again, better horizontal scaling, no single point of failure.
Or something like this. I know I make it sound "easy-peasy-lemon-squeezy", but it actually is not.
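The design sketched in this thread is essentially a quorum write: acknowledge once a majority of replicas has the record, finish replication in the background, and read from an overlapping quorum. A toy in-memory sketch of just that acknowledgement rule (no networking, failures or conflict handling):

import itertools
import random

class ToyQuorumStore:
    """Toy quorum store: a write is acknowledged once a majority of nodes has it.
    Versions stand in for real timestamps; this only shows the 'ack after W
    replicas' shape described above, nothing more."""
    def __init__(self, n_nodes=5):
        self.nodes = [dict() for _ in range(n_nodes)]   # key -> (version, value)
        self.quorum = n_nodes // 2 + 1
        self.clock = itertools.count(1)

    def write(self, key, value):
        version = next(self.clock)
        # Replicate to a random majority before acknowledging; the remaining
        # nodes would be filled in asynchronously in the background.
        for node in random.sample(self.nodes, self.quorum):
            node[key] = (version, value)
        return "accepted"

    def read(self, key):
        # Any read quorum overlaps every write quorum, so the highest version
        # seen is at least as new as the last acknowledged write.
        replies = [node.get(key, (0, None)) for node in random.sample(self.nodes, self.quorum)]
        return max(replies)[1]

store = ToyQuorumStore()
store.write("balance:alice", 100)
store.write("balance:alice", 0)      # Alice withdraws everything
print(store.read("balance:alice"))   # 0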
11:34 During the solar occlusion, is the backup data transmitted via gravity waves or by neutrinos? That part has always been a bit confusing to me.
I think the biggest surprise in this was the fact that they had daily tests of restoring from backup, when most companies only test that after they need it.
This is why you don't TOUCH A FUCKING WIRE WHEN THEY TAKE YOU THRU A TRIP IN THE EAST COAST SERVER
$50 an hour is a wild overstatement
Please…. More of these videos of software disasters! Facebook outage etc. !! As a developer myself, it’s somehow calming that such big players fall into these „oh shit….“ situations too! ❤️
I loved the whole breakdown of the issue GitHub faced, but it's the last 30 seconds of the video that gained you a sub!
Keep up the crisp K.I.S.S explanation and subtle humour combined with the accurate images and editing!
The humor in this video is 120%. We need news actors like you in this world.
and HERE WE GO AGAIN!
That last 30 seconds or whatever just earned you a sub. Lmao.
“They expected X to follow a linear trajectory rather than the actually observed power function” can be applied to most of what’s wrong with humanity 🤣
The thought of being amidst these people recovering from this kind of chaos gives me a stomachache.
The problem with this setup is that both databases can assume a "write to" status. My company has two SQL servers running our CAD part library and some lower services. One is the master SQL database some 400 km away, while the other is local in the region I live in. Internet outages are somewhat common, and thus the system of having two databases, one master, the other slave, has proven golden. The only downside is that for adding new parts or reworking existing CAD parts, I have to connect to the master SQL server, import them, and wait for synchronization. But even this "downside" has a silver lining: synchronization happens at fixed timestamps, letting me try out different parts and see how they behave on our slave server, without having to fear destroying anything, because the next synchronization will put everything right again. If the tested CAD parts work on the slave database server and behave as expected, I simply connect to master and repeat the adding process.
In the case of GitHub, they should imho, if the connection to the current master is lost, automatically fall back to a read-only database. 43 seconds of read-only is much less destructive than 6 hours of ongoing desync. In those 43 seconds I doubt that anyone worldwide would have noticed that the GitHub database was read-only and not accepting any commits. Even if someone committed at that exact moment, it could have been explained as an internet hiccup from one's ISP.
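A minimal sketch of that "fall back to read-only when the master disappears" idea, assuming mysql-connector-python and placeholder host names; a real setup would want several consecutive failed checks plus alerting rather than flipping on a single miss:

import time
import mysql.connector  # pip install mysql-connector-python

PRIMARY = {"host": "master.example.internal", "user": "monitor", "password": "REPLACE_ME"}
LOCAL   = {"host": "127.0.0.1",               "user": "monitor", "password": "REPLACE_ME"}

def primary_reachable(timeout_s=2):
    try:
        mysql.connector.connect(connection_timeout=timeout_s, **PRIMARY).close()
        return True
    except mysql.connector.Error:
        return False

def set_local_read_only(flag):
    conn = mysql.connector.connect(**LOCAL)
    conn.cursor().execute(f"SET GLOBAL read_only = {'ON' if flag else 'OFF'}")
    conn.close()

while True:
    # Lost sight of the master? Refuse writes locally instead of promoting
    # ourselves and diverging; reopen automatically once it comes back.
    set_local_read_only(not primary_reachable())
    time.sleep(10)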
To be fair, I doubt systems will ever be designed to require live interplanetary interoperation given that the latency is already measured in minutes (10ish for Earth -> Mars one way, 2x for a round trip). Even translunar (1.3ish seconds, 2.6ish for a round trip) is kinda pushing it. And this is a fundamental limitation of special relativity, so it's not changing anytime soon unless someone has a warp drive they're keeping secret.
Can't wait for another video, just kinda wanna go on a binge watch of them but there aren't that many, hopefully in the future though :)
I hate dealing with databases, but watching your database stories is a pleasure 👍🏻
Everything about the video was great lmao. The humor, the animations, and not stupidly complicated.
Nice editing Kevin. Really looking forward to the next one.
Deleting servers? No, on this channel we nuke them.
Instant subscription.
We do have to admire the self-confidence of the system designers. They plunged right in, built a highly complex system, blissfully unaware of their own naïveté. Failure control is about 30x more complex than they had assumed.
I am loving your videos so much. You make describing how exactly these internet exploits are done in the most entertaining way. Even someone who only knows basics like myself can follow along and understand.
I love the animations and goofiness, pls never stop making these videos
I understood virtually nothing but still found the video absolutely exhilarating.
I mean, cross region issues are something you're meant to have tested disaster recovery from and this is a really obvious point of failure they shouldn't have missed. That's the issue here, not necessarily an architecture problem itself.
And the funny part of all this is that this is a well-known issue among database, storage, and server engineers. It has been solved in many ways, decades ago. In short, hire the crusty old Unix geeks every once in a while.
your joke at the end about the martian servers had me in tears, too real
I absolutely love this channel.
Your visuals are probably the best and most entertaining I've ever seen
These videos are hilarious! I look forward to more!
It’s like the dark net diaries podcast but different and super funny.
Good stuff! I watched all of these and am disappointed there isn't more to binge watch. I hope you keep this format, this is an excellent concept for a YouTube channel!
11:30 That 2047 bit is probably one of the greatest things I have ever heard haha
This type of scenario is why I've been building a CRDT backed nosql database. To make it so that you can have a ridiculously complex topology and recover from any failover. Fault tolerance is extremely important for apps this size and it seems like they had some very janky setups.
You always end up on one side of the DB triangle. The trick is to choose the right approach to the problem.
@@omniphage9391 that's what the CRDTs are for, they give you strong eventual consistency. If you accept SEC you can have all three as well.
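For readers who haven't met CRDTs: the selling point is that merges are commutative, associative and idempotent, so replicas that diverged during a partition converge without coordination. A toy grow-only set, about the simplest state-based CRDT there is (real CRDT databases layer much more on top):

class GSet:
    """Grow-only set: each replica adds elements locally, and merging is just
    set union, so replicas that diverged during a partition converge to the
    same state no matter who merges whom first."""
    def __init__(self):
        self.items = set()

    def add(self, item):
        self.items.add(item)

    def merge(self, other):
        self.items |= other.items

east, west = GSet(), GSet()
east.add("commit A")   # written on one side of the partition
west.add("commit B")   # written on the other
east.merge(west)
west.merge(east)
assert east.items == west.items == {"commit A", "commit B"}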
2 month update perhaps?
The MCO reference was 10/10.
-Dylan
This channel gonna be big soon with these high quality vids and the algorithm starting to push em
Moral of the story? Github have overcomplicated their entire process. This is what happens when programmers have too much time on their hands and are told to "Do stuff"
I love the breakdown. Thanks!
A genuine question : Is it even possible to use async replication for the primary without bricking consistency on fail-over? (the root cause of this entire mess). Btw, great video and the end segment was glorious!
Nope (pretty much what CAP theorem states). Only way to prevent that would be to semisynchronously replicate to all remote DCs, which GitHub doesn't do (see 2018 blog post in description on MySQL High Availability)
Async: there is a time between one instance having the data and the other instances having the data. If you pull the plug in that window, then that data isn't available to the other instances. It's possible (and very desirable) to have synchronous knowledge of data and asynchronous transfer of that data, so your non-primary instances get a "journal" entry which can ease reconciliation pain at the cost of read-after-write speed to the cluster (primary). Given that this is only useful occasionally, but the overhead cost of worse performance is always there, the approach is uncommon.
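A toy sketch of that "synchronous journal entry, asynchronous payload" idea, with in-process objects standing in for the network; it only illustrates the acknowledgement ordering, not a real replication protocol:

import queue
import threading

class Replica:
    def __init__(self):
        self.journal = []          # tiny entries, shipped synchronously
        self.data = {}             # bulk payloads, applied later
        self.inbox = queue.Queue()
        threading.Thread(target=self._apply_async, daemon=True).start()

    def log(self, txn_id):
        # Synchronous half: the replica knows *that* txn_id exists before
        # the primary acknowledges the write to the client.
        self.journal.append(txn_id)

    def _apply_async(self):
        while True:
            txn_id, payload = self.inbox.get()
            self.data[txn_id] = payload   # asynchronous half: the actual bytes

class Primary:
    def __init__(self, replicas):
        self.replicas = replicas
        self.data = {}

    def write(self, txn_id, payload):
        self.data[txn_id] = payload
        for r in self.replicas:
            r.log(txn_id)                  # completes before the client is answered
        for r in self.replicas:
            r.inbox.put((txn_id, payload)) # payload streams over in the background
        return "ack"

primary = Primary([Replica(), Replica()])
primary.write("txn-1", b"lots of bytes")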
11:34 this is something that can 100% happen and I will be here to reply to this comment when it or something similar happens. Kevin Fang is a prophet.
I too shall be waiting for mars to blow up as BitBucket an incomprehensible amount of data.
i just noticed today is the anniversary of this incident
That Bitbucket joke was the funniest thing I've heard in coding terms, Keep up the awesome stuff mate! 🤣
Your background music and sound effects are very clever.
Thank you youtube algorithm for suggesting this piece of art
I love the "until next time" segment at the end lolol
Your jokes are top tier and never fail to make me laugh!
Data integrity was a problem the moment the two were out of sync. Rolling back and losing the latest changes is the correct answer (like restoring a backup). Moving forward with out of sync primaries is asking for trouble
GitHub: Civil War
The king seems to die, so the west coast crown prince declares himself king...
And then the king shows back up.
That last part is gold. Thank you so much.
Loving these new documentary type videos!
2nd video from your channel. Realized it's awesome. You've got a new subscriber, bro!
LMAO @ the bitbucket skit at the end... that had me cracking up!
This kind of incident is pretty much a nightmare for any on-call engineer. Not eager to do any of this kind of work.
This channel is way too small for content this good
I always wondered about inconsistency across diverging timelines and how engineers would handle it. Great video 👍
I'm still surprised they didn't plan for whole regions to lose connection. I mean heck, didn't a huge part of the US northeast and Canadian southeast lose power for a few hours at some point?
I'm the kinda guy who'd set everything up so that everything could technically run off of one proxy server in Montana. But with my luck the failure would probably end up negating all the safeguards I've made and crash everything anyway.
This is a great example of what to do after a major IT issue, which is make plans to handle such a situation better and more easily should it occur again.
The outro was hilarious, fully expect this to happen with the colonisation of the solar system.
Explosions!?!?!? Another banger dude, so entertaining. This is so funny: we've got HA, but failover is not a supported architecture.
Fireship has really nailed this video
Clears the table after dinner
Everyone in the database team:
Sounds like a major headache. One "oopsie" and all hell breaks loose.
All the big companies have super complicated organisations, only to end up closing because their actual features become worse than something some kids could think of