Friendly reminder that "The Cloud" is just someone else's computer.
And it's someone who does not care about you or your data.
@@mallninja9805 Google cares about every bit of data that they can get their hands on -- Google surely had another "backup"
YUP
but... but...
@@mallninja9805 say that again but with Big Data companies and their means in mind.
As someone who's worked a decade in the financial sector: you could spend 200k on backups and it would still be a cost saving.
But also a decent DR plan helps. 😂
These guys will be wishing they spent a billion on backups right about now.
Yeah, but some new manager or finance guy decided to increase their bonus by saving money...aaaaaannnd here we are today.
200K a year.
... _I worked for a private bank that will not be named, handling absolutely stupid amounts of money._
Even just the fines on record keeping obligations being broken could bankrupt medium sized businesses in those cases.
Say what you will about the financial sector building with dinosaur bones, they sure as f*ck make sure your compass of consequences is aligned dead straight. A colleague of mine accidentally threw away the phone records of 5 traders.
The legal shitstorm that ensued was honestly terrifying to behold.
I always figured anything financial is regulated to the point where stuff like this can't happen but I guess this is completely wrong.
I work for an Australian super company (as a software engineer). It is a legal requirement, enforced by the regulator (APRA), that we have backups of backups, stored with different providers or in different data centres (not in the same state) if servers are in-house. We are audited each and every year to prove we can recover from a disaster and that data can be recovered. Massive fines and restrictions can be placed on companies if this is not met. Superannuation is a heavily controlled/regulated industry.
a W for Australia on that one
@@interstella0 Yeah, we have good regulations from the government at the moment.
We are actively losing them thanks to government ineptitude or corruption.
Victoria just lost the union for the building industry because of allegations of links to crime.
Conglomerates and megacorporations, yes. But regular SMBs and such, even some enterprises, no.
There is just not enough staff and so they don't bother.
It's the same in Singapore. Of course it's gonna be heavily regulated: this is everyone's retirement money (including the rich people's).
@@NickTaylorRickPowers Oh shit, I remember seeing something about the sheer corruption of the building industry in Australia close to a decade ago now. Given how this was treated as a "well-known fact" in some circles even then, it was probably a long time coming.
At my last workplace, I wrote a tool that sets up a test environment for our product.
It was very hacky, because I had to mock some external dependencies (e.g. SAP). And I included a very prominent disclaimer that this tool is only for testing purposes and should not be used beyond that.
Guess what: they're still using my tool to "install" our product on customer environments ...
it's your fault, you didn't add functionality that detects the scope of operation and activates restrictions if necessary - like e.g. silently activating automatic expiration.
@@-Jakob- truth. Nobody reads the docs. Build them into your code.
That's normal. A hack I wrote for a customer is in production. It really only fits that customer's use case...
@@lynx-titan Thanks, I'm gonna do that. I'm new in the IT game.
This is why I build auto-delete into these cases by default. Then if they do use it in prod, it will auto-delete itself so they can't keep using the test software.
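(A minimal sketch of that idea in Python, with hypothetical names and a made-up TTL: every environment the test tool creates carries an expiry, with no way to opt out.)

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: every environment created by the test tool gets an
# expiry timestamp by default, so forgotten "test" installs clean themselves up.
DEFAULT_TTL = timedelta(days=30)

def provision_test_environment(name: str, ttl: timedelta = DEFAULT_TTL) -> dict:
    """Create a test environment record that always carries an expiry date."""
    now = datetime.now(timezone.utc)
    return {
        "name": name,
        "created_at": now,
        "expires_at": now + ttl,  # no opt-out: test installs always expire
    }

def is_expired(env: dict) -> bool:
    """Used by a periodic cleanup job to decide what to tear down."""
    return datetime.now(timezone.utc) >= env["expires_at"]
```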
THE OUTRO GOES SO HARD
Best. Outro. Ever. 😂
(But which one am i talking about? The answer: yes)
Thanks! Aaaand, it's now the 3rd most watched part of this vid.
17:25 the commenter saying “time for the garbage collector to run” just made me spit my drink
Worked for a tax company that had quarterly physical backups in a vault in a different region from the home region, not connected to the internet. It was wild.
that's badass
This is the correct solution
Networking guys weep
I was always taught "you keep at least 2 backups, one on site for if your computer explodes, one off site as far possible for if your building or your city explodes"
A lot of companies do physical backups. It's called tape rotations and these physical media often get sent to Iron Mountain.
Re: backups of backups.
It all really depends on your fault tolerance. Here in the Netherlands we had a giant flood in the 50s; it killed over a thousand people. The country got together and decided to make sure that "never" happens again.
And we built the Delta Works, a flood prevention system designed to withstand up to once-in-10,000-years surges.
The same goes for IT: you have to calculate the value of your data, the cost of loss, and the cost of proper prevention...
Old IT saying goes like this: "If you don't have backups - you don't have your data. If you don't have 2nd backups - you don't have backups".
As for the last sentence, it is a formula which applies to any risk/security topic, period.
@@chupasaurus If you never do a complete, SUCCESSFUL recovery, you don't really have backups.
"what do the logs say?"
"...what logs?"
NANI?!
_"Has there been a breach? Has data been compromise?"_
UniSuper: "Well, in terms of data, *_we've got no data."_*
"What Data?"
If you are dealing with $135 Billion, it has to be backups all the way down.
No "if": they dealt with $135 billion, so to speak.
I imagine the UniSuper team had a crash before, but it was insignificant to outsiders while hair was on fire inside the company. When they moved over to Google, the dev team said: imagine if that happened, but with a billion dollars! That's how they got leadership aboard on such expansive backup plans.
And I'm guessing that dev team can write "Literally saved the company a billion dollars in losses" on their annual performance review now and have a blank space to fill in their own pay rise. Well, they should if there is any justice in the world, but this is the corporate world, so 3% extra was probably the max.
Or 0 bonus as this is their job.
How about a bottle of wine and a bar of chocolate as the thank you gift? Or my favorite: Why didn't you show up on time this morning after fixing this issue all night? => The entire team quit that day.
They'd be lucky if none of the blame stuck to them.
If you have to give bad news to your manager, so bad that it reaches the top of your company, the blame game is likely to start before figuring out what actually happened.
If management marks you as the guilty one, it's pretty hard to get rid of that stamp as they often don't want to admit they were wrong.
Even if Google is at fault, like in this case, they'd still blame you for choosing Google, for not providing a hot backup on a different provider, for not detecting this could happen,...
C-suites bonuses 😭
non-tech CEO be like: "but the service was down for multiple days and we got flak for it, you're shit"
blank logs really sound dreadful
Not enough fiber in the diet...
No logs = no recorded problems
I don't see an issue here.
@@SomeUserNameBlahBlah
Not an expert here, but couldn't an info-level log have averted this?
Just a line like, "Hey boys, I'm gonna do a bit of a cleanup just like you asked me a year ago... just so you know..."
@@bravosix6738 I think it's because it was a test tool. It would be logical for a production tool to implement email warnings, say 3 months before deletion, one month before deletion, and maybe a couple of days before deletion, but setting such a warning is probably another parameter. There's no real need for a (default) warning in a test tool. The idea, though, that a test tool can be used at all for production, and at Google of all places, I don't know if it's scary or comedic.
@@bravosix6738 You mean like the one shown at 8:23? Logs only tell you what happened, they don't stop the thing from happening.
Aaaannnd, it’s gone. It’s all gone. This line is only for customers who have money!
S.P. xD
Lessons learned, don't trust Randy when it comes to finances.
5:25 Still expensive, but apparently the "ve1-standard-72" is 2 CPUs, 768GB of memory, and 20TB of NVMe storage with a 3TB cache. That's a solid server. The price is probably very different when negotiated; even the pricing page mentions sales discounts.
Prime: Flip, Flip Zoom in.
Flip: **Zooms out**
"I wouldn't like to be caught without a second backup" -- The Miles O'Brian Principle
Triple backups have been the gold standard since the dawn of computers, though. What's more modern is testing backups, similar to end-to-end testing in programming, but it's called disaster recovery exercises.
I literally asked about backup restore testing in our company in a meeting today 😂. Glad to conclude it is done regularly.
Well worth going through those exercises. It's amazing how often you end up in a chicken and egg situation when testing a bare metal restore.
Absolutely worth testing before you need it. I just had an "unscheduled" data recovery exercise last night. Thankfully it passed.
As a 2021/2022 ex-Googler (internal communications, not any dev team): Google does praise itself a lot as a company, and does a lot of things right for employees, but man do they value numbers and statistics more than people when it comes to any issues.
It handles the entire country, so them having that many backups literally saved Australia lmao
This is factually incorrect.
✨ How it feels to spread misinformation ✨
There are many Super funds in Australia. UniSuper is among the largest, however.
Continent *
@@LeetHaxington So no Tasmania?
The Indonesian government got ransomwared and had no backup. So yeah, complaining about too many backups is officially a first-world problem.
Google does this all the time. Just, well, google it. This case only seems to be the loudest so far.
Google uses the CADT development model for their commercial products which is insane
Google: Collects a lot of user data
Also Google: Deletes accounts to make space for the 'extra' data
The mandatory 11.5% Superannuation ("super" for short) is a requirement set by the government on employers; however, the money itself (usually) isn't managed by the government. They're private companies that employees nominate to manage their fund for them (even banks have divisions to do it for you). Usually these companies also try to "grow" your money by investing it for you, and most of them make a profit from general account fees and some of that return on investment. People can also nominate to cut a larger percentage of their wages to go into the super fund, usually pre-tax, hence you can even get away with paying less tax by doing this. The government does watch over other things involved, though, like under what circumstances you can withdraw from your retirement fund (when you are under the retirement age).
The fact that this all happened with no notification and it wasn't "soft-deleted" at least for a little while is very troubling.
Good rule of thumb: expect people to do boneheaded things and plan accordingly.
Oh my fuck, "empty logs" is the scariest two words you can hear
It's just sloppy work by Google. If the task was "write your name" and it involved $88 billion, I would have spent an hour verifying my OWN name before I committed.
So the Google team who ran the tool didn't bother to verify its full parameter set? A tool that is never normally used manually for production, with $88 billion at risk, and they didn't do that?
Yeah, sloppy, reckless work by Google.
Oopsie. We are sowwy 😢
Who are you to produce such great content. Regular, quality, and educational. You made me want to learn vim.
Backup is a religious topic in Japanese companies.
The off-site backup is the first thing I cut when I'm too busy, like the previous two weeks. It's a gamble, but my assumption is, if the place burns down we're out of business anyway.
"Oh don't worry, Bob has a script."
>1 year later<
"Ooops..."
"Bob left the building"
This reminds me of the day I deleted the bin on the production server. Feels like only 3 months ago, because it was. Check the paths in your makefiles before doing a clean build, kids
10:45 Off-site backups seem quite the norm to me. The three-two-one rule (three copies, two media types, one off site) still more or less applies.
23:00 i laughed with how accurate that is.
Same! I work in government and it's exactly like that
As a DBA I often answer questions like "Why do we need backups when we have a replica?" It was funny the first 100 times.
This sounds like RAID IS NOT A BACKUP!
Snapshots are not backups!
@@phillipsusi1791 Yep. High availability is not disaster recovery.
I work for the company which helped them recover this data. These things happen all the time, so probably a good idea to have a solid backup/recovery system in place.
I think it's kind of funny that, when UniSuper initially only called it a "third-party service provider", pretty much everyone probably imagined some awful company with great B2B marketing but low tech quality that never stood out with anything good ever, and then it turned out said third-party service provider was literally Google. 💀
What most people don’t understand about insurance is that your premium covers the average risk of the group over the life of your policy, not your individual risk.
From 25:00 till the end is the best primeagen moment of all time. Period.
There are likely hard requirements for such funds, put together by ICT experts, for anyone acting as a government supplier. In a non-corrupt world, these serve their primary purpose of making the solutions safer, as opposed to the corrupt act of speccing something in favour of a pre-selected supplier.
"As a cloud engineer myself, I can confidently say this is complete bullshit, it is 100% UniSuper's fault" is the funniest thing I'll read all week.
My workplace is full of that kind of hubris.
"As a guy who does something like a part of that for another company, I know EXACTLY what's happening right now."
Why did they even use Google in the first place when they could run their own reliable servers?
I'm always telling people to back up their cloud data. And I get "but why, it's backed up by X" (not actual X) and I laugh.
One needs multiple backups. One of the backups needs to be in a facility in a different town, or maybe a different state, in case a regional catastrophe occurs.
A great way to do this is to have a local backup, then make a backup of the backup.
Backups for backups is standard. Even long ago, the standard was onsite backups and offsite backups.
10:27 - two decades ago I worked with a guy who was rightly obsessive about backups. People in IT would joke about him but he was always right. His stance was that your backup should have backups and you should test those backups to make sure the backups were working. Two decades later, I deal with people who run production environments with zero backups.
Just remember. The Cloud is just someone else's computer
5:18 Something to keep in mind is that not all of that bill is going into the server capacity.
You are paying for reliability, failover, automation, etc. that ensure your services never go down.
We've had data collection systems that ran for multiple years without any interruptions.
Not something that is easy to pull off with on-premise solutions, Exhibit A: 6:00
Yep. Totally right. It is a shame that in the end such a fortune only got them the guarantee of deleted data after one year.
And now Google should host them for free for at least a decade
It doesn't sound like they were paying for any of that stuff, lol
Also it’s only the cost of 1~2 employees
Never thought I'd get to see Prime impersonating Alexis from Schitt's Creek, complete with hand gestures and the voice. Mad props my man
Imagine trying to catch up on decades of Google docs in the 2 year time-span between when you start, and when you go somewhere else? It's absurd! Just reading the docs isn't enough, you need to put it into a meaningful context. That will always take time.
Australia mentioned let’s go!
Love you guys ❤
I work for the government and, at least at my agency, we don't have that many managers but we do often get directives from on high that are totally out of touch with what's possible. Like, the governor decides to do some program and now we have to scramble to try and get six months of work done in two with only 75% of the business logic nailed down lol
Super in Australia is great. Unironically it holds up a massive chunk of our economy, because every single Australian is an "investor" putting 15% of their income into stonks and infrastructure.
The money got stolen and they just covered it up cleanly.
10:46 We've always had backups for backups. Only now we're looking after our precious newborn babies by wheeling them into an abandoned carpark at night in the bad part of town and asking whoever happens by to mind them ... because it saves a buck. Insane.
Absolutely insane. We're right to be anxious about companies relying like this on cloud services. And they should not be rewarded.
Guys. I love this show, I always enjoyed the humour and the serious conversations that are properly had to set us all to a better future... Bro is definitely funny... Awesome stuff man... Hope all the best !!!
Always flag your test code for deletion.
Function:
-check status of project
-if project is production then
-alert, flag these functions for deletion
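(A rough Python version of that pseudocode, with hypothetical names: the test-only tool checks the target project's environment label and refuses to run against anything that looks like production.)

```python
import logging

logger = logging.getLogger("test-tool")

class ProductionTargetError(RuntimeError):
    """Raised when a test-only tool is pointed at a production project."""

def guard_against_production(project: dict) -> None:
    """Check the project's environment label before doing anything destructive.

    `project` is a hypothetical record like {"id": "...", "environment": "production"}.
    """
    env = project.get("environment", "unknown")
    if env == "production":
        logger.critical(
            "Test tool invoked against production project %s; refusing to run.",
            project.get("id"),
        )
        raise ProductionTargetError(
            f"Project {project.get('id')} is marked production; this tool is test-only."
        )
    if env == "unknown":
        # Fail closed: an unlabelled project is treated as production until proven otherwise.
        raise ProductionTargetError("Project has no environment label; refusing to guess.")
```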
I bet the dev(s) that pushed for the third-party provider backups are feeling mighty proud of themselves now.
10:55 You need backups not just in case the provider fails; it's also good to have them if there is a cyberattack. Attackers usually target backups. Having additional, unlinked ones is often a blessing.
The end was absolutely hilarious
Best prime video of all time, thanks bruh
Probably the developer of the tool got an email/reminder in his inbox to extend the time on the pension fund's cloud, but after leaving the company he no longer had access to his company email.
I'm sure it's the manager who took the bonus, saying "I took the decision to have backups" when it was his engineers who advised it.
Worst weather forecast for cloud users: it will be sunny!
Clouds have dissipated
Having backups for your cloud backups is like trailering another car behind your car everywhere you go just in case.. and everyone around you thinking it's perfectly normal behavior.
Having another car trailered behind you, for an individual, is expensive.
So that's a bad metaphor.
Backups are a cheap kind of insurance. They're normal and needed, since the world is chaotic.
I don't think you know much about this topic OP.
no it isn't 😕
It's the cost of not providing your own infrastructure.
Even then, you will want some backup processes.
Incorrect. Most businesses are going to opt for hybrid cloud for 2 reasons: 1) backup/failover services, 2) broader service offering. It's normal for anything mid-size and up to have backups with another cloud provider for anything that is truly mission critical. Large businesses should have a disaster recovery plan that considers the possibility of one cloud provider getting blasted off the face of the planet. Mid-size businesses should have DR strategies that move between different geographic regions, even if they opt to use the same provider for DR.
I'm glad you reacted appropriately to the price tag. It's the reason we stay on Exchange instead of moving to Office 365 at our company. Moving to Office 365 would cost us more in the long run than our entire combined IT salaries.
for some companies it could be cheaper to host on cloud/datacenter providers due to space/security constraints
Just want to throw out there that having a physical backup, a digital backup, and an off-site backup has been normal for over 20 years.
In a world where backups are one dumb CEO away from being considered unnecessary, we should not entertain the idea that someone has too many backups.
In a small business operation, my boss has anxiety about data loss... so I run our local file server with local replication in Hyper-V in separate physical locations on campus, backed up to a local NAS, and cloud backup with iDrive. ... My server capacity per end user is obscene. It's f&^%& hilarious... especially when I get to take home the hardware that depreciates to zero and gets replaced.
(edit) P.S. I also have 2 managers in my 4 person dept.
Writing documents for nobody to read but for your level uppers to approve is the CORE of working at Google.
You read the situation perfectly mate
When a company is more interested in censorship and DEI than backups.
I've had to start doing backups of the backups. I've been screwed twice now from not having a third backup, and at this point I'm afraid something might happen where I need a fourth.
Nothing in this video was as unexpected as the robo boy switching to rap god for the finisher, and that includes prime miming Eric Cartman
"Take it out flip" phrasing are we still doing phrasing
Renting looks expensive, but remember it includes networks, UPS and support.
The ending is top notch. Both from the Kevin Fang and Prime
FLIGHT OF THE CONCHORDS MENTIONED. THE CROSSOVER MY LIFE NEEDED
According to their site they have 800+ employees and 647k clients. I know some financial institutions still make old-school backups on tape and store them underground in vaults. You know, just in case. Some jurisdictions require you to go waaay back in case of a trial/investigation.
I love Kevin Fang's stuff
E-moos? Does Prime think emus are animatronic cows?
Electronic cows
In reality, they are animatronic cats.
Conclusion: never write code or a tool that deletes the database and the backup at the same time. If everything needs to be deleted automatically, then delay the backup deletion by at least one week (it will still be deleted automatically, but if it turns out it should not have been deleted, developers will have one week to stop the automation).
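(A small Python sketch of that two-phase deletion idea, using invented callables for the storage layer: the live data is removed immediately, while the backup purge is only scheduled and can still be cancelled during the grace period.)

```python
from datetime import datetime, timedelta, timezone

BACKUP_GRACE_PERIOD = timedelta(days=7)  # one week to notice the mistake and cancel

class DeletionPlan:
    """Tracks a pending backup purge that can still be cancelled."""

    def __init__(self, resource_id: str):
        self.resource_id = resource_id
        self.backup_purge_at = datetime.now(timezone.utc) + BACKUP_GRACE_PERIOD
        self.cancelled = False

    def cancel(self) -> None:
        self.cancelled = True

def delete_resource(resource_id: str, delete_primary, schedule_purge) -> DeletionPlan:
    """Delete the live data now, but only *schedule* the backup purge.

    `delete_primary` and `schedule_purge` are caller-supplied callables
    (hypothetical stand-ins for whatever storage API is in use).
    """
    delete_primary(resource_id)
    plan = DeletionPlan(resource_id)
    schedule_purge(plan)  # a later job purges backups only if plan.cancelled is False
    return plan
```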
"don't roast us" I know I'm on UA-cam but can't stop laughing XD XD
As someone who works with super funds and financial institutions, 3 backups is very, very normal and required. We also have regular simulated disaster recovery tests where we fall back to the recovery environment. Some will even switch every year between the two.
At this point I'm watching out for when prime says flip take it out... And flip proceeds to keep it in 😂
Public cloud isn't cheap, but you can't compare the price to just the price of buying a computer. You can't run a national pension fund on a computer sitting in a corner of your garage in the original cardboard box without being plugged into power, without any cooling, without any network connection. You need to add in the cost of proper data center space, power, cooling and networking. It's still not cheap, so it's more appropriate for temporary capacity.
I see archived backups of backups as mandatory for any important service. However, the thing that most companies fail to do is estimate the recovery time in the worst case. If the recovery time is too long compared to the loss of income during the recovery process, the company may still die, and then the theoretical possibility of recovering the data perfectly no longer matters in reality.
The big question is: should you do regular test recoveries from the archives? At my current work, we only regularly test recovery from the main backup, not the backup of the backup.
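(A toy back-of-the-envelope comparison along those lines, with entirely made-up numbers, just to illustrate the trade-off the comment describes.)

```python
# Hypothetical figures: compare the cost of downtime during recovery
# against the cost of a faster (e.g. second-provider) recovery setup.
daily_loss = 250_000          # revenue/penalties lost per day of downtime (made up)
slow_recovery_days = 14       # restoring from the archive-of-archives
fast_recovery_days = 2        # restoring from a hot secondary environment
fast_setup_annual_cost = 400_000

slow_cost = daily_loss * slow_recovery_days                            # 3,500,000
fast_cost = daily_loss * fast_recovery_days + fast_setup_annual_cost   # 900,000

print(f"Slow recovery exposure: ${slow_cost:,}")
print(f"Fast recovery exposure: ${fast_cost:,}")
# If the slow-path exposure exceeds what the business can survive,
# a perfectly recoverable archive still doesn't save the company.
```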
22:25 not primagean and his box diagrams again 😂
That outro was unhinged....
and I love it
This reeks of some team at Google provisioning a customer account with an internal tool designed and intended for development teams. The default expiry is obviously there so stuff gets cleaned up and not left to rot.
How is it default behavior when you can simply set a flag like "--testing" that avoids the situation altogether?
The fact that they didn't have the tool force you to specify the flag is just bad coding standards. For any tool, even if it is only for internal testing, just make the tool force you to specify --development/--testing or --production, and never have a default for those flags.
Especially if you are ever going to use it in prod. You either NEVER use it in prod, or you do only in special cases. If you do use it in prod, there can't be such a default case... it has to be a prompt. If you have to use the tool in prod, it's obviously because the client needs much higher resources than what the normal dashboard allows, so it's gotta have extra special attention. And also, why no reminders that it's set to delete? Why is that date not shown anywhere? Why are they starting such a big client instance seemingly "on a whim"? Even if they did check the command, they gotta also double/triple check the end result... just so very sloppy.
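(A minimal argparse sketch of that idea, with a hypothetical tool name and flags: the environment is a required, mutually exclusive choice with no default, plus an interactive confirmation for production.)

```python
import argparse

def parse_args(argv=None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(prog="provision-tool")
    env = parser.add_mutually_exclusive_group(required=True)
    env.add_argument("--testing", dest="environment", action="store_const",
                     const="testing", help="target a disposable test environment")
    env.add_argument("--production", dest="environment", action="store_const",
                     const="production", help="target production (no silent default)")
    parser.add_argument("name", help="name of the environment to provision")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    if args.environment == "production":
        # Extra friction for production: require an explicit interactive confirmation.
        answer = input(f"Provision PRODUCTION environment '{args.name}'? Type 'yes': ")
        if answer.strip().lower() != "yes":
            raise SystemExit("Aborted.")
    print(f"Provisioning {args.name} as {args.environment}")
```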
"Backups for your backups" shouldn't be necessary for such an expensive service.
Amazon RDS, if I remember correctly, keeps 6 backups of data... supposedly. I don't know what Google's persistence policy is, or whether the databases that were deleted were user- or platform-managed.
💯
@@thecollector6746 Even if Google had 20 backups, when the command to delete everything rolls in, everything, including those 20 backups, will be gone. A backup for a cloud service provider is a second cloud service provider.
Actually pretty smart when your financial firm controls pensions for an entire country. I don't care how good or big the third-party data vendor is, you never put all your eggs in one basket when you're talking hundreds of billions of dollars.
This is like, risk management 101, and they almost certainly had a legal obligation to do so for exactly this reason.
The concept is called vendor risk management, and the reason you need to think about it when this much money is involved is that even if Google does everything right, there's still inherent risk involved.
For example, what if they went out of business? What if there's a regulatory or sanctions change that prevents the vendor from operating in your country?
Hilarious End. Does that video really end that way!! Amazing!!
Politician working @ google: Yes, another bail-out. We are going to be rich.
Uni: We have backups.
Politician working @ google: We need an executive order to get rid of that.
😂😂😂😂
What an absolute amazing ending for the video.
UniSuper does seem like a spot-on company in their handling of this situation, as discussed. It should be noted that, given the company is a major financial provider handling billions of dollars, it almost certainly is that way due to stringent government-enforced requirements placed on them. I would be very surprised if they would legally be allowed to operate without having secure offsite backups.
Financial institutions have strict regulatory requirements for disaster recovery.
A separate vendor for DR backup is likely to have been one of them.
The tool was made for internal purposes only and then later got used by the whole company for everything - sounds like some of my previous lives!
Google when they fuck up: oops! Sowwy! 😊
Google when you fuck up: 🤬
“Australia is a lucky country run mainly by second rate people who share its luck. It lives on other people's ideas, and, although its ordinary people are adaptable, most of its leaders (in all fields) so lack curiosity about the events that surround them that they are often taken by surprise.” -The Lucky Country, 1964. So no, they won’t sue Google, they plod along into the next foreseeable problem they’ve been ignoring and everyone will forget about this one.
hahaha
The dude that wrote that died in the 60s. Australia changed fucking radically in the 80s. In the 60s, our currency was tied to British pounds, our institutions were still tied to British ones on the whole, and we had fucking morons. In the early 70s, we had a massive change that brought in huge reforms and universal healthcare. In the 80s we went all in on trade and technology. You are literally posting to YouTube using standards that were invented by Australians. We often didn't invent the "first" thing, but we made them good. Whether it's contacts that don't make you fucking sick and hurt your eyes, or wifi that actually works in a house.
Never would have expected ThePrimeTime to have an outro song.