Isn't it just so frightening that one company controls so much in such a sensitive sector?
Indeed
Not to mention that this was a result of an accident. Imagine the damage that could be done on purpose.
People / companies allowed this to happen, so anyone related to this who is still choking on it while not saying or doing anything about it can go and screw themselves.
I'm simply tired of people's ignorance and laziness.
Especially from a piece of anti-virus software. That's crazy ridiculous. They pretty much became a virus themselves.
Do not worry, soon this and others will be bought by Disney, who of course will then have all of our information, and we all know it is the land of dreams, so when that happens, all our problems will be over😂
I worked with CrowdStrike on a dev project; they don't do QA lol. It's a circus through and through.
Oof
This is an issue at a lot of tech companies. I'm a dev, and I've quit jobs over the company forcing bad practices.
Most notably, KnifeCenter. I started there, realized they were torrenting pirated software on the same server where they held customer credit card information. I immediately left after being told I was "confrontational" with the CTO over it.
Da f!!!!
They also outsource a lot, with little to no QA.
Can confirm this is 100% true, and this incident is the evidence. Literally, anyone who tested this would have experienced the crash. It impacted ANY Windows device, no matter the hardware. CrowdStrike's after-incident reports admit as much: this type of update only goes through scripted testing.
Edit: I watched the full video and realized you explained this haha. I was on the ground fixing this nonsense for over a week.
Sorry for screwing up millions of computers. Here, have a $10 voucher that may or may not work! 🤣
A real kick in the teeth for people like myself that were fixing hundreds of systems.
Crowdstrike: We know we've made you suffer so here's Schrodinger's $10 Food Voucher
(this is really what happened)
CrowdStrike: we updated a regex so there were 20 args instead of 21 args
*a week goes by*
CrowdStrike: we updated a template that required 21 args, not 20, and it crashed
The problem was doing regex in the kernel, and using it as a domain specific language.
@@katrinabryce it was literally a null pointer exception - the template update required that 21st arg to be pointing somewhere, and it wasn't.
While the null pointer was caused by a poor regex pattern this time, it was ultimately human error, not a regex or kernel bug.
@@katrinabryce Unfortunately, due to the nature of the software you kind of have to do it in the kernel. One day that won't be the case; today is not that day.
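Roughly, the failure mode this thread describes looks like the toy sketch below. Python stands in for the kernel C code, and the names and data are invented; only the 20-vs-21-parameter mismatch comes from the comments above.

```python
# A content-update rule that carries only 20 parameters.
rule_params = [f"param_{i}" for i in range(20)]

def evaluate_template(params: list[str]) -> None:
    # The new template expects 21 inputs, so it reaches for index 20.
    # In Python that's an IndexError; in a kernel driver reading past the
    # end of its parameter array, it was an invalid memory read and a BSOD.
    twenty_first = params[20]
    print("matching against", twenty_first)

try:
    evaluate_template(rule_params)
except IndexError:
    print("user-space analogue of the out-of-bounds read that blue-screened the fleet")
```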
They had to pull a 1970s tech guy out of retirement to unplug it and plug it back in. Thanks, Phil.
😂
But did he reboot?
I never knew about CrowdStrike until this incident, which is crazy.
Yeah, massive background company
@@LogicallyAnswered one of many
While you're correct that Crowdstrike should have checked the update, and respected staging, there's actually an even bigger problem: The app isn't doing any sanity checking on the data files in the updates it gets. One bad read or flipped bit on an otherwise correct update would bluescreen an otherwise perfectly healthy PC because the app just blindly executes the update.
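A minimal sketch of the kind of sanity check being described: verify the integrity of a content update before the driver ever touches it. This is not CrowdStrike's real format or code; the magic bytes, header layout, and hash source are all hypothetical.

```python
import hashlib

def validate_update(blob: bytes, expected_sha256: str, magic: bytes = b"CSCF") -> bool:
    """Reject a content update that is corrupted or obviously malformed."""
    # An all-zero or truncated file fails the magic-number check immediately.
    if len(blob) < len(magic) or not blob.startswith(magic):
        return False
    # A single flipped bit anywhere changes the digest, so a damaged update is
    # dropped instead of being handed to the kernel driver.
    return hashlib.sha256(blob).hexdigest() == expected_sha256

update = b"\x00" * 4096  # the infamous all-zero channel file
if not validate_update(update, expected_sha256="<published digest>"):
    print("update rejected; keeping the previous known-good file")
```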
I was taking a flight the day it happened. It was a cluster F. I work in IT, so this was a dodged bullet on my end. But at the same time, I can now officially say I worked on an airline's computer.
Hahaha
Yessir you did. I have never experienced anything like it before. I work IT for a major airline. It's insane how fast thousands of people called us at 3am; it collapsed our whole phone server. Meanwhile, all of our PCs were stuck in a boot loop. Felt hopeless for a few hours…
@@mesothelioma2008 And the manual entry of all the BitLocker keys... man, you guys had a hell of a couple of days / weeks.
For the sake of efficiency and cost, these companies repeatedly cut redundancies. We may come back stronger in the short term, but I doubt the cycle will stop repeating as long as we have empty-headed business types on top of engineering companies.
Well, technically, they did strike large crowds.
They did do a crowdstrike
An entire team got fired over this
Given that a lot of it was ignorance, makes sense
As they should
An entire company is about to go BK from this.
It's not their fault; bosses want to deploy updates faster than they should be deployed to cut costs, and don't have a proper IT ethic.
@@potatocrispychip No one said it was their fault, but someone had to fall.
It's shocking that some normal shops had to close because the register was not working. That's why we in Germany love our cash. It is a total joke that a shop full of goods has to turn back customers because the computer is not working.
When the store computer system goes down, it won't matter if you have cash. Many stores are not able to function without their computer(s). Try telling staff to write out receipts on a piece of paper or even use a calculator to add up the cost (and calculate the correct amount of change) and watch all the blank stares.
(I usually pay with a credit card, but I do carry cash with me.)
We have funny money here in the US.
It won't help to have cash in this case. If the whole system is down, products cannot be scanned to update inventory, receipts cannot be printed, and incoming money cannot be registered as received. (Cashiers can easily pocket the cash in these cases, and since inventory is not updated, no sale is registered to indicate a sale was made at all 🤷♀️)
that depends on how the system works, and if it was designed with that in mind @@daphne8406
@@daphne8406 and that's the problem. A normal cash register does not need the internet; once the system is back, you can update it from the register's printout. This is typical of our times: paper and pen are not good enough anymore, it seems. I wonder how supermarkets did it in the 90s.
As a QA engineer, I cringed at whatever lax testing they have in place. Automated tests are great, but there's still something to be said for taking an actual device and doing a manual test to make sure things still work right. Obviously I don't know their processes but clearly there was a dangerous gap.
The customer does the QA these days..
Real men test in production.
/s
Robocop (1987) - *YOU CALL THIS A GLITCH?*
CrowdStrike really showed us how careless they can be. Severe punishment should follow.
I work in IT. It was a horrible day, and a horrible following week too. But we handled it as fast as humanly possible.
FWIW, this was not a code update, but a data file update, data the Crowdstrike software uses to analyze computer activity. The data file was full of zeroes, which caused the Crowdstrike software to malfunction, causing a cascade failure.
That's got to be the biggest clusterduck I've seen in a while. Why the hell would you leave a data file blank??
@@DicksonJuma-iv2sc apparently it was a data-validation issue that caused it; proper QA would have caught it, though.
@@DicksonJuma-iv2sc Why the hell would the CrowdStrike driver not validate the data file when it read it? Bad data in such a file is inevitable sooner or later; software should validate while parsing files.
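In the same spirit, a toy sketch of defensive parsing: refuse to load a content file whose records don't have the shape the engine expects, so the failure happens at load time instead of as a crash later. The pipe-delimited format and field count here are invented for illustration.

```python
from dataclasses import dataclass

EXPECTED_FIELDS = 21  # hypothetical: how many parameters each rule must carry

@dataclass
class Rule:
    fields: list[str]

def parse_channel_file(raw: bytes) -> list[Rule]:
    """Toy parser; only the defensive checks matter, not the format."""
    if not raw.strip(b"\x00"):
        raise ValueError("file is all zeroes - refusing to load")
    rules = []
    for line_no, line in enumerate(raw.decode("utf-8").splitlines(), start=1):
        fields = line.split("|")
        # Fail the load, not the machine: a rule with the wrong number of
        # fields is rejected here instead of being dereferenced later.
        if len(fields) != EXPECTED_FIELDS:
            raise ValueError(f"rule {line_no}: expected {EXPECTED_FIELDS} fields, got {len(fields)}")
        rules.append(Rule(fields))
    return rules

try:
    parse_channel_file(b"\x00" * 4096)   # the all-zero file case
except ValueError as err:
    print("rejected at load time:", err)
```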
This is not how staging works. Staging is the last test environment in the pipeline before anything is released to production. So a company can have a staging environment in which it tests new software before installing it on its production systems.
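As a rough sketch of that idea: a release only reaches production after the exact same artifact has been deployed to staging and passed its checks there. The helper scripts and artifact name below are hypothetical placeholders, not any real pipeline.

```python
import subprocess
import sys

# Ordered gates: nothing reaches production unless the same artifact first
# survived the staging deployment and a smoke test against staging.
STAGES = [
    ("deploy to staging", ["./deploy.sh", "--env", "staging"]),
    ("smoke test staging", ["./smoke_test.sh", "--env", "staging"]),
    ("deploy to production", ["./deploy.sh", "--env", "production"]),
]

def release(artifact: str) -> None:
    for name, cmd in STAGES:
        result = subprocess.run(cmd + ["--artifact", artifact])
        if result.returncode != 0:
            sys.exit(f"{name} failed for {artifact}; halting before production")
        print(f"{name} passed for {artifact}")

# release("content-update-291.bin")  # hypothetical artifact name
```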
Also, to highlight how bad they are: 3+ months before this, their Linux version had a similar issue with a pushed update, and then three WEEKS before this there was another Linux update issue. That should have caused a massive rethink of the QA processes for all their products.
Modern Y2K. The name is starting to fit: they are striking a crowd of computers with issues.
4:25 imagine the pilots realising all planes need to be grounded. They must've been stressing really badly wondering why.
I get the impression Crowdstrike don't test their updates thoroughly enough before rolling them out. I can understand them wanting to protect computers from the latest malicious viruses as quickly as possible, but a faulty update can cause even more damage than the viruses.
They do not have human QA, just automated tests, and that worked for many, many updates before... just this time they didn't think to add the automated test needed to catch the crash.
How can the best security software not have a proper QA/testing done? Then how is it best?
It's the best because it's the biggest brand; it owns most of that market.
@@mrminecraft4772 no, that's not answering the question. HOW could other companies buy their service without any proper vetting? HOW could they trust CrowdStrike enough to buy their service? Is it just blind buying?
The amount of money that was lost in less than a day must have been absolutely astronomical!!! 😱
In the airline and travel industry alone, soooo many people needed to receive compensation or accommodation while waiting for their cancelled flights, and there were so many travellers due to the summer holidays (at least in Europe).
You said 911 then showed 991
True
It's horrifying that the entire world, including emergency services, is reliant on updates from companies they can't control, both CrowdStrike and Windows. I still can't believe essential services rely on things that could be compromised because one lazy employee (or hell, an adversary) pushed a bad update.
Crazy right?
It's how any electrical system works. Power company goes down, millions of people lose access. Computers are always at risk of failure, so are phones, so are radios... literally anything can fail at scale.
I work at a small software company, and there is a lot of stress on adequate testing before releasing and pushing new versions of our programs to our clients. Seeing how reckless one of the biggest security companies turned out to be in that regard makes me feel a lot better about the job that I do.
As someone who isn't in IT, and for whom no computer means getting paid to do nothing, I thought it was a pretty fun day.
I would say the worst part is CrowdStrike bypassing other companies' staging policies.
Now CrowdStrike is only known for the cool-looking race cars that have them as the main sponsor. And for this other thing, I guess.
Your editing has become so good from stock footage to this great work
You're back❤ was waiting for your video
Finally after 2 weeks 🙌🏻
Yes sir, we’ve been able to redo the entire production process. Should be weekly uploads moving forward!
@@LogicallyAnswered that's good to know👍🏻
It didn't take a team of professionals to realize something was very wrong; but it did take a team of professionals to screw things up so badly. This is what happens when you do not test your software. If they had they would've realized the problem.
😂
This happened during my vacation. We had just gotten to our destination, and spent our first night there. My husband got a call from work since he's the IT Manager of a utility company. They were hoping he was nearby, but unfortunately we were in Vegas, and we live near the Mississippi River. They thought about flying him back, but that was a bust. I don't know what they did, but they eventually got back online with minimal outages.
I work at a small credit union. Had no idea what was going on until a member told me lol. Got an email from the IT department saying we are all good. Don't freak out, lol.
Why does YouTube hate you bro, this is a great video.
"So they have the Internet on computers now" - Homer Simpson
This is why testing updates before pushing them out is so important.
Good to see more content 👍
🙏
The power this company has is terrifying. What if someone does this on purpose?
Turns out the root cause was that they changed a function to expect 21 arguments but it was only given 20. Such an amazingly stupid bug.
This is what happens when you let the intern manage the release
Simple mistakes like this are exactly what other countries are waiting to take advantage of.
I fail to understand why it is that the Crowdstrike CEO has not been hauled into government hearing after government hearing all across the world since this outage happened. This is not something that should be pushed aside and forgotten about. There needs to be legislation to prevent this from happening again.
I still don't understand why they haven't gone bankrupt after all the problems they caused.
I still can't understand how Microsoft got a lot of the blame even though they were the ones saving all those devices and data from getting bricked lmao.
It had nothing to do with Microsoft lol.
I was stuck in Chicago for 6 hours. Thankfully it was a getting-a-new-crew issue, but I think the CrowdStrike issue made it worse.
Whoa, this CrowdStrike snafu is a doozy! It's insane how a single borked update can brick so many systems across the globe. Makes you wonder how robust our digital infrastructure really is. Kudos to this video for breaking it down in layman's terms. This is a cautionary tale for all the sysadmins and DevOps engineers out there!
Excellent concise coverage of the story!
It's amazing how CrowdStrike automatically has its hands in the base of your device by default.
No it doesn't, mate; it's only on devices that have it because an enterprise put it there.
@@NewKiwiJK 👍
All that pain and money, and CS goes: here… have an Uber Eats coupon. WTF
The really scary thing about what I see here is that many video games' anti-cheat systems work at the kernel level... How do they get away with it?
3:28 Oops: it should be 911, not 991.
Oops
I heard & saw in the CC: 911. Perhaps it was an inside joke but I didn't get it
Blame it on CrowdStrike...
crowdstrike IT 1: Sir we did an oopsie, a big one.
Crowdstrike IT Manager: Then fix it
Crowdstrike IT 1: We can't, the systems are blue-screened of death
Company layoffs sometimes lead to disaster. They probably cut corners a lot.
Great explanation! It boils down to going live with an update without staging it first. That is blasphemy in the world of IT deployment.
This is the first video that I have saved for the future. It just confirmed everything I am doing. My scale is a million if not a billion times smaller: I am trying to make streams for local hometown games. And I ALWAYS double check EVERYTHING: computer, cables, camera, internet, the stream itself.
And 98% of the time the tests pass 10/10. But I am focused on those 2%, trying to get them below 0.5%. And so far my method is the best (most stable, tbh) among the amateur games in my country.
I’m glad to see you back 💯
🙏
As far as I understand, this company has the power to shut down or crash any Windows PC connected to the internet... just wow.
Kaspersky banned and Crowdstrike downed.
3:27
991 service ?????
I didn't even notice this in my country, but that might be because most companies in Denmark don't use CrowdStrike.
Instead of asking if there was a doctor on board, they were asking if there was an IT guy on board.
What surprises me is that CrowdStrike doesn't have a test-update setup before release.
The result of Monopoly 😂😂😂
It was crazy, most don’t know but it shut down almost all of UPS.
Airlines actually put the ground stops in place themselves; it was not an FAA mandate. And the timeline was wrong: airlines realized the problem way earlier. As soon as midnight, some computers had already gone to blue screen. By 3 o'clock Eastern, three major airlines plus Allegiant had already put out their first wave of ground stops. Of course, regular customers wouldn't know until their boarding time.
I'm optimistic that since Crowdstrike buggered up all these Windows machines, it may help IT departments to consider ditching Windows for Linux (which wasn't affected by the update).
I was appalled that Alaska decided to use Windows for their 911 service. What the hell are they thinking? Anything mission-critical shouldn't involve Windows. Microsoft is most likely partly culpable for the BSOD issues.
As a former service tech for D.E.C., I never saw our systems crash and burn (unless there was a hardware issue). It wasn't until I was first introduced to Microsoft's "operating" system that I saw my fair share of malfunctioning computers.
Crowdstrike actually did the exact same thing to Linux servers on two separate occasions earlier this year. The fact that this particular update was aimed at Windows machines doesn't make Linux immune.
The only real fault Microsoft had with this is even allowing anything not Windows/Microsoft related into ring 0... the OS should be a level above anything else so it doesn't crash when people push code without testing it.
You are so uninformed
Unlike Linux, Microsoft is an autonomous, proprietary entity that allowed another private company access to their kernel. There doesn't seem to be a foolproof methodology that would prevent hackers from doing harm to this operating system.
It's bad enough many companies hire IT personnel who either don't understand how to keep their systems secure or the companies that hire them don't take the threats seriously and invest time and money towards keeping their data safe.
I've never been a fan of Windows from day one. It helped me earn a living, but I always thought it was a pile of dung with each iteration. I jumped the Microsoft ship just prior to Windows 7 and never looked back.
I still think Microsoft bears a certain amount of culpability when they sanction access to the core code without having some say into what modified coding is allowed to be installed inside their OS.
I wasn't able to find an article explaining in any great detail how badly the Linux servers were affected. From what I could gather, the "fix" was much simpler.
I wouldn't say any OS is immune to these kinds of problems. But if you run a business and put all your faith in some private company "diddling" with the kernel of your OS, then you really need to shake your head and expect shit to hit the fan at some point. Personally, I wouldn't outsource this task to people I don't know from Adam. I think people just want a simple turnkey solution without putting real effort behind securing their data.
The irony isn't lost on me that more damage was done by the supposed good guys than the evil hackers. Nothing will prevent this debacle from happening again.
Pros test in production, and get fired
Watched a few videos talking about this issue, but I am surprised none of them mentioned one important thing: cash is king.
Funniest thing is everyone seems surprised. My nan always said never put all your eggs in one basket.
Beautiful video!
Thank you iapple!
With CS not following the staged-update policy it promised/advertised to customers, could the customers sue CS for false advertising?
Just a small correction:
You described staging as a little-by-little release.
Staging is a different term, which you later used correctly.
The gradual rollout is called a canary release.
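A minimal sketch of a canary release as meant here, with the fleet size, wave sizes, failure threshold, and health signal all made up for illustration:

```python
import random

fleet = [f"host-{i}" for i in range(10_000)]  # hypothetical fleet

def push_update(host: str) -> bool:
    """Stand-in for a real deployment plus health telemetry from that host."""
    return random.random() > 0.001

def canary_rollout(waves=(0.01, 0.10, 0.50, 1.0), max_failure_rate=0.002) -> None:
    done = 0
    for fraction in waves:
        target = int(len(fleet) * fraction)
        batch = fleet[done:target]
        failures = sum(not push_update(h) for h in batch)
        done = target
        # Halt the rollout as soon as a wave looks unhealthy, so a bad update
        # hits 1% of hosts instead of all of them at once.
        if batch and failures / len(batch) > max_failure_rate:
            print(f"halting at {fraction:.0%}: {failures}/{len(batch)} hosts unhealthy")
            return
        print(f"wave {fraction:.0%} ok ({failures} failures)")
    print("rollout complete")

canary_rollout()
```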
My bad, could’ve worded that sentence better.
@@LogicallyAnswered its nothing too crazy
It happens
My whole birthday vacation to Bali was canceled over this
Worst company: they don't do basic CI/CD testing before releasing the product into the wild. They are the real hackers.
Crowd*STROKE*
Thanks for getting me 7+ hours of paid time to sit around and watch YouTube, since my employer was down and everyone company-wide was gettin' the ol' blue screen lol. I was the first at my organization to get the blue screen, so I thought my PC had crapped out. I had Windows reinstalled an hour later, and then my boss was tellin' me not to bother logging in, that nothing worked lol.
I update the FONT COLOR and my program throws 30 errors. I can't imagine not testing the thing on a pc about a dozen times before going national.
The crowd really was struck, huh.
Old school wisdom ... "Never put all your eggs in one basket"
CrowdStrike dropped the basket.
I think they kicked the basket after it hit the ground.
@@mrminecraft4772
LOL ... maybe so!
Don't just boycott CrowdStrike, boycott McAfee too.
This is a glorious event
I mean literally all you gotta do for qa is buy a windows pc, apply the update and see if it can still turn on.
Great video as always
Thank you as always Balpreet!
If anything, I find it worrisome how many things are connected to the net.
Ehhh, I don't know, maybe don't treat your IT department as an afterthought that only costs money.
The only issue I had was that my Ring camera became unusable. They blamed CrowdStrike, but CrowdStrike doesn't affect app functionality, so I'm thinking it was another Ring hack on the same day.
They use CrowdStrike on the Ring servers.
"If it works - don't mess with it"
They should have official Windows test computers to try the update on before making it public.
they really striked the crowd 💀
the animation was fire!!
The prerequisite for how this all happened: the EU bitched and moaned about Microsoft not giving antivirus vendors kernel API access to do their thing, aka demanding a level playing field, even though Microsoft never wanted third-party kernel-space boot drivers in the first place.
I went from thinking it would be an easy day before my holiday collecting research data at a hospital to a wild one acting as a runner between clinical receptionists, nurses, doctors, and everyone in-between in order to keep a clinic running. No access to clinical software meant I couldn't do any data collection anyways, so I was a messenger. I then had to catch a flight that evening across the country but by that point, the software glitch had been resolved and I was only 3 hours late.
Is CrowdStrike soon to be bankrupt?