As an IT professional: it was 100% CrowdStrike's fault that Delta was down on day 1. However, Delta being down from day 2 to day 4 is 100% the fault of Delta's IT operation. Most IT departments, including mine, were 99%+ up and running within 24 hours.
Chief Excuse Officer - hundreds of millions of dollars in redundancies, eh?
But you did not have to fix 40,000 mostly locked-down servers, then spend extra time recovering from the logistics problems of pilots and aircraft not being where they were planned to be.
This global internet outage is insane! All airlines grounded, I was stuck at the airport, and even banks, media, and offices from the U.S. to Australia were hit. How can CrowdStrike have such a monopoly that this massive amount of tech depends on them to be restored?
It's pretty concerning. If they can fix this, what other control do they have over our infrastructure? Or are we truly in the Matrix?
Right? It makes you think about the stability of our systems. But hey, I barely spend time online. When I checked my portfolio with Desiree Ruth Hoffman, we were still in the green. That's been the case for 16 months straight!
Wow, really? I've seen the name Desiree Ruth Hoffman before but can't figure out where.
Probably from her forecast on Nvidia before the pump. But how are you in the green with all the fluctuations from the election and everything else? Can you share her strategy?
Honestly, just schedule a call with her. She has vast knowledge in finance and really knows how to navigate these times. I handed over my portfolio to her so I can focus on my family. These days, things just get scarier and scarier.
Microsoft should have offered Delta a $10 meal voucher
🤣🤣🤣 nailed it
Ooooo. Brilliant! 👏👏👏
Delta is a mere shadow of its former self. When I worked for Disney, that's all we flew, and it was a stellar experience. NOOOOT anymore.
Missed opportunity for the Squawk crowd this morning. Instead of pressing Bastian on why Delta took several days longer than any other carrier to recover, here they are fawning over the Delta One lounge experience. The problem was not merely external.
The CEO hinted at it when he said that, within the industry, Delta is the most heavily invested in Microsoft and CrowdStrike.
It is a mistake to think this was a Delta issue. Reboots had to be done manually on many systems because of the bricking by CrowdStrike. Delta's servers were spread all over and had to be manually rebooted one at a time, 40,000 times.
@@HZADAMS Yeah, but he had no answer to what the solution is if it happened again next week.
@@Phineas1626 Of course he didn't. I would presume this isn't something that you just decide on a whim. Has any other CEO given an answer to that what-if?
@@DavidKen878 It’s PRECISELY his job to consider those “what if?” scenarios. Yes, I guarantee you other CEOs (or their designees) think about those scenarios constantly. Risk management is a big part of any big company.
Lmao. The airline industry has some of the worst treatment toward the public and customers. Pot calling the kettle black.
Glad to see someone gets the irony. Maybe Delta should consider suing themselves…
They should get treated how they treat their customers
Cry about it.
Blame game as a CEO? Interesting approach
Stop using CrowdStrike. They had a similar issue on Linux not that long ago. Or, don't allow updates to go straight through to your production systems before internal testing. Best of luck.
"When was the last time you heard of a big outage at Apple?"
Bruh 😂 How you gonna make $34,214,328 a year and say some ish like that 😭
when was the last one?
@@DaniEles-rc7ij I think the point they were trying to make is that Apple and Microsoft operate in fundamentally different spaces. Microsoft is a major player in cloud infrastructure for other businesses, while Apple primarily focuses on consumer products and services. That difference makes any direct comparison irrelevant.
Different businesses. Apple is much more consumer-based compared to Microsoft. Can't go comparing apples to oranges.
@@shastam9590 Linux is the base for Apple's OS... and Linux is scalable to the enterprise level... Delta needs to invest in internal systems, like they did in the '70s.
wdym, he can just migrate to Apple Cloud Services which totally exists, ez
/s
Recovery is paramount. They should have been using Rubrik instead of legacy backup tooling.
Free consulting? Wow... I'll assume that's sarcasm.
Sad state of affairs when the only people holding CrowdStrike accountable are the shareholders.
The outage happening was CrowdStrike's fault, sure. The outage lasting for a whole week is absolutely Delta's fault. When pilots and FAs show up to the gate ready to go and they can't because of an "internal scheduling error", that is absolutely Delta's fault. Delta's infra is clearly a house of cards, and I think this money grab from them will make any future vendor wary of doing business with them; it'd be like selling life insurance to a cancer patient.
Y'all need to team up with HP Enterprise, Apple, and IBM.
Why are the other two reporters there when only one is talking?? It's a weird visual.
It’s YOUR failure Delta. Stop blaming others for your ineptness.
I’ve never had a bad experience on Delta Air
Free lunch at Redmond HQ cafeteria for 100 years.
Linux is out there.
They published a bad update on Linux last month too; a bad software update for any OS can take it down.
@@katieadams5860 But the boot-loop problem is specifically a Windows bug, and some Linux systems are built with capabilities that simply roll back to an earlier install on a power cycle, which can be done by unskilled people. It was the boot-loop problem that kept machines down for so long.
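For anyone curious what that "roll back on power cycle" capability looks like, here is a minimal, hypothetical sketch of the A/B boot-slot idea some image-based Linux systems use (boot counting with fallback to a known-good slot). The data layout and threshold here are illustrative assumptions, not any specific distro's implementation.

```python
# Hypothetical sketch of A/B boot-slot fallback (boot counting), the kind of
# mechanism the comment above alludes to. Not any specific distro's code.

def choose_boot_slot(state: dict) -> str:
    """Pick which image to boot; fall back if the active slot keeps failing."""
    active, fallback = state["active"], state["fallback"]
    if state["failed_boots"][active] >= state["max_tries"]:
        # The freshly updated slot never booted cleanly: revert to the old image.
        return fallback
    return active

state = {
    "active": "B",                     # slot that received the bad update
    "fallback": "A",                   # previous, known-good install
    "failed_boots": {"A": 0, "B": 3},  # bumped on every boot that never reports healthy
    "max_tries": 3,
}

print(choose_boot_slot(state))  # -> "A": the machine recovers itself on the next power cycle
```

With a scheme like that, an unskilled person power-cycling the box a few times is enough; machines stuck in a Windows boot loop instead needed hands-on remediation.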
I would like to respectfully disagree with the Delta CEO. Our outgoing flight on July 19 was canceled by their partner WestJet. Moreover, our return trip on their partner WestJet was canceled on July 29. The Delta call center could not help us after an hour on the phone and told us to call WestJet. We called WestJet, waited on hold for over 2 hours, and got no assistance. Finally, we had to drive to the airport to rebook the flight that had been canceled by the software that was supposedly fixed.
These are the facts and the CEO is either unaware or smoothing over the problem. I personally will avoid Delta in the future.
You bought a codeshare ticket. The operator is still WestJet, not Delta. You have to contact WestJet.
@@HY-hm9gr No. If the ticket was issued by Delta and Delta is down, WestJet will not fly the passenger for free if WestJet cannot confirm or settle payment.
He was not a happy customer … at all. 😬
Don't tell me CrowdStrike doesn't have insurance.
Most insurance has exclusions for negligence, as do the liability exclusions in EULAs.
This means that when it gets to court, the get-out-of-jail-free card won't work, and the insurance won't pay.
So the rule of thumb is to avoid Microsoft. I hope people learn the lesson, haha.
So let me get this straight: you install an app on Windows, that app misbehaves, and it's Windows' fault?
How is this guy a CEO? No wonder Delta sucks so hard (no legroom and the most uncomfortable seats).
No thank you.
When it triggers a well-known, pre-existing bug in the kernel that has not been fixed after a decade, yes, Microsoft has some share of the blame. It was CrowdStrike that crashed the machines, but it was Microsoft's failure to fix the bug that kept them down and required on-site repair and remediation from tech support.
You buy software at your own risk. All software will have bugs. Surely Delta read the fine print in CrowdStrike's terms and conditions while contracting with them, didn't they?
CrowdStrike pushed out a forced update without testing it. They didn't follow their own QA process. Can't blame Delta.
No, this was not a normal bug. It was an untested, un-phased update at the kernel level. Inexcusable on CrowdStrike's part. This was completely avoidable.
@@Holly_Unleashed Most business problems are completely avoidable.
An unprepared CEO and company crying about not being prepared... they should let him go.
You have no idea what you're talking about.
@@Groaznic but I am sure you do 😴😴😴😂😂😂😂 get out of here troll
@@Groaznic now you really embarrassed me. What can I say back to this response 😅😅😅😅 I am sure you are the one that knows everything 😅😅😅😅😅🥱🥱🥱🥱🥱.
@@joaocardoso7880 What kind of name is Joao? Is that even a real name?
Most companies cannot cope with cowboy developers doing things that actively undermine their resiliency planning.
Lots of people got caught after 9/11 because their backup was in the other tower. The same happened with the big European data center fire, when the provider had the backup literally next door, so it caught fire as well.
This should happen more often when you think about how wrongly people decided that brutal rich companies made people suffer by not paying their taxes. (Aug 1, 2024)
I wouldn't trust this Texas-based cybersecurity company; I'll stick to Kaspersky, thanks.
Should have said it was caused by another country's hackers.
Lotta glazing going on in the comments here 😂 People bein like "LOVE THIS GUY!!!" gtfo he don't care about you or your family
$PANW FTW!
What does Joe do on this show? Honest question.
I like this guy... he speaks the truth.
I like him as well! He's honest, really good-looking, and an all-around amazing person! 😍😍😍
Actually no. He’s just deflecting. Delta took several days longer than anyone else to recover from the same event.
The best of the best!!!! Y’all just jealous
@@christianamaral-c3l Delta took longer because they had wider exposure than everyone else. Because this required hands-on fixes by skilled techs, it took a while.
Those 4 guys want age 77 so bad
CTOs always go with vendors that are convenient.
I, for one, sleep better at night knowing the Ukrainian gangsters at CrowdStrike have access to the kernel on my laptop.
The interviewer kept interrupting and it was so rude
yea,u aint use enuf paper straws & that made internet go sad
Delta is no better than American, JetBlue, and United. They want to charge a premium for more meager service, whilst Bastian travels on a private jet, catered with 5-star meals and expensive alcohol.
CRWD had one job: to prevent IT outages!
Kudos to Delta!!!! There needs to be a congressional hearing. Microsoft needs to pony up as well, because if you're going to let certified security companies into your core OS processes, you need to make sure they actually tested this. A 15-minute test would have caught it. Instead, they dumped it out to hundreds of millions of machines globally.
I've delivered software at all levels, from actual developer to tester, support engineer, and PM, and I have NEVER seen anything like this. When you deploy, you deploy to a handful of low-risk clients first (NOT aviation, hospitals, emergency services, etc.!!!!), wait a week, then deploy to the rest of the world. STUNNED!!!!! Multiple failures across every part of CrowdStrike's organization. They need to go out of business.
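For what it's worth, the "handful of low-risk clients first" idea is just a staged (ring-based) rollout. Here is a minimal, hypothetical sketch in Python; the ring names, sizes, and soak times are made up for illustration and are not anyone's actual deployment process.

```python
import time
from dataclasses import dataclass

@dataclass
class Ring:
    name: str
    hosts: list        # endpoints in this ring (toy dicts here)
    soak_seconds: int  # how long to watch the ring before promoting (hours or days in real life)

def is_healthy(host: dict) -> bool:
    """Stand-in health probe: a crashed host would report online=False."""
    return host.get("online", True)

def phased_rollout(update_id: str, rings: list) -> bool:
    """Push an update ring by ring, halting as soon as any ring degrades."""
    for ring in rings:
        for host in ring.hosts:
            host["installed"] = update_id        # stand-in for the real push
        time.sleep(ring.soak_seconds)            # soak period before promotion
        failures = [h for h in ring.hosts if not is_healthy(h)]
        if failures:
            print(f"Halting {update_id}: {len(failures)} failures in ring '{ring.name}'")
            return False                         # later rings never receive the update
        print(f"Ring '{ring.name}' healthy, promoting {update_id}")
    return True

# Illustrative rings: internal canaries first, critical infrastructure last.
rings = [
    Ring("internal-canary", [{"online": True} for _ in range(10)], soak_seconds=1),
    Ring("low-risk-customers", [{"online": True} for _ in range(100)], soak_seconds=1),
    Ring("everyone-else", [{"online": True} for _ in range(1000)], soak_seconds=0),
]
phased_rollout("update-2024-07-19", rings)
```

As the reply below points out, threat-content updates can ship many times a day, so the soak window would be minutes rather than a week; the halt-on-first-failure gate is the part that matters.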
Wait A WEEK? These were continuous security updates, not a new program version. There may be multiple updates per day due to emerging threats. CrowdStrike did not test properly and rolled it out to everyone at the same time. They are at fault. Microsoft does not have responsibility here. Delta chose to install CrowdStrike.
@@benjamink8448 On the contrary, Microsoft has had multiple instances of boot-loop problems over the last decade and has not done enough to prevent them.
Yes, CrowdStrike took 8.5 million mostly mission-critical systems down, but it was Microsoft not fixing the boot-loop problem for a decade that kept them down.
I hope they sue them into oblivion. This overreliance on vendors with a single point of failure is insane.
It was Delta, and everyone else, that was being cheap and only relying on one system.
An intelligent person would say an airline should have at least one backup system...
And so should every other large corporation.
@@lordgarion514 Or maybe at least ask 'what if...?'
@@lordgarion514 But lots of people who were hit used release n-1 on their production systems and n-2 on their backups. Unfortunately, the CrowdStrike rapid-release process gives the impression that this applies to it as well, when it does not. What customers expected was that the release process would be engineered to quickly stop a problem and recover, and that a bad release would be spotted on the customer's test machines. It would then never be applied to the n-1 systems in production, and if it took long enough to hit them, it would never reach the n-2 backup machines.
What actually happened is that this flag only applied to the core kernel update, not to the two files in the channel update, so when it was shipped untested, it kept shipping for 90 minutes, taking down the test machines and the production machines at the same time, and then when it reached the backups it killed them too.
The only thing CrowdStrike got right about this was saying, yes, it was us.
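A minimal sketch of the gap described above, assuming a simplified policy model (the field names and update kinds are illustrative, not CrowdStrike's actual configuration schema): version pinning staggers full sensor releases, but rapid-response channel files go to every host regardless of the pin.

```python
from dataclasses import dataclass

@dataclass
class Host:
    role: str         # "test", "production", "backup"
    sensor_pin: str   # "n", "n-1", "n-2" -- the staircase customers configured

def receives(host: Host, update_kind: str) -> bool:
    """What the comment above says actually happened, in miniature."""
    if update_kind == "sensor":
        return host.sensor_pin == "n"   # pinning really does stagger full sensor releases
    if update_kind == "channel_file":
        return True                     # content updates bypass the pin entirely
    return False

fleet = [Host("test", "n"), Host("production", "n-1"), Host("backup", "n-2")]

print([h.role for h in fleet if receives(h, "sensor")])        # ['test']
print([h.role for h in fleet if receives(h, "channel_file")])  # ['test', 'production', 'backup']
```

Which is why the usual staircase (test catches it before prod, prod catches it before the backups) offered no protection on July 19.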
This also seems to be a failure on the part of Delta's IT department, for not testing the CrowdStrike file and for pushing it to all production servers rather than doing a gradual rollout that can be ramped up once there is confidence in the file. Also, why 40,000 servers? There are better alternatives that don't require a huge army of developers and systems people to maintain, and apparently to recover after an outage. The raised-floor space and data center environmentals like redundant power, etc., must be very costly. Your IT leadership built an expensive empire.
You have to understand how endpoint security with CRWD works. It doesn't give administrators the ability to accept and schedule a rollout.
@@HZADAMS Could Delta have tested the change before moving it to production? This process for accepting CS changes seems to fly in the face of best practices: no testing and an all-at-once promotion to every production server. Hence the result we all witnessed. Thank you for the CS information!
No, literally every company using CrowdStrike got hit with it.
@@katscandance Thanks! Shouldn't the CS file have first been distributed to their customers' non-production servers, rather than directly to production? It's a very interesting relationship between CS and their customers if no testing is performed before production. I wonder how CS tested this file (which resides in the kernel) without experiencing the BSoD. More likely, CS did experience the BSoD but did not react properly to it. Just my two cents. Thanks for the reply!
@@karlking4980 I believe I read somewhere that CrowdStrike actually had a bug in their automated tests, which resulted in them not noticing it before the rollout. However, I think that big companies should be able to test updates in a non-production environment before deploying them to all their servers, as that is considered good practice.
I totally agree with this guy. Even if they didn't have the most adequate IT equipment, they pay millions of dollars for support from MS and CrowdStrike. CrowdStrike at least admitted what they did was wrong; MS didn't, they're just like, oh well, sorry this happened, it was a minor bug. Like, seriously? Delta pays for your services too, so you need to work with CrowdStrike on how to prevent this in the future.
😮👍👍
This guy seems salty about how rich the top tech companies are lol
Palantir???
what about palantir?
Good luck with the non-existent Apple server.
I got a $10 Uber voucher 😂
™️ 🥊 KO
Apple would never
I don't overlook Apple either; it could happen to them too. Updates are not always good if not properly tested.
You guys are kinda gross
An honest and intelligent CEO... rare indeed.
He's brilliant... incredibly handsome... and an all-around fantastic person... A true rarity! 😍😍😍
Remember, Delta took several days longer than any other carrier to recover from the same event. Just deflecting here.
@@christianamaral-c3l The guy from Delta pointed out that they had some of the highest exposure, with 40,000 machines affected.
If you ignore travel time, assume there are enough techs on site, and figure a repair time of ten minutes per machine, working 24 hours per day, you are still talking about roughly 277 man-days to get them all back up, even before you add in the time to fix machines that were not designed to cope with being crashed.
When you add in the restore time for broken machines, travel time, and not being able to get techs to the right places to fix machines, those numbers can go up very fast.
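The arithmetic behind that figure, under the same assumptions (10 hands-on minutes per machine, no travel, techs available around the clock); the flip side of the same number is how many techs would have to work in parallel to finish within a single day.

```python
machines = 40_000
minutes_per_fix = 10          # hands-on time only: no travel, no rebuilds of broken machines

total_hours = machines * minutes_per_fix / 60   # about 6,667 hours of work
man_days = total_hours / 24                     # one tech working around the clock
print(f"{man_days:.0f} man-days")               # ~278, i.e. the comment's "277"

# Equivalently: to clear the backlog in a single 24-hour day,
# you would need about this many techs working nonstop in parallel.
techs_for_one_day = total_hours / 24
print(f"~{techs_for_one_day:.0f} techs")        # ~278
```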
Love the CEO's honesty
I love his honesty... As well as how handsome he is!!! 😍😍😍
Remember, Delta took several days longer than any other carrier to recover from the same event. Just deflecting here
“Thank you for calling Delta Airlines. Your call is very important to us. Your approximate hold time is 107 minutes.”