If this figure is accurate, the massive impact was likely due to collateral damage. Even if this took down every server at an enterprise and left most of the workstations online, those workstations were still basically paperweights.
They have about 24,000 clients, so that comes out to around 350 impacted machines per client, which is reasonable. It only takes a few affected machines for thousands of people to be impacted, if those machines are important enough.
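Quick sanity check on that per-client figure, assuming the ~8.5 million affected devices Microsoft reported and the ~24,000 customers mentioned above (both numbers taken from this thread, not independently verified):

```python
# Back-of-the-envelope check on the "around 350 machines per client" figure.
affected_devices = 8_500_000    # total reported by Microsoft
crowdstrike_clients = 24_000    # approximate customer count cited above

per_client = affected_devices / crowdstrike_clients
print(f"~{per_client:.0f} affected devices per client on average")  # ~354
```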
As far as I know, none of the OSes used for virtualization hosts at scale by any of the major cloud infra players are Windows.
Not to mention: any company that uses any AWS, Azure, or GCP service is “using VMs” in one form or another (yes, I know I am hand-waving away the difference between VMs and containers). VMs are basically what they build all of their other services on.
That's how supply chains work: one link in the chain breaks and the whole thing doesn't work. Also, 10% of major companies being affected is still giant. But you're here using online services, you probably still bought bread, probably got fuel, probably played video games. It's huge in the media, and it had massive effects, but there are heaps of things that just weren't even touched, which is how information about it spread. TV news networks, for example, seemingly kept going well enough to report on it non-stop, unaffected. Tbh though, any good business continuity and disaster recovery plan should handle this with some impact but continued operation.
The only companies I have seen with workable BCDR plans are banks, and that is because they handle money for rich people. It wouldn't surprise me if many core banking systems are hyper-legacy as well.
I honestly think the only reason a majority of our infrastructure didn't collapse is the lack of security controls and shitty patch management programs.
Sure. Compliance programs work for some aspects of business but since the advent of "the cloud", BCDR plans have been a paperwork drill.
(There are probably some awesome places out there with quadruple-redundant networks with the ability to outlast a nuclear winter. I personally haven't seen them though.)
This number seems quite low. My organisation alone would have had something like 3,000 employee devices taken down. Since it happened on a day when most people were WFH, there are at least another thousand static devices in my building alone that may not have been in use at the time and will shit the bed tomorrow morning.
The same thing applies to our much larger sister companies interstate. So that's another 6,000 or so devices.
The two largest energy retailers were affected too, so that's another 5,000 devices at a conservative estimate.
Then there's all the self-service checkouts that went down across Australia. I have no idea how many there are, but if every Coles and Woolworths has ten of them, that's another ~40,000 devices.
That's just the organisations that I am personally aware of as being affected in Australia and can get ballpark figures for.
Obviously Microsoft are getting their figures from the auto-reporting that happened on each crash, but it really does seem like it's too low.
It's beyond time to diversify our IT infrastructure. Enough with sticking everything "in the cloud" and paying for software (and devices!!) we don't own.
So, those numbers all account for about 54,000 of the 8.5 million devices. Using fairly generous rounding, that still leaves approximately 8.5 million more devices.
Way to miss the point. That's 54,000 that one person knows of across a small handful of organisations in one small country. I'm not even including the dozens more organisations I know were affected but can't come up with a ballpark figure for.
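For what it's worth, tallying just those ballpark figures (all of them the commenter's estimates, not official counts) against Microsoft's reported total:

```python
# Rough tally of the ballpark figures quoted above (estimates, not official counts).
estimates = {
    "own org employee devices": 3_000,
    "static devices in one building": 1_000,
    "interstate sister companies": 6_000,
    "two largest energy retailers": 5_000,
    "self-service checkouts (Coles/Woolworths)": 40_000,
}

total = sum(estimates.values())
reported_total = 8_500_000
print(f"Total one person can account for: {total:,}")                # 55,000, roughly the ~54,000 cited above
print(f"Share of Microsoft's figure: {total / reported_total:.2%}")  # ~0.65%
```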
If you had majored in IT you would know that this CrowdStrike thing is an easy, though somewhat tedious, fix. There are honestly far more annoying problems that IT people have to contend with.
I'm well aware that it's not a complicated fix, I'm more than capable of doing it. Being a guy on an understaffed IT team in an office of hundreds right now sounds fucking miserable.
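For context, a minimal sketch of the widely reported manual workaround (deleting the faulty C-00000291* channel files from the CrowdStrike drivers folder). The path and file pattern follow CrowdStrike's public guidance but are assumptions here, and in practice this had to be run per machine from Safe Mode or the recovery environment, which is exactly why it was so tedious at scale:

```python
# Sketch of the widely reported per-machine workaround. Run as admin from
# Safe Mode / the recovery environment. Path and file pattern are assumptions
# based on CrowdStrike's public guidance.
import glob
import os

DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"

def remove_bad_channel_files(driver_dir: str = DRIVER_DIR) -> list[str]:
    """Delete the faulty C-00000291*.sys channel files, returning what was removed."""
    removed = []
    for path in glob.glob(os.path.join(driver_dir, "C-00000291*.sys")):
        os.remove(path)
        removed.append(path)
    return removed

if __name__ == "__main__":
    deleted = remove_bad_channel_files()
    print(f"Removed {len(deleted)} file(s)")
    for p in deleted:
        print("  ", p)
```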
Normally I would agree; however, this doesn’t appear to be a Microsoft update but a CrowdStrike update, and everyone is worried about ransomware etc.
Absolutely that. For networks that matter, patches are usually tested independently. While I wouldn't trust the average military command to do patch testing, any civilian/corporate contractors absolutely would, because money. (Microsoft is likely at the top of that stack...)
There are other conditions as well. EDR infrastructure, if it exists, would need to be isolated on a "Government cloud" which is a different beast completely. Plus, there are different levels of networks, some being air-gapped.