Is there a use case for CrowdStrike on any platform? No, there isn't. Anything that messes with the kernel at that level should be considered a security threat in its own right, on the basis of the service disruption and business-continuity risk it poses. Do you really want to run a closed-source piece of malware as a kernel module?
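To make the business-continuity point concrete: a fault in kernel-mode code doesn't stay contained the way a user-space crash does. Here's a minimal sketch (purely hypothetical, a toy Linux module, nothing to do with Falcon's actual code) of why one bad pointer at that privilege level is a whole-machine outage rather than a restarted process:

```c
/*
 * Hypothetical illustration only, not anyone's real code: a minimal Linux
 * kernel module whose init function dereferences a NULL pointer. In user
 * space this bug would kill one process; in the kernel it oopses or panics
 * the whole machine. That asymmetry is the business-continuity risk.
 */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>

static int __init crash_demo_init(void)
{
        int *p = NULL;

        pr_info("crash_demo: dereferencing NULL in kernel context\n");
        *p = 42;        /* oops/panic: takes the host down, not just a process */
        return 0;
}

static void __exit crash_demo_exit(void)
{
}

module_init(crash_demo_init);
module_exit(crash_demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("NULL dereference on load (do not actually build or load this)");
```

A user-space agent with the same bug would just segfault and get restarted by its supervisor; the kernel-mode version takes everything with it, which is exactly what all those bluescreens were.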
They completely fuck over their customers on the business-continuity front; they become the problem, and I'd bet most companies would never suffer a catastrophic failure this bad if they didn't run the software at all. No hacker would be able to take down so many systems so fast and so hard.
Not the guy you're asking, but I agree. There would be no need for Falcon Sensor on every Windows machine deployed inside an enterprise (assuming Falcon Sensor serves a purpose worth fulfilling in the first place) if the critical devices on the network were sufficiently hardened. The main problem (presumably the reason such a solution exists at all) is that as soon as you have a human factor, people who must be able to access critical infrastructure as part of their job, there will be breakages of some kind. Not all of those are malicious or grow into an external threat, but they still need to be averted, of course.
I feel that CrowdStrike is an idea that appeals to those making technology decisions because it promises something that conventional means, as we have known and deployed them, cannot deliver. I can't say whether or how often that promise has actually enabled companies to thwart attacks at their inception, but again, I feel that in a sufficiently hardened environment, even with compromisable human actors in play, you don't need self-surveillance at the deepest level of the OS to this extent.
And to also address OP's question: of course there is no need for this in a *NIX environment. There has never been any significant need for antivirus of any kind anywhere in the UNIX-based world, including macOS. So this really isn't about whether an anti-malware solution in itself can satisfy a company's needs per se; the requirements very much follow from the attack vectors that the existing infrastructure opens up. In other words, when your environment is Windows-based, you are bound to deploy more extensive security countermeasures. Because they are necessary.
Some may say this is down to market share, but to those I say: has the risk profile of running a Linux-based server changed over the last 20 years? They have certainly become a lot more common in that timeframe. One example I can think of was a ransomware exploit on a Linux-based NAS brand (QNAP, I think). This isn't a holier-than-thou argument. Any system can be compromised. Period. The only thing you can ensure is that the investment needed to break your system will always be higher than the potential gain. So another way to put this is that in a Windows-based environment, your own investment into ensuring that will always be higher.
But don't get me wrong, I don't mean to say Windows needs to be removed from the desks of office workers. Really, this failure and all these photographs of publicly visible bluescreens (and all the ones in datacenters and server rooms that we didn't see) show that Windows has way too strong a foothold in places where plenty of smart people are employed to find solutions that best serve the interests of their employers, including interests (e.g. security and privacy) that the employers are unaware of because they can't be printed on a balance sheet.