XZ Hack - "If this timeline is correct, it’s not the modus operandi of a hobbyist. [...] It wouldn’t be surprising if it was paid for by a state actor."
Well — we just witnessed one of the most daring infosec capers of my career. Here’s what we know so far: some time ago, an unknown party evidently noticed that liblzma (aka xz) — a relatively obscure open-source compression library — was a dependency of OpenSSH.
Thought this was a good read exploring some of the "how and why", including the several apparent sock-puppet accounts that convinced the original dev (Lasse Collin) to hand over the baton.
Imagine finding a backdoor within 45 days of its release into a supply chain, instead of months after infection. That is astoundingly rapid discovery.
Fedora 41 and Rawhide, Arch, the testing and unstable Debian branches, and some projects like Homebrew were affected. That's not counting Microsoft and other corporations that don't disclose their stacks.
Arch was never affected, as described in their news post about it. Arch users had the malicious code on their hard disks, but not the part that would have called into it (Arch's sshd isn't linked against liblzma).
It also intersects with another problem: bus factor.
Having just one person as maintainer of a library is pretty bad. All it takes is one accident and no one knows how to maintain it.
So, you're encouraged to add more maintainers to your project.
But yeah, who do you add, if it's a security-critical project? Unless you happen to have a friend that wants to get in on it, you're basically always picking a stranger.
Unless you happen to have a friend that wants to get in on it, you’re basically always picking a stranger.
At risk of sounding tone-deaf to the situation that caused this: that's what community is all about. The likelihood that you truly know the neighbors you've talked to for years is practically nil. Your boss, your co-workers, your best friend, and everyone you know has some facet to them you have never seen. The unknown is the heart of what makes something strange.
We must all trust someone, or we are alone.
Finding strangers to collaborate with, who share your passions, is what makes society work. The internet allows you ever greater access to people you would otherwise never have met, both good and bad.
Everyone you've ever met was once a stranger. To make them known, extend blind trust, then quietly verify.
honestly these people should be getting paid if a corporation wants to use a small one-man foss project in their own multi-billion-dollar software. the lawyer types in foss could put that in GPLv5 or something, whenever we feel like doing it.
i can't see how paying someone would have changed anything in this scenario.
this seems to be a long running campaign to get someone into a position where they could introduce malicious code. the only thing different would have been that the bad actor would have been paid by someone.
this is not to say that people working on foss should not be paid. if anything, we need more people actively reviewing code and release artifacts, even if they are not a contributor or maintainer of the software in question.
I think bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.
In the event of the complete loss of a sole maintainer, it should be possible to fork and continue a project. That requires a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve the fork and change the potentially thousands of packages that rely on the project as a dependency.
Maybe, before a library or any software gets accepted into a distro, that distro should do more due diligence to ensure it's a sustainable project and meets requirements like solid ownership?
The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I've never tried to get a distro to accept my software.
Nothing I've seen would completely avoid the risk. Blackmail of an existing developer is not impossible to imagine. Even in this case, perhaps the new xz developer started with pure intentions and was personally compromised later? (I don't seriously think that's the case here, though; this feels very much state-sponsored and very well planned.)
It's good we're asking these questions. None of them are new, but the importance is ever increasing.
Maybe, before a library or any software gets accepted into a distro, that distro should do more due diligence to ensure it’s a sustainable project and meets requirements like solid ownership?
And who is supposed to do that work? How do you know you can trust them?
And if they do find it, it'll all be kept hush hush, they'll force an update on everyone with no explanation, some people will do everything in their power to refuse because they need to keep their legacy software running, and the exploit stays alive in the wild.
open source software getting backdoored by nefarious committers is not an indictment of closed source software in any way. this was discovered by a microsoft employee due to its effect on cpu usage and the faults it introduced under valgrind, neither of which required the source to discover.
the only thing this proves is that you should never fully trust any external dependencies.
The difference here is that if a state actor wants a backdoor in closed source software they just ask/pay for it, while they have to con their way in for half a decade to touch open source software.
How many state assets might be working for Microsoft right now, and we don't get to vet their code?
Could be a lone "black hat" or a group of "black hats". Who knows.
Could be the result of a lot of public criticism in the news regarding Pegasus spyware. Who knows.
Could be paid by companies without any state actors involved. Who knows.
Could be a lone programmer who wants power or is seeking revenge for some heated mailing list discussion. Who knows.
The question of trust has been mentioned in this case of a sole maintainer with health problems.
What I asked myself is: how did this trust develop years ago? People trusted Linus Torvalds and used the Linux kernel to build Linux distributions, to the point that the kernel grew from a tiny hobby thing into a giant project. At some point compiling from source code became less fashionable and most people downloaded and installed binaries. New projects started, and instead of tar and gzip, things like xz and zstd were embraced.
When do you trust a person or a project, and who else gets on board a project?
Nowadays something like "curl https://example.com/install.sh | sh" is considered perfectly normal as the default installation of some software.
Open source software is cool and has produced something of a revolution in technology, but there is still a lot of work to do.
Some of the trust comes from eyes on the project thanks to it being open source. This thing got discovered, after all. Not right away, sure, but before it spread everywhere. Same question of trust applies to commercial software too.
Ideally, PR reviews help with this, but smaller projects, especially those with few contributors, may not do much of that. I doubt many teams have spent time understanding the software supply chain (SSC) attack surface of their product, but that seems like a good next step. Someone needs to write a tool that scans the SSC repos and flags certain measures, like the number of maintainers.
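A toy sketch of the kind of check such a tool might run (the function names are hypothetical, and a real scanner would pull commit history from the forges rather than take author lists directly):

```python
from collections import Counter

def bus_factor(commit_authors, threshold=0.5):
    """Smallest number of authors who together account for at least
    `threshold` of all commits: a rough proxy for maintainer concentration."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered = 0
    for i, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered / total >= threshold:
            return i
    return len(counts)

def flag_risky(repos, min_factor=2):
    """Flag repos whose bus factor falls below `min_factor`.

    `repos` maps a repo name to the list of commit authors.
    """
    return [name for name, authors in repos.items()
            if bus_factor(authors) < min_factor]
```

A single-maintainer repo like xz would come back with a bus factor of 1 and get flagged; a project with commits spread over several people would not.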
PS: I have the worst allergies I've had in ages today and my brain is in a histamine fog so maybe I shouldn't be trying to think about this stuff right now lol cough uuugh blows nose
Any speculations on the target(s) of the attack? With Stuxnet, the US and Israel were willing to infect the whole world to target a few nuclear centrifuges in Iran.
Stuxnet was an extremely focused attack, targeting specific software on specific PLCs in a specific way to prevent them mixing up nuclear batter into a boom-boom cake. Even if it managed to affect the whole world, it would be a laser compared to this wide net.
Given how low level it is and the timespan involved, there probably wasn't a specific use in mind. Just adding capability for a future attack to be determined later.
Facebook probably wanted Zstd adoption over XZ/LZMA
There was probably an analysis of who uses LZMA compression a lot, and it so happens that archivists, pirates, people and countries with low bandwidth, people in Russia, game repackers, et al. use it a lot compared to the "good, law-abiding", money-blinded consumers of rich countries.
Somebody wanted to screw over LZMA/XZ/7Z users
(most favourite right now) implanting a network backdoor into Linux servers and ecosystems
Someone thought it would be a good idea to troll open source community and make it look worse than closed source, so that closed source security can be popularised ("security" trolls in FOSS community I harp about love such ideas, beware of any Graphene/Chrome/Apple and Big Tech lovers just as example)
Tying into the idea of making FOSS ecosystem look bad, it might be a concerted effort by closed source company/companies to propel themselves above, as FOSS development is shitting on closed source corporate model
A different approach, it could be the first step in a series of steps to dismantle FOSS ecosystem, considering how much trust and transparency it has that attracts everyone enlightened enough
I could think of many other scenarios and outcomes if I put enough time, but I think this should be enough food for thought. The beneficiaries are limited, the actors few, and the methods cannot vary too much.
The world needed the open internet to bootstrap the digital revolution. It wasn't possible without the sum of humanity working altruistically to build the Library of Alexandria of software; no private entity could possibly have done it. It truly is an under-appreciated marvel of the late-20th/early-21st century. FOSS contains the knowledge of the software that runs the world. Now that such a thing exists, I could totally see organizations (loosely speaking) wanting to conquer or ransack it. It's quite clear by now that there's a faction of tech with a tyrannical bent. I'd put them, whoever exactly they might be, down as possible culprits.
What exactly does Facebook gain from more people using zstd, other than more contributions and improvements to zstd and the ecosystem (i.e. the reason corporations are willing to open source stuff)?
Why do you consider lzma to be loved among pirates and hackers and zstd not to be, when zstd is incredibly popular and well-loved in the FOSS community and compresses about as well as lzma?
Every person in the world uses both lzma and zstd extensively, even if indirectly without them realizing it.
I think it's likely that, of all the mainstream compression formats, lzma was the least audited (after all, it was being maintained by one overworked person). Zstd has lots of eyes on it from Google and Facebook, many of the world's most talented data-compression experts contributing to it, and lots of contributors. Zlib has lots of forks and overall probably more attention than lzma. Bz2 is rarely used anymore. So that leaves lzma.
Someone thought it would be a good idea to troll open source community and make it look worse than closed source, so that closed source security can be popularised (“security” trolls in FOSS community I harp about love such ideas, beware of any Graphene/Chrome/Apple and Big Tech lovers just as example)
Tying into the idea of making FOSS ecosystem look bad, it might be a concerted effort by closed source company/companies to propel themselves above, as FOSS development is shitting on closed source corporate model
A different approach, it could be the first step in a series of steps to dismantle FOSS ecosystem, considering how much trust and transparency it has that attracts everyone enlightened enough
This is why it surprised me to learn that this was noticed/announced by an MS employee.
I'd be super surprised if this was Western intelligence. Stuxnet escaping Natanz was an accident, and there is no way that an operation like this would get approved by the NSA's Vulnerabilities Equities Process.
My money would be on the MSS or GRU. Outside chance this is North Korean, but it doesn't really feel like their MO.
I had assumed it was probably a state sponsored attack. This looks like it was planned from the beginning, and any cyber attack that had years of planning and waiting strikes me as state-sponsored.
Historically there have been several instances of anarcho-communist organizations and social movements flourishing.
Most of them were sabotaged by plutocrats' agents provoking violence or mischief, often just by giving angry militants in the region some materiel support and bad intel.
What if the unexpected SSH latency won’t be introduced, this backdoor would live?
I'm confused by this sentence. It uses future tense in the first clause and then conditional in the second. Are you trying to express something that could've taken place in the past? Then you should be using "had been". See conditional sentences.
What if the unexpected SSH latency hadn’t been introduced, this backdoor would live?
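For context, the backdoor was first noticed because sshd logins got noticeably slower and used more CPU. A minimal sketch of that kind of micro-benchmark (the helper name and thresholds are illustrative, not the actual tooling used in the discovery):

```python
import statistics
import time

def median_latency(fn, runs=10):
    """Run `fn` several times and return the median wall-clock
    duration in seconds -- enough resolution to spot a ~500 ms regression."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# In practice `fn` would be something like a full ssh login, e.g.
#   subprocess.run(["ssh", "localhost", "true"])
# and you'd compare the median against a known-good baseline.
```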
Unix since 1979: upon booting, the kernel shall run a single "init" process with unlimited permissions. Said process should be as small and simple as humanly possible, and its only duty will be to spawn other, more restricted processes.
Linux since 2010: let's write an enormous, complex system(d) that does everything from launching processes to maintaining user login sessions to DNS caching to device mounting to running daemons and monitoring daemons. All we need to do is write flawless code with no security issues.
Linux since 2015: we should patch unrelated packages so they notify our humongous system manager about whether they're still running properly. It's totally fine to make a bridge between a process that accepts data from outside before anyone has even logged in and our absolutely secure system manager.
Excuse the cheap systemd trolling; yes, it does actually split into several less-privileged processes, but I consider the entire design unsound. Not least because it creates a single, large provider of connection points that becomes ever more difficult to replace or create alternatives to (much as would happen to web standards if only a single browser implementation existed).
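Worth noting: the notification mechanism that pulled liblzma into sshd (sd_notify) is just a datagram on a unix socket, so a daemon can speak it in a few lines without linking libsystemd and its whole dependency tree. A minimal sketch of the protocol (my own helper, not libsystemd's API):

```python
import os
import socket

def sd_notify(state="READY=1"):
    """Send a readiness notification to the service manager, if any.

    Implements the sd_notify wire protocol directly: a datagram sent to
    the unix socket named in $NOTIFY_SOCKET. Returns False when not
    running under a service manager.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):  # abstract-namespace socket
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode(), addr)
    return True
```

After the xz incident, several projects moved to exactly this kind of tiny reimplementation instead of linking the library.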
its only duty will be to spawn other, more restricted processes.
Perhaps I'm misremembering things, but I'm pretty sure SysVinit didn't run any "more restricted processes". It ran a bunch of bash scripts as root, and said bash scripts were often absolutely terrible.
I'm curious about the distro maintainers who were running bleeding edge with this exploit present. How do we know the bad actors didn't compromise their systems in the interim?
The potential of this would have been catastrophic had it made its way into the stable versions: they could, for example, have accessed the build servers for Tor or Tails or Signal and targeted the build processes, not to mention banks, governments, and who knows what else... Scary.
I'm hoping things change and we start looking at improving processes in the whole chain. I'd be interested to see discussions in this area.
I think the fact that they targeted this package means that other similar packages will be attacked. A good first step would be identifying packages used by many projects but maintained by one or very few devs, even more so if the package runs with root access. More devs means more chance of scrutiny, so attackers would likely go for packages with one or few devs to improve their odds of success.
I also think there needs to be an audit of every package shipped in the distros. A huge undertaking; perhaps it can be crowdsourced, and the big companies (FAAGMN etc.) should heavily step up here and set up a fund for audits.
What do you think could be done to mitigate or prevent this in future ?
Interesting to hear, and it wouldn't surprise me either, tbh. At least none of my systems were vulnerable, apparently, which is good because I am running the latest Ubuntu LTS and latest Proxmox; if those had been affected, wow, this would have hit so many more people.
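For anyone wanting the same reassurance: only xz/liblzma releases 5.6.0 and 5.6.1 shipped the backdoor, so checking the installed version is a reasonable first pass (a sketch with my own helper names; your distro's security advisory is the authoritative source):

```python
import re
import subprocess

# xz/liblzma releases known to contain the backdoor (CVE-2024-3094)
BACKDOORED = {"5.6.0", "5.6.1"}

def parse_xz_version(banner):
    """Extract the version number from `xz --version` output."""
    m = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", banner)
    return m.group(1) if m else None

def local_xz_is_backdoored():
    """Check the xz binary on PATH against the known-bad versions."""
    banner = subprocess.run(["xz", "--version"],
                            capture_output=True, text=True).stdout
    return parse_xz_version(banner) in BACKDOORED
```

Note that a version match alone isn't proof either way (distros shipped patched 5.6.x builds afterwards), which is why the advisories matter.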