Malicious code planted in xz Utils has been circulating for more than a month.
The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.
“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” an official with distributor Openwall wrote in an advisory. “Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’” provided in recent updates.
On Thursday, someone using the developer's name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.
“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.
One of the maintainers for Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.
“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Fedora maintainer said.
He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise.
Bet you anything there were more pairs of eyes on SolarWinds code than this. Sick of this open source bystander effect.
Code scanners check for vulnerabilities, not malicious code. Ain't no one running full-coverage dynamic scanners to trigger every branch of code on this thing, otherwise this would've been caught immediately.
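The point about branch coverage can be illustrated with a toy sketch (all names and the trigger value here are hypothetical, not the actual xz mechanism): a hidden branch that fires only on one exact input value stays cold under even a huge volume of random dynamic testing, so coverage-driven scanning never flags it.

```python
import random

def decompress(data: bytes) -> bytes:
    """Toy 'utility' function with a hidden branch (hypothetical, for illustration)."""
    # Malicious branch: fires only on one exact 16-byte trigger value,
    # so random/dynamic testing almost never reaches it.
    if data.startswith(b"\x7fMAGIC-TRIGGER!!"):
        return b"<backdoor activated>"
    # Normal path: the behavior every ordinary test exercises.
    return data[::-1]

# Simulate a "full coverage" dynamic scan with 100,000 random 16-byte inputs:
# the hidden branch stays cold, so branch coverage never reports it as reached.
random.seed(0)
hits = 0
for _ in range(100_000):
    payload = bytes(random.getrandbits(8) for _ in range(16))
    if decompress(payload) == b"<backdoor activated>":
        hits += 1

print(hits)  # the trigger is a 1-in-2**128 event, so this prints 0
```

A real payload-gated backdoor is harder still, since the trigger can be cryptographically keyed so only the attacker can ever produce it.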
The researchers found that open-source programmers fixed Linux issues in an average of only 25 days. In addition, Linux's developers have been improving their speed in patching security holes from 32 days in 2019 to just 15 in 2021.
Its competition didn't do nearly as well. For instance, Apple, 69 days; Google, 44 days; and Mozilla, 46 days.
Coming in at the bottom was Microsoft, 83 days, and Oracle with 109 days.
By Project Zero's count, others, which included primarily open-source organizations and companies such as Apache, Canonical, Github, and Kubernetes, came in with a respectable 44 days.
You are an idiot. It’s not blind. That’s how it was found.
From the article: Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview, “BUT that's only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”
The fact that it was discovered early due to bad actor sloppiness does not imply that it could not have also been caught prior to widespread usage via the security audits that take place for many enterprise-grade Linux distributions.
You can put the pom-poms/rifle down, I'm not attacking open source, not in the slightest. I'm a big believer in open source.
But I also know that volunteer work is not always as rigorous as paid work.
The only point I'm trying to make in this conversation is getting confirmation of whether security audits are actually done, or whether everyone just assumes they're done because of "Open Source" reasons.
And that's precisely why the exploit was found. If it were a closed-source program, a lone threat actor could modify the code and slip it into a release, and no one would find out. In that case everyone trusts the internal security team of a closed-source company blindly. I really don't see this as an open source issue. These are malicious actors.
Single point of failure on the lone maintainer of a popular package, vs having to hack an entire company like SolarWinds and make a backdoor that bypasses their entire SDLC. Which is harder?
A better way to compare the two would be a lone dev releasing open source software vs a lone dev releasing closed source. And a company releasing open source vs another company of the same size releasing closed source.
There's plenty of closed source packages or components with a single actor ultimately accountable for it.
Imagine a tester even bothering to open a bug because starting a session takes 500ms longer than it used to. Imagine what the development manager would do with that defect. Imagine a customer complaining about it, and the answer the company would give. At best they might identify the problematic component, then ask its sole maintainer for the "working as designed" explanation, and that explanation wouldn't be held to scrutiny, because at that point it's just a super minor performance complaint.
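A ~500ms regression is exactly the kind of signal that tipped off the xz discovery, and it is cheap to guard for in a test suite. Here is a minimal sketch (the function name, baseline, and tolerance are all hypothetical placeholders) of a latency regression check:

```python
import time
import statistics

def start_session() -> None:
    """Stand-in for the operation under test (hypothetical)."""
    time.sleep(0.01)  # pretend baseline work of ~10ms

def measure_ms(fn, runs: int = 20) -> float:
    """Median wall-clock latency of fn in milliseconds (median resists outliers)."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.median(samples)

# A recorded baseline (e.g. from CI history) and a generous tolerance;
# the xz-induced sshd slowdown was roughly on this order.
BASELINE_MS = 10.0
TOLERANCE_MS = 500.0

latency = measure_ms(start_session)
regressed = latency > BASELINE_MS + TOLERANCE_MS
print(f"{latency:.1f} ms, regressed={regressed}")
```

In practice you would fail the CI job when `regressed` is true; the point is only that the check is trivial to automate, whereas a human filing that bug report is rare.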
No, closed source is every bit as susceptible, if not more so, because management is constantly trying to make all those tech people stop wasting time on little stuff that doesn't matter, and no one outside is allowed to volunteer their interest in investigating.
Checking time to login is more likely in the security sector than anywhere else. A number of vulnerabilities based on timing have been identified and removed in the past.
So many vulnerabilities were found through time-to-login that one standard countermeasure is to take longer to respond to a bad login, so attackers can't tell which part of the check failed.
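The classic instance of this timing leak is string comparison: a naive `==` on secrets short-circuits at the first mismatching byte, so response time reveals how long a correct prefix the attacker has found. Python's standard library offers `hmac.compare_digest` as the constant-time alternative (the secret below is of course a made-up example):

```python
import hmac

SECRET = b"correct-horse-battery-staple"  # hypothetical secret, for illustration

def check_naive(guess: bytes) -> bool:
    # bytes == short-circuits at the first mismatching byte, so the
    # response time leaks how much of the guess is a correct prefix.
    return guess == SECRET

def check_constant_time(guess: bytes) -> bool:
    # hmac.compare_digest examines the input without short-circuiting
    # on the first mismatch, closing that timing side channel.
    return hmac.compare_digest(guess, SECRET)

print(check_naive(b"wrong"))        # False
print(check_constant_time(SECRET))  # True
```

The same reasoning is why login systems deliberately respond to a bad username and a bad password in the same amount of time.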
You can blindly trust whatever software you like. Most of us, even those who can code, blindly trust whatever software we use, because we have other priorities. But what you can do only with open source software is open your eyes, if you so choose.
Ftfy: And that's why you cannot trust people blindly.
Just because we can't observe the code of proprietary software and it's sold legally doesn't mean it's all safe. Genuinely, I distrust anything with a profit incentive.