Thought this was a good read exploring some of the “how and why”, including the several apparent sock-puppet accounts that convinced the original dev (Lasse Collin) to hand over the baton.
Imagine finding a backdoor within 45 days of its release into a supply chain, instead of months after infection. This is astoundingly rapid discovery.
Fedora 41 and Rawhide, Arch, a few testing and unstable Debian branches, and some projects like Homebrew were affected. That’s not counting Microsoft and other corporations who don’t disclose their stack.
What a time to be alive.
Source: https://turnoff.us/
Arch was never affected, as described in their news post about it. Arch users had the malicious code on their hard disks, but not the part that would have called into it (Arch’s sshd isn’t linked against liblzma).
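For the curious, one quick check people ran during the incident was simply whether their sshd pulls in liblzma at all. A stdlib-only sketch of that check (Linux-specific, since it shells out to ldd):

import shutil
import subprocess

# Does this sshd pull in liblzma at all (directly, or transitively via
# libsystemd)? If not, the hooked code path was never reachable.
sshd = shutil.which("sshd") or "/usr/sbin/sshd"
out = subprocess.run(["ldd", sshd], capture_output=True, text=True).stdout
print("sshd links liblzma:", "liblzma" in out)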
Before resting on our laurels, we should consider that it may be more widespread, just not being disclosed until after it’s patched.
It would be wise to be on the lookout for security patches for the next few days.
Consider this the exception to the rule. There’s no reason we should assume this timeline is the norm.
True. Though remarkable is still remarkable.
Notably, the timeline post-discovery is still stellar, regardless of Microsoft/GitHub cock-blocking analysis.
Disguising the payload as a corrupted test file and then ‘uncorrupting’ it at build time is crazy
Also pretty bad is that it intersects with another problem: the bus factor.
Having just one person as maintainer of a library is pretty bad. All it takes is one accident and no one knows how to maintain it.
So, you’re encouraged to add more maintainers to your project. But yeah, who do you add, if it’s a security-critical project? Unless you happen to have a friend that wants to get in on it, you’re basically always picking a stranger.
Unless you happen to have a friend that wants to get in on it, you’re basically always picking a stranger.
At risk of sounding tone-deaf to the situation that caused this: that’s what community is all about. The likelihood that you truly know the neighbors you’ve talked to for years is practically nil. Your boss, your co-workers, your best friend and everyone you know has some facet to them you have never seen. The unknown is the heart of what makes someone a stranger.
We must all trust someone, or we are alone.
Finding strangers to collaborate with, who share your passions, is what makes society work. The internet allows you ever greater access to people you would otherwise never have met, both good and bad.
Everyone you’ve ever met was once a stranger. To make them known, extend blind trust, then quietly verify.
honestly these people should be getting paid if a corporation wants to use a small one-man foss project for their own multibillion software. the lawyer types in foss could put that in GPLv5 or something whenever we feel like doing it.
also hire more devs to help out!
If you think people are going to be trustworthy just because they are getting paid you are naive.
not trustworthy per se, but maybe less overworked and less inclined to review code hastily, or less tired and less prone to the poor judgement that makes such a project more vulnerable to stuff like this.
these people maintain the basis of our entire software infrastructure thanklessly for us, in between the full-time jobs they need to survive; this has to change.
as for trust in foss projects, the community will often notice bad faith code just like they just did (and very quickly this time, i might add!)
I guess you are using trust in a different way here. Trust in competency can vary with both volunteer and paid workers, everyone makes mistakes though. Trust that someone doesn’t do something deliberately malicious is a different matter though.
i can’t see how paying someone would have changed anything in this scenario.
this seems to have been a long-running campaign to get someone into a position where they could introduce malicious code. the only thing different would have been that the bad actor would have been paid by someone.
this is not to say that people working on foss should not be paid. if anything, we need more people actively reviewing code and release artifacts even if they are not a contributor or maintainer of a piece of software.
i can’t see how paying someone would have changed anything in this scenario.
we need more people actively reviewing code and release artifacts
I think you’ve answered your own question there
no, the solution is not to pay someone to have someone to blame if shit happens.
there are a busload of people involved on the way from a git repo to actual stuff running on a machine, and everyone in that chain is responsible for keeping an eye on what they are building/packaging/installing/running. if something seems off, it’s their responsibility to investigate and communicate with each other.
attacks like this will not be solved by paying someone to read source code, because the code in the repo might not be what is going to run on a machine, or might look absolutely fine in a vacuum, or will be altered by some other part in the chain. and even if you have dedicated code readers, you can’t be sure that they are not compromised, or that their findings will reach the people running/packaging/depending on the software.
Of course you can’t be sure anyone involved, paid or not, isn’t compromised. But if you want more human effort put into a project, people need a reason to do so. Complaining that volunteer contributors don’t spend enough of their time and effort with no compensation isn’t going to solve anything. Maybe AI tools will make that work more available in the near future.
If my job didn’t pay me, I would have certainly burned out years ago. For one, I’d need another job.
I think bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.
In the event of the complete loss of a sole maintainer, it should be possible to fork and continue a project. That does require a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve and change potentially thousands of packages that rely upon the project as a dependency.
Maybe, before a library or any software gets accepted into a distro, that distro should do more due diligence to ensure it’s a sustainable project and meets requirements like solid ownership?
The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I’ve never tried to get a distro to accept my software.
Nothing I’ve seen would completely avoid the risk. Blackmail of an existing developer is not impossible to imagine. Even in this case, perhaps the new developer in xz started with pure intentions and got personally compromised later? (I don’t seriously think that is the case here, though - this feels very much state-sponsored and very well planned.)
It’s good we’re asking these questions. None of them are new, but the importance is ever increasing.
Maybe, before a library or any software gets accepted into a distro, that distro should do more due diligence to ensure it’s a sustainable project and meets requirements like solid ownership?
And who is supposed to do that work? How do you know you can trust them?
Fair point.
If the distro team is compromised, then that leaves all their users open too. I’d hope that didn’t happen, but you’re right, it’s possible.
- Careful choice of program to infect the whole Linux ecosystem
- Time it took to gain trust
- Level of sophistication in introducing a backdoor into an open source product
All of these are signs of a persistent threat actor, a.k.a. state-sponsored hackers. Though we may never know the real motive, as it’s now a failed project.
imagine how pissed they are. or maybe they silently alerted the microsoft guy themselves, as they only did it for cash and they’d already been paid
I am sure most superpowers in the world can easily sink 2 years into maintaining an obscure project in order to break a system as important as OpenSSH.
I doubt they will be pissed over one failure, and we can only hope there aren’t more vulnerable projects out there (spoiler alert: probably many).
Hopefully shows why you should never trust closed source software
If the world didn’t have source access then we would have never found it
And if they do find it, it’ll all be kept hush hush, they’ll force an update on everyone with no explanation, some people will do everything in their power to refuse because they need to keep their legacy software running, and the exploit stays alive in the wild.
open source software getting backdoored by nefarious committers is not an indictment of closed source software in any way. this was discovered by a microsoft employee due to its effect on cpu usage and its introduction of faults in valgrind, neither of which required the source to discover.
the only thing this proves is that you should never fully trust any external dependencies.
The difference here is that if a state actor wants a backdoor in closed source software they just ask/pay for it, while they have to con their way in for half a decade to touch open source software.
How many state assets might be working for Microsoft right now, and we don’t get to vet their code?
Double-edged sword in this case. Open source is what allowed that backdoor in this case.
It was introduced by a maintainer, not by a random push
Closed source software has maintainers as well, the company that makes it
I cannot be sure, but I believe Lasse never met “Jia Tan”. You usually don’t get employed by a company writing closed source software without meeting and talking to several people. And since nobody works without a salary, you get some sort of trace of the person’s identity as well.
“Paid for by a state actor” Yes, who knows.
- Could be a lone “black hat” or a group of “black hats”. Who knows.
- Could be the result of a lot of public criticism in the news regarding Pegasus spyware. Who knows.
- Could be paid by companies without any state actors involved. Who knows.
- Could be a lone programmer who wants power or is seeking revenge for some heated mailing list discussion. Who knows.
The question of trust has been mentioned in this case of a sole maintainer with health problems. What I asked myself is: how did this trust develop years ago? People trusted Linus Torvalds and used the Linux kernel to build Linux distributions, to the point that the Linux kernel grew from a tiny hobby thing into a giant project. At some point compiling from source code became less fashionable and most people downloaded and installed binaries. New projects started, and instead of tar and gzip, things like xz and zstd were embraced. When do you trust a person or a project, and who else gets on board of a project? Nowadays something like:
curl -sSL https://yadayada-flintstones-revival.com | bash
is considered perfectly normal as the default installation method for some software. Open source software is cool and has produced something of a revolution in technology, but there is still a lot of work to do.
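A slightly less blind alternative than piping straight to bash, sketched with the stdlib only; the URL and checksum below are placeholders, not a real project’s values:

import hashlib
import subprocess
import urllib.request

# Fetch the installer, verify it against a checksum obtained out-of-band,
# and only then hand it to bash. URL and hash are placeholders.
URL = "https://example.com/install.sh"
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

script = urllib.request.urlopen(URL).read()
digest = hashlib.sha256(script).hexdigest()
if digest != EXPECTED_SHA256:
    raise SystemExit(f"refusing to run: checksum mismatch ({digest})")
subprocess.run(["bash"], input=script, check=True)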
Strongly doubt it’s a lone actor for the reasons already given.
Bootstrapping a full distribution from a 357-byte seed file is possible in GUIX:
If that seed is compromised, then the whole software stack just won’t build.
It’s an answer to the “Trusting Trust” problem outlined by Ken Thompson in 1984.
Reading a bit into this: https://guix.gnu.org/manual/en/html_node/Binary-Installation.html The irony!
The only requirement is to have GNU tar and Xz.
Hahaha! Oh dear
That’s cool. Thank you.
Some of the trust comes from eyes on the project thanks to it being open source. This thing got discovered, after all. Not right away, sure, but before it spread everywhere. Same question of trust applies to commercial software too.
Ideally, PR reviews help with this, but smaller projects, especially those with few contributors, may not do much of that. I doubt anyone has spent time understanding the software supply chain (SSC) attack surface of their product, but that seems like a good next step. Someone needs to write a tool that scans the SSC repos and flags certain measures, like the number of maintainers (a sketch of what that could look like is below).
PS: I have the worst allergies I’ve had in ages today and my brain is in a histamine fog so maybe I shouldn’t be trying to think about this stuff right now lol cough uuugh blows nose
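A minimal sketch of that kind of scanner, assuming the public GitHub REST API; the repo list is a stand-in for a real SSC inventory, and authentication/rate limits are ignored:

import json
import urllib.request

# Flag dependencies whose upstream rests on very few people. The endpoint is
# the real GitHub contributors API; REPOS and THRESHOLD are assumptions.
REPOS = ["tukaani-project/xz", "facebook/zstd"]
THRESHOLD = 3

for repo in REPOS:
    url = f"https://api.github.com/repos/{repo}/contributors?per_page=100"
    with urllib.request.urlopen(url) as resp:
        contributors = json.load(resp)
    if len(contributors) < THRESHOLD:
        print(f"LOW BUS FACTOR: {repo} has only {len(contributors)} contributors")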
Any speculations on the target(s) of the attack? With Stuxnet, the US and Israel were willing to infect the whole world to target a few nuclear centrifuges in Iran.
Stuxnet was an extremely focused attack, targeting specific software on specific PLCs in a specific way to prevent them mixing up nuclear batter into a boom boom cake. Even if it managed to affect the whole world, it was a laser compared to this wide net.
Definitely a state-sponsored attack. It could be any nation, from the US to North Korea, and any other nation in between.
There is some indication based on commit times and the VPN used that it’s somewhere in Asia. Really interesting detail in this write up.
The timezone bit is near the end iirc.
Good writeup.
The use of ephemeral third party accounts to “vouch” for the maintainer seems like one of those things that isn’t easy to catch in the moment (when an account is new, it’s hard to distinguish between a new account that will be used going forward versus an alt account created for just one purpose), but leaves a paper trail for an audit at any given time.
I would think that Western state sponsored hackers would be a little more careful about leaving that trail of crumbs that becomes obvious in an after-the-fact investigation. So that would seem to weigh against Western governments being behind this.
Also, the last bit about all three names seeming like three different systems of Romanization of three different dialects of Chinese is curious. If it is a mistake (and I don’t know enough about Chinese to know whether having three different dialects in the same name is completely implausible), that would seem to suggest that the sponsors behind the attack aren’t that familiar with Chinese names (which weighs against the Chinese government being behind it).
Interesting stuff, lots of unanswered questions still.
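On the audit-trail point: account age at the moment of a “vouch” is exactly the kind of thing that is cheap to script after the fact. A sketch against the GitHub users API (the xz vouching happened on a mailing list, so a real audit would need other data sources; the login and timestamp below are made up):

import json
import urllib.request
from datetime import datetime, timezone

# How old was an account when it "vouched"? The users endpoint is the real
# GitHub REST API; the login and timestamp below are made-up examples.
def account_age_days(login: str, at: datetime) -> float:
    with urllib.request.urlopen(f"https://api.github.com/users/{login}") as r:
        created_at = json.load(r)["created_at"]      # ISO 8601, e.g. "...Z"
    created = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    return (at - created).total_seconds() / 86400

vouch_time = datetime(2022, 6, 1, tzinfo=timezone.utc)    # hypothetical
print(account_age_days("octocat", vouch_time))            # flag if tiny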
What is the trail of crumbs? Just some random email accounts?
This was in large part a social engineering attack, so you can’t really avoid contact.
Given how low level it is and the timespan involved, there probably wasn’t a specific use in mind. Just adding capability for a future attack to be determined later.
My guesses wildly range on this topic.
- Facebook probably wanted Zstd adoption over XZ/LZMA
- There was probably an analysis of who uses LZMA compression a lot, and it so happens that archivists, pirates, people and countries with low bandwidth, people in Russia, game repackers et al. use it a lot compared to the “good law-abiding”, money-blinded consumers of rich countries
- Somebody wanted to screw over LZMA/XZ/7Z users
- (most favourite right now) implanting a network backdoor into Linux servers and ecosystems
- Someone thought it would be a good idea to troll open source community and make it look worse than closed source, so that closed source security can be popularised (“security” trolls in FOSS community I harp about love such ideas, beware of any Graphene/Chrome/Apple and Big Tech lovers just as example)
- Tying into the idea of making FOSS ecosystem look bad, it might be a concerted effort by closed source company/companies to propel themselves above, as FOSS development is shitting on closed source corporate model
- A different approach, it could be the first step in a series of steps to dismantle FOSS ecosystem, considering how much trust and transparency it has that attracts everyone enlightened enough
I could think of many other scenarios and outcomes if I put enough time, but I think this should be enough food for thought. The beneficiaries are limited, the actors few, and the methods cannot vary too much.
The world needed the open internet to bootstrap the digital revolution. It wasn’t possible without the sum of humanity working altruistically to build the Library of Alexandria of software. No private entity could have possibly done it. It truly is an underappreciated marvel of the late-20th/early-21st century. FOSS contains the knowledge of software that runs the world. Now that such a thing exists, I could totally see organizations (loosely speaking) wanting to conquer or ransack it. It’s quite clear by now there’s a faction of tech with a tyrannical bent. I’d put them, whoever they might be exactly, down as possible culprits.
Funny coincidence for me, but I just learned this listening to a podcast called Behind the Bastards: The Ballad of Bill Gates. It talked about how one of the reasons MS became so big was because so many people shared MS BASIC back in the day, but then Gates worked so hard against piracy afterwards despite that fact. So basically just one aspect of what you are talking about.
The first 3 seem incredibly far-fetched.
- What exactly does Facebook gain from more people using zstd, other than more contributions and improvements to zstd and the ecosystem (i.e. the reason corporations are willing to open-source stuff)?
- Why do you consider lzma to be loved among pirates and hackers and zstd not to be, when zstd is incredibly popular and well-loved in the FOSS community and compresses about as well as lzma?
- Every person in the world uses both lzma and zstd extensively, even if indirectly without them realizing it.
I think it’s likely that, of all the mainstream compression formats, lzma was the least audited (after all, it was being maintained by one overworked person). Zstd has lots of eyes on it from Google and Facebook, all of the most talented experts in the world on data compression contributing to it, and lots of contributors. Zlib has lots of forks and overall probably more attention than lzma. Bz2 is rarely used anymore. So that leaves lzma.
Cloudflare deploys Zstd, and many web servers and CDNs use it. Endless possibilities for Facebook and the US gov. They could put Yann Collet out of the way or slap a gag order on him.
LZMA has the highest compression ratio of any mainstream algorithm outside of PAQ and SuperRep+LOLZ, while being orders of magnitude faster than both. Zstd’s compression ratio is a joke and is only good for webpage asset loading times.
Facebook may be evil but I don’t think they’re anywhere near “inject malware into global supply chains to push adoption of a public engineering side project that they don’t directly profit from and most executives don’t care about” level of evil. Is it possible? Sure anything is possible, but that is wildly beyond many many more plausible explanations and there’s zero evidence leading us down this path. And why would they go through the trouble of backdooring zstd, which has a highly observed codebase, when they just successfully backdoored lzma because it didn’t have a lot of maintainers?
While it’s true that zstd is commonly favored for having “good” compression at blazingly fast speeds, which is useful on the web and on servers, Zstd’s max compression setting (
zstd --long -19
) is actually within about 5% of LZMA’s, but faster, so it replaces most use cases of LZMA except when that extra 5% (and that’s not even constant; some inputs are even better on zstd) really does matter at any speed cost.

I have extensively benchmarked Zstd and it is a joke compared to LZMA2 when it comes to compression ratio. And not only that: the lack of features in Zstd, which 7z does have, makes it a far bigger joke. 7z is a feature-complete archival solution unlike Zstd, with options for archive repair. RAR is far superior for bitrot resistance.
The possibilities Facebook and the US gov would get from backdooring XZ are endless, since it could destroy trust in it if it went uncaught, and Zstd adoption would mean web malware deployment becomes a matter of when, because Facebook already does this right now with actual malware JS scripts served through the fbcdn domain.
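Setting the speculation aside, the ratio dispute above is easy to test on your own data. A minimal sketch comparing the stdlib lzma module against zstd at level 19, assuming the third-party zstandard package (note it doesn’t enable zstd’s long-window mode, so it only approximates zstd --long -19):

import lzma
import sys

import zstandard   # third-party: pip install zstandard

# Compress the same file both ways and print the ratios.
data = open(sys.argv[1], "rb").read()

xz_size = len(lzma.compress(data, preset=9))
zstd_size = len(zstandard.ZstdCompressor(level=19).compress(data))

print(f"original: {len(data):>10}")
print(f"xz -9   : {xz_size:>10} ({xz_size / len(data):.1%})")
print(f"zstd -19: {zstd_size:>10} ({zstd_size / len(data):.1%})")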
- Someone thought it would be a good idea to troll open source community and make it look worse than closed source, so that closed source security can be popularised (“security” trolls in FOSS community I harp about love such ideas, beware of any Graphene/Chrome/Apple and Big Tech lovers just as example)
- Tying into the idea of making FOSS ecosystem look bad, it might be a concerted effort by closed source company/companies to propel themselves above, as FOSS development is shitting on closed source corporate model
- A different approach, it could be the first step in a series of steps to dismantle FOSS ecosystem, considering how much trust and transparency it has that attracts everyone enlightened enough
This is why it surprised me to learn that this was noticed/announced by an MS employee.
I’d be super surprised if this was Western intelligence. Stuxnet escaping Natanz was an accident, and there is no way that an operation like this would get approved by the NSA’s Vulnerabilities Equities Process.
My money would be on the MSS or GRU. Outside chance this is North Korean, but it doesn’t really feel like their MO.
Lol that Jia Tan there cracked me up
I had assumed it was probably a state sponsored attack. This looks like it was planned from the beginning, and any cyber attack that had years of planning and waiting strikes me as state-sponsored.
Historically there have been several instances of anarcho-communist organizations and social movements flourishing.
Most of them were sabotaged by plutocrat agents invoking violence or mischief. Often just by giving angry militants in the region some materiel support and bad intel.
What if the unexpected SSH latency hadn’t been introduced, this backdoor would live?
I wonder how many OSS projects include backdoors that don’t appear in performance checks
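Worth remembering how thin the detection signal was: the backdoor was noticed partly because sshd got measurably slower. A crude sketch of watching for that kind of regression by timing the SSH banner exchange (host, port, and sample count are assumptions; the real analysis profiled full logins):

import socket
import time

# Time how long sshd takes to send its version banner after connect.
# Crude proxy for the slowdown that exposed the backdoor.
def time_banner(host="localhost", port=22):
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=10) as s:
        s.recv(256)   # sshd greets immediately, e.g. b"SSH-2.0-OpenSSH_9.6\r\n"
    return time.monotonic() - start

samples = [time_banner() for _ in range(5)]
print("banner times:", [f"{t * 1000:.0f} ms" for t in samples])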
What if the unexpected SSH latency won’t be introduced, this backdoor would live?
I’m confused by this sentence. It uses future tense in the first clause and then conditional in the second. Are you trying to express something that could’ve taken place in the past? Then you should be using “had been”. See conditional sentences.
What if the unexpected SSH latency hadn’t been introduced, this backdoor would live?
Or are you trying to express something else?
Thanks, what you wrote is what I meant:
What if the unexpected SSH latency hadn’t been introduced, this backdoor would live?
~~Linux~~ Unix since 1979: upon booting, the kernel shall run a single “init” process with unlimited permissions. Said process should be as small and simple as humanly possible and its only duty will be to spawn other, more restricted processes.

Linux since 2010: let’s write an enormous, complex system(d) that does everything from launching processes to maintaining user login sessions to DNS caching to device mounting to running daemons and monitoring daemons. All we need to do is write flawless code with no security issues.
Linux since 2015: we should patch unrelated packages so they send notifications to our humongous system manager about whether they’re still running properly. It’s totally fine to build a bridge between a process that accepts data from outside before anyone even logs in and our absolutely secure system manager.
Excuse the cheap systemd trolling; yes, it actually does split into several less-privileged processes, but I consider the entire design unsound. Not least because it creates a single, large provider of connection points that becomes ever more difficult to replace or create alternatives to (similar to how web standards would fare if only a single browser implementation existed).
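For contrast, a toy PID-1 in that traditional spirit really is tiny. A sketch only: the service paths are hypothetical, and a real init also handles signals, privilege dropping, reboot, and much more:

import os

# Toy PID-1: spawn services, reap zombies, restart what dies. Nothing more.
SERVICES = [["/usr/sbin/sshd", "-D"], ["/usr/sbin/crond", "-n"]]   # hypothetical

def spawn(argv):
    pid = os.fork()
    if pid == 0:                     # child: exec into the service
        try:
            os.execv(argv[0], argv)
        finally:
            os._exit(1)              # never fall back into the init loop
    return pid

children = {spawn(argv): argv for argv in SERVICES}
while True:
    pid, status = os.wait()          # PID 1 inherits and must reap every orphan
    if pid in children:              # supervise: restart a service that died
        argv = children.pop(pid)
        children[spawn(argv)] = argv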
Yes, I remember Linux in 1979…
Linus was a child prodigy.
And so the microkernel vs monolithic kernel debate continues…
its only duty will be to spawn other, more restricted processes.
Perhaps I’m misremembering things, but I’m pretty sure SysVinit didn’t run any “more restricted processes”. It ran a bunch of bash scripts as root. Said bash scripts were often absolutely terrible.
You mean Unix for the first one
I’m curious to know about the distro maintainers that were running bleeding edge with this exploit present. How do we know the bad actors didn’t compromise their systems in the interim?
The potential of this would have been catastrophic had it made its way into stable versions. They could have, for example, accessed the build servers for Tor or Tails or Signal and targeted the build processes, not to mention banks and governments and who knows what else… Scary.
I’m hoping things change and we start looking at improving processes in the whole chain. I’d be interested to see discussions in this area.
I think the fact they targeted this package means that other similar packages will be attacked. A good first step would be identifying those packages that are used by many projects and have one or very few devs, even more so if they run with root access. More devs means more chance of scrutiny, so attackers will likely go for packages with one or few devs to improve their odds of success (a rough sketch of that identification step follows below).
I also think there needs to be an audit of every package shipped in the distros. A huge undertaking; perhaps it can be crowdsourced, and the big companies (FAANG, Microsoft, etc.) should heavily step up here and set up a fund for audits.
What do you think could be done to mitigate or prevent this in the future?
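A very rough first pass at that identification step on a Debian-based system, counting reverse dependencies with apt-cache (the package names below are just examples):

import subprocess

# Count reverse dependencies per package with apt-cache; packages that many
# things depend on but few people maintain are the xz shape.
PACKAGES = ["xz-utils", "zlib1g", "libzstd1"]

for pkg in PACKAGES:
    out = subprocess.run(["apt-cache", "rdepends", pkg],
                         capture_output=True, text=True).stdout
    # apt-cache prints a header, then one indented line per reverse dependency
    rdeps = [line for line in out.splitlines() if line.startswith("  ")]
    print(f"{pkg}: {len(rdeps)} reverse dependencies")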
Interesting to hear and it wouldn’t surprise me either tbh. At least none of my systems were vulnerable apparently, which is good because I am running the latest Ubuntu LTS and latest Proxmox - if those were affected then wow this would have affected so many more people.
At least none of my systems were vulnerable apparently
none that you know of
I ran the detection script; that’s why I claim that apparently my systems were not vulnerable.
Do you think Jia Tan is alive now to talk about his famous bug?
Jia Tan is most definitely not a person, just the publicly facing account of a group of people.