Not discrediting Open Source Software, but nothing is 100% safe.
Luckily there are people who do know, and we verify things for our own security and for the community as part of keeping Open Source projects healthy.
Open source software is safe because somebody knows how to audit it.
And to a large extent, there are automated tools that can audit things like dependencies. Those tools are also largely open source because hey, nobody's perfect. But this only works when your source is available.
Except when people pull off shit like Heartbleed.
See my comment below for more of my thoughts on why I think heartbleed was an overwhelming success.
And you help make my point, because OpenSSL is a dependency that's easily discovered by tools like Dependabot and Renovate. So when the next Heartbleed happens, we can spread the fixes even more quickly.
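As a rough illustration of the same idea run locally (the tool choice here is just an assumption, nothing anyone above is required to use):

```
# scan a Node project's dependency tree for known vulnerabilities
npm audit

# same idea for a Python environment (pip-audit is a separate install)
pip install pip-audit
pip-audit
```

Dependabot and Renovate essentially run this kind of check continuously against a repository and open pull requests when a dependency needs bumping.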
Enterprise software inventory can unfortunately be quite chaotic, and understanding the exposure to this kind of vulnerability can take weeks if not longer.
It’s safe because there’s always a loud nerd who will make sure everyone knows if it sucks. They will make it their life mission
Will that nerd be heard or be buried under the scrutiny?
I'll listen to them because I love OSS drama. But you're right that they may just get passed over by the community at large.
Also because those people who can audit it don’t have a financial incentive to hide any flaws they find
My very obvious rebuttal: Shellshock was introduced into Bash in 1989 and not found until 2014. It was incredibly trivial to exploit: anything that passed an attacker-controlled environment variable to Bash (CGI scripts, DHCP clients, etc.) handed over arbitrary command execution, which is insane.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
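(For anyone who wants to check their own system, this is roughly how that one-liner behaves; the exact warning text varies between Bash builds.)

```
# Shellshock (CVE-2014-6271) quick check
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

# a vulnerable bash executes the injected function body first:
#   vulnerable
#   this is a test
#
# a patched bash refuses to import the function definition and only runs the command:
#   this is a test
```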
Though one of the major issues is that people get comfortable with that idea and assume that for every open source project there is some good Samaritan out there auditing it.
I would argue that even in that scenario it’s still better to have the source available than have it closed.
If nobody has bothered to audit it then the number of people affected by any flaws will likely be minimal anyway. And you can be proactive and audit it yourself or hire someone to before using it in anything critical.
If nobody can audit it that’s a whole different situation though. You pretty much have to assume it is compromised in that case because you have no way of knowing.
Oh definitely, I fully agree. It's just that a lot of people need to stop approaching open source with an immediate, inherent level of trust that they wouldn't normally give to closed source. It's only really safer once you know it's been audited.
Have you seen the dependency trees of projects in npm? I really doubt most packages are audited on a regular basis.
The point is not that you can audit it yourself, it’s that SOMEBODY can audit it and then tell everybody about it. Only a single person needs to find an exploit and tell the community about it for that exploit to get closed.
Exactly! I wait on someone who isn’t an idiot like me to say, “ok, so here’s what’s up guys.”
deleted by creator
While I generally agree, the project needs to be big enough that somebody actually looks through the code. I would argue Microsoft Word is safer than some small abandoned open source program from some Russian developer.
deleted by creator
That's true, but I'm not a programmer, and on a GitHub project with 3 stars I can't count on someone else doing it. (Of course this argument doesn't apply to big projects like LibreOffice.) With Microsoft I can at least trust that they will get in trouble, or at least get bad press, when doing something malicious.
I mean, if a GitHub project has only 3 stars, it means no one is using it. Why does safety matter here? Early adopting anything has risks.
This is kind of a false comparison. If it has 3 stars then it doesn’t even qualify for this conversation as literally no one is using it.
deleted by creator
Ehmm, if nobody uses it, it kinda doesn't matter if it's safe. And for this example: I bet more people have had a look at the code of LibreOffice than MS Office. And I don't think it sends telemetry home in its default settings.
I think they’re talking about onlyoffice.
But eventually somebody will look and if they find something, they can just fork the code and remove anything malicious. Anyways, open source to me is not about security, but about the public “owning” the code. If code is public all can benefit from it and we don’t have to redo every single crappy little program until the end of time but can instead just use what is out there.
Especially if we are talking about software paid for by taxes. That stuff has to be out in the open (with an exception for some high security stuff - I don't expect them to open source the software used in a damn tank, a rocket or a fighter jet). Fun fact*: the software in the most advanced dildos comes from old missile guidance systems the government isn't using anymore.
*not a fact, but hopefully fun.
Maybe not a fact but I will still accept it as canon
No, missle.
Agreed.
this was indeed fun
Thanks 😁
You can get a good look at a T-bone by sticking your head up a cow’s ass but I’d rather take the butcher’s word for it.
There are people that do audit open source shit quite often. That is openly documented. I’ll take their fully documented word for it. Proprietary shit does not have that benefit.
And even when problems are found, like the Heartbleed bug in OpenSSL, they're way more likely to just be fixed and updated rather than, oh I dunno, ignored, compromising everybody's security because fixing it would cost more and nobody knows about it anyway. Bodo Möller and Adam Langley fixed the Heartbleed bug for free.
Wasn’t heartbleed in the wild for 2 years though?
Yeah, but that just happens sometimes. With proprietary software you don’t even have the benefit of being able to audit it to see if the programmers missed something critical, you kinda just have to trust that they’re smarter than a would-be hacker.
I get that, I just caution that FOSS doesn’t automatically mean secure.
Nothing is 100% secure. FOSS is definitely more secure, all else equal.
Thanks Callahan!
Closed-source software is inherently predatory.
It doesn’t matter if you can read the code or not, the only options that respect your freedom are open source.
I had a discussion with a security guy about this.
For software with a small community, proprietary software is safer. For software with a large community, open source is safer.
Private companies are subject to internal politics, self-serving managers, prioritizing profit over security, etc. Open source projects need enough skilled people focused on the project to ensure security. So smaller companies are more likely to do a better job, and larger open source projects are likely to do a better job.
This is why you see highly specialized software being run by really enterprise-y companies. It just works better going private, as much as I hate to say it. With more general software, especially utilities like OpenSSL, it's much easier to build large communities and ensure quality.
With all due respect, I have to strongly disagree. I would hold that all OSS is fundamentally better regardless of community size.
Small companies go under with startling frequency, and even with an ironclad contract, there’s often nothing you can do but take them to court when they’ve gone bankrupt. Unless you’ve specifically contracted for source access, you’re completely SOL. Profitable niche companies lose interest too, and while you may not have the same problems if they sell out, you’ll eventually have very similar problems that you can’t do anything about.
Consider any of my dozens of little OSS libraries that a handful of people have used, on the other hand. Maybe I lost interest a while ago, but it’s pretty well written still (can’t have people judging my work) and when you realize it needs to do something, or be updated (since things like dependabot can automatically tell you long after I’m gone), you’re free and licensed to go make all the changes you need to.
I think you see highly specialized software being run by enterprisey companies because that's just business, not because it's better. It's easiest to start in a niche and grow from there, but that holds true with open software and protocols too. Just look at the internet: it used to share research projects between a handful of universities, and now it has grown to petabytes of cat gifs. Or Linux: it started out as a hobby operating system for a handful of Unix geeks, and now runs 96.3 percent of the top 1 million web servers.
It always starts small and gets better if it’s good enough. This goes for OSS and companies.
Unfortunately that is not the case. Closed source software for small communities is not safer. My company had an incredibly embarrassing data leak because they outsourced some work and trusted software that was also used by the competitors. Unfortunately the issue was found by one of our customers and ended up in the newspapers.
Absolutely deserved, but still, closed source stuff is not more secure.
prioritizing profit over security
Laughs, nervously, while looking at my company’s auth db, which uses sha-256 still lol…
It never should have been anything but bcrypt/scrypt, but SHA-256 is still so much better than many of the alternatives. Hopefully it's at least salted, not just hashed.
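For what it's worth, a minimal sketch of the difference (assuming a Linux box with coreutils and apache2-utils; the username, password, and cost factor are just placeholders):

```
# fast, unsalted hash: fine for file integrity, bad for passwords
# (the same input always gives the same digest, and GPUs brute force it cheaply)
printf '%s' 'hunter2' | sha256sum

# slow, salted password hash: bcrypt with cost 12, via htpasswd
# (fresh random salt every time, and the cost factor makes brute forcing expensive)
htpasswd -nbBC 12 alice 'hunter2'
```

Even a salted SHA-256 stays cheap to brute force, which is the whole reason the slow, tunable-cost hashes exist.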
deleted by creator
Do you know how to audit the code?
Yes?
I don’t. But I trust you
Just learn the basics and you don't need to trust. Like… everything in science.
Now audit the linux kernel
No.
Fair enough
deleted by creator
Let’s go to Mars?
- Yes, I do it occasionally
- You don’t need to. If it’s open source, it’s open to billions of people. It only takes one finding a problem and reporting it to the world
- There are many more benefits to open source:
  a. It future-proofs the program (a lot of old software can't run on current setups without modification). Open source means you can compile a program with more recent tooling and dependencies rather than rely on existing binaries built with ancient tooling or dependencies.
  b. It removes reliance on the developer for packaging. A developer may only produce binaries for Linux, but I can take the code and compile it for macOS or Windows, or for a completely different architecture like ARM.
  c. I can contribute features to the program if they weren't the developer's priority. I can even fork it if the developer doesn't want to merge them into their branch.
Regarding point 2. I get what you’re saying but I instantly thought of Heartbleed. Arguably one of the most used examples of open source in the world, but primarily maintained by one single guy and it took 2 years for someone to notice the flaw.
So believing something is "safe" just because it is open source and "open to billions of people" can be problematic.
Uhh… so? The NSA was sitting on the vulnerability for EternalBlue in Windows for over 5 years.
Don't understand what that has to do with the discussion so far. How is this relevant here?
No more or less relevant than heartbleed. Yes vulns exist in open source software, sometimes for a while. Being open source can lead to those vulns getting discovered and fixed quicker than with closed source.
And how does this negate my initial point that you shouldn’t trust in the security of something just because it is open source? I think you misunderstood what I was saying.
deleted by creator
Alright then, have a nice day!
No, but I know a bunch of passionate geeks are doing it.
You shouldn't automatically trust open source code just because it's open source. There have been cases where something on GitHub contained actual malicious code, but those are typically not very well known or don't have many eyes on them. But in general, open source code has the potential to be more trustworthy, especially if it's very popular and has a lot of eyes on it.
It’s one reason I haven’t rushed to try out every lemmy app that has come out yet.
A lot of bad takes in here.
Here are a few things that apparently need to be stated:
- Any code that is distributed can be audited, closed or open source.
- It is easier to audit open source code because, well, you have the source code.
- Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly) or dynamic analysis (using a debugger to walk through the assembly at runtime), or both; there's a rough sketch after this list.
- Examples of vulnerabilities published by independent researchers demonstrate two things: people are auditing open source software for security issues, and people are in fact auditing closed source software for security issues.
- Vulnerabilities published by independent researchers don't demonstrate any of the wild claims many of you think they do.
- No software of a reasonable size is 100% secure. Closed or open doesn’t matter.
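To make the reverse engineering point concrete, a minimal sketch (the binary name is made up; any disassembler and debugger will do):

```
# static analysis: read the disassembly of the distributed binary
objdump -d ./vendor_app | less

# dynamic analysis: step through the same code at runtime under a debugger
gdb ./vendor_app
```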
Very good points here, especially your last point
As you increase the complexity of a system, it makes sense that your chance of a vulnerability increases. At the end of the day, open source or not, you will never beat basic algorithm principles and good coding practice.
I would however argue that just because closed source code can be reverse engineered doesn't mean it's as easy or as reliable as having the source code. As long as corporations have an interest in possession, there will always be someone striving and spending ungodly amounts of money to keep their castle grounds heavily gated, which makes securing them en masse much harder and slower.
I agree, it takes longer to audit closed source software. Just wanted to point out it’s not impossible, as long as you have a binary.
Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly) or dynamic analysis (using a debugger to walk through the assembly at runtime) or both.
How are you going to do that if it’s software-as-a-service?
See the first bullet point. I was referring to any code that is distributed.
Yeah, there’s no way to really audit code running on a remote server with the exception of fuzzing. Hell, even FOSS can’t be properly audited on a remote server because you kind of have to trust that they’re running the version of the source code they say they are.
You can always brute force the SSH login and take a look around yourself. If you leave an apology.txt file in /home, I’m sure the admin won’t mind.
Lol, unlikely SSH is exposed to the net. You’ll probably need an RCE in the service to pop a shell.
That’s not universally true, at least if you’re not on the same LAN. For example, most small-scale apps hosted on VPSs are typically configured with a public-facing SSH login.
Ohhh, code that is distributed. The implication of that word flew over my head lmao, thanks for the clarification.
deleted by creator
Second bullet point: it's much easier to audit when you have the source code. Just wanted to point out it's not impossible to audit closed source software. It's just more time consuming, and fewer people have the skills to do so.
Also, just because you can see the source code does not mean it has been audited, and just because you cannot see the source code does not mean it has not been audited. A company has a lot more money to spend on hiring people and external teams to audit their code (without needing to reverse engineer it). More so than some single developer does for their OSS project, even if most of the internet relies on it (see openssl).
And just because a company has the money to spend on audits doesn’t mean they did, and even when they did, doesn’t mean they acted on the results. Moreover, just because code was audited doesn’t mean all of the security issues were identified.
Yup, all reasons why open versus closed does not by itself determine how secure the software might be. Both open and closed source code can be developed in a more or less secure fashion. Just because something could be done does not mean it has been done.
Nah, I wouldn't say that. Especially if you consider privacy a component of security. The fact that a piece of software can more easily be independently reviewed, either by you or the open source community at large, is something I value.
Good security is a component of privacy. But you can have good security with no privacy - that is the whole idea of a surveillance state (which IMO is a horrifying concept). Both are worth having, but my previous responses were only about the security aspect of OSS. There are many other good arguments to have about the benefits of OSS, but increased security is not a valid one.
“given enough eyeballs, all bugs are shallow” …but sometimes there is a profound lack of eyeballs.
That’s exactly the problem with many open source projects.
I recently experienced this first hand when submitting some pull requests to Jerboa and following the devs: as long as there is no money funding the project, the devs are supporting it in their free time, which means little to no time for quality control. Mistakes happen… most of them are not critical, but as long as there's little to no time and expertise to audit code meaningfully and systematically, there will be bugs, and some of those bugs may be critical and security relevant.
Even when you do have time. There have been "researchers" submitting malicious PRs and, when caught, just acting like it's no big deal. It even got an entire institution banned from submitting PRs to the Linux kernel.
Well, I think in most of those big incidents, people got caught. That means the concept kinda works well?
Regarding the earlier comment: I think companies have just started to figure that out. They/you can't just take free libraries, databases, etc… If you're a big tech company, you'd better pay a few developers or fund an audit to make those libraries safe. This is your way of contributing. Otherwise your big platform will get hacked because you just took some 15-year-old's open source code.
Selection bias though. We don’t know how many have not yet been caught.
Agree. Hell, I wouldn't be shocked if some corporations or even nation-state actors (e.g. the NSA) do this, in a much better/more professional manner, to ensure things like… backdoor access.
No hypothesis needed: https://en.wikipedia.org/wiki/EternalBlue. That can't have been a one-off either.
Yeah, that was my thought. But more of a dedicated program to do something similar with large FOSS projects.
They also have hardware/supply chain intercept programs to install backdoors in closed source appliances (e.g. Cisco firewalls).
So something similar but dedicated to open source PRs.
For the human-hours of work that are put into it, it's very expensive. I put in translations, highlighted bugs, and put up a Jerboa fork to help mitigate issues with the 0.18 Lemmy upgrade… if I were to do this kind of thing for work, I'd bill 25 CAD per hour at the very minimum.
deleted by creator
Just like how no one has ever put anything malicious on Wikipedia. Nope, never, not once
deleted by creator
deleted by creator
This is wrong and ignorant. It happens all the fucking time. Software vendor supply chain is a huge fucking issue.
Christ, tell me you have no idea what you're talking about with one-sentence vibes.
deleted by creator
Lol no it doesn’t. It happens weekly, all the fucking time.
Source: I've been developing OSS software for 20 years and have had to push hundreds of teams to fix their vendors' bins.
Chill == I ain’t got shit to say 🤣
Get that reddit attitude out of here.
deleted by creator
Just an fyi you can block the trolls here.
Hey I know it sucks when someone isn’t nice to you, but that person is about as right as can be.
Just a month ago thousands of malicious commits discovered on git made the news. Unaudited repositories are a huge vector for attack and have been for years.
If that person seems pissed off, you could chalk it up to them having heard about this stuff in newsgroup discussions two decades ago.
Lololol oSs is free and SeKuR3 cause rainbows and kittens.
20 years of experience and you still behave like a little kid. My 2-year-old nephew is more mature. So sad, and ironic that you say that on a FOSS platform.
With a name like @redditcunts, this one is probably a troll. Just block them.
👌👍
Software vendor supply chain affects ALL software. It is caught much sooner with open source.
But someone does
Sure, someone knows how to audit code.
Whether that someone is inclined to do it for whatever random FOSS package / library / application / service / whatever is a different question.
There is a much higher chance that someone out of 7 billion people will audit open source code than that a corporation will do it, let alone make the findings public and fix them.