I find people who agree with me for the wrong reasons to be more problematic than people who simply disagree with me. After writing a lot about why free software is important, I needed to clarify that there are good and bad reasons for supporting it.
You can audit the security of proprietary software quite thoroughly; source code isn’t a necessary or sufficient precondition for a particular software implementation to be considered secure.
I am tired of people acting like black-box analysis is the same as white-box analysis. It is as if these people never properly studied software testing and software engineering, and want to offer commentary just for internet fame, because the rest of the internet audience is dumber.
I was very explicit that the two types of analysis are not the same. I repeatedly explained the merits of source code, and the limitations of black-box analysis. I also devoted an entire section to make an example of Intel ME because it showed both the strengths and the limitations of dynamic analysis and binary analysis.
My point was only that people can study proprietary software, and vulnerability discovery (beyond low-hanging fruit typically caught by e.g. static code analysis) is slanted towards black-box approaches. We should conclude that software is secure through study, not by checking the source model.
Edit: I liked that last sentence I wrote so I added it to the conclusion. Diff.
Lots of FLOSS is less secure than proprietary counterparts, and vice versa. The difference is that proprietary counterparts make us entirely dependent on the vendor for most things, including security. I wrote two articles exploring that issue, both of which I linked near the top. I think you might like them ;).
Now, if a piece of proprietary software doesn’t document its architecture, makes heavy use of obfuscation techniques in critical places, and is very large/complex: I’d be very unlikely to consider it secure enough for most purposes.
And… you cannot study closed-source software. This is the whole point of FOSS being more secure by development model. Closed-source devs and companies act like obscurity gives them the edge, like Apple does, only for this to happen.
MAYBE. However, as another user above said:
Can you, with complete certainty, confidently assert that closed-source software is more secure? How is it secure? Is it also a piece of software that doesn’t invade your privacy? Security is not the source of privacy, and security is not merely about a standalone piece of code’s own resilience against break-in attempts. This whole thing is not a simple two-way relation, but more like the magnetic field generated by a magnet. I am sure you understand that.
When white-box analysis shows a piece of FLOSS to be less secure, at least we know where it stands on security. That can never be true for closed-source software, so the assertion that closed-source software is more secure is itself uncertain. FOSS does not rely on blind trust of entities, including whoever created the code, since it can be inspected thoroughly.
Moreover, FOSS devs are idealistic and generally have good moral inclinations towards the community, and in the wild there is hardly any evidence of FOSS devs maliciously laying honeypots and mousetraps. The same cannot be said for closed-source devs, where only a handful of examples exist of developers standing against end-user exploitation. (Some common examples in Android I see are Rikka Apps (AppOps), Glasswire, MiXplorer, Wavelet, many XDA apps, Bouncer, Nova Launcher, SD Maid, and emulators vetted at r/emulation.)
Sure you can. I went over several examples.
I freely admit that this leaves you dependent on a vendor for fixes, and that certain vendors like Oracle can be horrible to work with (seriously, check out that link, it’s hilarious). My previous articles on FLOSS being an important mitigation against user domestication are relevant here.
I can’t confidently assert anything with complete certainty regardless of source model, and you shouldn’t trust anyone who says they can.
I can somewhat confidently say that, for instance, Google Chrome (Google’s proprietary browser based on the open-source Chromium) is more secure than most WebKit2GTK browsers. The vast majority of WebKit2GTK-based browsers don’t even fully enable sandboxing (webkit_web_context_set_sandbox_enabled). I can even more confidently say that Google Chrome is more secure than Pale Moon.
To determine whether a piece of software invades privacy, see if it phones home. Use something like Wireshark to inspect what it sends. Web browsers make it easy to save TLS key logs (e.g. via the SSLKEYLOGFILE environment variable) so captured packets can be decrypted. Don’t stop there; there are other techniques I mentioned to work out the edge cases.
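As one minimal sketch of this kind of black-box check (assuming a Linux machine; Wireshark plus a TLS key log gives far more detail), you can decode the kernel’s own connection table to spot unexpected remote endpoints:

```python
# Sketch: decode remote endpoints from /proc/net/tcp to spot unexpected
# "phone home" connections. Linux-only; field layout per the proc(5) man page.
import socket
import struct

def parse_endpoint(hex_endpoint):
    """Decode /proc/net/tcp's hex 'ADDR:PORT' field (address is little-endian)."""
    addr_hex, port_hex = hex_endpoint.split(":")
    addr = socket.inet_ntoa(struct.pack("<I", int(addr_hex, 16)))
    return addr, int(port_hex, 16)

def established_remotes(path="/proc/net/tcp"):
    """Yield (ip, port) for the remote side of every ESTABLISHED connection."""
    with open(path) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            if fields[3] == "01":  # connection state 01 == TCP_ESTABLISHED
                yield parse_endpoint(fields[2])

# Example: decode a sample /proc/net/tcp remote-address field
print(parse_endpoint("0100007F:0050"))  # → ('127.0.0.1', 80)
```

Run it while the program under test is active, and any endpoint you can’t account for deserves a closer look in Wireshark.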
Certain forms of security are necessary for certain levels of privacy. Other forms of security are less relevant for certain levels of privacy, depending on your threat model. There’s a bit of a Venn-diagram effect going on here.
Sure, but don’t stop at white-box methods. You should use black-box methods too. I outlined why in the article and used a Linux vuln as a prototypical example.
You’re making a lot of blanket, absolute statements. Closed-source software can be analyzed, and I described how to do it. This is more true for closed-source software that documents its architecture; such documentation can then be tested.
I am in full agreement with this paragraph. There is a mind-numbing amount of proprietary shitware out there. That’s why, even if I was only interested in security, I wouldn’t consider running proprietary software that hasn’t been researched.
I know this obvious stuff well. I do not think there is any debate on how to do those tasks. The issue here, which I now notice thanks to your commit link above, is this:
Linking this person gives off really, really bad vibes. He is a security grifter who recommends Windows and macOS over Linux for some twisted security purposes. How do I know? I have had years of exchanges with him and the GrapheneOS community. I recommend you have a look at 4 separate discussions regarding the above blog of his. Take your time.
From https://web.archive.org/web/20200528215441/https://forum.privacytools.io/t/is-madaidans-insecurities-fake-news/3248 :
https://web.archive.org/web/20200417185218/https://lobste.rs/s/ir9mcp/linux_phones_such_as_librem_5_are_major
https://teddit.net/r/linux/comments/pwi1l9/thoughts_about_an_article_talking_about_the/
I think you have been influenced by madaidan’s grift because you use a lot of closed-source tools and want to justify them to yourself as safe.
Windows Enterprise and macOS are ahead of Linux in exploit mitigations. Madaidan wasn’t claiming that Windows and macOS are the right OSes for you, or that Linux is too insecure to be a good fit for your threat model; he was only claiming that Windows and macOS have stronger defenses available.
QubesOS would definitely give Windows and macOS a run for their money, if you use it correctly. Ultimately, Fuchsia is probably going to eat their lunch security-wise; its capabilities system is incredibly well done and its controls over dynamic code execution put it even ahead of Android. I’d be interested in seeing Zircon- or Fuchsia-based distros in the future.
When it comes to privacy: I fully agree that the default settings of Windows, macOS, Chrome, and others are really bad. And I don’t think “but it’s configurable” excuses them: https://pleroma.envs.net/notice/AB6w0HTyU9KiUX7dsu
Here’s an exhaustive list of the proprietary software on my machine:
That’s it. I don’t even have proprietary drivers. I’m strongly against proprietary software on ideological grounds.
Servers use Linux. Home users’ need for security does not arise from OSes being insecure, but from the user being the weak link. https://pointieststick.com/2021/11/29/who-is-the-target-user/ Also, this is contradicted by…
QubesOS is based on Linux.
Here are two HN discussions that highlight madaidan’s issues. You can read them to understand his whole game.
https://news.ycombinator.com/item?id=26954225 (10 comments)
https://news.ycombinator.com/item?id=25590079 (295 comments)
This is a defeatist attitude and a meaningless excuse. Adding more closed-source software and hardware stacks means extra attack surface. And extra attack surface should be avoided first, mitigated second.
The server, desktop, and mobile computing models are all quite different. The traditional desktop model gives programs the user’s own privileges and free rein over all the user’s data; the server model splits programs into different unprivileged users isolated from each other, with one admin account configuring everything; the mobile model gives programs private storage and ensures that programs can’t read each other’s data and need permission to read shared storage. Each has unique tradeoffs.
macOS has been adopting safeguards to sandbox programs with fewer privileges than what’s available to a user account; Windows has been lagging behind but has made some progress (I’m less familiar with the Windows side of this). On Linux, all modern user-friendly attempts to bring sandboxing to the desktop (Flatpak and Snap are what I’m thinking of) let programs opt into sandboxing. The OS doesn’t force all programs to run with minimal privileges by default, with users controlling any escalation of user-level privileges; if you chmod +x a file, it gets all user-level privileges by default. Windows is…somewhat similar in this regard, I admit. But Windows’ sandboxing options (UWP and the Windows Sandbox) are more airtight than Flatpak (I’m more familiar with Flatpak than Snap, as I have some unrelated fundamental disagreements with Snap’s design).
I think Flatpak has the potential to improve a lot: it could grant existing permissions at run-time, so that broad filesystem access only gets enabled when a program tries to bypass a portal (most of the “filesystem=*” apps can work well without it, and some only need it for certain tasks), and the current seccomp filter could become a “privileged execution” permission, with the default filters offering fine-grained ioctl filtering and toggleable W^X + W!->X enforcement. The versions of JavaScriptCore, GJS, Electron, Java, and LuaJIT used by runtimes and apps could be patched to run in JITless mode unless e.g. an envvar for “privileged execution” is detected. I’ve voiced some of these suggestions to the devs before.
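The chmod +x point above is easy to demonstrate concretely. This is a small sketch (the script is a throwaway temp file created just for the demo): any file a user marks executable immediately runs with that user’s full privileges, including read access to everything in $HOME, with no sandbox applied by default.

```python
# Demo: a freshly chmod +x'ed script runs with the invoking user's
# full privileges on a traditional Linux desktop -- nothing confines it.
import os
import stat
import subprocess
import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    # The script can freely list $HOME; errors are suppressed for portability.
    f.write('#!/bin/sh\nls "$HOME" > /dev/null 2>&1; echo unrestricted\n')
    script = f.name

# Equivalent of `chmod +x` for the owner
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)

result = subprocess.run([script], capture_output=True, text=True)
print(result.stdout.strip())  # prints "unrestricted"
os.unlink(script)
```

Under Flatpak or a mobile-style permission model, that same file would need an explicit grant before it could touch the home directory at all.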
My favorite (and current) distro is Fedora. If Flatpak makes these improvements, Fedora lands FS-verity in Fedora 37, Fedora lands dm-verity in Silverblue/Kinoite, and we get some implementation of verified boot that actually lets users control the signing key: I personally wouldn’t consider Fedora “insecure” anymore. Though I’d still find it to be a bit problematic because of Systemd. I wasn’t convinced by Madaidan’s brief criticisms of Systemd; I prefer this series of posts that outlines issues in Systemd’s design and shows how past exploits could have been proactively (instead of reactively) avoided:
Systemd exposes nice functionality and I genuinely enjoy using it, but its underlying architecture doesn’t provide a lot of protections against itself. The reason I bring it up when distros like Alpine and Gentoo exist is that the distro I currently think best combines the traditional desktop model with some hardening–Fedora Silverblue/Kinoite–uses it.
QubesOS is based on Linux, but it isn’t in the same category as a traditional desktop Linux distribution. Like Android and ChromeOS, it significantly alters the desktop model by compartmentalizing everything into virtual machines under the Xen hypervisor. I brought it up to show that it’s possible to “make Linux secure”, but in doing so you deviate heavily from a standard distribution. Although Qubes is based on Linux, its devs feel more comfortable calling it a “Xen distribution” to highlight its differences from other Linux distributions.
I only brought this up in response to the bad-faith argument you previously made:
Since you seem to be arguing in bad faith, I don’t think I’ll engage further. Best of luck.
And this is exactly what I have been saying for the past few replies. There is no one threat model for all systems and users. Madaidan, whom you quote, however, tells us there is exactly zero room for nuance; hence the multiple links I shared, to give you more opinions to process.
To him, Linux is somehow bad because of its use of the unsafe C language and a monolithic kernel, yet Windows and macOS also have monolithic kernels and get excused. More CVEs (with the severity of each CVE completely ignored), instead of being proof of maturity, become proof of worse security. You basically linked to, and believe in, what is well known as a toilet-paper blog among security enthusiasts who are not GRSecurity- or GrapheneOS-community-related grifters.
It seems you are fairly stubborn in your beliefs. If critical thinking is bad faith argumentation, then I will disengage as well.
P.S. I am a CS grad that created r/privatelife and teaches OPSEC.