I find people who agree with me for the wrong reasons to be more problematic than people who simply disagree with me. After writing a lot about why free software is important, I needed to clarify that there are good and bad reasons for supporting it.

You can audit the security of proprietary software quite thoroughly; source code isn’t a necessary or sufficient precondition for a particular software implementation to be considered secure.

  • X_Cli@lemmy.ml · 3 years ago

    Good article. Thank you. You make some excellent points.

    I agree that source access is not sufficient to get secure software and that the many-eyes argument is often wrong. However, I am convinced that transparency is a requirement for secure software. As a consequence, I disagree with some points, especially this one:

    It is certainly possible to notice a vulnerability in source code. Excluding low-hanging fruit, it’s just not the main way they’re found nowadays.

    In my experience as a developer, the vast majority of vulnerabilities are caught by linters, source code static analysis, source-wise fuzzers, and peer reviews. What is caught by blackbox (dynamic, static, and negative) testing and scanners are the remaining bugs/vulnerabilities that were not caught during the development process. When using closed-source software, you have no idea if the developers used these tools (software and internal validation), and so yeah: you may get excellent results with the blackbox testing. But that may just be a sign that they did not perform their due diligence during the development phase.
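    To illustrate what I mean by source-wise fuzzing: a coverage-guided fuzzer like libFuzzer only works when you can rebuild the code with instrumentation and write a small harness around the function under test. A minimal sketch (parse_header here is a made-up stand-in for whatever internal routine you want to exercise):

    ```c
    // Hypothetical libFuzzer harness; parse_header stands in for any internal parser.
    #include <stddef.h>
    #include <stdint.h>

    int parse_header(const uint8_t *buf, size_t len);  // assumed to exist in the project

    // libFuzzer calls this entry point with mutated inputs.
    // Build with: clang -fsanitize=fuzzer,address harness.c parser.c
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
      parse_header(data, size);  // crashes and sanitizer reports become findings
      return 0;
    }
    ```

    Whether a vendor ran anything like this internally is exactly what you cannot see from the outside.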

    As an ex-pentester, I can assure you that having blackbox security tools return no findings is not a sign that the software is secure at all. Those may fail to spot flawed logic leading to a disaster, for instance.

    And yeah, I agree that static analysis has its limits, and that running the damn code is necessary, because unit tests, integration tests, and load tests can only get you so far. That’s why big companies also do blue/green deployments etc.

    But I believe this is not an argument for saying that closed-source software may be secure if tested that way. Dynamic analysis is just one tool in a defense-in-depth strategy. It is a required one, but certainly not a sufficient one.

    Again, great article, but I believe that you may not be paranoid enough 😁 Which might be a good thing for you 😆 Working in security is bad for one’s mental health 😂

    • Seirdy@lemmy.mlOP · 3 years ago

      Linters are a great thing I should’ve mentioned, esp. ones like ShellCheck. The phrase “low-hanging fruit” has been doing a lot of heavy lifting. I should mention that.

      I talked a lot about how to determine if software is insecure, but didn’t spend enough time describing how to tell if software is secure. The latter typically involves understanding software architecture, which can be done by documenting it and having reverse engineers/pentesters verify those docs’ claims.

      It’s getting late (UTC-0800) so I think I’ll edit the article tomorrow morning. Thanks for the feedback.

    • TheAnonymouseJoker@lemmy.ml · 3 years ago

      As an ex-pentester, I can assure you that having blackbox security tools return no findings is not a sign that the software is secure at all. Those may fail to spot flawed logic leading to a disaster, for instance.

      I am tired of people acting like blackbox analysis is the same as whitebox analysis. It is like all these people never properly studied software testing and software engineering, and just want to do some commentary for internet fame, because the rest of the internet audience is dumber.

      • Seirdy@lemmy.mlOP · 3 years ago

        I am tired of people acting like blackbox analysis is the same as whitebox analysis.

        I was very explicit that the two types of analysis are not the same. I repeatedly explained the merits of source code and the limitations of black-box analysis. I also devoted an entire section to making an example of Intel ME, because it showed both the strengths and the limitations of dynamic analysis and binary analysis.

        My point was only that people can study proprietary software, and vulnerability discovery (beyond low-hanging fruit typically caught by e.g. static code analysis) is slanted towards black-box approaches. We should conclude that software is secure through study, not by checking the source model.

        Edit: I liked that last sentence I wrote so I added it to the conclusion. Diff.

        Lots of FLOSS is less secure than proprietary counterparts, and vice versa. The difference is that proprietary counterparts make us entirely dependent on the vendor for most things, including security. I wrote two articles exploring that issue, both of which I linked near the top. I think you might like them ;).

        Now, if a piece of proprietary software doesn’t document its architecture, makes heavy use of obfuscation techniques in critical places, and is very large/complex: I’d be very unlikely to consider it secure enough for most purposes.

        • TheAnonymouseJoker@lemmy.ml · 3 years ago

          We should conclude that software is secure through study, not by checking the source model.

          And… you cannot study closed-source software. This is the whole point of FOSS being more secure by development model. Closed-source devs/companies act like obscurity gives them the edge, like Apple does, only for this to happen.

          Lots of FLOSS is less secure than proprietary counterparts, and vice versa.

          MAYBE. However, as another user above said:

          As an ex-pentester, I can assure you that having blackbox security tools return no findings is not a sign that the software is secure at all.

          Can you, with complete certainty, confidently assert that the closed-source software is more secure? How is it secure? Is it also a piece of software that doesn’t invade your privacy? Security is not the origin of privacy, and security is not merely about its own resilience as standalone code resisting break-in attempts. This whole thing is not just a simple two-way relation, but more like a magnetic field generated by the magnet itself. I am sure you understand that.

          FLOSS being less secure when analysed with whitebox methods at least assures us of where it stands on security. This will always be untrue for closed-source software; therefore the assertion that closed-source software is more secure is itself uncertain. FOSS does not rely on blind trust in entities, including whoever created the code, since it can be inspected thoroughly.

          Moreover, FOSS devs are idealistic and generally have good moral inclinations towards the community, and in the wild there are hardly any observed cases of FOSS devs maliciously sitting on honeypots and mousetraps. This has long been untrue for closed-source devs, where only a handful of examples exist of closed-source software devs standing against end-user exploitation. (Some common examples in Android I see are Rikka Apps (AppOps), Glasswire, MiXplorer, Wavelet, many XDA apps, Bouncer, Nova Launcher, SD Maid, and emulators vetted at r/emulation.)

          • Seirdy@lemmy.mlOP · 3 years ago

            And… you cannot study closed-source software.

            Sure you can. I went over several examples.

            I freely admit that this leaves you dependent on a vendor for fixes, and that certain vendors like Oracle can be horrible to work with (seriously, check out that link; it’s hilarious). My previous articles on FLOSS being an important mitigation against user domestication are relevant here.

            Can you, with complete certainty, confidently assert that the closed-source software is more secure? How is it secure? Is it also a piece of software that doesn’t invade your privacy? Security is not the origin of privacy, and security is not merely about its own resilience as standalone code resisting break-in attempts. This whole thing is not just a simple two-way relation, but more like a magnetic field generated by the magnet itself. I am sure you understand that.

            I can’t confidently assert anything with complete certainty regardless of source model, and you shouldn’t trust anyone who says they can.

            I can somewhat confidently say that, for instance, Google Chrome (Google’s proprietary browser based on the open-source Chromium) is more secure than most WebKit2GTK browsers. The vast majority of WebKit2GTK-based browsers don’t even fully enable sandboxing (webkit_web_context_set_sandbox_enabled).
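            For reference, enabling that sandbox is a single call on the web context, and it has to happen before the first web view spawns a web process. A minimal sketch, assuming WebKit2GTK ≥ 2.26 with GTK 3 (error handling omitted):

            ```c
            // Minimal sketch: opt a WebKit2GTK browser into web-process sandboxing.
            // Build with: gcc demo.c $(pkg-config --cflags --libs gtk+-3.0 webkit2gtk-4.0)
            #include <gtk/gtk.h>
            #include <webkit2/webkit2.h>

            int main(int argc, char **argv) {
              gtk_init(&argc, &argv);

              WebKitWebContext *ctx = webkit_web_context_get_default();
              // Has no effect if called after the first WebKitWebView has been created.
              webkit_web_context_set_sandbox_enabled(ctx, TRUE);

              GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
              GtkWidget *view = webkit_web_view_new();
              gtk_container_add(GTK_CONTAINER(window), view);
              webkit_web_view_load_uri(WEBKIT_WEB_VIEW(view), "https://example.org/");
              gtk_widget_show_all(window);
              gtk_main();
              return 0;
            }
            ```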

            I can even more confidently say that Google Chrome is more secure than Pale Moon.

            To determine if a piece of software invades privacy, see if it phones home. Use something like Wireshark to inspect what it sends. Web browsers make it easy to save TLS key logs you can use to decrypt captured packets. Don’t stop there; there are other techniques I mentioned to work out the edge cases.
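            (The key-log part is just an environment variable: Firefox and Chromium both write TLS session secrets to whatever SSLKEYLOGFILE points at, and Wireshark can read that file to decrypt a capture. A rough sketch of launching a browser that way; the log path is an arbitrary choice:)

            ```c
            // Rough sketch: launch a browser with SSLKEYLOGFILE set so a packet capture of
            // its traffic can later be decrypted in Wireshark (Preferences -> Protocols ->
            // TLS -> "(Pre)-Master-Secret log filename", pointed at the same file).
            #include <stdio.h>
            #include <stdlib.h>
            #include <unistd.h>

            int main(void) {
              // Arbitrary path; Firefox (NSS) and Chromium (BoringSSL) both honor this variable.
              if (setenv("SSLKEYLOGFILE", "/tmp/tls-keys.log", 1) != 0) {
                perror("setenv");
                return 1;
              }
              execlp("firefox", "firefox", (char *)NULL);
              perror("execlp");  // only reached if exec fails
              return 1;
            }
            ```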

            Certain forms of security are necessary for certain levels of privacy. Other forms of security are less relevant for certain levels of privacy, depending on your threat model. There’s a bit of a Venn-diagram effect going on here.

            FLOSS being less secure when analysed with whitebox methods at least assures us of where it stands on security.

            Sure, but don’t stop at whitebox methods. You should use black-box methods too. I outlined why in the article and used a Linux vuln as a prototypical example.

            This will always be untrue for closed-source software; therefore the assertion that closed-source software is more secure is itself uncertain.

            You’re making a lot of blanket, absolute statements. Closed-source software can be analyzed, and I described how to do it. This is more true for closed-source software that documents its architecture; such documentation can then be tested.

            Moreover, FOSS devs are idealistic and generally have good moral inclinations towards the community, and in the wild there are hardly any observed cases of FOSS devs maliciously sitting on honeypots and mousetraps. This has long been untrue for closed-source devs, where only a handful of examples exist of closed-source software devs standing against end-user exploitation. (Some common examples in Android I see are Rikka Apps (AppOps), Glasswire, MiXplorer, Wavelet, many XDA apps, Bouncer, Nova Launcher, SD Maid, and emulators vetted at r/emulation.)

            I am in full agreement with this paragraph. There is a mind-numbing amount of proprietary shitware out there. That’s why, even if I was only interested in security, I wouldn’t consider running proprietary software that hasn’t been researched.

            • TheAnonymouseJoker@lemmy.ml · 3 years ago

              I know this obvious stuff well. I do not think there is any debate on how to do those tasks. The issue here, which I now notice thanks to your commit link above, is this:

              Likewise, don’t assume software is safer than proprietary alternatives just because its source is visible. There are lots of great reasons to switch from macOS or Windows to Linux (it’s been my main OS for years), but security is low on that list.

              https://madaidans-insecurities.github.io/linux.html

              Linking this person gives off really, really bad vibes. He is a security grifter that recommends Windows and MacOS over Linux for some twisted security purposes. How do I know? I have had years of exchanges with him and the GrapheneOS community. I recommend you have a look at 4 separate discussions regarding the above blog of his. Take your time.

              From https://web.archive.org/web/20200528215441/https://forum.privacytools.io/t/is-madaidans-insecurities-fake-news/3248 :

              Notice how the site doesn’t have bad things to say about Microsoft or Google, the worst offenders of privacy and security.

              The site is mainly about security, not privacy. Stop with the straw men. Microsoft and Google aren’t security offenders.

              https://web.archive.org/web/20200417185218/https://lobste.rs/s/ir9mcp/linux_phones_such_as_librem_5_are_major

              https://teddit.net/r/linux/comments/pwi1l9/thoughts_about_an_article_talking_about_the/

              I think you have gotten influenced by madaidan’s grift because you use a lot of closed source tools and want to justify it to yourself as safe.

              • Seirdy@lemmy.mlOP · 3 years ago

                He is a security grifter that recommends Windows and MacOS over Linux for some twisted security purposes.

                Windows Enterprise and macOS are ahead of Linux’s exploit mitigations. Madaidan wasn’t claiming that Windows and macOS are the right OSes for you, or that Linux is too insecure for it to be a good fit for your threat model; he was only claiming that Windows and macOS have stronger defenses available.

                QubesOS would definitely give Windows and macOS a run for their money, if you use it correctly. Ultimately, Fuchsia is probably going to eat their lunch security-wise; its capabilities system is incredibly well done and its controls over dynamic code execution put it even ahead of Android. I’d be interested in seeing Zircon- or Fuchsia-based distros in the future.

                When it comes to privacy: I fully agree that the default settings of Windows, macOS, Chrome, and others are really bad. And I don’t think “but it’s configurable” excuses them: https://pleroma.envs.net/notice/AB6w0HTyU9KiUX7dsu

                I think you have gotten influenced by madaidan’s grift because you use a lot of closed source tools and want to justify it to yourself as safe.

                Here’s an exhaustive list of the proprietary software on my machine:

                • Microcode
                • Intel subsystems for my processor (the ME; AMT is disabled. My next CPU hopefully won’t be x86_64, because the research I did on the ME and AMD Secure Technology gave me nightmares).
                • Non-executable firmware
                • Patent-encumbered media codecs with open-source implementations (AVC/H.264, HEVC/H.265). These should be FLOSS, but the algorithms are patented; commercial use and distribution can be subject to royalties.
                • Web apps I’m required to use and would rather avoid (e.g. the web version of Zoom for school).
                • Some Nintendo 3DS games I play in a FLOSS emulator (Citra). Sandboxed, ofc.

                That’s it. I don’t even have proprietary drivers. I’m strongly against proprietary software on ideological grounds.

                • TheAnonymouseJoker@lemmy.ml · 3 years ago

                  Windows Enterprise and macOS are ahead of Linux’s exploit mitigations.

                  Servers use Linux. Home users’ need for security does not arise from OSes being insecure, but from the user being the weak link. https://pointieststick.com/2021/11/29/who-is-the-target-user/ Also, this is contradicted by…

                  QubesOS would definitely give Windows and macOS a run for their money, if you use it correctly.

                  QubesOS is based on Linux.

                  Madaidan wasn’t claiming that Windows and macOS are the right OSes for you, or that Linux is too insecure for it to be a good fit for your threat model; he was only claiming that Windows and macOS have stronger defenses available.

                  Here are two HN discussions that highlight madaidan’s issues. You can read them to understand his whole game.

                  https://news.ycombinator.com/item?id=26954225 (10 comments)

                  https://news.ycombinator.com/item?id=25590079 (295 comments)

                  Here’s an exhaustive list of the proprietary software on my machine:

                  This is a defeatist attitude and a meaningless excuse. Adding more closed-source software and hardware stacks means extra attack surface. And extra attack surface should be avoided first, mitigated second.

  • federico3@lemmy.ml · 3 years ago

    While the article provides a good description of fuzzing, static analysis, etc., it focuses only on a subset of threats and mitigations. There is much more:

    • “How security fixes work”: Linux distributions do a ton of work to implement security fixes for stable releases without input from upstream developers. (And sometimes projects are completely abandoned by upstream developers.) The ability for 3rd parties to produce security patches depends on having access to source code, and it’s absolutely crucial for high-security environments (e.g. banks, payment processors…). Some companies pay a lot of money for such a service. This aspect is a bit understated under “Good counter-arguments”.
    • Software supply chain attacks are a big issue. Open source mitigates the problem by creating transparency on what is used in a build. OS distributions solve the problem by doing reviews and freeze periods.
    • Some Linux distributions go even further and provide reproducible builds. This is not possible with closed source.
    • A transparent development process creates accountability and limits the ability for a malicious developer to insert backdoors/bugdoors. This is quite important.
    • Access to source code, commit history and bug trackers allows end users to quickly gain an understanding of the quality of the development process and the handling of security issues in the past.
    • …it also enables authorship and trust between developers and users.
    • End users and 3rd parties can contribute security-related improvements e.g. sandboxing.
    • Companies can suddenly terminate or slow down development or security support. Community-driven projects, and the ability to fork projects, strongly mitigate such risk.

    I agree that claiming that something is secure just because it’s FLOSS is an oversimplification. Security is a much bigger and broader process than just analyzing a binary or some sources.

    • Seirdy@lemmy.mlOP · 3 years ago

      You make a lot of good points here, many of which I actually agree with.

      The article focused on studying the behavior and properties of software. For completeness, it mentioned how patching can be crowdsourced with the example of Calibre. I also described how FLOSS decreases dependence on a vendor, and wrote two prior posts about this linked at the top.

      I never claimed that source code is useless, only that we shouldn’t assume the worst if it isn’t provided.

  • TheAnonymouseJoker@lemmy.ml · 3 years ago

    This serves as a springboard for a fallacy by omission: that closed source is just as workable and trustworthy. I noticed this with the r/PrivacyGuides and GrapheneOS Matrix room moderator Tommy_Tran/B0risGrishenko in this bullshit announcement post and its comments regarding Rule 1 (no closed source software):

    https://teddit.net/r/PrivacyGuides/comments/siqc69/consideration_on_removing_rule_1/ (Archive: https://archive.is/Jxpmb)

    The famous Underhanded C Contest tells us why closed source software is always an inferior proposition.

  • seb@lemmy.ml · 3 years ago

    Great article! Also, thanks a lot for adding the additional arguments from this thread; it makes it much better.

    • Seirdy@lemmy.mlOP · 3 years ago

      Not just this thread, but the rest of Fedi, IRC, my own email, and Matrix too. My posts get at least 20% longer after I share them.

  • Korba@lemmy.ml · 3 years ago

    What does FLOSS stand for? I mean, I know what FOSS is, but this is my first time hearing about FLOSS.