These projects are so vast that no human mind can comprehend them; at best, one can grasp a small, isolated subset of the whole.
Which means no human mind can trust them either, and no programmer alone can conduct a full security review.
Which means they should not be trusted, and should be considered insecure - unless they can be carefully isolated from the environment so that only a small, trusted surface is exposed.
My ideal project size: something that an average coder can read in a week or two, and come back to their (possibly anarchist) colleagues saying: “this code looks reliable and won’t be leaking buckets”.
I should point out that hardened, top-to-bottom formally verified OSes exist. Some even go so far as to harden and formally verify both the software and the hardware.
https://www.cs.ox.ac.uk/tom.melham/pub/Gao-2021-EFV.pdf
https://github.com/standardsemiconductor/lion
http://web1.cs.columbia.edu/~junfeng/09fa-e6998/papers/sel4.pdf
https://www.cs.cornell.edu/courses/cs6410/2013fa/slides/10-verifiable-systems.pdf
https://aerospacelab.onera.fr/sites/w3.onera.fr.aerospacelab/files/AL04-10_0.pdf
https://www.cs.utexas.edu/users/hunt/FMCAD/FMCAD12/005.pdf
Quite interesting articles, thank you - I read two and skimmed some of the others.
Formal verification seems like the way to go if you are building a system top-down.
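To make that concrete, here is a toy sketch of my own (not taken from any of the linked papers) of what a machine-checked guarantee looks like in a proof assistant such as Lean - the property is proven for every possible input, not just the ones a test suite happens to cover:

```lean
-- Toy illustration of formal verification (my own sketch, not from
-- the linked projects): we prove that `clamp` never returns a value
-- below `lo`, for all inputs, with no test suite involved.
def clamp (lo hi x : Nat) : Nat :=
  max lo (min x hi)

theorem clamp_never_below (lo hi x : Nat) : lo ≤ clamp lo hi x := by
  unfold clamp
  omega
```

Scaling that discipline from ten lines to an entire OS is exactly what makes projects like seL4 so remarkable - and so expensive.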
What tends to ruin the day is when you need to use components of somewhat random origin. In addition to the OS, you have device drivers, often supplied by vendors who cannot be bothered to go beyond the bare minimum; you have applications, some legacy and some poorly maintained, written in 12 different programming languages and speaking 12 different network protocols, some with great traction (and thus necessary) but designed the way things were done back in those days…
…in a chaotic environment, formal verification seems like a big burden to carry. It therefore seems like a method that a rich organization can afford, but not something a small or poor organization can use.
For smaller players, I think minimizing and simplifying their tools may be more accessible. If one can throw out most components, limit the attack surface to something small, use the simplest tools on that surface and make reviewing it easier - I think one can get reasonably high assurance without all-encompassing verification (a rough sketch of what I mean follows below)…
…but then again, if you build a manned aircraft, spacecraft or submersible, or a medical apparatus or life support system, then I think formal verification would be a really good idea. :)
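As for that sketch: here is the kind of "small trusted surface" I have in mind - a hypothetical gateway (the `/opt/bigtool/convert` path and the tool itself are made up for illustration) that is the only code exposed to untrusted input, with the big unreviewable component kept behind it in a separate process:

```python
import re
import subprocess

# Hypothetical example: the entire trusted surface of a service that
# wraps some big, unreviewable document converter.  A reviewer only
# needs to audit this one page, not the converter itself.

NAME_RE = re.compile(r"^[a-z0-9_-]{1,64}$")   # whitelist, not blacklist
MAX_INPUT_BYTES = 1_000_000

def convert(name: str, data: bytes) -> bytes:
    """Validate everything up front, then delegate to the untrusted
    tool in a separate process with no shell and a hard timeout."""
    if not NAME_RE.fullmatch(name):
        raise ValueError("bad name")
    if len(data) > MAX_INPUT_BYTES:
        raise ValueError("input too large")

    # The untrusted converter ("bigtool" is a stand-in) never sees our
    # environment, never gets a shell, and cannot run forever.
    result = subprocess.run(
        ["/opt/bigtool/convert", "--stdin", "--stdout"],
        input=data,
        capture_output=True,
        timeout=10,
        env={},          # no inherited environment
        check=True,      # fail loudly on a nonzero exit code
    )
    return result.stdout
```

Whether the big tool misbehaves or not, the damage it can do is bounded by this surface - and the surface is small enough to read in an afternoon.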
Thanks for taking a look. I agree that it is REALLY challenging (and currently impossible) to maintain that kind of formality in most codebases. Where it matters, though, it really helps.
I got interested in formal verification in the context of a future (currently vaporware 😅), formally verified version of the Cardano node, running on a formally verified OS flavor that targets the RISC-V architecture. I’m just trying to put it on the white beards’ radar over there at IOG (it’s WAY out of my wheelhouse, and perhaps even out of many of theirs!), because this kind of thing, if fully open source (as that whole codebase is), could make this level of sophistication much more achievable for mere mortals.
Another amazing company that got me into this level of sophistication was Runtime Verification. If you like looking at projects that are absolutely world-changing, look into their K framework. They’ve created a framework for formally specifying entire programming languages (and even LLVM IR!) and can pretty much use it as a Babel Fish for translating code from one programming language to another, which I had always thought was impossible. But if any outfit could accomplish it, it might be those geniuses, who regularly work with organizations like NASA and aerospace behemoths.
Modularity and microservices are great for this. That way, you can inspect an individual component with its own responsibility (true to the single-responsibility principle, SRP) without trudging through a monolithic codebase.
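As a hypothetical sketch of how small such a component can be (none of this is anyone’s production code): a whole single-responsibility service in a page of Python, reviewable in one sitting:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import hashlib

# Hypothetical micro-service with exactly one responsibility: hash
# the request body.  Small enough to audit in one sitting, and it can
# be replaced or isolated without touching any other component.

class HashHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/sha256":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        if not 0 < length <= 1_000_000:      # refuse oversized input
            self.send_error(413)
            return
        digest = hashlib.sha256(self.rfile.read(length)).hexdigest()
        body = digest.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), HashHandler).serve_forever()
```

Each such service can then be audited, replaced or sandboxed on its own, which is exactly the property this thread is after.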