DAVE'S LIFE ON HOLD

Security and Why You Will Not Buy It

Every so often, someone brings up the issue of securely deploying code in response to the idea of dynamically pulling code in from the web. It goes something like this:


These same people will then happily install random modules via cpan, easy_install, npm, yum, apt, etc. without any validation or vetting of the sources. They will then neglect to store the packages on non-volatile media and forgo any tracking of dependencies. Some packages they will build from source using toolchains they have not validated and cannot know are secure.

Why?

Because you cannot live without trust. The cost of paranoid fantasy is far greater than the cost of trusting others. Criminals learn to take advantage of this asymmetry, which is why we have the notion of trust but verify. Git implicitly has this model in that every object is stored in a content-addressed, cryptographically hashed store, and out-of-band modification is identified by the tool. Does this make git a secure solution? No, it just aids in one step of the verify phase.
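
To make that concrete, here is a minimal Python sketch of the content-addressing idea git relies on: an object's id is a hash of its bytes, so any out-of-band change to the stored content shows up as an id mismatch. The hashing mirrors git's blob format, but the rest is illustrative, not git's actual implementation.

    import hashlib

    def object_id(content):
        # Git names a blob by hashing a header plus its bytes, so the id
        # is a function of the exact content.
        header = b"blob %d\x00" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    original = b"print('hello')\n"
    recorded_id = object_id(original)

    # Any out-of-band modification yields a different id, which is how
    # the tool can flag tampering during the verify step.
    tampered = b"print('hello')  # injected change\n"
    assert object_id(tampered) != recorded_id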

When building a system, the question of when the code gets loaded from a disk or server is not a security issue. Whether that disk is attached by a short length of wire is not the security concern either. The real concern is whether the program has been modified from a verified source. If the source was modified on disk after installation, corrupted in transfer over the wire, or altered by a Blue Pill-style hypervisor layer, it would be handy to know that the resulting executable will not be run. This is what Trusted Computing was all about, and it had to work in hardware because hardware is the last opportunity for modification. It turns out trustworthy hardware is a bigger problem, as it is practically impossible to physically scan every subcomponent of the system for design variances that could house attack vectors.
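
As a sketch of that "verify before you run" step, the snippet below refuses to execute a binary whose on-disk bytes no longer match a digest recorded from the verified source. The path and digest are hypothetical placeholders; a real system would anchor this check much lower in the stack, which is the Trusted Computing point above.

    import hashlib
    import subprocess

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def run_if_verified(path, known_good_digest):
        # Refuse to run anything that has drifted from the verified source,
        # whether it was changed on disk, in transit, or anywhere else we
        # can still observe.
        if sha256_of(path) != known_good_digest:
            raise RuntimeError("%s does not match its recorded digest" % path)
        subprocess.run([path], check=True)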

Eternal Vigilance

At the end of the day you have to step back and realize:
But that doesn't matter; what matters is I:
If you can afford a certain level of downtime, fraud, or abuse, you don't need a secure system to be effective. Security is ultimately an economic issue, wherein the cost of a security measure must be less than the realized damages incurred by system failure. If you take the probability of a recoverable system failure and multiply it by the cost of recovery, you get the maximum value of security. It is not worth calculating the cost of catastrophic failure, as that value is merely a function of the value of the business and is not a security problem.
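
A quick worked example with made-up numbers: if a recoverable failure has a 5% chance of happening in a given year and costs $200,000 to recover from, then spending more than $10,000 a year to prevent it costs more than the failure itself.

    # Hypothetical figures, purely to illustrate the ceiling on security spend.
    p_recoverable_failure = 0.05   # chance of the failure per year
    cost_of_recovery = 200_000     # dollars to recover when it happens
    max_security_value = p_recoverable_failure * cost_of_recovery
    print(max_security_value)      # 10000.0 -- beyond this, the cure costs more than the disease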

From a reliability standpoint, the following scheme is better than the traditional "build, install, run" paradigm:
Deployment of new code is done by updating the known good endpoint and its associated signatures. This is effectively the methodology behind using GitHub + rebar, with the added twist of verifying and rebuilding the app each time something changes. Used with an infrastructure where rollbacks on failure are automatic, you can build a system that minimizes downtime and keeps software updated with the necessary security fixes.
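
Here is a rough Python sketch of that loop, assuming a signed git tag plays the role of the known good endpoint; the make targets and directory layout are placeholders, not any particular tool's interface (rebar or otherwise).

    import subprocess

    def run(*cmd):
        subprocess.run(list(cmd), check=True)

    def deploy(repo_dir, release_tag):
        # Remember the last known good revision so a failed deploy can roll back.
        previous = subprocess.run(
            ["git", "-C", repo_dir, "rev-parse", "HEAD"],
            check=True, capture_output=True, text=True).stdout.strip()
        try:
            run("git", "-C", repo_dir, "fetch", "--tags", "origin")
            # Reject the release if its tag signature does not verify.
            run("git", "-C", repo_dir, "verify-tag", release_tag)
            run("git", "-C", repo_dir, "checkout", release_tag)
            run("make", "-C", repo_dir, "build")    # placeholder rebuild step
            run("make", "-C", repo_dir, "restart")  # placeholder restart step
        except subprocess.CalledProcessError:
            # Automatic rollback: rebuild and restart the previous revision.
            run("git", "-C", repo_dir, "checkout", previous)
            run("make", "-C", repo_dir, "build")
            run("make", "-C", repo_dir, "restart")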