Bash bug: failure of formal approaches to programming?

September 27, 2014

Shellshock’s gravity should not be underestimated


You may have heard about Shellshock, a long-standing, recently-discovered bug with serious security consequences.

Julien Vanegue commented about this security failure on Twitter:

The bash bug is another shiny example which seems to completely escape traditional methods of automated deductive program verification.

This was my initial reaction too. A previous security failure, Heartbleed, was a recognizable programming error that could be found without any explicit specification. An OpenSSL server, astutely manipulated by a distant adversary, read up to 64 KiB from a much smaller buffer, thus getting whatever data happened to lie next in the server’s memory, and sent that data back to the adversary. Reading beyond the bounds of the buffer was the programming error. This error is comparatively easy to recognize because such an operation is never part of the intentional behavior of any program. It may or may not create a security flaw: in the case of Heartbleed, the unfortunate follow-up to the out-of-bounds read was to send what had been read verbatim to the adversary. But out-of-bounds accesses are never good: an automatic verification tool doesn’t need to worry that one it thinks it has found may be normal; you don’t need to worry that fixing it will remove necessary functionality; and if you fix all of them, you can be confident that no security flaw based on such an out-of-bounds access remains in the software.
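To make the pattern concrete, here is a minimal sketch of a Heartbleed-style out-of-bounds read, in C. The names (struct heartbeat, claimed_len, build_reply) are invented for illustration and this is not OpenSSL’s actual code; the point is only that the length of the reply is taken from the attacker’s message rather than from the size of the data actually received.

```c
#include <stdlib.h>
#include <string.h>

/* Schematic sketch of a Heartbleed-style flaw; names and structure are
   invented for illustration and do not match OpenSSL's code. */
struct heartbeat {
    size_t claimed_len;   /* length field copied from the attacker's message */
    char   payload[64];   /* the data that was actually received */
};

char *build_reply(const struct heartbeat *hb)
{
    char *reply = malloc(hb->claimed_len);
    if (!reply)
        return NULL;
    /* BUG: copies claimed_len bytes although only sizeof hb->payload bytes
       are valid; the rest is whatever happens to lie next in memory. */
    memcpy(reply, hb->payload, hb->claimed_len);
    return reply;
}

/* A fixed version would bound the copy by the real payload size, i.e.
   copy at most sizeof hb->payload bytes regardless of claimed_len. */
```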

In contrast, the problem with Shellshock is that Bash, a program whose sole purpose is to execute arbitrary commands, sometimes executes arbitrary commands. One would have to formalize the specification of which arbitrary commands Bash is and isn’t supposed to execute at a very fine level of detail before the incorrect behavior exploited by Shellshock would stand out.
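To show how untrusted input reaches Bash in the first place, here is a hedged sketch in C of the CGI-style pattern through which Shellshock became remotely exploitable. The function and variable names are invented; this is not code from any particular server. The server copies an attacker-controlled HTTP header into an environment variable and then calls system(), which starts /bin/sh; on systems where /bin/sh is a vulnerable Bash, the function definition found in the variable is imported and the commands after its closing brace are executed.

```c
#include <stdlib.h>

/* Schematic sketch of the CGI-style pattern behind remote Shellshock
   exploitation; names are invented for illustration. Uses the POSIX
   functions setenv() and system(). */
void handle_request(const char *user_agent_header)
{
    /* The attacker fully controls this header value. */
    setenv("HTTP_USER_AGENT", user_agent_header, 1);

    /* system() runs /bin/sh -c "...". Where /bin/sh is a vulnerable Bash,
       Bash imports function definitions found in environment variables
       and, because of the bug, also executes the commands that follow
       the closing brace of the definition. */
    system("/usr/bin/logger request received");
}

int main(void)
{
    /* Example of a malicious header: a function definition followed by
       an extra command that should never be executed. */
    handle_request("() { :; }; echo vulnerable");
    return 0;
}
```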

Now, I work at a company that sells products and services based on the sort of formal verification techniques that, used in isolation, would not be practical for identifying the Bash bug. And Julien used to work on techniques of the very same category, but he has recently widened his interests to include more facets of information security. So our common conclusion both saddens me and vindicates him. No wonder he commented before I did.

Still, since Julien has commented and I have been thinking about the same subject, here are some additional remarks of my own:

  • First, there is a bit of survivorship bias in the observation that this flaw existed despite automatic verification techniques of all kinds. Bugs are found routinely with these techniques, in both legacy and new code. People who find them fix them and move on to the next one. We don’t hear about them so much when they are found before the software is released, or when they are found as part of a collection of bugs found automatically.
  • TrustInSoft provides “verification kits” for Open-Source components. The first such verification kit is for PolarSSL, an SSL library providing the same kind of functionality as OpenSSL. One piece of information provided in the verification kit is a list of the library functions that PolarSSL, configured as in the kit, calls. This list does not include system, exec, or any system call that would similarly give great power to whoever was able to control its arguments. This confirms that executing arbitrary commands is not part of the intended functionality of PolarSSL as configured. The verification kit also demonstrates why PolarSSL does not unintentionally provide this functionality, by showing that it doesn’t have the sort of undefined behavior that caused the problem in the case of Heartbleed. If a customer asked for a verification kit for Bash, we wouldn’t say that exec isn’t called from Bash, nor that we could confirm that there is no dataflow between user inputs and the arguments of exec. We would probably, instead, work with the customer to understand how Bash came to be part of the security equation and how to remove it.
  • Finally, is Shellshock a failure of all the formal techniques and tools that have been created since people started to write programs that manipulate programs? No, quite the contrary. Formalisms exist to describe the shape of a well-formed program. These grammars are even called “formal grammars” to distinguish them from natural-language grammars. The parsing bug we are talking about is typical of hand-written parsers. Using widely available tools to mechanically generate the parser from a formal grammar does not make bugs impossible, but it makes much less likely both unintended functionality like Shellshock and undefined behavior of the kind everyone has been looking for since we were reminded that Bash parses environment variables (a schematic sketch of this bug class follows the list). In other words, Bash executing code after a closing brace is a sign that old, proven formal tools should be used more, not that they failed.
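As a schematic illustration of that bug class (written in C, and not Bash’s actual source), here is what a hand-written importer of function definitions can get wrong: instead of stopping at the brace that closes the function body, it hands the entire environment-variable value, trailing commands included, to the evaluator. A parser generated from a formal grammar for function definitions would accept exactly the definition and nothing more.

```c
#include <stdio.h>
#include <string.h>

/* Invented stand-in for "parse and run every command in this string". */
static void evaluate_commands(const char *src)
{
    printf("executing: %s\n", src);
}

/* Schematic hand-written import of a function definition from an
   environment variable, showing the Shellshock-style bug class: the
   whole value is passed to the evaluator instead of only the text up
   to the brace that closes the function body. */
static void import_function(const char *value)
{
    if (strncmp(value, "() {", 4) == 0)
        evaluate_commands(value);   /* BUG: also runs trailing commands */
}

int main(void)
{
    /* Only the function definition part should be interpreted here. */
    import_function("() { :; }; echo pwned");
    return 0;
}
```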

To conclude, Shellshock’s gravity should not be underestimated, because of its impact on old systems initially intended for a more benevolent Internet. But any recently designed system should already avoid Bash, or at the very least avoid feeding Bash any input that comes from the outside world. Here is another take on the same topic. Key quote: “A robustly engineered program should intersect with bash, or any other Turing-complete system, at as few points as possible”.
