Rich Felker, maintainer of the musl libc, recently tweeted to ask:
Anyone want to guess what gcc generates for a==b?0:a-b? Any ideas why?
The answer to the first question is that, when
a and b are two pointers to char, GCC generates a sequence of several instructions, including a “conditional move” instruction, for the source-level construct a==b?0:a-b:
movq %rdi, %rax
movl $0, %edx
subq %rsi, %rax
cmpq %rsi, %rdi
cmove %rdx, %rax
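For concreteness, the assembly above is the kind of sequence GCC emits at -O2 on x86-64 for a function like the following sketch (the function name diff is mine, not from the tweet):

```c
#include <stddef.h>

/* GCC compiles the ternary below to a subtraction plus a
   conditional move, even though the subtraction alone already
   produces 0 whenever a == b. */
ptrdiff_t diff(char *a, char *b)
{
    return a == b ? 0 : a - b;
}
```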
Of these five instructions, just the first and the third would be sufficient to produce the same result. On the architecture targeted by GCC here, applying the SUB instruction to two pointers that satisfy
a==b can only produce a result of zero. In the case where
a is a pointer one past the end of some object and
b is a pointer to the beginning of another object, the standard allows
a==b to be true or false, but in this case the source-level construct
a-b is undefined, and any value computed by SUB is a valid result too.
One way to make GCC generate only the first and the third instructions would be to write
a-b. Unfortunately, the C standard leaves this expression undefined when
a and b are both null pointers, whereas
a==b is defined (and is true). If GCC documented that it allows a null pointer to be subtracted from another (and that the result is 0), this would be a safe alternative when targeting GCC. As far as I know, however, GCC does not document this behavior. Since GCC does not document it and the C standard leaves it undefined, GCC could implement a new optimization tomorrow that improves its SPECint score by 0.5% while breaking programs relying on this expression.
The situation is similar to that of implementing rotate in C in 2013. Shifting a 32-bit value by
32-n is undefined if
n can be 0, but, at the time, Clang would only optimize to the efficient ROL instruction the pattern
(x << n) | (x >> (32-n)), to the exclusion of
n ? (x << n) | (x >> (32-n)) : x or
(x << n) | (x >> (-n & 31)). Sometimes, in the transition from C-as-it-was-written-in-1990 to modern C, the compiler forces developers to write circumlocutions to protect themselves from UB-based optimizations, and then punishes them for those circumlocutions.
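For reference, the idiom that compilers nowadays recognize and turn into a single ROL instruction masks both shift counts so that neither can reach 32. This is a sketch of that well-known pattern (the name rotl32 is mine):

```c
#include <stdint.h>

/* Rotate x left by n, for any n. After the mask, n is in 0..31;
   (-n) & 31 equals (32 - n) & 31, so the right-shift amount is
   also in 0..31 and no shift is undefined, including n == 0. */
uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;
    return (x << n) | (x >> (-n & 31));
}
```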
Writing a sound static analyzer for C implies, first and foremost, formalizing the language that the analyzer will allow programs to use. “Just using the standard as reference” is illusory: in several places, the C standard is ambiguous, or even self-contradictory, and what matters then is the consensus interpretation that compiler authors have arrived at. Sometimes the situation may be too obscure to ever have been discussed. A question I asked myself in 2011 is somewhat related to Rich’s: when
a is a null pointer and
b is a valid pointer,
a < b is left undefined by the standard, but compilers currently generate code that makes this expression evaluate to 1, and existing code relies on this. At the time, we chose to warn for such pointer comparisons and to allow the subtraction of null pointers. Maybe the latter allowance should be revisited now that there exists at least one person in the world, Rich, who cares about this case.
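If a program genuinely needs an ordering on pointers that is defined even when one operand is null, the usual escape hatch is again conversion through uintptr_t. This sketch (the name ptr_less is mine) assumes an implementation where the conversion preserves address order and maps the null pointer to 0, which is the mainstream behavior but not a guarantee of the standard:

```c
#include <stdbool.h>
#include <stdint.h>

/* Compare pointers after conversion to uintptr_t. This avoids the
   undefined relational comparison of unrelated pointers; on typical
   platforms the null pointer converts to 0 and therefore orders
   before every valid pointer. */
bool ptr_less(const void *a, const void *b)
{
    return (uintptr_t)a < (uintptr_t)b;
}
```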
Mandatory “no sane compiler would optimize this” post-scriptum
LLVM assuming that, since
x << n appears in the source code,
n cannot be 32 or more has already caused unintended behavior.
In order to break programs that rely on
a-b when both
a and b are null pointers, a compiler would only need to propagate, from any pointer subtraction involving them, the information that
a and b aren't null. This is as easy as assuming that a pointer is not null if it was passed to
memcpy, even with a size that was not known to be nonzero, and GCC already does this.
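The memcpy case looks like this sketch (the function name is mine). A compiler is entitled to fold the null test to 0 on the grounds that a null dst would already have made the memcpy call undefined, even in executions where n is 0 at run time:

```c
#include <stddef.h>
#include <string.h>

/* After the memcpy call, a compiler may assume dst and src are
   non-null, regardless of n, so the test below may be compiled
   as if it were "return 0;". */
int copy_then_check(char *dst, const char *src, size_t n)
{
    memcpy(dst, src, n);
    return dst == NULL;
}
```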