[LLVMdev] Definition of C/C++ integral conversion (was Re: nsw/nuw for trunc)

Tobias Grosser tobias at grosser.es
Fri Sep 30 07:59:27 PDT 2011


On 08/11/2011 02:56 PM, Duncan Sands wrote:

> Hi Florian,
>
>> we'd like to be able to check for loss of information in trunc
>> operations in our LLVM-based bounded model checker [1]. For this it is
>> important if the trunc was on a signed or unsigned integer, so we need
>> nsw and nuw flags for this. Would you accept a patch that adds these
>> flags to LLVM (and possibly clang)?
>
> nsw/nuw don't mean signed/unsigned arithmetic. They mean that
> signed/unsigned overflow in the operation results in undefined
> behaviour. As far as I know, truncating a large signed value to a
> signed integer type that is too small does not result in undefined
> behaviour. For example, the result of (signed char)999 is perfectly
> well defined. So it seems to me that nsw/nuw on truncate (which is
> what this cast turns into in LLVM) don't make any sense.
>
> Also, a truncate operation doesn't need to be signed or unsigned,
> since the operation performed is exactly the same (the same set of
> input bits -> the same set of output bits) regardless of the sign of
> the original types.
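[Editor's note: the point that truncation is sign-agnostic can be illustrated with a small C sketch. The variable names are illustrative, not from the thread; the program assumes a two's-complement target (which all mainstream LLVM targets are).]

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* The same 64-bit pattern, whether we call it signed or unsigned:
           truncation keeps the low 8 bits either way. */
        int64_t  s = -999;            /* bit pattern ...FFFFFFFFFFFFFC19 */
        uint64_t u = (uint64_t)-999;  /* identical bit pattern */

        uint8_t ts = (uint8_t)s;      /* low byte: 0x19 = 25 */
        uint8_t tu = (uint8_t)u;      /* low byte: 0x19 = 25 */

        printf("%u %u\n", ts, tu);    /* both values agree */
        return 0;
    }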

Hi Duncan,

sorry for digging out such an old thread. You stated that '(signed char) 999' is perfectly well defined in C/C++. I just looked into the C++ standard [1] and could not find this. The section that seems to apply is:


4.7 Integral conversions

1) An rvalue of an integer type can be converted to an rvalue of another integer type. An rvalue of an enumeration type can be converted to an rvalue of an integer type.

2) If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer [..]

3) If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.

4.7/3 suggests to me that the standard itself does not define a result for '(signed char) 999'; the value is implementation-defined. I assume you know this section, but I could not find a reason why it should not apply in this case. Any ideas?
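[Editor's note: a minimal C sketch of the two conversion rules quoted above. The unsigned result is fully specified by 4.7/2; the signed result is implementation-defined per 4.7/3, and the value shown in the comment assumes a typical two's-complement ABI (gcc/clang on common targets).]

    #include <stdio.h>

    int main(void) {
        /* 4.7/2: conversion to unsigned is always defined, modulo 2^N:
           999 mod 256 = 231. */
        unsigned char u = (unsigned char)999;

        /* 4.7/3: conversion to signed is implementation-defined when the
           value does not fit. On common two's-complement implementations
           the low bits are reinterpreted: 231 - 256 = -25. */
        signed char s = (signed char)999;

        printf("%u %d\n", u, s);
        return 0;
    }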

I just came across this looking at the following code:

float foo(float *A, long element) {
  int converted = element;
  return A[converted];
}

which is lowered to

define float @foo(float* %A, i64 %element) nounwind uwtable {
entry:
  %conv = trunc i64 %element to i32
  %idxprom = sext i32 %conv to i64
  %arrayidx = getelementptr inbounds float* %A, i64 %idxprom
  %value = load float* %arrayidx
  ret float %value
}

LLVM cannot optimize away the trunc operation, which not only results in slower code, but also makes the SCEV expressions overly complex (which blocks further optimizations). Using int-typed index expressions is a pretty common pattern, and it would be great if we could optimize this.
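[Editor's note: a C sketch of the pattern in question. foo reproduces the narrowing that generates the trunc/sext pair in the IR above; foo_wide is a hypothetical variant (not from the thread) showing that keeping the index at pointer width avoids the narrowing entirely.]

    #include <stdio.h>

    /* The index arrives as 'long' but is narrowed to 'int' before use:
       this is what produces the trunc i64 -> i32 / sext i32 -> i64 pair. */
    float foo(float *A, long element) {
        int converted = (int)element;
        return A[converted];
    }

    /* Keeping the index at its natural 64-bit width: no trunc is emitted,
       and the resulting SCEV stays simple. */
    float foo_wide(float *A, long element) {
        return A[element];
    }

    int main(void) {
        float data[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        printf("%g %g\n", foo(data, 2), foo_wide(data, 2));
        return 0;
    }

For indices known to be non-negative and in range, both functions load the same element; the difference is only in the IR the frontend emits.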

Thanks,
Tobi

[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1905.pdf


