[Python-Dev] Cannot declare the largest integer literal.

Greg Stein gstein@lyra.org
Mon, 8 May 2000 15:11:00 -0700 (PDT)


On Tue, 9 May 2000, Christian Tismer wrote:

> ... Right. That was the reason for my first, dumb, proposal: Always
> interpret a number as negative and negate it once more. That makes it
> positive. In a post process, remove double-negates. This leaves
> negations always where they are allowed: On negatives.
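The quoted scheme can be sketched roughly as follows (a hypothetical illustration, not the actual tokenizer code; `literal_value` and the digit-by-digit loop are my own names, and the real implementation would be in C where the range check matters):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1  # 32-bit int limits

def literal_value(digits, negated):
    # Sketch of the proposal: accumulate every decimal literal as a
    # NEGATIVE number, so -2147483648 stays in range even though
    # +2147483648 would already overflow a 32-bit int.
    value = 0
    for d in digits:
        value = value * 10 - int(d)
        assert value >= INT_MIN, "literal too large"
    # Post-process: a unary minus in front cancels the built-in
    # negation (the "double-negate"), yielding the positive value.
    # In C, this final negation is where +2**31 would be rejected.
    return value if negated else -value

assert literal_value("2147483648", negated=True) == -2147483648
assert literal_value("2147483647", negated=False) == 2147483647
```

The point of the trick is that the most negative value has a larger magnitude than the most positive one, so accumulating on the negative side never overflows mid-parse.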

IMO, that is a non-intuitive hack. It would increase the complexity of Python's parsing internals, again with little measurable benefit.

I do not believe that I've run into a case of needing -2147483648 in the source of one of my programs. If I had, then I'd simply switch to 0x80000000 and/or assign it to INT_MIN.
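For context, the underlying issue is visible in how Python still parses the expression today (a sketch using the modern `ast` module; the 2000-era tokenizer behaved analogously, except that its ints were bounded):

```python
import ast

# Python parses "-2147483648" as unary negation applied to the
# POSITIVE literal 2147483648, not as a single negative literal.
node = ast.parse("-2147483648", mode="eval").body
assert isinstance(node, ast.UnaryOp)
assert isinstance(node.op, ast.USub)
assert node.operand.value == 2147483648  # the positive part alone

# On a 32-bit build circa 2000, 2147483648 already exceeded
# INT_MAX (2147483647), so the literal failed before the minus
# was ever applied.
```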

-1 on making Python more complex to support this single integer value. Users should be pointed to 0x80000000 to represent it. (A FAQ entry and/or a comment in the language reference would be a Good Thing.)
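A minimal sketch of the suggested spelling: on a 32-bit Python of that era, the hex literal 0x80000000 denoted the signed value directly; today's unbounded ints make it simply positive, so reproducing the old behavior needs an explicit 32-bit reinterpretation (here via `struct`, my choice for illustration):

```python
import struct

# With modern unbounded integers, 0x80000000 is just positive.
assert 0x80000000 == 2147483648

# Reinterpreting the same 32 bits as a signed int gives INT_MIN,
# the value the literal produced on a 32-bit Python circa 2000.
INT_MIN = struct.unpack("<i", struct.pack("<I", 0x80000000))[0]
assert INT_MIN == -2147483648
```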

Cheers, -g

-- Greg Stein, http://www.lyra.org/