[Python-Dev] PEP 393 Summer of Code Project

Glenn Linderman v+python at g.nevcal.com
Wed Aug 31 21:14:52 CEST 2011


On 8/31/2011 10:20 AM, Guido van Rossum wrote:

On Wed, Aug 31, 2011 at 1:09 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:

The str type itself can presently be used to process other character encodings: if they are fixed-width < 32-bit elements, those encodings might be considered Unicode encodings, but there is no requirement that they are, and some operations on str may operate with knowledge of some Unicode semantics, so there are caveats.

Actually, the str type in Python 3 and the unicode type in Python 2 are constrained everywhere to either 16-bit or 21-bit "characters". (Except when writing C code, which can do any number of invalid things so is the equivalent of assuming 1 == 0.) In particular, on a wide build, there is no way to get a code point >= 2**21, and I don't want PEP 393 to change this. So at best we can use these types to represent arrays of 21-bit unsigned ints. But I think it is more useful to think of them as always representing "some form of Unicode", whether that is UTF-16 (on narrow builds) or 21-bit code points or perhaps some vaguely similar superset -- but for those code units/code points that are representable and valid (either code points or code units) according to the (supported version of) the Unicode standard, the meaning of those code points/units matches that of the standard. Note that this is different from the bytes type, where the meaning of a byte is entirely determined by what it means in the programmer's head.
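The distinction drawn above -- str elements carry Unicode semantics, bytes elements mean only what the programmer decides -- can be illustrated with a small sketch (not from the original mail, just an illustration of the point):

```python
# str operations consult the Unicode standard: case mapping knows
# that U+00C9 (É) lowercases to U+00E9 (é).
s = "É"
print(s.lower())            # é

# The same character encoded as bytes is just a pair of integers;
# bytes.lower() only touches ASCII and leaves them alone.
b = s.encode("utf-8")
print(list(b))              # [195, 137]
print(b.lower() == b)       # True -- no Unicode semantics applied
```

The bytes object is unchanged by lower() because, as Guido notes, the interpreter attaches no character meaning to individual bytes.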

Sorry, my Perl background is leaking through. I didn't double-check that str constrains the values of each element to the range 0 to 0x10FFFF, but I see now by testing that it does. For some of my ideas, then, either a subtype of str would have to be able to relax that constraint, or str would not be the appropriate base type to use (but there are other base types that could be used, so this is not a serious issue for the ideas).
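The test Glenn describes can be reproduced directly; this sketch (not part of the original mail) shows the constraint on a wide build, where sys.maxunicode reports the largest code point a str element may hold:

```python
import sys

# On a wide (UCS-4) build, the ceiling is the top of the Unicode range.
print(hex(sys.maxunicode))      # 0x10ffff

# chr() accepts the last valid code point...
print(hex(ord(chr(0x10FFFF))))  # 0x10ffff

# ...but rejects anything beyond it, so str cannot hold arbitrary
# 21-bit integers, only Unicode code points.
try:
    chr(0x110000)
except ValueError as exc:
    print("rejected:", exc)
```

On a narrow (UTF-16) build of the same era, sys.maxunicode would instead be 0xFFFF, which is part of what PEP 393 set out to unify.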

I have no problem with thinking of str as representing "some form of Unicode". None of my proposals change that, although they may change other things, and may invent new forms of Unicode representations. You have stated that it is better to document what str actually does, rather than attempt to adhere slavishly to Unicode standard concepts. The Unicode Consortium may well define legal, conforming bytestreams for communicating processes, but languages and applications are free to use other representations internally. We can either artificially constrain ourselves to minor tweaks of the legal conforming bytestreams, or we can invent a representation (whether called str or something else) that is useful and efficient in practice.
