[Python-Dev] Re: Regression in unicodestr.encode()?
Guido van Rossum guido@python.org
Wed, 10 Apr 2002 11:42:07 -0400
- Previous message: [Python-Dev] Re: Regression in unicodestr.encode()?
- Next message: [Python-Dev] Re: Regression in unicodestr.encode()?
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Python being 8-bit clean, embedded NUL bytes are less of a problem for it than for languages that rely heavily on NUL-terminated C strings. I hope that Python will stick to its current UTF-8 behaviour, even if C extension writers apply some pressure for a change.
Python won't change its story here. (We will get to the bottom of Barry's bug, which had nothing to do with this issue.)
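The behaviour under discussion can be checked with a quick sketch (written here in modern Python 3 syntax, which differs from the 2002-era `u""` literals): Python's UTF-8 codec encodes U+0000 as a single 0x00 byte and round-trips it, rather than using a "modified UTF-8" escape such as the 0xC0 0x80 pair some NUL-terminated-string implementations use.

```python
# A NUL character survives the UTF-8 codec unchanged:
# Python strings and bytes are 8-bit clean.
s = "abc\x00def"
encoded = s.encode("utf-8")

# Standard UTF-8 encodes U+0000 as a single zero byte...
assert encoded == b"abc\x00def"

# ...and decoding restores the original string exactly.
assert encoded.decode("utf-8") == s
```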
--Guido van Rossum (home page: http://www.python.org/~guido/)