The Unicode HOWTO (http://docs.python.org/3.3/howto/unicode.html) says the following: "UTF-8 has several convenient properties: (...) 2. A Unicode string is turned into a sequence of bytes containing no embedded zero bytes. This avoids byte-ordering issues, and means UTF-8 strings can be processed by C functions such as strcpy() and sent through protocols that can’t handle zero bytes." This is not right. UTF-8 uses the zero byte to represent the Unicode code point U+0000 (the ASCII NULL character). This is a valid character in UTF-8 and is handled just fine by Python's UTF-8 encoding and decoding.
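A quick interactive check illustrates the point (any recent Python 3 should behave the same way):

>>> '\x00'.encode('utf-8')
b'\x00'
>>> b'a\x00b'.decode('utf-8')
'a\x00b'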
This is right in 99.99% of cases: UTF-8 does not use zero bytes to encode any character other than an explicit U+0000. UTF-16, for example, encodes 'a' as b'\xff\xfea\x00'.
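To make the comparison concrete (interpreter output; the UTF-16 BOM and byte order shown assume a little-endian build):

>>> 'a'.encode('utf-16')     # BOM plus 'a', contains zero bytes
b'\xff\xfea\x00'
>>> 'a'.encode('utf-8')      # no zero bytes unless U+0000 is present
b'a'
>>> 'a\x00b'.encode('utf-8')
b'a\x00b'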
So a correct statement would be "A UTF-8 string is turned into a sequence of bytes that contains embedded zero bytes only where they represent the NULL character (U+0000)." I think it's important to correct this because the part about processing UTF-8 with C functions like strcpy() is wrong and could cause bugs.
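As a rough illustration of the strcpy() problem from within Python, ctypes can stand in for C string handling here: create_string_buffer() copies the bytes into a NUL-terminated buffer, and its .value attribute stops at the first zero byte, just as strcpy()/strlen() would.

>>> import ctypes
>>> data = 'abc\x00def'.encode('utf-8')
>>> data
b'abc\x00def'
>>> ctypes.create_string_buffer(data).value   # what a C string consumer would see
b'abc'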
I agree that the documentation should be updated. Would you mind creating a pull request, mbiggs? There are UTF-8 variants which guarantee that the encoded text contains no zero bytes (see Modified UTF-8), but Python only provides the standard UTF-8 codec and UTF-8 with BOM.
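For reference, Modified UTF-8 avoids zero bytes by encoding U+0000 as the overlong two-byte sequence C0 80. Python has no codec for it; the helper below is only a hypothetical sketch of that one substitution and ignores the other difference (the CESU-8-style handling of supplementary characters):

def encode_modified_utf8(text):
    # Standard UTF-8 only ever emits a 0x00 byte for U+0000, so replacing
    # those bytes with C0 80 gives the Modified UTF-8 form of the string
    # (supplementary-plane handling omitted).
    return text.encode('utf-8').replace(b'\x00', b'\xc0\x80')

>>> encode_modified_utf8('a\x00b')
b'a\xc0\x80b'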
Minor bikeshed: if updating the documentation, refer to U+0000 as "the null character" or "NUL", not "NULL". Using "NULL" allows for confusion with NULL pointers; "the null character" (the name used in the Unicode standard) or "NUL" (the official three-letter abbreviation in ASCII, and I think in Unicode too) has no such opportunity for confusion.