Issue 16223: [doc] untokenize returns a string if no encoding token is recognized
Created on 2012-10-14 05:49 by eric.snow, last changed 2022-04-11 14:57 by admin.
Messages (6)
Author: Eric Snow (eric.snow) *
Date: 2012-10-14 05:49
If you pass an iterable of tokens and none of them is an ENCODING token, tokenize.untokenize() returns a string. This is contrary to what the docs say [1]:
It returns bytes, encoded using the ENCODING token, which is the first token sequence output by tokenize().
Either the docs should be clarified or untokenize() fixed. My vote is to fix it. It could check that the first token is an ENCODING token and raise an exception. Alternatively, it could fall back to using 'utf-8' by default.
[1] http://docs.python.org/py3k/library/tokenize.html#tokenize.untokenize
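A minimal sketch reproducing the reported behavior, using only the stdlib tokenize module:

import io
import tokenize

# tokenize() emits an ENCODING token as its first token.
tokens = list(tokenize.tokenize(io.BytesIO(b"1 + 2").readline))

# With the ENCODING token present, untokenize() returns bytes.
print(type(tokenize.untokenize(tokens)))      # <class 'bytes'>

# With the ENCODING token stripped, it silently returns str instead.
print(type(tokenize.untokenize(tokens[1:])))  # <class 'str'>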
Author: Tomasz Maćkowiak (kurazu) *
Date: 2013-07-06 15:40
untokenize also has some other problems, especially when it uses compat mode: if no ENCODING token is present in the input, it will skip the first significant token.
For example, for input like this (code simplified):

tokens = tokenize(b"1 + 2")
untokenize(tokens[1:])
'+2 '
It also doesn't adhere to another documentation item: "The iterable must return sequences with at least two elements. [...] Any additional sequence elements are ignored."
In the current implementation, sequences can be either 2 or 5 elements long, and in the 5-element variant the last 3 elements are not ignored but are used to reconstruct the source code with its original whitespace.
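A sketch of that difference on current versions (the compat-mode output shown in the comment is approximate):

import io
import tokenize

src = b"x  =  1"
tokens = list(tokenize.tokenize(io.BytesIO(src).readline))

# 5-element sequences: the start/end positions restore the original spacing.
print(tokenize.untokenize(tokens))                   # b'x  =  1'

# 2-element (type, string) sequences force the compat path; the positional
# elements are gone, so spacing is normalized (roughly b'x =1 ').
print(tokenize.untokenize([t[:2] for t in tokens]))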
I'm trying to prepare a patch for those issues.
Author: Tomasz Maćkowiak (kurazu) *
Date: 2013-07-07 10:24
Attached is a patch for untokenize, its tests and docs, along with some minor PEP 8 improvements. The patch should fix the unicode output and the handling of some corner cases in untokenize.
Author: Tomasz Maćkowiak (kurazu) *
Date: 2013-07-07 11:24
Attached is a corrected patch ('^' and '$' added to the regexps in the tests).
Author: Terry J. Reedy (terry.reedy) *
Date: 2014-02-18 04:27
The no-encoding issue was mentioned in #12691, but needed to be opened as a separate issue, which is this one. The doc, as opposed to the docstring, says "Converts tokens back into Python source code". Python 3.3 source code is defined in the reference manual as a sequence of unicode chars. The doc also says "The reconstructed script is returned as a single string." In 3.x, that also means unicode, not bytes. On the other hand, tokenize does not currently accept actual Python code (unicode), only encoded code. I think that should change, but that is a different issue (literally).
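To illustrate that last point, a quick sketch (generate_tokens() is the str-based entry point in the stdlib; the exact exception may vary by version):

import io
import tokenize

# tokenize() requires a bytes readline; a str readline fails while the
# encoding is being detected (a TypeError on current versions).
try:
    list(tokenize.tokenize(io.StringIO("1 + 2").readline))
except TypeError as exc:
    print("rejected:", exc)

# The str-based counterpart is generate_tokens(), which emits no ENCODING token.
for tok in tokenize.generate_tokens(io.StringIO("1 + 2").readline):
    print(tok)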
For this issue, I think the doc and docstring should change to match current behavior: output a string unless the tokens (which contain unicode strings, not bytes) start with a non-empty ENCODING token. Changing the behavior would break code that believes the code and doc (as opposed to the docstring).
Since tokenize will only emit ENCODING as the first token, I would be inclined to ignore ENCODING tokens thereafter, but that might be seen as an impermissible change in behavior.
-- The dropped-token issue is the subject of #8478, which has a first patch. It was mentioned again in #12691, among several other issues, and is the subject of the duplicate issue #16224 (now closed), which has a second patch.
The actual issue is that the first token of iterator input gets dropped, but not that of list input. The fix is covered by #8478, so the dropped token is not part of this issue.
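For reference, a sketch of the list/iterator asymmetry; the iterator result noted in the comment is the pre-fix behavior described above, not what fixed versions print:

import io
import tokenize

# 2-element tokens force the compat path, where the bug lived; slicing
# off the ENCODING token makes untokenize() return str.
tokens = [t[:2] for t in tokenize.tokenize(io.BytesIO(b"1 + 2").readline)][1:]

# List input: the first token survives (compat re-iterates the list).
print(tokenize.untokenize(list(tokens)))   # roughly '1 +2 '

# Iterator input on affected versions: the first significant token was
# consumed while probing the input, yielding '+2 '.
print(tokenize.untokenize(iter(tokens)))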
Author: Irit Katriel (iritkatriel) *
Date: 2021-12-08 16:39
The doc has been updated by now:
"It returns bytes, encoded using the ENCODING token, which is the first token sequence output by tokenize(). If there is no encoding token in the input, it returns a str instead."
https://docs.python.org/3/library/tokenize.html#tokenize.untokenize
The docstring doesn't say this though.
History

Date                 User         Action  Args
2022-04-11 14:57:37  admin        set     github: 60427
2021-12-08 16:39:39  iritkatriel  set     title: untokenize returns a string if no encoding token is recognized -> [doc] untokenize returns a string if no encoding token is recognized; nosy: + iritkatriel; messages: +; versions: + Python 3.9, Python 3.10, Python 3.11, - Python 2.7, Python 3.3, Python 3.4; components: + Documentation
2014-02-18 04:27:36  terry.reedy  set     versions: - Python 3.2; nosy: + terry.reedy; messages: +; assignee: eric.snow -> terry.reedy
2013-07-07 11:24:44  kurazu       set     files: + bug16223_2.patch; messages: +
2013-07-07 10:24:47  kurazu       set     files: + bug16223.patch; keywords: + patch
2013-07-07 10:24:08  kurazu       set     messages: +
2013-07-06 15:40:24  kurazu       set     nosy: + kurazu; messages: +
2013-06-25 05:27:50  eric.snow    set     assignee: eric.snow
2012-10-14 05:49:11  eric.snow    create