[Python-Dev] Parsing f-strings from PEP 498 -- Literal String Interpolation

Fabio Zadrozny fabiofz at gmail.com
Fri Nov 4 10:50:01 EDT 2016


Answers inline...

On Fri, Nov 4, 2016 at 5:56 AM, Eric V. Smith <eric at trueblade.com> wrote:

On 11/3/2016 3:06 PM, Fabio Zadrozny wrote:

Hi Python-Dev,

I'm trying to get my head around what's accepted in f-strings -- https://www.python.org/dev/peps/pep-0498/ seems very light on the details of what it accepts as an expression and how things should actually be parsed (and the current implementation still doesn't seem to be in a state for a final release, so I thought asking on python-dev would be a reasonable option).

In what way do you think the implementation isn't ready for a final release?

Well, the cases listed in the docs (https://hg.python.org/cpython/file/default/Doc/reference/lexical_analysis.rst) don't work in the latest release (they give SyntaxErrors) -- and the bug I created about it, http://bugs.python.org/issue28597, was promptly closed as a duplicate -- so I assumed (maybe wrongly?) that the parsing still needs work.

I was thinking there'd be some grammar for it (something like https://docs.python.org/3.6/reference/grammar.html), but all I could find related to this is a quote saying that f-strings should be something like:

f ' <text> { <expression> <optional !s, !r, or !a> <optional : format specifier> } <text> ... '

So, given that, is it safe to assume that <expression> would be equal to the "test" node from the official grammar?

No. There are really three phases here:

1. The f-string is tokenized as a regular STRING token, like all other strings (f-, b-, u-, r-, etc.).

2. The parser sees that it's an f-string, and breaks it into expression and text parts.

3. For each expression found, the expression is compiled with PyParser_ASTFromString(..., Py_eval_input, ...).

Step 2 is the part that limits what types of expressions are allowed. While scanning for the end of an expression, it stops at the first '!', ':', or '}' that isn't inside of a string and isn't nested inside of parens, braces, and brackets.
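For illustration, a rough sketch of that step-2 scan in Python (my own sketch, not CPython's actual code; it ignores details such as '!=', escaped quotes, triple-quoted strings, and format specs containing nested fields) might look like this:

    def find_expr_end(s, i):
        """Return the index at which the expression part of a replacement
        field ends: the first '!', ':' or '}' at nesting depth 0 that is
        not inside a string literal."""
        depth = 0
        while i < len(s):
            ch = s[i]
            if ch in ('"', "'"):
                # Naively skip over a string literal (no escape handling).
                i = s.index(ch, i + 1)
            elif ch in '([{':
                depth += 1
            elif ch in ')]}' and depth > 0:
                depth -= 1
            elif ch in '!:}' and depth == 0:
                return i
            i += 1
        raise SyntaxError("unterminated expression in f-string")

    find_expr_end("(lambda x:3)!s:.20}", 0)   # -> 12: expression is "(lambda x:3)"
    find_expr_end("lambda x:3}", 0)           # -> 8: expression is just "lambda x"

The second call shows why an unparenthesized lambda fails: the scan stops at the colon, so only "lambda x" is handed to the compiler, and that alone doesn't parse.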

It'd be nice if at least this description could be added to the PEP (as all other language implementations and IDEs will have to work the same way and will probably reference it) -- a grammar example, even if not used, would be helpful (personally, I think hand-crafted parsers are always worse in the long run compared to having a proper grammar with a parser, although I understand that if you're not really used to it, it may be more work to set it up).

Also, I find it a bit troubling that PyParser_ASTFromString is used there rather than just the grammar node related to an expression. I understand it's probably the easier approach, but in the end don't you have to filter the result and accept only what's beneath the "test" node from the grammar (i.e., what a lambda body accepts)?
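(As a rough Python-level illustration of that difference: ast.parse with mode='eval' is the analogue of PyParser_ASTFromString with Py_eval_input, and eval input is a testlist -- slightly broader than a single "test" node, since a bare tuple parses too:)

    import ast

    # The whole expression part, compiled in "eval" mode.
    print(ast.dump(ast.parse("(lambda x: 3)", mode='eval').body))

    # eval input is a testlist, so an unparenthesized tuple is accepted,
    # which a lambda body (a single "test") would not be.
    print(ast.dump(ast.parse("1, 2", mode='eval').body))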

The nesting-tracking is why these work:

>>> f'{(lambda x:3)}'
'<function <lambda> at 0x000000000296E560>'
>>> f'{(lambda x:3)!s:.20}'
'<function <lambda> a'

But this doesn't:

>>> f'{lambda x:3}'
  File "<fstring>", line 1
    (lambda x)
             ^
SyntaxError: unexpected EOF while parsing

Also, backslashes are not allowed anywhere inside of the expression. This was a late change right before beta 1 (I think), and differs from the PEP and docs. I have an open item to fix them.

I initially thought it obviously would be, but the PEP says that using a lambda inside the expression would conflict because of the colon (which wouldn't happen if a proper grammar was actually used for this parsing, as there'd be no conflict: the lambda would properly consume the colon). So I guess some pre-parsing step takes place to separate the expression before it's parsed, and I'm interested in knowing how exactly that should work when the implementation is finished -- lots of plus points if there's actually a grammar to back it up :)
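For reference, the backslash restriction mentioned above looks roughly like this in the current 3.6 betas (behaviour may still change, given the open item):

    >>> f'{"\n"}'
    SyntaxError: f-string expression part cannot include a backslash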

I've considered using the grammar and tokenizer to implement f-string parsing, but I doubt it will ever happen. It's a lot of work, and everything that produced or consumed tokens would have to be aware of it. As it stands, if you don't need to look inside of f-strings, you can just treat them as regular STRING tokens.
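A quick way to see that from Python itself, using the tokenize module (the exact token stream may vary slightly between versions):

    import io
    import tokenize

    src = "f'{a + b}'\n"
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))

    # STRING "f'{a + b}'"   <- the whole f-string is a single token
    # NEWLINE '\n'
    # ENDMARKER ''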

Well, I think all language implementations / IDEs (or at least those which want to give syntax errors) will have to look inside f-strings.

Also, you could still have a separate grammar saying how to look inside f-strings (this would make the lives of other implementors easier), even if it was only applied as a post-processing step, as you're doing now.
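For illustration, such a grammar might look roughly like this (a hand-written sketch in the notation of the reference docs, not anything from CPython):

    f_string          ::=  (literal_char | "{{" | "}}" | replacement_field)*
    replacement_field ::=  "{" expression ["!" conversion] [":" format_spec] "}"
    conversion        ::=  "s" | "r" | "a"
    format_spec       ::=  (literal_char | replacement_field)*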

I hope that helps.

Eric.

It does, thank you very much.

Best Regards,

Fabio


