[Python-Dev] Python parser performance optimizations
Guido van Rossum guido at python.org
Thu Sep 15 12:01:15 EDT 2016
I wonder if this patch could just be rejected instead of lingering forever? It clearly has no champion among the current core devs and therefore it won't be included in Python 3.6 (we're all volunteers so that's how it goes).
The use case for the patch is also debatable: Python's parser wasn't designed to efficiently parse huge data tables like that, and if you have that much data, using JSON is the right answer. So this doesn't really scratch anyone's itch except that of the patch author (Artyom).
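[Editorial note: a minimal sketch of the suggested alternative, assuming the table lives in a hypothetical tables.json file rather than being embedded as a Python literal; the file name and shape are illustrative, not taken from the ticket.]

    import json

    # Instead of a module containing a multi-megabyte literal such as
    # DATA = [[1, 2, 3], ...], which the Python parser must tokenize and
    # build a parse tree for, keep the table in an external JSON file.
    with open("tables.json", "r", encoding="utf-8") as f:
        DATA = json.load(f)  # handled by the json module, not the Python parser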
From a quick look it seems the patch is very disruptive in terms of what it changes, so it's not easy to review.
I recommend giving up, closing the issue as "won't fix", recommending to use JSON, and moving on. Sometimes a change is just not worth the effort.
--Guido
On Tue, Aug 9, 2016 at 1:59 AM, Artyom Skrobov <Artyom.Skrobov at arm.com> wrote:
Hello,
This is a monthly ping to get a review on http://bugs.python.org/issue26415 -- “Excessive peak memory consumption by the Python parser”. Following the comments from July, the patches now include updates to Misc/NEWS and compiler.rst describing the change. The code change itself is still the same as a month ago.
From: Artyom Skrobov
Sent: 07 July 2016 15:44
To: python-dev at python.org; steve at pearwood.info; mafagafogigante at gmail.com; greg.ewing at canterbury.ac.nz
Cc: nd
Subject: RE: Python parser performance optimizations

Hello,

This is a monthly ping to get a review on http://bugs.python.org/issue26415 -- “Excessive peak memory consumption by the Python parser”.

The first patch of the series (an NFC refactoring) was successfully committed earlier in June, so the next step is to get the second patch, “the payload”, reviewed and committed.

To address the concerns raised by the commenters back in May: the patch doesn’t lead to negative memory consumption, of course. The base for calculating percentages is the smaller number of the two; this is the same style of reporting that perf.py uses. In other words, “200% less memory usage” is a threefold shrink. The absolute values, and the way they were produced, are all reported under the ticket.

From: Artyom Skrobov
Sent: 26 May 2016 11:19
To: 'python-dev at python.org'
Subject: Python parser performance optimizations

Hello,

Back in March, I posted a patch at http://bugs.python.org/issue26526 -- “In parsermodule.c, replace over 2KLOC of hand-crafted validation code, with a DFA”. The motivation for this patch was to enable a memory footprint optimization, discussed at http://bugs.python.org/issue26415

My proposed optimization reduces the memory footprint by up to 30% on the standard benchmarks, and by 200% on a degenerate case which sparked the discussion. The run time stays unaffected by this optimization.

Python Developer’s Guide says: “If you don’t get a response within a few days after pinging the issue, then you can try emailing python-dev at python.org asking for someone to review your patch.” So, here I am.
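[Editorial note: for readers puzzled by the percentage convention above, here is a quick worked example of perf.py-style reporting, where the smaller of the two measurements is the base of the percentage; the numbers are made up for illustration and are not the ticket's actual measurements.]

    # Illustrative numbers only, not the measurements reported on the ticket.
    before = 600  # MB, peak memory before the patch
    after = 200   # MB, peak memory after the patch

    # perf.py-style reporting: the smaller number is the base, so a
    # threefold shrink (600 MB -> 200 MB) is reported as "200% less".
    change = (before - after) / min(before, after) * 100
    print(f"{change:.0f}% less memory usage")  # -> 200% less memory usage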
--
--Guido van Rossum (python.org/~guido)