msg86203 - (view) |
Author: Dave Baggett (dmbaggett) |
Date: 2009-04-20 19:01 |
The implementations of encode() and decode() are slow, and scale nonlinearly in the size of the message being encoded/decoded. A simple fix is to use an array.array('c') and append to it, rather than using string concatenation. This change makes the code more than an order of magnitude faster.
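The pattern being described can be sketched as follows. This is a hypothetical illustration, not the actual quoprimime code: quote() stands in for the real per-character quoted-printable logic, and the two encoders only contrast repeated concatenation with accumulate-and-join.

```python
def quote(ch):
    # Hypothetical per-character encoder, standing in for the
    # quoted-printable logic in quoprimime.
    return "=%02X" % ord(ch)

def encode_slow(data):
    # Repeated concatenation: each += may copy the whole
    # accumulated string, so total work grows nonlinearly
    # with the size of the input.
    out = ""
    for ch in data:
        out += quote(ch)
    return out

def encode_fast(data):
    # Accumulate chunks in a list and join once at the end:
    # linear in the size of the input.
    chunks = []
    for ch in data:
        chunks.append(quote(ch))
    return "".join(chunks)
```

Both functions produce identical output; only the accumulation strategy differs.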
|
|
msg86235 - (view) |
Author: Antoine Pitrou (pitrou) *  |
Date: 2009-04-21 15:21 |
Do you have a patch? Also, detailed timing examples are welcome (you can e.g. use the "timeit" module from the command line). |
|
|
msg86236 - (view) |
Author: Dave Baggett (dmbaggett) |
Date: 2009-04-21 15:23 |
I can certainly generate a patch for you. What form would you like it in, and against what source tree? Also, do you have a preference between the use of array.array vs. standard arrays? (I don't know whether it's good or bad to depend on "import array" in quoprimime.) |
|
|
msg86238 - (view) |
Author: Antoine Pitrou (pitrou) *  |
Date: 2009-04-21 15:30 |
The patch must be against the SVN trunk, in standard unified diff ("svn diff" does the trick).

What do you call standard arrays? Do you mean the builtin list type? Storing one separate character per list element is a poor choice because it will waste a lot of memory. The array.array type is more efficient for this. However, there are two other, more idiomatic ways of doing this:

- accumulate the chunks of encoded/decoded data into a list (but not only 1-character strings...) and join() them at the end;
- or, use a StringIO object (from the cStringIO module).

Bonus points if you time all three possibilities and submit the fastest :-)
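A minimal way to time the candidate strategies with the timeit module might look like the following sketch. It compares list-plus-join, a StringIO buffer, and naive concatenation on synthetic one-character chunks; note that io.StringIO is used here as the modern spelling of Python 2's cStringIO, and the sizes and repeat counts are arbitrary choices, not the ones used in the actual benchmarks on this issue.

```python
import timeit

# Synthetic workload: many one-character chunks to accumulate.
setup = "data = ['x'] * 10000"

# 1. Accumulate into a list, join once at the end.
t_join = timeit.timeit("''.join(data)", setup=setup, number=50)

# 2. Write chunks to a StringIO buffer and read it back.
t_sio = timeit.timeit(
    "buf = StringIO()\nfor c in data: buf.write(c)\nbuf.getvalue()",
    setup="from io import StringIO; " + setup,
    number=50)

# 3. Naive repeated string concatenation, for contrast.
t_concat = timeit.timeit(
    "s = ''\nfor c in data: s += c",
    setup=setup,
    number=50)

print(t_join, t_sio, t_concat)
```

Relative results will vary by interpreter version; the point is only that timeit makes the comparison cheap to run.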
|
|
msg86239 - (view) |
Author: Dave Baggett (dmbaggett) |
Date: 2009-04-21 15:34 |
Yes, sorry, I meant "built-in list type" not "array". Your point about using lists this way is valid, and is why I used array.array('c'). I will do as you suggest and try all three methods. I did time the array.array approach vs. the one currently in the code and it was about 30 times faster. |
|
|
msg87769 - (view) |
Author: Antoine Pitrou (pitrou) *  |
Date: 2009-05-14 21:41 |
Any news? |
|
|
msg127654 - (view) |
Author: Matt Cain (cainmatt) |
Date: 2011-01-31 19:49 |
I rewrote encode() to be simpler and faster. My version runs about 10 times faster on a 30 KB message. I have tested it somewhat, but not rigorously. See the attached patch.
|
|
msg173557 - (view) |
Author: Serhiy Storchaka (serhiy.storchaka) *  |
Date: 2012-10-22 18:45 |
3.2+ already uses StringIO, so there is no nonlinearity there. On 2.7 I couldn't find any nonlinearity with texts of up to 100 MB, so the patch is outdated as a bugfix. However, it shows a 20-30x speedup on 2.7, so it would be worth porting to 3.4. This would be a significant enhancement (if the patch can preserve the current semantics).
|
|
msg189126 - (view) |
Author: Serhiy Storchaka (serhiy.storchaka) *  |
Date: 2013-05-13 13:06 |
Here is a patch for 3.4 based on Matt's patch, with additional optimizations. It speeds up both body_encode() and header_encode().

$ ./python -m timeit -s "from email.quoprimime import body_encode as encode; x = open('Lib/decimal.py').read()[:100000]" "encode(x)"

Before patch: 1.12 sec per loop
After patch: 26.3 msec per loop

$ ./python -m timeit -s "from email.quoprimime import header_encode as encode; x = b'A'*100" "encode(x)"

Before patch: 97.9 usec per loop
After patch: 23.7 usec per loop

For non-ASCII data the difference is even larger.
|
|
msg208030 - (view) |
Author: Roundup Robot (python-dev)  |
Date: 2014-01-13 18:31 |
New changeset 4c5b1932354b by R David Murray in branch '3.3':
#20206, #5803: more efficient algorithm that doesn't truncate output.
http://hg.python.org/cpython/rev/4c5b1932354b

New changeset b6c3fc21286f by R David Murray in branch 'default':
Merge #20206, #5803: more efficient algorithm that doesn't truncate output.
http://hg.python.org/cpython/rev/b6c3fc21286f
|
|
msg208033 - (view) |
Author: R. David Murray (r.david.murray) *  |
Date: 2014-01-13 18:33 |
I've reviewed this and applied it to both 3.3 and 3.4 in order to fix issue 20206. Thanks, Serhiy. |
|
|