msg156394
Author: Serhiy Storchaka (serhiy.storchaka) *
Date: 2012-03-20 11:30
The ZIP File Format Specification (http://www.pkware.com/documents/casestudies/APPNOTE.TXT) has supported bzip2 compression since at least 2003. Since bzip2 is included in the Python standard library, it would be nice to support this method in zipfile as well. This would allow processing more foreign zip files and creating more compact distributions. The proposed patch adds a new method, ZIP_BZIP2, which is detected automatically when unpacking and can be selected explicitly when packing.
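A minimal sketch of what the proposed API looks like in use (the constant name `ZIP_BZIP2` is taken from the patch; an in-memory buffer stands in for a real archive file):

```python
import io
import zipfile

# Write an archive using the new bzip2 compression method.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_BZIP2) as zf:
    zf.writestr("hello.txt", b"hello world" * 100)

# On reading, the compression method of each entry is detected
# automatically from the archive metadata; no flag is needed.
with zipfile.ZipFile(buf) as zf:
    assert zf.read("hello.txt") == b"hello world" * 100
    assert zf.getinfo("hello.txt").compress_type == zipfile.ZIP_BZIP2
```

This mirrors how `ZIP_DEFLATED` already works: the method is a per-entry property recorded in the archive, which is why unpacking needs no user intervention.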
|
|
msg156419
Author: Martin v. Löwis (loewis) *
Date: 2012-03-20 15:23
Can you please submit a contributor form? http://python.org/psf/contrib/contrib-form/ http://python.org/psf/contrib/
|
|
msg156427
Author: Martin v. Löwis (loewis) *
Date: 2012-03-20 16:09
The patch looks good. Can you also provide a test case?
|
|
msg156436
Author: Serhiy Storchaka (serhiy.storchaka) *
Date: 2012-03-20 17:03
I am working on this. Should I add the tests to test_zipfile.py, or create a new test_zipfile_bzip2.py? It would also be worth adding a note that not all programs understand bzip2 compression (older versions of Python do not), although Info-ZIP does. My English is not good enough for the documentation.
|
|
msg156445
Author: Martin v. Löwis (loewis) *
Date: 2012-03-20 18:37
Please add it to test_zipfile. As for the documentation, I propose the wording:

"bzip2 compression was added to the zip file format in 2001. However, even more recent tools (including older Python releases) may not support it, causing either refusal to process the zip file altogether, or failure to extract individual files."

I'm not a native speaker of English, either. Feel free to put things through Google Translate; some native speaker will pick up the text and correct it.
|
|
msg156482
Author: Serhiy Storchaka (serhiy.storchaka) *
Date: 2012-03-21 09:08
Thanks to the tests, I found an error. Since bzip2 is a block algorithm, the decompressor needs to consume a certain amount of input before it starts returning data. As a result, reading in small chunks currently hits a premature end of data. I'm working on a fix.
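The block-buffering behavior described above can be demonstrated directly with the `bz2` module from the standard library (the payload here is illustrative):

```python
import bz2

# bzip2 is a block-based algorithm: the decompressor may need to consume
# many input bytes before it can emit any output. Feeding it tiny chunks
# therefore yields empty results at first, which is what tripped up the
# chunked-read path in zipfile.
payload = bz2.compress(b"x" * 100000)

decomp = bz2.BZ2Decompressor()
out = []
for i in range(0, len(payload), 16):   # deliberately small chunks
    out.append(decomp.decompress(payload[i:i + 16]))

# The first chunk is too small to complete a block, so nothing comes out.
assert out[0] == b""
# Once all input has been fed in, the full data is recovered.
assert b"".join(out) == b"x" * 100000
```

A naive reader that treats an empty return value as end-of-stream will stop too early; the fix has to keep feeding input until the decompressor produces data or the compressed stream is exhausted.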
|
|
msg156594
Author: Serhiy Storchaka (serhiy.storchaka) *
Date: 2012-03-22 16:55
All errors are fixed. All tests pass. Unfortunately, the patch turned out larger than expected. This is necessary for correct and efficient handling of large bzip2 buffers (other codecs may benefit as well).
|
|
msg156643
Author: Nadeem Vawda (nadeem.vawda) *
Date: 2012-03-23 10:05
[Adding Alan McIntyre, who is listed as zipfile's maintainer.]

I haven't yet had a chance to properly familiarize myself with the zipfile module, but I did notice an issue in the changes to ZipExtFile's read() method. The existing code uses the b"".join() idiom for linear-time concatenation, but the patch replaces it with a version that does "buf += data" after each read. CPython can (I think) do this efficiently, but it can be much slower on other implementations.

Martin:
> As for the documentation, I propose the wording
>
> "bzip2 compression was added to the zip file format in 2001. However, even more recent tools (including older Python releases) may not support it, causing either refusal to process the zip file altogether, or failure to extract individual files."

How about this?

"The zip format specification has included support for bzip2 compression since 2001. However, some tools (including older Python releases) do not support it, and may either refuse to process the zip file altogether, or fail to extract individual files."
|
|
msg156646
Author: Serhiy Storchaka (serhiy.storchaka) *
Date: 2012-03-23 10:40
> The existing code uses the b"".join() idiom for linear-time
> concatenation, but the patch replaces it with a version that does
> "buf += data" after each read.

You have it backwards. The existing code uses ``buf += data``, and I took the liberty of replacing it with the ``b"".join()`` idiom. The bzip2 codec has to deal with large pieces of data, so this may now be important. read1() still uses ``buf += data``, but not in a loop; there it concatenates only two pieces.

> "The zip format specification has included support for bzip2 compression

Thank you. Could you offer a variant covering both bzip2 and LZMA (supported since 2006)? I would put it in the upcoming patch that adds LZMA compression support to the zipfile module.
|
|
msg156647
Author: Nadeem Vawda (nadeem.vawda) *
Date: 2012-03-23 10:52
> You have it backwards. The existing code uses ``buf += data``, and I took
> the liberty of replacing it with the ``b"".join()`` idiom.

My mistake; I confused the bodies of read() and read1().

> Thank you. Could you offer a variant covering both bzip2 and LZMA
> (supported since 2006)?

"The zip format specification has included support for bzip2 compression since 2001, and for LZMA compression since 2006. However, some tools (including older Python releases) do not support these compression methods, and may either refuse to process the zip file altogether, or fail to extract individual files."
|
|
msg156670
Author: Serhiy Storchaka (serhiy.storchaka) *
Date: 2012-03-23 16:55
Fixed a regression in decompression. Nadeem Vawda, we were both wrong: `buf += data` is noticeably faster than `b''.join()` in CPython.
|
|
msg158564
Author: Martin v. Löwis (loewis) *
Date: 2012-04-17 18:19
What's the status of your contrib form?
|
|
msg158618
Author: Antoine Pitrou (pitrou) *
Date: 2012-04-18 13:23
> `buf += data` is noticeably faster than `b''.join()` in CPython.

Perhaps because your system's memory allocator is extremely good (or buf is always very small), but b''.join() is far more robust. Another alternative is accumulating in a bytearray, since it uses overallocation for linear-time appending.
|
|
msg158741
Author: Serhiy Storchaka (serhiy.storchaka) *
Date: 2012-04-19 20:25
> What's the status of your contrib form?

Oops. I set it aside for a detailed reading and forgot about it. I will send the form as soon as I get to a printer and a scanner.
|
|
msg158742
Author: Serhiy Storchaka (serhiy.storchaka) *
Date: 2012-04-19 20:25
> Perhaps because your system's memory allocator is extremely good (or buf is always very small), but b''.join() is far more robust.
> Another alternative is accumulating in a bytearray, since it uses overallocation for linear-time appending.

I thought there was a special optimization, mentioned on python-dev, but I could not find it in the code. Perhaps it was never implemented. In this particular case, the bytes appending is performed only once (and most of the time it is an append to an empty b''). Exceptions are possible only in pathological cases, for example when the compressed data is much larger than the uncompressed data. The current implementation uses `buf += data`; if someone wants to change that, it won't be me.
|
|
msg159743
Author: Roundup Robot (python-dev)
Date: 2012-05-01 05:58
New changeset 028e8e0b03e8 by Martin v. Löwis in branch 'default':
Issue #14371: Support bzip2 in zipfile module.
http://hg.python.org/cpython/rev/028e8e0b03e8
|
|
msg159744
Author: Martin v. Löwis (loewis) *
Date: 2012-05-01 05:59
Thanks for the patch!
|
|