Hello, I have a problem with the library zipfile.py http://svn.python.org/view/python/trunk/Lib/zipfile.py?revision=73565&view=markup The ZipInfo structure limits the size of a file to an int max value via the ZIP64_LIMIT value (equal to "(1 << 31) - 1", i.e. 2147483647). The problem happens when you write a big file, at line 1095: self.fp.write(struct.pack("<lLL", zinfo.CRC, zinfo.compress_size, zinfo.file_size)) Here zinfo.file_size is limited to a 32-bit int, and if you have a file bigger than ZIP64_LIMIT the pack overflows, even if you set the flag allowZip64 to True.
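The overflow described above is easy to reproduce in isolation. A minimal sketch (the format string "<lLL" is the one quoted from zipfile.py; the rest is illustrative) showing that struct refuses a value too large for an unsigned 32-bit field:

```python
import struct

ZIP64_LIMIT = (1 << 31) - 1  # 2147483647, as defined in zipfile.py

file_size = 5 * 2**30  # a 5 GiB file, well past the limit

# Packing a size that does not fit in the unsigned 32-bit "<L" field
# raises struct.error on modern Pythons (2.6 silently let it overflow).
try:
    header = struct.pack("<lLL", 0, 0, file_size)
except struct.error as exc:
    header = None
    print("overflow:", exc)
```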
Yes, it's zinfo.file_size that is bigger than the long specified in the struct.pack. There must be a solution using the extra header, because a lot of tools can zip big files and those zip files can be opened by zipfile.py. It's easy to reproduce with a big file of 3 GiB. I think the problem comes from the fact that the write method does not take care of the allowZip64 flag.
This is a problem with Python 2.7 as well. A change in struct between Python 2.6 and 2.7 raises an exception on overflow instead of silently allowing it. This prevents zipping any file larger than 4.5 GB. The exception occurs when writing the 32-bit headers (which are not used on large files anyway). The patch should be simple. Just wrap line 1100, ...struct.pack("<LLL",..., in a try/except to revert to the old behavior. Alternatively, check if the size is bigger than ZIP64_LIMIT and set it to anything less than ZIP64_LIMIT.
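A minimal sketch of the second alternative (the helper name pack_header_sizes is hypothetical, not from zipfile.py). Note that the ZIP64 convention goes slightly further than "anything less than ZIP64_LIMIT": the ZIP format defines the sentinel 0xffffffff for oversized 32-bit fields, with the real sizes going into a ZIP64 extra record:

```python
import struct

ZIP64_LIMIT = (1 << 31) - 1  # as defined in zipfile.py

def pack_header_sizes(crc, compress_size, file_size):
    """Pack the 32-bit CRC/size header fields, clamping oversized values.

    When a size exceeds ZIP64_LIMIT, the ZIP64 convention stores the
    sentinel 0xffffffff in the 32-bit field and puts the real size in a
    ZIP64 extra record; this helper shows only the clamping step.
    """
    if compress_size > ZIP64_LIMIT:
        compress_size = 0xFFFFFFFF
    if file_size > ZIP64_LIMIT:
        file_size = 0xFFFFFFFF
    # "<lLL" is the format string quoted from zipfile.py line 1095.
    return struct.pack("<lLL", crc, compress_size, file_size)

big = 10 * 2**30  # a 10 GiB file
packed = pack_header_sizes(0, big, big)
```

This avoids the struct.error without a bare try/except, and writes a value that downstream unzip tools actually recognize.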
I attempted to "re-allow overflow" in the struct.pack(...) call by replacing `zinfo.file_size` with `ZIP64_LIMIT % zinfo.file_size` in zipfile.py, and successfully produced a compressed file from a 10 GB file, but the resulting archive could not be uncompressed and was deemed "invalid" by every unzip utility I tried.
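That failure is likely because the modulo writes an arbitrary wrong size into the header, one that disagrees with the actual data, whereas the ZIP64 spec reserves the sentinel 0xffffffff to tell unzip tools to look up the real size in the ZIP64 extra field. A small illustration of the difference (values only, not a patch):

```python
ZIP64_LIMIT = (1 << 31) - 1
file_size = 10 * 2**30  # 10 GiB

# The modulo "fix" from the comment above stores a size that does not
# match the data (here it is just ZIP64_LIMIT, since file_size is larger).
wrong = ZIP64_LIMIT % file_size

# The ZIP64 spec instead requires this sentinel in the 32-bit field,
# signalling that the real size lives in the ZIP64 extra record.
sentinel = 0xFFFFFFFF
```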