I attempted to use gzip.GzipFile to process a 1.93 GB file that expands to 18.8 GB. It consistently produces the same corrupted output file, whose size is approximately, but not exactly, the expected size. Bypassing GzipFile by calling the 7-Zip executable to decompress the file works correctly and consistently. I haven't dug into how GzipFile works, but I assume this failure is related to the very large size of the files I am working with; I've used GzipFile before on much smaller files with no apparent problems. I don't know precisely what goes wrong, or how to fix it, but I felt it was important to report that GzipFile isn't working for at least some very large files.
Since you mention 7-zip, does that mean you are seeing the problem on a Windows platform? If so, exactly which version of Windows and what kind of system? Also, unless someone recognizes this as a duplicate of an earlier issue, there may not be much action on it unless you can supply a test case to reproduce the problem.
It's Windows 7 Ultimate (64-bit) on a very high-end system. I don't think it would be practical to distribute a 2 GB test file, though I might be able to get it to a couple of people if someone wanted to study the issue closely. If it is an integer overflow (or something like that), I would expect GzipFile to show corruption most of the time once the files get large enough, for example for all files expanding to more than 2^32 bytes (4 GB). That's just speculation; I haven't tested it beyond noting that it failed the very first time I tried a file this large. Perhaps someone familiar with the code could look for places where integers might overflow?
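Since distributing the original file is impractical, one way to probe the suspected 2^32 boundary is a self-contained round-trip check on synthetic data. This is only a sketch of such a test case, not code from the report; the function name `roundtrip_check` and the choice of patterned data are mine. Running it past the 4 GB mark needs several GB of free disk space and some patience.

```python
import gzip
import hashlib
import os
import tempfile

def roundtrip_check(total_size, chunk_size=1 << 20):
    """Write `total_size` bytes of patterned data through GzipFile,
    read them back, and compare sizes and checksums of both passes."""
    pattern = bytes(range(256)) * 4096  # 1 MiB of repeating, compressible data
    fd, path = tempfile.mkstemp(suffix=".gz")
    os.close(fd)
    try:
        written = hashlib.md5()
        with gzip.GzipFile(path, "wb") as f:
            remaining = total_size
            while remaining > 0:
                chunk = pattern[:min(remaining, len(pattern))]
                f.write(chunk)
                written.update(chunk)
                remaining -= len(chunk)

        read_back = hashlib.md5()
        n = 0
        with gzip.GzipFile(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                read_back.update(chunk)
                n += len(chunk)
        return n == total_size and read_back.digest() == written.digest()
    finally:
        os.remove(path)

# To probe the suspected 2**32 boundary, run e.g.:
# roundtrip_check(2**32 + 2**20)
```

If corruption is really tied to the 4 GB boundary, `roundtrip_check` should return True for sizes below 2^32 and False just above it.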
Can you show a snippet of the code (or describe it in detail) that "processes" the GzipFile? Right now it's not obvious which operations you are performing.