REGR: to_csv problems with zip compression and large dataframes · Issue #38714 · pandas-dev/pandas

Code Sample, a copy-pastable example

import io

import pandas as pd

f = io.BytesIO()
d = pd.DataFrame({'a': [1] * 5000})
d.to_csv(f, compression='zip')
f.seek(0)
pd.read_csv(f, compression='zip')

Problem description

Writing a large dataframe (more than 1163 rows) to CSV with zip compression (whether inferred from the filename or passed explicitly; to a file path or an io.BytesIO buffer) produces a corrupted zip archive containing multiple members instead of one. Reading it back then fails with:

ValueError: Multiple files found in ZIP file. Only one file per ZIP: ['zip', 'zip', 'zip', 'zip', 'zip']
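For reference, inspecting the broken archive with zipfile confirms the duplicated members, and building the archive by hand sidesteps the bug. This is only a sketch of a possible workaround, not the fix; the member name data.csv and the buffer names are arbitrary choices:

import io
import zipfile

import pandas as pd

d = pd.DataFrame({'a': [1] * 5000})

# Inspect the archive produced by the failing write path: on pandas
# 1.2.0 it contains several members instead of a single one.
f = io.BytesIO()
d.to_csv(f, compression='zip')
f.seek(0)
print(zipfile.ZipFile(f).namelist())  # e.g. ['zip', 'zip', 'zip', ...]

# Workaround sketch: serialize the CSV first, then write it into the
# archive as a single member ('data.csv' is an arbitrary name).
buf = io.BytesIO()
with zipfile.ZipFile(buf, mode='w', compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr('data.csv', d.to_csv())
buf.seek(0)
print(pd.read_csv(buf, compression='zip').shape)  # (5000, 2)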

Output of pd.show_versions()

INSTALLED VERSIONS

commit : 3e89b4c
python : 3.8.6.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19041
machine : AMD64
processor : Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Polish_Poland.1250

pandas : 1.2.0
numpy : 1.19.3
pytz : 2020.5
dateutil : 2.8.1
pip : 20.3.3
setuptools : 51.1.0.post20201221
Cython : None
pytest : 6.2.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.19.0
pandas_datareader: None
bs4 : None
bottleneck : 1.3.2
fsspec : 0.8.5
fastparquet : None
gcsfs : None
matplotlib : 3.3.3
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.5.4
sqlalchemy : None
tables : None
tabulate : None
xarray : 0.16.2
xlrd : None
xlwt : None
numba : 0.52.0