Write partitioned Parquet file using to_parquet · Issue #23283 · pandas-dev/pandas

Hi,

I'm trying to write a partitioned Parquet file using the to_parquet function:

    df.to_parquet('table_name', engine='pyarrow', partition_cols=['partone', 'parttwo'])

TypeError: __cinit__() got an unexpected keyword argument 'partition_cols'
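For completeness, a self-contained version of the call above (the sample data is made up; only the column names from the call are real):

    import pandas as pd

    # Made-up sample frame; 'partone' and 'parttwo' are the intended partition columns.
    df = pd.DataFrame({
        'partone': ['a', 'a', 'b'],
        'parttwo': [1, 2, 1],
        'value': [0.1, 0.2, 0.3],
    })

    # On pandas 0.23.4 this raises the TypeError shown above, because the
    # partition_cols keyword ends up forwarded to pyarrow's writer, which
    # does not accept it.
    df.to_parquet('table_name', engine='pyarrow', partition_cols=['partone', 'parttwo'])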

Problem description

It was my understanding that the to_parquet method passes the kwargs through to PyArrow and saves a partitioned table.
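As a point of comparison, writing the partitioned dataset through pyarrow directly should work with these versions; a minimal sketch, assuming the same hypothetical DataFrame and partition columns as above:

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Convert the pandas DataFrame to an Arrow table and let pyarrow handle
    # the Hive-style partitioning on 'partone' and 'parttwo'.
    table = pa.Table.from_pandas(df)
    pq.write_to_dataset(table, root_path='table_name',
                        partition_cols=['partone', 'parttwo'])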

Expected Output

Partitioned Parquet file saved.

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.5.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.0-5-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.23.4
pytest: None
pip: 18.0
setuptools: 32.3.1
Cython: None
numpy: 1.15.2
scipy: 1.1.0
pyarrow: 0.11.0
xarray: None
IPython: 7.0.1
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 3.0.0
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 1.0.1
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

Thanks!