ENH/BUG: usecols does not raise an exception when col index is out of bounds. · Issue #25623 · pandas-dev/pandas

import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
df.to_csv('test.csv', index=False)

# Should raise ValueError, since there is no column at index 10
pd.read_csv('test.csv', header=0, usecols=[0, 10], engine='python')

print('Silently ran to completion.')


Output

Silently ran to completion.

Problem description

Silently accepting incorrect column indices (e.g. the result of some arithmetic) could lead to a debugging nightmare.

A possible solution is to check whether the largest value in usecols is greater than or equal to the number of columns at any level.

Something along the lines of

if any(max(col_indices) >= len(column) for column in columns):
    raise ValueError('...')
columns = [[n for i, n in enumerate(column) if i in col_indices]
           for column in columns]
self._col_indices = col_indices
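
Until a check like this exists inside the parser, a minimal user-level sketch of the same idea is to read only the header first and validate the indices before the real read. This assumes the test.csv written by the repro above; validate_usecols is a made-up helper name, not a pandas function.

import pandas as pd

def validate_usecols(path, usecols, **read_kwargs):
    # Made-up helper (not part of pandas): read only the header row
    # to learn how many columns the file actually has.
    header = pd.read_csv(path, nrows=0, **read_kwargs)
    n_cols = len(header.columns)
    bad = [i for i in usecols if i >= n_cols]
    if bad:
        raise ValueError(
            f'usecols indices {bad} are out of bounds for a file '
            f'with {n_cols} columns'
        )
    return pd.read_csv(path, usecols=usecols, **read_kwargs)

# Raises ValueError instead of silently running to completion.
validate_usecols('test.csv', [0, 10], header=0, engine='python')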

If the idea is to accommodate different levels having different numbers of columns, maybe it would be better if usecols allowed a sequence of sequences (one for each level), as sketched below.
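
For illustration only, the proposed call might look something like this; this signature is hypothetical and not supported by pandas today.

# Hypothetical extension of read_csv (not part of pandas):
# one sequence of column indices per header level.
pd.read_csv('multi_header.csv', header=[0, 1], usecols=[[0, 2], [0, 1, 2]])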


Expected Output

ValueError

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.7.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.18.0-16-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.23.4
pytest: 4.0.2
pip: 18.1
setuptools: 40.6.3
Cython: 0.29.2
numpy: 1.15.4
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: 1.8.2
patsy: 0.5.1
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.8
feather: None
matplotlib: 3.0.2
openpyxl: 2.5.12
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.2
lxml: 4.2.5
bs4: 4.6.3
html5lib: 1.0.1
sqlalchemy: 1.2.15
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None