Add documentation section on Scaling · Issue #28315 · pandas-dev/pandas


@TomAugspurger


From the user survey, the most critical feature request was "Improve scaling to larger datasets".

While we continue to work within pandas itself to improve scaling (fewer copies, a native string dtype, etc.), we can document a few strategies that are already available to users.

  1. Use efficient dtypes: make sure you don't have object dtypes. Possibly use Categorical for strings with low cardinality, and lower-precision numeric dtypes where the extra precision isn't needed.
  2. Avoid unnecessary work. When loading data, use usecols= (pd.read_csv) or columns= (pd.read_parquet) to select only the columns you need. Probably some other examples.
  3. Use out-of-core methods, like pd.read_csv(..., chunksize=), to process data that doesn't fit in memory without a full rewrite of existing code.
  4. Use other libraries. I would of course recommend Dask. But I'm not opposed to a section highlighting Vaex, and possibly Spark (though installing it in our doc environment may be difficult).
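A minimal sketch of strategies 1–3 above. The column names and the CSV contents here are hypothetical, and the file is built in memory with io.StringIO so the example is self-contained; in practice you would pass a file path instead:

```python
import io

import pandas as pd

# Hypothetical CSV with columns: id, category, value, notes
# (built in memory here purely for illustration).
csv_data = io.StringIO(
    "id,category,value,notes\n"
    "1,a,1.5,first\n"
    "2,b,2.5,second\n"
    "3,a,3.5,third\n"
)

# Strategy 2: load only the columns you need.
df = pd.read_csv(csv_data, usecols=["id", "category", "value"])

# Strategy 1: use memory-efficient dtypes -- Categorical for a
# low-cardinality string column, a lower-precision float where
# full float64 precision isn't needed.
df["category"] = df["category"].astype("category")
df["value"] = df["value"].astype("float32")

# Strategy 3: out-of-core processing with chunksize= -- each chunk
# is an ordinary DataFrame, so per-chunk logic needs little rewriting.
csv_data.seek(0)
total = 0.0
for chunk in pd.read_csv(csv_data, usecols=["value"], chunksize=2):
    total += chunk["value"].sum()

print(total)  # 7.5
```

The same ideas carry over to parquet (columns= in pd.read_parquet) and to dtype= passed directly to pd.read_csv, which avoids the intermediate object-dtype columns entirely.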

Do people have thoughts on this? Any objections to highlighting outside projects like Dask?
Are there other strategies we should mention?