[Python-Dev] PEP 574 (pickle 5) implementation and backport available
Antoine Pitrou solipsis at pitrou.net
Fri May 25 13:49:28 EDT 2018
On Fri, 25 May 2018 10:36:08 -0700 Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> On May 24, 2018, at 10:57 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> >
> > While PEP 574 (pickle protocol 5 with out-of-band data) is still in
> > draft status, I've made available an implementation in branch "pickle5"
> > in my GitHub fork of CPython:
> > https://github.com/pitrou/cpython/tree/pickle5
> >
> > Also I've published an experimental backport on PyPI, for Python 3.6
> > and 3.7. This should help people play with the new API and features
> > without having to compile Python:
> > https://pypi.org/project/pickle5/
> >
> > Any feedback is welcome.
> Thanks for doing this. Hope it isn't too late, but I would like to suggest
> that protocol 5 support fast compression by default. We normally pickle
> objects so that they can be transported (saved to a file or sent over a
> socket). Transport costs (reading and writing a file or socket) are
> generally proportional to size, so compression is likely to be a net win
> (much as it was for header compression in HTTP/2). The PEP lists
> compression as a possible refinement only for large objects, but I expect
> it will be a win for most pickles to compress them in their entirety.
It's not too late (the PEP is still a draft, and there's a lot of time before 3.8), but I wonder what would be the benefit of making it a part of the pickle specification, rather than compressing independently.
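For instance, compressing independently is just a matter of wrapping the
pickle stream yourself. A minimal sketch (the payload and zlib level here
are arbitrary placeholders; any codec would do):

    import pickle
    import zlib

    data = {"values": list(range(10_000))}

    # Pickle as usual, then compress the resulting byte stream separately.
    payload = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)
    compressed = zlib.compress(payload, 1)   # level 1: fast, cheap compression

    restored = pickle.loads(zlib.decompress(compressed))
    assert restored == data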
Whether and how to compress is generally a compromise between transmission (or storage) speed and computation speed. Also, there are specialized compressors for higher efficiency (for example, Blosc has datatype-specific compression for Numpy arrays). Such knowledge can be embodied in domain-specific libraries such as Dask/distributed, but it cannot really be incorporated in pickle itself.
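With protocol 5, out-of-band buffers give such a library the hook it needs:
it can pull the large buffers out of the stream and compress each one with
whatever codec suits the data, without pickle itself knowing about it. A
rough sketch against the draft API (using the pickle5 backport; the dict,
the bytearray and the codec names mentioned in the comments are purely
illustrative):

    import pickle5 as pickle   # or the stdlib pickle once PEP 574 lands

    big = bytearray(b"x" * 1_000_000)        # stand-in for a large binary blob
    obj = {"name": "example", "data": pickle.PickleBuffer(big)}

    # buffer_callback collects the large buffers out-of-band; only the small
    # "skeleton" of the object graph stays in the pickle stream itself.
    buffers = []
    payload = pickle.dumps(obj, protocol=5, buffer_callback=buffers.append)

    # A domain-specific library could now compress each buffer with Blosc,
    # lz4, zlib or nothing at all, depending on the data it holds.
    restored = pickle.loads(payload, buffers=buffers)
    assert bytes(restored["data"]) == bytes(big)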
Do you have something specific in mind?
Regards
Antoine.