[Python-Dev] [numpy wishlist] Interpreter support for temporary elision in third-party classes

Nathaniel Smith njs at pobox.com
Fri Jun 6 03:26:26 CEST 2014


On 6 Jun 2014 02:16, "Nikolaus Rath" <Nikolaus at rath.org> wrote:

Nathaniel Smith <njs at pobox.com> writes:
> Such optimizations are important enough that numpy operations always
> give the option of explicitly specifying the output array (like
> in-place operators but more general and with clumsier syntax). Here's
> an example small-array benchmark that IIUC uses Jacobi iteration to
> solve Laplace's equation. It's been written in both natural and
> hand-optimized formats (compare "numupdate" to "numinplace"):
>
> https://yarikoptic.github.io/numpy-vbench/vb_vb_app.html#laplace-inplace
>
> numinplace is totally unreadable, but because we've manually elided
> temporaries, it's 10-15% faster than numupdate.

Does it really have to be that ugly? Shouldn't using

    tmp += u[2:,1:-1]
    tmp *= dy2

instead of

    np.add(tmp, u[2:,1:-1], out=tmp)
    np.multiply(tmp, dy2, out=tmp)

give the same performance? (yes, not as nice as what you're proposing,
but I'm still curious).

Yes, only the last line actually requires the out= syntax; everything else could use in-place operators instead (and automatic temporary elision wouldn't work for the last line anyway). I guess whoever wrote it did it that way for consistency (and perhaps in hopes of eking out a tiny bit more speed - in numpy currently the in-place operators are implemented by dispatching to function calls like those).

Not sure how much difference it really makes in practice though. It'd still be 8 statements and two named temporaries to do the work of one infix expression, with order of operations implicit.
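For concreteness, a rough sketch of the three styles being compared (this is my own illustration, not the benchmark's actual code; the grid, spacings, and variable names here are made up):

    # Illustrative only: a Jacobi-style update for Laplace's equation on a
    # 2-D grid u, with made-up grid spacings dx2 and dy2.
    import numpy as np

    dx2, dy2 = 0.01, 0.01
    u = np.zeros((100, 100))

    # (1) Natural style: one infix expression, but each operator allocates
    #     a fresh temporary array.
    u[1:-1, 1:-1] = ((u[2:, 1:-1] + u[:-2, 1:-1]) * dy2 +
                     (u[1:-1, 2:] + u[1:-1, :-2]) * dx2) / (2 * (dx2 + dy2))

    # (2) In-place operators: temporaries allocated once and reused.
    tmp = u[2:, 1:-1].copy()
    tmp += u[:-2, 1:-1]
    tmp *= dy2
    tmp2 = u[1:-1, 2:].copy()
    tmp2 += u[1:-1, :-2]
    tmp2 *= dx2
    tmp += tmp2
    tmp /= 2 * (dx2 + dy2)
    u[1:-1, 1:-1] = tmp

    # (3) Explicit out= arguments: the same allocations as (2), spelled as
    #     function calls.
    tmp = u[2:, 1:-1].copy()
    np.add(tmp, u[:-2, 1:-1], out=tmp)
    np.multiply(tmp, dy2, out=tmp)
    tmp2 = u[1:-1, 2:].copy()
    np.add(tmp2, u[1:-1, :-2], out=tmp2)
    np.multiply(tmp2, dx2, out=tmp2)
    np.add(tmp, tmp2, out=tmp)
    np.divide(tmp, 2 * (dx2 + dy2), out=tmp)
    u[1:-1, 1:-1] = tmp

Versions (2) and (3) allocate the same arrays, which is why they should perform about the same; the readability gap between (1) and the other two is what the temporary-elision idea is trying to close.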

-n


