[Numpy-discussion] Proposal to accept NEP-18, __array_function__ protocol

Nathaniel Smith njs at pobox.com
Tue Aug 21 03:19:44 EDT 2018


On Wed, Aug 15, 2018 at 9:45 AM, Stephan Hoyer <shoyer at gmail.com> wrote:

RE: accidental differences in behavior:

I actually think that the __array_function__ approach is less prone to accidental differences in behavior, because we require implementing every function directly (or it raises an error). This avoids a classic subclassing problem that has plagued NumPy for years, where overriding the behavior of method A causes apparently unrelated method B to break, because it relied on method A internally. In NumPy, this constrained our implementation of np.median(), because it needed to call np.mean() in order for subclasses implementing units to work properly.
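(For concreteness, a minimal sketch of the kind of coupling being described, using a toy ndarray subclass rather than astropy's Quantity; whether the print fires depends entirely on how np.median happens to be implemented internally:)

    import numpy as np

    # Toy subclass that overrides mean(), so we can observe whether other
    # NumPy functions happen to route through it internally.
    class LoudArray(np.ndarray):
        def mean(self, *args, **kwargs):
            print("LoudArray.mean called")
            return super().mean(*args, **kwargs)

    a = np.array([1.0, 2.0, 3.0, 4.0]).view(LoudArray)
    np.median(a)  # whether this prints is an implementation detail of
                  # np.median -- exactly the implicit coupling at issue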

I don't think I follow... if B uses A internally, then overriding A shouldn't cause B to break, unless the overridden A is buggy.

The median() case was different: it wasn't overriding A that caused B to break, that part worked fine. It was when we changed the implementation of B that we had problems.

...actually, this made me realize that I was uncertain about what exactly happened in that case. I just checked, and AFAICT with current astropy the call to mean() is unnecessary. I tried modifying np.median to remove the call to mean, and it still gave the same result for np.median([1, 2, 3] * u.m). I also checked np.percentile, and it seems to work fine on units-arrays if you make it call np.asanyarray instead of np.asarray.
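(The first half of that check is easy to reproduce -- this assumes astropy is installed, with u.m being its metre unit and Quantity an ndarray subclass; verifying the modified-np.median half of the experiment requires editing NumPy's source:)

    import numpy as np
    import astropy.units as u

    print(np.median([1, 2, 3] * u.m))  # expected: 2.0 m, units preserved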

The issue is here if anyone else wants to refresh their memory: https://github.com/numpy/numpy/issues/3846

Reading the comments there it looks like we might have hacked np.median so it (a) uses np.asanyarray, and (b) always calls np.mean, and actually the 'asanyarray' change is what astropy needed and the 'mean' part was just a red herring all along! Whoops. And here I've been telling this story for years now...

RE: a hypothetical simplified interface:

The need to implement everything you want to use in NumPy's public API could certainly be onerous, but on the other hand there is a long list of projects that have already done this today -- and these are the projects that most need __array_function__. I'm sure there are cases where simplification would be warranted, but in particular I don't think __array_concatenate__ has significant advantages over simply implementing __array_function__ for np.concatenate. It's a slightly different spelling, but it does basically the same thing. The level of complexity needed to implement hstack, vstack, row_stack and column_stack in terms of np.concatenate is pretty minimal. __array_function__ implementors could easily copy and paste code from NumPy or use a third-party helper library (like NDArrayOperatorsMixin) that provides such implementations.
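(A sketch of what that looks like in practice for a hypothetical MyArray, assuming its __array_function__ looks implementations up in a registry as in the NEP 18 examples -- the hstack wrapper really is just a couple of lines on top of np.concatenate:)

    import numpy as np

    HANDLED_FUNCTIONS = {}  # consulted by MyArray.__array_function__

    def implements(numpy_function):
        """Register a MyArray implementation of a NumPy function."""
        def decorator(func):
            HANDLED_FUNCTIONS[numpy_function] = func
            return func
        return decorator

    @implements(np.concatenate)
    def concatenate(arrays, axis=0):
        ...  # the one operation MyArray has to implement natively

    @implements(np.hstack)
    def hstack(tup):
        # essentially NumPy's own hstack logic, re-expressed via
        # np.concatenate (assuming inputs are already at-least-1-d MyArray
        # objects); np.concatenate dispatches back to the version above
        axis = 0 if tup[0].ndim == 1 else 1
        return np.concatenate(tup, axis=axis)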

And when we fix a bug in row_stack, this means we also have to fix it in all the copy-paste versions, which won't happen, so np.row_stack has different semantics on different objects, even if they started out matching. The NDArrayOperatorsMixin reduces the number of duplicate copies of the same code that need to be updated, but 2 copies is still a lot worse than 1 copy :-).

I also have other concerns about the "simplified API" approach beyond the difficulty of figuring it out, but those are already mentioned in the NEP: http://www.numpy.org/neps/nep-0018-array-function-protocol.html#implementations-in-terms-of-a-limited-core-api

Yeah, there are definitely trade-offs. I don't have like, knock-down rebuttals to these or anything, but since I didn't comment on them before I might as well say a few words :-).

1. The details of how NumPy implements a high-level function in terms of overloaded functions now becomes an implicit part of NumPy’s public API. For example, refactoring stack to use np.block() instead of np.concatenate() internally would now become a breaking change.

The way I'm imagining this would work is, we guarantee not to take a function that used to be implemented in terms of overridable operations and refactor it so it's implemented in terms of non-overridable operations. So long as people have correct implementations of __array_concatenate__ and __array_block__, they shouldn't care which one we use. In the interim period where we have __array_concatenate__ but there's no such thing as __array_block__, that refactoring would indeed break things, so we shouldn't do that :-). But we could fix that by adding __array_block__.
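(A sketch of the hypothetical protocols being talked about here -- __array_concatenate__ and __array_block__ don't actually exist, they're stand-ins for the "simplified core API" idea:)

    class MyArray:
        def __array_concatenate__(self, arrays, axis=0):
            ...  # native concatenation for this array type

        def __array_block__(self, nested_lists):
            ...  # native block assembly for this array type

    # If np.stack were specified only in terms of hooks like these, NumPy
    # could switch its internals from the concatenate hook to the block
    # hook without breaking anyone, provided both hooks exist and are
    # implemented correctly.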

2. Array libraries may prefer to implement high level functions differently than NumPy. For example, a library might prefer to implement fundamental operations like mean() directly rather than relying on sum() followed by division. More generally, it’s not clear yet what exactly qualifies as core functionality, and figuring this out could be a large project.

True. And this is a very general problem... for example, the appropriate way to implement logistic regression is very different in-core versus out-of-core. You're never going to be able to take code written for ndarray, drop in an arbitrary new array object, and get optimal results in all cases -- that's just way too ambitious to hope for. There will be cases where reducing to operations like sum() and division is fine. There will be cases where you have a high-level operation like logistic regression, where reducing to sum() and division doesn't work, but reducing to slightly-higher-level operations like np.mean also doesn't work, because you need to redo the whole high-level operation. And then there will be cases where sum() and division are too low-level, but mean() is high-level enough to make the critical difference. It's that last one where it's important to be able to override mean() directly. Are there a lot of cases like this?

For mean() in particular I doubt it. But then, mean() in particular is irrelevant here, because mean() is already directly overridable, regardless of __array_function__ :-). So really the question is about the larger landscape of numpy APIs: What traps are lurking in the underbrush that we don't know about? And yeah, the intuition behind the "simplified API" approach is that we need to do the work to clear out that underbrush, and the downside is exactly that that will be a lot of work and take time. So... I think this criticism is basically that restated?
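(What "already directly overridable" means concretely: for non-ndarray inputs, np.mean falls back to calling the object's own .mean() method, so a duck array can provide, say, an out-of-core mean without any protocol at all. A minimal illustration with a made-up StreamingMean class:)

    import numpy as np

    class StreamingMean:
        """Toy duck array holding a list of chunks, never concatenated."""
        def __init__(self, chunks):
            self.chunks = chunks

        def mean(self, axis=None, dtype=None, out=None, **kwargs):
            # an "out-of-core"-style mean that never materializes the data
            total = sum(float(np.sum(c)) for c in self.chunks)
            count = sum(c.size for c in self.chunks)
            return total / count

    x = StreamingMean([np.ones(3), np.arange(4.0)])
    print(np.mean(x))  # ~1.2857, computed by StreamingMean.mean, not NumPy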

3. We don’t yet have an overloading system for attributes and methods on array objects, e.g., for accessing .dtype and .shape. This should be the subject of a future NEP, but until then we should be reluctant to rely on these properties.

This one I don't understand. If you have a duck-array object, and you want to access its .dtype or .shape attributes, you just... write myobj.dtype or myobj.shape? That doesn't need a NEP though so I must be missing something :-).

But... this is wishful thinking. No matter what the NEP says, I simply don't believe that we'll actually go break dask, sparse arrays, xarray, and sklearn in a numpy point release. Or any numpy release. Nor should we. If we're serious about keeping this experimental – and I think that's an excellent idea for now! – then IMO we need to do something more to avoid getting trapped by backwards compatibility.

I agree, but to be clear, development for dask, sparse and xarray (and even broadly supported machine learning libraries like TensorFlow) still happens at a much faster pace than is currently the case for "core" projects in the SciPy stack like NumPy. It would not be a big deal to encounter breaking changes in a "major" NumPy release (i.e., 1.X -> 1.(X+1)). (Side note: sklearn doesn't directly implement any array types, so I don't think it would make use of __array_function__ in any way, except possibly to implement overloadable functions.)

They don't implement array types, but they do things like use sparse arrays internally, so from the user's point of view you could have some code that only uses numpy and sklearn, and then the new numpy release breaks sklearn (because it broke the sparse package that sklearn was using internally).

My suggestion: at numpy import time, check for an envvar, like say NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1. If it's not set, then all the __array_function__ dispatches turn into no-ops. This lets interested downstream libraries and users try this out, but makes sure that we won't have a hundred thousand end users depending on it without realizing.

- makes it easy for end-users to check how much overhead this adds (by running their code with it enabled vs disabled)
- if/when we decide to commit to supporting it for real, we just remove the envvar.

I'm slightly concerned that the cost of reading an environment variable with os.environ could exaggerate the performance cost of __array_function__. It takes about 1 microsecond to read an environment variable on my laptop, which is comparable to the full overhead of __array_function__.

That's why I said "at numpy import time" :-). I was imagining we'd check it once at import, and then from then on it'd be stashed in some C global, so after that the overhead would just be a single predictable branch 'if (array_function_is_enabled) { ... }'.
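(A Python-level sketch of that shape -- the real check would live in NumPy's C internals; the names below are illustrative only, with the env var spelled as suggested earlier in this thread:)

    import os

    # Read once at import time, then stashed in a module-level flag.
    _ARRAY_FUNCTION_ENABLED = (
        os.environ.get("NUMPY_EXPERIMENTAL_ARRAY_FUNCTION", "0") == "1")

    def maybe_dispatch(func, relevant_args, args, kwargs):
        """Return an override's result, or NotImplemented to use the default."""
        if not _ARRAY_FUNCTION_ENABLED:   # one predictable branch per call
            return NotImplemented
        types = frozenset(type(a) for a in relevant_args)
        for arg in relevant_args:
            handler = getattr(type(arg), "__array_function__", None)
            if handler is not None:
                return handler(arg, func, types, args, kwargs)
        return NotImplemented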

So we may want to switch to an explicit Python API instead, e.g., np.enable_experimental_array_function().

If we do this, then libraries that want to use __array_function__ will just call it themselves at import time. The point of the env-var is that our policy is not to break end-users, so if we want an API to be provisional and experimental then it's end-users who need to be aware of that before using it. (This is also an advantage of checking the envvar only at import time: it means libraries can't easily just setenv() to enable the functionality behind users' backs.)

My bigger concern is when/how we decide to graduate __array_function__ from requiring an explicit opt-in. We don't need to make a final decision now, but it would be good to be clear about what specifically we are waiting for.

The motivation for keeping it provisional is that we'll know more after we have some implementation experience, so our future selves will be in a better position to make this decision. If I knew what I was waiting for, I might not need to wait :-).

But yeah, to be clear, I'm totally OK with the possibility that we'll do this for a few releases and then look again and be like "eh... now that we have more experience, it looks like the original plan was fine after all, let's remove the envvar and document some kind of accelerated deprecation cycle".

I don't really understand the 'types' frozenset. The NEP says "it will be used by most __array_function__ methods, which otherwise would need to extract this information themselves"... but they still need to extract the information themselves, because they still have to examine each object and figure out what type it is. And, simply creating a frozenset costs ~0.2 µs on my laptop, which is overhead that we can't possibly optimize later...
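(For anyone who wants to reproduce these micro-numbers -- absolute results will vary by machine, and each printed total over a million runs reads roughly as microseconds per call:)

    import timeit

    print(timeit.timeit("frozenset((int, float, complex))", number=10**6))
    print(timeit.timeit("os.environ.get('HOME', '')",
                        setup="import os", number=10**6))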

The most flexible alternative would be to just say that we provide a fixed-length iterable, and return a tuple object. (In my microbenchmarks, it's faster to make a tuple than a list or set.) In an early draft of the NEP, I proposed exactly this, but the speed difference seemed really marginal to me.

I included 'types' in the interface because I really do think it's something that almost all __array_function__ implementations should use. It preserves a nice separation of concerns between dispatching logic and implementations for a new type. At least as long as __array_function__ is experimental, I don't think we should be encouraging people to write functions that could return NotImplemented directly and to rely entirely on the NumPy interface. Many but not all implementations will need to look at argument types. This is only really essential for cases where mixed operations between NumPy arrays and another type are allowed. If you only implement the NumPy interface for MyArray objects, then in the usual Python style you wouldn't need isinstance checks.

It's also important from an ecosystem perspective. If we don't make it easy to get type information, my guess is that many __array_function__ authors wouldn't bother to return NotImplemented for unexpected types, which means that __array_function__ will break in weird ways when used with objects from unrelated libraries.
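(The usage pattern 'types' is meant to support, for a hypothetical MyArray with a HANDLED_FUNCTIONS registry filled in elsewhere -- this follows the example pattern in NEP 18:)

    import numbers
    import numpy as np

    HANDLED_FUNCTIONS = {}  # maps NumPy functions to MyArray implementations

    class MyArray:
        def __array_function__(self, func, types, args, kwargs):
            if func not in HANDLED_FUNCTIONS:
                return NotImplemented
            # The type-checking lives here, in one place: bail out if any
            # argument type is something we don't know how to handle.
            if not all(issubclass(t, (MyArray, np.ndarray, numbers.Number))
                       for t in types):
                return NotImplemented
            return HANDLED_FUNCTIONS[func](*args, **kwargs)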

This is much more of a detail as compared to the rest of the discussion, so I don't want to quibble too much about it. (Especially since if we keep things really-provisional, we can change our mind about the argument later :-).) Mostly I'm just confused, because there are lots of dunder functions in Python (and NumPy), and none of them take a special 'types' argument... so what's special about __array_function__ that makes it necessary/worthwhile?

Any implementation of, say, concatenate-via-__array_function__ is going to involve iterating through all the arguments and looking at each of them to figure out what kind of object it is and how to handle it, right? That's true whether or not they've done a "pre-check" using the types set, so in theory it's just as easy to return NotImplemented at that point. But I guess your point in the last paragraph is that this means there will be lots of chances to mess up the NotImplemented-returning code in particular, especially since it's less likely to be tested than the happy path, which seems plausible. So basically the point of the types set is to let people factor out that little bit of lots of functions into one common place? I guess some careful devs might be unhappy with paying extra so that other lazier devs can get away with being lazy, but maybe it's a good tradeoff for us (esp. since as numpy devs, we'll be getting the bug reports regardless :-)).

If that's the goal, then it does make me wonder if there might be a more direct way to accomplish it -- like, should we let classes define an __array_function_types__ attribute that numpy would check before even trying to dispatch to __array_function__?
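(Roughly what that could look like -- to be clear, __array_function_types__ is purely hypothetical, nothing like it exists:)

    import numbers
    import numpy as np

    class MyArray:
        def __array_function__(self, func, types, args, kwargs):
            ...  # implementation no longer needs its own type checks

    # NumPy's dispatcher would consult this before ever calling
    # __array_function__, falling back to the default behavior itself
    # when it sees an unexpected type.
    MyArray.__array_function_types__ = (MyArray, np.ndarray, numbers.Number)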

-n

-- Nathaniel J. Smith -- https://vorpus.org


