NEP-18: Handling aliased NumPy functions #12974

Open
pentschev opened this issue Feb 15, 2019 · 28 comments · Fixed by #13305

@pentschev
Contributor

The discussion for this issue started in cupy/cupy#2029, please refer to it for all the details.

Currently, CuPy creates aliases for some NumPy C functions that use the __array_function__ dispatch. This causes an infinite loop, since CuPy keeps calling the NumPy function, which dispatches back to the CuPy alias. Note that the same issue may apply to any other library that implements the __array_function__ protocol.
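
A minimal sketch of how the loop arises (DuckArray and HANDLED_FUNCTIONS are hypothetical stand-ins for a library like CuPy):

import numpy as np

class DuckArray:
    def __array_function__(self, func, types, args, kwargs):
        impl = HANDLED_FUNCTIONS.get(func)
        if impl is None:
            return NotImplemented
        return impl(*args, **kwargs)

# Aliasing the NumPy function instead of providing a real implementation
# re-enters NumPy's dispatch, which calls the alias again, and so on:
HANDLED_FUNCTIONS = {np.mean: np.mean}

np.mean(DuckArray())  # raises RecursionError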

Summary of fix suggestions we have at the moment:

  • Wrap all NumPy functions of the category described above in libraries that implement the __array_function__ protocol
  • Use a sentinel (e.g., None) as a fallback to NumPy function, similar to the NEP proposal

@shoyer @mrocklin

@mrocklin
Contributor

Thought I'd prod here. I've been working with @pentschev a bit on this. I think that he's happy to do some work here to solve this issue. It would be helpful to get some feedback from maintainers on what they think the right approach is.

Also cc'ing @hameerabbasi

@hameerabbasi
Contributor

Yes, the fallback in __array_function__ was my idea. I will defer to @shoyer here on whether to go through the formal approval process of the NEP (I guess we have to?).

The difference is that one would have to return a sentinel such as np.NotImplementedButCoercible rather than None, because we want None to remain a valid value that can be logically returned.

@mattip
Member

mattip commented Feb 25, 2019

Is there a way to protect with a Npy_EnterRecursiveCall or are there cases where the call might naturally recurse?

@hameerabbasi
Contributor

There are cases where natural recursion may happen... Like a Dask array using a CuPy array within it.

@shoyer
Member

shoyer commented Feb 25, 2019

cc @mhvk

@shoyer
Member

shoyer commented Feb 25, 2019

I don't think we need to write a new NEP. We can just tweak NEP-18 (running it by the mailing list, of course) to describe the adjusted interface and why we need it. The NEP is still marked as "Provisional" after all.

@mhvk
Contributor

mhvk commented Feb 25, 2019

I'm somewhat confused by it all: if you want to call the original function in your __array_function__ implementation, you always have to make sure that the argument that defined __array_function__ is turned into something that no longer overrides, i.e., that does not have __array_function__. Is that happening here and one still gets a recursion? If so, there is a real bug on our side. In that case, a simpler numpy-only example might help...

But I don't think we should allow some fall-back where it is fine to have cupy arrays being passed on to the implementation. Rather, if one wants to continue to rely on the old happenstance that it worked because the implementation called np.asanyarray, then one can for now use the private _wrapped attribute. But one should then be prepared to be bitten when the implementation changes; one super-nice part about __array_function__ was that in the future we could start removing all those np.as[any]array calls...

@hameerabbasi
Contributor

It would still be nice to have a way to fall back to the original NumPy implementation, even in your super-nice scenario, where you override a subset of the API and automatically get everything that depends on just what you overrode. For this to happen, the original implementation needs to be accessible.

That is basically what's being asked about here. 🙂

@hameerabbasi
Contributor

On second thought, I do see what you mean about people becoming reliant on the "NumPy will cast stuff for us" behavior and us getting stuck with it unwillingly. I'd also recommend that the CuPy team make "dummy" functions that just call np.asarray on any CuPy arrays.
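
A minimal sketch of such a dummy wrapper, following the suggestion above (the helper name and the hasattr check are illustrative assumptions, not CuPy code):

import numpy as np

def dummy(numpy_func):
    # Coerce array arguments (anything exposing __array_function__)
    # to np.ndarray before calling the original NumPy function.
    def wrapper(*args, **kwargs):
        coerced = [np.asarray(a) if hasattr(a, '__array_function__') else a
                   for a in args]
        return numpy_func(*coerced, **kwargs)
    return wrapper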

@mhvk
Contributor

mhvk commented Feb 25, 2019

Yes, that is exactly my worry: by allowing one to mix old and new, we will never get to a state where we can change the implementation. Note that in this respect, it is no different from __array_ufunc__ - if you want to call the ufunc again, you have to change your argument into one that does not have __array_ufunc__. So, I guess the more explicit recommendation would be something like

def __array_function__(self, func, types, args, kwargs):
    if func not in SPECIAL_TREATMENT_FUNCTIONS:
        # do np.asarray on all CuPy arrays in args/kwargs first
        return func(*args, **kwargs)
    # more interesting stuff

Note that one can also just do func._wrapped(...), but then at least it is clear that this is using a private attribute so you are on your own if it breaks...

@mhvk
Contributor

mhvk commented Feb 25, 2019

p.s. If the above is considered a good way forward, then it might be good to be explicit about it in the documentation!

@hameerabbasi
Contributor

The problem is, while we have args and kwargs in the protocol, and a map from the args/kwargs to the desired arrays in the form of the dispatchers, what we do NOT have is a reverse map (calling a function with some of its arrays replaced). So this would require a manual override for each function, or a "reverse" dispatcher.

The reverse dispatcher could also have other uses.

@mhvk
Contributor

mhvk commented Feb 25, 2019

Ah, yes, for __array_function__ one doesn't just know which arg or kwarg contains an array. A reverse dispatcher might solve this...

One other solution would be to allow the NotImplementedButCoercible after all, but then let that path explicitly do the coercion (i.e., not rely on the implementation to do it).

@hameerabbasi
Contributor

That would require a reverse dispatcher in general as well, so it's useful either way. ;-)

@shoyer
Member

shoyer commented Feb 25, 2019

Adding an officially supported alias like the current func.__wrapped__ seems like a strictly better option than extending the protocol with a new sentinel value, i.e., instead of

def __array_function__(self, func, types, args, kwargs):
    if func not in SPECIAL_TREATMENT_FUNCTIONS:
        return np.NotImplementedButCoercible
    # more interesting stuff

you would write:

def __array_function__(self, func, types, args, kwargs):
    if func not in SPECIAL_TREATMENT_FUNCTIONS:
        return func.implementation(*args, **kwargs)
    # more interesting stuff

This is at least much more transparent than adding more black-box logic to NumPy's internal dispatching rules.

From a forwards compatibility perspective it definitely locks us in just as much as adding NotImplementedButCoercible, so indeed it would not be easy to drop np.asarray()/np.asanyarray() coercion inside NumPy functions. But I'm honestly not sure that could ever be hoped for. The best-case scenario I can think of is that after we get rid of np.matrix and rewrite np.ma.MaskedArray to not be a subclass, it would be safe to switch everyone to np.asanyarray().
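
For reference, the behavioral difference between the two coercion functions:

import numpy as np

m = np.matrix([[1, 2], [3, 4]])
type(np.asarray(m))     # numpy.ndarray -- subclasses are coerced to the base class
type(np.asanyarray(m))  # numpy.matrix  -- subclasses are passed through unchanged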

@mhvk
Contributor

mhvk commented Feb 26, 2019

@shoyer - I see the advantage of exposing the wrapped function - though I think it would still be good if eventually this method became the place where the np.asanyarray was actually done, so that the true implementation could count on the inputs being arrays (leaving us free to make the true implementation maximally fast, and people who want that can use it). This may affect the name, though I don't have a good suggestion at the moment.

@shoyer
Member

shoyer commented Feb 26, 2019

@shoyer - I see the advantage of exposing the wrapped function - though I think it would still be good if eventually this method became the place where the np.asanyarray was actually done, so that the true implementation could count on the inputs being arrays (leaving us free to make the true implementation maximally fast, and people who want that can use it). This may affect the name, though I don't have a good suggestion at the moment.

OK, how about:

  • np.mean: dispatches with __array_function__
  • np.mean.coerced: doesn't dispatch, but coerces all inputs with np.asarray() or np.asanyarray()
  • np.mean.implementation: neither dispatches nor coerces, assumes perfect duck-typing compatibility.

I'm not entirely sure np.mean.implementation is a good idea outside of carefully vetted functions (it may lock us into even more severe backwards-compatibility constraints), but in any case we would save this for later and only support .coerced for now.
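
As a self-contained sketch of how the three entry points could relate for one function (hypothetical helper names; real NumPy dispatch goes through numpy.core.overrides and is more involved):

import numpy as np

def _mean_implementation(a, axis=None):
    # Neither dispatches nor coerces; assumes `a` is already an ndarray.
    return a.mean(axis=axis)

def _mean_coerced(a, axis=None):
    # Coerces, but does not dispatch.
    return _mean_implementation(np.asanyarray(a), axis=axis)

def mean(a, axis=None):
    # Dispatches via __array_function__ (greatly simplified).
    if hasattr(type(a), '__array_function__') and not isinstance(a, np.ndarray):
        return type(a).__array_function__(a, mean, (type(a),), (a,), {'axis': axis})
    return _mean_coerced(a, axis=axis)

mean.coerced = _mean_coerced
mean.implementation = _mean_implementation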

@pentschev
Contributor Author

I was just checking CuPy's upstream, and the infinite recursion issue seems to have been resolved on their side in https://github.com/cupy/cupy/pull/2024/files.

However, I think the coercion discussion is really relevant and it would be great to have a consensus on it, since CuPy itself doesn't fully implement the NumPy API, so I foresee this being useful in the very near future. Pardon me for not chiming in on the discussion, but I think it's gotten much more advanced than my not-so-deep understanding of NumPy.

@mhvk
Contributor

mhvk commented Feb 26, 2019

@pentschev - agreed that the discussion you triggered is relevant - and I think we're getting there with @shoyer's suggestion.

@hameerabbasi - as you noted, implementing a func.coerced means that we will need some form of a reverse dispatcher; it may well be worth exposing both forward and reverse.

@shoyer - I liked the suggestion and agree we can leave .implementation out for now - people who want it and are willing to live with the risk can access it via the wrapper anyway.

@hameerabbasi
Contributor

hameerabbasi commented Feb 26, 2019

I agree with all this, but let's post to the mailing list and go through with the formal process.

As for the discussions: I have another project where both of the dispatchers are useful. I'd love to have them exposed, along with some of the other machinery, so SciPy/the Scikits can be overridden.

@shoyer
Member

shoyer commented Feb 26, 2019

as you noted, implementing a func.coerced means that we will need some form of a reverse dispatcher; it may well be worth exposing both forward and reverse.

I'm not sure I follow. For now, we could simply set public_api.coerced = public_api.__wrapped__ inside np.core.overrides.array_function_dispatch, relying on NumPy functions already doing coercion when necessary.
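
In code, that simple version might look like this sketch (simplified; the real decorator lives in numpy.core.overrides and its dispatch logic is omitted here):

import functools

def array_function_dispatch(dispatcher):
    def decorator(implementation):
        @functools.wraps(implementation)  # also sets public_api.__wrapped__
        def public_api(*args, **kwargs):
            # ... __array_function__ dispatch would happen here ...
            return implementation(*args, **kwargs)
        public_api.coerced = implementation  # i.e., coerced == __wrapped__
        return public_api
    return decorator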

The problem is, while we have args and kwargs in the protocol, and a map from the args/kwargs to the desired arrays in the form of the dispatchers, what we do NOT have is a reverse map (calling a method with some arrays replaced). So this would require a manual override for each function, or a "reverse" dispatcher.

The only way to do this would be to manually write this "reverse dispatcher" function for each dispatcher, e.g., for stack we currently use:

def _stack_dispatcher(arrays, axis=None, out=None):
    if out is not None:
        # optimize for the typical case where only arrays is provided
        arrays = list(arrays)
        arrays.append(out)
    return arrays

and now you'd need something that can be called like reverse_dispatcher(dispatched_args, *args, **kwargs) to call np.stack() with the results of the dispatcher, e.g., something like:

def _stack_reverse_dispatcher(dispatched_args, arrays, axis=0, out=None):
    if out is not None:
        arrays_ = dispatched_args[:-1]
        out_ = dispatched_args[-1]
    else:
        arrays_ = dispatched_args
        out_ = None
    return stack(arrays_, axis, out_)

So this is totally doable, but it approximately doubles the amount of dispatcher boilerplate in NumPy.

We can imagine using these reverse dispatchers internally in NumPy whenever a function call happens (see the sketch after this list), i.e.,

  1. Collect relevant_args = dispatcher(*args, **kwargs)
  2. Check for overrides.
  3. If no overrides are found, coerce all of relevant_args to NumPy arrays with np.asarray()
  4. Call reverse_dispatcher(coerced_relevant_args, *args, **kwargs) to use NumPy's implementation.
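
A rough sketch of that flow (hypothetical helper; the override check is heavily simplified):

import numpy as np

def dispatch_and_call(dispatcher, reverse_dispatcher, implementation,
                      *args, **kwargs):
    # 1. Collect the array-like arguments.
    relevant_args = dispatcher(*args, **kwargs)
    # 2. Check for overrides (simplified: first non-ndarray override wins).
    for arg in relevant_args:
        if hasattr(type(arg), '__array_function__') and not isinstance(arg, np.ndarray):
            return type(arg).__array_function__(
                arg, implementation, (type(arg),), args, kwargs)
    # 3. No overrides found: coerce everything to NumPy arrays.
    coerced = [np.asarray(a) for a in relevant_args]
    # 4. Re-insert the coerced arrays and call NumPy's implementation.
    return reverse_dispatcher(coerced, *args, **kwargs)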

This hypothetical model has two problems:

  • It isn't always appropriate to coerce all of relevant_args into NumPy arrays, e.g., consider functions like np.can_cast which accept either arrays or dtype objects.
  • It adds the overhead of another Python function call to every invocation of a NumPy function, even if there's no dispatching. This would probably increase the overhead of dispatching by 50% or so.

Given all this, I'm not really sure it's worth the trouble.

@mhvk
Contributor

mhvk commented Feb 26, 2019

You're right that we could start with coerced = __wrapped__ - and only decouple as we (possibly) tackle implementations to make them presume ndarray input.

@shoyer
Member

shoyer commented Apr 11, 2019

Please take a look at the proposed revision of NEP-18 here: #13305

@shoyer
Member

shoyer commented Apr 15, 2019

By the way, it turns out CuPy isn't unique in aliasing a handful of NumPy functions like np.result_type(). JAX also does the same thing, which came up when I tried to write an __array_function__ method for it: jax-ml/jax#611

@shoyer
Member

shoyer commented May 25, 2019

Re-opening this, since we are rolling back the NEP changes in #13305

@shoyer shoyer reopened this May 25, 2019
@mhvk
Contributor

mhvk commented May 25, 2019

Re-read this thread. One idea that still seems nice is to have a way to do reverse dispatch, so that one can separate out coercion from the real implementation. Above, it was discussed that this is not easy in general and would need separate routines for different functions. But is it really that bad? Many functions have just one argument, so those are all trivial; looking at everything in numeric and fromnumeric, it might not be that hard to write some introspecting code that creates the reverse dispatcher from the existing dispatcher. The one exception in the batch I looked at is choose. Another is stack, mentioned above, but that routine would share its reverse dispatcher with concatenate and friends.
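
For the common single-argument case, a generated reverse dispatcher could be as simple as this sketch (hypothetical helper):

def make_reverse_dispatcher(func):
    # Build a reverse dispatcher for functions whose dispatcher returns
    # exactly one array: the first positional argument.
    def reverse_dispatcher(dispatched_args, *args, **kwargs):
        (replacement,) = dispatched_args
        return func(replacement, *args[1:], **kwargs)
    return reverse_dispatcher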

I'll think a bit more about it... The worry about extra overhead is certainly there...

@jakirkham
Contributor

@rgommers, would you be able to weigh in here? This is one of the issues we were discussing at SciPy.

@rgommers
Member

@jakirkham the revert of __skip_array_function__ in gh-13624 says:

These solutions would solve real use cases, but at the cost of additional complexity. We would like to gain experience with how __array_function__ is actually used before making decisions that would be difficult to roll back.

I think that pretty much sums it up. Some of the aliasing that libraries do simply needs to be avoided or reimplemented (that'd be a few lines of code per function, and there shouldn't be many functions).

I think the case where the library implementing __array_function__ is missing functions from the NumPy API is actually much more common and interesting. For Dask that happened to be straightforward, see the line args, kwargs = compute(args, kwargs) in dask/dask#5043.
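
That fallback pattern looks roughly like the following sketch (hypothetical class, loosely following the linked PR):

import dask
import dask.array

class DaskLikeArray:
    def __array_function__(self, func, types, args, kwargs):
        # Prefer the library's own implementation when one exists...
        module_func = getattr(dask.array, func.__name__, None)
        if module_func is not None:
            return module_func(*args, **kwargs)
        # ...otherwise compute any lazy arguments to NumPy and fall back.
        args, kwargs = dask.compute(args, kwargs)
        return func(*args, **kwargs)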
