
BUG: no way to override matmul/@ if __array_ufunc__ is set #9028

Closed
shoyer opened this issue Apr 30, 2017 · 34 comments


@shoyer shoyer commented Apr 30, 2017

With NumPy master, it is currently impossible to override ndarray @ other if other also implements __array_ufunc__.

Consider:

import numpy as np

np.__version__  # '1.13.0.dev0+9ef5891'

class OptOut:
    __array_ufunc__ = None
    def __rmatmul__(self, other):
        return 'rmatmul'

class OtherArray:
    def __array_ufunc__(self, *args, **kwargs):
        return 'array_ufunc'
    def __rmatmul__(self, other):
        return 'rmatmul'

array = np.arange(3)
opt_out = OptOut()
other_array = OtherArray()

# OptOut works as expected:
array * opt_out  # TypeError: unsupported operand type(s) for *: 'numpy.ndarray' and 'OptOut'
array @ opt_out  # 'rmatmul'

# But OtherArray does not:
array * other_array  # 'array_ufunc'
array @ other_array  # TypeError: Object arrays are not currently supported

This is problematic. We definitely need a way to allow overriding @ from the second argument, whether that means breaking the rules for __array_ufunc__ (allowing matmul to be passed in even though it isn't a ufunc) or breaking the rules for __matmul__ (calling __rmatmul__, or allowing __rmatmul__ to be called by Python, even when __array_ufunc__ is implemented on the other argument).
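For context, the OptOut case works because of Python's own binary-operator protocol: ndarray.__matmul__ returns NotImplemented when it sees __array_ufunc__ = None, and Python then tries the reflected method. A minimal sketch with hypothetical stand-in classes (Left plays the role of ndarray):

```python
# Hypothetical stand-ins for ndarray and an opt-out class, showing the
# NotImplemented fallback that makes `array @ opt_out` reach __rmatmul__.

class Left:
    def __matmul__(self, other):
        # Mimic ndarray: defer entirely when the other operand opts out.
        if getattr(other, '__array_ufunc__', False) is None:
            return NotImplemented  # Python now tries other.__rmatmul__
        return 'left matmul'

class OptOut:
    __array_ufunc__ = None
    def __rmatmul__(self, other):
        return 'rmatmul'

print(Left() @ OptOut())  # -> 'rmatmul'
```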

@shoyer shoyer added this to the 1.13.0 release milestone Apr 30, 2017
@mhvk mhvk referenced this issue Apr 30, 2017
@shoyer shoyer commented Apr 30, 2017

Probably the simplest thing to do is to make np.matmul use __array_ufunc__ again, even though it isn't a real ufunc. That's probably less surprising than @ not being overridable when you use __array_ufunc__.

@njsmith njsmith commented Apr 30, 2017

The whole reason binops care about __array_ufunc__ is that they dispatch to ufuncs, so saying that binops that don't dispatch to ufuncs don't pay attention to __array_ufunc__ seems pretty defensible. OTOH I worry that if we do that then it'll become some big breaking change if/when matmul starts dispatching to a ufunc...

@shoyer shoyer commented Apr 30, 2017

The whole reason binops care about array_ufunc is that they dispatch to ufuncs, so saying that binops that don't dispatch to ufuncs don't pay attention to array_ufunc seems pretty defensible.

Agreed, this is also totally reasonable. __matmul__ could continue the current state of affairs, deferring based on __array_priority__.

OTOH I worry that if we do that then it'll become some big breaking change if/when matmul starts dispatching to a ufunc...

Yes, this would be my concern as well. We only get one chance to introduce __array_ufunc__ for the first time.

@charris charris commented Apr 30, 2017

The @ operator currently forwards based on __array_priority__ and __array_ufunc__ = None.

@charris charris commented Apr 30, 2017

I think the opt-out is dual use, it basically tells arrays to "leave me alone". If arrays weren't such greedy bastards and only ate other arrays, we wouldn't need this solution.

@mhvk mhvk commented Apr 30, 2017

Arguably the easiest would be to make matmul a gufunc -- it didn't seem terribly difficult, but might be less efficient.

@eric-wieser eric-wieser commented Apr 30, 2017

Alternatively, could we just create some ufunc_like class that exposes nout and nin? That should cover enough cases that we can just pass np.matmul into __array_ufunc__, even if it's not really a ufunc.
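A rough sketch of what such a wrapper might look like (a hypothetical ufunc_like class, not a NumPy API): wrap a plain function, expose the nin/nout attributes that __array_ufunc__ implementations typically inspect, and hand control to the other operand's __array_ufunc__ when it defines one.

```python
import numpy as np

class ufunc_like:
    """Hypothetical wrapper making a plain function quack like a ufunc."""

    def __init__(self, func, nin=2, nout=1):
        self.func = func
        self.nin = nin
        self.nout = nout
        self.__name__ = func.__name__

    def __call__(self, *inputs, **kwargs):
        # Give any non-ndarray operand with __array_ufunc__ first shot.
        for x in inputs:
            override = getattr(type(x), '__array_ufunc__', None)
            if override is not None and not isinstance(x, np.ndarray):
                return override(x, self, '__call__', *inputs, **kwargs)
        return self.func(*inputs, **kwargs)

matmul = ufunc_like(np.matmul)

class Logged:
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        return (ufunc.__name__, method)

matmul(np.eye(2), Logged())  # dispatches to Logged.__array_ufunc__
```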

@mhvk mhvk commented Apr 30, 2017

Note that the basic matrix multiply gufunc is already written: https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/umath_tests.c.src#L143

What would seem to be needed is:

  • prepend/append appropriate 1s to the dimensions for vector-matrix and matrix-vector, or, probably better, call specific appropriate gufuncs for those, since I guess one cannot guarantee that all operands can do the appropriate reshaping (I vaguely recall we had a discussion about names...).
  • special-case the BLAS call inside the gufunc loop for matrix-times-matrix

@seberg, @eric-wieser - since you last looked at that file -- is this possible?

@shoyer shoyer commented Apr 30, 2017

I think it would be great to have matmul be a gufunc, if/when gufuncs support multiple dispatch. But I'm skeptical about dispatching to multiple gufuncs for np.matmul.

If you write ndarray @ other, how can ndarray even know which of matmat_multiply, matvec_multiply or vecmat_multiply to pass to other.__array_ufunc__, without inspecting potentially opaque details about other? For example, if other is an xarray.Dataset (i.e., a dict of labeled arrays), other.ndim is not even defined.

@charris charris commented Apr 30, 2017

I don't think the call needs to go directly to the gufunc; there can be preparation before and after the call, and the override should still work as long as there is only one call to the gufunc. The return can be reshaped. Multiple calls would get dicey.

@charris charris commented Apr 30, 2017

BTW, I don't think this needs to be in 1.13, we already have too much stuff.

@mhvk mhvk commented Apr 30, 2017

But it would be good to solve the issue here, of not being able to override __matmul__, one way or another. My own sense would still be that separate matmat, matvec and vecmat gufuncs might be the easiest route; they'd be useful even apart from helping solve matmul.

@charris charris commented Apr 30, 2017

Plus vecvec. In general, one would like to reshape (..., i, j) dot (j,) to (n,j) dot (j,) for efficiency in multiplication, and then unreshape, same with various other combinations, plus BLAS has contiguity requirements, so some preliminary work is always required. Timewise, there doesn't seem to be much difference between the special cases and reshaping and having just one function may make things easier for the user who wants to override.
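The reshape-for-efficiency idea above can be sketched as follows (a hypothetical helper, not NumPy code): collapse the stacked matrix-vector product into a single 2-D call, then restore the leading shape.

```python
import numpy as np

# Hypothetical helper: compute (..., i, j) @ (j,) via one flattened
# (n, j) @ (j,) product, where n = prod(batch) * i, then reshape back.

def stacked_matvec(a, v):
    a, v = np.asarray(a), np.asarray(v)
    *batch, i, j = a.shape
    out = a.reshape(-1, j) @ v    # a single 2-D matrix-vector product
    return out.reshape(*batch, i)

a = np.arange(24.0).reshape(2, 3, 4)
v = np.ones(4)
# stacked_matvec(a, v) matches a @ v and has shape (2, 3)
```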

@charris charris commented Apr 30, 2017

Note that @ doesn't allow stacked vectors, so the functions for vector-matrix combinations are still desirable, but they can be implemented using matmat, so the override should still work.

@charris charris commented Apr 30, 2017

If we do have a gufunc matmat and want to use BLAS, then we should also raise an error if the passed arguments do not meet contiguity requirements. BLAS makes everything more difficult to pull off and we want to make things as easy for users as we can. @eric-wieser's suggestion of some sort of wrapper may be the best way forward.

@charris charris commented Apr 30, 2017

I assume gufuncs already make some contiguity copies just to work with LAPACK, but that needs to be checked.

@shoyer shoyer commented Apr 30, 2017

More generally, we don't really want to be passing private internal gufuncs (like the two used by numpy.linalg.solve) into __array_ufunc__. This starts to expose private implementation details as public API.

A "ufunc like" interface that defines most of that standard ufunc API (at least the attributes, if not all the methods) seems like it could be pretty safe to use with __array_ufunc__. Implementations for a method like numpy.linalg.solve could look something like:

@ufunc_like
def solve(a, b):
    # override_defined and do_ufunc_override are hypothetical helpers:
    # check whether any argument defines __array_ufunc__ and, if so,
    # delegate to it before running the normal implementation.
    if override_defined(solve, a, b):
        return do_ufunc_override(solve, a, b)
    ...

That said, there's lots to sort out for what this interface should look like, so it's definitely not something to block for 1.13.

@charris charris commented May 5, 2017

I'm going to kick this on to 1.14 for a solution.

@charris charris modified the milestones: 1.14.0 release, 1.13.0 release May 5, 2017
@shoyer shoyer commented Jun 9, 2017

This issue just surfaced in dask when implementing __array_ufunc__: dask/dask#2438 (comment)

Per @mrocklin (dask/dask#2438 (comment)), it appears that the result does not necessarily raise an error and can return some sort of malformed or eagerly evaluated numpy array. This is more problematic than I thought...

@mattip mattip commented Apr 15, 2018

I assume gufuncs already make some contiguity copies just to work with LAPACK, but that needs to be checked

linearize_@TYPE@_matrix and delinearize_@TYPE@_matrix, deep inside umath_linalg.c.src, copy to a tmp_buf and copy the output where needed, respectively.

@mattip mattip commented Apr 15, 2018

@charris, when you refer to "some kind of wrapper" do you mean this comment?

Alternatively, could we just create some ufunc_like class, that exposes nout and nin

Is this still considered the best way forward, or is there more consensus around making matmul into a true ufunc?

@mhvk mhvk commented Apr 15, 2018

@mattip - I don't think there was consensus! For __matmul__, the problem was that its dimension-dependent behavior cannot currently be captured by gufuncs. It seemed the choices were:

  1. Expand the definition of a gufunc to include multiple signatures;
  2. Allow ufunc-like wrappers;
  3. Let __matmul__ call one of several separate matmat, vecmat, matvec, and vecvec gufuncs.

I think (1) might be the ideal solution and (2) could lead to that fairly easily, so perhaps that is indeed the right way forward. But as @charris noted, an advantage of (3) is that those would be good to have anyway.
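Option (3) could look roughly like the following sketch, with the four helpers as stand-ins (here just np.matmul/np.dot) for the proposed gufuncs; note that the dispatcher must inspect ndim on both operands.

```python
import numpy as np

# Hypothetical stand-ins for the proposed matmat/matvec/vecmat/vecvec
# gufuncs, plus the ndim-based dispatch __matmul__ would perform.

def matmat(a, b): return np.matmul(a, b)
def matvec(a, v): return np.matmul(a, v)
def vecmat(v, a): return np.matmul(v, a)
def vecvec(u, v): return np.dot(u, v)

def dispatch_matmul(a, b):
    a, b = np.asarray(a), np.asarray(b)
    if a.ndim == 1 and b.ndim == 1:
        return vecvec(a, b)      # scalar result
    if a.ndim == 1:
        return vecmat(a, b)
    if b.ndim == 1:
        return matvec(a, b)
    return matmat(a, b)
```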

@shoyer shoyer commented Apr 16, 2018

Let me propose one more option:

  4. __matmul__ is decoupled from checking __array_ufunc__ entirely, and instead checks for __rmatmul__ on the other type.

This option would allow for potential divergence between np.matmul and @ (if/when we finally implement ufunc overrides for np.matmul), which is unfortunate, but it could be done relatively easily without tackling a larger design problem.

I would be happy with either of @mhvk's options (1) or (2), or my option (4).

My concern with (3) is that it will impose some requirements on the other argument, namely that it must have ndim or shape defined so we can pick which sub-gufunc to use. Even if the other argument has ndim defined (which is likely), it won't have the full context of the function call to overload, and thus will not be able to distinguish between the multiple sub-gufuncs and ndarray @ other. I don't know if there are real use cases here (probably not ones we want to support!), but it feels inelegant.

@hameerabbasi hameerabbasi commented Apr 18, 2018

I vote for (4) in the short term and (3) in the long term. I can think of cases where (for multidimensional arrays) I'd like to do vecvec, for example. Granted, it can be done by np.sum(x * y, axis=-1), but that calculates an intermediate result and will be suboptimal.

Allowing multiple signatures is ideal in principle, but will cause a lot of corner cases and problems while supporting/maintaining in practice (at least according to what I predict).
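As a side note on the vecvec point above, the intermediate array can be avoided today with an einsum contraction (a sketch, not the proposed gufunc):

```python
import numpy as np

# Two ways to compute a stacked inner product over the last axis:
# the intermediate-allocating form and an einsum contraction that
# performs the reduction without materializing x * y.

x = np.arange(15.0).reshape(5, 3)
y = np.ones((5, 3))

via_sum = np.sum(x * y, axis=-1)                 # allocates x * y first
via_einsum = np.einsum('...i,...i->...', x, y)   # fused contraction
```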

@mhvk mhvk commented Apr 18, 2018

@hameerabbasi - I should perhaps have made clear that there is no reason not to implement vecvec independently of matmul. The real question here, though, is how to deal with @ -- the ideal solution would quite obviously be that, like all other operators, it can dispatch directly to a ufunc.

@njsmith njsmith commented Apr 18, 2018

An option that falls short of full arbitrary multi-signature dispatch, would be to add support to gufuncs specifically for the way matmul does signatures. For example, syntax like (n?,k),(k,m?)->(n?,m?) could mean that n and m are optional dimensions; if missing in the input, they're treated as 1, and then dropped from the output. This would allow us to express matmul, and also np.linalg.solve.
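The optional-dimension rule proposed here can be illustrated with matmul's shape semantics (this is in fact the signature np.matmul eventually adopted): a missing n or m is treated as 1 for the product and then dropped from the output.

```python
import numpy as np

# Shape behavior matching the (n?,k),(k,m?)->(n?,m?) proposal:
# 1-D operands have the optional dimension treated as 1, then dropped.

A = np.ones((3, 4))
x = np.ones(4)          # (k,): the m dimension is missing
y = np.ones(3)          # (n,) on the left: the n dimension is missing

np.matmul(A, x).shape   # (3,)  -- m dropped from the output
np.matmul(y, A).shape   # (4,)  -- n dropped from the output
np.matmul(x, x)         # 4.0   -- both dropped: a scalar
```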

@mhvk mhvk commented Apr 18, 2018

Yes, that sounds like a good solution! I do somewhat worry that this (like, e.g., #5015) will need some tweaking to the ufunc structure and thus break the C-API. Right now, all we pass on from each separate element/character of the signature is a simple index, core_dim_ixs, into the full set of core indices (there's also the number of core dimensions for each operand, core_num_dims, an offset into the list, core_offsets, and the total number of distinct dimension indices in the signature, core_num_dim_ix).

But perhaps the right way to proceed is to try and add a set of flags or so for every element (anything that's expandable...). This would allow #5015 as well.

@njsmith njsmith commented Apr 18, 2018

We're going to have to rewrite a lot of ufunc internals anyway to progress on both the duck array and the user-defined dtypes work. A few years ago I did a lot of checking and found that there was no code on GitHub or in Debian that touched the internals of the ufunc struct, except numba and dynd. Dynd is defunct now, and we can coordinate with numba. (Probably should double check that there situation hasn't changed too.) So I think opaqueifying the ufunc internals is viable.

@njsmith njsmith commented Apr 18, 2018

Probably should double check that there situation hasn't changed too

Sorry, between poor phrasing and autocorrect this came out kind of garbled. What I meant to say is, since my checking was a few years ago now, we should probably rerun it, in case any new users have appeared.

@mattip mattip commented Apr 19, 2018

If I understand how @njs's proposal would play out, it means implementing (1) and exposing app-level access, as in (3), to the various internal ufunc alternatives. It indeed looks like #5015 is a related type of ufunc extension, and should be taken into consideration when implementing the necessary changes.

@njs do you remember how you did the search? Whenever I try to search GitHub for NumPy C-API consumers, I end up getting many false positives from repos that fork/copy NumPy code.

@njsmith (comment minimized)

@mattip mattip commented Apr 19, 2018

@njsmith Thanks for the tool.

Did I understand your proposed changes correctly: implementing (1) via a change to ufunc signatures?

@njsmith njsmith commented Apr 19, 2018

Yeah, it's (1) with the additional note that it's not adding generic "multi-signature" support, just a specific narrow form of multi-signature support.

@charris charris commented Dec 4, 2018

Closing. matmul is now implemented as a ufunc. What is the status of dot?
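For reference, with matmul now a true ufunc (NumPy 1.16+), the example from the top of this issue behaves consistently: @ dispatches through __array_ufunc__ like the other operators.

```python
import numpy as np

# Re-running the OtherArray example from the issue description:
# both * and @ now reach __array_ufunc__.

class OtherArray:
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        return 'array_ufunc'
    def __rmatmul__(self, other):
        return 'rmatmul'

array = np.arange(3)
array * OtherArray()   # 'array_ufunc'
array @ OtherArray()   # 'array_ufunc' (previously a TypeError)
```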
