BUG: Fix reduce promotion with out argument
This fixes promotion for reductions if `out=` is passed.  The
output's dtype should take part in promotion here.
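
For illustration, a minimal sketch of the intended behaviour (mirroring the
new test added below)::

    import numpy as np

    arr = np.ones(1000, dtype=np.uint8)
    out = np.zeros((), dtype=np.uint16)
    # The uint16 output takes part in promotion, so the sum is carried
    # out with (at least) uint16 precision and 1000 does not wrap:
    np.add.reduce(arr, out=out)  # out is now 1000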

Due to value-based casting/promotion it does not work to pass on
`out` to the legacy promoter, because `out` may be a scalar and
the legacy type resolver would (de facto) ignore its precision.

So in that case, we keep using the old logic of making `out=`
equivalent to an implicit `dtype=out.dtype`.

This is not quite correct in theory, because ::

    arr = np.full(5, 2**25-1, dtype=np.int64)
    np.multiply.reduce(arr, out=np.zeros((), dtype=np.float32))

This should naturally use float64 (as the promotion result for int64 and
float32), which gives slightly different results.
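
To make the difference concrete, a rough sketch of the two computations
(illustrative only, not an exact reproduction of either code path)::

    import numpy as np

    arr = np.full(5, 2**25 - 1, dtype=np.int64)

    # Promotion of int64 with float32 gives float64, so the "natural"
    # result is the float64 product cast to float32 at the very end:
    natural = np.float32(np.prod(arr.astype(np.float64)))

    # Forcing the whole reduction to float32 (what a float32 `out=` means
    # on the legacy path) rounds the intermediate products differently:
    forced = np.multiply.reduce(arr, dtype=np.float32)

    # `natural` and `forced` are both float32, but differ slightly.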

This may be up for debate, but in general `out=` normally has no
influence on the result of ufuncs/functions.  Reduce-like operations
do feel somewhat special in this regard, but it is unclear to me that
they should be.

This ensures the operation is compatible with the output argument
(i.e. minimal precision), which was intermittently broken for the
"legacy" dtype-resolution paths.

Note that we ignore the multiply/add special "upcast" paths here
and that it may make sense to override this in the future if
`initial=` is passed (normally `out=` should have no direct
influence on the operation precision).

Closes gh-20455
seberg authored and charris committed Dec 7, 2021
1 parent 9058cec commit c296e5d
Showing 2 changed files with 41 additions and 3 deletions.
21 changes: 18 additions & 3 deletions numpy/core/src/umath/ufunc_object.c
@@ -2717,11 +2717,11 @@ reducelike_promote_and_resolve(PyUFuncObject *ufunc,
         char *method)
 {
     /*
-     * Note that the `ops` is not realy correct. But legacy resolution
+     * Note that the `ops` is not really correct. But legacy resolution
      * cannot quite handle the correct ops (e.g. a NULL first item if `out`
-     * is NULL), and it should only matter in very strange cases.
+     * is NULL) so we pass `arr` instead in that case.
      */
-    PyArrayObject *ops[3] = {arr, arr, NULL};
+    PyArrayObject *ops[3] = {out ? out : arr, arr, out};
     /*
      * TODO: If `out` is not provided, arguably `initial` could define
      *       the first DType (and maybe also the out one), that way
@@ -2738,6 +2738,21 @@ reducelike_promote_and_resolve(PyUFuncObject *ufunc,
         Py_INCREF(operation_DTypes[0]);
         operation_DTypes[2] = operation_DTypes[0];
         Py_INCREF(operation_DTypes[2]);
+        /*
+         * We have to force the dtype, because otherwise value based casting
+         * may ignore the request entirely. This means that `out=` is the
+         * same as `dtype=`, which should probably not be forced normally,
+         * so we only do it for legacy DTypes...
+         */
+        if (NPY_DT_is_legacy(operation_DTypes[0])
+                && NPY_DT_is_legacy(operation_DTypes[1])) {
+            if (signature[0] == NULL) {
+                Py_INCREF(operation_DTypes[0]);
+                signature[0] = operation_DTypes[0];
+                Py_INCREF(operation_DTypes[0]);
+                signature[2] = operation_DTypes[0];
+            }
+        }
     }
 
     PyArrayMethodObject *ufuncimpl = promote_and_get_ufuncimpl(ufunc,
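
The observable effect of forcing `signature[0]` and `signature[2]` to the
output DType is that, for legacy dtypes, `out=` keeps acting like an implicit
`dtype=out.dtype`.  A hedged sketch of the expected behaviour (assumed from
the commit message above, not taken from the diff)::

    import numpy as np

    arr = np.full(5, 2**25 - 1, dtype=np.int64)

    # A float32 `out=` forces the float32 loop, so the result should match
    # an explicit `dtype=np.float32`:
    with_out = np.multiply.reduce(arr, out=np.zeros((), dtype=np.float32))
    with_dtype = np.multiply.reduce(arr, dtype=np.float32)
    assert with_out == with_dtype
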
23 changes: 23 additions & 0 deletions numpy/core/tests/test_ufunc.py
@@ -2134,6 +2134,29 @@ def test_logical_ufuncs_out_cast_check(self, ufunc):
             # It would be safe, but not equiv casting:
             ufunc(a, c, out=out, casting="equiv")
 
+    def test_reducelike_out_promotes(self):
+        # Check that the out argument to reductions is considered for
+        # promotion. See also gh-20455.
+        # Note that these paths could prefer `initial=` in the future and
+        # do not up-cast to the default integer for add and prod
+        arr = np.ones(1000, dtype=np.uint8)
+        out = np.zeros((), dtype=np.uint16)
+        assert np.add.reduce(arr, out=out) == 1000
+        arr[:10] = 2
+        assert np.multiply.reduce(arr, out=out) == 2**10
+
+        # For legacy dtypes, the signature currently has to be forced if `out=`
+        # is passed. The two paths below should differ, without `dtype=` the
+        # expected result should be: `np.prod(arr.astype("f8")).astype("f4")`!
+        arr = np.full(5, 2**25-1, dtype=np.int64)
+
+        # float32 and int64 promote to float64:
+        res = np.zeros((), dtype=np.float32)
+        # If `dtype=` is passed, the calculation is forced to float32:
+        single_res = np.zeros((), dtype=np.float32)
+        np.multiply.reduce(arr, out=single_res, dtype=np.float32)
+        assert single_res != res
+
     def test_reduce_noncontig_output(self):
         # Check that reduction deals with non-contiguous output arrays
         # appropriately.
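
The new test can be run on its own with, e.g.,
`python -m pytest numpy/core/tests/test_ufunc.py -k reducelike_out_promotes`
(assuming an in-place build of this NumPy source tree is importable).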
