I've been working on adding an implementation of np.setxor1d to numba (numba/numba#4677), and noticed that the documentation might need some clarification. I also have some questions about the expected behavior:
The assume_unique argument has the following description:
assume_unique : bool
If True, the input arrays are both assumed to be unique, which can speed up the calculation.
Default is False.
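To make the documented behavior concrete, here is a small sketch of the two modes. The expected outputs reflect my understanding of the function's semantics (values present in exactly one of the two inputs):

```python
import numpy as np

# Values present in exactly one of the two arrays:
print(np.setxor1d([1, 2, 3], [2, 3, 4]))  # [1 4]

# If both inputs are already unique (and 1-D), the internal np.unique
# pass can be skipped for speed:
print(np.setxor1d([1, 2, 3], [2, 3, 4], assume_unique=True))  # [1 4]
```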
More specifically, for the code to work properly it requires that the inputs are unique, 1-D, and sorted.
The input arrays ar1 and ar2, according to the docs, should just be "array-like", but a user gets unexpected errors if the arrays have ndim > 1 when assume_unique is True. The function will fail either at the first call to np.concatenate, if the two arrays have incompatible dimensions, or at the second call to np.concatenate, where there is an attempt to concatenate the 1-D [True] value with an N-D array.
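A minimal reproducer of the behavior described above, assuming a NumPy version in which setxor1d only flattens its inputs via np.unique when assume_unique is False (the hedged try/except is there because a later NumPy release may handle N-D inputs differently):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[3, 4], [5, 6]])

# Default assume_unique=False: np.unique flattens each input first,
# so 2-D arrays are accepted:
print(np.setxor1d(a, b))  # [1 2 5 6]

# assume_unique=True skips that flattening step; on the versions
# discussed here, the internal concatenate of the 1-D [True] value
# with the 2-D difference array then raises:
try:
    print(np.setxor1d(a, b, assume_unique=True))
except ValueError as exc:
    print("ValueError:", exc)
```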
I'm not sure what the proper way of handling this is. Perhaps just a clarification of the docs, or a call to ravel() when assume_unique is True to get around the concatenate issue, provided the N-D arrays do indeed contain only unique values and they are sorted according to the (default?) order.
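The ravel() option could be sketched as a thin wrapper like the one below. The name setxor1d_ravel is hypothetical, purely for illustration; an actual fix would presumably flatten inside setxor1d itself:

```python
import numpy as np

def setxor1d_ravel(ar1, ar2, assume_unique=False):
    # Hypothetical variant: flatten up front so that assume_unique=True
    # also accepts N-D inputs (the caller is still responsible for the
    # values actually being unique).
    ar1 = np.asarray(ar1).ravel()
    ar2 = np.asarray(ar2).ravel()
    return np.setxor1d(ar1, ar2, assume_unique=assume_unique)

a = np.array([[1, 2], [3, 4]])
b = np.array([[3, 4], [5, 6]])
print(setxor1d_ravel(a, b, assume_unique=True))  # [1 2 5 6]
```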
I'm happy to put in a PR once I get some feedback from the Numpy devs about the approach that they feel like is most appropriate.
I would appreciate feedback on this issue if anyone has the time to take a quick look. Just re-upping it in case it got lost in the shuffle two weeks ago.
cc/ @stuartarchibald