__array__ and __array_wrap__ #589
I'm sorry for the late reply. I agree that it would be useful if CuPy arrays worked just like NumPy arrays. However, we currently and intentionally do not allow that, because it would incur a hidden performance hit due to transfers between device and host. As this feature has been requested by multiple users, though, we are considering another way to accomplish it, using a configuration context (like …
Thanks for your reply. So I read too much into that. Anyway, for now I intend to use CuPy as a drop-in replacement for NumPy, independent of Chainer. There it would be good to have interoperability with NumPy arrays, since it would make a number of implementations simpler; avoiding data transfers between host and device would then be the job of the calling code. Would there be a way to achieve this without involving Chainer? For example, by switching the feature on by default and disabling it by default in Chainer, with a config option there to enable it again.
I understand your point. But if NumPy and CuPy arrays were implicitly convertible, users would have to be very careful about which arrays they are dealing with. If a single NumPy array were involved in a series of CuPy operations by mistake, it would cause every operation to synchronize without notice, and it would be very difficult to find the bottleneck. As for the alternative method, I didn't mean that CuPy would depend on Chainer; I meant introducing a configuration mechanism similar to the one used in Chainer.
It seems I misunderstood your comment. Handling this with a config option, as is done in Chainer, would be perfectly fine. I also agree that for your primary use case in Chainer, hidden bottlenecks would be very bad and that you want to avoid automatic CPU<->GPU transfers. I'll have to think a bit more about my use case, and whether I should also avoid mixing CPU and GPU memory/code. For the record: since NumPy 1.13, the …
CuPy uses asynchronous CPU-to-GPU memory copies by default. This feature is easy to provide because CUDA supports asynchronous execution (kernel launches).
It would be good if the user could select whether to use …
Regarding …
We decided not to allow implicit conversion to NumPy arrays in #3421.
Just wanna confirm, is this no longer the plan?
We have no plan for it currently, but we will reopen the issue if someone continues the discussion on supporting it.
A question about the NumPy array interface: `cupy.ndarray` implements `__array__`, but it doesn't work, since `np.asarray(cupy_array)` raises an exception. So unless you have other plans for this method, it should be changed to return a NumPy array. Do you want a PR? I'd also like to have the `__array_wrap__` method, so I'd implement that one as well.
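For reference, here is a minimal NumPy-only sketch of how the two protocol methods interact (the `Wrapped` class is hypothetical, purely to illustrate the protocol): `__array__` lets `np.asarray` and ufuncs extract a plain ndarray from the object, while `__array_wrap__` lets the object re-wrap a ufunc's result in its own type.

```python
import numpy as np

class Wrapped:
    """Illustrative array-like implementing __array__ and __array_wrap__."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array__(self, dtype=None):
        # Called by np.asarray() and by ufuncs to obtain the raw ndarray.
        return np.asarray(self.data, dtype=dtype)

    def __array_wrap__(self, result, context=None, return_scalar=False):
        # Called by NumPy ufuncs to re-wrap their output in this class.
        return Wrapped(result)

w = Wrapped([1.0, 4.0, 9.0])
plain = np.asarray(w)   # plain ndarray, obtained via __array__
roots = np.sqrt(w)      # Wrapped instance, re-wrapped via __array_wrap__
```

This is the behavior the comment above asks for: `np.asarray` succeeds instead of raising, and ufunc results come back in the original wrapper type.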