Support memory functions for copying between peer devices #225
So, there are really just two functions and their asynchronous variants. Well, it seems I've already added the non-array version of this on the driver-wrappers branch. We may want a structure with a context and a region as a parameter here, e.g. something like: …

and then we could say: … We could also perhaps have …
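As a rough illustration of the "context plus region" idea above, here is one hedged sketch. The names (`contexted_region`, `copy_between_contexts`) are hypothetical, not the wrappers' actual API; only `cuMemcpyPeer` and the CUDA driver types are real:

```cuda
#include <cuda.h>
#include <cstddef>

// Hypothetical sketch: a memory region tagged with the CUDA context it
// belongs to, so a peer copy can take just one parameter per side.
struct contexted_region {
    CUcontext   context; // context in which the region was allocated
    CUdeviceptr start;   // start of the region
    std::size_t size;    // size in bytes
};

// A peer copy would then take two such structures; the driver call
// cuMemcpyPeer takes (dst, dstContext, src, srcContext, byteCount).
inline CUresult copy_between_contexts(contexted_region dst, contexted_region src)
{
    return cuMemcpyPeer(dst.start, dst.context, src.start, src.context, src.size);
}
```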
Hmm... it looks like the … and this leads me to think that maybe we should just make all memory regions contextualized to begin with, so that …
Ok, fixed on the driver-wrappers branch.
... and now I notice that when UVA is available, the "Peer" calls are useless; and for us UVA is always available, since we rely on it for copying to begin with. So, we don't even need these calls on the non-driver branch.
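For reference, a minimal sketch of why UVA makes the peer-specific variants unnecessary: with unified virtual addressing, every device allocation gets a process-wide unique address, so a plain `cudaMemcpy` with `cudaMemcpyDefault` can infer which device each pointer belongs to (the helper name here is hypothetical):

```cuda
#include <cuda_runtime.h>
#include <cstddef>

// With UVA, the runtime determines source and destination devices from
// the pointers themselves; no peer-specific copy call is needed.
// cudaMemcpyDefault is only valid on systems with unified virtual addressing.
cudaError_t uva_copy(void* dst_on_device_b, const void* src_on_device_a,
                     std::size_t num_bytes)
{
    return cudaMemcpy(dst_on_device_b, src_on_device_a, num_bytes,
                      cudaMemcpyDefault);
}
```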
There are specific functions - at least in the driver API - for copying between peers, including copying of arrays:
Let's support them. These have existed since at least CUDA 7... probably earlier.
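The driver API's peer-copy functions are `cuMemcpyPeer` / `cuMemcpyPeerAsync` for plain regions and `cuMemcpy3DPeer` / `cuMemcpy3DPeerAsync` for 3D copies (which can involve CUDA arrays). A minimal sketch of the non-array asynchronous variant, with a hypothetical helper name:

```cuda
#include <cuda.h>
#include <cstddef>

// Hypothetical helper: enqueue a copy of num_bytes from a region in
// src_context to a region in dst_context, on the given stream.
CUresult enqueue_peer_copy(
    CUdeviceptr dst, CUcontext dst_context,
    CUdeviceptr src, CUcontext src_context,
    std::size_t num_bytes, CUstream stream)
{
    // cuMemcpyPeerAsync takes the destination and source pointers along
    // with their respective contexts, then the byte count and stream.
    return cuMemcpyPeerAsync(dst, dst_context, src, src_context,
                             num_bytes, stream);
}
```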