question: is there any method to avoid mapping the same page? #258
Comments
Hi @hongbilu, we don't provide such an API. The contract of the pin and map APIs is that they apply only within the buffer you pin and map. It is unsafe to assume that the CPU VA range obtained by mapping CUDA buffer A can be used to access CUDA buffer B.
Yes, but cudaMalloc cannot guarantee that allocations land on different pages. In fact, they are quite likely to share a page when allocating small buffers, which is a very common usage. The problem is that applications would then have to manage all of their CUDA memory themselves and check whether a CPU VA range overlaps another one; that is extra work, and a dirty, application-specific solution. What do you think?
Let's say that you have two CUDA buffers A and B from …
Thanks! So clients need to allocate extra buffers manually, which makes this not easy to use.
You may use CUDA VMM instead of …
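For readers unfamiliar with CUDA VMM: the driver-API virtual memory management path gives each allocation its own physical backing, rounded to the allocation granularity (typically 2 MiB), so two logical buffers never share a backing page. Below is an untested sketch under those assumptions; it needs a CUDA-capable build, and all error handling is elided for brevity.

```cuda
#include <cuda.h>
#include <stddef.h>

/* Sketch: allocate one logical buffer via the CUDA VMM driver API.
 * Each cuMemCreate call returns a distinct physical allocation. */
CUdeviceptr vmm_alloc(size_t bytes, int device)
{
    CUmemAllocationProp prop = {0};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = device;

    size_t gran = 0;   /* minimum granularity, typically 2 MiB */
    cuMemGetAllocationGranularity(&gran, &prop,
                                  CU_MEM_ALLOC_GRANULARITY_MINIMUM);
    size_t padded = (bytes + gran - 1) / gran * gran;

    CUmemGenericAllocationHandle handle;
    cuMemCreate(&handle, padded, &prop, 0);        /* physical memory   */

    CUdeviceptr dptr = 0;
    cuMemAddressReserve(&dptr, padded, 0, 0, 0);   /* VA reservation    */
    cuMemMap(dptr, padded, 0, handle, 0);          /* map physical->VA  */

    CUmemAccessDesc access = {0};
    access.location = prop.location;
    access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    cuMemSetAccess(dptr, padded, &access, 1);      /* enable RW access  */
    return dptr;
}
```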
Much appreciated for the reminder! Thanks.
hi, there
I saw there is a test case named "basic_small_buffers_mapping". If cudaMalloc is called many times (far more than twice), is there any way to check whether a buffer's page has already been mapped? If it has already been mapped by another handle, maybe we should reuse the VA from the matching handle, and the map API should not return failure?