In some cases it is useful to have support for mapped/unified memory.
This allows the user to allocate host memory and use this memory directly on the GPU without wasting useful GPU memory.
In CUDA there are two kinds of host memory that can be addressed from the GPU without the need for an explicit memcpy.
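For reference, the two CUDA mechanisms meant here can be sketched with plain CUDA runtime calls (this is standard CUDA, not alpaka code; error checking omitted for brevity):

```cuda
#include <cuda_runtime.h>

int main() {
    const size_t n = 1024;

    // 1) Mapped (zero-copy) pinned host memory: the allocation is
    //    page-locked on the host and mapped into the device address space.
    float* hostPtr = nullptr;
    cudaHostAlloc(&hostPtr, n * sizeof(float), cudaHostAllocMapped);

    // Obtain the device-side alias of the same allocation.
    float* devPtr = nullptr;
    cudaHostGetDevicePointer(&devPtr, hostPtr, 0);
    // devPtr can now be passed to kernels; no explicit memcpy is needed.

    // 2) Unified (managed) memory: a single pointer valid on both host
    //    and device, migrated on demand by the driver.
    float* managedPtr = nullptr;
    cudaMallocManaged(&managedPtr, n * sizeof(float));
    // managedPtr is directly usable from host code and from kernels.

    cudaFreeHost(hostPtr);
    cudaFree(managedPtr);
    return 0;
}
```

The first variant never moves the data (every device access goes over the bus), while the second migrates pages between host and device as they are touched, so they trade off differently depending on the access pattern.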
Mapping CPU memory into GPU memory space should already be supported via alpaka::mem::buf::map(buffer, targetDevice), but this is untested.
The device pointer can then be accessed via auto ptr = alpaka::mem::view::getPtrDev(buffer, targetDevice).
I do not know if this works.
An open question is whether such a feature is also supported by other platforms, such as AMD GPUs or, later, FPGAs.