Add an argument to tensor data structures with direct GPU/TPU mapping to support re-mapping on a mirrored node, e.g.:
```python
@PluginRegistrar.register
class MXNetTensor(Plugin):
    def __init__(self, load_mxnet_device=None, map_mxnet_devices=None, **kwargs):
        ...
```
where map_mxnet_devices should default to {'all': mxnet.gpu(0)} when load_mxnet_device=mxnet.gpu(0) and map_mxnet_devices=None.
For instance, when load_mxnet_device=mxnet.gpu(0) or load_mxnet_device="cuda:0", map_mxnet_devices can be set manually to a dictionary whose keys are source devices and whose values are target devices, for non-default device maps. Such a map could, for example, flip source and target GPUs 1 and 0, place GPU 3 on CPU 0, and place GPU 2 on GPU 0. Defining both mxnet.gpu(1): mxnet.gpu(0) and cuda:1: cuda:2 in the same mapping should raise an error, since the same device would be mapped to two different targets.
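The mapping and the duplicate-source check described above could be sketched as follows. This is illustrative only, not the plugin's actual API: the helper names (`_canonical`, `validate_device_map`) are hypothetical, and it assumes "cuda:N" / "cpu:N" strings as aliases for mxnet.gpu(N) / mxnet.cpu(N) so the sketch runs without MXNet installed.

```python
# Sketch of the requested behaviour; names are illustrative, not the plugin API.
def _canonical(device):
    """Normalize 'cuda:N' / 'cpu:N' string aliases to a single hashable key,
    so the same physical device spelled two ways collides as intended."""
    if isinstance(device, str) and device.startswith("cuda:"):
        return ("gpu", int(device.split(":")[1]))
    if isinstance(device, str) and device.startswith("cpu:"):
        return ("cpu", int(device.split(":")[1]))
    return device

def validate_device_map(entries):
    """Build a source->target map, raising if one source device is mapped
    to two different targets (the error case described above)."""
    targets = {}
    for src, dst in entries:
        key = _canonical(src)
        if key in targets and targets[key] != _canonical(dst):
            raise ValueError(f"device {src!r} mapped to two different targets")
        targets[key] = _canonical(dst)
    return targets

# Flip gpus 0 and 1, place gpu 3 on cpu 0, and gpu 2 on gpu 0:
valid = [
    ("cuda:0", "cuda:1"),
    ("cuda:1", "cuda:0"),
    ("cuda:3", "cpu:0"),
    ("cuda:2", "cuda:0"),
]
validate_device_map(valid)  # accepted

# gpu 1 mapped to gpu 0 and, via the "cuda:1" alias, also to gpu 2:
conflicting = [("cuda:1", "cuda:0"), ("cuda:1", "cuda:2")]
# validate_device_map(conflicting)  # raises ValueError
```

The point of normalizing before checking is that mxnet.gpu(1) and "cuda:1" name the same physical device, so a conflict between the two spellings must still be detected.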