A model was trained with the config: Misc/cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv.yaml
For inference I would like to use the CPU, but it fails with: NotImplementedError("Deformable Conv is not supported on CPUs!")
Will support for running Deformable Conv on the CPU be added in the future?
I would like to use multiple models simultaneously to get predictions, but my GPU memory is not large enough to hold all of them. Do you have any advice on how to achieve that (preferably cost-efficiently)?
Running multiple models simultaneously on the CPU will give really bad inference times anyway.
What you could do instead is run the models in sequence, freeing the resources of each one before loading the next.
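The sequential approach could be sketched roughly like this. This is only an illustration of the pattern; the function and factory names are hypothetical, not part of detectron2. With PyTorch models you would wrap something like DefaultPredictor(cfg) in each factory and additionally call torch.cuda.empty_cache() after deleting the model.

```python
import gc

def run_models_sequentially(model_factories, inputs):
    """Run several models one at a time, freeing each before loading the next.

    model_factories: zero-argument callables that build a model on demand
    (hypothetical names -- with detectron2 a factory might construct a
    DefaultPredictor). Only one model is resident in memory at any time.
    """
    all_outputs = []
    for build_model in model_factories:
        model = build_model()                        # load one model into memory
        all_outputs.append([model(x) for x in inputs])
        del model                                    # drop the only reference
        gc.collect()                                 # reclaim memory before the next load
        # with PyTorch + CUDA, also: torch.cuda.empty_cache()
    return all_outputs
```

This trades latency for memory: each model's load time is paid on every pass, but peak memory stays at a single model.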
Also, have you already checked how much GPU memory a single model uses during inference? I can easily run 3, and probably 4, Mask R-CNN models on a single GTX 1080.
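To check how much GPU memory one model actually needs, PyTorch's torch.cuda.max_memory_allocated() reports the peak allocation since the process (or the last reset) started. A small hedged helper, assuming PyTorch is installed:

```python
def peak_gpu_memory_mb():
    """Return the peak CUDA memory allocated by this process in MiB,
    or None if PyTorch/CUDA is unavailable. Run inference on one model
    first, then call this to see its footprint."""
    try:
        import torch
        if torch.cuda.is_available():
            return torch.cuda.max_memory_allocated() / 1024 ** 2
    except ImportError:
        pass
    return None
```

Multiply the result by the number of models to estimate whether they fit on your card together.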