
RuntimeError: Not compiled with GPU support on Colab #27

Closed
Kashu7100 opened this issue Jul 12, 2022 · 7 comments

@Kashu7100

Thank you for sharing this interesting work.
When I run the Colab example, the 6th cell of the notebook fails with the following error:

[[[0, 12]], [[16, 19]], [[23, 32]]]

/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py:813: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
  "The `device` argument is deprecated and will be removed in v5 of Transformers.", FutureWarning

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-6-d454bb231030> in <module>()
      1 image = load('http://farm4.staticflickr.com/3693/9472793441_b7822c00de_z.jpg')
      2 caption = 'bobble heads on top of the shelf'
----> 3 result, _ = glip_demo.run_on_web_image(image, caption, 0.5)
      4 imshow(result, caption)

17 frames

/content/GLIP/maskrcnn_benchmark/engine/predictor_glip.py in run_on_web_image(self, original_image, original_caption, thresh, custom_entity, alpha)
    138             custom_entity = None,
    139             alpha = 0.0):
--> 140         predictions = self.compute_prediction(original_image, original_caption, custom_entity)
    141         top_predictions = self._post_process(predictions, thresh)
    142 

/content/GLIP/maskrcnn_benchmark/engine/predictor_glip.py in compute_prediction(self, original_image, original_caption, custom_entity)
    217         # compute predictions
    218         with torch.no_grad():
--> 219             predictions = self.model(image_list, captions=[original_caption], positive_map=positive_map_label_to_token)
    220             predictions = [o.to(self.cpu_device) for o in predictions]
    221         print("inference time per image: {}".format(timeit.time.perf_counter() - tic))

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/content/GLIP/maskrcnn_benchmark/modeling/detector/generalized_vl_rcnn.py in forward(self, images, targets, captions, positive_map, greenlight_map)
    283         else:
    284             proposals, proposal_losses, fused_visual_features = self.rpn(images, visual_features, targets, language_dict_features, positive_map,
--> 285                                               captions, swint_feature_c4)
    286         if self.roi_heads:
    287             if self.cfg.MODEL.ROI_MASK_HEAD.PREDICTOR.startswith("VL"):

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/content/GLIP/maskrcnn_benchmark/modeling/rpn/vldyhead.py in forward(self, images, features, targets, language_dict_features, positive_map, captions, swint_feature_c4)
    921                                                                         language_dict_features,
    922                                                                         embedding,
--> 923                                                                         swint_feature_c4
    924                                                                         )
    925         anchors = self.anchor_generator(images, features)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/content/GLIP/maskrcnn_benchmark/modeling/rpn/vldyhead.py in forward(self, x, language_dict_features, embedding, swint_feature_c4)
    737                        "lang": language_dict_features}
    738 
--> 739         dyhead_tower = self.dyhead_tower(feat_inputs)
    740 
    741         # soft token

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py in forward(self, input)
    137     def forward(self, input):
    138         for module in self:
--> 139             input = module(input)
    140         return input
    141 

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/content/GLIP/maskrcnn_benchmark/modeling/rpn/vldyhead.py in forward(self, inputs)
    203                 conv_args = dict(offset=offset, mask=mask)
    204 
--> 205             temp_fea = [self.DyConv[1](feature, **conv_args)]
    206 
    207             if level > 0:

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/content/GLIP/maskrcnn_benchmark/modeling/rpn/vldyhead.py in forward(self, input, **kwargs)
    133 
    134     def forward(self, input, **kwargs):
--> 135         x = self.conv(input, **kwargs)
    136         if self.bn:
    137             x = self.bn(x)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/cuda/amp/autocast_mode.py in decorate_fwd(*args, **kwargs)
    217                     return fwd(*_cast(args, cast_inputs), **_cast(kwargs, cast_inputs))
    218             else:
--> 219                 return fwd(*args, **kwargs)
    220     return decorate_fwd
    221 

/content/GLIP/maskrcnn_benchmark/layers/deform_conv.py in forward(self, input, offset, mask)
    380         return modulated_deform_conv(
    381             input, offset, mask, self.weight, self.bias, self.stride,
--> 382             self.padding, self.dilation, self.groups, self.deformable_groups)
    383 
    384     def __repr__(self):

/content/GLIP/maskrcnn_benchmark/layers/deform_conv.py in forward(ctx, input, offset, mask, weight, bias, stride, padding, dilation, groups, deformable_groups)
    201             ctx.groups,
    202             ctx.deformable_groups,
--> 203             ctx.with_bias
    204         )
    205         return output

RuntimeError: Not compiled with GPU support

Here is the GPU information that I used:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   51C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Any advice is appreciated.

Sincerely,

@zyong812

Remove the ./build directory and re-install. This solved the problem for me.
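This fix can be sketched as the following shell commands. The /content/GLIP path assumes the Colab layout from the traceback above, and TORCH_CUDA_ARCH_LIST="7.5" is an optional assumption matching the Tesla T4 shown in the nvidia-smi output:

```shell
# Assumes the Colab layout from this thread: the GLIP repo at /content/GLIP.
cd /content/GLIP

# Remove the stale build artifacts left by the earlier CPU-only compile.
rm -rf build/

# Re-compile the extensions while a GPU runtime is attached, so the CUDA
# kernels (e.g. modulated_deform_conv) actually get built this time.
# TORCH_CUDA_ARCH_LIST is optional; "7.5" matches a Tesla T4.
TORCH_CUDA_ARCH_LIST="7.5" python setup.py build develop --user
```

The key point is that build/ caches the first compile; if that compile ran before a GPU runtime was attached, the extensions stay CPU-only until build/ is deleted and setup.py is re-run.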

@neerajdr

@zyong812 Where is the ./build directory?

@zyong812

It is created by python setup.py build develop --user @neerajdr

@neerajdr

@zyong812 How did you resolve this issue?

I am now getting this error:

/content/GLIP/maskrcnn_benchmark/engine/predictor_glip.py in overlay_entity_names(self, image, predictions, names, text_size, text_pixel, text_offset, text_offset_original)
    345 
    346         cv2.putText(
--> 347             image, s, (int(x), int(y)-text_offset_original), cv2.FONT_HERSHEY_SIMPLEX, text_size, (self.color, self.color, self.color), text_pixel, cv2.LINE_AA
    348         )
    349         previous_locations.append((int(x), int(y)))

AttributeError: 'GLIPDemo' object has no attribute 'color'

@weinman

weinman commented Jul 14, 2022

@neerajdr I too had this issue; I surmise it has to do with the Colab notebook predating the current code.

Anyhow, there's probably a better way, but you can hack around this problem (and at least see some quick results) by inserting self.color = 255 at the beginning of the run_on_web_image method of the GLIPDemo class in maskrcnn_benchmark/engine/predictor_glip.py

At least that will get you some white boxes to look at.

E.g., here's a diff:

index 6d28576..1fda54a 100644
--- a/maskrcnn_benchmark/engine/predictor_glip.py
+++ b/maskrcnn_benchmark/engine/predictor_glip.py
@@ -137,6 +137,7 @@ class GLIPDemo(object):
             thresh=0.5,
             custom_entity = None,
             alpha = 0.0):
+        self.color = 255
         predictions = self.compute_prediction(original_image, original_caption, custom_entity)
         top_predictions = self._post_process(predictions, thresh)

@Haotian-Zhang
Collaborator

Please refer to #31 for a quick fix, thanks!

@JasonHysy

JasonHysy commented Jul 2, 2023

Created by python setup.py build develop --user @neerajdr

Hello,
I have the same issue. Sorry, would you mind specifying what re-installation means here?
Do I just re-run the following 3 commands?

pip install einops shapely timm yacs tensorboardX ftfy prettytable pymongo

pip install transformers 

python setup.py build develop --user
