
Failed to allocate memory for requested buffer of size #144

Open
Abdulhadiasa opened this issue Sep 5, 2023 · 0 comments
I am running:

  • Ubuntu 20.04
  • RTX 2070 Super 8GB
  • installed with pip3 install anylabeling-gpu
  • CUDA 11.6
  • onnx 1.13.1
  • onnxruntime-gpu 1.14.1

While labeling a dataset, I run into this error whenever I try to add a point for Segment Anything (SAM). I tried ViT-B, ViT-L, and ViT-H.

2023-09-05 10:42:40.812917403 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running ConvTranspose node. Name:'/output_upscaling/output_upscaling.0/ConvTranspose' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 33554432

WARNING:root:Could not inference model
WARNING:root:[ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ConvTranspose node. Name:'/output_upscaling/output_upscaling.0/ConvTranspose' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 33554432

Traceback (most recent call last):
  File "/home/abed/Documents/label-tools/anylabeling/Anylabelling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/segment_anything.py", line 232, in predict_shapes
    masks = self.model.predict_masks(image_embedding, self.marks)
  File "/home/abed/Documents/label-tools/anylabeling/Anylabelling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/sam_onnx.py", line 193, in predict_masks
    masks = self.run_decoder(
  File "/home/abed/Documents/label-tools/anylabeling/Anylabelling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/sam_onnx.py", line 126, in run_decoder
    masks, _, _ = self.decoder_session.run(None, decoder_inputs)
  File "/home/abed/Documents/label-tools/anylabeling/Anylabelling/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ConvTranspose node. Name:'/output_upscaling/output_upscaling.0/ConvTranspose' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 33554432
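For scale, the buffer the ConvTranspose node failed to allocate is only 32 MiB, which suggests the GPU arena was already exhausted rather than the single request being oversized:

```python
# Size of the failed allocation reported by onnxruntime, in bytes
requested_bytes = 33554432
print(f"{requested_bytes / 1024 ** 2:.0f} MiB")  # prints "32 MiB"
```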

nvidia-smi shows the model has been loaded into memory and that the GPU is nearly full (7964 MiB of 8192 MiB in use):

Tue Sep  5 10:49:57 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.05              Driver Version: 535.86.05    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2070 ...    Off | 00000000:01:00.0  On |                  N/A |
|  0%   38C    P8              21W / 215W |   7964MiB /  8192MiB |     15%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A    501464      G   /usr/lib/xorg/Xorg                           22MiB |
|    0   N/A  N/A    501819      G   /usr/lib/xorg/Xorg                           45MiB |
|    0   N/A  N/A   3588976      C   ...anylabeling/Anylabelling/bin/python     7892MiB |
+---------------------------------------------------------------------------------------+
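Not part of the original report, but a possible direction (unverified on this setup): onnxruntime's CUDAExecutionProvider accepts provider options such as `gpu_mem_limit` and `arena_extend_strategy` that cap and tune the BFC arena the error message points at. A minimal sketch, assuming the session could be rebuilt with custom providers (the model path below is illustrative):

```python
# Hypothetical workaround sketch: cap the CUDA BFC arena via
# CUDAExecutionProvider options and keep a CPU fallback provider.
providers = [
    ("CUDAExecutionProvider", {
        "device_id": 0,
        "gpu_mem_limit": 4 * 1024 ** 3,               # 4 GiB cap; tune for an 8 GiB card
        "arena_extend_strategy": "kSameAsRequested",  # grow the arena only as needed
    }),
    "CPUExecutionProvider",  # fallback if a CUDA allocation fails
]

# The list would then be passed when building the session, e.g.:
# import onnxruntime as ort
# session = ort.InferenceSession("sam_decoder.onnx", providers=providers)
```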
