Getting error while converting to TensorRT #13109

Closed · 1 task done · Labels: question (Further information is requested)
Amiya-Lahiri-AI opened this issue May 24, 2024 · 5 comments

@Amiya-Lahiri-AI
Question

I am getting an error while trying to convert the YOLOv9e model to TensorRT:

➜  models git:(main) ✗ yolo export model=yolov9e.pt format=engine batch=4 workspace=1
WARNING ⚠️ TensorRT requires GPU export, automatically assigning device=0
Ultralytics YOLOv8.2.20 🚀 Python-3.10.13 torch-2.2.0 CUDA:0 (NVIDIA H100 PCIe, 80995MiB)
YOLOv9e summary (fused): 687 layers, 57438080 parameters, 0 gradients, 189.5 GFLOPs

PyTorch: starting from 'yolov9e.pt' with input shape (4, 3, 640, 640) BCHW and output shape(s) (4, 84, 8400) (112.1 MB)

ONNX: starting export with onnx 1.16.1 opset 17...
ONNX: simplifying with onnxsim 0.4.36...
ONNX: export success ✅ 11.7s, saved as 'yolov9e.onnx' (219.5 MB)

TensorRT: starting export with TensorRT 8.4.3.1...
[05/24/2024-16:13:36] [TRT] [I] [MemUsageChange] Init CUDA: CPU +406, GPU +0, now: CPU 1576, GPU 3271 (MiB)
[05/24/2024-16:13:36] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +1, GPU +0, now: CPU 1596, GPU 3271 (MiB)
[05/24/2024-16:13:36] [TRT] [I] ----------------------------------------------------------------
[05/24/2024-16:13:36] [TRT] [I] Input filename:   yolov9e.onnx
[05/24/2024-16:13:36] [TRT] [I] ONNX IR version:  0.0.8
[05/24/2024-16:13:36] [TRT] [I] Opset version:    17
[05/24/2024-16:13:36] [TRT] [I] Producer name:    pytorch
[05/24/2024-16:13:36] [TRT] [I] Producer version: 2.2.0
[05/24/2024-16:13:36] [TRT] [I] Domain:           
[05/24/2024-16:13:36] [TRT] [I] Model version:    0
[05/24/2024-16:13:36] [TRT] [I] Doc string:       
[05/24/2024-16:13:36] [TRT] [I] ----------------------------------------------------------------
[05/24/2024-16:13:37] [TRT] [W] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT: input "images" with shape(4, 3, 640, 640) DataType.FLOAT
TensorRT: output "output0" with shape(4, 84, 8400) DataType.FLOAT
TensorRT: building FP32 engine as yolov9e.engine
[05/24/2024-16:13:51] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +572, GPU +168, now: CPU 2413, GPU 1309 (MiB)
[05/24/2024-16:13:51] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +40, now: CPU 2413, GPU 1349 (MiB)
[05/24/2024-16:13:51] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[05/24/2024-16:14:01] [TRT] [E] 1: [caskBuilderUtils.cpp::trtSmToCaskCCV::556] Error Code 1: Internal Error (Unsupported SM: 0x900)
TensorRT: export failure ❌ 37.7s: __enter__
Traceback (most recent call last):
  File "/opt/conda/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 583, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/engine/model.py", line 602, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 299, in __call__
    f[1], _ = self.export_engine()
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 142, in outer_func
    raise e
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 137, in outer_func
    f, model = inner_func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 797, in export_engine
    with build(network, config) as engine, open(f, "wb") as t:
AttributeError: __enter__

Python version: 3.10.13
torch: 2.2.0
nvidia-tensorrt: 8.4.3.1
CUDA: 12.4
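For reference, the same export via the Python API (a minimal sketch mirroring the CLI command above; it assumes yolov9e.pt is in the working directory):

from ultralytics import YOLO

# Load the YOLOv9e checkpoint and export a TensorRT engine,
# equivalent to: yolo export model=yolov9e.pt format=engine batch=4 workspace=1
model = YOLO("yolov9e.pt")
model.export(format="engine", batch=4, workspace=1)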


Amiya-Lahiri-AI added the question label on May 24, 2024

👋 Hello @Amiya-Lahiri-AI, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics
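You can then verify the environment (a quick sketch using the package's built-in diagnostics):

import ultralytics

# Prints a summary of the setup: ultralytics, Python, torch and CUDA versions
ultralytics.checks()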

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled): notebooks with free GPU (Colab, Kaggle, Gradient), Google Cloud Deep Learning VM, Amazon Deep Learning AMI, and the Ultralytics Docker image.

Status

Ultralytics CI: if the CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@alsozatch

When I get errors like that from TensorRT, it is typically because some layer is not quantizable. YOLOv9 might have added a layer that is not quantizable. What TensorRT version are you on?
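One way to check the installed version (a minimal snippet; TensorRT exposes its version as a module attribute):

import tensorrt

# Print the installed TensorRT version, e.g. "8.4.3.1"
print(tensorrt.__version__)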

@glenn-jocher (Member)

It looks like you're encountering a compatibility issue with TensorRT and the model's architecture. The error message Unsupported SM: 0x900 suggests that your GPU's compute capability might not be supported by the TensorRT version you're using.

Could you confirm the GPU model you're using? Also, updating to the latest TensorRT version might help if your GPU is relatively new. This can often resolve issues with unsupported layers or features in newer models like YOLOv9.
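You can confirm the GPU and its compute capability with PyTorch, which is already installed per the logs (a quick check; device index 0 assumed):

import torch

# An H100 reports compute capability (9, 0), i.e. the "SM 0x900" in the TensorRT
# log above, which TensorRT 8.4.x predates; a newer TensorRT release with
# Hopper support is needed.
print(torch.cuda.get_device_name(0))        # e.g. "NVIDIA H100 PCIe"
print(torch.cuda.get_device_capability(0))  # e.g. (9, 0)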

@Amiya-Lahiri-AI (Author)

@glenn-jocher it turns out the problem was with the instance I was using for inference. I resolved it by switching to a different instance. Thank you for your support!

@glenn-jocher (Member)

Glad to hear you resolved the issue by switching instances! If you have any more questions or run into other issues, feel free to reach out. Happy coding! 🚀
