[TensorRT] ERROR: Network must have at least one output #319
Comments
Increase the verbosity of the logging. If parsing fails for any reason and you don't properly check for errors, you will get this error (which is IMHO very misleading and could be considered a bug): there will be only an empty network, and an empty network has no outputs. If you check for errors you will likely find the real reason.
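A minimal sketch of the error-checking described above, assuming the TensorRT 7 Python API; the file name "yolov3.onnx" and the helper name are placeholders, not from the sample:

```python
# Sketch: parse an ONNX model with verbose logging and explicit error
# checking, assuming the TensorRT 7 Python API. "yolov3.onnx" is a
# placeholder path.
import sys


def collect_parse_errors(parser):
    # Works with any object exposing num_errors / get_error(i),
    # as the TensorRT OnnxParser does.
    return [str(parser.get_error(i)) for i in range(parser.num_errors)]


def parse_onnx(onnx_path="yolov3.onnx"):
    import tensorrt as trt  # local import keeps the helper above testable
    logger = trt.Logger(trt.Logger.VERBOSE)  # raise verbosity to see the real failure
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        ok = parser.parse(f.read())
    if not ok:
        # Without this check you are left with an empty network and hit the
        # misleading "Network must have at least one output" error later.
        for err in collect_parse_errors(parser):
            print(err, file=sys.stderr)
        raise RuntimeError("ONNX parse failed")
    return network
```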
@gcp Could you be more specific, please?
If parsing actually succeeded and there's no error, check with https://lutzroeder.github.io/netron/ whether your network has the outputs properly marked as outputs.
Found this TensorRT 7.0 OnnxParser error.
I get this error:
As said above, but I don't seem to find even a single output node here. My batch size is already 1, as seen from the code above. How do I proceed?
That's over 4 GB just to hold intermediate outputs. It looks like this network is simply too large for your GPU.
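A back-of-the-envelope way to sanity-check such memory figures: an fp32 activation tensor costs N·C·H·W·4 bytes, and the peak is roughly the sum of the tensors alive at once. The shapes below are hypothetical YOLOv3-style examples, not taken from the log:

```python
# Rough fp32 activation-memory estimate. The shapes are hypothetical
# illustrations (NCHW), not values from the actual build log.
def tensor_bytes(n, c, h, w, dtype_size=4):
    # bytes for one activation tensor: batch * channels * height * width * 4
    return n * c * h * w * dtype_size


shapes = [(1, 3, 608, 608),    # e.g. the network input
          (1, 64, 304, 304)]   # e.g. an early feature map
total = sum(tensor_bytes(*s) for s in shapes)
print("%.1f MiB" % (total / 2**20))
```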
If you click on 106_convolutional it will likely be marked as an output. In any case, that wasn't the problem, as you found out: TensorRT does see the output when parsing the ONNX.
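If outputs really were missing, they can be marked explicitly after parsing. A minimal sketch, assuming the TensorRT Python API (`network` is any parsed `INetworkDefinition`); marking the last layer is an illustrative choice, not the sample's code:

```python
# Sketch: explicitly mark a tensor as a network output so the builder has
# at least one output to work with. Assumes the TensorRT Python API;
# choosing the final layer here is only an illustration.
def mark_last_layer_output(network):
    # Take the last layer's first output tensor and mark it as a
    # network output.
    last_layer = network.get_layer(network.num_layers - 1)
    network.mark_output(last_layer.get_output(0))
```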
Just to be clear on what I understand from your comment: is a GTX 1050Ti not suitable even for converting this YoloV3 model to TensorRT? I was able to convert YoloV3 to TensorRT on a Jetson Nano, so there must be a workaround.
This seems to be a persisting issue of
FWIW I just ran this sample on a V100 and saw peak memory usage of about ~1.5 GB, which is less than the 1050Ti's cap of 4 GB. However, I believe it depends on which kernels are chosen during engine building, which in turn depends on the GPU / compute capability. The workspace size was the same as in the script you linked (1 << 28, ~256 MB). I don't have a 1050Ti to test on, but I'll see if I can look into the root cause a bit more. You might try lowering the workspace size and see if that helps.
@santhoshnumberone Hi, did you solve this problem? I'm hitting the same issue.
@santhoshnumberone TensorRT version: 6.0.1.5, GPU: 2080Ti, CUDA: 10.0, cuDNN: 7.6.5, OS: Ubuntu 18.04, Python 3.6, PyTorch 1.4.0, ONNX 1.5.0. Around the comment "The rest of this sample can be run with either version of Python", the code guarded by if sys.version_info[0] > 2: fails with: TypeError: Unicode-objects must be encoded before hashing. Can I refer to your conversion code? Thank you!
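That TypeError comes from passing a Python 3 str to a hashlib function, which accepts only bytes; encoding first works on both Python 2 and 3. A minimal sketch (the helper name and the use of MD5 are illustrative, not from the sample):

```python
# Sketch of the fix for "TypeError: Unicode-objects must be encoded
# before hashing": hashlib constructors require bytes, so encode str
# input first. The helper name and MD5 choice are illustrative.
import hashlib


def digest(text):
    # str -> bytes before hashing; works on Python 2 and 3
    return hashlib.md5(text.encode("utf-8")).hexdigest()
```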
@santhoshnumberone Can you please indicate whether you still need help with this, and whether you were able to experiment with a lower workspace size?
I will close this since there has been no response for more than 3 weeks; please reopen if you still have questions. Thanks!
Description
Trying to convert YoloV3 to TensorRT using this yolov3 sample.
I am able to run yolov3_to_onnx.py and get the output described here, without any errors or warnings.
When I run onnx_to_tensorrt.py I get this error:
[TensorRT] ERROR: Network must have at least one output
According to this, the error depends on the TensorRT version; TensorRT is unable to build the engine.
Do we have a workaround? What seems to be the issue here?
Environment
Linux distro: Ubuntu 18.04.3 LTS bionic
GPU type: GTX 1050Ti
Nvidia driver version: 440.33.01
CUDA version: 10.2 (according to nvidia-smi) and 9.1.85 (according to nvcc --version; nvidia-smi reports the driver's supported CUDA version, while nvcc reports the installed toolkit, so they can differ)
CUDNN version: 7.6.5 (according to CUDNN_H_PATH=$(whereis cudnn.h); cat ${CUDNN_H_PATH} | grep CUDNN_MAJOR -A 2)
Python version: Python 3.6.9
TensorRT version: 7.0.0 (according to dpkg -l | grep nvinfer)