
ERROR: could not create engine from ONNX. Aborted (core dumped) #31

Closed
shqfeng opened this issue Sep 10, 2020 · 6 comments

shqfeng commented Sep 10, 2020

When I run the command "./visualizer" in the terminal and then select a ".bin" file, it throws the following error:

$ ./visualizer 
OpenGL Context Version 4.5 core profile
GLEW initialized.
OpenGL context version: 4.5
OpenGL vendor string  : NVIDIA Corporation
OpenGL renderer string: GeForce GTX 1050/PCIe/SSE2
Extracting surfel maps partially.
Performing frame-to-model matching.
Setting verbosity to: false
Trying to open model
Trying to deserialize previously stored: /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.trt
Successfully found TensorRT engine file /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.trt
Successfully created inference runtime
No DLA selected.
Successfully allocated 426755792 for model.
Successfully read 426755792 to modelmem.
Could not deserialize TensorRT engine. 
Generating from sratch... This may take a while...
Trying to generate trt engine from : /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.onnx
Platform DOESN'T HAVE fp16 support.
No DLA selected.
----------------------------------------------------------------
Input filename:   /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
----- Parsing of ONNX model /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.onnx is Done ---- 
Success picking up ONNX model
Failure creating engine from ONNX model
Current trial size is 8589934592
Failure creating engine from ONNX model
Current trial size is 4294967296
Failure creating engine from ONNX model
Current trial size is 2147483648
Failure creating engine from ONNX model
Current trial size is 1073741824
Failure creating engine from ONNX model
Current trial size is 536870912
Failure creating engine from ONNX model
Current trial size is 268435456
Failure creating engine from ONNX model
Current trial size is 134217728
Failure creating engine from ONNX model
Current trial size is 67108864
Failure creating engine from ONNX model
Current trial size is 33554432
Failure creating engine from ONNX model
Current trial size is 16777216
Failure creating engine from ONNX model
Current trial size is 8388608
Failure creating engine from ONNX model
Current trial size is 4194304
Failure creating engine from ONNX model
Current trial size is 2097152
Failure creating engine from ONNX model
Current trial size is 1048576
terminate called after throwing an instance of 'std::runtime_error'
 what():  ERROR: could not create engine from ONNX.
Aborted (core dumped)
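
For reference, the repeated "Failure creating engine from ONNX model" / "Current trial size is ..." lines come from a fallback path: once the stored model.trt cannot be deserialized, the engine is rebuilt from model.onnx with a progressively halved TensorRT workspace size until a lower bound is reached and the error above is thrown. A minimal sketch of that pattern, assuming the TensorRT 5.x C++ API (buildEngineFromOnnx is an illustrative name, not the actual rangenet_lib code):

#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstddef>
#include <iostream>
#include <stdexcept>

class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) override {
    if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
  }
};

// Illustrative sketch only: rebuild a TensorRT engine from an ONNX file,
// retrying with smaller workspace sizes when the build fails.
nvinfer1::ICudaEngine* buildEngineFromOnnx(const char* onnx_path) {
  static Logger logger;
  nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
  nvinfer1::INetworkDefinition* network = builder->createNetwork();
  nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, logger);

  if (!parser->parseFromFile(onnx_path, /*verbosity=*/0)) {
    throw std::runtime_error("ERROR: could not parse the ONNX model.");
  }

  // Try a large workspace first and halve it after every failed build,
  // which is what produces the shrinking "Current trial size" values above.
  for (std::size_t ws = (1ULL << 33); ws >= (1ULL << 20); ws /= 2) {
    std::cerr << "Current trial size is " << ws << std::endl;
    builder->setMaxWorkspaceSize(ws);
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (engine) {
      return engine;  // success: the caller can serialize this to model.trt
    }
    std::cerr << "Failure creating engine from ONNX model" << std::endl;
  }
  throw std::runtime_error("ERROR: could not create engine from ONNX.");
}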

Waiting for your reply! Thanks!

@Chen-Xieyuanli (Member)

Hey @shqfeng, this seems to be a problem with rangenet_lib. Have you already tried the example given there?


shqfeng commented Sep 10, 2020 via email

@Chen-Xieyuanli (Member)

It seems it is still trying to build the TensorRT model, which should already have been generated if you passed the demo in rangenet_lib.

Did you specify the path of the model in the configuration file?


shqfeng commented Sep 11, 2020

Thanks! Today I rebuilt the project and passed the demo in rangenet_lib, but now it throws the same bug as #30.

$ ./visualizer 
OpenGL Context Version 4.5 core profile
GLEW initialized.
OpenGL context version: 4.5
OpenGL vendor string  : NVIDIA Corporation
OpenGL renderer string: GeForce GTX 1050/PCIe/SSE2
Extracting surfel maps partially.
Performing frame-to-model matching.
Setting verbosity to: false
Trying to open model
Trying to deserialize previously stored: /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.trt
Successfully found TensorRT engine file /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.trt
Successfully created inference runtime
No DLA selected.
Successfully allocated 426755792 for model.
Successfully read 426755792 to modelmem.
Created engine!
Successfully deserialized Engine from trt file
Binding: 0, type: 0
[Dim 5][Dim 64][Dim 2048]
Binding: 1, type: 0
[Dim 20][Dim 64][Dim 2048]
Successfully create binding buffer
calibration filename: /home/raymond/data/kitti-odometry/kitti/dataset/sequences/07/calib.txt...loaded.
ground truth filename: /home/raymond/data/kitti-odometry/kitti/dataset/poses/07.txt
1101 poses read.
Performing frame-to-model matching.

It seems that yesterday's problem has not been solved! This is my test data folder tree:

└── kitti-odometry
    └── kitti
        └── dataset
            ├── poses
            │   └── 07.txt
            └── sequences
                └── 07
                    └── velodyne
                        └── .bin

This is my model path:

<param name="model_path" type="string">/media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53</param>
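
The model_path parameter should point at the directory that contains both model.trt and model.onnx; as the logs above show, rangenet_lib appends those filenames to it, tries to deserialize the .trt engine first, and then reports the input and output bindings. A minimal sketch of that loading step, assuming the TensorRT 5.x C++ API (loadEngine is an illustrative name, not the actual rangenet_lib code):

#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

// Illustrative sketch only: deserialize <model_dir>/model.trt and list its bindings.
nvinfer1::ICudaEngine* loadEngine(const std::string& model_dir,
                                  nvinfer1::ILogger& logger) {
  // Read the serialized engine (<model_dir>/model.trt) into memory.
  std::ifstream file(model_dir + "/model.trt", std::ios::binary);
  std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                         std::istreambuf_iterator<char>());

  nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
  nvinfer1::ICudaEngine* engine =
      runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
  if (engine == nullptr) {
    return nullptr;  // caller falls back to rebuilding from <model_dir>/model.onnx
  }

  // List the input/output bindings, matching the
  // "Binding: 0 ... [Dim 5][Dim 64][Dim 2048]" lines in the log above.
  for (int b = 0; b < engine->getNbBindings(); ++b) {
    nvinfer1::Dims dims = engine->getBindingDimensions(b);
    std::cout << "Binding: " << b;
    for (int d = 0; d < dims.nbDims; ++d) {
      std::cout << " [Dim " << dims.d[d] << "]";
    }
    std::cout << std::endl;
  }
  return engine;
}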

@Chen-Xieyuanli (Member)

Hey @shqfeng,

Is there any update on the issue?

@Chen-Xieyuanli (Member)

Since there is no update for this issue, I'm going to close it.

If there is any problem, please feel free to ask me to reopen it.
