
Running demo.py throws an error, have you encountered it before? #3

Open
Handaphoser opened this issue Mar 27, 2023 · 6 comments
Labels
bug Something isn't working

Comments

@Handaphoser

[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "demo.py", line 221, in <module>
    laneDetector = UltrafastLaneDetectorV2(logger=LOGGER)
  File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 208, in __init__
    self._initialize_model(self.model_path, self.cfg)
  File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 213, in _initialize_model
    self.infer = TensorRTEngine(model_path, cfg)
  File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 60, in __init__
    self.context = self._create_context(engine)
  File "/media/huangzd5/handaphoser/projects/autodrive/Vehicle-CV-ADAS-master/TrafficLaneDetector/ultrafastLaneDetector/ultrafastLaneDetectorV2.py", line 93, in _create_context
    return engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'

PyCUDA ERROR: The context stack was not empty upon module cleanup.

A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.

@jason-li-831202
Owner

The problem may be due to the version of the environment. The CUDA version I'm using is 11.4, and the TensorRT version is 8.4.x. Remember to deploy on a device that matches the environment used for converting ONNX to TensorRT.
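The traceback arises because TensorRT's `deserialize_cuda_engine()` returns `None` on a version mismatch rather than raising, and the code only fails later with the opaque `'NoneType' object has no attribute 'create_execution_context'`. A minimal sketch of a fail-fast guard (the function name and message are assumptions, not code from this repo):

```python
def require_engine(engine, engine_path):
    """Raise a descriptive error if TensorRT failed to deserialize the engine."""
    if engine is None:
        raise RuntimeError(
            f"Failed to deserialize '{engine_path}'. TensorRT engines are not "
            "portable across TensorRT/CUDA versions; rebuild the engine from "
            "the ONNX model on this machine (here: CUDA 11.4, TensorRT 8.4.x)."
        )
    return engine

# Usage inside the engine loader (names assumed from the traceback):
#   with open(model_path, "rb") as f, trt.Runtime(trt_logger) as runtime:
#       engine = require_engine(runtime.deserialize_cuda_engine(f.read()), model_path)
#   context = engine.create_execution_context()
```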

@Handaphoser
Author

OK, thanks. The ONNX model runs fine for me. The project is impressive, though a bit slow. Do you have any suggestions for improving the speed? Thanks a lot.

@jason-li-831202
Owner

Inference speed depends on your CPU/GPU. Also, object detection and lane detection are not run in parallel, so inference is relatively slow; processing them with multiprocessing may improve it. On the model side, you can try converting to FP16 precision for a speedup.
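The parallelization idea above can be sketched roughly as follows. The detector functions here are hypothetical stand-ins; in the repo they would be the object-detector and lane-detector inference calls:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_objects(frame):
    return f"objects({frame})"   # placeholder for object-detection inference

def detect_lanes(frame):
    return f"lanes({frame})"     # placeholder for lane-detection inference

def process_frame(frame):
    # Run both detectors on the same frame concurrently instead of
    # sequentially; GPU inference calls typically release the GIL, so
    # threads give real overlap for this workload.
    with ThreadPoolExecutor(max_workers=2) as pool:
        obj_future = pool.submit(detect_objects, frame)
        lane_future = pool.submit(detect_lanes, frame)
        return obj_future.result(), lane_future.result()
```

For the FP16 suggestion, one common route (assuming you rebuild the engine with `trtexec`) is `trtexec --onnx=model.onnx --fp16 --saveEngine=model.trt`.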

@Handaphoser
Author

Got it, many thanks.

@jason-li-831202 jason-li-831202 added the bug Something isn't working label Jun 7, 2023
@JohnsenJiang

Where can I download the model files and video files used in the demo config? Thanks!

@jason-li-831202
Owner

You can refer to the following issue:
#1 (comment)
