Cannot run ONNX on GPU: process didn't exit successfully (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION) #124
Redflashx12 asked this question in Q&A · Unanswered · 1 comment, 2 replies
Hey,
I am having a few issues trying to make ONNX run on the GPU, I don't understand why, and I wanted to ask whether you have any ideas that might help me.
I already found the similar issue #44 and tried to implement both of its fixes, but neither worked for me.
I already had a few issues setting up the GPU over the last few days/weeks on Windows 11 22H2, but both CUDA and TensorRT are now recognized. I am running CUDA 11.8 with cuDNN 8.9.0 and TensorRT 8.6.1.6 on 2x RTX 4090s for my company.
My Cargo.toml entry for ort looks like this:
ort = { version = "1.16.3", features = ["load-dynamic", "cuda", "tensorrt"] }
I am creating my YOLOv8 model like this:
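A minimal sketch of that kind of setup, assuming the ort 1.16 Environment/SessionBuilder API; the model path, provider options, and optimization level below are placeholders rather than the exact code from this post:

```rust
// Minimal sketch: ort 1.16-style session with the TensorRT and CUDA execution
// providers registered on the environment. The model path is a placeholder.
use std::sync::Arc;

use ort::{Environment, ExecutionProvider, GraphOptimizationLevel, Session, SessionBuilder};

pub fn build_session() -> ort::OrtResult<Session> {
    let environment = Arc::new(
        Environment::builder()
            .with_name("yolov8")
            // Providers are tried in the order given; one that fails to
            // register falls back to the next (ultimately the CPU EP).
            .with_execution_providers([
                ExecutionProvider::TensorRT(Default::default()),
                ExecutionProvider::CUDA(Default::default()),
            ])
            .build()?,
    );

    SessionBuilder::new(&environment)?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        // With the `load-dynamic` feature, the onnxruntime library is loaded
        // at runtime (e.g. via ORT_DYLIB_PATH), so the DLL that gets picked up
        // must be a CUDA/TensorRT-enabled build matching the installed CUDA.
        .with_model_from_file("yolov8n.onnx")
}
```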
When running with the CUDA execution provider, right after the following YOLO postprocessing returns, I get this exit code:
Apparently something goes wrong when the result object is returned, because that is where my program crashes. There is no stack trace afterwards.
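For context, the inference-plus-postprocessing step being described has roughly the following shape, assuming the ort 1.16 Value/ndarray API; the Detection struct and the omitted decoding are illustrative placeholders, not the exact code from this post:

```rust
// Rough shape of the inference + postprocessing step, assuming ort 1.16 with
// ndarray. Tensor layout and the Detection struct are placeholders.
use ndarray::{Array4, CowArray};
use ort::{OrtResult, Session, Value};

pub struct Detection {
    pub bbox: [f32; 4],
    pub score: f32,
    pub class_id: usize,
}

pub fn detect(session: &Session, input: Array4<f32>) -> OrtResult<Vec<Detection>> {
    // Inputs are passed as a dynamically shaped CowArray wrapped in a Value.
    let input = CowArray::from(input.into_dyn());
    let outputs = session.run(vec![Value::from_array(session.allocator(), &input)?])?;

    // The extracted tensor borrows from `outputs`, so anything needed after
    // this function returns is copied into an owned ndarray first.
    let output = outputs[0].try_extract::<f32>()?;
    let _predictions = output.view().to_owned();

    // ...decode boxes / scores and run NMS on the predictions (omitted)...
    let detections: Vec<Detection> = Vec::new();
    Ok(detections)
}
```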
However, when I'm running on the CPU, there are no errors or crashes. Any ideas, maybe? My system Path variables look like this:
The user Path is this:
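Since the crash only appears with the GPU execution providers, one thing those Path entries control is which runtime DLLs actually get loaded. A small standalone check like the following can confirm that the expected libraries are resolvable; the DLL names are the usual ones for CUDA 11.x, cuDNN 8, TensorRT 8 and onnxruntime-gpu builds, given here as an assumption:

```rust
// Quick check of which GPU-related runtime DLLs are resolvable through PATH.
// The DLL names are typical for CUDA 11.x / cuDNN 8 / TensorRT 8 setups and
// are an assumption, not taken from this post.
use std::env;

fn main() {
    let dlls = [
        "onnxruntime.dll",
        "onnxruntime_providers_shared.dll",
        "onnxruntime_providers_cuda.dll",
        "onnxruntime_providers_tensorrt.dll",
        "cudart64_110.dll",
        "cudnn64_8.dll",
        "nvinfer.dll",
    ];

    let path = env::var("PATH").unwrap_or_default();
    for dll in dlls {
        // Report the first PATH directory that contains each DLL, if any.
        match env::split_paths(&path).find(|dir| dir.join(dll).is_file()) {
            Some(dir) => println!("{dll}: found in {}", dir.display()),
            None => println!("{dll}: NOT found on PATH"),
        }
    }
}
```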
Thank you.
Kind regards
Anton