onnxruntime.InferenceSession.run sometimes gets stuck, sometimes not #21418
Comments
What do you mean?
The output is as follows:
The program may stop at any inference call, which could be the first one, the second one, or any other.
1.4 is too old.
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
I have a similar issue with CPU execution. The execution times rise 10x after approx 1 h.
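One way to pin down the kind of gradual slowdown described above is to time every inference call against a running baseline of recent latencies. A minimal sketch in pure Python; the `lambda` workload below is a stand-in for the real sess.run call, and the 10x threshold simply mirrors the factor reported in this thread:

```python
import time
from collections import deque

def timed_run(run_fn, history, min_samples=5, slowdown_factor=10.0):
    """Time one call to run_fn and flag it if it is far slower than the
    average of recent calls. history is a deque of latencies in seconds
    and is updated in place. Returns (result, latency_s, is_slow)."""
    start = time.perf_counter()
    result = run_fn()
    latency = time.perf_counter() - start
    is_slow = False
    if len(history) >= min_samples:
        baseline = sum(history) / len(history)
        is_slow = latency > slowdown_factor * baseline
    history.append(latency)
    return result, latency, is_slow

# Stand-in workload of ~5 ms per call; in practice this would be
# something like: lambda: sess.run(None, input_feed)
history = deque(maxlen=100)
for _ in range(10):
    _, latency, slow = timed_run(lambda: time.sleep(0.005), history)
```

Logging the flagged latencies with a timestamp would show whether the 10x jump happens abruptly after ~1 h or drifts up gradually.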
Describe the issue
I have built onnxruntime-gpu 1.4.0 following https://github.com/microsoft/onnxruntime/blob/v1.4.0/dockerfiles/Dockerfile.cuda . Both import onnxruntime and onnxruntime.get_device() behave normally, and onnxruntime.InferenceSession() seems fine too. However, sess.run() sometimes runs smoothly but gets stuck at other times (GPU memory is not full: only ~2 GB of 11 GB is used). I have tried various SessionOptions settings, but the issue persists. PS: the code runs inside a Docker container.
To reproduce
Urgency
No response
Platform
Linux
OS Version
Ubuntu 18.04
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
v1.4.0
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 10.1, CUDNN 7.6.5, Driver 430.50, NVIDIA 2080 Ti