"[StatusCode.INTERNAL] onnx runtime error 6: Non-zero status code returned while running Conv node. Name:'/model.1/model/model.0/model.0.1/block/block.3/block.3.0/Conv' Status Message: /workspace/onnxruntime/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 4547406516732812544
After the failed to allocate memory issue occurs, I opened nvidia-smi to check memory usage and the usage peak does not reach 100%, but all subsequent inferences will fail.
Following images shows prometheus dashboard, when the model says failed to allocate memory, GRAM usage is actually low
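For reference, the same observation can be made programmatically. A minimal sketch, assuming the NVML Python bindings are installed (the pynvml module from the nvidia-ml-py package) and that the model runs on GPU 0:

    # Poll GPU 0 memory once per second to confirm GRAM is not actually
    # exhausted when the allocation error fires.
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0

    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"used {mem.used / 2**20:.0f} MiB / total {mem.total / 2**20:.0f} MiB")
        time.sleep(1)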
Triton Information
What version of Triton are you using?
Tried r23.03, r23.05, and r23.06, all with the same problem. r22.07 is OK.
Are you using the Triton container or did you build it yourself?
Triton container
To Reproduce
Put ~30 ONNX Runtime models in Triton, enable memory_arena_shrinkage, and keep running them until one model reports SafeIntOnOverflow or Failed to allocate memory. After that, the model never succeeds again unless you restart Triton; see the sketches below.
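For context, arena shrinkage in the ONNX Runtime backend is set per model in config.pbtxt. A minimal sketch, assuming the memory.enable_memory_arena_shrinkage parameter documented in the onnxruntime_backend README and a model placed on GPU 0:

    parameters {
      key: "memory.enable_memory_arena_shrinkage"
      value: { string_value: "gpu:0" }
    }

And a minimal client-side loop to drive the models until one starts failing. This is a sketch, not the exact reproduction script: the model names, input name, shape, and dtype are placeholders to adjust to your repository; it uses the standard tritonclient HTTP API.

    # Reproduction sketch: hammer many ONNX models until one starts failing.
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")
    models = [f"onnx_model_{i}" for i in range(30)]  # placeholder names

    while True:
        for name in models:
            # Placeholder input: adjust name/shape/dtype to the model's config
            data = np.random.rand(1, 3, 640, 640).astype(np.float32)
            inp = httpclient.InferInput("input", list(data.shape), "FP32")
            inp.set_data_from_numpy(data)
            try:
                client.infer(name, inputs=[inp])
            except Exception as exc:
                # Once SafeIntOnOverflow / Failed to allocate memory appears
                # here, the same model keeps failing until Triton restarts.
                print(name, exc)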
Expected behavior
SafeIntOnOverflow should not happen; I never saw this error in r22.07.
Failed to allocate memory should only happen when GRAM is genuinely full. And once it does happen, the error should not recur after other models finish inferencing and arena shrinkage returns GRAM to the system.
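For what it's worth, the mechanism the backend relies on is a per-run config entry in plain ONNX Runtime. A sketch of the equivalent outside Triton, assuming a CUDA session, a placeholder model.onnx, and the memory.enable_memory_arena_shrinkage run option from the ONNX Runtime docs:

    import numpy as np
    import onnxruntime as ort

    # Placeholder model and input name; adjust to your model.
    sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
    ro = ort.RunOptions()
    # Ask ORT to shrink the BFC arena back to the system after this run.
    ro.add_run_config_entry("memory.enable_memory_arena_shrinkage", "gpu:0")
    out = sess.run(None, {"input": np.zeros((1, 3, 640, 640), np.float32)}, run_options=ro)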
Thanks for reporting. Is there a set of public models that can be used to reproduce? I am going to transfer this to the ONNX Runtime backend for tracking and support.

nnshah1 transferred this issue from triton-inference-server/server on Jul 8, 2023.
I believe this is not tied to specific models but is a general problem. Sadly, I don't have public models to share.
We are also running into this issue, and I can confirm that version 22.10 was also working fine. We started seeing specifically the Failed to allocate memory for requested buffer of size 13622061778317179392 type of errors when we upgraded from that version to 23.10.
@zeruniverse We are using CenterNet detection models. What kind of models are you using?