
Onnxruntime backend error when workload is high since Triton uses CUDA 12 #203

Open
zeruniverse opened this issue Jul 8, 2023 · 4 comments
Labels
bug Something isn't working

Comments

@zeruniverse

Description

When the workload is high, some models served by the Triton ONNX Runtime backend start to fail, and once a model fails it never succeeds again. Failures look like:

"[StatusCode.INTERNAL] onnx runtime error 6: Non-zero status code returned while running Conv node. Name:'/model.8/cv1/conv/Conv' Status Message: /workspace/onnxruntime/onnxruntime/core/common/safeint.h:17 static void SafeIntExceptionHandler<onnxruntime::OnnxRuntimeException>::SafeIntOnOverflow() Integer overflow

(see also microsoft/onnxruntime#12288; I'm not the only one facing this problem)

and

"[StatusCode.INTERNAL] onnx runtime error 6: Non-zero status code returned while running Conv node. Name:'/model.1/model/model.0/model.0.1/block/block.3/block.3.0/Conv' Status Message: /workspace/onnxruntime/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 4547406516732812544

After the failed-to-allocate-memory error occurred, I checked memory usage with nvidia-smi: the usage peak does not reach 100% (and the requested buffer sizes above are on the order of exabytes, far beyond any real GPU), yet all subsequent inferences fail.

The following Prometheus dashboard screenshots show that GPU memory usage is actually low at the moment the model reports "failed to allocate memory":

[Screenshots: Prometheus dashboard, GPU memory usage over time]

Triton Information
What version of Triton are you using?

Tried r23.03, r23.05, and r23.06; all have the same problem. r22.07 is OK.

Are you using the Triton container or did you build it yourself?
Triton container

To Reproduce
Steps to reproduce the behavior.

Put ~30 ONNX Runtime models in Triton, enable memory arena shrinkage, and keep running them until one model reports SafeIntOnOverflow or Failed to allocate memory. After that, the model never succeeds again unless you restart Triton.
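
A rough load sketch of what I'm doing (model names, input names, and shapes are placeholders for my private models; the backend parameter shown in the comment is the arena-shrinkage option as I understand it):

```python
# Rough reproduction sketch; model/input names and shapes are placeholders.
# Each model's config.pbtxt enables arena shrinkage via the ONNX Runtime backend
# parameter, along the lines of:
#   parameters { key: "memory.enable_memory_arena_shrinkage" value: { string_value: "gpu:0" } }
import numpy as np
import tritonclient.http as httpclient

MODELS = [f"model_{i}" for i in range(30)]  # ~30 ONNX Runtime models in the repository
client = httpclient.InferenceServerClient(url="localhost:8000")

def infer_once(model_name: str):
    # Assumes a single FP32 image-like input named "input"; the shape is illustrative.
    data = np.random.rand(1, 3, 640, 640).astype(np.float32)
    inp = httpclient.InferInput("input", list(data.shape), "FP32")
    inp.set_data_from_numpy(data)
    return client.infer(model_name, inputs=[inp])

# Run several copies of this loop in parallel to push the workload up. Eventually one
# model starts failing with SafeIntOnOverflow / "Failed to allocate memory" and never
# recovers until Triton is restarted.
while True:
    for name in MODELS:
        try:
            infer_once(name)
        except Exception as exc:
            print(f"{name}: {exc}")
```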

Expected behavior
A clear and concise description of what you expected to happen.

SafeIntOnOverflow should not happen; I never saw this error in r22.07.

Failed to allocate memory should only happen if GPU memory is actually full. And once it happens, it should not keep recurring after other models finish inferencing and arena shrinkage returns GPU memory to the system.
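
For reference, my understanding is that the Triton backend parameter maps to ONNX Runtime's per-run arena-shrinkage config entry. Driving ONNX Runtime directly in Python, the mechanism I'm relying on looks roughly like this (model path and input name are placeholders):

```python
# Standalone sketch of the arena-shrinkage mechanism in ONNX Runtime itself
# (outside Triton); the model path and input name are placeholders.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

run_opts = ort.RunOptions()
# Ask ONNX Runtime to shrink the GPU BFC arena after this run, returning freed
# memory to the system instead of keeping it cached in the arena.
run_opts.add_run_config_entry("memory.enable_memory_arena_shrinkage", "gpu:0")

dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = sess.run(None, {"input": dummy}, run_options=run_opts)
```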

@nnshah1 nnshah1 added the bug Something isn't working label Jul 8, 2023
@nnshah1

nnshah1 commented Jul 8, 2023

Thanks for reporting. Is there a set of public models that can be used to reproduce? I am going to transfer this to the ONNX Runtime backend repo for tracking and support.

@nnshah1 nnshah1 transferred this issue from triton-inference-server/server Jul 8, 2023
@zeruniverse (Author)

> Thanks for reporting. Is there a set of public models that can be used to reproduce? I am going to transfer this to the ONNX Runtime backend repo for tracking and support.

I believe it's not tied to specific models but is a general problem. Sadly, I don't have public models to share.

@OvervCW

OvervCW commented Dec 7, 2023

We are also running into this issue, and I can confirm that version 22.10 was working fine. We started seeing errors of the form Failed to allocate memory for requested buffer of size 13622061778317179392 when we upgraded from that version to 23.10.

@zeruniverse We are using CenterNet detection models. What kind of models are you using?

@makavity

makavity commented Dec 11, 2023

Also hitting this problem after upgrading from 22.12 to 23.04 and later:
Failed to allocate memory for requested buffer of size 4494297792244863488
