[Question] Is IOutputAllocator::reallocateOutput guaranteed to be called before context->enqueueV3 returns? #3875
Comments
Please refer to our API doc: https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_execution_context.html#aa174ba57c44df821625ce4d3317dd7aa
Yes.
The pointer is always valid until you free the memory, but the correct output is ready only after synchronization is done.
I think my question was more about the calling order of reallocateOutput relative to enqueueV3 — whether there is a guarantee that reallocateOutput is called before enqueueV3 returns.
Description
I cannot find any information regarding when IOutputAllocator::reallocateOutput is called with respect to context->enqueueV3. Is there any guarantee that this function is called before enqueueV3 returns, or should I explicitly synchronize the stream? In other words, in the following pseudo-code:
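The pseudo-code block itself did not survive in this copy of the issue; reconstructed from the surrounding sentences, the pattern in question is roughly as follows (the tensor names and variable names are illustrative, and getDeviceBuffer is the asker's own helper on their allocator, not part of the TensorRT interface):

```cpp
// Pseudo-code (reconstructed): allocator implements nvinfer1::IOutputAllocator.
context->setOutputAllocator("output", allocator);        // "output" is an illustrative tensor name
context->setInputTensorAddress("input", inputDevicePtr);
context->enqueueV3(stream);
// cudaStreamSynchronize(stream);  // <-- is this needed before the next line?
void* outputPtr = allocator->getDeviceBuffer();
```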
Should I explicitly synchronize the stream after enqueueV3 for the device buffer returned by allocator->getDeviceBuffer() to be valid? Or is allocator->reallocateOutput guaranteed to be called before enqueueV3 returns, in which case stream synchronization is unnecessary?
Environment
TensorRT Version:
NVIDIA GPU:
NVIDIA Driver Version:
CUDA Version:
CUDNN Version:
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?:
Can this model run on other frameworks? For example, run the ONNX model with ONNX Runtime (polygraphy run <model.onnx> --onnxrt):