
About BatchSize failure of TensorRT-8.0.1.6 when running trtexec tool on NVIDIA Jetson Xavier NX #3549

@ThomasCai

Description


Error info (when I run /usr/src/tensorrt/bin/trtexec --loadEngine=XXX.engine --batch=40 --warmUp=200 --iterations=2000):

Error[3]: [executionContext.cpp::enqueue::276] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueue::276, condition: batchSize > 0 && batchSize <= mEngine.getMaxBatchSize(). Note: Batch size was: 40, but engine max batch size was: 1
)
[12/12/2023-21:35:03] [E] Error occurred during inference

Here is the script I used to build the engine, following https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-801/quick-start-guide/index.html:

/usr/src/tensorrt/bin/trtexec --onnx=${1} --saveEngine=${2}  --buildOnly --workspace=10240 --explicitBatch --minShapes=input:1x3x224x224 --optShapes=input:40x3x224x224 --maxShapes=input:40x3x224x224 --fp16

Note: my Jetson is pinned to this software version, so I need a solution that works with TensorRT 8.0.1.6. I look forward to your reply, thank you.
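For reference: since the engine was built with --explicitBatch and dynamic shape profiles, I suspect the run command may need the --shapes option to pick the runtime input shape, rather than the implicit-batch --batch flag. A sketch of what I mean, assuming the input tensor is named input as in the build script above:

/usr/src/tensorrt/bin/trtexec --loadEngine=XXX.engine --shapes=input:40x3x224x224 --warmUp=200 --iterations=2000

Is this the intended way to benchmark an explicit-batch engine at batch size 40 on this TensorRT version?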

Environment

TensorRT Version: 8.0.1.6

NVIDIA GPU: NVIDIA Jetson Xavier NX

CUDA Version: 10.2

CUDNN Version: 8.2.1.32

Operating System: Ubuntu 18.04

Jetpack: 4.6-b199

Metadata

Labels

triaged: Issue has been triaged by maintainers
