Abnormal fp16 inference results of TensorRT v10.0 when running engine converted from onnx of NAFNet #3897

@HSDai

Description

I converted the ONNX model to a TensorRT engine in FP16; the inference results are abnormal compared with the FP32 engine.

fp16: [output image: NikonD40_0096_T_A3]

fp32: [output image: NikonD40_0096_T_A3]
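To make the visual discrepancy concrete, the FP16 and FP32 engine outputs can be compared numerically (max absolute error and PSNR) rather than by eye. A minimal sketch, assuming both outputs have been loaded as NumPy arrays; the array names and shapes here are illustrative, not from the original report:

```python
import numpy as np

def compare_outputs(fp32_out: np.ndarray, fp16_out: np.ndarray, peak: float = 1.0):
    """Return (max absolute error, PSNR in dB) between two engine outputs."""
    diff = np.abs(fp32_out.astype(np.float64) - fp16_out.astype(np.float64))
    mse = float(np.mean(diff ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return float(diff.max()), psnr

# Illustrative use: random data standing in for real engine outputs
ref = np.random.rand(1, 1024, 1024, 3).astype(np.float32)
approx = (ref + np.random.normal(0, 1e-3, ref.shape)).astype(np.float32)
max_err, psnr = compare_outputs(ref, approx)
print(f"max abs error: {max_err:.5f}, PSNR: {psnr:.1f} dB")
```

A healthy FP16 conversion of an image-restoration network typically stays within a fraction of a dB of the FP32 PSNR; a large drop points at a precision problem in specific layers rather than ordinary rounding.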

Environment

[environment details provided as a screenshot]

TensorRT Version: 10.0

NVIDIA GPU:

NVIDIA Driver Version:

CUDA Version:

CUDNN Version:

Operating System:

Python Version (if applicable):

Tensorflow Version (if applicable):

PyTorch Version (if applicable):

Baremetal or Container (if so, version):

Relevant Files

Model link:
onnx.zip

trt10_fp16.zip
trt10_fp32.zip

Steps To Reproduce

./trtexec --onnx=color_consistency_nafnet.onnx --saveEngine=nafnetcc75_t4_float16_v10.trtmodel --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --device=3 --minShapes=input:1x64x64x3 --optShapes=input:1x1024x1024x3 --maxShapes=input:1x1920x1920x3 --fp16
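One common cause of this kind of FP16 degradation (not confirmed for this model, but typical for NAFNet-style architectures with normalization layers) is intermediate activations overflowing FP16's dynamic range, whose largest finite value is 65504. A quick sketch of the failure mode using NumPy half precision:

```python
import numpy as np

# FP16's largest finite value is 65504; larger magnitudes overflow to inf.
print(np.finfo(np.float16).max)   # 65504.0

x = np.float32(70000.0)           # a plausible intermediate activation value
print(np.float16(x))              # inf -- overflows in half precision

# Squaring inside a variance computation overflows even sooner:
y = np.float16(300.0)
print(y * y)                      # inf, since 300^2 = 90000 > 65504
```

If overflow in specific layers is confirmed (e.g. by comparing per-layer outputs with Polygraphy), those layers can be pinned to FP32 via trtexec's `--layerPrecisions` together with `--precisionConstraints`, or via the builder API, while the rest of the network stays in FP16.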

Commands or scripts:

Have you tried the latest release?:

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):

Labels: triaged (Issue has been triaged by maintainers)