
Error Code 2: Internal Error (ForeignNode does not support data-dependent shape for now.) #3372

Closed
Egorundel opened this issue Oct 9, 2023 · 3 comments
Assignees
Labels
triaged Issue has been triaged by maintainers

Comments


Egorundel commented Oct 9, 2023

Description

Hello!
My task is to train RetinaNet (backbone: ResNet50, Keras).

RetinaNet:
https://github.com/fizyr/keras-retinanet

After training, I got the weights as an .h5 file. Next, I need to convert them into an ONNX model with a dynamic batch size.

I took the code for converting the .h5 weights into an ONNX model from here: keras2onnx

Then I want to convert the ONNX model to a TensorRT engine using trtexec. I have the ONNX model, but the conversion to a TensorRT engine fails.

When I run the command:
trtexec --onnx=retinanet-bbox.onnx --saveEngine=retinaNet.trt --minShapes=images:1x512x512x3 --optShapes=images:6x512x512x3 --maxShapes=images:12x512x512x3 --useCudaGraph --memPoolSize=workspace:3000 --noTF32 --fp16

I get an error:
Error[2]: [myelinBuilderUtils.cpp::getMyelinSupportType::1270] Error Code 2: Internal Error (ForeignNode does not support data-dependent shape for now.)
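For context, this error usually originates from graph ops whose output shape depends on runtime tensor values rather than only on input shapes. In a RetinaNet export the typical culprit is the NMS post-processing (ONNX `NonMaxSuppression`; ops like `NonZero` and `Unique` behave the same way). A minimal sketch of how one might flag such ops (the op list below is hypothetical and not read from the actual model):

```python
# Ops whose output shapes depend on runtime tensor *values*, not just
# input shapes; TensorRT 8.6's Myelin backend rejects these inside a
# ForeignNode, which is what the error above reports.
DATA_DEPENDENT_OPS = {"NonMaxSuppression", "NonZero", "Unique"}

def find_data_dependent_ops(op_types):
    """Return the sorted subset of op_types with data-dependent output shapes."""
    return sorted(set(op_types) & DATA_DEPENDENT_OPS)

# Hypothetical node list, loosely modeled on RetinaNet post-processing;
# with the real model you would collect node.op_type from the ONNX graph.
example_ops = ["Conv", "Relu", "Concat", "NonMaxSuppression", "Gather"]
print(find_data_dependent_ops(example_ops))  # ['NonMaxSuppression']
```

A common workaround at the time was to cut the graph before the NMS node and run box decoding/NMS outside TensorRT, or to swap the ONNX NMS for a TensorRT NMS plugin (e.g. with onnx-graphsurgeon); which option fits depends on the export.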

My ONNX model itself is valid; I checked it with this script: check_onnx_model.py

Maybe I'm exporting the ONNX model incorrectly...

I can't understand why this is happening. Could you help me solve this problem?

P.S. An Internet search did not turn up answers to this problem. My ONNX model is linked below under Relevant Files.

Environment

TensorRT Version: 8.6
GPU Type: RTX3060
Nvidia Driver Version: nvidia-driver-535 (proprietary)
CUDA Version: 11.1
CUDNN Version: 8.0.4
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.4.0

Relevant Files

model.h5:
https://drive.google.com/file/d/1LnM5j7HTzIKOXkHM546vxQPj2rJQdQZ2/view?usp=sharing

retinanet-bbox.onnx:
https://drive.google.com/file/d/1Tg7xYzWzyfRvIu3DhMNz5D2yrGviZTeM/view?usp=sharing

@Egorundel
Author

@ttyio Hello! Can you help me, please?

@zerollzeng
Collaborator

Checking this internally

@zerollzeng zerollzeng self-assigned this Oct 10, 2023
@zerollzeng zerollzeng added the triaged Issue has been triaged by maintainers label Oct 10, 2023
@zerollzeng
Collaborator

We still don't support this, but it's in our plans and there is ongoing effort on it.


2 participants