Convert tensorflow frozen graph to uff error: Invalid scale mode, nbWeights: 32 #104

Closed
qiuyang163 opened this issue Aug 30, 2019 · 2 comments

qiuyang163 commented Aug 30, 2019

Hi, everyone! Please help me.
I have a frozen graph of Inception-ResNet V2, and I want to convert it to a UFF file. After I finish the conversion, the UFF parser shows me the error below:

[TensorRT] ERROR: UffParser: Parser error: show_and_tell/main/image_text_sim_1/InceptionResnetV2/InceptionResnetV2/Conv2d_1a_3x3/BatchNorm/FusedBatchNorm: Invalid scale mode, nbWeights: 32
[TensorRT] ERROR: Network must have at least one output

Here is my code:

import tensorrt as trt

OUTPUT_NODE = 'show_and_tell/main/image_text_sim_1/dot/MatMul'
BATCH = 8
HEIGHT = 299
WIDTH = 299
CHANNEL = 3
TEXT_SIZE = (BATCH, 20)
IMAGE_SIZE = (BATCH, HEIGHT, WIDTH, CHANNEL)

MAX_WORKSPACE = 2 << 30


def get_engine():
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    builder.max_batch_size = BATCH
    builder.max_workspace_size = MAX_WORKSPACE
    input_order = trt.tensorrt.UffInputOrder.NCHW
    with builder:
        network = builder.create_network()
        parser = trt.UffParser()
        # INPUT_NODE_IMAGE, INPUT_NODE_TEXT and uff_name are defined elsewhere in my script
        parser.register_input(INPUT_NODE_IMAGE, IMAGE_SIZE, input_order)
        parser.register_input(INPUT_NODE_TEXT, TEXT_SIZE)
        parser.register_output(OUTPUT_NODE)
        parser.parse(uff_name, network)
        engine = builder.build_cuda_engine(network)
        return engine
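
A minimal sketch of how the parse result could be checked explicitly (the helper name and the verbose logger level here are just illustrative choices, not from the code above), since the second error looks like a consequence of the first: when the parse fails, the registered output never gets attached and the builder then complains that the network has no outputs.

import tensorrt as trt

def parse_uff_or_fail(uff_path, input_name, input_shape, output_name):
    # Sketch of a helper that fails loudly when the UFF parse goes wrong,
    # instead of letting build_cuda_engine report "no outputs" later.
    logger = trt.Logger(trt.Logger.VERBOSE)  # verbose log surfaces parser details
    builder = trt.Builder(logger)
    network = builder.create_network()
    parser = trt.UffParser()
    parser.register_input(input_name, input_shape, trt.UffInputOrder.NCHW)
    parser.register_output(output_name)
    if not parser.parse(uff_path, network):
        raise RuntimeError('UFF parse failed, see parser errors in the log above')
    print('network outputs after parse:', network.num_outputs)
    return builder, network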

And here is my graph input definition and the FusedBatchNorm node definition:

import/show_and_tell/main/image_text_sim_1/InceptionResnetV2/InceptionResnetV2/Conv2d_1a_3x3/convolution
Operation: Conv2D
Attributes (6):
  T: {"type":"DT_FLOAT"}
  data_format: {"s":"NHWC"}
  dilations: {"list":{"i":[1,1,1,1]}}
  padding: {"s":"VALID"}
  strides: {"list":{"i":[1,2,2,1]}}
  use_cudnn_on_gpu: {"b":true}
Inputs (2):
  import/show_and_tell/main/image_text_sim_1/map/TensorArrayStack/TensorArrayGatherV3  ?×299×299×3
  import/InceptionResnetV2/Conv2d_1a_3x3/weights/read  3×3×3×32
Outputs (1):
  import/show_and_tell/main/image_text_sim_1/InceptionResnetV2/InceptionResnetV2/Conv2d_1a_3x3/BatchNorm/FusedBatchNorm  ?×149×149×32

import/show_and_tell/main/image_text_sim_1/InceptionResnetV2/InceptionResnetV2/Conv2d_1a_3x3/BatchNorm/FusedBatchNorm
Operation: FusedBatchNorm
Attributes (4):
  T: {"type":"DT_FLOAT"}
  data_format: {"s":"NHWC"}
  epsilon: {"f":0.0010000000475}
  is_training: {"b":false}
Inputs (5):
  import/show_and_tell/main/image_text_sim_1/InceptionResnetV2/InceptionResnetV2/Conv2d_1a_3x3/convolution  ?×149×149×32
  import/InceptionResnetV2/Conv2d_1a_3x3/BatchNorm/beta/read  32
  import/InceptionResnetV2/Conv2d_1a_3x3/BatchNorm/moving_mean/read  32
  import/InceptionResnetV2/Conv2d_1a_3x3/BatchNorm/moving_variance/read  32
  import/show_and_tell/main/image_text_sim_1/InceptionResnetV2/InceptionResnetV2/Conv2d_1a_3x3/BatchNorm/Const  32
Outputs (1):
  import/show_and_tell/main/image_text_sim_1/InceptionResnetV2/InceptionResnetV2/Conv2d_1a_3x3/Relu  ?×149×149×32
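
For reference, a quick way to double-check from the frozen GraphDef itself how that BatchNorm node looks (the frozen_graph.pb path below is a placeholder for the actual file), using TF 1.x-style APIs:

import tensorflow as tf

# Placeholder path; substitute the real frozen graph file.
FROZEN_PB = 'frozen_graph.pb'

graph_def = tf.GraphDef()
with tf.gfile.GFile(FROZEN_PB, 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print op type and attribute names for the BatchNorm node the parser rejects,
# to confirm the data_format (NHWC) and is_training attributes shown above.
for node in graph_def.node:
    if 'Conv2d_1a_3x3/BatchNorm' in node.name:
        print(node.name, node.op, list(node.attr.keys()))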
rmccorm4 (Collaborator) commented:

Hi @qiuyang163,

I notice the input shape for FusedBatchNorm is ?×149×149×32, where I'm guessing the ? means a dynamic batch/dimension.

Dynamic shapes were added in the TensorRT 6.x release, and given the date I suspect you ran this on TensorRT 5.x. You can try upgrading to 6.0 and running the conversion again to see if that solves your problem.
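
If you're not sure which version is installed, a quick sanity check from Python (nothing model-specific here, just printing the package version):

import tensorrt as trt

# Dynamic shape support starts with the 6.x release.
print(trt.__version__)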

Otherwise, maybe one of these posts can help:

rmccorm4 (Collaborator) commented:

Closing since there's been no response. Feel free to open a new issue or ask to re-open this one.
