error while converting to onnx #14

Closed · sawk1 opened this issue Jan 12, 2021 · 3 comments

sawk1 commented Jan 12, 2021

Hi, I've trained yolov4-tiny-crowdhuman-416x416 and am now trying to convert it to ONNX on a Jetson Nano, but I get an error:

Parsing DarkNet cfg file...
Building ONNX graph...
graph yolov4-tiny-crowdhuman-416x416 (
  %000_net[FLOAT, 1x3x416x416]
) optional inputs with matching initializers (
  %001_convolutional_bn_scale[FLOAT, 32]
  %001_convolutional_bn_bias[FLOAT, 32]
  %001_convolutional_bn_mean[FLOAT, 32]
  %001_convolutional_bn_var[FLOAT, 32]
  %001_convolutional_conv_weights[FLOAT, 32x3x3x3]
  %002_convolutional_bn_scale[FLOAT, 64]
  %002_convolutional_bn_bias[FLOAT, 64]
  %002_convolutional_bn_mean[FLOAT, 64]
  %002_convolutional_bn_var[FLOAT, 64]
  %002_convolutional_conv_weights[FLOAT, 64x32x3x3]
  %003_convolutional_bn_scale[FLOAT, 64]
  %003_convolutional_bn_bias[FLOAT, 64]
  %003_convolutional_bn_mean[FLOAT, 64]
  %003_convolutional_bn_var[FLOAT, 64]
  %003_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %005_convolutional_bn_scale[FLOAT, 32]
  %005_convolutional_bn_bias[FLOAT, 32]
  %005_convolutional_bn_mean[FLOAT, 32]
  %005_convolutional_bn_var[FLOAT, 32]
  %005_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %006_convolutional_bn_scale[FLOAT, 32]
  %006_convolutional_bn_bias[FLOAT, 32]
  %006_convolutional_bn_mean[FLOAT, 32]
  %006_convolutional_bn_var[FLOAT, 32]
  %006_convolutional_conv_weights[FLOAT, 32x32x3x3]
  %008_convolutional_bn_scale[FLOAT, 64]
  %008_convolutional_bn_bias[FLOAT, 64]
  %008_convolutional_bn_mean[FLOAT, 64]
  %008_convolutional_bn_var[FLOAT, 64]
  %008_convolutional_conv_weights[FLOAT, 64x64x1x1]
  %011_convolutional_bn_scale[FLOAT, 128]
  %011_convolutional_bn_bias[FLOAT, 128]
  %011_convolutional_bn_mean[FLOAT, 128]
  %011_convolutional_bn_var[FLOAT, 128]
  %011_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %013_convolutional_bn_scale[FLOAT, 64]
  %013_convolutional_bn_bias[FLOAT, 64]
  %013_convolutional_bn_mean[FLOAT, 64]
  %013_convolutional_bn_var[FLOAT, 64]
  %013_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %014_convolutional_bn_scale[FLOAT, 64]
  %014_convolutional_bn_bias[FLOAT, 64]
  %014_convolutional_bn_mean[FLOAT, 64]
  %014_convolutional_bn_var[FLOAT, 64]
  %014_convolutional_conv_weights[FLOAT, 64x64x3x3]
  %016_convolutional_bn_scale[FLOAT, 128]
  %016_convolutional_bn_bias[FLOAT, 128]
  %016_convolutional_bn_mean[FLOAT, 128]
  %016_convolutional_bn_var[FLOAT, 128]
  %016_convolutional_conv_weights[FLOAT, 128x128x1x1]
  %019_convolutional_bn_scale[FLOAT, 256]
  %019_convolutional_bn_bias[FLOAT, 256]
  %019_convolutional_bn_mean[FLOAT, 256]
  %019_convolutional_bn_var[FLOAT, 256]
  %019_convolutional_conv_weights[FLOAT, 256x256x3x3]
  %021_convolutional_bn_scale[FLOAT, 128]
  %021_convolutional_bn_bias[FLOAT, 128]
  %021_convolutional_bn_mean[FLOAT, 128]
  %021_convolutional_bn_var[FLOAT, 128]
  %021_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %022_convolutional_bn_scale[FLOAT, 128]
  %022_convolutional_bn_bias[FLOAT, 128]
  %022_convolutional_bn_mean[FLOAT, 128]
  %022_convolutional_bn_var[FLOAT, 128]
  %022_convolutional_conv_weights[FLOAT, 128x128x3x3]
  %024_convolutional_bn_scale[FLOAT, 256]
  %024_convolutional_bn_bias[FLOAT, 256]
  %024_convolutional_bn_mean[FLOAT, 256]
  %024_convolutional_bn_var[FLOAT, 256]
  %024_convolutional_conv_weights[FLOAT, 256x256x1x1]
  %027_convolutional_bn_scale[FLOAT, 512]
  %027_convolutional_bn_bias[FLOAT, 512]
  %027_convolutional_bn_mean[FLOAT, 512]
  %027_convolutional_bn_var[FLOAT, 512]
  %027_convolutional_conv_weights[FLOAT, 512x512x3x3]
  %028_convolutional_bn_scale[FLOAT, 256]
  %028_convolutional_bn_bias[FLOAT, 256]
  %028_convolutional_bn_mean[FLOAT, 256]
  %028_convolutional_bn_var[FLOAT, 256]
  %028_convolutional_conv_weights[FLOAT, 256x512x1x1]
  %029_convolutional_bn_scale[FLOAT, 512]
  %029_convolutional_bn_bias[FLOAT, 512]
  %029_convolutional_bn_mean[FLOAT, 512]
  %029_convolutional_bn_var[FLOAT, 512]
  %029_convolutional_conv_weights[FLOAT, 512x256x3x3]
  %030_convolutional_conv_bias[FLOAT, 21]
  %030_convolutional_conv_weights[FLOAT, 21x512x1x1]
  %033_convolutional_bn_scale[FLOAT, 128]
  %033_convolutional_bn_bias[FLOAT, 128]
  %033_convolutional_bn_mean[FLOAT, 128]
  %033_convolutional_bn_var[FLOAT, 128]
  %033_convolutional_conv_weights[FLOAT, 128x256x1x1]
  %034_upsample_scale[FLOAT, 4]
  %036_convolutional_bn_scale[FLOAT, 256]
  %036_convolutional_bn_bias[FLOAT, 256]
  %036_convolutional_bn_mean[FLOAT, 256]
  %036_convolutional_bn_var[FLOAT, 256]
  %036_convolutional_conv_weights[FLOAT, 256x384x3x3]
  %037_convolutional_conv_bias[FLOAT, 21]
  %037_convolutional_conv_weights[FLOAT, 21x256x1x1]
) {
  %001_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%000_net, %001_convolutional_conv_weights)
  %001_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%001_convolutional, %001_convolutional_bn_scale, %001_convolutional_bn_bias, %001_convolutional_bn_mean, %001_convolutional_bn_var)
  %001_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%001_convolutional_bn)
  %002_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [2, 2]](%001_convolutional_lrelu, %002_convolutional_conv_weights)
  %002_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%002_convolutional, %002_convolutional_bn_scale, %002_convolutional_bn_bias, %002_convolutional_bn_mean, %002_convolutional_bn_var)
  %002_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%002_convolutional_bn)
  %003_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%002_convolutional_lrelu, %003_convolutional_conv_weights)
  %003_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%003_convolutional, %003_convolutional_bn_scale, %003_convolutional_bn_bias, %003_convolutional_bn_mean, %003_convolutional_bn_var)
  %003_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%003_convolutional_bn)
  %004_route_dummy0, %004_route = Split[axis = 1, split = [32, 32]](%003_convolutional_lrelu)
  %005_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%004_route, %005_convolutional_conv_weights)
  %005_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%005_convolutional, %005_convolutional_bn_scale, %005_convolutional_bn_bias, %005_convolutional_bn_mean, %005_convolutional_bn_var)
  %005_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%005_convolutional_bn)
  %006_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%005_convolutional_lrelu, %006_convolutional_conv_weights)
  %006_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%006_convolutional, %006_convolutional_bn_scale, %006_convolutional_bn_bias, %006_convolutional_bn_mean, %006_convolutional_bn_var)
  %006_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%006_convolutional_bn)
  %007_route = Concat[axis = 1](%006_convolutional_lrelu, %005_convolutional_lrelu)
  %008_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%007_route, %008_convolutional_conv_weights)
  %008_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%008_convolutional, %008_convolutional_bn_scale, %008_convolutional_bn_bias, %008_convolutional_bn_mean, %008_convolutional_bn_var)
  %008_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%008_convolutional_bn)
  %009_route = Concat[axis = 1](%003_convolutional_lrelu, %008_convolutional_lrelu)
  %010_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%009_route)
  %011_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%010_maxpool, %011_convolutional_conv_weights)
  %011_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%011_convolutional, %011_convolutional_bn_scale, %011_convolutional_bn_bias, %011_convolutional_bn_mean, %011_convolutional_bn_var)
  %011_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%011_convolutional_bn)
  %012_route_dummy0, %012_route = Split[axis = 1, split = [64, 64]](%011_convolutional_lrelu)
  %013_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%012_route, %013_convolutional_conv_weights)
  %013_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%013_convolutional, %013_convolutional_bn_scale, %013_convolutional_bn_bias, %013_convolutional_bn_mean, %013_convolutional_bn_var)
  %013_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%013_convolutional_bn)
  %014_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%013_convolutional_lrelu, %014_convolutional_conv_weights)
  %014_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%014_convolutional, %014_convolutional_bn_scale, %014_convolutional_bn_bias, %014_convolutional_bn_mean, %014_convolutional_bn_var)
  %014_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%014_convolutional_bn)
  %015_route = Concat[axis = 1](%014_convolutional_lrelu, %013_convolutional_lrelu)
  %016_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%015_route, %016_convolutional_conv_weights)
  %016_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%016_convolutional, %016_convolutional_bn_scale, %016_convolutional_bn_bias, %016_convolutional_bn_mean, %016_convolutional_bn_var)
  %016_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%016_convolutional_bn)
  %017_route = Concat[axis = 1](%011_convolutional_lrelu, %016_convolutional_lrelu)
  %018_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%017_route)
  %019_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%018_maxpool, %019_convolutional_conv_weights)
  %019_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%019_convolutional, %019_convolutional_bn_scale, %019_convolutional_bn_bias, %019_convolutional_bn_mean, %019_convolutional_bn_var)
  %019_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%019_convolutional_bn)
  %020_route_dummy0, %020_route = Split[axis = 1, split = [128, 128]](%019_convolutional_lrelu)
  %021_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%020_route, %021_convolutional_conv_weights)
  %021_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%021_convolutional, %021_convolutional_bn_scale, %021_convolutional_bn_bias, %021_convolutional_bn_mean, %021_convolutional_bn_var)
  %021_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%021_convolutional_bn)
  %022_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%021_convolutional_lrelu, %022_convolutional_conv_weights)
  %022_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%022_convolutional, %022_convolutional_bn_scale, %022_convolutional_bn_bias, %022_convolutional_bn_mean, %022_convolutional_bn_var)
  %022_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%022_convolutional_bn)
  %023_route = Concat[axis = 1](%022_convolutional_lrelu, %021_convolutional_lrelu)
  %024_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%023_route, %024_convolutional_conv_weights)
  %024_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%024_convolutional, %024_convolutional_bn_scale, %024_convolutional_bn_bias, %024_convolutional_bn_mean, %024_convolutional_bn_var)
  %024_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%024_convolutional_bn)
  %025_route = Concat[axis = 1](%019_convolutional_lrelu, %024_convolutional_lrelu)
  %026_maxpool = MaxPool[auto_pad = 'SAME_UPPER', kernel_shape = [2, 2], strides = [2, 2]](%025_route)
  %027_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%026_maxpool, %027_convolutional_conv_weights)
  %027_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%027_convolutional, %027_convolutional_bn_scale, %027_convolutional_bn_bias, %027_convolutional_bn_mean, %027_convolutional_bn_var)
  %027_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%027_convolutional_bn)
  %028_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%027_convolutional_lrelu, %028_convolutional_conv_weights)
  %028_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%028_convolutional, %028_convolutional_bn_scale, %028_convolutional_bn_bias, %028_convolutional_bn_mean, %028_convolutional_bn_var)
  %028_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%028_convolutional_bn)
  %029_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%028_convolutional_lrelu, %029_convolutional_conv_weights)
  %029_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%029_convolutional, %029_convolutional_bn_scale, %029_convolutional_bn_bias, %029_convolutional_bn_mean, %029_convolutional_bn_var)
  %029_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%029_convolutional_bn)
  %030_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%029_convolutional_lrelu, %030_convolutional_conv_weights, %030_convolutional_conv_bias)
  %033_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%028_convolutional_lrelu, %033_convolutional_conv_weights)
  %033_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%033_convolutional, %033_convolutional_bn_scale, %033_convolutional_bn_bias, %033_convolutional_bn_mean, %033_convolutional_bn_var)
  %033_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%033_convolutional_bn)
  %034_upsample = Upsample[mode = 'nearest'](%033_convolutional_lrelu, %034_upsample_scale)
  %035_route = Concat[axis = 1](%034_upsample, %024_convolutional_lrelu)
  %036_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%035_route, %036_convolutional_conv_weights)
  %036_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%036_convolutional, %036_convolutional_bn_scale, %036_convolutional_bn_bias, %036_convolutional_bn_mean, %036_convolutional_bn_var)
  %036_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%036_convolutional_bn)
  %037_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%036_convolutional_lrelu, %037_convolutional_conv_weights, %037_convolutional_conv_bias)
  return %030_convolutional, %037_convolutional
}
Checking ONNX model...
Traceback (most recent call last):
  File "yolo_to_onnx.py", line 955, in <module>
    main()
  File "yolo_to_onnx.py", line 945, in main
    onnx.checker.check_model(yolo_model_def)
  File "/home/fm/.local/lib/python3.6/site-packages/onnx/checker.py", line 102, in check_model
    C.check_model(protobuf_string)
onnx.onnx_cpp2py_export.checker.ValidationError: Unrecognized attribute: split for operator Split

==> Context: Bad node spec: input: "003_convolutional_lrelu" output: "004_route_dummy0" output: "004_route" name: "004_route" op_type: "Split" attribute { name: "axis" i: 1 type: INT } attribute { name: "split" ints: 32 ints: 32 type: INTS }

The only thing I've changed is subdivisions=32 in darknet/cfg/yolov4-tiny-crowdhuman-416x416.cfg.


sawk1 commented Jan 12, 2021

Or maybe you could send me the yolov4-tiny-crowdhuman-416x416.trt file?

jkjung-avt (Owner) commented:

> Or maybe you could send me the yolov4-tiny-crowdhuman-416x416.trt file?

Nope. You need to build your own TensorRT engine on your target platform.

> onnx.onnx_cpp2py_export.checker.ValidationError: Unrecognized attribute: split for operator Split

Are you using onnx==1.4.1?
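
For reference, the likely cause: in ONNX opset 13 and later, Split's per-output sizes moved from the `split` attribute to an optional second input, and a newer onnx package checks the model against its own newer default opset, so the old attribute form is rejected. A minimal sketch of the two forms, reusing the node names from the error above (the `split_sizes` initializer name is illustrative):

```python
import onnx
from onnx import helper, TensorProto

# Opset <= 11: per-output sizes are given as the "split" attribute.
# This is the form in the failing node above.
node_old = helper.make_node(
    "Split",
    inputs=["003_convolutional_lrelu"],
    outputs=["004_route_dummy0", "004_route"],
    name="004_route",
    axis=1,
    split=[32, 32],
)

# Opset >= 13: the sizes become an optional second input tensor,
# so the attribute form no longer passes the checker.
split_sizes = helper.make_tensor("split_sizes", TensorProto.INT64, [2], [32, 32])
node_new = helper.make_node(
    "Split",
    inputs=["003_convolutional_lrelu", "split_sizes"],
    outputs=["004_route_dummy0", "004_route"],
    name="004_route",
    axis=1,
)
```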


sawk1 commented Jan 12, 2021

Oh, you were right: with onnx==1.4.1 it converted perfectly fine, thanks!
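
For anyone landing here later, a quick sanity check before running the converter (a sketch; the pip3 command assumes the same per-user Python 3.6 setup as in the traceback above):

```python
# Fail fast if the wrong onnx version is installed.
import onnx

assert onnx.__version__ == "1.4.1", (
    "expected onnx 1.4.1, got %s; downgrade with "
    "'pip3 install --user onnx==1.4.1'" % onnx.__version__
)
```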

sawk1 closed this as completed Jan 12, 2021