Hi, thanks for your effort.
I encountered a problem when converting the default example in this repo. The procedure is quite simple:
```
$ python json2prototxt.py
0, op:null, name:data -> data
2, op:Convolution, name:conv1__conv -> conv1__conv
use shared weight -> data
5, op:BatchNorm, name:conv1__bn__bn -> conv1__bn__bn
6, op:Activation, name:conv1__bn__relu -> conv1__bn__relu
8, op:Convolution, name:conv2__conv -> conv2__conv
11, op:BatchNorm, name:conv2__bn__bn -> conv2__bn__bn
12, op:Activation, name:conv2__bn__relu -> conv2__bn__relu
14, op:Convolution, name:conv3_c1x1-a__conv -> conv3_c1x1-a__conv
17, op:BatchNorm, name:conv3_c1x1-a__bn__bn -> conv3_c1x1-a__bn__bn
18, op:Activation, name:conv3_c1x1-a__bn__relu -> conv3_c1x1-a__bn__relu
20, op:Convolution, name:conv3_c3x3-b__conv -> conv3_c3x3-b__conv
23, op:BatchNorm, name:conv3_c3x3-b__bn__bn -> conv3_c3x3-b__bn__bn
24, op:Activation, name:conv3_c3x3-b__bn__relu -> conv3_c3x3-b__bn__relu
26, op:Convolution, name:conv3_c1x1-c__conv -> conv3_c1x1-c__conv
29, op:BatchNorm, name:conv3_c1x1-c__bn__bn -> conv3_c1x1-c__bn__bn
30, op:ElementWiseSum, name:conv3_ele-sum -> conv3_ele-sum
31, op:Activation, name:conv3_sum-act__relu -> conv3_sum-act__relu
33, op:Convolution, name:conv4_c1x1-w(s/2)__conv -> conv4_c1x1-w(s/2)__conv
36, op:BatchNorm, name:conv4_c1x1-w(s/2)__bn__bn -> conv4_c1x1-w(s/2)__bn__bn
38, op:Convolution, name:conv4_c1x1-a__conv -> conv4_c1x1-a__conv
41, op:BatchNorm, name:conv4_c1x1-a__bn__bn -> conv4_c1x1-a__bn__bn
42, op:Activation, name:conv4_c1x1-a__bn__relu -> conv4_c1x1-a__bn__relu
44, op:Convolution, name:conv4_c3x3-b__conv -> conv4_c3x3-b__conv
47, op:BatchNorm, name:conv4_c3x3-b__bn__bn -> conv4_c3x3-b__bn__bn
48, op:Activation, name:conv4_c3x3-b__bn__relu -> conv4_c3x3-b__bn__relu
50, op:Convolution, name:conv4_c1x1-c__conv -> conv4_c1x1-c__conv
53, op:BatchNorm, name:conv4_c1x1-c__bn__bn -> conv4_c1x1-c__bn__bn
54, op:ElementWiseSum, name:conv4_ele-sum -> conv4_ele-sum
55, op:Activation, name:conv4_sum-act__relu -> conv4_sum-act__relu
57, op:Convolution, name:conv5_c1x1-a__conv -> conv5_c1x1-a__conv
60, op:BatchNorm, name:conv5_c1x1-a__bn__bn -> conv5_c1x1-a__bn__bn
61, op:Activation, name:conv5_c1x1-a__bn__relu -> conv5_c1x1-a__bn__relu
63, op:Convolution, name:conv5_c3x3-b__conv -> conv5_c3x3-b__conv
66, op:BatchNorm, name:conv5_c3x3-b__bn__bn -> conv5_c3x3-b__bn__bn
67, op:Activation, name:conv5_c3x3-b__bn__relu -> conv5_c3x3-b__bn__relu
69, op:Convolution, name:conv5_c1x1-c__conv -> conv5_c1x1-c__conv
72, op:BatchNorm, name:conv5_c1x1-c__bn__bn -> conv5_c1x1-c__bn__bn
73, op:ElementWiseSum, name:conv5_ele-sum -> conv5_ele-sum
74, op:Activation, name:conv5_sum-act__relu -> conv5_sum-act__relu
76, op:Convolution, name:conv6_c1x1-w(s/2)__conv -> conv6_c1x1-w(s/2)__conv
79, op:BatchNorm, name:conv6_c1x1-w(s/2)__bn__bn -> conv6_c1x1-w(s/2)__bn__bn
81, op:Convolution, name:conv6_c1x1-a__conv -> conv6_c1x1-a__conv
84, op:BatchNorm, name:conv6_c1x1-a__bn__bn -> conv6_c1x1-a__bn__bn
85, op:Activation, name:conv6_c1x1-a__bn__relu -> conv6_c1x1-a__bn__relu
87, op:Convolution, name:conv6_c3x3-b__conv -> conv6_c3x3-b__conv
90, op:BatchNorm, name:conv6_c3x3-b__bn__bn -> conv6_c3x3-b__bn__bn
91, op:Activation, name:conv6_c3x3-b__bn__relu -> conv6_c3x3-b__bn__relu
93, op:Convolution, name:conv6_c1x1-c__conv -> conv6_c1x1-c__conv
96, op:BatchNorm, name:conv6_c1x1-c__bn__bn -> conv6_c1x1-c__bn__bn
97, op:ElementWiseSum, name:conv6_ele-sum -> conv6_ele-sum
98, op:Activation, name:conv6_sum-act__relu -> conv6_sum-act__relu
99, op:Pooling, name:pool5 -> pool5
100, op:Flatten, name:flatten -> flatten
103, op:FullyConnected, name:fc -> fc
105, op:SoftmaxOutput, name:softmax -> softmax
```
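Side note on the jumps in the index column: the indices come from the MXNet symbol JSON, where parameter nodes (weights, biases, BN statistics) have op `null` and produce no Caffe layer. A rough sketch of listing the graph this way, assuming the symbol file is named `model-symbol.json` (the repo's actual script and file names may differ):

```python
import json

# Load the MXNet symbol graph (the file name here is an assumption).
with open('model-symbol.json') as f:
    graph = json.load(f)

# Every node carries an 'op' and a 'name'; parameter nodes have op
# 'null' and are skipped, hence the gaps in the index sequence above.
for i, node in enumerate(graph['nodes']):
    if node['op'] != 'null' or node['name'] == 'data':
        print('%d, op:%s, name:%s' % (i, node['op'], node['name']))
```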
```
$ python mxnet2caffe.py
[09:31:04] src/nnvm/legacy_json_util.cc:190: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
[09:31:04] src/nnvm/legacy_json_util.cc:198: Symbol successfully upgraded!
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0308 09:31:04.431555 2737054528 net.cpp:51] Initializing net from parameters:
name: "mxnet-mdoel"
state { phase: TRAIN level: 0 }
layer { name: "data" type: "Input" top: "data" input_param { shape { dim: 10 dim: 3 dim: 224 dim: 224 } } }
layer { name: "conv1__conv" type: "Convolution" bottom: "data" top: "conv1__conv" param { name: "data" } convolution_param { num_output: 16 bias_term: false pad: 1 kernel_size: 3 group: 1 stride: 2 } }
layer { name: "conv1__bn__bn" type: "BatchNorm" bottom: "conv1__conv" top: "conv1__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv1__bn__bn_scale" type: "Scale" bottom: "conv1__bn__bn" top: "conv1__bn__bn" scale_param { bias_term: true } }
layer { name: "conv1__bn__relu" type: "ReLU" bottom: "conv1__bn__bn" top: "conv1__bn__relu" }
layer { name: "conv2__conv" type: "Convolution" bottom: "conv1__bn__relu" top: "conv2__conv" convolution_param { num_output: 16 bias_term: false pad: 1 kernel_size: 3 group: 1 stride: 2 } }
layer { name: "conv2__bn__bn" type: "BatchNorm" bottom: "conv2__conv" top: "conv2__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv2__bn__bn_scale" type: "Scale" bottom: "conv2__bn__bn" top: "conv2__bn__bn" scale_param { bias_term: true } }
layer { name: "conv2__bn__relu" type: "ReLU" bottom: "conv2__bn__bn" top: "conv2__bn__relu" }
layer { name: "conv3_c1x1-a__conv" type: "Convolution" bottom: "conv2__bn__relu" top: "conv3_c1x1-a__conv" convolution_param { num_output: 16 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 1 } }
layer { name: "conv3_c1x1-a__bn__bn" type: "BatchNorm" bottom: "conv3_c1x1-a__conv" top: "conv3_c1x1-a__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv3_c1x1-a__bn__bn_scale" type: "Scale" bottom: "conv3_c1x1-a__bn__bn" top: "conv3_c1x1-a__bn__bn" scale_param { bias_term: true } }
layer { name: "conv3_c1x1-a__bn__relu" type: "ReLU" bottom: "conv3_c1x1-a__bn__bn" top: "conv3_c1x1-a__bn__relu" }
layer { name: "conv3_c3x3-b__conv" type: "Convolution" bottom: "conv3_c1x1-a__bn__relu" top: "conv3_c3x3-b__conv" convolution_param { num_output: 16 bias_term: false pad: 1 kernel_size: 3 group: 1 stride: 1 } }
layer { name: "conv3_c3x3-b__bn__bn" type: "BatchNorm" bottom: "conv3_c3x3-b__conv" top: "conv3_c3x3-b__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv3_c3x3-b__bn__bn_scale" type: "Scale" bottom: "conv3_c3x3-b__bn__bn" top: "conv3_c3x3-b__bn__bn" scale_param { bias_term: true } }
layer { name: "conv3_c3x3-b__bn__relu" type: "ReLU" bottom: "conv3_c3x3-b__bn__bn" top: "conv3_c3x3-b__bn__relu" }
layer { name: "conv3_c1x1-c__conv" type: "Convolution" bottom: "conv3_c3x3-b__bn__relu" top: "conv3_c1x1-c__conv" convolution_param { num_output: 16 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 1 } }
layer { name: "conv3_c1x1-c__bn__bn" type: "BatchNorm" bottom: "conv3_c1x1-c__conv" top: "conv3_c1x1-c__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv3_c1x1-c__bn__bn_scale" type: "Scale" bottom: "conv3_c1x1-c__bn__bn" top: "conv3_c1x1-c__bn__bn" scale_param { bias_term: true } }
layer { name: "conv3_ele-sum" type: "Eltwise" bottom: "conv2__bn__relu" bottom: "conv3_c1x1-c__bn__bn" top: "conv3_ele-sum" }
layer { name: "conv3_sum-act__relu" type: "ReLU" bottom: "conv3_ele-sum" top: "conv3_sum-act__relu" }
layer { name: "conv4_c1x1-w(s/2)__conv" type: "Convolution" bottom: "conv3_sum-act__relu" top: "conv4_c1x1-w(s/2)__conv" convolution_param { num_output: 32 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 2 } }
layer { name: "conv4_c1x1-w(s/2)__bn__bn" type: "BatchNorm" bottom: "conv4_c1x1-w(s/2)__conv" top: "conv4_c1x1-w(s/2)__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv4_c1x1-w(s/2)__bn__bn_scale" type: "Scale" bottom: "conv4_c1x1-w(s/2)__bn__bn" top: "conv4_c1x1-w(s/2)__bn__bn" scale_param { bias_term: true } }
layer { name: "conv4_c1x1-a__conv" type: "Convolution" bottom: "conv3_sum-act__relu" top: "conv4_c1x1-a__conv" convolution_param { num_output: 32 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 1 } }
layer { name: "conv4_c1x1-a__bn__bn" type: "BatchNorm" bottom: "conv4_c1x1-a__conv" top: "conv4_c1x1-a__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv4_c1x1-a__bn__bn_scale" type: "Scale" bottom: "conv4_c1x1-a__bn__bn" top: "conv4_c1x1-a__bn__bn" scale_param { bias_term: true } }
layer { name: "conv4_c1x1-a__bn__relu" type: "ReLU" bottom: "conv4_c1x1-a__bn__bn" top: "conv4_c1x1-a__bn__relu" }
layer { name: "conv4_c3x3-b__conv" type: "Convolution" bottom: "conv4_c1x1-a__bn__relu" top: "conv4_c3x3-b__conv" convolution_param { num_output: 32 bias_term: false pad: 1 kernel_size: 3 group: 1 stride: 2 } }
layer { name: "conv4_c3x3-b__bn__bn" type: "BatchNorm" bottom: "conv4_c3x3-b__conv" top: "conv4_c3x3-b__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv4_c3x3-b__bn__bn_scale" type: "Scale" bottom: "conv4_c3x3-b__bn__bn" top: "conv4_c3x3-b__bn__bn" scale_param { bias_term: true } }
layer { name: "conv4_c3x3-b__bn__relu" type: "ReLU" bottom: "conv4_c3x3-b__bn__bn" top: "conv4_c3x3-b__bn__relu" }
layer { name: "conv4_c1x1-c__conv" type: "Convolution" bottom: "conv4_c3x3-b__bn__relu" top: "conv4_c1x1-c__conv" convolution_param { num_output: 32 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 1 } }
layer { name: "conv4_c1x1-c__bn__bn" type: "BatchNorm" bottom: "conv4_c1x1-c__conv" top: "conv4_c1x1-c__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv4_c1x1-c__bn__bn_scale" type: "Scale" bottom: "conv4_c1x1-c__bn__bn" top: "conv4_c1x1-c__bn__bn" scale_param { bias_term: true } }
layer { name: "conv4_ele-sum" type: "Eltwise" bottom: "conv4_c1x1-w(s/2)__bn__bn" bottom: "conv4_c1x1-c__bn__bn" top: "conv4_ele-sum" }
layer { name: "conv4_sum-act__relu" type: "ReLU" bottom: "conv4_ele-sum" top: "conv4_sum-act__relu" }
layer { name: "conv5_c1x1-a__conv" type: "Convolution" bottom: "conv4_sum-act__relu" top: "conv5_c1x1-a__conv" convolution_param { num_output: 32 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 1 } }
layer { name: "conv5_c1x1-a__bn__bn" type: "BatchNorm" bottom: "conv5_c1x1-a__conv" top: "conv5_c1x1-a__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv5_c1x1-a__bn__bn_scale" type: "Scale" bottom: "conv5_c1x1-a__bn__bn" top: "conv5_c1x1-a__bn__bn" scale_param { bias_term: true } }
layer { name: "conv5_c1x1-a__bn__relu" type: "ReLU" bottom: "conv5_c1x1-a__bn__bn" top: "conv5_c1x1-a__bn__relu" }
layer { name: "conv5_c3x3-b__conv" type: "Convolution" bottom: "conv5_c1x1-a__bn__relu" top: "conv5_c3x3-b__conv" convolution_param { num_output: 32 bias_term: false pad: 1 kernel_size: 3 group: 1 stride: 1 } }
layer { name: "conv5_c3x3-b__bn__bn" type: "BatchNorm" bottom: "conv5_c3x3-b__conv" top: "conv5_c3x3-b__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv5_c3x3-b__bn__bn_scale" type: "Scale" bottom: "conv5_c3x3-b__bn__bn" top: "conv5_c3x3-b__bn__bn" scale_param { bias_term: true } }
layer { name: "conv5_c3x3-b__bn__relu" type: "ReLU" bottom: "conv5_c3x3-b__bn__bn" top: "conv5_c3x3-b__bn__relu" }
layer { name: "conv5_c1x1-c__conv" type: "Convolution" bottom: "conv5_c3x3-b__bn__relu" top: "conv5_c1x1-c__conv" convolution_param { num_output: 32 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 1 } }
layer { name: "conv5_c1x1-c__bn__bn" type: "BatchNorm" bottom: "conv5_c1x1-c__conv" top: "conv5_c1x1-c__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv5_c1x1-c__bn__bn_scale" type: "Scale" bottom: "conv5_c1x1-c__bn__bn" top: "conv5_c1x1-c__bn__bn" scale_param { bias_term: true } }
layer { name: "conv5_ele-sum" type: "Eltwise" bottom: "conv4_sum-act__relu" bottom: "conv5_c1x1-c__bn__bn" top: "conv5_ele-sum" }
layer { name: "conv5_sum-act__relu" type: "ReLU" bottom: "conv5_ele-sum" top: "conv5_sum-act__relu" }
layer { name: "conv6_c1x1-w(s/2)__conv" type: "Convolution" bottom: "conv5_sum-act__relu" top: "conv6_c1x1-w(s/2)__conv" convolution_param { num_output: 32 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 2 } }
layer { name: "conv6_c1x1-w(s/2)__bn__bn" type: "BatchNorm" bottom: "conv6_c1x1-w(s/2)__conv" top: "conv6_c1x1-w(s/2)__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv6_c1x1-w(s/2)__bn__bn_scale" type: "Scale" bottom: "conv6_c1x1-w(s/2)__bn__bn" top: "conv6_c1x1-w(s/2)__bn__bn" scale_param { bias_term: true } }
layer { name: "conv6_c1x1-a__conv" type: "Convolution" bottom: "conv5_sum-act__relu" top: "conv6_c1x1-a__conv" convolution_param { num_output: 32 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 1 } }
layer { name: "conv6_c1x1-a__bn__bn" type: "BatchNorm" bottom: "conv6_c1x1-a__conv" top: "conv6_c1x1-a__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv6_c1x1-a__bn__bn_scale" type: "Scale" bottom: "conv6_c1x1-a__bn__bn" top: "conv6_c1x1-a__bn__bn" scale_param { bias_term: true } }
layer { name: "conv6_c1x1-a__bn__relu" type: "ReLU" bottom: "conv6_c1x1-a__bn__bn" top: "conv6_c1x1-a__bn__relu" }
layer { name: "conv6_c3x3-b__conv" type: "Convolution" bottom: "conv6_c1x1-a__bn__relu" top: "conv6_c3x3-b__conv" convolution_param { num_output: 32 bias_term: false pad: 1 kernel_size: 3 group: 1 stride: 2 } }
layer { name: "conv6_c3x3-b__bn__bn" type: "BatchNorm" bottom: "conv6_c3x3-b__conv" top: "conv6_c3x3-b__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv6_c3x3-b__bn__bn_scale" type: "Scale" bottom: "conv6_c3x3-b__bn__bn" top: "conv6_c3x3-b__bn__bn" scale_param { bias_term: true } }
layer { name: "conv6_c3x3-b__bn__relu" type: "ReLU" bottom: "conv6_c3x3-b__bn__bn" top: "conv6_c3x3-b__bn__relu" }
layer { name: "conv6_c1x1-c__conv" type: "Convolution" bottom: "conv6_c3x3-b__bn__relu" top: "conv6_c1x1-c__conv" convolution_param { num_output: 32 bias_term: false pad: 0 kernel_size: 1 group: 1 stride: 1 } }
layer { name: "conv6_c1x1-c__bn__bn" type: "BatchNorm" bottom: "conv6_c1x1-c__conv" top: "conv6_c1x1-c__bn__bn" batch_norm_param { use_global_stats: true moving_average_fraction: 0.9 eps: 0.001 } }
layer { name: "conv6_c1x1-c__bn__bn_scale" type: "Scale" bottom: "conv6_c1x1-c__bn__bn" top: "conv6_c1x1-c__bn__bn" scale_param { bias_term: true } }
layer { name: "conv6_ele-sum" type: "Eltwise" bottom: "conv6_c1x1-w(s/2)__bn__bn" bottom: "conv6_c1x1-c__bn__bn" top: "conv6_ele-sum" }
layer { name: "conv6_sum-act__relu" type: "ReLU" bottom: "conv6_ele-sum" top: "conv6_sum-act__relu" }
layer { name: "pool5" type: "Pooling" bottom: "conv6_sum-act__relu" top: "pool5" pooling_param { pool: AVE kernel_size: 14 stride: 1 pad: 0 } }
layer { name: "fc" type: "InnerProduct" bottom: "flatten" top: "fc" inner_product_param { num_output: 1000 } }
F0308 09:31:04.433809 2737054528 insert_splits.cpp:29] Unknown bottom blob 'flatten' (layer 'fc', bottom index 0)
*** Check failure stack trace: ***
Abort trap: 6
```
Not sure why the blob "flatten" is not correctly recognized. Any idea?
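For reference, a minimal check for dangling bottom blobs, sketched under the assumption that the generated prototxt was saved as `model.prototxt` (adjust the path to whatever json2prototxt.py actually wrote):

```python
import re

# Walk the prototxt once, remembering every top blob seen so far; any
# bottom with no producing layer is exactly what Caffe's insert_splits
# check aborts on.
tops, dangling = set(), []
with open('model.prototxt') as f:   # path is an assumption
    for line in f:
        m = re.match(r'\s*(top|bottom):\s*"([^"]+)"', line)
        if not m:
            continue
        kind, blob = m.groups()
        if kind == 'top':
            tops.add(blob)
        elif blob not in tops:
            dangling.append(blob)
print(dangling)   # expected here: ['flatten']
```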
Found the solution: manually change the `fc` layer's bottom `"flatten"` to the previous layer's top, `"pool5"`. Since Caffe's InnerProduct layer flattens its input internally, the dropped Flatten layer isn't needed:

```
layer { bottom: "conv6_sum-act__relu" top: "pool5" name: "pool5" type: "Pooling" pooling_param { pool: AVE kernel_size: 14 stride: 1 pad: 0 } }
layer { bottom: "pool5" top: "fc" name: "fc" type: "InnerProduct" inner_product_param { num_output: 1000 } }  # bottom was "flatten"
```
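For anyone who prefers not to hand-edit, the fix can also be scripted. A minimal sketch, assuming pycaffe is importable and using placeholder file names; it remaps any bottom blob with no producer to the most recent top, which reproduces the manual "flatten" to "pool5" change on this model (a heuristic that fits this graph, not a general-purpose converter fix):

```python
from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Parse the generated prototxt (file names are placeholders).
net = caffe_pb2.NetParameter()
with open('model.prototxt') as f:
    text_format.Merge(f.read(), net)

# Remap any bottom blob that no earlier layer produced to the most
# recent top; for 'fc' this rewrites bottom 'flatten' to 'pool5'.
known_tops, last_top = set(), None
for layer in net.layer:
    for i, bottom in enumerate(layer.bottom):
        if bottom not in known_tops and last_top is not None:
            layer.bottom[i] = last_top
    known_tops.update(layer.top)
    if layer.top:
        last_top = layer.top[-1]

with open('model_fixed.prototxt', 'w') as f:
    f.write(text_format.MessageToString(net))
```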
Thanks for the hint.