InnerProduct layer convert failed: "caffe.LayerParameter" has no field named "weight_decay" #342

Closed
northeastsquare opened this issue Apr 9, 2018 · 2 comments

northeastsquare commented Apr 9, 2018

Hello, when I use caffe2ncnn to convert a Caffe model, it fails:
tools/caffe/caffe2ncnn ./deploy.prototxt ./param.caffemodel numproto numbin

[libprotobuf ERROR google/protobuf/text_format.cc:299] Error parsing text-format caffe.NetParameter: 184:15: Message type "caffe.LayerParameter" has no field named "weight_decay".
read_proto_from_text failed

The prototxt is as follows:
```
name: "DeepFace_set001_net"
layer {
  name: "mydata"
  type: "MemoryData"
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00390625
  }
  memory_data_param {
    batch_size: 1
    channels: 1
    height: 64
    width: 64
  }
}

layer {
  name: "conv1"
  type: "Convolution"
  convolution_param {
    num_output: 12
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
  bottom: "data"
  top: "conv1"
}

layer {
  name: "pool1"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 3
  }
  bottom: "conv1"
  top: "pool1"
}

layer {
  name: "slice1"
  type: "Slice"
  slice_param {
    slice_dim: 1
  }
  bottom: "pool1"
  top: "slice1_1"
  top: "slice1_2"
}

layer {
  name: "etlwise1"
  type: "Eltwise"
  bottom: "slice1_1"
  bottom: "slice1_2"
  top: "eltwise1"
  eltwise_param {
    operation: MAX
  }
}

layer {
  name: "conv2"
  type: "Convolution"
  convolution_param {
    num_output: 24
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
  bottom: "eltwise1"
  top: "conv2"
}

layer {
  name: "pool2"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 3
  }
  bottom: "conv2"
  top: "pool2"
}

layer {
  name: "slice2"
  type: "Slice"
  slice_param {
    slice_dim: 1
  }
  bottom: "pool2"
  top: "slice2_1"
  top: "slice2_2"
}

layer {
  name: "etlwise2"
  type: "Eltwise"
  bottom: "slice2_1"
  bottom: "slice2_2"
  top: "eltwise2"
  eltwise_param {
    operation: MAX
  }
}

layer {
  name: "conv3"
  type: "Convolution"
  convolution_param {
    num_output: 32
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
  bottom: "eltwise2"
  top: "conv3"
}

layer {
  name: "pool3"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv3"
  top: "pool3"
}

layer {
  name: "slice3"
  type: "Slice"
  slice_param {
    slice_dim: 1
  }
  bottom: "pool3"
  top: "slice3_1"
  top: "slice3_2"
}

layer {
  name: "etlwise3"
  type: "Eltwise"
  bottom: "slice3_1"
  bottom: "slice3_2"
  top: "eltwise3"
  eltwise_param {
    operation: MAX
  }
}

layer {
  name: "fc1"
  type: "InnerProduct"
  weight_decay: 10
  weight_decay: 10
  inner_product_param {
    num_output: 256
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
  bottom: "eltwise3"
  top: "fc1"
}

layer {
  name: "slice6"
  type: "Slice"
  slice_param {
    slice_dim: 1
  }
  bottom: "fc1"
  top: "slice6_1"
  top: "slice6_2"
}

layer {
  name: "eltwise6"
  type: "Eltwise"
  bottom: "slice6_1"
  bottom: "slice6_2"
  top: "eltwise6"
  eltwise_param {
    operation: MAX
  }
}

layer {
  name: "dropout1"
  type: "Dropout"
  bottom: "eltwise6"
  top: "dropout1"
  dropout_param {
    dropout_ratio: 0.7
  }
}

layer {
  name: "fc2"
  type: "InnerProduct"
  weight_decay: 10
  weight_decay: 10
  inner_product_param {
    num_output: 11
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      #value: 0.1
    }
  }
  bottom: "dropout1"
  top: "fc2"
}
```


nihui commented Apr 9, 2018

Try upgrading your Caffe prototxt and caffemodel to the new style first.
Reference guide: https://github.com/Tencent/ncnn/wiki/how-to-use-ncnn-with-alexnet
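
For reference, the upgrade itself can be done with the converter tools that ship with Caffe. A minimal sketch, assuming Caffe was built under `~/caffe/build` and using hypothetical output names (adjust paths to your setup):

```
# Paths and output file names below are assumptions -- point them at your own
# Caffe build and model files.
~/caffe/build/tools/upgrade_net_proto_text   deploy.prototxt  new_deploy.prototxt
~/caffe/build/tools/upgrade_net_proto_binary param.caffemodel new_param.caffemodel

# Then run the ncnn converter on the upgraded pair, as in the original command.
tools/caffe/caffe2ncnn new_deploy.prototxt new_param.caffemodel numproto numbin
```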

northeastsquare (Author) commented

Thank you for your fast reply. I think I will have a pleasant journey with ncnn thanks to its kind and open author.
After upgrading, I deleted
weight_decay: 10
weight_decay: 10
as they are only used during training. The conversion then passed.
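
For anyone hitting the same error: in new-style Caffe prototxt, per-blob decay is no longer a layer-level field, so if the decay is still wanted for training it would move into `param` blocks instead of being deleted. A minimal sketch of fc1 under that assumption (decay_mult of 10 for both the weight and bias blobs, mirroring the old values; caffe2ncnn does not need these solver hints):

```
layer {
  name: "fc1"
  type: "InnerProduct"
  # Assumed new-style equivalent of the removed layer-level weight_decay
  # fields: per-blob solver multipliers go into repeated param (ParamSpec) entries.
  param { decay_mult: 10 }   # weight blob
  param { decay_mult: 10 }   # bias blob
  inner_product_param {
    num_output: 256
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" value: 0.1 }
  }
  bottom: "eltwise3"
  top: "fc1"
}
```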
