train_val prototxt #6
Except for the data layers and the loss/accuracy layers, the main body of the training and deploy prototxt files should be the same. Please check your training prototxt file.
Thanks for your reply. Is it possible for you to share your train_val.prototxt file?
They are actually the same.
There seems to be an inconsistency between the batch-normalization layer definitions in mobilenet_deploy.prototxt and those in the provided Caffe model. For example, for the conv1/bn layer I get the following error message:

ERROR: Check failed: target_blobs.size() == source_layer.blobs_size() (5 vs. 3) Incompatible number of blobs for layer conv1/bn

I could make the error go away by renaming ALL the batch-normalization layers, but I would like to use exactly the same model as you have provided.
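For context, here is a small sketch of the compatibility check that Caffe performs when copying trained weights into a net definition. The blob counts are assumptions based on the two implementations: official Caffe's BatchNorm layer stores 3 blobs (mean, variance, moving-average factor), while NVCaffe's BatchNorm with scale/bias enabled stores 5 (adding scale and bias), which produces the "5 vs. 3" mismatch above.

```python
# Sketch of the blob-count check Caffe performs when loading a .caffemodel
# into a net definition (modeled loosely on Net::CopyTrainedLayersFrom).
# Blob counts below are assumptions about the two implementations.

OFFICIAL_BN_BLOBS = 3  # mean, variance, moving-average factor
NVCAFFE_BN_BLOBS = 5   # the 3 above, plus scale and bias when enabled

def check_layer(name, target_blobs, source_blobs):
    """Compare blob counts in the net definition (target) against the
    saved model (source), as Caffe does when copying trained layers."""
    if target_blobs != source_blobs:
        return ("Check failed: target_blobs.size() == source_layer.blobs_size() "
                f"({target_blobs} vs. {source_blobs}) "
                f"Incompatible number of blobs for layer {name}")
    return "OK"

# An NVCaffe net expecting 5 batch-norm blobs, loading a model saved with 3:
print(check_layer("conv1/bn", NVCAFFE_BN_BLOBS, OFFICIAL_BN_BLOBS))
```

Renaming the layers "fixes" the error only because Caffe then skips copying weights into them entirely, which silently discards the pretrained batch-norm statistics.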
Do you use the official Caffe?
I am using NVCaffe and the NVIDIA DIGITS environment for fine-tuning. Do you have any suggestions on how to work around this problem?
Please take a look at https://github.com/NVIDIA/caffe/blob/caffe-0.16/src/caffe/layers/batch_norm_layer.cpp#L25

Please confirm that you set scale_bias to false, and have no scale_filler or bias_filler in your batch norm layers.
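A sketch of what such a batch-norm layer definition might look like, assuming NVCaffe's batch_norm_param (the scale_bias field is NVCaffe-specific and does not exist in official Caffe); the layer and blob names follow the conv1/bn layer mentioned above:

```protobuf
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param {
    use_global_stats: true
    # NVCaffe-specific: with scale_bias off, BatchNorm expects only the
    # 3 blobs (mean, variance, moving-average factor) stored in the
    # provided model. Do NOT add a scale_filler or bias_filler here.
    scale_bias: false
  }
}
```

With scale_bias disabled, any affine transform after normalization is handled by a separate Scale layer, which matches how official-Caffe models are typically structured.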
Many thanks for your reply. As I am very new to Caffe, could you please let me know exactly how to do that?
Please update your Caffe to the newest one from: https://github.com/NVIDIA/caffe/
Thanks for your reply. I have upgraded Caffe but still face the same problem. Any help on how to resolve this would be greatly appreciated. The configuration of the required libraries is as follows:
Would you please share a link to your train_val.prototxt with me?
Hello, could you please provide the train_val.prototxt file? I would like to do some fine-tuning, but it seems that the number of blobs for the batch norm layers differs between the training and deploy models.