Consolidate network definitions #57
Right now, a model typically has three CaffeNet definitions for training, validation, and deployment (imagenet.prototxt, imagenet_val.prototxt, and imagenet_deploy.prototxt respectively for the ImageNet example). These protobufs are full of redundancy, and tweaking networks requires a lot of copy-and-paste.

Is a unified protocol buffer that describes the input/output for these cases together possible?

Comments
In principle, it is. Cuda-convnet uses one config file for each model.
softmax_loss may need to be split into a softmax plus an independent loss function.
imagenet_val.prototxt has the same layers as imagenet.prototxt but does not use the optimization fields blobs_lr, weight_decay, weight_filler, and bias_filler. It is OK for test to ignore these fields. The fields source, batchsize, and mirror have conflicting values; one option is to just add a train_ or test_ prefix to each of them. The softmax_loss vs. softmax issue also appears in the output of "diff imagenet.prototxt imagenet_deploy.prototxt".
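A sketch of the prefix idea (the train_/test_ field names below are hypothetical, not actual caffe.proto fields):

```
layers {
  layer {
    name: "data"
    type: "data"
    # hypothetical prefixed fields resolving the train/test conflicts
    train_source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
    test_source: "/home/jiayq/Data/ILSVRC12/val-leveldb"
    train_batchsize: 256
    test_batchsize: 50
    train_mirror: true
    test_mirror: false
    cropsize: 227
  }
  top: "data"
  top: "label"
}
```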
Although it would be possible to define one prototxt for both, I think the current separation allows more flexibility, and it is easy for the code to read and process the network definitions, although a network verification (making sure training and test are compatible) would be nice. If there were a joint prototxt, the code would need to interpret it differently for each case; for instance, the deploy case would become more difficult to handle. @Yangqing what do you think about this?
I actually like the consolidation idea - having a way to consolidate the definitions would be great.
One idea could be to add to either the LayerConnection or the LayerParameter proto (not sure which would be more natural) a field "repeated string phase" (or maybe an enum). If empty, the layer is used in all phases; if specified, the layer is ignored for all phases except the specified ones. Then in imagenet.prototxt we specify, for example, two data layers with different phases:

```
layers {
  layer {
    name: "data"
    type: "data"
    source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
    meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto"
    batchsize: 256
    cropsize: 227
    mirror: true
  }
  top: "data"
  top: "label"
  phase: "train"
}
layers {
  layer {
    name: "data"
    type: "data"
    source: "/home/jiayq/Data/ILSVRC12/val-leveldb"
    meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto"
    batchsize: 50
    cropsize: 227
    mirror: false
  }
  top: "data"
  top: "label"
  phase: "val"
}
...
layers {
  layer {
    name: "loss"
    type: "softmax_loss"
  }
  bottom: "fc8"
  bottom: "label"
  phase: "train"
}
layers {
  layer {
    name: "prob"
    type: "softmax"
  }
  bottom: "fc8"
  top: "prob"
  phase: "val"
  phase: "deploy"
}
layers {
  layer {
    name: "accuracy"
  }
  bottom: "prob"
  bottom: "label"
  top: "accuracy"
  phase: "val"
}
```
I like @jeffdonahue's proposal a lot. It's concise, the meaning is clear, and I don't think it would complicate net construction much. I could try for a PR next week.
Another possibility would be to separate the network architecture from the training and testing parameters/layers and then do an explicit include and merge, which would read the file named in the include_net field and merge the protobufs together.
Then define the network_train.prototxt as:
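A minimal sketch, assuming a hypothetical include_net field that pulls in a shared network_arch.prototxt (all field names here are illustrative):

```
# hypothetical syntax: pull in the shared architecture, then add train-only parts
include_net: "network_arch.prototxt"
name: "network_train"
layers {
  layer {
    name: "data"
    type: "data"
    source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
    batchsize: 256
    mirror: true
  }
  top: "data"
  top: "label"
}
layers {
  layer { name: "loss" type: "softmax_loss" }
  bottom: "fc8"
  bottom: "label"
}
```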
And define the network_test.prototxt as:
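Correspondingly for the test side, under the same assumed include_net mechanism:

```
include_net: "network_arch.prototxt"
name: "network_test"
layers {
  layer {
    name: "data"
    type: "data"
    source: "/home/jiayq/Data/ILSVRC12/val-leveldb"
    batchsize: 50
    mirror: false
  }
  top: "data"
  top: "label"
}
layers {
  layer { name: "prob" type: "softmax" }
  bottom: "fc8"
  top: "prob"
}
layers {
  layer { name: "accuracy" type: "accuracy" }
  bottom: "prob"
  bottom: "label"
  top: "accuracy"
}
```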
Additionally, we could define a default layer that contains a set of default values for a certain kind of layer; for instance, it could be used to set blobs_lr and weight_decay. In that case, a more compact definition of network_train.prototxt would be:
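A sketch with a hypothetical default_layer message carrying the shared optimization fields (again, the syntax is illustrative, not actual caffe.proto):

```
include_net: "network_arch.prototxt"
name: "network_train"
# hypothetical: defaults applied to every layer with learnable weights
default_layer {
  blobs_lr: 1.0
  blobs_lr: 2.0
  weight_decay: 1.0
  weight_decay: 0.0
  weight_filler { type: "gaussian" std: 0.01 }
  bias_filler { type: "constant" value: 0.0 }
}
```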
To me, part of the point of consolidation is to have a single definition file as in @jeffdonahue's proposal. I want as little redundancy as possible. Including seems like more difficult logic with protobuf, too. I am going to hack on a single-file def with phases.
@shelhamer you could still put what I said in one file, where there is one part that defines the architecture, another that defines the things specific to the training phase, and another that defines the things specific to the test phase. I'm looking forward to seeing your proposal.
We should probably use [packed=true] for all repeated fields with basic types. There is actually a way to import other proto definitions; we may want to consider this. Also, there is a simple way to merge messages, which we could use to merge partial definitions of networks.
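A minimal sketch of that merge in C++, assuming Caffe's NetParameter message and a ReadProtoFromTextFile helper like the one in Caffe's io utilities:

```cpp
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/io.hpp"  // assumed to provide ReadProtoFromTextFile

int main() {
  // Start from the shared architecture definition.
  caffe::NetParameter net_param;
  caffe::ReadProtoFromTextFile("network_arch.prototxt", &net_param);

  // Read a partial, phase-specific definition.
  caffe::NetParameter train_part;
  caffe::ReadProtoFromTextFile("network_train.prototxt", &train_part);

  // Protobuf's MergeFrom overwrites singular fields and appends to
  // repeated fields, so repeated "layers" entries are concatenated.
  net_param.MergeFrom(train_part);
  return 0;
}
```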
Even though the definitions are consolidated, in the current implementation two networks are initialized in the code: one used for training and one for testing, with the test net copying parameters from the train net. I think they should be consolidated as well. With the help of a split layer, both the softmax_loss and accuracy layers can be put in the same model. In this case the phase parameter is not used for model initialization, but is used in the forward and backward functions to decide whether a computation is needed. This would save the extra memory used during the test phase.
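A rough sketch of such phase gating inside a layer's forward function (hypothetical code; the signature and the Caffe::phase() accessor are assumptions about the implementation):

```cpp
// Hypothetical phase-gated loss layer: skip loss computation at test time.
void SoftmaxLossLayer::Forward_cpu(const vector<Blob<float>*>& bottom,
                                   vector<Blob<float>*>* top) {
  if (Caffe::phase() != Caffe::TRAIN) {
    return;  // not needed for val/deploy; saves memory and compute
  }
  // ... usual softmax + loss computation for training ...
}
```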
@mavenlin, if your proposal is implemented, it will also speed up Solver::Test() by eliminating the memory-copy time. Why don't you create an issue?

```cpp
CHECK_NOTNULL(test_net_.get())->CopyTrainedLayersFrom(net_param);
```
Now that the amazing SplitLayer #129 has been merged into the dev branch, is there anyone working on a solution based on it?
To be resolved by #734. |