You are welcome to file issues either for bugs in the source code or for feature requests!
Given an MXNet model, MXNet2Caffe can automatically generate both the `.prototxt` and the `.caffemodel`.
Before using MXNet2Caffe, you need to manually set the paths in `find_caffe.py` and `find_mxnet.py`.
After that, simply run `python json2prototxt.py` to generate the corresponding `.prototxt`.
Then run `python mxnet2caffe.py` to generate the corresponding `.caffemodel`.
This version was changed by Haicheng to handle more features. If your MXNet model contains a slice operator, please use `parsing_slice_layer.py` to parse the output layers produced by this version of MXNet2Caffe.
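If you are unsure whether your model contains slice operators, one way to check is to inspect the symbol JSON directly. This small helper is not part of the repo; the file name is an example, and the exact operator names to look for may depend on your MXNet version:

```python
import json

# Example symbol file name; use your own *-symbol.json.
with open('model-symbol.json') as f:
    sym = json.load(f)

# Each node in an MXNet symbol JSON has an 'op' field ('null' for inputs).
slice_ops = [n['name'] for n in sym['nodes']
             if n['op'] in ('slice', 'slice_axis', 'SliceChannel')]

if slice_ops:
    print('Slice operators found:', slice_ops)
    print('Run parsing_slice_layer.py on the generated outputs.')
else:
    print('No slice operators found; no extra parsing needed.')
```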
[1] Older versions of Caffe had no Flatten layer, so you had to manually modify the automatically generated `.prototxt`: change the `bottom` of the layer just after the Flatten layer so that it links to the layer before the Flatten layer. (Solved: Caffe supports the Flatten layer now.)
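For older Caffe versions, the rewiring described above could also be scripted with Caffe's protobuf bindings. The following is only a sketch, not part of this repo; it assumes the new-style `layer` field and example file names:

```python
from google.protobuf import text_format
from caffe.proto import caffe_pb2

# Load the generated prototxt (example file name).
src = caffe_pb2.NetParameter()
with open('model.prototxt') as f:
    text_format.Merge(f.read(), src)

# Copy everything, then rebuild the layer list without Flatten layers.
dst = caffe_pb2.NetParameter()
dst.CopyFrom(src)
del dst.layer[:]

remap = {}  # Flatten top -> Flatten bottom
for layer in src.layer:
    if layer.type == 'Flatten':
        remap[layer.top[0]] = layer.bottom[0]
        continue                                 # drop the Flatten layer itself
    copied = dst.layer.add()
    copied.CopyFrom(layer)
    for i, b in enumerate(copied.bottom):
        copied.bottom[i] = remap.get(b, b)       # re-link past the removed Flatten

with open('model_noflatten.prototxt', 'w') as f:
    f.write(text_format.MessageToString(dst))
```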
[2] The converted model performs slightly worse than the original MXNet model.
[3] Code for automatically reversing the weights (and bias) of the first layer to support BGR input.
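Reversing the channel order amounts to flipping axis 1 (the input channels) of the first convolution's 4-D weight blob. A minimal sketch of the idea, assuming example file names and a first convolution layer named `conv0` (adjust to your network):

```python
import caffe

# Example names; adjust the files and the first convolution layer's name.
net = caffe.Net('model.prototxt', 'model.caffemodel', caffe.TEST)
first_conv = 'conv0'

# Conv weights are shaped (out_channels, in_channels, kH, kW); reversing
# axis 1 swaps the input channel order, e.g. RGB-trained filters -> BGR input.
flipped = net.params[first_conv][0].data[:, ::-1, :, :].copy()
net.params[first_conv][0].data[...] = flipped

net.save('model_bgr.caffemodel')
```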
[4] Better support for Caffe's in-place feature.
[5] Several TODOs remain in `prototxt_basic.py`.