Layerwise quantization #15
Comments
Are you trying to modify quantization parameters via the python/pycaffe interface? I have not tried it, so I don't know whether it works. What is the issue that you are facing? Currently sparsity is applied in the function: It may be possible to specify a layer index to this function so that it sparsifies only a selected layer. However, the sparsity target that is specified is for the entire network.
Hey manu, thanks for your answer. Here is my code: `layer.quantization_param.qparam_w.bitwidth = 8`. It actually raises a type error; what is weird, however, is that in caffe.proto the type of the bitwidth attribute is integer. Thanks
As far as I understand, pycaffe doesn't allow you to change the layer parameters. But maybe you can work around this restriction by writing your own functions to get and set them. Let me know if you succeed.
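A minimal sketch of what such get/set helpers could look like: they walk a dotted attribute path on nested parameter objects. The path string and the `layer` object here are hypothetical stand-ins; whether pycaffe actually exposes these objects in a writable form is exactly the open question in this thread.

```python
from functools import reduce

def get_param(obj, path):
    """Return the value at a dotted attribute path,
    e.g. 'quantization_param.qparam_w.bitwidth'."""
    return reduce(getattr, path.split("."), obj)

def set_param(obj, path, value):
    """Set the value at a dotted attribute path on nested parameter objects."""
    *head, leaf = path.split(".")
    setattr(reduce(getattr, head, obj), leaf, value)
```

Usage would be something like `set_param(layer, "quantization_param.qparam_w.bitwidth", 8)`, which only succeeds if the intermediate objects are real, mutable attributes rather than read-only views.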
You can also put that field into the prototxt file. But this method doesn't allow you to change it afterwards.
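As a rough illustration of the prototxt route, the layer fragment below uses the field names mentioned in this thread (`quantization_param`, `qparam_w`, `bitwidth`); the exact schema should be verified against caffe.proto, and the layer name and type are placeholders.

```protobuf
layer {
  name: "conv1"          # placeholder layer
  type: "Convolution"
  # Field names assumed from this thread; verify against caffe.proto.
  quantization_param {
    qparam_w { bitwidth: 8 }
  }
}
```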
Hello manu,
Given the generated caffe_pb2, is it correct to use setattr(layer.quantization_param.precision, 8) to set layerwise quantization?
Also, is it possible to sparsify networks layer by layer?
Thanks a lot,
Best
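For what it's worth, the setattr call quoted above cannot work regardless of pycaffe's restrictions: Python's built-in setattr takes three arguments (object, attribute name as a string, value), so the two-argument form raises a TypeError, which likely explains the type error reported earlier in the thread. The sketch below demonstrates this with a plain stand-in class, since the real caffe_pb2 message classes may not be importable here; the class name and field are hypothetical.

```python
class QuantizationParam:
    """Hypothetical stand-in for a generated caffe_pb2 message."""
    def __init__(self):
        self.precision = 0

qparam = QuantizationParam()

# Correct: setattr(object, attribute_name_string, value)
setattr(qparam, "precision", 8)
assert qparam.precision == 8

# The two-argument form from the question raises a TypeError.
try:
    setattr(qparam.precision, 8)
except TypeError as exc:
    print("TypeError:", exc)
```

On a generated protobuf message, plain assignment (`layer.quantization_param.precision = 8`) is the usual way to set a scalar field, assuming the message is mutable.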