Hi @ppwwyyxx,
I was looking into implementing DoReFaNet for ResNet and came across a link in the tensorpack source, in the "/examples/ResNet/Cifar-10-resnet.py" implementation.
The code at #69 seems to be very old and has since been removed from the examples as well. I tried updating it to work with the current code, but I got exceptions while creating the ResNet blocks.
So now I have tried to implement DoReFaNet in the current tensorpack code, based on "/examples/ResNet/Cifar-10-resnet.py", but in the old implementation the activation quantization is done before pooling. Shouldn't it be after pooling (as seen in the screenshot)? (Sorry, this was my misunderstanding: it is the layer l that is being pooled, not c2 as I had thought, so the quantization step for activations now makes sense.)
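To make sure I now read the block structure correctly, here is a minimal sketch of how I understand it (assuming `fa` is the activation quantizer returned by `get_dorefa` in examples/DoReFa-Net/dorefa.py; the `activate` helper and the exact Conv2D arguments are my own illustration, not the original code from #69):

```python
import tensorflow as tf
from tensorpack.models import Conv2D, AvgPooling, BNReLU
from dorefa import get_dorefa  # from examples/DoReFa-Net/

fw, fa, fg = get_dorefa(1, 2, 32)  # illustrative bit widths

def activate(x):
    # DoReFa quantizes activations after clipping them into [0, 1]
    return fa(tf.clip_by_value(x, 0.0, 1.0))

def residual(name, l, increase_dim=False):
    in_channel = l.get_shape().as_list()[3]  # assuming NHWC layout
    out_channel = in_channel * 2 if increase_dim else in_channel
    stride1 = 2 if increase_dim else 1
    with tf.variable_scope(name):
        b1 = activate(BNReLU(l))    # quantize the input to conv1
        c1 = Conv2D('conv1', b1, out_channel, 3, strides=stride1)
        b2 = activate(BNReLU(c1))   # quantize the input to conv2
        c2 = Conv2D('conv2', b2, out_channel, 3)
        if increase_dim:
            # it is the shortcut `l` that gets pooled and zero-padded,
            # not `c2` -- which is what I had misread
            l = AvgPooling('pool', l, 2)
            l = tf.pad(l, [[0, 0], [0, 0], [0, 0],
                           [in_channel // 2, in_channel // 2]])
        return c2 + l
```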
Also, shouldn't remap_variables be used to quantize the weights? The old implementation did this differently, but I guess it was changed later.
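For reference, this is the pattern I mean, following examples/DoReFa-Net/alexnet-dorefa.py, where `remap_variables` from tensorpack.tfutils.varreplace passes every created variable through a mapping function (the 'conv0'/'fct' names to skip are taken from that example; the right names depend on the model):

```python
from tensorpack.models import Conv2D, BNReLU
from tensorpack.tfutils.varreplace import remap_variables
from dorefa import get_dorefa  # from examples/DoReFa-Net/

fw, fa, fg = get_dorefa(1, 2, 32)

def quantize_weight(v):
    name = v.op.name
    # quantize only kernels named 'W', and keep the first and last
    # layers at full precision, as the DoReFa-Net example does
    if not name.endswith('W') or 'conv0' in name or 'fct' in name:
        return v
    return fw(v)

def build_graph(image):
    # every variable created under this context is mapped through
    # quantize_weight before it is used in the graph
    with remap_variables(quantize_weight):
        l = Conv2D('conv0', image, 16, 3, activation=BNReLU)  # full precision
        # ... residual blocks built here get quantized weights ...
        return l
```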