
load pre-train model from original mobilenet #17

Closed

Heng14 opened this issue Dec 19, 2019 · 5 comments


Heng14 commented Dec 19, 2019

Hi Alessandro,

Thank you so much for the amazing work. I have a question: I have converted the official PyTorch MobileNet-V2 code to a quantized version by referring to your MobileNet-V1 code. Is there any way I could load the pretrained model that PyTorch officially released (the original float MobileNet-V2) into my quantized MobileNet-V2 so as to finetune from it?

Thank you so much!

Best,
Tracy

@volcacius
Contributor

Hi Tracy,

Yes, you can do it with some work.
First thing: in the mobilenetv1 example you are mentioning, Brevitas introduces some new learned parameters (the scale factors for the activations), so when you load a pretrained floating point model to quantize it, PyTorch is going to complain that it can't find those parameters in the pretrained model. To fix it, you can set the env flag BREVITAS_IGNORE_MISSING_KEYS=1.
The other important point is that you should maintain the same nn.Module hierarchy between the floating point and the quantized implementation of the model, otherwise PyTorch won't be able to match the pretrained weights to the corresponding quantized modules. What I usually do is insert all the quantized layers with quantization disabled (setting the quantization type to QuantType.FP) and make sure I can reproduce the original floating point accuracy. Once I'm sure everything is okay, I enable quantization and start retraining.
For a reference on how to quantize the residual connections, look at ProxylessNAS; it's quite similar to MobileNet V2. The idea is just to define a QuantHardTanh, which behaves like a quantized identity, and use it before and after the add.
Let me know how it goes.

Alessandro


xfeng23 commented Dec 21, 2019

Hi @alessandro,

Thank you so much for such a detailed answer. This is my other account, by the way. I did as you said above. One problem: I set the env flag BREVITAS_IGNORE_MISSING_KEYS=1, and loading the pretrained model works when I set the quantization type to QuantType.FP, but when I set the quantization type to QuantType.INT it still reports keys related to quantization as missing.
Something like:
Missing key(s) in state_dict: "features.0.2.act_quant_proxy.fused_activation_quant_proxy.tensor_quant.scaling_impl.learned_value",
"features.1.shared_act.act_quant_proxy.fused_activation_quant_proxy.tensor_quant.scaling_impl.learned_value", ......

Do you have any ideas about this?
Another question: does nn.Dropout have a corresponding operation that can take a QuantTensor as input?

Thank you so much!

Best,
Tracy

Contributor

volcacius commented Dec 21, 2019

Hi Tracy,

Can you please double check that, when you set the env flag BREVITAS_IGNORE_MISSING_KEYS=1, running

import brevitas.config as config
print(config.IGNORE_MISSING_KEYS)

prints True? That's the variable that reads the env setting, so it should be True. Otherwise it means that the env flag is not being set properly.
You can also either set config.IGNORE_MISSING_KEYS = True manually, or call model.load_state_dict(my_state_dict, strict=False) when you load the pretrained model. Be careful: with strict=False you disable any sort of check on the state dict.
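For example (a minimal sketch; the checkpoint path and the quant_model variable are placeholders):

import torch

# Load the official float checkpoint (hypothetical path) into the quantized model.
float_state_dict = torch.load('mobilenet_v2_float.pth', map_location='cpu')

# strict=False skips the quantization-only keys missing from the float checkpoint,
# but it also silences every other mismatch, so use it with care.
quant_model.load_state_dict(float_state_dict, strict=False)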

Regarding dropout, I don't have a pre-made layer, but something like this should work (off the top of my head, haven't tested it). QuantTensor is just a tuple, so you can simply unpack it, pass the tensor through the forward function, and then pack the output back into a QuantTensor:

import torch.nn as nn
from brevitas.quant_tensor import QuantTensor


class QuantDropout(nn.Dropout):

    def forward(self, input_quant_tensor):
        # QuantTensor is a tuple, so unpack it into tensor, scale and bit width
        inp, scale, bit_width = input_quant_tensor
        # run the standard dropout forward on the underlying tensor
        output = super(QuantDropout, self).forward(inp)
        # repack the output with the original scale and bit width
        output_quant_tensor = QuantTensor(tensor=output, scale=scale, bit_width=bit_width)
        return output_quant_tensor

You can take the same approach for nn.MaxPool2d too.
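Following the same pattern, an untested sketch of a max pooling equivalent (the class name is just illustrative):

import torch.nn as nn
from brevitas.quant_tensor import QuantTensor


class QuantMaxPool2d(nn.MaxPool2d):

    def forward(self, input_quant_tensor):
        # Max pooling only selects values, so scale and bit width are unchanged.
        inp, scale, bit_width = input_quant_tensor
        output = super(QuantMaxPool2d, self).forward(inp)
        return QuantTensor(tensor=output, scale=scale, bit_width=bit_width)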
Let me know how it goes.

Alessandro


xfeng23 commented Dec 23, 2019

Hi Alessandro,

Thanks for your quick reply. Now I can train it successfully, but there is one more issue. When I set the quantization flag to FP, the model can be trained; while training, 'free -m' shows that memory usage increases slowly and the training process goes on normally. But when I set the quantization flag to INT, 'free -m' shows that memory usage increases much faster, the CPU memory runs out, and the training process gets stuck. Do you have any ideas about this issue?

Best,
Tracy

@volcacius
Contributor

Hi Tracy,

Quantization-aware training is expensive compute- and memory-wise. The idea is that you are always trading increased training cost for reduced inference cost.
With QuantType.FP, quantization is disabled and you are just computing standard floating point, so you get normal PyTorch resource utilization.
If you are going out of memory, you should lower your batch size. Training on a GPU rather than a CPU is also highly suggested; with only a CPU you won't get very far. If you don't have access to a GPU, you can get a free one (with some limitations) on Google Colab.

Good luck with your training.

Alessandro
