Closed
Labels: bug, high priority, module: models.quantization (Issues related to the quantizable/quantized models), triage review
Description
🐛 Describe the bug
The following test fails:
Traceback (most recent call last):
File "/home/circleci/project/test/test_prototype_models.py", line 231, in test_old_vs_new_factory
model_old = _build_model(_get_original_model(model_fn), **kwargs).to(device=dev)
File "/home/circleci/project/test/test_prototype_models.py", line 44, in _build_model
model = fn(**kwargs)
File "/home/circleci/project/torchvision/models/quantization/mobilenetv3.py", line 175, in mobilenet_v3_large
return _mobilenet_v3_model(arch, inverted_residual_setting, last_channel, pretrained, progress, quantize, **kwargs)
File "/home/circleci/project/torchvision/models/quantization/mobilenetv3.py", line 143, in _mobilenet_v3_model
_load_weights(arch, model, quant_model_urls.get(arch + "_" + backend, None), progress)
File "/home/circleci/project/torchvision/models/quantization/mobilenetv3.py", line 119, in _load_weights
model.load_state_dict(state_dict)
File "/home/circleci/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1491, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for QuantizableMobileNetV3:
Missing key(s) in state_dict: "classifier.2.activation_post_process.scale", "classifier.2.activation_post_process.zero_point", "classifier.2.activation_post_process.fake_quant_enabled", "classifier.2.activation_post_process.observer_enabled", "classifier.2.activation_post_process.scale", "classifier.2.activation_post_process.zero_point", "classifier.2.activation_post_process.activation_post_process.min_val", "classifier.2.activation_post_process.activation_post_process.max_val".
This is similar to https://github.com/pytorch/vision/pull/4997/files#r757434190, where newly introduced fields broke old pretrained models. I believe the issue this time is caused by the fields recently added to Dropout.
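The failure mode can be illustrated without torchvision: load_state_dict compares the model's key set against the checkpoint's, and any key present in the model but absent from the checkpoint is reported as missing. A minimal sketch (the observer key names are taken from the traceback above; the weight/bias keys are hypothetical stand-ins for the rest of the checkpoint):

```python
# Sketch of the key comparison load_state_dict performs.
# The checkpoint was serialized before Dropout gained observer fields,
# so the quantization buffers exist only on the model side.
checkpoint_keys = {"classifier.2.weight", "classifier.2.bias"}
model_keys = checkpoint_keys | {
    "classifier.2.activation_post_process.scale",
    "classifier.2.activation_post_process.zero_point",
}

# Keys the model expects but the old checkpoint cannot supply.
missing = sorted(model_keys - checkpoint_keys)
print(missing)
```

Any such difference is what surfaces as the "Missing key(s) in state_dict" RuntimeError.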
Steps to reproduce:
# Works with dev20220113 (dev-server - version no longer available on conda nightly channel)
% conda list | grep pytorch
pytorch 1.11.0.dev20220113 py3.9_cuda11.1_cudnn8.0.5_0 pytorch-nightly
% python -c "import torchvision; torchvision.models.quantization.mobilenet_v3_large(pretrained=True, quantize=True)"
# failed with 1.11.0.dev20220114 (macos)
% conda install pytorch==1.11.0.dev20220114 -c pytorch-nightly
% python -c "import torchvision; torchvision.models.quantization.mobilenet_v3_large(pretrained=True, quantize=True)"
Traceback (most recent call last): ...
Versions
Latest main. Last successful commit: 1feb637.
The last working PyTorch nightly was 20220113; the failure starts with 20220114.
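For local diagnosis (this is not the upstream fix), torch.nn.Module.load_state_dict accepts strict=False, which reports mismatched keys instead of raising. The snippet below simulates an old checkpoint that lacks a buffer the current model has; the hypothetical extra_stat buffer stands in for the activation_post_process.* entries above:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
# Simulate a newly added buffer that old checkpoints don't contain.
model.register_buffer("extra_stat", torch.zeros(1))
old_sd = {k: v for k, v in model.state_dict().items() if k != "extra_stat"}

# strict=False returns the mismatches instead of raising RuntimeError.
result = model.load_state_dict(old_sd, strict=False)
print(result.missing_keys)  # the newly added buffer is reported, not fatal
```

Note that skipping the observer keys this way would leave them at default scale/zero_point, so it only helps confirm the mismatch; it does not make the old quantized checkpoint usable.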