🐛 Describe the bug
We got feedback from some of our downstream frameworks (MMLabs, MobileCV, FastAI, Lightning, etc.) that they are not yet ready to pin TorchVision to v0.13 or higher. This means that, for compatibility reasons, they are forced to continue using the `pretrained=True` idiom. For the majority of the models that's OK, because we use the `handle_legacy_interface()` decorator to set the right weights. Unfortunately not all models support it, and thus when these frameworks try to initialize the new models they get errors.
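For context, a minimal sketch of what such a legacy-interface shim does. This is not the real `handle_legacy_interface()` implementation from `torchvision.models._utils` (which handles positional arguments and per-parameter weight enums); it only illustrates the core idea of translating the deprecated `pretrained` kwarg into the new `weights` kwarg. The decorator name, the `"IMAGENET1K_V1"` string, and the toy `swin_t` builder below are all illustrative stand-ins:

```python
import functools


def legacy_interface_sketch(default_weights):
    """Hypothetical sketch: translate the deprecated ``pretrained=True/False``
    kwarg into the new ``weights=...`` kwarg before calling the builder."""
    def decorator(builder):
        @functools.wraps(builder)
        def wrapper(*args, **kwargs):
            if "pretrained" in kwargs:
                # Map the legacy flag onto the new-style weights argument.
                pretrained = kwargs.pop("pretrained")
                kwargs["weights"] = default_weights if pretrained else None
            return builder(*args, **kwargs)
        return wrapper
    return decorator


@legacy_interface_sketch(default_weights="IMAGENET1K_V1")
def swin_t(*, weights=None):
    # Stand-in for the real model builder; just reports the chosen weights.
    return f"swin_t(weights={weights!r})"
```

With the shim applied, both `swin_t(pretrained=True)` and `swin_t(weights="IMAGENET1K_V1")` resolve to the same call, which is exactly the compatibility behaviour the decorated models already provide.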
For example, the following call in MobileCV:

```python
from mobile_cv.model_zoo.models import model_zoo_factory

model_zoo_factory.get_model("swin_t")
```

raises an exception:

```
TypeError: SwinTransformer.__init__() got an unexpected keyword argument 'pretrained'
```
The use (or lack of use) of the decorator is not consistent. For example, in v0.13 we released `efficientnet_v2_s`, which uses the decorator, and `swin_t`, which doesn't. Similarly, `shufflenet_v2_x1_5` uses it but `resnext101_64x4d` doesn't. This lack of consistency across the newly introduced models in v0.13 is probably a bug.
Adding the decorator everywhere will ensure the behaviour is aligned across the library and will help downstream frameworks transition more smoothly to the new idiom.
Versions
latest main branch