The architecture is almost the same, but the parameter key names of pretrained models may differ between torchvision and MMDetection. If you want to load a torchvision model into MMDetection, you need to convert the key names first.
"Key names" refers to the names of the parameters in the checkpoint's state dict. To load these models, we need to rename the parameters so that they match the names used by models trained with MMDetection.
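The renaming described above can be sketched as a state-dict key translation. Note that the prefix mapping below is an illustrative assumption about how torchvision's Faster R-CNN modules might correspond to MMDetection's, not the official mapping; a real conversion script must match the exact module layout of both models.

```python
from collections import OrderedDict

# Hypothetical prefix mapping between torchvision's Faster R-CNN
# state dict and MMDetection's -- illustrative only, not official.
PREFIX_MAP = {
    "backbone.body.": "backbone.",
    "backbone.fpn.": "neck.",
    "rpn.head.": "rpn_head.",
    "roi_heads.box_head.": "roi_head.bbox_head.",
}

def convert_keys(tv_state_dict):
    """Rename torchvision parameter keys to (assumed) MMDetection names."""
    converted = OrderedDict()
    for key, value in tv_state_dict.items():
        new_key = key
        for old, new in PREFIX_MAP.items():
            if new_key.startswith(old):
                new_key = new + new_key[len(old):]
                break  # each key matches at most one prefix
        converted[new_key] = value
    return converted

# Usage sketch (assuming a checkpoint file, e.g. downloaded from torchvision):
#   sd = torch.load("fasterrcnn_resnet50_fpn.pth")
#   torch.save(convert_keys(sd), "converted_for_mmdet.pth")
```

Even with matching key names, loading will only succeed if the two implementations have identical module shapes, which is why a direct load of a torchvision checkpoint usually fails without such a conversion step.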
Suppose a Faster R-CNN model or any other pretrained model is taken from the torchvision hub (e.g., https://pytorch.org/vision/stable/models/generated/torchvision.models.detection.fasterrcnn_resnet50_fpn.html#torchvision.models.detection.fasterrcnn_resnet50_fpn). Can it be loaded into MMDetection directly for inference or ONNX/TensorRT conversion? I know MMDetection provides its own pretrained models, but I'm asking just in case. If not, why? Is there any architectural difference between the pretrained models here and those from the torchvision hub?