Pretrain Model #1

Closed
Ahnsun opened this issue May 11, 2021 · 8 comments

@Ahnsun commented May 11, 2021

Could you please provide the version of the pretrained model? I downloaded r50_deformable_detr-checkpoint.pth, but there is an error when loading the state_dict for MOTR.

@dbofseuofhust (Collaborator)

> Could you please provide the version of the pretrained model? I downloaded r50_deformable_detr-checkpoint.pth, but there is an error when loading the state_dict for MOTR.

hi~, we use Deformable DETR, r50 + iterative bounding box refinement. You should download this model: https://drive.google.com/file/d/1JYKyRYzUH7uo9eVfDaVCiaIGZb5YTCuI/view
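(Before training, a quick way to sanity-check that the downloaded file is the expected variant is to inspect it directly. A small sketch; the key layout below follows the public Deformable DETR release checkpoints, and its contents for this particular file are an assumption:)

```python
import torch

# Inspect the downloaded checkpoint. Public Deformable DETR checkpoints are
# dicts with 'model', 'optimizer', 'lr_scheduler', 'epoch' and 'args' entries
# (assumed layout, not verified against this exact file).
ckpt = torch.load(
    "r50_deformable_detr_plus_iterative_bbox_refinement-checkpoint.pth",
    map_location="cpu",
)
print(list(ckpt.keys()))

# For the "r50 + iterative bounding box refinement" variant we expect
# with_box_refine=True and two_stage=False in the saved training args.
args = ckpt.get("args")
print("with_box_refine:", getattr(args, "with_box_refine", "n/a"))
print("two_stage:", getattr(args, "two_stage", "n/a"))
```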

@zzzz94 commented May 12, 2021

> hi~, we use Deformable DETR, r50 + iterative bounding box refinement. You should download this model: https://drive.google.com/file/d/1JYKyRYzUH7uo9eVfDaVCiaIGZb5YTCuI/view

Hello, I have tried the pretrained model above, but I get this error:

```
RuntimeError: Error(s) in loading state_dict for MOTR: size mismatch for class_embed.0.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([1, 256]).
```

After ignoring the model parameter size mismatch, I get another error, about a size mismatch in the optimizer:

```
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
```

How did you solve it? Could you give any suggestions? Thanks.

@dbofseuofhust (Collaborator)

> Hello, I have tried the pretrained model above, but I get this error: […]

Did you modify any code? Or could you paste the .sh command here?

@zzzz94 commented May 12, 2021

> Did you modify any code? Or could you paste the .sh command here?

PRETRAIN in r50_motr_train.sh is set to r50_deformable_detr_plus_iterative_bbox_refinement-checkpoint.pth (the link you provided above), because I can't find coco_model_final.pth.

@TerranLord (Contributor)

@dbofseuofhust @zzzz94 This error may be caused by the '--resume {PRETRAIN}' flag in the training script. Just remove it.
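(For context: '--resume' restores the optimizer state together with the model weights, and both were saved for the 91-class COCO head; that is why the class_embed size mismatch appears first and the optimizer parameter-group ValueError appears after skipping it. If you still want to initialize MOTR from the COCO checkpoint, a minimal sketch of a common workaround follows — assuming `model` is an already-built MOTR instance; this is not necessarily the exact loading logic used by this repo:)

```python
import torch

# Sketch: initialize MOTR from a COCO-pretrained Deformable DETR checkpoint,
# skipping parameters whose shapes differ (e.g. the 91-class class_embed head).
# `model` is assumed to be an already-constructed MOTR instance.
ckpt = torch.load(
    "r50_deformable_detr_plus_iterative_bbox_refinement-checkpoint.pth",
    map_location="cpu",
)
pretrained = ckpt["model"]  # Deformable DETR stores weights under the 'model' key

model_state = model.state_dict()
filtered = {
    k: v
    for k, v in pretrained.items()
    if k in model_state and v.shape == model_state[k].shape
}
skipped = sorted(set(pretrained) - set(filtered))
print(f"skipping {len(skipped)} mismatched/unknown keys, e.g. {skipped[:3]}")

# strict=False tolerates the keys we filtered out (class_embed.* etc.).
model.load_state_dict(filtered, strict=False)

# Do NOT restore ckpt['optimizer']: its parameter groups were sized for the
# COCO head, which is exactly what triggers the ValueError above.
```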

@zzzz94 commented May 12, 2021

> @dbofseuofhust @zzzz94 This error may be caused by the '--resume {PRETRAIN}' flag in the training script. Just remove it.

Thanks, I'll try it.

@eeric commented May 18, 2021

> Hello, I have tried the pretrained model above, but I get this error: […]

The PyTorch version is mismatched; you need torch==1.5.1 and torchvision==0.6.1.
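(To rule this out quickly, print the installed versions; the required versions here are the ones named in this comment, not something verified independently:)

```python
import torch
import torchvision

# Compare against the versions reported above: torch 1.5.1 / torchvision 0.6.1.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
```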

@noahcao commented Nov 1, 2021

I am running into the same problem:

[screenshot of the error]

All I did was uncomment the command in configs/r50_motr_train.sh and replace $PRETRAIN with the checkpoint suggested above (https://drive.google.com/file/d/1JYKyRYzUH7uo9eVfDaVCiaIGZb5YTCuI/view).

[screenshot of the training command]

@dbofseuofhust Do you have any idea about this? Is there anything that should be updated in the README guidelines?
