
mask_iou_mean, mask_iou_at_5, mask_iou_at_7 = 0? #203

Open
MacBookYang opened this issue Mar 16, 2022 · 8 comments
@MacBookYang

Hello, I would like to ask why the metrics mask_iou_mean, mask_iou_at_5, and mask_iou_at_7 are all 0 during training. I trained only on the ytb_vos dataset. Please advise, thank you!

@StarrySky-SHT

StarrySky-SHT commented Mar 16, 2022

I ran into the same problem recently and traced it to iou_measure in models/siammask.py / models/siammask_sharp.py. In iou_measure, mask_sum = pred.eq(1).add(label.eq(1)) never reaches 2, so in the next line intxn = torch.sum(mask_sum == 2, dim=1).float() always evaluates to 0.
I think it is a PyTorch version issue. I am using PyTorch 1.5.1, but the official code targets 0.4.0. Hope this helps.
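The failure mode above can be reproduced outside of torch: in PyTorch >= 1.2, eq() returns a boolean tensor, and adding two boolean tensors saturates at True instead of counting to 2. NumPy booleans have the same add-is-logical-or semantics, so here is a minimal sketch of the bug (and the cast that restores the 0.4.x counting behavior) using NumPy arrays in place of tensors:

```python
import numpy as np

pred = np.array([1, 1, 0, 0])
label = np.array([1, 0, 1, 0])

# Boolean + boolean is logical OR: the sum can never equal 2,
# so the intersection count silently becomes 0.
mask_sum = (pred == 1) + (label == 1)
intxn_broken = int(np.sum(mask_sum == 2))   # always 0

# Casting the boolean masks to int before adding lets overlapping
# pixels actually sum to 2, as the original 0.4.x code expected.
mask_sum_fixed = (pred == 1).astype(int) + (label == 1).astype(int)
intxn_fixed = int(np.sum(mask_sum_fixed == 2))  # 1 overlapping pixel
```

The equivalent torch-side fix is the .int() cast shown later in this thread.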

@MacBookYang
Author

Thank you! I got it and have solved it. Thanks again!

@MacBookYang
Author

@StarrySky-SHT Hi, have you encountered the following problem? When I run the train_siammask_refine stage, I get: WARNING:root:NaN or Inf found in input tensor


@StarrySky-SHT

I hit that problem this morning too, and it confused me for a while, but I think I have it now. I lowered the lr in experiments/siammask_sharp/config.json to start_lr = 0.001, end_lr = 0.00025. The problem seems to be gone, but I am not sure this is the proper fix.
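For reference, the lowered schedule might look like the fragment below. Only the start_lr and end_lr values come from the comment above; the surrounding key nesting is an assumption and may differ from the actual config.json in the repository:

```json
"lr": {
    "start_lr": 0.001,
    "end_lr": 0.00025
}
```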

@MacBookYang
Author

> I just turn down the lr in the experiments/siammask_sharp/config.json. I turn down the lr to start_lr = 0.001, end_lr = 0.00025. It seems the problem gone but I do not know if it is actually right.

OK, thank you! I will try it. Thanks again!

@MacBookYang
Author

@StarrySky-SHT Hi! Have you encountered the following problem? When I run train_siammask_refine, I get: ValueError: loaded state dict has a different number of parameter groups

@chenhbo

chenhbo commented May 4, 2022

> I just turn down the lr in the experiments/siammask_sharp/config.json. I turn down the lr to start_lr = 0.001, end_lr = 0.00025. It seems the problem gone but I do not know if it is actually right.

Thanks a lot!

@nanowhiter

The type of the mask causes this error: in PyTorch 0.4.x the mask is an integer tensor, but in higher PyTorch versions it is a boolean tensor. You can modify line 179 in models/siammask.py and line 183 in models/siammask_sharp.py:

mask_sum = pred.eq(1).int().add(label.eq(1).int())
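Putting the patched line in context, the full IoU computation would look roughly like the sketch below. The real function operates on torch tensors; this NumPy mirror keeps the two lines quoted in this thread, and the union line is my assumption about the rest of the function, not code from the repository:

```python
import numpy as np

def iou_measure(pred, label):
    # Patched line: cast boolean masks to int so overlapping pixels sum to 2.
    mask_sum = (pred == 1).astype(int) + (label == 1).astype(int)
    # Intersection: pixels where both masks are 1 (sum == 2).
    intxn = np.sum(mask_sum == 2, axis=1).astype(float)
    # Union (assumed): pixels where at least one mask is 1 (sum > 0).
    union = np.sum(mask_sum > 0, axis=1).astype(float)
    return intxn / np.maximum(union, 1e-6)

pred = np.array([[1, 1, 0, 0]])
label = np.array([[1, 0, 1, 0]])
iou = iou_measure(pred, label)  # 1 overlapping pixel / 3 in the union
```

With the .int() cast in place, mask_iou_mean, mask_iou_at_5, and mask_iou_at_7 should report non-zero values again on newer PyTorch.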
