
Trained as you said but got bad performance #8

Closed
shuluoshu opened this issue Jul 17, 2018 · 4 comments

@shuluoshu

Hi, @laughtervv
I trained as you suggested: first pretrained the segmentation model, then used only the SIM loss for the first 5 epochs, and finally used the total loss. The total loss dropped to 20 and stayed there after 100 epochs. However, when I run prediction, the results are bad: mAP is 0.004, and the other metrics are also far from the numbers reported in your paper. I wonder what is going wrong. How low did your loss get?

Thanks.
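
For reference, the schedule I followed amounts to roughly the sketch below. This is only an illustration of the staged loss selection described above; the names `sim_loss`, `seg_loss`, and `conf_loss` are placeholders, not the repository's actual variables.

```python
# Sketch of the loss schedule: pretrain segmentation, optimize only the
# SIM loss for the first 5 epochs, then switch to the total loss.
SIM_ONLY_EPOCHS = 5

def select_loss(epoch, sim_loss, seg_loss, conf_loss):
    """Return the loss to optimize at the given epoch."""
    if epoch < SIM_ONLY_EPOCHS:
        return sim_loss                      # similarity loss only
    return sim_loss + seg_loss + conf_loss   # total loss afterwards

# Example: epoch 3 optimizes only SIM, epoch 10 optimizes the sum.
print(select_loss(3, 1.0, 2.0, 0.5))   # -> 1.0
print(select_loss(10, 1.0, 2.0, 0.5))  # -> 3.5
```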

@laughtervv
Owner

An mAP of 0.004 looks like a bug. Can you try the pretrained model?
Can I see your training log, 'pergroup_thres.txt', and 'mingroupsize.txt'?

@shuluoshu
Author

log.txt

The above is log.txt; below are pergroup_thres.txt and mingroupsize.txt.

pergroup_thres.txt

mingroupsize.txt

Thanks @laughtervv

@laughtervv
Owner

There's something wrong with your 'pergroup_thres.txt'.
Here are the files I generated. Please give them a try.
pergroup_thres.txt
mingroupsize.txt
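
If you want to compare your generated file against these, a quick check like the one below may help. It assumes each file contains plain numeric values, one per line, which may not match the actual format, and the reference filename is a hypothetical name for the attached file.

```python
import numpy as np

# Hedged sanity check of a generated per-group threshold file
# against the reference file attached above.
mine = np.loadtxt("pergroup_thres.txt")
ref = np.loadtxt("pergroup_thres_reference.txt")  # hypothetical name for the attached file

print("non-finite entries in my file:", (~np.isfinite(mine)).sum())
if mine.shape == ref.shape:
    print("max abs difference vs. reference:", np.max(np.abs(mine - ref)))
else:
    print("shape mismatch:", mine.shape, "vs", ref.shape)
```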

@jianuo1128

@laughtervv @shuluoshu
Hello, I used your pre-trained model, and when running valid.py I ran into the following NaN problem. The two generated txt files are shown below.
(screenshots attached: the NaN output from valid.py and the two generated txt files)
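
To narrow down where the NaNs first appear inside valid.py, a generic helper like the sketch below can be dropped in after each intermediate array is computed. The variable and call-site names here are placeholders, not part of the repository's code.

```python
import numpy as np

def report_nonfinite(name, arr):
    """Print a warning if the array contains NaN or Inf values."""
    arr = np.asarray(arr, dtype=np.float64)
    bad = ~np.isfinite(arr)
    if bad.any():
        print(f"{name}: {bad.sum()} non-finite values out of {arr.size}")

# Example with a dummy array standing in for a prediction tensor:
report_nonfinite("pergroup_thres", np.array([0.4, float("nan"), 0.6]))
```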
