
How to achieve the same results as presented in the paper #19

Closed

griffintin opened this issue Feb 14, 2020 · 3 comments

@griffintin

@wvangansbeke

Thank you for sharing code.

I read through the code, but found that the "mse loss" is used by default for training in the provided Shell/train.sh. At least, the loss with uncertainty should be used, I guess, and I have no idea whether other options need to be changed too.

Is it possible to share your training scripts which can achieve the same results as presented in the paper?

@wvangansbeke
Owner

Hi @griffintin,

I increased the stability of the training process a while ago, and I also made it converge faster by adding skip connections between the global and local network. Initially I only used guidance by multiplication with an attention map (= probability), but I found that this is less robust and that the differences between a focal MSE and a vanilla MSE loss function are now negligible. Be aware that this change alters the appearance of the confidence maps, since fusion now takes place at multiple stages (see README).
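For reference, the fusion at each stage boils down to a per-pixel confidence weighting of the two predictions, roughly like this (a simplified sketch; the names are illustrative, not the exact code in this repo):

```python
import torch
import torch.nn.functional as F

def fuse_predictions(global_depth, global_conf, local_depth, local_conf):
    """Fuse global and local depth maps, all of shape (B, 1, H, W).

    The confidence maps are raw network outputs (logits); a per-pixel
    softmax turns them into weights that sum to one.
    """
    # Per-pixel softmax over the two confidence logits.
    weights = F.softmax(torch.cat([global_conf, local_conf], dim=1), dim=1)
    w_global, w_local = weights[:, 0:1], weights[:, 1:2]
    # Confidence-weighted sum of the two depth predictions.
    return w_global * global_depth + w_local * local_depth
```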

In short: if you want the old version, just remove the skip connections to get the original confidence maps. The results should be similar and you should get a sense of interpretability. If only the numbers on the benchmark matter, don't alter it.

Best,
Wouter

@griffintin
Author

griffintin commented Feb 14, 2020

@wvangansbeke

Thank you for the quick reply, I really appreciate it.

Yes, before I posted my question, I had noticed your recent improvement with the skip connections between the global and local networks.

Still, I am not clear about the loss functions. Since both networks need to learn uncertainty maps, I guess "MSE_loss_uncertainty" should be used instead of the default mse_loss.
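To be concrete, what I have in mind is a heteroscedastic formulation along these lines (my own sketch to illustrate the question, not the repo's actual implementation):

```python
import torch

def uncertainty_mse(pred, log_var, target, valid):
    """Heteroscedastic MSE (Kendall & Gal style).

    The network predicts a per-pixel log-variance alongside the depth:
    the exp(-log_var) term down-weights the squared error on uncertain
    pixels, and the +log_var term keeps the network from declaring
    everything uncertain.
    """
    loss = torch.exp(-log_var) * (pred - target) ** 2 + log_var
    # Average only over pixels with valid ground-truth depth.
    return loss[valid].mean()
```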

Please correct me if I misunderstood.

Regards

@wvangansbeke
Owner

Hi @griffintin,

Yes, you can, but the differences will be small. Try a vanilla MSE loss first to make sure the rest is correct (e.g. the data loading).
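Something as plain as the following is enough as a sanity check (a sketch; the ground-truth depth maps in KITTI are sparse, so pixels without a measurement, stored as 0, must be masked out):

```python
import torch

def masked_mse(pred, target):
    """Vanilla MSE over valid ground-truth pixels only."""
    # Keep only pixels that actually have a depth measurement.
    mask = target > 0
    return ((pred[mask] - target[mask]) ** 2).mean()
```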

Best,
Wouter
