Softmax and log-softmax no longer applied in models. #239

Open
wants to merge 2 commits into master
Conversation


@Britefury commented Jul 21, 2020

Softmax and log-softmax are no longer applied in the models; they are now applied in the evaluation and training scripts. This was done by using `nn.CrossEntropyLoss` rather than `nn.NLLLoss`.
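
A minimal sketch of the loss change (shapes and tensors are illustrative, not the repo's actual training loop):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative shapes: (batch, classes, H, W) logits and an integer label map.
logits = torch.randn(2, 19, 64, 64)
labels = torch.randint(0, 19, (2, 64, 64))

# Old pipeline: the model applies log-softmax, the training script uses NLLLoss.
loss_old = nn.NLLLoss()(F.log_softmax(logits, dim=1), labels)

# New pipeline: the model returns raw logits, the training script uses
# CrossEntropyLoss, which fuses the log-softmax internally.
loss_new = nn.CrossEntropyLoss()(logits, labels)

assert torch.allclose(loss_old, loss_new)
```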

Class predictions remain valid when taking the max/argmax of logits rather than probabilities, since softmax is monotonic, so we can use logits directly for evaluation accuracy and IoU.
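
A small self-contained check of that claim (shapes are again illustrative):

```python
import torch
import torch.nn.functional as F

# Softmax is strictly increasing along the class dimension, so the argmax over
# raw logits gives the same class map as the argmax over probabilities.
logits = torch.randn(2, 19, 64, 64)

pred_from_logits = logits.argmax(dim=1)
pred_from_probs = F.softmax(logits, dim=1).argmax(dim=1)

assert torch.equal(pred_from_logits, pred_from_probs)
```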

Furthermore, I've changed the decoders so that, rather than using the use_softmax flag to determine whether we are in inference mode, they apply the interpolation whenever the segSize parameter is provided (which in your code only happens at inference). The decoders now also return a dict in which the 'logits' key gives the predicted logits and, for deep-supervision decoders, the 'deepsup_logits' key gives the logits of the deep-supervision branch.
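
A hypothetical sketch of that return convention (the actual decoder heads in this PR are more involved; the class name, layer names, and channel sizes below are assumptions):

```python
import torch.nn as nn
import torch.nn.functional as F

class DecoderHeadSketch(nn.Module):
    """Illustrative decoder head following the proposed interface."""

    def __init__(self, fc_dim=2048, num_class=150):
        super().__init__()
        self.conv_last = nn.Conv2d(fc_dim, num_class, 1)
        self.conv_last_deepsup = nn.Conv2d(fc_dim // 2, num_class, 1)

    def forward(self, conv_out, segSize=None):
        conv5, conv4 = conv_out[-1], conv_out[-2]
        logits = self.conv_last(conv5)

        if segSize is not None:
            # segSize given -> inference: upsample the logits to the output size.
            logits = F.interpolate(logits, size=segSize,
                                   mode='bilinear', align_corners=False)
            return {'logits': logits}

        # Training: also return the deep-supervision branch logits.
        return {'logits': logits,
                'deepsup_logits': self.conv_last_deepsup(conv4)}
```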

The motivation for this is that some uses of semantic segmentation models require losses other than the softmax/log-softmax used in supervised training. Moving this out of the model classes makes them useful in a wider variety of circumstances. Specifically, I want to test a PSPNet in my semi-supervised work here: https://github.com/Britefury/cutmix-semisup-seg. I use a variety of unsupervised loss functions, hence my preference for models that output logits, which can then be processed in a variety of ways.

Evaluation and training programs now use `nn.CrossEntropyLoss` rather than `nn.NLLLoss`.