
Bug: missing "not" operator to set ignore_zeros in segmentation trainer #444

Closed
remtav opened this issue Mar 1, 2022 · 4 comments · Fixed by #644

@remtav
Contributor

remtav commented Mar 1, 2022

I believe there's a bug in the translation from ignore_zeros to ignore_index in the __init__ of SemanticSegmentationTask.
self.ignore_zeros = None if kwargs["ignore_zeros"] else 0 should be:
self.ignore_zeros = None if kwargs["ignore_zeros"] is None else 0, as is done when defining the loss function.

self.ignore_zeros is later used to set the ignore_index of self.train_metrics. For the sake of clarity, I'd suggest renaming it to self.ignore_index directly in the __init__.

Therefore, this

...
        self.ignore_zeros = None if kwargs["ignore_zeros"] else 0

        self.config_task()

        self.train_metrics = MetricCollection(
            [
                Accuracy(
                    num_classes=self.hparams["num_classes"],
                    ignore_index=self.ignore_zeros,
...

would become

...
        self.ignore_index = None if kwargs["ignore_zeros"] is None else 0

        self.config_task()

        self.train_metrics = MetricCollection(
            [
                Accuracy(
                    num_classes=self.hparams["num_classes"],
                    ignore_index=self.ignore_index,
...

If you'd like, I can definitely push a PR to fix this bug if the solution seems OK to all. Let me know.

@adamjstewart
Collaborator

@calebrob6 were you the one who wrote this logic? Can you take a look at this?

@calebrob6
Member

calebrob6 commented Mar 1, 2022

@remtav, the problem with defining this in __init__ is that nn.CrossEntropyLoss uses -1000 as a "None" value (ignore_index=-1000 if self.ignore_zeros is None else 0). Some changes that I think make sense:

  • Add an assert to make sure kwargs["ignore_zeros"] is a boolean
  • Throw an error if kwargs["ignore_zeros"] == True and loss="jaccard" (the Jaccard loss doesn't have an ignore_index)
  • Document in the docstring that it should be a boolean
  • Rename self.ignore_zeros to self.ignore_index
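Taken together, those changes could look something like the following sketch. The helper name and signature are hypothetical, not the actual torchgeo implementation:

```python
from typing import Optional

def resolve_ignore_index(loss: str, ignore_zeros: Optional[bool]) -> Optional[int]:
    # Hypothetical helper combining the checks proposed above.
    # 1. ignore_zeros must be a boolean (or None)
    assert isinstance(ignore_zeros, (bool, type(None))), "ignore_zeros must be a boolean"
    # 2. The Jaccard loss has no ignore_index, so this combination is invalid
    if ignore_zeros and loss == "jaccard":
        raise ValueError("ignore_zeros=True is incompatible with loss='jaccard'")
    # 3. Renamed result: this value feeds ignore_index in the metrics
    return None if ignore_zeros is None else 0
```

The loss-construction code would then translate None to -1000 for nn.CrossEntropyLoss separately, keeping that quirk out of __init__.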

@adamjstewart adamjstewart added the trainers PyTorch Lightning trainers label Mar 1, 2022
@adamjstewart
Collaborator

@calebrob6 can you submit a PR for this? I'll include it in 0.2.1 (releasing this week).

@adamjstewart adamjstewart modified the milestones: 0.2.1, 0.2.2 Mar 16, 2022
@calebrob6 calebrob6 self-assigned this Jul 1, 2022
@calebrob6
Member

@adamjstewart aiming this for 0.3
