
Deprecate/compute on step #792

Merged
merged 20 commits into master from deprecate/compute_on_step on Feb 8, 2022
Conversation

@SkafteNicki (Member) commented Jan 23, 2022

What does this PR do?

Fixes #789
Deprecates compute_on_step argument.

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

codecov bot commented Jan 23, 2022

Codecov Report

Merging #792 (71176b1) into master (c0e4250) will decrease coverage by 0%.
The diff coverage is 100%.

@@          Coverage Diff          @@
##           master   #792   +/-   ##
=====================================
- Coverage      95%    95%   -0%     
=====================================
  Files         166    166           
  Lines        6795   6796    +1     
=====================================
- Hits         6469   6460    -9     
- Misses        326    336   +10     

@Borda Borda enabled auto-merge (squash) January 25, 2022 18:46
@mergify mergify bot removed the has conflicts label Feb 5, 2022
@Borda (Member) commented Feb 5, 2022

@SkafteNicki mind checking the last two failing tests?

@mergify mergify bot added the has conflicts label Feb 5, 2022
@mergify mergify bot removed the has conflicts label Feb 5, 2022
@Borda (Member) commented Feb 7, 2022

FAILED tests/bases/test_composition.py::test_compositional_metrics_forward[4-False]
FAILED tests/bases/test_composition.py::test_compositional_metrics_forward[metric_b1-False]

@mergify mergify bot added the ready label Feb 8, 2022
@Borda Borda disabled auto-merge February 8, 2022 10:28
@Borda Borda merged commit 48dc058 into master Feb 8, 2022
@Borda Borda deleted the deprecate/compute_on_step branch February 8, 2022 10:29
@Alec-Stashevsky commented Apr 18, 2022

@SkafteNicki why was compute_on_step deprecated? I use torchmetrics with base PyTorch (non Lightning) and found that having compute_on_step == True (the default) drastically affected my results. I needed to keep it off. How would I go about that in version 0.8.0?

Thanks

@fierval commented Apr 18, 2022

I agree with @Alec-Stashevsky's comment. If I am reading the code correctly:

    def forward(self, *args: Any, **kwargs: Any) -> Any:
        """Automatically calls ``update()``.

        Returns the metric value over inputs if ``compute_on_step`` is True.
        """
        # add current step
        if self._is_synced:
            raise TorchMetricsUserError(
                "The Metric shouldn't be synced when performing ``update``. "
                "HINT: Did you forget to call ``unsync`` ?."
            )

        # global accumulation
        self.update(*args, **kwargs)

        if self.compute_on_step:
            self._to_sync = self.dist_sync_on_step
            # skip restore cache operation from compute as cache is stored below.
            self._should_unsync = False

            # save context before switch
            cache = {attr: getattr(self, attr) for attr in self._defaults}

            # call reset, update, compute, on single batch
            self._enable_grad = True  # allow grads for batch computation
            self.reset()
            self.update(*args, **kwargs)
            self._forward_cache = self.compute()

            # restore context
            for attr, val in cache.items():
                setattr(self, attr, val)
            self._is_synced = False

            self._should_unsync = True
            self._to_sync = True
            self._computed = None
            self._enable_grad = False

            return self._forward_cache

This was the forward function in 0.7.3. Now that the if statement has been removed, does that mean the metric is finalized and reset on every batch? That is not what I want.

@SkafteNicki (Member, Author) commented

Hi @Alec-Stashevsky
If you set compute_on_step=False, you can see directly in the code @fierval posted that calling forward is exactly the same as calling update. So instead of using a metric like this:

metric = Metric(compute_on_step=False)
metric(input)

you should instead just do

metric = Metric()
metric.update(input)

@fierval the change in 0.8 simply has the effect that forward will always accumulate into the global metric state (the first call to update) AND also calculate the metric on the current batch. The metric is therefore only finalized on the current batch, not on the global state.
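As a rough illustration of the behavior described above (a toy stand-in, not the actual torchmetrics implementation — the real `Metric` class handles state registration, syncing, and gradients), the 0.8 `forward` semantics can be sketched like this:

```python
class ToyMean:
    """Hypothetical stand-in for a torchmetrics Metric computing a mean."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.total = 0.0
        self.count = 0

    def update(self, values):
        # accumulate into the global state
        self.total += sum(values)
        self.count += len(values)

    def compute(self):
        # finalize over everything accumulated so far
        return self.total / self.count

    def forward(self, values):
        # 0.8 semantics: accumulate into the global state...
        self.update(values)
        # ...AND return the metric computed on this batch only,
        # without finalizing or resetting the global state
        return sum(values) / len(values)


m = ToyMean()
print(m.forward([1.0, 2.0, 3.0]))  # batch mean: 2.0
print(m.forward([5.0]))            # batch mean: 5.0
print(m.compute())                 # global mean over all 4 values: 2.75
```

So calling `forward` per batch gives you the per-batch value for free, while `compute()` at the end still returns the metric over the full accumulated state.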

Labels: API / design · ready · refactoring
5 participants